{"id":"J-39","title":"The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution","abstract":"Bidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest. As seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions. These misqueues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers. This paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets. An empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue.","lvl-1":"The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution \u2217 Adam I. Juda Division of Engineering and Applied Sciences Harvard University, Harvard Business School ajuda@hbs.edu David C. 
Parkes Division of Engineering and Applied Sciences Harvard University parkes@eecs.harvard.edu ABSTRACT Bidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest.\nAs seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions.\nThese miscues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers.\nThis paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets.\nAn empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue.\nCategories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Algorithms, Design, Economics 1.\nINTRODUCTION Electronic markets represent an application of information systems that has generated significant new trading opportunities while allowing for the dynamic pricing of goods.\nIn addition to marketplaces such as eBay, electronic marketplaces are increasingly used for business-to-consumer auctions (e.g. 
to sell surplus inventory [19]).\nMany authors have written about a future in which commerce is mediated by online, automated trading agents [10, 25, 1].\nThere is still little evidence of automated trading in e-markets, though.\nWe believe that one leading point of resistance is the lack of provably optimal bidding strategies for any but the simplest of market designs.\nWithout this, we do not expect individual consumers, or firms, to be confident in placing their business in the hands of an automated agent.\nOne of the most common examples today of an electronic marketplace is eBay, where the gross merchandise volume (i.e., the sum of all successfully closed listings) during 2005 was $44B.\nAmong items listed on eBay, many are essentially identical.\nThis is especially true in the Consumer Electronics category [9], which accounted for roughly $3.5B of eBay's gross merchandise volume in 2005.\nThis presence of essentially identical items can expose bidders, and sellers, to risks because of the sequential auction problem.\nFor example, Alice may want an LCD monitor, and could potentially bid in either a 1 o'clock or 3 o'clock eBay auction.\nWhile Alice would prefer to participate in whichever auction will have the lower winning price, she cannot determine beforehand which auction that may be, and could end up winning the wrong auction.\nThis is a problem of multiple copies.\nAnother problem bidders may face is the exposure problem.\nAs investigated by Bykowsky et al. 
[6], exposure problems exist when buyers desire a bundle of goods but may only participate in single-item auctions.1 For example, if Alice values a video game console by itself for $200, a video game by itself for $30, and both a console and game for $250, Alice must determine how much of the $20 of synergy value she might include in her bid for the console alone.\nBoth problems arise in eBay as a result of sequential auctions of single items coupled with patient bidders with substitute or complementary valuations.\nWhy might the sequential auction problem be bad?\nComplex games may lead to bidders employing costly strategies and making mistakes.\nPotential bidders who do not wish to bear such costs may choose not to participate in the market, inhibiting seller revenue opportunities.\n(Footnote 1: The exposure problem has been primarily investigated by Bykowsky et al. in the context of simultaneous single-item auctions.\nThe problem is also a familiar one of online decision making.)\nAdditionally, among those bidders who do choose to participate, the mistakes made may lead to inefficient allocations, further limiting revenue opportunities.\nWe are interested in creating modifications to eBay-style markets that simplify the bidder problem, leading to simple equilibrium strategies, and preferably better efficiency and revenue properties.\n1.1 Options + Proxies: A Proposed Solution Retail stores have developed policies to assist their customers in addressing sequential purchasing problems.\nReturn policies alleviate the exposure problem by allowing customers to return goods at the purchase price.\nPrice matching alleviates the multiple copies problem by allowing buyers to receive from sellers after purchase the difference between the price paid for a good and a lower price found elsewhere for the same good [7, 15, 18].\nFurthermore, price matching can reduce the impact of exactly when a seller brings an item to market, as the price will in part be set by others selling the 
same item.\nThese two retail policies provide the basis for the scheme proposed in this paper.2 We extend the proxy bidding technology currently employed by eBay.\nOur super-proxy extension will take advantage of a new, real options-based market infrastructure that enables simple, yet optimal, bidding strategies.\nThe extensions are computationally simple, handle temporal issues, and retain seller autonomy in deciding when to enter the market and conduct individual auctions.\nA seller sells an option for a good, which will ultimately lead to either a sale of the good or the return of the option.\nBuyers interact through a proxy agent, defining a value on all possible bundles of goods in which they have interest together with the latest time period in which they are willing to wait to receive the good(s).\nThe proxy agents use this information to determine how much to bid for options, and follow a dominant bidding strategy across all relevant auctions.\nA proxy agent exercises options held when the buyer's patience has expired, choosing options that maximize a buyer's payoff given the reported valuation.\nAll other options are returned to the market and not exercised.\nThe options-based protocol makes truthful and immediate revelation to a proxy a dominant strategy for buyers, whatever the future auction dynamics.\nWe conduct an empirical analysis of eBay, collecting data on over four months of bids for Dell LCD screens (model E193FP) starting in the Summer of 2005.\nLCD screens are a high-ticket item, for which we demonstrate evidence of the sequential bidding problem.\nWe first infer a conservative model for the arrival time, departure time and value of bidders on eBay for LCD screens during this period.\nThis model is used to simulate the performance of the options-based infrastructure, in order to make direct comparisons to the actual performance of eBay in this market.\nWe also extend the work of Haile and Tamer [11] to estimate an upper bound on the 
distribution of value of eBay bidders, taking into account the sequential auction problem when making the adjustments.\nUsing this estimate, one can approximate how much greater a bidder's true value is than the maximum bid they were observed to have placed on eBay.\n(Footnote 2: Prior work has shown price matching as a potential mechanism for colluding firms to set monopoly prices.\nHowever, in our context, auction prices will be matched, which are not explicitly set by sellers but rather by buyers' bids.)\nBased on this approximation, revenue generated in a simulation of the options-based scheme exceeds revenue on eBay for the comparable population and sequence of auctions by 14.8%, while the options-based scheme demonstrates itself as being 7.5% more efficient.\n1.2 Related Work A number of authors [27, 13, 28, 29] have analyzed the multiple copies problem, oftentimes in the context of categorizing or modeling sniping behavior for reasons other than those first brought forward by Ockenfels and Roth [20].\nThese papers perform equilibrium analysis in simpler settings, assuming bidders can participate in at most two auctions.\nPeters & Severinov [21] extend these models to allow buyers to consider an arbitrary number of auctions, and characterize a perfect Bayesian equilibrium.\nHowever, their model does not allow auctions to close at distinct times and does not consider the arrival and departure of bidders.\nPrevious work has developed a data-driven approach toward developing a taxonomy of strategies employed by bidders in practice when facing multi-unit auctions, but has not considered the sequential bidding problem [26, 2].\nPrevious work has also sought to provide agents with smarter bidding strategies [4, 3, 5, 1].\nUnfortunately, it seems hard to design artificial agents with equilibrium bidding strategies, even for a simple simultaneous ascending price auction.\nIwasaki et al. 
[14] have considered the role of options in the context of a single, monolithic, auction design to help bidders with marginal-increasing values avoid exposure in a multi-unit, homogeneous item auction problem.\nIn other contexts, options have been discussed for selling coal mine leases [23], or as leveled commitment contracts for use in a decentralized marketplace [24].\nMost similar to our work, Gopal et al. [9] use options for reducing the risks of buyers and sellers in the sequential auction problem.\nHowever, their work uses costly options and does not remove the sequential bidding problem completely.\nWork on online mechanisms and online auctions [17, 12, 22] considers agents that can dynamically arrive and depart across time.\nWe leverage a recent price-based characterization by Hajiaghayi et al. [12] to provide a dominant strategy equilibrium for buyers within our options-based protocol.\nThe special case for single-unit buyers is equivalent to the protocol of Hajiaghayi et al., albeit with an options-based interpretation.\nJiang and Leyton-Brown [16] use machine learning techniques for bid identification in online auctions.\n2.\nEBAY AND THE DELL E193FP The most common type of auction held on eBay is a single-item proxy auction.\nAuctions open at a given time and remain open for a set period of time (usually one week).\nBidders bid for the item by giving a proxy a value ceiling.\nThe proxy will bid on behalf of the bidder only as much as is necessary to maintain a winning position in the auction, up to the ceiling received from the bidder.\nBidders may communicate with the proxy multiple times before an auction closes.\nIn the event that a bidder's proxy has been outbid, a bidder may give the proxy a higher ceiling to use in the auction.\neBay's proxy auction implements an incremental version of a Vickrey auction, with the item sold to the highest bidder for the second-highest bid plus a small increment.\n[Figure 1: Histogram of number of LCD auctions available to each bidder and number of LCD auctions in which a bidder participates (both axes log-scaled counts).]\nThe market analyzed in this paper is that of a specific model of an LCD monitor, a 19-inch Dell LCD, model E193FP.\nThis market was selected for a variety of reasons including: • The mean price of the monitor was $240 (with standard deviation $32), so we believe it reasonable to assume that bidders on the whole are only interested in acquiring one copy of the item on eBay.3 • The volume transacted is fairly high, at approximately 500 units sold per month.\n• The item is not usually bundled with other items.\n• The item is typically sold as new, and so suitable for the price-matching of the options-based scheme.\nRaw auction information was acquired via a Perl script.\nThe script accesses the eBay search engine,4 and returns all auctions containing the terms 'Dell' and 'LCD' that have closed within the past month.5 Data was stored in a text file for post-processing.\nTo isolate the auctions in the domain of interest, queries were made against the titles of eBay auctions that closed between 27 May, 2005 and 1 October, 2005.6 Figure 1 provides a general sense of how many LCD auctions occur while a bidder is interested in pursuing a monitor.7 8,746 bidders (86%) had more than one auction available between when they first placed a bid on eBay and the latest closing time of an auction in which they bid (with an average of 78 auctions available).\n(Footnote 3: For reference, Dell's October 2005 mail order catalogue quotes the price of the monitor as being $379 without a desktop purchase, and $240 as part of a desktop purchase upgrade.)\n(Footnote 4: http:\/\/search.ebay.com)\n(Footnote 5: The search is not case-sensitive.)\n(Footnote 6: Specifically, the query found all auctions where the title contained all of the following strings: 'Dell,' 'LCD' and 'E193FP,' while excluding all auctions that contained any of the following strings: 'Dimension,' 'GHZ,' 'desktop,' 'p4' and 'GB.'\nThe exclusion terms were incorporated so that the only auctions analyzed would be those selling exclusively the LCD of interest.\nFor example, the few bundled auctions selling both a Dell Dimension desktop and the E193FP LCD are excluded.)\n(Footnote 7: As a reference, most auctions close on eBay between noon and midnight EDT, with almost two auctions for the Dell LCD monitor closing each hour on average during peak time periods.\nBidders have an average observed patience of 3.9 days (with a standard deviation of 11.4 days).)\nFigure 1 also illustrates the number of auctions in which each bidder participates.\nOnly 32.3% of bidders who had more than one auction available are observed to bid in more than one auction (bidding in 3.6 auctions on average).\nA simple regression analysis shows that bidders tend to submit maximal bids to an auction that are $1.22 higher after spending twice as much time in the system, as well as bids that are $0.27 higher in each subsequent auction.\nAmong the 508 bidders that won exactly one monitor and participated in multiple auctions, 201 (40%) paid more than $10 more than the closing price of another auction in which they bid, paying on average $35 more (standard deviation $21) than the closing price of the cheapest auction in which they bid but did not win.\nFurthermore, among the 2,216 bidders that never won an item despite participating in multiple auctions, 421 (19%) placed a losing bid in one auction that was more than $10 higher than the closing price of another auction in which they bid, submitting a losing bid on average $34 more (standard deviation $23) than the closing price of the cheapest auction in which they bid but did not win.\nAlthough these measures do not say a bidder that lost could have definitively won (because we only consider the final winning price and not the bid of the winner to her proxy), or a bidder that won could 
have secured a better price, this is at least indicative of some bidder mistakes.\n3.\nMODELING THE SEQUENTIAL AUCTION PROBLEM While the eBay analysis was for simple bidders who desire only a single item, let us now consider a more general scenario where people may desire multiple goods of different types, possessing general valuations over those goods.\nConsider a world with buyers (sometimes called bidders) B and K different types of goods, G_1, ..., G_K.\nLet T = {0, 1, ...} denote time periods.\nLet L denote a bundle of goods, represented as a vector of size K, where L_k ∈ {0, 1} denotes the quantity of good type G_k in the bundle.8 The type of a buyer i ∈ B is (a_i, d_i, v_i), with arrival time a_i ∈ T, departure time d_i ∈ T, and private valuation v_i(L) ≥ 0 for each bundle of goods L received between a_i and d_i, and zero value otherwise.\nThe arrival time models the period in which a buyer first realizes her demand and enters the market, while the departure time models the period in which a buyer loses interest in acquiring the good(s).\nIn settings with general valuations, we need an additional assumption: an upper bound on the difference between a buyer's arrival and departure, denoted Δ_Max.\nBuyers have quasi-linear utilities, so that the utility of buyer i receiving bundle L and paying p, in some period no later than d_i, is u_i(L, p) = v_i(L) − p. 
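The buyer model just defined — a type (a_i, d_i, v_i), bundles as 0/1 vectors over the K good types, and quasi-linear utility u_i(L, p) = v_i(L) − p with zero value outside the patience window — can be sketched in code. This is an illustrative aside, not part of the paper; the class and function names are our own.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

Bundle = Tuple[int, ...]  # 0/1 quantity for each of the K good types

@dataclass
class Buyer:
    arrival: int        # a_i: period in which demand is first realized
    departure: int      # d_i: last period of interest in the good(s)
    valuation: Callable[[Bundle], float]  # v_i(L) >= 0

    def utility(self, bundle: Bundle, payment: float, period: int) -> float:
        """Quasi-linear utility u_i(L, p) = v_i(L) - p, with zero value for
        a bundle received outside the window [a_i, d_i]."""
        in_window = self.arrival <= period <= self.departure
        value = self.valuation(bundle) if in_window else 0.0
        return value - payment

# The console/game exposure example from the introduction: $200 for the
# console alone (good 0), $30 for the game alone (good 1), $250 for both.
v = {(0, 0): 0.0, (1, 0): 200.0, (0, 1): 30.0, (1, 1): 250.0}
alice = Buyer(arrival=0, departure=5, valuation=lambda L: v[L])
print(alice.utility((1, 1), 240.0, period=3))  # 250 - 240 = 10.0
```

Receiving the same bundle after the reported departure yields value zero, so the buyer is left paying for nothing — the reason the mechanism later allocates exactly at the reported departure.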
Each seller j ∈ S brings a single item k_j to the market, has no intrinsic value for it, and wants to maximize revenue.\nSeller j has an arrival time, a_j, which models the period in which she is first interested in listing the item, while the departure time, d_j, models the latest period in which she is willing to consider having an auction for the item close.\nA seller will receive payment by the end of the reported departure of the winning buyer.\n(Footnote 8: We extend notation whereby a single item k of type G_k refers to a vector L : L_k = 1.)\nWe say an individual auction in a sequence is locally strategyproof (LSP) if truthful bidding is a dominant strategy for a buyer that can only bid in that auction.\nConsider the following example to see that LSP is insufficient for the existence of a dominant bidding strategy for buyers facing a sequence of auctions.\nExample 1.\nAlice values one ton of Sand with one ton of Stone at $2,000.\nBob holds a Vickrey auction for one ton of Sand on Monday and a Vickrey auction for one ton of Stone on Tuesday.\nAlice has no dominant bidding strategy because she needs to know the price for Stone on Tuesday to know her maximum willingness to pay for Sand on Monday.\nDefinition 1.\nThe sequential auction problem.\nGiven a sequence of auctions, despite each auction being locally strategyproof, a bidder has no dominant bidding strategy.\nConsider a sequence of auctions.\nGenerally, auctions selling the same item will be uncertainly-ordered, because a buyer will not know the ordering of closing prices among the auctions.\nDefine the interesting bundles for a buyer as all bundles that could maximize the buyer's profit for some combination of auctions and bids of other buyers.9 Within the interesting bundles, say that an item has uncertain marginal value if the marginal value of an item depends on the other goods held by the buyer.10 Say that an item is oversupplied if there is more than one auction offering an item of that type.\nSay two bundles 
are substitutes if one of those bundles has the same value as the union of both bundles.11 Proposition 1.\nGiven locally strategyproof single-item auctions, the sequential auction problem exists for a bidder if and only if either of the following two conditions is true: (1) within the set of interesting bundles (a) there are two bundles that are substitutes, (b) there is an item with uncertain marginal value, or (c) there is an item that is over-supplied; (2) a bidder faces competitors' bids that are conditioned on the bidder's past bids.\nProof.\n(Sketch.)\n(⇐) A bidder does not have a dominant strategy when (a) she does not know which bundle among substitutes to pursue, (b) she faces the exposure problem, or (c) she faces the multiple copies problem.\nAdditionally, a bidder does not have a dominant strategy when she does not know how to optimally influence the bids of competitors.\n(⇒) By contradiction.\nA bidder has a dominant strategy to bid her constant marginal value for a given item in each auction available when conditions (1) and (2) are both false.\nFor example, the following buyers all face the sequential auction problem as a result of conditions (a), (b) and (c) respectively: a buyer who values one ton of Sand for $1,000, or one ton of Stone for $2,000, but not both Sand and Stone; a buyer who values one ton of Sand for $1,000, one ton of Stone for $300, and one ton of Sand and one ton of Stone for $1,500, and can participate in an auction for Sand before an auction for Stone; a buyer who values one ton of Sand for $1,000 and can participate in many auctions selling Sand.\n(Footnote 9: Assume that the empty set is an interesting bundle.)\n(Footnote 10: Formally, an item k has uncertain marginal value if |{m : m = v_i(Q) − v_i(Q − k), ∀Q ⊆ L ∈ InterestingBundle, Q ⊇ k}| > 1.)\n(Footnote 11: Formally, two bundles A and B are substitutes if v_i(A ∪ B) = max(v_i(A), v_i(B)), where A ∪ B = L with L_k = max(A_k, B_k).)\n4.\nSUPER PROXIES AND OPTIONS The 
novel solution proposed in this work to resolve the sequential auction problem consists of two primary components: richer proxy agents, and options with price matching.\nIn finance, a real option is a right to acquire a real good at a certain price, called the exercise price.\nFor instance, Alice may obtain from Bob the right to buy Sand from him at an exercise price of $1,000.\nAn option provides the right to purchase a good at an exercise price but not the obligation.\nThis flexibility allows buyers to put together a collection of options on goods and then decide which to exercise.\nOptions are typically sold at a price called the option price.\nHowever, options obtained at a non-zero option price cannot generally support a simple, dominant bidding strategy, as a buyer must compute the expected value of an option to justify the cost [8].\nThis computation requires a model of the future, which in our setting requires a model of the bidding strategies and the values of other bidders.\nThis is the very kind of game-theoretic reasoning that we want to avoid.\nInstead, we consider costless options with an option price of zero.\nThis will require some care, as buyers are weakly better off with a costless option than without one, whatever its exercise price.\nHowever, multiple bidders pursuing options with no intention of exercising them would cause the efficiency of an auction for options to unravel.\nThis is the role of the mandatory proxy agents, which intermediate between buyers and the market.\nA proxy agent forces a link between the valuation function used to acquire options and the valuation used to exercise options.\nIf a buyer tells her proxy an inflated value for an item, she runs the risk of having the proxy exercise options at a price greater than her value.\n4.1 Buyer Proxies 4.1.1 Acquiring Options After her arrival, a buyer submits her valuation v̂_i (perhaps untruthfully) to her proxy in some period â_i ≥ a_i, along with a claim about her 
departure time d̂_i ≥ â_i.\nAll transactions are intermediated via proxy agents.\nEach auction is modified to sell an option on that good to the highest bidding proxy, with an initial exercise price set to the second-highest bid received.12 When an option in which a buyer is interested becomes available for the first time, the proxy determines its bid by computing the buyer's maximum marginal value for the item, and then submits a bid in this amount.\nA proxy does not bid for an item when it already holds an option.\nThe bid price is: bid_i^t(k) = max_L [v̂_i(L + k) − v̂_i(L)] (1) By having a proxy compute a buyer's maximum marginal value for an item and then bidding only that amount, a buyer's proxy will win any auction that could possibly be of benefit to the buyer and only lose those auctions that could never be of value to the buyer.\n(Footnote 12: The system can set a reserve price for each good, provided that the reserve is universal for all auctions selling the same item.\nWithout a universal reserve price, price matching is not possible because of the additional restrictions on prices that individual sellers will accept.)\nTable 1: Three-buyer example with each wanting a single item and one auction occurring on Monday and Tuesday. X_Y denotes an option with exercise price X and bookkeeping that a proxy has prevented Y from currently possessing an option. → is the updating of exercise price and bookkeeping.\nBuyer (type) | Monday | Tuesday\nMolly (Mon, Tues, $8) | 6_Nancy | 6_Nancy → 4_Polly\nNancy (Mon, Tues, $6) | - | 4_Polly\nPolly (Mon, Tues, $4) | - | -\nWhen a proxy wins an auction for an option, the proxy will store in its local memory the identity (which may be a pseudonym) of the proxy not holding an option because of the proxy's win (i.e., the proxy that it 'bumped' from winning, if any).\nThis information will be used for price matching.\n4.1.2 Pricing Options Sellers agree by joining the market to allow the proxy representing a buyer to 
adjust the exercise price of an option that it holds downwards if the proxy discovers that it could have achieved a better price by waiting to bid in a later auction for an option on the same good.\nTo assist in the implementation of the price matching scheme, each proxy tracks future auctions for an option that it has already won and will determine who would be bidding in that auction had the proxy delayed its entry into the market until this later auction.\nThe proxy will request price matching from the seller that granted it an option if the proxy discovers that it could have secured a lower price by waiting.\nTo reiterate, the proxy does not acquire more than one option for any good.\nRather, it reduces the exercise price on its already issued option if a better deal is found.\nThe proxy is able to discover these deals by asking each future auction to report the identities of the bidders in that auction together with their bids.\nThis needs to be enforced by eBay, as the central authority.\nThe highest bidder in this later auction, across those whose identity is not stored in the proxy's memory for the given item, is exactly the bidder against whom the proxy would be competing had it delayed its entry until this auction.\nIf this high bid is lower than the current option price held, the proxy price matches down to this high bid price.\nAfter price matching, one of two adjustments will be made by the proxy for bookkeeping purposes.\nIf the winner of the auction is the bidder whose identity has been in the proxy's local memory, the proxy will replace that local information with the identity of the bidder whose bid it just price matched, as that is now the bidder the proxy has prevented from obtaining an option.\nIf the auction winner's identity is not stored in the proxy's local memory, the memory may be cleared.\nIn this case, the proxy will simply price match against the bids of future auction winners on this item until the proxy departs.\nExample 2 (Table 
1).\nMolly's proxy wins the Monday auction, submitting a bid of $8 and receiving an option for $6.\nMolly's proxy adds Nancy to its local memory, as Nancy's proxy would have won had Molly's proxy not bid.\nOn Tuesday, only Nancy's and Polly's proxies bid (as Molly's proxy holds an option), with Nancy's proxy winning an option for $4 and noting that it bumped Polly's proxy.\nAt this time, Molly's proxy will price match its option down to $4 and replace Nancy with Polly in its local memory as per the price match algorithm, as Polly would be holding an option had Molly never bid.\nTable 2: Examples demonstrating why bookkeeping will lead to a truthful system whereas simply matching to the lowest winning price will not.\nBuyer (type) | Monday | Tuesday\nTruth:\nMolly (Mon, Mon, $8) | 6_Nancy | -\nNancy (Mon, Tues, $6) | - | 4_Polly\nPolly (Mon, Tues, $4) | - | -\nMisreport:\nMolly (Mon, Mon, $8) | - | -\nNancy (Mon, Tues, $10) | 8_Molly | 8_Molly → 4_φ\nPolly (Mon, Tues, $4) | - | 0_φ\nMisreport & match low:\nMolly (Mon, Mon, $8) | - | -\nNancy (Mon, Tues, $10) | 8 | 8 → 0\nPolly (Mon, Tues, $4) | - | 0\n4.1.3 Exercising Options At the reported departure time the proxy chooses which options to exercise.\nTherefore, a seller of an option must wait until period d̂_w for the option to be exercised and receive payment, where w was the winner of the option.13 For bidder i, in period d̂_i, the proxy chooses the option(s) that maximize the (reported) utility of the buyer: θ*_t = argmax_{θ ⊆ Θ} (v̂_i(γ(θ)) − π(θ)) (2) where Θ is the set of all options held, γ(θ) are the goods corresponding to a set of options, and π(θ) is the sum of exercise prices for a set of options.\nAll other options are returned.14 No options are exercised when no combination of options has positive utility.\n4.1.4 Why bookkeep and not match winning price?\nOne may believe that an alternative method for implementing a price matching scheme could be to simply have 
proxies match the lowest winning price they observe after winning an option.\nHowever, as demonstrated in Table 2, such a simple price matching scheme will not lead to a truthful system.\nThe first scenario in Table 2 demonstrates the outcome if all agents were to truthfully report their types.\n(Footnote 13: While this appears restrictive on the seller, we believe it is not significantly different from what sellers on eBay currently endure in practice.\nAn auction on eBay closes at a specific time, but a seller must wait until a buyer relinquishes payment before being able to realize the revenue, an amount of time that could easily be days (if payment is via a money order sent through courier) to much longer (if a buyer is slow but not overtly delinquent in remitting her payment).)\n(Footnote 14: Presumably, an option returned will result in the seller holding a new auction for an option on the item it still possesses.\nHowever, the system will not allow a seller to re-auction an option until Δ_Max after the option had first been issued in order to maintain a truthful mechanism.)\nMolly would win the Monday auction and receive an option with an exercise price of $6 (subsequently exercising that option at the end of Monday), and Nancy would win the Tuesday auction and receive an option with an exercise price of $4 (subsequently exercising that option at the end of Tuesday).\nThe second scenario in Table 2 demonstrates the outcome if Nancy were to misreport her value for the good by reporting an inflated value of $10, using the proposed bookkeeping method.\nNancy would win the Monday auction and receive an option with an exercise price of $8.\nOn Tuesday, Polly would win the auction and receive an option with an exercise price of $0.\nNancy's proxy would observe that the highest bid submitted on Tuesday among those proxies not stored in local memory is Polly's bid of $4, and so Nancy's proxy would price match the exercise price of its option down to $4.\nNote that the exercise 
price Nancy's proxy has obtained at the end of Tuesday is the same as when she truthfully revealed her type to her proxy.\nThe third scenario in Table 2 demonstrates the outcome if Nancy were to misreport her value for the good by reporting an inflated value of $10, if the price matching scheme were for proxies to simply match their option price to the lowest winning price at any time while they are in the system.\nNancy would win the Monday auction and receive an option with an exercise price of $8.\nOn Tuesday, Polly would win the auction and receive an option with an exercise price of $0.\nNancy's proxy would observe that the lowest price on Tuesday was $0, and so Nancy's proxy would price match the exercise price of its option down to $0.\nNote that the exercise price Nancy's proxy has obtained at the end of Tuesday is lower than when she truthfully revealed her type to the proxy.\nTherefore, a price matching policy of simply matching the lowest price paid may not elicit truthful information from buyers.\n4.2 Complexity of Algorithm An XOR-valuation of size M for buyer i is a set of M terms, ⟨L^1, v_i^1⟩, ..., ⟨L^M, v_i^M⟩, that maps distinct bundles to values, where i is interested in acquiring at most one such bundle.\nFor any bundle S, v_i(S) = max_{L^m ⊆ S} (v_i^m).\nTheorem 1.\nGiven an XOR-valuation which possesses M terms, there is an O(KM²) algorithm for computing all maximum marginal values, where K is the number of different item types in which a buyer may be interested.\nProof.\nFor each item type, recall Equation 1, which defines the maximum marginal value of an item.\nFor each bundle L in the M-term valuation, v_i(L + k) may be found by iterating over the M terms.\nTherefore, the number of terms explored to determine the maximum marginal value for any item is O(M²), and so the total number of bundle comparisons to be performed to calculate all maximum marginal values is O(KM²).\nTheorem 2.\nThe total memory required by a proxy for implementing 
price matching is O(K), where K is the number of distinct item types. The total work performed by a proxy to conduct price matching in each auction is O(1).

Proof. By construction of the algorithm, the proxy stores one maximum marginal value for each item for bidding, of which there are O(K); at most one buyer's identity for each item, of which there are O(K); and one current option exercise price for each item, of which there are O(K). For each auction, the proxy either submits a precomputed bid or price matches, both of which take O(1) work.

4.3 Truthful Bidding to the Proxy Agent
Proxies transform the market into a direct revelation mechanism, where each buyer i interacts with the proxy only once,15 and does so by declaring a bid, b_i, which is defined as an announcement of her type, (â_i, d̂_i, v̂_i), where the announcement may or may not be truthful. We denote all received bids other than i's as b_{-i}. Given bids, b = (b_i, b_{-i}), the market determines allocations, x_i(b), and payments, p_i(b) ≥ 0, to each buyer (using an online algorithm). A dominant-strategy equilibrium for buyers requires that v_i(x_i(b_i, b_{-i})) - p_i(b_i, b_{-i}) ≥ v_i(x_i(b'_i, b_{-i})) - p_i(b'_i, b_{-i}), for all b'_i ≠ b_i and all b_{-i}. We now establish that it is a dominant strategy for a buyer to reveal her true valuation and true departure time to her proxy agent immediately upon arrival to the system. The proof builds on the price-based characterization of strategyproof single-item online auctions in Hajiaghayi et al. [12]. Define a monotonic and value-independent price function ps_i(a_i, d_i, L, v_{-i}), which can depend on the values of other agents v_{-i}.
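The bookkeeping behind Theorems 1 and 2 can be sketched in code. The following is our own illustrative Python, not the authors' implementation: the exact form of the maximum-marginal-value computation is an assumption on our part (Equation 1 is defined earlier in the paper), taken here to be the largest marginal value v_i(L + k) - v_i(L) over bundles L appearing in the valuation (plus the empty bundle).

```python
# Illustrative sketch (our own, not the authors' code) of a proxy's state.
# An XOR-valuation is a list of (bundle, value) terms; v_i(S) is the largest
# value among terms whose bundle is contained in S (0 if none applies).

def xor_value(terms, bundle):
    """v_i(S) = max over terms (L^m, v_i^m) with L^m a subset of S; 0 if none."""
    return max((v for L, v in terms if L <= bundle), default=0)

def max_marginal_values(terms, item_types):
    """All K maximum marginal values in O(K * M^2) bundle comparisons
    (Theorem 1): each xor_value call scans the M terms, and for each of
    the K items we evaluate it over the M stored bundles plus the empty set."""
    bundles = [frozenset()] + [L for L, _ in terms]
    return {k: max(xor_value(terms, L | {k}) - xor_value(terms, L)
                   for L in bundles)
            for k in item_types}

# Hypothetical example: an LCD and a console with a complementary bundle value.
terms = [(frozenset({"lcd"}), 200),
         (frozenset({"console"}), 150),
         (frozenset({"lcd", "console"}), 400)]
mmv = max_marginal_values(terms, ["lcd", "console"])

# Theorem 2's O(K) proxy state: per item type, one precomputed bid, at most
# one bumped bidder's identity, and one current option exercise price.
state = {k: {"bid": mmv[k], "bumped_bidder": None, "exercise_price": None}
         for k in mmv}
```

Under this sketch, each per-auction action is O(1): the proxy either submits the precomputed bid `state[k]["bid"]` or updates the single stored exercise price, consistent with the proof of Theorem 2.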
Price ps_i(a_i, d_i, L, v_{-i}) will represent the price available to agent i for bundle L in the mechanism if it announces arrival time a_i and departure time d_i. The price is independent of the value v_i of agent i, but can depend on a_i, d_i and L as long as it satisfies a monotonicity condition.

Definition 2. Price function ps_i(a_i, d_i, L, v_{-i}) is monotonic if ps_i(a'_i, d'_i, L', v_{-i}) ≤ ps_i(a_i, d_i, L, v_{-i}) for all a'_i ≤ a_i, all d'_i ≥ d_i, all bundles L' ⊆ L, and all v_{-i}.

Lemma 1. An online combinatorial auction will be strategyproof (with truthful reports of arrival, departure and value a dominant strategy) when there exists a monotonic and value-independent price function, ps_i(a_i, d_i, L, v_{-i}), such that for all i, all â_i, d̂_i ∈ T and all v_i, agent i is allocated bundle L* = argmax_L [v_i(L) - ps_i(â_i, d̂_i, L, v_{-i})] in period d̂_i and makes payment ps_i(â_i, d̂_i, L*, v_{-i}).

Proof. Agent i cannot benefit from reporting a later departure d̂_i, because the allocation is made in period d̂_i and the agent would have no value for this allocation. Agent i cannot benefit from reporting a later arrival â_i ≥ a_i or an earlier departure d̂_i ≤ d_i, because of price monotonicity. Finally, the agent cannot benefit from reporting some v̂_i ≠ v_i, because its reported valuation does not change the prices it faces, and the mechanism maximizes its utility given its reported valuation and given the prices.

Footnote 15: For analysis purposes, we view the mechanism as an opaque market, so that the buyer cannot condition her bid on bids placed by others.

Lemma 2. At any given time, there is at most one buyer in the system whose proxy does not hold an option for a given item type because of buyer i's presence in the system, and the identity of that buyer will be stored in i's proxy's local memory at that time if such a buyer exists.

Proof. By induction. Consider the first proxy that a buyer prevents from winning an option. Either (a) the bumped proxy will leave the system having never won an option, or (b) the bumped proxy will win an auction in the future. If (a), the buyer's presence prevented exactly that one buyer from winning an option, but will not have prevented any other proxies from winning an option (as the buyer's proxy will not bid on additional options upon securing one), and will have had that bumped proxy's identity in its local memory by definition of the algorithm. If (b), the buyer has not prevented the bumped proxy from winning an option after all, but rather has prevented only the proxy that lost to the bumped proxy from winning (if any), whose identity will now be stored in the proxy's local memory by definition of the algorithm. For this new identity in the buyer's proxy's local memory, either scenario (a) or (b) will be true, ad infinitum.

Given this, we show that the options-based infrastructure implements a price-based auction with a monotonic and value-independent price schedule to every agent.

Theorem 3. Truthful revelation of valuation, arrival and departure is a dominant strategy for a buyer in the options-based market.

Proof. First, define a simple agent-independent price function p^k_i(t, v_{-i}) as the highest bid by the proxies not holding an option on an item of type G_k at time t, not including the proxy representing i herself and not including any proxies that would have already won an option had i never entered the system (i.e., those whose identity is stored in i's proxy's local memory), with p^k_i(t, v_{-i}) = ∞ if there is no supply at t. This set of proxies is independent of any declaration i makes to its proxy (as the set explicitly excludes the at most one proxy (see Lemma 2) that i has prevented from holding an option), and each bid submitted by a proxy within this set is only a function of its own buyer's declared valuation (see Equation 1). Furthermore, i cannot influence the supply she faces, as any options returned
by bidders due to a price set by i's proxy's bid will be re-auctioned after i has departed the system. Therefore, p^k_i(t, v_{-i}) is independent of i's declaration to its proxy. Next, define ps^k_i(â_i, d̂_i, v_{-i}) = min_{â_i ≤ τ ≤ d̂_i} [p^k_i(τ, v_{-i})] (possibly ∞) as the minimum price over p^k_i(t, v_{-i}), which is clearly monotonic. By construction of price matching, this is exactly the price obtained by a proxy on any option that it holds at departure. Define ps_i(â_i, d̂_i, L, v_{-i}) = Σ_{k=1}^{K} ps^k_i(â_i, d̂_i, v_{-i}) L_k, which is monotonic in â_i, d̂_i and L, since ps^k_i(â_i, d̂_i, v_{-i}) is monotonic in â_i and d̂_i and (weakly) greater than zero for each k. Given the set of options held at d̂_i, which may be a subset of those items with non-infinite prices, the proxy exercises options to maximize the reported utility. Left to show is that all bundles that could not be obtained with options held are priced sufficiently high as to not be preferred. For each such bundle, either there is an item priced at ∞ (in which case the bundle would not be desired), or there must be an item in that bundle for which the proxy does not hold an option that was available. In all auctions for such an item there must have been a distinct bidder with a bid greater than bid^t_i(k), which subsequently results in ps^k_i(â_i, d̂_i, v_{-i}) > bid^t_i(k), and so the bundle without item k would be preferred to the bundle containing it.

Theorem 4. The super-proxy, options-based scheme is individually rational for both buyers and sellers.

Proof. By construction, the proxy exercises the profit-maximizing set of options obtained, or no options if no set of options derives non-negative surplus. Therefore, buyers are guaranteed non-negative surplus by participating in the scheme. For sellers, the price of each option is based on a non-negative bid or zero.

Table 3: Average price paid, standard deviation of prices paid, average bidder value among winners, and average winning bidder surplus on eBay for Dell E193FP LCD screens, as well as in the simulated options-based market, using worst-case estimates of bidders' true value.

          Price     σ(Price)  Value  Surplus
  eBay    $240.24   $32       $244   $4
  Options $239.66   $12       $263   $23

5. EVALUATING THE OPTIONS / PROXY INFRASTRUCTURE
A goal of the empirical benchmarking, and a reason to collect data from eBay, is to build a realistic model of buyers from which to estimate seller revenue and other market effects under the options-based scheme. We simulate a sequence of auctions that match the timing of the Dell LCD auctions on eBay.16 When an auction successfully closes on eBay, we simulate a Vickrey auction for an option on the item sold in that period. Auctions that do not successfully close on eBay are not simulated. We estimate the arrival, departure and value of each bidder on eBay from their observed behavior.17 Arrival is estimated as the first time that a bidder interacts with the eBay proxy, while departure is estimated as the latest closing time among eBay auctions in which a bidder participates. We initially adopt a particularly conservative estimate of bidder value, estimating it as the highest bid a bidder was observed to make on eBay. Table 3 compares the distribution of closing prices on eBay and in the simulated options scheme. While the average revenue in both schemes is virtually the same ($239.66 in the options scheme vs. $240.24 on eBay), the winners in the options scheme tend to value the item won 7% more than the winners on eBay ($263 in the options scheme vs.
$244 on eBay).

5.1 Bid Identification
We extend the work of Haile and Tamer [11] to sequential auctions to get a better view of underlying bidder values. Rather than assuming an equilibrium behavior for bidders, as in standard econometric techniques, Haile and Tamer do not attempt to model how bidders' true values are mapped into a bid in any given auction. Rather, in the context of repeated single-item auctions with distinct bidder populations, Haile and Tamer make only the following two assumptions when estimating the distribution of true bidder values:
1. Bidders do not bid more than they are willing to pay.
2. Bidders do not allow an opponent to win at a price they are willing to beat.
From the first of their two assumptions, given the bids placed by each bidder in each auction, Haile and Tamer derive a method for estimating an upper bound of the bidding population's true value distribution (i.e., the bound that lies above the true value distribution). From the second of their two assumptions, given the winning price of each auction, Haile and Tamer derive a method for estimating a lower bound of the bidding population's true value distribution. It is only the upper bound of the distribution that we utilize in our work. Haile and Tamer assume that bidders only participate in a single auction, and require independence of the bidding population from auction to auction. Neither assumption is valid here: the former because bidders are known to bid in more than one auction, and the latter because the set of bidders in an auction is in all likelihood not a true i.i.d. sampling of the overall bidding population. In particular, those who win auctions are less likely to bid in successive auctions, while those who lose auctions are more likely to remain bidders in future auctions. In applying their methods we make the following adjustments:
• Within a given auction, each individual bidder's true willingness to pay is assumed weakly greater than the maximum bid that bidder submits across all auctions for that item (either past or future).
• When estimating the upper bound of the value distribution, if a bidder bids in more than one auction, randomly select one of the auctions in which the bidder bid, and only utilize that one observation during the estimation.18

[Figure 2: CDF of maximum bids observed and upper bound estimate of the bidding population's distribution for maximum willingness to pay. The true population distribution lies below the estimated upper bound.]

Footnote 16: When running the simulations, the results of the first and final ten days of auctions are not recorded, to reduce edge effects that come from viewing a discrete time window of a continuous process.

Footnote 17: For the 100 bidders who won multiple times on eBay, we have each one bid a constant marginal value for each additional item in each auction until the number of options held equals the total number of LCDs won on eBay, with each option available for price matching independently. This bidding strategy is not a dominant strategy (falling outside the type space possible for buyers on which the proof of truthfulness has been built), but is believed to be the most appropriate first-order action for simulation.

Footnote 18: In current work, we assume that removing duplicate bidders is sufficient to make the buying populations independent i.i.d.
draws from auction to auction. If one believes that certain portions of the population are drawn to certain auctions, then further adjustments would be required in order to utilize these techniques.

Table 4: Average price paid, standard deviation of prices paid, average bidder value among winners, and average winning bidder surplus on eBay for Dell E193FP LCD screens, as well as in the simulated options-based market, using an adjusted Haile and Tamer estimate of bidders' true values as 15% higher than their maximum observed bid.

          Price     σ(Price)  Value  Surplus
  eBay    $240.24   $32       $281   $40
  Options $275.80   $14       $302   $26

Figure 2 provides the distribution of maximum bids placed by bidders on eBay, as well as the estimated upper bound of the true value distribution of bidders based on the extended Haile and Tamer method.19 As can be seen, the smallest relative gap between the two curves meaningfully occurs near the 80th percentile, where the upper bound is 1.17 times the maximum bid. Therefore, adopted as a less conservative model of bidder values is a uniform scaling factor of 1.15. We now present results from this less conservative analysis. Table 4 shows the distribution of closing prices in auctions on eBay and in the simulated options scheme. The mean price in the options scheme is now significantly higher, 15% greater, than the prices on eBay ($276 in the options scheme vs. $240 on eBay), while the standard deviation of closing prices is lower among the options-scheme auctions ($14 in the options scheme vs. $32 on eBay). Therefore, not only is the expected revenue stream higher, but the lower variance provides sellers a greater likelihood of realizing that higher revenue. The efficiency of the options scheme remains higher than on eBay. The winners in the options scheme now have an average estimated value 7.5% higher, at $302. In an effort to better understand this efficiency, we formulated a mixed integer program (MIP) to determine a simple estimate of the allocative efficiency of eBay. The MIP computes the efficient value of the offline problem with full hindsight on all bids and all supply.20 Using a scaling of 1.15, the total value allocated to eBay winners is estimated at $551,242, while the optimal value (from the MIP) is $593,301. This suggests an allocative efficiency of 92.9%: while the typical value of a winner on eBay is $281, an average value of $303 was possible.21 Note that the options-based scheme comes very close to achieving this level of efficiency [at 99.7% efficient in this estimate] even though it operates without the benefit of hindsight. Finally, although the typical winning bidder surplus decreases between eBay and the options-based scheme, some surplus redistribution would be possible because the total market efficiency is improved.22

Footnote 19: The estimation of the points in the curve is a minimization over many variables, many of which can have small-numbers bias. Consequently, Haile and Tamer suggest using a weighted average over all terms y_i, of the form Σ_i y_i exp(y_i ρ) / Σ_j exp(y_j ρ), to approximate the minimum while reducing the small-numbers effects. We used ρ = -1000 and removed observations of auctions with 17 bidders or more, as they occurred very infrequently. However, some small-numbers bias still demonstrated itself with the plateau in our upper bound estimate around a value of $300.

Footnote 20: Buyers who won more than one item on eBay are cloned so that they appear to be multiple bidders of identical type.

Footnote 21: As long as one believes that every bidder's true value is a constant factor α away from their observed maximum bid, the 92.9% efficiency calculation holds for any value of α. In practice, this belief may not be reasonable. For example, if losing bidders tend to have true values close to their observed maximum bids while eBay winners have true values much greater than their observed maximum bids, then downward bias is introduced in the efficiency calculation at present.

Footnote 22: The increase in eBay winner surplus between Tables 3 and 4 is to be expected, as the α scaling strictly increases the estimated value of the eBay winners while holding the prices at which they won constant.

6. DISCUSSION
The biggest concern with our scheme is that proxy agents who may be interested in many different items may acquire many more options than they finally exercise. This can lead to efficiency loss. Notice that this is not an issue when bidders are only interested in a single item (as in our empirical study), or have linear-additive values on items. To fix this, we would prefer to have proxy agents use more caution in acquiring options and use a more adaptive bidding strategy than that in Equation 1. For instance, if a proxy is already holding an option with an exercise price of $3 on some item for which it has value of $10, and it values some substitute item at $5, the proxy could reason that in no circumstance will it be useful to acquire an option on the second item. We formulate a more sophisticated bidding strategy along these lines. Let Θ_t be the set of all options a proxy for bidder i already possesses at time t. Let θ_t ⊆ Θ_t be a subset of those options, the sum of whose exercise prices is π(θ_t), and the goods corresponding to those options being γ(θ_t). Let Π(θ_t) = v̂_i(γ(θ_t)) - π(θ_t) be the (reported) available surplus associated with a set of options. Let θ*_t be the set of options currently held that would maximize the buyer's surplus; i.e., θ*_t = argmax_{θ_t ⊆ Θ_t} Π(θ_t). Let the maximal willingness to pay for an item k represent a price above which the agent knows it would never exercise an option on the item, given the current options held. This can be computed as follows:

bid^t_i(k) = max_L [0, min[v̂_i(L + k) - Π(θ*_t), v̂_i(L + k) - v̂_i(L)]]   (3)

where v̂_i(L + k) - Π(θ*_t) considers surplus already held, v̂_i(L + k) - v̂_i(L) considers the marginal value of a good, and taking the max[0, ·] considers the overall usefulness of pursuing the good. However, and somewhat counterintuitively, we are not able to implement this bidding scheme without forfeiting truthfulness. The Π(θ*_t) term in Equation 3 (i.e., the amount of guaranteed surplus bidder i has already obtained) can be influenced by proxy j's bid. Therefore, bidder j may have an incentive to misrepresent her valuation to her proxy if she believes doing so will cause i to bid differently in the future, in a manner beneficial to j. Consider the following example, where the proxy scheme is refined to bid the maximum willingness to pay.

Example 3. Alice values either one ton of Sand or one ton of Stone for $2,000. Bob values either one ton of Sand or one ton of Stone for $1,500. All bidders have a patience of 2 days. On day one, a Sand auction is held, where Alice's proxy bids $2,000 and Bob's bids $1,500. Alice's proxy wins an option to purchase Sand for $1,500. On day two, a Stone auction is held, where Alice's proxy bids $1,500 [as she has already obtained a guaranteed $500 of surplus from winning a Sand option, and so reduces her Stone bid by this amount], and Bob's bids $1,500. Either Alice's proxy or Bob's proxy will win the Stone option. At the end of the second day, Alice's proxy holds an option with an exercise price of $1,500 to obtain a good valued at $2,000, and so obtains $500 in surplus. Now, consider what would have happened had Alice declared that she valued only Stone.

Example 4. Alice declares valuing only Stone for $2,000. Bob values either one ton of Sand or one ton of Stone for $1,500. All bidders have a patience of 2 days. On day one, a Sand auction is held, where Bob's proxy bids $1,500. Bob's proxy wins an option to purchase Sand for $0. On day two, a Stone auction is held, where Alice's proxy bids $2,000, and Bob's bids $0 [as he has already obtained a guaranteed $1,500 of surplus from winning a Sand option, and so reduces his Stone bid by this amount]. Alice's proxy wins the Stone option for $0. At the end of the second day, Alice's proxy holds an option
with an exercise price of $0 to obtain a good valued at $2,000, and so obtains $2,000 in surplus. By misrepresenting her valuation (i.e., excluding her value of Sand), Alice was able to secure higher surplus by guiding Bob's bid for Stone to $0. An area of immediate further work by the authors is to develop a more sophisticated proxy agent that can allow for bidding of maximum willingness to pay (Equation 3) while maintaining truthfulness.

An additional, practical concern with our proxy scheme is that we assume an available, trusted, and well-understood method to characterize goods (and presumably the quality of goods). We envision this happening in practice by sellers defining a classification for their item upon entering the market, for instance via a UPC code. Just as on eBay, this would allow an opportunity for sellers to improve revenue by overstating the quality of their item ("new" vs. "like new"), and raises the issue of how well a reputation scheme could address this.

7. CONCLUSIONS
We introduced a new sales channel, consisting of an options-based and proxied auction protocol, to address the sequential auction problem that exists when bidders face multiple auctions for substitute and complementary goods. Our scheme provides bidders with a simple, dominant and truthful bidding strategy even though the market remains open and dynamic. In addition to exploring more sophisticated proxies that bid in terms of maximum willingness to pay, future work should aim to better model seller incentives and resolve the strategic problems facing sellers. For instance, does the options scheme change seller incentives from what they currently are on eBay?

Acknowledgments
We would like to thank Pai-Ling Yin. Helpful comments have been received from William Simpson, attendees at Harvard University's EconCS and ITM seminars, and anonymous reviewers. Thank you to Aaron L.
Roth and KangXing Jin for technical support. All errors and omissions remain our own.

8. REFERENCES
[1] P. Anthony and N. R. Jennings. Developing a bidding agent for multiple heterogeneous auctions. ACM Trans. on Internet Technology, 2003.
[2] R. Bapna, P. Goes, A. Gupta, and Y. Jin. User heterogeneity and its impact on electronic auction market design: An empirical exploration. MIS Quarterly, 28(1):21-43, 2004.
[3] D. Bertsimas, J. Hawkins, and G. Perakis. Optimal bidding in on-line auctions. Working Paper, 2002.
[4] C. Boutilier, M. Goldszmidt, and B. Sabata. Sequential auctions for the allocation of resources with complementarities. In Proc. 16th International Joint Conference on Artificial Intelligence (IJCAI-99), pages 527-534, 1999.
[5] A. Byde, C. Preist, and N. R. Jennings. Decision procedures for multiple auctions. In Proc. 1st Int. Joint Conf. on Autonomous Agents and Multiagent Systems (AAMAS-02), 2002.
[6] M. M. Bykowsky, R. J. Cull, and J. O. Ledyard. Mutually destructive bidding: The FCC auction design problem. Journal of Regulatory Economics, 17(3):205-228, 2000.
[7] Y. Chen, C. Narasimhan, and Z. J. Zhang. Consumer heterogeneity and competitive price-matching guarantees. Marketing Science, 20(3):300-314, 2001.
[8] A. K. Dixit and R. S. Pindyck. Investment under Uncertainty. Princeton University Press, 1994.
[9] R. Gopal, S. Thompson, Y. A. Tung, and A. B. Whinston. Managing risks in multiple online auctions: An options approach. Decision Sciences, 36(3):397-425, 2005.
[10] A. Greenwald and J. O. Kephart. Shopbots and pricebots. In Proc. 16th International Joint Conference on Artificial Intelligence (IJCAI-99), pages 506-511, 1999.
[11] P. A. Haile and E. Tamer. Inference with an incomplete model of English auctions. Journal of Political Economy, 111(1), 2003.
[12] M. T. Hajiaghayi, R. Kleinberg, M. Mahdian, and D. C. Parkes. Online auctions with re-usable goods. In Proc. ACM Conf. on Electronic Commerce, 2005.
[13] K. Hendricks, I. Onur, and T. Wiseman. Preemption and delay in eBay auctions. University of Texas at Austin Working Paper, 2005.
[14] A. Iwasaki, M. Yokoo, and K. Terada. A robust open ascending-price multi-unit auction protocol against false-name bids. Decision Support Systems, 39:23-40, 2005.
[15] J. D. Hess and E. Gerstner. Price-matching policies: An empirical case. Managerial and Decision Economics, 12(4):305-315, 1991.
[16] A. X. Jiang and K. Leyton-Brown. Estimating bidders' valuation distributions in online auctions. In Workshop on Game Theory and Decision Theory (GTDT) at IJCAI, 2005.
[17] R. Lavi and N. Nisan. Competitive analysis of incentive compatible on-line auctions. In Proc. 2nd ACM Conf. on Electronic Commerce (EC-00), 2000.
[18] Y. J. Lin. Price matching in a model of equilibrium price dispersion. Southern Economic Journal, 55(1):57-69, 1988.
[19] D. Lucking-Reiley and D. F. Spulber. Business-to-business electronic commerce. Journal of Economic Perspectives, 15(1):55-68, 2001.
[20] A. Ockenfels and A. Roth. Last-minute bidding and the rules for ending second-price auctions: Evidence from eBay and Amazon auctions on the Internet. American Economic Review, 92(4):1093-1103, 2002.
[21] M. Peters and S. Severinov. Internet auctions with many traders. Journal of Economic Theory (Forthcoming), 2005.
[22] R. Porter. Mechanism design for online real-time scheduling. In Proc. 5th ACM Conference on Electronic Commerce (EC-04), pages 61-70. ACM Press, 2004.
[23] M. H. Rothkopf and R. Engelbrecht-Wiggans. Innovative approaches to competitive mineral leasing. Resources and Energy, 14:233-248, 1992.
[24] T. Sandholm and V. Lesser. Leveled commitment contracts and strategic breach. Games and Economic Behavior, 35:212-270, 2001.
[25] T. W. Sandholm and V. R.
Lesser.\nIssues in automated negotiation and electronic commerce: Extending the Contract Net framework.\nIn Proc.\n1st International Conference on Multi-Agent Systems (ICMAS-95), pages 328-335, 1995.\n[26] H. S. Shah, N. R. Joshi, A. Sureka, and P. R. Wurman.\nMining for bidding strategies on eBay.\nLecture Notes on Artificial Intelligence, 2003.\n[27] M. Stryszowska.\nLate and multiple bidding in competing second price Internet auctions.\nEuroConference on Auctions and Market Design: Theory, Evidence and Applications, 2003.\n[28] J. T.-Y.\nWang.\nIs last minute bidding bad?\nUCLA Working Paper, 2003.\n[29] R. Zeithammer.\nAn equilibrium model of a dynamic auction marketplace.\nWorking Paper, University of Chicago, 2005.\n189","lvl-3":"The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution *\nABSTRACT\nBidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest.\nAs seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions.\nThese misqueues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers.\nThis paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets.\nAn empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue.\n1.\nINTRODUCTION\nElectronic markets represent an application of information systems that has generated significant new trading opportu * A preliminary version of this work appeared 
in the AMEC workshop in 2004.\nnities while allowing for the dynamic pricing of goods.\nIn addition to marketplaces such as eBay, electronic marketplaces are increasingly used for business-to-consumer auctions (e.g. to sell surplus inventory [19]).\nMany authors have written about a future in which commerce is mediated by online, automated trading agents [10, 25, 1].\nThere is still little evidence of automated trading in e-markets, though.\nWe believe that one leading place of resistance is in the lack of provably optimal bidding strategies for any but the simplest of market designs.\nWithout this, we do not expect individual consumers, or firms, to be confident in placing their business in the \"hands\" of an automated agent.\nOne of the most common examples today of an electronic marketplace is eBay, where the gross merchandise volume (i.e., the sum of all successfully closed listings) during 2005 was $44B.\nAmong items listed on eBay, many are essentially identical.\nThis is especially true in the Consumer Electronics category [9], which accounted for roughly $3.5 B of eBay's gross merchandise volume in 2005.\nThis presence of essentially identical items can expose bidders, and sellers, to risks because of the sequential auction problem.\nFor example, Alice may want an LCD monitor, and could potentially bid in either a 1 o'clock or 3 o'clock eBay auction.\nWhile Alice would prefer to participate in whichever auction will have the lower winning price, she cannot determine beforehand which auction that may be, and could end up winning the \"wrong\" auction.\nThis is a problem of multiple copies.\nAnother problem bidders may face is the exposure problem.\nAs investigated by Bykowsky et al. 
[6], exposure problems exist when buyers desire a bundle of goods but may only participate in single-item auctions .1 For example, if Alice values a video game console by itself for $200, a video game by itself for $30, and both a console and game for $250, Alice must determine how much of the $20 of synergy value she might include in her bid for the console alone.\nBoth problems arise in eBay as a result of sequential auctions of single items coupled with patient bidders with substitutes or complementary valuations.\nWhy might the sequential auction problem be bad?\nComplex games may lead to bidders employing costly strategies and making mistakes.\nPotential bidders who do not wish to bear such costs may choose not to participate in the 1The exposure problem has been primarily investigated by Bykowsky et al. in the context of simultaneous single-item auctions.\nThe problem is also a familiar one of online decision making.\nmarket, inhibiting seller revenue opportunities.\nAdditionally, among those bidders who do choose to participate, the mistakes made may lead to inefficient allocations, further limiting revenue opportunities.\nWe are interested in creating modifications to eBay-style markets that simplify the bidder problem, leading to simple equilibrium strategies, and preferably better efficiency and revenue properties.\n1.1 Options + Proxies: A Proposed Solution\nRetail stores have developed policies to assist their customers in addressing sequential purchasing problems.\nReturn policies alleviate the exposure problem by allowing customers to return goods at the purchase price.\nPrice matching alleviates the multiple copies problem by allowing buyers to receive from sellers after purchase the difference between the price paid for a good and a lower price found elsewhere for the same good [7, 15, 18].\nFurthermore, price matching can reduce the impact of exactly when a seller brings an item to market, as the price will in part be set by others selling the same 
item.\nThese two retail policies provide the basis for the scheme proposed in this paper.2 We extend the proxy bidding technology currently employed by eBay.\nOur \"super\"-proxy extension will take advantage of a new, real options-based, market infrastructure that enables simple, yet optimal, bidding strategies.\nThe extensions are computationally simple, handle temporal issues, and retain seller autonomy in deciding when to enter the market and conduct individual auctions.\nA seller sells an option for a good, which will ultimately lead to either a sale of the good or the return of the option.\nBuyers interact through a proxy agent, defining a value on all possible bundles of goods in which they have interest together with the latest time period in which they are willing to wait to receive the good(s).\nThe proxy agents use this information to determine how much to bid for options, and follow a dominant bidding strategy across all relevant auctions.\nA proxy agent exercises options held when the buyer's patience has expired, choosing options that maximize a buyer's payoff given the reported valuation.\nAll other options are returned to the market and not exercised.\nThe options-based protocol makes truthful and immediate revelation to a proxy a dominant strategy for buyers, whatever the future auction dynamics.\nWe conduct an empirical analysis of eBay, collecting data on over four months of bids for Dell LCD screens (model E193FP) starting in the Summer of 2005.\nLCD screens are a high-ticket item, for which we demonstrate evidence of the sequential bidding problem.\nWe first infer a conservative model for the arrival time, departure time and value of bidders on eBay for LCD screens during this period.\nThis model is used to simulate the performance of the options-based infrastructure, in order to make direct comparisons to the actual performance of eBay in this market.\nWe also extend the work of Haile and Tamer [11] to estimate an upper bound on the
distribution of value of eBay bidders, taking into account the sequential auction problem when making the adjustments.\nUsing this estimate, one can approximate how much greater a bidder's true value is than the maximum bid they were observed to have placed on eBay.\n2Prior work has shown price matching as a potential mechanism for colluding firms to set monopoly prices. However, in our context, auction prices will be matched, which are not explicitly set by sellers but rather by buyers' bids.\nBased on this approximation, revenue generated in a simulation of the options-based scheme exceeds revenue on eBay for the comparable population and sequence of auctions by 14.8%, while the options-based scheme is 7.5% more efficient.\n1.2 Related Work\nA number of authors [27, 13, 28, 29] have analyzed the multiple copies problem, often in the context of categorizing or modeling sniping behavior for reasons other than those first brought forward by Ockenfels and Roth [20].\nThese papers perform equilibrium analysis in simpler settings, assuming bidders can participate in at most two auctions.\nPeters & Severinov [21] extend these models to allow buyers to consider an arbitrary number of auctions, and characterize a perfect Bayesian equilibrium.\nHowever, their model does not allow auctions to close at distinct times and does not consider the arrival and departure of bidders.\nPrevious work has developed a data-driven approach toward a taxonomy of strategies employed by bidders in practice when facing multi-unit auctions, but has not considered the sequential bidding problem [26, 2].\nPrevious work has also sought to provide agents with smarter bidding strategies [4, 3, 5, 1].\nUnfortunately, it seems hard to design artificial agents with equilibrium bidding strategies, even for a simple simultaneous ascending price auction.\nIwasaki et al.
[14] have considered the role of options in the context of a single, monolithic, auction design to help bidders with marginal-increasing values avoid exposure in a multi-unit, homogeneous item auction problem.\nIn other contexts, options have been discussed for selling coal mine leases [23], or as leveled commitment contracts for use in a decentralized marketplace [24].\nMost similar to our work, Gopal et al. [9] use options for reducing the risks of buyers and sellers in the sequential auction problem.\nHowever, their work uses costly options and does not remove the sequential bidding problem completely.\nWork on online mechanisms and online auctions [17, 12, 22] considers agents that can dynamically arrive and depart across time.\nWe leverage a recent price-based characterization by Hajiaghayi et al. [12] to provide a dominant strategy equilibrium for buyers within our options-based protocol.\nThe special case for single-unit buyers is equivalent to the protocol of Hajiaghayi et al., albeit with an options-based interpretation.\nJiang and Leyton-Brown [16] use machine learning techniques for bid identification in online auctions.\n2.\nEBAY AND THE DELL E193FP\n3.\nMODELING THE SEQUENTIAL AUCTION PROBLEM\n4.\n\"SUPER\" PROXIES AND OPTIONS\n4.1 Buyer Proxies\n4.1.1 Acquiring Options\n4.1.2 Pricing Options\n4.1.3 Exercising Options\n4.1.4 Why bookkeep and not match winning price?\n4.2 Complexity of Algorithm\n4.3 Truthful Bidding to the Proxy Agent\n5.\nEVALUATING THE OPTIONS / PROXY INFRASTRUCTURE\n5.1 Bid Identification\n6.\nDISCUSSION\n7.\nCONCLUSIONS\nWe introduced a new sales channel, consisting of an options-based and proxied auction protocol, to address the sequential auction problem that exists when bidders face multiple auctions for substitute and complementary goods.\nOur scheme provides bidders with a simple, dominant and truthful bidding strategy even though the market remains open and dynamic.\nIn addition to exploring more sophisticated proxies that
bid in terms of maximum willingness to pay, future work should aim to better model seller incentives and resolve the strategic problems facing sellers.\nFor instance, does the options scheme change seller incentives from what they currently are on eBay?","lvl-4":"The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution *\nABSTRACT\nBidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest.\nAs seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions.\nThese miscues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers.\nThis paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets.\nAn empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue.\n1.\nINTRODUCTION\nElectronic markets have generated significant new trading opportunities while allowing for the dynamic pricing of goods.\nIn addition to marketplaces such as eBay, electronic marketplaces are increasingly used for business-to-consumer auctions (e.g.
to sell surplus inventory [19]).\nWe believe that one leading place of resistance is in the lack of provably optimal bidding strategies for any but the simplest of market designs.\nAmong items listed on eBay, many are essentially identical.\nThis presence of essentially identical items can expose bidders, and sellers, to risks because of the sequential auction problem.\nFor example, Alice may want an LCD monitor, and could potentially bid in either a 1 o'clock or 3 o'clock eBay auction.\nThis is a problem of multiple copies.\nAnother problem bidders may face is the exposure problem.\nBoth problems arise in eBay as a result of sequential auctions of single items coupled with patient bidders with substitute or complementary valuations.\nWhy might the sequential auction problem be bad?\nComplex games may lead to bidders employing costly strategies and making mistakes.\nPotential bidders who do not wish to bear such costs may choose not to participate in the market, inhibiting seller revenue opportunities.\n1The exposure problem has been primarily investigated by Bykowsky et al. in the context of simultaneous single-item auctions. The problem is also a familiar one of online decision making.\nAdditionally, among those bidders who do choose to participate, the mistakes made may lead to inefficient allocations, further limiting revenue opportunities.\nWe are interested in creating modifications to eBay-style markets that simplify the bidder problem, leading to simple equilibrium strategies, and preferably better efficiency and revenue properties.\n1.1 Options + Proxies: A Proposed Solution\nRetail stores have developed policies to assist their customers in addressing sequential purchasing problems.\nReturn policies alleviate the exposure problem by allowing customers to return goods at the purchase price.\nPrice matching alleviates the multiple copies problem by allowing buyers to receive from sellers after purchase the difference between the price paid for a good and a lower price found elsewhere for the same good [7, 15, 18].\nFurthermore, price matching can reduce the impact of exactly when a seller brings an item to market, as the price will in part be set by others selling the same item.\nThese two retail policies provide the basis for the scheme proposed in this paper.2 We extend the proxy bidding technology currently employed by eBay.\nOur \"super\"-proxy extension will take advantage of a new, real options-based, market infrastructure that enables simple, yet optimal, bidding strategies.\nThe extensions are computationally simple, handle temporal issues, and retain seller autonomy in deciding when to enter the market and conduct individual auctions.\nA seller sells an option for a good, which will ultimately lead to either a sale of the good or the return of the option.\nThe proxy agents use this information to determine how much to bid for options, and follow a dominant bidding strategy across all relevant auctions.\nA proxy agent exercises options held when the buyer's patience
has expired, choosing options that maximize a buyer's payoff given the reported valuation.\nAll other options are returned to the market and not exercised.\nThe options-based protocol makes truthful and immediate revelation to a proxy a dominant strategy for buyers, whatever the future auction dynamics.\nWe conduct an empirical analysis of eBay, collecting data on over four months of bids for Dell LCD screens (model E193FP) starting in the Summer of 2005.\nLCD screens are a high-ticket item, for which we demonstrate evidence of the sequential bidding problem.\nWe first infer a conservative model for the arrival time, departure time and value of bidders on eBay for LCD screens during this period.\nThis model is used to simulate the performance of the options-based infrastructure, in order to make direct comparisons to the actual performance of eBay in this market.\nWe also extend the work of Haile and Tamer [11] to estimate an upper bound on the distribution of value of eBay bidders, taking into account the sequential auction problem when making the adjustments.\nUsing this estimate, one can approximate how much greater a bidder's true value is than the maximum bid they were observed to have placed on eBay.\n2Prior work has shown price matching as a potential mechanism for colluding firms to set monopoly prices. However, in our context, auction prices will be matched, which are not explicitly set by sellers but rather by buyers' bids.\nBased on this approximation, revenue generated in a simulation of the options-based scheme exceeds revenue on eBay for the comparable population and sequence of auctions by 14.8%, while the options-based scheme is 7.5% more efficient.\n1.2 Related Work\nThese papers perform equilibrium analysis in simpler settings, assuming bidders can participate in at most two auctions.\nPeters & Severinov [21] extend these models to allow buyers to consider an arbitrary number of auctions, and characterize a perfect
Bayesian equilibrium.\nHowever, their model does not allow auctions to close at distinct times and does not consider the arrival and departure of bidders.\nPrevious work has developed a data-driven approach toward a taxonomy of strategies employed by bidders in practice when facing multi-unit auctions, but has not considered the sequential bidding problem [26, 2].\nPrevious work has also sought to provide agents with smarter bidding strategies [4, 3, 5, 1].\nUnfortunately, it seems hard to design artificial agents with equilibrium bidding strategies, even for a simple simultaneous ascending price auction.\nIwasaki et al. [14] have considered the role of options in the context of a single, monolithic, auction design to help bidders with marginal-increasing values avoid exposure in a multi-unit, homogeneous item auction problem.\nIn other contexts, options have been discussed for selling coal mine leases [23], or as leveled commitment contracts for use in a decentralized marketplace [24].\nMost similar to our work, Gopal et al. [9] use options for reducing the risks of buyers and sellers in the sequential auction problem.\nHowever, their work uses costly options and does not remove the sequential bidding problem completely.\nWork on online mechanisms and online auctions [17, 12, 22] considers agents that can dynamically arrive and depart across time.\nWe leverage a recent price-based characterization by Hajiaghayi et al.
[12] to provide a dominant strategy equilibrium for buyers within our options-based protocol.\nThe special case for single-unit buyers is equivalent to the protocol of Hajiaghayi et al., albeit with an options-based interpretation.\nJiang and Leyton-Brown [16] use machine learning techniques for bid identification in online auctions.\n7.\nCONCLUSIONS\nWe introduced a new sales channel, consisting of an options-based and proxied auction protocol, to address the sequential auction problem that exists when bidders face multiple auctions for substitute and complementary goods.\nOur scheme provides bidders with a simple, dominant and truthful bidding strategy even though the market remains open and dynamic.\nIn addition to exploring more sophisticated proxies that bid in terms of maximum willingness to pay, future work should aim to better model seller incentives and resolve the strategic problems facing sellers.\nFor instance, does the options scheme change seller incentives from what they currently are on eBay?","lvl-2":"The Sequential Auction Problem on eBay: An Empirical Analysis and a Solution *\nABSTRACT\nBidders on eBay have no dominant bidding strategy when faced with multiple auctions each offering an item of interest.\nAs seen through an analysis of 1,956 auctions on eBay for a Dell E193FP LCD monitor, some bidders win auctions at prices higher than those of other available auctions, while others never win an auction despite placing bids in losing efforts that are greater than the closing prices of other available auctions.\nThese miscues in strategic behavior hamper the efficiency of the system, and in so doing limit the revenue potential for sellers.\nThis paper proposes a novel options-based extension to eBay's proxy-bidding system that resolves this strategic issue for buyers in commoditized markets.\nAn empirical analysis of eBay provides a basis for computer simulations that investigate the market effects of the options-based scheme, and demonstrates
that the options-based scheme provides greater efficiency than eBay, while also increasing seller revenue.\n1.\nINTRODUCTION\nElectronic markets represent an application of information systems that has generated significant new trading opportunities while allowing for the dynamic pricing of goods.\n* A preliminary version of this work appeared in the AMEC workshop in 2004.\nIn addition to marketplaces such as eBay, electronic marketplaces are increasingly used for business-to-consumer auctions (e.g. to sell surplus inventory [19]).\nMany authors have written about a future in which commerce is mediated by online, automated trading agents [10, 25, 1].\nThere is still little evidence of automated trading in e-markets, though.\nWe believe that one leading place of resistance is in the lack of provably optimal bidding strategies for any but the simplest of market designs.\nWithout this, we do not expect individual consumers, or firms, to be confident in placing their business in the \"hands\" of an automated agent.\nOne of the most common examples today of an electronic marketplace is eBay, where the gross merchandise volume (i.e., the sum of all successfully closed listings) during 2005 was $44B.\nAmong items listed on eBay, many are essentially identical.\nThis is especially true in the Consumer Electronics category [9], which accounted for roughly $3.5B of eBay's gross merchandise volume in 2005.\nThis presence of essentially identical items can expose bidders, and sellers, to risks because of the sequential auction problem.\nFor example, Alice may want an LCD monitor, and could potentially bid in either a 1 o'clock or 3 o'clock eBay auction.\nWhile Alice would prefer to participate in whichever auction will have the lower winning price, she cannot determine beforehand which auction that may be, and could end up winning the \"wrong\" auction.\nThis is a problem of multiple copies.\nAnother problem bidders may face is the exposure problem.\nAs investigated by
Bykowsky et al. [6], exposure problems exist when buyers desire a bundle of goods but may only participate in single-item auctions.1 For example, if Alice values a video game console by itself for $200, a video game by itself for $30, and both a console and game for $250, Alice must determine how much of the $20 of synergy value she might include in her bid for the console alone.\nBoth problems arise in eBay as a result of sequential auctions of single items coupled with patient bidders with substitute or complementary valuations.\nWhy might the sequential auction problem be bad?\nComplex games may lead to bidders employing costly strategies and making mistakes.\nPotential bidders who do not wish to bear such costs may choose not to participate in the market, inhibiting seller revenue opportunities.\n1The exposure problem has been primarily investigated by Bykowsky et al. in the context of simultaneous single-item auctions. The problem is also a familiar one of online decision making.\nAdditionally, among those bidders who do choose to participate, the mistakes made may lead to inefficient allocations, further limiting revenue opportunities.\nWe are interested in creating modifications to eBay-style markets that simplify the bidder problem, leading to simple equilibrium strategies, and preferably better efficiency and revenue properties.\n1.1 Options + Proxies: A Proposed Solution\nRetail stores have developed policies to assist their customers in addressing sequential purchasing problems.\nReturn policies alleviate the exposure problem by allowing customers to return goods at the purchase price.\nPrice matching alleviates the multiple copies problem by allowing buyers to receive from sellers after purchase the difference between the price paid for a good and a lower price found elsewhere for the same good [7, 15, 18].\nFurthermore, price matching can reduce the impact of exactly when a seller brings an item to market, as the price will in part be set by others
selling the same item.\nThese two retail policies provide the basis for the scheme proposed in this paper.2 We extend the proxy bidding technology currently employed by eBay.\nOur \"super\"-proxy extension will take advantage of a new, real options-based, market infrastructure that enables simple, yet optimal, bidding strategies.\nThe extensions are computationally simple, handle temporal issues, and retain seller autonomy in deciding when to enter the market and conduct individual auctions.\nA seller sells an option for a good, which will ultimately lead to either a sale of the good or the return of the option.\nBuyers interact through a proxy agent, defining a value on all possible bundles of goods in which they have interest together with the latest time period in which they are willing to wait to receive the good(s).\nThe proxy agents use this information to determine how much to bid for options, and follow a dominant bidding strategy across all relevant auctions.\nA proxy agent exercises options held when the buyer's patience has expired, choosing options that maximize a buyer's payoff given the reported valuation.\nAll other options are returned to the market and not exercised.\nThe options-based protocol makes truthful and immediate revelation to a proxy a dominant strategy for buyers, whatever the future auction dynamics.\nWe conduct an empirical analysis of eBay, collecting data on over four months of bids for Dell LCD screens (model E193FP) starting in the Summer of 2005.\nLCD screens are a high-ticket item, for which we demonstrate evidence of the sequential bidding problem.\nWe first infer a conservative model for the arrival time, departure time and value of bidders on eBay for LCD screens during this period.\nThis model is used to simulate the performance of the options-based infrastructure, in order to make direct comparisons to the actual performance of eBay in this market.\nWe also extend the work of Haile and Tamer [11] to estimate an upper
bound on the distribution of value of eBay bidders, taking into account the sequential auction problem when making the adjustments.\nUsing this estimate, one can approximate how much greater a bidder's true value is than the maximum bid they were observed to have placed on eBay.\n2Prior work has shown price matching as a potential mechanism for colluding firms to set monopoly prices. However, in our context, auction prices will be matched, which are not explicitly set by sellers but rather by buyers' bids.\nBased on this approximation, revenue generated in a simulation of the options-based scheme exceeds revenue on eBay for the comparable population and sequence of auctions by 14.8%, while the options-based scheme is 7.5% more efficient.\n1.2 Related Work\nA number of authors [27, 13, 28, 29] have analyzed the multiple copies problem, often in the context of categorizing or modeling sniping behavior for reasons other than those first brought forward by Ockenfels and Roth [20].\nThese papers perform equilibrium analysis in simpler settings, assuming bidders can participate in at most two auctions.\nPeters & Severinov [21] extend these models to allow buyers to consider an arbitrary number of auctions, and characterize a perfect Bayesian equilibrium.\nHowever, their model does not allow auctions to close at distinct times and does not consider the arrival and departure of bidders.\nPrevious work has developed a data-driven approach toward a taxonomy of strategies employed by bidders in practice when facing multi-unit auctions, but has not considered the sequential bidding problem [26, 2].\nPrevious work has also sought to provide agents with smarter bidding strategies [4, 3, 5, 1].\nUnfortunately, it seems hard to design artificial agents with equilibrium bidding strategies, even for a simple simultaneous ascending price auction.\nIwasaki et al.
[14] have considered the role of options in the context of a single, monolithic, auction design to help bidders with marginal-increasing values avoid exposure in a multi-unit, homogeneous item auction problem.\nIn other contexts, options have been discussed for selling coal mine leases [23], or as leveled commitment contracts for use in a decentralized marketplace [24].\nMost similar to our work, Gopal et al. [9] use options for reducing the risks of buyers and sellers in the sequential auction problem.\nHowever, their work uses costly options and does not remove the sequential bidding problem completely.\nWork on online mechanisms and online auctions [17, 12, 22] considers agents that can dynamically arrive and depart across time.\nWe leverage a recent price-based characterization by Hajiaghayi et al. [12] to provide a dominant strategy equilibrium for buyers within our options-based protocol.\nThe special case for single-unit buyers is equivalent to the protocol of Hajiaghayi et al., albeit with an options-based interpretation.\nJiang and Leyton-Brown [16] use machine learning techniques for bid identification in online auctions.\n2.\nEBAY AND THE DELL E193FP\nThe most common type of auction held on eBay is a single-item proxy auction.\nAuctions open at a given time and remain open for a set period of time (usually one week).\nBidders bid for the item by giving a proxy a value ceiling.\nThe proxy will bid on behalf of the bidder only as much as is necessary to maintain a winning position in the auction, up to the ceiling received from the bidder.\nBidders may communicate with the proxy multiple times before an auction closes.\nIn the event that a bidder's proxy has been outbid, a bidder may give the proxy a higher ceiling to use in the auction.\neBay's proxy auction implements an incremental version of a Vickrey auction, with the item sold to the highest bidder for the second-highest bid plus a small increment.\nFigure 1: Histogram of number of LCD auctions
available to each bidder and number of LCD auctions in which a bidder participates.\nThe market analyzed in this paper is that of a specific model of an LCD monitor, a 19\" Dell LCD model E193FP.\nThis market was selected for a variety of reasons, including:\n• The mean price of the monitor was $240 (with standard deviation $32), so we believe it reasonable to assume that bidders on the whole are only interested in acquiring one copy of the item on eBay.3\n• The volume transacted is fairly high, at approximately 500 units sold per month.\n• The item is not usually bundled with other items.\n• The item is typically sold \"as new,\" and so suitable for the price-matching of the options-based scheme.\nRaw auction information was acquired via a Perl script.\nThe script accesses the eBay search engine,4 and returns all auctions containing the terms 'Dell' and 'LCD' that have closed within the past month.5 Data was stored in a text file for post-processing.\nTo isolate the auctions in the domain of interest, queries were made against the titles of eBay auctions that closed between 27 May, 2005 and 1 October, 2005.6\n6Specifically, the query found all auctions where the title contained all of the following strings: 'Dell,' 'LCD' and 'E193FP,' while excluding all auctions that contained any of the following strings: 'Dimension,' 'GHZ,' 'desktop,' 'p4' and 'GB.' The exclusion terms were incorporated so that the only auctions analyzed would be those selling exclusively the LCD of interest. For example, the few bundled auctions selling both a Dell Dimension desktop and the E193FP LCD are excluded.\n7As a reference, most auctions close on eBay between noon and midnight EDT, with almost two auctions for the Dell LCD monitor closing each hour on average during peak time periods. Bidders have an average observed patience of 3.9 days (with a standard deviation of 11.4 days).\nFigure 1 provides a general sense of how many LCD auctions occur while a bidder is interested in pursuing a monitor.7 8,746 bidders (86%) had more than one auction available between when they first placed a bid on eBay and the latest closing time of an auction in which they bid (with an average of 78 auctions available).\nFigure 1 also illustrates the number of auctions in which each bidder participates.\nOnly 32.3% of bidders who had more than one auction available are observed to bid in more than one auction (bidding in 3.6 auctions on average).\nA simple regression analysis shows that bidders tend to submit maximal bids to an auction that are $1.22 higher after spending twice as much time in the system, as well as bids that are $0.27 higher in each subsequent auction.\nAmong the 508 bidders that won exactly one monitor and participated in multiple auctions, 201 (40%) paid more than $10 more than the closing price of another auction in which they bid, paying on average $35 more (standard deviation $21) than the closing price of the cheapest auction in which they bid but did not win.\nFurthermore, among the 2,216 bidders that never won an item despite participating in multiple auctions, 421 (19%) placed a losing bid in one auction that was more than $10 higher than the closing price of another auction in which they bid, submitting a losing bid on average $34 more (standard deviation $23) than the closing price of the cheapest auction in which they bid but did not win.\nAlthough these measures do not say a bidder that lost could have definitively won (because we only consider the final winning price and not the bid of the winner to her proxy), or a bidder that won could have
secured a better price, this is at least indicative of some bidder mistakes.\n3.\nMODELING THE SEQUENTIAL AUCTION PROBLEM\nWhile the eBay analysis was for simple bidders who desire only a single item, let us now consider a more general scenario where people may desire multiple goods of different types, possessing general valuations over those goods.\nConsider a world with buyers (sometimes called bidders) B and K different types of goods G1...GK.\nLet T = {0, 1, ...} denote time periods.\nLet L denote a bundle of goods, represented as a vector of size K, where Lk ∈ {0, 1} denotes the quantity of good type Gk in the bundle.8 The type of a buyer i ∈ B is (ai, di, vi), with arrival time ai ∈ T, departure time di ∈ T, and private valuation vi(L) ≥ 0 for each bundle of goods L received between ai and di, and zero value otherwise.\nThe arrival time models the period in which a buyer first realizes her demand and enters the market, while the departure time models the period in which a buyer loses interest in acquiring the good(s).\nIn settings with general valuations, we need an additional assumption: an upper bound on the difference between a buyer's arrival and departure, denoted ΔMax.\nBuyers have quasi-linear utilities, so that the utility of buyer i receiving bundle L and paying p, in some period no later than di, is ui(L, p) = vi(L) − p.
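The buyer model just defined can be made concrete with a small sketch (a hypothetical illustration for the reader, not the paper's simulation code): a buyer type (ai, di, vi) over bundles encoded as 0/1 vectors, with quasi-linear utility ui(L, p) = vi(L) − p, instantiated with Alice's console-and-game values from the introduction.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

Bundle = Tuple[int, ...]  # quantity of each good type, entries in {0, 1}

@dataclass
class Buyer:
    arrival: int                    # a_i: period the buyer enters the market
    departure: int                  # d_i: last period the buyer will wait
    valuation: Dict[Bundle, float]  # v_i(L) for each bundle of interest

    def value(self, bundle: Bundle, period: int) -> float:
        # v_i(L) if the bundle is received between a_i and d_i, zero otherwise
        if self.arrival <= period <= self.departure:
            return self.valuation.get(bundle, 0.0)
        return 0.0

    def utility(self, bundle: Bundle, price: float, period: int) -> float:
        # quasi-linear utility: u_i(L, p) = v_i(L) - p
        return self.value(bundle, period) - price

# Alice from the introduction: console alone $200, game alone $30, both $250
alice = Buyer(arrival=0, departure=5,
              valuation={(1, 0): 200.0, (0, 1): 30.0, (1, 1): 250.0})

print(alice.utility((1, 1), 220.0, period=3))  # 30.0: both goods for $220
print(alice.utility((1, 0), 180.0, period=9))  # -180.0: received after departure
```

Note how the $20 synergy value appears only in the joint bundle (1, 1), which is exactly what exposes Alice to risk when the console and game are sold in separate single-item auctions.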
Each seller j ∈ S brings a single item kj to the market, has no intrinsic value and wants to maximize revenue.\nSeller j has an arrival time, aj, which models the period in which she is first interested in listing the item, while the departure time, dj, models the latest period in which she is willing to consider having an auction for the item close.\nA seller will receive payment by the end of the reported departure of the winning buyer.\n8We extend notation whereby a single item k of type Gk refers to a vector L: Lk = 1.\nWe say an individual auction in a sequence is locally strategyproof (LSP) if truthful bidding is a dominant strategy for a buyer that can only bid in that auction.\nConsider the following example to see that LSP is insufficient for the existence of a dominant bidding strategy for buyers facing a sequence of auctions.\nEXAMPLE 1.\nAlice values one ton of Sand with one ton of Stone at $2,000.\nBob holds a Vickrey auction for one ton of Sand on Monday and a Vickrey auction for one ton of Stone on Tuesday.\nAlice has no dominant bidding strategy because she needs to know the price for Stone on Tuesday to know her maximum willingness to pay for Sand on Monday.\nDEFINITION 1.\nThe sequential auction problem.\nGiven a sequence of auctions, despite each auction being locally strategyproof, a bidder has no dominant bidding strategy.\nConsider a sequence of auctions.\nGenerally, auctions selling the same item will be uncertainly-ordered, because a buyer will not know the ordering of closing prices among the auctions.\nDefine the interesting bundles for a buyer as all bundles that could maximize the buyer's profit for some combination of auctions and bids of other buyers.9 Within the interesting bundles, say that an item has uncertain marginal value if the marginal value of an item depends on the other goods held by the buyer.10 Say that an item is oversupplied if there is more than one auction offering an item of that type.\nSay two bundles are
substitutes if one of those bundles has the same value as the union of both bundles .11 PROPOSITION 1.\nGiven locally strategyproof single-item auctions, the sequential auction problem exists for a bidder if and only if either of the following two conditions is true: (1) within the set of interesting bundles (a) there are two bundles that are substitutes, (b) there is an item with uncertain marginal value, or (c) there is an item that is over-supplied; (2) a bidder faces competitors' bids that are conditioned on the bidder's past bids.\nPROOF.\n(Sketch.)\n(\u21d0) A bidder does not have a dominant strategy when (a) she does not know which bundle among substitutes to pursue, (b) she faces the exposure problem, or (c) she faces the multiple copies problem.\nAdditionally, a bidder does not have a dominant strategy when she does not know how to optimally influence the bids of competitors.\n(\u21d2) By contradiction.\nA bidder has a dominant strategy to bid her constant marginal value for a given item in each auction available when conditions (1) and (2) are both false.\nFor example, the following buyers all face the sequential auction problem as a result of conditions (a), (b) and (c) respectively: a buyer who values one ton of Sand for $1,000, or one ton of Stone for $2,000, but not both Sand and Stone; a buyer who values one ton of Sand for $1,000, one ton of Stone for $300, and one ton of Sand and one ton of Stone for $1,500, and can participate in an auction for Sand before an auction for Stone; a buyer who values one ton of Sand for $1,000 and can participate in many auctions selling Sand.\n9Assume that the empty set is an interesting bundle.\n10Formally, an item k has uncertain marginal value if |{m : m = vi(Q) \u2212 vi(Q \u2212 k), \u2200Q \u2286 L \u2208 InterestingBundle, Q \u2287 {k}}| > 1.\n11Formally, two bundles A and B are substitutes if vi(A \u222a B) = max(vi(A), vi(B)), where A \u222a B = L such that Lk = max(Ak, Bk).\n4.\n"SUPER" PROXIES AND OPTIONS\nThe novel solution proposed in this work
to resolve the sequential auction problem consists of two primary components: richer proxy agents, and options with price matching.\nIn finance, a real option is a right to acquire a real good at a certain price, called the exercise price.\nFor instance, Alice may obtain from Bob the right to buy Sand from him at an exercise price of $1,000.\nAn option provides the right to purchase a good at an exercise price but not the obligation.\nThis flexibility allows buyers to put together a collection of options on goods and then decide which to exercise.\nOptions are typically sold at a price called the option price.\nHowever, options obtained at a non-zero option price cannot generally support a simple, dominant bidding strategy, as a buyer must compute the expected value of an option to justify the cost [8].\nThis computation requires a model of the future, which in our setting requires a model of the bidding strategies and the values of other bidders.\nThis is the very kind of game-theoretic reasoning that we want to avoid.\nInstead, we consider costless options with an option price of zero.\nThis will require some care as buyers are weakly better off with a costless option than without one, whatever its exercise price.\nHowever, multiple bidders pursuing options with no intention of exercising them would cause the efficiency of an auction for options to unravel.\nThis is the role of the mandatory proxy agents, which intermediate between buyers and the market.\nA proxy agent forces a link between the valuation function used to acquire options and the valuation used to exercise options.\nIf a buyer tells her proxy an inflated value for an item, she runs the risk of having the proxy exercise options at a price greater than her value.\n4.1 Buyer Proxies\n4.1.1 Acquiring Options\nAfter her arrival, a buyer submits her valuation \u02c6vi (perhaps untruthfully) to her proxy in some period \u02c6ai \u2265 ai, along with a claim about her departure time \u02c6di \u2265 \u02c6ai.\nAll
transactions are intermediated via proxy agents.\nEach auction is modified to sell an option on that good to the highest bidding proxy, with an initial exercise price set to the second-highest bid received .12 When an option in which a buyer is interested becomes available for the first time, the proxy determines its bid by computing the buyer's maximum marginal value for the item, and then submits a bid in this amount.\nA proxy does not bid for an item when it already holds an option.\nThe bid price is: bidti(k) = maxL [\u02c6vi(L + k) \u2212 \u02c6vi(L)] (1)\nBy having a proxy compute a buyer's maximum marginal value for an item and then bidding only that amount, a buyer's proxy will win any auction that could possibly be of benefit to the buyer and only lose those auctions that could never be of value to the buyer.\n12The system can set a reserve price for each good, provided that the reserve is universal for all auctions selling the same item.\nWithout a universal reserve price, price matching is not possible because of the additional restrictions on prices that individual sellers will accept.\nTable 1: Three-buyer example with each wanting a single item and one auction occurring on Monday and Tuesday.\n"XY" implies an option with exercise price X and bookkeeping that a proxy has prevented Y from currently possessing an option.\n"\u2192" is the updating of exercise price and bookkeeping.\nWhen a proxy wins an auction for an option, the proxy will store in its local memory the identity (which may be a pseudonym) of the proxy not holding an option because of the proxy's win (i.e., the proxy that it 'bumped' from winning, if any).\nThis information will be used for price matching.\n4.1.2 Pricing Options\nSellers agree by joining the market to allow the proxy representing a buyer to adjust the exercise price of an option that it holds downwards if the proxy discovers that it could have achieved a better price by waiting to bid in a later auction for an option on the same good.\nTo assist in the implementation of
the price matching scheme each proxy tracks future auctions for an option that it has already won and will determine who would be bidding in that auction had the proxy delayed its entry into the market until this later auction.\nThe proxy will request price matching from the seller that granted it an option if the proxy discovers that it could have secured a lower price by waiting.\nTo reiterate, the proxy does not acquire more than one option for any good.\nRather, it reduces the exercise price on its already issued option if a better deal is found.\nThe proxy is able to discover these deals by asking each future auction to report the identities of the bidders in that auction together with their bids.\nThis needs to be enforced by eBay, as the central authority.\nThe highest bidder in this later auction, across those whose identity is not stored in the proxy's memory for the given item, is exactly the bidder against whom the proxy would be competing had it delayed its entry until this auction.\nIf this high bid is lower than the current option price held, the proxy \"price matches\" down to this high bid price.\nAfter price matching, one of two adjustments will be made by the proxy for bookkeeping purposes.\nIf the winner of the auction is the bidder whose identity has been in the proxy's local memory, the proxy will replace that local information with the identity of the bidder whose bid it just price matched, as that is now the bidder the proxy has prevented from obtaining an option.\nIf the auction winner's identity is not stored in the proxy's local memory the memory may be cleared.\nIn this case, the proxy will simply price match against the bids of future auction winners on this item until the proxy departs.\nEXAMPLE 2 (TABLE 1).\nMolly's proxy wins the Monday auction, submitting a bid of $8 and receiving an option for $6.\nMolly's proxy adds Nancy to its local memory as Nancy's proxy would have won had Molly's proxy not bid.\nOn Tuesday, only Nancy's and 
Polly's proxy bid (as Molly's proxy holds an option), with Nancy's proxy winning an option for $4 and noting that it bumped Polly's proxy.\nTable 2: Examples demonstrating why bookkeeping will lead to a truthful system whereas simply matching to the lowest winning price will not.\nAt this time, Molly's proxy will price match its option down to $4 and replace Nancy with Polly in its local memory as per the price match algorithm, as Polly would be holding an option had Molly never bid.\n4.1.3 Exercising Options\nAt the reported departure time the proxy chooses which options to exercise.\nTherefore, a seller of an option must wait until period \u02c6dw for the option to be exercised and receive payment, where w was the winner of the option .13 For bidder i, in period \u02c6di, the proxy chooses the option(s) that maximize the (reported) utility of the buyer: \u03b8* = argmax\u03b8\u2286\u0398 [\u02c6vi(\u03b3(\u03b8)) \u2212 \u03c0(\u03b8)] (2)\nwhere \u0398 is the set of all options held, \u03b3(\u03b8) are the goods corresponding to a set of options, and \u03c0(\u03b8) is the sum of exercise prices for a set of options.\nAll other options are returned .14 No options are exercised when no combination of options has positive utility.\n4.1.4 Why bookkeep and not match winning price?\nOne may believe that an alternative method for implementing a price matching scheme could be to simply have proxies match the lowest winning price they observe after winning an option.\nHowever, as demonstrated in Table 2, such a simple price matching scheme will not lead to a truthful system.\nThe first scenario in Table 2 demonstrates the outcome if all agents were to truthfully report their types.\nMolly 13While this appears restrictive on the seller, we believe it is not significantly different from what sellers on eBay currently endure in practice.\nAn auction on eBay closes at a specific time, but a seller must wait until a buyer relinquishes payment before being able to realize the revenue, an amount of time that could easily be days (if payment is via a money order
sent through courier) to much longer (if a buyer is slow but not overtly delinquent in remitting her payment).\n14Presumably, an option returned will result in the seller holding a new auction for an option on the item it still possesses.\nHowever, the system will not allow a seller to re-auction an option until \u0394Max after the option had first been issued in order to maintain a truthful mechanism.\nwould win the Monday auction and receive an option with an exercise price of $6 (subsequently exercising that option at the end of Monday), and Nancy would win the Tuesday auction and receive an option with an exercise price of $4 (subsequently exercising that option at the end of Tuesday).\nThe second scenario in Table 2 demonstrates the outcome if Nancy were to misreport her value for the good by reporting an inflated value of $10, using the proposed bookkeeping method.\nNancy would win the Monday auction and receive an option with an exercise price of $8.\nOn Tuesday, Polly would win the auction and receive an option with an exercise price of $0.\nNancy's proxy would observe that the highest bid submitted on Tuesday among those proxies not stored in local memory is Polly's bid of $4, and so Nancy's proxy would price match the exercise price of its option down to $4.\nNote that the exercise price Nancy's proxy has obtained at the end of Tuesday is the same as when she truthfully revealed her type to her proxy.\nThe third scenario in Table 2 demonstrates the outcome if Nancy were to misreport her value for the good by reporting an inflated value of $10, if the price matching scheme were for proxies to simply match their option price to the lowest winning price at any time while they are in the system.\nNancy would win the Monday auction and receive an option with an exercise price of $8.\nOn Tuesday, Polly would win the auction and receive an option with an exercise price of $0.\nNancy's proxy would observe that the lowest price on Tuesday was $0, and so Nancy's 
proxy would price match the exercise price of its option down to $0.\nNote that the exercise price Nancy's proxy has obtained at the end of Tuesday is lower than when she truthfully revealed her type to the proxy.\nTherefore, a price matching policy of simply matching the lowest price paid may not elicit truthful information from buyers.\n4.2 Complexity of Algorithm\nAn XOR-valuation of size M for buyer i is a set of M terms, {(L1, v1i), ..., (LM, vMi)}, that maps distinct bundles to values, where i is interested in acquiring at most one such bundle.\nFor any bundle S, vi(S) = maxLm\u2286S (vmi).\nTHEOREM 1.\nGiven an XOR-valuation which possesses M terms, there is an O(KM2) algorithm for computing all maximum marginal values, where K is the number of different item types in which a buyer may be interested.\nPROOF.\nFor each item type, recall Equation 1 which defines the maximum marginal value of an item.\nFor each bundle L in the M-term valuation, vi(L + k) may be found by iterating over the M terms.\nTherefore, the number of terms explored to determine the maximum marginal value for any item is O(M2), and so the total number of bundle comparisons to be performed to calculate all maximum marginal values is O(KM2).\nTHEOREM 2.\nThe total memory required by a proxy for implementing price matching is O(K), where K is the number of distinct item types.\nThe total work performed by a proxy to conduct price matching in each auction is O(1).\nPROOF.\nBy construction of the algorithm, the proxy stores one maximum marginal value for each item for bidding, of which there are O(K); at most one buyer's identity for each item, of which there are O(K); and one current option exercise price for each item, of which there are O(K).\nFor each auction, the proxy either submits a precomputed bid or price matches, both of which take O(1) work.\n4.3 Truthful Bidding to the Proxy Agent\nProxies transform the market into a direct revelation mechanism, where each buyer i interacts with the proxy
only once,15 and does so by declaring a bid, bi, which is defined as an announcement of her type, (\u02c6ai, \u02c6di, \u02c6vi), where the announcement may or may not be truthful.\nWe denote all received bids other than i's as b\u2212i. Given bids, b = (bi, b\u2212i), the market determines allocations, xi(b), and payments, pi(b) \u2265 0, to each buyer (using an online algorithm).\nA dominant strategy equilibrium for buyers requires that vi(xi(bi, b\u2212i)) \u2212 pi(bi, b\u2212i) \u2265 vi(xi(\u02dcbi, b\u2212i)) \u2212 pi(\u02dcbi, b\u2212i), \u2200\u02dcbi \u2260 bi, \u2200b\u2212i.\nWe now establish that it is a dominant strategy for a buyer to reveal her true valuation and true departure time to her proxy agent immediately upon arrival to the system.\nThe proof builds on the price-based characterization of strategyproof single-item online auctions in Hajiaghayi et al. [12].\nDefine a monotonic and value-independent price function psi(ai, di, L, v\u2212i) which can depend on the values of other agents v\u2212i.
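The dominant-strategy condition just stated can be checked by brute force on a toy mechanism. The sketch below uses a single-item second-price auction (an illustration of the condition, not the paper's options mechanism) and verifies that no misreport ever beats truthful reporting against any profile of rival bids on a grid:

```python
from itertools import product

def second_price(bids):
    """Toy single-item second-price auction: returns (winner index, price paid).
    Ties break toward the lowest index."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    price = max(b for i, b in enumerate(bids) if i != winner)
    return winner, price

def truthful_is_dominant(true_value, reports, rival_profiles):
    """Check v(x(b_i, b_-i)) - p(b_i, b_-i) >= v(x(b'_i, b_-i)) - p(b'_i, b_-i)
    for every alternative report b'_i and every rival profile b_-i."""
    for b_minus_i in rival_profiles:
        w, p = second_price([true_value] + list(b_minus_i))
        truthful_u = (true_value - p) if w == 0 else 0.0
        for b in reports:
            w2, p2 = second_price([b] + list(b_minus_i))
            alt_u = (true_value - p2) if w2 == 0 else 0.0
            if alt_u > truthful_u + 1e-9:
                return False
    return True

grid = [0.0, 2.5, 5.0, 7.5, 10.0]
assert truthful_is_dominant(5.0, grid, list(product(grid, repeat=2)))
```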
Price psi(ai, di, L, v\u2212i) will represent the price available to agent i for bundle L in the mechanism if it announces arrival time ai and departure time di.\nThe price is independent of the value vi of agent i, but can depend on ai, di and L as long as it satisfies a monotonicity condition.\nPROOF.\nAgent i cannot benefit from reporting a later departure \u02c6di > di because the allocation is made in period \u02c6di and the agent would have no value for this allocation.\nAgent i cannot benefit from reporting a later arrival \u02c6ai > ai or earlier departure \u02c6di < di ... bidti(k), and so the bundle without k would be preferred to the bundle.\nTHEOREM 4.\nThe super proxy, options-based scheme is individually-rational for both buyers and sellers.\nTable 3: Average price paid, standard deviation of prices paid, average bidder value among winners, and average winning bidder surplus on eBay for Dell E193FP LCD screens as well as the simulated options-based market using worst-case estimates of bidders' true value.\nPROOF.\nBy construction, the proxy exercises the profit maximizing set of options obtained, or no options if no set of options derives non-negative surplus.\nTherefore, buyers are guaranteed non-negative surplus by participating in the scheme.\nFor sellers, the price of each option is based on a non-negative bid or zero.\n5.\nEVALUATING THE OPTIONS \/ PROXY INFRASTRUCTURE\nA goal of the empirical benchmarking and a reason to collect data from eBay is to try to build a realistic model of buyers from which to estimate seller revenue and other market effects under the options-based scheme.\nWe simulate a sequence of auctions that match the timing of the Dell LCD auctions on eBay .16 When an auction successfully closes on eBay, we simulate a Vickrey auction for an option on the item sold in that period.\nAuctions that do not successfully close on eBay are not simulated.\nWe estimate the arrival, departure and value of each bidder on eBay from their observed
behavior .17 Arrival is estimated as the first time that a bidder interacts with the eBay proxy, while departure is estimated as the latest closing time among eBay auctions in which a bidder participates.\nWe initially adopt a particularly conservative estimate for bidder value, estimating it as the highest bid a bidder was observed to make on eBay.\nTable 3 compares the distribution of closing prices on eBay and in the simulated options scheme.\nWhile the average revenue in both schemes is virtually the same ($239.66 in the options scheme vs. $240.24 on eBay), the winners in the options scheme tend to value the item won 7% more than the winners on eBay ($263 in the options scheme vs. $244 on eBay).\n5.1 Bid Identification\nWe extend the work of Haile and Tamer [11] to sequential auctions to get a better view of underlying bidder values.\nRather than assume for bidders an equilibrium behavior as in standard econometric techniques, Haile and Tamer do not attempt to model how bidders' true values get mapped into a bid in any given auction.\nRather, in the context of repeated 16When running the simulations, the results of the first and final ten days of auctions are not recorded to reduce edge effects that come from viewing a discrete time window of a continuous process.\n17For the 100 bidders that won multiple times on eBay, we have each one bid a constant marginal value for each additional item in each auction until the number of options held equals the total number of LCDs won on eBay, with each option available for price matching independently.\nThis bidding strategy is not a dominant strategy (falling outside the type space possible for buyers on which the proof of truthfulness has been built), but is believed to be the most appropriate first order action for simulation.\nFigure 2: CDF of maximum bids observed and upper bound estimate of the bidding population's distribution\nfor maximum willingness to pay.\nThe true population distribution lies below the 
estimated upper bound.\nsingle-item auctions with distinct bidder populations, Haile and Tamer make only the following two assumptions when estimating the distribution of true bidder values:\n1.\nBidders do not bid more than they are willing to pay.\n2.\nBidders do not allow an opponent to win at a price they are willing to beat.\nFrom the first of their two assumptions, given the bids placed by each bidder in each auction, Haile and Tamer derive a method for estimating an upper bound of the bidding population's true value distribution (i.e., the bound that lies above the true value distribution).\nFrom the second of their two assumptions, given the winning price of each auction, Haile and Tamer derive a method for estimating a lower bound of the bidding population's true value distribution.\nIt is only the upper-bound of the distribution that we utilize in our work.\nHaile and Tamer assume that bidders only participate in a single auction, and require independence of the bidding population from auction to auction.\nNeither assumption is valid here: the former because bidders are known to bid in more than one auction, and the latter because the set of bidders in an auction is in all likelihood not a true i.i.d. 
sampling of the overall bidding population.\nIn particular, those who win auctions are less likely to bid in successive auctions, while those who lose auctions are more likely to remain bidders in future auctions.\nIn applying their methods we make the following adjustments:\n\u2022 Within a given auction, each individual bidder's true willingness to pay is assumed weakly greater than the maximum bid that bidder submits across all auctions for that item (either past or future).\n\u2022 When estimating the upper bound of the value distribution, if a bidder bids in more than one auction, randomly select one of the auctions in which the bidder bid, and only utilize that one observation during the estimation .18\nTable 4: Average price paid, standard deviation of\nprices paid, average bidder value among winners, and average winning bidder surplus on eBay for Dell E193FP LCD screens as well as in the simulated options-based market using an adjusted Haile and Tamer estimate of bidders' true values being 15% higher than their maximum observed bid.\nFigure 2 provides the distribution of maximum bids placed by bidders on eBay as well as the estimated upper bound of the true value distribution of bidders based on the extended Haile and Tamer method .19 As can be seen, the smallest relative gap between the two curves meaningfully occurs near the 80th percentile, where the upper bound is 1.17 times the maximum bid.\nTherefore, adopted as a less conservative model of bidder values is a uniform scaling factor of 1.15.\nWe now present results from this less conservative analysis.\nTable 4 shows the distribution of closing prices in auctions on eBay and in the simulated options scheme.\nThe mean price in the options scheme is now significantly higher, 15% greater, than the prices on eBay ($276 in the options scheme vs. $240 on eBay), while the standard deviation of closing prices is lower among the options scheme auctions ($14 in the options scheme vs. 
$32 on eBay).\nTherefore, not only is the expected revenue stream higher, but the lower variance provides sellers a greater likelihood of realizing that higher revenue.\nThe efficiency of the options scheme remains higher than on eBay.\nThe winners in the options scheme now have an average estimated value 7.5% higher at $302.\nIn an effort to better understand this efficiency, we formulated a mixed integer program (MIP) to determine a simple estimate of the allocative efficiency of eBay.\nThe MIP computes the efficient value of the offline problem with full hindsight on all bids and all supply .20 Using a scaling of 1.15, the total value allocated to eBay winners is estimated at $551,242, while the optimal value (from the MIP) is $593,301.\nThis suggests an allocative efficiency of 92.9%: while the typical value of a winner on eBay is $281, an average value of $303 was possible .21 Note the options-based scheme comes very close to achieving this level of efficiency [at 99.7% efficient in this estimate] even though it operates without the benefit of hindsight.\nFinally, although the typical winning bidder surplus decreases between eBay and the options-based scheme, some surplus redistribution would be possible because the total market efficiency is improved .22\n18 ...tain auctions, then further adjustments would be required in order to utilize these techniques.\n19The estimation of the points in the curve is a minimization over many variables, many of which can have small-numbers bias.\nConsequently, Haile and Tamer suggest using a weighted average over all terms yi, \u03a3i yi exp(yi\u03c1) \/ \u03a3j exp(yj\u03c1), to approximate the minimum while reducing the small number effects.\nWe used \u03c1 = \u22121000 and removed observations of auctions with 17 bidders or more as they occurred very infrequently.\nHowever, some small numbers bias still demonstrated itself with the plateau in our upper bound estimate around a value of $300.\n20Buyers who won more than one item on eBay are cloned so that they appear to be multiple bidders of identical type.\n21As long as one believes that every bidder's true value is a constant factor \u03b1 away from their observed maximum bid, the 92.9% efficiency calculation holds for any value of \u03b1.\nIn practice, this belief may not be reasonable.\nFor example, if losing bidders tend to have true values close to their observed\n6.\nDISCUSSION\nThe biggest concern with our scheme is that proxy agents who may be interested in many different items may acquire many more options than they finally exercise.\nThis can lead to efficiency loss.\nNotice that this is not an issue when bidders are only interested in a single item (as in our empirical study), or have linear-additive values on items.\nTo fix this, we would prefer to have proxy agents use more caution in acquiring options and use a more adaptive bidding strategy than that in Equation 1.\nFor instance, if a proxy is already holding an option with an exercise price of $3 on some item for which it has value of $10, and it values some substitute item at $5, the proxy could reason that in no circumstance will it be useful to acquire an option on the second item.\nWe formulate a more sophisticated bidding strategy along these lines.\nLet \u0398t be the set of all options a proxy for bidder i already possesses at time t.
Let \u03b8t \u2286 \u0398t be a subset of those options, the sum of whose exercise prices is \u03c0(\u03b8t), and the goods corresponding to those options being \u03b3(\u03b8t).\nLet \u03a0(\u03b8t) = \u02c6vi(\u03b3(\u03b8t)) \u2212 \u03c0(\u03b8t) be the (reported) available surplus associated with a set of options.\nLet \u03b8*t be the set of options currently held that would maximize the buyer's surplus; i.e., \u03b8*t = argmax\u03b8t\u2286\u0398t \u03a0(\u03b8t).\nLet the maximal willingness to pay for an item k represent a price above which the agent knows it would never exercise an option on the item given the current options held.\nThis can be computed as follows: bidti(k) = maxL max[0, \u02c6vi(L + k) \u2212 max(\u03a0(\u03b8*t), \u02c6vi(L))] (3)\nwhere \u02c6vi(L + k) \u2212 \u03a0(\u03b8*t) considers surplus already held, \u02c6vi(L + k) \u2212 \u02c6vi(L) considers the marginal value of a good, and taking the max[0, \u00b7] considers the overall use of pursuing the good.\nHowever, and somewhat counter-intuitively, we are not able to implement this bidding scheme without forfeiting truthfulness.\nThe \u03a0(\u03b8*t) term in Equation 3 (i.e., the amount of guaranteed surplus bidder i has already obtained) can be influenced by proxy j's bid.\nTherefore, bidder j may have the incentive to misrepresent her valuation to her proxy if she believes doing so will cause i to bid differently in the future in a manner beneficial to j.
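The maximal-willingness-to-pay computation of Equation 3 can be sketched as follows. This is our own encoding, not the paper's: bundles are frozensets of good names, held options are (good, exercise price) pairs, and `v_hat` stands in for the reported valuation:

```python
from itertools import combinations

def surplus(v_hat, options):
    """Pi(theta) = v_hat(gamma(theta)) - pi(theta) for a set of (good, price) options."""
    goods = frozenset(g for g, _ in options)
    return v_hat(goods) - sum(p for _, p in options)

def best_held_surplus(v_hat, held):
    """Pi(theta*_t): the best surplus over all subsets of currently held options."""
    subsets = [c for r in range(len(held) + 1) for c in combinations(held, r)]
    return max(surplus(v_hat, s) for s in subsets)

def max_willingness_to_pay(v_hat, held, k, bundles):
    """max over L of max(0, v_hat(L + k) - max(Pi(theta*_t), v_hat(L)))."""
    pi_star = best_held_surplus(v_hat, held)
    return max(max(0.0, v_hat(L | {k}) - max(pi_star, v_hat(L))) for L in bundles)

def v(L):
    # Value 10 for sand and 5 for stone, as substitutes (not additive).
    return max(10.0 if 'sand' in L else 0.0, 5.0 if 'stone' in L else 0.0)

# With a $3 option already held on sand (valued $10), bidding for the $5
# substitute stone is never useful, matching the reasoning in the text.
held = [('sand', 3.0)]
bundles = [frozenset(), frozenset({'sand'})]
assert max_willingness_to_pay(v, held, 'stone', bundles) == 0.0
```

Holding no options, the same buyer would bid up to her full $5 value for stone, so the cautious bid only kicks in once guaranteed surplus exists.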
Consider the following example where the proxy scheme is refined to bid the maximum willingness to pay.\nEXAMPLE 3.\nAlice values either one ton of Sand or one ton of Stone for $2,000.\nBob values either one ton of Sand or one ton of Stone for $1,500.\nAll bidders have a patience of 2 days.\n21(cont.) maximum bids, while eBay winners have true values much greater than their observed maximum bids, then downward bias is introduced in the efficiency calculation at present.\n22The increase in eBay winner surplus between Tables 3 and 4 is to be expected as the \u03b1 scaling strictly increases the estimated value of the eBay winners while holding the prices at which they won constant.\nOn day one, a Sand auction is held, where Alice's proxy bids $2,000 and Bob's bids $1,500.\nAlice's proxy wins an option to purchase Sand for $1,500.\nOn day two, a Stone auction is held, where Alice's proxy bids $1,500 [as she has already obtained a guaranteed $500 of surplus from winning a Sand option, and so reduces her Stone bid by this amount], and Bob's bids $1,500.\nEither Alice's proxy or Bob's proxy will win the Stone option.\nAt the end of the second day, Alice's proxy holds an option with an exercise price of $1,500 to obtain a good valued for $2,000, and so obtains $500 in surplus.\nNow, consider what would have happened had Alice declared that she valued only Stone.\nEXAMPLE 4.\nAlice declares valuing only Stone for $2,000.\nBob values either one ton of Sand or one ton of Stone for $1,500.\nAll bidders have a patience of 2 days.\nOn day one, a Sand auction is held, where Bob's proxy bids $1,500.\nBob's proxy wins an option to purchase Sand for $0.\nOn day two, a Stone auction is held, where Alice's proxy bids $2,000, and Bob's bids $0 [as he has already obtained a guaranteed $1,500 of surplus from winning a Sand option, and so reduces his Stone bid by this amount].\nAlice's proxy wins the Stone option for $0.\nAt the end of the second day, Alice's proxy holds an option with an exercise
price of $0 to obtain a good valued for $2,000, and so obtains $2,000 in surplus.\nBy misrepresenting her valuation (i.e., excluding her value of Sand), Alice was able to secure higher surplus by guiding Bob's bid for Stone to $0.\nAn area of immediate further work by the authors is to develop a more sophisticated proxy agent that can allow for bidding of maximum willingness to pay (Equation 3) while maintaining truthfulness.\nAn additional, practical, concern with our proxy scheme is that we assume an available, trusted, and well understood method to characterize goods (and presumably the quality of goods).\nWe envision this happening in practice by sellers defining a classification for their item upon entering the market, for instance via a UPC code.\nJust as in eBay, this would allow an opportunity for sellers to improve revenue by overstating the quality of their item ("new" vs. "like new"), and raises the issue of how well a reputation scheme could address this.\n7.\nCONCLUSIONS\nWe introduced a new sales channel, consisting of an options-based and proxied auction protocol, to address the sequential auction problem that exists when bidders face multiple auctions for substitute and complement goods.\nOur scheme provides bidders with a simple, dominant and truthful bidding strategy even though the market remains open and dynamic.\nIn addition to exploring more sophisticated proxies that bid in terms of maximum willingness to pay, future work should aim to better model seller incentives and resolve the strategic problems facing sellers.\nFor instance, does the options scheme change seller incentives from what they currently are on eBay?","keyphrases":["sequenti auction problem","empir analysi","bid strategi","multipl auction","strateg behavior","commodit market","comput simul","market effect","ebai","option-base extens","proxi-bid system","trade opportun","electron marketplac","busi-to-consum auction","autom trade agent","onlin auction","option","proxi 
bid"],"prmu":["P","P","P","P","P","P","P","P","U","M","M","U","U","M","U","M","U","M"]} {"id":"I-54","title":"Approximate and Online Multi-Issue Negotiation","abstract":"This paper analyzes bilateral multi-issue negotiation between self-interested autonomous agents. The agents have time constraints in the form of both deadlines and discount factors. There are m > 1 issues for negotiation where each issue is viewed as a pie of size one. The issues are indivisible (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent). Here different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. For such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties. Then, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting. In order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm\/\u03b52) where n is the negotiation deadline and \u03b5 the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(\u221am).
These approximate strategies also have polynomial time complexity.","lvl-1":"Approximate and Online Multi-Issue Negotiation Shaheen S. Fatima Department of Computer Science University of Liverpool Liverpool L69 3BX, UK.\nshaheen@csc.liv.ac.uk Michael Wooldridge Department of Computer Science University of Liverpool Liverpool L69 3BX, UK.\nmjw@csc.liv.ac.uk Nicholas R. Jennings School of Electronics and Computer Science University of Southampton Southampton SO17 1BJ, UK.\nnrj@ecs.soton.ac.uk ABSTRACT This paper analyzes bilateral multi-issue negotiation between selfinterested autonomous agents.\nThe agents have time constraints in the form of both deadlines and discount factors.\nThere are m > 1 issues for negotiation where each issue is viewed as a pie of size one.\nThe issues are indivisible (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent).\nHere different agents value different issues differently.\nThus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities.\nFor such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties.\nThen, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting.\nIn order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium.\nWe also analyze the relative error (i.e., the difference between the true optimum and the approximate).\nThe time complexity of the approximate equilibrium strategies is O(nm\/ 2 ) where n is the negotiation deadline and the relative error.\nFinally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are 
uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity.

Categories and Subject Descriptors
I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems

General Terms
Algorithms, Design, Theory

1. INTRODUCTION

Negotiation is a key form of interaction in multiagent systems. It is a process in which disputing agents decide how to divide the gains from cooperation. Since this decision is made jointly by the agents themselves [20, 19, 13, 15], each party can only obtain what the other is prepared to allow them. Now, the simplest form of negotiation involves two agents and a single issue. For example, consider a scenario in which a buyer and a seller negotiate on the price of a good. To begin, the two agents are likely to differ on the price at which they believe the trade should take place, but through a process of joint decision-making they either arrive at a price that is mutually acceptable or they fail to reach an agreement. Since agents are likely to begin with different prices, one or both of them must move toward the other, through a series of offers and counter-offers, in order to obtain a mutually acceptable outcome. However, before the agents can actually perform such negotiations, they must decide the rules for making offers and counter-offers. That is, they must set the negotiation protocol [20]. On the basis of this protocol, each agent chooses its strategy (i.e., what offers it should make during the course of negotiation). Given this context, this work focuses on competitive scenarios with self-interested agents. For such cases, each participant defines its strategy so as to maximise its individual utility. However, in most bilateral negotiations, the parties involved need to settle more than one
issue. For this case, the issues may be divisible or indivisible [4]. For the former, the problem for the agents is to decide how to split each issue between themselves [21]. For the latter, the individual issues cannot be divided. An issue, in its entirety, must be allocated to either of the two agents. Since the agents value different issues differently, they must come to terms about who will take which issue. To date, most of the existing work on multi-issue negotiation has focussed on the former case [7, 2, 5, 23, 11, 6]. However, in many real-world settings, the issues are indivisible. Hence, our focus here is on negotiation for indivisible issues. Such negotiations are very common in multiagent systems. For example, consider the case of task allocation between two agents. There is a set of tasks to be carried out and different agents have different preferences for the tasks. The tasks cannot be partitioned; a task must be carried out by one agent. The problem then is for the agents to negotiate about who will carry out which task.

951 978-81-904262-7-5 (RPS) © 2007 IFAAMAS

A key problem in the study of multi-issue negotiation is to determine the equilibrium strategies. An equally important problem, especially in the context of software agents, is to find the time complexity of computing the equilibrium offers. However, such computational issues have so far received little attention. As we will show, this is mainly due to the fact that existing work (described in Section 5) has mostly focused on negotiation for divisible issues, and finding the equilibrium for this case is computationally easier than that for the case of indivisible issues. Our primary objective is, therefore, to answer the computational questions for the latter case for the types of situations that are commonly faced by agents in real-world contexts. Thus, we consider negotiations in which there is incomplete information and time constraints. Incompleteness of information on
the part of negotiators is a common feature of most practical negotiations. Also, agents typically have time constraints in the form of both deadlines and discount factors. Deadlines are an essential element since negotiation cannot go on indefinitely; rather, it must end within a reasonable time limit. Likewise, discount factors are essential since the goods may be perishable or their value may decline due to inflation. Moreover, the strategic behaviour of agents with deadlines and discount factors differs from those without (see [21] for single-issue bargaining without deadlines and [23, 13] for bargaining with deadlines and discount factors in the context of divisible issues). Given this, we consider indivisible issues and first analyze the strategic behaviour of agents to obtain the equilibrium strategies for the case where all the issues for negotiation are known a priori to both agents. For this case, we show that the problem of finding the equilibrium offers is NP-hard, even in a complete information setting. Then, in order to overcome the problem of time complexity, we present strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²), where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation, where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. In so doing, our contribution lies in analyzing the computational complexity of the above
multi-issue negotiation problem, and finding the approximate and online equilibria. No previous work has determined these equilibria. Since software agents have limited computational resources, our results are especially relevant to such resource-bounded agents.

The remainder of the paper is organised as follows. We begin by giving a brief overview of single-issue negotiation in Section 2. In Section 3, we obtain the equilibrium for multi-issue negotiation and show that finding equilibrium offers is an NP-hard problem. We then present an approximate equilibrium and evaluate its approximation error. Section 4 analyzes online multi-issue negotiation. Section 5 discusses the related literature and Section 6 concludes.

2. SINGLE-ISSUE NEGOTIATION

We adopt the single-issue model of [27] because this is a model where, during negotiation, the parties are allowed to make offers from a set of discrete offers. Since our focus is on indivisible issues (i.e., parties are allowed to make one of two possible offers: zero or one), our scenario fits in well with [27]. Hence we use this basic single-issue model and extend it to multiple issues. Before doing so, we give an overview of this model and its equilibrium strategies.

There are two strategic agents: a and b. Each agent has time constraints in the form of deadlines and discount factors. The two agents negotiate over a single indivisible issue (i). This issue is a 'pie' of size 1 and the agents want to determine who gets the pie. There is a deadline (i.e., a number of rounds by which negotiation must end). Let n ∈ N+ denote this deadline. The agents use an alternating-offers protocol (like that of Rubinstein [18]), which proceeds through a series of time periods. One of the agents, say a, starts negotiation in the first time period (i.e., t = 1) by making an offer (x_i = 0 or 1) to b.
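Looking ahead to the equilibrium analysis that follows, the outcome of this finite-horizon game can be computed mechanically by backward induction over the n periods. A minimal sketch (the function and the convention that a proposes in odd-numbered periods are ours, matching the alternating-offers protocol of the text):

```python
def single_issue_equilibrium(n):
    """Backward induction for the single-issue alternating-offers game.

    Agent a proposes in odd periods, b in even ones; with an indivisible
    pie each offer has the form [a's share, b's share], shares in {0, 1}.
    Returns a dict mapping each period t to its equilibrium offer; the
    period-1 entry is the offer accepted in equilibrium.
    """
    offers = {}
    # At the deadline the proposing agent keeps the entire pie.
    proposer_at_n = "a" if n % 2 == 1 else "b"
    offers[n] = [1, 0] if proposer_at_n == "a" else [0, 1]
    for t in range(n - 1, 0, -1):
        # The period-t responder proposes at t + 1, so its continuation
        # utility is positive; with a 0/1 pie the period-t proposer must
        # therefore concede the whole pie for its offer to be accepted.
        offers[t] = offers[t + 1]
    return offers
```

For an odd deadline the first mover a keeps the pie (offer [1, 0]); for an even deadline it concedes it (offer [0, 1]) — exactly the period-1 offers derived in the text below.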
Agent b can either accept or reject the offer. If it accepts, negotiation ends in an agreement with a getting x_i and b getting y_i = 1 − x_i. Otherwise, negotiation proceeds to the next time period, in which agent b makes a counter-offer. This process of making offers continues until one of the agents either accepts an offer or quits negotiation (resulting in a conflict). Thus, there are three possible actions an agent can take during any time period: accept the last offer, make a new counter-offer, or quit the negotiation.

An essential feature of negotiations involving alternating offers is that the agents' utilities decrease with time [21]. Specifically, the decrease occurs at each step of offer and counter-offer. This decrease is represented with a discount factor denoted 0 < δ_i ≤ 1 for both agents (see footnote 1). Let [x_i^t, y_i^t] denote the offer made at time period t, where x_i^t and y_i^t denote the shares for agents a and b respectively. Then, for a given pie, the set of possible offers is:

{[x_i^t, y_i^t] : x_i^t = 0 or 1, y_i^t = 0 or 1, and x_i^t + y_i^t = 1}

At time t, if a and b receive shares of x_i^t and y_i^t respectively, then their utilities are:

u_i^a(x_i^t, t) = x_i^t × δ^(t−1) if t ≤ n, and 0 otherwise
u_i^b(y_i^t, t) = y_i^t × δ^(t−1) if t ≤ n, and 0 otherwise

The conflict utility (i.e., the utility received in the event that no deal is struck) is zero for both agents.

For the above setting, the agents reason as follows in order to determine what to offer at t = 1. We let A(1) (B(1)) denote a's (b's) equilibrium offer for the first time period. Let agent a denote the first mover (i.e., at t = 1, a proposes to b who should get the pie). To begin, consider the case where the deadline for both agents is n = 1. If b accepts, the division occurs as agreed; if not, neither agent gets anything (since n = 1 is the deadline). Here, a is in a powerful position and is able to propose to keep 100 percent of the pie and give
nothing to b (see footnote 2). Since the deadline is n = 1, b accepts this offer and agreement takes place in the first time period.

Now, consider the case where the deadline is n = 2. In order to decide what to offer in the first round, a looks ahead to t = 2 and reasons backwards. Agent a reasons that if negotiation proceeds to the second round, b will take 100 percent of the pie by offering [0, 1] and leave nothing for a. Thus, in the first time period, if a offers b anything less than the whole pie, b will reject the offer. Hence, during the first time period, agent a offers [0, 1]. Agent b accepts this and an agreement occurs in the first time period.

In general, if the deadline is n, negotiation proceeds as follows. As before, agent a decides what to offer in the first round by looking ahead as far as t = n and then reasoning backwards. Agent a's offer for t = 1 depends on who the offering agent is for the last time period. This, in turn, depends on whether n is odd or even. Since a makes an offer at t = 1 and the agents use the alternating-offers protocol, the offering agent for the last time period is b if n is even and a if n is odd. Thus, depending on whether n is odd or even, a makes the following offer at t = 1:

A(1) = OFFER [1, 0] if n is odd; ACCEPT if it is b's turn
B(1) = OFFER [0, 1] if n is even; ACCEPT if it is a's turn

Footnote 1: Having a different discount factor for different agents only makes the presentation more involved without leading to any changes in the analysis of the strategic behaviour of the agents or the time complexity of finding the equilibrium offers. Hence we have a single discount factor for both agents.

Footnote 2: It is possible that b may reject such a proposal. However, irrespective of whether b accepts or rejects the proposal, it gets zero utility (because the deadline is n = 1). Thus, we assume that b accepts a's offer.

952 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

Agent b accepts this offer and
negotiation ends in the first time period. Note that the equilibrium outcome depends on who makes the first move. Since we have two agents and either of them could move first, we get two possible equilibrium outcomes. On the basis of the above equilibrium for single-issue negotiation with complete information, we first obtain the equilibrium for multiple issues and then show that computing these offers is a hard problem. We then present a time-efficient approximate equilibrium.

3. MULTI-ISSUE NEGOTIATION

We first analyse the complete information setting. This section forms the base which we extend to the case of information uncertainty in Section 4. Here a and b negotiate over m > 1 indivisible issues. These issues are m distinct pies and the agents want to determine how to distribute the pies between themselves. Let S = {1, 2, ..., m} denote the set of m pies. As before, each pie is of size 1. Let the discount factor for issue c, where 1 ≤ c ≤ m, be 0 < δ_c ≤ 1. For each issue, let n denote each agent's deadline. In the offer for time period t (where 1 ≤ t ≤ n), agent a's (b's) share for each of the m issues is now represented as an m-element vector x^t ∈ B^m (y^t ∈ B^m), where B denotes the set {0, 1}. Thus, if agent a's share for issue c at time t is x_c^t, then agent b's share is y_c^t = (1 − x_c^t). The shares for a and b are together represented as the package [x^t, y^t].

As is traditional in multi-issue utility theory, we define an agent's cumulative utility using the standard additive form [12]. The functions U^a : B^m × B^m × N+ → R and U^b : B^m × B^m × N+ → R give the cumulative utilities for a and b respectively at time t. These are defined as follows:

U^a([x^t, y^t], t) = Σ_{c=1}^{m} k_c^a u_c^a(x_c^t, t) if t ≤ n, and 0 otherwise    (1)
U^b([x^t, y^t], t) = Σ_{c=1}^{m} k_c^b u_c^b(y_c^t, t) if t ≤ n, and 0 otherwise    (2)

where k^a ∈ N+^m denotes an m-element vector of
constants for agent a and k^b ∈ N+^m that for b. Here N+ denotes the set of positive integers. These vectors indicate how the agents value different issues. For example, if k_c^a > k_{c+1}^a, then agent a values issue c more than issue c + 1. Likewise for agent b. In other words, the m issues are perfect substitutes (i.e., all that matters to an agent is its total utility for all the m issues and not that for any subset of them). In all the settings we study, the issues will be perfect substitutes. To begin, each agent has complete information about all negotiation parameters (i.e., n, m, k_c^a, k_c^b, and δ_c for 1 ≤ c ≤ m).

Now, multi-issue negotiation can be done using different procedures. Broadly speaking, there are three key procedures for negotiating multiple issues [19]:

1. the package deal procedure, where all the issues are settled together as a bundle,
2. the sequential procedure, where the issues are discussed one after another, and
3. the simultaneous procedure, where the issues are discussed in parallel.

Between these three procedures, the package deal is known to generate Pareto optimal outcomes [19, 6]. Hence we adopt it here. We first give a brief description of the procedure and then determine the equilibrium strategies for it.

3.1 The package deal procedure

In this procedure, the agents use the same protocol as for single-issue negotiation (described in Section 2). However, an offer for the package deal includes a proposal for each issue under negotiation. Thus, for m issues, an offer includes m divisions, one for each issue. Agents are allowed to either accept a complete offer (i.e., all m issues) or reject a complete offer. An agreement can therefore take place either on all m issues or on none of them. As per single-issue negotiation, an agent decides what to offer by looking ahead and reasoning backwards. However, since an offer for the package deal includes a share for all the m issues, the agents can now
make tradeoffs across the issues in order to maximise their cumulative utilities. For 1 ≤ c ≤ m, the equilibrium offer for issue c at time t is denoted as [a_c^t, b_c^t], where a_c^t and b_c^t denote the shares for agents a and b respectively. We denote the equilibrium package at time t as [a^t, b^t], where a^t ∈ B^m (b^t ∈ B^m) is an m-element vector that denotes a's (b's) share for each of the m issues. Also, for 1 ≤ c ≤ m, δ_c is the discount factor for issue c. The symbols 0 and 1 denote m-element vectors of zeroes and ones respectively. Note that for 1 ≤ t ≤ n, a_c^t + b_c^t = 1 (i.e., the sum of the agents' shares (at time t) for each pie is one). Finally, for time period t (for 1 ≤ t ≤ n) we let A(t) (respectively B(t)) denote the equilibrium strategy for agent a (respectively b).

3.2 Equilibrium strategies

As mentioned in Section 1, the package deal allows agents to make tradeoffs. We let TRADEOFFA (TRADEOFFB) denote agent a's (b's) function for making tradeoffs. We let P denote a set of parameters to the procedure TRADEOFFA (TRADEOFFB), where P = {k^a, k^b, δ, m}. Given this, the following theorem characterises the equilibrium for the package deal procedure.

THEOREM 1. For the package deal procedure, the following strategies form a Nash equilibrium. The equilibrium strategy for t = n is:

A(n) = OFFER [1, 0] if it is a's turn; ACCEPT if it is b's turn
B(n) = OFFER [0, 1] if it is b's turn; ACCEPT if it is a's turn

For all preceding time periods t < n, if [x^t, y^t] denotes the offer made at time t, then the equilibrium strategies are defined as follows:

A(t) = OFFER TRADEOFFA(P, UB(t), t) if it is a's turn; if U^a([x^t, y^t], t) ≥ UA(t) then ACCEPT, else REJECT, if it is b's turn
B(t) = OFFER TRADEOFFB(P, UA(t), t) if it is b's turn; if U^b([x^t, y^t], t) ≥ UB(t) then ACCEPT, else REJECT, if it is a's turn

where UA(t) = U^a([a^(t+1), b^(t+1)], t + 1) and UB(t) = U^b([a^(t+1), b^(t+1)], t + 1).

PROOF. We look ahead to the last time period (i.e., t = n) and then reason backwards. To begin, if negotiation reaches the deadline (n), then the agent whose turn it is takes everything and leaves nothing for its opponent. Hence, we get the strategies A(n) and B(n) as given in the statement of the theorem. In all the preceding time periods (t < n), the offering agent proposes a package that gives its opponent a cumulative utility equal to what the opponent would get from its own equilibrium offer for the next time period. During time period t, either a or b could be the offering agent. Consider the case where a makes an offer at t. The package that a offers at t gives b a cumulative utility of U^b([a^(t+1), b^(t+1)], t + 1). However, since there is more than one issue, there is more than one package that gives b this cumulative utility. From among these packages, a offers the one that maximises its own cumulative utility (because it is a utility maximiser). Thus, the problem for a is to find the package [a^t, b^t] so as to:

maximize Σ_{c=1}^{m} k_c^a (1 − b_c^t) δ_c^(t−1)    (3)
subject to Σ_{c=1}^{m} b_c^t k_c^b ≥ UB(t)
           b_c^t = 0 or 1 for 1 ≤ c ≤ m

where UB(t), δ_c^(t−1), k_c^a, and k_c^b are constants and b_c^t (1 ≤ c ≤ m) is a variable. Assume that the function TRADEOFFA takes parameters P, UB(t), and t, solves the maximisation problem given in Equation 3, and returns the corresponding package. If there is more than one package that solves Equation 3, then TRADEOFFA returns any one of them (because agent a gets equal utility from all such packages, and so does agent b). The function TRADEOFFB for agent b is analogous to that for a.
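For intuition, the optimisation performed by TRADEOFFA (Equation 3) can be sketched as a brute-force search over b's 0/1 shares. The function below is our illustration, not the paper's algorithm, and is exponential in m; the point made shortly is precisely that the exact problem is a 0-1 knapsack and hence NP-hard:

```python
from itertools import product

def tradeoff_a(k_a, k_b, delta, t, ub_target):
    """Equation 3 by enumeration: among b-share vectors with entries in
    {0, 1} that give b a cumulative utility of at least ub_target, return
    the one maximising a's discounted utility, together with that utility."""
    best, best_util = None, float("-inf")
    for b_shares in product((0, 1), repeat=len(k_a)):
        if sum(bc * kb for bc, kb in zip(b_shares, k_b)) < ub_target:
            continue  # b would reject: below its continuation utility
        util_a = sum(ka * (1 - bc) * d ** (t - 1)
                     for ka, bc, d in zip(k_a, b_shares, delta))
        if util_a > best_util:
            best, best_util = b_shares, util_a
    return best, best_util
```

TRADEOFFB is the mirror image, with the roles of k^a and k^b (and of UA and UB) swapped.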
On the other hand, the equilibrium strategy for the agent that receives an offer is as follows. For time period t, let b denote the receiving agent. Then, b accepts [x^t, y^t] if UB(t) ≤ U^b([x^t, y^t], t); otherwise it rejects the offer, because it can get a higher utility in the next time period. The equilibrium strategy for a as receiving agent is defined analogously. In this way, we reason backwards and obtain the offers for the first time period. Thus, we get the equilibrium strategies (A(t) and B(t)) given in the statement of the theorem.

The following example illustrates how the agents make tradeoffs using the above equilibrium strategies.

EXAMPLE 1. Assume there are m = 2 issues for negotiation, the deadline for both issues is n = 2, and the discount factor for both issues for both agents is δ = 1/2. Let k_1^a = 3, k_2^a = 1, k_1^b = 1, and k_2^b = 5. Let agent a be the first mover. By using backward reasoning, a knows that if negotiation reaches the second time period (which is the deadline), then b will get a hundred percent of both the issues. This gives b a cumulative utility of UB(2) = 1/2 + 5/2 = 3. Thus, in the first time period, if b gets anything less than a utility of 3, it will reject a's offer. So, at t = 1, a offers the package where it gets issue 1 and b gets issue 2. This gives a cumulative utility of 3 to a and 5 to b.
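The numbers in Example 1 follow directly from the additive utilities of Equations 1 and 2; a quick check (the helper name is ours):

```python
def cumulative_utility(k, shares, delta, t):
    """Additive cumulative utility of Equations 1-2 for one agent:
    sum over issues of k_c * share_c * delta**(t - 1)."""
    return sum(kc * sc * delta ** (t - 1) for kc, sc in zip(k, shares))

# Example 1: m = 2 issues, deadline n = 2, discount factor delta = 1/2.
k_a, k_b, delta = [3, 1], [1, 5], 0.5

# If negotiation reached the deadline t = 2, b would take both issues:
ub_continuation = cumulative_utility(k_b, [1, 1], delta, 2)  # = 3.0

# At t = 1, a keeps issue 1 and concedes issue 2; b's utility meets its
# continuation value, so b accepts:
ua = cumulative_utility(k_a, [1, 0], delta, 1)  # a's utility: 3.0
ub = cumulative_utility(k_b, [0, 1], delta, 1)  # b's utility: 5.0 >= 3.0
```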
Agent b accepts the package and an agreement takes place in the first time period.

The maximization problem in Equation 3 can be viewed as the 0-1 knapsack problem (see footnote 3). In the 0-1 knapsack problem, we have a set of m items where each item has a profit and a weight. There is a knapsack with a given capacity. The objective is to fill the knapsack with items so as to maximize the cumulative profit of the items in the knapsack. This problem is analogous to the negotiation problem we want to solve (i.e., the maximization problem of Equation 3). Since k_c^a and δ_c^(t−1) are constants, maximizing Σ_{c=1}^{m} k_c^a (1 − b_c^t) δ_c^(t−1) is the same as minimizing Σ_{c=1}^{m} k_c^a b_c^t. Hence Equation 3 can be written as:

minimize Σ_{c=1}^{m} k_c^a b_c^t    (4)
subject to Σ_{c=1}^{m} b_c^t k_c^b ≥ UB(t)
           b_c^t = 0 or 1 for 1 ≤ c ≤ m

Equation 4 is a minimization version of the standard 0-1 knapsack problem (see footnote 4) with m items, where k_c^a represents the profit for item c, k_c^b the weight for item c, and UB(t) the knapsack capacity. Example 1 was for two issues and so it was easy to find the equilibrium offers. But, in general, it is not computationally easy to find the equilibrium offers of Theorem 1. The following theorem proves this.

Footnote 3: Note that for the case of divisible issues this is the fractional knapsack problem. The fractional knapsack problem is computationally easy; it can be solved in time polynomial in the number of items in the knapsack problem [17]. In contrast, the 0-1 knapsack problem is computationally hard.

Footnote 4: Note that for the standard 0-1 knapsack problem the weights, profits and the capacity are positive integers. However, a 0-1 knapsack problem with fractions and non-positive values can easily be transformed to one with positive integers in time linear in m using the methods given in [8, 17].

THEOREM 2. For the package deal procedure, the problem of finding the equilibrium offers given in Theorem 1 is NP-hard.

PROOF. Finding the equilibrium offers given in Theorem 1 requires solving the 0-1 knapsack problem given in Equation 4. Since the 0-1 knapsack problem is NP-hard [17], the problem of finding equilibrium for the package deal is also NP-hard.

3.3 Approximate equilibrium

Researchers in the area of algorithms have found time-efficient methods for computing approximate solutions to 0-1 knapsack problems [10]. Hence we use these methods to find a solution to our negotiation problem. At this stage, we would like to point out the main difference between solving the 0-1 knapsack problem and solving our negotiation problem. The 0-1 knapsack problem involves decision making by a single agent regarding which items to place in the knapsack. On the other hand, our negotiation problem involves two players, and they are both strategic. Hence, in our case, it is not enough to just find an approximate solution to the knapsack problem; we must also show that such an approximation forms an equilibrium. The traditional approach for overcoming the computational complexity in finding an equilibrium has been to use an approximate equilibrium (see [14, 26] for example). In this approach, a strategy profile is said to form an approximate Nash equilibrium if neither agent can gain more than the constant ε by deviating. Hence, our aim is to use the solution to the 0-1 knapsack problem proposed in [10] and show that it forms an approximate equilibrium to our negotiation problem. Before doing so, we give a brief overview of the key ideas that underlie approximation algorithms. There are two key issues in the design of approximate algorithms [1]:

1. the quality of their solution, and
2. the time taken to compute the approximation.

The quality of an approximate algorithm is determined by comparing its performance to that of the optimal algorithm and measuring the relative error [3, 1]. The relative error is defined as
(z − z*)/z*, where z is the approximate solution and z* the optimal one. In general, we are interested in finding approximate algorithms whose relative error is bounded from above by a certain constant ε, i.e.,

(z − z*)/z* ≤ ε    (5)

Regarding the second issue of time complexity, we are interested in finding fully polynomial approximation algorithms. An approximation algorithm is said to be fully polynomial if, for any ε > 0, it finds a solution satisfying Equation 5 in time polynomially bounded by the size of the problem (for the 0-1 knapsack problem, the problem size is equal to the number of items) and by 1/ε [1]. For the 0-1 knapsack problem, Ibarra and Kim [10] presented a fully polynomial approximation method. This method is based on dynamic programming. It is a parametric method that takes ε as a parameter and, for any ε > 0, finds a heuristic solution z with relative error at most ε, such that the time and space complexity grow polynomially with the number of items m and 1/ε. More specifically, the space and time complexity are both O(m/ε²) and hence polynomial in m and 1/ε (see [10] for the detailed approximation algorithm and proof of time and space complexity). Since the Ibarra and Kim method is fully polynomial, we use it to solve our negotiation problem. This is done as follows. For agent a, let APPROX-TRADEOFFA(P, UB(t), t, ε) denote a procedure that returns an approximate solution to Equation 4 using the Ibarra and Kim method. The procedure APPROX-TRADEOFFB(P, UA(t), t, ε) for agent b is analogous. For 1 ≤ c ≤ m, the approximate equilibrium offer for issue c at time t is denoted as [ā_c^t, b̄_c^t], where ā_c^t and b̄_c^t denote the shares for agents a and b respectively. We denote the equilibrium package at time t as [ā^t, b̄^t], where ā^t ∈ B^m (b̄^t ∈ B^m) is an m-element vector that denotes a's (b's) share for each of the m issues. Also, as before, for 1 ≤ c
≤ m, δ_c is the discount factor for issue c. Note that for 1 ≤ t ≤ n, ā_c^t + b̄_c^t = 1 (i.e., the sum of the agents' shares (at time t) for each pie is one). Finally, for time period t (for 1 ≤ t ≤ n) we let Ā(t) (respectively B̄(t)) denote the approximate equilibrium strategy for agent a (respectively b). The following theorem uses this notation and characterizes an approximate equilibrium for multi-issue negotiation.

THEOREM 3. For the package deal procedure, the following strategies form an approximate Nash equilibrium. The equilibrium strategy for t = n is:

Ā(n) = OFFER [1, 0] if it is a's turn; ACCEPT if it is b's turn
B̄(n) = OFFER [0, 1] if it is b's turn; ACCEPT if it is a's turn

For all preceding time periods t < n, if [x^t, y^t] denotes the offer made at time t, then the equilibrium strategies are defined as follows:

Ā(t) = OFFER APPROX-TRADEOFFA(P, UB(t), t, ε) if it is a's turn; if U^a([x^t, y^t], t) ≥ UA(t) then ACCEPT, else REJECT, if it is b's turn
B̄(t) = OFFER APPROX-TRADEOFFB(P, UA(t), t, ε) if it is b's turn; if U^b([x^t, y^t], t) ≥ UB(t) then ACCEPT, else REJECT, if it is a's turn

where UA(t) = U^a([ā^(t+1), b̄^(t+1)], t + 1) and UB(t) = U^b([ā^(t+1), b̄^(t+1)], t + 1). An agreement takes place at t = 1.

PROOF. As in the proof for Theorem 1, we use backward reasoning. We first obtain the strategies for the last time period t = n. It is straightforward to get these strategies; the offering agent gets a hundred percent of all the issues. Then, for t = n − 1, the offering agent must solve the optimization problem of Equation 4 by substituting t = n − 1 in it. For agent a (b), this is done by APPROX-TRADEOFFA (APPROX-TRADEOFFB). These two functions are nothing but Ibarra and Kim's approximation method for solving the 0-1 knapsack problem. These two functions take ε as a parameter and use Ibarra and Kim's approximation method to return a package that approximately
maximizes Equation 3 (equivalently, minimizes Equation 4). Thus, the relative error for these two functions is the same as that for Ibarra and Kim's method (i.e., it is at most ε, where ε is given in Equation 5). Assume that a is the offering agent for t = n − 1. Agent a must offer a package that gives b a cumulative utility equal to what it would get from its own approximate equilibrium offer for the next time period (i.e., U^b([ā^(t+1), b̄^(t+1)], t + 1), where [ā^(t+1), b̄^(t+1)] is the approximate equilibrium package for the next time period). Recall that for the last time period, the offering agent gets a hundred percent of all the issues. Since a is the offering agent for t = n − 1 and the agents use the alternating-offers protocol, it is b's turn at t = n. Thus U^b([ā^(t+1), b̄^(t+1)], t + 1) is equal to b's cumulative utility from receiving a hundred percent of all the issues. Using this utility as the capacity of the knapsack, a uses APPROX-TRADEOFFA and obtains the approximate equilibrium package for t = n − 1. On the other hand, if b is the offering agent at t = n − 1, it uses APPROX-TRADEOFFB to obtain the approximate equilibrium package. In the same way, for t < n − 1, the offering agent (say a) uses APPROX-TRADEOFFA to find an approximate equilibrium package that gives b a utility of U^b([ā^(t+1), b̄^(t+1)], t + 1). By reasoning backwards, we obtain the offer for time period t = 1. If a (b) is the offering agent, it proposes the offer APPROX-TRADEOFFA(P, UB(1), 1, ε) (APPROX-TRADEOFFB(P, UA(1), 1, ε)). The receiving agent accepts the offer. This is because the relative error in its cumulative utility from the offer is at most ε. An agreement therefore takes place in the first time period.

THEOREM 4. The time complexity of finding the approximate equilibrium offer for the first time period is O(nm/ε²).

PROOF. The time complexity of APPROX-TRADEOFFA and APPROX-TRADEOFFB is the same as the time complexity of the Ibarra and Kim
method [10] (i.e., O(m/ε²)). In order to find the equilibrium offer for the first time period using backward reasoning, APPROX-TRADEOFFA (or APPROX-TRADEOFFB) is invoked n times. Hence the time complexity of finding the approximate equilibrium offer for the first time period is O(nm/ε²).

This analysis was done in a complete information setting. However, an extension of this analysis to an incomplete information setting, where the agents have probability distributions over some uncertain parameter, is straightforward as long as the negotiation is done offline, i.e., the agents know their preference for each individual issue before negotiation begins. For instance, consider the case where different agents have different discount factors, and each agent is uncertain about its opponent's discount factor although it knows its own. This uncertainty is modelled with a probability distribution over the possible values for the opponent's discount factor, with this distribution being common knowledge to the agents. All our analysis for the complete information setting still holds for this incomplete information setting, except that an agent must now use the given probability distribution to find its opponent's expected utility instead of its actual utility. Hence, instead of analyzing an incomplete information setting for offline negotiation, we focus on online multi-issue negotiation.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 955

4. ONLINE MULTI-ISSUE NEGOTIATION

We now consider a more general and, arguably, more realistic version of multi-issue negotiation, where the agents are uncertain about the issues they will have to negotiate about in future. In this setting, when negotiating an issue, the agents know that they will negotiate more issues in the future, but they are uncertain about the details of those issues. As before, let m be the total number of issues that are up for negotiation. The
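Theorem 4 hinges on each invocation of APPROX-TRADEOFFA/B being an application of Ibarra and Kim's knapsack approximation [10]. As an illustration of the profit-scaling idea behind such an approximation scheme, here is a minimal sketch; the function name `knapsack_fptas` and its interface are our own illustrative choices, not the paper's, and the dynamic program shown is a simple profit-scaling variant rather than the asymptotically tighter scheme of [10].

```python
def knapsack_fptas(profits, weights, capacity, eps):
    """(1 - eps)-approximation for the 0-1 knapsack problem via profit scaling.

    Profits are rounded down to multiples of K = eps * max(profits) / m,
    then an exact DP over scaled-profit values records, for each achievable
    scaled profit q, the minimum weight min_w[q] needed to reach it.
    Returns (total true profit, sorted list of chosen item indices).
    """
    m = len(profits)
    K = eps * max(profits) / m          # scaling factor
    scaled = [int(p // K) for p in profits]
    total = sum(scaled)
    INF = float("inf")
    min_w = [INF] * (total + 1)          # min_w[q]: least weight giving scaled profit q
    min_w[0] = 0.0
    sel = [()] + [None] * total          # sel[q]: items (as a tuple) achieving min_w[q]
    for i, (sp, w) in enumerate(zip(scaled, weights)):
        for q in range(total, sp - 1, -1):   # descending q keeps the DP 0-1
            if min_w[q - sp] + w < min_w[q]:
                min_w[q] = min_w[q - sp] + w
                sel[q] = sel[q - sp] + (i,)
    # best feasible scaled profit, then report the true (unscaled) profit
    best_q = max(q for q in range(total + 1) if min_w[q] <= capacity)
    chosen = sorted(sel[best_q])
    return sum(profits[i] for i in chosen), chosen
```

In the negotiation setting, APPROX-TRADEOFF would use the responder's required cumulative utility (e.g., UB(t)) as the knapsack capacity, with the per-issue utilities playing the role of profits and weights.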
agents have a probability distribution over the possible values of k_c^a and k_c^b. For 1 ≤ c ≤ m, let k_c^a and k_c^b be uniformly distributed over [0, 1]. This probability distribution, n, and m are common knowledge to the agents. However, the agents come to know k_c^a and k_c^b only just before negotiation for issue c begins. Once the agents reach an agreement on issue c, it cannot be re-negotiated. This scenario requires online negotiation, since the agents must make decisions about an issue prior to having the information about the future issues [3].

We first give a brief introduction to online problems and then draw an analogy between the online knapsack problem and the negotiation problem we want to solve. In an online problem, data is given to the algorithm incrementally, one unit at a time [3]. The online algorithm must also produce the output incrementally: after seeing i units of input it must output the i-th unit of output. Since decisions about the output are made with incomplete knowledge of the entire input, an online algorithm often cannot produce an optimal solution. Such an algorithm can only approximate the performance of the optimal algorithm that sees all the inputs in advance. In the design of online algorithms, the main aim is to achieve a performance that is close to that of the optimal offline algorithm on each input. An online algorithm is said to be stochastic if it makes decisions on the basis of the probability distributions for the future inputs. The performance of stochastic online algorithms is assessed in terms of the expected difference between the optimum and the approximate solution (denoted E[z*_m − z_m], where z*_m is the optimal and z_m the approximate solution). Note that the subscript m indicates that this difference depends on m. We now describe the protocol for online negotiation and then obtain an approximate equilibrium. The protocol is defined as follows. Let agent a
denote the first mover (since we focus on the package deal procedure, the first mover is the same for all the m issues).

Step 1. For c = 1, the agents are given the values of k_c^a and k_c^b. These two values are now common knowledge.⁵

Step 2. The agents settle issue c using the alternating offers protocol described in Section 2. Negotiation for issue c must end within n time periods of the start of negotiation on that issue. If an agreement is not reached within this time, then negotiation fails on this and on all remaining issues.

Step 3. The above steps are repeated for issues c = 2, 3, ..., m. Negotiation for issue c (2 ≤ c ≤ m) begins in the time period following an agreement on issue c − 1.

⁵ We assume common knowledge because it simplifies exposition. However, if k_c^a (k_c^b) is a's (b's) private knowledge, then our analysis still holds, but an agent must then find its opponent's expected utility on the basis of the p.d.f.s for k_c^a and k_c^b.

Thus, during time period t, the problem for the offering agent (say a) is to find the optimal offer for issue c on the basis of k_c^a and k_c^b and the probability distributions for k_i^a and k_i^b (c < i ≤ m). In order to solve this online negotiation problem, we draw an analogy with the online knapsack problem. Before doing so, however, we give a brief overview of the online knapsack problem. In the online knapsack problem, there are m items. The agent must examine the m items one at a time, in the order they are input (i.e., as their profit and size coefficients become known). Hence, the algorithm must decide whether or not to include each item in the knapsack as soon as its weight and profit become known, without knowledge of the items still to be seen, except for their total number. Note that since the agents have a probability distribution over the weights and profits of the future items, this is a case of the stochastic online knapsack problem. Our online
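The three protocol steps above can be sketched as a simple driver loop. This is only an illustration: `run_online_protocol` and `settle` are our own names (not from the paper), and `settle` stands in for whatever per-issue bargaining the agents perform within the n-period deadline.

```python
import random

def run_online_protocol(m, n, settle, seed=0):
    """Skeleton of the online protocol (Steps 1-3).

    Issue c's valuations k_c^a, k_c^b are drawn uniformly on [0, 1] and
    revealed only when issue c comes up (Step 1); settle(c, ka, kb)
    resolves the issue within at most n periods and returns
    (allocation, periods_used) (Step 2); the next issue starts in the
    period after the agreement (Step 3).
    Returns a list of (issue, agreement_time, allocation) triples.
    """
    rng = random.Random(seed)
    t = 1  # current time period
    outcome = []
    for c in range(1, m + 1):
        ka, kb = rng.random(), rng.random()        # Step 1: revealed only now
        allocation, periods_used = settle(c, ka, kb)  # Step 2
        if periods_used > n:
            raise RuntimeError("deadline exceeded: negotiation fails "
                               "on this and all remaining issues")
        t += periods_used - 1       # time period in which agreement is reached
        outcome.append((c, t, allocation))
        t += 1                      # Step 3: next issue starts the following period
    return outcome
```

Under the equilibrium derived in this section each issue is settled in the very first period of its own negotiation, so issue c is agreed at t = c.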
negotiation problem is analogous to the online knapsack problem. This analogy is described in detail in the proof of Theorem 5. Again, researchers in algorithms have developed time-efficient approximate solutions to the online knapsack problem [16]. Hence we use this solution and show that it forms an equilibrium. The following theorem characterizes an approximate equilibrium for online negotiation. Here the agents have to choose a strategy without knowing the features of the future issues. Because of this information incompleteness, the relevant solution concept is that of a Bayes' Nash equilibrium (BNE), in which each agent plays a best response to the other agents with respect to their expected utilities [18]. However, finding an agent's BNE strategy is analogous to solving the online 0-1 knapsack problem, and the online knapsack problem can only be solved approximately [16]. Hence the relevant solution concept is an approximate BNE (see [26] for example). The following theorem finds this equilibrium using the procedures ONLINE-TRADEOFFA and ONLINE-TRADEOFFB, which are defined in the proof of the theorem. For a given time period, we let z_m denote the approximately optimal solution generated by ONLINE-TRADEOFFA (or ONLINE-TRADEOFFB) and z*_m the actual optimum.

THEOREM 5. For the package deal procedure, the following strategies form an approximate Bayes' Nash equilibrium. The equilibrium strategy for t = n is:

A(n) = { OFFER [1, 0] if it is a's turn; ACCEPT if it is b's turn }
B(n) = { OFFER [0, 1] if it is b's turn; ACCEPT if it is a's turn }

For all preceding time periods t < n, if [x^t, y^t] denotes the offer made at time t, then the equilibrium strategies are defined as follows:

A(t) = { OFFER ONLINE-TRADEOFFA(P, UB(t), t) if it is a's turn; ACCEPT if U^a([x^t, y^t], t) ≥ UA(t), else REJECT, if it is b's turn }
B(t) = { OFFER ONLINE-TRADEOFFB(P, UA(t), t) if it is b's turn; ACCEPT if U^b([x^t, y^t], t) ≥ UB(t), else REJECT, if it is a's turn }

where UA(t) = U^a
([ā^{t+1}, b̄^{t+1}], t + 1) and UB(t) = U^b([ā^{t+1}, b̄^{t+1}], t + 1). An agreement on issue c takes place at t = c. For a given time period, the expected difference between the solution generated by the optimal strategy and that generated by the approximate strategy is E[z*_m − z_m] = O(√m).

PROOF. As in Theorem 1, we find the equilibrium offer for time period t = 1 using backward induction. Let a be the offering agent at t = 1 for all the m issues. Consider the last time period t = n (recall from Step 2 of the online protocol that n is the deadline for completing negotiation on the first issue). Since the first mover is the same for all the issues, and the agents make offers alternately, the offering agent for t = n is also the same for all the m issues. Assume that b is the offering agent for t = n. As in Section 3, the offering agent for t = n gets a hundred percent of all the m issues. Since b is the offering agent for t = n, his utility for this time period is:

UB(n) = k_1^b δ_1^{n−1} + (1/2) Σ_{i=2}^{m} δ_i^{i(n−1)}   (6)

Recall that k_i^a and k_i^b (for c < i ≤ m) are not known to the agents. Hence, the agents can only find their expected utilities from the future issues on the basis of the probability distribution functions for k_i^a and k_i^b. However, during the negotiation for issue c the agents know k_c^a and k_c^b (see Step 1 of the online protocol). Hence, a computes UB(n) as follows. Agent b's utility from issue c = 1 is k_1^b δ_1^{n−1} (which is the first term of Equation 6). Then, on the basis of the probability distribution functions for k_i^a and k_i^b, agent a computes b's expected utility from each future issue i as δ_i^{i(n−1)}/2 (since k_i^a and k_i^b are uniformly distributed on [0, 1]). Thus, b's expected cumulative utility from these m − c issues is (1/2) Σ_{i=2}^{m} δ_i^{i(n−1)}
(which is the second term of Equation 6). Now, in order to decide what to offer for issue c = 1, the offering agent for t = n − 1 (i.e., agent a) must solve the following online knapsack problem:

maximize Σ_{i=1}^{m} k_i^a (1 − b̄_i^t) δ_i^{n−1}   (7)
such that Σ_{i=1}^{m} k_i^b b̄_i^t ≥ UB(n)
b̄_i^t = 0 or 1 for 1 ≤ i ≤ m

The only variables in the above maximization problem are the b̄_i^t. Now, maximizing Σ_{i=1}^{m} k_i^a (1 − b̄_i^t) δ_i^{n−1} is the same as minimizing Σ_{i=1}^{m} k_i^a b̄_i^t, since the δ_i^{n−1} and k_i^a are constants. Thus, we write Equation 7 as:

minimize Σ_{i=1}^{m} k_i^a b̄_i^t   (8)
such that Σ_{i=1}^{m} k_i^b b̄_i^t ≥ UB(n)
b̄_i^t = 0 or 1 for 1 ≤ i ≤ m

The above optimization problem is analogous to the online 0-1 knapsack problem. An algorithm to solve the online knapsack problem has already been proposed in [16]. This algorithm is called the fixed-choice online algorithm. It has time complexity linear in the number of items (m) in the knapsack problem. We use this to solve our online negotiation problem. Thus, our ONLINE-TRADEOFFA algorithm is simply the fixed-choice online algorithm and therefore has the same time complexity as the latter. This algorithm takes the values of k_i^a and k_i^b one at a time and generates an approximate solution to the above knapsack problem. The expected difference between the optimum and the approximate solution is E[z*_m − z_m] = O(√m) [16] (see [16] for the detailed fixed-choice online algorithm and a proof of this bound). The fixed-choice online algorithm of [16] is a generalization of the basic greedy algorithm for the offline knapsack problem; the idea behind it is as follows. A threshold value is determined on the basis of the information regarding the weights and profits for the 0-1 knapsack problem. The method then includes in the knapsack all items whose profit density (profit
density of an item is its profit per unit weight) exceeds the threshold, until either the knapsack is filled or all m items have been considered.

In more detail, the algorithm ONLINE-TRADEOFFA works as follows. It first gets the values of k_1^a and k_1^b and finds b̄_c^t. Since we have a 0-1 knapsack problem, b̄_c^t can be either zero or one. Now, if b̄_c^t = 1 for t = n, then b̄_c^t must be one for 1 ≤ t < n (i.e., a must offer b̄_c^t = 1 at t = 1). If b̄_c^t = 1 for t = n, but a offers b̄_c^t = 0 at t = 1, then agent b gets less utility than it expects from a's offer and rejects the proposal. Thus, if b̄_c^t = 1 for t = n, then the optimal strategy for a is to offer b̄_c^t = 1 at t = 1. Agent b accepts the offer. Thus, negotiation on the first issue starts at t = 1 and an agreement on it is also reached at t = 1. In the next time period (i.e., t = 2), negotiation proceeds to the next issue. The deadline for the second issue is n time periods from the start of negotiation on that issue. For c = 2, the algorithm ONLINE-TRADEOFFA is given the values of k_2^a and k_2^b and finds b̄_c^t as described above. Agent a offers b̄_c at t = 2 and b accepts. Thus, negotiation on the second issue starts at t = 2 and an agreement on it is also reached at t = 2. This process repeats for the remaining issues c = 3, ..., m. Thus, each issue is agreed upon in the same time period in which its negotiation starts. As negotiation for the next issue starts in the following time period (see Step 3 of the online protocol), agreement on issue i occurs at time t = i. On the other hand, if b is the offering agent at t = 1, he uses the algorithm ONLINE-TRADEOFFB, which is defined analogously. Thus, irrespective of who makes the first move, all m issues are settled by time t = m.

THEOREM 6. The time complexity of finding the approximate equilibrium offers of Theorem 5 is linear in m.
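The linear bound of Theorem 6 reflects the fact that the threshold rule makes one decision per item in a single pass. A minimal sketch of such a rule follows; note that in the fixed-choice algorithm of [16] the threshold is derived from the profit and weight distributions, whereas here it is simply passed in as a parameter, so `fixed_choice_online_knapsack` and `threshold` are our own illustrative names.

```python
def fixed_choice_online_knapsack(items, capacity, threshold):
    """Threshold-style heuristic for the online 0-1 knapsack problem,
    in the spirit of the fixed-choice algorithm of [16].

    `items` is an iterable of (profit, weight) pairs revealed one at a
    time; an item is packed iff its profit density (profit per unit
    weight) meets the threshold and it still fits.  One irrevocable
    decision per item, so the running time is linear in the number of
    items.  Returns (total packed profit, list of packed item indices).
    """
    taken, value, used = [], 0.0, 0.0
    for i, (profit, weight) in enumerate(items):
        if weight > 0 and profit / weight >= threshold and used + weight <= capacity:
            taken.append(i)      # irrevocably include the item
            value += profit
            used += weight
    return value, taken
```

In the negotiation setting, ONLINE-TRADEOFFA makes the analogous irrevocable per-issue decision as each (k_c^a, k_c^b) pair is revealed.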
PROOF. The time complexity of ONLINE-TRADEOFFA and ONLINE-TRADEOFFB is the same as the time complexity of the fixed-choice online algorithm of [16]. Since the latter has time complexity linear in m, the time complexity of ONLINE-TRADEOFFA and ONLINE-TRADEOFFB is also linear in m.

It is worth noting that, for the 0-1 knapsack problem, the lower bound on the expected difference between the optimum and the solution found by any online algorithm is Ω(1) [16]. Thus, it follows that this lower bound also holds for our negotiation problem.

5. RELATED WORK

Work on multi-issue negotiation can be divided into two main types: that for indivisible issues and that for divisible issues. We first describe the existing work for the case of divisible issues. Since Schelling [24] first noted that the outcome of negotiation depends on the choice of negotiation procedure, much research effort has been devoted to the study of different procedures for negotiating multiple issues. However, most of this work has focussed on the sequential procedure [7, 2]. For this procedure, a key issue is the negotiation agenda. Here the term agenda refers to the order in which the issues are negotiated. The agenda is important because each agent's cumulative utility depends on it; if we change the agenda then these utilities change. Hence, the agents must decide what agenda they will use. Now, the agenda can be decided before negotiating the issues (such an agenda is called exogenous) or it may be decided during the process of negotiation (such an agenda is called endogenous). For instance, Fershtman [7] analyzes sequential negotiation with an exogenous agenda. A number of researchers have also studied negotiations with an endogenous agenda [2]. In contrast to the above work, which mainly deals with sequential negotiation, [6] studies the equilibrium for the package deal procedure. However, all the above mentioned work differs from ours in that we focus on indivisible issues
while others focus on the case where each issue is divisible. Specifically, no previous work has determined an approximate equilibrium for multi-issue negotiation or for online negotiation.

Existing work for the case of indivisible issues has mostly dealt with the problem of allocating tasks (that cannot be partitioned) to a group of agents. The problem of task allocation has previously been studied in the context of coalitions involving more than two agents. For example, [25] analyze the problem for the case where the agents act so as to maximize the benefit of the system as a whole. In contrast, our focus is on two agents, both of which are self-interested and want to maximize their individual utilities. On the other hand, [22] focus on the use of contracts for task allocation to multiple self-interested agents, but that work concerns finding ways of decommitting contracts (after the initial allocation has been done) so as to improve an agent's utility. In contrast, ours focuses on negotiation regarding who will carry out which task. Finally, online and approximate mechanisms have been studied in the context of auctions [14, 9], but not for bilateral negotiations (which are the focus of our work).

6. CONCLUSIONS

This paper has studied bilateral multi-issue negotiation between self-interested autonomous agents with time constraints. The issues are indivisible and different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. Specifically, we first showed that finding the equilibrium offers is an NP-hard problem, even in a complete information setting. We then presented approximately optimal negotiation strategies and showed that they form an equilibrium. These strategies have polynomial time complexity. We also analysed the difference
between the true optimum and the approximate optimum. Finally, we extended the analysis to online negotiation, where the issues become available at different time points and the agents are uncertain about the features of these issues. Specifically, we showed that an approximate equilibrium exists for online negotiation and analysed the approximation error. These approximate strategies also have polynomial time complexity.

There are several interesting directions for future work. First, for online negotiation, we assumed that the constants k_c^a and k_c^b are both uniformly distributed. It will be interesting to analyze the case where k_c^a and k_c^b have other, possibly different, probability distributions. Apart from this, we treated the number of issues as common knowledge to the agents. In future, it will be interesting to treat the number of issues as uncertain.

7. REFERENCES
[1] G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, and M. Protasi. Complexity and approximation: Combinatorial optimization problems and their approximability properties. Springer, 2003.
[2] M. Bac and H. Raff. Issue-by-issue negotiations: the role of information and time preference. Games and Economic Behavior, 13:125-134, 1996.
[3] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.
[4] S. J. Brams. Fair division: from cake cutting to dispute resolution. Cambridge University Press, 1996.
[5] L. A. Busch and I. J. Horstman. Bargaining frictions, bargaining procedures and implied costs in multiple-issue bargaining. Economica, 64:669-680, 1997.
[6] S. S. Fatima, M. Wooldridge, and N. R. Jennings. Multi-issue negotiation with deadlines. Journal of Artificial Intelligence Research, 27:381-417, 2006.
[7] C. Fershtman. The importance of the agenda in bargaining. Games and Economic Behavior, 2:224-238, 1990.
[8] F.
Glover. A multiphase dual algorithm for the zero-one integer programming problem. Operations Research, 13:879-919, 1965.
[9] M. T. Hajiaghayi, R. Kleinberg, and D. C. Parkes. Adaptive limited-supply online auctions. In ACM Conference on Electronic Commerce (ACM EC-04), pages 71-80, New York, 2004.
[10] O. H. Ibarra and C. E. Kim. Fast approximation algorithms for the knapsack and sum of subset problems. Journal of the ACM, 22:463-468, 1975.
[11] R. Inderst. Multi-issue bargaining with endogenous agenda. Games and Economic Behavior, 30:64-82, 2000.
[12] R. Keeney and H. Raiffa. Decisions with Multiple Objectives: Preferences and Value Trade-offs. John Wiley, New York, 1976.
[13] S. Kraus. Strategic negotiation in multi-agent environments. The MIT Press, Cambridge, Massachusetts, 2001.
[14] D. Lehmann, L. I. O'Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM, 49(5):577-602, 2002.
[15] A. Lomuscio, M. Wooldridge, and N. R. Jennings. A classification scheme for negotiation in electronic commerce. International Journal of Group Decision and Negotiation, 12(1):31-56, 2003.
[16] A. Marchetti-Spaccamela and C. Vercellis. Stochastic online knapsack problems. Mathematical Programming, 68:73-104, 1995.
[17] S. Martello and P. Toth. Knapsack problems: Algorithms and computer implementations. John Wiley and Sons, 1990.
[18] M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, 1994.
[19] H. Raiffa. The Art and Science of Negotiation. Harvard University Press, Cambridge, USA, 1982.
[20] J. S. Rosenschein and G. Zlotkin. Rules of Encounter. MIT Press, 1994.
[21] A. Rubinstein. Perfect equilibrium in a bargaining model. Econometrica, 50(1):97-109, January 1982.
[22] T. Sandholm and V. Lesser. Levelled commitment contracts and strategic breach. Games and Economic Behavior: Special Issue on AI and Economics, 35:212-270, 2001.
[23] T. Sandholm and N.
Vulkan.\nBargaining with deadlines.\nIn AAAI-99, pages 44-51, Orlando, FL, 1999.\n[24] T. C. Schelling.\nAn essay on bargaining.\nAmerican Economic Review, 46:281-306, 1956.\n[25] O. Shehory and S. Kraus.\nMethods for task allocation via agent coalition formation.\nArtificial Intelligence Journal, 101(1-2):165-200, 1998.\n[26] S. Singh, V. Soni, and M. Wellman.\nComputing approximate Bayes Nash equilibria in tree games of incomplete information.\nIn Proceedings of the ACM Conference on Electronic Commerce ACM-EC, pages 81-90, New York, May 2004.\n[27] I. Stahl.\nBargaining Theory.\nEconomics Research Institute, Stockholm School of Economics, Stockholm, 1972.\n958 The Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)","lvl-3":"Approximate and Online Multi-Issue Negotiation\nABSTRACT\nThis paper analyzes bilateral multi-issue negotiation between selfinterested autonomous agents.\nThe agents have time constraints in the form of both deadlines and discount factors.\nThere are m> 1 issues for negotiation where each issue is viewed as a pie of size one.\nThe issues are \"indivisible\" (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent).\nHere different agents value different issues differently.\nThus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities.\nFor such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties.\nThen, we analyse their time complexity and show that finding the equilibrium offers is an NP-hard problem, even in a complete information setting.\nIn order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium.\nWe also analyze the relative error (i.e., the difference 
between the true optimum and the approximate).\nThe time complexity of the approximate equilibrium strategies is O (nm \/ ~ 2) where n is the negotiation deadline and ~ the relative error.\nFinally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues.\nSpecifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O (\u221a m).\nThese approximate strategies also have polynomial time complexity.\n1.\nINTRODUCTION\nNegotiation is a key form of interaction in multiagent systems.\nIt is a process in which disputing agents decide how to divide the gains from cooperation.\nSince this decision is made jointly by the agents themselves [20, 19, 13, 15], each party can only obtain what the other is prepared to allow them.\nNow, the simplest form of negotiation involves two agents and a single issue.\nFor example, consider a scenario in which a buyer and a seller negotiate on the price of a good.\nTo begin, the two agents are likely to differ on the price at which they believe the trade should take place, but through a process of joint decision-making they either arrive at a price that is mutually acceptable or they fail to reach an agreement.\nSince agents are likely to begin with different prices, one or both of them must move toward the other, through a series of offers and counter offers, in order to obtain a mutually acceptable outcome.\nHowever, before the agents can actually perform such negotiations, they must decide the rules for making offers and counter offers.\nThat is, they must set the negotiation protocol [20].\nOn the basis of this protocol, each agent chooses its strategy (i.e., what offers it should make during the course of negotiation).\nGiven this context, this work focuses on competitive scenarios with self-interested 
agents.\nFor such cases, each participant defines its strategy so as to maximise its individual utility.\nHowever, in most bilateral negotiations, the parties involved need to settle more than one issue.\nFor this case, the issues may be divisible or indivisible [4].\nFor the former, the problem for the agents is to decide how to split each issue between themselves [21].\nFor the latter, the individual issues cannot be divided.\nAn issue, in its entirety, must be allocated to either of the two agents.\nSince the agents value different issues differently, they must come to terms about who will take which issue.\nTo date, most of the existing work on multi-issue negotiation has focussed on the former case [7, 2, 5, 23, 11, 6].\nHowever, in many real-world settings, the issues are indivisible.\nHence, our focus here is on negotiation for indivisible issues.\nSuch negotiations are very common in multiagent systems.\nFor example, consider the case of task allocation between two agents.\nThere is a set of tasks to be carried out and different agents have different preferences for the tasks.\nThe tasks cannot be partitioned; a task must be carried out by one agent.\nThe problem then is for the agents to negotiate about who will carry out which task.\nA key problem in the study of multi-issue negotiation is to determine the equilibrium strategies.\nAn equally important problem, especially in the context of software agents, is to find the time complexity of computing the equilibrium offers.\nHowever, such computational issues have so far received little attention.\nAs we will show, this is mainly due to the fact that existing work (describe in Section 5) has mostly focused on negotiation for divisible issues\nand finding the equilibrium for this case is computationally easier than that for the case of indivisible issues.\nOur primary objective is, therefore, to answer the computational questions for the latter case for the types of situations that are commonly faced by 
agents in real-world contexts.\nThus, we consider negotiations in which there is incomplete information and time constraints.\nIncompleteness of information on the part of negotiators is a common feature of most practical negotiations.\nAlso, agents typically have time constraints in the form of both deadlines and discount factors.\nDeadlines are an essential element since negotiation cannot go on indefinitely, rather it must end within a reasonable time limit.\nLikewise, discount factors are essential since the goods may be perishable or their value may decline due to inflation.\nMoreover, the strategic behaviour of agents with deadlines and discount factors differs from those without (see [21] for single issue bargaining without deadlines and [23, 13] for bargaining with deadlines and discount factors in the context of divisible issues).\nGiven this, we consider indivisible issues and first analyze the strategic behaviour of agents to obtain the equilibrium strategies for the case where all the issues for negotiation are known a priori to both agents.\nFor this case, we show that the problem of finding the equilibrium offers is NP-hard, even in a complete information setting.\nThen, in order to overcome the problem of time complexity, we present strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium.\nWe also analyze the relative error (i.e., the difference between the true optimum and the approximate).\nThe time complexity of the approximate equilibrium strategies is 0 (nm \/ ~ 2) where n is the negotiation deadline and ~ the relative error.\nFinally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues.\nSpecifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is 0 (- \/ 
m).\nThese approximate strategies also have polynomial time complexity.\nIn so doing, our contribution lies in analyzing the computational complexity of the above multi-issue negotiation problem, and finding the approximate and online equilibria.\nNo previous work has determined these equilibria.\nSince software agents have limited computational resources, our results are especially relevant to such resource bounded agents.\nThe remainder of the paper is organised as follows.\nWe begin by giving a brief overview of single-issue negotiation in Section 2.\nIn Section 3, we obtain the equilibrium for multi-issue negotiation and show that finding equilibrium offers is an NP-hard problem.\nWe then present an approximate equilibrium and evaluate its approximation error.\nSection 4 analyzes online multi-issue negotiation.\nSection 5 discusses the related literature and Section 6 concludes.\n2.\nSINGLE-ISSUE NEGOTIATION\n952 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n3.\nMULTI-ISSUE NEGOTIATION\n3.1 The package deal procedure\n3.2 Equilibrium strategies\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 953\n3.3 Approximate equilibrium\n954 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 955\n4.\nONLINE MULTI-ISSUE NEGOTIATION\n956 The Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.\nRELATED WORK\nWork on multi-issue negotiation can be divided into two main types: that for indivisible issues and that for divisible issues.\nWe first describe the existing work for the case of divisible issues.\nSince Schelling [24] first noted that the outcome of negotiation depends on the choice of negotiation procedure, much research effort has been devoted to the study of different procedures for negotiating multiple issues.\nHowever, most of this work has focussed on the sequential procedure [7, 2].\nFor this procedure, a key issue is the negotiation agenda.\nHere the term agenda refers to the order in which the issues are negotiated.\nThe agenda is important because each agent's cumulative utility depends on the agenda; if we change the agenda then these utilities change.\nHence, the agents must decide what agenda they will use.\nNow, the agenda can be decided before negotiating the issues (such an agenda is called exogenous) or it may be decided during the process of negotiation (such an agenda is called endogenous).\nFor instance, Fershtman [7] analyze sequential negotiation with exogenous agenda.\nA number of researchers have also studied negotiations with an endogenous agenda [2].\nIn contrast to the above work that mainly deals with sequential negotiation, [6] studies the equilibrium for the package deal procedure.\nHowever, all the above mentioned work differs from ours in that we focus on indivisible issues while others focus on the case\nThe Sixth Intl. 
where each issue is divisible. Specifically, no previous work has determined an approximate equilibrium for multi-issue negotiation or for online negotiation.

Existing work for the case of indivisible issues has mostly dealt with the allocation of tasks (that cannot be partitioned) to a group of agents. The problem of task allocation has been previously studied in the context of coalitions involving more than two agents. For example, [25] analyzes the problem for the case where the agents act so as to maximize the benefit of the system as a whole. In contrast, our focus is on two agents, both of which are self-interested and want to maximize their individual utilities. On the other hand, [22] focuses on the use of contracts for task allocation to multiple self-interested agents, but that work concerns finding ways of decommitting contracts (after the initial allocation has been done) so as to improve an agent's utility. In contrast, our focus is on negotiation regarding who will carry out which task. Finally, online and approximate mechanisms have been studied in the context of auctions [14, 9] but not for bilateral negotiations (which is the focus of our work).

6. CONCLUSIONS

This paper has studied bilateral multi-issue negotiation between self-interested autonomous agents with time constraints. The issues are indivisible and different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. Specifically, we first showed that finding the equilibrium offers is an NP-hard problem even in a complete information setting. We then presented approximately optimal negotiation strategies and showed that they form an equilibrium. These strategies have polynomial time complexity. We also analysed the difference between the true optimum and the approximate
optimum. Finally, we extended the analysis to online negotiation where the issues become available at different time points and the agents are uncertain about the features of these issues. Specifically, we showed that an approximate equilibrium exists for online negotiation and analysed the approximation error. These approximate strategies also have polynomial time complexity.

There are several interesting directions for future work. First, for online negotiation, we assumed that the constants kac and kbc are both uniformly distributed. It will be interesting to analyze the case where kac and kbc have other, possibly different, probability distributions. Apart from this, we treated the number of issues as being common knowledge to the agents. In future, it will be interesting to treat the number of issues as uncertain.
Approximate and Online Multi-Issue Negotiation

ABSTRACT

This paper analyzes bilateral multi-issue negotiation between self-interested autonomous agents. The agents have time constraints in the form of both deadlines and discount factors. There are m > 1 issues for negotiation where each issue is viewed as a pie of size one. The issues are "indivisible" (i.e., individual issues cannot be split between the parties; each issue must be allocated in its entirety to either agent). Here different agents value different issues differently. Thus, the problem is for the agents to decide how to allocate the issues between themselves so as to maximize their individual utilities. For such negotiations, we first obtain the equilibrium strategies for the case where the issues for negotiation are known a priori to the parties. Then, we analyse their time complexity and show that finding the equilibrium offers is an
NP-hard problem, even in a complete information setting. In order to overcome this computational complexity, we then present negotiation strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²) where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity.

1. INTRODUCTION

Negotiation is a key form of interaction in multiagent systems. It is a process in which disputing agents decide how to divide the gains from cooperation. Since this decision is made jointly by the agents themselves [20, 19, 13, 15], each party can only obtain what the other is prepared to allow them. Now, the simplest form of negotiation involves two agents and a single issue. For example, consider a scenario in which a buyer and a seller negotiate on the price of a good. To begin, the two agents are likely to differ on the price at which they believe the trade should take place, but through a process of joint decision-making they either arrive at a price that is mutually acceptable or they fail to reach an agreement. Since agents are likely to begin with different prices, one or both of them must move toward the other, through a series of offers and counter offers, in order to obtain a mutually acceptable outcome. However, before the agents can actually perform such negotiations, they must decide the rules for making
offers and counter offers. That is, they must set the negotiation protocol [20]. On the basis of this protocol, each agent chooses its strategy (i.e., what offers it should make during the course of negotiation). Given this context, this work focuses on competitive scenarios with self-interested agents. For such cases, each participant defines its strategy so as to maximise its individual utility.

However, in most bilateral negotiations, the parties involved need to settle more than one issue. For this case, the issues may be divisible or indivisible [4]. For the former, the problem for the agents is to decide how to split each issue between themselves [21]. For the latter, the individual issues cannot be divided. An issue, in its entirety, must be allocated to either of the two agents. Since the agents value different issues differently, they must come to terms about who will take which issue. To date, most of the existing work on multi-issue negotiation has focused on the former case [7, 2, 5, 23, 11, 6]. However, in many real-world settings, the issues are indivisible. Hence, our focus here is on negotiation for indivisible issues. Such negotiations are very common in multiagent systems. For example, consider the case of task allocation between two agents. There is a set of tasks to be carried out and different agents have different preferences for the tasks. The tasks cannot be partitioned; a task must be carried out by one agent. The problem then is for the agents to negotiate about who will carry out which task.

A key problem in the study of multi-issue negotiation is to determine the equilibrium strategies. An equally important problem, especially in the context of software agents, is to find the time complexity of computing the equilibrium offers. However, such computational issues have so far received little attention. As we will show, this is mainly due to the fact that existing work (described in Section 5) has mostly focused on
negotiation for divisible issues, and finding the equilibrium for this case is computationally easier than that for the case of indivisible issues. Our primary objective is, therefore, to answer the computational questions for the latter case for the types of situations that are commonly faced by agents in real-world contexts.

Thus, we consider negotiations in which there is incomplete information and time constraints. Incompleteness of information on the part of negotiators is a common feature of most practical negotiations. Also, agents typically have time constraints in the form of both deadlines and discount factors. Deadlines are an essential element since negotiation cannot go on indefinitely; rather, it must end within a reasonable time limit. Likewise, discount factors are essential since the goods may be perishable or their value may decline due to inflation. Moreover, the strategic behaviour of agents with deadlines and discount factors differs from that of agents without them (see [21] for single-issue bargaining without deadlines and [23, 13] for bargaining with deadlines and discount factors in the context of divisible issues). Given this, we consider indivisible issues and first analyze the strategic behaviour of agents to obtain the equilibrium strategies for the case where all the issues for negotiation are known a priori to both agents. For this case, we show that the problem of finding the equilibrium offers is NP-hard, even in a complete information setting. Then, in order to overcome the problem of time complexity, we present strategies that are approximately optimal but computationally efficient, and show that they form an equilibrium. We also analyze the relative error (i.e., the difference between the true optimum and the approximate). The time complexity of the approximate equilibrium strategies is O(nm/ε²) where n is the negotiation deadline and ε the relative error. Finally, we extend the analysis to online negotiation where different
issues become available at different time points and the agents are uncertain about their valuations for these issues. Specifically, we show that an approximate equilibrium exists for online negotiation and show that the expected difference between the optimum and the approximate is O(√m). These approximate strategies also have polynomial time complexity. In so doing, our contribution lies in analyzing the computational complexity of the above multi-issue negotiation problem, and finding the approximate and online equilibria. No previous work has determined these equilibria. Since software agents have limited computational resources, our results are especially relevant to such resource-bounded agents.

The remainder of the paper is organised as follows. We begin by giving a brief overview of single-issue negotiation in Section 2. In Section 3, we obtain the equilibrium for multi-issue negotiation and show that finding equilibrium offers is an NP-hard problem. We then present an approximate equilibrium and evaluate its approximation error. Section 4 analyzes online multi-issue negotiation. Section 5 discusses the related literature and Section 6 concludes.

2. SINGLE-ISSUE NEGOTIATION

We adopt the single-issue model of [27] because this is a model where, during negotiation, the parties are allowed to make offers from a set of discrete offers. Since our focus is on indivisible issues (i.e., parties are allowed to make one of two possible offers: zero or one), our scenario fits in well with [27]. Hence we use this basic single-issue model and extend it to multiple issues. Before doing so, we give an overview of this model and its equilibrium strategies.

There are two strategic agents: a and b.
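To preview the equilibrium logic of this section computationally, the following minimal sketch runs backward induction on the model about to be described. It is illustrative only; the assumptions it makes (offers drawn from {0, 1}, agent a proposing in odd periods, ties broken in favour of immediate agreement, zero conflict utility) are stated in the comments.

```python
def spe_outcome(n: int, delta: float = 0.9):
    """Backward induction for the discrete alternating-offers game.

    Assumptions (matching the model of Section 2): the pie is indivisible,
    so every offer hands the whole pie to one agent; agent a proposes in
    odd time periods and b in even ones; agreement at period t is worth
    delta**(t - 1); the conflict utility is zero; an indifferent agent
    settles immediately.  Returns (agreement_period, share_a, share_b).
    """
    assert n >= 1 and 0 < delta < 1
    # If period n were reached, its proposer would keep the whole pie:
    # the responder accepts, since rejection yields the conflict utility 0.
    u = {'a': 0.0, 'b': 0.0}
    u['a' if n % 2 == 1 else 'b'] = delta ** (n - 1)
    # Walk backwards from period n - 1 down to period 1.
    for t in range(n - 1, 0, -1):
        proposer = 'a' if t % 2 == 1 else 'b'
        responder = 'b' if proposer == 'a' else 'a'
        if u[responder] > 0:
            # The responder rejects anything short of the whole pie, so the
            # (indifferent) proposer concedes it; agreement is immediate.
            u = {proposer: 0.0, responder: delta ** (t - 1)}
        else:
            u = {proposer: delta ** (t - 1), responder: 0.0}
    share_a = 1 if u['a'] > 0 else 0
    return 1, share_a, 1 - share_a
```

On this account the first mover keeps the pie exactly when the deadline n is odd, and agreement always occurs in the first time period, which matches the equilibrium derived in the remainder of this section.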
Each agent has time constraints in the form of deadlines and discount factors. The two agents negotiate over a single indivisible issue (i). This issue is a 'pie' of size 1 and the agents want to determine who gets the pie. There is a deadline (i.e., a number of rounds by which negotiation must end). Let n ∈ ℕ⁺ denote this deadline. The agents use an alternating offers protocol (like that of Rubinstein [18]), which proceeds through a series of time periods. One of the agents, say a, starts negotiation in the first time period (i.e., t = 1) by making an offer (xi = 0 or 1) to b. Agent b can either accept or reject the offer. If it accepts, negotiation ends in an agreement with a getting xi and b getting yi = 1 − xi. Otherwise, negotiation proceeds to the next time period, in which agent b makes a counter-offer. This process of making offers continues until one of the agents either accepts an offer or quits negotiation (resulting in a conflict). Thus, there are three possible actions an agent can take during any time period: accept the last offer, make a new counter-offer, or quit the negotiation.

An essential feature of negotiations involving alternating offers is that the agents' utilities decrease with time [21]. Specifically, the decrease occurs at each step of offer and counteroffer. This decrease is represented with a discount factor δi, 0 < δi < 1, the same for both agents.¹ Let $[x^t_i, y^t_i]$ denote the offer made at time period t, where $x^t_i$ and $y^t_i$ denote the share for agent a and b respectively. Then, for a given pie, the set of possible offers is

$$\{[x^t_i, y^t_i] \mid x^t_i \in \{0, 1\},\; y^t_i = 1 - x^t_i\}$$

The conflict utility (i.e., the utility received in the event that no deal is struck) is zero for both agents. For the above setting, the agents reason as follows in order to determine what to offer at t = 1. We let A(1) (B(1)) denote a's (b's) equilibrium offer for the first time period. Let agent a denote the first mover (i.e., at t = 1, a proposes to b who should get the pie). To begin,
consider the case where the deadline for both agents is n = 1. If b accepts, the division occurs as agreed; if not, neither agent gets anything (since n = 1 is the deadline). Here, a is in a powerful position and is able to propose to keep 100 percent of the pie and give nothing to b.² Since the deadline is n = 1, b accepts this offer and agreement takes place in the first time period.

¹ Having a different discount factor for different agents only makes the presentation more involved without leading to any changes in the analysis of the strategic behaviour of the agents or the time complexity of finding the equilibrium offers. Hence we have a single discount factor for both agents.

² It is possible that b may reject such a proposal. However, irrespective of whether b accepts or rejects the proposal, it gets zero utility (because the deadline is n = 1). Thus, we assume that b accepts a's offer.

Now, consider the case where the deadline is n = 2. In order to decide what to offer in the first round, a looks ahead to t = 2 and reasons backwards. Agent a reasons that if negotiation proceeds to the second round, b will take 100 percent of the pie by offering [0, 1] and leave nothing for a. Thus, in the first time period, if a offers b anything less than the whole pie, b will reject the offer. Hence, during the first time period, agent a offers [0, 1]. Agent b accepts this and an agreement occurs in the first time period.

In general, if the deadline is n, negotiation proceeds as follows. As before, agent a decides what to offer in the first round by looking ahead as far as t = n and then reasoning backwards. Agent a's
offer for t = 1 depends on who the offering agent is for the last time period. This, in turn, depends on whether n is odd or even. Since a makes an offer at t = 1 and the agents use the alternating offers protocol, the offering agent for the last time period is b if n is even and it is a if n is odd. Thus, depending on whether n is odd or even, a makes the following offer at t = 1:

$$A(1) = [1, 0] \text{ if } n \text{ is odd;} \quad A(1) = [0, 1] \text{ if } n \text{ is even.}$$

Agent b accepts this offer and negotiation ends in the first time period. Note that the equilibrium outcome depends on who makes the first move. Since we have two agents and either of them could move first, we get two possible equilibrium outcomes. On the basis of the above equilibrium for single-issue negotiation with complete information, we first obtain the equilibrium for multiple issues and then show that computing these offers is a hard problem. We then present a time-efficient approximate equilibrium.

3. MULTI-ISSUE NEGOTIATION

We first analyse the complete information setting. This section forms the base which we extend to the case of information uncertainty in Section 4. Here a and b negotiate over m > 1 indivisible issues. These issues are m distinct pies and the agents want to determine how to distribute the pies between themselves. Let S = {1, 2, ..., m} denote the set of m pies. As before, each pie is of size 1. Let δc, where 0 < δc < 1, denote the discount factor for issue c (1 ≤ c ≤ m). For agent a, let kac denote the weight for issue c; if kac > kac+1, then agent a values issue c more than issue c + 1. Likewise for agent b. In other words, the m issues are perfect substitutes (i.e., all that matters to an agent is its total utility for all the m issues and not that for any subset of them). In all the settings we study, the issues will be perfect substitutes. To begin, each agent has complete information about all negotiation parameters (i.e., n, m, kac, kbc, and δc for 1 ≤ c ≤ m).

3.3 Approximate equilibrium

An approximation scheme is fully polynomial if, for any ε > 0, it finds a solution satisfying Equation 5 in time polynomially bounded by the size of the problem
(for the 0-1 knapsack problem, the problem size is equal to the number of items) and by 1/ε [1]. For the 0-1 knapsack problem, Ibarra and Kim [10] presented a fully polynomial approximation method. This method is based on dynamic programming. It is a parametric method that takes ε as a parameter and, for any ε > 0, finds a heuristic solution z with relative error at most ε, such that the time and space complexity grow polynomially with the number of items m and 1/ε. More specifically, the space and time complexity are both O(m/ε²) and hence polynomial in m and 1/ε (see [10] for the detailed approximation algorithm and proof of time and space complexity). Since the Ibarra and Kim method is fully polynomial, we use it to solve our negotiation problem. This is done as follows.

For agent a, let APRX-TRADEOFFA(P, UB(t), t, ε) denote a procedure that returns an ε-approximate solution to Equation 4 using the Ibarra and Kim method. The procedure APRX-TRADEOFFB(P, UA(t), t, ε) for agent b is analogous. For 1 ≤ c ≤ m, the approximate equilibrium offer for issue c at time t is denoted $[\bar{a}^t_c, \bar{b}^t_c]$, where $\bar{a}^t_c$ and $\bar{b}^t_c$ denote the shares for agent a and b respectively. We denote the equilibrium package at time t as $[\bar{a}^t, \bar{b}^t]$, where $\bar{a}^t \in B^m$ ($\bar{b}^t \in B^m$) is an m-element vector that denotes a's (b's) share for each of the m issues. Also, as before, for 1 ≤ c ≤ m, δc is the discount factor for issue c.
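To make the scaling idea concrete, here is a minimal sketch of a profit-scaling FPTAS for the 0-1 knapsack problem. This is the textbook scaling-plus-dynamic-programming variant rather than Ibarra and Kim's exact procedure, and the function name and interface are illustrative only.

```python
def fptas_knapsack(profits, weights, capacity, eps):
    """Profit-scaling FPTAS sketch for 0-1 knapsack.

    Truncates each profit to multiples of K = eps * pmax / m and runs an
    exact dynamic program over the scaled profits; by the standard
    analysis the selected subset has profit at least (1 - eps) times the
    optimum, and the number of DP states is polynomial in m and 1/eps.
    Assumes positive profits and eps > 0.
    """
    m = len(profits)
    usable = [i for i in range(m) if weights[i] <= capacity]
    if not usable:
        return []
    pmax = max(profits[i] for i in usable)
    K = eps * pmax / m                        # scaling factor
    scaled = {i: max(1, int(profits[i] / K)) for i in usable}
    # dp maps a scaled-profit total to (min weight achieving it, items used)
    dp = {0: (0, [])}
    for i in usable:
        for p, (w, items) in list(dp.items()):   # snapshot => 0-1 semantics
            p2, w2 = p + scaled[i], w + weights[i]
            if w2 <= capacity and (p2 not in dp or dp[p2][0] > w2):
                dp[p2] = (w2, items + [i])
    return dp[max(dp)][1]
```

For example, with profits (60, 100, 120), weights (10, 20, 30), capacity 50 and eps = 0.1, the sketch returns the subset {1, 2}, matching the exact optimum of 220.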
Note that for 1 ≤ t ≤ n, $\bar{a}^t_c + \bar{b}^t_c = 1$ (i.e., the sum of the agents' shares (at time t) for each pie is one). Finally, for time period t (for 1 ≤ t ≤ n) we let $\bar{A}(t)$ (respectively $\bar{B}(t)$) denote the approximate equilibrium strategy for agent a (respectively b). The following theorem uses this notation and characterizes an approximate equilibrium for multi-issue negotiation.

THEOREM 3. For the package deal procedure, the following strategies form an ε-approximate Nash equilibrium. The equilibrium strategy for t = n is:

For all preceding time periods t < n ... UB(n)

The above optimization problem is analogous to the online 0-1 knapsack problem. An algorithm to solve the online knapsack problem has already been proposed in [16]. This algorithm is called the fixed-choice online algorithm. It has time complexity linear in the number of items (m) in the knapsack problem. We use this to solve our online negotiation problem. Thus, our ONLINE-TRADEOFFA algorithm is nothing but the fixed-choice online algorithm and therefore has the same time complexity as the latter. This algorithm takes the values of kai and kbi one at a time and generates an approximate solution to the above knapsack problem. The expected difference between the optimum and the approximate solution is $E[z^*_m - z_m] = O(\sqrt{m})$ (see [16] for the algorithm and a proof for $E[z^*_m - z_m] = O(\sqrt{m})$).

The fixed-choice online algorithm of [16] is a generalization of the basic greedy algorithm for the offline knapsack problem; the idea behind it is as follows. A threshold value is determined on the basis of the information regarding weights and profits for the 0-1 knapsack problem. The method then includes in the knapsack all items whose profit density (the profit density of an item is its profit per unit weight) exceeds the threshold, until either the knapsack is filled or all the m items have been considered.

In more detail, the algorithm ONLINE-TRADEOFFA works as follows. It first gets the values of ka1
and kb1 and finds $\bar{b}^t_c$. Since we have a 0-1 knapsack problem, $\bar{b}^t_c$ can be either zero or one. Now, if $\bar{b}^t_c = 1$ for t = n, then $\bar{b}^t_c$ must be one for 1 ≤ t < n.

... t′ such that $v_i(t) < v_i(t')$; E otherwise. We now develop an analytical technique for performing the value function and probability function propagation phases.³

³ Similarly for the probability propagation phase.

5.1 Value Function Propagation Phase

Suppose that we are performing a value function propagation phase, during which the value functions are propagated from the sink methods to the source methods. At any time during this phase we encounter the situation shown in Figure 2, where the opportunity cost functions $[V_{j_n}]_{n=0}^{N}$ of methods $[m_{j_n}]_{n=0}^{N}$ are known, and the opportunity cost $V_{i_0}$ of method $m_{i_0}$ is to be derived. Let $p_{i_0}$ be the probability distribution function of method $m_{i_0}$'s execution duration, and $r_{i_0}$ the immediate reward for starting and completing the execution of $m_{i_0}$ inside a time interval $[\tau, \tau']$ such that $\langle m_{i_0}, \tau, \tau' \rangle \in C$. The function $V_{i_0}$ is then derived from $r_{i_0}$ and the opportunity costs $V_{j_n,i_0}(t)$, n = 1, ..., N, of future methods. Formally:

$$V_{i_0}(t) = \begin{cases} \int_0^{\tau' - t} p_{i_0}(t')\big(r_{i_0} + \sum_{n=0}^{N} V_{j_n,i_0}(t + t')\big)\,dt' & \text{if } \exists \langle m_{i_0}, \tau, \tau' \rangle \in C \text{ such that } t \in [\tau, \tau'] \\ 0 & \text{otherwise} \end{cases} \quad (1)$$

Note that, for $t \in [\tau, \tau']$, if $h(t) := r_{i_0} + \sum_{n=0}^{N} V_{j_n,i_0}(\tau' - t)$, then $V_{i_0}$ is a convolution of p and h: $v_{i_0}(t) = (p_{i_0} * h)(\tau' - t)$. Assume for now that $V_{j_n,i_0}$ represents a full opportunity cost, postponing the discussion of different techniques for splitting the opportunity cost $V_{j_0}$ into $[V_{j_0,i_k}]_{k=0}^{K}$ until Section 6. We now show how to derive $V_{j_0,i_0}$ (the derivation of $V_{j_n,i_0}$ for n ≠ 0 follows the same scheme).

Figure 2: Fragment of an MDP of agent $A_k$. Probability functions propagate forward (left to right) whereas
Let $\bar{V}_{j_0,i_0}(t)$ be the opportunity cost of starting the execution of method $m_{j_0}$ at time $t$ given that method $m_{i_0}$ has been completed. It is derived by multiplying $V_{j_0}$ by the probability functions of all methods other than $m_{i_0}$ that enable $m_{j_0}$. Formally:

$$\bar{V}_{j_0,i_0}(t) = V_{j_0}(t) \cdot \prod_{k=1}^{K} P_{i_k}(t)$$

where, similarly to [4] and [5], we ignore the dependency of $[P_{i_k}]_{k=1}^{K}$. Observe that $\bar{V}_{j_0,i_0}$ does not have to be monotonically decreasing, i.e., delaying the execution of method $m_{i_0}$ can sometimes be profitable. Therefore the opportunity cost $V_{j_0,i_0}(t)$ of enabling method $m_{j_0}$ at time $t$ must be greater than or equal to $\bar{V}_{j_0,i_0}$. Furthermore, $V_{j_0,i_0}$ should be non-increasing. Formally:

$$V_{j_0,i_0} = \min_{f \in F} f \quad (2)$$

where $F = \{ f \mid f \geq \bar{V}_{j_0,i_0} \text{ and } f(t) \geq f(t')\ \forall\, t < t' \}$.

Suppose now that there exist $t_0$ and a constant $c$ such that for all $t > t_0$ and all $k = 0, \ldots, K$ we have $\prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}}(t) > c$. Then:

$$\sum_{k=0}^{K} V_{i_k}(t_0) > \sum_{k=0}^{K} \int_{0}^{\Delta - t_0} p_{i_k}(t')\, V_{j_0}(t_0 + t') \cdot c\, dt'$$

because $P_{i_k}$ is non-decreasing. Now, suppose there exists $t_1 \in (t_0, \Delta]$ such that $\sum_{k=0}^{K} \int_{0}^{t_1 - t_0} p_{i_k}(t')\, dt' > \frac{V_{j_0}(t_0)}{c \cdot V_{j_0}(t_1)}$. Since decreasing the upper limit of an integral over a positive function also decreases the integral, we have:

$$\sum_{k=0}^{K} V_{i_k}(t_0) > c \sum_{k=0}^{K} \int_{t_0}^{t_1} p_{i_k}(t' - t_0)\, V_{j_0}(t')\, dt'$$

and, since $V_{j_0}(t')$ is non-increasing:

$$\sum_{k=0}^{K} V_{i_k}(t_0) > c \cdot V_{j_0}(t_1) \sum_{k=0}^{K} \int_{t_0}^{t_1} p_{i_k}(t' - t_0)\, dt' = c \cdot V_{j_0}(t_1) \sum_{k=0}^{K} \int_{0}^{t_1 - t_0} p_{i_k}(t')\, dt' > c \cdot V_{j_0}(t_1) \cdot \frac{V_{j_0}(t_0)}{c \cdot V_{j_0}(t_1)} = V_{j_0}(t_0) \quad (5)$$

Consequently, the opportunity cost $\sum_{k=0}^{K} V_{i_k}(t_0)$ of starting the execution of methods $[m_{i_k}]_{k=0}^{K}$ at time $t_0 \in [0, \Delta]$ is greater than the opportunity cost $V_{j_0}(t_0)$, which proves the theorem.

Figure 4 shows that the overestimation of opportunity cost is easily observable in practice.
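On a discrete grid, the minimization in equation (2) has a simple closed form: the smallest non-increasing function that dominates $\bar{V}_{j_0,i_0}$ is its suffix maximum. A short sketch with made-up values:

```python
def nonincreasing_envelope(v_bar):
    """Smallest f with f >= v_bar pointwise and f(t) >= f(t') for all t < t',
    computed as a right-to-left running maximum (suffix maximum)."""
    env = list(v_bar)
    for t in range(len(env) - 2, -1, -1):
        env[t] = max(env[t], env[t + 1])
    return env

# v_bar need not be monotone: delaying the enabling method can be profitable.
v_bar = [2.0, 1.0, 4.0, 3.0, 0.5]
env = nonincreasing_envelope(v_bar)  # -> [4.0, 4.0, 4.0, 3.0, 0.5]
```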
To remedy the problem of opportunity cost overestimation, we propose three alternative heuristics that split the opportunity cost functions:

• Heuristic $H^{1,0}$: Only one method, $m_{i_k}$, gets the full expected reward for enabling method $m_{j_0}$, i.e., $\bar{V}_{j_0,i_{k'}}(t) = 0$ for $k' \in \{0, \ldots, K\} \setminus \{k\}$ and $\bar{V}_{j_0,i_k}(t) = (V_{j_0} \cdot \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}})(t)$.

• Heuristic $H^{1/2,1/2}$: Each method $[m_{i_k}]_{k=0}^{K}$ gets the full opportunity cost for enabling method $m_{j_0}$ divided by the number $K$ of methods enabling $m_{j_0}$, i.e., $\bar{V}_{j_0,i_k}(t) = \frac{1}{K} (V_{j_0} \cdot \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}})(t)$ for $k \in \{0, \ldots, K\}$.

• Heuristic $\hat{H}^{1,1}$: This is a normalized version of the $H^{1,1}$ heuristic in that each method $[m_{i_k}]_{k=0}^{K}$ initially gets the full opportunity cost for enabling the method $m_{j_0}$. To avoid opportunity cost overestimation, we normalize the split functions when their sum exceeds the opportunity cost function to be split. Formally:

$$\bar{V}_{j_0,i_k}(t) = \begin{cases} \bar{V}^{H^{1,1}}_{j_0,i_k}(t) & \text{if } \sum_{k'=0}^{K} \bar{V}^{H^{1,1}}_{j_0,i_{k'}}(t) < V_{j_0}(t) \\ V_{j_0}(t)\, \frac{\bar{V}^{H^{1,1}}_{j_0,i_k}(t)}{\sum_{k'=0}^{K} \bar{V}^{H^{1,1}}_{j_0,i_{k'}}(t)} & \text{otherwise} \end{cases}$$

where $\bar{V}^{H^{1,1}}_{j_0,i_k}(t) = (V_{j_0} \cdot \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}})(t)$.

For the new heuristics, we now prove that:

THEOREM 3. Heuristics $H^{1,0}$, $H^{1/2,1/2}$ and $\hat{H}^{1,1}$ do not overestimate the opportunity cost.

PROOF. When heuristic $H^{1,0}$ is used to split the opportunity cost function $V_{j_0}$, only one method (e.g.
$m_{i_k}$) gets the opportunity cost for enabling method $m_{j_0}$. Thus:

$$\sum_{k'=0}^{K} V_{i_{k'}}(t) = \int_{0}^{\Delta - t} p_{i_k}(t')\, V_{j_0,i_k}(t + t')\, dt' \quad (6)$$

and, since $V_{j_0}$ is non-increasing,

$$\leq \int_{0}^{\Delta - t} p_{i_k}(t')\, V_{j_0}(t + t') \cdot \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}}(t + t')\, dt' \leq \int_{0}^{\Delta - t} p_{i_k}(t')\, V_{j_0}(t + t')\, dt' \leq V_{j_0}(t)$$

The last inequality is also a consequence of the fact that $V_{j_0}$ is non-increasing. For heuristic $H^{1/2,1/2}$ we similarly have:

$$\sum_{k=0}^{K} V_{i_k}(t) \leq \sum_{k=0}^{K} \int_{0}^{\Delta - t} p_{i_k}(t')\, \frac{1}{K} V_{j_0}(t + t') \prod_{k' \in \{0,\ldots,K\},\, k' \neq k} P_{i_{k'}}(t + t')\, dt' \leq \frac{1}{K} \sum_{k=0}^{K} \int_{0}^{\Delta - t} p_{i_k}(t')\, V_{j_0}(t + t')\, dt' \leq \frac{1}{K} \cdot K \cdot V_{j_0}(t) = V_{j_0}(t)$$

For heuristic $\hat{H}^{1,1}$, the opportunity cost function $V_{j_0}$ is by definition split in such a manner that $\sum_{k=0}^{K} V_{i_k}(t) \leq V_{j_0}(t)$. Consequently, we have proved that our new heuristics $H^{1,0}$, $H^{1/2,1/2}$ and $\hat{H}^{1,1}$ avoid the overestimation of the opportunity cost.

The reason why we have introduced all three new heuristics is the following: since $H^{1,1}$ overestimates the opportunity cost, one has to choose which method $m_{i_k}$ will receive the reward for enabling the method $m_{j_0}$, which is exactly what the heuristic $H^{1,0}$ does. However, heuristic $H^{1,0}$ leaves $K - 1$ methods that precede the method $m_{j_0}$ without any reward, which leads to starvation. Starvation can be avoided if opportunity cost functions are split using heuristic $H^{1/2,1/2}$, which provides reward to all enabling methods. However, the sum of the split opportunity cost functions for the $H^{1/2,1/2}$ heuristic can be smaller than the non-zero split opportunity cost function for the $H^{1,0}$ heuristic, which is clearly undesirable. Such a situation (Figure 4, heuristic $H^{1,0}$) occurs because the mean $\frac{f+g}{2}$ of two functions $f, g$ is not smaller than either $f$ or $g$ only if $f = g$. This is why we have proposed the $\hat{H}^{1,1}$ heuristic, which by definition avoids the overestimation, underestimation and starvation problems.
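The splits above can be sketched on a discrete grid as follows; the grid, the probability functions `P[k]` and the reward values are illustrative assumptions, not the paper's implementation:

```python
def prod_except(P, k, t):
    """Product of all enabling methods' probability functions except method k."""
    out = 1.0
    for k2 in range(len(P)):
        if k2 != k:
            out *= P[k2][t]
    return out

def split_h10(v_j0, P, winner):
    """H^{1,0}: only `winner` receives V_j0 times the other methods' probabilities."""
    splits = [[0.0] * len(v_j0) for _ in P]
    for t, v in enumerate(v_j0):
        splits[winner][t] = v * prod_except(P, winner, t)
    return splits

def split_hhat11(v_j0, P):
    """hat-H^{1,1}: every method gets the full split, renormalized at any t where
    the splits would sum to more than V_j0(t)."""
    splits = [[v_j0[t] * prod_except(P, k, t) for t in range(len(v_j0))]
              for k in range(len(P))]
    for t in range(len(v_j0)):
        s = sum(sp[t] for sp in splits)
        if s > v_j0[t] > 0:
            for sp in splits:
                sp[t] *= v_j0[t] / s
    return splits

# Two enabling methods that are both certain to finish: the raw H^{1,1} splits
# would sum to twice V_j0, so hat-H^{1,1} scales them back down.
v_j0 = [10.0, 8.0]
P = [[1.0, 1.0], [1.0, 1.0]]
hat_splits = split_hhat11(v_j0, P)  # -> [[5.0, 4.0], [5.0, 4.0]]
```

By construction the renormalized splits never sum above $V_{j_0}$, which is exactly the property Theorem 3 establishes.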
7. EXPERIMENTAL EVALUATION

Since the VFP algorithm that we introduced provides two orthogonal improvements over the OC-DEC-MDP algorithm, our experimental evaluation consisted of two parts: in Part 1, we empirically tested the quality of the solutions that a locally optimal solver (either OC-DEC-MDP or VFP) finds given that it uses different opportunity cost function splitting heuristics, and in Part 2, we compared the runtimes of the VFP and OC-DEC-MDP algorithms for a variety of mission plan configurations.

Part 1: We first ran the VFP algorithm on a generic mission plan configuration from Figure 3, where only methods $m_{j_0}$, $m_{i_1}$, $m_{i_2}$ and $m_0$ were present. The time windows of all methods were set to 400; the duration $p_{j_0}$ of method $m_{j_0}$ was uniform, i.e., $p_{j_0}(t) = \frac{1}{400}$, and the durations $p_{i_1}, p_{i_2}$ of methods $m_{i_1}, m_{i_2}$ were normal distributions, i.e., $p_{i_1} = N(\mu = 250, \sigma = 20)$ and $p_{i_2} = N(\mu = 200, \sigma = 100)$. We assumed that only method $m_{j_0}$ provided reward, i.e., $r_{j_0} = 10$ was the reward for finishing the execution of method $m_{j_0}$ before time $t = 400$.
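The experiments above require the duration distributions on a discrete time grid. A minimal sketch of a truncated, renormalized normal pmf follows; the discretization scheme is our assumption, not necessarily the authors':

```python
import math

def normal_pmf(mu, sigma, horizon):
    """Gaussian density sampled at integer ticks 0..horizon, renormalized to sum to 1."""
    w = [math.exp(-0.5 * ((t - mu) / sigma) ** 2) for t in range(horizon + 1)]
    z = sum(w)
    return [x / z for x in w]

p_j0 = [1.0 / 400.0] * 400          # uniform duration of method m_j0
p_i1 = normal_pmf(250, 20, 400)     # N(mu=250, sigma=20)
p_i2 = normal_pmf(200, 100, 400)    # N(mu=200, sigma=100)
```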
We show our results in Figure 4, where the x-axis of each graph represents time and the y-axis represents the opportunity cost. The first graph confirms that when the opportunity cost function $V_{j_0}$ was split into opportunity cost functions $V_{i_1}$ and $V_{i_2}$ using the $H^{1,1}$ heuristic, the function $V_{i_1} + V_{i_2}$ was not always below the $V_{j_0}$ function. In particular, $V_{i_1}(280) + V_{i_2}(280)$ exceeded $V_{j_0}(280)$ by 69%. When heuristics $H^{1,0}$, $H^{1/2,1/2}$ and $\hat{H}^{1,1}$ were used (graphs 2, 3 and 4), the function $V_{i_1} + V_{i_2}$ was always below $V_{j_0}$.

We then shifted our attention to the civilian rescue domain introduced in Figure 1, for which we sampled all action execution durations from the normal distribution $N(\mu = 5, \sigma = 2)$. To obtain a baseline for heuristic performance, we implemented a globally optimal solver that found the true expected total reward for this domain (Figure 6a). We then compared this reward with the expected total reward found by a locally optimal solver guided by each of the discussed heuristics. Figure 6a, which plots on the y-axis the expected total reward of a policy, complements our previous results: the $H^{1,1}$ heuristic overestimated the expected total reward by 280%, whereas the other heuristics were able to guide the locally optimal solver close to the true expected total reward.

Part 2: We then chose $H^{1,1}$ to split the opportunity cost functions and conducted a series of experiments aimed at testing the scalability of VFP for various mission plan configurations, using the performance of the OC-DEC-MDP algorithm as a benchmark. We began the VFP scalability tests with the configuration from Figure 5a, associated with the civilian rescue domain, for which the method execution durations were extended to normal distributions $N(\mu = 30, \sigma = 5)$ and the deadline was extended to $\Delta = 200$.

Figure 5: Mission plan configurations: (a) civilian rescue domain, (b) chain of n methods, (c) tree of n methods with branching factor = 3 and (d) square mesh of n methods.

Figure 6: VFP performance in the civilian rescue domain.

We decided to test the runtime of the VFP algorithm at three different levels of accuracy, i.e., different approximation parameters $\epsilon_P$ and $\epsilon_V$ were chosen such that the cumulative error of the solution found by VFP stayed within 1%, 5% and 10% of the solution found by the OC-DEC-MDP algorithm. We then ran both algorithms for a total of 100 policy improvement iterations. Figure 6b shows the performance of the VFP algorithm in the civilian rescue domain (the y-axis shows the runtime in milliseconds). As we can see, for this small domain VFP runs 15% faster than OC-DEC-MDP when computing the policy with an error of less than 1%. For comparison, the globally optimal solver did not terminate within the first three hours of its runtime, which shows the strength of opportunistic solvers like OC-DEC-MDP.

We next decided to test how VFP performs in a more difficult domain, i.e., with methods forming a long chain (Figure 5b). We tested chains of 10, 20 and 30 methods, increasing at the same time the method time windows to 350, 700 and 1050 to ensure that later methods can be reached. We show the results in Figure 7a, where we vary on the x-axis the number of methods and plot on the y-axis the algorithm runtime (notice the logarithmic scale). As we observe, scaling up the domain reveals the high performance of VFP: within 1% error, it runs up to 6 times faster than OC-DEC-MDP.

We then tested how VFP scales up when the methods are arranged into a tree (Figure 5c). In particular, we considered trees with a branching factor of 3 and depths of 2, 3 and 4, increasing at the same time the time horizon from 200 to 300, and then to 400. We show the results in Figure 7b. Although the speedups are smaller than in the case of a chain, the VFP algorithm still runs up to 4 times faster than OC-DEC-MDP when computing the policy with an error of less than 1%.
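The chain, tree and mesh configurations of Figure 5 are fully determined by their precedence pairs. A sketch that generates them as sets of (predecessor, successor) edges, with our own integer labeling of methods:

```python
def chain(n):
    """Chain of n methods: m_0 -> m_1 -> ... -> m_{n-1}."""
    return {(i, i + 1) for i in range(n - 1)}

def tree(depth, branching=3):
    """Complete tree: each method enables `branching` children, to the given depth."""
    edges, frontier, next_id = set(), [0], 1
    for _ in range(depth):
        new_frontier = []
        for parent in frontier:
            for _ in range(branching):
                edges.add((parent, next_id))
                new_frontier.append(next_id)
                next_id += 1
        frontier = new_frontier
    return edges

def mesh(n):
    """n x n mesh: every method in column j enables every method in column j+1."""
    return {((i, j), (k, j + 1))
            for j in range(n - 1) for i in range(n) for k in range(n)}
```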
We finally tested how VFP handles domains with methods arranged into an $n \times n$ mesh, i.e., $C_{\prec} = \{ \langle m_{i,j}, m_{k,j+1} \rangle \}$ for $i = 1, \ldots, n$; $k = 1, \ldots, n$; $j = 1, \ldots, n-1$. In particular, we consider meshes of $3 \times 3$, $4 \times 4$, and $5 \times 5$ methods. For such configurations we have to greatly increase the time horizon, since the probabilities of enabling the final methods by a particular time decrease exponentially. We therefore vary the time horizons from 3000 to 4000, and then to 5000. We show the results in Figure 7c where, especially for the larger meshes, the VFP algorithm runs up to one order of magnitude faster than OC-DEC-MDP while finding a policy that is within less than 1% of the policy found by OC-DEC-MDP.

Figure 4: Visualization of heuristics for opportunity cost splitting.

Figure 7: Scalability experiments for OC-DEC-MDP and VFP for different network configurations.

8. CONCLUSIONS

Although the Decentralized Markov Decision Process (DEC-MDP) has been very popular for modeling agent-coordination problems, it is very difficult to solve, especially for real-world domains. In this paper, we improved a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to large DEC-MDPs. Our heuristic solution method, called Value Function Propagation (VFP), provided two orthogonal improvements over OC-DEC-MDP: (i) it sped up OC-DEC-MDP by an order of magnitude by maintaining and manipulating a value function for each method rather than a separate value for each pair of method and time interval, and (ii) it achieved better solution qualities than OC-DEC-MDP because it corrected OC-DEC-MDP's overestimation of the opportunity cost.

In terms of related work, we have extensively discussed the OC-DEC-MDP algorithm [4]. Furthermore, as
discussed in Section 4, there are globally optimal algorithms for solving DEC-MDPs with temporal constraints [1] [11]. Unfortunately, they fail at present to scale up to large domains. Beyond OC-DEC-MDP, there are other locally optimal algorithms for DEC-MDPs and DEC-POMDPs [8] [12] [13], yet they have traditionally not dealt with uncertain execution times and temporal constraints. Finally, value function techniques have been studied in the context of single-agent MDPs [7] [9]. However, similarly to [6], they fail to address the lack of global state knowledge, which is a fundamental issue in decentralized planning.

Acknowledgments
This material is based upon work supported by the DARPA/IPTO COORDINATORS program and the Air Force Research Laboratory under Contract No. FA875005C0030. The authors also want to thank Sven Koenig and the anonymous reviewers for their valuable comments.

9. REFERENCES
[1] R. Becker, V. Lesser, and S. Zilberstein. Decentralized MDPs with Event-Driven Interactions. In AAMAS, pages 302-309, 2004.
[2] R. Becker, S. Zilberstein, V. Lesser, and C. V. Goldman. Transition-Independent Decentralized Markov Decision Processes. In AAMAS, pages 41-48, 2003.
[3] D. S. Bernstein, S. Zilberstein, and N. Immerman. The complexity of decentralized control of Markov decision processes. In UAI, pages 32-37, 2000.
[4] A. Beynier and A. Mouaddib. A polynomial algorithm for decentralized Markov decision processes with temporal constraints. In AAMAS, pages 963-969, 2005.
[5] A. Beynier and A. Mouaddib. An iterative algorithm for solving constrained decentralized Markov decision processes. In AAAI, pages 1089-1094, 2006.
[6] C. Boutilier. Sequential optimality and coordination in multiagent systems. In IJCAI, pages 478-485, 1999.
[7] J. Boyan and M. Littman. Exact solutions to time-dependent MDPs. In NIPS, pages 1026-1032, 2000.
[8] C. Goldman and S.
Zilberstein.\nOptimizing information exchange in cooperative multi-agent systems, 2003.\n[9] L. Li and M. Littman.\nLazy approximation for solving continuous finite-horizon MDPs.\nIn AAAI, pages 1175-1180, 2005.\n[10] Y. Liu and S. Koenig.\nRisk-sensitive planning with one-switch utility functions: Value iteration.\nIn AAAI, pages 993-999, 2005.\n[11] D. Musliner, E. Durfee, J. Wu, D. Dolgov, R. Goldman, and M. Boddy.\nCoordinated plan management using multiagent MDPs.\nIn AAAI Spring Symposium, 2006.\n[12] R. Nair, M. Tambe, M. Yokoo, D. Pynadath, and S. Marsella.\nTaming decentralized POMDPs: Towards efficient policy computation for multiagent settings.\nIn IJCAI, pages 705-711, 2003.\n[13] R. Nair, P. Varakantham, M. Tambe, and M. Yokoo.\nNetworked distributed POMDPs: A synergy of distributed constraint optimization and POMDPs.\nIn IJCAI, pages 1758-1760, 2005.\nThe Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 837","lvl-3":"On Opportunistic Techniques for Solving Decentralized Markov Decision Processes with Temporal Constraints\nABSTRACT\nDecentralized Markov Decision Processes (DEC-MDPs) are a popular model of agent-coordination problems in domains with uncertainty and time constraints but very difficult to solve.\nIn this paper, we improve a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to larger DEC-MDPs.\nOur heuristic solution method, called Value Function Propagation (VFP), combines two orthogonal improvements of OC-DEC-MDP.\nFirst, it speeds up OC-DECMDP by an order of magnitude by maintaining and manipulating a value function for each state (as a function of time) rather than a separate value for each pair of sate and time interval.\nFurthermore, it achieves better solution qualities than OC-DEC-MDP because, as our analytical results show, it does not overestimate the expected total reward like OC-DEC - MDP.\nWe test both improvements 
independently in a crisis-management domain as well as for other types of domains.\nOur experimental results demonstrate a significant speedup of VFP over OC-DEC-MDP as well as higher solution qualities in a variety of situations.\n1.\nINTRODUCTION\nThe development of algorithms for effective coordination of multiple agents acting as a team in uncertain and time critical domains has recently become a very active research field with potential applications ranging from coordination of agents during a hostage rescue mission [11] to the coordination of Autonomous Mars Explo\nration Rovers [2].\nBecause of the uncertain and dynamic characteristics of such domains, decision-theoretic models have received a lot of attention in recent years, mainly thanks to their expressiveness and the ability to reason about the utility of actions over time.\nKey decision-theoretic models that have become popular in the literature include Decentralized Markov Decision Processes (DECMDPs) and Decentralized, Partially Observable Markov Decision Processes (DEC-POMDPs).\nUnfortunately, solving these models optimally has been proven to be NEXP-complete [3], hence more tractable subclasses of these models have been the subject of intensive research.\nIn particular, Network Distributed POMDP [13] which assume that not all the agents interact with each other, Transition Independent DEC-MDP [2] which assume that transition function is decomposable into local transition functions or DEC-MDP with Event Driven Interactions [1] which assume that interactions between agents happen at fixed time points constitute good examples of such subclasses.\nAlthough globally optimal algorithms for these subclasses have demonstrated promising results, domains on which these algorithms run are still small and time horizons are limited to only a few time ticks.\nTo remedy that, locally optimal algorithms have been proposed [12] [4] [5].\nIn particular, Opportunity Cost DEC-MDP [4] [5], referred to as OC-DEC-MDP, is 
particularly notable, as it has been shown to scale up to domains with hundreds of tasks and double digit time horizons.\nAdditionally, OC-DEC-MDP is unique in its ability to address both temporal constraints and uncertain method execution durations, which is an important factor for real-world domains.\nOC-DEC-MDP is able to scale up to such domains mainly because instead of searching for the globally optimal solution, it carries out a series of policy iterations; in each iteration it performs a value iteration that reuses the data computed during the previous policy iteration.\nHowever, OC-DEC-MDP is still slow, especially as the time horizon and the number of methods approach large values.\nThe reason for high runtimes of OC-DEC-MDP for such domains is a consequence of its huge state space, i.e., OC-DEC-MDP introduces a separate state for each possible pair of method and method execution interval.\nFurthermore, OC-DEC-MDP overestimates the reward that a method expects to receive for enabling the execution of future methods.\nThis reward, also referred to as the opportunity cost, plays a crucial role in agent decision making, and as we show later, its overestimation leads to highly suboptimal policies.\nIn this context, we present VFP (= Value Function P ropagation), an efficient solution technique for the DEC-MDP model with temporal constraints and uncertain method execution durations, that builds on the success of OC-DEC-MDP.\nVFP introduces our two orthogonal ideas: First, similarly to [7] [9] and [10], we maintain\nand manipulate a value function over time for each method rather than a separate value for each pair of method and time interval.\nSuch representation allows us to group the time points for which the value function changes at the same rate (= its slope is constant), which results in fast, functional propagation of value functions.\nSecond, we prove (both theoretically and empirically) that OC-DEC - MDP overestimates the opportunity cost, and to 
remedy that, we introduce a set of heuristics, that correct the opportunity cost overestimation problem.\nThis paper is organized as follows: In section 2 we motivate this research by introducing a civilian rescue domain where a team of fire - brigades must coordinate in order to rescue civilians trapped in a burning building.\nIn section 3 we provide a detailed description of our DEC-MDP model with Temporal Constraints and in section 4 we discuss how one could solve the problems encoded in our model using globally optimal and locally optimal solvers.\nSections 5 and 6 discuss the two orthogonal improvements to the state-of-the-art OC-DEC-MDP algorithm that our VFP algorithm implements.\nFinally, in section 7 we demonstrate empirically the impact of our two orthogonal improvements, i.e., we show that: (i) The new heuristics correct the opportunity cost overestimation problem leading to higher quality policies, and (ii) By allowing for a systematic tradeoff of solution quality for time, the VFP algorithm runs much faster than the OC-DEC-MDP algorithm\n2.\nMOTIVATING EXAMPLE\n3.\nMODEL DESCRIPTION\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 831\n4.\nSOLUTION TECHNIQUES\n4.1 Optimal Algorithms\n4.2 Locally Optimal Algorithms\n5.\nVALUE FUNCTION PROPAGATION (VFP)\n832 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.1 Value Function Propagation Phase\n0 otherwise\n5.2 Reading the Policy\n5.3 Probability Function Propagation Phase\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 833\n5.4 The Algorithm\n5.5 Implementation of Function Operations\n6.\nSPLITTING THE OPPORTUNITY COST FUNCTIONS\n834 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 835\n7.\nEXPERIMENTAL EVALUATION\n836 The Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n8.\nCONCLUSIONS\nDecentralized Markov Decision Process (DEC-MDP) has been very popular for modeling of agent-coordination problems, it is very difficult to solve, especially for the real-world domains.\nIn this paper, we improved a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to large DEC-MDPs.\nOur heuristic solution method, called Value Function Propagation (VFP), provided two orthogonal improvements of OC-DEC-MDP: (i) It speeded up OC-DECMDP by an order of magnitude by maintaining and manipulating a value function for each method rather than a separate value for each pair of method and time interval, and (ii) it achieved better solution qualities than OC-DEC-MDP because it corrected the overestimation of the opportunity cost of OC-DEC-MDP.\nIn terms of related work, we have extensively discussed the OCDEC-MDP algorithm [4].\nFurthermore, as discussed in Section 4, there are globally optimal algorithms for solving DEC-MDPs with temporal constraints [1] [11].\nUnfortunately, they fail to scaleup to large-scale domains at present time.\nBeyond OC-DEC-MDP, there are other locally optimal algorithms for DEC-MDPs and DECPOMDPs [8] [12], [13], yet, they have traditionally not dealt with uncertain execution times and temporal constraints.\nFinally, value function techniques have been studied in context of single agent MDPs [7] [9].\nHowever, similarly to [6], they fail to address the lack of global state knowledge, which is a fundamental issue in decentralized planning.","lvl-4":"On Opportunistic Techniques for Solving Decentralized Markov Decision Processes with Temporal Constraints\nABSTRACT\nDecentralized Markov Decision Processes (DEC-MDPs) are a popular model of agent-coordination problems in domains with uncertainty and time constraints but very difficult to solve.\nIn this paper, we improve a state-of-the-art heuristic 
solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to larger DEC-MDPs.\nOur heuristic solution method, called Value Function Propagation (VFP), combines two orthogonal improvements of OC-DEC-MDP.\nFirst, it speeds up OC-DECMDP by an order of magnitude by maintaining and manipulating a value function for each state (as a function of time) rather than a separate value for each pair of sate and time interval.\nFurthermore, it achieves better solution qualities than OC-DEC-MDP because, as our analytical results show, it does not overestimate the expected total reward like OC-DEC - MDP.\nWe test both improvements independently in a crisis-management domain as well as for other types of domains.\nOur experimental results demonstrate a significant speedup of VFP over OC-DEC-MDP as well as higher solution qualities in a variety of situations.\n1.\nINTRODUCTION\nration Rovers [2].\nBecause of the uncertain and dynamic characteristics of such domains, decision-theoretic models have received a lot of attention in recent years, mainly thanks to their expressiveness and the ability to reason about the utility of actions over time.\nKey decision-theoretic models that have become popular in the literature include Decentralized Markov Decision Processes (DECMDPs) and Decentralized, Partially Observable Markov Decision Processes (DEC-POMDPs).\nAlthough globally optimal algorithms for these subclasses have demonstrated promising results, domains on which these algorithms run are still small and time horizons are limited to only a few time ticks.\nTo remedy that, locally optimal algorithms have been proposed [12] [4] [5].\nIn particular, Opportunity Cost DEC-MDP [4] [5], referred to as OC-DEC-MDP, is particularly notable, as it has been shown to scale up to domains with hundreds of tasks and double digit time horizons.\nAdditionally, OC-DEC-MDP is unique in its ability to address both temporal constraints and uncertain method execution 
durations, which is an important factor for real-world domains.\nOC-DEC-MDP is able to scale up to such domains mainly because instead of searching for the globally optimal solution, it carries out a series of policy iterations; in each iteration it performs a value iteration that reuses the data computed during the previous policy iteration.\nHowever, OC-DEC-MDP is still slow, especially as the time horizon and the number of methods approach large values.\nThe reason for high runtimes of OC-DEC-MDP for such domains is a consequence of its huge state space, i.e., OC-DEC-MDP introduces a separate state for each possible pair of method and method execution interval.\nFurthermore, OC-DEC-MDP overestimates the reward that a method expects to receive for enabling the execution of future methods.\nThis reward, also referred to as the opportunity cost, plays a crucial role in agent decision making, and as we show later, its overestimation leads to highly suboptimal policies.\nIn this context, we present VFP (= Value Function P ropagation), an efficient solution technique for the DEC-MDP model with temporal constraints and uncertain method execution durations, that builds on the success of OC-DEC-MDP.\nVFP introduces our two orthogonal ideas: First, similarly to [7] [9] and [10], we maintain\nand manipulate a value function over time for each method rather than a separate value for each pair of method and time interval.\nSuch representation allows us to group the time points for which the value function changes at the same rate (= its slope is constant), which results in fast, functional propagation of value functions.\nSecond, we prove (both theoretically and empirically) that OC-DEC - MDP overestimates the opportunity cost, and to remedy that, we introduce a set of heuristics, that correct the opportunity cost overestimation problem.\nIn section 3 we provide a detailed description of our DEC-MDP model with Temporal Constraints and in section 4 we discuss how one could 
solve the problems encoded in our model using globally optimal and locally optimal solvers.\nSections 5 and 6 discuss the two orthogonal improvements to the state-of-the-art OC-DEC-MDP algorithm that our VFP algorithm implements.\nFinally, in section 7 we demonstrate empirically the impact of our two orthogonal improvements, i.e., we show that: (i) The new heuristics correct the opportunity cost overestimation problem leading to higher quality policies, and (ii) By allowing for a systematic tradeoff of solution quality for time, the VFP algorithm runs much faster than the OC-DEC-MDP algorithm\n8.\nCONCLUSIONS\nDecentralized Markov Decision Process (DEC-MDP) has been very popular for modeling of agent-coordination problems, it is very difficult to solve, especially for the real-world domains.\nIn this paper, we improved a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to large DEC-MDPs.\nOur heuristic solution method, called Value Function Propagation (VFP), provided two orthogonal improvements of OC-DEC-MDP: (i) It speeded up OC-DECMDP by an order of magnitude by maintaining and manipulating a value function for each method rather than a separate value for each pair of method and time interval, and (ii) it achieved better solution qualities than OC-DEC-MDP because it corrected the overestimation of the opportunity cost of OC-DEC-MDP.\nIn terms of related work, we have extensively discussed the OCDEC-MDP algorithm [4].\nFurthermore, as discussed in Section 4, there are globally optimal algorithms for solving DEC-MDPs with temporal constraints [1] [11].\nUnfortunately, they fail to scaleup to large-scale domains at present time.\nBeyond OC-DEC-MDP, there are other locally optimal algorithms for DEC-MDPs and DECPOMDPs [8] [12], [13], yet, they have traditionally not dealt with uncertain execution times and temporal constraints.\nFinally, value function techniques have been studied in context of 
single agent MDPs [7] [9].","lvl-2":"On Opportunistic Techniques for Solving Decentralized Markov Decision Processes with Temporal Constraints\nABSTRACT\nDecentralized Markov Decision Processes (DEC-MDPs) are a popular model of agent-coordination problems in domains with uncertainty and time constraints but very difficult to solve.\nIn this paper, we improve a state-of-the-art heuristic solution method for DEC-MDPs, called OC-DEC-MDP, that has recently been shown to scale up to larger DEC-MDPs.\nOur heuristic solution method, called Value Function Propagation (VFP), combines two orthogonal improvements of OC-DEC-MDP.\nFirst, it speeds up OC-DECMDP by an order of magnitude by maintaining and manipulating a value function for each state (as a function of time) rather than a separate value for each pair of sate and time interval.\nFurthermore, it achieves better solution qualities than OC-DEC-MDP because, as our analytical results show, it does not overestimate the expected total reward like OC-DEC - MDP.\nWe test both improvements independently in a crisis-management domain as well as for other types of domains.\nOur experimental results demonstrate a significant speedup of VFP over OC-DEC-MDP as well as higher solution qualities in a variety of situations.\n1.\nINTRODUCTION\nThe development of algorithms for effective coordination of multiple agents acting as a team in uncertain and time critical domains has recently become a very active research field with potential applications ranging from coordination of agents during a hostage rescue mission [11] to the coordination of Autonomous Mars Explo\nration Rovers [2].\nBecause of the uncertain and dynamic characteristics of such domains, decision-theoretic models have received a lot of attention in recent years, mainly thanks to their expressiveness and the ability to reason about the utility of actions over time.\nKey decision-theoretic models that have become popular in the literature include Decentralized Markov 
Decision Processes (DEC-MDPs) and Decentralized, Partially Observable Markov Decision Processes (DEC-POMDPs).\nUnfortunately, solving these models optimally has been proven to be NEXP-complete [3]; hence more tractable subclasses of these models have been the subject of intensive research.\nIn particular, Network Distributed POMDPs [13], which assume that not all the agents interact with each other; Transition Independent DEC-MDPs [2], which assume that the transition function is decomposable into local transition functions; and DEC-MDPs with Event-Driven Interactions [1], which assume that interactions between agents happen at fixed time points, are good examples of such subclasses.\nAlthough globally optimal algorithms for these subclasses have demonstrated promising results, the domains on which these algorithms run are still small and the time horizons are limited to only a few time ticks.\nTo remedy that, locally optimal algorithms have been proposed [12] [4] [5].\nIn particular, Opportunity Cost DEC-MDP [4] [5], referred to as OC-DEC-MDP, is particularly notable, as it has been shown to scale up to domains with hundreds of tasks and double-digit time horizons.\nAdditionally, OC-DEC-MDP is unique in its ability to address both temporal constraints and uncertain method execution durations, which is an important factor for real-world domains.\nOC-DEC-MDP is able to scale up to such domains mainly because, instead of searching for the globally optimal solution, it carries out a series of policy iterations; in each iteration it performs a value iteration that reuses the data computed during the previous policy iteration.\nHowever, OC-DEC-MDP is still slow, especially as the time horizon and the number of methods grow large.\nThe reason for the high runtime of OC-DEC-MDP in such domains is its huge state space, i.e., OC-DEC-MDP introduces a separate state for each possible pair of method and method execution interval.\nFurthermore, OC-DEC-MDP 
overestimates the reward that a method expects to receive for enabling the execution of future methods.\nThis reward, also referred to as the opportunity cost, plays a crucial role in agent decision making, and as we show later, its overestimation leads to highly suboptimal policies.\nIn this context, we present VFP (Value Function Propagation), an efficient solution technique for the DEC-MDP model with temporal constraints and uncertain method execution durations, that builds on the success of OC-DEC-MDP.\nVFP introduces our two orthogonal ideas: First, similarly to [7] [9] and [10], we maintain and manipulate a value function over time for each method rather than a separate value for each pair of method and time interval.\nSuch a representation allows us to group the time points for which the value function changes at the same rate (i.e., its slope is constant), which results in fast, functional propagation of value functions.\nSecond, we prove (both theoretically and empirically) that OC-DEC-MDP overestimates the opportunity cost, and to remedy that, we introduce a set of heuristics that correct the opportunity cost overestimation problem.\nThis paper is organized as follows: In section 2 we motivate this research by introducing a civilian rescue domain where a team of fire-brigades must coordinate in order to rescue civilians trapped in a burning building.\nIn section 3 we provide a detailed description of our DEC-MDP model with Temporal Constraints and in section 4 we discuss how one could solve the problems encoded in our model using globally optimal and locally optimal solvers.\nSections 5 and 6 discuss the two orthogonal improvements to the state-of-the-art OC-DEC-MDP algorithm that our VFP algorithm implements.\nFinally, in section 7 we demonstrate empirically the impact of our two orthogonal improvements, i.e., we show that: (i) The new heuristics correct the opportunity cost overestimation problem leading to higher quality policies, and (ii) By 
allowing for a systematic tradeoff of solution quality for time, the VFP algorithm runs much faster than the OC-DEC-MDP algorithm.\n2.\nMOTIVATING EXAMPLE\nWe are interested in domains where multiple agents must coordinate their plans over time, despite uncertainty in plan execution duration and outcome.\nOne example domain is a large-scale disaster, such as a fire in a skyscraper.\nBecause there can be hundreds of civilians scattered across numerous floors, multiple rescue teams have to be dispatched, and radio communication channels can quickly become saturated and useless.\nIn particular, small teams of fire-brigades must be sent on separate missions to rescue the civilians trapped in dozens of different locations.\nPicture a small mission plan from Figure 1, where three fire-brigades have been assigned a task to rescue the civilians trapped at site B, accessed from site A (e.g. an office accessed from the floor) 1.\nGeneral fire-fighting procedures involve both: (i) putting out the flames, and (ii) ventilating the site to let the toxic, high-temperature gases escape, with the restriction that ventilation should not be performed too fast in order to prevent the fire from spreading.\nThe team estimates that the civilians have 20 minutes before the fire at site B becomes unbearable, and that the fire at site A has to be put out in order to open the access to site B.\nAs has happened in the past in large-scale disasters, communication often breaks down, and hence we assume in this domain that there is no communication between fire-brigades 1, 2 and 3 (denoted as FB1, FB2 and FB3).\nConsequently, FB2 does not know if it is already safe to ventilate site A, FB1 does not know if it is already safe to enter site A and start fighting fire at site B, etc. 
\nWe assign a reward of 50 for evacuating the civilians from site B, and a smaller reward of 20 for the successful ventilation of site A, since the civilians themselves might succeed in breaking out from site B.\nOne can clearly see the dilemma that FB2 faces: It can only estimate the durations of the "Fight fire at site A" methods to be executed by FB1 and FB3, and at the same time FB2 knows that time is running out for the civilians.\nIf FB2 ventilates site A too early, the fire will spread out of control, whereas if FB2 waits too long with the ventilation method, the fire at site B will become unbearable for the civilians.\nFigure 1: Civilian rescue domain and a mission plan.\nDotted arrows represent implicit precedence constraints within an agent.\nIn general, agents have to perform a sequence of such difficult decisions; in particular, the decision process of FB2 involves first choosing when to start ventilating site A, and then (depending on the time it took to ventilate site A) choosing when to start evacuating the civilians from site B.\nSuch a sequence of decisions constitutes the policy of an agent, and it must be found fast because time is running out.\n3.\nMODEL DESCRIPTION\nWe encode our decision problems in a model which we refer to as Decentralized MDP with Temporal Constraints 2.\nEach instance of our decision problems can be described as a tuple (M, A, C, P, R), where M = {mi} (i = 1, ..., |M|) is the set of methods, and A = {Ak} (k = 1, ..., |A|) is the set of agents.\nAgents cannot communicate during mission execution.\nEach agent Ak is assigned a set Mk of methods, such that the sets Mk jointly cover M (M1 ∪ ... ∪ M|A| = M) and are pairwise disjoint (Mi ∩ Mj = ∅ for all i ≠ j).\nAlso, each method of agent Ak can be executed only once, and agent Ak can execute only one method at a time.\nMethod execution times are uncertain and P = {pi} (i = 1, ..., |M|) is the set of distributions of method execution durations.\nIn particular, pi(t) is the probability that the execution of method mi consumes time t. 
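The model tuple (M, A, C, P, R) described above can be collected into a small data structure. The following Python sketch is purely illustrative: the class and field names are our own assumptions, not the paper's implementation. It covers methods with rewards and duration distributions, agents with their method chains, precedence constraints, and time windows:

```python
from dataclasses import dataclass

# Illustrative sketch of the tuple (M, A, C, P, R); all names are assumptions.

@dataclass
class Method:
    name: str
    reward: float                    # r_i, obtained upon successful execution
    duration_dist: dict[int, float]  # p_i(t): probability that execution takes t

@dataclass
class Agent:
    name: str
    methods: list[str]               # M_k: this agent's chain of methods

@dataclass
class DecMdpTC:
    methods: dict[str, Method]                 # M (with rewards R and durations P)
    agents: list[Agent]                        # A; the sets M_k partition M
    precedence: list[tuple[str, str]]          # C_pred: (m_i, m_j) = m_i before m_j
    windows: dict[str, list[tuple[int, int]]]  # C_win: (EST, LET) windows per method

    def horizon(self) -> int:
        # The planning horizon: the latest LET over all time-window constraints.
        return max(let for ws in self.windows.values() for _, let in ws)
```

In the civilian rescue example, for instance, the evacuation method at site B would carry a reward of 50 and a time window ending at the 20-minute deadline.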
C is a set of temporal constraints in the system.\nMethods are partially ordered and each method has fixed time windows inside which it can be executed, i.e., C = C≺ ∪ C[], where C≺ is the set of predecessor constraints and C[] is the set of time window constraints.\nFor c ∈ C≺, c = (mi, mj) means that method mi precedes method mj, i.e., the execution of mj cannot start before mi terminates.\nIn particular, for an agent Ak, all its methods form a chain linked by predecessor constraints.\nWe assume that the graph G = (M, C≺) is acyclic, does not have disconnected nodes (the problem cannot be decomposed into independent subproblems), and its source and sink vertices identify the source and sink methods of the system.\nFor c ∈ C[], c = (mi, EST, LET) means that the execution of mi can only start after the Earliest Starting Time EST and must finish before the Latest End Time LET; we allow methods to have multiple disjoint time window constraints.\nAlthough the distributions pi can extend to infinite time horizons, given the time window constraints, the planning horizon Δ = max{τ′ | ⟨m, τ, τ′⟩ ∈ C[]} is considered as the mission deadline.\nFinally, R = {ri} (i = 1, ..., |M|) is the set of non-negative rewards, i.e., ri is obtained upon successful execution of mi.\nSince there is no communication allowed, an agent can only estimate the probabilities that its methods have already been enabled by other agents.\n2 One could also use the OC-DEC-MDP framework, which models both time and resource constraints.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 831\nConsequently, if mj ∈ Mk is the next method to be executed by the agent Ak and the current time is t ∈ [0, Δ], the agent has to make a decision whether to Execute the method mj (denoted as E), or to Wait (denoted as W).\nIn case agent Ak decides to wait, it remains idle for an arbitrarily small time ε and resumes operation at the same place (i.e., about to execute method mj) at time t + ε.\nIn case agent Ak decides to Execute the next method, two outcomes are possible: Success: The agent Ak receives reward rj and moves on to its next method (if such a method exists) so long as the following conditions hold: (i) all the methods {mi | ⟨mi, mj⟩ ∈ C≺} that directly enable method mj have already been completed, (ii) the execution of method mj started in some time window of method mj, i.e., ∃⟨mj, τ, τ′⟩ ∈ C[] such that t ∈ [τ, τ′], and (iii) the execution of method mj finished inside the same time window, i.e., agent Ak completed method mj in time less than or equal to τ′ − t. 
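The three success conditions above can be written as a single predicate; failure is simply its negation. This is a minimal sketch under assumed data structures (the dicts, lists, and parameter names are our own choices, not the paper's code):

```python
# Sketch of the success test for starting method m_j at time t with a given
# duration; 'preds' and 'windows' encode C_pred and C_win for m_j (assumed names).

def can_succeed(t: float, duration: float,
                enablers_done: dict[str, bool],
                preds: list[str],
                windows: list[tuple[float, float]]) -> bool:
    # (i) all methods that directly enable m_j have already been completed
    if not all(enablers_done[p] for p in preds):
        return False
    # (ii) the start time t lies in some window [EST, LET] of m_j, and
    # (iii) execution also finishes inside that same window (duration <= LET - t)
    return any(est <= t <= let and t + duration <= let for est, let in windows)
```

For example, starting at t = 5 with duration 3 inside a window (0, 10) succeeds, while starting at t = 8 with duration 5 fails condition (iii) because execution would overrun the LET.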
Failure: If any of the above-mentioned conditions does not hold, agent Ak stops its execution.\nOther agents may continue their execution, but the methods mk ∈ {m | ⟨mj, m⟩ ∈ C≺} will never become enabled.\nThe policy πk of an agent Ak is a function πk : Mk × [0, Δ] → {W, E}, and πk(⟨m, t⟩) = a means that if Ak is at method m at time t, it will choose to perform the action a.\nA joint policy π = [π1, ..., π|A|] is considered to be optimal (denoted as π∗) if it maximizes the sum of expected rewards for all the agents.\n4.\nSOLUTION TECHNIQUES\n4.1 Optimal Algorithms\nThe optimal joint policy π∗ is usually found by using the Bellman update principle, i.e., in order to determine the optimal policy for method mj, the optimal policies for the methods mk ∈ {m | ⟨mj, m⟩ ∈ C≺} are used.\nUnfortunately, for our model, the optimal policy for method mj also depends on the policies for the methods mi ∈ {m | ⟨m, mj⟩ ∈ C≺}.\nThis double dependency results from the fact that the expected reward for starting the execution of method mj at time t also depends on the probability that method mj will be enabled by time t. 
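This double dependency can be made concrete with a toy calculation: the value of starting mj at time t mixes the upstream policy (through the probability that mj is already enabled at t) with the duration distribution and the deadline. The function and all the numbers below are illustrative assumptions, not taken from the paper:

```python
# Toy sketch: expected reward for starting m_j at time t, given the probability
# that m_j is enabled by t, its duration distribution p_j, reward r_j, and LET.

def expected_start_value(t: int, p_enabled: float,
                         duration_dist: dict[int, float],
                         reward: float, let: int) -> float:
    # Execution succeeds only if m_j is enabled and finishes by the LET.
    p_in_time = sum(p for d, p in duration_dist.items() if t + d <= let)
    return p_enabled * p_in_time * reward

# Waiting raises the enabling probability but shrinks the slack before LET:
v_early = expected_start_value(2, 0.4, {3: 0.5, 6: 0.5}, 50.0, 10)  # start early
v_late  = expected_start_value(6, 0.9, {3: 0.5, 6: 0.5}, 50.0, 10)  # wait, then start
```

In this particular instance waiting happens to pay off (v_late exceeds v_early), which is exactly the Wait/Execute tension an agent's policy must resolve at every decision point.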
Consequently, if time is discretized, one needs to consider Δ^|M| candidate policies in order to find π∗.\nThus, globally optimal algorithms used for solving real-world problems are unlikely to terminate in reasonable time [11].\nThe complexity of our model could be reduced if we considered a more restricted version; in particular, if each method mj was allowed to be enabled only at time points t ∈ Tj ⊂ [0, Δ], the Coverage Set Algorithm (CSA) [1] could be used.\nHowever, CSA complexity is double exponential in the size of Tj, and for our domains Tj can store all values ranging from 0 to Δ.\n4.2 Locally Optimal Algorithms\nFollowing the limited applicability of globally optimal algorithms for DEC-MDPs with Temporal Constraints, locally optimal algorithms appear more promising.\nSpecifically, the OC-DEC-MDP algorithm [4] is particularly significant, as it has been shown to easily scale up to domains with hundreds of methods.\nThe idea of the OC-DEC-MDP algorithm is to start with the earliest-starting-time policy π0 (according to which an agent will start executing the method m as soon as m has a non-zero chance of being already enabled), and then improve it iteratively, until no further improvement is possible.\nAt each iteration, the algorithm starts with some policy π, which uniquely determines the probabilities Pi,[τ,τ′] that method mi will be performed in the time interval [τ, τ′].\nIt then performs two steps: Step 1: It propagates from sink methods to source methods the values Vi,[τ,τ′], which represent the expected utility for executing method mi in the time interval [τ, τ′].\nThis propagation uses the probabilities Pi,[τ,τ′] from the previous algorithm iteration.\nWe call this step a value propagation phase.\nStep 2: Given the values Vi,[τ,τ′] from Step 1, the algorithm chooses the most profitable method execution intervals, which are stored in a new policy 
π′.\nIt then propagates the new probabilities Pi,[τ,τ′] from source methods to sink methods.\nWe call this step a probability propagation phase.\nIf policy π′ does not improve π, the algorithm terminates.\nThere are two shortcomings of the OC-DEC-MDP algorithm that we address in this paper.\nFirst, each OC-DEC-MDP state is a pair ⟨mj, [τ, τ′]⟩, where [τ, τ′] is a time interval in which method mj can be executed.\nWhile such a state representation is beneficial, in that the problem can be solved with a standard value iteration algorithm, it blurs the intuitive mapping from time t to the expected total reward for starting the execution of mj at time t. Consequently, if some method mi enables method mj, and the values Vj,[τ,τ′] ∀τ, τ′ ∈ [0, Δ] are known, the operation that calculates the values Vi,[τ,τ′] ∀τ, τ′ ∈ [0, Δ] (during the value propagation phase) runs in time O(I²), where I is the number of time intervals 3.\nSince the runtime of the whole algorithm is proportional to the runtime of this operation, especially for big time horizons Δ, the OC-DEC-MDP algorithm runs slowly.\nSecond, while OC-DEC-MDP emphasizes precise calculation of the values Vj,[τ,τ′], it fails to address a critical issue: how the values Vj,[τ,τ′] are split given that the method mj has multiple enabling methods.\nAs we show later, OC-DEC-MDP splits Vj,[τ,τ′] into parts that may overestimate Vj,[τ,τ′] when summed up again.\nAs a result, methods that precede the method mj overestimate the value for enabling mj, which, as we show later, can have disastrous consequences.\nIn the next two sections, we address both of these shortcomings.\n5.\nVALUE FUNCTION PROPAGATION (VFP)\nThe general scheme of the VFP algorithm is identical to the OC-DEC-MDP algorithm, in that it performs a series of policy improvement iterations, each one involving a value and probability propagation phase.\nHowever, instead of propagating separate values, VFP maintains and propagates whole functions; we therefore refer to these phases as the value function propagation phase and the probability function propagation phase.\nTo this end, for each method mi ∈ M, we define three new functions: Value Function, denoted as vi(t), that maps time t ∈ [0, Δ] to the expected total reward for starting the execution of method mi at time t. Opportunity Cost Function, denoted as Vi(t), that maps time t ∈ [0, Δ] to the expected total reward for starting the execution of method mi at time t assuming that mi is enabled.\nProbability Function, denoted as Pi(t), that maps time t ∈ [0, Δ] to the probability that method mi will be completed before time t.\nSuch a functional representation allows us to easily read the current policy, i.e., if an agent Ak is at method mi at time t, then it will wait as long as the value function vi(t) will be greater in the future.\nFormally: π(⟨mi, t⟩) = W if ∃ t′ > t such that vi(t) < vi(t′), and E otherwise.\nA consequence x′ dominates x if x′i ≥ xi for all i, and the inequality is strict for at least one i.\nThe Pareto frontier in a consequence space then consists of all consequences that are not dominated by any other consequence.\nThis is illustrated in Fig. 
1, in which an alternative consists of two attributes d1 and d2 and the decision maker tries to maximise the two objectives X1 and X2.\nA decision a ∈ A whose consequence does not lie on the Pareto frontier is inefficient.\nFigure 1: The Pareto frontier.\nWhile the Pareto frontier allows M to avoid taking inefficient decisions, M still has to decide which of the efficient consequences on the Pareto frontier is most preferred by him.\nMCDM theorists introduce a mechanism to allow the objective components of consequences to be normalised to the payoff valuations for the objectives.\nConsequences can then be ordered: if the gains in satisfaction brought about by C1 (in comparison to C2) equal the losses in satisfaction brought about by C1 (in comparison to C2), then the two consequences C1 and C2 are considered indifferent.\nM can now construct the set of indifference curves 1 in the consequence space (the dashed curves in Fig. 1).\nThe most preferred indifference curve that intersects with the Pareto frontier is in focus: its intersection with the Pareto frontier is the sought-after consequence (i.e., the optimal consequence in Fig. 1).\n2.2 A negotiation framework\nA multi-agent negotiation framework consists of: 1.\nA set of two negotiating agents N = {1, 2}.\n2.\nA set of attributes Att = {α1, ..., αm} characterising the issues the agents are negotiating over.\nEach attribute α can take a value from the set Valα; 3.\nA set of alternative outcomes O.\nAn outcome o ∈ O is represented by an assignment of values to the corresponding attributes in Att.\n4.\nAgents' utility: Based on the theory of multiple-criteria decision making [8], we define the agents' utility as follows: • Objectives: Agent i has a set of ni objectives, or interests, denoted by j (j = 1, ... 
, ni).\nTo measure how much an outcome o fulfills an objective j to an agent i, we use objective functions: for each agent i, we define i's interests using the objective vector function fi = [fij] : O → R^ni.\n• Value functions: Instead of directly evaluating an outcome o, agent i looks at how much his objectives are fulfilled and will make a valuation based on these more basic criteria.\nThus, for each agent i, there is a value function σi : R^ni → R.\nIn particular, Raiffa [17] shows how to systematically construct an additive value function for each party involved in a negotiation.\n• Utility: Now, given an outcome o ∈ O, an agent i is able to determine its value, i.e., σi(fi(o)).\nHowever, a negotiation infrastructure is usually required to facilitate negotiation.\nThis might involve other mechanisms and factors/parties, e.g., a mediator, a legal institution, participation fees, etc.\nThe standard way to implement such a thing is to allow money and side-payments.\n1 In fact, given the k-dimensional space, these should be called indifference surfaces; however, we will not bog down to that level of detail.\nIn this paper, we ignore those side-effects and assume that agent i's utility function ui is normalised so that ui : O → [0, 1].\nEXAMPLE 1.\nThere are two agents, A and B. Agent A has a task T that needs to be done and also 100 units of a resource R. Agent B has the capacity to perform task T and would like to obtain at least 10 and at most 20 units of the resource R. Agent B is indifferent over any amount between 10 and 20 units of the resource R.\nThe objective functions for both agents A and B are cost and revenue, and they both aim at minimising costs while maximising revenues.\nHaving T done generates for A a revenue rA,T while doing T incurs a cost cB,T to B. 
Agent B obtains a revenue rB,R for each unit of the resource R, while providing each unit of the resource R costs agent A cA,R. Assuming that money transfer between agents is possible, the set Att then contains three attributes: • T, taking values from the set {0, 1}, indicates whether the task T is assigned to agent B; • R, taking values from the set of non-negative integers, indicates the amount of resource R being allocated to agent B; and • MT, taking values from R, indicates the payment p to be transferred from A to B. Consider the outcome o = [T = 1, R = k, MT = p], i.e., the task T is assigned to B, A allocates k units of the resource R to B, and A transfers p dollars to B. Then, costA(o) = k·cA,R + p and revA(o) = rA,T; and costB(o) = cB,T and revB(o) = k·rB,R + p if 10 ≤ k ≤ 20, and revB(o) = p otherwise.\nAnd, σi(costi(o), revi(o)) = revi(o) − costi(o), (i = A, B).\n3.\nPROBLEM FORMALISATION\nConsider Example 1 and assume that rA,T = $150, cB,T = $100, rB,R = $10 and cA,R = $7.\nThat is, the revenue generated for A exceeds the cost incurred by B to do task T, and B values resource R more highly than the cost for A to provide it.\nThe optimal solution to this problem scenario is to assign task T to agent B and to allocate 20 units of resource R (i.e., the maximal amount of resource R required by agent B) from agent A to agent B.\nThis outcome regarding the resource and task allocation problems leaves payoffs of $10 to agent A and $100 to agent B. 2 Any other outcome would leave at least one of the agents worse off.\nIn other words, the presented outcome is Pareto-efficient and should be part of the solution outcome for this problem scenario.\nHowever, as the agents still have to bargain over the amount of money transfer p, neither agent would be willing to disclose their respective costs and revenues regarding the task T and the resource R.\nAs a consequence, agents often do not achieve the optimal outcome presented above 
in practice.\nTo address this issue, we introduce a mediator to help the agents discover better agreements than the ones they might try to settle on.\nNote that this problem is essentially the problem of searching for joint gains in a multilateral negotiation in which the involved parties hold strategic information, i.e., the integrative part of a negotiation.\nIn order to help facilitate this process, we introduce the role of a neutral mediator.\nBefore formalising the decision problems faced by the mediator and the negotiating agents, we discuss the properties of the solution outcomes to be achieved by the mediator.\n2 Certainly, without money transfer to compensate agent A, this outcome is not a fair one.\nIn a negotiation setting, the two typical design goals would be: • Efficiency: Prevent the agents from settling on an outcome that is not Pareto-optimal; and • Fairness: Avoid agreements that give most of the gains to a subset of agents while leaving the rest with too little.\nThe above goals are axiomatised in Nash's seminal work [16] on cooperative negotiation games.\nEssentially, Nash advocates for the following properties to be satisfied by a solution to the bilateral negotiation problem: (i) it produces only Pareto-optimal outcomes; (ii) it is invariant to affine transformations of the consequence space; (iii) it is symmetric; and (iv) it is independent of irrelevant alternatives.\nA solution satisfying Nash's axioms is called a Nash bargaining solution.\nIt then turns out that, by taking the negotiators' utilities as its objectives, the mediator itself faces a multi-criteria decision-making problem.\nThe issues faced by the mediator are: (i) the mediator requires access to the negotiators' utility functions, and (ii) making (fair) tradeoffs between different agents' utilities.\nOur methods allow the agents to repeatedly interact with the mediator so that a Nash solution outcome can be found by the parties.\nInformally, the problem 
faced by both the mediator and the negotiators is the construction of the indifference curves.\nWhy are the indifference curves so important?\n• To the negotiators, knowing the options available along indifference curves opens up opportunities to reach more efficient outcomes.\nFor instance, consider an agent A who is presenting his opponent with an offer θA which she refuses to accept.\nRather than having to concede, A could look at his indifference curve going through θA and choose another proposal θ′A.\nTo him, θA and θ′A are indifferent, but θ′A could give some gains to B and thus will be more acceptable to B.\nIn other words, the outcome θ′A is more efficient than θA to these two negotiators.\n• To the mediator, constructing indifference curves requires a measure of fairness between the negotiators.\nThe mediator needs to determine how much utility it needs to take away from the other negotiators to give a particular negotiator a specific gain G (in utility).\nIn order to search for integrative solutions within the outcome space O, we characterise the relationship between the agents over the set of attributes Att.\nAs the agents hold different objectives and have different capacities, it may be the case that changing between two values of a specific attribute implies different shifts in utility for the agents.\nHowever, the problem of finding the exact Pareto-optimal set 3 is NP-hard [2].\nOur approach is thus to solve this optimisation problem in two steps.\nIn the first step, the more manageable attributes will be solved.\nThese are attributes that take a finite set of values.\nThe result of this step would be a subset of outcomes that contains the Pareto-optimal set.\nIn the second step, we employ an iterative procedure that allows the mediator to interact with the negotiators to find joint improvements that move towards a Pareto-optimal outcome.\nThis approach will not work unless the attributes from Att are independent.\n3 The Pareto-optimal set is the set of outcomes whose consequences (in the consequence space) correspond to the Pareto frontier.\nMost works on multi-attribute, or multi-issue, negotiation (e.g. [17]) assume that the attributes or the issues are independent, resulting in an additive value function for each agent. 4\nASSUMPTION 1.\nLet i ∈ N and S ⊆ Att.\nDenote by ¯S the set Att \ S. Assume that vS and v′S are two assignments of values to the attributes of S, and v1¯S, v2¯S are two arbitrary value assignments to the attributes of ¯S; then ui([vS, v1¯S]) − ui([vS, v2¯S]) = ui([v′S, v1¯S]) − ui([v′S, v2¯S]).\nThat is, the utility function of agent i will be defined on the attributes from S independently of any value assignment to the other attributes.\n4.\nMEDIATOR-BASED BILATERAL NEGOTIATIONS\nAs discussed by Lax and Sebenius [13], under incomplete information the tension between creating and claiming value is the primary cause of inefficient outcomes.\nThis can be seen most easily in negotiations involving two negotiators; during the distributive phase of the negotiation, the two negotiators' objectives are directly opposed to each other.\nWe will now formally characterise this relationship between negotiators by defining the opposition between two negotiating parties.\nThe following exposition is mainly reproduced from [9].\nAssume for the moment that all attributes from Att take values from the set of real numbers R, i.e., Valj ⊆ R for all j ∈ Att.\nWe further assume that the set O = ×j∈Att Valj of feasible outcomes is defined by constraints that all parties must obey and that O is convex.\nNow, an outcome o ∈ O is just a point in the m-dimensional space of real numbers.\nThen, the questions are: (i) from the point of view of an agent i, is o already the best outcome for 
i?\n(ii) If o is not the best outcome for i, then is there another outcome o′ such that o′ gives i a better utility than o and o′ does not cause a utility loss to the other agent j in comparison to o?\nThe above questions can be answered by looking at the directions of improvement of the negotiating parties at o, i.e., the directions in the outcome space O into which their utilities increase at point o. Under the assumption that the parties' utility functions ui are differentiable and concave, the set of all directions of improvement for a party at a point o can be defined in terms of his most preferred, or gradient, direction at that point.\nWhen the gradient direction ∇ui(o) of agent i at point o is directly opposed to the gradient direction ∇uj(o) of agent j at point o, then the two parties strongly disagree at o and no joint improvements can be achieved for i and j in the locality surrounding o.\nSince opposition between the two parties can vary considerably over the outcome space (with one pair of outcomes considered highly antagonistic and another pair being highly cooperative), we need to describe the local properties of the relationship.\nWe begin with the opposition at any point of the outcome space Rm.\nThe following definition is reproduced from [9]: DEFINITION 2.\n1.\nThe parties are in local strict opposition at a point x ∈ Rm iff for all points x′ ∈ Rm that are sufficiently close to x (i.e., for some ε > 0 such that ∀x′: ‖x′ − x‖ < ε), an increase of one utility can be achieved only at the expense of a decrease of the other utility.\n2.\nThe parties are in local non-strict opposition at a point x ∈ Rm iff they are not in local strict opposition at x, i.e., iff it is possible for both parties to raise their utilities by moving an infinitesimal distance from x.\n3.\nThe parties are in local weak opposition at a point x ∈ Rm iff ∇u1(x) · ∇u2(x) ≥ 0, i.e., iff the gradients at x of the two utility functions form an acute or right angle.\n4.\nThe parties are in local strong opposition at a point x ∈ Rm iff ∇u1(x) · ∇u2(x) < 0, i.e., iff the gradients at x form an obtuse angle.\n5.\nThe parties are in global strict (nonstrict, weak, strong) opposition iff for every x ∈ Rm they are in local strict (nonstrict, weak, strong) opposition.\n4 Klein et al. [10] explore several implications of complex contracts in which attributes are possibly inter-dependent.\nGlobal strict and nonstrict oppositions are complementary cases.\nEssentially, under global strict opposition the whole outcome space O becomes the Pareto-optimal set, as at no point in O can the negotiating parties make a joint improvement, i.e., every point in O is a Pareto-efficient outcome.\nIn other words, under global strict opposition the outcome space O can be flattened out into a single line such that for each pair of outcomes x, y ∈ O, u1(x) < u1(y) iff u2(x) > u2(y), i.e., at every point in O, the gradients of the two utility functions point to the two different ends of the line.\nIntuitively, global strict opposition implies that there is no way to obtain joint improvements for both agents.\nAs a consequence, the negotiation degenerates to a distributive negotiation, i.e., the negotiating parties should try to claim as large a share of the negotiation issues as possible, while the mediator should aim for the fairness of the division.\nOn the other hand, global nonstrict opposition allows room for joint improvements, and all parties might be better off trying to realise the potential gains by reaching Pareto-efficient agreements.\nWeak and strong oppositions indicate different levels of opposition.\nThe weaker the opposition, the more potential gains can be realised, making cooperation the better strategy to employ during negotiation.\nOn the 
other hand, stronger opposition suggests that the negotiating parties tend to behave strategically, leading to misrepresentation of their respective objectives and utility functions and making joint gains more difficult to realise.

We have temporarily been making the assumption that the outcome space O is a subset of Rm. In many real-world negotiations this assumption would be too restrictive. We will continue our exposition by lifting this restriction and allowing discrete attributes. However, as most negotiations involve only discrete issues with a bounded number of options, we will assume that each attribute takes values either from a finite set or from the set of real numbers R. In the rest of the paper we will refer to attributes whose values come from finite sets as simple attributes, and to attributes whose values come from R as continuous attributes. The notions of local opposition (strict, non-strict, weak and strong) are not applicable to outcome spaces that contain simple attributes, and neither are the notions of global weak and strong opposition. However, the notions of global strict and non-strict opposition can be generalised to outcome spaces that contain simple attributes.

DEFINITION 3. Given an outcome space O, the parties are in global strict opposition iff ∀x, y ∈ O, u1(x) < u1(y) iff u2(x) > u2(y). The parties are in global non-strict opposition iff they are not in global strict opposition.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07), 511

4.1 Optimisation on simple attributes

In order to extract the optimal values for a subset of attributes, in the first step of this optimisation process the mediator requests the negotiators to submit their respective utility functions over the set of simple attributes. Let Simp ⊆ Att denote the set of all simple attributes from Att. Note that, due to Assumption 1, agent i's utility function can be characterised as follows: ui([vSimp, vSimp′])
= wi1 · ui,1([vSimp]) + wi2 · ui,2([vSimp′]), where Simp′ = Att ∖ Simp, ui,1 and ui,2 are the utility components of ui over the sets of attributes Simp and Simp′, respectively, and 0 < wi1, wi2 < 1 with wi1 + wi2 = 1. As attributes are independent of each other with regard to the agents' utility functions, the optimisation problem over the attributes from Simp can be carried out by fixing ui,2([vSimp′]) to a constant C and then searching for the optimal values within the set of attributes Simp. Now, how does the mediator determine the optimal values for the attributes in Simp? Several well-known optimisation strategies are applicable here:

• The utilitarian solution: the sum of the agents' utilities is maximised, so the optimal values are the solution of the optimisation problem arg max_{v ∈ ValSimp} Σ_{i∈N} ui(v).
• The Nash solution: the product of the agents' utilities is maximised, so the optimal values are the solution of arg max_{v ∈ ValSimp} Π_{i∈N} ui(v).
• The egalitarian solution (a.k.a. the maximin solution): the utility of the agent with minimum utility is maximised, so the optimal values are the solution of arg max_{v ∈ ValSimp} min_{i∈N} ui(v).

The question now is, of course, whether a negotiator has an incentive to misrepresent his utility function. First of all, recall that the agents' utility functions are bounded, i.e., ∀o ∈ O, 0 ≤ ui(o) ≤ 1. Thus, the agents have no incentive to overstate their utility regarding an outcome o: if o is the most preferred outcome to an agent i, then he already assigns the maximal utility to o.
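For a single simple attribute, all three solutions can be computed by brute-force enumeration of its finite value set. The sketch below is ours, not the paper's; the utilities are those of the {A, B, C, D} illustration used later in the text, with both agents reporting honestly and ties broken in favour of the first value encountered.

```python
# Reported utilities of the two negotiators over one simple attribute
# (the {A, B, C, D} illustration from the text; honest reports).
u1 = {"A": 0.4, "B": 0.7, "C": 0.9, "D": 1.0}
u2 = {"A": 1.0, "B": 0.9, "C": 0.7, "D": 0.4}
values = list(u1)

utilitarian = max(values, key=lambda v: u1[v] + u2[v])      # arg max sum
nash        = max(values, key=lambda v: u1[v] * u2[v])      # arg max product
egalitarian = max(values, key=lambda v: min(u1[v], u2[v]))  # maximin

print(utilitarian, nash, egalitarian)  # prints: B B B
```

Here the three criteria happen to agree on B; in general they diverge, since the utilitarian sum ignores how unevenly the utilities are split between the agents.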
On the other hand, if o is not the most preferred outcome to i, then by overstating the utility he assigns to o, agent i runs the risk of having to settle on an agreement that gives him a smaller payoff than he is supposed to receive. However, agents do have an incentive to understate their utility if the final settlement is based on the above solutions alone. Essentially, the mechanism to prevent an agent from understating his utility regarding particular outcomes is to guarantee a certain measure of fairness for the final settlement. That is, the agents lose the incentive to be dishonest: whatever an agent could gain by exploiting the known solution used to determine the settlement outcome would be offset by the fairness-maintenance mechanism. First, we state an easy lemma.

LEMMA 1. When Simp contains a single attribute, the agents have an incentive to understate their utility functions regarding outcomes that are not attractive to them.

By way of illustration, consider the set Simp containing only one attribute, taking values from the finite set {A, B, C, D}. Assume that negotiator 1 assigns utilities of 0.4, 0.7, 0.9 and 1 to A, B, C and D, respectively, and that negotiator 2 assigns utilities of 1, 0.9, 0.7 and 0.4 to A, B, C and D, respectively. If agent 1 misrepresents his utility function to the mediator by reporting utility 0 for the values A, B and C and utility 1 for the value D, then agent 2, who reports honestly to the mediator, will obtain his worst outcome D under any of the above solutions. Note that agent 1 does not need to know agent 2's utility function, nor the strategy employed by agent 2: as long as he knows that the mediator is going to employ one of the above three solutions, the above misrepresentation is the dominant strategy in this game. However, when the set Simp contains more than one attribute and none of the attributes strongly dominates the other
attributes, the above problem diminishes by itself thanks to the integrative solution. We of course have to define clearly what it means for an attribute to strongly dominate the other attributes. Intuitively, if most of an agent's utility is concentrated on one attribute, then this attribute strongly dominates the others. We again appeal to Assumption 1 on the additivity of utility functions to achieve a measure of fairness within this negotiation setting. Due to Assumption 1, we can characterise agent i's utility component over the set of attributes Simp by the following equation:

ui,1([vSimp]) = Σ_{j∈Simp} wij · ui,j([vj])    (1)

where Σ_{j∈Simp} wij = 1. Then, an attribute ℓ ∈ Simp strongly dominates the rest of the attributes in Simp (for agent i) iff wiℓ > Σ_{j∈Simp∖{ℓ}} wij. Attribute ℓ is then said to be strongly dominant (for agent i) wrt. the set of simple attributes Simp. The following theorem shows that if the set of attributes Simp does not contain a strongly dominant attribute, then the negotiators have no incentive to be dishonest.

THEOREM 1. Given a negotiation framework, if for every agent the set of simple attributes does not contain a strongly dominant attribute, then truth-telling is an equilibrium strategy for the negotiators during the optimisation of simple attributes.

So far we have been concentrating on the efficiency issue while leaving the fairness issue aside. A fair framework not only supports a more satisfactory distribution of utility among the agents; it is also often a good measure to prevent misrepresentation of private information by the agents. Of the three solutions presented above, the utilitarian solution does not support fairness. On the other hand, Nash [16] proves that the Nash solution satisfies the above four axioms for cooperative bargaining games and is considered a fair solution. The egalitarian solution is another mechanism to achieve fairness, essentially by helping the
worst off. The problem with these solutions, as discussed earlier, is that they are vulnerable to strategic behaviour when one of the attributes strongly dominates the rest. However, there is yet another solution that aims to guarantee fairness: the minimax solution, under which the utility of the agent with maximum utility is minimised. It is obvious that the minimax solution produces inefficient outcomes. To get around this problem (given that the Pareto-optimal set can be tractably computed), we can apply this solution over the Pareto-optimal set only. Let POSet ⊆ ValSimp be the Pareto-optimal subset of the simple outcomes; the minimax solution is then defined to be the solution of the following optimisation problem:

arg min_{v∈POSet} max_{i∈N} ui(v)

While overall efficiency often suffers under a minimax solution, i.e., the sum of all agents' utilities is often lower than under the other solutions, it can be shown that the minimax solution is less vulnerable to manipulation.

THEOREM 2. Given a negotiation framework, under the minimax solution, if the negotiators are uncertain about their opponents' preferences, then truth-telling is an equilibrium strategy for the negotiators during the optimisation of simple attributes.

That is, even when there is only a single simple attribute, if an agent is uncertain whether the other agent's most preferred resolution coincides with his own, then he should opt for truth-telling as the optimal strategy.

4.2 Optimisation on continuous attributes

When the attributes take values from infinite sets, we assume that they are continuous. This is similar to the common practice in operations research in which linear programming solutions/techniques are applied to integer programming problems. We denote the number of continuous attributes by k, i.e., Att = Simp ∪ Simp′ and |Simp′| =
k. The outcome space O can then be represented as O = (∏_{j∈Simp} Valj) × (∏_{l∈Simp′} Vall), where ∏_{l∈Simp′} Vall ⊆ Rk is the continuous component of O. Let Oc denote the set ∏_{l∈Simp′} Vall. We will refer to Oc as the feasible set and assume that Oc is closed and convex. After carrying out the optimisation over the set of simple attributes, we are able to assign the optimal values to the simple attributes from Simp. Thus, we reduce the original problem to the problem of searching for optimal (and fair) outcomes within the feasible set Oc. Recall that, by Assumption 1, we can characterise agent i's utility function as ui([v*Simp, vSimp′]) = C + wi2 · ui,2([vSimp′]), where C is the constant wi1 · ui,1([v*Simp]) and v*Simp denotes the optimal values of the simple attributes in Simp. Hence, without loss of generality (albeit with a blatant abuse of notation), we can take agent i's utility function to be ui : Rk → [0, 1]. Accordingly, we will also take the set of outcomes under consideration by the agents to be the feasible set Oc. We now state another assumption to be used in this section.

ASSUMPTION 2. The negotiators' utility functions can be described by continuously differentiable and concave functions ui : Rk → [0, 1] (i = 1, 2).

It should be emphasised that we do not assume that agents explicitly know their utility functions. For the method described in the following to work, we only assume that the agents know the relevant information, e.g.
at a certain point within the feasible set Oc, the gradient direction of their own utility function and some section of their respective indifference curves. Assume that a tentative agreement (a point x ∈ Rk) is currently on the table. The process for the agents to jointly improve this agreement in order to reach a Pareto-optimal agreement can be described as follows. The mediator asks the negotiators to discreetly submit their respective gradient directions at x, i.e., ∇u1(x) and ∇u2(x). Note that the goal of the process described here is to search for agreements that are more efficient than the tentative agreement currently on the table. That is, we are searching for points x′ within the feasible set Oc such that moving to x′ from the current tentative agreement x brings more gains to at least one of the agents while not hurting either agent. Because the feasible set Oc is bounded, as assumed above, the conditions for an alternative x′ ∈ Oc to be efficient vary depending on the position of x′. The following results are proved in [9]. Let B(x) = 0 denote the equation of the boundary of Oc, defining x ∈ Oc iff B(x) ≥ 0. An alternative x* ∈ Oc is efficient iff either

A. x* is in the interior of Oc and the parties are in local strict opposition at x*, i.e.,

∇u1(x*) = −γ∇u2(x*), where γ > 0;    (2)

or

B.
x* is on the boundary of Oc and, for some α, β ≥ 0,

α∇u1(x*) + β∇u2(x*) = ∇B(x*)    (3)

We are now interested in answering the following questions: (i) What is the initial tentative agreement x0? (ii) How do we find a more efficient agreement xh+1, given the current tentative agreement xh?

4.2.1 Determining a fair initial tentative agreement

It should be emphasised that the choice of the initial tentative agreement affects the fairness of the final agreement reached by the presented method. For instance, if the initial tentative agreement x0 is chosen to be the most preferred alternative of one of the agents, then it is also a Pareto-optimal outcome, making it impossible to find any joint improvement from x0. However, if x0 is then chosen as the final settlement and x0 turns out to be the worst alternative for the other agent, this outcome is a very unfair one. Thus, it is important that the choice of the initial tentative agreement be made sensibly. Ehtamo et al. [3] present several methods to choose the initial tentative agreement (called the reference point in their paper). However, their goal is to approximate the Pareto-optimal set by systematically choosing a set of reference points. Once an (approximate) Pareto-optimal set is generated, it is left to the negotiators to decide which of the generated Pareto-optimal outcomes is chosen as the final settlement; that is, a distributive negotiation is then required to settle the issue. We, on the other hand, are interested in a fair initial tentative agreement which is not necessarily efficient. Improving a given tentative agreement to yield a Pareto-optimal agreement is considered in the next section.

For each attribute j ∈ Simp′, an agent i will be asked to discreetly submit three values (from the set Valj): the most preferred value, denoted by pvi,j; the least preferred value, denoted by wvi,j; and a value that gives
i an approximately average payoff, denoted by avi,j. (Note that this is possible because the set Valj is bounded.) If pv1,j and pv2,j are sufficiently close, i.e., |pv1,j − pv2,j| < Δ for some pre-defined Δ > 0, then pv1,j and pv2,j are chosen to be the two core values, denoted by cv1 and cv2. Otherwise, between the two values pv1,j and av1,j, we eliminate the one that is closer to wv2,j; the remaining value is denoted by cv1. Similarly, we obtain cv2 from the two values pv2,j and av2,j. If cv1 = cv2, then cv1 is selected as the initial value for the attribute j as part of the initial tentative agreement. Otherwise, without loss of generality, we assume that cv1 < cv2. The mediator randomly selects p values mv1, ..., mvp from the open interval (cv1, cv2), where p ≥ 1. The mediator then asks the agents to submit their valuations over the set of values {cv1, cv2, mv1, ..., mvp}. The value on which the two agents' valuations are closest is selected as the initial value for the attribute j as part of the initial tentative agreement. The above procedure guarantees that the agents do not gain by behaving strategically. By performing the above procedure on every attribute j ∈ Simp′, we are able to identify an initial tentative agreement x0 such that x0 ∈ Oc. The next step is to compute a new tentative agreement from an existing tentative agreement so that the new one is more efficient than the existing one.

4.2.2 Computing a new tentative agreement

Our procedure is a combination of the method of jointly improving direction introduced by Ehtamo et al. [4] and a method we propose in the coming section. Basically, the idea is to examine how strong the opposition between the parties is. If the two parties are in (local) weak opposition at the current tentative agreement xh, i.e., their improving directions at xh are close to each other, then
the compromise direction proposed by Ehtamo et al. [4] is likely to point to a better agreement for both agents. However, if the two parties are in local strong opposition at the current point xh, then it is unclear whether the compromise direction would bring some benefit to one agent without hurting the other. We first review the method proposed by Ehtamo et al. [4] to compute the compromise direction for a group of negotiators at a given point x ∈ Oc. Ehtamo et al. define a function T(x) that describes the mediator's choice of a compromise direction at x. For the case of two-party negotiation, the following bisecting function, denoted by TBS, can be defined over the interior of Oc. Note that the closed set Oc consists of two disjoint subsets, Oc = Oc0 ∪ OcB, where Oc0 denotes the set of interior points of Oc and OcB denotes the boundary of Oc. The bisecting compromise is defined by a function TBS : Oc0 → Rk:

TBS(x) = ∇u1(x)/‖∇u1(x)‖ + ∇u2(x)/‖∇u2(x)‖,  x ∈ Oc0.    (4)

Given the current tentative agreement xh (h ≥ 0), the mediator has to choose a point xh+1 along d = T(xh) so that all parties gain. Ehtamo et al. then define a mechanism to generate a sequence of points and prove that when the generated sequence is bounded and all generated points belong to the interior set Oc0, the sequence converges to a weakly Pareto-optimal agreement [4, pp.
59-60]; a point x* is weakly Pareto optimal if there is no alternative x ∈ S, the set of alternatives, such that ui(x) > ui(x*) for all agents i. As the above mechanism does not work at the boundary points of Oc, we introduce a procedure that works everywhere in the alternative space Oc. Let x ∈ Oc and let θ(x) denote the angle between the gradients ∇u1(x) and ∇u2(x) at x. That is,

θ(x) = arccos( (∇u1(x)·∇u2(x)) / (‖∇u1(x)‖·‖∇u2(x)‖) )

From Definition 2 it is obvious that the two parties are in local strict opposition (at x) iff θ(x) = π, in local strong opposition iff π ≥ θ(x) > π/2, and in local weak opposition iff π/2 ≥ θ(x) ≥ 0. Note also that the two vectors ∇u1(x) and ∇u2(x) define a hyperplane, denoted by h∇(x), in the k-dimensional space Rk. Furthermore, there are two indifference curves of agents 1 and 2 going through the point x, denoted by IC1(x) and IC2(x), respectively. Let hT1(x) and hT2(x) denote the tangent hyperplanes to the indifference curves IC1(x) and IC2(x), respectively, at the point x. The planes hT1(x) and hT2(x) intersect h∇(x) in the lines IS1(x) and IS2(x), respectively. Note that given a line L(x) going through the point x, there are two (unit) vectors from x along L(x) pointing in opposite directions, denoted by L+(x) and L−(x). We can now informally explain our solution to the problem of searching for joint gains. When it is not possible to obtain a compromise direction for joint improvements at a point x ∈ Oc, either because the compromise vector points outside of the feasible set Oc or because the two parties are in local strong opposition at x, we consider moving along the indifference curve of one party while trying to improve the utility of the other party. As
the mediator does not know the indifference curves of the parties, he has to use the tangent hyperplanes to the indifference curves of the parties at the point x. Note that the tangent hyperplane to a curve is a useful approximation of the curve in the immediate vicinity of the point of tangency x. We now describe an iteration step to reach the next tentative agreement xh+1 from the current tentative agreement xh ∈ Oc. A vector v whose tail is at xh is said to be bounded in Oc if ∃λ > 0 such that xh + λv ∈ Oc. To start, the mediator asks the negotiators for their gradients ∇u1(xh) and ∇u2(xh), respectively, at xh.

1. If xh is a Pareto-optimal outcome according to equation (2) or equation (3), then the process terminates.
2. If 1 ≥ ∇u1(xh)·∇u2(xh) > 0 and the vector TBS(xh) is bounded in Oc, then the mediator chooses the compromise improving direction d = TBS(xh) and applies the method described by Ehtamo et al. [4] to generate the next tentative agreement xh+1.
3. Otherwise, among the four vectors ISσi(xh), i = 1, 2 and σ = +/−, the mediator chooses the vector that (i) is bounded in Oc, and (ii) is closest to the gradient of the other agent, ∇uj(xh) (j ≠ i). Denote this vector by TG(xh). That is, we will be searching for a point on the indifference curve of agent i, ICi(xh), while trying to improve the utility of agent j.
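The three-way choice above can be sketched numerically. The code below is our illustration, not the paper's pseudocode: the construction of the four IS vectors from the tangent hyperplanes is abstracted into an input of (agent, vector) pairs, case 1 is detected by testing whether the unit gradients cancel (equation (2)), `bounded` stands for the "bounded in Oc" predicate, and the upper bound of 1 in case 2 (which presumes normalised gradients) is dropped.

```python
import math

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))

def t_bs(g1, g2):
    """Bisecting compromise direction of equation (4): the sum of the
    two unit gradients, which bisects the angle between them."""
    n1, n2 = norm(g1), norm(g2)
    return [a / n1 + b / n2 for a, b in zip(g1, g2)]

def step_direction(g1, g2, bounded, is_vectors):
    """One iteration step of the mediator's choice (cases 1-3 above).
    is_vectors: the four IS vectors as (agent, vector) pairs, agent in {0, 1};
    case 3 assumes at least one of them is bounded in Oc."""
    # Case 1: gradients exactly opposed, grad u1 = -gamma * grad u2 (eq. (2)):
    # an interior Pareto-optimal point, so the process terminates.
    if all(abs(a / norm(g1) + b / norm(g2)) < 1e-9 for a, b in zip(g1, g2)):
        return None
    # Case 2: weak opposition and the compromise direction stays feasible.
    d = t_bs(g1, g2)
    if dot(g1, g2) > 0 and bounded(d):
        return d
    # Case 3: the bounded IS vector closest to the *other* agent's gradient.
    grads = [g1, g2]
    _, best = max(((i, v) for i, v in is_vectors if bounded(v)),
                  key=lambda iv: dot(iv[1], grads[1 - iv[0]]) / norm(iv[1]))
    return best

always = lambda v: True
print(step_direction([1.0, 1.0], [-2.0, -2.0], always, []))  # None (case 1)
print(step_direction([1.0, 0.0], [-0.5, 1.0], always,
                     [(0, [0.0, 1.0]), (0, [0.0, -1.0]),
                      (1, [1.0, 0.5]), (1, [-1.0, -0.5])]))  # [0.0, 1.0]
```

In the second call the gradients are in strong opposition (negative dot product), so the sketch falls through to case 3 and picks the IS vector on agent 1's indifference curve that best aligns with agent 2's gradient.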
Note that when xh is an interior point of Oc, the situation is symmetric for the two agents 1 and 2, and the mediator has the choice of either finding a point on IC1(xh) to improve the utility of agent 2, or finding a point on IC2(xh) to improve the utility of agent 1. To decide which choice to make, the mediator has to track the distribution of gains throughout the whole process, to avoid giving more gains to one agent than to the other. Now, the point xh+1 to be generated lies somewhere on the intersection of ICi(xh) and the hyperplane defined by ∇ui(xh) and TG(xh). This intersection is approximated by TG(xh). Thus, the sought-after point xh+1 can be generated by first finding a point yh along the direction of TG(xh), and then moving from yh in the direction of ∇ui(xh) until we intersect ICi(xh). Mathematically, let ζ and ξ denote the vectors TG(xh) and ∇ui(xh), respectively; xh+1 is the solution to the following optimisation problem:

max_{λ1,λ2 ∈ L} uj(xh + λ1ζ + λ2ξ)
s.t. xh + λ1ζ + λ2ξ ∈ Oc, and ui(xh + λ1ζ + λ2ξ) = ui(xh),

where L is a suitable interval of positive real numbers, e.g., L = {λ | λ > 0} or L = {λ | a < λ ≤ b}, 0 ≤ a < b. Given an initial tentative agreement x0, the method described above allows a sequence of tentative agreements x1, x2, ...
to be generated iteratively. The iteration stops whenever a weakly Pareto-optimal agreement is reached.

THEOREM 3. If the sequence of agreements generated by the above method is bounded, then the method converges to a point x* ∈ Oc that is weakly Pareto optimal.

5. CONCLUSION AND FUTURE WORK

In this paper we have established a framework for negotiation that is based on MCDM theory for representing the agents' objectives and utilities. The focus of the paper is on integrative negotiation, in which agents aim to maximise joint gains, or "create value." We have introduced a mediator into the negotiation in order to allow negotiators to disclose information about their utilities without providing this information to their opponents. Furthermore, the mediator also works toward the goal of achieving fairness of the negotiation outcome. That is, the approach we describe aims both for efficiency, in the sense that it produces Pareto-optimal outcomes (i.e., no aspect can be improved for one of the parties without worsening the outcome for another party), and for fairness, choosing optimal solutions that distribute gains amongst the agents in some appropriate manner. We have developed a two-step process for addressing the NP-hard problem of finding a solution for a set of integrative attributes that is within the Pareto-optimal set for those attributes. For simple attributes (i.e.
those which have a finite set of values) we use known optimisation techniques to find a Pareto-optimal solution. In order to discourage agents from misrepresenting their utilities to gain an advantage, we look for solutions that are least vulnerable to manipulation. We have shown that as long as none of the simple attributes strongly dominates the others, truth telling is an equilibrium strategy for the negotiators during the stage of optimising simple attributes. For non-simple attributes we propose a mechanism that provides stepwise improvements to move the proposed solution in the direction of a Pareto-optimal solution.

The approach presented in this paper is similar to the ideas behind negotiation analysis [18]. Ehtamo et al. [4] present an approach to searching for joint gains in multi-party negotiations; the relation of their approach to ours is discussed in the preceding section. Lai et al. [12] provide an alternative approach to integrative negotiation. While their approach was clearly described for the case of two-issue negotiations, the generalisation to negotiations with more than two issues is not entirely clear. Zhang et al. [22] discuss the use of integrative negotiation in agent organisations. They assume that agents are honest. Their main result is an experiment showing that in some situations agents' cooperativeness may not bring the most benefit to the organisation as a whole, though they give no explanation for this. Jonker et al. [7] consider an approach to multi-attribute negotiation without the use of a mediator; thus, their approach can be considered a complement of ours. Their experimental results show that agents can reach Pareto-optimal outcomes using their approach.

The details of the approach have currently been shown only for bilateral negotiation, and while we believe they are generalisable to multiple negotiators, this work remains to be done. There is also future work to be done in more fully characterising the
outcomes of the determination of values for the non-simple attributes. In order to provide a complete framework, we are also working on the distributive phase using the mediator.

Acknowledgements. The authors acknowledge financial support by ARC Discovery Grant DP0663147 (2006-2009) and DEST IAP grant CG040014 (2004-2006). The authors would like to thank Lawrence Cavedon and the RMIT Agents research group for their helpful comments and suggestions.

6. REFERENCES
[1] F. Alemi, P. Fos, and W. Lacorte. A demonstration of methods for studying negotiations between physicians and health care managers. Decision Science, 21:633-641, 1990.
[2] M. Ehrgott. Multicriteria Optimization. Springer-Verlag, Berlin, 2000.
[3] H. Ehtamo, R. P. Hämäläinen, P. Heiskanen, J. Teich, M. Verkama, and S. Zionts. Generating Pareto solutions in a two-party setting: Constraint proposal methods. Management Science, 45(12):1697-1709, 1999.
[4] H. Ehtamo, E. Kettunen, and R. P. Hämäläinen. Searching for joint gains in multi-party negotiations. European Journal of Operational Research, 130:54-69, 2001.
[5] P. Faratin. Automated Service Negotiation Between Autonomous Computational Agents. PhD thesis, University of London, 2000.
[6] A. Foroughi. Minimizing negotiation process losses with computerized negotiation support systems. The Journal of Applied Business Research, 14(4):15-26, 1998.
[7] C. M. Jonker, V. Robu, and J. Treur. An agent architecture for multi-attribute negotiation using incomplete preference information. J. Autonomous Agents and Multi-Agent Systems (to appear).
[8] R. L. Keeney and H. Raiffa. Decisions with Multiple Objectives: Preferences and Value Trade-Offs. John Wiley and Sons, New York, 1976.
[9] G. Kersten and S. Noronha. Rational agents, contract curves, and non-efficient compromises. IEEE Systems, Man, and Cybernetics, 28(3):326-338, 1998.
[10] M. Klein, P. Faratin, H. Sayama, and Y.
Bar-Yam. Protocols for negotiating complex contracts. IEEE Intelligent Systems, 18(6):32-38, 2003.
[11] S. Kraus, J. Wilkenfeld, and G. Zlotkin. Multiagent negotiation under time constraints. Artificial Intelligence Journal, 75(2):297-345, 1995.
[12] G. Lai, C. Li, and K. Sycara. Efficient multi-attribute negotiation with incomplete information. Group Decision and Negotiation, 15:511-528, 2006.
[13] D. Lax and J. Sebenius. The manager as negotiator: The negotiator's dilemma: Creating and claiming value. In S. Goldberg, F. Sander, and N. Rogers, editors, Dispute Resolution, 2nd ed., pages 49-62. Little, Brown & Co., 1992.
[14] M. Lomuscio and N. Jennings. A classification scheme for negotiation in electronic commerce. In Agent-Mediated Electronic Commerce: A European AgentLink Perspective. Springer-Verlag, 2001.
[15] R. Maes and A. Moukas. Agents that buy and sell. Communications of the ACM, 42(3):81-91, 1999.
[16] J. Nash. Two-person cooperative games. Econometrica, 21(1):128-140, April 1953.
[17] H. Raiffa. The Art and Science of Negotiation. Harvard University Press, Cambridge, MA, 1982.
[18] H. Raiffa, J. Richardson, and D. Metcalfe. Negotiation Analysis: The Science and Art of Collaborative Decision Making. Belknap Press, Cambridge, MA, 2002.
[19] T. Sandholm. Agents in electronic commerce: Component technologies for automated negotiation and coalition formation. JAAMAS, 3(1):73-96, 2000.
[20] J. Sebenius. Negotiation analysis: A characterization and review. Management Science, 38(1):18-38, 1992.
[21] L. Weingart, E. Hyder, and M. Pietrula. Knowledge matters: The effect of tactical descriptions on negotiation behavior and outcome. Tech. report, CMU, 1995.
[22] X. Zhang, V. R. Lesser, and T.
Wagner. Integrative negotiation among agents situated in organizations. IEEE Trans. on Systems, Man, and Cybernetics, Part C, 36(1):19-30, 2006.

Searching for Joint Gains in Automated Negotiations Based on Multi-criteria Decision Making Theory

ABSTRACT

It is well established by conflict theorists and others that successful negotiation should incorporate "creating value" as well as "claiming value." Joint improvements that bring benefits to all parties can be realised by (i) identifying attributes that are not in direct conflict between the parties, (ii) trade-offs on attributes that are valued differently by different parties, and (iii) searching for values within attributes that could bring more gains to one party while not incurring too much loss on the other party. In this paper we propose an approach for maximising joint gains in automated negotiations by formulating the negotiation problem as a multi-criteria decision making problem and taking advantage of several optimisation techniques introduced by operations researchers and conflict theorists. We use a mediator to protect the negotiating parties from unnecessary disclosure of information to their opponent, while also allowing an objective calculation of maximum joint gains. We separate out attributes that take a finite set of values (simple attributes) from those with continuous values, and we show that for simple attributes, the mediator can determine the Pareto-optimal values. In addition, we show that if none of the simple attributes strongly dominates the other simple attributes, then truth telling is an equilibrium strategy for negotiators during the optimisation of simple attributes. We also describe an approach for improving joint gains on non-simple attributes, by moving the parties in a series of steps towards the Pareto-optimal frontier.

1. INTRODUCTION

Given that negotiation is
perhaps one of the oldest activities in the history of human communication, it is perhaps surprising that experiments on negotiation have shown that negotiators more often than not reach inefficient compromises [1, 21]. Raiffa [17] and Sebenius [20] analyse negotiators' failure to achieve efficient agreements in practice and their unwillingness to disclose private information for strategic reasons. According to conflict theorists Lax and Sebenius [13], most negotiation actually involves both integrative and distributive bargaining, which they refer to as "creating value" and "claiming value." They argue that negotiation necessarily includes both cooperative and competitive elements, and that these elements exist in tension. Negotiators face a dilemma in deciding whether to pursue a cooperative or a competitive strategy at a particular time during a negotiation. They refer to this problem as the Negotiator's Dilemma.

We argue that the Negotiator's Dilemma is essentially information-based, arising from the private information held by the agents. Such private information includes both the information that implies an agent's bottom lines (her walk-away positions) and the information that underpins her bargaining strength. For instance, when bargaining to sell a house to a potential buyer, the seller tries to hide her actual reserve price as much as possible, since she hopes to reach an agreement at a price well above it. On the other hand, the outside options available to her (e.g. other buyers who have expressed genuine interest with fairly good offers) constitute information that improves her bargaining strength, and this she would like to convey to her opponent. At the same time, her opponent is well aware that she has an incentive to exaggerate her bargaining strength, and thus will not accept any information she sends out unless it is substantiated by evidence.

Returning to the Negotiator's Dilemma, it is not always possible to separate the integrative bargaining process from the distributive bargaining process. In fact, more often than not, the two processes interact, so that information manipulation becomes part of the integrative bargaining process. This is because a negotiator can use information about his opponent's interests against her during the distributive negotiation process. That is, a negotiator may refuse to concede on an important conflicting issue by claiming that he has made a major concession (on another issue) to meet his opponent's interests, even though that concession may be insignificant to him. For instance, few buyers would open a bargaining session with a dealer over a notebook computer by declaring that they are most interested in an extended warranty for the item and are therefore prepared to pay a high price for one.

Negotiation Support Systems (NSSs) and negotiating software agents (NSAs) have been introduced either to assist humans in making decisions or to enable automated negotiation, allowing computer processes to engage in meaningful negotiation to reach agreements (see, for instance, [14, 15, 19, 6, 5]). However, because of the Negotiator's Dilemma, and given even bargaining power and incomplete information, two undesirable situations often arise: (i) the negotiators reach an inefficient compromise, or (ii) the negotiators reach a deadlock in which both refuse to act on incomplete information
and at the same time do not want to disclose more information.

In this paper, we argue for the role of a mediator in resolving these two issues. The mediator plays two roles in a negotiation: (i) to encourage cooperative behaviour among the negotiators, and (ii) to absorb the negotiators' information disclosures, preventing them from using uncertainty and private information as a strategic device. To take advantage of existing results in the negotiation analysis and operations research (OR) literature [18], we employ multi-criteria decision making (MCDM) theory to represent and analyse the negotiation problem.

Section 2 provides background on MCDM theory and the negotiation framework. Section 3 formulates the problem. In Section 4, we discuss our approach to integrative negotiation. Section 5 discusses future work, with some concluding remarks.

2. BACKGROUND

2.1 Multi-criteria decision making theory

Let A denote the set of feasible alternatives available to a decision maker M. Since an act, or decision, a in A may involve multiple aspects, we usually describe an alternative a by a set of attributes j (j = 1, ..., m). (Attributes are also referred to as issues, or decision variables.) A typical decision maker also has several objectives X1, ..., Xk. We assume that each Xi (i = 1, ..., k) maps the alternatives to real numbers. Thus, the tuple (x1, ..., xk) = (X1(a), ..., Xk(a)) denotes the consequence of the act a to the decision maker M.
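The correspondence between acts, objectives, and consequence tuples can be sketched in code. This is only an illustration: the attribute names "cost" and "quality" and the two objective functions are invented here (echoing the cost/quality example discussed in the text), not taken from the paper.

```python
# Minimal sketch of the MCDM setup: an alternative a is described by
# attribute values, each objective X_i maps an alternative to a real
# number, and together the objectives give the consequence tuple
# (X_1(a), ..., X_k(a)). Attributes and objectives here are invented.

from typing import Callable, Dict, List, Tuple

Alternative = Dict[str, float]  # attribute name -> attribute value

def consequence(a: Alternative,
                objectives: List[Callable[[Alternative], float]]
                ) -> Tuple[float, ...]:
    """Return the consequence tuple (X_1(a), ..., X_k(a))."""
    return tuple(X(a) for X in objectives)

# Two hypothetical objectives for a decision maker M, both to be maximised:
def neg_cost(a: Alternative) -> float:   # "minimise cost", as maximisation
    return -a["cost"]

def quality(a: Alternative) -> float:    # "maximise quality of services"
    return a["quality"]

a1 = {"cost": 100.0, "quality": 0.8}
print(consequence(a1, [neg_cost, quality]))   # (-100.0, 0.8)
```

Note that conflicting objectives show up directly in the consequence tuple: improving one coordinate typically worsens another, which is what forces the compromise discussed next.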
By definition, objectives are statements that delineate the desires of a decision maker. Thus, M wishes to maximise his objectives. However, as discussed thoroughly by Keeney and Raiffa [8], it is quite likely that a decision maker's objectives conflict with each other, in that improved achievement on one objective can only be accomplished at the expense of another. For instance, most businesses and public services have objectives like "minimise cost" and "maximise the quality of services." Since better services can often only be attained for a price, these objectives conflict. Due to the conflicting nature of his objectives, M usually has to settle on a compromise solution; that is, he may have to choose an act a in A that does not optimise every objective. This is the topic of multi-criteria decision making theory. Part of the solution to this problem is that M has to try to identify the Pareto frontier in the consequence space {(X1(a), ..., Xk(a))} over a in A.

5. CONCLUSION AND FUTURE WORK

In this paper we have established a framework for negotiation that is based on MCDM theory for representing the agents' objectives and utilities. The focus of the paper is on integrative negotiation, in which agents aim to maximise joint gains, or "create value." We have introduced a mediator into the negotiation in order to allow negotiators to disclose information about their utilities without providing this information to their opponents. Furthermore, the mediator also works toward the goal of achieving fairness of the negotiation outcome. That is, the approach that we describe aims both for efficiency, in the sense that it produces Pareto-optimal outcomes (i.e. no aspect can be improved for one of the parties without worsening the outcome for another party), and for fairness, choosing optimal solutions that distribute the gains amongst the agents in some appropriate manner.

We have developed a two-step process for addressing the NP-hard problem of finding a solution, for a set of integrative attributes, that is within the Pareto-optimal set for those attributes. For simple attributes (i.e.
those which have a finite set of values) we use known optimisation techniques to find a Pareto-optimal solution. In order to discourage agents from misrepresenting their utilities to gain an advantage, we look for solutions that are least vulnerable to manipulation. We have shown that, as long as no simple attribute strongly dominates the others, truth telling is an equilibrium strategy for the negotiators during the stage of optimising simple attributes. For non-simple attributes, we propose a mechanism that provides stepwise improvements, moving the proposed solution in the direction of a Pareto-optimal solution.

The approach presented in this paper is similar to the ideas behind negotiation analysis [18]. Ehtamo et al. [4] present an approach to searching for joint gains in multi-party negotiations; the relation of their approach to ours is discussed in the preceding section. Lai et al. [12] provide an alternative approach to integrative negotiation. While their approach is clearly described for the case of two-issue negotiations, its generalisation to negotiations with more than two issues is not entirely clear. Zhang et al. [22] discuss the use of integrative negotiation in agent organisations. They assume that agents are honest. Their main result is an experiment showing that in some situations agents' cooperativeness may not bring the most benefit to the organisation as a whole, though they give no explanation. Jonker et al. [7] consider an approach to multi-attribute negotiation without the use of a mediator; their approach can thus be considered a complement of ours. Their experimental results show that agents can reach Pareto-optimal outcomes using their approach. The details of the approach have currently been shown only for bilateral negotiation, and while we believe they are generalisable to multiple negotiators, this work remains to be done. There is also future work to be done in more fully characterising the outcomes of the determination of values for the non-simple attributes. In order to provide a complete framework, we are also working on the distributive phase using the mediator.

DEFINITION 1. (Dominant) Let x = (x1, ..., xk) and x' = (x'1, ..., x'k) be two consequences. x dominates x' iff xi ≥ x'i for all i, and the inequality is strict for at least one i.

The Pareto frontier in a consequence space then consists of all consequences that are not dominated by any other consequence. This is illustrated in Fig.
1 in which an alternative consists of two attributes d1 and d2 and the decision maker tries to maximise the two objectives X1 and X2. A decision a in A whose consequence does not lie on the Pareto frontier is inefficient.

Figure 1: The Pareto frontier

While the Pareto frontier allows M to avoid taking inefficient decisions, M still has to decide which of the efficient consequences on the Pareto frontier he most prefers. MCDM theorists introduce a mechanism by which the objective components of consequences are normalised to payoff valuations for the objectives. Consequences can then be ordered: if the gains in satisfaction brought about by C1 (in comparison to C2) equal the losses in satisfaction brought about by C1 (in comparison to C2), then the two consequences C1 and C2 are considered indifferent. M can now construct the set of indifference curves in the consequence space (the dashed curves in Fig. 1). (Strictly, given the k-dimensional space, these should be called indifference surfaces, but we will not descend to that level of detail.) The most preferred indifference curve that intersects the Pareto frontier is in focus: its intersection with the Pareto frontier is the sought-after consequence (i.e., the optimal consequence in Fig. 1).

2.2 A negotiation framework

A multi-agent negotiation framework consists of:

1. A set of two negotiating agents N = {1, 2}.

2. A set of attributes Att = {α1, ..., αm} characterising the issues the agents are negotiating over. Each attribute α can take a value from the set Valα.

3. A set of alternative outcomes O. An outcome o in O is represented by an assignment of values to the corresponding attributes in Att.

4. Agents' utility: Based on the theory of multiple-criteria decision making [8], we define the agents' utility as follows:

• Objectives: Agent i has a set of ni objectives, or interests, denoted by j (j = 1, ..., ni). To measure how much an outcome o fulfils an objective j for an agent i, we use objective functions: for each agent i, we define i's interests using the objective vector function fi = [fij]: O → R^ni.

• Value functions: Instead of directly evaluating an outcome o, agent i looks at how much his objectives are fulfilled and makes a valuation based on these more basic criteria. Thus, for each agent i, there is a value function σi: R^ni → R. In particular, Raiffa [17] shows how to systematically construct an additive value function for each party involved in a negotiation.

• Utility: Given an outcome o in O, an agent i is then able to determine its value, i.e., σi(fi(o)). However, a negotiation infrastructure is usually required to facilitate negotiation. This might involve other mechanisms and factors/parties, e.g., a mediator, a legal institution, participation fees, etc. The standard way to implement such things is to allow money and side-payments. In this paper, we ignore those side-effects and assume that agent i's utility function vi is normalised so that vi: O → [0, 1].

EXAMPLE 1. There are two agents, A and B. Agent A has a task T that needs to be done and also 100 units of a resource R. Agent B has the capacity to perform task T and would like to obtain at least 10 and at most 20 units of the resource R. Agent B is indifferent between any amounts of the resource R from 10 to 20 units. The objective functions for both agents A and B are cost and revenue, and both aim at minimising costs while maximising revenues. Having T done generates for A a revenue rA,T, while doing T incurs a cost cB,T on B. Agent B obtains a revenue rB,R for each unit of the resource R, while providing each unit of the resource R costs agent A cA,R. Assuming that money transfer between agents is possible, the set Att then contains three attributes:

• T, taking values from the set {0, 1}, indicating whether the task T is assigned to agent B;
• R, taking values from the set of non-negative integers, indicating the amount of resource R allocated to agent B; and
• MT, taking values from R, indicating the payment p to be transferred from A to B.

3. PROBLEM FORMALISATION

Consider Example 1, and assume that rA,T = $150, cB,T = $100, rB,R = $10 and cA,R = $7. That is, the revenue generated for A exceeds the cost incurred by B to do task T, and B values resource R more highly than it costs A to provide it. The optimal solution to this problem scenario is to assign task T to agent B and to allocate 20 units of resource R (i.e., the maximal amount of resource R required by agent B) from agent A to agent B. This outcome of the resource and task allocation problems leaves payoffs of $10 to agent A and $100 to agent B.
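The payoffs claimed for this allocation can be checked by direct arithmetic. A quick sketch (the variable names are ours, not the paper's; the figures are those stated in the example):

```python
# Check of Example 1's arithmetic: assign task T to B and transfer 20 units
# of R from A to B, before any money transfer p (which is bargained over
# separately). All figures come from the example; names are illustrative.

r_A_T = 150   # A's revenue from having task T done
c_B_T = 100   # B's cost of performing task T
r_B_R = 10    # B's revenue per unit of resource R
c_A_R = 7     # A's cost of providing one unit of resource R

units = 20    # the maximal amount of R that B wants

payoff_A = r_A_T - c_A_R * units   # 150 - 140 = 10
payoff_B = r_B_R * units - c_B_T   # 200 - 100 = 100

print(payoff_A, payoff_B)   # 10 100
```

Allocating fewer than 20 units shrinks the joint surplus by $3 per unit (B's $10 valuation less A's $7 cost), which is why 20 units is the efficient quantity.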
Any other outcome would leave at least one of the agents worse off.² In other words, the presented outcome is Pareto-efficient and should be part of the solution outcome for this problem scenario. However, as the agents still have to bargain over the amount of money transfer p, neither agent would be willing to disclose their respective costs and revenues regarding the task T and the resource R. As a consequence, agents often do not achieve the optimal outcome presented above in practice. To address this issue, we introduce a mediator to help the agents discover better agreements than the ones they might try to settle on. Note that this problem is essentially the problem of searching for joint gains in a multilateral negotiation in which the involved parties hold strategic information, i.e., the integrative part of a negotiation. In order to help facilitate this process, we introduce the role of a neutral mediator. Before formalising the decision problems faced by the mediator and the negotiating agents, we discuss the properties of the solution outcomes to be achieved by the mediator.

²Certainly, without a money transfer to compensate agent A, this outcome is not a fair one.

In a negotiation setting, the two typical design goals would be:
• Efficiency: Prevent the agents from settling on an outcome that is not Pareto-optimal; and
• Fairness: Avoid agreements that give most of the gains to a subset of agents while leaving the rest with too little.
The above goals are axiomatised in Nash's seminal work [16] on cooperative negotiation games. Essentially, Nash advocates the following properties to be satisfied by a solution to the bilateral negotiation problem: (i) it produces only Pareto-optimal outcomes; (ii) it is invariant to affine transformations (of the consequence space); (iii) it is symmetric; and (iv) it is independent of irrelevant alternatives. A solution satisfying Nash's axioms is called a Nash bargaining solution. It then
turns out that, by taking the negotiators' utilities as its objectives, the mediator itself faces a multi-criteria decision making problem. The issues faced by the mediator are: (i) the mediator requires access to the negotiators' utility functions, and (ii) making (fair) tradeoffs between different agents' utilities. Our methods allow the agents to repeatedly interact with the mediator so that a Nash solution outcome can be found by the parties. Informally, the problem faced by both the mediator and the negotiators is the construction of the indifference curves. Why are the indifference curves so important?
• To the negotiators, knowing the options available along indifference curves opens up opportunities to reach more efficient outcomes. For instance, consider an agent A who is presenting his opponent with an offer θA which she refuses to accept. Rather than having to concede, A could look at his indifference curve going through θA and choose another proposal θ'A.
To him, θA and θ'A are indifferent, but θ'A could give some gains to B and thus will be more acceptable to B. In other words, the outcome θ'A is more efficient than θA to these two negotiators.
• To the mediator, constructing indifference curves requires a measure of fairness between the negotiators. The mediator needs to determine how much utility it needs to take away from the other negotiators to give a particular negotiator a specific gain G (in utility).
In order to search for integrative solutions within the outcome space O, we characterise the relationship between the agents over the set of attributes Att. As the agents hold different objectives and have different capacities, it may be the case that changing between two values of a specific attribute implies different shifts in utility for the agents. However, the problem of finding the exact Pareto-optimal set³ is NP-hard [2]. Our approach is thus to solve this optimisation problem in two steps. In the first step, the more manageable attributes are solved; these are the attributes that take a finite set of values. The result of this step is a subset of outcomes that contains the Pareto-optimal set. In the second step, we employ an iterative procedure that allows the mediator to interact with the negotiators to find joint improvements that move towards a Pareto-optimal outcome. This approach will not work unless the attributes from Att are independent. Most works on multi-attribute, or multi-issue, negotiation (e.g. [17]) assume that the attributes or the issues are independent, resulting in an additive value function for each agent.⁴

³The Pareto-optimal set is the set of outcomes whose consequences (in the consequence space) correspond to the Pareto frontier.

ASSUMPTION 1. Let S be any subset of the attributes, and let S̄ denote the set Att \ S.
Assume that vS and v'S are two assignments of values to the attributes of S, and v¹S̄, v²S̄ are two arbitrary value assignments to the attributes of S̄; then ui([vS, v¹S̄]) − ui([v'S, v¹S̄]) = ui([vS, v²S̄]) − ui([v'S, v²S̄]). That is, the utility function of agent i is defined on the attributes from S independently of any value assignment to the other attributes.

4. MEDIATOR-BASED BILATERAL NEGOTIATIONS
As discussed by Lax and Sebenius [13], under incomplete information the tension between creating and claiming value is the primary cause of inefficient outcomes. This can be seen most easily in negotiations involving two negotiators: during the distributive phase of the negotiation, the two negotiators' objectives directly oppose each other. We will now formally characterise this relationship between negotiators by defining the opposition between two negotiating parties. The following exposition is mainly reproduced from [9]. Assume for the moment that all attributes from Att take values from the set of real numbers R, i.e., Valj ⊆ R for all j ∈ Att. We further assume that the set O = ×_{j∈Att} Valj of feasible outcomes is defined by constraints that all parties must obey, and that O is convex. Now, an outcome o ∈ O is just a point in the m-dimensional space of real numbers. Then, the questions are: (i) from the point of view of an agent i, is o already the best outcome for i? (ii) if o is not the best outcome for i, then is there another outcome o' such that o' gives i a better utility than o and o' does not cause a utility loss to the other agent j in comparison to o? The above questions can be answered by looking at the directions of improvement of the negotiating parties at o, i.e., the directions in the outcome space O into which their utilities increase at point o.
Under the assumption that the parties' utility functions ui are differentiable and concave, the set of all directions of improvement for a party at a point o can be defined in terms of his most preferred, or gradient, direction at that point. When the gradient direction ∇ui(o) of agent i at point o directly opposes the gradient direction ∇uj(o) of agent j at point o, the two parties strongly disagree at o and no joint improvements can be achieved for i and j in the locality surrounding o. Since opposition between the two parties can vary considerably over the outcome space (with one pair of outcomes considered highly antagonistic and another pair being highly cooperative), we need to describe the local properties of the relationship. We begin with the opposition at any point of the outcome space R^m. The following definition is reproduced from [9]:
3. The parties are in local weak opposition at a point x ∈ R^m iff ∇u1(x) · ∇u2(x) ≥ 0, i.e., iff the gradients at x of the two utility functions form an acute or right angle.
4. The parties are in local strong opposition at a point x ∈ R^m iff ∇u1(x) · ∇u2(x) < 0, i.e., iff the gradients at x form an obtuse angle.
5. The parties are in global strict (nonstrict, weak, strong) opposition iff for every x ∈ R^m they are in local strict (nonstrict, weak, strong) opposition.
Global strict and nonstrict opposition are complementary cases. Essentially, under global strict opposition the whole outcome space O becomes the Pareto-optimal set, as at no point in O can the negotiating parties make a joint improvement, i.e., every point in O is a Pareto-efficient outcome. In other words, under global strict opposition the outcome space O can be flattened out into a single line such that for each pair of outcomes x, y ∈ O, u1(x) > u1(y) iff u2(x) < u2(y), i.e., at every point in O the gradients of the two utility functions point to the two different ends of the line. Intuitively, global strict opposition implies that there is no way to
obtain joint improvements for both agents. As a consequence, the negotiation degenerates to a distributive negotiation: the negotiating parties should try to claim as large a share of the negotiation issues as possible, while the mediator should aim for fairness of the division. On the other hand, global nonstrict opposition allows room for joint improvements, and all parties might be better off trying to realise the potential gains by reaching Pareto-efficient agreements. Weak and strong opposition indicate different levels of opposition. The weaker the opposition, the more potential gains can be realised, making cooperation the better strategy to employ during negotiation. Conversely, stronger opposition suggests that the negotiating parties tend to behave strategically, leading to misrepresentation of their respective objectives and utility functions and making joint gains more difficult to realise. We have been temporarily making the assumption that the outcome space O is a subset of R^m. In many real-world negotiations, this assumption would be too restrictive. We will continue our exposition by lifting this restriction and allowing discrete attributes. However, as most negotiations involve only discrete issues with a bounded number of options, we will assume that each attribute takes values either from a finite set or from the set of real numbers R. In the rest of the paper, we will refer to attributes whose values are from finite sets as simple attributes and to attributes whose values are from R as continuous attributes. The notions of local opposition, i.e., strict, nonstrict, weak and strong, are not applicable to outcome spaces that contain simple attributes, and neither are the notions of global weak and strong opposition. However, the notions of global strict and nonstrict opposition can be generalised to outcome spaces that contain simple attributes. DEFINITION 3. Given an outcome space O, the parties are in global strict
opposition iff ∀x, y ∈ O, u1(x) > u1(y) iff u2(x) < u2(y). The parties are in global nonstrict opposition if they are not in global strict opposition.

4.1 Optimisation on simple attributes
In order to extract the optimal values for a subset of attributes, in the first step of this optimisation process the mediator requests the negotiators to submit their respective utility functions over the set of simple attributes. Let Simp ⊆ Att denote the set of all simple attributes from Att, and let S̄imp = Att \ Simp. Note that, due to Assumption 1, agent i's utility function can be characterised as follows: ui([vSimp, vS̄imp]) = wi1 · ui,1([vSimp]) + wi2 · ui,2([vS̄imp]), where ui,1 and ui,2 are the utility components of ui over the sets of attributes Simp and S̄imp, respectively. Moreover, by Assumption 1 the component ui,1 is itself additive over the individual simple attributes, with weight wij attached to attribute j ∈ Simp. An attribute f ∈ Simp is said to be strongly dominant (for agent i) wrt. the set of simple attributes Simp iff wif > Σ_{j∈Simp\{f}} wij. The following theorem shows that if the set of attributes Simp does not contain a strongly dominant attribute, then the negotiators have no incentive to be dishonest.
THEOREM 1. Given a negotiation framework, if for every agent the set of simple attributes does not contain a strongly dominant attribute, then truth-telling is an equilibrium strategy for the negotiators during the optimisation of simple attributes.
So far, we have been concentrating on the efficiency issue while leaving the fairness issue aside. A fair framework not only supports a more satisfactory distribution of utility among the agents, but is also often a good measure to prevent misrepresentation of private information by the agents. Of the three solutions presented above, the utilitarian solution does not support fairness. On the
other hand, Nash [16] proves that the Nash solution satisfies the above four axioms for cooperative bargaining games and is considered a fair solution. The egalitarian solution is another mechanism to achieve fairness, essentially by helping the worst off. The problem with these solutions, as discussed earlier, is that they are vulnerable to strategic behaviour when one of the attributes strongly dominates the rest of the attributes. However, there is yet another solution that aims to guarantee fairness: the minimax solution, under which the utility of the agent with maximum utility is minimised. The minimax solution can obviously produce inefficient outcomes. However, to get around this problem (given that the Pareto-optimal set can be tractably computed), we can apply this solution over the Pareto-optimal set only. Let POSet ⊆ ValSimp be the Pareto-optimal subset of the simple outcomes; the minimax solution is then defined to be the solution of the following optimisation problem: arg min_{v∈POSet} max_i ui([v]). While overall efficiency often suffers under a minimax solution, i.e., the sum of all agents' utilities is often lower than under other solutions, it can be shown that the minimax solution is less vulnerable to manipulation.
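The two-stage selection just described (restrict to the Pareto-optimal simple outcomes, then minimise the maximum utility) can be sketched by enumeration. The outcome space and both utility functions below are illustrative assumptions, not from the paper.

```python
from itertools import product

def pareto_optimal(outcomes, utils):
    """Keep outcomes not weakly dominated (with one strict gain) by another."""
    def dominated(o):
        uo = [u(o) for u in utils]
        return any(all(u(p) >= x for u, x in zip(utils, uo)) and
                   any(u(p) > x for u, x in zip(utils, uo))
                   for p in outcomes if p != o)
    return [o for o in outcomes if not dominated(o)]

def minimax(outcomes, utils):
    """Minimax solution restricted to the Pareto-optimal set."""
    po = pareto_optimal(outcomes, utils)
    return min(po, key=lambda o: max(u(o) for u in utils))

# Two simple attributes with three values each; two agents with
# opposing (constant-sum) additive utilities.
outcomes = list(product(range(3), range(3)))
u1 = lambda o: o[0] + 2 * o[1]
u2 = lambda o: (2 - o[0]) + 2 * (2 - o[1])
print(minimax(outcomes, (u1, u2)))   # -> (1, 1)
```

Because the utilities here are constant-sum, every outcome is Pareto-optimal, and the minimax criterion selects the outcome splitting the total utility evenly.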
THEOREM 2. Given a negotiation framework, under the minimax solution, if the negotiators are uncertain about their opponents' preferences, then truth-telling is an equilibrium strategy for the negotiators during the optimisation of simple attributes.
That is, even when there is only one single simple attribute, if an agent is uncertain whether the other agent's most preferred resolution is also his own most preferred resolution, then he should opt for truth-telling as the optimal strategy.

4.2 Optimisation on continuous attributes
When the attributes take values from infinite sets, we assume that they are continuous. This is similar to the common practice in operations research in which linear programming solutions/techniques are applied to integer programming problems. We denote the number of continuous attributes by k, i.e., Att = Simp ∪ S̄imp and |S̄imp| = k. Then, the outcome space O can be represented as follows: O = (∏_{j∈Simp} Valj) × (∏_{l∈S̄imp} Vall), where ∏_{l∈S̄imp} Vall ⊆ R^k is the continuous component of O.
Let Oc denote the set ∏_{l∈S̄imp} Vall. We will refer to Oc as the feasible set and assume that Oc is closed and convex. After carrying out the optimisation over the set of simple attributes, we are able to assign the optimal values to the simple attributes from Simp. Thus, we reduce the original problem to the problem of searching for optimal (and fair) outcomes within the feasible set Oc. Recall that, by Assumption 1, we can characterise agent i's utility function as follows: ui([v*Simp, vS̄imp]) = C + wi2 · ui,2([vS̄imp]), where C is the constant wi1 · ui,1([v*Simp]) and v*Simp denotes the optimal values of the simple attributes in Simp. Hence, without loss of generality (albeit with a blatant abuse of notation), we can take agent i's utility function to be ui : R^k → [0, 1]. Accordingly, we will also take the set of outcomes under consideration by the agents to be the feasible set Oc. We now state another assumption to be used in this section:
ASSUMPTION 2. The negotiators' utility functions can be described by continuously differentiable and concave functions ui : R^k → [0, 1] (i = 1, 2).
It should be emphasised that we do not assume that agents explicitly know their utility functions. For the method described in the following to work, we only assume that the agents know the relevant information, e.g.
at a certain point within the feasible set Oc, the gradient directions of their own utility functions and some section of their respective indifference curves. Assume that a tentative agreement (which is a point x ∈ R^k) is currently on the table; the process for the agents to jointly improve this agreement in order to reach a Pareto-optimal agreement can be described as follows. The mediator asks the negotiators to discreetly submit their respective gradient directions at x, i.e., ∇u1(x) and ∇u2(x). Note that the goal of the process described here is to search for agreements that are more efficient than the tentative agreement currently on the table. That is, we are searching for points x' within the feasible set Oc such that moving to x' from the current tentative agreement x brings more gains to at least one of the agents while not hurting any of the agents. Because the feasible set Oc is bounded, the conditions for an alternative x ∈ Oc to be efficient vary depending on the position of x. The first-order efficiency conditions are proved in [9]: let B(x) = 0 denote the equation of the boundary of Oc, with x ∈ Oc iff B(x) ≥ 0; an alternative x* ∈ Oc is efficient iff it satisfies the corresponding conditions (referred to below as equations 2 and 3).
We are now interested in answering the following questions: (i) What is the initial tentative agreement x0? (ii) How do we find a more efficient agreement xh+1, given the current tentative agreement xh?

4.2.1 Determining a fair initial tentative agreement
It should be emphasised that the choice of the initial tentative agreement affects the fairness of the final agreement to be reached by the presented method. For instance, if the initial tentative agreement x0 is chosen to be the most preferred alternative of one of the agents, then it is also a Pareto-optimal outcome, making it impossible to find any joint improvement from x0. However, if x0 is then chosen to be the final settlement and x0 turns out to be the worst
alternative for the other agent, then this outcome is a very unfair one. Thus, it is important that the choice of the initial tentative agreement be sensibly made. Ehtamo et al [3] present several methods to choose the initial tentative agreement (called the reference point in their paper). However, their goal is to approximate the Pareto-optimal set by systematically choosing a set of reference points. Once an (approximate) Pareto-optimal set is generated, it is left to the negotiators to decide which of the generated Pareto-optimal outcomes is to be chosen as the final settlement; that is, distributive negotiation will then be required to settle the issue. We, on the other hand, are interested in a fair initial tentative agreement which is not necessarily efficient. Improving a given tentative agreement to yield a Pareto-optimal agreement is considered in the next section. For each attribute j ∈ S̄imp, an agent i will be asked to discreetly submit three values (from the set Valj): the most preferred value, denoted by pvi,j; the least preferred value, denoted by wvi,j; and a value that gives i an approximately average payoff, denoted by avi,j.
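The selection of the per-attribute "core" values described next can be sketched directly from the text. The threshold Δ and the tie-breaking direction (keeping the candidate farther from the opponent's least preferred value) follow the description below; where the text is ambiguous, the choices here are assumptions.

```python
# Sketch of the core-value selection for one continuous attribute.
# pv/av/wv are an agent's most preferred, average-payoff, and least
# preferred values for the attribute; delta is the closeness threshold.
def core_values(pv1, av1, wv1, pv2, av2, wv2, delta=1.0):
    """Return the two core values (cv1, cv2)."""
    if abs(pv1 - pv2) < delta:
        # The preferred values are close enough: use them directly.
        return pv1, pv2
    # Otherwise, among {pv, av}, eliminate the value closer to the
    # opponent's least preferred value; keep the other as the core value.
    cv1 = pv1 if abs(pv1 - wv2) >= abs(av1 - wv2) else av1
    cv2 = pv2 if abs(pv2 - wv1) >= abs(av2 - wv1) else av2
    return cv1, cv2

# Agent 1 prefers high values of the attribute, agent 2 low values.
print(core_values(pv1=10.0, av1=6.0, wv1=0.0,
                  pv2=0.0, av2=4.0, wv2=10.0))   # -> (6.0, 4.0)
```

Here each agent's most preferred value coincides with the opponent's least preferred value, so both are eliminated and the average-payoff values 6.0 and 4.0 become the core values.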
(Note that this is possible because the set Valj is bounded.) If pv1,j and pv2,j are sufficiently close, i.e., |pv1,j − pv2,j| < Δ for some pre-defined Δ > 0, then pv1,j and pv2,j are chosen to be the two "core" values, denoted by cv1 and cv2. Otherwise, between the two values pv1,j and av1,j, we eliminate the one that is closer to wv2,j; the remaining value is denoted by cv1. Similarly, we obtain cv2 from the two values pv2,j and av2,j. If cv1 = cv2, then cv1 is selected as the initial value for attribute j as part of the initial tentative agreement. Otherwise, without loss of generality, we assume that cv1 < cv2.

Let θ(x) denote the angle between the gradients ∇u1(x) and ∇u2(x) at a point x: the parties are in local strong opposition at x iff θ(x) > π/2, and they are in local weak opposition iff π/2 ≥ θ(x) ≥ 0. Note also that the two vectors ∇u1(x) and ∇u2(x) define a hyperplane, denoted by h∇(x), in the k-dimensional space R^k. Furthermore, there are two indifference curves of agents 1 and 2 going through point x, denoted by IC1(x) and IC2(x), respectively. Let hT1(x) and hT2(x) denote the tangent hyperplanes to the indifference curves IC1(x) and IC2(x), respectively, at point x. The planes hT1(x) and hT2(x) intersect h∇(x) in the lines IS1(x) and IS2(x), respectively. Note that, given a line L(x) going through the point x, there are two (unit) vectors from x along L(x) pointing in two opposite directions, denoted by L+(x) and L−(x). We can now informally explain our solution to the problem of searching for joint gains. When it is not possible to obtain a compromise direction for joint improvement at a point x ∈ Oc, either because the compromise vector points to the space outside of the feasible set Oc or because the two parties are in local strong opposition at x, we consider moving along the indifference curve of one party while trying to improve the utility of the other party. As⁵

⁵Let S be the set of alternatives; x* is weakly Pareto optimal if there is no x ∈
S such that ui(x) > ui(x*) for all agents i.

the mediator does not know the indifference curves of the parties, he has to use the tangent hyperplanes to the indifference curves of the parties at point x. Note that the tangent hyperplane to a curve is a useful approximation of the curve in the immediate vicinity of the point of tangency, x. We now describe an iteration step to reach the next tentative agreement xh+1 from the current tentative agreement xh ∈ Oc. A vector v whose tail is xh is said to be bounded in Oc if ∃λ > 0 such that xh + λv ∈ Oc. To start, the mediator asks the negotiators for their gradients ∇u1(xh) and ∇u2(xh), respectively, at xh.
1. If xh is a Pareto-optimal outcome according to equation 2 or equation 3, then the process is terminated.
2. If 1 ≥ ∇u1(xh) · ∇u2(xh) > 0 and the vector TBS(xh) is bounded in Oc, then the mediator chooses the compromise improving direction d = TBS(xh) and applies the method described by Ehtamo et al [4] to generate the next tentative agreement xh+1.
3. Otherwise, among the four vectors ISi^σ(xh), i = 1, 2 and σ = +/−, the mediator chooses the vector that (i) is bounded in Oc, and (ii) is closest to the gradient of the other agent, ∇uj(xh) (j ≠ i). Denote this vector by TG(xh). That is, we will be searching for a point on the indifference curve of agent i, ICi(xh), while trying to improve the utility of agent j.
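One compromise-improvement iteration can be sketched numerically. The utilities, the unconstrained feasible set, and the fixed-step line search below are illustrative assumptions standing in for the method of Ehtamo et al [4]; the sketch follows the bisector of the two unit gradients as the compromise direction, stepping only while both utilities strictly improve.

```python
def grad(u, x, h=1e-6):
    """Central finite-difference gradient of u at point x."""
    g = []
    for k in range(len(x)):
        xp, xm = list(x), list(x)
        xp[k] += h
        xm[k] -= h
        g.append((u(xp) - u(xm)) / (2 * h))
    return g

def norm(v):
    s = sum(vi * vi for vi in v) ** 0.5
    return [vi / s for vi in v]

def joint_improve(u1, u2, x, step=0.05, iters=200):
    """Repeatedly move along the bisector of the two (unit) gradients
    while both utilities strictly improve; stop otherwise."""
    for _ in range(iters):
        g1, g2 = norm(grad(u1, x)), norm(grad(u2, x))
        d = [a + b for a, b in zip(g1, g2)]      # compromise direction
        y = [xi + step * di for xi, di in zip(x, d)]
        if u1(y) > u1(x) and u2(y) > u2(x):
            x = y
        else:
            return x
    return x

# Illustrative concave utilities; ideal points (4, 1) and (1, 4).
u1 = lambda x: -(x[0] - 4) ** 2 - (x[1] - 1) ** 2
u2 = lambda x: -(x[0] - 1) ** 2 - (x[1] - 4) ** 2
x0 = [0.0, 0.0]
x_star = joint_improve(u1, u2, x0)
print(x_star, u1(x_star) > u1(x0), u2(x_star) > u2(x0))
```

Starting from (0, 0), the walk improves both parties' utilities until no further compromise step helps, which for these utilities happens near the Pareto frontier between the two ideal points.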
Note that when xh is an interior point of Oc, the situation is symmetric for the two agents 1 and 2, and the mediator has the choice of either finding a point on IC1(xh) to improve the utility of agent 2, or finding a point on IC2(xh) to improve the utility of agent 1. To decide which choice to make, the mediator has to compute the distribution of gains throughout the whole process, to avoid giving more gains to one agent than to the other. Now, the point xh+1 to be generated lies somewhere on the intersection of ICi(xh) and the hyperplane defined by ∇ui(xh) and TG(xh). This intersection is approximated by TG(xh). Thus, the sought-after point xh+1 can be generated by first finding a point yh along the direction of TG(xh) and then moving from yh in the direction of ∇ui(xh) until we intersect ICi(xh). Mathematically, let ζ and ξ denote the vectors TG(xh) and ∇ui(xh), respectively; then xh+1 = xh + λ1ζ + λ2ξ, where (λ1, λ2) is the solution to the following optimisation problem: max λ1, λ2 subject to ui(xh + λ1ζ + λ2ξ) ≥ ui(xh) and xh + λ1ζ + λ2ξ ∈ Oc.
Given an initial tentative agreement x0, the method described above allows a sequence of tentative agreements x1, x2, ... to be iteratively generated. The iteration stops whenever a weakly Pareto optimal agreement is reached.
THEOREM 3. If the sequence of agreements generated by the above method is bounded, then the method converges to a point x* ∈ Oc that is weakly Pareto optimal.

5. CONCLUSION AND FUTURE WORK
In this paper we have established a framework for negotiation that is based on MCDM theory for representing the agents' objectives and utilities. The focus of the paper is on integrative negotiation, in which agents aim to maximise joint gains, or "create value."
We have introduced a mediator into the negotiation in order to allow negotiators to disclose information about their utilities without providing this information to their opponents. Furthermore, the mediator also works toward the goal of achieving fairness of the negotiation outcome. That is, the approach that we describe aims both for efficiency, in the sense that it produces Pareto optimal outcomes (i.e., no aspect can be improved for one of the parties without worsening the outcome for another party), and for fairness, choosing optimal solutions which distribute gains amongst the agents in some appropriate manner. We have developed a two-step process for addressing the NP-hard problem of finding a solution for a set of integrative attributes which is within the Pareto-optimal set for those attributes. For simple attributes (i.e., those which have a finite set of values) we use known optimisation techniques to find a Pareto-optimal solution. In order to discourage agents from misrepresenting their utilities to gain an advantage, we look for solutions that are least vulnerable to manipulation. We have shown that as long as no simple attribute strongly dominates the others, truth telling is an equilibrium strategy for the negotiators during the stage of optimising simple attributes. For non-simple attributes we propose a mechanism that provides stepwise improvements to move the proposed solution in the direction of a Pareto-optimal solution. The approach presented in this paper is similar to the ideas behind negotiation analysis [18]. Ehtamo et al [4] present an approach to searching for joint gains in multi-party negotiations; the relation of their approach to ours is discussed in the preceding section. Lai et al [12] provide an alternative approach to integrative negotiation. While their approach was clearly described for the case of
two-issue negotiations, the generalisation to negotiations with more than two issues is not entirely clear. Zhang et al [22] discuss the use of integrative negotiation in agent organisations. They assume that agents are honest. Their main result is an experiment showing that in some situations agents' cooperativeness may not bring the most benefit to the organisation as a whole, though they give no explanation of this. Jonker et al [7] consider an approach to multi-attribute negotiation without the use of a mediator; their approach can thus be considered a complement of ours. Their experimental results show that agents can reach Pareto-optimal outcomes using their approach. The details of our approach have currently been shown only for bilateral negotiation, and while we believe they are generalisable to multiple negotiators, this work remains to be done. There is also future work to be done in more fully characterising the outcomes of the determination of values for the non-simple attributes. In order to provide a complete framework, we are also working on the distributive phase using the mediator.
Multi-Attribute Coalitional Games∗
Samuel Ieong, Computer Science Department, Stanford University, Stanford, CA 94305, sieong@cs.stanford.edu
Yoav Shoham, Computer Science Department, Stanford University, Stanford, CA 94305, shoham@cs.stanford.edu

ABSTRACT
We study coalitional games where the value of cooperation among the agents is solely determined by the attributes the agents possess, with no assumption as to how these attributes jointly determine this value. This framework allows us to model diverse economic interactions by picking the right attributes. We study the computational complexity of two coalitional solution concepts for these games: the Shapley value and the core. We show how the positive results obtained in this paper imply comparable results for other games studied in the literature.

Categories and Subject Descriptors: I.2.11 [Distributed Artificial Intelligence]: Multiagent systems; J.4 [Social and Behavioral Sciences]: Economics; F.2 [Analysis of Algorithms and Problem Complexity]
General Terms: Algorithms, Economics

1. INTRODUCTION
When agents interact with one another, the value of their contribution is determined by what they can do with their skills and resources, rather than simply by their identities. Consider the problem of forming a soccer team. For a team to be successful, it needs some forwards, midfielders, defenders, and a goalkeeper. The relevant attributes of the players are their skills at playing each of the four positions. The value of a team depends on how well its players can play these positions. At a finer level, we can extend the model to consider a wider range of skills, such as passing, shooting, and tackling, but the value of a team remains solely a function of the attributes of its players. Consider an example from the business world. Companies in the metals industry are
usually vertically integrated and diversified. They have mines for various types of ores, and also mills capable of processing and producing different kinds of metal. They optimize their production profile according to the market prices for their products. For example, when the price of aluminum goes up, they will allocate more resources to producing aluminum. However, each company is limited by the amount of ores it has, and by its capacities for processing given kinds of ores. Two or more companies may benefit from trading ores and processing capacities with one another. To model the metals industry, the relevant attributes are the amounts of ores and the processing capacities of the companies. Given the exogenous input of market prices, the value of a group of companies is determined by these attributes. Many real-world problems can likewise be modeled by picking the right attributes. As attributes apply to both individual agents and groups of agents, we propose the use of coalitional game theory to understand what groups may form and what payoffs the agents may expect in such models. Coalitional game theory focuses on what groups of agents can achieve, and thus connects strongly with e-commerce, as Internet economies have significantly enhanced the ability of businesses to identify and capitalize on profitable opportunities for cooperation. Our goal is to understand the computational aspects of computing the solution concepts (stable and/or fair distributions of payoffs, formally defined in Section 3) for coalitional games described using attributes. Our contributions can be summarized as follows:
• We define a formal representation for coalitional games based on attributes, and relate this representation to others proposed in the literature. We show that, compared to other representations, there exist games for which a multi-attribute description can be exponentially more succinct, and for no game is it worse.
• Given the
generality of the model, positive results carry over to other representations. We discuss two positive results in the paper, one for the Shapley value and one for the core, and show how these imply related results in the literature.
• We study an approximation heuristic for the Shapley value when its exact values cannot be found efficiently. We provide an explicit bound on the maximum error of the estimate, and show that the bound is asymptotically tight. We also carry out experiments to evaluate how the heuristic performs on random instances.1
2. RELATED WORK Coalitional game theory has been well studied in economics [9, 10, 14]. A vast amount of literature has focused on defining and comparing solution concepts, and determining their existence and properties. The first algorithmic study of coalitional games, as far as we know, was performed by Deng and Papadimitriou in [5]. They consider coalitional games defined on graphs, where the players are the vertices and the value of a coalition is determined by the sum of the weights of the edges spanned by these players. This can be efficiently modeled and generalized using attributes. As a formal representation, multi-attribute coalitional games are closely related to the multi-issue representation of Conitzer and Sandholm [3] and our work on marginal contribution networks [7]. Both of these representations are based on dividing a coalitional game into subgames (termed issues in [3] and rules in [7]), and aggregating the subgames via linear combination. The key difference in our work is the unrestricted aggregation of subgames: the aggregation could be via a polynomial function of the attributes, or even by treating the subgames as input to another computational problem such as a min-cost flow problem. The relationship of these models will be made clear after we define the multi-attribute representation in Section 4. Another representation proposed in the literature is one specialized for
superadditive games by Conitzer and Sandholm [2]. This representation is succinct, but finding the values of some coalitions may require solving an NP-hard problem. While multi-attribute coalitional games can represent these games efficiently, doing so necessarily requires the solution of an NP-hard problem in order to find the values of some coalitions. In this paper, we stay within the boundary of games that admit efficient algorithms for determining the value of coalitions. We will therefore not make further comparisons with [2]. The model of coalitional games with attributes has been considered in the work of Shehory and Kraus. They model the agents as possessing capabilities that indicate their proficiencies in different areas, and consider how to efficiently allocate tasks [12] and the dynamics of coalition formation [13]. Our work differs significantly as our focus is on reasoning about solution concepts. Our model also covers a wider scope, as attributes generalize the notion of capabilities. Yokoo et al. have also considered a model of coalitional games where agents are modeled by sets of skills, and these skills in turn determine the value of coalitions [15]. There are two major differences between their work and ours. Firstly, Yokoo et al.
assume that each skill is fundamentally different from the others, hence no two agents may possess the same skill. Also, they focus on developing new solution concepts that are robust with respect to manipulation by agents. Our focus is on reasoning about traditional solution concepts.
1 We acknowledge that random instances may not be typical of what happens in practice, but given the generality of our model, it provides the most unbiased view.
Our work is also related to the study of cooperative games with committee control [4]. In these games, there is usually an underlying set of resources, each controlled by a (possibly overlapping) set of players known as the committee, engaged in a simple game (defined in Section 3). Multi-attribute coalitional games generalize these by considering relationships between the committee and the resources beyond simple games. We note that when restricted to simple games, we derive results similar to those in [4].
3. PRELIMINARIES In this section, we review the relevant concepts of coalitional game theory and its two most important solution concepts: the Shapley value and the core. We then define the computational questions that will be studied in the second half of the paper.
3.1 Coalitional Games Throughout this paper, we assume that the payoff to a group of agents can be freely distributed among its members. This transferable utility assumption is commonly made in coalitional game theory. The canonical representation of a coalitional game with transferable utility is its characteristic form.
Definition 1. A coalitional game with transferable utility in characteristic form is denoted by the pair ⟨N, v⟩, where
• N is the set of agents; and
• v : 2^N → R is a function that maps each group of agents S ⊆ N to a real-valued payoff.
A group of agents in a game is known as a coalition, and the entire set of agents is known as the grand coalition. An important class of coalitional games is the class of
monotonic games.
Definition 2. A coalitional game is monotonic if for all S ⊂ T ⊆ N, v(S) ≤ v(T).
Another important class of coalitional games is the class of simple games. In a simple game, a coalition either wins, in which case it has a value of 1, or loses, in which case it has a value of 0. It is often used to model voting situations. Simple games are often assumed to be monotonic, i.e., if S wins, then for all T ⊇ S, T also wins. This coincides with the notion of using simple games as a model for voting. If a simple game is monotonic, then it is fully described by its set of minimal winning coalitions, i.e., coalitions S for which v(S) = 1 but for all coalitions T ⊂ S, v(T) = 0. An outcome in a coalitional game specifies the utilities the agents receive. A solution concept assigns to each coalitional game a set of reasonable outcomes. Different solution concepts attempt to capture in some way outcomes that are stable and/or fair. Two of the best known solution concepts are the Shapley value and the core. The Shapley value is a normative solution concept that prescribes a fair way to divide the gains from cooperation when the grand coalition is formed. The division of payoff to agent i is the average marginal contribution of agent i over all possible permutations of the agents. Formally,
Definition 3. The Shapley value of agent i, φi(v), in game ⟨N, v⟩ is given by

    φi(v) = Σ_{S ⊆ N\{i}} [ |S|! (|N| − |S| − 1)! / |N|! ] (v(S ∪ {i}) − v(S))

The core is a descriptive solution concept that focuses on outcomes that are stable. Stability under the core means that no set of players can jointly deviate to improve their payoffs.
Definition 4. An outcome x ∈ R^|N| is in the core of the game ⟨N, v⟩ if for all S ⊆ N,

    Σ_{i ∈ S} xi ≥ v(S)

Note that the core of a game may be empty, i.e., there may not exist any payoff vector that satisfies the stability requirement for the
given game.
3.2 Computational Problems We will study the following three problems related to solution concepts in coalitional games.
Problem 1. (Shapley Value) Given a description of the coalitional game and an agent i, compute the Shapley value of agent i.
Problem 2. (Core Membership) Given a description of the coalitional game and a payoff vector x such that Σ_{i ∈ N} xi = v(N), determine if Σ_{i ∈ S} xi ≥ v(S) for all S ⊆ N.
Problem 3. (Core Non-emptiness) Given a description of the coalitional game, determine if there exists any payoff vector x such that Σ_{i ∈ S} xi ≥ v(S) for all S ⊆ N, and Σ_{i ∈ N} xi = v(N).
Note that the complexity of the above problems depends on how the game is described. All these problems are easy if the game is described by its characteristic form, but only because such a description takes space exponential in the number of agents, and hence a simple brute-force approach takes time polynomial in the size of the input description. To properly understand the computational complexity questions, we have to look at compact representations.
4. FORMAL MODEL In this section, we give a formal definition of multi-attribute coalitional games, and show how they relate to some of the representations discussed in the literature. We also discuss some limitations of our proposed approach.
4.1 Multi-Attribute Coalitional Games A multi-attribute coalitional game (MACG) consists of two parts: a description of the attributes of the agents, which we term an attribute model, and a function that assigns values to combinations of attributes. Together, they induce a coalitional game over the agents. We first define the attribute model.
Definition 5. An attribute model is a tuple ⟨N, M, A⟩, where
• N denotes the set of agents, of size n;
• M denotes the set of attributes, of size m;
• A ∈ R^{m×n}, the attribute matrix, describes the values of the attributes of the agents,
with Aij denoting the value of attribute i for agent j.
We could directly define a function that maps combinations of attributes to real values. However, for many problems, we can describe the function more compactly by computing it in two steps: we first compute an aggregate value for each attribute, then compute the values of combinations of attributes using only the aggregated information. Formally,
Definition 6. An aggregating function (or aggregator) takes as input a row of the attribute matrix and a coalition S, and summarizes the attributes of the agents in S with a single number. We can treat it as a mapping from R^n × 2^N → R.
Aggregators often perform basic arithmetic or logical operations. For example, an aggregator may compute the sum of the attributes, or evaluate a Boolean expression by treating the agents i ∈ S as true and j ∉ S as false. Analogous to the notion of simple games, we call an aggregator simple if its range is {0, 1}. For any aggregator, there is a set of relevant agents, and a set of irrelevant agents. An agent i is irrelevant to aggregator aj if aj(S ∪ {i}) = aj(S) for all S ⊆ N. A relevant agent is one not irrelevant. Given the attribute matrix, an aggregator assigns a value to each coalition S ⊆ N. Thus, each aggregator defines a game over N.
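The two-step structure just described can be made concrete with a minimal Python sketch (the function and variable names are our own, chosen for illustration): an aggregator maps a row of the attribute matrix and a coalition to a single number, and thereby induces a game over N; an agent whose attribute value is zero is irrelevant to a sum aggregator.

```python
from itertools import combinations

def sum_aggregator(row, coalition):
    """Sum the attribute values of the agents in the coalition."""
    return sum(row[j] for j in coalition)

# Attribute matrix A (m = 2 attributes, n = 3 agents): A[i][j] is the
# value of attribute i for agent j (Definition 5).
A = [[1, 0, 2],
     [0, 3, 1]]

agents = range(3)

# The game of attribute 0: the aggregator assigns a value to every
# coalition S (coalitions represented as sorted tuples of agent indices).
game_of_attr0 = {S: sum_aggregator(A[0], S)
                 for k in range(len(A[0]) + 1)
                 for S in combinations(agents, k)}

# Agent 1 is irrelevant to aggregator 0 (A[0][1] = 0): adding it to any
# coalition never changes the aggregator's value.
assert all(game_of_attr0[tuple(sorted(S + (1,)))] == game_of_attr0[S]
           for S in game_of_attr0 if 1 not in S)
```

The dictionary `game_of_attr0` is exactly the characteristic form of the game induced by the first aggregator, which is what the next paragraph calls the game of attribute j.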
For aggregator aj, we refer to this induced game as the game of attribute j, and denote it by aj(A). When the attribute matrix is clear from the context, we may drop A and simply denote the game as aj. We may refer to the game as the aggregator when no ambiguities arise. We now define the second step of the computation with the help of aggregators.
Definition 7. An aggregate value function takes as input the values of the aggregators and maps these to a real value.
In this paper, we will focus on having one aggregator per attribute. Therefore, in what follows, we will refer to the aggregate value function as a function over the attributes. Note that when all aggregators are simple, the aggregate value function implicitly defines a game over the attributes, as it assigns a value to each set of attributes T ⊆ M. We refer to this as the game among attributes. We now define multi-attribute coalitional games.
Definition 8. A multi-attribute coalitional game is defined by the tuple ⟨N, M, A, a, w⟩, where
• ⟨N, M, A⟩ is an attribute model;
• a is a set of aggregators, one for each attribute; we can treat the set together as a vector function, mapping R^{m×n} × 2^N → R^m;
• w : R^m → R is an aggregate value function.
This induces a coalitional game with transferable payoffs ⟨N, v⟩ with players N and the value function defined by

    v(S) = w(a(A, S))

Note that a MACG as defined is fully capable of representing any coalitional game ⟨N, v⟩. We can simply take the set of attributes to be equal to the set of agents, i.e., M = N, an identity matrix for A, sum aggregators, and the aggregate value function w to be v.
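Definition 8 and the universality remark above can be sketched end to end in a few lines of Python (names are hypothetical, chosen for illustration): `macg_value` computes v(S) = w(a(A, S)), and the check below represents an arbitrary game by taking M = N, an identity attribute matrix, sum aggregators, and w equal to v.

```python
from itertools import combinations

def macg_value(A, aggregators, w, S):
    """v(S) = w(a(A, S)): aggregate each attribute row over S, then combine."""
    return w([agg(row, S) for row, agg in zip(A, aggregators)])

def sum_agg(row, S):
    return sum(row[j] for j in S)

# Universality check: represent an arbitrary 3-player game <N, v>
# (here v(S) = |S|^2, purely as an example).
n = 3
v = {S: len(S) ** 2 for k in range(n + 1) for S in combinations(range(n), k)}

identity_A = [[1 if i == j else 0 for j in range(n)] for i in range(n)]

# With M = N and sum aggregators, a(A, S) is the membership vector of S,
# so w simply looks the coalition back up in v.
def w(agg_values):
    return v[tuple(i for i, x in enumerate(agg_values) if x)]

assert all(macg_value(identity_A, [sum_agg] * n, w, S) == v[S] for S in v)
```

This mirrors the construction in the text exactly: the attribute model carries the agents' data, and the aggregate value function alone decides how the aggregated attributes turn into coalition values.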
4.2 An Example Let us illustrate how a MACG can be used to represent a game with a simple example. Suppose there are four types of resources in the world: gold, silver, copper, and iron; each agent is endowed with some amount of these resources, and there is a fixed price for each of the resources in the market. This game can be described using a MACG with an attribute matrix A, where Aij denotes the amount of resource i with which agent j is endowed. For each resource, the aggregator sums the amounts of that resource the agents have. Finally, the aggregate value function takes the dot product between the market price vector and the aggregate vector. Note the inherent flexibility of the model: only limited work would be required to update the game as the market prices change, or when a new agent arrives.
4.3 Relationship with Other Representations As briefly discussed in Section 2, MACG is closely related to two other representations in the literature, the multi-issue representation of Conitzer and Sandholm [3], and our work on marginal contribution nets [7]. To make their relationships clear, we first review these two representations. We have changed the notation from the original papers to highlight their similarities.
Definition 9. A multi-issue representation is given as a vector of coalitional games, (v1, v2, ..., vm), each possibly with a different set of agents, say N1, ..., Nm. The coalitional game ⟨N, v⟩ induced by a multi-issue representation has player set N = ∪_{i=1}^m Ni, and for each coalition S ⊆ N, v(S) = Σ_{i=1}^m vi(S ∩ Ni). The games vi are assumed to be represented in characteristic form.
Definition 10. A marginal contribution net is given as a set of rules (r1, r2, ...
, rm), where rule ri has a weight wi, and a pattern pi that is a conjunction over literals (positive or negative). The agents are represented as literals. A coalition S is said to satisfy the pattern pi if, treating the agents i ∈ S as true and the agents j ∉ S as false, pi(S) evaluates to true. Denote the set of literals involved in rule i by Ni. The coalitional game ⟨N, v⟩ induced by a marginal contribution net has player set N = ∪_{i=1}^m Ni, and for each coalition S ⊆ N, v(S) = Σ_{i : pi(S)=true} wi.
From these definitions, we can see the relationships among the three representations clearly. An issue of a multi-issue representation corresponds to an attribute in MACG. Similarly, a rule of a marginal contribution net corresponds to an attribute in MACG. The aggregate value functions are simple sums and weighted sums for the respective representations. Therefore, it is clear that MACG is no less succinct than either representation. However, MACG differs in two important ways. Firstly, there is no restriction on the operations performed by the aggregate value function over the attributes. This is an important generalization over the linear combination of issues or rules in the other two approaches. In particular, there are games for which MACG can be exponentially more compact. The proof of the following proposition can be found in the Appendix.
Proposition 1. Consider the parity game ⟨N, v⟩ where coalition S ⊆ N has value v(S) = 1 if |S| is odd, and v(S) = 0 otherwise. MACG can represent this game in O(n) space. Both the multi-issue representation and marginal contribution nets require O(2^n) space.
A second important difference of MACG is that the attribute model and the value function are cleanly separated. As suggested by the example in Section 4.2, this often allows more efficient updates of the values of the game as it changes. Also, the same attribute model can be evaluated using different value functions, and the
same value function can be used to evaluate different attribute models. Therefore, MACG is very suitable for representing multiple games. We believe the problems of updating games and representing multiple games are interesting future directions to explore.
4.4 Limitation of One Aggregator per Attribute Before focusing on one aggregator per attribute for the rest of the paper, it is natural to wonder whether anything is lost by this restriction. The unfortunate answer is yes, best illustrated by the following. Consider again the problem of forming a soccer team discussed in the introduction, where we model the attributes of the agents as their ability to play each of the four positions on the field, and the value of a team depends on the positions covered. If we first aggregate each of the attributes individually, we lose the distributional information of the attributes. In other words, we will not be able to distinguish between two teams, one of which has a player for each position, while the other has one player who can play all positions but the rest of whose players can only play the same one position. This loss of distributional information can be recovered by using aggregators that take as input multiple rows of the attribute matrix rather than just a single row. Alternatively, if we leave such attributes untouched, we can leave the burden of correctly evaluating these attributes to the aggregate value function. However, for many problems that we found in the literature, such as the transportation domain of [12] and the flow game setting of [4], the distribution of attributes does not affect the value of the coalitions. In addition, the problem may become unmanageably complex as we introduce more complicated aggregators. Therefore, we will focus on the representation as defined in Definition 8.
5. SHAPLEY VALUE In this section, we focus on computational issues of finding the Shapley value of a player in a MACG. We first set up the problem with the use of oracles to avoid
complexities arising from the aggregators. We then show that when attributes are linearly separable, the Shapley value can be efficiently computed. This generalizes the proofs of related results in the literature. For the non-linearly separable case, we consider a natural heuristic for estimating the Shapley value, and study the heuristic theoretically and empirically.
5.1 Problem Setup We start by noting that computing the Shapley value for simple aggregators can be hard in general. In particular, we can define aggregators that compute a weighted majority over their input set of agents. As noted in [6], finding the Shapley value of a weighted majority game is #P-hard. Therefore, discussing the complexity of the Shapley value for MACG with unrestricted aggregators is moot. Instead of placing explicit restrictions on the aggregators, we assume that the Shapley value of each aggregator can be answered by an oracle. For notation, let φi(u) denote the Shapley value of agent i in some game u. We make the following assumption:
Assumption 1. For each aggregator aj in a MACG, there is an associated oracle that answers the Shapley value of the game of attribute j. In other words, φi(aj) is known.
For many aggregators that perform basic operations over their input, polynomial-time oracles for the Shapley value exist. These include operations such as sum, and symmetric functions when the attributes are restricted to {0, 1}. Also, when only a few agents have an effect on the aggregator, brute-force computation of the Shapley value is feasible. Therefore, the above assumption is reasonable for many settings. In any case, this abstraction allows us to focus on the aggregate value function.
5.2 Linearly Separable Attributes When the aggregate value function can be written as a linear function of the attributes, the Shapley value of the game can be efficiently computed.
Theorem 1. Given a game ⟨N, v⟩ represented as a MACG ⟨N, M, A, a, w⟩, if the aggregate value function can be written
as a linear function of its attributes, i.e.,

    w(a(A, S)) = Σ_{j=1}^m cj aj(A, S)

then the Shapley value of agent i in ⟨N, v⟩ is given by

    φi(v) = Σ_{j=1}^m cj φi(aj)    (1)

Proof. We note that the Shapley value satisfies an additivity axiom [11]: φi(a + b) = φi(a) + φi(b), where ⟨N, a + b⟩ is the game defined by (a + b)(S) = a(S) + b(S) for all S ⊆ N. It is also clear that the Shapley value satisfies scaling, namely φi(αv) = αφi(v), where (αv)(S) = αv(S) for all S ⊆ N. Since the aggregate value function can be expressed as a weighted sum of games of attributes,

    φi(v) = φi(w(a)) = φi(Σ_{j=1}^m cj aj) = Σ_{j=1}^m cj φi(aj)

Many positive results regarding efficient computation of the Shapley value in the literature depend on some form of linearity. Examples include the edge-spanning game on graphs of Deng and Papadimitriou [5], the multi-issue representation of [3], and the marginal contribution nets of [7]. The key to determining whether the Shapley value can be efficiently computed is the linear separability of attributes. Once this is satisfied, as long as the Shapley value of each game of attributes can be efficiently determined, the Shapley value of the entire game can be efficiently computed.
Corollary 1. The Shapley value for the edge-spanning game of [5], games in multi-issue representation [3], and games in marginal contribution nets [7], can be computed in polynomial time.
5.3 Polynomial Combination of Attributes When the aggregate value function cannot be expressed as a linear function of its attributes, computing the Shapley value exactly is difficult. Here, we will focus on aggregate value functions that can be expressed as polynomials of their attributes. If we do not place a limit on the degree of the polynomial, and the game ⟨N, v⟩ is not necessarily monotonic, the problem is #P-hard.
Theorem 2. Computing the Shapley value of a MACG ⟨N, M,
A, a, w⟩, when w can be an arbitrary polynomial of the aggregates a, is #P-hard, even when the Shapley value of each aggregator can be efficiently computed.
The proof is via a reduction from three-dimensional matching, and details can be found in the Appendix. Even if we restrict ourselves to monotonic games and non-negative coefficients for the polynomial aggregate value function, computing the exact Shapley value can still be hard. For example, suppose there are two attributes. All agents in some set B ⊆ N possess the first attribute, and all agents in some set C ⊆ N possess the second, where B and C are disjoint. For a coalition S ⊆ N, the aggregator for the first attribute evaluates to 1 if and only if |S ∩ B| ≥ b′, and similarly, the aggregator for the second evaluates to 1 if and only if |S ∩ C| ≥ c′. Let the cardinalities of the sets B and C be b and c. We can verify that the Shapley value of an agent i in B equals

    φi = (1/b) Σ_{k=0}^{b′−1} [ C(b, k) C(c, c′−1) / C(b+c, c′+k−1) ] · (c − c′ + 1) / (b + c − c′ − k + 1)

The equation corresponds to a weighted sum of probability values of hypergeometric random variables. The correspondence with the hypergeometric distribution is due to the sampling-without-replacement nature of the Shapley value. As far as we know, there is no closed-form formula to evaluate the sum above. In addition, as the number of attributes involved increases, we move to multivariate hypergeometric random variables, and the number of summands grows exponentially in the number of attributes. Therefore, it is highly unlikely that the exact Shapley value can be determined efficiently, and we look for approximations.
5.3.1 Approximation First, we need a criterion for evaluating how well an estimate, φ̂, approximates the true Shapley value, φ. We consider the following three natural criteria:
• Maximum underestimate: maxi
φi/φ̂i
• Maximum overestimate: maxi φ̂i/φi
• Total variation: (1/2) Σ_i |φi − φ̂i|, or alternatively max_S |Σ_{i∈S} φi − Σ_{i∈S} φ̂i|
The total variation criterion is more meaningful when we normalize the game to having a value of 1 for the grand coalition, i.e., v(N) = 1. We can also define additive analogues of the under- and overestimates, especially when the games are normalized. We will assume for now that the aggregate value function is a polynomial over the attributes with non-negative coefficients. We will also assume that the aggregators are simple. We will evaluate a specific heuristic that is analogous to Equation (1). Suppose the aggregate value function can be written as a polynomial with p terms:

    w(a(A, S)) = Σ_{j=1}^p cj a_{j(1)}(A, S) a_{j(2)}(A, S) ··· a_{j(kj)}(A, S)    (2)

For term j, the coefficient of the term is cj, its degree is kj, and the attributes involved in the term are j(1), ..., j(kj). We compute an estimate φ̂ of the Shapley value as

    φ̂i = Σ_{j=1}^p Σ_{l=1}^{kj} (cj/kj) φi(a_{j(l)})    (3)

The idea behind the estimate is that for each term, we divide the value of the term equally among all its attributes. This is represented by the factor cj/kj. Then for each attribute of an agent, we assign the agent a share of value from the attribute. This share is determined by the Shapley value of the simple game of that attribute. Without considering the details of the simple games, this constitutes a fair (but blind) rule of sharing.
5.3.2 Theoretical Analysis of the Heuristic We can derive a simple and tight bound for the maximum (multiplicative) underestimate of the heuristic estimate.
Theorem 3. Given a game ⟨N, v⟩ represented as a MACG ⟨N, M, A, a, w⟩, suppose w can be expressed as a polynomial function of its attributes (cf. Equation (2)). Let K = maxj kj, i.e., the maximum degree of the polynomial. Let φ̂ denote the estimated Shapley value using Equation (3), and
φ the true Shapley value. For all i ∈ N, φi ≤ K φ̂i.
Proof. We bound the maximum underestimate term by term. Let tj be the j-th term of the polynomial. We note that each term can be treated as a game among attributes, as it assigns a value to each coalition S ⊆ N. Without loss of generality, renumber attributes j(1) through j(kj) as 1 through kj, so that

    tj(S) = cj Π_{l=1}^{kj} al(A, S)

To make the equations less cluttered, let

    B(N, S) = |S|! (|N| − |S| − 1)! / |N|!

and, for a game a, let the contribution of agent i to a group S with i ∉ S be ∆i(a, S) = a(S ∪ {i}) − a(S). Writing t̄j for the product of aggregators in term j without its coefficient, the true Shapley value of the game tj is

    φi(tj) = cj Σ_{S ⊆ N\{i}} B(N, S) ∆i(t̄j, S)

For each coalition S with i ∉ S, ∆i(t̄j, S) = 1 if and only if ∆i(al*, S) = 1 for at least one attribute, say l*. Therefore, if we sum over all the attributes, we are sure to have included l*:

    φi(tj) ≤ cj Σ_{l=1}^{kj} Σ_{S ⊆ N\{i}} B(N, S) ∆i(al, S) = kj Σ_{l=1}^{kj} (cj/kj) φi(al) = kj φ̂i(tj)

Summing over the terms, we see that the worst-case underestimate is by a factor of the maximum degree.
Without loss of generality, since the bound is multiplicative, we can normalize the game to having v(N) = 1. As a corollary, because no set can be underestimated by more than a factor of K, we obtain a bound on the total variation:
Corollary 2. The total variation between the estimated Shapley value and the true Shapley value, for a K-degree-bounded polynomial aggregate value function, is at most (K − 1)/K.
We can show that this bound is tight.
Example 1. Consider a game with n players and K attributes. Let the first (n − 1) agents be members of the first (K − 1) attributes, with the corresponding aggregators returning 1 if any one of the first (n − 1) agents is present. Let the n-th agent be the sole member of the K-th attribute. The estimate will assign a value of [(K − 1)/K] · [1/(n − 1)] to each of the first (n
− 1) agents and 1/K to the n-th agent. However, the true Shapley value of the n-th agent tends to 1 as n → ∞, and the total variation approaches (K − 1)/K.
In general, we cannot bound how much φ̂ may overestimate the true Shapley value. The problem is that φ̂i may be non-zero for an agent i even though i has no influence over the outcome of the game once the attributes are multiplied together, as illustrated by the following example.
Example 2. Consider a game with 2 players and 2 attributes. Let the first agent be a member of both attributes, and the other agent a member of the second attribute only. For a coalition S, the first aggregator evaluates to 1 if agent 1 ∈ S, and the second aggregator evaluates to 1 if both agents are in S. While agent 2 is not a dummy with respect to the second attribute, it is a dummy with respect to the product of the attributes. Agent 2 will be assigned a value of 1/4 by the estimate.
As mentioned, a simple monotonic game is fully described by its set of minimal winning coalitions. When the simple aggregators are represented as such, it is possible to check, in polynomial time, for agents that turn into dummies after attributes are multiplied together. Therefore, we can improve the heuristic estimate in this special case.
5.3.3 Empirical Evaluation Due to a lack of benchmark problems for coalitional games, we have tested the heuristic on random instances. We believe more meaningful results can be obtained when we have real instances to test this heuristic on.
Figure 1: Experimental results. (Both panels plot total variation distance against the number of players: (a) effect of maximum degree; (b) effect of number of attributes.)
Our experiment is set up as follows. We control three parameters of the experiment: the number of players (6-10), the number
of attributes (3-8), and the maximum degree of the polynomial (2-5). For each attribute, we randomly sample one to three minimal winning coalitions. We then randomly generate a polynomial of the desired maximum degree with a random number (3-12) of terms, each with a random positive weight. We normalize each game to have v(N) = 1. The results of the experiments are shown in Figure 1. The y-axis of the graphs shows the total variation, and the x-axis the number of players. Each data point is an average of approximately 700 random samples. Figure 1(a) explores the effect of the maximum degree and the number of players when the number of attributes is fixed (at six). As expected, the total variation increases as the maximum degree increases. On the other hand, there is only a very small increase in error as the number of players increases. The error is nowhere near the theoretical worst-case bound of 1/2 to 4/5 for polynomials of degrees 2 to 5. Figure 1(b) explores the effect of the number of attributes and the number of players when the maximum degree of the polynomial is fixed (at three). We first note that the three lines are quite tightly clustered together, suggesting that the number of attributes has relatively little effect on the error of the estimate. As the number of attributes increases, the total variation decreases. We think this is an interesting phenomenon: it is probably due to the precise construction required for the worst-case bound, so as more attributes become available, we have more diverse terms in the polynomial, and this diversity pushes the error away from the worst-case bound.
6. CORE-RELATED QUESTIONS In this section, we look at the complexity of the two computational problems related to the core: Core Non-emptiness and Core Membership. We show that non-emptiness of the core of the game among attributes and of the cores of the aggregators together imply non-emptiness of the core of the game induced by the MACG. We also show
We also show that there appears to be no such general relationship relating core membership in the game among attributes, the games of attributes, and the game induced by the MACG.

6.1 Problem Setup

There are many problems in the literature for which the questions of Core Non-emptiness and Core Membership are known to be hard [1]. For example, for the edge-spanning game that Deng and Papadimitriou studied [5], both of these questions are coNP-complete. As a MACG can model the edge-spanning game in the same amount of space, these hardness results hold for MACG as well. As in the case of computing the Shapley value, we attempt to find a way around the hardness barrier by assuming the existence of oracles, and try to build algorithms with these oracles. First, we consider the aggregate value function.

Assumption 2. For a MACG ⟨N, M, A, a, w⟩, we assume there are oracles that answer the questions of Core Non-emptiness and Core Membership for the aggregate value function w.

When the aggregate value function is a non-negative linear function of its attributes, the core is always non-empty, and core membership can be determined efficiently. The concept of the core for the game among attributes makes the most sense when the aggregators are simple games. We will further assume that these simple games are monotonic.

Assumption 3. For a MACG ⟨N, M, A, a, w⟩, we assume all aggregators are monotonic and simple. We also assume there are oracles that answer the questions of Core Non-emptiness and Core Membership for the aggregators.

We consider this a mild assumption. Recall that monotonic simple games are fully described by their sets of minimal winning coalitions (cf. Section 3). If the aggregators are represented as such, Core Non-emptiness and Core Membership can be checked in polynomial time. This is due to the following well-known result regarding simple games:

Lemma 1. A simple game ⟨N, v⟩ has a non-empty core if and only if it has a set of veto players, say V, such that v(S) = 0 for
all S ⊉ V. Further, a payoff vector x is in the core if and only if x_i = 0 for all i ∉ V.

6.2 Core Non-emptiness

There is a strong connection between the non-emptiness of the cores of the games among attributes, the games of the attributes, and the game induced by a MACG.

Theorem 4. Given a game ⟨N, v⟩ represented as a MACG ⟨N, M, A, a, w⟩, if the core of the game among attributes, ⟨M, w⟩, is non-empty, and the cores of the games of attributes are non-empty, then the core of ⟨N, v⟩ is non-empty.

Proof. Let u be an arbitrary payoff vector in the core of the game among attributes, ⟨M, w⟩. For each attribute j, let θ^j be an arbitrary payoff vector in the core of the game of attribute j. By Lemma 1, each attribute j must have a set of veto players; let this set be denoted by P_j. For each agent i ∈ N, let y_i = Σ_j u_j θ^j_i. We claim that this vector y is in the core of ⟨N, v⟩. Consider any coalition S ⊆ N:

v(S) = w(a(A, S)) ≤ Σ_{j : S ⊇ P_j} u_j    (4)

This holds because an aggregator cannot evaluate to 1 without all members of its veto set. For any attribute j, by Lemma 1, Σ_{i ∈ P_j} θ^j_i = 1. Therefore,

Σ_{j : S ⊇ P_j} u_j = Σ_{j : S ⊇ P_j} u_j Σ_{i ∈ P_j} θ^j_i = Σ_{i ∈ S} Σ_{j : S ⊇ P_j} u_j θ^j_i ≤ Σ_{i ∈ S} y_i

Note that the proof is constructive: given an element of the core of the game among attributes, we can construct an element of the core of the coalitional game.
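The constructive proof above translates directly into code. Below is a minimal sketch, assuming each aggregator is given by its minimal winning coalitions, so that by Lemma 1 its veto set is their intersection and any distribution of the unit payoff over the veto players lies in the core of the attribute game. The function names are mine, not the paper's.

```python
from functools import reduce

def veto_set(min_winning_coalitions):
    """Lemma 1: a monotonic simple game has a non-empty core iff it has veto
    players, i.e., players in every minimal winning coalition; core payoffs
    put weight only on veto players."""
    return reduce(frozenset.intersection, min_winning_coalitions)

def core_element(n_players, aggregators, u):
    """Theorem 4 (constructive): combine u, a core element of the game among
    attributes, with core elements theta^j of the attribute games into a core
    element y of the induced game, via y_i = sum_j u_j * theta^j_i."""
    y = [0.0] * n_players
    for j, mwcs in enumerate(aggregators):
        P = veto_set(mwcs)
        assert P, "empty veto set: this attribute game has an empty core"
        theta = {i: 1.0 / len(P) for i in P}  # any distribution over P is in the core
        for i, share in theta.items():
            y[i] += u[j] * share
    return y

# Corollary 3 instance: a path on vertices 0-1-2; the edges (attributes) have
# veto sets equal to their endpoints, and u = the non-negative edge weights.
y = core_element(3, [[frozenset({0, 1})], [frozenset({1, 2})]], u=[4.0, 2.0])
# Each edge's weight is split between its endpoints: y == [2.0, 3.0, 1.0]
```

The usage lines instantiate Corollary 3 below: splitting each non-negative edge weight between its endpoints yields a core element of the edge-spanning game.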
From Theorem 4, we can obtain the following corollaries, which have previously been shown in the literature.

Corollary 3. The core of the edge-spanning game of [5] is non-empty when the edge weights are non-negative.

Proof. Let the players be the vertices, and their attributes the edges incident on them. Each attribute has a veto set, namely the two endpoints of the corresponding edge. As previously observed, an aggregate value function that is a non-negative linear function of its attributes has a non-empty core. Therefore, the precondition of Theorem 4 is satisfied, and the edge-spanning game with non-negative edge weights has a non-empty core.

Corollary 4 (Theorem 1 of [4]). The core of a flow game with committee control, where each edge is controlled by a simple game with a veto set of players, is non-empty.

Proof. We treat each edge of the flow game as an attribute, so that each attribute has a veto set of players. The core of a flow game (without committee control) has been shown to be non-empty in [8]. We can again invoke Theorem 4 to show the non-emptiness of the core for flow games with committee control.

However, the core of the game induced by a MACG may be non-empty even when the core of the game among attributes is empty, as illustrated by the following example.

Example 3. Suppose the minimal winning coalition of every aggregator in a MACG ⟨N, M, A, a, w⟩ is N; then v(S) = 0 for all coalitions S ⊂ N. As long as v(N) ≥ 0, any non-negative vector x satisfying Σ_{i ∈ N} x_i = v(N) is in the core of ⟨N, v⟩.

Complementary to the example above, when all the aggregators have empty cores, the core of ⟨N, v⟩ is also empty.

Theorem 5. Given a game ⟨N, v⟩ represented as a MACG ⟨N, M, A, a, w⟩, if the cores of all aggregators are empty, v(N) > 0, and v({i}) ≥ 0 for each i ∈ N, then the core of ⟨N, v⟩ is empty.

Proof. Suppose, for contradiction, that the core of ⟨N, v⟩ is non-empty. Let x be a member of the core; since Σ_{i ∈ N} x_i = v(N) > 0, we can pick an agent i with x_i > 0. Since the core of every aggregator is empty, by Lemma 1 no aggregator has i as a veto player, so for each attribute j we can pick a winning coalition S_j that does not include i.
Let S* = ∪_j S_j. Since the aggregators are monotonic and S* ⊇ S_j for every j, S* is winning for all aggregators, and hence v(S*) = v(N). Moreover, i ∉ S*, and since x is in the core, x_j ≥ v({j}) ≥ 0 for every agent j. Therefore,

v(N) = Σ_{j ∈ N} x_j = x_i + Σ_{j ∈ N \ {i}} x_j ≥ x_i + Σ_{j ∈ S*} x_j > Σ_{j ∈ S*} x_j

so v(S*) > Σ_{j ∈ S*} x_j, contradicting the fact that x is in the core of ⟨N, v⟩.

We do not have general results regarding the problem of Core Non-emptiness when some of the aggregators have non-empty cores while others have empty cores. We suspect that knowledge of the status of the aggregators' cores alone is insufficient to decide this problem.

6.3 Core Membership

Since the game induced by a MACG may have a non-empty core even when the core of the game among attributes is empty (Example 3), we explore the problem of Core Membership under the assumption that the cores of both the game among attributes, ⟨M, w⟩, and the underlying game, ⟨N, v⟩, are known to be non-empty, and ask whether there is any relationship between their members. One reasonable requirement is whether a payoff vector x in the core of ⟨N, v⟩ can be decomposed and re-aggregated into a payoff vector y in the core of ⟨M, w⟩. Formally:

Definition 11. We say that a vector x ∈ R^n_{≥0} can be decomposed and re-aggregated into a vector y ∈ R^m_{≥0} if there exists Z ∈ R^{m×n}_{≥0} such that

y_i = Σ_{j=1}^{n} Z_{ij} for all i, and x_j = Σ_{i=1}^{m} Z_{ij} for all j.

We may refer to Z as shares. When there is no restriction on the entries of Z, it is always possible to decompose a payoff vector x in the core of ⟨N, v⟩ into a payoff vector y in the core of ⟨M, w⟩. However, it seems reasonable to require that if an agent j is irrelevant to aggregator i, i.e., j never changes the outcome of aggregator i, then Z_{ij} = 0. Unfortunately, this restriction is already too strong.
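Deciding whether a given x can be decomposed and re-aggregated into a given target y under this restriction is a small transportation-feasibility problem, and for a handful of attributes it can be checked directly via the max-flow/min-cut (Gale-type) condition: every subset of attributes must be coverable by the payoffs of its relevant agents. The sketch below uses my own naming, and its test instance mirrors the example that follows (indices shifted to start at 0).

```python
from itertools import combinations

def can_decompose(x, y, relevant, eps=1e-9):
    """Feasibility of Definition 11 with the zero-share restriction:
    does Z >= 0 exist with row sums y (attributes), column sums x (agents),
    and Z[i][j] = 0 whenever agent j is not in relevant[i]?  Checks the
    subset (cut) condition directly; fine for a small number of attributes."""
    m = len(y)
    if abs(sum(x) - sum(y)) > eps:
        return False
    for k in range(1, m + 1):
        for S in combinations(range(m), k):
            # Total payoff of agents allowed to contribute to some i in S
            supply = sum(x[j] for j in set().union(*(relevant[i] for i in S)))
            if sum(y[i] for i in S) > supply + eps:
                return False
    return True

# Two agents, three attributes; agent 0 is irrelevant to attribute 0, and
# agent 1 is irrelevant to attributes 1 and 2 (0-indexed).
relevant = [{1}, {0}, {0}]
# Attribute 0 can only be funded by agent 1, who holds 0 under x = (10, 0),
# while the target y = (4, 4, 2) demands 4 there:
print(can_decompose([10, 0], [4, 4, 2], relevant))   # False
print(can_decompose([6, 4], [4, 4, 2], relevant))    # True
```

Note that this routine tests one target vector y at a time; deciding whether some y in an entire core polytope is reachable would require optimizing over the polytope (e.g., with a linear program), which is exactly the extra input the discussion below argues is needed.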
Example 4. Consider a MACG ⟨N, M, A, a, w⟩ with two players and three attributes. Suppose agent 1 is irrelevant to attribute 1, and agent 2 is irrelevant to attributes 2 and 3. For any set of attributes T ⊆ M, let w be defined as

w(T) = 0 if |T| ≤ 1;  6 if |T| = 2;  10 if |T| = 3.

Since the core of a game with a finite number of players forms a polytope, we can verify that the vectors (4, 4, 2), (4, 2, 4), and (2, 4, 4) fully characterize the core C of ⟨M, w⟩. On the other hand, the vector (10, 0) is in the core of ⟨N, v⟩. This vector cannot be decomposed and re-aggregated into a vector in C under the stated restriction: attribute 1 can receive shares only from agent 2, whose payoff is 0, yet every vector in C assigns attribute 1 at least 2.

Because of this apparent lack of relationship between the members of the core of ⟨N, v⟩ and those of ⟨M, w⟩, we believe an algorithm for testing Core Membership will require more input than just the veto sets of the aggregators and the Core Membership oracle for the aggregate value function.

7. CONCLUDING REMARKS

Multi-attribute coalitional games constitute a very natural way of modeling problems of interest. Their space requirement compares favorably with other representations discussed in the literature, and hence the representation serves well as a prototype for studying the computational complexity of coalitional game theory across a variety of problems. Positive results obtained under this representation can easily be translated into results about other representations; some of these corollary results have been discussed in Sections 5 and 6. An important direction to explore in the future is the question of efficiency in updating a game, and how to evaluate the solution concepts without starting from scratch. As pointed out at the end of Section 4.3, the MACG representation is very naturally suited to updates. Representation results regarding the efficiency of updates, and algorithmic results on computing the different solution concepts under updates, will both be very interesting. Our work on approximating the Shapley value when the aggregate value function is a non-linear function of the attributes suggests more work to be done there as well. Given the natural probabilistic interpretation of the Shapley value, we believe that a random sampling
approach may have significantly better theoretical guarantees.

8. REFERENCES

[1] J. M. Bilbao, J. R. Fernández, and J. J. López. Complexity in cooperative game theory. http://www.esi.us.es/~mbilbao.
[2] V. Conitzer and T. Sandholm. Complexity of determining nonemptiness of the core. In Proc. 18th Int. Joint Conf. on Artificial Intelligence, pages 613-618, 2003.
[3] V. Conitzer and T. Sandholm. Computing Shapley values, manipulating value division schemes, and checking core membership in multi-issue domains. In Proc. 19th Nat. Conf. on Artificial Intelligence, pages 219-225, 2004.
[4] I. J. Curiel, J. J. Derks, and S. H. Tijs. On balanced games and games with committee control. OR Spectrum, 11:83-88, 1989.
[5] X. Deng and C. H. Papadimitriou. On the complexity of cooperative solution concepts. Math. Oper. Res., 19:257-266, May 1994.
[6] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, New York, 1979.
[7] S. Ieong and Y. Shoham. Marginal contribution nets: A compact representation scheme for coalitional games. In Proc. 6th ACM Conf. on Electronic Commerce, pages 193-202, 2005.
[8] E. Kalai and E. Zemel. Totally balanced games and games of flow. Math. Oper. Res., 7:476-478, 1982.
[9] A. Mas-Colell, M. D. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, New York, 1995.
[10] M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, Cambridge, Massachusetts, 1994.
[11] L. S. Shapley. A value for n-person games. In H. W. Kuhn and A. W. Tucker, editors, Contributions to the Theory of Games II, number 28 in Annals of Mathematical Studies, pages 307-317. Princeton University Press, 1953.
[12] O. Shehory and S. Kraus. Task allocation via coalition formation among autonomous agents. In Proc. 14th Int. Joint Conf. on Artificial Intelligence, pages 31-45, 1995.
[13] O. Shehory and S.
Kraus. A kernel-oriented model for autonomous-agent coalition-formation in general environments: Implementation and results. In Proc. 13th Nat. Conf. on Artificial Intelligence, pages 134-140, 1996.
[14] J. von Neumann and O. Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 1953.
[15] M. Yokoo, V. Conitzer, T. Sandholm, N. Ohta, and A. Iwasaki. Coalitional games in open anonymous environments. In Proc. 20th Nat. Conf. on Artificial Intelligence, pages 509-515, 2005.

Appendix

We complete the missing proofs from the main text here. To prove Proposition 1, we need the following lemma.

Lemma 2. Marginal contribution nets, when all coalitions are restricted to have values 0 or 1, have the same representation power as an AND/OR circuit with negation at the literal level (i.e., an AC0 circuit) of depth two.

Proof. If a rule assigns a negative value in a marginal contribution net, we can rewrite it as a corresponding set of at most n rules, where n is the number of agents, each of which has a positive value, through application of De Morgan's Law. With all rule values non-negative, the weighted summation step of marginal contribution nets can be viewed as an OR, and each rule as a conjunction over literals, possibly negated. This matches up exactly with an AND/OR circuit of depth two.

Proof (Proposition 1). The parity game can be represented by a MACG using a single attribute, an aggregator of sum, and an aggregate value function that evaluates that sum modulo two. As a Boolean function, parity is known to require an exponential number of prime implicants. By Lemma 2, a prime implicant is the exact analogue of a pattern in a rule of a marginal contribution net. Therefore, to represent the parity function, a marginal contribution net must use an exponential number of rules. Finally, as shown in [7], a marginal contribution net is at worst a factor of O(n) less compact than multi-issue
representation. Therefore, the multi-issue representation will also take exponential space to represent the parity game. This assumes that each issue in the game is represented in characteristic form.

Proof (Theorem 2). An instance of three-dimensional matching is as follows [6]: given a set P ⊆ W × X × Y, where W, X, and Y are disjoint sets having the same number q of elements, does there exist a matching P′ ⊆ P such that |P′| = q and no two elements of P′ agree in any coordinate? For notation, let P = {p_1, p_2, ..., p_K}. We construct a MACG ⟨N, M, A, a, w⟩ as follows:

• M: Let attributes 1 to q correspond to the elements of W, attributes (q+1) to 2q to the elements of X, and attributes (2q+1) to 3q to the elements of Y, and let there be a special attribute (3q+1).
• N: Let player i correspond to p_i, and let there be a special player, which we denote ℓ.
• A: Let A_{ji} = 1 if the element corresponding to attribute j is in p_i. Thus, each of the first K columns has exactly three non-zero entries. We also set A_{(3q+1)ℓ} = 1.
• a: For each aggregator j, a_j(A(S)) = 1 if and only if the sum of row j of A(S) equals 1.
• w: The product over all a_j.

In the game ⟨N, v⟩ that corresponds to this construction, v(S) = 1 if and only if every attribute is covered exactly once. Therefore, for ℓ ∉ T ⊆ N, v(T ∪ {ℓ}) − v(T) = 1 if and only if T covers attributes 1 to 3q exactly once. Since all such T, if any exist, must be of size q, the number of three-dimensional matchings is given by φ_ℓ(v) · (K + 1)! / (q! (K − q)!).
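The reduction above can be exercised by brute force on small instances: build v from the triples and count the coalitions T for which the special player's marginal contribution is 1. A minimal sketch (`count_3dm_via_game` and `ELL` are my names; the special attribute 3q+1, controlled only by ℓ, is folded into the check that `ELL` is present):

```python
from itertools import combinations

def count_3dm_via_game(P, W, X, Y):
    """Count three-dimensional matchings of P subset of W x X x Y via the
    MACG of the proof above.  Players are the K triples plus a special
    player ELL; v(S) = 1 iff every attribute row of A(S) sums to exactly 1,
    so the marginal contribution v(T + {ELL}) - v(T) flags the matchings."""
    elems = list(W) + list(X) + list(Y)   # the 3q element-attributes
    ELL = len(P)                          # index of the special player

    def v(S):
        if ELL not in S:                  # special attribute's row sums to 0
            return 0
        for e in elems:
            # Each element must be covered by exactly one chosen triple.
            if sum(1 for i in S if i != ELL and e in P[i]) != 1:
                return 0
        return 1

    return sum(v(set(T) | {ELL}) - v(set(T))
               for k in range(len(P) + 1)
               for T in combinations(range(len(P)), k))

W, X, Y = {'w1', 'w2'}, {'x1', 'x2'}, {'y1', 'y2'}
P = [('w1', 'x1', 'y1'), ('w2', 'x2', 'y2'), ('w1', 'x2', 'y2')]
print(count_3dm_via_game(P, W, X, Y))   # 1: only {p_1, p_2} is a matching
```

This enumeration is of course exponential; the point of the proof is precisely that computing the Shapley value φ_ℓ(v) of this polynomially sized MACG would yield the matching count, so the value is #P-hard to compute.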
solution concepts for these games--the Shapley value and the core.\nWe show how the positive results obtained in this paper imply comparable results for other games studied in the literature.\n1.\nINTRODUCTION\nWhen agents interact with one another, the value of their contribution is determined by what they can do with their skills and resources, rather than simply their identities.\nConsider the problem of forming a soccer team.\nFor a team to be successful, a team needs some forwards, midfielders, defenders, and a goalkeeper.\nThe relevant attributes of the\nplayers are their skills at playing each of the four positions.\nThe value of a team depends on how well its players can play these positions.\nAt a finer level, we can extend the model to consider a wider range of skills, such as passing, shooting, and tackling, but the value of a team remains solely a function of the attributes of its players.\nConsider an example from the business world.\nCompanies in the metals industry are usually vertically-integrated and diversified.\nThey have mines for various types of ores, and also mills capable of processing and producing different kinds of metal.\nThey optimize their production profile according to the market prices for their products.\nFor example, when the price of aluminum goes up, they will allocate more resources to producing aluminum.\nHowever, each company is limited by the amount of ores it has, and its capacities in processing given kinds of ores.\nTwo or more companies may benefit from trading ores and processing capacities with one another.\nTo model the metal industry, the relevant attributes are the amount of ores and the processing capacities of the companies.\nGiven the exogenous input of market prices, the value of a group of companies will be determined by these attributes.\nMany real-world problems can be likewise modeled by picking the right attributes.\nAs attributes apply to both individual agents and groups of agents, we propose the use of 
coalitional game theory to understand what groups may form and what payoffs the agents may expect in such models.\nCoalitional game theory focuses on what groups of agents can achieve, and thus connects strongly with e-commerce, as the Internet economies have significantly enhanced the abilities of business to identify and capitalize on profitable opportunities of cooperation.\nOur goal is to understand the computational aspects of computing the solution concepts (stable and\/or fair distribution of payoffs, formally defined in Section 3) for coalitional games described using attributes.\nOur contributions can be summarized as follows: 9 We define a formal representation for coalitional games based on attributes, and relate this representation to others proposed in the literature.\nWe show that when compared to other representations, there exists games for which a multi-attribute description can be exponentially more succinct, and for no game it is worse.\n9 Given the generality of the model, positive results carry over to other representations.\nWe discuss two positive results in the paper, one for the Shapley value and one for the core, and show how these imply related results in the literature.\n\u2022 We study an approximation heuristic for the Shapley value when its exact values cannot be found efficiently.\nWe provide an explicit bound on the maximum error of the estimate, and show that the bound is asymptotically tight.\nWe also carry out experiments to evaluate how the heuristic performs on random instances .1\n2.\nRELATED WORK\nCoalitional game theory has been well studied in economics [9, 10, 14].\nA vast amount of literature have focused on defining and comparing solution concepts, and determining their existence and properties.\nThe first algorithmic study of coalitional games, as far as we know, is performed by Deng and Papadimitriou in [5].\nThey consider coalitional games defined on graphs, where the players are the vertices and the value of 
coalition is determined by the sum of the weights of the edges spanned by these players.\nThis can be efficiently modeled and generalized using attributes.\nAs a formal representation, multi-attribute coalitional games is closely related to the multi-issue representation of Conitzer and Sandholm [3] and our work on marginal contribution networks [7].\nBoth of these representations are based on dividing a coalitional game into subgames (termed \"issues\" in [3] and \"rules\" in [7]), and aggregating the subgames via linear combination.\nThe key difference in our work is the unrestricted aggregation of subgames: the aggregation could be via a polynomial function of the attributes, or even by treating the subgames as input to another computational problem such as a min-cost flow problem.\nThe relationship of these models will be made clear after we define the multiattribute representation in Section 4.\nAnother representation proposed in the literature is one specialized for superadditive games by Conitzer and Sandholm [2].\nThis representation is succinct, but to find the values of some coalitions may require solving an NP-hard problem.\nWhile it is possible for multi-attribute coalitional games to efficiently represent these games, it necessarily requires the solution to an NP-hard problem in order to find out the values of some coalitions.\nIn this paper, we stay within the boundary of games that admits efficient algorithm for determining the value of coalitions.\nWe will therefore not make further comparisons with [2].\nThe model of coalitional games with attributes has been considered in the works of Shehory and Kraus.\nThey model the agents as possessing capabilities that indicates their proficiencies in different areas, and consider how to efficiently allocate tasks [12] and the dynamics of coalition formation [13].\nOur work differs significantly as our focus is on reasoning about solution concepts.\nOur model also covers a wider scope as attributes generalize 
the notion of capabilities.\nYokoo et al. have also considered a model of coalitional games where agents are modeled by sets of skills, and these skills in turn determine the value of coalitions [15].\nThere are two major differences between their work and ours.\nFirstly, Yokoo et al. assume that each skill is fundamentally different from another, hence no two agents may possess the same skill.\nAlso, they focus on developing new solution concepts that are robust with respect to manipulation by agents.\nOur focus is on reasoning about traditional solution concepts.\nOur work is also related to the study of cooperative games with committee control [4].\nIn these games, there is usually an underlying set of resources each controlled by a (possibly overlapping) set of players known as the committee, engaged in a simple game (defined in Section 3).\nmultiattribute coalitional games generalize these by considering relationship between the committee and the resources beyond simple games.\nWe note that when restricted to simple games, we derive similar results to that in [4].\n3.\nPRELIMINARIES\n3.1 Coalitional Games\n3.2 Computational Problems\n4.\nFORMAL MODEL\n4.1 Multi-Attribute Coalitional Games\n4.2 An Example\n4.3 Relationship with Other Representations\n4.4 Limitation of One Aggregator per Attribute\n5.\nSHAPLEY VALUE\n5.1 Problem Setup\n5.2 Linearly Separable Attributes\n5.3 Polynomial Combination of Attributes\n5.3.1 Approximation\n5.3.2 Theoretical analysis of heuristic\n5.3.3 Empirical evaluation\n6.\nCORE-RELATED QUESTIONS\n6.1 Problem Setup\n6.2 Core Non-emptiness\n6.3 Core Membership\n7.\nCONCLUDING REMARKS\nMulti-attribute coalitional games constitute a very natural way of modeling problems of interest.\nIts space requirement compares favorably with other representations discussed in the literature, and hence it serves well as a prototype to study computational complexity of coalitional game theory for a variety of problems.\nPositive results obtained 
under this representation can easily be translated to results about other representations.\nSome of these corollary results have been discussed in Sections 5 and 6.\nAn important direction to explore in the future is the question of efficiency in updating a game, and how to evaluate the solution concepts without starting from scratch.\nAs pointed out at the end of Section 4.3, MACG is very naturally suited for updates.\nRepresentation results regarding efficiency of updates, and algorithmic results regarding how to compute the different solution concepts from updates, will both be very interesting.\nOur work on approximating the Shapley value when the aggregate value function is a non-linear function of the attributes suggests more work to be done there as well.\nGiven the natural probabilistic interpretation of the Shapley value, we believe that a random sampling approach may have significantly better theoretical guarantees.","lvl-4":"Multi-Attribute Coalitional Games * t\nABSTRACT\nWe study coalitional games where the value of cooperation among the agents are solely determined by the attributes the agents possess, with no assumption as to how these attributes jointly determine this value.\nThis framework allows us to model diverse economic interactions by picking the right attributes.\nWe study the computational complexity of two coalitional solution concepts for these games--the Shapley value and the core.\nWe show how the positive results obtained in this paper imply comparable results for other games studied in the literature.\n1.\nINTRODUCTION\nWhen agents interact with one another, the value of their contribution is determined by what they can do with their skills and resources, rather than simply their identities.\nConsider the problem of forming a soccer team.\nThe relevant attributes of the\nplayers are their skills at playing each of the four positions.\nThe value of a team depends on how well its players can play these positions.\nAt a finer level, we 
can extend the model to consider a wider range of skills, such as passing, shooting, and tackling, but the value of a team remains solely a function of the attributes of its players.\nConsider an example from the business world.\nCompanies in the metals industry are usually vertically-integrated and diversified.\nThey have mines for various types of ores, and also mills capable of processing and producing different kinds of metal.\nHowever, each company is limited by the amount of ores it has, and its capacities in processing given kinds of ores.\nTwo or more companies may benefit from trading ores and processing capacities with one another.\nTo model the metal industry, the relevant attributes are the amount of ores and the processing capacities of the companies.\nGiven the exogenous input of market prices, the value of a group of companies will be determined by these attributes.\nMany real-world problems can be likewise modeled by picking the right attributes.\nAs attributes apply to both individual agents and groups of agents, we propose the use of coalitional game theory to understand what groups may form and what payoffs the agents may expect in such models.\nOur goal is to understand the computational aspects of computing the solution concepts (stable and\/or fair distribution of payoffs, formally defined in Section 3) for coalitional games described using attributes.\nOur contributions can be summarized as follows: 9 We define a formal representation for coalitional games based on attributes, and relate this representation to others proposed in the literature.\nWe show that when compared to other representations, there exists games for which a multi-attribute description can be exponentially more succinct, and for no game it is worse.\n9 Given the generality of the model, positive results carry over to other representations.\nWe discuss two positive results in the paper, one for the Shapley value and one for the core, and show how these imply related results 
in the literature.\n\u2022 We study an approximation heuristic for the Shapley value when its exact values cannot be found efficiently.\nWe also carry out experiments to evaluate how the heuristic performs on random instances .1\n2.\nRELATED WORK\nCoalitional game theory has been well studied in economics [9, 10, 14].\nA vast amount of literature have focused on defining and comparing solution concepts, and determining their existence and properties.\nThe first algorithmic study of coalitional games, as far as we know, is performed by Deng and Papadimitriou in [5].\nThey consider coalitional games defined on graphs, where the players are the vertices and the value of coalition is determined by the sum of the weights of the edges spanned by these players.\nThis can be efficiently modeled and generalized using attributes.\nAs a formal representation, multi-attribute coalitional games is closely related to the multi-issue representation of Conitzer and Sandholm [3] and our work on marginal contribution networks [7].\nBoth of these representations are based on dividing a coalitional game into subgames (termed \"issues\" in [3] and \"rules\" in [7]), and aggregating the subgames via linear combination.\nThe relationship of these models will be made clear after we define the multiattribute representation in Section 4.\nAnother representation proposed in the literature is one specialized for superadditive games by Conitzer and Sandholm [2].\nThis representation is succinct, but to find the values of some coalitions may require solving an NP-hard problem.\nWhile it is possible for multi-attribute coalitional games to efficiently represent these games, it necessarily requires the solution to an NP-hard problem in order to find out the values of some coalitions.\nIn this paper, we stay within the boundary of games that admits efficient algorithm for determining the value of coalitions.\nThe model of coalitional games with attributes has been considered in the works of 
Shehory and Kraus.\nOur work differs significantly as our focus is on reasoning about solution concepts.\nOur model also covers a wider scope as attributes generalize the notion of capabilities.\nYokoo et al. have also considered a model of coalitional games where agents are modeled by sets of skills, and these skills in turn determine the value of coalitions [15].\nThere are two major differences between their work and ours.\nAlso, they focus on developing new solution concepts that are robust with respect to manipulation by agents.\nOur focus is on reasoning about traditional solution concepts.\nOur work is also related to the study of cooperative games with committee control [4].\nIn these games, there is usually an underlying set of resources each controlled by a (possibly overlapping) set of players known as the committee, engaged in a simple game (defined in Section 3).\nmultiattribute coalitional games generalize these by considering relationship between the committee and the resources beyond simple games.\nWe note that when restricted to simple games, we derive similar results to that in [4].\n7.\nCONCLUDING REMARKS\nMulti-attribute coalitional games constitute a very natural way of modeling problems of interest.\nIts space requirement compares favorably with other representations discussed in the literature, and hence it serves well as a prototype to study computational complexity of coalitional game theory for a variety of problems.\nPositive results obtained under this representation can easily be translated to results about other representations.\nSome of these corollary results have been discussed in Sections 5 and 6.\nAn important direction to explore in the future is the question of efficiency in updating a game, and how to evaluate the solution concepts without starting from scratch.\nRepresentation results regarding efficiency of updates, and algorithmic results regarding how to compute the different solution concepts from updates, will both be 
very interesting.\nOur work on approximating the Shapley value when the aggregate value function is a non-linear function of the attributes suggests more work to be done there as well.\nGiven the natural probabilistic interpretation of the Shapley value, we believe that a random sampling approach may have significantly better theoretical guarantees.","lvl-2":"Multi-Attribute Coalitional Games * t\nABSTRACT\nWe study coalitional games where the value of cooperation among the agents are solely determined by the attributes the agents possess, with no assumption as to how these attributes jointly determine this value.\nThis framework allows us to model diverse economic interactions by picking the right attributes.\nWe study the computational complexity of two coalitional solution concepts for these games--the Shapley value and the core.\nWe show how the positive results obtained in this paper imply comparable results for other games studied in the literature.\n1.\nINTRODUCTION\nWhen agents interact with one another, the value of their contribution is determined by what they can do with their skills and resources, rather than simply their identities.\nConsider the problem of forming a soccer team.\nFor a team to be successful, a team needs some forwards, midfielders, defenders, and a goalkeeper.\nThe relevant attributes of the\nplayers are their skills at playing each of the four positions.\nThe value of a team depends on how well its players can play these positions.\nAt a finer level, we can extend the model to consider a wider range of skills, such as passing, shooting, and tackling, but the value of a team remains solely a function of the attributes of its players.\nConsider an example from the business world.\nCompanies in the metals industry are usually vertically-integrated and diversified.\nThey have mines for various types of ores, and also mills capable of processing and producing different kinds of metal.\nThey optimize their production profile according to 
the market prices for their products. For example, when the price of aluminum goes up, they will allocate more resources to producing aluminum. However, each company is limited by the amount of ores it has, and by its capacity for processing given kinds of ores. Two or more companies may benefit from trading ores and processing capacities with one another. To model the metals industry, the relevant attributes are the amounts of ores and the processing capacities of the companies. Given the exogenous input of market prices, the value of a group of companies is determined by these attributes.

Many real-world problems can likewise be modeled by picking the right attributes. As attributes apply to both individual agents and groups of agents, we propose the use of coalitional game theory to understand what groups may form and what payoffs the agents may expect in such models. Coalitional game theory focuses on what groups of agents can achieve, and thus connects strongly with e-commerce, as Internet economies have significantly enhanced the ability of businesses to identify and capitalize on profitable opportunities for cooperation.

Our goal is to understand the computational aspects of computing the solution concepts (stable and/or fair distributions of payoffs, formally defined in Section 3) for coalitional games described using attributes. Our contributions can be summarized as follows:

• We define a formal representation for coalitional games based on attributes, and relate this representation to others proposed in the literature. We show that, compared to other representations, there exist games for which a multi-attribute description is exponentially more succinct, and for no game is it worse.

• Given the generality of the model, positive results carry over to other representations. We discuss two positive results in the paper, one for the Shapley value and one for the core, and show how these imply related results in the
literature.

• We study an approximation heuristic for the Shapley value when its exact values cannot be found efficiently. We provide an explicit bound on the maximum error of the estimate, and show that the bound is asymptotically tight. We also carry out experiments to evaluate how the heuristic performs on random instances.¹

2. RELATED WORK

Coalitional game theory has been well studied in economics [9, 10, 14]. A vast amount of literature has focused on defining and comparing solution concepts, and determining their existence and properties. The first algorithmic study of coalitional games, as far as we know, was performed by Deng and Papadimitriou in [5]. They consider coalitional games defined on graphs, where the players are the vertices and the value of a coalition is determined by the sum of the weights of the edges spanned by these players. This can be efficiently modeled and generalized using attributes.

As a formal representation, multi-attribute coalitional games are closely related to the multi-issue representation of Conitzer and Sandholm [3] and our work on marginal contribution networks [7]. Both of these representations are based on dividing a coalitional game into subgames (termed "issues" in [3] and "rules" in [7]), and aggregating the subgames via linear combination. The key difference in our work is the unrestricted aggregation of subgames: the aggregation could be via a polynomial function of the attributes, or even by treating the subgames as input to another computational problem such as a min-cost flow problem. The relationship among these models will be made clear after we define the multi-attribute representation in Section 4.

Another representation proposed in the literature is one specialized for superadditive games, by Conitzer and Sandholm [2]. This representation is succinct, but finding the values of some coalitions may require solving an NP-hard problem. While it is possible for multi-attribute coalitional games
to efficiently represent these games, it necessarily requires the solution to an NP-hard problem in order to find the values of some coalitions. In this paper, we stay within the boundary of games that admit efficient algorithms for determining the values of coalitions. We will therefore not make further comparisons with [2].

The model of coalitional games with attributes has been considered in the works of Shehory and Kraus. They model the agents as possessing capabilities that indicate their proficiencies in different areas, and consider how to efficiently allocate tasks [12] and the dynamics of coalition formation [13]. Our work differs significantly as our focus is on reasoning about solution concepts. Our model also covers a wider scope, as attributes generalize the notion of capabilities.

Yokoo et al. have also considered a model of coalitional games where agents are modeled by sets of skills, and these skills in turn determine the values of coalitions [15]. There are two major differences between their work and ours. Firstly, Yokoo et al.
assume that each skill is fundamentally different from the others, hence no two agents may possess the same skill. Also, they focus on developing new solution concepts that are robust with respect to manipulation by agents. Our focus is on reasoning about traditional solution concepts.

Our work is also related to the study of cooperative games with committee control [4]. In these games, there is usually an underlying set of resources, each controlled by a (possibly overlapping) set of players known as the committee, engaged in a simple game (defined in Section 3). Multi-attribute coalitional games generalize these by considering relationships between the committee and the resources beyond simple games. We note that when restricted to simple games, we derive results similar to those in [4].

3. PRELIMINARIES

In this section, we review the relevant concepts of coalitional game theory and its two most important solution concepts, the Shapley value and the core. We then define the computational questions that will be studied in the second half of the paper.

3.1 Coalitional Games

Throughout this paper, we assume that the payoff to a group of agents can be freely distributed among its members. This transferable utility assumption is commonly made in coalitional game theory. The canonical representation of a coalitional game with transferable utility is its characteristic form.

Definition 1. A coalitional game with transferable utility in characteristic form is denoted by the pair ⟨N, v⟩, where

• N is the set of agents; and
• v: 2^N → R is a function that maps each group of agents S ⊆ N to a real-valued payoff.

A group of agents in a game is known as a coalition, and the entire set of agents is known as the grand coalition. An important class of coalitional games is the class of monotonic games, in which the value of a coalition can only grow as agents join, i.e., v(S) ≤ v(T) whenever S ⊆ T. Another important class of coalitional games is the class of simple games. In a simple game, a coalition either wins, in which case it has a
value of 1, or loses, in which case it has a value of 0. It is often used to model voting situations. Simple games are often assumed to be monotonic, i.e., if S wins, then for all T ⊇ S, T also wins. This coincides with the notion of using simple games as a model for voting. If a simple game is monotonic, then it is fully described by its set of minimal winning coalitions, i.e., coalitions S for which v(S) = 1 but for all coalitions T ⊂ S, v(T) = 0.

An outcome in a coalitional game specifies the utilities the agents receive. A solution concept assigns to each coalitional game a set of "reasonable" outcomes. Different solution concepts attempt to capture in some way outcomes that are stable and/or fair. Two of the best known solution concepts are the Shapley value and the core.

The Shapley value is a normative solution concept that prescribes a "fair" way to divide the gains from cooperation when the grand coalition is formed. The division of payoff to agent i is the average marginal contribution of agent i over all possible permutations of the agents. Formally,

Definition 3. The Shapley value of agent i, φ_i(v), in game ⟨N, v⟩ is given by the following formula:

  φ_i(v) = Σ_{S ⊆ N\{i}} [|S|! (|N| − |S| − 1)! / |N|!] (v(S ∪ {i}) − v(S))

¹We acknowledge that random instances may not be typical of what happens in practice, but given the generality of our model, it provides the most unbiased view.

The core is a descriptive solution concept that focuses on outcomes that are "stable." Stability under the core means that no set of players can jointly deviate to improve their payoffs.

Definition 4. An outcome x ∈ R^|N| is in the core of the game ⟨N, v⟩ if Σ_{i∈N} x_i = v(N) and, for all S ⊆ N, Σ_{i∈S} x_i ≥ v(S).

Note that the core of a game may be empty, i.e., there may not exist any payoff vector that satisfies the stability requirement for the given game.

3.2 Computational Problems

We will study the following three problems related to
solution concepts in coalitional games.

Problem 1. (SHAPLEY VALUE) Given a description of the coalitional game and an agent i, compute the Shapley value of agent i.

Problem 2. (CORE MEMBERSHIP) Given a description of the coalitional game and a payoff vector x such that Σ_{i∈N} x_i = v(N), determine if Σ_{i∈S} x_i ≥ v(S) for all S ⊆ N.

Problem 3. (CORE NON-EMPTINESS) Given a description of the coalitional game, determine if there exists any payoff vector x such that Σ_{i∈S} x_i ≥ v(S) for all S ⊆ N, and Σ_{i∈N} x_i = v(N).

Note that the complexity of the above problems depends on how the game is described. All of these problems are "easy" if the game is described by its characteristic form, but only because that description takes space exponential in the number of agents, and hence a simple brute-force approach takes time polynomial in the size of the input description. To properly understand the computational complexity questions, we have to look at compact representations.

4. FORMAL MODEL

In this section, we give a formal definition of multi-attribute coalitional games, and show how they are related to some of the representations discussed in the literature. We also discuss some limitations of our proposed approach.

4.1 Multi-Attribute Coalitional Games

A multi-attribute coalitional game (MACG) consists of two parts: a description of the attributes of the agents, which we term an attribute model, and a function that assigns values to combinations of attributes. Together, they induce a coalitional game over the agents. We first define the attribute model.

Definition 5. An attribute model is a tuple ⟨N, M, A⟩, where

• N denotes the set of agents, of size n;
• M denotes the set of attributes, of size m;
• A ∈ R^{m×n}, the attribute matrix, describes the values of the attributes of the agents, with A_{ij} denoting the value of attribute i for agent j.

We can directly define a function that maps combinations of attributes to real values. However, for many problems, we can describe the function more
compactly by computing it in two steps: we first compute an aggregate value for each attribute, then compute the values of combinations of attributes using only the aggregated information. Formally,

Definition 6. An aggregating function (or aggregator) takes as input a row of the attribute matrix and a coalition S, and summarizes the attributes of the agents in S with a single number. We can treat it as a mapping from R^n × 2^N → R.

Aggregators often perform basic arithmetic or logical operations. For example, an aggregator may compute the sum of the attributes, or evaluate a Boolean expression by treating the agents i ∈ S as true and the agents j ∉ S as false. Analogous to the notion of simple games, we call an aggregator simple if its range is {0, 1}. For any aggregator, there is a set of relevant agents and a set of irrelevant agents. An agent i is irrelevant to aggregator a_j if a_j(S ∪ {i}) = a_j(S) for all S ⊆ N. A relevant agent is one that is not irrelevant.

Given the attribute matrix, an aggregator assigns a value to each coalition S ⊆ N. Thus, each aggregator defines a game over N.
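The two-step construction can be made concrete with a small sketch. This is a minimal illustration under our own assumptions, not code from the paper: we pick sum aggregators and a linear, price-weighted aggregate value function (all identifiers are illustrative), and include a brute-force Shapley computation over permutations for small n.

```python
from itertools import permutations

# Illustrative attribute matrix A: rows are attributes, columns are agents.
A = [
    [3, 0, 1],   # attribute 0 (e.g., units of one resource held by agents 0..2)
    [0, 2, 2],   # attribute 1 (e.g., units of another resource)
]

def sum_aggregator(row, coalition):
    """Aggregator a_j: summarizes attribute j over coalition S with a sum."""
    return sum(row[i] for i in coalition)

def value(coalition, prices=(2, 1)):
    """Induced game v(S) = w(a_1(A,S), ..., a_m(A,S)), with a linear
    aggregate value function w (a dot product with fixed prices)."""
    aggregates = [sum_aggregator(row, coalition) for row in A]
    return sum(p * x for p, x in zip(prices, aggregates))

def shapley(n, v):
    """Exact Shapley values by enumerating permutations (exponential in n)."""
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        members = set()
        for i in order:
            before = v(frozenset(members))
            members.add(i)
            phi[i] += v(frozenset(members)) - before   # marginal contribution
    return [x / len(perms) for x in phi]

print(value({0, 1, 2}))     # value of the grand coalition → 12
print(shapley(3, value))    # → [6.0, 2.0, 4.0]
```

With a linear aggregate value function the induced game is additive, so each agent's Shapley value equals its stand-alone value; this is a tiny instance of the linear-separability observation developed in Section 5.2.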
For aggregator a_j, we refer to this induced game as the game of attribute j, and denote it by a_j(A). When the attribute matrix is clear from the context, we may drop A and simply denote the game as a_j. We may refer to the game as the aggregator when no ambiguity arises. We now define the second step of the computation with the help of aggregators.

Definition 7. An aggregate value function takes as input the values of the aggregators and maps these to a real value.

In this paper, we will focus on having one aggregator per attribute. Therefore, in what follows, we refer to the aggregate value function as a function over the attributes. Note that when all aggregators are simple, the aggregate value function implicitly defines a game over the attributes, as it assigns a value to each set of attributes T ⊆ M. We refer to this as the game among attributes. We now define a multi-attribute coalitional game.

Definition 8. A multi-attribute coalitional game (MACG) is a tuple ⟨N, M, A, a, w⟩, where

• ⟨N, M, A⟩ is an attribute model;
• a is a set of aggregators, one for each attribute; we can treat the set together as a vector function, mapping R^{m×n} × 2^N → R^m;
• w: R^m → R is an aggregate value function.

This induces a coalitional game with transferable payoffs ⟨N, v⟩ with players N and the value function defined by v(S) = w(a(A, S)).

Note that a MACG as defined is fully capable of representing any coalitional game ⟨N, v⟩: we can simply take the set of attributes to be the set of agents, i.e., M = N, an identity matrix for A, aggregators that sum their inputs, and the aggregate value function w to be v.

4.2 An Example

Let us illustrate how a MACG can be used to represent a game with a simple example. Suppose there are four types of resources in the world: gold, silver, copper, and iron; each agent is endowed with some amount of these resources, and there is a fixed price for each of the resources in the market. This game can be described using a MACG with an attribute matrix A, where A_{ij} denotes the amount of resource i that
agent j is endowed with. For each resource, the aggregator sums the amounts of that resource the agents have. Finally, the aggregate value function takes the dot product of the market price vector with the vector of aggregates. Note the inherent flexibility of the model: only limited work is required to update the game as market prices change, or when a new agent arrives.

4.3 Relationship with Other Representations

As briefly discussed in Section 2, MACG is closely related to two other representations in the literature, the multi-issue representation of Conitzer and Sandholm [3] and our work on marginal contribution nets [7]. To make the relationships clear, we first review these two representations. We have changed the notation from the original papers to highlight their similarities.

Definition 9. A multi-issue representation is given as a vector of coalitional games, (v_1, v_2, ..., v_m), each possibly with a varying set of agents, say N_1, ..., N_m. The coalitional game ⟨N, v⟩ induced by a multi-issue representation has player set N = ∪_{i=1}^m N_i, and for each coalition S ⊆ N, v(S) = Σ_{i=1}^m v_i(S ∩ N_i). The games v_i are assumed to be represented in characteristic form.

Definition 10. A marginal contribution net is given as a set of rules (r_1, r_2, ..., r_m), where rule r_i has a weight w_i and a pattern p_i that is a conjunction over literals (positive or negative). The agents are represented as literals. A coalition S is said to satisfy the pattern p_i if, treating the agents i ∈ S as true and the agents j ∉ S as false, p_i(S) evaluates to true. Denote the set of literals involved in rule i by N_i. The coalitional game ⟨N, v⟩ induced by a marginal contribution net has player set N = ∪_{i=1}^m N_i, and for each coalition S ⊆ N, v(S) = Σ_{i: p_i(S)=true} w_i.

From these definitions, we can see the relationships among these three representations clearly. An issue of a multi-issue representation corresponds to an attribute in MACG. Similarly, a rule of a
marginal contribution net corresponds to an attribute in MACG. The aggregate value functions are simple sums and weighted sums for the respective representations. Therefore, it is clear that MACG is no less succinct than either representation. However, MACG differs in two important ways.

Firstly, there is no restriction on the operations performed by the aggregate value function over the attributes. This is an important generalization over the linear combination of issues or rules in the other two approaches. In particular, there are games for which MACG can be exponentially more compact. The proof of the following proposition can be found in the Appendix.

PROPOSITION 1. Consider the parity game ⟨N, v⟩ where coalition S ⊆ N has value v(S) = 1 if |S| is odd, and v(S) = 0 otherwise. MACG can represent the game in O(n) space, whereas both the multi-issue representation and marginal contribution nets require O(2^n) space.

A second important difference of MACG is that the attribute model and the value function are cleanly separated. As suggested by the example in Section 4.2, this often allows more efficient updates of the values of the game as it changes. Also, the same attribute model can be evaluated using different value functions, and the same value function can be used to evaluate different attribute models. Therefore, MACG is very suitable for representing multiple games. We believe the problems of updating games and representing multiple games are interesting future directions to explore.

4.4 Limitation of One Aggregator per Attribute

Before focusing on one aggregator per attribute for the rest of the paper, it is natural to wonder whether anything is lost by such a restriction. The unfortunate answer is yes, as best illustrated by the following example. Consider again the problem of forming a soccer team discussed in the introduction, where we model the attributes of the agents as their abilities to play the four positions on the field, and the value of a team depends
on the positions covered. If we first aggregate each of the attributes individually, we lose the distributional information of the attributes. In other words, we will not be able to distinguish between two teams, one of which has a player for each position, while the other has one player who can play all positions but the rest can only play the same one position. This loss of distributional information can be recovered by using aggregators that take as input multiple rows of the attribute matrix rather than just a single row. Alternatively, if we leave such attributes untouched, we can leave the burden of correctly evaluating them to the aggregate value function. However, for many problems found in the literature, such as the transportation domain of [12] and the flow game setting of [4], the distribution of attributes does not affect the value of the coalitions. In addition, the problem may become unmanageably complex as we introduce more complicated aggregators. Therefore, we will focus on the representation as defined in Definition 8.

5. SHAPLEY VALUE

In this section, we focus on computational issues of finding the Shapley value of a player in a MACG. We first set up the problem with the use of oracles, to avoid complexities arising from the aggregators. We then show that when attributes are linearly separable, the Shapley value can be efficiently computed. This generalizes the proofs of related results in the literature. For the non-linearly separable case, we consider a natural heuristic for estimating the Shapley value, and study the heuristic theoretically and empirically.

5.1 Problem Setup

We start by noting that computing the Shapley value for simple aggregators can be hard in general. In particular, we can define aggregators that compute a weighted majority over their input set of agents. As noted in [6], finding the Shapley value of a weighted majority game is #P-hard. Therefore, discussion of the complexity of the Shapley value
for MACG with unrestricted aggregators is moot. Instead of placing explicit restrictions on the aggregators, we assume that the Shapley value of each aggregator can be answered by an oracle. For notation, let φ_i(u) denote the Shapley value of agent i in some game u. We make the following assumption:

Assumption 1. For each aggregator a_j in a MACG, there is an associated oracle that answers the Shapley value of the game of attribute j. In other words, φ_i(a_j) is known.

For many aggregators that perform basic operations over their inputs, polynomial-time oracles for the Shapley value exist. These include operations such as sums, and symmetric functions when the attributes are restricted to {0, 1}. Also, when only a few agents have an effect on the aggregator, brute-force computation of the Shapley value is feasible. Therefore, the above assumption is reasonable for many settings. In any case, this abstraction allows us to focus on the aggregate value function.

5.2 Linearly Separable Attributes

When the aggregate value function can be written as a linear function of the attributes, the Shapley value of the game can be efficiently computed.

PROOF. First, we note that the Shapley value satisfies an additivity axiom [11]: φ_i(a + b) = φ_i(a) + φ_i(b), where ⟨N, a + b⟩ is the game defined by (a + b)(S) = a(S) + b(S) for all S ⊆ N. It is also clear that the Shapley value satisfies scaling, namely φ_i(αv) = αφ_i(v), where (αv)(S) = αv(S) for all S ⊆ N. Since the aggregate value function can be expressed as a weighted sum of the games of attributes, say v = Σ_j c_j a_j(A), we have φ_i(v) = Σ_j c_j φ_i(a_j).

Many positive results regarding efficient computation of the Shapley value in the literature depend on some form of linearity. Examples include the edge-spanning game on graphs of Deng and Papadimitriou [5], the multi-issue representation of [3], and the marginal contribution nets of [7]. The key to determining whether the Shapley value can be
efficiently computed is the linear separability of the attributes. Once this is satisfied, as long as the Shapley values of the games of attributes can be efficiently determined, the Shapley value of the entire game can be efficiently computed.

5.3 Polynomial Combination of Attributes

When the aggregate value function cannot be expressed as a linear function of its attributes, computing the Shapley value exactly is difficult. Here, we focus on aggregate value functions that can be expressed as a polynomial of the attributes. If we do not place a limit on the degree of the polynomial, and the game ⟨N, v⟩ is not necessarily monotonic, the problem is #P-hard. The proof is via a reduction from three-dimensional matching, and details can be found in the Appendix.

Even if we restrict ourselves to monotonic games and non-negative coefficients for the polynomial aggregate value function, computing the exact Shapley value can still be hard. For example, suppose there are two attributes. All agents in some set B ⊆ N possess the first attribute, and all agents in some set C ⊆ N possess the second, with B and C disjoint. For a coalition S ⊆ N, the aggregator for the first attribute evaluates to 1 if and only if |S ∩ B| ≥ b′, and similarly, the aggregator for the second evaluates to 1 if and only if |S ∩ C| ≥ c′. Let the cardinalities of the sets B and C be b and c. We can verify that the Shapley value of an agent i ∈ B is given by Equation (1), a weighted sum of probability values of hypergeometric random variables. The correspondence with the hypergeometric distribution is due to the sampling-without-replacement nature of the Shapley value. As far as we know, there is no closed-form formula to evaluate this sum. In addition, as the number of attributes involved increases, we move to multivariate hypergeometric random variables, and the number of summands grows exponentially in the number of attributes. Therefore, it is highly unlikely that the exact Shapley
value can be determined efficiently. Therefore, we look for an approximation.

5.3.1 Approximation

First, we need criteria for evaluating how well an estimate, φ̂, approximates the true Shapley value, φ. We consider the following three natural criteria:

• Maximum underestimate: max_i φ_i / φ̂_i
• Maximum overestimate: max_i φ̂_i / φ_i
• Total variation: (1/2) Σ_i |φ_i − φ̂_i|, or alternatively max_S |Σ_{i∈S} φ_i − Σ_{i∈S} φ̂_i|

The total variation criterion is more meaningful when we normalize the game to have a value of 1 for the grand coalition, i.e., v(N) = 1. We can also define additive analogues of the under- and overestimates, especially when the games are normalized.

We will assume for now that the aggregate value function is a polynomial over the attributes with non-negative coefficients. We will also assume that the aggregators are simple. We will evaluate a specific heuristic that is analogous to Equation (1). Suppose the aggregate function can be written as a polynomial with p terms,

  w(x) = Σ_{j=1}^p c_j x_{j(1)} x_{j(2)} ··· x_{j(k_j)}     (2)

where the attributes involved in term j are j(1), ..., j(k_j). We compute an estimate φ̂ of the Shapley value as

  φ̂_i = Σ_{j=1}^p (c_j / k_j) Σ_{l=1}^{k_j} φ_i(a_{j(l)})     (3)

The idea behind the estimate is that for each term, we divide the value of the term equally among all its attributes. This is represented by the factor c_j / k_j. Then, for each attribute of an agent, we assign the player a share of value from the attribute. This share is determined by the Shapley value of the simple game of that attribute. Without considering the details of the simple games, this constitutes a fair (but blind) rule of sharing.

5.3.2 Theoretical analysis of heuristic

We can derive a simple and tight bound for the maximum (multiplicative) underestimate of the heuristic estimate.

THEOREM 3. Given a game ⟨N, v⟩ represented as a MACG ⟨N, M, A, a, w⟩, suppose w can be expressed as a polynomial function of its attributes (cf. Equation (2)). Let K =
max_j k_j, i.e., the maximum degree of the polynomial. Let φ̂ denote the estimated Shapley value using Equation (3), and φ denote the true Shapley value. Then, for all i ∈ N, φ_i ≤ K φ̂_i.

PROOF. We bound the maximum underestimate term by term. Let t_j be the j-th term of the polynomial. We note that the term can be treated as a game among attributes, as it assigns a value to each coalition S ⊆ N. Without loss of generality, renumber attributes j(1) through j(k_j) as 1 through k_j. To make the equations less cluttered, for a game a, denote the contribution of agent i to a coalition S, i ∉ S, by ∆_i(a, S) = a(S ∪ {i}) − a(S). For each coalition S and i ∉ S, ∆_i(t_j, S) = 1 if and only if ∆_i(a_{l*}, S) = 1 for at least one attribute, say l*. Therefore, if we sum over all the attributes, we will have included l* at least once, so the term's contribution to the estimate is at least a 1/k_j fraction of its contribution to the true value. Summing over the terms, we see that the worst-case underestimate is by a factor of the maximum degree.

Without loss of generality, since the bound is multiplicative, we can normalize the game to have v(N) = 1. As a corollary, because we cannot underestimate the value of any set by more than a factor of K, we obtain a bound on the total variation:

COROLLARY 2. The total variation between the estimated Shapley value and the true Shapley value, for a K-degree bounded polynomial aggregate value function, is at most (K − 1)/K.

We can show that this bound is tight.

Example 1. Consider a game with n players and K attributes. Let each of the first (n − 1) agents be a member of each of the first (K − 1) attributes, with the corresponding aggregator returning 1 if any one of its member agents is present. Let the n-th agent be the sole member of the K-th attribute. The estimate assigns a total of (K − 1)/K to the first (n − 1) agents and 1/K to the n-th agent. However, the true Shapley value of the n-th agent tends to 1 as n → ∞, and the total variation approaches (K − 1)/K.

In general, we cannot bound how much φ̂ may overestimate the true Shapley value. The problem is that φ̂_i may be non-zero for agent i even though
it may have no influence over the outcome of the game when attributes are multiplied together, as illustrated by the following example.

Example 2. Consider a game with 2 players and 2 attributes, and let the first agent be a member of both attributes, and the other agent a member of only the second attribute. For a coalition S, the first aggregator evaluates to 1 if agent 1 ∈ S, and the second aggregator evaluates to 1 if both agents are in S. While agent 2 is not a dummy with respect to the second attribute, it is a dummy with respect to the product of the attributes. Agent 2 will be assigned a value of 1/4 by the estimate.

As mentioned, a simple monotonic game is fully described by its set of minimal winning coalitions. When the simple aggregators are represented as such, it is possible to check, in polynomial time, for agents that turn into dummies after attributes are multiplied together. Therefore, we can improve the heuristic estimate in this special case.

5.3.3 Empirical evaluation

Due to a lack of benchmark problems for coalitional games, we have tested the heuristic on random instances. We believe more meaningful results can be obtained when we have real instances to test this heuristic on. Our experiment is set up as follows. We control three parameters of the experiment: the number of players (6-10), the number of attributes (3-8), and the maximum degree of the polynomial (2-5). For each attribute, we randomly sample one to three minimal winning coalitions. We then randomly generate a polynomial of the desired maximum degree with a random number (3-12) of terms, each with a random positive weight. We normalize each game to have v(N) = 1.

[Figure 1: Experimental results]

The results of the experiments are shown in Figure 1. The y-axis of each graph shows the total variation, and the x-axis the number of players. Each data point is an average of approximately 700 random samples. Figure 1(a) explores the effect of the maximum degree and
the number of players when the number of attributes is fixed (at six). As expected, the total variation increases as the maximum degree increases. On the other hand, there is only a very small increase in error as the number of players increases. The error is nowhere near the theoretical worst-case bound of 1/2 to 4/5 for polynomials of degree 2 to 5.

Figure 1(b) explores the effect of the number of attributes and the number of players when the maximum degree of the polynomial is fixed (at three). We first note that these three lines are quite tightly clustered together, suggesting that the number of attributes has relatively little effect on the error of the estimate. As the number of attributes increases, the total variation decreases. We think this is an interesting phenomenon: it is probably due to the precise construction required for the worst-case bound, so as more attributes are available, the terms of the polynomial become more diverse, and this diversity pushes the games away from the worst-case bound.

6. CORE-RELATED QUESTIONS

In this section, we look at the complexity of the two computational problems related to the core: CORE NON-EMPTINESS and CORE MEMBERSHIP. We show that non-emptiness of the core of the game among attributes, together with non-emptiness of the cores of the aggregators, implies non-emptiness of the core of the game induced by the MACG. We also show that there appears to be no such general relationship relating core membership in the game among attributes, the games of attributes, and the game induced by the MACG.

6.1 Problem Setup

There are many problems in the literature for which the questions of CORE NON-EMPTINESS and CORE MEMBERSHIP are known to be hard [1]. For example, for the edge-spanning game that Deng and Papadimitriou studied [5], both of these questions are coNP-complete. As MACG can model the edge-spanning game in the same amount of space, these hardness results hold for MACG as well. As in the case of computing the Shapley value, we attempt to find a way
around the hardness barrier by assuming the existence of oracles, and try to build algorithms with these oracles.\nFirst, we consider the aggregate value function.\nAssumption 2.\nFor a MACG (N, M, A, a, w), we assume there are oracles that answer the questions of CORE NON-EMPTINESS and CORE MEMBERSHIP for the aggregate value function w.\nWhen the aggregate value function is a non-negative linear function of its attributes, the core is always non-empty, and core membership can be determined efficiently.\nThe concept of the core for the game among attributes makes the most sense when the aggregators are simple games.\nWe will further assume that these simple games are monotonic.\nAssumption 3.\nFor a MACG (N, M, A, a, w), we assume all aggregators are monotonic and simple.\nWe also assume there are oracles that answer the questions of CORE NON-EMPTINESS and CORE MEMBERSHIP for the aggregators.\nWe consider this a mild assumption.\nRecall that monotonic simple games are fully described by their sets of minimal winning coalitions (cf. Section 3).\nIf the aggregators are represented as such, CORE NON-EMPTINESS and CORE MEMBERSHIP can be checked in polynomial time.\nThis is due to the following well-known result regarding simple games:\nLemma 1.\nA monotonic simple game has a non-empty core if and only if it has a set of veto players, i.e., players who belong to every winning coalition; in that case, the core consists exactly of the payoff vectors that distribute the value of the grand coalition among the veto players.\n6.2 Core Non-emptiness\nThere is a strong connection between the non-emptiness of the cores of the game among attributes, the games of the attributes, and the game induced by a MACG.\nTheorem 4.\nFor a MACG (N, M, A, a, w), if the core of the game among attributes, (M, w), is non-empty, and the cores of the games of attributes are non-empty, then the core of (N, v) is non-empty.\nPROOF.\nLet u be an arbitrary payoff vector in the core of the game among attributes, (M, w).\nFor each attribute j, let θj be an arbitrary payoff vector in the core of the game of attribute j. By Lemma 1, each attribute j must have a set of veto players; let this set be denoted by Pj.
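As an aside, the polynomial-time checks claimed in section 6.1 follow directly from Lemma 1 when an aggregator is given by its minimal winning coalitions: the veto players are exactly the players common to every minimal winning coalition. The following is our own illustrative sketch (the function names are not from the paper):

```python
def veto_players(min_winning):
    """Veto players of a monotonic simple game: by Lemma 1 these are
    exactly the players appearing in every minimal winning coalition."""
    sets = [frozenset(c) for c in min_winning]
    return frozenset.intersection(*sets)

def core_nonempty(min_winning):
    """CORE NON-EMPTINESS: the core is non-empty iff a veto player exists."""
    return bool(veto_players(min_winning))

def in_core(x, min_winning, tol=1e-9):
    """CORE MEMBERSHIP for a monotonic simple game with v(N) = 1:
    x (a dict player -> payoff) must be non-negative, sum to 1,
    and place all of its mass on the veto players."""
    veto = veto_players(min_winning)
    if any(xi < -tol for xi in x.values()):
        return False
    if abs(sum(x.values()) - 1.0) > tol:
        return False
    return all(xi <= tol for i, xi in x.items() if i not in veto)
```

For example, with minimal winning coalitions {1, 2} and {1, 3}, player 1 is the unique veto player, so (1, 0, 0) is in the core while (0.5, 0.5, 0) is not; the three-player majority game has no veto players and hence an empty core.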
For each agent i ∈ N, let yi = Σj uj θji.\nWe claim that this vector y is in the core of (N, v).\nConsider any coalition S ⊆ N, and let T ⊆ M be the set of attributes whose aggregators evaluate to 1 on S, so that v(S) = w(T).\nFor every j ∈ T, the veto set Pj must be contained in S; this is true because an aggregator cannot evaluate to 1 without all members of the veto set.\nFor any attribute j, by Lemma 1, Σi∈Pj θji = 1.\nTherefore,\nΣi∈S yi = Σj uj Σi∈S θji ≥ Σj∈T uj ≥ w(T) = v(S),\nwhere the first inequality drops non-negative terms and uses Σi∈S θji = 1 for j ∈ T, and the second holds because u is in the core of (M, w).\nMoreover, Σi∈N yi = Σj uj = w(M) = v(N), so y is in the core of (N, v).\nNote that the proof is constructive, and hence if we are given an element in the core of the game among attributes, we can construct an element of the core of the coalitional game.\nFrom Theorem 4, we can obtain the following corollaries that have been previously shown in the literature.\nCOROLLARY 3 (CF. [5]).\nThe edge-spanning game with non-negative edge weights has a non-empty core.\nPROOF.\nLet the players be the vertices, and their attributes the edges incident on them.\nFor each attribute, there is a veto set, namely, both endpoints of the edge.\nAs previously observed, an aggregate value function that is a non-negative linear function of its aggregates has a non-empty core.\nTherefore, the preconditions of Theorem 4 are satisfied, and the edge-spanning game with non-negative edge weights has a non-empty core.\nCOROLLARY 4 (THEOREM 1 OF [4]).\nThe core of a flow game with committee control, where each edge is controlled by a simple game with a veto set of players, is non-empty.\nPROOF.\nWe treat each edge of the flow game as an attribute, so each attribute has a veto set of players.\nThe core of a flow game (without committees) has been shown to be non-empty in [8].\nWe can again invoke Theorem 4 to show the non-emptiness of the core for flow games with committee control.\nHowever, the core of the game induced by a MACG may be non-empty even when the core of the game among attributes is empty, as illustrated by the following example.\nExample 3.\nSuppose the minimal winning coalition of every aggregator in a MACG (N, M, A, a, w) is N; then v(S) = 0 for all coalitions S ⊂ N.\nAs long as v(N) > 0, any non-negative vector x that satisfies Σi∈N xi = v(N) is in the core of (N, v).\nComplementary to the example above, when all the aggregators have empty cores, the core of (N, v) is also
empty.\nPROOF.\nSuppose, for contradiction, that the core of (N, v) is non-empty.\nLet x be a member of the core, and pick an agent i such that xi > 0.\nFor each attribute j, since the core of the aggregator is empty, by Lemma 1 it has no veto players; in particular, agent i is not a veto player, so we can pick a winning coalition Sj that does not include i. Let S* = ∪j Sj.\nBecause S* is winning for every aggregator, v(S*) = v(N).\nHowever, since i ∉ S*,\nΣk∈S* xk ≤ Σk∈N xk − xi = v(N) − xi < v(N) = v(S*).\nTherefore, v(S*) > Σk∈S* xk, contradicting the fact that x is in the core of (N, v).\nWe do not have general results regarding the problem of CORE NON-EMPTINESS when some of the aggregators have non-empty cores while others have empty cores.\nWe suspect that knowledge about the status of the cores of the aggregators alone is insufficient to decide this problem.\n6.3 Core Membership\nSince it is possible for the game induced by the MACG to have a non-empty core when the core of the aggregate value function is empty (Example 3), we explore the problem of CORE MEMBERSHIP assuming that the cores of both the game among attributes, (M, w), and the underlying game, (N, v), are known to be non-empty, and ask whether there is any relationship between their members.\nOne reasonable requirement is that a payoff vector x in the core of (N, v) can be decomposed and re-aggregated into a payoff vector y in the core of (M, w).\nFormally, Definition 11.\nWe say that a vector x ∈ Rn≥0 can be decomposed and re-aggregated into a vector y ∈ Rm≥0 if there exists Z ∈ Rm×n≥0 such that yi = Σj Zij for all i, and xj = Σi Zij for all j.\nWe may refer to Z as shares.\nWhen there is no restriction on the entries of Z, it is always possible to decompose a payoff vector x in the core of (N, v) into a payoff vector y in the core of (M, w).\nHowever, it seems reasonable to require that if an agent j is irrelevant to aggregator i, i.e., j never changes the outcome of aggregator i, then Zij should be restricted to be 0.\nUnfortunately, this restriction is already too strong.\nExample
4.\nConsider a MACG (N, M, A, a, w) with two players and three attributes.\nSuppose agent 1 is irrelevant to attribute 1, and agent 2 is irrelevant to attributes 2 and 3.\nFor any set of attributes T ⊆ M, let w be defined as\nSince the core of a game with a finite number of players forms a polytope, we can verify that the vectors (4, 4, 2), (4, 2, 4), and (2, 4, 4) fully characterize the core C of (M, w).\nOn the other hand, the vector (10, 0) is in the core of (N, v).\nThis vector cannot be decomposed and re-aggregated into a vector in C under the stated restriction.\nBecause of the apparent lack of relationship between members of the core of (N, v) and those of the core of (M, w), we believe an algorithm for testing CORE MEMBERSHIP will require more input than just the veto sets of the aggregators and the oracle of CORE MEMBERSHIP for the aggregate value function.\n7.\nCONCLUDING REMARKS\nMulti-attribute coalitional games constitute a very natural way of modeling problems of interest.\nTheir space requirements compare favorably with those of other representations discussed in the literature, and hence the representation serves well as a prototype for studying the computational complexity of coalitional game theory for a variety of problems.\nPositive results obtained under this representation can easily be translated into results about other representations.\nSome of these corollary results have been discussed in Sections 5 and 6.\nAn important direction to explore in the future is the question of how to update a game efficiently, and how to re-evaluate the solution concepts without starting from scratch.\nAs pointed out at the end of Section 4.3, MACG is very naturally suited to updates.\nRepresentation results regarding the efficiency of updates, and algorithmic results on how to compute the different solution concepts after updates, will both be very interesting.\nOur work on approximating the Shapley value when the aggregate value function is a non-linear function of the attributes suggests more work to be
done there as well.\nGiven the natural probabilistic interpretation of the Shapley value, we believe that a random sampling approach may have significantly better theoretical guarantees.
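The concluding remark above points to random sampling for the Shapley value. The standard Monte Carlo estimator samples random orderings of the players and averages each player's marginal contribution; the sketch below is a generic illustration of that idea (our own code and naming, not the authors' algorithm):

```python
import random

def shapley_monte_carlo(players, v, samples=2000, rng=None):
    """Estimate Shapley values by sampling permutations:
    phi_i = E[ v(predecessors(i) + {i}) - v(predecessors(i)) ]
    over a uniformly random ordering of the players."""
    rng = rng or random.Random(0)  # deterministic default seed
    phi = {i: 0.0 for i in players}
    for _ in range(samples):
        order = list(players)
        rng.shuffle(order)
        coalition = set()
        prev = v(frozenset(coalition))
        for i in order:
            coalition.add(i)
            cur = v(frozenset(coalition))
            phi[i] += cur - prev  # marginal contribution of i
            prev = cur
    return {i: s / samples for i, s in phi.items()}
```

By construction the estimates are efficient (they sum to v(N) − v(∅) for every sampled permutation), and for a symmetric game such as v(S) = |S|² each of three players receives exactly 3 in expectation.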
Rumours and Reputation: Evaluating Multi-Dimensional Trust within a Decentralised Reputation System\nSteven Reece1, Alex Rogers2, Stephen Roberts1 and Nicholas R. Jennings2\n1 Department of Engineering Science, University of Oxford, Oxford, OX1 3PJ, UK.\n{reece,sjrob}@robots.ox.ac.uk\n2 Electronics and Computer Science, University of Southampton, Southampton, SO17 1BJ, UK.\n{acr,nrj}@ecs.soton.ac.uk\nABSTRACT\nIn this paper we develop a novel probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts.\nOur starting point is to consider an agent attempting to estimate the utility of a contract, and we show that this leads to a model of computational trust whereby an agent must determine a vector of estimates that represent the probability that any dimension of the contract will be successfully fulfilled, and a covariance matrix that describes the uncertainty and correlations in these probabilities.\nWe present a formalism based on the Dirichlet distribution that allows an agent to calculate these probabilities and correlations from their direct experience of contract outcomes, and we show that this leads to superior estimates compared to an alternative approach using multiple independent beta distributions.\nWe then show how agents may use the sufficient statistics of this Dirichlet distribution to communicate and fuse reputation within a decentralised reputation system.\nFinally, we present a novel solution to the problem of rumour propagation within such systems.\nThis solution uses the notion of private and shared information, and provides estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding bias and
overconfidence.\nCategories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Intelligent agents General Terms Algorithms, Design, Theory 1.\nINTRODUCTION The role of computational models of trust within multi-agent systems in particular, and open distributed systems in general, has recently generated a great deal of research interest.\nIn such systems, agents must typically choose between interaction partners, and in this context trust can be viewed as providing a means for agents to represent and estimate the reliability with which these interaction partners will fulfill their commitments.\nTo date, however, much of the work within this area has used domain-specific or ad-hoc trust metrics, and has focused on providing heuristics to evaluate and update these metrics using direct experience and reputation reports from other agents (see [8] for a review).\nRecent work has attempted to place the notion of computational trust within the framework of probability theory [6, 11].\nThis approach allows many of the desiderata of computational trust models to be addressed through principled means.\nIn particular: (i) it allows agents to update their estimates of the trustworthiness of a supplier as they acquire direct experience, (ii) it provides a natural framework for agents to express their uncertainty in this trustworthiness, and, (iii) it allows agents to exchange, combine and filter reputation reports received from other agents.\nWhilst this approach is attractive, it is somewhat limited in that it has so far only considered single-dimensional outcomes (i.e.
whether the contract has succeeded or failed in its entirety).\nHowever, in many real world settings the success or failure of an interaction may be decomposed into several dimensions [7].\nThis presents the challenge of combining these multiple dimensions into a single metric over which a decision can be made.\nFurthermore, these dimensions will typically also exhibit correlations.\nFor example, a contract within a supply chain may specify criteria for timeliness, quality and quantity.\nA supplier who is suffering delays may attempt a trade-off between these dimensions by supplying the full amount late, or supplying as much as possible (but less than the quantity specified within the contract) on time.\nThus, correlations will naturally arise between these dimensions, and hence, between the probabilities that describe the successful fulfillment of each contract dimension.\nTo date, however, no such principled framework exists to describe these multi-dimensional contracts, nor the correlations between these dimensions (although some ad-hoc models do exist - see section 2 for more details).\nTo rectify this shortcoming, in this paper we develop a probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts.\nThe starting point for our work is to consider how an agent can estimate the utility that it will derive from interacting with a supplier.\nHere we use standard approaches from the literature of data fusion (since this is a well developed field where the notion of multi-dimensional correlated estimates is well established1) to show that this naturally leads to a trust model where the agent must estimate probabilities and correlations over multiple dimensions.\n1 In this context, the multiple dimensions typically represent the physical coordinates of a target being tracked, and correlations arise through the operation and orientation of sensors.\n1070 978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nBuilding upon this, we
then devise a novel trust model that addresses the three desiderata discussed above.\nIn more detail, in this paper we extend the state of the art in four key ways: 1.\nWe devise a novel multi-dimensional probabilistic trust model that enables an agent to estimate the expected utility of a contract, by estimating (i) the probability that each contract dimension will be successfully fulfilled, and (ii) the correlations between these estimates.\n2.\nWe present an exact probabilistic model based upon the Dirichlet distribution that allows agents to use their direct experience of contract outcomes to calculate the probabilities and correlations described above.\nWe then benchmark this solution and show that it leads to good estimates.\n3.\nWe show that agents can use the sufficient statistics of this Dirichlet distribution in order to exchange reputation reports with one another.\nThe sufficient statistics represent aggregations of their direct experience, and thus, express contract outcomes in a compact format with no loss of information.\n4.\nWe show that, while being efficient, the aggregation of contract outcomes can lead to double counting, and rumour propagation, in decentralised reputation systems.\nThus, we present a novel solution based upon the idea of private and shared information.\nWe show that it yields estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding overconfidence.\nThe remainder of this paper is organised as follows: in section 2 we review related work.\nIn section 3 we present our notation for a single dimensional contract, before introducing our multi-dimensional trust model in section 4.\nIn sections 5 and 6 we discuss communicating reputation, and present our solution to rumour propagation in decentralised reputation systems.\nWe conclude in section 7.\n2.\nRELATED WORK The need for a multi-dimensional trust model has been recognised by a number of researchers.\nSabater and 
Sierra present a model of reputation, in which agents form contracts based on multiple variables (such as delivery date and quality), and define impressions as subjective evaluations of the outcome of these contracts.\nThey provide heuristic approaches to combining these impressions to form a measure they call subjective reputation.\nLikewise, Griffiths decomposes overall trust into a number of different dimensions such as success, cost, timeliness and quality [4].\nIn his case, each dimension is scored as a real number that represents a comparative value with no strong semantic meaning.\nHe develops a heuristic rule to update these values based on the direct experiences of the individual agent, and a heuristic function that takes the individual trust dimensions and generates a single scalar that is then used to select between suppliers.\nWhilst he comments that the trust values could have some associated confidence level, heuristics for updating these levels are not presented.\nGujral et al.
take a similar approach and present a trust model over multiple domain specific dimensions [5].\nThey define multidimensional goal requirements, and evaluate an expected payoff based on a supplier's estimated behaviour.\nThese estimates are, however, simple aggregations over the direct experience of several agents, and there is no measure of the uncertainty.\nNevertheless, they show that agents who select suppliers based on these multiple dimensions outperform those who consider just a single one.\nBy contrast, a number of researchers have presented more principled computational trust models based on probability theory, albeit limited to a single dimension.\nJøsang and Ismail describe the Beta Reputation System whereby the reputation of an agent is compiled from the positive and negative reports from other agents who have interacted with it [6].\nThe beta distribution represents a natural choice for representing these binary outcomes, and it provides a principled means of representing uncertainty.\nMoreover, they provide a number of extensions to this initial model including an approach to exchanging reputation reports using the sufficient statistics of the beta distribution, methods to discount the opinions of agents who themselves have low reputation ratings, and techniques to deal with reputations that may change over time.\nLikewise, Teacy et al.
use the beta distribution to describe an agent's belief in the probability that another agent will successfully fulfill its commitments [11].\nThey present a formalism using a beta distribution that allows the agent to estimate this probability based upon its direct experience, and again they use the sufficient statistics of this distribution to communicate this estimate to other agents.\nThey provide a number of extensions to this initial model, and, in particular, they consider that agents may not always truthfully report their trust estimates.\nThus, they present a principled approach to detecting and removing inconsistent reports.\nOur work builds upon these more principled approaches.\nHowever, the starting point of our approach is to consider an agent that is attempting to estimate the expected utility of a contract.\nWe show that estimating this expected utility requires that an agent must estimate the probability with which the supplier will fulfill its contract.\nIn the single-dimensional case, this naturally leads to a trust model using the beta distribution (as per Jøsang and Ismail and Teacy et al.).\nHowever, we then go on to extend this analysis to multiple dimensions, where we use the natural extension of the beta distribution, namely the Dirichlet distribution, to represent the agent's belief over multiple dimensions.\n3.\nSINGLE-DIMENSIONAL TRUST Before presenting our multi-dimensional trust model, we first introduce the notation and formalism that we will use by describing the more familiar single dimensional case.\nWe consider an agent who must decide whether to engage in a future contract with a supplier.\nThis contract will lead to some outcome, o, and we consider that o = 1 if the contract is successfully fulfilled, and o = 0 if not2.\nIn order for the agent to make a rational decision, it should consider the utility that it will derive from this contract.\nWe assume that in the case that the contract is successfully fulfilled, the
agent derives a utility u(o = 1), otherwise it receives no utility.3\nNow, given that the agent is uncertain of the reliability with which the supplier will fulfill the contract, it should consider the expected utility that it will derive, E[U], and this is given by:\nE[U] = p(o = 1)u(o = 1) (1)\nwhere p(o = 1) is the probability that the supplier will successfully fulfill the contract.\nHowever, whilst u(o = 1) is known by the agent, p(o = 1) is not.\nThe best the agent can do is to determine a distribution over possible values of p(o = 1) given its direct experience of previous contract outcomes.\nGiven that it has been able to do so, it can then determine an estimate of the expected utility4 of the contract, E[E[U]], and a measure of its uncertainty in this expected utility, Var(E[U]).\nThis uncertainty is important since a risk averse agent may make a decision regarding a contract not only on its estimate of the expected utility of the contract, but also on the probability that the expected utility will exceed some minimum amount.\nThese two properties are given by:\nE[E[U]] = p̂(o = 1)u(o = 1) (2)\nVar(E[U]) = Var(p(o = 1))u(o = 1)² (3)\nwhere p̂(o = 1) and Var(p(o = 1)) are the estimate and uncertainty of the probability that a contract will be successfully fulfilled, and are calculated from the distribution over possible values of p(o = 1) that the agent determines from its direct experience.\n2 Note that we only consider binary contract outcomes, although extending this to partial outcomes is part of our future work.\n3 Clearly this can be extended to the case where some utility is derived from an unsuccessful outcome.\n4 Note that this is often called the expected expected utility, and this is the notation that we adopt here [2].\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1071\nThe utility based approach that we present here provides an attractive motivation for this model of Teacy et al.
[11].\nNow, in the case of binary contract outcomes, the beta distribution is the natural choice to represent the distribution over possible values of p(o = 1), since within Bayesian statistics this is well known to be the conjugate prior for binomial observations [3].\nBy adopting the beta distribution, we can calculate p̂(o = 1) and Var(p(o = 1)) using standard results, and thus, if an agent observed N previous contracts of which n were successfully fulfilled, then:\np̂(o = 1) = (n + 1) / (N + 2)\nand:\nVar(p(o = 1)) = (n + 1)(N − n + 1) / ((N + 2)²(N + 3))\nNote that, as expected, the greater the number of contracts the agent observes, the smaller the variance term Var(p(o = 1)), and, thus, the less the uncertainty regarding the probability that a contract will be successfully fulfilled, p̂(o = 1).\n4.\nMULTI-DIMENSIONAL TRUST We now extend the description above to consider contracts between suppliers and agents that are represented by multiple dimensions, and hence the success or failure of a contract can be decomposed into the success or failure of each separate dimension.\nConsider again the example of the supply chain that specifies the timeliness, quantity, and quality of the goods that are to be delivered.\nThus, within our trust model, oa = 1 now indicates a successful outcome over dimension a of the contract and oa = 0 indicates an unsuccessful one.\nA contract outcome, X, is now composed of a vector of individual contract part outcomes (e.g.
X = {oa = 1, ob = 0, oc = 0, ...}).\nGiven a multi-dimensional contract whose outcome is described by the vector X, we again consider that in order for an agent to make a rational decision, it should consider the utility that it will derive from this contract.\nTo this end, we can make the general statement that the expected utility of a contract is given by:\nE[U] = p(X)U(X)T (4)\nwhere p(X) is a joint probability distribution over all possible contract outcomes:\np(X) = [ p(oa = 1, ob = 0, oc = 0, ...), p(oa = 1, ob = 1, oc = 0, ...), p(oa = 0, ob = 1, oc = 0, ...), ... ] (5)\nand U(X) is the utility derived from these possible outcomes:\nU(X) = [ u(oa = 1, ob = 0, oc = 0, ...), u(oa = 1, ob = 1, oc = 0, ...), u(oa = 0, ob = 1, oc = 0, ...), ... ] (6)\nAs before, whilst U(X) is known to the agent, the probability distribution p(X) is not.\nRather, given the agent's direct experience of the supplier, the agent can determine a distribution over possible values for p(X).\nIn the single dimensional case, a beta distribution was the natural choice over possible values of p(o = 1).\nIn the multi-dimensional case, where p(X) itself is a vector of probabilities, the corresponding natural choice is the Dirichlet distribution, since this is the conjugate prior for multinomial proportions [3].\nGiven this distribution, the agent is then able to calculate an estimate of the expected utility of a contract.\nAs before, this estimate is itself represented by an expected value given by:\nE[E[U]] = p̂(X)U(X)T (7)\nand a variance, describing the uncertainty in this expected utility:\nVar(E[U]) = U(X)Cov(p(X))U(X)T (8)\nwhere:\nCov(p(X)) ≡ E[(p(X) − p̂(X))(p(X) − p̂(X))T] (9)\nThus, whilst the single dimensional case naturally leads to a trust model in which the agents attempt to derive an estimate of the probability that a contract will be
successfully fulfilled, p̂(o = 1), along with a scalar variance that describes the uncertainty in this probability, Var(p(o = 1)), in this case the agents must derive an estimate of a vector of probabilities, p̂(X), along with a covariance matrix, Cov(p(X)), that represents the uncertainty in p(X) given the observed contractual outcomes.\nAt this point, it is interesting to note that the estimate in the single dimensional case, p̂(o = 1), has a clear semantic meaning in relation to trust; it is the agent's belief in the probability of a supplier successfully fulfilling a contract.\nHowever, in the multi-dimensional case the agent must determine p̂(X), and since this describes the probability of all possible contract outcomes, including those that are completely un-fulfilled, this direct semantic interpretation is not present.\nIn the next section, we describe the exemplar utility function that we shall use in the remainder of this paper.\n4.1 Exemplar Utility Function The approach described so far is completely general, in that it applies to any utility function of the form described above, and also applies to the estimation of any joint probability distribution.\nIn the remainder of this paper, for illustrative purposes, we shall limit the discussion to the simplest possible utility function that exhibits a dependence upon the correlations between the contract dimensions.\nThat is, we consider the case that expected utility is dependent only on the marginal probabilities of each contract dimension being successfully fulfilled, rather than the full joint probabilities:\nU(X) = [ u(oa = 1), u(ob = 1), u(oc = 1), ... ] (10)\nThus, p̂(X) is a vector estimate of the probability of each contract dimension being successfully fulfilled, and maintains the clear semantic interpretation seen in the single dimensional case:\np̂(X) = [ p̂(oa = 1), p̂(ob = 1), p̂(oc = 1), ... ] (11)\nThe correlations between the contract dimensions affect the uncertainty in the expected utility.\nTo see this, note that the covariance matrix that describes this uncertainty, Cov(p(X)), is now given by:\nCov(p(X)) =\n[ Va Cab Cac ... ]\n[ Cab Vb Cbc ... ]\n[ Cac Cbc Vc ... ]\n[ ... ... ... ... ] (12)\nIn this matrix, the diagonal terms, Va, Vb and Vc, represent the uncertainties in p(oa = 1), p(ob = 1) and p(oc = 1) within p(X).\nThe off-diagonal terms, Cab, Cac and Cbc, represent the correlations between these probabilities.\nIn the next section, we use the Dirichlet distribution to calculate both p̂(X) and Cov(p(X)) from an agent's direct experience of previous contract outcomes.\nWe first illustrate why this is necessary by considering an alternative approach to modelling multi-dimensional contracts whereby an agent naïvely assumes that the dimensions are independent, and thus models each individually by a separate beta distribution (as in the single dimensional case we presented in section 3).\nThis is actually equivalent to setting the off-diagonal terms within the covariance matrix, Cov(p(X)), to zero.\nHowever, doing so can lead an agent to assume that its estimate of the expected utility of the contract is more accurate than it actually is.\nTo illustrate this, consider a specific scenario with the following values: u(oa = 1) = u(ob = 1) = 1 and Va = Vb = 0.2.\nIn this case, Var(E[U]) = 0.4(1 + Cab), and thus, if the correlation Cab
is ignored, then the variance in the expected utility is 0.4.\nHowever, if the contract outcomes are completely correlated, then Cab = 1 and Var(E[U]) is actually 0.8.\nThus, in order to have an accurate estimate of the variance of the expected contract utility, and to make a rational decision, it is essential that the agent is able to represent and calculate these correlation terms.\nIn the next section, we describe how an agent may do so using the Dirichlet distribution.\n4.2 The Dirichlet Distribution In this section, we describe how the agent may use its direct experience of previous contracts, and the standard results of the Dirichlet distribution, to determine an estimate of the probability that each contract dimension will be successfully fulfilled, p̂(X), and a measure of the uncertainties in these probabilities that expresses the correlations between the contract dimensions, Cov(p(X)).\nWe first consider the calculation of p̂(X) and the diagonal terms of the covariance matrix Cov(p(X)).\nAs described above, the derivation of these results is identical to the case of the single dimensional beta distribution, where out of N contract outcomes, n are successfully fulfilled.\nIn the multi-dimensional case, however, we have a vector {na, nb, nc, ...} that represents the number of outcomes for which each of the individual contract dimensions was successfully fulfilled.\nThus, in terms of the standard Dirichlet parameters, where αa = na + 1 and α0 = N + 2, the agent can estimate the probability of contract dimension a being successfully fulfilled:\np̂(oa = 1) = αa / α0 = (na + 1) / (N + 2)\nand can also calculate the variance in any contract dimension:\nVa = αa(α0 − αa) / (α0²(1 + α0)) = (na + 1)(N − na + 1) / ((N + 2)²(N + 3))\nHowever, calculating the off-diagonal terms within Cov(p(X)) is more complex, since it is necessary to consider the correlations between the contract dimensions.\nThus, for each pair of
dimensions (i.e. a and b), we must consider all possible combinations of contract outcomes, and thus we define n^{ab}_{ij} as the number of contract outcomes for which both o_a = i and o_b = j. For example, n^{ab}_{10} represents the number of contracts for which o_a = 1 and o_b = 0. Now, using the standard Dirichlet notation, we can define \alpha^{ab}_{ij} = n^{ab}_{ij} + 1/2 for all i and j taking values 0 and 1, and then, to calculate the cross-correlations between contract pairs a and b, we note that the Dirichlet distribution over pair-wise joint probabilities is:

Prob(p_{ab}) = K_{ab} \prod_{i \in \{0,1\}} \prod_{j \in \{0,1\}} p(o_a = i, o_b = j)^{\alpha^{ab}_{ij} - 1}

where:

\sum_{i \in \{0,1\}} \sum_{j \in \{0,1\}} p(o_a = i, o_b = j) = 1

and K_{ab} is a normalising constant [3]. From this we can derive pair-wise probability estimates and variances:

E[p(o_a = i, o_b = j)] = \frac{\alpha^{ab}_{ij}}{\alpha_0}   (13)

V[p(o_a = i, o_b = j)] = \frac{\alpha^{ab}_{ij}(\alpha_0 - \alpha^{ab}_{ij})}{\alpha_0^2(1 + \alpha_0)}   (14)

where:

\alpha_0 = \sum_{i \in \{0,1\}} \sum_{j \in \{0,1\}} \alpha^{ab}_{ij}   (15)

and, in fact, \alpha_0 = N + 2, where N is the total number of contracts observed. Likewise, we can express the covariance in these pair-wise probabilities in similar terms:

C[p(o_a = i, o_b = j), p(o_a = m, o_b = n)] = \frac{-\alpha^{ab}_{ij}\alpha^{ab}_{mn}}{\alpha_0^2(1 + \alpha_0)}

Finally, we can use the expression:

p(o_a = 1) = \sum_{j \in \{0,1\}} p(o_a = 1, o_b = j)

to determine the covariance C_{ab}. To do so, we first simplify the notation by defining V^{ab}_{ij} = V[p(o_a = i, o_b = j)] and C^{ab}_{ijmn} = C[p(o_a = i, o_b = j), p(o_a = m, o_b = n)]. The covariance for the probability of positive contract outcomes is then the covariance between \sum_{j \in \{0,1\}} p(o_a = 1, o_b = j) and \sum_{i \in \{0,1\}} p(o_a = i, o_b = 1), and thus:

C_{ab} = C^{ab}_{1001} + C^{ab}_{1101} + C^{ab}_{1011} + V^{ab}_{11}

Thus, given a set of contract outcomes that represent the agent's previous interactions with a supplier, we may use the Dirichlet distribution to calculate the mean and variance of the probability of any contract dimension being successfully fulfilled (i.e.
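The pairwise calculation can be sketched directly from equations (13)-(15) and the expression for C_ab. We assume the convention \alpha^{ab}_{ij} = n^{ab}_{ij} + 1/2, which is what makes \alpha_0 = N + 2 as stated; the counts below are illustrative.

```python
# Pairwise Dirichlet statistics for one pair of contract dimensions (a, b).
# counts[(i, j)] = n^ab_ij, the number of contracts with o_a = i and o_b = j.

def pairwise_stats(counts):
    alpha = {ij: n + 0.5 for ij, n in counts.items()}   # assumed +1/2 prior
    a0 = sum(alpha.values())                            # equals N + 2 (eq. 15)
    denom = a0 ** 2 * (1 + a0)
    E = {ij: a / a0 for ij, a in alpha.items()}                  # eq. (13)
    V = {ij: a * (a0 - a) / denom for ij, a in alpha.items()}    # eq. (14)
    C = lambda ij, mn: -alpha[ij] * alpha[mn] / denom            # cell covariance
    # C_ab = C_1001 + C_1101 + C_1011 + V_11
    c_ab = C((1, 0), (0, 1)) + C((1, 1), (0, 1)) + C((1, 0), (1, 1)) + V[(1, 1)]
    return E, V, c_ab

# Symmetric, independent-looking counts: marginal p(o_a = 1) = 7/14 = 0.5
# and the covariance C_ab comes out as zero.
E, V, c_ab = pairwise_stats({(0, 0): 3, (0, 1): 3, (1, 0): 3, (1, 1): 3})
```

As a consistency check, the marginal E[p(o_a = 1, o_b = 0)] + E[p(o_a = 1, o_b = 1)] recovers the single-dimensional estimate (n_a + 1)/(N + 2).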
\hat{p}(o_a = 1) and V_a). In addition, by a somewhat more complex procedure, we can also calculate the correlations between these probabilities (i.e. C_{ab}). This allows us to calculate an estimate of the probability that any contract dimension will be successfully fulfilled, \hat{p}(X), and also to represent the uncertainty and correlations in these probabilities by the covariance matrix, Cov(p(X)). In turn, these results may be used to calculate the estimate and uncertainty in the expected utility of the contract. In the next section we present empirical results that show that in practice this formalism yields significant improvements in these estimates compared to the naïve approximation using multiple independent beta distributions.

4.3 Empirical Comparison

In order to evaluate the effectiveness of our formalism, and to show the importance of the off-diagonal terms in Cov(p(X)), we compare two approaches:

Figure 1: Plots showing (i) the variance of the expected contract utility and (ii) the information content of the estimates computed using the Dirichlet distribution and multiple independent beta distributions. Results are averaged over 10^6 runs, and the error bars show the standard error in the mean.

• Dirichlet Distribution: We use the full Dirichlet distribution, as described above, to calculate \hat{p}(X) and Cov(p(X)), including all its off-diagonal terms that represent the correlations between the contract dimensions.

• Independent Beta Distributions: We use independent beta distributions to represent each contract dimension, in order to calculate \hat{p}(X), and then, as described
earlier, we approximate Cov(p(X)) and ignore the correlations by setting all the off-diagonal terms to zero.

We consider a two-dimensional case where u(o_a = 1) = 6 and u(o_b = 1) = 2, since this allows us to plot \hat{p}(X) and Cov(p(X)) as ellipses in a two-dimensional plane, and thus explain the differences between the two approaches. Specifically, we initially allocate the agent some previous contract outcomes that represent its direct experience with a supplier. The number of contracts is drawn uniformly between 10 and 20, and the actual contract outcomes are drawn from an arbitrary joint distribution intended to induce correlations between the contract dimensions. For each set of contracts, we use the approaches described above to calculate \hat{p}(X) and Cov(p(X)), and hence, the variance in the expected contract utility, Var(E[U]). In addition, we calculate a scalar measure of the information content, I, of the covariance matrix Cov(p(X)), which is a standard way of measuring the uncertainty encoded within the covariance matrix [1]. More specifically, we calculate the determinant of the inverse of the covariance matrix:

I = \det(Cov(p(X))^{-1})   (16)

and note that the larger the information content, the more precise \hat{p}(X) will be, and thus, the better the estimate of the expected utility that the agent is able to calculate.

Figure 2: Examples of \hat{p}(X) and Cov(p(X)) plotted as second standard error ellipses.

Finally, we use the results presented in section 4.2 to calculate the actual correlation, \rho, associated with this particular set of contract outcomes:

\rho = \frac{C_{ab}}{\sqrt{V_a V_b}}   (17)

where C_{ab}, V_a and V_b are calculated as described in section 4.2. The results of this analysis are shown in figure 1. Here we show the values of I and Var(E[U]) calculated by the agents, plotted against the correlation of the contract
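Equations (16) and (17) reduce to a few lines in the two-dimensional case; a minimal sketch (the covariance values below are illustrative):

```python
# Information content (eq. 16) and correlation (eq. 17) for a 2x2 covariance.

def information(cov):
    """I = det(Cov^-1) = 1 / det(Cov) for a 2x2 matrix."""
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    return 1.0 / det

def correlation(c_ab, v_a, v_b):
    """rho = C_ab / sqrt(V_a * V_b)."""
    return c_ab / (v_a * v_b) ** 0.5

# Zeroing the off-diagonal terms raises det(Cov) from V_a*V_b*(1 - rho^2)
# to V_a*V_b, so the independent-beta approximation reports less information:
full = information([[0.02, 0.01], [0.01, 0.02]])
naive = information([[0.02, 0.0], [0.0, 0.02]])
```

This is the effect visible in figure 1: whenever ρ ≠ 0, the full Dirichlet treatment yields a strictly higher information content than the independent approximation.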
outcomes, \rho, that constituted their direct experience. The results are averaged over 10^6 simulation runs. Note that, as expected, when the dimensions of the contract outcomes are uncorrelated (i.e. \rho = 0), both approaches give the same results. However, the value of using our formalism with the full Dirichlet distribution is shown when the correlation between the dimensions increases (either negatively or positively). As can be seen, if we approximate the Dirichlet distribution with multiple independent beta distributions, all of the correlation information contained within the covariance matrix, Cov(p(X)), is lost, and thus, the information content of the matrix is much lower. The loss of this correlation information leads the variance of the expected utility of the contract to be incorrect (either over- or under-estimated, depending on the correlation)^5, with the exact amount of mis-estimation depending on the actual utility function chosen (i.e. the values of u(o_a = 1) and u(o_b = 1)).

In addition, in figure 2 we illustrate an example of the estimates calculated through both methods, for a single exemplar set of contract outcomes. We represent the probability estimates, \hat{p}(X), and the covariance matrix, Cov(p(X)), in the standard way as an ellipse [1]. That is, \hat{p}(X) determines the position of the centre of the ellipse, while Cov(p(X)) defines its size and shape. Note that whilst the ellipse resulting from the full Dirichlet formalism accurately reflects the true distribution (samples of which are plotted as points), that calculated by using multiple independent beta distributions (and thus ignoring the correlations) results in a much larger ellipse that does not reflect the true distribution. The larger size of this ellipse is a result of the off-diagonal terms of the covariance matrix being set to zero, and corresponds to the agent miscalculating the uncertainty in the probability of each contract dimension being fulfilled. This, in
turn, leads it to miscalculate the uncertainty in the expected utility of a contract (shown in figure 1 as Var(E[U])).

^5 Note that the plots are not smooth, due to the fact that, given a limited number of contract outcomes, the means of V_a and V_b do not vary smoothly with \rho.

5. COMMUNICATING REPUTATION

Having described how an individual agent can use its own direct experience of contract outcomes in order to estimate the probability that a multi-dimensional contract will be successfully fulfilled, we now go on to consider how agents within an open multi-agent system can communicate these estimates to one another. This is commonly referred to as reputation, and allows agents with limited direct experience of a supplier to make rational decisions. Both Jøsang and Ismail, and Teacy et al. present models whereby reputation is communicated between agents using the sufficient statistics of the beta distribution [6, 11]. This approach is attractive since these sufficient statistics are simple aggregations of contract outcomes (more precisely, they are simply the total number of contracts observed, N, and the number of these that were successfully fulfilled, n). Under the probabilistic framework of the beta distribution, reputation reports in this form may simply be aggregated with an agent's own direct experience, in order to gain a more precise estimate based on a larger set of contract outcomes. We can immediately extend this approach to the multi-dimensional case considered here, by requiring that the agents exchange the sufficient statistics of the Dirichlet distribution instead of the beta distribution. In this case, for each pair of dimensions (i.e.
a and b), the agents must communicate a vector of contract outcomes, N, which are the sufficient statistics of the Dirichlet distribution, given by:

N = \langle n^{ab}_{ij} \rangle \quad \forall a, b, \; i \in \{0, 1\}, \; j \in \{0, 1\}   (18)

Thus, an agent is able to communicate the sufficient statistics of its own Dirichlet distribution in terms of just 2d(d - 1) numbers (where d is the number of contract dimensions). For instance, in the case of three dimensions, N is given by:

N = \langle n^{ab}_{00}, n^{ab}_{01}, n^{ab}_{10}, n^{ab}_{11}, n^{ac}_{00}, n^{ac}_{01}, n^{ac}_{10}, n^{ac}_{11}, n^{bc}_{00}, n^{bc}_{01}, n^{bc}_{10}, n^{bc}_{11} \rangle

and, hence, large sets of contract outcomes may be communicated within a relatively small message, with no loss of information. Again, agents receiving these sufficient statistics may simply aggregate them with their own direct experience in order to gain a more precise estimate of the trustworthiness of a supplier. Finally, we note that whilst it is not the focus of our work here, by adopting the same principled approach as Jøsang and Ismail, and Teacy et al., many of the techniques that they have developed (such as discounting reports from unreliable agents, and filtering inconsistent reports from selfish agents) may be directly applied within this multi-dimensional model. However, we now go on to consider a new issue that arises in both the single and multi-dimensional models, namely the problems that arise when such aggregated sufficient statistics are propagated within decentralised agent networks.

6. RUMOUR PROPAGATION WITHIN REPUTATION SYSTEMS

In the previous section, we described the use of sufficient statistics to communicate reputation, and we showed that by aggregating contract outcomes together into these sufficient statistics, a large number of contract outcomes can be represented and communicated in a compact form. Whilst this is an attractive property, it can be problematic in practice, since the individual provenance of each contract outcome is lost in the aggregation. Thus, to
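For illustration, the message of equation (18) can be built and fused as follows (the dimension names and counts here are assumed, not taken from the paper):

```python
# Building and aggregating the sufficient-statistic message of eq. (18):
# one 4-entry count vector per unordered pair of dimensions, i.e. 2d(d-1)
# numbers in all; two reports fuse by element-wise addition.
from itertools import combinations

CELLS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def message(counts, dims):
    """Flatten counts[(a, b)][(i, j)] = n^ab_ij into a single vector."""
    return [counts[pair][ij] for pair in combinations(dims, 2) for ij in CELLS]

def aggregate(msg_1, msg_2):
    """Fuse two reputation reports by adding their counts."""
    return [x + y for x, y in zip(msg_1, msg_2)]

dims = ['a', 'b', 'c']
counts = {pair: {ij: 1 for ij in CELLS} for pair in combinations(dims, 2)}
msg = message(counts, dims)   # 2 * 3 * (3 - 1) = 12 numbers, as in the text
```

Note that aggregation is a simple sum precisely because the counts are sufficient statistics; this is also what makes the double-counting problem of the next section possible.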
ensure an accurate estimate, the reputation system must ensure that each observation of a contract outcome is included within the aggregated statistics no more than once. Within a centralised reputation system, where all agents report their direct experience to a trusted centre, such double counting of contract outcomes is easy to avoid. However, in a decentralised reputation system, where agents communicate reputation to one another, and aggregate their direct experience with these reputation reports on-the-fly, avoiding double counting is much more difficult.

Figure 3: Example of rumour propagation in a decentralised reputation system.

For example, consider the case shown in figure 3, where three agents (a1 ... a3), each with some direct experience of a supplier, share reputation reports regarding this supplier. If agent a1 were to provide its estimate to agents a2 and a3 in the form of the sufficient statistics of its Dirichlet distribution, then these agents could aggregate these contract outcomes with their own, and thus obtain more precise estimates. If, at a later stage, agent a2 were to send its aggregate vector of contract outcomes to agent a3, then agent a3, being unaware of the full history of exchanges, might attempt to combine these contract outcomes with its own aggregated vector. However, since both vectors contain a contribution from agent a1, that contribution will be counted twice in the final aggregated vector, resulting in a biased and overconfident estimate. This is termed rumour propagation or data incest in the data fusion literature [9]. One possible solution would be to uniquely identify the source of each contract outcome, and then propagate each vector, along with its label, through the network. Agents could thus identify identical observations that have arrived through different routes and, after removing the duplicates, aggregate them together to form their
estimates. Whilst this appears attractive in principle, for a number of reasons it is not always a viable solution in practice [12]. Firstly, agents may not actually wish to have their uniquely labelled contract outcomes passed around an open system, since such information may have commercial or practical significance that could be used to their disadvantage. As such, agents may only be willing to exchange identifiable contract outcomes with a small number of other agents (perhaps those with which they have some sort of reciprocal relationship). Secondly, the fact that there is no aggregation of the contract outcomes as they pass around the network means that the message size increases over time, and the ultimate size of these messages is bounded only by the number of agents within the system (possibly an extremely large number for a global system). Finally, it may actually be difficult to assign globally agreeable, consistent, and unique labels for each agent within an open system.

In the next section, we develop a novel solution to the problem of rumour propagation within decentralised reputation systems. Our solution is based on an approach developed within the area of target tracking and data fusion [9]. It avoids the need to uniquely identify an agent, it allows agents to restrict the number of other agents to whom they reveal their private estimates, and yet it still allows information to propagate throughout the network.

6.1 Private and Shared Information

Our solution to rumour propagation within decentralised reputation systems introduces the notion of private information, which an agent knows it has not communicated to any other agent, and shared information, which has been communicated to, or received from, another agent. Thus, the agent can decompose its contract outcome vector, N, into two vectors: a private one, N_p, that has not been communicated to another agent, and a shared one, N_s, that has been shared with, or received from, another
agent:

N = N_p + N_s   (19)

Now, whenever an agent communicates reputation, it communicates both its private and shared vectors separately. Both the originating and receiving agents then update their two vectors appropriately. To understand this, consider the case in which agent a_\alpha sends its private and shared contract outcome vectors, N^\alpha_p and N^\alpha_s, to agent a_\beta, which itself has private and shared contract outcome vectors N^\beta_p and N^\beta_s. Each agent updates its vectors of contract outcomes according to the following procedure:

• Originating Agent: Once the originating agent has sent both its shared and private contract outcome vectors to another agent, its private information is no longer private. Thus, it must remove the contract outcomes that were in its private vector, and add them into its shared vector:

  N^\alpha_s \leftarrow N^\alpha_s + N^\alpha_p
  N^\alpha_p \leftarrow \emptyset

• Receiving Agent: The goal of the receiving agent is to accumulate the largest number of contract outcomes (since this will result in the most precise estimate) without including shared information from both itself and the other agent (since this may result in double counting of contract outcomes). It has two choices, depending on the total number of contract outcomes^6 within its own shared vector, N^\beta_s, and within that of the originating agent, N^\alpha_s. Thus, it updates its vectors according to the procedure below:

  - N^\beta_s > N^\alpha_s: If the receiving agent's shared vector represents a greater number of contract outcomes than the shared vector of the originating agent, then the agent combines its shared vector with the private vector of the originating agent:

    N^\beta_s \leftarrow N^\beta_s + N^\alpha_p
    N^\beta_p unchanged.

  - N^\beta_s < N^\alpha_s: Alternatively, if the receiving agent's shared vector represents a smaller number of contract outcomes than
the shared vector of the originating agent, then the receiving agent discards its own shared vector and forms a new one from both the private and shared vectors of the originating agent:

    N^\beta_s \leftarrow N^\alpha_s + N^\alpha_p
    N^\beta_p unchanged.

In the case that N^\beta_s = N^\alpha_s, either option is appropriate. Once the receiving agent has updated its sets, it uses the contract outcomes within both to form its trust estimate. If agents receive several vectors simultaneously, this approach generalises to the receiving agent using the largest shared vector, and the private vectors of itself and all the originating agents, to form its new shared vector.

This procedure has a number of attractive properties. Firstly, since contract outcomes in an agent's shared vector are never combined with those in the shared vector of another agent, outcomes that originated from the same agent are never combined together, and thus, rumour propagation is completely avoided. However, since the receiving agent may discard its own shared vector, and adopt the shared vector of the originating agent, information is still propagated around the network. Moreover, since contract outcomes are aggregated together within the private and shared vectors, the message size is constant and does not increase as the number of interactions increases. Finally, an agent only communicates its own private contract outcomes to its immediate neighbours. If one of these neighbours subsequently passes them on, it does so as unidentifiable aggregated information within its shared information. Thus, an agent may limit the number of agents to which it is willing to reveal identifiable contract outcomes, and yet these contract outcomes can still propagate within the network, and thus improve the estimates of other agents. Next, we demonstrate empirically that these properties can indeed be realised in practice.

^6 Note that this total may be calculated from N = n^{ab}_{00} + n^{ab}_{01} + n^{ab}_{10} + n^{ab}_{11}.

6.2 Empirical Comparison
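The exchange procedure of section 6.1 can be sketched as follows. Count vectors are abstracted as element-wise-addable lists, ties go to the receiver's own shared vector (the text allows either choice), and the example values are assumptions.

```python
# Sketch of one private/shared exchange: agent alpha sends (N^a_p, N^a_s)
# to agent beta, and both update their vectors per section 6.1.

def total(v):
    """Number of contract outcomes a count vector represents (footnote 6)."""
    return sum(v)

def add(u, v):
    return [x + y for x, y in zip(u, v)]

def exchange(orig_p, orig_s, recv_p, recv_s):
    """Returns the updated (orig_p, orig_s, recv_p, recv_s)."""
    if total(recv_s) >= total(orig_s):
        # receiver's shared vector is at least as large: keep it, and
        # absorb only the sender's private outcomes
        recv_s = add(recv_s, orig_p)
    else:
        # otherwise discard it and adopt the sender's private + shared vectors
        recv_s = add(orig_s, orig_p)
    # either way, the sender's private outcomes are no longer private
    orig_s = add(orig_s, orig_p)
    orig_p = [0] * len(orig_p)
    return orig_p, orig_s, recv_p, recv_s
```

Because a shared vector is never summed with another shared vector, no outcome can enter a receiver's totals twice, yet the adopted shared vector still carries third-party information onward.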
In order to evaluate the effectiveness of this procedure, we simulated random networks consisting of ten agents. Each agent has some direct experience of interacting with a supplier (as described in section 4.3). At each iteration of the simulation, each agent interacts with its immediate neighbours and exchanges reputation reports through the sufficient statistics of their Dirichlet distributions. We compare our solution to the two most obvious decentralised alternatives:

• Private and Shared Information: The agents follow the procedure described in the previous section. That is, they maintain separate private and shared vectors of contract outcomes, and at each iteration they communicate both vectors to their immediate neighbours.

• Rumour Propagation: The agents do not differentiate between private and shared contract outcomes. At the first iteration, they communicate all of the contract outcomes that constitute their direct experience. In subsequent iterations, they propagate contract outcomes that they receive from any of their neighbours to all of their other immediate neighbours.

• Private Information Only: The agents only communicate the contract outcomes that constitute their direct experience.

In all cases, at each iteration, the agents use the Dirichlet distribution to calculate their trust estimates. We compare these three decentralised approaches to a centralised reputation system:

• Centralised Reputation: All the agents pass their direct experience to a centralised reputation system that aggregates them together, and passes this estimate back to each agent.

This centralised solution makes the most effective use of the information available in the network. However, most real-world problems demand decentralised solutions due to scalability, modularity and communication concerns. Thus, this centralised solution is included since it represents the optimal case, and allows us to benchmark our decentralised solution. The
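A toy illustration of why the rumour propagation alternative overcounts (our own sketch: a fully connected three-agent network is assumed, and only the scalar number of outcomes each agent believes it has seen is tracked):

```python
# Naive flooding: each agent's aggregate starts as its direct experience
# and, at every iteration, absorbs its neighbours' previous aggregates
# wholesale, so the same outcomes are re-counted as they cycle around.

def flood(direct, iterations):
    agg = list(direct)
    for _ in range(iterations):
        agg = [agg[i] + sum(agg[j] for j in range(len(agg)) if j != i)
               for i in range(len(agg))]
    return agg

direct = [10, 10, 10]          # 10 direct observations per agent
centralised = sum(direct)      # a centralised system counts 30 outcomes, once
flooded = flood(direct, 2)     # [90, 90, 90]: already far beyond 30
```

Every count above the centralised total of 30 is a duplicate, which is exactly the mechanism behind the overconfident rumour propagation curve in figure 4.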
results of these comparisons are shown in figure 4. Here we show the sum of the information content of each agent's covariance matrix (calculated as discussed in section 4.3) for each of these four approaches. We first note that where private information only is communicated, there is no change in information after the first iteration. Once each agent has received the direct experience of its immediate neighbours, no further increase in information can be achieved. This represents the minimum communication, and it exhibits the lowest total information of the four cases. Next, we note that in the case of rumour propagation, the information content increases continually, and rapidly exceeds the centralised reputation result. The fact that the rumour propagation case incorrectly exceeds this limit indicates that it is continuously counting the same contract outcomes as they cycle around the network, in the belief that they are independent events. Finally, we note that using private and shared information represents a compromise between the private information only case and the centralised reputation case. Information is still allowed to propagate around the network, but rumours are eliminated.

As before, we also plot a single instance of the trust estimates from one agent (i.e.
\hat{p}(X) and Cov(p(X))) as a set of ellipses on a two-dimensional plane (along with samples from the true distribution).

Figure 4: Sum of information over all agents as a function of the communication iteration.

As expected, the centralised reputation system achieves the best estimate of the true distribution, since it uses the direct experience of all agents. The private information only case shows the largest ellipse, since it propagates the least information around the network. The rumour propagation case shows the smallest ellipse, but it is inconsistent with the actual distribution p(X). Thus, propagating rumours around the network, and double counting contract outcomes in the belief that they are independent events, results in an overconfident estimate. However, our solution, using separate vectors of private and shared information, allows us to propagate more information than the private information only case, whilst completely avoiding the problems of rumour propagation.

Finally, we consider the effect that this has on the agents' calculation of the expected utility of the contract. We assume the same utility function as used in section 4.3 (i.e.
u(o_a = 1) = 6 and u(o_b = 1) = 2), and in table 1 we present the estimate of the expected utility, and its standard deviation, calculated for all four cases by a single agent at iteration five (after communication has ceased to have any further effect for all methods other than rumour propagation). We note that the rumour propagation case is clearly inconsistent with the centralised reputation system, since its standard deviation is too small and does not reflect the true uncertainty in the expected utility, given the contract outcomes. However, we observe that our solution represents the closest case to the centralised reputation system, and thus succeeds in propagating information throughout the network, whilst also avoiding bias and overconfidence. The exact difference between it and the centralised reputation system depends upon the topology of the network, and the history of exchanges that take place within it.

7. CONCLUSIONS

In this paper we addressed the need for a principled probabilistic model of computational trust that deals with contracts that have multiple correlated dimensions. Our starting point was an agent estimating the expected utility of a contract, and we showed that this leads to a model of computational trust that uses the Dirichlet distribution to calculate a trust estimate from the direct experience of an agent. We then showed how agents may use the sufficient statistics of this Dirichlet distribution to represent and communicate reputation within a decentralised reputation system, and we presented a solution to rumour propagation within these systems. Our future work in this area is to extend the exchange of reputation to the case where contracts are not homogeneous; that is, where not all agents observe the same contract dimensions. This is a challenging extension, since in this case the sufficient statistics of the Dirichlet distribution cannot be used directly. However, by
Figure 5: Instances of \hat{p}(X) and Cov(p(X)) plotted as second standard error ellipses after 5 communication iterations.

Method                           E[E[U]] ± √Var(E[U])
Private and Shared Information   3.18 ± 0.54
Rumour Propagation               3.33 ± 0.07
Private Information Only         3.20 ± 0.65
Centralised Reputation           3.17 ± 0.42

Table 1: Estimated expected utility and its standard error as calculated by a single agent after 5 communication iterations.

addressing this challenge, we hope to be able to apply these techniques to a setting in which a supplier provides a range of services whose failures are correlated, and agents only have direct experience of different subsets of these services.

8. ACKNOWLEDGEMENTS

This research was undertaken as part of the ALADDIN (Autonomous Learning Agents for Decentralised Data and Information Networks) project and is jointly funded by a BAE Systems and EPSRC strategic partnership (EP/C548051/1).

9. REFERENCES

[1] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan. Estimation with Applications to Tracking and Navigation. Wiley Interscience, 2001.
[2] C. Boutilier. The foundations of expected expected utility. In Proc. of the 18th Int. Joint Conf. on Artificial Intelligence, pages 285-290, Acapulco, Mexico, 2003.
[3] M. Evans, N. Hastings, and B. Peacock. Statistical Distributions. John Wiley & Sons, Inc., 1993.
[4] N. Griffiths. Task delegation using experience-based multi-dimensional trust. In Proc. of the 4th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, pages 489-496, Utrecht, The Netherlands, 2005.
[5] N. Gujral, D. DeAngelis, K. K. Fullam, and K. S. Barber. Modelling multi-dimensional trust. In Proc. of the 9th Int. Workshop on Trust in Agent Systems, Hakodate, Japan, 2006.
[6] A. Jøsang and R.
Ismail. The beta reputation system. In Proc. of the 15th Bled Electronic Commerce Conf., pages 324-337, Bled, Slovenia, 2002.
[7] E. M. Maximilien and M. P. Singh. Agent-based trust model involving multiple qualities. In Proc. of the 4th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, pages 519-526, Utrecht, The Netherlands, 2005.
[8] S. D. Ramchurn, D. Huynh, and N. R. Jennings. Trust in multi-agent systems. Knowledge Engineering Review, 19(1):1-25, 2004.
[9] S. Reece and S. Roberts. Robust, low-bandwidth, multi-vehicle mapping. In Proc. of the 8th Int. Conf. on Information Fusion, Philadelphia, USA, 2005.
[10] J. Sabater and C. Sierra. REGRET: A reputation model for gregarious societies. In Proc. of the 4th Workshop on Deception, Fraud and Trust in Agent Societies, pages 61-69, Montreal, Canada, 2001.
[11] W. T. L. Teacy, J. Patel, N. R. Jennings, and M. Luck. TRAVOS: Trust and reputation in the context of inaccurate information sources. Autonomous Agents and Multi-Agent Systems, 12(2):183-198, 2006.
[12] S.
Utete. Network Management in Decentralised Sensing Systems. PhD thesis, University of Oxford, UK, 1994.

Rumours and Reputation: Evaluating Multi-Dimensional Trust within a Decentralised Reputation System

ABSTRACT

In this paper we develop a novel probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. Our starting point is to consider an agent attempting to estimate the utility of a contract, and we show that this leads to a model of computational trust whereby an agent must determine a vector of estimates that represent the probability that any dimension of the contract will be successfully fulfilled, and a covariance matrix that describes the uncertainty and correlations in these probabilities. We present a formalism based on the Dirichlet distribution that allows an agent to calculate these probabilities and correlations from its direct experience of contract outcomes, and we show that this leads to superior estimates compared to an alternative approach using multiple independent beta distributions. We then show how agents may use the sufficient statistics of this Dirichlet distribution to communicate and fuse reputation within a decentralised reputation system. Finally, we present a novel solution to the problem of rumour propagation within such systems. This solution uses the notion of private and shared information, and provides estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding bias and overconfidence.

1. INTRODUCTION

The role of computational models of trust within multi-agent systems in particular, and open distributed systems in general, has recently generated a great deal of research interest. In such systems, agents must typically choose between interaction partners, and in
this context trust can be viewed as providing a means for agents to represent and estimate the reliability with which these interaction partners will fulfil their commitments. To date, however, much of the work within this area has used domain-specific or ad hoc trust metrics, and has focused on providing heuristics to evaluate and update these metrics using direct experience and reputation reports from other agents (see [8] for a review).

Recent work has attempted to place the notion of computational trust within the framework of probability theory [6, 11]. This approach allows many of the desiderata of computational trust models to be addressed through principled means. In particular: (i) it allows agents to update their estimates of the trustworthiness of a supplier as they acquire direct experience, (ii) it provides a natural framework for agents to express their uncertainty in this trustworthiness, and (iii) it allows agents to exchange, combine and filter reputation reports received from other agents. Whilst this approach is attractive, it is somewhat limited in that it has so far only considered single-dimensional outcomes (i.e.
whether the contract has succeeded or failed in its entirety). However, in many real world settings the success or failure of an interaction may be decomposed into several dimensions [7]. This presents the challenge of combining these multiple dimensions into a single metric over which a decision can be made. Furthermore, these dimensions will typically also exhibit correlations. For example, a contract within a supply chain may specify criteria for timeliness, quality and quantity. A supplier who is suffering delays may attempt a trade-off between these dimensions by supplying the full amount late, or supplying as much as possible (but less than the quantity specified within the contract) on time. Thus, correlations will naturally arise between these dimensions, and hence, between the probabilities that describe the successful fulfillment of each contract dimension. To date, however, no such principled framework exists to describe these multi-dimensional contracts, nor the correlations between these dimensions (although some ad hoc models do exist; see section 2 for more details). To rectify this shortcoming, in this paper we develop a probabilistic model of computational trust that explicitly deals with correlated multi-dimensional contracts. The starting point for our work is to consider how an agent can estimate the utility that it will derive from interacting with a supplier. Here we use standard approaches from the literature of data fusion, since this is a well developed field where the notion of multi-dimensional correlated estimates is well established (in that context, the multiple dimensions typically represent the physical coordinates of a target being tracked, and correlations arise through the operation and orientation of sensors), to show that this naturally leads to a trust model where the agent must estimate probabilities and correlations over multiple dimensions. Building upon this, we then devise a novel trust model that addresses
the three desiderata discussed above. In more detail, in this paper we extend the state of the art in four key ways:

1. We devise a novel multi-dimensional probabilistic trust model that enables an agent to estimate the expected utility of a contract, by estimating (i) the probability that each contract dimension will be successfully fulfilled, and (ii) the correlations between these estimates.

2. We present an exact probabilistic model based upon the Dirichlet distribution that allows agents to use their direct experience of contract outcomes to calculate the probabilities and correlations described above. We then benchmark this solution and show that it leads to good estimates.

3. We show that agents can use the sufficient statistics of this Dirichlet distribution in order to exchange reputation reports with one another. The sufficient statistics represent aggregations of their direct experience, and thus express contract outcomes in a compact format with no loss of information.

4. We show that, while being efficient, the aggregation of contract outcomes can lead to double counting, and rumour propagation, in decentralised reputation systems. Thus, we present a novel solution based upon the idea of private and shared information. We show that it yields estimates consistent with a centralised reputation system, whilst maintaining the anonymity of the agents, and avoiding overconfidence.

The remainder of this paper is organised as follows: in section 2 we review related work. In section 3 we present our notation for a single dimensional contract, before introducing our multi-dimensional trust model in section 4. In sections 5 and 6 we discuss communicating reputation, and present our solution to rumour propagation in decentralised reputation systems. We conclude in section 7.

2. RELATED WORK

The need for a multi-dimensional trust model has been recognised by a number of researchers. Sabater and Sierra present a model of reputation, in which
agents form contracts based on multiple variables (such as delivery date and quality), and define impressions as subjective evaluations of the outcome of these contracts. They provide heuristic approaches to combining these impressions to form a measure they call subjective reputation. Likewise, Griffiths decomposes overall trust into a number of different dimensions such as success, cost, timeliness and quality [4]. In his case, each dimension is scored as a real number that represents a comparative value with no strong semantic meaning. He develops a heuristic rule to update these values based on the direct experiences of the individual agent, and a heuristic function that takes the individual trust dimensions and generates a single scalar that is then used to select between suppliers. Whilst he comments that the trust values could have some associated confidence level, heuristics for updating these levels are not presented. Gujral et al. take a similar approach and present a trust model over multiple domain specific dimensions [5]. They define multidimensional goal requirements, and evaluate an expected payoff based on a supplier's estimated behaviour. These estimates are, however, simple aggregations over the direct experience of several agents, and there is no measure of the uncertainty. Nevertheless, they show that agents who select suppliers based on these multiple dimensions outperform those who consider just a single one. By contrast, a number of researchers have presented more principled computational trust models based on probability theory, albeit limited to a single dimension. Jøsang and Ismail describe the Beta Reputation System whereby the reputation of an agent is compiled from the positive and negative reports from other agents who have interacted with it [6]. The beta distribution represents a natural choice for representing these binary outcomes, and it provides a principled means of representing uncertainty. Moreover, they
provide a number of extensions to this initial model including an approach to exchanging reputation reports using the sufficient statistics of the beta distribution, methods to discount the opinions of agents who themselves have low reputation ratings, and techniques to deal with reputations that may change over time. Likewise, Teacy et al. use the beta distribution to describe an agent's belief in the probability that another agent will successfully fulfill its commitments [11]. They present a formalism using a beta distribution that allows the agent to estimate this probability based upon its direct experience, and again they use the sufficient statistics of this distribution to communicate this estimate to other agents. They provide a number of extensions to this initial model, and, in particular, they consider that agents may not always truthfully report their trust estimates. Thus, they present a principled approach to detecting and removing inconsistent reports. Our work builds upon these more principled approaches. However, the starting point of our approach is to consider an agent that is attempting to estimate the expected utility of a contract. We show that estimating this expected utility requires that an agent must estimate the probability with which the supplier will fulfill its contract. In the single-dimensional case, this naturally leads to a trust model using the beta distribution (as per Jøsang and Ismail and Teacy et al.). However, we then go on to extend this analysis to multiple dimensions, where we use the natural extension of the beta distribution, namely the Dirichlet distribution, to represent the agent's belief over multiple dimensions.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

7. CONCLUSIONS

In this paper we addressed the need for a principled probabilistic model of computational trust that deals with contracts that have multiple correlated dimensions. Our starting point was an agent estimating the expected utility of a contract, and we showed that this leads to a model of computational trust that uses the Dirichlet distribution to calculate a trust estimate from the direct experience of an agent. We then showed how agents may use the sufficient statistics of this Dirichlet distribution to represent and communicate reputation within a decentralised reputation system, and we presented a solution to rumour propagation within these systems.

Figure 5: Instances of p̂(X) and Cov(p(X)) plotted as second standard error ellipses after 5 communication iterations.

Table 1: Estimated expected utility and its standard error as calculated by a single agent after 5 communication iterations.

Our future work in this area is to extend the exchange of reputation to the case where contracts are not homogeneous. That is, not all agents observe the same contract dimensions. This is a challenging extension, since in this case, the sufficient statistics of the Dirichlet distribution cannot be used directly. However, by addressing this challenge, we hope to be able to apply these techniques to a setting in which a supplier provides a range of services whose failures are correlated, and agents only
have direct experiences of different subsets of these services.

3. SINGLE-DIMENSIONAL TRUST

Before presenting our multi-dimensional trust model, we first introduce the notation and formalism that we will use by describing the more familiar single dimensional case. We consider an agent who must decide whether to engage in a future contract with a
supplier. This contract will lead to some outcome, o, and we consider that o = 1 if the contract is successfully fulfilled, and o = 0 if not (we consider only binary contract outcomes here, although extending this to partial outcomes is part of our future work). In order for the agent to make a rational decision, it should consider the utility that it will derive from this contract. We assume that in the case that the contract is successfully fulfilled, the agent derives a utility u(o = 1), otherwise it receives no utility. Now, given that the agent is uncertain of the reliability with which the supplier will fulfill the contract, it should consider the expected utility that it will derive, E[U], and this is given by:

E[U] = p(o = 1) u(o = 1)

where p(o = 1) is the probability that the supplier will successfully fulfill the contract. However, whilst u(o = 1) is known by the agent, p(o = 1) is not. The best the agent can do is to determine a distribution over possible values of p(o = 1) given its direct experience of previous contract outcomes. Given that it has been able to do so, it can then determine an estimate of the expected utility of the contract, E[E[U]], and a measure of its uncertainty in this expected utility, Var(E[U]). This uncertainty is important since a risk averse agent may make a decision regarding a contract not only on its estimate of the expected utility of the contract, but also on the probability that the expected utility will exceed some minimum amount. These two properties are given by:

E[E[U]] = p̂(o = 1) u(o = 1)
Var(E[U]) = u(o = 1)² Var(p(o = 1))

where p̂(o = 1) and Var(p(o = 1)) are the estimate and uncertainty of the probability that a contract will be successfully fulfilled, and are calculated from the distribution over possible values of p(o = 1) that the agent determines from its direct experience. The utility based approach that we present here provides an attractive motivation for this model of Teacy et al. [11]. Now, in the case of binary contract outcomes, the beta distribution is the natural choice to represent the distribution over possible values of p(o = 1), since within Bayesian statistics this is well known to be the conjugate prior for binomial observations [3]. By adopting the beta distribution, we can calculate p̂(o = 1) and Var(p(o = 1)) using standard results, and thus, if an agent observed N previous contracts of which n were successfully fulfilled, then:

p̂(o = 1) = (n + 1) / (N + 2)
Var(p(o = 1)) = (n + 1)(N − n + 1) / ((N + 2)² (N + 3))

Note that, as expected, the greater the number of contracts the agent observes, the smaller the variance term Var(p(o = 1)), and thus the less the uncertainty regarding the probability that a contract will be successfully fulfilled, p̂(o = 1).

4. MULTI-DIMENSIONAL TRUST

We now extend the description above to consider contracts between suppliers and agents that are represented by multiple dimensions, and hence the success or failure of a contract can be decomposed into the success or failure of each separate dimension. Consider again the example of the supply chain that specifies the timeliness, quantity, and quality of the goods that are to be delivered. Thus, within our trust model oa = 1 now indicates a successful outcome over dimension a of the contract, and oa = 0 indicates an unsuccessful one. A contract outcome, X, is now
composed of a vector of individual contract part outcomes (e.g. X = {oa = 1, ob = 0, oc = 0, ...}). Given a multi-dimensional contract whose outcome is described by the vector X, we again consider that in order for an agent to make a rational decision, it should consider the utility that it will derive from this contract. To this end, we can make the general statement that the expected utility of a contract is given by:

E[U] = Σ_X U(X) p(X)

As before, whilst U(X) is known to the agent, the probability distribution p(X) is not. Rather, given the agent's direct experience of the supplier, the agent can determine a distribution over possible values for p(X). In the single dimensional case, a beta distribution was the natural choice over possible values of p(o = 1). In the multi-dimensional case, where p(X) itself is a vector of probabilities, the corresponding natural choice is the Dirichlet distribution, since this is a conjugate prior for multinomial proportions [3]. Given this distribution, the agent is then able to calculate an estimate of the expected utility of a contract. As before, this estimate is itself represented by an expected value:

E[E[U]] = Σ_X U(X) p̂(X)

and a variance, describing the uncertainty in this expected utility:

Var(E[U]) = Σ_X Σ_X' U(X) U(X') Cov(p(X), p(X'))

Thus, whilst the single dimensional case naturally leads to a trust model in which the agents attempt to derive an estimate of the probability that a contract will be successfully fulfilled, p̂(o = 1), along with a scalar variance that describes the uncertainty in this probability, Var(p(o = 1)), in this case the agents must derive an estimate of a vector of probabilities, p̂(X), along with a covariance matrix, Cov(p(X)), that represents the uncertainty in p(X) given the observed contractual outcomes. At this point, it is interesting to note that the estimate in the single dimensional case, p̂(o = 1), has a clear semantic meaning in relation to trust; it is the agent's belief in the probability of a supplier successfully
fulfilling a contract. However, in the multi-dimensional case the agent must determine p̂(X), and since this describes the probability of all possible contract outcomes, including those that are completely unfulfilled, this direct semantic interpretation is not present. In the next section, we describe the exemplar utility function that we shall use in the remainder of this paper.

4.1 Exemplar Utility Function

The approach described so far is completely general, in that it applies to any utility function of the form described above, and also applies to the estimation of any joint probability distribution. In the remainder of this paper, for illustrative purposes, we shall limit the discussion to the simplest possible utility function that exhibits a dependence upon the correlations between the contract dimensions. That is, we consider the case that the expected utility is dependent only on the marginal probabilities of each contract dimension being successfully fulfilled, rather than the full joint probabilities:

E[U] = Σ_a u(oa = 1) p(oa = 1)

Thus, p̂(X) is a vector estimate of the probability of each contract dimension being successfully fulfilled, and maintains the clear semantic interpretation seen in the single dimensional case:

p̂(X) = {p̂(oa = 1), p̂(ob = 1), p̂(oc = 1), ...}

The correlations between the contract dimensions affect the uncertainty in the expected utility. To see this, consider the covariance matrix that describes this uncertainty, Cov(p(X)), which in the three-dimensional case is given by:

             | Va   Cab  Cac |
Cov(p(X)) =  | Cab  Vb   Cbc |
             | Cac  Cbc  Vc  |
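To make the role of the off-diagonal terms concrete, the uncertainty in the expected utility under this exemplar utility function is the quadratic form Var(E[U]) = Σ_a Σ_b u(oa = 1) u(ob = 1) Cov_ab, i.e. uᵀ Cov(p(X)) u. The sketch below (plain Python; the helper name is ours, and the numeric values follow the two-dimensional scenario with u(oa = 1) = u(ob = 1) = 1 and Va = Vb = 0.2 discussed in the text) evaluates this with and without the correlation terms:

```python
# Sketch: Var(E[U]) = u^T Cov(p(X)) u, i.e. sum over a, b of u_a * u_b * Cov_ab.
# Assumed values: u(oa=1) = u(ob=1) = 1, Va = Vb = 0.2; fully correlated
# outcomes give an off-diagonal covariance term of 0.2.

def utility_variance(u, cov):
    # Quadratic form u^T Cov u over the marginal fulfilment probabilities.
    n = len(u)
    return sum(u[a] * u[b] * cov[a][b] for a in range(n) for b in range(n))

u = [1.0, 1.0]
cov_full = [[0.2, 0.2],
            [0.2, 0.2]]    # correlation terms retained
cov_naive = [[0.2, 0.0],
             [0.0, 0.2]]   # off-diagonal terms set to zero

print(utility_variance(u, cov_full))   # ~ 0.8
print(utility_variance(u, cov_naive))  # ~ 0.4
```

Zeroing the off-diagonal terms halves the reported variance here, which is exactly the over-confidence that the independent-beta approximation introduces.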
In this matrix, the "diagonal" terms, Va, Vb and Vc, represent the uncertainties in p(oa = 1), p(ob = 1) and p(oc = 1) within p(X). The "off-diagonal" terms, Cab, Cac and Cbc, represent the correlations between these probabilities. In the next section, we use the Dirichlet distribution to calculate both p̂(X) and Cov(p(X)) from an agent's direct experience of previous contract outcomes. We first illustrate why this is necessary by considering an alternative approach to modelling multi-dimensional contracts, whereby an agent naïvely assumes that the dimensions are independent and thus models each individually by separate beta distributions (as in the single dimensional case we presented in section 3). This is actually equivalent to setting the off-diagonal terms within the covariance matrix, Cov(p(X)), to zero. However, doing so can lead an agent to assume that its estimate of the expected utility of the contract is more accurate than it actually is. To illustrate this, consider a specific scenario with the following values: u(oa = 1) = u(ob = 1) = 1 and Va = Vb = 0.2. In this case, Var(E[U]) = 0.4(1 + Cab), and thus, if the correlation Cab is ignored, the variance in the expected utility is 0.4. However, if the contract outcomes are completely correlated, then Cab = 1 and Var(E[U]) is actually 0.8. Thus, in order to have an accurate estimate of the variance of the expected contract utility, and to make a rational decision, it is essential that the agent is able to represent and calculate these correlation terms. In the next section, we describe how an agent may do so using the Dirichlet distribution.

4.2 The Dirichlet Distribution

In this section, we describe how the agent may use its direct experience of previous contracts, and the standard results of the Dirichlet distribution, to determine an estimate of the probability that each contract dimension will be successfully fulfilled, p̂(X), and a measure
of the uncertainties in these probabilities that expresses the correlations between the contract dimensions, Cov(p(X)). We first consider the calculation of p̂(X) and the diagonal terms of the covariance matrix Cov(p(X)). As described above, the derivation of these results is identical to the case of the single dimensional beta distribution, where out of N contract outcomes, n are successfully fulfilled. In the multi-dimensional case, however, we have a vector {na, nb, nc, ...} that represents the number of outcomes for which each of the individual contract dimensions was successfully fulfilled. Thus, in terms of the standard Dirichlet parameters, where αa = na + 1 and α0 = N + 2, the agent can estimate the probability of each contract dimension being successfully fulfilled:

p̂(oa = 1) = αa / α0 = (na + 1) / (N + 2)

However, calculating the off-diagonal terms within Cov(p(X)) is more complex, since it is necessary to consider the correlations between the contract dimensions. Thus, for each pair of dimensions (i.e. a and b), we must consider all possible combinations of contract outcomes, and thus we define n_ij^ab as the number of contract outcomes for which both oa = i and ob = j.
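Both the per-dimension estimate p̂(oa = 1) = (na + 1)/(N + 2) and the pair-wise counts n_ij^ab can be computed directly from a record of contract outcomes. A minimal sketch (the outcome encoding, dictionary representation, and helper names are our own assumptions, not from the paper):

```python
# Sketch: per-dimension fulfilment estimates and pairwise outcome counts.
# Each contract outcome is a dict, e.g. {"a": 1, "b": 0}, where 1 means
# that dimension of the contract was successfully fulfilled.

def dimension_estimate(outcomes, dim):
    # Beta/Dirichlet point estimate: p_hat = (n + 1) / (N + 2).
    N = len(outcomes)
    n = sum(o[dim] for o in outcomes)
    return (n + 1) / (N + 2)

def pairwise_counts(outcomes, dim_a, dim_b):
    # n_ij = number of contracts with o_a = i and o_b = j, for i, j in {0, 1}.
    counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for o in outcomes:
        counts[(o[dim_a], o[dim_b])] += 1
    return counts

outcomes = [{"a": 1, "b": 1}, {"a": 1, "b": 0},
            {"a": 0, "b": 0}, {"a": 1, "b": 1}]
print(dimension_estimate(outcomes, "a"))    # (3 + 1) / (4 + 2), i.e. ~ 0.667
print(pairwise_counts(outcomes, "a", "b"))  # {(0, 0): 1, (0, 1): 0, (1, 0): 1, (1, 1): 2}
```

The four pairwise counts per dimension pair are exactly the statistics needed to parameterise the pair-wise Dirichlet distribution described next.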
For example, n_10^ab represents the number of contracts for which oa = 1 and ob = 0. Now, using the standard Dirichlet notation, we can define α_ij^ab = n_ij^ab + 1 for all i and j taking values 0 and 1. Then, to calculate the cross-correlations between contract pairs a and b, we note that the Dirichlet distribution over the pair-wise joint probabilities is:

p({p(oa = i, ob = j)}) = K^ab Π_{i,j ∈ {0,1}} p(oa = i, ob = j)^(α_ij^ab − 1)

where K^ab is a normalising constant [3]. From this we can derive pair-wise probability estimates and variances:

p̂(oa = i, ob = j) = α_ij^ab / α0
V[p(oa = i, ob = j)] = α_ij^ab (α0 − α_ij^ab) / (α0² (α0 + 1))

and in fact, α0 = Σ_{i,j ∈ {0,1}} α_ij^ab = N + 4, where N is the total number of contracts observed. Likewise, we can express the covariance in these pair-wise probabilities in similar terms:

C[p(oa = i, ob = j), p(oa = m, ob = n)] = −α_ij^ab α_mn^ab / (α0² (α0 + 1))

These results allow us to determine the covariance Cab. To do so, we first simplify the notation by defining V_ij^ab = V[p(oa = i, ob = j)] and C_ijmn^ab = C[p(oa = i, ob = j), p(oa = m, ob = n)]. The covariance for the probability of positive contract outcomes is then the covariance between Σ_{j ∈ {0,1}} p(oa = 1, ob = j) and Σ_{i ∈ {0,1}} p(oa = i, ob = 1), and thus:

Cab = Σ_{i ∈ {0,1}} Σ_{j ∈ {0,1}} C[p(oa = 1, ob = j), p(oa = i, ob = 1)] = V_11^ab + C_1011^ab + C_1101^ab + C_1001^ab

Thus, given a set of contract outcomes that represent the agent's previous interactions with a supplier, we may use the Dirichlet distribution to calculate the mean and variance of the probability of any contract dimension being successfully fulfilled (i.e. p̂(oa = 1) and Va). In addition, by a somewhat more complex procedure, we can also calculate the correlations between these probabilities (i.e.
Cab). This allows us to calculate an estimate of the probability that any contract dimension will be successfully fulfilled, p̂(X), and also to represent the uncertainty and correlations in these probabilities by the covariance matrix, Cov(p(X)). In turn, these results may be used to calculate the estimate and uncertainty in the expected utility of the contract. In the next section we present empirical results that show that in practice this formalism yields significant improvements in these estimates compared to the naïve approximation using multiple independent beta distributions.

4.3 Empirical Comparison

In order to evaluate the effectiveness of our formalism, and show the importance of the off-diagonal terms in Cov(p(X)), we compare two approaches:

Figure 1: Plots showing (i) the variance of the expected contract utility and (ii) the information content of the estimates computed using the Dirichlet distribution and multiple independent beta distributions. Results are averaged over 10⁶ runs, and the error bars show the standard error in the mean.

• Dirichlet Distribution: We use the full Dirichlet distribution, as described above, to calculate p̂(X) and Cov(p(X)), including all the off-diagonal terms that represent the correlations between the contract dimensions.

• Independent Beta Distributions: We use independent beta distributions to represent each contract dimension, in order to calculate p̂(X), and then, as described earlier, we approximate Cov(p(X)) and ignore the correlations by setting all the off-diagonal terms to zero.

We consider a two-dimensional case where u(oa = 1) = 6 and u(ob = 1) = 2, since this allows us to plot p̂(X) and Cov(p(X)) as ellipses in a two-dimensional plane, and thus explain the differences between the two approaches. Specifically, we initially allocate the agent some previous contract outcomes that represent its direct experience with a supplier. The number of
contracts is drawn uniformly between 10 and 20, and the actual contract outcomes are drawn from an arbitrary joint distribution intended to induce correlations between the contract dimensions. For each set of contracts, we use the approaches described above to calculate p̂(X) and Cov(p(X)), and hence the variance in the expected contract utility, Var(E[U]). In addition, we calculate a scalar measure of the information content, I, of the covariance matrix Cov(p(X)), which is a standard way of measuring the uncertainty encoded within the covariance matrix [1]. More specifically, we calculate the determinant of the inverse of the covariance matrix:

I = |Cov(p(X))⁻¹|

and note that the larger the information content, the more precise p̂(X) will be, and thus the better the estimate of the expected utility that the agent is able to calculate. Finally, we use the results presented in section 4.2 to calculate the actual correlation, ρ, associated with this particular set of contract outcomes:

ρ = Cab / √(Va Vb)

where Cab, Va and Vb are calculated as described in section 4.2.

Figure 2: Examples of p̂(X) and Cov(p(X)) plotted as second standard error ellipses.

The results of this analysis are shown in figure 1. Here we show the values of I and Var(E[U]) calculated by the agents, plotted against the correlation of the contract outcomes, ρ, that constituted their direct experience. The results are averaged over 10⁶ simulation runs. Note that, as expected, when the dimensions of the contract outcomes are uncorrelated (i.e.
ρ = 0), then both approaches give the same results. However, the value of using our formalism with the full Dirichlet distribution is shown when the correlation between the dimensions increases (either negatively or positively). As can be seen, if we approximate the Dirichlet distribution with multiple independent beta distributions, all of the correlation information contained within the covariance matrix, Cov(p(X)), is lost, and thus the information content of the matrix is much lower. The loss of this correlation information leads the variance of the expected utility of the contract to be incorrect (either over- or under-estimated depending on the correlation)⁵, with the exact amount of mis-estimation depending on the actual utility function chosen (i.e. the values of u(oa = 1) and u(ob = 1)). In addition, in figure 2 we illustrate an example of the estimates calculated through both methods, for a single exemplar set of contract outcomes. We represent the probability estimates, p̂(X), and the covariance matrix, Cov(p(X)), in the standard way as an ellipse [1]. That is, p̂(X) determines the position of the center of the ellipse, and Cov(p(X)) defines its size and shape. Note that whilst the ellipse resulting from the full Dirichlet formalism accurately reflects the true distribution (samples of which are plotted as points), that calculated by using multiple independent beta distributions (and thus ignoring the correlations) results in a much larger ellipse that does not reflect the true distribution. The larger size of this ellipse is a result of the off-diagonal terms of the covariance matrix being set to zero, and corresponds to the agent miscalculating the uncertainty in the probability of each contract dimension being fulfilled. This, in turn, leads it to miscalculate the uncertainty in the expected utility of a contract (shown in figure 1 as Var(E[U])).

5. COMMUNICATING REPUTATION

Having described how an individual agent
can use its own direct experience of contract outcomes in order to estimate the probability that a multi-dimensional contract will be successfully fulfilled, we now go on to consider how agents within an open multi-agent system can communicate these estimates to one another. This is commonly referred to as reputation, and allows agents with limited direct experience of a supplier to make rational decisions.

⁵ Note that the plots are not smooth due to the fact that, given a limited number of contract outcomes, the mean of Va and Vb do not vary smoothly with ρ.

Both Jøsang and Ismail, and Teacy et al., present models whereby reputation is communicated between agents using the sufficient statistics of the beta distribution [6, 11]. This approach is attractive since these sufficient statistics are simple aggregations of contract outcomes (more precisely, they are simply the total number of contracts observed, N, and the number of these that were successfully fulfilled, n). Under the probabilistic framework of the beta distribution, reputation reports in this form may simply be aggregated with an agent's own direct experience, in order to gain a more precise estimate based on a larger set of contract outcomes. We can immediately extend this approach to the multi-dimensional case considered here, by requiring that the agents exchange the sufficient statistics of the Dirichlet distribution instead of the beta distribution. In this case, for each pair of dimensions (i.e.
a and b), the agents must communicate a vector of contract outcomes, N, containing the sufficient statistics of the Dirichlet distribution, given by the pair-wise counts:

N = {n_ij^ab : i, j ∈ {0, 1}, for each pair of dimensions (a, b)}

Thus, an agent is able to communicate the sufficient statistics of its own Dirichlet distribution in terms of just 2d(d − 1) numbers (where d is the number of contract dimensions). For instance, in the case of three dimensions, N is given by:

N = {n_00^ab, n_01^ab, n_10^ab, n_11^ab, n_00^ac, n_01^ac, n_10^ac, n_11^ac, n_00^bc, n_01^bc, n_10^bc, n_11^bc}

and hence, large sets of contract outcomes may be communicated within a relatively small message size, with no loss of information. Again, agents receiving these sufficient statistics may simply aggregate them with their own direct experience in order to gain a more precise estimate of the trustworthiness of a supplier. Finally, we note that whilst it is not the focus of our work here, by adopting the same principled approach as Jøsang and Ismail, and Teacy et al., many of the techniques that they have developed (such as discounting reports from unreliable agents, and filtering inconsistent reports from selfish agents) may be directly applied within this multi-dimensional model. However, we now go on to consider a new issue that arises in both the single and multi-dimensional models, namely the problems that arise when such aggregated sufficient statistics are propagated within decentralised agent networks.

6. RUMOUR PROPAGATION WITHIN REPUTATION SYSTEMS

In the previous section, we described the use of sufficient statistics to communicate reputation, and we showed that by aggregating contract outcomes together into these sufficient statistics, a large number of contract outcomes can be represented and communicated in a compact form. Whilst this is an attractive property, it can be problematic in practice, since the individual provenance of each contract outcome is lost in the aggregation. Thus, to ensure an accurate estimate, the reputation system must ensure that each observation of a contract outcome is included within the aggregated statistics no more
than once. Within a centralised reputation system, where all agents report their direct experience to a trusted center, such double counting of contract outcomes is easy to avoid. However, in a decentralised reputation system, where agents communicate reputation to one another and aggregate their direct experience with these reputation reports on the fly, avoiding double counting is much more difficult.

Figure 3: Example of rumour propagation in a decentralised reputation system.

For example, consider the case shown in figure 3, where three agents (a1 ... a3), each with some direct experience of a supplier, share reputation reports regarding this supplier. If agent a1 were to provide its estimate to agents a2 and a3 in the form of the sufficient statistics of its Dirichlet distribution, then these agents could aggregate these contract outcomes with their own, and thus obtain more precise estimates. If at a later stage agent a2 were to send its aggregate vector of contract outcomes to agent a3, then agent a3, being unaware of the full history of exchanges, may attempt to combine these contract outcomes with its own aggregated vector. However, since both vectors contain a contribution from agent a1, these will be counted twice in the final aggregated vector, and will result in a biased and overconfident estimate. This is termed rumour propagation or data incest in the data fusion literature [9]. One possible solution would be to uniquely identify the source of each contract outcome, and then propagate each vector, along with its label, through the network. Agents can thus identify identical observations that have arrived through different routes, and after removing the duplicates, can aggregate these together to form their estimates. Whilst this appears to be attractive in principle, for a number of reasons it is not always a viable solution in practice [12]. Firstly, agents may not actually wish to have their uniquely labelled contract outcomes passed
around an open system, since such information may have commercial or practical significance that could be used to their disadvantage. As such, agents may only be willing to exchange identifiable contract outcomes with a small number of other agents (perhaps those with which they have some sort of reciprocal relationship). Secondly, the fact that there is no aggregation of the contract outcomes as they pass around the network means that the message size increases over time, and the ultimate size of these messages is bounded only by the number of agents within the system (possibly an extremely large number for a global system). Finally, it may actually be difficult to assign globally agreeable, consistent, and unique labels for each agent within an open system. In the next section, we develop a novel solution to the problem of rumour propagation within decentralised reputation systems. Our solution is based on an approach developed within the area of target tracking and data fusion [9]. It avoids the need to uniquely identify an agent, it allows agents to restrict the number of other agents to whom they reveal their private estimates, and yet it still allows information to propagate throughout the network.

6.1 Private and Shared Information

Our solution to rumour propagation within decentralised reputation systems introduces the notion of private information that an agent knows it has not communicated to any other agent, and shared information that has been communicated to, or received from, another agent. Thus, the agent can decompose its contract outcome vector, N, into two vectors: a private one, Np, that has not been communicated to another agent, and a shared one, Ns, that has been shared with, or received from, another agent:

N = Np + Ns

Now, whenever an agent communicates reputation, it communicates both its private and shared vectors separately. Both the originating and receiving agents then update their two vectors appropriately. To understand this, consider the case that agent aα sends its private and shared contract outcome vectors, Npα and Nsα, to agent aβ that itself has private and shared contract outcomes Npβ and Nsβ. Each agent updates its vectors of contract outcomes according to the following procedure:

• Originating Agent: Once the originating agent has sent both its shared and private contract outcome vectors to another agent, its private information is no longer private. Thus, it must remove the contract outcomes that were in its private vector, and add them into its shared vector: Nsα ← Nsα + Npα, Npα ← ∅.

• Receiving Agent: The goal of the receiving agent is to accumulate the largest number of contract outcomes (since this will result in the most precise estimate) without including shared information from both itself and the other agent (since this may result in double counting of contract outcomes). It has two choices, depending on the total number of contract outcomes⁶ within its own shared vector, Nsβ, and within that of the originating agent, Nsα. Thus, it updates its vectors according to the procedure below:

  - Nsβ > Nsα: If the receiving agent's shared vector represents a greater number of contract outcomes than the shared vector of the originating agent, then the agent combines its shared vector with the private vector of the originating agent: Nsβ ← Nsβ + Npα, Npβ unchanged.

  - Nsβ ≤ Nsα: Otherwise, the receiving agent obtains a more precise estimate by discarding its own shared vector and adopting the combined shared and private vectors of the originating agent: Nsβ ← Nsα + Npα, Npβ unchanged.

• Environment Design Level, in the case of a Markovian environment, operates on a tuple < S, s0, A, T, O, Ω >, where:
  - S is the set of all possible environment states;
  - s0 is the initial state of the environment (which can also be viewed as a probability distribution over S);
  - A is the set of all possible actions applicable
in the environment;
  - T is the environment's probabilistic transition function: T : S × A → Π(S). That is, T(s′|a, s) is the probability that the environment will move from state s to state s′ under action a;
  - O is the set of all possible observations. This is what the sensor input would look like for an outside observer;
  - Ω is the observation probability function: Ω : S × A × S → Π(O). That is, Ω(o|s′, a, s) is the probability that one will observe o given that the environment has moved from state s to state s′ under action a.

• User Level, in the case of a Markovian environment, operates on the set of system dynamics described by a family of conditional probabilities F = {τ : S × A → Π(S)}. Thus a specification of target dynamics can be expressed by q ∈ F, and the learning or tracking algorithm can be represented as a function L : O × (A × O)* → F; that is, it maps sequences of observations and actions performed so far into an estimate τ ∈ F of system dynamics. There are many possible variations available at the User Level to define divergence between system dynamics; several of them are:

  - Trace distance or L1 distance between two distributions p and q, defined by D(p(·), q(·)) = (1/2) Σ_x |p(x) − q(x)|
  - Fidelity measure of distance, F(p(·), q(·)) = Σ_x √(p(x) q(x))
  - Kullback-Leibler divergence, DKL(p(·) ‖ q(·)) = Σ_x p(x) log (p(x) / q(x))

Notice that the latter two are not actually metrics over the space of possible distributions, but nevertheless have meaningful and important interpretations. For instance, Kullback-Leibler divergence is an important tool of information theory [3] that allows one to measure the price of encoding an information source governed by q, while assuming that it is governed by p. The User Level also defines the threshold of dynamics deviation probability θ.

• Agent Level is then faced with a problem of
selecting a control signal function a* to satisfy a minimization problem as follows:

a* = arg min_a Pr(d(τ_a, q) > θ)

where d(τ_a, q) is a random variable describing the deviation of the dynamics estimate τ_a, created by L under control signal a, from the ideal dynamics q. Implicit in this minimization problem is that L is manipulated via the environment, based on the environment model produced by the Environment Design Level.

3.2 DBC View of the State Space

It is important to note the complementary view that DBC and POMDPs take on the state space of the environment. POMDPs regard state as a stationary snap-shot of the environment; whatever attributes of state sequencing one seeks are reached through properties of the control process, in this case reward accumulation. This can be viewed as if the sequencing of states, and the attributes of that sequencing, are only introduced by and for the controlling mechanism: the POMDP policy. DBC concentrates on the underlying principle of state sequencing, the system dynamics. DBC's target dynamics specification can use the environment's state space as a means to describe, discern, and preserve changes that occur within the system. As a result, DBC has a greater ability to express state sequencing properties, which are grounded in the environment model and its state space definition. For example, consider the task of moving through rough terrain towards a goal and reaching it as fast as possible. POMDPs would encode terrain as state space points, while speed would be ensured by a negative reward for every step taken without reaching the goal: accumulating higher reward can be achieved only by faster motion. Alternatively, the state space could directly include the notion of speed. For POMDPs, this would mean that the same concept is encoded twice, in some sense: directly in the state space, and indirectly within reward accumulation. Now, even if the reward function would encode more, and finer,
details of the properties of motion, the POMDP solution will have to search in a much larger space of policies, while still being guided by the implicit concept of the reward accumulation procedure. On the other hand, the tactical target expression of variations and correlations between position and speed of motion is now grounded in the state space representation. In this situation, any further constraints, e.g., smoothness of motion, speed limits in different locations, or speed reductions during sharp turns, are explicitly and uniformly expressed by the tactical target, and can result in faster and more effective action selection by a DBC algorithm.

4. EMT-BASED CONTROL AS A DBC

Recently, a control algorithm was introduced called EMT-based Control [13], which instantiates the DBC framework. Although it provides an approximate greedy solution in the DBC sense, initial experiments using EMT-based control have been encouraging [14, 15]. EMT-based control is based on the Markovian environment definition, as in the case with POMDPs, but its User and Agent Levels are of the Markovian DBC type of optimality.

• User Level of EMT-based control defines a limited-case target system dynamics independent of action: qEMT : S → Π(S). It then utilizes the Kullback-Leibler divergence measure to compose a momentary system dynamics estimator: the Extended Markov Tracking (EMT) algorithm. The algorithm keeps a system dynamics estimate τt_EMT that is capable of explaining the recent change in an auxiliary Bayesian system state estimator from pt−1 to pt, and updates it conservatively using Kullback-Leibler divergence. Since τt_EMT and pt−1, pt are respectively the conditional and marginal probabilities over the system's state space, explanation simply means that pt(s′) = Σ_s τt_EMT(s′|s) pt−1(s), and the dynamics estimate update is performed by solving a minimization problem:

τt_EMT = H[pt, pt−1, τt−1_EMT] = arg min_τ DKL(τ × pt−1 ‖ τt−1_EMT × pt−1)
s.t. pt(s′) = Σ_s (τ × pt−1)(s′, s)
     pt−1(s) = Σ_s′ (τ × pt−1)(s′, s)

Table 1: Structure of POMDP vs. Dynamics-Based Control in a Markovian Environment

Level              | MDP                                         | Markovian DBC
Environment Design | < S, A, T, O, Ω >, where S is the set of states, A the set of actions, T : S × A → Π(S) the transition function, O the observation set, and Ω : S × A × S → Π(O) (common to both)
User               | r : S × A × S → R (reward function); F(π*) → r (reward remodeling) | q : S × A → Π(S) (ideal dynamics); L(o1, ..., ot) → τ (dynamics estimator); θ (threshold)
Agent              | π* = arg max_π E[Σ γ^t r_t]                 | π* = arg min_π Prob(d(τ ‖ q) > θ)

• Agent Level in EMT-based control is suboptimal with respect to DBC (though it remains within the DBC framework), performing greedy action selection based on prediction of EMT's reaction. The prediction is based on the environment model provided by the Environment Design Level, so that if we denote by Ta the environment's transition function limited to action a, and pt−1 is the auxiliary Bayesian system state estimator, then the EMT-based control choice is described by:

a* = arg min_{a ∈ A} DKL(H[Ta × pt, pt, τt_EMT] ‖ qEMT × pt−1)

Note that this follows the Markovian DBC framework precisely: the rewarding optimality of POMDPs is discarded, and in its place a dynamics estimator (EMT in this case) is manipulated via action effects on the environment to produce an estimate close to the specified target system dynamics. Yet, as we mentioned, naive EMT-based control is suboptimal in the DBC sense, and has several additional limitations that do not exist in the general DBC framework (discussed in Section 4.2).

4.1 Multi-Target EMT

At
times, there may exist several behavioral preferences. For example, in the case of museum guards, some art items are more heavily guarded, requiring that the guards stay more often in their close vicinity. On the other hand, no corner of the museum is to be left unchecked, which demands constant motion. Successful museum security would demand that the guards adhere to, and balance, both of these behaviors. For EMT-based control, this would mean facing several tactical targets {qk}, k = 1, ..., K, and the question becomes how to merge and balance them. A balancing mechanism can be applied to resolve this issue. Note that EMT-based control, while selecting an action, creates a preference vector over the set of actions based on their predicted performance with respect to a given target. If these preference vectors are normalized, they can be combined into a single unified preference. This requires replacement of standard EMT-based action selection by the algorithm below [15]:

• Given:
  - a set of target dynamics {qk}, k = 1, ..., K,
  - a vector of weights w(k)

• Select an action as follows:
  - For each action a ∈ A, predict the future state distribution p̄a_t+1 = Ta × pt;
  - For each action, compute Da = H(p̄a_t+1, pt, PDt);
  - For each a ∈ A and tactical target qk, denote V(a, k) = DKL(Da ‖ qk × pt). Let Vk(a) = (1/Zk) V(a, k), where Zk = Σ_{a ∈ A} V(a, k) is a normalization factor.
  - Select a* = arg min_a Σ_{k=1}^{K} w(k) Vk(a)

The weights vector w = (w1, ..., wK) allows additional tuning of importance among target dynamics without the need to redesign the targets themselves. This balancing method is also seamlessly integrated into the EMT-based control flow of operation.

4.2 EMT-based Control Limitations

EMT-based control is a sub-optimal (in the DBC sense) representative of the DBC structure. It limits the User by forcing EMT to be its dynamics tracking algorithm, and replaces Agent optimization by greedy action selection. This kind of combination,
however, is common for on-line algorithms.\nAlthough further development of EMT-based controllers is necessary, evidence so far suggests that even the simplest form of the algorithm possesses a great deal of power, and displays trends that are optimal in the DBC sense of the word.\nThere are two further, EMT-specific, limitations to EMT-based control that are evident at this point.\nBoth already have partial solutions and are subjects of ongoing research.\nThe first limitation is the problem of negative preference.\nIn the POMDP framework, for example, this is captured simply through the appearance of values with different signs within the reward structure.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 793\nFor EMT-based control, however, negative preference means that one would like to avoid a certain distribution over system development sequences, whereas EMT-based control concentrates on getting as close as possible to a distribution.\nAvoidance is thus unnatural in native EMT-based control.\nThe second limitation comes from the fact that standard environment modeling can create pure sensory actions: actions that do not change the state of the world, and differ only in how observations are received and in the quality of those observations.\nSince the world state does not change, EMT-based control would not be able to differentiate between different sensory actions.\nNotice that both of these limitations of EMT-based control are absent from the general DBC framework, since it may have a tracking algorithm capable of considering pure sensory actions and, unlike Kullback-Leibler divergence, a distribution deviation measure that is capable of dealing with negative preference.\n5.\nEMT PLAYING TAG\nThe Game of Tag was first introduced in [11].\nIt is a single-agent problem of capturing a quarry, and belongs to the class of area sweeping problems.\nAn example domain is shown in Figure 2.
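To make the balanced selection step of Section 4.1 concrete, the following sketch renders it in Python under simplifying assumptions: the EMT dynamics-estimation step H is elided, and each action is scored by comparing its predicted next-state distribution directly against each target's prediction via Kullback-Leibler divergence. All function and variable names are illustrative, not part of the original system.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """D_KL(p || q) for discrete distributions (smoothed to avoid log 0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def select_action(actions, T, p_t, targets, w):
    """Greedy multi-target step: normalize each per-target preference
    vector, then pick the action minimizing the weighted sum
    sum_k w[k] * V_k(a), as in the balanced selection rule of Section 4.1."""
    V = {a: [] for a in actions}
    for q_k in targets:
        # scores[a] plays the role of V(a, k): deviation of action a's
        # predicted dynamics from tactical target k
        scores = {a: kl(T[a] @ p_t, q_k @ p_t) for a in actions}
        Z = sum(scores.values()) or 1.0  # normalization factor Z_k
        for a in actions:
            V[a].append(scores[a] / Z)
    return min(actions, key=lambda a: sum(wk * v for wk, v in zip(w, V[a])))
```

Here T[a] and each target q_k are assumed to be column-stochastic matrices mapping the current state distribution p_t to a predicted next-state distribution; the weight vector w plays the same tuning role as $w = (w_1, ..., w_K)$ in the text.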
Figure 2: Tag domain; an agent (A) attempts to seek and capture a quarry (Q)\nThe Game of Tag severely limits the agent's perception, so that the agent is able to detect the quarry only if they are co-located in the same cell of the grid world.\nIn the classical version of the game, co-location leads to a special observation, and the 'Tag' action can be performed.\nWe slightly modify this setting: the moment both agents occupy the same cell, the game ends.\nThe agent and its quarry have the same motion capability, which allows them to move in four directions: North, South, East, and West.\nThese form a formal space of actions within a Markovian environment.\nThe state space of the formal Markovian environment is described by the cross-product of the agent's and quarry's positions.\nFor Figure 2, it would be $S = \{s_0, ..., s_{23}\} \times \{s_0, ..., s_{23}\}$.\nThe effects of an action taken by the agent are deterministic, but the environment in general has a stochastic response due to the motion of the quarry.\nWith probability $q_0$ the quarry stays put, and with probability $1 - q_0$ it moves to an adjacent cell further away from the agent (in our experiments, $q_0$ was taken to be 0.2).\nSo for the instance shown in Figure 2 and $q_0 = 0.1$:\n$P(Q = s_9 | Q = s_9, A = s_{11}) = 0.1$\n$P(Q = s_2 | Q = s_9, A = s_{11}) = 0.3$\n$P(Q = s_8 | Q = s_9, A = s_{11}) = 0.3$\n$P(Q = s_{14} | Q = s_9, A = s_{11}) = 0.3$\nAlthough the evasive behavior of the quarry is known to the agent, the quarry's position is not.\nThe only sensory information available to the agent is its own location.\nWe use EMT and directly specify the target dynamics.\nFor the Game of Tag, we can easily formulate three major trends: catching the quarry, staying mobile, and stalking the quarry.\nThis results in the following three target dynamics:\n$T_{catch}(A_{t+1} = s_i | Q_t = s_j, A_t = s_a) \propto \begin{cases} 1 & s_i = s_j \\ 0 & \text{otherwise} \end{cases}$\n$T_{mobile}(A_{t+1} = s_i | Q_t = s_o, A_t = s_j) \propto \begin{cases} 0 & s_i = s_j \\ 1 & \text{otherwise} \end{cases}$\n$T_{stalk}(A_{t+1} = s_i | Q_t = s_o, A_t = s_j)$
$\propto \frac{1}{dist(s_i, s_o)}$\nNote that none of the above targets is directly achievable; for instance, if $Q_t = s_9$ and $A_t = s_{11}$, there is no action that can move the agent to $A_{t+1} = s_9$ as required by the $T_{catch}$ target dynamics.\nWe ran several experiments to evaluate EMT performance in the Tag Game.\nThree configurations of the domain shown in Figure 3 were used, each posing a different challenge to the agent due to partial observability.\nIn each setting, a set of 1000 runs was performed with a time limit of 100 steps.\nIn every run, the initial position of both the agent and its quarry was selected at random; this means that as far as the agent was concerned, the quarry's initial position was uniformly distributed over the entire domain cell space.\nWe also used two variations of the environment observability function.\nIn the first version, the observability function mapped all joint positions of hunter and quarry into the position of the hunter as an observation.\nIn the second, only those joint positions in which hunter and quarry occupied different locations were mapped into the hunter's location.\nThe second version thus exploited the fact that once hunter and quarry occupy the same cell, the game ends.\nThe results of these experiments are shown in Table 2.\nBalancing [15] the catch, move, and stalk target dynamics described in the previous section by the weight vector [0.8, 0.1, 0.1], EMT produced stable performance in all three domains.\nAlthough direct comparisons are difficult to make, the EMT performance displayed notable efficiency vis-\u00e0-vis the POMDP approach.\nIn spite of a simple and inefficient Matlab implementation of the EMT algorithm, the decision time for any given step averaged significantly below 1 second in all experiments.\nFor the irregular open arena domain, which proved to be the most difficult, 1000 experiment runs bounded by 100 steps each (a total of 42,411 steps) were completed in slightly under 6 hours.\nThat is, over $4 \times$
$10^4$ online steps took an order of magnitude less time than the offline computation of the POMDP policy in [11].\nThe significance of this differential is made even more prominent by the fact that, should the environment model parameters change, the online nature of EMT would allow it to maintain its performance time, while the POMDP policy would need to be recomputed, requiring yet again a large overhead of computation.\nWe also examined the cell frequency entropy of the agent's behavior, an empirical measure computed from trial data.\nFigure 3: These configurations of the Tag Game space were used: a) multiple dead-end, b) irregular open arena, c) circular corridor\nTable 2: Performance of the EMT-based solution in three Tag Game domains and two observability models: I) omniposition quarry, II) quarry is not at hunter's position\nModel Domain Capture% E(Steps) Time\/Step\nI Dead-ends 100 14.8 72 (msec)\nI Arena 80.2 42.4 500 (msec)\nI Circle 91.4 34.6 187 (msec)\nII Dead-ends 100 13.2 91 (msec)\nII Arena 96.8 28.67 396 (msec)\nII Circle 94.4 31.63 204 (msec)\nAs Figure 4 and Figure 5 show, empirical entropy grows with the length of interaction.\nFor runs where the quarry was not captured immediately, the entropy reaches between 0.85 and 0.95 for different runs and scenarios.\nAs the agent actively seeks the quarry, the entropy never reaches its maximum.\nOne characteristic of the entropy graph for the open arena scenario particularly caught our attention in the case of the omniposition quarry observation model.\nNear the maximum limit of trial length (100 steps), entropy suddenly dropped.\nFurther analysis of the data showed that under certain circumstances, a fluctuating behavior occurs in which the agent faces equally viable versions of quarry-following behavior.\nSince the EMT algorithm has greedy action selection, and the state space does not encode any form of
commitment (not even speed or acceleration), the agent is locked within a small portion of cells.\nIt is essentially attempting to simultaneously follow several courses of action, all of which are consistent with the target dynamics.\nThis behavior did not occur in our second observation model, since it significantly reduced the set of eligible courses of action, essentially contributing to tie-breaking among them.\n6.\nDISCUSSION\nThe design of the EMT solution for the Tag Game exposes the core difference in approach to planning and control between EMT and DBC, on the one hand, and the more familiar POMDP approach, on the other.\nPOMDP defines a reward structure to optimize, and influences system dynamics indirectly through that optimization.\nEMT discards any reward scheme, and instead measures and influences system dynamics directly.\n(Entropy was calculated using a log base equal to the number of possible locations within the domain; this properly scales the entropy into the range [0, 1] for all domains.)\nThus for the Tag Game, we did not search for a reward function that would encode and express our preference over the agent's behavior, but rather directly set three (heuristic) behavior preferences as the basis for target dynamics to be maintained.\nExperimental data shows that these targets need not be directly achievable via the agent's actions.\nHowever, the ratio between EMT performance and achievability of target dynamics remains to be explored.\nThe Tag Game experiment data also revealed the different emphasis DBC and POMDPs place on the formulation of an environment state space.\nPOMDPs rely entirely on the mechanism of reward accumulation maximization, i.e., formation of the action selection procedure to achieve the necessary state sequencing.\nDBC, on the other hand, has two sources of sequencing specification: through the properties of an action selection procedure, and through direct specification within the target dynamics.\nThe importance of the
second source was underlined by the Tag Game experiment data, in which the greedy EMT algorithm, applied to a POMDP-type state space specification, failed, since a target description over such a state space was incapable of encoding the necessary behavior tendencies, e.g., tie-breaking and commitment to directed motion.\nThe structural differences between DBC (and EMT in particular) and POMDPs prohibit direct performance comparison, and place them on complementary tracks, each within a suitable niche.\nFor instance, POMDPs could be seen as a much more natural formulation of economic sequential decision-making problems, while EMT is a better fit for domains with a continual demand for stochastic change, as happens in many robotic or embodied-agent problems.\nThe complementary properties of POMDPs and EMT can be further exploited.\nThere is recent interest in using POMDPs in hybrid solutions [17], in which the POMDPs can be used together with other control approaches to provide results not easily achievable with either approach by itself.\nDBC can be an effective partner in such a hybrid solution.\nFor instance, POMDPs have prohibitively large off-line time requirements for policy computation, but can be readily used in simpler settings to expose beneficial behavioral trends; this can serve as a form of target dynamics that are provided to EMT in a larger domain for on-line operation.\n7.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we have presented a novel perspective on the process of planning and control in stochastic environments, in the form of the Dynamics Based Control (DBC) framework.\nDBC formulates the task of planning as support of a specified target system dynamics, which describes the necessary properties of change within the environment.\nThe optimality of DBC plans of action is measured
with respect to the deviation of actual system dynamics from the target dynamics.\nFigure 4: Observation Model I: omniposition quarry.\nEntropy development with the length of the Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.\nFigure 5: Observation Model II: quarry not observed at hunter's position.\nEntropy development with the length of the Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.\nWe show that a recently developed technique, Extended Markov Tracking (EMT) [13], is an instantiation of DBC.\nIn fact, EMT can be seen as a specific case of DBC parameterization, which employs a greedy action selection procedure.\nSince EMT exhibits the key features of the general DBC framework, as well as polynomial time complexity, we used the multi-target version of EMT [15] to demonstrate that the class of area sweeping problems naturally lends itself to dynamics-based descriptions, as instantiated by our experiments in the Tag Game domain.\nAs enumerated in Section 4.2, EMT has a number of limitations, such as difficulty in dealing with negative dynamic preference.\nThis prevents direct application of EMT to such problems as Rendezvous-Evasion Games (e.g., [6]).\nHowever, DBC in general has no such limitations, and readily enables the formulation of evasion games.\nIn future work, we intend to proceed with the development of dynamics-based controllers for these problems.\n8.\nACKNOWLEDGMENT\nThe work of the first two authors was partially supported by Israel Science Foundation grant #898\/05, and the
third author was partially supported by a grant from Israel's Ministry of Science and Technology.\n9.\nREFERENCES\n[1] R. C. Arkin.\nBehavior-Based Robotics.\nMIT Press, 1998.\n[2] J. A. Bilmes.\nA gentle tutorial of the EM algorithm and its application to parameter estimation for Gaussian mixture and Hidden Markov Models.\nTechnical Report TR-97-021, Department of Electrical Engineering and Computer Science, University of California at Berkeley, 1998.\n[3] T. M. Cover and J. A. Thomas.\nElements of Information Theory.\nWiley, 1991.\n[4] M. E. desJardins, E. H. Durfee, C. L. Ortiz, and M. J. Wolverton.\nA survey of research in distributed, continual planning.\nAI Magazine, 4:13-22, 1999.\n[5] V. R. Konda and J. N. Tsitsiklis.\nActor-Critic algorithms.\nSIAM Journal on Control and Optimization, 42(4):1143-1166, 2003.\n[6] W. S. Lim.\nA rendezvous-evasion game on discrete locations with joint randomization.\nAdvances in Applied Probability, 29(4):1004-1017, December 1997.\n[7] M. L. Littman, T. L. Dean, and L. P. Kaelbling.\nOn the complexity of solving Markov decision problems.\nIn Proceedings of the 11th Annual Conference on Uncertainty in Artificial Intelligence (UAI-95), pages 394-402, 1995.\n[8] O. Madani, S. Hanks, and A. Condon.\nOn the undecidability of probabilistic planning and related stochastic optimization problems.\nArtificial Intelligence Journal, 147(1-2):5-34, July 2003.\n[9] R. M. Neal and G. E. Hinton.\nA view of the EM algorithm that justifies incremental, sparse, and other variants.\nIn M. I. Jordan, editor, Learning in Graphical Models, pages 355-368.\nKluwer Academic Publishers, 1998.\n[10] P. Paruchuri, M. Tambe, F. Ordonez, and S. Kraus.\nSecurity in multiagent systems by policy randomization.\nIn Proceedings of AAMAS 2006, 2006.\n[11] J. Pineau, G. Gordon, and S.
Thrun.\nPoint-based value iteration: An anytime algorithm for POMDPs.\nIn International Joint Conference on Artificial Intelligence (IJCAI), pages 1025-1032, August 2003.\n[12] M. L. Puterman.\nMarkov Decision Processes.\nWiley Series in Probability and Mathematical Statistics: Applied Probability and Statistics Section.\nWiley-Interscience, New York, 1994.\n[13] Z. Rabinovich and J. S. Rosenschein.\nExtended Markov Tracking with an application to control.\nIn The Workshop on Agent Tracking: Modeling Other Agents from Observations, at the Third International Joint Conference on Autonomous Agents and Multiagent Systems, pages 95-100, New York, July 2004.\n[14] Z. Rabinovich and J. S. Rosenschein.\nMultiagent coordination by Extended Markov Tracking.\nIn The Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 431-438, Utrecht, The Netherlands, July 2005.\n[15] Z. Rabinovich and J. S. Rosenschein.\nOn the response of EMT-based control to interacting targets and models.\nIn The Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 465-470, Hakodate, Japan, May 2006.\n[16] R. F. Stengel.\nOptimal Control and Estimation.\nDover Publications, 1994.\n[17] M. Tambe, E. Bowring, H. Jung, G. Kaminka, R. Maheswaran, J. Marecki, J. Modi, R. Nair, J. Pearce, P. Paruchuri, D. Pynadath, P. Scerri, N. Schurr, and P.
Varakantham.\nConflicts in teamwork: Hybrids to the","lvl-3":"Dynamics Based Control with an Application to Area-Sweeping Problems\nABSTRACT\nIn this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments.\nUnlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics.\nWe show that a recently developed planning and control approach, Extended Markov Tracking (EMT), is an instantiation of DBC.\nEMT employs greedy action selection to provide an efficient control algorithm in Markovian environments.\nWe exploit this efficiency in a set of experiments that applied multi-target EMT to a class of area-sweeping problems (searching for moving targets).\nWe show that such problems can be naturally defined and efficiently solved using the DBC framework, and its EMT instantiation.\n1.\nINTRODUCTION\nPlanning and control constitutes a central research area in multiagent systems and artificial intelligence.\nIn recent years, Partially Observable Markov Decision Processes (POMDPs) [12] have become a popular formal basis for planning in stochastic environments.\nIn this framework, the planning and control problem is often\naddressed by imposing a reward function, and computing a policy (of choosing actions) that is optimal, in the sense that it will result in the highest expected utility.\nWhile theoretically attractive, the complexity of optimally solving a POMDP is prohibitive [8, 7].\nWe take an alternative view of planning in stochastic environments.\nWe do not use a (state-based) reward function, but instead optimize over a different criterion, a transition-based specification of the desired system dynamics.\nThe idea here is to view plan execution as a process that compels a
(stochastic) system to change, and a plan as a dynamic process that shapes that change according to desired criteria.\nWe call this general planning framework Dynamics Based Control (DBC).\nIn DBC, the goal of a planning (or control) process becomes to ensure that the system will change in accordance with specific (potentially stochastic) target dynamics.\nAs actual system behavior may deviate from that which is specified by target dynamics (due to the stochastic nature of the system), planning in such environments needs to be continual [4], in a manner similar to classical closed-loop controllers [16].\nHere, optimality is measured in terms of probability of deviation magnitudes.\nIn this paper, we present the structure of Dynamics Based Control.\nWe show that the recently developed Extended Markov Tracking (EMT) approach [13, 14, 15] is subsumed by DBC, with EMT employing greedy action selection, which is a specific parameterization among the options possible within DBC.\nEMT is an efficient instantiation of DBC.\nTo evaluate DBC, we carried out a set of experiments applying multi-target EMT to the Tag Game [11]; this is a variant on the area sweeping problem, where an agent is trying to \"tag\" a moving target (quarry) whose position is not known with certainty.\nExperimental data demonstrates that even with a simple model of the environment and a simple design of target dynamics, high success rates can be produced both in catching the quarry, and in surprising the quarry (as expressed by the observed entropy of the controlled agent's position).\nThe paper is organized as follows.\nIn Section 2 we motivate DBC using area-sweeping problems, and discuss related work.\nSection 3 introduces the Dynamics Based Control (DBC) structure, and its specialization to Markovian environments.\nThis is followed by a review of the Extended Markov Tracking (EMT) approach as a DBC-structured control regimen in Section 4.\nThat section also discusses the limitations of EMT-based 
control relative to the general DBC framework.\nExperimental settings and results are then presented in Section 5.\nSection 6 provides a short discussion of the overall approach, and Section 7 gives some concluding remarks and directions for future work.\n2.\nMOTIVATION AND RELATED WORK\n3.\nDYNAMICS BASED CONTROL\n3.1 DBC for Markovian Environments\n3.2 DBC View of the State Space\n4.\nEMT-BASED CONTROL AS A DBC\n4.1 Multi-Target EMT\n4.2 EMT-based Control Limitations\n5.\nEMT PLAYING TAG\n6.\nDISCUSSION\n7.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we have presented a novel perspective on the process of planning and control in stochastic environments, in the form of the Dynamics Based Control (DBC) framework.\nDBC formulates the task of planning as support of a specified target system dynamics, which describes the necessary properties of change within the environment.\nThe optimality of DBC plans of action is measured
with respect to the deviation of actual system dynamics from the target dynamics.\nFigure 4: Observation Model I: omniposition quarry.\nEntropy development with the length of the Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.\nFigure 5: Observation Model II: quarry not observed at hunter's position.\nEntropy development with the length of the Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.\nWe show that a recently developed technique, Extended Markov Tracking (EMT) [13], is an instantiation of DBC.\nIn fact, EMT can be seen as a specific case of DBC parameterization, which employs a greedy action selection procedure.\nSince EMT exhibits the key features of the general DBC framework, as well as polynomial time complexity, we used the multi-target version of EMT [15] to demonstrate that the class of area sweeping problems naturally lends itself to dynamics-based descriptions, as instantiated by our experiments in the Tag Game domain.\nAs enumerated in Section 4.2, EMT has a number of limitations, such as difficulty in dealing with negative dynamic preference.\nThis prevents direct application of EMT to such problems as Rendezvous-Evasion Games (e.g., [6]).\nHowever, DBC in general has no such limitations, and readily enables the formulation of evasion games.\nIn future work, we intend to proceed with the development of dynamics-based controllers for these problems.","lvl-4":"Dynamics Based Control with an Application to Area-Sweeping Problems\nABSTRACT\nIn this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments.\nUnlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system
dynamics.\nWe show that a recently developed planning and control approach, Extended Markov Tracking (EMT), is an instantiation of DBC.\nEMT employs greedy action selection to provide an efficient control algorithm in Markovian environments.\nWe exploit this efficiency in a set of experiments that applied multi-target EMT to a class of area-sweeping problems (searching for moving targets).\nWe show that such problems can be naturally defined and efficiently solved using the DBC framework, and its EMT instantiation.\n1.\nINTRODUCTION\nPlanning and control constitutes a central research area in multiagent systems and artificial intelligence.\nIn recent years, Partially Observable Markov Decision Processes (POMDPs) [12] have become a popular formal basis for planning in stochastic environments.\nIn this framework, the planning and control problem is often addressed by imposing a reward function, and computing a policy (of choosing actions) that is optimal, in the sense that it will result in the highest expected utility.\nWe take an alternative view of planning in stochastic environments.\nWe do not use a (state-based) reward function, but instead optimize over a different criterion, a transition-based specification of the desired system dynamics.\nWe call this general planning framework Dynamics Based Control (DBC).\nIn DBC, the goal of a planning (or control) process becomes to ensure that the system will change in accordance with specific (potentially stochastic) target dynamics.\nAs actual system behavior may deviate from that which is specified by target dynamics (due to the stochastic nature of the system), planning in such environments needs to be continual [4], in a manner similar to classical closed-loop controllers [16].\nHere, optimality is measured in terms of probability of deviation magnitudes.\nIn this paper, we present the structure of Dynamics Based Control.\nWe show that the recently developed Extended Markov Tracking (EMT) approach [13, 14, 15] is subsumed by DBC, with EMT employing greedy action selection, which is a specific parameterization among the options possible within DBC.\nEMT is an efficient instantiation
of DBC.\nThe paper is organized as follows.\nIn Section 2 we motivate DBC using area-sweeping problems, and discuss related work.\nSection 3 introduces the Dynamics Based Control (DBC) structure, and its specialization to Markovian environments.\nThis is followed by a review of the Extended Markov Tracking (EMT) approach as a DBC-structured control regimen in Section 4.\nThat section also discusses the limitations of EMT-based control relative to the general DBC framework.\nExperimental settings and results are then presented in Section 5.\nSection 6 provides a short discussion of the overall approach, and Section 7 gives some concluding remarks and directions for future work.\n7.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we have presented a novel perspective on the process of planning and control in stochastic environments, in the form of the Dynamics Based Control (DBC) framework.\nDBC formulates the task of planning as support of a specified target system dynamics, which describes the necessary properties of change within the environment.\nThe optimality of DBC plans of action is measured
with respect to the deviation of actual system dynamics from the target dynamics.\nFigure 4: Observation Model I: omniposition quarry.\nEntropy development with the length of the Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.\nFigure 5: Observation Model II: quarry not observed at hunter's position.\nEntropy development with the length of the Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.\nWe show that a recently developed technique, Extended Markov Tracking (EMT) [13], is an instantiation of DBC.\nIn fact, EMT can be seen as a specific case of DBC parameterization, which employs a greedy action selection procedure.\nAs enumerated in Section 4.2, EMT has a number of limitations, such as difficulty in dealing with negative dynamic preference.\nThis prevents direct application of EMT to such problems as Rendezvous-Evasion Games (e.g., [6]).\nHowever, DBC in general has no such limitations, and readily enables the formulation of evasion games.\nIn future work, we intend to proceed with the development of dynamics-based controllers for these problems.","lvl-2":"Dynamics Based Control with an Application to Area-Sweeping Problems\nABSTRACT\nIn this paper we introduce Dynamics Based Control (DBC), an approach to planning and control of an agent in stochastic environments.\nUnlike existing approaches, which seek to optimize expected rewards (e.g., in Partially Observable Markov Decision Problems (POMDPs)), DBC optimizes system behavior towards specified system dynamics.\nWe show that a recently developed planning and control approach, Extended Markov Tracking (EMT), is an instantiation of DBC.\nEMT employs greedy action selection to provide an efficient control algorithm in Markovian environments.\nWe exploit this efficiency in a set of experiments that applied multi-target EMT to a class of area-sweeping problems (searching for moving
targets).\nWe show that such problems can be naturally defined and efficiently solved using the DBC framework, and its EMT instantiation.\n1.\nINTRODUCTION\nPlanning and control constitutes a central research area in multiagent systems and artificial intelligence.\nIn recent years, Partially Observable Markov Decision Processes (POMDPs) [12] have become a popular formal basis for planning in stochastic environments.\nIn this framework, the planning and control problem is often addressed by imposing a reward function, and computing a policy (of choosing actions) that is optimal, in the sense that it will result in the highest expected utility.\nWhile theoretically attractive, the complexity of optimally solving a POMDP is prohibitive [8, 7].\nWe take an alternative view of planning in stochastic environments.\nWe do not use a (state-based) reward function, but instead optimize over a different criterion, a transition-based specification of the desired system dynamics.\nThe idea here is to view plan execution as a process that compels a (stochastic) system to change, and a plan as a dynamic process that shapes that change according to desired criteria.\nWe call this general planning framework Dynamics Based Control (DBC).\nIn DBC, the goal of a planning (or control) process becomes to ensure that the system will change in accordance with specific (potentially stochastic) target dynamics.\nAs actual system behavior may deviate from that which is specified by target dynamics (due to the stochastic nature of the system), planning in such environments needs to be continual [4], in a manner similar to classical closed-loop controllers [16].\nHere, optimality is measured in terms of probability of deviation magnitudes.\nIn this paper, we present the structure of Dynamics Based Control.\nWe show that the recently developed Extended Markov Tracking (EMT) approach [13, 14, 15] is subsumed by DBC, with EMT employing greedy action selection, which is a specific parameterization
among the options possible within DBC.\nEMT is an efficient instantiation of DBC.\nTo evaluate DBC, we carried out a set of experiments applying multi-target EMT to the Tag Game [11]; this is a variant on the area sweeping problem, where an agent is trying to \"tag\" a moving target (quarry) whose position is not known with certainty.\nExperimental data demonstrates that even with a simple model of the environment and a simple design of target dynamics, high success rates can be produced both in catching the quarry, and in surprising the quarry (as expressed by the observed entropy of the controlled agent's position).\nThe paper is organized as follows.\nIn Section 2 we motivate DBC using area-sweeping problems, and discuss related work.\nSection 3 introduces the Dynamics Based Control (DBC) structure, and its specialization to Markovian environments.\nThis is followed by a review of the Extended Markov Tracking (EMT) approach as a DBC-structured control regimen in Section 4.\nThat section also discusses the limitations of EMT-based control relative to the general DBC framework.\nExperimental settings and results are then presented in Section 5.\nSection 6 provides a short discussion of the overall approach, and Section 7 gives some concluding remarks and directions for future work.\n2.\nMOTIVATION AND RELATED WORK\nMany real-life scenarios naturally have a stochastic target dynamics specification, especially those domains where there exists no ultimate goal, but rather system behavior (with specific properties) that has to be continually supported.\nFor example, security guards perform persistent sweeps of an area to detect any sign of intrusion.\nCunning thieves will attempt to track these sweeps, and time their operation to key points of the guards' motion.\nIt is thus advisable to make the guards' motion dynamics appear irregular and random.\nRecent work by Paruchuri et al. 
[10] has addressed such randomization in the context of single-agent and distributed POMDPs.\nThe goal in that work was to generate policies that provide a measure of action-selection randomization, while maintaining rewards within some acceptable levels.\nOur focus differs from this work in that DBC does not optimize expected rewards--indeed, we do not consider rewards at all--but instead maintains desired dynamics (including, but not limited to, randomization).\nThe Game of Tag is another example of the applicability of the approach.\nIt was introduced in the work by Pineau et al. [11].\nThere are two agents that can move about an area, which is divided into a grid.\nThe grid may have blocked cells (holes) into which no agent can move.\nOne agent (the hunter) seeks to move into a cell occupied by the other (the quarry), such that they are co-located (this is a \"successful tag\").\nThe quarry seeks to avoid the hunter agent, and is always aware of the hunter's position, but does not know how the hunter will behave, which opens up the possibility for the hunter to surprise the prey.\nThe hunter knows the quarry's probabilistic law of motion, but does not know its current location.\nTag is an instance of a family of area-sweeping (pursuit-evasion) problems.\nIn [11], the hunter modeled the problem using a POMDP.\nA reward function was defined to reflect the desire to tag the quarry, and an action policy was computed to optimize the reward collected over time.\nDue to the intractable complexity of determining the optimal policy, the action policy computed in that paper was essentially an approximation.\nIn this paper, instead of formulating a reward function, we use EMT to solve the problem by directly specifying the target dynamics.\nIn fact, any search problem with randomized motion, the so-called class of area-sweeping problems, can be described through the specification of such target system dynamics.\nDynamics Based Control provides a natural approach to solving
these problems.\n3.\nDYNAMICS BASED CONTROL\nThe specification of Dynamics Based Control (DBC) can be broken into three interacting levels: Environment Design Level, User Level, and Agent Level.\n\u2022 Environment Design Level is concerned with the formal specification and modeling of the environment.\nFor example, this level would specify the laws of physics within the system, and set its parameters, such as the gravitational constant.\n\u2022 User Level in turn relies on the environment model produced by Environment Design to specify the target system dynamics it wishes to observe.\nThe User Level also specifies the estimation or learning procedure for system dynamics, and the measure of deviation.\nIn the museum guard scenario above, these would correspond to a stochastic sweep schedule, and a measure of relative surprise between the specified and actual sweeping.\n\u2022 Agent Level in turn combines the environment model from the Environment Design Level, the dynamics estimation procedure, the deviation measure, and the target dynamics specification from the User Level, to produce a sequence of actions that create system dynamics as close as possible to the targeted specification.\nAs we are interested in the continual development of a stochastic system, as happens in classical control theory [16] and continual planning [4], as well as in our example of museum sweeps, the question becomes how the Agent Level is to treat the deviation measurements over time.\nTo this end, we use a probability threshold--that is, we would like the Agent Level to maximize the probability that the deviation measure will remain below a certain threshold.\nSpecific action selection then depends on the system formalization.\nOne possibility would be to create a mixture of available system trends, much like that which happens in Behavior-Based Robotic architectures [1].\nThe other alternative would be to rely on the estimation procedure provided by the User Level--to utilize the
Environment Design Level model of the environment to choose actions, so as to manipulate the dynamics estimator into believing that a certain dynamics has been achieved.\nNotice that this manipulation is not direct, but via the environment.\nThus, for strong enough estimator algorithms, successful manipulation would mean a successful simulation of the specified target dynamics (i.e., beyond discerning via the available sensory input).\nDBC levels can also have a back-flow of information (see Figure 1).\nFor instance, the Agent Level could provide data about target dynamics feasibility, allowing the User Level to modify the requirement, perhaps focusing on attainable features of system behavior.\nData would also be available about the system response to different actions performed; combined with a dynamics estimator defined by the User Level, this can provide an important tool for environment model calibration at the Environment Design Level.\nFigure 1: Data flow of the DBC framework\nExtending upon the idea of Actor-Critic algorithms [5], the DBC data flow can provide a good basis for the design of a learning algorithm.\nFor example, the User Level can operate as an exploratory device for a learning algorithm, inferring an ideal dynamics target from the environment model at hand that would expose and verify the most critical features of system behavior.\nIn this case, feasibility and system response data from the Agent Level would provide key information for an environment model update.\nIn fact, the combination of feasibility and response data can provide a basis for the application of strong learning algorithms such as EM [2, 9].\n3.1 DBC for Markovian Environments\nFor a Partially Observable Markovian Environment, DBC can be specified in a more rigorous manner.\nNotice how DBC discards rewards, replacing them with another optimality criterion (structural differences are summarized in Table 1):\n\u2022 Environment Design Level is to specify a tuple <S, s0, A, T, O, \u03a9>, where:\n-- S is the set of all possible environment states;\n-- s0 is the initial state of the environment (which can also be viewed as a probability distribution over S);\n-- A is the set of all possible actions applicable within the environment;\n-- T: S \u00d7 A \u2192 \u03a0(S) is the stochastic transition function, where T(s'|a, s) denotes the probability that the environment will move from state s to state s' under action a;\n-- O is the set of all possible observations.\nThis is what the sensor input would look like for an outside observer;\n-- \u03a9: S \u00d7 A \u00d7 S \u2192 \u03a0(O) is the observation function, where \u03a9(o|a, s, s') denotes the probability to observe o given that the environment has moved from state s to state s' under action a.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 791\n\u2022 User Level, in the case of a Markovian environment, operates on the set of system dynamics described by a family of conditional probabilities F = {\u03c4: S \u00d7 A \u2192 \u03a0(S)}.\nThus a specification of target dynamics can be expressed by q \u2208 F, and the learning or tracking algorithm can be represented as a function L: O \u00d7 (A \u00d7 O)* \u2192 F; that is, it maps sequences of observations and actions performed so far into an estimate \u03c4 \u2208 F of system dynamics.\nThere are many possible variations available at the User Level to define divergence between system dynamics; among them are the trace (L1) distance and the Kullback-Leibler divergence.\nNotice that the Kullback-Leibler divergence is not actually a metric over the space of possible distributions, but it nevertheless has a meaningful and important interpretation: it is an important tool of information theory [3] that allows one to measure the \"price\" of encoding an information source governed by q, while assuming that it is governed by p.\nThe User Level also defines the threshold of dynamics deviation probability \u03b8.\n\u2022 Agent Level is then faced with the problem of selecting a control signal function a* to satisfy the minimization problem\na* = arg min_a Pr(d(\u03c4a, q) > \u03b8),\nwhere d(\u03c4a, q) is a random variable describing the deviation of the dynamics estimate \u03c4a, created by L under control signal a, from the ideal dynamics q.
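The two deviation measures most commonly mentioned in this setting can be sketched concretely. The following is a minimal illustrative example (not from the paper; the distributions and function names are hypothetical) of the trace (L1) distance and the Kullback-Leibler divergence over discrete distributions:

```python
import math

def trace_distance(p, q):
    """Trace (L1) distance: half the sum of absolute differences."""
    return 0.5 * sum(abs(p[x] - q[x]) for x in p)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(p || q); asymmetric, hence not a metric."""
    return sum(p[x] * math.log(p[x] / q[x]) for x in p if p[x] > 0)

# Two example dynamics estimates over a three-state system
p = {"s0": 0.5, "s1": 0.3, "s2": 0.2}
q = {"s0": 0.4, "s1": 0.4, "s2": 0.2}

print(round(trace_distance(p, q), 4))   # 0.1
print(round(kl_divergence(p, q), 4))
print(round(kl_divergence(q, p), 4))    # differs from the above: KL is asymmetric
```

The asymmetry of the KL divergence is exactly why it is an interpretation ("price" of mis-assumed encoding) rather than a distance in the metric sense.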
Implicit in this minimization problem is that L is manipulated via the environment, based on the environment model produced by the Environment Design Level.\n3.2 DBC View of the State Space\nIt is important to note the complementary view that DBC and POMDPs take on the state space of the environment.\nPOMDPs regard state as a stationary snapshot of the environment; whatever attributes of state sequencing one seeks are reached through properties of the control process, in this case reward accumulation.\nThis can be viewed as if the sequencing of states and the attributes of that sequencing are only introduced by and for the controlling mechanism--the POMDP policy.\nDBC concentrates on the underlying principle of state sequencing, the system dynamics.\nDBC's target dynamics specification can use the environment's state space as a means to describe, discern, and preserve changes that occur within the system.\nAs a result, DBC has a greater ability to express state sequencing properties, which are grounded in the environment model and its state space definition.\nFor example, consider the task of moving through rough terrain towards a goal and reaching it as fast as possible.\nPOMDPs would encode terrain as state space points, while speed would be ensured by a negative reward for every step taken without reaching the goal--higher reward can be accumulated only by faster motion.\nAlternatively, the state space could directly include the notion of speed.\nFor POMDPs, this would mean that the same concept is encoded twice, in some sense: directly in the state space, and indirectly within reward accumulation.\nEven if the reward function were to encode more, and finer, details of the properties of motion, the POMDP solution would have to search in a much larger space of policies, while still being guided by the implicit concept of the reward accumulation procedure.\nOn the other hand, the tactical target expression of variations and correlations between position
and speed of motion is now grounded in the state space representation.\nIn this situation, any further constraints, e.g., smoothness of motion, speed limits in different locations, or speed reductions during sharp turns, are explicitly and uniformly expressed by the tactical target, and can result in faster and more effective action selection by a DBC algorithm.\n4.\nEMT-BASED CONTROL AS A DBC\nRecently, a control algorithm called EMT-based control was introduced [13]; it instantiates the DBC framework.\nAlthough it provides an approximate greedy solution in the DBC sense, initial experiments using EMT-based control have been encouraging [14, 15].\nEMT-based control is based on the Markovian environment definition, as in the case with POMDPs, but its User and Agent Levels are of the Markovian DBC type of optimality.\n\u2022 User Level of EMT-based control defines a limited-case target system dynamics that is independent of action, q: S \u2192 \u03a0(S).\nIt then utilizes the Kullback-Leibler divergence measure to compose a momentary system dynamics estimator--the Extended Markov Tracking (EMT) algorithm.\nThe algorithm keeps a system dynamics estimate \u03c4t EMT that is capable of explaining the recent change in an auxiliary Bayesian system state estimator from pt-1 to pt, and updates it conservatively using the Kullback-Leibler divergence.\nSince \u03c4t EMT and pt-1, pt are respectively conditional and marginal probabilities over the system's state space, \"explanation\" simply means that pt(s') = \u03a3_s \u03c4t EMT(s'|s) \u00b7 pt-1(s), and the dynamics estimate update is performed by solving the conservative minimization problem \u03c4t EMT = arg min_\u03c4 D_KL(\u03c4 \u00d7 pt-1 || \u03c4t-1 EMT \u00d7 pt-1), subject to the explanation constraint above.\n(The trace distance, or L1 distance, between two distributions p and q is defined by L1(p, q) = 1/2 \u03a3_x |p(x) - q(x)|.)\nTable 1: Structure of POMDP vs.
Dynamics-Based Control in Markovian Environment\n\u2022 Agent Level in EMT-based control is suboptimal with respect to DBC (though it remains within the DBC framework), performing greedy action selection based on a prediction of EMT's reaction.\nThe prediction is based on the environment model provided by the Environment Design Level, so that if we denote by Ta the environment's transition function limited to action a, and pt-1 is the auxiliary Bayesian system state estimator, then the EMT-based control choice is described by a* = arg min_{a \u2208 A} D_KL(\u03c4a || q), where \u03c4a is the EMT estimate that would result from the predicted transition pt-1 \u2192 Ta * pt-1.\nNote that this follows the Markovian DBC framework precisely: the rewarding optimality of POMDPs is discarded, and in its place a dynamics estimator (EMT in this case) is manipulated via action effects on the environment to produce an estimate close to the specified target system dynamics.\nYet as we mentioned, naive EMT-based control is suboptimal in the DBC sense, and has several additional limitations that do not exist in the general DBC framework (discussed in Section 4.2).\n4.1 Multi-Target EMT\nAt times, there may exist several behavioral preferences.\nFor example, in the case of museum guards, some art items are more heavily guarded, requiring that the guards stay more often in their close vicinity.\nOn the other hand, no corner of the museum is to be left unchecked, which demands constant motion.\nSuccessful museum security would demand that the guards adhere to, and balance, both of these behaviors.\nFor EMT-based control, this would mean facing several tactical targets {qk}, k = 1..K, and the question becomes how to merge and balance them.\nA balancing mechanism can be applied to resolve this issue.\nNote that EMT-based control, while selecting an action, creates a preference vector over the set of actions based on their predicted performance with respect to a given target.\nIf these preference vectors are normalized, they can be combined into a single unified preference.\nThis requires replacement of standard EMT-based action selection by the algorithm below [15]:\n\u2022 Given: a set of target dynamics {qk}, k = 1..K, and a vector of weights w(k);\n\u2022 Select an action as follows:\n-- For each action a \u2208 A, predict the future state distribution \u00afpat+1 = Ta * pt;\n-- For each action and each target qk, compute the divergence of the resulting EMT estimate from qk;\n-- Combine these preferences using the weights w(k), and select the action with the minimal weighted divergence.\nThe weights vector w~ = (w1, ..., wK) allows additional \"tuning of importance\" among target dynamics without the need to redesign the targets themselves.\nThis balancing method is also seamlessly integrated into the EMT-based control flow of operation.\n4.2 EMT-based Control Limitations\nEMT-based control is a sub-optimal (in the DBC sense) representative of the DBC structure.\nIt limits the User by forcing EMT to be its dynamics tracking algorithm, and replaces Agent optimization by greedy action selection.\nThis kind of combination, however, is common for on-line algorithms.\nAlthough further development of EMT-based controllers is necessary, evidence so far suggests that even the simplest form of the algorithm possesses a great deal of power, and displays trends that are optimal in the DBC sense of the word.\nThere are two further, EMT-specific, limitations to EMT-based control that are evident at this point.\nBoth already have partial solutions and are subjects of ongoing research.\nThe first limitation is the problem of negative preference.\nIn the POMDP framework, for example, this is captured simply through the appearance of values with different signs within the reward structure.\nFor EMT-based control, however, negative preference means that one would like to avoid a certain distribution over system development sequences; EMT-based control concentrates on getting as close as possible to a distribution.\nAvoidance is thus unnatural in native EMT-based control.\nThe second limitation comes from the fact that standard environment modeling can create pure sensory actions--actions that do not change the state of the world, and differ only in the way observations are received and in the quality of the observations received.\nSince the world state does not change, EMT-based control would not be able to differentiate between different sensory actions.\nNotice that both of these limitations of EMT-based control are absent from the general DBC framework, since it may have a tracking algorithm capable of considering pure sensory actions and, unlike the Kullback-Leibler divergence, a distribution deviation measure that is capable of dealing with negative preference.\n5.\nEMT PLAYING TAG\nThe Game of Tag was first introduced in [11].\nIt is a single-agent problem of capturing a quarry, and belongs to the class of area-sweeping problems.\nAn example domain is shown in Figure 2.\nFigure 2: Tag domain; an agent (A) attempts to seek and capture a quarry (Q)\nThe Game of Tag extremely limits the agent's perception, so that the agent is able to detect the quarry only if they are co-located in the same cell of the grid world.\nIn the classical version of the game, co-location leads to a special observation, and the 'Tag' action can be performed.\nWe slightly modify this setting: the moment that both agents occupy the same cell, the game ends.\nBoth the agent and its quarry have the same motion capability, which allows them to move in four directions: North, South, East, and West.\nThese form a
formal space of actions within a Markovian environment.\nThe state space of the formal Markovian environment is described by the cross-product of the agent's and quarry's positions.\nFor Figure 2, it would be S = {s0, ..., s23} \u00d7 {s0, ..., s23}.\nThe effects of an action taken by the agent are deterministic, but the environment in general has a stochastic response due to the motion of the quarry.\nWith probability q0 the quarry stays put, and with probability 1 \u2212 q0 it moves to an adjacent cell further away from the agent (in our experiments this was taken to be q0 = 0.2).\nSo for the instance shown in Figure 2 and q0 = 0.1, the quarry stays at its current cell with probability 0.1, and the remaining probability mass is spread uniformly over the adjacent cells that are further away from the agent.\nAlthough the evasive behavior of the quarry is known to the agent, the quarry's position is not.\nThe only sensory information available to the agent is its own location.\nWe use EMT and directly specify the target dynamics.\nFor the Game of Tag, we can easily formulate three major trends: catching the quarry, staying mobile, and stalking the quarry.\nThis results in three target dynamics: Tcatch, Tmove, and Tstalk.\nNote that none of the above targets is directly achievable; for instance, if Qt = s9 and At = s11, there is no action that can move the agent to At+1 = s9 as required by the Tcatch target dynamics.\nWe ran several experiments to evaluate EMT performance in the Tag Game.\nThree configurations of the domain, shown in Figure 3, were used, each posing a different challenge to the agent due to partial observability.\nIn each setting, a set of 1000 runs was performed with a time limit of 100 steps.\nIn every run, the initial position of both the agent and its quarry was selected at random; this means that as far as the agent was concerned, the quarry's initial position was uniformly distributed over the entire domain cell space.\nWe also used two variations of the environment observability function.\nIn the first version, the observability function mapped all joint positions of hunter and quarry into the position of the hunter as an observation.\nIn the
second, only those joint positions in which hunter and quarry occupied different locations were mapped into the hunter's location.\nThe second version in fact utilized and expressed the fact that once hunter and quarry occupy the same cell, the game ends.\nThe results of these experiments are shown in Table 2.\nBalancing [15] the catch, move, and stalk target dynamics described in the previous section with the weight vector [0.8, 0.1, 0.1], EMT produced stable performance in all three domains.\nAlthough direct comparisons are difficult to make, the EMT performance displayed notable efficiency vis-\u00e0-vis the POMDP approach.\nIn spite of a simple and inefficient Matlab implementation of the EMT algorithm, the decision time for any given step averaged significantly below 1 second in all experiments.\nFor the irregular open arena domain, which proved to be the most difficult, 1000 experiment runs bounded by 100 steps each, a total of 42411 steps, were completed in slightly under 6 hours.\nThat is, over 4 \u00d7 10^4 online steps took an order of magnitude less time than the offline computation of the POMDP policy in [11].\nThe significance of this differential is made even more prominent by the fact that, should the environment model parameters change, the online nature of EMT would allow it to maintain its performance time, while the POMDP policy would need to be recomputed, requiring yet again a large computational overhead.\nWe also examined the entropy of the agent's cell-visit frequencies, an empirical measure computed from the trial data.\nAs Figure 4 and Figure 5 show, empirical entropy grows with the length of interaction.\nFigure 3: These configurations of the Tag Game space were used: a) multiple dead-end, b) irregular open arena, c) circular corridor\nTable 2: Performance of the EMT-based solution in three Tag Game domains and two observability models: I) omniposition quarry, II) quarry is not at hunter's position\nFor runs where the quarry was not captured immediately, the entropy reaches between 0.85 and 0.95 for different runs and scenarios (entropy was calculated using a log base equal to the number of possible locations within the domain; this properly scales the entropy expression into the range [0, 1] for all domains).\nAs the agent actively seeks the quarry, the entropy never reaches its maximum.\nOne characteristic of the entropy graph for the open arena scenario particularly caught our attention in the case of the omniposition quarry observation model.\nNear the maximum limit of trial length (100 steps), entropy suddenly dropped.\nFurther analysis of the data showed that under certain circumstances, a fluctuating behavior occurs in which the agent faces equally viable versions of quarry-following behavior.\nSince the EMT algorithm has greedy action selection, and the state space does not encode any form of commitment (not even speed or acceleration), the agent is locked within a small portion of cells.\nIt is essentially attempting to simultaneously follow several courses of action, all of which are consistent with the target dynamics.\nThis behavior did not occur in our second observation model, since it significantly reduced the set of eligible courses of action--essentially contributing to tie-breaking among them.\n6.\nDISCUSSION\nThe design of the EMT solution for the Tag Game exposes the core difference in approach to planning and control between EMT or DBC, on the one hand, and the more familiar POMDP approach, on the other.\nPOMDP defines a reward structure to optimize, and influences system dynamics indirectly through that optimization.\nEMT discards any reward scheme, and instead measures and influences system dynamics directly.\nThus for the Tag Game, we did not search for a reward function that would encode and express our preference over the agent's behavior, but rather directly set three (heuristic) behavior preferences as the basis for target dynamics to be maintained.\nExperimental data shows that these targets need not be
directly achievable via the agent's actions.\nHowever, the ratio between EMT performance and achievability of the target dynamics remains to be explored.\nThe Tag Game experiment data also revealed the different emphasis DBC and POMDPs place on the formulation of an environment state space.\nPOMDPs rely entirely on the mechanism of reward accumulation maximization, i.e., on the formation of the action selection procedure, to achieve the necessary state sequencing.\nDBC, on the other hand, has two sources of sequencing specification: the properties of an action selection procedure, and direct specification within the target dynamics.\nThe importance of the second source was underlined by the Tag Game experiment data, in which the greedy EMT algorithm, applied to a POMDP-type state space specification, failed, since a target description over such a state space was incapable of encoding the necessary behavior tendencies, e.g., tie-breaking and commitment to directed motion.\nThe structural differences between DBC (and EMT in particular) and POMDPs prohibit direct performance comparison, and place them on complementary tracks, each within a suitable niche.\nFor instance, POMDPs could be seen as a much more natural formulation of economic sequential decision-making problems, while EMT is a better fit for a continual demand for stochastic change, as happens in many robotic or embodied-agent problems.\nThe complementary properties of POMDPs and EMT can be further exploited.\nThere is recent interest in using POMDPs in hybrid solutions [17], in which the POMDPs can be used together with other control approaches to provide results not easily achievable with either approach by itself.\nDBC can be an effective partner in such a hybrid solution.\nFor instance, POMDPs have prohibitively large off-line time requirements for policy computation, but can be readily used in simpler settings to expose beneficial behavioral trends; these can serve as a form of target dynamics that are
provided to EMT in a larger domain for on-line operation.\n7.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we have presented a novel perspective on the process of planning and control in stochastic environments, in the form of the Dynamics Based Control (DBC) framework.\nDBC formulates the task of planning as support of a specified target system dynamics, which describes the necessary properties of change within the environment.\nThe optimality of DBC plans of action is measured with respect to the deviation of the actual system dynamics from the target dynamics.\nFigure 4: Observation Model I: Omniposition quarry.\nEntropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.\nFigure 5: Observation Model II: quarry not observed at hunter's position.\nEntropy development with length of Tag Game for the three experiment scenarios: a) multiple dead-end, b) irregular open arena, c) circular corridor.\nWe show that the recently developed technique of Extended Markov Tracking (EMT) [13] is an instantiation of DBC.\nIn fact, EMT can be seen as a specific case of DBC parameterization, which employs a greedy action selection procedure.\nSince EMT exhibits the key features of the general DBC framework, as well as polynomial time complexity, we used the multi-target version of EMT [15] to demonstrate that the class of area-sweeping problems naturally lends itself to dynamics-based descriptions, as instantiated by our experiments in the Tag Game domain.\nAs enumerated in Section 4.2, EMT has a number of limitations, such as difficulty in dealing with negative dynamic preference.\nThis prevents direct application of EMT to such problems as Rendezvous-Evasion Games (e.g., [6]).\nHowever, DBC in general has no such limitations, and readily enables the formulation of evasion games.\nIn future work, we intend to
proceed with the development of dynamics-based controllers for these problems."} {"id":"I-42","title":"A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements","abstract":"Distributed Constraint Optimization (DCOP) is a general framework that can model complex problems in multi-agent systems. Several current algorithms that solve general DCOP instances, including ADOPT and DPOP, arrange agents into a traditional pseudotree structure. We introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements. Our algorithm correctly solves DCOP instances for pseudotrees that include edges between nodes in separate branches. The algorithm also solves instances with traditional pseudotree arrangements using the same procedure as DPOP. We compare our algorithm with DPOP using several metrics including the induced width of the pseudotrees, the maximum dimensionality of messages and computation, and the maximum sequential path cost through the algorithm. We prove that for some problem instances it is not possible to generate a traditional pseudotree using edge-traversal heuristics that will outperform a cross-edged pseudotree. We use multiple heuristics to generate pseudotrees and choose the best pseudotree in linear space-time complexity. 
For some problem instances we observe significant improvements in message and computation sizes compared to DPOP.","lvl-1":"A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements\u2217 James Atlas Computer and Information Sciences University of Delaware Newark, DE 19716 atlas@cis.udel.edu Keith Decker Computer and Information Sciences University of Delaware Newark, DE 19716 decker@cis.udel.edu ABSTRACT Distributed Constraint Optimization (DCOP) is a general framework that can model complex problems in multi-agent systems.\nSeveral current algorithms that solve general DCOP instances, including ADOPT and DPOP, arrange agents into a traditional pseudotree structure.\nWe introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements.\nOur algorithm correctly solves DCOP instances for pseudotrees that include edges between nodes in separate branches.\nThe algorithm also solves instances with traditional pseudotree arrangements using the same procedure as DPOP.\nWe compare our algorithm with DPOP using several metrics including the induced width of the pseudotrees, the maximum dimensionality of messages and computation, and the maximum sequential path cost through the algorithm.\nWe prove that for some problem instances it is not possible to generate a traditional pseudotree using edge-traversal heuristics that will outperform a cross-edged pseudotree.\nWe use multiple heuristics to generate pseudotrees and choose the best pseudotree in linear space-time complexity.\nFor some problem instances we observe significant improvements in message and computation sizes compared to DPOP.\nCategories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent Systems General Terms Algorithms 1.\nINTRODUCTION Many historical problems in the AI community can be transformed into Constraint Satisfaction Problems (CSP).\nWith the advent of distributed AI, 
multi-agent systems became a popular way to model the complex interactions and coordination required to solve distributed problems.\nCSPs were originally extended to distributed agent environments in [9].\nEarly domains for distributed constraint satisfaction problems (DisCSP) included job shop scheduling [1] and resource allocation [2].\nMany domains for agent systems, especially teamwork coordination, distributed scheduling, and sensor networks, involve overly constrained problems that are difficult or impossible to satisfy for every constraint.\nRecent approaches to solving problems in these domains rely on optimization techniques that map constraints into multi-valued utility functions.\nInstead of finding an assignment that satisfies all constraints, these approaches find an assignment that produces a high level of global utility.\nThis extension to the original DisCSP approach has become popular in multi-agent systems, and has been labeled the Distributed Constraint Optimization Problem (DCOP) [1].\nCurrent algorithms that solve complete DCOPs use two main approaches: search and dynamic programming.\nSearch based algorithms that originated from DisCSP typically use some form of backtracking [10] or bounds propagation, as in ADOPT [3].\nDynamic programming based algorithms include DPOP and its extensions [5, 6, 7].\nTo date, both categories of algorithms arrange agents into a traditional pseudotree to solve the problem.\nIt has been shown in [6] that any constraint graph can be mapped into a traditional pseudotree.\nHowever, it was also shown that finding the optimal pseudotree was NP-Hard.\nWe began to investigate the performance of traditional pseudotrees generated by current edge-traversal heuristics.\nWe found that these heuristics often produced little parallelism as the pseudotrees tended to have high depth and low branching factors.\nWe suspected that there could be other ways to arrange the pseudotrees that would provide increased parallelism and 
smaller message sizes. After exploring these other arrangements we found that cross-edged pseudotrees provide shorter depths and higher branching factors than the traditional pseudotrees. Our hypothesis was that these cross-edged pseudotrees would outperform traditional pseudotrees for some problem types.
In this paper we introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements which includes cross-edged pseudotrees. We begin with a definition of DCOP, traditional pseudotrees, and cross-edged pseudotrees. We then provide a summary of the original DPOP algorithm and introduce our DCPOP algorithm. We discuss the complexity of our algorithm as well as the impact of pseudotree generation heuristics. We then show that our Distributed Cross-edged Pseudotree Optimization Procedure (DCPOP) performs significantly better in practice than the original DPOP algorithm for some problem instances. We conclude with a selection of ideas for future work and extensions for DCPOP.

978-81-904262-7-5 (RPS) © 2007 IFAAMAS

2. PROBLEM DEFINITION
DCOP has been formalized in slightly different ways in recent literature, so we will adopt the definition as presented in [6]. A Distributed Constraint Optimization Problem with n nodes and m constraints consists of the tuple < X, D, U > where:
• X = {x1, ..., xn} is a set of variables, each one assigned to a unique agent
• D = {d1, ..., dn} is a set of finite domains, one for each variable
• U = {u1, ..., um} is a set of utility functions such that each function involves a subset of the variables in X and defines a utility for each combination of values among those variables
An optimal solution to a DCOP instance consists of an assignment of values in D to X such that the sum of utilities in U is maximal. Problem domains that require minimum cost instead of maximum utility can map costs into negative utilities. The utility functions represent soft constraints but can also
represent hard constraints by using arbitrarily large negative values. For this paper we only consider binary utility functions involving two variables. Higher order utility functions can be modeled with minor changes to the algorithm, but they also substantially increase the complexity.

2.1 Traditional Pseudotrees
Pseudotrees are a common structure used in search procedures to allow parallel processing of independent branches. As defined in [6], a pseudotree is an arrangement of a graph G into a rooted tree T such that vertices in G that share an edge are in the same branch in T. A back-edge is an edge between a node X and any node which lies on the path from X to the root (excluding X's parent). Figure 1 shows a pseudotree with four nodes, three edges (A-B, B-C, B-D), and one back-edge (A-C). Four types of relationships between nodes in a pseudotree are also defined in [6]:
• P(X) - the parent of a node X: the single node higher in the pseudotree that is connected to X directly through a tree edge
• C(X) - the children of a node X: the set of nodes lower in the pseudotree that are connected to X directly through tree edges
• PP(X) - the pseudo-parents of a node X: the set of nodes higher in the pseudotree that are connected to X directly through back-edges (In Figure 1, A = PP(C))
• PC(X) - the pseudo-children of a node X: the set of nodes lower in the pseudotree that are connected to X directly through back-edges (In Figure 1, C = PC(A))

Figure 1: A traditional pseudotree. Solid line edges represent parent-child relationships and the dashed line represents a pseudo-parent-pseudo-child relationship.

Figure 2: A cross-edged pseudotree. Solid line edges represent parent-child relationships, the dashed line represents a pseudo-parent-pseudo-child relationship, and the dotted line represents a branch-parent-branch-child relationship. The bolded node, B, is the merge point for node E.
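The four relationship sets above are determined entirely by the rooted tree and the constraint graph's edges. As an illustration (a sketch, not code from the paper; the dictionary-and-frozenset representation is an assumption), they can be derived as follows:

```python
# Illustrative sketch: deriving P, C, PP, and PC from a rooted pseudotree.
# `parent` maps each node to its tree parent (None for the root);
# `edges` is the set of all constraint-graph edges, each a frozenset.

def pseudotree_relations(parent, edges):
    def ancestors(n):
        chain = []
        while parent[n] is not None:
            n = parent[n]
            chain.append(n)
        return chain  # nearest ancestor first

    P = dict(parent)
    C = {n: {c for c in parent if parent[c] == n} for n in parent}
    # A back-edge connects a node to an ancestor other than its parent.
    PP = {n: {a for a in ancestors(n)[1:] if frozenset((n, a)) in edges}
          for n in parent}
    PC = {n: {m for m in parent if n in PP[m]} for n in parent}
    return P, C, PP, PC
```

Applied to the pseudotree of Figure 1 (tree edges A-B, B-C, B-D and back-edge A-C), this yields PP(C) = {A} and PC(A) = {C}, matching the definitions above.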
2.2 Cross-edged Pseudotrees
We define a cross-edge as an edge from node X to a node Y that is above X but not on the path from X to the root. A cross-edged pseudotree is a traditional pseudotree with the addition of cross-edges. Figure 2 shows a cross-edged pseudotree with a cross-edge (D-E).
In a cross-edged pseudotree we designate certain edges as primary. The set of primary edges defines a spanning tree of the nodes. The parent, child, pseudo-parent, and pseudo-child relationships from the traditional pseudotree are now defined in the context of this primary edge spanning tree. This definition also yields two additional types of relationships that may exist between nodes:
• BP(X) - the branch-parents of a node X: the set of nodes higher in the pseudotree that are connected to X but are not in the primary path from X to the root (In Figure 2, D = BP(E))
• BC(X) - the branch-children of a node X: the set of nodes lower in the pseudotree that are connected to X but are not in any primary path from X to any leaf node (In Figure 2, E = BC(D))

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

2.3 Pseudotree Generation
Current algorithms usually have a pre-execution phase to generate a traditional pseudotree from a general DCOP instance. Our DCPOP algorithm generates a cross-edged pseudotree in the same fashion. First, the DCOP instance < X, D, U > translates directly into a graph with X as the set of vertices and an edge for each pair of variables represented in U.
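This translation is mechanical; a minimal sketch (the pair-based representation of U is an assumption, not the paper's):

```python
# Illustrative sketch: a DCOP instance <X, D, U> induces a constraint graph.
# Each binary utility function contributes one edge between its two variables.

def constraint_graph(X, U):
    """X: iterable of variables; U: iterable of (scope, table) pairs,
    where scope names the two variables a utility function involves."""
    vertices = set(X)
    edges = {frozenset(scope) for scope, _ in U}
    return vertices, edges
```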
Next, various heuristics are used to arrange this graph into a pseudotree. One common heuristic is to perform a guided depth-first search (DFS), as the resulting traversal is a pseudotree, and a DFS can easily be performed in a distributed fashion. We define an edge-traversal based method as any method that produces a pseudotree in which all parent/child pairs share an edge in the original graph. This includes DFS, breadth-first search, and best-first search based traversals. Our heuristics that generate cross-edged pseudotrees use a distributed best-first search traversal.

3. DPOP ALGORITHM
The original DPOP algorithm operates in three main phases. The first phase generates a traditional pseudotree from the DCOP instance using a distributed algorithm. The second phase joins utility hypercubes from children and the local node and propagates them towards the root. The third phase chooses an assignment for each domain in a top down fashion beginning with the agent at the root node.
The complexity of DPOP depends on the size of the largest computation and utility message during phase two. It has been shown that this size directly corresponds to the induced width of the pseudotree generated in phase one [6]. DPOP uses polynomial time heuristics to generate the pseudotree since finding the minimum induced width pseudotree is NP-hard. Several distributed edge-traversal heuristics have been developed to find low width pseudotrees [8]. At the end of the first phase, each agent knows its parent, children, pseudo-parents, and pseudo-children.

3.1 Utility Propagation
Agents located at leaf nodes in the pseudotree begin the process by calculating a local utility hypercube. This hypercube at node X contains summed utilities for each combination of values in the domains for P(X) and PP(X). This hypercube has dimensional size equal to the number of pseudo-parents plus one. A message containing this hypercube is sent to P(X). Agents located at non-leaf nodes wait
for all messages from children to arrive. Once the agent at node Y has all utility messages, it calculates its local utility hypercube, which includes domains for P(Y), PP(Y), and Y. The local utility hypercube is then joined with all of the hypercubes from the child messages. At this point all utilities involving node Y are known, and the domain for Y may be safely eliminated from the joined hypercube. This elimination process chooses the best utility over the domain of Y for each combination of the remaining domains. A message containing this hypercube is now sent to P(Y). The dimensional size of this hypercube depends on the number of overlapping domains in received messages and the local utility hypercube. This dynamic programming based propagation phase continues until the agent at the root node of the pseudotree has received all messages from its children.

3.2 Value Propagation
Value propagation begins when the agent at the root node Z has received all messages from its children. Since Z has no parents or pseudo-parents, it simply combines the utility hypercubes received from its children. The combined hypercube contains only values for the domain of Z. At this point the agent at node Z simply chooses the assignment for its domain that has the best utility. A value propagation message with this assignment is sent to each node in C(Z). Each other node then receives a value propagation message from its parent and chooses the assignment for its domain that has the best utility given the assignments received in the message. The node adds its domain assignment to the assignments it received and passes the set of assignments to its children. The algorithm is complete when all nodes have chosen an assignment for their domain.

4. DCPOP ALGORITHM
Our extension to the original DPOP algorithm, shown in Algorithm 1, shares the same three phases. The first phase generates the cross-edged pseudotree for the DCOP instance. The second phase merges branches
and propagates the utility hypercubes. The third phase chooses assignments for domains, at branch merge points and in a top down fashion, beginning with the agent at the root node.
For the first phase we generate a pseudotree using several distributed heuristics and select the one with the lowest overall complexity. The complexity of the computation and utility message size in DCPOP does not directly correspond to the induced width of the cross-edged pseudotree. Instead, we use a polynomial time method for calculating the maximum computation and utility message size for a given cross-edged pseudotree. A description of this method and the pseudotree selection process appears in Section 5. At the end of the first phase, each agent knows its parent, children, pseudo-parents, pseudo-children, branch-parents, and branch-children.

4.1 Merging Branches and Utility Propagation
In the original DPOP algorithm a node X only had utility functions involving its parent and its pseudo-parents. In DCPOP, a node X is allowed to have a utility function involving a branch-parent. The concept of a branch can be seen in Figure 2 with node E representing our node X. The two distinct paths from node E to node B are called branches of E. The single node where all branches of E meet is node B, which is called the merge point of E.
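DCPOP detects merge points by counting branches per origin. A minimal sketch of that bookkeeping (the structure and field names here are illustrative assumptions; the messages described next carry equivalent information):

```python
# Illustrative sketch: a node is the merge point for origin X once the
# branches it has accumulated account for every branch leaving X.
from dataclasses import dataclass

@dataclass
class BranchInfo:
    origin: str   # node where the branches originate
    total: int    # number of branches leaving the origin
    merged: int   # branches combined into this structure so far

def merge_branches(incoming):
    """Combine branch structures sharing an origin; report merge points."""
    combined = {}
    for b in incoming:
        if b.origin in combined:
            combined[b.origin].merged += b.merged
        else:
            combined[b.origin] = BranchInfo(b.origin, b.total, b.merged)
    merge_points = {o for o, b in combined.items() if b.merged == b.total}
    return combined, merge_points
```

For node B in Figure 2, the two incoming structures for origin E (each with total 2, merged 1) combine to merged = 2 = total, so B is E's merge point.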
Agents with nodes that have branch-parents begin by sending a utility propagation message to each branch-parent. This message includes a two dimensional utility hypercube with domains for the node X and the branch-parent BP(X). It also includes a branch information structure which contains the origination node of the branch, X, the total number of branches originating from X, and the number of branches originating from X that are merged into a single representation by this branch information structure (this number starts at 1). Intuitively, when the number of merged branches equals the total number of originating branches, the algorithm has reached the merge point for X. In Figure 2, node E sends a utility propagation message to its branch-parent, node D. This message has dimensions for the domains of E and D, and includes branch information with an origin of E, 2 total branches, and 1 merged branch.
As in the original DPOP utility propagation phase, an agent at leaf node X sends a utility propagation message to its parent. In DCPOP this message contains dimensions for the domains of P(X) and PP(X). If node X also has branch-parents, then the utility propagation message also contains a dimension for the domain of X, and will include a branch information structure. In Figure 2, node E sends a utility propagation message to its parent, node C. This message has dimensions for the domains of E and C, and includes branch information with an origin of E, 2 total branches, and 1 merged branch.
When a node Y receives utility propagation messages from all of its children and branch-children, it merges any branches with the same origination node X. The merged branch information structure accumulates the number of merged branches for X. If the cumulative total number of merged branches equals the total number of branches, then Y is the merge point for X. This means that the
utility hypercubes present at Y contain all information about the valuations for utility functions involving node X. In addition to the typical elimination of the domain of Y from the utility hypercubes, we can now safely eliminate the domain of X from the utility hypercubes.
To illustrate this process, we will examine what happens in the second phase for node B in Figure 2. In the second phase node B receives two utility propagation messages. The first comes from node C and includes dimensions for domains E, B, and A. It also has a branch information structure with an origin of E, 2 total branches, and 1 merged branch. The second comes from node D and includes dimensions for domains E and B. It also has a branch information structure with an origin of E, 2 total branches, and 1 merged branch. Node B then merges the branch information structures from both messages because they have the same origination, node E. Since the number of merged branches originating from E is now 2 and the total branches originating from E is 2, node B now eliminates the dimensions for domain E. Node B also eliminates the dimension for its own domain, leaving only information about domain A. Node B then sends a utility propagation message to node A, containing only one dimension for the domain of A.
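The joins and eliminations in this example can be sketched with numpy broadcasting (an illustration only; the paper does not prescribe a hypercube representation, and the utility values below are arbitrary):

```python
# Illustrative sketch: utility hypercubes as numpy arrays tagged with an
# ordered tuple of domain names.
import numpy as np

def join(dims_a, a, dims_b, b):
    """Sum two hypercubes over the union of their domains."""
    dims = list(dims_a) + [d for d in dims_b if d not in dims_a]
    def expand(ds, h):
        present = [d for d in dims if d in ds]
        h = np.transpose(h, [ds.index(d) for d in present])
        # Insert size-1 axes for domains this hypercube lacks.
        return h.reshape([h.shape[present.index(d)] if d in ds else 1
                          for d in dims])
    return tuple(dims), expand(dims_a, a) + expand(dims_b, b)

def eliminate(dims, h, var):
    """Drop `var`, keeping the best utility over its domain."""
    ax = dims.index(var)
    return tuple(d for d in dims if d != var), h.max(axis=ax)

# Node B's step: join the messages from C (over E, B, A) and D (over E, B),
# then eliminate E (B is E's merge point) and B itself, leaving only A.
dims, h = join(('E', 'B', 'A'), np.arange(8).reshape(2, 2, 2),
               ('E', 'B'), np.ones((2, 2)))
dims, h = eliminate(*eliminate(dims, h, 'E'), 'B')
```

After both eliminations only the dimension for domain A remains, mirroring the one-dimensional message node B sends to node A.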
Although not possible in DPOP, this method of utility propagation and dimension elimination may produce hypercubes at node Y that do not share any domains. In DCPOP we do not join domain independent hypercubes, but instead may send multiple hypercubes in the utility propagation message sent to the parent of Y. This lazy approach to joins helps to reduce message sizes.

4.2 Value Propagation
As in DPOP, value propagation begins when the agent at the root node Z has received all messages from its children. At this point the agent at node Z chooses the assignment for its domain that has the best utility. If Z is the merge point for the branches of some node X, Z will also choose the assignment for the domain of X. Thus any node that is a merge point will choose assignments for a domain other than its own. These assignments are then passed down the primary edge hierarchy. If node X in the hierarchy has branch-parents, then the value assignment message from P(X) will contain an assignment for the domain of X.
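The per-node value choice can be sketched as follows (an illustrative representation, consistent with eliminating the already-assigned domains first):

```python
# Illustrative sketch: fix the domains already assigned by ancestors, then
# pick the best indices for whatever remains. At a merge point the remainder
# includes a merged origin's domain as well as the node's own.
import numpy as np

def choose_values(dims, h, received):
    """dims: ordered domain names of hypercube h; received: domain -> index."""
    idx = tuple(received[d] if d in received else slice(None) for d in dims)
    sub = h[idx]
    remaining = [d for d in dims if d not in received]
    best = np.unravel_index(np.argmax(sub), sub.shape)
    return {d: int(i) for d, i in zip(remaining, best)}
```

A non-merge-point node is left with exactly one remaining domain (its own); a merge point is left with its own plus each merged origin's domain.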
Every node in the hierarchy adds any assignments it has chosen to the ones it received and passes the set of assignments to its children. The algorithm is complete when all nodes have chosen or received an assignment for their domain.

4.3 Proof of Correctness
We will prove the correctness of DCPOP by first noting that DCPOP fully extends DPOP and then examining the two cases for value assignment in DCPOP.
Given a traditional pseudotree as input, the DCPOP algorithm execution is identical to DPOP. Using a traditional pseudotree arrangement no nodes have branch-parents or branch-children, since all edges are either back-edges or tree edges. Thus the DCPOP algorithm using a traditional pseudotree sends only utility propagation messages that contain domains belonging to the parent or pseudo-parents of a node. Since no node has any branch-parents, no branches exist, and thus no node serves as a merge point for any other node. Thus all value propagation assignments are chosen at the node of the assignment domain.
For DCPOP execution with cross-edged pseudotrees, some nodes serve as merge points. We note that any node X that is not a merge point assigns its value exactly as in DPOP. The local utility hypercube at X contains domains for X, P(X), PP(X), and BC(X). As in DPOP, the value assignment message received at X includes the values assigned to P(X) and PP(X). Also, since X is not a merge point, all assignments to BC(X) must have been calculated at merge points higher in the tree and are in the value assignment message from P(X). Thus after eliminating domains for which assignments are known, only the domain of X is left. The agent at node X can now correctly choose the assignment with maximum utility for its own domain.
If node X is a merge point for some branch-child Y, we know that X must be a node along the path from Y to the root, and from P(Y) and all BP(Y) to the root. From the algorithm, we know that Y necessarily has all information from C(Y),
PC(Y), and BC(Y) since it waits for their messages. Node X has information about all nodes below it in the tree, which would include Y, P(Y), BP(Y), and those PP(Y) that are below X in the tree. For any PP(Y) above X in the tree, X receives the assignment for the domain of PP(Y) in the value assignment message from P(X). Thus X has utility information about all of the utility functions of which Y is a part. By eliminating domains included in the value assignment message, node X is left with a local utility hypercube with domains for X and Y. The agent at node X can now correctly choose the assignments with maximum utility for the domains of X and Y.

4.4 Complexity Analysis
The utility propagation phase of DCPOP sends one message to each P(X), PP(X), and BP(X). The value propagation phase sends one value assignment message to each C(X). Thus, DCPOP produces a linear number of messages with respect to the number of edges (utility functions) in the cross-edged pseudotree and the original DCOP instance. The actual complexity of DCPOP depends on two additional measurements: message size and computation size.
Message size and computation size in DCPOP depend on the number of overlapping branches as well as the number of overlapping back-edges. It was shown in [6] that the number of overlapping back-edges is equal to the induced width of the pseudotree. In a poorly constructed cross-edged pseudotree, the number of overlapping branches at node X can be as large as the total number of descendants of X.
Thus, the total message size in DCPOP in a poorly constructed instance can be space-exponential in the total number of nodes in the graph. However, in practice a well constructed cross-edged pseudotree can achieve much better results. Later we address the issue of choosing well constructed cross-edged pseudotrees from a set.
We introduce an additional measurement, the maximum sequential path cost through the algorithm. This measurement directly relates to the maximum amount of parallelism achievable by the algorithm. To take this measurement we first store the total computation size for each node during phases two and three. This computation size represents the number of individual accesses to a value in a hypercube at each node. For example, a join between two domains of size 4 costs 4 · 4 = 16. Two directed acyclic graphs (DAGs) can then be drawn: one with the utility propagation messages as edges and the phase two costs at nodes, and the other with the value assignment messages as edges and the phase three costs at nodes. The maximum sequential path cost is equal to the sum of the longest path on each DAG from the root to any leaf node.

Algorithm 1 DCPOP Algorithm
 1: DCPOP(X, D, U)
    Each agent Xi executes:
    Phase 1: pseudotree creation
 2:   elect leader from all Xj ∈ X
 3:   elected leader initiates pseudotree creation
 4:   afterwards, Xi knows P(Xi), PP(Xi), BP(Xi), C(Xi), BC(Xi) and PC(Xi)
    Phase 2: UTIL message propagation
 5:   if |BP(Xi)| > 0 then
 6:     BRANCH_Xi ← |BP(Xi)| + 1
 7:     for all Xk ∈ BP(Xi) do
 8:       UTIL_Xi(Xk) ← Compute_utils(Xi, Xk)
 9:       Send_message(Xk, UTIL_Xi(Xk), BRANCH_Xi)
10:   if |C(Xi)| = 0 (i.e. Xi is a leaf node) then
11:     UTIL_Xi(P(Xi)) ← Compute_utils(P(Xi), PP(Xi)) for all PP(Xi)
12:     Send_message(P(Xi), UTIL_Xi(P(Xi)), BRANCH_Xi)
13:     Send_message(PP(Xi), empty UTIL, empty BRANCH) to all PP(Xi)
14:   activate UTIL_Message_handler()
    Phase 3: VALUE message propagation
15:   activate VALUE_Message_handler()
END ALGORITHM

UTIL_Message_handler(Xk, UTIL_Xk(Xi), BRANCH_Xk)
16:   store UTIL_Xk(Xi), BRANCH_Xk(Xi)
17:   if UTIL messages from all children and branch-children have arrived then
18:     for all Bj ∈ BRANCH(Xi) do
19:       if Bj is merged then
20:         join all hypercubes where Bj ∈ UTIL(Xi)
21:         eliminate Bj from the joined hypercube
22:     if P(Xi) == null (that means Xi is the root) then
23:       v*_i ← Choose_optimal(null)
24:       Send VALUE(Xi, v*_i) to all C(Xi)
25:     else
26:       UTIL_Xi(P(Xi)) ← Compute_utils(P(Xi), PP(Xi))
27:       Send_message(P(Xi), UTIL_Xi(P(Xi)), BRANCH_Xi(P(Xi)))

VALUE_Message_handler(VALUE_Xi,P(Xi))
28:   add all Xk ← v*_k ∈ VALUE_Xi,P(Xi) to agent view
29:   Xi ← v*_i = Choose_optimal(agent view)
30:   Send VALUE_Xl,Xi to all Xl ∈ C(Xi)

5. HEURISTICS
In our assessment of complexity in DCPOP we focused on the worst case possibly produced by the algorithm. We acknowledge that in real world problems the generation of the pseudotree has a significant impact on the actual performance. The problem of finding the best pseudotree for a given DCOP instance is NP-Hard. Thus a heuristic is used for generation, and the performance of the algorithm depends on the pseudotree found by the heuristic. Some previous research focused on finding heuristics to generate good pseudotrees [8]. While we have developed some heuristics that generate good cross-edged pseudotrees for use with DCPOP, our focus has been to use multiple heuristics and then select the best pseudotree from the generated pseudotrees.
We consider only heuristics that run in polynomial time with respect to the number of nodes in the original DCOP instance. The actual DCPOP algorithm has worst case exponential complexity, but we can calculate the maximum message size, computation size, and
sequential path cost for a given cross-edged pseudotree in linear space-time complexity. To do this, we simply run the algorithm without attempting to calculate any of the local utility hypercubes or optimal value assignments. Instead, messages include dimensional and branch information but no utility hypercubes. After each heuristic completes its generation of a pseudotree, we execute the measurement procedure and propagate the measurement information up to the chosen root in that pseudotree. The root then broadcasts the total complexity for that heuristic to all nodes. After all heuristics have had a chance to complete, every node knows which heuristic produced the best pseudotree. Each node then proceeds to begin the DCPOP algorithm using its knowledge of the pseudotree generated by the best heuristic.
The heuristics used to generate traditional pseudotrees perform a distributed DFS traversal. The general distributed algorithm uses a token passing mechanism and a linear number of messages. Improved DFS based heuristics use a special procedure to choose the root node, and also provide an ordering function over the neighbors of a node to determine the order of path recursion. The DFS based heuristics used in our experiments come from the work done in [4, 8].

5.1 The best-first cross-edged pseudotree heuristic
The heuristics used to generate cross-edged pseudotrees perform a best-first traversal. A general distributed best-first algorithm for node expansion is presented in Algorithm 2. An evaluation function at each node provides the values that are used to determine the next best node to expand. Note that in this algorithm each node only exchanges its best value with its neighbors. In our experiments we used several evaluation functions that took as arguments an ordered list of ancestors and a node, which contains a list of neighbors (with each neighbor's placement depth in the tree if it was placed). From these we can calculate branch-parents,
branch-children, and unknown relationships for a potential node placement. The best overall function calculated the value as ancestors − (branch-parents + branch-children), with the number of unknown relationships as a tiebreak. After completion each node has knowledge of its parent and ancestors, so it can easily determine which connected nodes are pseudo-parents, branch-parents, pseudo-children, and branch-children.

Algorithm 2 Distributed Best-First Search Algorithm
root ← elected leader
next(root, ∅)

place(node, parent)
  node.parent ← parent
  node.ancestors ← parent.ancestors ∪ parent
  send placement message (node, node.ancestors) to all neighbors of node

next(current, previous)
  if current is not placed then
    place(current, previous)
    next(current, ∅)
  else
    best ← getBestNeighbor(current, previous)
    if best = ∅ then
      if previous = ∅ then
        terminate, all nodes are placed
      next(previous, ∅)
    else
      next(best, current)

getBestNeighbor(current, previous)
  best ← ∅; score ← 0
  for all n ∈ current.neighbors do
    if n != previous then
      if n is placed then
        nscore ← getBestNeighbor(n, current)
      else
        nscore ← evaluate(current, n)
      if nscore > score then
        score ← nscore
        best ← n
  return best, score

The complexity of the best-first traversal depends on the complexity of the evaluation function. Assuming a complexity of O(V) for the evaluation function, which is the case for our best overall function, the best-first traversal is O(V · E), which is at worst O(n^3). For each v ∈ V we perform a place operation, and find the next node to place using the getBestNeighbor operation. The place operation is at most O(V) because of the sent messages. Finding the next node uses recursion and traverses only already placed nodes, so it has O(V) recursions. Each recursion performs a recursive getBestNeighbor
operation that traverses all placed nodes and their neighbors. This operation is O(V · E), but results can be cached using only O(V) space at each node. Thus we have O(V · (V + V + V · E)) = O(V^2 · E). If we are smart about evaluating local changes when each node receives placement messages from its neighbors, and we cache the results, the getBestNeighbor operation is only O(E). This increases the complexity of the place operation, but for all placements the total complexity is only O(V · E). Thus we have an overall complexity of O(V · E + V · (V + E)) = O(V · E).

6. COMPARISON OF COMPLEXITY IN DPOP AND DCPOP
We have already shown that given the same input, DCPOP performs the same as DPOP. We have also shown that we can accurately predict the performance of a given pseudotree in linear space-time complexity. If we use a constant number of heuristics to generate the set of pseudotrees, we can choose the best pseudotree in linear space-time complexity. We will now show that there exists a DCOP instance for which a cross-edged pseudotree outperforms all possible traditional pseudotrees (based on edge-traversal heuristics).
In Figure 3(a) we have a DCOP instance with six nodes. This is a bipartite graph with each partition fully connected to the other partition. In Figure 3(b) we see a traditional pseudotree arrangement for this DCOP instance. It is easy to see that any edge-traversal based heuristic cannot expand two nodes from the same partition in succession. We also see that no node can have more than one child, because any such arrangement would be an invalid pseudotree. Thus any traditional pseudotree arrangement for this DCOP instance must take the form of Figure 3(b). We can see that the back-edges F-B and F-A overlap node C.

Figure 3: (a) The DCOP instance (b) A traditional pseudotree arrangement for the DCOP instance (c) A cross-edged pseudotree arrangement for the DCOP instance
Node C also has a parent E, and a back-edge with D. Using the original DPOP algorithm (or DCPOP, since they are identical in this case), we find that the computation at node C involves five domains: A, B, C, D, and E. In contrast, the cross-edged pseudotree arrangement in Figure 3(c) requires only a maximum of four domains in any computation during DCPOP. Since node A is the merge point for branches from both B and C, we can see that each of the nodes D, E, and F has two overlapping branches. In addition each of these nodes has node A as its parent. Using the DCPOP algorithm we find that the computation at node D (or E or F) involves four domains: A, B, C, and D (or E or F). Since no better traditional pseudotree arrangement can be created using an edge-traversal heuristic, we have shown that DCPOP can outperform DPOP even if we use the optimal pseudotree found through edge-traversal.
We acknowledge that pseudotree arrangements that allow parent-child relationships without an actual constraint can solve the problem in Figure 3(a) with a maximum computation size of four domains. However, current heuristics used with DPOP do not produce such pseudotrees, and such a heuristic would be difficult to distribute since each node would require information about nodes with which it has no constraint. Also, while we do not prove it here, cross-edged pseudotrees can produce smaller message sizes than such pseudotrees even if the computation size is similar. In practice, since finding the best pseudotree arrangement is NP-Hard, we find that heuristics that produce cross-edged pseudotrees often produce significantly smaller computation and message sizes.

7. EXPERIMENTAL RESULTS
Existing performance metrics for DCOP algorithms include the total number of messages, synchronous clock cycles, and message size. We have already shown that the total number of messages is linear with
respect to the number of constraints in the DCOP instance. We also introduced the maximum sequential path cost (PC) as a measurement of the maximum amount of parallelism achievable by the algorithm. The maximum sequential path cost is equal to the sum of the computations performed on the longest path from the root to any leaf node. We also include as metrics the maximum computation size in number of dimensions (CD) and the maximum message size in number of dimensions (MD). To analyze the relative complexity of a given DCOP instance, we find the minimum induced width (IW) of any traditional pseudotree produced by a heuristic for the original DPOP.

7.1 Generic DCOP instances
For our initial tests we randomly generated two sets of problems with 3000 cases in each. Each problem was generated by assigning a random number (picked from a range) of constraints to each variable. The generator then created binary constraints until each variable reached its maximum number of constraints. The first set uses 20 variables, and the best DPOP IW ranged from 1 to 16 with an average of 8.5. The second set uses 100 variables, and the best DPOP IW ranged from 2 to 68 with an average of 39.3. Since most of the problems in the second set were too complex to actually compute the solution, we took measurements of the metrics using the techniques described earlier in Section 5, without actually solving the problem. Results are shown for the first set in Table 1 and for the second set in Table 2.
For the two problem sets we split the cases into low density and high density categories. Low density cases consist of those problems that have a best DPOP IW less than or equal to half of the total number of nodes (e.g.
IW ≤ 10 for the 20 node problems and IW ≤ 50 for the 100 node problems). High density problems consist of the remainder of the problem sets. In both Table 1 and Table 2 we have listed performance metrics for the original DPOP algorithm, the DCPOP algorithm using only cross-edged pseudotrees (DCPOP-CE), and the DCPOP algorithm using traditional and cross-edged pseudotrees (DCPOP-All). The pseudotrees used for DPOP were generated using 5 heuristics: DFS, DFS MCN, DFS CLIQUE MCN, DFS MCN DSTB, and DFS MCN BEC. These are all versions of the guided DFS traversal discussed in Section 5. The cross-edged pseudotrees used for DCPOP-CE were generated using 5 heuristics: MCN, LCN, MCN A-B, LCN A-B, and LCSG A-B. These are all versions of the best-first traversal discussed in Section 5. For both DPOP and DCPOP-CE we chose the best pseudotree produced by their respective 5 heuristics for each problem in the set. For DCPOP-All we chose the best pseudotree produced by all 10 heuristics for each problem in the set.

For the CD and MD metrics the value shown is the average number of dimensions. For the PC metric the value shown is the natural logarithm of the maximum sequential path cost (since the actual value grows exponentially with the complexity of the problem). The final row in both tables is a measurement of the improvement of DCPOP-All over DPOP. For the CD and MD metrics the value shown is a reduction in number of dimensions. For the PC metric the value shown is a percentage reduction in the maximum sequential path cost (% = (DPOP - DCPOP) / DCPOP * 100). Notice that DCPOP-All outperforms DPOP on all metrics. This logically follows from our earlier assertion that given the same input, DCPOP performs exactly the same as DPOP. Thus, given the choice between the pseudotrees produced by all 10 heuristics, DCPOP-All will always outperform DPOP.

Table 1: 20 node problems

              Low Density          High Density
Algorithm     CD     MD     PC     CD     MD     PC
DPOP          7.81   6.81   3.78   13.34  12.34  5.34
DCPOP-CE      7.94   6.73   3.74   12.83  11.43  5.07
DCPOP-All     7.62   6.49   3.66   12.72  11.36  5.05
Improvement   0.18   0.32   13%    0.62   0.98   36%

Table 2: 100 node problems

              Low Density          High Density
Algorithm     CD     MD     PC     CD     MD     PC
DPOP          33.35  32.35  14.55  58.51  57.50  19.90
DCPOP-CE      33.49  29.17  15.22  57.11  50.03  20.01
DCPOP-All     32.35  29.57  14.10  56.33  51.17  18.84
Improvement   1.00   2.78   104%   2.18   6.33   256%

Figure 4: Computation Dimension Size
Figure 5: Message Dimension Size
Figure 6: Path Cost

Table 3: Meeting Scheduling Problems

                                   DCPOP Improvement
Ag    Mtg   Vars   Const   IW      CD     MD     PC
10    4     12     13.5    2.25    -0.01  -0.01  5.6%
30    14    44     57.6    3.63    0.09   0.09   10.9%
50    24    76     101.3   4.17    0.08   0.09   10.7%
100   49    156    212.9   5.04    0.16   0.20   30.0%
150   74    236    321.8   5.32    0.21   0.23   35.8%
200   99    316    434.2   5.66    0.18   0.22   29.5%

Another trend we notice is that the improvement is greater for high density problems than for low density problems. We show this trend in greater detail in Figures 4, 5, and 6. Notice how the improvement increases as the complexity of the problem increases.

7.2 Meeting Scheduling Problem

In addition to our initial generic DCOP tests, we ran a series of tests on the Meeting Scheduling Problem (MSP) as described in [6]. The problem setup includes a number of people that are grouped into departments. Each person must attend a specified number of meetings. Meetings can be held within departments or among departments, and can be assigned to one of eight time slots. The MSP maps to a DCOP instance where each variable represents the time slot that a specific person will attend a specific meeting. All variables that belong to the same person have mutual exclusion constraints placed so that the person cannot attend more than one meeting during the same time slot. All variables that belong to the same meeting have equality constraints so that all of the participants
choose the same time slot. Unary constraints are placed on each variable to account for a person's valuation of each meeting and time slot. For our tests we generated 100 sample problems for each combination of agents and meetings. Results are shown in Table 3. The values in the first five columns represent (in left to right order) the total number of agents, the total number of meetings, the total number of variables, the average total number of constraints, and the average minimum IW produced by a traditional pseudotree. The last three columns show the same metrics we used for the generic DCOP instances, except this time we only show the improvements of DCPOP-All over DPOP. Performance is better on average for all MSP instances, but again we see larger improvements for more complex problem instances.

8. CONCLUSIONS AND FUTURE WORK

We presented a complete, distributed algorithm that solves general DCOP instances using cross-edged pseudotree arrangements. Our algorithm extends the DPOP algorithm by adding additional utility propagation messages and introducing the concept of branch merging during the utility propagation phase. Our algorithm also allows value assignments to occur at higher level merge points for lower level nodes. We have shown that DCPOP fully extends DPOP by performing the same operations given the same input. We have also shown through examples and experimental data that DCPOP can achieve greater performance for some problem instances by extending the allowable input set to include cross-edged pseudotrees. We placed particular emphasis on the role that edge-traversal heuristics play in the generation of pseudotrees. We have shown that the performance penalty for generating multiple pseudotrees with different heuristics is minimal, and that we can choose the best generated pseudotree in linear space-time complexity. Given the importance of a good pseudotree for performance, future work will include new heuristics to find better pseudotrees. Future work
will also include adapting existing DPOP extensions [5, 7] that support different problem domains for use with DCPOP.

9. REFERENCES

[1] J. Liu and K. P. Sycara. Exploiting problem structure for distributed constraint optimization. In V. Lesser, editor, Proceedings of the First International Conference on Multi-Agent Systems, pages 246-254, San Francisco, CA, 1995. MIT Press.
[2] P. J. Modi, H. Jung, M. Tambe, W.-M. Shen, and S. Kulkarni. A dynamic distributed constraint satisfaction approach to resource allocation. Lecture Notes in Computer Science, 2239:685-700, 2001.
[3] P. J. Modi, W. Shen, M. Tambe, and M. Yokoo. An asynchronous complete method for distributed constraint optimization. In AAMAS 03, 2003.
[4] A. Petcu. FRODO: A framework for open/distributed constraint optimization. Technical Report No. 2006/001, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, 2006. http://liawww.epfl.ch/frodo/.
[5] A. Petcu and B. Faltings. A-DPOP: Approximations in distributed optimization. In poster in CP 2005, pages 802-806, Sitges, Spain, October 2005.
[6] A. Petcu and B. Faltings. DPOP: A scalable method for multiagent constraint optimization. In IJCAI 05, pages 266-271, Edinburgh, Scotland, Aug 2005.
[7] A. Petcu, B. Faltings, and D. Parkes. M-DPOP: Faithful distributed implementation of efficient social choice problems. In AAMAS 06, pages 1397-1404, Hakodate, Japan, May 2006.
[8] G. Ushakov. Solving meeting scheduling problems using distributed pseudotree-optimization procedure. Master's thesis, École Polytechnique Fédérale de Lausanne, 2005.
[9] M. Yokoo, E. H. Durfee, T. Ishida, and K. Kuwabara. Distributed constraint satisfaction for formalizing distributed problem solving. In International Conference on Distributed Computing Systems, pages 614-621, 1992.
[10] M. Yokoo, E. H. Durfee, T. Ishida, and K.
Kuwabara. The distributed constraint satisfaction problem: Formalization and algorithms. Knowledge and Data Engineering, 10(5):673-685, 1998.","lvl-2":"A Complete Distributed Constraint Optimization Method For Non-Traditional Pseudotree Arrangements *

ABSTRACT

Distributed Constraint Optimization (DCOP) is a general framework that can model complex problems in multi-agent systems. Several current algorithms that solve general DCOP instances, including ADOPT and DPOP, arrange agents into a traditional pseudotree structure. We introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements. Our algorithm correctly solves DCOP instances for pseudotrees that include edges between nodes in separate branches. The algorithm also solves instances with traditional pseudotree arrangements using the same procedure as DPOP. We compare our algorithm with DPOP using several metrics including the induced width of the pseudotrees, the maximum dimensionality of messages and computation, and the maximum sequential path cost through the algorithm. We prove that for some problem instances it is not possible to generate a traditional pseudotree using edge-traversal heuristics that will outperform a cross-edged pseudotree. We use multiple heuristics to generate pseudotrees and choose the best pseudotree in linear space-time complexity. For some problem instances we observe significant improvements in message and computation sizes compared to DPOP.

1. INTRODUCTION

Many historical problems in the AI community can be transformed into Constraint Satisfaction Problems (CSP). With the advent of
distributed AI, multi-agent systems became a popular way to model the complex interactions and coordination required to solve distributed problems. CSPs were originally extended to distributed agent environments in [9]. Early domains for distributed constraint satisfaction problems (DisCSP) included job shop scheduling [1] and resource allocation [2]. Many domains for agent systems, especially teamwork coordination, distributed scheduling, and sensor networks, involve overly constrained problems that are difficult or impossible to satisfy for every constraint. Recent approaches to solving problems in these domains rely on optimization techniques that map constraints into multi-valued utility functions. Instead of finding an assignment that satisfies all constraints, these approaches find an assignment that produces a high level of global utility. This extension to the original DisCSP approach has become popular in multi-agent systems, and has been labeled the Distributed Constraint Optimization Problem (DCOP) [1].

Current algorithms that solve complete DCOPs use two main approaches: search and dynamic programming. Search based algorithms that originated from DisCSP typically use some form of backtracking [10] or bounds propagation, as in ADOPT [3]. Dynamic programming based algorithms include DPOP and its extensions [5, 6, 7]. To date, both categories of algorithms arrange agents into a traditional pseudotree to solve the problem. It has been shown in [6] that any constraint graph can be mapped into a traditional pseudotree. However, it was also shown that finding the optimal pseudotree was NP-Hard.

We began to investigate the performance of traditional pseudotrees generated by current edge-traversal heuristics. We found that these heuristics often produced little parallelism, as the pseudotrees tended to have high depth and low branching factors. We suspected that there could be other ways to arrange the pseudotrees that would provide increased
parallelism and smaller message sizes. After exploring these other arrangements we found that cross-edged pseudotrees provide shorter depths and higher branching factors than the traditional pseudotrees. Our hypothesis was that these cross-edged pseudotrees would outperform traditional pseudotrees for some problem types.

In this paper we introduce an extension to the DPOP algorithm that handles an extended set of pseudotree arrangements which include cross-edged pseudotrees. We begin with a definition of DCOP, traditional pseudotrees, and cross-edged pseudotrees. We then provide a summary of the original DPOP algorithm and introduce our DCPOP algorithm. We discuss the complexity of our algorithm as well as the impact of pseudotree generation heuristics. We then show that our Distributed Cross-edged Pseudotree Optimization Procedure (DCPOP) performs significantly better in practice than the original DPOP algorithm for some problem instances. We conclude with a selection of ideas for future work and extensions for DCPOP.

2. PROBLEM DEFINITION

DCOP has been formalized in slightly different ways in recent literature, so we will adopt the definition as presented in [6]. A Distributed Constraint Optimization Problem with n nodes and m constraints consists of the tuple ⟨X, D, U⟩, where:

• X = {x1, ..., xn} is a set of variables, each one assigned to a unique agent
• D = {d1, ..., dn} is a set of finite domains for each variable
• U = {u1, ..., um} is a set of utility functions such that each function involves a subset of variables in X and defines a utility for each combination of values among these variables

An optimal solution to a DCOP instance consists of an assignment of values in D to X such that the sum of utilities in U is maximal. Problem domains that require minimum cost instead of maximum utility can map costs into negative utilities. The utility functions represent soft constraints but can also represent hard constraints by using
arbitrarily large negative values. For this paper we only consider binary utility functions involving two variables. Higher order utility functions can be modeled with minor changes to the algorithm, but they also substantially increase the complexity.

2.1 Traditional Pseudotrees

Pseudotrees are a common structure used in search procedures to allow parallel processing of independent branches. As defined in [6], a pseudotree is an arrangement of a graph G into a rooted tree T such that vertices in G that share an edge are in the same branch in T. A back-edge is an edge between a node X and any node which lies on the path from X to the root (excluding X's parent). Figure 1 shows a pseudotree with four nodes, three edges (A-B, B-C, B-D), and one back-edge (A-C). Four types of relationships between nodes in a pseudotree, also defined in [6], are:

• P(X), the parent of a node X: the single node higher in the pseudotree that is connected to X directly through a tree edge
• C(X), the children of a node X: the set of nodes lower in the pseudotree that are connected to X directly through tree edges
• PP(X), the pseudo-parents of a node X: the set of nodes higher in the pseudotree that are connected to X directly through back-edges (in Figure 1, A = PP(C))
• PC(X), the pseudo-children of a node X: the set of nodes lower in the pseudotree that are connected to X directly through back-edges (in Figure 1, C = PC(A))

Figure 1: A traditional pseudotree. Solid line edges represent parent-child relationships and the dashed line represents a pseudo-parent-pseudo-child relationship.

Figure 2: A cross-edged pseudotree. Solid line edges represent parent-child relationships, the dashed line represents a pseudo-parent-pseudo-child relationship, and the dotted line represents a branch-parent-branch-child relationship. The bolded node, B, is the merge point for node E.

2.2 Cross-edged Pseudotrees

We define a cross-edge as an edge from node X
to a node Y that is above X but not in the path from X to the root. A cross-edged pseudotree is a traditional pseudotree with the addition of cross-edges. Figure 2 shows a cross-edged pseudotree with a cross-edge (D-E). In a cross-edged pseudotree we designate certain edges as primary. The set of primary edges defines a spanning tree of the nodes. The parent, child, pseudo-parent, and pseudo-child relationships from the traditional pseudotree are now defined in the context of this primary edge spanning tree. This definition also yields two additional types of relationships that may exist between nodes:

• BP(X), the branch-parents of a node X: the set of nodes higher in the pseudotree that are connected to X but are not in the primary path from X to the root (in Figure 2, D = BP(E))
• BC(X), the branch-children of a node X: the set of nodes lower in the pseudotree that are connected to X but are not in any primary path from X to any leaf node (in Figure 2, E = BC(D))

2.3 Pseudotree Generation

Current algorithms usually have a pre-execution phase to generate a traditional pseudotree from a general DCOP instance. Our DCPOP algorithm generates a cross-edged pseudotree in the same fashion. First, the DCOP instance translates directly into a graph with X as the set of vertices and an edge for each pair of variables represented in U.
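This first translation step can be sketched directly. The sketch below is ours, not the paper's implementation: it assumes the binary utility functions are given as a list of variable pairs and builds the undirected constraint graph as an adjacency map.

```python
def constraint_graph(variables, utility_scopes):
    """Translate a DCOP instance into its constraint graph: one vertex
    per variable in X, and an undirected edge for each pair of
    variables that appears together in some utility function in U."""
    adj = {x: set() for x in variables}
    for xi, xj in utility_scopes:  # binary utility functions only, as in the paper
        adj[xi].add(xj)
        adj[xj].add(xi)
    return adj

# The Figure 1 example: utility functions over A-B, B-C, B-D, and A-C.
adj = constraint_graph("ABCD", [("A", "B"), ("B", "C"), ("B", "D"), ("A", "C")])
print(sorted(adj["B"]))  # ['A', 'C', 'D']
```

Arranging this adjacency map into a rooted tree whose non-tree edges all run between ancestors and descendants is exactly what the traversal heuristics described next accomplish.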
Next, various heuristics are used to arrange this graph into a pseudotree. One common heuristic is to perform a guided depth-first search (DFS), as the resulting traversal is a pseudotree, and a DFS can easily be performed in a distributed fashion. We define an edge-traversal based method as any method that produces a pseudotree in which all parent/child pairs share an edge in the original graph. This includes DFS, breadth-first search, and best-first search based traversals. Our heuristics that generate cross-edged pseudotrees use a distributed best-first search traversal.

3. DPOP ALGORITHM

The original DPOP algorithm operates in three main phases. The first phase generates a traditional pseudotree from the DCOP instance using a distributed algorithm. The second phase joins utility hypercubes from children and the local node and propagates them towards the root. The third phase chooses an assignment for each domain in a top down fashion, beginning with the agent at the root node.

The complexity of DPOP depends on the size of the largest computation and utility message during phase two. It has been shown that this size directly corresponds to the induced width of the pseudotree generated in phase one [6]. DPOP uses polynomial time heuristics to generate the pseudotree since finding the minimum induced width pseudotree is NP-hard. Several distributed edge-traversal heuristics have been developed to find low width pseudotrees [8]. At the end of the first phase, each agent knows its parent, children, pseudo-parents, and pseudo-children.

3.1 Utility Propagation

Agents located at leaf nodes in the pseudotree begin the process by calculating a local utility hypercube. This hypercube at node X contains summed utilities for each combination of values in the domains for P(X) and PP(X). This hypercube has dimensional size equal to the number of pseudo-parents plus one. A message containing this hypercube is sent to P(X). Agents located at non-leaf nodes
wait for all messages from children to arrive. Once the agent at node Y has all utility messages, it calculates its local utility hypercube, which includes domains for P(Y), PP(Y), and Y. The local utility hypercube is then joined with all of the hypercubes from the child messages. At this point all utilities involving node Y are known, and the domain for Y may be safely eliminated from the joined hypercube. This elimination process chooses the best utility over the domain of Y for each combination of the remaining domains. A message containing this hypercube is now sent to P(Y). The dimensional size of this hypercube depends on the number of overlapping domains in received messages and the local utility hypercube. This dynamic programming based propagation phase continues until the agent at the root node of the pseudotree has received all messages from its children.

3.2 Value Propagation

Value propagation begins when the agent at the root node Z has received all messages from its children. Since Z has no parents or pseudo-parents, it simply combines the utility hypercubes received from its children. The combined hypercube contains only values for the domain of Z. At this point the agent at node Z simply chooses the assignment for its domain that has the best utility. A value propagation message with this assignment is sent to each node in C(Z). Each other node then receives a value propagation message from its parent and chooses the assignment for its domain that has the best utility given the assignments received in the message. The node adds its domain assignment to the assignments it received and passes the set of assignments to its children. The algorithm is complete when all nodes have chosen an assignment for their domain.

4. DCPOP ALGORITHM

Our extension to the original DPOP algorithm, shown in Algorithm 1, shares the same three phases. The first phase generates the cross-edged pseudotree for the DCOP instance. The second phase
merges branches and propagates the utility hypercubes.\nThe third phase chooses assignments for domains at branch merge points and in a top down fashion, beginning with the agent at the root node.\nFor the first phase we generate a pseudotree using several distributed heuristics and select the one with lowest overall complexity.\nThe complexity of the computation and utility message size in DCPOP does not directly correspond to the induced width of the cross-edged pseudotree.\nInstead, we use a polynomial time method for calculating the maximum computation and utility message size for a given cross-edged pseudotree.\nA description of this method and the pseudotree selection process appears in Section 5.\nAt the end of the first phase, each agent knows its parent, children, pseudo-parents, pseudo-children, branch-parents, and branch-children.\n4.1 Merging Branches and Utility Propagation\nIn the original DPOP algorithm a node X only had utility functions involving its parent and its pseudo-parents.\nIn DCPOP, a node X is allowed to have a utility function involving a branch-parent.\nThe concept of a branch can be seen in Figure 2 with node E representing our node X.\nThe two distinct paths from node E to node B are called branches of E.\nThe single node where all branches of E meet is node B, which is called the merge point of E. 
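The merge-point idea can be sketched concretely. Assuming the Figure 2 structure implied by the text (A at the root, B below A, C and D children of B, and E with parent C plus branch-parent D — the dictionary encoding and function names here are our own, not from the paper), the merge point of a node's branches is the deepest ancestor shared by every branch path:

```python
# Sketch: locating the merge point of a node's branches in a
# cross-edged pseudotree. The structure below is our reading of
# Figure 2; the paper does not give a concrete data structure.

def ancestors(parent, node):
    """Path from node up to the root, inclusive."""
    path = [node]
    while parent[node] is not None:
        node = parent[node]
        path.append(node)
    return path

def merge_point(parent, node, branch_parents):
    """First common ancestor of all branches leaving `node`:
    one branch through its parent, one through each branch-parent."""
    paths = [ancestors(parent, parent[node])]
    paths += [ancestors(parent, bp) for bp in branch_parents]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    # Common ancestors form a suffix of each path, so the first
    # common node met walking upward is the deepest one.
    return next(n for n in paths[0] if n in common)

parent = {'A': None, 'B': 'A', 'C': 'B', 'D': 'B', 'E': 'C'}
print(merge_point(parent, 'E', ['D']))  # -> B
```

With this encoding, E's two branches (through C and through D) meet at B, matching the text.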
Agents with nodes that have branch-parents begin by sending a utility propagation message to each branch-parent.\nThis message includes a two dimensional utility hypercube with domains for the node X and the branch-parent BP (X).\nIt also includes a branch information structure which contains the origination node of the branch, X, the total number of branches originating from X, and the number of branches originating from X that are merged into a single representation by this branch information structure (this number starts at 1).\nIntuitively when the number of merged branches equals the total number of originating branches, the algorithm has reached the merge point for X.\nIn Figure 2, node E sends a utility propagation message to its branch-parent, node D.\nThis message has dimensions for the domains of E and D, and includes branch information with an origin of E, 2 total branches, and 1 merged branch.\nAs in the original DPOP utility propagation phase, an agent at leaf node X sends a utility propagation message to its parent.\nIn DCPOP this message contains dimensions for the domains of P (X) and PP (X).\nIf node X also has branch-parents, then the utility propagation message also contains a dimension for the domain of X, and will include a branch information structure.\nIn Figure 2, node E sends a utility propagation message to its parent, node C.\nThis message has dimensions for the domains of E and C, and includes branch information with an origin of E, 2 total branches, and 1 merged branch.\nWhen a node Y receives utility propagation messages from all of\nThe Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 743\nits children and branch-children, it merges any branches with the same origination node X.\nThe merged branch information structure accumulates the number of merged branches for X.\nIf the cumulative total number of merged branches equals the total number of branches, then Y is the merge point for X.\nThis means that the utility hypercubes present at Y contain all information about the valuations for utility functions involving node X.\nIn addition to the typical elimination of the domain of Y from the utility hypercubes, we can now safely eliminate the domain of X from the utility hypercubes.\nTo illustrate this process, we will examine what happens in the second phase for node B in Figure 2.\nIn the second phase Node B receives two utility propagation messages.\nThe first comes from node C and includes dimensions for domains E, B, and A.\nIt also has a branch information structure with origin of E, 2 total branches, and 1 merged branch.\nThe second comes from node D and includes dimensions for domains E and B.\nIt also has a branch information structure with origin of E, 2 total branches, and 1 merged branch.\nNode B then merges the branch information structures from both messages because they have the same origination, node E.\nSince the number of merged branches originating from E is now 2 and the total branches originating from E is 2, node B now eliminates the dimensions for domain E. Node B also eliminates the dimension for its own domain, leaving only information about domain A. Node B then sends a utility propagation message to node A, containing only one dimension for the domain of A. 
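The merge at node B can be sketched numerically. This is a minimal illustration, not the paper's implementation: the domain sizes and utilities are made up, and the numpy-array encoding of hypercubes is our own assumption.

```python
# Sketch of the hypercube merge at node B (Figure 2): join the
# message from C (domains E, B, A) with the message from D
# (domains E, B), then eliminate E (both of its branches have now
# merged) and the local domain B by maximizing, leaving only A.
import numpy as np

np.random.seed(0)                          # reproducible toy utilities
dE, dB, dA = 2, 3, 2
util_from_C = np.random.rand(dE, dB, dA)   # axes: (E, B, A)
util_from_D = np.random.rand(dE, dB)       # axes: (E, B)

# Join: broadcast-add over the shared domains E and B.
joined = util_from_C + util_from_D[:, :, np.newaxis]

# B is the merge point for E's branches, so E and B are both
# eliminated here; the message to A has a single dimension.
msg_to_A = joined.max(axis=(0, 1))
print(msg_to_A.shape)  # -> (2,)
```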
Although not possible in DPOP, this method of utility propagation and dimension elimination may produce hypercubes at node Y that do not share any domains.\nIn DCPOP we do not join domain independent hypercubes, but instead may send multiple hypercubes in the utility propagation message sent to the parent of Y.\nThis lazy approach to joins helps to reduce message sizes.\n4.2 Value Propagation\nAs in DPOP, value propagation begins when the agent at the root node Z has received all messages from its children.\nAt this point the agent at node Z chooses the assignment for its domain that has the best utility.\nIf Z is the merge point for the branches of some node X, Z will also choose the assignment for the domain of X. Thus any node that is a merge point will choose assignments for a domain other than its own.\nThese assignments are then passed down the primary edge hierarchy.\nIf node X in the hierarchy has branch-parents, then the value assignment message from P (X) will contain an assignment for the domain of X. 
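The space saving from the lazy join described above can be illustrated with concrete sizes (the numpy encoding and domain sizes are our own, purely for illustration):

```python
# Sketch: two hypercubes with disjoint domains. DCPOP sends them
# separately; eagerly joining them would multiply their sizes.
import numpy as np

np.random.seed(1)
cube1 = np.random.rand(4, 4)   # domains {W, X}
cube2 = np.random.rand(4, 4)   # domains {Y, Z}; no overlap with cube1

separate = cube1.size + cube2.size    # two messages: 16 + 16 values
joined = cube1[:, :, None, None] + cube2[None, None, :, :]
print(separate, joined.size)  # -> 32 256
```

Sending 32 values instead of 256 is exactly the reduction the lazy approach buys when received hypercubes share no domains.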
Every node in the hierarchy adds any assignments it has chosen to the ones it received and passes the set of assignments to its children.\nThe algorithm is complete when all nodes have chosen or received an assignment for their domain.\n4.3 Proof of Correctness\nWe will prove the correctness of DCPOP by first noting that DCPOP fully extends DPOP and then examining the two cases for value assignment in DCPOP.\nGiven a traditional pseudotree as input, the DCPOP algorithm execution is identical to DPOP.\nUsing a traditional pseudotree arrangement no nodes have branch-parents or branch-children since all edges are either back-edges or tree edges.\nThus the DCPOP algorithm using a traditional pseudotree sends only utility propagation messages that contain domains belonging to the parent or pseudo-parents of a node.\nSince no node has any branch-parents, no branches exist, and thus no node serves as a merge point for any other node.\nThus all value propagation assignments are chosen at the node of the assignment domain.\nFor DCPOP execution with cross-edged pseudotrees, some nodes serve as merge points.\nWe note that any node X that is not a merge point assigns its value exactly as in DPOP.\nThe local utility hypercube at X contains domains for X, P (X), PP (X), and BC (X).\nAs in DPOP the value assignment message received at X includes the values assigned to P (X) and PP (X).\nAlso, since X is not a merge point, all assignments to BC (X) must have been calculated at merge points higher in the tree and are in the value assignment message from P (X).\nThus after eliminating domains for which assignments are known, only the domain of X is left.\nThe agent at node X can now correctly choose the assignment with maximum utility for its own domain.\nIf node X is a merge point for some branch-child Y, we know that X must be a node along the path from Y to the root, and from P (Y) and all BP (Y) to the root.\nFrom the algorithm, we know that Y necessarily has all information 
from C (Y), PC (Y), and BC (Y) since it waits for their messages.\nNode X has information about all nodes below it in the tree, which would include Y, P (Y), BP (Y), and those PP (Y) that are below X in the tree.\nFor any PP (Y) above X in the tree, X receives the assignment for the domain of PP (Y) in the value assignment message from P (X).\nThus X has utility information about all of the utility functions of which Y is a part.\nBy eliminating domains included in the value assignment message, node X is left with a local utility hypercube with domains for X and Y.\nThe agent at node X can now correctly choose the assignments with maximum utility for the domains of X and Y.\n4.4 Complexity Analysis\nThe first phase of DCPOP sends one message to each P (X), PP (X), and BP (X).\nThe second phase sends one value assignment message to each C (X).\nThus, DCPOP produces a linear number of messages with respect to the number of edges (utility functions) in the cross-edged pseudotree and the original DCOP instance.\nThe actual complexity of DCPOP depends on two additional measurements: message size and computation size.\nMessage size and computation size in DCPOP depend on the number of overlapping branches as well as the number of overlapping back-edges.\nIt was shown in [6] that the number of overlapping back-edges is equal to the induced width of the pseudotree.\nIn a poorly constructed cross-edged pseudotree, the number of overlapping branches at node X can be as large as the total number of descendants of X. 
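The linear message count can be checked on a toy instance. The dictionary encoding and the Figure-2-style example below are our own assumptions; each node sends one UTIL message along its parent edge and one along each pseudo-parent and branch-parent edge, plus one VALUE message per tree edge.

```python
# Sketch: UTIL messages equal the number of constraint edges
# (tree edges + back-edges + cross-edges), so DCPOP's message
# count is linear in the edges of the DCOP instance.
def message_counts(parent, pseudo_parents, branch_parents):
    tree_edges = sum(1 for p in parent.values() if p is not None)
    util = tree_edges                                 # to P(X)
    util += sum(len(v) for v in pseudo_parents.values())  # to PP(X)
    util += sum(len(v) for v in branch_parents.values())  # to BP(X)
    value = tree_edges            # one VALUE message per child
    return util, value

parent = {'A': None, 'B': 'A', 'C': 'B', 'D': 'B', 'E': 'C'}
pseudo_parents = {'C': ['A']}     # back-edge C-A
branch_parents = {'E': ['D']}     # cross-edge E-D
util, value = message_counts(parent, pseudo_parents, branch_parents)
print(util, value)  # -> 6 4: UTIL messages match the 6 constraint edges
```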
Thus, the total message size in DCPOP in a poorly constructed instance can be space-exponential in the total number of nodes in the graph. However, in practice a well constructed cross-edged pseudotree can achieve much better results. Later we address the issue of choosing well constructed cross-edged pseudotrees from a set.

We introduce an additional measurement: the maximum sequential path cost through the algorithm. This measurement directly relates to the maximum amount of parallelism achievable by the algorithm. To take this measurement we first store the total computation size for each node during phases two and three. This computation size represents the number of individual accesses to a value in a hypercube at each node. For example, a join between two domains of size 4 costs 4 * 4 = 16. Two directed acyclic graphs (DAGs) can then be drawn: one with the utility propagation messages as edges and the phase two costs at nodes, and the other with the value assignment messages as edges and the phase three costs at nodes. The maximum sequential path cost is equal to the sum of the longest path on each DAG from the root to any leaf node.

Algorithm 1: DCPOP
Phase 1: cross-edged pseudotree creation
 2: elect leader from all Xj ∈ X
 3: elected leader initiates pseudotree creation
 4: afterwards, Xi knows P(Xi), PP(Xi), BP(Xi), C(Xi), BC(Xi) and PC(Xi)
Phase 2: UTIL message propagation
 5: if |BP(Xi)| > 0 then
 6:   BRANCH_Xi ← |BP(Xi)| + 1
 7:   for all Xk ∈ BP(Xi) do
 8:     UTIL_Xi(Xk) ← Compute_utils(Xi, Xk)
 9:     Send_message(Xk, UTIL_Xi(Xk), BRANCH_Xi)
10: if |C(Xi)| = 0 (i.e. Xi is a leaf node) then
11:   UTIL_Xi(P(Xi)) ← Compute_utils(P(Xi), PP(Xi)) for all PP(Xi)
12:   Send_message(P(Xi), UTIL_Xi(P(Xi)), BRANCH_Xi)
13:   Send_message(PP(Xi), empty UTIL, empty BRANCH) to all PP(Xi)
14: activate UTIL_Message_handler()
Phase 3: VALUE message propagation
15: activate VALUE_Message_handler()
END ALGORITHM

UTIL_Message_handler(Xk, UTIL_Xk(Xi), BRANCH_Xk)
16: store UTIL_Xk(Xi), BRANCH_Xk(Xi)
17: if UTIL messages from all children and branch-children arrived then
18:   for all Bj ∈ BRANCH(Xi) do
19:     if Bj is merged then
20:       join all hypercubes where Bj ∈ UTIL(Xi)
21:       eliminate Bj from the joined hypercube
22:   if P(Xi) == null (that means Xi is the root) then
23:     v*_i ← Choose_optimal(null)
24:     Send VALUE(Xi, v*_i) to all C(Xi)
25:   else
26:     UTIL_Xi(P(Xi)) ← Compute_utils(P(Xi), PP(Xi))
27:     Send_message(P(Xi), UTIL_Xi(P(Xi)), BRANCH_Xi(P(Xi)))

VALUE_Message_handler(VALUE_{Xi, P(Xi)})
28: add all Xk ← v*_k ∈ VALUE_{Xi, P(Xi)} to agent view
29: Xi ← v*_i = Choose_optimal(agent view)
30: Send VALUE_{Xl, Xi} to all Xl ∈ C(Xi)

5. HEURISTICS

In our assessment of complexity in DCPOP we focused on the worst case possibly produced by the algorithm. We acknowledge that in real world problems the generation of the pseudotree has a significant impact on the actual performance. The problem of finding the best pseudotree for a given DCOP instance is NP-hard. Thus a heuristic is used for generation, and the performance of the algorithm depends on the pseudotree found by the heuristic. Some previous research focused on finding heuristics to generate good pseudotrees [8]. While we have developed some heuristics that generate good cross-edged pseudotrees for use with DCPOP, our focus has been to use multiple heuristics and then select the best pseudotree from those generated. We consider only heuristics that run in polynomial time with respect to the number of nodes in the original DCOP instance. The actual DCPOP algorithm has worst-case exponential complexity, but we can calculate the maximum message size, computation size, and sequential path cost for a
given cross-edged pseudotree in linear space-time complexity.\nTo do this, we simply run the algorithm without attempting to calculate any of the local utility hypercubes or optimal value assignments.\nInstead, messages include dimensional and branch information but no utility hypercubes.\nAfter each heuristic completes its generation of a pseudotree, we execute the measurement procedure and propagate the measurement information up to the chosen root in that pseudotree.\nThe root then broadcasts the total complexity for that heuristic to all nodes.\nAfter all heuristics have had a chance to complete, every node knows which heuristic produced the best pseudotree.\nEach node then proceeds to begin the DCPOP algorithm using its knowledge of the pseudotree generated by the best heuristic.\nThe heuristics used to generate traditional pseudotrees perform a distributed DFS traversal.\nThe general distributed algorithm uses a token passing mechanism and a linear number of messages.\nImproved DFS based heuristics use a special procedure to choose the root node, and also provide an ordering function over the neighbors of a node to determine the order of path recursion.\nThe DFS based heuristics used in our experiments come from the work done in [4, 8].\n5.1 The best-first cross-edged pseudotree heuristic\nThe heuristics used to generate cross-edged pseudotrees perform a best-first traversal.\nA general distributed best-first algorithm for node expansion is presented in Algorithm 2.\nAn evaluation function at each node provides the values that are used to determine the next best node to expand.\nNote that in this algorithm each node only exchanges its best value with its neighbors.\nIn our experiments we used several evaluation functions that took as arguments an ordered list of ancestors and a node, which contains a list of neighbors (with each neighbor's placement depth in the tree if it was placed).\nFrom these we can calculate branchparents, branch-children, and unknown 
relationships for a potential node placement. The best overall function calculated the value as ancestors − (branch-parents + branch-children), with the number of unknown relationships as a tiebreak. After completion each node has knowledge of its parent and ancestors, so it can easily determine which connected nodes are pseudo-parents, branch-parents, pseudo-children, and branch-children.

The complexity of the best-first traversal depends on the complexity of the evaluation function. Assuming a complexity of O(V) for the evaluation function, which is the case for our best overall function, the best-first traversal is O(V · E), which is at worst O(n³). For each v ∈ V we perform a place operation, and find the next node to place using the getBestNeighbor operation. The place operation is at most O(V) because of the sent messages. Finding the next node uses recursion and traverses only already placed nodes, so it has O(V) recursions. Each recursion performs a recursive getBestNeighbor operation that traverses all placed nodes and their neighbors. This operation is O(V · E), but results can be cached using only O(V) space at each node. Thus we have O(V · (V + V + V · E)) = O(V² · E). If we are smart about evaluating local changes when each node receives placement messages from its neighbors, and cache the results, the getBestNeighbor operation is only O(E). This increases the complexity of the place operation, but over all placements the total complexity is only O(V · E). Thus we have an overall complexity of O(V · E + V · (V + E)) = O(V · E).

6. COMPARISON OF COMPLEXITY IN DPOP AND DCPOP

We have already shown that given the same input, DCPOP performs the same as DPOP. We have also shown that we can accurately predict the performance of a given pseudotree in linear space-time complexity. If we
use a constant number of heuristics to generate the set of pseudotrees, we can choose the best pseudotree in linear space-time complexity. We will now show that there exists a DCOP instance for which a cross-edged pseudotree outperforms all possible traditional pseudotrees (based on edge-traversal heuristics).

In Figure 3(a) we have a DCOP instance with six nodes. This is a bipartite graph with each partition fully connected to the other partition.

Figure 3: (a) The DCOP instance (b) A traditional pseudotree arrangement for the DCOP instance (c) A cross-edged pseudotree arrangement for the DCOP instance

In Figure 3(b) we see a traditional pseudotree arrangement for this DCOP instance. It is easy to see that any edge-traversal based heuristic cannot expand two nodes from the same partition in succession. We also see that no node can have more than one child, because any such arrangement would be an invalid pseudotree. Thus any traditional pseudotree arrangement for this DCOP instance must take the form of Figure 3(b). We can see that the back-edges F-B and F-A overlap node C. Node C also has a parent E, and a back-edge with D.
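The overlap claim for Figure 3(b) can be checked mechanically. We assume the chain arrangement the text describes (partitions alternating down the chain, with C's parent being E); the dictionary encoding and the helper below are our own, and a back-edge (desc, anc) is taken to overlap every node strictly between desc and anc on the tree path.

```python
# Sketch: counting back-edges that overlap each node of a
# traditional pseudotree; the text equates the maximum such count
# with the induced width.

def overlap_counts(parent, back_edges):
    counts = {n: 0 for n in parent}
    for desc, anc in back_edges:
        node = parent[desc]
        while node is not None and node != anc:
            counts[node] += 1
            node = parent[node]
    return counts

# Chain A-D-B-E-C-F (alternating partitions {A,B,C} and {D,E,F}),
# with F's back-edges to B and to A, as in Figure 3(b).
parent = {'A': None, 'D': 'A', 'B': 'D', 'E': 'B', 'C': 'E', 'F': 'C'}
counts = overlap_counts(parent, [('F', 'B'), ('F', 'A')])
print(counts['C'])  # -> 2: both back-edges overlap node C
```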
Using the original DPOP algorithm (or DCPOP, since they are identical in this case), we find that the computation at node C involves five domains: A, B, C, D, and E. In contrast, the cross-edged pseudotree arrangement in Figure 3(c) requires a maximum of only four domains in any computation during DCPOP. Since node A is the merge point for branches from both B and C, we can see that each of the nodes D, E, and F has two overlapping branches. In addition, each of these nodes has node A as its parent. Using the DCPOP algorithm we find that the computation at node D (or E or F) involves four domains: A, B, C, and D (or E or F). Since no better traditional pseudotree arrangement can be created using an edge-traversal heuristic, we have shown that DCPOP can outperform DPOP even if we use the optimal pseudotree found through edge-traversal.

We acknowledge that pseudotree arrangements that allow parent-child relationships without an actual constraint can solve the problem in Figure 3(a) with a maximum computation size of four domains. However, current heuristics used with DPOP do not produce such pseudotrees, and such a heuristic would be difficult to distribute, since each node would require information about nodes with which it has no constraint. Also, while we do not prove it here, cross-edged pseudotrees can produce smaller message sizes than such pseudotrees even when the computation size is similar. In practice, since finding the best pseudotree arrangement is NP-hard, we find that heuristics that produce cross-edged pseudotrees often produce significantly smaller computation and message sizes.

7. EXPERIMENTAL RESULTS

Existing performance metrics for DCOP algorithms include the total number of messages, synchronous clock cycles, and message size. We have already shown that the total number of messages is linear with respect to the number of constraints in the DCOP instance. We also introduced the maximum sequential path cost (PC) as a measurement of the maximum amount of parallelism achievable by the algorithm. The maximum sequential path cost is equal to the sum of the computations performed on the longest path from the root to any leaf node. We also include as metrics the maximum computation size in number of dimensions (CD) and the maximum message size in number of dimensions (MD). To analyze the relative complexity of a given DCOP instance, we find the minimum induced width (IW) of any traditional pseudotree produced by a heuristic for the original DPOP.

7.1 Generic DCOP instances

For our initial tests we randomly generated two sets of problems with 3000 cases in each. Each problem was generated by assigning a random number (picked from a range) of constraints to each variable. The generator then created binary constraints until each variable reached its maximum number of constraints. The first set uses 20 variables, and the best DPOP IW ranges from 1 to 16 with an average of 8.5. The second set uses 100 variables, and the best DPOP IW ranges from 2 to 68 with an average of 39.3. Since most of the problems in the second set were too complex to actually compute the solution, we took measurements of the metrics using the techniques described earlier in Section 5 without actually solving the problem. Results are shown for the first set in Table 1 and for the second set in Table 2. For the two problem sets we split the cases into low density and high density categories. Low density cases consist of those problems that have a best DPOP IW less than or equal to half of the total number of nodes
(e.g., IW < 10 for the 20 node problems and IW < 50 for the 100 node problems). High density problems consist of the remainder of the problem sets. In both Table 1 and Table 2 we have listed performance metrics for the original DPOP algorithm, the DCPOP algorithm using only cross-edged pseudotrees (DCPOP-CE), and the DCPOP algorithm using traditional and cross-edged pseudotrees (DCPOP-All). The pseudotrees used for DPOP were generated using 5 heuristics: DFS, DFS MCN, DFS CLIQUE MCN, DFS MCN DSTB, and DFS MCN BEC. These are all versions of the guided DFS traversal discussed in Section 5. The cross-edged pseudotrees used for DCPOP-CE were generated using 5 heuristics: MCN, LCN, MCN A-B, LCN A-B, and LCSG A-B. These are all versions of the best-first traversal discussed in Section 5. For both DPOP and DCPOP-CE we chose the best pseudotree produced by their respective 5 heuristics for each problem in the set. For DCPOP-All we chose the best pseudotree produced by all 10 heuristics for each problem in the set. For the CD and MD metrics the value shown is the average number of dimensions. For the PC metric the value shown is the natural logarithm of the maximum sequential path cost (since the actual value grows exponentially with the complexity of the problem). The final row in both tables is a measurement of the improvement of DCPOP-All over DPOP. For the CD and MD metrics the value shown is a reduction in the number of dimensions. For the PC metric the value shown is the percentage reduction in the maximum sequential path cost (% = (DPOP − DCPOP) / DCPOP × 100). Notice that DCPOP-All outperforms DPOP on all metrics. This logically follows from our earlier assertion that given the same input, DCPOP performs exactly the same as DPOP. Thus, given the choice between the pseudotrees produced by all 10 heuristics, DCPOP-All will always outperform DPOP.

Table 1: 20 node problems
Table 2: 100 node problems
Figure 4: Computation Dimension Size
Figure 5: Message Dimension Size
Figure 6: Path Cost
Table 3: Meeting Scheduling Problems

Another trend we notice is that the improvement is greater for high density problems than for low density problems. We show this trend in greater detail in Figures 4, 5, and 6. Notice how the improvement increases as the complexity of the problem increases.

7.2 Meeting Scheduling Problem

In addition to our initial generic DCOP tests, we ran a series of tests on the Meeting Scheduling Problem (MSP) as described in [6]. The problem setup includes a number of people that are grouped into departments. Each person must attend a specified number of meetings. Meetings can be held within departments or among departments, and can be assigned to one of eight time slots. The MSP maps to a DCOP instance where each variable represents the time slot that a specific person will attend a specific meeting. All variables that belong to the same person have mutual exclusion constraints so that the person cannot attend more than one meeting during the same time slot. All variables that belong to the same meeting have equality constraints so that all of the participants choose the same time slot. Unary constraints are placed on each variable to account for a person's valuation of each meeting and time slot. For our tests we generated 100 sample problems for each combination of agents and meetings. Results are shown in Table 3. The values in the first five columns represent (in left to right order) the total number of agents, the total number of meetings, the total number of variables, the average total number of constraints, and the average minimum IW produced by a traditional pseudotree. The last three columns show the same metrics we used for the generic DCOP instances, except this time we only show the improvements of DCPOP-All over DPOP. Performance is better on average for all MSP instances, but again we see larger improvements for more complex problem instances.

8. CONCLUSIONS
AND FUTURE WORK

We presented a complete, distributed algorithm that solves general DCOP instances using cross-edged pseudotree arrangements. Our algorithm extends the DPOP algorithm by adding additional utility propagation messages and introducing the concept of branch merging during the utility propagation phase. Our algorithm also allows value assignments to occur at higher-level merge points for lower-level nodes. We have shown that DCPOP fully extends DPOP by performing the same operations given the same input. We have also shown through examples and experimental data that DCPOP can achieve greater performance on some problem instances by extending the allowable input set to include cross-edged pseudotrees. We placed particular emphasis on the role that edge-traversal heuristics play in the generation of pseudotrees. We have shown that the performance penalty for generating multiple heuristics is minimal, and that we can choose the best generated pseudotree in linear space-time complexity. Given the importance of a good pseudotree for performance, future work will include new heuristics to find better pseudotrees. Future work will also include adapting existing DPOP extensions [5, 7] that support different problem domains for use with DCPOP.

Unifying Distributed Constraint Algorithms in a BDI Negotiation Framework
Bao Chau Le Dinh and Kiam Tian Seow
School of Computer Engineering, Nanyang Technological University, Republic of Singapore
{ledi0002,asktseow}@ntu.edu.sg

ABSTRACT
This paper presents a novel, unified distributed constraint satisfaction framework based on automated negotiation. The Distributed Constraint Satisfaction Problem (DCSP) is one that entails several agents to search for an agreement, which is a consistent combination of actions that satisfies their mutual constraints in a shared environment. By anchoring the DCSP search on automated negotiation, we show that several well-known DCSP algorithms are actually mechanisms that can reach agreements through a common Belief-Desire-Intention (BDI) protocol, but using different strategies. A major motivation for this BDI framework is that it not only provides a conceptually clearer understanding of existing DCSP algorithms from an agent model perspective, but also opens up the opportunities to extend and develop new
strategies for DCSP.\nTo this end, a new strategy called Unsolicited Mutual Advice (UMA) is proposed.\nPerformance evaluation shows that the UMA strategy can outperform some existing mechanisms in terms of computational cycles.\nCategories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Intelligent Agents, Multiagent Systems General Terms Algorithms, Design, Experimentation 1.\nINTRODUCTION At the core of many emerging distributed applications is the distributed constraint satisfaction problem (DCSP) - one which involves finding a consistent combination of actions (abstracted as domain values) to satisfy the constraints among multiple agents in a shared environment.\nImportant application examples include distributed resource allocation [1] and distributed scheduling [2].\nMany important algorithms, such as distributed breakout (DBO) [3], asynchronous backtracking (ABT) [4], asynchronous partial overlay (APO) [5] and asynchronous weak-commitment (AWC) [4], have been developed to address the DCSP and provide the agent solution basis for its applications.\nBroadly speaking, these algorithms are based on two different approaches, either extending from classical backtracking algorithms [6] or introducing mediation among the agents.\nWhile there has been no lack of efforts in this promising research field, especially in dealing with outstanding issues such as resource restrictions (e.g., limits on time and communication) [7] and privacy requirements [8], there is unfortunately no conceptually clear treatment to prise open the model-theoretic workings of the various agent algorithms that have been developed.\nAs a result, for instance, a deeper intellectual understanding on why one algorithm is better than the other, beyond computational issues, is not possible.\nIn this paper, we present a novel, unified distributed constraint satisfaction framework based on automated negotiation [9].\nNegotiation is viewed as a process of several agents searching 
for a solution called an agreement. The search can be realized via a negotiation mechanism (or algorithm) by which the agents follow a high-level protocol prescribing the rules of interaction, using a set of strategies devised to select their own preferences at each negotiation step. Anchoring the DCSP search on automated negotiation, we show in this paper that several well-known DCSP algorithms [3] are actually mechanisms that share the same Belief-Desire-Intention (BDI) interaction protocol to reach agreements, but use different action or value selection strategies. The proposed framework provides not only a clearer understanding of existing DCSP algorithms from a unified BDI agent perspective, but also opens up the opportunities to extend and develop new strategies for DCSP. To this end, a new strategy called Unsolicited Mutual Advice (UMA) is proposed. Our performance evaluation shows that UMA can outperform ABT and AWC in terms of the average number of computational cycles for both the sparse and critical coloring problems [6].

The rest of this paper is organized as follows. In Section 2, we provide a formal overview of DCSP. Section 3 presents a BDI negotiation model by which a DCSP agent reasons. Section 4 presents the existing algorithms ABT, AWC and DBO as different strategies formalized on a common protocol. A new strategy called Unsolicited Mutual Advice is proposed in Section 5; our empirical results and discussion attempt to highlight the merits of the new strategy over existing ones. Section 6 concludes the paper and points to some future work.

2. DCSP: PROBLEM FORMALIZATION

The DCSP [4] considers the following environment.
• There are n agents with k variables x0, x1, ..., xk−1, n ≤ k, which have values in domains D1, D2, ..., Dk, respectively. We define a partial function B over the product set {0, 1, ..., (n−1)} × {0, 1, ...
, (k \u22121)} such that, that variable xj belongs to agent i is denoted by B(i, j)!\n.\nThe exclamation mark `!''\nmeans `is defined''.\n\u2022 There are m constraints c0, c1, \u00b7 \u00b7 \u00b7 cm\u22121 to be conjunctively satisfied.\nIn a similar fashion as defined for B(i, j), we use E(l, j)!\n, (0 \u2264 l < m, 0 \u2264 j < k), to denote that xj is relevant to the constraint cl.\nThe DCSP may be formally stated as follows.\nProblem Statement: \u2200i, j (0 \u2264 i < n)(0 \u2264 j < k) where B(i, j)!\n, find the assignment xj = dj \u2208 Dj such that \u2200l (0 \u2264 l < m) where E(l, j)!\n, cl is satisfied.\nA constraint may consist of different variables belonging to different agents.\nAn agent cannot change or modify the assignment values of other agents'' variables.\nTherefore, in cooperatively searching for a DCSP solution, the agents would need to communicate with one another, and adjust and re-adjust their own variable assignments in the process.\n2.1 DCSP Agent Model In general, all DCSP agents must cooperatively interact, and essentially perform the assignment and reassignment of domain values to variables to resolve all constraint violations.\nIf the agents succeed in their resolution, a solution is found.\nIn order to engage in cooperative behavior, a DCSP agent needs five fundamental parameters, namely, (i) a variable [4] or a variable set [10], (ii) domains, (iii) priority, (iv) a neighbor list and (v) a constraint list.\nEach variable assumes a range of values called a domain.\nA domain value, which usually abstracts an action, is a possible option that an agent may take.\nEach agent has an assigned priority.\nThese priority values help decide the order in which they revise or modify their variable assignments.\nAn agent``s priority may be fixed (static) or changing (dynamic) when searching for a solution.\nIf an agent has more than one variable, each variable can be assigned a different priority, to help determine which variable assignment 
the agent should modify first. An agent which shares the same constraint with another agent is called the latter's neighbor. Each agent needs to refer to its list of neighbors during the search process. This list may be kept unchanged or updated at runtime. Similarly, each agent maintains a constraint list. The agent needs to ensure that there is no violation of the constraints in this list. Constraints can be added to or removed from an agent's constraint list at runtime. As with an agent, a constraint can also be associated with a priority value. Constraints with a high priority are said to be more important than constraints with a lower priority. To distinguish it from the priority of an agent, the priority of a constraint is called its weight.

3. THE BDI NEGOTIATION MODEL

The BDI model originates with the work of M. Bratman [11]. According to [12, Ch. 1], the BDI architecture is based on a philosophical model of human practical reasoning, and draws out the process of reasoning by which an agent decides which actions to perform at consecutive moments when pursuing certain goals. Grounding the scope to the DCSP framework, the common goal of all agents is finding a combination of domain values that satisfies a set of predefined constraints. In automated negotiation [9], such a solution is called an agreement among the agents. Within this scope, we were able to unearth the generic behavior of a DCSP agent and formulate it in a negotiation protocol, prescribed using the powerful concepts of BDI. Thus, our proposed negotiation model can be said to combine the BDI concepts with automated negotiation in a multiagent framework, allowing us to conceptually separate DCSP mechanisms into a common BDI interaction protocol and the adopted strategies.

3.1 The generic protocol

Figure 1 shows the basic reasoning steps in an arbitrary round of negotiation that constitute the new protocol. The solid line indicates the common component or
transition which always exists regardless of the strategy used. The dotted line indicates the component or transition which may or may not appear depending on the adopted strategy.

[Figure 1: The BDI interaction protocol]

Two types of messages are exchanged through this protocol, namely, the info message and the negotiation message. An info message perceived is a message sent by another agent. The message contains the current selected values and priorities of the variables of the sending agent. The main purpose of this message is to update the agent about the current environment. An info message is sent out at the end of one negotiation round (also called a negotiation cycle), and received at the beginning of the next round. A negotiation message is a message which may be sent within a round. This message is for mediation purposes. The agent may put different contents into this type of message as long as the format is agreed among the group. The format of the negotiation message and when it is to be sent out are subject to the strategy. A negotiation message can be sent out at the end of one reasoning step and received at the beginning of the next step. Mediation is a step of the protocol whose use depends on whether the agent's interaction with others is synchronous or asynchronous. In a synchronous mechanism, mediation is required in every negotiation round. In an asynchronous one, mediation is needed only in a negotiation round in which the agent receives a negotiation message. A more in-depth view of this mediation step is provided later in this section. The BDI protocol prescribes the skeletal structure for DCSP negotiation. We will show in Section 4 that several well-known DCSP mechanisms all inherit this generic model. The details of
the six main reasoning steps of the protocol (see Figure 1) are described as follows for a DCSP agent. For a conceptually clearer description, we assume that there is only one variable per agent.

• Percept. In this step, the agent receives info messages from its neighbors in the environment, and using its Percept function, returns an image P. This image contains the current values assigned to the variables of all agents in its neighbor list. The image P will drive the agent's actions in subsequent steps. The agent also updates its constraint list C using some criteria of the adopted strategy.

• Belief. Using the image P and constraint list C, the agent checks if there is any violated constraint. If there is no violation, the agent believes it is choosing a correct option and therefore takes no action. The agent will do nothing if it is in a local stable state - a snapshot of the variable assignments of the agent and all its neighbors by which they satisfy their shared constraints. When all agents are in their local stable states, the whole environment is said to be in a global stable state and an agreement is found.

[The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)]

In case the agent finds its value in conflict with some of its neighbors', i.e., the combination of values assigned to the variables leads to a constraint violation, the agent will first try to reassign its own variable using a specific strategy. If it finds a suitable option which meets some criteria of the adopted strategy, the agent will believe it should change to the new option. However, an agent cannot always find such an option. If no option can be found, the agent will believe it has no option, and therefore will request its neighbors to reconsider their variable assignments. To summarize, there are three types of beliefs that a DCSP agent can form: (i) it can change its variable
assignment to improve the current situation, (ii) it cannot change its variable assignment and some constraint violations cannot be resolved, and (iii) it need not change its variable assignment as all the constraints are satisfied. Once the beliefs are formed, the agent determines its desires, which are the options that attempt to resolve the current constraint violations.

• Desire. If the agent forms Belief (i), it generates a list of its own suitable domain values as its desire set. If the agent forms Belief (ii), it cannot ascertain its desire set, but will generate a sublist of agents from its neighbor list, whom it will ask to reconsider their variable assignments. How this sublist is created depends on the strategy devised for the agent. In this situation, the agent will use a virtual desire set that it determines based on its adopted strategy. If the agent forms Belief (iii), it has no desire to revise its domain value, and hence no intention.

• Intention. The agent selects a value from its desire set as its intention. An intention is the best desired option that the agent assigns to its variable. The criteria for selecting a desire as the agent's intention depend on the strategy used. Once the intention is formed, the agent may either proceed to the execution step, or undergo mediation. Again, the decision to do so is determined by some criteria of the adopted strategy.

• Mediation. This is an important function of the agent: if the agent executes its intention without performing intention mediation with its neighbors, the constraint violation between the agents may not be resolved. For example, suppose two agents have variables x1 and x2, associated with the same domain {1, 2}, and their shared constraint is (x1 + x2 = 3). If both variables are initialized with value 1, the agents will both concurrently switch between the values 2 and 1 in the absence of mediation between them. There are
two types of mediation: local mediation and group mediation. In the former, the agents exchange their intentions. When an agent receives another's intention which conflicts with its own, the agent must mediate between the intentions, by either changing its own intention or informing the other agent to change its intention. In the latter, one agent acts as a group mediator. This mediator collects the intentions from the group - a union of the agent and its neighbors - and determines which intention is to be executed. The result of this mediation is passed back to the agents in the group. Following mediation, the agent may proceed to the next reasoning step to execute its intention or begin a new negotiation round.

• Execution. This is the last step of a negotiation round. The agent executes by updating its variable assignment if the intention obtained at this step is its own. Following execution, the agent informs its neighbors about its new variable assignment and updated priority. To do so, the agent sends out an info message.

3.2 The strategy

A strategy plays an important role in the negotiation process. Within the protocol, it will often determine the efficiency of the search process in terms of computational cycles and message communication costs.

[Figure 2: BDI protocol with Asynchronous Backtracking strategy]

The design space when devising a strategy is influenced by the following dimensions: (i) asynchronous or synchronous, (ii) dynamic or static priority, (iii) dynamic or static constraint weight, (iv) the number of negotiation messages to be communicated, (v) the negotiation message format and (vi) the completeness property. In other words, these dimensions provide technical considerations for a strategy design.

4. DCSP ALGORITHMS: BDI PROTOCOL + STRATEGIES

In this
section, we apply the proposed BDI negotiation model presented in Section 3 to expose the BDI protocol and the different strategies used by three well-known algorithms, ABT, AWC and DBO. All these algorithms assume that there is only one variable per agent. Under our framework, we call the strategies applied the ABT, AWC and DBO strategies, respectively. To describe each strategy formally, the following mathematical notation is used:

• n is the number of agents, m is the number of constraints;

• xi denotes the variable held by agent i, (0 ≤ i < n);

• Di denotes the domain of variable xi; Fi denotes the neighbor list of agent i; Ci denotes its constraint list;

• pi denotes the priority of agent i; and Pi = {(xj = vj, pj = k) | agent j ∈ Fi, vj ∈ Dj is the current value assigned to xj and the priority value k is a positive integer} is the perception of agent i;

• wl denotes the weight of constraint l, (0 ≤ l < m);

• Si(v) is the total weight of the violated constraints in Ci when agent i's variable has the value v ∈ Di.

4.1 Asynchronous Backtracking

Figure 2 presents the BDI negotiation model incorporating the Asynchronous Backtracking (ABT) strategy. As mentioned in Section 3, since ABT is an asynchronous mechanism, the mediation step is needed only in a negotiation round in which an agent receives a negotiation message. For agent i, beginning initially with (wl = 1, (0 ≤ l < m); pi = i, (0 ≤ i < n)) and Fi containing all the agents who share constraints with agent i, its BDI-driven ABT strategy is described as follows.

Step 1 - Percept: Update Pi upon receiving the info messages from the neighbors (in Fi). Update Ci to be the list of constraints which only involve agents in Fi that have equal or higher priority than agent i.

Step 2 - Belief: The belief function GB(Pi, Ci) will return a value bi
∈ {0, 1, 2}, decided as follows:

• bi = 0 when agent i can find an optimal option, i.e., if (Si(vi) ≠ 0 or vi is in the bad values list) and (∃a ∈ Di)(Si(a) = 0) with a not in the list of domain values called the bad values list. (Initially this list is empty, and it is cleared when a neighbor of higher priority changes its variable assignment.)

• bi = 1 when it cannot find an optimal option, i.e., if (∀a ∈ Di)(Si(a) ≠ 0 or a is in the bad values list).

• bi = 2 when its current variable assignment is an optimal option, i.e., if Si(vi) = 0 and vi is not in the bad values list.

Step 3 - Desire: The desire function GD(bi) will return a desire set denoted by DS, decided as follows:

• If bi = 0, then DS = {a | (a ≠ vi), (Si(a) = 0) and a is not in the bad values list}.

• If bi = 1, then DS = ∅; the agent also finds agent k, determined by {k | pk = min(pj) with agent j ∈ Fi and pk > pi}.

• If bi = 2, then DS = ∅.

Step 4 - Intention: The intention function GI(DS) will return an intention, decided as follows:

• If DS ≠ ∅, then select an arbitrary value (say, vi) from DS as the intention.

• If DS = ∅, then assign nil as the intention (to denote its lack thereof).

Step 5 - Execution:

• If agent i has a domain value as its intention, the agent updates its variable assignment with this value.

• If bi = 1, agent i sends a negotiation message to agent k, then removes k from Fi and begins its next negotiation round. The negotiation message contains the list of variable assignments of those agents in its neighbor list Fi that have a higher priority than agent i in the current image Pi.

Mediation: When agent i receives a negotiation message, several sub-steps are carried out, as follows:

• If the list of agents associated with the negotiation message contains agents which are not in Fi, it adds these agents to Fi, and requests these agents to add itself to
their neighbor lists. The request is considered a type of negotiation message.

• Agent i then checks whether the sender agent is updated with its current value vi. If so, the agent adds vi to its bad values list; otherwise it sends its current value to the sender agent. Following this step, agent i proceeds to the next negotiation round.

4.2 Asynchronous Weak Commitment Search

Figure 3 presents the BDI negotiation model incorporating the Asynchronous Weak Commitment (AWC) strategy. The model is similar to that incorporating the ABT strategy (see Figure 2). This is not surprising; AWC and ABT are strategically similar, differing only in the details of some reasoning steps. The distinguishing point of AWC is that when the agent cannot find a suitable variable assignment, it changes its priority to the highest among its group members ({i} ∪ Fi). For agent i, beginning initially with (wl = 1, (0 ≤ l < m); pi = i, (0 ≤ i < n)) and Fi containing all the agents who share constraints with agent i, its BDI-driven AWC strategy is described as follows.

Step 1 - Percept: This step is identical to the Percept step of ABT.

Step 2 - Belief: The belief function GB(Pi, Ci) will return a value bi ∈ {0, 1, 2}, decided as follows:

[Figure 3: BDI protocol with Asynchronous Weak-Commitment strategy]

• bi = 0 when the agent can find an optimal option, i.e., if (Si(vi) ≠ 0, or the assignment xi = vi together with the current variable assignments of the neighbors in Fi who have higher priority forms a nogood [4] stored in a list called the nogood list) and ∃a ∈ Di, Si(a) = 0 (initially the nogood list is empty).

• bi = 1 when the agent cannot find any optimal option, i.e., if ∀a ∈ Di, Si(a) ≠ 0.

• bi = 2 when the current assignment is an optimal option, i.e., if Si(vi) =
0 and the current state is not a nogood in the nogood list.

Step 3 - Desire: The desire function GD(bi) will return a desire set DS, decided as follows:

• If bi = 0, then DS = {a | (a ≠ vi), (Si(a) = 0) and the number of constraint violations with lower priority agents is minimized}.

• If bi = 1, then DS = {a | a ∈ Di and the number of violations of all relevant constraints is minimized}.

• If bi = 2, then DS = ∅.

Following, if bi = 1, agent i finds a list Ki of higher priority neighbors, defined by Ki = {k | agent k ∈ Fi and pk > pi}.

Step 4 - Intention: This step is similar to the Intention step of ABT. However, for this strategy, the negotiation message contains the variable assignments (of the current image Pi) of all the agents in Ki. This list of assignments is considered a nogood. If the same negotiation message has been sent out before, agent i will have a nil intention. Otherwise, the agent sends the message and saves the nogood in the nogood list.

Step 5 - Execution:

• If agent i has a domain value as its intention, the agent updates its variable assignment with this value.

• If bi = 1, it sends the negotiation message to its neighbors in Ki, and sets pi = max{pj} + 1, with agent j ∈ Fi.

Mediation: This step is identical to the Mediation step of ABT, except that agent i now adds the nogood contained in the received negotiation message to its own nogood list.

4.3 Distributed Breakout

Figure 4 presents the BDI negotiation model incorporating the Distributed Breakout (DBO) strategy. Essentially, by this synchronous strategy, each agent searches iteratively for improvement by reducing the total weight of the violated constraints. The iteration continues until no agent can improve further, at which time, if some constraints remain violated, the weights of these constraints are increased by 1 to help `breakout' from a local minimum.

[Figure 4: BDI protocol with Distributed Breakout strategy]

For agent i, beginning initially with (wl = 1, (0 ≤ l < m), pi = i, (0 ≤ i < n)) and Fi containing all the agents who share constraints with agent i, its BDI-driven DBO strategy is described as follows.

Step 1 - Percept: Update Pi upon receiving the info messages from the neighbors (in Fi). Update Ci to be the list of its relevant constraints.

Step 2 - Belief: The belief function GB(Pi, Ci) will return a value bi ∈ {0, 1, 2}, decided as follows:

• bi = 0 when agent i can find an option to reduce the number of violations of the constraints in Ci, i.e., if ∃a ∈ Di, Si(a) < Si(vi).

• bi = 1 when it cannot find any option to improve the situation, i.e., if ∀a ∈ Di, a ≠ vi, Si(a) ≥ Si(vi).

• bi = 2 when its current assignment is an optimal option, i.e., if Si(vi) = 0.

Step 3 - Desire: The desire function GD(bi) will return a desire set DS, decided as follows:

• If bi = 0, then DS = {a | a ≠ vi, Si(a) < Si(vi) and (Si(vi) − Si(a)) is maximized}. (max{(Si(vi) − Si(a))} will be referenced by hmax_i in subsequent steps; it defines the maximal reduction in constraint violations.)

• Otherwise, DS = ∅.

Step 4 - Intention: The intention function GI(DS) will return an intention, decided as follows:

• If DS ≠ ∅, then select an arbitrary value (say, vi) from DS as the intention.

• If DS = ∅, then assign nil as the intention.

Following, agent i sends its intention to all its neighbors. In return, it receives intentions from these agents before proceeding to the Mediation step.

Mediation: Agent i receives all the intentions from its neighbors. If it finds that the intention received from a neighbor agent j is associated
with hmax_j > hmax_i, the agent automatically cancels its current intention.

Step 5 - Execution:

• If agent i did not cancel its intention, it updates its variable assignment with the intended value.

• If all intentions received, together with its own, are nil intentions, the agent increases the weight of each currently violated constraint by 1.

[Figure 5: BDI protocol with Unsolicited Mutual Advice strategy]

5. THE UMA STRATEGY

Figure 5 presents the BDI negotiation model incorporating the Unsolicited Mutual Advice (UMA) strategy. Unlike when using the strategies of the previous section, a DCSP agent using UMA will not only send out a negotiation message when concluding its Intention step, but also when concluding its Desire step. The negotiation message that it sends out to conclude the Desire step constitutes an unsolicited advice for all its neighbors. In turn, the agent waits to receive unsolicited advices from all its neighbors before proceeding to determine its intention. For agent i, beginning initially with (wl = 1, (0 ≤ l < m), pi = i, (0 ≤ i < n)) and Fi containing all the agents who share constraints with agent i, its BDI-driven UMA strategy is described as follows.

Step 1 - Percept: Update Pi upon receiving the info messages from the neighbors (in Fi). Update Ci to be the list of constraints relevant to agent i.
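The belief and desire computations in the steps that follow all hinge on the violation score Si(v), the total weight of the violated constraints in Ci when agent i's variable takes value v. A minimal illustrative sketch of this score (the names Constraint and violation_score are ours, not the paper's):

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Constraint:
    weight: int                                   # w_l, initially 1
    satisfied: Callable[[Dict[str, int]], bool]   # True when the constraint holds

def violation_score(var: str, value: int,
                    neighbor_values: Dict[str, int],
                    constraints: List[Constraint]) -> int:
    """S_i(v): total weight of the constraints in C_i violated when x_i = v."""
    assignment = {**neighbor_values, var: value}
    return sum(c.weight for c in constraints if not c.satisfied(assignment))

# The mediation example of Section 3: domain {1, 2}, shared constraint x1 + x2 = 3.
c = Constraint(weight=1, satisfied=lambda a: a["x1"] + a["x2"] == 3)
print(violation_score("x1", 1, {"x2": 1}, [c]))   # 1: constraint violated
print(violation_score("x1", 2, {"x2": 1}, [c]))   # 0: constraint satisfied
```

Note how the weights enter the sum: when DBO or UMA increases wl to break out of a local minimum, the score of the values implicated in the remaining violations rises accordingly.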
Step 2 - Belief: The belief function GB(Pi, Ci) will return a value bi ∈ {0, 1, 2}, decided as follows:

• bi = 0 when agent i can find an option that reduces the number of violations of the constraints in Ci, i.e., if ∃a ∈ Di such that Si(a) < Si(vi) and the assignment xi = a together with the current variable assignments of its neighbors does not form a local state stored in a list called the bad states list (initially this list is empty).

• bi = 1 when it cannot find a value a such that a ∈ Di, Si(a) < Si(vi), and the assignment xi = a together with the current variable assignments of its neighbors does not form a local state stored in the bad states list.

• bi = 2 when its current assignment is an optimal option, i.e., if Si(vi) = 0.

Step 3 - Desire: The desire function GD(bi) will return a desire set DS, decided as follows:

• If bi = 0, then DS = {a | a ≠ vi, Si(a) < Si(vi), (Si(vi) − Si(a)) is maximized, and the assignment xi = a together with the current variable assignments of agent i's neighbors does not form a state in the bad states list}. In this case, DS is called a set of voluntary desires. (max{(Si(vi) − Si(a))} will be referenced by hmax_i in subsequent steps; it defines the maximal reduction in constraint violations, and is also referred to as an improvement.)

• If bi = 1, then DS = {a | a ≠ vi, Si(a) is minimized, and the assignment xi = a together with the current variable assignments of agent i's neighbors does not form a state in the bad states list}. In this case, DS is called a set of reluctant desires.

• If bi = 2, then DS = ∅.

Following, if bi = 0, agent i sends a negotiation message containing hmax_i to all its neighbors. This message is called a voluntary advice. If bi = 1, agent i sends a negotiation message called a change advice to the neighbors in Fi who share the violated constraints with agent i.
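The Desire step above can be read as the following sketch (our own naming, not the authors' code; for brevity we collapse the bad-states check into a set of values excluded under the current neighbor configuration):

```python
from typing import Dict, List, Set, Tuple

def desire_set(current: int, domain: List[int],
               score: Dict[int, int],
               excluded: Set[int]) -> Tuple[str, List[int], int]:
    """Return (kind, DS, improvement), where kind labels voluntary/reluctant
    desires and improvement stands for hmax_i.

    score[a] stands for S_i(a); `excluded` holds the values that would
    recreate a state in the bad states list.
    """
    s_cur = score[current]
    improving = [a for a in domain
                 if a != current and score[a] < s_cur and a not in excluded]
    if improving:                                   # belief b_i = 0
        best = min(score[a] for a in improving)
        return "voluntary", [a for a in improving if score[a] == best], s_cur - best
    if s_cur == 0:                                  # belief b_i = 2
        return "none", [], 0
    candidates = [a for a in domain if a != current and a not in excluded]
    if not candidates:
        return "none", [], 0
    best = min(score[a] for a in candidates)        # belief b_i = 1
    return "reluctant", [a for a in candidates if score[a] == best], 0

# Agent with domain {1, 2, 3}, current value 1, and S_i(1)=2, S_i(2)=1, S_i(3)=0:
print(desire_set(1, [1, 2, 3], {1: 2, 2: 1, 3: 0}, set()))
# → ('voluntary', [3], 2)
```

A voluntary result would trigger a voluntary advice carrying the improvement, while a reluctant result would trigger change advices to the neighbors sharing the violated constraints.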
Agent i receives advices from all its neighbors and stores them in a list called A, before proceeding to the next step.

Step 4 - Intention: The intention function GI(DS, A) will return an intention, decided as follows:

• If there is a voluntary advice from an agent j which is associated with hmax_j > hmax_i, assign nil as the intention.

• If DS ≠ ∅, DS is a set of voluntary desires and hmax_i is the biggest improvement among those associated with the voluntary advices received, select an arbitrary value (say, vi) from DS as the intention. This intention is called a voluntary intention.

• If DS ≠ ∅, DS is a set of reluctant desires and agent i has received some change advices, select an arbitrary value (say, vi) from DS as the intention. This intention is called a reluctant intention.

• If DS = ∅, then assign nil as the intention.

Following, if the improvement hmax_i is the biggest improvement but equal to some improvements associated with the received voluntary advices, agent i sends its computed intention to all its neighbors. If agent i has a reluctant intention, it also sends this intention to all its neighbors. In both cases, agent i attaches the number of change advices received in the current negotiation round to its intention. In return, agent i receives the intentions from its neighbors before proceeding to the Mediation step.

Mediation: If agent i did not send out its intention before this step, i.e., the agent has either a nil intention or a voluntary intention with the biggest improvement, it proceeds to the next step. Otherwise, agent i selects the best intention among all the intentions received, including its own (if any). The criteria to select the best intention are listed below, applied in descending order of importance:

• A voluntary intention is preferred over a reluctant intention.

• The voluntary intention (if any) with the biggest improvement is selected.

• If there is
no voluntary intention, the reluctant intention with the lowest number of constraint violations is selected.

• The intention from an agent who has received a higher number of change advices in the current negotiation round is selected.

• The intention from the agent with the highest priority is selected.

If the selected intention is not agent i's intention, agent i cancels its intention.

Step 5 - Execution: If agent i does not cancel its intention, it updates its variable assignment with the intended value.

Termination Condition: Since each agent does not have full information about the global state, it may not know when a solution has been reached, i.e., when all the agents are in a global stable state. Hence an observer is needed to keep track of the negotiation messages communicated in the environment. Following a certain period of time with no more message communication (which happens when all the agents have no more intention to update their variable assignments), the observer informs the agents in the environment that a solution has been found.

[Figure 6: Example problem]

5.1 An Example

To illustrate how UMA works, consider a 2-color graph problem [6] as shown in Figure 6. In this example, each agent has a color variable representing a node. There are 10 color variables sharing the same domain {Black, White}. The following records the outcome of each step in every negotiation round executed.

Round 1:

Step 1 - Percept: Each agent obtains the current color assignments of those nodes (agents) adjacent to it, i.e., its neighbors.

Step 2 - Belief: Agents which have positive improvements are agent 1 (which believes it should change its color to White), agent 2 (which believes it should change its color to White), agent 7 (which believes it should change its color to Black) and agent 10 (which believes it should change its value to Black). In this negotiation round, the improvements achieved by
these agents are 1. Agents which do not have any improvements are agents 4, 5 and 8. Agents 3, 6 and 9 need not change as all their relevant constraints are satisfied.

Step 3 - Desire: Agents 1, 2, 7 and 10 have voluntary desires (the White color for agents 1 and 2, and the Black color for agents 7 and 10). These agents send voluntary advices to all their neighbors. Meanwhile, agents 4, 5 and 8 have reluctant desires (the White color for agent 4 and the Black color for agents 5 and 8). Agent 4 sends a change advice to agent 2, as agent 2 shares the violated constraint with it. Similarly, agents 5 and 8 send change advices to agents 7 and 10, respectively. Agents 3, 6 and 9 do not have any desire to update their color assignments.

Step 4 - Intention: Agents 2, 7 and 10 receive the change advices from agents 4, 5 and 8, respectively. They form their voluntary intentions. Agents 4, 5 and 8 receive the voluntary advices from agents 2, 7 and 10, hence they will not have any intention. Agents 3, 6 and 9 do not have any intention. Following, the intentions from the agents are sent to all their neighbors.

Mediation: Agent 1 finds that the intention from agent 2 is better than its own intention. This is because, although both agents have voluntary intentions with an improvement of 1, agent 2 has received one change advice from agent 4 while agent 1 has not received any. Hence agent 1 cancels its intention. Agent 2 keeps its intention. Agents 7 and 10 keep their intentions since none of their neighbors has an intention. The rest of the agents do nothing in this step as they do not have any intention.

Step 5 - Execution: Agent 2 changes its color to White. Agents 7 and 10 change their colors to Black. The new state after round 1 is shown in Figure 7.

Round 2:

Step 1 - Percept: The agents obtain the current color assignments of their neighbors.

Step 2 - Belief: Agent 3 is the only agent with a positive improvement, which is 1. It believes it should
change its color to Black.

[Figure 7: The graph after round 1]

Agent 2 does not have any positive improvement. The rest of the agents need not make any change as all their relevant constraints are satisfied. They will have no desire, and hence no intention.

Step 3 - Desire: Agent 3 desires to change its color to Black voluntarily, hence it sends out a voluntary advice to its neighbor, i.e., agent 2. Agent 2 does not have any value for its reluctant desire set, as the only option, the Black color, would bring agent 2 and its neighbors to the previous state, which is known to be a bad state. Since agent 2 shares the constraint violation with agent 3, it sends a change advice to agent 3.

Step 4 - Intention: Agent 3 will have a voluntary intention, while agent 2 will not have any intention, as it receives the voluntary advice from agent 3.

Mediation: Agent 3 keeps its intention as its only neighbor, agent 2, does not have any intention.

Step 5 - Execution: Agent 3 changes its color to Black. The new state after round 2 is shown in Figure 8.

Round 3: In this round, every agent finds that it has no desire and hence no intention to revise its variable assignment. Following, with no more negotiation message communication in the environment, the observer informs all the agents that a solution has been found.

[Figure 8: The solution obtained]

5.2 Performance Evaluation

To facilitate credible comparisons with existing strategies, we measured the execution time in terms of computational cycles as defined in [4], and built a simulator that could reproduce the published results for ABT and AWC. The definition of a computational cycle is as follows.

• In one cycle, each agent receives all the incoming messages, performs local computation and sends out a reply.

• A message which is sent at time t will be received at time t +
1.\nThe network delay is neglected.\n\u2022 Each agent has it own clock.\nThe initial clock``s value is 0.\nAgents attach their clock value as a time-stamp in the outgoing message and use the time-stamp in the incoming message to update their own clock``s value.\nFour benchmark problems [6] were considered, namely, n-queens and node coloring for sparse, dense and critical graphs.\nFor each problem, a finite number of test cases were generated for various problem sizes n.\nThe maximum execution time was set to 0 200 400 600 800 1000 10 50 100 Number of queens Cycles Asynchronous Backtracking Asynchronous Weak Commitment Unsolicited Mutual Advice Figure 9: Relationship between execution time and problem size 10000 cycles for node coloring for critical graphs and 1000 cycles for other problems.\nThe simulator program was terminated after this period and the algorithm was considered to fail a test case if it did not find a solution by then.\nIn such a case, the execution time for the test was counted as 1000 cycles.\n5.2.1 Evaluation with n-queens problem The n-queens problem is a traditional problem of constraint satisfaction.\n10 test cases were generated for each problem size n \u2208 {10, 50 and 100}.\nFigure 9 shows the execution time for different problem sizes when ABT, AWC and UMA were run.\n5.2.2 Evaluation with graph coloring problem The graph coloring problem can be characterized by three parameters: (i) the number of colors k, the number of nodes\/agents n and the number of links m. 
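Before turning to the coloring benchmarks, the cycle-based time measure defined above can be sketched as a small synchronous simulator. This is a minimal illustration of the cycle and time-stamp rules only, not the authors' simulator; `ToyAgent`, `run_cycles` and the message format are our own assumptions:

```python
from collections import defaultdict

class ToyAgent:
    """Illustrative agent skeleton; names here are ours, not the paper's."""
    def __init__(self, aid):
        self.aid = aid
        self.clock = 0                 # the initial clock value is 0

    def step(self, inbox):
        """One cycle: receive all incoming messages, compute locally, reply."""
        for ts, _payload in inbox:
            self.clock = max(self.clock, ts)   # update clock from time-stamps
        self.clock += 1                        # this cycle's local computation
        # A real DCSP agent would run its Percept/Belief/Desire/Intention
        # reasoning here and return (dest, timestamp, payload) replies.
        return []

def run_cycles(agents, initial_msgs, max_cycles=1000):
    """Count cycles until quiescence; a message sent at time t arrives at t + 1."""
    pending = dict(initial_msgs)       # {agent_id: [(timestamp, payload), ...]}
    for cycle in range(1, max_cycles + 1):
        nxt = defaultdict(list)
        for a in agents:
            for dest, ts, payload in a.step(pending.get(a.aid, [])):
                nxt[dest].append((ts, payload))   # delivered next cycle
        if not nxt:                    # no messages in flight: quiescent
            return cycle
        pending = nxt
    return max_cycles                  # counted as a failure at the cycle limit
```

With agents that send no replies, the run goes quiescent after the first cycle; when agents keep generating messages, the count grows toward the cycle cap, matching the failure-counting rule used in the benchmarks.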
Based on the ratio m/n, the problem can be classified into three types [3]: (i) sparse (with m/n = 2), (ii) critical (with m/n = 2.7 or 4.7) and (iii) dense (with m/n = (n − 1)/4). For this problem, we did not include ABT in our empirical results, as its failure rate was found to be very high. This poor performance of ABT was expected, since the graph coloring problem is more difficult than the n-queens problem, on which ABT already did not perform well (see Figure 9). The sparse and dense (coloring) problem types are relatively easy, while the critical type is difficult to solve. In the experiments, we fix k = 3. 10 test cases were created using the method described in [13] for each value of n ∈ {60, 90, 120}, for each problem type. The simulation results for each type of problem are shown in Figures 10-12.

Figure 10: Comparison between AWC and UMA (sparse graph coloring)
Figure 11: Comparison between AWC and UMA (critical graph coloring)
Figure 12: Comparison between AWC and UMA (dense graph coloring)

5.3 Discussion
5.3.1 Comparison with ABT and AWC
Figure 10 shows that the average performance of UMA is slightly better than that of AWC for the sparse problem. UMA outperforms AWC in solving the critical problem, as shown in Figure 11; it was observed that the latter strategy failed in some test cases. However, as seen in Figure 12, both strategies are very efficient when solving the dense problem, with AWC showing slightly better performance. The performance of UMA in the worst (time-complexity) case is similar to that of all the evaluated strategies. The worst case occurs when all the possible global states of the search are reached. Since only a few agents have the right to change their variable assignments in a negotiation round, the number of redundant computational cycles and information messages is reduced. As we observed from the backtracking in ABT and AWC, a difference in the ordering of incoming messages can result in a different number of computational cycles being executed by the agents.

5.3.2 Comparison with DBO
The computational performance of UMA is arguably better than that of DBO, for the following reasons:
• UMA can guarantee that there will be a variable reassignment following every negotiation round, whereas DBO cannot.
• UMA introduces one more communication round trip (that of sending a message and awaiting a reply) than DBO, which arises from the need to communicate unsolicited advices. Although this increases the communication cost per negotiation round, we observed from our simulations that the overall communication cost incurred by UMA is lower, owing to the significantly lower number of negotiation rounds.
• Using UMA, in the worst case, an agent takes only 2 or 3 communication round trips per negotiation round, after which the agent or one of its neighbors updates its variable assignment. Using DBO, this number of round trips is uncertain, as each agent might have to keep increasing the weights of the violated constraints until some agent has a positive improvement; this could result in an infinite loop [3].

6. CONCLUSION
Applying automated negotiation to DCSP, this paper has proposed a protocol that prescribes the generic reasoning of a DCSP agent in a BDI architecture. Our work shows that several well-known DCSP algorithms, namely ABT, AWC and DBO, can be described as mechanisms sharing the same proposed protocol, differing only in the strategies employed for the reasoning steps per negotiation round as governed by the
protocol. Importantly, this means that our work may furnish a unified framework for DCSP that not only provides a clearer BDI agent-theoretic view of existing DCSP approaches, but also opens up opportunities to enhance existing strategies or develop new ones. Towards the latter, we have proposed and formulated a new strategy, the UMA strategy. Empirical results and our discussion suggest that UMA is superior to ABT, AWC and DBO in some specific aspects. It was observed from our simulations that UMA possesses the completeness property. Future work will attempt to formally establish this property, as well as to formalize other existing DCSP algorithms as BDI negotiation mechanisms, including the recent endeavor that employs a group mediator [5]. The idea of DCSP agents using different strategies in the same environment will also be investigated.

7. REFERENCES
[1] P. J. Modi, H. Jung, M. Tambe, W.-M. Shen, and S. Kulkarni, "Dynamic distributed resource allocation: A distributed constraint satisfaction approach," in Lecture Notes in Computer Science, 2001, p. 264.
[2] H. Schlenker and U. Geske, "Simulating large railway networks using distributed constraint satisfaction," in 2nd IEEE International Conference on Industrial Informatics (INDIN-04), 2004, pp. 441-446.
[3] M. Yokoo, Distributed Constraint Satisfaction: Foundations of Cooperation in Multi-Agent Systems. Springer-Verlag, 2000, Springer Series on Agent Technology.
[4] M. Yokoo, E. H. Durfee, T. Ishida, and K. Kuwabara, "The distributed constraint satisfaction problem: Formalization and algorithms," IEEE Transactions on Knowledge and Data Engineering, vol. 10, no. 5, pp. 673-685, September/October 1998.
[5] R. Mailler and V. Lesser, "Using cooperative mediation to solve distributed constraint satisfaction problems," in Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-04), 2004, pp. 446-453.
[6] E. Tsang, Foundations of Constraint Satisfaction. Academic Press, 1993.
[7] R. Mailler, R. Vincent, V. Lesser, T. Middlekoop, and J. Shen, "Soft real-time, cooperative negotiation for distributed resource allocation," AAAI Fall Symposium on Negotiation Methods for Autonomous Cooperative Systems, November 2001.
[8] M. Yokoo, K. Suzuki, and K. Hirayama, "Secure distributed constraint satisfaction: Reaching agreement without revealing private information," Artificial Intelligence, vol. 161, no. 1-2, pp. 229-246, 2005.
[9] J. S. Rosenschein and G. Zlotkin, Rules of Encounter. The MIT Press, 1994.
[10] M. Yokoo and K. Hirayama, "Distributed constraint satisfaction algorithm for complex local problems," in Proceedings of the Third International Conference on Multiagent Systems (ICMAS-98), 1998, pp. 372-379.
[11] M. E. Bratman, Intentions, Plans, and Practical Reason. Harvard University Press, Cambridge, MA, 1987.
[12] G. Weiss, Ed., Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence. The MIT Press, London, UK, 1999.
[13] S. Minton, M. D. Johnson, A. B. Philips, and P. Laird, "Minimizing conflicts: A heuristic repair method for constraint satisfaction and scheduling problems," Artificial Intelligence, vol. 58, no. 1-3, pp.
161-205, 1992.\nThe Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 531","lvl-3":"Unifying Distributed Constraint Algorithms in a BDI Negotiation Framework\nABSTRACT\nThis paper presents a novel, unified distributed constraint satisfaction framework based on automated negotiation.\nThe Distributed Constraint Satisfaction Problem (DCSP) is one that entails several agents to search for an agreement, which is a consistent combination of actions that satisfies their mutual constraints in a shared environment.\nBy anchoring the DCSP search on automated negotiation, we show that several well-known DCSP algorithms are actually mechanisms that can reach agreements through a common Belief-Desire-Intention (BDI) protocol, but using different strategies.\nA major motivation for this BDI framework is that it not only provides a conceptually clearer understanding of existing DCSP algorithms from an agent model perspective, but also opens up the opportunities to extend and develop new strategies for DCSP.\nTo this end, a new strategy called Unsolicited Mutual Advice (UMA) is proposed.\nPerformance evaluation shows that the UMA strategy can outperform some existing mechanisms in terms of computational cycles.\n1.\nINTRODUCTION\nAt the core of many emerging distributed applications is the distributed constraint satisfaction problem (DCSP) - one which involves finding a consistent combination of actions (abstracted as domain values) to satisfy the constraints among multiple agents in a shared environment.\nImportant application examples include\ndistributed resource allocation [1] and distributed scheduling [2].\nMany important algorithms, such as distributed breakout (DBO) [3], asynchronous backtracking (ABT) [4], asynchronous partial overlay (APO) [5] and asynchronous weak-commitment (AWC) [4], have been developed to address the DCSP and provide the agent solution basis for its applications.\nBroadly speaking, these algorithms are based on two 
different approaches, either extending from classical backtracking algorithms [6] or introducing mediation among the agents.\nWhile there has been no lack of efforts in this promising research field, especially in dealing with outstanding issues such as resource restrictions (e.g., limits on time and communication) [7] and privacy requirements [8], there is unfortunately no conceptually clear treatment to prise open the model-theoretic workings of the various agent algorithms that have been developed.\nAs a result, for instance, a deeper intellectual understanding on why one algorithm is better than the other, beyond computational issues, is not possible.\nIn this paper, we present a novel, unified distributed constraint satisfaction framework based on automated negotiation [9].\nNegotiation is viewed as a process of several agents searching for a solution called an agreement.\nThe search can be realized via a negotiation mechanism (or algorithm) by which the agents follow a high level protocol prescribing the rules of interactions, using a set of strategies devised to select their own preferences at each negotiation step.\nAnchoring the DCSP search on automated negotiation, we show in this paper that several well-known DCSP algorithms [3] are actually mechanisms that share the same Belief-DesireIntention (BDI) interaction protocol to reach agreements, but use different action or value selection strategies.\nThe proposed framework provides not only a clearer understanding of existing DCSP algorithms from a unified BDI agent perspective, but also opens up the opportunities to extend and develop new strategies for DCSP.\nTo this end, a new strategy called Unsolicited Mutual Advice (UMA) is proposed.\nOur performance evaluation shows that UMA can outperform ABT and AWC in terms of the average number of computational cycles for both the sparse and critical coloring problems [6].\nThe rest of this paper is organized as follows.\nIn Section 2, we provide a formal 
overview of DCSP.\nSection 3 presents a BDI negotiation model by which a DCSP agent reasons.\nSection 4 presents the existing algorithms ABT, AWC and DBO as different strategies formalized on a common protocol.\nA new strategy called Unsolicited Mutual Advice is proposed in Section 5; our empirical results and discussion attempt to highlight the merits of the new strategy over existing ones.\nSection 6 concludes the paper and points to some future work.\n2.\nDCSP: PROBLEM FORMALIZATION\n2.1 DCSP Agent Model\n3.\nTHE BDI NEGOTIATION MODEL\n3.1 The generic protocol\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 525\n3.2 The strategy\n4.\nDCSP ALGORITHMS: BDI PROTOCOL + STRATEGIES\n4.1 Asynchronous Backtracking\n526 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n4.2 Asynchronous Weak Commitment Search\n4.3 Distributed Breakout\n5.\nTHE UMA STRATEGY\n528 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.1 An Example\nRound 1:\nRound 2:\n5.2 Performance Evaluation\n5.2.1 Evaluation with n-queens problem\n5.2.2 Evaluation with graph coloring problem\n5.3 Discussion\n5.3.1 Comparison with ABT and AWC\n530 The Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.3.2 Comparison with DBO\n6.\nCONCLUSION\nApplying automated negotiation to DCSP, this paper has proposed a protocol that prescribes the generic reasoning of a DCSP agent in a BDI architecture.\nOur work shows that several wellknown DCSP algorithms, namely ABT, AWC and DBO, can be described as mechanisms sharing the same proposed protocol, and only differ in the strategies employed for the reasoning steps per negotiation round as governed by the protocol.\nImportantly, this means that it might furnish a unified framework for DCSP that not only provides a clearer BDI agent-theoretic view of existing DCSP approaches, but also opens up the opportunities to enhance or develop new strategies.\nTowards the latter, we have proposed and formulated a new strategy - the UMA strategy.\nEmpirical results and our discussion suggest that UMA is superior to ABT, AWC and DBO in some specific aspects.\nIt was observed from our simulations that UMA possesses the completeness property.\nFuture work will attempt to formally establish this property, as well as formalize other existing DSCP algorithms as BDI negotiation mechanisms, including the recent endeavor that employs a group mediator [5].\nThe idea of DCSP agents using different strategies in the same environment will also be investigated.","lvl-4":"Unifying Distributed Constraint Algorithms in a BDI Negotiation Framework\nABSTRACT\nThis paper presents a novel, unified distributed constraint satisfaction framework based on automated negotiation.\nThe Distributed Constraint Satisfaction Problem (DCSP) is one that entails several agents to search for an agreement, which is a consistent combination of actions that satisfies their mutual constraints in a shared environment.\nBy anchoring the DCSP search on automated negotiation, we show that several well-known DCSP algorithms are actually mechanisms that can reach agreements through a common 
Belief-Desire-Intention (BDI) protocol, but using different strategies.\nA major motivation for this BDI framework is that it not only provides a conceptually clearer understanding of existing DCSP algorithms from an agent model perspective, but also opens up the opportunities to extend and develop new strategies for DCSP.\nTo this end, a new strategy called Unsolicited Mutual Advice (UMA) is proposed.\nPerformance evaluation shows that the UMA strategy can outperform some existing mechanisms in terms of computational cycles.\n1.\nINTRODUCTION\nAt the core of many emerging distributed applications is the distributed constraint satisfaction problem (DCSP) - one which involves finding a consistent combination of actions (abstracted as domain values) to satisfy the constraints among multiple agents in a shared environment.\nImportant application examples include\ndistributed resource allocation [1] and distributed scheduling [2].\nBroadly speaking, these algorithms are based on two different approaches, either extending from classical backtracking algorithms [6] or introducing mediation among the agents.\nAs a result, for instance, a deeper intellectual understanding on why one algorithm is better than the other, beyond computational issues, is not possible.\nIn this paper, we present a novel, unified distributed constraint satisfaction framework based on automated negotiation [9].\nNegotiation is viewed as a process of several agents searching for a solution called an agreement.\nThe search can be realized via a negotiation mechanism (or algorithm) by which the agents follow a high level protocol prescribing the rules of interactions, using a set of strategies devised to select their own preferences at each negotiation step.\nAnchoring the DCSP search on automated negotiation, we show in this paper that several well-known DCSP algorithms [3] are actually mechanisms that share the same Belief-DesireIntention (BDI) interaction protocol to reach agreements, but use 
different action or value selection strategies.\nThe proposed framework provides not only a clearer understanding of existing DCSP algorithms from a unified BDI agent perspective, but also opens up the opportunities to extend and develop new strategies for DCSP.\nTo this end, a new strategy called Unsolicited Mutual Advice (UMA) is proposed.\nThe rest of this paper is organized as follows.\nIn Section 2, we provide a formal overview of DCSP.\nSection 3 presents a BDI negotiation model by which a DCSP agent reasons.\nSection 4 presents the existing algorithms ABT, AWC and DBO as different strategies formalized on a common protocol.\nA new strategy called Unsolicited Mutual Advice is proposed in Section 5; our empirical results and discussion attempt to highlight the merits of the new strategy over existing ones.\nSection 6 concludes the paper and points to some future work.\n6.\nCONCLUSION\nApplying automated negotiation to DCSP, this paper has proposed a protocol that prescribes the generic reasoning of a DCSP agent in a BDI architecture.\nOur work shows that several wellknown DCSP algorithms, namely ABT, AWC and DBO, can be described as mechanisms sharing the same proposed protocol, and only differ in the strategies employed for the reasoning steps per negotiation round as governed by the protocol.\nImportantly, this means that it might furnish a unified framework for DCSP that not only provides a clearer BDI agent-theoretic view of existing DCSP approaches, but also opens up the opportunities to enhance or develop new strategies.\nTowards the latter, we have proposed and formulated a new strategy - the UMA strategy.\nFuture work will attempt to formally establish this property, as well as formalize other existing DSCP algorithms as BDI negotiation mechanisms, including the recent endeavor that employs a group mediator [5].\nThe idea of DCSP agents using different strategies in the same environment will also be investigated.","lvl-2":"Unifying Distributed 
Constraint Algorithms in a BDI Negotiation Framework\nABSTRACT\nThis paper presents a novel, unified distributed constraint satisfaction framework based on automated negotiation.\nThe Distributed Constraint Satisfaction Problem (DCSP) is one that entails several agents to search for an agreement, which is a consistent combination of actions that satisfies their mutual constraints in a shared environment.\nBy anchoring the DCSP search on automated negotiation, we show that several well-known DCSP algorithms are actually mechanisms that can reach agreements through a common Belief-Desire-Intention (BDI) protocol, but using different strategies.\nA major motivation for this BDI framework is that it not only provides a conceptually clearer understanding of existing DCSP algorithms from an agent model perspective, but also opens up the opportunities to extend and develop new strategies for DCSP.\nTo this end, a new strategy called Unsolicited Mutual Advice (UMA) is proposed.\nPerformance evaluation shows that the UMA strategy can outperform some existing mechanisms in terms of computational cycles.\n1.\nINTRODUCTION\nAt the core of many emerging distributed applications is the distributed constraint satisfaction problem (DCSP) - one which involves finding a consistent combination of actions (abstracted as domain values) to satisfy the constraints among multiple agents in a shared environment.\nImportant application examples include\ndistributed resource allocation [1] and distributed scheduling [2].\nMany important algorithms, such as distributed breakout (DBO) [3], asynchronous backtracking (ABT) [4], asynchronous partial overlay (APO) [5] and asynchronous weak-commitment (AWC) [4], have been developed to address the DCSP and provide the agent solution basis for its applications.\nBroadly speaking, these algorithms are based on two different approaches, either extending from classical backtracking algorithms [6] or introducing mediation among the agents.\nWhile there 
has been no lack of efforts in this promising research field, especially in dealing with outstanding issues such as resource restrictions (e.g., limits on time and communication) [7] and privacy requirements [8], there is unfortunately no conceptually clear treatment to prise open the model-theoretic workings of the various agent algorithms that have been developed.\nAs a result, for instance, a deeper intellectual understanding on why one algorithm is better than the other, beyond computational issues, is not possible.\nIn this paper, we present a novel, unified distributed constraint satisfaction framework based on automated negotiation [9].\nNegotiation is viewed as a process of several agents searching for a solution called an agreement.\nThe search can be realized via a negotiation mechanism (or algorithm) by which the agents follow a high level protocol prescribing the rules of interactions, using a set of strategies devised to select their own preferences at each negotiation step.\nAnchoring the DCSP search on automated negotiation, we show in this paper that several well-known DCSP algorithms [3] are actually mechanisms that share the same Belief-DesireIntention (BDI) interaction protocol to reach agreements, but use different action or value selection strategies.\nThe proposed framework provides not only a clearer understanding of existing DCSP algorithms from a unified BDI agent perspective, but also opens up the opportunities to extend and develop new strategies for DCSP.\nTo this end, a new strategy called Unsolicited Mutual Advice (UMA) is proposed.\nOur performance evaluation shows that UMA can outperform ABT and AWC in terms of the average number of computational cycles for both the sparse and critical coloring problems [6].\nThe rest of this paper is organized as follows.\nIn Section 2, we provide a formal overview of DCSP.\nSection 3 presents a BDI negotiation model by which a DCSP agent reasons.\nSection 4 presents the existing algorithms ABT, AWC 
and DBO as different strategies formalized on a common protocol.\nA new strategy called Unsolicited Mutual Advice is proposed in Section 5; our empirical results and discussion attempt to highlight the merits of the new strategy over existing ones.\nSection 6 concludes the paper and points to some future work.\n2.\nDCSP: PROBLEM FORMALIZATION\nThe DCSP [4] considers the following environment.\n\u2022 There are n agents with k variables x0, x1, \u00b7 \u00b7 \u00b7, xk \u2212 1, n <\nk, which have values in domains D1, D2, \u00b7 \u00b7 \u00b7, Dk, respectively.\nWe define a partial function B over the productrange 10, 1,..., (n \u2212 1)} x 10, 1,..., (k \u2212 1)} such that, that\nvariable xj belongs to agent i is denoted by B (i, j)!\n.\nThe exclamation mark `! '\nmeans ` is defined'.\n\u2022 There are m constraints c0, c1, \u00b7 \u00b7 \u00b7 cm \u2212 1 to be conjunctively satisfied.\nIn a similar fashion as defined for B (i, j), we use E (l, j)!\n, (0 pi}.\n\u2022 If bi = 2, then DS = 0.\nStep 4 - Intention: The intention function GI (DS) will return an intention, decided as follows: \u2022 If DS = ~ 0, then select an arbitrary value (say, v ~ i) from DS as the intention.\n\u2022 If DS = 0, then assign nil as the intention (to denote its lack thereof).\nStep 5 - Execution: \u2022 If agent i has a domain value as its intention, the agent will update its variable assignment with this value.\n\u2022 If bi = 1, agent i will send a negotiation message to agent k, then remove k from Fi and begin its next negotiation round.\nThe negotiation message will contain the list of variable assignments of those agents in its neighbor list Fi that have a higher priority than agent i in the current image Pi.\nMediation: When agent i receives a negotiation message, several sub-steps are carried out, as follows: \u2022 If the list of agents associated with the negotiation message contains agents which are not in Fi, it will add these agents to Fi, and request these agents to add 
itself to their neighbor lists.\nThe request is considered as a type of negotiation message.\n\u2022 Agent i will first check if the sender agent is updated with its current value vi.\nThe agent will add vi to its bad values list if it is so, or otherwise send its current value to the sender agent.\nFollowing this step, agent i proceeds to the next negotiation round.\n4.2 Asynchronous Weak Commitment Search\nFigure 3 presents the BDI negotiation model incorporating the Asynchronous Weak Commitment (AWC) strategy.\nThe model is similar to that of incorporating the ABT strategy (see Figure 2).\nThis is not surprising; AWC and ABT are found to be strategically similar, differing only in the details of some reasoning steps.\nThe distinguishing point of AWC is that when the agent cannot find a suitable variable assignment, it will change its priority to the highest among its group members ({i} U Fi).\nFor agent i, beginning initially with (wl = 1, (0 pi}.\nStep 4 - Intention: This step is similar to the Intention step of ABT.\nHowever, for this strategy, the negotiation message will contain the variable assignments (of the current image Pi) for all the agents in Ki.\nThis list of assignment is considered as a nogood.\nIf the same negotiation message had been sent out before, agent i will have nil intention.\nOtherwise, the agent will send the message and save the nogood in the nogood list.\nStep 5 - Execution:\n\u2022 If agent i has a domain value as its intention, the agent will update its variable assignment with this value.\n\u2022 If bi = 1, it will send the negotiation message to its neighbors in Ki, and set pi = max {pj} + 1, with agent j E Fi.\nMediation: This step is identical to the Mediation step of ABT, except that agent i will now add the nogood contained in the negotiation message received to its own nogood list.\n4.3 Distributed Breakout\nFigure 4 presents the BDI negotiation model incorporating the Distributed Breakout (DBO) strategy.\nEssentially, by 
this synchronous strategy, each agent will search iteratively for improvement by reducing the total weight of the violated constraints.\nThe iteration will continue until no agent can improve further, at which time if some constraints remain violated, the weights of\nFigure 4: BDI protocol with Distributed Breakout strategy\nthese constraints will be increased by 1 to help ` breakout' from a local minimum.\nFor agent i, beginning initially with (wl = 1, (0 hmax i, the agent will automatically cancel its current intention.\nStep 5 - Execution:\n\u2022 If agent i did not cancel its intention, it will update its variable assignment with the intended value.\nFigure 5: BDI protocol with Unsolicited Mutual Advice strategy\n\u2022 If all intentions received and its own one are nil intention, the agent will increase the weight of each currently violated constraint by 1.\n5.\nTHE UMA STRATEGY\nFigure 5 presents the BDI negotiation model incorporating the Unsolicited Mutual Advice (UMA) strategy.\nUnlike when using the strategies of the previous section, a DCSP agent using UMA will not only send out a negotiation message when concluding its Intention step, but also when concluding its Desire step.\nThe negotiation message that it sends out to conclude the Desire step constitutes an unsolicited advice for all its neighbors.\nIn turn, the agent will wait to receive unsolicited advices from all its neighbors, before proceeding on to determine its intention.\nFor agent i, beginning initially with (wl = 1, (0 hmax i, assign nil as the intention.\n\u2022 If DS = ~ 0, DS is a set of voluntary desires and hmax i is the biggest improvement among those associated with the voluntary advices received, select an arbitrary value (say, v ~ i) from DS as the intention.\nThis intention is called a voluntary intention.\n\u2022 If DS = ~ 0, DS is a set of reluctant desires and agent ireceives some change advices, select an arbitrary value (say, v ~ i) from DS as the intention.\nThis intention 
is called reluctant intention.\n\u2022 If DS = 0, then assign nil as the intention.\nFollowing, if the improvement hmaxi is the biggest improvement and equal to some improvements associated with the received voluntary advices, agent i will send its computed intention to all its neighbors.\nIf agent i has a reluctant intention, it will also send this intention to all its neighbors.\nIn both cases, agent i will attach the number of received change advices in the current negotiation round with its intention.\nIn return, agent i will receive the intentions from its neighbors before proceeding to Mediation step.\nMediation: If agent i does not send out its intention before this step, i.e., the agent has either a nil intention or a voluntary intention with biggest improvement, it will proceed to next step.\nOtherwise, agent i will select the best intention among all the intentions received, including its own (if any).\nThe criteria to select the best intention are listed, applied in descending order of importance as follows.\n\u2022 A voluntary intention is preferred over a reluctant intention.\n\u2022 A voluntary intention (if any) with biggest improvement is selected.\n\u2022 If there is no voluntary intention, the reluctant intention with the lowest number of constraint violations is selected.\n\u2022 The intention from an agent who has received a higher number of change advices in the current negotiation round is selected.\n\u2022 Intention from an agent with highest priority is selected.\nIf the selected intention is not agent i's intention, it will cancel its intention.\nStep 5 - Execution: If agent i does not cancel its intention, it will update its variable assignment with the intended value.\nTermination Condition: Since each agent does not have full information about the global state, it may not know when it has reached a solution, i.e., when all the agents are in a global stable state.\nHence an observer is needed that will keep track of the negotiation 
messages communicated in the environment.\nFollowing a certain period of time when there is no more message communication (and this happens when all the agents have no more intention to update their variable assignments), the observer will inform the agents in the environment that a solution has been found.\nFigure 6: Example problem\n5.1 An Example\nTo illustrate how UMA works, consider a 2-color graph problem [6] as shown in Figure 6.\nIn this example, each agent has a color variable representing a node.\nThere are 10 color variables sharing the same domain {Black, White}.\nThe following records the outcome of each step in every negotiation round executed.\nRound 1:\nStep 1 - Percept: Each agent obtains the current color assignments of those nodes (agents) adjacent to it, i.e., its neighbors'.\nStep 2 - Belief: Agents which have positive improvements are agent 1 (this agent believes it should change its color to White), agent 2 (this believes should change its color to White), agent 7 (this agent believes it should change its color to Black) and agent 10 (this agent believes it should change its value to Black).\nIn this negotiation round, the improvements achieved by these agents are 1.\nAgents which do not have any improvements are agents 4, 5 and 8.\nAgents 3, 6 and 9 need not change as all their relevant constraints are satisfied.\nStep 3 - Desire: Agents 1, 2, 7 and 10 have the voluntary desire (White color for agents 1, 2 and Black color for agents 7, 10).\nThese agents will send the voluntary advices to all their neighbors.\nMeanwhile, agents 4, 5 and 8 have the reluctant desires (White color for agent 4 and Black color for agents 5, 8).\nAgent 4 will send a change advice to agent 2 as agent 2 is sharing the violated constraint with it.\nSimilarly, agents 5 and 8 will send change advices to agents 7 and 10 respectively.\nAgents 3, 6 and 9 do not have any desire to update their color assignments.\nStep 4 - Intention: Agents 2, 7 and 10 receive the change 
advices from agents 4, 5 and 8, respectively.\nThey form their voluntary intentions.\nAgents 4, 5 and 8 receive the voluntary advices from agents 2, 7 and 10, hence they will not have any intention.\nAgents 3, 6 and 9 do not have any intention.\nFollowing, the intention from the agents will be sent to all their neighbors.\nMediation: Agent 1 finds that the intention from agent 2 is better than its intention.\nThis is because, although both agents have voluntary intentions with improvement of 1, agent 2 has received one change advice from agent 4 while agent 1 has not received any.\nHence agent 1 cancels its intention.\nAgent 2 will keep its intention.\nAgents 7 and 10 keep their intentions since none of their neighbors has an intention.\nThe rest of the agents do nothing in this step as they do not have any intention.\nStep 5 - Execution: Agent 2 changes its color to White.\nAgents 7 and 10 change their colors to Black.\nThe new state after round 1 is shown in Figure 7.\nRound 2:\nStep 1 - Percept: The agents obtain the current color assignments of their neighbors.\nStep 2 - Belief: Agent 3 is the only agent who has a positive improvement which is 1.\nIt believes it should change its\nFigure 7: The graph after round 1\ncolor to Black.\nAgent 2 does not have any positive improvement.\nThe rest of the agents need not make any change as all their relevant constraints are satisfied.\nThey will have no desire, and hence no intention.\nStep 3 - Desire: Agent 3 desires to change its color to Black voluntarily, hence it sends out a voluntary advice to its neighbor, i.e., agent 2.\nAgent 2 does not have any value for its reluctant desire set as the only option, Black color, will bring agent 2 and its neighbors to the previous state which is known to be a bad state.\nSince agent 2 is sharing the constraint violation with agent 3, it sends a change advice to agent 3.\nStep 4 - Intention: Agent 3 will have a voluntary intention while agent 2 will not have any intention as it 
receives the voluntary advice from agent 3.\nMediation: Agent 3 will keep its intention as its only neighbor, agent 2, does not have any intention.\nStep 5 - Execution: Agent 3 changes its color to Black.\nThe new state after round 2 is shown in Figure 8.\nRound 3: In this round, every agent finds that it has no desire, and hence no intention, to revise its variable assignment.\nThen, with no more negotiation messages being communicated in the environment, the observer informs all the agents that a solution has been found.\nFigure 8: The solution obtained\n5.2 Performance Evaluation\nTo facilitate credible comparisons with existing strategies, we measured the execution time in terms of computational cycles as defined in [4], and built a simulator that could reproduce the published results for ABT and AWC.\nThe definition of a computational cycle is as follows.\n\u2022 In one cycle, each agent receives all the incoming messages, performs local computation and sends out a reply.\n\u2022 A message which is sent at time t will be received at time t + 1.\nThe network delay is neglected.\n\u2022 Each agent has its own clock.\nThe initial clock``s value is 0.\nAgents attach their clock value as a time-stamp in the outgoing message and use the time-stamp in the incoming message to update their own clock``s value.\nFour benchmark problems [6] were considered, namely, n-queens and node coloring for sparse, dense and critical graphs.\nFor each problem, a finite number of test cases were generated for various problem sizes n.\nThe maximum execution time was set to 10000 cycles for node coloring for critical graphs and 1000 cycles for the other problems.\nFigure 9: Relationship between execution time and problem size\nThe simulator program was terminated after this period and the algorithm was considered to fail a test case if it did not find a solution by then.\nIn such a case, the execution time for the test was counted as the maximum execution time.\n5.2.1 Evaluation with n-queens problem\nThe 
n-queens problem is a traditional problem of constraint satisfaction.\n10 test cases were generated for each problem size n \u2208 {10, 50, 100}.\nFigure 9 shows the execution time for different problem sizes when ABT, AWC and UMA were run.\n5.2.2 Evaluation with graph coloring problem\nThe graph coloring problem can be characterized by three parameters: (i) the number of colors k, (ii) the number of nodes\/agents n and (iii) the number of links m. Based on the ratio m\/n, the problem can be classified into three types [3]: (i) sparse (with m\/n = 2), (ii) critical (with m\/n = 2.7 or 4.7) and (iii) dense (with m\/n = (n \u2212 1) \/ 4).\nFor this problem, we did not include ABT in our empirical results as its failure rate was found to be very high.\nThis poor performance of ABT was expected since the graph coloring problem is more difficult than the n-queens problem, on which ABT already did not perform well (see Figure 9).\nThe sparse and dense (coloring) problem types are relatively easy while the critical type is difficult to solve.\nIn the experiments, we fix k = 3.\n10 test cases were created using the method described in [13] for each value of n \u2208 {60, 90, 120}, for each problem type.\nThe simulation results for each type of problem are shown in Figures 10 - 12.\nFigure 10: Comparison between AWC and UMA (sparse graph coloring)\n5.3 Discussion\n5.3.1 Comparison with ABT and AWC\n530 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nFigure 11: Comparison between AWC and UMA (critical graph coloring)\nFigure 12: Comparison between AWC and UMA (dense graph coloring)\nFigure 10 shows that the average performance of UMA is slightly better than that of AWC for the sparse problem.\nUMA outperforms AWC in solving the critical problem, as shown in Figure 11; it was observed that AWC failed in some test cases.\nHowever, as seen in Figure 12, both strategies are very efficient when solving the dense problem, with AWC showing slightly better performance.\nThe performance of UMA, in the worst (time complexity) case, is similar to that of all the evaluated strategies.\nThe worst case occurs when all the possible global states of the search are reached.\nSince only a few agents have the right to change their variable assignments in a negotiation round, the number of redundant computational cycles and information messages is reduced.\nAs we observe from the backtracking in ABT and AWC, a difference in the ordering of incoming messages can result in a different number of computational cycles being executed by the agents.\n5.3.2 Comparison with DBO\nThe computational performance of UMA is arguably better than that of DBO for the following reasons:\n\u2022 UMA can guarantee that there will be a variable reassignment following every negotiation round whereas DBO cannot.\n\u2022 UMA introduces one more communication round trip (that of sending a message and awaiting a reply) than DBO, which occurs due to the need to communicate unsolicited advices.\nAlthough this increases the communication cost per negotiation round, we observed from our simulations that the overall communication cost incurred by UMA is lower due to the significantly lower number of negotiation rounds.\n\u2022 Using UMA, in the worst case, an agent will only take 2 or 3 communication round trips per negotiation round, following which the agent or its neighbor will do a variable 
assignment update.\nUsing DBO, this number of round trips is uncertain as each agent might have to increase the weights of the violated constraints until an agent has a positive improvement; this could result in an infinite loop [3].\n6.\nCONCLUSION\nApplying automated negotiation to DCSP, this paper has proposed a protocol that prescribes the generic reasoning of a DCSP agent in a BDI architecture.\nOur work shows that several well-known DCSP algorithms, namely ABT, AWC and DBO, can be described as mechanisms sharing the same proposed protocol, and only differ in the strategies employed for the reasoning steps per negotiation round as governed by the protocol.\nImportantly, this means that it might furnish a unified framework for DCSP that not only provides a clearer BDI agent-theoretic view of existing DCSP approaches, but also opens up the opportunities to enhance or develop new strategies.\nTowards the latter, we have proposed and formulated a new strategy - the UMA strategy.\nEmpirical results and our discussion suggest that UMA is superior to ABT, AWC and DBO in some specific aspects.\nIt was observed from our simulations that UMA possesses the completeness property.\nFuture work will attempt to formally establish this property, as well as formalize other existing DCSP algorithms as BDI negotiation mechanisms, including the recent endeavor that employs a group mediator [5].\nThe idea of DCSP agents using different strategies in the same environment will also be investigated.","keyphrases":["constraint","algorithm","bdi","negoti","distribut constraint satisfact problem","dcsp","share environ","uma","backtrack","mediat","resourc restrict","privaci requir","belief-desireintent model","agent negoti"],"prmu":["P","P","P","P","P","P","P","P","U","U","U","U","M","R"]} {"id":"I-52","title":"A Unified and General Framework for Argumentation-based Negotiation","abstract":"This paper proposes a unified and general framework for argumentation-based negotiation, in which 
the role of argumentation is formally analyzed. The framework makes it possible to study the outcomes of an argumentation-based negotiation. It shows what an agreement is, how it is related to the theories of the agents, when it is possible, and how this can be attained by the negotiating agents in this case. It defines also the notion of concession, and shows in which situation an agent will make one, as well as how it influences the evolution of the dialogue.","lvl-1":"A Unified and General Framework for Argumentation-based Negotiation Leila Amgoud IRIT - CNRS 118, route de Narbonne 31062, Toulouse, France amgoud@irit.fr Yannis Dimopoulos University of Cyprus 75 Kallipoleos Str.\nPO Box 20537, Cyprus yannis@cs.ucy.ac.cy Pavlos Moraitis Paris-Descartes University 45 rue des Saints-P\u00e8res 75270 Paris Cedex 06, France pavlos@math-info.univparis5.fr ABSTRACT This paper proposes a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed.\nThe framework makes it possible to study the outcomes of an argumentation-based negotiation.\nIt shows what an agreement is, how it is related to the theories of the agents, when it is possible, and how this can be attained by the negotiating agents in this case.\nIt defines also the notion of concession, and shows in which situation an agent will make one, as well as how it influences the evolution of the dialogue.\nCategories and Subject Descriptors I.2.3 [Deduction and Theorem Proving]: Nonmonotonic reasoning and belief revision ; I.2.11 [Distributed Artificial Intelligence]: Intelligent agents General Terms Human Factors, Theory 1.\nINTRODUCTION Roughly speaking, negotiation is a process aiming at finding some compromise or consensus between two or several agents about some matters of collective agreement, such as pricing products, allocating resources, or choosing candidates.\nNegotiation models have been proposed for the design of systems able to bargain in 
an optimal way with other agents, for example when buying or selling products in e-commerce.\nDifferent approaches to automated negotiation have been investigated, including game-theoretic approaches (which usually assume complete information and unlimited computation capabilities) [11], heuristic-based approaches which try to cope with these limitations [6], and argumentation-based approaches [2, 3, 7, 8, 9, 12, 13] which emphasize the importance of exchanging information and explanations between negotiating agents in order to mutually influence their behaviors (e.g. an agent may concede a goal having a small priority), and consequently the outcome of the dialogue.\nIndeed, the first two types of settings do not allow for the addition of information or for exchanging opinions about offers.\nIntegrating argumentation theory in negotiation provides a good means for supplying additional information and also helps agents to convince each other by adequate arguments during a negotiation dialogue.\nIndeed, an offer supported by a good argument has a better chance to be accepted by an agent, and can also make it reveal its goals or give up some of them.\nThe basic idea behind an argumentation-based approach is that by exchanging arguments, the theories of the agents (i.e. 
their mental states) may evolve, and consequently, the status of offers may change.\nFor instance, an agent may reject an offer because it is not acceptable for it.\nHowever, the agent may change its mind if it receives a strong argument in favor of this offer.\nSeveral proposals have been made in the literature for modeling such an approach.\nHowever, the work is still preliminary.\nSome researchers have mainly focused on relating argumentation with protocols.\nThey have shown how and when arguments in favor of offers can be computed and exchanged.\nOthers have emphasized the decision-making problem.\nIn [3, 7], the authors argued that selecting an offer to propose at a given step of the dialogue is a decision-making problem.\nThey have thus proposed an argumentation-based decision model, and have shown how such a model can be related to the dialogue protocol.\nIn most existing works, there is no deep formal analysis of the role of argumentation in negotiation dialogues.\nIt is not clear how argumentation can influence the outcome of the dialogue.\nMoreover, basic concepts in negotiation such as agreement (i.e. 
optimal solutions, or compromise) and concession are neither defined nor studied.\nThis paper aims to propose a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed, and where the existing systems can be restated.\nIn this framework, a negotiation dialogue takes place between two agents on a set O of offers, whose structure is not known.\nThe goal of a negotiation is to find, among the elements of O, an offer that satisfies more or less the preferences of both agents.\n967 978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nEach agent is supposed to have a theory represented in an abstract way.\nA theory consists of a set A of arguments whose structure and origin are not known, a function specifying for each possible offer in O the arguments of A that support it, a non-specified conflict relation among the arguments, and finally a preference relation between the arguments.\nThe status of each argument is defined using Dung``s acceptability semantics.\nConsequently, the set of offers is partitioned into four subsets: acceptable, rejected, negotiable and non-supported offers.\nWe show how an agent``s theory may evolve during a negotiation dialogue.\nWe define formally the notions of concession, compromise, and optimal solution.\nThen, we propose a protocol that allows agents i) to exchange offers and arguments, and ii) to make concessions when necessary.\nWe show that dialogues generated under such a protocol terminate, and even reach optimal solutions when they exist.\nThis paper is organized as follows: Section 2 introduces the logical language that is used in the rest of the paper.\nSection 3 defines the agents as well as their theories.\nIn Section 4, we study the properties of these agents'' theories.\nSection 5 defines formally an argumentation-based negotiation, shows how the theories of agents may evolve during a dialogue, and how this evolution may influence the outcome of the dialogue.\nTwo kinds of 
outcomes, optimal solutions and compromises, are defined, and we show when such outcomes are reached.\nSection 6 illustrates our general framework through some examples.\nSection 7 compares our formalism with existing ones.\nSection 8 concludes and presents some perspectives.\nDue to lack of space, the proofs are not included.\nThey are given in a technical report that we will make available online at a later time.\n2.\nTHE LOGICAL LANGUAGE In what follows, L will denote a logical language, and \u2261 is an equivalence relation associated with it.\nFrom L, a set O = {o1, ... , on} of n offers is identified, such that \u2204 oi, oj \u2208 O with oi \u2261 oj.\nThis means that the offers are different.\nOffers correspond to the different alternatives that can be exchanged during a negotiation dialogue.\nFor instance, if the agents try to decide the place of their next meeting, then the set O will contain different towns.\nDifferent arguments can be built from L.\nThe set Args(L) will contain all those arguments.\nBy argument, we mean a reason for believing or for doing something.\nIn [3], it has been argued that the selection of the best offer to propose at a given step of the dialogue is a decision problem.\nIn [4], it has been shown that in an argumentation-based approach for decision making, two kinds of arguments are distinguished: arguments supporting choices (or decisions), and arguments supporting beliefs.\nMoreover, it has been acknowledged that the two categories of arguments are formally defined in different ways, and they play different roles.\nIndeed, an argument in favor of a decision, built both on an agent``s beliefs and goals, tries to justify the choice; whereas an argument in favor of a belief, built only from beliefs, tries to destroy the decision arguments, in particular the beliefs part of those decision arguments.\nConsequently, in a negotiation dialogue, those two kinds of arguments are generally exchanged between agents.\nIn what follows, the set 
Args(L) is then divided into two subsets: a subset Argso(L) of arguments supporting offers, and a subset Argsb(L) of arguments supporting beliefs.\nThus, Args(L) = Argso(L) \u222a Argsb(L).\nAs in [5], in what follows, we consider that the structure of the arguments is not known.\nSince the knowledge bases from which arguments are built may be inconsistent, the arguments may be conflicting too.\nIn what follows, those conflicts will be captured by the relation RL, thus RL \u2286 Args(L) \u00d7 Args(L).\nThree assumptions are made on this relation: First, the arguments supporting different offers are conflicting.\nThe idea behind this assumption is that since offers are exclusive, an agent has to choose only one at a given step of the dialogue.\nNote that the relation RL is not necessarily symmetric between the arguments of Argsb(L).\nThe second hypothesis says that arguments supporting the same offer are also conflicting.\nThe idea here is to return the strongest argument among these arguments.\nThe third condition does not allow an argument in favor of an offer to attack an argument supporting a belief.\nThis avoids wishful thinking.\nFormally: Definition 1.\nRL \u2286 Args(L) \u00d7 Args(L) is a conflict relation among arguments such that: \u2022 \u2200 a, a' \u2208 Argso(L), s.t. a \u2260 a', a RL a' \u2022 \u2204 a \u2208 Argso(L), a' \u2208 Argsb(L) such that a RL a' Note that the relation RL is not symmetric.\nThis is due to the fact that arguments of Argsb(L) may be conflicting but not necessarily in a symmetric way.\nIn what follows, we assume that the set Args(L) of arguments is finite, and each argument is attacked by a finite number of arguments.\n3.\nNEGOTIATING AGENTS THEORIES AND REASONING MODELS In this section we define formally the negotiating agents, i.e. 
their theories, as well as the reasoning model used by those agents in a negotiation dialogue.\n3.1 Negotiating agents theories Agents involved in a negotiation dialogue, called negotiating agents, are supposed to have theories.\nIn this paper, the theory of an agent will not refer, as usual, to its mental states (i.e. its beliefs, desires and intentions).\nHowever, it will be encoded in a more abstract way in terms of the arguments owned by the agent, a conflict relation among those arguments, a preference relation between the arguments, and a function that specifies which arguments support offers of the set O.\nWe assume that an agent is aware of all the arguments of the set Args(L).\nThe agent is even able to express a preference between any pair of arguments.\nThis does not mean that the agent will use all the arguments of Args(L), but it encodes the fact that when an agent receives an argument from another agent, it can interpret it correctly, and it can also compare it with its own arguments.\nSimilarly, each agent is supposed to be aware of the conflicts between arguments.\nThis also allows us to encode the fact that an agent can recognize whether the received argument is in conflict or not with its arguments.\nHowever, in its theory, only the conflicts between its own arguments are considered.\nDefinition 2 (Negotiating agent theory).\nLet O be a set of n offers.\nA negotiating agent theory is a tuple \u27e8A, F, \u227d, R, Def\u27e9 such that: \u2022 A \u2286 Args(L).\n\u2022 F: O \u2192 2^A s.t. \u2200 i, j with i \u2260 j, F(oi) \u2229 F(oj) = \u2205.\nLet AO = \u222a F(oi) with i = 1, ... , n. 
\u2022 \u227d \u2286 Args(L) \u00d7 Args(L) is a partial preorder denoting a preference relation between arguments.\n\u2022 R \u2286 RL such that R \u2286 A \u00d7 A. \u2022 Def \u2286 A \u00d7 A such that \u2200 a, b \u2208 A, a defeats b, denoted a Def b, iff: - a R b, and - not (b \u227d a). The function F returns the arguments supporting offers in O.\nIn [4], it has been argued that any decision may have arguments supporting it, called arguments PRO, and arguments against it, called arguments CONS.\nMoreover, these two types of arguments are not necessarily conflicting.\nFor simplicity reasons, in this paper we consider only arguments PRO.\nMoreover, we assume that an argument cannot support two distinct offers.\nHowever, it may be the case that an offer is not supported at all by arguments, thus F(oi) may be empty.\nExample 1.\nLet O = {o1, o2, o3} be a set of offers.\nThe following theory is the theory of agent i: \u2022 A = {a1, a2, a3, a4} \u2022 F(o1) = {a1}, F(o2) = {a2}, F(o3) = \u2205.\nThus, AO = {a1, a2} \u2022 \u227d = {(a1, a2), (a2, a1), (a3, a2), (a4, a3)} \u2022 R = {(a1, a2), (a2, a1), (a3, a2), (a4, a3)} \u2022 Def = {(a4, a3), (a3, a2)} From the above definition of agent theory, the following hold: Property 1.\n\u2022 Def \u2286 R \u2022 \u2200 a, a' \u2208 F(oi), a R a' 3.2 The reasoning model From the theory of an agent, one can define the argumentation system used by that agent for reasoning about the offers and the arguments, i.e. for computing the status of the different offers and arguments.\nDefinition 3 (Argumentation system).\nLet \u27e8A, F, \u227d, R, Def\u27e9 be the theory of an agent.\nThe argumentation system of that agent is the pair \u27e8A, Def\u27e9.\nIn [5], different acceptability semantics have been introduced for computing the status of arguments.\nThese are based on two basic concepts, defence and conflict-free, defined as follows: Definition 4 (Defence\/conflict-free).\nLet S \u2286 A. 
\u2022 S defends an argument a iff each argument that defeats a is defeated by some argument in S. \u2022 S is conflict-free iff there exist no a, a' in S such that a Def a'.\nDefinition 5 (Acceptability semantics).\nLet S be a conflict-free set of arguments, and let T: 2^A \u2192 2^A be a function such that T(S) = {a | a is defended by S}.\n\u2022 S is a complete extension iff S = T(S).\n\u2022 S is a preferred extension iff S is a maximal (w.r.t. set inclusion \u2286) complete extension.\n\u2022 S is a grounded extension iff it is the smallest (w.r.t. set inclusion \u2286) complete extension.\nLet E1, ... , Ex denote the different extensions under a given semantics.\nNote that there is only one grounded extension.\nIt contains all the arguments that are not defeated, and those arguments that are defended directly or indirectly by non-defeated arguments.\nTheorem 1.\nLet \u27e8A, Def\u27e9 be the argumentation system defined as shown above.\n1.\nIt may have x \u2265 1 preferred extensions.\n2.\nThe grounded extension is S = \u222a_{i\u22651} T^i(\u2205).\nNote that when the grounded extension (or the preferred extension) is empty, this means that there is no acceptable offer for the negotiating agent.\nExample 2.\nIn Example 1, there is one preferred extension, E = {a1, a2, a4}.\nNow that the acceptability semantics is defined, we are ready to define the status of any argument.\nDefinition 6 (Argument status).\nLet \u27e8A, Def\u27e9 be an argumentation system, and E1, ... , Ex its extensions under a given semantics.\nLet a \u2208 A. 1.\na is accepted iff a \u2208 Ei, \u2200 Ei with i = 1, ... , x. 
2.\na is rejected iff \u2204 Ei such that a \u2208 Ei.\n3.\na is undecided iff a is neither accepted nor rejected.\nThis means that a is in some extensions and not in others.\nNote that A = {a|a is accepted} \u222a {a|a is rejected} \u222a {a|a is undecided}.\nExample 3.\nIn Example 1, the arguments a1, a2 and a4 are accepted, whereas the argument a3 is rejected.\nAs said before, agents use argumentation systems for reasoning about offers.\nIn a negotiation dialogue, agents propose and accept offers that are acceptable for them, and reject bad ones.\nIn what follows, we will define the status of an offer.\nAccording to the status of arguments, one can define four statuses of the offers as follows: Definition 7 (Offers status).\nLet o \u2208 O. \u2022 The offer o is acceptable for the negotiating agent iff \u2203 a \u2208 F(o) such that a is accepted.\nOa = {oi \u2208 O, such that oi is acceptable}.\n\u2022 The offer o is rejected for the negotiating agent iff \u2200 a \u2208 F(o), a is rejected.\nOr = {oi \u2208 O, such that oi is rejected}.\n\u2022 The offer o is negotiable iff \u2200 a \u2208 F(o), a is undecided.\nOn = {oi \u2208 O, such that oi is negotiable}.\n\u2022 The offer o is non-supported iff it is neither acceptable, rejected, nor negotiable.\nOns = {oi \u2208 O, such that oi is non-supported}.\nExample 4.\nIn Example 1, the two offers o1 and o2 are acceptable since they are supported by accepted arguments, whereas the offer o3 is non-supported since it has no argument in its favor.\nFrom the above definitions, the following results hold: Property 2.\nLet o \u2208 O. \u2022 O = Oa \u222a Or \u222a On \u222a Ons.\n\u2022 The set Oa may contain more than one offer.\nFrom the above partition of the set O of offers, a preference relation between offers is defined.\nLet Ox and Oy be two subsets of O. 
Ox \u227b Oy means that any offer in Ox is preferred to any offer in the set Oy.\nWe can also write, for two offers oi, oj, oi \u227b oj iff oi \u2208 Ox, oj \u2208 Oy and Ox \u227b Oy.\nDefinition 8 (Preference between offers).\nLet O be a set of offers, and Oa, Or, On, Ons its partition.\nOa \u227b On \u227b Ons \u227b Or.\nExample 5.\nIn Example 1, we have o1 \u227b o3, and o2 \u227b o3.\nHowever, o1 and o2 are indifferent.\n4.\nTHE STRUCTURE OF NEGOTIATION THEORIES In this section, we study the properties of the system developed above.\nWe first show that in the particular case where A = AO (i.e. all of the agent``s arguments refer to offers), the corresponding argumentation system will return at least one non-empty preferred extension.\nTheorem 2.\nLet \u27e8A, Def\u27e9 be an argumentation system such that A = AO.\nThen the system returns at least one extension E such that |E| \u2265 1.\nWe now present some results that demonstrate the importance of indifference in negotiating agents, and more specifically its relation to acceptable outcomes.\nWe first show that the set Oa may contain several offers when their corresponding accepted arguments are indifferent w.r.t. the preference relation \u227d.\nTheorem 3.\nLet o1, o2 \u2208 O. o1, o2 \u2208 Oa iff \u2203 a1 \u2208 F(o1), \u2203 a2 \u2208 F(o2), such that a1 and a2 are accepted and are indifferent w.r.t. \u227d (i.e. a1 \u227d a2 and a2 \u227d a1).\nWe now study acyclic preference relations, which are defined formally as follows.\nDefinition 9 (Acyclic relation).\nA relation R on a set A is acyclic if there is no sequence a1, a2, ... , an \u2208 A, with n > 1, such that (ai, ai+1) \u2208 R and (an, a1) \u2208 R, with 1 \u2264 i < n. 
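Checking whether a relation satisfies Definition 9 amounts to cycle detection in the directed graph the relation induces. The sketch below is illustrative only (the function name and data layout are ours, not the paper's); it uses a standard depth-first search with a three-color marking:

```python
def is_acyclic(elements, relation):
    """Return True iff `relation` (a set of ordered pairs over
    `elements`) contains no cycle in the sense of Definition 9."""
    adjacency = {a: [] for a in elements}
    for (x, y) in relation:
        adjacency[x].append(y)

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current DFS path / done
    color = {a: WHITE for a in elements}

    def dfs(u):
        color[u] = GRAY
        for v in adjacency[u]:
            if color[v] == GRAY:      # back edge: a cycle exists
                return False
            if color[v] == WHITE and not dfs(v):
                return False
        color[u] = BLACK
        return True

    return all(color[a] != WHITE or dfs(a) for a in elements)
```

A mutual pair (a, b) and (b, a) is a cycle of length 2, so this check rejects relations containing indifferent pairs, in line with the remark that follows the definition.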
Note that acyclicity prohibits pairs of arguments a, b such that a \u227d b and b \u227d a, i.e., an acyclic preference relation disallows indifference.\nTheorem 4.\nLet A be a set of arguments, R the attacking relation of A defined as R \u2286 A \u00d7 A, and \u227d an acyclic relation on A.\nThen for any pair of arguments a, b \u2208 A such that (a, b) \u2208 R, either (a, b) \u2208 Def or (b, a) \u2208 Def (or both).\nThe previous result is used in the proof of the following theorem, which states that acyclic preference relations sanction extensions that support exactly one offer.\nTheorem 5.\nLet A be a set of arguments, and \u227d an acyclic relation on A.\nIf E is an extension of \u27e8A, Def\u27e9, then |E \u2229 AO| = 1.\nAn immediate consequence of the above is the following.\nProperty 3.\nLet A be a set of arguments such that A = AO.\nIf the relation \u227d on A is acyclic, then each extension Ei of \u27e8A, Def\u27e9 satisfies |Ei| = 1.\nAnother direct consequence of the above theorem is that in acyclic preference relations, arguments that support offers can participate in only one preferred extension.\nTheorem 6.\nLet A be a set of arguments, and \u227d an acyclic relation on A.\nThen the preferred extensions of \u27e8A, Def\u27e9 are pairwise disjoint w.r.t. the arguments of AO.\nUsing the above results we can prove the main theorem of this section, which states that negotiating agents with acyclic preference relations do not have acceptable offers.\nTheorem 7.\nLet \u27e8A, F, \u227d, R, Def\u27e9 be a negotiating agent theory such that A = AO and \u227d is an acyclic relation.\nThen the set of accepted arguments w.r.t. \u27e8A, Def\u27e9 is empty.\nConsequently, the set of acceptable offers Oa is empty as well.\n5.\nARGUMENTATION-BASED NEGOTIATION In this section, we define formally a protocol that generates argumentation-based negotiation dialogues between two negotiating agents P and C.\nThe two agents negotiate about an object whose possible values belong to a set O.\nThis set O is supposed to be known and the same for both agents.\nFor simplicity reasons, we assume that this set does not change 
during the dialogue.\nThe agents are equipped with theories denoted respectively \u27e8AP, FP, \u227dP, RP, DefP\u27e9 and \u27e8AC, FC, \u227dC, RC, DefC\u27e9.\nNote that the two theories may be different in the sense that the agents may have different sets of arguments and different preference relations.\nWorse yet, they may have different arguments in favor of the same offers.\nMoreover, these theories may evolve during the dialogue.\n5.1 Evolution of the theories Before defining formally the evolution of an agent``s theory, let us first introduce the notion of dialogue moves, or moves for short.\nDefinition 10 (Move).\nA move is a tuple mi = \u27e8pi, ai, oi, ti\u27e9 such that: \u2022 pi \u2208 {P, C} \u2022 ai \u2208 Args(L) \u222a \u03b8 (in what follows, \u03b8 denotes the fact that no argument, or no offer, is given) \u2022 oi \u2208 O \u222a \u03b8 \u2022 ti \u2208 N\u2217 is the target of the move, such that ti < i. The function Player (resp. Argument, Offer, Target) returns the player pi of the move (resp. the argument ai, the offer oi, and the target ti of the move).\nLet M denote the set of all the moves that can be built from \u27e8{P, C}, Args(L), O\u27e9.\nNote that the set M is finite since Args(L) and O are assumed to be finite.\nLet us now see how an agent``s theory evolves, and why.\nThe idea is that if an agent receives an argument from another agent, it will add the new argument to its theory.\nMoreover, since an argument may bring new information to the agent, new arguments can emerge.\nLet us take the following example: Example 6.\nSuppose that an agent P has the following propositional knowledge base: \u03a3P = {x, y \u2192 z}.\nFrom this base one cannot deduce z. 
Let us assume that this agent receives the argument {a, a → y}, which justifies y. It is clear that P can now build an argument, say {a, a → y, y → z}, in favor of z.

Similarly, if a received argument is in conflict with the arguments of agent i, then those conflicts are also added to its relation Ri. Note that new conflicts may arise between the original arguments of the agent and the ones that emerge after adding the received arguments to its theory. Those new conflicts should also be considered. As a direct consequence of the evolution of the sets Ai and Ri, the defeat relation Defi is also updated. The initial theory of an agent i (i.e. its theory before the dialogue starts) is denoted by ⟨Ai 0, Fi 0, ≻i 0, Ri 0, Defi 0⟩, with i ∈ {P, C}. Besides, in this paper, we suppose that the preference relation ≻i of an agent does not change during the dialogue.

Definition 11 (Theory evolution). Let m1, ..., mt, ..., mj be a sequence of moves. The theory of an agent i at a step t > 0 is ⟨Ai t, Fi t, ≻i t, Ri t, Defi t⟩ such that:
• Ai t = Ai 0 ∪ {ai | ai = Argument(mi), i = 1, ..., t} ∪ A with A ⊆ Args(L)
• Fi t : O → 2^(Ai t)
• ≻i t = ≻i 0
• Ri t = Ri 0 ∪ {(ai, aj) | ai = Argument(mi), aj = Argument(mj), i, j ≤ t, and ai RL aj} ∪ R with R ⊆ RL
• Defi t ⊆ Ai t × Ai t

The above definition captures the monotonic aspect of an argument: an argument cannot be removed. However, its status may change. An argument that is accepted at step t of the dialogue by an agent may become rejected at step t + i.
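The update described in Definition 11 can be rendered as a minimal sketch. This is an illustrative reading only, not the paper's formalism: the names `Move`, `Theory`, and the shared-conflict test `conflict` (standing in for the relation RL, which both agents are assumed to know) are hypothetical.

```python
from dataclasses import dataclass, field

THETA = None  # the paper's theta: "no argument / no offer is given"

@dataclass(frozen=True)
class Move:
    player: str       # 'P' or 'C'
    argument: object  # an argument, or THETA
    offer: object     # an offer, or THETA
    target: int       # index of the targeted move (ti < i; 0 for the opening move)

@dataclass
class Theory:
    arguments: set = field(default_factory=set)  # Ai t
    attacks: set = field(default_factory=set)    # Ri t, pairs (a, b)

    def receive(self, move, conflict):
        """Monotonically grow the theory with a received move (Definition 11).

        `conflict(a, b)` plays the role of the shared relation RL, so that
        conflicts between old and newly received arguments are recognized.
        Arguments are never removed; only their status may later change.
        """
        if move.argument is THETA:
            return
        self.arguments.add(move.argument)
        # Rescan all pairs: new conflicts may involve pre-existing arguments.
        for a in self.arguments:
            for b in self.arguments:
                if a != b and conflict(a, b):
                    self.attacks.add((a, b))
```

Note that `receive` only grows `arguments` and `attacks`, mirroring the monotonicity remark above; recomputing the defeat relation and argument statuses from the updated sets is left out of the sketch.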
Consequently, the status of offers may also change. Thus, the sets Oa, Or, On, and Ons may change from one step of the dialogue to another. This means, for example, that some offers could move from the set Oa to the set Or and vice-versa. Note that in the definition of Ri t, the relation RL is used to denote a conflict between exchanged arguments. The reason is that such a conflict may not be in the set Ri of agent i. Thus, in order to recognize such conflicts, we have supposed that the set RL is known to the agents. This allows us to capture the situation where an agent becomes able to prove an argument that it could not prove before, by incorporating into its beliefs some information conveyed through the exchange of arguments with the other agent. This argument, unknown at the beginning of the dialogue, could give the agent the possibility to defeat an argument that it could not defeat using its initial arguments. This could even change the status of these initial arguments, which in turn would change the status of the associated offers. In what follows, Oi t,x denotes the set of offers of type x, where x ∈ {a, n, r, ns}, of the agent i at step t of the dialogue. In some places, we use for short the notation Oi t to denote the partition of the set O at step t for agent i.
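The four offer classes behind this notation can be illustrated with a short sketch. The classification criteria encoded below (non-supported when F(o) is empty, acceptable when some supporting argument is accepted, rejected when every supporting argument is rejected, negotiable otherwise) are one natural reading assumed here for illustration; `partition_offers` and its parameter names are hypothetical.

```python
def partition_offers(offers, supports, accepted, rejected):
    """Partition O into Oa / On / Or / Ons for one agent at one dialogue step.

    `supports[o]` is F(o); `accepted` and `rejected` are the sets of
    arguments with that status in the agent's argumentation system.
    """
    part = {"a": set(), "n": set(), "r": set(), "ns": set()}
    for o in offers:
        F_o = supports.get(o, set())
        if not F_o:
            part["ns"].add(o)   # non-supported: no argument in favor of o
        elif F_o & accepted:
            part["a"].add(o)    # acceptable: some accepted supporting argument
        elif F_o <= rejected:
            part["r"].add(o)    # rejected: every supporting argument rejected
        else:
            part["n"].add(o)    # negotiable: neither of the above
    return part

# A hypothetical agent whose argument a1 is accepted and a2 rejected,
# with a1 supporting o1 and a2 supporting o2:
example = partition_offers(
    offers={"o1", "o2"},
    supports={"o1": {"a1"}, "o2": {"a2"}},
    accepted={"a1"},
    rejected={"a2"},
)
```

For such an agent the sketch classifies o1 as acceptable and o2 as rejected, and rerunning it after each theory update shows how offers may migrate between classes from one step to the next.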
Note that we do not have Oi t,x ⊆ Oi t+1,x in general.

5.2 The notion of agreement

As said in the introduction, negotiation is a process aiming at finding an agreement about some matter. By agreement, one means a solution that satisfies the preferences of both agents to the largest possible extent. In case there is no such solution, we say that the negotiation fails. In what follows, we discuss the different kinds of solutions that may be reached in a negotiation. The first one is the optimal solution: the best offer for both agents. Formally:

Definition 12 (Optimal solution). Let O be a set of offers, and o ∈ O. The offer o is an optimal solution at a step t ≥ 0 iff o ∈ OP t,a ∩ OC t,a.

Such a solution does not always exist, since agents may have conflicting preferences. Thus, agents make concessions by proposing/accepting less preferred offers.

Definition 13 (Concession). Let o ∈ O be an offer. The offer o is a concession for an agent i iff o ∈ Oi x such that ∃ Oi y ≠ ∅ with Oi y preferred to Oi x.

During a negotiation dialogue, agents first exchange their most preferred offers, and if these are rejected, they make concessions. In this case, we say that their best offers are no longer defendable. In an argumentation setting, this means that the agent has already presented all its arguments supporting its best offers, and has no counterargument against the ones presented by the other agent. Formally:

Definition 14 (Defendable offer). Let ⟨Ai t, Fi t, ≻i t, Ri t, Defi t⟩ be the theory of agent i at a step t > 0 of the dialogue. Let o ∈ O such that ∃ j ≤ t with Player(mj) = i and Offer(mj) = o. The offer o is defendable by agent i iff:
• ∃ a ∈ Fi t(o) such that ∄ k ≤ t with Argument(mk) = a, or
• ∃ a ∈ Ai t \ Fi t(o) s.t.
a Defi t b, with:
- Argument(mk) = b, k ≤ t, and Player(mk) ≠ i
- ∄ l ≤ t such that Argument(ml) = a

The offer o is said to be non-defendable otherwise, and NDi t is the set of non-defendable offers of agent i at a step t.

5.3 Negotiation dialogue

Now that we have shown how the theories of the agents evolve during a dialogue, we are ready to define formally an argumentation-based negotiation dialogue. For that purpose, we first need the notion of a legal continuation.

Definition 15 (Legal move). A move m is a legal continuation of a sequence of moves m1, ..., ml iff ∄ j, k ≤ l such that:
• Offer(mj) = Offer(mk), and
• Player(mj) ≠ Player(mk)

The idea here is that if the two agents present the same offer, then the dialogue should terminate, and there is no longer any possible continuation of the dialogue.

Definition 16 (Argumentation-based negotiation). An argumentation-based negotiation dialogue d between two agents P and C is a non-empty sequence of moves m1, ..., ml such that:
• pi = P iff i is odd, and pi = C iff i is even
• Player(m1) = P, Argument(m1) = θ, Offer(m1) ≠ θ, and Target(m1) = 0 (the first move has no target)
• ∀ mi, if Offer(mi) ≠ θ, then Offer(mi) is at least as preferred, for agent Player(mi), as every oj ∈ O \ (O Player(mi) i,r ∪ ND Player(mi) i)
• ∀ i = 1, ..., l, mi is a legal continuation of m1, ..., mi−1
• Target(mi) = j such that j < i and Player(mi) ≠ Player(mj)
• If Argument(mi) ≠ θ, then:
- if Offer(mi) ≠ θ then Argument(mi) ∈ F(Offer(mi))
- if Offer(mi) = θ then Argument(mi) Def Player(mi) i Argument(Target(mi))
• ∄ i, j ≤ l, i ≠ j, such that mi = mj
• ∄ m ∈ M such that m is a legal continuation of m1, ...
, ml

Let D be the set of all possible dialogues. The first condition says that the two agents take turns. The second condition says that agent P starts the negotiation dialogue by presenting an offer. Note that, in the first turn, we suppose that the agent does not present an argument. This assumption is made for strategic purposes: arguments are exchanged as soon as a conflict appears. The third condition ensures that agents exchange their best offers, but never the rejected ones. This condition also takes into account the concessions an agent has to make when a concession is the only option for it at the current state of the dialogue. Of course, as we have shown in a previous section, an agent may have several good or acceptable offers; in this case, the agent chooses one of them randomly. The fourth condition ensures that the moves are legal. This condition allows the dialogue to terminate as soon as an offer is presented by both agents. The fifth condition allows agents to backtrack. The sixth condition says that an agent may send arguments in favor of offers, in which case the offer should be stated in the same move; an agent can also send arguments in order to defeat arguments of the other agent. The next condition prevents repeating the same move, which is useful for avoiding loops. The last condition ensures that all the possible legal moves have been presented. The outcome of a negotiation dialogue is computed as follows:

Definition 17 (Dialogue outcome). Let d = m1, ..., ml be an argumentation-based negotiation dialogue. The outcome of this dialogue, denoted Outcome, is Outcome(d) = Offer(ml) iff ∃ j < l s.t.
Offer(ml) = Offer(mj) and Player(ml) ≠ Player(mj). Otherwise, Outcome(d) = θ.

Note that when Outcome(d) = θ, the negotiation fails, and no agreement is reached by the two agents. However, if Outcome(d) ≠ θ, the negotiation succeeds, and a solution that is either optimal or a compromise is found.

Theorem 8. ∀ di ∈ D, the argumentation-based negotiation di terminates.

The above result is of great importance, since it shows that the proposed protocol avoids loops and dialogues terminate. Another important result shows that the proposed protocol reaches an optimal solution whenever one exists. Formally:

Theorem 9 (Completeness). Let d = m1, ..., ml be an argumentation-based negotiation dialogue. If ∃ t ≤ l such that OP t,a ∩ OC t,a ≠ ∅, then Outcome(d) ∈ OP t,a ∩ OC t,a.

We show also that the proposed dialogue protocol is sound in the sense that, if a dialogue returns a solution, then that solution is a compromise; in other words, a common agreement at a given step of the dialogue. We show also that if the negotiation fails, then there is no possible solution.

Theorem 10 (Soundness). Let d = m1, ..., ml be an argumentation-based negotiation dialogue.
1. If Outcome(d) = o, with o ≠ θ, then ∃ t ≤ l such that o ∈ OP t,x ∩ OC t,y, with x, y ∈ {a, n, ns}.
2. If Outcome(d) = θ, then ∀ t ≤ l, OP t,x ∩ OC t,y = ∅, ∀ x, y ∈ {a, n, ns}.

A direct consequence of the above theorem is the following:

Property 4. Let d = m1, ...
, ml be an argumentation-based negotiation dialogue. If Outcome(d) = θ, then ∀ t ≤ l:
• OP t,r = OC t,a ∪ OC t,n ∪ OC t,ns, and
• OC t,r = OP t,a ∪ OP t,n ∪ OP t,ns.

6. ILLUSTRATIVE EXAMPLES

In this section we present some examples in order to illustrate our general framework.

Example 7 (No argumentation). Let O = {o1, o2} be the set of all possible offers. Let P and C be two agents equipped with the same theory ⟨A, F, ≻, R, Def⟩ such that A = ∅, F(o1) = F(o2) = ∅, ≻ = ∅, R = ∅, Def = ∅. In this case, it is clear that the two offers o1 and o2 are non-supported. The proposed protocol (see Definition 16) will generate one of the following dialogues:

P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, θ, o1, 1⟩

This dialogue ends with o1 as a compromise. Note that this solution is not considered optimal, since o1 is not an acceptable offer for the agents.

P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, θ, o2, 1⟩
P: m3 = ⟨P, θ, o2, 2⟩

This dialogue ends with o2 as a compromise.

P: m1 = ⟨P, θ, o2, 0⟩
C: m2 = ⟨C, θ, o2, 1⟩

This dialogue also ends with o2 as a compromise. The last possible dialogue is the following, which ends with o1 as a compromise:

P: m1 = ⟨P, θ, o2, 0⟩
C: m2 = ⟨C, θ, o1, 1⟩
P: m3 = ⟨P, θ, o1, 2⟩

Note that in the above example, since there is no exchange of arguments, the theories of both agents do not change. Let us now consider the following example.

Example 8 (Static theories). Let O = {o1, o2} be the set of all possible offers. The theory of agent P is ⟨AP, FP, ≻P, RP, DefP⟩ such that: AP = {a1, a2}, FP(o1) = {a1}, FP(o2) = {a2}, ≻P = {(a1, a2)}, RP = {(a1, a2), (a2, a1)}, DefP = {(a1, a2)}. The argumentation system ⟨AP, DefP⟩ of this agent will return a1 as an accepted argument and a2 as a rejected one. Consequently, the offer o1 is acceptable and o2 is rejected. The theory
of agent C is ⟨AC, FC, ≻C, RC, DefC⟩ such that: AC = {a1, a2}, FC(o1) = {a1}, FC(o2) = {a2}, ≻C = {(a2, a1)}, RC = {(a1, a2), (a2, a1)}, DefC = {(a2, a1)}. The argumentation system ⟨AC, DefC⟩ of this agent will return a2 as an accepted argument and a1 as a rejected one. Consequently, the offer o2 is acceptable and o1 is rejected. The only possible dialogues that may take place between the two agents are the following:

P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, θ, o2, 1⟩
P: m3 = ⟨P, a1, o1, 2⟩
C: m4 = ⟨C, a2, o2, 3⟩

The second possible dialogue is the following:

P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, a2, o2, 1⟩
P: m3 = ⟨P, a1, o1, 2⟩
C: m4 = ⟨C, θ, o2, 3⟩

Both dialogues end in failure. Note that in both dialogues, the theories of both agents do not change: the exchanged arguments are already known to both agents. The negotiation fails because the agents have conflicting preferences. Let us now consider an example in which argumentation allows the agents to reach an agreement.

Example 9 (Dynamic theories). Let O = {o1, o2} be the set of all possible offers. The theory of agent P is ⟨AP, FP, ≻P, RP, DefP⟩ such that: AP = {a1, a2}, FP(o1) = {a1}, FP(o2) = {a2}, ≻P = {(a1, a2), (a3, a1)}, RP = {(a1, a2), (a2, a1)}, DefP = {(a1, a2)}. The argumentation system ⟨AP, DefP⟩ of this agent will return a1 as an accepted argument and a2 as a rejected one. Consequently, the offer o1 is acceptable and o2 is rejected. The theory of agent C is ⟨AC, FC, ≻C, RC, DefC⟩ such that: AC = {a1, a2, a3}, FC(o1) = {a1}, FC(o2) = {a2}, ≻C = {(a1, a2), (a3, a1)}, RC = {(a1, a2), (a2, a1), (a3, a1)}, DefC = {(a1, a2), (a3, a1)}. The argumentation system ⟨AC, DefC⟩ of this agent will return a3 and a2 as accepted arguments, and a1 as a rejected one. Consequently, the offer o2 is acceptable and o1 is rejected. The following dialogue may take place between the two agents:

P: m1 = ⟨P, θ, o1, 0⟩
C: m2 = ⟨C, θ, o2, 1⟩
P: m3 = ⟨P, a1, o1, 2⟩
C: m4 = ⟨C, a3, θ, 3⟩
P:
m5 = ⟨P, θ, o2, 4⟩

At step 4 of the dialogue, the agent P receives the argument a3 from C. Thus, its theory evolves as follows: AP = {a1, a2, a3}, RP = {(a1, a2), (a2, a1), (a3, a1)}, DefP = {(a1, a2), (a3, a1)}. At this step, the argument a1, which was accepted, becomes rejected, and the argument a2, which was rejected at the beginning of the dialogue, becomes accepted. Thus, the offer o2 becomes acceptable for the agent, whereas o1 becomes rejected. At step 4, the offer o2 is therefore acceptable for both agents, thus it is an optimal solution. The dialogue ends by returning this offer as an outcome.

7. RELATED WORK

Argumentation has been integrated in negotiation dialogues since the early nineties, starting with Sycara [12]. In that work, the author emphasized the advantages of using argumentation in negotiation dialogues, and a specific framework was introduced. In [8], the different types of arguments that are used in a negotiation dialogue, such as threats and rewards, have been discussed, and a particular framework for negotiation has been proposed. In [9, 13], other frameworks have been proposed. Even if all these frameworks are based on different logics and use different definitions of arguments, they all have at their heart an exchange of offers and arguments. However, none of those proposals explains when arguments can be used within a negotiation, and how they should be dealt with by the agent that receives them. Thus the protocol for handling arguments was missing. Another limitation of the above frameworks is that the argumentation frameworks they use are quite poor, since they rely on a very simple acceptability semantics. In [2] a negotiation framework that fills this gap has been suggested: a protocol that handles the arguments was proposed. However, the notion of concession is not modeled in that framework, and it is not
clear what the status of the dialogue outcome is. Moreover, it is not clear how an agent chooses the offer to propose at a given step of the dialogue. In [1, 7], the authors have focused mainly on this decision problem. They have proposed an argumentation-based decision framework that is used by agents in order to choose the offer to propose or to accept during the dialogue. In that work, agents are supposed to have a beliefs base and a goals base. Our framework is more general, since it does not impose any specific structure on the arguments, the offers, or the beliefs. The negotiation protocol is general as well. Thus this framework can be instantiated in different ways, creating in this manner different specific argumentation-based negotiation frameworks, all of them respecting the same properties. Our framework is also a unified one, because frameworks like the ones presented above can be represented within it. For example, the decision-making mechanism proposed in [7] for the evaluation of arguments, and therefore of offers, which is based on a priority relation between mutually attacking arguments, can be captured by the defeat relation proposed in our framework. This relation takes into account simultaneously the attacking and preference relations that may exist between two arguments.

8. CONCLUSIONS AND FUTURE WORK

In this paper we have presented a unified and general framework for argumentation-based negotiation. Like any other argumentation-based negotiation framework, as evoked in e.g. [10], our framework has all the advantages that argumentation-based negotiation approaches present when compared to negotiation approaches based either on game-theoretic models (see e.g.
[11]) or heuristics ([6]). This work is a first attempt to formally define the role of argumentation in the negotiation process. More precisely, for the first time, it formally establishes the link that exists between the status of the arguments and the offers they support, it defines the notion of concession and shows how it influences the evolution of the negotiation, it determines how the theories of agents evolve during the dialogue, and it performs an analysis of the negotiation outcomes. It is also the first time that a study of the formal properties of the negotiation theories of the agents, as well as of an argumentative negotiation dialogue, is presented.

Our future work concerns several points. A first point is to relax the assumption that the set of possible offers is the same for both agents. Indeed, it is more natural to assume that agents may have different sets of offers. During a negotiation dialogue, these sets will evolve, and arguments in favor of the new offers may be built from the agent theory. Thus, the set of offers will be part of the agent theory. Another possible extension of this work would be to allow agents to handle arguments both PRO and CON offers, which is more akin to the way humans make decisions. Considering both types of arguments will refine the evaluation of the offers' status. In the proposed model, a preference relation between offers is defined on the basis of the partition of the set of offers. This preference relation can be refined: for instance, among the acceptable offers, one may prefer the offer that is supported by the strongest argument. In [4], different criteria have been proposed for comparing decisions; our framework can thus be extended by integrating those criteria. Another interesting point to investigate is that of considering negotiation dialogues between two agents with different profiles. By profile, we mean the criterion used by an agent to compare its offers.

9. REFERENCES

[1] L. Amgoud, S.
Belabbes, and H. Prade. Towards a formal framework for the search of a consensus between autonomous agents. In Proceedings of the 4th International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 537-543, 2005.
[2] L. Amgoud, S. Parsons, and N. Maudet. Arguments, dialogue, and negotiation. In Proceedings of the 14th European Conference on Artificial Intelligence, 2000.
[3] L. Amgoud and H. Prade. Reaching agreement through argumentation: A possibilistic approach. In 9th International Conference on the Principles of Knowledge Representation and Reasoning, KR'2004, 2004.
[4] L. Amgoud and H. Prade. Explaining qualitative decision under uncertainty by argumentation. In 21st National Conference on Artificial Intelligence, AAAI'06, pages 16-20, 2006.
[5] P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77:321-357, 1995.
[6] N. R. Jennings, P. Faratin, A. R. Lomuscio, S. Parsons, and C. Sierra. Automated negotiation: Prospects, methods and challenges. International Journal of Group Decision and Negotiation, 2001.
[7] A. Kakas and P. Moraitis. Adaptive agent negotiation via argumentation. In Proceedings of the 5th International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 384-391, 2006.
[8] S. Kraus, K. Sycara, and A. Evenchik. Reaching agreements through argumentation: a logical model and implementation. Artificial Intelligence, 104:1-69, 1998.
[9] S. Parsons and N. R. Jennings. Negotiation through argumentation - a preliminary report. In Proceedings of the 2nd International Conference on Multi Agent Systems, pages 267-274, 1996.
[10] I. Rahwan, S. D. Ramchurn, N. R. Jennings, P. McBurney, S. Parsons, and E. Sonenberg. Argumentation-based negotiation. Knowledge Engineering Review, 18(4):343-375, 2003.
[11] J. Rosenschein and G.
Zlotkin. Rules of Encounter: Designing Conventions for Automated Negotiation Among Computers. MIT Press, Cambridge, Massachusetts, 1994.
[12] K. Sycara. Persuasive argumentation in negotiation. Theory and Decision, 28:203-242, 1990.
[13] F. Tohmé. Negotiation and defeasible reasons for choice. In Proceedings of the Stanford Spring Symposium on Qualitative Preferences in Deliberation and Practical Reasoning, pages 95-102, 1997.
exchanging information and explanations between negotiating agents in order to mutually influence their behaviors (e.g. an agent may concede a goal having a small priority), and consequently the outcome of the dialogue.\nIndeed, the two first types of settings do not allow for the addition of information or for exchanging opinions about offers.\nIntegrating argumentation theory in negotiation provides a good means for supplying additional information and also helps agents to convince each other by adequate arguments during a negotiation dialogue.\nIndeed, an offer supported by a good argument has a better chance to be accepted by an agent, and can also make him reveal his goals or give up some of them.\nThe basic idea behind an argumentationbased approach is that by exchanging arguments, the theories of the agents (i.e. their mental states) may evolve, and consequently, the status of offers may change.\nFor instance, an agent may reject an offer because it is not acceptable for it.\nHowever, the agent may change its mind if it receives a strong argument in favor of this offer.\nSeveral proposals have been made in the literature for modeling such an approach.\nHowever, the work is still preliminary.\nSome researchers have mainly focused on relating argumentation with protocols.\nThey have shown how and when arguments in favor of offers can be computed and exchanged.\nOthers have emphasized on the decision making problem.\nIn [3, 7], the authors argued that selecting an offer to propose at a given step of the dialogue is a decision making problem.\nThey have thus proposed an argumentationbased decision model, and have shown how such a model can be related to the dialogue protocol.\nIn most existing works, there is no deep formal analysis of the role of argumentation in negotiation dialogues.\nIt is not clear how argumentation can influence the outcome of the dialogue.\nMoreover, basic concepts in negotiation such as agreement (i.e. 
optimal solutions, or compromise) and concession are neither defined nor studied.\nThis paper aims to propose a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed, and where the existing systems can be restated.\nIn this framework, a negotiation dialogue takes place between two agents on a set O of offers, whose structure is not known.\nThe goal of a negotiation is to find among elements of O, an offer that satisfies more or less\nthe preferences of both agents.\nEach agent is supposed to have a theory represented in an abstract way.\nA theory consists of a set A of arguments whose structure and origin are not known, a function specifying for each possible offer in O, the arguments of A that support it, a non specified conflict relation among the arguments, and finally a preference relation between the arguments.\nThe status of each argument is defined using Dung's acceptability semantics.\nConsequently, the set of offers is partitioned into four subsets: acceptable, rejected, negotiable and non-supported offers.\nWe show how an agent's theory may evolve during a negotiation dialogue.\nWe define formally the notions of concession, compromise, and optimal solution.\nThen, we propose a protocol that allows agents i) to exchange offers and arguments, and ii) to make concessions when necessary.\nWe show that dialogues generated under such a protocol terminate, and even reach optimal solutions when they exist.\nThis paper is organized as follows: Section 2 introduces the logical language that is used in the rest of the paper.\nSection 3 defines the agents as well as their theories.\nIn section 4, we study the properties of these agents' theories.\nSection 5 defines formally an argumentation-based negotiation, shows how the theories of agents may evolve during a dialogue, and how this evolution may influence the outcome of the dialogue.\nTwo kinds of outcomes: optimal solution and compromise are 
defined, and we show when such outcomes are reached.\nSection 6 illustrates our general framework through some examples.\nSection 7 compares our formalism with existing ones.\nSection 8 concludes and presents some perspectives.\nDue to lack of space, the proofs are not included.\nThese last are in a technical report that we will make available online at some later time.\n2.\nTHE LOGICAL LANGUAGE\n3.\nNEGOTIATING AGENTS THEORIES AND REASONING MODELS\n3.1 Negotiating agents theories\n968 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n3.2 The reasoning model\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 969\n4.\nTHE STRUCTURE OF NEGOTIATION THEORIES\n5.\nARGUMENTATION-BASED NEGOTIATION\n5.1 Evolution of the theories\n970 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.2 The notion of agreement\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 971\n5.3 Negotiation dialogue\n6.\nILLUSTRATIVE EXAMPLES\n972 The Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n7.\nRELATED WORK\nArgumentation has been integrated in negotiation dialogues at the early nineties by Sycara [12].\nIn that work, the author has emphasized the advantages of using argumentation in negotiation dialogues, and a specific framework has been introduced.\nIn [8], the different types of arguments that are used in a negotiation dialogue, such as threats and rewards, have been discussed.\nMoreover, a particular framework for negotiation have been proposed.\nIn [9, 13], different other frameworks have been proposed.\nEven if all these frameworks are based on different logics, and use different definitions of arguments, they all have at their heart an exchange of offers and arguments.\nHowever, none of those proposals explain when arguments can be used within a negotiation, and how they should be dealt with by the agent that receives them.\nThus the protocol for handling arguments was missing.\nAnother limitation of the above frameworks is the fact that the argumentation frameworks they\nThe Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 973\nuse are quite poor, since they use a very simple acceptability semantics.\nIn [2] a negotiation framework that fills the gap has been suggested.\nA protocol that handles the arguments was proposed.\nHowever, the notion of concession is not modeled in that framework, and it is not clear what is the status of the outcome of the dialogue.\nMoreover, it is not clear how an agent chooses the offer to propose at a given step of the dialogue.\nIn [1, 7], the authors have focused mainly on this decision problem.\nThey have proposed an argumentation-based decision framework that is used by agents in order to choose the offer to propose or to accept during the dialogue.\nIn that work, agents are supposed to have a beliefs base and a goals base.\nOur framework is more general since it does not impose any specific structure for the arguments, the offers, or the beliefs.\nThe negotiation protocol is general as well.\nThus this framework can be instantiated in different ways by creating, in such manner, different specific argumentation-based negotiation frameworks, all of them respecting the same properties.\nOur framework is also a unified one because frameworks like the ones presented above can be represented within this framework.\nFor example the decision making mechanism proposed in [7] for the evaluation of arguments and therefore of offers, which is based on a priority relation between mutually attacked arguments, can be captured by the relation defeat proposed in our framework.\nThis relation takes simultaneously into account the attacking and preference relations that may exist between two arguments.\n8.\nCONCLUSIONS AND FUTURE WORK\nIn this paper we have presented a unified and general framework for argumentation-based negotiation.\nLike any other argumentation-based negotiation framework, as it is evoked in (e.g. 
[10]), our framework has all the advantages that argumentation-based negotiation approaches present when related to the negotiation approaches based either on game theoretic models (see e.g. [11]) or heuristics ([6]).\nThis work is a first attempt to formally define the role of argumentation in the negotiation process.\nMore precisely, for the first time, it formally establishes the link that exists between the status of the arguments and the offers they support, it defines the notion of concession and shows how it influences the evolution of the negotiation, it determines how the theories of agents evolve during the dialogue and performs an analysis of the negotiation outcomes.\nIt is also the first time where a study of the formal properties of the negotiation theories of the agents as well as of an argumentative negotiation dialogue is presented.\nOur future work concerns several points.\nA first point is to relax the assumption that the set of possible offers is the same to both agents.\nIndeed, it is more natural to assume that agents may have different sets of offers.\nDuring a negotiation dialogue, these sets will evolve.\nArguments in favor of the new offers may be built from the agent theory.\nThus, the set of offers will be part of the agent theory.\nAnother possible extension of this work would be to allow agents to handle both arguments PRO and CONS offers.\nThis is more akin to the way human take decisions.\nConsidering both types of arguments will refine the evaluation of the offers status.\nIn the proposed model, a preference relation between offers is defined on the basis of the partition of the set of offers.\nThis preference relation can be refined.\nFor instance, among the acceptable offers, one may prefer the offer that is supported by the strongest argument.\nIn [4], different criteria have been proposed for comparing decisions.\nOur framework can thus be extended by integrating those criteria.\nAnother interesting point to investigate is that 
of considering negotiation dialogues between two agents with different profiles. By profile, we mean the criterion used by an agent to compare its offers.

7. RELATED WORK

Argumentation has been integrated in negotiation dialogues since the early nineties, by Sycara [12]. In that work, the author emphasized the advantages of using argumentation in negotiation dialogues, and a specific framework was introduced. In [8], the different types of arguments that are used in a negotiation dialogue, such as threats and rewards, were discussed, and a particular framework for negotiation was proposed. In [9, 13], different other frameworks have been proposed. Even if all these frameworks are based on different logics and use different definitions of arguments, they all have at their heart an exchange of offers and arguments. However, none of those proposals explains when arguments can be used within a negotiation, and how they should be dealt with by the agent that receives them; the protocol for handling arguments was missing. Another limitation of the above frameworks lies in the argumentation frameworks they build on. In [2] a negotiation framework that fills the gap has been suggested: a protocol that handles the arguments was proposed. However, the notion of concession is not modeled in that framework, and it is not clear what the status of the outcome of the dialogue is. Moreover, it is not clear how an agent chooses the offer to propose at a given step of the dialogue. In [3, 7], the authors proposed an argumentation-based decision framework that is used by agents in order to choose the offer to propose or to accept during the dialogue. In that work, agents are supposed to have a beliefs base and a goals base. Our framework is more general since it does not impose any specific structure on the arguments, the offers, or the beliefs. The negotiation protocol is general as well. Thus this framework can be instantiated in different ways, creating different specific argumentation-based negotiation frameworks, all of them respecting the same properties. Our framework is also a unified one, because frameworks like the ones presented above can be represented within it. This relation takes simultaneously into account the attacking and preference relations that may exist between two arguments.

A Unified and General Framework for Argumentation-based Negotiation

ABSTRACT

This paper proposes a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed. The framework makes it possible to study the outcomes of an argumentation-based
negotiation. It shows what an agreement is, how it is related to the theories of the agents, when it is possible, and how it can be attained by the negotiating agents in this case. It also defines the notion of concession, and shows in which situation an agent will make one, as well as how it influences the evolution of the dialogue.

1. INTRODUCTION

Roughly speaking, negotiation is a process aiming at finding some compromise or consensus between two or several agents about some matters of collective agreement, such as pricing products, allocating resources, or choosing candidates. Negotiation models have been proposed for the design of systems able to bargain in an optimal way with other agents, for example when buying or selling products in e-commerce. Different approaches to automated negotiation have been investigated, including game-theoretic approaches (which usually assume complete information and unlimited computation capabilities) [11], heuristic-based approaches which try to cope with these limitations [6], and argumentation-based approaches [2, 3, 7, 8, 9, 12, 13] which emphasize the importance of exchanging information and explanations between negotiating agents in order to mutually influence their behaviors (e.g. an agent may concede a goal having a small priority), and consequently the outcome of the dialogue. Indeed, the first two types of settings do not allow for the addition of information or for exchanging opinions about offers. Integrating argumentation theory in negotiation provides a good means for supplying additional information and also helps agents to convince each other by adequate arguments during a negotiation dialogue. Indeed, an offer supported by a good argument has a better chance to be accepted by an agent, and can also make the agent reveal its goals or give up some of them. The basic idea behind an argumentation-based approach is that by exchanging arguments, the theories of the agents (i.e. their mental states) may evolve, and consequently the status of offers may change. For instance, an agent may reject an offer because it is not acceptable for it. However, the agent may change its mind if it receives a strong argument in favor of this offer. Several proposals have been made in the literature for modeling such an approach. However, the work is still preliminary. Some researchers have mainly focused on relating argumentation with protocols. They have shown how and when arguments in favor of offers can be computed and exchanged. Others have emphasized the decision-making problem. In [3, 7], the authors argued that selecting an offer to propose at a given step of the dialogue is a decision-making problem. They have thus proposed an argumentation-based decision model, and have shown how such a model can be related to the dialogue protocol. In most existing works, there is no deep formal analysis of the role of argumentation in negotiation dialogues. It is not clear how argumentation can influence the outcome of the dialogue. Moreover, basic concepts in negotiation such as agreement (i.e.
optimal solutions, or compromise) and concession are neither defined nor studied.

This paper aims to propose a unified and general framework for argumentation-based negotiation, in which the role of argumentation is formally analyzed, and where the existing systems can be restated. In this framework, a negotiation dialogue takes place between two agents on a set O of offers, whose structure is not known. The goal of a negotiation is to find, among the elements of O, an offer that satisfies to the largest possible extent the preferences of both agents. Each agent is supposed to have a theory represented in an abstract way. A theory consists of a set A of arguments whose structure and origin are not known, a function specifying for each possible offer in O the arguments of A that support it, an unspecified conflict relation among the arguments, and finally a preference relation between the arguments. The status of each argument is defined using Dung's acceptability semantics. Consequently, the set of offers is partitioned into four subsets: acceptable, rejected, negotiable and non-supported offers. We show how an agent's theory may evolve during a negotiation dialogue. We define formally the notions of concession, compromise, and optimal solution. Then, we propose a protocol that allows agents i) to exchange offers and arguments, and ii) to make concessions when necessary. We show that dialogues generated under such a protocol terminate, and even reach optimal solutions when they exist.

This paper is organized as follows: Section 2 introduces the logical language that is used in the rest of the paper. Section 3 defines the agents as well as their theories. In Section 4, we study the properties of these agents' theories. Section 5 defines formally an argumentation-based negotiation, shows how the theories of agents may evolve during a dialogue, and how this evolution may influence the outcome of the dialogue. Two kinds of outcomes, optimal solution and compromise, are defined, and we show when such outcomes are reached. Section 6 illustrates our general framework through some examples. Section 7 compares our formalism with existing ones. Section 8 concludes and presents some perspectives. Due to lack of space, the proofs are not included; they appear in a technical report that will be made available online.

2. THE LOGICAL LANGUAGE

In what follows, L will denote a logical language, and ≡ is an equivalence relation associated with it. From L, a set O = {o1, ..., on} of n offers is identified, such that there are no oi, oj ∈ O with oi ≡ oj. This means that the offers are pairwise distinct. Offers correspond to the different alternatives that can be exchanged during a negotiation dialogue. For instance, if the agents try to decide the place of their next meeting, then the set O will contain different towns. Different arguments can be built from L. The set Args(L) will contain all those arguments. By argument, we mean a reason for believing or for doing something. In [3], it has been argued that the selection of the best offer to propose at a given step of the dialogue is a decision problem. In [4], it has been shown that in an argumentation-based approach for decision making, two kinds of arguments are distinguished: arguments supporting choices (or decisions), and arguments supporting beliefs. Moreover, it has been acknowledged that the two categories of arguments are formally defined in different ways, and that they play different roles. Indeed, an argument in favor of a decision, built both on an agent's beliefs and goals, tries to justify the choice; whereas an argument in favor of a belief, built only from beliefs, tries to destroy the decision arguments, in particular the beliefs part of those decision arguments. Consequently, in a negotiation dialogue, those two kinds of arguments are generally exchanged between agents. In what follows, the set Args(L) is then divided into two subsets: a
subset Argso(L) of arguments supporting offers, and a subset Argsb(L) of arguments supporting beliefs. Thus, Args(L) = Argso(L) ∪ Argsb(L). As in [5], in what follows we consider that the structure of the arguments is not known. Since the knowledge bases from which arguments are built may be inconsistent, the arguments may be conflicting too. In what follows, those conflicts will be captured by the relation Rc, thus Rc ⊆ Args(L) × Args(L). Three assumptions are made on this relation. First, the arguments supporting different offers are conflicting; the idea behind this assumption is that since offers are exclusive, an agent has to choose only one at a given step of the dialogue. Note that the relation Rc is not necessarily symmetric between the arguments of Argsb(L). The second hypothesis says that arguments supporting the same offer are also conflicting; the idea here is to retain the strongest argument among them. The third condition does not allow an argument in favor of an offer to attack an argument supporting a belief. This avoids wishful thinking. Formally:

• ∀ a, a′ ∈ Argso(L) s.t. a ≠ a′, a Rc a′
• ∄ a ∈ Argso(L), a′ ∈ Argsb(L) such that a Rc a′

Note that the relation Rc is not symmetric. This is due to the fact that arguments of Argsb(L) may be conflicting, but not necessarily in a symmetric way. In what follows, we assume that the set Args(L) of arguments is finite, and that each argument is attacked by a finite number of arguments.

3. NEGOTIATING AGENTS THEORIES AND REASONING MODELS

In this section we define formally the negotiating agents, i.e. their theories, as well as the reasoning model used by those agents in a negotiation dialogue.

3.1 Negotiating agents theories

Agents involved in a negotiation dialogue, called negotiating agents, are supposed to have theories. In this paper, the theory of an agent will not refer, as usual, to its mental states (i.e.
its beliefs, desires and intentions). Rather, it will be encoded in a more abstract way, in terms of the arguments owned by the agent, a conflict relation among those arguments, a preference relation between the arguments, and a function that specifies which arguments support offers of the set O. We assume that an agent is aware of all the arguments of the set Args(L). The agent is even able to express a preference between any pair of arguments. This does not mean that the agent will use all the arguments of Args(L), but it encodes the fact that when an agent receives an argument from another agent, it can interpret it correctly, and it can also compare it with its own arguments. Similarly, each agent is supposed to be aware of the conflicts between arguments. This allows us to encode the fact that an agent can recognize whether a received argument is in conflict with its own arguments. However, in its theory, only the conflicts between its own arguments are considered.

968 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

The theory of an agent is a tuple ⟨A, F, ⪰, R, Def⟩ such that:

• A ⊆ Args(L).
• F: O → 2^A s.t. ∀ i, j with i ≠ j, F(oi) ∩ F(oj) = ∅. Let AO = ∪ F(oi) with i = 1, ..., n.
• ⪰ ⊆ Args(L) × Args(L) is a partial preorder denoting a preference relation between arguments.
• R ⊆ Rc such that R ⊆ A × A.
• Def ⊆ A × A such that ∀ a, b ∈ A, a defeats b, denoted a Def b, iff a R b and not (b ⪰ a).

The function F returns the arguments supporting offers in O. In [4], it has been argued that any decision may have arguments supporting it, called arguments PRO, and arguments against it, called arguments CONS. Moreover, these two types of arguments are not necessarily conflicting. For simplicity reasons, in this paper we consider only arguments PRO. Moreover, we assume that an argument cannot support two distinct offers. However, it may be the case that an offer is not supported at all by arguments, thus F(oi) may be empty.

EXAMPLE 1. Let O = {o1, o2, o3} be a set of offers. The following theory is the theory of agent i:
• A = {a1, a2, a3, a4}
• F(o1) = {a1}, F(o2) = {a2}, F(o3) = ∅. Thus, AO = {a1, a2}
• ⪰ = {(a1, a2), (a2, a1), (a3, a2), (a4, a3)}
• R = {(a1, a2), (a2, a1), (a3, a2), (a4, a3)}
• Def = {(a4, a3), (a3, a2)}

From the above definition of agent theory, the following hold:

PROPERTY 1.
• Def ⊆ R
• ∀ a, a′ ∈ F(oi), a R a′

3.2 The reasoning model

From the theory of an agent, one can define the argumentation system used by that agent for reasoning about the offers and the arguments, i.e. for computing the status of the different offers and arguments.

DEFINITION 3 (ARGUMENTATION SYSTEM). Let ⟨A, F, ⪰, R, Def⟩ be the theory of an agent. The argumentation system of that agent is the pair ⟨A, Def⟩.

In [5], different acceptability semantics have been introduced for computing the status of arguments. These are based on two basic concepts, defence and conflict-freeness, defined as follows:

DEFINITION 4 (DEFENCE/CONFLICT-FREE). Let S ⊆ A.
• S defends an argument a iff each argument that defeats a is defeated by some argument in S.
• S is conflict-free iff there exist no a, a′ in S such that a Def a′.

DEFINITION 5 (ACCEPTABILITY SEMANTICS). Let S be a conflict-free set of arguments, and let T: 2^A → 2^A be a function such that T(S) = {a | a is defended by S}.
• S is a complete extension iff S = T(S).
• S is a preferred extension iff S is a maximal (w.r.t. set ⊆) complete extension.
• S is a grounded extension iff it is the smallest (w.r.t. set ⊆) complete extension.

Let E1, ..., Ex denote the different extensions under a given semantics. Note that there is only one grounded extension. It contains all the arguments that are not defeated, as well as the arguments that are defended directly or indirectly by non-defeated arguments.

THEOREM 1. Let ⟨A, Def⟩ be the argumentation system defined as shown above.
1. It may have x ≥ 1 preferred extensions.
2. The grounded extension is S = ∪i≥1 T^i(∅).

Note that when the grounded extension (or the preferred extension) is empty, this means that there is no acceptable offer for the negotiating agent.

EXAMPLE 2. In Example 1, there is one preferred extension, E = {a1, a2, a4}.

Now that the acceptability semantics is defined, we are ready to define the status of any argument.
1. a is accepted iff a ∈ Ei, ∀ Ei with i = 1, ..., x.
2. a is rejected iff ∄ Ei such that a ∈ Ei.
3. a is undecided iff a is neither accepted nor rejected. This means that a is in some extensions and not in others.

Note that A = {a | a is accepted} ∪ {a | a is rejected} ∪ {a | a is undecided}.

EXAMPLE 3. In Example 1, the arguments a1, a2 and a4 are accepted, whereas the argument a3 is rejected.

As said before, agents use argumentation systems for reasoning about offers. In a negotiation dialogue, agents propose and accept offers that are acceptable for them, and reject bad ones. In what follows, we define the status of an offer. According to the status of arguments, one can define four statuses of the offers as follows:

• The offer o is acceptable for the negotiating agent iff ∃ a ∈ F(o) such that a is accepted. Oa = {oi ∈ O such that oi is acceptable}.
• The offer o is rejected for the negotiating agent iff ∀ a ∈ F(o), a is rejected. Or = {oi ∈ O such that oi is rejected}.
• The offer o is negotiable iff ∀ a ∈ F(o), a is undecided. On = {oi ∈ O such that oi is negotiable}.
• The offer o is non-supported iff it is neither acceptable, nor rejected, nor negotiable. Ons = {oi ∈ O such that oi is non-supported}.

EXAMPLE 4. In Example 1, the two offers o1 and o2 are acceptable since they are supported by accepted arguments, whereas the offer o3 is non-supported since it has no argument in its favor.

From the above definitions, the following results hold:

PROPERTY 2. Let o ∈ O.
• O = Oa ∪ Or ∪ On ∪ Ons.
• The set Oa may contain more than one offer.

From the above partition of the set O of offers, a preference relation between offers is defined. Let Ox and Oy be two subsets of O. Ox ▷ Oy means that any offer in Ox is preferred to any offer in the set Oy. For two offers oi, oj, we also write oi ▷ oj iff oi ∈ Ox, oj ∈ Oy and Ox ▷ Oy.

EXAMPLE 5. In Example 1, we have o1 ▷ o3 and o2 ▷ o3. However, o1 and o2 are indifferent.

4. THE STRUCTURE OF NEGOTIATION THEORIES

In this section, we study the properties of the system developed above. We first show that in the particular case where A = AO (i.e. all of the agent's arguments refer to offers), the corresponding argumentation system will return at least one non-empty preferred extension.

THEOREM 2. Let ⟨A, Def⟩ be an argumentation system such that A = AO. Then the system returns at least one extension E such that |E| ≥ 1.

We now present some results that demonstrate the importance of indifference in negotiating agents, and more specifically its relation to acceptable outcomes. We first show that the set Oa may contain several offers when their corresponding accepted arguments are indifferent w.r.t. the preference relation ⪰.

THEOREM 3. Let o1, o2 ∈ O. o1, o2 ∈ Oa iff ∃ a1 ∈ F(o1), ∃ a2 ∈ F(o2), such that a1 and a2 are accepted and are indifferent w.r.t. ⪰ (i.e. a1 ⪰ a2 and a2 ⪰ a1).

We now study acyclic preference relations. Let ⪰ be an acyclic relation on A. Then the preferred extensions of ⟨A, Def⟩ are pairwise disjoint w.r.t. arguments of AO. Using this result we can prove the main theorem of this section, which states that negotiating agents with acyclic preference relations have no acceptable offers.

THEOREM 7. Let ⟨A, F, R, ⪰, Def⟩ be a negotiating agent such that A = AO and ⪰ is an acyclic relation. Then the set of accepted arguments w.r.t. ⟨A, Def⟩ is empty. Consequently, the set of acceptable offers Oa is empty as well.

5. ARGUMENTATION-BASED NEGOTIATION

In this section, we define formally a protocol that generates argumentation-based negotiation dialogues between two negotiating agents P and
C. The two agents negotiate about an object whose possible values belong to a set O. This set O is supposed to be known and the same for both agents. For simplicity reasons, we assume that this set does not change during the dialogue. The agents are equipped with theories denoted respectively ⟨AP, FP, ⪰P, RP, DefP⟩ and ⟨AC, FC, ⪰C, RC, DefC⟩. Note that the two theories may be different in the sense that the agents may have different sets of arguments and different preference relations. Worse yet, they may have different arguments in favor of the same offers. Moreover, these theories may evolve during the dialogue.

5.1 Evolution of the theories

Before defining formally the evolution of an agent's theory, let us first introduce the notion of dialogue moves, or moves for short. A move mi is characterized by:
• pi ∈ {P, C}
• ai ∈ Args(L) ∪ {θ}
• oi ∈ O ∪ {θ}
• ti ∈ N* is the target of the move

The theory of agent i at a step t > 0 is ⟨Ait, Fit, ⪰it, Rit, Defit⟩ such that:
• Ait = Ai0 ∪ {ai, i = 1, ..., t, ai = Argument(mi)} ∪ A′ with A′ ⊆ Args(L)
• Fit: O → 2^Ait
• ⪰it = ⪰i0
• Rit = Ri0 ∪ {(ai, aj) | ai = Argument(mi), ...}
• Defit ⊆ Ait × Ait

The above definition captures the monotonic aspect of an argument: an argument cannot be removed. However, its status may change. An argument that is accepted at step t of the dialogue by an agent may become rejected at step t + i. Consequently, the status of offers also changes: the sets Oa, Or, On, and Ons may change from one step of the dialogue to another. For example, some offers could move from the set Oa to the set Or and vice versa. Note that in the definition of Rit, the relation Rc is used to denote a conflict between exchanged arguments. The reason is that such a conflict may not be in the set Ri of agent i; thus, in order to recognize such conflicts, we have supposed that the relation Rc is known to the agents. This allows us to capture the situation where an agent becomes able to prove an argument that it was unable to prove before, by incorporating in its beliefs some information conveyed through the exchange of arguments with another agent. Such an argument, unknown at the beginning of the dialogue, could give the agent the possibility to defeat an argument that it could not defeat using its initial arguments. This could even change the status of those initial arguments, and this change would in turn alter the status of the associated offers. In what follows, Oit,x denotes the set of offers of type x, where x ∈ {a, n, r, ns}, of the agent i at step t of the dialogue. In some places, we use for short the notation Oit to denote the partition of the set O at step t for agent i.
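The fixed-point characterization of Theorem 1 (S = ∪i≥1 T^i(∅)) makes the grounded extension, and the offer statuses it induces, directly computable. The following Python sketch hardcodes Example 1; the encoding (arguments as strings, Def as a set of pairs, F as a dict) is our own illustrative choice, not notation from the paper:

```python
# Fixed-point computation of the grounded extension (Theorem 1: S = U_i T^i(0)).
# Arguments are plain strings and Def is a set of (attacker, attacked) pairs;
# this data layout is illustrative, not prescribed by the text.

def grounded_extension(args, defeats):
    """Iterate T(S) = {a | a is defended by S} until a fixed point is reached."""
    defeaters = {a: {b for (b, c) in defeats if c == a} for a in args}
    S = set()
    while True:
        # a is defended by S if every defeater of a is itself defeated by S
        T = {a for a in args
             if all(any((s, b) in defeats for s in S) for b in defeaters[a])}
        if T == S:
            return S
        S = T

# Example 1 of the text: Def = {(a4, a3), (a3, a2)}
A = {"a1", "a2", "a3", "a4"}
Def = {("a4", "a3"), ("a3", "a2")}
G = grounded_extension(A, Def)

# Offer statuses (Section 3.2): o is acceptable iff some supporting argument
# is accepted; non-supported iff it has no supporting argument at all.
F = {"o1": {"a1"}, "o2": {"a2"}, "o3": set()}
acceptable = {o for o, sup in F.items() if sup & G}
non_supported = {o for o, sup in F.items() if not sup}
```

Since the system of Example 1 has a single preferred extension equal to the grounded one (Example 2), argument statuses can be read off G directly, and the computed offer partition matches Example 4: o1 and o2 acceptable, o3 non-supported.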
Note that we have: not (Oit,x ⊆ Oit+1,x).

5.2 The notion of agreement

As said in the introduction, negotiation is a process aiming at finding an agreement about some matters. By agreement, one means a solution that satisfies to the largest possible extent the preferences of both agents. In case there is no such solution, we say that the negotiation fails. In what follows, we discuss the different kinds of solutions that may be reached in a negotiation. The first one is the optimal solution, the best offer for both agents. Formally:

DEFINITION 12 (OPTIMAL SOLUTION). Let O be a set of offers, and o ∈ O. The offer o is an optimal solution at a step t ≥ 0 iff o ∈ OPt,a ∩ OCt,a.

Such a solution does not always exist since agents may have conflicting preferences. Thus, agents make concessions by proposing/accepting less preferred offers.

DEFINITION 13 (CONCESSION). Let o ∈ O be an offer. The offer o is a concession for an agent i iff o ∈ Oix such that ∃ Oiy ≠ ∅ with Oiy ▷ Oix.

During a negotiation dialogue, agents first exchange their most preferred offers, and if these are rejected, they make concessions. In this case, we say that their best offers are no longer defendable. In an argumentation setting, this means that the agent has already presented all its arguments supporting its best offers, and has no counter-argument against the ones presented by the other agent. Formally:

DEFINITION 14 (DEFENDABLE OFFER). Let ⟨Ait, Fit, ⪰it, Rit, Defit⟩ be the theory of agent i at a step t > 0 of the dialogue. Let o ∈ O such that ∃ j ≤ t with Player(mj) = i and Offer(mj) = o. The offer o is defendable by the agent i iff:
• ∃ a ∈ Fit(o) such that ∄ k ≤ t with Argument(mk) = a, or
• ∃ a ∈ Ait \ Fit(o) s.t. a Defit b with Argument(mk) = b, k ≤ t, Player(mk) ≠ i, and ∄ l ≤ t with Argument(ml) = a.

The offer o is said to be non-defendable otherwise, and NDit is the set of non-defendable offers of agent i at a step t.

5.3 Negotiation dialogue

Now that we have shown how the theories of the agents evolve during a dialogue, we are ready to define formally an argumentation-based negotiation dialogue. For that purpose, we first need the notion of a legal continuation.

DEFINITION 15 (LEGAL MOVE). A move m is a legal continuation of a sequence of moves m1, ..., ml iff ∄ j, k ...

We consider a voting game with two types of players: large (with weight wl > ws) and small (with weight ws). There is one large player and m small ones. The quota for this game is q; i.e., we have a game of the form q; wl, ws, ws, ...
, ws. The total number of players is (m + 1). The value of a coalition is one if its weight is greater than or equal to q, and zero otherwise. Let φl denote the Shapley value of the large player and φs that of each small player. We first consider ws = 1 and then ws > 1.

The smallest possible value for q is wl + 1: if q ≤ wl, the large party could win the election on its own, without needing a coalition. Thus, the quota for the game satisfies the constraint wl + 1 ≤ q ≤ m + wl − 1. Also, the lower and upper limits for wl are 2 and (q − 1) respectively: the lower limit is 2 because the weight of the large party has to be greater than that of each small one, and the weight of the large party cannot exceed q since, in that case, there would be no need for the large party to form a coalition.

Recall that, for our voting game, a player's marginal contribution to a coalition can only be zero or one. Consider the large party. It can join a coalition as the ith member, where 1 ≤ i ≤ (m + 1), and its marginal contribution is one exactly when it joins as the ith member with (q − wl) ≤ i < q; in all the remaining cases its marginal contribution is zero. Thus, out of the (m + 1) possible positions, its marginal contribution is one in wl of them. Hence, the Shapley value of the large party is:

φl = wl/(m + 1)

In the same way, we obtain the Shapley value of the large party for the general case where ws > 1 as:

φl = (wl/ws)/(m + 1)

Now consider a small player. The Shapley values of all m + 1 players sum to one and, since the small parties have equal weights, their Shapley values are equal. Hence we get:

φs = (1 − φl)/m

Thus, both φl and φs can be computed in constant time, since each requires only a constant number of basic operations (addition, 
subtraction, multiplication, and division). In the same way, the Shapley value for a voting game with a single large party and multiple small parties can be determined in constant time.

3.3 Multiple large and small parties

We now consider a voting game that has two player types, large and small (as in Section 3.2), but now there are multiple large and multiple small parties. The set of parties consists of ml large parties and ms small parties. The weight of each large party is wl and that of each small one is ws, where ws < wl. We show the computational tractability of this game by considering the following four possible scenarios:

S1: q ≤ ml·wl and q ≤ ms·ws
S2: q ≤ ml·wl and q ≥ ms·ws
S3: q ≥ ml·wl and q ≥ ms·ws
S4: q ≥ ml·wl and q ≤ ms·ws

For the first scenario, consider a large player. In order to determine its Shapley value, we need to count all the possible coalitions that give it a marginal contribution of one. The marginal contribution of this player can be one only if it joins a coalition in which the number of large players is between zero and (q − 1)/wl; in other words, there are (q − 1)/wl + 1 such cases, and we now consider each of them.

Consider a coalition such that, when the large player joins in, there are i large players and (q − i·wl − 1)/ws small players already in it, and the remaining players join after the large player. Such a coalition gives the large player unit marginal contribution. Let C2l(i, q) denote the number of all such coalitions. To begin, consider the case i = 0:

C2l(0, q) = C(ms, (q − 1)/ws) × ((q − 1)/ws)! × (ml + ms − (q − 1)/ws − 1)!

where C(y, x) denotes the number of possible combinations of x items from a set of y items. For i = 1, we get:

C2l(1, q) = C(ml, 1) × C(ms, (q − wl − 1)/ws) × ((q − wl − 1)/ws)! × (ml + ms − (q − wl − 1)/ws − 1)!

In general, for i > 1, we get:

C2l(i, q) = C(ml, i) × C(ms, (q − i·wl − 1)/ws) × ((q − i·wl − 1)/ws)! × (ml + ms − (q − i·wl − 1)/ws − 1)!

Thus the large player's Shapley value is:

φl = Σ_{i=0}^{(q−1)/wl} C2l(i, q) / (ml + ms)!

For a given i, the time to find C2l(i, q) is O(T), where T = (ml·ms·(q − i·wl − 1)·(ml + ms))/ws. Hence, the time to find the Shapley value is O(T·q/wl). In the same way, a small player's Shapley value is:

φs = Σ_{i=0}^{(q−1)/ws} C2s(i, q) / (ml + ms)!

and can be found in time O(T·q/ws). Likewise, the remaining three scenarios (S2 to S4) can be shown to have the same time complexity.

3.4 Three player types

We now consider a voting game that has three player types: 1, 2, and 3. The set of parties consists of m1 players of type 1 (each with weight w1), m2 players of type 2 (each with weight w2), and m3 players of type 3 (each with weight w3). For this voting game, consider a player of type 1. Its marginal contribution can be one only if it joins a coalition in which the number of type 1 players is between zero and (q − 1)/w1; in other words, there are (q − 1)/w1 + 1 such cases, and we now consider each of them.

Consider a coalition such that, when the type 1 player joins in, there are i type 1 players already in it, and the remaining players join after the type 1 player. Let C31(i, q) denote the number of all such coalitions that give a marginal contribution of one to the type 1 player, where:

C31(i, q) = Σ_{j=0}^{(q−i·w1−1)/w2} C21(j, q − i·w1)

Therefore the Shapley value of the type 1 player is:

φ1 = Σ_{i=0}^{(q−1)/w1} C31(i, q) / (m1 + m2 + m3)!

The time 
complexity of finding this value is O(T·q²/(w1·w2)), where:

T = (∏_{i=1}^{3} mi) × (q − i·w1 − 1) × (Σ_{i=1}^{3} mi)/(w2 + w3)

Likewise for the other two player types (2 and 3). Thus, we have identified games for which the exact Shapley value can be easily determined. However, the computational complexity of the above direct enumeration method increases with the number of player types: for a voting game with more than three player types, its time complexity is a polynomial of degree four or more. To deal with such situations, the following section presents a faster randomised method for finding the approximate Shapley value.

4. FINDING THE APPROXIMATE SHAPLEY VALUE

We first give a brief introduction to randomized algorithms and then present our randomized method for finding the approximate Shapley value. Randomized algorithms are the most commonly used approach for finding approximate solutions to computationally hard problems. A randomized algorithm is an algorithm that, during some of its steps, performs random choices [2]. The random steps imply that executing the algorithm several times with the same input is not guaranteed to produce the same solution. Since such algorithms generate approximate solutions, their performance is evaluated in terms of two criteria: their time complexity, and their error of approximation. The approximation error is the difference between the exact solution and its approximation. Against this background, we present a randomized method for finding the approximate Shapley value and empirically evaluate its error.

We first describe the general voting game and then present our randomized algorithm. In its general form, a voting game has more than two types of players. Let wi denote the weight of player i. Thus, for m players and quota q, the game is of the form q; w1, w2, ... 
, wm. The weights are specified in terms of a probability distribution function. For such a game, we want to find the approximate Shapley value.

We let P denote a population of players whose weights are defined by a probability distribution function. Irrespective of the actual distribution, let μ be the mean weight for the population and ν the variance in the players' weights. From this population we randomly draw samples and find the sum of the players' weights in each sample using the following rule from sampling theory (see [8] p425): if w1, w2, ..., wn is a random sample of size n drawn from any distribution with mean μ and variance ν, then the sample sum has an approximate normal distribution with mean nμ and variance ν/n (the larger the n, the better the approximation).

R-SHAPLEYVALUE(P, μ, ν, q, wi)
P: population of players
μ: mean weight of the population P
ν: variance in the weights for population P
q: quota for the voting game
wi: player i's weight
1. Ti ← 0; a ← q − wi; b ← q − ε
2. For X from 1 to m repeatedly do the following:
2.1. Select a random sample SX of size X from the population P
2.2. Evaluate the expected marginal contribution (ΔXi) of player i to SX as:
ΔXi ← (1/√(2πν/X)) ∫_a^b e^{−X(x−Xμ)²/(2ν)} dx
2.3. Ti ← Ti + ΔXi
3. Evaluate the Shapley value of player i as: φi ← Ti/m

Table 1: Randomized algorithm to find the Shapley value for player i.

We know from Definition 1 that the Shapley value for a player is the expectation (E) of its marginal contribution to a coalition that is chosen randomly. We use this rule to determine the Shapley value as follows. For player i with weight wi, let φi denote the Shapley value. Let X denote the size of a random sample drawn from a population in which the individual player weights may have any distribution. The marginal contribution of player i to this random sample is one if the total weight of the X players in the sample is greater than or equal to a = q − wi but less than b = q − ε (where ε is an infinitesimally small quantity); otherwise, its marginal contribution is zero. Thus, the expected marginal contribution of player i (denoted ΔXi) to the sample coalition is the area under the curve defined by N(Xμ, ν/X) in the interval [a, b]. This area is shown as region B in Figure 1 (the dotted line in the figure is Xμ). Hence we get:

ΔXi = (1/√(2πν/X)) ∫_a^b e^{−X(x−Xμ)²/(2ν)} dx   (2)

and the Shapley value is:

φi = (1/m) Σ_{X=1}^{m} ΔXi   (3)

The above steps are described in Table 1. Step 1 does the initialization. In Step 2, we vary X between 1 and m and repeatedly do the following: in Step 2.1, we randomly select a sample SX of size X from the population P; player i's marginal contribution to the random coalition SX is found in Step 2.2. The average marginal contribution is found in Step 3, and this is the Shapley value for player i.
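Table 1 lends itself to a direct implementation. The sketch below is an illustrative reading of Equations 2 and 3, not the authors' code: the names `normal_cdf` and `approx_shapley` are our own, ε is taken as a small constant, and the integral of Equation 2 is evaluated in closed form via the normal CDF of N(Xμ, ν/X), so no explicit sampling is needed.

```python
import math

def normal_cdf(x, mean, var):
    """CDF of the normal distribution N(mean, var) at x."""
    return 0.5 * (1.0 + math.erf((x - mean) / math.sqrt(2.0 * var)))

def approx_shapley(m, mu, nu, q, w_i, eps=1e-9):
    """Approximate Shapley value of a player of weight w_i in an
    m-player voting game with quota q, where the population of
    weights has mean mu and variance nu (Equations 2 and 3).

    The expected marginal contribution to a random coalition of
    size X is the mass of N(X*mu, nu/X) over [q - w_i, q - eps].
    """
    a, b = q - w_i, q - eps
    total = 0.0
    for X in range(1, m + 1):
        var = nu / X  # variance of the sample sum, as in Equation 2
        delta = normal_cdf(b, X * mu, var) - normal_cdf(a, X * mu, var)
        total += max(delta, 0.0)
    return total / m
```

For example, for m = 10 players with mean weight μ = 10, variance ν = 1, quota q = 60, and a player of weight 10, the method returns a value close to 0.1, matching the symmetric-game intuition that each of ten identical players is pivotal with probability 1/10.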
THEOREM 1.\nThe time complexity of the proposed randomized method is linear in the number of players.\nPROOF.\nAs per Equation 3, \u0394X i must be computed m times.\nThis is done in the for loop of Step 2 in Table 1.\nHence, the time complexity of computing a player``s Shapley value is O(m).\nThe following section analyses the approximation error for the proposed method.\n5.\nPERFORMANCE OF THE RANDOMIZED METHOD We first derive the formula for measuring the error in the approximate Shapley value and then conduct experiments for evaluating this error in a wide range of settings.\nHowever, before doing so, we introduce the idea of error.\nThe concept of error relates to a measurement made of a quantity which has an accepted value [22, 4].\nObviously, it cannot be determined exactly how far off a measurement is from the accepted value; if this could be done, it would be possible to just give a more accurate, corrected value.\nThus, error has to do with uncertainty in measurements that nothing can be done about.\nIf a measurement is repeated, the values obtained will differ and none of the results can be preferred over the others.\nHowever, although it is not possible to do anything about such error, it can be characterized.\nAs described in Section 4, we make measurements on samples that are drawn randomly from a given population (P) of players.\nNow, there are statistical errors associated with sampling which are unavoidable and must be lived with.\nHence, if the result of a measurement is to have meaning it cannot consist of the measured value alone.\nAn indication of how accurate the result is must be included also.\nThus, the result of any physical measurement has two essential components: 1.\na numerical value giving the best estimate possible of the quantity measured, and 2.\nthe degree of uncertainty associated with this estimated value.\nFor example, if the estimate of a quantity is x and the uncertainty is e(x) the quantity would lie in x \u00b1 e(x).\nFor 
sampling experiments, the standard error is by far the most common way of characterising uncertainty [22]. Given this, the following section defines this error and uses it to evaluate the performance of the proposed randomized method.

5.1 Approximation error

The accuracy of the above randomized method depends on its sampling error, which is defined as follows [22, 4]:

DEFINITION 2. The sampling error (or standard error) is defined as the standard deviation for a set of measurements divided by the square root of the number of measurements.

To this end, let e(σX) be the sampling error in the sum of the weights for a sample of size X drawn from the distribution N(Xμ, ν/X), where:

e(σX) = √(ν/X)/√X = √ν/X   (4)

Let e(ΔXi) denote the error in the marginal contribution for player i (given in Equation 2). This error is obtained by propagating the error in Equation 4 to Equation 2.

Figure 1: A normal distribution for the sum of players' weights in a coalition of size X.

In Equation 2, a and b are the lower and upper limits for the sum of the players' weights for a 
coalition of size X.

Figure 2: Performance of the randomized method for m = 10 players.

Since the error in this sum is e(σX), the actual values of a and b lie in the intervals a ± e(σX) and b ± e(σX) respectively. Hence, the error in Equation 2 is either the probability that the sum lies between the limits a − e(σX) and a (i.e., the area under the curve defined by N(Xμ, ν/X) between a − e(σX) and a, which is the shaded region A in Figure 1), or the probability that the sum lies between the limits b and b + e(σX) (i.e., the area under the curve defined by N(Xμ, ν/X) between b and b + e(σX), which is the shaded region C in Figure 1). More specifically, the error is the maximum of these two probabilities:

e(ΔXi) = (1/√(2πν/X)) × MAX( ∫_{a−e(σX)}^{a} e^{−X(x−Xμ)²/(2ν)} dx , ∫_{b}^{b+e(σX)} e^{−X(x−Xμ)²/(2ν)} dx )

On the basis of the above error, we find the error in the Shapley value by using the following standard error propagation rules [22]:

R1: If x and y are two random variables with errors e(x) and e(y) respectively, then the error in the random variable z = x + y is given by e(z) = e(x) + e(y).

R2: If x is a random variable with error e(x) and z = kx, where the constant k has no error, then the error in z is e(z) = |k|e(x).

Figure 3: Performance of the randomized method for m = 50 players.

Using the above rules, the error in the Shapley value (given in Equation 3) is obtained by propagating the error in Equation 4 to all coalitions between the sizes X = 1 and X = m.
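This error propagation can be sketched directly in code. The following is an illustrative reading of Equation 4, the MAX-of-two-areas rule, and rules R1 and R2, not the authors' implementation; the helper names are our own, and ε is again taken as a small constant.

```python
import math

def normal_cdf(x, mean, var):
    """CDF of the normal distribution N(mean, var) at x."""
    return 0.5 * (1.0 + math.erf((x - mean) / math.sqrt(2.0 * var)))

def error_in_marginal_contribution(X, mu, nu, q, w_i, eps=1e-9):
    """e(Delta_i^X): the larger of the two shaded areas A and C in
    Figure 1, i.e. the mass of N(X*mu, nu/X) over [a - e(sigma_X), a]
    and over [b, b + e(sigma_X)], with e(sigma_X) = sqrt(nu)/X (Eq. 4).
    """
    a, b = q - w_i, q - eps
    mean, var = X * mu, nu / X
    e_sigma = math.sqrt(nu) / X
    left = normal_cdf(a, mean, var) - normal_cdf(a - e_sigma, mean, var)
    right = normal_cdf(b + e_sigma, mean, var) - normal_cdf(b, mean, var)
    return max(left, right)

def error_in_shapley_value(m, mu, nu, q, w_i):
    """Propagate the per-coalition errors through Equation 3 using
    rules R1 (errors add) and R2 (the 1/m factor scales the error):
    e(phi_i) = (1/m) * sum over X of e(Delta_i^X).
    """
    return sum(error_in_marginal_contribution(X, mu, nu, q, w_i)
               for X in range(1, m + 1)) / m
```

Dividing this propagated error by the approximate Shapley value itself (and multiplying by 100) then gives the percentage error used in the experiments.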
Let e(φi) denote this error, where:

e(φi) = (1/m) Σ_{X=1}^{m} e(ΔXi)

We analyze the performance of our method in terms of the percentage error PE in the approximate Shapley value, which is defined as follows:

PE = 100 × e(φi)/φi   (5)

5.2 Experimental Results

We now compute the percentage error in the Shapley value using the above equation for PE. Since this error depends on the parameters of the voting game, we evaluate it in a range of settings by systematically varying those parameters. In particular, we conduct experiments in the following setting. For a player with weight w, the percentage error in its Shapley value depends on the following five parameters (see Equation 3):

1. The number of parties (m).
2. The mean weight (μ).
3. The variance in the players' weights (ν).
4. The quota for the voting game (q).
5. The given player's weight (w).

We fix μ = 10 and ν = 1. This is because, for the normal distribution, μ = 10 ensures that for almost all the players the weight is positive, and ν = 1 is used most commonly in statistical experiments (ν can be higher or lower, but PE is increasing in ν; see Equations 4 and 5). We then vary m, q, and w as follows. We vary m between 5 and 100 (since beyond 100 we found that the error is close to zero); for each m, we vary q between 4μ and mμ (we impose these limits because they ensure that the size of the winning coalition is more than one and less than m; see Section 3 for details); and for each q, we vary w between 1 and q − 1 (because a winning coalition must contain at least two players).

Figure 4: Performance of the randomized method for m = 100 players.

The results of these experiments are shown in Figures 2, 
3, and 4.\nAs seen in the figures, the maximum PE is around 20% and in most cases it is below 5%.\nWe now analyse the effect of the three parameters: w, q, and m on the percentage error in more detail.\n- Effect of w.\nThe PE depends on e(\u03c3X ) because, in Equation 5, the limits of integration depend on e(\u03c3X ).\nThe interval over which the first integration in Equation 5 is done is a \u2212 a + e(\u03c3X ) = e(\u03c3X ), and the interval over which the second one is done is b + e(\u03c3X ) \u2212 b = e(\u03c3X ).\nThus, the interval is the same for both integrations and it is independent of wi.\nNote that each of the two functions that are integrated in Equation 5 are the same as the function that is integrated in Equation 2.\nOnly the limits of the integration are different.\nAlso, the interval over which the integration for the marginal contribution of Equation 2 is done is b \u2212 a = wi \u2212 (see Figure 1).\nThe error in the marginal contribution is either the area of the shaded region A (between a \u2212 e(\u03c3X ) and a) in Figure 1, or the shaded area C (between b and b + e(\u03c3X )).\nAs per Equation 5, it is the maximum of these two areas.\nSince e(\u03c3X ) is independent of wi, as wi increases, e(\u03c3X ) remains unchanged.\nHowever, the area of the unshaded region B increases.\nHence, as wi increases, the error in the marginal contribution decreases and PE also decreases.\n- Effect of q. 
For a given q, the Shapley value for player i is as given in Equation 3. We know that, for a sample of size X, the sum of the players' weights is distributed normally with mean Xμ and variance ν/X. Since about 95% of a normal distribution lies within two standard deviations of its mean [8], player i's marginal contribution to a sample of size X is almost zero if:

a > Xμ + 2√(ν/X) or b < Xμ − 2√(ν/X)

This is because the three regions A, B, and C (in Figure 1) then lie either to the right of Z2 or to the left of Z1. However, player i's marginal contribution is greater than zero for those X for which the following constraint is satisfied:

Xμ − 2√(ν/X) < a < b < Xμ + 2√(ν/X)   (6)

For this constraint, the three regions A, B, and C lie somewhere between Z1 and Z2. Since a = q − wi and b = q − ε, Equation 6 can also be written as:

Xμ − 2√(ν/X) < q − wi < q − ε < Xμ + 2√(ν/X)

The smallest X that satisfies the constraint in Equation 6 strictly increases with q. As X increases, the error in the sum of weights in a sample (i.e., e(σX) = √ν/X) decreases. Consequently, the error in a player's marginal contribution (see Equation 5) also decreases. This implies that as q increases, the error in the marginal contribution (and consequently the error in the Shapley value) decreases.

- Effect of m. It is clear from Equation 4 that the error e(σX) is highest for X = 1 and it decreases with X. 
Hence, for small m, e(\u03c31 ) has a significant effect on PE.\nBut as m increases, the effect of e(\u03c31 ) on PE decreases and, as a result, PE decreases.\n6.\nRELATED WORK In order to overcome the computational complexity of finding the Shapley value, two main approaches have been proposed in the literature.\nOne approach is to use generating functions [3].\nThis method is an exact procedure that overcomes the problem of time complexity, but its storage requirements are substantial - it requires huge arrays.\nIt also has the limitation (not shared by other approaches) that it can only be applied to games with integer weights and quotas.\nThe other method uses an approximation technique based on Monte Carlo simulation.\nIn [12], for instance, the Shapley value is computed by considering a random sample from a large population of players.\nThe method we propose differs from this in that they define the Shapley value by treating a player``s number of swings (if a player can change a losing coalition to a winning one, then, for the player, the coalition is counted as a swing) as a random variable, while we treat the players'' weights as random variables.\nIn [12], however, the question remains how to get the number of swings from the definition of a voting game and what is the time complexity of doing this.\nSince the voting game is defined in terms of the players'' weights and the number of swings are obtained from these weights, our method corresponds more closely to the definition of the voting game.\nOur method also differs from [7] in that while [7] presents a method for the case where all the players'' weights are distributed normally, our method applies to any type of distribution for these weights.\nThus, as stated in Section 1, our method is more general than [3, 12, 7].\nAlso, unlike all the above mentioned work, we provide an analysis of the performance of our method in terms of the percentage error in the approximate Shapley value.\nA method for 
finding the Shapley value was also proposed in [5]. This method gives the exact Shapley value, but its time complexity is exponential. Furthermore, the method can be used only if the game is represented in a specific form (viz., the multi-issue representation), not otherwise. Finally, [9, 10] present a polynomial time method for finding the Shapley value. This method can be used if the coalition game is represented as a marginal contribution net. Furthermore, they assume that the Shapley value of a component of a given coalition game is given by an oracle and, on the basis of this assumption, aggregate these values to find the value for the overall game. In contrast, our method is independent of the representation and gives an approximate Shapley value in linear time, without the need for an oracle.

7. CONCLUSIONS AND FUTURE WORK

Coalition formation is an important form of interaction in multiagent systems. An important issue in such work is for the agents to decide how to split the gains from cooperation between the members of a coalition. In this context, cooperative game theory offers a solution concept called the Shapley value. The main advantage of the Shapley value is that it provides a solution that is both unique and fair. However, its main problem is that, for many coalition games, it cannot be determined in polynomial time. In particular, the problem of finding this value for the voting game is #P-complete. Although this problem is, in general, #P-complete, we showed that there are some specific voting games for which the Shapley value can be determined in polynomial time, and characterised such games. By doing so, we have shown when it is computationally feasible to find the exact Shapley value. For other, more complex voting games, we presented a new randomized method for determining the approximate Shapley value. The time complexity of the 
proposed method is linear in the number of players. We analysed the performance of this method in terms of the percentage error in the approximate Shapley value. Our experiments show that the percentage error in the Shapley value is at most 20%; furthermore, in most cases, the error is less than 5%. Finally, we analysed the effect of the different parameters of the voting game on this error. Our study shows that the error decreases as:

1. a player's weight increases,
2. the quota increases, and
3. the number of players increases.

Given that software agents have limited computational resources and therefore cannot compute the true Shapley value, our results are especially relevant to such resource-bounded agents. In future work, we will explore the problem of determining the Shapley value for other commonly occurring coalition games like the production economy and the market economy.

8. REFERENCES

[1] R. Aumann. Acceptable points in general cooperative n-person games. In Contributions to the Theory of Games, Volume IV. Princeton University Press, 1959.
[2] G. Ausiello, P. Crescenzi, G. Gambosi, V. Kann, A. Marchetti-Spaccamela, and M. Protasi. Complexity and Approximation: Combinatorial Optimization Problems and their Approximability Properties. Springer, 2003.
[3] J. M. Bilbao, J. R. Fernandez, A. J. Losada, and J. J. Lopez. Generating functions for computing power indices efficiently. TOP, 8(2):191-213, 2000.
[4] P. Bork, H. Grote, D. Notz, and M. Regler. Data Analysis Techniques in High Energy Physics Experiments. Cambridge University Press, 1993.
[5] V. Conitzer and T. Sandholm. Computing Shapley values, manipulating value division schemes, and checking core membership in multi-issue domains. In Proceedings of the National Conference on Artificial Intelligence, pages 219-225, San Jose, California, 2004.
[6] X. Deng and C. H. 
Papadimitriou.\nOn the complexity of cooperative solution concepts.\nMathematics of Operations Research, 19(2):257-266, 1994.\n[7] S. S. Fatima, M. Wooldridge, and N. R. Jennings.\nAn analysis of the shapley value and its uncertainty for the voting game.\nIn Proc.\n7th Int.\nWorkshop on Agent Mediated Electronic Commerce, pages 39-52, 2005.\n[8] A. Francis.\nAdvanced Level Statistics.\nStanley Thornes Publishers, 1979.\n[9] S. Ieong and Y. Shoham.\nMarginal contribution nets: A compact representation scheme for coalitional games.\nIn Proceedings of the Sixth ACM Conference on Electronic Commerce, pages 193-202, Vancouver, Canada, 2005.\n[10] S. Ieong and Y. Shoham.\nMulti-attribute coalition games.\nIn Proceedings of the Seventh ACM Conference on Electronic Commerce, pages 170-179, Ann Arbor, Michigan, 2006.\n[11] J. P. Kahan and A. Rapoport.\nTheories of Coalition Formation.\nLawrence Erlbaum Associates Publishers, 1984.\n[12] I. Mann and L. S. Shapley.\nValues for large games iv: Evaluating the electoral college exactly.\nTechnical report, The RAND Corporation, Santa Monica, 1962.\n[13] A. MasColell, M. Whinston, and J. R. Green.\nMicroeconomic Theory.\nOxford University Press, 1995.\n[14] M. J. Osborne and A. Rubinstein.\nA Course in Game Theory.\nThe MIT Press, 1994.\n[15] C. H. Papadimitriou.\nComputational Complexity.\nAddison Wesley Longman, 1994.\n[16] A. Rapoport.\nN-person Game Theory : Concepts and Applications.\nDover Publications, Mineola, NY, 2001.\n[17] A. E. Roth.\nIntroduction to the shapley value.\nIn A. E. Roth, editor, The Shapley value, pages 1-27.\nUniversity of Cambridge Press, Cambridge, 1988.\n[18] T. Sandholm and V. Lesser.\nCoalitions among computationally bounded agents.\nArtificial Intelligence Journal, 94(1):99-137, 1997.\n[19] L. S. Shapley.\nA value for n person games.\nIn A. E. Roth, editor, The Shapley value, pages 31-40.\nUniversity of Cambridge Press, Cambridge, 1988.\n[20] O. Shehory and S. 
Kraus. A kernel-oriented model for coalition-formation in general environments: Implementation and results. In Proceedings of the National Conference on Artificial Intelligence (AAAI-96), pages 131-140, 1996.
[21] O. Shehory and S. Kraus. Methods for task allocation via agent coalition formation. Artificial Intelligence Journal, 101(2):165-200, 1998.
[22] J. R. Taylor. An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. University Science Books, 1982.

A Randomized Method for the Shapley Value for the Voting Game

ABSTRACT

The Shapley value is one of the key solution concepts for coalition games. Its main advantage is that it provides a unique and fair solution, but its main problem is that, for many coalition games, the Shapley value cannot be determined in polynomial time. In particular, the problem of finding this value for the voting game is known to be #P-complete in the general case. However, in this paper, we show that there are some specific voting games for which the problem is computationally tractable. For other general voting games, we overcome the problem of computational complexity by presenting a new randomized method for determining the approximate Shapley value. The time complexity of this method is linear in the number of players. We also show, through empirical studies, that the percentage error for the proposed method is always less than 20% and, in most cases, less than 5%.

1. INTRODUCTION

Coalition formation, a key form of interaction in multi-agent systems, is the process of joining together two or more agents so as to achieve goals that individuals on their own cannot, or to achieve them more efficiently [1, 11, 14, 13]. Often, in such situations, there is more than one possible coalition and a player's payoff depends on which one it joins. Given this, a key problem is to ensure that 
none of the parties in a coalition has any incentive to break away from it and join another coalition (i.e., the coalitions should be stable). However, in many cases there may be more than one solution (i.e., a stable coalition). In such cases, it becomes difficult to select a single solution from among the possible ones, especially if the parties are self-interested (i.e., they have different preferences over stable coalitions). In this context, cooperative game theory deals with the problem of coalition formation and offers a number of solution concepts that possess desirable properties like stability, fair division of joint gains, and uniqueness [16, 14]. Cooperative game theory differs from its non-cooperative counterpart in that, for the former, the players are allowed to form binding agreements, and so there is a strong incentive to work together to receive the largest total payoff. Also, unlike non-cooperative game theory, cooperative game theory does not specify a game through a description of the strategic environment (including the order of players' moves and the set of actions at each move) and the resulting payoffs; instead, it reduces this collection of data to the coalitional form, where each coalition is represented by a single real number: there are no actions, moves, or individual payoffs. The chief advantage of this approach, at least in multiple-player environments, is its practical usefulness: many real-life situations fit more easily into a coalitional form game, whose structure is more tractable than that of a non-cooperative game, whether in normal or extensive form, and it is for this reason that we focus on such forms in this paper. Given these observations, a number of multiagent systems researchers have used and extended cooperative game-theoretic solutions to facilitate automated coalition formation [20, 21, 18]. Moreover, in this work, one of the most extensively studied solution concepts is the Shapley value 
[19]. A player's Shapley value gives an indication of its prospects of playing the game: the higher the Shapley value, the better its prospects. The main advantage of the Shapley value is that it provides a solution that is both unique and fair (see Section 2.1 for a discussion of the property of fairness). However, while these are both desirable properties, the Shapley value has one major drawback: for many coalition games, it cannot be determined in polynomial time. For instance, finding this value for the weighted voting game is, in general, #P-complete [6]. A problem is #P-hard if solving it is as hard as counting satisfying assignments of propositional logic formulae [15, p. 442]. Since #P-completeness thus subsumes NP-completeness, computing the Shapley value for the weighted voting game is intractable in general. In other words, it is practically infeasible to try to compute the exact Shapley value. However, the voting game has practical relevance to multi-agent systems, as it is an important means of reaching consensus between multiple agents. Hence, our objective is to overcome the computational complexity of finding the Shapley value for this game. Specifically, we first show that there are some specific voting games for which the exact value can be computed in polynomial time. By identifying such games, we show, for the first time, when it is feasible to find the exact value and when it is not. For the computationally complex voting games, we present a new randomised method, along the lines of Monte Carlo simulation, for computing the approximate Shapley value. The computational complexity of such games has typically been tackled using two main approaches. The first is to use generating functions [3]. This method trades time complexity for storage space. The second uses an approximation technique based on Monte Carlo simulation [12, 7]. However, the method we propose is more general than either of these (see
Section 6 for details). Moreover, no work has previously analysed the approximation error, i.e., the difference between the true and the approximate Shapley value. It is important to determine this error because the performance of an approximation method is evaluated in terms of two criteria: its time complexity and its approximation error. Thus, our contribution also lies in providing, for the first time, an analysis of the percentage error in the approximate Shapley value. This analysis is carried out empirically. Our experiments show that the error is always less than 20%, and in most cases it is under 5%. Finally, our method has time complexity linear in the number of players and it does not require any arrays (i.e., it is economical in terms of both computing time and storage space). Given this, and the fact that software agents have limited computational resources and therefore cannot compute the true Shapley value, our results are especially relevant to such resource-bounded agents.

The rest of the paper is organised as follows. Section 2 defines the Shapley value and describes the weighted voting game. In Section 3 we describe voting games whose Shapley value can be found in polynomial time. In Section 4, we present a randomized method for finding the approximate Shapley value, and we analyse its performance in Section 5. Section 6 discusses related literature. Finally, Section 7 concludes.

2. BACKGROUND

We begin by introducing coalition games and the Shapley value, and then define the weighted voting game. A coalition game is a game where groups of players ("coalitions") may enforce cooperative behaviour between their members. Hence the game is a competition between coalitions of players, rather than between individual players. Depending on how the players measure utility, coalition game theory is split into two parts. If the
players measure utility or the payoff in the same units and there is a means of exchange of utility such as side payments, we say the game has transferable utility; otherwise it has non-transferable utility. More formally, a coalition game with transferable utility, ⟨N, v⟩, consists of:

1. a finite set N = {1, 2, ..., n} of players, and
2. a function v that associates with every non-empty subset S of N (i.e., a coalition) a real number v(S) (the worth of S).

For each coalition S, the number v(S) is the total payoff that is available for division among the members of S (i.e., the set of joint actions that coalition S can take consists of all possible divisions of v(S) among the members of S). Coalition games with non-transferable payoffs differ in that each coalition is associated with a set of payoff vectors that is not necessarily the set of all possible divisions of some fixed amount. The focus of this paper is on the weighted voting game (described in Section 2.1), which is a game with transferable payoffs. In either case, the players will only join a coalition if they expect to gain from it. Here, the players are allowed to form binding agreements, and so there is a strong incentive to work together to receive the largest total payoff. The problem then is how to split the total payoff between or among the players. In this context, Shapley [19] constructed a solution using an axiomatic approach. Shapley defined a value for games to be a function that assigns to a game (N, v) a number ϕi(N, v) for each i in N. This function satisfies three axioms [17]:

1. Symmetry. This axiom requires that the names of players play no role in determining the value.
2. Carrier. This axiom requires that the sum of ϕi(N, v) over all players i in any carrier C equal v(C). A carrier C is a subset of N such that v(S) = v(S ∩ C) for any subset of players S ⊂ N.
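A concrete way to see the value these axioms pin down: for a small game, one can compute each player's average marginal contribution over all orderings of the players directly. The following Python sketch does this for a toy three-player game whose worths are hypothetical (chosen here only for illustration, not taken from the paper):

```python
from itertools import permutations
from math import factorial

def shapley_value(players, v):
    """Average marginal contribution of each player over all n!
    equally likely orderings of the players."""
    phi = {i: 0.0 for i in players}
    for order in permutations(players):
        coalition = set()
        for i in order:
            # marginal contribution of i to the players preceding it
            phi[i] += v(coalition | {i}) - v(coalition)
            coalition.add(i)
    n_fact = factorial(len(players))
    return {i: total / n_fact for i, total in phi.items()}

# Toy 3-player game with hypothetical worths: any coalition of two
# or more players is worth 1, smaller coalitions are worth 0.
def v(S):
    return 1.0 if len(S) >= 2 else 0.0

values = shapley_value([1, 2, 3], v)
```

With symmetric players, each receives v(N)/3 = 1/3, as the Symmetry and Carrier axioms together require.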
3. Additivity. This axiom specifies how the values of different games must be related to one another. It requires that, for any two games v and v′, ϕi(N, v) + ϕi(N, v′) = ϕi(N, v + v′) for all i in N.

Shapley showed that there is a unique function that satisfies these three axioms. Shapley viewed this value as an index for measuring the power of players in a game. Like a price index or other market indices, the value uses averages (or weighted averages in some of its generalizations) to aggregate the power of players in their various cooperation opportunities. Alternatively, one can think of the Shapley value as a measure of the utility of risk-neutral players in a game. We first introduce some notation and then define the Shapley value. Let fi be a random variable that takes its values in the set of all subsets of N − {i}, with the probability distribution function g defined as:

g(S) = |S|! (n − 1 − |S|)! / n!  for each S ⊆ N − {i}.

The random variable fi is interpreted as the random choice of a coalition that player i joins. Then, a player's Shapley value is defined in terms of its marginal contribution. The marginal contribution of player i to a coalition S with i ∉ S is the function Δiv defined as:

Δiv(S) = v(S ∪ {i}) − v(S).

Thus a player's marginal contribution to a coalition S is the increase in the value of S as a result of i joining it, and the Shapley value of player i is the expectation ϕi(N, v) = E[Δiv(fi)]. The Shapley value is interpreted as follows. Suppose that all the players are arranged in some order, all orderings being equally likely. Then ϕi(N, v) is the expected marginal contribution, over all orderings, of player i to the set of players who precede him. The method for finding a player's Shapley value depends on the definition of the value function v. This function is different for different games, but here we focus specifically on the weighted voting game for the reasons outlined in Section 1.

960 The Sixth Intl.
Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

2.1 The weighted voting game

We adopt the definition of the voting game given in [6]. There is a set of n players that may, for example, represent shareholders in a company or members in a parliament. The weighted voting game is then a game G = (N, v) in which

v(S) = 1 if Σi∈S wi ≥ q, and v(S) = 0 otherwise,

for any coalition S. Thus wi is the number of votes that player i has and q is the number of votes needed to win the game (i.e., the quota). Note that for this game (denoted ⟨q; w1, ..., wn⟩), a player's marginal contribution is either zero or one. This is because the value of any coalition is either zero or one. A coalition with value zero is called a "losing coalition" and one with value one a "winning coalition". If a player's entry to a coalition changes it from losing to winning, then the player's marginal contribution for that coalition is one; otherwise it is zero. The main advantage of the Shapley value is that it gives a solution that is both unique and fair. The property of uniqueness is desirable because it leaves no ambiguity. The property of fairness relates to how the gains from cooperation are split between the members of a coalition. In this case, a player's Shapley value is proportional to the contribution it makes as a member of a coalition; the more contribution it makes, the higher its value. Thus, from a player's perspective, both uniqueness and fairness are desirable properties.

3. VOTING GAMES WITH POLYNOMIAL TIME SOLUTIONS

3.1 All players have equal weight
3.2 A single large party
3.3 Multiple large and small parties
3.4 Three player types

4. FINDING THE APPROXIMATE SHAPLEY VALUE

5. PERFORMANCE OF THE RANDOMIZED METHOD

5.1 Approximation error
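For illustration, the weighted voting game's characteristic function, and the resulting Shapley values, can be computed directly for a small game by enumerating all orderings and counting swings. The weights and quota below are hypothetical, not taken from the paper's experiments:

```python
from itertools import permutations
from math import factorial

# Hypothetical weighted voting game (q; w1, ..., w4) for illustration.
weights = {1: 4, 2: 3, 3: 2, 4: 1}
q = 6  # quota

def v(S):
    """Worth of coalition S: 1 if it meets the quota, 0 otherwise."""
    return 1 if sum(weights[i] for i in S) >= q else 0

def shapley(players):
    """Exact Shapley value by enumeration: a player's marginal
    contribution is 1 exactly when it swings a losing coalition
    of its predecessors into a winning one."""
    counts = {i: 0 for i in players}
    for order in permutations(players):
        coalition = set()
        for i in order:
            counts[i] += v(coalition | {i}) - v(coalition)
            coalition.add(i)
    n_fact = factorial(len(players))
    return {i: swings / n_fact for i, swings in counts.items()}

phi = shapley(list(weights))
```

Since every marginal contribution here is zero or one, each player's Shapley value is simply its fraction of swings over all orderings; in this hypothetical game the heaviest player receives the largest share.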
5.2 Experimental Results

6. RELATED WORK

In order to overcome the computational complexity of finding the Shapley value, two main approaches have been proposed in the literature. One approach is to use generating functions [3]. This method is an exact procedure that overcomes the problem of time complexity, but its storage requirements are substantial: it requires huge arrays. It also has the limitation (not shared by other approaches) that it can only be applied to games with integer weights and quotas. The other method uses an approximation technique based on Monte Carlo simulation. In [12], for instance, the Shapley value is computed by considering a random sample from a large population of players. Our method differs from this in that [12] defines the Shapley value by treating a player's number of swings (if a player can change a losing coalition to a winning one, then, for the player, the coalition is counted as a swing) as a random variable, whereas we treat the players' weights as random variables. For [12], however, the question remains of how to obtain the number of swings from the definition of a voting game, and what the time complexity of doing so is. Since the voting game is defined in terms of the players' weights, and the number of swings is obtained from these weights, our method corresponds more closely to the definition of the voting game. Our method also differs from [7] in that [7] presents a method for the case where all the players' weights are distributed normally, while our method applies to any type of distribution for these weights. Thus, as stated in Section 1, our method is more general than [3, 12, 7]. Also, unlike all the above-mentioned work, we provide an analysis of the performance of our method in terms of the percentage error in the approximate Shapley value. A method for finding the Shapley value was also proposed in [5]. This method
gives the "exact" Shapley value, but its time complexity is exponential. Furthermore, the method can be used only if the game is represented in a specific form (viz., the "multi-issue representation"), not otherwise. Finally, [9, 10] present a polynomial time method for finding the Shapley value. This method can be used if the coalition game is represented as a "marginal contribution net". Furthermore, they assume that the Shapley value of each component of a given coalition game is given by an oracle and, on the basis of this assumption, aggregate these values to find the value for the overall game. In contrast, our method is independent of the representation and gives an approximate Shapley value in linear time, without the need for an oracle.

7. CONCLUSIONS AND FUTURE WORK

Coalition formation is an important form of interaction in multiagent systems. An important issue in such work is for the agents to decide how to split the gains from cooperation between the members of a coalition. In this context, cooperative game theory offers a solution concept called the Shapley value. The main advantage of the Shapley value is that it provides a solution that is both unique and fair. However, its main problem is that, for many coalition games, the Shapley value cannot be determined in polynomial time. In particular, the problem of finding this value for the voting game is #P-complete. Although this problem is, in general, #P-complete, we show that there are some specific voting games for which the Shapley value can be determined in polynomial time, and we characterise such games. By doing so, we have shown when it is computationally feasible to find the exact Shapley value. For other, more complex voting games, we presented a new randomized method for determining the approximate Shapley value. The time complexity of the proposed method is linear in the number of
players. We analysed the performance of this method in terms of the percentage error in the approximate Shapley value. Our experiments show that the percentage error in the Shapley value is at most 20%. Furthermore, in most cases, the error is less than 5%. Finally, we analysed the effect of the different parameters of the voting game on this error. Our study shows that the error decreases as:

1. a player's weight increases,
2. the quota increases, and
3. the number of players increases.

Given the fact that software agents have limited computational resources and therefore cannot compute the true Shapley value, our results are especially relevant to such resource-bounded agents. In future work, we will explore the problem of determining the Shapley value for other commonly occurring coalition games, such as the "production economy" and the "market economy".
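To illustrate the flavour of a randomized estimate (this is a generic permutation-sampling Monte Carlo sketch in the spirit of [12, 7], not the method proposed here, which treats the players' weights as random variables), one can sample random orderings and credit the player whose entry pushes the running weight total past the quota. Each sample then costs time linear in the number of players and needs no arrays. The game below is hypothetical:

```python
import random

def mc_voting_shapley(weights, q, samples=5000, seed=42):
    """Monte Carlo estimate of Shapley values for a voting game
    (q; w_i): sample random orderings and, with one linear pass per
    sample, credit the player whose entry pushes the running weight
    total past the quota."""
    rng = random.Random(seed)
    players = list(weights)
    counts = {i: 0 for i in players}
    for _ in range(samples):
        rng.shuffle(players)
        total = 0
        for i in players:
            winning_before = total >= q
            total += weights[i]
            if total >= q and not winning_before:
                counts[i] += 1  # player i swings the coalition
    return {i: swings / samples for i, swings in counts.items()}

# Hypothetical game for illustration; the estimates approach the
# exact values (5/12, 1/4, 1/4, 1/12 for this game) as samples grows.
est = mc_voting_shapley({1: 4, 2: 3, 3: 2, 4: 1}, q=6)
```

Because exactly one player swings the coalition in each sampled ordering, the estimates always sum to one; their accuracy improves as the number of samples grows.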
joins.\nHowever, in many cases there may be more than one\nsolution (i.e., a stable coalition).\nMoreover, in this work, one of the most extensively studied solution concepts is the Shapley value [19].\nA player's Shapley value gives an indication of its prospects of playing the game--the higher the Shapley value, the better its prospects.\nThe main advantage of the Shapley value is that it provides a solution that is both unique and fair (see Section 2.1 for a discussion of the property of fairness).\nHowever, while these are both desirable properties, the Shapley value has one major drawback: for many coalition games, it cannot be determined in polynomial time.\nFor instance, finding this value for the weighted voting game is, in general, #P - complete [6].\nSince #P - completeness thus subsumes NP-completeness, this implies that computing the Shapley value for the weighted voting game will be intractable in general.\nIn other words, it is practically infeasible to try to compute the exact Shapley value.\nHowever, the voting game has practical relevance to multi-agent systems as it is an important means of reaching consensus between multiple agents.\nHence, our objective is to overcome the computational complexity of finding the Shapley value for this game.\nSpecifically, we first show that there are some specific voting games for which the exact value can\nbe computed in polynomial time.\nBy identifying such games, we show, for the first tme, when it is feasible to find the exact value and when it is not.\nFor the computationally complex voting games, we present a new randomised method, along the lines of Monte-Carlo simulation, for computing the approximate Shapley value.\nThe computational complexity of such games has typically been tackled using two main approaches.\nThe first is to use generating functions [3].\nThis method trades time complexity for storage space.\nHowever the method we propose is more general than either of these (see Section 6 for 
details).\nMoreover, no work has previously analysed the approximation error.\nThe approximation error relates to how close the approximate is to the true Shapley value.\nSpecifically, it is the difference between the true and the approximate Shapley value.\nIt is important to determine this error because the performance of an approximation method is evaluated in terms of two criteria: its time complexity, and its approximation error.\nThus, our contribution lies in also in providing, for the first time, an analysis of the percentage error in the approximate Shapley value.\nOur experiments show that the error is always less than 20%, and in most cases it is under 5%.\nFinally, our method has time complexity linear in the number of players and it does not require any arrays (i.e., it is economical in terms of both computing time and storage space).\nGiven this, and the fact that software agents have limited computational resources and therefore cannot compute the true Shapley value, our results are especially relevant to such resource bounded agents.\nSection 2 defines the Shapley value and describes the weighted voting game.\nIn Section 3 we describe voting games whose Shapley value can be found in polynomial time.\nIn Section 4, we present a randomized method for finding the approximate Shapley value and analyse its performance in Section 5.\nSection 6 discusses related literature.\nFinally, Section 7 concludes.\n2.\nBACKGROUND\nWe begin by introducing coalition games and the Shapley value and then define the weighted voting game.\nA coalition game is a game where groups of players (\"coalitions\") may enforce cooperative behaviour between their members.\nHence the game is a competition between coalitions of players, rather than between individual players.\nDepending on how the players measure utility, coalition game theory is split into two parts.\nMore formally, a coalition game with transferable utility, ~ N, v ~, consists of:\n1.\na finite set (N = {1, 2,..., 
n}) of players and 2.\nCoalition games with nontransferable payoffs differ from ones with transferable payoffs in the following way.\nThe focus of this paper is on the weighted voting game (described in Section 2.1) which is a game with transferable payoffs.\nThus, in either case, the players will only join a coalition if they expect to gain from it.\nHere, the players are allowed to form binding agreements, and so there is strong incentive to work together to receive the largest total payoff.\nThe problem then is how to split the total payoff between or among the players.\nIn this context, Shapley [19] constructed a solution using an axiomatic approach.\nShapley defined a value for games to be a function that assigns to a game (N, v), a number \u03d5i (N, v) for each i in N.\nThis function satisfies three axioms [17]:\n1.\nSymmetry.\nThis axiom requires that the names of players play no role in determining the value.\n2.\nCarrier.\nThis axiom requires that the sum of \u03d5i (N, v) for all players i in any carrier C equal v (C).\nA carrier C is a subset of N such that v (S) = v (S \u2229 C) for any subset of players S \u2282 N. 
3.\nAdditivity.\nThis axiom specifies how the values of different games must be related to one another.\nShapley showed that there is a unique function that satisfies these three axioms.\nShapley viewed this value as an index for measuring the power of players in a game.\nLike a price index or other market indices, the value uses averages (or weighted averages in some of its generalizations) to aggregate the power of players in their various cooperation opportunities.\nAlternatively, one can think of the Shapley value as a measure of the utility of risk neutral players in a game.\nWe first introduce some notation and then define the Shapley value.\nThe random variable fi is interpreted as the random choice of a coalition that player i joins.\nThen, a player's Shapley value is defined in terms of its marginal contribution.\nThus, the marginal contribution of player i to coalition S with i \u2208 \/ S is a function Div that is defined as follows:\nThus a player's marginal contribution to a coalition S is the increase in the value of S as a result of i joining it.\nThe Shapley value is interpreted as follows.\nSuppose that all the players are arranged in some order, all orderings being equally likely.\nThen \u03d5i (N, v) is the expected marginal contribution, over all orderings, of player i to the set of players who precede him.\nThe method for finding a player's Shapley value depends on the definition of the value function (v).\nThis function is different for different games, but here we focus specifically on the weighted voting game for the reasons outlined in Section 1.\n960 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n2.1 The weighted voting game\nWe adopt the definition of the voting game given in [6].\nThus, there is a set of n players that may, for example, represent shareholders in a company or members in a parliament.\nThe weighted voting game is then a game G = (N, v) in which:\nfor any coalition S. 
Thus wi is the number of votes that player i has and q is the number of votes needed to win the game (i.e., the quota).\nNote that for this game (denoted (q \u2022, w1,..., wn)), a player's marginal contribution is either zero or one.\nThis is because the value of any coalition is either zero or one.\nA coalition with value zero is called a \"losing coalition\" and with value one a \"winning coalition\".\nIf a player's entry to a coalition changes it from losing to winning, then the player's marginal contribution for that coalition is one; otherwise it is zero.\nThe main advantage of the Shapley value is that it gives a solution that is both unique and fair.\nThe property of fairness relates to how the gains from cooperation are split between the members of a coalition.\nIn this case, a player's Shapley value is proportional to the contribution it makes as a member of a coalition; the more contribution it makes, the higher its value.\nThus, from a player's perspective, both uniqueness and fairness are desirable properties.\n6.\nRELATED WORK\nIn order to overcome the computational complexity of finding the Shapley value, two main approaches have been proposed in the literature.\nOne approach is to use generating functions [3].\nThis method is an exact procedure that overcomes the problem of time complexity, but its storage requirements are substantial--it requires huge arrays.\nIt also has the limitation (not shared by other approaches) that it can only be applied to games with integer weights and quotas.\nThe other method uses an approximation technique based on Monte Carlo simulation.\nIn [12], for instance, the Shapley value is computed by considering a random sample from a large population of players.\nIn [12], however, the question remains how to get the number of swings from the definition of a voting game and what is the time complexity of doing this.\nSince the voting game is defined in terms of the players' weights and the number of swings are obtained from 
these weights, our method corresponds more closely to the definition of the voting game.\nThus, as stated in Section 1, our method is more general than [3, 12, 7].\nAlso, unlike all the above mentioned work, we provide an analysis of the performance of our method in terms of the percentage error in the approximate Shapley value.\nA method for finding the Shapley value was also proposed in [5].\nThis method gives the \"exact\" Shapley value, but its time complexity is exponential.\nFurthermore, the method can be used only if the game is represented in a specific form (viz., the \"multi-issue representation\"), not otherwise.\nFinally, [9, 10] present a polynomial time method for finding the Shapley value.\nThis method can be used if the coalition game is represented as a \"marginal contribution net\".\nFurthermore, they assume that the Shapley value of a component of a given coalition game is given by an oracle, and on the basis of this assumption aggregate these values to find the value for the overall game.\nIn contrast, our method is independent\nThe Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 965\nof the representation and gives an approximate Shapley value in linear time, without the need for an oracle.\n7.\nCONCLUSIONS AND FUTURE WORK\nCoalition formation is an important form of interaction in multiagent systems.\nAn important issue in such work is for the agents to decide how to split the gains from cooperation between the members of a coalition.\nIn this context, cooperative game theory offers a solution concept called the Shapley value.\nThe main advantage of the Shapley value is that it provides a solution that is both unique and fair.\nHowever, its main problem is that, for many coalition games, the Shapley value cannot be determined in polynomial time.\nIn particular, the problem of finding this value for the voting game is #P - complete.\nAlthough this problem is, in general #P - complete, we show that there are some specific voting games for which the Shapley value can be determined in polynomial time and characterise such games.\nBy doing so, we have shown when it is computationally feasible to find the exact Shapley value.\nFor other complex voting games, we presented a new randomized method for determining the approximate Shapley value.\nThe time complexity of the proposed method is linear in the number of players.\nWe analysed the performance of this method in terms of the percentage error in the approximate Shapley value.\nOur experiments show that the percentage error in the Shapley value is at most 20.\nFurthermore, in most cases, the error is less than 5%.\nFinally, we analyse the effect of the different parameters of the voting game on this error.\nOur study shows that the error decreases as\n1.\na player's weight increases, 2.\nthe quota increases, and 3.\nthe number of players increases.\nGiven the fact that software agents have limited computational resources and therefore cannot compute the true Shapley value, our results are especially relevant to such 
resource bounded agents.\nIn future, we will explore the problem of determining the Shapley value for other commonly occurring coalition games like the \"production economy\" and the \"market economy\".","lvl-2":"A Randomized Method for the Shapley Value for the Voting Game\nABSTRACT\nThe Shapley value is one of the key solution concepts for coalition games.\nIts main advantage is that it provides a unique and fair solution, but its main problem is that, for many coalition games, the Shapley value cannot be determined in polynomial time.\nIn particular, the problem of finding this value for the voting game is known to be #P - complete in the general case.\nHowever, in this paper, we show that there are some specific voting games for which the problem is computationally tractable.\nFor other general voting games, we overcome the problem of computational complexity by presenting a new randomized method for determining the approximate Shapley value.\nThe time complexity of this method is linear in the number of players.\nWe also show, through empirical studies, that the percentage error for the proposed method is always less than 20% and, in most cases, less than 5%.\n1.\nINTRODUCTION\nCoalition formation, a key form of interaction in multi-agent systems, is the process of joining together two or more agents so as to achieve goals that individuals on their own cannot, or to achieve them more efficiently [1, 11, 14, 13].\nOften, in such situations, there is more than one possible coalition and a player's payoff depends on which one it joins.\nGiven this, a key problem is to ensure that none of the parties in a coalition has any incentive to break away from it and join another coalition (i.e., the coalitions should be stable).\nHowever, in many cases there may be more than one\nsolution (i.e., a stable coalition).\nIn such cases, it becomes difficult to select a single solution from among the possible ones, especially if the parties are self-interested (i.e., they have 
different preferences over stable coalitions).\nIn this context, cooperative game theory deals with the problem of coalition formation and offers a number of solution concepts that possess desirable properties like stability, fair division ofjoint gains, and uniqueness [16, 14].\nCooperative game theory differs from its non-cooperative counterpart in that for the former the players are allowed to form binding agreements and so there is a strong incentive to work together to receive the largest total payoff.\nAlso, unlike non-cooperative game theory, cooperative game theory does not specify a game through a description of the strategic environment (including the order of players' moves and the set of actions at each move) and the resulting payoffs, but, instead, it reduces this collection of data to the coalitional form where each coalition is represented by a single real number: there are no actions, moves, or individual payoffs.\nThe chief advantage of this approach, at least in multiple-player environments, is its practical usefulness.\nThus, many more real-life situations fit more easily into a coalitional form game, whose structure is more tractable than that of a non-cooperative game, whether that be in normal or extensive form and it is for this reason that we focus on such forms in this paper.\nGiven these observations, a number of multiagent systems researchers have used and extended cooperative game-theoretic solutions to facilitate automated coalition formation [20, 21, 18].\nMoreover, in this work, one of the most extensively studied solution concepts is the Shapley value [19].\nA player's Shapley value gives an indication of its prospects of playing the game--the higher the Shapley value, the better its prospects.\nThe main advantage of the Shapley value is that it provides a solution that is both unique and fair (see Section 2.1 for a discussion of the property of fairness).\nHowever, while these are both desirable properties, the Shapley value has one 
major drawback: for many coalition games, it cannot be determined in polynomial time.\nFor instance, finding this value for the weighted voting game is, in general, #P-complete [6].\nA problem is #P-hard if solving it is as hard as counting satisfying assignments of propositional logic formulae [15, p. 442].\nSince #P-completeness subsumes NP-completeness, this implies that computing the Shapley value for the weighted voting game is intractable in general.\nIn other words, it is practically infeasible to compute the exact Shapley value.\nHowever, the voting game has practical relevance to multi-agent systems as it is an important means of reaching consensus between multiple agents.\nHence, our objective is to overcome the computational complexity of finding the Shapley value for this game.\nSpecifically, we first show that there are some specific voting games for which the exact value can be computed in polynomial time.\nBy identifying such games, we show, for the first time, when it is feasible to find the exact value and when it is not.\nFor the computationally complex voting games, we present a new randomised method, along the lines of Monte Carlo simulation, for computing the approximate Shapley value.\nThe computational complexity of such games has typically been tackled using two main approaches.\nThe first is to use generating functions [3].\nThis method trades time complexity for storage space.\nThe second uses an approximation technique based on Monte Carlo simulation [12, 7].\nHowever, the method we propose is more general than either of these (see Section 6 for details).\nMoreover, no work has previously analysed the approximation error, which relates to how close the approximate value is to the true Shapley value.\nSpecifically, it is the difference between the true and the approximate Shapley value.\nIt is important to determine this error because the performance of an approximation method is evaluated in terms of two
criteria: its time complexity and its approximation error.\nThus, our contribution also lies in providing, for the first time, an analysis of the percentage error in the approximate Shapley value.\nThis analysis is carried out empirically.\nOur experiments show that the error is always less than 20%, and in most cases it is under 5%.\nFinally, our method has time complexity linear in the number of players and it does not require any arrays (i.e., it is economical in terms of both computing time and storage space).\nGiven this, and the fact that software agents have limited computational resources and therefore cannot compute the true Shapley value, our results are especially relevant to such resource-bounded agents.\nThe rest of the paper is organised as follows.\nSection 2 defines the Shapley value and describes the weighted voting game.\nIn Section 3 we describe voting games whose Shapley value can be found in polynomial time.\nIn Section 4, we present a randomised method for finding the approximate Shapley value and analyse its performance in Section 5.\nSection 6 discusses related literature.\nFinally, Section 7 concludes.\n2.\nBACKGROUND\nWe begin by introducing coalition games and the Shapley value, and then define the weighted voting game.\nA coalition game is a game where groups of players ("coalitions") may enforce cooperative behaviour between their members.\nHence, the game is a competition between coalitions of players, rather than between individual players.\nDepending on how the players measure utility, coalition game theory is split into two parts.\nIf the players measure utility or payoff in the same units and there is a means of exchange of utility, such as side payments, we say the game has transferable utility; otherwise it has non-transferable utility.\nMore formally, a coalition game with transferable utility, (N, v), consists of:\n1.\na finite set (N = {1, 2,..., n}) of players and 2.\na function (v) that associates with every
non-empty subset S of N (i.e., a coalition) a real number v (S) (the worth of S).\nFor each coalition S, the number v (S) is the total payoff that is available for division among the members of S (i.e., the set of joint actions that coalition S can take consists of all possible divisions of v (S) among the members of S).\nCoalition games with non-transferable payoffs differ from ones with transferable payoffs in the following way: for the former, each coalition is associated with a set of payoff vectors that is not necessarily the set of all possible divisions of some fixed amount.\nThe focus of this paper is on the weighted voting game (described in Section 2.1), which is a game with transferable payoffs.\nIn either case, the players will only join a coalition if they expect to gain from it.\nHere, the players are allowed to form binding agreements, and so there is a strong incentive to work together to receive the largest total payoff.\nThe problem then is how to split the total payoff between or among the players.\nIn this context, Shapley [19] constructed a solution using an axiomatic approach.\nShapley defined a value for games to be a function that assigns to a game (N, v) a number ϕi (N, v) for each i in N.\nThis function satisfies three axioms [17]:\n1.\nSymmetry.\nThis axiom requires that the names of the players play no role in determining the value.\n2.\nCarrier.\nThis axiom requires that the sum of ϕi (N, v) over all players i in any carrier C equals v (C).\nA carrier C is a subset of N such that v (S) = v (S ∩ C) for any subset of players S ⊆ N.
3.\nAdditivity.\nThis axiom specifies how the values of different games must be related to one another.\nIt requires that, for any two games (N, v) and (N, v'), ϕi (N, v) + ϕi (N, v') = ϕi (N, v + v') for all i in N.\nShapley showed that there is a unique function that satisfies these three axioms.\nShapley viewed this value as an index for measuring the power of players in a game.\nLike a price index or other market indices, the value uses averages (or weighted averages in some of its generalizations) to aggregate the power of players in their various cooperation opportunities.\nAlternatively, one can think of the Shapley value as a measure of the utility of risk-neutral players in a game.\nWe first introduce some notation and then define the Shapley value.\nLet fi be a random variable that takes its values in the set of all subsets of N − {i}, and has the probability distribution function g defined as g (S) = |S|! (n − |S| − 1)! / n!.\nThe random variable fi is interpreted as the random choice of a coalition that player i joins.\nThe marginal contribution of player i to a coalition S with i ∉ S is defined as Δiv (S) = v (S ∪ {i}) − v (S); that is, a player's marginal contribution to a coalition S is the increase in the value of S as a result of i joining it.\nA player's Shapley value is then defined in terms of its marginal contribution as ϕi (N, v) = E [Δiv (fi)].\nThe Shapley value is interpreted as follows.\nSuppose that all the players are arranged in some order, all orderings being equally likely.\nThen ϕi (N, v) is the expected marginal contribution, over all orderings, of player i to the set of players who precede him.\nThe method for finding a player's Shapley value depends on the definition of the value function (v).\nThis function is different for different games, but here we focus specifically on the weighted voting game for the reasons outlined in Section 1.\n960 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
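The two computational routes discussed above (direct enumeration for the exact value, and randomised sampling of orderings for an approximate value) can be sketched as follows for the weighted voting game; this is a minimal illustration under our own function names and not the authors' implementation:

```python
from itertools import permutations
from math import factorial
import random

def shapley_exact(weights, quota):
    """Exact Shapley values of the voting game (quota; weights) by
    enumerating all n! orderings; a player's value is the fraction of
    orderings in which it is pivotal, i.e. turns the coalition of
    players preceding it from losing to winning."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for player in order:
            if total < quota <= total + weights[player]:
                pivots[player] += 1
                break  # at most one pivot per ordering
            total += weights[player]
    return [p / factorial(n) for p in pivots]

def shapley_monte_carlo(weights, quota, samples=20000, seed=0):
    """Approximate Shapley values by sampling random orderings
    instead of enumerating all of them."""
    n = len(weights)
    rng = random.Random(seed)
    pivots = [0] * n
    order = list(range(n))
    for _ in range(samples):
        rng.shuffle(order)
        total = 0
        for player in order:
            if total < quota <= total + weights[player]:
                pivots[player] += 1
                break
            total += weights[player]
    return [p / samples for p in pivots]

# One large party (weight 3), three small parties (weight 1), quota 4:
# the large party is pivotal whenever at least one small party precedes it.
print(shapley_exact([3, 1, 1, 1], 4))        # -> [0.75, 1/12, 1/12, 1/12]
print(shapley_monte_carlo([3, 1, 1, 1], 4))  # close to the exact values
```

Enumeration is feasible only for small n (it visits all n! orderings), which is precisely why the sampling variant is of interest; the sampled estimate converges to the exact values as the number of sampled orderings grows.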
2.1 The weighted voting game\nWe adopt the definition of the voting game given in [6].\nThus, there is a set of n players that may, for example, represent shareholders in a company or members in a parliament.\nThe weighted voting game is then a game G = (N, v) in which v (S) = 1 if w (S) ≥ q and v (S) = 0 otherwise, for any coalition S, where w (S) = Σi∈S wi.\nThus wi is the number of votes that player i has and q is the number of votes needed to win the game (i.e., the quota).\nNote that for this game (denoted (q; w1,..., wn)), a player's marginal contribution is either zero or one.\nThis is because the value of any coalition is either zero or one.\nA coalition with value zero is called a "losing coalition" and one with value one a "winning coalition".\nIf a player's entry to a coalition changes it from losing to winning, then the player's marginal contribution for that coalition is one; otherwise it is zero.\nThe main advantage of the Shapley value is that it gives a solution that is both unique and fair.\nThe property of uniqueness is desirable because it leaves no ambiguity.\nThe property of fairness relates to how the gains from cooperation are split between the members of a coalition.\nIn this case, a player's Shapley value is proportional to the contribution it makes as a member of a coalition; the more contribution it makes, the higher its value.\nThus, from a player's perspective, both uniqueness and fairness are desirable properties.\n3.\nVOTING GAMES WITH POLYNOMIAL TIME SOLUTIONS\nHere we describe those voting games for which the Shapley value can be determined in polynomial time.\nThis is achieved using the direct enumeration approach (i.e., listing all possible coalitions and finding a player's marginal contribution to each of them).\nWe characterise such games in terms of the number of players and their weights.\n3.1 All players have equal weight\nConsider the game (q; j,..., j) with m parties.\nEach party has j votes.\nSince all parties have equal weight, the symmetry axiom implies that every party has the same Shapley value; whenever the grand coalition is winning, each party's value is therefore 1 / m, which can be computed in constant time.\n3.2 One large and multiple small parties\nThe game has two types of players: large (with weight wl > ws) and
small (with weight ws).\nThere is one large player and m small ones.\nThe quota for this game is q; i.e., we have a game of the form (q; wl, ws, ws,..., ws).\nThe total number of players is (m + 1).\nThe value of a coalition is one if the weight of the coalition is greater than or equal to q, and zero otherwise.\nLet ϕl denote the Shapley value for the large player and ϕs that for each small player.\nWe first consider ws = 1 and then ws > 1.\nThe smallest possible value for q is wl + 1; this is because, if q ≤ wl, the large player on its own already forms a winning coalition.\nFor ws = 1, the large player is pivotal in a randomly chosen ordering exactly when the number of small players preceding it lies between q − wl and q − 1, so ϕl = (min (q − 1, m) − (q − wl) + 1) / (m + 1).\nNow consider a small player.\nWe know that the sum of the Shapley values of all the m + 1 players is one.\nAlso, since the small parties have equal weights, their Shapley values are the same.\nHence, we get ϕs = (1 − ϕl) / m.\nThus, both ϕl and ϕs can be computed in constant time.\nThis is because both require a constant number of basic operations (addition, subtraction, multiplication, and division).\nIn the same way, the Shapley value for a voting game with a single large party and multiple small parties with ws > 1 can be determined in constant time.\n3.3 Multiple large and small parties\nWe now consider a voting game that has two player types: large and small (as in Section 3.2), but now there are multiple large and multiple small parties.\nThe set of parties consists of ml large parties and ms small parties.\nThe weight of each large party is wl and that of each small one is ws, where ws < wl; the analysis then distinguishes cases (S1 to S4) according to whether or not q exceeds mlwl and msws.\nIf CAi (βt i) > CAi (αt i), then Ai considers that βt i is stronger than its previous argument, changes its argument to βt i by sending assert(βt i) to the rest of the agents (the intuition behind this is that, since a counterargument is also an argument, Ai checks whether the new counterargument is a better argument than the one it was previously holding) and rebut(βt i, αt j) to Aj.\nOtherwise (i.e.
CAi (βt i) ≤ CAi (αt i)), Ai will send only rebut(βt i, αt j) to Aj.\nIn either situation the protocol moves to step 3.\n• If βt i is a counterexample c, then Ai sends rebut(c, αt j) to Aj.\nThe protocol moves to step 4.\n• If Ai cannot generate any counterargument or counterexample, the token is sent to the next agent, a new round t + 1 starts, and the protocol moves to state 2.\n3.\nThe agent Aj that has received the counterargument βt i locally compares it against its own argument, αt j, by locally assessing their confidence.\nIf CAj (βt i) > CAj (αt j), then Aj will accept the counterargument as stronger than its own argument, and it will send assert(βt i) to the other agents.\nOtherwise (i.e. CAj (βt i) ≤ CAj (αt j)), Aj will not accept the counterargument, and will inform the other agents accordingly.\nIn either situation a new round t + 1 starts, Ai sends the token to the next agent, and the protocol moves back to state 2.\n4.\nThe agent Aj that has received the counterexample c retains it in its case base and generates a new argument αt+1 j that takes c into account, and informs the rest of the agents by sending assert(αt+1 j) to all of them.\nThen, Ai sends the token to the next agent, a new round t + 1 starts, and the protocol moves back to step 2.\n5.\nThe protocol ends yielding a joint prediction, as follows: if the arguments in Ht agree, then their prediction is the joint prediction; otherwise a voting mechanism is used to decide the joint prediction.\nThe voting mechanism uses the joint confidence measure as the voting weights, as follows: S = arg max_{Sk ∈ S} Σ_{αi ∈ Ht | αi.S = Sk} C(αi).\nMoreover, in order to avoid infinite iterations, if an agent sends the same argument or counterargument twice to the same agent, the message is not considered.\n8.\nEXEMPLIFICATION\nLet us consider a system composed of three agents A1, A2
and A3.\nOne of the agents, A1, receives a problem P to solve, and decides to use AMAL to solve it.\nFor that reason, it invites A2 and A3 to take part in the argumentation process.\nThey accept the invitation, and the argumentation protocol starts.\nInitially, each agent generates its individual prediction for P, and broadcasts it to the other agents.\nThus, all of them can compute H0 = α0 1, α0 2, α0 3 .\nIn particular, in this example: • α0 1 = A1, P, hadromerida, D1 • α0 2 = A2, P, astrophorida, D2 • α0 3 = A3, P, axinellida, D3 A1 starts owning the token and tries to generate counterarguments for α0 2 and α0 3, but does not succeed; however, it has one counterexample c13 for α0 3.\nThus, A1 sends the message rebut(c13, α0 3) to A3.\nA3 incorporates c13 into its case base and tries to solve the problem P again, now taking c13 into consideration.\nA3 comes up with the justified prediction α1 3 = A3, P, hadromerida, D4 , and broadcasts it to the rest of the agents with the message assert(α1 3).\nThus, all of them know the new H1 = α0 1, α0 2, α1 3 .\nRound 1 starts and A2 gets the token.\nA2 tries to generate counterarguments for α0 1 and α1 3 and only succeeds in generating a counterargument β1 2 = A2, P, astrophorida, D5 against α1 3.\nThe counterargument is sent to A3 with the message rebut(β1 2, α1 3).\nAgent A3 receives the counterargument and assesses its local confidence.\nThe result is that the local confidence of the counterargument β1 2 is lower than the local confidence of α1 3.\nTherefore, A3 does not accept the counterargument, and thus H2 = α0 1, α0 2, α1 3 .\nRound 2 starts and A3 gets the token.\nA3 generates a counterargument β2 3 = A3, P, hadromerida, D6 for α0 2 and sends it to A2 with the message rebut(β2 3, α0 2).\nAgent A2 receives the counterargument and assesses its local
confidence.\nThe result is that the local confidence of the counterargument β2 3 is higher than the local confidence of α0 2.\nTherefore, A2 accepts the counterargument and informs the rest of the agents with the message assert(β2 3).\nAfter that, H3 = α0 1, β2 3, α1 3 .\nAt Round 3, since all the agents agree (all the justified predictions in H3 predict hadromerida as the solution class), the protocol ends, and A1 (the agent that received the problem) considers hadromerida as the joint solution for the problem P. 9.\nEXPERIMENTAL EVALUATION\nFigure 5: Individual and joint accuracy for 2 to 5 agents.\nIn this section we empirically evaluate the AMAL argumentation framework.\nWe have run experiments on two different data sets: soybean (from the UCI machine learning repository) and sponge (a relational data set).\nThe soybean data set has 307 examples and 19 solution classes, while the sponge data set has 280 examples and 3 solution classes.\nIn an experimental run, the data set is divided into two sets: the training set and the test set.\nThe training set examples are distributed among 5 different agents without replication, i.e.
there is no example shared by two agents.\nIn the testing stage, problems in the test set arrive randomly at one of the agents, whose goal is to predict the correct solution.\nThe experiments are designed to test two hypotheses: (H1) that argumentation is a useful framework for joint deliberation and can improve over other typical methods such as voting; and (H2) that learning from communication improves the individual performance of a learning agent participating in an argumentation process.\nMoreover, we also expect that the improvement achieved from argumentation will increase as the number of agents participating in the argumentation increases (since more information will be taken into account).\nConcerning H1 (argumentation is a useful framework for joint deliberation), we ran 4 experiments, using 2, 3, 4, and 5 agents respectively (in all experiments each agent has 20% of the training data, since the training data is always distributed among 5 agents).\nFigure 5 shows the result of those experiments on the sponge and soybean data sets.\nClassification accuracy is plotted on the vertical axis, and the number of agents that took part in the argumentation processes is shown on the horizontal axis.\nFor each number of agents, three bars are shown: Individual, Voting, and AMAL.\nThe Individual bar shows the average accuracy of individual agents' predictions; the Voting bar shows the average accuracy of the joint prediction achieved by voting but without any argumentation; and finally the AMAL bar shows the average accuracy of the joint prediction using argumentation.\nThe results shown are the average of five 10-fold cross-validation runs.\nFigure 5 shows that collaboration (Voting and AMAL) outperforms individual problem solving.\nMoreover, as we expected, the accuracy improves as more agents collaborate, since more information is taken into account.\nWe can also see that AMAL always outperforms standard voting, showing that joint decisions are based on better
information as provided by the argumentation process.\nFor instance, the joint accuracy for 2 agents in the sponge data set is 87.57% for AMAL and 86.57% for Voting (while individual accuracy is just 80.07%).\nMoreover, the improvement achieved by AMAL over Voting is even larger in the soybean data set.\nThe reason is that the soybean data set is more difficult (in the sense that agents need more data to produce good predictions).\nThese experimental results show that AMAL effectively exploits the opportunity for improvement: the accuracy can be higher only because agents have changed their opinion during argumentation (otherwise they would achieve the same result as Voting).\nConcerning H2 (learning from communication in argumentation processes improves individual prediction), we ran the following experiment: initially, we distributed 25% of the training set among the five agents; after that, the rest of the cases in the training set are sent to the agents one by one; when an agent receives a new training case, it has several options: it can discard it, retain it, or use it to engage in an argumentation process.\nFigure 6 shows the result of that experiment for the two data sets.\nFigure 6 contains three plots, where NL (not learning) shows the accuracy of an agent with no learning at all; L (learning) shows the evolution of the individual classification accuracy when agents learn by retaining the training cases they individually receive (notice that when all the training cases have been retained at 100%, the accuracy should be equal to that of Figure 5 for individual agents); and finally LFC (learning from communication) shows the evolution of the individual classification accuracy of learning agents that also learn by retaining those counterexamples received during argumentation (i.e.
they learn both from training examples and counterexamples).\nFigure 6 shows that if an agent Ai also learns from communication, Ai can significantly improve its individual performance with just a small number of additional cases (those selected as relevant counterexamples for Ai during argumentation).\nFor instance, in the soybean data set, individual agents achieved an accuracy of 70.62% when they also learn from communication versus an accuracy of 59.93% when they only learn from their individual experience.\nThe number of cases learnt from communication depends on the properties of the data set: in the sponge data set, agents retained only very few additional cases and significantly improved individual accuracy; namely, they retain 59.96 cases on average (compared to the 50.4 cases retained if they do not learn from communication).\nIn the soybean data set more counterexamples are learnt to significantly improve individual accuracy; namely, they retain 87.16 cases on average (compared to 55.27 cases retained if they do not learn from communication).\nFinally, the fact that both data sets show a significant improvement points out the adaptive nature of the argumentation-based approach to learning from communication: the useful cases are selected as counterexamples (and no more than those needed), and they have the intended effect.\n10.\nRELATED WORK\nConcerning CBR in a multi-agent setting, the first research was on negotiated case retrieval [11] among groups of agents.\nOur work on multi-agent case-based learning started in 1999 [6]; later McGinty and Smyth [7] presented a multi-agent collaborative CBR approach (CCBR) for planning.\nFinally, another interesting approach is multi-case-base reasoning (MCBR) [5], which deals with\nFigure 6:
Learning from communication resulting from argumentation in a system composed of 5 agents.\ndistributed systems where there are several case bases available for the same task, and addresses the problems of cross-case base adaptation.\nThe main difference is that our MAC approach is a way to distribute the Reuse process of CBR (using a voting system) while Retrieve is performed individually by each agent; the other multiagent CBR approaches, however, focus on distributing the Retrieve process.\nResearch on MAS argumentation focuses on several issues, like a) logics, protocols and languages that support argumentation, b) argument selection, and c) argument interpretation.\nApproaches for logics and languages that support argumentation include defeasible logic [4] and BDI models [13].\nAlthough argument selection is a key aspect of automated argumentation (see [12] and [13]), most research has been focused on preference relations among arguments.\nIn our framework we have addressed both argument selection and preference relations using a case-based approach.\n11.\nCONCLUSIONS AND FUTURE WORK\nIn this paper we have presented an argumentation-based framework for multi-agent learning.\nSpecifically, we have presented AMAL, a framework that allows a group of learning agents to argue about the solution of a given problem, and we have shown how their learning capabilities can be used to generate arguments and counterarguments.\nThe experimental evaluation shows that the increased amount of information provided to the agents by the argumentation process increases their predictive accuracy, especially when an adequate number of agents take part in the argumentation.\nThe main contributions of this work are: a) an argumentation framework for learning agents; b) a case-based preference relation over arguments, based on computing an overall confidence estimation of arguments; c) a case-based policy to generate counterarguments and select counterexamples; and d) an argumentation-based
approach for learning from communication.\nFinally, in the experiments presented here a learning agent retains all counterexamples submitted by the other agents; however, this is a very simple case retention policy, and we would like to experiment with more informed policies, with the goal that individual learning agents could significantly improve using only a small set of cases proposed by other agents.\nFinally, our approach is focused on lazy learning, and future work aims at incorporating eager inductive learning inside the argumentative framework for learning from communication.\n12.\nREFERENCES\n[1] Agnar Aamodt and Enric Plaza.\nCase-based reasoning: Foundational issues, methodological variations, and system approaches.\nArtificial Intelligence Communications, 7(1):39-59, 1994.\n[2] E. Armengol and E. Plaza.\nLazy induction of descriptions for relational case-based learning.\nIn ECML'2001, pages 13-24, 2001.\n[3] Gerhard Brewka.\nDynamic argument systems: A formal model of argumentation processes based on situation calculus.\nJournal of Logic and Computation, 11(2):257-282, 2001.\n[4] Carlos I. Chesñevar and Guillermo R. Simari.\nFormalizing defeasible argumentation using labelled deductive systems.\nJournal of Computer Science & Technology, 1(4):18-33, 2000.\n[5] D. Leake and R. Sooriamurthi.\nAutomatically selecting strategies for multi-case-base reasoning.\nIn S. Craw and A. Preece, editors, ECCBR'2002, pages 204-219, Berlin, 2002.\nSpringer Verlag.\n[6] Francisco J. Martín, Enric Plaza, and Josep-Lluis Arcos.\nKnowledge and experience reuse through communications among competent (peer) agents.\nInternational Journal of Software Engineering and Knowledge Engineering, 9(3):319-341, 1999.\n[7] Lorraine McGinty and Barry Smyth.\nCollaborative case-based reasoning: Applications in personalized route planning.\nIn I. Watson and Q.
Yang, editors, ICCBR, number 2080 in LNAI, pages 362-376.\nSpringer-Verlag, 2001.\n[8] Santi Ontañón and Enric Plaza.\nJustification-based multiagent learning.\nIn ICML'2003, pages 576-583.\nMorgan Kaufmann, 2003.\n[9] Enric Plaza, Eva Armengol, and Santiago Ontañón.\nThe explanatory power of symbolic similarity in case-based reasoning.\nArtificial Intelligence Review, 24(2):145-161, 2005.\n[10] David Poole.\nOn the comparison of theories: Preferring the most specific explanation.\nIn IJCAI-85, pages 144-147, 1985.\n[11] M. V. Nagendra Prasad, Victor R. Lesser, and Susan Lander.\nRetrieval and reasoning in distributed case bases.\nTechnical report, UMass Computer Science Department, 1995.\n[12] S. Kraus, K. Sycara, and A. Evenchik.\nReaching agreements through argumentation: a logical model and implementation.\nArtificial Intelligence Journal, 104:1-69, 1998.\n[13] S. Parsons, C. Sierra, and N. R. Jennings.\nAgents that reason and negotiate by arguing.\nJournal of Logic and Computation, 8:261-292, 1998.\n[14] Bruce A.
Wooley.\nExplanation component of software systems.\nACM CrossRoads, 5.1, 1998.\nLearning and Joint Deliberation through Argumentation in Multi-Agent Systems\nABSTRACT\nIn this paper we will present an argumentation framework for learning agents (AMAL) designed for two purposes: (1) for joint deliberation, and (2) for learning from communication.\nThe AMAL framework is completely based on learning from examples: the argument preference relation, the argument generation policy, and the counterargument generation policy are case-based techniques.\nFor joint deliberation, learning agents share their experience by forming a committee to decide upon some joint decision.\nWe experimentally show that the argumentation among committees of agents improves both the individual and joint performance.\nFor learning from communication, an agent engages in arguing with other agents in order to contrast its individual hypotheses and receive counterexamples; the argumentation process improves their learning scope and individual performance.\n1.\nINTRODUCTION\nArgumentation frameworks for multi-agent systems can be used for different purposes like joint deliberation, persuasion, negotiation, and conflict resolution.\nIn this paper we will present an argumentation framework for learning agents, and show that it can be used for two purposes: (1) joint deliberation, and (2) learning from communication.\nArgumentation-based joint deliberation involves discussion over the outcome of a particular situation or the appropriate course of action for a particular situation.\nLearning agents are capable of learning from experience, in the sense that past examples (situations and their outcomes) are used to predict the outcome for the situation at hand.\nHowever, since individual agents' experience may be limited, individual knowledge and prediction accuracy are also limited.\nThus, learning
agents that are capable of arguing their individual predictions with other agents may reach better prediction accuracy after such an argumentation process.\nMost existing argumentation frameworks for multi-agent systems are based on deductive logic or some other deductive logic formalism specifically designed to support argumentation, such as default logic [3].\nUsually, an argument is seen as a logical statement, while a counterargument is an argument offered in opposition to another argument [4, 13]; agents use a preference relation to resolve conflicting arguments.\nHowever, logic-based argumentation frameworks assume agents with preloaded knowledge and preference relations.\nIn this paper, we focus on an Argumentation-based Multi-Agent Learning (AMAL) framework where both knowledge and the preference relation are learned from experience.\nThus, we consider a scenario with agents that (1) work in the same domain using a shared ontology, (2) are capable of learning from examples, and (3) communicate using an argumentative framework.\nHaving learning capabilities allows agents to effectively use a specific form of counterargument, namely counterexamples.\nCounterexamples offer the possibility of agents learning during the argumentation process.\nMoreover, learning agents allow techniques that use learnt experience to generate adequate arguments and counterarguments.\nSpecifically, we will need to address two issues: (1) how to define a technique to generate arguments and counterarguments from examples, and (2) how to define a preference relation over two conflicting arguments that have been induced from examples.\nThis paper presents a case-based approach to address both issues.\nThe agents use case-based reasoning (CBR) [1] to learn from past cases (where a case is a situation and its outcome) in order to predict the outcome of a new situation.\nWe propose an argumentation protocol inside the AMAL framework that supports agents in reaching a joint prediction
over a specific situation or problem; moreover, the reasoning needed to support the argumentation process will also be based on cases.\nIn particular, we present two case-based measures, one for generating the arguments and counterarguments adequate to a particular situation and another for determining the preference relation among arguments.\nFinally, we evaluate (1) whether argumentation between learning agents can produce a joint prediction that improves over individual learning performance, and (2) whether learning from the counterexamples conveyed during the argumentation process increases individual performance, using precisely those cases exchanged while arguing.\nThe paper is structured as follows.\nSection 2 discusses the relation among argumentation, collaboration and learning.\nThen Section 3 introduces our multi-agent CBR (MAC) framework and the notion of justified prediction.\nAfter that, Section 4 formally defines our argumentation framework.\nSections 5 and 6 present our case-based preference relation and argument generation policies respectively.\nLater, Section 7 presents the argumentation protocol in our AMAL framework.\nAfter that, Section 8 presents an exemplification of the argumentation framework.\nFinally, Section 9 presents an empirical evaluation of our two main hypotheses.\nThe paper closes with related work and conclusions sections.\n2.\nARGUMENTATION, COLLABORATION AND LEARNING\n3.\nMULTI-AGENT CBR SYSTEMS\n3.1 Justified Predictions\n4.\nARGUMENTS AND COUNTERARGUMENTS\n5.\nPREFERENCE RELATION\n6.\nGENERATION OF ARGUMENTS\n6.1 Generation of Counterarguments\n7.\nARGUMENTATION-BASED MULTI-AGENT LEARNING
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 979\n8.\nEXEMPLIFICATION\n9.\nEXPERIMENTAL EVALUATION\n10.\nRELATED WORK\nConcerning CBR in a multi-agent setting, the first research was on \"negotiated case retrieval' [11] among groups of agents.\nOur work on multi-agent case-based learning started in 1999 [6]; later Mc Ginty and Smyth [7] presented a multi-agent collaborative CBR approach (CCBR) for planning.\nFinally, another interesting approach is multi-case-base reasoning (MCBR) [5], that deals with\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 981\nFigure 6: Learning from communication resulting from argumentation in a system composed of 5 agents.\ndistributed systems where there are several case bases available for the same task and addresses the problems of cross-case base adaptation.\nThe main difference is that our MAC approach is a way to distribute the Reuse process of CBR (using a voting system) while Retrieve is performed individually by each agent; the other multiagent CBR approaches, however, focus on distributing the Retrieve process.\nResearch on MAS argumentation focus on several issues like a) logics, protocols and languages that support argumentation, b) argument selection and c) argument interpretation.\nApproaches for logic and languages that support argumentation include defeasible logic [4] and BDI models [13].\nAlthough argument selection is a key aspect of automated argumentation (see [12] and [13]), most research has been focused on preference relations among arguments.\nIn our framework we have addressed both argument selection and preference relations using a case-based approach.\n11.\nCONCLUSIONS AND FUTURE WORK\nIn this paper we have presented an argumentation-based framework for multi-agent learning.\nSpecifically, we have presented AMAL, a framework that allows a group of learning agents to argue about the solution of a given problem and we have shown how the 
learning capabilities can be used to generate arguments and counterarguments. The experimental evaluation shows that the increased amount of information provided to the agents by the argumentation process increases their predictive accuracy, especially when an adequate number of agents take part in the argumentation. The main contributions of this work are: a) an argumentation framework for learning agents; b) a case-based preference relation over arguments, based on computing an overall confidence estimation of arguments; c) a case-based policy to generate counterarguments and select counterexamples; and d) an argumentation-based approach for learning from communication. In the experiments presented here, a learning agent retains all counterexamples submitted by the other agents; however, this is a very simple case retention policy, and we would like to experiment with more informed policies, with the goal that individual learning agents could significantly improve using only a small set of cases proposed by other agents. Finally, our approach is focused on lazy learning, and future work aims at incorporating eager inductive learning inside the argumentative framework for learning from communication.

Learning and Joint Deliberation through Argumentation in Multi-Agent Systems

ABSTRACT

In this paper we will present an argumentation framework for learning
agents (AMAL) designed for two purposes: (1) joint deliberation, and (2) learning from communication. The AMAL framework is completely based on learning from examples: the argument preference relation, the argument generation policy, and the counterargument generation policy are all case-based techniques. For joint deliberation, learning agents share their experience by forming a committee to decide upon some joint decision. We experimentally show that argumentation among committees of agents improves both individual and joint performance. For learning from communication, an agent engages in arguing with other agents in order to contrast its individual hypotheses and receive counterexamples; the argumentation process improves the agents' learning scope and individual performance.

1. INTRODUCTION

Argumentation frameworks for multi-agent systems can be used for different purposes, such as joint deliberation, persuasion, negotiation, and conflict resolution. In this paper we will present an argumentation framework for learning agents, and show that it can be used for two purposes: (1) joint deliberation, and (2) learning from communication. Argumentation-based joint deliberation involves discussion over the outcome of a particular situation or the appropriate course of action for a particular situation. Learning agents are capable of learning from experience, in the sense that past examples (situations and their outcomes) are used to predict the outcome for the situation at hand. However, since an individual agent's experience may be limited, its individual knowledge and prediction accuracy are also limited. Thus, learning agents that are capable of arguing their individual predictions with other agents may reach better prediction accuracy after such an argumentation process.

Most existing argumentation frameworks for multi-agent systems are based on deductive logic, or on some other formalism specifically designed to support argumentation, such as default logic [3]. Usually, an argument is seen as a logical statement, while a counterargument is an argument offered in opposition to another argument [4, 13]; agents use a preference relation to resolve conflicting arguments. However, logic-based argumentation frameworks assume agents with preloaded knowledge and preference relations. In this paper, we focus on an Argumentation-based Multi-Agent Learning (AMAL) framework where both knowledge and the preference relation are learned from experience. Thus, we consider a scenario with agents that (1) work in the same domain using a shared ontology, (2) are capable of learning from examples, and (3) communicate using an argumentative framework. Learning capabilities allow agents to effectively use a specific form of counterargument, namely counterexamples. Counterexamples offer agents the possibility of learning during the argumentation process. Moreover, learning agents allow techniques that use learnt experience to generate adequate arguments and counterarguments. Specifically, we will need to address two issues: (1) how to define a technique to generate arguments and counterarguments from examples, and (2) how to define a preference relation over two conflicting arguments that have been induced from examples.

This paper presents a case-based approach to address both issues. The agents use case-based reasoning (CBR) [1] to learn from past cases (where a case is a situation together with its outcome) in order to predict the outcome of a new situation. We propose an argumentation protocol inside the AMAL framework that supports agents in reaching a joint prediction over a specific situation or problem; moreover, the reasoning needed to support the argumentation process will also be based on cases. In particular, we present two case-based measures: one for generating the arguments and counterarguments adequate to a particular situation, and another for determining the preference relation among arguments. Finally, we evaluate (1) whether argumentation between learning agents can produce a joint prediction that improves over individual learning performance, and (2) whether learning from the counterexamples conveyed during the argumentation process increases individual performance, using precisely those cases exchanged while arguing.

The paper is structured as follows. Section 2 discusses the relation among argumentation, collaboration and learning. Section 3 introduces our multi-agent CBR (MAC) framework and the notion of justified prediction. Section 4 formally defines our argumentation framework. Sections 5 and 6 present our case-based preference relation and argument generation policies, respectively. Section 7 presents the argumentation protocol in our AMAL framework. Section 8 presents an exemplification of the argumentation framework. Finally, Section 9 presents an empirical evaluation of our two main hypotheses. The paper closes with related work and conclusions sections.

2. ARGUMENTATION, COLLABORATION AND LEARNING

Both learning and collaboration are ways in which an agent can improve its individual performance. In fact, there is a clear parallelism between learning and collaboration in multi-agent systems, since both are ways in which agents can deal with their shortcomings. Let us review the main motivations an agent can have to learn or to collaborate.

• Motivations to learn:
  - increase quality of prediction,
  - increase efficiency,
  - increase the range of solvable problems.

• Motivations to collaborate:
  - increase quality of prediction,
  - increase efficiency,
  - increase the range of solvable problems,
  - increase the range of accessible resources.

Looking at the above lists of motivations, we can easily see that learning and collaboration are closely related in multi-agent systems. In fact, with the exception of the last item in the motivations to collaborate, they are two
extremes of a continuum of strategies to improve performance. An agent may choose to increase performance by learning, by collaborating, or by finding an intermediate point that combines learning and collaboration. In this paper we will propose AMAL, an argumentation framework for learning agents, and will also show how AMAL can be used both for learning from communication and for solving problems in a collaborative way:

• Agents can solve problems collaboratively by engaging in an argumentation process about the prediction for the situation at hand. Using this collaboration, the prediction can be made in a more informed way, since the information known by several agents has been taken into account.

• Agents can also learn from communication with other agents by engaging in an argumentation process. Agents that engage in such argumentation processes can learn from the arguments and counterexamples received from other agents, and use this information for predicting the outcomes of future situations.

In the rest of this paper we will propose an argumentation framework and show how it can be used both for learning and for solving problems in a collaborative way.

3. MULTI-AGENT CBR SYSTEMS

A Multi-Agent Case-Based Reasoning System (MAC) M = {(A1, C1), ..., (An, Cn)} is a multi-agent system composed of A = {A1, ..., An}, a set of CBR agents, where each agent Ai ∈ A possesses an individual case base Ci. Each individual agent Ai in a MAC is completely autonomous, and each agent Ai has access only to its individual and private case base Ci. A case base Ci = {c1, ..., cm} is a collection of cases. Agents in a MAC system are able to individually solve problems, but they can also collaborate with other agents to solve problems. In this framework, we will restrict ourselves to analytical tasks, i.e.
tasks like classification, where the solution of a problem is achieved by selecting a solution class from an enumerated set of solution classes. In the following we will denote the set of all the solution classes by S = {S1, ..., SK}. Therefore, a case c = (P, S) is a tuple containing a case description P and a solution class S ∈ S. In the following, we will use the terms problem and case description interchangeably. Moreover, we will use dot notation to refer to elements inside a tuple; e.g., to refer to the solution class of a case c, we will write c.S. We say that a group of agents performs joint deliberation when they collaborate to find a joint solution by means of an argumentation process. However, in order to do so, an agent has to be able to justify its prediction to the other agents (i.e. generate an argument for its predicted solution that can be examined and critiqued by the other agents). The next section addresses this issue.

3.1 Justified Predictions

Both expert systems and CBR systems may have an explanation component [14] in charge of justifying why the system has provided a specific answer to the user. The line of reasoning of the system can then be examined by a human expert, thus increasing the reliability of the system. Most of the existing work on explanation generation focuses on generating explanations to be provided to the user. However, in our approach we use explanations (or justifications) as a tool for improving communication and coordination among agents. We are interested in justifications since they can be used as arguments. For that purpose, we will benefit from the ability of some machine learning methods to provide justifications. A justification built by a CBR method after determining that the solution of a particular problem P was Sk is a description that contains the relevant information from the problem P that the CBR method has considered to predict Sk as the solution of P. In particular, CBR methods
work by retrieving cases similar to the problem at hand, and then reusing their solutions for the current problem, expecting that since the problem and the cases are similar, the solutions will also be similar. Thus, if a CBR method has retrieved a set of cases C1, ..., Cn to solve a particular problem P, the justification built will contain the relevant information from the problem P that made the CBR system retrieve that particular set of cases, i.e. it will contain the relevant information that P and C1, ..., Cn have in common.

For example, Figure 1 shows a justification built by a CBR system for a toy problem (in the following sections we will show justifications for real problems). In the figure, a problem has two attributes (Traffic_light and Cars_passing); the retrieval mechanism of the CBR system notices that by considering only the attribute Traffic_light, it can retrieve two cases that predict the same solution: wait. Thus, since only this attribute has been used, it is the only one appearing in the justification. The values of the rest of the attributes are irrelevant, since whatever their values, the solution class would have been the same.

Figure 1: An example of justification generation in a CBR system. Notice that, since the only relevant feature to decide is Traffic_light (the only one used to retrieve cases), it is the only one appearing in the justification.

In general, the meaning of a justification is that all (or most of) the cases in the case base of an agent that satisfy the justification (i.e.
all the cases that are subsumed by the justification) belong to the predicted solution class. In the rest of the paper, we will use ⊑ to denote the subsumption relation. In our work, we use LID [2], a CBR method capable of building symbolic justifications such as the one exemplified in Figure 1. When an agent provides a justification for a prediction, the agent generates a justified prediction: a tuple (A, P, S, D) stating that agent A considers S to be the correct solution for problem P, as justified by the description D.

Justifications can have many uses for CBR systems [8, 9]. In this paper, we are going to use justifications as arguments, in order to allow learning agents to engage in argumentation processes.

4. ARGUMENTS AND COUNTERARGUMENTS

For our purposes, an argument α generated by an agent A is composed of a statement S and some evidence D supporting S as correct. In the remainder of this section we will see how this general definition of argument can be instantiated as the specific kinds of arguments that the agents can generate. In the context of MAC systems, agents argue about predictions for new problems and can provide two kinds of information: a) specific cases (P, S), and b) justified predictions (A, P, S, D). Using this information, we can define three types of arguments: justified predictions, counterarguments, and counterexamples.

A justified prediction α is generated by an agent Ai to argue that Ai believes the correct solution for a given problem P is α.S, and the evidence provided is the justification α.D. In the example depicted in Figure 1, an agent Ai may generate the argument α = (Ai, P, Wait, (Traffic_light = red)), meaning that agent Ai believes that the correct solution for P is Wait because the attribute Traffic_light equals red.

A counterargument β is an argument offered in opposition to another argument α. In our framework, a counterargument consists of a justified prediction (Aj, P, S', D') generated by an agent Aj with the intention to rebut an argument α generated by another agent Ai, that endorses a solution class S' different from α.S for the problem at hand, and justifies this with a justification D'. In the example in Figure 1, if an agent generates the argument α = (Ai, P, Walk, (Cars_passing = no)), an agent that thinks that the correct solution is Wait might answer with the counterargument β = (Aj, P, Wait, (Cars_passing = no ∧ Traffic_light = red)), meaning that, although there are no cars passing, the traffic light is red, and the street cannot be crossed.

A counterexample c is a case that contradicts an argument α. Thus a counterexample is also a counterargument, one stating that a specific argument α is not always true, and the evidence provided is the case c itself. Specifically, for a case c to be a counterexample of an argument α, the following conditions have to be met: α.D ⊑ c.P and α.S ≠ c.S, i.e. the case description must satisfy the justification α.D and the solution of c must be different from the one predicted by α.

By exchanging arguments and counterarguments (including counterexamples), agents can argue about the correct solution of a given problem, i.e.
they can engage in a joint deliberation process. However, in order to do so, they need a specific interaction protocol, a preference relation between contradicting arguments, and a decision policy to generate counterarguments (including counterexamples). In the following sections we will present these elements.

5. PREFERENCE RELATION

A specific argument provided by an agent might not be consistent with the information known to other agents (or even with some of the information known by the agent that generated the justification, due to noise in the training data). For that reason, we define a preference relation over contradicting justified predictions based on cases. Basically, we define a confidence measure for each justified prediction (that takes into account the cases owned by each agent), and the justified prediction with the highest confidence is preferred. The idea behind case-based confidence is to count how many of the cases in an individual case base endorse a justified prediction, and how many of them are counterexamples of it. The more endorsing cases, the higher the confidence; the more counterexamples, the lower the confidence. Specifically, to assess the confidence of a justified prediction α, an agent obtains the set of cases in its individual case base that are subsumed by α.D. With them, an agent Ai obtains the aye (Y) and nay (N) values:

• Y_α^Ai = |{c ∈ Ci | α.D ⊑ c.P ∧ α.S = c.S}| is the number of cases in the agent's case base subsumed by the justification α.D that belong to the solution class α.S,

• N_α^Ai = |{c ∈ Ci | α.D ⊑ c.P ∧ α.S ≠ c.S}| is the number of cases in the agent's case base subsumed by the justification α.D that do not belong to that solution class.

Figure 2: Confidence of arguments is evaluated by contrasting them against the case bases of the agents.

An agent estimates the confidence of an argument as:

C_Ai(α) = Y_α^Ai / (1 + Y_α^Ai + N_α^Ai)

i.e. the confidence in a justified prediction is the number of endorsing cases divided by one plus the number of endorsing cases plus counterexamples. Notice that we add 1 to the denominator to avoid giving excessively high confidence to justified predictions whose confidence has been computed using a small number of cases; this correction follows the same idea as the Laplace correction for estimating probabilities. Figure 2 illustrates the individual evaluation of the confidence of an argument: three endorsing cases and one counterexample are found in the case base of agent Ai, giving an estimated confidence of 3/(1 + 3 + 1) = 0.6. Moreover, we can also define the joint confidence of an argument α as the confidence computed using the cases present in the case bases of all the agents in the group:

C(α) = Σi Y_α^Ai / (1 + Σi (Y_α^Ai + N_α^Ai))

Notice that, to collaboratively compute the joint confidence, the agents only have to make public the locally computed aye and nay values for a given argument. In our framework, agents use this joint confidence as the preference relation: a justified prediction α is preferred over another one β if C(α) ≥ C(β).

6. GENERATION OF ARGUMENTS

In our framework, arguments are generated by the agents from cases, using learning methods. Any learning method able to provide a justified prediction can be used to generate arguments; for instance, decision trees and LID [2] are suitable learning methods. Specifically, in the experiments reported in this paper, agents use LID. Thus, when an agent wants to generate an argument endorsing that a specific solution class is the correct solution for a problem P, it generates a justified prediction as explained in Section 3.1. For
instance, Figure 3 shows a real justification generated by LID after solving a problem P in the domain of marine sponges identification.\nIn particular, Figure 3 shows how when an agent receives a new problem to solve (in this case, a new sponge to determine its order), the agent uses LID to generate an argument (consisting on a justified prediction) using the cases in the case base of the agent.\nThe justification shown in Figure 3 can be interpreted saying that \"the predicted solution is hadromerida because the smooth form of the megascleres of the spiculate skeleton of the sponge is of type tylostyle, the spikulate skeleton of the sponge has no uniform length, and there is no gemmules in the external features of the sponge\".\nThus, the argument generated will be \u03b1 = (Ar, P, hadromerida, Dr).\n6.1 Generation of Counterarguments\nAs previously stated, agents may try to rebut arguments by generating counterargument or by finding counterexamples.\nLet us explain how they can be generated.\nAn agent Ai wants to generate a counterargument,3 to rebut an argument \u03b1 when \u03b1 is in contradiction with the local case base of Ai.\nMoreover, while generating such counterargument,3, Ai expects that,3 is preferred over \u03b1.\nFor that purpose, we will present a specific policy to generate counterarguments based on the specificity criterion [10].\nThe specificity criterion is widely used in deductive frameworks for argumentation, and states that between two conflicting arguments, the most specific should be preferred since it is, in principle, more informed.\nThus, counterarguments generated based on the specificity criterion are expected to be preferable (since they are more informed) to the arguments they try to rebut.\nHowever, there is no guarantee that such counterarguments will always win, since, as we have stated in Section 5, agents in our framework use a preference relation based on joint confidence.\nMoreover, one may think that it would be better that 
the agents generate counterarguments based on the joint confidence preference relation; however it is not obvious how to generate counterarguments based on joint confidence in an efficient way, since collaboration is required in order to evaluate joint confidence.\nThus, the agent generating the counterargument should constantly communicate with the other agents at each step of the induction algorithm used to generate counterarguments (presently one of our future research lines).\nThus, in our framework, when an agent wants to generate a counterargument,3 to an argument \u03b1,,3 has to be more specific than \u03b1 (i.e. \u03b1.D \u2741,3.\nD).\nThe generation of counterarguments using the specificity criterion imposes some restrictions over the learning method, although LID or ID3 can be easily adapted for this task.\nFor instance, LID is an algorithm that generates a description starting from scratch and heuristically adding features to that term.\nThus, at every step, the description is made more specific than in the previous step, and the number of cases that are subsumed by that description is reduced.\nWhen the description covers only (or almost only) cases of a single solution class LID terminates and predicts that solution class.\nTo generate a counterargument to an argument \u03b1 LID just has to use as starting point the description \u03b1.D instead of starting from scratch.\nIn this way, the justification provided by LID will always be subsumed by \u03b1.D, and thus the resulting counterargument will be more specific than \u03b1.\nHowever, notice that LID may sometimes not be able to generate counterarguments, since LID may not be able to specialize the description \u03b1.D any further, or because the agent Ai has no case inCi that is subsumed by \u03b1.D.\nFigure 4 shows how an agent A2 that disagreed with the argument shown in Figure 3, generates a counterargument using LID.\nMoreover, Figure 4 shows the generation of a counterargument,3 r2 for the 
argument \u03b10r (in Figure 3), and this counterargument is a specialization of \u03b10r.\n978 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nFigure 3: Example of a real justification generated by LID in the marine sponges data set.\nSpecifically, in our experiments, when an agent Ai wants to rebut an argument \u03b1, it uses the following policy:\n1.\nAgent Ai uses LID to try to find a counterargument \u03b2 more specific than \u03b1; if found, \u03b2 is sent to the other agent as a counterargument of \u03b1.\n2.\nIf not found, then Ai searches for a counterexample c \u2208 Ci of \u03b1.\nIf a case c is found, then c is sent to the other agent as a counterexample of \u03b1.\n3.\nIf no counterexamples are found, then Ai cannot rebut the argument \u03b1.\n7.\nARGUMENTATION-BASED MULTI-AGENT LEARNING\nThe interaction protocol of AMAL allows a group of agents A1,..., An to deliberate about the correct solution of a problem P by means of an argumentation process.\nIf the argumentation process arrives at a consensual solution, the joint deliberation ends; otherwise a weighted vote is used to determine the joint solution.\nMoreover, AMAL also allows the agents to learn from the counterexamples received from other agents.\nThe AMAL protocol consists of a series of rounds.\nIn the initial round, each agent states its individual prediction for P.
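The three-step rebuttal policy above can be sketched in Python. This is a minimal illustration, not the paper's implementation: `lid_specialize` is a hypothetical, greatly simplified stand-in for running LID from the starting description \u03b1.D (the real system specializes feature terms), and a case base is reduced to a list of (feature-dict, solution) pairs.

```python
# Sketch of the Section 6.1 rebuttal policy. All helper names are
# hypothetical; descriptions and problems are flat feature dicts.

def covers(description, problem):
    """A description covers a problem if every feature it fixes
    matches the problem (a simplification of term subsumption)."""
    return all(problem.get(f) == v for f, v in description.items())

def lid_specialize(description, solution, case_base):
    """Toy LID step: add one feature shared by all local cases of a
    *different* solution class covered by `description`, yielding a
    strictly more specific description, or None if impossible."""
    covered = [(p, s) for p, s in case_base
               if covers(description, p) and s != solution]
    if not covered:
        return None
    p0, s0 = covered[0]
    for feature, value in p0.items():
        if feature not in description and all(
                p.get(feature) == value for p, _ in covered):
            return {**description, feature: value}, s0
    return None

def rebut(alpha_desc, alpha_solution, case_base):
    """Return ('counterargument', beta), ('counterexample', case), or None."""
    # 1. Try to build a counterargument more specific than alpha.
    result = lid_specialize(alpha_desc, alpha_solution, case_base)
    if result is not None:
        return ("counterargument", result)
    # 2. Otherwise look for a counterexample: a local case covered by
    #    alpha.D whose solution class differs from alpha's.
    for problem, solution in case_base:
        if covers(alpha_desc, problem) and solution != alpha_solution:
            return ("counterexample", (problem, solution))
    # 3. Nothing found: the argument cannot be rebutted.
    return None
```

For example, an agent holding cases of class Y that are covered by a rival's description `{"a": 1}` (class X) would first try step 1 and fall back to step 2 only when no shared specializing feature exists.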
Then, at each round an agent can try to rebut the prediction made by any of the other agents.\nThe protocol uses a token passing mechanism so that agents (one at a time) can send counterarguments or counterexamples if they disagree with the prediction made by any other agent.\nSpecifically, each agent is allowed to send one counterargument or counterexample each time it gets the token (notice that this restriction is just to simplify the protocol; it does not restrict the number of counterarguments an agent can send, since they can be delayed for subsequent rounds).\nWhen an agent receives a counterargument or counterexample, it informs the other agents whether it accepts the counterargument (and changes its prediction) or not.\nMoreover, agents also have the opportunity to answer counterarguments when they receive the token, by trying to generate a counterargument to the counterargument.\nWhen all the agents have had the token once, the token returns to the first agent, and so on.\nIf at any time in the protocol all the agents agree, or if during the last n rounds no agent has generated any counterargument, the protocol ends.\nMoreover, if at the end of the argumentation the agents have not reached an agreement, then a voting mechanism that uses the confidence of each prediction as weights is used to decide the final solution.\n(Thus, AMAL follows the same mechanism as human committees: first, each individual member of a committee exposes his arguments and discusses those of the other members (joint deliberation), and if no consensus is reached, then a voting mechanism is required.)\nAt each iteration, agents can use the following performatives:\n\u2022 assert (\u03b1): the justified prediction held during the next round will be \u03b1.\nAn agent can only hold a single prediction at each round; thus, if multiple asserts are sent, only the last one is considered as the currently held prediction.\n\u2022 rebut (\u03b2, \u03b1): the agent has found a counterargument
\u03b2 to the prediction \u03b1.\nWe will define Ht = (\u03b1t1,..., \u03b1tn) as the predictions that each of the n agents holds at a round t. Moreover, we will also define contradict (\u03b1ti) = {\u03b1 \u2208 Ht | \u03b1.S \u2260 \u03b1ti.S} as the set of contradicting arguments for an agent Ai in a round t, i.e. the set of arguments at round t that support a different solution class than \u03b1ti.\nThe protocol is initiated because one of the agents receives a problem P to be solved.\nAfter that, the agent informs all the other agents about the problem P to solve, and the protocol starts:\n1.\nAt round t = 0, each one of the agents individually solves P, and builds a justified prediction using its own CBR method.\nThen, each agent Ai sends the performative assert (\u03b10i) to the other agents.\nThus, the agents know H0 = (\u03b101,..., \u03b10n).\nOnce all the predictions have been sent, the token is given to the first agent A1.\n2.\nAt each round t (other than 0), the agents check whether their arguments in Ht agree.\nIf they do, the protocol moves to step 5.\nMoreover, if during the last n rounds no agent has sent any counterexample or counterargument, the protocol also moves to step 5.\nOtherwise, the agent Ai owner of the token tries to generate a counterargument for each of the opposing arguments in contradict (\u03b1ti) \u2286 Ht (see Section 6.1).\nThen, the counterargument \u03b2ti against the prediction \u03b1tj with the lowest confidence C (\u03b1tj) is selected (since \u03b1tj is the prediction most likely to be successfully rebutted).\n\u2022 If \u03b2ti is a counterargument, then Ai locally compares \u03b1ti with \u03b2ti by assessing their confidence against its individual case base Ci (see Section 5) (notice that Ai is comparing its previous argument with the counterargument that Ai itself has just generated and is about
to send to Aj).\nFigure 4: Generation of a counterargument using LID in the sponges data set.\nIf CAi (\u03b2ti) > CAi (\u03b1ti), then Ai considers that \u03b2ti is stronger than its previous argument, changes its argument to \u03b2ti by sending assert (\u03b2ti) to the rest of the agents (the intuition behind this is that, since a counterargument is also an argument, Ai checks whether the new counterargument is a better argument than the one it was previously holding), and sends rebut (\u03b2ti, \u03b1tj) to Aj.\nOtherwise, if CAj (\u03b2ti) > CAj (\u03b1tj), then Aj will accept the counterargument as stronger than its own argument, and it will send assert (\u03b2ti) to the other agents.\nOtherwise (i.e. CAj (\u03b2ti) \u2264 CAj (\u03b1tj)),\nFigure 2: (a) The toy graph consisting of six nodes, and node 1 is being manipulated by adding new nodes A, B, C, ...
(b) The approximation tendency to PageRank by DiffusionRank\n5.3 Experimental Set-up\nThe experiments on the middle-size graph and the large-size graphs are conducted on a workstation whose hardware model is Nix Dual Intel Xeon 2.2GHz with 1GB RAM and a Linux Kernel 2.4.18-27smp (RedHat 7.3).\nIn calculating DiffusionRank, we employ Eq.\n(6) and the discrete approximation of Eq.\n(4) for such graphs.\nThe related tasks are implemented in the C language.\nFor the toy graph, we employ the continuous diffusion kernel in Eq.\n(4) and Eq.\n(5), and implement the related tasks in Matlab.\nFor nodes that have zero out-degree (dangling nodes), we employ the method in the modified PageRank algorithm [8], in which dangling nodes are considered to have random links uniformly to each node.\nWe set \u03b1 = \u03b1I = \u03b1B = 0.85 in all algorithms.\nWe also set g to be the uniform distribution in both PageRank and DiffusionRank.\nFor DiffusionRank, we set \u03b3 = 1.\nAccording to the discussions in Section 4.3 and Section 4.4, we set the iteration number to be MB = 100 in DiffusionRank, and for accuracy considerations, the iteration number in all the algorithms is set to 100.\n5.4 Approximation of PageRank\nWe show that when \u03b3 tends to infinity, the value differences between DiffusionRank and PageRank tend to zero.\nFig. 2 (b) shows the approximation property of DiffusionRank, as proved in Theorem 1, on the toy graph.\nThe horizontal axis of Fig.
2 (b) marks the \u03b3 value, and the vertical axis corresponds to the value difference between DiffusionRank and PageRank.\nAll the possible trusted sets with L = 1 are considered.\nFor L > 1, the results should be linear combinations of some of these curves because of the linearity of the solutions to heat equations.\nOn other graphs, the situations are similar.\n5.5 Results of Anti-manipulation\nIn this section, we show how the rank values change as the intensity of manipulation increases.\nWe measure the intensity of manipulation by the number of newly added points that point to the manipulated point.\nThe horizontal axes of Fig. 3 stand for the numbers of newly added points, and the vertical axes show the corresponding rank values of the manipulated nodes.\nTo be clear, we consider all six situations.\nFigure 3: The rank values of the manipulated nodes on the toy graph\nEvery node in Fig. 2 (a) is manipulated respectively, and its
corresponding values for PageRank, TrustRank (TR), and DiffusionRank (DR) are shown in one of the six sub-figures in Fig. 3.\nFigure 4: (a) The rank values of the manipulated nodes on the middle-size graph; (b) The rank values of the manipulated nodes on the large-size graph\nThe vertical-axis labels show which node is being manipulated.\nIn each sub-figure, the trusted sets are computed as follows.\nThe inverse PageRank yields the values [1.26, 0.85, 1.31, 1.36, 0.51, 0.71].\nLet L = 1.\nIf the manipulated node is not 4, then the trusted set is {4}; otherwise it is {3}.\nWe observe that in all the cases, the rank values of the manipulated node for DiffusionRank grow the slowest as the number of newly added nodes increases.\nOn the middle-size graph and the large-size graph, this conclusion is also true; see Fig. 4.\nNote that, in Fig. 4 (a), we choose four trusted sets (L = 1), on which we test DiffusionRank and TrustRank; the results are denoted by DiffusionRanki and TrustRanki (i = 0, 1, 2, 3 denotes the four trusted sets); in Fig. 4 (b), we choose one trusted set (L = 1).\nMoreover, in both Fig. 4 (a) and Fig.
4 (b), we show the results for DiffusionRank when we have no trusted set and trust all the pages before some of them are manipulated.\nWe also test the order difference between the ranking order A before the page is manipulated and the ranking order A' after the page is manipulated.\nBecause the number of pages changes after manipulation, we only compare the common part of A and A'.\nThis experiment is used to test the stability of all these algorithms.\nThe smaller the order difference, the more stable the algorithm, in the sense that only a smaller part of the order relations is affected by the manipulation.\nFigure 5 (a) shows how the order difference values change when we add new nodes that point to the manipulated node.\nWe give several \u03b3 settings.\nWe find that when \u03b3 = 1, the smallest order difference is achieved by DiffusionRank.\nIt is interesting to point out that as \u03b3 increases, the order difference first increases; after reaching a maximum value, it decreases and finally tends to the PageRank result.\nWe show this tendency in Fig.
5 (b), in which we choose three different settings where the numbers of newly added nodes are 2,000, 5,000, and 10,000 respectively.\nFrom these figures, we can see that when \u03b3 < 2, the values are less than those for PageRank, and that when \u03b3 > 20, the difference between PageRank and DiffusionRank is very small.\nAfter these investigations, we find that in all the graphs we tested, DiffusionRank (when \u03b3 = 1) is most robust to manipulation, both in value difference and in order difference.\nThe trust set selection algorithm proposed in [7] is effective for both TrustRank and DiffusionRank.\nFigure 5: (a) Pairwise order difference on the middle-size graph; the smaller it is, the more stable the algorithm; (b) The tendency of varying \u03b3\n6.\nCONCLUSIONS\nWe conclude that DiffusionRank is a generalization of PageRank, which is interesting in that the heat diffusion coefficient \u03b3 can balance the extent to which we want to model the original Web graph and the extent to which we want to reduce the effect of link manipulations.\nThe experimental results show that we can actually achieve such a balance by setting \u03b3 = 1, although the best setting, including varying \u03b3i, is still under further investigation.\nThis anti-manipulation feature makes DiffusionRank a candidate penicillin for Web spamming.\nMoreover, DiffusionRank can be employed to find group-group relations and to partition the Web graph into small communities.\nAll
these advantages can be achieved in the same computational complexity as PageRank.\nFor the special application of anti-manipulation, DiffusionRank performs best both in reduction effects and in its stability among all three algorithms.\n7.\nACKNOWLEDGMENTS\nWe thank Patrick Lau, Zhenjiang Lin and Zenglin Xu for their help.\nThis work is fully supported by two grants from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No.\nCUHK4205\/04E and Project No.\nCUHK4235\/04E).\n8.\nREFERENCES\n[1] E. Agichtein, E. Brill, and S. T. Dumais.\nImproving web search ranking by incorporating user behavior information.\nIn E. N. Efthimiadis, S. T. Dumais, D. Hawking, and K. J\u00e4rvelin, editors, Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pages 19-26, 2006.\n[2] R. A. Baeza-Yates, P. Boldi, and C. Castillo.\nGeneralizing pagerank: damping functions for link-based ranking algorithms.\nIn E. N. Efthimiadis, S. T. Dumais, D. Hawking, and K. J\u00e4rvelin, editors, Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), pages 308-315, 2006.\n[3] M. Belkin and P. Niyogi.\nLaplacian eigenmaps for dimensionality reduction and data representation.\nNeural Computation, 15(6):1373-1396, Jun 2003.\n[4] B. Bollob\u00e1s.\nRandom Graphs.\nAcademic Press Inc. (London), 1985.\n[5] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender.\nLearning to rank using gradient descent.\nIn Proceedings of the 22nd international conference on Machine learning (ICML), pages 89-96, 2005.\n[6] N. Eiron, K. S. McCurley, and J. A. Tomlin.\nRanking the web frontier.\nIn Proceedings of the 13th World Wide Web Conference (WWW), pages 309-318, 2004.\n[7] Z. Gy\u00f6ngyi, H. Garcia-Molina, and J. Pedersen.\nCombating web spam with trustrank.\nIn M. A. Nascimento, M. T. \u00d6zsu, D.
Kossmann, R. J. Miller, J. A. Blakeley, and K. B. Schiefer, editors, Proceedings of the Thirtieth International Conference on Very Large Data Bases (VLDB), pages 576-587, 2004.\n[8] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and G. H. Golub.\nExploiting the block structure of the web for computing pagerank.\nTechnical report, Stanford University, 2003.\n[9] R. I. Kondor and J. D. Lafferty.\nDiffusion kernels on graphs and other discrete input spaces.\nIn C. Sammut and A. G. Hoffmann, editors, Proceedings of the Nineteenth International Conference on Machine Learning (ICML), pages 315-322, 2002.\n[10] J. Lafferty and G. Lebanon.\nDiffusion kernels on statistical manifolds.\nJournal of Machine Learning Research, 6:129-163, Jan 2005.\n[11] C. R. MacCluer.\nThe many proofs and applications of Perron's theorem.\nSIAM Review, 42(3):487-498, 2000.\n[12] A. Ntoulas, M. Najork, M. Manasse, and D. Fetterly.\nDetecting spam web pages through content analysis.\nIn Proceedings of the 15th international conference on World Wide Web (WWW), pages 83-92, 2006.\n[13] L. Page, S. Brin, R. Motwani, and T. Winograd.\nThe pagerank citation ranking: Bringing order to the web.\nTechnical Report Paper SIDL-WP-1999-0120 (version of 11\/11\/1999), Stanford Digital Library Technologies Project, 1999.\n[14] H. Yang, I. King, and M. R. Lyu.\nNHDC and PHDC: Non-propagating and propagating heat diffusion classifiers.\nIn Proceedings of the 12th International Conference on Neural Information Processing (ICONIP), pages 394-399, 2005.\n[15] H. Yang, I. King, and M. R. Lyu.\nPredictive ranking: a novel page ranking approach by estimating the web structure.\nIn Proceedings of the 14th international conference on World Wide Web (WWW) - Special interest tracks and posters, pages 944-945, 2005.\n[16] H. Yang, I. King, and M. R. Lyu.\nPredictive random graph ranking on the web.\nIn Proceedings of the IEEE World Congress on Computational Intelligence (WCCI), pages 3491-3498, 2006.\n[17] D. Zhou, J.
Weston, A. Gretton, O. Bousquet, and B. Sch\u00f6lkopf.\nRanking on data manifolds.\nIn S. Thrun, L. Saul, and B. Sch\u00f6lkopf, editors, Advances in Neural Information Processing Systems 16 (NIPS 2003), 2004.\nDiffusionRank: A Possible Penicillin for Web Spamming\nABSTRACT\nWhile the PageRank algorithm has proven to be very effective for ranking Web pages, the rank scores of Web pages can be manipulated.\nTo handle the manipulation problem and to cast new insight on the Web structure, we propose a ranking algorithm called DiffusionRank.\nDiffusionRank is motivated by the heat diffusion phenomenon, which can be connected to Web ranking because the activity flow on the Web can be imagined as heat flow, the link from one page to another can be treated as the pipe of an air-conditioner, and heat flow can embody the structure of the underlying Web graph.\nTheoretically, we show that DiffusionRank can serve as a generalization of PageRank when the heat diffusion coefficient \u03b3 tends to infinity.\nIn such a case (1\/\u03b3 = 0), DiffusionRank (PageRank) has low anti-manipulation ability.\nWhen \u03b3 = 0, DiffusionRank obtains the highest anti-manipulation ability, but in such a case the web structure is completely ignored.\nConsequently, \u03b3 is an interesting factor that can control the balance between the ability of preserving the original Web and the ability of reducing the effect of manipulation.\nIt is found empirically that, when \u03b3 = 1, DiffusionRank has a Penicillin-like effect on link manipulation.\nMoreover, DiffusionRank can be employed to find group-to-group relations on the Web, to divide the Web graph into several parts, and to find link communities.\nExperimental results show that the DiffusionRank algorithm achieves the above-mentioned advantages as expected.\n1.\nINTRODUCTION\nWhile the PageRank algorithm [13] has proven to be very effective for ranking Web pages, inaccurate PageRank results are induced because of web page
manipulations by people for commercial interests.\nThe manipulation problem is also called Web spam, which refers to hyperlinked pages on the World Wide Web that are created with the intention of misleading search engines [7].\nIt is reported that approximately 70% of all pages in the .biz domain and about 35% of the pages in the .us domain belong to the spam category [12].\nThe reason for the increasing amount of Web spam is explained in [12]: some web site operators try to influence the positioning of their pages within search results because of the large fraction of web traffic originating from searches and the high potential monetary value of this traffic.\nFrom the viewpoint of the Web site operators who want to increase the ranking value of a particular page for search engines, Keyword Stuffing and Link Stuffing are widely used [7, 12].\nFrom the viewpoint of the search engine managers, Web spam is very harmful to users' evaluations and thus to their preference in choosing search engines, because people believe that a good search engine should not return irrelevant or low-quality results.\nThere are two methods being employed to combat the Web spam problem.\nMachine learning methods are employed to handle keyword stuffing.\nTo successfully apply machine learning methods, we need to dig out some useful textual features for Web pages, to mark part of the Web pages as either spam or non-spam, and then to apply supervised learning techniques to mark other pages.\nFor example, see [5, 12].\nLink analysis methods are also employed to handle the link stuffing problem.\nOne example is TrustRank [7], a link-based method in which the link structure is utilized so that human-labelled trusted pages can propagate their trust scores through their links.\nThis paper focuses on the link-based method.\nThe rest of the paper is organized as follows.\nIn the next section, we give a brief literature review on various related ranking techniques.\nWe
establish the Heat Diffusion Model (HDM) on various cases in Section 3, and propose DiffusionRank in Section 4.\nIn Section 5, we describe the data sets that we worked on and the experimental results.\nFinally, we draw conclusions in Section 6.\n2.\nLITERATURE REVIEW\nThe importance of a Web page is determined by either the textual
content of pages or the hyperlink structure or both.\nAs in previous work [7, 13], we focus on ranking methods solely determined by the hyperlink structure of the Web graph.\nAll the mentioned ranking algorithms are established on a graph.\nFor convenience, we first give some notations.\nDenote a static graph by G = (V, E), where V = {v1, v2,..., vn} and E = {(vi, vj) | there is an edge from vi to vj}.\nIi and di denote the in-degree and the out-degree of page i respectively.\n2.1 PageRank\nThe importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes [13].\nHowever, the average importance of all readers can be considered as an objective matter.\nPageRank tries to find such average importance based on the Web link structure, which is considered to contain a large amount of statistical data.\nThe Web is modelled by a directed graph G in the PageRank algorithms, and the rank or \"importance\" xi for page vi \u2208 V is defined recursively in terms of the pages which point to it: xi = \u03a3 (j, i) \u2208 E aij xj, where aij is assumed to be 1\/dj if there is a link from j to i, and 0 otherwise.\nOr in matrix terms, x = Ax.\nWhen the concept of a \"random jump\" is introduced, the matrix form is changed to x = [(1 \u2212 \u03b1) g1T + \u03b1A] x, (1) where \u03b1 is the probability of following an actual link from a page, (1 \u2212 \u03b1) is the probability of taking a \"random jump\", and g is a stochastic vector, i.e., 1T g = 1.\nTypically, \u03b1 = 0.85, and g = (1\/n) 1 is one of the standard settings, where 1 is the vector of all ones [6, 13].\n2.2 TrustRank\nTrustRank [7] is composed of two parts.\nThe first part is the seed selection algorithm, in which the inverse PageRank was proposed to help an expert determine good nodes.\nThe second part is to utilize the biased PageRank, in which the stochastic distribution g is set to be shared by all the trusted pages found in the first part.\nMoreover, the initial input of x is also set to be g.\nThe
justification for the inverse PageRank and the solid experiments support its advantage in combating the Web spam.\nAlthough there are many variations of PageRank, e.g., a family of link-based ranking algorithms in [2], TrustRank is especially chosen for comparisons for three reasons: (1) it is designed for combating spamming; (2) its fixed parameters make a comparison easy; and (3) it has strong theoretical relations with PageRank and DiffusionRank.\n2.3 Manifold Ranking\nIn [17], the idea of ranking on the data manifolds was proposed.\nThe data points, represented as vectors in Euclidean space, are considered to be drawn from a manifold.\nFrom the data points on such a manifold, an undirected weighted graph is created, and the weight matrix is given by Gaussian kernel smoothing.\nWhile the manifold ranking algorithm achieves an impressive result on ranking images, the biased vector g and the parameter k in the general personalized PageRank in [17] are unknown in the Web graph setting; therefore we do not include it in the comparisons.\n2.4 Heat Diffusion\nHeat diffusion is a physical phenomenon.\nIn a medium, heat always flows from positions with high temperature to positions with low temperature.\nThe heat kernel is used to describe the amount of heat that one point receives from another point.\nRecently, the idea of the heat kernel on a manifold has been borrowed in applications such as dimension reduction [3] and classification [9, 10, 14].\nIn these works, the input data is considered to lie in a special structure.\nAll the above topics are related to our work.\nThe readers can find that our model is a generalization of PageRank designed to resist Web manipulation, that we inherit the first part of TrustRank, that we borrow the concept of ranking on a manifold to introduce our model, and that heat diffusion is the main scheme in this paper.\n3.\nHEAT DIFFUSION MODEL\nHeat diffusion provides us with another perspective about how we can view the Web and also a way to
calculate ranking values.\nIn this paper, the Web pages are considered to be drawn from an unknown manifold, and the link structure forms a directed graph, which is considered as an approximation to the unknown manifold.\nThe heat kernel established on the Web graph is considered as the representation of the relationships between Web pages.\nThe temperature distribution after a fixed time period, induced by a special initial temperature distribution, is considered as the rank scores on the Web pages.\nBefore establishing the proposed models, we first show our motivations.\n3.1 Motivations\nThere are two points that explain why PageRank is susceptible to web spam.\n\u2022 Over-democratic.\nThere is a belief behind PageRank--all pages are born equal.\nThis can be seen from the equal voting ability of each page: the sum of each column is equal to one.\nThis equal voting ability of all pages gives a Web site operator the chance to boost a manipulated page by creating a large number of new pages pointing to it, since all the newly created pages obtain an equal voting right.\n\u2022 Input-independent.\nFor any given non-zero initial input, the iteration will converge to the same stable distribution corresponding to the maximum eigenvalue 1 of the transition matrix.\nThis input-independent property makes it impossible to set a special initial input (larger values for trusted pages and smaller, even negative, values for spam pages) to avoid web spam.\nThe input-independent feature of PageRank can be further explained as follows.\nP = [(1 \u2212 \u03b1) g1T + \u03b1A] is a positive stochastic matrix if g is set to be a positive stochastic vector (the uniform distribution is one such setting), so the largest eigenvalue is 1 and no other eigenvalue has an absolute value equal to 1, which is guaranteed by the Perron Theorem [11].\nLet y be the eigenvector corresponding to 1; then we have Py = y.
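The input-independent property can be checked numerically. The following is a minimal sketch (not from the paper) on a hypothetical 4-node graph with alpha = 0.85 and uniform g: two different non-zero initial inputs converge to the same stable distribution.

```python
import numpy as np

# Hypothetical 4-node column-stochastic adjacency matrix A (a_ij = 1/d_j)
A = np.array([
    [0.0, 0.5, 0.0, 1.0],
    [1.0, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.0],
])
alpha, n = 0.85, 4
g = np.ones(n) / n                       # uniform teleportation vector
P = (1 - alpha) * np.outer(g, np.ones(n)) + alpha * A  # positive stochastic matrix

def pagerank(x0, iters=200):
    x = x0 / x0.sum()                    # normalize the initial input
    for _ in range(iters):
        x = P @ x                        # power iteration: x_{k+1} = P x_k
    return x

# Two different non-zero initial inputs yield the same stable distribution
x_a = pagerank(np.array([1.0, 0.0, 0.0, 0.0]))
x_b = pagerank(np.array([0.1, 0.2, 0.3, 0.4]))
print(np.allclose(x_a, x_b))             # -> True
```

By contrast, an input-dependent scheme such as the heat diffusion model proposed below produces different rankings for different initial distributions f (0), which is exactly the freedom that allows trusted pages to matter.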
Let {xk} be the sequence generated from the iterations xk +1 = Pxk, where x0 is the initial input.\nIf {xk} converges to x, then xk +1 = Pxk implies that x must satisfy Px = x.\nSince the only maximum eigenvalue is 1, we have x = cy where c is a constant, and if both x and y are normalized by their sums, then c = 1.\nThe above discussion shows that PageRank is independent of the initial input x0.\nIn our opinion, g and \u03b1 are objective parameters determined by the users' behaviors and preferences.\nA, \u03b1 and g constitute the \"true\" web structure.\nWhile A is obtained by a crawler and the setting \u03b1 = 0.85 is widely accepted, we think that g should be determined by a user behavior investigation, something like [1].\nWithout any prior knowledge, g has to be set as g = (1\/n) 1.\nThe TrustRank model does not follow the \"true\" web structure in setting a biased g, but the effects of combatting spamming are achieved in [7]; PageRank is on the contrary in some ways.\nWe expect a ranking algorithm that has an anti-manipulation effect like TrustRank while respecting the \"true\" web structure like PageRank.\nWe observe that the heat diffusion model is a natural way to avoid the over-democratic and input-independent features of PageRank.\nSince heat always flows from a position with higher temperature to one with lower temperature, points are not equal, as some points are born with high temperatures while others are born with low temperatures.\nOn the other hand, different initial temperature distributions will give rise to different temperature distributions after a fixed time period.\nBased on these considerations, we propose the novel DiffusionRank.\nThis ranking algorithm is also motivated by a viewpoint on the Web structure.\nWe view all the Web pages as points drawn from a highly complex geometric structure, like a manifold in a high dimensional space.\nOn a manifold, heat can flow from one point to another through the underlying geometric structure in a given
time period.\nDifferent geometric structures determine different heat diffusion behaviors, and conversely the diffusion behavior can reflect the geometric structure.\nMore specifically, on the manifold, the heat flows from one point to another point, and in a given time period, if one point x receives a large amount of heat from another point y, we can say x and y are well connected, and thus x and y have a high similarity in the sense of a high mutual connection.\nWe note that on a point with unit mass, the temperature and the heat of this point are equivalent, and these two terms are interchangeable in this paper.\nIn the following, we first show the HDM on a manifold, which is the origin of HDM, but cannot be employed on the World Wide Web directly, and so is considered as the ideal case.\nTo connect the ideal case and the practical case, we then establish HDM on a graph as an intermediate case.\nTo model the real world problem, we further build HDM on a random graph as a practical case.\nFinally we demonstrate the DiffusionRank, which is derived from the HDM on a random graph.\n3.2 Heat Flow On a Known Manifold\nIf the underlying manifold is known, the heat flow throughout a geometric manifold with initial conditions can be described by the following second order differential equation: \u2202f (x, t) \/ \u2202t \u2212 \u0394f (x, t) = 0, where f (x, t) is the temperature at position x at time t, and \u0394f is the Laplace-Beltrami operator applied to the function f.\nThe heat diffusion kernel Kt (x, y) is a special solution to the heat equation with a special initial condition--a unit heat source at position y when there is no heat in other positions.\nBased on this, the heat kernel Kt (x, y) describes the heat distribution at time t diffusing from the initial unit heat source at position y, and thus describes the connectivity (which is considered as a kind of similarity) between x and y.
However, it is very difficult to represent the World Wide Web as a regular geometry with a known dimension; even if the underlying manifold is known, it is very difficult to find the heat kernel Kt (x, y), which involves solving the heat equation with the delta function as the initial condition.\nThis motivates us to investigate the heat flow on a graph.\nThe graph is considered as an approximation to the underlying manifold, and so the heat flow on the graph is considered as an approximation to the heat flow on the manifold.\n3.3 On an Undirected Graph\nOn an undirected graph G, the edge (vi, vj) is considered as a pipe that connects the nodes vi and vj.\nThe value fi (t) describes the heat at node vi at time t, beginning from an initial distribution of heat given by fi (0) at time zero.\nf (t) (f (0)) denotes the vector consisting of fi (t) (fi (0)).\nWe construct our model as follows.\nSuppose, at time t, each node i receives M (i, j, t, \u0394t) amount of heat from its neighbor j during a period of \u0394t.\nThe heat M (i, j, t, \u0394t) should be proportional to the time period \u0394t and the heat difference fj (t) \u2212 fi (t).\nMoreover, the heat flows from node j to node i through the pipe that connects nodes i and j.
Based on this consideration, we assume that M (i, j, t, \u0394t) = \u03b3 (fj (t) \u2212 fi (t)) \u0394t.\nAs a result, the heat difference at node i between time t + \u0394t and time t will be equal to the sum of the heat that it receives from all its neighbors.\nThis is formulated as fi (t + \u0394t) \u2212 fi (t) = \u03a3 j: (vj, vi) \u2208 E \u03b3 (fj (t) \u2212 fi (t)) \u0394t, (2) where E is the set of edges.\nTo find a closed form solution to Eq.\n(2), we express it in a matrix form: (f (t + \u0394t) \u2212 f (t)) \/ \u0394t = \u03b3Hf (t), where Hij = 1 if (vi, vj) \u2208 E, Hii = \u2212 d (vi), and Hij = 0 otherwise, and d (v) denotes the degree of the node v.\nIn the limit \u0394t \u2192 0, this becomes df (t) \/ dt = \u03b3Hf (t).\nSolving it, we obtain f (t) = e\u03b3tHf (0); in particular, we have f (1) = e\u03b3Hf (0).\n(3)\n3.4 On a Directed Graph\nThe above heat diffusion model must be modified to fit the situation where the links between Web pages are directed.\nOn a Web page, when the page-maker creates a link (a, b) to another page b, he actually forces the energy flow, for example, people's click-through activities, to that page, and so there is added energy imposed on the link.\nAs a result, heat flows in a one-way manner, only from a to b, not from b to a.
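Before moving to the directed case, the undirected-graph solution f (t) = e^{\u03b3 t H} f (0) can be sketched numerically. The example below (a hypothetical 4-node path graph, not from the paper) approximates the matrix exponential at t = 1 by the matrix power (I + (\u03b3\/N) H)^N with large N, the same idea as the discrete kernel introduced later.

```python
import numpy as np

# Hypothetical undirected path graph v0 - v1 - v2 - v3
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
W = np.zeros((n, n))
for i, j in edges:
    W[i, j] = W[j, i] = 1.0

# H: H_ij = 1 for neighbors, H_ii = -d(v_i), so that df/dt = gamma * H f
H = W - np.diag(W.sum(axis=1))

gamma, N = 1.0, 100
f0 = np.array([1.0, 0.0, 0.0, 0.0])   # unit heat source at node v0

# f(1) = e^{gamma H} f(0), approximated by (I + (gamma/N) H)^N f(0)
K = np.linalg.matrix_power(np.eye(n) + (gamma / N) * H, N)
f1 = K @ f0
print(f1)          # heat is highest at the source and decays along the path
print(f1.sum())    # total heat is conserved
```

Because the columns of (I + (gamma/N) H) each sum to one, the total heat in f (t) is conserved, which matches the physical picture of heat merely redistributing through the pipes.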
Based on such consideration, we modify the heat diffusion model on an undirected graph as follows.\nOn a directed graph G, the pipe (vi, vj) is forced by added energy such that heat flows only from vi to vj.\nSuppose, at time t, each node vi receives RH = RH (i, j, t, \u0394t) amount of heat from vj during a period of \u0394t.\nWe have three assumptions: (1) RH should be proportional to the time period \u0394t; (2) RH should be proportional to the heat at node vj; and (3) RH is zero if there is no link from vj to vi.\nAs a result, vi will receive \u03a3 j: (vj, vi) \u2208 E \u03c3j fj (t) \u0394t amount of heat from all the neighbors that point to it.\nOn the other hand, node vi diffuses DH (i, t, \u0394t) amount of heat to its subsequent nodes.\nWe assume that: (1) The heat DH (i, t, \u0394t) should be proportional to the time period \u0394t.\n(2) The heat DH (i, t, \u0394t) should be proportional to the heat at node vi.\n(3) Each node has the same ability of diffusing heat.\nThis fits the intuition that a Web surfer only has one choice of the next page that he wants to browse.\n(4) The heat DH (i, t, \u0394t) should be uniformly distributed to its subsequent nodes.\nThe real situation is more complex than what we assume, but we have to make these simple assumptions in order to make our model concise.\nAs a result, node vi will diffuse \u03b3fi (t) \u0394t amount of heat in total, and each of its subsequent nodes should receive \u03b3fi (t) \u0394t \/ di amount of heat.\nTherefore \u03c3j = \u03b3\/dj.\nTo sum up, the heat difference at node vi between time t + \u0394t and time t will be equal to the sum of the heat that it receives, deducted by what it diffuses.\nThis is formulated as fi (t + \u0394t) \u2212 fi (t) = \u2212 \u03b3fi (t) \u0394t + \u03a3 j: (vj, vi) \u2208 E (\u03b3\/dj) fj (t) \u0394t.\nSimilarly, we obtain f (1) = e\u03b3Hf (0), (4) where now Hii = \u2212 1, Hij = 1\/dj if (vj, vi) \u2208 E, and Hij = 0 otherwise.\n3.5 On a Random Directed Graph\nFor real world applications, we have to consider random edges.\nThis can be seen from two viewpoints.\nThe first one is that in Eq.\n(1), the Web graph is actually modelled as a random graph: there is an edge from node vi to node vj with a probability of (1 \u2212 \u03b1) gj (see the term (1 \u2212 \u03b1) g1T), and the Web graph is predicted by a random graph [15, 16].\nThe second one is that the Web structure is a random graph in essence if we consider the content similarity between two pages, though this is not done in this paper.\nFor these reasons, the model would become more flexible if we extend it to random graphs.\nThe definition of a random graph is given below.\nThe original definition of random graphs in [4] is changed slightly to consider the situation of directed graphs.\nNote that every static graph can be considered as a special random graph in the sense that pij can only be 0 or 1.\nOn a random graph RG = (V, P), P = (pij) gives the probability that the edge (vi, vj) exists.\nIn such a random graph, the expected heat difference at node i between time t + \u0394t and time t will be equal to the sum of the expected heat that it receives from all its antecedents, deducted by the expected heat that it diffuses.\nSince the probability of the link (vj, vi) is pji, the expected heat flow from node j to node i should be multiplied by pji, and so we have fi (t + \u0394t) \u2212 fi (t) = \u2212 \u03b3fi (t) \u0394t + \u03a3 j: (vj, vi) \u2208 E \u03b3pjifj (t) \u0394t \/ RD + (vj), where RD + (vi) is the expected out-degree of node vi, defined as \u03a3k pik.\nSimilarly we have f (1) = e\u03b3Rf (0), (5) where Rii = \u2212 1, Rij = pji \/ RD + (vj) if (vj, vi) \u2208 E, and Rij = 0 otherwise.\nWhen the graph is large, a direct computation of e\u03b3R is time-consuming, and we adopt its discrete approximation: f (1) \u2248 (I + (\u03b3\/N) R) N f (0).\n(6)\nThe matrix (I + (\u03b3\/N) R) N in Eq.\n(6) and the matrix e\u03b3R in Eq.\n(5) are called the Discrete Diffusion Kernel and the Continuous Diffusion Kernel respectively.\nBased on the Heat Diffusion Models and their solutions, DiffusionRank can be established on undirected graphs, directed graphs, and random graphs.\nIn the next section, we mainly focus on DiffusionRank in the random graph setting.\n4.\nDIFFUSIONRANK\nFor
a random graph, the matrix (I + (\u03b3\/N) R) N or e\u03b3R can measure the similarity relationship between nodes.\nLet fi (0) = 1 and fj (0) = 0 if j \u2260 i; then the vector f (0) represents the unit heat at node vi while all other nodes have zero heat.\nFor such f (0) in a random graph, we can find the heat distribution at time 1 by using Eq.\n(5) or Eq.\n(6).\nThe heat distribution is exactly the i-th row of the matrix (I + (\u03b3\/N) R) N or e\u03b3R.\nSo the i-th row, j-th column element hij in the matrix (I + (\u03b3\/N) R) N or e\u03b3R means the amount of heat that vi can receive from vj from time 0 to 1.\nThus the value hij can be used to measure the similarity from vj to vi.\nFor a static graph, similarly the matrix (I + (\u03b3\/N) H) N or e\u03b3H can measure the similarity relationship between nodes.\nThe intuition behind this is that the amount h (i, j) of heat that a page vi receives from a unit heat in a page vj in a unit time embodies the extent of the link connections from page vj to page vi.\nRoughly speaking, when there are more uncrossed paths from vj to vi, vi will receive more heat from vj; when the path length from vj to vi is shorter, vi will receive more heat from vj; and when the pipe connecting vj and vi is wide, the heat will flow quickly.\nThe final heat that vi receives will depend on the various paths from vj to vi, their lengths, and the widths of the pipes.\nAlgorithm 1 DiffusionRank Function Input: The transition matrix A; the inverse transition matrix U; the decay factor \u03b1I for the inverse PageRank; the decay factor \u03b1B for PageRank; the number of iterations MI for the inverse PageRank; the number of trusted pages L; the thermal conductivity coefficient \u03b3.
Output: DiffusionRank score vector h.\n1: s = 1 2: for i = 1 TO MI do 3: s = \u03b1I \u00b7 U \u00b7 s + (1 \u2212 \u03b1I) \u00b7 (1\/n) \u00b7 1 4: end for 5: Sort s in a decreasing order: \u03c0 = Rank ({1,..., n}, s) 6: d = 0, Count = 0, i = 0 7: while Count \u2264 L do 8: if \u03c0 (i) is evaluated as a trusted page then 9: d (\u03c0 (i)) = 1, Count + + 10: end if 11: i + + 12: end while 13: d = d \/ | d | 14: h = d 15: Find the iteration number MB according to Section 4.4 16: for i = 1 TO MB do 17: h = (1 \u2212 \u03b3\/MB) h + (\u03b3\/MB) (\u03b1B \u00b7 A \u00b7 h + (1 \u2212 \u03b1B) \u00b7 (1\/n) \u00b7 1) 18: end for 19: RETURN h\n4.1 Algorithm\nFor the ranking task, we adopt the heat kernel on a random graph.\nFormally, DiffusionRank is described in Algorithm 1, in which the element Uij in the inverse transition matrix U is defined to be 1\/Ij if there is a link from i to j, and 0 otherwise.\nThis trusted pages selection procedure by inverse PageRank is completely borrowed from TrustRank [7] except for a fixed size of the trusted set.\nAlthough the inverse PageRank is not perfect in its ability of determining the maximum coverage, it is appealing because of its polynomial execution time and its reasonable intuition--we actually invert the original links when we try to build the seed set from those pages that point to many pages that in turn point to many pages and so on.\nIn the algorithm, the underlying random graph is set as P = \u03b1B \u00b7 A + (1 \u2212 \u03b1B) \u00b7 (1\/n) \u00b7 1n \u00d7 n, which is induced by the Web graph.\nAs a result, R = \u2212 I + P.\nIn fact, the more general setting for DiffusionRank is P = \u03b1B \u00b7 A + (1 \u2212 \u03b1B) \u00b7 g \u00b7 1T.\nBy such a setting, DiffusionRank is a generalization of TrustRank when \u03b3 tends to infinity and when g is set in the same way as TrustRank.\nHowever, the second part of TrustRank is not adopted by us.\nIn our model, g should be the true \"teleportation\"
determined by the user's browse habits, popularity distribution over all the Web pages, and so on; P should be the true model of the random nature of the World Wide Web.\nSetting g according to the trusted pages would not be consistent with the basic idea of Heat Diffusion on a random graph.\nWe simply set g = (1\/n) 1 only because we cannot find it without any prior knowledge.\nRemark.\nIn a social network interpretation, DiffusionRank first recognizes a group of trusted people, who may not be highly ranked, but who know many other people.\nThe initially trusted people are endowed with the power to decide who can be further trusted, but they cannot decide the final voting results, and so they are not dictators.\n4.2 Advantages\nNext we show four advantages of DiffusionRank.\n4.2.1 Two closed forms\nFirst, its solutions have two forms, both of which are closed form.\nOne takes the discrete form, and has the advantage of fast computing, while the other takes the continuous form, and has the advantage of being easily analyzed in theoretical aspects.\nThe theoretical advantage has been shown in the proof of the theorem in the next section.\nFigure 1: Two graphs\n4.2.2 Group-group relations\nSecond, it can be naturally employed to detect group-group relations.\nFor example, let G2 and G1 denote two groups, containing pages (j1, j2,..., js) and (i1, i2,..., it), respectively.\nThen \u03a3u, v hiu, jv is the total amount of heat that G1 receives from G2, where hiu, jv is the iu-th row, jv-th column element of the heat kernel.\nMore specifically, we need to first set f (0) for such an application as follows.\nIn f (0) = (f1 (0), f2 (0),..., fn (0)) T, if i \u2208 {j1, j2,..., js}, then fi (0) = 1, and 0 otherwise.\nThen we employ Eq.\n(5) to calculate f (1) = (f1 (1), f2 (1),..., fn (1)) T, and finally we sum those fj (1) where j \u2208 {i1, i2,..., it}.\nFig. 1 (a) shows the results generated by the DiffusionRank.\nWe consider five groups--five departments in our Engineering Faculty: CSE, MAE, EE, IE, and SE.\n\u03b3 is set to be 1, and the numbers in Fig. 1 (a) are the amounts of heat that the groups diffuse to each other.\nThese results are normalized by the total number of each group, and the edges are ignored if the values are less than 0.000001.\nThe group-to-group relations are therefore detected; for example, we can see that the strongest overall tie is from EE to IE.\nWhile this is a natural application for DiffusionRank because of the easy interpretation of the amount of heat flowing from one group to another, it is difficult to apply other ranking techniques to such an application because they lack such a physical meaning.\n4.2.3 Graph cut\nThird, it can be used to partition the Web graph into several parts.\nA quick example is shown below.\nThe graph in Fig. 1 (b) is an undirected graph, and so we employ Eq.\n(3).\nIf we know that node 1 belongs to one community and that node 12 belongs to another community, then we can put one unit positive heat source on node 1 and one unit negative heat source on node 12.\nAfter time 1, if we set \u03b3 = 0.5, the heat distribution is [0.25, 0.16, 0.17, 0.16, 0.15, 0.09, 0.01, -0.04, -0.18, -0.21, -0.21, -0.34], and if we set \u03b3 = 1, it will be [0.17, 0.16, 0.17, 0.16, 0.16, 0.12, 0.02, -0.07, -0.18, -0.22, -0.24, -0.24].\nIn both settings, we can easily divide the graph into two parts: {1, 2, 3, 4, 5, 6, 7} with positive temperatures and {8, 9, 10, 11, 12} with negative temperatures.\nFor directed graphs and random graphs, we can similarly cut them by employing the corresponding heat solution.\n4.2.4 Anti-manipulation\nFourth, it can be used to combat manipulation.\nLet G2 contain trusted Web pages (j1, j2,..., js); then for each page i, \u03a3v hi, jv is the heat that page i receives from G2, and can be computed by the discrete approximation of Eq.\n(4) in the case of a static graph or
Eq.\n(6) in the case of a random graph, in which f (0) is set to be a special initial heat distribution so that the trusted Web pages have unit heat while all the others have zero heat.\nIn doing so, a manipulated Web page will get a lower rank unless it has strong in-links from the trusted Web pages, directly or indirectly.\nThe situation is quite different for PageRank, because PageRank is input-independent as we have shown in Section 3.1.\nBased on the fact that the connection from a trusted page to a \"bad\" page should be weak--fewer uncrossed paths, longer distances and narrower pipes--we can say DiffusionRank can resist web spam if we can select trusted pages.\nIt is fortunate that the trusted pages selection method in [7]--the first part of TrustRank--can help us to fulfill this task.\nFor such an application of DiffusionRank, the computational complexity for the Discrete Diffusion Kernel is the same as that for PageRank in the cases of both a static graph and a random graph.\nThis can be seen in Eq.\n(6), by which we need N iterations, and for each iteration we need a multiplication operation between a matrix and a vector, while in Eq.\n(1) we also need a multiplication operation between a matrix and a vector for each iteration.\n4.3 The Physical Meaning of \u03b3\n\u03b3 plays an important role in the anti-manipulation effect of DiffusionRank.\n\u03b3 is the thermal conductivity--the heat diffusion coefficient.\nIf it has a high value, heat will diffuse very quickly.\nConversely, if it is small, heat will diffuse slowly.\nIn the extreme case, if it is infinitely large, then heat will diffuse from one node to other nodes immediately, and this is exactly the case corresponding to PageRank.\nNext, we will interpret it mathematically.\nTHEOREM 1.\nWhen \u03b3 tends to infinity and f (0) is not the zero vector, e\u03b3Rf (0) is proportional to the stable distribution produced by PageRank.\nPROOF.\nLet g = (1\/n) 1.\nBy the Perron Theorem [11], we have shown that 1 is the largest eigenvalue of P = [(1
\u2212 \u03b1) g1T + \u03b1A], and that no other eigenvalue has an absolute value equal to 1.\nLet x be the stable distribution, so that Px = x; x is the eigenvector corresponding to the eigenvalue 1.\nAssume the n \u2212 1 other eigenvalues of P are | \u03bb2 | <1,..., | \u03bbn | <1; then all the eigenvalues of the matrix e\u03b3R are 1, e\u03b3 (\u03bb2-1),..., e\u03b3 (\u03bbn-1).\nWhen \u03b3 \u2192 \u221e, they become 1, 0,..., 0, which means that 1 is the only nonzero eigenvalue of e\u03b3R when \u03b3 \u2192 \u221e.\nWe can see that when \u03b3 \u2192 \u221e, e\u03b3Re\u03b3Rf (0) = e\u03b3Rf (0), and so e\u03b3Rf (0) is an eigenvector of e\u03b3R when \u03b3 \u2192 \u221e.\nOn the other hand, e\u03b3Rx = (I + \u03b3R + (\u03b32\/2!) R2 + (\u03b33\/3!) R3 + ...) x = Ix + \u03b3Rx + (\u03b32\/2!) R2x + (\u03b33\/3!) R3x + ... = x since Rx = (\u2212 I + P) x = \u2212 x + x = 0, and hence x is an eigenvector of e\u03b3R for any \u03b3.\nTherefore both x and e\u03b3Rf (0) are eigenvectors corresponding to the unique eigenvalue 1 of e\u03b3R when \u03b3 \u2192 \u221e, and consequently x = ce\u03b3Rf (0).\nBy this theorem, we see that DiffusionRank is a generalization of PageRank.\nWhen \u03b3 = 0, the ranking value is most robust to manipulation since no heat is diffused and the system is unchanged, but the Web structure is completely ignored since e\u03b3Rf (0) = e0Rf (0) = If (0) = f (0); when \u03b3 = \u221e, DiffusionRank becomes PageRank, and it can be manipulated easily.\nWe expect an appropriate setting of \u03b3 that can balance both.\nFor this, we have no theoretical result, but in practice we find that \u03b3 = 1 works well in Section 5.\nNext we discuss how to determine the number of iterations if we employ the discrete heat kernel.\n4.4 The Number of Iterations\nWhile we enjoy the advantage of the concise form of the exponential heat kernel, it is better for us to calculate DiffusionRank by employing Eq.\n(6) in an iterative way.\nThen the problem of determining N--the number of iterations--arises: for a given threshold \u03b5, find N such that || ((I + (\u03b3\/N) R) N \u2212 e\u03b3R) f (0) || < \u03b5.\nFigure 2: (a) The toy graph consisting of six nodes, with node 1 being manipulated by adding new nodes A, B, C,...; (b) The approximation tendency to PageRank by DiffusionRank.\nFigure 3: The rank values of the manipulated nodes on the toy graph.\n5.3 Experimental Set-up\nThe experiments on the middle-size graph and the large-size graphs are conducted on a workstation, whose hardware model is Nix Dual Intel Xeon 2.2 GHz with 1GB RAM and a Linux Kernel 2.4.18-27smp (RedHat 7.3).\nIn calculating DiffusionRank, we employ Eq.\n(6) and the discrete approximation of Eq.\n(4) for such graphs.\nThe related tasks are implemented using the C language.\nFor the toy graph, we employ the continuous diffusion kernels in Eq.\n(4) and Eq.\n(5), and implement the related tasks using Matlab.\nFor nodes that have zero out-degree (dangling nodes), we employ the method in the modified PageRank algorithm [8], in which dangling nodes are considered to have random links uniformly to each node.\nWe set \u03b1 = \u03b1I = \u03b1B = 0.85 in all algorithms.\nWe also set g to be the uniform distribution in both PageRank and DiffusionRank.\nFor DiffusionRank, we set \u03b3 = 1.\nAccording to the discussions in Section 4.3 and Section 4.4, we set the iteration number to be MB = 100 in DiffusionRank, and for accuracy considerations, the iteration number in all the algorithms is set to be 100.\n5.4 Approximation of PageRank\nWe show that when \u03b3 tends to infinity, the value differences between DiffusionRank and PageRank tend to zero.\nFig. 2 (b) shows the approximation property of DiffusionRank, as proved in Theorem 1, on the toy graph.\nThe horizontal axis of Fig.
2 (b) marks the - y value, and vertical axis corresponds to the value difference between DiffusionRank and PageRank.\nAll the possible trusted sets with L = 1 are considered.\nFor L> 1, the results should be the linear combination of some of these curves because of the linearity of the solutions to heat equations.\nOn other graphs, the situations are similar.\n5.5 Results of Anti-manipulation\nIn this section, we show how the rank values change as the intensity of manipulation increases.\nWe measure the intensity of manipulation by the number of newly added points that point to the manipulated point.\nThe horizontal axes of Fig. 3 stand for the numbers of newly added points, and vertical axes show the corresponding rank values of the manipulated nodes.\nTo be clear, we consider all six situations.\nEvery node in Fig. 2 (a) is manipulated respectively, and its corresponding values for PageRank, TrustRank (TR), DiffusionRank (DR) are shown in the one of six sub-figures in Fig. 3.\nThe vertical axes show which node is being manipulated.\nIn each sub-figure, the trusted sets are computed below.\nSince the inverse PageRank yields the results [1.26, 0.85, 1.31, 1.36, 0.51, 0.71].\nLet L = 1.\nIf the manipulated node is not 4, then the trusted set is {4}, and otherwise {3}.\nWe observe that in all the cases, rank values of the manipulated node for DiffusionRank grow slowest as the number of the newly added nodes increases.\nOn the middle-size graph and the large-size graph, this conclusion is also true, see Fig. 4.\nNote that, in Fig. 4 (a), we choose four trusted sets (L = 1), on which we test DiffusionRank and TrustRank, the results are denoted by DiffusionRanki and TrustRanki (i = 0, 1, 2, 3 denotes the four trusted set); in Fig. 4 (b), we choose one trusted set (L = 1).\nMoreover, in both Fig. 4 (a) and Fig. 
4 (b), we show the results for DiffusionRank when we have no trusted set, and we trust all the pages before some of them are manipulated.\nWe also test the order difference between the ranking order A before the page is manipulated and the ranking order PA after the page is manipulated.\nBecause after manipu\nFigure 4: (a) The rank values of the manipulated nodes on the middle-size graph; (b) The rank values of the manipulated nodes on the large-size graph\nlation, the number of pages changes, we only compare the common part of A and PA. .\nThis experiment is used to test the stability of all these algorithms.\nThe less the order difference, the stabler the algorithm, in the sense that only a smaller part of the order relations is affected by the manipulation.\nFigure 5 (a) shows that the order difference values change when we add new nodes that point to the manipulated node.\nWe give several ry settings.\nWe find that when ry = 1, the least order difference is achieved by DiffusionRank.\nIt is interesting to point out that as ry increases, the order difference will increase first; after reaching a maximum value, it will decrease, and finally it tends to the PageRank results.\nWe show this tendency in Fig. 
5 (b), in which we choose three different settings--the numbers of manipulated nodes are 2,000, 5,000, and 10,000 respectively.\nFrom these figures, we can see that when γ < 2, the values are less than those for PageRank, and that when γ > 20, the difference between PageRank and DiffusionRank is very small.\nAfter these investigations, we find that in all the graphs we tested, DiffusionRank (when γ = 1) is the most robust to manipulation, both in value difference and in order difference.\nThe trusted set selection algorithm proposed in [7] is effective for both TrustRank and DiffusionRank.\nFigure 5: (a) Pairwise order difference on the middle-size graph; the smaller it is, the more stable the algorithm; (b) The tendency of varying γ\n6.\nCONCLUSIONS\nWe conclude that DiffusionRank is a generalization of PageRank, which is interesting in that the heat diffusion coefficient γ can balance the extent to which we want to model the original Web graph and the extent to which we want to reduce the effect of link manipulations.\nThe experimental results show that we can actually achieve such a balance by setting γ = 1, although the best setting, including a varying γ, is still under further investigation.\nThis anti-manipulation feature enables DiffusionRank to be a candidate as a penicillin for Web spamming.\nMoreover, DiffusionRank can be employed to find group-group relations and to partition the Web graph into small communities.\nAll these advantages can be achieved with the same computational complexity as PageRank.\nFor the special application of anti-manipulation, DiffusionRank performs best, both in reduction effects and in its stability, among all the three algorithms.","keyphrases":["diffusionrank","rank","web spam","pagerank","web graph","group-to-group relat","link commun","random graph","keyword stuf","link stuf","machin learn","link analysi","seed select algorithm","gaussian kernel smooth","equal vote abil"],"prmu":["P","P","P","P","P","P","P","M","U","M","U","M","M","U","M"]}
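The manipulation measurement used throughout Section 5.5 above (add k new nodes that link only to a target page, then watch the target's rank value grow) can be sketched for plain PageRank. This is a minimal illustration on a small hypothetical 6-node graph, not the paper's actual test graphs or its DiffusionRank implementation:

```python
def pagerank(adj, n, alpha=0.85, iters=100):
    """Plain PageRank by power iteration.

    adj maps node -> list of out-neighbours; nodes are 0..n-1."""
    r = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - alpha) / n] * n
        for u in range(n):
            out = adj.get(u, [])
            if out:
                share = alpha * r[u] / len(out)
                for v in out:
                    nxt[v] += share
            else:
                # dangling node: spread its mass uniformly
                for v in range(n):
                    nxt[v] += alpha * r[u] / n
        r = nxt
    return r

def manipulate(adj, n, target, k):
    """Add k spam nodes whose only out-link points at `target`."""
    spam = dict(adj)
    for i in range(k):
        spam[n + i] = [target]
    return spam, n + k

# Hypothetical 6-node graph standing in for a toy Web graph.
base = {0: [1, 2], 1: [2], 2: [0], 3: [2, 4], 4: [5], 5: [3]}
for k in (0, 10, 50):
    g, n = manipulate(base, 6, target=2, k=k)
    print(k, round(pagerank(g, n)[2], 3))
```

As the number of spam in-links k grows, the target's PageRank value rises steadily; the experiments above report that DiffusionRank with γ = 1 slows this growth the most, which is the anti-manipulation effect plotted in Figs. 3 and 4.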
{"id":"H-77","title":"Automatic Extraction of Titles from General Documents using Machine Learning","abstract":"In this paper, we propose a machine learning approach to title extraction from general documents. By general documents, we mean documents that can belong to any one of a number of specific genres, including presentations, book chapters, technical papers, brochures, reports, and letters. Previously, methods have been proposed mainly for title extraction from research papers. It has not been clear whether it could be possible to conduct automatic title extraction from general documents. As a case study, we consider extraction from Office including Word and PowerPoint. In our approach, we annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data, train machine learning models, and perform title extraction using the trained models. Our method is unique in that we mainly utilize formatting information such as font size as features in the models. It turns out that the use of formatting information can lead to quite accurate extraction from general documents. Precision and recall for title extraction from Word is 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint is 0.875 and 0.895 respectively in an experiment on intranet data. Other important new findings in this work include that we can train models in one domain and apply them to another domain, and more surprisingly we can even train models in one language and apply them to another language. Moreover, we can significantly improve search ranking results in document retrieval by using the extracted titles.","lvl-1":"Automatic Extraction of Titles from General Documents using Machine Learning Yunhua Hu1 Computer Science Department Xi``an Jiaotong University No 28, Xianning West Road Xi'an, China, 710049 yunhuahu@mail.xjtu.edu.cn Hang Li, Yunbo Cao Microsoft Research Asia 5F Sigma Center, No. 
49 Zhichun Road, Haidian, Beijing, China, 100080 {hangli,yucao}@microsoft.com Qinghua Zheng Computer Science Department Xi'an Jiaotong University No 28, Xianning West Road Xi'an, China, 710049 qhzheng@mail.xjtu.edu.cn Dmitriy Meyerzon Microsoft Corporation One Microsoft Way Redmond, WA, USA, 98052 dmitriym@microsoft.com ABSTRACT In this paper, we propose a machine learning approach to title extraction from general documents.\nBy general documents, we mean documents that can belong to any one of a number of specific genres, including presentations, book chapters, technical papers, brochures, reports, and letters.\nPreviously, methods have been proposed mainly for title extraction from research papers.\nIt has not been clear whether it could be possible to conduct automatic title extraction from general documents.\nAs a case study, we consider extraction from Office including Word and PowerPoint.\nIn our approach, we annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data, train machine learning models, and perform title extraction using the trained models.\nOur method is unique in that we mainly utilize formatting information such as font size as features in the models.\nIt turns out that the use of formatting information can lead to quite accurate extraction from general documents.\nPrecision and recall for title extraction from Word is 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint is 0.875 and 0.895 respectively in an experiment on intranet data.\nOther important new findings in this work include that we can train models in one domain and apply them to another domain, and more surprisingly we can even train models in one language and apply them to another language.\nMoreover, we can significantly improve search ranking results in document retrieval by using the extracted titles.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]:
Information Search and Retrieval - Search Process; H.4.1 [Information Systems Applications]: Office Automation - Word processing; D.2.8 [Software Engineering]: Metrics - complexity measures, performance measures General Terms Algorithms, Experimentation, Performance.\n1.\nINTRODUCTION Metadata of documents is useful for many kinds of document processing such as search, browsing, and filtering.\nIdeally, metadata is defined by the authors of documents and is then used by various systems.\nHowever, people seldom define document metadata by themselves, even when they have convenient metadata definition tools [26].\nThus, how to automatically extract metadata from the bodies of documents turns out to be an important research issue.\nMethods for performing the task have been proposed.\nHowever, the focus was mainly on extraction from research papers.\nFor instance, Han et al. [10] proposed a machine learning based method to conduct extraction from research papers.\nThey formalized the problem as that of classification and employed Support Vector Machines as the classifier.\nThey mainly used linguistic features in the model.1 In this paper, we consider metadata extraction from general documents.\nBy general documents, we mean documents that may belong to any one of a number of specific genres.\nGeneral documents are more widely available in digital libraries, intranets and the internet, and thus investigation on extraction from them is sorely needed.\nResearch papers usually have well-formed styles and noticeable characteristics.\nIn contrast, the styles of general documents can vary greatly.\nIt has not been clarified whether a machine learning based approach can work well for this task.\nThere are many types of metadata: title, author, date of creation, etc..\nAs a case study, we consider title extraction in this paper.\nGeneral documents can be in many different file formats: Microsoft Office, PDF (PS), etc..\nAs a case study, we consider extraction from Office 
including Word and PowerPoint.\nWe take a machine learning approach.\nWe annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data to train several types of models, and perform title extraction using any one type of the trained models.\nIn the models, we mainly utilize formatting information such as font size as features.\nWe employ the following models: Maximum Entropy Model, Perceptron with Uneven Margins, Maximum Entropy Markov Model, and Voted Perceptron.\nIn this paper, we also investigate the following three problems, which did not seem to have been examined previously.\n(1) Comparison between models: among the models above, which model performs best for title extraction; (2) Generality of model: whether it is possible to train a model on one domain and apply it to another domain, and whether it is possible to train a model in one language and apply it to another language; (3) Usefulness of extracted titles: whether extracted titles can improve document processing such as search.\nExperimental results indicate that our approach works well for title extraction from general documents.\nOur method can significantly outperform the baselines: one that always uses the first lines as titles and the other that always uses the lines in the largest font sizes as titles.\nPrecision and recall for title extraction from Word are 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint are 0.875 and 0.895 respectively.\nIt turns out that the use of format features is the key to successful title extraction.\n(1) We have observed that Perceptron based models perform better in terms of extraction accuracies.\n(2) We have empirically verified that the models trained with our approach are generic in the sense that they can be trained on one domain and applied to another, and they can be trained in one language and applied to another.\n(3) We have found that using the extracted titles we can 
significantly improve precision of document retrieval (by 10%).\nWe conclude that we can indeed conduct reliable title extraction from general documents and use the extracted results to improve real applications.\nThe rest of the paper is organized as follows.\nIn section 2, we introduce related work, and in section 3, we explain the motivation and problem setting of our work.\nIn section 4, we describe our method of title extraction, and in section 5, we describe our method of document retrieval using extracted titles.\nSection 6 gives our experimental results.\nWe make concluding remarks in section 7.\n2.\nRELATED WORK 2.1 Document Metadata Extraction Methods have been proposed for performing automatic metadata extraction from documents; however, the main focus was on extraction from research papers.\nThe proposed methods fall into two categories: the rule based approach and the machine learning based approach.\nGiuffrida et al. [9], for instance, developed a rule-based system for automatically extracting metadata from research papers in Postscript.\nThey used rules like titles are usually located on the upper portions of the first pages and they are usually in the largest font sizes.\nLiddy et al. [14] and Yilmazel el al. [23] performed metadata extraction from educational materials using rule-based natural language processing technologies.\nMao et al. [16] also conducted automatic metadata extraction from research papers using rules on formatting information.\nThe rule-based approach can achieve high performance.\nHowever, it also has disadvantages.\nIt is less adaptive and robust when compared with the machine learning approach.\nHan et al. 
[10], for instance, conducted metadata extraction with the machine learning approach.\nThey viewed the problem as that of classifying the lines in a document into the categories of metadata and proposed using Support Vector Machines as the classifier.\nThey mainly used linguistic information as features.\nThey reported high extraction accuracy from research papers in terms of precision and recall.\n2.2 Information Extraction Metadata extraction can be viewed as an application of information extraction, in which given a sequence of instances, we identify a subsequence that represents information in which we are interested.\nHidden Markov Model [6], Maximum Entropy Model [1, 4], Maximum Entropy Markov Model [17], Support Vector Machines [3], Conditional Random Field [12], and Voted Perceptron [2] are widely used information extraction models.\nInformation extraction has been applied, for instance, to part-ofspeech tagging [20], named entity recognition [25] and table extraction [19].\n2.3 Search Using Title Information Title information is useful for document retrieval.\nIn the system Citeseer, for instance, Giles et al. 
managed to extract titles from research papers and make use of the extracted titles in metadata search of papers [8].\nIn web search, the title fields (i.e., file properties) and anchor texts of web pages (HTML documents) can be viewed as `titles'' of the pages [5].\nMany search engines seem to utilize them for web page retrieval [7, 11, 18, 22].\nZhang et al., found that web pages with well-defined metadata are more easily retrieved than those without well-defined metadata [24].\nTo the best of our knowledge, no research has been conducted on using extracted titles from general documents (e.g., Office documents) for search of the documents.\n3.\nMOTIVATION AND PROBLEM SETTING We consider the issue of automatically extracting titles from general documents.\nBy general documents, we mean documents that belong to one of any number of specific genres.\nThe documents can be presentations, books, book chapters, technical papers, brochures, reports, memos, specifications, letters, announcements, or resumes.\nGeneral documents are more widely available in digital libraries, intranets, and internet, and thus investigation on title extraction from them is sorely needed.\nFigure 1 shows an estimate on distributions of file formats on intranet and internet [15].\nOffice and PDF are the main file formats on the intranet.\nEven on the internet, the documents in the formats are still not negligible, given its extremely large size.\nIn this paper, without loss of generality, we take Office documents as an example.\nFigure 1.\nDistributions of file formats in internet and intranet.\nFor Office documents, users can define titles as file properties using a feature provided by Office.\nWe found in an experiment, however, that users seldom use the feature and thus titles in file properties are usually very inaccurate.\nThat is to say, titles in file properties are usually inconsistent with the `true'' titles in the file bodies that are created by the authors and are visible to
readers.\nWe collected 6,000 Word and 6,000 PowerPoint documents from an intranet and the internet and examined how many titles in the file properties are correct.\nWe found that surprisingly the accuracy was only 0.265 (cf., Section 6.3 for details).\nA number of reasons can be considered.\nFor example, if one creates a new file by copying an old file, then the file property of the new file will also be copied from the old file.\nIn another experiment, we found that Google uses the titles in file properties of Office documents in search and browsing, but the titles are not very accurate.\nWe created 50 queries to search Word and PowerPoint documents and examined the top 15 results of each query returned by Google.\nWe found that nearly all the titles presented in the search results were from the file properties of the documents.\nHowever, only 0.272 of them were correct.\nActually, `true'' titles usually exist at the beginnings of the bodies of documents.\nIf we can accurately extract the titles from the bodies of documents, then we can exploit reliable title information in document processing.\nThis is exactly the problem we address in this paper.\nMore specifically, given a Word document, we are to extract the title from the top region of the first page.\nGiven a PowerPoint document, we are to extract the title from the first slide.\nA title sometimes consists of a main title and one or two subtitles.\nWe only consider extraction of the main title.\nAs baselines for title extraction, we use that of always using the first lines as titles and that of always using the lines with largest font sizes as titles.\nFigure 2.\nTitle extraction from Word document.\nFigure 3.\nTitle extraction from PowerPoint document.\nNext, we define a `specification'' for human judgments in title data annotation.\nThe annotated data will be used in training and testing of the title extraction methods.\nSummary of the specification: The title of a document should be identified on the 
basis of common sense, if there is no difficulty in the identification.\nHowever, there are many cases in which the identification is not easy.\nThere are some rules defined in the specification that guide identification for such cases.\nThe rules include: a title is usually in consecutive lines in the same format; a document can have no title; titles in images are not considered; a title should not contain words like `draft'' or `whitepaper''; if it is difficult to determine which is the title, select the one in the largest font size; and if it is still difficult, select the first candidate.\n(The specification covers all the cases we have encountered in data annotation.)\nFigures 2 and 3 show examples of Office documents from which we conduct title extraction.\nIn Figure 2, `Differences in Win32 API Implementations among Windows Operating Systems'' is the title of the Word document.\n`Microsoft Windows'' on the top of this page is a picture and thus is ignored.\nIn Figure 3, `Building Competitive Advantages through an Agile Infrastructure'' is the title of the PowerPoint document.\nWe have developed a tool for annotation of titles by human annotators.\nFigure 4 shows a snapshot of the tool.\nFigure 4.\nTitle annotation tool.\n4.\nTITLE EXTRACTION METHOD 4.1 Outline Title extraction based on machine learning consists of training and extraction.\nThe same pre-processing step occurs before training and extraction.\nDuring pre-processing, from the top region of the first page of a Word document or the first slide of a PowerPoint document a number of units for processing are extracted.\nIf a line (lines are separated by `return'' symbols) only has a single format, then the line will become a unit.\nIf a line has several parts and each of them has its own format, then each part will become a unit.\nEach unit will be treated as an instance in learning.\nA unit contains not only content information (linguistic information) but also
formatting information.\nThe input to pre-processing is a document and the output of pre-processing is a sequence of units (instances).\nFigure 5 shows the units obtained from the document in Figure 2.\nFigure 5.\nExample of units.\nIn learning, the input is sequences of units where each sequence corresponds to a document.\nWe take labeled units (labeled as title_begin, title_end, or other) in the sequences as training data and construct models for identifying whether a unit is title_begin, title_end, or other.\nWe employ four types of models: Perceptron, Maximum Entropy (ME), Perceptron Markov Model (PMM), and Maximum Entropy Markov Model (MEMM).\nIn extraction, the input is a sequence of units from one document.\nWe employ one type of model to identify whether a unit is title_begin, title_end, or other.\nWe then extract units from the unit labeled with `title_begin'' to the unit labeled with `title_end''.\nThe result is the extracted title of the document.\nThe unique characteristic of our approach is that we mainly utilize formatting information for title extraction.\nOur assumption is that although general documents vary in styles, their formats have certain patterns and we can learn and utilize the patterns for title extraction.\nThis is in contrast to the work by Han et al., in which only linguistic features are used for extraction from research papers.\n4.2 Models The four models actually can be considered in the same metadata extraction framework.\nThat is why we apply them together to our current problem.\nEach input is a sequence of instances x1 x2 ... xk together with a sequence of labels y1 y2 ... yk.\nxi and yi represent an instance and its label, respectively (i = 1, 2, ..., k).\nRecall that an instance here represents a unit.\nA label represents title_begin, title_end, or other.\nHere, k is the number of units in a document.\nIn learning, we train a model which can be generally denoted as a conditional probability distribution P(Y1 ... Yk | X1 ... Xk), where Xi and Yi denote random variables taking instance xi and label yi as values, respectively (i = 1, 2, ..., k).\nFigure 6.\nMetadata extraction model: labeled sequences are fed to the learning tool to produce the conditional distribution P(Y1 ... Yk | X1 ... Xk), which the extraction tool uses to output argmax P(y1 ... yk | x1 ... xk) for a new sequence of instances.\nWe can make assumptions about the general model in order to make it simple enough for training.\nFor example, we can assume that Y1, ..., Yk are independent of each other given X1, ..., Xk.\nThus, we have P(Y1 ... Yk | X1 ... Xk) = P(Y1 | X1) ... P(Yk | Xk).\nIn this way, we decompose the model into a number of classifiers.\nWe train the classifiers locally using the labeled data.\nAs the classifier, we employ the Perceptron or Maximum Entropy model.\nWe can also assume that the first order Markov property holds for Y1, ..., Yk given X1, ..., Xk.\nThus, we have P(Y1 ... Yk | X1 ... Xk) = P(Y1 | X1) P(Y2 | Y1, X2) ... P(Yk | Yk-1, Xk).\nAgain, we obtain a number of classifiers.\nHowever, the classifiers are conditioned on the previous label.\nWhen we employ the Perceptron or Maximum Entropy model as a classifier, the models become a Perceptron Markov Model or Maximum Entropy Markov Model, respectively.\nThat is to say, the two models are more precise.\nIn extraction, given a new sequence of instances, we resort to one of the constructed models to assign a sequence of labels to the sequence of instances, i.e., perform extraction.\nFor Perceptron and ME, we assign labels locally and combine the results globally later using heuristics.\nSpecifically, we first identify the most likely title_begin.\nThen we find the most likely title_end within three units after the title_begin.\nFinally, we extract as a title the units between the title_begin and the title_end.\nFor PMM and MEMM, we employ the Viterbi algorithm to find the globally optimal label sequence.\nIn this paper, for Perceptron, we actually employ an improved variant of it, called Perceptron with Uneven
Margin [13].\nThis version of Perceptron can work well especially when the number of positive instances and the number of negative instances differ greatly, which is exactly the case in our problem.\nWe also employ an improved version of Perceptron Markov Model in which the Perceptron model is the so-called Voted Perceptron [2].\nIn addition, in training, the parameters of the model are updated globally rather than locally.\n4.3 Features There are two types of features: format features and linguistic features.\nWe mainly use the former.\nThe features are used for both the title-begin and the title-end classifiers.\n4.3.1 Format Features Font Size: There are four binary features that represent the normalized font size of the unit (recall that a unit has only one type of font).\nIf the font size of the unit is the largest in the document, then the first feature will be 1, otherwise 0.\nIf the font size is the smallest in the document, then the fourth feature will be 1, otherwise 0.\nIf the font size is above the average font size and not the largest in the document, then the second feature will be 1, otherwise 0.\nIf the font size is below the average font size and not the smallest, the third feature will be 1, otherwise 0.\nIt is necessary to conduct normalization on font sizes.\nFor example, in one document the largest font size might be `12pt'', while in another the smallest one might be `18pt''.\nBoldface: This binary feature represents whether or not the current unit is in boldface.\nAlignment: There are four binary features that respectively represent the location of the current unit: `left'', `center'', `right'', and `unknown alignment''.\nThe following format features with respect to `context'' play an important role in title extraction.\nEmpty Neighboring Unit: There are two binary features that represent, respectively, whether or not the previous unit and the current unit are blank lines.\nFont Size Change: There are two binary features that represent, 
respectively, whether or not the font size of the previous unit and the font size of the next unit differ from that of the current unit.\nAlignment Change: There are two binary features that represent, respectively, whether or not the alignment of the previous unit and the alignment of the next unit differ from that of the current one.\nSame Paragraph: There are two binary features that represent, respectively, whether or not the previous unit and the next unit are in the same paragraph as the current unit.\n4.3.2 Linguistic Features The linguistic features are based on key words.\nPositive Word: This binary feature represents whether or not the current unit begins with one of the positive words.\nThe positive words include `title:'', `subject:'', `subject line:'' For example, in some documents the lines of titles and authors have the same formats.\nHowever, if lines begin with one of the positive words, then it is likely that they are title lines.\nNegative Word: This binary feature represents whether or not the current unit begins with one of the negative words.\nThe negative words include `To'', `By'', `created by'', `updated by'', etc..\nThere are more negative words than positive words.\nThe above linguistic features are language dependent.\nWord Count: A title should not be too long.\nWe heuristically create four intervals: [1, 2], [3, 6], [7, 9] and [9, \u221e) and define one feature for each interval.\nIf the number of words in a title falls into an interval, then the corresponding feature will be 1; otherwise 0.\nEnding Character: This feature represents whether the unit ends with `:'', `-'', or other special characters.\nA title usually does not end with such a character.\n5.\nDOCUMENT RETRIEVAL METHOD We describe our method of document retrieval using extracted titles.\nTypically, in information retrieval a document is split into a number of fields including body, title, and anchor text.\nA ranking function in search can use different weights for 
different fields of the document.\nAlso, titles are typically assigned high weights, indicating that they are important for document retrieval.\nAs explained previously, our experiment has shown that a significant number of documents actually have incorrect titles in the file properties, and thus in addition to using them we use the extracted titles as one more field of the document.\nBy doing this, we attempt to improve the overall precision.\nIn this paper, we employ a modification of BM25 that allows field weighting [21].\nAs fields, we make use of body, title, extracted title and anchor.\nFirst, for each term in the query we count the term frequency in each field of the document; each field frequency is then weighted according to the corresponding weight parameter: wtf = Σ_f w_f · tf_f.\nSimilarly, we compute the document length as a weighted sum of the lengths of the fields: wdl = Σ_f w_f · dl_f.\nThe average document length in the corpus (avwdl) becomes the average of all weighted document lengths.\nIn our experiments we used k1 = 1.8 and b = 0.75.\nThe weight for content was 1.0, title was 10.0, anchor was 10.0, and extracted title was 5.0.\n6.\nEXPERIMENTAL RESULTS 6.1 Data Sets and Evaluation Measures We used two data sets in our experiments.\nFirst, we downloaded and randomly selected 5,000 Word documents and 5,000 PowerPoint documents from an intranet of Microsoft.\nWe call it MS hereafter.\nSecond, we downloaded and randomly selected 500 Word and 500 PowerPoint documents from the DotGov and DotCom domains on the internet, respectively.\nFigure 7 shows the distributions of the genres of the documents.\nWe see that the documents are indeed `general documents'' as we define them.\nFigure 7.\nDistributions of document genres.\nThird, a data set in Chinese was also downloaded from the internet.\nIt includes 500 Word documents and 500 PowerPoint documents in Chinese.\nWe manually labeled the titles of all the documents, on the basis of our specification.\nNot all the documents in
the two data sets have titles.\nTable 1 shows the percentages of the documents having titles.\nWe see that DotCom and DotGov have more PowerPoint documents with titles than MS. This might be because PowerPoint documents published on the internet are more formal than those on the intranet.\nTable 1.\nThe portion of documents with titles Domain Type MS DotCom DotGov Word 75.7% 77.8% 75.6% PowerPoint 82.1% 93.4% 96.4% In our experiments, we conducted evaluations on title extraction in terms of precision, recall, and F-measure.\nThe evaluation measures are defined as follows: Precision: P = A \/ ( A + B ) Recall: R = A \/ ( A + C ) F-measure: F1 = 2PR \/ ( P + R ) Here, A, B, C, and D are numbers of documents as those defined in Table 2.\nTable 2.\nContingence table with regard to title extraction Is title Is not title Extracted A B Not extracted C D 6.2 Baselines We test the accuracies of the two baselines described in section 4.2.\nThey are denoted as `largest font size'' and `first line'' respectively.\n6.3 Accuracy of Titles in File Properties We investigate how many titles in the file properties of the documents are reliable.\nWe view the titles annotated by humans as true titles and test how many titles in the file properties can approximately match with the true titles.\nWe use Edit Distance to conduct the approximate match.\n(Approximate match is only used in this evaluation).\nThis is because sometimes human annotated titles can be slightly different from the titles in file properties on the surface, e.g., contain extra spaces).\nGiven string A and string B: if ( (D == 0) or ( D \/ ( La + Lb ) < \u03b8 ) ) then string A = string B D: Edit Distance between string A and string B La: length of string A Lb: length of string B \u03b8: 0.1 \u2211 \u00d7 ++\u2212 + = t t n N wtf avwdl wdl bbk kwtf FBM )log( ))1(( )1( 25 1 1 150 Table 3.\nAccuracies of titles in file properties File Type Domain Precision Recall F1 MS 0.299 0.311 0.305 DotCom 0.210 0.214 0.212Word 
DotGov 0.182 0.177 0.180 MS 0.229 0.245 0.237 DotCom 0.185 0.186 0.186PowerPoint DotGov 0.180 0.182 0.181 6.4 Comparison with Baselines We conducted title extraction from the first data set (Word and PowerPoint in MS).\nAs the model, we used Perceptron.\nWe conduct 4-fold cross validation.\nThus, all the results reported here are those averaged over 4 trials.\nTables 4 and 5 show the results.\nWe see that Perceptron significantly outperforms the baselines.\nIn the evaluation, we use exact matching between the true titles annotated by humans and the extracted titles.\nTable 4.\nAccuracies of title extraction with Word Precision Recall F1 Model Perceptron 0.810 0.837 0.823 Largest font size 0.700 0.758 0.727 Baselines First line 0.707 0.767 0.736 Table 5.\nAccuracies of title extraction with PowerPoint Precision Recall F1 Model Perceptron 0.875 0.\n895 0.885 Largest font size 0.844 0.887 0.865 Baselines First line 0.639 0.671 0.655 We see that the machine learning approach can achieve good performance in title extraction.\nFor Word documents both precision and recall of the approach are 8 percent higher than those of the baselines.\nFor PowerPoint both precision and recall of the approach are 2 percent higher than those of the baselines.\nWe conduct significance tests.\nThe results are shown in Table 6.\nHere, `Largest'' denotes the baseline of using the largest font size, `First'' denotes the baseline of using the first line.\nThe results indicate that the improvements of machine learning over baselines are statistically significant (in the sense p-value < 0.05) Table 6.\nSign test results Documents Type Sign test between p-value Perceptron vs. Largest 3.59e-26 Word Perceptron vs. First 7.12e-10 Perceptron vs. Largest 0.010 PowerPoint Perceptron vs. 
We see from the results that the two baselines can work well for title extraction, suggesting that font size and position are the most useful features for title extraction. However, it is also obvious that using only these two features is not enough. There are cases in which all the lines have the same font size (i.e., the largest font size), or cases in which the lines with the largest font size only contain general descriptions like 'Confidential' or 'White paper'. For those cases, the 'largest font size' method cannot work well. For similar reasons, the 'first line' method alone cannot work well, either. With the combination of different features (evidence in title judgment), Perceptron can outperform Largest and First.

We also investigated the performance of solely using linguistic features and found that it does not work well. It seems that the format features play the important roles and the linguistic features are supplements.

Figure 8. An example Word document.

Figure 9. An example PowerPoint document.

We conducted an error analysis on the results of Perceptron and found that the errors fell into three categories. (1) About one third of the errors were related to 'hard cases': documents in which the layouts of the first pages were difficult to understand, even for humans. Figures 8 and 9 show examples. (2) Nearly one fourth of the errors were from documents which do not have true titles but only contain bullets. Since we conduct extraction from the top regions, it is difficult to get rid of these errors with the current approach. (3) Confusions between main titles and subtitles were another type of error. Since we only labeled the main titles as titles, the extractions of both titles were considered incorrect. This type of error does little harm to document processing like search, however.

6.5 Comparison between Models
To compare the performance of different machine learning models, we conducted another experiment.
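All accuracy results in this section use 4-fold cross validation; a minimal sketch of such a split (the helper name is ours):

```python
def four_fold_splits(docs):
    """Partition a document list into 4 (train, test) splits for
    cross validation: each fold holds out a different quarter and
    trains on the remaining three quarters."""
    folds = [docs[i::4] for i in range(4)]
    splits = []
    for i in range(4):
        test = folds[i]
        train = [d for j, fold in enumerate(folds) if j != i for d in fold]
        splits.append((train, test))
    return splits
```

Averaging the evaluation measures over the four held-out quarters gives the reported figures.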
Again, we performed 4-fold cross validation on the first data set (MS). Tables 7 and 8 show the results of all four models. It turns out that Perceptron and PMM perform the best, followed by MEMM, and ME performs the worst. In general, the Markovian models perform better than or as well as their classifier counterparts. This seems to be because the Markovian models are trained globally, while the classifiers are trained locally. The Perceptron based models perform better than the ME based counterparts. This seems to be because the Perceptron based models are created to make better classifications, while ME models are constructed for better prediction.

Table 7. Comparison between different learning models for title extraction with Word

  Model        Precision   Recall   F1
  Perceptron   0.810       0.837    0.823
  MEMM         0.797       0.824    0.810
  PMM          0.827       0.823    0.825
  ME           0.801       0.621    0.699

Table 8. Comparison between different learning models for title extraction with PowerPoint

  Model        Precision   Recall   F1
  Perceptron   0.875       0.895    0.885
  MEMM         0.841       0.861    0.851
  PMM          0.873       0.896    0.885
  ME           0.753       0.766    0.759

6.6 Domain Adaptation
We applied the model trained with the first data set (MS) to the second data set (DotCom and DotGov). Tables 9-12 show the results.

Table 9. Accuracies of title extraction with Word in DotGov

             Method               Precision   Recall   F1
  Model      Perceptron           0.716       0.759    0.737
  Baselines  Largest font size    0.549       0.619    0.582
             First line           0.462       0.521    0.490

Table 10. Accuracies of title extraction with PowerPoint in DotGov

             Method               Precision   Recall   F1
  Model      Perceptron           0.900       0.906    0.903
  Baselines  Largest font size    0.871       0.888    0.879
             First line           0.554       0.564    0.559

Table 11. Accuracies of title extraction with Word in DotCom

             Method               Precision   Recall   F1
  Model      Perceptron           0.832       0.880    0.855
  Baselines  Largest font size    0.676       0.753    0.712
             First line           0.577       0.643    0.608

Table 12. Performance of PowerPoint document title extraction in DotCom

             Method               Precision   Recall   F1
  Model      Perceptron           0.910       0.903    0.907
  Baselines  Largest font size    0.864       0.886    0.875
             First line           0.570       0.585    0.577
From the results, we see that the models can be adapted to different domains well; there is almost no drop in accuracy. The results indicate that the patterns of title formats exist across different domains, and it is possible to construct a domain independent model by mainly using formatting information.

6.7 Language Adaptation
We applied the model trained with the data in English (MS) to the data set in Chinese. Tables 13 and 14 show the results.

Table 13. Accuracies of title extraction with Word in Chinese

             Method               Precision   Recall   F1
  Model      Perceptron           0.817       0.805    0.811
  Baselines  Largest font size    0.722       0.755    0.738
             First line           0.743       0.777    0.760

Table 14. Accuracies of title extraction with PowerPoint in Chinese

             Method               Precision   Recall   F1
  Model      Perceptron           0.766       0.812    0.789
  Baselines  Largest font size    0.753       0.813    0.782
             First line           0.627       0.676    0.650

We see that the models can be adapted to a different language, with only small drops in accuracy. Obviously, the linguistic features do not work for Chinese, but the effect of not using them is negligible. The results indicate that the patterns of title formats exist across different languages. From the domain adaptation and language adaptation results, we conclude that the use of formatting information is the key to successful extraction from general documents.

6.8 Search with Extracted Titles
We performed experiments on using title extraction for document retrieval. As a baseline, we employed BM25 without using extracted titles. The ranking mechanism was as described in Section 5. The weights were set heuristically; we did not conduct optimization on the weights.

The evaluation was conducted on a corpus of 1.3 M documents crawled from the intranet of Microsoft, using 100 evaluation queries obtained from this intranet's search engine query logs. 50 queries were from the most popular set, while the other 50 were chosen randomly. Users were asked to provide judgments of the degree of document relevance on a scale of 1 to 5 (1 - detrimental, 2 - bad, 3 - fair, 4 - good, and 5 - excellent).
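The weighted-field BM25 ranking described in Section 5 (in the style of Robertson et al. [21]) can be sketched as a single-document scorer; the function name and the example field weights below are illustrative assumptions, not the paper's actual settings:

```python
from math import log

def bm25f_score(query_terms, field_tf, field_weights, doc_len, avg_doc_len,
                doc_freq, n_docs, k1=1.2, b=0.75):
    """Score one document: the per-term weighted field frequency (wtf)
    is plugged into the BM25 saturation formula with length
    normalization, then multiplied by an idf factor."""
    score = 0.0
    for t in query_terms:
        # Weighted term frequency across fields (body, file-property
        # title, anchor text, extracted title, ...).
        wtf = sum(field_weights[f] * field_tf.get(f, {}).get(t, 0)
                  for f in field_weights)
        if wtf == 0 or t not in doc_freq:
            continue
        norm = k1 * ((1 - b) + b * doc_len / avg_doc_len)
        idf = log(n_docs / doc_freq[t])
        score += (wtf * (k1 + 1)) / (norm + wtf) * idf
    return score
```

Giving the extracted-title field a higher weight than the body means that a query term occurring in the extracted title raises the score more than the same occurrence in the body, which is how the extracted titles influence the ranking.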
Figure 10 shows the results. In the chart, two sets of precision results were obtained by either considering good or excellent documents as relevant (left 3 bars, with relevance threshold 0.5), or by considering only excellent documents as relevant (right 3 bars, with relevance threshold 1.0).

Figure 10. Search ranking results. [Bar chart: precision @10, precision @5, and reciprocal rank for the two ranking functions, at relevance thresholds 0.5 and 1.0.]

Figure 10 shows different document retrieval results with different ranking functions in terms of precision @10, precision @5, and reciprocal rank:
- Blue bar: BM25 including the fields body, title (file property), and anchor text.
- Purple bar: BM25 including the fields body, title (file property), anchor text, and extracted title.

With the additional field of extracted title included in BM25, the precision @10 increased from 0.132 to 0.145, or by ~10%. Thus, it is safe to say that the use of extracted titles can indeed improve the precision of document retrieval.

7. CONCLUSION
In this paper, we have investigated the problem of automatically extracting titles from general documents. We have tried using a machine learning approach to address the problem. Previous work showed that the machine learning approach can work well for metadata extraction from research papers. In this paper, we showed that the approach can work for extraction from general documents as well. Our experimental results indicated that the machine learning approach can work significantly better than the baselines in title extraction from Office documents. Previous work on metadata extraction mainly used linguistic features in documents, while we mainly used formatting information. It appeared that using formatting information is a key for successfully conducting title extraction from general documents.
We tried different machine learning models, including Perceptron, Maximum Entropy, Maximum Entropy Markov Model, and Voted Perceptron. We found that the performance of the Perceptron models was the best. We applied models constructed in one domain to another domain, and applied models trained in one language to another language. We found that the accuracies did not drop substantially across different domains and across different languages, indicating that the models were generic. We also attempted to use the extracted titles in document retrieval. We observed a significant improvement in document ranking performance for search when using extracted title information. All the above investigations were not conducted in previous work, and through our investigations we verified the generality and the significance of the title extraction approach.

8. ACKNOWLEDGEMENTS
We thank Chunyu Wei and Bojuan Zhao for their work on data annotation. We acknowledge Jinzhu Li for his assistance in conducting the experiments. We thank Ming Zhou, John Chen, Jun Xu, and the anonymous reviewers of JCDL'05 for their valuable comments on this paper.

9. REFERENCES
[1] Berger, A. L., Della Pietra, S. A., and Della Pietra, V. J. A maximum entropy approach to natural language processing. Computational Linguistics, 22:39-71, 1996.
[2] Collins, M. Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 1-8, 2002.
[3] Cortes, C. and Vapnik, V. Support-vector networks. Machine Learning, 20:273-297, 1995.
[4] Chieu, H. L. and Ng, H. T. A maximum entropy approach to information extraction from semi-structured and free text. In Proceedings of the Eighteenth National Conference on Artificial Intelligence, 768-791, 2002.
[5] Evans, D. K., Klavans, J. L., and McKeown, K. R. Columbia Newsblaster: multilingual news summarization on the Web. In Proceedings of the Human Language Technology Conference / North American Chapter of the Association for Computational Linguistics Annual Meeting, 1-4, 2004.
[6] Ghahramani, Z. and Jordan, M. I. Factorial hidden Markov models. Machine Learning, 29:245-273, 1997.
[7] Gheel, J. and Anderson, T. Data and metadata for finding and reminding. In Proceedings of the 1999 International Conference on Information Visualization, 446-451, 1999.
[8] Giles, C. L., Petinot, Y., Teregowda, P. B., Han, H., Lawrence, S., Rangaswamy, A., and Pal, N. eBizSearch: a niche search engine for e-Business. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 413-414, 2003.
[9] Giuffrida, G., Shek, E. C., and Yang, J. Knowledge-based metadata extraction from PostScript files. In Proceedings of the Fifth ACM Conference on Digital Libraries, 77-84, 2000.
[10] Han, H., Giles, C. L., Manavoglu, E., Zha, H., Zhang, Z., and Fox, E. A. Automatic document metadata extraction using support vector machines. In Proceedings of the Third ACM/IEEE-CS Joint Conference on Digital Libraries, 37-48, 2003.
[11] Kobayashi, M. and Takeda, K. Information retrieval on the Web. ACM Computing Surveys, 32:144-173, 2000.
[12] Lafferty, J., McCallum, A., and Pereira, F. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, 282-289, 2001.
[13] Li, Y., Zaragoza, H., Herbrich, R., Shawe-Taylor, J., and Kandola, J. S. The perceptron algorithm with uneven margins. In Proceedings of the Nineteenth International Conference on Machine Learning, 379-386, 2002.
[14] Liddy, E. D., Sutton, S., Allen, E., Harwell, S., Corieri, S., Yilmazel, O., Ozgencil, N. E., Diekema, A., McCracken, N., and Silverstein, J. Automatic metadata generation & evaluation. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 401-402, 2002.
[15] Littlefield, A. Effective enterprise information retrieval across new content formats. In Proceedings of the Seventh Search Engine Conference, http://www.infonortics.com/searchengines/sh02/02prog.html, 2002.
[16] Mao, S., Kim, J. W., and Thoma, G. R. A dynamic feature generation system for automated metadata extraction in preservation of digital materials. In Proceedings of the First International Workshop on Document Image Analysis for Libraries, 225-232, 2004.
[17] McCallum, A., Freitag, D., and Pereira, F. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the Seventeenth International Conference on Machine Learning, 591-598, 2000.
[18] Murphy, L. D. Digital document metadata in organizations: roles, analytical approaches, and future research directions. In Proceedings of the Thirty-First Annual Hawaii International Conference on System Sciences, 267-276, 1998.
[19] Pinto, D., McCallum, A., Wei, X., and Croft, W. B. Table extraction using conditional random fields. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 235-242, 2003.
[20] Ratnaparkhi, A. Unsupervised statistical models for prepositional phrase attachment. In Proceedings of the Seventeenth International Conference on Computational Linguistics, 1079-1085, 1998.
[21] Robertson, S., Zaragoza, H., and Taylor, M. Simple BM25 extension to multiple weighted fields. In Proceedings of the ACM Thirteenth Conference on Information and Knowledge Management, 42-49, 2004.
[22] Yi, J. and Sundaresan, N. Metadata based Web mining for relevance. In Proceedings of the 2000 International Symposium on Database Engineering & Applications, 113-121, 2000.
[23] Yilmazel, O., Finneran, C. M., and Liddy, E. D. MetaExtract: An NLP system to automatically assign metadata. In Proceedings of the 2004 Joint ACM/IEEE Conference on Digital Libraries, 241-242, 2004.
[24] Zhang, J. and Dimitroff, A. Internet search engines' response to metadata Dublin Core implementation. Journal of Information Science, 30:310-320, 2004.
[25] Zhang, L., Pan, Y., and Zhang, T. Recognising and using named entities: focused named entity recognition using machine learning. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 281-288, 2004.
[26] http://dublincore.org/groups/corporate/Seattle/
important new findings in this work include that we can train models in one domain and apply them to another domain, and more surprisingly we can even train models in one language and apply them to another language.\nMoreover, we can significantly improve search ranking results in document retrieval by using the extracted titles.\n1.\nINTRODUCTION\nMetadata of documents is useful for many kinds of document processing such as search, browsing, and filtering.\nIdeally, metadata is defined by the authors of documents and is then used by various systems.\nHowever, people seldom define document metadata by themselves, even when they have convenient metadata definition tools [26].\nThus, how to automatically extract metadata from the bodies of documents turns out to be an important research issue.\nMethods for performing the task have been proposed.\nHowever, the focus was mainly on extraction from research papers.\nFor instance, Han et al. [10] proposed a machine learning based method to conduct extraction from research papers.\nThey formalized the problem as that of classification and employed Support Vector Machines as the classifier.\nThey mainly used linguistic features in the model.\nIn this paper, we consider metadata extraction from general documents.\nBy general documents, we mean documents that may belong to any one of a number of specific genres.\nGeneral documents are more widely available in digital libraries, intranets and the internet, and thus investigation on extraction from them is\nsorely needed.\nResearch papers usually have well-formed styles and noticeable characteristics.\nIn contrast, the styles of general documents can vary greatly.\nIt has not been clarified whether a machine learning based approach can work well for this task.\nThere are many types of metadata: title, author, date of creation, etc. 
.\nAs a case study, we consider title extraction in this paper.\nGeneral documents can be in many different file formats: Microsoft Office, PDF (PS), etc. .\nAs a case study, we consider extraction from Office including Word and PowerPoint.\nWe take a machine learning approach.\nWe annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data to train several types of models, and perform title extraction using any one type of the trained models.\nIn the models, we mainly utilize formatting information such as font size as features.\nWe employ the following models: Maximum Entropy Model, Perceptron with Uneven Margins, Maximum Entropy Markov Model, and Voted Perceptron.\nIn this paper, we also investigate the following three problems, which did not seem to have been examined previously.\n(1) Comparison between models: among the models above, which model performs best for title extraction; (2) Generality of model: whether it is possible to train a model on one domain and apply it to another domain, and whether it is possible to train a model in one language and apply it to another language; (3) Usefulness of extracted titles: whether extracted titles can improve document processing such as search.\nExperimental results indicate that our approach works well for title extraction from general documents.\nOur method can significantly outperform the baselines: one that always uses the first lines as titles and the other that always uses the lines in the largest font sizes as titles.\nPrecision and recall for title extraction from Word are 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint are 0.875 and 0.895 respectively.\nIt turns out that the use of format features is the key to successful title extraction.\n(1) We have observed that Perceptron based models perform better in terms of extraction accuracies.\n(2) We have empirically verified that the models trained with our approach are 
generic in the sense that they can be trained on one domain and applied to another, and they can be trained in one language and applied to another.\n(3) We have found that using the extracted titles we can significantly improve precision of document retrieval (by 10%).\nWe conclude that we can indeed conduct reliable title extraction from general documents and use the extracted results to improve real applications.\nThe rest of the paper is organized as follows.\nIn section 2, we introduce related work, and in section 3, we explain the motivation and problem setting of our work.\nIn section 4, we describe our method of title extraction, and in section 5, we describe our method of document retrieval using extracted titles.\nSection 6 gives our experimental results.\nWe make concluding remarks in section 7.\n2.\nRELATED WORK\n2.1 Document Metadata Extraction\nMethods have been proposed for performing automatic metadata extraction from documents; however, the main focus was on extraction from research papers.\nThe proposed methods fall into two categories: the rule based approach and the machine learning based approach.\nGiuffrida et al. [9], for instance, developed a rule-based system for automatically extracting metadata from research papers in Postscript.\nThey used rules like \"titles are usually located on the upper portions of the first pages and they are usually in the largest font sizes\".\nLiddy et al. [14] and Yilmazel el al. [23] performed metadata extraction from educational materials using rule-based natural language processing technologies.\nMao et al. [16] also conducted automatic metadata extraction from research papers using rules on formatting information.\nThe rule-based approach can achieve high performance.\nHowever, it also has disadvantages.\nIt is less adaptive and robust when compared with the machine learning approach.\nHan et al. 
[10], for instance, conducted metadata extraction with the machine learning approach.\nThey viewed the problem as that of classifying the lines in a document into the categories of metadata and proposed using Support Vector Machines as the classifier.\nThey mainly used linguistic information as features.\nThey reported high extraction accuracy from research papers in terms of precision and recall.\n2.2 Information Extraction\nMetadata extraction can be viewed as an application of information extraction, in which given a sequence of instances, we identify a subsequence that represents information in which we are interested.\nHidden Markov Model [6], Maximum Entropy Model [1, 4], Maximum Entropy Markov Model [17], Support Vector Machines [3], Conditional Random Field [12], and Voted Perceptron [2] are widely used information extraction models.\nInformation extraction has been applied, for instance, to part-ofspeech tagging [20], named entity recognition [25] and table extraction [19].\n2.3 Search Using Title Information\nTitle information is useful for document retrieval.\nIn the system Citeseer, for instance, Giles et al. 
managed to extract titles from research papers and make use of the extracted titles in metadata search of papers [8].\nIn web search, the title fields (i.e., file properties) and anchor texts of web pages (HTML documents) can be viewed as ` titles' of the pages [5].\nMany search engines seem to utilize them for web page retrieval [7, 11, 18, 22].\nZhang et al., found that web pages with well-defined metadata are more easily retrieved than those without well-defined metadata [24].\nTo the best of our knowledge, no research has been conducted on using extracted titles from general documents (e.g., Office documents) for search of the documents.\n3.\nMOTIVATION AND PROBLEM SETTING\n4.\nTITLE EXTRACTION METHOD 4.1 Outline\n4.2 Models\n4.3 Features\n4.3.1 Format Features\n4.3.2 Linguistic Features\n5.\nDOCUMENT RETRIEVAL METHOD\n6.\nEXPERIMENTAL RESULTS\n6.1 Data Sets and Evaluation Measures\n7.\nDistributions of document genres.\n6.2 Baselines\n6.3 Accuracy of Titles in File Properties\n6.4 Comparison with Baselines\n6.5 Comparison between Models\n6.6 Domain Adaptation\n6.7 Language Adaptation\n6.8 Search with Extracted Titles\n7.\nCONCLUSION\nIn this paper, we have investigated the problem of automatically extracting titles from general documents.\nWe have tried using a machine learning approach to address the problem.\nPrevious work showed that the machine learning approach can work well for metadata extraction from research papers.\nIn this paper, we showed that the approach can work for extraction from general documents as well.\nOur experimental results indicated that the machine learning approach can work significantly better than the baselines in title extraction from Office documents.\nPrevious work on metadata extraction mainly used linguistic features in documents, while we mainly used formatting information.\nIt appeared that using formatting information is a key for successfully conducting title extraction from general documents.\nWe tried different machine 
learning models including Perceptron, Maximum Entropy, Maximum Entropy Markov Model, and Voted Perceptron.\nWe found that the performance of the Perceptorn models was the best.\nWe applied models constructed in one domain to another domain and applied models trained in one language to another language.\nWe found that the accuracies did not drop substantially across different domains and across different languages, indicating that the models were generic.\nWe also attempted to use the extracted titles in document retrieval.\nWe observed a significant improvement in document ranking performance for search when using extracted title information.\nAll the above investigations were not conducted in previous work, and through our investigations we verified the generality and the significance of the title extraction approach.","lvl-4":"Automatic Extraction of Titles from General Documents using Machine Learning\nABSTRACT\nIn this paper, we propose a machine learning approach to title extraction from general documents.\nBy general documents, we mean documents that can belong to any one of a number of specific genres, including presentations, book chapters, technical papers, brochures, reports, and letters.\nPreviously, methods have been proposed mainly for title extraction from research papers.\nIt has not been clear whether it could be possible to conduct automatic title extraction from general documents.\nAs a case study, we consider extraction from Office including Word and PowerPoint.\nIn our approach, we annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data, train machine learning models, and perform title extraction using the trained models.\nOur method is unique in that we mainly utilize formatting information such as font size as features in the models.\nIt turns out that the use of formatting information can lead to quite accurate extraction from general documents.\nPrecision and recall for title extraction from 
Word is 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint is 0.875 and 0.895 respectively in an experiment on intranet data.\nOther important new findings in this work include that we can train models in one domain and apply them to another domain, and more surprisingly we can even train models in one language and apply them to another language.\nMoreover, we can significantly improve search ranking results in document retrieval by using the extracted titles.\n1.\nINTRODUCTION\nMetadata of documents is useful for many kinds of document processing such as search, browsing, and filtering.\nIdeally, metadata is defined by the authors of documents and is then used by various systems.\nHowever, people seldom define document metadata by themselves, even when they have convenient metadata definition tools [26].\nThus, how to automatically extract metadata from the bodies of documents turns out to be an important research issue.\nMethods for performing the task have been proposed.\nHowever, the focus was mainly on extraction from research papers.\nFor instance, Han et al. [10] proposed a machine learning based method to conduct extraction from research papers.\nThey mainly used linguistic features in the model.\nIn this paper, we consider metadata extraction from general documents.\nBy general documents, we mean documents that may belong to any one of a number of specific genres.\nGeneral documents are more widely available in digital libraries, intranets and the internet, and thus investigation on extraction from them is\nsorely needed.\nResearch papers usually have well-formed styles and noticeable characteristics.\nIn contrast, the styles of general documents can vary greatly.\nIt has not been clarified whether a machine learning based approach can work well for this task.\nThere are many types of metadata: title, author, date of creation, etc. 
.\nAs a case study, we consider title extraction in this paper.\nGeneral documents can be in many different file formats: Microsoft Office, PDF (PS), etc. .\nAs a case study, we consider extraction from Office including Word and PowerPoint.\nWe take a machine learning approach.\nWe annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data to train several types of models, and perform title extraction using any one type of the trained models.\nIn the models, we mainly utilize formatting information such as font size as features.\nExperimental results indicate that our approach works well for title extraction from general documents.\nPrecision and recall for title extraction from Word are 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint are 0.875 and 0.895 respectively.\nIt turns out that the use of format features is the key to successful title extraction.\n(1) We have observed that Perceptron based models perform better in terms of extraction accuracies.\n(3) We have found that using the extracted titles we can significantly improve precision of document retrieval (by 10%).\nWe conclude that we can indeed conduct reliable title extraction from general documents and use the extracted results to improve real applications.\nThe rest of the paper is organized as follows.\nIn section 4, we describe our method of title extraction, and in section 5, we describe our method of document retrieval using extracted titles.\nSection 6 gives our experimental results.\nWe make concluding remarks in section 7.\n2.\nRELATED WORK\n2.1 Document Metadata Extraction\nMethods have been proposed for performing automatic metadata extraction from documents; however, the main focus was on extraction from research papers.\nThe proposed methods fall into two categories: the rule based approach and the machine learning based approach.\nGiuffrida et al. 
[9], for instance, developed a rule-based system for automatically extracting metadata from research papers in Postscript.\nThey used rules like \"titles are usually located on the upper portions of the first pages and they are usually in the largest font sizes\".\nLiddy et al. [14] and Yilmazel el al. [23] performed metadata extraction from educational materials using rule-based natural language processing technologies.\nMao et al. [16] also conducted automatic metadata extraction from research papers using rules on formatting information.\nThe rule-based approach can achieve high performance.\nHowever, it also has disadvantages.\nIt is less adaptive and robust when compared with the machine learning approach.\nHan et al. [10], for instance, conducted metadata extraction with the machine learning approach.\nThey viewed the problem as that of classifying the lines in a document into the categories of metadata and proposed using Support Vector Machines as the classifier.\nThey mainly used linguistic information as features.\nThey reported high extraction accuracy from research papers in terms of precision and recall.\n2.2 Information Extraction\nMetadata extraction can be viewed as an application of information extraction, in which given a sequence of instances, we identify a subsequence that represents information in which we are interested.\nInformation extraction has been applied, for instance, to part-ofspeech tagging [20], named entity recognition [25] and table extraction [19].\n2.3 Search Using Title Information\nTitle information is useful for document retrieval.\nIn the system Citeseer, for instance, Giles et al. 
managed to extract titles from research papers and make use of the extracted titles in metadata search of papers [8].\nTo the best of our knowledge, no research has been conducted on using extracted titles from general documents (e.g., Office documents) for search of the documents.\n7.\nCONCLUSION\nIn this paper, we have investigated the problem of automatically extracting titles from general documents.\nWe have tried using a machine learning approach to address the problem.\nPrevious work showed that the machine learning approach can work well for metadata extraction from research papers.\nIn this paper, we showed that the approach can work for extraction from general documents as well.\nOur experimental results indicated that the machine learning approach can work significantly better than the baselines in title extraction from Office documents.\nPrevious work on metadata extraction mainly used linguistic features in documents, while we mainly used formatting information.\nIt appeared that using formatting information is a key for successfully conducting title extraction from general documents.\nWe tried different machine learning models including Perceptron, Maximum Entropy, Maximum Entropy Markov Model, and Voted Perceptron.\nWe found that the performance of the Perceptron models was the best.\nWe applied models constructed in one domain to another domain and applied models trained in one language to another language.\nWe also attempted to use the extracted titles in document retrieval.\nWe observed a significant improvement in document ranking performance for search when using extracted title information.\nNone of the above investigations had been conducted in previous work, and through our investigations we verified the generality and the significance of the title extraction approach.","lvl-2":"Automatic Extraction of Titles from General Documents using Machine Learning\nABSTRACT\nIn this paper, we propose a machine learning approach to title extraction from 
general documents.\nBy general documents, we mean documents that can belong to any one of a number of specific genres, including presentations, book chapters, technical papers, brochures, reports, and letters.\nPreviously, methods have been proposed mainly for title extraction from research papers.\nIt has not been clear whether it could be possible to conduct automatic title extraction from general documents.\nAs a case study, we consider extraction from Office including Word and PowerPoint.\nIn our approach, we annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data, train machine learning models, and perform title extraction using the trained models.\nOur method is unique in that we mainly utilize formatting information such as font size as features in the models.\nIt turns out that the use of formatting information can lead to quite accurate extraction from general documents.\nPrecision and recall for title extraction from Word are 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint are 0.875 and 0.895 respectively in an experiment on intranet data.\nOther important new findings in this work include that we can train models in one domain and apply them to another domain, and more surprisingly we can even train models in one language and apply them to another language.\nMoreover, we can significantly improve search ranking results in document retrieval by using the extracted titles.\n1.\nINTRODUCTION\nMetadata of documents is useful for many kinds of document processing such as search, browsing, and filtering.\nIdeally, metadata is defined by the authors of documents and is then used by various systems.\nHowever, people seldom define document metadata by themselves, even when they have convenient metadata definition tools [26].\nThus, how to automatically extract metadata from the bodies of documents turns out to be an important research issue.\nMethods for performing the 
task have been proposed.\nHowever, the focus was mainly on extraction from research papers.\nFor instance, Han et al. [10] proposed a machine learning based method to conduct extraction from research papers.\nThey formalized the problem as that of classification and employed Support Vector Machines as the classifier.\nThey mainly used linguistic features in the model.\nIn this paper, we consider metadata extraction from general documents.\nBy general documents, we mean documents that may belong to any one of a number of specific genres.\nGeneral documents are more widely available in digital libraries, intranets and the internet, and thus investigation on extraction from them is sorely needed.\nResearch papers usually have well-formed styles and noticeable characteristics.\nIn contrast, the styles of general documents can vary greatly.\nIt has not been clarified whether a machine learning based approach can work well for this task.\nThere are many types of metadata: title, author, date of creation, etc.\nAs a case study, we consider title extraction in this paper.\nGeneral documents can be in many different file formats: Microsoft Office, PDF (PS), etc.
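The line-classification formulation described above can be made concrete with a small sketch. This is a hypothetical illustration only: the features, weights, and sample lines below are invented for exposition, and are not the feature set or trained model of this paper (those are detailed in Section 4); a simple perceptron stands in for the learned classifier.

```python
# Illustrative sketch: encode a document line ("unit") as binary format
# features and score it with a linear (perceptron-style) classifier.
# All feature choices and weights here are invented for exposition.

def format_features(unit, doc_font_sizes):
    """Encode a unit as binary format features (font size, boldface, position)."""
    largest = max(doc_font_sizes)
    avg = sum(doc_font_sizes) / len(doc_font_sizes)
    return [
        1 if unit["font_size"] == largest else 0,  # largest font in document
        1 if unit["font_size"] > avg else 0,       # above-average font size
        1 if unit["bold"] else 0,                  # boldface
        1 if unit["index"] == 0 else 0,            # first unit on the page
    ]

def perceptron_score(features, weights, bias):
    """Linear score; a positive value predicts 'title line'."""
    return sum(w * f for w, f in zip(weights, features)) + bias

# Hypothetical trained weights favouring large, bold, early lines.
weights, bias = [2.0, 1.0, 1.5, 1.0], -2.5

units = [
    {"index": 0, "font_size": 20, "bold": True,  "text": "Annual Report 2004"},
    {"index": 1, "font_size": 10, "bold": False, "text": "Prepared by the finance team"},
]
sizes = [u["font_size"] for u in units]
for u in units:
    s = perceptron_score(format_features(u, sizes), weights, bias)
    print(u["text"], "-> title" if s > 0 else "-> other")
```

Because the features describe formatting rather than wording, a classifier of this shape can in principle be applied to documents in any genre or language, which is the intuition the paper develops.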
As a case study, we consider extraction from Office including Word and PowerPoint.\nWe take a machine learning approach.\nWe annotate titles in sample documents (for Word and PowerPoint respectively) and take them as training data to train several types of models, and perform title extraction using any one type of the trained models.\nIn the models, we mainly utilize formatting information such as font size as features.\nWe employ the following models: Maximum Entropy Model, Perceptron with Uneven Margins, Maximum Entropy Markov Model, and Voted Perceptron.\nIn this paper, we also investigate the following three problems, which did not seem to have been examined previously.\n(1) Comparison between models: among the models above, which model performs best for title extraction; (2) Generality of model: whether it is possible to train a model on one domain and apply it to another domain, and whether it is possible to train a model in one language and apply it to another language; (3) Usefulness of extracted titles: whether extracted titles can improve document processing such as search.\nExperimental results indicate that our approach works well for title extraction from general documents.\nOur method can significantly outperform the baselines: one that always uses the first lines as titles and the other that always uses the lines in the largest font sizes as titles.\nPrecision and recall for title extraction from Word are 0.810 and 0.837 respectively, and precision and recall for title extraction from PowerPoint are 0.875 and 0.895 respectively.\nIt turns out that the use of format features is the key to successful title extraction.\n(1) We have observed that Perceptron based models perform better in terms of extraction accuracies.\n(2) We have empirically verified that the models trained with our approach are generic in the sense that they can be trained on one domain and applied to another, and they can be trained in one language and applied to another.\n(3) We 
have found that using the extracted titles we can significantly improve precision of document retrieval (by 10%).\nWe conclude that we can indeed conduct reliable title extraction from general documents and use the extracted results to improve real applications.\nThe rest of the paper is organized as follows.\nIn section 2, we introduce related work, and in section 3, we explain the motivation and problem setting of our work.\nIn section 4, we describe our method of title extraction, and in section 5, we describe our method of document retrieval using extracted titles.\nSection 6 gives our experimental results.\nWe make concluding remarks in section 7.\n2.\nRELATED WORK\n2.1 Document Metadata Extraction\nMethods have been proposed for performing automatic metadata extraction from documents; however, the main focus was on extraction from research papers.\nThe proposed methods fall into two categories: the rule based approach and the machine learning based approach.\nGiuffrida et al. [9], for instance, developed a rule-based system for automatically extracting metadata from research papers in Postscript.\nThey used rules like \"titles are usually located on the upper portions of the first pages and they are usually in the largest font sizes\".\nLiddy et al. [14] and Yilmazel et al. [23] performed metadata extraction from educational materials using rule-based natural language processing technologies.\nMao et al. [16] also conducted automatic metadata extraction from research papers using rules on formatting information.\nThe rule-based approach can achieve high performance.\nHowever, it also has disadvantages.\nIt is less adaptive and robust when compared with the machine learning approach.\nHan et al. 
[10], for instance, conducted metadata extraction with the machine learning approach.\nThey viewed the problem as that of classifying the lines in a document into the categories of metadata and proposed using Support Vector Machines as the classifier.\nThey mainly used linguistic information as features.\nThey reported high extraction accuracy from research papers in terms of precision and recall.\n2.2 Information Extraction\nMetadata extraction can be viewed as an application of information extraction, in which given a sequence of instances, we identify a subsequence that represents information in which we are interested.\nHidden Markov Model [6], Maximum Entropy Model [1, 4], Maximum Entropy Markov Model [17], Support Vector Machines [3], Conditional Random Field [12], and Voted Perceptron [2] are widely used information extraction models.\nInformation extraction has been applied, for instance, to part-of-speech tagging [20], named entity recognition [25] and table extraction [19].\n2.3 Search Using Title Information\nTitle information is useful for document retrieval.\nIn the system Citeseer, for instance, Giles et al. 
managed to extract titles from research papers and make use of the extracted titles in metadata search of papers [8].\nIn web search, the title fields (i.e., file properties) and anchor texts of web pages (HTML documents) can be viewed as ` titles' of the pages [5].\nMany search engines seem to utilize them for web page retrieval [7, 11, 18, 22].\nZhang et al. found that web pages with well-defined metadata are more easily retrieved than those without well-defined metadata [24].\nTo the best of our knowledge, no research has been conducted on using extracted titles from general documents (e.g., Office documents) for search of the documents.\n3.\nMOTIVATION AND PROBLEM SETTING\nWe consider the issue of automatically extracting titles from general documents.\nBy general documents, we mean documents that belong to any one of a number of specific genres.\nThe documents can be presentations, books, book chapters, technical papers, brochures, reports, memos, specifications, letters, announcements, or resumes.\nGeneral documents are more widely available in digital libraries, intranets, and the internet, and thus investigation on title extraction from them is sorely needed.\nFigure 1 shows an estimate on distributions of file formats on intranet and internet [15].\nOffice and PDF are the main file formats on the intranet.\nEven on the internet, the documents in the formats are still not negligible, given its extremely large size.\nIn this paper, without loss of generality, we take Office documents as an example.\nFigure 1.\nDistributions of file formats in internet and intranet.\nFor Office documents, users can define titles as file properties using a feature provided by Office.\nWe found in an experiment, however, that users seldom use the feature and thus titles in file properties are usually very inaccurate.\nThat is to say, titles in file properties are usually inconsistent with the ` true' titles in the file bodies that are created by the authors and are visible to 
readers.\nWe collected 6,000 Word and 6,000 PowerPoint documents from an intranet and the internet and examined how many titles in the file properties are correct.\nWe found that surprisingly the accuracy was only 0.265 (cf. Section 6.3 for details).\nA number of reasons can be considered.\nFor example, if one creates a new file by copying an old file, then the file property of the new file will also be copied from the old file.\nIn another experiment, we found that Google uses the titles in file properties of Office documents in search and browsing, but the titles are not very accurate.\nWe created 50 queries to search Word and PowerPoint documents and examined the top 15 results of each query returned by Google.\nWe found that nearly all the titles presented in the search results were from the file properties of the documents.\nHowever, only 0.272 of them were correct.\nActually, ` true' titles usually exist at the beginnings of the bodies of documents.\nIf we can accurately extract the titles from the bodies of documents, then we can exploit reliable title information in document processing.\nThis is exactly the problem we address in this paper.\nMore specifically, given a Word document, we are to extract the title from the top region of the first page.\nGiven a PowerPoint document, we are to extract the title from the first slide.\nA title sometimes consists of a main title and one or two subtitles.\nWe only consider extraction of the main title.\nAs baselines for title extraction, we use two simple methods: always taking the first line as the title, and always taking the line with the largest font size as the title.\nFigure 2.\nTitle extraction from Word document.\nFigure 3.\nTitle extraction from PowerPoint document.\nNext, we define a ` specification' for human judgments in title data annotation.\nThe annotated data will be used in training and testing of the title extraction methods.\nSummary of the specification: The title of a document should be identified on the 
basis of common sense, if there is no difficulty in the identification.\nHowever, there are many cases in which the identification is not easy.\nThere are some rules defined in the specification that guide identification for such cases.\nThe rules include \"a title is usually in consecutive lines in the same format\", \"a document can have no title\", \"titles in images are not considered\", \"a title should not contain words like ` draft', ` whitepaper', etc.\", \"if it is difficult to determine which is the title, select the one in the largest font size\", and \"if it is still difficult to determine which is the title, select the first candidate\".\n(The specification covers all the cases we have encountered in data annotation.)\nFigures 2 and 3 show examples of Office documents from which we conduct title extraction.\nIn Figure 2, ` Differences in Win32 API Implementations among Windows Operating Systems' is the title of the Word document.\n` Microsoft Windows' on the top of this page is a picture and thus is ignored.\nIn Figure 3, ` Building Competitive Advantages through an Agile Infrastructure' is the title of the PowerPoint document.\nWe have developed a tool for annotation of titles by human annotators.\nFigure 4 shows a snapshot of the tool.\nFigure 4.\nTitle annotation tool.\n4.\nTITLE EXTRACTION METHOD\n4.1 Outline\nTitle extraction based on machine learning consists of training and extraction.\nThe same pre-processing step occurs before training and extraction.\nDuring pre-processing, from the top region of the first page of a Word document or the first slide of a PowerPoint document a number of units for processing are extracted.\nIf a line (lines are separated by ` return' symbols) only has a single format, then the line will become a unit.\nIf a line has several parts and each of them has its own format, then each part will become a unit.\nEach unit will be treated as an instance in learning.\nA unit contains not only content information (linguistic 
information) but also formatting information.\nThe input to pre-processing is a document and the output of pre-processing is a sequence of units (instances).\nFigure 5 shows the units obtained from the document in Figure 2.\nFigure 5.\nExample of units.\nIn learning, the input is sequences of units where each sequence corresponds to a document.\nWe take labeled units (labeled as title_begin, title_end, or other) in the sequences as training data and construct models for identifying whether a unit is title_begin, title_end, or other.\nWe employ four types of models: Perceptron, Maximum Entropy (ME), Perceptron Markov Model (PMM), and Maximum Entropy Markov Model (MEMM).\nIn extraction, the input is a sequence of units from one document.\nWe employ one type of model to identify whether a unit is title_begin, title_end, or other.\nWe then extract units from the unit labeled with ` title_begin' to the unit labeled with ` title_end'.\nThe result is the extracted title of the document.\nThe unique characteristic of our approach is that we mainly utilize formatting information for title extraction.\nOur assumption is that although general documents vary in styles, their formats have certain patterns and we can learn and utilize the patterns for title extraction.\nThis is in contrast to the work by Han et al., in which only linguistic features are used for extraction from research papers.\n4.2 Models\nThe four models actually can be considered in the same metadata extraction framework.\nThat is why we apply them together to our current problem.\nEach input is a sequence of instances x1x2...xk together with a sequence of labels y1y2...yk.\nxi and yi represent an instance and its label, respectively (i = 1,2,..., k).\nRecall that an instance here represents a unit.\nA label represents title_begin, title_end, or other.\nHere, k is the number of units in a document.\nIn learning, we train a model which can be generally denoted as a conditional probability distribution P 
(Y1...Yk | X1...Xk) where Xi and Yi denote random variables taking instance xi and label yi as values, respectively (i = 1,2,..., k).\nFigure 6.\nMetadata extraction model.\nWe can make assumptions about the general model in order to make it simple enough for training.\nFor example, we can assume that Y1,..., Yk are independent of each other given X1,..., Xk.\nThus, we have P (Y1...Yk | X1...Xk) = P (Y1 | X1) P (Y2 | X2)... P (Yk | Xk).\nIn this way, we decompose the model into a number of classifiers.\nWe train the classifiers locally using the labeled data.\nAs the classifier, we employ the Perceptron or Maximum Entropy model.\nWe can also assume that the first order Markov property holds for Y1,..., Yk given X1,..., Xk.\nThus, we have P (Y1...Yk | X1...Xk) = P (Y1 | X1) P (Y2 | Y1, X2)... P (Yk | Yk-1, Xk).\nAgain, we obtain a number of classifiers.\nHowever, the classifiers are conditioned on the previous label.\nWhen we employ the Perceptron or Maximum Entropy model as a classifier, the models become a Perceptron Markov Model or Maximum Entropy Markov Model, respectively.\nThat is to say, the two models are more precise.\nIn extraction, given a new sequence of instances, we resort to one of the constructed models to assign a sequence of labels to the sequence of instances, i.e., perform extraction.\nFor Perceptron and ME, we assign labels locally and combine the results globally later using heuristics.\nSpecifically, we first identify the most likely title_begin.\nThen we find the most likely title_end within three units after the title_begin.\nFinally, we extract as a title the units between the title_begin and the title_end.\nFor PMM and MEMM, we employ the Viterbi algorithm to find the globally optimal label sequence.\nIn this paper, for Perceptron, we actually employ an improved variant of it, called Perceptron with Uneven Margin [13].\nThis version of Perceptron can work well especially when the number of positive instances and the number of negative instances differ greatly, which is exactly the case in our problem.\nWe also employ an improved version of Perceptron Markov Model in which 
the Perceptron model is the so-called Voted Perceptron [2].\nIn addition, in training, the parameters of the model are updated globally rather than locally.\n4.3 Features\nThere are two types of features: format features and linguistic features.\nWe mainly use the former.\nThe features are used for both the title-begin and the title-end classifiers.\n4.3.1 Format Features\nFont Size: There are four binary features that represent the normalized font size of the unit (recall that a unit has only one type of font).\nIf the font size of the unit is the largest in the document, then the first feature will be 1, otherwise 0.\nIf the font size is the smallest in the document, then the fourth feature will be 1, otherwise 0.\nIf the font size is above the average font size and not the largest in the document, then the second feature will be 1, otherwise 0.\nIf the font size is below the average font size and not the smallest, the third feature will be 1, otherwise 0.\nIt is necessary to conduct normalization on font sizes.\nFor example, in one document the largest font size might be ` 12pt', while in another the smallest one might be ` 18pt'.\nBoldface: This binary feature represents whether or not the current unit is in boldface.\nAlignment: There are four binary features that respectively represent the location of the current unit: ` left', ` center', ` right', and ` unknown alignment'.\nThe following format features with respect to ` context' play an important role in title extraction.\nEmpty Neighboring Unit: There are two binary features that represent, respectively, whether or not the previous unit and the current unit are blank lines.\nFont Size Change: There are two binary features that represent, respectively, whether or not the font size of the previous unit and the font size of the next unit differ from that of the current unit.\nAlignment Change: There are two binary features that represent, respectively, whether or not the alignment of the previous unit and the 
alignment of the next unit differ from that of the current one.\nSame Paragraph: There are two binary features that represent, respectively, whether or not the previous unit and the next unit are in the same paragraph as the current unit.\n4.3.2 Linguistic Features\nThe linguistic features are based on key words.\nPositive Word: This binary feature represents whether or not the current unit begins with one of the positive words.\nThe positive words include ` title:', ` subject:', ` subject line:'.\nFor example, in some documents the lines of titles and authors have the same formats.\nHowever, if lines begin with one of the positive words, then it is likely that they are title lines.\nNegative Word: This binary feature represents whether or not the current unit begins with one of the negative words.\nThe negative words include ` To', ` By', ` created by', ` updated by', etc.\nThere are more negative words than positive words.\nThe above linguistic features are language dependent.\nWord Count: A title should not be too long.\nWe heuristically create four intervals: [1, 2], [3, 6], [7, 9] and [9, \u221e) and define one feature for each interval.\nIf the number of words in a title falls into an interval, then the corresponding feature will be 1; otherwise 0.\nEnding Character: This feature represents whether the unit ends with `:', ` -', or other special characters.\nA title usually does not end with such a character.\n5.\nDOCUMENT RETRIEVAL METHOD\nWe describe our method of document retrieval using extracted titles.\nTypically, in information retrieval a document is split into a number of fields including body, title, and anchor text.\nA ranking function in search can use different weights for different fields of the document.\nAlso, titles are typically assigned high weights, indicating that they are important for document retrieval.\nAs explained previously, our experiment has shown that a significant number of documents actually have incorrect titles in the file 
properties, and thus in addition to using them we use the extracted titles as one more field of the document.\nBy doing this, we attempt to improve the overall precision.\nIn this paper, we employ a modification of BM25 that allows field weighting [21].\nAs fields, we make use of body, title, extracted title and anchor.\nFirst, for each term in the query we count the term frequency in each field of the document; each field frequency is then weighted according to the corresponding weight parameter: tf (t, d) = \u03a3f wf \u00b7 tff (t, d), where tff (t, d) is the frequency of term t in field f of document d and wf is the weight of field f.\nSimilarly, we compute the document length as a weighted sum of lengths of each field.\nAverage document length in the corpus becomes the average of all weighted document lengths.\nA third data set in Chinese was also downloaded from the internet.\nIt includes 500 Word documents and 500 PowerPoint documents in Chinese.\nWe manually labeled the titles of all the documents, on the basis of our specification.\nNot all the documents in the two data sets have titles.\nTable 1 shows the percentages of the documents having titles.\nWe see that DotCom and DotGov have more PowerPoint documents with titles than MS. 
This might be because PowerPoint documents published on the internet are more formal than those on the intranet.\nTable 1.\nThe portion of documents with titles\nIn our experiments we used k1 = 1.8, b = 0.75.\nWeight for content was 1.0, title was 10.0, anchor was 10.0, and extracted title 5.0.\n6.\nEXPERIMENTAL RESULTS\n6.1 Data Sets and Evaluation Measures\nWe used two data sets in our experiments.\nFirst, we downloaded and randomly selected 5,000 Word documents and 5,000 PowerPoint documents from an intranet of Microsoft.\nWe call it MS hereafter.\nSecond, we downloaded and randomly selected 500 Word and 500 PowerPoint documents from the DotGov and DotCom domains on the internet, respectively.\nFigure 7 shows the distributions of the genres of the documents.\nWe see that the documents are indeed ` general documents' as we define them.\nFigure 7.\nDistributions of document genres.\nIn our experiments, we conducted evaluations on title extraction in terms of precision, recall, and F-measure.\nThe evaluation measures are defined as follows:\nHere, A, B, C, and D are numbers of documents as those defined in Table 2.\nTable 2.\nContingency table with regard to title extraction\n6.2 Baselines\nWe test the accuracies of the two baselines described in section 4.2.\nThey are denoted as ` largest font size' and ` first line' respectively.\n6.3 Accuracy of Titles in File Properties\nWe investigate how many titles in the file properties of the documents are reliable.\nWe view the titles annotated by humans as true titles and test how many titles in the file properties approximately match with the true titles.\nWe use Edit Distance to conduct the approximate match.\n(Approximate match is only used in this evaluation.\nThis is because sometimes human annotated titles can be slightly different from the titles in file properties on the surface, e.g., contain extra spaces.)\nGiven string A and string B: if ((D == 0) or (D \/ (La + Lb) < \u03b8)) then string A = string 
B (where D is the Edit Distance between string A and string B, La is the length of string A, and Lb is the length of string B).\nTable 3.\nAccuracies of titles in file properties\n6.4 Comparison with Baselines\nWe conducted title extraction from the first data set (Word and PowerPoint in MS).\nAs the model, we used Perceptron.\nWe conduct 4-fold cross validation.\nThus, all the results reported here are those averaged over 4 trials.\nTables 4 and 5 show the results.\nWe see that Perceptron significantly outperforms the baselines.\nIn the evaluation, we use exact matching between the true titles annotated by humans and the extracted titles.\nTable 4.\nAccuracies of title extraction with Word\nTable 5.\nAccuracies of title extraction with PowerPoint\nWe see that the machine learning approach can achieve good performance in title extraction.\nFor Word documents both precision and recall of the approach are 8 percent higher than those of the baselines.\nFor PowerPoint both precision and recall of the approach are 2 percent higher than those of the baselines.\nWe conduct significance tests.\nThe results are shown in Table 6.\nHere, ` Largest' denotes the baseline of using the largest font size, ` First' denotes the baseline of using the first line.\nThe results indicate that the improvements of machine learning over baselines are statistically significant (p-value < 0.05).\nTable 6.\nSign test results\nWe see, from the results, that the two baselines can work well for title extraction, suggesting that font size and position information are most useful features for title extraction.\nHowever, it is also obvious that using only these two features is not enough.\nThere are cases in which all the lines have the same font size (i.e., the largest font size), or cases in which the lines with the largest font size only contain general descriptions like ` Confidential', ` White paper', etc. 
For those cases, the ` largest font size' method cannot work well.\nFor similar reasons, the ` first line' method alone cannot work well, either.\nWith the combination of different features (evidence in title judgment), Perceptron can outperform Largest and First.\nWe investigate the performance of solely using linguistic features.\nWe found that it does not work well.\nIt seems that the format features play important roles and the linguistic features are supplements.\nWe conducted an error analysis on the results of Perceptron.\nWe found that the errors fell into three categories.\n(1) About one third of the errors were related to ` hard cases'.\nIn these documents, the layouts of the first pages were difficult to understand, even for humans.\nFigures 8 and 9 show examples.\n(2) Nearly one fourth of the errors were from the documents which do not have true titles but only contain bullets.\nSince we conduct extraction from the top regions, it is difficult to get rid of these errors with the current approach.\n(3) Confusions between main titles and subtitles were another type of error.\nSince we only labeled the main titles as titles, the extractions of both titles were considered incorrect.\nThis type of error does little harm to document processing like search, however.\n6.5 Comparison between Models\nTo compare the performance of different machine learning models, we conducted another experiment.\nAgain, we perform 4-fold cross validation on the first data set (MS).\nFigure 8.\nAn example Word document.\nFigure 9.\nAn example PowerPoint document.\nTables 7 and 8 show the results of all the four models.\nIt turns out that Perceptron and PMM perform the best, followed by MEMM, and ME performs the worst.\nIn general, the Markovian models perform better than or as well as their classifier counterparts.\nThis seems to be because the Markovian models are trained globally, while the classifiers are trained locally.\nThe Perceptron based models perform better than 
the ME based counterparts.\nThis seems to be because the Perceptron based models are created to make better classifications, while ME models are constructed for better prediction.\nTable 7.\nComparison between different learning models for Word\nTable 8.\nComparison between different learning models for PowerPoint\n6.6 Domain Adaptation\nWe apply the model trained with the first data set (MS) to the second data set (DotCom and DotGov).\nTables 9-12 show the results.\nTable 9.\nAccuracies of title extraction with Word in DotGov\nTable 10.\nAccuracies of title extraction with PowerPoint in DotGov\nTable 11.\nAccuracies of title extraction with Word in DotCom\nTable 12.\nAccuracies of title extraction with PowerPoint in DotCom\nFrom the results, we see that the models can be adapted to different domains well.\nThere is almost no drop in accuracy.\nThe results indicate that the patterns of title formats exist across different domains, and it is possible to construct a domain independent model by mainly using formatting information.\n6.7 Language Adaptation\nWe apply the model trained with the data in English (MS) to the data set in Chinese.\nTables 13-14 show the results.\nTable 13.\nAccuracies of title extraction with Word in Chinese\nTable 14.\nAccuracies of title extraction with PowerPoint in Chinese\nWe see that the models can be adapted to a different language.\nThere are only small drops in accuracy.\nObviously, the linguistic features do not work for Chinese, but the effect of not using them is negligible.\nThe results indicate that the patterns of title formats exist across different languages.\nFrom the domain adaptation and language adaptation results, we conclude that the use of formatting information is the key to a successful extraction from general documents.\n6.8 Search with Extracted Titles\nWe performed experiments on using title extraction for document retrieval.\nAs a baseline, we employed BM25 without using extracted titles.\nThe ranking mechanism was as described in Section 5.\nThe weights were 
heuristically set.\nWe did not conduct optimization on the weights.\nThe evaluation was conducted on a corpus of 1.3 M documents crawled from the intranet of Microsoft using 100 evaluation queries obtained from this intranet's search engine query logs.\n50 queries were from the most popular set, while 50 queries other were chosen randomly.\nUsers were asked to provide judgments of the degree of document relevance from a scale of 1to 5 (1 meaning detrimental, 2--bad, 3--fair, 4--good and 5--excellent).\nFigure 10 shows the results.\nIn the chart two sets of precision results were obtained by either considering good or excellent documents as relevant (left 3 bars with relevance threshold 0.5), or by considering only excellent documents as relevant (right 3 bars with relevance threshold 1.0)\nFigure 10.\nSearch ranking results.\nFigure 10 shows different document retrieval results with different ranking functions in terms of precision @ 10, precision @ 5 and reciprocal rank:\n\u2022 Blue bar--BM25 including the fields body, title (file property), and anchor text.\n\u2022 Purple bar--BM25 including the fields body, title (file property), anchor text, and extracted title.\nWith the additional field of extracted title included in BM25 the precision @ 10 increased from 0.132 to 0.145, or by ~ 10%.\nThus, it is safe to say that the use of extracted title can indeed improve the precision of document retrieval.\n7.\nCONCLUSION\nIn this paper, we have investigated the problem of automatically extracting titles from general documents.\nWe have tried using a machine learning approach to address the problem.\nPrevious work showed that the machine learning approach can work well for metadata extraction from research papers.\nIn this paper, we showed that the approach can work for extraction from general documents as well.\nOur experimental results indicated that the machine learning approach can work significantly better than the baselines in title extraction from Office 
documents. Previous work on metadata extraction mainly used linguistic features in documents, while we mainly used formatting information. It appears that using formatting information is a key to successfully conducting title extraction from general documents. We tried different machine learning models, including Perceptron, Maximum Entropy, Maximum Entropy Markov Model, and Voted Perceptron, and found that the performance of the Perceptron models was the best. We applied models constructed in one domain to another domain, and models trained in one language to another language, and found that the accuracies did not drop substantially across different domains and languages, indicating that the models were generic. We also used the extracted titles in document retrieval and observed a significant improvement in document ranking performance for search. None of the above investigations were conducted in previous work, and through them we verified the generality and the significance of the title extraction approach.

Location based Indexing Scheme for DAYS

ABSTRACT
Data dissemination through wireless channels for broadcasting information to consumers is becoming quite common. Many dissemination schemes have been proposed, but most of them push data to wireless channels for general consumption. Push based broadcast [1] is essentially asymmetric, i.e., the volume of data is higher from the server to the users than from the users back to the server.
Push based schemes require some indexing which indicates when the data will be broadcast and its position in the broadcast. Access latency and tuning time are the two main parameters which may be used to evaluate an indexing scheme. Two of the important indexing schemes proposed earlier were the tree based and the exponential indexing schemes. None of these schemes addresses the requirements of location dependent data (LDD), which is a highly desirable feature of data dissemination. In this paper, we discuss the broadcast of LDD in our project DAta in Your Space (DAYS), and propose a scheme for indexing LDD. We argue that this scheme, when applied to LDD, significantly improves performance in terms of tuning time over the above mentioned schemes. We prove our argument with the help of simulation results.

Debopam Acharya and Vijay Kumar 1
Computer Science and Informatics, University of Missouri-Kansas City, Kansas City, MO 64110
dargc(kumarv)@umkc.edu

Categories and Subject Descriptors
H.3.1 [Information Systems]: Information Storage and Retrieval - content analysis and indexing; H.3.3 [Information Systems]: Information Storage and Retrieval - information search and retrieval.

General Terms
Algorithms, Performance, Experimentation

1. INTRODUCTION
Wireless data dissemination is an economical and efficient way to make desired data available to a large number of mobile or static users. The mode of data transfer is essentially asymmetric; that is, the capacity of the transfer of data from the server to the client, or mobile user (downstream communication), is significantly larger than that from the client to the server (upstream communication). The effectiveness of a data dissemination system is judged by its ability to provide users the required data anywhere and at any time. One of the best ways to accomplish this is through the dissemination of highly personalized Location Based Services (LBS), which allow users to access personalized location dependent data. An example would be someone using their mobile device to search for a vegetarian restaurant. The LBS application would interact with other location technology components, or use the mobile user's input, to determine the user's location and download information about restaurants in proximity to the user by tuning into the wireless channel which disseminates LDD. We see a limited deployment of LBS by some service providers, but there is every indication that, with time, some of the complex technical problems such as a uniform location framework, calculating and tracking locations in all types of places, positioning in various environments, and innovative location applications will be resolved
and LBS will become a common facility that helps to improve market productivity and customer comfort. In our project DAYS, we use a wireless data broadcast mechanism to push LDD to users, and mobile users monitor and tune the channel to find and download the required data. A simple broadcast, however, is likely to cause significant performance degradation in energy constrained mobile devices, and a common solution to this problem is the use of efficient air indexing. The indexing approach stores control information which tells the user about the data location in the broadcast and how and when it can be accessed. A mobile user thus has some free time to go into doze mode, which conserves valuable power. It also allows the user to personalize his own mobile device by selectively tuning to the information of his choice. Access efficiency and energy conservation are the two issues which are significant for data broadcast systems. Access efficiency refers to the latency experienced from when a request is initiated until the response is received. Energy conservation [7, 10] refers to the efficient use of the limited energy of the mobile device in accessing broadcast data. Two parameters that affect these are the tuning time and the access latency. Tuning time refers to the time during which the mobile unit (MU) remains in the active state to tune the channel and download its required data. It can also be defined as the number of buckets tuned by the mobile device in the active state to get its required data. Access latency may be defined as the time elapsed from when a request is issued until the response is received.

1 This research was supported by a grant from NSF IIS-0209170.

Several indexing schemes have been proposed in the past, and prominent among them are the tree based and the exponential indexing schemes [17]. The main disadvantage of the tree based schemes is that they are based on centralized tree structures. To start a search,
the MU has to wait until it reaches the root of the next broadcast tree. This significantly affects the tuning time of the mobile unit. The exponential schemes facilitate index replication by sharing links in different search trees. For broadcasts with a large number of pages, the exponential scheme has been shown to perform similarly to the tree based schemes in terms of access latency. Also, the average length of the broadcast increases due to the index replication, and this may cause a significant increase in the access latency. None of the above indexing schemes is equally effective in broadcasting location dependent data. In addition to providing low latency, they lack properties which are needed to address LDD issues. We propose an indexing scheme in DAYS which takes care of some of these problems. We show with simulation results that our scheme outperforms some of the earlier indexing schemes for broadcasting LDD in terms of tuning time. The rest of the paper is presented as follows. In section 2, we discuss previous work related to indexing of broadcast data. Section 3 describes our DAYS architecture. Location dependent data, its generation and subsequent broadcast are presented in section 4. Section 5 discusses our indexing scheme in detail. Simulation of our scheme and its performance evaluation are presented in section 6. Section 7 concludes the paper and mentions future related work.

2. PREVIOUS WORK
Several disk-based indexing techniques have been used for air indexing. Imielinski et al.
[5, 6] applied the B+ index tree, where the leaf nodes store the arrival times of the data items. The distributed indexing method was proposed to efficiently replicate and distribute the index tree in a broadcast. Specifically, the index tree is divided into a replicated part and a non-replicated part. Each broadcast consists of the replicated part and the non-replicated part that indexes the data items immediately following it. As such, each node in the non-replicated part appears only once in a broadcast and, hence, reduces the replication cost and access latency while achieving a good tuning time. Chen et al. [2] and Shivakumar et al. [8] considered unbalanced tree structures to optimize energy consumption for non-uniform data access. These structures minimize the average index search cost by reducing the number of index searches for hot data at the expense of spending more on cold data. Tan and Yu discussed data and index organization under skewed broadcast. Hashing and signature methods have also been suggested for wireless broadcast that supports equality queries [9]. A flexible indexing method was proposed in [5]. The flexible index first sorts the data items in ascending (or descending) order of the search key values and then divides them into p segments. The first bucket in each data segment contains a control index, which is a binary index mapping a given key value to the segment containing that key, and a local index, which is an m-entry index mapping a given key value to the buckets within the current segment. By tuning the parameters p and m, mobile clients can achieve either a good tuning time or a good access latency. Another indexing technique proposed is the exponential indexing scheme [17]. In this scheme, a parameterized index, called the exponential index, is used to optimize the access latency or the tuning time. It facilitates index replication by linking different search trees. All of the above mentioned schemes have been applied
to data items which are not related to each other. These unrelated data may be clustered or non-clustered. However, none of them has specifically addressed the requirements of LDD. Location dependent data are data which are associated with a location. Presently there are several applications that deal with LDD [13, 16]. Almost all of them depict LDD with the help of hierarchical structures [3, 4]. This is based on the containment property of location dependent data. The containment property helps determine the relative position of an object by defining or identifying the locations that contain those objects. The subordinate locations are hierarchically related to each other. Thus, the containment property limits the range of availability or operation of a service. We use this containment property in our indexing scheme to index LDD.

3. DAYS ARCHITECTURE
DAYS has been conceptualized to disseminate topical and non-topical data to users in a local broadcast space and to accept queries from individual users globally. Topical data, for example weather information, traffic information, stock information, etc., constantly change over time. Non-topical data such as hotel, restaurant, and real estate prices do not change so often. Thus, we envision the presence of two types of data distribution: in the first case, the server pushes data to local users through wireless channels; the other case deals with the server sending the results of user queries through downlink wireless channels. Technically, we see the presence of two types of queues in the pull based data access. One is a heavily loaded queue containing globally uploaded queries. The other is a comparatively lightly loaded queue consisting of locally uploaded queries. The DAYS architecture [12], as shown in figure 1, consists of a Data Server, Broadcast Scheduler, DAYS Coordinator, a network of LEO satellites for global data delivery, and a local broadcast space. Data is pushed into the local broadcast space so that
users may tune into the wireless channels to access the data. The local broadcast space consists of a broadcast tower, mobile units, and a network of data staging machines called the surrogates. Data staging in surrogates has earlier been investigated as a successful technique [12, 15] to cache users' related data. We believe that data staging can be used to drastically reduce the latency time for both the local broadcast data and the global responses. Query requests in the surrogates may subsequently be used to generate the popularity patterns which ultimately decide the broadcast schedule [12].

Figure 1. DAYS Architecture
Figure 2. Location Structure of Starbucks, Plaza

4. LOCATION DEPENDENT DATA (LDD)
We argue that incorporating location information in wireless data broadcast can significantly decrease the access latency. This property becomes highly useful for a mobile unit which has limited storage and processing capability. There are a variety of applications to obtain information about traffic, restaurant and hotel booking, fast food, gas stations, post offices, grocery stores, etc. If these applications are coupled with location information, then the search will be fast and highly cost effective. An important property of locations is containment, which helps to determine the relative location of an object with respect to its parent that contains the object. Thus, containment limits the range of availability of a data item. We use this property in our indexing scheme. The database contains the broadcast contents, which are converted into LDD [14] by associating them with respective locations so that they can be broadcast in a
clustered manner. The clustering of LDD helps the user locate information efficiently and supports the containment property. We present an example to justify our proposition.

Example: Suppose a user issues the query "Starbucks Coffee in Plaza, please" to access information about the Plaza branch of Starbucks Coffee in Kansas City. In a location-independent setup, the system would list all Starbucks coffee shops in the Kansas City area. It is obvious that such responses increase access latency and are not desirable. These can be managed efficiently if the server has location dependent data, i.e., a mapping between a Starbucks coffee shop's data and its physical location. Also, for a query covering a range of Starbucks locations, a single query requesting locations for the entire region of Kansas City, as shown in Figure 2, will suffice. This will save an enormous amount of bandwidth by decreasing the number of messages, and at the same time will help prevent a scalability bottleneck in highly populated areas.

4.1 Mapping Function for LDD
The example justifies the need for a mapping function to process location dependent queries. This will be especially important for pull based queries across the globe, for which the reply could be composed for different parts of the world. The mapping function is necessary to construct the broadcast schedule. We define a Global Property Set (GPS) [11], an Information Content (IC) set, and a Location Hierarchy (LH) set, where IC ⊆ GPS and LH ⊆ GPS, to develop a mapping function. LH = {l1, l2, l3, ..., lk}, where li represents a location in the location tree, and IC = {ic1, ic2, ic3, ..., icn}, where ici represents an information type. For example, if traffic, weather, and stock information are in the broadcast, then IC = {ictraffic, icweather, icstock}. The mapping scheme must be able to identify and select an IC member and a LH node for (a) correct association, (b) granularity match, (c) and termination
condition. For example, weather ∈ IC could be associated with a country, a state, a city, or a town of LH. The granularity match between weather and a LH node is as per user requirement. Thus, with a coarse granularity, weather information is associated with a country to get the country's weather, and with a town at a finer granularity. If a town is the finest granularity, then it defines the terminal condition for the association between IC and LH for weather. This means that a user cannot get weather information about a subdivision of a town; in reality, the weather of a subdivision does not make any sense. We develop a simple heuristic mapping scheme based on user requirement. Let IC = {m1, m2, m3, ..., mk}, where mi represents an element of IC, and let LH = {n1, n2, n3, ..., nl}, where ni represents a member of LH. We define the GPS for IC (GPSIC) ⊆ GPS and for LH (GPSLH) ⊆ GPS as GPSIC = {P1, P2, ..., Pn}, where P1, P2, P3, ..., Pn are properties of its members, and GPSLH = {Q1, Q2, ..., Qm}, where Q1, Q2, ..., Qm are properties of its members. The properties of a particular member of IC are a subset of GPSIC. It is generally true that (property set(mi ∈ IC) ∩ property set(mj ∈ IC)) ≠ ∅; however, there may be cases where the intersection is null. For example, stock ∈ IC and movie rating ∈ IC do not have any property in common. We assume that any two or more members of IC have at least one common geographical property (i.e., location), because DAYS broadcasts information about those categories which are closely tied with a location. For example, the stock of a company is related to a country, weather is related to a city or state, etc. We define the property subset of mi ∈ IC as PSmi ∀ mi ∈ IC, with PSmi = {P1, P2, ..., Pr} where r ≤ n.
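The property-set mapping described above can be sketched in code. This is a minimal illustration, not the paper's implementation: Python is our choice of language, and the property sets, location hierarchy, and function name below are small hypothetical stand-ins in the spirit of the paper's example.

```python
# Sketch of the IC -> LH mapping described above.
# All names and property sets here are illustrative assumptions.

def map_ic_to_lh(ps_ic, ps_lh, children, multi_level):
    """Map each IC member m to the LH nodes n with PS(m) and PS(n)
    sharing at least one property; if m may map to multiple
    granularity levels, include the children of each matched node."""
    mapping = {}
    for m, pm in ps_ic.items():
        # nodes whose property set intersects PS(m)
        targets = [n for n, pn in ps_lh.items() if pm & pn]
        if multi_level.get(m):
            expanded = list(targets)
            for n in targets:
                expanded.extend(children.get(n, []))
            # drop duplicates while preserving order
            targets = list(dict.fromkeys(expanded))
        mapping[m] = targets
    return mapping

# Hypothetical property subsets, loosely following the paper's example.
ps_ic = {"Stock": {"Stock-price", "CountryName"},
         "Weather": {"StateName", "CityName", "Temp"}}
ps_lh = {"Country": {"CountryName", "CountrySize"},
         "State": {"StateName", "StateSize"},
         "City": {"CityName", "CitySize"}}
children = {"Country": ["State"], "State": ["City"], "City": []}
multi_level = {"Stock": False, "Weather": True}

print(map_ic_to_lh(ps_ic, ps_lh, children, multi_level))
```

Under these toy sets, Stock maps only to Country (single granularity), while Weather maps to State, City, and their children, mirroring the intersection test PSmx ∩ PSny ≠ ∅ used in the next paragraph.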
∀ Pr {Pr ∈ PSmi → Pr ∈ GPSIC}, which implies that ∀ i, PSmi ⊆ GPSIC. The geographical properties of this set indicate whether mi ∈ IC can be mapped to only a single granularity level (i.e., a single location) in LH or to multiple granularity levels (i.e., more than one node in the hierarchy) in LH. How many and which granularity levels a given mi should map to depends upon the level at which the service provider wants to provide information about the mi in question. Similarly, we define a property subset of the LH members as PSnj ∀ nj ∈ LH, which can be written as PSnj = {Q1, Q2, Q3, ..., Qs} where s ≤ m. In addition, ∀ Qs {Qs ∈ PSnj → Qs ∈ GPSLH}, which implies that ∀ j, PSnj ⊆ GPSLH. The process of mapping from IC to LH is then identifying, for some mx ∈ IC, one or more ny ∈ LH such that PSmx ∩ PSny ≠ ∅. This means that mx maps to ny and all children of ny if mx can map to multiple granularity levels, or mx maps only to ny if mx can map to a single granularity level. We assume that new members can join and old members can leave IC or LH at any time. The deletion of members from the IC space is simple, but the addition of members to the IC space is more restrictive. If we want to add a new member to the IC space, then we first define a property set for the new member, PSnew_m = {P1, P2, P3, ..., Pt}, and add it to IC only if the condition ∀ Pw {Pw ∈ PSnew_m → Pw ∈ GPSIC} is satisfied. This scheme has the additional benefit of allowing the information service providers to control what kind of information they wish to provide to the users. We present the following example to illustrate the mapping concept.

IC = {Traffic, Stock, Restaurant, Weather, Important history dates, Road conditions}
LH = {Country, State, City, Zip-code, Major-roads}
GPSIC = {Surface-mobility, Roads, High, Low, Italian-food, StateName, Temp, CityName,
Seat-availability, Zip, Traffic-jams, Stock-price, CountryName, MajorRoadName, Wars, Discoveries, World}
GPSLH = {Country, CountrySize, StateName, CityName, Zip, MajorRoadName}
PS(ICStock) = {Stock-price, CountryName, High, Low}
PS(ICTraffic) = {Surface-mobility, Roads, High, Low, Traffic-jams, CityName}
PS(ICImportant dates in history) = {World, Wars, Discoveries}
PS(ICRoad conditions) = {Precipitation, StateName, CityName}
PS(ICRestaurant) = {Italian-food, Zip-code}
PS(ICWeather) = {StateName, CityName, Precipitation, Temperature}
PS(LHCountry) = {CountryName, CountrySize}
PS(LHState) = {StateName, StateSize}
PS(LHCity) = {CityName, CitySize}
PS(LHZip-code) = {ZipCodeNum}
PS(LHMajor roads) = {MajorRoadName}

Now, only PS(ICStock) ∩ PS(LHCountry) ≠ ∅. In addition, PS(ICStock) indicates that Stock can map to only a single location, Country. When we consider the member Traffic of the IC space, only PS(ICTraffic) ∩ PS(LHCity) ≠ ∅. As PS(ICTraffic) indicates that Traffic can map to only a single location, it maps only to City and none of its children. Unlike Stock, however, a mapping of Traffic to Major roads, which is a child of City, would be meaningful; service providers have the right to control the granularity levels at which they provide information about a member of the IC space. PS(ICRoad conditions) ∩ PS(LHState) ≠ ∅ and PS(ICRoad conditions) ∩ PS(LHCity) ≠ ∅, so Road conditions maps to State as well as City. As PS(ICRoad conditions) indicates that Road conditions can map to multiple granularity levels, Road conditions will also map to Zip-code and Major roads, which are the children of State and City. Similarly, Restaurant maps only to Zip-code, and Weather maps to State, City, and their children, Major roads and Zip-code.

5. LOCATION BASED INDEXING SCHEME
This section discusses our location based indexing scheme (LBIS). The scheme is designed to conform to the LDD broadcast in our project DAYS. As discussed
earlier, we use the containment property of LDD in the indexing scheme. This significantly limits the search for the required data to a particular portion of the broadcast; thus, we argue that the scheme provides bounded tuning time. We now describe the architecture of our indexing scheme. Our scheme contains separate data buckets and index buckets. The index buckets are of two types. The first type is called the Major index. The Major index provides information about the types of data broadcast. For example, if we intend to broadcast information like Entertainment, Weather, Traffic, etc., then the major index points to these major types of information and/or their main subtypes, the number of main subtypes varying from one type of information to another. This strictly limits the number of accesses to a Major index. The Major index never points to the original data; it points to sub indexes called Minor indexes. The minor indexes are the indexes which actually point to the original data. We call these minor index pointers Location Pointers, as they point to the data which are associated with a location. Thus, our search for a data item includes accessing a major index and some minor indexes, the number of minor indexes varying with the type of information. Our indexing scheme therefore takes into account the hierarchical nature of LDD and the containment property, and requires our broadcast schedule to be clustered based on data type and location. The structure of the location hierarchy requires the use of different types of index at different levels. The structure and positions of the indexes strictly depend on the location hierarchy as described in our mapping scheme earlier. We illustrate the implementation of our scheme with an example; the rules for framing the index are mentioned subsequently.

Figure 3. Location Mapped Information for Broadcast
Figure 4. Data coupled with Location based Index

Example: Let us suppose that our broadcast content contains ICEntertainment and ICWeather, represented as shown in Fig. 3. Ai represents areas of a city and Ri represents roads in a certain area. The leaves of the Weather structure represent four cities. The index structure is given in Fig. 4, which shows the positions of the major and minor indexes and data in the broadcast schedule. We propose the following rules for the creation of the air indexed broadcast schedule:
• The major index and the minor index are created.
• The major index contains the position and range of the different types of data items (Weather and Entertainment, Figure 3) and their categories. The sub categories of Entertainment, Movie and Restaurant, are also in the index. Thus, the major index contains Entertainment (E), Entertainment-Movie (EM), Entertainment-Restaurant (ER), and Weather (W). The tuple (S, L) represents the starting position (S) of the data item, and L represents the range of the item in terms of the number of data buckets.
• The minor index contains the variables A, R and a pointer Next. In our example (Figure 3), road R represents the first node of area A. The minor index is used to point to the actual data buckets present at the lowest levels of the hierarchy. In contrast, the major index points to a broader range of locations and so contains information about the main and sub categories of data.
• Index information is not incorporated in the data buckets. Index buckets are separate, containing only the control information.
• The number
of major index buckets is m = #(IC), where IC = {ic1, ic2, ic3, ..., icn}, ici represents an information type, and # represents the cardinality of the Information Content set IC. In this example, IC = {icMovie, icWeather, icRestaurant} and so #(IC) = 3. Hence, the number of major index buckets is 3.
• The mechanism to resolve a query is present in the Java based coordinator in the MU. For example, if a query Q is presented as Q (Entertainment, Movie, Road_1), then the resultant search will be for the EM information in the major index; we say Q → EM.

Our proposed index works as follows. Let us suppose that an MU issues a query which is represented by the Java coordinator present in the MU as "Restaurant information on Road 7". This is resolved by the coordinator as Q → ER, which means one has to search for the ER unit of the index in the major index. Let us suppose that the MU logs into the channel at R2. The first index it receives is a minor index after R2. In this index, the value of the Next variable is 4, which means that the next major index is present after bucket 4. The MU may go into doze mode. It becomes active after bucket 4 and receives the major index. It searches for the ER information, which is the first entry in this index. It is now certain that the MU will get the position of the data bucket in the adjoining minor index. The second unit in the minor index depicts the position of the required data, R7: it tells that the data bucket is the first bucket in Area 4. The MU goes into doze mode again, becomes active after bucket 6, and gets the required data in the next bucket. We present the algorithm for searching the location based index.

Algorithm 1. Location based Index Search in DAYS
1. Scan broadcast for the next index bucket, found = false
2. while (not found) do
3.   if bucket is Major Index then
4.     Find the Type & Tuple (S, L)
5.     if S is greater than 1, go into doze mode for S seconds
6.     end if
7.     Wake up at the Sth bucket and observe the Minor Index
8.   end if
9.   if bucket is Minor Index then
10.    if TypeRequested not equal to TypeFound and (A, R)Requested not equal to (A, R)Found then
11.      Go into doze mode till NEXT & repeat from step 3
12.    end if
13.    else find the entry in the Minor Index which points to the data
14.      Compute the time of arrival T of the data bucket
15.      Go into doze mode till T
16.      Wake up at T and access the data, found = true
17.    end else
18.  end if
19. end while

6. PERFORMANCE EVALUATION
Conservation of energy is the main concern when we try to access data from a wireless broadcast. An efficient scheme should allow the mobile device to access its required data by staying active for a minimum amount of time, which saves a considerable amount of energy. Since items are distributed based on types and are mapped to suitable locations, we argue that our broadcast deals with clustered data types. The mobile unit has to access a larger major index and a relatively much smaller minor index to get information about the time of arrival of the data. This is in contrast to the exponential scheme, where the indexes are of equal sizes. The example discussed and Algorithm 1 reveal that to access any data item, we need to access the major index only once, followed by one or more accesses to the minor index. The number of minor index accesses depends on the number of internal locations. As the number of internal locations varies from item to item (for example, Weather is generally associated with a City, whereas Traffic is granulated down to the major and minor roads of a city), we argue that the structure of the location mapped information may be visualized as a forest, i.e., a collection of general trees, the number of trees depending on the types of information broadcast and the depth of a tree depending on the granularity of the location information associated with the information. For our experiments, we assume the forest is a collection of balanced M-ary trees. We further assume the M-ary trees to be full by assuming the presence of
Thus, if the number of data items (leaves) is d and each internal vertex has m children, then n = (m*d - 1)/(m - 1), where n is the number of vertices in the tree, and i = (d - 1)/(m - 1), where i is the number of internal vertices. The tuning time for a data item involves 1 unit of time to access the major index plus the time required to reach the data item at a leaf of the tree. Thus, the tuning time with d data items is t = log_m(d) + 1, so the tuning time is bounded by O(log_m d).
We compare our scheme with the distributed indexing and exponential schemes. We assume a flat broadcast with the number of pages varying from 5000 to 25000. The various simulation parameters are shown in Table 1. Figures 5-8 show the relative tuning times of the three indexing algorithms, i.e., LBIS, the exponential scheme, and the distributed tree scheme. Figure 5 shows the result for the number of internal location nodes m = 3. We can see that LBIS significantly outperforms both other schemes. The tuning time in LBIS ranges from approximately 6.8 to 8. This comparatively large tuning time is due to the fact that, after reaching the lowest minor index, the MU may have to access a few buckets sequentially to get the required data bucket. We can see that the tuning time tends to stabilize as the length of the broadcast increases. In Figure 6 we consider m = 4. Here the exponential and the distributed tree schemes perform almost identically, though the former seems to perform slightly better as the broadcast length increases. A very interesting pattern is visible in Figure 7. For smaller broadcast sizes, LBIS seems to have a larger tuning time than the other two schemes, but as the length of the broadcast increases, LBIS clearly outperforms them. The distributed tree indexing shows behavior similar to LBIS. The tuning time in LBIS remains low because the algorithm allows the MU to skip some intermediate minor indexes and move directly into the lower levels after coming into active mode, thus saving valuable energy.
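The tree-counting formulas above can be sanity-checked numerically. This is a small sketch under the stated assumption of full, balanced M-ary trees; the helper name is ours:

```python
import math

def tree_counts(m, levels):
    """Vertex counts and tuning-time bound for a full m-ary location tree.

    d = m**levels data items at the leaves,
    i = (d - 1)/(m - 1) internal vertices,
    n = (m*d - 1)/(m - 1) total vertices,
    t = log_m(d) + 1 tuning time (one major-index access plus one
        access per tree level on the way to the leaf).
    """
    d = m ** levels
    i = (d - 1) // (m - 1)
    n = (m * d - 1) // (m - 1)
    t = math.log(d, m) + 1
    return d, i, n, t

d, i, n, t = tree_counts(3, 4)          # m = 3, four levels of locations
assert n == d + i                       # total = leaves + internal vertices
assert i == sum(3 ** k for k in range(4))   # brute-force level-by-level count
```

The two assertions confirm that the closed forms agree with a direct level-by-level count, i.e., that n = d + i and that i is the geometric sum 1 + m + ... + m^(levels-1).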
This action is not possible in the distributed tree indexing, and hence we observe that its tuning time is higher than that of LBIS, although it performs better than the exponential scheme. Figure 8, in contrast, shows that the tuning time in LBIS, though still lower than in the other two schemes, tends to increase sharply once the broadcast length exceeds 15000 pages. This may be attributed both to the increase in the time required to scan the intermediate minor indexes and to the length of the broadcast. We can observe, however, that the slope of the LBIS curve is significantly smaller than those of the other two curves.

Table 1. Simulation parameters
P   Definition                                               Values
N   Number of data items                                     5000 - 25000
m   Number of internal location nodes                        3, 4, 5, 6
B   Capacity of bucket without index (exponential index)     10, 64, 128, 256
i   Index base for exponential index                         2, 4, 6, 8
k   Index size for distributed tree                          8 bytes

The simulation results establish some facts about our location based indexing scheme. The scheme performs better than the other two schemes in terms of tuning time in most cases. As the length of the broadcast increases, the tuning time eventually increases as a result of the factors described above, yet the scheme still performs better than the other two. Due to the page limit of the paper we are unable to show more results, but the omitted results show trends similar to those depicted in Figures 5-8.

7. CONCLUSION AND FUTURE WORK
In this paper we have presented a scheme for mapping wireless broadcast data to locations. We have presented an example to show how the hierarchical structure of the location tree maps to the data to create LDD, and we have presented a scheme called LBIS to index this LDD. We have used the containment property of LDD in the scheme, which limits the search to
a narrow range of data in the broadcast, thus saving valuable energy in the device. The mapping of data to locations and the indexing scheme will be used in our DAYS project to create the push based architecture. LBIS has been compared with two other prominent indexing schemes, i.e., the distributed tree indexing scheme and the exponential indexing scheme. We showed in our simulations that the LBIS scheme has the lowest tuning time for broadcasts with a large number of pages, thus saving valuable battery power in the MU.
In future work we will incorporate a pull based architecture in our DAYS project, in which data from the server is available for access by global users. This may be done by putting a request to the source server. The query in this case is a global query; it is transferred from the user's source server to the destination server through a network of LEO satellites. We intend to use our LDD scheme and data staging architecture in the pull based architecture, and we will show that the LDD scheme together with the data staging architecture significantly improves the latency for global as well as local queries.

Figures 5-8. Average tuning time vs. broadcast size (# buckets) for the distributed tree, exponential, and LBIS schemes.

8. REFERENCES
[1] Acharya, S., Alonso, R., Franklin, M. and Zdonik, S. Broadcast disks: Data management for asymmetric communications environments. In Proceedings of the ACM SIGMOD Conference on Management of Data, pages 199-210, San Jose, CA, May 1995.
[2] Chen, M. S., Wu, K. L. and Yu, P. S. Optimizing index allocation for sequential data broadcasting in wireless mobile computing. IEEE Transactions on Knowledge and Data Engineering (TKDE), 15(1):161-173, January/February 2003.
[3] Hu, Q. L., Lee, D. L. and Lee, W. C. Performance evaluation of a wireless hierarchical data dissemination system. In Proceedings of the 5th Annual ACM International Conference on Mobile Computing and Networking (MobiCom'99), pages 163-173, Seattle, WA, August 1999.
[4] Hu, Q. L., Lee, W. C. and Lee, D. L. Power conservative multi-attribute queries on data broadcast. In Proceedings of the 16th International Conference on Data Engineering (ICDE'00), pages 157-166, San Diego, CA, February 2000.
[5] Imielinski, T., Viswanathan, S. and Badrinath, B. R. Power efficient filtering of data on air. In Proceedings of the 4th International Conference on Extending Database Technology (EDBT'94), pages 245-258, Cambridge, UK, March 1994.
[6] Imielinski, T., Viswanathan, S. and Badrinath, B. R. Data on air - Organization and access. IEEE Transactions on Knowledge and Data Engineering (TKDE), 9(3):353-372, May/June 1997.
[7] Shih, E., Bahl, P. and Sinclair, M. J. Wake on wireless: An event driven energy saving strategy for battery operated devices. In Proceedings of the 8th Annual ACM International Conference on Mobile Computing and Networking (MobiCom'02), pages 160-171, Atlanta, GA, September 2002.
[8] Shivakumar, N. and Venkatasubramanian, S. Energy-efficient indexing for information dissemination in wireless systems. ACM/Baltzer Journal of Mobile Networks and Applications (MONET), 1(4):433-446, December 1996.
[9] Tan, K. L. and Yu, J. X. Energy efficient filtering of nonuniform broadcast. In Proceedings of the 16th International Conference on Distributed Computing Systems (ICDCS'96), pages 520-527, Hong Kong, May 1996.
[10] Viredaz, M. A., Brakmo, L. S. and Hamburgen, W. R. Energy management on handheld devices. ACM Queue, 1(7):44-52, October 2003.
[11] Garg, N., Kumar, V. and Dunham, M. H. Information mapping and indexing in DAYS. In 6th International Workshop on Mobility in Databases and Distributed Systems, in conjunction with the 14th International Conference on Database and Expert Systems Applications, Prague, Czech Republic, September 1-5, 2003.
[12] Acharya, D., Kumar, V. and Dunham, M. H. InfoSpace: Hybrid and adaptive public data dissemination system for ubiquitous computing. Accepted for publication in the special issue on pervasive computing, Wiley Journal for Wireless Communications and Mobile Computing, 2004.
[13] Acharya, D., Kumar, V. and Prabhu, N. Discovering and using Web services in m-commerce. In Proceedings of the 5th VLDB Workshop on Technologies for E-Services, Toronto, Canada, 2004.
[14] Acharya, D. and Kumar, V. Indexing location dependent data in broadcast environment. Accepted for publication, JDIM special issue on distributed data management, 2005.
[15] Flinn, J., Sinnamohideen, S. and Satyanarayanan, M. Data staging on untrusted surrogates. Intel Research, Pittsburgh, unpublished report, 2003.
[16] Seydim, A. Y., Dunham, M. H. and Kumar, V. Location dependent query processing. In Proceedings of the 2nd ACM International Workshop on Data Engineering for Wireless and Mobile Access, pages 47-53, Santa Barbara, California, USA, 2001.
[17] Xu, J., Lee, W. C. and Tang, X. Exponential index: A parameterized distributed indexing scheme for data on air. In Proceedings of the 2nd ACM/USENIX International Conference on Mobile Systems, Applications, and Services (MobiSys'04), Boston, MA, June 2004.

Location based Indexing Scheme for DAYS

ABSTRACT
Data dissemination through wireless channels for broadcasting information to consumers is becoming quite common. Many dissemination schemes have been proposed, but most of them push data to wireless channels for general consumption. Push based broadcast [1] is essentially asymmetric, i.e., the volume of data flowing from the server to the users is much higher than that flowing from the users back to the server. A push based scheme requires some indexing which indicates when the data will be broadcast and its position in the broadcast. Access latency and tuning time are the two main parameters by which an indexing scheme may be evaluated. Two of the important indexing schemes proposed earlier are the tree based and the exponential indexing schemes. Neither of these schemes addresses the requirements of location dependent data (LDD), which is a highly desirable feature of data dissemination. In this paper, we discuss the broadcast of LDD in our project DAta in Your Space (DAYS) and propose a scheme for indexing LDD. We argue that this scheme, when applied to LDD, significantly improves performance in terms of tuning time over the above mentioned schemes, and we support our argument with simulation results.

1. INTRODUCTION
Wireless data dissemination is an economical and efficient way to make desired data available to a large number of mobile or static users. The mode of data transfer is essentially asymmetric; that is, the capacity of the transfer of data from the server to the client (downstream communication) is significantly larger than that from the client or mobile user back to the server (upstream communication). The effectiveness of a data dissemination system is judged by
its ability to provide users the required data anywhere and at any time. One of the best ways to accomplish this is through the dissemination of highly personalized Location Based Services (LBS), which allow users to access personalized location dependent data. An example would be someone using their mobile device to search for a vegetarian restaurant. The LBS application would interact with other location technology components, or use the mobile user's input, to determine the user's location and download information about the restaurants in proximity to the user by tuning into the wireless channel which is disseminating LDD. We see a limited deployment of LBS by some service providers today, but there is every indication that, with time, some of the complex technical problems - a uniform location framework, calculating and tracking locations in all types of places, positioning in various environments, innovative location applications, and so on - will be resolved, and LBS will become a common facility that helps improve market productivity and customer comfort. In our project, called DAYS, we use a wireless data broadcast mechanism to push LDD to users, and mobile users monitor and tune the channel to find and download the required data. A simple broadcast, however, is likely to cause significant performance degradation in energy constrained mobile devices, and a common solution to this problem is the use of efficient air indexing. The indexing approach stores control information which tells the user about the location of the data in the broadcast and how and when it can be accessed. A mobile user thus has some free time to go into doze mode, which conserves valuable power. It also allows the user to personalize his own mobile device by selectively tuning to the information of his choice. Access efficiency and energy conservation are the two issues which are significant for data broadcast systems. Access efficiency refers to the latency experienced from when a request is initiated till the response is received. Energy conservation [7, 10] refers to the efficient use of the limited energy of the mobile device in accessing broadcast data. Two parameters that affect these are the tuning time and the access latency. Tuning time refers to the time during which the mobile unit (MU) remains in the active state to tune the channel and download its required data; it can also be defined as the number of buckets tuned by the mobile device in the active state to get its required data. Access latency may be defined as the time elapsed from when a request is issued till the response is received. Several indexing schemes have been proposed in the past, the prominent among them being the tree based and the exponential indexing schemes [17]. The main disadvantage of the tree based schemes is that they are based on centralized tree structures: to start a search, the MU has to wait until it reaches the root of the next broadcast tree, which significantly affects the tuning time of the mobile unit. The exponential schemes facilitate index replication by sharing links in different search trees. For broadcasts with a large number of pages, the exponential scheme has been shown to perform similarly to the tree based schemes in terms of access latency. Also, the average length of the broadcast increases due to the index replication, and this may cause a significant increase in the access latency. None of the above indexing schemes is equally effective in broadcasting location dependent data; in addition to providing low latency, they lack properties needed to address LDD issues. We propose an indexing scheme in DAYS which takes care of some of these problems. We show with simulation results that our scheme outperforms some of the earlier indexing schemes for broadcasting LDD in terms of tuning time. The rest of the paper is organized as follows. In section 2, we discuss previous work related to indexing of broadcast
data. Section 3 describes our DAYS architecture. Location dependent data, its generation, and its subsequent broadcast are presented in section 4. Section 5 discusses our indexing scheme in detail. Simulation of our scheme and its performance evaluation are presented in section 6. Section 7 concludes the paper and mentions future work.

2. PREVIOUS WORK
Several disk-based indexing techniques have been used for air indexing. Imielinski et al. [5, 6] applied the B+ index tree, where the leaf nodes store the arrival times of the data items. The distributed indexing method was proposed to efficiently replicate and distribute the index tree in a broadcast. Specifically, the index tree is divided into a replicated part and a non-replicated part. Each broadcast consists of the replicated part and the non-replicated part that indexes the data items immediately following it. As such, each node in the non-replicated part appears only once in a broadcast and, hence, reduces the replication cost and access latency while achieving a good tuning time. Chen et al. [2] and Shivakumar et al. [8] considered unbalanced tree structures to optimize energy consumption for non-uniform data access. These structures minimize the average index search cost by reducing the number of index searches for hot data at the expense of spending more on cold data. Tan and Yu discussed data and index organization under skewed broadcast. Hashing and signature methods have also been suggested for wireless broadcast that supports equality queries [9]. A flexible indexing method was proposed in [5]. The flexible index first sorts the data items in ascending (or descending) order of the search key values and then divides them into p segments. The first bucket in each data segment contains a control index, which is a binary index mapping a given key value to the segment containing that key, and a local index, which is an m-entry index mapping a given key value to the buckets within the current segment. By tuning the parameters p and m, mobile clients can achieve either a good tuning time or a good access latency. Another indexing technique is the exponential indexing scheme [17], in which a parameterized index, called the exponential index, is used to optimize the access latency or the tuning time; it facilitates index replication by linking different search trees. All of the above mentioned schemes have been applied to data items which are unrelated to each other; such unrelated data may be clustered or non-clustered. However, none of them specifically addresses the requirements of LDD. Location dependent data are data which are associated with a location. Presently there are several applications that deal with LDD [13, 16]. Almost all of them depict LDD with the help of hierarchical structures [3, 4]. This is based on the containment property of location dependent data: the containment property helps determine the relative position of an object by defining or identifying the locations that contain it. The subordinate locations are hierarchically related to each other.
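The containment property can be illustrated with a tiny sketch. This is our own illustrative example, not DAYS code: the location tree and its names are hypothetical, and containment is checked by walking up the parent chain:

```python
# Illustrative location hierarchy (hypothetical names): each location
# maps to its parent.  Under the containment property, a location L
# "contains" an object placed anywhere in L's subtree.
parent = {
    "Road_7": "Area_4",
    "Area_4": "City",
    "Road_1": "Area_1",
    "Area_1": "City",
    "City": None,
}

def contains(outer, inner):
    """True if `outer` contains `inner` under the containment property."""
    while inner is not None:
        if inner == outer:
            return True
        inner = parent[inner]   # climb to the enclosing location
    return False

# A service scoped to Area_4 is available on Road_7 but not on Road_1,
# i.e., containment limits the range of availability of the service.
assert contains("Area_4", "Road_7")
assert not contains("Area_4", "Road_1")
```

Walking the parent chain is all an index needs to decide whether a minor index covering some area can possibly hold an entry for the queried road, which is how the search range gets pruned.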
hierarchically related to each other.\nThus, Containment property limits the range of availability or operation of a service.\nWe use this containment property in our indexing scheme to index LDD.\n3.\nDAYS ARCHITECTURE\n4.\nLOCATION DEPENDENT DATA (LDD)\n4.1 Mapping Function for LDD\n5.\nLOCATION BASED INDEXING SCHEME\n6.\nPERFORMANCE EVALUATION","lvl-4":"Location based Indexing Scheme for DAYS\nABSTRACT\nData dissemination through wireless channels for broadcasting information to consumers is becoming quite common.\nMany dissemination schemes have been proposed but most of them push data to wireless channels for general consumption.\nPush based broadcast [1] is essentially asymmetric, i.e., the volume of data being higher from the server to the users than from the users back to the server.\nPush based scheme requires some indexing which indicates when the data will be broadcast and its position in the broadcast.\nAccess latency and tuning time are the two main parameters which may be used to evaluate an indexing scheme.\nTwo of the important indexing schemes proposed earlier were tree based and the exponential indexing schemes.\nNone of these schemes were able to address the requirements of location dependent data (LDD) which is highly desirable feature of data dissemination.\nIn this paper, we discuss the broadcast of LDD in our project DAta in Your Space (DAYS), and propose a scheme for indexing LDD.\nWe argue that this scheme, when applied to LDD, significantly improves performance in terms of tuning time over the above mentioned schemes.\nWe prove our argument with the help of simulation results.\n1.\nINTRODUCTION\nWireless data dissemination is an economical and efficient way to make desired data available to a large number of mobile or static users.\nThe mode of data transfer is essentially asymmetric,\nThe effectiveness of a data dissemination system is judged by its ability to provide user the required data at anywhere and at anytime.\nOne of the best 
ways to accomplish this is through the dissemination of highly personalized Location Based Services (LBS) which allows users to access personalized location dependent data.\nAn example would be someone using their mobile device to search for a vegetarian restaurant.\nIn our project called DAYS, we use wireless data broadcast mechanism to push LDD to users and mobile users monitor and tune the channel to find and download the required data.\nThe indexing approach stores control information which tells the user about the data location in the broadcast and how and when he could access it.\nA mobile user, thus, has some free time to go into the doze mode which conserves valuable power.\nIt also allows the user to personalize his own mobile device by selectively tuning to the information of his choice.\nAccess efficiency and energy conservation are the two issues which are significant for data broadcast systems.\nAccess efficiency refers to the latency experienced when a request is initiated till the response is received.\nEnergy conservation [7, 10] refers to the efficient use of the limited energy of the mobile device in accessing broadcast data.\nTwo parameters that affect these are the tuning time and the access latency.\nTuning time refers to the time during which the mobile unit (MU) remains in active state to tune the channel and download its required data.\nIt can also be defined as the number of buckets tuned by the mobile device in active state to get its required data.\nAccess latency may be defined as the time elapsed since a request has been issued till the response has been received.\nSeveral indexing schemes have been proposed in the past and the prominent among them are the tree based and the exponential indexing schemes [17].\nThe main disadvantages of the tree based schemes are that they are based on centralized tree structures.\nTo start a search, the MU has to wait until it reaches the root of the next broadcast tree.\nThis significantly affects the 
tuning time of the mobile unit.\nThe exponential schemes facilitate index replication by sharing links in different search trees.\nFor broadcasts with large number of pages, the exponential scheme has been shown to perform similarly as the tree based schemes in terms of access latency.\nAlso, the average length of broadcast increases due to the index replication and this may cause significant increase in the access latency.\nNone of the above indexing schemes is equally effective in broadcasting location dependent data.\nIn addition to providing low latency, they lack properties which are used to address LDD issues.\nWe propose an indexing scheme in DAYS which takes care of some these problems.\nWe show with simulation results that our scheme outperforms some of the earlier indexing schemes for broadcasting LDD in terms of tuning time.\nIn section 2, we discuss previous work related to indexing of broadcast data.\nSection 3 describes our DAYS architecture.\nLocation dependent data, its generation and subsequent broadcast is presented in section 4.\nSection 5 discusses our indexing scheme in detail.\nSimulation of our scheme and its performance evaluation is presented in section 6.\nSection 7 concludes the paper and mentions future related work.\n2.\nPREVIOUS WORK\nSeveral disk-based indexing techniques have been used for air indexing.\nImielinski et al. 
[5, 6] applied the B + index tree, where the leaf nodes store the arrival times of the data items.\nThe distributed indexing method was proposed to efficiently replicate and distribute the index tree in a broadcast.\nSpecifically, the index tree is divided into a replicated part and a non replicated part.\nEach broadcast consists of the replicated part and the nonreplicated part that indexes the data items immediately following it.\nAs such, each node in the non-replicated part appears only once in a broadcast and, hence, reduces the replication cost and access latency while achieving a good tuning time.\nChen et al. [2] and Shivakumar et al. [8] considered unbalanced tree structures to optimize energy consumption for non-uniform data access.\nThese structures minimize the average index search cost by reducing the number of index searches for hot data at the expense of spending more on cold data.\nTan and Yu discussed data and index organization under skewed broadcast Hashing and signature methods have also been suggested for wireless broadcast that supports equality queries [9].\nA flexible indexing method was proposed in [5].\nThe flexible index first sorts the data items in ascending (or descending) order of the search key values and then divides them into p segments.\nBy tuning the parameters of p and m, mobile clients can achieve either a good tuning time or good access latency.\nAnother indexing technique proposed is the exponential indexing scheme [17].\nIn this scheme, a parameterized index, called the exponential index is used to optimize the access latency or the tuning time.\nIt facilitates index replication by linking different search trees.\nAll of the above mentioned schemes have been applied to data which are non related to each other.\nThese non related data may be clustered or non clustered.\nLocation dependent data are data which are associated with a location.\nThis is based on the containment property of location dependent data.\nThe Containment 
property helps determining relative position of an object by defining or identifying locations that contains those objects.\nThe subordinate locations are hierarchically related to each other.\nThus, Containment property limits the range of availability or operation of a service.\nWe use this containment property in our indexing scheme to index LDD.","lvl-2":"Location based Indexing Scheme for DAYS\nABSTRACT\nData dissemination through wireless channels for broadcasting information to consumers is becoming quite common.\nMany dissemination schemes have been proposed but most of them push data to wireless channels for general consumption.\nPush based broadcast [1] is essentially asymmetric, i.e., the volume of data being higher from the server to the users than from the users back to the server.\nPush based scheme requires some indexing which indicates when the data will be broadcast and its position in the broadcast.\nAccess latency and tuning time are the two main parameters which may be used to evaluate an indexing scheme.\nTwo of the important indexing schemes proposed earlier were tree based and the exponential indexing schemes.\nNone of these schemes were able to address the requirements of location dependent data (LDD) which is highly desirable feature of data dissemination.\nIn this paper, we discuss the broadcast of LDD in our project DAta in Your Space (DAYS), and propose a scheme for indexing LDD.\nWe argue that this scheme, when applied to LDD, significantly improves performance in terms of tuning time over the above mentioned schemes.\nWe prove our argument with the help of simulation results.\n1.\nINTRODUCTION\nWireless data dissemination is an economical and efficient way to make desired data available to a large number of mobile or static users.\nThe mode of data transfer is essentially asymmetric,\nthat is, the capacity of the transfer of data (downstream communication) from the server to the client (mobile user) is significantly larger than the 
client or mobile user to the server (upstream communication).\nThe effectiveness of a data dissemination system is judged by its ability to provide user the required data at anywhere and at anytime.\nOne of the best ways to accomplish this is through the dissemination of highly personalized Location Based Services (LBS) which allows users to access personalized location dependent data.\nAn example would be someone using their mobile device to search for a vegetarian restaurant.\nThe LBS application would interact with other location technology components or use the mobile user's input to determine the user's location and download the information about the restaurants in proximity to the user by tuning into the wireless channel which is disseminating LDD.\nWe see a limited deployment of LBS by some service providers.\nBut there are every indications that with time some of the complex technical problems such as uniform location framework, calculating and tracking locations in all types of places, positioning in various environments, innovative location applications, etc., will be resolved and LBS will become a common facility and will help to improve market productivity and customer comfort.\nIn our project called DAYS, we use wireless data broadcast mechanism to push LDD to users and mobile users monitor and tune the channel to find and download the required data.\nA simple broadcast, however, is likely to cause significant performance degradation in the energy constrained mobile devices and a common solution to this problem is the use of efficient air indexing.\nThe indexing approach stores control information which tells the user about the data location in the broadcast and how and when he could access it.\nA mobile user, thus, has some free time to go into the doze mode which conserves valuable power.\nIt also allows the user to personalize his own mobile device by selectively tuning to the information of his choice.\nAccess efficiency and energy conservation are 
the two issues which are significant for data broadcast systems.\nAccess efficiency refers to the latency experienced when a request is initiated till the response is received.\nEnergy conservation [7, 10] refers to the efficient use of the limited energy of the mobile device in accessing broadcast data.\nTwo parameters that affect these are the tuning time and the access latency.\nTuning time refers to the time during which the mobile unit (MU) remains in active state to tune the channel and download its required data.\nIt can also be defined as the number of buckets tuned by the mobile device in active state to get its required data.\nAccess latency may be defined as the time elapsed since a request has been issued till the response has been received.\nSeveral indexing schemes have been proposed in the past and the prominent among them are the tree based and the exponential indexing schemes [17].\nThe main disadvantages of the tree based schemes are that they are based on centralized tree structures.\nTo start a search, the MU has to wait until it reaches the root of the next broadcast tree.\nThis significantly affects the tuning time of the mobile unit.\nThe exponential schemes facilitate index replication by sharing links in different search trees.\nFor broadcasts with large number of pages, the exponential scheme has been shown to perform similarly as the tree based schemes in terms of access latency.\nAlso, the average length of broadcast increases due to the index replication and this may cause significant increase in the access latency.\nNone of the above indexing schemes is equally effective in broadcasting location dependent data.\nIn addition to providing low latency, they lack properties which are used to address LDD issues.\nWe propose an indexing scheme in DAYS which takes care of some these problems.\nWe show with simulation results that our scheme outperforms some of the earlier indexing schemes for broadcasting LDD in terms of tuning time.\nThe 
rest of the paper is presented as follows.\nIn section 2, we discuss previous work related to indexing of broadcast data.\nSection 3 describes our DAYS architecture.\nLocation dependent data, its generation and subsequent broadcast is presented in section 4.\nSection 5 discusses our indexing scheme in detail.\nSimulation of our scheme and its performance evaluation is presented in section 6.\nSection 7 concludes the paper and mentions future related work.\n2.\nPREVIOUS WORK\nSeveral disk-based indexing techniques have been used for air indexing.\nImielinski et al. [5, 6] applied the B + index tree, where the leaf nodes store the arrival times of the data items.\nThe distributed indexing method was proposed to efficiently replicate and distribute the index tree in a broadcast.\nSpecifically, the index tree is divided into a replicated part and a non replicated part.\nEach broadcast consists of the replicated part and the nonreplicated part that indexes the data items immediately following it.\nAs such, each node in the non-replicated part appears only once in a broadcast and, hence, reduces the replication cost and access latency while achieving a good tuning time.\nChen et al. [2] and Shivakumar et al. 
[8] considered unbalanced tree structures to optimize energy consumption for non-uniform data access. These structures minimize the average index search cost by reducing the number of index searches for hot data at the expense of spending more on cold data. Tan and Yu discussed data and index organization under skewed broadcast. Hashing and signature methods have also been suggested for wireless broadcast that supports equality queries [9]. A flexible indexing method was proposed in [5]. The flexible index first sorts the data items in ascending (or descending) order of the search key values and then divides them into p segments. The first bucket in each data segment contains a control index, which is a binary index mapping a given key value to the segment containing that key, and a local index, which is an m-entry index mapping a given key value to the buckets within the current segment. By tuning the parameters p and m, mobile clients can achieve either a good tuning time or a good access latency. Another indexing technique is the exponential indexing scheme [17]. In this scheme, a parameterized index, called the exponential index, is used to optimize the access latency or the tuning time. It facilitates index replication by linking different search trees. All of the above mentioned schemes have been applied to data items that are unrelated to each other; such unrelated data may be clustered or non-clustered. However, none of them specifically addresses the requirements of LDD. Location dependent data are data that are associated with a location. Presently there are several applications that deal with LDD [13, 16]. Almost all of them depict LDD with the help of hierarchical structures [3, 4]. This is based on the containment property of location dependent data: the containment property helps determine the relative position of an object by defining or identifying the locations that contain it. The subordinate locations are 
hierarchically related to each other.\nThus, Containment property limits the range of availability or operation of a service.\nWe use this containment property in our indexing scheme to index LDD.\n3.\nDAYS ARCHITECTURE\nDAYS has been conceptualized to disseminate topical and nontopical data to users in a local broadcast space and to accept queries from individual users globally.\nTopical data, for example, weather information, traffic information, stock information, etc., constantly changes over time.\nNon topical data such as hotel, restaurant, real estate prices, etc., do not change so often.\nThus, we envision the presence of two types of data distribution: In the first case, server pushes data to local users through wireless channels.\nThe other case deals with the server sending results of user queries through downlink wireless channels.\nTechnically, we see the presence of two types of queues in the pull based data access.\nOne is a heavily loaded queue containing globally uploaded queries.\nThe other is a comparatively lightly loaded queue consisting of locally uploaded queries.\nThe DAYS architecture [12] as shown in figure 1 consists of a Data Server, Broadcast Scheduler, DAYS Coordinator, Network of LEO satellites for global data delivery and a Local broadcast space.\nData is pushed into the local broadcast space so that users may tune into the wireless channels to access the data.\nThe local broadcast space consists of a broadcast tower, mobile units and a network of data staging machines called the surrogates.\nData staging in surrogates has been earlier investigated as a successful technique [12, 15] to cache users' related data.\nWe believe that data staging can be used to drastically reduce the latency time for both the local broadcast data as well as global responses.\nQuery request in the surrogates may subsequently be used to generate the popularity patterns which ultimately decide the broadcast schedule [12].\nFigure 1.\nDAYS Architecture Figure 
2. Location Structure of Starbucks, Plaza

4. LOCATION DEPENDENT DATA (LDD)

We argue that incorporating location information in wireless data broadcast can significantly decrease the access latency. This property is highly useful for a mobile unit, which has limited storage and processing capability. There are a variety of applications that provide information about traffic, restaurant and hotel booking, fast food, gas stations, post offices, grocery stores, etc. If these applications are coupled with location information, the search becomes fast and highly cost effective. An important property of locations is Containment, which helps to determine the relative location of an object with respect to the parent that contains it. Containment thus limits the range of availability of a data item. We use this property in our indexing scheme. The database contains the broadcast contents, which are converted into LDD [14] by associating them with their respective locations so that they can be broadcast in a clustered manner. The clustering of LDD helps the user locate information efficiently and supports the containment property. We present an example to justify our proposition.

Example: Suppose a user issues the query "Starbucks Coffee in Plaza, please" to access information about the Plaza branch of Starbucks Coffee in Kansas City. In a location-independent setup, the system would list all Starbucks coffee shops in the Kansas City area. Such responses obviously increase access latency and are not desirable. They can be managed efficiently if the server has location dependent data, i.e., a mapping between a Starbucks coffee shop's data and its physical location. Also, for a query over a range of Starbucks locations, a single query requesting locations for the entire region of Kansas City, as shown in Figure 2, will suffice. This saves an enormous amount of bandwidth by decreasing the number of messages and at the same time will 
be helpful in preventing the scalability bottleneck in highly populated areas.

4.1 Mapping Function for LDD

The example justifies the need for a mapping function to process location dependent queries. This is especially important for pull-based queries issued from across the globe, for which the reply may be composed in different parts of the world. The mapping function is also necessary to construct the broadcast schedule. To develop a mapping function, we define a Global Property Set (GPS) [11], an Information Content set (IC), and a Location Hierarchy set (LH), where IC ⊆ GPS and LH ⊆ GPS. LH = {l1, l2, l3, ..., lk}, where li represents a location in the location tree, and IC = {ic1, ic2, ic3, ..., icn}, where ici represents an information type. For example, if traffic, weather, and stock information are in the broadcast, then IC = {ictraffic, icweather, icstock}. The mapping scheme must be able to identify and select an IC member and an LH node for (a) correct association, (b) granularity match, and (c) termination condition. For example, weather ∈ IC could be associated with a country, a state, a city, or a town of LH. The granularity of the match between weather and an LH node is set per user requirement: with a coarse granularity, weather information is associated with a country to give the country's weather, and with a town at a finer granularity. If a town is the finest granularity, then it defines the termination condition for the association between IC and LH for weather; a user cannot get weather information about a subdivision of a town (in reality, the weather of a subdivision does not make any sense). We develop a simple heuristic mapping scheme based on user requirements. Let IC = {m1, m2, m3, ..., mk}, where mi represents an element of IC, and let LH = {n1, n2, n3, ..., nl}, where nj represents a member of LH. We define the GPS for IC as GPSIC ⊆ GPS and the GPS for LH as GPSLH ⊆ GPS, with GPSIC = {P1, P2, ..., Pn}, where P1, P2, P3, ..., Pn are properties of its members, and GPSLH = {Q1, Q2, ..., Qm}, where Q1, Q2, ..., Qm are properties of its members. The properties of a particular member of IC are a subset of GPSIC. In general, (property set(mi ∈ IC) ∩ property set(mj ∈ IC)) may be empty; for example, stock ∈ IC and movie rating ∈ IC need not have any property in common. We assume, however, that any two or more members of IC share at least one common geographical property (i.e., location), because DAYS broadcasts information about categories that are closely tied to a location. For example, the stock of a company is related to a country, weather is related to a city or state, etc. We define the property subset of mi ∈ IC as PSmi = {P1, P2, ..., Pr | Pj ∈ GPSIC}, ∀ mi ∈ IC, which implies that ∀ i, PSmi ⊆ GPSIC. The geographical properties of this set indicate whether mi ∈ IC can be mapped to only a single granularity level (i.e., a single location) in LH or to multiple granularity levels (i.e. 
more than one node in the hierarchy) in LH. How many and which granularity levels a given mi maps to depends upon the level at which the service provider wants to provide information about the mi in question. Similarly, we define a property subset of the LH members as PSnj ∀ nj ∈ LH, with PSnj ⊆ GPSLH. The process of mapping from IC to LH is then, for some mx ∈ IC, identifying one or more ny ∈ LH such that PSmx ∩ PSny ≠ φ. When this holds, mx maps to ny and to all children of ny if mx can map to multiple granularity levels, or mx maps only to ny if mx can map to a single granularity level. We assume that new members can join and old members can leave IC or LH at any time. The deletion of members from the IC space is simple, but the addition of members to the IC space is more restrictive. If we want to add a new member to the IC space, we first define a property set for it, PSmnew = {P1, P2, P3, ..., Pt}, and add it to IC only if the condition ∀ Pw {Pw ∈ PSmnew → Pw ∈ GPSIC} is satisfied. This scheme has the additional benefit of giving information service providers control over what kind of information they wish to provide to users. We present the following example to illustrate the mapping concept. In our example, only PS(ICStock) ∩ PSCountry ≠ φ. In addition, PS(ICStock) indicates that Stock can map to only a single location, Country. When we consider the member Traffic of the IC space, only PS(ICTraffic) ∩ PSCity ≠ φ. As PS(ICTraffic) indicates that Traffic can map to only a single location, it maps only to City and to none of its children, although, unlike Stock, mapping Traffic to Major roads, which is a child of City, would be meaningful. Service providers, however, have the right to control the granularity levels at which they provide information about a member of the IC space. PS(ICRoad conditions) ∩ PSState ≠ φ and PS(ICRoad conditions) ∩ PSCity ≠ 
φ. So Road conditions maps to State as well as City. As PS(ICRoad conditions) indicates that Road conditions can map to multiple granularity levels, Road conditions also maps to Zip Code and Major roads, which are the children of State and City. Similarly, Restaurant maps only to Zip code, and Weather maps to State, City and their children, Major Roads and Zip Code.

5. LOCATION BASED INDEXING SCHEME

This section discusses our location based indexing scheme (LBIS). The scheme is designed to conform to the LDD broadcast in our project DAYS. As discussed earlier, we use the containment property of LDD in the indexing scheme, which limits the search for the required data to a particular portion of the broadcast. Thus, we argue that the scheme provides bounded tuning time. We now describe the architecture of our indexing scheme. Our scheme contains separate data buckets and index buckets. The index buckets are of two types. The first type is called the Major index. The Major index provides information about the types of data broadcast. For example, if we intend to broadcast information such as Entertainment, Weather, Traffic, etc., then the Major index points to these major types of information and/or their main subtypes, the number of main subtypes varying from one type of information to another. This strictly limits the number of accesses to a Major index. The Major index never points to the original data; it points to the sub-indexes, called Minor indexes. The Minor indexes are the indexes that actually point to the original data. We call these Minor index pointers Location Pointers, as they point to the data associated with a location. Thus, our search for a data item includes one access to a Major index and some accesses to Minor indexes, the number of Minor index accesses varying with the type of information. Our indexing scheme thus takes into account the hierarchical nature of the LDD, the Containment property, and 
requires our broadcast schedule to be clustered based on data type and location. The structure of the location hierarchy requires the use of different types of index at different levels. The structure and positions of the indexes strictly depend on the location hierarchy as described in our mapping scheme earlier. We illustrate the implementation of our scheme with an example; the rules for framing the index are given subsequently.

Figure 3. Location Mapped Information for Broadcast

Figure 4. Data coupled with Location based Index

Example: Suppose our broadcast content contains ICentertainment and ICweather, represented as shown in Fig. 3. Ai represents the areas of a City and Ri represents the roads in a certain area. The leaves of the Weather structure represent four cities. The index structure is given in Fig. 4, which shows the positions of the major and minor indexes and the data in the broadcast schedule. We propose the following rules for the creation of the air-indexed broadcast schedule:

• The major index and the minor indexes are created.

• The major index contains the position and range of the different types of data items (Weather and Entertainment, Figure 3) and their categories. The subcategories of Entertainment, Movie and Restaurant, are also in the index. Thus, the major index contains Entertainment (E), Entertainment-Movie (EM), Entertainment-Restaurant (ER), and Weather (W). The tuple (S, L) represents the starting position (S) of the data item, and L represents the range of the item in terms of the number of data buckets.

• The minor index contains the variables A, R and a pointer Next. In our example (Figure 3), road R represents the first node of area A. The minor index is used to point to the actual data buckets present at the lowest levels of the hierarchy. In contrast, the major index points to a broader range of locations and so contains information about the main and sub categories of data.

• Index information is not 
incorporated in the data buckets. Index buckets are separate and contain only the control information.

• The number of major index buckets is m = #(IC), where IC = {ic1, ic2, ic3, ..., icn}, ici represents an information type, and # represents the cardinality of the Information Content set IC. In this example, IC = {icMovie, icWeather, icRestaurant} and so #(IC) = 3. Hence, the number of major index buckets is 3.

• The mechanism to resolve a query is present in the Java-based coordinator in the MU. For example, if a query Q is presented as Q(Entertainment, Movie, Road_1), then the resultant search will be for the EM information in the major index. We write Q → EM.

Our proposed index works as follows. Suppose an MU issues a query which is represented by the Java coordinator in the MU as "Restaurant information on Road 7". This is resolved by the coordinator as Q → ER, meaning that one has to search for the ER unit of the index in the major index. Suppose the MU logs into the channel at R2. The first index it receives is a minor index after R2. In this index, the value of the Next variable is 4, which means that the next major index is present after bucket 4. The MU may go into doze mode. It becomes active after bucket 4 and receives the major index. It searches for the ER information, which is the first entry in this index. It is now certain that the MU will get the position of the data bucket in the adjoining minor index. The second unit in the minor index gives the position of the required data R7: it tells that the data bucket is the first bucket in Area 4. The MU goes into doze mode again and becomes active after bucket 6. It gets the required data in the next bucket. We present the algorithm for searching the location based index below.

Algorithm 1 Location based Index Search in DAYS
1. Scan the broadcast for the next index bucket, found = false
2. While (not found) do
3.   if the bucket is a Major index then
4.     Find the Type and the tuple (S, L)
5.     if S is greater than 1, go into doze mode for S seconds
6.     end if
7.     Wake up at the Sth bucket and observe the Minor index
8.   end if
9.   if the bucket is a Minor index then
10.    if TypeRequested ≠ TypeFound and (A, R)Requested ≠ (A, R)Found then
11.      Go into doze mode till Next and repeat from step 3
12.    end if
13.    else find the entry in the Minor index which points to the data
14.    Compute the time of arrival T of the data bucket
15.    Go into doze mode till T
16.    Wake up at T and access the data, found = true

6. PERFORMANCE EVALUATION

Conservation of energy is the main concern when we try to access data from a wireless broadcast. An efficient scheme should allow the mobile device to access its required data by staying active for a minimum amount of time, which saves a considerable amount of energy. Since items are distributed based on types and are mapped to suitable locations, we argue that our broadcast deals with clustered data types. The mobile unit has to access a larger major index and a relatively much smaller minor index to get information about the time of arrival of its data. This is in contrast to the exponential scheme, where the indexes are of equal sizes. The example discussed and Algorithm 1 reveal that to access any data item, we need to access the major index only once, followed by one or more accesses to the minor index. The number of minor index accesses depends on the number of internal locations. As the number of internal locations varies from item to item (for example, Weather is generally associated with a City, whereas Traffic is granulated down to the major and minor roads of a city), we argue that the structure of the location mapped information may be visualized as a forest, i.e., a collection of general trees, the number of trees depending on the types of information broadcast and the depth of a tree depending on the granularity of the location information associated with that information type. For our experiments, we assume the forest as a collection of 
balanced M-ary trees. We further assume the M-ary trees to be full, by assuming the presence of dummy nodes at the different levels of a tree. Thus, if the number of data items (leaves) is d and the fan-out of each tree is m, then n = (m·d − 1)/(m − 1), where n is the number of vertices in the tree, and i = (d − 1)/(m − 1), where i is the number of internal vertices. The tuning time for a data item involves 1 unit of time required to access the major index plus the time required to access the data items present in the leaves of the tree. Thus, the tuning time with d data items is t = logm d + 1, so the tuning time is bounded by O(logm d). We compare our scheme with the distributed indexing and exponential schemes. We assume a flat broadcast and a number of pages varying from 5000 to 25000. The various simulation parameters are shown in Table 1. Figures 5-8 show the relative tuning times of the three indexing algorithms, i.e., LBIS, the exponential scheme and the distributed tree scheme. Figure 5 shows the result for number of internal location nodes = 3. We can see that LBIS significantly outperforms both of the other schemes. The tuning time in LBIS ranges from approximately 6.8 to 8. This large tuning time is due to the fact that after reaching the lowest minor index, the MU may have to access a few buckets sequentially to get the required data bucket. We can see that the tuning time tends to become stable as the length of the broadcast increases. In figure 6 we consider m = 4. Here the exponential and the distributed schemes perform almost identically, though the former seems to perform slightly better as the broadcast length increases. A very interesting pattern is visible in figure 7. For smaller broadcast sizes, LBIS seems to have a larger tuning time than the other two schemes, but as the length of the broadcast increases, LBIS clearly outperforms the other two. The distributed tree indexing shows behavior similar to LBIS. The tuning time in LBIS 
remains low because the algorithm allows the MU to skip some intermediate Minor indexes. This allows the MU to move to lower levels directly after coming into active mode, thus saving valuable energy. This action is not possible in the distributed tree indexing, and hence we observe that its tuning time is higher than that of the LBIS scheme, although it performs better than the exponential scheme. Figure 8, in contrast, shows that the tuning time in LBIS, though less than that of the other two schemes, tends to increase sharply once the broadcast length becomes greater than 15000 pages. This may be attributed both to the increase in the time required to scan the intermediate Minor indexes and to the length of the broadcast. We can observe, however, that the slope of the LBIS curve is significantly smaller than those of the other two curves.

Table 1. Simulation Parameters

The simulation results establish some facts about our location based indexing scheme. The scheme performs better than the other two schemes in terms of tuning time in most of the cases. As the length of the broadcast increases, beyond a certain point the tuning time increases as a result of the factors described above, but the scheme still always performs better than the other two schemes. Due to the page limit of the paper, we are unable to show more results; the omitted results show the same trends as those depicted in figures 5-8.

7. CONCLUSION AND FUTURE WORK

In this paper we have presented a scheme for mapping wireless broadcast data with their locations. We have presented an example to show how the hierarchical structure of the location tree maps with the data to create LDD. We have presented a scheme called LBIS to index this LDD. We have used the containment property of LDD in the scheme, which limits the search to a narrow range of data in the broadcast, thus saving valuable energy in the device. The mapping of data with locations and the indexing scheme will 
be used in our DAYS project to create the push based architecture. The LBIS has been compared with two other prominent indexing schemes, i.e., the distributed tree indexing scheme and the exponential indexing scheme. We showed in our simulations that the LBIS scheme has the lowest tuning time for broadcasts with a large number of pages, thus saving valuable battery power in the MU.

Figures 5-8. Tuning time vs. Broadcast Size (#buckets).

In future work we will try to incorporate a pull based architecture in our DAYS project, in which data from the server is available for access by global users. This may be done by putting a request to the source server. The query in this case is a global query; it is transferred from the user's source server to the destination server through the network of LEO satellites. We intend to use our LDD scheme and data staging architecture in the pull based architecture, and we will show that the LDD scheme together with the data staging architecture significantly improves the latency for global as well as local queries.

Controlling Overlap in Content-Oriented XML Retrieval

Charles L. A. Clarke
School of Computer Science, University of Waterloo, Canada
claclark@plg.uwaterloo.ca

ABSTRACT

The direct application of standard ranking techniques to retrieve individual elements from a collection of XML documents often produces a result set in which the top ranks are dominated by a large number of elements taken from a small number of highly relevant documents. This paper presents and evaluates an algorithm that re-ranks this result set, with the aim of minimizing redundant content while preserving the benefits of element retrieval, including the benefit of identifying topic-focused components contained within relevant documents. The test collection developed by the INitiative for the Evaluation of XML Retrieval (INEX) forms the basis for the evaluation.

Categories and Subject Descriptors

H.3.3 [Information Systems]: Information Storage and Retrieval - Information Search and Retrieval

General Terms

Algorithms, Measurement, Performance, Experimentation

1. INTRODUCTION

The representation of documents in XML provides an opportunity for information retrieval systems to take advantage of document structure, returning individual document components when appropriate, rather than complete documents in all circumstances. In response to a user query, an XML information retrieval system might return a mixture of paragraphs, sections, articles, bibliographic entries and other components. This facility is of particular benefit when a collection contains very long documents, such as product 
manuals or books, where the user should be directed to the most relevant portions of these documents.\n
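The idea of treating every document component as a candidate retrieval unit, while filtering out components too small to stand alone, can be sketched as follows. This is our own illustration, not part of the system described in this paper; the sample document fragment and the word-count threshold are hypothetical.

```python
import xml.etree.ElementTree as ET

# A fragment in the style of an XML-encoded journal article
# (hypothetical content, loosely modeled on Figure 1).
DOC = """
<article>
  <fm><atl>Text Compression for Dynamic Document Databases</atl></fm>
  <bdy>
    <sec><st>INTRODUCTION</st>
      <p>Modern document databases store large volumes of text.</p>
      <p>There are good reasons to compress such text.</p>
    </sec>
  </bdy>
</article>
"""

def candidate_elements(xml_text, min_words=4):
    """Enumerate every element as a potential retrieval unit.

    Elements whose contained text has fewer than min_words words are
    filtered out, since they are too small to stand alone as results.
    """
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        # itertext() gathers the text of the element and all descendants,
        # so nested elements (article, sec, p) overlap in content.
        words = " ".join(elem.itertext()).split()
        if len(words) >= min_words:
            yield elem.tag, len(words)

if __name__ == "__main__":
    for tag, size in candidate_elements(DOC):
        print(tag, size)
```

Running this on the fragment yields the article, fm, atl, bdy, sec and both p elements as candidates, while the one-word st title is filtered out; note how every word inside a p is also counted toward sec, bdy and article, which is exactly the overlap this paper sets out to control.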
<article>
 <fm>
  <atl>Text Compression for Dynamic Document Databases</atl>
  <au>Alistair Moffat</au>
  <au>Justin Zobel</au>
  <au>Neil Sharman</au>
  <abs><p><b>Abstract</b> For ...</p></abs>
 </fm>
 <bdy>
  <sec><st>INTRODUCTION</st>
   <ip1>Modern document databases...</ip1>
   <p>There are good reasons to compress...</p>
  </sec>
  <sec><st>REDUCING MEMORY REQUIREMENTS</st>...
   <ss1><st>2.1 Method A</st>...</ss1>
  </sec>
  ...
 </bdy>
</article>

Figure 1: A journal article encoded in XML.

Figure 1 provides an example of a journal article encoded in XML, illustrating many of the important characteristics of XML documents. Tags indicate the beginning and end of each element, with elements varying widely in size, from one word to thousands of words. Some elements, such as paragraphs and sections, may be reasonably presented to the user as retrieval results, but others are not appropriate. Elements overlap each other - articles contain sections, sections contain subsections, and subsections contain paragraphs. Each of these characteristics affects the design of an XML IR system, and each leads to fundamental problems that must be solved in a successful system. Most of these fundamental problems can be solved through the careful adaptation of standard IR techniques, but the problems caused by overlap are unique to this area [4, 11] and form the primary focus of this paper.

The article of figure 1 may be viewed as an XML tree, as illustrated in figure 2.

Figure 2: Example XML tree (nodes: article, fm, atl, au, abs, p, b, bdy, sec, st, ip1, ss1).

Formally, a collection of XML documents may be represented as a forest of ordered, rooted trees, consisting of a set of nodes N and a set of directed edges E connecting these nodes. For each node x ∈ N, the notation x.parent refers to the parent node of x, if one exists, and the notation x.children refers to the set of child nodes of x. Since an element may be represented by the node at its root, the output of an XML IR system may be viewed as a ranked list of the top-m nodes. The direct application of a standard relevance ranking technique to a set of XML elements can produce a result in which the top ranks are dominated by many structurally related elements. A high scoring section is likely to contain several 
high scoring paragraphs and to be contained in an high scoring article.\nFor example, many of the elements in figure 2 would receive a high score on the keyword query text index compression algorithms.\nIf each of these elements are presented to a user as an individual and separate result, she may waste considerable time reviewing and rejecting redundant content.\nOne possible solution is to report only the highest scoring element along a given path in the tree, and to remove from the lower ranks any element containing it, or contained within it.\nUnfortunately, this approach destroys some of the possible benefits of XML IR.\nFor example, an outer element may contain a substantial amount of information that does not appear in an inner element, but the inner element may be heavily focused on the query topic and provide a short overview of the key concepts.\nIn such cases, it is reasonable to report elements which contain, or are contained in, higher ranking elements.\nEven when an entire book is relevant, a user may still wish to have the most important paragraphs highlighted, to guide her reading and to save time [6].\nThis paper presents a method for controlling overlap.\nStarting with an initial element ranking, a re-ranking algorithm adjusts the scores of lower ranking elements that contain, or are contained within, higher ranking elements, reflecting the fact that this information may now be redundant.\nFor example, once an element representing a section appears in the ranking, the scores for the paragraphs it contains and the article that contains it are reduced.\nThe inspiration for this strategy comes partially from recent work on structured documents retrieval, where terms appearing in different fields, such as the title and body, are given different weights [20].\nExtending that approach, the re-ranking algorithm varies weights dynamically as elements are processed.\nThe remainder of the paper is organized as follows: After a discussion of background work 
and evaluation methodology, a baseline retrieval method is presented in section 4. This baseline method represents a reasonable adaptation of standard IR technology to XML. Section 5 then outlines a strategy for controlling overlap, using the baseline method as a starting point. A re-ranking algorithm implementing this strategy is presented in section 6 and evaluated in section 7. Section 8 discusses an extended version of the algorithm.

2. BACKGROUND

This section provides a general overview of XML information retrieval and discusses related work, with an emphasis on the fundamental problems mentioned in the introduction. Much research in the area of XML retrieval views it from a traditional database perspective, being concerned with such problems as the implementation of structured query languages [5] and the processing of joins [1]. Here, we take a content-oriented IR perspective, focusing on XML documents that primarily contain natural language data and queries that are primarily expressed in natural language. We assume that these queries indicate only the nature of desired content, not its structure, and that the role of the IR system is to determine which elements best satisfy the underlying information need. Other IR research has considered mixed queries, in which both content and structural requirements are specified [2, 6, 14, 17, 23].

2.1 Term and Document Statistics

In traditional information retrieval applications the standard unit of retrieval is taken to be the document. Depending on the application, this term might be interpreted to encompass many different objects, including web pages, newspaper articles and email messages. When applying standard relevance ranking techniques in the context of XML IR, a natural approach is to treat each element as a separate document, with term statistics available for each [16]. In addition, most ranking techniques require global statistics (e.g. 
inverse document frequency) computed over the collection as a whole. If we consider this collection to include all elements that might be returned by the system, a specific occurrence of a term may appear in several different documents, perhaps in elements representing a paragraph, a subsection, a section and an article. It is not appropriate to compute inverse document frequency under the assumption that the term is contained in all of these elements, since the number of elements that contain a term depends entirely on the structural arrangement of the documents [13,23].

2.2 Retrievable Elements

While an XML IR system might potentially retrieve any element, many elements may not be appropriate as retrieval results. This is usually the case when elements contain very little text [10]. For example, a section title containing only the query terms may receive a high score from a ranking algorithm, but alone it would be of limited value to a user, who might prefer the actual section itself. Other elements may reflect the document's physical, rather than logical, structure, which may have little or no meaning to a user. An effective XML IR system must return only those elements that have sufficient content to be usable and are able to stand alone as independent objects [15,18]. Standard document components such as paragraphs, sections, subsections, and abstracts usually meet these requirements; titles, italicized phrases, and individual metadata fields often do not.

2.3 Evaluation Methodology

Over the past three years, the INitiative for the Evaluation of XML Retrieval (INEX) has encouraged research into XML information retrieval technology [7,8]. INEX is an experimental conference series, similar to TREC, with groups from different institutions completing one or more experimental tasks using their own tools and systems, and comparing their results at the conference itself. Over 50 groups participated in INEX 2004, and the conference has become as
influential in the area of XML IR as TREC is in other IR areas. The research described in this paper, as well as much of the related work it cites, depends on the test collections developed by INEX. Overlap causes considerable problems with retrieval evaluation, and the INEX organizers and participants have wrestled with these problems since the beginning. While substantial progress has been made, these problems are still not completely solved. Kazai et al. [11] provide a detailed exposition of the overlap problem in the context of INEX retrieval evaluation and discuss both current and proposed evaluation metrics. Many of these metrics are applied to evaluate the experiments reported in this paper, and they are briefly outlined in the next section.

3. INEX 2004

Space limitations prevent the inclusion of more than a brief summary of INEX 2004 tasks and evaluation methodology. For detailed information, the proceedings of the conference itself should be consulted [8].

3.1 Tasks

For the main experimental tasks, INEX 2004 participants were provided with a collection of 12,107 articles taken from the IEEE Computer Society's magazines and journals between 1995 and 2002. Each document is encoded in XML using a common DTD, with the document of figures 1 and 2 providing one example. At INEX 2004, the two main experimental tasks were both ad hoc retrieval tasks, investigating the performance of systems searching a static collection using previously unseen topics. The two tasks differed in the types of topics they used. For one task, the content-only or CO task, the topics consist of short natural language statements with no direct reference to the structure of the documents in the collection. For this task, the IR system is required to select the elements to be returned. For the other task, the content-and-structure or CAS task, the topics are written in an XML query language [22] and contain explicit references to document structure, which the IR system must
attempt to satisfy. Since the work described in this paper is directed at the content-only task, where the IR system receives no guidance regarding the elements to return, the CAS task is ignored in the remainder of our description. In 2004, 40 new CO topics were selected by the conference organizers from contributions provided by the conference participants. Each topic includes a short keyword query, which is executed over the collection by each participating group on their own XML IR system. Each group could submit up to three experimental runs consisting of the top m = 1500 elements for each topic.

3.2 Relevance Assessment

Since XML IR is concerned with locating those elements that provide complete coverage of a topic while containing as little extraneous information as possible, simple relevant vs. not relevant judgments are not sufficient. Instead, the INEX organizers adopted two dimensions for relevance assessment: the exhaustivity dimension reflects the degree to which an element covers the topic, and the specificity dimension reflects the degree to which an element is focused on the topic. A four-point scale is used in both dimensions. Thus, a (3,3) element is highly exhaustive and highly specific, a (1,3) element is marginally exhaustive and highly specific, and a (0,0) element is not relevant. Additional information on the assessment methodology may be found in Piwowarski and Lalmas [19], who provide a detailed rationale.

3.3 Evaluation Metrics

The principal evaluation metric used at INEX 2004 is a version of mean average precision (MAP), adjusted by various quantization functions to give different weights to different elements, depending on their exhaustivity and specificity values. One variant, the strict quantization function, gives a weight of 1 to (3,3) elements and a weight of 0 to all others. This variant is essentially the familiar MAP value, with (3,3) elements treated as relevant and all other elements treated as not relevant. Other
quantization functions are designed to give partial credit to elements which are near misses, due to a lack of exhaustivity and/or specificity. Both the generalized quantization function and the specificity-oriented generalization (sog) function credit elements according to their degree of relevance [11], with the second function placing greater emphasis on specificity. This paper reports results of this metric using all three of these quantization functions. Since this metric was first introduced at INEX 2002, it is generally referred to as the inex-2002 metric.

The inex-2002 metric does not penalize overlap. In particular, both the generalized and sog quantization functions give partial credit to a near miss even when a (3,3) element overlapping it is reported at a higher rank. To address this problem, Kazai et al. [11] propose an XML cumulated gain (XCG) metric, which compares the cumulated gain [9] of a ranked list to an ideal gain vector. This ideal gain vector is constructed from the relevance judgments by eliminating overlap and retaining only the best element along a given path. Thus, the XCG metric rewards retrieval runs that avoid overlap. While XCG was not used officially at INEX 2004, a version of it is likely to be used in the future.

At INEX 2003, yet another metric was introduced to ameliorate the perceived limitations of the inex-2002 metric. This inex-2003 metric extends the definitions of precision and recall to consider both the size of reported components and the overlap between them. Two versions were created, one that considered only component size and another that considered both size and overlap. While the inex-2003 metric exhibits undesirable anomalies [11], and was not used in 2004, values are reported in the evaluation section to provide an additional instrument for investigating overlap.

4. BASELINE RETRIEVAL METHOD

This section provides an overview of the baseline XML information retrieval method currently used in the MultiText IR
system, developed by the Information Retrieval Group at the University of Waterloo [3]. This retrieval method results from the adaptation and tuning of the Okapi BM25 measure [21] to the XML information retrieval task. The MultiText system performed respectably at INEX 2004, placing in the top ten under all of the quantization functions, and placing first when the quantization function emphasized exhaustivity.

To support retrieval from XML and other structured document types, the system provides generalized queries of the form:

  rank X by Y

where X is a sub-query specifying a set of document elements to be ranked and Y is a vector of sub-queries specifying individual retrieval terms. For our INEX 2004 runs, the sub-query X specified a list of retrievable elements as those with tag names as follows:

  abs app article bb bdy bm fig fm ip1 li p sec ss1 ss2 vt

This list includes bibliographic entries (bb) and figure captions (fig) as well as paragraphs, sections and subsections. Prior to INEX 2004, the INEX collection and the INEX 2003 relevance judgments were manually analyzed to select these tag names. Tag names were selected on the basis of their frequency in the collection, the average size of their associated elements, and the relative number of positive relevance judgments they received. Automating this selection process is planned as future work.

For INEX 2004, the term vector Y was derived from the topic by splitting phrases into individual words, eliminating stopwords and negative terms (those starting with -), and applying a stemmer. For example, the keyword field of topic 166,

  +"tree edit distance" +XML -image

became the four-term query

  "$tree" "$edit" "$distance" "$xml"

where the $ operator within a quoted string stems the term that follows it. Our implementation of Okapi BM25 is derived from the formula of Robertson et al.
[21] by setting parameters k2 = 0 and k3 = ∞. Given a term set Q, an element x is assigned the score

  Σ_{t∈Q} w(1) · qt(k1 + 1)xt / (K + xt)    (1)

where

  w(1) = log((D − Dt + 0.5) / (Dt + 0.5))
  D = number of documents in the corpus
  Dt = number of documents containing t
  qt = frequency that t occurs in the topic
  xt = frequency that t occurs in x
  K = k1((1 − b) + b · lx/lavg)
  lx = length of x
  lavg = average document length

Figure 3: Impact of k1 on inex-2002 mean average precision with b = 0.75 (INEX 2003 CO topics).

Prior to INEX 2004, the INEX 2003 topics and judgments were used to tune the b and k1 parameters, and the impact of this tuning is discussed later in this section. For the purposes of computing document-level statistics (D, Dt and lavg) a document is defined to be an article. These statistics are used for ranking all element types. Following the suggestion of Kamps et al.
[10], the retrieval results are filtered to eliminate very short elements, those less than 25 words in length.

The use of article statistics for all element types might be questioned. This approach may be justified by viewing the collection as a set of articles to be searched using standard document-oriented techniques, where only articles may be returned. The score computed for an element is essentially the score it would receive if it were added to the collection as a new document, ignoring the minor adjustments needed to the document-level statistics. Nonetheless, we plan to examine this issue again in the future.

In our experience, the performance of BM25 typically benefits from tuning the b and k1 parameters to the collection, whenever training queries are available for this purpose. Prior to INEX 2004, we trained the MultiText system using the INEX 2003 queries. As a starting point we used the values b = 0.75 and k1 = 1.2, which perform well on TREC ad hoc collections and are used as default values in our system. The results were surprising. Figure 3 shows the result of varying k1 with b = 0.75 on the MAP values under three quantization functions. In our experience, optimal values for k1 are typically in the range 0.0 to 2.0. In this case, large values are required for good performance. Between k1 = 1.0 and k1 = 6.0 MAP increases by over 15% under the strict quantization. Similar improvements are seen under the generalized and sog quantizations. In contrast, our default value of b = 0.75 works well under all quantization functions (figure 4). After tuning over a wide range of values under several quantization functions, we selected values of k1 = 10.0 and b = 0.80 for our INEX 2004 experiments, and these values are used for the experiments reported in section 7.

Figure 4: Impact of b on inex-2002 mean
average precision with k1 = 10 (INEX 2003 CO topics).

5. CONTROLLING OVERLAP

Starting with an element ranking generated by the baseline method described in the previous section, elements are re-ranked to control overlap by iteratively adjusting the scores of those elements containing or contained in higher ranking elements. At a conceptual level, re-ranking proceeds as follows:

1. Report the highest ranking element.
2. Adjust the scores of the unreported elements.
3. Repeat steps 1 and 2 until m elements are reported.

One approach to adjusting the scores of unreported elements in step 2 might be based on the Okapi BM25 scores of the involved elements. For example, assume a paragraph with score p is reported in step 1. In step 2, the section containing the paragraph might then have its score s lowered by an amount α · p to reflect the reduced contribution the paragraph should make to the section's score. In a related context, Robertson et al. [20] argue strongly against the linear combination of Okapi scores in this fashion. That work considers the problem of assigning different weights to different document fields, such as the title and body associated with Web pages. A common approach to this problem scores the title and body separately and generates a final score as a linear combination of the two. Robertson et al.
discuss the theoretical flaws in this approach and demonstrate experimentally that it can actually harm retrieval effectiveness. Instead, they apply the weights at the term frequency level, with an occurrence of a query term t in the title making a greater contribution to the score than an occurrence in the body. In equation 1, xt becomes α0 · yt + α1 · zt, where yt is the number of times t occurs in the title and zt is the number of times t occurs in the body. Translating this approach to our context, the contribution of terms appearing in elements is dynamically reduced as they are reported.

The next section presents and analyzes a simple re-ranking algorithm that follows this strategy. The algorithm is evaluated experimentally in section 7. One limitation of the algorithm is that the contribution of terms appearing in reported elements is reduced by the same factor regardless of the number of reported elements in which it appears. In section 8 the algorithm is extended to apply increasing weights, lowering the score, when a term appears in more than one reported element.

6. RE-RANKING ALGORITHM

The re-ranking algorithm operates over XML trees, such as the one appearing in figure 2. Input to the algorithm is a list of n elements ranked according to their initial BM25 scores. During the initial ranking the XML tree is dynamically re-constructed to include only those nodes with nonzero BM25 scores, so n may be considerably less than |N|. Output from the algorithm is a list of the top m elements, ranked according to their adjusted scores. An element is represented by the node x ∈ N at its root. Associated with this node are fields storing the length of the element, term frequencies, and other information required by the re-ranking algorithm, as follows:

  x.f - term frequency vector
  x.g - term frequency adjustments
  x.l - element length
  x.score - current Okapi BM25 score
  x.reported - boolean flag, initially false
  x.children - set of
child nodes
  x.parent - parent node, if one exists

These fields are populated during the initial ranking process, and updated as the algorithm progresses. The vector x.f contains term frequency information corresponding to each term in the query. The vector x.g is initially zero and is updated by the algorithm as elements are reported. The score field contains the current BM25 score for the element, which will change as the values in x.g change. The score is computed using equation 1, with the xt value for each term determined by a combination of the values in x.f and x.g. Given a term t ∈ Q, let ft be the component of x.f corresponding to t, and let gt be the component of x.g corresponding to t; then:

  xt = ft − α · gt    (2)

For processing by the re-ranking algorithm, nodes are stored in priority queues, ordered by decreasing score. Each priority queue PQ supports three operations:

  PQ.front() - returns the node with greatest score
  PQ.add(x) - adds node x to the queue
  PQ.remove(x) - removes node x from the queue

When implemented using standard data structures, the front operation requires O(1) time, and the other operations require O(log n) time, where n is the size of the queue.

The core of the re-ranking algorithm is presented in figure 5. The algorithm takes as input the priority queue S containing the initial ranking, and produces the top-m re-ranked nodes in the priority queue F. After initializing F to be empty on line 1, the algorithm loops m times over lines 2-15, transferring at least one node from S to F during each iteration. At the start of each iteration, the unreported node at the front of S has the greatest adjusted score, and it is removed and added to F. The algorithm then traverses the node's descendants (lines 8-10) and ancestors (lines 12-14), adjusting the scores of these nodes.

   1  F ← ∅
   2  for i ← 1 to m do
   3    x ← S.front()
   4    S.remove(x)
   5    x.reported ← true
   6    F.add(x)
   7
   8    foreach y ∈ x.children do
   9      Down(y)
  10    end do
  11
  12    if x is not a root node then
  13      Up(x, x.parent)
  14    end if
  15  end do

Figure 5: Re-Ranking Algorithm - As input, the algorithm takes a priority queue S, containing XML nodes ranked by their initial scores, and returns its results in priority queue F, ranked by adjusted scores.

   1  Up(x, y) ≡
   2    S.remove(y)
   3    y.g ← y.g + x.f − x.g
   4    recompute y.score
   5    S.add(y)
   6    if y is not a root node then
   7      Up(x, y.parent)
   8    end if
   9
  10  Down(x) ≡
  11    if not x.reported then
  12      S.remove(x)
  13      x.g ← x.f
  14      recompute x.score
  15      if x.score > 0 then
  16        F.add(x)
  17      end if
  18      x.reported ← true
  19      foreach y ∈ x.children do
  20        Down(y)
  21      end do
  22    end if

Figure 6: Tree traversal routines called by the re-ranking algorithm.

Figure 7: Impact of α on XCG and inex-2002 MAP (INEX 2004 CO topics; assessment set I).

The tree traversal routines, Up and Down, are given in figure 6. The Up routine removes each ancestor node from S, adjusts its term frequency values, recomputes its score, and adds it back into S. The adjustment of the term frequency values (line 3) adds to y.g only the previously unreported term occurrences in x.
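The re-ranking loop and the Up/Down traversals can be sketched in Python. This is a minimal illustration under simplifying assumptions, not the authors' MultiText implementation: a single query term (so x.f and x.g collapse to scalars with qt = 1), a plain list standing in for the priority queue (production code would use a heap with O(log n) updates), and invented corpus statistics D, Dt and lavg; the α value is also illustrative.

```python
import math

# Sketch of the overlap-controlling re-ranking algorithm (figures 5 and 6).
# Assumptions: one query term, list-based "priority queue", toy statistics.

K1, B, ALPHA = 10.0, 0.80, 0.5      # k1, b as tuned in section 4; alpha illustrative
D, DT, L_AVG = 1000, 50, 100.0      # assumed document-level statistics
W1 = math.log((D - DT + 0.5) / (DT + 0.5))  # inverse document frequency weight

class Node:
    def __init__(self, name, f, length, children=()):
        self.name, self.f, self.g, self.l = name, f, 0.0, length
        self.children, self.parent, self.reported = list(children), None, False
        for c in self.children:
            c.parent = self
        self.rescore()

    def rescore(self):
        xt = self.f - ALPHA * self.g                  # equation (2)
        K = K1 * ((1 - B) + B * self.l / L_AVG)
        self.score = W1 * (K1 + 1) * xt / (K + xt) if xt > 0 else 0.0  # equation (1)

def down(x, S, F):
    # Descendants of a reported node receive their final, fully discounted score.
    if not x.reported:
        if x in S:
            S.remove(x)
        x.g = x.f                                     # every occurrence in x is now reported
        x.rescore()
        if x.score > 0:
            F.append(x)
        x.reported = True
        for y in x.children:
            down(y, S, F)

def up(x, y, S):
    # Ancestors are discounted by the previously unreported occurrences in x.
    if y in S:
        S.remove(y)
        y.g += x.f - x.g
        y.rescore()
        S.append(y)
    if y.parent is not None:
        up(x, y.parent, S)

def rerank(nodes, m):
    S = [n for n in nodes if n.score > 0]             # initial ranking
    F = []
    while len(F) < m and S:
        x = max(S, key=lambda n: n.score)             # PQ.front()
        S.remove(x)
        x.reported = True
        F.append(x)
        for y in x.children:                          # lines 8-10 of figure 5
            down(y, S, F)
        if x.parent is not None:                      # lines 12-14 of figure 5
            up(x, x.parent, S)
    return sorted(F, key=lambda n: -n.score)

# Toy tree: an article containing one section with two paragraphs.
p1 = Node("p1", f=3.0, length=30)
p2 = Node("p2", f=1.0, length=40)
sec = Node("sec", f=4.0, length=80, children=[p1, p2])
art = Node("art", f=4.0, length=200, children=[sec])
ranked = rerank([art, sec, p1, p2], m=4)
print([n.name for n in ranked])  # → ['p1', 'sec', 'art', 'p2']
```

On this toy tree the short, heavily focused paragraph p1 is reported first; the enclosing section and article are then discounted by α times the reported occurrences, so the section still outranks the article, and the weaker paragraph p2 receives its final reduced score only when its enclosing section is reported.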
Re-computation of the score on line 4 uses equations 1 and 2. The Down routine performs a similar operation on each descendant. However, since the contents of each descendant are entirely contained in a reported element, its final score may be computed, and it is removed from S and added to F.

In order to determine the time complexity of the algorithm, first note that a node may be an argument to Down at most once. Thereafter, the reported flag of its parent is true. During each call to Down a node may be moved from S to F, requiring O(log n) time. Thus, the total time for all calls to Down is O(n log n), and we may temporarily ignore lines 8-10 of figure 5 when considering the time complexity of the loop over lines 2-15. During each iteration of this loop, a node and each of its ancestors are removed from a priority queue and then added back into a priority queue. Since a node may have at most h ancestors, where h is the maximum height of any tree in the collection, each of the m iterations requires O(h log n) time. Combining these observations produces an overall time complexity of O((n + mh) log n). In practice, re-ranking an INEX result set requires less than 200ms on a three-year-old desktop PC.

7. EVALUATION

None of the metrics described in section 3.3 is a close fit with the view of overlap advocated by this paper. Nonetheless, when taken together they provide insight into the behaviour of the re-ranking algorithm. The INEX evaluation packages (inex_eval and inex_eval_ng) were used to compute values for the inex-2002 and inex-2003 metrics. Values for the XCG metrics were computed using software supplied by its inventors [11].

Figure 7 plots the three variants of the inex-2002 MAP metric together with the XCG metric. Values for these metrics are plotted for values of α between 0.0 and 1.0. Recalling that the XCG metric is designed to penalize overlap, while the inex-2002 metric ignores overlap, the conflict between the metrics is obvious. The MAP values at one extreme (α = 0.0) and the XCG value at the other extreme (α = 1.0) represent retrieval performance comparable to the best systems at INEX 2004 [8,12].

Figure 8: Impact of α on inex-2003 MAP (INEX 2004 CO topics; assessment set I).

Figure 8 plots values of the inex-2003 MAP metric for two quantizations, with and without consideration of overlap. Once again, conflict is apparent, with the influence of α substantially lessened when overlap is considered.

8. EXTENDED ALGORITHM

One limitation of the re-ranking algorithm is that a single weight α is used to adjust the scores of both the ancestors and descendants of reported elements. An obvious extension is to use different weights in these two cases. Furthermore, the same weight is used regardless of the number of times an element is contained in a reported element. For example, a paragraph may form part of a reported section and then form part of a reported article. Since the user may now have seen this paragraph twice, its score should be further lowered by increasing the value of the weight. Motivated by these observations, the re-ranking algorithm may be extended with a series of weights 1 = β0 ≥ β1 ≥ β2 ≥ ...
≥ βM ≥ 0, where βj is the weight applied to a node that has been a descendant of a reported node j times. Note that an upper bound on M is h, the maximum height of any XML tree in the collection. However, in practice M is likely to be relatively small (perhaps 3 or 4).

Figure 9 presents replacements for the Up and Down routines of figure 6, incorporating this series of weights. One extra field is required in each node, as follows:

  x.j - down count

The value of x.j is initially set to zero in all nodes and is incremented each time Down is called with x as its argument. When computing the score of a node, the value of x.j selects the weight to be applied to the node by adjusting the value of xt in equation 1, as follows:

  xt = βx.j · (ft − α · gt)    (3)

where ft and gt are the components of x.f and x.g corresponding to term t.

   1  Up(x, y) ≡
   2    if not y.reported then
   3      S.remove(y)
   4      y.g ← y.g + x.f − x.g
   5      recompute y.score
   6      S.add(y)
   7      if y is not a root node then
   8        Up(x, y.parent)
   9      end if
  10    end if
  11
  12  Down(x) ≡
  13    if x.j < M then
  14      x.j ← x.j + 1
  15      if not x.reported then
  16        S.remove(x)
  17        recompute x.score
  18        S.add(x)
  19      end if
  20      foreach y ∈ x.children do
  21        Down(y)
  22      end do
  23    end if

Figure 9: Extended tree traversal routines.

A few additional changes are required to extend Up and Down. The Up routine returns immediately (line 2) if its argument has already been reported, since term frequencies have already been adjusted in its ancestors. The Down routine does not report its argument, but instead recomputes its score and adds it back into S. A node cannot be an argument to Down more than M + 1 times, which in turn implies an overall time complexity of O((nM + mh) log n). Since M ≤ h and m ≤ n, the time complexity is also O(nh log n).

9. CONCLUDING DISCUSSION

When generating retrieval results over an XML collection, some overlap in the results should be tolerated,
and may be beneficial. For example, when a highly exhaustive and fairly specific (3,2) element contains a much smaller (2,3) element, both should be reported to the user, and retrieval algorithms and evaluation metrics should respect this relationship. The algorithm presented in this paper controls overlap by weighting the terms occurring in reported elements to reflect their reduced importance.

Other approaches may also help to control overlap. For example, when XML retrieval results are presented to users it may be desirable to cluster structurally related elements together, visually illustrating the relationships between them. While this style of user interface may help a user cope with overlap, the strategy presented in this paper continues to be applicable, by determining the best elements to include in each cluster.

At Waterloo, we continue to develop and test our ideas for INEX 2005. In particular, we are investigating methods for learning the α and βj weights. We are also re-evaluating our approach to document statistics and examining appropriate adjustments to the k1 parameter as term weights change [20].

10. ACKNOWLEDGMENTS

Thanks to Gabriella Kazai and Arjen de Vries for providing an early version of their software for computing the XCG metric, and thanks to Phil Tilker and Stefan Büttcher for their help with the experimental evaluation. In part, funding for this project was provided by IBM Canada through the National Institute for Software Research.

11. REFERENCES

[1] N. Bruno, N. Koudas, and D. Srivastava. Holistic twig joins: Optimal XML pattern matching. In Proceedings of the 2002 ACM SIGMOD International Conference on the Management of Data, pages 310-321, Madison, Wisconsin, June 2002.

[2] D. Carmel, Y. S. Maarek, M. Mandelbrod, Y. Mass, and A.
Soffer. Searching XML documents via XML fragments. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 151-158, Toronto, Canada, 2003.

[3] C. L. A. Clarke and P. L. Tilker. MultiText experiments for INEX 2004. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].

[4] A. P. de Vries, G. Kazai, and M. Lalmas. Tolerance to irrelevance: A user-effort oriented evaluation of retrieval systems without predefined retrieval unit. In RIAO 2004 Conference Proceedings, pages 463-473, Avignon, France, April 2004.

[5] D. DeHaan, D. Toman, M. P. Consens, and M. T. Özsu. A comprehensive XQuery to SQL translation using dynamic interval encoding. In Proceedings of the 2003 ACM SIGMOD International Conference on the Management of Data, San Diego, June 2003.

[6] N. Fuhr and K. Großjohann. XIRQL: A query language for information retrieval in XML documents. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 172-180, New Orleans, September 2001.

[7] N. Fuhr, M. Lalmas, and S. Malik, editors. Initiative for the Evaluation of XML Retrieval. Proceedings of the Second Workshop (INEX 2003), Dagstuhl, Germany, December 2003.

[8] N. Fuhr, M. Lalmas, S. Malik, and Zoltán Szlávik, editors. Initiative for the Evaluation of XML Retrieval. Proceedings of the Third Workshop (INEX 2004), Dagstuhl, Germany, December 2004. Published as Advances in XML Information Retrieval, Lecture Notes in Computer Science, volume 3493, Springer, 2005.

[9] K. Järvelin and J. Kekäläinen. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20(4):422-446, 2002.

[10] J. Kamps, M. de Rijke, and B.
Sigurbjörnsson. Length normalization in XML retrieval. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 80-87, Sheffield, UK, July 2004.

[11] G. Kazai, M. Lalmas, and A. P. de Vries. The overlap problem in content-oriented XML retrieval evaluation. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 72-79, Sheffield, UK, July 2004.

[12] G. Kazai, M. Lalmas, and A. P. de Vries. Reliability tests for the XCG and inex-2002 metrics. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].

[13] J. Kekäläinen, M. Junkkari, P. Arvola, and T. Aalto. TRIX 2004 - Struggling with the overlap. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].

[14] S. Liu, Q. Zou, and W. W. Chu. Configurable indexing and ranking for XML information retrieval. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 88-95, Sheffield, UK, July 2004.

[15] Y. Mass and M. Mandelbrod. Retrieving the most relevant XML components. In INEX 2003 Workshop Proceedings, Dagstuhl, Germany, December 2003.

[16] Y. Mass and M. Mandelbrod. Component ranking and automatic query refinement for XML retrieval. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].

[17] P. Ogilvie and J. Callan. Hierarchical language models for XML component retrieval. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].

[18] J. Pehcevski, J. A. Thom, and A. Vercoustre. Hybrid XML retrieval re-visited. In INEX 2004 Workshop Proceedings, 2004. Published in LNCS 3493 [8].

[19] B. Piwowarski and M.
Lalmas.\nProviding consistent and exhaustive relevance assessments for XML retrieval evaluation.\nIn Proceedings of the 13th ACM Conference on Information and Knowledge Management, pages 361-370, Washington, DC, November 2004.\n[20] S. Robertson, H. Zaragoza, and M. Taylor.\nSimple BM25 extension to multiple weighted fields.\nIn Proceedings of the 13th ACM Conference on Information and Knowledge Management, pages 42-50, Washington, DC, November 2004.\n[21] S. E. Robertson, S. Walker, and M. Beaulieu.\nOkapi at TREC-7: Automatic ad-hoc, filtering, VLC and interactive track.\nIn Proceedings of the Seventh Text REtrieval Conference, Gaithersburg, MD, November 1998.\n[22] A. Trotman and B. Sigurbj\u00a8ornsson.\nNEXI, now and next.\nIn INEX 2004 Workshop Proceedings, 2004.\nPublished in LNCS 3493 [8].\n[23] J. Vittaut, B. Piwowarski, and P. Gallinari.\nAn algebra for structured queries in bayesian networks.\nIn INEX 2004 Workshop Proceedings, 2004.\nPublished in LNCS 3493 [8].","lvl-3":"Controlling Overlap in Content-Oriented XML Retrieval\nABSTRACT\nThe direct application of standard ranking techniques to retrieve individual elements from a collection of XML documents often produces a result set in which the top ranks are dominated by a large number of elements taken from a small number of highly relevant documents.\nThis paper presents and evaluates an algorithm that re-ranks this result set, with the aim of minimizing redundant content while preserving the benefits of element retrieval, including the benefit of identifying topic-focused components contained within relevant documents.\nThe test collection developed by the INitiative for the Evaluation of XML Retrieval (INEX) forms the basis for the evaluation.\n1.\nINTRODUCTION\nThe representation of documents in XML provides an opportunity for information retrieval systems to take advantage of document structure, returning individual document components when appropriate, rather than complete documents in all 
Controlling Overlap in Content-Oriented XML Retrieval

ABSTRACT

The direct application of standard ranking techniques to retrieve individual
elements from a collection of XML documents often produces a result set in which the top ranks are dominated by a large number of elements taken from a small number of highly relevant documents. This paper presents and evaluates an algorithm that re-ranks this result set, with the aim of minimizing redundant content while preserving the benefits of element retrieval, including the benefit of identifying topic-focused components contained within relevant documents. The test collection developed by the INitiative for the Evaluation of XML Retrieval (INEX) forms the basis for the evaluation.

1. INTRODUCTION

The representation of documents in XML provides an opportunity for information retrieval systems to take advantage of document structure, returning individual document components when appropriate, rather than complete documents in all circumstances. In response to a user query, an XML information retrieval system might return a mixture of paragraphs, sections, articles, bibliographic entries and other components. This facility is of particular benefit when a collection contains very long documents, such as product manuals or books, where the user should be directed to the most relevant portions of these documents.

Figure 1: A journal article encoded in XML.

Figure 1 provides an example of a journal article encoded in XML, illustrating many of the important characteristics of XML documents. Tags indicate the beginning and end of each element, with elements varying widely in size, from one word to thousands of words. Some elements, such as paragraphs and sections, may be reasonably presented to the user as retrieval results, but others are not appropriate. Elements overlap each other: articles contain sections, sections contain subsections, and subsections contain paragraphs. Each of these characteristics affects the design of an XML IR system, and each leads to fundamental problems that must be solved in a successful system. Most of these fundamental problems can be solved through the careful adaptation of standard IR techniques, but the problems caused by overlap are unique to this area [4,11] and form the primary focus of this paper.

The article of figure 1 may be viewed as an XML tree, as illustrated in figure 2. Formally, a collection of XML documents may be represented as a forest of ordered, rooted trees, consisting of a set of nodes N and a set of directed edges E connecting these nodes. For each node x ∈ N, the notation x.parent refers to the parent node of x, if one exists, and the notation x.children refers to the set of child nodes of x.

Figure 2: Example XML tree.

Since an element may be represented by the node at its root, the output of an XML IR system may be viewed as a ranked list of the top-m nodes. The direct application of a standard relevance ranking technique to a set of XML elements can produce a result in which the top ranks are dominated by many structurally related elements. A high scoring section is likely to contain several high scoring paragraphs and to be contained in a high scoring article. For example, many of the elements in figure 2 would receive a high score on the keyword query "text index compression algorithms". If each of these elements is presented to a user as an individual and separate result, she may waste considerable time reviewing and rejecting redundant content.

One possible solution is to report only the highest scoring element along a given path in the tree, and to remove from the lower ranks any element containing it, or contained within it. Unfortunately, this approach destroys some of the possible benefits of XML IR. For example, an outer element may contain a substantial amount of information that does not appear in an inner element, but the inner element may be heavily focused on the query topic and provide a short overview of the key concepts. In such cases, it is reasonable to report elements which contain, or are contained in, higher
ranking elements. Even when an entire book is relevant, a user may still wish to have the most important paragraphs highlighted, to guide her reading and to save time [6].

This paper presents a method for controlling overlap. Starting with an initial element ranking, a re-ranking algorithm adjusts the scores of lower ranking elements that contain, or are contained within, higher ranking elements, reflecting the fact that this information may now be redundant. For example, once an element representing a section appears in the ranking, the scores for the paragraphs it contains and the article that contains it are reduced. The inspiration for this strategy comes partially from recent work on structured document retrieval, where terms appearing in different fields, such as the title and body, are given different weights [20]. Extending that approach, the re-ranking algorithm varies weights dynamically as elements are processed.

The remainder of the paper is organized as follows: After a discussion of background work and evaluation methodology, a baseline retrieval method is presented in section 4. This baseline method represents a reasonable adaptation of standard IR technology to XML. Section 5 then outlines a strategy for controlling overlap, using the baseline method as a starting point. A re-ranking algorithm implementing this strategy is presented in section 6 and evaluated in section 7. Section 8 discusses an extended version of the algorithm.

2. BACKGROUND

This section provides a general overview of XML information retrieval and discusses related work, with an emphasis on the fundamental problems mentioned in the introduction. Much research in the area of XML retrieval views it from a traditional database perspective, being concerned with such problems as the implementation of structured query languages [5] and the processing of joins [1]. Here, we take a "content oriented" IR perspective, focusing on XML documents that primarily contain natural language data and queries that are primarily expressed in natural language. We assume that these queries indicate only the nature of desired content, not its structure, and that the role of the IR system is to determine which elements best satisfy the underlying information need. Other IR research has considered mixed queries, in which both content and structural requirements are specified [2,6,14,17,23].

2.1 Term and Document Statistics

In traditional information retrieval applications the standard unit of retrieval is taken to be the "document". Depending on the application, this term might be interpreted to encompass many different objects, including web pages, newspaper articles and email messages. When applying standard relevance ranking techniques in the context of XML IR, a natural approach is to treat each element as a separate "document", with term statistics available for each [16]. In addition, most ranking techniques require global statistics (e.g. inverse document frequency) computed over the collection as a whole. If we consider this collection to include all elements that might be returned by the system, a specific occurrence of a term may appear in several different "documents", perhaps in elements representing a paragraph, a subsection, a section and an article. It is not appropriate to compute inverse document frequency under the assumption that the term is contained in all of these elements, since the number of elements that contain a term depends entirely on the structural arrangement of the documents [13,23].

2.2 Retrievable Elements

While an XML IR system might potentially retrieve any element, many elements may not be appropriate as retrieval results. This is usually the case when elements contain very little text [10]. For example, a section title containing only the query terms may receive a high score from a ranking algorithm, but alone it would be of limited value to a user, who might prefer the actual
section itself. Other elements may reflect the document's physical, rather than logical, structure, which may have little or no meaning to a user. An effective XML IR system must return only those elements that have sufficient content to be usable and are able to stand alone as independent objects [15,18]. Standard document components such as paragraphs, sections, subsections, and abstracts usually meet these requirements; titles, italicized phrases, and individual metadata fields often do not.

2.3 Evaluation Methodology

Over the past three years, the INitiative for the Evaluation of XML Retrieval (INEX) has encouraged research into XML information retrieval technology [7,8]. INEX is an experimental conference series, similar to TREC, with groups from different institutions completing one or more experimental tasks using their own tools and systems, and comparing their results at the conference itself. Over 50 groups participated in INEX 2004, and the conference has become as influential in the area of XML IR as TREC is in other IR areas. The research described in this paper, as well as much of the related work it cites, depends on the test collections developed by INEX.

Overlap causes considerable problems with retrieval evaluation, and the INEX organizers and participants have wrestled with these problems since the beginning. While substantial progress has been made, these problems are still not completely solved. Kazai et al. [11] provide a detailed exposition of the overlap problem in the context of INEX retrieval evaluation and discuss both current and proposed evaluation metrics. Many of these metrics are applied to evaluate the experiments reported in this paper, and they are briefly outlined in the next section.

3. INEX 2004

Space limitations prevent the inclusion of more than a brief summary of INEX 2004 tasks and evaluation methodology. For detailed information, the proceedings of the conference itself should be consulted [8].

3.1 Tasks

For the main experimental tasks, INEX 2004 participants were provided with a collection of 12,107 articles taken from the IEEE Computer Society's magazines and journals between 1995 and 2002. Each document is encoded in XML using a common DTD, with the document of figures 1 and 2 providing one example. At INEX 2004, the two main experimental tasks were both ad hoc retrieval tasks, investigating the performance of systems searching a static collection using previously unseen topics. The two tasks differed in the types of topics they used. For one task, the "content-only" or CO task, the topics consist of short natural language statements with no direct reference to the structure of the documents in the collection. For this task, the IR system is required to select the elements to be returned. For the other task, the "content-and-structure" or CAS task, the topics are written in an XML query language [22] and contain explicit references to document structure, which the IR system must attempt to satisfy. Since the work described in this paper is directed at the content-only task, where the IR system receives no guidance regarding the elements to return, the CAS task is ignored in the remainder of our description. In 2004, 40 new CO topics were selected by the conference organizers from contributions provided by the conference participants. Each topic includes a short keyword query, which is executed over the collection by each
participating group on their own XML IR system.\nEach group could submit up to three experimental runs consisting of the top m = 1500 elements for each topic.\n3.2 Relevance Assessment\nSince XML IR is concerned with locating those elements that provide complete coverage of a topic while containing as little extraneous information as possible, simple \"relevant\" vs. \"not relevant\" judgments are not sufficient.\nInstead, the INEX organizers adopted two dimensions for relevance assessment: The exhaustivity dimension reflects the degree to which an element covers the topic, and the specificity dimension reflects the degree to which an element is focused on the topic.\nA four-point scale is used in both dimensions.\nThus, a (3,3) element is highly exhaustive and highly specific, a (1,3) element is marginally exhaustive and highly specific, and a (0,0) element is not relevant.\nAdditional information on the assessment methodology may be found in Piwowarski and Lalmas [19], who provide a detailed rationale.\n3.3 Evaluation Metrics\nThe principle evaluation metric used at INEX 2004 is a version of mean average precision (MAP), adjusted by various quantization functions to give different weights to different elements, depending on their exhaustivity and specificity values.\nOne variant, the strict quantization function gives a weight of 1 to (3,3) elements and a weight of 0 to all others.\nThis variant is essentially the familiar MAP value, with (3,3) elements treated as \"relevant\" and all other elements treated as \"not relevant\".\nOther quantization functions are designed to give partial credit to elements which are \"near misses\", due to a lack or exhaustivity and\/or specificity.\nBoth the generalized quantization function and the specificity-oriented generalization (sog) function credit elements \"according to their degree of relevance\" [11], with the second function placing greater emphasis on specificity.\nThis paper reports results of this metric using all 
three of these quantization functions. Since this metric was first introduced at INEX 2002, it is generally referred to as the "inex-2002" metric. The inex-2002 metric does not penalize overlap. In particular, both the generalized and sog quantization functions give partial credit to a "near miss" even when a (3,3) element overlapping it is reported at a higher rank. To address this problem, Kazai et al. [11] propose an XML cumulated gain metric, which compares the cumulated gain [9] of a ranked list to an ideal gain vector. This ideal gain vector is constructed from the relevance judgments by eliminating overlap and retaining only the best element along a given path. Thus, the XCG metric rewards retrieval runs that avoid overlap. While XCG was not used officially at INEX 2004, a version of it is likely to be used in the future. At INEX 2003, yet another metric was introduced to ameliorate the perceived limitations of the inex-2002 metric. This "inex-2003" metric extends the definitions of precision and recall to consider both the size of reported components and the overlap between them. Two versions were created, one that considered only component size and another that considered both size and overlap. While the inex-2003 metric exhibits undesirable anomalies [11], and was not used in 2004, values are reported in the evaluation section to provide an additional instrument for investigating overlap.

4. BASELINE RETRIEVAL METHOD

This section provides an overview of the baseline XML information retrieval method currently used in the MultiText IR system, developed by the Information Retrieval Group at the University of Waterloo [3]. This retrieval method results from the adaptation and tuning of the Okapi BM25 measure [21] to the XML information retrieval task. The MultiText system performed respectably at INEX 2004, placing in the top ten under all of the quantization functions, and placing first when the quantization function emphasized exhaustivity. To
support retrieval from XML and other structured document types, the system provides generalized queries in which a sub-query X specifies a set of document elements to be ranked and a vector of sub-queries Y specifies individual retrieval terms. For our INEX 2004 runs, the sub-query X specified a list of retrievable elements as those with the following tag names: abs app article bb bdy bm fig fm ip1 li p sec ss1 ss2 vt. This list includes bibliographic entries (bb) and figure captions (fig) as well as paragraphs, sections and subsections. Prior to INEX 2004, the INEX collection and the INEX 2003 relevance judgments were manually analyzed to select these tag names. Tag names were selected on the basis of their frequency in the collection, the average size of their associated elements, and the relative number of positive relevance judgments they received. Automating this selection process is planned as future work. For INEX 2004, the term vector Y was derived from the topic by splitting phrases into individual words, eliminating stopwords and negative terms (those starting with "-"), and applying a stemmer. For example, the keyword field of topic 166, +"tree edit distance" +XML -image, became the four-term query "$tree" "$edit" "$distance" "$xml", where the "$" operator within a quoted string stems the term that follows it. Our implementation of Okapi BM25 is derived from the formula of Robertson et al.
[21] by setting parameters k2 = 0 and k3 = ∞. Given a term set Q, an element x is assigned the score

score(x) = Σ_{t∈Q} w_t · x_t(k1 + 1)/(K + x_t)   (1)

where w_t = log((D − D_t + 0.5)/(D_t + 0.5)), K = k1·((1 − b) + b·l_x/l_avg), x_t is the number of times term t occurs in x, l_x is the length of x, D is the number of documents in the collection, D_t is the number of documents containing t, and l_avg is the average document length.

Figure 3: Impact of k1 on inex-2002 mean average precision with b = 0.75 (INEX 2003 CO topics).

Prior to INEX 2004, the INEX 2003 topics and judgments were used to tune the b and k1 parameters, and the impact of this tuning is discussed later in this section. For the purposes of computing document-level statistics (D, D_t and l_avg) a document is defined to be an article. These statistics are used for ranking all element types. Following the suggestion of Kamps et al. [10], the retrieval results are filtered to eliminate very short elements, those less than 25 words in length. The use of article statistics for all element types might be questioned. This approach may be justified by viewing the collection as a set of articles to be searched using standard document-oriented techniques, where only articles may be returned. The score computed for an element is essentially the score it would receive if it were added to the collection as a new document, ignoring the minor adjustments needed to the document-level statistics. Nonetheless, we plan to examine this issue again in the future. In our experience, the performance of BM25 typically benefits from tuning the b and k1 parameters to the collection, whenever training queries are available for this purpose. Prior to INEX 2004, we trained the MultiText system using the INEX 2003 queries. As a starting point we used the values b = 0.75 and k1 = 1.2, which perform well on TREC adhoc collections and are used as default values in our system. The results were surprising. Figure 3 shows the result of varying k1 with b = 0.75 on the MAP values under three quantization functions. In our experience, optimal values for k1 are typically in the range 0.0 to 2.0. In this case, large values are required for good performance. Between k1 = 1.0 and k1 = 6.0, MAP increases by over 15% under the strict
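A minimal sketch of the element scoring just described, assuming the Robertson-Sparck Jones style idf weight (the exact weight used by MultiText may differ):

```python
import math

def bm25_element_score(tf, elem_len, avg_len, D, Dt, k1=10.0, b=0.80):
    """Okapi BM25 score for one element, following the setup in the text
    (k2 = 0, k3 = inf; D, Dt and avg_len are article-level statistics).
    `tf` maps each query term t to its frequency x_t in the element."""
    K = k1 * ((1.0 - b) + b * elem_len / avg_len)
    score = 0.0
    for t, x_t in tf.items():
        w_t = math.log((D - Dt[t] + 0.5) / (Dt[t] + 0.5))  # RSJ-style idf (assumed form)
        score += w_t * x_t * (k1 + 1.0) / (K + x_t)
    return score
```

With k1 this large the term-frequency component saturates only slowly, remaining nearly linear in x_t, which is consistent with the finding that k1 around 10 outperforms the usual 0.0-2.0 range on this collection.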
quantization. Similar improvements are seen under the generalized and sog quantizations. In contrast, our default value of b = 0.75 works well under all quantization functions (figure 4). After tuning over a wide range of values under several quantization functions, we selected values of k1 = 10.0 and b = 0.80 for our INEX 2004 experiments, and these values are used for the experiments reported in section 7.

Figure 4: Impact of b on inex-2002 mean average precision with k1 = 10 (INEX 2003 CO topics).

5. CONTROLLING OVERLAP

Starting with an element ranking generated by the baseline method described in the previous section, elements are re-ranked to control overlap by iteratively adjusting the scores of those elements containing or contained in higher ranking elements. At a conceptual level, re-ranking proceeds as follows:

1. Report the highest ranking element.
2. Adjust the scores of the unreported elements.
3. Repeat steps 1 and 2 until m elements are reported.

One approach to adjusting the scores of unreported elements in step 2 might be based on the Okapi BM25 scores of the involved elements. For example, assume a paragraph with score p is reported in step 1. In step 2, the section containing the paragraph might then have its score s lowered by an amount α · p to reflect the reduced contribution the paragraph should make to the section's score. In a related context, Robertson et al. [20] argue strongly against the linear combination of Okapi scores in this fashion. That work considers the problem of assigning different weights to different document fields, such as the title and body associated with Web pages. A common approach to this problem scores the title and body separately and generates a final score as a linear combination of the two. Robertson et al. discuss the theoretical flaws in this approach and demonstrate experimentally that it can actually harm retrieval effectiveness. Instead, they apply the weights at the term frequency level, with an occurrence of a query term t in the title making a greater contribution to the score than an occurrence in the body. In equation 1, x_t becomes α0 · y_t + α1 · z_t, where y_t is the number of times t occurs in the title and z_t is the number of times t occurs in the body. Translating this approach to our context, the contribution of terms appearing in elements is dynamically reduced as they are reported. The next section presents and analyzes a simple re-ranking algorithm that follows this strategy. The algorithm is evaluated experimentally in section 7. One limitation of the algorithm is that the contribution of terms appearing in reported elements is reduced by the same factor regardless of the number of reported elements in which it appears. In section 8 the algorithm is extended to apply increasing weights, lowering the score, when a term appears in more than one reported element.

6. RE-RANKING ALGORITHM

The re-ranking algorithm operates over XML trees, such as the one appearing in figure 2. Input to the algorithm is a list of n elements ranked according to their initial BM25 scores. During the initial ranking the XML tree is dynamically re-constructed to include only those nodes with nonzero BM25 scores, so n may be considerably less than |N|. Output from the algorithm is a list of the top m elements, ranked according to their adjusted scores. An element is represented by the node x ∈ N at its root. Associated with this node are fields storing the length of the element, term frequencies, and other information required by the re-ranking algorithm. These fields are populated during the initial ranking process, and updated as the algorithm progresses. The vector x.f contains term frequency information corresponding to each term in the query. The vector x.
g is initially zero and is updated by the algorithm as elements are reported. The score field contains the current BM25 score for the element, which will change as the values in x.g change. The score is computed using equation 1, with the x_t value for each term determined by a combination of the values in x.f and x.g. Given a term t ∈ Q, let f_t be the component of x.f corresponding to t, and let g_t be the component of x.g corresponding to t; then:

x_t = f_t − α · g_t   (2)

For processing by the re-ranking algorithm, nodes are stored in priority queues, ordered by decreasing score. Each priority queue PQ supports three operations. When implemented using standard data structures, the front operation requires O(1) time, and the other operations require O(log n) time, where n is the size of the queue. The core of the re-ranking algorithm is presented in figure 5. The algorithm takes as input the priority queue S containing the initial ranking, and produces the top-m re-ranked nodes in the priority queue F. After initializing F to be empty on line 1, the algorithm loops m times over lines 2-15, transferring at least one node from S to F during each iteration. At the start of each iteration, the unreported node at the front of S has the greatest adjusted score, and it is removed and added to F.

Figure 5: Re-Ranking Algorithm. As input, the algorithm takes a priority queue S, containing XML nodes ranked by their initial scores, and returns its results in priority queue F, ranked by adjusted scores.

Figure 6: Tree traversal routines called by the re-ranking algorithm.

Figure 7: Impact of α on XCG and inex-2002 MAP (INEX 2004 CO topics; assessment set I).

The algorithm then traverses the node's ancestors (lines 8-10) and descendants (lines 12-14), adjusting the scores of these nodes. The tree traversal routines, Up and Down, are given in figure 6. The Up routine removes each ancestor node from S, adjusts its term frequency values, recomputes its score, and adds it
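The conceptual report-and-adjust loop can be sketched as follows. This is an illustrative simplification, not the paper's implementation: a sorted list stands in for the priority queues, a toy linear score stands in for BM25, and Down visits only direct children.

```python
class Node:
    """A candidate element: f holds its term frequencies (x.f in the text),
    g the occurrences already seen in reported elements (x.g)."""
    def __init__(self, ident, f):
        self.ident, self.f = ident, f
        self.g = {}
        self.parent, self.children = None, []
        self.reported = False
        self.score = float(sum(f.values()))  # toy initial score, stands in for BM25

def adjust(node, reported_elem, alpha):
    """Fold the reported element's unreported occurrences into node.g, then
    rescore with discounted frequencies x_t = max(f_t - alpha * g_t, 0)."""
    for t, ft in reported_elem.f.items():
        node.g[t] = node.g.get(t, 0) + max(ft - reported_elem.g.get(t, 0), 0)
    node.score = sum(max(ft - alpha * node.g.get(t, 0), 0.0)
                     for t, ft in node.f.items())

def rerank(nodes, m, alpha=0.5):
    """Report m elements; discount ancestors (Up) and finalize descendants
    (Down) of each reported element."""
    S, F = list(nodes), []
    while S and len(F) < m:
        S.sort(key=lambda n: -n.score)     # stand-in for the priority queue front
        x = S.pop(0)
        x.reported = True
        F.append(x)
        up = x.parent                      # Up: rescore ancestors still queued
        while up is not None:
            if up in S:
                adjust(up, x, alpha)
            up = up.parent
        for child in x.children:           # Down: descendants are now fully seen,
            if child in S:                 # so their scores can be finalized
                S.remove(child)
                adjust(child, x, alpha)
                child.reported = True
                F.append(child)
    return F[:m]
```

For example, reporting an article immediately finalizes a contained section at a discounted score, mirroring the Down traversal of figure 6.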
back into S. The adjustment of the term frequency values (line 3) adds to y.g only the previously unreported term occurrences in x. Re-computation of the score on line 4 uses equations 1 and 2. The Down routine performs a similar operation on each descendant. However, since the contents of each descendant are entirely contained in a reported element, its final score may be computed, and it is removed from S and added to F. In order to determine the time complexity of the algorithm, first note that a node may be an argument to Down at most once. Thereafter, the reported flag of its parent is true. During each call to Down a node may be moved from S to F, requiring O(log n) time. Thus, the total time for all calls to Down is O(n log n), and we may temporarily ignore lines 8-10 of figure 5 when considering the time complexity of the loop over lines 2-15. During each iteration of this loop, a node and each of its ancestors are removed from a priority queue and then added back into a priority queue. Since a node may have at most h ancestors, where h is the maximum height of any tree in the collection, each of the m iterations requires O(h log n) time. Combining these observations produces an overall time complexity of O((n + mh) log n). In practice, re-ranking an INEX result set requires less than 200 ms on a three-year-old desktop PC.

7. EVALUATION

None of the metrics described in section 3.3 is a close fit with the view of overlap advocated by this paper. Nonetheless, when taken together they provide insight into the behaviour of the re-ranking algorithm. The INEX evaluation packages (inex_eval and inex_eval_ng) were used to compute values for the inex-2002 and inex-2003 metrics. Values for the XCG metrics were computed using software supplied by its inventors [11]. Figure 7 plots the three variants of the inex-2002 MAP metric together with the XCG metric. Values for these metrics are plotted for values of α between 0.0 and 1.0. Recalling that the XCG metric is designed to penalize overlap, while the inex-2002 metric ignores overlap, the conflict between the metrics is obvious. The MAP values at one extreme (α = 0.0) and the XCG value at the other extreme (α = 1.0) represent retrieval performance comparable to the best systems at INEX 2004 [8, 12].

Figure 8: Impact of α on inex-2003 MAP (INEX 2004 CO topics; assessment set I).

Figure 8 plots values of the inex-2003 MAP metric for two quantizations, with and without consideration of overlap. Once again, conflict is apparent, with the influence of α substantially lessened when overlap is considered.

8. EXTENDED ALGORITHM

One limitation of the re-ranking algorithm is that a single weight α is used to adjust the scores of both the ancestors and descendants of reported elements. An obvious extension is to use different weights in these two cases. Furthermore, the same weight is used regardless of the number of times an element is contained in a reported element. For example, a paragraph may form part of a reported section and then form part of a reported article. Since the user may now have seen this paragraph twice, its score should be further lowered by increasing the value of the weight. Motivated by these observations, the re-ranking algorithm may be extended with a series of increasing weights

β1 ≤ β2 ≤ ... ≤ βM,

where βj is the weight applied to a node that has been a descendant of a reported node j times. Note that an upper bound on M is h, the maximum height of any XML tree in the collection. However, in practice M is likely to be relatively small (perhaps 3 or 4). Figure 9 presents replacements for the Up and Down routines of figure 6, incorporating this series of weights. One extra field is required in each node: x.j, the down count. The value of x.j is initially set to zero in all nodes and is incremented each time Down is called with x as its argument.

Figure 9: Extended tree traversal routines.

When computing the score of a node, the value of x.j selects the weight to be applied to the node by adjusting the value of x_t in equation 1, as follows:

x_t = f_t − β_{x.j} · g_t,

where f_t and g_t are the components of x.f and x.g corresponding to term t. A few additional changes are required to extend Up and Down. The Up routine returns immediately (line 2) if its argument has already been reported, since term frequencies have already been adjusted in its ancestors. The Down routine does not report its argument, but instead recomputes its score and adds it back into S. A node cannot be an argument to Down more than M + 1 times, which in turn implies an overall time complexity of O((nM + mh) log n).

Given prices p, a demand query presents the agent with a bundle S; the agent replies 'YES' if S maximizes its utility v(S) − p(S) at those prices, or otherwise responds with some bundle S′ for which v(S′) − p(S′) > v(S) − p(S). That is, the agent either confirms that the presented bundle is most preferred at the quoted prices, or indicates a better one [15].4 Note that we include ∅ as a bundle, so the agent will only respond 'YES' if its utility for the proposed bundle is non-negative. Note also that communicating nonlinear prices does not necessarily entail quoting a price for every possible bundle. There may be more succinct ways of communicating this vector, as we show in section 5. We make the following definitions to parallel the query learning setting and to simplify the statements of later results:

Definition 2. The representation classes V1, ..., Vn can be polynomial-query elicited from value and demand queries if there is a fixed polynomial p(·, ·) and an algorithm L with access to value and demand queries of the agents such that for any (v1, ..., vn) ∈ V1 × ... × Vn, L outputs after at most p(size(v1, ..., vn), m) queries an allocation (S1, ..., Sn) ∈ arg max_{(S1,...,Sn)∈Γ} Σ_i v_i(S_i). Similarly, the representation class C can be efficiently elicited from value and demand queries if the algorithm L outputs an optimal allocation with communication p(size(v1, ...
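The demand query interface described above can be sketched as follows; this is a brute-force agent over all 2^m bundles, with names and the toy valuation of our own choosing.

```python
from itertools import chain, combinations

def all_bundles(goods):
    """All subsets of the goods, including the empty bundle."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(goods, r) for r in range(len(goods) + 1))]

def demand_query(v, prices, proposed, goods):
    """Agent side of a (nonlinear-price) demand query: confirm the proposed
    bundle, or name one with strictly higher utility v(S) - p(S)."""
    best = max(all_bundles(goods), key=lambda S: v(S) - prices(S))
    if v(proposed) - prices(proposed) >= v(best) - prices(best):
        return "YES"
    return best
```

Because ∅ is included among the candidate bundles, the agent never confirms a proposal whose utility is negative, matching the remark above.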
, vn), m), for some fixed polynomial p(·, ·). There are some key differences here with the query learning definition. We have dropped the term exactly since the valuation functions need not be determined exactly in order to compute an optimal allocation. Also, an efficient elicitation algorithm is required to use polynomial communication, rather than polynomial time. This reflects the fact that communication rather than runtime is the bottleneck in elicitation. Computing an optimal allocation of goods even when given the true valuations is NP-hard for a wide range of valuation classes. It is thus unreasonable to require polynomial time in the definition of an efficient preference elicitation algorithm. We are happy to focus on the communication complexity of elicitation because this problem is widely believed to be more significant in practice than that of winner determination [11].5

4 This differs slightly from the definition provided by Blum et al. [5]. Their demand queries are restricted to linear prices over the goods, where the price of a bundle is the sum of the prices of its underlying goods. In contrast our demand queries allow for nonlinear prices, i.e. a distinct price for every possible bundle. This is why the lower bound in their Theorem 2 does not contradict our result that follows.

5 Though the winner determination problem is NP-hard for general valuations, there exist many algorithms that solve it efficiently in practice. These range from special purpose algorithms [7, 16] to approaches using off-the-shelf IP solvers [1].

Since the valuations need not be elicited exactly it is initially less clear whether the polynomial dependence on size(v1, ...
, vn) is justified in this setting. Intuitively, this parameter is justified because we must learn valuations exactly when performing elicitation, in the worst-case. We address this in the next section.

3. PARALLELS BETWEEN EQUIVALENCE AND DEMAND QUERIES

We have described the query learning and preference elicitation settings in a manner that highlights their similarities. Value and membership queries are clear analogs. Slightly less obvious is the fact that equivalence and demand queries are also analogs. To see this, we need the concept of Lindahl prices. Lindahl prices are nonlinear and non-anonymous prices over the bundles. They are nonlinear in the sense that each bundle is assigned a price, and this price is not necessarily the sum of prices over its underlying goods. They are non-anonymous in the sense that two agents may face different prices for the same bundle of goods. Thus Lindahl prices are of the form pi(S), for all S ⊆ M, for all i ∈ N. Lindahl prices are presented to the agents in demand queries. When agents have normalized quasi-linear utility functions, Bikhchandani and Ostroy [4] show that there always exist Lindahl prices such that (S1, ..., Sn) is an optimal allocation if and only if:

Si ∈ arg max_{S′i} [vi(S′i) − pi(S′i)]  for all i ∈ N  (1)

(S1, ..., Sn) ∈ arg max_{(S′1,...,S′n)∈Γ} Σ_{i∈N} pi(S′i)  (2)

Condition (1) states that each agent is allocated a bundle that maximizes its utility at the given prices. Condition (2) states that the allocation maximizes the auctioneer's revenue at the given prices. The scenario in which these conditions hold is called a Lindahl equilibrium, or often a competitive equilibrium. We say that the Lindahl prices support the optimal allocation. It is therefore sufficient to announce supporting Lindahl prices to verify an optimal allocation. Once we have found an allocation with supporting Lindahl prices, the elicitation problem is solved. The problem of finding an optimal allocation (with respect to the manifest valuations) can be formulated as a linear program whose solutions are guaranteed to be integral [4]. The dual variables to this linear program are supporting Lindahl prices for the resulting allocation. The objective function of the dual program is:

min_{pi(S)} [ πs + Σ_{i∈N} πi ]  (3)

with

πi = max_{S⊆M} (ṽi(S) − pi(S))
πs = max_{(S1,...,Sn)∈Γ} Σ_{i∈N} pi(Si)

The optimal values of πi and πs correspond to the maximal utility to agent i with respect to its manifest valuation and the maximal revenue to the seller. There is usually a range of possible Lindahl prices supporting a given optimal allocation. The agents' manifest valuations are in fact valid Lindahl prices, and we refer to them as maximal Lindahl prices. Out of all possible vectors of Lindahl prices, maximal Lindahl prices maximize the utility of the auctioneer, in fact giving her the entire social welfare. Conversely, prices that maximize the Σ_{i∈N} πi component of the objective (the sum of the agents' utilities) are minimal Lindahl prices. Any Lindahl prices will do for our results, but some may have better elicitation properties than others. Note that a demand query with maximal Lindahl prices is almost identical to an
equivalence query, since in both cases we communicate the manifest valuation to the agent. We leave for future work the question of which Lindahl prices to choose to minimize preference elicitation. Considering now why demand and equivalence queries are direct analogs, first note that given the πi in some Lindahl equilibrium, setting

pi(S) = max{0, ṽi(S) − πi}  (4)

for all i ∈ N and S ⊆ M yields valid Lindahl prices. These prices leave every agent indifferent across all bundles with positive price, and satisfy condition (1). Thus demand queries can also implicitly communicate manifest valuations, since Lindahl prices will typically be an additive constant away from these by equality (4). In the following lemma we show how to obtain counterexamples to equivalence queries through demand queries.

Lemma 1. Suppose an agent replies with a preferred bundle S′ when proposed a bundle S and supporting Lindahl prices p(S) (supporting with respect to the agent's manifest valuation). Then either ṽ(S) ≠ v(S) or ṽ(S′) ≠ v(S′).

Proof. We have the following inequalities:

ṽ(S) − p(S) ≥ ṽ(S′) − p(S′) ⇒ ṽ(S′) − ṽ(S) ≤ p(S′) − p(S)  (5)

v(S′) − p(S′) > v(S) − p(S) ⇒ v(S′) − v(S) > p(S′) − p(S)  (6)

Inequality (5) holds because the prices support the proposed allocation with respect to the manifest valuation. Inequality (6) holds because the agent in fact prefers S′ to S given the prices, according to its response to the demand query. If it were the case that ṽ(S) = v(S) and ṽ(S′) = v(S′), these inequalities would represent a contradiction. Thus at least one of S and S′ is a counterexample to the agent's manifest valuation. Finally, we justify dependence on size(v1, ...
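Equality (4) and the counterexample search of Lemma 1 are straightforward to express; a minimal sketch (function names are ours):

```python
def lindahl_prices(manifest_v, pi_i):
    """Prices from equality (4): p_i(S) = max(0, v~_i(S) - pi_i), leaving the
    agent indifferent (utility pi_i) across all bundles with positive price."""
    return {S: max(0.0, v - pi_i) for S, v in manifest_v.items()}

def find_counterexample(true_v, manifest_v, proposed, preferred):
    """Lemma 1: if the agent rejects the proposed bundle in favour of
    `preferred`, the manifest valuation must be wrong on at least one of
    the two bundles; two value queries locate the witness."""
    for S in (proposed, preferred):
        if true_v[S] != manifest_v[S]:
            return S
    raise AssertionError("contradicts Lemma 1: manifest agrees on both bundles")
```

This two-value-query step is exactly what the conversion procedure of the next section uses to answer a simulated equivalence query.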
, vn) in elicitation problems. Nisan and Segal (Proposition 1, [12]) and Parkes (Theorem 1, [13]) show that supporting Lindahl prices must necessarily be revealed in the course of any preference elicitation protocol which terminates with an optimal allocation. Furthermore, Nisan and Segal (Lemma 1, [12]) state that in the worst-case agents' prices must coincide with their valuations (up to a constant), when the valuation class is rich enough to contain dual valuations (as will be the case with most interesting classes). Since revealing Lindahl prices is a necessary condition for establishing an optimal allocation, and since Lindahl prices contain the same information as valuation functions (in the worst-case), allowing for dependence on size(v1, ..., vn) in elicitation problems is entirely natural.

4. FROM LEARNING TO PREFERENCE ELICITATION

The key to converting a learning algorithm to an elicitation algorithm is to simulate equivalence queries with demand and value queries until an optimal allocation is found. Because of our Lindahl price construction, when all agents reply 'YES' to a demand query, we have found an optimal allocation, analogous to the case where an agent replies 'YES' to an equivalence query when the target function has been exactly learned. Otherwise, we can obtain a counterexample to an equivalence query given an agent's response to a demand query.

Theorem 1. The representation classes V1, ...
, Vn can be polynomial-query elicited from value and demand queries if they can each be polynomial-query exactly learned from membership and equivalence queries.

Proof. Consider the elicitation algorithm in Figure 1. Each membership query in step 1 is simulated with a value query since these are in fact identical. Consider step 4. If all agents reply 'YES', condition (1) holds. Condition (2) holds because the computed allocation is revenue-maximizing for the auctioneer, regardless of the agents' true valuations. Thus an optimal allocation has been found. Otherwise, at least one of Si or S′i is a counterexample to ṽi, by Lemma 1. We identify a counterexample by performing value queries on both these bundles, and provide it to Ai as a response to its equivalence query. This procedure will halt, since in the worst-case all agent valuations will be learned exactly, in which case the optimal allocation and Lindahl prices will be accepted by all agents. The procedure performs a polynomial number of queries, since A1, ..., An are all polynomial-query learning algorithms. Note that the conversion procedure results in a preference elicitation algorithm, not a learning algorithm. That is, the resulting algorithm does not simply learn the valuations exactly, then compute an optimal allocation. Rather, it elicits partial information about the valuations through value queries, and periodically tests whether enough information has been gathered by proposing an allocation to the agents through demand queries. It is possible to generate a Lindahl equilibrium for valuations v1, ..., vn using an allocation and prices derived using manifest valuations ṽ1, ...
, \u02dcvn, and finding an optimal allocation does not imply that the agents'' valuations have been exactly learned.\nThe use of demand queries to simulate equivalence queries enables this early halting.\nWe would not obtain this property with equivalence queries based on manifest valuations.\n5.\nCOMMUNICATION COMPLEXITY In this section, we turn to the issue of the communication complexity of elicitation.\nNisan and Segal [12] show that for a variety of rich valuation spaces (such as general and submodular valuations), the worst-case communication burden of determining Lindahl prices is exponential in the number of goods, m.\nThe communication burden is measured in terms of the number of bits transmitted between agents and auctioneer in the case of discrete communication, or in terms of the number of real numbers transmitted in the case of continuous communication.\nConverting efficient learning algorithms to an elicitation algorithm produces an algorithm whose queries have sizes polynomial in the parameters m and size(v1, ... , vn).\nTheorem 2.\nThe representation classes V1, ... 
, Vn can be efficiently elicited from value and demand queries if they can each be efficiently exactly learned from membership and equivalence queries.

Proof. The size of any value query is O(m): the message consists solely of the queried bundle. To communicate Lindahl prices to agent i, it is sufficient to communicate the agent's manifest valuation function and the value πi, by equality (4). Note that an efficient learning algorithm never builds up a manifest hypothesis of superpolynomial size, because the algorithm's runtime would then also be superpolynomial, contradicting efficiency. Thus communicating the manifest valuation requires size at most p(size(vi), m), for some polynomial p that upper-bounds the runtime of the efficient learning algorithm. Representing the surplus πi to agent i cannot require space greater than q(size(ṽi), m) for some fixed polynomial q, because we assume that the chosen representation is polynomially interpretable, and thus any value generated will be of polynomial size. We must also communicate to i its allocated bundle, so the total message size for a demand query is at most p(size(vi), m) + q(p(size(vi), m), m) + O(m). Clearly, an agent's response to a value or demand query has size at most q(size(vi), m) + O(m). Thus the value and demand queries, and the responses to these queries, are always of polynomial size. An efficient learning algorithm performs a polynomial number of queries, so the total communication of the resulting elicitation algorithm is polynomial in the relevant parameters. There will often be explicit bounds on the number of membership and equivalence queries performed by a learning algorithm, with constants that are not masked by big-O notation. These bounds can be translated to explicit bounds on the number of value and demand queries made by the resulting elicitation algorithm. We upper-bounded the size of the manifest hypothesis with the runtime of the learning algorithm in
Theorem 2. We are likely to be able to do much better than this in practice. Recall that an equivalence query is proper if size(f̃) ≤ size(f) at the time the query is made. If the learning algorithm's equivalence queries are all proper, it may then also be possible to provide tight bounds on the communication requirements of the resulting elicitation algorithm. Theorem 2 shows that elicitation algorithms that depend on the size(v1, ..., vn) parameter sidestep Nisan and Segal's [12] negative results on the worst-case communication complexity of efficient allocation problems. They provide guarantees with respect to the sizes of the instances of valuation functions faced at any run of the algorithm. These algorithms will fare well if the chosen representation class provides succinct representations for the simplest and most common of valuations, and thus the focus moves back to one of compact yet expressive bidding languages. We consider these issues below.

6. APPLICATIONS

In this section, we demonstrate the application of our methods to particular representation classes for combinatorial valuations. We have shown that the preference elicitation problem for valuation classes V1, ..., Vn can be reduced

Given: exact learning algorithms A1, ..., An for valuation classes V1, ..., Vn respectively. Loop until there is a signal to halt:
1. Run A1, ..., An in parallel on their respective agents until each requires a response to an equivalence query, or has halted with the agent's exact valuation.
2. Compute an optimal allocation (S1, ..., Sn) and corresponding Lindahl prices with respect to the manifest valuations ṽ1, ...
, ṽ_n determined so far.
3. Present the allocation and prices to the agents in the form of a demand query.
4. If they all reply 'YES', output the allocation and halt. Otherwise there is some agent i that has replied with some preferred bundle S′_i. Perform value queries on S_i and S′_i to find a counterexample to ṽ_i, and provide it to A_i.

Figure 1: Converting learning algorithms to an elicitation algorithm.

to the problem of finding an efficient learning algorithm for each of these classes separately. This is significant because there already exist learning algorithms for a wealth of function classes, and because it may often be simpler to solve each learning subproblem separately than to attack the preference elicitation problem directly. We can develop an elicitation algorithm that is tailored to each agent's valuation, with the underlying learning algorithms linked together at the demand query stages in an algorithm-independent way. We show that existing learning algorithms for polynomials, monotone DNF formulae, and linear-threshold functions can be converted into preference elicitation algorithms for general valuations, valuations with free-disposal, and valuations with substitutabilities, respectively. We focus on representations that are polynomially interpretable, because the computational learning theory literature places a heavy emphasis on computational tractability [18]. In interpreting the methods we emphasize the expressiveness and succinctness of each representation class. The representation class, which in combinatorial auction terms defines a bidding language, must necessarily be expressive enough to represent all possible valuations of interest, and should also succinctly represent the simplest and most common functions in the class.

6.1 Polynomial Representations

Schapire and Sellie [17] give a learning algorithm for sparse multivariate polynomials that can be used as the basis for a combinatorial auction protocol. The
equivalence queries made by this algorithm are all proper. Specifically, their algorithm learns the representation class of t-sparse multivariate polynomials over the real numbers, where the variables may take on values either 0 or 1. A t-sparse polynomial has at most t terms, where a term is a product of variables, e.g. x1x3x4. A polynomial over the real numbers has coefficients drawn from the real numbers. Polynomials are expressive: every valuation function v : 2^M → ℝ+ can be uniquely written as a polynomial [17].

To get an idea of the succinctness of polynomials as a bidding language, consider the additive and single-item valuations presented by Nisan [11]. In the additive valuation, the value of a bundle is the number of goods the bundle contains. In the single-item valuation, all bundles have value 1, except ∅ which has value 0 (i.e. the agent is satisfied as soon as it has acquired a single item). It is not hard to show that the single-item valuation requires polynomials of size 2^m − 1, while polynomials of size m suffice for the additive valuation. Polynomials are thus appropriate for valuations that are mostly additive, with a few substitutabilities and complementarities that can be introduced by adjusting coefficients.

The learning algorithm for polynomials makes at most m·t_i + 2 equivalence queries and at most (m·t_i + 1)(t_i^2 + 3t_i)/2 membership queries to an agent i, where t_i is the sparsity of the polynomial representing v_i [17]. We therefore obtain an algorithm that elicits general valuations with a polynomial number of queries and polynomial communication.⁶

6.2 XOR Representations

The XOR bidding language is standard in the combinatorial auctions literature. Recall that an XOR bid is characterized by a set of bundles 𝓑 ⊆ 2^M and a value function w : 𝓑 → ℝ+ defined on those bundles, which induces the valuation function:

v(B) = max_{B′ ∈ 𝓑 | B′ ⊆ B} w(B′)   (7)

XOR bids can represent valuations that satisfy
free-disposal (and only such valuations), which again is the property that A ⊆ B ⇒ v(A) ≤ v(B). The XOR bidding language is slightly less expressive than polynomials, because polynomials can represent valuations that do not satisfy free-disposal. However, XOR is as expressive as required in most economic settings. Nisan [11] notes that XOR bids can represent the single-item valuation with m atomic bids, but 2^m − 1 atomic bids are needed to represent the additive valuation. Since the opposite holds for polynomials, these two languages are incomparable in succinctness, and somewhat complementary for practical use.

Blum et al. [5] note that monotone DNF formulae are the analogs of XOR bids in the learning theory literature. A monotone DNF formula is a disjunction of conjunctions in which the variables appear unnegated, for example x1x2 ∨ x3 ∨ x2x4x5. Note that such formulae can be represented as XOR bids where each atomic bid has value 1; thus XOR bids generalize monotone DNF formulae from Boolean to real-valued functions. These insights allow us to generalize a classic learning algorithm for monotone DNF ([3] Theorem 1, [18] Theorem B) to a learning algorithm for XOR bids.⁷

⁶ Note that Theorem 1 applies even if valuations do not satisfy free-disposal.

Lemma 2. An XOR bid containing t atomic bids can be exactly learned with t + 1 equivalence queries and at most tm membership queries.

Proof. The algorithm will identify each atomic bid in the target XOR bid in turn. Initialize the manifest valuation ṽ to the bid that is identically zero on all bundles (this is an XOR bid containing 0 atomic bids). Present ṽ as an equivalence query. If the response is 'YES', we are done. Otherwise we obtain a bundle S for which v(S) ≠ ṽ(S). Create a bundle T as follows. First initialize T = S.
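In code, the construction used in this proof (including the pruning of T described next) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the membership and equivalence oracles are simulated by brute-force enumeration against a known target, and all identifiers are ours.

```python
from itertools import combinations

def xor_value(atomic_bids, bundle):
    """Evaluate an XOR bid: the maximum value over atomic bids contained in bundle."""
    return max((w for (B, w) in atomic_bids if B <= bundle), default=0)

def learn_xor_bid(target_bids, items):
    """Exactly learn an XOR bid via membership (value) and equivalence queries.

    `target_bids` is the hidden list of (frozenset, value) atomic bids; the two
    inner functions are brute-force stand-ins for the oracles of the lemma."""
    def membership(bundle):
        # Membership query: reveals v(bundle).
        return xor_value(target_bids, bundle)

    def equivalence(hypothesis):
        # Equivalence query: returns None, or a counterexample bundle S.
        for r in range(len(items) + 1):
            for S in combinations(items, r):
                S = frozenset(S)
                if xor_value(hypothesis, S) != membership(S):
                    return S
        return None

    hypothesis = []                       # manifest valuation: no atomic bids yet
    while True:
        S = equivalence(hypothesis)
        if S is None:
            return hypothesis             # hypothesis now equals the target
        T = set(S)
        for i in sorted(S):               # prune items that do not affect the value
            if membership(frozenset(T)) == membership(frozenset(T - {i})):
                T.discard(i)
        T = frozenset(T)
        hypothesis.append((T, membership(T)))   # (T, v(T)) is an atomic bid

# Example: the single-item valuation over 3 goods, as three atomic bids of value 1.
target = [(frozenset({i}), 1) for i in range(3)]
learned = learn_xor_bid(target, range(3))
```

On this example, each counterexample is pruned down to one atomic bid, so the learner recovers all three atomic bids, matching the lemma's pattern of one new bid per counterexample.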
For each item i in T, check via a membership query whether v(T) = v(T − {i}). If so, set T = T − {i}. Otherwise leave T as is and proceed to the next item.

We claim that (T, v(T)) is an atomic bid of the target XOR bid. For each item i in T, we have v(T) ≠ v(T − {i}). To see this, note that at some point when generating T, we had a T̄ such that T ⊆ T̄ ⊆ S and v(T̄) > v(T̄ − {i}), so that i was kept in T̄. Note that v(S) = v(T̄) = v(T), because the value of the bundle S is maintained throughout the process of deleting items. Now assume v(T) = v(T − {i}). Then v(T̄) = v(T) = v(T − {i}) > v(T̄ − {i}), which contradicts free-disposal, since T − {i} ⊆ T̄ − {i}. Thus v(T) > v(T − {i}) for all items i in T.

This implies that (T, v(T)) is an atomic bid of v. If this were not the case, T would take on the maximum value of its strict subsets, by the definition of an XOR bid, and we would have

v(T) = max_{i ∈ T} { max_{T′ ⊆ T − {i}} v(T′) } = max_{i ∈ T} { v(T − {i}) } < v(T)

which is a contradiction.

We now show that v(T) ≠ ṽ(T), which will imply by induction that (T, v(T)) is not an atomic bid of our manifest hypothesis. Assume that every atomic bid (R, ṽ(R)) identified so far is indeed an atomic bid of v (meaning R is indeed listed in an atomic bid of v as having value v(R) = ṽ(R)). This assumption holds vacuously when the manifest valuation is initialized. Using the notation from (7), let (𝓑̃, w̃) be our hypothesis, and (𝓑, w) be the target function. We have 𝓑̃ ⊆ 𝓑, and w̃(B) = w(B) for B ∈ 𝓑̃ by assumption. Thus,

ṽ(S) = max_{B ∈ 𝓑̃ | B ⊆ S} w̃(B) = max_{B ∈ 𝓑̃ | B ⊆ S} w(B) ≤ max_{B ∈ 𝓑 | B ⊆ S} w(B) = v(S)   (8)

Now assume v(T) = ṽ(T). Then,

ṽ(T) = v(T) = v(S) ≠ ṽ(S)   (9)

The second equality follows from the fact that the
value remains constant when we derive T from S. The last inequality holds because S is a counterexample to the manifest valuation. From equation (9) and free-disposal, we have ṽ(T) < ṽ(S). Then again from equation (9), it follows that v(S) < ṽ(S). This contradicts (8), so we in fact have v(T) ≠ ṽ(T). Thus (T, v(T)) is not currently in our hypothesis as an atomic bid, or we would correctly have ṽ(T) = v(T) by the induction hypothesis. We add (T, v(T)) to our hypothesis and repeat the process above, performing additional equivalence queries until all atomic bids have been identified. After each equivalence query, an atomic bid is identified with at most m membership queries. Each counterexample leads to the discovery of a new atomic bid. Thus we make at most tm membership queries and exactly t + 1 equivalence queries.

⁷ The cited algorithm was also used as the basis for Zinkevich et al.'s [19] elicitation algorithm for Toolbox DNF. Recall that Toolbox DNF are polynomials with non-negative coefficients. For these representations, an equivalence query can be simulated with a value query on the bundle containing all goods.

The number of time steps required by this algorithm is essentially the same as the number of queries performed, so the algorithm is efficient. Applying Theorem 2, we therefore obtain the following corollary:

Theorem 3. The representation class of XOR bids can be efficiently elicited from value and demand queries.

This contrasts with Blum et al.'s negative results ([5], Theorem 2) stating that monotone DNF (and hence XOR bids) cannot be efficiently elicited when the demand queries are restricted to linear and anonymous prices over the goods.

6.3 Linear-Threshold Representations

Polynomials, XOR bids, and all languages based on the OR bidding language (such as XOR-of-OR, OR-of-XOR, and OR*) fail to succinctly represent the majority valuation [11]. In this valuation, bundles have value 1 if they
contain at least m/2 items, and value 0 otherwise. More generally, consider the r-of-S family of valuations, where bundles have value 1 if they contain at least r items from a specified set of items S ⊆ M, and value 0 otherwise. The majority valuation is a special case of the r-of-S valuation with r = m/2 and S = M. These valuations are appropriate for representing substitutabilities: once a required set of items has been obtained, no other items can add value.

Letting k = |S|, such valuations are succinctly represented by r-of-k threshold functions. These functions take the form of linear inequalities:

x_i1 + ... + x_ik ≥ r

where the function has value 1 if the inequality holds, and 0 otherwise. Here i1, ..., ik are the items in S. Littlestone's WINNOW 2 algorithm can learn such functions using equivalence queries only, using at most 8r^2 + 5k + 14kr ln m + 1 queries [10]. To provide this guarantee, r must be known to the algorithm, but S (and k) are unknown. The elicitation algorithm that results from WINNOW 2 uses demand queries only (value queries are not necessary here, because the values of counterexamples are implied when there are only two possible values). Note that r-of-k threshold functions can always be succinctly represented in O(m) space. Thus we obtain an algorithm that can elicit such functions with a polynomial number of queries and polynomial communication, in the parameters n and m alone.

7. CONCLUSIONS AND FUTURE WORK

We have shown that exact learning algorithms with membership and equivalence queries can be used as a basis for preference elicitation algorithms with value and demand queries. At the heart of this result is the fact that demand queries may be viewed as modified equivalence queries, specialized to the problem of preference elicitation. Our result allows us to apply the wealth of available learning algorithms to the problem of preference elicitation. A learning approach to elicitation also motivates a
different approach to designing elicitation algorithms that decomposes neatly across agent types. If the designer knows beforehand what types of preferences each agent is likely to exhibit (mostly additive, many substitutes, etc.), she can design learning algorithms tailored to each agent's valuation and integrate them into an elicitation scheme. The resulting elicitation algorithm makes a polynomial number of queries, and requires only polynomial communication if the original learning algorithms are efficient.

We do not require that agent valuations can be learned with value and demand queries. Equivalence queries can only be, and need only be, simulated up to the point where an optimal allocation has been computed. This is the preference elicitation problem. Theorem 1 implies that elicitation with value and demand queries is no harder than learning with membership and equivalence queries, but it does not provide any asymptotic improvements over the learning algorithms' complexity. It would be interesting to find examples of valuation classes for which elicitation is easier than learning. Blum et al.
[5] provide such an example when considering membership/value queries only (Theorem 4).

In future work we plan to address the issue of incentives when converting learning algorithms to elicitation algorithms. In the learning setting, we usually assume that oracles will provide honest responses to queries; in the elicitation setting, agents are usually selfish and will provide possibly dishonest responses so as to maximize their utility. We also plan to implement the algorithms for learning polynomials and XOR bids as elicitation algorithms, and test their performance against other established combinatorial auction protocols [6, 15]. An interesting question here is: which Lindahl prices in the maximal-to-minimal range are best to quote in order to minimize information revelation? We conjecture that information revelation is reduced when moving from maximal to minimal Lindahl prices, namely as we move demand queries further away from equivalence queries. Finally, it would be useful to determine whether the OR* bidding language [11] can be efficiently learned (and hence elicited), given this language's expressiveness and succinctness for a wide variety of valuation classes.

Acknowledgements

We would like to thank Debasis Mishra for helpful discussions. This work is supported in part by NSF grant IIS0238147.

8. REFERENCES

[1] A. Andersson, M. Tenhunen, and F. Ygge. Integer programming for combinatorial auction winner determination. In Proceedings of the Fourth International Conference on Multiagent Systems (ICMAS-00), 2000.
[2] D. Angluin. Learning regular sets from queries and counterexamples. Information and Computation, 75:87-106, November 1987.
[3] D. Angluin. Queries and concept learning. Machine Learning, 2:319-342, 1987.
[4] S. Bikhchandani and J. Ostroy. The package assignment model. Journal of Economic Theory, 107(2), December 2002.
[5] A. Blum, J. Jackson, T. Sandholm, and M.
Zinkevich. Preference elicitation and query learning. In Proc. 16th Annual Conference on Computational Learning Theory (COLT), Washington, DC, 2003.
[6] W. Conen and T. Sandholm. Partial-revelation VCG mechanism for combinatorial auctions. In Proc. 18th National Conference on Artificial Intelligence (AAAI), 2002.
[7] Y. Fujishima, K. Leyton-Brown, and Y. Shoham. Taming the computational complexity of combinatorial auctions: Optimal and approximate approaches. In Proc. 16th International Joint Conference on Artificial Intelligence (IJCAI), pages 548-553, 1999.
[8] B. Hudson and T. Sandholm. Using value queries in combinatorial auctions. In Proc. 4th ACM Conference on Electronic Commerce (ACM-EC), San Diego, CA, June 2003.
[9] M. J. Kearns and U. V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994.
[10] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285-318, 1988.
[11] N. Nisan. Bidding and allocation in combinatorial auctions. In Proc. ACM Conference on Electronic Commerce, pages 1-12, 2000.
[12] N. Nisan and I. Segal. The communication requirements of efficient allocations and supporting Lindahl prices. Working paper, Hebrew University, 2003.
[13] D. C. Parkes. Price-based information certificates for minimal-revelation combinatorial auctions. In Padget et al., editors, Agent-Mediated Electronic Commerce IV, LNAI 2531, pages 103-122. Springer-Verlag, 2002.
[14] D. C. Parkes. Auction design with costly preference elicitation. In Special Issues of Annals of Mathematics and AI on the Foundations of Electronic Commerce, forthcoming, 2003.
[15] D. C. Parkes and L. H. Ungar. Iterative combinatorial auctions: Theory and practice. In Proc. 17th National Conference on Artificial Intelligence (AAAI-00), pages 74-81, 2000.
[16] T. Sandholm, S. Suri, A. Gilpin, and D.
Levine. CABOB: A fast optimal algorithm for combinatorial auctions. In Proc. 17th International Joint Conference on Artificial Intelligence (IJCAI), pages 1102-1108, 2001.
[17] R. Schapire and L. Sellie. Learning sparse multivariate polynomials over a field with queries and counterexamples. In Proceedings of the Sixth Annual ACM Workshop on Computational Learning Theory, pages 17-26. ACM Press, 1993.
[18] L. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134-1142, November 1984.
[19] M. Zinkevich, A. Blum, and T. Sandholm. On polynomial-time preference elicitation with value-queries. In Proc. 4th ACM Conference on Electronic Commerce (ACM-EC), San Diego, CA, June 2003.
agents to have auction protocols\nwhich require them to bid on as few bundles as possible.\nEven if agents can efficiently compute their valuations, they might still be reluctant to reveal them entirely in the course of an auction, because such information may be valuable to their competitors.\nThese considerations motivate the need for auction protocols that minimize the communication and information revelation required to determine an optimal allocation of goods.\nThere has been recent work exploring the links between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from computational learning theory [5, 19].\nIn learning theory, the goal is to learn a function via various types of queries, such as \"What is the function's value on these inputs?\"\nIn preference elicitation, the goal is to elicit enough partial information about preferences to be able to compute an optimal allocation.\nThough the goals of learning and preference elicitation differ somewhat, it is clear that these problems share similar structure, and it should come as no surprise that techniques from one field should be relevant to the other.\nWe show that any exact learning algorithm with membership and equivalence queries can be converted into a preference elicitation algorithm with value and demand queries.\nThe resulting elicitation algorithm guarantees elicitation in a polynomial number of value and demand queries.\nHere we mean polynomial in the number of goods, agents, and the sizes of the agents' valuation functions in a given encoding scheme.\nPreference elicitation schemes have not traditionally considered this last parameter.\nWe argue that complexity guarantees for elicitation schemes should allow dependence on this parameter.\nIntroducing this parameter also allows us to guarantee polynomial worst-case communication, which usually cannot be achieved in the number of goods and agents alone.\nFinally, we use our conversion 
procedure to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold functions.\nOf course, a one-shot combinatorial auction where agents provide their entire valuation functions at once would also have polynomial communication in the size of the agents' valuations, and only require one query.\nThe advantage of our scheme is that agents can be viewed as \"black-boxes\" that provide incremental information about their valuations.\nThere is no burden on the agents to formulate their valuations in an encoding scheme of the auctioneer's choosing.\nWe expect this to be an important consideration in practice.\nAlso, with our scheme entire revelation only happens in the worst-case.\nFor now, we leave the issue of incentives aside when deriving elicitation algorithms.\nOur focus is on the time and communication complexity of preference elicitation regardless of incentive constraints, and on the relationship between the complexities of learning and preference elicitation.\nRelated work.\nZinkevich et al. [19] consider the problem of learning restricted classes of valuation functions which can be represented using read-once formulas and Toolbox DNF.\nRead-once formulas can represent certain substitutabilities, but no complementarities, whereas the opposite holds for Toolbox DNF.\nSince their work is also grounded in learning theory, they allow dependence on the size of the target valuation as we do (though read-once valuations can always be succinctly represented anyway).\nTheir work only makes use of value queries, which are quite limited in power.\nBecause we allow ourselves demand queries, we are able to derive an elicitation scheme for general valuation functions.\nBlum et al. 
[5] provide results relating the complexities of query learning and preference elicitation.\nThey consider models with membership and equivalence queries in query learning, and value and demand queries in preference elicitation.\nThey show that certain classes of functions can be efficiently learned yet not efficiently elicited, and vice-versa.\nIn contrast, our work shows that given a more general (yet still quite standard) version of demand query than the type they consider, the complexity of preference elicitation is no greater than the complexity of learning.\nWe will show that demand queries can simulate equivalence queries until we have enough information about valuations to imply a solution to the elicitation problem.\nNisan and Segal [12] study the communication complexity of preference elicitation.\nThey show that for many rich classes of valuations, the worst-case communication complexity of computing an optimal allocation is exponential.\nTheir results apply to the \"black-box\" model of computational complexity.\nIn this model algorithms are allowed to ask questions about agent valuations and receive honest responses, without any insight into how the agents internally compute their valuations.\nThis is in fact the basic framework of learning theory.\nOur work also addresses the issue of communication complexity, and we are able to derive algorithms that provide significant communication guarantees despite Nisan and Segal's negative results.\nTheir work motivates the need to rely on the sizes of agents' valuation functions in stating worst-case results.\n2.\nTHE MODELS\n2.1 Query Learning\n2.2 Preference Elicitation\n3.\nPARALLELS BETWEEN EQUIVALENCE AND DEMAND QUERIES\n4.\nFROM LEARNING TO PREFERENCE ELICITATION\n5.\nCOMMUNICATION COMPLEXITY\n6.\nAPPLICATIONS\n6.1 Polynomial Representations\n6.2 XOR Representations\n6.3 Linear-Threshold Representations\n7.\nCONCLUSIONS AND FUTURE WORK\nWe have shown that exact learning algorithms with membership and 
equivalence queries can be used as a basis for preference elicitation algorithms with value and demand queries.\nAt the heart of this result is the fact that demand queries may be viewed as modified equivalence queries, specialized to the problem of preference elicitation.\nOur result allows us to apply the wealth of available learning algorithms to the problem of preference elicitation.\nA learning approach to elicitation also motivates a different approach to designing elicitation algorithms that decomposes neatly across agent types.\nIf the designer knowns beforehand what types of preferences each agent is likely to exhibit (mostly additive, many substitutes, etc. .\n.)\n, she can design learning algorithms tailored to each agents' valuations and integrate them into an elicitation scheme.\nThe resulting elicitation algorithm makes a polynomial number of queries, and makes polynomial communication if the original learning algorithms are efficient.\nWe do not require that agent valuations can be learned with value and demand queries.\nEquivalence queries can only be, and need only be, simulated up to the point where an optimal allocation has been computed.\nThis is the preference elicitation problem.\nTheorem 1 implies that elicitation with value and demand queries is no harder than learning with membership and equivalence queries, but it does not provide any asymptotic improvements over the learning algorithms' complexity.\nIt would be interesting to find examples of valuation classes for which elicitation is easier than learning.\nBlum et al. 
[5] provide such an example when considering membership\/value queries only (Theorem 4).\nIn future work we plan to address the issue of incentives when converting learning algorithms to elicitation algorithms.\nIn the learning setting, we usually assume that oracles will provide honest responses to queries; in the elicitation setting, agents are usually selfish and will provide possibly dishonest responses so as to maximize their utility.\nWe also plan to implement the algorithms for learning polynomials and XOR bids as elicitation algorithms, and test their performance against other established combinatorial auction protocols [6, 15].\nAn interesting question here is: which Lindahl prices in the maximal to minimal range are best to quote in order to minimize information revelation?\nWe conjecture that information revelation is reduced when moving from maximal to minimal Lindahl prices, namely as we move demand queries further away from equivalence queries.\nFinally, it would be useful to determine whether the OR \u2217 bidding language [11] can be efficiently learned (and hence elicited), given this language's expressiveness and succinctness for a wide variety of valuation classes.","lvl-4":"Applying Learning Algorithms to Preference Elicitation\nABSTRACT\nWe consider the parallels between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from learning theory.\nWe show that learning algorithms can be used as a basis for preference elicitation algorithms.\nThe resulting elicitation algorithms perform a polynomial number of queries.\nWe also give conditions under which the resulting algorithms have polynomial communication.\nOur conversion procedure allows us to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold functions.\nIn particular, we obtain an algorithm that elicits XOR bids with polynomial communication.\n1.\nINTRODUCTION\nIn a 
combinatorial auction, agents may bid on bundles of goods rather than individual goods alone.\nCommunicating valuations in a one-shot fashion can be prohibitively expensive if the number of goods is only moderately large.\nFurthermore, it might even be hard for agents to determine their valuations for single bundles [14].\nIt is in the interest of such agents to have auction protocols\nThese considerations motivate the need for auction protocols that minimize the communication and information revelation required to determine an optimal allocation of goods.\nThere has been recent work exploring the links between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from computational learning theory [5, 19].\nIn learning theory, the goal is to learn a function via various types of queries, such as \"What is the function's value on these inputs?\"\nIn preference elicitation, the goal is to elicit enough partial information about preferences to be able to compute an optimal allocation.\nWe show that any exact learning algorithm with membership and equivalence queries can be converted into a preference elicitation algorithm with value and demand queries.\nThe resulting elicitation algorithm guarantees elicitation in a polynomial number of value and demand queries.\nHere we mean polynomial in the number of goods, agents, and the sizes of the agents' valuation functions in a given encoding scheme.\nPreference elicitation schemes have not traditionally considered this last parameter.\nWe argue that complexity guarantees for elicitation schemes should allow dependence on this parameter.\nIntroducing this parameter also allows us to guarantee polynomial worst-case communication, which usually cannot be achieved in the number of goods and agents alone.\nFinally, we use our conversion procedure to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold 
functions.\nOf course, a one-shot combinatorial auction where agents provide their entire valuation functions at once would also have polynomial communication in the size of the agents' valuations, and only require one query.\nThe advantage of our scheme is that agents can be viewed as \"black-boxes\" that provide incremental information about their valuations.\nThere is no burden on the agents to formulate their valuations in an encoding scheme of the auctioneer's choosing.\nAlso, with our scheme entire revelation only happens in the worst-case.\nFor now, we leave the issue of incentives aside when deriving elicitation algorithms.\nOur focus is on the time and communication complexity of preference elicitation regardless of incentive constraints, and on the relationship between the complexities of learning and preference elicitation.\nRelated work.\nZinkevich et al. [19] consider the problem of learning restricted classes of valuation functions which can be represented using read-once formulas and Toolbox DNF.\nTheir work only makes use of value queries, which are quite limited in power.\nBecause we allow ourselves demand queries, we are able to derive an elicitation scheme for general valuation functions.\nBlum et al. 
[5] provide results relating the complexities of query learning and preference elicitation.\nThey consider models with membership and equivalence queries in query learning, and value and demand queries in preference elicitation.\nThey show that certain classes of functions can be efficiently learned yet not efficiently elicited, and vice-versa.\nIn contrast, our work shows that given a more general (yet still quite standard) version of demand query than the type they consider, the complexity of preference elicitation is no greater than the complexity of learning.\nWe will show that demand queries can simulate equivalence queries until we have enough information about valuations to imply a solution to the elicitation problem.\nNisan and Segal [12] study the communication complexity of preference elicitation.\nThey show that for many rich classes of valuations, the worst-case communication complexity of computing an optimal allocation is exponential.\nTheir results apply to the \"black-box\" model of computational complexity.\nIn this model algorithms are allowed to ask questions about agent valuations and receive honest responses, without any insight into how the agents internally compute their valuations.\nThis is in fact the basic framework of learning theory.\nOur work also addresses the issue of communication complexity, and we are able to derive algorithms that provide significant communication guarantees despite Nisan and Segal's negative results.\nTheir work motivates the need to rely on the sizes of agents' valuation functions in stating worst-case results.\n7.\nCONCLUSIONS AND FUTURE WORK\nWe have shown that exact learning algorithms with membership and equivalence queries can be used as a basis for preference elicitation algorithms with value and demand queries.\nAt the heart of this result is the fact that demand queries may be viewed as modified equivalence queries, specialized to the problem of preference elicitation.\nOur result allows us to apply the 
Applying Learning Algorithms to Preference Elicitation

ABSTRACT

We consider the parallels between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from learning theory. We show that learning algorithms can be used as a basis for preference elicitation algorithms. The resulting elicitation algorithms perform a polynomial number of queries. We also give conditions under which the resulting algorithms have polynomial communication. Our conversion procedure allows us to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold functions. In particular, we obtain an algorithm that elicits XOR bids with polynomial communication.

1. INTRODUCTION

In a combinatorial auction, agents may bid on bundles of goods rather than on individual goods alone. Since there is an exponential number of bundles (in the number of goods), communicating values over these bundles can be problematic. Communicating valuations in a one-shot fashion can be prohibitively expensive if the number of goods is only moderately large. Furthermore, it might
even be hard for agents to determine their valuations for single bundles [14]. It is in the interest of such agents to have auction protocols which require them to bid on as few bundles as possible. Even if agents can efficiently compute their valuations, they might still be reluctant to reveal them entirely in the course of an auction, because such information may be valuable to their competitors. These considerations motivate the need for auction protocols that minimize the communication and information revelation required to determine an optimal allocation of goods.

There has been recent work exploring the links between the preference elicitation problem in combinatorial auctions and the problem of learning an unknown function from computational learning theory [5, 19]. In learning theory, the goal is to learn a function via various types of queries, such as "What is the function's value on these inputs?" In preference elicitation, the goal is to elicit enough partial information about preferences to be able to compute an optimal allocation. Though the goals of learning and preference elicitation differ somewhat, it is clear that these problems share similar structure, and it should come as no surprise that techniques from one field should be relevant to the other.

We show that any exact learning algorithm with membership and equivalence queries can be converted into a preference elicitation algorithm with value and demand queries. The resulting elicitation algorithm guarantees elicitation in a polynomial number of value and demand queries. Here we mean polynomial in the number of goods, agents, and the sizes of the agents' valuation functions in a given encoding scheme. Preference elicitation schemes have not traditionally considered this last parameter. We argue that complexity guarantees for elicitation schemes should allow dependence on this parameter. Introducing this parameter also allows us to guarantee polynomial worst-case communication,
which usually cannot be achieved in the number of goods and agents alone. Finally, we use our conversion procedure to generate combinatorial auction protocols from learning algorithms for polynomials, monotone DNF, and linear-threshold functions.

Of course, a one-shot combinatorial auction where agents provide their entire valuation functions at once would also have polynomial communication in the size of the agents' valuations, and would only require one query. The advantage of our scheme is that agents can be viewed as "black-boxes" that provide incremental information about their valuations. There is no burden on the agents to formulate their valuations in an encoding scheme of the auctioneer's choosing. We expect this to be an important consideration in practice. Also, with our scheme, entire revelation only happens in the worst case. For now, we leave the issue of incentives aside when deriving elicitation algorithms. Our focus is on the time and communication complexity of preference elicitation regardless of incentive constraints, and on the relationship between the complexities of learning and preference elicitation.

Related work. Zinkevich et al. [19] consider the problem of learning restricted classes of valuation functions which can be represented using read-once formulas and Toolbox DNF. Read-once formulas can represent certain substitutabilities, but no complementarities, whereas the opposite holds for Toolbox DNF. Since their work is also grounded in learning theory, they allow dependence on the size of the target valuation as we do (though read-once valuations can always be succinctly represented anyway). Their work only makes use of value queries, which are quite limited in power. Because we allow ourselves demand queries, we are able to derive an elicitation scheme for general valuation functions. Blum et al.
[5] provide results relating the complexities of query learning and preference elicitation. They consider models with membership and equivalence queries in query learning, and value and demand queries in preference elicitation. They show that certain classes of functions can be efficiently learned yet not efficiently elicited, and vice-versa. In contrast, our work shows that, given a more general (yet still quite standard) version of the demand query than the type they consider, the complexity of preference elicitation is no greater than the complexity of learning. We will show that demand queries can simulate equivalence queries until we have enough information about valuations to imply a solution to the elicitation problem.

Nisan and Segal [12] study the communication complexity of preference elicitation. They show that for many rich classes of valuations, the worst-case communication complexity of computing an optimal allocation is exponential. Their results apply to the "black-box" model of computational complexity. In this model, algorithms are allowed to ask questions about agent valuations and receive honest responses, without any insight into how the agents internally compute their valuations. This is in fact the basic framework of learning theory. Our work also addresses the issue of communication complexity, and we are able to derive algorithms that provide significant communication guarantees despite Nisan and Segal's negative results. Their work motivates the need to rely on the sizes of agents' valuation functions in stating worst-case results.

2. THE MODELS

2.1 Query Learning

The query learning model we consider here is called exact learning from membership and equivalence queries, introduced by Angluin [2]. In this model, the learning algorithm's objective is to exactly identify an unknown target function f : X → Y via queries to an oracle. The target function is drawn from a function class C that is known to the algorithm. Typically the
domain X is some subset of {0,1}^m, and the range Y is either {0,1} or some subset of the real numbers R. As the algorithm progresses, it constructs a manifest hypothesis f̃, which is its current estimate of the target function. Upon termination, the manifest hypothesis of a correct learning algorithm satisfies f̃(x) = f(x) for all x ∈ X.

It is important to specify the representation that will be used to encode functions from C. For example, consider the following function from {0,1}^m to R: f(x) = 2 if x consists of m 1's, and f(x) = 0 otherwise. This function may simply be represented as a list of 2^m values. Or it may be encoded as the polynomial 2x_1 · · · x_m, which is much more succinct. The choice of encoding may thus have a significant impact on the time and space requirements of the learning algorithm. Let size(f) be the size of the encoding of f with respect to the given representation class. Most representation classes have a natural measure of encoding size. The size of a polynomial can be defined as the number of non-zero coefficients in the polynomial, for example. We will usually only refer to representation classes; the corresponding function classes will be implied. For example, the representation class of monotone DNF formulae implies the function class of monotone Boolean functions.

Two types of queries are commonly used for exact learning: membership and equivalence queries. On a membership query, the learner presents some x ∈ X and the oracle replies with f(x). On an equivalence query, the learner presents its manifest hypothesis f̃. The oracle either replies 'YES' if f̃ = f, or returns a counterexample x such that f̃(x) ≠ f(x). An equivalence query is proper if size(f̃) ≤ size(f).

We have v(T̄) > v(T̄ − {i}), so that i was kept in T̄. Note that v(S) = v(T̄) = v(T) because the value of the bundle S is maintained throughout the process of deleting items. Now assume
v(T) = v(T − {i}). Then

v(T̄ − {i}) < v(T̄) = v(T) = v(T − {i}),

which contradicts free-disposal, since T − {i} ⊆ T̄ − {i}. Thus v(T) > v(T − {i}) for all items i in T.

This implies that (T, v(T)) is an atomic bid of v. If this were not the case, T would take on the maximum value of its strict subsets, by the definition of an XOR bid, and we would have

v(T) = max_{i ∈ T} v(T − {i}) < v(T),

which is a contradiction.

We now show that v(T) ≠ ṽ(T), which will imply that (T, v(T)) is not an atomic bid of our manifest hypothesis by induction. Assume that every atomic bid (R, ṽ(R)) identified so far is indeed an atomic bid of v (meaning R is indeed listed in an atomic bid of v as having value v(R) = ṽ(R)). This assumption holds vacuously when the manifest valuation is initialized. Using the notation from (7), let (B̃, w̃) be our hypothesis, and (B, w) be the target function. We have B̃ ⊆ B, and w̃(B) = w(B) for B ∈ B̃. Thus, supposing for contradiction that ṽ(T) = v(T),

ṽ(T) = v(T) = v(S) ≠ ṽ(S).   (9)

The second equality follows from the fact that the value remains constant when we derive T from S. The last inequality holds because S is a counterexample to the manifest valuation. From equation (9) and free-disposal, we have ṽ(T) < ṽ(S). Then again from equation (9) it follows that v(S) < ṽ(S). This contradicts (8), so we in fact have v(T) ≠ ṽ(T). Thus (T, v(T)) is not currently in our hypothesis as an atomic bid, or we would correctly have ṽ(T) = v(T) by the induction hypothesis. We add (T, v(T)) to our hypothesis and repeat the process above, performing additional equivalence queries until all atomic bids have been identified.

(The cited algorithm was also used as the basis for Zinkevich et al.'s [19] elicitation algorithm for Toolbox DNF. Recall that Toolbox DNF are polynomials with non-negative coefficients. For these representations, an equivalence query can be simulated with a value query on the bundle containing all goods.)

After each equivalence query,
an atomic bid is identified with at most m membership queries. Each counterexample leads to the discovery of a new atomic bid. Thus we make at most tm membership queries and exactly t + 1 equivalence queries. The number of time steps required by this algorithm is essentially the same as the number of queries performed, so the algorithm is efficient. Applying Theorem 2, we therefore obtain the following corollary:

THEOREM 3. The representation class of XOR bids can be efficiently elicited from value and demand queries.

This contrasts with Blum et al.'s negative results ([5], Theorem 2) stating that monotone DNF (and hence XOR bids) cannot be efficiently elicited when the demand queries are restricted to linear and anonymous prices over the goods.

6.3 Linear-Threshold Representations

Polynomials, XOR bids, and all languages based on the OR bidding language (such as XOR-of-OR, OR-of-XOR, and OR*) fail to succinctly represent the majority valuation [11]. In this valuation, bundles have value 1 if they contain at least m/2 items, and value 0 otherwise. More generally, consider the r-of-S family of valuations where bundles have value 1 if they contain at least r items from a specified set of items S ⊆ M, and value 0 otherwise. The majority valuation is a special case of the r-of-S valuation with r = m/2 and S = M. These valuations are appropriate for representing substitutabilities: once a required set of items has been obtained, no other items can add value. Letting k = |S|, such valuations are succinctly represented by r-of-k threshold functions. These functions take the form of linear inequalities:

x_{i_1} + x_{i_2} + · · · + x_{i_k} ≥ r,

where the function has value 1 if the inequality holds, and 0 otherwise. Here i_1, ..., i_k are the items in S.
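As a small concrete illustration (our own sketch, not code from the paper; the class and method names are ours), the r-of-S valuation and its linear-threshold form can be written in Java, showing that the two definitions assign the same value to every bundle:

```java
import java.util.Set;

// Sketch of an r-of-S valuation and its r-of-k threshold form.
public class ThresholdValuation {
    // r-of-S valuation: a bundle has value 1 iff it contains
    // at least r items from the specified set S.
    static int rOfS(Set<Integer> bundle, Set<Integer> s, int r) {
        int count = 0;
        for (int item : bundle) {
            if (s.contains(item)) count++;
        }
        return count >= r ? 1 : 0;
    }

    // Equivalent linear-threshold form: x_{i1} + ... + x_{ik} >= r,
    // where x is the bundle's 0/1 characteristic vector over the m goods
    // and i1, ..., ik are the items in S.
    static int threshold(boolean[] x, Set<Integer> s, int r) {
        int sum = 0;
        for (int i : s) {
            if (x[i]) sum++;
        }
        return sum >= r ? 1 : 0;
    }
}
```

The majority valuation corresponds to calling `rOfS` with r = m/2 and S equal to the full set of goods.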
Littlestone's WINNOW 2 algorithm can learn such functions using equivalence queries only, using at most 8r^2 + 5k + 14kr ln m + 1 queries [10]. To provide this guarantee, r must be known to the algorithm, but S (and k) are unknown. The elicitation algorithm that results from WINNOW 2 uses demand queries only (value queries are not necessary here because the values of counterexamples are implied when there are only two possible values). Note that r-of-k threshold functions can always be succinctly represented in O(m) space. Thus we obtain an algorithm that can elicit such functions with a polynomial number of queries and polynomial communication, in the parameters n and m alone.

7. CONCLUSIONS AND FUTURE WORK

We have shown that exact learning algorithms with membership and equivalence queries can be used as a basis for preference elicitation algorithms with value and demand queries. At the heart of this result is the fact that demand queries may be viewed as modified equivalence queries, specialized to the problem of preference elicitation. Our result allows us to apply the wealth of available learning algorithms to the problem of preference elicitation.

A learning approach to elicitation also motivates a different approach to designing elicitation algorithms, one that decomposes neatly across agent types. If the designer knows beforehand what types of preferences each agent is likely to exhibit (mostly additive, many substitutes, etc.
), she can design learning algorithms tailored to each agent's valuations and integrate them into an elicitation scheme. The resulting elicitation algorithm makes a polynomial number of queries, and makes polynomial communication if the original learning algorithms are efficient. We do not require that agent valuations can be learned with value and demand queries. Equivalence queries can only be, and need only be, simulated up to the point where an optimal allocation has been computed. This is the preference elicitation problem.

Theorem 1 implies that elicitation with value and demand queries is no harder than learning with membership and equivalence queries, but it does not provide any asymptotic improvements over the learning algorithms' complexity. It would be interesting to find examples of valuation classes for which elicitation is easier than learning. Blum et al. [5] provide such an example when considering membership/value queries only (Theorem 4).

In future work we plan to address the issue of incentives when converting learning algorithms to elicitation algorithms. In the learning setting, we usually assume that oracles will provide honest responses to queries; in the elicitation setting, agents are usually selfish and will provide possibly dishonest responses so as to maximize their utility. We also plan to implement the algorithms for learning polynomials and XOR bids as elicitation algorithms, and test their performance against other established combinatorial auction protocols [6, 15]. An interesting question here is: which Lindahl prices in the maximal to minimal range are best to quote in order to minimize information revelation? We conjecture that information revelation is reduced when moving from maximal to minimal Lindahl prices, namely as we move demand queries further away from equivalence queries.

Finally, it would be useful to determine whether the OR* bidding language [11] can be efficiently learned (and hence
elicited), given this language's expressiveness and succinctness for a wide variety of valuation classes.

A Hierarchical Process Execution Support for Grid Computing

Fábio R. L. Cicerre, Institute of Computing, State University of Campinas, Campinas, Brazil (fcicerre@ic.unicamp.br)
Edmundo R. M. Madeira, Institute of Computing, State University of Campinas, Campinas, Brazil (edmundo@ic.unicamp.br)
Luiz E.
Buzato, Institute of Computing, State University of Campinas, Campinas, Brazil (buzato@ic.unicamp.br)

ABSTRACT

Grid is an emerging infrastructure used to share resources among virtual organizations in a seamless manner and to provide breakthrough computing power at low cost. Nowadays there are dozens of academic and commercial products that allow execution of isolated tasks on grids, but few products support the enactment of long-running processes in a distributed fashion. In order to address this subject, this paper presents a programming model and an infrastructure that hierarchically schedules process activities using available nodes in a wide grid environment. Their advantages are automatic and structured distribution of activities and easy process monitoring and steering.

Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems - distributed applications

General Terms: Design, Performance, Management, Algorithms

1. INTRODUCTION

Grid computing is a model for wide-area distributed and parallel computing across heterogeneous networks in multiple administrative domains. This research field aims to promote sharing of resources and to provide breakthrough computing power over this wide network of virtual organizations in a seamless manner [8]. Traditionally, as in Globus [6], Condor-G [9] and Legion [10], there is a minimal infrastructure that provides data resource sharing, computational resource utilization management, and distributed execution. Specifically, considering distributed execution, most of the existing grid infrastructures support execution of isolated tasks, but they do not consider task interdependencies as in processes (workflows) [12]. This deficiency precludes better scheduling algorithms, distributed execution coordination and automatic execution recovery.

There are few proposed middleware infrastructures that support process execution over the grid. In general, they model processes by
interconnecting their activities through control and data dependencies. Among them, WebFlow [1] emphasizes an architecture to construct distributed processes; Opera-G [3] provides execution recovery and steering; GridFlow [5] focuses on improved scheduling algorithms that take advantage of activity dependencies; and SwinDew [13] supports totally distributed execution on peer-to-peer networks. However, such infrastructures contain scheduling algorithms that are either centralized by process [1, 3, 5], or completely distributed but difficult to monitor and control [13].

In order to address such constraints, this paper proposes a structured programming model for process description and a hierarchical process execution infrastructure. The programming model employs structured control flow to promote controlled and contextualized activity execution. Complementarily, the support infrastructure, which executes a process specification, takes advantage of the hierarchical structure of a specified process in order to distribute and schedule strongly dependent activities as a unit, allowing better execution performance and fault-tolerance and providing localized communication.

The programming model and the support infrastructure, named Xavantes, are under implementation in order to show the feasibility of the proposed model and to demonstrate its two major advantages: to promote widely distributed process execution and scheduling, but in a controlled, structured and localized way. The next section describes the programming model, and Section 3, the support infrastructure for the proposed grid computing model. Section 4 demonstrates how the support infrastructure executes processes and distributes activities. Related works are presented and compared to the proposed model in Section 5. The last section concludes this paper, summarizing the advantages of the proposed hierarchical process execution support for grid computing, and lists some future work.

87 Middleware
2004 Companion

Figure 1: High-level framework of the programming model (a Process contains ProcessElements; Activity and Controller are ProcessElements, related one-to-many)

2. PROGRAMMING MODEL

The programming model designed for the grid computing architecture is very similar to that specified by the Business Process Execution Language (BPEL) [2]. Both describe processes in XML [4] documents, but the former specifies strictly synchronous and structured processes, and has more constructs for structured parallel control. The rationale behind its design is the possibility of hierarchically distributing process control and coordination based on structured constructs, differently from BPEL, which does not allow hierarchical composition of processes.

In the proposed programming model, a process is a set of interdependent activities arranged to solve a certain problem. In detail, a process is composed of activities, subprocesses, and controllers (see Figure 1). Activities represent simple tasks that are executed on behalf of a process; subprocesses are processes executed in the context of a parent process; and controllers are control elements used to specify the execution order of these activities and subprocesses. Like structured languages, controllers can be nested and then determine the execution order of other controllers. Data are exchanged among process elements through parameters. They are passed by value, in the case of simple objects, or by reference, if they are remote objects shared among elements of the same controller or process. External data can be accessed through data sources, such as relational databases or distributed objects.

2.1 Controllers

Controllers are structured control constructs used to define the control flow of processes. There are sequential and parallel controllers. The sequential controller types are: block, switch, for and while. The block controller is a simple sequential construct, and the others mimic equivalent structured programming language constructs. Similarly, the
parallel types are: par, parswitch, parfor and parwhile. They extend the respective sequential counterparts to allow parallel execution of process elements. All parallel controller types fork the execution of one or more process elements, and then wait for each execution to finish. Indeed, they contain a fork and a join of execution. Aiming to implement a conditional join, all parallel controller types contain an exit condition, evaluated each time an element execution finishes, in order to determine when the controller must end. The parfor and parwhile are the iterative versions of the parallel controller types. Both fork executions while the iteration condition is true. This provides flexibility to determine, at run-time, the number of process elements to execute simultaneously. When compared to workflow languages, the parallel controller types represent structured versions of the workflow control constructors, because they can nest other controllers and can also express the fixed and conditional forks and joins present in such languages.

2.2 Process Example

This section presents an example of a prime number search application that receives a certain range of integers and returns the set of primes contained in this range. The whole computation is made by a process, which uses a parallel controller to start and dispatch several concurrent activities of the same type, in order to find prime numbers. The portion of the XML document that describes the process and activity types is shown below.

<PROCESS_TYPE>
  <BODY>
    <PARFOR>
      <PRE_CODE>
        setPrimes(new RemoteHashSet());
        parfor.setMin(getMin());
        parfor.setMax(getMax());
        parfor.setNumPrimes(getNumPrimes());
        parfor.setNumActs(getNumActs());
        parfor.setPrimes(getPrimes());
        parfor.setCounterBegin(0);
        parfor.setCounterEnd(getNumActs()-1);
      </PRE_CODE>
      <ITERATE>
        <PRE_CODE>
          int range = (getMax()-getMin()+1)/getNumActs();
          int minNum = range*getCounter()+getMin();
          int maxNum = minNum+range-1;
          if (getCounter() == getNumActs()-1)
            maxNum = getMax();
          findPrimes.setMin(minNum);
          findPrimes.setMax(maxNum);
          findPrimes.setNumPrimes(getNumPrimes());
          findPrimes.setPrimes(getPrimes());
        </PRE_CODE>
      </ITERATE>
    </PARFOR>
  </BODY>
</PROCESS_TYPE>

<ACTIVITY_TYPE>
  <CODE>
    for (int num=getMin(); num<=getMax(); num++) {
      // stop, required number of primes was found
      if (primes.size() >= getNumPrimes())
        break;
      boolean prime = true;
      for (int i=2; i < num; i++) {
        if (num % i == 0) {
          prime = false;
          break;
        }
      }
      if (prime)
        primes.add(num);
    }
  </CODE>
</ACTIVITY_TYPE>

Firstly, a process type that finds prime numbers, named FindPrimes, is defined. It receives, through its input parameters, a range of integers in which prime numbers have to be found, the number of primes to be returned, and the number of activities to be executed in order to perform this work. At the end, the found prime numbers are returned as a collection through its output parameter. This process contains a PARFOR controller aiming to execute a determined number of parallel activities. It iterates from 0 to getNumActs() - 1, which determines the number of activities, starting a parallel activity in each iteration. In this case, the controller divides the whole range of numbers into subranges of the same size and, in each iteration, starts a parallel activity that finds prime numbers in a specific subrange. These activities receive a shared object by reference in order to store the prime numbers just found and to control whether the required number of primes has been reached.

Finally, the activity type used to find prime numbers in each subrange, FindPrimes, is defined. It receives, through its input parameters, the range of numbers in which it has to find prime numbers, the total number of prime numbers to be found by the whole process, and, passed by reference, a collection object to store the found prime numbers. Between its CODE markers there is simple code to find prime numbers, which iterates over the specified range and verifies if the current integer is a prime. Additionally, in each iteration, the code verifies if the required number of primes, inserted in the
primes collection by all concurrent activities, has been reached, and exits if so.

The advantage of using controllers is that the support infrastructure can determine the point of execution the process is in, allowing automatic recovery and monitoring, as well as the capability of instantiating and dispatching process elements only when there are enough computing resources available, reducing unnecessary overhead. Besides, due to their structured nature, controllers can be easily composed, and the support infrastructure can take advantage of this in order to distribute the nested controllers hierarchically to different machines over the grid, allowing enhanced scalability and fault-tolerance.

Figure 2: Infrastructure architecture (a group server hosts a Group Manager, a process server hosts a Process Coordinator, and a worker hosts an Activity Manager; each node runs a Java Virtual Machine with RMI, and managers and coordinators access a Repository via JDBC)

3. SUPPORT INFRASTRUCTURE

The support infrastructure comprises tools for specification, and services for execution and monitoring, of structured processes in highly distributed, heterogeneous and autonomous grid environments. It has services to monitor availability of resources in the grid, to interpret processes and schedule activities and controllers, and to execute activities.

3.1 Infrastructure Architecture

The support infrastructure architecture is composed of groups of machines and data repositories, which preserve their administrative autonomy. Generally, localized machines and repositories, such as in local networks or clusters, form a group. Each machine in a group must have a Java Virtual Machine (JVM) [11] and a Java Runtime Library, besides a combination of the following grid support services: group manager (GM), process coordinator (PC) and activity manager (AM). This combination determines what kind of group node it represents: a group server, a process server, or simply a worker (see Figure 2).

In a group there are one or more
group managers, but only one acts as primary and the others as replicas. They are responsible for maintaining availability information about group machines. Moreover, group managers maintain references to data resources of the group. They use group repositories to persist and recover the location of nodes and their availability.

To control process execution, there are one or more process coordinators per group. They are responsible for instantiating and executing processes and controllers, selecting resources, and scheduling and dispatching activities to workers. In order to persist and recover process execution and data, and also to load process specifications, they use group repositories.

Finally, in several group nodes there is an activity manager. It is responsible for executing activities in the hosted machine on behalf of the group process coordinators, and for informing the current availability of the associated machine to group managers. Activity managers also have pending activity queues, containing activities to be executed.

3.2 Inter-group Relationships

In order to model real grid architecture, the infrastructure must comprise several, potentially all, local networks, like the Internet does. To satisfy this intent, local groups are connected to others, directly or indirectly, through their group managers (see Figure 3).

Figure 3: Inter-group relationships (group managers of adjacent groups communicating with one another)

Each group manager deals with requests of its group, in order to register local machines and maintain the corresponding availability. Additionally, group managers communicate with group managers of other groups. Each group manager exports coarse availability information to group managers of adjacent groups, and also receives requests from other external services to furnish detailed availability information. In this way, if there are resources available in external groups, it is possible to send processes, controllers and activities to these groups in order to
4. PROCESS EXECUTION
In the proposed grid architecture, a process is specified in XML, using controllers to determine the control flow, referencing other processes and activities, and passing objects to their parameters in order to define the data flow. Once specified, the process is compiled into a set of classes that represent the specific process, activity and controller types. It can then be instantiated and executed by a process coordinator.

4.1 Dynamic Model
To execute a specified process, it must be instantiated by referencing its type on a process coordinator service of a specific group; its initial parameters are passed in, and it is then started. The process coordinator carries out the process by executing the process elements in its body sequentially. If an element is a process or a controller, the process coordinator may execute it on the same machine or pass it to a process coordinator on a remote machine, if one is available; if the element is an activity, it passes it to an activity manager on an available machine.
To execute a process element, process coordinators ask the local group manager to find an available machine that hosts the required service (process coordinator or activity manager). The group manager may return a local machine, a machine in another group, or none, depending on the availability of such a resource in the grid. It returns an external worker (a machine with an activity manager) only if there are no available workers in the local group, and it returns an external process server (a machine with a process coordinator) only if there are no available process servers or workers in the local group. Following this rule, group managers try to find process servers in the same group as the available workers. This procedure is applied recursively by all process coordinators that execute subprocesses or controllers of a process.

Figure 4: FindPrimes process execution

Therefore, because processes are structured by nesting process elements, process execution is automatically distributed hierarchically through one or more grid groups, according to the availability and locality of computing resources. The advantages of this distribution model are wide-area execution, which can exploit potentially all grid resources, and localized communication among process elements, because strongly dependent elements, which sit under the same controller, are placed in the same or nearby groups. It also eases monitoring and steering, because the structured controllers maintain state and control over their inner elements.

4.2 Process Execution Example
Revisiting the example of Section 2.2, a process type is specified to find the prime numbers within a given range of integers. To solve this problem, it creates a number of activities using the parfor controller; each activity then finds the primes in one part of the range. Figure 4 shows an instance of this process type executing on the proposed infrastructure.
A FindPrimes process instance is created on an available process coordinator (PC), which begins executing the parfor controller. In each iteration of this controller, the process coordinator asks the group manager (GM) for an available activity manager (AM) on which to execute a new instance of the FindPrimes activity. If an AM is available in this group or in an external one, the process coordinator sends the activity class and initial parameters to that activity manager and requests its execution; if no activity manager is available, the controller waits until one becomes available or is created. In parallel, whenever an activity finishes, its result is sent back to the process coordinator, which records it in the parfor controller.
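To make the parfor semantics concrete, the following is a minimal, thread-based analogue of this example in plain Java. It is an illustration, not the XML process of Section 2.2: plain threads stand in for activities dispatched to activity managers, and a synchronized set plays the role of the shared primes collection with its exit condition.

```java
import java.util.Collections;
import java.util.Set;
import java.util.TreeSet;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Thread-based analogue of the FindPrimes process (Section 4.2): a
// parfor-like loop forks one "activity" per subrange, each activity records
// primes in a shared collection, and every activity stops early once the
// required number of primes has been gathered.
class FindPrimesDemo {
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; (long) d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    // Find up to 'needed' primes in [lo, hi] using 'numActs' parallel activities.
    static Set<Integer> findPrimes(int lo, int hi, int needed, int numActs)
            throws InterruptedException {
        Set<Integer> primes = Collections.synchronizedSet(new TreeSet<>());
        ExecutorService pool = Executors.newFixedThreadPool(numActs);
        int size = (hi - lo + 1 + numActs - 1) / numActs;      // subrange size
        for (int i = 0; i < numActs; i++) {                    // the "parfor" fork
            int from = lo + i * size, to = Math.min(hi, from + size - 1);
            pool.execute(() -> {
                for (int n = from; n <= to; n++) {
                    if (primes.size() >= needed) return;       // shared exit condition
                    if (isPrime(n)) primes.add(n);
                }
            });
        }
        pool.shutdown();                                       // the "join": wait for
        pool.awaitTermination(1, TimeUnit.MINUTES);            // all activities to end
        return primes;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(findPrimes(2, 50, 1000, 4));        // all primes up to 50
    }
}
```

Because the exit condition is checked concurrently, the collection may briefly overshoot `needed` by a few elements, which matches the best-effort semantics of a conditional join.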
The controller then waits until all started activities have finished, and it ends. At this point, the process coordinator verifies that there is no other process element to execute and finishes the process.

Middleware for Grid Computing 90

5. RELATED WORK
There are several academic and commercial products that promise to support grid computing by providing interfaces, protocols and services that leverage widely distributed resources in heterogeneous and autonomous networks. Among them, Globus [6], Condor-G [9] and Legion [10] are widely known. Aiming to standardize grid interfaces and services, the Open Grid Services Architecture (OGSA) [7] has been defined.
Grid architectures generally provide services that manage computing resources and distribute the execution of independent tasks over the available ones. Emerging architectures, however, maintain task dependencies and automatically execute tasks in the correct order; they exploit these dependencies to provide automatic recovery and better distribution and scheduling algorithms. Following this model, WebFlow [1] is a process specification tool and execution environment, built on CORBA, that allows graphical composition of activities and their distributed execution in a grid environment. Opera-G [3], like WebFlow, uses a process specification language similar to data flow diagrams and workflow languages, but furnishes automatic execution recovery and limited steering of process execution.
The architectures cited above, and others that enact processes over the grid, have centralized coordination. To overcome this limitation, systems such as SwinDew [13] propose fully distributed process execution, in which each node knows where to execute the next activity or join activities in a peer-to-peer environment. In the specific area of activity distribution and scheduling, which this work emphasizes, GridFlow [5] is
remarkable. It uses two-level scheduling: global and local. At the local level, it has services that predict computing resource utilization and activity duration. Based on this information, GridFlow employs a PERT-like technique that forecasts activity start times and durations in order to schedule activities better on the available resources.
The architecture proposed in this paper, which encompasses a programming model and an execution support infrastructure, is widely decentralized, unlike WebFlow and Opera-G, making it more scalable and fault-tolerant; like the latter, however, it is designed to support execution recovery. Compared to SwinDew, where each node has only a limited view of the process (the activity that starts next), the proposed architecture contains widely distributed process coordinators that coordinate whole processes or parts of them, which makes processes easier to monitor and control. Finally, the support infrastructure partitions a process and its subprocesses for grid execution, allowing one group to enlist another group to coordinate and execute process elements on its behalf. This differs from GridFlow, which can execute a process on at most two levels, with the global level solely responsible for scheduling subprocesses in other groups; this can limit overall process performance and make the system less scalable.

6. CONCLUSION AND FUTURE WORK
Grid computing is an emerging research field that intends to promote distributed and parallel computing over a wide-area network of heterogeneous and autonomous administrative domains in a seamless way, similar to what the Internet does for data sharing. Several products support the execution of independent tasks over the grid, but only a few support processes with interdependent tasks. To address this, this paper proposes a programming model and a support infrastructure that allow the execution of structured processes in a widely distributed and hierarchical manner. The support infrastructure provides automatic, structured and recursive distribution of process elements over groups of available machines; better resource use, due to its on-demand creation of process elements; easy process monitoring and steering, due to its structured nature; and localized communication among strongly dependent process elements, which are placed under the same controller. These features contribute to better scalability, fault tolerance and control for process execution over the grid. Moreover, they open the door to better scheduling algorithms, recovery mechanisms and dynamic modification schemes.
The next step is the implementation of a recovery mechanism that uses the execution and data state of processes and controllers to recover process execution. After that, we intend to extend the scheduling algorithm to forecast machine use in the same or other groups and to predict the start times of process elements, using this information to pre-allocate resources and thus obtain better process execution performance. Finally, it would be interesting to investigate schemes for dynamically modifying processes over the grid, in order to evolve and adapt long-running processes to the continuously changing grid environment.

7. ACKNOWLEDGMENTS
We would like to thank Paulo C. Oliveira, from the State Treasury Department of Sao Paulo, for his thorough revision and insightful comments.

8. REFERENCES
[1] E. Akarsu, G. C. Fox, W. Furmanski, and T. Haupt. WebFlow: High-Level Programming Environment and Visual Authoring Toolkit for High Performance Distributed Computing. In Proceedings of Supercomputing (SC98), 1998.
[2] T. Andrews and F.
Curbera. Specification: Business Process Execution Language for Web Services Version 1.1. IBM DeveloperWorks, 2003. Available at http://www-106.ibm.com/developerworks/library/wsbpel.
[3] W. Bausch. OPERA-G: A Microkernel for Computational Grids. PhD thesis, Swiss Federal Institute of Technology, Zurich, 2004.
[4] T. Bray and J. Paoli. Extensible Markup Language (XML) 1.0. XML Core WG, W3C, 2004. Available at http://www.w3.org/TR/2004/REC-xml-20040204.
[5] J. Cao, S. A. Jarvis, S. Saini, and G. R. Nudd. GridFlow: Workflow Management for Grid Computing. In Proceedings of the International Symposium on Cluster Computing and the Grid (CCGrid 2003), 2003.
[6] I. Foster and C. Kesselman. Globus: A Metacomputing Infrastructure Toolkit. Intl. J. Supercomputer Applications, 11(2):115-128, 1997.
[7] I. Foster, C. Kesselman, J. M. Nick, and S. Tuecke. The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration. Open Grid Service Infrastructure WG, Global Grid Forum, 2002.
[8] I. Foster, C. Kesselman, and S. Tuecke. The Anatomy of the Grid: Enabling Scalable Virtual Organizations. The Intl. Journal of High Performance Computing Applications, 15(3):200-222, 2001.
[9] J. Frey, T. Tannenbaum, M. Livny, I. Foster, and S. Tuecke. Condor-G: A Computation Management Agent for Multi-institutional Grids. In Proceedings of the Tenth Intl. Symposium on High Performance Distributed Computing (HPDC-10). IEEE, 2001.
[10] A. S. Grimshaw and W. A. Wulf. Legion - A View from 50,000 Feet. In Proceedings of the Fifth Intl. Symposium on High Performance Distributed Computing. IEEE, 1996.
[11] T. Lindholm and F. Yellin. The Java Virtual Machine Specification. Sun Microsystems, second edition, 1999.
[12] B. R. Schulze and E. R. M. Madeira. Grid Computing with Active Services. Concurrency and Computation: Practice and Experience Journal, 5(16):535-542, 2004.
[13] J. Yan, Y. Yang, and G. K. Raikundalia. Enacting Business Processes in a Decentralised Environment with P2P-Based Workflow Support. In Proceedings of the Fourth Intl. Conference on Web-Age Information Management (WAIM 2003), 2003.
execution coordination and automatic execution recovery.\nThere are few proposed middleware infrastructures that support process execution over the grid.\nIn general, they model processes by interconnecting their activities through control and data dependencies.\nAmong them, WebFlow [1] emphasizes an architecture to construct distributed processes; Opera-G [3] provides execution recovering and steering, GridFlow [5] focuses on improved scheduling algorithms that take advantage of activity dependencies, and SwinDew [13] supports totally distributed execution on peer-to-peer networks.\nHowever, such infrastructures contain scheduling algorithms that are centralized by process [1, 3, 5], or completely distributed, but difficult to monitor and control [13].\nIn order to address such constraints, this paper proposes a structured programming model for process description and a hierarchical process execution infrastructure.\nThe programming model employs structured control flow to promote controlled and contextualized activity execution.\nComplementary, the support infrastructure, which executes a process specification, takes advantage of the hierarchical structure of a specified process in order to distribute and schedule strong dependent activities as a unit, allowing a better execution performance and fault-tolerance and providing localized communication.\nThe programming model and the support infrastructure, named X avantes, are under implementation in order to show the feasibility of the proposed model and to demonstrate its two major advantages: to promote widely distributed process execution and scheduling, but in a controlled, structured and localized way.\nNext Section describes the programming model, and Section 3, the support infrastructure for the proposed grid computing model.\nSection 4 demonstrates how the support infrastructure executes processes and distributes activities.\nRelated works are presented and compared to the proposed model in Section 5.\nThe 
last Section concludes this paper encompassing the advantages of the proposed hierarchical process execution support for the grid computing area and lists some future works.\nFigure 1: High-level framework of the programming model\n2.\nPROGRAMMING MODEL\n2.1 Controllers\n2.2 Process Example\n3.\nSUPPORT INFRASTRUCTURE\n3.1 Infrastructure Architecture\n3.2 Inter-group Relationships\n4.\nPROCESS EXECUTION\n4.1 Dynamic Model\n4.2 Process Execution Example\n5.\nRELATED WORK\nThere are several academic and commercial products that promise to support grid computing, aiming to provide interfaces, protocols and services to leverage the use of widely\n91 Middleware 2004 Companion\n6.\nCONCLUSION AND FUTURE WORK\nGrid computing is an emerging research field that intends to promote distributed and parallel computing over the wide area network of heterogeneous and autonomous administrative domains in a seamless way, similar to what Internet does to the data sharing.\nThere are several products that support execution of independent tasks over grid, but only a few supports the execution of processes with interdependent tasks.\nIn order to address such subject, this paper proposes a programming model and a support infrastructure that allow the execution of structured processes in a widely distributed and hierarchical manner.\nThis support infrastructure provides automatic, structured and recursive distribution of process elements over groups of available machines; better resource use, due to its on demand creation of process elements; easy process monitoring and steering, due to its structured nature; and localized communication among strong dependent process elements, which are placed under the same controller.\nThese features contribute to better scalability, fault-tolerance and control for processes execution over the grid.\nMoreover, it opens doors for better scheduling algorithms, recovery mechanisms, and also, dynamic modification schemes.\nThe next work will be the 
implementation of a recovery mechanism that uses the execution and data state of processes and controllers to recover process execution.\nAfter that, it is desirable to advance the scheduling algorithm to forecast machine use in the same or other groups and to foresee start time of process elements, in order to use this information to pre-allocate resources and, then, obtain a better process execution performance.\nFinally, it is interesting to investigate schemes of dynamic modification of processes over the grid, in order to evolve and adapt long-term processes to the continuously changing grid environment.","lvl-4":"A Hierarchical Process Execution Support for Grid Computing\nABSTRACT\nGrid is an emerging infrastructure used to share resources among virtual organizations in a seamless manner and to provide breakthrough computing power at low cost.\nNowadays there are dozens of academic and commercial products that allow execution of isolated tasks on grids, but few products support the enactment of long-running processes in a distributed fashion.\nIn order to address such subject, this paper presents a programming model and an infrastructure that hierarchically schedules process activities using available nodes in a wide grid environment.\nTheir advantages are automatic and structured distribution of activities and easy process monitoring and steering.\n1.\nINTRODUCTION\nGrid computing is a model for wide-area distributed and parallel computing across heterogeneous networks in multiple administrative domains.\nThis research field aims to promote sharing of resources and provides breakthrough computing power over this wide network of virtual organizations in a seamless manner [8].\nSpecifically, considering distributed execution, most of the existing grid infrastructures supports execution of isolated tasks, but they do not consider their task interdependencies as in processes (workflows) [12].\nThis deficiency restricts better scheduling algorithms, distributed 
execution coordination and automatic execution recovery.\nThere are few proposed middleware infrastructures that support process execution over the grid.\nIn general, they model processes by interconnecting their activities through control and data dependencies.\nHowever, such infrastructures contain scheduling algorithms that are centralized by process [1, 3, 5], or completely distributed, but difficult to monitor and control [13].\nIn order to address such constraints, this paper proposes a structured programming model for process description and a hierarchical process execution infrastructure.\nThe programming model employs structured control flow to promote controlled and contextualized activity execution.\nComplementary, the support infrastructure, which executes a process specification, takes advantage of the hierarchical structure of a specified process in order to distribute and schedule strong dependent activities as a unit, allowing a better execution performance and fault-tolerance and providing localized communication.\nThe programming model and the support infrastructure, named X avantes, are under implementation in order to show the feasibility of the proposed model and to demonstrate its two major advantages: to promote widely distributed process execution and scheduling, but in a controlled, structured and localized way.\nNext Section describes the programming model, and Section 3, the support infrastructure for the proposed grid computing model.\nSection 4 demonstrates how the support infrastructure executes processes and distributes activities.\nRelated works are presented and compared to the proposed model in Section 5.\nThe last Section concludes this paper encompassing the advantages of the proposed hierarchical process execution support for the grid computing area and lists some future works.\nFigure 1: High-level framework of the programming model\n5.\nRELATED WORK\nThere are several academic and commercial products that promise to support 
grid computing, aiming to provide interfaces, protocols and services to leverage the use of widely\n6.\nCONCLUSION AND FUTURE WORK\nThere are several products that support execution of independent tasks over grid, but only a few supports the execution of processes with interdependent tasks.\nIn order to address such subject, this paper proposes a programming model and a support infrastructure that allow the execution of structured processes in a widely distributed and hierarchical manner.\nThese features contribute to better scalability, fault-tolerance and control for processes execution over the grid.\nMoreover, it opens doors for better scheduling algorithms, recovery mechanisms, and also, dynamic modification schemes.\nThe next work will be the implementation of a recovery mechanism that uses the execution and data state of processes and controllers to recover process execution.\nFinally, it is interesting to investigate schemes of dynamic modification of processes over the grid, in order to evolve and adapt long-term processes to the continuously changing grid environment.","lvl-2":"A Hierarchical Process Execution Support for Grid Computing\nABSTRACT\nGrid is an emerging infrastructure used to share resources among virtual organizations in a seamless manner and to provide breakthrough computing power at low cost.\nNowadays there are dozens of academic and commercial products that allow execution of isolated tasks on grids, but few products support the enactment of long-running processes in a distributed fashion.\nIn order to address such subject, this paper presents a programming model and an infrastructure that hierarchically schedules process activities using available nodes in a wide grid environment.\nTheir advantages are automatic and structured distribution of activities and easy process monitoring and steering.\n1.\nINTRODUCTION\nGrid computing is a model for wide-area distributed and parallel computing across heterogeneous networks in multiple 
administrative domains.\nThis research field aims to promote sharing of resources and provides breakthrough computing power over this wide network of virtual organizations in a seamless manner [8].\nTraditionally, as in Globus [6], Condor-G [9] and Legion [10], there is a minimal infrastructure that provides data resource sharing, computational resource utilization management, and distributed execution.\nSpecifically, considering distributed execution, most of the existing grid infrastructures supports execution of isolated tasks, but they do not consider their task interdependencies as in processes (workflows) [12].\nThis deficiency restricts better scheduling algorithms, distributed execution coordination and automatic execution recovery.\nThere are few proposed middleware infrastructures that support process execution over the grid.\nIn general, they model processes by interconnecting their activities through control and data dependencies.\nAmong them, WebFlow [1] emphasizes an architecture to construct distributed processes; Opera-G [3] provides execution recovering and steering, GridFlow [5] focuses on improved scheduling algorithms that take advantage of activity dependencies, and SwinDew [13] supports totally distributed execution on peer-to-peer networks.\nHowever, such infrastructures contain scheduling algorithms that are centralized by process [1, 3, 5], or completely distributed, but difficult to monitor and control [13].\nIn order to address such constraints, this paper proposes a structured programming model for process description and a hierarchical process execution infrastructure.\nThe programming model employs structured control flow to promote controlled and contextualized activity execution.\nComplementary, the support infrastructure, which executes a process specification, takes advantage of the hierarchical structure of a specified process in order to distribute and schedule strong dependent activities as a unit, allowing a better execution 
performance and fault-tolerance and providing localized communication.\nThe programming model and the support infrastructure, named X avantes, are under implementation in order to show the feasibility of the proposed model and to demonstrate its two major advantages: to promote widely distributed process execution and scheduling, but in a controlled, structured and localized way.\nNext Section describes the programming model, and Section 3, the support infrastructure for the proposed grid computing model.\nSection 4 demonstrates how the support infrastructure executes processes and distributes activities.\nRelated works are presented and compared to the proposed model in Section 5.\nThe last Section concludes this paper encompassing the advantages of the proposed hierarchical process execution support for the grid computing area and lists some future works.\nFigure 1: High-level framework of the programming model\n2.\nPROGRAMMING MODEL\nThe programming model designed for the grid computing architecture is very similar to the specified to the Business Process Execution Language (BPEL) [2].\nBoth describe processes in XML [4] documents, but the former specifies processes strictly synchronous and structured, and has more constructs for structured parallel control.\nThe rationale behind of its design is the possibility of hierarchically distribute the process control and coordination based on structured constructs, differently from BPEL, which does not allow hierarchical composition of processes.\nIn the proposed programming model, a process is a set of interdependent activities arranged to solve a certain problem.\nIn detail, a process is composed of activities, subprocesses, and controllers (see Figure 1).\nActivities represent simple tasks that are executed on behalf of a process; subprocesses are processes executed in the context of a parent process; and controllers are control elements used to specify the execution order of these activities and subprocesses.\nLike 
structured languages, controllers can be nested and then determine the execution order of other controllers.\nData are exchanged among process elements through parameters.\nThey are passed by value, in case of simple objects, or by reference, if they are remote objects shared among elements of the same controller or process.\nExternal data can be accessed through data sources, such as relational databases or distributed objects.\n2.1 Controllers\nControllers are structured control constructs used to define the control flow of processes.\nThere are sequential and parallel controllers.\nThe sequential controller types are: block, switch, for and while.\nThe block controller is a simple sequential construct, and the others mimic equivalent structured programming language constructs.\nSimilarly, the parallel types are: par, parswitch, parfor and parwhile.\nThey extend the respective sequential counterparts to allow parallel execution of process elements.\nAll parallel controller types fork the execution of one or more process elements, and then, wait for each execution to finish.\nIndeed, they contain a fork and a join of execution.\nAiming to implement a conditional join, all parallel controller types contain an exit condition, evaluated all time that an element execution finishes, in order to determine when the controller must end.\nThe parfor and parwhile are the iterative versions of the parallel controller types.\nBoth fork executions while the iteration condition is true.\nThis provides flexibility to determine, at run-time, the number of process elements to execute simultaneously.\nWhen compared to workflow languages, the parallel controller types represent structured versions of the workflow control constructors, because they can nest other controllers and also can express fixed and conditional forks and joins, present in such languages.\n2.2 Process Example\nThis section presents an example of a prime number search application that receives a certain range of 
integers and returns a set of primes contained in this range.\nThe whole computation is made by a process, which uses a parallel controller to start and dispatch several concurrent activities of the same type, in order to find prime numbers.\nThe portion of the XML document that describes the process and activity types is shown below.\nFirstly, a process type that finds prime numbers, named FindPrimes, is defined.\nIt receives, through its input parameters, a range of integers in which prime numbers have to be found, the number of primes to be returned, and the number of activities to be executed in order to perform this work.\nAt the end, the found prime numbers are returned as a collection through its output parameter.\nThis process contains a PARFOR controller aiming to execute a determined number of parallel activities.\nIt iterates from 0 to getNumActs () - 1, which determines the number of activities, starting a parallel activity in each iteration.\nIn such case, the controller divides the whole range of numbers in subranges of the same size, and, in each iteration, starts a parallel activity that finds prime numbers in a specific subrange.\nThese activities receive a shared object by reference in order to store the prime numbers just found and control if the required number of primes has been reached.\nFinally, it is defined the activity type, FindPrimes, used to find prime numbers in each subrange.\nIt receives, through its input parameters, the range of numbers in which it has to find prime numbers, the total number of prime numbers to be found by the whole process, and, passed by reference, a collection object to store the found prime numbers.\nBetween its CODE markers, there is a simple code to find prime numbers, which iterates over the specified range and verifies if the current integer is a prime.\nAdditionally, in each iteration, the code verifies if the required number of primes, inserted in the primes collection by all concurrent activities, has 
been reached, and exits if true.\nThe advantage of using controllers is the possibility of the support infrastructure determines the point of execution the process is in, allowing automatic recovery and monitoring, and also the capability of instantiating and dispatching process elements only when there are enough computing resources available, reducing unnecessary overhead.\nBesides, due to its structured nature, they can be easily composed and the support infrastructure can take advantage of this in order to distribute hierarchically the nested controllers to\nFigure 2: Infrastructure architecture\ndifferent machines over the grid, allowing enhanced scalability and fault-tolerance.\n3.\nSUPPORT INFRASTRUCTURE\nThe support infrastructure comprises tools for specification, and services for execution and monitoring of structured processes in highly distributed, heterogeneous and autonomous grid environments.\nIt has services to monitor availability of resources in the grid, to interpret processes and schedule activities and controllers, and to execute activities.\n3.1 Infrastructure Architecture\nThe support infrastructure architecture is composed of groups of machines and data repositories, which preserves its administrative autonomy.\nGenerally, localized machines and repositories, such as in local networks or clusters, form a group.\nEach machine in a group must have a Java Virtual Machine (JVM) [11], and a Java Runtime Library, besides a combination of the following grid support services: group manager (GM), process coordinator (PC) and activity manager (AM).\nThis combination determines what kind of group node it represents: a group server, a process server, or simply a worker (see Figure 2).\nIn a group there are one or more group managers, but only one acts as primary and the others, as replicas.\nThey are responsible to maintain availability information of group machines.\nMoreover, group managers maintain references to data resources of the group.\nThey use 
group repositories to persist and recover the location of nodes and their availability.\nTo control process execution, there are one or more process coordinators per group.\nThey are responsible for instantiating and executing processes and controllers, selecting resources, and scheduling and dispatching activities to workers.\nTo persist and recover process execution state and data, and to load process specifications, they use group repositories.\nFinally, several group nodes host an activity manager.\nIt is responsible for executing activities on its host machine on behalf of the group's process coordinators, and for reporting the current availability of that machine to the group managers.\nActivity managers also maintain pending-activity queues, containing activities waiting to be executed.\n3.2 Inter-group Relationships\nTo model a realistic grid architecture, the infrastructure must span several, potentially all, local networks, much as the Internet does.\nTo satisfy this goal, local groups are connected to other groups, directly or indirectly, through their group managers (see Figure 3).\nFigure 3: Inter-group relationships\nEach group manager handles the requests of its own group (represented by dashed ellipses), registering local machines and maintaining their availability information.\nAdditionally, group managers communicate with the group managers of other groups.\nEach group manager exports coarse availability information to the group managers of adjacent groups, and also receives requests from external services to furnish detailed availability information.\nIn this way, if resources are available in external groups, processes, controllers and activities can be sent to those groups, to be executed by external process coordinators and activity managers, respectively.\n4.\nPROCESS EXECUTION\nIn the proposed grid architecture, a process is specified in XML, using controllers to determine control flow, referencing other processes and activities, and passing objects 
to their parameters in order to define data flow.\nOnce specified, the process is compiled into a set of classes, which represent specific process, activity and controller types.\nAt this point, it can be instantiated and executed by a process coordinator.\n4.1 Dynamic Model\nTo execute a specified process, it must be instantiated by referencing its type on the process coordinator service of a specific group.\nThe initial parameters must then be passed to it, after which it can be started.\nThe process coordinator carries out the process by executing the process elements included in its body sequentially.\nIf an element is a process or a controller, the process coordinator can choose to execute it on the same machine or to pass it to another process coordinator on a remote machine, if one is available.\nOtherwise, if the element is an activity, it is passed to an activity manager on an available machine.\nTo execute a process element, a process coordinator asks the local group manager to find an available machine that hosts the required service, process coordinator or activity manager.\nThe group manager may return a local machine, a machine in another group, or none, depending on the availability of such a resource in the grid.\nIt returns an external worker (activity manager machine) only if there are no available workers in the local group, and an external process server (process coordinator machine) only if there are no available process servers or workers in the local group.\nFollowing this rule, group managers try to find process servers in the same group as the available workers.\nThis procedure is followed recursively by all process coordinators that execute subprocesses or controllers of a process.\nFigure 4: FindPrimes process execution\nTherefore, because processes are structured by nesting process elements, process execution is automatically distributed hierarchically through one or more grid groups, according to the availability and locality of computing 
resources.\nThe advantages of this distribution model are wide-area execution, which takes advantage of potentially all grid resources, and localized communication of process elements, because strongly dependent elements, which sit under the same controller, are placed in the same or nearby groups.\nMoreover, the model supports easy monitoring and steering, due to its structured controllers, which maintain state and control over their inner elements.\n4.2 Process Execution Example\nRevisiting the example shown in Section 2.2, a process type is specified to find prime numbers in a given range.\nTo solve this problem, the process creates a number of activities using the parfor controller; each activity then finds the primes in one part of the range.\nFigure 4 shows an instance of this process type executing over the proposed infrastructure.\nA FindPrimes process instance is created in an available process coordinator (PC), which begins executing the parfor controller.\nIn each iteration of this controller, the process coordinator requests from the group manager (GM) an available activity manager (AM) on which to execute a new instance of the FindPrimes activity.\nIf an AM is available in this group or in an external one, the process coordinator sends the activity class and initial parameters to that activity manager and requests its execution.\nOtherwise, the controller enters a wait state until an activity manager becomes available or is created.\nIn parallel, whenever an activity finishes, its result is sent back to the process coordinator, which records it in the parfor controller.\nThe controller then waits until all activities that have been started are finished, and ends.\nAt this point, the process coordinator verifies that there is no other process element to execute and finishes the process.\n5.\nRELATED WORK\nThere are several academic and commercial products that promise to support grid 
computing, aiming to provide interfaces, protocols and services that leverage widely distributed resources in heterogeneous and autonomous networks.\n91 Middleware 2004 Companion\nAmong them, Globus [6], Condor-G [9] and Legion [10] are widely known.\nTo standardize grid interfaces and services, the Open Grid Services Architecture (OGSA) [7] has been defined.\nGrid architectures generally provide services that manage computing resources and distribute the execution of independent tasks over the available ones.\nEmerging architectures, however, maintain task dependencies and automatically execute tasks in a correct order.\nThey take advantage of these dependencies to provide automatic recovery, and better distribution and scheduling algorithms.\nFollowing this model, WebFlow [1] is a process specification tool and execution environment built on CORBA that allows graphical composition of activities and their distributed execution in a grid environment.\nOpera-G [3], like WebFlow, uses a process specification language similar to data flow diagram and workflow languages, but furnishes automatic execution recovery and limited steering of process execution.\nThe architectures cited above, and others that enact processes over the grid, have centralized coordination.\nTo overcome this limitation, systems like SwinDew [13] propose widely distributed process execution, in which each node knows where to execute the next activity or to join activities in a peer-to-peer environment.\nIn the specific area of activity distribution and scheduling, emphasized in this work, GridFlow [5] is notable.\nIt uses two-level scheduling: global and local.\nAt the local level, it has services that predict computing resource utilization and activity duration.\nBased on this information, GridFlow employs a PERT-like technique that tries to forecast each activity's start time and duration in order to better schedule activities to the available 
resources.\nThe architecture proposed in this paper, which encompasses a programming model and an execution support infrastructure, is widely decentralized, unlike WebFlow and Opera-G, making it more scalable and fault-tolerant; like the latter, however, it is designed to support execution recovery.\nCompared to SwinDew, the proposed architecture contains widely distributed process coordinators, which coordinate processes or parts of them, whereas in SwinDew each node has only a limited view of the process: the activity that starts next.\nThis makes it easier to monitor and control processes.\nFinally, the support infrastructure partitions a process and its subprocesses for grid execution, allowing one group to enlist another group for the coordination and execution of process elements on its behalf.\nThis differs from GridFlow, which can execute a process on at most two levels, with the global level solely responsible for scheduling subprocesses in other groups; this can limit the overall performance of processes and make the system less scalable.\n6.\nCONCLUSION AND FUTURE WORK\nGrid computing is an emerging research field that intends to promote distributed and parallel computing over wide-area networks of heterogeneous and autonomous administrative domains in a seamless way, similar to what the Internet does for data sharing.\nSeveral products support the execution of independent tasks over the grid, but only a few support the execution of processes with interdependent tasks.\nTo address this issue, this paper proposes a programming model and a support infrastructure that allow the execution of structured processes in a widely distributed and hierarchical manner.\nThis support infrastructure provides automatic, structured and recursive distribution of process elements over groups of available machines; better resource use, due to its on-demand creation of process elements; easy process monitoring and 
steering, due to its structured nature; and localized communication among strongly dependent process elements, which are placed under the same controller.\nThese features contribute to better scalability, fault-tolerance and control for process execution over the grid.\nMoreover, they open the door to better scheduling algorithms, recovery mechanisms, and dynamic modification schemes.\nThe next step will be the implementation of a recovery mechanism that uses the execution and data state of processes and controllers to recover process execution.\nAfter that, it is desirable to enhance the scheduling algorithm to forecast machine use in the same or other groups and to predict the start times of process elements, in order to use this information to pre-allocate resources and thus obtain better process execution performance.\nFinally, it is worth investigating schemes for dynamic modification of processes over the grid, in order to evolve and adapt long-term processes to the continuously changing grid environment.","keyphrases":["hierarch process execut","process execut","grid comput","distribut system","distribut applic","distribut process","distribut schedul","distribut execut","distribut comput","parallel comput","parallel execut","process descript","grid architectur","schedul algorithm","process support","distribut middlewar"],"prmu":["P","P","P","M","M","R","R","R","R","M","M","M","M","M","R","M"]} {"id":"C-42","title":"Demonstration of Grid-Enabled Ensemble Kalman Filter Data Assimilation Methodology for Reservoir Characterization","abstract":"Ensemble Kalman filter data assimilation methodology is a popular approach for hydrocarbon reservoir simulations in energy exploration. In this approach, an ensemble of geological models and production data of oil fields is used to forecast the dynamic response of oil wells. The Schlumberger ECLIPSE software is used for these simulations. 
Since models in the ensemble do not communicate, message-passing implementation is a good choice. Each model checks out an ECLIPSE license and therefore, parallelizability of reservoir simulations depends on the number of licenses available. We have Grid-enabled the ensemble Kalman filter data assimilation methodology for the TIGRE Grid computing environment. By pooling the licenses and computing resources across the collaborating institutions using the GridWay metascheduler and the TIGRE environment, the computational accuracy can be increased while reducing the simulation runtime. In this paper, we provide an account of our efforts in Grid-enabling the ensemble Kalman Filter data assimilation methodology. Potential benefits of this approach, observations and lessons learned will be discussed.","lvl-1":"Demonstration of Grid-Enabled Ensemble Kalman Filter Data Assimilation Methodology for Reservoir Characterization Ravi Vadapalli High Performance Computing Center Texas Tech University Lubbock, TX 79409 001-806-742-4350 Ravi.Vadapalli@ttu.edu Ajitabh Kumar Department of Petroleum Engineering Texas A&M University College Station, TX 77843 001-979-847-8735 akumar@tamu.edu Ping Luo Supercomputing Facility Texas A&M University College Station, TX 77843 001-979-862-3107 pingluo@sc.tamu.edu Shameem Siddiqui Department of Petroleum Engineering Texas Tech University Lubbock, TX 79409 001-806-742-3573 Shameem.Siddiqui@ttu.edu Taesung Kim Supercomputing Facility Texas A&M University College Station, TX 77843 001-979-204-5076 tskim@sc.tamu.edu ABSTRACT Ensemble Kalman filter data assimilation methodology is a popular approach for hydrocarbon reservoir simulations in energy exploration.\nIn this approach, an ensemble of geological models and production data of oil fields is used to forecast the dynamic response of oil wells.\nThe Schlumberger ECLIPSE software is used for these simulations.\nSince models in the ensemble do not communicate, message-passing implementation is a good 
choice.\nEach model checks out an ECLIPSE license and therefore, parallelizability of reservoir simulations depends on the number of licenses available.\nWe have Grid-enabled the ensemble Kalman filter data assimilation methodology for the TIGRE Grid computing environment.\nBy pooling the licenses and computing resources across the collaborating institutions using the GridWay metascheduler and the TIGRE environment, the computational accuracy can be increased while reducing the simulation runtime.\nIn this paper, we provide an account of our efforts in Grid-enabling the ensemble Kalman Filter data assimilation methodology.\nPotential benefits of this approach, observations and lessons learned will be discussed.\nCategories and Subject Descriptors C.2.4 [Distributed Systems]: Distributed applications General Terms Algorithms, Design, Performance 1.\nINTRODUCTION Grid computing [1] is an emerging collaborative computing paradigm to extend institution\/organization-specific high performance computing (HPC) capabilities greatly beyond local resources.\nIts importance stems from the fact that groundbreaking research in strategic application areas such as bioscience and medicine, energy exploration and environmental modeling involves strong interdisciplinary components and often requires intercampus collaborations and computational capabilities beyond institutional limitations.\nThe Texas Internet Grid for Research and Education (TIGRE) [2,3] is a state-funded cyberinfrastructure development project carried out by five (Rice, A&M, TTU, UH and UT Austin) major university systems - collectively called TIGRE Institutions.\nThe purpose of TIGRE is to create a higher education Grid to sustain and extend research and educational opportunities across Texas.\nTIGRE is a project of the High Performance Computing across Texas (HiPCAT) [4] consortium.\nThe goal of HiPCAT is to support advanced computational technologies to enhance research, development, and educational activities.\nThe primary 
goal of TIGRE is to design and deploy state-of-the-art Grid middleware that enables integration of computing systems, storage systems and databases, visualization laboratories and displays, and even instruments and sensors across Texas.\nThe secondary goal is to demonstrate the TIGRE capabilities to enhance research and educational opportunities in strategic application areas of interest to the State of Texas.\nThese are bioscience and medicine, energy exploration and air quality modeling.\nThe vision of the TIGRE project is to foster interdisciplinary and intercampus collaborations, identify novel approaches to extend academic-government-private partnerships, and become a competitive model for external funding opportunities.\nThe overall goal of TIGRE is to support local, campus and regional user interests and offer avenues to connect with national Grid projects such as the Open Science Grid [5] and TeraGrid [6].\nWithin the energy exploration strategic application area, we have Grid-enabled the ensemble Kalman Filter (EnKF) [7] approach for data assimilation in reservoir modeling and demonstrated the extensibility of the application using the TIGRE environment and the GridWay [8] metascheduler.\nSection 2 provides an overview of the TIGRE environment and capabilities.\nA description of the application and the need for Grid-enabling the EnKF methodology are provided in Section 3.\nThe implementation details and merits of our approach are discussed in Section 4.\nConclusions are provided in Section 5.\nFinally, observations and lessons learned are documented in Section 6.\n2.\nTIGRE ENVIRONMENT The TIGRE Grid middleware consists of a minimal set of components derived from a subset of the Virtual Data Toolkit (VDT) [9], which supports a variety of operating systems.\nThe purpose of choosing a minimal software stack is to support the applications at hand, and to simplify installation and distribution of client\/server stacks across TIGRE sites.\nAdditional components will be added as they 
become necessary.\nThe PacMan [10] packaging and distribution mechanism is employed for TIGRE client\/server installation and management.\nThe PacMan distribution mechanism involves retrieval, installation, and often configuration of the packaged software.\nThis approach allows the clients to keep current, consistent versions of TIGRE software.\nIt also helps TIGRE sites to install the needed components on resources distributed throughout the participating sites.\nThe TIGRE client\/server stack consists of an authentication and authorization layer and Globus GRAM4-based job submission via web services (pre-web services installations are available upon request).\nTools for handling Grid proxy generation, Grid-enabled file transfer and Grid-enabled remote login are supported.\nThe pertinent details of TIGRE services and tools for job scheduling and management are provided below.\n2.1.\nCertificate Authority The TIGRE security infrastructure includes a certificate authority (CA) accredited by the International Grid Trust Federation (IGTF) for issuing X.
509 user and resource Grid certificates [11].\nThe Texas Advanced Computing Center (TACC) at the University of Texas at Austin is TIGRE's shared CA.\nThe TIGRE Institutions serve as Registration Authorities (RA) for their respective local user base.\nFor up-to-date information on securing user and resource certificates and their installation instructions, see ref [2].\nThe users and hosts on TIGRE are identified by the distinguished name (DN) in their X.509 certificate provided by the CA.\nA native Grid-mapfile that contains a list of authorized DNs is used to authenticate and authorize user job scheduling and management on TIGRE site resources.\nAt Texas Tech University, the users are dynamically allocated one of the many generic pool accounts.\nThis is accomplished through the Grid User Management System (GUMS) [12].\n2.2.\nJob Scheduling and Management The TIGRE environment supports GRAM4-based job submission via web services.\nThe job submission scripts are generated using XML.\nThe web services GRAM translates the XML scripts into scripts for target-cluster-specific batch schedulers such as LSF, PBS, or SGE.\nHigh-bandwidth file transfer protocols such as GridFTP are utilized for staging files in and out of the target machine.\nLogin to remote hosts for compilation and debugging is available only through the GSISSH service, which requires resource authentication through X.509 certificates.\nThe authentication and authorization of Grid jobs are managed by issuing Grid certificates to both users and hosts.\nThe certificate revocation lists (CRL) are updated on a daily basis to maintain the high security standards of the TIGRE Grid services.\nThe TIGRE portal [2] documentation area provides a quick-start tutorial on running jobs on TIGRE.\n2.3.\nMetascheduler The metascheduler interoperates with the cluster-level batch schedulers (such as LSF, PBS) in the overall Grid workflow management.\nIn the present work, we have employed the GridWay [8] metascheduler - a Globus incubator project - to 
schedule and manage jobs across TIGRE.\nGridWay is a lightweight metascheduler that fully utilizes Globus functionalities.\nIt is designed to provide efficient use of dynamic Grid resources by multiple users for Grid infrastructures built on top of Globus services.\nTIGRE site administrators can control resource sharing through a powerful built-in scheduler provided by GridWay, or by extending GridWay's external scheduling module to implement their own scheduling policies.\nApplication users can write job descriptions using GridWay's simple and direct job template format (see Section 4 for implementation details) or the standard Job Submission Description Language (JSDL).\n2.4.\nCustomer Service Management System A TIGRE portal [2] was designed and deployed to serve as an interface between users and resource providers.\nIt was designed using GridPort [13] and is maintained by TACC.\nThe TIGRE environment is supported by open-source tools such as the Open Ticket Request System (OTRS) [14] for servicing trouble tickets, and the MoinMoin [15] Wiki for TIGRE content and knowledge management for education, outreach and training.\nThe links for OTRS and the Wiki are consumed by the TIGRE portal [2] - the gateway for users and resource providers.\nThe TIGRE resource status and loads are monitored by the Grid Port Information Repository (GPIR) service of the GridPort toolkit [13], which interfaces with local cluster load-monitoring services such as Ganglia.\nThe GPIR utilizes cron jobs on each resource to gather site-specific resource characteristics such as jobs that are running, queued and waiting for resource allocation.\n3.\nENSEMBLE KALMAN FILTER APPLICATION The main goal of hydrocarbon reservoir simulations is to forecast the production behavior of an oil and gas field (hereafter denoted as the field) for its development and optimal management.\nIn reservoir modeling, the field is divided into several geological models as shown in Figure 1.\nFor accurate performance 
forecasting of the field, it is necessary to reconcile several geological models to the dynamic response of the field through history matching [16-20].\nFigure 1.\nCross-sectional view of the Field.\nVertical layers correspond to different geological models, and the nails represent oil wells whose historical information will be used for forecasting the production behavior.\n(Figure Ref:http:\/\/faculty.smu.edu\/zchen\/research.html).\nThe EnKF is a Monte Carlo method that works with an ensemble of reservoir models.\nThis method utilizes cross-covariances [21] between the field measurements and the reservoir model parameters (derived from several models) to estimate prediction uncertainties.\nThe geological model parameters in the ensemble are sequentially updated with the goal of minimizing the prediction uncertainties.\nHistorical production response of the field for over 50 years is used in these simulations.\nThe main advantage of EnKF is that it can be readily linked to any reservoir simulator, and can assimilate the latest production data without the need to re-run the simulator from initial conditions.\nResearchers in Texas are large subscribers of the Schlumberger ECLIPSE [22] package for reservoir simulations.\nIn reservoir modeling, each geological model checks out an ECLIPSE license.\nThe simulation runtime of the EnKF methodology depends on the number of geological models used, the number of ECLIPSE licenses available, the production history of the field, and the propagated uncertainties in history matching.\nThe overall EnKF workflow is shown in Figure 2.\nFigure 2.\nEnsemble Kalman Filter Data Assimilation Workflow.\nEach site has L licenses.\nAt START, the master\/control process (EnKF main program) reads the simulation configuration file for the number (N) of models, and model-specific input files.\nThen, N working directories are created to store the output files.\nAt the end of each iteration, the master\/control process collects the output files from the N models and post-processes 
cross-covariances [21] to estimate the prediction uncertainties.\nThis information is used to update the models (or input files) for the next iteration.\nThe simulation continues until the production histories are exhausted.\nA typical EnKF simulation with N=50 and field histories of 50-60 years, in time steps ranging from three months to a year, takes about three weeks in a serial computing environment.\nIn a parallel computing environment, there is no interprocess communication between the geological models in the ensemble.\nHowever, at the end of each simulation time step, the model-specific output files must be collected for analyzing cross-covariances [21] and preparing the next set of input files.\nTherefore, a master-slave model in a message-passing (MPI) environment is a suitable paradigm.\nIn this approach, the geological models are treated as slaves and are distributed across the available processors.\n(Figure 2 elements: Cluster or TIGRE\/GridWay; START; Read Configuration File; Create N Working Directories; Create N Input files; Model 1, Model 2, ..., Model N; ECLIPSE on Site A, Site B, ..., Site Z; Collect N Model Outputs, Post-process Output files; END.)\nThe master 
process collects the model-specific output files, analyzes them, and prepares the next set of input files for the simulation.\nSince each geological model checks out an ECLIPSE license, parallelizability of the simulation depends on the number of licenses available.\nWhen the available number of licenses is less than the number of models in the ensemble, one or more of the nodes in the MPI group have to handle more than one model in a serial fashion and therefore, it takes longer to complete the simulation.\nA Petroleum Engineering Department usually procures 10-15 ECLIPSE licenses, while at least a ten-fold increase in the number of licenses would be necessary for industry-standard simulations.\nThe number of licenses can be increased by involving several Petroleum Engineering Departments that support the ECLIPSE package.\nSince MPI does not scale very well for applications that involve remote compute clusters, and to get around the firewall issues with license servers across administrative domains, Grid-enabling the EnKF workflow seems to be necessary.\nWith this motivation, we have implemented a Grid-enabled EnKF workflow for the TIGRE environment and demonstrated parallelizability of the application across TIGRE using the GridWay metascheduler.\nFurther details are provided in the next section.\n4.\nIMPLEMENTATION DETAILS To Grid-enable the EnKF approach, we have eliminated the MPI code for parallel processing and replaced it with N single-processor jobs (or sub-jobs), where N is the number of geological models in the ensemble.\nThese model-specific sub-jobs were distributed across TIGRE sites that support the ECLIPSE package using the GridWay [8] metascheduler.\nFor each sub-job, we have constructed a GridWay job template that specifies the executable, input and output files, and resource requirements.\nSince the TIGRE compute resources are not expected to change frequently, we have used a static resource discovery policy for GridWay, and the sub-jobs were scheduled dynamically across the TIGRE 
resources using GridWay.\nFigure 3 shows the sub-job template file for the GridWay metascheduler.\nFigure 3.\nGridWay Sub-Job Template\nIn Figure 3, the REQUIREMENTS flag is set to choose the resources that satisfy the application requirements.\nIn the case of the EnKF application, for example, we need resources that support the ECLIPSE package.\nThe ARGUMENTS flag specifies the model in the ensemble that will invoke ECLIPSE at a remote site.\nINPUT_FILES is prepared by the EnKF main program (or master\/control process) and is transferred by GridWay to the remote site, where it is untarred and prepared for execution.\nFinally, OUTPUT_FILES specifies the name and location where the output files are to be written.\nThe command-line features of GridWay were used to collect and process the model-specific outputs to prepare the new set of input files.\nThis step mimics MPI process synchronization in the master-slave model.\nAt the end of each iteration, the compute resources and licenses are committed back to the pool.\nTable 1 shows the sub-jobs in the TIGRE Grid via GridWay, using the gwps command; for clarity, only selected columns are shown.\nUSER JID DM EM NAME HOST\npingluo 88 wrap pend enkf.jt antaeus.hpcc.ttu.edu\/LSF\npingluo 89 wrap pend enkf.jt antaeus.hpcc.ttu.edu\/LSF\npingluo 90 wrap actv enkf.jt minigar.hpcc.ttu.edu\/LSF\npingluo 91 wrap pend enkf.jt minigar.hpcc.ttu.edu\/LSF\npingluo 92 wrap done enkf.jt cosmos.tamu.edu\/PBS\npingluo 93 wrap epil enkf.jt cosmos.tamu.edu\/PBS\nTable 1.\nJob scheduling across TIGRE using the GridWay metascheduler.\nDM is the dispatch state, EM is the execution state, JID is the job id, and HOST corresponds to the site-specific cluster and its local batch scheduler.\nWhen a job is submitted to GridWay, it goes through a series of dispatch (DM) and execution (EM) states.\nFor DM, the states include pend(ing), prol(og), wrap(per), epil(og), and done.\nDM=prol means the job has been scheduled to a resource and the remote working directory is in preparation.\nDM=wrap implies 
that GridWay is executing the wrapper, which in turn executes the application.\nDM=epil implies the job has finished running at the remote site and results are being transferred back to the GridWay server.\nSimilarly, EM=pend implies the job is waiting in the queue for a resource, and EM=actv implies the job is running.\nFor a complete list of message flags and their descriptions, see the documentation in ref [8].\nWe have demonstrated the Grid-enabled EnKF runs using GridWay for the TIGRE environment.\nThe jobs were chosen so that the runtime does not exceed half an hour.\nThe simulation runs involved up to 20 jobs between the A&M and TTU sites, with TTU serving 10 licenses.\nFor resource information, see Table 1.\nOne of the main advantages of Grid-enabled EnKF simulation is that both the resources and licenses are released back to the pool at the end of each simulation time step, unlike in the MPI implementation, where licenses and nodes are locked until the completion of the entire simulation.\nHowever, the fact that each sub-job gets scheduled independently via GridWay could incur an additional time delay caused by waiting in the queue for execution in each simulation time step.\nSuch delays are not expected in the MPI implementation, where the node is blocked for processing sub-jobs (model-specific calculations) until the end of the simulation.\nFigure 3 (GridWay sub-job template) contents:\nEXECUTABLE=runFORWARD\nREQUIREMENTS=HOSTNAME=cosmos.tamu.edu | HOSTNAME=antaeus.hpcc.ttu.edu | HOSTNAME=minigar.hpcc.ttu.edu |\nARGUMENTS=001\nINPUT_FILES=001.in.tar\nOUTPUT_FILES=001.out.tar\nThere are two main scenarios for comparing the Grid and cluster computing approaches.\nScenario I: The cluster is heavily loaded.\nThe average waiting time of a job requesting a large number of CPUs is usually longer than the waiting time of jobs requesting a single CPU.\nTherefore, the overall waiting time could be shorter in the Grid approach, which requests a single CPU for each sub-job many times, compared to the MPI implementation, which requests a large number 
of CPUs at a single time. Grid scheduling is thus beneficial especially when the cluster is heavily loaded and the number of CPUs requested for the MPI job is not readily available.

Scenario II: The cluster is lightly loaded or largely available. Here the MPI implementation appears favorable compared to Grid scheduling. However, the parallelizability of the EnKF application depends on the number of ECLIPSE licenses; ideally, the number of licenses should equal the number of models in the ensemble. Therefore, if a single institution does not have a sufficient number of licenses, cluster availability does not help as much as expected. Since a collaborative environment such as TIGRE can address both the compute and the software resource requirements of the EnKF application, the Grid-enabled approach remains advantageous over the conventional MPI implementation in either scenario.

5. CONCLUSIONS AND FUTURE WORK

TIGRE is a higher education Grid development project whose purpose is to sustain and extend research and educational opportunities across Texas. Within the energy exploration application area, we have Grid-enabled the MPI implementation of the ensemble Kalman filter data assimilation methodology for reservoir characterization. This was accomplished by removing the MPI code for parallel processing and replacing it with single-processor jobs, one for each geological model in the ensemble. These single-processor jobs were scheduled across TIGRE via the GridWay metascheduler. We have demonstrated that by pooling licenses across TIGRE sites, more geological models can be handled in parallel, and therefore conceivably better simulation accuracy can be achieved. This approach has several advantages over the MPI implementation, especially when a site-specific cluster is heavily loaded and/or the number of licenses required for the simulation exceeds those available at a single site. As future work, it would be interesting to compare
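The queueing intuition behind Scenario I can be illustrated with a deliberately simplistic sketch (our own illustration, not from the paper): the MPI job must find all of its CPUs free at once, while single-CPU sub-jobs can start as slots trickle in. Here `free_over_time[t]` is treated as the number of fresh single-CPU slots opening at step t, an assumption made purely for illustration.

```python
# Toy model (our illustration, not from the paper) of the Scenario I
# argument: on a loaded cluster, an MPI job must wait until N CPUs are
# simultaneously free, while N independent single-CPU sub-jobs can
# start as slots open up one by one.

def mpi_start_time(n_cpus_needed, free_over_time):
    """Earliest step at which the MPI job finds enough CPUs at once."""
    for t, free in enumerate(free_over_time):
        if free >= n_cpus_needed:
            return t
    return None

def grid_last_start_time(n_subjobs, free_over_time):
    """Step by which the last of the single-CPU sub-jobs has started."""
    started = 0
    for t, free in enumerate(free_over_time):
        started += min(free, n_subjobs - started)
        if started >= n_subjobs:
            return t
    return None

free = [1, 2, 3, 4, 5, 10]             # slots opening on a busy cluster
print(mpi_start_time(10, free))        # 5: waits for 10 simultaneous CPUs
print(grid_last_start_time(10, free))  # 3: sub-jobs drip through earlier
```

On this trace the last sub-job starts before the MPI job can even begin, matching the Scenario I reasoning; under Scenario II (a lightly loaded cluster) both quantities collapse to the first step.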
the runtime between the MPI and Grid implementations of the EnKF application. This effort could shed light on the quality of service (QoS) of Grid environments in comparison with cluster computing. Another aspect of interest in the near future would be managing both compute and license resources to address the job (or processor)-to-license ratio.

6. OBSERVATIONS AND LESSONS LEARNED

The Grid-enabling efforts for the EnKF application have provided ample opportunities to gather insights on the visibility and promise of Grid computing environments for application development and support. The main issues are industry-standard data security and QoS comparable to cluster computing. Since reservoir modeling research involves proprietary field data, we initially had to invest substantial effort in educating the application researchers on the ability of Grid services to support industry-standard data security through role- and privilege-based access using the X.509 standard. With respect to QoS, application researchers expect cluster-level QoS from Grid environments. There is also a steep learning curve in Grid computing compared to conventional cluster computing. Because Grid computing is still an emerging technology that spans several administrative domains, it remains immature, especially in terms of QoS, although it offers better data security standards than commodity clusters. It is our observation that training and outreach programs that compare and contrast Grid and cluster computing environments would be a suitable approach for enhancing user participation in Grid computing. Such programs would also help users match their applications to the capabilities Grids can offer. In summary, our efforts through TIGRE in Grid-enabling the EnKF data assimilation methodology showed substantial promise in engaging Petroleum Engineering researchers through intercampus collaborations. Efforts are under way to
involve more schools in this effort. These efforts may result in increased collaborative research, educational opportunities, and workforce development through graduate/faculty research programs across TIGRE Institutions.

7. ACKNOWLEDGMENTS

The authors acknowledge the State of Texas for supporting the TIGRE project through the Texas Enterprise Fund, and the TIGRE Institutions for providing the mechanism in which the authors (Ravi Vadapalli, Taesung Kim, and Ping Luo) are also participating. The authors thank the application researchers Prof. Akhil Datta-Gupta of Texas A&M University and Prof. Lloyd Heinze of Texas Tech University for their discussions and interest in exploiting the TIGRE environment to extend opportunities in research and development.

8. REFERENCES

[1] Foster, I. and Kesselman, C. (eds.) 2004. The Grid: Blueprint for a New Computing Infrastructure (The Elsevier Series in Grid Computing).
[2] TIGRE Portal: http://tigreportal.hipcat.net
[3] Vadapalli, R., Sill, A., Dooley, R., Murray, M., Luo, P., Kim, T., Huang, M., Thyagaraja, K., and Chaffin, D. 2007. Demonstration of TIGRE environment for Grid enabled/suitable applications. 8th IEEE/ACM Int. Conf. on Grid Computing, Sept 19-21, Austin.
[4] The High Performance Computing across Texas Consortium: http://www.hipcat.net
[5] Pordes, R., Petravick, D., Kramer, B., Olson, D., Livny, M., Roy, A., Avery, P., Blackburn, K., Wenaus, T., Würthwein, F., Foster, I., Gardner, R., Wilde, M., Blatecky, A., McGee, J., and Quick, R. 2007. The Open Science Grid. J. Phys. Conf. Series, http://www.iop.org/EJ/abstract/1742-6596/78/1/012057 and http://www.opensciencegrid.org
[6] Reed, D.A. 2003. Grids, the TeraGrid and Beyond. Computer, vol. 30, no. 1, and http://www.teragrid.org
[7] Evensen, G. 2006. Data Assimilation: The Ensemble Kalman Filter. Springer.
[8] Herrera, J., Huedo, E., Montero, R. S., and Llorente, I. M. 2005. Scientific Programming, vol. 12, no. 4, pp. 317-331.
[9] Avery, P. and Foster, I.
2001. The GriPhyN project: Towards petascale virtual data grids. Technical report GriPhyN-2001-15, and http://vdt.cs.wisc.edu
[10] The PacMan documentation and installation guide: http://physics.bu.edu/pacman/htmls
[11] Caskey, P., Murray, M., Perez, J., and Sill, A. 2007. Case studies in identity management for virtual organizations. EDUCAUSE Southwest Reg. Conf., Feb 21-23, Austin, TX. http://www.educause.edu/ir/library/pdf/SWR07058.pdf
[12] The Grid User Management System (GUMS): https://www.racf.bnl.gov/Facility/GUMS/index.html
[13] Thomas, M. and Boisseau, J. 2003. Building grid computing portals: The NPACI grid portal toolkit. In Grid Computing: Making the Global Infrastructure a Reality, Chapter 28, Berman, F., Fox, G., and Hey, T. (eds.), John Wiley and Sons, Ltd, Chichester.
[14] Open Ticket Request System: http://otrs.org
[15] The MoinMoin Wiki Engine: http://moinmoin.wikiwikiweb.de
[16] Vasco, D.W., Yoon, S., and Datta-Gupta, A. 1999. Integrating dynamic data into high resolution reservoir models using streamline-based analytic sensitivity coefficients. Society of Petroleum Engineers (SPE) Journal, 4 (4).
[17] Emanuel, A. S. and Milliken, W. J. 1998. History matching finite difference models with 3D streamlines. SPE 49000, Proc. of the Annual Technical Conf. and Exhibition, Sept 27-30, New Orleans, LA.
[18] Nævdal, G., Johnsen, L.M., Aanonsen, S.I., and Vefring, E.H. 2003. Reservoir monitoring and continuous model updating using ensemble Kalman filter. SPE 84372, Proc. of the Annual Technical Conf. and Exhibition, Oct 5-8, Denver, CO.
[19] Jafarpour, B. and McLaughlin, D.B. 2007. History matching with an ensemble Kalman filter and discrete cosine parameterization. SPE 108761, Proc. of the Annual Technical Conf. and Exhibition, Nov 11-14, Anaheim, CA.
[20] Li, G. and Reynolds, A. C.
2007. An iterative ensemble Kalman filter for data assimilation. SPE 109808, Proc. of the SPE Annual Technical Conf. and Exhibition, Nov 11-14, Anaheim, CA.
[21] Arroyo-Negrete, E., Devagowda, D., and Datta-Gupta, A. 2006. Streamline assisted ensemble Kalman filter for rapid and continuous reservoir model updating. Proc. of the Int. Oil & Gas Conf. and Exhibition, SPE 104255, Dec 5-7, China.
[22] ECLIPSE Reservoir Engineering Software: http://www.slb.com/content/services/software/reseng/index.asp

Demonstration of Grid-Enabled Ensemble Kalman Filter Data Assimilation Methodology for Reservoir Characterization

ABSTRACT

Ensemble Kalman filter data assimilation methodology is a popular approach for hydrocarbon reservoir simulations in energy exploration. In this approach, an ensemble of geological models and production data of oil fields is used to forecast the dynamic response of oil wells. The Schlumberger ECLIPSE software is used for these simulations. Since the models in the ensemble do not communicate with one another, a message-passing implementation is a good choice. Each model checks out an ECLIPSE license, and therefore the parallelizability of reservoir simulations depends on the number of licenses available. We have Grid-enabled the ensemble Kalman filter data assimilation methodology for the TIGRE Grid computing environment. By pooling the licenses and computing resources across the collaborating institutions using the GridWay metascheduler and the TIGRE environment, the computational accuracy can be increased while reducing the simulation runtime. In this paper, we provide an account of our efforts in Grid-enabling the ensemble Kalman filter data assimilation methodology. Potential benefits of this approach, observations, and lessons learned are discussed.

1. INTRODUCTION

Grid computing [1] is an emerging "collaborative" computing paradigm that extends institution/organization-specific high performance computing (HPC) capabilities greatly beyond local resources. Its importance stems from the fact that ground-breaking research in strategic application areas such as bioscience and medicine, energy exploration, and environmental modeling involves strong interdisciplinary components and often requires intercampus collaborations and computational capabilities beyond institutional limitations. The Texas Internet Grid for Research and Education (TIGRE) [2,3] is a state funded
cyberinfrastructure development project carried out by five major university systems (Rice, A&M, TTU, UH, and UT Austin), collectively called the TIGRE Institutions. The purpose of TIGRE is to create a higher education Grid to sustain and extend research and educational opportunities across Texas. TIGRE is a project of the High Performance Computing across Texas (HiPCAT) [4] consortium, whose goal is to support advanced computational technologies that enhance research, development, and educational activities. The primary goal of TIGRE is to design and deploy state-of-the-art Grid middleware that enables integration of computing systems, storage systems and databases, visualization laboratories and displays, and even instruments and sensors across Texas. The secondary goal is to demonstrate the TIGRE capabilities to enhance research and educational opportunities in strategic application areas of interest to the State of Texas: bioscience and medicine, energy exploration, and air quality modeling. The vision of the TIGRE project is to foster interdisciplinary and intercampus collaborations, identify novel approaches to extend academic-government-private partnerships, and become a competitive model for external funding opportunities. The overall goal of TIGRE is to support local, campus, and regional user interests and offer avenues to connect with national Grid projects such as the Open Science Grid [5] and TeraGrid [6].

Within the energy exploration strategic application area, we have Grid-enabled the ensemble Kalman filter (EnKF) [7] approach for data assimilation in reservoir modeling and demonstrated the extensibility of the application using the TIGRE environment and the GridWay [8] metascheduler. Section 2 provides an overview of the TIGRE environment and capabilities. The application description and the need for Grid-enabling the EnKF methodology are provided in Section 3. The implementation details and merits of our approach are discussed in Section 4. Conclusions are provided in Section 5. Finally, observations and lessons learned are documented in Section 6.

2. TIGRE ENVIRONMENT

The TIGRE Grid middleware consists of a minimal set of components derived from a subset of the Virtual Data Toolkit (VDT) [9], which supports a variety of operating systems. The purpose of choosing a minimal software stack is to support the applications at hand and to simplify installation and distribution of client/server stacks across TIGRE sites. Additional components will be added as they become necessary. The PacMan [10] packaging and distribution mechanism is employed for TIGRE client/server installation and management. The PacMan distribution mechanism involves retrieval, installation, and often configuration of the packaged software. This approach allows the clients to keep current, consistent versions of TIGRE software. It also helps TIGRE sites to install the needed components on resources distributed throughout the participating sites. The TIGRE client/server stack consists of an authentication and authorization layer and Globus GRAM4-based job submission via web services (pre-web services installations are available upon request). Tools for Grid proxy generation, Grid-enabled file transfer, and Grid-enabled remote login are supported. The pertinent details of TIGRE services and tools for job scheduling and management are provided below.

2.1. Certificate Authority

The TIGRE security infrastructure includes a certificate authority (CA) accredited by the International Grid Trust Federation (IGTF) for issuing X.509 user and resource Grid certificates [11]. The Texas Advanced Computing Center (TACC), University of Texas at Austin, is TIGRE's shared CA. The TIGRE Institutions serve as Registration Authorities (RA) for their respective local user bases. For up-to-date information on securing user and resource certificates and their installation instructions, see ref [2]. The users and hosts on TIGRE are identified by the distinguished name (DN) in their X.509 certificate provided by the CA. A native Grid-mapfile that contains a list of authorized DNs is used to authenticate and authorize user job scheduling and management on TIGRE site resources. At Texas Tech University, users are dynamically allocated one of many generic pool accounts. This is accomplished through the Grid User Management System (GUMS) [12].

2.2. Job Scheduling and Management

The TIGRE environment supports GRAM4-based job submission via web services. The job submission scripts are generated using XML. The web services GRAM translates the XML scripts into target-cluster-specific batch schedulers such as LSF, PBS, or SGE. High-bandwidth file transfer protocols such as GridFTP are utilized for staging files in and out of the target machine. Login to remote hosts for compilation and debugging is only through the GSISSH service, which requires resource authentication through X.509 certificates. The authentication and authorization of Grid jobs are managed by issuing Grid certificates to both users and hosts. The certificate revocation lists (CRL) are updated on a daily basis to maintain the high security standards of the TIGRE Grid services. The TIGRE portal [2] documentation area provides a quick-start tutorial on running jobs on TIGRE.

2.3. Metascheduler

The metascheduler interoperates with the cluster-level batch schedulers (such as LSF and PBS) in the overall Grid workflow management. In the present work, we have employed the GridWay [8] metascheduler, a Globus incubator project, to schedule and manage jobs across TIGRE. GridWay is a light-weight metascheduler that fully utilizes Globus functionalities. It is designed to provide efficient use of dynamic Grid resources by multiple users for Grid infrastructures built on top of Globus services. The TIGRE site administrator can control resource sharing through a powerful built-in scheduler provided by GridWay, or by extending GridWay's external scheduling module to provide their own scheduling policies. Application users can write job descriptions using GridWay's simple and direct job template format or the standard Job Submission Description Language (JSDL); see Section 4 for implementation details.

2.4. Customer Service Management System

A TIGRE portal [2] was designed and deployed to interface users and resource providers. It was designed using GridPort [13] and is maintained by TACC. The TIGRE environment is supported by open source tools such as the Open Ticket Request System (OTRS) [14] for servicing trouble tickets, and the MoinMoin [15] Wiki for TIGRE content and knowledge management for education, outreach, and training. The links for OTRS and the Wiki are consumed by the TIGRE portal [2], the gateway for users and resource providers. The TIGRE resource status and loads are monitored by the Grid Port Information Repository (GPIR) service of the
GridPort toolkit [13], which interfaces with local cluster load monitoring services such as Ganglia. GPIR utilizes "cron" jobs on each resource to gather site-specific resource characteristics, such as jobs that are running, queued, and waiting for resource allocation.

3. ENSEMBLE KALMAN FILTER APPLICATION

The main goal of hydrocarbon reservoir simulations is to forecast the production behavior of an oil and gas field (denoted as the field hereafter) for its development and optimal management. In reservoir modeling, the field is divided into several geological models, as shown in Figure 1. For accurate performance forecasting of the field, it is necessary to reconcile the geological models to the dynamic response of the field through history matching [16-20].

Figure 1. Cross-sectional view of the field. Vertical layers correspond to different geological models, and the nails are oil wells whose historical information is used for forecasting the production behavior. (Figure ref: http://faculty.smu.edu/zchen/research.html)

The EnKF is a Monte Carlo method that works with an ensemble of reservoir models. The method utilizes cross-covariances [21] between the field measurements and the reservoir model parameters (derived from several models) to estimate prediction uncertainties. The geological model parameters in the ensemble are sequentially updated with the goal of minimizing the prediction uncertainties. Historical production response of the field for over 50 years is used in these simulations. The main advantage of the EnKF is that it can be readily linked to any reservoir simulator and can assimilate the latest production data without the need to re-run the simulator from initial conditions. Researchers in Texas are large subscribers of the Schlumberger ECLIPSE [22] package for reservoir simulations. In reservoir modeling, each geological model checks out an ECLIPSE license. The simulation runtime of the EnKF methodology depends on the number of
geological models used, the number of ECLIPSE licenses available, the production history of the field, and the propagated uncertainties in history matching. The overall EnKF workflow is shown in Figure 2.

Figure 2. Ensemble Kalman filter data assimilation workflow. Each site has L licenses.

At START, the master/control process (the EnKF main program) reads the simulation configuration file for the number (N) of models and the model-specific input files. Then, N working directories are created to store the output files. At the end of an iteration, the master/control process collects the output files from the N models and post-processes cross-covariances [21] to estimate the prediction uncertainties. This information is used to update the models (or input files) for the next iteration. The simulation continues until the production histories are exhausted. A typical EnKF simulation with N = 50 and field histories of 50-60 years, in time steps ranging from three months to a year, takes about three weeks in a serial computing environment.

In a parallel computing environment, there is no interprocess communication between the geological models in the ensemble. However, at the end of each simulation time step, the model-specific output files must be collected for analyzing cross-covariances [21] and preparing the next set of input files. Therefore, a master-slave model in a message-passing (MPI) environment is a suitable paradigm. In this approach, the geological models are treated as slaves and are distributed across the available processors. The master process collects the model-specific output files, analyzes them, and prepares the next set of input files for the simulation. Since each geological model checks out an ECLIPSE license, the parallelizability of the simulation depends on the number of licenses available. When the number of available licenses is less than the number of models in the ensemble, one or more of the nodes in the MPI group have to handle more than one model in a serial fashion and therefore,
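The analysis step described above (collect model outputs, estimate cross-covariances, update the ensemble) can be sketched as follows. This is a generic stochastic-EnKF update, not the paper's code: the variable names are ours, and a linear stub stands in for the ECLIPSE forward simulations.

```python
import numpy as np

# Hedged sketch of the EnKF analysis step: update an ensemble of
# model-parameter vectors using cross-covariances between parameters
# and predicted measurements (standard stochastic EnKF form).
def enkf_update(M, Y, d_obs, obs_err_std, rng):
    """M: (n_params, N) ensemble of model parameters, one column per model
       Y: (n_obs, N) simulated production responses for each model
       d_obs: (n_obs,) observed production data for this time step"""
    n_obs, N = Y.shape
    A = M - M.mean(axis=1, keepdims=True)      # parameter anomalies
    B = Y - Y.mean(axis=1, keepdims=True)      # response anomalies
    C_my = A @ B.T / (N - 1)                   # cross-covariance [21]
    C_yy = B @ B.T / (N - 1)                   # response covariance
    R = (obs_err_std ** 2) * np.eye(n_obs)     # observation error cov.
    K = C_my @ np.linalg.inv(C_yy + R)         # Kalman gain
    # Perturb observations, one realization per ensemble member
    D = d_obs[:, None] + obs_err_std * rng.standard_normal((n_obs, N))
    return M + K @ (D - Y)                     # updated ensemble

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 50))               # 3 parameters, 50 models
Y = 2.0 * M[:2] + 0.1 * rng.standard_normal((2, 50))  # stub "simulator"
M_new = enkf_update(M, Y, np.array([1.0, -0.5]), 0.1, rng)
print(M_new.shape)  # (3, 50)
```

In the Grid-enabled workflow, the columns of Y would come back from the N independent sub-jobs at each time step, and M_new would be written out as the next set of model-specific input files.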
it takes longer to complete the simulation. A Petroleum Engineering department usually procures 10-15 ECLIPSE licenses, while at least a ten-fold increase in the number of licenses would be necessary for industry-standard simulations. The number of licenses can be increased by involving several Petroleum Engineering departments that support the ECLIPSE package. Since MPI does not scale very well for applications that involve remote compute clusters, and to get around firewall issues with license servers across administrative domains, Grid-enabling the EnKF workflow appears necessary. With this motivation, we have implemented the Grid-enabled EnKF workflow for the TIGRE environment and demonstrated the parallelizability of the application across TIGRE using the GridWay metascheduler. Further details are provided in the next section.

4. IMPLEMENTATION DETAILS

To Grid-enable the EnKF approach, we eliminated the MPI code for parallel processing and replaced it with N single-processor jobs (or sub-jobs), where N is the number of geological models in the ensemble. These model-specific sub-jobs were distributed across the TIGRE sites that support the ECLIPSE package using the GridWay [8] metascheduler. For each sub-job, we constructed a GridWay job template that specifies the executable, the input and output files, and the resource requirements. Since the TIGRE compute resources are not expected to change frequently, we used a static resource discovery policy for GridWay, and the sub-jobs were scheduled dynamically across the TIGRE resources. Figure 3 shows the sub-job template file for the GridWay metascheduler.

Figure 3. GridWay sub-job template.

In Figure 3, the REQUIREMENTS flag is set to choose the resources that satisfy the application requirements; in the case of the EnKF application, for example, we need resources that support the ECLIPSE package. The ARGUMENTS flag specifies the model in the ensemble that will invoke ECLIPSE at a remote site. INPUT_FILES is
prepared by the EnKF main program (or master/control process) and is transferred by GridWay to the remote site, where it is untarred and prepared for execution. Finally, OUTPUT_FILES specifies the name and location where the output files are to be written. The command-line features of GridWay were used to collect and process the model-specific outputs and to prepare the next set of input files. This step mimics MPI process synchronization in the master-slave model. At the end of each iteration, the compute resources and licenses are committed back to the pool.

Table 1 shows the sub-jobs in the TIGRE Grid via GridWay using the "gwps" command; for clarity, only selected columns are shown.

Table 1. Job scheduling across TIGRE using the GridWay metascheduler. DM: dispatch state; EM: execution state; JID is the job id; HOST corresponds to a site-specific cluster and its local batch scheduler.

When a job is submitted to GridWay, it goes through a series of dispatch (DM) and execution (EM) states. For DM, the states include pend(ing), prol(og), wrap(per), epil(og), and done. DM=prol means the job has been scheduled to a resource and the remote working directory is in preparation. DM=wrap implies that GridWay is executing the wrapper, which in turn executes the application. DM=epil implies the job has finished running at the remote site and results are being transferred back to the GridWay server. Similarly, EM=pend implies the job is waiting in the queue for a resource, and EM=actv implies the job is running. For a complete list of message flags and their descriptions, see the documentation in ref [8].

We have demonstrated Grid-enabled EnKF runs using GridWay in the TIGRE environment. The jobs were chosen so that the runtime does not exceed half an hour. The simulation runs involved up to 20 jobs between the A&M and TTU sites, with TTU serving 10 licenses. For resource information, see Table 1. One of the main advantages of
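The templates differ from one sub-job to the next only in the model index carried by ARGUMENTS and the file names. A template for the second model in the ensemble might look like the following (the index 002 and its file names are our illustration of the pattern, not reproduced from the paper):

```
EXECUTABLE=runFORWARD
REQUIREMENTS=HOSTNAME=cosmos.tamu.edu | HOSTNAME=antaeus.hpcc.ttu.edu | HOSTNAME=minigar.hpcc.ttu.edu
ARGUMENTS=002
INPUT_FILES=002.in.tar
OUTPUT_FILES=002.out.tar
```

The master process can thus generate all N templates from one pattern, submit each with GridWay, and collect the matching OUTPUT_FILES archives at the end of the time step.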
Grid-enabled EnKF simulation is that both the resources and licenses are released back to the pool at the end of each simulation time step, unlike in the MPI implementation, where licenses and nodes are locked until the completion of the entire simulation. However, because each sub-job gets scheduled independently via GridWay, the Grid approach could incur an additional delay from waiting in the queue at each simulation time step. Such delays are not expected in the MPI implementation, where the nodes are blocked for processing sub-jobs (model-specific calculations) until the end of the simulation. There are two main scenarios for comparing the Grid and cluster computing approaches.

Scenario I: The cluster is heavily loaded. The average waiting time of a job requesting a large number of CPUs is usually longer than the waiting time of jobs requesting a single CPU. Therefore, the overall waiting time could be shorter for the Grid approach, which requests a single CPU for each sub-job many times, than for the MPI implementation, which requests a large number of CPUs at a single time. Grid scheduling is beneficial especially when the cluster is heavily loaded and the number of CPUs requested for the MPI job is not readily available.

Scenario II: The cluster is relatively less loaded or largely available. Here the MPI implementation appears favorable compared to Grid scheduling. However, the parallelizability of the EnKF application depends on the number of ECLIPSE licenses, and ideally the number of licenses should equal the number of models in the ensemble. Therefore, if a single institution does not have a sufficient number of licenses, cluster availability does not help as much as expected. Since a collaborative environment such as TIGRE can address both the compute and software resource requirements of the EnKF application, the Grid-enabled approach is still advantageous over the conventional MPI implementation in either of the above scenarios.
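The waiting-time argument in Scenario I can be made concrete with a toy calculation; the CPU free-up schedule and job sizes below are invented, and the model considers only queue waiting, nothing else:

```python
# Toy model of Scenario I: on a busy cluster, CPUs become free one at a
# time, so an n-CPU MPI job waits until the n-th CPU frees up, while n
# single-CPU sub-jobs can each start as soon as any one CPU frees.

def mpi_wait(free_times, n):
    """Earliest time at which n CPUs are simultaneously free."""
    return sorted(free_times)[n - 1]

def subjob_waits(free_times, n):
    """Start time of each sub-job: the next CPU that becomes free."""
    return sorted(free_times)[:n]

free = [5 * k for k in range(1, 11)]  # CPUs free up at 5, 10, ..., 50 minutes
n = 10

print(mpi_wait(free, n))               # 50: the MPI job waits for all ten CPUs
print(sum(subjob_waits(free, n)) / n)  # 27.5: average sub-job waiting time
```

Of course, this ignores the per-step queueing that the Grid approach pays again at every iteration, which is exactly the trade-off discussed above.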
5. CONCLUSIONS AND FUTURE WORK

TIGRE is a higher education Grid development project whose purpose is to sustain and extend research and educational opportunities across Texas. Within the energy exploration application area, we have Grid-enabled the MPI implementation of the ensemble Kalman filter data assimilation methodology for reservoir characterization. This task was accomplished by removing the MPI code for parallel processing and replacing it with single-processor jobs, one for each geological model in the ensemble. These single-processor jobs were scheduled across TIGRE via the GridWay metascheduler. We have demonstrated that by pooling licenses across TIGRE sites, more geological models can be handled in parallel, conceivably yielding better simulation accuracy. This approach has several advantages over the MPI implementation, especially when a site-specific cluster is heavily loaded and/or the number of licenses required for the simulation exceeds those available at a single site.

As future work, it would be interesting to compare the runtimes of the MPI and Grid implementations of the EnKF application. This effort could shed light on the quality of service (QoS) of Grid environments in comparison with cluster computing. Another aspect of interest in the near future would be managing both compute and license resources to address job (or processor)-to-license ratio management.

6. OBSERVATIONS AND LESSONS LEARNED

The Grid-enabling efforts for the EnKF application have provided ample opportunities to gather insights into the viability and promise of Grid computing environments for application development and support. The main issues are industry-standard data security and QoS comparable to that of cluster computing. Since the reservoir modeling research involves proprietary field data, we had to invest substantial effort initially in educating the application researchers on the ability of Grid services in supporting the industry
standard data security through role- and privilege-based access using the X.509 standard. With respect to QoS, application researchers expect "cluster"-level QoS from Grid environments. Also, there is a steep learning curve in Grid computing compared to conventional "cluster" computing. Since Grid computing is still an "emerging" technology that spans several administrative domains, it remains premature especially in terms of the level of QoS, although it offers better data security standards than commodity clusters. It is our observation that training and outreach programs that compare and contrast the Grid and cluster computing environments would be a suitable approach for enhancing user participation in Grid computing. This approach also helps users match their applications to the capabilities that Grids can offer.

In summary, our efforts through TIGRE in Grid-enabling the EnKF data assimilation methodology showed substantial promise in engaging Petroleum Engineering researchers through intercampus collaborations. Efforts are under way to involve more schools in this effort. These efforts may result in increased collaborative research, educational opportunities, and workforce development through graduate/faculty research programs across TIGRE institutions.

Expressive Negotiation over Donations to Charities*

ABSTRACT

When donating money to a (say, charitable) cause, it is possible to use the contemplated donation as negotiating material to induce other parties interested in the charity to donate
more. Such negotiation is usually done in terms of matching offers, where one party promises to pay a certain amount if others pay a certain amount. However, in their current form, matching offers allow for only limited negotiation. For one, it is not immediately clear how multiple parties can make matching offers at the same time without creating circular dependencies. Also, it is not immediately clear how to make a donation conditional on other donations to multiple charities, when the donator has different levels of appreciation for the different charities. In both these cases, the limited expressiveness of matching offers causes economic loss: it may happen that an arrangement that would have made all parties (donators as well as charities) better off cannot be expressed in terms of matching offers and will therefore not occur. In this paper, we introduce a bidding language for expressing very general types of matching offers over multiple charities. We formulate the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and show that it is NP-complete to approximate to any ratio even in very restricted settings. We give a mixed-integer program formulation of the clearing problem, and show that for concave bids, the program reduces to a linear program. We then show that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming. Subsequently, we show that the clearing problem is much easier when bids are quasilinear: for surplus, the problem decomposes across charities, and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave). For the quasilinear setting, we study the mechanism design question. We show that an ex-post efficient mechanism is impossible even with only one charity and a very restricted class of bids.
We also show that there may be benefits to linking the charities from a mechanism design standpoint.

Vincent Conitzer (conitzer@cs.cmu.edu) and Tuomas Sandholm (sandholm@cs.cmu.edu), Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA

*Supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678.

Categories and Subject Descriptors: F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences - Economics

General Terms: Algorithms, Economics, Theory

1. INTRODUCTION

When money is donated to a charitable (or other) cause (hereafter referred to as charity), often the donating party gives unconditionally: a fixed amount is transferred from the donator to the charity, and none of this transfer is contingent on other events; in particular, it is not contingent on the amount given by other parties. Indeed, this is currently often the only way to make a donation, especially for small donating parties such as private individuals. However, when multiple parties support the same charity, each of them would prefer to see the others give more rather than less to this charity. In such scenarios, it is sensible for a party to use its contemplated donation as negotiating material to induce the others to give more. This is done by making the donation conditional on the others' donations. The following example will illustrate this, and show that the donating parties as well as the charitable cause may simultaneously benefit from the potential
for such negotiation. Suppose we have two parties, 1 and 2, who are both supporters of charity A. To either of them, it would be worth $0.75 if A received $1. It follows that neither of them will be willing to give unconditionally, because $0.75 < $1. However, if the two parties draw up a contract that says that they will each give $0.5, both parties have an incentive to accept this contract (rather than have no contract at all): with the contract, the charity will receive $1 (rather than $0 without a contract), which is worth $0.75 to each party, which is greater than the $0.5 that that party will have to give. Effectively, each party has made its donation conditional on the other party's donation, leading to larger donations and greater happiness to all parties involved.

One method that is often used to effect this is to make a matching offer. Examples of matching offers are: "I will give x dollars for every dollar donated," or "I will give x dollars if the total collected from other parties exceeds y." In our example above, one of the parties can make the offer "I will donate $0.5 if the other party also donates at least that much," and the other party will have an incentive to indeed donate $0.5, so that the total amount given to the charity increases by $1. Thus this matching offer implements the contract suggested above. As a real-world example, the United States government has authorized a donation of up to $1 billion to the Global Fund to Fight AIDS, TB and Malaria, under the condition that the American contribution does not exceed one third of the total, to encourage other countries to give more [23]. However, there are several severe limitations to the simple approach of matching offers as just described.

1. It is not clear how two parties can make matching offers where each party's offer is stated in terms of the amount that the other pays. (For example, it is not clear what the outcome should be when both parties offer to match the other's
donation.) Thus, matching offers can only be based on payments made by parties that are giving unconditionally (not in terms of a matching offer), or at least there can be no circular dependencies.¹

2. Given the current infrastructure for making matching offers, it is impractical to make a matching offer depend on the amounts given to multiple charities. For instance, a party may wish to specify that it will pay $100 given that charity A receives a total of $1000, but that it will also count donations made to charity B, at half the rate. (Thus, a total payment of $500 to charity A combined with a total payment of $1000 to charity B would be just enough for the party's offer to take effect.)

In contrast, in this paper we propose a new approach where each party can express its relative preferences for different charities, and make its offer conditional on its own appreciation for the vector of donations made to the different charities. Moreover, the amount the party offers to donate at different levels of appreciation is allowed to vary arbitrarily (it does not need to be a dollar-for-dollar (or n-dollar-for-dollar) matching arrangement, or an arrangement where the party offers a fixed amount provided a given (strike) total has been exceeded). Finally, there is a clear interpretation of what it means when multiple parties are making conditional offers that are stated in terms of each other. Given each combination of (conditional) offers, there is a (usually) unique solution which determines how much each party pays, and how much each charity is paid. However, as we will show, finding this solution (the clearing problem) requires solving a potentially difficult optimization problem. A large part of this paper is devoted to studying how difficult this problem is under different assumptions on the structure of the offers, and to providing algorithms for solving it.

¹Typically, larger organizations match offers of private individuals. For example, the American Red
Cross Liberty Disaster Fund maintains a list of businesses that match their customers' donations [8].

Towards the end of the paper, we also study the mechanism design problem of motivating the bidders to bid truthfully. In short, expressive negotiation over donations to charities is a new way in which electronic commerce can help the world. A web-based implementation of the ideas described in this paper can facilitate voluntary reallocation of wealth on a global scale. Additionally, optimally solving the clearing problem (and thereby generating the maximum economic welfare) requires the application of sophisticated algorithms.

2. COMPARISON TO COMBINATORIAL AUCTIONS AND EXCHANGES

This section discusses the relationship between expressive charity donation and combinatorial auctions and exchanges. It can be skipped, but may be of interest to the reader with a background in combinatorial auctions and exchanges. In a combinatorial auction, there are m items for sale, and bidders can place bids on bundles of one or more items. The auctioneer subsequently labels each bid as winning or losing, under the constraint that no item can be in more than one winning bid, to maximize the sum of the values of the winning bids. (This is known as the clearing problem.) Variants include combinatorial reverse auctions, where the auctioneer is seeking to procure a set of items; and combinatorial exchanges, where bidders can both buy and sell items (even within the same bid). Other extensions include allowing for side constraints, as well as the specification of attributes of the items in bids. Combinatorial auctions and exchanges have recently become a popular research topic [20, 21, 17, 22, 9, 18, 13, 3, 12, 26, 19, 25, 2]. The problems of clearing expressive charity donation markets and clearing combinatorial auctions or exchanges are very different in formulation. Nevertheless, there are interesting parallels. One of the main reasons for the interest in
combinatorial auctions and exchanges is that they allow for expressive bidding. A bidder can express exactly how much each different allocation is worth to her, and thus the globally optimal allocation may be chosen by the auctioneer. Compare this to a bidder having to bid on two different items in two different (one-item) auctions, without any way of expressing that (for instance) one item is worthless if the other item is not won. In this scenario, the bidder may win the first item but not the second (because there was another high bid on the second item that she did not anticipate), leading to economic inefficiency. Expressive bidding is also one of the main benefits of the expressive charity donation market. Here, bidders can express exactly how much they are willing to donate for every vector of amounts donated to charities. This may allow bidders to negotiate a complex arrangement of who gives how much to which charity, which is beneficial to all parties involved; whereas no such arrangement may have been possible if the bidders had been restricted to using simple matching offers on individual charities. Again, expressive bidding is necessary to achieve economic efficiency. Another parallel is the computational complexity of the clearing problem. In order to achieve the full economic efficiency allowed by the market's expressiveness (or even come close to it), hard computational problems must be solved in combinatorial auctions and exchanges, as well as in the charity donation market (as we will see).

3. DEFINITIONS

Throughout this paper, we will refer to the offers that the donating parties make as bids, and to the donating parties as bidders. In our bidding framework, a bid will specify, for each vector of total payments made to the charities, how much that bidder is willing to contribute. (The contribution of this bidder is also counted in the vector of payments, so the vector of total payments to the charities represents the amount given by
all donating parties, not just the ones other than this bidder.) The bidding language is expressive enough that no bidder should have to make more than one bid. The following definition makes the general form of a bid in our framework precise.

Definition 1. In a setting with m charities $c_1, c_2, \ldots, c_m$, a bid by bidder $b_j$ is a function $v_j : \mathbb{R}^m \rightarrow \mathbb{R}$. The interpretation is that if charity $c_i$ receives a total amount of $\pi_{c_i}$, then bidder j is willing to donate (up to) $v_j(\pi_{c_1}, \pi_{c_2}, \ldots, \pi_{c_m})$.

We now define possible outcomes in our model, and which outcomes are valid given the bids that were made.

Definition 2. An outcome is a vector of payments made by the bidders $(\pi_{b_1}, \pi_{b_2}, \ldots, \pi_{b_n})$, and a vector of payments received by the charities $(\pi_{c_1}, \pi_{c_2}, \ldots, \pi_{c_m})$. A valid outcome is an outcome where

1. $\sum_{j=1}^{n} \pi_{b_j} \geq \sum_{i=1}^{m} \pi_{c_i}$ (at least as much money is collected as is given away);

2. For all $1 \leq j \leq n$, $\pi_{b_j} \leq v_j(\pi_{c_1}, \pi_{c_2}, \ldots, \pi_{c_m})$ (no bidder gives more than she is willing to).

Of course, in the end, only one of the valid outcomes can be chosen. We choose the valid outcome that maximizes the objective that we have for the donation process.

Definition 3. An objective is a function from the set of all outcomes to $\mathbb{R}$.² After all bids have been collected, a valid outcome will be chosen that maximizes this objective.

One example of an objective is surplus, given by $\sum_{j=1}^{n} \pi_{b_j} - \sum_{i=1}^{m} \pi_{c_i}$. The surplus could be the profits of a company managing the expressive donation marketplace; but, alternatively, the surplus could be returned to the bidders, or given to the charities. Another objective is total amount donated, given by $\sum_{i=1}^{m} \pi_{c_i}$. (Here, different weights could also be placed on the different charities.) Finding the valid outcome that maximizes the objective is a (nontrivial) computational problem. We will refer to it as the clearing problem. The formal definition follows.

Definition 4 (DONATION-CLEARING). We are given a set of n bids over charities $c_1, c_2, \ldots, c_m$. Additionally, we are given an objective function. We are asked to find an objective-maximizing valid outcome.

How difficult the DONATION-CLEARING problem is depends on the types of bids used and the language in which they are expressed. This is the topic of the next section.

²In general, the objective function may also depend on the bids, but the objective functions under consideration in this paper do not depend on the bids. The techniques presented in this paper will typically generalize to objectives that take the bids into account directly.

4. A SIMPLIFIED BIDDING LANGUAGE

Specifying a general bid in our framework (as defined above) requires being able to specify an arbitrary real-valued function over $\mathbb{R}^m$. Even if we restricted the possible total payment made to each charity to the set $\{0, 1, 2, \ldots, s\}$, this would still require a bidder to specify $(s+1)^m$ values. Thus, we need a bidding language that will allow the bidders to at least specify some bids more concisely. We will specify a bidding language that only represents a subset of all possible bids, which can be described concisely.³

³Of course, our bidding language can be trivially extended to allow for fully expressive bids, by also allowing bids from a fully expressive bidding language, in addition to the bids in our bidding language.

To introduce our bidding language, we will first describe the bidding function as a composition of two functions; then we will outline our assumptions on each of these functions. First, there is a utility function $u_j : \mathbb{R}^m \rightarrow \mathbb{R}$, specifying how much bidder j appreciates a given vector of total donations to the charities. (Note that the way we define a bidder's utility function, it does not take the payments the bidder makes into account.) Then, there is a donation willingness function $w_j : \mathbb{R} \rightarrow \mathbb{R}$, which specifies how much bidder j is willing to pay given her utility for the vector of donations to the charities. We emphasize that this function does not need to be linear, so that utilities should not be thought of as expressible in dollar amounts. (Indeed, when an individual is donating to a large charity, the reason that the individual donates only a bounded amount is typically not decreasing marginal value of the money given to the charity, but rather that the marginal value of a dollar to the bidder herself becomes larger as her budget becomes smaller.) So, we have $w_j(u_j(\pi_{c_1}, \pi_{c_2}, \ldots, \pi_{c_m})) = v_j(\pi_{c_1}, \pi_{c_2}, \ldots, \pi_{c_m})$, and we let the bidder describe her functions $u_j$ and $w_j$ separately. (She will submit these functions as her bid.)

Our first restriction is that the utility that a bidder derives from money donated to one charity is independent of the amount donated to another charity. Thus, $u_j(\pi_{c_1}, \pi_{c_2}, \ldots, \pi_{c_m}) = \sum_{i=1}^{m} u_j^i(\pi_{c_i})$. (We observe that this does not imply that the bid function $v_j$ decomposes similarly, because of the nonlinearity of $w_j$.) Furthermore, each $u_j^i$ must be piecewise linear. An interesting special case which we will study is when each $u_j^i$ is a line: $u_j^i(\pi_{c_i}) = a_j^i \pi_{c_i}$. This special case is justified in settings where the scale of the donations by the bidders is small relative to the amounts the charities receive from other sources, so that the marginal use of a dollar to the charity is not affected by the amount given by the bidders.

The only restriction that we place on the payment willingness functions $w_j$ is that they are piecewise linear. One interesting special case is a threshold bid, where $w_j$ is a step function: the bidder will provide t dollars if her utility exceeds s, and otherwise 0. Another interesting case is when such a bid is partially acceptable: the bidder will provide t dollars if her utility exceeds s; but if her utility is $u < s$, she is still willing to provide $\frac{ut}{s}$ dollars.

One might wonder why, if we are given the bidders' utility functions, we do not simply maximize the sum of the utilities rather than surplus or total donated. There are several reasons. First, because affine transformations do not affect utility functions in a fundamental way, it would be possible for a bidder to inflate her utility by changing its units, thereby making her bid more important for utility maximization purposes. Second, a bidder could simply give a payment willingness function that is 0 everywhere, and have her utility be taken into account in deciding on the outcome, in spite of her not contributing anything.
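The pieces of this bidding language compose naturally in code. The sketch below implements linear per-charity utilities and threshold willingness functions, together with the validity check of Definition 2; the particular slopes, thresholds, and payment vectors are invented for illustration:

```python
# Sketch of the simplified bidding language: bidder j submits a utility
# function u_j (here linear per charity) and a payment willingness
# function w_j (here a threshold/step function); her bid is v_j = w_j(u_j(.)).

def linear_utility(slopes):
    # u_j(pi) = sum_i a_j^i * pi_i, one slope a_j^i per charity
    return lambda pi: sum(a * x for a, x in zip(slopes, pi))

def threshold_willingness(s, t):
    # w_j(u) = t if u >= s, else 0 (a threshold bid)
    return lambda u: t if u >= s else 0.0

def bid_value(u, w, pi):
    # v_j(pi) = w_j(u_j(pi)): the most bidder j will pay at donation vector pi
    return w(u(pi))

def is_valid(payments, pi, bids):
    # Definition 2: total collected covers total given away, and no bidder
    # pays more than her bid allows at pi.
    return (sum(payments) >= sum(pi) and
            all(p <= bid_value(u, w, pi) for p, (u, w) in zip(payments, bids)))

# Two bidders, two charities; both bidders care only about charity 1 and
# make the threshold bid "pay 1 if charity 1 collects at least 2".
bids = [(linear_utility([1.0, 0.0]), threshold_willingness(2.0, 1.0))] * 2
pi = [2.0, 0.0]          # total donations received by the charities
payments = [1.0, 1.0]    # each bidder pays 1

print(bid_value(*bids[0], pi))       # 1.0: the threshold is met
print(is_valid(payments, pi, bids))  # True, with zero surplus
```

This mirrors the two-party example of the introduction: neither threshold bid is met by the empty outcome, but the outcome in which each bidder pays 1 and charity 1 receives 2 is valid.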
5. AVOIDING INDIRECT PAYMENTS

In an initial implementation, the approach of having donations made out to a center, and having the center forward these payments to the charities, may not be desirable. Rather, it may be preferable to have a partially decentralized solution, where the donating parties write out checks to the charities directly according to a solution prescribed by the center. In this scenario, the center merely has to verify that parties are giving the prescribed amounts. Advantages of this include that the center can keep its legal status minimal, as well as that we do not require the donating parties to trust the center to transfer their donations to the charities (or require some complicated verification protocol). It is also a step towards a fully decentralized solution, if this is desirable.

To bring this about, we can still use the approach described earlier. After we clear the market in the manner described before, we know the amount that each donator is supposed to give, and the amount that each charity is supposed to receive. Then, it is straightforward to give some specification of who should give how much to which charity that is consistent with that clearing. Any greedy algorithm that increases the cash flow from any bidder who has not yet paid enough to any charity that has not yet received enough, until either the bidder has paid enough or the charity has received enough, will provide such a specification. (All of this is assuming that $\sum_{b_j} \pi_{b_j} = \sum_{c_i} \pi_{c_i}$. In the case where there is nonzero surplus, that is, $\sum_{b_j} \pi_{b_j} > \sum_{c_i} \pi_{c_i}$, we can distribute this surplus across the bidders by not requiring them to pay the full amount, or across the charities by giving them more than the solution specifies.)

Nevertheless, with this approach, a bidder may have to write out a check to a charity that she does not care for at all. (For example, an environmental activist who was using the system to increase donations to a wildlife preservation fund may be required to write a check to a group supporting a right-wing
political party.)\nThis is likely to lead to complaints and noncompliance with the clearing.\nWe can address this issue by letting each bidder specify explicitly (before the clearing) which charities she would be willing to make a check out to.\nThese additional constraints, of course, may change the optimal solution.\nIn general, checking whether a given centralized solution (with zero surplus) can be accomplished through decentralized payments when there are such constraints can be modeled as a MAX-FLOW problem.\nIn the MAX-FLOW instance, there is an edge from the source node s to each bidder bj, with a capacity of \u03c0bj (as specified in the centralized solution); an edge from each bidder bj to each charity ci that the bidder is willing to donate money to, with a capacity of \u221e; and an edge from each charity ci to the target node t with capacity \u03c0ci (as specified in the centralized solution).\nIn the remainder of this paper, all our hardness results apply even to the setting where there is no constraint on which bidders can pay to which charity (that is, even the problem as it was specified before this section is hard).\nWe also generalize our clearing algorithms to the partially decentralized case with constraints.\n6.\nHARDNESS OF CLEARING THE MARKET In this section, we will show that the clearing problem is completely inapproximable, even when every bidder``s utility function is linear (with slope 0 or 1 in each charity``s payments), each bidder cares either about at most two charities or about all charities equally, and each bidder``s payment willingness function is a step function.\nWe will reduce from MAX2SAT (given a formula in conjunctive normal form (where each clause has two literals) and a target number of satisfied clauses T, does there exist an assignment of truth values to the variables that makes at least T clauses true?)\n, which is NP-complete [7].\nTheorem 1.\nThere exists a reduction from MAX2SAT instances to DONATION-CLEARING 
instances such that 1.\nIf the MAX2SAT instance has no solution, then the only valid outcome is the zero outcome (no bidder pays anything and no charity receives anything); 2.\nOtherwise, there exists a solution with positive surplus.\nAdditionally, the DONATION-CLEARING instances that we reduce to have the following properties: 1.\nEvery ui j is a line; that is, the utility that each bidder derives from any charity is linear; 2.\nAll the ui j have slope either 0 or 1; 3.\nEvery bidder either has at most 2 charities that affect her utility (with slope 1), or all charities affect her utility (with slope 1); 4.\nEvery bid is a threshold bid; that is, every bidder``s payment willingness function wj is a step function.\nProof.\nThe problem is in NP because we can nondeterministically choose the payments to be made and received, and check the validity and objective value of this outcome.\nIn the following, we will represent bids as follows: ({(ck, ak)}, s, t) indicates that uk j (\u03c0ck ) = ak\u03c0ck (this function is 0 for ck not mentioned in the bid), and wj(uj) = t for uj \u2265 s, wj(uj) = 0 otherwise.\nTo show NP-hardness, we reduce an arbitrary MAX2SAT instance, given by a set of clauses K = {k} = {(l1 k, l2 k)} over a variable set V together with a target number of satisfied clauses T, to the following DONATION-CLEARING instance.\nLet the set of charities be as follows.\nFor every literal l \u2208 L, there is a charity cl.\nThen, let the set of bids be as follows.\nFor every variable v, there is a bid bv = ({(c+v, 1), (c\u2212v, 1)}, 2, 1 \u2212 1 4|V | ).\nFor every literal l, there is a bid bl = ({(cl, 1)}, 2, 1).\nFor every clause k = {l1 k, l2 k} \u2208 K, there is a bid bk = ({(cl1 k , 1), (cl2 k , 1)}, 2, 1 8|V ||K| ).\nFinally, there is a single bid that values all charities equally: b0 = ({(c1, 1), (c2, 1), ... 
, (c_m, 1)}, 2|V| + T/(8|V||K|), 1/4 + 1/(16|V||K|)). We show the two instances are equivalent. First, suppose there exists a solution to the MAX2SAT instance. If in this solution l is true, then let π_{c_l} = 2 + T/(8|V|²|K|); otherwise π_{c_l} = 0. Also, the only bids that are not accepted (meaning the threshold is not met) are the b_l where l is false, and the b_k such that both of l_k^1, l_k^2 are false. First we show that no bidder whose bid is accepted pays more than she is willing to. For each b_v, either c_{+v} or c_{−v} receives at least 2, so this bidder's threshold has been met. For each b_l, either l is false and the bid is not accepted, or l is true, c_l receives at least 2, and the threshold has been met. For each b_k, either both of l_k^1, l_k^2 are false and the bid is not accepted, or at least one of them (say l_k^i) is true (that is, k is satisfied), c_{l_k^i} receives at least 2, and the threshold has been met. Finally, because the total amount received by the charities is 2|V| + T/(8|V||K|), b_0's threshold has also been met. The total amount that can be extracted from the accepted bids is at least |V|(1 − 1/(4|V|)) + |V| + T/(8|V||K|) + 1/4 + 1/(16|V||K|) = 2|V| + T/(8|V||K|) + 1/(16|V||K|) > 2|V| + T/(8|V||K|), so there is positive surplus. So there exists a solution with positive surplus to the DONATION-CLEARING instance. Now suppose there exists a nonzero outcome in the DONATION-CLEARING instance. First we show that it is not possible (for any v ∈ V) that both b_{+v} and b_{−v} are accepted. For, this would require that π_{c_{+v}} + π_{c_{−v}} ≥ 4. The bids b_v, b_{+v}, b_{−v} cannot contribute more than 3, so we need at least another 1. It is easily seen that for any other v′, accepting any subset of {b_{v′}, b_{+v′}, b_{−v′}} would require that at least as much is given to c_{+v′} and c_{−v′} as can be extracted from these bids, so this cannot help. Finally, all the other bids combined can contribute at most |K| · 1/(8|V||K|)
+ 1/4 + 1/(16|V||K|) < 1. It follows that we can interpret the outcome in the DONATION-CLEARING instance as a partial assignment of truth values to variables: v is set to true if b_{+v} is accepted, and to false if b_{−v} is accepted. All that is left to show is that this partial assignment satisfies at least T clauses. First we show that if a clause bid b_k is accepted, then either b_{l_k^1} or b_{l_k^2} is accepted (and thus either l_k^1 or l_k^2 is set to true, hence k is satisfied). If b_k is accepted, at least one of c_{l_k^1} and c_{l_k^2} must be receiving at least 1; without loss of generality, say it is c_{l_k^1}, and say l_k^1 corresponds to variable v_k^1 (that is, it is +v_k^1 or −v_k^1). If c_{l_k^1} does not receive at least 2, b_{l_k^1} is not accepted, and it is easy to check that the bids b_{v_k^1}, b_{+v_k^1}, b_{−v_k^1} contribute (at least) 1 less than is paid to c_{+v_k^1} and c_{−v_k^1}. But this is the same situation that we analyzed before, and we know it is impossible. All that remains to show is that at least T clause bids are accepted. We now show that b_0 is accepted. Suppose it is not; then one of the b_v must be accepted. (The solution is nonzero by assumption; if only some b_k are accepted, the total payment from these bids is at most |K| · 1/(8|V||K|) < 1, which is not enough for any bid to be accepted; and if one of the b_l is accepted, then the threshold for the corresponding b_v is also reached.) For this v, the bids b_v, b_{+v}, b_{−v} contribute (at least) 1/(4|V|) less than the total payments to c_{+v} and c_{−v}.
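For concreteness, the instance built by this reduction can be sketched in code. This is a minimal sketch, with encoding conventions chosen here for illustration only (not in the original): signed integers name the literal charities c_l, a dict maps each charity to its utility slope, and each bid is the (charities, s, t) triple from the proof.

```python
def max2sat_to_donation_clearing(num_vars, clauses):
    """Build the bids b_v, b_{+v}, b_{-v}, and b_k from Theorem 1's
    reduction. Variables are 1..num_vars; each clause is a pair of
    nonzero ints (positive = positive literal, negative = negated).
    Each bid is ({charity: slope}, threshold s, payment amount t)."""
    V, K = num_vars, max(len(clauses), 1)
    bids = []
    for v in range(1, V + 1):
        bids.append(({+v: 1, -v: 1}, 2, 1 - 1 / (4 * V)))   # b_v
        bids.append(({+v: 1}, 2, 1))                         # b_{+v}
        bids.append(({-v: 1}, 2, 1))                         # b_{-v}
    for (l1, l2) in clauses:
        bids.append(({l1: 1, l2: 1}, 2, 1 / (8 * V * K)))    # b_k
    return bids

def b0(num_vars, clauses, target):
    """The single bid that values all charities equally."""
    V, K = num_vars, max(len(clauses), 1)
    all_charities = {l: 1 for v in range(1, V + 1) for l in (+v, -v)}
    return (all_charities, 2 * V + target / (8 * V * K),
            1 / 4 + 1 / (16 * V * K))
```

For example, for two variables and the single clause (x_1 ∨ ¬x_2), the construction yields the seven bids b_v, b_{+v}, b_{−v}, b_k, plus the separate b_0.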
Again, the other b_v and b_l cannot (by themselves) help to close this gap; and the b_k can contribute at most |K| · 1/(8|V||K|) < 1/(4|V|). It follows that b_0 is accepted. Now, in order for b_0 to be accepted, a total of 2|V| + T/(8|V||K|) must be donated. Because it is not possible (for any v ∈ V) that both b_{+v} and b_{−v} are accepted, it follows that the total payment by the b_v and the b_l can be at most 2|V| − 1/4. Adding b_0's payment of 1/4 + 1/(16|V||K|) to this, we still need (T − 1/2)/(8|V||K|) from the b_k. But each one of them contributes at most 1/(8|V||K|), so at least T of them must be accepted.

Corollary 1. Unless P = NP, there is no polynomial-time algorithm for approximating DONATION-CLEARING (with either the surplus or the total amount donated as the objective) within any ratio f(n), where f is a nonzero function of the size of the instance. This holds even if the DONATION-CLEARING structures satisfy all the properties given in Theorem 1.

Proof. Suppose we had such a polynomial-time algorithm, and applied it to the DONATION-CLEARING instances that were reduced from MAX2SAT instances in Theorem 1. It would return a nonzero solution when the MAX2SAT instance has a solution, and a zero solution otherwise. So we could decide whether arbitrary MAX2SAT instances are satisfiable this way, and it would follow that P = NP.

(Solving the problem to optimality is NP-complete in many other (noncomparable or even more restricted) settings as well; we omit such results because of space constraints.) This should not be interpreted to mean that our approach is infeasible. First, as we will show, there are very expressive families of bids for which the problem is solvable in polynomial time. Second, NP-completeness is often overcome in practice (especially when the stakes are high). For instance, even though the problem of clearing combinatorial auctions is NP-complete [20] (even to approximate [21]), such auctions are typically solved to optimality in
practice.

7. MIXED INTEGER PROGRAMMING FORMULATION
In this section, we give a mixed integer programming (MIP) formulation for the general problem. We also discuss in which special cases this formulation reduces to a linear programming (LP) formulation. In such cases, the problem is solvable in polynomial time, because linear programs can be solved in polynomial time [11]. The variables of the MIP defining the final outcome are the payments made to the charities, denoted by π_{c_i}, and the payments extracted from the bidders, π_{b_j}. In the case where we try to avoid direct payments and let the bidders pay the charities directly, we add variables π_{c_i,b_j} indicating how much b_j pays to c_i, with the constraints that for each c_i, π_{c_i} ≤ Σ_{b_j} π_{c_i,b_j}; and for each b_j, π_{b_j} ≥ Σ_{c_i} π_{c_i,b_j}. Additionally, there is a constraint π_{c_i,b_j} = 0 whenever bidder b_j is unwilling to pay charity c_i. The rest of the MIP can be phrased in terms of the π_{c_i} and π_{b_j}. The objectives we have discussed earlier are both linear: surplus is given by Σ_{j=1}^n π_{b_j} − Σ_{i=1}^m π_{c_i}, and the total amount donated is given by Σ_{i=1}^m π_{c_i} (coefficients can be added to represent different weights on the different charities in the objective). The constraint that the outcome should be valid (no deficit) is given simply by Σ_{j=1}^n π_{b_j} ≥ Σ_{i=1}^m π_{c_i}. For every bidder and every charity, we define an additional utility variable u_j^i indicating the utility that this bidder derives from the payment to this charity. The bidder's total utility is given by another variable u_j, with the constraint that u_j = Σ_{i=1}^m u_j^i.
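The basic bookkeeping just described (the no-deficit constraint and the two linear objectives) can be illustrated with a small stdlib-only evaluator. This is a sketch; the function name and the (utility functions, willingness function) bid representation are conventions chosen here, not the paper's.

```python
def evaluate_outcome(bids, pi_c):
    """Evaluate a candidate payment vector pi_c (payments to charities).
    Each bid is a pair (utility_fns, w), where utility_fns[i] maps
    pi_c[i] to the bidder's utility u_j^i for charity c_i, and w maps
    total utility u_j to the bidder's payment willingness. Extracting
    the maximum payment w_j(u_j) from every bidder, return the pair
    (surplus, total donated), or None if the outcome is invalid
    (i.e., it violates sum_j pi_bj >= sum_i pi_ci)."""
    total_donated = sum(pi_c)
    total_paid = 0.0
    for utility_fns, w in bids:
        u_j = sum(f(p) for f, p in zip(utility_fns, pi_c))  # u_j = sum_i u_j^i
        total_paid += w(u_j)
    if total_paid < total_donated:
        return None  # deficit: outcome is not valid
    return total_paid - total_donated, total_donated
```

Under quasilinear bids (w_j = u_j), plugging in a single bidder with a concave utility for one charity and a linear utility for another reproduces the kind of donation-total comparisons made later for payment maximization.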
Each u_j^i is given as a function of π_{c_i} by the (piecewise linear) function provided by the bidder. In order to represent this function in the MIP formulation, we merely place upper-bounding constraints on u_j^i, so that it cannot exceed the given function. The MIP solver can then push the u_j^i variables all the way up to the constraint, in order to extract as much payment from this bidder as possible. In the case where the u_j^i are concave, this is easy: if (s_l, t_l) and (s_{l+1}, t_{l+1}) are the endpoints of a finite linear segment in the function, we add the constraint that u_j^i ≤ t_l + ((π_{c_i} − s_l)/(s_{l+1} − s_l))(t_{l+1} − t_l). If the final (infinite) segment starts at (s_k, t_k) and has slope d, we add the constraint that u_j^i ≤ t_k + d(π_{c_i} − s_k). Using the fact that the function is concave, for each value of π_{c_i}, the tightest upper bound on u_j^i is the one corresponding to the segment above that value of π_{c_i}, and therefore these constraints are sufficient to force the correct value of u_j^i. When the function is not concave, we require (for the first time) some binary variables. First, we define another point on the function: (s_{k+1}, t_{k+1}) = (s_k + M, t_k + dM), where d is the slope of the infinite segment and M is any upper bound on the π_{c_i}. This has the effect that we will never be on the infinite segment again. Now, let x_l^{i,j} be an indicator variable that should be 1 if π_{c_i} is below the lth segment of the function, and 0 otherwise. To effect this, first add the constraint Σ_{l=0}^k x_l^{i,j} = 1. Now, we aim to represent π_{c_i} as a weighted average of its two neighboring s_l^{i,j}. For 0 ≤ l ≤ k + 1, let λ_l^{i,j} be the weight on s_l^{i,j}. We add the constraint Σ_{l=0}^{k+1} λ_l^{i,j} = 1. Also, for 0 ≤ l ≤ k + 1, we add the constraint λ_l^{i,j} ≤ x_{l−1}^{i,j} + x_l^{i,j} (where x_{−1}^{i,j} and x_{k+1}^{i,j} are defined to be zero), so that indeed only the two neighboring s_l^{i,j} have nonzero weight. Now we
add the constraint π_{c_i} = Σ_{l=0}^{k+1} s_l^{i,j} λ_l^{i,j}, and now the λ_l^{i,j} must be set correctly. Then, we can set u_j^i = Σ_{l=0}^{k+1} t_l^{i,j} λ_l^{i,j}. (This is a standard MIP technique [16].) Finally, each π_{b_j} is bounded as a function of u_j by the (piecewise linear) function provided by the bidder (w_j). Representing this function is entirely analogous to how we represented u_j^i as a function of π_{c_i}. (Again, we will need binary variables only if the function is not concave.) Because we only use binary variables when either a utility function u_j^i or a payment willingness function w_j is not concave, it follows that if all of these are concave, our MIP formulation is simply a linear program, which can be solved in polynomial time. Thus:

Theorem 2. If all functions u_j^i and w_j are concave (and piecewise linear), the DONATION-CLEARING problem can be solved in polynomial time using linear programming.

Even if some of these functions are not concave, we can simply replace each such function by the smallest upper-bounding concave function, and use the linear programming formulation to obtain an upper bound on the objective, which may be useful in a search formulation of the general problem.

8. WHY ONE CANNOT DO MUCH BETTER THAN LINEAR PROGRAMMING
One may wonder if, for the special cases of the DONATION-CLEARING problem that can be solved in polynomial time with linear programming, there exist special-purpose algorithms that are much faster than linear programming algorithms. In this section, we show that this is not the case. We give a reduction from (the decision variant of) the general linear programming problem to (the decision variant of) a special case of the DONATION-CLEARING problem (which can be solved in polynomial time using linear programming). (The decision variant of an optimization problem asks the binary question: can the objective value exceed o?) Thus, any special-purpose algorithm for solving the decision variant of this
special case of the DONATION-CLEARING problem could be used to solve a decision question about an arbitrary linear program just as fast. (And thus, if we are willing to call the algorithm a logarithmic number of times, we can solve the optimization version of the linear program.) We first observe that for linear programming, a decision question about the objective can simply be phrased as another constraint in the LP (forcing the objective to exceed the given value); then, the original decision question coincides with asking whether the resulting linear program has a feasible solution.

Theorem 3. The question of whether an LP (given by a set of linear constraints⁴) has a feasible solution can be modeled as a DONATION-CLEARING instance with payment maximization as the objective, with 2v charities and v + c bids (where v is the number of variables in the LP, and c is the number of constraints). In this model, each bid b_j has only linear u_j^i functions, and is a partially acceptable threshold bid (w_j(u) = t_j for u ≥ s_j; otherwise w_j(u) = u t_j / s_j). The v bids corresponding to the variables mention only two charities each; the c bids corresponding to the constraints mention only twice the number of variables in the corresponding constraint.

Proof. For every variable x_i in the LP, let there be two charities, c_{+x_i} and c_{−x_i}. Let H be some number such that if there is a feasible solution to the LP, there is one in which every variable has absolute value at most H. In the following, we will represent bids as follows: ({(c_k, a_k)}, s, t) indicates that u_j^k(π_{c_k}) = a_k π_{c_k} (this function is 0 for c_k not mentioned in the bid), and w_j(u_j) = t for u_j ≥ s, w_j(u_j) = u_j t / s otherwise. For every variable x_i in the LP, let there be a bid b_{x_i} = ({(c_{+x_i}, 1), (c_{−x_i}, 1)}, 2H, 2H − c/v). For every constraint Σ_i r_i^j x_i ≤ s_j in the linear program, let there be a bid b_j = ({(c_{−x_i}, r_i^j)}_{i : r_i^j > 0} ∪ {(c_{+x_i}, −r_i^j
)}_{i : r_i^j < 0}, (Σ_i |r_i^j|)H − s_j, 1). Let the target total amount donated be 2vH. Suppose there is a feasible solution (x*_1, x*_2, ..., x*_v) to the LP. Without loss of generality, we can suppose that |x*_i| ≤ H for all i. Then, in the DONATION-CLEARING instance, for every i, let π_{c_{+x_i}} = H + x*_i, and let π_{c_{−x_i}} = H − x*_i (for a total payment of 2H to these two charities). This allows us to extract the maximum payment from the bids b_{x_i}: a total payment of 2vH − c. Additionally, the utility of bidder b_j is now Σ_{i : r_i^j > 0} r_i^j (H − x*_i) + Σ_{i : r_i^j < 0} (−r_i^j)(H + x*_i) = (Σ_i |r_i^j|)H − Σ_i r_i^j x*_i ≥ (Σ_i |r_i^j|)H − s_j (where the last inequality stems from the fact that constraint j must be satisfied in the LP solution), so it follows that we can extract the maximum payment from all the bidders b_j, for a total payment of c. It follows that we can extract the required 2vH payment from the bidders, and there exists a solution to the DONATION-CLEARING instance with a total amount donated of at least 2vH.

⁴ These constraints must include bounds on the variables (including nonnegativity bounds), if any.

Now suppose there is a solution to the DONATION-CLEARING instance with a total amount donated of at least 2vH. Then the maximum payment must be extracted from each bidder. From the fact that the maximum payment must be extracted from each bidder b_{x_i}, it follows that for each i, π_{c_{+x_i}} + π_{c_{−x_i}} ≥ 2H. Because the maximum extractable total payment is 2vH, it follows that for each i, π_{c_{+x_i}} + π_{c_{−x_i}} = 2H. Let x*_i = π_{c_{+x_i}} − H = H − π_{c_{−x_i}}. Then, from the fact that the maximum payment must be extracted from each bidder b_j, it follows that (Σ_i |r_i^j|)H − s_j ≤ Σ_{i : r_i^j > 0} r_i^j π_{c_{−x_i}} + Σ_{i : r_i^j < 0} (−r_i^j) π_{c_{+x_i}} = Σ_{i : r_i^j > 0} r_i^j (H − x*_i) + Σ_{i : r_i^j < 0} (−r_i^j)(H
+ x*_i) = (Σ_i |r_i^j|)H − Σ_i r_i^j x*_i. Equivalently, Σ_i r_i^j x*_i ≤ s_j. It follows that the x*_i constitute a feasible solution to the LP.

9. QUASILINEAR BIDS
Another class of bids of interest is the class of quasilinear bids. In a quasilinear bid, the bidder's payment willingness function is linear in utility: that is, w_j = u_j. (Because the units of utility are arbitrary, we may as well let them correspond exactly to units of money, so we do not need a constant multiplier.) In most cases, quasilinearity is an unreasonable assumption: for example, usually bidders have a limited budget for donations, so that the payment willingness will stop increasing in utility after some point (or at least increase more slowly in the case of a softer budget constraint). Nevertheless, quasilinearity may be a reasonable assumption in the case where the bidders are large organizations with large budgets, and the charities are a few small projects requiring relatively little money. In this setting, once a certain small amount has been donated to a charity, a bidder will derive no more utility from more money being donated to that charity. Thus, the bidders will never reach a high enough utility for their budget constraint (even when it is soft) to take effect, and a linear approximation of their payment willingness function is reasonable. Another reason for studying the quasilinear setting is that it is the easiest setting for mechanism design, which we will discuss shortly. In this section, we will see that the clearing problem is much easier in the case of quasilinear bids. First, we address the case where we are trying to maximize surplus (which is the most natural setting for mechanism design). The key observation here is that when bids are quasilinear, the clearing problem decomposes across charities.

Lemma 1. Suppose all bids are quasilinear, and surplus is the objective. Then we can clear the
market for each charity individually. That is, for each bidder b_j, let π_{b_j} = Σ_{c_i} π_{b_j}^i. Then, for each charity c_i, maximize (Σ_{b_j} π_{b_j}^i) − π_{c_i}, under the constraint that for every bidder b_j, π_{b_j}^i ≤ u_j^i(π_{c_i}).

Proof. The resulting solution is certainly valid. First of all, at least as much money is collected as is given away, because Σ_{b_j} π_{b_j} − Σ_{c_i} π_{c_i} = Σ_{b_j} Σ_{c_i} π_{b_j}^i − Σ_{c_i} π_{c_i} = Σ_{c_i} ((Σ_{b_j} π_{b_j}^i) − π_{c_i}), and the terms of this summation are the objectives of the individual optimization problems, each of which can be set at least to 0 (by setting all the variables to 0), so it follows that the expression is nonnegative. Second, no bidder b_j pays more than she is willing to, because u_j − π_{b_j} = Σ_{c_i} u_j^i(π_{c_i}) − Σ_{c_i} π_{b_j}^i = Σ_{c_i} (u_j^i(π_{c_i}) − π_{b_j}^i), and the terms of this summation are nonnegative by the constraints we imposed on the individual optimization problems. All that remains to show is that the solution is optimal. Because in an optimal solution, we will extract as much payment from the bidders as possible given the π_{c_i}, all we need to show is that the π_{c_i} are set optimally by this approach. Let π*_{c_i} be the amount paid to charity c_i in some optimal solution. If we change this amount to π_{c_i} and leave everything else unchanged, this will only affect the payment that we can extract from the bidders because of this particular charity, and the difference in surplus will be Σ_{b_j} (u_j^i(π_{c_i}) − u_j^i(π*_{c_i})) − π_{c_i} + π*_{c_i}. This expression is, of course, 0 if π_{c_i} = π*_{c_i}. But now notice that this expression is maximized as a function of π_{c_i} by the decomposed solution for this charity (the terms without π_{c_i} in them do not matter, and of course in the decomposed solution we always set π_{b_j}^i = u_j^i(π_{c_i})). It follows that if we change
π_{c_i} to the decomposed solution, the change in surplus will be at least 0 (and the solution will still be valid). Thus, we can change the π_{c_i} one by one to the decomposed solution without ever losing any surplus.

Theorem 4. When all bids are quasilinear and surplus is the objective, DONATION-CLEARING can be done in linear time.

Proof. By Lemma 1, we can solve the problem separately for each charity. For charity c_i, this amounts to maximizing (Σ_{b_j} u_j^i(π_{c_i})) − π_{c_i} as a function of π_{c_i}. Because all its terms are piecewise linear functions, this whole function is piecewise linear, and must be maximized at one of the points where it is nondifferentiable. It follows that we need only check all the points at which one of the terms is nondifferentiable.

Unfortunately, the decomposition lemma does not hold for payment maximization.

Proposition 1. When the objective is payment maximization, even when bids are quasilinear, the solution obtained by decomposing the problem across charities is in general not optimal (even with concave bids).

Proof. Consider a single bidder b_1 placing the following quasilinear bid over two charities c_1 and c_2: u_1^1(π_{c_1}) = 2π_{c_1} for 0 ≤ π_{c_1} ≤ 1, u_1^1(π_{c_1}) = 2 + (π_{c_1} − 1)/4 otherwise; u_1^2(π_{c_2}) = π_{c_2}/2. The decomposed solution is π_{c_1} = 7/3, π_{c_2} = 0, for a total donation of 7/3. But the solution π_{c_1} = 1, π_{c_2} = 2 is also valid, for a total donation of 3 > 7/3.

In fact, when payment maximization is the objective, DONATION-CLEARING remains (weakly) NP-complete in general. (In the remainder of the paper, proofs are omitted because of space constraints.)

Theorem 5. DONATION-CLEARING is (weakly) NP-complete when payment maximization is the objective, even when every bid concerns only one charity (and has a step-function utility function for this charity), and is quasilinear.

However, when the bids are also concave, a simple greedy
clearing algorithm is optimal.

Theorem 6. Given a DONATION-CLEARING instance with payment maximization as the objective where all bids are quasilinear and concave, consider the following algorithm. Start with π_{c_i} = 0 for all charities. Then, letting γ_{c_i} = d(Σ_{b_j} u_j^i(π_{c_i}))/dπ_{c_i} (at nondifferentiable points, these derivatives should be taken from the right), increase π_{c*_i} (where c*_i ∈ arg max_{c_i} γ_{c_i}), until either γ_{c*_i} is no longer the highest (in which case, recompute c*_i and start increasing the corresponding payment), or Σ_{b_j} u_j = Σ_{c_i} π_{c_i} and γ_{c*_i} < 1. Finally, let π_{b_j} = u_j. (A similar greedy algorithm works when the objective is surplus and the bids are quasilinear and concave, with the only difference that we stop increasing the payments as soon as γ_{c*_i} < 1.)

10. INCENTIVE COMPATIBILITY
Up to this point, we have not discussed the bidders' incentives for bidding any particular way. Specifically, the bids may not truthfully reflect the bidders' preferences over charities, because a bidder may bid strategically, misrepresenting her preferences in order to obtain a result that is better for herself. This means the mechanism is not strategy-proof. (We will show some concrete examples of this shortly.) This is not too surprising, because the mechanism described so far is, in a sense, a first-price mechanism, where the mechanism will extract as much payment from a bidder as her bid allows. Such mechanisms (for example, first-price auctions, where winners pay the value of their bids) are typically not strategy-proof: if a bidder reports her true valuation for an outcome, then if this outcome occurs, the payment the bidder will have to make will offset her gains from the outcome completely. Of course, we could try to change the rules of the game (which outcome, that is, which payment vector to charities, we select for which bid vector, and which bidder pays how much) in order to
make bidding truthfully beneficial, and to make the outcome better with regard to the bidders' true preferences. This is the field of mechanism design. In this section, we briefly discuss the options that mechanism design provides for the expressive charity donation problem.

10.1 Strategic bids under the first-price mechanism
We first point out some reasons for bidders to misreport their preferences under the first-price mechanism described in the paper up to this point. First of all, even when there is only one charity, it may make sense to underbid one's true valuation for the charity. For example, suppose a bidder would like a charity to receive a certain amount x, but does not care if the charity receives more than that. Additionally, suppose that the other bids guarantee that the charity will receive at least x no matter what bid the bidder submits (and the bidder knows this). Then the bidder is best off not bidding at all (or submitting a utility for the charity of 0), to avoid having to make any payment. (This is known in economics as the free rider problem [14].)

With multiple charities, another kind of manipulation may occur, where the bidder attempts to steer others' payments towards her preferred charity. Suppose that there are two charities and three bidders. The first bidder bids u_1^1(π_{c_1}) = 1 if π_{c_1} ≥ 1, u_1^1(π_{c_1}) = 0 otherwise; u_1^2(π_{c_2}) = 1 if π_{c_2} ≥ 1, u_1^2(π_{c_2}) = 0 otherwise; and w_1(u_1) = u_1 if u_1 ≤ 1, w_1(u_1) = 1 + (u_1 − 1)/100 otherwise. The second bidder bids u_2^1(π_{c_1}) = 1 if π_{c_1} ≥ 1, u_2^1(π_{c_1}) = 0 otherwise; u_2^2(π_{c_2}) = 0 (always); and w_2(u_2) = u_2/4 if u_2 ≤ 1, w_2(u_2) = 1/4 + (u_2 − 1)/100 otherwise. Now, the third bidder's true preferences are accurately represented⁵ by the bid u_3^1(π_{c_1}) = 1 if π_{c_1} ≥ 1, u_3^1(π_{c_1}) = 0 otherwise; u_3^2(π_{c_2}) = 3 if π_{c_2} ≥ 1, u_3^2(π_{c_2}) = 0 otherwise; and w_3(u_3) = u_3/3 if u_3 ≤ 1, w_3(u_3) = 1/3 + (u_3 − 1)/100 otherwise. Now, it is straightforward to check that, if the third bidder bids truthfully, then regardless of whether the objective is surplus maximization or total amount donated, charity 1 will receive at least 1, and charity 2 will receive less than 1. The same is true if bidder 3 does not place a bid at all (as in the previous type of manipulation); hence bidder 3's utility will be 1 in this case. But now, if bidder 3 reports u_3^1(π_{c_1}) = 0 everywhere; u_3^2(π_{c_2}) = 3 if π_{c_2} ≥ 1, u_3^2(π_{c_2}) = 0 otherwise (this part of the bid is truthful); and w_3(u_3) = u_3/3 if u_3 ≤ 1, w_3(u_3) = 1/3 otherwise; then charity 2 will receive at least 1, and bidder 3 will have to pay at most 1/3. Because up to this amount of payment, one unit of money corresponds to three units of utility to bidder 3, it follows that her utility is now at least 3 − 1 = 2 > 1. We observe that in this case, the strategic bidder is affecting not only how much the bidders pay, but also how much the charities receive.

10.2 Mechanism design in the quasilinear setting
There are four reasons why the mechanism design approach is likely to be most successful in the setting of quasilinear preferences. First, historically, mechanism design has been most successful when the quasilinear assumption could be made. Second, because of this success, some very general mechanisms have been discovered for the quasilinear setting (for instance, the VCG mechanisms [24, 4, 10] or the dAGVA mechanism [6, 1]) which we could apply directly to the expressive charity donation problem. Third, as we saw in Section 9, the clearing problem is much easier in

⁵ Formally, this means that if the bidder is forced to pay the full amount that her bid allows for a particular vector of payments to charities, the bidder is indifferent between this and not participating in the mechanism at all. (Compare this to bidding truthfully in a first-price
auction.)

this setting, and thus we are less likely to run into computational trouble for the mechanism design problem. Fourth, as we will show shortly, the quasilinearity assumption in some cases allows for decomposing the mechanism design problem over the charities (as it did for the simple clearing problem). Moreover, in the quasilinear setting (unlike in the general setting), it makes sense to pursue social welfare (the sum of the utilities) as the objective, because now 1) units of utility correspond directly to units of money, so that we do not have the problem of the bidders arbitrarily scaling their utilities; and 2) it is no longer possible to give a payment willingness function of 0 while still affecting the donations through a utility function. Before presenting the decomposition result, we introduce some terms from game theory. A type is a preference profile that a bidder can have and can report (thus, a type report is a bid). Incentive compatibility (IC) means that bidders are best off reporting their preferences truthfully, either regardless of the others' types (in dominant strategies) or in expectation over them (in Bayes-Nash equilibrium). Individual rationality (IR) means agents are at least as well off participating in the mechanism as not participating, either regardless of the others' types (ex post) or in expectation over them (ex interim). A mechanism is budget balanced if there is no flow of money into or out of the system, either always (ex post) or in expectation over the type reports (ex ante). A mechanism is efficient if it (always) produces the efficient allocation of wealth to charities.

Theorem 7. Suppose all agents' preferences are quasilinear. Furthermore, suppose that there exists a single-charity mechanism M that, for a certain subclass P of (quasilinear) preferences, under a given solution concept S (implementation in dominant strategies or Bayes-Nash equilibrium) and a given notion of individual rationality R
(ex post, ex interim, or none), satisfies a certain notion of budget balance (ex post, ex ante, or none), and is ex-post efficient. Then there exists such a mechanism for any number of charities.

Two mechanisms that satisfy efficiency (and can in fact be applied directly to the multiple-charity problem without use of the previous theorem) are the VCG mechanism (which is incentive compatible in dominant strategies) and the dAGVA mechanism (which is incentive compatible only in Bayes-Nash equilibrium). Each of them, however, has a drawback that would probably make it impractical in the setting of donations to charities: the VCG mechanism is not budget balanced, and the dAGVA mechanism does not satisfy ex-post individual rationality. In the next subsection, we investigate whether we can do better in the setting of donations to charities.

10.3 Impossibility of efficiency
In this subsection, we show that even in a very restricted setting, and with minimal requirements on IC and IR constraints, it is impossible to create a mechanism that is efficient.

Theorem 8. There is no mechanism that is ex-post budget balanced, ex-post efficient, and ex-interim individually rational with Bayes-Nash equilibrium as the solution concept (even with only one charity and only two quasilinear bidders with identical type distributions (uniform over two types, with either both utility functions being step functions or both utility functions being concave piecewise linear functions)).

The case of step functions in this theorem corresponds exactly to the case of a single, fixed-size, nonexcludable public good (the public good being that the charity receives the desired amount), for which such an impossibility result is already known [14]. Many similar results are known, probably the most famous of which is the Myerson-Satterthwaite impossibility result, which proves the impossibility of efficient bilateral trade under the same requirements [15]. Theorem 7 indicates that there is no reason to decide
on donations to multiple charities under a single mechanism (rather than a separate one for each charity) when an efficient mechanism with the desired properties exists for the single-charity case. However, because under the requirements of Theorem 8 no such mechanism exists, there may be a benefit to bringing the charities under the same umbrella. The next proposition shows that this is indeed the case.

Proposition 2. There exist settings with two charities where there exists no ex-post budget balanced, ex-post efficient, and ex-interim individually rational mechanism with Bayes-Nash equilibrium as the solution concept for either charity alone; but there exists an ex-post budget balanced, ex-post efficient, and ex-post individually rational mechanism with dominant strategies as the solution concept for both charities together. (Even when the conditions are the same as in Theorem 8, apart from the fact that there are now two charities.)

11. CONCLUSION
We introduced a bidding language for expressing very general types of matching offers over multiple charities. We formulated the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and showed that it is NP-complete to approximate to any ratio even in very restricted settings. We gave a mixed integer program formulation of the clearing problem, and showed that for concave bids (where utility functions and payment willingness functions are concave), the program reduces to a linear program and can hence be solved in polynomial time. We then showed that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming, suggesting that we cannot do much better than a linear programming implementation for such bids. Subsequently, we showed that the clearing problem is much easier when bids are quasilinear (where payment willingness functions are linear): for surplus, the problem decomposes across charities,
and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave).\nFor the quasilinear setting, we studied the mechanism design question of making the bidders report their preferences truthfully rather than strategically.\nWe showed that an ex-post efficient mechanism is impossible even with only one charity and a very restricted class of bids.\nWe also showed that even though the clearing problem decomposes over charities in the quasilinear setting, there may be benefits to linking the charities from a mechanism design standpoint.\nThere are many directions for future research.\nOne is to build a web-based implementation of the (first-price) mechanism proposed in this paper.\nAnother is to study the computational scalability of our MIP\/LP approach.\nIt is also important to identify other classes of bids (besides concave ones) for which the clearing problem is tractable.\nMuch crucial work remains to be done on the mechanism design problem.\nFinally, are there good iterative mechanisms for charity donation?6\n12.\nREFERENCES\n[1] K. Arrow.\nThe property rights doctrine and demand revelation under incomplete information.\nIn M. Boskin, editor, Economics and human welfare.\nNew York: Academic Press, 1979.\n[2] L. M. Ausubel and P. Milgrom.\nAscending auctions with package bidding.\nFrontiers of Theoretical Economics, 1, 2002.\nNo. 1, Article 1.\n[3] Y. Bartal, R. Gonen, and N. Nisan.\nIncentive compatible multi-unit combinatorial auctions.\nIn Theoretical Aspects of Rationality and Knowledge (TARK IX), Bloomington, Indiana, USA, 2003.\n[4] E. H. Clarke.\nMultipart pricing of public goods.\nPublic Choice, 11:17-33, 1971.\n[5] V. Conitzer and T. Sandholm.\nComplexity of mechanism design.\nIn Proceedings of the 18th Annual Conference on Uncertainty in Artificial Intelligence (UAI-02), pages 103-110, Edmonton, Canada, 2002.\n[6] C. d'Aspremont and L. A.
G\u00e9rard-Varet.\nIncentives and incomplete information.\nJournal of Public Economics, 11:25-45, 1979.\n[7] M. R. Garey, D. S. Johnson, and L. Stockmeyer.\nSome simplified NP-complete graph problems.\nTheoretical Computer Science, 1:237-267, 1976.\n[8] D. Goldburg and S. McElligott.\nRed Cross statement on official donation locations.\n2001.\nPress release, http:\/\/www.redcross.org\/press\/disaster\/ds pr\/ 011017legitdonors.html.\n[9] R. Gonen and D. Lehmann.\nOptimal solutions for multi-unit combinatorial auctions: Branch and bound heuristics.\nIn Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 13-20, Minneapolis, MN, Oct. 2000.\n[10] T. Groves.\nIncentives in teams.\nEconometrica, 41:617-631, 1973.\n[11] L. Khachiyan.\nA polynomial algorithm in linear programming.\nSoviet Math. Doklady, 20:191-194, 1979.\n[12] R. Lavi, A. Mu'Alem, and N. Nisan.\nTowards a characterization of truthful combinatorial auctions.\nIn Proceedings of the Annual Symposium on Foundations of Computer Science (FOCS), 2003.\n[13] D. Lehmann, L. I. O'Callaghan, and Y. Shoham.\nTruth revelation in rapid, approximately efficient combinatorial auctions.\nJournal of the ACM, 49(5):577-602, 2002.\nEarly version appeared in ACM EC-99.\nFootnote 6: Compare, for example, iterative mechanisms in the combinatorial auction setting [19, 25, 2].\n[14] A. Mas-Colell, M. Whinston, and J. R. Green.\nMicroeconomic Theory.\nOxford University Press, 1995.\n[15] R. Myerson and M. Satterthwaite.\nEfficient mechanisms for bilateral trading.\nJournal of Economic Theory, 28:265-281, 1983.\n[16] G. L. Nemhauser and L. A. Wolsey.\nInteger and Combinatorial Optimization.\nJohn Wiley & Sons, 1999.\nSection 4, page 11.\n[17] N. Nisan.\nBidding and allocation in combinatorial auctions.\nIn Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 1-12, Minneapolis, MN, 2000.\n[18] N. Nisan and A.
Ronen.\nComputationally feasible VCG mechanisms.\nIn Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 242-252, Minneapolis, MN, 2000.\n[19] D. C. Parkes.\niBundle: An efficient ascending price bundle auction.\nIn Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 148-157, Denver, CO, Nov. 1999.\n[20] M. H. Rothkopf, A. Peke\u010d, and R. M. Harstad.\nComputationally manageable combinatorial auctions.\nManagement Science, 44(8):1131-1147, 1998.\n[21] T. Sandholm.\nAlgorithm for optimal winner determination in combinatorial auctions.\nArtificial Intelligence, 135:1-54, Jan. 2002.\nConference version appeared at the International Joint Conference on Artificial Intelligence (IJCAI), pp. 542-547, Stockholm, Sweden, 1999.\n[22] T. Sandholm, S. Suri, A. Gilpin, and D. Levine.\nCABOB: A fast optimal algorithm for combinatorial auctions.\nIn Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI), pages 1102-1108, Seattle, WA, 2001.\n[23] J. Tagliabue.\nGlobal AIDS Fund Is Given Attention, but Not Money.\nThe New York Times, June 1, 2003.\nReprinted on http:\/\/www.healthgap.org\/press releases\/a03\/ 060103 NYT HGAP G8 fund.html.\n[24] W. Vickrey.\nCounterspeculation, auctions, and competitive sealed tenders.\nJournal of Finance, 16:8-37, 1961.\n[25] P. R. Wurman and M. P. Wellman.\nAkBA: A progressive, anonymous-price combinatorial auction.\nIn Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 21-29, Minneapolis, MN, Oct. 2000.\n[26] M. Yokoo.\nThe characterization of strategy\/false-name proof combinatorial auction protocols: Price-oriented, rationing-free protocol.\nIn Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), Acapulco, Mexico, Aug.
2003.","lvl-3":"Expressive Negotiation over Donations to Charities \u2217\nABSTRACT\nWhen donating money to a (say, charitable) cause, it is possible to use the contemplated donation as negotiating material to induce other parties interested in the charity to donate more.\nSuch negotiation is usually done in terms of matching offers, where one party promises to pay a certain amount if others pay a certain amount.\nHowever, in their current form, matching offers allow for only limited negotiation.\nFor one, it is not immediately clear how multiple parties can make matching offers at the same time without creating circular dependencies.\nAlso, it is not immediately clear how to make a donation conditional on other donations to multiple charities, when the donator has different levels of appreciation for the different charities.\nIn both these cases, the limited expressiveness of matching offers causes economic loss: it may happen that an arrangement that would have made all parties (donators as well as charities) better off cannot be expressed in terms of matching offers and will therefore not occur.\nIn this paper, we introduce a bidding language for expressing very general types of matching offers over multiple charities.\nWe formulate the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and show that it is NP-complete to approximate to any ratio even in very restricted settings.\nWe give a mixed-integer program formulation of the clearing problem, and show that for concave bids, the program reduces to a linear program.\nWe then show that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming.\nSubsequently, we show that the clearing problem is much easier when bids are quasilinear--for surplus, the problem decomposes across charities, and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter
problem is weakly NP-complete when the bids are not concave).\nFor the quasilinear setting, we study the mechanism design question.\nWe show that an ex-post efficient mechanism is impossible even with only one charity and a very restricted class of bids.\n(\u2217 Supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678.)\nWe also show that there may be benefits to linking the charities from a mechanism design standpoint.\n1.\nINTRODUCTION\nWhen money is donated to a charitable (or other) cause (hereafter referred to as charity), often the donating party gives unconditionally: a fixed amount is transferred from the donator to the charity, and none of this transfer is contingent on other events--in particular, it is not contingent on the amount given by other parties.\nIndeed, this is currently often the only way to make a donation, especially for small donating parties such as private individuals.\nHowever, when multiple parties support the same charity, each of them would prefer to see the others give more rather than less to this charity.\nIn such scenarios, it is sensible for a party to use its contemplated donation as negotiating material to induce the others to give more.\nThis is done by making the donation conditional on the others' donations.\nThe following example will illustrate this, and show that the donating parties as well as the charitable cause may simultaneously benefit from the potential for such negotiation.\nSuppose we have two parties, 1 and 2, who are both supporters of charity A.
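The two-party example developed just below (each party values a dollar received by charity A at $0.75, and the contemplated contract has each party give $0.5) can be checked numerically. This is a minimal sketch; the `utility` helper and variable names are illustrative, not from the paper, but the quasilinear form (appreciation minus own payment) matches the paper's quasilinear setting:

```python
# Numeric check of the two-party example.
# Assumption (from the example): each party values every dollar
# received by charity A at $0.75.
VALUE_PER_DOLLAR = 0.75


def utility(own_payment, total_to_charity):
    """Quasilinear payoff: appreciation of the charity's total, minus own payment."""
    return VALUE_PER_DOLLAR * total_to_charity - own_payment


# Giving $1 unconditionally is irrational: the giver nets 0.75 - 1 = -0.25.
print(utility(1.0, 1.0))

# Under the contract "each party gives $0.5", charity A receives $1 in total,
# and each party nets 0.75 - 0.5 = 0.25, so both prefer the contract to none.
print(utility(0.5, 1.0))
```

The same check extends to any contemplated contract vector: a quasilinear party accepts whenever its appreciation of the induced donation total exceeds its own payment.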
To either of them, it would be worth $0.75 if A received $1.\nIt follows that neither of them will be willing to give unconditionally, because $0.75 < $1.\nHowever, if the two parties draw up a contract that says that they will each give $0.5, both parties have an incentive to accept this contract (rather than have no contract at all): with the contract, the charity will receive $1 (rather than $0 without a contract), which is worth $0.75 to each party, which is greater than the $0.5 that that party will have to give.\nEffectively, each party has made its donation conditional on the other party's donation, leading to larger donations and greater happiness to all parties involved.\nOne method that is often used to effect this is to make a matching offer.\nExamples of matching offers are: "I will give x dollars for every dollar donated", or "I will give x dollars if the total collected from other parties exceeds y." In our example above, one of the parties can make the offer "I will donate $0.5 if the other party also donates at least that much", and the other party will have an incentive to indeed donate $0.5, so that the total amount given to the charity increases by $1.\nThus this matching offer implements the contract suggested above.\nAs a real-world example, the United States government has authorized a donation of up to $1 billion to the Global Fund to fight AIDS, TB and Malaria, under the condition that the American contribution does not exceed one third of the total--to encourage other countries to give more [23].\nHowever, there are several severe limitations to the simple approach of matching offers as just described.\n1.\nIt is not clear how two parties can make matching offers where each party's offer is stated in terms of the amount that the other pays.\n(For example, it is not clear what the outcome should be when both parties offer to match the other's donation.)\nThus, matching offers can only be based on payments made by parties that are
giving unconditionally (not in terms of a matching offer)--or at least there can be no circular dependencies.1\n2.\nGiven the current infrastructure for making matching offers, it is impractical to make a matching offer depend on the amounts given to multiple charities.\nFor instance, a party may wish to specify that it will pay $100 given that charity A receives a total of $1000, but that it will also count donations made to charity B, at half the rate.\n(Thus, a total payment of $500 to charity A combined with a total payment of $1000 to charity B would be just enough for the party's offer to take effect.)\nIn contrast, in this paper we propose a new approach where each party can express its relative preferences for different charities, and make its offer conditional on its own appreciation for the vector of donations made to the different charities.\nMoreover, the amount the party offers to donate at different levels of appreciation is allowed to vary arbitrarily (it does not need to be a dollar-for-dollar (or n-dollar-for-dollar) matching arrangement, or an arrangement where the party offers a fixed amount provided a given (strike) total has been exceeded).\nFinally, there is a clear interpretation of what it means when multiple parties are making conditional offers that are stated in terms of each other.\nGiven each combination of (conditional) offers, there is a (usually) unique solution which determines how much each party pays, and how much each charity is paid.\nHowever, as we will show, finding this solution (the clearing problem) requires solving a potentially difficult optimization problem.\nA large part of this paper is devoted to studying how difficult this problem is under different assumptions on the structure of the offers, and providing algorithms for solving it.\n1 Typically, larger organizations match offers of private individuals.\nFor example, the American Red Cross Liberty Disaster Fund maintains a list of businesses that match their customers'
donations [8].\nTowards the end of the paper, we also study the mechanism design problem of motivating the bidders to bid truthfully.\nIn short, expressive negotiation over donations to charities is a new way in which electronic commerce can help the world.\nA web-based implementation of the ideas described in this paper can facilitate voluntary reallocation of wealth on a global scale.\nAdditionally, optimally solving the clearing problem (and thereby generating the maximum economic welfare) requires the application of sophisticated algorithms.\n2.\nCOMPARISON TO COMBINATORIAL AUCTIONS AND EXCHANGES\n3.\nDEFINITIONS\n4.\nA SIMPLIFIED BIDDING LANGUAGE\n5.\nAVOIDING INDIRECT PAYMENTS\n6.\nHARDNESS OF CLEARING THE MARKET\n7.\nMIXED INTEGER PROGRAMMING FORMULATION\n8.\nWHY ONE CANNOT DO MUCH BETTER THAN LINEAR PROGRAMMING\n9.\nQUASILINEAR BIDS\n10.\nINCENTIVE COMPATIBILITY\n10.1 Strategic bids under the first-price mechanism\n10.3 Impossibility of efficiency\n11.\nCONCLUSION\nWe introduced a bidding language for expressing very general types of matching offers over multiple charities.\nWe formulated the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and showed that it is NP-complete to approximate to any ratio even in very restricted settings.\nWe gave a mixed-integer program formulation of the clearing problem, and showed that for concave bids (where utility functions and payment willingness functions are concave), the program reduces to a linear program and can hence be solved in polynomial time.\nWe then showed that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming, suggesting that we cannot do much better than a linear programming implementation for such bids.\nSubsequently, we showed that the clearing problem is much easier when bids are quasilinear (where payment willingness functions are linear)--for surplus, the problem decomposes across
charities, and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave).\nFor the quasilinear setting, we studied the mechanism design question of making the bidders report their preferences truthfully rather than strategically.\nWe showed that an ex-post efficient mechanism is impossible even with only one charity and a very restricted class of bids.\nWe also showed that even though the clearing problem decomposes over charities in the quasilinear setting, there may be benefits to linking the charities from a mechanism design standpoint.\nThere are many directions for future research.\nOne is to build a web-based implementation of the (first-price) mechanism proposed in this paper.\nAnother is to study the computational scalability of our MIP\/LP approach.\nIt is also important to identify other classes of bids (besides concave ones) for which the clearing problem is tractable.\nMuch crucial work remains to be done on the mechanism design problem.\nFinally, are there good iterative mechanisms for charity donation?6","lvl-2":"Expressive Negotiation over Donations to Charities \u2217\nABSTRACT\nWhen donating money to a (say, charitable) cause, it is possible to use the contemplated donation as negotiating material to induce other parties interested in the charity to donate more.\nSuch negotiation is usually done in terms of matching offers, where one party promises to pay a certain amount if others pay a certain amount.\nHowever, in their current form, matching offers allow for only limited negotiation.\nFor one, it is not immediately clear how multiple parties can make matching offers at the same time without creating circular dependencies.\nAlso, it is not immediately clear how to make a donation conditional on other donations to multiple charities, when the donator has
different levels of appreciation for the different charities.\nIn both these cases, the limited expressiveness of matching offers causes economic loss: it may happen that an arrangement that would have made all parties (donators as well as charities) better off cannot be expressed in terms of matching offers and will therefore not occur.\nIn this paper, we introduce a bidding language for expressing very general types of matching offers over multiple charities.\nWe formulate the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and show that it is NP-complete to approximate to any ratio even in very restricted settings.\nWe give a mixed-integer program formulation of the clearing problem, and show that for concave bids, the program reduces to a linear program.\nWe then show that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming.\nSubsequently, we show that the clearing problem is much easier when bids are quasilinear--for surplus, the problem decomposes across charities, and for payment maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave).\nFor the quasilinear setting, we study the mechanism design question.\nWe show that an ex-post efficient mechanism is \u2217 Supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678.\nimpossible even with only one charity and a very restricted class of bids.\nWe also show that there may be benefits to linking the charities from a mechanism design standpoint.\n1.\nINTRODUCTION\nWhen money is donated to a charitable (or other) cause (hereafter referred to as charity), often the donating party gives unconditionally: a fixed amount is transferred from the donator to the charity, and none of this transfer is contingent on other events--in particular, it is not contingent 
on the amount given by other parties.\nIndeed, this is currently often the only way to make a donation, especially for small donating parties such as private individuals.\nHowever, when multiple parties support the same charity, each of them would prefer to see the others give more rather than less to this charity.\nIn such scenarios, it is sensible for a party to use its contemplated donation as negotiating material to induce the others to give more.\nThis is done by making the donation conditional on the others' donations.\nThe following example will illustrate this, and show that the donating parties as well as the charitable cause may simultaneously benefit from the potential for such negotiation.\nSuppose we have two parties, 1 and 2, who are both supporters of charity A. To either of them, it would be worth $0.75 if A received $1.\nIt follows neither of them will be willing to give unconditionally, because $0.75 <$1.\nHowever, if the two parties draw up a contract that says that they will each give $0.5, both the parties have an incentive to accept this contract (rather than have no contract at all): with the contract, the charity will receive $1 (rather than $0 without a contract), which is worth $0.75 to each party, which is greater than the $0.5 that that party will have to give.\nEffectively, each party has made its donation conditional on the other party's donation, leading to larger donations and greater happiness to all parties involved.\nOne method that is often used to effect this is to make a matching offer.\nExamples of matching offers are: \"I will give x dollars for every dollar donated.\"\n, or \"I will give x dollars if the total collected from other parties exceeds y.\" In our example above, one of the parties can make the offer \"I will donate $0.5 if the other party also donates at least that much\", and the other party will have an incentive to indeed donate $0.5, so that the total amount given to the charity increases by $1.\nThus this 
matching offer implements the contract suggested above.\nAs a real-world example, the United States government has authorized a donation of up to $1 billion to the Global Fund to fight AIDS, TB and Malaria, under the condition that the American contribution does not exceed one third of the total--to encourage other countries to give more [23].\nHowever, there are several severe limitations to the simple approach of matching offers as just described.\n1.\nIt is not clear how two parties can make matching offers where each party's offer is stated in terms of the amount that the other pays.\n(For example, it is not clear what the outcome should be when both parties offer to match the other's donation.)\nThus, matching offers can only be based on payments made by parties that are giving unconditionally (not in terms of a matching offer)--or at least there can be no circular dependencies .1 2.\nGiven the current infrastructure for making matching offers, it is impractical to make a matching offer depend on the amounts given to multiple charities.\nFor instance, a party may wish to specify that it will pay $100 given that charity A receives a total of $1000, but that it will also count donations made to charity B, at half the rate.\n(Thus, a total payment of $500 to charity A combined with a total payment of $1000 to charity B would be just enough for the party's offer to take effect.)\nIn contrast, in this paper we propose a new approach where each party can express its relative preferences for different charities, and make its offer conditional on its own appreciation for the vector of donations made to the different charities.\nMoreover, the amount the party offers to donate at different levels of appreciation is allowed to vary arbitrarily (it does need to be a dollar-for-dollar (or n-dollarfor-dollar) matching arrangement, or an arrangement where the party offers a fixed amount provided a given (strike) total has been exceeded).\nFinally, there is a clear 
interpretation of what it means when multiple parties are making conditional offers that are stated in terms of each other.\nGiven each combination of (conditional) offers, there is a (usually) unique solution which determines how much each party pays, and how much each charity is paid.\nHowever, as we will show, finding this solution (the clearing problem) requires solving a potentially difficult optimization problem.\nA large part of this paper is devoted to studying how difficult this problem is under different assumptions on the structure of the offers, and providing algorithms for solving it.\n1Typically, larger organizations match offers of private individuals.\nFor example, the American Red Cross Liberty Disaster Fund maintains a list of businesses that match their customers' donations [8].\nTowards the end of the paper, we also study the mechanism design problem of motivating the bidders to bid truthfully.\nIn short, expressive negotiation over donations to charities is a new way in which electronic commerce can help the world.\nA web-based implementation of the ideas described in this paper can facilitate voluntary reallocation of wealth on a global scale.\nAditionally, optimally solving the clearing problem (and thereby generating the maximum economic welfare) requires the application of sophisticated algorithms.\n2.\nCOMPARISON TO COMBINATORIAL AUCTIONS AND EXCHANGES\nThis section discusses the relationship between expressive charity donation and combinatorial auctions and exchanges.\nIt can be skipped, but may be of interest to the reader with a background in combinatorial auctions and exchanges.\nIn a combinatorial auction, there are m items for sale, and bidders can place bids on bundles of one or more items.\nThe auctioneer subsequently labels each bid as winning or losing, under the constraint that no item can be in more than one winning bid, to maximize the sum of the values of the winning bids.\n(This is known as the clearing problem.)\nVariants 
include combinatorial reverse auctions, where the auctioneer is seeking to procure a set of items; and combinatorial exchanges, where bidders can both buy and sell items (even within the same bid). Other extensions include allowing for side constraints, as well as the specification of attributes of the items in bids. Combinatorial auctions and exchanges have recently become a popular research topic [20, 21, 17, 22, 9, 18, 13, 3, 12, 26, 19, 25, 2]. The problems of clearing expressive charity donation markets and clearing combinatorial auctions or exchanges are very different in formulation. Nevertheless, there are interesting parallels. One of the main reasons for the interest in combinatorial auctions and exchanges is that they allow for expressive bidding. A bidder can express exactly how much each different allocation is worth to her, and thus the globally optimal allocation may be chosen by the auctioneer. Compare this to a bidder having to bid on two different items in two different (one-item) auctions, without any way of expressing that (for instance) one item is worthless if the other item is not won. In this scenario, the bidder may win the first item but not the second (because there was another high bid on the second item that she did not anticipate), leading to economic inefficiency. Expressive bidding is also one of the main benefits of the expressive charity donation market. Here, bidders can express exactly how much they are willing to donate for every vector of amounts donated to charities. This may allow bidders to negotiate a complex arrangement of who gives how much to which charity, which is beneficial to all parties involved; whereas no such arrangement may have been possible if the bidders had been restricted to using simple matching offers on individual charities. Again, expressive bidding is necessary to achieve economic efficiency. Another parallel is the computational complexity of the clearing problem. In order to achieve
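As a toy illustration of the combinatorial-auction clearing problem just described (a brute-force sketch of ours, feasible only for a handful of bids; real clearing algorithms are far more sophisticated), one can enumerate all conflict-free sets of bids and keep the most valuable one:

```python
from itertools import combinations

# Illustrative brute force: clear a tiny combinatorial auction by trying
# every subset of bids and keeping the best one in which no item is sold twice.

def clear_auction(bids):
    """bids: list of (bundle_of_items, value). Returns (best_value, winners)."""
    best_value, best_set = 0, []
    for r in range(1, len(bids) + 1):
        for subset in combinations(range(len(bids)), r):
            items = [it for j in subset for it in bids[j][0]]
            if len(items) == len(set(items)):          # no item in two winning bids
                value = sum(bids[j][1] for j in subset)
                if value > best_value:
                    best_value, best_set = value, list(subset)
    return best_value, best_set

# Three items; the {a, b} bundle bid conflicts with both single-item bids.
bids = [({"a", "b"}, 5), ({"a"}, 3), ({"b"}, 3), ({"c"}, 2)]
print(clear_auction(bids))  # (8, [1, 2, 3])
```

The exponential enumeration here is exactly why winner determination is hard in general; the charity clearing problem studied below turns out to share this computational character.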
the full economic efficiency allowed by the market's expressiveness (or even come close to it), hard computational problems must be solved in combinatorial auctions and exchanges, as well as in the charity donation market (as we will see).

3. DEFINITIONS

Throughout this paper, we will refer to the offers that the donating parties make as bids, and to the donating parties as bidders. In our bidding framework, a bid will specify, for each vector of total payments made to the charities, how much that bidder is willing to contribute. (The contribution of this bidder is also counted in the vector of payments--so, the vector of total payments to the charities represents the amount given by all donating parties, not just the ones other than this bidder.) The bidding language is expressive enough that no bidder should have to make more than one bid. The following definition makes the general form of a bid in our framework precise. We now define possible outcomes in our model, and which outcomes are valid given the bids that were made. An outcome is a vector of payments made by the bidders (πb1, πb2, ..., πbn), and a vector of payments received by the charities (πc1, πc2, ..., πcm). A valid outcome is an outcome where 1. Σbj πbj ≥ Σci πci (at least as much money is collected as is given away); 2. for every bidder bj, πbj does not exceed the payment that bj's bid specifies for the vector of payments (πc1, πc2, ..., πcm). (If Σbj πbj > Σci πci, we can distribute this surplus across the bidders by not requiring them to pay the full amount, or across the charities by giving them more than the solution specifies.) Nevertheless, with this approach, a bidder may have to write out a check to a charity that she does not care for at all. (For example, an environmental activist who was using the system to increase donations to a wildlife preservation fund may be required to write a check to a group supporting a right-wing political party.) This is likely to lead to complaints and noncompliance with the clearing. We can address this issue by letting each bidder specify explicitly (before the clearing) which
charities she would be willing to make a check out to. These additional constraints, of course, may change the optimal solution. In general, checking whether a given centralized solution (with zero surplus) can be accomplished through decentralized payments when there are such constraints can be modeled as a MAX-FLOW problem. In the MAX-FLOW instance, there is an edge from the source node s to each bidder bj, with a capacity of πbj (as specified in the centralized solution); an edge from each bidder bj to each charity ci that the bidder is willing to donate money to, with a capacity of ∞; and an edge from each charity ci to the target node t with capacity πci (as specified in the centralized solution). In the remainder of this paper, all our hardness results apply even to the setting where there is no constraint on which bidders can pay to which charity (that is, even the unconstrained problem is hard).

THEOREM 1. Deciding whether there exists a nonzero valid outcome for a DONATION-CLEARING instance is NP-complete.

PROOF. The problem is in NP because we can nondeterministically choose the payments to be made and received, and check the validity and objective value of this outcome. In the following, we will represent bids as follows: ({(ck, ak)}, s, t) indicates that ukj(πck) = ak·πck (this function is 0 for ck not mentioned in the bid), and wj(uj) = t for uj ≥ s, wj(uj) = 0 otherwise. To show NP-hardness, we reduce an arbitrary MAX2SAT instance, given by a set of clauses K = {k} = {(l1k, l2k)} over a variable set V together with a target number of satisfied clauses T, to the following DONATION-CLEARING instance. Let the set of charities be as follows. For every literal l ∈ L, there is a charity cl. Then, let the set of bids be as follows. For every variable v, there is a bid bv = ({(c+v, 1), (c−v, 1)}, 2, 1 − 1/(4|V|)). For every literal l, there is a bid bl = ({(cl, 1)}, 2, 1). For every clause k = {l1k, l2k} ∈ K, there is a bid bk = ({(cl1k, 1), (cl2k, 1)}, 2, 1/(8|V||K|)). Finally, there is a single bid that values all charities equally: b0
= ({(c1, 1), (c2, 1), ..., (cm, 1)}, 2|V| + T/(8|V||K|), ...). We now show the two instances are equivalent. First, suppose there exists a solution to the MAX2SAT instance. If in this solution, l is true, then let πcl = 2 + T/(8|V|²|K|); otherwise πcl = 0. Also, the only bids that are not accepted are the bl where l is false, and the bk such that both of l1k, l2k are false. First we show that no bidder whose bid is accepted pays more than she is willing to. For each bv, either c+v or c−v receives at least 2, so this bidder's threshold has been met. For each bl, either l is false and the bid is not accepted, or l is true, cl receives at least 2, and the threshold has been met. For each bk, either both of l1k, l2k are false and the bid is not accepted, or at least one of them (say lik) is true (that is, k is satisfied) and clik receives at least 2, and the threshold has been met. Finally, because the total amount received by the charities is 2|V| + T/(8|V||K|), the threshold of b0 has been met as well, and the payments of the accepted bids cover the total given to the charities. So there exists a solution with positive surplus to the DONATION-CLEARING instance. Now suppose there exists a nonzero outcome in the DONATION-CLEARING instance. First we show that it is not possible (for any v ∈ V) that both b+v and b−v are accepted. For, this would require that πc+v + πc−v ≥ 4. The bids bv, b+v, b−v cannot contribute more than 3, so we need another 1 at least. It is easily seen that for any other v′, accepting any subset of {bv′, b+v′, b−v′} would require that at least as much is given to c+v′ and c−v′ as can be extracted from these bids, so this cannot help. Finally, all the other bids combined can contribute at most |K| · 1/(8|V||K|) = 1/(8|V|) < 1, so this is impossible. We can thus interpret the outcome in the DONATION-CLEARING instance as a partial assignment of truth values to variables: v is set to true if b+v is accepted, and to false if b−v is accepted. All that is left to show is that this partial assignment satisfies at least T clauses. First we show that if a clause bid bk
is accepted, then either bl1k or bl2k is accepted (and thus either l1k or l2k is set to true, hence k is satisfied). If bk is accepted, at least one of cl1k and cl2k must be receiving at least 1; without loss of generality, say it is cl1k, and say l1k corresponds to variable v1k (that is, it is +v1k or −v1k). If cl1k does not receive at least 2, bl1k is not accepted, and it is easy to check that the bids bv1k, b+v1k, b−v1k contribute (at least) 1 less than is paid to c+v1k and c−v1k. But this is the same situation that we analyzed before, and we know it is impossible. All that remains to show is that at least T clause bids are accepted. We now show that b0 is accepted. Suppose it is not; then one of the bv must be accepted. (The solution is nonzero by assumption; if only some bk are accepted, the total payment from these bids is at most |K| · 1/(8|V||K|) = 1/(8|V|) < 1, which is not enough for any bid to be accepted; and if one of the bl is accepted, then the threshold for the corresponding bv is also reached.) For this v, the bids bv, b+v, b−v contribute (at least) 1/(4|V|) less than the total payments to c+v and c−v.
Again, the other bv and bl cannot (by themselves) help to close this gap; and the bk can contribute at most |K| · 1/(8|V||K|) = 1/(8|V|) < 1/(4|V|). It follows that b0 is accepted. Now, in order for b0 to be accepted, a total of 2|V| + T/(8|V||K|) must be donated. Because it is not possible (for any v ∈ V) that both b+v and b−v are accepted, it follows that the total payment by the bv and the bl can be at most 2|V| − 1/4; hence, at least T of the bk must be accepted, so that at least T clauses are satisfied.

In fact, the same reduction shows that no polynomial-time algorithm can approximate the maximum surplus to any ratio unless P = NP. PROOF. Suppose we had such a polynomial time algorithm, and applied it to the DONATION-CLEARING instances that were reduced from MAX2SAT instances in Theorem 1. It would return a nonzero solution when the MAX2SAT instance has a solution, and a zero solution otherwise. So we can decide whether arbitrary MAX2SAT instances are satisfiable this way, and it would follow that P = NP. (Solving the problem to optimality is NP-complete in many other (noncomparable or even more restricted) settings as well--we omit such results because of space constraints.) This should not be interpreted to mean that our approach is infeasible. First, as we will show, there are very expressive families of bids for which the problem is solvable in polynomial time. Second, NP-completeness is often overcome in practice (especially when the stakes are high). For instance, even though the problem of clearing combinatorial auctions is NP-complete [20] (even to approximate [21]), they are typically solved to optimality in practice.

7. MIXED INTEGER PROGRAMMING FORMULATION

In this section, we give a mixed integer programming (MIP) formulation for the general problem. We also discuss in which special cases this formulation reduces to a linear programming (LP) formulation. In such cases, the problem is solvable in polynomial time, because linear programs can be solved in polynomial time [11]. The variables of the MIP defining the final outcome are the payments made to the charities, denoted by πci, and the payments extracted from the bidders, πbj. In the case where we try to avoid direct payments
and let the bidders pay the charities directly, we add variables πci,bj indicating how much bj pays to ci, with the constraints that for each ci, πci ≤ Σbj πci,bj; and for each bj, πbj ≥ Σci πci,bj. Additionally, there is a constraint πci,bj = 0 whenever bidder bj is unwilling to pay charity ci. The rest of the MIP can be phrased in terms of the πci and πbj. The objectives we have discussed earlier are both linear: surplus is given by Σj πbj − Σi πci, and total amount donated is given by Σi πci (coefficients can be added to represent different weights on the different charities in the objective). The constraint that the outcome should be valid (no deficit) is given simply by Σj πbj ≥ Σi πci. For every bidder, for every charity, we define an additional utility variable uij indicating the utility that this bidder derives from the payment to this charity. The bidder's total surplus is given by uj − πbj, where uj = Σi uij. Each uij is given as a function of πci by the (piecewise linear) function provided by the bidder. In order to represent this function in the MIP formulation, we will merely place upper bounding constraints on uij, so that it cannot exceed the given functions. The MIP solver can then push the uij variables all the way up to the constraint, in order to extract as much payment from this bidder as possible. In the case where the uij are concave, this is easy: if (sl, tl) and (sl+1, tl+1) are endpoints of a finite linear segment in the function, we add the constraint that uij ≤ tl + ((πci − sl)/(sl+1 − sl))(tl+1 − tl). If the final (infinite) segment starts at (sk, tk) and has slope d, we add the constraint that uij ≤ tk + d(πci − sk). Using the fact that the function is concave, for each value of πci, the tightest upper bound on uij is the one corresponding to the segment above that value of πci, and therefore these constraints are sufficient to force the correct value of uij. When the function is not concave, we
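To illustrate the concave case (with made-up breakpoints of our own): the pointwise minimum of the per-segment upper bounds coincides with the piecewise linear function itself, which is why the upper-bounding constraints alone suffice in the LP:

```python
# Sketch with hypothetical numbers: for a concave piecewise-linear utility,
# the minimum over the segment upper bounds equals the function everywhere,
# so an LP solver can "push" u_ij all the way up to the true value.

def segment_bounds(breakpoints, final_slope):
    """One upper-bound function per constraint, as in the MIP formulation."""
    bounds = []
    for (s0, t0), (s1, t1) in zip(breakpoints, breakpoints[1:]):
        bounds.append(lambda x, s0=s0, t0=t0, s1=s1, t1=t1:
                      t0 + (x - s0) / (s1 - s0) * (t1 - t0))
    sk, tk = breakpoints[-1]
    bounds.append(lambda x, sk=sk, tk=tk: tk + final_slope * (x - sk))
    return bounds

# Concave example: slope 2 on [0,1], slope 1 on [1,3], slope 0 afterwards.
breakpoints = [(0, 0), (1, 2), (3, 4)]
bounds = segment_bounds(breakpoints, final_slope=0)

def u(x):  # the actual utility function
    if x <= 1:
        return 2 * x
    if x <= 3:
        return 2 + (x - 1)
    return 4

for x in [0, 0.5, 1, 2, 3, 5]:
    assert abs(min(b(x) for b in bounds) - u(x)) < 1e-9
```

For a non-concave function this coincidence fails (some segment bound would lie below the function), which is exactly why the binary variables of the next paragraph become necessary.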
require (for the first time) some binary variables. First, we define another point on the function: (sk+1, tk+1) = (sk + M, tk + dM), where d is the slope of the infinite segment and M is any upper bound on πci. This has the effect that we will never be on the infinite segment again. Now, let xi,j,l be an indicator variable that should be 1 if πci is within the lth segment of the function, and 0 otherwise. To effect this, add the constraint Σl xi,j,l = 1, introduce nonnegative weights λi,j,l with Σl λi,j,l = 1, and add the constraints λi,j,l ≤ xi,j,l−1 + xi,j,l (where xi,j,−1 and xi,j,k+1 are defined to be zero), so that indeed only the two neighboring λi,j,l have nonzero weight. Now we add the constraints πci = Σl λi,j,l·sl and uij = Σl λi,j,l·tl. Finally, each πbj is bounded as a function of uj by the (piecewise linear) function provided by the bidder (wj). Representing this function is entirely analogous to how we represented uij as a function of πci. (Again we will need binary variables only if the function is not concave.) Because we only use binary variables when either a utility function uij or a payment willingness function wj is not concave, it follows that if all of these are concave, our MIP formulation is simply a linear program--which can be solved in polynomial time. Thus: THEOREM 2. If all functions uij and wj are concave (and piecewise linear), the DONATION-CLEARING problem can be solved in polynomial time using linear programming. Even if some of these functions are not concave, we can simply replace each such function by the smallest upper bounding concave function, and use the linear programming formulation to obtain an upper bound on the objective--which may be useful in a search formulation of the general problem.

8. WHY ONE CANNOT DO MUCH BETTER THAN LINEAR PROGRAMMING

One may wonder if, for the special cases of the DONATION-CLEARING problem that can be solved in polynomial time with linear programming, there exist special purpose algorithms that are much faster than linear programming algorithms. In this section, we show that
this is not the case. We give a reduction from (the decision variant of) the general linear programming problem to (the decision variant of) a special case of the DONATION-CLEARING problem (which can be solved in polynomial time using linear programming). (The decision variant of an optimization problem asks the binary question: "Can the objective value exceed o?") Thus, any special-purpose algorithm for solving the decision variant of this special case of the DONATION-CLEARING problem could be used to solve a decision question about an arbitrary linear program just as fast. (And thus, if we are willing to call the algorithm a logarithmic number of times, we can solve the optimization version of the linear program.) We first observe that for linear programming, a decision question about the objective can simply be phrased as another constraint in the LP (forcing the objective to exceed the given value); then, the original decision question coincides with asking whether the resulting linear program has a feasible solution. THEOREM 3. The question of whether an LP (given by a set of linear constraints⁴) has a feasible solution can be modeled as a DONATION-CLEARING instance with payment maximization as the objective, with 2v charities and v + c bids (where v is the number of variables in the LP, and c is the number of constraints). In this model, each bid bj has only linear uij functions, and is a partially acceptable threshold bid (wj(u) = tj for u ≥ sj, otherwise wj(u) = u·tj/sj). The v bids corresponding to the variables mention only two charities each; the c bids corresponding to the constraints mention only two times the number of variables in the corresponding constraint. PROOF. For every variable xi in the LP, let there be two charities, c+xi and c−xi. Let H be some number such that if there is a feasible solution to the LP, there is one in which every variable has absolute value at most H. In the following, we will represent bids as
follows: ({(ck, ak)}, s, t) indicates that ukj(πck) = ak·πck (this function is 0 for ck not mentioned in the bid), and wj(uj) = t for uj ≥ s, wj(uj) = uj·t/s otherwise. For every variable xi in the LP, let there be a bid bxi = ({(c+xi, 1), (c−xi, 1)}, 2H, 2H − c/v). For every constraint Σi rji·xi ≤ sj in the linear program, let there be a bid bj = ({(c−xi, rji)}i:rji>0 ∪ {(c+xi, −rji)}i:rji<0, (Σi |rji|)H − sj, 1). Let the target total amount donated be 2vH. Suppose there is a feasible solution (x∗1, x∗2, ..., x∗v) to the LP. Without loss of generality, we can suppose that |x∗i| ≤ H for all i. Then, in the DONATION-CLEARING instance, for every i, let πc+xi = H + x∗i, and let πc−xi = H − x∗i (for a total payment of 2H to these two charities). This allows us to extract the maximum payment from the bids bxi--a total payment of 2vH − c. Additionally, the utility of bidder bj is now Σi:rji>0 rji(H − x∗i) + Σi:rji<0 −rji(H + x∗i) = (Σi |rji|)H − Σi rji·x∗i ≥ (Σi |rji|)H − sj (where the inequality stems from the fact that constraint j must be satisfied in the LP solution), so it follows we can extract the maximum payment from all the bidders bj, for a total payment of c. It follows that we can extract the required 2vH payment from the bidders, and there exists a solution to the DONATION-CLEARING instance with a total amount donated of at least 2vH. Now suppose there is a solution to the DONATION-CLEARING instance with a total amount donated of at least 2vH. Then the maximum payment must be extracted from each bidder. From the fact that the maximum payment must be extracted from each bidder bxi, it follows that for each i, πc+xi + πc−xi ≥ 2H. Because the maximum extractable total payment is 2vH, it follows that for each i, πc+xi + πc−xi = 2H. Let x∗i = πc+xi − H = H − πc−xi. Then, from the fact that the maximum payment
must be extracted from each bidder bj, it follows that (Σi |rji|)H − Σi rji·x∗i ≥ (Σi |rji|)H − sj, that is, Σi rji·x∗i ≤ sj. Hence (x∗1, x∗2, ..., x∗v) is a feasible solution to the LP.

9. QUASILINEAR BIDS

Another class of bids of interest is the class of quasilinear bids. In a quasilinear bid, the bidder's payment willingness function is linear in utility: that is, wj(uj) = uj. (Because the units of utility are arbitrary, we may as well let them correspond exactly to units of money--so we do not need a constant multiplier.) In most cases, quasilinearity is an unreasonable assumption: for example, usually bidders have a limited budget for donations, so that the payment willingness will stop increasing in utility after some point (or at least increase slower in the case of a "softer" budget constraint). Nevertheless, quasilinearity may be a reasonable assumption in the case where the bidders are large organizations with large budgets, and the charities are a few small projects requiring relatively little money. In this setting, once a certain small amount has been donated to a charity, a bidder will derive no more utility from more money being donated to that charity. Thus, the bidders will never reach a high enough utility for their budget constraint (even when it is soft) to take effect, and thus a linear approximation of their payment willingness function is reasonable. Another reason for studying the quasilinear setting is that it is the easiest setting for mechanism design, which we will discuss shortly. In this section, we will see that the clearing problem is much easier in the case of quasilinear bids. First, we address the case where we are trying to maximize surplus (which is the most natural setting for mechanism design). The key observation here is that when bids are quasilinear, the clearing problem decomposes across charities. LEMMA 1. When all bids are quasilinear and the objective is surplus maximization, the clearing problem decomposes across the charities: it suffices to choose each πci separately so as to maximize (Σj uij(πci)) − πci. PROOF. The resulting solution is certainly valid: first of all, at least as much money is collected as is given away, because Σj πbj − Σi πci = Σi ((Σj uij(πci)) − πci)--and the terms of this summation are the objectives of the individual
optimization problems, each of which can be set at least to 0 (by setting all the variables to 0), so it follows that the expression is nonnegative. Second, no bidder bj pays more than she is willing to, because uj − πbj decomposes as a sum over the charities, and the terms of this summation are nonnegative by the constraints we imposed on the individual optimization problems. All that remains to show is that the solution is optimal. Because in an optimal solution, we will extract as much payment from the bidders as possible given the πci, all we need to show is that the πci are set optimally by this approach. Let π∗ci be the amount paid to charity ci in some optimal solution. If we change this amount to π′ci and leave everything else unchanged, this will only affect the payment that we can extract from the bidders because of this particular charity, and the difference in surplus will be Σj (uij(π′ci) − uij(π∗ci)) − π′ci + π∗ci. This expression is, of course, 0 if π′ci = π∗ci. But now notice that this expression is maximized as a function of π′ci by the decomposed solution for this charity (the terms without π′ci in them do not matter, and of course in the decomposed solution we always extract payment uij(πci) from bidder bj for charity ci). It follows that if we change πci to the decomposed solution, the change in surplus will be at least 0 (and the solution will still be valid). Thus, we can change the πci one by one to the decomposed solution without ever losing any surplus.

THEOREM 4. When all bids are quasilinear and the objective is surplus maximization, the DONATION-CLEARING problem can be solved in polynomial time. PROOF. By Lemma 1, we can solve the problem separately for each charity. For charity ci, this amounts to maximizing (Σj uij(πci)) − πci as a function of πci. Because all its terms are piecewise linear functions, this whole function is piecewise linear, and must be maximized at one of the points where it is nondifferentiable. It follows that we need only check all the points at which one of the terms is
nondifferentiable. Unfortunately, the decomposing lemma does not hold for payment maximization. PROOF. Consider a single bidder b1 placing the following quasilinear bid over two charities c1 and c2: u11(πc1) = 2πc1 for 0 ≤ πc1 ≤ 1, u11(πc1) = 2 + (πc1 − 1)/4 for πc1 > 1; u21(πc2) = πc2/2. The decomposed solution is πc1 = 7/3, πc2 = 0, for a total donation of 7/3. But the solution πc1 = 1, πc2 = 2 is also valid, for a total donation of 3 > 7/3. In fact, when payment maximization is the objective, DONATION-CLEARING remains (weakly) NP-complete in general. (In the remainder of the paper, proofs are omitted because of space constraints.) THEOREM 5. DONATION-CLEARING is (weakly) NP-complete when payment maximization is the objective, even when every bid concerns only one charity (and has a step-function utility function for this charity), and is quasilinear. However, when the bids are also concave, a simple greedy clearing algorithm is optimal. THEOREM 6. Given a DONATION-CLEARING instance with payment maximization as the objective where all bids are quasilinear and concave, consider the following algorithm. Start with πci = 0 for all charities. Then, letting γci = Σj duij(πci)/dπci (at nondifferentiable points, these derivatives should be taken from the right), increase πc∗i (where c∗i ∈ arg maxci γci), until either γc∗i is no longer the highest (in which case, recompute c∗i and start increasing the corresponding payment), or Σj uj = Σi πci. Finally, let πbj = uj. (A similar greedy algorithm works when the objective is surplus and the bids are quasilinear and concave, with the only difference that we stop increasing the payments as soon as γc∗i < 1.)

10. INCENTIVE COMPATIBILITY

Up to this point, we have not discussed the bidders' incentives for bidding any particular way. Specifically, the bids may not truthfully reflect the bidders' preferences over charities because a bidder may bid strategically,
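The greedy rule of Theorem 6, in its surplus variant (stop once every marginal utility drops below 1), can be sketched as follows for concave piecewise linear utilities; the segment encoding and the example bids are our own illustration, not the paper's notation:

```python
# Sketch (made-up bids): greedy surplus-maximizing clearing for quasilinear,
# concave bids. Per charity, utilities are summed into segments of the form
# (segment_length, slope), with slopes decreasing (concavity). We repeatedly
# raise the payment to the charity with the highest total marginal utility
# gamma, stopping once every gamma is at most 1.

def greedy_surplus(charity_segments):
    pi = {c: 0.0 for c in charity_segments}       # payments to charities
    seg_idx = {c: 0 for c in charity_segments}    # current segment per charity

    def gamma(c):
        segs = charity_segments[c]
        i = seg_idx[c]
        return segs[i][1] if i < len(segs) else 0.0

    while True:
        c_star = max(charity_segments, key=gamma)
        if gamma(c_star) <= 1:                    # stop as soon as gamma < 1
            return pi
        length, _ = charity_segments[c_star][seg_idx[c_star]]
        pi[c_star] += length                      # advance one whole segment
        seg_idx[c_star] += 1                      # (gamma is constant within it)

# Two charities: c1 has total marginal utility 3 on [0,2] then 0.5;
# c2 has 2 on [0,1] then 0.8.
pi = greedy_surplus({"c1": [(2, 3.0), (10, 0.5)],
                     "c2": [(1, 2.0), (10, 0.8)]})
print(pi)  # {'c1': 2.0, 'c2': 1.0}
```

Because gamma is constant within a segment and only decreases at breakpoints, advancing a whole segment at a time is equivalent to the continuous increase described in the theorem.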
misrepresenting her preferences in order to obtain a result that is better for herself. This means the mechanism is not strategy-proof. (We will show some concrete examples of this shortly.) This is not too surprising, because the mechanism described so far is, in a sense, a first-price mechanism, where the mechanism will extract as much payment from a bidder as her bid allows. Such mechanisms (for example, first-price auctions, where winners pay the value of their bids) are typically not strategy-proof: if a bidder reports her true valuation for an outcome, then if this outcome occurs, the payment the bidder will have to make will offset her gains from the outcome completely. Of course, we could try to change the rules of the game--which outcome (payment vector to charities) do we select for which bid vector, and which bidder pays how much--in order to make bidding truthfully beneficial, and to make the outcome better with regard to the bidders' true preferences. This is the field of mechanism design. In this section, we will briefly discuss the options that mechanism design provides for the expressive charity donation problem.

10.1 Strategic bids under the first-price mechanism

We first point out some reasons for bidders to misreport their preferences under the first-price mechanism described in the paper up to this point. First of all, even when there is only one charity, it may make sense to underbid one's true valuation for the charity. For example, suppose a bidder would like a charity to receive a certain amount x, but does not care if the charity receives more than that. Additionally, suppose that the other bids guarantee that the charity will receive at least x no matter what bid the bidder submits (and the bidder knows this). Then the bidder is best off not bidding at all (or submitting a utility for the charity of 0), to avoid having to make any payment. (This is known in economics as the free rider problem [14].) With multiple charities,
another kind of manipulation may occur, where the bidder attempts to steer others' payments towards her preferred charity. Suppose that there are two charities, and three bidders. The first bidder bids u11(πc1) = ... Now, the third bidder's true preferences are accurately represented⁵ by the bid u13(πc1) = 1 if πc1 ≥ 1, u13(πc1) = 0 otherwise; u23(πc2) = 3 if πc2 ≥ 1, u23(πc2) = 0 otherwise; and w3(u3) = u3/3 if u3 < 1, w3(u3) = 1/3 + (u3 − 1)/100 otherwise. Now, it is straightforward to check that, if the third bidder bids truthfully, regardless of whether the objective is surplus maximization or total donated, charity 1 will receive at least 1, and charity 2 will receive less than 1. The same is true if bidder 3 does not place a bid at all (as in the previous type of manipulation); hence bidder 3's utility will be 1 in this case. But now, if bidder 3 reports u13(πc1) = 0 everywhere; u23(πc2) = 3 if πc2 ≥ 1, u23(πc2) = 0 otherwise (this part of the bid is truthful); and w3(u3) = u3/3 if u3 < 1, w3(u3) = 1/3 otherwise; then charity 2 will receive at least 1, and bidder 3 will have to pay at most 1/3. Because up to this amount of payment, one unit of money corresponds to three units of utility to bidder 3, it follows his utility is now at least 3 − 1 = 2 > 1. We observe that in this case, the strategic bidder is not only affecting how much the bidders pay, but also how much the charities receive.

10.2 Mechanism design in the quasilinear setting

There are four reasons why the mechanism design approach is likely to be most successful in the setting of quasilinear preferences. First, historically, mechanism design has been most successful when the quasilinear assumption could be made. Second, because of this success, some very general mechanisms have been discovered for the quasilinear setting (for instance, the VCG mechanisms [24, 4, 10], or the dAGVA mechanism [6, 1]) which we could apply directly
to the expressive charity donation problem. Third, as we saw in Section 9, the clearing problem is much easier in this setting, and thus we are less likely to run into computational trouble for the mechanism design problem. Fourth, as we will show shortly, the quasilinearity assumption in some cases allows for decomposing the mechanism design problem over the charities (as it did for the simple clearing problem).

⁵Formally, this means that if the bidder is forced to pay the full amount that his bid allows for a particular vector of payments to charities, the bidder is indifferent between this and not participating in the mechanism at all. (Compare this to bidding truthfully in a first-price auction.)

Moreover, in the quasilinear setting (unlike in the general setting), it makes sense to pursue social welfare (the sum of the utilities) as the objective, because now 1) units of utility correspond directly to units of money, so that we do not have the problem of the bidders arbitrarily scaling their utilities; and 2) it is no longer possible to give a payment willingness function of 0 while still affecting the donations through a utility function. Before presenting the decomposition result, we introduce some terms from game theory. A type is a preference profile that a bidder can have and can report (thus, a type report is a bid). Incentive compatibility (IC) means that bidders are best off reporting their preferences truthfully; either regardless of the others' types (in dominant strategies), or in expectation over them (in Bayes-Nash equilibrium). Individual rationality (IR) means agents are at least as well off participating in the mechanism as not participating; either regardless of the others' types (ex-post), or in expectation over them (ex-interim). A mechanism is budget balanced if there is no flow of money into or out of the system--in general (ex-post), or in expectation over the type reports (ex-ante). A mechanism is efficient if it (always)
produces the efficient allocation of wealth to charities.\nTHEOREM 7.\nSuppose all agents' preferences are quasilinear.\nFurthermore, suppose that there exists a single-charity mechanism M that, for a certain subclass P of (quasilinear) preferences, under a given solution concept S (implementation in dominant strategies or Bayes-Nash equilibrium) and a given notion of individual rationality R (ex post, ex interim, or none), satisfies a certain notion of budget balance (ex post, ex ante, or none), and is ex-post efficient.\nThen there exists such a mechanism for any number of charities.\nTwo mechanisms that satisfy efficiency (and can in fact be applied directly to the multiple-charity problem without use of the previous theorem) are the VCG (which is incentive compatible in dominant strategies) and dAGVA (which is incentive compatible only in Bayes-Nash equilibrium) mechanisms.\nEach of them, however, has a drawback that would probably make it impractical in the setting of donations to charities.\nThe VCG mechanism is not budget balanced.\nThe dAGVA mechanism does not satisfy ex-post individual rationality.\nIn the next subsection, we will investigate if we can do better in the setting of donations to charities.\n10.3 Impossibility of efficiency\nIn this subsection, we show that even in a very restricted setting, and with minimal requirements on IC and IR constraints, it is impossible to create a mechanism that is efficient.\nThe case of step-functions in this theorem corresponds exactly to the case of a single, fixed-size, nonexcludable public good (the \"public good\" being that the charity receives the desired amount)--for which such an impossibility result is already known [14].\nMany similar results are known, probably the most famous of which is the Myerson-Satterthwaite impossibility result, which proves the impossibility of efficient bilateral trade under the same requirements [15].\nTheorem 7 indicates that there is no reason to decide on donations to 
multiple charities under a single mechanism (rather than a separate one for each charity), when an efficient mechanism with the desired properties exists for the single-charity case.\nHowever, because under the requirements of Theorem 8, no such mechanism exists, there may be a benefit to bringing the charities under the same umbrella.\nThe next proposition shows that this is indeed the case.\nPROPOSITION 2.\nThere exist settings with two charities where there exists no ex-post budget balanced, ex-post efficient, and ex-interim individually rational mechanism with Bayes-Nash equilibrium as the solution concept for either charity alone; but there exists an ex-post budget balanced, ex-post efficient, and ex-post individually rational mechanism with dominant strategies as the solution concept for both charities together.\n(Even when the conditions are the same as in Theorem 8, apart from the fact that there are now two charities.)\n11.\nCONCLUSION\nWe introduced a bidding language for expressing very general types of matching offers over multiple charities.\nWe formulated the corresponding clearing problem (deciding how much each bidder pays, and how much each charity receives), and showed that it is NP-complete to approximate to any ratio even in very restricted settings.\nWe gave a mixed-integer program formulation of the clearing problem, and showed that for concave bids (where utility functions and payment willingness function are concave), the program reduces to a linear program and can hence be solved in polynomial time.\nWe then showed that the clearing problem for a subclass of concave bids is at least as hard as the decision variant of linear programming, suggesting that we cannot do much better than a linear programming implementation for such bids.\nSubsequently, we showed that the clearing problem is much easier when bids are quasilinear (where payment willingness functions are linear)--for surplus, the problem decomposes across charities, and for payment 
maximization, a greedy approach is optimal if the bids are concave (although this latter problem is weakly NP-complete when the bids are not concave).\nFor the quasilinear setting, we studied the mechanism design question of making the bidders report their preferences truthfully rather than strategically.\nWe showed that an ex-post efficient mechanism is impossible even with only one charity and a very restricted class of bids.\nWe also showed that even though the clearing problem decomposes over charities in the quasilinear setting, there may be benefits to linking the charities from a mechanism design standpoint.\nThere are many directions for future research.\nOne is to build a web-based implementation of the (first-price) mechanism proposed in this paper.\nAnother is to study the computational scalability of our MIP\/LP approach.\nIt is also\nimportant to identify other classes of bids (besides concave ones) for which the clearing problem is tractable.\nMuch crucial work remains to be done on the mechanism design problem.\nFinally, are there good iterative mechanisms for charity donation?\n6","keyphrases":["express negoti","donat to chariti","negoti materi","bid languag","concav bid","linear program","quasilinear","mechan design","chariti support","express chariti donat","combinatori auction","econom effici","bid framework","donat-clear","threshold bid","payment willing function","incent compat","market clear"],"prmu":["P","P","P","P","P","P","P","P","M","R","U","R","M","U","M","M","U","M"]} {"id":"C-81","title":"Adaptive Duty Cycling for Energy Harvesting Systems","abstract":"Harvesting energy from the environment is feasible in many applications to ameliorate the energy limitations in sensor networks. In this paper, we present an adaptive duty cycling algorithm that allows energy harvesting sensor nodes to autonomously adjust their duty cycle according to the energy availability in the environment. 
The algorithm has three objectives, namely (a) achieving energy neutral operation, i.e., energy consumption should not be more than the energy provided by the environment, (b) maximizing the system performance based on an application utility model subject to the above energy-neutrality constraint, and (c) adapting to the dynamics of the energy source at run-time. We present a model that enables harvesting sensor nodes to predict future energy opportunities based on historical data. We also derive an upper bound on the maximum achievable performance assuming perfect knowledge about the future behavior of the energy source. Our methods are evaluated using data gathered from a prototype solar energy harvesting platform and we show that our algorithm can utilize up to 58% more environmental energy compared to the case when harvesting-aware power management is not used.","lvl-1":"Adaptive Duty Cycling for Energy Harvesting Systems Jason Hsu, Sadaf Zahedi, Aman Kansal, Mani Srivastava Electrical Engineering Department University of California Los Angeles {jasonh,kansal,szahedi,mbs} @ ee.ucla.edu Vijay Raghunathan NEC Labs America Princeton, NJ vijay@nec-labs.com ABSTRACT Harvesting energy from the environment is feasible in many applications to ameliorate the energy limitations in sensor networks.\nIn this paper, we present an adaptive duty cycling algorithm that allows energy harvesting sensor nodes to autonomously adjust their duty cycle according to the energy availability in the environment.\nThe algorithm has three objectives, namely (a) achieving energy neutral operation, i.e., energy consumption should not be more than the energy provided by the environment, (b) maximizing the system performance based on an application utility model subject to the above energyneutrality constraint, and (c) adapting to the dynamics of the energy source at run-time.\nWe present a model that enables harvesting sensor nodes to predict future energy opportunities based on historical 
data.\nWe also derive an upper bound on the maximum achievable performance assuming perfect knowledge about the future behavior of the energy source.\nOur methods are evaluated using data gathered from a prototype solar energy harvesting platform and we show that our algorithm can utilize up to 58% more environmental energy compared to the case when harvesting-aware power management is not used.\nCategories and Subject Descriptors C.2.4 [Computer Systems Organization]: Computer Communication Networks-Distributed Systems General Terms Algorithms, Design 1.\nINTRODUCTION Energy supply has always been a crucial issue in designing battery-powered wireless sensor networks because the lifetime and utility of the systems are limited by how long the batteries are able to sustain the operation.\nThe fidelity of the data produced by a sensor network begins to degrade once sensor nodes start to run out of battery power.\nTherefore, harvesting energy from the environment has been proposed to supplement or completely replace battery supplies to enhance system lifetime and reduce the maintenance cost of replacing batteries periodically.\nHowever, metrics for evaluating energy harvesting systems are different from those used for battery powered systems.\nEnvironmental energy is distinct from battery energy in two ways.\nFirst it is an inexhaustible supply which, if appropriately used, can allow the system to last forever, unlike the battery which is a limited resource.\nSecond, there is an uncertainty associated with its availability and measurement, compared to the energy stored in the battery which can be known deterministically.\nThus, power management methods based on battery status are not always applicable to energy harvesting systems.\nIn addition, most power management schemes designed for battery-powered systems only account for the dynamics of the energy consumers (e.g., CPU, radio) but not the dynamics of the energy supply.\nConsequently, battery powered systems 
usually operate at the lowest performance level that meets the minimum data fidelity requirement in order to maximize the system life.\nEnergy harvesting systems, on the other hand, can provide enhanced performance depending on the available energy.\nIn this paper, we will study how to adapt the performance to the available energy profile.\nThere exist many techniques to accomplish performance scaling at the node level, such as radio transmit power adjustment [1], dynamic voltage scaling [2], and the use of low power modes [3].\nHowever, these techniques require hardware support and may not always be available on resource constrained sensor nodes.\nAlternatively, a common performance scaling technique is duty cycling.\nLow power devices typically provide at least one low power mode in which the node is shut down and the power consumption is negligible.\nIn addition, the rate of duty cycling is directly related to system performance metrics such as network latency and sampling frequency.\nWe will use duty cycle adjustment as the primitive performance scaling technique in our algorithms.\n2.\nRELATED WORK Energy harvesting has been explored for several different types of systems, such as wearable computers [4], [5], [6], sensor networks [7], etc.\nSeveral technologies to extract energy from the environment have been demonstrated including solar, motion-based, biochemical, vibration-based [8], [9], [10], [11], and others are being developed [12], [13].\nWhile several energy harvesting sensor node platforms have been prototyped [14], [15], [16], there is a need for systematic power management techniques that provide performance guarantees during system operation.\nThe first work to take environmental energy into account for data routing was [17], followed by [18].\nWhile these works did demonstrate that environment aware decisions improve performance compared to battery aware decisions, their objective was not to achieve energy neutral operation.\nOur proposed
techniques attempt to maximize system performance while maintaining energy-neutral operation.\n3.\nSYSTEM MODEL The energy usage considerations in a harvesting system vary significantly from those in a battery powered system, as mentioned earlier.\nWe propose the model shown in Figure 1 for designing energy management methods in a harvesting system.\nThe functions of the various blocks shown in the figure are discussed below.\nThe precise methods used in our system to achieve these functions will be discussed in subsequent sections.\nHarvested Energy Tracking: This block represents the mechanisms used to measure the energy received from the harvesting device, such as the solar panel.\nSuch information is useful for determining the energy availability profile and adapting system performance based on it.\nCollecting this information requires that the node hardware be equipped with the facility to measure the power generated from the environment, and the Heliomote platform [14] we used for evaluating the algorithms has this capability.\nEnergy Generation Model: For wireless sensor nodes with limited storage and processing capabilities to be able to use the harvested energy data, models that represent the essential components of this information without using extensive storage are required.\nThe purpose of this block is to provide a model for the energy available to the system in a form that may be used for making power management decisions.\nThe data measured by the energy tracking block is used here to predict future energy availability.\nA good prediction model should have a low prediction error and provide predicted energy values for durations long enough to make meaningful performance scaling decisions.\nFurther, for energy sources that exhibit both long-term and short-term patterns (e.g., diurnal and climate variations vs. 
weather patterns for solar energy), the model must be able to capture both characteristics.\nSuch a model can also use information from external sources such as local weather forecast service to improve its accuracy.\nEnergy Consumption Model: It is also important to have detailed information about the energy usage characteristics of the system, at various performance levels.\nFor general applicability of our design, we will assume that only one sleep mode is available.\nWe assume that the power consumption in the sleep and active modes is known.\nIt may be noted that for low power systems with more advanced capabilities such as dynamic voltage scaling (DVS), multiple low power modes, and the capability to shut down system components selectively, the power consumption in each of the states and the resultant effect on application performance should be known to make power management decisions.\nEnergy Storage Model: This block represents the model for the energy storage technology.\nSince all the generated energy may not be used instantaneously, the harvesting system will usually have some energy storage technology.\nStorage technologies (e.g., batteries and ultra-capacitors) are non-ideal, in that there is some energy loss while storing and retrieving energy from them.\nThese characteristics must be known to efficiently manage energy usage and storage.\nThis block also includes the system capability to measure the residual stored energy.\nMost low power systems use batteries to store energy and provide residual battery status.\nThis is commonly based on measuring the battery voltage which is then mapped to the residual battery energy using the known charge to voltage relationship for the battery technology in use.\nMore sophisticated methods which track the flow of energy into and out of the battery are also available.\nHarvesting-aware Power Management: The inputs provided by the previously mentioned blocks are used here to determine the suitable power management 
strategy for the system.\nPower management could be carried out to meet different objectives in different applications.\nFor instance, in some systems, the harvested energy may marginally supplement the battery supply and the objective may be to maximize the system lifetime.\nA more interesting case is when the harvested energy is used as the primary source of energy for the system with the objective of achieving indefinitely long system lifetime.\nIn such cases, the power management objective is to achieve energy neutral operation.\nIn other words, the system should only use as much energy as harvested from the environment and attempt to maximize performance within this available energy budget.\n4.\nTHEORETICALLY OPTIMAL POWER MANAGEMENT We develop the following theory to understand the energy neutral mode of operation.\nLet us define Ps(t) as the energy harvested from the environment at time t, and the energy being consumed by the load at that time is Pc(t).\nFurther, we model the non-ideal storage buffer by its round-trip efficiency η (strictly less than 1) and a constant leakage power Pleak.\nUsing this notation, applying the rule of energy conservation leads to the following inequality: $B_0 + \eta \int_0^T [P_s(t) - P_c(t)]^+ \, dt - \int_0^T [P_c(t) - P_s(t)]^+ \, dt - \int_0^T P_{leak} \, dt \ge 0$ (1) where B0 is the initial battery level and the function [X]+ = X if X > 0 and zero otherwise.\nDEFINITION 1 ((ρ,σ1,σ2) function): A non-negative, continuous and bounded function P(t) is said to be a (ρ,σ1,σ2) function if and only if for any value of finite real number T, the following is satisfied: $\rho T - \sigma_2 \le \int_0^T P(t) \, dt \le \rho T + \sigma_1$ (2) This function can be used to model both energy sources and loads.\nIf the harvested energy profile Ps(t) is a (ρ1,σ1,σ2) function, then the average rate of available energy over long durations becomes ρ1, and
the burstiness is bounded by σ1 and σ2.\nSimilarly, Pc(t) can be modeled as a (ρ2,σ3) function, where ρ2 and σ3 are used to place an upper bound on power consumption (the inequality on the right side) while there are no minimum power consumption constraints.\nThe condition for energy neutrality, equation (1), leads to the following theorem, based on the energy production, consumption, and energy buffer models discussed above.\nTHEOREM 1 (ENERGY NEUTRAL OPERATION): Consider a harvesting system in which the energy production profile is characterized by a (ρ1, σ1, σ2) function, the load is characterized by a (ρ2, σ3) function and the energy buffer is characterized by parameters η for storage efficiency, and Pleak for leakage power.\nThe following conditions are sufficient for the system to achieve energy neutrality: $\rho_2 \le \eta\rho_1 - P_{leak}$ (3), $B_0 \ge \eta\sigma_2 + \sigma_3$ (4), $B \ge B_0$ (5), where B0 is the initial energy stored in the buffer and provides a lower bound on the capacity of the energy buffer B.\nThe proof is presented in our prior work [19].\nTo adjust the duty cycle D using our performance scaling algorithm, we assume the following relation between duty cycle and the perceived utility of the system to the user: Suppose the utility of the application to the user is represented by U(D) when the system operates at a duty cycle D.
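The sufficient conditions of Theorem 1 involve only simple arithmetic, so they can be verified numerically before deployment. A minimal sketch in Python; all parameter values here are hypothetical, chosen only to illustrate the check:

```python
# Numerical check of Theorem 1's sufficient conditions for energy-neutral
# operation. All parameter values below are hypothetical, for illustration.

def energy_neutral(rho1, sigma1, sigma2, rho2, sigma3, eta, p_leak, b0, b_cap):
    """Return True if the source/load parameters satisfy conditions (3)-(5):
    rho2 <= eta*rho1 - p_leak, b0 >= eta*sigma2 + sigma3, and b_cap >= b0."""
    return (rho2 <= eta * rho1 - p_leak
            and b0 >= eta * sigma2 + sigma3
            and b_cap >= b0)

# Example: a source averaging 40 mW with bounded burstiness, a load drawing
# 25 mW on average, and a battery with 70% round-trip efficiency.
ok = energy_neutral(rho1=40.0, sigma1=50.0, sigma2=50.0,
                    rho2=25.0, sigma3=30.0,
                    eta=0.7, p_leak=1.0, b0=65.0, b_cap=100.0)
```

With these hypothetical numbers, ρ2 = 25 ≤ ηρ1 − Pleak = 27 and B0 = 65 ≥ ησ2 + σ3 = 65, so all three conditions hold and the check passes.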
Then, $U(D) = 0$ if $D < D_{min}$; $U(D) = k_1 D + \beta$ if $D_{min} \le D \le D_{max}$; and $U(D) = k_2$ if $D > D_{max}$.\nThis is a fairly general and simple model and the specific values of Dmin and Dmax may be determined as per application requirements.\nAs an example, consider a sensor node designed to detect intrusion across a periphery.\nIn this case, a linear increase in duty cycle translates into a linear increase in the detection probability.\nThe fastest and the slowest speeds of the intruders may be known, leading to a minimum and maximum sensing delay tolerable, which results in the relevant Dmax and Dmin for the sensor node.\nFigure 1.\nSystem model for an energy harvesting system.\nWhile there may be cases where the relationship between utility and duty cycle may be non-linear, in this paper, we restrict our focus on applications that follow this linear model.\nIn view of the above models for the system components and the required performance, the objective of our power management strategy is to adjust the duty cycle D(i) dynamically so as to maximize the total utility U(D) over a period of time, while ensuring energy neutral operation for the sensor node.\nBefore discussing the performance scaling methods for harvesting aware duty cycle adaptation, let us first consider the optimal power management strategy that is possible for a given energy generation profile.\nFor the calculation of the optimal strategy, we assume complete knowledge of the energy availability profile at the node, including the availability in the future.\nThe calculation of the optimal strategy is a useful tool for evaluating the performance of our proposed algorithm.\nThis is particularly useful for our algorithm since no prior algorithms are available to serve as a baseline for comparison.\nSuppose the time axis is partitioned into discrete slots of duration
ΔT, and the duty cycle adaptation calculation is carried out over a window of Nw such time slots.\nWe define the following energy profile variables, with the index i ranging over {1,..., Nw}: Ps(i) is the power output from the harvested source in time slot i, averaged over the slot duration, Pc is the power consumption of the load in active mode, and D(i) is the duty cycle used in slot i, whose value is to be determined.\nB(i) is the residual battery energy at the beginning of slot i. Following this convention, the battery energy left after the last slot in the window is represented by B(Nw+1).\nThe values of these variables will depend on the choice of D(i).\nThe energy used directly from the harvested source and the energy stored and used from the battery must be accounted for differently.\nFigure 2 shows two possible cases for Ps(i) in a time slot.\nPs(i) may either be less than or higher than Pc, as shown on the left and right respectively.\nWhen Ps(i) is lower than Pc, some of the energy used by the load comes from the battery, while when Ps(i) is higher than Pc, all the energy used is supplied directly from the harvested source.\nThe crosshatched area shows the energy that is available for storage into the battery while the hashed area shows the energy drawn from the battery.\nWe can write the energy used from the battery in any slot i as: $B(i) - B(i+1) = \Delta T\,D(i)\,[P_c - P_s(i)]^+ - \eta\,\Delta T\,P_s(i)\,\{1 - D(i)\} - \eta\,\Delta T\,D(i)\,[P_s(i) - P_c]^+$ (6) In equation (6), the first term on the right hand side measures the energy drawn from the battery when Ps(i) < Pc, the next term measures the energy stored into the battery when the node is in sleep mode, and the last term measures the energy stored into the battery in active mode if Ps(i) > Pc.\nFor energy neutral operation, we require the battery at the end of the window of Nw slots to be greater than or equal to the starting battery.\nClearly,
battery level will go down when the harvested energy is not available and the system is operated from stored energy.\nHowever, the window Nw is judiciously chosen such that over that duration, we expect the environmental energy availability to complete a periodic cycle.\nFor instance, in the case of solar energy harvesting, Nw could be chosen to be a twenty-four hour duration, corresponding to the diurnal cycle in the harvested energy.\nThis is an approximation since an ideal choice of the window size would be infinite, but a finite size must be used for analytical tractability.\nFurther, the battery level cannot be negative at any time, and this is ensured by having a large enough initial battery level B0 such that node operation is sustained even in the case of total blackout during a window period.\nStating the above constraints quantitatively, we can express the calculation of the optimal duty cycles as an optimization problem below: $\max \sum_{i=1}^{N_w} D(i)$ (7) subject to $B(i) - B(i+1) = \Delta T\,D(i)\,[P_c - P_s(i)]^+ - \eta\,\Delta T\,P_s(i)\,\{1 - D(i)\} - \eta\,\Delta T\,D(i)\,[P_s(i) - P_c]^+$ (8) $B(1) = B_0$ (9) $B(N_w+1) \ge B_0$ (10) $D(i) \ge D_{min} \ \forall i \in \{1,\ldots,N_w\}$ (11) $D(i) \le D_{max} \ \forall i \in \{1,\ldots,N_w\}$ (12) The solution to the optimization problem yields the duty cycles that must be used in every slot and the evolution of residual battery over the course of Nw slots.\nNote that while the constraints above contain the non-linear function [x]+, the quantities occurring within that function are all known constants.\nThe variable quantities occur only in linear terms and hence the above optimization problem can be solved using standard linear programming techniques, available in popular optimization toolboxes.\n5.\nHARVESTING-AWARE POWER MANAGEMENT We now present a practical algorithm for power management that may be used for adapting the performance
based on harvested energy information.\nThis algorithm attempts to achieve energy neutral operation without using knowledge of the future energy availability and maximizes the achievable performance within that constraint.\nThe harvesting-aware power management strategy consists of three parts.\nThe first part is an instantiation of the energy generation model which tracks past energy input profiles and uses them to predict future energy availability.\nThe second part computes the optimal duty cycles based on the predicted energy, and this step uses our computationally tractable method to solve the optimization problem.\nThe third part consists of a method to dynamically adapt the duty cycle in response to the observed energy generation profile in real time.\nThis step is required since the observed energy generation may deviate significantly from the predicted energy availability and energy neutral operation must be ensured with the actual energy received rather than the predicted values.\n5.1.\nEnergy Prediction Model We use a prediction model based on Exponentially Weighted Moving-Average (EWMA).\nThe method is designed to exploit the diurnal cycle in solar energy but at the same time adapt to the seasonal variations.\nA historical summary of the energy generation profile is maintained for this purpose.\nWhile the storage data size is limited to a vector length of Nw values in order to minimize the memory overheads of the power management algorithm, the window size is effectively infinite as each value in the history window depends on all the observed data up to that instant.\nThe window size is chosen to be 24 hours and each time slot is taken to be 30 minutes as the variation in generated power by the solar panel using this setting is less than 10% between each adjacent slots.\nThis yields Nw = 48.\nSmaller slot durations may be used at the expense of a higher Nw.\nThe historical summary maintained is derived as follows.\nOn a typical day, we expect the energy 
generation to be similar to the energy generation at the same time on the previous days.\nThe value of energy generated in a particular slot is maintained as a weighted average of the energy received in the same time-slot during all observed days.\nThe weights are exponential, resulting in decaying contribution from older data.\nFigure 2.\nTwo possible cases for energy calculations.\nMore specifically, the historical average maintained for each slot is given by: $\bar{x}_k = \alpha\,\bar{x}_{k-1} + (1 - \alpha)\,x_k$ where α is the value of the weighting factor, $x_k$ is the observed value of energy generated in the slot, and $\bar{x}_{k-1}$ is the previously stored historical average.\nIn this model, the importance of each day relative to the previous one remains constant because the same weighting factor was used for all days.\nThe average value derived for a slot is treated as an estimate of predicted energy value for the slot corresponding to the subsequent day.\nThis method helps the historical average values adapt to the seasonal variations in energy received on different days.\nOne of the parameters to be chosen in the above prediction method is the parameter α, which is a measure of rate of shift in energy pattern over time.\nSince this parameter is affected by the characteristics of the energy and sensor node location, the system should have a training period during which this parameter will be determined.\nTo determine a good value of α, we collected energy data over 72 days and compared the average error of the prediction method for various values of α.\nThe error based on the different values of α is shown in Figure 3.\nThis curve suggests an optimum value of α = 0.15 for minimum prediction error and this value will be used in the remainder of this paper.\n5.2.\nLow-complexity Solution The energy values predicted for the next window of Nw slots are used to calculate the desired duty cycles for
the next window, assuming the predicted values match the observed values in the future.\nSince our objective is to develop a practical algorithm for embedded computing systems, we present a simplified method to solve the linear programming problem presented in Section 4.\nTo this end, we define the sets S and D as follows: $S = \{\, i \mid P_s(i) - P_c \ge 0 \,\}$ and $D = \{\, i \mid P_c - P_s(i) > 0 \,\}$.\nThe two sets differ by the condition of whether the node operation can be sustained entirely from environmental energy.\nIn the case that energy produced from the environment is not sufficient, the battery will be discharged to supplement the remaining energy.\nNext we sum up both sides of (6) over the entire Nw window and rewrite it with the new notation: $B_1 - B_{N_w+1} = \sum_{i \in D} \Delta T\,D(i)\,[P_c - P_s(i)] - \sum_{i=1}^{N_w} \eta\,\Delta T\,P_s(i) + \sum_{i=1}^{N_w} \eta\,\Delta T\,P_s(i)\,D(i) - \sum_{i \in S} \eta\,\Delta T\,D(i)\,[P_s(i) - P_c]$ The term on the left hand side is actually the battery energy used over the entire window of Nw slots, which can be set to 0 for energy neutral operation.\nAfter some algebraic manipulation, this yields: $\sum_{i=1}^{N_w} P_s(i) = \sum_{i \in D} \left( \frac{P_c}{\eta} + \left(1 - \frac{1}{\eta}\right) P_s(i) \right) D(i) + \sum_{i \in S} P_c\,D(i)$ (13) The term on the left hand side is the total energy received in Nw slots.\nThe first term on the right hand side can be interpreted as the total energy consumed during the D slots and the second term is the total energy consumed during the S slots.\nWe can now replace the three constraints (8), (9), and (10) in the original problem with (13), restating the optimization problem as follows: $\max \sum_{i=1}^{N_w} D(i)$ subject to (13), $D(i) \ge D_{min} \ \forall i \in \{1,\ldots,N_w\}$, and $D(i) \le D_{max} \ \forall i \in \{1,\ldots,N_w\}$.\nThis form facilitates a low complexity solution that does not require a general linear programming solver.\nSince our objective is to maximize the total system utility, it is preferable to set the duty cycle to Dmin for time slots where the utility per unit energy is the least.\nOn the other hand, we would also like the time slots with the highest Ps to operate at Dmax because of better efficiency of using energy directly from the energy source.\nCombining these two characteristics, we define the utility coefficient for each slot i as follows: $W(i) = 1/P_c$ for $i \in S$, and $W(i) = 1/\left( \frac{P_c}{\eta} + \left(1 - \frac{1}{\eta}\right) P_s(i) \right)$ for $i \in D$, where W(i) is a representation of how efficient the energy usage in a particular time slot i is.\nA larger W(i) indicates more system utility per unit energy in slot i and vice versa.\nThe algorithm starts by assuming D(i) = Dmin for i ∈ {1,...,Nw} because of the minimum duty cycle requirement, and computes the remaining system energy R by: $R = \sum_{i=1}^{N_w} P_s(i) - \sum_{i \in D} \left( \frac{P_c}{\eta} + \left(1 - \frac{1}{\eta}\right) P_s(i) \right) D(i) - \sum_{i \in S} P_c\,D(i)$ (14) A negative R concludes that the optimization problem is infeasible, meaning the system cannot achieve energy neutrality even at the minimum duty cycle.\nIn this case, the system designer is responsible for increasing the environment energy availability (e.g., by using larger solar panels).\nIf R is positive, it means the system has excess energy that is not being used, and this may be allocated to increase the duty cycle beyond Dmin for some slots.\nSince our objective is to maximize the total system utility, the most
efficient way to allocate the excess energy is to assign duty cycle Dmax to the slots with the highest W(i).\nSo, the coefficients W(i) are arranged in decreasing order and duty cycle Dmax is assigned to the slots beginning with the largest coefficients until the excess energy available, R (computed by (14) in every iteration), is insufficient to assign Dmax to another slot.\nThe remaining energy, RLast, is used to increase the duty cycle to some value between Dmin and Dmax in the slot with the next lower coefficient.\nDenoting this slot with index j, the duty cycle is given by: $D(j) = D_{min} + R_{Last}/P_c$ if $j \in S$, and $D(j) = D_{min} + R_{Last}/\left( \frac{P_c}{\eta} + \left(1 - \frac{1}{\eta}\right) P_s(j) \right)$ if $j \in D$.\nThe above solution to the optimization problem requires only simple arithmetic calculations and one sorting step which can be easily implemented on an embedded platform, as opposed to implementing a general linear program solver.\n5.3.\nSlot-by-slot continual duty cycle adaptation.\nThe observed energy values may vary greatly from the predicted ones, such as due to the effect of clouds or other sudden changes.\nIt is thus important to adapt the duty cycles calculated using the predicted values, to the actual energy measurements in real time to ensure energy neutrality.\nDenote the initial duty cycle assignments for each time slot i computed using the predicted energy values as D(i), i ∈ {1,...,Nw}.\nFirst we compute the difference between the predicted power level Ps(i) and the actual power level observed, Ps'(i), in every slot i.
Then, the excess energy in slot i, denoted by X, can be obtained as follows:
X = Ps′(i) − Ps(i), if Ps′(i) > Pc
X = [Ps′(i) − Ps(i)] · (1 + D(i)·(1/η − 1)), if Ps′(i) ≤ Pc
Figure 3. Choice of prediction parameter (average error in mA vs. α).
The upper term accounts for the energy difference when the actual received power exceeds the power drawn by the load. On the other hand, if the received power is less than Pc, we must also account for the extra energy drawn from the battery by the load, which is a function of the duty cycle used in time slot i and the battery efficiency factor η. When more energy is received than predicted, X is positive and that excess energy is available for use in subsequent slots, while if X is negative, that energy must be compensated for in subsequent slots.
CASE I: X < 0. In this case, we want to reduce the duty cycles used in future slots in order to make up for this shortfall of energy. Since our objective is to maximize the total system utility, we have to reduce the duty cycles of the time slots with the smallest normalized utility coefficient W(i). This is accomplished by first sorting the coefficients W(j), for j > i, in increasing order, and then iteratively reducing each D(j) to Dmin until the total reduction in energy consumption equals |X|.
CASE II: X > 0. Here, we want to increase the duty cycles used in the future to utilize the excess energy received in the recent time slot. In contrast to Case I, the duty cycles of the future time slots with the highest utility coefficients W(i) should be increased first in order to maximize the total system utility. Suppose the duty cycle is changed by d in slot j.
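The piecewise computation of X above can be sketched as follows; this is a minimal illustration with hypothetical η and Pc, not the authors' implementation:

```python
def excess_energy(p_pred, p_obs, duty, eta=0.7, p_c=40.0):
    """Excess energy X in a slot: positive when more power was received
    (p_obs) than predicted (p_pred).  eta and p_c are hypothetical values."""
    dev = p_obs - p_pred
    if p_obs > p_c:
        # surplus beyond the load's draw flows straight into storage
        return dev
    # below p_c, the active fraction D(i) offsets battery draws whose
    # effective cost is 1/eta, amplifying the deviation
    return dev * (1.0 + duty * (1.0 / eta - 1.0))
```

A shortfall in a shaded slot is thus magnified by the battery inefficiency, which is why Case I must claw back more than the raw deviation.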
Define a quantity R(j, d), the change in energy consumption when the duty cycle of slot j is changed by d, as follows:
R(j, d) = d · Pc, if Ps(j) > Pc
R(j, d) = d · (Pc/η + Ps(j)·(1 − 1/η)), if Ps(j) ≤ Pc
The precise procedure used to adapt the duty cycle to account for the above factors is presented in Algorithm 1. This calculation is performed at the end of every slot to set the duty cycle for the next slot. We claim that our duty cycling algorithm is energy neutral, because a surplus of energy in the previous time slot always translates into an additional energy opportunity for future time slots, and vice versa. The claim may be violated in cases of severe energy shortage, especially towards the end of a window: for example, a large deficit in the energy supply cannot be restored if there is no further energy input before the end of the window. In such cases, the offset is carried over to the next window so that long-term energy neutrality is still maintained.
6. EVALUATION
Our adaptive duty cycling algorithm was evaluated using an actual solar energy profile measured with Heliomote, a sensor node capable of harvesting solar energy [14]. This platform tracks not only the generated energy but also the energy flow into and out of the battery, providing an accurate estimate of the stored energy. The energy harvesting platform was deployed in a residential area in Los Angeles from the beginning of June through the middle of August, for a total of 72 days. The sensor node used is a Mica2 mote running at a fixed 40% duty cycle with an initially full battery. The battery voltage and the net current from the solar panel are sampled at a period of 10 seconds. The energy generation profile for that duration, measured by tracking the output current from the solar cell, is shown in Figure 4, on both continuous and diurnal scales. We can observe that although the energy profile varies from day to day, it still exhibits a
general pattern over several days.
Figure 4. Solar Energy Profile (Left: Continuous, Right: Diurnal).
6.1. Prediction Model
We first evaluate the performance of the prediction model, judged by the absolute error between the predicted and actual energy profiles. Figure 5 shows the average error of each time slot, in mA, over the entire 72 days. Generally, the error is larger during the daytime, because that is when weather conditions can cause deviations in the received energy, while the predictions made for nighttime are mostly accurate.
Figure 5. Average Predictor Error in mA.
6.2. Adaptive Duty Cycling Algorithm
Prior methods to optimize performance while achieving energy neutral operation using harvested energy are scarce. Instead, we compare the performance of our algorithm against two extremes: the theoretical optimum, calculated assuming complete knowledge of future energy availability, and a simple approach that attempts to achieve energy neutrality using a fixed duty cycle without accounting for battery inefficiency. The optimal duty cycles are calculated for each slot using future knowledge of the energy actually received in that slot. For the simple approach, the duty cycle is kept constant within each day and is computed by taking the ratio of the predicted energy availability to the maximum usage, which guarantees that the sensor node will never deplete its battery running at this duty cycle:
D = η · Σ_{i=1}^{Nw} Ps(i) / (Nw · Pc)
Input: D: initial duty cycles; X: excess energy due to error in the prediction; Ps: predicted energy profile; i: index of the current time slot
Output: D: updated duty cycles for one or more subsequent slots
AdaptDutyCycle()
Iteration: at each time slot do:
if X > 0
    Wsorted := W{1, ..., Nw} sorted in descending order
    Q := indices of Wsorted
    for k = 1 to |Q|
        if Q(k) ≤ i or D(Q(k)) ≥ Dmax   // slot already passed, or already at maximum
            continue
        if R(Q(k), Dmax − D(Q(k))) < X
            X := X − R(Q(k), Dmax − D(Q(k)))
            D(Q(k)) := Dmax
        else                             // X insufficient to increase duty cycle to Dmax
            if Ps(Q(k)) > Pc
                D(Q(k)) := D(Q(k)) + X / Pc
            else
                D(Q(k)) := D(Q(k)) + X / (Pc/η + Ps(Q(k))·(1 − 1/η))
            break
if X < 0
    Wsorted := W{1, ..., Nw} sorted in ascending order
    Q := indices of Wsorted
    for k = 1 to |Q|
        if Q(k) ≤ i or D(Q(k)) ≤ Dmin
            continue
        if R(Q(k), Dmin − D(Q(k))) > X
            X := X − R(Q(k), Dmin − D(Q(k)))
            D(Q(k)) := Dmin
        else
            if Ps(Q(k)) > Pc
                D(Q(k)) := D(Q(k)) + X / Pc
            else
                D(Q(k)) := D(Q(k)) + X / (Pc/η + Ps(Q(k))·(1 − 1/η))
            break
ALGORITHM 1. Pseudocode for the duty-cycle adaptation algorithm.
We then compare the performance of our algorithm to the two extremes under varying battery efficiency. Figure 6 shows the results, using Dmax = 0.8 and Dmin = 0.3: the battery efficiency η is varied from 0.5 to 1 on the x-axis, and the solar energy utilization achieved by each of the three algorithms is shown on the y-axis, i.e., the fraction of the net received energy that is used to perform useful work rather than lost to storage inefficiency. As can be seen from the figure, the battery efficiency factor has a great impact on the performance of the three approaches. All three converge to 100% utilization with a perfect battery (η = 1), that is, when no energy is lost by storing it in the battery. When battery inefficiency is taken into account, both the adaptive and the optimal approach achieve a much better solar energy utilization than the simple one. Additionally, the results show that our adaptive duty cycling algorithm performs extremely close to the optimal.
[Figure 6: solar energy utilization (%) vs. battery round-trip efficiency η, for the Optimal, Adaptive, and Simple approaches]
We also compare the performance of our algorithm
with different values of Dmin and Dmax for η = 0.7, which is typical of NiMH batteries. These results are shown in Table 1 as the percentage of energy saved by the optimal and adaptive approaches; this is energy that would normally be wasted in the simple approach. The figures and the table indicate that our real-time algorithm is able to achieve a performance very close to the optimal feasible one. In addition, these results show that environmental energy harvesting with appropriate power management can achieve much better utilization of the environmental energy.
Dmax      0.8     0.8     0.8     0.5     0.9     1.0
Dmin      0.05    0.1     0.3     0.2     0.2     0.2
Adaptive  51.0%   48.2%   42.3%   29.4%   54.7%   58.7%
Optimal   52.3%   49.6%   43.7%   36.7%   56.6%   60.8%
7. CONCLUSIONS
We discussed various issues in power management for systems powered using environmentally harvested energy. Specifically, we designed a method for optimizing performance subject to the constraint of energy neutral operation. We also derived a theoretically optimal bound on the performance and showed that our proposed algorithm operates very close to the optimal. The proposals were evaluated using real data collected with an energy harvesting sensor node deployed in an outdoor environment. Our method has significant advantages over currently used methods, which are based on a conservative estimate of the duty cycle and can provide only sub-optimal performance. However, this work is only a first step towards optimal solutions for energy neutral operation. It is designed for a specific power scaling method based on adapting the duty cycle. Several other power scaling methods, such as DVS, submodule power switching, and the use of multiple low power modes, are also available. It is thus of interest to extend our methods to exploit these advanced capabilities.
8. ACKNOWLEDGEMENTS
This research was funded in part through support provided by DARPA under the PAC/C program, the National Science Foundation (NSF) under award #0306408, and
the UCLA Center for Embedded Networked Sensing (CENS). Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of DARPA, NSF, or CENS.
REFERENCES
[1] R. Ramanathan and R. Hain. Topology control of multihop wireless networks using transmit power adjustment. In Proc. IEEE Infocom, vol. 2, pp. 404-413, March 2000.
[2] T. A. Pering, T. D. Burd, and R. W. Brodersen. The simulation and evaluation of dynamic voltage scaling algorithms. In Proc. ACM ISLPED, pp. 76-81, 1998.
[3] L. Benini and G. De Micheli. Dynamic Power Management: Design Techniques and CAD Tools. Kluwer Academic Publishers, Norwell, MA, 1997.
[4] J. Kymissis, C. Kendall, J. Paradiso, and N. Gershenfeld. Parasitic power harvesting in shoes. In ISWC, pp. 132-139. IEEE Computer Society Press, October 1998.
[5] N. S. Shenck and J. A. Paradiso. Energy scavenging with shoe-mounted piezoelectrics. IEEE Micro, 21(3):30-42, May-June 2001.
[6] T. Starner. Human-powered wearable computing. IBM Systems Journal, 35(3-4), 1996.
[7] M. Rahimi, H. Shah, G. S. Sukhatme, J. Heidemann, and D. Estrin. Studying the feasibility of energy harvesting in a mobile sensor network. In ICRA, 2003.
[8] C. Melhuish. The EcoBot project. www.ias.uwe.ac.uk/energy autonomy/EcoBot web page.html.
[9] J. M. Rabaey, M. J. Ammer, J. L. da Silva Jr., D. Patel, and S. Roundy. PicoRadio supports ad-hoc ultra-low power wireless networking. IEEE Computer, pp. 42-48, July 2000.
[10] J. A. Paradiso and M. Feldmeier. A compact, wireless, self-powered pushbutton controller. In ACM Ubicomp, pp. 299-304, Atlanta, GA, USA, September 2001. Springer-Verlag Berlin Heidelberg.
[11] S. E. Wright, D. S. Scott, J. B. Haddow, and M. A. Rosen. The upper limit to solar energy conversion. Vol. 1, pp. 384-392, July 2000.
[12] DARPA energy harvesting projects. http://www.darpa.mil/dso/trans/energy/projects.html.
[13] W. Weber. Ambient intelligence: industrial research on a visionary concept. In Proc. International Symposium on Low Power Electronics and Design, pp. 247-251. ACM Press, 2003.
[14] V. Raghunathan, A. Kansal, J. Hsu, J. Friedman, and M. B. Srivastava. Design considerations for solar energy harvesting wireless embedded systems. In IPSN/SPOTS, April 2005.
[15] X. Jiang, J. Polastre, and D. Culler. Perpetual environmentally powered sensor networks. In IPSN/SPOTS, April 25-27, 2005.
[16] C. Park, P. H. Chou, and M. Shinozuka. DuraNode: wireless networked sensor for structural health monitoring. In Proc. 4th IEEE International Conference on Sensors, Irvine, CA, Oct. 31 - Nov. 1, 2005.
[17] A. Kansal and M. B. Srivastava. An environmental energy harvesting framework for sensor networks. In International Symposium on Low Power Electronics and Design, pp. 481-486. ACM Press, 2003.
[18] T. Voigt, H. Ritter, and J. Schiller. Utilizing solar power in wireless sensor networks. In LCN, 2003.
[19] A. Kansal, J. Hsu, S. Zahedi, and M. B.
Srivastava.\nPower management in energy harvesting sensor networks.\nTechnical Report TR-UCLA-NESL200603-02, Networked and Embedded Systems Laboratory, UCLA, March 2006.\nFigure 6.\nDuty Cycles achieved with respect to \u03b7 TABLE 1.\nEnergy Saved by adaptive and optimal approach.\n185","lvl-3":"Adaptive Duty Cycling for Energy Harvesting Systems\n{jasonh, kansal, szahedi, mbs} @ ee.ucla.edu vijay@nec-labs.com\nABSTRACT\nHarvesting energy from the environment is feasible in many applications to ameliorate the energy limitations in sensor networks.\nIn this paper, we present an adaptive duty cycling algorithm that allows energy harvesting sensor nodes to autonomously adjust their duty cycle according to the energy availability in the environment.\nThe algorithm has three objectives, namely (a) achieving energy neutral operation, i.e., energy consumption should not be more than the energy provided by the environment, (b) maximizing the system performance based on an application utility model subject to the above energyneutrality constraint, and (c) adapting to the dynamics of the energy source at run-time.\nWe present a model that enables harvesting sensor nodes to predict future energy opportunities based on historical data.\nWe also derive an upper bound on the maximum achievable performance assuming perfect knowledge about the future behavior of the energy source.\nOur methods are evaluated using data gathered from a prototype solar energy harvesting platform and we show that our algorithm can utilize up to 58% more environmental energy compared to the case when harvesting-aware power management is not used.\n1.\nINTRODUCTION\nEnergy supply has always been a crucial issue in designing battery-powered wireless sensor networks because the lifetime and utility of the systems are limited by how long the batteries are able to sustain the operation.\nThe fidelity of the data produced by a sensor network begins to degrade once sensor nodes start to run out of battery 
power.\nTherefore, harvesting energy from the environment has been proposed to supplement or completely replace battery supplies to enhance system lifetime and reduce the maintenance cost of replacing batteries periodically.\nHowever, metrics for evaluating energy harvesting systems are different from those used for battery powered systems.\nEnvironmental energy is distinct from battery energy in two ways.\nFirst it is an inexhaustible supply which, if appropriately used, can allow the system to last forever, unlike the battery which is a limited resource.\nSecond, there is an uncertainty associated with its availability and measurement, compared to the energy stored in the\nbattery which can be known deterministically.\nThus, power management methods based on battery status are not always applicable to energy harvesting systems.\nIn addition, most power management schemes designed for battery-powered systems only account for the dynamics of the energy consumers (e.g., CPU, radio) but not the dynamics of the energy supply.\nConsequently, battery powered systems usually operate at the lowest performance level that meets the minimum data fidelity requirement in order to maximize the system life.\nEnergy harvesting systems, on the other hand, can provide enhanced performance depending on the available energy.\nIn this paper, we will study how to adapt the performance of the available energy profile.\nThere exist many techniques to accomplish performance scaling at the node level, such as radio transmit power adjustment [1], dynamic voltage scaling [2], and the use of low power modes [3].\nHowever, these techniques require hardware support and may not always be available on resource constrained sensor nodes.\nAlternatively, a common performance scaling technique is duty cycling.\nLow power devices typically provide at least one low power mode in which the node is shut down and the power consumption is negligible.\nIn addition, the rate of duty cycling is directly 
related to system performance metrics such as network latency and sampling frequency.\nWe will use duty cycle adjustment as the primitive performance scaling technique in our algorithms.\n2.\nRELATED WORK\nEnergy harvesting has been explored for several different types of systems, such as wearable computers [4], [5], [6], sensor networks [7], etc. .\nSeveral technologies to extract energy from the environment have been demonstrated including solar, motion-based, biochemical, vibration-based [8], [9], [10], [11], and others are being developed [12], [13].\nWhile several energy harvesting sensor node platforms have been prototyped [14], [15], [16], there is a need for systematic power management techniques that provide performance guarantees during system operation.\nThe first work to take environmental energy into account for data routing was [17], followed by [18].\nWhile these works did demonstrate that environment aware decisions improve performance compared to battery aware decisions, their objective was not to achieve energy neutral operation.\nOur proposed techniques attempt to maximize system performance while maintaining energy-neutral operation.\n3.\nSYSTEM MODEL\n4.\nTHEORETICALLY OPTIMAL POWER MANAGEMENT\n5.\nHARVESTING-AWARE POWER MANAGEMENT\n5.1.\nEnergy Prediction Model\n5.2.\nLow-complexity Solution\n5.3.\nSlot-by-slot continual duty cycle adaptiation.\n6.\nEVALUATION\nDay Hour\n4 Solar Energy Profile (Left: Continuous, Right: Diurnal)\n6.1.\nPrediction Model\n5.\nAverage Predictor Error in mA\n6.2.\nAdaptive Duty cycling algorithm\n7.\nCONCLUSIONS\nWe discussed various issues in power management for systems powered using environmentally harvested energy.\nSpecifically, we designed a method for optimizing performance subject to the constraint of energy neutral operation.\nWe also derived a theoretically optimal bound on the performance and showed that our proposed algorithm operated very close to the optimal.\nThe proposals were evaluated using real 
data collected using an energy harvesting sensor node deployed in an outdoor environment.\nOur method has significant advantages over currently used methods which are based on a conservative estimate of duty cycle and can only provide sub-optimal performance.\nHowever, this work is only the first step towards optimal solutions for energy neutral operation.\nIt is designed for a specific power scaling method based on adapting the duty cycle.\nSeveral other power scaling methods, such as DVS, submodule power switching and the use of multiple low power modes are also available.\nIt is thus of interest to extend our methods to exploit these advanced capabilities.","lvl-4":"Adaptive Duty Cycling for Energy Harvesting Systems\n{jasonh, kansal, szahedi, mbs} @ ee.ucla.edu vijay@nec-labs.com\nABSTRACT\nHarvesting energy from the environment is feasible in many applications to ameliorate the energy limitations in sensor networks.\nIn this paper, we present an adaptive duty cycling algorithm that allows energy harvesting sensor nodes to autonomously adjust their duty cycle according to the energy availability in the environment.\nThe algorithm has three objectives, namely (a) achieving energy neutral operation, i.e., energy consumption should not be more than the energy provided by the environment, (b) maximizing the system performance based on an application utility model subject to the above energyneutrality constraint, and (c) adapting to the dynamics of the energy source at run-time.\nWe present a model that enables harvesting sensor nodes to predict future energy opportunities based on historical data.\nWe also derive an upper bound on the maximum achievable performance assuming perfect knowledge about the future behavior of the energy source.\nOur methods are evaluated using data gathered from a prototype solar energy harvesting platform and we show that our algorithm can utilize up to 58% more environmental energy compared to the case when harvesting-aware power 
management is not used.\n1.\nINTRODUCTION\nEnergy supply has always been a crucial issue in designing battery-powered wireless sensor networks because the lifetime and utility of the systems are limited by how long the batteries are able to sustain the operation.\nThe fidelity of the data produced by a sensor network begins to degrade once sensor nodes start to run out of battery power.\nTherefore, harvesting energy from the environment has been proposed to supplement or completely replace battery supplies to enhance system lifetime and reduce the maintenance cost of replacing batteries periodically.\nHowever, metrics for evaluating energy harvesting systems are different from those used for battery powered systems.\nEnvironmental energy is distinct from battery energy in two ways.\nSecond, there is an uncertainty associated with its availability and measurement, compared to the energy stored in the\nbattery which can be known deterministically.\nThus, power management methods based on battery status are not always applicable to energy harvesting systems.\nIn addition, most power management schemes designed for battery-powered systems only account for the dynamics of the energy consumers (e.g., CPU, radio) but not the dynamics of the energy supply.\nConsequently, battery powered systems usually operate at the lowest performance level that meets the minimum data fidelity requirement in order to maximize the system life.\nEnergy harvesting systems, on the other hand, can provide enhanced performance depending on the available energy.\nIn this paper, we will study how to adapt the performance of the available energy profile.\nHowever, these techniques require hardware support and may not always be available on resource constrained sensor nodes.\nAlternatively, a common performance scaling technique is duty cycling.\nIn addition, the rate of duty cycling is directly related to system performance metrics such as network latency and sampling frequency.\nWe will use duty 
cycle adjustment as the primitive performance scaling technique in our algorithms.\n2.\nRELATED WORK\nEnergy harvesting has been explored for several different types of systems, such as wearable computers [4], [5], [6], sensor networks [7], etc. .\nWhile several energy harvesting sensor node platforms have been prototyped [14], [15], [16], there is a need for systematic power management techniques that provide performance guarantees during system operation.\nThe first work to take environmental energy into account for data routing was [17], followed by [18].\nWhile these works did demonstrate that environment aware decisions improve performance compared to battery aware decisions, their objective was not to achieve energy neutral operation.\nOur proposed techniques attempt to maximize system performance while maintaining energy-neutral operation.\n7.\nCONCLUSIONS\nWe discussed various issues in power management for systems powered using environmentally harvested energy.\nSpecifically, we designed a method for optimizing performance subject to the constraint of energy neutral operation.\nThe proposals were evaluated using real data collected using an energy harvesting sensor node deployed in an outdoor environment.\nOur method has significant advantages over currently used methods which are based on a conservative estimate of duty cycle and can only provide sub-optimal performance.\nHowever, this work is only the first step towards optimal solutions for energy neutral operation.\nIt is designed for a specific power scaling method based on adapting the duty cycle.\nSeveral other power scaling methods, such as DVS, submodule power switching and the use of multiple low power modes are also available.","lvl-2":"Adaptive Duty Cycling for Energy Harvesting Systems\n{jasonh, kansal, szahedi, mbs} @ ee.ucla.edu vijay@nec-labs.com\nABSTRACT\nHarvesting energy from the environment is feasible in many applications to ameliorate the energy limitations in sensor networks.\nIn 
this paper, we present an adaptive duty cycling algorithm that allows energy harvesting sensor nodes to autonomously adjust their duty cycle according to the energy availability in the environment.\nThe algorithm has three objectives, namely (a) achieving energy neutral operation, i.e., energy consumption should not be more than the energy provided by the environment, (b) maximizing the system performance based on an application utility model subject to the above energyneutrality constraint, and (c) adapting to the dynamics of the energy source at run-time.\nWe present a model that enables harvesting sensor nodes to predict future energy opportunities based on historical data.\nWe also derive an upper bound on the maximum achievable performance assuming perfect knowledge about the future behavior of the energy source.\nOur methods are evaluated using data gathered from a prototype solar energy harvesting platform and we show that our algorithm can utilize up to 58% more environmental energy compared to the case when harvesting-aware power management is not used.\n1.\nINTRODUCTION\nEnergy supply has always been a crucial issue in designing battery-powered wireless sensor networks because the lifetime and utility of the systems are limited by how long the batteries are able to sustain the operation.\nThe fidelity of the data produced by a sensor network begins to degrade once sensor nodes start to run out of battery power.\nTherefore, harvesting energy from the environment has been proposed to supplement or completely replace battery supplies to enhance system lifetime and reduce the maintenance cost of replacing batteries periodically.\nHowever, metrics for evaluating energy harvesting systems are different from those used for battery powered systems.\nEnvironmental energy is distinct from battery energy in two ways.\nFirst it is an inexhaustible supply which, if appropriately used, can allow the system to last forever, unlike the battery which is a limited 
resource.\nSecond, there is an uncertainty associated with its availability and measurement, compared to the energy stored in the\nbattery which can be known deterministically.\nThus, power management methods based on battery status are not always applicable to energy harvesting systems.\nIn addition, most power management schemes designed for battery-powered systems only account for the dynamics of the energy consumers (e.g., CPU, radio) but not the dynamics of the energy supply.\nConsequently, battery powered systems usually operate at the lowest performance level that meets the minimum data fidelity requirement in order to maximize the system life.\nEnergy harvesting systems, on the other hand, can provide enhanced performance depending on the available energy.\nIn this paper, we will study how to adapt the performance of the available energy profile.\nThere exist many techniques to accomplish performance scaling at the node level, such as radio transmit power adjustment [1], dynamic voltage scaling [2], and the use of low power modes [3].\nHowever, these techniques require hardware support and may not always be available on resource constrained sensor nodes.\nAlternatively, a common performance scaling technique is duty cycling.\nLow power devices typically provide at least one low power mode in which the node is shut down and the power consumption is negligible.\nIn addition, the rate of duty cycling is directly related to system performance metrics such as network latency and sampling frequency.\nWe will use duty cycle adjustment as the primitive performance scaling technique in our algorithms.\n2.\nRELATED WORK\nEnergy harvesting has been explored for several different types of systems, such as wearable computers [4], [5], [6], sensor networks [7], etc. 
.\nSeveral technologies to extract energy from the environment have been demonstrated including solar, motion-based, biochemical, vibration-based [8], [9], [10], [11], and others are being developed [12], [13].\nWhile several energy harvesting sensor node platforms have been prototyped [14], [15], [16], there is a need for systematic power management techniques that provide performance guarantees during system operation.\nThe first work to take environmental energy into account for data routing was [17], followed by [18].\nWhile these works did demonstrate that environment aware decisions improve performance compared to battery aware decisions, their objective was not to achieve energy neutral operation.\nOur proposed techniques attempt to maximize system performance while maintaining energy-neutral operation.\n3.\nSYSTEM MODEL\nThe energy usage considerations in a harvesting system vary significantly from those in a battery powered system, as mentioned earlier.\nWe propose the model shown in Figure 1 for designing energy management methods in a harvesting system.\nThe functions of the various blocks shown in the figure are discussed below.\nThe precise methods used in our system to achieve these functions will be discussed in subsequent sections.\nHarvested Energy Tracking: This block represents the mechanisms used to measure the energy received from the harvesting device, such as the solar panel.\nSuch information is useful for determining the energy availability profile and adapting system performance based on it.\nCollecting this information requires that the node hardware be equipped with the facility to measure the power\nFigure 1.\nSystem model for an energy harvesting system.\ngenerated from the environment, and the Heliomote platform [14] we used for evaluating the algorithms has this capability.\nEnergy Generation Model: For wireless sensor nodes with limited storage and processing capabilities to be able to use the harvested energy data, models that 
represent the essential components of this information without using extensive storage are required.\nThe purpose of this block is to provide a model for the energy available to the system in a form that may be used for making power management decisions.\nThe data measured by the energy tracking block is used here to predict future energy availability.\nA good prediction model should have a low prediction error and provide predicted energy values for durations long enough to make meaningful performance scaling decisions.\nFurther, for energy sources that exhibit both long-term and short-term patterns (e.g., diurnal and climate variations vs. weather patterns for solar energy), the model must be able to capture both characteristics.\nSuch a model can also use information from external sources such as local weather forecast service to improve its accuracy.\nEnergy Consumption Model: It is also important to have detailed information about the energy usage characteristics of the system, at various performance levels.\nFor general applicability of our design, we will assume that only one sleep mode is available.\nWe assume that the power consumption in the sleep and active modes is known.\nIt may be noted that for low power systems with more advanced capabilities such as dynamic voltage scaling (DVS), multiple low power modes, and the capability to shut down system components selectively, the power consumption in each of the states and the resultant effect on application performance should be known to make power management decisions.\nEnergy Storage Model: This block represents the model for the energy storage technology.\nSince all the generated energy may not be used instantaneously, the harvesting system will usually have some energy storage technology.\nStorage technologies (e.g., batteries and ultra-capacitors) are non-ideal, in that there is some energy loss while storing and retrieving energy from them.\nThese characteristics must be known to efficiently manage 
energy usage and storage.\nThis block also includes the system capability to measure the residual stored energy.\nMost low power systems use batteries to store energy and provide residual battery status.\nThis is commonly based on measuring the battery voltage which is then mapped to the residual battery energy using the known charge to voltage relationship for the battery technology in use.\nMore sophisticated methods which track the flow of energy into and out of the battery are also available.\nHarvesting-aware Power Management: The inputs provided by the previously mentioned blocks are used here to determine the suitable power management strategy for the system.\nPower management could be carried to meet different objectives in different applications.\nFor instance, in some systems, the harvested energy may marginally supplement the battery supply and the objective may be to maximize the system lifetime.\nA more interesting case is when the harvested energy is used as the primary source of energy for the system with the objective of achieving indefinitely long system lifetime.\nIn such cases, the power management objective is to achieve energy neutral operation.\nIn other words, the system should only use as much energy as harvested from the environment and attempt to maximize performance within this available energy budget.\n4.\nTHEORETICALLY OPTIMAL POWER MANAGEMENT\nWe develop the following theory to understand the energy neutral mode of operation.\nLet us define Ps (t) as the energy harvested from the environment at time t, and the energy being consumed by the load at that time is Pc (t).\nFurther, we model the non-ideal storage buffer by its round-trip efficiency 11 (strictly less than 1) and a constant leakage power Pleak.\nUsing this notation, applying the rule of energy conservation leads to the following inequality:\nwhere B0 is the initial battery level and the function [X] + = X if X> 0 and zero otherwise.\nDEFINITION 1 (p,61,62) function: A 
non-negative, continuous and bounded function P (t) is said to be a (p,61,62) function if and only if for any value of finite real number T, the following are satisfied:\nThis function can be used to model both energy sources and loads.\nIf the harvested energy profile Ps (t) is a (p1,61,62) function, then the average rate of available energy over long durations becomes p1, and the burstiness is bounded by 61 and 62.\nSimilarly, Pc (t) can be modeled as a (p2,63) function, when p2 and 63 are used to place an upper bound on power consumption (the inequality on the right side) while there are no minimum power consumption constraints.\nThe condition for energy neutrality, equation (1), leads to the following theorem, based on the energy production, consumption, and energy buffer models discussed above.\nTHEOREM 1 (ENERGY NEUTRAL OPERATION): Consider a harvesting system in which the energy production profile is characterized by a (p1, 61, 62) function, the load is characterized by a (p2, 63) function and the energy buffer is characterized by parameters ii for storage efficiency, and Pleak for leakage power.\nThe following conditions are sufficient for the system to achieve energy neutrality:\nwhere B0 is the initial energy stored in the buffer and provides a lower bound on the capacity of the energy buffer B.\nThe proof is presented in our prior work [19].\nTo adjust the duty cycle D using our performance scaling algorithm, we assume the following relation between duty cycle and the perceived utility of the system to the user: Suppose the utility of the application to the user is represented by U (D) when the system operates at a duty cycle D. 
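Before turning to the utility model, the energy-conservation inequality above can be checked numerically for a concrete trace. The following is a minimal sketch, not the paper's implementation: the function name, the per-step discretization, and the sample values used below are assumptions; it charges surpluses through the round-trip efficiency η and subtracts leakage at every step.

```python
def energy_neutral(B0, Ps, Pc, eta, P_leak, dt):
    """Discrete check of the energy-conservation inequality:
    surpluses are stored with round-trip efficiency eta, deficits are
    drawn directly from the buffer, and the buffer leaks P_leak."""
    B = B0
    for ps, pc in zip(Ps, Pc):
        surplus = ps - pc
        if surplus >= 0:
            B += eta * surplus * dt  # storing energy incurs the efficiency loss
        else:
            B += surplus * dt        # deficit is supplied from the buffer
        B -= P_leak * dt             # constant leakage
        if B < 0:
            return False             # buffer depleted: not energy neutral
    return True
```

For example, a buffer starting at 10 units that alternately harvests 5 and 0 units per slot against a constant load of 2 stays neutral, while a nearly empty buffer under the same load does not.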
Then,\nU(D) = 0 if D < Dmin; U(D) = k1 + k2 D if Dmin ≤ D ≤ Dmax; U(D) = k1 + k2 Dmax if D > Dmax, for some constants k1 and k2.\nThis is a fairly general and simple model and the specific values of Dmin and Dmax may be determined as per application requirements.\nAs an example, consider a sensor node designed to detect intrusion across a periphery.\nIn this case, a linear increase in duty cycle translates into a linear increase in the detection probability.\nThe fastest and the slowest speeds of the intruders may be known, leading to a minimum and maximum sensing delay tolerable, which results in the relevant Dmax and Dmin for the sensor node.\nFigure 2.\nTwo possible cases for energy calculations\nWhile there may be cases where the relationship between utility and duty cycle may be non-linear, in this paper, we restrict our focus to applications that follow this linear model.\nIn view of the above models for the system components and the required performance, the objective of our power management strategy is to adjust the duty cycle D(i) dynamically so as to maximize the total utility U(D) over a period of time, while ensuring energy neutral operation for the sensor node.\nBefore discussing the performance scaling methods for harvesting aware duty cycle adaptation, let us first consider the optimal power management strategy that is possible for a given energy generation profile.\nFor the calculation of the optimal strategy, we assume complete knowledge of the energy availability profile at the node, including the availability in the future.\nThe calculation of the optimal strategy is a useful tool for evaluating the performance of our proposed algorithm.\nThis is particularly useful for our algorithm since no prior algorithms are available to serve as a baseline for comparison.\nSuppose the time axis is partitioned into discrete slots of duration ΔT, and the duty cycle adaptation calculation is carried out over a window of Nw such time slots.\nWe define the following energy profile variables, with the index i ranging over {1, ..., Nw}: Ps(i) is the power output from the harvested source
in time slot i, averaged over the slot duration, Pc is the power consumption of the load in active mode, and D(i) is the duty cycle used in slot i, whose value is to be determined.\nB(i) is the residual battery energy at the beginning of slot i. Following this convention, the battery energy left after the last slot in the window is represented by B(Nw + 1).\nThe values of these variables will depend on the choice of D(i).\nThe energy used directly from the harvested source and the energy stored and used from the battery must be accounted for differently.\nFigure 2 shows two possible cases for Ps(i) in a time slot.\nPs(i) may either be less than or higher than Pc, as shown on the left and right respectively.\nWhen Ps(i) is lower than Pc, some of the energy used by the load comes from the battery, while when Ps(i) is higher than Pc, all the energy used is supplied directly from the harvested source.\nThe crosshatched area shows the energy that is available for storage into the battery while the hashed area shows the energy drawn from the battery.\nWe can write the energy used from the battery in any slot i as:\nB(i) − B(i + 1) = ΔT D(i) [Pc − Ps(i)]^+ − η ΔT (1 − D(i)) Ps(i) − η ΔT D(i) [Ps(i) − Pc]^+ (6)\nIn equation (6), the first term on the right hand side measures the energy drawn from the battery when Ps(i) < Pc, while the remaining terms measure the energy stored into the battery, namely the energy harvested during the sleep duration and the energy harvested in excess of Pc when Ps(i) > Pc.\nFor energy neutral operation, we require the battery at the end of the window of Nw slots to be greater than or equal to the starting battery level.\nClearly, the battery level will go down when the harvested energy is not available and the system is operated from stored energy.\nHowever, the window Nw is judiciously chosen such that over that duration, we expect the environmental energy availability to complete a periodic cycle.\nFor instance, in the case of solar energy harvesting, Nw could be chosen to be a twenty-four hour duration, corresponding to the diurnal cycle in the harvested energy.\nThis is an approximation since an ideal choice of the window size would be infinite, but a finite size must be used for analytical tractability.\nFurther,
the battery level cannot be negative at any time, and this is ensured by having a large enough initial battery level B0 such that node operation is sustained even in the case of total blackout during a window period.\nStating the above constraints quantitatively, we can express the calculation of the optimal duty cycles as an optimization problem below:\nThe solution to the optimization problem yields the duty cycles that must be used in every slot and the evolution of residual battery over the course of Nw slots.\nNote that while the constraints above contain the non-linear function [x] +, the quantities occurring within that function are all known constants.\nThe variable quantities occur only in linear terms and hence the above optimization problem can be solved using standard linear programming techniques, available in popular optimization toolboxes.\n5.\nHARVESTING-AWARE POWER MANAGEMENT\nWe now present a practical algorithm for power management that may be used for adapting the performance based on harvested energy information.\nThis algorithm attempts to achieve energy neutral operation without using knowledge of the future energy availability and maximizes the achievable performance within that constraint.\nThe harvesting-aware power management strategy consists of three parts.\nThe first part is an instantiation of the energy generation model which tracks past energy input profiles and uses them to predict future energy availability.\nThe second part computes the optimal duty cycles based on the predicted energy, and this step uses our computationally tractable method to solve the optimization problem.\nThe third part consists of a method to dynamically adapt the duty cycle in response to the observed energy generation profile in real time.\nThis step is required since the observed energy generation may deviate significantly from the predicted energy availability and energy neutral operation must be ensured with the actual energy received rather than the 
predicted values.\n5.1.\nEnergy Prediction Model\nWe use a prediction model based on Exponentially Weighted Moving-Average (EWMA).\nThe method is designed to exploit the diurnal cycle in solar energy but at the same time adapt to the seasonal variations.\nA historical summary of the energy generation profile is maintained for this purpose.\nWhile the storage data size is limited to a vector length of Nw values in order to minimize the memory overheads of the power management algorithm, the window size is effectively infinite as each value in the history window depends on all the observed data up to that instant.\nThe window size is chosen to be 24 hours and each time slot is taken to be 30 minutes as the variation in generated power by the solar panel using this setting is less than 10% between adjacent slots.\nThis yields Nw = 48.\nSmaller slot durations may be used at the expense of a higher Nw.\nThe historical summary maintained is derived as follows.\nOn a typical day, we expect the energy generation to be similar to the energy generation at the same time on the previous days.\nThe value of energy generated in a particular slot is maintained as a weighted average of the energy received in the same time-slot during all observed days.\nThe weights are exponential, resulting in decaying contribution from older data.\nMore specifically, the historical average maintained for each slot is given by:\nx̄(k) = α x̄(k − 1) + (1 − α) x(k)\nwhere α is the value of the weighting factor, x(k) is the observed value of energy generated in the slot, and x̄(k − 1) is the previously stored historical average.\nIn this model, the importance of each day relative to the previous one remains constant because the same weighting factor is used for all days.\nThe average value derived for a slot is treated as an estimate of the predicted energy value for the slot corresponding to the subsequent day.\nThis method helps the historical average values adapt to the seasonal variations in energy received on different days.\nOne of the parameters to be chosen in the above prediction method is the weighting factor α, which is a measure of the rate of shift in the energy pattern over time.\nSince this parameter is affected by the characteristics of the energy and sensor node location, the system should have a training period during which this parameter will be determined.\nTo determine a good value of α, we collected energy data over 72 days and compared the average error of the prediction method for various values of α.\nThe error based on the different values of α is shown in Figure 3.\nThis curve suggests an optimum value of α = 0.15 for minimum prediction error and this value will be used in the remainder of this paper.\nFigure 3.\nChoice of prediction parameter.\n5.2.\nLow-complexity Solution\nThe energy values predicted for the next window of Nw slots are used to calculate the desired duty cycles for the next window, assuming the predicted values match the observed values in the future.\nSince our objective is to develop a practical algorithm for embedded computing systems, we present a simplified method to solve the linear programming problem presented in Section 4.\nTo this end, we define the sets S and D as follows: S = {i | Ps(i) ≥ Pc} and D = {i | Ps(i) < Pc}.\nThe two sets are distinguished by whether the node operation can be sustained entirely from environmental energy.\nIn the case that the energy produced from the environment is not sufficient, the battery will be discharged to supplement the remaining energy.\nNext we sum up both sides of (6) over the entire Nw window and rewrite it with the new notation.\nThe term on the left hand side is actually the battery energy used over the entire window of Nw slots, which can be set to 0 for energy neutral operation.\nAfter some algebraic manipulation, this yields:\nThe term on the left hand side is the total energy received in Nw slots.\nThe first term on the right hand side can be interpreted as the total energy consumed during the D slots and the second term is the total energy
consumed during the S slots.\nWe can now replace the three constraints (8), (9), and (10) in the original problem with (13), restating the optimization problem as follows:\nThis form facilitates a low complexity solution that does not require a general linear programming solver.\nSince our objective is to maximize the total system utility, it is preferable to set the duty cycle to Dmin for time slots where the utility per unit energy is the least.\nOn the other hand, we would also like the time slots with the highest Ps to operate at Dmax because using energy directly from the energy source is more efficient.\nCombining these two characteristics, we define the utility coefficient for each slot i as follows:\nwhere W(i) is a representation of how efficient the energy usage in a particular time slot i is.\nA larger W(i) indicates more system utility per unit energy in slot i and vice versa.\nThe algorithm starts by assuming D(i) = Dmin for i ∈ {1, ..., Nw} because of the minimum duty cycle requirement, and computes the remaining system energy R by:\nA negative R indicates that the optimization problem is infeasible, meaning the system cannot achieve energy neutrality even at the minimum duty cycle.\nIn this case, the system designer is responsible for increasing the environmental energy availability (e.g., by using larger solar panels).\nIf R is positive, it means the system has excess energy that is not being used, and this may be allocated to increase the duty cycle beyond Dmin for some slots.\nSince our objective is to maximize the total system utility, the most efficient way to allocate the excess energy is to assign duty cycle Dmax to the slots with the highest W(i).\nSo, the coefficients W(i) are arranged in decreasing order and duty cycle Dmax is assigned to the slots beginning with the largest coefficients until the excess energy available, R (recomputed by (14) in every iteration), is insufficient to assign Dmax to another slot.\nThe remaining energy, RLast,
is used to increase the duty cycle to some value between Dmin and Dmax in the slot with the next lower coefficient.\nDenoting this slot with index j, the duty cycle is given by:\nThe above solution to the optimization problem requires only simple arithmetic calculations and one sorting step which can be easily implemented on an embedded platform, as opposed to implementing a general linear program solver.\n5.3.\nSlot-by-slot continual duty cycle adaptation\nThe observed energy values may vary greatly from the predicted ones, for example due to the effect of clouds or other sudden changes.\nIt is thus important to adapt the duty cycles calculated using the predicted values to the actual energy measurements in real time to ensure energy neutrality.\nDenote the initial duty cycle assignments for each time slot i computed using the predicted energy values as D(i), i ∈ {1, ..., Nw}.\nFirst we compute the difference between the predicted power level Ps(i) and the actual power level observed, Ps'(i), in every slot i.
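The low-complexity allocation of Section 5.2 can be sketched as follows. This is a simplified illustration rather than the paper's exact procedure: `cost_max_extra` stands in for the per-slot energy cost of running at Dmax instead of Dmin, the `budget` argument plays the role of the remaining system energy R, and the utility coefficients W(i) are taken as given.

```python
def allocate_duty_cycles(W, cost_max_extra, budget, d_min, d_max):
    """Greedy allocation sketch: every slot starts at d_min; the
    remaining energy budget is spent raising the slots with the
    largest utility coefficients W(i) to d_max; the first slot that
    cannot reach d_max receives a fractional duty cycle."""
    if budget < 0:
        raise ValueError("infeasible: cannot sustain even d_min")
    D = [d_min] * len(W)
    R = budget
    # visit slots in decreasing order of their utility coefficient
    for i in sorted(range(len(W)), key=lambda i: W[i], reverse=True):
        if R >= cost_max_extra[i]:
            D[i] = d_max
            R -= cost_max_extra[i]
        else:
            # partial raise for the slot with the next lower coefficient
            D[i] = d_min + (d_max - d_min) * (R / cost_max_extra[i])
            break
    return D
```

Only one sort and a linear pass are needed, which matches the motivation for avoiding a general linear-program solver on an embedded platform.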
Then, the excess energy in slot i, denoted by X, can be obtained as follows:\nALGORITHM 1.\nPseudocode for the duty-cycle adaptation algorithm\nThe upper term accounts for the energy difference when the actual received energy is more than the power drawn by the load.\nOn the other hand, if the energy received is less than Pc, we will need to account for the extra energy used from the battery by the load, which is a function of the duty cycle used in time slot i and the battery efficiency factor η.\nWhen more energy is received than predicted, X is positive and that excess energy is available for use in the subsequent slots, while if X is negative, that energy must be compensated from subsequent slots.\nCASE I: X < 0.\nIn this case, we want to reduce the duty cycles used in the future slots in order to make up for this shortfall of energy.\nSince our objective function is to maximize the total system utility, we have to reduce the duty cycles for the time slots with the smallest normalized utility coefficient, W(i).\nThis is accomplished by first sorting the coefficients W(j), j > i, and then iteratively reducing D(j) to Dmin, beginning with the smallest coefficient, until the total reduction in energy consumption is the same as X.\nCASE II: X > 0.\nHere, we want to increase the duty cycles used in the future to utilize the excess energy received in the recent time slot.\nIn contrast to Case I, the duty cycles of the future time slots with the highest utility coefficient W(i) should be increased first in order to maximize the total system utility.\nSuppose the duty cycle is changed by d in slot j.
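The Case I / Case II adjustment described above can be sketched as follows. This is a simplified illustration: the linear per-slot energy model via `delta_per_slot` is an assumption and does not reproduce the paper's exact bookkeeping.

```python
def adapt_future_slots(D, W, i, X, delta_per_slot, d_min, d_max):
    """Case I / Case II sketch: after observing an energy surplus or
    deficit X in slot i, adjust the duty cycles of slots j > i.
    delta_per_slot approximates the energy change of moving one slot
    across the whole [d_min, d_max] range (an assumed linear model)."""
    future = range(i + 1, len(D))
    if X < 0:
        # Case I: deficit -- cut the least useful future slots to d_min
        for j in sorted(future, key=lambda j: W[j]):
            if X >= 0:
                break
            if D[j] > d_min:
                X += delta_per_slot * (D[j] - d_min) / (d_max - d_min)
                D[j] = d_min
    else:
        # Case II: surplus -- raise the most useful future slots to d_max
        for j in sorted(future, key=lambda j: W[j], reverse=True):
            if X <= 0:
                break
            if D[j] < d_max:
                X -= delta_per_slot * (d_max - D[j]) / (d_max - d_min)
                D[j] = d_max
    return D
```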
Define a quantity R(j, d) as follows: The precise procedure to adapt the duty cycle to account for the above factors is presented in Algorithm 1.\nThis calculation is performed at the end of every slot to set the duty cycle for the next slot.\nWe claim that our duty cycling algorithm is energy neutral because a surplus of energy in the previous time slot will always translate to additional energy opportunity for future time slots, and vice versa.\nThe claim may be violated in cases of severe energy shortages, especially towards the end of a window.\nFor example, a large deficit in energy supply cannot be restored if there is no future energy input until the end of the window.\nIn such cases, this offset will be carried over to the next window so that long term energy neutrality is still maintained.\n6.\nEVALUATION\nOur adaptive duty cycling algorithm was evaluated using an actual solar energy profile measured using a sensor node called Heliomote, capable of harvesting solar energy [14].\nThis platform not only tracks the generated energy but also the energy flow into and out of the battery to provide an accurate estimate of the stored energy.\nThe energy harvesting platform was deployed in a residential area in Los Angeles from the beginning of June through the middle of August for a total of 72 days.\nThe sensor node used is a Mica2 mote running at a fixed 40% duty cycle with an initially full battery.\nBattery voltage and net current from the solar panels are sampled at a period of 10 seconds.\nThe energy generation profile for that duration, measured by tracking the output current from the solar cell, is shown in Figure 4, both on continuous and diurnal scales.\nWe can observe that although the energy profile varies from day to day, it still exhibits a general pattern over several days.\nFigure 4.\nSolar Energy Profile (Left: Continuous, Right: Diurnal)\n6.1.\nPrediction Model\nWe first evaluate the performance of the prediction model, which is judged by the amount of absolute error between the predicted and actual energy profiles.\nFigure 5 shows the average error of each time slot in mA over the entire 72 days.\nGenerally, the amount of error is larger during the day time because weather effects can cause deviations in the received energy during the day, while the predictions made for night time are mostly accurate.\nFigure 5.\nAverage Predictor Error in mA\n6.2.\nAdaptive Duty Cycling Algorithm\nPrior methods to optimize performance while achieving energy neutral operation using harvested energy are scarce.\nInstead, we compare the performance of our algorithm against two extremes: the theoretical optimal calculated assuming complete knowledge about future energy availability and a simple approach which attempts to achieve energy neutrality using a fixed duty cycle without accounting for battery inefficiency.\nThe optimal duty cycles are calculated for each slot using the future knowledge of the actual received energy for that slot.\nFor the simple approach, the duty cycle is kept constant within each day and is computed by taking the ratio of the predicted energy availability and the maximum usage, and this guarantees that the sensor node will never deplete its battery running at this duty cycle.\nWe then compare the performance of our algorithm to the two extremes with varying battery efficiency.\nFigure 6 shows the results, using Dmax = 0.8 and Dmin = 0.3.\nThe battery efficiency was varied from 0.5 to 1 on the x-axis and the solar energy utilizations achieved by the three algorithms are shown on the y-axis.\nThis utilization is the fraction of the net received energy that is used to perform useful work rather than lost due to storage inefficiency.\nAs can be seen from the figure, the battery efficiency factor has a great impact on the performance of the three different approaches.\nThe three approaches all converge to 100% utilization if we have a perfect battery (η = 1), that is, energy is not lost by storing it into the
batteries.\nWhen battery inefficiency is taken into account, both the adaptive and optimal approaches have a much better solar energy utilization rate than the simple one.\nAdditionally, the result also shows that our adaptive duty cycle algorithm performs extremely close to the optimal.\nFigure 6.\nDuty Cycles achieved with respect to η\nWe also compare the performance of our algorithm with different values of Dmin and Dmax for η = 0.7, which is typical of NiMH batteries.\nThese results are shown in Table 1 as the percentage of energy saved by the optimal and adaptive approaches, and this is the energy which would normally be wasted in the simple approach.\nThe figures and table indicate that our real time algorithm is able to achieve a performance very close to the optimal feasible performance.\nIn addition, these results show that environmental energy harvesting with appropriate power management can achieve much better utilization of the environmental energy.\nTABLE 1.\nEnergy Saved by adaptive and optimal approach.\n7.\nCONCLUSIONS\nWe discussed various issues in power management for systems powered using environmentally harvested energy.\nSpecifically, we designed a method for optimizing performance subject to the constraint of energy neutral operation.\nWe also derived a theoretically optimal bound on the performance and showed that our proposed algorithm operated very close to the optimal.\nThe proposals were evaluated using real data collected using an energy harvesting sensor node deployed in an outdoor environment.\nOur method has significant advantages over currently used methods which are based on a conservative estimate of duty cycle and can only provide sub-optimal performance.\nHowever, this work is only the first step towards optimal solutions for energy neutral operation.\nIt is designed for a specific power scaling method based on adapting the duty cycle.\nSeveral other power scaling methods, such as DVS, submodule power switching and the use of
multiple low power modes are also available.\nIt is thus of interest to extend our methods to exploit these advanced capabilities.","keyphrases":["duti cycl","energi harvest system","energi harvest","sensor network","energi neutral oper","environment energi","power manag","harvest-awar power manag","perform scale","duti cycl rate","network latenc","sampl frequenc","solar panel","energi track","storag buffer","power scale","low power design"],"prmu":["P","P","P","P","P","P","P","M","M","M","M","U","M","M","U","M","M"]} {"id":"I-37","title":"A Framework for Agent-Based Distributed Machine Learning and Data Mining","abstract":"This paper proposes a framework for agent-based distributed machine learning and data mining based on (i) the exchange of meta-level descriptions of individual learning processes among agents and (ii) online reasoning about learning success and learning progress by learning agents. We present an abstract architecture that enables agents to exchange models of their local learning processes and introduces a number of different methods for integrating these processes. This allows us to apply existing agent interaction mechanisms to distributed machine learning tasks, thus leveraging the powerful coordination methods available in agent-based computing, and enables agents to engage in meta-reasoning about their own learning decisions. We apply this architecture to a real-world distributed clustering application to illustrate how the conceptual framework can be used in practical systems in which different learners may be using different datasets, hypotheses and learning algorithms. 
We report on experimental results obtained using this system, review related work on the subject, and discuss potential future extensions to the framework.","lvl-1":"A Framework for Agent-Based Distributed Machine Learning and Data Mining Jan Tozicka Gerstner Laboratory Czech Technical University Technická 2, Prague, 166 27 Czech Republic tozicka@labe.felk.cvut.cz Michael Rovatsos School of Informatics The University of Edinburgh Edinburgh EH8 9LE United Kingdom mrovatso@inf.ed.ac.uk Michal Pechoucek Gerstner Laboratory Czech Technical University Technická 2, Prague, 166 27 Czech Republic pechouc@labe.felk.cvut.cz ABSTRACT This paper proposes a framework for agent-based distributed machine learning and data mining based on (i) the exchange of meta-level descriptions of individual learning processes among agents and (ii) online reasoning about learning success and learning progress by learning agents.\nWe present an abstract architecture that enables agents to exchange models of their local learning processes and introduces a number of different methods for integrating these processes.\nThis allows us to apply existing agent interaction mechanisms to distributed machine learning tasks, thus leveraging the powerful coordination methods available in agent-based computing, and enables agents to engage in meta-reasoning about their own learning decisions.\nWe apply this architecture to a real-world distributed clustering application to illustrate how the conceptual framework can be used in practical systems in which different learners may be using different datasets, hypotheses and learning algorithms.\nWe report on experimental results obtained using this system, review related work on the subject, and discuss potential future extensions to the framework.\nGeneral Terms Theory Categories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent Systems 1.\nINTRODUCTION In the areas of machine learning and data mining
(cf. [14, 17] for overviews), it has long been recognised that parallelisation and distribution can be used to improve learning performance.\nVarious techniques have been suggested in this respect, ranging from the low-level integration of independently derived learning hypotheses (e.g. combining different classifiers to make optimal classification decisions [4, 7], model averaging of Bayesian classifiers [8], or consensus-based methods for integrating different clusterings [11]), to the high-level combination of learning results obtained by heterogeneous learning agents using meta-learning (e.g. [3, 10, 21]).\nAll of these approaches assume homogeneity of agent design (all agents apply the same learning algorithm) and\/or agent objectives (all agents are trying to cooperatively solve a single, global learning problem).\nTherefore, the techniques they suggest are not applicable in societies of autonomous learners interacting in open systems.\nIn such systems, learners (agents) may not be able to integrate their datasets or learning results (because of different data formats and representations, learning algorithms, or legal restrictions that prohibit such integration [11]) and cannot always be guaranteed to interact in a strictly cooperative fashion (discovered knowledge and collected data might be economic assets that should only be shared when this is deemed profitable; malicious agents might attempt to adversely influence others' learning results, etc.).\nExamples for applications of this kind abound.\nMany distributed learning domains involve the use of sensitive data and prohibit the exchange of this data (e.g. exchange of patient data in distributed brain tumour diagnosis [2]) - however, they may permit the exchange of local learning hypotheses among different learners.\nIn other areas, training data might be commercially valuable, so that agents would only make it available to others if those agents could provide something in return (e.g. in remote ship surveillance and tracking, where the different agencies involved are commercial service providers [1]).\nFurthermore, agents might have a vested interest in negatively affecting other agents' learning performance.\nAn example for this is that of fraudulent agents on eBay which may try to prevent reputation-learning agents from the construction of useful models for detecting fraud.\nViewing learners as autonomous, self-directed agents is the only appropriate view one can take in modelling these distributed learning environments: the agent metaphor becomes a necessity as opposed to preferences for scalability, dynamic data selection, interactivity [13], which can also be achieved through (non-agent) distribution and parallelisation in principle.\nDespite the autonomy and self-directedness of learning agents, many of these systems exhibit a sufficient overlap in terms of individual learning goals so that beneficial cooperation might be possible if a model for flexible interaction between autonomous learners was available that allowed agents to 1.\nexchange information about different aspects of their own learning mechanism at different levels of detail without being forced to reveal private information that should not be disclosed, 2.\ndecide to what extent they want to share information about their own learning processes and utilise information provided by other learners, and 3.\nreason about how this information can best be used to improve their own learning performance.\nOur model is based on the simple idea that autonomous learners should maintain meta-descriptions of their own learning processes (see also [3]) in order to be able to exchange information and reason about them in a rational way (i.e.
with the overall objective of improving their own learning results).\nOur hypothesis is a very simple one: If we can devise a sufficiently general, abstract view of describing learning processes, we will be able to utilise the whole range of methods for (i) rational reasoning and (ii) communication and coordination offered by agent technology so as to build effective autonomous learning agents.\nTo test this hypothesis, we introduce such an abstract architecture (section 2) and implement a simple, concrete instance of it in a real-world domain (section 3).\nWe report on empirical results obtained with this implemented system that demonstrate the viability of our approach (section 4).\nFinally, we review related work (section 5) and conclude with a summary, discussion of our approach and outlook to future work on the subject (section 6).\n2.\nABSTRACT ARCHITECTURE Our framework is based on providing formal (meta-level) descriptions of learning processes, i.e. representations of all relevant components of the learning machinery used by a learning agent, together with information about the state of the learning process.\nTo ensure that this framework is sufficiently general, we consider the following general description of a learning problem: Given data D \u2286 D taken from an instance space D, a hypothesis space H and an (unknown) target function c \u2208 H1 , derive a function h \u2208 H that approximates c as well as possible according to some performance measure g : H \u2192 Q where Q is a set of possible levels of learning performance.\n1 By requiring this we are ensuring that the learning problem can be solved in principle using the given hypothesis space.\nThis very broad definition includes a number of components of a learning problem for which more concrete specifications can be provided if we want to be more precise.\nFor the cases of classification and clustering, for example, we can further specify the above as follows: Learning data can be described in 
both cases as D = [A1] × ... × [An], where [Ai] is the domain of the ith attribute and the set of attributes is A = {1, ..., n}. For the hypothesis space we obtain ℋ ⊆ {h | h : 𝒟 → {0, 1}} in the case of classification (i.e. a subset of the set of all possible classifiers, the nature of which depends on the expressivity of the learning algorithm used), and ℋ ⊆ {h | h : 𝒟 → ℕ, h is total with range {1, ..., k}} in the case of clustering (i.e. a subset of all possible cluster assignments that map data points to a finite number of clusters numbered 1 to k). For classification, g might be defined in terms of the numbers of false negatives and false positives with respect to some validation set V ⊆ 𝒟, and clustering might use various measures of cluster validity to evaluate the quality of a current hypothesis, so that Q = ℝ in both cases (but other sets of learning quality levels can be imagined). Next, we introduce a notion of learning step, which imposes a uniform basic structure on all learning processes that are supposed to exchange information using our framework. For this, we assume that each learner is presented with a finite set of data D = ⟨d1, ..., dk⟩ in each step (this is an ordered set, to express that the order in which the samples are used for training matters) and employs a training/update function f : ℋ × 𝒟* → ℋ (where 𝒟* denotes finite sequences over the instance space) which updates h given a series of samples d1, ..., dk. In other words, one learning step always consists of applying the update function to all samples in D exactly once. We define a learning step as a tuple l = ⟨D, H, f, g, h⟩, where we require that H ⊆ ℋ (the overall hypothesis space) and h ∈ H. The intuition behind this definition is that each learning step completely describes one learning iteration, as shown in Figure 1: in step t, the learner updates the current hypothesis ht−1 with data Dt, and evaluates the resulting new hypothesis ht according to the current performance measure gt. Such a learning step is equivalent to the following steps of computation:

1. train the algorithm on all samples in D (once), i.e. calculate ft(ht−1, Dt) = ht,
2. calculate the quality gt(ht) of the resulting hypothesis ht.

[Figure 1: A generic model of a learning step — the training function ft maps hypothesis ht−1 and training set Dt to hypothesis ht, which the performance measure gt evaluates to solution quality qt]

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07), p. 679

We denote the set of all possible learning steps by L. For ease of notation, we denote the components of any l ∈ L by D(l), H(l), f(l), g(l) and h(l), respectively. The reason why such learning step specifications use a subset H of ℋ instead of ℋ itself is that learners often have explicit knowledge about which hypotheses are effectively ruled out by f given h in the future (if this is not the case, we can still set H = ℋ). A learning process is a finite, non-empty sequence l = l1 → l2 → ... → ln of learning steps such that

∀1 ≤ i < n . h(li+1) = f(li)(h(li), D(li))

i.e.
the only requirement the transition relation → ⊆ L × L makes is that the new hypothesis is the result of training the old hypothesis on all available sample data that belongs to the current step. We denote the set of all possible learning processes by L (ignoring, for ease of notation, the fact that this set depends on ℋ, 𝒟 and the spaces of possible training and evaluation functions f and g). The performance trace associated with a learning process l is the sequence q1, ..., qn ∈ Qⁿ, where qi = g(li)(h(li)), i.e. the sequence of quality values calculated by the performance measures of the individual learning steps on the respective hypotheses. Such specifications allow agents to provide a self-description of their learning process. However, in communication among learning agents, it is often useful to provide only partial information about one's internal learning process rather than its full details, e.g. when advertising this information in order to enter information exchange negotiations with others. For this purpose, we will assume that learners describe their internal state in terms of sets of learning processes (in the sense of disjunctive choice), which we call learning process descriptions (LPDs), rather than by giving a precise description of a single, concrete learning process. This allows us to describe properties of a learning process without specifying its details exhaustively. As an example, the set {l ∈ L | ∀l' = l[i] . |D(l')| ≤ 100} describes all processes whose steps each have a training set of at most 100 samples (where all the other elements are arbitrary). Likewise, {l ∈ L | ∀l' = l[i] . D(l') = {d}} is equivalent to just providing information about a single sample {d} and no other details about the process (this can be useful to model, for example, data received from the environment). Therefore, we use ℘(L), that is, the set of all LPDs, as the basis for designing content languages for communication in the
protocols we specify below. In practice, the actual content language chosen will of course be more restricted and allow only a special type of subsets of L to be specified in a compact way, and its choice will be crucial for the interactions that can occur between learning agents. For our examples below, we simply assume explicit enumeration of all possible elements of the respective sets and function spaces (D, H, etc.), extended by the use of wildcard symbols ∗ (so that our second example above would become ({d}, ∗, ∗, ∗, ∗)).

2.1 Learning agents

In our framework, a learning agent is essentially a meta-reasoning function that operates on information about learning processes and is situated in an environment co-inhabited by other learning agents. This means that it is not only capable of meta-level control over how it learns, but in doing so it can take into account information provided by other agents or the environment. Although purely cooperative or hybrid cases are possible, for the purposes of this paper we will assume that agents are purely self-interested: while there may be a potential for cooperation in terms of how agents can mutually improve each others' learning performance, there is no global mechanism that can enforce such cooperative behaviour.² Formally speaking, an agent's learning function takes a set of histories of previous learning processes (of oneself, and potentially of learning processes about which other agents have provided information) as input and outputs a learning step, which is its next learning action. In the most general sense, our learning agent's internal learning process update can hence be viewed as a function λ : ℘(L) → L × ℘(L) which takes a set of learning histories of oneself and others as input and computes a new learning step to be executed, while updating the set of known learning process histories (e.g.
by appending the new learning action to one's own learning process and leaving all information about others' learning processes untouched). Note that in λ({l1, ..., ln}) = (l, {l1', ..., ln'}) some elements li of the input learning process set may be descriptions of new learning data received from the environment. The λ-function can essentially be freely chosen by the agent as long as one requirement is met, namely that the learning data being used always stems from what has been previously observed. More formally,

∀{l1, ..., ln} ∈ ℘(L) . λ({l1, ..., ln}) = (l, {l1', ..., ln'}) ⇒ ( D(l) ∪ ⋃{ D(l') | l' = li'[j] } ) ⊆ ⋃{ D(l') | l' = li[j] }

i.e. whatever λ outputs as a new learning step and updated set of learning histories, it cannot invent new data; it has to work with the samples that have been made available to it earlier in the process through the environment or from other agents (and it can of course re-train on previously used data). The goal of the agent is to output an optimal learning step in each iteration, given the information that it has. One possibility of specifying this is to require that

∀{l1, ..., ln} ∈ ℘(L) . λ({l1, ..., ln}) = (l, {l1', ..., ln'}) ⇒ l = arg max_{l' ∈ L} g(l')(h(l'))

but since it will usually be unrealistic to compute the optimal next learning step in every situation, it is more useful to simply use g(l')(h(l')) as a running performance measure to evaluate how well the agent is performing.

² Note that our outlook is not only different from common, cooperative models of distributed machine learning and data mining, but also delineates our approach from multiagent learning systems in which agents learn about other agents [25], i.e. where the learning goal itself is not affected by agents' behaviour in the environment.

This is, however, too abstract and unspecific for our purposes: while it describes what agents should do (transform the settings for the next learning step in an optimal way), it does not specify how this can be achieved in practice.

2.2 Integrating learning process information

To specify how an agent's learning process can be affected by integrating information received from others, we need to flesh out the details of how the learning steps it will perform can be modified using incoming information about learning processes described by other agents (this includes the acquisition of new learning data from the environment as a special case). In the most general case, we can specify this in terms of the potential modifications to the existing information about learning histories that can be performed using new information. For ease of presentation, we will assume that agents are stationary learning processes that can only record the previously executed learning step, and only exchange information about this one individual learning step (our model can be easily extended to cater for more complex settings). Let lj = ⟨Dj, Hj, fj, gj, hj⟩ be the current state of agent j when receiving a learning process description li = ⟨Di, Hi, fi, gi, hi⟩ from agent i (for the time being, we assume that this is a specific learning step and not a more vague, disjunctive description of properties of i's learning step). Considering all possible interactions at an abstract level, we basically obtain a matrix of possibilities for modifications of j's learning step specification, as shown in Table 1.

[Table 1: Matrix of integration functions for messages sent from learner i to j — rows correspond to the components Di, Hi, fi, gi, hi provided by i, columns to j's components Dj, Hj, fj, gj, hj; each cell contains a family of integration functions (e.g. p^{D→D}_1(Di, Dj), ..., p^{D→D}_{k_{D→D}}(Di, Dj) in the D→D cell, or p^{g→h}_1(gi, hj), ..., p^{g→h}_{k_{g→h}}(gi, hj) in the g→h cell), with n/a marking combinations that are not used]

In this matrix, each entry specifies a family of integration functions p^{c→c'}_1, ..., p^{c→c'}_{k_{c→c'}}, where c, c' ∈ {D, H, f, g, h}, which define how agent j's component c'j will be modified using the information ci provided about (the same or a different) component of i's learning step, by applying p^{c→c'}_r(ci, c'j) for some r ∈ {1, ..., k_{c→c'}}. To put it more simply, the collection of p-functions an agent j uses specifies how it will modify its own learning behaviour using information obtained from i. For the diagonal of this matrix, which contains the most common ways of integrating new information into one's own learning model, obvious ways of modifying one's own learning process include replacing cj by ci or ignoring ci altogether. More complex or subtle forms of learning process integration include:

• Modification of Dj: append Di to Dj; filter out all elements from Dj which also appear in Di; append Di to Dj while discarding all elements with attributes outside ranges which affect gj, or those elements already correctly classified by hj;
• Modification of Hj: use the union/intersection of Hi and Hj; alternatively, discard elements of Hj that are inconsistent with Dj in the process of intersection or union, or filter out elements that cannot be obtained using fj (unless fj is modified at the same time);
• Modification of fj: modify parameters or background knowledge of fj using information about fi; assess their relevance by simulating previous learning steps on Dj using gj, and discard those that do not help improve own performance;
• Modification of hj: combine hj with hi using (say) logical or mathematical operators; make the use of hi contingent on a pre-integration assessment of its quality using own data Dj and gj.

While this list does not include fully fledged, concrete integration operations for learning processes, it is indicative of the broad range of interactions between individual agents' learning processes that our framework enables. Note that the list does not include any modifications to
gj. This is because we do not allow modifications to the agent's own quality measure, as this would render the model of rational (learning) action useless (if the quality measure is relative and volatile, we cannot objectively judge learning performance). Also note that some of the above examples require consulting other elements of lj than those appearing as arguments of the p-operations; we omit these for ease of notation, but emphasise that information-rich operations will involve consulting many different aspects of lj. Apart from operations along the diagonal of the matrix, more exotic integration operations are conceivable that combine information about different components. In theory we could fill most of the matrix with entries for them, but for lack of space we list only a few examples:

• Modification of Dj using fi: pre-process samples in Dj, e.g. to achieve intermediate representations that fj can be applied to;
• Modification of Dj using hi: filter out samples from Dj that are covered by hi, and build hj using fj only on the remaining samples;
• Modification of Hj using fi: filter out hypotheses from Hj that are not realisable using fi;
• Modification of hj using gi: if hj is composed of several sub-components, filter out those sub-components that do not perform well according to gi;
• ...
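The shape of such p-operations can be sketched concretely. The following Python sketch is our own illustration, not part of the framework's specification: the names (`LearningStep`, `p_D_to_D`, `p_h_to_h`) and the toy numeric "hypothesis" (a running mean scored by closeness to a fixed target) are assumptions made purely for exposition. It shows two diagonal integration functions: appending i's data to Dj while filtering duplicates, and adopting hi only after a pre-integration quality assessment with j's own measure gj.

```python
# Illustrative sketch only (hypothetical names): a learning step as a record,
# plus two example diagonal p-operations from the integration matrix.
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class LearningStep:
    D: List[Any]                                        # training data for this step
    f: Callable[[Any, List[Any]], Any]                  # training/update function
    g: Callable[[Any], float]                           # own performance measure
    h: Any                                              # current hypothesis

def p_D_to_D(D_i, step_j):
    """Modification of Dj using Di: append i's samples, dropping duplicates."""
    seen = set(step_j.D)
    step_j.D = step_j.D + [d for d in D_i if d not in seen]
    return step_j

def p_h_to_h(h_i, step_j):
    """Modification of hj using hi: adopt i's hypothesis only if it scores
    better under j's own performance measure (pre-integration assessment)."""
    if step_j.g(h_i) > step_j.g(step_j.h):
        step_j.h = h_i
    return step_j

# Toy example: the "hypothesis" is the mean of the training data,
# and quality is closeness to an arbitrary target value 2.5.
step_j = LearningStep(
    D=[1, 2, 3],
    f=lambda h, D: sum(D) / len(D),                     # "train": recompute the mean
    g=lambda h: -abs(h - 2.5),                          # quality: closeness to 2.5
    h=2.0,
)
step_j = p_D_to_D([3, 4], step_j)                       # integrate i's data (3 is a duplicate)
step_j.h = step_j.f(step_j.h, step_j.D)                 # one learning step: retrain on all of D
step_j = p_h_to_h(2.6, step_j)                          # adopt i's hypothesis only if it is better
```

In this toy run the integrated dataset becomes [1, 2, 3, 4], retraining yields h = 2.5, and the offered hypothesis 2.6 is rejected because it scores worse under gj; real operations such as m-join or m-filter (section 3.2) follow the same call pattern with cluster models in place of the scalar hypothesis.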
Finally, many messages received from others describing properties of their learning processes will contain information about several elements of a learning step, giving rise to yet more complex operations that depend on which kinds of information are available.

3. APPLICATION EXAMPLE

3.1 Domain description

As an illustration of our framework, we present an agent-based data mining system for clustering-based surveillance using AIS (Automatic Identification System [1]) data. In our application domain, different commercial and governmental agencies track the journeys of ships over time using AIS data, which contains structured information automatically provided by ships equipped with shipborne mobile AIS stations to shore stations, other ships and aircraft. This data contains the ship's identity, type, position, course, speed, navigational status and other safety-related information. Figure 2 shows a screenshot of our simulation system.

[Figure 2: Screenshot of our simulation system, displaying online vessel tracking data for the North Sea region]

It is the task of AIS agencies to detect anomalous behaviour so as to alert police/coastguard units to further investigate unusual, potentially suspicious behaviour. Such behaviour might include deviation from the standard routes between the declared origin and destination of a journey, unexpected close encounters between different vessels at sea, or unusual patterns in the choice of destination over multiple journeys, taking the type of vessel and reported freight into account. While the reasons for such unusual behaviour may range from pure coincidence or technical problems to criminal activity (such as smuggling, piracy, or terrorist/military attacks), it is obviously useful to pre-process the huge amount of vessel (tracking) data that is available before engaging in further analysis by human experts. To
support this automated pre-processing task, software used by these agencies applies clustering methods in order to identify outliers and flag them as potentially suspicious entities to the human user. However, many agencies active in this domain are competing enterprises that use their (partially overlapping, but distinct) datasets and learned hypotheses (models) as assets, and hence cannot be expected to collaborate in a fully cooperative way to improve overall learning results. Considering that this is the reality of the domain, it is easy to see that a framework like the one suggested above might be useful to exploit the cooperation potential that current systems leave untapped.

3.2 Agent-based distributed learning system design

To describe a concrete design for the AIS domain, we need to specify the following elements of the overall system:

1. the datasets and clustering algorithms available to individual agents,
2. the interaction mechanism used for exchanging descriptions of learning processes, and
3. the decision mechanism agents apply to make learning decisions.

Regarding 1., our agents are equipped with their own private datasets in the form of vessel descriptions. Learning samples are represented by tuples containing data about individual vessels in terms of attributes A = {1, ..., n}, including width, length, etc., with real-valued domains ([Ai] = ℝ for all i). In terms of learning algorithm, we consider clustering with a fixed number of k clusters using the k-means and k-medoids clustering algorithms [5] (fixed meaning that the learning algorithm will always output k clusters; however, we allow agents to change the value of k over different learning cycles). This means that the hypothesis space can be defined as

H = { ⟨c1, ..., ck⟩ | ci ∈ ℝ^|A| }

i.e. the set of all possible sets of k cluster centroids in |A|-dimensional Euclidean space. For each hypothesis h = ⟨c1, ..., ck⟩ and any data point d ∈ [A1] × ... × [An], given domain [Ai] for the ith attribute of each sample, the assignment to clusters is given by

C(⟨c1, ..., ck⟩, d) = arg min_{1 ≤ j ≤ k} |d − cj|

i.e. d is assigned to the cluster whose centroid is closest to the data point in terms of Euclidean distance. For evaluation purposes, each dataset pertaining to a particular agent i is initially split into a training set Di and a validation set Vi. Then, we generate a set of fake vessels Fi such that |Fi| = |Vi|. These two sets are used to assess the agent's ability to detect suspicious vessels. For this, we assign a confidence value r(h, d) to every ship d:

r(h, d) = 1 / |d − c_{C(h,d)}|

where C(h, d) is the index of the nearest centroid. Based on this measure, we classify any vessel in Fi ∪ Vi as fake if its r-value is below the median of all the confidences r(h, d) for d ∈ Fi ∪ Vi. With this, we can compute the quality gi(h) ∈ ℝ as the ratio between all correctly classified vessels and all vessels in Fi ∪ Vi. As concerns 2., we use a simple Contract-Net Protocol (CNP) [20] based hypothesis trading mechanism: before each learning iteration, agents issue (publicly broadcast) Calls-For-Proposals (CfPs), advertising their own numerical model quality. In other words, the initiator of a CNP describes its own current learning state as (∗, ∗, ∗, gi(h), ∗), where h is its current hypothesis/model. We assume that agents are sincere when advertising their model quality, but note that this quality might be of limited relevance to other agents, as they may specialise on specific regions of the data space not related to the test set of the sender of the CfP. Subsequently, (some) agents may issue bids in which they advertise, in turn, the quality of their own model. If the bids (if any) are accepted by the initiator of the protocol who
issued the CfP, the agents exchange their hypotheses and the next learning iteration ensues. To describe what is necessary for 3., we have to specify (i) under which conditions agents submit bids in response to a CfP, (ii) when they accept bids in the CNP negotiation process, and (iii) how they integrate the received information into their own learning process. Concerning (i) and (ii), we employ a very simple rule that is identical in both cases: let g be one's own model quality and g′ that advertised by the CfP (or by the highest bid, respectively). If g′ > g, we respond to the CfP (accept the bid); otherwise, we respond to the CfP (accept the bid) with probability P(g′/g), and ignore (reject) it else. If two agents make a deal, they exchange their learning hypotheses (models). In our experiments, g and g′ are calculated by an additional agent that acts as a global validation mechanism for all agents (in a more realistic setting, a comparison mechanism for different g functions would have to be provided). As for (iii), each agent uses a single model merging operator taken from one of the following two classes of operators (hj is the receiver's own model and hi is the provider's model):

• p^{h→h}(hi, hj):
  - m-join: the m best clusters (in terms of coverage of Dj) from hypothesis hi are appended to hj.
  - m-select: the set of the m best clusters (in terms of coverage of Dj) from the union hi ∪ hj is chosen as a new model. (Unlike m-join, this method does not prefer own clusters over others'.)
• p^{h→D}(hi, Dj):
  - m-filter: the m best clusters (as above) from hi are identified and appended to a new model formed by applying the own learning algorithm fj to those samples not covered by these clusters.

Whenever m is large enough to encompass all clusters, we simply write join or filter. In section 4 we analyse the performance of each of these two classes for different choices of m. It is noteworthy that this agent-based distributed data mining system
is one of the simplest conceivable instances of our abstract architecture. While we have previously applied it also to a more complex market-based architecture using Inductive Logic Programming learners in a transport logistics domain [22], we believe that the system described here is complex enough to illustrate the key design decisions involved in using our framework, and that it provides simple example solutions for these design issues.

4. EXPERIMENTAL RESULTS

Figure 3 shows results obtained from simulations with three learning agents in the above system, using the k-means and k-medoids clustering methods respectively.

[Figure 3: Performance results obtained for different integration operations in homogeneous learner societies using the k-means (top) and k-medoids (bottom) methods]

We partition the total dataset of 300 ships into three disjoint sets of 100 samples each and assign each of these to one learning agent. The Single Agent is learning from the whole dataset. The parameter k is set to 10, as this is the optimal value for the total dataset according to the Davies-Bouldin index [9]. For m-select we assume m = k, which achieves a constant model size. For m-join and m-filter we assume m = 3, to limit the extent to which models increase over time. During each experiment the learning agents receive ship descriptions in batches of 10 samples. Between these batches, there is enough time to exchange the models among the agents and to recompute the models if necessary. Each ship is described using width, length, draught and speed attributes, with the goal of learning to detect which vessels have provided fake descriptions of their own properties. The validation set contains 100 real and 100 randomly generated fake ships. To generate sufficiently realistic properties for fake ships, their individual attribute values are taken from randomly selected ships in the validation set (so that each fake sample is a combination of attribute values of several existing ships). In
these experiments, we are mainly interested in investigating whether a simple form of knowledge sharing between self-interested learning agents can improve agent performance compared to a setting of isolated learners. Thereby, we distinguish between homogeneous learner societies, where all agents use the same clustering algorithm, and heterogeneous ones, where different agents use different algorithms. As can be seen from the performance plots in Figure 3 (homogeneous case) and Figure 4 (heterogeneous case, in which two agents use the same method and one agent uses the other), this is clearly the case for the (unrestricted) join and filter integration operations (m = k) in both settings. This is quite natural, as these operations amount to sharing all available model knowledge among agents (under appropriate constraints depending on how beneficial the exchange seems to the agents). We can see that the quality of these operations is very close to that of the Single Agent, which has access to all training data.

[Figure 4: Performance results obtained for different integration operations in heterogeneous societies, with the majority of learners using the k-means (top) and k-medoids (bottom) methods]

For the restricted (m < k) m-join, m-filter and m-select methods we can also observe an interesting distinction, namely that these perform similarly to the isolated-learner case in homogeneous agent groups, but better than isolated learners in more heterogeneous societies. This suggests that heterogeneous learners are able to benefit even from rather limited knowledge sharing (and this is what using a rather small m = 3 amounts to, given that k = 10), while this is not always true for homogeneous agents. This nicely illustrates how different learning or data mining algorithms can specialise on different parts of the problem space and then integrate their local results to achieve better individual
performance. Apart from these obvious performance benefits, integrating partial learning results can also have other advantages: the m-filter operation, for example, decreases the number of learning samples and thus can speed up the learning process. The relative number of filtered examples measured in our experiments is shown in the following table:

                k-means     k-medoids
  filtering     30-40 %     10-20 %
  m-filtering   20-30 %      5-15 %

The overall conclusion we can draw from these initial experiments with our architecture is that, since even a very simplistic application of its principles has proven capable of improving the performance of individual learning agents, it is worthwhile investigating more complex forms of information exchange about learning processes among autonomous learners.

5. RELATED WORK

We have already mentioned work on distributed (non-agent) machine learning and data mining in the introductory section, so here we restrict ourselves to approaches that are more closely related to our outlook on distributed learning systems. Very often, approaches that are allegedly agent-based completely disregard agent autonomy and prescribe local decision-making procedures a priori. A typical example of this type of system is the one suggested by Caragea et al.
[6], which is based on a distributed support-vector machine approach where agents incrementally join their datasets together according to a fixed distributed algorithm. A similar example is the work of Weiss [24], where groups of classifier agents learn to organise their activity so as to optimise global system behaviour. The difference between this kind of collaborative agent-based learning system [16] and our own framework is that these approaches assume a joint learning goal that is pursued collaboratively by all agents. Many approaches rely heavily on a homogeneity assumption: Plaza and Ontanon [15] suggest methods for agent-based intelligent reuse of cases in case-based reasoning, but these are only applicable to societies of homogeneous learners (and tailored to a specific learning method). An agent-based method for integrating distributed cluster analysis processes using density estimation is presented by Klusch et al. [13], which is also specifically designed for a particular learning algorithm. The same is true of [22, 23], which both present market-based mechanisms for aggregating the output of multiple learning agents, even though these approaches consider more interesting interaction mechanisms among learners. A number of approaches for sharing learning data [18] have also been proposed: Grecu and Becker [12] suggest an exchange of learning samples among agents, and Ghosh et al.
[11] take a step in the right direction in terms of revealing only partial information about one's learning process, as their work deals with limited information sharing in distributed clustering. Papyrus [3] is a system that provides a markup language for the meta-description of data, hypotheses and intermediate results, and allows for an exchange of all this information among different nodes, however with the strictly cooperative goal of distributing the load for massively distributed data mining tasks. The MALE system [19] was a very early multiagent learning system in which agents used a blackboard approach to communicate their hypotheses. Agents were able to critique each others' hypotheses until agreement was reached. However, all agents in this system were identical and the system was strictly cooperative. The ANIMALS system [10] was used to simulate multistrategy learning by combining two or more learning techniques (represented by heterogeneous agents) in order to overcome weaknesses in the individual algorithms, yet it was also a strictly cooperative system. As these examples show, and to the best of our knowledge, there have been no previous attempts to provide a framework that can accommodate both independent and heterogeneous learning agents, and this can be regarded as the main contribution of our work.

6. CONCLUSION

In this paper, we outlined a generic, abstract framework for distributed machine learning and data mining. This framework constitutes, to our knowledge, the first attempt to capture complex forms of interaction between heterogeneous and/or self-interested learners in an architecture that can be used as the foundation for implementing systems that use complex interaction and reasoning mechanisms to enable agents to inform and improve their learning abilities with information provided by other learners in the system, provided that all agents engage in a
sufficiently similar learning activity. To illustrate that the abstract principles of our architecture can be turned into concrete, computational systems, we described a market-based distributed clustering system, which was evaluated in the domain of vessel tracking for the purpose of identifying deviant or suspicious behaviour. Although our experimental results only hint at the potential of using our architecture, they underline that what we are proposing is feasible in principle and can have beneficial effects even in its simplest instantiation. Yet there are a number of issues that we have not addressed in the presentation of the architecture and its empirical evaluation. Firstly, we have not considered the cost of communication, and have made the implicit assumption that the required communication comes for free. This is of course inadequate if we want to evaluate our method in terms of the total effort required for producing a certain quality of learning results. Secondly, we have not experimented with agents using completely different learning algorithms (e.g.
symbolic and numerical).\nIn systems composed of completely different agents the circumstances under which successful information exchange can be achieved might be very different from those described here, and much more complex communication and reasoning methods may be necessary to achieve a useful integration of different agents'' learning processes.\nFinally, more sophisticated evaluation criteria for such distributed learning architectures have to be developed to shed some light on what the right measures of optimality for autonomously reasoning and communicating agents should be.\nThese issues, together with a more systematic and thorough investigation of advanced interaction and communication mechanisms for distributed, collaborating and competing agents will be the subject of our future work on the subject.\nAcknowledgement: We gratefully acknowledge the support of the presented research by Army Research Laboratory project N62558-03-0819 and Office for Naval Research project N00014-06-1-0232.\n7.\nREFERENCES [1] http:\/\/www.aislive.com.\n[2] http:\/\/www.healthagents.com.\n[3] S. Bailey, R. Grossman, H. Sivakumar, and A. Turinsky.\nPapyrus: A System for Data Mining over Local and Wide Area Clusters and Super-Clusters.\nIn Proc.\nof the Conference on Supercomputing.\n1999.\n[4] E. Bauer and R. Kohavi.\nAn Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants.\nMachine Learning, 36, 1999.\n[5] P. Berkhin.\nSurvey of Clustering Data Mining Techniques, Technical Report, Accrue Software, 2002.\n[6] D. Caragea, A. Silvescu, and V. Honavar.\nAgents that Learn from Distributed Dynamic Data sources.\nIn Proc.\nof the Workshop on Learning Agents, 2000.\n[7] N. Chawla and S. E. abd L. O. Hall.\nCreating ensembles of classifiers.\nIn Proceedings of ICDM 2001, pages 580-581, San Jose, CA, USA, 2001.\n[8] D. Dash and G. F. 
Cooper.\nModel Averaging for Prediction with Discrete Bayesian Networks.\nJournal of Machine Learning Research, 5:1177-1203, 2004.\n[9] D. L. Davies and D. W. Bouldin.\nA Cluster Separation Measure.\nIEEE Transactions on Pattern Analysis and Machine Intelligence, 4:224-227, 1979.\n[10] P. Edwards and W. Davies.\nA Heterogeneous Multi-Agent Learning System.\nIn Proceedings of the Special Interest Group on Cooperating Knowledge Based Systems, pages 163-184, 1993.\n[11] J. Ghosh, A. Strehl, and S. Merugu.\nA Consensus Framework for Integrating Distributed Clusterings Under Limited Knowledge Sharing.\nIn NSF Workshop on Next Generation Data Mining, 99-108, 2002.\n[12] D. L. Grecu and L. A. Becker.\nCoactive Learning for Distributed Data Mining.\nIn Proceedings of KDD-98, pages 209-213, New York, NY, August 1998.\n[13] M. Klusch, S. Lodi, and G. Moro.\nAgent-based distributed data mining: The KDEC scheme.\nIn AgentLink, number 2586 in LNCS.\nSpringer, 2003.\n[14] T. M. Mitchell.\nMachine Learning, pages 29-36.\nMcGraw-Hill, New York, 1997.\n[15] S. Ontanon and E. Plaza.\nRecycling Data for Multi-Agent Learning.\nIn Proc.\nof ICML-05, 2005.\n[16] L. Panait and S. Luke.\nCooperative multi-agent learning: The state of the art.\nAutonomous Agents and Multi-Agent Systems, 11(3):387-434, 2005.\n[17] B. Park and H. Kargupta.\nDistributed Data Mining: Algorithms, Systems, and Applications.\nIn N. Ye, editor, Data Mining Handbook, pages 341-358, 2002.\n[18] F. J. Provost and D. N. Hennessy.\nScaling up: Distributed machine learning with cooperation.\nIn Proc.\nof AAAI-96, pages 74-79.\nAAAI Press, 1996.\n[19] S. Sian.\nExtending learning to multiple agents: Issues and a model for multi-agent machine learning (ma-ml).\nIn Y. Kodratoff, editor, Machine LearningEWSL-91, pages 440-456.\nSpringer-Verlag, 1991.\n[20] R. 
Smith.\nThe contract-net protocol: High-level communication and control in a distributed problem solver.\nIEEE Transactions on Computers, C-29(12):1104-1113, 1980.\n[21] S. J. Stolfo, A. L. Prodromidis, S. Tselepis, W. Lee, D. W. Fan, and P. K. Chan.\nJam: Java Agents for Meta-Learning over Distributed Databases.\nIn Proc.\nof the KDD-97, pages 74-81, USA, 1997.\n[22] J. To\u02c7zi\u02c7cka, M. Jakob, and M. P\u02c7echou\u02c7cek.\nMarket-Inspired Approach to Collaborative Learning.\nIn Cooperative Information Agents X (CIA 2006), volume 4149 of LNCS, pages 213-227.\nSpringer, 2006.\n[23] Y. Z. Wei, L. Moreau, and N. R. Jennings.\nRecommender systems: a market-based design.\nIn Proceedings of AAMAS-03), pages 600-607, 2003.\n[24] G. Wei\u00df.\nA Multiagent Perspective of Parallel and Distributed Machine Learning.\nIn Proceedings of Agents``98, pages 226-230, 1998.\n[25] G. Weiss and P. Dillenbourg.\nWhat is ``multi'' in multi-agent learning?\nCollaborative-learning: Cognitive and Computational Approaches, 64-80, 1999.\nThe Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 685","lvl-3":"A Framework for Agent-Based Distributed Machine Learning and Data Mining\nABSTRACT\nThis paper proposes a framework for agent-based distributed machine learning and data mining based on (i) the exchange of meta-level descriptions of individual learning processes among agents and (ii) online reasoning about learning success and learning progress by learning agents.\nWe present an abstract architecture that enables agents to exchange models of their local learning processes and introduces a number of different methods for integrating these processes.\nThis allows us to apply existing agent interaction mechanisms to distributed machine learning tasks, thus leveraging the powerful coordination methods available in agent-based computing, and enables agents to engage in meta-reasoning about their own learning decisions.\nWe apply this architecture to a 
A Framework for Agent-Based Distributed Machine Learning and Data Mining

ABSTRACT

This paper proposes a framework for agent-based distributed machine learning and data mining based on (i) the exchange of meta-level descriptions of individual learning processes among agents and (ii) online reasoning about learning success and learning progress by learning agents. We present an abstract architecture that enables agents to exchange models of their local learning processes, and introduce a number of different methods for integrating these processes. This allows us to apply existing agent interaction mechanisms to distributed machine learning tasks, thus leveraging the powerful coordination methods available in agent-based computing, and enables agents to engage in meta-reasoning about their own learning decisions. We apply this architecture to a real-world distributed clustering application to illustrate how the conceptual framework can be used in practical systems in which different learners may be using different datasets, hypotheses and learning algorithms. We report on experimental results obtained using this system, review related work on the subject, and discuss potential future extensions to the framework.

1. INTRODUCTION

In the areas of machine learning and data mining (cf. [14, 17] for overviews), it has long been recognised that parallelisation and distribution can be used to improve learning performance. Various techniques have been suggested in this respect, ranging from the low-level integration of independently derived learning hypotheses (e.g. combining different classifiers to make optimal classification decisions [4, 7], model averaging of Bayesian classifiers [8], or consensus-based methods for integrating different clusterings [11]) to the high-level combination of learning results obtained by heterogeneous learning "agents" using meta-learning (e.g. [3, 10, 21]). All of these approaches assume homogeneity of agent design (all agents apply the same learning algorithm) and/or of agent objectives (all agents are trying to cooperatively solve a single, global learning problem). Therefore, the techniques they suggest are not applicable in societies of autonomous learners interacting in open systems. In such systems, learners (agents) may not be able to integrate their datasets or learning results (because of different data formats and representations, learning algorithms, or legal restrictions that prohibit such integration [11]), and cannot always be guaranteed to interact in a strictly cooperative fashion (discovered knowledge and collected data might be economic assets that should only be shared when this is deemed profitable; malicious agents might attempt to adversely influence others' learning results, etc.).

Examples of applications of this kind abound. Many distributed learning domains involve the use of sensitive data and prohibit the exchange of this data (e.g.
exchange of patient data in distributed brain tumour diagnosis [2]); however, they may permit the exchange of local learning hypotheses among different learners. In other areas, training data might be commercially valuable, so that agents would only make it available to others if those agents could provide something in return (e.g. in remote ship surveillance and tracking, where the different agencies involved are commercial service providers [1]). Furthermore, agents might have a vested interest in negatively affecting other agents' learning performance. An example of this is fraudulent agents on eBay, who may try to prevent reputation-learning agents from constructing useful models for detecting fraud.

Viewing learners as autonomous, self-directed agents is the only appropriate view one can take in modelling these distributed learning environments: the agent metaphor becomes a necessity, as opposed to a mere preference motivated by scalability, dynamic data selection or "interactivity" [13], which can in principle also be achieved through (non-agent) distribution and parallelisation.

Despite the autonomy and self-directedness of learning agents, many of these systems exhibit sufficient overlap in terms of individual learning goals, so that beneficial cooperation might be possible if a model for flexible interaction between autonomous learners were available that allowed agents to

1. exchange information about different aspects of their own learning mechanism at different levels of detail, without being forced to reveal private information that should not be disclosed,
2. decide to what extent they want to share information about their own learning processes and utilise information provided by other learners, and
3. reason about how this information can best be used to improve their own learning performance.

Our model is based on the simple idea that autonomous learners should maintain meta-descriptions of their own learning processes (see also [3]) in order to be able to exchange information and reason about them in a rational way (i.e. with the overall objective of improving their own learning results). Our hypothesis is a very simple one: if we can devise a sufficiently general, abstract view of describing learning processes, we will be able to utilise the whole range of methods for (i) rational reasoning and (ii) communication and coordination offered by agent technology, so as to build effective autonomous learning agents. To test this hypothesis, we introduce such an abstract architecture (section 2) and implement a simple, concrete instance of it in a real-world domain (section 3). We report on empirical results obtained with this implemented system that demonstrate the viability of our approach (section 4). Finally, we review related work (section 5) and conclude with a summary, a discussion of our approach and an outlook on future work (section 6).

2. ABSTRACT ARCHITECTURE

Our framework is based on providing formal (meta-level) descriptions of learning processes, i.e.
representations of all relevant components of the learning machinery used by a learning agent, together with information about the state of the learning process.
To ensure that this framework is sufficiently general, we consider the following general description of a learning problem: Given data D ⊆ D taken from an instance space D, a hypothesis space H and an (unknown) target function c ∈ H, derive a function h ∈ H that approximates c as well as possible according to some performance measure g : H → Q, where Q is a set of possible levels of learning performance.
This very broad definition includes a number of components of a learning problem for which more concrete specifications can be provided if we want to be more precise. For the cases of classification and clustering, for example, we can further specify the above as follows: Learning data can be described in both cases as D = ×_{i=1}^{n} [Ai], where [Ai] is the domain of the ith attribute and the set of attributes is A = {1, ..., n}. For the hypothesis space we obtain a subset of the set of all possible classifiers in the case of classification (where the nature of the classifiers depends on the expressivity of the learning algorithm used), and a space of cluster assignments in the case of clustering (i.e.
a subset of all sets of possible cluster assignments that map data points to a finite number of clusters numbered 1 to k).
For classification, g might be defined in terms of the numbers of false negatives and false positives with respect to some validation set V ⊆ D, and clustering might use various measures of cluster validity to evaluate the quality of a current hypothesis, so that Q = R in both cases (but other sets of learning quality levels can be imagined).
Next, we introduce a notion of learning step, which imposes a uniform basic structure on all learning processes that are supposed to exchange information using our framework. For this, we assume that each learner is presented with a finite set of data D = ⟨d1, ..., dk⟩ in each step (this is an ordered set, to express that the order in which the samples are used for training matters) and employs a training/update function f : H × D* → H, which updates h given a series of samples d1, ..., dk. In other words, one learning step always consists of applying the update function to all samples in D exactly once. We define a learning step as a tuple l = (D, H, f, g, h), where we require that H ⊆ H and h ∈ H.
The intuition behind this definition is that each learning step completely describes one learning iteration, as shown in Figure 1: in step t, the learner updates the current hypothesis ht−1 with data Dt, and evaluates the resulting new hypothesis ht according to the current performance measure gt. Such a learning step is equivalent to the following steps of computation:
1. train the algorithm on all samples in D (once), i.e. calculate ft(ht−1, Dt) = ht,
2. calculate the quality gt of the resulting hypothesis, gt(ht).
We denote the set of all possible learning steps by L.
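To make the step/update/evaluate cycle concrete, the abstraction can be sketched in code. This is a minimal illustration only; the class layout and the toy averaging learner below are our own assumptions, not notation from the framework:

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence

Hypothesis = Any
Sample = Any

@dataclass
class LearningStep:
    """One learning step l = (D, H, f, g, h)."""
    D: Sequence[Sample]   # ordered batch of training samples for this step
    H: set                # hypothesis subspace still under consideration
    f: Callable[[Hypothesis, Sequence[Sample]], Hypothesis]  # training/update function
    g: Callable[[Hypothesis], float]                         # performance measure
    h: Hypothesis         # hypothesis produced by this step

def run_step(h_prev, D, H, f, g):
    """Step t: compute h_t = f(h_{t-1}, D_t), then the quality q_t = g(h_t)."""
    h_new = f(h_prev, D)
    return LearningStep(D, H, f, g, h_new), g(h_new)

# Toy instance (illustrative): the hypothesis is a single scalar estimate,
# f folds the new batch into a running average, and g scores closeness
# to a fixed validation value.
validation_value = 10.0
f = lambda h, D: (h + sum(D)) / (1 + len(D))
g = lambda h: -abs(h - validation_value)

step, quality = run_step(h_prev=0.0, D=[8.0, 12.0], H=set(), f=f, g=g)
# step.h == 20/3, quality == -(10 - 20/3)
```

A learning process in the sense defined next is then just a sequence of such steps in which each `h_prev` is the previous step's `h`.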
For ease of notation, we denote the components of any l ∈ L by D(l), H(l), f(l), g(l) and h(l), respectively.
The reason why such learning step specifications use a subset H of H instead of H itself is that learners often have explicit knowledge about which hypotheses are effectively ruled out by f given h in the future (if this is not the case, we can still set H = H).
A learning process is a finite, non-empty sequence l = (l1, ..., ln) of learning steps such that li → li+1 for all 1 ≤ i < n, where the only requirement the transition relation → ⊆ L × L makes is that the new hypothesis is the result of training the old hypothesis on all available sample data that belongs to the current step, i.e. h(li+1) = f(li+1)(h(li), D(li+1)).

Figure 1: A generic model of a learning step

We denote the set of all possible learning processes by L (ignoring, for ease of notation, the fact that this set depends on H, D and the spaces of possible training and evaluation functions f and g). The performance trace associated with a learning process l is the sequence (q1, ..., qn) ∈ Q^n where qi = g(li)(h(li)), i.e. the sequence of quality values calculated by the performance measures of the individual learning steps on the respective hypotheses.
Such specifications allow agents to provide a self-description of their learning process. However, in communication among learning agents, it is often useful to provide only partial information about one's internal learning process rather than its full details, e.g.
when advertising this information in order to enter information-exchange negotiations with others.
For this purpose, we will assume that learners describe their internal state in terms of sets of learning processes (in the sense of disjunctive choice), which we call learning process descriptions (LPDs), rather than by giving a precise description of a single, concrete learning process. This allows us to describe properties of a learning process without specifying its details exhaustively.
As an example, the set {l ∈ L | ∀i . |D(l[i])| ≤ 100} describes all processes whose steps each use a training set of at most 100 samples (where all the other elements are arbitrary). Likewise, {l ∈ L | ∀i . D(l[i]) = {d}} is equivalent to just providing information about a single sample {d} and no other details about the process (this can be useful to model, for example, data received from the environment).
Therefore, we use ℘(L), that is, the set of all LPDs, as the basis for designing content languages for communication in the protocols we specify below. In practice, the actual content language chosen will of course be more restricted and allow only for a special type of subsets of L to be specified in a compact way, and its choice will be crucial for the interactions that can occur between learning agents. For our examples below, we simply assume explicit enumeration of all possible elements of the respective sets and function spaces (D, H, etc.)
extended by the use of wildcard symbols ∗ (so that our second example above would become ({d}, ∗, ∗, ∗, ∗)).

2.1 Learning agents

In our framework, a learning agent is essentially a meta-reasoning function that operates on information about learning processes and is situated in an environment co-inhabited by other learning agents. This means that it is not only capable of meta-level control on "how to learn", but in doing so it can take into account information provided by other agents or the environment.
Although purely cooperative or hybrid cases are possible, for the purposes of this paper we will assume that agents are purely self-interested, and that while there may be a potential for cooperation in how agents can mutually improve each others' learning performance, there is no global mechanism that can enforce such cooperative behaviour.2
Formally speaking, an agent's learning function is a function which, given a set of histories of previous learning processes (of oneself, and potentially of learning processes about which other agents have provided information), outputs a learning step which is its next "learning action". In the most general sense, our learning agent's internal learning process update can hence be viewed as a function λ : ℘(L) → L × ℘(L), which takes a set of learning "histories" of oneself and others as input and computes a new learning step to be executed, while updating the set of known learning process histories (e.g.
by appending the new learning action to one's own learning process and leaving all information about others' learning processes untouched).
Note that in λ({l1, ..., ln}) = (l, {l'1, ..., l'n'}), some elements li of the input learning process set may be descriptions of new learning data received from the environment.
The λ-function can essentially be freely chosen by the agent, as long as one requirement is met: the learning data that is being used must always stem from what has been previously observed. More formally, if λ({l1, ..., ln}) = (l, {l'1, ..., l'n'}), then D(l) ⊆ ⋃i ⋃j D(li[j]); i.e. whatever λ outputs as a new learning step and updated set of learning histories, it cannot "invent" new data; it has to work with the samples that have been made available to it earlier in the process through the environment or from other agents (and it can of course re-train on previously used data).
The goal of the agent is to output an optimal learning step in each iteration, given the information that it has. One possibility of specifying this is to require that the agent pick a next step l that maximises the resulting quality g(l)(h(l)); but since it will usually be unrealistic to compute the optimal next learning step in every situation, it is more useful

2 Note that our outlook is not only different from common, cooperative models of distributed machine learning and data mining, but also delineates our approach from multiagent learning systems in which agents learn about other agents [25], i.e. the learning goal itself is not affected by agents' behaviour in the environment.

680 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
to simply use g(l')(h(l')) as a running performance measure to evaluate how well the agent is performing.
This is too abstract and unspecific for our purposes: while it describes what agents should do (transform the settings for the next learning step in an optimal way), it doesn't specify how this can be achieved in practice.

2.2 Integrating learning process information

To specify how an agent's learning process can be affected by integrating information received from others, we need to flesh out the details of how the learning steps it will perform can be modified using incoming information about learning processes described by other agents (this includes the acquisition of new learning data from the environment as a special case). In the most general case, we can specify this in terms of the potential modifications to the existing information about learning histories that can be performed using new information.
For ease of presentation, we will assume that agents are stationary learning processes that can only record the previously executed learning step, and that they only exchange information about this one individual learning step (our model can be easily extended to cater for more complex settings).
Let lj = (Dj, Hj, fj, gj, hj) be the current "state" of agent j when receiving a learning process description li = (Di, Hi, fi, gi, hi) from agent i (for the time being, we assume that this is a specific learning step and not a more vague, disjunctive description of properties of i's learning step).

Table 1: Matrix of integration functions for messages sent from learner i to j

Considering all possible interactions at an abstract level, we basically obtain a matrix of possibilities for modifying j's learning step specification, as shown in Table 1. In this matrix, each entry specifies a family of integration functions p_r^{c→c'}, 1 ≤ r ≤ k_{c→c'}, where c, c' ∈ {D, H, f, g, h} and
which define how agent j's component c'_j will be modified using the information c_i provided about (the same or a different component of) i's learning step, by applying p_r^{c→c'}(c_i, c'_j) for some r ∈ {1, ..., k_{c→c'}}.
To put it more simply, the collection of p-functions an agent j uses specifies how it will modify its own learning behaviour using information obtained from i. For the diagonal of this matrix, which contains the most common ways of integrating new information into one's own learning model, obvious ways of modifying one's own learning process include replacing c'_j by c_i, or ignoring c_i altogether. More complex/subtle forms of learning process integration include:
• Modification of Dj: append Di to Dj; filter out all elements from Dj which also appear in Di; or append Di to Dj while discarding all elements with attributes outside ranges which affect gj, or those elements already correctly classified by hj;
• Modification of Hj: use the union/intersection of Hi and Hj; alternatively, discard elements of Hj that are inconsistent with Dj in the process of intersection or union, or filter out elements that cannot be obtained using fj (unless fj is modified at the same time);
• Modification of fj: modify parameters or background knowledge of fj using information about fi; assess their relevance by simulating previous learning steps on Dj using gj, and discard those that do not help improve own performance;
• Modification of hj: combine hj with hi using (say) logical or mathematical operators; make the use of hi contingent on a "pre-integration" assessment of its quality using own data Dj and gj.
While this list does not include fully fledged, concrete integration operations for learning processes, it is indicative of the broad range of interactions between individual agents' learning processes that our framework enables. Note that the list does not include any modifications to gj. This is because we do not allow modifications
to the agent's own quality measure, as this would render the model of rational (learning) action useless (if the quality measure is relative and volatile, we cannot objectively judge learning performance). Also note that some of the above examples require consulting other elements of lj than those appearing as arguments of the p-operations; we omit these for ease of notation, but emphasise that information-rich operations will involve consulting many different aspects of lj.
Apart from operations along the diagonal of the matrix, more "exotic" integration operations are conceivable that combine information about different components. In theory we could fill most of the matrix with entries for them, but for lack of space we list only a few examples:
• Modification of Dj using fi: use fi to pre-process samples in Dj, e.g. to obtain intermediate representations that fj can be applied to;
• Modification of Dj using hi: filter out samples from Dj that are covered by hi, and build hj by applying fj only to the remaining samples;
• Modification of Hj using fi: filter out hypotheses from Hj that are not realisable using fi;
• Modification of hj using gi: if hj is composed of several sub-components, filter out those sub-components that do not perform well according to gi;
• ...
Finally, many messages received from others describing properties of their learning processes will contain information about several elements of a learning step, giving rise to yet more complex operations that depend on which kinds of information are available.
3. APPLICATION EXAMPLE

3.1 Domain description

Figure 2: Screenshot of our simulation system, displaying online vessel tracking data for the North Sea region

As an illustration of our framework, we present an agent-based data mining system for clustering-based surveillance using AIS (Automatic Identification System [1]) data. In our application domain, different commercial and governmental agencies track the journeys of ships over time using AIS data, which contains structured information automatically provided by ships equipped with shipborne mobile AIS stations to shore stations, other ships and aircraft. This data contains the ship's identity, type, position, course, speed, navigational status and other safety-related information. Figure 2 shows a screenshot of our simulation system.
It is the task of AIS agencies to detect anomalous behaviour so as to alert police/coastguard units to further investigate unusual, potentially suspicious behaviour. Such behaviour might include deviations from the standard routes between the declared origin and destination of the journey, unexpected "close encounters" between different vessels at sea, or unusual patterns in the choice of destination over multiple journeys, taking the type of vessel and reported freight into account. While the reasons for such unusual behaviour may range from pure coincidence or technical problems to criminal activity (such as smuggling, piracy, or terrorist/military attacks), it is obviously useful to pre-process the huge amount of vessel (tracking) data that is available before engaging in further analysis by human experts.
To support this automated pre-processing task, software used by these agencies applies clustering methods in order to identify outliers and flag them as potentially suspicious entities to the human user. However, many agencies active in this domain are competing enterprises and
use their (partially overlapping, but distinct) datasets and learning hypotheses (models) as assets, and hence cannot be expected to collaborate in a fully cooperative way to improve overall learning results. Considering that this is the reality of the domain in the real world, it is easy to see that a framework like the one we have suggested above might be useful to exploit the cooperation potential that is left untapped by current systems.

3.2 Agent-based distributed learning system design

To describe a concrete design for the AIS domain, we need to specify the following elements of the overall system:
1. the datasets and clustering algorithms available to individual agents,
2. the interaction mechanism used for exchanging descriptions of learning processes, and
3. the decision mechanism agents apply to make learning decisions.
Regarding 1., our agents are equipped with their own private datasets in the form of vessel descriptions. Learning samples are represented by tuples containing data about individual vessels in terms of attributes A = {1, ..., n}, including width, length, etc., with real-valued domains ([Ai] = R for all i). In terms of learning algorithm, we consider clustering with a fixed number of k clusters using the k-means and k-medoids clustering algorithms [5] ("fixed" meaning that the learning algorithm will always output k clusters; however, we allow agents to change the value of k over different learning cycles). This means that the hypothesis space can be defined as H = {⟨c1, ..., ck⟩ | ci ∈ R^|A|}, i.e.
the set of all possible sets of k cluster centroids in |A|-dimensional Euclidean space. For each hypothesis h = ⟨c1, ..., ck⟩ and any data point d ∈ ×_{i=1}^{n} [Ai], given domain [Ai] for the ith attribute of each sample, the assignment to clusters is given by the index C(h, d) of the nearest centroid, i.e. d is assigned to that cluster whose centroid is closest to the data point in terms of Euclidean distance.
For evaluation purposes, each dataset pertaining to a particular agent i is initially split into a training set Di and a validation set Vi. Then, we generate a set of "fake" vessels Fi such that |Fi| = |Vi|. These two sets assess the agent's ability to detect "suspicious" vessels. For this, we assign a confidence value r(h, d) = |d − c_{C(h, d)}| to every ship d. Based on this measure, we classify any vessel in Fi ∪ Vi as fake if its r-value is below the median of all the confidences r(h, d) for d ∈ Fi ∪ Vi. With this, we can compute the quality gi(h) ∈ R as the ratio between all correctly classified vessels and all vessels in Fi ∪ Vi.
As concerns 2., we use a simple Contract-Net Protocol (CNP) [20] based "hypothesis trading" mechanism: before each learning iteration, agents issue (publicly broadcast) Calls-For-Proposals (CfPs), advertising their own numerical model quality. In other words, the "initiator" of a CNP describes its own current learning state as (∗, ∗, ∗, gi(h), ∗), where h is its current hypothesis/model. We assume that agents are sincere when advertising their model quality, but note that this quality might be of limited relevance to other agents, as they may specialise on specific regions of the data space not related to the test set of the sender of the CfP. Subsequently, (some) agents may issue bids in which they advertise, in turn, the quality of their own model. If the bids (if any) are accepted by the initiator of the protocol who issued the CfP, the agents exchange their hypotheses and the next learning iteration ensues.
To describe what is necessary for 3., we have to specify (i) under which conditions agents submit bids in response to a CfP, (ii) when they accept bids in the CNP negotiation process, and (iii) how they integrate the received information in their own learning process. Concerning (i) and (ii), we employ a very simple rule that is identical in both cases: let g be one's own model quality and g' that advertised by the CfP (or highest bid, respectively). If g' > g, we respond to the CfP (accept the bid); else we respond to the CfP (accept the bid) with probability P(g'/g) and ignore (reject) it otherwise. If two agents make a deal, they exchange their learning hypotheses (models). In our experiments, g and g' are calculated by an additional agent that acts as a global validation mechanism for all agents (in a more realistic setting, a comparison mechanism for different g functions would have to be provided).
As for (iii), each agent uses a single model merging operator taken from the following two classes of operators (hj is the receiver's own model and hi is the provider's model):
• p^{h→h}(hi, hj):
  – m-join: the m best clusters (in terms of coverage of Dj) from hypothesis hi are appended to hj.
  – m-select: the set of the m best clusters (in terms of coverage of Dj) from the union hi ∪ hj is chosen as a new model. (Unlike m-join, this method does not prefer own clusters over others'.)
• p^{h→D}(hi, Dj):
  – m-filter: the m best clusters (as above) from hi are identified and appended to a new model formed by applying the own learning algorithm fj to those samples not covered by these clusters.
Whenever m is large enough to encompass all clusters, we simply write join or filter for them. In section 4 we analyse the
performance of each of these two classes for different choices of m.
It is noteworthy that this agent-based distributed data mining system is one of the simplest conceivable instances of our abstract architecture. While we have previously applied it also to a more complex market-based architecture using Inductive Logic Programming learners in a transport logistics domain [22], we believe that the system described here is complex enough to illustrate the key design decisions involved in using our framework, and provides simple example solutions for these design issues.

4. EXPERIMENTAL RESULTS

Figure 3: Performance results obtained for different integration operations in homogeneous learner societies using the k-means (top) and k-medoids (bottom) methods

Figure 3 shows results obtained from simulations with three learning agents in the above system, using the k-means and k-medoids clustering methods respectively. We partition the total dataset of 300 ships into three disjoint sets of 100 samples each and assign each of these to one learning agent. The Single Agent is learning from the whole dataset. The parameter k is set to 10, as this is the optimal value for the total dataset according to the Davies-Bouldin index [9]. For m-select we assume m = k, which achieves a constant model size. For m-join and m-filter we assume m = 3, to limit the extent to which models increase over time.
During each experiment the learning agents receive ship descriptions in batches of 10 samples. Between these batches, there is enough time to exchange the models among the agents and recompute the models if necessary. Each ship is described using width, length, draught and speed attributes, with the goal of learning to detect which vessels have provided fake descriptions of their own properties. The validation set contains 100 real and 100 randomly generated fake ships. To generate sufficiently realistic properties for fake ships, their individual attribute values are taken
from randomly selected ships in the validation set (so that each fake sample is a combination of attribute values of several existing ships).
In these experiments, we are mainly interested in investigating whether a simple form of knowledge sharing between self-interested learning agents could improve agent performance compared to a setting of isolated learners. Thereby, we distinguish between homogeneous learner societies, where all agents use the same clustering algorithm, and heterogeneous ones, where different agents use different algorithms. As can be seen from the performance plots in Figures 3 (homogeneous case) and 4 (heterogeneous case; two agents use the same method and one agent uses the other), this is clearly the case for the (unrestricted) join and filter integration operations (m = k) in both cases. This is quite natural, as these operations amount to sharing all available model knowledge among agents (under appropriate constraints depending on how beneficial the exchange seems to the agents). We can see that the quality of these operations is very close to the Single Agent that has access to all training data. For the restricted (m 4.0 = l2), the algorithm preempts job 2 for job 3, which then executes to completion.

Job   ri    di    li    vi
 1    0.0   0.9   0.9   0.9
 2    0.5   5.5   4.0   4.0
 3    4.8  17.0  12.2  12.2

Table 1: Input used to recap TD1 (version 2) [4]. The up and down arrows represent ri and di, respectively, while the length of the box equals li.

However, false information about job 2 would cause TD1 (version 2) to complete this job. For example, if job 2's deadline were declared as d̂2 = 4.7, then it would have zero laxity at time 0.7. At this time, the algorithm would preempt job 1 for job 2, because (te − tb + p_loss)/4 = (4.7 − 0.0 + 1.0)/4 > 0.9 = l1. Job 2 would then complete before the arrival of job 3.2

1 While it would be easy to alter the algorithm to recognize that this is not possible for the jobs in Table 1, our example does not depend on the use of p_loss.
2 While we will not describe the significantly more complex

3. MECHANISM DESIGN SETTING

In order to address incentive issues such as this one, we need to formalize the setting as a mechanism design problem. In this section we first present the mechanism design formulation, and then define our goals for the mechanism.

3.1 Formulation

There exists a center, who controls the processor, and N agents, where the value of N is unknown by the center beforehand. Each job i is owned by a separate agent i. The characteristics of the job define the agent's type θi ∈ Θi. At time ri, agent i privately observes its type θi, and has no information about job i before ri. Thus, jobs are still released over time, but now each job is revealed only to the owning agent. Agents interact with the center through a direct mechanism Γ = (Θ1, ..., ΘN, g(·)), in which each agent declares a job, denoted by θ̂i = (r̂i, d̂i, l̂i, v̂i), and g : Θ1 × ... × ΘN → O maps the declared types to an outcome o ∈ O. An outcome o = (S(·), p1, ...
, pN) consists of a schedule and a payment from each agent to the mechanism. In a standard mechanism design setting, the outcome is enforced at the end of the mechanism. However, since the end is not well-defined in this online setting, we choose to model returning the job (if it is completed) and collecting a payment from each agent i as occurring at d̂i, which, according to the agent's declaration, is the latest relevant point of time for that agent. That is, even if job i is completed before d̂i, the center does not return the job to agent i until that time. This modelling decision could instead be viewed as a decision by the mechanism designer from a larger space of possible mechanisms. Indeed, as we will discuss later, this decision of when to return a completed job is crucial to our mechanism.
Each agent's utility, ui(g(θ̂), θi) = vi · µ(ei(θ̂, di) ≥ li) · µ(d̂i ≤ di) − pi(θ̂), is a quasi-linear function of its value for its job (if completed and returned by its true deadline) and the payment it makes to the center. We assume that each agent is a rational, expected-utility maximizer. Agent declarations are restricted in that an agent cannot declare a length shorter than the true length, since the center would be able to detect such a lie if the job were completed. On the other hand, in the general formulation we will allow agents to declare longer lengths, since in some settings it may be possible to add unnecessary work to a job. However, we will also consider a restricted formulation in which this type of lie is not possible. The declared release time r̂i is the time that the agent chooses to submit job i to the center, and it cannot precede the time ri at which the job is revealed to the agent. The agent can declare an arbitrary deadline or value. To summarize, agent i can declare any type θ̂i = (r̂i, d̂i, l̂i, v̂i) such that
\u02c6li \u2265 li and \u02c6ri \u2265 ri.\nWhile in the non-strategic setting it was sufficient for the algorithm to know the upper bound k on the ratio \u03c1max \u03c1min , in the mechanism design setting we will strengthen this assumption so that the mechanism also knows \u03c1min (or, equivalently, the range [\u03c1min, \u03c1max] of possible value densities).3 Dover , we note that it is similar in its use of intervals and its preference for the active job.\nAlso, we note that the lower bound we will show in Section 5 implies that false information can also benefit a job in Dover .\n3 Note that we could then force agent declarations to satisfy \u03c1min \u2264 \u02c6vi \u02c6li \u2264 \u03c1max.\nHowever, this restriction would not 63 While we feel that it is unlikely that a center would know k without knowing this range, we later present a mechanism that does not depend on this extra knowledge in a restricted setting.\nThe restriction on the schedule is now that S(\u02c6\u03b8, t) = i implies \u02c6ri \u2264 t, to capture the fact that a job cannot be scheduled on the processor before it is declared to the mechanism.\nAs before, preemption of jobs is allowed, and job switching takes no time.\nThe constraints due to the online mechanism``s lack of knowledge of the future are that \u02c6\u03b8(t) = \u02c6\u03b8 (t) implies S(\u02c6\u03b8, t) = S(\u02c6\u03b8 , t), and \u02c6\u03b8( \u02c6di) = \u02c6\u03b8 ( \u02c6di) implies pi(\u02c6\u03b8) = pi(\u02c6\u03b8 ) for each agent i.\nThe setting can then be summarized as follows.\n1Overview of the Setting: for all t do The center instantiates S(\u02c6\u03b8, t) \u2190 i, for some i s.t. \u02c6ri \u2264 t if \u2203i, (ri = t) then \u03b8i is revealed to agent i if \u2203i, (t \u2265 ri) and agent i has not declared a job then Agent i can declare any job \u02c6\u03b8i, s.t. 
\u02c6ri = t and \u02c6li \u2265 li if \u2203i, ( \u02c6di = t) \u2227 (ei(\u02c6\u03b8, t) \u2265 li) then Completed job i is returned to agent i if \u2203i, ( \u02c6di = t) then Center sets and collects payment pi(\u02c6\u03b8) from agent i 3.2 Mechanism Goals Our aim as mechanism designer is to maximize the value of completed jobs, subject to the constraints of incentive compatibility and individual rationality.\nThe condition for (dominant strategy) incentive compatibility is that for each agent i, regardless of its true type and of the declared types of all other agents, agent i cannot increase its utility by unilaterally changing its declaration.\nDefinition 1.\nA direct mechanism \u0393 satisfies incentive compatibility (IC) if \u2200i, \u03b8i, \u03b8i, \u02c6\u03b8\u2212i : ui(g(\u03b8i, \u02c6\u03b8\u2212i), \u03b8i) \u2265 ui(g(\u03b8i, \u02c6\u03b8\u2212i), \u03b8i) From an agent perspective, dominant strategies are desirable because the agent does not have to reason about either the strategies of the other agents or the distribution from the which other agent``s types are drawn.\nFrom a mechanism designer perspective, dominant strategies are important because we can reasonably assume that an agent who has a dominant strategy will play according to it.\nFor these reasons, in this paper we require dominant strategies, as opposed to a weaker equilibrium concept such as Bayes-Nash, under which we could improve upon our positive results.4 decrease the lower bound on the competitive ratio.\n4 A possible argument against the need for incentive compatibility is that an agent``s lie may actually improve the schedule.\nIn fact, this was the case in the example we showed for the false declaration \u02c6d2 = 4.7.\nHowever, if an agent lies due to incorrect beliefs over the future input, then the lie could instead make the schedule the worse (for example, if job 3 were never released, then job 1 would have been unnecessarily abandoned).\nFurthermore, if we do not 
know the beliefs of the agents, and thus cannot predict how they will lie, then we can no longer provide a competitive guarantee for our mechanism.\nWhile restricting ourselves to incentive compatible direct mechanisms may seem limiting at first, the Revelation Principle for Dominant Strategies (see, e.g., [17]) tells us that if our goal is dominant strategy implementation, then we can make this restriction without loss of generality.\nThe second goal for our mechanism, individual rationality, requires that agents who truthfully reveal their type never have negative utility.\nThe rationale behind this goal is that participation in the mechanism is assumed to be voluntary.\nDefinition 2.\nA direct mechanism \u0393 satisfies individual rationality (IR) if \u2200i, \u03b8i, \u02c6\u03b8\u2212i, ui(g(\u03b8i, \u02c6\u03b8\u2212i), \u03b8i) \u2265 0.\nFinally, the social welfare function that we aim to maximize is the same as the objective function of the non-strategic setting: W(o, \u03b8) = i vi \u00b7 \u00b5(ei(\u03b8, di) \u2265 li) .\nAs in the nonstrategic setting, we will evaluate an online mechanism using competitive analysis to compare it against an optimal o\ufb04ine mechanism (which we will denote by \u0393offline).\nAn o\ufb04ine mechanism knows all of the types at time 0, and thus can always achieve W\u2217 (\u03b8).5 Definition 3.\nAn online mechanism \u0393 is (strictly) ccompetitive if it satisfies IC and IR, and if there does not exist a profile of agent types \u03b8 such that c\u00b7W(g(\u03b8), \u03b8) < W\u2217 (\u03b8).\n4.\nRESULTS In this section, we first present our main positive result: a (1+ \u221a k)2 +1 -competitive mechanism (\u03931).\nAfter providing some intuition as to why \u03931 satisfies individual rationality and incentive compatibility, we formally prove first these two properties and then the competitive ratio.\nWe then consider a special case in which k = 1 and agents cannot lie about the length of their job, which allows us to 
alter this mechanism so that it no longer requires either knowledge of ρmin or the collection of payments from agents.

Unlike TD1 (version 2) and Dover, Γ1 gives no preference to the active job. Instead, it always executes the available job with the highest priority: (v̂i + √k · ei(θ̂, t) · ρmin). Each agent whose job is completed is then charged the lowest value that it could have declared such that its job still would have been completed, holding constant the rest of its declaration. By the use of a payment rule similar to that of a second-price auction, Γ1 satisfies both IC with respect to values and IR. We now argue why it satisfies IC with respect to the other three characteristics. Declaring an improved job (i.e., declaring an earlier release time, a shorter length, or a later deadline) could possibly decrease the payment of an agent. However, the first two lies are not possible in our setting, while the third would cause the job, if it is completed, to be returned to the agent after the true deadline. This is the reason why it is important to always return a completed job at its declared deadline, instead of at the point at which it is completed.

5 Another possibility is to allow only the agents to know their types at time 0, and to force Γoffline to be incentive compatible so that agents will truthfully declare their types at time 0. However, this would not affect our results, since executing a VCG mechanism (see, e.g., [17]) at time 0 both satisfies incentive compatibility and always maximizes social welfare.

Mechanism 1 (Γ1):
Execute S(θ̂, ·) according to Algorithm 1
for all i do
  if ei(θ̂, d̂i) ≥ l̂i {Agent i's job is completed} then
    pi(θ̂) ← arg min_{vi ≥ 0} (ei(((r̂i, d̂i, l̂i, vi), θ̂−i), d̂i) ≥ l̂i)
  else
    pi(θ̂) ← 0

Algorithm 1:
for all t do
  Avail ← {i | (t ≥ r̂i) ∧ (ei(θ̂, t) < l̂i) ∧ (ei(θ̂, t) + d̂i − t ≥ l̂i)} {Set of all released, non-completed, non-abandoned jobs}
  if Avail ≠ ∅ then
    S(θ̂, t) ← arg max_{i ∈ Avail} (v̂i + √k · ei(θ̂, t) · ρmin) {Break ties in favor of lower r̂i}
  else
    S(θ̂, t) ← 0

It remains to argue why an agent does not have incentive to worsen its job. The only possible effects of an inflated length are delaying the completion of the job and causing it to be abandoned, and the only possible effects of an earlier declared deadline are causing it to be abandoned and causing it to be returned earlier (which has no effect on the agent's utility in our setting). On the other hand, it is less obvious why agents do not have incentive to declare a later release time.

Consider a mechanism Γ′1 that differs from Γ1 in that it does not preempt the active job i unless there exists another job j such that (v̂i + √k · l̂i · ρmin) < v̂j. Note that as an active job approaches completion in Γ′1, its condition for preemption approaches that of Γ1. However, the types in Table 2 for the case of k = 1 show why an agent may have incentive to delay the arrival of its job under Γ′1. Job 1 becomes active at time 0, and job 2 is abandoned upon its release at time 6, because 10 + 10 = v1 + l1 > v2 = 13. Then, at time 8, job 1 is preempted by job 3, because 10 + 10 = v1 + l1 < v3 = 22. Job 3 then executes to completion, forcing job 1 to be abandoned. However, job 2 had more weight than job 1, and would have prevented job 3 from being executed if it had been the active job at time 8, since 13 + 13 = v2 + l2 > v3 = 22. Thus, if agent 1 had falsely declared r̂1 = 20, then job 3 would have been abandoned at time 8, and job 1 would have completed over the range [20, 30].

Job  ri  di  li  vi
 1    0  30  10  10
 2    6  19  13  13
 3    8  30  22  22

Table 2: Jobs used to show why a slightly altered version of Γ1 would not be incentive compatible with respect to release times.

Intuitively, Γ1 avoids this problem because of two properties. First, when a job becomes active, it must have a greater priority than all other available jobs. Second, because a job's priority can only increase through the increase of its elapsed time, ei(θ̂, t), the rate of increase of a job's priority is independent of its characteristics. These two properties together imply that, while a job is active, there cannot exist a time at which its priority is less than the priority that one of these other jobs would have achieved by executing on the processor instead.

4.1 Proof of Individual Rationality and Incentive Compatibility

After presenting the (trivial) proof of IR, we break the proof of IC into lemmas.

Theorem 1. Mechanism Γ1 satisfies individual rationality.

Proof. For arbitrary i, θi, θ̂−i, if job i is not completed, then agent i pays nothing and thus has a utility of zero; that is, pi(θi, θ̂−i) = 0 and ui(g(θi, θ̂−i), θi) = 0. On the other hand, if job i is completed, then its value must exceed agent i's payment. Formally, ui(g(θi, θ̂−i), θi) = vi − arg min_{vi ≥ 0} (ei(((ri, di, li, vi), θ̂−i), di) ≥ li) ≥ 0 must hold, since the true value vi itself satisfies the condition.

To prove IC, we need to show that for an arbitrary agent i, and an arbitrary profile θ̂−i of declarations of the other agents, agent i can never gain by making a false declaration θ̂i ≠ θi, subject to the constraints that r̂i ≥ ri and l̂i ≥ li. We start by showing that, regardless of v̂i, if truthful declarations of ri, di, and li do not cause job i to be completed, then worse declarations of these variables (that is, declarations that
satisfy r̂i ≥ ri, l̂i ≥ li and d̂i ≤ di) can never cause the job to be completed. We break this part of the proof into two lemmas, first showing that it holds for the release time, regardless of the declarations of the other variables, and then for length and deadline.

Lemma 2. In mechanism Γ1, the following condition holds for all i, θi, θ̂−i: ∀ v̂i, l̂i ≥ li, d̂i ≤ di, r̂i ≥ ri:
  ei(((r̂i, d̂i, l̂i, v̂i), θ̂−i), d̂i) ≥ l̂i  ⇒  ei(((ri, d̂i, l̂i, v̂i), θ̂−i), d̂i) ≥ l̂i

Proof. Assume by contradiction that this condition does not hold; that is, job i is not completed when ri is truthfully declared, but is completed for some false declaration r̂i ≥ ri. We first analyze the case in which the release time is truthfully declared, and then we show that job i cannot be completed when agent i delays submitting it to the center.

Case I: Agent i declares θ̂i = (ri, d̂i, l̂i, v̂i). First, define the following three points in the execution of job i.

• Let ts = arg min_t (S((θ̂i, θ̂−i), t) = i) be the time that job i first starts execution.
• Let tp = arg min_{t > ts} (S((θ̂i, θ̂−i), t) ≠ i) be the time that job i is first preempted.
• Let ta = arg min_t (ei((θ̂i, θ̂−i), t) + d̂i − t < l̂i) be the time that job i is abandoned.

If ts and tp are undefined because job i never becomes active, then let ts = tp = ta. Also, partition the jobs declared by other agents before ta into the following three sets.

• X = {j | (r̂j < tp) ∧ (j ≠ i)} consists of the jobs (other than i) that arrive before job i is first preempted.
• Y = {j | (tp ≤ r̂j ≤ ta) ∧ (v̂j > v̂i + √k · ei((θ̂i, θ̂−i), r̂j))} consists of the jobs that arrive in the range [tp, ta] and that when they arrive have higher priority than job i (note that we are making use of the normalization).
• Z = {j | (tp ≤ r̂j ≤ ta) ∧ (v̂j ≤ v̂i + √k · ei((θ̂i, θ̂−i), r̂j))} consists of the jobs that arrive in the range [tp, ta] and that when they arrive have lower priority than job i.

We now show that all active jobs during the range (tp, ta] must be either i or in the set Y. Unless tp = ta (in which case this property trivially holds), it must be the case that job i has a higher priority than an arbitrary job x ∈ X at time tp, since at the time just preceding tp job x was available and job i was active. Formally, v̂x + √k · ex((θ̂i, θ̂−i), tp) < v̂i + √k · ei((θ̂i, θ̂−i), tp) must hold.6 We can then show that, over the range [tp, ta], no job x ∈ X runs on the processor. Assume by contradiction that this is not true. Let tf ∈ [tp, ta] be the earliest time in this range that some job x ∈ X is active, which implies that ex((θ̂i, θ̂−i), tf) = ex((θ̂i, θ̂−i), tp). We can then show that job i has a higher priority at time tf as follows: v̂x + √k · ex((θ̂i, θ̂−i), tf) = v̂x + √k · ex((θ̂i, θ̂−i), tp) < v̂i + √k · ei((θ̂i, θ̂−i), tp) ≤ v̂i + √k · ei((θ̂i, θ̂−i), tf), contradicting the fact that job x is active at time tf. A similar argument applies to an arbitrary job z ∈ Z, starting at its release time r̂z > tp, since by definition job i has a higher priority at that time. The only remaining jobs that can be active over the range (tp, ta] are i and those in the set Y.

Case II: Agent i declares θ̂′i = (r̂i, d̂i, l̂i, v̂i), where r̂i > ri. We now show that job i cannot be completed in this case, given that it was not completed in case I.
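Before proceeding with the formal argument, the claim can be checked numerically on the instance of Table 2. The sketch below is illustrative only: it assumes unit-length integer time steps, k = 1 and ρmin = 1, and the names `run_gamma1` and `table2` are our own. It simulates Algorithm 1 once with job 1's true release time and once with the delayed declaration r̂1 = 20 that was profitable under Γ′1.

```python
import math

def run_gamma1(jobs, k=1.0, rho_min=1.0, horizon=40):
    """Simulate Algorithm 1 with unit-length time steps.

    jobs maps a job id to its declared type (r, d, l, v)."""
    e = {i: 0 for i in jobs}          # elapsed execution time e_i
    completed = set()
    for t in range(horizon):
        # Avail: released, non-completed, non-abandoned jobs
        avail = [i for i, (r, d, l, v) in jobs.items()
                 if t >= r and e[i] < l and e[i] + d - t >= l]
        if not avail:
            continue
        # priority v_i + sqrt(k) * e_i * rho_min; ties favor lower release time
        active = max(avail,
                     key=lambda i: (jobs[i][3] + math.sqrt(k) * e[i] * rho_min,
                                    -jobs[i][0]))
        e[active] += 1
        if e[active] >= jobs[active][2]:
            completed.add(active)
    return completed

# The Table 2 instance: job id -> (r_i, d_i, l_i, v_i)
table2 = {1: (0, 30, 10, 10), 2: (6, 19, 13, 13), 3: (8, 30, 22, 22)}

truthful = run_gamma1(table2)                          # -> {3}
delayed = run_gamma1({**table2, 1: (20, 30, 10, 10)})  # agent 1 declares r_1 = 20; -> {3}
```

In both runs only job 3 is completed: delaying job 1's release does not cause it to be completed under Γ1, consistent with the monotonicity in r̂i that Lemma 2 establishes (whereas under Γ′1 the same delay was shown above to be profitable).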
First, we can restrict the range of r̂i that we need to consider as follows. Declaring r̂i ∈ (ri, ts] would not affect the schedule, since ts would still be the first time that job i executes. Also, declaring r̂i > ta could not cause the job to be completed, since di − ta < l̂i holds, which implies that job i would be abandoned at its release. Thus, we can restrict consideration to r̂i ∈ (ts, ta].

In order for declaring θ̂′i to cause job i to be completed, a necessary condition is that the execution of some job yc ∈ Y must change during the range (tp, ta], since the only jobs other than i that are active during that range are in Y. Let tc = arg min_{t ∈ (tp, ta]} [∃yc ∈ Y, (S((θ̂i, θ̂−i), t) = yc) ∧ (S((θ̂′i, θ̂−i), t) ≠ yc)] be the first time that such a change occurs. We will now show that for any r̂i ∈ (ts, ta], there cannot exist a job with higher priority than yc at time tc, contradicting (S((θ̂′i, θ̂−i), tc) ≠ yc).

First note that job i cannot have a higher priority, since there would have to exist a t ∈ (tp, tc) such that ∃y ∈ Y, (S((θ̂i, θ̂−i), t) = y) ∧ (S((θ̂′i, θ̂−i), t) = i), contradicting the definition of tc.

6 For simplicity, when we give the formal condition for a job x to have a higher priority than another job y, we will assume that job x's priority is strictly greater than job y's, because, in the case of a tie that favors x, future ties would also be broken in favor of job x.

Now consider an arbitrary y ∈ Y such that y ≠ yc. In case I, we know that job y has lower priority than yc at time tc; that is, v̂y + √k · ey((θ̂i, θ̂−i), tc) < v̂yc + √k · eyc((θ̂i, θ̂−i), tc). Thus, moving to case II, job y must replace some other job before tc. Since r̂y ≥ tp, the condition is that there must exist some t ∈ (tp, tc) such that ∃w ∈ Y ∪ {i}, (S((θ̂i, θ̂−i), t) = w) ∧ (S((θ̂′i, θ̂−i), t) = y). Since w ∈ Y would contradict the definition of tc, we know that w = i. That is, the job that y replaces must be i. By definition of the set Y, we know that v̂y > v̂i + √k · ei((θ̂i, θ̂−i), r̂y). Thus, if r̂y ≤ t, then job i could not have executed instead of y in case I. On the other hand, if r̂y > t, then job y obviously could not execute at time t, contradicting the existence of such a time t.

Now consider an arbitrary job x ∈ X. We know that in case I job i has a higher priority than job x at time ts, or, formally, that v̂x + √k · ex((θ̂i, θ̂−i), ts) < v̂i + √k · ei((θ̂i, θ̂−i), ts). We also know that v̂i + √k · ei((θ̂i, θ̂−i), tc) < v̂yc + √k · eyc((θ̂i, θ̂−i), tc). Since delaying i's arrival will not affect the execution up to time ts, and since job x cannot execute instead of a job y ∈ Y at any time t ∈ (tp, tc] by definition of tc, the only way for job x's priority to increase before tc as we move from case I to II is to replace job i over the range (ts, tc]. Thus, an upper bound on job x's priority when agent i declares θ̂′i is: v̂x + √k · (ex((θ̂i, θ̂−i), ts) + ei((θ̂i, θ̂−i), tc) − ei((θ̂i, θ̂−i), ts)) < v̂i + √k · (ei((θ̂i, θ̂−i), ts) + ei((θ̂i, θ̂−i), tc) − ei((θ̂i, θ̂−i), ts)) = v̂i + √k · ei((θ̂i, θ̂−i), tc) < v̂yc + √k · eyc((θ̂i, θ̂−i), tc). Thus, even at this upper bound, job yc would execute instead of job x at time tc. A similar argument applies to an arbitrary job z ∈ Z, starting at its release time r̂z.

Since the sets {i}, X, Y, Z partition the set of jobs released before ta, we have shown that no job could execute instead of job yc, contradicting the existence of tc, and completing the proof.

Lemma 3. In mechanism Γ1, the following condition holds for all i, θi, θ̂−i: ∀ v̂i, l̂i ≥ li, d̂i ≤ di:
  ei(((ri, d̂i, l̂i, v̂i), θ̂−i), d̂i) ≥ l̂i  ⇒  ei(((ri, di, li, v̂i), θ̂−i), di) ≥ li

Proof. Assume by contradiction there exists some instantiation of the above variables such that job i is not completed when li and di are truthfully declared, but is completed for some pair of false declarations l̂i ≥ li and d̂i ≤ di. Note that the only effect that d̂i and l̂i have on the execution of the algorithm is on whether or not i ∈ Avail. Specifically, they affect the two conditions: (ei(θ̂, t) < l̂i) and (ei(θ̂, t) + d̂i − t ≥ l̂i). Because job i is completed when l̂i and d̂i are declared, the former condition (for completion) must become false before the latter. Since truthfully declaring li ≤ l̂i and di ≥ d̂i will only make the former condition become false earlier and the latter condition become false later, the execution of the algorithm will not be affected when moving to truthful declarations, and job i will be completed, a contradiction.

We now use these two lemmas to show that the payment for a completed job can only increase by falsely declaring worse l̂i, d̂i, and r̂i.

Lemma 4. In mechanism Γ1, the following condition holds for all i, θi, θ̂−i: ∀ l̂i ≥ li, d̂i ≤ di, r̂i ≥ ri:
  arg min_{vi ≥ 0} (ei(((r̂i, d̂i, l̂i, vi), θ̂−i), d̂i) ≥ l̂i) ≥ arg min_{vi ≥ 0} (ei(((ri, di, li, vi), θ̂−i), di) ≥ li)

Proof. Assume by contradiction that this condition does not hold. This implies that there exists some value vi such that the condition (ei(((r̂i, d̂i, l̂i, vi), θ̂−i), d̂i) ≥ l̂i) holds, but (ei(((ri, di, li, vi), θ̂−i), di) ≥ li) does not. Applying Lemmas 2 and 3: (ei(((r̂i, d̂i, l̂i, vi), θ̂−i), d̂i) ≥ l̂i) ⇒ (ei(((ri, d̂i, l̂i, vi), θ̂−i), d̂i) ≥ l̂i) ⇒ (ei(((ri, di, li, vi), θ̂−i), di) ≥ li), a contradiction.

Finally, the following lemma tells us that the completion of a job is monotonic in its declared value.

Lemma 5. In mechanism Γ1, the following condition holds for all i, θ̂i, θ̂−i: ∀ v̂′i ≥ v̂i:
  ei(((r̂i, d̂i, l̂i, v̂i), θ̂−i), d̂i) ≥ l̂i  ⇒  ei(((r̂i, d̂i, l̂i, v̂′i), θ̂−i), d̂i) ≥ l̂i

The proof, by contradiction, of this lemma is omitted because it is essentially identical to that of Lemma 2 for r̂i. In case I, agent i declares (r̂i, d̂i, l̂i, v̂′i) and the job is not completed, while in case II he declares (r̂i, d̂i, l̂i, v̂i) and the job is completed. The analysis of the two cases then proceeds as before: the execution will not change up to time ts because the initial priority of job i decreases as we move from case I to II; and, as a result, there cannot be a change in the execution of a job other than i over the range (tp, ta].

We can now combine the lemmas to show that no profitable deviation is possible.

Theorem 6. Mechanism Γ1 satisfies incentive compatibility.

Proof. For an arbitrary agent i, we know that r̂i ≥ ri and l̂i ≥ li hold by assumption. We also know that agent i has no incentive to declare d̂i > di, because job i would never be returned before its true deadline. Then, because the payment function is non-negative, agent i's utility could not exceed zero. By IR, this is the minimum utility it would achieve if it truthfully declared θi. Thus, we can restrict consideration to θ̂i that satisfy r̂i
≥ ri, l̂i ≥ li, and d̂i ≤ di. Again using IR, we can further restrict consideration to θ̂i that cause job i to be completed, since any other θ̂i yields a utility of zero. If truthful declaration of θi causes job i to be completed, then by Lemma 4 any such false declaration θ̂i could not decrease the payment of agent i. On the other hand, if truthful declaration does not cause job i to be completed, then declaring such a θ̂i will cause agent i to have negative utility, since vi < arg min_{vi ≥ 0} (ei(((ri, di, li, vi), θ̂−i), di) ≥ li) ≤ arg min_{vi ≥ 0} (ei(((r̂i, d̂i, l̂i, vi), θ̂−i), d̂i) ≥ l̂i) holds by Lemmas 5 and 4, respectively.

4.2 Proof of Competitive Ratio

The proof of the competitive ratio, which makes use of techniques adapted from those used in [15], is also broken into lemmas. Having shown IC, we can assume truthful declaration (θ̂ = θ). Since we have also shown IR, in order to prove the competitive ratio it remains to bound the loss of social welfare against Γoffline.

Denote by (1, 2, ..., F) the sequence of jobs completed by Γ1. Divide time into intervals If = (t_f^open, t_f^close], one for each job f in this sequence. Set t_f^close to be the time at which job f is completed, and set t_f^open = t_{f−1}^close for f ≥ 2, and t_1^open = 0 for f = 1. Also, let t_f^begin be the first time that the processor is not idle in interval If.

Lemma 7. For any interval If, the following inequality holds: t_f^close − t_f^begin ≤ (1 + 1/√k) · vf.

Proof. Interval If begins with a (possibly zero length) period of time in which the processor is idle because there is no available job. Then, it continuously executes a sequence of jobs (1, 2, ..., c), where each job i in this sequence is preempted by job i + 1, except for job c, which is completed (thus, job c in this sequence is the same as job f in the global sequence of completed jobs). Let t_i^s be the time that job i begins execution. Note that t_1^s = t_f^begin. Over the range [t_f^begin, t_f^close], the priority (vi + √k · ei(θ, t)) of the active job is monotonically increasing with time, because this function linearly increases while a job is active, and can only increase at a point in time when preemption occurs. Thus, each job i > 1 in this sequence begins execution at its release time (that is, t_i^s = ri), because its priority does not increase while it is not active.

We now show that the value of the completed job c exceeds the product of √k and the time spent in the interval on jobs 1 through c − 1, or, more formally, that the following condition holds: vc ≥ √k · Σ_{h=1}^{c−1} (eh(θ, t_{h+1}^s) − eh(θ, t_h^s)). To show this, we will prove by induction that the stronger condition vi ≥ √k · Σ_{h=1}^{i−1} eh(θ, t_{h+1}^s) holds for all jobs i in the sequence.

Base Case: For i = 1, v1 ≥ √k · Σ_{h=1}^{0} eh(θ, t_{h+1}^s) = 0, since the sum is over zero elements.

Inductive Step: For an arbitrary 1 ≤ i < c, we assume that vi ≥ √k · Σ_{h=1}^{i−1} eh(θ, t_{h+1}^s) holds. At time t_{i+1}^s, we know that vi+1 ≥ vi + √k · ei(θ, t_{i+1}^s) holds, because t_{i+1}^s = ri+1. These two inequalities together imply that vi+1 ≥ √k · Σ_{h=1}^{i} eh(θ, t_{h+1}^s), completing the inductive step.

We also know that t_f^close − t_c^s ≤ lc ≤ vc must hold, by the simplifying normalization of ρmin = 1 and the fact that job c's execution time cannot exceed its length. We can thus bound the total execution time of If by: t_f^close − t_f^begin = (t_f^close − t_c^s) + Σ_{h=1}^{c−1} (eh(θ, t_{h+1}^s) − eh(θ, t_h^s)) ≤ (1 + 1/√k) · vf.

We now consider the possible execution of uncompleted jobs by Γoffline. Associate each job i that is not completed by Γ1 with the interval during which it was abandoned. All jobs are now associated with an interval, since there are no gaps between the intervals, and since no job i can be abandoned after the close of the last interval at t_F^close. Because the processor is idle after t_F^close, any such job i would become active at some time t ≥ t_F^close, which would lead to the completion of some job, creating a new interval and contradicting the fact that IF is the last one.

The following lemma is equivalent to Lemma 5.6 of [15], but the proof is different for our mechanism.

Lemma 8. For any interval If and any job i abandoned in If, the following inequality holds: vi ≤ (1 + √k) · vf.

Proof. Assume by contradiction that there exists a job i abandoned in If such that vi > (1 + √k) · vf. At t_f^close, the priority of job f is vf + √k · lf ≤ (1 + √k) · vf. Because the priority of the active job monotonically increases over the range [t_f^begin, t_f^close], job i would have a higher priority than the active job (and thus begin execution) at some time t ∈ [t_f^begin, t_f^close]. Again applying monotonicity, this would imply that the priority of the active job at t_f^close exceeds (1 + √k) · vf, contradicting the fact that it is vf + √k · lf ≤ (1 + √k) · vf.

As in [15], for each interval If, we give Γoffline the following gift: k times the amount of time in the range [t_f^begin, t_f^close] that it does not schedule a job. Additionally, we give the adversary vf, since the adversary may be able to complete this job at some future time, due to the fact that Γ1 ignores deadlines. The following lemma is Lemma 5.10 in [15], and its proof now applies directly.

Lemma 9. [15] With the above gifts the total net gain obtained by the clairvoyant algorithm from scheduling the jobs abandoned during If is not greater than (1 + √k) · vf.

The intuition behind this lemma is that the best that the adversary can do is to take almost all of the gift of k · (t_f^close − t_f^begin) (intuitively, this is equivalent to executing jobs with the maximum possible value density over the time that Γ1 is active), and then begin execution of a job abandoned by Γ1 right before t_f^close. By Lemma 8, the value of this job is bounded by (1 + √k) · vf. We can now combine the results of these lemmas to prove the competitive ratio.

Theorem 10. Mechanism Γ1 is ((1 + √k)² + 1)-competitive.

Proof. Using the fact that the way in which jobs are associated with the intervals partitions the entire set of jobs, we can show the competitive ratio by showing that Γ1 is ((1 + √k)² + 1)-competitive for each interval in the sequence (1, ..., F). Over an arbitrary interval If, the offline algorithm can achieve at most (t_f^close − t_f^begin) · k + vf + (1 + √k) · vf, from the two gifts and the net gain bounded by Lemma 9. Applying Lemma 7, this quantity is then bounded from above by (1 + 1/√k) · vf · k + vf + (1 + √k) · vf = ((1 + √k)² + 1) · vf. Since Γ1 achieves vf, the competitive ratio holds.

4.3 Special Case: Unalterable Length and k = 1

While so far we have allowed each agent to lie about all four characteristics of its job, lying about the length of the job is not possible in some settings. For example, a user may not know how to alter a computational problem in a way that both lengthens the job and allows the solution of the original problem to be extracted from the solution to the altered problem. Another restriction that is natural in some settings is uniform value densities (k = 1), which was the case considered by [4]. If the setting satisfies these two conditions, then, by using Mechanism Γ2, we can achieve a competitive ratio of 5 (which is the same competitive ratio as Γ1 for the case of k
= 1) without knowledge of \u03c1min and without the use of payments.\nThe latter property may be necessary in settings that are more local than grid computing (e.g., within a department) but in which the users are still self-interested.7 Mechanism 2 \u03932 Execute S(\u02c6\u03b8, \u00b7) according to Algorithm 2 for all i do pi(\u02c6\u03b8) \u2190 0 Algorithm 2 for all t do Avail \u2190 {i|(t \u2265 \u02c6ri)\u2227(ei(\u02c6\u03b8, t) < li)\u2227(ei(\u02c6\u03b8, t)+ \u02c6di\u2212t \u2265 li)} if Avail = \u2205 then S(\u02c6\u03b8, t) \u2190 arg maxi\u2208Avail(li + ei(\u02c6\u03b8, t)) {Break ties in favor of lower \u02c6ri} else S(\u02c6\u03b8, t) \u2190 0 Theorem 11.\nWhen k = 1, and each agent i cannot falsely declare li, Mechanism \u03932 satisfies individual rationality and incentive compatibility.\nTheorem 12.\nWhen k = 1, and each agent i cannot falsely declare li, Mechanism \u03932 is 5-competitive.\nSince this mechanism is essentially a simplification of \u03931, we omit proofs of these theorems.\nBasically, the fact that k = 1 and \u02c6li = li both hold allows \u03932 to substitute the priority (li +ei(\u02c6\u03b8, t)) for the priority used in \u03931; and, since \u02c6vi is ignored, payments are no longer needed to ensure incentive compatibility.\n5.\nCOMPETITIVE LOWER BOUND We now show that the competitive ratio of (1 + \u221a k)2 + 1 achieved by \u03931 is a lower bound for deterministic online mechanisms.\nTo do so, we will appeal to third requirement on a mechanism, non-negative payments (NNP), which requires that the center never pays an agent (formally, \u2200i, \u02c6\u03b8, pi(\u02c6\u03b8i) \u2265 0).\nUnlike IC and IR, this requirement is not standard in mechanism design.\nWe note, however, that both \u03931 and \u03932 satisfy it trivially, and that, in the following proof, zero only serves as a baseline utility for an agent, and could be replaced by any non-positive function of \u02c6\u03b8\u2212i.\nThe proof of the lower bound uses an 
adversary argument similar to that used in [4] to show a lower bound of (1 +\u221a k)2 in the non-strategic setting, with the main novelty lying in the perturbation of the job sequence and the related incentive compatibility arguments.\nWe first present a lemma relating to the recurrence used for this argument, with the proof omitted due to space constraints.\nLemma 13.\nFor any k \u2265 1, for the recurrence defined by li+1 = \u03bb \u00b7 li \u2212 k \u00b7 i h=1 lh and l1 = 1, where (1 + \u221a k)2 \u2212 1 < \u03bb < (1 + \u221a k)2 , there exists an integer m \u2265 1 such that lm+k\u00b7 m\u22121 h=1 lh lm > \u03bb.\n7 While payments are not required in this setting, \u03932 can be changed to collect a payments without affecting incentive compatibility by charging some fixed fraction of li for each job i that is completed.\n68 Theorem 14.\nThere does not exist a deterministic online mechanism that satisfies NNP and that achieves a competitive ratio less than (1 + \u221a k)2 + 1.\nProof.\nAssume by contradiction that there exists a deterministic online mechanism \u0393 that satisfies NNP and that achieves a competitive ratio of c = (1 + \u221a k)2 + 1 \u2212 for some > 0 (and, by implication, satisfies IC and IR as well).\nSince a competitive ratio of c implies a competitive ratio of c + x, for any x > 0, we assume without loss of generality that < 1.\nFirst, we will construct a profile of agent types \u03b8 using an adversary argument.\nAfter possibly slightly perturbing \u03b8 to assure that a strictness property is satisfied, we will then use a more significant perturbation of \u03b8 to reach a contradiction.\nWe now construct the original profile \u03b8.\nPick an \u03b1 such that 0 < \u03b1 < , and define \u03b4 = \u03b1 ck+3k .\nThe adversary uses two sequences of jobs: minor and major.\nMinor jobs i are characterized by li = \u03b4, vi = k \u00b7 \u03b4, and zero laxity.\nThe first minor job is released at time 0, and ri = di\u22121 for all i > 1.\nThe 
sequence stops whenever Γ completes any job. Major jobs also have zero laxity, but they have the smallest possible value ratio (that is, v_i = l_i). The lengths of the major jobs that may be released, starting with i = 1, are determined by the following recurrence relation:

l_{i+1} = (c − 1 + α)·l_i − k·Σ_{h=1}^{i} l_h,  with l_1 = 1.

The bounds on α imply that (1 + √k)² − 1 < c − 1 + α < (1 + √k)², which allows us to apply Lemma 13. Let m be the smallest positive number such that (l_m + k·Σ_{h=1}^{m−1} l_h) / l_m > c − 1 + α. The first major job has a release time of 0, and each major job i > 1 has a release time of r_i = d_{i−1} − δ, just before the deadline of the previous job. The adversary releases major job i ≤ m if and only if each major job j < i was executed continuously over the range [r_j, r_{j+1}]. No major job is released after job m.

In order to achieve the desired competitive ratio, Γ must complete some major job f, because Γoffline can always at least complete major job 1 (for a value of 1), and Γ can complete at most one minor job (for a value of k·δ = α/(c + 3) < 1/c). Also, in order for this job f to be released, the processor time preceding r_f can only be spent executing major jobs that are later abandoned. If f < m, then major job f + 1 will be released and it will be the final major job. Γ cannot complete job f + 1, because r_f + l_f = d_f > r_{f+1}. Therefore, θ consists of major jobs 1 through f + 1 (or through f, if f = m), plus minor jobs from time 0 through time d_f.

We now possibly perturb θ slightly. By IR, we know that v_f ≥ p_f(θ). Since we will later need this inequality to be strict, if v_f = p_f(θ), then change θ_f to θ′_f, where r′_f = r_f, but v′_f, l′_f, and d′_f are all incremented by δ over their respective values in θ_f. By IC, job f must still be completed by Γ for the profile (θ′_f, θ_{−f}). If
not, then by IR and NNP we know that p_f(θ′_f, θ_{−f}) = 0, and thus that u_f(g(θ′_f, θ_{−f}), θ′_f) = 0. However, agent f could then increase its utility by falsely declaring the original type θ_f, receiving a utility of: u_f(g(θ_f, θ_{−f}), θ′_f) = v′_f − p_f(θ) = δ > 0, violating IC. Furthermore, agent f must be charged the same amount (that is, p_f(θ′_f, θ_{−f}) = p_f(θ)), due to a similar incentive compatibility argument. Thus, for the remainder of the proof, assume that v_f > p_f(θ).

We now use a more substantial perturbation of θ to complete the proof. If f < m, then define θ′_f to be identical to θ_f, except that d′_f = d_{f+1} + l_f, allowing job f to be completely executed after job f + 1 is completed. If f = m, then instead set d′_f = d_f + l_f. IC requires that for the profile (θ′_f, θ_{−f}), Γ still executes job f continuously over the range [r_f, r_f + l_f], thus preventing job f + 1 from being completed. Assume by contradiction that this were not true. Then, at the original deadline d_f, job f is not completed. Consider the possible profile (θ′_f, θ_{−f}, θ_x), which differs from the new profile only in the addition of a job x which has zero laxity, r_x = d_f, and v_x = l_x = max(d′_f − d_f, (c + 1)·(l_f + l_{f+1})). Because this new profile is indistinguishable from (θ′_f, θ_{−f}) to Γ before time d_f, it must schedule jobs in the same way until d_f. Then, in order to achieve the desired competitive ratio, it must execute job x continuously until its deadline, which is by construction at least as late as the new deadline d′_f of job f.
Thus, job f will not be completed, and, by IR and NNP, it must be the case that p_f(θ′_f, θ_{−f}, θ_x) = 0 and u_f(g(θ′_f, θ_{−f}, θ_x), θ′_f) = 0. Using the fact that θ is indistinguishable from (θ′_f, θ_{−f}, θ_x) up to time d_f, if agent f falsely declared its type to be the original θ_f, then its job would be completed by d_f and it would be charged p_f(θ). Its utility would then increase to u_f(g(θ_f, θ_{−f}, θ_x), θ′_f) = v_f − p_f(θ) > 0, contradicting IC.

While Γ's execution must be identical for both (θ_f, θ_{−f}) and (θ′_f, θ_{−f}), Γoffline can take advantage of the change. If f < m, then Γ achieves a value of at most l_f + δ (the value of job f if it were perturbed), while Γoffline achieves a value of at least k·(Σ_{h=1}^{f} l_h − 2δ) + l_{f+1} + l_f by executing minor jobs until r_{f+1}, followed by job f + 1 and then job f (we subtract two δ's instead of one because the last minor job before r_{f+1} may have to be abandoned). Substituting in for l_{f+1}, the competitive ratio is then at least:

[k·(Σ_{h=1}^{f} l_h − 2δ) + l_{f+1} + l_f] / (l_f + δ)
= [k·Σ_{h=1}^{f} l_h − 2k·δ + (c − 1 + α)·l_f − k·Σ_{h=1}^{f} l_h + l_f] / (l_f + δ)
= [c·l_f + (α·l_f − 2k·δ)] / (l_f + δ)
≥ [c·l_f + ((ck + 3k)·δ − 2k·δ)] / (l_f + δ)
> c.

If instead f = m, then Γ achieves a value of at most l_m + δ, while Γoffline achieves a value of at least k·(Σ_{h=1}^{m} l_h − 2δ) + l_m by completing minor jobs until d_m = r_m + l_m, and then completing job m. The competitive ratio is then at least:

[k·(Σ_{h=1}^{m} l_h − 2δ) + l_m] / (l_m + δ)
= [k·Σ_{h=1}^{m−1} l_h − 2k·δ + k·l_m + l_m] / (l_m + δ)
> [(c − 1 + α)·l_m − 2k·δ + k·l_m] / (l_m + δ)
= [(c + k − 1)·l_m + (α·l_m − 2k·δ)] / (l_m + δ)
> c.
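Before moving on, the scheduling rule of Mechanism Γ2 (Algorithm 2 above) can be made concrete. The following is a minimal discrete-time sketch; the dictionary-based job representation and the function name are our own simplifications, while the paper works with declared types θ̂ in continuous time (payments under Γ2 are all zero, so they are omitted).

```python
def schedule_gamma2(jobs, horizon):
    """Sketch of Algorithm 2: jobs is a list of dicts with keys
    r (release), d (deadline), l (length); returns the job index run
    at each unit time step, or None when the processor idles."""
    elapsed = [0] * len(jobs)          # e_i(theta, t): time job i has run so far
    timeline = []
    for t in range(horizon):
        # Avail: released, not yet finished, and still completable by its deadline
        avail = [i for i, j in enumerate(jobs)
                 if t >= j["r"] and elapsed[i] < j["l"]
                 and elapsed[i] + (j["d"] - t) >= j["l"]]
        if avail:
            # priority l_i + e_i(theta, t); ties broken in favor of lower release
            i = max(avail, key=lambda i: (jobs[i]["l"] + elapsed[i], -jobs[i]["r"]))
            elapsed[i] += 1
            timeline.append(i)
        else:
            timeline.append(None)
    return timeline

# Two jobs released together: the longer one has higher priority at every
# step, so the shorter job is abandoned once it can no longer meet its deadline.
jobs = [{"r": 0, "d": 4, "l": 2}, {"r": 0, "d": 4, "l": 3}]
print(schedule_gamma2(jobs, 4))  # [1, 1, 1, None]
```

Note how the priority l_i + e_i grows as a job accumulates processing time, so a job that has been started tends to keep the processor, which is the behavior the 5-competitiveness argument relies on.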
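The recurrence of Lemma 13 and the closing inequality of the proof above can be spot-checked numerically. This sketch is only an illustration under our own sample parameters (the values of k, ε, and f are arbitrary choices, not from the paper), not a substitute for the proof.

```python
import math

def major_lengths(lam, k, n):
    """First n terms of l_{i+1} = lam*l_i - k*sum_{h<=i} l_h, with l_1 = 1."""
    ls = [1.0]
    for _ in range(n - 1):
        ls.append(lam * ls[-1] - k * sum(ls))
    return ls

def lemma13_m(lam, k, max_iter=10_000):
    """Smallest m with (l_m + k*sum_{h<m} l_h) / l_m > lam (Lemma 13)."""
    ls = [1.0]
    for m in range(1, max_iter + 1):
        if (ls[-1] + k * sum(ls[:-1])) / ls[-1] > lam:
            return m
        ls.append(lam * ls[-1] - k * sum(ls))
    return None

# Sample instantiation of the case f < m in the proof of Theorem 14.
k, eps, f = 2.0, 0.5, 3
c = (1 + math.sqrt(k)) ** 2 + 1 - eps      # target ratio c = (1+sqrt(k))^2 + 1 - eps
alpha = eps / 2                            # any 0 < alpha < eps
delta = alpha / (c * k + 3 * k)            # minor-job length delta
ls = major_lengths(c - 1 + alpha, k, f + 1)
lf, lf1 = ls[f - 1], ls[f]
# Offline value k*(sum - 2*delta) + l_{f+1} + l_f versus online value l_f + delta:
ratio = (k * (sum(ls[:f]) - 2 * delta) + lf1 + lf) / (lf + delta)
print(ratio > c)  # True
```

Running this confirms that the adversary's construction pushes the achieved ratio strictly above c for these parameters, matching the chain of inequalities in the proof.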
6. RELATED WORK
In this section we describe related work other than the two papers ([4] and [15]) on which this paper is based. Recent work related to this scheduling domain has focused on competitive analysis in which the online algorithm uses a faster processor than the offline algorithm (see, e.g., [13, 14]). Mechanism design was also applied to a scheduling problem in [18]. In their model, the center owns the jobs in an offline setting, and it is the agents who can execute them. The private information of an agent is the time it will require to execute each job. Several incentive compatible mechanisms are presented that are based on approximation algorithms for the computationally infeasible optimization problem. This paper also launched the area of algorithmic mechanism design, in which the mechanism must satisfy computational requirements in addition to the standard incentive requirements. A growing sub-field in this area is multicast cost-sharing mechanism design (see, e.g., [1]), in which the mechanism must efficiently determine, for each agent in a multicast tree, whether the agent receives the transmission and the price it must pay. For a survey of this and other topics in distributed algorithmic mechanism design, see [9]. Online execution presents a different type of algorithmic challenge, and several other papers study online algorithms or mechanisms in economic settings. For example, [5] considers an online market clearing setting, in which the auctioneer matches buy and sell bids (which are assumed to be exogenous) that arrive and expire over time. In [2], a general method is presented for converting an online algorithm into an online mechanism that is incentive compatible with respect to values. Truthful declaration of values is also considered in [3] and [16], which both consider multi-unit online auctions. The main difference between the two is that the former considers the case of a digital good, which thus has unlimited
supply. It is pointed out in [16] that their results continue to hold when the setting is extended so that bidders can delay their arrival. The only other paper we are aware of that addresses the issue of incentive compatibility in a real-time system is [11], which considers several variants of a model in which the center allocates bandwidth to agents who declare both their value and their arrival time. A dominant strategy IC mechanism is presented for the variant in which every point in time is essentially independent, while a Bayes-Nash IC mechanism is presented for the variant in which the center's current decision affects the cost of future actions.

7. CONCLUSION
In this paper, we considered an online scheduling domain for which algorithms with the best possible competitive ratio had been found, but for which new solutions were required when the setting is extended to include self-interested agents. We presented a mechanism that is incentive compatible with respect to release time, deadline, length, and value, and that only increases the competitive ratio by one. We also showed how this mechanism could be simplified when k = 1 and each agent cannot lie about the length of its job. We then showed a matching lower bound on the competitive ratio that can be achieved by a deterministic mechanism that never pays the agents. Several open problems remain in this setting. One is to determine whether the lower bound can be strengthened by removing the restriction of non-negative payments. Also, while we feel that it is reasonable to strengthen the assumption of knowing the maximum possible ratio of value densities (k) to knowing the actual range of possible value densities, it would be interesting to determine whether there exists a ((1 + √k)² + 1)-competitive mechanism under the original assumption. Finally, randomized mechanisms provide an unexplored area for future work.

8. REFERENCES
[1] A. Archer, J. Feigenbaum, A. Krishnamurthy, R. Sami, and S.
Shenker, Approximation and collusion in multicast cost sharing, Games and Economic Behavior (to appear).
[2] B. Awerbuch, Y. Azar, and A. Meyerson, Reducing truth-telling online mechanisms to online optimization, Proceedings of the 35th Symposium on the Theory of Computing, 2003.
[3] Z. Bar-Yossef, K. Hildrum, and F. Wu, Incentive-compatible online auctions for digital goods, Proceedings of the 13th Annual ACM-SIAM Symposium on Discrete Algorithms, 2002.
[4] S. Baruah, G. Koren, D. Mao, B. Mishra, A. Raghunathan, L. Rosier, D. Shasha, and F. Wang, On the competitiveness of on-line real-time task scheduling, Journal of Real-Time Systems 4 (1992), no. 2, 125-144.
[5] A. Blum, T. Sandholm, and M. Zinkevich, Online algorithms for market clearing, Proceedings of the 13th Annual ACM-SIAM Symposium on Discrete Algorithms, 2002.
[6] A. Borodin and R. El-Yaniv, Online computation and competitive analysis, Cambridge University Press, 1998.
[7] R. Buyya, D. Abramson, J. Giddy, and H. Stockinger, Economic models for resource management and scheduling in grid computing, The Journal of Concurrency and Computation: Practice and Experience 14 (2002), 1507-1542.
[8] N. Camiel, S. London, N. Nisan, and O. Regev, The POPCORN project: Distributed computation over the Internet in Java, 6th International World Wide Web Conference, 1997.
[9] J. Feigenbaum and S. Shenker, Distributed algorithmic mechanism design: Recent results and future directions, Proceedings of the 6th International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, 2002, pp. 1-13.
[10] A. Fiat and G. Woeginger (editors), Online algorithms: The state of the art, Springer Verlag, 1998.
[11] E. Friedman and D. Parkes, Pricing WiFi at Starbucks: Issues in online mechanism design, EC'03, 2003.
[12] R. L. Graham, Bounds for certain multiprocessor anomalies, Bell System Technical Journal 45 (1966), 1563-1581.
[13] B. Kalyanasundaram and K.
Pruhs, Speed is as powerful as clairvoyance, Journal of the ACM 47 (2000), 617-643.
[14] C. Koo, T. Lam, T. Ngan, and K. To, On-line scheduling with tight deadlines, Theoretical Computer Science 295 (2003), 251-261.
[15] G. Koren and D. Shasha, D-over: An optimal on-line scheduling algorithm for overloaded real-time systems, SIAM Journal of Computing 24 (1995), no. 2, 318-339.
[16] R. Lavi and N. Nisan, Competitive analysis of online auctions, EC'00, 2000.
[17] A. Mas-Colell, M. Whinston, and J. Green, Microeconomic theory, Oxford University Press, 1995.
[18] N. Nisan and A. Ronen, Algorithmic mechanism design, Games and Economic Behavior 35 (2001), 166-196.
[19] C. Papadimitriou, Algorithms, games, and the internet, STOC, 2001, pp. 749-753.","lvl-3":"Mechanism Design for Online Real-Time Scheduling
ABSTRACT
For the problem of online real-time scheduling of jobs on a single processor, previous work presents matching upper and lower bounds on the competitive ratio that can be achieved by a deterministic algorithm. However, these results only apply to the non-strategic setting in which the jobs are released directly to the algorithm. Motivated by emerging areas such as grid computing, we instead consider this problem in an economic setting, in which each job is released to a separate, self-interested agent. The agent can then delay releasing the job to the algorithm, inflate its length, and declare an arbitrary value and deadline for the job, while the center determines not only the schedule, but the payment of each agent. For the resulting mechanism design problem (in which we also slightly strengthen an assumption from the non-strategic setting), we present a mechanism that addresses each incentive issue, while only increasing the competitive ratio by one. We then show a matching lower bound for deterministic mechanisms that never pay the agents.

1. INTRODUCTION
We consider the problem of online scheduling of jobs on a single processor. Each
job is characterized by a release time, a deadline, a processing time, and a value for successful completion by its deadline. The objective is to maximize the sum of the values of the jobs completed by their respective deadlines. The key challenge in this online setting is that the schedule must be constructed in real-time, even though nothing is known about a job until its release time. Competitive analysis [6, 10], with its roots in [12], is a well-studied approach for analyzing online algorithms by comparing them against the optimal offline algorithm, which has full knowledge of the input at the beginning of its execution. One interpretation of this approach is as a game between the designer of the online algorithm and an adversary. First, the designer selects the online algorithm. Then, the adversary observes the algorithm and selects the sequence of jobs that maximizes the competitive ratio: the ratio of the value of the jobs completed by an optimal offline algorithm to the value of those completed by the online algorithm. Two papers paint a complete picture in terms of competitive analysis for this setting, in which the algorithm is assumed to know k, the maximum ratio between the value densities (value divided by processing time) of any two jobs. For k = 1, [4] presents a 4-competitive algorithm, and proves that this is a lower bound on the competitive ratio for deterministic algorithms. The same paper also generalizes the lower bound to (1 + √k)² for any k ≥ 1, and [15] then presents a matching (1 + √k)²-competitive algorithm. The setting addressed by these papers is completely non-strategic, and the algorithm is assumed to always know the true characteristics of each job upon its release. However, in domains such as grid computing (see, for example, [7, 8]) this assumption is invalid, because "buyers" of processor time choose when and how to submit their jobs. Furthermore, "sellers" not only schedule jobs but also determine
the amount that they charge buyers, an issue not addressed in the non-strategic setting. Thus, we consider an extension of the setting in which each job is owned by a separate, self-interested agent. Instead of being released to the algorithm, each job is now released only to its owning agent. Each agent now has four different ways in which it can manipulate the algorithm: it decides when to submit the job to the algorithm after the true release time, it can artificially inflate the length of the job, and it can declare an arbitrary value and deadline for the job. Because the agents are self-interested, they will choose to manipulate the algorithm if doing so will cause their job to be completed; and, indeed, one can find examples in which agents have incentive to manipulate the algorithms presented in [4] and [15]. The addition of self-interested agents moves the problem from the area of algorithm design to that of mechanism design [17], the science of crafting protocols for self-interested agents. Recent years have seen much activity at the interface of computer science and mechanism design (see, e.g., [9, 18, 19]). In general, a mechanism defines a protocol for interaction between the agents and the center that culminates with the selection of an outcome. In our setting, a mechanism will take as input a job from each agent, and return a schedule for the jobs, and a payment to be made by each agent to the center. A basic solution concept of mechanism design is incentive compatibility, which, in our setting, requires that it is always in each agent's best interests to immediately submit its job upon release, and to truthfully declare its value, length, and deadline. In order to evaluate a mechanism using competitive analysis, the adversary model must be updated. In the new model, the adversary still determines the sequence of jobs, but it is the self-interested agents who determine the observed input of the mechanism. Thus, in order to achieve a
competitive ratio of c, an online mechanism must both be incentive compatible, and always achieve at least 1/c of the value that the optimal offline mechanism achieves on the same sequence of jobs. The rest of the paper is structured as follows. In Section 2, we formally define and review results from the original, non-strategic setting. After introducing the incentive issues through an example, we formalize the mechanism design setting in Section 3. In Section 4 we present our first main result, a ((1 + √k)² + 1)-competitive mechanism, and formally prove incentive compatibility and the competitive ratio. We also show how we can simplify this mechanism for the special case in which k = 1 and each agent cannot alter the length of its job. Returning to the general setting, we show in Section 5 that this competitive ratio is a lower bound for deterministic mechanisms that do not pay agents. Finally, in Section 6, we discuss related work other than the directly relevant [4] and [15], before concluding with Section 7.

2. NON-STRATEGIC SETTING
2.1 Formulation
2.2 Previous Results
3. MECHANISM DESIGN SETTING
3.1 Formulation
Overview of the Setting
3.2 Mechanism Goals
4. RESULTS
4.1 Proof of Individual Rationality and Incentive Compatibility
4.2 Proof of Competitive Ratio
4.3 Special Case: Unalterable length and k = 1
5. COMPETITIVE LOWER BOUND
6. RELATED WORK
In this section we describe related work other than the two papers ([4] and [15]) on which this paper is based. Recent work related to this scheduling domain has focused on competitive analysis in which the online algorithm uses a faster processor than the offline algorithm (see, e.g., [13, 14]). Mechanism design was also applied to a scheduling problem in [18]. In their model, the center owns the jobs in an offline setting, and it is the agents who can execute them. The private information of an agent is the time it will require to execute each job. Several
incentive compatible mechanisms are presented that are based on approximation algorithms for the computationally infeasible optimization problem. This paper also launched the area of algorithmic mechanism design, in which the mechanism must satisfy computational requirements in addition to the standard incentive requirements. A growing sub-field in this area is multicast cost-sharing mechanism design (see, e.g., [1]), in which the mechanism must efficiently determine, for each agent in a multicast tree, whether the agent receives the transmission and the price it must pay. For a survey of this and other topics in distributed algorithmic mechanism design, see [9]. Online execution presents a different type of algorithmic challenge, and several other papers study online algorithms or mechanisms in economic settings. For example, [5] considers an online market clearing setting, in which the auctioneer matches buy and sell bids (which are assumed to be exogenous) that arrive and expire over time. In [2], a general method is presented for converting an online algorithm into an online mechanism that is incentive compatible with respect to values. Truthful declaration of values is also considered in [3] and [16], which both consider multi-unit online auctions. The main difference between the two is that the former considers the case of a digital good, which thus has unlimited supply. It is pointed out in [16] that their results continue to hold when the setting is extended so that bidders can delay their arrival. The only other paper we are aware of that addresses the issue of incentive compatibility in a real-time system is [11], which considers several variants of a model in which the center allocates bandwidth to agents who declare both their value and their arrival time. A dominant strategy IC mechanism is presented for the variant in which every point in time is essentially independent, while a Bayes-Nash IC mechanism is presented for the variant in
which the center's current decision affects the cost of future actions.\n7.\nCONCLUSION\nIn this paper, we considered an online scheduling domain for which algorithms with the best possible competitive ratio had been found, but for which new solutions were required when the setting is extended to include self-interested agents.\nWe presented a mechanism that is incentive compatible with respect to release time, deadline, length and value, and that only increases the competitive ratio by one.\nWe also showed how this mechanism could be simplified when k = 1 and each agent cannot lie about the length of its job.\nWe then showed a matching lower bound on the competitive ratio that can be achieved by a deterministic mechanism that never pays the agents.\nSeveral open problems remain in this setting.\nOne is to determine whether the lower bound can be strengthened by removing the restriction of non-negative payments.\nAlso, while we feel that it is reasonable to strengthen the assumption of knowing the maximum possible ratio of value densities (k) to knowing the actual range of possible value densities, it would be interesting to determine whether there \u221a exists a ((1 + k) 2 + 1) - competitive mechanism under the original assumption.\nFinally, randomized mechanisms provide an unexplored area for future work.","lvl-4":"Mechanism Design for Online Real-Time Scheduling\nABSTRACT\nFor the problem of online real-time scheduling of jobs on a single processor, previous work presents matching upper and lower bounds on the competitive ratio that can be achieved by a deterministic algorithm.\nHowever, these results only apply to the non-strategic setting in which the jobs are released directly to the algorithm.\nMotivated by emerging areas such as grid computing, we instead consider this problem in an economic setting, in which each job is released to a separate, self-interested agent.\nThe agent can then delay releasing the job to the algorithm, inflate its length, and 
declare an arbitrary value and deadline for the job, while the center determines not only the schedule, but the payment of each agent. For the resulting mechanism design problem (in which we also slightly strengthen an assumption from the non-strategic setting), we present a mechanism that addresses each incentive issue, while only increasing the competitive ratio by one. We then show a matching lower bound for deterministic mechanisms that never pay the agents.

1. INTRODUCTION
We consider the problem of online scheduling of jobs on a single processor. Each job is characterized by a release time, a deadline, a processing time, and a value for successful completion by its deadline. The objective is to maximize the sum of the values of the jobs completed by their respective deadlines. The key challenge in this online setting is that the schedule must be constructed in real-time, even though nothing is known about a job until its release time. One interpretation of this approach is as a game between the designer of the online algorithm and an adversary. First, the designer selects the online algorithm. Then, the adversary observes the algorithm and selects the sequence of jobs that maximizes the competitive ratio: the ratio of the value of the jobs completed by an optimal offline algorithm to the value of those completed by the online algorithm. Two papers paint a complete picture in terms of competitive analysis for this setting, in which the algorithm is assumed to know k, the maximum ratio between the value densities (value divided by processing time) of any two jobs. For k = 1, [4] presents a 4-competitive algorithm, and proves that this is a lower bound on the competitive ratio for deterministic algorithms. The same paper also generalizes the lower bound to (1 + √k)² for any k ≥ 1, and [15] then presents a matching (1 + √k)²-competitive algorithm. The setting addressed by these papers is completely non-strategic, and the algorithm
is assumed to always know the true characteristics of each job upon its release. Furthermore, "sellers" not only schedule jobs but also determine the amount that they charge buyers, an issue not addressed in the non-strategic setting. Thus, we consider an extension of the setting in which each job is owned by a separate, self-interested agent. Instead of being released to the algorithm, each job is now released only to its owning agent. Because the agents are self-interested, they will choose to manipulate the algorithm if doing so will cause their job to be completed; and, indeed, one can find examples in which agents have incentive to manipulate the algorithms presented in [4] and [15]. The addition of self-interested agents moves the problem from the area of algorithm design to that of mechanism design [17], the science of crafting protocols for self-interested agents. Recent years have seen much activity at the interface of computer science and mechanism design (see, e.g., [9, 18, 19]). In general, a mechanism defines a protocol for interaction between the agents and the center that culminates with the selection of an outcome. In our setting, a mechanism will take as input a job from each agent, and return a schedule for the jobs, and a payment to be made by each agent to the center. A basic solution concept of mechanism design is incentive compatibility, which, in our setting, requires that it is always in each agent's best interests to immediately submit its job upon release, and to truthfully declare its value, length, and deadline. In order to evaluate a mechanism using competitive analysis, the adversary model must be updated. In the new model, the adversary still determines the sequence of jobs, but it is the self-interested agents who determine the observed input of the mechanism. Thus, in order to achieve a competitive ratio of c, an online mechanism must both be incentive compatible, and always achieve at least 1/c of the value that the
optimal offline mechanism achieves on the same sequence of jobs. The rest of the paper is structured as follows. In Section 2, we formally define and review results from the original, non-strategic setting. After introducing the incentive issues through an example, we formalize the mechanism design setting in Section 3. In Section 4 we present our first main result, a ((1 + √k)² + 1)-competitive mechanism, and formally prove incentive compatibility and the competitive ratio. We also show how we can simplify this mechanism for the special case in which k = 1 and each agent cannot alter the length of its job. Returning to the general setting, we show in Section 5 that this competitive ratio is a lower bound for deterministic mechanisms that do not pay agents.

6. RELATED WORK
In this section we describe related work other than the two papers ([4] and [15]) on which this paper is based. Recent work related to this scheduling domain has focused on competitive analysis in which the online algorithm uses a faster processor than the offline algorithm (see, e.g., [13, 14]). Mechanism design was also applied to a scheduling problem in [18]. In their model, the center owns the jobs in an offline setting, and it is the agents who can execute them. The private information of an agent is the time it will require to execute each job. Several incentive compatible mechanisms are presented that are based on approximation algorithms for the computationally infeasible optimization problem. This paper also launched the area of algorithmic mechanism design, in which the mechanism must satisfy computational requirements in addition to the standard incentive requirements. For a survey of this and other topics in distributed algorithmic mechanism design, see [9]. Online execution presents a different type of algorithmic challenge, and several other papers study online algorithms or mechanisms in economic settings. For example, [5] considers an online market
clearing setting, in which the auctioneer matches buy and sell bids (which are assumed to be exogenous) that arrive and expire over time. In [2], a general method is presented for converting an online algorithm into an online mechanism that is incentive compatible with respect to values. Truthful declaration of values is also considered in [3] and [16], which both consider multi-unit online auctions. The only other paper we are aware of that addresses the issue of incentive compatibility in a real-time system is [11], which considers several variants of a model in which the center allocates bandwidth to agents who declare both their value and their arrival time.

7. CONCLUSION
In this paper, we considered an online scheduling domain for which algorithms with the best possible competitive ratio had been found, but for which new solutions were required when the setting is extended to include self-interested agents. We presented a mechanism that is incentive compatible with respect to release time, deadline, length, and value, and that only increases the competitive ratio by one. We also showed how this mechanism could be simplified when k = 1 and each agent cannot lie about the length of its job. We then showed a matching lower bound on the competitive ratio that can be achieved by a deterministic mechanism that never pays the agents. Several open problems remain in this setting. One is to determine whether the lower bound can be strengthened by removing the restriction of non-negative payments. Finally, randomized mechanisms provide an unexplored area for future work.","lvl-2":"Mechanism Design for Online Real-Time Scheduling
ABSTRACT
For the problem of online real-time scheduling of jobs on a single processor, previous work presents matching upper and lower bounds on the competitive ratio that can be achieved by a deterministic algorithm. However, these results only apply to the non-strategic setting in which the jobs are released directly to the
algorithm. Motivated by emerging areas such as grid computing, we instead consider this problem in an economic setting, in which each job is released to a separate, self-interested agent. The agent can then delay releasing the job to the algorithm, inflate its length, and declare an arbitrary value and deadline for the job, while the center determines not only the schedule, but the payment of each agent. For the resulting mechanism design problem (in which we also slightly strengthen an assumption from the non-strategic setting), we present a mechanism that addresses each incentive issue, while only increasing the competitive ratio by one. We then show a matching lower bound for deterministic mechanisms that never pay the agents.

1. INTRODUCTION
We consider the problem of online scheduling of jobs on a single processor. Each job is characterized by a release time, a deadline, a processing time, and a value for successful completion by its deadline. The objective is to maximize the sum of the values of the jobs completed by their respective deadlines. The key challenge in this online setting is that the schedule must be constructed in real-time, even though nothing is known about a job until its release time. Competitive analysis [6, 10], with its roots in [12], is a well-studied approach for analyzing online algorithms by comparing them against the optimal offline algorithm, which has full knowledge of the input at the beginning of its execution. One interpretation of this approach is as a game between the designer of the online algorithm and an adversary. First, the designer selects the online algorithm. Then, the adversary observes the algorithm and selects the sequence of jobs that maximizes the competitive ratio: the ratio of the value of the jobs completed by an optimal offline algorithm to the value of those completed by the online algorithm. Two papers paint a complete picture in terms of competitive analysis for this setting, in which the
algorithm is assumed to know k, the maximum ratio between the value densities (value divided by processing time) of any two jobs.
For k = 1, [4] presents a 4-competitive algorithm, and proves that this is a lower bound on the competitive ratio for deterministic algorithms.
The same paper also generalizes the lower bound to (1 + √k)^2 for any k ≥ 1, and [15] then presents a matching (1 + √k)^2-competitive algorithm.
The setting addressed by these papers is completely non-strategic, and the algorithm is assumed to always know the true characteristics of each job upon its release.
However, in domains such as grid computing (see, for example, [7, 8]) this assumption is invalid, because "buyers" of processor time choose when and how to submit their jobs.
Furthermore, "sellers" not only schedule jobs but also determine the amount that they charge buyers, an issue not addressed in the non-strategic setting.
Thus, we consider an extension of the setting in which each job is owned by a separate, self-interested agent.
Instead of being released to the algorithm, each job is now released only to its owning agent.
Each agent now has four different ways in which it can manipulate the algorithm: it decides when to submit the job to the algorithm after the true release time, it can artificially inflate the length of the job, and it can declare an arbitrary value and deadline for the job.
Because the agents are self-interested, they will choose to manipulate the algorithm if doing so will cause their job to be completed; and, indeed, one can find examples in which agents have incentive to manipulate the algorithms presented in [4] and [15].
The addition of self-interested agents moves the problem from the area of algorithm design to that of mechanism design [17], the science of crafting protocols for self-interested agents.
Recent years have seen much activity at the interface of computer science and mechanism design (see, e.g., [9, 18, 19]).
In
general, a mechanism defines a protocol for interaction between the agents and the center that culminates with the selection of an outcome.
In our setting, a mechanism will take as input a job from each agent, and return a schedule for the jobs, and a payment to be made by each agent to the center.
A basic solution concept of mechanism design is incentive compatibility, which, in our setting, requires that it is always in each agent's best interests to immediately submit its job upon release, and to truthfully declare its value, length, and deadline.
In order to evaluate a mechanism using competitive analysis, the adversary model must be updated.
In the new model, the adversary still determines the sequence of jobs, but it is the self-interested agents who determine the observed input of the mechanism.
Thus, in order to achieve a competitive ratio of c, an online mechanism must both be incentive compatible, and always achieve at least 1/c of the value that the optimal offline mechanism achieves on the same sequence of jobs.
The rest of the paper is structured as follows.
In Section 2, we formally define and review results from the original, non-strategic setting.
After introducing the incentive issues through an example, we formalize the mechanism design setting in Section 3.
In Section 4 we present our first main result, a ((1 + √k)^2 + 1)-competitive mechanism, and formally prove incentive compatibility and the competitive ratio.
We also show how we can simplify this mechanism for the special case in which k = 1 and each agent cannot alter the length of its job.
Returning to the general setting, we show in Section 5 that this competitive ratio is a lower bound for deterministic mechanisms that do not pay agents.
Finally, in Section 6, we discuss related work other than the directly relevant [4] and [15], before concluding with Section 7.
2. NON-STRATEGIC SETTING
In this section, we formally define the original, non-strategic setting, and recap
previous results.
2.1 Formulation
There exists a single processor on which jobs can execute, and N jobs, although this number is not known beforehand.
Each job i is characterized by a tuple θi = (ri, di, li, vi), which denotes the release time, deadline, length of processing time required, and value, respectively.
The space Θi of possible tuples is the same for each job and consists of all θi such that ri, di, li, vi ∈ ℝ+ (thus, the model of time is continuous).
Each job is released at time ri, at which point its three other characteristics are known.
Nothing is known about the job before its arrival.
Each deadline is firm (or, hard), which means that no value is obtained for a job that is completed after its deadline.
Preemption of jobs is allowed, and it takes no time to switch between jobs.
Thus, job i is completed if and only if the total time it executes on the processor before di is at least li.
Let θ = (θ1, ..., θN) denote the vector of tuples for all jobs, and let θ−i = (θ1, ..., θi−1, θi+1, ..., θN) denote the same vector without the tuple for job i.
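As an aside, the formulation above can be made concrete in code. The following sketch is ours, not the paper's; the names `Job`, `elapsed`, and `completed` are illustrative, and a coarse time discretization stands in for the paper's continuous model of time:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Job:
    """One job theta_i = (r_i, d_i, l_i, v_i) from the formulation."""
    release: float
    deadline: float
    length: float
    value: float


def elapsed(schedule, job_id, t, dt=0.01):
    """Approximate e_i(t): total time `job_id` has run on the processor
    before time t.  `schedule` maps a time to the id of the active job
    (0 means idle), mirroring the outcome function S."""
    steps = round(t / dt)
    return sum(dt for s in range(steps) if schedule(s * dt) == job_id)


def completed(schedule, job_id, job, dt=0.01):
    """Job i is completed iff it accumulates l_i units of processing
    before its deadline d_i (preemption is allowed, so the processing
    need not be contiguous)."""
    return elapsed(schedule, job_id, job.deadline, dt) >= job.length - 1e-9


# A job of length 2 that runs during [0, 2) meets a deadline of 3:
run_first = lambda t: 1 if t < 2 else 0
assert completed(run_first, 1, Job(0.0, 3.0, 2.0, 10.0))
```

Note that `completed` only inspects execution before the deadline, so any processing a schedule grants a job after d_i contributes no value, matching the firm-deadline assumption.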
Thus, (θi, θ−i) denotes a complete vector of tuples.
Define the value density ρi = vi/li of job i to be the ratio of its value to its length.
For an input θ, denote the maximum and minimum value densities as ρmin = mini ρi and ρmax = maxi ρi.
The importance ratio is then defined to be ρmax/ρmin, the maximal ratio of value densities between two jobs.
The algorithm is assumed to always know an upper bound k on the importance ratio.
For simplicity, we normalize the range of possible value densities so that ρmin = 1.
An online algorithm is a function f: Θ1 × ... × ΘN → O that maps the vector of tuples (for any number N) to an outcome o.
An outcome o ∈ O is simply a schedule of jobs on the processor, recorded by the function S: ℝ+ → {0, 1, ..., N}, which maps each point in time to the active job, or to 0 if the processor is idle.
To denote the total elapsed time that a job has spent on the processor at time t, we will use the function ei(t) = ∫[0,t] μ(S(x) = i) dx, where μ(·) is an indicator function that returns 1 if the argument is true, and zero otherwise.
A job's laxity at time t is defined to be (di − t − li + ei(t)), the amount of time that it can remain inactive and still be completed by its deadline.
A job is abandoned if it cannot be completed by its deadline (formally, if di − t + ei(t) < li).
[...] v2 = 13.
Then, at time 8, job 1 is preempted by job 3, because 10 + 10 = v1 + l1 < v3 = 22.
Thus, if agent 1 had falsely declared ˆr1 = 20, then job 3 would have been abandoned at time 8, and job 1 would have completed over the range [20, 30].
Table 2: Jobs used to show why a slightly altered version of Γ1 would not be incentive compatible with respect to release times.
Intuitively, Γ1 avoids this problem because of two properties.
First, when a job becomes active, it must have a greater priority than all other available
jobs.
Second, because a job's priority can only increase through the increase of its elapsed time, ei(ˆθ, t), the rate of increase of a job's priority is independent of its characteristics.
These two properties together imply that, while a job is active, there cannot exist a time at which its priority is less than the priority that one of these other jobs would have achieved by executing on the processor instead.
4.1 Proof of Individual Rationality and Incentive Compatibility
After presenting the (trivial) proof of IR, we break the proof of IC into lemmas.
THEOREM 1.
Mechanism Γ1 satisfies individual rationality.
PROOF.
For arbitrary i, θi, ˆθ−i, if job i is not completed, then agent i pays nothing and thus has a utility of zero; that is, pi(θi, ˆθ−i) = 0 and ui(g(θi, ˆθ−i), θi) = 0.
On the other hand, if job i is completed, then its value must exceed agent i's payment.
Formally, ui(g(θi, ˆθ−i), θi) = vi − arg min_{v'i ≥ 0}(ei(((ri, di, li, v'i), ˆθ−i), di) ≥ li) ≥ 0 must hold, since v'i = vi satisfies the condition.
To prove IC, we need to show that for an arbitrary agent i, and an arbitrary profile ˆθ−i of declarations of the other agents, agent i can never gain by making a false declaration ˆθi ≠ θi, subject to the constraints that ˆri ≥ ri and ˆli ≥ li.
We start by showing that, regardless of ˆvi, if truthful declarations of ri, di, and li do not cause job i to be completed, then "worse" declarations of these variables (that is, declarations that satisfy ˆri ≥ ri, ˆli ≥ li and ˆdi ≤ di) can never cause the job to be completed.
We break this part of the proof into two lemmas, first showing that it holds for the release time, regardless of the declarations of the other variables, and then for length and
deadline.\nPROOF.\nAssume by contradiction that this condition does not hold--that is, job i is not completed when ri is truthfully declared, but is completed for some false declaration \u02c6ri \u2265 ri.\nWe first analyze the case in which the release time is truthfully declared, and then we show that job i cannot be completed when agent i delays submitting it to the center.\nCase I: Agent i declares \u02c6\u03b8 ~ i = (ri, \u02c6di, \u02c6li, \u02c6vi).\nFirst, define the following three points in the execution of job i.\n\u2022 Let ts = arg mint (S ((\u02c6\u03b8 ~ i, \u02c6\u03b8 \u2212 i), t) = i) be the time that job i first starts execution.\n\u2022 Let tp = arg mint> ts (S ((\u02c6\u03b8 ~ i, \u02c6\u03b8 \u2212 i), t) = ~ i) be the time that job i is first preempted.\n\u2022 Let ta = arg mint (ei ((\u02c6\u03b8 ~ i, \u02c6\u03b8 \u2212 i), t) + \u02c6di \u2212 t <\u02c6li) be the time that job i is abandoned.\nIf ts and tp are undefined because job i never becomes active, then let ts = tp = ta.\nAlso, partition the jobs declared by other agents before ta into the following three sets.\n\u2022 X = {j | (\u02c6rj \u02c6vi + k \u00b7 ei ((\u02c6\u03b8' i, \u02c6\u03b8_i), \u02c6rj)} consists of the jobs that arrive in the range [tp, ta] and that when they arrive have higher priority than job i (note that we are make use of the normalization).\n\u221a \u2022 Z = {j | (tp \u2264 \u02c6rj \u2264 ta) \u2227 (\u02c6vj \u2264 \u02c6vi + k \u00b7 ei ((\u02c6\u03b8' i, \u02c6\u03b8_i), \u02c6rj)} consists of the jobs that arrive in the range [tp, ta] and that when they arrive have lower priority than job i.\nWe now show that all active jobs during the range (tp, ta] must be either i or in the set Y.\nUnless tp = ta (in which case this property trivially holds), it must be the case that job i has a higher priority than an arbitrary job x \u2208 X at time tp, since at the time just preceding tp job x was available and job\ni was active.\nFormally, \u02c6vx + k \u00b7 
ex ((\u02c6\u03b8' i, \u02c6\u03b8_i), tp) <\u02c6vi + k \u00b7 ei ((\u02c6\u03b8' i, \u02c6\u03b8_i), tp) must hold .6 We can then show that, over the range [tp, ta], no job x \u2208 X runs on the processor.\nAssume by contradiction that this is not true.\nLet tf \u2208 [tp, ta] be the earliest time in this range that some job x \u2208 X is active, which implies that ex ((\u02c6\u03b8 ` i, \u02c6\u03b8_i), tf) = ex ((\u02c6\u03b8 ` i, \u02c6\u03b8_i), tp).\n\u02c6\u03b8i = (\u02c6ri, \u02c6di, \u02c6li, \u02c6vi), where \u02c6ri> ri.\nWe now show that job i cannot be completed in this case, given that it was not completed in case I. First, we can restrict the range of \u02c6ri that we need to consider as follows.\nDeclaring \u02c6ri \u2208 (ri, ts] would not affect the schedule, since ts would still be the first time that job i executes.\nAlso, declaring \u02c6ri> ta could not cause the job to be completed, since di \u2212 ta <\u02c6li holds, which implies that job i would be abandoned at its release.\nThus, we can restrict consideration to \u02c6ri \u2208 (ts, ta].\nIn order for declaring \u02c6\u03b8i to cause job i to be completed, a necessary condition is that the execution of some job yc \u2208 Y must change during the range (tp, ta], since the only jobs other than i that are active during that range are in Y.\n\u02c6\u03b8_i), t) = yc) \u2227 (S ((\u02c6\u03b8i, \u02c6\u03b8_i), t) = ~ yc)] be the first time that such a change occurs.\nWe will now show that for any \u02c6ri \u2208 (ts, ta], there cannot exist a job with higher priority than yc at time tc, contradicting (S ((\u02c6\u03b8i, \u02c6\u03b8_i), t) = ~ yc).\nFirst note that job i cannot have a higher priority, since there would have to exist a t \u2208 (tp, tc) such that \u2203 y \u2208 6For simplicity, when we give the formal condition for a job x to have a higher priority than another job y, we will assume that job x's priority is strictly greater than job y's, because, in the case of a tie that 
favors x, future ties would also be broken in favor of job x.\nthe definition of tc.\nNow consider an arbitrary y \u2208 Y such that y = ~ yc.\nIn case I, we know that job y has lower priority than yc at time tc;\nThus, moving to case II, job y must replace some other job before tc.\nSince \u02c6ry \u2265 tp, the condition is that there must exist some t \u2208 (tp, tc) such that \u2203 w \u2208 Y \u222a {i}, (S ((\u02c6\u03b8 ` i, \u02c6\u03b8_i), t) = w) \u2227 (S ((\u02c6\u03b8i, \u02c6\u03b8_i), t) = y).\nSince w \u2208 Y would contradict the definition of tc, we know that w = i.\nThat is, the job that y replaces must be i. By definition of the set Y, we know that \u221a \u02c6vy> \u02c6vi + k \u00b7 ei ((\u02c6\u03b8' i, \u02c6\u03b8_i), \u02c6ry).\nThus, if \u02c6ry \u2264 t, then job i could not have executed instead of y in case I. On the other hand, if \u02c6ry> t, then job y obviously could not execute at time t, contradicting the existence of such a time t.\nNow consider an arbitrary job x \u2208 X.\nWe know that in case I job i has a higher priority than job x at time\n\u02c6vyc + k \u00b7 eyc ((\u02c6\u03b8' i, \u02c6\u03b8_i), tc).\nSince delaying i's arrival will not affect the execution up to time ts, and since job x cannot execute instead of a job y \u2208 Y at any time t \u2208 (tp, tc] by definition of tc, the only way for job x's priority to increase before tc as we move from case I to II is to replace job i over the range (ts, tc].\nThus, an upper bound on job x's priority when agent i declares \u02c6\u03b8i is:\nThus, even at this upper bound, job yc would execute instead of job x at time tc.\nA similar argument applies to an arbitrary job z \u2208 Z, starting at it release time \u02c6rz.\nSince the sets {i}, X, Y, Z partition the set of jobs released before ta, we have shown that no job could execute instead of job yc, contradicting the existence of tc, and completing the proof.\nLEMMA 3.\nIn mechanism \u03931, the following condition holds 
for all i, \u03b8i, \u02c6\u03b8_i: \u2200 \u02c6vi, \u02c6li \u2265 li, \u02c6di \u2264 di, ~ ei ~ ((ri, \u02c6di, \u02c6li, \u02c6vi), \u02c6\u03b8_i), \u02c6di ~ \u2265 \u02c6li ~ = \u21d2 ~ ei ~ ~ ((ri, di, li, \u02c6vi), \u02c6\u03b8_i), \u02c6di ~ \u2265 li PROOF.\nAssume by contradiction there exists some instantiation of the above variables such that job i is not completed when li and di are truthfully declared, but is completed for some pair of false declarations \u02c6li \u2265 li and \u02c6di \u2264 di.\nNote that the only effect that \u02c6di and \u02c6li have on the execution of the algorithm is on whether or not i \u2208 Avail.\nSpecifically, they affect the two conditions: (ei (\u02c6\u03b8, t) <\u02c6li) and (ei (\u02c6\u03b8, t) + \u02c6di \u2212 t \u2265 \u02c6li).\nBecause job i is completed when \u02c6li and \u02c6di are declared, the former condition (for completion) must become false before the latter.\nSince truthfully declaring li \u2264 \u02c6li and di \u2265 \u02c6di will only make the former condition become false earlier and the latter condition become false later, the execution of the algorithm will not be affected when moving to truthful declarations, and job i will be completed, a contradiction.\nWe now use these two lemmas to show that the payment for a completed job can only increase by falsely declaring \"worse\" \u02c6li, \u02c6di, and \u02c6ri.\nWe can then show that job i has a higher priority at time tf as \u221a \u221a follows: \u02c6vx + k \u00b7 ex ((\u02c6\u03b8' i, \u02c6\u03b8_i), tf) = \u02c6vx + k \u00b7 ex ((\u02c6\u03b8' i, \u02c6\u03b8_i), tp) <\u221a \u221a \u02c6vi + k \u00b7 ei ((\u02c6\u03b8' i, \u02c6\u03b8_i), tp) \u2264 \u02c6vi + k \u00b7 ei ((\u02c6\u03b8' i, \u02c6\u03b8_i), tf), contradicting the fact that job x is active at time tf.\nA similar argument applies to an arbitrary job z \u2208 Z, starting at it release time \u02c6rz> tp, since by definition job i has a higher priority at that time.\nThe only 
remaining jobs that can be active over the range (tp, ta] are i and those in the set Y.\nCase II: Agent i declares Let tc = arg mintE (tp, ta] [\u2203 yc \u2208 Y, (S ((\nPROOF.\nAssume by contradiction that this condition does not hold.\nThis implies that there exists some value v ~ i such that the condition (ei (((\u02c6ri, \u02c6di, \u02c6li, v ~ i), \u02c6\u03b8_i), \u02c6di) \u2265 \u02c6li) holds, but (ei (((ri, di, li, v ~ i), \u02c6\u03b8_i), di) \u2265 li) does not.\nApplying Lemmas 2 and 3: (ei (((\u02c6ri, \u02c6di, \u02c6li, v ~ i), \u02c6\u03b8_i), \u02c6di) \u2265 \u02c6li) = \u21d2\nFinally, the following lemma tells us that the completion of a job is monotonic in its declared value.\n~ ei ~ ~ ((\u02c6ri, \u02c6di, \u02c6li, \u02c6v ~ i), \u02c6\u03b8_i), \u02c6di ~ \u2265 \u02c6li The proof, by contradiction, of this lemma is omitted because it is essentially identical to that of Lemma 2 for \u02c6ri.\nIn case I, agent i declares (\u02c6ri, \u02c6di, \u02c6li, \u02c6v ~ i) and the job is not completed, while in case II he declares (\u02c6ri, \u02c6di, \u02c6li, \u02c6vi) and the job is completed.\nThe analysis of the two cases then proceeds as before--the execution will not change up to time ts because the initial priority of job i decreases as we move from case I to II; and, as a result, there cannot be a change in the execution of a job other than i over the range (tp, ta].\nWe can now combine the lemmas to show that no profitable deviation is possible.\nTHEOREM 6.\nMechanism \u03931 satisfies incentive compatibility.\nPROOF.\nFor an arbitrary agent i, we know that \u02c6ri \u2265 ri and \u02c6li \u2265 li hold by assumption.\nWe also know that agent i has no incentive to declare \u02c6di> di, because job i would never be returned before its true deadline.\nThen, because the payment function is non-negative, agent i's utility could not exceed zero.\nBy IR, this is the minimum utility it would achieve if it truthfully declared \u03b8i.\nThus, we 
can restrict consideration to ˆθi that satisfy ˆri ≥ ri, ˆli ≥ li, and ˆdi ≤ di.
Again using IR, we can further restrict consideration to ˆθi that cause job i to be completed, since any other ˆθi yields a utility of zero.
If truthful declaration of θi causes job i to be completed, then by Lemma 4 any such false declaration ˆθi could not decrease the payment of agent i. On the other hand, if truthful declaration does not cause job i to be completed, then declaring such a ˆθi will cause agent i to have negative utility, by Lemmas 5 and 4, respectively.
4.2 Proof of Competitive Ratio
The proof of the competitive ratio, which makes use of techniques adapted from those used in [15], is also broken into lemmas.
Having shown IC, we can assume truthful declaration (ˆθ = θ).
Since we have also shown IR, in order to prove the competitive ratio it remains to bound the loss of social welfare against Γoffline.
Denote by (1, 2, ..., F) the sequence of jobs completed by Γ1.
Divide time into intervals If = (topen_f, tclose_f], one for each job f in this sequence.
Set tclose_f to be the time at which job f is completed, and set tbegin_f to be the first time that the processor is not idle in interval If.
LEMMA 7.
For any interval If, the following inequality holds: tclose_f − tbegin_f ≤ (1 + 1/√k) · vf.
PROOF.
Interval If begins with a (possibly zero length) period of time in which the processor is idle because there is no available job.
Then, it continuously executes a sequence of jobs (1, 2, ..., c), where each job i in this sequence is preempted by job i + 1, except for job c, which is completed (thus, job c in this sequence is the same as job f in the global sequence of completed jobs).
Let ts_i be the time that job i begins execution.
Note that ts_1 = tbegin_f.
Over the range [tbegin_f, tclose_f], the priority (vi + √k · ei(θ, t)) of
the active job is monotonically increasing with time, because this function linearly increases while a job is active, and can only increase at a point in time when preemption occurs.\nThus, each job i> 1 in this sequence begins execution at its release time (that is, tsi = ri), because its priority does not increase while it is not active.\nWe now show that the value of the completed job c ex \u221a ceeds the product of k and the time spent in the interval on jobs 1 through c \u2212 1, or, more formally, that the following \u221a k ~ c_1 condition holds: vc \u2265 h = 1 (eh (\u03b8, ts h +1) \u2212 eh (\u03b8, tsh)).\nTo show this, we will prove by induction that the stronger con \u221a k ~ i_1\nBase Case: For i = 1, v1 \u2265 h = 1 eh (\u03b8, tsh +1) = 0, since the sum is over zero elements.\nInductive Step: For an arbitrary 1 \u2264 i 0 (and, by implication, satisfies IC and IR as well).\nSince a competitive ratio of c implies a competitive ratio of c + x, for any x> 0, we assume without loss of generality that e <1.\nFirst, we will construct a profile of agent types \u03b8 using an adversary argument.\nAfter possibly slightly perturbing \u03b8 to assure that a strictness property is satisfied, we will then use a more significant perturbation of \u03b8 to reach a contradiction.\nWe now construct the original profile \u03b8.\nPick an \u03b1 such that 0 <\u03b1 1.\nThe sequence stops whenever \u0393 completes any job.\nMajor jobs also have zero laxity, but they have the smallest possible value ratio (that is, vi = li).\nThe lengths of the major jobs that may be released, starting with i = 1, are determined by the following recurrence relation.\nThe first major job has a release time of 0, and each major job i> 1 has a release time of ri = di \u2212 1--\u03b4, just before the deadline of the previous job.\nThe adversary releases major job i rf +1.\nTherefore, \u03b8 consists of major jobs 1 through f +1 (or, f, if f = m), plus minor jobs from time 0 through time 
df.
We now possibly perturb θ slightly.
By IR, we know that vf ≥ pf(θ).
Since we will later need this inequality to be strict, if vf = pf(θ), then change θf to θ'f, where r'f = rf, but v'f, l'f, and d'f are all incremented by δ over their respective values in θf.
By IC, job f must still be completed by Γ for the profile (θ'f, θ−f).
If not, then by IR and NNP we know that pf(θ'f, θ−f) = 0, and thus that uf(g(θ'f, θ−f), θ'f) = 0.
However, agent f could then increase its utility by falsely declaring the original type θf, receiving a utility of uf(g(θf, θ−f), θ'f) = v'f − pf(θ) = δ > 0, violating IC.
Furthermore, agent f must be charged the same amount (that is, pf(θ'f, θ−f) = pf(θ)), due to a similar incentive compatibility argument.
Thus, for the remainder of the proof, assume that vf > pf(θ).
We now use a more substantial perturbation of θ to complete the proof.
If f [...] 0, contradicting IC.
While Γ's execution must be identical for both (θf, θ−f) and (θ''f, θ−f), Γoffline can take advantage of the change.
[...]
If f [...] i, this limit order must have been executed.
Thus unexecuted limit orders bound the VWAPs of the remainder of the day, resulting in at most one unexecuted order per price level.
A bound on the lost revenue is thus the sum of the discretized prices: Σ_{i=1..∞} (1 − ε)^i pmax ≤ pmax/ε.
Clearly our algorithm has sold at most N shares.
Note that as N becomes large, VWAPA approaches 1 − ε times the market VWAP.
If we knew that the final total volume of the market executions is V, then we can set γ = V/N, assuming that γ >> 1.
If we have only an upper and lower bound on V, we should be able to guess and incur a logarithmic loss.
The following assumption tries to capture the market
volume variability.
3.4.0.5 Order Book Volume Variability Assumption.
We now assume that the total volume (which includes the shares executed by both our algorithm and the market) is variable within some known region, and that the market volume will be greater than our algorithm's volume.
More formally, for all S ∈ Σ, assume that the total volume V of shares traded in execM(S, S'), for any sequence S' of N sell limit orders, satisfies 2N ≤ Vmin ≤ V ≤ Vmax.
Let Q = Vmax/Vmin.
The following corollary is derived using a constant ε = 1/2 and observing that if we set γ such that V ≤ γN ≤ 2V, then our algorithm will place between N/2 and N limit orders.
Corollary 9.
In the order book model, if the bounded order volume and max price assumption and the order book volume variability assumption hold, there exists an online algorithm A for selling at most N shares such that VWAPA(S, S') ≥ (1/(4 log(Q))) · VWAPM(S, S') − 2pmax/N.
[Figure 3 appears here: four histograms of total daily volume, with panel headings QQQ: log(Q)=4.71, E=3.77; JNPR: log(Q)=5.66, E=3.97; MCHP: log(Q)=5.28, E=3.86; CHKP: log(Q)=6.56, E=4.50.]
Figure 3: Here we present bounds from Section 4 based on the empirical volume distributions for four real stocks: QQQ, MCHP, JNPR, and CHKP.
The plots show histograms for the total daily volumes transacted on Island for these stocks, in the last year and a half, along with the corresponding values of log(Q) and E(Pbins vol) (denoted by "E").
We assume that the minimum and maximum daily volumes in the data correspond to Vmin and Vmax, respectively.
The worst-case competitive ratio bounds (which are twice log(Q)) of our algorithm for those stocks are 9.42, 10.56, 11.32, and 13.20, respectively.
The corresponding bounds on the competitive ratio performance of our algorithm under the volume distribution model (which are twice E(Pbins vol)) are better: 7.54, 7.72, 7.94, and 9.00, respectively (a 20-40% relative improvement).
Using a finer volume binning along with a slightly more refined bound on the competitive ratio, we can construct algorithms that, taking the empirical volume distribution as correct, guarantee even better competitive ratios of 2.76, 2.73, 2.75, and 3.17, respectively, for those stocks (details omitted).
4. MACROSCOPIC DISTRIBUTION MODELS
We conclude our results with a return to the price-volume model, where we shall introduce some refined methods of analysis for online trading algorithms.
We leave the generalization of these methods to the order book model for future work.
The competitive ratios defined so far measure performance relative to some baseline criterion in the worst case over all market sequences S ∈ Σ.
It has been observed in many online settings that such worst-case metrics can yield pessimistic results, and various relaxations have been considered, such as permitting a probability distribution over the input sequence.
We now consider distributional models that are considerably weaker than assuming a distribution over complete market sequences S ∈ Σ.
In the volume distribution model, we assume only that there exists a distribution Pvol over the total volume V traded in the market for the day, and then examine the worst-case competitive ratio over sequences consistent with the randomly chosen volume.
More precisely, we define RVWAP(A, Pvol) = E_{V∼Pvol}[ max_{S∈seq(V)} VWAPM(S)/VWAPA(S) ].
Here V ∼ Pvol denotes that V is chosen with respect to distribution Pvol, and seq(V) ⊂ Σ is the set of all market sequences (p1, v1), ...
, (pT, vT) satisfying Σ_{t=1..T} vt = V.
Similarly, for OWT, we can define ROWT(A, Pmaxprice) = E_{p∼Pmaxprice}[ max_{S∈seq(p)} p/VWAPA(S) ].
Here Pmaxprice is a distribution over just the maximum price of the day, and we then examine worst-case sequences consistent with this price (seq(p) ⊂ Σ is the set of all market sequences satisfying max_{1≤t≤T} pt = p).
Analogous buy-side definitions can be given.
We emphasize that in these models, only the distributions of the total volume and the maximum price are known to the algorithm.
We also note that our probabilistic assumptions on S are considerably weaker than typical statistical finance models, which would posit a detailed stochastic model for the step-by-step evolution of (pt, vt).
Here we instead permit only a distribution over crude, macroscopic measures of the entire day's market activity, such as the total volume and high price, and analyze the worst-case performance consistent with these crude measures.
For this reason, we refer to such settings as the macroscopic distribution model.
The work of El-Yaniv et al.
[3] examines distributional assumptions similar to ours, but they emphasize the worst-case choices for the distributions as well, and show that this leads to results no better than the original worst-case analysis over all sequences.
In contrast, we feel that the analysis of specific distributions Pvol and Pmaxprice is natural in many financial contexts, and our preliminary experimental results show significant improvements when this rather crude distributional information is taken into account (see Figure 3).
Our results in the VWAP setting examine the cases where these distributions are known exactly or only approximately.
Similar results can be obtained for macroscopic distributions of maximum daily price in the one-way trading setting.
4.1 Results in the Macroscopic Distribution Model
We begin by noting that the algorithms examined so far work by binning total volumes or maximum prices into bins of exponentially increasing size, and then guessing the index of the bin in which the actual quantity falls.
It is thus natural that the macroscopic distribution model performance of such algorithms (which are common in competitive analysis) might depend on the distribution of the true bin index.
In what follows, we assume that Q is a power of 2 and that the base of the logarithm is 2.
Let Pvol denote the distribution of total daily market volume.
We define the related distribution Pbins vol over bin indices i as follows: for all i = 1, ...
, log(Q) − 1, Pbins vol(i) is equal to the probability, under Pvol, that the daily volume falls in the interval [Vmin·2^{i−1}, Vmin·2^i), and Pbins vol(log(Q)) is the probability for the last interval [Vmax/2, Vmax].
We define E as E(Pbins vol) ≡ (E_{i∼Pbins vol}[√(1/Pbins vol(i))])^2 = (Σ_{i=1..log(Q)} √(Pbins vol(i)))^2.
Since the support of Pbins vol has only log(Q) elements, E(Pbins vol) can vary from 1 (for distributions Pvol that place all of their weight in only one of the log(Q) intervals between Vmin, 2Vmin, 4Vmin, ..., Vmax) to log(Q) (for distributions Pvol in which the total daily volume is equally likely to fall in any one of these intervals).
Note that distributions Pvol of this latter type are far from uniform over the entire range [Vmin, Vmax].
Theorem 10.
In the volume distribution model under the volume variability assumption, there exists an online algorithm A for selling N shares that, using only knowledge of the total volume distribution Pvol, achieves RVWAP(A, Pvol) ≤ 2E(Pbins vol).
All proofs in this section are provided in the appendix.
As a concrete example, consider the case in which Pvol is the uniform distribution over [Vmin, Vmax].
In that case, Pbins vol is exponentially increasing and peaks at the last bin, which, having the largest width, also has the largest weight.
In this case E(Pbins vol) is a constant (i.e., independent of Q), leading to a constant competitive ratio.
On the other hand, if Pvol is exponential, then Pbins vol is uniform, leading to an O(log(Q)) competitive ratio, just as in the more adversarial price-volume setting discussed earlier.
In Figure 3, we provide additional specific bounds obtained for empirical total daily volume distributions computed for some real stocks.
We now examine the setting in which Pvol is unknown, but an approximation ˜Pvol is available.
Let us define C(Pbins vol, ˜Pbins vol) = (Σ_{j=1..log(Q)} √(˜Pbins vol(j))) · (Σ_{i=1..log(Q)} Pbins vol(i)/√(˜Pbins vol(i))).
C is minimized at C(Pbins vol, Pbins vol) = E(Pbins vol), and C may be infinite if ˜Pbins vol(i) is 0 when Pbins vol(i) > 0.
Theorem 11.
In the volume distribution model under the volume variability assumption, there exists an online algorithm A for selling N shares that, using only knowledge of an approximation ˜Pvol of Pvol, achieves RVWAP(A, Pvol) ≤ 2C(Pbins vol, ˜Pbins vol).
As an example of this result, suppose our approximation obeys (1/α)·Pbins vol(i) ≤ ˜Pbins vol(i) ≤ α·Pbins vol(i) for all i, for some α > 1.
Thus our estimated bin index probabilities are all within a factor of α of the truth.
Then it is easy to show that C(Pbins vol, ˜Pbins vol) ≤ α·E(Pbins vol), so according to Theorems 10 and 11 our penalty for using the approximate distribution is a factor of α in competitive ratio.
5. REFERENCES
[1] B. Awerbuch, Y. Bartal, A. Fiat, and A. Rosén. Competitive non-preemptive call control. In Proc. 5th ACM-SIAM Symp. on Discrete Algorithms, pages 312-320, 1994.
[2] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.
[3] R. El-Yaniv, A. Fiat, R. M. Karp, and G. Turpin. Optimal search and one-way trading online algorithms. Algorithmica, 30:101-139, 2001.
[4] M. Kearns and L.
Ortiz.\nThe Penn-Lehman automated trading project.\nIEEE Intelligent Systems, 2003.\nTo appear.\n6.\nAPPENDIX 6.1 Proofs from Subsection 2.3 Proof.\n(Sketch of Theorem 3) W.l.o.g., assume that Q = 1 and the total volume is V.\nConsider the time t where the fixed schedule f sells the least; then ft \u2264 N\/T.\nConsider the sequences where at time t we have pt = pmax, vt = V, and for times t' \u2260 t we have pt' = pmin and vt' = 0.\nThe VWAP is pmax, while the fixed schedule obtains an average price per share of ((N\/T)pmax + (N \u2212 N\/T)pmin)\/N.\nProof.\n(Sketch of Theorem 4) The algorithm simply sells ut = (vt\/Vmin)N shares at time t.\nThe total number of shares sold U is clearly more than N, and U = sum_t ut = sum_t (vt\/Vmin)N = (V\/Vmin)N \u2264 QN.\nThe average price is VWAPA(S) = (sum_t pt ut)\/U = sum_t pt(vt\/V) = VWAPM(S), where we used the fact that ut\/U = vt\/V.\nProof.\n(of Theorem 5) We start with the proof of the lower bound.\nConsider the following scenario.\nFor the first T time units we have a price of \u221aR pmin, and a total volume of Vmin.\nWe observe how many shares the online algorithm has bought.\nIf it has bought more than half of the shares, the remaining time steps have price pmin and volume Vmax \u2212 Vmin.\nOtherwise, the remaining time steps have price pmax and negligible volume.\nIn the first case the online algorithm has paid at least \u221aR pmin\/2 while the VWAP is at most \u221aR pmin\/Q + pmin.\nTherefore, in this case the competitive ratio is \u2126(Q).\nIn the second case the online algorithm has to buy at least half of the shares at pmax, so its average cost is at least pmax\/2.\nThe market VWAP is \u221aR pmin = pmax\/\u221aR, hence the competitive ratio is \u2126(\u221aR).\nFor the upper bound, we can get a \u221aR competitive ratio by buying all the shares once the price drops below \u221aR pmin.\nThe Q upper bound is derived by running an algorithm that assumes the volume is Vmin.\nThe online algorithm pays a cost of p, while the VWAP will be at least p\/Q.\n6.2 Proofs
from Section 4 Proof.\n(Sketch of Theorem 10) We use the idea of guessing the total volume from Theorem 1, but now allow for the possibility of an arbitrary (but known) distribution over the total volume.\nIn particular, consider constructing a distribution Gbins vol over a set of volume values using Pvol and use it to guess the total volume V.\nLet the algorithm guess \u02c6V = Vmin 2^i with probability Gbins vol(i).\nThen note that, for any price-volume sequence S, if V \u2208 [Vmin 2^{i\u22121}, Vmin 2^i], VWAPA(S) \u2265 Gbins vol(i) VWAPM(S)\/2.\nThis implies an upper bound on RVWAP(A, Pvol) in terms of Gbins vol.\nWe then get that Gbins vol(i) \u221d sqrt(Pbins vol(i)) minimizes the upper bound, which leads to the upper bound stated in the theorem.\nProof.\n(Sketch of Theorem 11) Replace Pvol with \u02dcPvol in the expression for Gbins vol in the proof sketch for the last result.\n","lvl-3":"Competitive Algorithms for VWAP and Limit Order Trading\nABSTRACT\nWe introduce new online models for two important aspects of modern financial markets: Volume Weighted Average Price trading and limit order books.\nWe provide an extensive study of competitive algorithms in these models and relate them to earlier online algorithms for stock trading.\n1.\nINTRODUCTION\nWhile popular images of Wall Street often depict swashbuckling traders boldly making large gambles on just their market intuitions, the vast majority of trading is actually considerably more technical and constrained.\nThe constraints often derive from a complex combination of business, regulatory and institutional issues, and result in certain kinds of \"standard\" trading strategies or criteria that invite algorithmic analysis.\nOne of the most common activities in modern financial markets is known as Volume Weighted Average Price, or VWAP, trading.\nInformally, the VWAP of a stock over a specified market period is simply the average price paid per share during that period, so the price of each
transaction in the market is weighted by its volume.\nIn VWAP trading, one attempts to buy or sell a fixed number of shares at a price that closely tracks the VWAP.\nVery large institutional trades constitute one of the main motivations behind VWAP activity.\nA typical scenario goes as follows.\nSuppose a very large mutual fund holds 3% of the outstanding shares of a large, publicly traded company--a huge fraction of the shares--and that this fund's manager decides he would like to reduce this holding to 2% over a 1-month period.\n(Such a decision might be forced by the fund's own regulations or other considerations.)\nTypically, such a fund manager would be unqualified to sell such a large number of shares in the open market--it requires a professional broker to intelligently break the trade up over time, and possibly over multiple exchanges, in order to minimize the market impact of such a sizable transaction.\nThus, the fund manager would approach brokerages for help in selling the 1%.\nThe brokerage will typically alleviate the fund manager's problem immediately by simply buying the shares directly from the fund manager, and then selling them off later--but what price should the brokerage pay the fund manager?\nPaying the price on the day of the sale is too risky for the brokerage, as they need to sell the shares themselves over an extended period, and events beyond their control (such as wars) could cause the price to fall dramatically.\nThe usual answer is that the brokerage offers to buy the shares from the fund manager at a per-share price tied to the VWAP over some future period--in our example, the brokerage might offer to buy the 1% at a per-share price of the coming month's VWAP minus 1 cent.\nThe brokerage now has a very clean challenge: by selling the shares themselves over the next month in a way that exactly matches the VWAP, a penny per share is earned in profits.\nIf they can beat the VWAP by a penny, they make two cents per share.\nSuch 
small-margin, high-volume profits can be extremely lucrative for a large brokerage.\nThe importance of the VWAP has led to many automated VWAP trading algorithms--indeed, every major brokerage has at least one \"VWAP box\",\nFigure 1: The table summarizes the results presented in this paper.\nThe rows represent results for either the OWT or VWAP criterion.\nThe columns represent which model we are working in.\nThe entry in the table is the competitive ratio between our algorithm and an optimal algorithm, and the closer the ratio is to 1 the better.\nThe parameter R represents a bound on the maximum to the minimum price fluctuation and the parameter Q represents a bound on the maximum to minimum volume fluctuation in the respective model.\n(See Section 4 for a description of the Macroscopic Distribution Model.)\nAll the results for the OWT trading criterion (which is a stronger criterion) directly translate to the VWAP criterion.\nHowever, in the VWAP setting, considering a restriction on the maximum to the minimum volume fluctuation Q leads to an additional class of results which depends on Q.
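The VWAP arithmetic behind the table's ratios is easy to make concrete. Below is a minimal, hypothetical sketch (the prices, volumes, and selling schedule are invented; `vwap` is our own helper, not code from the paper):

```python
# Minimal VWAP bookkeeping: market VWAP vs. the VWAP obtained by a selling
# algorithm on the same (price, volume) sequence. All numbers are made up.

def vwap(prices, volumes):
    """Volume weighted average price: each interval's price weighted by its volume."""
    return sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)

prices  = [30.00, 30.10, 29.90, 30.05]  # p_t: average price in interval t
volumes = [1000, 3000, 500, 1500]       # v_t: market volume in interval t

market_vwap = vwap(prices, volumes)

# A schedule n_t for selling N = 1500 shares; here n_t is proportional to v_t,
# which makes the algorithm's VWAP coincide with the market VWAP.
schedule = [250, 750, 125, 375]
algo_vwap = vwap(prices, schedule)

# For a seller, the per-sequence ratio is market VWAP / algorithm VWAP;
# the closer it stays to 1 over all sequences, the better the algorithm.
ratio = market_vwap / algo_vwap
```

Selling in proportion to the market's volume tracks the VWAP exactly, but an online algorithm does not know the future v_t, which is what makes the problem nontrivial.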
and some small companies focus exclusively on proprietary VWAP trading technology.\nIn this paper, we provide the first study of VWAP trading algorithms in an online, competitive ratio setting.\nWe first formalize the VWAP trading problem in a basic online model we call the price-volume model, which can be viewed as a generalization of previous theoretical online trading models incorporating market volume information.\nIn this model, we provide VWAP algorithms and competitive ratios, and compare this setting with the one-way trading (OWT) problem studied in [3].\nOur most interesting results, however, examine the VWAP trading problem in a new online trading model capturing the important recent phenomenon of limit order books in financial markets.\nBriefly, a limit buy or sell order specifies both the number of shares and the desired price, and will only be executed if there is a matching party on the opposing side, according to a well-defined matching procedure used by all the major exchanges.\nWhile limit order books (the list of limit orders awaiting possible future execution) have existed since the dawn of equity exchanges, only very recently have these books become visible to traders in real time, thus opening the way to trading algorithms of all varieties that attempt to exploit this rich market microstructure data.\nSuch data and algorithms are a topic of great current interest on Wall Street [4].\nWe thus introduce a new online trading model incorporating limit order books, and examine both the one-way and VWAP trading problems in it.\nOur results are summarized in Figure 1 (see the caption for a summary).\n2.\nTHE PRICE-VOLUME TRADING MODEL\n2.1 The Model\n2.2 VWAP Results in the Price-Volume Model\n2.2.0.1 Volume Variability Assumption.\n.\n2.2.0.2 Price Variability Assumption.\n.\n2.3 Related Results in the Price-Volume Model\n3.\nA LIMIT ORDER BOOK TRADING MODEL\n3.1 Background on Limit Order Books and Market Microstructure\n3.2 The Model\n3.3 OWT 
Results in the Order Book Model\n3.3.0.3 Order Book Price Variability Assumption.\n.\n3.4 VWAP Results in the Order Book Model\n3.4.0.5 Order Book Volume Variability Assumption.\n4.\nMACROSCOPIC DISTRIBUTION MODELS\nROWT(A, Pmaxprice) = E_{p\u223cPmaxprice}[ max_{S \u2208 seq(p)} p\/VWAPA(S) ]\n4.1 Results in the Macroscopic Distribution Model","lvl-4":"Competitive Algorithms for VWAP and Limit Order Trading\nABSTRACT\nWe introduce new online models for two important aspects of modern financial markets: Volume Weighted Average Price trading and limit order books.\nWe provide an extensive study of competitive algorithms in these models and relate them to earlier online algorithms for stock trading.\n1.\nINTRODUCTION\nWhile popular images of Wall Street often depict swashbuckling traders boldly making large gambles on just their market intuitions, the vast majority of trading is actually considerably more technical and constrained.\nThe constraints often derive from a complex combination of business, regulatory and institutional issues, and result in certain kinds of \"standard\" trading strategies or criteria that invite algorithmic analysis.\nOne of the most common activities in modern financial markets is known as Volume Weighted Average Price, or VWAP, trading.\nInformally, the VWAP of a stock over a specified market period is simply the average price paid per share during that period, so the price of each transaction in the market is weighted by its volume.\nIn VWAP trading, one attempts to buy or sell a fixed number of shares at a price that closely tracks the VWAP.\nVery large institutional trades constitute one of the main motivations behind VWAP activity.\nA typical scenario goes as follows.\nThus, the fund manager would approach brokerages for help in selling the 1%.\nThe brokerage will typically alleviate the fund manager's problem immediately by simply buying the shares directly from the fund manager, and then selling them off later--but what price should the brokerage pay the fund manager?\nThe
usual answer is that the brokerage offers to buy the shares from the fund manager at a per-share price tied to the VWAP over some future period--in our example, the brokerage might offer to buy the 1% at a per-share price of the coming month's VWAP minus 1 cent.\nThe brokerage now has a very clean challenge: by selling the shares themselves over the next month in a way that exactly matches the VWAP, a penny per share is earned in profits.\nIf they can beat the VWAP by a penny, they make two cents per share.\nSuch small-margin, high-volume profits can be extremely lucrative for a large brokerage.\nThe importance of the VWAP has led to many automated VWAP trading algorithms--indeed, every major brokerage has at least one \"VWAP box\",\nFigure 1: The table summarizes the results presented in this paper.\nThe rows represent results for either the OWT or VWAP criterion.\nThe columns represent which model we are working in.\nThe entry in the table is the competitive ratio between our algorithm and an optimal algorithm, and the closer the ratio is to 1 the better.\nThe parameter R represents a bound on the maximum to the minimum price fluctuation and the parameter Q represents a bound on the maximum to minimum volume fluctuation in the respective model.\n(See Section 4 for a description of the Macroscopic Distribution Model.)\nAll the results for the OWT trading criterion (which is a stronger criterion) directly translate to the VWAP criterion.\nHowever, in the VWAP setting, considering a restriction on the maximum to the minimum volume fluctuation Q leads to an additional class of results which depends on Q.
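The Macroscopic Distribution Model referenced in the caption replaces the worst-case 2 log(Q) bound by 2E(P), where P is the distribution over exponentially sized volume bins and E(P) = (sum_i sqrt(P(i)))^2; with an approximate distribution P~ the bound becomes 2C(P, P~). A small sketch of these two quantities (function names and probabilities are our own, illustrative choices):

```python
import math

def E(p_bins):
    """E(P) = (sum_i sqrt(P(i)))^2 over bin probabilities.
    Ranges from 1 (all mass in one bin) to len(p_bins) (uniform over bins)."""
    return sum(math.sqrt(p) for p in p_bins) ** 2

def C(p_bins, q_bins):
    """C(P, P~) = (sum_j sqrt(P~(j))) * (sum_i P(i) / sqrt(P~(i))).
    Assumes the approximation P~ puts positive mass wherever P does."""
    return (sum(math.sqrt(q) for q in q_bins)
            * sum(p / math.sqrt(q) for p, q in zip(p_bins, q_bins) if p > 0))

point_mass = [1.0, 0.0, 0.0, 0.0]   # best case: E = 1
uniform    = [0.25] * 4             # worst case: E = number of bins (= log Q)
skewed     = [0.5, 0.25, 0.125, 0.125]

# C is minimized when the approximation is exact: C(P, P) = E(P).
exact = C(skewed, skewed)
```

Under an alpha-accurate approximation, (1/alpha)P(i) <= P~(i) <= alpha*P(i), one gets C(P, P~) <= alpha*E(P), so the competitive ratio degrades by at most a factor of alpha.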
and some small companies focus exclusively on proprietary VWAP trading technology.\nIn this paper, we provide the first study of VWAP trading algorithms in an online, competitive ratio setting.\nWe first formalize the VWAP trading problem in a basic online model we call the price-volume model, which can be viewed as a generalization of previous theoretical online trading models incorporating market volume information.\nIn this model, we provide VWAP algorithms and competitive ratios, and compare this setting with the one-way trading (OWT) problem studied in [3].\nOur most interesting results, however, examine the VWAP trading problem in a new online trading model capturing the important recent phenomenon of limit order books in financial markets.\nSuch data and algorithms are a topic of great current interest on Wall Street [4].\nWe thus introduce a new online trading model incorporating limit order books, and examine both the one-way and VWAP trading problems in it.\nOur results are summarized in Figure 1 (see the caption for a summary).","lvl-2":"Competitive Algorithms for VWAP and Limit Order Trading\nABSTRACT\nWe introduce new online models for two important aspects of modern financial markets: Volume Weighted Average Price trading and limit order books.\nWe provide an extensive study of competitive algorithms in these models and relate them to earlier online algorithms for stock trading.\n1.\nINTRODUCTION\nWhile popular images of Wall Street often depict swashbuckling traders boldly making large gambles on just their market intuitions, the vast majority of trading is actually considerably more technical and constrained.\nThe constraints often derive from a complex combination of business, regulatory and institutional issues, and result in certain kinds of \"standard\" trading strategies or criteria that invite algorithmic analysis.\nOne of the most common activities in modern financial markets is known as Volume Weighted Average Price, or\nVWAP, 
trading.\nInformally, the VWAP of a stock over a specified market period is simply the average price paid per share during that period, so the price of each transaction in the market is weighted by its volume.\nIn VWAP trading, one attempts to buy or sell a fixed number of shares at a price that closely tracks the VWAP.\nVery large institutional trades constitute one of the main motivations behind VWAP activity.\nA typical scenario goes as follows.\nSuppose a very large mutual fund holds 3% of the outstanding shares of a large, publicly traded company--a huge fraction of the shares--and that this fund's manager decides he would like to reduce this holding to 2% over a 1-month period.\n(Such a decision might be forced by the fund's own regulations or other considerations.)\nTypically, such a fund manager would be unqualified to sell such a large number of shares in the open market--it requires a professional broker to intelligently break the trade up over time, and possibly over multiple exchanges, in order to minimize the market impact of such a sizable transaction.\nThus, the fund manager would approach brokerages for help in selling the 1%.\nThe brokerage will typically alleviate the fund manager's problem immediately by simply buying the shares directly from the fund manager, and then selling them off later--but what price should the brokerage pay the fund manager?\nPaying the price on the day of the sale is too risky for the brokerage, as they need to sell the shares themselves over an extended period, and events beyond their control (such as wars) could cause the price to fall dramatically.\nThe usual answer is that the brokerage offers to buy the shares from the fund manager at a per-share price tied to the VWAP over some future period--in our example, the brokerage might offer to buy the 1% at a per-share price of the coming month's VWAP minus 1 cent.\nThe brokerage now has a very clean challenge: by selling the shares themselves over the next month in a way 
that exactly matches the VWAP, a penny per share is earned in profits.\nIf they can beat the VWAP by a penny, they make two cents per share.\nSuch small-margin, high-volume profits can be extremely lucrative for a large brokerage.\nThe importance of the VWAP has led to many automated VWAP trading algorithms--indeed, every major brokerage has at least one \"VWAP box\",\nFigure 1: The table summarizes the results presented in this paper.\nThe rows represent results for either the OWT or VWAP criterion.\nThe columns represent which model we are working in.\nThe entry in the table is the competitive ratio between our algorithm and an optimal algorithm, and the closer the ratio is to 1 the better.\nThe parameter R represents a bound on the maximum to the minimum price fluctuation and the parameter Q represents a bound on the maximum to minimum volume fluctuation in the respective model.\n(See Section 4 for a description of the Macroscopic Distribution Model.)\nAll the results for the OWT trading criterion (which is a stronger criterion) directly translate to the VWAP criterion.\nHowever, in the VWAP setting, considering a restriction on the maximum to the minimum volume fluctuation Q leads to an additional class of results which depends on Q.
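The distribution-dependent results mentioned in the caption come from a randomized guess of the day's total volume: the algorithm guesses V_min * 2^i, with the bin index i drawn from a distribution G whose weights are proportional to sqrt(P(i)), P being the distribution over volume bins. A hypothetical sketch of that draw (helper names are ours):

```python
import math
import random

def guessing_distribution(p_bins):
    """G(i) proportional to sqrt(P(i)); with this choice the expected
    guessing penalty works out to (sum_i sqrt(P(i)))^2 = E(P)."""
    weights = [math.sqrt(p) for p in p_bins]
    total = sum(weights)
    return [w / total for w in weights]

def guess_total_volume(v_min, p_bins, rng=random):
    """Guess the day's total volume as V_min * 2**i for a randomly drawn bin i."""
    g = guessing_distribution(p_bins)
    i = rng.choices(range(1, len(p_bins) + 1), weights=g)[0]
    return v_min * 2 ** i

# If P is uniform over the bins, G is uniform too, recovering the plain
# "guess the exponent uniformly at random" strategy of the worst-case analysis.
g_uniform = guessing_distribution([0.25] * 4)
g_skewed  = guessing_distribution([0.81, 0.09, 0.09, 0.01])
```

A skewed bin distribution concentrates the guesses where the volume is likely to fall, which is exactly why the distributional bound improves on the uniform worst-case one.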
and some small companies focus exclusively on proprietary VWAP trading technology.\nIn this paper, we provide the first study of VWAP trading algorithms in an online, competitive ratio setting.\nWe first formalize the VWAP trading problem in a basic online model we call the price-volume model, which can be viewed as a generalization of previous theoretical online trading models incorporating market volume information.\nIn this model, we provide VWAP algorithms and competitive ratios, and compare this setting with the one-way trading (OWT) problem studied in [3].\nOur most interesting results, however, examine the VWAP trading problem in a new online trading model capturing the important recent phenomenon of limit order books in financial markets.\nBriefly, a limit buy or sell order specifies both the number of shares and the desired price, and will only be executed if there is a matching party on the opposing side, according to a well-defined matching procedure used by all the major exchanges.\nWhile limit order books (the list of limit orders awaiting possible future execution) have existed since the dawn of equity exchanges, only very recently have these books become visible to traders in real time, thus opening the way to trading algorithms of all varieties that attempt to exploit this rich market microstructure data.\nSuch data and algorithms are a topic of great current interest on Wall Street [4].\nWe thus introduce a new online trading model incorporating limit order books, and examine both the one-way and VWAP trading problems in it.\nOur results are summarized in Figure 1 (see the caption for a summary).\n2.\nTHE PRICE-VOLUME TRADING MODEL\nWe now present a trading model which includes both price and volume information about the sequence of trades.\nWhile this model is a generalization of previous formalisms for online trading, it makes an infinite liquidity assumption which fails to model the negative market impact that trading a large number of shares 
typically has.\nThis will be addressed in the order book model studied in the next section.\nA note on terminology: throughout the paper (unless otherwise specified), we shall use the term \"market\" to describe all activity or orders other than those of the algorithm under consideration.\nThe setting we consider can be viewed as a game between our algorithm and the market.\n2.1 The Model\nIn the price-volume trading model, we assume that the intraday trading activity in a given stock is summarized by a discrete sequence of price and volume pairs (pt, vt) for t = 1,..., T.\nHere t = 0 corresponds to the day's market open, and t = T to the close.\nWhile there is nothing technically special about the time horizon of a single day, it is particularly consistent with limit order book trading on Wall Street.\nThe pair (pt, vt) represents the fact that a total of vt shares were traded at an (average) price per share pt in the market between time t \u2212 1 and t. Realistically, we should imagine the number of intervals T being reasonably large, so that it is sensible to assign a common approximate price to all shares traded within an interval.\nIn the price-volume model, we shall make an infinite liquidity assumption for our trading algorithms.\nMore precisely, in this online model, we see the price-volume sequence one pair at a time.\nFollowing the observation of (pt, vt), we are permitted to sell any (possibly fractional) number of shares nt at the price pt.\nLet us assume that our goal is to sell N shares over the course of the day.\nHence, at each time, we must select a (possibly fractional) number of shares nt to sell at price pt, subject to the global constraint sum_{t=1}^{T} nt = N.\nIt is thus assumed that if we have \"left over\" shares to sell after time T \u2212 1--that is, if nT = N \u2212 sum_{t=1}^{T\u22121} nt > 0--we are forced to sell them at the closing price of the market: nT is sold at pT.\nIn this way we are certain to sell exactly N shares over the course of the day; the
only thing an algorithm must do is determine the schedule of selling based on the incoming market price-volume stream.\nAny algorithm which sells fractional volumes can be converted to a randomized algorithm which only sells integral volumes with the same expected number of shares sold.\nIf we keep the hard constraint of selling exactly N shares, we might incur an additional slight loss in the conversion.\n(Note that we only allow fractional volumes in the price-volume model, where liquidity is not an issue.\nIn the order book model to follow, we do not allow fractional volumes.)\nIn VWAP trading, the goal of an online algorithm A which sells exactly N shares is not to maximize profits per se, but to track the market VWAP.\nThe market VWAP for an intraday trading sequence S = (p1, v1),..., (pT, vT) is simply the average price paid per share over the course of the trading day: VWAPM(S) = (sum_{t=1}^{T} pt vt)\/V, where V is the total daily volume, i.e., V = sum_{t=1}^{T} vt.\nIf on the sequence S, the algorithm A sells its N shares using the volume sequence n1,...,nT, then we analogously define the VWAP of A on market sequence S by VWAPA(S) = (sum_{t=1}^{T} pt nt)\/N.\nNote that the market VWAP does not include the shares that the algorithm sells.\nThe VWAP competitive ratio of A with respect to a set of sequences \u03a3 is then RVWAP(A) = max_{S \u2208 \u03a3} VWAPM(S)\/VWAPA(S).\nIn the case that A is randomized, we generalize the definition above by taking an expectation over VWAPA(S) inside the max.\nWe note that unlike on Wall Street, our definition of VWAPM does not take our own trading into account.\nIt is easy to see that this makes it a more challenging criterion to track.\nIn contrast to the VWAP, another common measure of the performance of an online selling algorithm would be its one-way trading (OWT) competitive ratio [3] with respect to a set of sequences \u03a3: ROWT(A) = max_{S \u2208 \u03a3} pmax(S)\/VWAPA(S), where the algorithm's performance is compared to the largest individual price pmax(S) appearing in the sequence S.\nIn both VWAP and OWT, we are comparing the average price per share received by a selling algorithm to some measure of
market performance.\nIn the case of OWT, we compare to the rather ambitious benchmark of the high price of the day, ignoring volumes entirely.\nIn VWAP trading, we have the more modest goal of comparing favorably to the overall market average of the day.\nAs we shall see, there are some important commonalities and differences to these two approaches.\nFor now we note one simple fact: on any specific sequence S, VWAPA(S) may be larger than VWAPM(S).\nHowever, RVWAP(A) cannot be smaller than 1, since on any sequence S in which all prices pt are identical, it is impossible to get a better average price per share.\nThus, for all algorithms A, both RVWAP(A) and ROWT(A) are at least 1, and the closer to 1 they are, the better A is tracking its respective performance measure.\n2.2 VWAP Results in the Price-Volume Model\nAs in previous work on online trading, it is generally not possible to obtain finite bounds on competitive ratios with absolutely no assumptions on the set of sequences \u03a3--bounds on the maximum variation in price or volume are required, depending on the exact setting.\nWe thus introduce the following two assumptions.\n2.2.0.1 Volume Variability Assumption.\n.\nLet 0 p2>...
.\nConsider guessing the kth highest such price, pk.\nIf an order for N shares is placed at the day's start at price pk, then we are guaranteed to obtain a return of kpk.\nLet k* = argmax_k {kpk}.\nWe can view our algorithm as attempting to guess pk*, and succeeding if the guess p satisfies p \u2208 [pk*\/2, pk*].\nHence, we are 2 log(R) competitive with the quantity max1 i, this limit order must have been executed.\nThus unexecuted limit orders bound the VWAPs of the remainder of the day, resulting in at most one unexecuted order per price level.\nA bound on the lost revenue is thus the sum over the discretized prices: sum_{i=1}^{\u221e} (1 \u2212 \u03b5)^i pmax \u2264 pmax\/\u03b5.\nClearly our algorithm has sold at most N shares.\nNote that as N becomes large, VWAPA approaches 1 \u2212 \u03b5 times the market VWAP.\nIf we knew that the final total volume of the market executions is V, then we can set \u03b3 = V\/N, assuming that \u03b3 >> 1.\nIf we have only an upper and lower bound on V we should be able to \"guess\" and incur a logarithmic loss.\nThe following assumption tries to capture the market volume variability.\n3.4.0.5 Order Book Volume Variability Assumption.\nWe now assume that the total volume (which includes the shares executed by both our algorithm and the market) is variable within some known region and that the market volume will be greater than our algorithm's volume.\nMore formally, for all S \u2208 \u03a3, assume that the total volume V of shares traded in execM(S, S'), for any sequence S' of N sell limit orders, satisfies 2N \u2264 Vmin \u2264 V \u2264 Vmax.\nLet\nFigure 3: Here we present bounds from Section 4 based on the empirical volume distributions for four real stocks: QQQ, MCHP, JNPR, and CHKP.\nThe plots show histograms for the total daily volumes transacted on Island for these stocks, in the last year and a half, along with the corresponding values of log(Q) and E(Pbins vol) (denoted by 'E').\nWe assume that the minimum and maximum
daily volumes in the data correspond to Vmin and Vmax, respectively.\nThe worst-case competitive ratio bounds (which are twice log(Q)) of our algorithm for those stocks are 9.42, 10.56, 11.32, and 13.20, respectively.\nThe corresponding bounds on the competitive ratio performance of our algorithm under the volume distribution model (which are twice E(Pbins vol)) are better: 7.54, 7.72, 7.94, and 9.00, respectively (a 20-40% relative improvement).\nUsing a finer volume binning along with a slightly more refined bound on the competitive ratio, we can construct algorithms that, using the empirical volume distribution given as correct, guarantee even better competitive ratios of 2.76, 2.73, 2.75, and 3.17, respectively, for those stocks (details omitted).\n4.\nMACROSCOPIC DISTRIBUTION MODELS\nWe conclude our results with a return to the price-volume model, where we shall introduce some refined methods of analysis for online trading algorithms.\nWe leave the generalization of these methods to the order book model for future work.\nThe competitive ratios defined so far measure performance relative to some baseline criterion in the worst case over all market sequences S \u2208 \u03a3.\nIt has been observed in many online settings that such worst-case metrics can yield pessimistic results, and various relaxations have been considered, such as permitting a probability distribution over the input sequence.\nWe now consider distributional models that are considerably weaker than assuming a distribution over complete market sequences S \u2208 \u03a3.\nIn the volume distribution model, we assume only that there exists a distribution Pvol over the total volume V traded in the market for the day, and then examine the worst-case competitive ratio over sequences consistent with the randomly chosen volume.\nMore precisely, we define RVWAP(A, Pvol) = E_{V\u223cPvol}[ max_{S \u2208 seq(V)} VWAPM(S)\/VWAPA(S) ].\nHere V \u223c Pvol denotes that V is chosen with respect to distribution Pvol, and seq(V) \u2282 \u03a3 is the set
of all market sequences (p1, v1),..., (pT, vT) satisfying sum_{t=1}^{T} vt = V.\nSimilarly, for OWT, we can define ROWT(A, Pmaxprice) = E_{p\u223cPmaxprice}[ max_{S \u2208 seq(p)} p\/VWAPA(S) ].\nHere Pmaxprice is a distribution over just the maximum price of the day, and we then examine worst-case sequences consistent with this price (seq(p) \u2282 \u03a3 is the set of all market sequences satisfying max_{1\u2264t\u2264T} pt = p).\nThus our estimated bin index probabilities are all within a factor of \u03b1 of the truth.\nThen it is easy to show that C(Pbins vol, \u02dcPbins vol) \u2264 \u03b1E(Pbins vol), so according to Theorems 10 and 11 our penalty for using the approximate distribution is a factor of \u03b1 in competitive ratio.","keyphrases":["competit algorithm","vwap","onlin model","modern financi market","onlin algorithm","stock trade","volum weight averag price trade model","limit order book trade model","trade sequenc","share","market order","onlin trade","competit analysi"],"prmu":["P","P","P","P","P","P","R","R","M","U","R","R","M"]} {"id":"C-57","title":"Congestion Games with Load-Dependent Failures: Identical Resources","abstract":"We define a new class of games, congestion games with load-dependent failures (CGLFs), which generalizes the well-known class of congestion games, by incorporating the issue of resource failures into congestion games. In a CGLF, agents share a common set of resources, where each resource has a cost and a probability of failure. Each agent chooses a subset of the resources for the execution of his task, in order to maximize his own utility. The utility of an agent is the difference between his benefit from successful task completion and the sum of the costs over the resources he uses. CGLFs possess two novel features. It is the first model to incorporate failures into congestion settings, which results in a strict generalization of congestion games.
In addition, it is the first model to consider load-dependent failures in such framework, where the failure probability of each resource depends on the number of agents selecting this resource. Although, as we show, CGLFs do not admit a potential function, and in general do not have a pure strategy Nash equilibrium, our main theorem proves the existence of a pure strategy Nash equilibrium in every CGLF with identical resources and nondecreasing cost functions.","lvl-1":"Congestion Games with Load-Dependent Failures: Identical Resources Michal Penn Technion - IIT Haifa, Israel mpenn@ie.technion.ac.il Maria Polukarov Technion - IIT Haifa, Israel pmasha@tx.technion.ac.il Moshe Tennenholtz Technion - IIT Haifa, Israel moshet@ie.technion.ac.il ABSTRACT We define a new class of games, congestion games with loaddependent failures (CGLFs), which generalizes the well-known class of congestion games, by incorporating the issue of resource failures into congestion games.\nIn a CGLF, agents share a common set of resources, where each resource has a cost and a probability of failure.\nEach agent chooses a subset of the resources for the execution of his task, in order to maximize his own utility.\nThe utility of an agent is the difference between his benefit from successful task completion and the sum of the costs over the resources he uses.\nCGLFs possess two novel features.\nIt is the first model to incorporate failures into congestion settings, which results in a strict generalization of congestion games.\nIn addition, it is the first model to consider load-dependent failures in such framework, where the failure probability of each resource depends on the number of agents selecting this resource.\nAlthough, as we show, CGLFs do not admit a potential function, and in general do not have a pure strategy Nash equilibrium, our main theorem proves the existence of a pure strategy Nash equilibrium in every CGLF with identical resources and nondecreasing cost functions.\nCategories 
and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - multiagent systems General Terms Theory, Economics 1.\nINTRODUCTION\nWe study the effects of resource failures in congestion settings.\nThis study is motivated by a variety of situations in multi-agent systems with unreliable components, such as machines, computers, etc.\nWe define a model for congestion games with load-dependent failures (CGLFs) which provides a simple and natural description of such situations.\nIn this model, we are given a finite set of identical resources (service providers) where each element possesses a failure probability describing the probability of unsuccessful completion of its assigned tasks as a (nondecreasing) function of its congestion.\nThere is a fixed number of agents, each having a task which can be carried out by any of the resources.\nFor reliability reasons, each agent may decide to assign his task, simultaneously, to a number of resources.\nThus, the congestion on the resources is not known in advance, but is strategy-dependent.\nEach resource is associated with a cost, which is a (nonnegative) function of the congestion experienced by this resource.\nThe objective of each agent is to maximize his own utility, which is the difference between his benefit from successful task completion and the sum of the costs over the set of resources he uses.\nThe benefits of the agents from successful completion of their tasks are allowed to vary across the agents.\nThe resource cost function describes the cost suffered by an agent for selecting that resource, as a function of the number of agents who have selected it.\nThus, it is natural to assume that these functions are nonnegative.\nIn addition, in many real-life applications of our model the resource cost functions have a special structure.\nIn particular, they can monotonically increase or decrease with the number of users,
depending on the context.\nThe former case is motivated by situations where high congestion on a resource causes longer delay in the execution of its assigned tasks and, as a result, the cost of utilizing this resource might be higher.\nA typical example of such a situation is as follows.\nAssume we need to deliver an important package.\nSince there is no guarantee that a courier will reach the destination in time, we might send several couriers to deliver the same package.\nThe time required by each courier to deliver the package increases with the congestion on his way.\nIn addition, the payment to a courier is proportional to the time he spends in delivering the package.\nThus, the payment to the courier increases when the congestion increases.\nThe latter case (decreasing cost functions) describes situations where a group of agents using a particular resource have an opportunity to share its cost among the group's members, or where the cost of using a resource decreases with the number of users, according to some marketing policy.\nOur results We show that CGLFs and, in particular, CGLFs with nondecreasing cost functions, do not admit a potential function.\nTherefore, the CGLF model cannot be reduced to congestion games.\nNevertheless, if the failure probabilities are constant (do not depend on the congestion) then a potential function is guaranteed to exist.\nWe show that CGLFs and, in particular, CGLFs with decreasing cost functions, do not possess pure strategy Nash equilibria.\nHowever, as we show in our main result, there exists a pure strategy Nash equilibrium in any CGLF with nondecreasing cost functions.\nRelated work Our model extends the well-known class of congestion games [11].\nIn a congestion game, every agent has to choose from a finite set of resources, where the utility (or cost) of an agent from using a particular resource depends on the number of agents using it, and his total utility (cost) is the sum of the utilities (costs) obtained from the
resources he uses.\nAn important property of these games is the existence of pure strategy Nash equilibria.\nMonderer and Shapley [9] introduced the notions of potential function and potential game and proved that the existence of a potential function implies the existence of a pure strategy Nash equilibrium.\nThey observed that Rosenthal [11] proved his theorem on congestion games by constructing a potential function (hence, every congestion game is a potential game).\nMoreover, they showed that every finite potential game is isomorphic to a congestion game; hence, the classes of finite potential games and congestion games coincide.\nCongestion games have been extensively studied and generalized.\nIn particular, Leyton-Brown and Tennenholtz [5] extended the class of congestion games to the class of local-effect games.\nIn a local-effect game, each agent's payoff is affected not only by the number of agents who have chosen the same resources as he has, but also by the number of agents who have chosen neighboring resources (in a given graph structure).\nMonderer [8] dealt with another type of generalization of congestion games, in which the resource cost functions are player-specific (PS-congestion games).\nHe defined PS-congestion games of type q (q-congestion games), where q is a positive number, and showed that every game in strategic form is a q-congestion game for some q. Player-specific resource cost functions were discussed for the first time by Milchtaich [6].\nHe showed that simple and strategy-symmetric PS-congestion games are not potential games, but always possess a pure strategy Nash equilibrium.\nPS-congestion games were generalized to weighted congestion games [6] (or, ID-congestion games [7]), in which the resource cost functions are not only player-specific, but also depend on the identity of the users of the resource.\nAckermann et al.
[1] showed that weighted congestion games admit pure strategy Nash equilibria if the strategy space of each player consists of the bases of a matroid on the set of resources.\nMuch of the work on congestion games has been inspired by the fact that every such game has a pure strategy Nash equilibrium.\nIn particular, Fabrikant et al. [3] studied the computational complexity of finding pure strategy Nash equilibria in congestion games.\nIntensive study has also been devoted to quantify the inefficiency of equilibria in congestion games.\nKoutsoupias and Papadimitriou [4] proposed the worst-case ratio of the social welfare achieved by a Nash equilibrium and by a socially optimal strategy profile (dubbed the price of anarchy) as a measure of the performance degradation caused by lack of coordination.\nChristodoulou and Koutsoupias [2] considered the price of anarchy of pure equilibria in congestion games with linear cost functions.\nRoughgarden and Tardos [12] used this approach to study the cost of selfish routing in networks with a continuum of users.\nHowever, the above settings do not take into consideration the possibility that resources may fail to execute their assigned tasks.\nIn the computer science context of congestion games, where the alternatives of concern are machines, computers, communication lines etc., which are obviously prone to failures, this issue should not be ignored.\nPenn, Polukarov and Tennenholtz were the first to incorporate the issue of failures into congestion settings [10].\nThey introduced a class of congestion games with failures (CGFs) and proved that these games, while not being isomorphic to congestion games, always possess Nash equilibria in pure strategies.\nThe CGF-model significantly differs from ours.\nIn a CGF, the authors considered the delay associated with successful task completion, where the delay for an agent is the minimum of the delays of his successful attempts and the aim of each agent is to minimize his expected 
delay.\nIn contrast with the CGF-model, in our model we consider the total cost of the utilized resources, where each agent wishes to maximize the difference between his benefit from a successful task completion and the sum of his costs over the resources he uses.\nThe above differences imply that CGFs and CGLFs possess different properties.\nIn particular, if in our model the resource failure probabilities were constant and known in advance, then a potential function would exist.\nThis, however, does not hold for CGFs; in CGFs, the failure probabilities are constant but there is no potential function.\nFurthermore, the procedures proposed by the authors in [10] for the construction of a pure strategy Nash equilibrium are not valid in our model, even in the simple, agent-symmetric case, where all agents have the same benefit from successful completion of their tasks.\nOur work provides the first model of congestion settings with resource failures which considers the sum of congestion-dependent costs over utilized resources, and therefore does not extend the CGF-model, but rather generalizes the classic model of congestion games.\nMoreover, it is the first model to consider load-dependent failures in the above context.\nOrganization The rest of the paper is organized as follows.\nIn Section 2 we define our model.\nIn Section 3 we present our results.\nIn 3.1 we show that CGLFs, in general, do not have pure strategy Nash equilibria.\nIn 3.2 we focus on CGLFs with nondecreasing cost functions (nondecreasing CGLFs).\nWe show that these games do not admit a potential function.\nHowever, in our main result we show the existence of pure strategy Nash equilibria in nondecreasing CGLFs.\nSection 4 is devoted to a short discussion.\nMany of the proofs are omitted from this conference version of the paper, and will appear in the full version.\n2.\nTHE MODEL\nThe scenarios considered in this work consist of a finite set of agents where each agent has a task that can be
carried out by any element of a set of identical resources (service providers).\nThe agents simultaneously choose a subset of the resources in order to perform their tasks, and their aim is to maximize their own expected payoff, as described in the sequel.\nLet N be a set of n agents (n ∈ N), and let M be a set of m resources (m ∈ N).\nAgent i ∈ N chooses a strategy σ_i ∈ Σ_i, which is a (potentially empty) subset of the resources.\nThat is, Σ_i is the power set of the set of resources: Σ_i = P(M).\nGiven a subset S ⊆ N of the agents, the set of strategy combinations of the members of S is denoted by Σ_S = ×_{i∈S} Σ_i, and the set of strategy combinations of the complement subset of agents is denoted by Σ_{−S} (Σ_{−S} = Σ_{N∖S} = ×_{i∈N∖S} Σ_i).\nThe set of pure strategy profiles of all the agents is denoted by Σ (Σ = Σ_N).\nEach resource is associated with a cost, c(·), and a failure probability, f(·), each of which depends on the number of agents who use this resource.\nWe assume that the failure probabilities of the resources are independent.\nLet σ = (σ_1, ..., σ_n) ∈ Σ be a pure strategy profile.\nThe (m-dimensional) congestion vector that corresponds to σ is h^σ = (h^σ_e)_{e∈M}, where h^σ_e = |{i ∈ N : e ∈ σ_i}|.\nThe failure probability of a resource e is a monotone nondecreasing function f : {1, ..., n} → [0, 1) of the congestion experienced by e.\nThe cost of utilizing resource e is a function c : {1, ..., n} → R_+ of the congestion experienced by e.\nThe outcome for agent i ∈ N is denoted by x_i ∈ {S, F}, where S and F, respectively, indicate whether the task execution succeeded or failed.\nWe say that the execution of agent i's task succeeds if the task of agent i is successfully completed by at least one of the resources chosen by him.\nThe benefit of agent i from his outcome x_i is denoted by V_i(x_i), where V_i(S) = v_i, a given (nonnegative) value, and V_i(F) = 0.\nThe utility of agent i from strategy profile σ and his outcome x_i, u_i(σ, x_i), is the difference between his benefit from the outcome (V_i(x_i)) and the sum of the costs of the resources he has used: u_i(σ, x_i) = V_i(x_i) − Σ_{e∈σ_i} c(h^σ_e).\nThe expected utility of agent i from strategy profile σ, U_i(σ), is, therefore: U_i(σ) = (1 − ∏_{e∈σ_i} f(h^σ_e)) · v_i − Σ_{e∈σ_i} c(h^σ_e), where 1 − ∏_{e∈σ_i} f(h^σ_e) denotes the probability of successful completion of agent i's task.\nWe use the convention that ∏_{e∈∅} f(h^σ_e) = 1.\nHence, if agent i chooses the empty set σ_i = ∅ (does not assign his task to any resource), then his expected utility, U_i(∅, σ_{−i}), equals zero.\n3.\nPURE STRATEGY NASH EQUILIBRIA IN CGLFS\nIn this section we present our results on CGLFs.\nWe investigate the property of the (non-)existence of pure strategy Nash equilibria in these games.\nWe show that this class of games does not, in general, possess pure strategy equilibria.\nNevertheless, if the resource cost functions are nondecreasing then such equilibria are guaranteed to exist, despite the non-existence of a potential function.\n3.1 Decreasing Cost Functions\nWe start by showing that the class of CGLFs and, in particular, the subclass of CGLFs with decreasing cost functions, does not, in general, possess Nash equilibria in pure strategies.\nConsider a CGLF with two agents (N = {1, 2}) and
two resources (M = {e1, e2}).\nThe cost function of each resource is given by c(x) = 1/x^x, where x ∈ {1, 2} (so c(1) = 1 and c(2) = 1/4), and the failure probabilities are f(1) = 0.01 and f(2) = 0.26.\nThe benefits of the agents from successful task completion are v1 = 1.1 and v2 = 4.\nBelow we present the payoff matrix of the game (rows: agent 1's strategy; columns: agent 2's strategy; each cell lists U1, U2):\n| σ1 / σ2 | ∅ | {e1} | {e2} | {e1, e2} |\n| ∅ | 0, 0 | 0, 2.96 | 0, 2.96 | 0, 1.9996 |\n| {e1} | 0.089, 0 | 0.564, 2.71 | 0.089, 2.96 | 0.564, 2.7396 |\n| {e2} | 0.089, 0 | 0.089, 2.96 | 0.564, 2.71 | 0.564, 2.7396 |\n| {e1, e2} | −0.90011, 0 | −0.15286, 2.71 | −0.15286, 2.71 | 0.52564, 3.2296 |\nTable 1: Example for non-existence of pure strategy Nash equilibria in CGLFs.\nIt can easily be seen that for every pure strategy profile σ in this game there exist an agent i and a strategy σ'_i ∈ Σ_i such that U_i(σ_{−i}, σ'_i) > U_i(σ).\nThat is, no pure strategy profile in this game is in equilibrium.\nHowever, if the cost functions in a given CGLF do not decrease in the number of users, then, as we show in the main result of this paper, a pure strategy Nash equilibrium is guaranteed to exist.\n3.2 Nondecreasing Cost Functions\nThis section focuses on the subclass of CGLFs with nondecreasing cost functions (henceforth, nondecreasing CGLFs).\nWe show that nondecreasing CGLFs do not, in general, admit a potential function.\nTherefore, these games are not congestion games.\nNevertheless, we prove that all such games possess pure strategy Nash equilibria.\n3.2.1 The (Non-)Existence of a Potential Function\nRecall that Monderer and Shapley [9] introduced the notions of potential function and potential game, where a potential game is defined to be a game that possesses a potential function.\nA potential function is a real-valued function over the set of pure strategy profiles, with the property that the gain (or loss) of an agent
shifting to another strategy while the other agents' strategies are kept unchanged equals the corresponding increment of the potential function.\nThe authors [9] showed that the classes of finite potential games and congestion games coincide.\nHere we show that the class of CGLFs and, in particular, the subclass of nondecreasing CGLFs, does not admit a potential function, and therefore is not included in the class of congestion games.\nHowever, for the special case of constant failure probabilities, a potential function is guaranteed to exist.\nTo prove these statements we use the following characterization of potential games [9].\nA path in Σ is a sequence τ = (σ^0 → σ^1 → ···) such that for every k ≥ 1 there exists a unique agent, say agent i, such that σ^k = (σ^{k−1}_{−i}, σ_i) for some σ_i ≠ σ^{k−1}_i in Σ_i.\nA finite path τ = (σ^0 → σ^1 → ··· → σ^K) is closed if σ^0 = σ^K.\nIt is a simple closed path if, in addition, σ^l ≠ σ^k for every 0 ≤ l ≠ k ≤ K − 1.\nThe length of a simple closed path is defined to be the number of distinct points in it; that is, the length of τ = (σ^0 → σ^1 → ··· → σ^K) is K.\nTheorem 1.\n[9] Let G be a game in strategic form with a vector U = (U1, ..., Un) of utility functions.\nFor a finite path τ = (σ^0 → σ^1 → ··· → σ^K), let U(τ) = Σ_{k=1}^{K} [U_{i_k}(σ^k) − U_{i_k}(σ^{k−1})], where i_k is the unique deviator at step k.
Then, G is a potential game if and only if U(τ) = 0 for every simple closed path τ of length 4.\nLoad-Dependent Failures.\nBased on Theorem 1, we present the following counterexample that demonstrates the non-existence of a potential function in CGLFs.\nWe consider the following agent-symmetric game G in which two agents (N = {1, 2}) wish to assign a task to two resources (M = {e1, e2}).\nThe benefit from a successful task completion of each agent equals v, and the failure probability function strictly increases with the congestion.\nConsider the simple closed path of length 4 which is formed by α = (∅, {e2}), β = ({e1}, {e2}), γ = ({e1}, {e1, e2}), δ = (∅, {e1, e2}):\n| σ1 / σ2 | {e2} | {e1, e2} |\n| ∅ | U1 = 0, U2 = (1 − f(1))v − c(1) | U1 = 0, U2 = (1 − f(1)^2)v − 2c(1) |\n| {e1} | U1 = (1 − f(1))v − c(1), U2 = (1 − f(1))v − c(1) | U1 = (1 − f(2))v − c(2), U2 = (1 − f(1)f(2))v − c(1) − c(2) |\nTable 2: Example for non-existence of potentials in CGLFs.\nTherefore, U1(α) − U1(β) + U2(β) − U2(γ) + U1(γ) − U1(δ) + U2(δ) − U2(α) = v(1 − f(1))(f(1) − f(2)) ≠ 0, since f strictly increases with the congestion and hence f(1) < f(2).\nThus, by Theorem 1, nondecreasing CGLFs do not admit potentials.\nAs a result, they are not congestion games.\nHowever, as presented in the next section, the special case in which the failure probabilities are constant always possesses a potential function.\nConstant Failure Probabilities.\nWe show below that CGLFs with constant failure probabilities always possess a potential function.\nThis follows from the fact that the expected benefit (revenue) of each agent in this case does not depend on the choices of the other agents.\nIn addition, for each agent, the sum of the costs over his chosen subset of resources equals the payoff of an agent choosing the same strategy in the corresponding congestion game.\nAssume we are given a game G with constant failure
probabilities.\nLet τ = (α → β → γ → δ → α) be an arbitrary simple closed path of length 4.\nLet i and j denote the active agents (deviators) in τ, and let z ∈ Σ_{−{i,j}} be a fixed strategy profile of the other agents.\nLet α = (x_i, x_j, z), β = (y_i, x_j, z), γ = (y_i, y_j, z), δ = (x_i, y_j, z), where x_i, y_i ∈ Σ_i and x_j, y_j ∈ Σ_j.\nThen, U(τ) = U_i(x_i, x_j, z) − U_i(y_i, x_j, z) + U_j(y_i, x_j, z) − U_j(y_i, y_j, z) + U_i(y_i, y_j, z) − U_i(x_i, y_j, z) + U_j(x_i, y_j, z) − U_j(x_i, x_j, z).\nWith a constant failure probability f, the success probability of an agent depends only on the size of his own strategy, so each utility splits into a benefit term and a cost term; for instance, U_i(x_i, x_j, z) = (1 − f^{|x_i|}) v_i − Σ_{e∈x_i} c(h^{(x_i,x_j,z)}_e).\nThe benefit terms cancel as a telescoping sum: agent i contributes (1 − f^{|x_i|}) v_i − (1 − f^{|y_i|}) v_i + (1 − f^{|y_i|}) v_i − (1 − f^{|x_i|}) v_i = 0, and likewise for agent j.\nThe remaining cost terms form exactly the expression U(τ) for the corresponding congestion game, in which each agent's payoff is minus the sum of the congestion-dependent costs of his chosen resources; this sum equals 0 by applying Theorem 1 to congestion games, which are known to possess a potential function.\nThus, by Theorem 1, G is a potential game.\nWe note that the above result holds also for the more general setting with non-identical resources (having different failure probabilities and cost functions) and general cost functions (not necessarily monotone and/or nonnegative).\n3.2.2 The Existence of a Pure Strategy Nash Equilibrium\nIn the previous section, we have shown that CGLFs and, in particular, nondecreasing CGLFs, do not admit a potential function, but this fact, in general, does not contradict the existence of an equilibrium in pure strategies.\nIn this section, we present and prove the main result of this paper (Theorem 2), which shows the existence of pure strategy Nash equilibria in nondecreasing CGLFs.\nTheorem 2.\nEvery nondecreasing CGLF possesses a Nash equilibrium in pure strategies.\nThe proof of Theorem 2 is based on Lemmas 4, 7 and 8, which are presented in the
sequel.\nWe start with some definitions and observations that are needed for their proofs.\nIn particular, we present the notions of A-, D- and S-stability and show that a strategy profile is in equilibrium if and only if it is A-, D- and S-stable.\nFurthermore, we prove the existence of such a profile in any given nondecreasing CGLF.\nDefinition 3.\nFor any strategy profile σ ∈ Σ and for any agent i ∈ N, the operation of adding precisely one resource to his strategy, σ_i, is called an A-move of i from σ.\nSimilarly, the operation of dropping a single resource is called a D-move, and the operation of switching one resource for another is called an S-move.\nClearly, if agent i deviates from strategy σ_i to strategy σ'_i by applying a single A-, D- or S-move, then max{|σ_i∖σ'_i|, |σ'_i∖σ_i|} = 1; and vice versa, if max{|σ_i∖σ'_i|, |σ'_i∖σ_i|} = 1, then σ'_i is obtained from σ_i by applying exactly one such move.\nFor simplicity of exposition, for any pair of sets A and B, let μ(A, B) = max{|A∖B|, |B∖A|}.\nThe following lemma implies that any strategy profile in which no agent wishes unilaterally to apply a single A-, D- or S-move is a Nash equilibrium.\nMore precisely, we show that if there exists an agent who benefits from a unilateral deviation from a given strategy profile, then there exists a single A-, D- or S-move which is profitable for him as well.\nLemma 4.\nGiven a nondecreasing CGLF, let σ ∈ Σ be a strategy profile which is not in equilibrium, and let i ∈ N be such that there exists x_i ∈ Σ_i for which U_i(σ_{−i}, x_i) > U_i(σ).\nThen, there exists y_i ∈ Σ_i such that U_i(σ_{−i}, y_i) > U_i(σ) and μ(y_i, σ_i) = 1.\nTherefore, to prove the existence of a pure strategy Nash equilibrium, it suffices to look for a strategy profile for which no agent wishes to unilaterally apply an A-, D- or S-move.\nBased on the above observation,
we define A-, D- and S-stability as follows.\nDefinition 5.\nA strategy profile σ is said to be A-stable (resp., D-stable, S-stable) if there are no agents with a profitable A- (resp., D-, S-) move from σ.\nSimilarly, we define a strategy profile σ to be DS-stable if there are no agents with a profitable D- or S-move from σ.\nThe set of all DS-stable strategy profiles is denoted by Σ^0.\nObviously, the profile (∅, ..., ∅) is DS-stable, so Σ^0 is not empty.\nOur goal is to find a DS-stable profile for which no profitable A-move exists, implying that this profile is in equilibrium.\nTo describe how we achieve this, we define the notions of light (heavy) resources and (nearly-) even strategy profiles, which play a central role in the proof of our main result.\nDefinition 6.\nGiven a strategy profile σ, resource e is called σ-light if h^σ_e ∈ arg min_{e′∈M} h^σ_{e′}, and σ-heavy otherwise.\nA strategy profile σ with no heavy resources will be termed even.\nA strategy profile σ satisfying |h^σ_e − h^σ_{e′}| ≤ 1 for all e, e′ ∈ M will be termed nearly-even.\nObviously, every even strategy profile is nearly-even.\nIn addition, in a nearly-even strategy profile, all heavy resources (if any exist) have the same congestion.\nWe also observe that the profile (∅, ..., ∅) is even (and DS-stable), so the subset of even, DS-stable strategy profiles is not empty.\nBased on the above observations, we define two types of A-moves that are used in the sequel.\nSuppose σ ∈ Σ^0 is a nearly-even DS-stable strategy profile.\nFor each agent i ∈ N, let e_i ∈ arg min_{e∈M∖σ_i} h^σ_e.\nThat is, e_i is a lightest resource not previously chosen by i.
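The congestion and lightness notions above are straightforward to operationalize. Below is a minimal sketch (the function and resource names are our own, not from the paper) that computes the congestion vector h^σ, classifies σ-light resources, tests near-evenness, and picks a lightest unused resource e_i for an agent:

```python
RESOURCES = ("e1", "e2", "e3")  # hypothetical resource set for illustration

def congestion(profile):
    # h^sigma_e = number of agents whose strategy contains e
    return {e: sum(e in s for s in profile) for e in RESOURCES}

def light_resources(profile):
    # sigma-light resources: those attaining the minimum congestion
    h = congestion(profile)
    m = min(h.values())
    return {e for e, load in h.items() if load == m}

def is_nearly_even(profile):
    # nearly-even: |h_e - h_e'| <= 1 for all resource pairs e, e'
    loads = congestion(profile).values()
    return max(loads) - min(loads) <= 1

def lightest_unused(profile, i):
    # e_i: a lightest resource not already chosen by agent i (None if all are used)
    h = congestion(profile)
    unused = [e for e in RESOURCES if e not in profile[i]]
    return min(unused, key=h.get) if unused else None
```

For example, under the profile ({e1}, {e1, e2}) the congestion vector is {e1: 2, e2: 1, e3: 0}, e3 is the unique light resource, the profile is not nearly-even (spread of 2), and agent 1's lightest unused resource is e3.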
Then, if there exists any profitable A-move for agent i, the A-move with e_i is profitable for i as well.\nIndeed, if agent i wishes to unilaterally add a resource, say a ∈ M∖σ_i, then U_i(σ_{−i}, σ_i ∪ {a}) > U_i(σ).\nHence, (1 − ∏_{e∈σ_i} f(h^σ_e) · f(h^σ_a + 1)) v_i − Σ_{e∈σ_i} c(h^σ_e) − c(h^σ_a + 1) > (1 − ∏_{e∈σ_i} f(h^σ_e)) v_i − Σ_{e∈σ_i} c(h^σ_e), which rearranges to v_i ∏_{e∈σ_i} f(h^σ_e) > c(h^σ_a + 1) / (1 − f(h^σ_a + 1)) ≥ c(h^σ_{e_i} + 1) / (1 − f(h^σ_{e_i} + 1)), where the last inequality holds because h^σ_{e_i} ≤ h^σ_a and both c and f are nondecreasing.\nReversing the rearrangement for e_i yields U_i(σ_{−i}, σ_i ∪ {e_i}) > U_i(σ).\nIf no agent wishes to change his strategy in this manner, i.e., U_i(σ) ≥ U_i(σ_{−i}, σ_i ∪ {e_i}) for all i ∈ N, then by the above U_i(σ) ≥ U_i(σ_{−i}, σ_i ∪ {a}) for all i ∈ N and a ∈ M∖σ_i.\nHence, σ is A-stable and, by Lemma 4, σ is a Nash equilibrium strategy profile.\nOtherwise, let N(σ) denote the subset of all agents for which there exists e_i such that a unilateral addition of e_i is profitable.\nLet a ∈ arg min_{e_i : i∈N(σ)} h^σ_{e_i}, and let i ∈ N(σ) be an agent for which e_i = a.\nIf a is σ-light, then let σ' = (σ_{−i}, σ_i ∪ {a}).\nIn this case we say that σ' is obtained from σ by a one-step addition of resource a, and a is called an added resource.\nIf a is σ-heavy, then there exist a σ-light resource b and an agent j such that a ∈ σ_j and b ∉ σ_j.\nThen let σ' = (σ_{−{i,j}}, σ_i ∪ {a}, (σ_j ∖ {a}) ∪ {b}).\nIn this case we say that σ' is obtained from σ by a two-step addition of resource b, and b is called an added resource.\nWe notice that, in both cases, the congestion of each resource in σ' is the same as in σ, except for the added resource, whose congestion in σ' increased by 1.\nThus, since the added resource is σ-light and σ is nearly-even, σ' is nearly-even.\nThen, the following lemma implies the S-stability of σ'.\nLemma 7.\nIn a nondecreasing CGLF, every nearly-even strategy profile is S-stable.\nCoupled with Lemma 7, the following lemma shows that if σ is a nearly-even and DS-stable strategy profile, and σ' is obtained from σ by a one- or two-step addition of resource a, then the only potential cause for non-DS-stability of σ' is the existence of an agent k ∈ N with σ'_k ≠ σ_k who wishes to drop the added resource a.\nLemma 8.\nLet σ be a nearly-even DS-stable strategy profile of a given nondecreasing CGLF, and let σ' be obtained from σ by a one- or two-step addition of resource a. Then, there are no profitable D-moves for any agent i ∈ N with σ'_i = σ_i.\nFor an agent i ∈ N with σ'_i ≠ σ_i, the only possible profitable D-move (if one exists) is to drop the added resource a.\nWe are now ready to prove our main result, Theorem 2.\nLet us briefly describe the idea behind the proof.\nBy Lemma 4, it suffices to prove the existence of a strategy profile which is A-, D- and S-stable.\nWe start with the set of even, DS-stable strategy profiles, which is obviously not empty.\nIn this set, we consider the subset of strategy profiles with maximum congestion and maximum sum of the agents' utilities.\nAssuming on the contrary that every DS-stable profile admits a profitable A-move, we show the existence of a strategy profile x in the above subset such that a (one-step) addition of some resource a to x results in a DS-stable strategy.\nThen, by a finite series of one- or two-step addition operations, we obtain an even, DS-stable strategy profile with strictly higher congestion on the resources, contradicting the choice of x.\nThe full proof is presented below.\nProof of Theorem 2: Let Σ^1 ⊆ Σ^0 be the subset of all even,
DS-stable strategy profiles.\nObserve that since (\u2205, ... , \u2205) is an even, DS-stable strategy profile, then \u03a31 is not empty, and min\u03c3\u2208\u03a30 \u02db \u02db{e \u2208 M : e is \u03c3\u2212heavy} \u02db \u02db = 0.\nThen, \u03a31 could also be defined as \u03a31 = arg min \u03c3\u2208\u03a30 \u02db \u02db{e \u2208 M : e is \u03c3\u2212heavy} \u02db \u02db , with h\u03c3 being the common congestion.\nNow, let \u03a32 \u2286 \u03a31 be the subset of \u03a31 consisting of all those profiles with maximum congestion on the resources.\nThat is, \u03a32 = arg max \u03c3\u2208\u03a31 h\u03c3 .\nLet UN (\u03c3) = P i\u2208N Ui(\u03c3) denotes the group utility of the agents, and let \u03a33 \u2286 \u03a32 be the subset of all profiles in \u03a32 with maximum group utility.\nThat is, \u03a33 = arg max \u03c3\u2208\u03a32 X i\u2208N Ui(\u03c3) = arg max \u03c3\u2208\u03a32 UN (\u03c3) .\nConsider first the simple case in which max\u03c3\u2208\u03a31 h\u03c3 = 0.\nObviously, in this case, \u03a31 = \u03a32 = \u03a33 = {x = (\u2205, ... 
, ∅)}. We show below that by performing a finite series of (one-step) addition operations on x, we obtain an even, DS-stable strategy profile y with higher congestion, that is, with h^y > h^x = 0, in contradiction to x ∈ Σ². Let z ∈ Σ⁰ be a nearly-even (not necessarily even) DS-stable profile such that min_{e∈M} h^z_e = 0, and note that the profile x satisfies the above conditions. Let N(z) be the subset of agents for which a profitable A-move exists, and let i ∈ N(z). Obviously, there exists a z-light resource a such that U_i(z_{−i}, z_i ∪ {a}) > U_i(z) (otherwise, arg min_{e∈M} h^z_e ⊆ z_i, in contradiction to min_{e∈M} h^z_e = 0). Consider the strategy profile z′ = (z_{−i}, z_i ∪ {a}), which is obtained from z by a (one-step) addition of resource a by agent i. Since z is nearly-even and a is z-light, we can easily see that z′ is nearly-even. Then, Lemma 7 implies that z′ is S-stable. Since i is the only agent using resource a in z′, by Lemma 8, no profitable D-moves are available. Thus, z′ is a DS-stable strategy profile. Therefore, since the number of resources is finite, there is a finite series of one-step addition operations on x = (∅, …
, ∅) that leads to a strategy profile y ∈ Σ¹ with h^y = 1 > 0 = h^x, in contradiction to x ∈ Σ². We turn now to consider the other case where max_{σ∈Σ¹} h^σ ≥ 1. In this case we select from Σ³ a strategy profile x, as described below, and use it to contradict our contrary assumption. Specifically, we show that there exists x ∈ Σ³ such that for all j ∈ N,

v_j f(h^x)^{|x_j|−1} ≥ c(h^x + 1) / (1 − f(h^x + 1)).  (1)

Let x′ be a strategy profile which is obtained from x by a (one-step) addition of some resource a ∈ M by some agent i ∈ N(x) (note that x′ is nearly-even). Then, (1) is derived from, and essentially equivalent to, the inequality U_j(x′) ≥ U_j(x′_{−j}, x′_j ∖ {a}) for all j ∈ N with a ∈ x_j. That is, after performing an A-move with a by i, there is no profitable D-move with a. Then, by Lemmas 7 and 8, x′ is DS-stable. Following the same lines as above, we construct a procedure that initializes at x′ and achieves a strategy profile y ∈ Σ¹ with h^y > h^x, in contradiction to x ∈ Σ². Now, let us confirm the existence of x ∈ Σ³ that satisfies (1). Let x ∈ Σ³ and let M(x) be the subset of all resources for which there exists a profitable (one-step) addition. First, we show that (1) holds for all j ∈ N such that x_j ∩ M(x) ≠ ∅, that is, for all those agents with one of their resources being desired by another agent. Let a ∈ M(x), and let x′ be the strategy profile that is obtained from x by the (one-step) addition of a by agent i.
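The equivalence between (1) and the absence of a profitable drop of the added resource can be sanity-checked numerically. The sketch below is a toy instance, not from the paper: the functions f(h) = min(0.1 + 0.1h, 0.95) and c(h) = 1 + 0.5h and all numeric values are illustrative assumptions; the utility is the CGLF utility U_i(σ) = (1 − ∏_{e∈σ_i} f(h_e)) v_i − Σ_{e∈σ_i} c(h_e) from the model.

```python
from math import prod

# Illustrative (assumed) nondecreasing failure and cost functions, 0 < f < 1.
f = lambda h: min(0.1 + 0.1 * h, 0.95)   # failure probability at congestion h
c = lambda h: 1.0 + 0.5 * h              # cost of a resource at congestion h

def utility(own, h, v):
    """CGLF utility: expected success benefit minus summed resource costs."""
    fail_all = prod(f(h[e]) for e in own)        # all chosen resources fail
    return (1.0 - fail_all) * v - sum(c(h[e]) for e in own)

def drop_is_profitable(v_j, x_j, a, hx):
    """After i's A-move with a (even profile, congestion hx), does j gain by dropping a?"""
    h_after_add = {e: hx + (1 if e == a else 0) for e in x_j}
    keep = utility(x_j, h_after_add, v_j)
    drop = utility(x_j - {a}, h_after_add, v_j)  # a leaves j's set, so it no longer affects j
    return drop > keep

def condition_1(v_j, x_j, hx):
    """Inequality (1): v_j * f(h^x)^(|x_j|-1) >= c(h^x+1) / (1 - f(h^x+1))."""
    return v_j * f(hx) ** (len(x_j) - 1) >= c(hx + 1) / (1.0 - f(hx + 1))

# Even profile with h^x = 2; agent j uses {0, 1}; agent i adds a = 0.
for v_j in (10.0, 20.0):
    assert condition_1(v_j, {0, 1}, 2) == (not drop_is_profitable(v_j, {0, 1}, 0, 2))
```

With these toy functions the threshold in (1) is c(3)/(1 − f(3)) ≈ 4.17, so j with v_j = 20 keeps the added resource while j with v_j = 10 prefers to drop it.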
Assume on the contrary that there is an agent j with a ∈ x_j such that

v_j f(h^x)^{|x_j|−1} < c(h^x + 1) / (1 − f(h^x + 1)).

Let x″ = (x′_{−j}, x′_j ∖ {a}). Below we demonstrate that x″ is a DS-stable strategy profile and, since x and x″ correspond to the same congestion vector, we conclude that x″ lies in Σ². In addition, we show that U_N(x″) > U_N(x), contradicting the fact that x ∈ Σ³. To show that x″ ∈ Σ⁰, we note that x″ is an even strategy profile, and thus no S-moves may be performed for x″. In addition, since h^{x″} = h^x and x ∈ Σ⁰, there are no profitable D-moves for any agent k ≠ i, j. It remains to show that there are no profitable D-moves for agents i and j as well. Since U_i(x′) > U_i(x), we get

v_i f(h^x)^{|x_i|} > c(h^x + 1) / (1 − f(h^x + 1))
⇒ v_i f(h^{x″})^{|x″_i|−1} = v_i f(h^x)^{|x_i|} > c(h^x + 1) / (1 − f(h^x + 1)) > c(h^x) / (1 − f(h^x)) = c(h^{x″}) / (1 − f(h^{x″})),

which implies U_i(x″) > U_i(x″_{−i}, x″_i ∖ {b}) for all b ∈ x″_i. Thus, there are no profitable D-moves for agent i.
By the DS-stability of x, for agent j and for all b ∈ x_j, we have

U_j(x) ≥ U_j(x_{−j}, x_j ∖ {b}) ⇒ v_j f(h^x)^{|x_j|−1} ≥ c(h^x) / (1 − f(h^x)).

Then,

v_j f(h^{x″})^{|x″_j|−1} > v_j f(h^{x″})^{|x″_j|} = v_j f(h^x)^{|x_j|−1} ≥ c(h^x) / (1 − f(h^x)) = c(h^{x″}) / (1 − f(h^{x″}))
⇒ U_j(x″) > U_j(x″_{−j}, x″_j ∖ {b}), for all b ∈ x″_j.

Therefore, x″ is DS-stable and lies in Σ². To show that U_N(x″), the group utility of x″, satisfies U_N(x″) > U_N(x), we note that h^{x″} = h^x, and thus U_k(x″) = U_k(x) for all k ∈ N ∖ {i, j}. Therefore, we have to show that U_i(x″) + U_j(x″) > U_i(x) + U_j(x), or U_i(x″) − U_i(x) > U_j(x) − U_j(x″). Observe that

U_i(x′) > U_i(x) ⇒ v_i f(h^x)^{|x_i|} > c(h^x + 1) / (1 − f(h^x + 1))

and

U_j(x′) < U_j(x″) ⇒ v_j f(h^x)^{|x_j|−1} < c(h^x + 1) / (1 − f(h^x + 1)),

which yields v_i f(h^x)^{|x_i|} > v_j f(h^x)^{|x_j|−1}. Thus,

U_i(x″) − U_i(x) = (1 − f(h^x)^{|x_i|+1}) v_i − (|x_i| + 1) c(h^x) − [(1 − f(h^x)^{|x_i|}) v_i − |x_i| c(h^x)]
= v_i f(h^x)^{|x_i|} (1 − f(h^x)) − c(h^x)
> v_j f(h^x)^{|x_j|−1} (1 − f(h^x)) − c(h^x)
= (1 − f(h^x)^{|x_j|}) v_j − |x_j| c(h^x) − [(1 − f(h^x)^{|x_j|−1}) v_j − (|x_j| − 1) c(h^x)]
= U_j(x) − U_j(x″).

Therefore, x″ lies in Σ² and satisfies U_N(x″) > U_N(x), in contradiction to x ∈ Σ³. Hence, if x ∈ Σ³ then (1) holds for all j ∈ N such that x_j ∩ M(x) ≠ ∅. Now let us see that there exists x ∈ Σ³ such that (1) holds for all the agents. For that, choose an agent i ∈ arg min_{k∈N} v_k f(h^x)^{|x_k|}. If there exists a ∈ x_i ∩ M(x), then i satisfies (1), implying, by the choice of agent i, that (1) holds for any agent k ∈ N.
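The two closed forms in the chain above, U_i(x″) − U_i(x) = v_i f(h^x)^{|x_i|}(1 − f(h^x)) − c(h^x) and U_j(x) − U_j(x″) = v_j f(h^x)^{|x_j|−1}(1 − f(h^x)) − c(h^x), can be checked by direct computation on a toy even profile. The functions f and c and all numbers below are illustrative assumptions, not values from the paper.

```python
# Illustrative (assumed) nondecreasing failure and cost functions, with 0 < f < 1.
f = lambda h: min(0.1 + 0.1 * h, 0.95)
c = lambda h: 1.0 + 0.5 * h

def utility_even(n, h, v):
    """Utility of an agent holding n resources, all at common congestion h."""
    return (1.0 - f(h) ** n) * v - n * c(h)

# Even profile x with common congestion hx; moving resource a from j to i
# (the profile x'' of the proof) leaves the congestion vector unchanged.
hx, ni, nj = 2, 2, 3        # |x_i| = 2, |x_j| = 3 (toy sizes)
vi, vj = 50.0, 12.0         # chosen so that i's A-move is profitable and j prefers to drop a

gain_i = utility_even(ni + 1, hx, vi) - utility_even(ni, hx, vi)   # U_i(x'') - U_i(x)
loss_j = utility_even(nj, hx, vj) - utility_even(nj - 1, hx, vj)   # U_j(x) - U_j(x'')

# Closed forms derived in the text:
assert abs(gain_i - (vi * f(hx) ** ni * (1 - f(hx)) - c(hx))) < 1e-9
assert abs(loss_j - (vj * f(hx) ** (nj - 1) * (1 - f(hx)) - c(hx))) < 1e-9

# v_i f(h^x)^{|x_i|} > v_j f(h^x)^{|x_j|-1} forces the group utility to rise:
assert vi * f(hx) ** ni > vj * f(hx) ** (nj - 1) and gain_i > loss_j
```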
Otherwise, if no resource in x_i lies in M(x), then let a ∈ x_i and a′ ∈ M(x). Since a ∈ x_i, a′ ∉ x_i, and h^x_a = h^x_{a′}, there exists an agent j such that a′ ∈ x_j and a ∉ x_j. One can easily check that the strategy profile

x′ = (x_{−{i,j}}, (x_i ∖ {a}) ∪ {a′}, (x_j ∖ {a′}) ∪ {a})

lies in Σ³. Thus, x′ satisfies (1) for agent i, and therefore, for any agent k ∈ N. Now, let x ∈ Σ³ satisfy (1). We show below that by performing a finite series of one- and two-step addition operations on x, we can achieve a strategy profile y that lies in Σ¹ such that h^y > h^x, in contradiction to x ∈ Σ². Let z ∈ Σ⁰ be a nearly-even (not necessarily even), DS-stable strategy profile such that

v_i ∏_{e ∈ z_i ∖ {b}} f(h^z_e) ≥ c(h^z_b + 1) / (1 − f(h^z_b + 1)),  (2)

for all i ∈ N and for every z-light resource b ∈ z_i. We note that for a profile x ∈ Σ³ ⊆ Σ¹, with all resources being x-light, conditions (2) and (1) are equivalent. Let z′ be obtained from z by a one- or two-step addition of a z-light resource a. Obviously, z′ is nearly-even. In addition, h^{z′}_e ≥ h^z_e for all e ∈ M, and min_{e∈M} h^{z′}_e ≥ min_{e∈M} h^z_e. To complete the proof we need to show that z′ is DS-stable and, in addition, that if min_{e∈M} h^{z′}_e = min_{e∈M} h^z_e then z′ has property (2). The DS-stability of z′ follows directly from Lemmas 7 and 8, and from (2) with respect to z. It remains to prove property (2) for z′ with min_{e∈M} h^{z′}_e = min_{e∈M} h^z_e.
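The two facts just used, that (2) collapses to (1) on an even profile and that the left-hand side of (2) can only grow when congestions weakly increase (f being nondecreasing), can be illustrated in a few lines. The function f and the congestion vectors below are assumed toy choices, not data from the paper.

```python
from math import prod

# Illustrative (assumed) nondecreasing failure probability, 0 < f < 1.
f = lambda h: min(0.1 + 0.1 * h, 0.95)

def lhs_of_2(v, own, h, b):
    """Left-hand side of (2): v times the product of f(h_e) over the agent's resources except b."""
    return v * prod(f(h[e]) for e in own if e != b)

# On an even profile (every resource at congestion hx), (2) collapses to (1):
v, hx = 20.0, 2
own = {0, 1, 2}
for b in own:
    assert abs(lhs_of_2(v, own, {e: hx for e in own}, b) - v * f(hx) ** (len(own) - 1)) < 1e-12

# Monotonicity step for agents with unchanged strategies: raising congestions
# coordinatewise weakly raises the product, because f is nondecreasing.
h  = {0: 2, 1: 2, 2: 2}
h2 = {0: 3, 1: 2, 2: 2}   # h2 dominates h
assert lhs_of_2(v, own, h2, 0) >= lhs_of_2(v, own, h, 0)
assert lhs_of_2(v, own, h2, 1) >= lhs_of_2(v, own, h, 1)
```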
Using (2) with respect to z, for any agent k with z′_k = z_k and for any z′-light resource b ∈ z′_k, we get

v_k ∏_{e ∈ z′_k ∖ {b}} f(h^{z′}_e) ≥ v_k ∏_{e ∈ z_k ∖ {b}} f(h^z_e) ≥ c(h^z_b + 1) / (1 − f(h^z_b + 1)) = c(h^{z′}_b + 1) / (1 − f(h^{z′}_b + 1)),

as required. Now let us consider the rest of the agents. Assume z′ is obtained by the one-step addition of a by agent i. In this case, i is the only agent with z′_i ≠ z_i. The required property for agent i follows directly from U_i(z′) > U_i(z). In the case of a two-step addition, let z′ = (z_{−{i,j}}, z_i ∪ {b}, (z_j ∖ {b}) ∪ {a}), where b is a z-heavy resource. For agent i, from U_i(z_{−i}, z_i ∪ {b}) > U_i(z) we get

(1 − ∏_{e∈z_i} f(h^z_e) · f(h^z_b + 1)) v_i − Σ_{e∈z_i} c(h^z_e) − c(h^z_b + 1) > (1 − ∏_{e∈z_i} f(h^z_e)) v_i − Σ_{e∈z_i} c(h^z_e)
⇒ v_i ∏_{e∈z_i} f(h^z_e) > c(h^z_b + 1) / (1 − f(h^z_b + 1)),  (3)

and note that since h^z_b ≥ h^z_e for all e ∈ M and, in particular, for all z′-light resources, then

c(h^z_b + 1) / (1 − f(h^z_b + 1)) ≥ c(h^{z′}_{e′} + 1) / (1 − f(h^{z′}_{e′} + 1)),  (4)

for any z′-light resource e′. Now, since h^{z′}_e ≥ h^z_e for all e ∈ M and b is z-heavy, then

v_i ∏_{e ∈ z′_i ∖ {e′}} f(h^{z′}_e) ≥ v_i ∏_{e ∈ z′_i ∖ {e′}} f(h^z_e) = v_i ∏_{e ∈ (z_i ∪ {b}) ∖ {e′}} f(h^z_e) ≥ v_i ∏_{e ∈ z_i} f(h^z_e),

for any z′-light resource e′. The above, coupled with (3) and (4), yields the required. For agent j we just use (2) with respect to z and the equality h^{z′}_a = h^z_b. For any z′-light resource e′,

v_j ∏_{e ∈ z′_j ∖ {e′}} f(h^{z′}_e) ≥ v_j ∏_{e ∈ z_j ∖ {e′}} f(h^z_e) ≥ c(h^z_{e′} + 1) / (1 − f(h^z_{e′} + 1)) = c(h^{z′}_{e′} + 1) / (1 − f(h^{z′}_{e′} + 1)).

Thus, since the number of resources is finite, there is a finite series of one- and two-step addition operations on x that leads to a strategy profile y ∈ Σ¹ with h^y > h^x, in contradiction to x ∈ Σ². This completes the proof.

4. DISCUSSION

In this paper, we introduce and investigate congestion
settings with unreliable resources, in which the probability of a resource's failure depends on the congestion experienced by this resource. We defined a class of congestion games with load-dependent failures (CGLFs), which generalizes the well-known class of congestion games, and studied the existence of pure strategy Nash equilibria and potential functions in this class. We showed that these games do not, in general, possess pure strategy equilibria. Nevertheless, if the resource cost functions are nondecreasing, then such equilibria are guaranteed to exist, despite the non-existence of a potential function.

The CGLF model can be modified to the case where the agents pay only for the non-faulty resources they selected. Both the model discussed in this paper and the modified one are reasonable. In the full version we will show that the modified model leads to similar results. In particular, we can show the existence of a pure strategy equilibrium for nondecreasing CGLFs also in the modified model.

In future research we plan to consider various extensions of CGLFs. In particular, we plan to consider CGLFs where the resources may have different costs and failure probabilities, as well as CGLFs in which the resource failure probabilities are mutually dependent. In addition, it is of interest to develop an efficient algorithm for the computation of a pure strategy Nash equilibrium, as well as to discuss the social (in)efficiency of the equilibria.

5. REFERENCES
[1] H. Ackermann, H. Röglin, and B. Vöcking. Pure Nash equilibria in player-specific and weighted congestion games. In WINE-06, 2006.
[2] G. Christodoulou and E. Koutsoupias. The price of anarchy of finite congestion games. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing (STOC-05), 2005.
[3] A. Fabrikant, C. Papadimitriou, and K. Talwar. The complexity of pure Nash equilibria. In STOC-04, pages 604–612, 2004.
[4] E. Koutsoupias and C.
Papadimitriou. Worst-case equilibria. In Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science, pages 404–413, 1999.
[5] K. Leyton-Brown and M. Tennenholtz. Local-effect games. In IJCAI-03, 2003.
[6] I. Milchtaich. Congestion games with player-specific payoff functions. Games and Economic Behavior, 13:111–124, 1996.
[7] D. Monderer. Solution-based congestion games. Advances in Mathematical Economics, 8:397–407, 2006.
[8] D. Monderer. Multipotential games. In IJCAI-07, 2007.
[9] D. Monderer and L. Shapley. Potential games. Games and Economic Behavior, 14:124–143, 1996.
[10] M. Penn, M. Polukarov, and M. Tennenholtz. Congestion games with failures. In Proceedings of the 6th ACM Conference on Electronic Commerce (EC-05), pages 259–268, 2005.
[11] R. Rosenthal. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, 2:65–67, 1973.
[12] T. Roughgarden and E. Tardos. How bad is selfish routing? Journal of the ACM, 49(2):236–259, 2002.

Congestion Games with Load-Dependent Failures: Identical Resources

ABSTRACT

We define a new class of games, congestion games with load-dependent failures (CGLFs), which generalizes the well-known class of congestion games by incorporating the issue of resource failures into congestion games. In a CGLF, agents share a common set of resources, where each resource has a cost and a probability of failure. Each agent chooses a subset of the resources for the execution of his task, in order to maximize his own utility. The utility of an agent is the difference between his benefit from successful task completion and the sum of the costs over the resources he uses. CGLFs possess two novel features. It is the first model to incorporate failures into congestion settings, which results in a strict generalization of congestion games. In addition, it is the first model to consider load-dependent failures in such a framework, where the
failure probability of each resource depends on the number of agents selecting this resource. Although, as we show, CGLFs do not admit a potential function, and in general do not have a pure strategy Nash equilibrium, our main theorem proves the existence of a pure strategy Nash equilibrium in every CGLF with identical resources and nondecreasing cost functions.

1. INTRODUCTION

We study the effects of resource failures in congestion settings. This study is motivated by a variety of situations in multi-agent systems with unreliable components, such as machines, computers, etc. We define a model for congestion games with load-dependent failures (CGLFs) which provides a simple and natural description of such situations. In this model, we are given a finite set of identical resources (service providers), where each element possesses a failure probability describing the probability of unsuccessful completion of its assigned tasks as a (nondecreasing) function of its congestion. There is a fixed number of agents, each having a task which can be carried out by any of the resources. For reliability reasons, each agent may decide to assign his task, simultaneously, to a number of resources. Thus, the congestion on the resources is not known in advance, but is strategy-dependent. Each resource is associated with a cost, which is a (nonnegative) function of the congestion experienced by this resource. The objective of each agent is to maximize his own utility, which is the difference between his benefit from successful task completion and the sum of the costs over the set of resources he uses. The benefits of the agents from successful completion of their tasks are allowed to vary across the agents. The resource cost function describes the cost suffered by an agent for selecting that resource, as a function of the number of agents who have selected it. Thus, it is natural to assume that these functions are nonnegative. In addition, in many real-life applications
of our model the resource cost functions have a special structure. In particular, they can monotonically increase or decrease with the number of users, depending on the context. The former case is motivated by situations where high congestion on a resource causes longer delays in the execution of its assigned tasks and, as a result, the cost of utilizing this resource might be higher. A typical example of such a situation is as follows. Assume we need to deliver an important package. Since there is no guarantee that a courier will reach the destination in time, we might send several couriers to deliver the same package. The time required by each courier to deliver the package increases with the congestion on his way. In addition, the payment to a courier is proportional to the time he spends in delivering the package. Thus, the payment to the courier increases when the congestion increases. The latter case (decreasing cost functions) describes situations where a group of agents using a particular resource have an opportunity to share its cost among the group's members, or where the cost of
using a particular resource depends on the number of agents using it, and his total utility (cost) is the sum of the utilities (costs) obtained from the resources he uses.\nAn important property of these games is the existence of pure strategy Nash equilibria.\nMonderer and Shapley [9] introduced the notions of potential function and potential game and proved that the existence of a potential function implies the existence of a pure strategy Nash equilibrium.\nThey observed that Rosenthal [11] proved his theorem on congestion games by constructing a potential function (hence, every congestion game is a potential game).\nMoreover, they showed that every finite potential game is isomorphic to a congestion game; hence, the classes of finite potential games and congestion games coincide.\nCongestion games have been extensively studied and generalized.\nIn particular, Leyton-Brown and Tennenholtz [5] extended the class of congestion games to the class of localeffect games.\nIn a local-effect game, each agent's payoff is effected not only by the number of agents who have chosen the same resources as he has chosen, but also by the number of agents who have chosen neighboring resources (in a given graph structure).\nMonderer [8] dealt with another type of generalization of congestion games, in which the resource cost functions are player-specific (PS-congestion games).\nHe defined PS-congestion games of type q (q-congestion games), where q is a positive number, and showed that every game in strategic form is a q-congestion game for some q. 
Player-specific resource cost functions were discussed for the first time by Milchtaich [6]. He showed that simple and strategy-symmetric PS-congestion games are not potential games, but always possess a pure strategy Nash equilibrium. PS-congestion games were generalized to weighted congestion games [6] (or, ID-congestion games [7]), in which the resource cost functions are not only player-specific, but also depend on the identity of the users of the resource. Ackermann et al. [1] showed that weighted congestion games admit pure strategy Nash equilibria if the strategy space of each player consists of the bases of a matroid on the set of resources. Much of the work on congestion games has been inspired by the fact that every such game has a pure strategy Nash equilibrium. In particular, Fabrikant et al. [3] studied the computational complexity of finding pure strategy Nash equilibria in congestion games. Intensive study has also been devoted to quantifying the inefficiency of equilibria in congestion games. Koutsoupias and Papadimitriou [4] proposed the worst-case ratio of the social welfare achieved by a Nash equilibrium and by a socially optimal strategy profile (dubbed the price of anarchy) as a measure of the performance degradation caused by lack of coordination. Christodoulou and Koutsoupias [2] considered the price of anarchy of pure equilibria in congestion games with linear cost functions. Roughgarden and Tardos [12] used this approach to study the cost of selfish routing in networks with a continuum of users. However, the above settings do not take into consideration the possibility that resources may fail to execute their assigned tasks. In the computer science context of congestion games, where the alternatives of concern are machines, computers, communication lines, etc., which are obviously prone to failures, this issue should not be ignored. Penn, Polukarov and Tennenholtz were the first to incorporate the issue of failures into congestion
settings [10]. They introduced a class of congestion games with failures (CGFs) and proved that these games, while not being isomorphic to congestion games, always possess Nash equilibria in pure strategies. The CGF model significantly differs from ours. In a CGF, the authors considered the delay associated with successful task completion, where the delay for an agent is the minimum of the delays of his successful attempts, and the aim of each agent is to minimize his expected delay. In contrast with the CGF model, in our model we consider the total cost of the utilized resources, where each agent wishes to maximize the difference between his benefit from successful task completion and the sum of his costs over the resources he uses. The above differences imply that CGFs and CGLFs possess different properties. In particular, if in our model the resource failure probabilities were constant and known in advance, then a potential function would exist. This, however, does not hold for CGFs; in CGFs, the failure probabilities are constant but there is no potential function. Furthermore, the procedures proposed by the authors in [10] for the construction of a pure strategy Nash equilibrium are not valid in our model, even in the simple, agent-symmetric case, where all agents have the same benefit from successful completion of their tasks. Our work provides the first model of congestion settings with resource failures which considers the sum of congestion-dependent costs over utilized resources, and therefore does not extend the CGF model, but rather generalizes the classic model of congestion games. Moreover, it is the first model to consider load-dependent failures in the above context.

Organization. The rest of the paper is organized as follows. In Section 2 we define our model. In Section 3 we present our results. In Section 3.1 we show that CGLFs, in general, do not have pure strategy Nash equilibria. In Section 3.2 we focus on CGLFs with nondecreasing cost functions
(nondecreasing CGLFs). We show that these games do not admit a potential function. However, in our main result we show the existence of pure strategy Nash equilibria in nondecreasing CGLFs. Section 4 is devoted to a short discussion. Many of the proofs are omitted from this conference version of the paper, and will appear in the full version.

2. THE MODEL
3. PURE STRATEGY NASH EQUILIBRIA IN CGLFS
3.1 Decreasing Cost Functions
3.2 Nondecreasing Cost Functions
3.2.1 The (Non-)Existence of a Potential Function
Constant Failure Probabilities
3.2.2 The Existence of a Pure Strategy Nash Equilibrium
function (hence, every congestion game is a potential game).\nMoreover, they showed that every finite potential game is isomorphic to a congestion game; hence, the classes of finite potential games and congestion games coincide.\nCongestion games have been extensively studied and generalized.\nIn particular, Leyton-Brown and Tennenholtz [5] extended the class of congestion games to the class of localeffect games.\nIn a local-effect game, each agent's payoff is effected not only by the number of agents who have chosen the same resources as he has chosen, but also by the number of agents who have chosen neighboring resources (in a given graph structure).\nMonderer [8] dealt with another type of generalization of congestion games, in which the resource cost functions are player-specific (PS-congestion games).\nHe defined PS-congestion games of type q (q-congestion games), where q is a positive number, and showed that every game in strategic form is a q-congestion game for some q. Playerspecific resource cost functions were discussed for the first time by Milchtaich [6].\nHe showed that simple and strategysymmetric PS-congestion games are not potential games, but always possess a pure strategy Nash equilibrium.\nPScongestion games were generalized to weighted congestion games [6] (or, ID-congestion games [7]), in which the resource cost functions are not only player-specific, but also depend on the identity of the users of the resource.\nAckermann et al. [1] showed that weighted congestion games admit pure strategy Nash equilibria if the strategy space of each player consists of the bases of a matroid on the set of resources.\nMuch of the work on congestion games has been inspired by the fact that every such game has a pure strategy Nash equilibrium.\nIn particular, Fabrikant et al. 
[3] studied the computational complexity of finding pure strategy Nash equilibria in congestion games.\nIntensive study has also been devoted to quantify the inefficiency of equilibria in congestion games.\nKoutsoupias and Papadimitriou [4] proposed the worst-case ratio of the social welfare achieved by a Nash equilibrium and by a socially optimal strategy profile (dubbed the price of anarchy) as a measure of the performance degradation caused by lack of coordination.\nChristodoulou and Koutsoupias [2] considered the price of anarchy of pure equilibria in congestion games with linear cost functions.\nRoughgarden and Tardos [12] used this approach to study the cost of selfish routing in networks with a continuum of users.\nHowever, the above settings do not take into consideration the possibility that resources may fail to execute their assigned tasks.\nIn the computer science context of congestion games, where the alternatives of concern are machines, computers, communication lines etc., which are obviously prone to failures, this issue should not be ignored.\nPenn, Polukarov and Tennenholtz were the first to incorporate the issue of failures into congestion settings [10].\nThey introduced a class of congestion games with failures (CGFs) and proved that these games, while not being isomorphic to congestion games, always possess Nash equilibria in pure strategies.\nThe CGF-model significantly differs from ours.\nIn a CGF, the authors considered the delay associated with successful task completion, where the delay for an agent is the minimum of the delays of his successful attempts and the aim of each agent is to minimize his expected delay.\nIn contrast with the CGF-model, in our model we consider the total cost of the utilized resources, where each agent wishes to maximize the difference between his benefit from a successful task completion and the sum of his costs over the resources he uses.\nThe above differences imply that CGFs and CGLFs possess different 
properties. In particular, if in our model the resource failure probabilities were constant and known in advance, then a potential function would exist. This, however, does not hold for CGFs: in CGFs the failure probabilities are constant, but there is no potential function. Furthermore, the procedures proposed by the authors in [10] for the construction of a pure strategy Nash equilibrium are not valid in our model, even in the simple, agent-symmetric case where all agents have the same benefit from successful completion of their tasks.

Our work provides the first model of congestion settings with resource failures which considers the sum of congestion-dependent costs over utilized resources; it therefore does not extend the CGF model, but rather generalizes the classic model of congestion games. Moreover, it is the first model to consider load-dependent failures in the above context.

Organization. The rest of the paper is organized as follows. In Section 2 we define our model. In Section 3 we present our results. In Section 3.1 we show that CGLFs, in general, do not have pure strategy Nash equilibria. In Section 3.2 we focus on CGLFs with nondecreasing cost functions (nondecreasing CGLFs). We show that these games do not admit a potential function. However, in our main result we show the existence of pure strategy Nash equilibria in nondecreasing CGLFs. Section 4 is devoted to a short discussion. Many of the proofs are omitted from this conference version of the paper, and will appear in the full version.

2. THE MODEL

The scenarios considered in this work consist of a finite set of agents, where each agent has a task that can be carried out by any element of a set of identical resources (service providers). The agents simultaneously choose a subset of the resources in order to perform their tasks, and their aim is to maximize their own expected payoff, as described in the sequel.

Let N be a set of n agents (n ∈ ℕ), and let M be a set of m resources (m ∈ ℕ). Agent i ∈ N chooses a strategy σ_i ∈ Σ_i, which is a (potentially empty) subset of the resources. That is, Σ_i is the power set of the set of resources: Σ_i = P(M). Given a subset S ⊆ N of the agents, the set of strategy combinations of the members of S is denoted by Σ_S = ×_{i∈S} Σ_i, and the set of strategy combinations of the complement subset of agents is denoted by Σ_{-S} (Σ_{-S} = Σ_{N\S} = ×_{i∈N\S} Σ_i). The set of pure strategy profiles of all the agents is denoted by Σ (Σ = Σ_N).

Each resource is associated with a cost, c(·), and a failure probability, f(·), each of which depends on the number of agents who use this resource. We assume that the failure probabilities of the resources are independent. Let σ = (σ_1, ..., σ_n) ∈ Σ be a pure strategy profile. The (m-dimensional) congestion vector that corresponds to σ is h^σ = (h^σ_e)_{e∈M}, where h^σ_e = |{i ∈ N : e ∈ σ_i}|. The failure probability of a resource e is a monotone nondecreasing function f : {1, ..., n} → [0, 1) of the congestion experienced by e. The cost of utilizing resource e is a (nonnegative) function c : {1, ..., n} → ℝ of the congestion experienced by e.

The outcome for agent i ∈ N is denoted by x_i ∈ {S, F}, where S and F, respectively, indicate whether the task execution succeeded or failed. We say that the execution of agent i's task succeeds if the task of agent i is successfully completed by at least one of the resources chosen by him. The benefit of agent i from his outcome x_i is denoted by V_i(x_i), where V_i(S) = v_i, a given (nonnegative) value, and V_i(F) = 0. The utility of agent i from strategy profile σ and his outcome x_i, u_i(σ, x_i), is the difference between his benefit from the outcome (V_i(x_i)) and the sum of the costs of the resources he has used. His expected utility is therefore

U_i(σ) = (1 − ∏_{e∈σ_i} f(h^σ_e)) v_i − Σ_{e∈σ_i} c(h^σ_e),

where 1 − ∏_{e∈σ_i} f(h^σ_e) denotes the probability of successful completion of agent i's task. We use the convention that ∏_{e∈∅} f(h^σ_e) = 1. Hence, if agent i chooses an empty set σ_i = ∅ (does not assign his task to any resource), then his expected utility, U_i(∅, σ_{-i}), equals zero.

3. PURE STRATEGY NASH EQUILIBRIA IN CGLFS

In this section we present our results on CGLFs. We investigate the property of the (non-)existence of pure strategy Nash equilibria in these games. We show that this class of games does not, in general, possess pure strategy equilibria. Nevertheless, if the resource cost functions are nondecreasing, then such equilibria are guaranteed to exist, despite the non-existence of a potential function.

3.1 Decreasing Cost Functions

We start by showing that the class of CGLFs and, in particular, the subclass of CGLFs with decreasing cost functions, does not, in general, possess Nash equilibria in pure strategies. Consider a CGLF with two agents (N = {1, 2}) and two resources (M = {e1, e2}). The cost function of each resource is given by c(x) = 1/x, where x ∈ {1, 2}, and the failure probabilities are f(1) = 0.01 and f(2) = 0.26. The benefits of the agents from successful task completion are v1 = 1.1 and v2 = 4. Below we present the payoff matrix of the game.

Table 1: Example for non-existence of pure strategy Nash equilibria in CGLFs.

                 σ2 = ∅       σ2 = {e1}       σ2 = {e2}       σ2 = {e1, e2}
σ1 = ∅           0, 0         0, 2.96         0, 2.96         0, 1.9996
σ1 = {e1}        0.089, 0     0.314, 2.46     0.089, 2.96     0.314, 2.4896
σ1 = {e2}        0.089, 0     0.089, 2.96     0.314, 2.46     0.314, 2.4896
σ1 = {e1, e2}    −0.9001, 0   −0.4029, 2.46   −0.4029, 2.46   0.0256, 2.7296

It can be easily seen that for every pure strategy profile σ in this game there exist an agent i and a strategy σ'_i ∈ Σ_i such that U_i(σ_{-i}, σ'_i) > U_i(σ). That is, no pure strategy profile in this game is in equilibrium. However, if the cost functions in a given CGLF do not decrease in the number of users, then, as we show in the main result of this paper, a pure strategy Nash equilibrium is guaranteed to exist.

3.2 Nondecreasing Cost Functions

This section focuses on the subclass of CGLFs with nondecreasing cost functions (henceforth, nondecreasing CGLFs). We show that nondecreasing CGLFs do not, in general, admit a potential function. Therefore, these games are not congestion games. Nevertheless, we prove that all such games possess pure strategy Nash equilibria.

Table 2: Example for non-existence of potentials in CGLFs.

3.2.1 The (Non-)Existence of a Potential Function

Recall that Monderer and Shapley [9] introduced the notions of potential function and potential game, where a potential game is defined to be a game that possesses a potential function. A potential function is a real-valued function over the set of pure strategy profiles with the property that the gain (or loss) of an agent shifting to another strategy, while the other agents' strategies are kept unchanged, equals the corresponding increment of the potential function. The authors [9] showed that the classes of finite potential games and congestion games coincide. Here we show that the class of CGLFs and, in particular, the subclass of nondecreasing CGLFs, does not admit a potential function, and therefore is not included in the class of congestion games. However, for the special case of constant failure probabilities, a potential function is guaranteed to exist.

To prove these statements we use the following characterization of potential games [9]. A path in Σ is a sequence τ = (σ^0 → σ^1 → ···) such that for every k ≥ 1 there exists a unique agent, say agent i, such that σ^k = (σ^{k−1}_{−i}, σ'_i) for some σ'_i ≠ σ^{k−1}_i in Σ_i. A finite path τ = (σ^0 → σ^1 → ··· → σ^K) is closed if σ^0 = σ^K. It is a simple closed path if, in addition, σ^l ≠ σ^k for every 0 ≤ l ≠ k ≤ K − 1. The length of a simple closed path is defined to be the number of distinct points in it; that is, the length of τ = (σ^0 → σ^1 → ··· → σ^K) is K.
THEOREM 1 [9]. Let G be a game in strategic form with a vector U = (U_1, ..., U_n) of utility functions. For a finite path τ = (σ^0 → σ^1 → ··· → σ^K), let U(τ) = Σ_{k=1}^{K} [U_{i_k}(σ^k) − U_{i_k}(σ^{k−1})], where i_k is the unique deviator at step k. Then, G is a potential game if and only if U(τ) = 0 for every simple closed path τ of length 4.

Based on Theorem 1, we present the following counterexample that demonstrates the non-existence of a potential function in CGLFs. We consider the following agent-symmetric game G in which two agents (N = {1, 2}) wish to assign a task to two resources (M = {e1, e2}). The benefit from a successful task completion of each agent equals v, and the failure probability function strictly increases with the congestion. Consider a simple closed path of length 4 in this game; a direct computation shows that U(τ) ≠ 0 for a suitable such path. Thus, by Theorem 1, nondecreasing CGLFs do not admit potentials. As a result, they are not congestion games. However, as presented in the next section, the special case in which the failure probabilities are constant always possesses a potential function.

Constant Failure Probabilities

We show below that CGLFs with constant failure probabilities always possess a potential function. This follows from the fact that the expected benefit (revenue) of each agent in this case does not depend on the choices of the other agents. In addition, for each agent, the sum of the costs over his chosen subset of resources equals the payoff of an agent choosing the same strategy in the corresponding congestion game. Assume we are given a game G with constant failure probabilities. Let τ = (α → β → γ → δ → α) be an arbitrary simple closed path of length 4. Let i and j denote the active agents (deviators) in τ, and let z ∈ Σ_{−{i,j}} be a fixed strategy profile of the other agents. Let α = (x_i, x_j, z), β = (y_i, x_j, z), γ = (y_i, y_j, z), δ = (x_i, y_j, z), where x_i, y_i ∈ Σ_i and x_j, y_j ∈ Σ_j. Then U(τ) decomposes into expected-benefit terms and cost terms of the form c(h^{(x_i,x_j,z)}_e). Notice that the benefit terms v_i − ··· − v_j sum to 0, as a telescoping series. The remaining sum of cost terms equals 0 by applying Theorem 1 to congestion games, which are known to possess a potential function. Thus, by Theorem 1, G is a potential game. We note that the above result also holds for the more general setting with non-identical resources (having different failure probabilities and cost functions) and general cost functions (not necessarily monotone and/or nonnegative).

3.2.2 The Existence of a Pure Strategy Nash Equilibrium

In the previous section we have shown that CGLFs and, in particular, nondecreasing CGLFs, do not admit a potential function, but this fact, in general, does not contradict the existence of an equilibrium in pure strategies. In this section we present and prove the main result of this paper (Theorem 2), which shows the existence of pure strategy Nash equilibria in nondecreasing CGLFs. The proof of Theorem 2 is based on Lemmas 4, 7 and 8, which are presented in the sequel. We start with some definitions and observations that are needed for their proofs. In particular, we present the notions of A-, D- and S-stability and show that a strategy profile is in equilibrium if and only if it is A-, D- and S-stable. Furthermore, we prove the existence of such a profile in any given nondecreasing CGLF.

Clearly, if agent i deviates from strategy σ_i to strategy σ'_i by applying a single A-, D- or S-move, then max{|σ_i \ σ'_i|, |σ'_i \ σ_i|} = 1, and vice versa: if max{|σ_i \ σ'_i|, |σ'_i \ σ_i|} = 1, then σ'_i is obtained from σ_i by applying exactly one such move. For simplicity of exposition, for any pair of sets A and B, let μ(A, B) = max{|A \ B|, |B \ A|}. The following lemma implies that any strategy profile in which no agent wishes unilaterally to apply a single A-, D- or S-move is a Nash equilibrium. More precisely, we show that if there exists an agent who benefits from a unilateral deviation from a given strategy profile, then there exists a single A-, D- or S-move which is profitable for him as well.

LEMMA 4. Given a nondecreasing CGLF, let σ ∈ Σ be a strategy profile which is not in equilibrium, and let i ∈ N be such that there exists x_i ∈ Σ_i for which U_i(σ_{−i}, x_i) > U_i(σ). Then, there exists y_i ∈ Σ_i such that U_i(σ_{−i}, y_i) > U_i(σ) and μ(y_i, σ_i) = 1.

Therefore, to prove the existence of a pure strategy Nash equilibrium, it suffices to look for a strategy profile for which no agent wishes to unilaterally apply an A-, D- or S-move. Based on the above observation, we define A-, D- and S-stability as follows. The set of all DS-stable strategy profiles is denoted by Σ⁰. Obviously, the profile (∅, ..., ∅) is DS-stable, so Σ⁰ is not empty. Our goal is to find a DS-stable profile for which no profitable A-move exists, implying that this profile is in equilibrium. To describe how we achieve this, we define the notions of light (heavy) resources and (nearly-)even strategy profiles, which play a central role in the proof of our main result. Obviously, every even strategy profile is nearly-even. In addition, in a nearly-even strategy profile, all heavy resources (if any exist) have the same congestion. We also observe that the profile (∅, ..., ∅) is even (and DS-stable), so the subset of even, DS-stable strategy profiles is not empty.

Based on the above observations, we define two types of A-moves that are used in the sequel. Suppose σ ∈ Σ⁰ is a nearly-even DS-stable strategy profile. For each agent i ∈ N, let e_i ∈ arg min_{e∈M\σ_i} h^σ_e. That is, e_i is a lightest resource not chosen previously by i.
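The single-move property of Lemma 4 is easy to check exhaustively on a small instance. The sketch below (our own construction; the particular f, c and benefit values are illustrative assumptions) classifies a deviation via μ(A, B) and verifies, over all pure profiles of a small nondecreasing CGLF, that whenever an agent has any profitable deviation he also has a profitable deviation with μ(y_i, σ_i) = 1, i.e. a single A-, D- or S-move.

```python
# Sketch: brute-force check of the single-move property (Lemma 4) on a
# small nondecreasing CGLF.  The parameters f, c, V are our own
# illustrative assumptions.
from itertools import combinations, product

M = (0, 1, 2)                        # three identical resources
V = (1.5, 2.0, 4.0)                  # benefits v_i
f = lambda k: min(0.1 * k, 0.9)      # nondecreasing failure probability
c = lambda k: 0.4 * k                # nondecreasing cost

STRATS = [frozenset(s) for r in range(len(M) + 1)
          for s in combinations(M, r)]

def utility(i, profile):
    load = {e: sum(1 for s in profile if e in s) for e in M}
    fail = 1.0
    for e in profile[i]:
        fail *= f(load[e])
    return (1.0 - fail) * V[i] - sum(c(load[e]) for e in profile[i])

def mu(a, b):                        # mu(A, B) = max(|A \ B|, |B \ A|)
    return max(len(a - b), len(b - a))

checked = 0
for profile in product(STRATS, repeat=len(V)):
    for i in range(len(V)):
        base = utility(i, profile)
        gains = {s: utility(i, profile[:i] + (s,) + profile[i + 1:]) - base
                 for s in STRATS}
        if any(g > 1e-9 for g in gains.values()):
            checked += 1
            # Lemma 4: some profitable deviation changes sigma_i by
            # exactly one resource (a single A-, D- or S-move).
            assert any(g > 1e-12 for s, g in gains.items()
                       if mu(s, profile[i]) == 1)
```

The loop visits all 8³ profiles; the assertion never fires, consistent with the lemma.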
Then, if there exists any profitable A-move for agent i, the A-move with e_i is profitable for i as well. This is because if agent i wishes to unilaterally add a resource, say a ∈ M \ σ_i, then adding e_i, whose congestion does not exceed that of a, is at least as profitable, since the failure probabilities and the costs are nondecreasing. If no agent wishes to change his strategy in this manner, i.e., U_i(σ) ≥ U_i(σ_{−i}, σ_i ∪ {e_i}) for all i ∈ N, then by the above U_i(σ) ≥ U_i(σ_{−i}, σ_i ∪ {a}) for all i ∈ N and all a ∈ M \ σ_i. Hence, σ is A-stable and, by Lemma 4, σ is a Nash equilibrium strategy profile. Otherwise, let N(σ) denote the subset of all agents for which there exists e_i such that a unilateral addition of e_i is profitable. Let a ∈ arg min_{e_i : i∈N(σ)} h^σ_{e_i}, and let i ∈ N(σ) be the agent for which e_i = a. If a is σ-light, then let σ' = (σ_{−i}, σ_i ∪ {a}). In this case we say that σ' is obtained from σ by a one-step addition of resource a, and a is called an added resource. If a is σ-heavy, then there exist a σ-light resource b and an agent j such that a ∈ σ_j and b ∉ σ_j. Then let σ' = (σ_{−{i,j}}, σ_i ∪ {a}, (σ_j \ {a}) ∪ {b}). In this case we say that σ' is obtained from σ by a two-step addition of resource b, and b is called an added resource. We notice that, in both cases, the congestion of each resource in σ' is the same as in σ, except for the added resource, whose congestion in σ' has increased by 1. Thus, since the added resource is σ-light and σ is nearly-even, σ' is nearly-even. Then, the following lemma implies the S-stability of σ'.

Coupled with Lemma 7, the following lemma shows that if σ is a nearly-even and DS-stable strategy profile, and σ' is obtained from σ by a one- or two-step addition of resource a, then the only potential cause for non-DS-stability of σ' is the existence of an agent k ∈ N with σ'_k = σ_k who wishes to drop the added resource a.
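The two addition operations can be stated operationally. The following sketch (ours; the concrete profile and resource indices are illustrative) implements one- and two-step additions on a strategy profile and checks the invariant noted above: every congestion stays the same except that of the added resource, which grows by one.

```python
# Sketch: one- and two-step addition operations on a strategy profile
# (profiles are tuples of frozensets; the concrete profile is ours).

def congestion(profile, M):
    return {e: sum(1 for s in profile if e in s) for e in M}

def one_step_addition(profile, i, a):
    """Agent i adds resource a: sigma' = (sigma_-i, sigma_i + {a})."""
    return profile[:i] + (profile[i] | {a},) + profile[i + 1:]

def two_step_addition(profile, i, j, a, b):
    """Agent i adds the heavy resource a, and agent j, who uses a but
    not b, swaps a for the light resource b; only b's load grows."""
    assert a in profile[j] and b not in profile[j]
    new = list(profile)
    new[i] = profile[i] | {a}
    new[j] = (profile[j] - {a}) | {b}
    return tuple(new)

M = (0, 1, 2)
sigma = (frozenset({0}), frozenset({0, 1}), frozenset({1}))
before = congestion(sigma, M)                  # {0: 2, 1: 2, 2: 0}

# One-step: agent 2 adds the light resource 2.
after1 = congestion(one_step_addition(sigma, 2, 2), M)
delta_one = {e: after1[e] - before[e] for e in M}

# Two-step: agent 0 adds heavy resource 1; agent 1 swaps 1 -> 2.
after2 = congestion(two_step_addition(sigma, 0, 1, 1, 2), M)
delta_two = {e: after2[e] - before[e] for e in M}

# In both cases only the added (light) resource 2 gains one unit of load.
```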
LEMMA 8. Let σ be a nearly-even DS-stable strategy profile of a given nondecreasing CGLF, and let σ' be obtained from σ by a one- or two-step addition of resource a. Then, there are no profitable D-moves for any agent i ∈ N with σ'_i ≠ σ_i. For an agent i ∈ N with σ'_i = σ_i, the only possible profitable D-move (if one exists) is to drop the added resource a.

We are now ready to prove our main result, Theorem 2. Let us briefly describe the idea behind the proof. By Lemma 4, it suffices to prove the existence of a strategy profile which is A-, D- and S-stable. We start with the set of even, DS-stable strategy profiles, which is obviously not empty. In this set, we consider the subset of strategy profiles with maximum congestion and maximum sum of the agents' utilities. Assuming on the contrary that every DS-stable profile admits a profitable A-move, we show the existence of a strategy profile x in the above subset such that a (one-step) addition of some resource a to x results in a DS-stable strategy. Then, by a finite series of one- or two-step addition operations, we obtain an even, DS-stable strategy profile with strictly higher congestion on the resources, contradicting the choice of x. The full proof is presented below.

Proof of Theorem 2: Let Σ¹ ⊆ Σ⁰ be the subset of all even, DS-stable strategy profiles. Observe that since (∅, ..., ∅) is an even, DS-stable strategy profile, Σ¹ is not empty and min_{σ∈Σ⁰} |{e ∈ M : e is σ-heavy}| = 0. Then, Σ¹ could also be defined as Σ¹ = {σ ∈ Σ⁰ : |{e ∈ M : e is σ-heavy}| = 0}, with h^σ being the common congestion. Now, let Σ² ⊆ Σ¹ be the subset of Σ¹ consisting of all those profiles with maximum congestion on the resources; that is, Σ² = arg max_{σ∈Σ¹} h^σ. Let U_N(σ) = Σ_{i∈N} U_i(σ) denote the group utility of the agents, and let Σ³ ⊆ Σ² be the subset of all profiles in Σ² with maximum group utility; that is, Σ³ = arg max_{σ∈Σ²} U_N(σ).

Consider first the simple case in which max_{σ∈Σ¹} h^σ = 0. Obviously, in this case, Σ¹ = Σ² = Σ³ = {x = (∅, ..., ∅)}. We show below that by performing a finite series of (one-step) addition operations on x, we obtain an even, DS-stable strategy profile y with higher congestion, that is, with h^y > h^x = 0, in contradiction to x ∈ Σ². Let z ∈ Σ⁰ be a nearly-even (not necessarily even) DS-stable profile such that min_{e∈M} h^z_e = 0, and note that the profile x satisfies these conditions. Let N(z) be the subset of agents for which a profitable A-move exists, and let i ∈ N(z). Obviously, there exists a z-light resource a such that U_i(z_{−i}, z_i ∪ {a}) > U_i(z) (otherwise, arg min_{e∈M} h^z_e ⊆ z_i, in contradiction to min_{e∈M} h^z_e = 0). Consider the strategy profile z' = (z_{−i}, z_i ∪ {a}), which is obtained from z by a (one-step) addition of resource a by agent i. Since z is nearly-even and a is z-light, we can easily see that z' is nearly-even. Then, Lemma 7 implies that z' is S-stable. Since i is the only agent using resource a in z', by Lemma 8, no profitable D-moves are available. Thus, z' is a DS-stable strategy profile. Therefore, since the number of resources is finite, there is a finite series of one-step addition operations on x = (∅, ..., ∅) that leads to a strategy profile y ∈ Σ¹ with h^y = 1 > 0 = h^x, in contradiction to x ∈ Σ².

We turn now to consider the other case, where max_{σ∈Σ¹} h^σ ≥ 1. In this case we select from Σ³ a strategy profile x, as described below, and use it to contradict our contrary assumption. Specifically, we show that there exists x ∈ Σ³ that satisfies, for all j ∈ N, an inequality denoted by (1). Let x' be a strategy profile which is obtained from x by a (one-step) addition of some resource a ∈ M by some agent i ∈ N(x) (note that x' is nearly-even). Then, (1) is derived from, and essentially equivalent to, the inequality U_j(x') > U_j(x'_{−j}, x'_j \ {a}) for all a ∈ x_j. That is, after performing an A-move with a by i, there is no profitable D-move with a. Then, by Lemmas 7 and 8, x' is DS-stable. Following the same lines as above, we construct a procedure that initializes at x and achieves a strategy profile y ∈ Σ¹ with h^y > h^x, in contradiction to x ∈ Σ².

Now, let us confirm the existence of x ∈ Σ³ that satisfies (1). Let x ∈ Σ³ and let M(x) be the subset of all resources for which there exists a profitable (one-step) addition. First, we show that (1) holds for all j ∈ N such that x_j ∩ M(x) ≠ ∅, that is, for all those agents with one of their resources being desired by another agent. Let a ∈ M(x), and let x' be the strategy profile obtained from x by the (one-step) addition of a by agent i. Assume on the contrary that there is an agent j with a ∈ x_j for which the above inequality fails, and let x'' = (x'_{−j}, x'_j \ {a}). Below we demonstrate that x'' is a DS-stable strategy profile and, since x'' and x correspond to the same congestion vector, we conclude that x'' lies in Σ². In addition, we show that U_N(x'') > U_N(x), contradicting the fact that x ∈ Σ³. To show that x'' ∈ Σ⁰, we note that x'' is an even strategy profile, and thus no S-moves may be performed for x''. In addition, since h^{x''} = h^x, the DS-stability of x implies that there are no profitable D-moves for any agent k ≠ i, j. It remains to show that there are no profitable D-moves for agents i and j as well. For agent i, this follows from the DS-stability of x and the profitability of his addition of a. Thus, there are no profitable D-moves for agent i.
By the DS-stability of x, for agent j and for all b ∈ x_j, we have U_j(x) ≥ U_j(x_{−j}, x_j \ {b}), which implies v_j f(h^x)^{|x_j|−1} ≥ c(h^x). Therefore, x'' lies in Σ² and satisfies U_N(x'') > U_N(x), in contradiction to x ∈ Σ³. Hence, if x ∈ Σ³, then (1) holds for all j ∈ N such that x_j ∩ M(x) ≠ ∅.

Now let us see that there exists x ∈ Σ³ such that (1) holds for all the agents. For that, choose an agent i ∈ arg min_{k∈N} v_k f(h^x)^{|x_k|}. If there exists a ∈ x_i ∩ M(x), then i satisfies (1), and by the choice of agent i this yields the correctness of (1) for any agent k ∈ N. Otherwise, if no resource in x_i lies in M(x), then let a ∈ x_i and a' ∈ M(x). Since a ∈ x_i, a' ∉ x_i, and h^x_a = h^x_{a'}, there exists an agent j such that a' ∈ x_j and a ∉ x_j. One can easily check that the strategy profile x' = (x_{−{i,j}}, (x_i \ {a}) ∪ {a'}, (x_j \ {a'}) ∪ {a}) lies in Σ³. Thus, x' satisfies (1) for agent i, and therefore for any agent k ∈ N.

Now, let x ∈ Σ³ satisfy (1). We show below that by performing a finite series of one- and two-step addition operations on x, we can achieve a strategy profile y that lies in Σ¹ such that h^y > h^x, in contradiction to x ∈ Σ². Let z ∈ Σ⁰ be a nearly-even (not necessarily even) DS-stable strategy profile such that inequality (2) holds for all i ∈ N and for all z-light resources b ∈ z_i. We note that for the profile x ∈ Σ³ ⊆ Σ¹, with all resources being x-light, conditions (2) and (1) are equivalent. Let z' be obtained from z by a one- or two-step addition of a z-light resource a.
Obviously, z' is nearly-even. In addition, h^{z'}_e ≥ h^z_e for all e ∈ M, and min_{e∈M} h^{z'}_e ≥ min_{e∈M} h^z_e. To complete the proof we need to show that z' is DS-stable and, in addition, that if min_{e∈M} h^{z'}_e = min_{e∈M} h^z_e, then z' has property (2). The DS-stability of z' follows directly from Lemmas 7 and 8, and from (2) with respect to z. It remains to prove property (2) for z' when min_{e∈M} h^{z'}_e = min_{e∈M} h^z_e. For each agent whose strategy is unchanged, (2) for z' follows from (2) with respect to z, as required. Now let us consider the rest of the agents. Assume z' is obtained by the one-step addition of a by agent i. In this case, i is the only agent with z'_i ≠ z_i, and the required property for agent i follows directly from U_i(z') > U_i(z). In the case of a two-step addition, let z' = (z_{−{i,j}}, z_i ∪ {b}, (z_j \ {b}) ∪ {a}), where b is a z-heavy resource. For agent i, from U_i(z_{−i}, z_i ∪ {b}) > U_i(z) we obtain inequality (3); and since h^z_b ≥ h^{z'}_{e'} for all e' ∈ M and, in particular, for all z'-light resources e', we obtain inequality (4) for any z'-light resource e'. The above, coupled with (3) and (4), yields the required. For agent j, we just use (2) with respect to z and the equality h^z_b = h^{z'}_b. Thus, since the number of resources is finite, there is a finite series of one- and two-step addition operations on x that leads to a strategy profile y ∈ Σ¹ with h^y > h^x, in contradiction to x ∈ Σ². This completes the proof. □
query expansion based on previous queries and immediate result reranking based on clickthrough information.\nExperiments on web search show that our search agent can improve search accuracy over the popular Google search engine.\nCategories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Retrieval models, Relevance feedback, Search Process General Terms Algorithms 1.\nINTRODUCTION Although many information retrieval systems (e.g., web search engines and digital library systems) have been successfully deployed, the current retrieval systems are far from optimal.\nA major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users [17].\nThis inherent non-optimality is seen clearly in the following two cases: (1) Different users may use exactly the same query (e.g., Java) to search for different information (e.g., the Java island in Indonesia or the Java programming language), but existing IR systems return the same results for these users.\nWithout considering the actual user, it is impossible to know which sense Java refers to in a query.\n(2) A user``s information needs may change over time.\nThe same user may use Java sometimes to mean the Java island in Indonesia and some other times to mean the programming language.\nWithout recognizing the search context, it would be again impossible to recognize the correct sense.\nIn order to optimize retrieval accuracy, we clearly need to model the user appropriately and personalize search according to each individual user.\nThe major goal of user modeling for information retrieval is to accurately model a user``s information need, which is, unfortunately, a very difficult task.\nIndeed, it is even hard for a user to precisely describe what his\/her information need is.\nWhat information is available for a system to infer a user``s information need?\nObviously, the user``s query provides the most direct evidence.\nIndeed, most existing 
retrieval systems rely solely on the query to model a user``s information need.\nHowever, since a query is often extremely short, the user model constructed based on a keyword query is inevitably impoverished .\nAn effective way to improve user modeling in information retrieval is to ask the user to explicitly specify which documents are relevant (i.e., useful for satisfying his\/her information need), and then to improve user modeling based on such examples of relevant documents.\nThis is called relevance feedback, which has been proved to be quite effective for improving retrieval accuracy [19, 20].\nUnfortunately, in real world applications, users are usually reluctant to make the extra effort to provide relevant examples for feedback [11].\nIt is thus very interesting to study how to infer a user``s information need based on any implicit feedback information, which naturally exists through user interactions and thus does not require any extra user effort.\nIndeed, several previous studies have shown that implicit user modeling can improve retrieval accuracy.\nIn [3], a web browser (Curious Browser) is developed to record a user``s explicit relevance ratings of web pages (relevance feedback) and browsing behavior when viewing a page, such as dwelling time, mouse click, mouse movement and scrolling (implicit feedback).\nIt is shown that the dwelling time on a page, amount of scrolling on a page and the combination of time and scrolling have a strong correlation with explicit relevance ratings, which suggests that implicit feedback may be helpful for inferring user information need.\nIn [10], user clickthrough data is collected as training data to learn a retrieval function, which is used to produce a customized ranking of search results that suits a group of users'' preferences.\nIn [25], the clickthrough data collected over a long time period is exploited through query expansion to improve retrieval accuracy.\n824 While a user may have general long term 
interests and preferences for information, often he\/she is searching for documents to satisfy an ad-hoc information need, which only lasts for a short period of time; once the information need is satisfied, the user would generally no longer be interested in such information.\nFor example, a user may be looking for information about used cars in order to buy one, but once the user has bought a car, he\/she is generally no longer interested in such information.\nIn such cases, implicit feedback information collected over a long period of time is unlikely to be very useful, but the immediate search context and feedback information, such as which of the search results for the current information need are viewed, can be expected to be much more useful.\nConsider the query Java again.\nAny of the following immediate feedback information about the user could potentially help determine the intended meaning of Java in the query: (1) The previous query submitted by the user is hashtable (as opposed to, e.g., travel Indonesia).\n(2) In the search results, the user viewed a page where words such as programming, software, and applet occur many times.\nTo the best of our knowledge, how to exploit such immediate and short-term search context to improve search has so far not been well addressed in the previous work.\nIn this paper, we study how to construct and update a user model based on the immediate search context and implicit feedback information and use the model to improve the accuracy of ad-hoc retrieval.\nIn order to maximally benefit the user of a retrieval system through implicit user modeling, we propose to perform eager implicit feedback.\nThat is, as soon as we observe any new piece of evidence from the user, we would update the system``s belief about the user``s information need and respond with improved retrieval results based on the updated user model.\nWe present a decision-theoretic framework for optimizing interactive information retrieval based on eager user 
model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function.\nIn a traditional retrieval paradigm, the retrieval problem is to match a query with documents and rank documents according to their relevance values.\nAs a result, the retrieval process is a simple independent cycle of query and result display.\nIn the proposed new retrieval paradigm, the user's search context plays an important role and the inferred implicit user model is exploited immediately to benefit the user.\nThe new retrieval paradigm is thus fundamentally different from the traditional paradigm, and is inherently more general.\nWe further propose specific techniques to capture and exploit two types of implicit feedback information: (1) identifying a related immediately preceding query and using the query and the corresponding search results to select appropriate terms to expand the current query, and (2) exploiting the viewed document summaries to immediately rerank any documents that have not yet been seen by the user.\nUsing these techniques, we develop a client-side web search agent UCAIR (User-Centered Adaptive Information Retrieval) on top of a popular search engine (Google).\nExperiments on web search show that our search agent can improve search accuracy over Google.\nSince the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort.\nThus the developed search agent can improve existing web search performance without additional effort from the user.\nThe remaining sections are organized as follows.\nIn Section 2, we discuss the related work.\nIn Section 3, we present a decision-theoretic interactive retrieval framework for implicit user modeling.\nIn Section 4, we present the design and implementation of an intelligent client-side web search agent (UCAIR) that performs eager implicit feedback.\nIn Section 5, we report our experiment results
using the search agent.\nSection 6 concludes our work.\n2.\nRELATED WORK Implicit user modeling for personalized search has been studied in previous work, but our work differs from all previous work in several aspects: (1) We emphasize the exploitation of immediate search context such as the related immediately preceding query and the viewed documents in the same session, while most previous work relies on long-term collection of implicit feedback information [25].\n(2) We perform eager feedback and bring the benefit of implicit user modeling as soon as any new implicit feedback information is available, while the previous work mostly exploits long-term implicit feedback [10].\n(3) We propose a retrieval framework to integrate implicit user modeling with the interactive retrieval process, while the previous work either studies implicit user modeling separately from retrieval [3] or only studies specific retrieval models for exploiting implicit feedback to better match a query with documents [23, 27, 22].\n(4) We develop and evaluate a personalized Web search agent with online user studies, while most existing work evaluates algorithms offline without real user interactions.\nCurrently some search engines provide rudimentary personalization, such as Google Personalized web search [6], which allows users to explicitly describe their interests by selecting from predefined topics, so that those results that match their interests are brought to the top, and My Yahoo!
search [16], which gives users the option to save web sites they like and block those they dislike.\nIn contrast, UCAIR personalizes web search through implicit user modeling without any additional user efforts.\nFurthermore, the personalization of UCAIR is provided on the client side.\nThere are two remarkable advantages on this.\nFirst, the user does not need to worry about the privacy infringement, which is a big concern for personalized search [26].\nSecond, both the computation of personalization and the storage of the user profile are done at the client side so that the server load is reduced dramatically [9].\nThere have been many works studying user query logs [1] or query dynamics [13].\nUCAIR makes direct use of a user``s query history to benefit the same user immediately in the same search session.\nUCAIR first judges whether two neighboring queries belong to the same information session and if so, it selects terms from the previous query to perform query expansion.\nOur query expansion approach is similar to automatic query expansion [28, 15, 5], but instead of using pseudo feedback to expand the query, we use user``s implicit feedback information to expand the current query.\nThese two techniques may be combined.\n3.\nOPTIMIZATION IN INTERACTIVE IR In interactive IR, a user interacts with the retrieval system through an action dialogue, in which the system responds to each user action with some system action.\nFor example, the user``s action may be submitting a query and the system``s response may be returning a list of 10 document summaries.\nIn general, the space of user actions and system responses and their granularities would depend on the interface of a particular retrieval system.\nIn principle, every action of the user can potentially provide new evidence to help the system better infer the user``s information need.\nThus in order to respond optimally, the system should use all the evidence collected so far about the user when choosing a 
response.\nWhen viewed in this way, most existing search engines are clearly non-optimal.\nFor example, if a user has viewed some documents on the first page of search results, when the user clicks on the Next link to fetch more results, an existing retrieval system would still return the next page of results retrieved based on the original query without considering the new evidence that a particular result has been viewed by the user.\nWe propose to optimize retrieval performance by adapting system responses based on every action that a user has taken, and cast the optimization problem as a decision task.\nSpecifically, at any time, the system would attempt to do two tasks: (1) User model updating: Monitor any useful evidence from the user regarding his\/her information need and update the user model as soon as such evidence is available; (2) Improving search results: Rerank immediately all the documents that the user has not yet seen, as soon as the user model is updated.\nWe emphasize eager updating and reranking, which makes our work quite different from any existing work.\nBelow we present a formal decision-theoretic framework for optimizing retrieval performance through implicit user modeling in interactive information retrieval.\n3.1 A decision-theoretic framework Let A be the set of all user actions and R(a) be the set of all possible system responses to a user action a \u2208 A.\nAt any time, let At = (a1, ..., at) be the observed sequence of user actions so far (up to time point t) and Rt\u22121 = (r1, ..., rt\u22121) be the responses that the system has made responding to the user actions.\nThe system's goal is to choose an optimal response rt \u2208 R(at) for the current user action at.\nLet M be the space of all possible user models.\nWe further define a loss function L(a, r, m) \u2208 \u211d, where a \u2208 A is a user action, r \u2208 R(a) is a system response, and m \u2208 M is a user model.\nL(a, r, m) encodes our decision preferences and assesses
the optimality of responding with r when the current user model is m and the current user action is a.\nAccording to Bayesian decision theory, the optimal decision at time t is to choose a response that minimizes the Bayes risk, i.e., r\u2217 t = argmin_{r \u2208 R(at)} \u222bM L(at, r, mt) P(mt|U, D, At, Rt\u22121) dmt (1) where P(mt|U, D, At, Rt\u22121) is the posterior probability of the user model mt given all the observations about the user U we have made up to time t. To simplify the computation of Equation 1, let us assume that the posterior probability mass P(mt|U, D, At, Rt\u22121) is mostly concentrated on the mode m\u2217 t = argmax_{mt} P(mt|U, D, At, Rt\u22121).\nWe can then approximate the integral with the value of the loss function at m\u2217 t.\nThat is, r\u2217 t \u2248 argmin_{r \u2208 R(at)} L(at, r, m\u2217 t) (2) where m\u2217 t = argmax_{mt} P(mt|U, D, At, Rt\u22121).\nLeaving aside how to define and estimate these probabilistic models and the loss function, we can see that such a decision-theoretic formulation suggests that, in order to choose the optimal response to at, the system should perform two tasks: (1) compute the current user model and obtain m\u2217 t based on all the useful information.\n(2) choose a response rt to minimize the loss function value L(at, rt, m\u2217 t).\nWhen at does not affect our belief about m\u2217 t, the first step can be omitted and we may reuse m\u2217 t\u22121 for m\u2217 t.\nNote that our framework is quite general since we can potentially model any kind of user actions and system responses.\nIn most cases, as we may expect, the system's response is some ranking of documents, i.e., for most actions a, R(a) consists of all the possible rankings of the unseen documents, and the decision problem boils down to choosing the best ranking of unseen documents based on the most current user model.\nWhen a is the action of submitting a keyword query, such a response is exactly what a current retrieval system would do.\nHowever, we
can easily imagine that a more intelligent web search engine would respond to a user``s clicking of the Next link (to fetch more unseen results) with a more optimized ranking of documents based on any viewed documents in the current page of results.\nIn fact, according to our eager updating strategy, we may even allow a system to respond to a user``s clicking of browser``s Back button after viewing a document in the same way, so that the user can maximally benefit from implicit feedback.\nThese are precisely what our UCAIR system does.\n3.2 User models A user model m \u2208 M represents what we know about the user U, so in principle, it can contain any information about the user that we wish to model.\nWe now discuss two important components in a user model.\nThe first component is a component model of the user``s information need.\nPresumably, the most important factor affecting the optimality of the system``s response is how well the response addresses the user``s information need.\nIndeed, at any time, we may assume that the system has some belief about what the user is interested in, which we model through a term vector x = (x1, ..., x|V |), where V = {w1, ..., w|V |} is the set of all terms (i.e., vocabulary) and xi is the weight of term wi.\nSuch a term vector is commonly used in information retrieval to represent both queries and documents.\nFor example, the vector-space model, assumes that both the query and the documents are represented as term vectors and the score of a document with respect to a query is computed based on the similarity between the query vector and the document vector [21].\nIn a language modeling approach, we may also regard the query unigram language model [12, 29] or the relevance model [14] as a term vector representation of the user``s information need.\nIntuitively, x would assign high weights to terms that characterize the topics which the user is interested in.\nThe second component we may include in our user model is the 
documents that the user has already viewed.\nObviously, even if a document is relevant, if the user has already seen the document, it would not be useful to present the same document again.\nWe thus introduce another variable S \u2282 D (D is the whole set of documents in the collection) to denote the subset of documents in the search results that the user has already seen\/viewed.\nIn general, at time t, we may represent a user model as mt = (S, x, At, Rt\u22121), where S is the seen documents, x is the system's understanding of the user's information need, and (At, Rt\u22121) represents the user's interaction history.\nNote that an even more general user model may also include other factors such as the user's reading level and occupation.\nIf we assume that the uncertainty of a user model mt is solely due to the uncertainty of x, the computation of our current estimate of user model m\u2217 t will mainly involve computing our best estimate of x.\nThat is, the system would choose a response according to r\u2217 t = argmin_{r \u2208 R(at)} L(at, r, S, x\u2217, At, Rt\u22121) (3) where x\u2217 = argmax_x P(x|U, D, At, Rt\u22121).\nThis is the decision mechanism implemented in the UCAIR system to be described later.\nIn this system, we avoided specifying the probabilistic model P(x|U, D, At, Rt\u22121) by computing x\u2217 directly with some existing feedback method.\n3.3 Loss functions The exact definition of loss function L depends on the responses, thus it is inevitably application-specific.\nWe now briefly discuss some possibilities when the response is to rank all the unseen documents and present the top k of them.\nLet r = (d1, ..., dk) be the top k documents, S be the set of seen documents by the user, and x\u2217 be the system's best guess of the user's information need.\nWe may simply define the loss associated with r as the negative sum of the probability that each of the di is relevant, i.e., L(a, r, m) = \u2212\u2211_{i=1}^{k} P(relevant|di, m).\nClearly, in order to minimize this loss function, the optimal response r would contain the k documents with the highest probability of relevance, which is intuitively reasonable.\nOne deficiency of this top-k loss function is that it is not sensitive to the internal order of the selected top k documents, so switching the ranking order of a non-relevant document and a relevant one would not affect the loss, which is unreasonable.\nTo model ranking, we can introduce a factor of the user model - the probability of each of the k documents being viewed by the user, P(view|di), and define the following ranking loss function: L(a, r, m) = \u2212\u2211_{i=1}^{k} P(view|di) P(relevant|di, m).\nSince in general, if di is ranked above dj (i.e., i < j), P(view|di) > P(view|dj), this loss function would favor a decision to rank relevant documents above non-relevant ones, as otherwise, we could always switch di with dj to reduce the loss value.\nThus the system should simply perform a regular retrieval and rank documents according to the probability of relevance [18].\nDepending on the user's retrieval preferences, there can be many other possibilities.\nFor example, if the user does not want to see redundant documents, the loss function should include some redundancy measure on r based on the already seen documents S.
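To make these two losses concrete, the sketch below computes both for a toy two-document ranking; the particular view-probability model P(view|di) = 1\/(i+1) is an illustrative assumption, since the text only requires that P(view|di) decrease with rank.

```python
# Sketch of the top-k loss and the ranking loss described above.
# P(view|d_i) is modeled here as 1/(i+1), a hypothetical choice; the
# text only requires that it decrease with rank position i.

def top_k_loss(p_relevant):
    """L(a, r, m) = -sum_i P(relevant|d_i, m): insensitive to order."""
    return -sum(p_relevant)

def ranking_loss(p_relevant):
    """L(a, r, m) = -sum_i P(view|d_i) * P(relevant|d_i, m)."""
    return -sum(p / (i + 1) for i, p in enumerate(p_relevant))

# Swapping a relevant document above a non-relevant one lowers the
# ranking loss but leaves the top-k loss unchanged.
bad = [0.1, 0.9]   # relevant document ranked second
good = [0.9, 0.1]  # relevant document ranked first
assert top_k_loss(bad) == top_k_loss(good)
assert ranking_loss(good) < ranking_loss(bad)
```

The assertions mirror the argument in the text: the top-k loss cannot distinguish the two orderings, while the ranking loss strictly prefers the relevant document on top.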
Of course, when the response is not to choose a ranked list of documents, we would need a different loss function.\nWe discuss one such example that is relevant to the search agent that we implement.\nWhen a user enters a query qt (current action), our search agent relies on some existing search engine to actually carry out search.\nIn such a case, even though the search agent does not have control of the retrieval algorithm, it can still attempt to optimize the search results through refining the query sent to the search engine and\/or reranking the results obtained from the search engine.\nThe loss functions for reranking are already discussed above; we now take a look at the loss functions for query refinement.\nLet f be the retrieval function of the search engine that our agent uses so that f(q) would give us the search results using query q. Given that the current action of the user is entering a query qt (i.e., at = qt), our response would be f(q) for some q.\nSince we have no choice of f, our decision is to choose a good q. 
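The reduction above (choose q to minimize L(qt, f(q), m), treating the engine f as a black box) can be sketched as follows; the search function toy_f and the overlap-based toy_loss are invented for illustration and are not UCAIR's actual retrieval function or loss.

```python
# Hypothetical sketch of query refinement as loss minimization:
# q* = argmin_q L(q_t, f(q), m), with the engine f as a black box.

def choose_query(candidates, f, loss, model_terms):
    """Return the candidate query whose results minimize the loss."""
    return min(candidates, key=lambda q: loss(f(q), model_terms))

def toy_f(query):
    # pretend engine: maps a query to result "documents" (term sets)
    corpus = {
        "java map": [{"java", "island", "map"}, {"java", "travel"}],
        "java map class": [{"java", "map", "class"}, {"java", "interface"}],
    }
    return corpus.get(query, [])

def toy_loss(results, model_terms):
    # lower loss when results share more terms with the user model
    return -sum(len(doc & model_terms) for doc in results)

model = {"java", "class", "interface", "programming"}
best = choose_query(["java map", "java map class"], toy_f, toy_loss, model)
assert best == "java map class"
```

With a programming-oriented user model, the expanded candidate wins because its results overlap the model's terms more; a different loss would pick differently, exactly as the text notes.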
Formally, r\u2217 t = argmin_{rt} L(a, rt, m) = argmin_{f(q)} L(a, f(q), m) = f(argmin_q L(qt, f(q), m)), which shows that our goal is to find q\u2217 = argmin_q L(qt, f(q), m), i.e., an optimal query that would give us the best f(q).\nA different choice of loss function L(qt, f(q), m) would lead to a different query refinement strategy.\nIn UCAIR, we heuristically compute q\u2217 by expanding qt with terms extracted from rt\u22121 whenever qt\u22121 and qt have high similarity.\nNote that rt\u22121 and qt\u22121 are contained in m as part of the user's interaction history.\n3.4 Implicit user modeling Implicit user modeling is captured in our framework through the computation of x\u2217 = argmax_x P(x|U, D, At, Rt\u22121), i.e., the system's current belief of what the user's information need is.\nHere again there may be many possibilities, leading to different algorithms for implicit user modeling.\nWe now discuss a few of them.\nFirst, when two consecutive queries are related, the previous query can be exploited to enrich the current query and provide more search context to help disambiguation.\nFor this purpose, instead of performing query expansion as we did in the previous section, we could also compute an updated x\u2217 based on the previous query and retrieval results.\nThe computed new user model can then be used to rank the documents with a standard information retrieval model.\nSecond, we can also infer a user's interest based on the summaries of the viewed documents.\nWhen a user is presented with a list of summaries of top ranked documents, if the user chooses to skip the first n documents and to view the (n+1)-th document, we may infer that the user is not interested in the displayed summaries for the first n documents, but is attracted by the displayed summary of the (n + 1)-th document.\nWe can thus use these summaries as negative and positive examples to learn a more accurate user model x\u2217.\nHere many standard relevance feedback techniques can be
exploited [19, 20].\nNote that we should use the displayed summaries, as opposed to the actual contents of those documents, since it is possible that the displayed summary of the viewed document is relevant, but the document content is actually not.\nSimilarly, a displayed summary may mislead a user to skip a relevant document.\nInferring user models based on such displayed information, rather than the actual content of a document is an important difference between UCAIR and some other similar systems.\nIn UCAIR, both of these strategies for inferring an implicit user model are implemented.\n4.\nUCAIR: A PERSONALIZED SEARCH AGENT 4.1 Design In this section, we present a client-side web search agent called UCAIR, in which we implement some of the methods discussed in the previous section for performing personalized search through implicit user modeling.\nUCAIR is a web browser plug-in 1 that acts as a proxy for web search engines.\nCurrently, it is only implemented for Internet Explorer and Google, but it is a matter of engineering to make it run on other web browsers and interact with other search engines.\nThe issue of privacy is a primary obstacle for deploying any real world applications involving serious user modeling, such as personalized search.\nFor this reason, UCAIR is strictly running as a client-side search agent, as opposed to a server-side application.\nThis way, the captured user information always resides on the computer that the user is using, thus the user does not need to release any information to the outside.\nClient-side personalization also allows the system to easily observe a lot of user information that may not be easily available to a server.\nFurthermore, performing personalized search on the client-side is more scalable than on the serverside, since the overhead of computation and storage is distributed among clients.\nAs shown in Figure 1, the UCAIR toolbar has 3 major components: (1) The (implicit) user modeling module captures a 
user's search context and history information, including the submitted queries and any clicked search results, and infers search session boundaries.\n(2) The query modification module selectively improves the query formulation according to the current user model.\n(3) The result re-ranking module immediately re-ranks any unseen search results whenever the user model is updated.\n[Figure 1: UCAIR architecture. The agent sits between the user and a search engine (e.g., Google), with a search history log (e.g., past queries, clicked results), query modification, user modeling, result re-ranking, and a result buffer.]\nIn UCAIR, we consider four basic user actions: (1) submitting a keyword query; (2) viewing a document; (3) clicking the Back button; (4) clicking the Next link on a result page.\nFor each of these four actions, the system responds with, respectively, (1) generating a ranked list of results by sending a possibly expanded query to a search engine; (2) updating the information need model x; (3) reranking the unseen results on the current result page based on the current model x; and (4) reranking the unseen pages and generating the next page of results based on the current model x. (Footnote 1: UCAIR is available at: http:\/\/sifaka.cs.uiuc.edu\/ir\/ucair\/download.html) Behind these responses, there are three basic tasks: (1) Decide whether the previous query is related to the current query and if so expand the current query with useful terms from the previous query or the results of the previous query.\n(2) Update the information need model x based on a newly clicked document summary.\n(3) Rerank a set of unseen documents based on the current model x.
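The four action-response pairs above can be organized as a simple dispatch loop. The sketch below is illustrative only: AgentState and its expand_query, update_model, and rerank methods are hypothetical stubs standing in for the query-expansion, model-update, and reranking routines, not UCAIR's implementation.

```python
# Illustrative dispatch of the four user actions to the four responses.

class AgentState:
    def __init__(self, search_engine):
        self.search_engine = search_engine   # black-box engine, e.g. Google
        self.unseen_results = []             # buffer of unviewed results
        self.model_terms = set()             # crude stand-in for the vector x

    def expand_query(self, q):
        return q  # stub: would add terms from a related previous query

    def update_model(self, summary_terms):
        self.model_terms |= set(summary_terms)  # stub for a Rocchio-style update

    def rerank(self, results):
        # order unseen results by term overlap with the current model
        return sorted(results, key=lambda d: -len(set(d) & self.model_terms))

def handle_action(action, payload, state):
    if action == "query":              # (1) submit a keyword query
        return state.search_engine(state.expand_query(payload))
    if action == "view":               # (2) view a document summary
        state.update_model(payload)
        return None
    if action in ("back", "next"):     # (3)/(4) Back button or Next link
        return state.rerank(state.unseen_results)
    raise ValueError(f"unknown action: {action}")
```

Viewing a summary updates the model, so a subsequent Back or Next immediately reranks the unviewed buffer, which is the eager-feedback behavior described above.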
Below we describe our algorithms for each of them.\n4.2 Session boundary detection and query expansion To effectively exploit previous queries and their corresponding clickthrough information, UCAIR needs to judge whether two adjacent queries belong to the same search session (i.e., detect session boundaries).\nExisting work on session boundary detection is mostly in the context of web log analysis (e.g., [8]), and uses statistical information rather than textual features.\nSince our client-side agent does not have access to server query logs, we make session boundary decisions based on textual similarity between two queries.\nBecause related queries do not necessarily share the same words (e.g., java island and travel Indonesia), it is insufficient to use only query text.\nTherefore we use the search results of the two queries to help decide whether they are topically related.\nFor example, for the above queries java island and travel Indonesia, the words java, bali, island, indonesia and travel may occur frequently in both queries' search results, yielding a high similarity score.\nWe only use the titles and summaries of the search results to calculate the similarity since they are available in the retrieved search result page and fetching the full text of every result page would significantly slow down the process.\nTo compensate for the terseness of titles and summaries, we retrieve more results than a user would normally view for the purpose of detecting session boundaries (typically 50 results).\nThe similarity between the previous query q' and the current query q is computed as follows.\nLet {s'1, s'2, ..., s'n'} and {s1, s2, ..., sn} be the result sets for the two queries.\nWe use the pivoted normalization TF-IDF weighting formula [24] to compute a term weight vector si for each result si.\nWe define the average result savg to be the centroid of all the result vectors, i.e., (s1 + s2 + ...
+ sn)\/n.\nThe cosine similarity between the two average results is calculated as (s'avg \u00b7 savg) \/ (||s'avg|| \u00b7 ||savg||).\nIf the similarity value exceeds a predefined threshold, the two queries will be considered to be in the same information session.\nIf the previous query and the current query are found to belong to the same search session, UCAIR would attempt to expand the current query with terms from the previous query and its search results.\nSpecifically, for each term in the previous query or the corresponding search results, if its frequency in the results of the current query is greater than a preset threshold (e.g. 5 results out of 50), the term would be added to the current query to form an expanded query.\nIn this case, UCAIR would send this expanded query rather than the original one to the search engine and return the results corresponding to the expanded query.\nCurrently, UCAIR only uses the immediate preceding query for query expansion; in principle, we could exploit all related past queries.\n4.3 Information need model updating Suppose at time t, we have observed that the user has viewed k documents whose summaries are s1, ..., sk.\nWe update our user model by computing a new information need vector with a standard feedback method in information retrieval (i.e., Rocchio [19]).\nAccording to the vector space retrieval model, each clicked summary si can be represented by a term weight vector si with each term weighted by a TF-IDF weighting formula [21].\nRocchio computes the centroid vector of all the summaries and interpolates it with the original query vector to obtain an updated term vector.\nThat is, x = \u03b1q + (1 \u2212 \u03b1)(1\/k) \u2211_{i=1}^{k} si, where q is the query vector, k is the number of summaries the user clicks immediately following the current query and \u03b1 is a parameter that controls the influence of the clicked summaries on the inferred information need model.\nIn our experiments, \u03b1 is set to 0.5.\nNote that we update the
Note that we update the information need model whenever the user views a document.

4.4 Result reranking

In general, we want to rerank all the unseen results as soon as the user model is updated. Currently, UCAIR implements reranking in two cases, corresponding to the user clicking the "Back" button and the "Next" link in Internet Explorer. In both cases, the current (updated) user model is used to rerank the unseen results so that the user sees improved search results immediately. To rerank any unseen document summaries, UCAIR uses the standard vector space retrieval model and scores each summary based on the similarity of the result and the current user information need vector x [21]. Since implicit feedback is not completely reliable, we bring up only a small number (e.g., 5) of the highest reranked results, followed by any originally high-ranked results.

Table 1: Sample results of query expansion
Columns: (a) Google result (user query = java map); (b) UCAIR result (user query = java map; previous query = travel Indonesia; expanded user query = java map Indonesia); (c) UCAIR result (user query = java map; previous query = hashtable; expanded user query = java map class)

1. (a) Java map projections of the world ... (www.btinternet.com/se16/js/mapproj.htm)
   (b) Lonely Planet - Indonesia Map (www.lonelyplanet.com/mapshells/...)
   (c) Map (Java 2 Platform SE v1.4.2) (java.sun.com/j2se/1.4.2/docs/...)
2. (a) Java map projections of the world ... (www.btinternet.com/se16/js/oldmapproj.htm)
   (b) INDONESIA TOURISM : CENTRAL JAVA - MAP (www.indonesia-tourism.com/...)
   (c) Java 2 Platform SE v1.3.1: Interface Map (java.sun.com/j2se/1.3/docs/api/java/...)
3. (a) Java Map (java.sun.com/developer/...)
   (b) INDONESIA TOURISM : WEST JAVA - MAP (www.indonesia-tourism.com/...)
   (c) An Introduction to Java Map Collection Classes (www.oracle.com/technology/...)
4. (a) Java Technology Concept Map (java.sun.com/developer/onlineTraining/...)
   (b) IndoStreets - Java Map (www.indostreets.com/maps/java/)
   (c) An Introduction to Java Map Collection Classes (www.theserverside.com/news/...)
5. (a) Science@NASA Home (science.nasa.gov/Realtime/...)
   (b) Indonesia Regions and Islands Maps, Bali, Java, ... (www.maps2anywhere.com/Maps/...)
   (c) Koders - Mappings.java (www.koders.com/java/)
6. (a) An Introduction to Java Map Collection Classes (www.oracle.com/technology/...)
   (b) Indonesia City Street Map, ... (www.maps2anywhere.com/Maps/...)
   (c) Hibernate simplifies inheritance mapping (www.ibm.com/developerworks/java/...)
7. (a) Lonely Planet - Java Map (www.lonelyplanet.com/mapshells/)
   (b) Maps Of Indonesia (www.embassyworld.com/maps/...)
   (c) tmap 30. map Class Hierarchy (tmap.pmel.noaa.gov/...)
8. (a) ONJava.com: Java API Map (www.onjava.com/pub/a/onjava/api map/)
   (b) Maps of Indonesia by Peter Loud (users.powernet.co.uk/...)
   (c) Class Scope (jalbum.net/api/se/datadosen/util/Scope.html)
9. (a) GTA San Andreas : Sam (www.gtasanandreas.net/sam/)
   (b) Maps of Indonesia by Peter Loud (users.powernet.co.uk/mkmarina/indonesia/)
   (c) Class PrintSafeHashMap (jalbum.net/api/se/datadosen/...)
10. (a) INDONESIA TOURISM : WEST JAVA - MAP (www.indonesia-tourism.com/...)
    (b) indonesiaphoto.com (www.indonesiaphoto.com/...)
    (c) Java Pro - Union and Vertical Mapping of Classes (www.fawcette.com/javapro/...)

5. EVALUATION OF UCAIR

We now present some results on evaluating the two major UCAIR functions: selective query expansion and result reranking based on user clickthrough data.

5.1 Sample results

The query expansion strategy implemented in UCAIR is intentionally conservative to avoid misinterpretation of implicit user models. In practice, whenever it chooses to expand the query, the expansion usually makes sense.
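The model update of Section 4.3 and the reranking rule of Section 4.4 can be sketched together. The sketch below assumes summaries have already been reduced to sparse term-weight dictionaries; the function names and the `promote` parameter are illustrative assumptions, not UCAIR's actual API.

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rocchio_update(query_vec, clicked_vecs, alpha=0.5):
    """x = alpha*q + (1-alpha)*(1/k)*sum(s_i): interpolate the query vector
    with the centroid of the k clicked-result summary vectors (Rocchio [19])."""
    k = len(clicked_vecs)
    x = Counter({t: alpha * w for t, w in query_vec.items()})
    for s in clicked_vecs:
        for t, w in s.items():
            x[t] += (1 - alpha) * w / k
    return dict(x)

def rerank(unseen, model_vec, promote=5):
    """Score unseen summaries against the information need vector, but bring
    up only the top `promote` of them ahead of the original ranking, since
    implicit feedback is not completely reliable."""
    scored = sorted(unseen, key=lambda s: cosine(s, model_vec), reverse=True)
    top = scored[:promote]
    rest = [s for s in unseen if s not in top]  # keep original order
    return top + rest
```

For the "jaguar" example, clicking a car-related summary pulls car terms into x, so unseen car results are promoted ahead of software results.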
In Table 1, we show how UCAIR can successfully distinguish two different search contexts for the query java map, corresponding to two different previous queries (i.e., travel Indonesia vs. hashtable). Due to implicit user modeling, UCAIR intelligently figures out to add Indonesia and class, respectively, to the user's query java map, which would otherwise be ambiguous, as shown in the original results from Google on March 21, 2005. UCAIR's results are much more accurate than Google's results and reflect personalization in search.

The eager implicit feedback component is designed to immediately respond to a user's activity, such as viewing a document. In Figure 2, we show how UCAIR can successfully disambiguate the ambiguous query jaguar by exploiting a viewed document summary. In this case, the initial retrieval results using jaguar (shown on the left side) contain two results about Jaguar cars followed by two results about the Jaguar software. However, after the user views the web page content of the second result (about the Jaguar car) and returns to the search result page by clicking the "Back" button, UCAIR automatically nominates two new search results about Jaguar cars (shown on the right side), while the original two results about Jaguar software are pushed down the list (not visible in the picture).

Figure 2: Screen shots for result reranking

5.2 Quantitative evaluation

To further evaluate UCAIR quantitatively, we conduct a user study on the effectiveness of the eager implicit feedback component. It is a challenge to quantitatively evaluate the potential performance improvement of our proposed model and UCAIR over Google in an unbiased way [7]. Here, we design a user study in which participants do normal web search and judge a randomly and anonymously mixed set of results from Google and UCAIR at the end of the search session; participants do not know whether a result comes from Google or UCAIR. We recruited 6 graduate students for this user study, who have different backgrounds (3 computer science, 2 biology, and 1 chemistry). We use query topics from the TREC (Text REtrieval Conference, http://trec.nist.gov/) 2004 Terabyte track [2] and the TREC 2003 Web track [4] topic distillation task, in the way described below.

An example topic from the TREC 2004 Terabyte track appears in Figure 3. The title is a short phrase and may be used as a query to the retrieval system. The description field provides a slightly longer statement of the topic requirement, usually expressed as a single complete sentence or question. Finally, the narrative supplies additional information necessary to fully specify the requirement, expressed in the form of a short paragraph.

<top>
Number: 716 Spammer arrest sue
<desc> Description: Have any spammers been arrested or sued for sending unsolicited e-mail?
<narr> Narrative: Instances of arrests, prosecutions, convictions, and punishments of spammers, and lawsuits against them are relevant. Documents which describe laws to limit spam without giving details of lawsuits or criminal trials are not relevant.
</top>

Figure 3: An example of a TREC query topic, expressed in a form which might be given to a human assistant or librarian

Initially, each participant browses 50 topics, from either the Terabyte track or the Web track, and picks the 5 or 7 most interesting topics. For each picked topic, the participant does a normal web search using UCAIR to find relevant web pages, using the title of the query topic as the initial keyword query. During this process, the participant may view the search results and possibly click on some interesting ones to view the web pages, just as in normal web search. There is no requirement or restriction on how many queries the participant must submit or when the participant should stop the search for one topic. When the participant plans to change the search topic, he/she simply presses a button to evaluate the search results before actually switching to the next topic.
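The blind-judging setup described above, pooling top results from the two engines and shuffling them so participants cannot tell which engine produced which result, can be sketched as follows. This is a hypothetical illustration: the function names, the deduplication step, and the precision helper are our own, not part of the study's tooling.

```python
import random

def mixed_judging_pool(run_a, run_b, n=30, seed=0):
    """Pool the top-n results of two systems, deduplicate (some results
    overlap), and shuffle so a judge cannot tell which system returned
    which result."""
    pool = list(dict.fromkeys(run_a[:n] + run_b[:n]))  # dedupe, keep order
    random.Random(seed).shuffle(pool)
    return pool

def precision_at(ranked, relevant, n):
    """Standard precision at top n: fraction of the first n results judged
    relevant, as used to compare the two systems per topic."""
    return sum(1 for d in ranked[:n] if d in relevant) / n
```

Once the shuffled pool is judged, each system's original ranking can be scored with `precision_at` at the usual cutoffs (n = 5, 10, 20, 30).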
At the time of evaluation, the 30 top-ranked results from Google and UCAIR (some of which overlap) are randomly mixed together so that the participant would not know whether a result comes from Google or UCAIR. The participant then judges the relevance of these results. We measure precision at the top n (n = 5, 10, 20, 30) documents for Google and UCAIR. We also evaluate precision at different recall levels. Altogether, 368 documents from the Google search results and 429 documents from UCAIR were judged relevant by the participants.

Scatter plots of precision at the top 10 and top 20 documents are shown in Figure 4 and Figure 5, respectively (the scatter plot of precision at the top 30 documents is very similar to that at the top 20). Each point of the scatter plots represents the precisions of Google and UCAIR on one query topic. Table 2 shows the average precision at the top n documents over the 32 topics. From Figure 4, Figure 5 and Table 2, we see that the search results from UCAIR are consistently better than those from Google by all the measures. Moreover, the performance improvement is more dramatic for precision at the top 20 documents than for precision at the top 10 documents. One explanation for this is that the more interaction the user has with the system, the more clickthrough data UCAIR can be expected to collect. Thus the retrieval system can build more precise implicit user models, which lead to better retrieval accuracy.

Table 2: Average precision at top n documents for 32 query topics

Ranking Method   prec@5   prec@10   prec@20   prec@30
Google           0.538    0.472     0.377     0.308
UCAIR            0.581    0.556     0.453     0.375
Improvement      8.0%     17.8%     20.2%     21.8%

Figure 4: Precision at top 10 documents of UCAIR and Google

The plot in Figure 6 shows the precision-recall curves for UCAIR and Google, where it is clearly seen that the performance of UCAIR is consistently and considerably better than that of Google at all levels of recall.

Figure 5: Precision at top 20 documents of UCAIR and Google

Figure 6: Precision-recall curves of UCAIR and Google

6. CONCLUSIONS

In this paper, we studied how to exploit implicit user modeling to intelligently personalize information retrieval and improve search accuracy. Unlike most previous work, we emphasize the use of immediate search context and implicit feedback information, as well as eager updating of search results, to maximally benefit the user. We presented a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function. We further propose specific techniques to capture and exploit two types of implicit feedback information: (1) identifying a related immediately preceding query and using that query and the corresponding search results to select appropriate terms to expand the current query, and (2) exploiting the viewed document summaries to immediately rerank any documents that have not yet been seen by the user. Using these techniques, we developed a client-side web search agent (UCAIR) on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. The developed search agent thus can improve existing web search performance without any additional effort from the user.

7. ACKNOWLEDGEMENT

We thank the six participants of our evaluation experiments. This work was supported in part by the National Science Foundation grants
IIS-0347933 and IIS-0428472.

8. REFERENCES

[1] S. M. Beitzel, E. C. Jensen, A. Chowdhury, D. Grossman, and O. Frieder. Hourly analysis of a very large topically categorized web query log. In Proceedings of SIGIR 2004, pages 321-328, 2004.
[2] C. Clarke, N. Craswell, and I. Soboroff. Overview of the TREC 2004 Terabyte track. In Proceedings of TREC 2004, 2004.
[3] M. Claypool, P. Le, M. Waseda, and D. Brown. Implicit interest indicators. In Proceedings of Intelligent User Interfaces 2001, pages 33-40, 2001.
[4] N. Craswell, D. Hawking, R. Wilkinson, and M. Wu. Overview of the TREC 2003 Web track. In Proceedings of TREC 2003, 2003.
[5] W. B. Croft, S. Cronen-Townsend, and V. Lavrenko. Relevance feedback and personalization: A language modeling perspective. In Proceedings of the Second DELOS Workshop: Personalisation and Recommender Systems in Digital Libraries, 2001.
[6] Google Personalized. http://labs.google.com/personalized.
[7] D. Hawking, N. Craswell, P. B. Thistlewaite, and D. Harman. Results and challenges in web search evaluation. Computer Networks, 31(11-16):1321-1330, 1999.
[8] X. Huang, F. Peng, A. An, and D. Schuurmans. Dynamic web log session identification with statistical language models. Journal of the American Society for Information Science and Technology, 55(14):1290-1303, 2004.
[9] G. Jeh and J. Widom. Scaling personalized web search. In Proceedings of WWW 2003, pages 271-279, 2003.
[10] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of SIGKDD 2002, pages 133-142, 2002.
[11] D. Kelly and J. Teevan. Implicit feedback for inferring user preference: A bibliography. SIGIR Forum, 37(2):18-28, 2003.
[12] J. Lafferty and C. Zhai. Document language models, query models, and risk minimization for information retrieval. In Proceedings of SIGIR 2001, pages 111-119, 2001.
[13] T. Lau and E. Horvitz. Patterns of search: Analyzing and modeling web query refinement. In Proceedings of the Seventh International Conference on User Modeling (UM), pages 145-152, 1999.
[14] V. Lavrenko and W. B. Croft. Relevance-based language models. In Proceedings of SIGIR 2001, pages 120-127, 2001.
[15] M. Mitra, A. Singhal, and C. Buckley. Improving automatic query expansion. In Proceedings of SIGIR 1998, pages 206-214, 1998.
[16] My Yahoo! http://mysearch.yahoo.com.
[17] G. Nunberg. As Google goes, so goes the nation. New York Times, May 2003.
[18] S. E. Robertson. The probability ranking principle in IR. Journal of Documentation, 33(4):294-304, 1977.
[19] J. J. Rocchio. Relevance feedback in information retrieval. In The SMART Retrieval System: Experiments in Automatic Document Processing, pages 313-323. Prentice-Hall Inc., 1971.
[20] G. Salton and C. Buckley. Improving retrieval performance by relevance feedback. Journal of the American Society for Information Science, 41(4):288-297, 1990.
[21] G. Salton and M. J. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1983.
[22] X. Shen, B. Tan, and C. Zhai. Context-sensitive information retrieval using implicit feedback. In Proceedings of SIGIR 2005, pages 43-50, 2005.
[23] X. Shen and C. Zhai. Exploiting query history for document ranking in interactive information retrieval (poster). In Proceedings of SIGIR 2003, pages 377-378, 2003.
[24] A. Singhal. Modern information retrieval: A brief overview. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, 24(4):35-43, 2001.
[25] K. Sugiyama, K. Hatano, and M. Yoshikawa. Adaptive web search based on user profile constructed without any effort from users. In Proceedings of WWW 2004, pages 675-684, 2004.
[26] E. Volokh. Personalization and privacy. Communications of the ACM, 43(8):84-88, 2000.
[27] R. W. White, J. M. Jose, C. J. van Rijsbergen, and I.
Ruthven. A simulated study of implicit feedback models. In Proceedings of ECIR 2004, pages 311-326, 2004.
[28] J. Xu and W. B. Croft. Query expansion using local and global document analysis. In Proceedings of SIGIR 1996, pages 4-11, 1996.
[29] C. Zhai and J. Lafferty. Model-based feedback in the language modeling approach to information retrieval. In Proceedings of CIKM 2001, pages 403-410, 2001.

Implicit User Modeling for Personalized Search

ABSTRACT

Information retrieval systems (e.g., web search engines) are critical for overcoming information overload. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance. For example, a tourist and a programmer may use the same word "java" to search for different information, but current search systems would return the same results. In this paper, we study how to infer a user's interest from the user's search context and use the inferred implicit user model for personalized search. We present a decision-theoretic framework and develop techniques for implicit user modeling in information retrieval. We develop an intelligent client-side web search agent (UCAIR) that can perform eager implicit feedback, e.g., query expansion based on previous queries and immediate result reranking based on clickthrough information. Experiments on web search show that our search agent can improve search accuracy over the popular Google search engine.

1. INTRODUCTION

Although many information retrieval systems (e.g., web search engines and digital library systems) have been successfully deployed, current retrieval systems are far from optimal. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users [17]. This inherent non-optimality is seen clearly in the following two cases:

(1) Different users may use exactly
the same query (e.g., "Java") to search for different information (e.g., the Java island in Indonesia or the Java programming language), but existing IR systems return the same results for these users. Without considering the actual user, it is impossible to know which sense "Java" refers to in a query.

(2) A user's information needs may change over time. The same user may use "Java" sometimes to mean the Java island in Indonesia and other times to mean the programming language. Without recognizing the search context, it would again be impossible to recognize the correct sense.

In order to optimize retrieval accuracy, we clearly need to model the user appropriately and personalize search according to each individual user. The major goal of user modeling for information retrieval is to accurately model a user's information need, which is, unfortunately, a very difficult task. Indeed, it is hard even for a user to precisely describe what his/her information need is.

What information is available for a system to infer a user's information need? Obviously, the user's query provides the most direct evidence. Indeed, most existing retrieval systems rely solely on the query to model a user's information need. However, since a query is often extremely short, the user model constructed from a keyword query is inevitably impoverished. An effective way to improve user modeling in information retrieval is to ask the user to explicitly specify which documents are relevant (i.e., useful for satisfying his/her information need), and then to improve user modeling based on such examples of relevant documents. This is called relevance feedback, which has been proved to be quite effective for improving retrieval accuracy [19, 20]. Unfortunately, in real-world applications, users are usually reluctant to make the extra effort to provide relevant examples for feedback [11].

It is thus very interesting to study how to infer a user's information need based on implicit feedback information, which naturally exists through user interactions and thus does not require any extra user effort. Indeed, several previous studies have shown that implicit user modeling can improve retrieval accuracy. In [3], a web browser (Curious Browser) is developed to record a user's explicit relevance ratings of web pages (relevance feedback) and browsing behavior when viewing a page, such as dwelling time, mouse clicks, mouse movement and scrolling (implicit feedback). It is shown that the dwelling time on a page, the amount of scrolling on a page, and the combination of time and scrolling have a strong correlation with explicit relevance ratings, which suggests that implicit feedback may be helpful for inferring user information need. In [10], user clickthrough data is collected as training data to learn a retrieval function, which is used to produce a customized ranking of search results that suits a group of users' preferences. In [25], clickthrough data collected over a long time period is exploited through query expansion to improve retrieval accuracy.

While a user may have general long-term interests and preferences for information, often he/she is searching for documents to satisfy an "ad hoc" information need that only lasts for a short period of time; once the information need is satisfied, the user is generally no longer interested in such information. For example, a user may be looking for information about used cars in order to buy one, but once the user has bought a car, he/she is generally no longer interested in such information. In such cases, implicit feedback information collected over a long period of time is unlikely to be very useful, but the immediate search context and feedback information, such as which of the search results for the current information need are viewed, can be expected to be much more useful. Consider the query "Java" again. Any of the following immediate feedback information about the user could potentially help determine the intended meaning of "Java" in the query: (1) the previous query submitted by the user is "hashtable" (as opposed to, e.g., "travel Indonesia"); (2) in the search results, the user viewed a page where words such as "programming", "software", and "applet" occur many times. To the best of our knowledge, how to exploit such immediate and short-term search context to improve search has so far not been well addressed in previous work.

In this paper, we study how to construct and update a user model based on the immediate search context and implicit feedback information, and how to use the model to improve the accuracy of ad hoc retrieval. In order to maximally benefit the user of a retrieval system through implicit user modeling, we propose to perform "eager implicit feedback". That is, as soon as we observe any new piece of evidence from the user, we update the system's belief about the user's information need and respond with improved retrieval results based on the updated user model. We present a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function. In a traditional retrieval paradigm, the retrieval problem is to match a query with documents and rank documents according to their relevance values. As a result, the retrieval process is a simple independent cycle of "query" and "result display". In the proposed new retrieval paradigm, the user's search context plays an important role and the inferred implicit user model is exploited immediately to benefit the user. The new retrieval paradigm is thus fundamentally different from the traditional paradigm, and is inherently more general.

We further propose specific techniques to capture and exploit two types of implicit feedback information: (1) identifying a related
immediately preceding query and using the query and the corresponding search results to select appropriate terms to expand the current query, and (2) exploiting the viewed document summaries to immediately rerank any documents that have not yet been seen by the user. Using these techniques, we develop a client-side web search agent, UCAIR (User-Centered Adaptive Information Retrieval), on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. Thus the developed search agent can improve existing web search performance without additional effort from the user.

The remaining sections are organized as follows. In Section 2, we discuss related work. In Section 3, we present a decision-theoretic interactive retrieval framework for implicit user modeling. In Section 4, we present the design and implementation of an intelligent client-side web search agent (UCAIR) that performs eager implicit feedback. In Section 5, we report our experiment results using the search agent. Section 6 concludes our work.

2. RELATED WORK

Implicit user modeling for personalized search has been studied in previous work, but our work differs from all previous work in several aspects: (1) We emphasize the exploitation of immediate search context, such as the related immediately preceding query and the viewed documents in the same session, while most previous work relies on long-term collection of implicit feedback information [25]. (2) We perform eager feedback and bring the benefit of implicit user modeling as soon as any new implicit feedback information is available, while previous work mostly exploits long-term implicit feedback [10]. (3) We propose a retrieval framework to integrate implicit user modeling with the interactive retrieval process, while previous work either studies implicit user modeling separately from retrieval [3] or only studies specific retrieval models for exploiting implicit feedback to better match a query with documents [23, 27, 22]. (4) We develop and evaluate a personalized web search agent with online user studies, while most existing work evaluates algorithms offline, without real user interactions.

Currently some search engines provide rudimentary personalization, such as Google Personalized web search [6], which allows users to explicitly describe their interests by selecting from predefined topics, so that results matching those interests are brought to the top, and My Yahoo! search [16], which gives users the option to save web sites they like and block those they dislike. In contrast, UCAIR personalizes web search through implicit user modeling, without any additional user effort. Furthermore, the personalization of UCAIR is provided on the client side, which has two remarkable advantages. First, the user does not need to worry about privacy infringement, which is a big concern for personalized search [26]. Second, both the computation of personalization and the storage of the user profile are done at the client side, so the server load is reduced dramatically [9].

There has been much work studying user query logs [1] or query dynamics [13]. UCAIR makes direct use of a user's query history to benefit the same user immediately in the same search session. UCAIR first judges whether two neighboring queries belong to the same information session and, if so, selects terms from the previous query to perform query expansion. Our query expansion approach is similar to automatic query expansion [28, 15, 5], but instead of using pseudo feedback to expand the query, we use the user's implicit feedback information to expand the current query. These two techniques may be combined.

3. OPTIMIZATION IN INTERACTIVE IR

3.1 A decision-theoretic framework
3.2
User models
3.3 Loss functions
3.4 Implicit user modeling

4. UCAIR: A PERSONALIZED SEARCH AGENT

4.1 Design
4.2 Session boundary detection and query expansion
4.3 Information need model updating
4.4 Result reranking

5. EVALUATION OF UCAIR

5.1 Sample results
5.2 Quantitative evaluation

6. CONCLUSIONS

In this paper, we studied how to exploit implicit user modeling to intelligently personalize information retrieval and improve search accuracy. Unlike most previous work, we emphasize the use of immediate search context and implicit feedback information, as well as eager updating of search results, to maximally benefit the user. We presented a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function. We further propose specific techniques to capture and exploit two types of implicit feedback information: (1) identifying a related immediately preceding query and using the query and the corresponding search results to select appropriate terms to expand the current query, and (2) exploiting the viewed document summaries to immediately rerank any documents that have not yet been seen by the user. Using these techniques, we develop a client-side web search agent (UCAIR) on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. The developed search agent thus can improve existing web search performance without any additional effort from the user.
dynamics [13].\nUCAIR makes direct use of a user's query history to benefit the same user immediately in the same search session.\nUCAIR first judges whether two neighboring queries belong to the same information session and if so, it selects terms from the previous query to perform query expansion.\nOur query expansion approach is similar to automatic query expansion [28, 15, 5], but instead of using pseudo feedback to expand the query, we use user's implicit feedback information to expand the current query.\nThese two techniques may be combined.\n6.\nCONCLUSIONS\nIn this paper, we studied how to exploit implicit user modeling to intelligently personalize information retrieval and improve search accuracy.\nUnlike most previous work, we emphasize the use of immediate search context and implicit feedback information as well as eager updating of search results to maximally benefit a user.\nWe presented a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function.\nUsing these techniques, we develop a client-side web search agent (UCAIR) on top of a popular search engine (Google).\nExperiments on web search show that our search agent can improve search accuracy over\nFigure 5: Precision at top 20 documents of UCAIR and Google\nFigure 6: Precision at top 20 result of UCAIR and Google\nGoogle.\nSince the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort.\nThe developed search agent thus can improve existing web search performance without any additional effort from the user.","lvl-2":"Implicit User Modeling for Personalized Search\nABSTRACT\nInformation retrieval systems (e.g., web search engines) are critical for overcoming information overload.\nA major deficiency of existing retrieval systems is that they generally lack 
user modeling and are not adaptive to individual users, resulting in inherently non-optimal retrieval performance. For example, a tourist and a programmer may use the same word "java" to search for different information, but the current search systems would return the same results. In this paper, we study how to infer a user's interest from the user's search context and use the inferred implicit user model for personalized search. We present a decision-theoretic framework and develop techniques for implicit user modeling in information retrieval. We develop an intelligent client-side web search agent (UCAIR) that can perform eager implicit feedback, e.g., query expansion based on previous queries and immediate result reranking based on clickthrough information. Experiments on web search show that our search agent can improve search accuracy over the popular Google search engine.

1. INTRODUCTION

Although many information retrieval systems (e.g., web search engines and digital library systems) have been successfully deployed, the current retrieval systems are far from optimal. A major deficiency of existing retrieval systems is that they generally lack user modeling and are not adaptive to individual users [17]. This inherent non-optimality is seen clearly in the following two cases:

(1) Different users may use exactly the same query (e.g., "Java") to search for different information (e.g., the Java island in Indonesia or the Java programming language), but existing IR systems return the same results for these users. Without considering the actual user, it is impossible to know which sense "Java" refers to in a query.

(2) A user's information needs may change over time. The same user may use "Java" sometimes to mean the Java island in Indonesia and some other times to mean the programming language. Without recognizing the search context, it would be again impossible to recognize the correct sense.

In order to optimize retrieval accuracy, we
clearly need to model the user appropriately and personalize search according to each individual user.

The major goal of user modeling for information retrieval is to accurately model a user's information need, which is, unfortunately, a very difficult task. Indeed, it is even hard for a user to precisely describe what his/her information need is. What information is available for a system to infer a user's information need? Obviously, the user's query provides the most direct evidence. Indeed, most existing retrieval systems rely solely on the query to model a user's information need. However, since a query is often extremely short, the user model constructed from a keyword query is inevitably impoverished.

An effective way to improve user modeling in information retrieval is to ask the user to explicitly specify which documents are relevant (i.e., useful for satisfying his/her information need), and then to improve user modeling based on such examples of relevant documents. This is called relevance feedback, which has been proved to be quite effective for improving retrieval accuracy [19, 20]. Unfortunately, in real-world applications, users are usually reluctant to make the extra effort to provide relevant examples for feedback [11].

It is thus very interesting to study how to infer a user's information need from implicit feedback information, which naturally exists through user interactions and thus does not require any extra user effort. Indeed, several previous studies have shown that implicit user modeling can improve retrieval accuracy. In [3], a web browser (Curious Browser) is developed to record a user's explicit relevance ratings of web pages (relevance feedback) and browsing behavior when viewing a page, such as dwelling time, mouse clicks, mouse movement and scrolling (implicit feedback). It is shown that the dwelling time on a page, the amount of scrolling on a page, and the combination of time and scrolling have a strong
correlation with explicit relevance ratings, which suggests that implicit feedback may be helpful for inferring user information need. In [10], user clickthrough data is collected as training data to learn a retrieval function, which is used to produce a customized ranking of search results that suits a group of users' preferences. In [25], the clickthrough data collected over a long time period is exploited through query expansion to improve retrieval accuracy.

While a user may have general long-term interests and preferences for information, often he/she is searching for documents to satisfy an "ad hoc" information need, which only lasts for a short period of time; once the information need is satisfied, the user would generally no longer be interested in such information. For example, a user may be looking for information about used cars in order to buy one, but once the user has bought a car, he/she is generally no longer interested in such information. In such cases, implicit feedback information collected over a long period of time is unlikely to be very useful, but the immediate search context and feedback information, such as which of the search results for the current information need are viewed, can be expected to be much more useful.

Consider the query "Java" again. Any of the following immediate feedback information about the user could potentially help determine the intended meaning of "Java" in the query: (1) The previous query submitted by the user is "hashtable" (as opposed to, e.g., "travel Indonesia"). (2) In the search results, the user viewed a page where words such as "programming", "software", and "applet" occur many times. To the best of our knowledge, how to exploit such immediate and short-term search context to improve search has so far not been well addressed in the previous work.

In this paper, we study how to construct and update a user model based on the immediate search context and implicit feedback
information and use the model to improve the accuracy of ad hoc retrieval. In order to maximally benefit the user of a retrieval system through implicit user modeling, we propose to perform "eager implicit feedback". That is, as soon as we observe any new piece of evidence from the user, we would update the system's belief about the user's information need and respond with improved retrieval results based on the updated user model. We present a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in which the system responds to every action of the user by choosing a system action to optimize a utility function.

In a traditional retrieval paradigm, the retrieval problem is to match a query with documents and rank documents according to their relevance values. As a result, the retrieval process is a simple independent cycle of "query" and "result display". In the proposed new retrieval paradigm, the user's search context plays an important role and the inferred implicit user model is exploited immediately to benefit the user. The new retrieval paradigm is thus fundamentally different from the traditional paradigm, and is inherently more general.

We further propose specific techniques to capture and exploit two types of implicit feedback information: (1) identifying a related immediately preceding query and using that query and its corresponding search results to select appropriate terms to expand the current query, and (2) exploiting the viewed document summaries to immediately rerank any documents that have not yet been seen by the user. Using these techniques, we develop a client-side web search agent UCAIR (User-Centered Adaptive Information Retrieval) on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Google. Since the implicit information we exploit already naturally exists through user interactions, the user
does not need to make any extra effort. Thus the developed search agent can improve existing web search performance without additional effort from the user.

The remaining sections are organized as follows. In Section 2, we discuss the related work. In Section 3, we present a decision-theoretic interactive retrieval framework for implicit user modeling. In Section 4, we present the design and implementation of an intelligent client-side web search agent (UCAIR) that performs eager implicit feedback. In Section 5, we report our experiment results using the search agent. Section 6 concludes our work.

2. RELATED WORK

Implicit user modeling for personalized search has been studied in previous work, but our work differs from all previous work in several aspects: (1) We emphasize the exploitation of immediate search context such as the related immediately preceding query and the viewed documents in the same session, while most previous work relies on long-term collection of implicit feedback information [25]. (2) We perform eager feedback and bring the benefit of implicit user modeling as soon as any new implicit feedback information is available, while the previous work mostly exploits long-term implicit feedback [10]. (3) We propose a retrieval framework to integrate implicit user modeling with the interactive retrieval process, while the previous work either studies implicit user modeling separately from retrieval [3] or only studies specific retrieval models for exploiting implicit feedback to better match a query with documents [23, 27, 22]. (4) We develop and evaluate a personalized Web search agent with online user studies, while most existing work evaluates algorithms offline without real user interactions.

Currently some search engines provide rudimentary personalization, such as Google Personalized web search [6], which allows users to explicitly describe their interests by selecting from predefined topics, so that those results that match their
interests are brought to the top, and My Yahoo! search [16], which gives users the option to save web sites they like and block those they dislike. In contrast, UCAIR personalizes web search through implicit user modeling without any additional user effort. Furthermore, the personalization of UCAIR is provided on the client side, which has two remarkable advantages. First, the user does not need to worry about privacy infringement, which is a big concern for personalized search [26]. Second, both the computation of personalization and the storage of the user profile are done at the client side, so that the server load is reduced dramatically [9].

There have been many works studying user query logs [1] or query dynamics [13]. UCAIR makes direct use of a user's query history to benefit the same user immediately in the same search session. UCAIR first judges whether two neighboring queries belong to the same information session and, if so, it selects terms from the previous query to perform query expansion. Our query expansion approach is similar to automatic query expansion [28, 15, 5], but instead of using pseudo feedback to expand the query, we use the user's implicit feedback information to expand the current query. These two techniques may be combined.

3. OPTIMIZATION IN INTERACTIVE IR

In interactive IR, a user interacts with the retrieval system through an "action dialogue", in which the system responds to each user action with some system action. For example, the user's action may be submitting a query and the system's response may be returning a list of 10 document summaries. In general, the space of user actions and system responses and their granularities would depend on the interface of a particular retrieval system. In principle, every action of the user can potentially provide new evidence to help the system better infer the user's information need. Thus in order to respond optimally, the system should use all the evidence
collected so far about the user when choosing a response. When viewed in this way, most existing search engines are clearly non-optimal. For example, if a user has viewed some documents on the first page of search results, when the user clicks on the "Next" link to fetch more results, an existing retrieval system would still return the next page of results retrieved based on the original query, without considering the new evidence that a particular result has been viewed by the user.

We propose to optimize retrieval performance by adapting system responses based on every action that a user has taken, and cast the optimization problem as a decision task. Specifically, at any time, the system would attempt to do two tasks: (1) User model updating: monitor any useful evidence from the user regarding his/her information need and update the user model as soon as such evidence is available; (2) Improving search results: immediately rerank all the documents that the user has not yet seen, as soon as the user model is updated. We emphasize eager updating and reranking, which makes our work quite different from any existing work. Below we present a formal decision-theoretic framework for optimizing retrieval performance through implicit user modeling in interactive information retrieval.

3.1 A decision-theoretic framework

Let $A$ be the set of all user actions and $R(a)$ be the set of all possible system responses to a user action $a \in A$. At any time, let $A_t = (a_1, \ldots, a_t)$ be the observed sequence of user actions so far (up to time point $t$) and $R_{t-1} = (r_1, \ldots, r_{t-1})$ be the responses that the system has made to the user actions. The system's goal is to choose an optimal response $r_t \in R(a_t)$ for the current user action $a_t$. Let $M$ be the space of all possible user models. We further define a loss function $L(a, r, m) \in \mathbb{R}$, where $a \in A$ is a user action, $r \in R(a)$ is a system response, and $m \in M$ is a user model. $L(a, r, m)$ encodes our decision
preferences and assesses the optimality of responding with $r$ when the current user model is $m$ and the current user action is $a$. According to Bayesian decision theory, the optimal decision at time $t$ is to choose a response that minimizes the Bayes risk, i.e.,

$$r_t = \arg\min_{r \in R(a_t)} \int_M L(a_t, r, m_t)\, P(m_t \mid U, D, A_t, R_{t-1})\, dm_t \qquad (1)$$

where $P(m_t \mid U, D, A_t, R_{t-1})$ is the posterior probability of the user model $m_t$ given all the observations about the user $U$ we have made up to time $t$. To simplify the computation of Equation 1, let us assume that the posterior probability mass $P(m_t \mid U, D, A_t, R_{t-1})$ is mostly concentrated on the mode $m_t^* = \arg\max_{m_t} P(m_t \mid U, D, A_t, R_{t-1})$. We can then approximate the integral with the value of the loss function at $m_t^*$. That is,

$$r_t \approx \arg\min_{r \in R(a_t)} L(a_t, r, m_t^*)$$

where $m_t^* = \arg\max_{m_t} P(m_t \mid U, D, A_t, R_{t-1})$.

Leaving aside how to define and estimate these probabilistic models and the loss function, we can see that such a decision-theoretic formulation suggests that, in order to choose the optimal response to $a_t$, the system should perform two tasks: (1) compute the current user model and obtain $m_t^*$ based on all the useful information; (2) choose a response $r_t$ to minimize the loss function value $L(a_t, r_t, m_t^*)$. When $a_t$ does not affect our belief about $m_t^*$, the first step can be omitted and we may reuse $m_{t-1}^*$ for $m_t^*$.

Note that our framework is quite general since we can potentially model any kind of user actions and system responses. In most cases, as we may expect, the system's response is some ranking of documents, i.e., for most actions $a$, $R(a)$ consists of all the possible rankings of the unseen documents, and the decision problem boils down to choosing the best ranking of unseen documents based on the most current user model. When $a$ is the action of submitting a keyword query, such a response is exactly what a current retrieval system would do. However, we can easily imagine that a more intelligent web search engine would respond to a user's
clicking of the "Next" link (to fetch more unseen results) with a more optimized ranking of documents based on any viewed documents in the current page of results. In fact, according to our eager updating strategy, we may even allow a system to respond to a user's clicking of the browser's "Back" button after viewing a document in the same way, so that the user can maximally benefit from implicit feedback. These are precisely what our UCAIR system does.

3.2 User models

A user model $m \in M$ represents what we know about the user $U$, so in principle, it can contain any information about the user that we wish to model. We now discuss two important components in a user model.

The first component is a model of the user's information need. Presumably, the most important factor affecting the optimality of the system's response is how well the response addresses the user's information need. Indeed, at any time, we may assume that the system has some "belief" about what the user is interested in, which we model through a term vector $\vec{x} = (x_1, \ldots, x_{|V|})$, where $V = \{w_1, \ldots, w_{|V|}\}$ is the set of all terms (i.e., the vocabulary) and $x_i$ is the weight of term $w_i$. Such a term vector is commonly used in information retrieval to represent both queries and documents. For example, the vector-space model assumes that both the query and the documents are represented as term vectors, and the score of a document with respect to a query is computed based on the similarity between the query vector and the document vector [21]. In a language modeling approach, we may also regard the query unigram language model [12, 29] or the relevance model [14] as a term vector representation of the user's information need. Intuitively, $\vec{x}$ would assign high weights to terms that characterize the topics which the user is interested in.

The second component we may include in our user model is the set of documents that the user has already viewed. Obviously, even if a document is relevant,
if the user has already seen the document, it would not be useful to present the same document again. We thus introduce another variable $S \subseteq D$ ($D$ is the whole set of documents in the collection) to denote the subset of documents in the search results that the user has already seen/viewed.

In general, at time $t$, we may represent a user model as $m_t = (S, \vec{x}, A_t, R_{t-1})$, where $S$ is the set of seen documents, $\vec{x}$ is the system's "understanding" of the user's information need, and $(A_t, R_{t-1})$ represents the user's interaction history. Note that an even more general user model may also include other factors such as the user's reading level and occupation.

If we assume that the uncertainty of a user model $m_t$ is solely due to the uncertainty of $\vec{x}$, the computation of our current estimate of the user model $m_t^*$ will mainly involve computing our best estimate of $\vec{x}$. That is, the system would choose a response according to

$$r_t = \arg\min_{r \in R(a_t)} L(a_t, r, (S, \vec{x}^*, A_t, R_{t-1}))$$

where $\vec{x}^* = \arg\max_{\vec{x}} P(\vec{x} \mid U, D, A_t, R_{t-1})$. This is the decision mechanism implemented in the UCAIR system to be described later. In this system, we avoided specifying the probabilistic model $P(\vec{x} \mid U, D, A_t, R_{t-1})$ by computing $\vec{x}^*$ directly with some existing feedback method.

3.3 Loss functions

The exact definition of the loss function $L$ depends on the responses, thus it is inevitably application-specific. We now briefly discuss some possibilities when the response is to rank all the unseen documents and present the top $k$ of them. Let $r = (d_1, \ldots, d_k)$ be the top $k$ documents, $S$ be the set of documents seen by the user, and $\vec{x}^*$ be the system's best guess of the user's information need. We may simply define the loss associated with $r$ as the negative sum of the probability that each of the $d_i$ is relevant, i.e., $L(a, r, m) = -\sum_{i=1}^{k} P(\mathrm{relevant} \mid d_i, m)$. Clearly, in order to minimize this loss function, the optimal response $r$ would contain the $k$ documents with the highest probability of
relevance, which is intuitively reasonable. One deficiency of this "top-k loss function" is that it is not sensitive to the internal order of the selected top $k$ documents, so switching the ranking order of a non-relevant document and a relevant one would not affect the loss, which is unreasonable. To model ranking, we can introduce a factor of the user model, the probability of each of the $k$ documents being viewed by the user, $P(\mathrm{view} \mid d_i)$, and define the following "ranking loss function":

$$L(a, r, m) = -\sum_{i=1}^{k} P(\mathrm{view} \mid d_i)\, P(\mathrm{relevant} \mid d_i, m)$$

Since in general, if $d_i$ is ranked above $d_j$ (i.e., $i < j$), $P(\mathrm{view} \mid d_i) > P(\mathrm{view} \mid d_j)$, this loss function would favor a decision to rank relevant documents above non-relevant ones, as otherwise, we could always switch $d_i$ with $d_j$ to reduce the loss value. Thus the system should simply perform a regular retrieval and rank documents according to the probability of relevance [18].

Depending on the user's retrieval preferences, there can be many other possibilities. For example, if the user does not want to see redundant documents, the loss function should include some redundancy measure on $r$ based on the already seen documents $S$. Of course, when the response is not to choose a ranked list of documents, we would need a different loss function. We discuss one such example that is relevant to the search agent that we implement. When a user enters a query $q_t$ (the current action), our search agent relies on some existing search engine to actually carry out the search. In such a case, even though the search agent does not have control of the retrieval algorithm, it can still attempt to optimize the search results through refining the query sent to the search engine and/or reranking the results obtained from the search engine. The loss functions for reranking are already discussed above; we now take a look at the loss functions for query refinement. Let $f$ be the retrieval function of the search engine that our agent uses, so that $f(q)$ would give us the search results using query $q$.
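As an illustration, the two loss functions just discussed can be sketched in a few lines of Python. This is our sketch, not part of UCAIR; the relevance and view probabilities in the example are made-up numbers, with the view probabilities assumed to decay with rank position.

```python
# Sketch of the "top-k" and "ranking" loss functions (illustrative only).
# p_rel[i] plays the role of P(relevant | d_i, m) for the i-th ranked document;
# p_view[i] plays the role of P(view | d_i), assumed to decrease with rank.

def top_k_loss(p_rel):
    """Negative sum of relevance probabilities; ignores the internal order."""
    return -sum(p_rel)

def ranking_loss(p_rel, p_view):
    """Weights each document's relevance by its probability of being viewed."""
    return -sum(pv * pr for pv, pr in zip(p_view, p_rel))

# Example: a relevant document (0.9) versus a non-relevant one (0.1),
# in two different orders.
p_view = [0.9, 0.5]       # assumed position-based view probabilities
good_order = [0.9, 0.1]   # relevant document ranked first
bad_order = [0.1, 0.9]    # relevant document ranked second

# The top-k loss cannot distinguish the two orders...
assert top_k_loss(good_order) == top_k_loss(bad_order)
# ...but the ranking loss prefers the relevant document ranked first.
assert ranking_loss(good_order, p_view) < ranking_loss(bad_order, p_view)
```

The assertions make the deficiency of the top-k loss concrete: both orders sum to the same relevance mass, while the view-weighted loss strictly favors placing the relevant document first.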
Given that the current action of the user is entering a query $q_t$ (i.e., $a_t = q_t$), our response would be $f(q)$ for some query $q$. Since we have no choice of $f$, our decision is to choose a good $q$. Formally,

$$r_t = f(q^*), \qquad q^* = \arg\min_{q} L(q_t, f(q), m)$$

which shows that our goal is to find an optimal query $q^*$ that would give us the best $f(q)$. A different choice of loss function $L(q_t, f(q), m)$ would lead to a different query refinement strategy. In UCAIR, we heuristically compute $q^*$ by expanding $q_t$ with terms extracted from $r_{t-1}$ whenever $q_{t-1}$ and $q_t$ have high similarity. Note that $r_{t-1}$ and $q_{t-1}$ are contained in $m$ as part of the user's interaction history.

3.4 Implicit user modeling

Implicit user modeling is captured in our framework through the computation of $\vec{x}^* = \arg\max_{\vec{x}} P(\vec{x} \mid U, D, A_t, R_{t-1})$, i.e., the system's current belief of what the user's information need is. Here again there may be many possibilities, leading to different algorithms for implicit user modeling. We now discuss a few of them.

First, when two consecutive queries are related, the previous query can be exploited to enrich the current query and provide more search context to help disambiguation. For this purpose, instead of performing query expansion as we did in the previous section, we could also compute an updated $\vec{x}^*$ based on the previous query and retrieval results. The computed new user model can then be used to rank the documents with a standard information retrieval model.

Second, we can also infer a user's interest based on the summaries of the viewed documents. When a user is presented with a list of summaries of top-ranked documents, if the user chooses to skip the first $n$ documents and to view the $(n+1)$-th document, we may infer that the user is not interested in the displayed summaries of the first $n$ documents, but is attracted by the displayed summary of the $(n+1)$-th document. We can thus use these summaries as negative and positive examples to learn a more accurate user
model $\vec{x}^*$. Here many standard relevance feedback techniques can be exploited [19, 20]. Note that we should use the displayed summaries, as opposed to the actual contents of those documents, since it is possible that the displayed summary of a viewed document is relevant while the document content is actually not. Similarly, a displayed summary may mislead a user to skip a relevant document. Inferring user models based on such displayed information, rather than the actual content of a document, is an important difference between UCAIR and some other similar systems. In UCAIR, both of these strategies for inferring an implicit user model are implemented.

4. UCAIR: A PERSONALIZED SEARCH AGENT

4.1 Design

In this section, we present a client-side web search agent called UCAIR, in which we implement some of the methods discussed in the previous section for performing personalized search through implicit user modeling. UCAIR is a web browser plug-in that acts as a proxy for web search engines. Currently, it is only implemented for Internet Explorer and Google, but it is a matter of engineering to make it run on other web browsers and interact with other search engines.

The issue of privacy is a primary obstacle for deploying any real-world application involving serious user modeling, such as personalized search. For this reason, UCAIR runs strictly as a client-side search agent, as opposed to a server-side application. This way, the captured user information always resides on the computer that the user is using, so the user does not need to release any information to the outside. Client-side personalization also allows the system to easily observe a lot of user information that may not be easily available to a server. Furthermore, performing personalized search on the client side is more scalable than on the server side, since the overhead of computation and storage is distributed among clients.

As shown in Figure 1, the UCAIR toolbar has 3
major components: (1) The (implicit) user modeling module captures a user's search context and history information, including the submitted queries and any clicked search results, and infers search session boundaries. (2) The query modification module selectively improves the query formulation according to the current user model. (3) The result re-ranking module immediately re-ranks any unseen search results whenever the user model is updated.

Figure 1: UCAIR architecture

In UCAIR, we consider four basic user actions: (1) submitting a keyword query; (2) viewing a document; (3) clicking the "Back" button; (4) clicking the "Next" link on a result page. For each of these four actions, the system responds with, respectively, (1) generating a ranked list of results by sending a possibly expanded query to a search engine; (2) updating the information need model $\vec{x}$; (3) reranking the unseen results on the current result page based on the current model $\vec{x}$; and (4) reranking the unseen pages and generating the next page of results based on the current model $\vec{x}$.

Behind these responses, there are three basic tasks: (1) Decide whether the previous query is related to the current query and, if so, expand the current query with useful terms from the previous query or the results of the previous query. (2) Update the information need model $\vec{x}$ based on a newly clicked document summary. (3) Rerank a set of unseen documents based on the current model $\vec{x}$.
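The action-to-response dispatch just described can be summarized in a minimal sketch. This is our illustration under simplifying assumptions, not UCAIR's actual code: the class and method names are ours, documents are treated as bags of terms, and a plain term-count accumulation stands in for the TF-IDF-based feedback update described later.

```python
# Illustrative sketch of the four-action dispatch described above.
# Assumptions (ours, not UCAIR's): search_engine maps a query string to a
# list of documents, each document being a list of terms; the model update
# is a simple term-count accumulation rather than the actual Rocchio update.

class AgentSketch:
    def __init__(self, search_engine):
        self.search = search_engine  # assumed interface: query -> list of docs
        self.x = {}                  # information need model: term -> weight
        self.unseen = []

    def on_query(self, query):
        # Action (1): fetch a ranked list for a (possibly expanded) query.
        self.unseen = self.search(query)
        return self.unseen

    def on_view(self, clicked_summary):
        # Action (2): update the information need model from a clicked summary.
        for term in clicked_summary:
            self.x[term] = self.x.get(term, 0.0) + 1.0

    def on_back_or_next(self):
        # Actions (3)/(4): rerank the unseen results against the current model.
        score = lambda doc: sum(self.x.get(t, 0.0) for t in doc)
        self.unseen.sort(key=score, reverse=True)
        return self.unseen
```

For instance, after the user clicks a summary containing "applet" and "programming" for the query "java", a subsequent "Back" or "Next" action would promote the unseen results that share those terms.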
Below we describe our algorithms for each of them.

4.2 Session boundary detection and query expansion

To effectively exploit previous queries and their corresponding clickthrough information, UCAIR needs to judge whether two adjacent queries belong to the same search session (i.e., detect session boundaries). Existing work on session boundary detection is mostly in the context of web log analysis (e.g., [8]), and uses statistical information rather than textual features. Since our client-side agent does not have access to server query logs, we make session boundary decisions based on the textual similarity between two queries. Because related queries do not necessarily share the same words (e.g., "java island" and "travel Indonesia"), it is insufficient to use only the query text. Therefore we use the search results of the two queries to help decide whether they are topically related. For example, for the above queries "java island" and "travel Indonesia", the words "java", "bali", "island", "indonesia", and "travel" may occur frequently in both queries' search results, yielding a high similarity score. We only use the titles and summaries of the search results to calculate the similarity, since they are available in the retrieved search result page and fetching the full text of every result would significantly slow down the process. To compensate for the terseness of titles and summaries, we retrieve more results than a user would normally view for the purpose of detecting session boundaries (typically 50 results).

The similarity between the previous query q' and the current query q is computed as follows. Let {s'_1, s'_2, ..., s'_n} and {s_1, s_2, ..., s_n} be the result sets for the two queries. We use the pivoted normalization TF-IDF weighting formula [24] to compute a term weight vector s_i for each result s_i. We define the average result s_avg to be the centroid of all the result vectors, i.e., (s_1 + s_2 + ... + s_n) / n. The cosine similarity between the two average results is calculated as

sim(q', q) = (s'_avg · s_avg) / (||s'_avg|| ||s_avg||)

If the similarity value exceeds a predefined threshold, the two queries are considered to be in the same information session.

If the previous query and the current query are found to belong to the same search session, UCAIR attempts to expand the current query with terms from the previous query and its search results. Specifically, for each term in the previous query or the corresponding search results, if its frequency in the results of the current query is greater than a preset threshold (e.g., 5 results out of 50), the term is added to the current query to form an expanded query. In this case, UCAIR sends this expanded query, rather than the original one, to the search engine and returns the results corresponding to the expanded query. Currently, UCAIR only uses the immediately preceding query for query expansion; in principle, we could exploit all related past queries.

4.3 Information need model updating

Suppose at time t, we have observed that the user has viewed k documents whose summaries are s_1, ..., s_k. We update our user model by computing a new information need vector with a standard feedback method in information retrieval (i.e., Rocchio [19]). According to the vector space retrieval model, each clicked summary s_i can be represented by a term weight vector s_i, with each term weighted by a TF-IDF weighting formula [21]. Rocchio computes the centroid vector of all the summaries and interpolates it with the original query vector to obtain an updated term vector. That is,

x = (1 - α) q + (α / k) Σ_{i=1}^{k} s_i

where q is the query vector, k is the number of summaries the user clicks immediately following the current query, and α is a parameter that controls the influence of the clicked summaries on the inferred information need model. In our experiments, α is set to 0.5. Note that we update the information need model whenever the user views a document.

4.4 Result reranking

In general, we want to rerank all the unseen results as soon as the user model is updated. Currently, UCAIR implements reranking in two cases, corresponding to the user clicking the "Back" button and the "Next" link in Internet Explorer. In both cases, the current (updated) user model is used to rerank the unseen results so that the user sees improved search results immediately. To rerank any unseen document summaries, UCAIR uses the standard vector space retrieval model and scores each summary based on the similarity of the result to the current user information need vector x [21]. Since implicit feedback is not completely reliable, we bring up only a small number (e.g., 5) of the highest reranked results, followed by any originally high-ranked results.

Table 1: Sample results of query expansion

5. EVALUATION OF UCAIR

We now present some results on evaluating the two major UCAIR functions: selective query expansion and result reranking based on user clickthrough data.

5.1 Sample results

The query expansion strategy implemented in UCAIR is intentionally conservative to avoid misinterpretation of implicit user models. In practice, whenever it chooses to expand the query, the expansion usually makes sense. In Table 1, we show how UCAIR can successfully distinguish two different search contexts for the query "java map", corresponding to two different previous queries (i.e., "travel Indonesia" vs.
"hashtable"). Due to implicit user modeling, UCAIR intelligently figures out to add "Indonesia" and "class", respectively, to the user's query "java map", which would otherwise be ambiguous, as shown in the original results from Google on March 21, 2005. UCAIR's results are much more accurate than Google's results and reflect personalization in search.

The eager implicit feedback component is designed to respond immediately to a user's activity, such as viewing a document. In Figure 2, we show how UCAIR can successfully disambiguate the ambiguous query "jaguar" by exploiting a viewed document summary. In this case, the initial retrieval results using "jaguar" (shown on the left side) contain two results about Jaguar cars followed by two results about the Jaguar software. However, after the user views the web page content of the second result (about the Jaguar car) and returns to the search result page by clicking the "Back" button, UCAIR automatically promotes two new search results about Jaguar cars (shown on the right side), while the original two results about the Jaguar software are pushed down the list (not visible in the picture).

Figure 2: Screen shots for result reranking

5.2 Quantitative evaluation

To further evaluate UCAIR quantitatively, we conducted a user study on the effectiveness of the eager implicit feedback component. It is a challenge to quantitatively evaluate the potential performance improvement of our proposed model and UCAIR over Google in an unbiased way [7]. Here, we designed a user study in which participants do normal web search and then judge a randomly and anonymously mixed set of results from Google and UCAIR at the end of the search session; participants do not know whether a result comes from Google or UCAIR. We recruited six graduate students with different backgrounds (3 computer science, 2 biology, and 1 chemistry) for this user study.

We use query topics from the TREC 2004 Terabyte track [2] and the TREC 2003 Web track [4] topic distillation task, in the way described below. An example topic from the TREC 2004 Terabyte track appears in Figure 3. The title is a short phrase and may be used as a query to the retrieval system. The description field provides a slightly longer statement of the topic requirement, usually expressed as a single complete sentence or question. Finally, the narrative supplies additional information necessary to fully specify the requirement, expressed in the form of a short paragraph.

<narr> Narrative: Instances of arrests, prosecutions, convictions, and punishments of spammers, and lawsuits against them are relevant. Documents which describe laws to limit spam without giving details of lawsuits or criminal trials are not relevant.

Figure 3: An example of a TREC query topic, expressed in a form which might be given to a human assistant or librarian

Initially, each participant browses 50 topics from either the Terabyte track or the Web track and picks the 5 to 7 most interesting topics. For each picked topic, the participant essentially does a normal web search using UCAIR to find many relevant web pages, using the title of the query topic as the initial keyword query. During this process, the participant may view the search results and possibly click on some interesting ones to view the web pages, just as in normal web search. There is no requirement or restriction on how many queries the participant must submit or when the participant should stop the search for one topic. When the participant plans to change the search topic, he/she simply presses a button to evaluate the search results before actually switching to the next topic.

At the time of evaluation, the 30 top-ranked results from Google and UCAIR (some overlapping) are randomly mixed together so that the participant does not know whether a result comes from Google or UCAIR. The participant then judges the relevance of these results. We measure precision at the top n (n = 5, 10, 20, 30) documents of Google and UCAIR. We also evaluate precision at different recall levels. Altogether, participants judged 368 documents from the Google search results and 429 documents from UCAIR as relevant. Scatter plots of precision at the top 10 and top 20 documents are shown in Figure 4 and Figure 5, respectively (the scatter plot of precision at the top 30 documents is very similar to that at the top 20). Each point of the scatter plots represents the precisions of Google and UCAIR on one query topic.

Figure 4: Precision at top 10 documents of UCAIR and Google

Table 2 shows the average precision at the top n documents over the 32 topics. From Figure 4, Figure 5, and Table 2, we see that the search results from UCAIR are consistently better than those from Google by all the measures. Moreover, the performance improvement is more dramatic for precision at the top 20 documents than at the top 10. One explanation for this is that the more interaction the user has with the system, the more clickthrough data UCAIR can be expected to collect. Thus the retrieval system can build more precise implicit user models, which lead to better retrieval accuracy.

Table 2: Table of average precision at top n documents for 32 query topics

The plot in Figure 6 shows the precision-recall curves for UCAIR and Google, where it is clearly seen that the performance of UCAIR is consistently and considerably better than that of Google at all levels of recall.

6. CONCLUSIONS

In this paper, we studied how to exploit implicit user modeling to intelligently personalize information retrieval and improve search accuracy. Unlike most previous work, we emphasize the use of immediate search context and implicit feedback information, as well as eager updating of search results, to maximally benefit the user. We presented a decision-theoretic framework for optimizing interactive information retrieval based on eager user model updating, in
which the system responds to every action of the user by choosing a system action to optimize a utility function. We further proposed specific techniques to capture and exploit two types of implicit feedback information: (1) identifying a related immediately preceding query and using that query and its corresponding search results to select appropriate terms to expand the current query, and (2) exploiting the viewed document summaries to immediately rerank any documents that have not yet been seen by the user. Using these techniques, we developed a client-side web search agent (UCAIR) on top of a popular search engine (Google). Experiments on web search show that our search agent can improve search accuracy over Google. Since the implicit information we exploit already naturally exists through user interactions, the user does not need to make any extra effort. The developed search agent thus can improve existing web search performance without any additional effort from the user.

Figure 5: Precision at top 20 documents of UCAIR and Google

Figure 6: Precision-recall curves of UCAIR and Google

A New Approach for Evaluating Query Expansion: Query-Document Term Mismatch

Tonya Custis
Thomson Corporation
610 Opperman Drive
St. Paul, MN
tonya.custis@thomson.com

Khalid Al-Kofahi
Thomson Corporation
610 Opperman Drive
St.
Paul, MN
khalid.al-kofahi@thomson.com

ABSTRACT

The effectiveness of information retrieval (IR) systems is influenced by the degree of term overlap between user queries and relevant documents. Query-document term mismatch, whether partial or total, is a fact that must be dealt with by IR systems. Query expansion (QE) is one method for dealing with term mismatch. IR systems implementing query expansion are typically evaluated by executing each query twice, with and without query expansion, and then comparing the two result sets. While this measures an overall change in performance, it does not directly measure the effectiveness of IR systems in overcoming the inherent issue of term mismatch between the query and relevant documents, nor does it provide any insight into how such systems would behave in the presence of query-document term mismatch. In this paper, we propose a new approach for evaluating query expansion techniques. The proposed approach is attractive because it provides an estimate of system performance under varying degrees of query-document term mismatch, it makes use of readily available test collections, and it does not require any additional relevance judgments or any form of manual processing.

Categories and Subject Descriptors
H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval

General Terms
Measurement, Experimentation

1. INTRODUCTION

In our domain,¹ and unlike web search, it is very important for attorneys to find all documents (e.g., cases) that are relevant to an issue. Missing relevant documents may have non-trivial consequences on the outcome of a court proceeding. Attorneys are especially concerned about missing relevant documents when researching a legal topic that is new to them, as they may not be aware of all language variations in such topics. Therefore, it is important to develop information retrieval systems that are robust with respect to language variations, or term mismatch, between queries and relevant documents. During our work on developing such systems, we concluded that current evaluation methods are not sufficient for this purpose.

{Whooping cough, pertussis}, {heart attack, myocardial infarction}, {car wash, automobile cleaning}, and {attorney, legal counsel, lawyer} are all examples of sets of terms that share the same meaning. Often, the terms chosen by users in their queries differ from those appearing in the documents relevant to their information needs. This query-document term mismatch arises from two sources: (1) the synonymy found in natural language, both at the term and the phrasal level, and (2) the degree to which the user is an expert at searching and/or has expert knowledge in the domain of the collection being searched.

IR evaluations are comparative in nature (cf. TREC). Generally, IR evaluations show how System A did in relation to System B on the same test collection, based on various precision- and recall-based metrics. Similarly, IR systems with QE capabilities are typically evaluated by executing each search twice, once with and once without query expansion, and then comparing the two result sets. While this approach shows which system may have performed better overall with respect to a particular test collection, it does not directly or systematically measure the effectiveness of IR systems in overcoming query-document term mismatch. If the goal of QE is to increase search performance by mitigating the effects of query-document term mismatch, then the degree to which a system does so should be measurable in evaluation. An effective evaluation method should measure the performance of IR systems under varying degrees of query-document term mismatch, not just in terms of overall performance on a collection relative to another system.

¹ Thomson Corporation builds information-based solutions for the professional markets, including legal, financial, health care, scientific, and tax and accounting.

In order to measure that a
particular IR system is able to overcome query-document term mismatch by retrieving documents that are relevant to a user's query but that do not necessarily contain the query terms themselves, we systematically introduce term mismatch into the test collection by removing query terms from known relevant documents. Because we purposely induce term mismatch between the queries and known relevant documents in our test collections, the proposed evaluation framework is able to measure the effectiveness of QE in a way that testing on the whole collection is not. If a QE search method finds a document that is known to be relevant but that is nonetheless missing query terms, it shows that the QE technique is indeed robust with respect to query-document term mismatch.

2. RELATED WORK

Accounting for term mismatch between the terms in user queries and the documents relevant to users' information needs has been a fundamental issue in IR research for almost 40 years [38, 37, 47]. Query expansion (QE) is one technique used in IR to improve search performance by increasing the likelihood of term overlap (either explicitly or implicitly) between queries and documents that are relevant to users' information needs. Explicit query expansion occurs at run time, based on the initial search results, as is the case with relevance feedback and pseudo-relevance feedback [34, 37]. Implicit query expansion can be based on statistical properties of the document collection, or it may rely on external knowledge sources such as a thesaurus or an ontology [32, 17, 26, 50, 51, 2]. Regardless of method, QE algorithms that are capable of retrieving relevant documents despite partial or total term mismatch between queries and relevant documents should increase the recall of IR systems (by retrieving documents that would previously have been missed) as well as their precision (by retrieving more relevant documents).

In practice, QE tends to improve the average overall retrieval performance, doing so by improving performance on some queries while making it worse on others. QE techniques are judged as effective in the case that they help more than they hurt overall on a particular collection [47, 45, 41, 27]. Often, the expansion terms added to a query in the query expansion phase end up hurting overall retrieval performance because they introduce semantic noise, causing the meaning of the query to drift. As such, much work has been done on strategies for choosing semantically relevant QE terms in order to avoid query drift [34, 50, 51, 18, 24, 29, 30, 32, 3, 4, 5].

The evaluation of IR systems has received much attention in the research community, both in terms of developing test collections for the evaluation of different systems [11, 12, 13, 43] and in terms of the utility of evaluation metrics such as recall, precision, mean average precision, precision at rank, Bpref, etc. [7, 8, 44, 14]. In addition, there have been comparative evaluations of different QE techniques on various test collections [47, 45, 41].

The IR research community has also given attention to differences in the performance of individual queries. Research efforts have been made to predict which queries will be improved by QE and then to apply it selectively only to those queries [1, 5, 27, 29, 15, 48], in order to achieve optimal overall performance. Related work has also been done on predicting query difficulty, i.e., which queries are likely to perform poorly [1, 4, 5, 9]. There is general interest in the research community in improving the robustness of IR systems by improving retrieval performance on difficult queries, as is evidenced by the Robust track in the TREC competitions and by new evaluation measures such as GMAP. GMAP (geometric mean average precision) gives more weight to the lower end of the average precision scale (as opposed to MAP), thereby emphasizing the degree to which difficult or poorly performing queries contribute to the score [33].

However, no attention has been given to evaluating, in quantifiable terms, the robustness of IR systems implementing QE with respect to query-document term mismatch. By purposely inducing mismatch between the terms in queries and relevant documents, our evaluation framework provides a controlled manner in which to degrade the quality of the queries with respect to their relevant documents, and then to measure both the degree of (induced) difficulty of the query and the degree to which QE improves the retrieval performance of the degraded query.

The work most similar to our own consists of work in which document collections or queries are altered in a systematic way to measure differences in query performance. [42] introduces into the document collection pseudowords that are ambiguous with respect to word sense, in order to measure the degree to which word sense disambiguation is useful in IR. [6] experiments with altering the document collection by adding semantically related expansion terms to documents at indexing time. In cross-language IR, [28] explores different query expansion techniques while purposely degrading translation resources, in what amounts to expanding a query with only a controlled percentage of its translation terms. Although similar in introducing a controlled amount of variance into their test collections, these works differ from the work presented in this paper in that ours explicitly and systematically measures query effectiveness in the presence of query-document term mismatch.

3. METHODOLOGY

In order to accurately measure IR system performance in the presence of query-term mismatch, we need to be able to adjust the degree of term mismatch in a test corpus in a principled manner. Our approach is to introduce query-document term mismatch into a corpus in a controlled manner and then measure the performance of IR systems as the degree of term
mismatch changes. We systematically remove query terms from known relevant documents, creating alternate versions of a test collection that differ only in how many or which query terms have been removed from the documents relevant to a particular query. Introducing query-document term mismatch into the test collection in this manner allows us to manipulate the degree of term mismatch between relevant documents and queries in a controlled way. This removal process affects only the relevant documents in the search collection; the queries themselves remain unaltered. Query terms are removed from documents one by one, so the differences in IR system performance can be measured with respect to individual missing terms. In the most extreme case (i.e., when the length of the query is less than or equal to the number of query terms removed from the relevant documents), there is no term overlap between a query and its relevant documents.

Notice that, for a given query, only relevant documents are modified. Non-relevant documents are left unchanged, even in the case that they contain query terms. Although, on the surface, we are changing the distribution of terms between the relevant and non-relevant document sets by removing query terms from the relevant documents, doing so does not change the conceptual relevancy of these documents. Systematically removing query terms from known relevant documents introduces a controlled amount of query-document term mismatch by which we can evaluate the degree to which particular QE techniques are able to retrieve conceptually relevant documents despite a lack of actual term overlap. Removing a query term from relevant documents simply masks the presence of that query term in those documents; it does not in any way change their conceptual relevancy.

The evaluation framework presented in this paper consists of three elements: a test collection, C; a strategy for selecting which query terms to remove from the relevant documents in that collection, S; and a metric by which to compare the performance of the IR systems, m. The test collection, C, consists of a document collection, queries, and relevance judgments. The strategy, S, determines the order and manner in which query terms are removed from the relevant documents in C. This evaluation framework is not metric-specific; any metric (MAP, P@10, recall, etc.) can be used to measure IR system performance.

Although test collections are difficult to come by, it should be noted that this evaluation framework can be used on any available test collection. In fact, using this framework stretches the value of existing test collections, in that one collection becomes several when query terms are removed from relevant documents, thereby increasing the amount of information that can be gained from evaluating on a particular collection.

In other evaluations of QE effectiveness, the controlled variable is simply whether or not queries have been expanded, compared in terms of some metric. In contrast, the controlled variable in this framework is the query term that has been removed from the documents relevant to that query, as determined by the removal strategy, S.
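The masking step at the core of this framework can be sketched in a few lines. The following Python sketch is ours, not the authors' implementation; the function names are illustrative, and it adopts one concrete choice of S discussed later in the paper's experiments (highest-IDF-first, additive removal).

```python
import math

def idf_removal_order(query_terms, collection):
    """One concrete strategy S: rank query terms by IDF, highest first."""
    n = len(collection)
    def idf(term):
        df = sum(1 for text in collection.values() if term in text.lower().split())
        return math.log((n + 1) / (df + 1))
    # dict.fromkeys dedupes while keeping query order, so ties break deterministically
    return sorted(dict.fromkeys(query_terms), key=idf, reverse=True)

def mask_terms(collection, relevant_ids, terms_to_mask):
    """Mask the given query terms in the relevant documents only;
    non-relevant documents keep any query terms they contain."""
    masked = {}
    for doc_id, text in collection.items():
        if doc_id in relevant_ids:
            masked[doc_id] = " ".join(w for w in text.split()
                                      if w.lower() not in terms_to_mask)
        else:
            masked[doc_id] = text
    return masked

def degraded_collections(collection, query_terms, relevant_ids, ks=(1, 2, 3, 5, 7)):
    """Additive removal: one altered collection per prefix of the removal order.
    Each IR system is then run, unchanged, against each altered collection."""
    order = idf_removal_order(query_terms, collection)
    return {k: mask_terms(collection, relevant_ids, set(order[:k])) for k in ks}
```

Any retrieval metric m (MAP, P@10, recall) can then be computed on each altered collection, plotting performance as a function of the number of masked terms.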
Query terms are removed one by one, in a manner and order determined by S, so that collections differ only with respect to the one term that has been removed (or masked) in the documents relevant to that query.\nIt is in this way that we can explicitly measure the degree to which an IR system overcomes query-document term mismatch.\nThe choice of a query term removal strategy is relatively flexible; the only restriction in choosing a strategy S is that query terms must be removed one at a time.\nTwo decisions must be made when choosing a removal strategy S.\nThe first is the order in which S removes terms from the relevant documents.\nPossible orders for removal could be based on metrics such as IDF or the global probability of a term in a document collection.\nBased on the purpose of the evaluation and the retrieval algorithm being used, it might make more sense to choose a removal order for S based on query term IDF or perhaps based on a measure of query term probability in the document collection.\nOnce an order for removal has been decided, a manner for term removal\/masking must be decided.\nIt must be determined if S will remove the terms individually (i.e., remove just one different term each time) or additively (i.e., remove one term first, then that term in addition to another, and so on).\nThe incremental additive removal of query terms from relevant documents allows the evaluation to show the degree to which IR system performance degrades as more and more query terms are missing, thereby increasing the degree of query-document term mismatch.\nRemoving terms individually allows for a clear comparison of the contribution of QE in the absence of each individual query term.\n4.\nEXPERIMENTAL SET-UP 4.1 IR Systems We used the proposed evaluation framework to evaluate four IR systems on two test collections.\nOf the four systems used in the evaluation, two implement query expansion techniques: Okapi (with pseudo-feedback for QE), and a proprietary concept 
search engine (we``ll call it TCS, for Thomson Concept Search).\nTCS is a language modeling based retrieval engine that utilizes a subject-appropriate external corpus (i.e., legal or news) as a knowledge source.\nThis external knowledge source is a corpus separate from, but thematically related to, the document collection to be searched.\nTranslation probabilities for QE [2] are calculated from these large external corpora.\nOkapi (without feedback) and a language model query likelihood (QL) model (implemented using Indri) are included as keyword-only baselines.\nOkapi without feedback is intended as an analogous baseline for Okapi with feedback, and the QL model is intended as an appropriate baseline for TCS, as they both implement language-modeling based retrieval algorithms.\nWe choose these as baselines because they are dependent only on the words appearing in the queries and have no QE capabilities.\nAs a result, we expect that when query terms are removed from relevant documents, the performance of these systems should degrade more dramatically than their counterparts that implement QE.\nThe Okapi and QL model results were obtained using the Lemur Toolkit.2 Okapi was run with the parameters k1=1.2, b=0.75, and k3=7.\nWhen run with feedback, the feedback parameters used in Okapi were set at 10 documents and 25 terms.\nThe QL model used Jelinek-Mercer smoothing, with \u03bb = 0.6.\n4.2 Test Collections We evaluated the performance of the four IR systems outlined above on two different test collections.\nThe two test collections used were the TREC AP89 collection (TIPSTER disk 1) and the FSupp Collection.\nThe FSupp Collection is a proprietary collection of 11,953 case law documents for which we have 44 queries (ranging from four to twenty-two words after stop word removal) with full relevance judgments.3 The average length of documents in the FSupp Collection is 3444 words.\n2 www.lemurproject.org 3 Each of the 11,953 documents was evaluated by domain experts 
with respect to each of the 44 queries.\nThe TREC AP89 test collection contains 84,678 documents, averaging 252 words in length.\nIn our evaluation, we used both the title and the description fields of topics 151200 as queries, so we have two sets of results for the AP89 Collection.\nAfter stop word removal, the title queries range from two to eleven words and the description queries range from four to twenty-six terms.\n4.3 Query Term Removal Strategy In our experiments, we chose to sequentially and additively remove query terms from highest-to-lowest inverse document frequency (IDF) with respect to the entire document collection.\nTerms with high IDF values tend to influence document ranking more than those with lower IDF values.\nAdditionally, high IDF terms tend to be domainspecific terms that are less likely to be known to non-expert user, hence we start by removing these first.\nFor the FSupp Collection, queries were evaluated incrementally with one, two, three, five, and seven terms removed from their corresponding relevant documents.\nThe longer description queries from TREC topics 151-200 were likewise evaluated on the AP89 Collection with one, two, three, five, and seven query terms removed from their relevant documents.\nFor the shorter TREC title queries, we removed one, two, three, and five terms from the relevant documents.\n4.4 Metrics In this implementation of the evaluation framework, we chose three metrics by which to compare IR system performance: mean average precision (MAP), precision at 10 documents (P10), and recall at 1000 documents.\nAlthough these are the metrics we chose to demonstrate this framework, any appropriate IR metrics could be used within the framework.\n5.\nRESULTS 5.1 FSupp Collection Figures 1, 2, and 3 show the performance (in terms of MAP, P10 and Recall, respectively) for the four search engines on the FSupp Collection.\nAs expected, the performance of the keyword-only IR systems, QL and Okapi, drops quickly as query terms 
are removed from the relevant documents in the collection. The performance of Okapi with feedback (Okapi FB) is somewhat surprising in that on the original collection (i.e., prior to query term removal), its performance is worse than that of Okapi without feedback on all three measures. TCS outperforms the QL keyword baseline on every measure except for MAP on the original collection (i.e., prior to removing any query terms). Because TCS employs implicit query expansion using an external domain-specific knowledge base, it is less sensitive to term removal (i.e., mismatch) than Okapi FB, which relies on terms from the top-ranked documents retrieved by an initial keyword-only search. Because overall search engine performance is frequently measured in terms of MAP, and because other evaluations of QE often only consider performance on the entire collection (i.e., they do not consider term mismatch), the QE implemented in TCS would be considered (in another evaluation) to hurt performance on the FSupp Collection.

Figure 1: The performance of the four retrieval systems on the FSupp collection in terms of Mean Average Precision (MAP) and as a function of the number of query terms removed (the horizontal axis).

Figure 2: The performance of the four retrieval systems on the FSupp collection in terms of Precision at 10 and as a function of the number of query terms removed (the horizontal axis).

However, when we look at the comparison of TCS to QL when query terms are removed from the relevant documents, we can see that the QE in TCS is indeed
contributing positively to the search.

5.2 The AP89 Collection: using the description queries
Figures 4, 5, and 6 show the performance of the four IR systems on the AP89 Collection, using the TREC topic descriptions as queries. The most interesting difference between the performance on the FSupp Collection and the AP89 Collection is the reversal of Okapi FB and TCS. On FSupp, TCS outperformed the other engines consistently (see Figures 1, 2, and 3); on the AP89 Collection, Okapi FB is clearly the best performer (see Figures 4, 5, and 6). This is all the more interesting given that QE in Okapi FB takes place after the first search iteration, which we would expect to be handicapped when query terms are removed.

Figure 3: The Recall (at 1000) of the four retrieval systems on the FSupp collection as a function of the number of query terms removed (the horizontal axis).

Figure 4: MAP of the four IR systems on the AP89 Collection, using TREC description queries. MAP is measured as a function of the number of query terms removed.

Figure 5: Precision at 10 of the four IR systems on the AP89 Collection, using TREC description queries. P at 10 is measured as a function of the number of query terms removed.

Figure 6: Recall (at 1000) of the four IR systems on the AP89 Collection, using TREC description queries, and as a function of the number of query terms removed.

Looking at P10 in Figure 5, we can see that TCS and Okapi FB score similarly on P10, starting at the point where one query term is removed from relevant documents. At two query terms removed, TCS starts outperforming Okapi FB. If modeling this in terms of expert versus non-expert users, we could conclude that TCS might be a better search engine for non-experts to use on the AP89 Collection, while Okapi FB would be best for an expert searcher. It is interesting to note that on each metric for the AP89 description queries, TCS performs more poorly than all the other systems on the original collection, but quickly surpasses the baseline systems and approaches Okapi FB's performance as terms are removed. This is again a case where the performance of a system on the entire collection is not necessarily indicative of how it handles query-document term mismatch.

5.3 The AP89 Collection: using the title queries
Figures 7, 8, and 9 show the performance of the four IR systems on the AP89 Collection, using the TREC topic titles as queries. As with the AP89 description queries, Okapi FB is again the best performer of the four systems in the evaluation. As before, the performance of the Okapi and QL systems, the non-QE baseline systems, sharply degrades as query terms are removed. On the shorter queries, TCS seems to have a harder time catching up to the performance of Okapi FB as terms are removed. Perhaps the most interesting result from our evaluation is that although the keyword-only baselines performed consistently and as expected on both collections with respect to query term removal from relevant documents, the performances of the engines implementing QE techniques differed dramatically between
collections.

Figure 7: MAP of the four IR systems on the AP89 Collection, using TREC title queries and as a function of the number of query terms removed.

Figure 8: Precision at 10 of the four IR systems on the AP89 Collection, using TREC title queries, and as a function of the number of query terms removed.

Figure 9: Recall (at 1000) of the four IR systems on the AP89 Collection, using TREC title queries and as a function of the number of query terms removed.

6. DISCUSSION
The intuition behind this evaluation framework is to measure the degree to which various QE techniques overcome term mismatch between queries and relevant documents. In general, it is easy to evaluate the overall performance of different techniques for QE in comparison to each other or against a non-QE variant on any complete test collection. Such an approach does tell us which systems perform better on a complete test collection, but it does not measure the ability of a particular QE technique to retrieve relevant documents despite partial or complete term mismatch between queries and relevant documents. A systematic evaluation of IR systems as outlined in this paper is useful not only with respect to measuring the general success or failure of particular QE techniques in the presence of query-document term mismatch, but it also provides insight into how a
particular IR system will perform when used by expert versus non-expert users on a particular collection. The less a user knows about the domain of the document collection on which they are searching, the more prevalent query-document term mismatch is likely to be. This distinction is especially relevant in the case that the test collection is domain-specific (i.e., medical or legal, as opposed to a more general domain, such as news), where the distinction between experts and non-experts may be more marked. For example, a non-expert in the medical domain might search for whooping cough, but relevant documents might instead contain the medical term pertussis. Since query terms are masked only in the relevant documents, this evaluation framework is actually biased against retrieving relevant documents. This is because non-relevant documents may also contain query terms, which can cause a retrieval system to rank such documents higher than it would have before terms were masked in relevant documents. Still, we think this is a more realistic scenario than removing terms from all documents regardless of relevance. The degree to which a QE technique is well suited to a particular collection can be evaluated in terms of its ability to still find the relevant documents, even when they are missing query terms, despite the bias of this approach against relevant documents. However, given that Okapi FB and TCS outperformed each other on two different collection sets, further investigation into the degree of compatibility between QE approach and target collection is probably warranted. Furthermore, the investigation of other term removal strategies could provide insight into the behavior of different QE techniques and their overall impact on the user experience. As mentioned earlier, our choice of the term removal strategy was motivated by (1) our desire to see the highest impact on system performance as terms are removed and (2) because high IDF terms, in
our domain context, are more likely to be domain-specific, which allows us to better understand the performance of an IR system as experienced by expert and non-expert users. Although not attempted in our experiments, another application of this evaluation framework would be to remove query terms individually, rather than incrementally, to analyze which terms (or possibly which types of terms) are being helped most by a QE technique on a particular test collection. This could lead to insight as to when QE should and should not be applied. This evaluation framework allows us to see how IR systems perform in the presence of query-document term mismatch. In other evaluations, the performance of a system is measured only on the entire collection, in which the degree of query-document term mismatch is not known. By systematically introducing this mismatch, we can see that even if an IR system is not the best performer on the entire collection, its performance may nonetheless be more robust to query-document term mismatch than other systems. Such robustness makes a system more user-friendly, especially to non-expert users. This paper presents a novel framework for IR system evaluation, the applications of which are numerous. The results presented in this paper are not by any means meant to be exhaustive or entirely representative of the ways in which this evaluation could be applied. To be sure, there is much future work that could be done using this framework. In addition to looking at average performance of IR systems, the results of individual queries could be examined and compared more closely, perhaps giving more insight into the classification and prediction of difficult queries, or perhaps showing which QE techniques improve (or degrade) individual query performance under differing degrees of query-document term mismatch. Indeed, this framework would also benefit from further testing on a larger collection.

7. CONCLUSION
The proposed evaluation
framework allows us to measure the degree to which different IR systems overcome (or don't overcome) term mismatch between queries and relevant documents. Evaluations of IR systems employing QE performed only on the entire collection do not take into account that the purpose of QE is to mitigate the effects of term mismatch in retrieval. By systematically removing query terms from relevant documents, we can measure the degree to which QE contributes to a search by showing the difference between the performances of a QE system and its keyword-only baseline when query terms have been removed from known relevant documents. Further, we can model the behavior of expert versus non-expert users by manipulating the amount of query-document term mismatch introduced into the collection. The evaluation framework proposed in this paper is attractive for several reasons. Most importantly, it provides a controlled manner in which to measure the performance of QE with respect to query-document term mismatch. In addition, this framework takes advantage of and stretches the amount of information we can get from existing test collections. Further, this evaluation framework is not metric-specific: information in terms of any metric (MAP, P@10, etc.) can be gained from evaluating an IR system this way. It should also be noted that this framework is generalizable to any IR system, in that it evaluates how well IR systems satisfy users' information needs as represented by their queries. An IR system that is easy to use should be good at retrieving documents that are relevant to users' information needs, even if the queries provided by the users do not contain the same keywords as the relevant documents.

8. REFERENCES
[1] Amati, G., C. Carpineto, and G. Romano. Query difficulty, robustness and selective application of query expansion. In Proceedings of the European Conference on Information Retrieval (ECIR 2004), pp. 127-137.
[2] Berger, A. and J.D.
Lafferty. 1999. Information retrieval as statistical translation. In Research and Development in Information Retrieval, pp. 222-229.
[3] Billerbeck, B., F. Scholer, H.E. Williams, and J. Zobel. 2003. Query expansion using associated queries. In Proceedings of CIKM 2003, pp. 2-9.
[4] Billerbeck, B. and J. Zobel. 2003. When Query Expansion Fails. In Proceedings of SIGIR 2003, pp. 387-388.
[5] Billerbeck, B. and J. Zobel. 2004. Questioning Query Expansion: An Examination of Behaviour and Parameters. In Proceedings of the 15th Australasian Database Conference (ADC 2004), pp. 69-76.
[6] Billerbeck, B. and J. Zobel. 2005. Document Expansion versus Query Expansion for ad-hoc Retrieval. In Proceedings of the 10th Australasian Document Computing Symposium.
[7] Buckley, C. and E.M. Voorhees. 2000. Evaluating Evaluation Measure Stability. In Proceedings of SIGIR 2000, pp. 33-40.
[8] Buckley, C. and E.M. Voorhees. 2004. Retrieval evaluation with incomplete information. In Proceedings of SIGIR 2004, pp. 25-32.
[9] Carmel, D., E. Yom-Tov, A. Darlow, and D. Pelleg. 2006. What Makes a Query Difficult? In Proceedings of SIGIR 2006, pp. 390-397.
[10] Carpineto, C., R. Mori, and G. Romano. 1998. Informative Term Selection for Automatic Query Expansion. In The 7th Text REtrieval Conference, pp. 363-369.
[11] Carterette, B. and J. Allan. 2005. Incremental Test Collections. In Proceedings of CIKM 2005, pp. 680-687.
[12] Carterette, B., J. Allan, and R. Sitaraman. 2006. Minimal Test Collections for Retrieval Evaluation. In Proceedings of SIGIR 2006, pp. 268-275.
[13] Cormack, G.V., C.R. Palmer, and C.L. Clarke. 1998. Efficient Construction of Large Test Collections. In Proceedings of SIGIR 1998, pp. 282-289.
[14] Cormack, G. and T.R. Lynam. 2006. Statistical Precision of Information Retrieval Evaluation. In Proceedings of SIGIR 2006, pp. 533-540.
[15] Cronen-Townsend, S., Y. Zhou, and W.B. Croft. 2004. A Language Modeling Framework for Selective Query Expansion. CIIR Technical Report.
[16] Efthimiadis, E.N. 1996. Query Expansion. In Martha E. Williams (ed.), Annual Review of Information Systems and Technology (ARIST), v31, pp. 121-187.
[17] Evans, D.A. and R.G. Lefferts. 1995. CLARIT-TREC Experiments. Information Processing & Management, 31(3): 385-395.
[18] Fang, H. and C.X. Zhai. 2006. Semantic Term Matching in Axiomatic Approaches to Information Retrieval. In Proceedings of SIGIR 2006, pp. 115-122.
[19] Gao, J., J. Nie, G. Wu, and G. Cao. 2004. Dependence language model for information retrieval. In Proceedings of SIGIR 2004, pp. 170-177.
[20] Harman, D.K. 1992. Relevance feedback revisited. In Proceedings of SIGIR 1992, pp. 1-10.
[21] Harman, D.K., ed. 1993. The First Text REtrieval Conference (TREC-1): 1992.
[22] Harman, D.K., ed. 1994. The Second Text REtrieval Conference (TREC-2): 1993.
[23] Harman, D.K., ed. 1995. The Third Text REtrieval Conference (TREC-3): 1994.
[24] Harman, D.K. 1988. Towards Interactive Query Expansion. In Proceedings of SIGIR 1988, pp. 321-331.
[25] Hofmann, T. 1999. Probabilistic latent semantic indexing. In Proceedings of SIGIR 1999, pp. 50-57.
[26] Jing, Y. and W.B. Croft. 1994. The Association Thesaurus for Information Retrieval. In Proceedings of RIAO 1994, pp. 146-160.
[27] Lu, X.A. and R.B. Keefer. 1995. Query expansion/reduction and its impact on retrieval effectiveness. In D.K. Harman, ed., The Third Text REtrieval Conference (TREC-3). Gaithersburg, MD: National Institute of Standards and Technology, pp. 231-239.
[28] McNamee, P. and J. Mayfield. 2002. Comparing Cross-Language Query Expansion Techniques by Degrading Translation Resources. In Proceedings of SIGIR 2002, pp. 159-166.
[29] Mitra, M., A. Singhal, and C. Buckley. 1998. Improving Automatic Query Expansion. In Proceedings of SIGIR 1998, pp. 206-214.
[30] Peat, H.J. and P. Willett. 1991. The limitations of term co-occurrence data for query expansion in document retrieval systems. Journal of the American Society for Information Science, 42(5): 378-383.
[31] Ponte, J.M. and W.B. Croft. 1998. A language modeling approach to information retrieval. In Proceedings of SIGIR 1998, pp. 275-281.
[32] Qiu, Y. and H. Frei. 1993. Concept based query expansion. In Proceedings of SIGIR 1993, pp. 160-169.
[33] Robertson, S. 2006. On GMAP - and other transformations. In Proceedings of CIKM 2006, pp. 78-83.
[34] Robertson, S.E. and K. Sparck Jones. 1976. Relevance Weighting of Search Terms. Journal of the American Society for Information Science, 27(3): 129-146.
[35] Robertson, S.E., S. Walker, S. Jones, M.M. Hancock-Beaulieu, and M. Gatford. 1994. Okapi at TREC-2. In D.K. Harman (ed.), The Second Text REtrieval Conference (TREC-2): 1993, pp. 21-34.
[36] Robertson, S.E., S. Walker, S. Jones, M.M. Hancock-Beaulieu, and M. Gatford. 1995. Okapi at TREC-3. In D.K. Harman (ed.), The Third Text REtrieval Conference (TREC-3): 1994, pp. 109-126.
[37] Rocchio, J.J. 1971. Relevance feedback in information retrieval. In G. Salton (ed.), The SMART Retrieval System. Prentice-Hall, Englewood Cliffs, NJ, pp. 313-323.
[38] Salton, G. 1968. Automatic Information Organization and Retrieval. McGraw-Hill.
[39] Salton, G. 1971. The SMART Retrieval System: Experiments in Automatic Document Processing. Englewood Cliffs, NJ: Prentice-Hall.
[40] Salton, G. 1980. Automatic term class construction using relevance - a summary of work in automatic pseudoclassification. Information Processing & Management, 16(1): 1-15.
[41] Salton, G. and C. Buckley. 1988. On the Use of Spreading Activation Methods in Automatic Information Retrieval. In Proceedings of SIGIR 1988, pp. 147-160.
[42] Sanderson, M. 1994. Word sense disambiguation and information retrieval. In Proceedings of SIGIR 1994, pp. 161-175.
[43] Sanderson, M. and H. Joho. 2004. Forming test collections with no system pooling. In Proceedings of SIGIR 2004, pp. 186-193.
[44] Sanderson, M. and J. Zobel. 2005. Information Retrieval System Evaluation: Effort, Sensitivity, and Reliability. In Proceedings of SIGIR 2005, pp. 162-169.
[45] Smeaton, A.F. and C.J. Van Rijsbergen. 1983. The Retrieval Effects of Query Expansion on a Feedback Document Retrieval System. Computer Journal, 26(3): 239-246.
[46] Song, F. and W.B. Croft. 1999. A general language model for information retrieval. In Proceedings of the Eighth International Conference on Information and Knowledge Management, pp. 316-321.
[47] Sparck Jones, K. 1971. Automatic Keyword Classification for Information Retrieval. London: Butterworths.
[48] Terra, E. and C.L. Clarke. 2004. Scoring missing terms in information retrieval tasks. In Proceedings of CIKM 2004, pp. 50-58.
[49] Turtle, H. 1994. Natural Language vs. Boolean Query Evaluation: A Comparison of Retrieval Performance. In Proceedings of SIGIR 1994, pp. 212-220.
[50] Voorhees, E.M. 1994a. On Expanding Query Vectors with Lexically Related Words. In D.K. Harman, ed., Text REtrieval Conference (TREC-1): 1992.
[51] Voorhees, E.M. 1994b. Query Expansion Using Lexical-Semantic Relations. In Proceedings of SIGIR 1994, pp.
61-69.

A New Approach for Evaluating Query Expansion: Query-Document Term Mismatch

ABSTRACT
The effectiveness of information retrieval (IR) systems is influenced by the degree of term overlap between user queries and relevant documents. Query-document term mismatch, whether partial or total, is a fact that must be dealt with by IR systems. Query Expansion (QE) is one method for dealing with term mismatch. IR systems implementing query expansion are typically evaluated by executing each query twice, with and without query expansion, and then comparing the two result sets. While this measures an overall change in performance, it does not directly measure the effectiveness of IR systems in overcoming the inherent issue of term mismatch between the query and relevant documents, nor does it provide any insight into how such systems would behave in the presence of query-document term mismatch. In this paper, we propose a new approach for evaluating query expansion techniques. The proposed approach is attractive because it provides an estimate of system performance under varying degrees of query-document term mismatch, it makes use of readily available test collections, and it does not require any additional relevance judgments or any form of manual processing.

1. INTRODUCTION
In our domain, and unlike web search, it is very important for attorneys to find all documents (e.g., cases) that are relevant to an issue. Missing relevant documents may have non-trivial consequences on the outcome of a court proceeding. Attorneys are especially concerned about missing relevant documents when researching a legal topic that is new to them, as they may not be aware of all language variations in such topics. Therefore, it is important to develop information retrieval systems that are robust with respect to language variations or term mismatch between queries and relevant documents. During our work on developing such systems, we concluded that current
evaluation methods are not sufficient for this purpose.

{Whooping cough, pertussis}, {heart attack, myocardial infarction}, {car wash, automobile cleaning}, and {attorney, legal counsel, lawyer} are all examples of sets of terms that share the same meaning. Often, the terms chosen by users in their queries are different from those appearing in the documents relevant to their information needs. This query-document term mismatch arises from two sources: (1) the synonymy found in natural language, both at the term and the phrasal level, and (2) the degree to which the user is an expert at searching and/or has expert knowledge in the domain of the collection being searched. IR evaluations are comparative in nature (cf. TREC). Generally, IR evaluations show how System A did in relation to System B on the same test collection based on various precision- and recall-based metrics. Similarly, IR systems with QE capabilities are typically evaluated by executing each search twice, once with and once without query expansion, and then comparing the two result sets. While this approach shows which system may have performed better overall with respect to a particular test collection, it does not directly or systematically measure the effectiveness of IR systems in overcoming query-document term mismatch. If the goal of QE is to increase search performance by mitigating the effects of query-document term mismatch, then the degree to which a system does so should be measurable in evaluation. An effective evaluation method should measure the performance of IR systems under varying degrees of query-document term mismatch, not just in terms of overall performance on a collection relative to another system.
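The masking step at the heart of this framework, removing query terms from the known relevant documents only, while leaving non-relevant documents untouched, can be sketched in a few lines. This is a toy illustration under our own assumptions (documents modeled as term lists; function names and example data are ours, not the authors' implementation); removal is ordered by IDF, following the strategy described in Section 4.3:

```python
import math

def idf(term, docs):
    """Inverse document frequency of a term over the whole collection."""
    df = sum(1 for doc in docs if term in doc)
    return math.log(len(docs) / df) if df else 0.0

def degrade_collection(query, docs, relevant_ids, k):
    """Mask the k highest-IDF query terms, but only inside the known
    relevant documents; non-relevant documents keep all their terms,
    so they may still match the query (the bias noted in Section 6)."""
    terms = sorted(dict.fromkeys(query), key=lambda t: idf(t, docs), reverse=True)
    masked = set(terms[:k])
    return [[t for t in doc if t not in masked] if i in relevant_ids else list(doc)
            for i, doc in enumerate(docs)]

# Toy collection: docs 0 and 2 are relevant to the query "whooping cough".
docs = [["pertussis", "whooping", "cough", "vaccine"],
        ["influenza", "vaccine", "trial"],
        ["whooping", "cough", "outbreak"]]
degraded = degrade_collection(["whooping", "cough"], docs, relevant_ids={0, 2}, k=1)
```

Running the degraded collection through each engine, with and without QE, then yields the per-removal-level comparisons reported in Section 5.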
Footnote 1: Thomson Corporation builds information-based solutions for professional markets including legal, financial, health care, scientific, and tax and accounting.

In order to measure whether a particular IR system is able to overcome query-document term mismatch by retrieving documents that are relevant to a user's query, but that do not necessarily contain the query terms themselves, we systematically introduce term mismatch into the test collection by removing query terms from known relevant documents. Because we are purposely inducing term mismatch between the queries and known relevant documents in our test collections, the proposed evaluation framework is able to measure the effectiveness of QE in a way that testing on the whole collection is not. If a QE search method finds a document that is known to be relevant but that is nonetheless missing query terms, it shows that the QE technique is indeed robust with respect to query-document term mismatch.

2. RELATED WORK
Accounting for term mismatch between the terms in user queries and the documents relevant to users' information needs has been a fundamental issue in IR research for almost 40 years [38, 37, 47]. Query expansion (QE) is one technique used in IR to improve search performance by increasing the likelihood of term overlap (either explicitly or implicitly) between queries and documents that are relevant to users' information needs. Explicit query expansion occurs at run-time, based on the initial search results, as is the case with relevance feedback and pseudo relevance feedback [34, 37]. Implicit query expansion can be based on statistical properties of the document collection, or it may rely on external knowledge sources such as a thesaurus or an ontology [32, 17, 26, 50, 51, 2]. Regardless of method, QE algorithms that are capable of retrieving relevant documents despite partial or total term mismatch between queries and relevant documents should increase the recall of IR systems (by retrieving
documents that would have previously been missed) as well as their precision (by retrieving more relevant documents). In practice, QE tends to improve the average overall retrieval performance, doing so by improving performance on some queries while making it worse on others. QE techniques are judged as effective in the case that they help more than they hurt overall on a particular collection [47, 45, 41, 27]. Often, the expansion terms added to a query in the query expansion phase end up hurting the overall retrieval performance because they introduce semantic noise, causing the meaning of the query to drift. As such, much work has been done with respect to different strategies for choosing semantically relevant QE terms to include in order to avoid query drift [34, 50, 51, 18, 24, 29, 30, 32, 3, 4, 5]. The evaluation of IR systems has received much attention in the research community, both in terms of developing test collections for the evaluation of different systems [11, 12, 13, 43] and in terms of the utility of evaluation metrics such as recall, precision, mean average precision, precision at rank, Bpref, etc.
[7, 8, 44, 14]. In addition, there have been comparative evaluations of different QE techniques on various test collections [47, 45, 41]. The IR research community has also given attention to differences between the performance of individual queries. Research efforts have been made to predict which queries will be improved by QE and then to apply it selectively only to those queries [1, 5, 27, 29, 15, 48], to achieve optimal overall performance. Related work on predicting query difficulty, or which queries are likely to perform poorly, has also been done [1, 4, 5, 9]. There is general interest in the research community in improving the robustness of IR systems by improving retrieval performance on difficult queries, as is evidenced by the Robust track in the TREC competitions and new evaluation measures such as GMAP. GMAP (geometric mean average precision) gives more weight to the lower end of the average precision (as opposed to MAP), thereby emphasizing the degree to which difficult or poorly performing queries contribute to the score [33]. However, no attention has been given to evaluating the robustness of IR systems implementing QE with respect to query-document term mismatch in quantifiable terms. By purposely inducing mismatch between the terms in queries and relevant documents, our evaluation framework gives us a controlled manner in which to degrade the quality of the queries with respect to their relevant documents, and then to measure both the degree of (induced) difficulty of the query and the degree to which QE improves the retrieval performance of the degraded query. The work most similar to our own in the literature consists of work in which document collections or queries are altered in a systematic way to measure differences in query performance. [42] introduces into the document collection pseudowords that are ambiguous with respect to word sense, in order to measure the degree to which word sense disambiguation is useful in
IR. [6] experiments with altering the document collection by adding semantically related expansion terms to documents at indexing time. In cross-language IR, [28] explores different query expansion techniques while purposely degrading their translation resources, in what amounts to expanding a query with only a controlled percentage of its translation terms. Although similar in introducing a controlled amount of variance into their test collections, these works differ from ours in that we explicitly and systematically measure query effectiveness in the presence of query-document term mismatch.
'\nIn order to measure that a particular IR system is able to overcome query-document term mismatch by retrieving documents that are relevant to a user's query, but that do not necessarily contain the query terms themselves, we systematically introduce term mismatch into the test collection by removing query terms from known relevant documents.\nBecause we are purposely inducing term mismatch between the queries and known relevant documents in our test collections, the proposed evaluation framework is able to measure the effectiveness of QE in a way that testing on the whole collection is not.\nIf a QE search method finds a document that is known to be relevant but that is nonetheless missing query terms, it shows that QE technique is indeed robust with respect to query-document term mismatch.\n2.\nRELATED WORK\nAccounting for term mismatch between the terms in user queries and the documents relevant to users' information needs has been a fundamental issue in IR research for almost 40 years [38, 37, 47].\nQuery expansion (QE) is one technique used in IR to improve search performance by increasing the likelihood of term overlap (either explicitly or implicitly) between queries and documents that are relevant to users' information needs.\nExplicit query expansion occurs at run-time, based on the initial search results, as is the case with relevance feedback and pseudo relevance feedback [34, 37].\nImplicit query expansion can be based on statistical properties of the document collection, or it may rely on external knowledge sources such as a thesaurus or an ontology [32, 17, 26, 50, 51, 2].\nRegardless of method, QE algorithms that are capable of retrieving relevant documents despite partial or total term mismatch between queries and relevant documents should increase the recall of IR systems (by retrieving documents that would have previously been missed) as well as their precision (by retrieving more relevant documents).\nIn practice, QE tends to improve the 
average overall retrieval performance, doing so by improving performance on some queries while making it worse on others.\nOften, the expansion terms added to a query in the query expansion phase end up hurting the overall retrieval performance because they introduce semantic noise, causing the meaning of the query to drift.\nAs such, much work has been done with respect to different strategies for choosing semantically relevant QE terms to include in order to avoid query drift [34, 50, 51, 18, 24, 29, 30, 32, 3, 4, 5].\nIn addition, there have been comparative evaluations of different QE techniques on various test collections [47, 45, 41].\nIn addition, the IR research community has given attention to differences between the performance of individual queries.\nIn addition, related work on predicting query difficulty, or which queries are likely to perform poorly, has been done [1, 4, 5, 9].\nThere is general interest in the research community to improve the robustness of IR systems by improving retrieval performance on difficult queries, as is evidenced by the Robust track in the TREC competitions and new evaluation measures such as GMAP.\nHowever, no attention is given to evaluating the robustness of IR systems implementing QE with respect to querydocument term mismatch in quantifiable terms.\nBy purposely inducing mismatch between the terms in queries and relevant documents, our evaluation framework allows us a controlled manner in which to degrade the quality of the queries with respect to their relevant documents, and then to measure the both the degree of (induced) difficulty of the query and the degree to which QE improves the retrieval performance of the degraded query.\nThe work most similar to our own in the literature consists of work in which document collections or queries are altered in a systematic way to measure differences query performance.\n[42] introduces into the document collection pseudowords that are ambiguous with respect to word sense, in 
order to measure the degree to which word sense disambiguation is useful in IR.\n[6] experiments with altering the document collection by adding semantically related expansion terms to documents at indexing time.\nIn cross-language IR, [28] explores different query expansion techniques while purposely degrading their translation resources, in what amounts to expanding a query with only a controlled percentage of its translation terms.\nAlthough similar in introducing a controlled amount of variance into their test collections, these works differ from the work being presented in this paper in that the work being presented here explicitly and systematically measures query effectiveness in the presence of query-document term mismatch.\n7.\nCONCLUSION\nThe proposed evaluation framework allows us to measure the degree to which different IR systems overcome (or don't overcome) term mismatch between queries and relevant documents.\nEvaluations of IR systems employing QE performed only on the entire collection do not take into account that the purpose of QE is to mitigate the effects of term mismatch in retrieval.\nBy systematically removing query terms from relevant documents, we can measure the degree to which QE contributes to a search by showing the difference between the performances of a QE system and its keywordonly baseline when query terms have been removed from known relevant documents.\nFurther, we can model the behavior of expert versus non-expert users by manipulating the amount of query-document term mismatch introduced into the collection.\nThe evaluation framework proposed in this paper is attractive for several reasons.\nMost importantly, it provides a controlled manner in which to measure the performance of QE with respect to query-document term mismatch.\nIn addition, this framework takes advantage and stretches the amount of information we can get from existing test collections.\nFurther, this evaluation framework is not metricspecific: information in 
terms of any metric (MAP, P@10, etc.) can be gained from evaluating an IR system this way.\nIt should also be noted that this framework is generalizable to any IR system, in that it evaluates how well IR systems evaluate users' information needs as represented by their queries.\nAn IR system that is easy to use should be good at retrieving documents that are relevant to users' information needs, even if the queries provided by the users do not contain the same keywords as the relevant documents.","lvl-2":"A New Approach for Evaluating Query Expansion: Query-Document Term Mismatch\nABSTRACT\nThe effectiveness of information retrieval (IR) systems is influenced by the degree of term overlap between user queries and relevant documents.\nQuery-document term mismatch, whether partial or total, is a fact that must be dealt with by IR systems.\nQuery Expansion (QE) is one method for dealing with term mismatch.\nIR systems implementing query expansion are typically evaluated by executing each query twice, with and without query expansion, and then comparing the two result sets.\nWhile this measures an overall change in performance, it does not directly measure the effectiveness of IR systems in overcoming the inherent issue of term mismatch between the query and relevant documents, nor does it provide any insight into how such systems would behave in the presence of query-document term mismatch.\nIn this paper, we propose a new approach for evaluating query expansion techniques.\nThe proposed approach is attractive because it provides an estimate of system performance under varying degrees of query-document term mismatch, it makes use of readily available test collections, and it does not require any additional relevance judgments or any form of manual processing.\n1.\nINTRODUCTION\nIn our domain,' and unlike web search, it is very important for attorneys to find all documents (e.g., cases) that are relevant to an issue.\nMissing relevant documents may have non-trivial 
consequences on the outcome of a court proceeding. Attorneys are especially concerned about missing relevant documents when researching a legal topic that is new to them, as they may not be aware of all language variations in such topics. Therefore, it is important to develop information retrieval systems that are robust with respect to language variations or term mismatch between queries and relevant documents. During our work on developing such systems, we concluded that current evaluation methods are not sufficient for this purpose.

{Whooping cough, pertussis}, {heart attack, myocardial infarction}, {car wash, automobile cleaning}, and {attorney, legal counsel, lawyer} are all examples of expressions that share the same meaning. Often, the terms chosen by users in their queries are different from those appearing in the documents relevant to their information needs. This query-document term mismatch arises from two sources: (1) the synonymy found in natural language, both at the term and the phrasal level, and (2) the degree to which the user is an expert at searching and/or has expert knowledge in the domain of the collection being searched.

IR evaluations are comparative in nature (cf. TREC). Generally, IR evaluations show how System A did in relation to System B on the same test collection based on various precision- and recall-based metrics. Similarly, IR systems with QE capabilities are typically evaluated by executing each search twice, once with and once without query expansion, and then comparing the two result sets. While this approach shows which system may have performed better overall with respect to a particular test collection, it does not directly or systematically measure the effectiveness of IR systems in overcoming query-document term mismatch. If the goal of QE is to increase search performance by mitigating the effects of query-document term mismatch, then the degree to which a system does so should be measurable in evaluation. An effective evaluation method should measure the performance of IR systems under varying degrees of query-document term mismatch, not just in terms of overall performance on a collection relative to another system. (Thomson Corporation builds information-based solutions for professional markets including legal, financial, health care, scientific, and tax and accounting.)

In order to measure whether a particular IR system is able to overcome query-document term mismatch by retrieving documents that are relevant to a user's query, but that do not necessarily contain the query terms themselves, we systematically introduce term mismatch into the test collection by removing query terms from known relevant documents. Because we are purposely inducing term mismatch between the queries and known relevant documents in our test collections, the proposed evaluation framework is able to measure the effectiveness of QE in a way that testing on the whole collection is not. If a QE search method finds a document that is known to be relevant but that is nonetheless missing query terms, it shows that the QE technique is indeed robust with respect to query-document term mismatch.

2. RELATED WORK

Accounting for
term mismatch between the terms in user queries and the documents relevant to users' information needs has been a fundamental issue in IR research for almost 40 years [38, 37, 47]. Query expansion (QE) is one technique used in IR to improve search performance by increasing the likelihood of term overlap (either explicitly or implicitly) between queries and documents that are relevant to users' information needs. Explicit query expansion occurs at run-time, based on the initial search results, as is the case with relevance feedback and pseudo-relevance feedback [34, 37]. Implicit query expansion can be based on statistical properties of the document collection, or it may rely on external knowledge sources such as a thesaurus or an ontology [32, 17, 26, 50, 51, 2]. Regardless of method, QE algorithms that are capable of retrieving relevant documents despite partial or total term mismatch between queries and relevant documents should increase the recall of IR systems (by retrieving documents that would have previously been missed) as well as their precision (by retrieving more relevant documents).

In practice, QE tends to improve the average overall retrieval performance, doing so by improving performance on some queries while making it worse on others. QE techniques are judged effective when they help more than they hurt overall on a particular collection [47, 45, 41, 27]. Often, the expansion terms added to a query in the query expansion phase end up hurting the overall retrieval performance because they introduce semantic noise, causing the meaning of the query to drift. As such, much work has been done on different strategies for choosing semantically relevant QE terms in order to avoid query drift [34, 50, 51, 18, 24, 29, 30, 32, 3, 4, 5].

The evaluation of IR systems has received much attention in the research community, both in terms of developing test collections for the evaluation of different systems [11, 12, 13, 43] and in terms of the utility of evaluation metrics such as recall, precision, mean average precision, precision at rank, Bpref, etc. [7, 8, 44, 14]. There have also been comparative evaluations of different QE techniques on various test collections [47, 45, 41]. The IR research community has also given attention to differences between the performance of individual queries. Research efforts have been made to predict which queries will be improved by QE and then to apply it selectively only to those queries [1, 5, 27, 29, 15, 48], to achieve optimal overall performance. Related work has also been done on predicting query difficulty, or which queries are likely to perform poorly [1, 4, 5, 9]. There is general interest in the research community in improving the robustness of IR systems by improving retrieval performance on difficult queries, as is evidenced by the Robust track in the TREC competitions and new evaluation measures such as GMAP. GMAP (geometric mean average precision) gives more weight to the lower end of the average precision (as opposed to MAP), thereby emphasizing the degree to which difficult or poorly performing queries contribute to the score [33]. However, no attention has been given to evaluating, in quantifiable terms, the robustness of IR systems implementing QE with respect to query-document term mismatch. By purposely inducing mismatch between the terms in queries and relevant documents, our evaluation framework gives us a controlled manner in which to degrade the quality of the queries with respect to their relevant documents, and then to measure both the degree of (induced) difficulty of the query and the degree to which QE improves the retrieval performance of the degraded query.

The work most similar to our own in the literature consists of work in which document collections or queries are altered in a systematic way to measure differences in query performance. [42] introduces into the document collection
pseudowords that are ambiguous with respect to word sense, in order to measure the degree to which word sense disambiguation is useful in IR. [6] experiments with altering the document collection by adding semantically related expansion terms to documents at indexing time. In cross-language IR, [28] explores different query expansion techniques while purposely degrading their translation resources, in what amounts to expanding a query with only a controlled percentage of its translation terms. Although similar in introducing a controlled amount of variance into their test collections, these works differ from ours in that the work presented here explicitly and systematically measures query effectiveness in the presence of query-document term mismatch.

3. METHODOLOGY

In order to accurately measure IR system performance in the presence of query-term mismatch, we need to be able to adjust the degree of term mismatch in a test corpus in a principled manner. Our approach is to introduce query-document term mismatch into a corpus in a controlled manner and then measure the performance of IR systems as the degree of term mismatch changes. We systematically remove query terms from known relevant documents, creating alternate versions of a test collection that differ only in how many or which query terms have been removed from the documents relevant to a particular query. Introducing query-document term mismatch into the test collection in this manner allows us to manipulate the degree of term mismatch between relevant documents and queries in a controlled manner. This removal process affects only the relevant documents in the search collection. The queries themselves remain unaltered. Query terms are removed from documents one by one, so the differences in IR system performance can be measured with respect to missing terms. In the most extreme case (i.e., when the length of the query is less than or equal to the number of query terms removed from the relevant documents), there will be no term overlap between a query and its relevant documents.

Notice that, for a given query, only relevant documents are modified. Non-relevant documents are left unchanged, even in the case that they contain query terms. Although, on the surface, we are changing the distribution of terms between the relevant and non-relevant document sets by removing query terms from the relevant documents, doing so does not change the conceptual relevancy of these documents. Systematically removing query terms from known relevant documents introduces a controlled amount of query-document term mismatch by which we can evaluate the degree to which particular QE techniques are able to retrieve conceptually relevant documents, despite a lack of actual term overlap. Removing a query term from relevant documents simply masks the presence of that query term in those documents. It does not in any way change the conceptual relevancy of the documents.

The evaluation framework presented in this paper consists of three elements: a test collection, C; a strategy for selecting which query terms to remove from the relevant documents in that collection, S; and a metric by which to compare performance of the IR systems, m. The test collection, C, consists of a document collection, queries, and relevance judgments. The strategy, S, determines the order and manner in which query terms are removed from the relevant documents in C. This evaluation framework is not metric-specific; any metric (MAP, P@10, recall, etc.) can be used to measure IR system performance. Although test collections are difficult to come by, it should be noted that this evaluation framework can be used on any available test collection. In fact, using this framework stretches the value of existing test collections in that one collection becomes several when query terms are removed from relevant documents, thereby increasing the amount of information that can be gained from evaluating on a particular collection.

In other evaluations of QE effectiveness, the controlled variable is simply whether or not queries have been expanded, compared in terms of some metric. In contrast, the controlled variable in this framework is the query term that has been removed from the documents relevant to that query, as determined by the removal strategy, S. Query terms are removed one by one, in a manner and order determined by S, so that collections differ only with respect to the one term that has been removed (or masked) in the documents relevant to that query. It is in this way that we can explicitly measure the degree to which an IR system overcomes query-document term mismatch.

The choice of a query term removal strategy is relatively flexible; the only restriction in choosing a strategy S is that query terms must be removed one at a time. Two decisions must be made when choosing a removal strategy S. The first is the order in which S removes terms from the relevant documents. Possible orders for removal could be based on metrics such as IDF or the global probability of a term in a document collection. Based on the purpose of the evaluation and the retrieval algorithm being used, it might make more sense to choose a removal order for S based on query term IDF, or perhaps based on a measure of query term probability in the document collection. Once an order for removal has been decided, a manner for term removal/masking must be decided. It must be determined whether S will remove terms individually
(i.e., remove just one different term each time) or additively (i.e., remove one term first, then that term in addition to another, and so on). The incremental, additive removal of query terms from relevant documents allows the evaluation to show the degree to which IR system performance degrades as more and more query terms are missing, thereby increasing the degree of query-document term mismatch. Removing terms individually allows for a clear comparison of the contribution of QE in the absence of each individual query term.

4. EXPERIMENTAL SET-UP

4.1 IR Systems

We used the proposed evaluation framework to evaluate four IR systems on two test collections. Of the four systems used in the evaluation, two implement query expansion techniques: Okapi (with pseudo-feedback for QE), and a proprietary concept search engine (we'll call it TCS, for Thomson Concept Search). TCS is a language-modeling-based retrieval engine that utilizes a subject-appropriate external corpus (i.e., legal or news) as a knowledge source. This external knowledge source is a corpus separate from, but thematically related to, the document collection to be searched. Translation probabilities for QE [2] are calculated from these large external corpora. Okapi (without feedback) and a language model query likelihood (QL) model (implemented using Indri) are included as keyword-only baselines. Okapi without feedback is intended as an analogous baseline for Okapi with feedback, and the QL model is intended as an appropriate baseline for TCS, as they both implement language-modeling-based retrieval algorithms. We chose these as baselines because they depend only on the words appearing in the queries and have no QE capabilities. As a result, we expect that when query terms are removed from relevant documents, the performance of these systems should degrade more dramatically than that of their counterparts that implement QE. The Okapi and QL model results were obtained using the Lemur Toolkit.
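As a concrete sketch, the removal strategy S described in Section 3 (applied either individually or additively, and ordered by IDF as in Section 4.3) might look like the following minimal Python. This is an illustration only, not the tooling used in the experiments; the tokenization, document-frequency table, and toy data are hypothetical stand-ins.

```python
import math
from typing import Dict, List

def idf_order(query_terms: List[str], doc_freq: Dict[str, int], n_docs: int) -> List[str]:
    """Order query terms from highest to lowest IDF (a simple smoothed IDF)."""
    def idf(t: str) -> float:
        return math.log(n_docs / (1 + doc_freq.get(t, 0)))
    return sorted(query_terms, key=idf, reverse=True)

def mask_terms(doc_tokens: List[str], terms: List[str]) -> List[str]:
    """Mask (remove) the given query terms from a relevant document's tokens.

    Only relevant documents are modified; queries are left unaltered."""
    remove = set(terms)
    return [t for t in doc_tokens if t not in remove]

def collection_versions(doc_tokens: List[str], ordered_terms: List[str], additive: bool):
    """Yield (removed_terms, altered_document) for each removal step.

    additive=True: remove the first term, then the first two, and so on.
    additive=False: remove each term individually, one per version."""
    for i, term in enumerate(ordered_terms):
        batch = ordered_terms[: i + 1] if additive else [term]
        yield batch, mask_terms(doc_tokens, batch)

# Toy example: a 3-term query against one relevant document (hypothetical data).
doc = "pertussis outbreak reported whooping cough cases rise".split()
query = ["pertussis", "outbreak", "cases"]
dfs = {"pertussis": 2, "outbreak": 40, "cases": 400}
ordered = idf_order(query, dfs, n_docs=1000)  # rarest term first
for removed, altered in collection_versions(doc, ordered, additive=True):
    print(removed, altered)
```

Each yielded version corresponds to one alternate test collection in the framework; running an IR system against each version traces its performance curve as the degree of induced mismatch grows.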
Okapi was run with the parameters k1 = 1.2, b = 0.75, and k3 = 7. When run with feedback, the feedback parameters used in Okapi were set at 10 documents and 25 terms. The QL model used Jelinek-Mercer smoothing, with λ = 0.6.

4.2 Test Collections

We evaluated the performance of the four IR systems outlined above on two different test collections: the TREC AP89 collection (TIPSTER disk 1) and the FSupp Collection. The FSupp Collection is a proprietary collection of 11,953 case law documents for which we have 44 queries (ranging from four to twenty-two words after stop word removal) with full relevance judgments. The average length of documents in the FSupp Collection is 3,444 words. The TREC AP89 test collection contains 84,678 documents, averaging 252 words in length. In our evaluation, we used both the title and the description fields of topics 151-200 as queries, so we have two sets of results for the AP89 Collection. After stop word removal, the title queries range from two to eleven words and the description queries range from four to twenty-six terms.

4.3 Query Term Removal Strategy

In our experiments, we chose to sequentially and additively remove query terms from highest to lowest inverse document frequency (IDF) with respect to the entire document collection. Terms with high IDF values tend to influence document ranking more than those with lower IDF values. Additionally, high-IDF terms tend to be domain-specific terms that are less likely to be known to non-expert users, hence we start by removing these first. For the FSupp Collection, queries were evaluated incrementally with one, two, three, five, and seven terms removed from their corresponding relevant documents. The longer description queries from TREC topics 151-200 were likewise evaluated on the AP89 Collection with one, two, three, five, and seven query terms removed from their relevant documents. For the shorter TREC title queries, we removed
one, two, three, and five terms from the relevant documents.

4.4 Metrics

In this implementation of the evaluation framework, we chose three metrics by which to compare IR system performance: mean average precision (MAP), precision at 10 documents (P10), and recall at 1000 documents. Although these are the metrics we chose to demonstrate this framework, any appropriate IR metrics could be used within it.

5. RESULTS

5.1 FSupp Collection

Figures 1, 2, and 3 show the performance (in terms of MAP, P10, and recall, respectively) of the four search engines on the FSupp Collection. As expected, the performance of the keyword-only IR systems, QL and Okapi, drops quickly as query terms are removed from the relevant documents in the collection. The performance of Okapi with feedback (Okapi FB) is somewhat surprising in that on the original collection (i.e., prior to query term removal), its performance is worse than that of Okapi without feedback on all three measures. TCS outperforms the QL keyword baseline on every measure except for MAP on the original collection (i.e., prior to removing any query terms). Because TCS employs implicit query expansion using an external domain-specific knowledge base, it is less sensitive to term removal (i.e., mismatch) than Okapi FB, which relies on terms from the top-ranked documents retrieved by an initial keyword-only search. Because overall search engine performance is frequently measured in terms of MAP, and because other evaluations of QE often consider only performance on the entire collection (i.e., they do not consider term mismatch), the QE implemented in TCS would be considered (in another evaluation) to hurt performance on the FSupp Collection. However, when we compare TCS to QL as query terms are removed from the relevant documents, we can see that the QE in TCS is indeed contributing positively to the search.

Figure 1: The performance of the four retrieval systems on the FSupp Collection in terms of mean average precision (MAP), as a function of the number of query terms removed (the horizontal axis).

Figure 2: The performance of the four retrieval systems on the FSupp Collection in terms of precision at 10, as a function of the number of query terms removed (the horizontal axis).

Figure 3: The recall (at 1000) of the four retrieval systems on the FSupp Collection, as a function of the number of query terms removed (the horizontal axis).

5.2 The AP89 Collection: using the description queries

Figures 4, 5, and 6 show the performance of the four IR systems on the AP89 Collection, using the TREC topic descriptions as queries. The most interesting difference between the performance on the FSupp Collection and on the AP89 Collection is the reversal of Okapi FB and TCS. On FSupp, TCS outperformed the other engines consistently (see Figures 1, 2, and 3); on the AP89 Collection, Okapi FB is clearly the best performer (see Figures 4, 5, and 6). This is all the more interesting given that QE in Okapi FB takes place after the first search iteration, which we would expect to be handicapped when query terms are removed.

Figure 4: MAP of the four IR systems on the AP89 Collection, using TREC description queries, as a function of the number of query terms removed.

Figure 5: Precision at 10 of the four IR systems on the AP89 Collection, using TREC description queries, as a function of the number of query terms removed.

Figure 6: Recall (at 1000) of the four IR systems on the AP89 Collection, using TREC description queries, as a function of the number of query terms removed.

Looking at P10 in Figure 5, we can see that TCS and Okapi FB score similarly, starting at the point where one query term is removed from relevant documents. At two query terms removed, TCS starts outperforming Okapi FB. If we model this in terms of expert versus non-expert users, we could conclude
that TCS might be a better search engine for non-experts to use on the AP89 Collection, while Okapi FB would be best for an expert searcher.\nIt is interesting to note that on each metric for the AP89 description queries, TCS performs more poorly than all the other systems on the original collection, but quickly surpasses the baseline systems and approaches Okapi FB's performance as terms are removed.\nThis is again a case where the performance of a system on the entire collection is not necessarily indicative of how it handles query-document term mismatch.\n5.3 The AP89 Collection: using the title queries\nFigures 7, 8, and 9 show the performance of the four IR systems on the AP89 Collection, using the TREC topic titles as queries.\nAs with the AP89 description queries, Okapi FB is again the best performer of the four systems in the evaluation.\nAs before, the performance of the Okapi and QL systems, the non-QE baseline systems, sharply degrades as query terms are removed.\nOn the shorter queries, TCS seems to have a harder time catching up to the performance of Okapi FB as terms are removed.\nPerhaps the most interesting result from our evaluation is that although the keyword-only baselines performed consistently and as expected on both collections with respect to query term removal from relevant documents, the performances of the engines implementing QE techniques differed dramatically between collections.\nFigure 7: MAP of the four IR systems on the AP89 Collection, using TREC title queries and as a function of the number of query terms removed.\nFigure 8: Precision at 10 of the four IR systems on the AP89 Collection, using TREC title queries, and as a function of the number of query terms removed.\nFigure 9: Recall (at 1000) of the four IR systems on the AP89 Collection, using TREC title queries and as a function of the number of query terms removed.\n7.\nCONCLUSION\nThe proposed evaluation framework allows us to measure the degree to which different IR 
systems overcome (or don't overcome) term mismatch between queries and relevant documents.\nEvaluations of IR systems employing QE that are performed only on the entire collection do not take into account that the purpose of QE is to mitigate the effects of term mismatch in retrieval.\nBy systematically removing query terms from relevant documents, we can measure the degree to which QE contributes to a search by showing the difference between the performances of a QE system and its keyword-only baseline when query terms have been removed from known relevant documents.\nFurther, we can model the behavior of expert versus non-expert users by manipulating the amount of query-document term mismatch introduced into the collection.\nThe evaluation framework proposed in this paper is attractive for several reasons.\nMost importantly, it provides a controlled manner in which to measure the performance of QE with respect to query-document term mismatch.\nIn addition, this framework takes advantage of, and stretches, the amount of information we can get from existing test collections.\nFurther, this evaluation framework is not metric-specific: information in terms of any metric (MAP, P@10, etc.) 
can be gained from evaluating an IR system this way.\nIt should also be noted that this framework is generalizable to any IR system, in that it evaluates how well IR systems address users' information needs as represented by their queries.\nAn IR system that is easy to use should be good at retrieving documents that are relevant to users' information needs, even if the queries provided by the users do not contain the same keywords as the relevant documents.","keyphrases":["evalu","queri expans","inform retriev","relev document","queri-document term mismatch","inform search","document expans","document process"],"prmu":["P","P","P","P","M","M","R","R"]} {"id":"H-60","title":"A Frequency-based and a Poisson-based Definition of the Probability of Being Informative","abstract":"This paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf). We show that an intuitive idf-based probability function for the probability of a term being informative assumes disjoint document events. By assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative. 
The framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models.","lvl-1":"A Frequency-based and a Poisson-based Definition of the Probability of Being Informative Thomas Roelleke Department of Computer Science Queen Mary University of London thor@dcs.qmul.ac.uk ABSTRACT This paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf ).\nWe show that an intuitive idf -based probability function for the probability of a term being informative assumes disjoint document events.\nBy assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative.\nThe framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models.\nCategories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Retrieval models General Terms Theory 1.\nINTRODUCTION AND BACKGROUND The inverse document frequency (idf ) is one of the most successful parameters for a relevance-based ranking of retrieved objects.\nWith N being the total number of documents, and n(t) being the number of documents in which term t occurs, the idf is defined as follows: idf(t) := \u2212 log (n(t)\/N), 0 <= idf(t) < \u221e.\nRanking based on the sum of the idf -values of the query terms that occur in the retrieved documents works well; this has been shown in numerous applications.\nAlso, it is well known that the combination of a document-specific term weight and idf works better than idf alone.\nThis approach is known as tf-idf , where tf(t, d) (0 <= tf(t, d) <= 1) is the so-called term frequency of term t in document d.\nThe idf reflects the discriminating power (informativeness) of a term, whereas the tf reflects the occurrence of a term.\nThe idf alone works better than the tf alone does.\nAn explanation might be the problem of tf with terms that occur in many documents; let us 
refer to those terms as noisy terms.\nWe use the notion of noisy terms rather than frequent terms since 'frequent terms' leaves open whether we refer to the document frequency of a term in a collection or to the so-called term frequency (also referred to as within-document frequency) of a term in a document.\nWe associate noise with the document frequency of a term in a collection, and we associate occurrence with the within-document frequency of a term.\nThe tf of a noisy term might be high in a document, but noisy terms are not good candidates for representing a document.\nTherefore, the removal of noisy terms (known as stopword removal) is essential when applying tf .\nIn a tf-idf approach, the removal of stopwords is conceptually obsolete, if stopwords are just words with a low idf .\nFrom a probabilistic point of view, tf is a value with a frequency-based probabilistic interpretation whereas idf has an informative rather than a probabilistic interpretation.\nThe missing probabilistic interpretation of idf is a problem in probabilistic retrieval models where we combine uncertain knowledge of different dimensions (e.g.: informativeness of terms, structure of documents, quality of documents, age of documents, etc.) 
such that a good estimate of the probability of relevance is achieved.\nAn intuitive solution is a normalisation of idf such that we obtain values in the interval [0; 1].\nFor example, consider a normalisation based on the maximal idf -value.\nLet T be the set of terms occurring in a collection.\nPfreq (t is informative) := idf(t)\/maxidf, where maxidf := max({idf(t)|t \u2208 T}), maxidf <= \u2212 log(1\/N), and minidf := min({idf(t)|t \u2208 T}), minidf >= 0.\nIt follows that minidf\/maxidf \u2264 Pfreq (t is informative) \u2264 1.0.\nThis frequency-based probability function covers the interval [0; 1] if the minimal idf is equal to zero, which is the case if we have at least one term that occurs in all documents.\nCan we interpret Pfreq , the normalised idf , as the probability that the term is informative?\nWhen investigating the probabilistic interpretation of the normalised idf , we made several observations related to disjointness and independence of document events.\nThese observations are reported in section 3.\nWe show in section 3.1 that the frequency-based noise probability n(t)\/N used in the classic idf -definition can be explained by three assumptions: binary term occurrence, constant document containment and disjointness of document containment events.\nIn section 3.2 we show that by assuming independence of documents, we obtain 1 \u2212 e^\u22121 \u2248 1 \u2212 0.37 as the upper bound of the noise probability of a term.\nThe value e^\u22121 is related to the logarithm and we investigate in section 3.3 the link to information theory.\nIn section 4, we link the results of the previous sections to probability theory.\nWe show the steps from possible worlds to binomial distribution and Poisson distribution.\nIn section 5, we emphasise that the theoretical framework of this paper is applicable for both idf and tf .\nFinally, in section 6, we base the definition of the probability of being informative on the results of the previous sections and compare frequency-based and Poisson-based 
definitions.\n2.\nBACKGROUND The relationship between frequencies, probabilities and information theory (entropy) has been the focus of many researchers.\nIn this background section, we focus on work that investigates the application of the Poisson distribution in IR since a main part of the work presented in this paper addresses the underlying assumptions of Poisson.\n[4] proposes a 2-Poisson model that takes into account the different nature of relevant and non-relevant documents, rare terms (content words) and frequent terms (noisy terms, function words, stopwords).\n[9] shows experimentally that most of the terms (words) in a collection are distributed according to a low-dimension n-Poisson model.\n[10] uses a 2-Poisson model for including term frequency-based probabilities in the probabilistic retrieval model.\nThe non-linear scaling of the Poisson function showed significant improvement compared to a linear frequency-based probability.\nThe Poisson model was applied here to the term frequency of a term in a document.\nWe will generalise the discussion by pointing out that document frequency and term frequency are dual parameters in the collection space and the document space, respectively.\nOur discussion of the Poisson distribution focuses on the document frequency in a collection rather than on the term frequency in a document.\n[7] and [6] address the deviation of idf and Poisson, and apply Poisson mixtures to achieve better Poisson-based estimates.\nThe results proved again experimentally that a one-dimensional Poisson does not work for rare terms, therefore Poisson mixtures and additional parameters are proposed.\n[3], section 3.3, illustrates and summarises comprehensively the relationships between frequencies, probabilities and Poisson.\nDifferent definitions of idf are put into context and a notion of noise is defined, where noise is viewed as the complement of idf .\nWe use in our paper a different notion of noise: we consider a frequency-based noise 
that corresponds to the document frequency, and we consider a term noise that is based on the independence of document events.\n[11], [12], [8] and [1] link frequencies and probability estimation to information theory.\n[12] establishes a framework in which information retrieval models are formalised based on probabilistic inference.\nA key component is the use of a space of disjoint events, where the framework mainly uses terms as disjoint events.\nThe probability of being informative defined in our paper can be viewed as the probability of the disjoint terms in the term space of [12].\n[8] addresses entropy and bibliometric distributions.\nEntropy is maximal if all events are equiprobable, and the frequency-based Lotka law (N\/i^\u03bb is the number of scientists that have written i publications, where N and \u03bb are distribution parameters), Zipf and the Pareto distribution are related.\nThe Pareto distribution is the continuous case of the Lotka distribution, and Lotka and Zipf show equivalences.\nThe Pareto distribution is used by [2] for term frequency normalisation.\nThe Pareto distribution compares to the Poisson distribution in the sense that Pareto is fat-tailed, i. e. 
Pareto assigns larger probabilities to large numbers of events than Poisson distributions do.\nThis makes Pareto interesting since Poisson is felt to be too radical on frequent events.\nWe restrict the discussion in this paper to Poisson; however, our results show that indeed a smoother distribution than Poisson promises to be a good candidate for improving the estimation of probabilities in information retrieval.\n[1] establishes a theoretical link between tf-idf and information theory, and the theoretical research on the meaning of tf-idf clarifies the statistical model on which the different measures are commonly based.\nThis motivation matches the motivation of our paper: We investigate theoretically the assumptions of classical idf and Poisson for a better understanding of parameter estimation and combination.\n3.\nFROM DISJOINT TO INDEPENDENT We define and discuss in this section three probabilities: the frequency-based noise probability (definition 1), the total noise probability for disjoint documents (definition 2), and the noise probability for independent documents (definition 3).\n3.1 Binary occurrence, constant containment and disjointness of documents We show in this section that the frequency-based noise probability n(t)\/N in the idf definition can be explained as a total probability with binary term occurrence, constant document containment and disjointness of document containments.\nWe refer to a probability function as binary if for all events the probability is either 1.0 or 0.0.\nThe occurrence probability P(t|d) is binary, if P(t|d) is equal to 1.0 if t \u2208 d, and P(t|d) is equal to 0.0, otherwise.\nP(t|d) is binary : \u21d0\u21d2 P(t|d) = 1.0 \u2228 P(t|d) = 0.0.\nWe refer to a probability function as constant if for all events the probability is equal.\nThe document containment probability reflects the chance that a document occurs in a collection.\nThis containment probability is constant if we have no information about the document 
containment or we ignore that documents differ in containment.\nContainment could be derived, for example, from the size, quality, age, links, etc. of a document.\nFor a constant containment in a collection with N documents, 1\/N is often assumed as the containment probability.\nWe generalise this definition and introduce the constant \u03bb where 0 \u2264 \u03bb \u2264 N.\nThe containment of a document d depends on the collection c; this is reflected by the notation P(d|c) used for the containment of a document.\nP(d|c) is constant : \u21d0\u21d2 \u2200d : P(d|c) = \u03bb\/N.\nFor disjoint documents that cover the whole event space, we set \u03bb = 1 and obtain \u2211_d P(d|c) = 1.0.\nNext, we define the frequency-based noise probability and the total noise probability for disjoint documents.\nWe introduce the event notation t is noisy and t occurs for making the difference between the noise probability P(t is noisy|c) in a collection and the occurrence probability P(t occurs|d) in a document more explicit, thereby keeping in mind that the noise probability corresponds to the occurrence probability of a term in a collection.\nDefinition 1.\nThe frequency-based term noise probability: Pfreq (t is noisy|c) := n(t)\/N.\nDefinition 2.\nThe total term noise probability for disjoint documents: Pdis (t is noisy|c) := \u2211_d P(t occurs|d) \u00b7 P(d|c).\nNow, we can formulate a theorem that makes assumptions explicit that explain the classical idf .\nTheorem 1.\nIDF assumptions: If the occurrence probability P(t|d) of term t over documents d is binary, and the containment probability P(d|c) of documents d is constant, and document containments are disjoint events, then the noise probability for disjoint documents is equal to the frequency-based noise probability: Pdis (t is noisy|c) = Pfreq (t is noisy|c).\nProof.\nThe assumptions are: \u2200d : (P(t occurs|d) = 1 \u2228 P(t occurs|d) = 0) \u2227 P(d|c) = \u03bb\/N \u2227 \u2211_d P(d|c) = 1.0.\nWe obtain: Pdis (t is noisy|c) = \u2211_{d|t\u2208d} 1\/N 
= n(t)\/N = Pfreq (t is noisy|c).\nThe above result is not a surprise but it is a mathematical formulation of assumptions that can be used to explain the classical idf .\nThe assumptions make explicit that the different types of term occurrence in documents (frequency of a term, importance of a term, position of a term, document part where the term occurs, etc.) and the different types of document containment (size, quality, age, etc.) are ignored, and document containments are considered as disjoint events.\nFrom the assumptions, we can conclude that idf (frequency-based noise, respectively) is a relatively simple but strict estimate.\nStill, idf works well.\nThis could be explained by a leverage effect that justifies the binary occurrence and constant containment: The term occurrence for small documents tends to be larger than for large documents, whereas the containment for small documents tends to be smaller than for large documents.\nFrom that point of view, idf means that P(t \u2227 d|c) is constant for all d in which t occurs, and P(t \u2227 d|c) is zero otherwise.\nThe occurrence and containment can be term specific.\nFor example, set P(t\u2227d|c) = 1\/ND(c) if t occurs in d, where ND(c) is the number of documents in collection c (we used just N before).\nWe choose a document-dependent occurrence P(t|d) := 1\/NT (d), i. e. the occurrence probability is equal to the inverse of NT (d), which is the total number of terms in document d. 
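As a concrete check of Theorem 1, the following sketch (a toy collection of our own, not from the paper) computes Pdis under binary occurrence, constant containment 1\/N and disjoint documents, and confirms that it equals the frequency-based noise probability n(t)\/N:

```python
from fractions import Fraction

def p_dis_noisy(docs, term):
    """Definition 2 under the assumptions of Theorem 1: binary occurrence
    P(t occurs|d), constant containment P(d|c) = 1/N, disjoint documents."""
    N = len(docs)
    containment = Fraction(1, N)                        # constant P(d|c) = 1/N
    return sum(containment for d in docs if term in d)  # binary occurrence filter

# Hypothetical toy collection: the term "retrieval" occurs in 3 of 4 documents.
docs = [{"retrieval", "poisson"}, {"retrieval", "idf"}, {"retrieval"}, {"entropy"}]
n_t = sum(1 for d in docs if "retrieval" in d)
print(p_dis_noisy(docs, "retrieval"), Fraction(n_t, len(docs)))  # both equal n(t)/N = 3/4
```

Exact rational arithmetic via Fraction makes the equality Pdis = n(t)\/N hold exactly rather than up to floating-point error.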
Next, we choose the containment P(d|c) := NT (d)\/NT (c) \u00b7 NT (c)\/ND(c), where NT (d)\/NT (c) is a document length normalisation (number of terms in document d divided by the number of terms in collection c), and NT (c)\/ND(c) is a constant factor of the collection (number of terms in collection c divided by the number of documents in collection c).\nWe obtain P(t\u2227d|c) = 1\/ND(c).\nIn a tf-idf -retrieval function, the tf -component reflects the occurrence probability of a term in a document.\nThis is a further explanation why we can estimate the idf with a simple P(t|d), since the combined tf-idf contains the occurrence probability.\nThe containment probability corresponds to a document normalisation (document length normalisation, pivoted document length) and is normally attached to the tf -component or the tf-idf -product.\nThe disjointness assumption is typical for frequency-based probabilities.\nFrom a probability theory point of view, we can consider documents as disjoint events, in order to achieve a sound theoretical model for explaining the classical idf .\nBut does disjointness reflect the real world, where the containment of a document appears to be independent of the containment of another document?\nIn the next section, we replace the disjointness assumption by the independence assumption.\n3.2 The upper bound of the noise probability for independent documents For independent documents, we compute the probability of a disjunction as usual, namely as the complement of the probability of the conjunction of the negated events: P(d1 \u2228 ... \u2228 dN ) = 1 \u2212 P(\u00acd1 \u2227 ... \u2227 \u00acdN ) = 1 \u2212 \u220f_d (1 \u2212 P(d)).\nThe noise probability can be considered as the conjunction of the term occurrence and the document containment: P(t is noisy|c) := P(t occurs \u2227 (d1 \u2228 ... 
\u2228 dN )|c).\nFor disjoint documents, this view of the noise probability led to definition 2.\nFor independent documents, we use now the conjunction of negated events.\nDefinition 3.\nThe term noise probability for independent documents: Pin (t is noisy|c) := \u220f_d (1 \u2212 P(t occurs|d) \u00b7 P(d|c)).\nWith binary occurrence and a constant containment P(d|c) := \u03bb\/N, we obtain the term noise of a term t that occurs in n(t) documents: Pin (t is noisy|c) = 1 \u2212 (1 \u2212 \u03bb\/N)^n(t).\nFor binary occurrence and disjoint documents, the containment probability was 1\/N.\nNow, with independent documents, we can use \u03bb as a collection parameter that controls the average containment probability.\nWe show through the next theorem that the upper bound of the noise probability depends on \u03bb.\nTheorem 2.\nThe upper bound of being noisy: If the occurrence P(t|d) is binary, and the containment P(d|c) is constant, and document containments are independent events, then 1 \u2212 e^\u2212\u03bb is the upper bound of the noise probability.\n\u2200t : Pin (t is noisy|c) < 1 \u2212 e^\u2212\u03bb.\nProof.\nThe upper bound of the independent noise probability follows from the limit lim_{N\u2192\u221e} (1 + x\/N)^N = e^x (see any comprehensive math book, for example, [5], for the convergence equation of the Euler function).\nWith x = \u2212\u03bb, we obtain: lim_{N\u2192\u221e} (1 \u2212 \u03bb\/N)^N = e^\u2212\u03bb.\nFor the term noise, we have: Pin (t is noisy|c) = 1 \u2212 (1 \u2212 \u03bb\/N)^n(t).\nPin (t is noisy|c) is strictly monotonic: the noise of a term tn is less than the noise of a term tn+1, where tn occurs in n documents and tn+1 occurs in n + 1 documents.\nTherefore, a term with n = N has the largest noise probability.\nFor a collection with infinitely many documents, the upper bound of the noise probability for terms tN that occur in all documents becomes: lim_{N\u2192\u221e} Pin (tN is noisy) = lim_{N\u2192\u221e} (1 \u2212 (1 \u2212 \u03bb\/N)^N ) = 1 \u2212 e^\u2212\u03bb.\nBy applying 
an independence rather than a disjointness assumption, we obtain the probability e^\u22121 that a term is not noisy even if the term does occur in all documents.\nIn the disjoint case, the noise probability is one for a term that occurs in all documents.\nIf we view P(d|c) := \u03bb\/N as the average containment, then \u03bb is large for a term that occurs mostly in large documents, and \u03bb is small for a term that occurs mostly in small documents.\nThus, the noise of a term t is large if t occurs in n(t) large documents and the noise is smaller if t occurs in small documents.\nAlternatively, we can assume a constant containment and a term-dependent occurrence.\nIf we assume P(d|c) := 1, then P(t|d) := \u03bb\/N can be interpreted as the average probability that t represents a document.\nThe common assumption is that the average containment or occurrence probability is proportional to n(t).\nHowever, here is additional potential: the statistical laws (see [3] on Luhn and Zipf) indicate that the average probability could follow a normal distribution, i. e. small probabilities for small n(t) and large n(t), and larger probabilities for medium n(t).\nFor the monotonic case we investigate here, the noise of a term with n(t) = 1 is equal to 1 \u2212 (1 \u2212 \u03bb\/N) = \u03bb\/N and the noise of a term with n(t) = N is close to 1 \u2212 e^\u2212\u03bb .\nIn the next section, we relate the value e^\u2212\u03bb to information theory.\n3.3 The probability of a maximal informative signal The probability e^\u22121 is special in the sense that a signal with that probability is a signal with maximal information as derived from the entropy definition.\nConsider the definition of the entropy contribution H(t) of a signal t. 
H(t) := P(t) \u00b7 (\u2212 ln P(t)).\nWe form the first derivative for computing the optimum: \u2202H(t)\/\u2202P(t) = \u2212 ln P(t) + (\u22121\/P(t)) \u00b7 P(t) = \u2212(1 + ln P(t)).\nFor obtaining optima, we use: 0 = \u2212(1 + ln P(t)).\nThe entropy contribution H(t) is maximal for P(t) = e^\u22121 .\nThis result does not depend on the base of the logarithm, as we see next: \u2202H(t)\/\u2202P(t) = \u2212 logb P(t) + (\u22121\/(P(t) \u00b7 ln b)) \u00b7 P(t) = \u2212(1\/ln b + logb P(t)) = \u2212(1 + ln P(t))\/ln b.\nWe summarise this result in the following theorem: Theorem 3.\nThe probability of a maximal informative signal: The probability Pmax = e^\u22121 \u2248 0.37 is the probability of a maximal informative signal.\nThe entropy of a maximal informative signal is Hmax = e^\u22121 .\nProof.\nThe probability and entropy follow from the derivation above.\nThe complement of the maximal noise probability is e^\u2212\u03bb and we are looking now for a generalisation of the entropy definition such that e^\u2212\u03bb is the probability of a maximal informative signal.\nWe can generalise the entropy definition by computing the integral of \u2212(\u03bb + ln P(t)), i. e. 
this derivative is zero for P(t) = e^\u2212\u03bb .\nWe obtain a generalised entropy: \u222b \u2212(\u03bb + ln P(t)) d(P(t)) = P(t) \u00b7 (1 \u2212 \u03bb \u2212 ln P(t)).\nThe generalised entropy corresponds for \u03bb = 1 to the classical entropy.\nBy moving from disjoint to independent documents, we have established a link between the complement of the noise probability of a term that occurs in all documents and information theory.\nNext, we link independent documents to probability theory.\n4.\nTHE LINK TO PROBABILITY THEORY We review for independent documents three concepts of probability theory: possible worlds, binomial distribution and Poisson distribution.\n4.1 Possible Worlds Each conjunction of document events (for each document, we consider two document events: the document can be true or false) is associated with a so-called possible world.\nFor example, consider the eight possible worlds for three documents (N = 3):\nworld w | conjunction\nw7 | d1 \u2227 d2 \u2227 d3\nw6 | d1 \u2227 d2 \u2227 \u00acd3\nw5 | d1 \u2227 \u00acd2 \u2227 d3\nw4 | d1 \u2227 \u00acd2 \u2227 \u00acd3\nw3 | \u00acd1 \u2227 d2 \u2227 d3\nw2 | \u00acd1 \u2227 d2 \u2227 \u00acd3\nw1 | \u00acd1 \u2227 \u00acd2 \u2227 d3\nw0 | \u00acd1 \u2227 \u00acd2 \u2227 \u00acd3\nWith each world w, we associate a probability \u00b5(w), which is equal to the product of the single probabilities of the document events:\nworld w | probability \u00b5(w)\nw7 | (\u03bb\/N)^3 \u00b7 (1 \u2212 \u03bb\/N)^0\nw6 | (\u03bb\/N)^2 \u00b7 (1 \u2212 \u03bb\/N)^1\nw5 | (\u03bb\/N)^2 \u00b7 (1 \u2212 \u03bb\/N)^1\nw4 | (\u03bb\/N)^1 \u00b7 (1 \u2212 \u03bb\/N)^2\nw3 | (\u03bb\/N)^2 \u00b7 (1 \u2212 \u03bb\/N)^1\nw2 | (\u03bb\/N)^1 \u00b7 (1 \u2212 \u03bb\/N)^2\nw1 | (\u03bb\/N)^1 \u00b7 (1 \u2212 \u03bb\/N)^2\nw0 | (\u03bb\/N)^0 \u00b7 (1 \u2212 \u03bb\/N)^3\nThe sum over the possible worlds in which k documents are true and N \u2212 k documents are false is equal to the probability function of the binomial distribution, since 
the binomial coefficient yields the number of possible worlds in which k documents are true.\n4.2 Binomial distribution The binomial probability function yields the probability that k of N events are true, where each event is true with the single event probability p: P(k) := binom(N, k, p) := (N choose k) \u00b7 p^k \u00b7 (1 \u2212 p)^(N\u2212k).\nThe single event probability is usually defined as p := \u03bb\/N, i. e. p is inversely proportional to N, the total number of events.\nWith this definition of p, we obtain for an infinite number of documents the following limit for the product of the binomial coefficient and p^k: lim_{N\u2192\u221e} (N choose k) \u00b7 p^k = lim_{N\u2192\u221e} (N \u00b7 (N\u22121) \u00b7 ... \u00b7 (N\u2212k+1))\/k! \u00b7 (\u03bb\/N)^k = \u03bb^k\/k!.\nThe limit is close to the actual value for k << N. For large k, the actual value is smaller than the limit.\nThe limit of (1\u2212p)^(N\u2212k) follows from the limit lim_{N\u2192\u221e} (1 + x\/N)^N = e^x: lim_{N\u2192\u221e} (1 \u2212 p)^(N\u2212k) = lim_{N\u2192\u221e} e^\u2212\u03bb \u00b7 (1 \u2212 \u03bb\/N)^\u2212k = e^\u2212\u03bb.\nAgain, the limit is close to the actual value for k << N. 
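The convergence of the binomial probability towards its limit can be illustrated numerically (a quick sketch of our own, not the paper's implementation): for fixed k and p = \u03bb\/N, binom(N, k, p) approaches (\u03bb^k\/k!) \u00b7 e^\u2212\u03bb as N grows.

```python
import math

def binom_pmf(N, k, p):
    # binom(N, k, p) = (N choose k) * p^k * (1 - p)^(N - k)
    return math.comb(N, k) * p**k * (1 - p)**(N - k)

def poisson_pmf(k, lam):
    # poisson(k, lambda) = lambda^k / k! * e^(-lambda)
    return lam**k / math.factorial(k) * math.exp(-lam)

lam, k = 2.0, 3
for N in (10, 100, 10_000):
    print(N, binom_pmf(N, k, lam / N))   # approaches the Poisson value as N grows
print(poisson_pmf(k, lam))               # the limit, about 0.1804
```

With N = 10,000 the binomial value already agrees with the Poisson limit to about three decimal places, matching the observation that the limit is close to the actual value for k << N.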
For large k, the actual value is larger than the limit.\n4.3 Poisson distribution For an infinite number of events, the Poisson probability function is the limit of the binomial probability function: lim_{N\u2192\u221e} binom(N, k, p) = (\u03bb^k\/k!) \u00b7 e^\u2212\u03bb, i.e. P(k) = poisson(k, \u03bb) := (\u03bb^k\/k!) \u00b7 e^\u2212\u03bb.\nThe probability poisson(0, 1) is equal to e^\u22121 , which is the probability of a maximal informative signal.\nThis shows the relationship of the Poisson distribution and information theory.\nAfter seeing the convergence of the binomial distribution, we can choose the Poisson distribution as an approximation of the independent term noise probability.\nFirst, we define the Poisson noise probability: Definition 4.\nThe Poisson term noise probability: Ppoi (t is noisy|c) := e^\u2212\u03bb \u00b7 \u2211_{k=1}^{n(t)} \u03bb^k\/k!.\nFor independent documents, the Poisson distribution approximates the probability of the disjunction for large n(t), since the independent term noise probability is equal to the sum over the binomial probabilities where at least one of n(t) document containment events is true: Pin (t is noisy|c) = \u2211_{k=1}^{n(t)} (n(t) choose k) \u00b7 p^k \u00b7 (1 \u2212 p)^(N\u2212k), and Pin (t is noisy|c) \u2248 Ppoi (t is noisy|c).\nWe have defined a frequency-based and a Poisson-based probability of being noisy, where the latter is the limit of the independence-based probability of being noisy.\nBefore we present in the final section the usage of the noise probability for defining the probability of being informative, we emphasise in the next section that the results apply to the collection space as well as to the document space.\n5.\nTHE COLLECTION SPACE AND THE DOCUMENT SPACE Consider the dual definitions of retrieval parameters in table 1.\nWe associate a collection space D \u00d7 T with a collection c where D is the set of documents and T is the set of terms in the collection.\nLet ND := |D| and NT := |T| be the number of documents and terms, respectively.\nWe consider a 
document as a subset of T and a term as a subset of D. Let nT (d) := |{t|d \u2208 t}| be the number of terms that occur in the document d, and let nD(t) := |{d|t \u2208 d}| be the number of documents that contain the term t.\nIn a dual way, we associate a document space L \u00d7 T with a document d where L is the set of locations (also referred to as positions; however, we use the letters L and l and not P and p for avoiding confusion with probabilities) and T is the set of terms in the document.\nThe document dimension in a collection space corresponds to the location (position) dimension in a document space.\nThe definition makes explicit that the classical notion of term frequency of a term in a document (also referred to as the within-document term frequency) actually corresponds to the location frequency of a term in a document.\nspace: collection | document\ndimensions: documents and terms | locations and terms\ndocument\/location frequency: nD(t, c): number of documents in which term t occurs in collection c | nL(t, d): number of locations (positions) at which term t occurs in document d\nND(c): number of documents in collection c | NL(d): number of locations (positions) in document d\nterm frequency: nT (d, c): number of terms that document d contains in collection c | nT (l, d): number of terms that location l contains in document d\nNT (c): number of terms in collection c | NT (d): number of terms in document d\nnoise\/occurrence: P(t|c) (term noise) | P(t|d) (term occurrence)\ncontainment: P(d|c) (document) | P(l|d) (location)\ninformativeness: \u2212 ln P(t|c) | \u2212 ln P(t|d)\nconciseness: \u2212 ln P(d|c) | \u2212 ln P(l|d)\nP(informative): ln(P(t|c))\/ln(P(tmin|c)) | ln(P(t|d))\/ln(P(tmin|d))\nP(concise): ln(P(d|c))\/ln(P(dmin|c)) | ln(P(l|d))\/ln(P(lmin|d))\nTable 1: Retrieval parameters\nFor the actual term frequency value, it is common to use the maximal occurrence (number of locations; let lf be the location frequency): tf(t, d) := lf(t, d) := Pfreq (t occurs|d)\/Pfreq (tmax 
occurs|d) = nL(t, d) nL(tmax , d) A further duality is between informativeness and conciseness (shortness of documents or locations): informativeness is based on occurrence (noise), conciseness is based on containment.\nWe have highlighted in this section the duality between the collection space and the document space.\nWe concentrate in this paper on the probability of a term to be noisy and informative.\nThose probabilities are defined in the collection space.\nHowever, the results regarding the term noise and informativeness apply to their dual counterparts: term occurrence and informativeness in a document.\nAlso, the results can be applied to containment of documents and locations.\n6.\nTHE PROBABILITY OF BEING INFORMATIVE We showed in the previous sections that the disjointness assumption leads to frequency-based probabilities and that the independence assumption leads to Poisson probabilities.\nIn this section, we formulate a frequency-based definition and a Poisson-based definition of the probability of being informative and then we compare the two definitions.\nDefinition 5.\nThe frequency-based probability of being informative: Pfreq (t is informative|c) := \u2212 ln n(t) N \u2212 ln 1 N = \u2212 logN n(t) N = 1 \u2212 logN n(t) = 1 \u2212 ln n(t) ln N We define the Poisson-based probability of being informative analogously to the frequency-based probability of being informative (see definition 5).\nDefinition 6.\nThe Poisson-based probability of being informative: Ppoi (t is informative|c) := \u2212 ln e\u2212\u03bb \u00b7 \u00c8n(t) k=1 \u03bbk k!\n\u2212 ln(e\u2212\u03bb \u00b7 \u03bb) = \u03bb \u2212 ln \u00c8n(t) k=1 \u03bbk k!\n\u03bb \u2212 ln \u03bb For the sum expression, the following limit holds: lim n(t)\u2192\u221e n(t) k=1 \u03bbk k!\n= e\u03bb \u2212 1 For \u03bb >> 1, we can alter the noise and informativeness Poisson by starting the sum from 0, since e\u03bb >> 1.\nThen, the minimal Poisson informativeness is poisson(0, \u03bb) = 
e^−λ. We obtain a simplified Poisson probability of being informative:

Ppoi(t is informative|c) ≈ (λ − ln Σ_{k=0}^{n(t)} λ^k/k!) / λ = 1 − (ln Σ_{k=0}^{n(t)} λ^k/k!) / λ

The computation of the Poisson sum requires an optimisation for large n(t). The implementation for this paper exploits the nature of the Poisson density: the Poisson density yields values significantly greater than zero only in an interval around λ.

Consider the illustration of the noise and informativeness definitions in figure 1.

[Figure 1: Probability of being noisy (top) and probability of being informative (bottom), plotted against n(t), the number of documents with term t, for N = 10000. Curves: frequency; independence with p = 1/N and p = ln(N)/N; Poisson with λ = 1000 and λ = 2000; and the two-dimensional Poisson with λ1 = 1000, λ2 = 2000.]

The probability functions displayed are summarised in figure 2, where the simplified Poisson is used in the noise and informativeness graphs.

Figure 2: Probability functions

Frequency Pfreq:
  noise: Def n(t)/N; Interval 1/N ≤ Pfreq ≤ 1.0
  informativeness: Def ln(n(t)/N) / ln(1/N); Interval 0.0 ≤ Pfreq ≤ 1.0
Independence Pin:
  noise: Def 1 − (1−p)^n(t); Interval p ≤ Pin < 1 − e^−λ
  informativeness: Def ln(1 − (1−p)^n(t)) / ln(p); Interval ln(1 − e^−λ)/ln(p) ≤ Pin ≤ 1.0
Poisson Ppoi:
  noise: Def e^−λ · Σ_{k=1}^{n(t)} λ^k/k!; Interval e^−λ · λ ≤ Ppoi < 1 − e^−λ
  informativeness: Def (λ − ln Σ_{k=1}^{n(t)} λ^k/k!) / (λ − ln λ); Interval (λ − ln(e^λ − 1))/(λ − ln λ) ≤ Ppoi ≤ 1.0
Poisson Ppoi simplified:
  noise: Def e^−λ · Σ_{k=0}^{n(t)} λ^k/k!; Interval e^−λ ≤ Ppoi < 1.0
  informativeness: Def (λ − ln Σ_{k=0}^{n(t)} λ^k/k!) / λ; Interval 0.0 < Ppoi ≤ 1.0

The frequency-based noise corresponds to the linear solid curve in the noise figure. With an independence assumption, we obtain the curve in the lower triangle of the noise figure. By changing the parameter p := λ/N of the independence probability, we can lift or lower the independence curve. The noise figure shows the lifting for the value λ := ln N ≈ 9.2. The setting λ = ln N is special in the sense that the frequency-based and the Poisson-based informativeness have the same denominator, namely ln N, and the Poisson sum converges to λ. Whether we can draw more conclusions from this setting is an open question.

We can conclude that the lifting is desirable if we know for a collection that terms occurring in relatively few documents are no guarantee for finding relevant documents, i.e. we assume that rare terms are still relatively noisy. Conversely, we could lower the curve when assuming that frequent terms are not too noisy, i.e. they are considered as being still significantly discriminative. The Poisson probabilities approximate the independence probabilities for large n(t); the approximation is better for larger λ. For n(t) < λ, the noise is zero, whereas for n(t) > λ the noise is one. This radical behaviour can be smoothed by using a multi-dimensional Poisson distribution. Figure 1 shows a Poisson noise based on a two-dimensional Poisson:

poisson(k, λ1, λ2) := π · e^−λ1 · λ1^k/k! + (1 − π) · e^−λ2 · λ2^k/k!

The two-dimensional Poisson shows a plateau between λ1 = 1000 and λ2 = 2000; we used here π = 0.5. The idea behind this setting is that terms that occur in less than 1000 documents are considered to be not noisy (i.e.
they are informative), that terms between 1000 and 2000 are half noisy, and that terms with more than 2000 are definitely noisy.

For the informativeness, we observe that the radical behaviour of Poisson is preserved. The plateau here is approximately at 1/6, and it is important to realise that this plateau is not obtained with the multi-dimensional Poisson noise using π = 0.5. The logarithm of the noise is normalised by the logarithm of a very small number, namely 0.5 · e^−1000 + 0.5 · e^−2000. That is why the informativeness is close to one only for very little noise, whereas for a bit of noise the informativeness drops towards zero. This effect can be controlled by using small values for π such that the noise in the interval [λ1; λ2] remains very small. The setting π = e^−2000/6 leads to noise values of approximately e^−2000/6 in the interval [λ1; λ2]; the logarithms then lead to 1/6 for the informativeness.

The independence-based and frequency-based informativeness functions do not differ as much as the noise functions do. However, for the independence-based probability of being informative, we can control the average informativeness via the definition p := λ/N, whereas the control over the frequency-based probability is limited, as we address next. For the frequency-based idf, the gradient is monotonically decreasing, and we obtain for different collections the same distances of idf values, i.e.
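The plateau behaviour of the two-dimensional Poisson noise described above can be reproduced numerically (an illustrative sketch, not the paper's implementation; λ1 = 1000, λ2 = 2000 and π = 0.5 follow the text). Since e^−1000 underflows double precision, the density is evaluated in log-space:

```python
import math

def poisson_pmf(k, lam):
    # Log-space evaluation: exp(-lam) alone underflows for lam = 1000,
    # but -lam + k*ln(lam) - ln(k!) is well behaved near k ~ lam.
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def two_dim_poisson_noise(n_t, lam1=1000, lam2=2000, pi=0.5):
    """P(t is noisy|c) under the two-dimensional Poisson, summed for k = 1..n(t)."""
    return sum(pi * poisson_pmf(k, lam1) + (1 - pi) * poisson_pmf(k, lam2)
               for k in range(1, n_t + 1))

# Plateau: ~0 below lambda1, ~pi between lambda1 and lambda2, ~1 above lambda2.
for n_t in (500, 1500, 2500):
    print(n_t, round(two_dim_poisson_noise(n_t), 3))
```

The printed values step from roughly 0 to 0.5 to 1 as n(t) crosses λ1 and λ2, which is exactly the plateau discussed in the text.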
the parameter N does not affect the distance. For an illustration, consider the distance between the value idf(t_{n+1}) of a term t_{n+1} that occurs in n+1 documents and the value idf(t_n) of a term t_n that occurs in n documents:

idf(t_n) − idf(t_{n+1}) = ln((n+1)/n)

The first three values of the distance function are:

idf(t_1) − idf(t_2) = ln(2/1) = 0.69
idf(t_2) − idf(t_3) = ln(3/2) = 0.41
idf(t_3) − idf(t_4) = ln(4/3) = 0.29

For the Poisson-based informativeness, the gradient decreases first slowly for small n(t), then rapidly near n(t) ≈ λ, and then it grows again slowly for large n(t).

In conclusion, we have seen that the Poisson-based definition provides more control and parameter possibilities than the frequency-based definition does. Whereas more control and more parameters promise to be positive for the personalisation of retrieval systems, they bear at the same time the danger of just too many parameters. The framework presented in this paper raises awareness about the probabilistic and information-theoretic meanings of the parameters. The parallel definitions of the frequency-based probability and the Poisson-based probability of being informative made the underlying assumptions explicit. The frequency-based probability can be explained by binary occurrence, constant containment and disjointness of documents. Independence of documents leads to Poisson, where we have to be aware that Poisson approximates the probability of a disjunction for a large number of events, but not for a small number. This theoretical result explains why experimental investigations on Poisson (see [7]) show that a Poisson estimation works better for frequent (bad, noisy) terms than for rare (good, informative) terms. In addition to the collection-wide parameter setting, the framework presented here allows for document-dependent settings, as explained for the independence probability. This is in particular interesting for
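That the spacing of neighbouring idf values does not depend on N can be verified directly (an illustrative sketch; N = 10000 is an arbitrary example value):

```python
import math

def idf(n_t, N):
    """Inverse document frequency: -ln(n(t)/N)."""
    return -math.log(n_t / N)

N = 10_000
for n in (1, 2, 3):
    # idf(t_n) - idf(t_{n+1}) = ln((n+1)/n), independent of N
    print(n, round(idf(n, N) - idf(n + 1, N), 2))   # prints 0.69, 0.41, 0.29
```

The same distances are obtained for any collection size, confirming that N only shifts the idf values without changing their spacing.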
heterogeneous and structured collections, since documents are different in nature (size, quality, root document, sub-document), and therefore binary occurrence and constant containment are less appropriate than in relatively homogeneous collections.

7. SUMMARY

The definition of the probability of being informative transforms the informative interpretation of the idf into a probabilistic interpretation, and we can use the idf-based probability in probabilistic retrieval approaches. We showed that the classical definition of the noise (document frequency) in the inverse document frequency can be explained by three assumptions: the term within-document occurrence probability is binary, the document containment probability is constant, and the document containment events are disjoint. By explicitly and mathematically formulating the assumptions, we showed that the classical definition of idf does not take into account parameters such as the different nature (size, quality, structure, etc.) of documents in a collection, or the different nature of terms (coverage, importance, position, etc.)
in a document. We discussed that the absence of those parameters is compensated by a leverage effect of the within-document term occurrence probability and the document containment probability. By applying an independence rather than a disjointness assumption for the document containment, we could establish a link between the noise probability (term occurrence in a collection), information theory and Poisson. From the frequency-based and the Poisson-based probabilities of being noisy, we derived the frequency-based and Poisson-based probabilities of being informative. The frequency-based probability is relatively smooth, whereas the Poisson probability is radical in distinguishing between noisy or not noisy, and informative or not informative, respectively. We showed how to smooth the radical behaviour of Poisson with a multi-dimensional Poisson.

The explicit and mathematical formulation of the idf- and Poisson-assumptions is the main result of this paper. Also, the paper emphasises the duality of idf and tf, collection space and document space, respectively. Thus, the result applies to term occurrence and document containment in a collection, and it applies to term occurrence and position containment in a document. This theoretical framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models. The links between independence-based noise as document frequency, the probabilistic interpretation of idf, information theory and Poisson described in this paper may lead to variable probabilistic idf and tf definitions and combinations as required in advanced and personalised information retrieval systems.

Acknowledgment: I would like to thank Mounia Lalmas, Gabriella Kazai and Theodora Tsikrika for their comments on the, as they said, "heavy" pieces. My thanks also go to the meta-reviewer who advised me to improve the presentation to make it less "formidable" and more accessible for those without a theoretic
bent. This work was funded by a research fellowship from Queen Mary University of London.

8. REFERENCES
[1] A. Aizawa. An information-theoretic perspective of tf-idf measures. Information Processing and Management, 39:45-65, January 2003.
[2] G. Amati and C. J. van Rijsbergen. Term frequency normalization via Pareto distributions. In 24th BCS-IRSG European Colloquium on IR Research, Glasgow, Scotland, 2002.
[3] R. K. Belew. Finding Out About. Cambridge University Press, 2000.
[4] A. Bookstein and D. Swanson. Probabilistic models for automatic indexing. Journal of the American Society for Information Science, 25:312-318, 1974.
[5] I. N. Bronstein. Taschenbuch der Mathematik. Harri Deutsch, Thun, Frankfurt am Main, 1987.
[6] K. Church and W. Gale. Poisson mixtures. Natural Language Engineering, 1(2):163-190, 1995.
[7] K. W. Church and W. A. Gale. Inverse document frequency: A measure of deviations from Poisson. In Third Workshop on Very Large Corpora, ACL Anthology, 1995.
[8] T. Lafouge and C. Michel. Links between information construction and information gain: Entropy and bibliometric distribution. Journal of Information Science, 27(1):39-49, 2001.
[9] E. Margulis. N-Poisson document modelling. In Proceedings of the 15th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 177-189, 1992.
[10] S. E. Robertson and S. Walker. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 232-241, 1994. Springer-Verlag, London.
[11] S. Wong and Y. Yao. An information-theoretic measure of term specificity. Journal of the American Society for Information Science, 43(1):54-61, 1992.
[12] S. Wong and Y. Yao. On modeling information retrieval with probabilistic inference. ACM Transactions on Information Systems, 13(1):38-68, 1995.

A Frequency-based and a Poisson-based Definition of the Probability of Being Informative

ABSTRACT

This paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf). We show that an intuitive idf-based probability function for the probability of a term being informative assumes disjoint document events. By assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative. The framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models.

1. INTRODUCTION AND BACKGROUND

The inverse document frequency (idf) is one of the most successful parameters for a relevance-based ranking of retrieved objects. With N being the total number of documents, and n(t) being the number of documents in which term t occurs, the idf is defined as follows:

idf(t) := −log(n(t)/N)

Ranking based on the sum of the idf values of the query terms that occur in the retrieved documents works well; this has been shown in numerous applications. Also, it is well known that the combination of a document-specific term weight and idf works better than idf alone. This approach is known as tf-idf, where tf(t, d) (0 ≤ tf(t, d) ≤ 1) is the so-called term frequency of term t in document d. The idf reflects the discriminating power (informativeness) of a term, whereas the tf reflects the occurrence of a term. The idf alone works better than the tf alone does. An explanation might be the problem of tf with terms that occur in many documents; let us refer to those terms as "noisy" terms. We use the notion of "noisy" terms rather than "frequent" terms, since "frequent" leaves open whether we refer to the document frequency of a term in a collection or to the so-called term frequency (also
referred to as within-document frequency) of a term in a document. We associate "noise" with the document frequency of a term in a collection, and we associate "occurrence" with the within-document frequency of a term. The tf of a noisy term might be high in a document, but noisy terms are not good candidates for representing a document. Therefore, the removal of noisy terms (known as "stopword removal") is essential when applying tf. In a tf-idf approach, the removal of stopwords is conceptually obsolete if stopwords are just words with a low idf. From a probabilistic point of view, tf is a value with a frequency-based probabilistic interpretation, whereas idf has an "informative" rather than a probabilistic interpretation. The missing probabilistic interpretation of idf is a problem in probabilistic retrieval models where we combine uncertain knowledge of different dimensions (e.g.: informativeness of terms, structure of documents, quality of documents, age of documents, etc.)
such that a good estimate of the probability of relevance is achieved. An intuitive solution is a normalisation of idf such that we obtain values in the interval [0; 1]. For example, consider a normalisation based on the maximal idf value. Let T be the set of terms occurring in a collection:

Pfreq(t is informative|c) := idf(t) / max{idf(t') | t' ∈ T}

This frequency-based probability function covers the interval [0; 1] if the minimal idf is equal to zero, which is the case if we have at least one term that occurs in all documents. Can we interpret Pfreq, the normalised idf, as the probability that the term is informative?

When investigating the probabilistic interpretation of the normalised idf, we made several observations related to disjointness and independence of document events. These observations are reported in section 3. We show in section 3.1 that the frequency-based noise probability n(t)/N used in the classic idf definition can be explained by three assumptions: binary term occurrence, constant document containment and disjointness of document containment events. In section 3.2 we show that by assuming independence of documents, we obtain 1 − e^−1 ≈ 1 − 0.37 as the upper bound of the noise probability of a term. The value e^−1 is related to the logarithm, and we investigate in section 3.3 the link to information theory. In section 4, we link the results of the previous sections to probability theory. We show the steps from possible worlds to the binomial distribution and the Poisson distribution. In section 5, we emphasise that the theoretical framework of this paper is applicable for both idf
application of the Poisson distribution in IR, since a main part of the work presented in this paper addresses the underlying assumptions of Poisson. [4] proposes a 2-Poisson model that takes into account the different nature of relevant and non-relevant documents, rare terms (content words) and frequent terms (noisy terms, function words, stopwords). [9] shows experimentally that most of the terms (words) in a collection are distributed according to a low-dimension n-Poisson model. [10] uses a 2-Poisson model for including term frequency-based probabilities in the probabilistic retrieval model. The non-linear scaling of the Poisson function showed significant improvement compared to a linear frequency-based probability. The Poisson model was here applied to the term frequency of a term in a document. We will generalise the discussion by pointing out that document frequency and term frequency are dual parameters in the collection space and the document space, respectively. Our discussion of the Poisson distribution focuses on the document frequency in a collection rather than on the term frequency in a document. [7] and [6] address the deviation of idf and Poisson, and apply Poisson mixtures to achieve better Poisson-based estimates. The results proved again experimentally that a one-dimensional Poisson does not work for rare terms; therefore Poisson mixtures and additional parameters are proposed. [3], section 3.3, illustrates and summarises comprehensively the relationships between frequencies, probabilities and Poisson. Different definitions of idf are put into context and a notion of "noise" is defined, where noise is viewed as the complement of idf. We use in our paper a different notion of noise: we consider a frequency-based noise that corresponds to the document frequency, and we consider a term noise that is based on the independence of document events. [11], [12], [8] and [1] link frequencies and probability estimation to information
theory. [12] establishes a framework in which information retrieval models are formalised based on probabilistic inference. A key component is the use of a space of disjoint events, where the framework mainly uses terms as disjoint events. The probability of being informative defined in our paper can be viewed as the probability of the disjoint terms in the term space of [12]. [8] address entropy and bibliometric distributions. Entropy is maximal if all events are equiprobable, and the frequency-based Lotka law (N/i^λ is the number of scientists that have written i publications, where N and λ are distribution parameters), Zipf and the Pareto distribution are related. The Pareto distribution is the continuous case of the Lotka distribution, and Lotka and Zipf show equivalences. The Pareto distribution is used by [2] for term frequency normalisation. The Pareto distribution compares to the Poisson distribution in the sense that Pareto is "fat-tailed", i.e. Pareto assigns larger probabilities to large numbers of events than Poisson distributions do. This makes Pareto interesting, since Poisson is felt to be too radical on frequent events. We restrict ourselves in this paper to the discussion of Poisson; however, our results show that indeed a smoother distribution than Poisson promises to be a good candidate for improving the estimation of probabilities in information retrieval. [1] establishes a theoretical link between tf-idf and information theory, and the theoretical research on the meaning of tf-idf "clarifies the statistical model on which the different measures are commonly based". This motivation matches the motivation of our paper: we investigate theoretically the assumptions of classical idf and Poisson for a better understanding of parameter estimation and combination.

3. FROM DISJOINT TO INDEPENDENT
3.1 Binary occurrence, constant containment and disjointness of documents
3.2 The upper bound of the noise probability
3.3 The probability of a maximal
informative signal

4. THE LINK TO PROBABILITY THEORY
4.1 Possible Worlds
4.2 Binomial distribution
4.3 Poisson distribution

5. THE COLLECTION SPACE AND THE DOCUMENT SPACE

6. THE PROBABILITY OF BEING INFORMATIVE

7. SUMMARY
within-document occurrence probability is binary, the document containment probability is constant, and the document containment events are disjoint.\nWe discussed that the absence of those parameters is compensated by a leverage effect of the within-document term occurrence probability and the document containment probability.\nBy applying an independence rather a disjointness assumption for the document containment, we could establish a link between the noise probability (term occurrence in a collection), information theory and Poisson.\nFrom the frequency-based and the Poisson-based probabilities of being noisy, we derived the frequency-based and Poisson-based probabilities of being informative.\nThe frequency-based probability is relatively smooth whereas the Poisson probability is radical in distinguishing between noisy or not noisy, and informative or not informative, respectively.\nWe showed how to smoothen the radical behaviour of Poisson with a multidimensional Poisson.\nAlso, the paper emphasises the duality of idf and tf, collection space and document space, respectively.\nThus, the result applies to term occurrence and document containment in a collection, and it applies to term occurrence and position containment in a document.\nThis theoretical framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models.\nThe links between indepence-based noise as document frequency, probabilistic interpretation of idf, information theory and Poisson described in this paper may lead to variable probabilistic idf and tf definitions and combinations as required in advanced and personalised information retrieval systems.","lvl-2":"A Frequency-based and a Poisson-based Definition of the Probability of Being Informative\nABSTRACT\nThis paper reports on theoretical investigations about the assumptions underlying the inverse document frequency (idf).\nWe show that an intuitive idf - based probability function 
for the probability of a term being informative assumes disjoint document events. By assuming documents to be independent rather than disjoint, we arrive at a Poisson-based probability of being informative. The framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models.

1. INTRODUCTION AND BACKGROUND

The inverse document frequency (idf) is one of the most successful parameters for a relevance-based ranking of retrieved objects. With N being the total number of documents, and n(t) being the number of documents in which term t occurs, the idf is defined as follows:

idf(t) := −log(n(t)/N)

Ranking based on the sum of the idf-values of the query terms that occur in the retrieved documents works well; this has been shown in numerous applications. Also, it is well known that the combination of a document-specific term weight and idf works better than idf alone. This approach is known as tf-idf, where tf(t, d) (0 ≤ tf(t, d) ≤ 1) is the so-called term frequency of term t in document d. The idf reflects the discriminating power (informativeness) of a term, whereas the tf reflects the occurrence of a term. The idf alone works better than the tf alone does. An explanation might be the problem of tf with terms that occur in many documents; let us refer to those terms as "noisy" terms. We use the notion of "noisy" terms rather than "frequent" terms since "frequent" leaves open whether we refer to the document frequency of a term in a collection or to the so-called term frequency (also referred to as within-document frequency) of a term in a document. We associate "noise" with the document frequency of a term in a collection, and we associate "occurrence" with the within-document frequency of a term. The tf of a noisy term might be high in a document, but noisy terms are not good candidates for representing a document. Therefore, the removal of noisy terms (known as "stopword removal") is essential
when applying tf. In a tf-idf approach, the removal of stopwords is conceptually obsolete if stopwords are just words with a low idf.

From a probabilistic point of view, tf is a value with a frequency-based probabilistic interpretation whereas idf has an "informative" rather than a probabilistic interpretation. The missing probabilistic interpretation of idf is a problem in probabilistic retrieval models where we combine uncertain knowledge of different dimensions (e.g. informativeness of terms, structure of documents, quality of documents, age of documents, etc.) such that a good estimate of the probability of relevance is achieved. An intuitive solution is a normalisation of idf such that we obtain values in the interval [0; 1]. For example, consider a normalisation based on the maximal idf-value. Let T be the set of terms occurring in a collection:

P_freq(t is informative) := idf(t) / max{idf(t') | t' ∈ T}

This frequency-based probability function covers the interval [0; 1] if the minimal idf is equal to zero, which is the case if we have at least one term that occurs in all documents. Can we interpret P_freq, the normalised idf, as the probability that the term is informative?

When investigating the probabilistic interpretation of the normalised idf, we made several observations related to disjointness and independence of document events. These observations are reported in section 3. We show in section 3.1 that the frequency-based noise probability n(t)/N used in the classic idf-definition can be explained by three assumptions: binary term occurrence, constant document containment, and disjointness of document containment events. In section 3.2 we show that by assuming independence of documents, we obtain 1 − e^(−1) ≈ 1 − 0.37 as the upper bound of the noise probability of a term. The value e^(−1) is related to the logarithm, and we investigate in section 3.3 the link to information theory. In section 4, we link the results of the previous sections to probability theory. We show the steps from
possible worlds to binomial distribution and Poisson distribution. In section 5, we emphasise that the theoretical framework of this paper is applicable for both idf and tf. Finally, in section 6, we base the definition of the probability of being informative on the results of the previous sections and compare frequency-based and Poisson-based definitions.

2. BACKGROUND

The relationship between frequencies, probabilities and information theory (entropy) has been the focus of many researchers. In this background section, we focus on work that investigates the application of the Poisson distribution in IR, since a main part of the work presented in this paper addresses the underlying assumptions of Poisson. [4] proposes a 2-Poisson model that takes into account the different nature of relevant and non-relevant documents, rare terms (content words) and frequent terms (noisy terms, function words, stopwords). [9] shows experimentally that most of the terms (words) in a collection are distributed according to a low-dimension n-Poisson model. [10] uses a 2-Poisson model for including term frequency-based probabilities in the probabilistic retrieval model. The non-linear scaling of the Poisson function showed significant improvement compared to a linear frequency-based probability. The Poisson model was here applied to the term frequency of a term in a document. We will generalise the discussion by pointing out that document frequency and term frequency are dual parameters in the collection space and the document space, respectively. Our discussion of the Poisson distribution focuses on the document frequency in a collection rather than on the term frequency in a document. [7] and [6] address the deviation of idf and Poisson, and apply Poisson mixtures to achieve better Poisson-based estimates. The results proved again experimentally that a one-dimensional Poisson does not work for rare terms; therefore Poisson mixtures and additional parameters are
proposed. [3], section 3.3, illustrates and summarises comprehensively the relationships between frequencies, probabilities and Poisson. Different definitions of idf are put into context and a notion of "noise" is defined, where noise is viewed as the complement of idf. We use in our paper a different notion of noise: we consider a frequency-based noise that corresponds to the document frequency, and we consider a term noise that is based on the independence of document events.

[11], [12], [8] and [1] link frequencies and probability estimation to information theory. [12] establishes a framework in which information retrieval models are formalised based on probabilistic inference. A key component is the use of a space of disjoint events, where the framework mainly uses terms as disjoint events. The probability of being informative defined in our paper can be viewed as the probability of the disjoint terms in the term space of [12]. [8] address entropy and bibliometric distributions. Entropy is maximal if all events are equiprobable, and the frequency-based Lotka law (N/i^λ is the number of scientists that have written i publications, where N and λ are distribution parameters), Zipf and the Pareto distribution are related. The Pareto distribution is the continuous case of the Lotka law, and Lotka and Zipf show equivalences. The Pareto distribution is used by [2] for term frequency normalisation. The Pareto distribution compares to the Poisson distribution in the sense that Pareto is "fat-tailed", i.e.
Pareto assigns larger probabilities to large numbers of events than Poisson distributions do. This makes Pareto interesting, since Poisson is felt to be too radical on frequent events. We restrict ourselves in this paper to the discussion of Poisson; however, our results show that indeed a smoother distribution than Poisson promises to be a good candidate for improving the estimation of probabilities in information retrieval. [1] establishes a theoretical link between tf-idf and information theory, and the theoretical research on the meaning of tf-idf "clarifies the statistical model on which the different measures are commonly based". This motivation matches the motivation of our paper: We investigate theoretically the assumptions of classical idf and Poisson for a better understanding of parameter estimation and combination.

3. FROM DISJOINT TO INDEPENDENT

We define and discuss in this section three probabilities: the frequency-based noise probability (definition 1), the total noise probability for disjoint documents (definition 2), and the noise probability for independent documents (definition 3).

3.1 Binary occurrence, constant containment and disjointness of documents

We show in this section that the frequency-based noise probability n(t)/N in the idf definition can be explained as a total probability with binary term occurrence, constant document containment and disjointness of document containments. We refer to a probability function as binary if for all events the probability is either 1.0 or 0.0. The occurrence probability P(t|d) is binary if P(t|d) is equal to 1.0 if t ∈ d, and P(t|d) is equal to 0.0 otherwise. We refer to a probability function as constant if for all events the probability is equal. The document containment probability reflects the chance that a document occurs in a collection. This containment probability is constant if we have no information about the document containment or we ignore that documents differ in
containment. Containment could be derived, for example, from the size, quality, age, links, etc. of a document. For a constant containment in a collection with N documents, 1/N is often assumed as the containment probability. We generalise this definition and introduce the constant λ, where 0 < λ < N. The containment of a document d depends on the collection c; this is reflected by the notation P(d|c) used for the containment of a document. For disjoint documents that cover the whole event space, we set λ = 1 and obtain Σ_d P(d|c) = 1.0. Next, we define the frequency-based noise probability and the total noise probability for disjoint documents. We introduce the event notation "t is noisy" and "t occurs" for making the difference between the noise probability P(t is noisy|c) in a collection and the occurrence probability P(t occurs|d) in a document more explicit, thereby keeping in mind that the noise probability corresponds to the occurrence probability of a term in a collection. Now, we can formulate a theorem that makes explicit the assumptions that explain the classical idf.

THEOREM 1. The frequency-based noise probability: For binary occurrence, constant containment, and disjoint document containment events, the total noise probability is P(t is noisy|c) = n(t)/N.

PROOF. The assumptions are: binary occurrence (P(t occurs|d) is 1.0 if t ∈ d, and 0.0 otherwise), constant containment (P(d|c) = 1/N), and disjointness of the document containment events. The total probability over the disjoint documents then yields

P(t is noisy|c) = Σ_{d: t∈d} P(t occurs|d) · P(d|c) = n(t)/N

The term occurrence for small documents tends to be larger than for large documents, whereas the containment for small documents tends to be smaller than for large documents. From that point of view, idf means that P(t ∧ d|c) is constant for all d in which t occurs, and P(t ∧ d|c) is zero otherwise. The occurrence and containment can be term specific. For example, set P(t ∧ d|c) = 1/N_D(c) if t occurs in d, where N_D(c) is the number of documents in collection c (we used before just N). We choose a document-dependent occurrence P(t|d) := 1/N_T(d), i.e. the occurrence probability is equal to the inverse of N_T(d), which is the total number of terms in document d.
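As a concrete check of the total-probability view of the classical idf, here is a minimal sketch in Python. The toy collection and all names in it are invented for illustration and are not taken from the paper; term "a" is made to occur in every document so that the minimal idf is zero and the normalised idf covers the interval [0; 1].

```python
import math

# Toy collection (invented): each document is the set of terms it contains.
collection = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "d"},
    {"a", "b", "d"},
]
N = len(collection)
terms = set().union(*collection)

def n(t):
    # Document frequency: number of documents in which term t occurs.
    return sum(1 for d in collection if t in d)

def noise_freq(t):
    # Frequency-based noise probability P(t is noisy | c) = n(t)/N.
    return n(t) / N

def noise_total(t):
    # The same value as a total probability: binary occurrence
    # P(t occurs | d) in {0, 1}, constant containment P(d | c) = 1/N,
    # summed over the (disjoint) document events.
    return sum((1.0 if t in d else 0.0) * (1.0 / N) for d in collection)

def idf(t):
    # Classical idf: -log(n(t)/N) = log(N/n(t)).
    return math.log(N / n(t))

max_idf = max(idf(t) for t in terms)

def p_freq_informative(t):
    # Normalised idf: the frequency-based probability of being informative.
    return idf(t) / max_idf

for t in sorted(terms):
    assert abs(noise_freq(t) - noise_total(t)) < 1e-12
    print(t, n(t), round(idf(t), 4), round(p_freq_informative(t), 4))
```

Running the sketch prints, for each term, its document frequency, idf and normalised idf: term "c", which occurs in a single document, gets P_freq = 1.0, while term "a", which occurs in all documents, gets P_freq = 0.0.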
Next, we choose the containment P(d|c) := N_T(d)/N_T(c) · N_T(c)/N_D(c), where N_T(d)/N_T(c) is a document length normalisation (number of terms in document d divided by the number of terms in collection c), and N_T(c)/N_D(c) is a constant factor of the collection (number of terms in collection c divided by the number of documents in collection c). We obtain P(t ∧ d|c) = 1/N_D(c).

In a tf-idf retrieval function, the tf-component reflects the occurrence probability of a term in a document. This is a further explanation why we can estimate the idf with a simple P(t|d), since the combined tf-idf contains the occurrence probability. The containment probability corresponds to a document normalisation (document length normalisation, pivoted document length) and is normally attached to the tf-component or the tf-idf product.

The above result is not a surprise, but it is a mathematical formulation of assumptions that can be used to explain the classical idf. The assumptions make explicit that the different types of term occurrence in documents (frequency of a term, importance of a term, position of a term, document part where the term occurs, etc.) and the different types of document containment (size, quality, age, etc.) are ignored, and document containments are considered as disjoint events. From the assumptions, we can conclude that idf (frequency-based noise, respectively) is a relatively simple but strict estimate. Still, idf works well. This could be explained by a leverage effect that justifies the binary occurrence and constant containment: the term occurrence for small documents tends to be larger than for large documents, whereas the containment for small documents tends to be smaller than for large documents.

The disjointness assumption is typical for frequency-based probabilities. From a probability theory point of view, we can consider documents as disjoint events in order to achieve a sound theoretical model for explaining the classical idf. But does disjointness reflect the real world, where the containment of a document appears to be independent of the containment of another document? In the next section, we replace the disjointness assumption by the independence assumption.

3.2 The upper bound of the noise probability for independent documents

For independent documents, we compute the probability of a disjunction as usual, namely as the complement of the probability of the conjunction of the negated events:

P(t is noisy|c) = 1 − Π_{d: t∈d} (1 − P(t ∧ d|c))

The noise probability can be considered as the conjunction of the term occurrence and the document containment. For disjoint documents, this view of the noise probability led to definition 2. For independent documents, we use now the conjunction of negated events. With binary occurrence and a constant containment P(d|c) := λ/N, we obtain the term noise of a term t that occurs in n(t) documents:

P_in(t is noisy|c) = 1 − (1 − λ/N)^n(t)

For binary occurrence and disjoint documents, the containment probability was 1/N. Now, with independent documents, we can use λ as a collection parameter that controls the average containment probability. We show through the next theorem that the upper bound of the noise probability depends on λ.

THEOREM 2. The upper bound of the noise probability: For independent documents, the upper bound of the noise probability of a term is 1 − e^(−λ).

PROOF. The upper bound of the independent noise probability follows from the limit lim_{N→∞} (1 + x/N)^N = e^x (see any comprehensive math book, for example, [5], for the convergence equation of the Euler function). With x = −λ, we obtain:

lim_{N→∞} (1 − λ/N)^N = e^(−λ)

For the term noise, we have:

P_in(t is noisy|c) = 1 − (1 − λ/N)^n(t)

P_in(t is noisy|c) is strictly monotonous: the noise of a term t_n is less than the noise of a term t_{n+1}, where t_n occurs in n documents and t_{n+1} occurs in n+1 documents. Therefore, a term with n = N has the largest noise probability. For a collection with infinitely many documents, the upper bound of the noise probability for terms t_N that occur in all documents becomes:

lim_{N→∞} (1 − (1 − λ/N)^N) = 1 − e^(−λ)

By applying an independence rather than a disjointness assumption, we obtain the probability e^(−1) that a term is not noisy even if the term does occur in all
documents. In the disjoint case, the noise probability is one for a term that occurs in all documents. If we view P(d|c) := λ/N as the average containment, then λ is large for a term that occurs mostly in large documents, and λ is small for a term that occurs mostly in small documents. Thus, the noise of a term t is large if t occurs in n(t) large documents, and the noise is smaller if t occurs in small documents. Alternatively, we can assume a constant containment and a term-dependent occurrence. If we assume P(d|c) := 1, then P(t|d) := λ/N can be interpreted as the average probability that t represents a document. The common assumption is that the average containment or occurrence probability is proportional to n(t). However, here is additional potential: the statistical laws (see [3] on Luhn and Zipf) indicate that the average probability could follow a normal distribution, i.e. small probabilities for small n(t) and large n(t), and larger probabilities for medium n(t). For the monotonous case we investigate here, the noise of a term with n(t) = 1 is equal to 1 − (1 − λ/N) = λ/N, and the noise of a term with n(t) = N is close to 1 − e^(−λ). In the next section, we relate the value e^(−λ) to information theory.

3.3 The probability of a maximal informative signal

The probability e^(−1) is special in the sense that a signal with that probability is a signal with maximal information, as derived from the entropy definition. Consider the definition of the entropy contribution H(t) of a signal t:

H(t) := −P(t) · ln P(t)

We form the first derivative for computing the optimum:

dH(t)/dP(t) = −(ln P(t) + 1)

The entropy contribution H(t) is maximal for P(t) = e^(−1). This result does not depend on the base of the logarithm: for base b, the derivative is −(log_b P(t) + log_b e), which is likewise zero for P(t) = e^(−1). We summarise this result in the following theorem:

THEOREM 3. The probability of a maximal informative signal: The probability Pmax = e^(−1) ≈ 0.37 is the probability of a maximal informative signal. The entropy of a maximal informative signal is Hmax = e^(−1).

PROOF. The probability and entropy follow from the derivation above.

The complement of the maximal noise probability is e^(−λ), and we are looking now for a generalisation of the entropy definition such that e^(−λ) is the probability of a maximal informative signal. We can generalise the entropy definition by computing the integral of λ + ln P(t), i.e. this derivative is zero for P(t) = e^(−λ). We obtain a generalised entropy:

H_λ(t) := −P(t) · (λ − 1 + ln P(t))

The generalised entropy corresponds for λ = 1 to the classical entropy. By moving from disjoint to independent documents, we have established a link between the complement of the noise probability of a term that occurs in all documents and information theory. Next, we link independent documents to probability theory.

4. THE LINK TO PROBABILITY THEORY

We review for independent documents three concepts of probability theory: possible worlds, binomial distribution and Poisson distribution.

4.1 Possible Worlds

Each conjunction of document events (for each document, we consider two document events: the document can be true or false) is associated with a so-called possible world. For example, consider the eight possible worlds for three documents (N = 3). With each world w, we associate a probability µ(w), which is equal to the product of the single probabilities of the document events. The sum over the possible worlds in which k documents are true and N − k documents are false is equal to the probability function of the binomial distribution, since the binomial coefficient yields the number of possible worlds in which k documents are true.

4.2 Binomial distribution

The binomial probability function yields the probability that k of N events are true, where each event is true with the single event probability p. The single event probability is usually defined as p := λ/N, i.e.
p is inversely proportional to N, the total number of events. With this definition of p, we obtain for an infinite number of documents the following limit for the product of the binomial coefficient and p^k:

lim_{N→∞} (N choose k) · (λ/N)^k = λ^k / k!

The limit is close to the actual value for k << N. For large k, the actual value is smaller than the limit. The limit of (1 − p)^(N−k) follows from the limit lim_{N→∞} (1 + x/N)^N = e^x:

lim_{N→∞} (1 − λ/N)^(N−k) = e^(−λ)

Again, the limit is close to the actual value for k << N. For large k, the actual value is larger than the limit.

4.3 Poisson distribution

For an infinite number of events, the Poisson probability function is the limit of the binomial probability function:

poisson(k, λ) := (λ^k / k!) · e^(−λ)

The probability poisson(0, 1) is equal to e^(−1), which is the probability of a maximal informative signal. This shows the relationship of the Poisson distribution and information theory. After seeing the convergence of the binomial distribution, we can choose the Poisson distribution as an approximation of the independent term noise probability. First, we define the Poisson noise probability:

P_poisson(t is noisy|c) := Σ_{k=1}^{n(t)} (λ^k / k!) · e^(−λ)

For independent documents, the Poisson distribution approximates the probability of the disjunction for large n(t), since the independent term noise probability is equal to the sum over the binomial probabilities where at least one of the n(t) document containment events is true. We have defined a frequency-based and a Poisson-based probability of being noisy, where the latter is the limit of the independence-based probability of being noisy. Before we present in the final section the usage of the noise probability for defining the probability of being informative, we emphasise in the next section that the results apply to the collection space as well as to the document space.

5. THE COLLECTION SPACE AND THE DOCUMENT SPACE

Consider the dual definitions of retrieval parameters in table 1. We associate a collection space D × T with a collection c, where D is the set of documents and T is the set of terms in the collection. Let N_D := |D| and N_T := |T| be the number of documents and terms, respectively. We consider a document as a subset of T and a term as a subset of D. Let n_T(d) := |{t | t ∈ d}| be the number of terms that occur in the document d, and let n_D(t) := |{d | t ∈ d}| be the number of documents that contain the term t. In a dual way, we associate a document space L × T with a document d, where L is the set of locations (also referred to as positions; however, we use the letters L and l and not P and p for avoiding confusion with probabilities) and T is the set of terms in the document. The document dimension in a collection space corresponds to the location (position) dimension in a document space. The definition makes explicit that the classical notion of term frequency of a term in a document (also referred to as the within-document term frequency) actually corresponds to the location frequency of a term in a document.

Table 1: Retrieval parameters

For the actual term frequency value, it is common to use the maximal occurrence (number of locations; let lf be the location frequency). A further duality is between informativeness and conciseness (shortness of documents or locations): informativeness is based on occurrence (noise), conciseness is based on containment.

We have highlighted in this section the duality between the collection space and the document space. We concentrate in this paper on the probability of a term to be noisy and informative. Those probabilities are defined in the collection space. However, the results regarding the term noise and informativeness apply to their dual counterparts: term occurrence and informativeness in a document. Also, the results can be applied to containment of documents and locations.

6. THE PROBABILITY OF BEING INFORMATIVE

We showed in the previous sections that the disjointness assumption leads to frequency-based probabilities and that the independence assumption leads to Poisson
probabilities. In this section, we formulate a frequency-based definition and a Poisson-based definition of the probability of being informative, and then we compare the two definitions. We define the Poisson-based probability of being informative analogously to the frequency-based probability of being informative (see definition 5). For the sum expression, the following limit holds:

lim_{n(t)→∞} Σ_{k=1}^{n(t)} λ^k / k! = e^λ − 1

For λ >> 1, we can alter the noise and informativeness Poisson by starting the sum from 0, since e^λ >> 1. Then, the minimal Poisson informativeness is poisson(0, λ) = e^(−λ). We obtain a simplified Poisson probability of being informative. The computation of the Poisson sum requires an optimisation for large n(t). The implementation for this paper exploits the nature of the Poisson density: the Poisson density yields only values significantly greater than zero in an interval around λ.

Consider the illustration of the noise and informativeness definitions in figure 1. The probability functions displayed are summarised in figure 2, where the simplified Poisson is used in the noise and informativeness graphs.

Figure 1: Noise and Informativeness

Figure 2: Probability functions

The frequency-based noise corresponds to the linear solid curve in the noise figure. With an independence assumption, we obtain the curve in the lower triangle of the noise figure. By changing the parameter p := λ/N of the independence probability, we can lift or lower the independence curve. The noise figure shows the lifting for the value λ := ln N ≈ 9.2. The setting λ = ln N is special in the sense that the frequency-based and the Poisson-based informativeness have the same denominator, namely ln N, and the Poisson sum converges to λ. Whether we can draw more conclusions from this setting is an open question. We can conclude that the lifting is desirable if we know for a collection that terms that occur in relatively few documents are no guarantee for finding relevant documents, i.e. we assume that rare terms are still relatively noisy. Conversely, we could lower the curve when assuming that frequent terms are not too noisy, i.e. they are considered as being still significantly discriminative.

The Poisson probabilities approximate the independence probabilities for large n(t); the approximation is better for larger λ. For n(t) < λ, the noise is zero, whereas for n(t) > λ the noise is one. This radical behaviour can be smoothened by using a multi-dimensional Poisson distribution. Figure 1 shows a Poisson noise based on a two-dimensional Poisson, a mixture of two Poisson sums with parameters λ1 and λ2 weighted by π and 1 − π. The two-dimensional Poisson shows a plateau between λ1 = 1000 and λ2 = 2000; we used here π = 0.5. The idea behind this setting is that terms that occur in less than 1000 documents are considered to be not noisy (i.e. they are informative), that terms between 1000 and 2000 are half noisy, and that terms with more than 2000 are definitely noisy. For the informativeness, we observe that the radical behaviour of Poisson is preserved. The plateau here is approximately at 1/6, and it is important to realise that this plateau is not obtained with the multi-dimensional Poisson noise using π = 0.5. The logarithm of the noise is normalised by the logarithm of a very small number, namely 0.5 · e^(−1000) + 0.5 · e^(−2000). That is why the informativeness will be only close to one for very little noise, whereas for a bit of noise, informativeness will drop to zero. This effect can be controlled by using small values for π such that the noise in the interval [λ1; λ2] is still very little. The setting π = e^(−2000/6) leads to noise values of approximately e^(−2000/6) in the interval [λ1; λ2]; the logarithms then lead to 1/6 for the informativeness. The independence-based and frequency-based informativeness functions do not differ as much as the noise functions
do.\nHowever, for the indepence-based probability of being informative, we can control the average informativeness by the definition p: = \u03bb \/ N whereas the control on the frequencybased is limited as we address next.\nFor the frequency-based idf, the gradient is monotonously decreasing and we obtain for different collections the same distances of idf - values, i. e. the parameter N does not affect the distance.\nFor an illustration, consider the distance between the value idf (tn +1) of a term tn +1 that occurs in n +1 documents, and the value idf (tn) of a term tn that occurs in n documents.\nThe first three values of the distance function are:\nFor the Poisson-based informativeness, the gradient decreases first slowly for small n (t), then rapidly near n (t) \u2248 \u03bb and then it grows again slowly for large n (t).\nIn conclusion, we have seen that the Poisson-based definition provides more control and parameter possibilities than\nthe frequency-based definition does.\nWhereas more control and parameter promises to be positive for the personalisation of retrieval systems, it bears at the same time the danger of just too many parameters.\nThe framework presented in this paper raises the awareness about the probabilistic and information-theoretic meanings of the parameters.\nThe parallel definitions of the frequency-based probability and the Poisson-based probability of being informative made the underlying assumptions explicit.\nThe frequency-based probability can be explained by binary occurrence, constant containment and disjointness of documents.\nIndependence of documents leads to Poisson, where we have to be aware that Poisson approximates the probability of a disjunction for a large number of events, but not for a small number.\nThis theoretical result explains why experimental investigations on Poisson (see [7]) show that a Poisson estimation does work better for frequent (bad, noisy) terms than for rare (good, informative) terms.\nIn addition to 
the collection-wide parameter setting, the framework presented here allows for document-dependent settings, as explained for the independence probability.\nThis is particularly interesting for heterogeneous and structured collections, since documents are different in nature (size, quality, root document, sub document), and therefore, binary occurrence and constant containment are less appropriate than in relatively homogeneous collections.\n7.\nSUMMARY\nThe definition of the probability of being informative transforms the informative interpretation of the idf into a probabilistic interpretation, and we can use the idf-based probability in probabilistic retrieval approaches.\nWe showed that the classical definition of the noise (document frequency) in the inverse document frequency can be explained by three assumptions: the term within-document occurrence probability is binary, the document containment probability is constant, and the document containment events are disjoint.\nBy explicitly and mathematically formulating the assumptions, we showed that the classical definition of idf does not take into account parameters such as the different nature (size, quality, structure, etc.) of documents in a collection, or the different nature of terms (coverage, importance, position, etc.)
in a document.\nWe discussed that the absence of those parameters is compensated by a leverage effect of the within-document term occurrence probability and the document containment probability.\nBy applying an independence rather than a disjointness assumption for the document containment, we could establish a link between the noise probability (term occurrence in a collection), information theory and Poisson.\nFrom the frequency-based and the Poisson-based probabilities of being noisy, we derived the frequency-based and Poisson-based probabilities of being informative.\nThe frequency-based probability is relatively smooth whereas the Poisson probability is radical in distinguishing between noisy or not noisy, and informative or not informative, respectively.\nWe showed how to smooth the radical behaviour of Poisson with a multidimensional Poisson.\nThe explicit and mathematical formulation of idf- and Poisson-assumptions is the main result of this paper.\nAlso, the paper emphasises the duality of idf and tf, collection space and document space, respectively.\nThus, the result applies to term occurrence and document containment in a collection, and it applies to term occurrence and position containment in a document.\nThis theoretical framework is useful for understanding and deciding the parameter estimation and combination in probabilistic retrieval models.\nThe links between independence-based noise as document frequency, probabilistic interpretation of idf, information theory and Poisson described in this paper may lead to variable probabilistic idf and tf definitions and combinations as required in advanced and personalised information retrieval systems.\nAcknowledgment: I would like to thank Mounia Lalmas, Gabriella Kazai and Theodora Tsikrika for their comments on the, as they said, \"heavy\" pieces.\nMy thanks also go to the meta-reviewer who advised me to improve the presentation to make it less \"formidable\" and more accessible for those \"without a theoretic
bent\".\nThis work was funded by a research fellowship from Queen Mary University of London.","keyphrases":["inform","invers document frequenc","invers document frequenc","idf","probabl function","frequenc-base probabl","poisson-base probabl","document disjoint","nois probabl","inform retriev","probabl theori","collect space","probabilist inform retriev","poisson distribut","inform theori","independ assumpt"],"prmu":["P","P","P","P","P","M","M","R","M","R","M","U","R","U","M","R"]} {"id":"J-59","title":"Cost Sharing in a Job Scheduling Problem Using the Shapley Value","abstract":"A set of jobs need to be served by a single server which can serve only one job at a time. Jobs have processing times and incur waiting costs (linear in their waiting time). The jobs share their costs through compensation using monetary transfers. We characterize the Shapley value rule for this model using fairness axioms. Our axioms include a bound on the cost share of jobs in a group, efficiency, and some independence properties on the the cost share of a job.","lvl-1":"Cost Sharing in a Job Scheduling Problem Using the Shapley Value Debasis Mishra Center for Operations Research and Econometrics (CORE) Universit\u00b4e Catholique de Louvain Louvain la Neuve, Belgium mishra@core.ucl.ac.be Bharath Rangarajan Center for Operations Research and Econometrics (CORE) Universit\u00b4e Catholique de Louvain Louvain la Neuve, Belgium rangarajan@core.ucl.ac.be ABSTRACT A set of jobs need to be served by a single server which can serve only one job at a time.\nJobs have processing times and incur waiting costs (linear in their waiting time).\nThe jobs share their costs through compensation using monetary transfers.\nWe characterize the Shapley value rule for this model using fairness axioms.\nOur axioms include a bound on the cost share of jobs in a group, efficiency, and some independence properties on the the cost share of a job.\nCategories and Subject Descriptors J.4 [Social and Behaviorial 
Sciences]: Economics General Terms Economics, Theory 1.\nINTRODUCTION A set of jobs need to be served by a server.\nThe server can process only one job at a time.\nEach job has a finite processing time and a per unit time waiting cost.\nEfficient ordering of this queue directs us to serve the jobs in decreasing order of the ratio of per unit time waiting cost and processing time.\nTo compensate for waiting by jobs, monetary transfers to jobs are allowed.\nHow should the jobs share the cost equitably amongst themselves (through transfers)?\nThe problem of fair division of costs among agents in a queue has many practical applications.\nFor example, computer programs are regularly scheduled on servers, data are scheduled to be transmitted over networks, jobs are scheduled on shop-floor machines, and queues appear in many public services (post offices, banks).\nThe study of queueing problems has attracted economists for a long time [7, 17].\nCost sharing is a fundamental problem in many settings on the Internet.\nThe Internet can be seen as a common resource shared by many users, and the cost incurred by using the resource needs to be shared in an equitable manner.\nThe current surge in the cost sharing literature from computer scientists validates this claim [8, 11, 12, 6, 24].\nThe Internet has many settings in which our model of job scheduling appears and the agents waiting in a queue incur costs (jobs scheduled on servers, queries answered from a database, data scheduled to be transmitted over a fixed bandwidth network, etc.).\nWe hope that our analysis will give new insights on cost sharing problems of this nature.\nRecently, there has been increased interest in cost sharing methods with submodular cost functions [11, 12, 6, 24].\nWhile many settings do have submodular cost functions (for example, multi-cast transmission games [8]), the cost function of our game is supermodular.\nAlso, such literature typically does not assume budget-balance (transfers adding up to zero),
while it is an inherent feature of our model.\nA recent paper by Maniquet [15] is the closest to our model and is the motivation behind our work 1 .\nManiquet [15] studies a model where he assumes all processing times are unity.\nFor such a model, he characterizes the Shapley value rule using classical fairness axioms.\nChun [1] interprets the worth of a coalition of jobs in a different manner for the same model and derives a reverse rule.\nChun characterizes this rule using similar fairness axioms.\nChun [2] also studies the envy properties of these rules.\nMoulin [22, 21] studies the queueing problem from a strategic point of view when per unit waiting costs are unity.\nMoulin introduces new concepts in the queueing settings such as splitting and merging of jobs, and ways to prevent them.\nAnother stream of literature is on sequencing games, first introduced by Curiel et al. [4].\nFor a detailed survey, refer to Curiel et al. [3].\nCuriel et al. [4] defined sequencing games similar to our model, but in which an initial ordering of jobs is given.\nBesides, their notion of worth of a coalition is very different from the notions studied in Maniquet [15] and Chun [1] (these are the notions used in our work too).\nThe particular notion of the worth of a coalition makes the sequencing game of Curiel et al. [4] convex, whereas our game is not convex and does not assume the presence of any initial order.\nIn summary, the focus of this stream of 1 The authors thank Fran\u00e7ois Maniquet for several fruitful discussions.\nresearch is how to share the savings in costs from the initial ordering to the optimal ordering amongst jobs (also see Hamers et al. [9], Curiel et al.
[5]).\nRecently, Klijn and S\u00e1nchez [13, 14] considered sequencing games without any initial ordering of jobs.\nThey take two approaches to define the worth of coalitions.\nOne of their approaches, called the tail game, is related to the reverse rule of Chun [1].\nIn the tail game, jobs in a coalition are served after the jobs not in the coalition are served.\nKlijn and S\u00e1nchez [14] showed that the tail game is balanced.\nFurther, they provide expressions for the Shapley value in the tail game in terms of marginal vectors and reversed marginal vectors.\nWe provide a simpler expression of the Shapley value in the tail game, generalizing the result in Chun [1].\nKlijn and S\u00e1nchez [13] study the core of this game in detail.\nStrategic aspects of queueing problems have also been researched.\nMitra [19] studies the first best implementation in queueing models with generic cost functions.\nFirst best implementation means that there exists an efficient mechanism in which jobs in the queue have a dominant strategy to reveal their true types and their transfers add up to zero.\nSuijs [27] shows that if waiting costs of jobs are linear then first best implementation is possible.\nMitra [19] shows that among a more general class of queueing problems first best implementation is possible if and only if the cost is linear.\nFor another queueing model, Mitra [18] shows that first best implementation is possible if and only if the cost function satisfies a combinatorial property and an independence property.\nMoulin [22, 21] studies strategic concepts such as splitting and merging in queueing problems where per unit waiting costs are unity.\nThe general cost sharing literature is vast and has a long history.\nFor a good survey, we refer to [20].\nFrom the seminal work of Shapley [25] to recent works on cost sharing in multi-cast transmission and optimization problems [8, 6, 23], this area has attracted economists, computer scientists, and operations researchers.\n1.1 Our
Contribution Ours is the first model which considers cost sharing when both the processing time and the per unit waiting cost of jobs are present.\nWe take a cooperative game theory approach and apply the classical Shapley value rule to the problem.\nWe show that the Shapley value rule satisfies many intuitive fairness axioms.\nDue to the two-dimensional nature of our model and the one-dimensional nature of Maniquet's model [15], his axioms are insufficient to characterize the Shapley value in our setting.\nWe introduce axioms such as independence of preceding jobs' unit waiting cost and independence of following jobs' processing time.\nA key axiom that we introduce gives us a bound on the cost share of a job in a group of jobs which have the same ratio of unit time waiting cost and processing time (these jobs can be ordered in any manner between themselves in an efficient ordering).\nIf such a group consists of just one job, then the axiom says that such a job should at least pay its own processing cost (i.e., the cost it would have incurred if it were the only job in the queue).\nIf there are multiple jobs in such a group, the probability of any two jobs from such a group inflicting costs on each other is the same (1\/2) in an efficient ordering.\nDepending on the ordering selected, one job inflicts cost on the other.\nOur fairness axiom says that each job should at least bear such expected costs.\nWe characterize the Shapley value rule using these fairness axioms.\nWe also extend the envy results in [2] to our setting and discuss a class of reasonable cost sharing mechanisms.\n2.\nTHE MODEL There are n jobs that need to be served by one server which can process only one job at a time.\nThe set of jobs is denoted as N = {1, ...
, n}.\n\u03c3 : N \u2192 N is an ordering of jobs in N and \u03c3i denotes the position of job i in the ordering \u03c3.\nGiven an ordering \u03c3, define Fi(\u03c3) = {j \u2208 N : \u03c3i < \u03c3j} and Pi(\u03c3) = {j \u2208 N : \u03c3i > \u03c3j}.\nEvery job i is identified by two parameters: (pi, \u03b8i).\npi is the processing time and \u03b8i is the cost per unit waiting time of job i. Thus, a queueing problem is defined by a list q = (N, p, \u03b8) \u2208 Q, where Q is the set of all possible lists.\nWe will denote \u03b3i = \u03b8i\/pi.\nGiven an ordering of jobs \u03c3, the cost incurred by job i is given by ci(\u03c3) = pi\u03b8i + \u03b8i \u2211j\u2208Pi(\u03c3) pj.\nThe total cost incurred by all jobs due to an ordering \u03c3 can be written in two ways: (i) by summing the cost incurred by every job and (ii) by summing the costs inflicted by a job on other jobs together with their own processing costs.\nC(N, \u03c3) = \u2211i\u2208N ci(\u03c3) = \u2211i\u2208N pi\u03b8i + \u2211i\u2208N (\u03b8i \u2211j\u2208Pi(\u03c3) pj) = \u2211i\u2208N pi\u03b8i + \u2211i\u2208N (pi \u2211j\u2208Fi(\u03c3) \u03b8j).\nAn efficient ordering \u03c3\u2217 is the one which minimizes the total cost incurred by all jobs.\nSo, C(N, \u03c3\u2217) \u2264 C(N, \u03c3) \u2200 \u03c3 \u2208 \u03a3.\nTo achieve notational simplicity, we will write the total cost in an efficient ordering of jobs from N as C(N) whenever it is not confusing.\nSometimes, we will deal with only a subset of jobs S \u2286 N.\nThe ordering \u03c3 will then be defined on jobs in S only and we will write the total cost from an efficient ordering of jobs in S as C(S).\nThe following lemma shows that jobs are ordered in decreasing \u03b3 in an efficient ordering.\nThis is also known as the weighted shortest processing time rule, first introduced by Smith [26].\nLemma 1.\nFor any S \u2286 N, let \u03c3\u2217 be an efficient ordering of jobs in S.
For every i \u2260 j, i, j \u2208 S, if \u03c3\u2217i > \u03c3\u2217j, then \u03b3i \u2264 \u03b3j.\nProof.\nAssume for contradiction that the statement of the lemma is not true.\nThis means, we can find two consecutive jobs i, j \u2208 S (\u03c3\u2217i = \u03c3\u2217j + 1) such that \u03b3i > \u03b3j.\nDefine a new ordering \u03c3 by interchanging i and j in \u03c3\u2217.\nThe costs to jobs in S \\ {i, j} are not changed from \u03c3\u2217 to \u03c3.\nThe difference between total costs in \u03c3\u2217 and \u03c3 is given by, C(S, \u03c3) \u2212 C(S, \u03c3\u2217) = \u03b8jpi \u2212 \u03b8ipj.\nFrom efficiency we get \u03b8jpi \u2212 \u03b8ipj \u2265 0.\nThis gives us \u03b3j \u2265 \u03b3i, which is a contradiction.\nAn allocation for q = (N, p, \u03b8) \u2208 Q has two components: an ordering \u03c3 and a transfer ti for every job i \u2208 N. ti denotes the payment received by job i. Given a transfer ti and an ordering \u03c3, the cost share of job i is defined as, \u03c0i = ci(\u03c3) \u2212 ti = \u03b8i \u2211j\u2208N:\u03c3j\u2264\u03c3i pj \u2212 ti.\nAn allocation (\u03c3, t) is efficient for q = (N, p, \u03b8) whenever \u03c3 is an efficient ordering and \u2211i\u2208N ti = 0.\nThe set of efficient orderings of q is denoted as \u03a3\u2217(q) and \u03c3\u2217(q) will be used to refer to a typical element of the set.\nThe following straightforward lemma says that for two different efficient orderings, the cost share in one efficient allocation is possible to achieve in the other by appropriately modifying the transfers.\nLemma 2.\nLet (\u03c3, t) be an efficient allocation and \u03c0 be the vector of cost shares of jobs from this allocation.\nIf \u03c3\u2217 \u2260 \u03c3 is an efficient ordering and t\u2217i = ci(\u03c3\u2217) \u2212 \u03c0i \u2200 i \u2208 N, then (\u03c3\u2217, t\u2217) is also an efficient allocation.\nProof.\nSince (\u03c3, t) is efficient, \u2211i\u2208N ti = 0.\nThis gives \u2211i\u2208N \u03c0i = C(N).\nSince \u03c3\u2217 is an efficient ordering,
\u2211i\u2208N ci(\u03c3\u2217) = C(N).\nThis means, \u2211i\u2208N t\u2217i = \u2211i\u2208N [ci(\u03c3\u2217) \u2212 \u03c0i] = 0.\nSo, (\u03c3\u2217, t\u2217) is an efficient allocation.\nDepending on the transfers, the cost shares in different efficient allocations may differ.\nAn allocation rule \u03c8 associates with every q \u2208 Q a non-empty subset \u03c8(q) of allocations.\n3.\nCOST SHARING USING THE SHAPLEY VALUE In this section, we define the coalitional cost of this game and analyze the solution proposed by the Shapley value.\nGiven a queue q \u2208 Q, the cost of a coalition of S \u2286 N jobs in the queue is defined as the cost incurred by jobs in S if these are the only jobs served in the queue using an efficient ordering.\nFormally, the cost of a coalition S \u2286 N is, C(S) = \u2211i\u2208S \u2211j\u2208S:\u03c3\u2217j\u2264\u03c3\u2217i \u03b8ipj, where \u03c3\u2217 = \u03c3\u2217(S) is an efficient ordering considering jobs from S only.\nThe worth of a coalition of S jobs is just \u2212C(S).\nManiquet [15] observes that another equivalent way to define the worth of a coalition is using the dual function of the cost function C(\u00b7).\nOther interesting ways to define the worth of a coalition in such games are discussed by Chun [1], who assumes that a coalition of jobs is served after the jobs not in the coalition are served.\nThe Shapley value (or cost share) of a job i is defined as, SVi = \u2211S\u2286N\\{i} [|S|! (|N| \u2212 |S| \u2212 1)! \/ |N|!] (C(S \u222a {i}) \u2212 C(S)).\n(1) The Shapley value allocation rule says that jobs are ordered using an efficient ordering and transfers are assigned to jobs such that the cost share of job i is given by Equation 1.\nLemma 3.\nLet \u03c3\u2217 be an efficient ordering of jobs in set N.
For all i \u2208 N, the Shapley value is given by, SVi = pi\u03b8i + (1\/2)(Li + Ri), where Li = \u03b8i \u2211j\u2208Pi(\u03c3\u2217) pj and Ri = pi \u2211j\u2208Fi(\u03c3\u2217) \u03b8j.\nProof.\nAnother way to write the Shapley value formula is the following [10], SVi = \u2211S\u2286N:i\u2208S \u2206(S)\/|S|, where \u2206(S) = C(S) if |S| = 1 and \u2206(S) = C(S) \u2212 \u2211T\u228aS \u2206(T).\nThis gives \u2206({i}) = C({i}) = pi\u03b8i \u2200 i \u2208 N. For any i, j \u2208 N with i \u2260 j, we have \u2206({i, j}) = C({i, j}) \u2212 C({i}) \u2212 C({j}) = min(pi\u03b8i + pj\u03b8j + pj\u03b8i, pi\u03b8i + pj\u03b8j + pi\u03b8j) \u2212 pi\u03b8i \u2212 pj\u03b8j = min(pj\u03b8i, pi\u03b8j).\nWe will show by induction that \u2206(S) = 0 if |S| > 2.\nFor |S| = 3, let S = {i, j, k}.\nWithout loss of generality, assume \u03b8i\/pi \u2265 \u03b8j\/pj \u2265 \u03b8k\/pk.\nSo, \u2206(S) = C(S) \u2212 \u2206({i, j}) \u2212 \u2206({j, k}) \u2212 \u2206({i, k}) \u2212 \u2206({i}) \u2212 \u2206({j}) \u2212 \u2206({k}) = C(S) \u2212 pi\u03b8j \u2212 pj\u03b8k \u2212 pi\u03b8k \u2212 pi\u03b8i \u2212 pj\u03b8j \u2212 pk\u03b8k = C(S) \u2212 C(S) = 0.\nNow, assume for T \u228a S, \u2206(T) = 0 if |T| > 2.\nWithout loss of generality, assume \u03c3 to be the identity mapping.\nNow, \u2206(S) = C(S) \u2212 \u2211T\u228aS \u2206(T) = C(S) \u2212 \u2211i\u2208S \u2211j\u2208S:j<i \u2206({i, j}) \u2212 \u2211i\u2208S \u2206({i}) = C(S) \u2212 \u2211i\u2208S \u2211j\u2208S:j<i pj\u03b8i \u2212 \u2211i\u2208S pi\u03b8i = C(S) \u2212 C(S) = 0.\nThis proves that \u2206(S) = 0 if |S| > 2.\nUsing the Shapley value formula now, SVi = \u2211S\u2286N:i\u2208S \u2206(S)\/|S| = \u2206({i}) + (1\/2) \u2211j\u2208N:j\u2260i \u2206({i, j}) = pi\u03b8i + (1\/2)(\u2211j<i \u2206({i, j}) + \u2211j>i \u2206({i, j})) = pi\u03b8i + (1\/2)(\u2211j<i pj\u03b8i + \u2211j>i pi\u03b8j) = pi\u03b8i + (1\/2)(Li + Ri).\n4.\nAXIOMATIC CHARACTERIZATION OF THE SHAPLEY VALUE In this section, we will define several axioms on fairness and characterize the Shapley value using them.\nFor a given q \u2208 Q, we will denote
\u03c8(q) as the set of allocations from the allocation rule \u03c8.\nAlso, we will denote the cost share vector associated with an allocation (\u03c3, t) as \u03c0 and that with allocation (\u03c3', t') as \u03c0', etc. 4.1 The Fairness Axioms We will define three types of fairness axioms: (i) related to efficiency, (ii) related to equity, and (iii) related to independence.\nEfficiency Axioms We define two types of efficiency axioms.\nThe first is related to efficiency and states that an efficient ordering should be selected and the transfers of jobs should add up to zero (budget balance).\nDefinition 1.\nAn allocation rule \u03c8 satisfies efficiency if for every q \u2208 Q and (\u03c3, t) \u2208 \u03c8(q), (\u03c3, t) is an efficient allocation.\nThe second axiom related to efficiency says that the allocation rule should not discriminate between two allocations which are equivalent to each other in terms of cost shares of jobs.\nDefinition 2.\nAn allocation rule \u03c8 satisfies Pareto indifference if for every q \u2208 Q, (\u03c3, t) \u2208 \u03c8(q), and (\u03c3', t') \u2208 \u03a3(q), we have (\u03c0i = \u03c0'i \u2200 i \u2208 N) \u21d2 ((\u03c3', t') \u2208 \u03c8(q)).\nAn implication of the Pareto indifference axiom and Lemma 2 is that for every efficient ordering there is some set of transfers of jobs such that it is part of an efficient rule and the cost share of a job in all these allocations is the same.\nEquity Axioms How should the cost be shared between two jobs if the jobs have some kind of similarity between them?\nEquity axioms provide us with fairness properties which help us answer this question.\nWe provide five such axioms.\nSome of these axioms (for example anonymity, equal treatment of equals) are standard in the literature, while some are new.\nWe start with a well known equity axiom called anonymity.\nDenote \u03c1 : N \u2192 N as a permutation of elements in N.
Let \u03c1(\u03c3, t) denote the allocation obtained by permuting elements in \u03c3 and t according to \u03c1.\nSimilarly, let \u03c1(p, \u03b8) denote the new list of (p, \u03b8) obtained by permuting elements of p and \u03b8 according to \u03c1.\nOur first equity axiom states that allocation rules should be immune to such permutation of data.\nDefinition 3.\nAn allocation rule \u03c8 satisfies anonymity if for all q \u2208 Q, (\u03c3, t) \u2208 \u03c8(q) and every permutation \u03c1, then \u03c1(\u03c3, t) \u2208 \u03c8(N, \u03c1(q)).\nThe next equity axiom is classical in the literature and says that two similar jobs should be compensated such that their cost shares are equal.\nThis implies that if all the jobs are of the same type, then jobs should equally share the total system cost.\nDefinition 4.\nAn allocation rule \u03c8 satisfies equal treatment of equals (ETE) if for all q \u2208 Q, (\u03c3, t) \u2208 \u03c8(q), i, j \u2208 N, then (pi = pj; \u03b8i = \u03b8j) \u21d2 (\u03c0i = \u03c0j).\nETE directs us to share costs equally between jobs if they are of the same per unit waiting cost and processing time.\nBut it is silent about the cost shares of two jobs i and j which satisfy \u03b8i\/pi = \u03b8j\/pj.\nWe introduce a new axiom for this.\nIf an efficient rule chooses \u03c3 such that \u03c3i < \u03c3j for some i, j \u2208 N, then job i is inflicting a cost of pi\u03b8j on job j and job j is inflicting zero cost on job i.
Define for some \u03b3 \u2265 0, S(\u03b3) = {i \u2208 N : \u03b3i = \u03b3}.\nIn an efficient rule, the elements in S(\u03b3) can be ordered in any manner (in |S(\u03b3)|! ways).\nIf i, j \u2208 S(\u03b3) then we have pj\u03b8i = pi\u03b8j.\nThe probability of \u03c3i < \u03c3j is 1\/2 and so is the probability of \u03c3i > \u03c3j.\nThe expected cost i inflicts on j is (1\/2)pi\u03b8j and j inflicts on i is (1\/2)pj\u03b8i.\nOur next fairness axiom says that i and j should each be responsible for their own processing cost and this expected cost they inflict on each other.\nArguing for every pair of jobs i, j \u2208 S(\u03b3), we establish a bound on the cost share of jobs in S(\u03b3).\nWe impose this as an equity axiom below.\nDefinition 5.\nAn allocation rule satisfies expected cost bound (ECB) if for all q \u2208 Q, (\u03c3, t) \u2208 \u03c8(q) with \u03c0 being the resulting cost share, for any \u03b3 \u2265 0, and for every i \u2208 S(\u03b3), we have \u03c0i \u2265 pi\u03b8i + (1\/2)(\u2211j\u2208S(\u03b3):\u03c3j<\u03c3i pj\u03b8i + \u2211j\u2208S(\u03b3):\u03c3j>\u03c3i pi\u03b8j).\nThe central idea behind this axiom is that of expected cost inflicted.\nIf an allocation rule chooses multiple allocations, we can assign equal probabilities of selecting one of the allocations.\nIn that case, the expected cost inflicted by a job i on another job j in the allocation rule can be calculated.\nOur axiom says that the cost share of a job should be at least its own processing cost plus the total expected cost it inflicts on others.\nNote that the above bound poses no constraints on how the costs are shared among different groups.\nAlso observe that if S(\u03b3) contains just one job, ECB says that the job should at least bear its own processing cost.\nA direct consequence of ECB is the following lemma.\nLemma 4.\nLet \u03c8 be an efficient rule which satisfies ECB.\nFor a q \u2208 Q, if S(\u03b3) = N, then for any (\u03c3, t) \u2208 \u03c8(q) which gives a cost share of \u03c0,
\u03c0i = pi\u03b8i + (1\/2)(Li + Ri) \u2200 i \u2208 N.\nProof.\nFrom ECB, we get \u03c0i \u2265 pi\u03b8i + (1\/2)(Li + Ri) \u2200 i \u2208 N. Assume for contradiction that there exists j \u2208 N such that \u03c0j > pj\u03b8j + (1\/2)(Lj + Rj).\nUsing efficiency and the fact that \u2211i\u2208N Li = \u2211i\u2208N Ri, we get \u2211i\u2208N \u03c0i = C(N) > \u2211i\u2208N pi\u03b8i + (1\/2) \u2211i\u2208N (Li + Ri) = C(N).\nThis gives us a contradiction.\nNext, we introduce an axiom about sharing the transfer of a job between a set of jobs.\nIn particular, if the last job quits the system, then the ordering need not change.\nBut the transfer to the last job needs to be shared between the other jobs.\nThis should be done in proportion to their processing times because every job influenced the last job based on its processing time.\nDefinition 6.\nAn allocation rule \u03c8 satisfies proportionate responsibility of p (PRp) if for all q \u2208 Q, for all (\u03c3, t) \u2208 \u03c8(q), k \u2208 N such that \u03c3k = |N|, q' = (N \\ {k}, p', \u03b8') \u2208 Q, such that for all i \u2208 N \\ {k}: \u03b8'i = \u03b8i, p'i = pi, there exists (\u03c3', t') \u2208 \u03c8(q') such that for all i \u2208 N \\ {k}: \u03c3'i = \u03c3i and t'i = ti + tk pi \/ \u2211j\u2260k pj.\nAn analogous fairness axiom results if we remove the job from the beginning of the queue.\nSince the presence of the first job influenced each job depending on their \u03b8 values, its transfer needs to be shared in proportion to \u03b8 values.\nDefinition 7.\nAn allocation rule \u03c8 satisfies proportionate responsibility of \u03b8 (PR\u03b8) if for all q \u2208 Q, for all (\u03c3, t) \u2208 \u03c8(q), k \u2208 N such that \u03c3k = 1, q' = (N \\ {k}, p', \u03b8') \u2208 Q, such that for all i \u2208 N \\ {k}: \u03b8'i = \u03b8i, p'i = pi, there exists (\u03c3', t') \u2208 \u03c8(q') such that for all i \u2208 N \\ {k}: \u03c3'i = \u03c3i and t'i = ti + tk \u03b8i \/ \u2211j\u2260k \u03b8j.\nThe proportionate responsibility axioms are
generalizations of the equal responsibility axioms introduced by Maniquet [15].\nIndependence Axioms The waiting cost of a job does not depend on the per unit waiting cost of its preceding jobs.\nSimilarly, the waiting cost inflicted by a job on its following jobs is independent of the processing times of the following jobs.\nThese independence properties should be carried over to the cost sharing rules.\nThis gives us two independence axioms.\nDefinition 8.\nAn allocation rule \u03c8 satisfies independence of preceding jobs' \u03b8 (IPJ\u03b8) if for all q = (N, p, \u03b8), q' = (N, p', \u03b8') \u2208 Q, (\u03c3, t) \u2208 \u03c8(q), (\u03c3', t') \u2208 \u03c8(q'), if for all i \u2208 N \\ {k}: \u03b8'i = \u03b8i, p'i = pi and \u03b3'k < \u03b3k, p'k = pk, then for all j \u2208 N such that \u03c3j > \u03c3k: \u03c0'j = \u03c0j, where \u03c0 is the cost share in (\u03c3, t) and \u03c0' is the cost share in (\u03c3', t').\nDefinition 9.\nAn allocation rule \u03c8 satisfies independence of following jobs' p (IFJp) if for all q = (N, p, \u03b8), q' = (N, p', \u03b8') \u2208 Q, (\u03c3, t) \u2208 \u03c8(q), (\u03c3', t') \u2208 \u03c8(q'), if for all i \u2208 N \\ {k}: \u03b8'i = \u03b8i, p'i = pi and \u03b3'k > \u03b3k, \u03b8'k = \u03b8k, then for all j \u2208 N such that \u03c3j < \u03c3k: \u03c0'j = \u03c0j, where \u03c0 is the cost share in (\u03c3, t) and \u03c0' is the cost share in (\u03c3', t').\n4.2 The Characterization Results Having stated the fairness axioms, we propose three different ways to characterize the Shapley value rule using these axioms.\nAll our characterizations involve efficiency and ECB.\nBut if we have IPJ\u03b8, we either need IFJp or PRp.\nSimilarly, if we have IFJp, we either need IPJ\u03b8 or PR\u03b8.\nProposition 1.\nAny efficient rule \u03c8 that satisfies ECB, IPJ\u03b8, and IFJp is a rule implied by the Shapley value rule.\nProof.\nDefine for any i, j \u2208 N, \u03b8i j = \u03b3ipj and pi j = \u03b8j\/\u03b3i.\nAssume without loss of
generality that \u03c3 is an efficient ordering with \u03c3i = i \u2200 i \u2208 N. Consider the following q' = (N, p', \u03b8') corresponding to job i with p'j = pj if j \u2264 i and p'j = pi j if j > i, \u03b8'j = \u03b8i j if j < i and \u03b8'j = \u03b8j if j \u2265 i. Observe that all jobs have the same \u03b3: \u03b3i.\nBy Lemma 2 and efficiency, (\u03c3, t') \u2208 \u03c8(q') for some set of transfers t'.\nUsing Lemma 4, we get the cost share of i from (\u03c3, t') as \u03c0i = pi\u03b8i + (1\/2)(Li + Ri).\nNow, for any j < i, if we change \u03b8'j to \u03b8j without changing the processing time, the new \u03b3 of j is \u03b3j \u2265 \u03b3i.\nApplying IPJ\u03b8, the cost share of job i should not change.\nSimilarly, for any job j > i, if we change p'j to pj without changing \u03b8j, the new \u03b3 of j is \u03b3j \u2264 \u03b3i.\nApplying IFJp, the cost share of job i should not change.\nApplying this procedure for every j < i with IPJ\u03b8 and for every j > i with IFJp, we reach q = (N, p, \u03b8) and the payoff of i does not change from \u03c0i.\nUsing this argument for every i \u2208 N and using the expression for the Shapley value in Lemma 3, we get the Shapley value rule.\nIt is possible to replace one of the independence axioms with an equity axiom on sharing the transfer of a job.\nThis is shown in Propositions 2 and 3.\nProposition 2.\nAny efficient rule \u03c8 that satisfies ECB, IPJ\u03b8, and PRp is a rule implied by the Shapley value rule.\nProof.\nAs in the proof of Proposition 1, define \u03b8i j = \u03b3ipj \u2200 i, j \u2208 N. Assume without loss of generality that \u03c3 is an efficient ordering with \u03c3i = i \u2200 i \u2208 N. Consider a queue with jobs in set K = {1, ... , i, i + 1}, where i < n. Define q' = (K, p, \u03b8'), where \u03b8'j = \u03b8i+1 j \u2200 j \u2208 K. Define \u03c3'j = \u03c3j \u2200 j \u2208 K.
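The closed form of Lemma 3, which drives these characterization proofs, can be checked against the definition in Equation 1. Below is a minimal Python sketch (not part of the paper; the job data (p, θ) are hypothetical) that computes the Shapley value both by the brute-force formula over coalitions and by the closed form, and asserts that they coincide:

```python
from itertools import combinations
from math import factorial

def C(jobs, S):
    # coalition cost: serve S alone in decreasing theta/p (Lemma 1)
    order = sorted(S, key=lambda i: jobs[i][1] / jobs[i][0], reverse=True)
    cost, elapsed = 0.0, 0.0
    for i in order:
        p, theta = jobs[i]
        elapsed += p
        cost += theta * elapsed  # c_i = theta_i * (own p + preceding p's)
    return cost

def shapley_bruteforce(jobs):
    # Equation 1: weighted marginal contributions over all coalitions
    N, n = list(jobs), len(jobs)
    sv = {}
    for i in N:
        others = [j for j in N if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (C(jobs, S + (i,)) - C(jobs, S))
        sv[i] = total
    return sv

def shapley_closed_form(jobs):
    # Lemma 3: SV_i = p_i*theta_i + (L_i + R_i)/2
    order = sorted(jobs, key=lambda i: jobs[i][1] / jobs[i][0], reverse=True)
    pos = {i: k for k, i in enumerate(order)}
    sv = {}
    for i, (p, th) in jobs.items():
        L = th * sum(jobs[j][0] for j in jobs if pos[j] < pos[i])
        R = p * sum(jobs[j][1] for j in jobs if pos[j] > pos[i])
        sv[i] = p * th + 0.5 * (L + R)
    return sv

jobs = {1: (2.0, 3.0), 2: (1.0, 4.0), 3: (3.0, 1.0)}  # i -> (p_i, theta_i)
bf, cf = shapley_bruteforce(jobs), shapley_closed_form(jobs)
assert all(abs(bf[i] - cf[i]) < 1e-9 for i in jobs)
```

For this instance the efficient order is 2, 1, 3 (decreasing θ/p), and both computations give cost shares that sum to C(N), consistent with budget balance.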
\u03c3' is an efficient ordering for q'.\nBy ECB and Lemma 4, the cost share of job i + 1 in any allocation rule in \u03c8 must be \u03c0i+1 = pi+1\u03b8i+1 + (1\/2) \u2211j<i+1 pj\u03b8i+1.\nNow, consider q'' = (K, p, \u03b8'') such that \u03b8''j = \u03b8i j \u2200 j \u2264 i and \u03b8''i+1 = \u03b8i+1.\n\u03c3' remains an efficient ordering in q'' and by IPJ\u03b8 the cost share of i + 1 remains \u03c0i+1.\nIn (K \\ {i + 1}, p, \u03b8''), we can calculate the cost share of job i using ECB and Lemma 4 as \u03c0i = pi\u03b8i + (1\/2) \u2211j<i pj\u03b8i.\nSo, using PRp we get the new cost share of job i in q'' as \u03c0'i = \u03c0i + ti+1 pi \/ \u2211j<i+1 pj = pi\u03b8i + (1\/2)(\u2211j<i pj\u03b8i + pi\u03b8i+1).\nNow, we can set K = K \u222a {i + 2}.\nAs before, we can find the cost share of i + 2 in this queue as \u03c0i+2 = pi+2\u03b8i+2 + (1\/2) \u2211j<i+2 pj\u03b8i+2.\nUsing PRp we get the new cost share of job i in the new queue as \u03c0i = pi\u03b8i + (1\/2)(\u2211j<i pj\u03b8i + pi\u03b8i+1 + pi\u03b8i+2).\nThis process can be repeated till we add job n, at which point the cost share of i is pi\u03b8i + (1\/2)(\u2211j<i pj\u03b8i + \u2211j>i pi\u03b8j).\nThen, we can adjust the \u03b8 of the preceding jobs of i to their original values and, applying IPJ\u03b8, the payoffs of jobs i through n will not change.\nThis gives us the Shapley values of jobs i through n. Setting i = 1, we get the cost shares of all the jobs from \u03c8 as the Shapley value.\nProposition 3.\nAny efficient rule \u03c8 that satisfies ECB, IFJp, and PR\u03b8 is a rule implied by the Shapley value rule.\nProof.\nThe proof mirrors the proof of Proposition 2.\nWe provide a short sketch.\nAnalogous to the proof of Proposition 2, \u03b8s are kept equal to the original data and processing times are initialized to pi+1 j .\nThis allows us to use IFJp.\nAlso, in contrast to Proposition 2, we consider K = {i, i + 1, ...
, n} and repeatedly add jobs to the beginning of the queue maintaining the same efficient ordering.\nSo, we add the cost components of preceding jobs to the cost share of jobs in each iteration and converge to the Shapley value rule.\nThe next proposition shows that the Shapley value rule satisfies all the fairness axioms discussed.\nProposition 4.\nThe Shapley value rule satisfies efficiency, pareto indifference, anonymity, ETE, ECB, IPJ\u03b8, IFJp, PRp, and PR\u03b8.\nProof.\nThe Shapley value rule chooses an efficient ordering and by definition the payments add upto zero.\nSo, it satisfies efficiency.\nThe Shapley value assigns same cost share to a job irrespective of the efficient ordering chosen.\nSo, it is pareto indifferent.\nThe Shapley value is anonymous because the particular index of a job does not effect his ordering or cost share.\nFor ETE, consider two jobs i, j \u2208 N such that pi = pj and \u03b8i = \u03b8j.\nWithout loss of generality assume the efficient ordering to be 1, ... , i, ... , j, ... , n. 
Now, the Shapley value of job i is

SV_i = p_iθ_i + (1/2)(L_i + R_i)  (from Lemma 3)
     = p_jθ_j + (1/2)(L_j + R_j) − (1/2)(L_i − L_j + R_i − R_j)
     = SV_j − (1/2)(Σ_{i<k≤j} p_iθ_k − Σ_{i≤k<j} p_kθ_i)
     = SV_j − (1/2) Σ_{i<k≤j} (p_iθ_k − p_kθ_i)  (using p_i = p_j and θ_i = θ_j)
     = SV_j  (using θ_k/p_k = θ_i/p_i for all i ≤ k ≤ j).

The Shapley value satisfies ECB by its expression in Lemma 3.

Consider any job i in an efficient ordering σ. If we increase the value of γ_j for some j ≠ i with σ_j < σ_i, then the set P_i (the set of preceding jobs) does not change in the new efficient ordering. If γ_j is changed such that p_j remains the same, then the expression Σ_{j∈P_i} θ_i p_j is unchanged. If the (p, θ) values of no other jobs are changed, then the Shapley value is unchanged by increasing γ_j for some j ∈ P_i while keeping p_j unchanged. Thus, the Shapley value rule satisfies IPJθ. An analogous argument shows that the Shapley value rule satisfies IFJp.

For PRp, assume without loss of generality that jobs are ordered 1, ..., n in an efficient ordering. Denote the transfer of job i ≠ n due to the Shapley value with the set of jobs N and with the set of jobs N \ {n} as t_i and t′_i respectively. The transfer of the last job is t_n = (1/2) θ_n Σ_{j<n} p_j. Now,

t_i = (1/2)(θ_i Σ_{j<i} p_j − p_i Σ_{j>i} θ_j)
    = (1/2)(θ_i Σ_{j<i} p_j − p_i Σ_{j>i, j≠n} θ_j) − (1/2) p_iθ_n
    = t′_i − (1/2)(θ_n Σ_{j<n} p_j)(p_i / Σ_{j<n} p_j)
    = t′_i − t_n p_i / Σ_{j<n} p_j.

A similar argument shows that the Shapley value rule satisfies PRθ.

This series of propositions leads us to our main result.

Theorem 1. Let ψ be an allocation rule. The following statements are equivalent:
1) For each q ∈ Q, ψ(q) selects all the allocations assigning jobs cost shares implied by the Shapley value.
2) ψ satisfies efficiency, ECB, IFJp, and IPJθ.
3) ψ satisfies efficiency, ECB, IFJp, and PRθ.
4) ψ satisfies efficiency, ECB, PRp, and IPJθ.

Proof. The proof follows from Propositions 1, 2, 3, and 4.

5. DISCUSSIONS

5.1 A Reasonable Class of Cost Sharing Mechanisms

In this section, we define a reasonable class of cost sharing mechanisms and show how these reasonable mechanisms lead to the Shapley value mechanism.

Definition 10. An allocation rule ψ is reasonable if for all q ∈ Q and (σ, t) ∈ ψ(q) we have t_i = α(θ_i Σ_{j∈P_i(σ)} p_j − p_i Σ_{j∈F_i(σ)} θ_j) ∀ i ∈ N, where 0 ≤ α ≤ 1.

A reasonable cost sharing mechanism says that every job should be paid a constant fraction of the difference between the waiting cost he incurs and the waiting cost he inflicts on other jobs. If α = 0, then every job bears its own cost. If α = 1, then every job is compensated for its waiting cost but compensates others for the cost he inflicts on them. The Shapley value rule comes as a result of ETE, as shown in the following proposition.

Proposition 5. Any efficient and reasonable allocation rule ψ that satisfies ETE is a rule implied by the Shapley value rule.

Proof. Consider a q ∈ Q in which p_i = p_j and θ_i = θ_j for some i, j ∈ N. Let (σ, t) ∈ ψ(q) and let π be the resulting cost shares. From ETE, we get

π_i = π_j
⇒ c_i(σ) − t_i = c_j(σ) − t_j
⇒ p_iθ_i + (1 − α)L_i + αR_i = p_jθ_j + (1 − α)L_j + αR_j  (since ψ is efficient and reasonable)
⇒ (1 − α)(L_i − L_j) = α(R_j − R_i)  (using p_i = p_j, θ_i = θ_j)
⇒ 1 − α = α  (using L_i − L_j = R_j − R_i ≠ 0, which holds, for instance, when q is chosen with all jobs having the same γ)
⇒ α = 1/2.

This gives us the Shapley value rule by Lemma 3.

5.2 Results on Envy

Chun [2] discusses a fairness condition called no-envy for the case when the processing times of all jobs are unity.

Definition 11. An allocation rule satisfies no-envy if for all q ∈ Q, (σ, t) ∈ ψ(q), and i, j ∈ N, we have π_i ≤ c_i(σ^{ij}) − t_j, where π is the cost share from the allocation rule (σ, t) and σ^{ij} is the ordering obtained by swapping i and j.
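The Shapley shares that conditions such as Definition 11 are evaluated on can be computed directly for small instances. The sketch below is a hypothetical illustration, not from the paper: it assumes the coalition cost C(S) is the minimal total waiting cost when only the jobs in S are served (the convention under which the brute-force Shapley value reproduces the closed form SV_i = p_iθ_i + (1/2)(L_i + R_i) of Lemma 3), and the three-job instance is our own.

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def coalition_cost(S, p, theta):
    # C(S): minimal total waiting cost when only the jobs in S are served,
    # i.e. serve them in decreasing order of gamma = theta/p (Smith's rule).
    order = sorted(S, key=lambda i: Fraction(theta[i], p[i]), reverse=True)
    cost, clock = 0, 0
    for i in order:
        clock += p[i]              # completion time of job i
        cost += theta[i] * clock   # waiting cost incurred by job i
    return cost

def shapley_bruteforce(N, p, theta):
    # Shapley value: average marginal contribution over all n! arrival orders.
    sv = {i: Fraction(0) for i in N}
    for arrival in permutations(N):
        seen = []
        for i in arrival:
            sv[i] += coalition_cost(seen + [i], p, theta) - coalition_cost(seen, p, theta)
            seen.append(i)
    return {i: v / factorial(len(N)) for i, v in sv.items()}

def shapley_closed_form(N, p, theta):
    # SV_i = p_i*theta_i + (1/2)(L_i + R_i), with the preceding set P_i and
    # following set F_i taken in an efficient (decreasing gamma) ordering.
    order = sorted(N, key=lambda i: Fraction(theta[i], p[i]), reverse=True)
    pos = {i: k for k, i in enumerate(order)}
    sv = {}
    for i in N:
        L = theta[i] * sum(p[j] for j in N if pos[j] < pos[i])   # cost i suffers
        R = p[i] * sum(theta[j] for j in N if pos[j] > pos[i])   # cost i inflicts
        sv[i] = p[i] * theta[i] + Fraction(L + R, 2)
    return sv

# Hypothetical instance: gammas are 2, 1, 2/3, so the efficient order is 0, 1, 2.
N = [0, 1, 2]
p = {0: 2, 1: 1, 2: 3}        # processing times
theta = {0: 4, 1: 1, 2: 2}    # per-unit-time waiting costs

bf, cf = shapley_bruteforce(N, p, theta), shapley_closed_form(N, p, theta)
assert bf == cf == {0: 11, 1: 3, 2: 9}
# Budget balance: the shares sum to the efficient total cost C(N).
assert sum(cf.values()) == coalition_cost(N, p, theta)
```

Exhaustive enumeration is exponential in the number of jobs, of course; one payoff of Lemma 3 is precisely that the closed form sidesteps it.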
From the result in [2], the Shapley value rule does not satisfy no-envy in our model either. To overcome this, Chun [2] introduces the notion of adjusted no-envy, which he shows is satisfied by the Shapley value rule when the processing times of all jobs are unity. Here, we show that adjusted no-envy continues to hold for the Shapley value rule in our model (when processing times need not be unity).

As before, denote by σ^{ij} an ordering in which the positions of i and j are swapped from an ordering σ. For adjusted no-envy, if (σ, t) is an allocation for some q ∈ Q, let t^{ij}_i be the transfer of job i when the transfer of i is calculated with respect to the ordering σ^{ij}. Observe that an allocation may not allow for the calculation of t^{ij}_i. For example, if ψ is efficient, then t^{ij}_i cannot be calculated if σ^{ij} is not also efficient. For simplicity, we state the definition of adjusted no-envy so that it applies to all such rules.

Definition 12. An allocation rule satisfies adjusted no-envy if for all q ∈ Q, (σ, t) ∈ ψ(q), and i, j ∈ N, we have π_i ≤ c_i(σ^{ij}) − t^{ij}_i.

Proposition 6. The Shapley value rule satisfies adjusted no-envy.

Proof. Without loss of generality, assume the efficient ordering of jobs is 1, ..., n. Consider two jobs i and i + k.
From Lemma 3, SVi = pi\u03b8i + 1 2 \u00a1 j<i \u03b8ipj + j>i \u03b8jpicents.\nLet \u02c6\u03c0i be the cost share of i due to adjusted transfer tii+k i in the ordering \u03c3ii+k .\n\u02c6\u03c0i = ci(\u03c3ii+k ) \u2212 tii+k i = pi\u03b8i + 1 2 \u00a1 j<i \u03b8ipj + \u03b8ipi+k + i<j<i+k \u03b8ipj + j>i \u03b8jpi \u2212 \u03b8i+kpi \u2212 i<j<i+k \u03b8jpicents = SVi + 1 2 i<j\u2264i+k \u00a1\u03b8ipj \u2212 \u03b8jpicents \u2265 SVi (Using the fact that \u03b8i pi \u2265 \u03b8j pj for i < j).\n6.\nCONCLUSION We studied the problem of sharing costs for a job scheduling problem on a single server, when jobs have processing times and unit time waiting costs.\nWe took a cooperative game theory approach and show that the famous the Shapley value rule satisfies many nice fairness properties.\nWe characterized the Shapley value rule using different intuitive fairness axioms.\nIn future, we plan to further simplify some of the fairness axioms.\nSome initial simplifications already appear in [16], where we provide an alternative axiom to ECB and also discuss the implication of transfers between jobs (in stead of transfers from jobs to a central server).\nWe also plan to look at cost sharing mechanisms other than the Shapley value.\nInvestigating the strategic power of jobs in such mechanisms is another line of future research.\n7.\nREFERENCES [1] Youngsub Chun.\nA Note on Maniquet``s Characterization of the Shapley Value in Queueing Problems.\nWorking Paper, Rochester University, 2004.\n[2] Youngsub Chun.\nNo-envy in Queuing Problems.\nWorking Paper, Rochester University, 2004.\n[3] Imma Curiel, Herbert Hamers, and Flip Klijn.\nSequencing Games: A Survey.\nIn Peter Borm and Hans Peters, editors, Chapter in Game Theory.\nTheory and Decision Library, Kulwer Academic Publishers, 2002.\n[4] Imma Curiel, Giorgio Pederzoli, and Stef Tijs.\nSequencing Games.\nEuropean Journal of Operational Research, 40:344-351, 1989.\n[5] Imma Curiel, Jos Potters, Rajendra Prasad, Stef 
Tijs, and Bart Veltman.\nSequencing and Cooperation.\nOperations Research, 42(3):566-568, May-June 1994.\n[6] Nikhil R. Devanur, Milena Mihail, and Vijay V. Vazirani.\nStrategyproof Cost-sharing Mechanisms for Set Cover and Facility Location Games.\nIn Proceedings of Fourth Annual ACM Conferece on Electronic Commerce, 2003.\n[7] Robert J. Dolan.\nIncentive Mechanisms for Priority Queueing Problems.\nBell Journal of Economics, 9:421-436, 1978.\n[8] Joan Feigenbaum, Christos Papadimitriou, and Scott Shenker.\nSharing the Cost of Multicast Transmissions.\nIn Proceedings of Thirty-Second Annual ACM Symposium on Theory of Computing, 2000.\n[9] Herbert Hamers, Jeroen Suijs, Stef Tijs, and Peter Borm.\nThe Split Core for Sequencing Games.\nGames and Economic Behavior, 15:165-176, 1996.\n[10] John C. Harsanyi.\nContributions to Theory of Games IV, chapter A Bargaining Model for Cooperative n-person Games.\nPrinceton University Press, 1959.\nEditors: A. W. Tucker, R. D. Luce.\n[11] Kamal Jain and Vijay Vazirani.\nApplications of Approximate Algorithms to Cooperative Games.\nIn Proceedings of 33rd Symposium on Theory of Computing (STOC ``01), 2001.\n[12] Kamal Jain and Vijay Vazirani.\nEquitable Cost Allocations via Primal-Dual Type Algorithms.\nIn Proceedings of 34th Symposium on Theory of Computing (STOC ``02), 2002.\n[13] Flip Klijn and Estela S\u00b4anchez.\nSequencing Games without a Completely Specified Initial Order.\nReport in Statistics and Operations Research, pages 1-17, 2002.\nReport 02-04.\n[14] Flip Klijn and Estela S\u00b4anchez.\nSequencing Games without Initial Order.\nWorking Paper, Universitat Aut\u00b4onoma de Barcelona, July 2004.\n[15] Franois Maniquet.\nA Characterization of the Shapley Value in Queueing Problems.\nJournal of Economic Theory, 109:90-103, 2003.\n[16] Debasis Mishra and Bharath Rangarajan.\nCost sharing in a job scheduling problem.\nWorking Paper, CORE, 2005.\n[17] Manipushpak Mitra.\nEssays on First Best Implementable Incentive 
Problems.\nPh.D..\nThesis, Indian Statistical Institute, New Delhi, 2000.\n[18] Manipushpak Mitra.\nMechanism design in queueing problems.\nEconomic Theory, 17(2):277-305, 2001.\n[19] Manipushpak Mitra.\nAchieving the first best in sequencing problems.\nReview of Economic Design, 7:75-91, 2002.\n[20] Herv\u00b4e Moulin.\nHandbook of Social Choice and Welfare, chapter Axiomatic Cost and Surplus Sharing.\nNorth-Holland, 2002.\nPublishers: Arrow, Sen, Suzumura.\n[21] Herv\u00b4e Moulin.\nOn Scheduling Fees to Prevent 238 Merging, Splitting and Transferring of Jobs.\nWorking Paper, Rice University, 2004.\n[22] Herv\u00b4e Moulin.\nSplit-proof Probabilistic Scheduling.\nWorking Paper, Rice University, 2004.\n[23] Herv\u00b4e Moulin and Rakesh Vohra.\nCharacterization of Additive Cost Sharing Methods.\nEconomic Letters, 80:399-407, 2003.\n[24] Martin P\u00b4al and \u00b4Eva Tardos.\nGroup Strategyproof Mechanisms via Primal-Dual Algorithms.\nIn Proceedings of the 44th Annual IEEE Symposium on the Foundations of Computer Science (FOCS ``03), 2003.\n[25] Lloyd S. Shapley.\nContributions to the Theory of Games II, chapter A Value for n-person Games, pages 307-317.\nAnnals of Mathematics Studies, 1953.\nEdiors: H. W. Kuhn, A. W. Tucker.\n[26] Wayne E. 
Smith.\nVarious Optimizers for Single-Stage Production.\nNaval Research Logistics Quarterly, 3:59-66, 1956.\n[27] Jeroen Suijs.\nOn incentive compatibility and budget balancedness in public decision making.\nEconomic Design, 2, 2002.\n239","lvl-3":"Cost Sharing in a Job Scheduling Problem Using the Shapley Value\nABSTRACT\nA set of jobs need to be served by a single server which can serve only one job at a time.\nJobs have processing times and incur waiting costs (linear in their waiting time).\nThe jobs share their costs through compensation using monetary transfers.\nWe characterize the Shapley value rule for this model using fairness axioms.\nOur axioms include a bound on the cost share of jobs in a group, efficiency, and some independence properties on the the cost share of a job.\n1.\nINTRODUCTION\nA set of jobs need to be served by a server.\nThe server can process only one job at a time.\nEach job has a finite processing time and a per unit time waiting cost.\nEfficient ordering of this queue directs us to serve the jobs in increasing order of the ratio of per unit time waiting cost and processing time.\nTo compensate for waiting by jobs, monetary transfers to jobs are allowed.\nHow should the jobs share the cost equitably amongst themselves (through transfers)?\nThe problem of fair division of costs among agents in a queue has many practical applications.\nFor example, computer programs are regularly scheduled on servers, data are scheduled to be transmitted over networks, jobs are scheduled in shop-floor on machines, and queues appear in many\npublic services (post offices, banks).\nStudy of queueing problems has attracted economists for a long time [7, 17].\nCost sharing is a fundamental problem in many settings on the Internet.\nInternet can be seen as a common resource shared by many users and the cost incured by using the resource needs to be shared in an equitable manner.\nThe current surge in cost sharing literature from computer scientists validate 
this claim [8, 11, 12, 6, 24].\nInternet has many settings in which our model of job scheduling appears and the agents waiting in a queue incur costs (jobs scheduled on servers, queries answered from a database, data scheduled to be transmitted over a fixed bandwidth network etc.).\nWe hope that our analysis will give new insights on cost sharing problems of this nature.\nRecently, there has been increased interest in cost sharing methods with submodular cost functions [11, 12, 6, 24].\nWhile many settings do have submodular cost functions (for example, multi-cast transmission games [8]), while the cost function of our game is supermodular.\nAlso, such literature typically does not assume budget-balance (transfers adding up to zero), while it is an inherent feature of our model.\nA recent paper by Maniquet [15] is the closest to our model and is the motivation behind our work 1.\nManiquet [15] studies a model where he assumes all processing times are unity.\nFor such a model, he characterizes the Shapley value rule using classical fairness axioms.\nChun [1] interprets the worth of a coalition of jobs in a different manner for the same model and derives a \"reverse\" rule.\nChun characterizes this rule using similar fairness axioms.\nChun [2] also studies the envy properties of these rules.\nMoulin [22, 21] studies the queueing problem from a strategic point view when per unit waiting costs are unity.\nMoulin introduces new concepts in the queueing settings such as splitting and merging of jobs, and ways to prevent them.\nAnother stream of literature is on \"sequencing games\", first introduced by Curiel et al. [4].\nFor a detailed survey, refer to Curiel et al. [3].\nCuriel et al. 
[4] defined sequencing games similar to our model, but in which an initial ordering of jobs is given.\nBesides, their notion of worth of a coalition is very different from the notions studied in Maniquet [15] and Chun [1] (these are the notions used in our work too).\nThe particular notion of the worth of a coalition makes the sequencing game of Curiel et al. [4] convex, whereas our game is not convex and does not assume the presence of any initial order.\nIn summary, the focus of this stream of 1The authors thank Fran \u00b8 cois Maniquet for several fruitful discussions.\nresearch is how to share the savings in costs from the initial ordering to the optimal ordering amongst jobs (also see Hamers et al. [9], Curiel et al. [5]).\nRecently, Klijn and S \u00b4 anchez [13, 14] considered sequencing games without any initial ordering of jobs.\nThey take two approaches to define the worth of coalitions.\nOne of their approaches, called the tail game, is related to the reverse rule of Chun [1].\nIn the tail game, jobs in a coalition are served after the jobs not in the coalition are served.\nKlijn and S \u00b4 anchez [14] showed that the tail game is balanced.\nFurther, they provide expressions for the Shapley value in tail game in terms of marginal vectors and reversed marginal vectors.\nWe provide a simpler expression of the Shapley value in the tail game, generalizing the result in Chun [1].\nKlijn and S \u00b4 anchez [13] study the core of this game in detail.\nStrategic aspects of queueing problems have also been researched.\nMitra [19] studies the first best implementation in queueing models with generic cost functions.\nFirst best implementation means that there exists an efficient mechanism in which jobs in the queue have a dominant strategy to reveal their true types and their transfers add up to zero.\nSuijs [27] shows that if waiting costs of jobs are linear then first best implementation is possible.\nMitra [19] shows that among a more general class of 
queueing problems first best implementation is possible if and only if the cost is linear.\nFor another queueing model, Mitra [18] shows that first best implementation is possible if and only if the cost function satisfies a combinatorial property and an independence property.\nMoulin [22, 21] studies strategic concepts such as splitting and merging in queueing problems with unit per unit waiting costs.\nThe general cost sharing literature is vast and has a long history.\nFor a good survey, we refer to [20].\nFrom the seminal work of Shapley [25] to recent works on cost sharing in multi-cast transmission and optimization problems [8, 6, 23] this area has attracted economists, computer scientists, and operations researchers.\n1.1 Our Contribution\nOurs is the first model which considers cost sharing when both processing time and per unit waiting cost of jobs are present.\nWe take a cooperative game theory approach and apply the classical Shapley value rule to the problem.\nWe show that the Shapley value rule satisfies many intuitive fairness axioms.\nDue to two dimensional nature of our model and one dimensional nature of Maniquet's model [15], his axioms are insufficient to characterize the Shapley value in our setting.\nWe introduce axioms such as independece of preceding jobs' unit waiting cost and independence of following jobs' processing time.\nA key axiom that we introduce gives us a bound on cost share of a job in a group of jobs which have the same ratio of unit time waiting cost and processing time (these jobs can be ordered in any manner between themseleves in an efficient ordering).\nIf such a group consists of just one job, then the axiom says that such a job should at least pay his own processing cost (i.e., the cost it would have incurred if it was the only job in the queue).\nIf there are multiple jobs in such a group, the probability of any two jobs from such a group inflicting costs on each other is same (21) in an efficient ordering.\nDepending on 
the ordering selected, one job inflicts cost on the other.\nOur fairness axiom says that each job should at least bear such expected costs.\nWe characterize the Shapley value rule using these fairness axioms.\nWe also extend the envy results in [2] to our setting and discuss a class of reasonable cost sharing mechanisms.\n2.\nTHE MODEL\n3.\nCOST SHARING USING THE SHAPLEY VALUE\n4.\nAXIOMATIC CHARACTERIZATION OF THE SHAPLEY VALUE\n4.1 The Fairness Axioms\nEfficiency Axioms\nEquity Axioms\nIndependence Axioms\n4.2 The Characterization Results\n5.\nDISCUSSIONS\n5.1 A Reasonable Class of Cost Sharing Mechanisms\nThe Shapley value satisfies ECB by its expression in Lemma\nThese series of propositions lead us to our main result.\n(Using Li--Lj = Rj--Ri = 6 0)\n5.2 Results on Envy\n6.\nCONCLUSION\nWe studied the problem of sharing costs for a job scheduling problem on a single server, when jobs have processing times and unit time waiting costs.\nWe took a cooperative game theory approach and show that the famous the Shapley value rule satisfies many nice fairness properties.\nWe characterized the Shapley value rule using different intuitive fairness axioms.\nIn future, we plan to further simplify some of the fairness axioms.\nSome initial simplifications already appear in [16], where we provide an alternative axiom to ECB and also discuss the implication of transfers between jobs (in stead of transfers from jobs to a central server).\nWe also plan to look at cost sharing mechanisms other than the Shapley value.\nInvestigating the strategic power of jobs in such mechanisms is another line of future research.","lvl-4":"Cost Sharing in a Job Scheduling Problem Using the Shapley Value\nABSTRACT\nA set of jobs need to be served by a single server which can serve only one job at a time.\nJobs have processing times and incur waiting costs (linear in their waiting time).\nThe jobs share their costs through compensation using monetary transfers.\nWe characterize the Shapley value 
rule for this model using fairness axioms.\nOur axioms include a bound on the cost share of jobs in a group, efficiency, and some independence properties on the the cost share of a job.\n1.\nINTRODUCTION\nA set of jobs need to be served by a server.\nThe server can process only one job at a time.\nEach job has a finite processing time and a per unit time waiting cost.\nEfficient ordering of this queue directs us to serve the jobs in increasing order of the ratio of per unit time waiting cost and processing time.\nTo compensate for waiting by jobs, monetary transfers to jobs are allowed.\nHow should the jobs share the cost equitably amongst themselves (through transfers)?\nThe problem of fair division of costs among agents in a queue has many practical applications.\nStudy of queueing problems has attracted economists for a long time [7, 17].\nCost sharing is a fundamental problem in many settings on the Internet.\nInternet can be seen as a common resource shared by many users and the cost incured by using the resource needs to be shared in an equitable manner.\nThe current surge in cost sharing literature from computer scientists validate this claim [8, 11, 12, 6, 24].\nWe hope that our analysis will give new insights on cost sharing problems of this nature.\nRecently, there has been increased interest in cost sharing methods with submodular cost functions [11, 12, 6, 24].\nWhile many settings do have submodular cost functions (for example, multi-cast transmission games [8]), while the cost function of our game is supermodular.\nManiquet [15] studies a model where he assumes all processing times are unity.\nFor such a model, he characterizes the Shapley value rule using classical fairness axioms.\nChun [1] interprets the worth of a coalition of jobs in a different manner for the same model and derives a \"reverse\" rule.\nChun characterizes this rule using similar fairness axioms.\nChun [2] also studies the envy properties of these rules.\nMoulin [22, 21] studies 
the queueing problem from a strategic point view when per unit waiting costs are unity.\nMoulin introduces new concepts in the queueing settings such as splitting and merging of jobs, and ways to prevent them.\nAnother stream of literature is on \"sequencing games\", first introduced by Curiel et al. [4].\nFor a detailed survey, refer to Curiel et al. [3].\nCuriel et al. [4] defined sequencing games similar to our model, but in which an initial ordering of jobs is given.\nresearch is how to share the savings in costs from the initial ordering to the optimal ordering amongst jobs (also see Hamers et al. [9], Curiel et al. [5]).\nRecently, Klijn and S \u00b4 anchez [13, 14] considered sequencing games without any initial ordering of jobs.\nThey take two approaches to define the worth of coalitions.\nOne of their approaches, called the tail game, is related to the reverse rule of Chun [1].\nIn the tail game, jobs in a coalition are served after the jobs not in the coalition are served.\nKlijn and S \u00b4 anchez [14] showed that the tail game is balanced.\nFurther, they provide expressions for the Shapley value in tail game in terms of marginal vectors and reversed marginal vectors.\nWe provide a simpler expression of the Shapley value in the tail game, generalizing the result in Chun [1].\nKlijn and S \u00b4 anchez [13] study the core of this game in detail.\nStrategic aspects of queueing problems have also been researched.\nMitra [19] studies the first best implementation in queueing models with generic cost functions.\nSuijs [27] shows that if waiting costs of jobs are linear then first best implementation is possible.\nMitra [19] shows that among a more general class of queueing problems first best implementation is possible if and only if the cost is linear.\nFor another queueing model, Mitra [18] shows that first best implementation is possible if and only if the cost function satisfies a combinatorial property and an independence property.\nMoulin [22, 21] 
studies strategic concepts such as splitting and merging in queueing problems with unit per unit waiting costs.\nThe general cost sharing literature is vast and has a long history.\nFor a good survey, we refer to [20].\n1.1 Our Contribution\nOurs is the first model which considers cost sharing when both processing time and per unit waiting cost of jobs are present.\nWe take a cooperative game theory approach and apply the classical Shapley value rule to the problem.\nWe show that the Shapley value rule satisfies many intuitive fairness axioms.\nDue to two dimensional nature of our model and one dimensional nature of Maniquet's model [15], his axioms are insufficient to characterize the Shapley value in our setting.\nWe introduce axioms such as independece of preceding jobs' unit waiting cost and independence of following jobs' processing time.\nA key axiom that we introduce gives us a bound on cost share of a job in a group of jobs which have the same ratio of unit time waiting cost and processing time (these jobs can be ordered in any manner between themseleves in an efficient ordering).\nIf such a group consists of just one job, then the axiom says that such a job should at least pay his own processing cost (i.e., the cost it would have incurred if it was the only job in the queue).\nIf there are multiple jobs in such a group, the probability of any two jobs from such a group inflicting costs on each other is same (21) in an efficient ordering.\nDepending on the ordering selected, one job inflicts cost on the other.\nOur fairness axiom says that each job should at least bear such expected costs.\nWe characterize the Shapley value rule using these fairness axioms.\nWe also extend the envy results in [2] to our setting and discuss a class of reasonable cost sharing mechanisms.\n6.\nCONCLUSION\nWe studied the problem of sharing costs for a job scheduling problem on a single server, when jobs have processing times and unit time waiting costs.\nWe took a cooperative 
game theory approach and show that the famous the Shapley value rule satisfies many nice fairness properties.\nWe characterized the Shapley value rule using different intuitive fairness axioms.\nIn future, we plan to further simplify some of the fairness axioms.\nWe also plan to look at cost sharing mechanisms other than the Shapley value.\nInvestigating the strategic power of jobs in such mechanisms is another line of future research.","lvl-2":"Cost Sharing in a Job Scheduling Problem Using the Shapley Value\nABSTRACT\nA set of jobs need to be served by a single server which can serve only one job at a time.\nJobs have processing times and incur waiting costs (linear in their waiting time).\nThe jobs share their costs through compensation using monetary transfers.\nWe characterize the Shapley value rule for this model using fairness axioms.\nOur axioms include a bound on the cost share of jobs in a group, efficiency, and some independence properties on the the cost share of a job.\n1.\nINTRODUCTION\nA set of jobs need to be served by a server.\nThe server can process only one job at a time.\nEach job has a finite processing time and a per unit time waiting cost.\nEfficient ordering of this queue directs us to serve the jobs in increasing order of the ratio of per unit time waiting cost and processing time.\nTo compensate for waiting by jobs, monetary transfers to jobs are allowed.\nHow should the jobs share the cost equitably amongst themselves (through transfers)?\nThe problem of fair division of costs among agents in a queue has many practical applications.\nFor example, computer programs are regularly scheduled on servers, data are scheduled to be transmitted over networks, jobs are scheduled in shop-floor on machines, and queues appear in many\npublic services (post offices, banks).\nStudy of queueing problems has attracted economists for a long time [7, 17].\nCost sharing is a fundamental problem in many settings on the Internet.\nInternet can be seen as a 
common resource shared by many users and the cost incured by using the resource needs to be shared in an equitable manner.\nThe current surge in cost sharing literature from computer scientists validate this claim [8, 11, 12, 6, 24].\nInternet has many settings in which our model of job scheduling appears and the agents waiting in a queue incur costs (jobs scheduled on servers, queries answered from a database, data scheduled to be transmitted over a fixed bandwidth network etc.).\nWe hope that our analysis will give new insights on cost sharing problems of this nature.\nRecently, there has been increased interest in cost sharing methods with submodular cost functions [11, 12, 6, 24].\nWhile many settings do have submodular cost functions (for example, multi-cast transmission games [8]), while the cost function of our game is supermodular.\nAlso, such literature typically does not assume budget-balance (transfers adding up to zero), while it is an inherent feature of our model.\nA recent paper by Maniquet [15] is the closest to our model and is the motivation behind our work 1.\nManiquet [15] studies a model where he assumes all processing times are unity.\nFor such a model, he characterizes the Shapley value rule using classical fairness axioms.\nChun [1] interprets the worth of a coalition of jobs in a different manner for the same model and derives a \"reverse\" rule.\nChun characterizes this rule using similar fairness axioms.\nChun [2] also studies the envy properties of these rules.\nMoulin [22, 21] studies the queueing problem from a strategic point view when per unit waiting costs are unity.\nMoulin introduces new concepts in the queueing settings such as splitting and merging of jobs, and ways to prevent them.\nAnother stream of literature is on \"sequencing games\", first introduced by Curiel et al. [4].\nFor a detailed survey, refer to Curiel et al. [3].\nCuriel et al. 
[4] defined sequencing games similar to our model, but in which an initial ordering of jobs is given.\nBesides, their notion of worth of a coalition is very different from the notions studied in Maniquet [15] and Chun [1] (these are the notions used in our work too).\nThe particular notion of the worth of a coalition makes the sequencing game of Curiel et al. [4] convex, whereas our game is not convex and does not assume the presence of any initial order.\nIn summary, the focus of this stream of 1The authors thank Fran \u00b8 cois Maniquet for several fruitful discussions.\nresearch is how to share the savings in costs from the initial ordering to the optimal ordering amongst jobs (also see Hamers et al. [9], Curiel et al. [5]).\nRecently, Klijn and S \u00b4 anchez [13, 14] considered sequencing games without any initial ordering of jobs.\nThey take two approaches to define the worth of coalitions.\nOne of their approaches, called the tail game, is related to the reverse rule of Chun [1].\nIn the tail game, jobs in a coalition are served after the jobs not in the coalition are served.\nKlijn and S \u00b4 anchez [14] showed that the tail game is balanced.\nFurther, they provide expressions for the Shapley value in tail game in terms of marginal vectors and reversed marginal vectors.\nWe provide a simpler expression of the Shapley value in the tail game, generalizing the result in Chun [1].\nKlijn and S \u00b4 anchez [13] study the core of this game in detail.\nStrategic aspects of queueing problems have also been researched.\nMitra [19] studies the first best implementation in queueing models with generic cost functions.\nFirst best implementation means that there exists an efficient mechanism in which jobs in the queue have a dominant strategy to reveal their true types and their transfers add up to zero.\nSuijs [27] shows that if waiting costs of jobs are linear then first best implementation is possible.\nMitra [19] shows that among a more general class of 
queueing problems, first best implementation is possible if and only if the cost is linear.\nFor another queueing model, Mitra [18] shows that first best implementation is possible if and only if the cost function satisfies a combinatorial property and an independence property.\nMoulin [22, 21] studies strategic concepts such as splitting and merging in queueing problems with unit per unit waiting costs.\nThe general cost sharing literature is vast and has a long history.\nFor a good survey, we refer to [20].\nFrom the seminal work of Shapley [25] to recent works on cost sharing in multi-cast transmission and optimization problems [8, 6, 23], this area has attracted economists, computer scientists, and operations researchers.\n1.1 Our Contribution\nOurs is the first model which considers cost sharing when both the processing time and the per unit waiting cost of jobs are present.\nWe take a cooperative game theory approach and apply the classical Shapley value rule to the problem.\nWe show that the Shapley value rule satisfies many intuitive fairness axioms.\nDue to the two-dimensional nature of our model and the one-dimensional nature of Maniquet's model [15], his axioms are insufficient to characterize the Shapley value in our setting.\nWe introduce axioms such as independence of preceding jobs' unit waiting cost and independence of following jobs' processing time.\nA key axiom that we introduce gives us a bound on the cost share of a job in a group of jobs which have the same ratio of unit waiting cost to processing time (these jobs can be ordered in any manner between themselves in an efficient ordering).\nIf such a group consists of just one job, then the axiom says that such a job should at least pay its own processing cost (i.e., the cost it would have incurred if it were the only job in the queue).\nIf there are multiple jobs in such a group, the probability of any two jobs from such a group inflicting costs on each other is the same (½) in an efficient ordering.\nDepending on
the ordering selected, one job inflicts cost on the other.\nOur fairness axiom says that each job should at least bear such expected costs.\nWe characterize the Shapley value rule using these fairness axioms.\nWe also extend the envy results in [2] to our setting and discuss a class of reasonable cost sharing mechanisms.\n2.\nTHE MODEL\nThere are n jobs that need to be served by one server which can process only one job at a time.\nThe set of jobs is denoted as N = {1,..., n}.\nσ: N → N is an ordering of jobs in N and σi denotes the position of job i in the ordering σ. Given an ordering σ, define Fi(σ) = {j ∈ N: σi < σj} and Pi(σ) = {j ∈ N: σi > σj}.\nEvery job i is identified by two parameters: (pi, θi).\npi is the processing time and θi is the cost per unit waiting time of job i. Thus, a queueing problem is defined by a list q = (N, p, θ) ∈ Q, where Q is the set of all possible lists.\nWe will denote γi = θi\/pi.\nGiven an ordering of jobs σ, the cost incurred by job i is given by ci(σ) = piθi + θi Σ_{j∈Pi(σ)} pj.\nThe total cost incurred by all jobs due to an ordering σ can be written in two ways: (i) by summing the cost incurred by every job and (ii) by summing the costs inflicted by a job on other jobs together with its own processing cost.\nAn efficient ordering σ* is one which minimizes the total cost incurred by all jobs.\nSo, C(N, σ*) ≤ C(N, σ) ∀ σ ∈ Σ.\nTo achieve notational simplicity, we will write the total cost in an efficient ordering of jobs from N as C(N) whenever it is not confusing.\nSometimes, we will deal with only a subset of jobs S ⊆ N.\nThe ordering σ will then be defined on jobs in S only and we will write the total cost from an efficient ordering of jobs in S as C(S).\nThe following lemma shows that jobs are ordered in decreasing γ in an efficient ordering.\nThis is also known as the weighted shortest processing time rule, first introduced by Smith [26].\nLEMMA 1.\nFor every S ⊆ N, an efficient ordering σ* of jobs in S serves the jobs in non-increasing order of γ.\nPROOF.\nAssume for contradiction that the statement of the lemma is not true.\nThis
means, we can find two consecutive jobs i, j ∈ S (σ*i = σ*j + 1) such that γi > γj.\nDefine a new ordering σ by interchanging i and j in σ*.\nThe costs to jobs in S \\ {i, j} are not changed from σ* to σ.\nThe difference between total costs in σ* and σ is given by, C(S, σ) − C(S, σ*) = θjpi − θipj.\nFrom efficiency we get θjpi − θipj ≥ 0.\nThis gives us γj ≥ γi, which is a contradiction.\nAn allocation for q = (N, p, θ) ∈ Q has two components: an ordering σ and a transfer ti for every job i ∈ N. ti denotes the payment received by job i. Given a transfer ti and an ordering σ, the cost share of job i is defined as, πi = ci(σ) − ti.\nAn allocation (σ, t) is efficient for q = (N, p, θ) whenever σ is an efficient ordering and Σ_{i∈N} ti = 0.\nThe set of efficient orderings of q is denoted as Σ*(q) and σ*(q) will be used to refer to a typical element of the set.\nAn allocation rule ψ associates with every q ∈ Q a non-empty subset ψ(q) of allocations.\nDepending on the transfers, the cost shares in different efficient allocations may differ.\nThe following straightforward lemma says that for two different efficient orderings, the cost share in one efficient allocation is possible to achieve in the other by appropriately modifying the transfers.\nLEMMA 2.\nLet (σ, t) be an efficient allocation and π be the vector of cost shares of jobs from this allocation.\nIf σ* ≠ σ is an efficient ordering and t*i = ci(σ*) − πi ∀ i ∈ N, then (σ*, t*) is also an efficient allocation.\nPROOF.\nSince (σ, t) is efficient, Σ_{i∈N} ti = 0.\nThis gives Σ_{i∈N} πi = C(N).\nSince σ* is an efficient ordering, Σ_{i∈N} ci(σ*) = C(N).\nThis means, Σ_{i∈N} t*i = Σ_{i∈N} [ci(σ*) − πi] = 0.\nSo, (σ*, t*) is an efficient allocation.\n3.\nCOST SHARING USING THE SHAPLEY VALUE\nIn this section, we define the coalitional cost of this game and analyze
the solution proposed by the Shapley value.\nGiven a queue q ∈ Q, the cost of a coalition of S ⊆ N jobs in the queue is defined as the cost incurred by jobs in S if these are the only jobs served in the queue using an efficient ordering.\nFormally, the cost of a coalition S ⊆ N is, C(S) = Σ_{i∈S} ci(σ*), where σ* = σ*(S) is an efficient ordering considering jobs from S only.\nThe worth of a coalition of S jobs is just −C(S).\nManiquet [15] observes that another equivalent way to define the worth of a coalition is using the dual function of the cost function C(·).\nOther interesting ways to define the worth of a coalition in such games are discussed by Chun [1], who assumes that a coalition of jobs is served after the jobs not in the coalition are served.\nThe Shapley value (or cost share) of a job i is defined as, SVi = Σ_{S⊆N\\{i}} [|S|! (|N| − |S| − 1)! \/ |N|!] (C(S ∪ {i}) − C(S)).   (1)\nThe Shapley value allocation rule says that jobs are ordered using an efficient ordering and transfers are assigned to jobs such that the cost share of job i is given by Equation 1.\nLEMMA 3.\nThe Shapley value of job i is SVi = piθi + ½ (Li + Ri), where Li = θi Σ_{j∈Pi(σ*)} pj and Ri = pi Σ_{j∈Fi(σ*)} θj.\nPROOF.\nAnother way to write the Shapley value formula is the following [10], SVi = Σ_{S⊆N: i∈S} Δ(S)\/|S|, where Δ(S) = C(S) if |S| = 1 and Δ(S) = C(S) − Σ_{T⊊S} Δ(T) otherwise.\nThis gives Δ({i}) = C({i}) = piθi ∀ i ∈ N.
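As an aside, the closed-form cost share piθi + ½ (Li + Ri) being derived here can be sanity-checked numerically against the permutation-based definition in Equation 1; the sketch below uses hypothetical job data, and `jobs`, `total_cost`, and `shapley` are illustrative names, not from the paper.

```python
from itertools import combinations, permutations
from math import factorial, isclose

# Hypothetical instance: job -> (processing time p_i, per-unit waiting cost theta_i).
jobs = {1: (2.0, 3.0), 2: (1.0, 4.0), 3: (3.0, 3.0)}

def total_cost(subset):
    """C(S): total cost of serving the jobs in `subset` in an efficient order."""
    best = float("inf")
    for order in permutations(subset):
        clock, cost = 0.0, 0.0
        for i in order:
            p, theta = jobs[i]
            clock += p              # completion time of job i
            cost += theta * clock   # cost incurred by job i (includes own processing)
        best = min(best, cost)
    return best

def shapley(i):
    """Shapley value of job i via the marginal-contribution formula (Equation 1)."""
    others = [j for j in jobs if j != i]
    n, value = len(jobs), 0.0
    for r in range(len(others) + 1):
        for S in combinations(others, r):
            weight = factorial(r) * factorial(n - r - 1) / factorial(n)
            value += weight * (total_cost(S + (i,)) - total_cost(S))
    return value

# Closed form: SV_i = p_i*theta_i + (L_i + R_i)/2 along an efficient ordering,
# i.e. jobs sorted by decreasing gamma = theta/p (weighted shortest processing time).
order = sorted(jobs, key=lambda i: jobs[i][1] / jobs[i][0], reverse=True)
for pos, i in enumerate(order):
    p, theta = jobs[i]
    L = theta * sum(jobs[j][0] for j in order[:pos])
    R = p * sum(jobs[j][1] for j in order[pos + 1:])
    assert isclose(shapley(i), p * theta + 0.5 * (L + R))
```

The brute-force computation is exponential in n, so this check is only feasible for small instances; the closed form itself is what makes the rule practical.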
For any i, j ∈ N with i ≠ j, we have Δ({i, j}) = C({i, j}) − C({i}) − C({j}) = min(piθj, pjθi).\nWe will show by induction that Δ(S) = 0 if |S| > 2.\nFor |S| = 3, let S = {i, j, k}.\nWithout loss of generality, assume γi ≥ γj ≥ γk.\nNow, assume for T ⊊ S that Δ(T) = 0 if |T| > 2.\nWithout loss of generality, assume σ to be the identity mapping.\nNow,\nThis proves that Δ(S) = 0 if |S| > 2.\nUsing the Shapley value formula now,\n4.\nAXIOMATIC CHARACTERIZATION OF THE SHAPLEY VALUE\nIn this section, we will define several axioms on fairness and characterize the Shapley value using them.\nFor a given q ∈ Q, we will denote ψ(q) as the set of allocations from allocation rule ψ.\nAlso, we will denote the cost share vector associated with an allocation (σ, t) as π and that with allocation (σ', t') as π', etc.\n4.1 The Fairness Axioms\nWe will define three types of fairness axioms: (i) related to efficiency, (ii) related to equity, and (iii) related to independence.\nEfficiency Axioms\nWe define two efficiency axioms.\nThe first states that an efficient ordering should be selected and the transfers of jobs should add up to zero (budget balance).\nDefinition 1.\nAn allocation rule ψ satisfies efficiency if for every q ∈ Q and (σ, t) ∈ ψ(q), (σ, t) is an efficient allocation.\nThe second axiom related to efficiency says that the allocation rule should not discriminate between two allocations which are equivalent to each other in terms of cost shares of jobs.\nDefinition 2.\nAn allocation rule ψ satisfies Pareto indifference if for every q ∈ Q, (σ, t) ∈ ψ(q), and (σ', t') ∈ Σ(q), we have πi = π'i ∀ i ∈ N ⇒ (σ', t') ∈ ψ(q).\nAn implication of the Pareto indifference axiom and Lemma 2 is that for every efficient ordering there is some set of transfers of jobs such that it is part of an efficient rule and the cost share
of a job in all these allocations is the same.\nEquity Axioms\nHow should the cost be shared between two jobs if the jobs have some kind of similarity between them?\nEquity axioms provide us with fairness properties which help us answer this question.\nWe provide five such axioms.\nSome of these axioms (for example anonymity, equal treatment of equals) are standard in the literature, while some are new.\nWe start with a well known equity axiom called anonymity.\nDenote ρ: N → N as a permutation of elements in N. Let ρ(σ, t) denote the allocation obtained by permuting elements in σ and t according to ρ.\nSimilarly, let ρ(p, θ) denote the new list of (p, θ) obtained by permuting elements of p and θ according to ρ.\nOur first equity axiom states that allocation rules should be immune to such permutations of data.\nDefinition 3.\nAn allocation rule ψ satisfies anonymity if for all q ∈ Q, (σ, t) ∈ ψ(q) and every permutation ρ, then ρ(σ, t) ∈ ψ(N, ρ(p, θ)).\nThe next equity axiom is classical in the literature and says that two similar jobs should be compensated such that their cost shares are equal.\nThis implies that if all the jobs are of the same type, then jobs should equally share the total system cost.\nDefinition 4.\nAn allocation rule ψ satisfies equal treatment of equals (ETE) if for all q ∈ Q, (σ, t) ∈ ψ(q), i, j ∈ N, then pi = pj and θi = θj imply πi = πj.\nETE directs us to share costs equally between jobs if they are of the same per unit waiting cost and processing time.\nBut it is silent about the cost shares of two jobs i and j which satisfy θi\/pi = θj\/pj.\nWe introduce a new axiom for this.\nIf an efficient rule chooses σ such that σi < σj for some i, j ∈ N, then job i is inflicting a cost of piθj on job j and job j is inflicting zero cost on job i.
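This pairwise accounting can be made concrete with a tiny numeric check for two jobs with equal γ; the job parameters below are hypothetical.

```python
# Two jobs with equal gamma = theta/p inflict identical costs on each other,
# so either relative order is efficient; this symmetry is what justifies the
# 1/2 probabilities used by the expected cost bound axiom.  Hypothetical data
# with gamma = 2 for both jobs.
p_i, theta_i = 2.0, 4.0   # gamma_i = 4/2 = 2
p_j, theta_j = 3.0, 6.0   # gamma_j = 6/3 = 2

inflict_i_on_j = p_i * theta_j   # cost i imposes on j when i is served first
inflict_j_on_i = p_j * theta_i   # cost j imposes on i when j is served first
assert inflict_i_on_j == inflict_j_on_i == 12.0

# Hence the total two-job cost is independent of the order:
cost_ij = p_i * theta_i + (p_i + p_j) * theta_j  # serve i first
cost_ji = p_j * theta_j + (p_j + p_i) * theta_i  # serve j first
assert cost_ij == cost_ji == 38.0
```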
Define for some γ > 0, S(γ) = {i ∈ N: γi = γ}.\nIn an efficient ordering, the elements in S(γ) can be ordered in any manner (in |S(γ)|! ways).\nIf i, j ∈ S(γ) then we have pjθi = piθj.\nThe probability of σi < σj is ½ and so is the probability of σi > σj.\nThe expected cost i inflicts on j is ½ piθj and that j inflicts on i is ½ pjθi.\nOur next fairness axiom says that i and j should each be responsible for their own processing cost and this expected cost they inflict on each other.\nArguing for every pair of jobs i, j ∈ S(γ), we establish a bound on the cost share of jobs in S(γ).\nWe impose this as an equity axiom below.\nDefinition 5.\nAn allocation rule satisfies expected cost bound (ECB) if for all q ∈ Q, (σ, t) ∈ ψ(q) with π being the resulting cost share, for any γ > 0, and for every i ∈ S(γ), we have πi ≥ piθi + ½ Σ_{j∈S(γ): j≠i} piθj.\nThe central idea behind this axiom is that of \"expected cost inflicted\".\nIf an allocation rule chooses multiple allocations, we can assign equal probabilities of selecting one of the allocations.\nIn that case, the expected cost inflicted by a job i on another job j in the allocation rule can be calculated.\nOur axiom says that the cost share of a job should be at least its own processing cost plus the total expected cost it inflicts on others.\nNote that the above bound poses no constraints on how the costs are shared among different groups.\nAlso observe that if S(γ) contains just one job, ECB says that the job should at least bear its own processing cost.\nA direct consequence of ECB is the following lemma.\nLEMMA 4.\nLet ψ be an efficient rule which satisfies ECB.\nFor a q ∈ Q, if S(γ) = N, then for any (σ, t) ∈ ψ(q) which gives a cost share of π, πi = piθi + ½ (Li + Ri) ∀ i ∈ N. PROOF.\nFrom ECB, we get πi ≥ piθi + ½ (Li + Ri) ∀ i ∈ N.
Assume for contradiction that there exists j ∈ N such that πj > pjθj + ½ (Lj + Rj).\nUsing efficiency and the fact that Σ_{i∈N} Li = Σ_{i∈N} Ri, we get C(N) = Σ_{i∈N} πi > Σ_{i∈N} [piθi + ½ (Li + Ri)] = C(N).\nThis gives us a contradiction.\nNext, we introduce an axiom about sharing the transfer of a job between a set of jobs.\nIn particular, if the last job quits the system, then the ordering need not change.\nBut the transfer to the last job needs to be shared between the other jobs.\nThis should be done in proportion to their processing times because every job influenced the last job based on its processing time.\nDefinition 6.\nAn allocation rule ψ satisfies proportionate responsibility of p (PRp) if for all q ∈ Q, for all (σ, t) ∈ ψ(q), k ∈ N such that σk = |N|, q' = (N \\ {k}, p', θ') ∈ Q, such that for all i ∈ N \\ {k}: θ'i = θi, p'i = pi, there exists (σ', t') ∈ ψ(q') such that for all i ∈ N \\ {k}: σ'i = σi and t'i = ti + [pi \/ Σ_{j∈N\\{k}} pj] tk.\nAn analogous fairness axiom results if we remove the job from the beginning of the queue.\nSince the presence of the first job influenced each job depending on their θ values, its transfer needs to be shared in proportion to θ values.\nDefinition 7.\nAn allocation rule ψ satisfies proportionate responsibility of θ (PRθ) if for all q ∈ Q, for all (σ, t) ∈ ψ(q), k ∈ N such that σk = 1, q' = (N \\ {k}, p', θ') ∈ Q, such that for all i ∈ N \\ {k}: θ'i = θi, p'i = pi, there exists (σ', t') ∈ ψ(q') such that for all i ∈ N \\ {k}: σ'i = σi and t'i = ti + [θi \/ Σ_{j∈N\\{k}} θj] tk.\nThe proportionate responsibility axioms are generalizations of the equal responsibility axioms introduced by Maniquet [15].\nIndependence Axioms\nThe waiting cost of a job does not depend on the per unit waiting cost of its preceding jobs.\nSimilarly, the waiting cost inflicted by a job on its following jobs is independent of the processing times of the following
jobs.\nThese independence properties should be carried over to the cost sharing rules.\nThis gives us two independence axioms.\nDefinition 8.\nAn allocation rule ψ satisfies independence of preceding jobs' θ (IPJθ) if for all q = (N, p, θ), q' = (N, p', θ') ∈ Q, (σ, t) ∈ ψ(q), (σ', t') ∈ ψ(q'), if for all i ∈ N \\ {k}: θi = θ'i, pi = p'i and γk < γ'k, pk = p'k, then for all j ∈ N such that σj > σk: πj = π'j, where π is the cost share in (σ, t) and π' is the cost share in (σ', t').\nDefinition 9.\nAn allocation rule ψ satisfies independence of following jobs' p (IFJp) if for all q = (N, p, θ), q' = (N, p', θ') ∈ Q, (σ, t) ∈ ψ(q), (σ', t') ∈ ψ(q'), if for all i ∈ N \\ {k}: θi = θ'i, pi = p'i and γk > γ'k, θk = θ'k, then for all j ∈ N such that σj < σk: πj = π'j, where π is the cost share in (σ, t) and π' is the cost share in (σ', t').\n4.2 The Characterization Results\nHaving stated the fairness axioms, we propose three different ways to characterize the Shapley value rule using these axioms.\nAll our characterizations involve efficiency and ECB.\nBut if we have IPJθ, we either need IFJp or PRp.\nSimilarly, if we have IFJp, we either need IPJθ or PRθ.\nPROPOSITION 1.\nAny allocation rule ψ satisfying efficiency, ECB, IPJθ, and IFJp assigns jobs the cost shares implied by the Shapley value.\nPROOF.\nDefine for any i, j ∈ N, θij = γipj and pij = θj\/γi.\nAssume without loss of generality that σ is an efficient ordering with σi = i ∀ i ∈ N. Consider the following q' = (N, p', θ') corresponding to job i with p'j = pj if j < i and p'j = pij if j > i, θ'j = θij if j < i and θ'j = θj if j > i.
Observe that in q' all jobs have the same γ, namely γi.\nBy Lemma 2 and efficiency, (σ, t') ∈ ψ(q') for some set of transfers t'.\nUsing Lemma 4, we get the cost share of i from (σ, t') as πi = piθi + ½ (Li + Ri).\nNow, for any j < i, if we change θ'j to θj without changing the processing time, the new γ of j is γj > γi.\nApplying IPJθ, the cost share of job i should not change.\nSimilarly, for any job j > i, if we change p'j to pj without changing θj, the new γ of j is γj < γi.\nApplying IFJp, the cost share of job i should not change.\nApplying this procedure for every j < i with IPJθ and for every j > i with IFJp, we reach q = (N, p, θ) and the payoff of i does not change from πi.\nUsing this argument for every i ∈ N and using the expression for the Shapley value in Lemma 3, we get the Shapley value rule.\nIt is possible to replace one of the independence axioms with an equity axiom on sharing the transfer of a job.\nThis is shown in Propositions 2 and 3.\nPROPOSITION 2.\nAny allocation rule ψ satisfying efficiency, ECB, PRp, and IPJθ assigns jobs the cost shares implied by the Shapley value.\nPROOF.\nAs in the proof of Proposition 1, define θij = γipj ∀ i, j ∈ N. Assume without loss of generality that σ is an efficient ordering with σi = i ∀ i ∈ N. Consider a queue with jobs in set K = {1,..., i, i + 1}, where i < n.
Define q' = (K, p, θ'), where θ'j = θ(i+1)j ∀ j ∈ K. σ remains an efficient ordering in q' and by IPJθ the cost share of i + 1 remains πi+1.\nIn q'' = (K \\ {i + 1}, p, θ'), we can calculate the cost share of job i using ECB and Lemma 4 as πi = piθi + ½ Σ_{j<i} pjθi.\nSo, using PRp we get the new cost share of job i.\nNow, we can set K = K ∪ {i + 2}.\nAs before, we can find the cost share of i + 2 in this queue as πi+2 = pi+2θi+2 + ½ Σ_{j<i+2} pjθi+2, and the cost share of job i in the new queue as πi = piθi + ½ (Σ_{j<i} pjθi + piθi+1 + piθi+2).\nThis process can be repeated till we add job n, at which point the cost share of i is piθi + ½ (Σ_{j<i} pjθi + Σ_{j>i} piθj).\nThen, we can adjust the θ of the preceding jobs of i to their original values and, applying IPJθ, the payoffs of jobs i through n will not change.\nThis gives us the Shapley values of jobs i through n. Setting i = 1, we get the cost shares of all the jobs from ψ as the Shapley value.\nPROPOSITION 3.\nAny allocation rule ψ satisfying efficiency, ECB, PRθ, and IFJp assigns jobs the cost shares implied by the Shapley value.\nPROOF.\nThe proof mirrors the proof of Proposition 2.\nWe provide a short sketch.\nAnalogous to the proof of Proposition 2, the θs are kept equal to the original data and the processing times are initialized to p(i+1)j.\nThis allows us to use IFJp.\nAlso, in contrast to Proposition 2, we consider K = {i, i + 1,..., n} and repeatedly add jobs to the beginning of the queue maintaining the same efficient ordering.\nSo, we add the cost components of preceding jobs to the cost share of jobs in each iteration and converge to the Shapley value rule.\nThe next proposition shows that the Shapley value rule satisfies all the fairness axioms discussed.\nPROPOSITION 4.\nThe Shapley value rule satisfies efficiency, Pareto indifference, anonymity, ETE, ECB, IPJθ, IFJp, PRp, and PRθ.\nPROOF.\nThe Shapley value rule chooses an efficient ordering and by definition the payments add up to zero.\nSo, it satisfies efficiency.\nThe Shapley value assigns the same cost share to a job irrespective of the efficient ordering chosen.\nSo, it is Pareto indifferent.\nThe Shapley value is anonymous because the particular index of a job does not affect its ordering or cost
share.\nFor ETE, consider two jobs i, j ∈ N such that pi = pj and θi = θj.\nWithout loss of generality, assume the efficient ordering to be 1,..., i,..., j,..., n. Now, the Shapley value of job i is πi = piθi + ½ (Σ_{k<i} pkθi + Σ_{k>i} piθk), and similarly for job j. Since pi = pj and θi = θj, and every job lying between i and j in the efficient ordering has the same γ as i and j, the corresponding terms coincide and πi = πj.\nThe Shapley value satisfies ECB by its expression in Lemma 3.\nConsider any job i in an efficient ordering σ: if we increase the value of γj for some j ≠ i such that σj < σi, then the set Pi (preceding jobs) does not change in the new efficient ordering.\nIf γj is changed such that pj remains the same, then the expression Σ_{j∈Pi} θipj is unchanged.\nIf the (p, θ) values of no other jobs are changed, then the Shapley value is unchanged by increasing γj for some j ∈ Pi while keeping pj unchanged.\nThus, the Shapley value rule satisfies IPJθ.\nAn analogous argument shows that the Shapley value rule satisfies IFJp.\nFor PRp, assume without loss of generality that jobs are ordered 1,..., n in an efficient ordering.\nDenote the transfer of job i ≠ n due to the Shapley value with the set of jobs N and the set of jobs N \\ {n} as ti and t'i respectively.\nThe transfer of the last job is tn = ½ θn Σ_{j<n} pj.\nNow,\nA similar argument shows that the Shapley value rule satisfies PRθ.\nThis series of propositions leads us to our main result.\nTHEOREM 1.\nLet ψ be an allocation rule.\nThe following statements are equivalent: 1) For each q ∈ Q, ψ(q) selects all the allocations assigning jobs cost shares implied by the Shapley value.\n2) ψ satisfies efficiency, ECB, IFJp, and IPJθ.\n3) ψ satisfies efficiency, ECB, IFJp, and PRθ.\n4) ψ satisfies efficiency, ECB, PRp, and IPJθ.\nPROOF.\nThe proof follows from Propositions 1, 2, 3, and 4.\n5.\nDISCUSSIONS\n5.1 A Reasonable Class of Cost Sharing Mechanisms\nIn this section, we will define a reasonable class of cost sharing mechanisms.\nWe will show how these reasonable mechanisms lead to the Shapley value mechanism.\nDefinition 10.\nAn efficient allocation rule ψ is reasonable if for all q ∈ Q and (σ, t) ∈ ψ(q), ti = α (Li − Ri) ∀ i ∈ N, where 0 ≤ α ≤ 1.\nThe reasonable cost sharing mechanism says that every job should be paid
a constant fraction of the difference between the waiting cost it incurs and the waiting cost it inflicts on other jobs.\nIf α = 0, then every job bears its own cost.\nIf α = 1, then every job gets compensated for its waiting cost but compensates others for the cost it inflicts on them.\nThe Shapley value rule comes as a result of ETE, as shown in the following proposition.\nPROPOSITION 5.\nA reasonable allocation rule ψ satisfying ETE is the Shapley value rule.\nPROOF.\nConsider a q ∈ Q in which pi = pj and θi = θj.\nLet (σ, t) ∈ ψ(q) and π be the resulting cost shares.\nFrom ETE, we get πi = πj ⇔ (1 − α)(Li − Lj) = α (Rj − Ri) (using pi = pj, θi = θj) ⇔ 1 − α = α (using Li − Lj = Rj − Ri ≠ 0) ⇔ α = ½.\nThis gives us the Shapley value rule by Lemma 3.\n5.2 Results on Envy\nChun [2] discusses a fairness condition called no-envy for the case when the processing times of all jobs are unity.\nDefinition 11.\nAn allocation rule satisfies no-envy if for all q ∈ Q, (σ, t) ∈ ψ(q), and i, j ∈ N, we have πi ≤ ci(σij) − tj, where π is the cost share from allocation (σ, t) and σij is the ordering obtained by swapping i and j.
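Returning to the reasonable mechanisms of Section 5.1, the role of α can be illustrated numerically. The sketch below assumes transfers of the form ti = α (Li − Ri), which is one plausible reading of the mechanism described above (the job data and all names are hypothetical); under it, α = ½ reproduces the Shapley shares and budget balance holds for every α.

```python
# Hypothetical queue: (p_i, theta_i) pairs, already sorted by decreasing gamma.
queue = [(1.0, 4.0), (2.0, 3.0), (3.0, 3.0)]

def L_R(pos):
    """L: waiting cost job `pos` incurs; R: waiting cost it inflicts on followers."""
    p, theta = queue[pos]
    L = theta * sum(q for q, _ in queue[:pos])
    R = p * sum(th for _, th in queue[pos + 1:])
    return L, R

def shares(alpha):
    """Cost shares pi_i = c_i(sigma) - t_i with assumed transfers t_i = alpha*(L_i - R_i)."""
    out = []
    for pos, (p, theta) in enumerate(queue):
        L, R = L_R(pos)
        out.append(p * theta + L - alpha * (L - R))
    return out

# Shapley shares from the closed form p_i*theta_i + (L_i + R_i)/2.
shapley_shares = [queue[k][0] * queue[k][1] + 0.5 * sum(L_R(k)) for k in range(len(queue))]
assert shares(0.5) == shapley_shares            # alpha = 1/2 recovers the Shapley rule
# Budget balance holds for every alpha, since sum(L_i) == sum(R_i):
assert abs(sum(shares(0.3)) - sum(shares(0.9))) < 1e-9
```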
From the result in [2], the Shapley value rule does not satisfy no-envy in our model either.\nTo overcome this, Chun [2] introduces the notion of adjusted no-envy, which he shows is satisfied by the Shapley value rule when the processing times of all jobs are unity.\nHere, we show that adjusted no-envy continues to hold for the Shapley value rule in our model (when processing times need not be unity).\nAs before, denote by σij an ordering where the positions of i and j are swapped from an ordering σ.\nFor adjusted no-envy, if (σ, t) is an allocation for some q ∈ Q, let tij be the transfer of job i when the transfer of i is calculated with respect to the ordering σij.\nObserve that an allocation may not allow for the calculation of tij.\nFor example, if ψ is efficient, then tij cannot be calculated if σij is not also efficient.\nFor simplicity, we state the definition of adjusted no-envy to apply to all such rules.\nDefinition 12.\nAn allocation rule satisfies adjusted no-envy if for all q ∈ Q, (σ, t) ∈ ψ(q), and i, j ∈ N, we have πi ≤ ci(σij) − tij.\nPROPOSITION 6.\nThe Shapley value rule satisfies adjusted no-envy.\nPROOF.\nWithout loss of generality, assume the efficient ordering of jobs is: 1,..., n. Consider two jobs i and i + k.
From Lemma 3,\n6.\nCONCLUSION\nWe studied the problem of sharing costs for a job scheduling problem on a single server, when jobs have processing times and unit time waiting costs.\nWe took a cooperative game theory approach and showed that the well-known Shapley value rule satisfies many nice fairness properties.\nWe characterized the Shapley value rule using different intuitive fairness axioms.\nIn the future, we plan to further simplify some of the fairness axioms.\nSome initial simplifications already appear in [16], where we provide an alternative axiom to ECB and also discuss the implication of transfers between jobs (instead of transfers from jobs to a central server).\nWe also plan to look at cost sharing mechanisms other than the Shapley value.\nInvestigating the strategic power of jobs in such mechanisms is another line of future research.","keyphrases":["cost share","cost share","job schedul","job schedul","process time","monetari transfer","fair axiom","shaplei valu","queue problem","agent","cooper game theori approach","unit wait cost","alloc rule","expect cost bound"],"prmu":["P","P","P","P","P","P","P","M","M","U","U","M","M","M"]} {"id":"C-69","title":"pTHINC: A Thin-Client Architecture for Mobile Wireless Web","abstract":"Although web applications are gaining popularity on mobile wireless PDAs, web browsers on these systems can be quite slow and often lack adequate functionality to access many web sites. We have developed pTHINC, a PDA thin-client solution that leverages more powerful servers to run full-function web browsers and other application logic, then sends simple screen updates to the PDA for display. pTHINC uses server-side screen scaling to provide high-fidelity display and seamless mobility across a broad range of different clients and screen sizes, including both portrait and landscape viewing modes.
pTHINC also leverages existing PDA control buttons to improve system usability and maximize available screen resolution for application display. We have implemented pTHINC on Windows Mobile and evaluated its performance on mobile wireless devices. Our results compared to local PDA web browsers and other thin-client approaches demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback.","lvl-1":"pTHINC: A Thin-Client Architecture for Mobile Wireless Web Joeng Kim, Ricardo A. Baratto, and Jason Nieh Department of Computer Science Columbia University, New York, NY, USA {jk2438, ricardo, nieh}@cs.columbia.edu ABSTRACT Although web applications are gaining popularity on mobile wireless PDAs, web browsers on these systems can be quite slow and often lack adequate functionality to access many web sites.\nWe have developed pTHINC, a PDA thin-client solution that leverages more powerful servers to run full-function web browsers and other application logic, then sends simple screen updates to the PDA for display.\npTHINC uses server-side screen scaling to provide high-fidelity display and seamless mobility across a broad range of different clients and screen sizes, including both portrait and landscape viewing modes.\npTHINC also leverages existing PDA control buttons to improve system usability and maximize available screen resolution for application display.\nWe have implemented pTHINC on Windows Mobile and evaluated its performance on mobile wireless devices.\nOur results compared to local PDA web browsers and other thin-client approaches demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback.\nCategories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems - client\/server General Terms:
Design, Experimentation, Performance 1.\nINTRODUCTION The increasing ubiquity of wireless networks and decreasing cost of hardware is fueling a proliferation of mobile wireless handheld devices, both as standalone wireless Personal Digital Assistants (PDA) and popular integrated PDA\/cell phone devices.\nThese devices are enabling new forms of mobile computing and communication.\nService providers are leveraging these devices to deliver pervasive web access, and mobile web users already often use these devices to access web-enabled information such as news, email, and localized travel guides and maps.\nIt is likely that within a few years, most of the devices accessing the web will be mobile.\nUsers typically access web content by running a web browser and associated helper applications locally on the PDA.\nAlthough native web browsers exist for PDAs, they deliver subpar performance and have a much smaller feature set and more limited functionality than their desktop computing counterparts [10].\nAs a result, PDA web browsers are often not able to display web content from web sites that leverage more advanced web technologies to deliver a richer web experience.\nThis fundamental problem arises for two reasons.\nFirst, because PDAs have a completely different hardware\/software environment from traditional desktop computers, web applications need to be rewritten and customized for PDAs if at all possible, duplicating development costs.\nBecause the desktop application market is larger and more mature, most development effort generally ends up being spent on desktop applications, resulting in greater functionality and performance than their PDA counterparts.\nSecond, PDAs have a more resource constrained environment than traditional desktop computers to provide a smaller form factor and longer battery life.\nDesktop web browsers are large, complex applications that are unable to run on a PDA.\nInstead, developers are forced to significantly strip down these web 
browsers to provide a usable PDA web browser, thereby crippling PDA browser functionality.\nThin-client computing provides an alternative approach for enabling pervasive web access from handheld devices.\nA thin-client computing system consists of a server and a client that communicate over a network using a remote display protocol.\nThe protocol enables graphical displays to be virtualized and served across a network to a client device, while application logic is executed on the server.\nUsing the remote display protocol, the client transmits user input to the server, and the server returns screen updates of the applications to the client.\nUsing a thin-client model for mobile handheld devices, PDAs can become simple stateless clients that leverage the remote server capabilities to execute web browsers and other helper applications.\nThe thin-client model provides several important benefits for mobile wireless web.\nFirst, standard desktop web applications can be used to deliver web content to PDAs without rewriting or adapting applications to execute on a PDA, reducing development costs and leveraging existing software investments.\nSecond, complex web applications can be executed on powerful servers instead of running stripped down versions on more resource constrained PDAs, providing greater functionality and better performance [10].\nThird, web applications can take advantage of servers with faster networks and better connectivity, further boosting application performance.\nFourth, PDAs can be even simpler devices since they do not need to perform complex application logic, potentially reducing energy consumption and extending battery life.\nFinally, PDA thin clients can be essentially stateless appliances that do not need to be backed up or restored, require almost no maintenance or upgrades, and do not store any sensitive data that can be lost or stolen.\nThis model provides a viable avenue for medical organizations to comply with HIPAA
regulations [6] while embracing mobile handhelds in their day-to-day operations.

Despite these potential advantages, thin clients have been unable to provide the full range of these benefits in delivering web applications to mobile handheld devices. Existing thin clients were not designed for PDAs and do not account for important usability issues in the context of small form factor devices, resulting in difficulty navigating displayed web content. Furthermore, existing thin clients are ineffective at providing seamless mobility across the heterogeneous mix of device display sizes and resolutions. While existing thin clients can already provide faster performance than native PDA web browsers in delivering HTML web content, they do not effectively support more display-intensive web helper applications such as multimedia video, which is increasingly an integral part of available web content.

To harness the full potential of thin-client computing in providing mobile wireless web on PDAs, we have developed pTHINC (PDA THin-client InterNet Computing). pTHINC builds on our previous work on THINC [1] to provide a thin-client architecture for mobile handheld devices. pTHINC virtualizes and resizes the display on the server to efficiently deliver high-fidelity screen updates to a broad range of different clients, screen sizes, and screen orientations, including both portrait and landscape viewing modes. This enables pTHINC to provide the same persistent web session across different client devices. For example, pTHINC can provide the same web browsing session appropriately scaled for display on a desktop computer and a PDA, so that the same cookies, bookmarks, and other meta-data are continuously available on both machines simultaneously. pTHINC's virtual display approach leverages semantic information available in display commands, and client-side video hardware, to provide more efficient remote display mechanisms that are crucial for supporting more
display-intensive web applications. Given the limited display resolution of PDAs, pTHINC maximizes the use of screen real estate for remote display by moving control functionality from the screen to readily available PDA control buttons, improving system usability.

We have implemented pTHINC on Windows Mobile and demonstrated that it works transparently with existing applications, window systems, and operating systems, and does not require modifying, recompiling, or relinking existing software. We have quantitatively evaluated pTHINC against local PDA web browsers and other thin-client approaches on Pocket PC devices. Our experimental results demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback.

This paper presents the design and implementation of pTHINC. Section 2 describes the overall usage model and usability characteristics of pTHINC. Section 3 presents the design and system architecture of pTHINC. Section 4 presents experimental results measuring the performance of pTHINC on web applications and comparing it against native PDA browsers and other popular PDA thin-client systems. Section 5 discusses related work. Finally, we present some concluding remarks.

2. PTHINC USAGE MODEL

pTHINC is a thin-client system that consists of a simple client viewer application that runs on the PDA and a server that runs on a commodity PC. The server leverages more powerful PCs to run web browsers and other application logic. The client takes user input from the PDA stylus and virtual keyboard and sends it to the server to pass to the applications. Screen updates are then sent back from the server to the client for display to the user.

When the pTHINC PDA client is started, the user is presented with a simple graphical interface where information such as server address and port, user authentication information, and session settings
can be provided. pTHINC first attempts to connect to the server and perform the necessary handshaking. Once this process has been completed, pTHINC presents the user with the most recent display of his session. If the session does not exist, a new session is created. Existing sessions can be seamlessly continued without changes in the session settings or server configuration.

Unlike other thin-client systems, pTHINC provides the user with a persistent web session model in which a user can launch a session running a web browser and associated applications at the server, then disconnect from that session and reconnect to it again at any time. When a user reconnects to the session, all of the applications continue running where the user left off, so that the user can continue working as though he or she never disconnected. The ability to disconnect and reconnect to a session at any time is an important benefit for mobile wireless PDA users, who may have intermittent network connectivity.

pTHINC's persistent web session model enables a user to reconnect to a web session from devices other than the one on which the web session was originally initiated. This provides users with seamless mobility across different devices. If a user loses his PDA, he can easily use another PDA to access his web session. Furthermore, pTHINC allows users to use non-PDA devices to access web sessions as well. A user can access the same persistent web session on a desktop PC as on a PDA, enabling a user to use the same web session from any computer.

pTHINC's persistent web session model addresses a key problem encountered by mobile web users: the lack of a common web environment across computers. Web browsers often store important information such as bookmarks, cookies, and history, which enables them to function in a much more useful manner. The problem that occurs when a user moves between computers is that this data, which is specific to a web browser installation, cannot move with the user.
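The find-or-create behavior of this persistent session model can be sketched as follows; the fixed-size table, struct fields, and function names are illustrative assumptions, not pTHINC's actual implementation:

```c
#include <string.h>

/* Minimal sketch of the persistent session model: on connect, the
 * server attaches the client to the user's existing session if one
 * exists, otherwise it creates a new one. */
#define MAX_SESSIONS 64

struct session {
    char user[32];
    int  in_use;     /* slot allocated? */
    int  connected;  /* client currently attached? */
};

static struct session *attach_session(struct session *tab, const char *user)
{
    struct session *free_slot = 0;
    for (int i = 0; i < MAX_SESSIONS; i++) {
        if (tab[i].in_use && strcmp(tab[i].user, user) == 0) {
            tab[i].connected = 1;   /* reconnect: apps kept running */
            return &tab[i];
        }
        if (!tab[i].in_use && !free_slot)
            free_slot = &tab[i];
    }
    if (!free_slot)
        return 0;                   /* table full */
    strncpy(free_slot->user, user, sizeof free_slot->user - 1);
    free_slot->user[sizeof free_slot->user - 1] = '\0';
    free_slot->in_use = 1;
    free_slot->connected = 1;
    return free_slot;               /* new session created */
}
```

The key property is that disconnecting only clears the `connected` flag; the session slot, and with it the running browser state, survives until an explicit teardown.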
Furthermore, web browsers often need helper applications to process different media content, and those applications may not be consistently available across all computers. pTHINC addresses this problem by enabling a user to remotely use the exact same web browser environment and helper applications from any computer. As a result, pTHINC can provide a common, consistent web browsing environment for mobile users across different devices without requiring them to repeatedly synchronize different web browsing environments across multiple machines.

To enable a user to access the same web session on different devices, pTHINC must provide mechanisms to support different display sizes and resolutions. Toward this end, pTHINC provides a zoom feature that enables a user to zoom in and out of a display and allows the display of a web session to be resized to fit the screen of the device being used. For example, if the server is running a web session at 1024×768 but the client is a PDA with a display resolution of 640×480, pTHINC will resize the desktop display to fit the full display in the smaller screen of the PDA. pTHINC provides the PDA user with the option to increase the size of the display by zooming in to different parts of the display. Users are often familiar with the general layout of commonly visited websites, and are able to leverage this resizing feature to better navigate through web pages. For example, a user can zoom out of the display to view the entire page content and navigate hyperlinks, then zoom in to a region of interest for a better view.

Figure 1: pTHINC shortcut keys

To enable a user to access the same web session on different devices, pTHINC must also provide mechanisms to support different display orientations. In a desktop environment, users are typically accustomed to having displays presented in landscape mode, where the screen width is larger than its height. However, in a PDA environment, the
choice is not always obvious. Some users may prefer having the display in portrait mode, as it is easier to hold the device in their hands, while others may prefer landscape mode in order to minimize the amount of side-scrolling necessary to view a web page. To accommodate PDA user preferences, pTHINC provides an orientation feature that enables it to seamlessly rotate the display between landscape and portrait mode. The landscape mode is particularly useful for pTHINC users who frequently access their web sessions on both desktop and PDA devices, providing those users with the same familiar landscape setting across different devices.

Because screen space is a relatively scarce resource on PDAs, pTHINC runs in fullscreen mode to maximize the screen area available to display the web session. To be able to use all of the screen on the PDA and still allow the user to control and interact with it, pTHINC reuses the typical shortcut buttons found on PDAs to perform all the control functions available to the user. The buttons used by pTHINC do not require any OS environment changes; they are simply intercepted by the pTHINC client application when they are pressed. Figure 1 shows how pTHINC utilizes the shortcut buttons to provide easy navigation and improve the overall user experience. These buttons are not device specific, and the layout shown is common to widely-used Pocket PC devices. pTHINC provides six shortcuts to support its usage model:

• Rotate Screen: The record button on the left edge is used to rotate the screen between portrait and landscape mode each time the button is pressed.

• Zoom Out: The leftmost button on the bottom front is used to zoom out the display of the web session, providing a bird's eye view of the web session.

• Zoom In: The second leftmost button on the bottom front is used to zoom in the display of the web session to more clearly view content of interest.

• Directional Scroll: The middle button on the bottom
front is used to scroll around the display using a single control button in a way that is already familiar to PDA users. This feature is particularly useful when the user has zoomed in to a region of the display such that only part of the display is visible on the screen.

• Show/Hide Keyboard: The second rightmost button on the bottom front is used to bring up a virtual keyboard drawn on the screen for devices that have no physical keyboard. The virtual keyboard uses standard PDA OS mechanisms, providing portability across different PDA environments.

• Close Session: The rightmost button on the bottom front is used to disconnect from the pTHINC session.

pTHINC uses the PDA touch screen, stylus, and standard user interface mechanisms to provide a point-and-click user interface metaphor similar to that provided by the mouse in a traditional desktop computing environment. pTHINC does not use a cursor, since PDA environments do not provide one. Instead, a user can use the stylus to tap on different sections of the touch screen to indicate input focus. A single tap on the touch screen generates a corresponding single-click mouse event. A double tap on the touch screen generates a corresponding double-click mouse event. pTHINC provides two-button mouse emulation by using the stylus to press down on the screen for one second to generate a right mouse click. All of these actions are identical to the way users already interact with PDA applications in the common Pocket PC environment. In web browsing, users can click on hyperlinks and focus on input boxes by simply tapping on the desired screen area of interest. Unlike local PDA web browsers and other PDA applications, pTHINC leverages more powerful desktop user interface metaphors to enable users to manipulate multiple open application windows instead of being limited to a single application window at any given moment. This provides increased browsing flexibility beyond what is currently available on PDA devices.
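The stylus-to-mouse mapping described above can be sketched as follows; the one-second hold threshold comes from the text, while the function and event names are illustrative assumptions:

```c
#include <stdint.h>

enum mouse_event {
    MOUSE_NONE,
    MOUSE_LEFT_CLICK,      /* single tap              */
    MOUSE_LEFT_DBLCLICK,   /* double tap              */
    MOUSE_RIGHT_CLICK      /* press and hold 1 second */
};

/* Map a completed stylus gesture to the mouse event forwarded to the
 * server; press_ms is how long the stylus was held down. */
static enum mouse_event map_stylus(uint32_t press_ms, int tap_count)
{
    if (press_ms >= 1000)
        return MOUSE_RIGHT_CLICK;   /* two-button mouse emulation */
    if (tap_count >= 2)
        return MOUSE_LEFT_DBLCLICK;
    if (tap_count == 1)
        return MOUSE_LEFT_CLICK;
    return MOUSE_NONE;
}
```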
Similar to a desktop environment, browser windows and other application windows can be moved around by pressing down and dragging the stylus, similar to a mouse.

3. PTHINC SYSTEM ARCHITECTURE

pTHINC builds on the THINC [1] remote display architecture to provide a thin-client system for PDAs. pTHINC virtualizes the display at the server by leveraging the video device abstraction layer, which sits below the window server and above the framebuffer. This is a well-defined, low-level, device-dependent layer that exposes the video hardware to the display system. pTHINC accomplishes this through a simple virtual display driver that intercepts drawing commands, packetizes them, and sends them over the network.

While other thin-client approaches intercept display commands at other layers of the display subsystem, pTHINC's display virtualization approach provides some key benefits in efficiently supporting PDA clients. For example, intercepting display commands at a higher layer, between applications and the window system as is done by X [17], requires replicating and running a great deal of functionality on the PDA that is traditionally provided by the desktop window system. Given both the size and complexity of traditional window systems, attempting to replicate this functionality in the restricted PDA environment would be a daunting, and perhaps infeasible, task. Furthermore, applications and the window system often require tight synchronization in their operation, and imposing a wireless network between them by running the applications on the server and the window system on the client would significantly degrade performance. On the other hand, intercepting at a lower layer by extracting pixels out of the framebuffer as they are rendered provides a simple solution that requires very little functionality on the PDA client, but can also result in degraded performance. The reason is that by the time the remote display server attempts to
send screen updates, it has lost all semantic information that may have helped it encode efficiently, and it must resort to using a generic and expensive encoding mechanism on the server, as well as a potentially expensive decoding mechanism on the limited PDA client. In contrast to both the high- and low-level interception approaches, pTHINC's approach of intercepting at the device driver provides an effective balance between client and server simplicity and the ability to efficiently encode and decode screen updates.

By using a low-level virtual display approach, pTHINC can efficiently encode application display commands using only a small set of low-level commands. In a PDA environment, this set of commands provides a crucial component in maintaining the simplicity of the client in the resource-constrained PDA environment. The display commands are shown in Table 1, and work as follows. COPY instructs the client to copy a region of the screen from its local framebuffer to another location. This command improves the user experience by accelerating scrolling and opaque window movement without having to resend screen data from the server. SFILL, PFILL, and BITMAP are commands that paint a fixed-size region on the screen. They are useful for accelerating the display of solid window backgrounds, desktop patterns, backgrounds of web pages, text drawing, and certain operations in graphics manipulation programs. SFILL fills a sizable region on the screen with a single color. PFILL replicates a tile over a screen region. BITMAP performs a fill using a bitmap of ones and zeros as a stipple to apply a foreground and background color. Finally, RAW is used to transmit unencoded pixel data to be displayed verbatim on a region of the screen. This command is invoked as a last resort if the server is unable to employ any other command, and it is the only command that may be compressed to mitigate its impact on network bandwidth.
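As one way to picture this command set, a binary encoding might look like the following sketch; the paper does not specify pTHINC's actual wire format, so all layouts and names here are assumptions:

```c
#include <stdint.h>

/* Hypothetical wire layout for the five pTHINC display commands. */
enum cmd_type { CMD_COPY, CMD_SFILL, CMD_PFILL, CMD_BITMAP, CMD_RAW };

struct cmd_header {
    uint8_t  type;       /* one of cmd_type        */
    uint16_t x, y, w, h; /* destination rectangle  */
};

struct cmd_copy  { struct cmd_header h; uint16_t src_x, src_y; };
struct cmd_sfill { struct cmd_header h; uint32_t pixel; };
/* PFILL would additionally carry a tile, BITMAP a 1-bpp stipple plus
 * foreground/background colors, and RAW w*h pixels of raw (optionally
 * compressed) data. */

/* Size in bytes of an uncompressed RAW payload, which is why RAW is
 * used only as a last resort. */
static uint32_t raw_payload_bytes(uint16_t w, uint16_t h,
                                  uint8_t bytes_per_pixel)
{
    return (uint32_t)w * h * bytes_per_pixel;
}
```

For a full-screen 640×480 update at 16-bit color, `raw_payload_bytes(640, 480, 2)` is 614,400 bytes, versus a handful of header bytes for an equivalent SFILL; this gap is the intuition behind preferring the encoded commands.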
pTHINC delivers its commands using a non-blocking, server-push update mechanism: as soon as display updates are generated on the server, they are sent to the client. Clients are not required to explicitly request display updates, thus minimizing the impact that the typically varying network latency of wireless links may have on the responsiveness of the system.

Command  Description
COPY     Copy a frame buffer area to specified coordinates
SFILL    Fill an area with a given pixel color value
PFILL    Tile an area with a given pixel pattern
BITMAP   Fill a region using a bit pattern
RAW      Display raw pixel data at a given location

Table 1: pTHINC Protocol Display Commands

Keeping in mind that resource-constrained PDAs and wireless networks may not be able to keep up with a fast server generating a large number of updates, pTHINC is able to coalesce, clip, and discard updates automatically if network loss or congestion occurs, or if the client cannot keep up with the rate of updates. This type of behavior proves crucial in a web browsing environment, where, for example, a page may be redrawn multiple times as it is rendered on the fly by the browser. In this case, the PDA will only receive and render the final result, which is clearly all the user is interested in seeing.

pTHINC prioritizes the delivery of updates to the PDA using a Shortest-Remaining-Size-First (SRSF) preemptive update scheduler. SRSF is analogous to Shortest-Remaining-Processing-Time scheduling, which is known to be optimal for minimizing mean response time in an interactive system. In a web browsing environment, short jobs are associated with text and basic page layout components such as the page's background, which are critical web content for the user. On the other hand, large jobs are often lower-priority beautifying elements or, even worse, web page banners and advertisements, which are of questionable value to the user as he or she is browsing the page. Using SRSF, pTHINC is able to maximize the utilization of the relatively scarce bandwidth available on the wireless connection between the PDA and the server.
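A minimal sketch of SRSF selection follows, assuming a simple array of pending updates; a real scheduler would use a heap and preempt partially sent updates, and the names here are illustrative:

```c
#include <stddef.h>
#include <stdint.h>

/* A pending screen update with some number of bytes left to send. */
struct update { uint32_t remaining_bytes; };

/* Shortest-Remaining-Size-First: pick the pending update with the
 * fewest bytes left, so that small, critical updates (text, page
 * layout) go out ahead of large ones (images, banners).
 * Returns the index of the next update to transmit, or -1 if none. */
static int srsf_next(const struct update *q, size_t n)
{
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (q[i].remaining_bytes == 0)
            continue;   /* already fully sent */
        if (best < 0 ||
            q[i].remaining_bytes < q[(size_t)best].remaining_bytes)
            best = (int)i;
    }
    return best;
}
```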
3.1 Display Management

To enable users to access their web browser and helper applications just as easily from a desktop computer at home as from a PDA while on the road, pTHINC provides a resize mechanism to zoom in and out of the display of a web session. pTHINC resizing is completely supported by the server, not the client. The server resamples updates to fit within the PDA's viewport before they are transmitted over the network. pTHINC uses Fant's resampling algorithm to resize pixel updates. This provides smooth, visually pleasing updates with proper antialiasing, and has only modest computational requirements.

pTHINC's resizing approach has a number of advantages. First, it allows the PDA to leverage the vastly superior computational power of the server to use high-quality resampling algorithms and produce higher-quality updates for the PDA to display. Second, resizing the screen does not translate into additional resource requirements for the PDA, since it does not need to perform any additional work. Finally, better utilization of the wireless network is attained, since rescaling the updates reduces their bandwidth requirements.

To enable users to orient their displays on a PDA to provide a viewing experience that best accommodates user preferences and the layout of web pages or applications, pTHINC provides a display rotation mechanism to switch between landscape and portrait viewing modes. pTHINC display rotation is completely supported by the client, not the server. pTHINC does not explicitly recalculate the geometry of display updates to perform rotation, which would be computationally expensive. Instead, pTHINC simply changes the way data is copied into the framebuffer to switch between display modes. When in portrait mode, data is copied along the rows of the framebuffer from left to right. When in landscape mode, data is copied along the columns of the framebuffer from top to bottom.
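The copy-order technique can be sketched as follows, assuming 16-bit pixels; the function names and stride handling are illustrative assumptions:

```c
#include <stddef.h>
#include <stdint.h>

/* Write a w*h update into the client framebuffer in portrait order:
 * along the rows, left to right. */
static void blit_portrait(uint16_t *fb, size_t stride,
                          const uint16_t *src, int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            fb[(size_t)y * stride + (size_t)x] = src[y * w + x];
}

/* Same source pixels written down the columns, rotating the update
 * 90 degrees without recalculating its geometry on the server. */
static void blit_landscape(uint16_t *fb, size_t stride,
                           const uint16_t *src, int w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            fb[(size_t)x * stride + (size_t)(h - 1 - y)] = src[y * w + x];
}
```

Both loops touch exactly the same source pixels; only the destination addressing changes, which is why the rotation imposes no extra per-pixel work.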
These very fast and simple techniques replace one set of copy operations with another and impose no performance overhead. pTHINC provides its own rotation mechanism to support a wide range of devices without imposing additional feature requirements on the PDA. Although some newer PDA devices provide native support for different orientations, this mechanism is not dynamic and requires the user to rotate the PDA's entire user interface before starting the pTHINC client. Windows Mobile provides native API mechanisms for PDA applications to rotate their UI on the fly, but these mechanisms deliver poor performance and display quality, as the rotation is performed naively and is not completely accurate.

3.2 Video Playback

Video has gradually become an integral part of the World Wide Web, and its presence will only continue to increase. Web sites today not only use animated graphics and Flash to deliver web content in an attractive manner, but also utilize streaming video to enrich the web interface. Users are able to view pre-recorded and live newscasts on CNN, watch sports highlights on ESPN, and even search through large collections of videos on Google Video. To allow applications to provide efficient video playback, interfaces have been created in display systems that allow video device drivers to expose their hardware capabilities back to the applications. pTHINC takes advantage of these interfaces and its virtual device driver approach to provide a virtual bridge between the remote client, its hardware, and the applications, and to transparently support video playback. On top of this architecture, pTHINC uses the YUV colorspace to encode the video content, which provides a number of benefits. First, it has become increasingly common for PDA video hardware to natively support YUV and be able to perform the colorspace conversion and scaling automatically. As a result, pTHINC is able to provide fullscreen video playback without
any performance hits. Second, the use of YUV allows for a more efficient representation of RGB data without loss of quality, by taking advantage of the human eye's ability to better distinguish differences in brightness than in color. In particular, pTHINC uses the YV12 format, which allows full-color RGB data to be encoded using just 12 bits per pixel. Third, YUV data is produced as one of the last steps of the decoding process of most video codecs, allowing pTHINC to provide video playback in a manner that is format independent. Finally, even if the PDA's video hardware is unable to accelerate playback, the colorspace conversion process is simple enough that it does not impose unreasonable requirements on the PDA.

A more concrete example of how pTHINC leverages the PDA video hardware to support video playback can be seen in our prototype implementation on the popular Dell Axim X51v PDA, which is equipped with the Intel 2700G multimedia accelerator. In this case, pTHINC creates an offscreen buffer in video memory and writes and reads data in this memory region in the YV12 format. When a new video frame arrives, video data is copied from the buffer to an overlay surface in video memory, which is independent of the normal surface used for traditional drawing. As the YV12 data is put onto the overlay, the Intel accelerator automatically performs both colorspace conversion and scaling. By using the overlay surface, pTHINC has no need to redraw the screen once video playback is over, since the overlapped surface is unaffected. In addition, specific overlay regions can be manipulated by leveraging the video hardware, for example to perform hardware linear interpolation to smooth out the frame and display it fullscreen, and to do automatic rotation when the client runs in landscape mode.

Figure 2: Experimental Testbed

4. EXPERIMENTAL RESULTS

We have implemented a pTHINC prototype that runs the client on widely-used Windows Mobile-based Pocket PC devices
and the server on both Windows and Linux operating systems. To demonstrate its effectiveness in supporting mobile wireless web applications, we have measured its performance on web applications. We present experimental results on different PDA devices for two popular web applications: browsing web pages and playing video content from the web. We compared pTHINC against native web applications running locally on the PDA to demonstrate the improvement that pTHINC can provide over the traditional fat-client approach. We also compared pTHINC against three of the most widely used thin clients that can run on PDAs: Citrix MetaFrameXP [2], Microsoft Remote Desktop [3], and VNC (Virtual Network Computing) [16]. We follow common practice and refer to Citrix MetaFrameXP and Microsoft Remote Desktop by their respective remote display protocols, ICA (Independent Computing Architecture) and RDP (Remote Desktop Protocol).

4.1 Experimental Testbed

We conducted our web experiments using two different wireless Pocket PC PDAs in an isolated Wi-Fi network testbed, as shown in Figure 2. The testbed consisted of two PDA client devices, a packet monitor, a thin-client server, and a web server. Except for the PDAs, all of the other machines were IBM Netfinity 4500R servers with dual 933 MHz Intel PIII CPUs and 512 MB RAM, connected on a switched 100 Mbps FastEthernet network. The web server used was Apache 1.3.27, the network emulator was NISTNet 2.0.12, and the packet monitor was Ethereal 0.10.9. The PDA clients connected to the testbed through an 802.11b Lucent Orinoco AP-2000 wireless access point. All experiments using the wireless network were conducted within ten feet of the access point, so we considered the amount of packet loss to be negligible in our experiments.

Two Pocket PC PDAs were used to provide results across both an older, less powerful model and a newer, higher-performance model. The older model was a Dell Axim X5 with a 400 MHz Intel XScale PXA255 CPU and 64 MB RAM, running Windows Mobile 2003, with a Dell TrueMobile 1180 2.4 GHz CompactFlash card for wireless networking. The newer model was a Dell Axim X51v with a 624 MHz Intel XScale PXA270 CPU and 64 MB RAM, running Windows Mobile 5.0, with integrated 802.11b wireless networking. The X51v has an Intel 2700G multimedia accelerator with 16 MB video memory. Both PDAs are capable of 16-bit color but have different screen sizes and display resolutions. The X5 has a 3.5 inch diagonal screen with 240×320 resolution. The X51v has a 3.7 inch diagonal screen with 480×640 resolution.

Client   1024×768   640×480   Depth    Resize   Clip
RDP      no         yes       8-bit    no       yes
VNC      yes        yes       16-bit   no       no
ICA      yes        yes       16-bit   yes      no
pTHINC   yes        yes       24-bit   yes      no

Table 2: Thin-client Testbed Configuration Setting

The four thin clients that we used support different levels of display quality, as summarized in Table 2. The RDP client only supports a fixed 640×480 display resolution on the server with 8-bit color depth, while the other platforms provide higher levels of display quality. To provide a fair comparison across all platforms, we conducted our experiments with thin-client sessions configured for two possible resolutions, 1024×768 and 640×480. Both ICA and VNC were configured to use the native PDA color depth of 16 bits. The current pTHINC prototype uses 24-bit color directly, and the client downsamples updates to the 16-bit color depth available on the PDA. RDP was configured using only 8-bit color depth since it does not support any better color depth. Since both pTHINC and ICA provide the ability to view the display resized to fit the screen, we measured both clients with and without the display resized to fit the PDA screen. Each thin client was tested using landscape rather than portrait mode when available. All systems run on the X51v could run in landscape mode because the hardware provides a landscape mode feature. However, the X5 does not
provide this functionality. Only pTHINC directly supports landscape mode, so it was the only system that could run in landscape mode on both the X5 and the X51v. To provide a fair comparison, we also standardized on common hardware and operating systems whenever possible. All of the systems used the Netfinity server as the thin-client server. For the two systems designed for Windows servers, ICA and RDP, we ran Windows 2003 Server on the server. For the other systems, which support X-based servers, VNC and pTHINC, we ran the Debian Linux Unstable distribution with the Linux 2.6.10 kernel on the server. We used the latest thin-client server versions available on each platform at the time of our experiments, namely Citrix MetaFrame XP Server for Windows Feature Release 3, Microsoft Remote Desktop built into Windows XP and Windows 2003 using RDP 5.2, and VNC 4.0.

4.2 Application Benchmarks

We used two web application benchmarks for our experiments, based on two common application scenarios: browsing web pages and playing video content from the web. Since many thin-client systems, including two of the ones tested, are closed and proprietary, we measured their performance in a noninvasive manner by capturing network traffic with a packet monitor and using a variant of slow-motion benchmarking [13] previously developed to measure thin-client performance in PDA environments [10]. This measurement methodology accounts for both the display decoupling that can occur between client and server in thin-client systems and the client processing time, which may be significant in the case of PDAs.

To measure web browsing performance, we used a web browsing benchmark based on the Web Text Page Load Test from the Ziff-Davis i-Bench benchmark suite [7]. The benchmark consists of a JavaScript-controlled load of 55 pages from the web server. The pages contain both text and graphics, with pages varying in size. The graphics are embedded images in GIF and JPEG formats. The original
i-Bench benchmark was modified for slow-motion benchmarking by introducing delays of several seconds between the pages using JavaScript. Two tests were then run: one in which delays were added between each page, and one in which pages were loaded continuously without waiting for them to be displayed on the client. In the first test, delays were adjusted in each case to ensure that each page could be received and displayed on the client completely, without temporal overlap in transferring the data belonging to two consecutive pages. We used the packet monitor to record the packet traffic for each run of the benchmark, then used the timestamps of the first and last packet in the trace to obtain our latency measures [10]. The packet monitor also recorded the amount of data transmitted between the client and the server. The ratio between the data traffic in the two tests yields a scale factor. This scale factor shows the loss of data between the server and the client due to the inability of the client to process the data quickly enough. The product of the scale factor with the latency measurement produces the true latency accounting for client processing time.

To run the web browsing benchmark, we used Mozilla Firefox 1.0.4 running on the thin-client server for the thin clients, and Windows Internet Explorer (IE) Mobile for 2003 and Mobile for 5.0 for the native browsers on the X5 and X51v PDAs, respectively. In all cases, the web browser used was sized to fill the entire display region available.

To measure video playback performance, we used a video benchmark that consisted of playing a 34.75 s MPEG-1 video clip containing a mix of news and entertainment programming at full-screen resolution. The video clip is 5.11 MB and consists of 834 352×240 pixel frames with an ideal frame rate of 24 frames/sec. We measured video performance using slow-motion benchmarking by monitoring the resulting packet traffic at two playback rates, 1 frame per second (fps) and 24
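The slow-motion arithmetic described above reduces to two small functions; the names are illustrative:

```c
/* Slow-motion benchmarking arithmetic [13]. The scale factor is the
 * ratio of data transferred in the delayed run (everything displayed)
 * to data transferred in the continuous run (the client may drop
 * data). Multiplying the measured latency by it yields the true
 * latency, accounting for client processing time. */
static double scale_factor(double bytes_delayed, double bytes_continuous)
{
    return bytes_delayed / bytes_continuous;
}

static double true_latency(double measured_latency_s,
                           double bytes_delayed, double bytes_continuous)
{
    return measured_latency_s *
           scale_factor(bytes_delayed, bytes_continuous);
}
```

For example, if a client received only half the data in the continuous run, the scale factor is 2 and the measured latency is doubled.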
fps, and comparing the results to determine the playback delays and frame drops that occur at 24 fps, in order to measure overall video quality [13]. For example, 100% quality means that all video frames were played at real-time speed. On the other hand, 50% quality could mean that half the video data was dropped, or that the clip took twice as long to play even though all of the video data was displayed. To run the video benchmark, we used Windows Media Player 9 for Windows-based thin-client servers, MPlayer 1.0pre6 for X-based thin-client servers, and Windows Media Player 9 Mobile and 10 Mobile for the native video players running locally on the X5 and X51v PDAs, respectively. In all cases, the video player used was sized to fill the entire display region available.

4.3 Measurements

Figures 3 and 4 show the results of running the web browsing benchmark. For each platform, we show results for up to four different configurations, two on the X5 and two on the X51v, depending on whether each configuration was supported. However, not all platforms could support all configurations. The local browser only runs at the display resolution of the PDA, 480×640 or less for the X51v and the X5. RDP only runs at 640×480. Neither platform could support a 1024×768 display resolution. ICA only ran on the X5 and could not run on the X51v because it did not work on Windows Mobile 5.

Figure 3: Browsing Benchmark: Average Page Latency

Figure 3 shows the average latency per web page for each platform. pTHINC provides the lowest average web browsing latency on both PDAs. On the X5, pTHINC performs up to 70 times better than the other thin-client systems and 8 times better than the local browser. On the X51v, pTHINC performs up to 80 times better than the other thin-client systems and 7 times better than the
native browser. In fact, all of the thin clients except VNC outperform the local PDA browser, demonstrating the performance benefits of the thin-client approach. Usability studies have shown that web pages should take less than one second to download for the user to experience an uninterrupted browsing experience [14]. The measurements show that only the thin clients deliver subsecond web page latencies. In contrast, the local browser requires more than 3 seconds on average per web page. The local browser performs worse because it must process the HTML and JavaScript and do all of the rendering using the limited capabilities of the PDA, whereas the thin clients can take advantage of faster server hardware and a highly tuned web browser to process the web content much faster.

Figure 3 shows that RDP is the next fastest platform after pTHINC. However, RDP only runs at a fixed resolution of 640×480 with 8-bit color depth. Furthermore, RDP clips the display to the size of the PDA screen so that it does not need to send updates that are not visible there. This provides a performance benefit assuming the remaining web content is not viewed, but degrades performance when a user scrolls around the display to view other web content. RDP achieves its performance with significantly lower display quality than the other thin clients and with additional display clipping not used by the other systems. As a result, RDP performance alone does not provide a complete comparison with the other platforms. In contrast, pTHINC provides the fastest performance while providing equal or better display quality than the other systems.

Figure 4: Browsing Benchmark: Average Page Data Transferred

Since VNC and ICA provide similar display quality to pTHINC, these systems offer a fairer comparison of the different thin-client approaches. ICA performs worse in part because it uses higher-level display primitives that incur additional client processing costs. VNC performs worse in part because it loses display data due to its client-pull delivery mechanism, and because of the client processing costs of decompressing raw pixel primitives. In both cases, performance was limited in part because the PDA clients were unable to keep up with the rate at which web pages were being displayed.

Figure 3 also shows measurements for the thin clients that support resizing the display to fit the PDA screen, namely ICA and pTHINC. Resizing requires additional processing, which results in slower average web page latencies. The measurements show that the additional delay incurred by ICA when resizing versus not resizing is much more substantial than for pTHINC. ICA performs resizing on the slower PDA client. In contrast, pTHINC leverages the more powerful server to do the resizing, reducing the performance difference between resizing and not resizing. Unlike ICA, pTHINC is able to provide subsecond web page download latencies in both cases.

Figure 4 shows the data transferred in KB per page when running the slow-motion version of the tests. All of the platforms have modest data transfer requirements of roughly 100 KB per page or less, well within the bandwidth capacity of Wi-Fi networks. The measurements show that the local browser does not transfer the least amount of data. This is surprising, as HTML is often considered a very compact representation of content. Instead, RDP is the most bandwidth-efficient platform, largely as a result of using only 8-bit color depth and screen clipping so that it does not transfer the entire web page to the client. pTHINC has the largest data requirements overall, slightly more than VNC. This is largely a result of the current pTHINC prototype's lack of native support for 16-bit color data in the wire protocol. However, this result also highlights pTHINC's performance: it is faster than all of the other systems even while transferring more data. Furthermore, as newer PDA models support full 24-bit color, these results indicate that pTHINC will continue to provide good web browsing performance.

Since display usability and quality are as important as performance, Figures 5 to 8 compare screenshots of the different thin clients when displaying a web page, in this case from the popular BBC news website.

Figure 5: Browser Screenshot: RDP 640x480
Figure 6: Browser Screenshot: VNC 1024x768
Figure 7: Browser Screenshot: ICA Resized 1024x768
Figure 8: Browser Screenshot: pTHINC Resized 1024x768

Except for ICA, all of the screenshots were taken on the X51v in landscape mode using the maximum display resolution settings for each platform given in Table 2. The ICA screenshot was taken on the X5 since ICA does not run on the X51v. While the screenshots lack the visual fidelity of the actual device display, several observations can be made. Figure 5 shows that RDP does not support fullscreen mode and wastes significant screen space on controls and UI elements, requiring the user to scroll around in order to access the full contents of the web browsing session. Figure 6 shows that VNC makes better use of the screen space and provides better display quality, but still forces the user to scroll around to view the web page due to its lack of resizing support. Figure 7 shows ICA's ability to display the full web page given its resizing support, but its lack of landscape capability and poorer resize algorithm significantly compromise display quality. In contrast, Figure 8 shows pTHINC using resizing to provide a high-quality fullscreen display of the full width of the web page. pTHINC maximizes the entire viewing region by moving all controls to the PDA buttons. In addition, pTHINC leverages the
server computational power to use a high-quality resizing algorithm to resize the display to fit the PDA screen without significant overhead.

Figures 9 and 10 show the results of running the video playback benchmark. For each platform except ICA, we show results for an X5 and an X51v configuration; ICA could not run on the X51v, as noted earlier. The measurements were done using settings that reflect an environment in which a user accesses the same web session from both a desktop computer and a PDA. As such, a 1024×768 server display resolution was used whenever possible and the video was shown at fullscreen. RDP was limited to a 640×480 display resolution, as noted earlier. Since viewing the entire video display is the only really usable option, we resized the display to fit the PDA screen for the platforms that support this feature, namely ICA and pTHINC.

Figure 9: Video Benchmark: Fullscreen Video Quality
Figure 10: Video Benchmark: Fullscreen Video Data

Figure 9 shows the video quality for each platform. pTHINC is the only thin client able to provide perfect video playback quality, similar to the native PDA video player. All of the other thin clients deliver very poor video quality. With the exception of RDP on the X51v, which provided an unacceptable 35% video quality, none of the other systems was able to achieve even 10% video quality. VNC and ICA have the worst quality, at 8% on the X5 device. pTHINC's native video support enables superior video performance, while the other thin clients suffer from their inability to distinguish video from normal display updates. They apply ineffective and expensive compression algorithms to the video data and are unable to keep up with the stream of updates generated, resulting in dropped frames or long playback times. VNC suffers further from its client-pull update model, because video frames are generated faster than the rate at which the client can process them and send requests to the server for the next display update.

Figure 10 shows the total data transferred during video playback for each system. The native player is the most bandwidth-efficient platform, sending less than 6 MB of data, which corresponds to about 1.2 Mbps of bandwidth. pTHINC's 100% video quality requires about 25 MB of data, which corresponds to a bandwidth usage of less than 6 Mbps. While the other thin clients send less data than pTHINC, they do so because they are dropping video data, resulting in degraded video quality.

Figures 11 to 14 compare screenshots of the different thin clients when displaying the video clip. Except for ICA, all of the screenshots were taken on the X51v in landscape mode using the maximum display resolution settings for each platform given in Table 2. The ICA screenshot was taken on the X5 since ICA does not run on the X51v. Figures 11 and 12 show that RDP and VNC are unable to display the entire video frame on the PDA screen: RDP wastes screen space on UI elements, and VNC shows only the top corner of the video frame. Figure 13 shows that ICA provides resizing to display the entire video frame, but does not resize the video proportionally, resulting in display artifacts. In contrast, Figure 14 shows pTHINC using resizing to provide a high-quality fullscreen display of the entire video frame. pTHINC provides a visually more appealing video display than RDP, VNC, or ICA.

5. RELATED WORK

Several studies have examined the web browsing performance of thin-client computing [13, 19, 10]. The ability of thin clients to improve web browsing performance on wireless PDAs was first quantitatively demonstrated in a previous study by one of the authors [10]. This study demonstrated that thin clients can provide both faster web browsing performance and greater web
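As a quick sanity check on the bandwidth figures quoted for the 34.75 s video clip, the conversion from total data transferred to average bandwidth is a one-line calculation. This back-of-the-envelope sketch is our own illustration, not part of the paper's measurement toolchain, and assumes 1 MB = 8 megabits.

```python
CLIP_DURATION_S = 34.75  # length of the MPEG-1 test clip


def avg_bandwidth_mbps(data_mb: float, duration_s: float = CLIP_DURATION_S) -> float:
    """Average bandwidth in megabits per second for a transfer of data_mb."""
    return data_mb * 8 / duration_s


# pTHINC's ~25 MB at 100% quality stays under 6 Mbps:
assert 5.7 < avg_bandwidth_mbps(25) < 6.0
# The native player's < 6 MB corresponds to low single-digit Mbps:
assert avg_bandwidth_mbps(6) < 1.4
```

Both figures sit comfortably within typical Wi-Fi capacity, which is consistent with the thin-client results reported above.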
browsing functionality. The study considered a wide range of web content, including content from medical information systems. Our work builds on this previous study and considers important issues such as how usable existing thin clients are in PDA environments, the trade-offs between thin-client usability and performance, performance across different PDA devices, and the performance of thin clients on common web-related applications such as video.

Many thin clients have been developed, and some have PDA clients, including Microsoft's Remote Desktop [3], Citrix MetaFrame XP [2], Virtual Network Computing [16, 12], GoToMyPC [5], and Tarantella [18]. These systems were first designed for desktop computing and retrofitted for PDAs. Unlike pTHINC, they do not address key system architecture and usability issues important for PDAs. This limits their display quality, system performance, available screen space, and overall usability on PDAs. pTHINC builds on previous work by two of the authors on THINC [1], extending the server architecture and introducing a client interface and usage model to efficiently support PDA devices for mobile web applications.

Other approaches to improving the performance of mobile wireless web browsing have focused on using transcoding and caching proxies in conjunction with the fat-client model [11, 9, 4, 8]. They work by pushing functionality to external proxies and using specialized browsing applications on the PDA device that communicate with the proxy. Our thin-client approach differs fundamentally from these fat-client approaches by pushing all web browser logic to the server, leveraging existing investments in desktop web browsers and helper applications to work seamlessly with production systems without any additional proxy configuration or web browser modifications.

With the emergence of web browsing on small display devices, web sites have been redesigned using mechanisms like WAP, and specialized native web browsers have been developed to tailor to the needs of these devices. Recently, Opera has developed the Opera Mini [15] web browser, which uses an approach similar to the thin-client model to provide access across a number of mobile devices that would normally be incapable of running a web browser. Instead of requiring the device to process web pages, it uses a remote server to pre-process each page before sending it to the phone.

6. CONCLUSIONS

We have introduced pTHINC, a thin-client architecture for wireless PDAs. pTHINC provides key architectural and usability mechanisms, including server-side screen resizing, client-side screen rotation using simple copy techniques, YUV video support, and maximized screen space for display updates by leveraging existing PDA control buttons for UI elements. pTHINC transparently supports traditional desktop browsers and their helper applications on PDA devices and desktop machines, providing mobile users with ubiquitous access to a consistent, personalized, and full-featured web environment across heterogeneous devices. We have implemented pTHINC and measured its performance on web applications compared to existing thin-client systems and native web applications. Our results on multiple mobile wireless devices demonstrate that pTHINC delivers web browsing performance up to 80 times better than existing thin-client systems, and 8 times better than a native PDA browser.

Figure 11: Video Screenshot: RDP 640x480
Figure 12: Video Screenshot: VNC 1024x768
Figure 13: Video Screenshot: ICA Resized 1024x768
Figure 14: Video Screenshot: pTHINC Resized 1024x768

In addition, pTHINC is the only PDA thin client that transparently provides full-screen, full frame rate video playback, making web sites with multimedia content accessible to mobile web users.

7. ACKNOWLEDGEMENTS

This work was supported in part by NSF ITR grants CCR-0219943 and CNS-0426623, and an IBM SUR Award.

8. REFERENCES

[1] R. Baratto, L. Kim, and J.
Nieh. THINC: A Virtual Display Architecture for Thin-Client Computing. In Proceedings of the 20th ACM Symposium on Operating Systems Principles (SOSP), Oct. 2005.
[2] Citrix MetaFrame. http://www.citrix.com.
[3] B. C. Cumberland, G. Carius, and A. Muir. Microsoft Windows NT Server 4.0, Terminal Server Edition: Technical Reference. Microsoft Press, Redmond, WA, 1999.
[4] A. Fox, I. Goldberg, S. D. Gribble, and D. C. Lee. Experience With Top Gun Wingman: A Proxy-Based Graphical Web Browser for the 3Com PalmPilot. In Proceedings of Middleware '98, Lake District, England, September 1998.
[5] GoToMyPC. http://www.gotomypc.com/.
[6] Health Insurance Portability and Accountability Act. http://www.hhs.gov/ocr/hipaa/.
[7] i-Bench version 1.5. http://etestinglabs.com/benchmarks/i-bench/i-bench.asp.
[8] A. Joshi. On proxy agents, mobility, and web access. Mobile Networks and Applications, 5(4):233-241, 2000.
[9] J. Kangasharju, Y. G. Kwon, and A. Ortega. Design and Implementation of a Soft Caching Proxy. Computer Networks and ISDN Systems, 30(22-23):2113-2121, 1998.
[10] A. Lai, J. Nieh, B. Bohra, V. Nandikonda, A. P. Surana, and S. Varshneya. Improving Web Browsing on Wireless PDAs Using Thin-Client Computing. In Proceedings of the 13th International World Wide Web Conference (WWW), May 2004.
[11] A. Maheshwari, A. Sharma, K. Ramamritham, and P. Shenoy. TranSquid: Transcoding and caching proxy for heterogenous e-commerce environments. In Proceedings of the 12th IEEE Workshop on Research Issues in Data Engineering (RIDE '02), Feb. 2002.
[12] .NET VNC Viewer for PocketPC. http://dotnetvnc.sourceforge.net/.
[13] J. Nieh, S. J. Yang, and N. Novik. Measuring Thin-Client Performance Using Slow-Motion Benchmarking. ACM Trans. Computer Systems, 21(1):87-115, Feb. 2003.
[14] J. Nielsen. Designing Web Usability. New Riders Publishing, Indianapolis, IN, 2000.
[15] Opera Mini Browser. http://www.opera.com/products/mobile/operamini/.
[16] T. Richardson, Q. Stafford-Fraser, K. R. Wood, and A. Hopper. Virtual Network Computing. IEEE Internet Computing, 2(1), Jan./Feb. 1998.
[17] R. W. Scheifler and J. Gettys. The X Window System. ACM Trans. Gr., 5(2):79-106, Apr. 1986.
[18] Sun Secure Global Desktop. http://www.sun.com/software/products/sgd/.
[19] S. J. Yang, J. Nieh, S. Krishnappa, A. Mohla, and M. Sajjadpour. Web Browsing Performance of Wireless Thin-Client Computing. In Proceedings of the 12th International World Wide Web Conference (WWW), May 2003.
the authors [10].\nThis study demonstrated that thin clients can provide both faster web browsing performance and greater web browsing functionality.\nThe study considered a wide range of web content including content from medical information systems.\nOur work builds on this previous study and consider important issues such as how usable existing thin clients are in PDA environments, the trade-offs between thin-client usability and performance, performance across different PDA devices, and the performance of thin clients on common web-related applications such as video.\nThese systems were first designed for desktop computing and retrofitted for PDAs.\nUnlike pTHINC, they do not address key system architecture and usability issues important for PDAs.\nFigure 10: Video Benchmark: Fullscreen Video Data\nThis limits their display quality, system performance, available screen space, and overall usability on PDAs.\npTHINC builds on previous work by two of the authors on THINC [1], extending the server architecture and introducing a client interface and usage model to efficiently support PDA devices for mobile web applications.\nOther approaches to improve the performance of mobile wireless web browsing have focused on using transcoding and caching proxies in conjunction with the fat client model [11, 9, 4, 8].\nThey work by pushing functionality to external proxies, and using specialized browsing applications on the PDA device that communicate with the proxy.\nWith the emergence of web browsing on small display devices, web sites have been redesigned using mechanisms like WAP and specialized native web browsers have been developed to tailor the needs of these devices.\nRecently, Opera has developed the Opera Mini [15] web browser, which uses an approach similar to the thin-client model to provide access across a number of mobile devices that would normally be incapable of running a web browser.\nInstead of requiring the device to process web pages, it uses a remote 
server to pre-process the page before sending it to the phone.\n6.\nCONCLUSIONS\nWe have introduced pTHINC, a thin-client architecture for wireless PDAs.\npTHINC provides key architectural and usability mechanisms such as server-side screen resizing, clientside screen rotation using simple copy techniques, YUV video support, and maximizing screen space for display updates and leveraging existing PDA control buttons for UI elements.\npTHINC transparently supports traditional desktop browsers and their helper applications on PDA devices and desktop machines, providing mobile users with ubiquitous access to a consistent, personalized, and full-featured web environment across heterogeneous devices.\nWe have implemented pTHINC and measured its performance on web applications compared to existing thin-client systems and native web applications.\nOur results on multiple mobile wireless devices demonstrate that pTHINC delivers web browsing performance up to 80 times better than existing thin-client systems, and 8 times better than a native PDA browser.\nIn addition, pTHINC is the only PDA thin client\nFigure 9: Video Benchmark: Fullscreen Video Quality\nthat transparently provides full-screen, full frame rate video playback, making web sites with multimedia content accessible to mobile web users.","lvl-2":"pTHINC: A Thin-Client Architecture for Mobile Wireless Web\nABSTRACT\nAlthough web applications are gaining popularity on mobile wireless PDAs, web browsers on these systems can be quite slow and often lack adequate functionality to access many web sites.\nWe have developed pTHINC, a PDA thinclient solution that leverages more powerful servers to run full-function web browsers and other application logic, then sends simple screen updates to the PDA for display.\npTHINC uses server-side screen scaling to provide high-fidelity display and seamless mobility across a broad range of different clients and screen sizes, including both portrait and landscape viewing 
modes. pTHINC also leverages existing PDA control buttons to improve system usability and maximize available screen resolution for application display. We have implemented pTHINC on Windows Mobile and evaluated its performance on mobile wireless devices. Our results compared to local PDA web browsers and other thin-client approaches demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback.

Categories and Subject Descriptors: C.2.4 Computer-Communication Networks: Distributed Systems--client/server

1. INTRODUCTION

The increasing ubiquity of wireless networks and decreasing cost of hardware is fueling a proliferation of mobile wireless handheld devices, both as standalone wireless Personal Digital Assistants (PDAs) and popular integrated PDA/cell phone devices. These devices are enabling new forms of mobile computing and communication. Service providers are leveraging these devices to deliver pervasive web access, and mobile web users already often use these devices to access web-enabled information such as news, email, and localized travel guides and maps. It is likely that within a few years, most of the devices accessing the web will be mobile.

Users typically access web content by running a web browser and associated helper applications locally on the PDA. Although native web browsers exist for PDAs, they deliver subpar performance and have a much smaller feature set and more limited functionality than their desktop computing counterparts [10]. As a result, PDA web browsers are often unable to display web content from web sites that leverage more advanced web technologies to deliver a richer web experience. This fundamental problem arises for two reasons. First, because PDAs have a completely different hardware/software environment from traditional desktop computers, web applications must be rewritten and customized for PDAs, if that is possible at all, duplicating development costs. Because the desktop application market is larger and more mature, most development effort generally ends up being spent on desktop applications, resulting in greater functionality and performance than their PDA counterparts. Second, PDAs have a more resource-constrained environment than traditional desktop computers in order to provide a smaller form factor and longer battery life. Desktop web browsers are large, complex applications that are unable to run on a PDA. Instead, developers are forced to significantly strip down these web browsers to provide a usable PDA web browser, thereby crippling PDA browser functionality.

Thin-client computing provides an alternative approach for enabling pervasive web access from handheld devices. A thin-client computing system consists of a server and a client that communicate over a network using a remote display protocol. The protocol enables graphical displays to be virtualized and served across a network to a client device, while application logic is executed on the server. Using the remote display protocol, the client transmits user input to the server, and the server returns screen updates of the applications to the client. Using a thin-client model for mobile handheld devices, PDAs can become simple stateless clients that leverage remote server capabilities to execute web browsers and other helper applications.

The thin-client model provides several important benefits for mobile wireless web. First, standard desktop web applications can be used to deliver web content to PDAs without rewriting or adapting applications to execute on a PDA, reducing development costs and leveraging existing software investments. Second, complex web applications can be executed on powerful servers instead of running stripped-down versions on more resource-constrained PDAs, providing greater functionality and better performance [10]. Third, web
applications can take advantage of servers with faster networks and better connectivity, further boosting application performance. Fourth, PDAs can be even simpler devices since they do not need to perform complex application logic, potentially reducing energy consumption and extending battery life. Finally, PDA thin clients can be essentially stateless appliances that do not need to be backed up or restored, require almost no maintenance or upgrades, and do not store any sensitive data that can be lost or stolen. This model provides a viable avenue for medical organizations to comply with HIPAA regulations [6] while embracing mobile handhelds in their day-to-day operations.

Despite these potential advantages, thin clients have been unable to provide the full range of these benefits in delivering web applications to mobile handheld devices. Existing thin clients were not designed for PDAs and do not account for important usability issues in the context of small form factor devices, resulting in difficulty navigating displayed web content. Furthermore, existing thin clients are ineffective at providing seamless mobility across the heterogeneous mix of device display sizes and resolutions. While existing thin clients can already provide faster performance than native PDA web browsers in delivering HTML web content, they do not effectively support more display-intensive web helper applications such as multimedia video, which is increasingly an integral part of available web content.

To harness the full potential of thin-client computing in providing mobile wireless web on PDAs, we have developed pTHINC (PDA THin-client InterNet Computing). pTHINC builds on our previous work on THINC [1] to provide a thin-client architecture for mobile handheld devices. pTHINC virtualizes and resizes the display on the server to efficiently deliver high-fidelity screen updates to a broad range of different clients, screen sizes, and screen orientations, including both portrait and landscape viewing modes. This enables pTHINC to provide the same persistent web session across different client devices. For example, pTHINC can provide the same web browsing session appropriately scaled for display on a desktop computer and a PDA, so that the same cookies, bookmarks, and other meta-data are continuously available on both machines simultaneously. pTHINC's virtual display approach leverages semantic information available in display commands, and client-side video hardware, to provide more efficient remote display mechanisms that are crucial for supporting more display-intensive web applications. Given the limited display resolution of PDAs, pTHINC maximizes the use of screen real estate for remote display by moving control functionality from the screen to readily available PDA control buttons, improving system usability.

We have implemented pTHINC on Windows Mobile and demonstrated that it works transparently with existing applications, window systems, and operating systems, and does not require modifying, recompiling, or relinking existing software. We have quantitatively evaluated pTHINC against local PDA web browsers and other thin-client approaches on Pocket PC devices. Our experimental results demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback.

This paper presents the design and implementation of pTHINC. Section 2 describes the overall usage model and usability characteristics of pTHINC. Section 3 presents the design and system architecture of pTHINC. Section 4 presents experimental results measuring the performance of pTHINC on web applications and comparing it against native PDA browsers and other popular PDA thin-client systems. Section 5 discusses related work. Finally, we present some concluding remarks.

2. PTHINC USAGE MODEL

pTHINC is a thin-client system that consists of a simple
client viewer application that runs on the PDA and a server that runs on a commodity PC. The server leverages more powerful PCs to run web browsers and other application logic. The client takes user input from the PDA stylus and virtual keyboard and sends it to the server to pass to the applications. Screen updates are then sent back from the server to the client for display to the user.

When the pTHINC PDA client is started, the user is presented with a simple graphical interface where information such as server address and port, user authentication information, and session settings can be provided. pTHINC first attempts to connect to the server and perform the necessary handshaking. Once this process has been completed, pTHINC presents the user with the most recent display of his session. If the session does not exist, a new session is created. Existing sessions can be seamlessly continued without changes in the session settings or server configuration.

Unlike other thin-client systems, pTHINC provides a user with a persistent web session model in which a user can launch a session running a web browser and associated applications at the server, then disconnect from that session and reconnect to it again at any time. When a user reconnects to the session, all of the applications continue running where the user left off, so that the user can continue working as though he or she never disconnected. The ability to disconnect and reconnect to a session at any time is an important benefit for mobile wireless PDA users, who may have intermittent network connectivity.

pTHINC's persistent web session model enables a user to reconnect to a web session from devices other than the one on which the web session was originally initiated. This provides users with seamless mobility across different devices. If a user loses his PDA, he can easily use another PDA to access his web session. Furthermore, pTHINC allows users to use non-PDA devices to access web sessions as well. A user can access the same persistent web session on a desktop PC as on a PDA, enabling a user to use the same web session from any computer.

pTHINC's persistent web session model addresses a key problem encountered by mobile web users: the lack of a common web environment across computers. Web browsers often store important information such as bookmarks, cookies, and history, which enables them to function in a much more useful manner. The problem that occurs when a user moves between computers is that this data, which is specific to a web browser installation, cannot move with the user. Furthermore, web browsers often need helper applications to process different media content, and those applications may not be consistently available across all computers. pTHINC addresses this problem by enabling a user to remotely use the exact same web browser environment and helper applications from any computer. As a result, pTHINC can provide a common, consistent web browsing environment for mobile users across different devices without requiring them to repeatedly synchronize different web browsing environments across multiple machines.

Figure 1: pTHINC shortcut keys

To enable a user to access the same web session on different devices, pTHINC must provide mechanisms to support different display sizes and resolutions. Toward this end, pTHINC provides a zoom feature that enables a user to zoom in and out of a display and allows the display of a web session to be resized to fit the screen of the device being used. For example, if the server is running a web session at 1024 × 768 but the client is a PDA with a display resolution of 640 × 480, pTHINC will resize the desktop display to fit the full display in the smaller screen of the PDA. pTHINC provides the PDA user with the option to increase the size of the display by zooming in to different parts of the display. Users are often familiar with the general layout of commonly visited
websites, and are able to leverage this resizing feature to better navigate through web pages. For example, a user can zoom out of the display to view the entire page content and navigate hyperlinks, then zoom in to a region of interest for a better view.

To enable a user to access the same web session on different devices, pTHINC must also provide mechanisms to support different display orientations. In a desktop environment, users are typically accustomed to having displays presented in landscape mode, where the screen width is larger than its height. However, in a PDA environment, the choice is not always obvious. Some users may prefer having the display in portrait mode, as it is easier to hold the device in their hands, while others may prefer landscape mode in order to minimize the amount of side-scrolling necessary to view a web page. To accommodate PDA user preferences, pTHINC provides an orientation feature that enables it to seamlessly rotate the display between landscape and portrait modes. The landscape mode is particularly useful for pTHINC users who frequently access their web sessions on both desktop and PDA devices, providing those users with the same familiar landscape setting across different devices.

Because screen space is a relatively scarce resource on PDAs, pTHINC runs in fullscreen mode to maximize the screen area available to display the web session. To be able to use all of the screen on the PDA and still allow the user to control and interact with it, pTHINC reuses the typical shortcut buttons found on PDAs to perform all the control functions available to the user. The buttons used by pTHINC do not require any OS environment changes; they are simply intercepted by the pTHINC client application when they are pressed. Figure 1 shows how pTHINC utilizes the shortcut buttons to provide easy navigation and improve the overall user experience. These buttons are not device specific, and the layout shown is common to widely used PocketPC devices. pTHINC provides six shortcuts to support its usage model:

• Rotate Screen: The record button on the left edge is used to rotate the screen between portrait and landscape mode each time the button is pressed.
• Zoom Out: The leftmost button on the bottom front is used to zoom out the display of the web session, providing a bird's eye view of the web session.
• Zoom In: The second leftmost button on the bottom front is used to zoom in the display of the web session to more clearly view content of interest.
• Directional Scroll: The middle button on the bottom front is used to scroll around the display using a single control button in a way that is already familiar to PDA users. This feature is particularly useful when the user has zoomed in to a region of the display such that only part of the display is visible on the screen.
• Show/Hide Keyboard: The second rightmost button on the bottom front is used to bring up a virtual keyboard drawn on the screen for devices that have no physical keyboard. The virtual keyboard uses standard PDA OS mechanisms, providing portability across different PDA environments.
• Close Session: The rightmost button on the bottom front is used to disconnect from the pTHINC session.

pTHINC uses the PDA touch screen, stylus, and standard user interface mechanisms to provide a point-and-click user interface metaphor similar to that provided by the mouse in a traditional desktop computing environment. pTHINC does not use a cursor, since PDA environments do not provide one. Instead, a user can use the stylus to tap on different sections of the touch screen to indicate input focus. A single tap on the touch screen generates a corresponding single-click mouse event. A double tap on the touch screen generates a corresponding double-click mouse event. pTHINC provides two-button mouse emulation by using the stylus to press down on the screen for one second to generate a right mouse click. All of
these actions are identical to the way users already interact with PDA applications in the common PocketPC environment. In web browsing, users can click on hyperlinks and focus on input boxes by simply tapping on the desired screen area of interest. Unlike local PDA web browsers and other PDA applications, pTHINC leverages more powerful desktop user interface metaphors to enable users to manipulate multiple open application windows instead of being limited to a single application window at any given moment. This provides increased browsing flexibility beyond what is currently available on PDA devices. Similar to a desktop environment, browser windows and other application windows can be moved around by pressing down and dragging the stylus, much like a mouse.

3. PTHINC SYSTEM ARCHITECTURE

pTHINC builds on the THINC [1] remote display architecture to provide a thin-client system for PDAs. pTHINC virtualizes the display at the server by leveraging the video device abstraction layer, which sits below the window server and above the framebuffer. This is a well-defined, low-level, device-dependent layer that exposes the video hardware to the display system. pTHINC accomplishes this through a simple virtual display driver that intercepts drawing commands, packetizes them, and sends them over the network.

While other thin-client approaches intercept display commands at other layers of the display subsystem, pTHINC's display virtualization approach provides some key benefits in efficiently supporting PDA clients. For example, intercepting display commands at a higher layer, between applications and the window system, as is done by X [17], requires replicating and running a great deal of functionality on the PDA that is traditionally provided by the desktop window system. Given both the size and complexity of traditional window systems, attempting to replicate this functionality in the restricted PDA environment would have proven to be a daunting, and perhaps unfeasible, task. Furthermore, applications and the window system often require tight synchronization in their operation, and imposing a wireless network between them by running the applications on the server and the window system on the client would significantly degrade performance. On the other hand, intercepting at a lower layer by extracting pixels out of the framebuffer as they are rendered provides a simple solution that requires very little functionality on the PDA client, but can also result in degraded performance. The reason is that by the time the remote display server attempts to send screen updates, it has lost all semantic information that may have helped it encode efficiently, and it must resort to using a generic and expensive encoding mechanism on the server, as well as a potentially expensive decoding mechanism on the limited PDA client. In contrast to both the high- and low-level interception approaches, pTHINC's approach of intercepting at the device driver provides an effective balance between client and server simplicity and the ability to efficiently encode and decode screen updates.

By using a low-level virtual display approach, pTHINC can efficiently encode application display commands using only a small set of low-level commands. This small command set is a crucial component in maintaining the simplicity of the client in the resource-constrained PDA environment. The display commands are shown in Table 1, and work as follows. COPY instructs the client to copy a region of the screen from its local framebuffer to another location. This command improves the user experience by accelerating scrolling and opaque window movement without having to resend screen data from the server. SFILL, PFILL, and BITMAP are commands that paint a fixed-size region on the screen. They are useful for accelerating the display of solid window backgrounds, desktop patterns, backgrounds of web pages, text drawing, and certain
operations in graphics manipulation programs. SFILL fills a sizable region on the screen with a single color. PFILL replicates a tile over a screen region. BITMAP performs a fill using a bitmap of ones and zeros as a stipple to apply a foreground and background color. Finally, RAW is used to transmit unencoded pixel data to be displayed verbatim on a region of the screen. This command is invoked as a last resort if the server is unable to employ any other command, and it is the only command that may be compressed to mitigate its impact on network bandwidth.

Table 1: pTHINC Protocol Display Commands

pTHINC delivers its commands using a non-blocking, server-push update mechanism: as soon as display updates are generated on the server, they are sent to the client. Clients are not required to explicitly request display updates, thus minimizing the impact that the typically varying network latency of wireless links may have on the responsiveness of the system. Keeping in mind that resource-constrained PDAs and wireless networks may not be able to keep up with a fast server generating a large number of updates, pTHINC is able to coalesce, clip, and discard updates automatically if network loss or congestion occurs, or if the client cannot keep up with the rate of updates. This behavior proves crucial in a web browsing environment where, for example, a page may be redrawn multiple times as it is rendered on the fly by the browser. In this case, the PDA will only receive and render the final result, which is clearly all the user is interested in seeing.

pTHINC prioritizes the delivery of updates to the PDA using a Shortest-Remaining-Size-First (SRSF) preemptive update scheduler. SRSF is analogous to Shortest-Remaining-Processing-Time scheduling, which is known to be optimal for minimizing mean response time in an interactive system. In a web browsing environment, short jobs are associated with text and basic page layout components such as the page's background, which are critical web content for the user. On the other hand, large jobs are often lower-priority "beautifying" elements or, even worse, web page banners and advertisements, which are of questionable value to the user as he or she is browsing the page. Using SRSF, pTHINC is able to maximize the utilization of the relatively scarce bandwidth available on the wireless connection between the PDA and the server.

3.1 Display Management

To enable users to just as easily access their web browser and helper applications from a desktop computer at home as from a PDA while on the road, pTHINC provides a resize mechanism to zoom in and out of the display of a web session. pTHINC resizing is completely supported by the server, not the client. The server resamples updates to fit within the PDA's viewport before they are transmitted over the network. pTHINC uses Fant's resampling algorithm to resize pixel updates. This provides smooth, visually pleasing updates with proper antialiasing and has only modest computational requirements.

pTHINC's resizing approach has a number of advantages. First, it allows the PDA to leverage the vastly superior computational power of the server to use high-quality resampling algorithms and produce higher-quality updates for the PDA to display. Second, resizing the screen does not translate into additional resource requirements for the PDA, since it does not need to perform any additional work. Finally, better utilization of the wireless network is attained, since rescaling the updates reduces their bandwidth requirements.

To enable users to orient their displays on a PDA to provide a viewing experience that best accommodates user preferences and the layout of web pages or applications, pTHINC provides a display rotation mechanism to switch between landscape and portrait viewing modes. pTHINC display rotation is completely supported by the client, not the server. pTHINC does not explicitly recalculate the geometry of display updates to perform rotation, which would be computationally expensive. Instead, pTHINC simply changes the way data is copied into the framebuffer to switch between display modes. When in portrait mode, data is copied along the rows of the framebuffer from left to right. When in landscape mode, data is copied along the columns of the framebuffer from top to bottom. These very fast and simple techniques replace one set of copy operations with another and impose no performance overhead. pTHINC provides its own rotation mechanism to support a wide range of devices without imposing additional feature requirements on the PDA. Although some newer PDA devices provide native support for different orientations, this mechanism is not dynamic and requires the user to rotate the PDA's entire user interface before starting the pTHINC client. Windows Mobile provides native API mechanisms for PDA applications to rotate their UI on the fly, but these mechanisms deliver poor performance and display quality, as the rotation is performed naively and is not completely accurate.

3.2 Video Playback

Video has gradually become an integral part of the World Wide Web, and its presence will only continue to increase. Web sites today not only use animated graphics and Flash to deliver web content in an attractive manner, but also utilize streaming video to enrich the web interface. Users are able to view pre-recorded and live newscasts on CNN, watch sports highlights on ESPN, and even search through large collections of videos on Google Video. To allow applications to provide efficient video playback, interfaces have been created in display systems that allow video device drivers to expose their hardware capabilities back to the applications. pTHINC takes advantage of these interfaces and its virtual device driver approach to provide a virtual bridge between the remote client with its hardware and the applications, and transparently supports video playback. On top of this
architecture, pTHINC uses the YUV colorspace to encode the video content, which provides a number of benefits.\nFirst, it has become increasingly common for PDA video hardware to natively support YUV and be able to perform the colorspace conversion and scaling automatically.\nAs a result, pTHINC is able to provide fullscreen video playback without any performance hit.\nSecond, the use of YUV allows for a more efficient representation of RGB data without loss of quality, by taking advantage of the human eye's ability to better distinguish differences in brightness than in color.\nIn particular, pTHINC uses the YV12 format, which allows full color RGB data to be encoded using just 12 bits per pixel.\nThird, YUV data is produced as one of the last steps of the decoding process of most video codecs, allowing pTHINC to provide video playback in a manner that is format independent.\nFinally, even if the PDA's video hardware is unable to accelerate playback, the colorspace conversion process is simple enough that it does not impose unreasonable requirements on the PDA.\nA more concrete example of how pTHINC leverages the PDA video hardware to support video playback can be seen in our prototype implementation on the popular Dell Axim X51v PDA, which is equipped with the Intel 2700G multimedia accelerator.\nIn this case, pTHINC creates an offscreen buffer in video memory and reads and writes YV12-format data in this memory region.\nWhen a new video frame arrives, video data is copied from the buffer to\nFigure 2: Experimental Testbed\nan overlay surface in video memory, which is independent of the normal surface used for traditional drawing.\nAs the YV12 data is put onto the overlay, the Intel accelerator automatically performs both colorspace conversion and scaling.\nBy using the overlay surface, pTHINC has no need to redraw the screen once video playback is over since the underlying surface is unaffected.\nIn addition, specific overlay regions can be manipulated
by leveraging the video hardware, for example to perform hardware linear interpolation to smooth out the frame and display it fullscreen, and to do automatic rotation when the client runs in landscape mode.\n4.\nEXPERIMENTAL RESULTS\nWe have implemented a pTHINC prototype that runs the client on widely-used Windows Mobile-based Pocket PC devices and the server on both Windows and Linux operating systems.\nTo demonstrate its effectiveness in supporting mobile wireless web applications, we have measured its performance on web applications.\nWe present experimental results on different PDA devices for two popular web applications, browsing web pages and playing video content from the web.\nWe compared pTHINC against native web applications running locally on the PDA to demonstrate the improvement that pTHINC can provide over the traditional fat-client approach.\nWe also compared pTHINC against three of the most widely used thin clients that can run on PDAs, Citrix MetaFrameXP [2], Microsoft Remote Desktop [3] and VNC (Virtual Network Computing) [16].\nWe follow common practice and refer to Citrix MetaFrameXP and Microsoft Remote Desktop by their respective remote display protocols, ICA (Independent Computing Architecture) and RDP (Remote Desktop Protocol).\n4.1 Experimental Testbed\nWe conducted our web experiments using two different wireless Pocket PC PDAs in an isolated Wi-Fi network testbed, as shown in Figure 2.\nThe testbed consisted of two PDA client devices, a packet monitor, a thin-client server, and a web server.\nExcept for the PDAs, all of the other machines were IBM Netfinity 4500R servers with dual 933 MHz Intel PIII CPUs and 512 MB RAM and were connected on a switched 100 Mbps FastEthernet network.\nThe web server used was Apache 1.3.27, the network emulator was NISTNet 2.0.12, and the packet monitor was Ethereal 0.10.9.\nThe PDA clients connected to the testbed through an 802.11b Lucent Orinoco AP-2000 wireless access point.\nAll experiments using the
wireless network were conducted within ten feet of the access point, so we considered the amount of packet loss to be negligible in our experiments.\nTwo Pocket PC PDAs were used to provide results across both older, less powerful models and newer higher performance models.\nThe older model was a Dell Axim X5 with\nTable 2: Thin-client Testbed Configuration Setting\na 400 MHz Intel XScale PXA255 CPU and 64 MB RAM running Windows Mobile 2003 and a Dell TrueMobile 1180 2.4 GHz CompactFlash card for wireless networking.\nThe newer model was a Dell Axim X51v with a 624 MHz Intel XScale PXA270 CPU and 64 MB RAM running Windows Mobile 5.0 and integrated 802.11b wireless networking.\nThe X51v has an Intel 2700G multimedia accelerator with 16 MB video memory.\nBoth PDAs are capable of 16-bit color but have different screen sizes and display resolutions.\nThe X5 has a 3.5 inch diagonal screen with 240 \u00d7 320 resolution.\nThe X51v has a 3.7 inch diagonal screen with 480 \u00d7 640 resolution.\nThe four thin clients that we used support different levels of display quality as summarized in Table 2.\nThe RDP client only supports a fixed 640 \u00d7 480 display resolution on the server with 8-bit color depth, while the other platforms provide higher levels of display quality.\nTo provide a fair comparison across all platforms, we conducted our experiments with thin-client sessions configured for two possible resolutions, 1024 \u00d7 768 and 640 \u00d7 480.\nBoth ICA and VNC were configured to use the native PDA 16-bit color depth.\nThe current pTHINC prototype uses 24-bit color directly and the client downsamples updates to the 16-bit color depth available on the PDA.\nRDP was configured using only 8-bit color depth since it does not support any better color depth.\nSince both pTHINC and ICA provide the ability to view the display resized to fit the screen, we measured both clients with and without the display resized to fit the PDA screen.\nEach thin client was tested using
landscape rather than portrait mode when available.\nAll of the systems running on the X51v could run in landscape mode because the hardware provides a landscape mode feature.\nHowever, the X5 does not provide this functionality.\nOnly pTHINC directly supports landscape mode, so it was the only system that could run in landscape mode on both the X5 and X51v.\nTo provide a fair comparison, we also standardized on common hardware and operating systems whenever possible.\nAll of the systems used the Netfinity server as the thin-client server.\nFor the two systems designed for Windows servers, ICA and RDP, we ran Windows Server 2003 on the server.\nFor the other systems which support X-based servers, VNC and pTHINC, we ran the Debian Linux unstable distribution with the Linux 2.6.10 kernel on the server.\nWe used the latest thin-client server versions available on each platform at the time of our experiments, namely Citrix MetaFrame XP Server for Windows Feature Release 3, Microsoft Remote Desktop built into Windows XP and Windows 2003 using RDP 5.2, and VNC 4.0.\n4.2 Application Benchmarks\nWe used two web application benchmarks for our experiments based on two common application scenarios, browsing web pages and playing video content from the web.\nSince many thin-client systems, including two of the ones tested, are closed and proprietary, we measured their performance in a noninvasive manner by capturing network traffic with a packet monitor and using a variant of slow-motion benchmarking [13] previously developed to measure thin-client performance in PDA environments [10].\nThis measurement methodology accounts for both the display decoupling that can occur between client and server in thin-client systems as well as client processing time, which may be significant in the case of PDAs.\nTo measure web browsing performance, we used a web browsing benchmark based on the Web Text Page Load Test from the Ziff-Davis i-Bench benchmark suite [7].\nThe benchmark consists of JavaScript
controlled load of 55 pages from the web server.\nThe pages contain both text and graphics and vary in size.\nThe graphics are embedded images in GIF and JPEG formats.\nThe original i-Bench benchmark was modified for slow-motion benchmarking by introducing delays of several seconds between the pages using JavaScript.\nThen two tests were run, one where delays were added between each page, and one where pages were loaded continuously without waiting for them to be displayed on the client.\nIn the first test, delays were adjusted in each case to ensure that each page could be received and displayed on the client completely without temporal overlap in transferring the data belonging to two consecutive pages.\nWe used the packet monitor to record the packet traffic for each run of the benchmark, then used the timestamps of the first and last packet in the trace to obtain our latency measures [10].\nThe packet monitor also recorded the amount of data transmitted between the client and the server.\nThe ratio between the data traffic in the two tests yields a scale factor.\nThis scale factor shows the loss of data between the server and the client due to the inability of the client to process the data quickly enough.\nThe product of the scale factor with the latency measurement produces the true latency accounting for client processing time.\nTo run the web browsing benchmark, we used Mozilla Firefox 1.0.4 running on the thin-client server for the thin clients, and Windows Internet Explorer (IE) Mobile for Windows Mobile 2003 and 5.0 for the native browsers on the X5 and X51v PDAs, respectively.\nIn all cases, the web browser used was sized to fill the entire display region available.\nTo measure video playback performance, we used a video benchmark that consisted of playing a 34.75 s MPEG-1 video clip containing a mix of news and entertainment programming at full-screen resolution.\nThe video clip is 5.11 MB and consists of 834 352x240 pixel frames with
an ideal frame rate of 24 frames\/sec.\nWe measured video performance using slow-motion benchmarking by monitoring resulting packet traffic at two playback rates, 1 frame\/second (fps) and 24 fps, and comparing the results to determine playback delays and frame drops that occur at 24 fps to measure overall video quality [13].\nFor example, 100% quality means that all video frames were played at real-time speed.\nOn the other hand, 50% quality could mean that half the video data was dropped, or that the clip took twice as long to play even though all of the video data was displayed.\nTo run the video benchmark, we used Windows Media Player 9 for Windows-based thin-client servers, MPlayer 1.0pre6 for X-based thin-client servers, and Windows Media Player 9 Mobile and 10 Mobile for the native video players running locally on the X5 and X51v PDAs, respectively.\nIn all cases, the video player used was sized to fill the entire display region available.\n4.3 Measurements\nFigures 3 and 4 show the results of running the web browsing benchmark.\nFor each platform, we show results for up to four different configurations, two on the X5 and two on the X51v, depending on whether each configuration was supported.\nHowever, not all platforms could support all configurations.\nThe local browser only runs at the display resolution of the PDA, 480 \u00d7 640 or less for the X51v and the X5.\nRDP only runs at 640 \u00d7 480.\nNeither platform could support 1024 \u00d7 768 display resolution.\nICA only ran on the X5 and could not run on the X51v because it did not work on Windows Mobile 5.\nFigure 3 shows the average latency per web page for each platform.\npTHINC provides the lowest average web browsing latency on both PDAs.\nOn the X5, pTHINC performs up to 70 times better than other thin-client systems and 8 times better than the local browser.\nOn the X51v, pTHINC performs up to 80 times better than other thin-client systems and 7 times better than the native browser.\nIn
fact, all of the thin clients except VNC outperform the local PDA browser, demonstrating the performance benefits of the thin-client approach.\nUsability studies have shown that web pages should take less than one second to download for the user to experience an uninterrupted web browsing experience [14].\nThe measurements show that only the thin clients deliver subsecond web page latencies.\nIn contrast, the local browser requires more than 3 seconds on average per web page.\nThe local browser performs worse because it must process the HTML and JavaScript and do all of the rendering using the limited capabilities of the PDA.\nThe thin clients can take advantage of faster server hardware and a highly tuned web browser to process the web content much faster.\nFigure 3 shows that RDP is the next fastest platform after pTHINC.\nHowever, RDP is only able to run at a fixed resolution of 640 \u00d7 480 and 8-bit color depth.\nFurthermore, RDP also clips the display to the size of the PDA screen so that it does not need to send updates that are not visible on the PDA screen.\nThis provides a performance benefit assuming the remaining web content is not viewed, but degrades performance when a user scrolls around the display to view other web content.\nRDP achieves its performance with significantly lower display quality compared to the other thin clients and with additional display clipping not used by other systems.\nAs a result, RDP performance alone does not provide a complete comparison with the other platforms.\nIn contrast, pTHINC provides the fastest performance while at the same time providing equal or better display quality than the other systems.\nSince VNC and ICA provide similar display quality to pTHINC, these systems provide a fairer comparison of different thin-client approaches.\nICA performs worse in part because it uses higher-level display primitives that require additional client processing costs.\nVNC performs worse in
part because it loses display data due to its client-pull delivery mechanism and because of the client processing costs in decompressing raw pixel primitives.\nIn both cases, their performance was limited in part because their PDA clients were unable to keep up with the rate at which web pages were being displayed.\nFigure 3 also shows measurements for those thin clients that support resizing the display to fit the PDA screen, namely ICA and pTHINC.\nResizing requires additional processing, which results in slower average web page latencies.\nThe measurements show that the additional delay incurred by ICA when resizing versus not resizing is much more substantial than for pTHINC.\nICA performs resizing on the slower PDA client.\nIn contrast, pTHINC leverages the more powerful server to do resizing, reducing the performance difference between resizing and not resizing.\nUnlike ICA, pTHINC is able to provide subsecond web page download latencies in both cases.\nFigure 4 shows the data transferred in KB per page when running the slow-motion version of the tests.\nAll of the platforms have modest data transfer requirements of roughly 100 KB per page or less.\nThis is well within the bandwidth capacity of Wi-Fi networks.\nThe measurements show that the local browser does not transfer the least amount of data.\nThis is surprising as HTML is often considered to be a very compact representation of content.\nInstead, RDP is the most bandwidth efficient platform, largely as a result of using only 8-bit color depth and screen clipping so that it does not transfer the entire web page to the client.\npTHINC overall has the largest data requirements, slightly more than VNC.\nThis is largely a result of the current pTHINC prototype's lack of native support for 16-bit color data in the wire protocol.\nHowever, this result also highlights pTHINC's performance as it is faster than all other systems even while transferring more data.\nFurthermore, as newer PDA models support full
24-bit color, these results indicate that pTHINC will continue to provide good web browsing performance.\nSince display usability and quality are as important as performance, Figures 5 to 8 compare screenshots of the different thin clients when displaying a web page, in this case from the popular BBC news website.\nExcept for ICA, all of the screenshots were taken on the X51v in landscape mode\nFigure 3: Browsing Benchmark: Average Page Latency\nFigure 4: Browsing Benchmark: Average Page Data\nFigure 5: Browser Screenshot: RDP 640x480 Figure 6: Browser Screenshot: VNC 1024x768 Figure 7: Browser Screenshot: ICA Resized 1024x768 Figure 8: Browser Screenshot: pTHINC Resized 1024x768\nusing the maximum display resolution settings for each platform given in Table 2.\nThe ICA screenshot was taken on the X5 since ICA does not run on the X51v.\nWhile the screenshots lack the visual fidelity of the actual device display, several observations can be made.\nFigure 5 shows that RDP does not support fullscreen mode and wastes lots of screen space for controls and UI elements, requiring the user to scroll around in order to access the full contents of the web browsing session.\nFigure 6 shows that VNC makes better use of the screen space and provides better display quality, but still forces the user to scroll around to view the web page due to its lack of resizing support.\nFigure 7 shows ICA's ability to display the full web page given its resizing support, but that its lack of landscape capability and poorer resize algorithm significantly compromise display quality.\nIn contrast, Figure 8 shows pTHINC using resizing to provide a high quality fullscreen display of the full width of the web page.\npTHINC maximizes the entire viewing region by moving all controls to the PDA buttons.\nIn addition, pTHINC leverages the server computational power to use a high quality resizing algorithm to resize the display to fit the PDA screen without significant overhead.\nFigures 9 and 10 show 
the results of running the video playback benchmark.\nFor each platform except ICA, we show results for an X5 and X51v configuration.\nICA could not run on the X51v as noted earlier.\nThe measurements were done using settings that reflect an environment in which a user accesses a web session from both a desktop computer and a PDA.\nAs such, a 1024 \u00d7 768 server display resolution was used whenever possible and the video was shown at fullscreen.\nRDP was limited to 640 \u00d7 480 display resolution as noted earlier.\nSince viewing the entire video display is the only really usable option, we resized the display to fit the PDA screen for those platforms that supported this feature, namely ICA and pTHINC.\nFigure 9 shows the video quality for each platform.\npTHINC is the only thin client able to provide perfect video playback quality, similar to the native PDA video player.\nAll of the other thin clients deliver very poor video quality.\nWith the exception of RDP on the X51v, which provided an unacceptable 35% video quality, none of the other systems were even able to achieve 10% video quality.\nVNC and ICA have the worst quality at 8% on the X5 device.\npTHINC's native video support enables superior video performance, while other thin clients suffer from their inability to distinguish video from normal display updates.\nThey attempt to apply ineffective and expensive compression algorithms on the video data and are unable to keep up with the stream of updates generated, resulting in dropped frames or long playback times.\nVNC suffers further from its client-pull update model because video frames are generated faster than the rate at which the client can process and send requests to the server to obtain the next display update.\nFigure 10 shows the total data transferred during\nvideo playback for each system.\nThe native player is the most bandwidth efficient platform, sending less than 6 MB of data, which corresponds to about 1.2 Mbps of bandwidth.\npTHINC's
100% video quality requires about 25 MB of data, which corresponds to a bandwidth usage of less than 6 Mbps.\nWhile the other thin clients send less data than pTHINC, they do so because they are dropping video data, resulting in degraded video quality.\nFigures 11 to 14 compare screenshots of the different thin clients when displaying the video clip.\nExcept for ICA, all of the screenshots were taken on the X51v in landscape mode using the maximum display resolution settings for each platform given in Table 2.\nThe ICA screenshot was taken on the X5 since ICA does not run on the X51v.\nFigures 11 and 12 show that RDP and VNC are unable to display the entire video frame on the PDA screen.\nRDP wastes screen space for UI elements and VNC only shows the top corner of the video frame on the screen.\nFigure 13 shows that ICA provides resizing to display the entire video frame, but did not proportionally resize the video data, resulting in strange display artifacts.\nIn contrast, Figure 14 shows pTHINC using resizing to provide a high quality fullscreen display of the entire video frame.\npTHINC provides a visually more appealing video display than RDP, VNC, or ICA.\n5.\nRELATED WORK\nSeveral studies have examined the web browsing performance of thin-client computing [13, 19, 10].\nThe ability of thin clients to improve web browsing performance on wireless PDAs was first quantitatively demonstrated in a previous study by one of the authors [10].\nThis study demonstrated that thin clients can provide both faster web browsing performance and greater web browsing functionality.\nThe study considered a wide range of web content including content from medical information systems.\nOur work builds on this previous study and considers important issues such as how usable existing thin clients are in PDA environments, the trade-offs between thin-client usability and performance, performance across different PDA devices, and the performance of thin clients on common web-related
applications such as video.\nMany thin clients have been developed and some have PDA clients, including Microsoft's Remote Desktop [3], Citrix MetaFrame XP [2], Virtual Network Computing [16, 12], GoToMyPC [5], and Tarantella [18].\nThese systems were first designed for desktop computing and retrofitted for PDAs.\nUnlike pTHINC, they do not address key system architecture and usability issues important for PDAs.\nFigure 10: Video Benchmark: Fullscreen Video Data\nThis limits their display quality, system performance, available screen space, and overall usability on PDAs.\npTHINC builds on previous work by two of the authors on THINC [1], extending the server architecture and introducing a client interface and usage model to efficiently support PDA devices for mobile web applications.\nOther approaches to improve the performance of mobile wireless web browsing have focused on using transcoding and caching proxies in conjunction with the fat client model [11, 9, 4, 8].\nThey work by pushing functionality to external proxies, and using specialized browsing applications on the PDA device that communicate with the proxy.\nOur thin-client approach differs fundamentally from these fat-client approaches by pushing all web browser logic to the server, leveraging existing investments in desktop web browsers and helper applications to work seamlessly with production systems without any additional proxy configuration or web browser modifications.\nWith the emergence of web browsing on small display devices, web sites have been redesigned using mechanisms like WAP, and specialized native web browsers have been developed to meet the needs of these devices.\nRecently, Opera has developed the Opera Mini [15] web browser, which uses an approach similar to the thin-client model to provide access across a number of mobile devices that would normally be incapable of running a web browser.\nInstead of requiring the device to process web pages, it uses a remote server to pre-process
the page before sending it to the phone.\n6.\nCONCLUSIONS\nWe have introduced pTHINC, a thin-client architecture for wireless PDAs.\npTHINC provides key architectural and usability mechanisms such as server-side screen resizing, clientside screen rotation using simple copy techniques, YUV video support, and maximizing screen space for display updates and leveraging existing PDA control buttons for UI elements.\npTHINC transparently supports traditional desktop browsers and their helper applications on PDA devices and desktop machines, providing mobile users with ubiquitous access to a consistent, personalized, and full-featured web environment across heterogeneous devices.\nWe have implemented pTHINC and measured its performance on web applications compared to existing thin-client systems and native web applications.\nOur results on multiple mobile wireless devices demonstrate that pTHINC delivers web browsing performance up to 80 times better than existing thin-client systems, and 8 times better than a native PDA browser.\nIn addition, pTHINC is the only PDA thin client\nFigure 9: Video Benchmark: Fullscreen Video Quality\nFigure 11: Video Screenshot: RDP 640x480 Figure 12: Video Screenshot: VNC 1024x768 Figure 14: Video Screenshot: pTHINC Resized 1024x768 Figure 13: Video Screenshot: ICA Resized 1024x768\nthat transparently provides full-screen, full frame rate video playback, making web sites with multimedia content accessible to mobile web users.","keyphrases":["pthinc","thin-client","mobil","web applic","mobil wireless pda","web browser","function","pda thinclient solut","seamless mobil","system usabl","screen resolut","local pda web browser","web brows perform","crucial browser helper applic","video playback","full-function web browser","high-fidel displai","thin-client comput","remot displai","pervas web"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","M","M","U","M"]} {"id":"J-65","title":"Privacy in Electronic Commerce and the 
Economics of Immediate Gratification","abstract":"Dichotomies between privacy attitudes and behavior have been noted in the literature but not yet fully explained. We apply lessons from the research on behavioral economics to understand the individual decision making process with respect to privacy in electronic commerce. We show that it is unrealistic to expect individual rationality in this context. Models of self-control problems and immediate gratification offer more realistic descriptions of the decision process and are more consistent with currently available data. In particular, we show why individuals who may genuinely want to protect their privacy might not do so because of psychological distortions well documented in the behavioral literature; we show that these distortions may affect not only 'na\u00efve' individuals but also 'sophisticated' ones; and we prove that this may occur also when individuals perceive the risks from not protecting their privacy as significant.","lvl-1":"Privacy in Electronic Commerce and the Economics of Immediate Gratification Alessandro Acquisti H. 
John Heinz III School of Public Policy and Management Carnegie Mellon University acquisti@andrew.cmu.edu ABSTRACT Dichotomies between privacy attitudes and behavior have been noted in the literature but not yet fully explained.\nWe apply lessons from the research on behavioral economics to understand the individual decision making process with respect to privacy in electronic commerce.\nWe show that it is unrealistic to expect individual rationality in this context.\nModels of self-control problems and immediate gratification offer more realistic descriptions of the decision process and are more consistent with currently available data.\nIn particular, we show why individuals who may genuinely want to protect their privacy might not do so because of psychological distortions well documented in the behavioral literature; we show that these distortions may affect not only `na\u00efve'' individuals but also `sophisticated'' ones; and we prove that this may occur also when individuals perceive the risks from not protecting their privacy as significant.\nCategories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics; K.4.1 [Public Policy Issues]: Privacy General Terms Economics, Security, Human Factors 1.\nPRIVACY AND ELECTRONIC COMMERCE Privacy remains an important issue for electronic commerce.\nA PriceWaterhouseCoopers study in 2000 showed that nearly two thirds of the consumers surveyed would shop more online if they knew retail sites would not do anything with their personal information [15].\nA Federal Trade Commission study reported in 2000 that sixty-seven percent of consumers were very concerned about the privacy of the personal information provided on-line [11].\nMore recently, a February 2002 Harris Interactive survey found that the three biggest consumer concerns in the area of on-line personal information security were: companies trading personal data without permission, the consequences of insecure transactions, and theft of
personal data [19].\nAccording to a Jupiter Research study in 2002, $24.5 billion in on-line sales will be lost by 2006 - up from $5.5 billion in 2001.\nOnline retail sales would be approximately twenty-four percent higher in 2006 if consumers' fears about privacy and security were addressed effectively [21].\nAlthough the media hype has somewhat diminished, risks and costs have not, as evidenced by the increasing volumes of electronic spam and identity theft [16].\nSurveys in this field, however, as well as experiments and anecdotal evidence, have also painted a different picture.\n[36, 10, 18, 21] have found evidence that even privacy-concerned individuals are willing to trade off privacy for convenience, or bargain the release of very personal information in exchange for relatively small rewards.\nThe failure of several on-line services aimed at providing anonymity for Internet users [6] offers additional indirect evidence of the reluctance by most individuals to spend any effort in protecting their personal information.\nThe dichotomy between privacy attitudes and behavior has been highlighted in the literature.\nPreliminary interpretations of this phenomenon have been provided [2, 38, 33, 40].\nStill missing are: an explanation grounded in economic or psychological theories; an empirical validation of the proposed explanation; and, of course, the answer to the most recurring question: should people bother at all about privacy?\nIn this paper we focus on the first question: we formally analyze the individual decision making process with respect to privacy and its possible shortcomings.\nWe focus on individual (mis)conceptions about their handling of the risks they face when revealing private information.\nWe do not address the issue of whether people should actually protect themselves.\nWe will comment on that in Section 5, where we will also discuss strategies to empirically validate our theory.\nWe apply lessons from behavioral economics.\nTraditional economics
postulates that people are forward-looking Bayesian updaters: they take into account how current behavior will influence their future well-being and preferences.\nFor example, [5] study rational models of addiction.\nThis approach can be compared to those who see in the decision not to protect one's privacy a rational choice given the (supposedly) low risks at stake.\nHowever, developments in the area of behavioral economics have highlighted various forms of psychological inconsistencies (self-control problems, hyperbolic discounting, present biases, etc.) that clash with the fully rational view of the economic agent.\nIn this paper we draw from these developments to reach the following conclusions: \u2022 We show that it is unlikely that individuals can act rationally in the economic sense when facing privacy sensitive decisions.\n\u2022 We show that alternative models of personal behavior and time-inconsistent preferences are compatible with the dichotomy between attitudes and behavior and can better match current data.\nFor example, they can explain the results presented by [36] at the ACM EC '01 conference.\nIn their experiment, self-proclaimed privacy advocates were found to be willing to reveal varying amounts of personal information in exchange for small rewards.\n\u2022 In particular, we show that individuals may have a tendency to under-protect themselves against the privacy risks they perceive, and over-provide personal information even when wary of the (perceived) risks involved.\n\u2022 We show that the magnitude of the perceived costs of privacy under certain conditions will not act as a deterrent against behavior the individual admits is risky.\n\u2022 We show, following similar studies in the economics of immediate gratification [31], that even `sophisticated'' individuals may under certain conditions become `privacy myopic.''\nOur conclusion is that simply providing more information and awareness in a self-regulative environment is not sufficient
to protect individual privacy.\nImproved technologies, by lowering the costs of adoption and protection, can certainly help.\nHowever, more fundamental human behavioral responses must also be addressed if privacy is to be protected.\nIn the next section we propose a model of rational agents facing privacy-sensitive decisions.\nIn Section 3 we show the difficulties that hinder any model of privacy decision making based on full rationality.\nIn Section 4 we show how behavioral models based on immediate gratification bias can better explain the attitudes-behavior dichotomy and match available data.\nIn Section 5 we summarize and discuss our conclusions.\n2.\nA MODEL OF RATIONALITY IN PRIVACY DECISION MAKING Some have used the dichotomy between privacy attitudes and behavior to claim that individuals are acting rationally when it comes to privacy.\nUnder this view, individuals may accept small rewards for giving away information because they expect future damages to be even smaller (when discounted over time and weighted by their probability of occurrence).\nHere we want to investigate what underlying assumptions about personal behavior would support the hypothesis of full rationality in privacy decision making.\nSince [28, 37, 29], economists have been interested in privacy, but only recently have formal models started appearing [3, 7, 39, 40].\nWhile these studies focus on market interactions between one agent and other parties, here we are interested in formalizing the decision process of the single individual.\nWe want to see if individuals can be economically rational (forward-looking, Bayesian updaters, utility maximizers, and so on) when it comes to protecting their own personal information.\nThe concept of privacy, once intended as the right to be left alone [41], has transformed as our society has become more information oriented.\nIn an information society the self is expressed, defined, and affected through and by information and information technology.\nThe boundaries
between private and public become blurred.\nPrivacy has therefore become more a class of multifaceted interests than a single, unambiguous concept.\nHence its value may be discussed (if not ascertained) only once its context has also been specified.\nThis most often requires the study of a network of relations between a subject, certain information (related to the subject), other parties (that may have various linkages of interest or association with that information or that subject), and the context in which such linkages take place.\nTo understand how a rational agent could navigate through those complex relations, in Equation 1 we abstract the decision process of an idealized rational economic agent who is facing privacy trade-offs when completing a certain transaction.\nmax_d Ut = \u03b4(vE(a), pd(a)) + \u03b3(vE(t), pd(t)) \u2212 c_t^d (1) In Equation 1, \u03b4 and \u03b3 are unspecified functional forms that describe weighted relations between the expected payoffs from a set of events, v, and the associated probabilities of occurrence of those events, p.\nMore precisely, the utility U of completing a transaction t (the transaction being any action - not necessarily a monetary operation - possibly involving exposure of personal information) is equal to some function of the expected payoff vE(a) from maintaining (or not) certain information private during that transaction, and the probability of maintaining [or not maintaining] that information private when using technology d, pd(a) [1 \u2212 pd(a)]; plus some function of the expected payoff vE(t) from completing (or not completing) the transaction (possibly revealing personal information), and the probability of completing [or not completing] that transaction with a certain technology d, pd(t) [1 \u2212 pd(t)]; minus the cost c_t^d of using technology d in transaction t (see also [1]).\nThe technology d may or may not be privacy enhancing.\nSince the payoffs in Equation 1 can be either positive or negative, Equation 1 embodies the
duality implicit in privacy issues: there are both costs and benefits gained from revealing or from protecting personal information, and the costs and benefits from completing a transaction, vE(t), might be distinct from the costs and benefits from keeping the associated information private, vE(a).\nFor instance, revealing one's identity to an on-line bookstore may earn a discount.\nVice versa, it may also cost a larger bill, because of price discrimination.\nProtecting one's financial privacy by not divulging credit card information on-line may protect against future losses and hassles related to identity theft.\nBut it may make one's on-line shopping experience more cumbersome, and therefore more expensive.\nThe functional parameters \u03b4 and \u03b3 embody the variable weights and attitudes an individual may have towards keeping her information private (for example, her privacy sensitivity, or her belief that privacy is a right whose respect should be enforced by the government) and towards completing certain transactions.\nNote that vE and p could refer to sets of payoffs and the associated probabilities of occurrence.\nThe payoffs are themselves only expected because, regardless of the probability that the transaction is completed or the information remains private, they may depend on other sets of events and their associated probabilities.\nvE() and pd(), in other words, can be read as multi-variate parameters inside which are hidden several other variables, expectations, and functions, because of the complexity of the privacy network described above.\nOver time, the probability of keeping certain information private, for instance, will depend not only on the chosen technology d but also on the efforts by other parties to appropriate that information.\nThese efforts may be a function, among other things, of the expected value of that information to those parties.\nThe probability of keeping information private will also depend on the
environment in which the transaction is taking place.\nSimilarly, the expected benefit from keeping information private will also be a collection over time of probability distributions dependent on several parameters.\nImagine that the probability of keeping your financial transactions private is very high when you use a bank in Bermuda: still, the expected value of keeping your financial information confidential will depend on a number of other factors.\nA rational agent would, in theory, choose the technology d that maximizes her expected payoff in Equation 1.\nMaybe she would choose to complete the transaction under the protection of a privacy enhancing technology.\nMaybe she would complete the transaction without protection.\nMaybe she would not complete the transaction at all (d = 0).\nFor example, the agent may consider the costs and benefits of sending an email through an anonymous MIX-net system [8] and compare those to the costs and benefits of sending that email through a conventional, non-anonymous channel.\nThe magnitudes of the parameters in Equation 1 will change with the chosen technology.\nMIX-net systems may decrease the expected losses from privacy intrusions.\nNon-anonymous email systems may promise comparably higher reliability and (possibly) reduced costs of operation.\n3.\nRATIONALITY AND PSYCHOLOGICAL DISTORTIONS IN PRIVACY Equation 1 is a comprehensive (while intentionally generic) road-map for navigation across privacy trade-offs that no human agent would actually be able to use.\nWe hinted at some difficulties as we noted that several layers of complexity are hidden inside concepts such as the expected value of maintaining certain information private, and the probability of succeeding in doing so.\nMore precisely, an agent will face three problems when comparing the trade-offs implicit in Equation 1: incomplete information about all parameters; bounded power to process all available information; and psychological deviations from the rational path towards
utility maximization.\nThose three problems are precisely the same issues real people have to deal with on an everyday basis as they face privacy-sensitive decisions.\nWe discuss each problem in detail.\n1.\nIncomplete information.\nWhat information does the individual have access to as she prepares to take privacy-sensitive decisions?\nFor instance, is she aware of privacy invasions and the associated risks?\nWhat is her knowledge of the existence and characteristics of protective technologies?\nEconomic transactions are often characterized by incomplete or asymmetric information.\nThe different parties involved may not have the same amount of information about the transaction and may be uncertain about some important aspects of it [4].\nIncomplete information will affect almost all parameters in Equation 1, and in particular the estimation of costs and benefits.\nThe costs and benefits associated with privacy protection and privacy intrusions are both monetary and immaterial.\nMonetary costs may for instance include the adoption costs (which are probably fixed) and usage costs (which are variable) of protective technologies - if the individual decides to protect herself.\nOr they may include the financial costs associated with identity theft, if the individual's information turns out not to have been adequately protected.\nImmaterial costs may include the learning costs of a protective technology, switching costs between different applications, social stigma when using anonymizing technologies, and many others.\nLikewise, the benefits from protecting (or not protecting) personal information may also be easy to quantify in monetary terms (the discount you receive for revealing personal data) or intangible (the feeling of protection when you send encrypted emails).\nIt is difficult for an individual to estimate all these values.\nThrough information technology, privacy invasions can be ubiquitous and invisible.\nMany of the payoffs associated with privacy protection or intrusion may
be discovered or ascertained only ex post, through actual experience.\nConsider, for instance, the difficulties in using privacy and encryption technologies described in [43].\nIn addition, the calculations implicit in Equation 1 depend on incomplete information about the probability distribution of future events.\nSome of those distributions may be estimated from comparable data - for example, the probability that a certain credit card transaction will result in fraud today could be calculated using existing statistics.\nThe probability distributions of other events may be very difficult to estimate because the environment is too dynamic - for example, the probability of being subject to identity theft 5 years in the future because of certain data you are releasing now.\nAnd the distributions of some other events may be almost completely subjective - for example, the probability that a new and practical form of attack on a currently secure cryptosystem will expose all of your encrypted personal communications a few years from now.\nThis leads to a related problem: bounded rationality.\n2.\nBounded rationality.\nIs the individual able to calculate all the parameters relevant to her choice?\nOr is she limited by bounded rationality?\nIn our context, bounded rationality refers to the inability to calculate and compare the magnitudes of the payoffs associated with the various strategies the individual may choose in privacy-sensitive situations.\nIt also refers to the inability to process all the stochastic information related to risks and probabilities of events leading to privacy costs and benefits.\nIn traditional economic theory, the agent is assumed to have both rationality and unbounded 'computational' power to process information.\nBut human agents are unable to process all information in their hands and draw accurate conclusions from it [34].\nIn the scenario we consider, once an individual provides personal information to other parties, she literally loses control of
that information.\nThat loss of control propagates through other parties and persists for unpredictable spans of time.\nBecause she is in a position of information asymmetry with respect to the parties with whom she is transacting, her decisions must be based on stochastic assessments, and the magnitudes of the factors that may affect her become very difficult to aggregate, calculate, and compare.2 Bounded rationality will affect the calculation of the parameters in Equation 1, and in particular \u03b4, \u03b3, vE(), and pd().\nThe cognitive costs involved in trying to calculate the best strategy could therefore be so high that the individual may simply resort to simple heuristics.\n3.\nPsychological distortions.\nEventually, even if an individual had access to complete information and could appropriately compute it, she still might find it difficult to follow the rational strategy presented in Equation 1.\nA vast body of economic and psychological literature has by now confirmed the impact of several forms of psychological distortions on individual decision making.\nPrivacy seems to be a case study encompassing many of those distortions: hyperbolic discounting, underinsurance, self-control problems, immediate gratification, and others.\nThe traditional dichotomy between attitude and behavior, observed in several aspects of human psychology and studied in the social psychology literature since [24] and [13], may also appear in the privacy space because of these distortions.\nFor example, individuals have a tendency to discount 'hyperbolically' future costs or benefits [31, 27].\nIn economics, hyperbolic discounting implies inconsistency of personal preferences over time - future events may be discounted at different discount rates than near-term events.\nHyperbolic discounting may affect privacy decisions, for instance when we heavily discount the (low) probability of (high) future risks such as identity theft.3 Related to hyperbolic discounting is the tendency to
underinsure oneself against certain risks [22].\nIn general, individuals may put constraints on future behavior that limit their own achievement of maximum utility: people may genuinely want to protect themselves, but because of self-control bias they will not actually take those steps, and will opt for immediate gratification instead.\nPeople tend to underappreciate the effects of changes in their states, and hence falsely project their current preferences over consumption onto their future preferences.\nFar more than suggesting merely that people mispredict future tastes, this projection bias posits a systematic pattern in these mispredictions which can lead to systematic errors in dynamic-choice environments [25, p. 2].\n2 The negative utility coming from future potential misuses of somebody's personal information could be a random shock whose probability and scope are extremely variable.\nFor example, a small and apparently innocuous piece of information might become a crucial asset or a dangerous liability in the right context.\n3 A more rigorous description and application of hyperbolic discounting is provided in Section 4.\nIn addition, individuals suffer from optimism bias [42], the misperception that one's risks are lower than those of other individuals under similar conditions.\nOptimism bias may lead us to believe that we will not be subject to privacy intrusions.\nIndividuals also encounter difficulties when dealing with cumulative risks.\n[35], for instance, shows that while young smokers appreciate the long-term risks of smoking, they do not fully realize the cumulative relation between the low risks of each additional cigarette and the slow building up of a serious danger.\nDifficulties in dealing with cumulative risks apply to privacy, because our personal information, once released, can remain available over long periods of time.\nAnd since it can be correlated to other data, the 'anonymity sets' [32, 14] in which we wish to remain hidden get
smaller.\nAs a result, the whole risk associated with revealing different pieces of personal information is more than the sum of the individual risks associated with each piece of data.\nAlso, it is easier to deal with actions and effects that are closer to us in time.\nActions and effects that are in the distant future are difficult to focus on, given our limited foresight perspective.\nAs the foresight changes, so does behavior, even when preferences remain the same [20].\nThis phenomenon may also affect privacy decisions, since the costs of privacy protection may be immediate, but the rewards may be invisible (absence of intrusions) and spread over future periods of time.\nTo summarize: whenever we face privacy-sensitive decisions, we hardly have all the data necessary for an informed choice.\nBut even if we had, we would likely be unable to process it.\nAnd even if we could process it, we might still end up behaving against our own better judgment.\nIn what follows, we present a model of privacy attitudes and behavior based on some of these findings, and in particular on the plight of immediate gratification.\n4.\nPRIVACY AND THE ECONOMICS OF IMMEDIATE GRATIFICATION The problem of immediate gratification (which is related to the concepts of time inconsistency, hyperbolic discounting, and self-control bias) is described by O'Donoghue and Rabin [27, p. 4] as follows: A person's relative preference for wellbeing at an earlier date over a later date gets stronger as the earlier date gets closer.\n[...]
[P]eople have self-control problems caused by a tendency to pursue immediate gratification in a way that their 'long-run selves' do not appreciate.\nFor example, if you were given only two alternatives, on Monday you may claim that you will prefer working 5 hours on Saturday to 5 and a half hours on Sunday.\nBut as Saturday comes, you will be more likely to prefer postponing work until Sunday.\nThis simple observation has rather important consequences in economic theory, where time-consistency of preferences is the dominant model.\nConsider first the traditional model of the utility that agents derive from consumption, in which utility discounts exponentially over time: Ut = \u03a3_{\u03c4=t}^{T} \u03b4^\u03c4 u\u03c4 (2) In Equation 2, the cumulative utility U at time t is the discounted sum of all utilities from time t (the present) until time T (the future).\n\u03b4 is the discount factor, with a value between 0 and 1.\nA value of 0 would imply that the individual discounts so heavily that the utility from future periods is worth zero today.\nA value of 1 would imply that the individual is so patient she does not discount future utilities.\nThe discount factor is used in economics to capture the fact that having (say) one dollar one year from now is valuable, but not as much as having that dollar now.\nIn Equation 2, if all u\u03c4 were constant - for instance, 10 - and \u03b4 were 0.9, then at time t = 0 (that is, now) u0 would be worth 10, but u1 would be worth 9.\nTable 1: (Fictional) expected payoffs from joining the loyalty program.\nPeriod: 1 2 3 4\nBenefits from selling in period 1: 2 0 0 0\nCosts from selling in period 1: 0 1 1 1\nBenefits from selling in period 2: 0 2 0 0\nCosts from selling in period 2: 0 0 1 1\nBenefits from selling in period 3: 0 0 2 0\nCosts from selling in period 3: 0 0 0 1\nModifying the traditional model of utility discounting, [23] and then [31] have proposed a model which takes into account possible time-inconsistency of preferences.\nConsider
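a quick numerical check first (a sketch of our own, not part of the paper): the loyalty-program payoffs of Table 1 can be evaluated in a few lines of Python, with beta = 1 as the time-consistent benchmark and beta = 1/2 as the immediate-gratification case discussed later in this section. The helper names (`payoff`, `perceived_value`) are ours.

```python
# Minimal sketch (ours, not the authors'): beta-delta valuation of the
# Table 1 loyalty-program payoffs, with delta = 1 as assumed in the text.

def payoff(t, join):
    """Undiscounted Table 1 payoff in period t when joining in period `join`:
    a benefit of 2 in the joining period, a cost of 1 in every later period."""
    if t == join:
        return 2
    return -1 if t > join else 0

def perceived_value(join, beta, delta=1.0, horizon=4):
    """Utility perceived in period 1: immediate payoff plus beta-discounted
    future payoffs (with delta = 1 the exponential term drops out)."""
    future = sum(delta ** (t - 1) * payoff(t, join) for t in range(2, horizon + 1))
    return payoff(1, join) + beta * future

# Time-consistent agent (beta = 1): joining in the last possible selling
# period nets 2 - 1 = 1, the best available payoff.
time_consistent = {j: perceived_value(j, beta=1.0) for j in (1, 2, 3)}

# Immediate-gratification agent (beta = 1/2): joining now and joining in
# period 3 both look worth 0.5, so she joins now and ends up worse off.
biased = {j: perceived_value(j, beta=0.5) for j in (1, 2, 3)}

print(time_consistent)  # {1: -1.0, 2: 0.0, 3: 1.0}
print(biased)           # {1: 0.5, 2: 0.0, 3: 0.5}
```

Both agents face the same payoffs; only the beta weight placed on future periods differs. The general form of this beta-weighted valuation is given in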
Equation 3: Ut(ut, ut+1, ..., uT) = \u03b4^t ut + \u03b2 \u03a3_{\u03c4=t+1}^{T} \u03b4^\u03c4 u\u03c4 (3) Assume that \u03b4, \u03b2 \u2208 [0, 1].\n\u03b4 is the discount factor for intertemporal utility, as in Equation 2.\n\u03b2 is the parameter that captures an individual's tendency to gratify herself immediately (a form of time-inconsistent preferences).\nWhen \u03b2 is 1, the model maps to the traditional time-consistent utility model, and Equation 3 is identical to Equation 2.\nBut when \u03b2 is zero, the individual does not care for anything but today.\nIn fact, any \u03b2 smaller than 1 represents self-control bias.\nThe experimental literature has convincingly shown that human beings tend to have self-control problems even when they claim otherwise: we tend to avoid and postpone undesirable activities even when this will imply more effort tomorrow, and we tend to over-engage in pleasant activities even though this may cause suffering or reduced utility in the future.\nThis analytical framework can be applied to the study of privacy attitudes and behavior.\nProtecting your privacy sometimes means protecting yourself from a clear and present hassle (telemarketers, or people peeping through your window and seeing how you live - see [33]); but sometimes it represents something akin to buying insurance against future and uncertain risks.\nIn surveys completed at time t = 0, subjects asked about their attitude towards privacy risks may mentally consider some costs of protecting themselves at a later time t = s and compare those to the avoided costs of privacy intrusions in an even more distant future t = s + n.\nTheir alternatives at survey time 0 are represented in Equation 4: min_x DU0 = \u03b2 [E(cs,p) \u03b4^s x + E(cs+n,i) \u03b4^{s+n} (1 \u2212 x)] (4) x is a dummy variable that can take values 0 or 1.\nIt represents the individual's choice of which costs to face: the expected cost of protecting herself at time s, E(cs,p) (in which case
x = 1), or the expected costs of being subject to privacy intrusions at a later time s + n, E(cs+n,i).\nThe individual is trying to minimize the disutility DU of these costs with respect to x. Because she discounts the two future events with the same discount factor (although at different times), for certain values of the parameters the individual may conclude that paying to protect herself is worthwhile.\nIn particular, this will happen when: E(cs,p) \u03b4^s < E(cs+n,i) \u03b4^{s+n} (5) Now, consider what happens as the moment t = s comes.\nNow a real price must be paid in order to enjoy some form of protection (say, starting to encrypt all of your emails to protect yourself from future intrusions).\nNow the individual will perceive a different picture: min_x DUs = \u03b4 E(cs,p) x + \u03b2 E(cn,i) \u03b4^n (1 \u2212 x) (6) Note that nothing has changed in the equation (certainly not the individual's perceived risks) except time.\nIf \u03b2 (the parameter indicating the degree of self-control problems) is less than one, chances are that the individual will now actually choose not to protect herself.\nThis will in fact happen when: \u03b4 E(cs,p) > \u03b2 E(cn,i) \u03b4^n (7) Note that Inequalities 5 and 7 may be simultaneously met for certain \u03b2 < 1.\nAt survey time the individual honestly claimed she wanted to protect herself in principle - that is, some time in the future.\nBut when she is asked to make an effort to protect herself right now, she chooses to run the risk of privacy intrusion.\nSimilar mathematical arguments can be made for the comparison of immediate costs with immediate benefits (subscribing to a 'no-call' list to stop telemarketers from harassing you at dinner), and of immediate costs with only future expected rewards (insuring yourself against identity theft, or protecting yourself from fraud by never using your credit card on-line), particularly when the expected future rewards (or avoided risks) are also intangible: the immaterial consequences of
living (or not) in a dossier society, or the chilling effects (or lack thereof) of being under surveillance.\nThe reader will have noticed that we have focused on perceived (expected) costs E(c), rather than real costs.\nWe do not know the real costs, and we do not claim that the individual does.\nBut we are able to show that under certain conditions even costs perceived as very high (as during periods of intense privacy debate) will be ignored.\nWe can provide some fictional numerical examples to make the analysis more concrete.\nWe present some scenarios inspired by the calculations in [31].\nImagine an economy with just 4 periods (Table 1).\nEach individual can enroll in a supermarket's loyalty program by revealing personal information.\nIf she does so, the individual gets a discount of 2 during the period of enrollment, only to pay one unit in each period thereafter because of price discrimination based on the information she revealed (we make no attempt at calibrating the realism of this obviously abstract example; the point we are focusing on is how time inconsistencies may affect individual behavior given the expected costs and benefits of certain actions).4 Depending on which period the individual chooses for 'selling' her data, we have the undiscounted payoffs represented in Table 1.\nImagine that the individual is contemplating these options and discounting them according to Equation 3.\nSuppose that \u03b4 = 1 for all types of individuals (this means that for simplicity we do not consider intertemporal discounting), but \u03b2 = 1\/2 for time-inconsistent individuals and \u03b2 = 1 for everybody else.\nThe time-consistent individual will choose to join the program in the very last possible period and reap a benefit of 2 \u2212 1 = 1.\nThe individual with immediate gratification problems, for whom \u03b2 = 1\/2, will instead perceive the benefits from joining now or in period 3 as equivalent (0.5), and will join the program now, thus actually making herself worse
off.\n[31] also suggest that, in addition to the distinction between time-consistent individuals and individuals with time-inconsistent preferences, we should distinguish time-inconsistent individuals who are na\u00efve from those who are sophisticated.\nNa\u00efve time-inconsistent individuals are not aware of their self-control problems - for example, they are those who always plan to start a diet next week.\nSophisticated time-inconsistent individuals suffer from immediate gratification bias, but are at least aware of their inconsistencies.\nPeople in this category choose their behavior today by correctly estimating their future time-inconsistent behavior.\nNow consider how this difference affects decisions in another scenario, represented in Table 2.\nAn individual is considering the adoption of a certain privacy enhancing technology.\nIt will cost her some money both to protect herself and not to protect herself.\nIf she decides to protect herself, the cost will be the amount she pays - for example - for some technology that shields her personal information.\nIf she decides not to protect herself, the cost will be the expected consequences of privacy intrusions.\nWe assume that both these aggregate costs increase over time, although because of separate dynamics.\nAs time goes by, more and more information about the individual has been revealed, and it becomes more costly to be protected against privacy intrusions.\nAt the same time, however, intrusions become more frequent and dangerous.\n4 One may claim that loyalty cards keep on providing benefits over time.\nHere we make the simplifying assumption that such benefits are not larger than the future costs incurred after having revealed one's tastes.\nWe also assume that the economy ends in period 4 for all individuals, regardless of when they chose to join the loyalty program.\nIn period 1, the individual may protect herself by spending 5, or she may choose to face a risk of privacy intrusion the
following period, expected to cost 7.\nIn the second period, assuming that no intrusion has yet taken place, she may once again protect herself by spending a little more, 6; or she may choose to face a risk of privacy intrusion the next (third) period, expected to cost 9.\nIn the third period she could protect herself for 8 or face an expected cost of 15 in the following, last period.\nHere too we make no attempt at calibrating the values in Table 2.\nAgain, we focus on the different behavior driven by heterogeneity in time-consistency and in sophistication versus na\u00efvet\u00e9.\nWe assume that \u03b2 = 1 for individuals with no self-control problems and \u03b2 = 1\/2 for everybody else.\nWe assume for simplicity that \u03b4 = 1 for all.\nThe time-consistent individuals will obviously choose to protect themselves as soon as possible.\nIn the first period, na\u00efve time-inconsistent individuals will compare the cost of protecting themselves then with that of facing a privacy intrusion in the second period.\nBecause 5 > 7 \u2217 (1\/2), they will prefer to wait until the following period to protect themselves.\nBut in the second period they will be comparing 6 > 9 \u2217 (1\/2) - and so they will postpone their protection again.\nThey will keep on doing so, facing higher and higher risks.\nEventually, they will risk incurring the highest perceived costs of privacy intrusion (note again that we are simply assuming that individuals believe there are privacy risks and that these increase over time; we will come back to this concept later on).\nTime-inconsistent but sophisticated individuals, on the other hand, will adopt a protective technology in period 2 and pay 6.\nBy period 2, in fact, they will (correctly) realize that if they wait until period 3 (which they are tempted to do, because 6 > 9 \u2217 (1\/2)), their self-control bias will lead them to postpone adopting the technology once more (because 8 > 15 \u2217 (1\/2)).\nTherefore they predict they would incur the
expected cost 15 \u2217 (1\/2) = 7.5, which is larger than 6, the cost of protecting oneself in period 2.\nIn period 1, however, they correctly predict that they will not wait to protect themselves beyond period 2.\nSo they wait until period 2, because 5 > 6 \u2217 (1\/2), at which time they adopt a protective technology (see also [31]).\nTo summarize, time-inconsistent people tend not to fully appreciate future risks and, if na\u00efve, also their own inability to deal with them.\nThis happens even if they are aware of those risks and aware that those risks are increasing.\nAs we learnt from the second scenario, time inconsistency can lead individuals to accept higher and higher risks.\nIndividuals may tend to downplay the fact that single actions present low risks, but that their repetition forms a huge liability: it is a deceiving aspect of privacy that its value is truly appreciated only after privacy itself is lost.\nThis dynamic captures the essence of privacy and of the so-called anonymity sets [32, 14], where each bit of information we reveal can be linked to others, so that the whole is more than the sum of the parts.\nIn addition, [31] show that when costs are immediate, time-inconsistent individuals tend to procrastinate; when benefits are immediate, they tend to preproperate.\nIn our context things are even more interesting, because all privacy decisions involve costs and benefits at the same time.\nSo we opt against using eCash [9] in order to save ourselves the costs of switching from credit cards.\nBut we accept the risk that our credit card number on the Internet could be used maliciously.\nTable 2: (Fictional) costs of protecting privacy and expected costs of privacy intrusions over time.\nPeriod: 1 2 3 4\nProtection costs: 5 6 8 -\nExpected intrusion costs: - 7 9 15\nAnd we give away our personal information to supermarkets in order to gain immediate discounts - which will likely turn into price discrimination in due time [3,
26].\nWe have shown in the second scenario above how sophisticated but time-inconsistent individuals may choose to protect their information only in period 2.\nSophisticated people with self-control problems may be at a loss, sometimes even when compared to na\u00efve people with time-inconsistency problems (how many privacy advocates use privacy enhancing technologies all the time?).\nThe reasoning is that sophisticated people are aware of their self-control problems and, rather than ignoring them, incorporate them into their decision process.\nThis may decrease their own incentive to behave in the optimal way now.\nSophisticated privacy advocates might realize that protecting themselves from every possible privacy intrusion is unrealistic, and so they may start misbehaving now (and may get used to that - a form of coherent arbitrariness).\nThis is consistent with the results by [36] presented at the ACM EC '01 conference.\n[36] found that privacy advocates were also willing to reveal personal information in exchange for monetary rewards.\nIt is also interesting to note that these inconsistencies are not caused by ignorance of existing risks or confusion about available technologies.\nIndividuals in the abstract scenarios we described are aware of their perceived risks and costs.\nHowever, under certain conditions, the magnitude of those liabilities is almost irrelevant.\nThe individual will take very slowly increasing risks, which become steps towards huge liabilities.\n5.\nDISCUSSION Applying models of self-control bias and immediate gratification to the study of privacy decision making may offer a new perspective on the ongoing privacy debate.\nWe have shown that a model of fully rational privacy behavior is unrealistic, while models based on psychological distortions offer a more accurate depiction of the decision process.\nWe have shown why individuals who genuinely would like to protect their privacy may not do so because of psychological distortions
well documented in the behavioral economics literature.\nWe have highlighted that these distortions may affect not only naïve individuals but also sophisticated ones.\nSurprisingly, we have also found that these inconsistencies may occur even when individuals perceive the risks from not protecting their privacy as significant.\nAdditional uncertainties, risk aversion, and varying attitudes towards losses and gains may be confounding elements in our analysis.\nEmpirical validation is necessary to calibrate the effects of different factors.\nAn empirical analysis may start with the comparison of available data on the adoption rate of privacy technologies that offer immediate refuge from minor but pressing privacy concerns (for example, 'do not call' marketing lists), with data on the adoption of privacy technologies that offer less obviously perceivable protection from more dangerous but also less visible privacy risks (for example, identity theft insurance).\nHowever, only an experimental approach over different periods of time in a controlled environment may allow us to disentangle the influence of several factors.\nSurveys alone cannot suffice, since we have shown why survey-time attitudes will rarely match decision-time actions.\nAn experimental verification is part of our ongoing research agenda.\nThe psychological distortions we have discussed may be considered in the ongoing debate on how to deal with the privacy problem: industry self-regulation, users' self-protection (through technology or other strategies), or government intervention.\nThe conclusions we have reached suggest that individuals may not be trusted to make decisions in their best interests when it comes to privacy.\nThis does not mean that privacy technologies are ineffective.\nOn the contrary, our results, by aiming at offering a more realistic model of user behavior, could be of help to technologists in their design of privacy enhancing tools.\nHowever, our results also imply that 
technology alone or awareness alone may not address the heart of the privacy problem.\nImproved technologies (with lower costs of adoption and protection) and more information about risks and opportunities certainly can help.\nHowever, more fundamental human behavioral mechanisms must also be addressed.\nSelf-regulation, even in the presence of complete information and awareness, may not be trusted to work for the same reasons.\nA combination of technology, awareness, and regulative policies - calibrated to generate and enforce liabilities and incentives for the appropriate parties - may be needed for privacy-related welfare increase (as in other areas of an economy: see the related analysis in [25]).\nObserving that people do not want to pay for privacy or do not care about privacy is, therefore, only a half truth.\nPeople may not be able to act as economically rational agents when it comes to personal privacy.\nAnd the question \"do consumers care?\" is a different question from \"does privacy matter?\"\nWhether privacy ought to be protected from an economic standpoint is still an open question.\nIt is a question that involves defining the specific contexts in which the concept of privacy is being invoked.\nBut the value of privacy eventually goes beyond the realms of economic reasoning and cost-benefit analysis, and ends up relating to one's views on society and freedom.\nStill, even from a purely economic perspective, anecdotal evidence suggests that the costs of privacy (from spam to identity theft, lost sales, intrusions, and the like [30, 12, 17, 33, 26]) are high and increasing.\n6.\nACKNOWLEDGMENTS\nThe author gratefully acknowledges Carnegie Mellon University's Berkman Development Fund, which partially supported this research.\nThe author also wishes to thank Jens Grossklags, Charis Kaskiris, and three anonymous referees for their helpful comments.\n7.\nREFERENCES\n[1] A. Acquisti, R. Dingledine, and P. 
Syverson.\nOn the economics of anonymity.\nIn Financial Cryptography - FC '03, pages 84-102.\nSpringer Verlag, LNCS 2742, 2003.\n[2] A. Acquisti and J. Grossklags.\nLosses, gains, and hyperbolic discounting: An experimental approach to information security attitudes and behavior.\nIn 2nd Annual Workshop on Economics and Information Security - WEIS '03, 2003.\n[3] A. Acquisti and H. R. Varian.\nConditioning prices on purchase history.\nTechnical report, University of California, Berkeley, 2001.\nPresented at the European Economic Association Conference, Venice, IT, August 2002.\nhttp:\/\/www.heinz.cmu.edu\/~acquisti\/papers\/privacy.pdf.\n[4] G. A. Akerlof.\nThe market for 'lemons': Quality uncertainty and the market mechanism.\nQuarterly Journal of Economics, 84:488-500, 1970.\n[5] G. S. Becker and K. M. Murphy.\nA theory of rational addiction.\nJournal of Political Economy, 96:675-700, 1988.\n[6] B. D. Brunk.\nUnderstanding the privacy space.\nFirst Monday, 7, 2002.\nhttp:\/\/firstmonday.org\/issues\/issue7_10\/brunk\/index.html.\n[7] G. Calzolari and A. Pavan.\nOptimal design of privacy policies.\nTechnical report, Gremaq, University of Toulouse, 2001.\n[8] D. Chaum.\nUntraceable electronic mail, return addresses, and digital pseudonyms.\nCommunications of the ACM, 24(2):84-88, 1981.\n[9] D. Chaum.\nBlind signatures for untraceable payments.\nIn Advances in Cryptology - Crypto '82, pages 199-203.\nPlenum Press, 1983.\n[10] R. K. Chellappa and R. Sin.\nPersonalization versus privacy: An empirical examination of the online consumer's dilemma.\nIn 2002 Informs Meeting, 2002.\n[11] Federal Trade Commission.\nPrivacy online: Fair information practices in the electronic marketplace, 2000.\nhttp:\/\/www.ftc.gov\/reports\/privacy2000\/privacy2000.pdf.\n[12] Community Banker Association of Indiana.\nIdentity fraud expected to triple by 2005, 2001.\nhttp:\/\/www.cbai.org\/Newsletter\/December2001\/identity_fraud_de2001.htm.\n[13] S. 
Corey.\nProfessional attitudes and actual behavior.\nJournal of Educational Psychology, 28(1):271-280, 1937.\n[14] C. Diaz, S. Seys, J. Claessens, and B. Preneel.\nTowards measuring anonymity.\nIn P. Syverson and R. Dingledine, editors, Privacy Enhancing Technologies - PET '02.\nSpringer Verlag, LNCS 2482, 2002.\n[15] ebusinessforum.com.\neMarketer: The great online privacy debate, 2000.\nhttp:\/\/www.ebusinessforum.com\/index.asp?doc_id=1785&layout=rich_story.\n[16] Federal Trade Commission.\nIdentity theft heads the FTC's top 10 consumer fraud complaints of 2001, 2002.\nhttp:\/\/www.ftc.gov\/opa\/2002\/01\/idtheft.htm.\n[17] R. Gellman.\nPrivacy, consumers, and costs - How the lack of privacy costs consumers and why business studies of privacy costs are biased and incomplete, 2002.\nhttp:\/\/www.epic.org\/reports\/dmfprivacy.html.\n[18] I.-H. Hann, K.-L. Hui, T. S. Lee, and I. P. L. Png.\nOnline information privacy: Measuring the cost-benefit trade-off.\nIn 23rd International Conference on Information Systems, 2002.\n[19] Harris Interactive.\nFirst major post-9\/11 privacy survey finds consumers demanding companies do more to protect privacy; public wants company privacy policies to be independently verified, 2002.\nhttp:\/\/www.harrisinteractive.com\/news\/allnewsbydate.asp?NewsID=429.\n[20] P. Jehiel and A. Lilico.\nSmoking today and stopping tomorrow: A limited foresight perspective.\nTechnical report, Department of Economics, UCLA, 2002.\n[21] Jupiter Research.\nSeventy percent of US consumers worry about online privacy, but few take protective action, 2002.\nhttp:\/\/www.jmm.com\/xp\/jmm\/press\/2002\/pr_060302.xml.\n[22] H. Kunreuther.\nCauses of underinsurance against natural disasters.\nGeneva Papers on Risk and Insurance, 1984.\n[23] D. Laibson.\nEssays on hyperbolic discounting.\nPh.D. dissertation, MIT, Department of Economics, 1994.\n[24] R. LaPiere.\nAttitudes versus actions.\nSocial Forces, 13:230-237, 1934.\n[25] G. Loewenstein, T. 
O'Donoghue, and M. Rabin.\nProjection bias in predicting future utility.\nTechnical report, Carnegie Mellon University, Cornell University, and University of California, Berkeley, 2003.\n[26] A. Odlyzko.\nPrivacy, economics, and price discrimination on the Internet.\nIn Fifth International Conference on Electronic Commerce, pages 355-366.\nACM, 2003.\n[27] T. O'Donoghue and M. Rabin.\nChoice and procrastination.\nQuarterly Journal of Economics, 116:121-160, 2001.\nThe page referenced in the text refers to the 2000 working paper version.\n[28] R. A. Posner.\nAn economic theory of privacy.\nRegulation, pages 19-26, 1978.\n[29] R. A. Posner.\nThe economics of privacy.\nAmerican Economic Review, 71(2):405-409, 1981.\n[30] Privacy Rights Clearinghouse.\nNowhere to turn: Victims speak out on identity theft, 2000.\nhttp:\/\/www.privacyrights.org\/ar\/idtheft2000.htm.\n[31] M. Rabin and T. O'Donoghue.\nThe economics of immediate gratification.\nJournal of Behavioral Decision Making, 13:233-250, 2000.\n[32] A. Serjantov and G. Danezis.\nTowards an information theoretic metric for anonymity.\nIn P. Syverson and R. Dingledine, editors, Privacy Enhancing Technologies - PET '02.\nSpringer Verlag, LNCS 2482, 2002.\n[33] A. Shostack.\nPaying for privacy: Consumers and infrastructures.\nIn 2nd Annual Workshop on Economics and Information Security - WEIS '03, 2003.\n[34] H. A. Simon.\nModels of bounded rationality.\nThe MIT Press, Cambridge, MA, 1982.\n[35] P. Slovic.\nWhat does it mean to know a cumulative risk?\nAdolescents' perceptions of short-term and long-term consequences of smoking.\nJournal of Behavioral Decision Making, 13:259-266, 2000.\n[36] S. Spiekermann, J. Grossklags, and B. Berendt.\nE-privacy in 2nd generation e-commerce: Privacy preferences versus actual behavior.\nIn 3rd ACM Conference on Electronic Commerce - EC '01, pages 38-47, 2001.\n[37] G. J. 
Stigler.\nAn introduction to privacy in economics and politics.\nJournal of Legal Studies, 9:623-644, 1980.\n[38] P. Syverson.\nThe paradoxical value of privacy.\nIn 2nd Annual Workshop on Economics and Information Security - WEIS '03, 2003.\n[39] C. R. Taylor.\nPrivate demands and demands for privacy: Dynamic pricing and the market for customer information.\nDuke Economics Working Paper 02-02, Department of Economics, Duke University, 2002.\n[40] T. Vila, R. Greenstadt, and D. Molnar.\nWhy we can't be bothered to read privacy policies: Models of privacy economics as a lemons market.\nIn 2nd Annual Workshop on Economics and Information Security - WEIS '03, 2003.\n[41] S. Warren and L. Brandeis.\nThe right to privacy.\nHarvard Law Review, 4:193-220, 1890.\n[42] N. D. Weinstein.\nOptimistic biases about personal risks.\nScience, 24:1232-1233, 1989.\n[43] A. Whitten and J. D. Tygar.\nWhy Johnny can't encrypt: A usability evaluation of PGP 5.0.\nIn 8th USENIX Security Symposium, 1999.\nPrivacy in Electronic Commerce and the Economics of Immediate Gratification\nABSTRACT\nDichotomies between privacy attitudes and behavior have been noted in the literature but not yet fully explained.\nWe apply lessons from the research on behavioral economics to understand the individual decision making process with respect to privacy in electronic commerce.\nWe show that it is unrealistic to expect individual rationality in this context.\nModels of self-control problems and immediate gratification offer more realistic descriptions of the decision process and are more consistent with currently available data.\nIn particular, we show why individuals who may genuinely want to protect their privacy might not do so because of psychological distortions well documented in the behavioral literature; we show that these distortions may affect not only 'naïve' individuals but also 'sophisticated' ones; and we prove that this may occur also when individuals perceive the risks from not protecting their privacy as significant.\n1.\nPRIVACY AND ELECTRONIC COMMERCE\nPrivacy remains an important issue for electronic commerce.\nA PriceWaterhouseCoopers study in 2000 showed that nearly two thirds of the consumers surveyed \"would shop more online if they knew retail sites would 
not do anything with their personal information\" [15].\nA Federal Trade Commission study reported in 2000 that sixty-seven percent of consumers were \"very concerned\" about the privacy of the personal information provided on-line [11].\nMore recently, a February 2002 Harris Interactive survey found that the three biggest consumer concerns in the area of on-line personal information security were: companies trading personal data without permission, the consequences of insecure transactions, and theft of personal data [19].\nAccording to a Jupiter Research study in 2002, \"$24.5 billion in on-line sales will be lost by 2006 - up from $5.5 billion in 2001.\nOnline retail sales would be approximately twenty-four percent higher in 2006 if consumers' fears about privacy and security were addressed effectively\" [21].\nAlthough the media hype has somewhat diminished, risks and costs have not, as evidenced by the increasing volumes of electronic spam and identity theft [16].\nSurveys in this field, however, as well as experiments and anecdotal evidence, have also painted a different picture.\n[36, 10, 18, 21] have found evidence that even privacy-concerned individuals are willing to trade off privacy for convenience, or bargain the release of very personal information in exchange for relatively small rewards.\nThe failure of several on-line services aimed at providing anonymity for Internet users [6] offers additional indirect evidence of the reluctance by most individuals to spend any effort in protecting their personal information.\nThe dichotomy between privacy attitudes and behavior has been highlighted in the literature.\nPreliminary interpretations of this phenomenon have been provided [2, 38, 33, 40].\nStill missing are: an explanation grounded in economic or psychological theories; an empirical validation of the proposed explanation; and, of course, the answer to the most recurring question: should people bother at all about privacy?\nIn this paper we focus on 
the first question: we formally analyze the individual decision making process with respect to privacy and its possible shortcomings.\nWe focus on individual (mis)conceptions about their handling of risks they face when revealing private information.\nWe do not address the issue of whether people should actually protect themselves.\nWe will comment on that in Section 5, where we will also discuss strategies to empirically validate our theory.\nWe apply lessons from behavioral economics.\nTraditional economics postulates that people are forward-looking and Bayesian updaters: they take into account how current behavior will influence their future well-being and preferences.\nFor example, [5] study rational models of addiction.\nThis approach can be compared to those who see in the decision not to protect one's privacy a rational choice given the (supposedly) low risks at stake.\nHowever, developments in the area of behavioral economics have highlighted various forms of psychological inconsistencies (self-control problems, hyperbolic discounting, present-biases, etc.) 
that clash with the fully rational view of the economic agent.\nIn this paper we draw from these developments to reach the following conclusions: • We show that it is unlikely that individuals can act rationally in the economic sense when facing privacy sensitive decisions.\n• We show that alternative models of personal behavior and time-inconsistent preferences are compatible with the dichotomy between attitudes and behavior and can better match current data.\nFor example, they can explain the results presented by [36] at the ACM EC '01 conference.\nIn their experiment, self-proclaimed privacy advocates were found to be willing to reveal varying amounts of personal information in exchange for small rewards.\n• In particular, we show that individuals may have a tendency to under-protect themselves against the privacy risks they perceive, and to over-provide personal information even when wary of the (perceived) risks involved.\n• We show that the magnitude of the perceived costs of privacy under certain conditions will not act as a deterrent against behavior the individual admits is risky.\n• We show, following similar studies in the economics of immediate gratification [31], that even 'sophisticated' individuals may under certain conditions become 'privacy myopic.'\nOur conclusion is that simply providing more information and awareness in a self-regulative environment is not sufficient to protect individual privacy.\nImproved technologies, by lowering costs of adoption and protection, certainly can help.\nHowever, more fundamental human behavioral responses must also be addressed if privacy ought to be protected.\nIn the next section we propose a model of rational agents facing privacy sensitive decisions.\nIn Section 3 we show the difficulties that hinder any model of privacy decision making based on full rationality.\nIn Section 4 we show how behavioral models based on immediate gratification bias can better explain the attitudes-behavior dichotomy and match available data.\nIn Section 5 we summarize and discuss our conclusions.\n2.\nA MODEL OF RATIONALITY IN PRIVACY DECISION MAKING\nSome have used the dichotomy between privacy attitudes and behavior to claim that individuals are acting rationally when it comes to privacy.\nUnder this view, individuals may accept small rewards for giving away information because they expect future damages to be even smaller (when discounted over time and weighted by their probability of occurrence).\nHere we want to investigate what underlying assumptions about personal behavior would support the hypothesis of full rationality in privacy decision making.\nEconomists have been interested in privacy since [28, 37, 29], but formal models have started appearing only recently [3, 7, 39, 40].\nWhile these studies focus on market interactions between one agent and other parties, here we are interested in formalizing the decision process of the single individual.\nWe want to see if individuals can be economically rational (forward-lookers, Bayesian updaters, utility maximizers, and so on) when it comes to protecting their own personal information.\nThe concept of privacy, once intended as the right to be left alone [41], has transformed as our society has become more information oriented.\nIn an 
information society the self is expressed, defined, and affected through and by information and information technology.\nThe boundaries between private and public become blurred.\nPrivacy has therefore become more a class of multifaceted interests than a single, unambiguous concept.\nHence its value may be discussed (if not ascertained) only once its context has also been specified.\nThis most often requires the study of a network of relations between a subject, certain information (related to the subject), other parties (that may have various linkages of interest or association with that information or that subject), and the context in which such linkages take place.\nTo understand how a rational agent could navigate through those complex relations, in Equation 1 we abstract the decision process of an idealized rational economic agent who is facing privacy trade-offs when completing a certain transaction.\nIn Equation 1, δ and γ are unspecified functional forms that describe weighted relations between expected payoffs from a set of events v and the associated probabilities of occurrence of those events p.\nMore precisely, the utility U of completing a transaction t (the transaction being any action - not necessarily a monetary operation - possibly involving exposure of personal information) is equal to some function of the expected payoff vE(a) from maintaining (or not maintaining) certain information private during that transaction, and the probability of maintaining [or not maintaining] that information private when using technology d, pd(a) [1 − pd(a)]; plus some function of the expected payoff vE(t) from completing (or not completing) the transaction (possibly revealing personal information), and the probability of completing [or not completing] that transaction with a certain technology d, pd(t) [1 − pd(t)]; minus the cost of using technology d for transaction t, cdt. 
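The display of Equation 1 itself appears to have been lost in this version of the text; from the term-by-term description above it can be read, editorially, as U_t = δ(vE(a), pd(a)) + γ(vE(t), pd(t)) − c_t^d, with the agent choosing the technology d that maximizes U_t. The sketch below works through a toy numerical instance of that choice. The linear expected-value forms used for δ and γ, and every number and technology name, are hypothetical illustrations for this edition, not values from the paper:

```python
# Toy numerical reading of Equation 1 (hypothetical, for illustration only).
# delta and gamma are simplified here to plain expected values; the paper
# deliberately leaves them as unspecified weighting functions.

def utility(p_a, p_t, cost,
            v_keep=5.0,    # hypothetical payoff if the information stays private
            v_lose=-4.0,   # hypothetical payoff if it does not
            v_done=10.0,   # hypothetical payoff from completing the transaction
            v_not=0.0):    # payoff from not completing it
    privacy_term = p_a * v_keep + (1 - p_a) * v_lose     # delta(vE(a), pd(a))
    transaction_term = p_t * v_done + (1 - p_t) * v_not  # gamma(vE(t), pd(t))
    return privacy_term + transaction_term - cost        # minus c_t^d

# Hypothetical technologies d: probability of staying private, probability
# of completing the transaction, and cost of use.
technologies = {
    "mixnet_email": dict(p_a=0.95, p_t=0.90, cost=2.0),    # privacy enhancing
    "plain_email": dict(p_a=0.30, p_t=0.99, cost=0.1),     # conventional channel
    "no_transaction": dict(p_a=1.00, p_t=0.00, cost=0.0),  # d = 0
}

# The idealized rational agent picks the d that maximizes U.
best = max(technologies, key=lambda d: utility(**technologies[d]))
```

With these particular numbers the privacy enhancing channel maximizes U; raising its cost above 5.05 would tip the choice to the conventional channel, which is the kind of sensitivity to the parameters that the surrounding discussion points to.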
The technology d may or may not be privacy enhancing. Since the payoffs in Equation 1 can be either positive or negative, Equation 1 embodies the duality implicit in privacy issues: there are both costs and benefits from revealing or from protecting personal information, and the costs and benefits of completing a transaction, v_E(t), might be distinct from the costs and benefits of keeping the associated information private, v_E(a). For instance, revealing one's identity to an on-line bookstore may earn a discount. Vice versa, it may also cost a larger bill, because of price discrimination. Protecting one's financial privacy by not divulging credit card information on-line may protect against future losses and hassles related to identity theft. But it may make one's on-line shopping experience more cumbersome, and therefore more expensive.

The functional parameters δ and γ embody the variable weights and attitudes an individual may have towards keeping her information private (for example, her privacy sensitivity, or her belief that privacy is a right whose respect should be enforced by the government) and towards completing certain transactions. Note that v_E and p could refer to sets of payoffs and the associated probabilities of occurrence. The payoffs are themselves only expected because, regardless of the probability that the transaction is completed or the information remains private, they may depend on other sets of events and their associated probabilities. v_E() and p_d(), in other words, can be read as multi-variate parameters inside which several other variables, expectations, and functions are hidden because of the complexity of the privacy network described above. Over time, the probability of keeping certain information private, for instance, will depend not only on the chosen technology d but also on the efforts by other parties to appropriate that information. These efforts may be a function, among other things, of the
expected value of that information to those parties. The probability of keeping information private will also depend on the environment in which the transaction is taking place. Similarly, the expected benefit from keeping information private will itself be a collection over time of probability distributions dependent on several parameters. Imagine that the probability of keeping your financial transactions private is very high when you use a bank in Bermuda: still, the expected value of keeping your financial information confidential will depend on a number of other factors.

A rational agent would, in theory, choose the technology d that maximizes her expected payoff in Equation 1. Maybe she would choose to complete the transaction under the protection of a privacy enhancing technology. Maybe she would complete the transaction without protection. Maybe she would not complete the transaction at all (d = 0). For example, the agent may consider the costs and benefits of sending an email through an anonymous MIX-net system [8] and compare those to the costs and benefits of sending that email through a conventional, non-anonymous channel. The magnitudes of the parameters in Equation 1 will change with the chosen technology. MIX-net systems may decrease the expected losses from privacy intrusions. Non-anonymous email systems may promise comparably higher reliability and (possibly) reduced costs of operation.

3. RATIONALITY AND PSYCHOLOGICAL DISTORTIONS IN PRIVACY

Equation 1 is a comprehensive (if intentionally generic) road-map for navigation across privacy trade-offs that no human agent would actually be able to use. We hinted at some difficulties when we noted that several layers of complexity are hidden inside concepts such as the "expected value of maintaining certain information private" and the "probability" of succeeding in doing so. More precisely, an agent will face three problems when comparing the trade-offs implicit in Equation 1:
incomplete information about all parameters; bounded power to process all available information; and, even with complete information and unbounded processing power, psychological deviations from the rational path towards utility maximization. These three problems are precisely the same issues real people have to deal with on an everyday basis as they face privacy-sensitive decisions. We discuss each problem in detail.

1. Incomplete information. What information does the individual have access to as she prepares to take privacy-sensitive decisions? For instance, is she aware of privacy invasions and the associated risks? What is her knowledge of the existence and characteristics of protective technologies? Economic transactions are often characterized by incomplete or asymmetric information. The different parties involved may not have the same amount of information about the transaction and may be uncertain about some important aspects of it [4]. Incomplete information will affect almost all parameters in Equation 1, and in particular the estimation of costs and benefits. Costs and benefits associated with privacy protection and privacy intrusions are both monetary and immaterial. Monetary costs may for instance include adoption costs (which are probably fixed) and usage costs (which are variable) of protective technologies - if the individual decides to protect herself. Or they may include the financial costs associated with identity theft, if the individual's information turns out not to have been adequately protected. Immaterial costs may include the learning costs of a protective technology, switching costs between different applications, social stigma when using anonymizing technologies, and many others. Likewise, the benefits from protecting (or not protecting) personal information may be easy to quantify in monetary terms (the discount you receive for revealing personal data) or be intangible (the feeling of protection when you send encrypted emails). It is difficult for an individual to estimate all these values. Through
information technology, privacy invasions can be ubiquitous and invisible. Many of the payoffs associated with privacy protection or intrusion may be discovered or ascertained only ex post, through actual experience. Consider, for instance, the difficulties in using privacy and encrypting technologies described in [43]. In addition, the calculations implicit in Equation 1 depend on incomplete information about the probability distributions of future events. Some of those distributions may be estimated from comparable data - for example, the probability that a certain credit card transaction will result in fraud today could be calculated using existing statistics. The probability distributions of other events may be very difficult to estimate because the environment is too dynamic - for example, the probability of being subject to identity theft 5 years in the future because of certain data you are releasing now. And the distributions of some other events may be almost completely subjective - for example, the probability that a new and practical form of attack on a currently secure cryptosystem will expose all of your encrypted personal communications a few years from now. This leads to a related problem: bounded rationality.

2. Bounded rationality. Is the individual able to calculate all the parameters relevant to her choice? Or is she limited by bounded rationality? In our context, bounded rationality refers to the inability to calculate and compare the magnitudes of the payoffs associated with the various strategies the individual may choose in privacy-sensitive situations. It also refers to the inability to process all the stochastic information related to risks and probabilities of events leading to privacy costs and benefits. In traditional economic theory, the agent is assumed to have both rationality and unbounded 'computational' power to process information. But human agents are unable to process all the information in their hands and draw accurate
conclusions from it [34]. In the scenario we consider, once an individual provides personal information to other parties, she literally loses control of that information. That loss of control propagates through other parties and persists for unpredictable spans of time. Being in a position of information asymmetry with respect to the party with whom she is transacting, the individual must base decisions on stochastic assessments, and the magnitudes of the factors that may affect her become very difficult to aggregate, calculate, and compare.[2] Bounded rationality will affect the calculation of the parameters in Equation 1, and in particular δ, γ, v_E(), and p_d(). The cognitive costs involved in trying to calculate the best strategy could therefore be so high that the individual may simply resort to simple heuristics.

3. Psychological distortions. Eventually, even if an individual had access to complete information and could appropriately compute it, she may still find it difficult to follow the rational strategy presented in Equation 1. A vast body of economic and psychological literature has by now confirmed the impact of several forms of psychological distortions on individual decision making. Privacy seems to be a case study encompassing many of those distortions: hyperbolic discounting, under-insurance, self-control problems, immediate gratification, and others. The traditional dichotomy between attitude and behavior, observed in several aspects of human psychology and studied in the social psychology literature since [24] and [13], may also appear in the privacy space because of these distortions. For example, individuals have a tendency to discount 'hyperbolically' future costs or benefits [31, 27]. In economics, hyperbolic discounting implies inconsistency of personal preferences over time - future events may be discounted at different discount rates than near-term events. Hyperbolic discounting may affect privacy decisions, for
instance when we heavily discount the (low) probability of (high) future risks such as identity theft.[3] Related to hyperbolic discounting is the tendency to under-insure oneself against certain risks [22]. In general, individuals may put constraints on future behavior that limit their own achievement of maximum utility: people may genuinely want to protect themselves, but because of self-control bias they will not actually take those steps, and will opt for immediate gratification instead. "People tend to underappreciate the effects of changes in their states, and hence falsely project their current preferences over consumption onto their future preferences. Far more than suggesting merely that people mispredict future tastes, this projection bias posits a systematic pattern in these mispredictions which can lead to systematic errors in dynamic-choice environments" [25, p. 2].

[2] The negative utility coming from future potential misuses of somebody's personal information could be a random shock whose probability and scope are extremely variable. For example, a small and apparently innocuous piece of information might become a crucial asset or a dangerous liability in the right context.

[3] A more rigorous description and application of hyperbolic discounting is provided in Section 4.

In addition, individuals suffer from optimism bias [42], the misperception that one's risks are lower than those of other individuals under similar conditions. Optimism bias may lead us to believe that we will not be subject to privacy intrusions. Individuals also encounter difficulties when dealing with cumulative risks. [35], for instance, shows that while young smokers appreciate the long-term risks of smoking, they do not fully realize the cumulative relation between the low risks of each additional cigarette and the slow building up of a serious danger. Difficulties in dealing with cumulative risks apply to privacy, because our personal information, once released, can remain
available over long periods of time. And since it can be correlated to other data, the 'anonymity sets' [32, 14] in which we wish to remain hidden get smaller. As a result, the whole risk associated with revealing different pieces of personal information is more than the sum of the individual risks associated with each piece of data. Also, it is easier to deal with actions and effects that are closer to us in time. Actions and effects that lie in the distant future are difficult to focus on, given our limited foresight perspective. As the foresight changes, so does behavior, even when preferences remain the same [20]. This phenomenon may also affect privacy decisions, since the costs of privacy protection may be immediate, but the rewards may be invisible (absence of intrusions) and spread over future periods of time.

To summarize: whenever we face privacy-sensitive decisions, we hardly have all the data necessary for an informed choice. But even if we had, we would likely be unable to process it. And even if we could process it, we might still end up behaving against our own better judgment. In what follows, we present a model of privacy attitudes and behavior based on some of these findings, and in particular on the plight of immediate gratification.

4. PRIVACY AND THE ECONOMICS OF IMMEDIATE GRATIFICATION

The problem of immediate gratification (which is related to the concepts of time inconsistency, hyperbolic discounting, and self-control bias) is thus described by O'Donoghue and Rabin [27, p. 4]: "A person's relative preference for well-being at an earlier date over a later date gets stronger as the earlier date gets closer. [...]
[P]eople have self-control problems caused by a tendency to pursue immediate gratification in a way that their 'long-run selves' do not appreciate." For example, if you were given only two alternatives, on Monday you might claim that you prefer working 5 hours on Saturday to 5 and a half hours on Sunday. But as Saturday comes, you will be more likely to prefer postponing the work until Sunday. This simple observation has rather important consequences in economic theory, where time-consistency of preferences is the dominant model. Consider first the traditional model of the utility that agents derive from consumption, in which utility discounts exponentially over time:

    U_t = Σ_{τ=t}^{T} δ^τ u_τ        (2)

In Equation 2, the cumulative utility U at time t is the discounted sum of all utilities from time t (the present) until time T (the future). δ is the discount factor, with a value between 0 and 1. A value of 0 would imply that the individual discounts so heavily that the utility from future periods is worth zero today. A value of 1 would imply that the individual is so patient that she does not discount future utilities. The discount factor is used in economics to capture the fact that having (say) one dollar one year from now is valuable, but not as much as having that dollar now. In Equation 2, if all u_τ were constant - for instance, 10 - and δ was 0.9, then at time t = 0 (that is, now) u_0 would be worth 10, but u_1 would be worth 9.

[Table 1: (Fictional) expected payoffs from joining a loyalty program.]

Modifying the traditional model of utility discounting, [23] and then [31] have proposed a model which takes into account possible time-inconsistency of preferences. Consider Equation 3:

    U_t = δ^t u_t + β Σ_{τ=t+1}^{T} δ^τ u_τ        (3)

Assume that δ, β ∈ [0, 1]. δ is the discount factor for intertemporal utility as in Equation 2. β is the parameter that captures an individual's tendency to gratify herself immediately (a form of time-inconsistent preferences). When β is 1, the model maps the
traditional time-consistent utility model, and Equation 3 is identical to Equation 2. But when β is zero, the individual does not care for anything but today. In fact, any β smaller than 1 represents a self-control bias. The experimental literature has convincingly shown that human beings tend to have self-control problems even when they claim otherwise: we tend to avoid and postpone undesirable activities even when this will imply more effort tomorrow; and we tend to over-engage in pleasant activities even though this may cause suffering or reduced utility in the future.

This analytical framework can be applied to the study of privacy attitudes and behavior. Protecting your privacy sometimes means protecting yourself from a clear and present hassle (telemarketers, or people peeping through your window and seeing how you live - see [33]); but sometimes it represents something akin to buying insurance against future and only uncertain risks. In surveys completed at time t = 0, subjects asked about their attitude towards privacy risks may mentally consider some costs of protecting themselves at a later time t = s and compare those to the avoided costs of privacy intrusions in an even more distant future t = s + n. Their alternatives at survey time 0 are represented in Equation 4:

    min_x  DU_0 = β [ x δ^s E(c_{s,p}) + (1 - x) δ^{s+n} E(c_{s+n,i}) ]        (4)

x is a dummy variable that can take values 0 or 1. It represents the individual's choice of which costs to face: the expected cost of protecting herself at time s, E(c_{s,p}) (in which case x = 1), or the expected cost of being subject to privacy intrusions at the later time s + n, E(c_{s+n,i}) (in which case x = 0). The individual is trying to minimize the disutility DU of these costs with respect to x.
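The mechanism behind Equations 2 and 3 can be sketched numerically. The snippet below is a minimal illustration, not the paper's calibration: it values a payoff stream under β-δ (quasi-hyperbolic) discounting and replays the Monday/Saturday example from the text with invented parameter values β = 0.5 and δ = 1.

```python
def discounted_utility(stream, beta=1.0, delta=1.0):
    """Value of a payoff stream under beta-delta (quasi-hyperbolic) discounting.
    stream[0] is the current period. With beta = 1 this reduces to the
    exponential model of Equation 2; beta < 1 adds the immediate-gratification
    bias of Equation 3."""
    tail = sum((delta ** tau) * u for tau, u in enumerate(stream[1:], start=1))
    return stream[0] + beta * tail

BETA, DELTA = 0.5, 1.0  # invented values, for illustration only

# On Monday, compare working Saturday (period 5, 5 hours) vs Sunday (period 6,
# 5.5 hours); both lie in the future, so both are scaled by beta.
monday_view_sat = discounted_utility([0, 0, 0, 0, 0, -5.0, 0], BETA, DELTA)  # -2.5
monday_view_sun = discounted_utility([0, 0, 0, 0, 0, 0, -5.5], BETA, DELTA)  # -2.75
prefers_saturday_on_monday = monday_view_sat > monday_view_sun               # True

# When Saturday arrives, the cost of working now is immediate (no beta), while
# tomorrow's cost is still scaled by beta: the stated preference reverses.
saturday_view_sat = discounted_utility([-5.0], BETA, DELTA)                  # -5.0
saturday_view_sun = discounted_utility([0, -5.5], BETA, DELTA)               # -2.75
prefers_saturday_on_saturday = saturday_view_sat > saturday_view_sun         # False
```

The reversal arises because a cost stops being discounted by β the moment it becomes immediate, while costs that are still in the future keep the β discount; this is exactly the asymmetry between the survey-time and decision-time comparisons discussed next.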
Because she discounts the two future events with the same discount factor (although at different times), for certain values of the parameters the individual may conclude that paying to protect herself is worthwhile. In particular, this will happen when:

    E(c_{s,p}) < δ^n E(c_{s+n,i})        (5)

Now, consider what happens as the moment t = s arrives. Now a real price must be paid in order to enjoy some form of protection (say, starting to encrypt all of your emails to protect yourself from future intrusions). Now the individual will perceive a different picture:

    min_x  DU_s = x E(c_{s,p}) + (1 - x) β δ^n E(c_{s+n,i})        (6)

Note that nothing has changed in the equation (certainly not the individual's perceived risks) except time. If β (the parameter indicating the degree of self-control problems) is less than one, chances are that the individual will now actually choose not to protect herself. This will in fact happen when:

    E(c_{s,p}) > β δ^n E(c_{s+n,i})        (7)

Note that Disequalities 5 and 7 may be simultaneously met for certain β < 1. At survey time the individual honestly claimed she wanted to protect herself in principle - that is, some time in the future. But when she is asked to make an effort to protect herself right now, she chooses to run the risk of privacy intrusion. Similar mathematical arguments can be made for the comparison between immediate costs and immediate benefits (subscribing to a 'no-call' list to stop telemarketers from harassing you at dinner), and between immediate costs and only future expected rewards (insuring yourself against identity theft, or protecting yourself from fraud by never using your credit card on-line), particularly when the expected future rewards (or avoided risks) are also intangible: the immaterial consequences of living (or not) in a dossier society, or the chilling effects (or lack thereof) of being under surveillance.

The reader will have noticed that we have focused on perceived (expected) costs E(c), rather than real costs. We do not know the real costs and we do not claim that the individual does. But we are able to show that under certain
conditions even costs perceived as very high (as during periods of intense privacy debate) will be ignored. We can provide some fictional numerical examples to make the analysis more concrete, in scenarios inspired by the calculations in [31]. Imagine an economy with just 4 periods (Table 1). Each individual can enroll in a supermarket's loyalty program by revealing personal information. If she does so, the individual gets a discount of 2 during the period of enrollment, only to pay one unit in each period thereafter because of price discrimination based on the information she revealed (we make no attempt at calibrating the realism of this obviously abstract example; the point we are focusing on is how time inconsistencies may affect individual behavior given the expected costs and benefits of certain actions).[4] Depending on which period the individual chooses for 'selling' her data, we have the undiscounted payoffs represented in Table 1.

Imagine that the individual is contemplating these options and discounting them according to Equation 3. Suppose that δ = 1 for all types of individuals (this means that for simplicity we do not consider intertemporal discounting), but β = 1/2 for time-inconsistent individuals and β = 1 for everybody else. The time-consistent individual will choose to join the program in the very last period and reap a benefit of 2 - 1 = 1. The individual with immediate gratification problems, for whom β = 1/2, will instead perceive the benefits from joining now or in period 3 as equivalent (0.5), and will join the program now, thus actually making herself worse off.

[31] also suggest that, in addition to the distinction between time-consistent individuals and individuals with time-inconsistent preferences, we should distinguish time-inconsistent individuals who are naïve from those who are sophisticated. Naïve time-inconsistent individuals are not aware of their self-control
problems - for example, they are those who always plan to start a diet next week. Sophisticated time-inconsistent individuals suffer from immediate gratification bias, but are at least aware of their inconsistencies. People in this category choose their behavior today by correctly estimating their future time-inconsistent behavior.

Now consider how this difference affects decisions in another scenario, represented in Table 2. An individual is considering the adoption of a certain privacy enhancing technology. It will cost her some money both to protect herself and not to protect herself. If she decides to protect herself, the cost will be the amount she pays - for example - for some technology that shields her personal information. If she decides not to protect herself, the cost will be the expected consequences of privacy intrusions. We assume that both these aggregate costs increase over time, although through separate dynamics. As time goes by, more and more information about the individual has been revealed, and it becomes more costly to be protected against privacy intrusions. At the same time, however, intrusions become more frequent and dangerous.

[4] One may claim that loyalty cards keep on providing benefits over time. Here we make the simplifying assumption that such benefits are not larger than the future costs incurred after having revealed one's tastes. We also assume that the economy ends in period 4 for all individuals, regardless of when they chose to join the loyalty program.

In period 1, the individual may protect herself by spending 5, or she may choose to face a risk of privacy intrusion the following period, expected to cost 7. In the second period, assuming that no intrusion has yet taken place, she may once again protect herself by spending a little more, 6; or she may choose to face a risk of privacy intrusion the next (third) period, expected to cost 9. In the third period she could protect herself for 8 or face an expected cost
of 15 in the following, last period. Here too we make no attempt at calibrating the values in Table 2. Again, we focus on the different behavior driven by heterogeneity in time-consistency and in sophistication versus naïveté. We assume that β = 1 for individuals with no self-control problems and β = 1/2 for everybody else. We assume for simplicity that δ = 1 for all.

The time-consistent individuals will obviously choose to protect themselves as soon as possible. In the first period, naïve time-inconsistent individuals will compare the cost of protecting themselves now against facing a privacy intrusion in the second period. Because 5 > 7 · (1/2), they will prefer to wait until the following period to protect themselves. But in the second period they will be comparing 6 > 9 · (1/2) - and so they will postpone their protection again. They will keep on doing so, facing higher and higher risks. Eventually, they risk incurring the highest perceived costs of privacy intrusions (note again that we are simply assuming that individuals believe there are privacy risks and that these increase over time; we will come back to this concept later on).

Time-inconsistent but sophisticated individuals, on the other hand, will adopt a protective technology in period 2 and pay 6. By period 2, in fact, they will (correctly) realize that if they wait until period 3 (which they are tempted to do, because 6 > 9 · (1/2)), their self-control bias will lead them to postpone adopting the technology once more (because 8 > 15 · (1/2)). They therefore predict that waiting past period 2 would leave them with the expected cost 15 · (1/2), which is larger than 6, the cost of protecting oneself in period 2. In period 1, however, they correctly predict that they will not wait to protect themselves beyond period 2. So they wait until period 2, because 5 > 6 · (1/2), at which time they adopt the protective technology (see also [31]).

To summarize, time-inconsistent people
tend not to fully appreciate future risks and, if naïve, do not appreciate their own inability to deal with them. This happens even if they are aware of those risks and aware that those risks are increasing. As we learnt from the second scenario, time inconsistency can lead individuals to accept higher and higher risks. Individuals may tend to downplay the fact that single actions present low risks, but that their repetition forms a huge liability: it is a deceiving aspect of privacy that its value is truly appreciated only after privacy itself is lost. This dynamic captures the essence of privacy and of the so-called anonymity sets [32, 14], where each bit of information we reveal can be linked to others, so that the whole is more than the sum of the parts.

[Table 2: (Fictional) costs of protecting privacy and expected costs of privacy intrusions over time.]

In addition, [31] show that when costs are immediate, time-inconsistent individuals tend to procrastinate; when benefits are immediate, they tend to preproperate (act too soon). In our context things are even more interesting, because all privacy decisions involve costs and benefits at the same time. So we opt against using eCash [9] in order to save ourselves the costs of switching from credit cards. But we accept the risk that our credit card number on the Internet could be used maliciously. And we give away our personal information to supermarkets in order to gain immediate discounts - which will likely turn into price discrimination in due time [3, 26].

We have shown in the second scenario above how sophisticated but time-inconsistent individuals may choose to protect their information only in period 2. Sophisticated people with self-control problems may be at a loss, sometimes even when compared to naïve people with time-inconsistency problems (how many privacy advocates use privacy enhancing technologies all the time?). The reasoning is that sophisticated people are aware of their self-control
problems, and rather than ignoring them, they incorporate them into their decision process.\nThis may decrease their own incentive to behave in the optimal way now.\nSophisticated privacy advocates might realize that protecting themselves from any possible privacy intrusion is unrealistic, and so they may start misbehaving now (and may get used to that, a form of coherent arbitrariness).\nThis is consistent with the results by [36] presented at the ACM EC \u201901 conference.\n[36] found that privacy advocates were also willing to reveal personal information in exchange for monetary rewards.\nIt is also interesting to note that these inconsistencies are not caused by ignorance of existing risks or confusion about available technologies.\nIndividuals in the abstract scenarios we described are aware of their perceived risks and costs.\nHowever, under certain conditions, the magnitude of those liabilities is almost irrelevant.\nThe individual will take very slowly increasing risks, which become steps towards huge liabilities.","keyphrases":["privaci","electron commerc","immedi gratif","individu decis make process","ration","self-control problem","psycholog distort","anonym","person inform protect","psycholog inconsist","time-inconsist prefer","financi privaci","privaci sensit decis","privaci enhanc technolog","hyperbol discount"],"prmu":["P","P","P","P","P","P","P","U","M","M","U","M","M","M","U"]} {"id":"C-41","title":"Evaluating Adaptive Resource Management for Distributed Real-Time Embedded Systems","abstract":"A challenging problem faced by researchers and developers of distributed real-time and embedded (DRE) systems is devising and implementing effective adaptive resource management strategies that can meet end-to-end quality of service (QoS) requirements in varying operational conditions. This paper presents two contributions to research in adaptive resource management for DRE systems. 
First, we describe the structure and functionality of the Hybrid Adaptive Resourcemanagement Middleware (HyARM), which provides adaptive resource management using hybrid control techniques for adapting to workload fluctuations and resource availability. Second, we evaluate the adaptive behavior of HyARM via experiments on a DRE multimedia system that distributes video in real-time. Our results indicate that HyARM yields predictable, stable, and high system performance, even in the face of fluctuating workload and resource availability.","lvl-1":"Evaluating Adaptive Resource Management for Distributed Real-Time Embedded Systems Nishanth Shankaran, \u2217 Xenofon Koutsoukos, Douglas C. Schmidt, and Aniruddha Gokhale Dept. of EECS, Vanderbilt University, Nashville ABSTRACT A challenging problem faced by researchers and developers of distributed real-time and embedded (DRE) systems is devising and implementing effective adaptive resource management strategies that can meet end-to-end quality of service (QoS) requirements in varying operational conditions.\nThis paper presents two contributions to research in adaptive resource management for DRE systems.\nFirst, we describe the structure and functionality of the Hybrid Adaptive Resourcemanagement Middleware (HyARM), which provides adaptive resource management using hybrid control techniques for adapting to workload fluctuations and resource availability.\nSecond, we evaluate the adaptive behavior of HyARM via experiments on a DRE multimedia system that distributes video in real-time.\nOur results indicate that HyARM yields predictable, stable, and high system performance, even in the face of fluctuating workload and resource availability.\nCategories and Subject Descriptors C.2.4 [Distributed Systems]: Distributed Applications; D.4.7 [Organization and Design]: Real-time Systems and Embedded Systems 1.\nINTRODUCTION Achieving end-to-end real-time quality of service (QoS) is particularly important for open distributed 
real-time and embedded (DRE) systems that face resource constraints, such as limited computing power and network bandwidth. Over-utilization of these system resources can yield unpredictable and unstable behavior, whereas under-utilization can yield excessive system cost. A promising approach to meeting these end-to-end QoS requirements effectively, therefore, is to develop and apply adaptive middleware [10, 15], which is software whose functional and QoS-related properties can be modified either statically or dynamically. Static modifications are carried out to reduce footprint, leverage capabilities that exist in specific platforms, enable functional subsetting, and/or minimize hardware/software infrastructure dependencies. Objectives of dynamic modifications include optimizing system responses to changing environments or requirements, such as changing component interconnections, power levels, CPU and network bandwidth availability, latency/jitter, and workload.

In open DRE systems, adaptive middleware must make such modifications dependably, i.e., while meeting stringent end-to-end QoS requirements, which requires the specification and enforcement of upper and lower bounds on system resource utilization to ensure effective use of system resources. To meet these requirements, we have developed the Hybrid Adaptive Resource-management Middleware (HyARM), which is an open-source[1] distributed resource management middleware. HyARM is based on hybrid control-theoretic techniques [8], which provide a theoretical framework for designing control of complex systems with both continuous and discrete dynamics. In our case study, which involves a distributed real-time video distribution system, the task of adaptive resource management is to control the utilization of the different resources, whose utilizations are described by continuous variables. We achieve this by adapting the resolution of the transmitted video, which is modeled as a continuous variable, and by
changing the frame-rate and the compression, which are modeled by discrete actions. We have implemented HyARM atop The ACE ORB (TAO) [13], which is an implementation of the Real-time CORBA specification [12]. Our results show that (1) HyARM ensures effective system resource utilization and (2) the end-to-end QoS requirements of higher priority applications are met, even in the face of fluctuations in workload.

The remainder of the paper is organized as follows: Section 2 describes the architecture, functionality, and resource utilization model of our DRE multimedia system case study; Section 3 explains the structure and functionality of HyARM; Section 4 evaluates the adaptive behavior of HyARM via experiments on our multimedia system case study; Section 5 compares our research on HyARM with related work; and Section 6 presents concluding remarks. (The code and examples for HyARM are available at www.dre.vanderbilt.edu/~nshankar/HyARM/.)

Article 7

2. CASE STUDY: DRE MULTIMEDIA SYSTEM

This section describes the architecture and QoS requirements of our DRE multimedia system.

2.1 Multimedia System Architecture

[Figure 1: DRE Multimedia System Architecture. Cameras and video encoders on each UAV feed a base station over wireless links; the base station forwards video over physical links to the end receivers.]

The architecture for our DRE multimedia system is shown in Figure 1 and consists of the following entities: (1) Data source (video capture by UAV), where video of a subject of interest is captured by camera(s) on each UAV, encoded using a specific encoding scheme, and transmitted to the next stage in the pipeline. (2) Data distributor (base station), where the video is processed to remove noise,
followed by retransmission of the processed video to the next stage in the pipeline. (3) Sinks (command and control center), where the received video is again processed to remove noise, then decoded, and finally rendered to the end user via graphical displays.

Significant improvements in video encoding/decoding and (de)compression have resulted from recent advances in video coding techniques [14]. Common video compression schemes are MPEG-1, MPEG-2, Real Video, and MPEG-4. Each compression scheme is characterized by its resource requirements, e.g., the computational power to (de)compress the video signal and the network bandwidth required to transmit the compressed video signal. Properties of the compressed video, such as resolution and frame-rate, determine both the quality and the resource requirements of the video.

Our multimedia system case study has the following end-to-end real-time QoS requirements: (1) latency, (2) inter-frame delay (also known as jitter), (3) frame rate, and (4) picture resolution. These QoS requirements can be classified as being either hard or soft. Hard QoS requirements should be met by the underlying system at all times, whereas soft QoS requirements can be missed occasionally. (Although hard and soft are often portrayed as two discrete requirement sets, in practice they are usually two ends of a continuum ranging from softer to harder rather than two disjoint points.) For our case study, we treat QoS requirements such as latency and jitter as harder QoS requirements and strive to meet them at all times. In contrast, we treat QoS requirements such as video frame rate and picture resolution as softer QoS requirements and modify these video properties adaptively to handle dynamic changes in resource availability effectively.

2.2 DRE Multimedia System Resources

There are two primary types of resources in our DRE multimedia system: (1) processors that provide computational power
available at the UAVs, base stations, and end receivers and (2) network links that provide communication bandwidth between UAVs, base stations, and end receivers. The computing power required by the video capture and encoding tasks depends on dynamic factors, such as the speed of the UAV, the speed of the subject (if the subject is mobile), and the distance between the UAV and the subject. The wireless network bandwidth available to transmit video captured by UAVs to base stations depends on the wireless connectivity between the UAVs and the base station, which in turn depends on dynamic factors such as the speed of the UAVs and the relative distance between UAVs and base stations. The bandwidth of the link between the base station and the end receiver is limited, but more stable than the bandwidth of the wireless network. Resource requirements and resource availability are thus subject to dynamic change.

Two classes of applications - QoS-enabled and best-effort - use the multimedia system infrastructure described above to transmit video to their respective receivers. The QoS-enabled class of applications has higher priority than the best-effort class. In our study, emergency response applications belong to the QoS-enabled class and surveillance applications belong to the best-effort class. For example, since a video stream from an emergency response application is of higher importance than a video stream from a surveillance application, it receives more resources end-to-end.

Since resource availability significantly affects QoS, we use current resource utilization as the primary indicator of system performance. We refer to the current level of system resource utilization as the system condition. Based on this definition, we can classify system conditions as being under-, over-, or effectively utilized. Under-utilization of system resources occurs when the current resource utilization is lower than the desired lower bound on resource utilization. In this system
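The three system conditions can be sketched as a simple classification over current utilization. The bound values below are illustrative placeholders (the experiments in Section 4 use a set point of 0.69), not values mandated by HyARM:

```python
# A minimal sketch of the system-condition classification described above.
# The bound values are illustrative placeholders, not part of HyARM's interface.
def classify_condition(utilization, lower_bound=0.5, upper_bound=0.7):
    """Map a resource utilization level to a system condition."""
    if utilization < lower_bound:
        return "under-utilized"    # residual resources can raise application QoS
    if utilization > upper_bound:
        return "over-utilized"     # QoS and timeliness degrade; system may destabilize
    return "effectively utilized"  # desired condition
```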
condition, residual system resources (i.e., network bandwidth and computational power) are available in large amounts after the end-to-end QoS requirements of applications are met. These residual resources can be used to increase the QoS of the applications. For example, residual CPU and network bandwidth can be used to deliver better quality video (e.g., with greater resolution and a higher frame rate) to end receivers. Over-utilization of system resources occurs when the current resource utilization is higher than the desired upper bound on resource utilization. This condition can arise from loss of resources - network bandwidth and/or computing power at the base station, end receiver, or UAV - or from an increase in resource demands by applications. Over-utilization is generally undesirable since the quality of the received video (such as resolution and frame rate) and its timeliness properties (such as latency and jitter) are degraded, and it may result in an unstable (and thus ineffective) system.

Effective resource utilization is the desired system condition since it ensures that the end-to-end QoS requirements of the UAV-based multimedia system are met and that the utilization of both system resources, i.e., network bandwidth and computational power, is within the desired utilization bounds. Section 3 describes the techniques we applied to achieve effective utilization, even in the face of fluctuating resource availability and/or demand.

3. OVERVIEW OF HYARM

This section describes the architecture of the Hybrid Adaptive Resource-management Middleware (HyARM). HyARM ensures efficient and predictable system performance by providing adaptive resource management, including monitoring of system resources and enforcing bounds on application resource utilization.

3.1 HyARM Structure and Functionality

[Figure 2: HyARM Architecture]

HyARM is composed of three types of entities, shown in Figure 2
and described below. Resource monitors observe the overall resource utilization for each type of resource as well as the resource utilization per application. In our multimedia system, there are resource monitors for CPU utilization and network bandwidth. CPU monitors observe the CPU utilization of UAVs, the base station, and end receivers. Network bandwidth monitors observe the network utilization of (1) the wireless network link between UAVs and the base station and (2) the wired network link between the base station and end receivers.

The central controller maintains system resource utilization below a desired bound by (1) processing the periodic updates it receives from resource monitors and (2) modifying the execution of applications accordingly, e.g., by using different execution algorithms or by operating the application with increased/decreased QoS. This adaptation process ensures that system resources are utilized efficiently and that end-to-end application QoS requirements are met. In our multimedia system, the HyARM controller determines the values of application parameters such as (1) the video compression scheme, e.g., Real Video or MPEG-4, (2) the frame rate, and (3) the picture resolution. From the perspective of hybrid control theoretic techniques [8], the video compression scheme and frame rate form the discrete variables of application execution and the picture resolution forms the continuous variable.

Application adapters modify application execution according to the parameters recommended by the controller and ensure that the operation of the application is in accordance with those parameters. In the current implementation of HyARM, the application adapter modifies the input parameters that affect application QoS and resource utilization - compression scheme, frame rate, and picture resolution. In future implementations, we plan to use resource reservation mechanisms such as Differentiated Services [3, 7] and
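As a rough illustration of the hybrid flavor of this control problem - and not HyARM's actual control law - one controller step might nudge a continuous variable (a resolution scale) proportionally to the utilization error while switching a discrete variable (the frame rate) only on larger errors. All names, gains, and thresholds below are hypothetical:

```python
# Hypothetical sketch of one hybrid control step: a continuous adjustment of
# the resolution scale plus a discrete switch of the frame rate. This is an
# illustration of the continuous/discrete split, not HyARM's actual algorithm.
FRAME_RATES = [15, 20, 25]  # discrete set of supported frame rates (fps)

def controller_step(utilization, set_point, scale, rate_index):
    error = utilization - set_point
    # Continuous variable: shrink/grow the resolution scale with the error.
    scale = min(1.0, max(0.1, scale - 0.5 * error))
    # Discrete variable: step the frame rate only when the error is large.
    if error > 0.1 and rate_index > 0:
        rate_index -= 1              # over-utilized: drop to a lower frame rate
    elif error < -0.1 and rate_index < len(FRAME_RATES) - 1:
        rate_index += 1              # under-utilized: allow a higher frame rate
    return scale, rate_index
```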
Class-based Kernel Resource Management [4] to provision/reserve network and CPU resources. In our multimedia system, the application adapter ensures that the video is encoded at the recommended frame rate and resolution using the specified compression scheme.

3.2 Applying HyARM to the Multimedia System Case Study

HyARM is built atop TAO [13], a widely used open-source implementation of Real-time CORBA [12]. HyARM can be applied to ensure efficient, predictable, and adaptive resource management in any DRE system where resource availability and requirements are subject to dynamic change. Figure 3 shows the interaction of the various parts of the DRE multimedia system developed with HyARM, TAO, and TAO's A/V Streaming Service. TAO's A/V Streaming Service is an implementation of the CORBA A/V Streaming Service specification: a QoS-enabled video distribution service that can transfer video in real-time to one or more receivers. We use the A/V Streaming Service to transmit the video from the UAVs to the end receivers via the base station.

[Figure 3: Developing the DRE Multimedia System with HyARM]

The three entities of HyARM, namely the resource monitors, central controller, and application adapters, are built as CORBA servants, so they can be distributed throughout a DRE system. Resource monitors are remote CORBA objects that periodically update the central controller with the current resource utilization. Application adapters are collocated with applications since the two interact closely. As shown in Figure 3, UAVs compress the data using various compression schemes, such
as MPEG-1, MPEG-4, and Real Video, and use TAO's A/V Streaming Service to transmit the video to end receivers. HyARM's resource monitors continuously observe the system resource utilization and notify the central controller of the current utilization; the interaction between the controller and the resource monitors uses the Observer pattern [5]. When the controller receives resource utilization updates from the monitors, it computes the necessary modifications to application parameters and notifies the application adapters via a remote operation call. The application adapters, which are collocated with the applications, modify the input parameters to the application - in our case, the video encoder - to adjust the application's resource utilization and QoS. (The base station is not included in the figure since it only retransmits the video received from UAVs to end receivers.)

Article 7

4. PERFORMANCE RESULTS AND ANALYSIS

This section first describes the testbed that provides the infrastructure for our DRE multimedia system, which was used to evaluate the performance of HyARM. We then describe our experiments and analyze the results obtained to empirically evaluate how HyARM behaves during under- and over-utilization of system resources.

4.1 Overview of the Hardware and Software Testbed

Our experiments were performed on the Emulab testbed at the University of Utah. The hardware configuration consists of two nodes acting as UAVs, one acting as the base station, and one as the end receiver. Video from the two UAVs was transmitted to the base station via a LAN configured with the following properties: an average packet loss ratio of 0.3 and a bandwidth of 1 Mbps. The network bandwidth was chosen to be 1 Mbps since each UAV in the DRE multimedia system is allocated 250 Kbps. These parameters were chosen to emulate an unreliable wireless network with limited bandwidth between the UAVs and the base station. From the base station, the video was retransmitted to the end receiver via a
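The monitor-to-controller update path described above can be sketched in the style of the Observer pattern [5]. Class and method names here are illustrative placeholders, not HyARM's API:

```python
# Sketch of resource monitors pushing utilization updates to the central
# controller, Observer-pattern style [5]. Names are hypothetical, not HyARM's.
class CentralController:
    def __init__(self):
        self.utilization = {}           # latest reading per monitored resource

    def update(self, resource, value):  # invoked by monitors on each sample
        self.utilization[resource] = value

class ResourceMonitor:
    def __init__(self, resource, observers):
        self.resource = resource
        self.observers = list(observers)

    def sample(self, value):            # push the current utilization to observers
        for observer in self.observers:
            observer.update(self.resource, value)
```

In HyARM itself these entities are remote CORBA servants and the update is a remote operation call; the local sketch above only shows the shape of the interaction.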
reliable wireline link of 10 Mbps bandwidth with no packet loss. The hardware configuration of each node was as follows: a 600 MHz Intel Pentium III processor, 256 MB of physical memory, 4 Intel EtherExpress Pro 10/100 Mbps Ethernet ports, and a 13 GB hard drive. A real-time version of Linux - TimeSys Linux/NET 3.1.214, based on Red Hat Linux 9 - was used as the operating system for all nodes. The following software packages were also used for our experiments: (1) Ffmpeg 0.4.9-pre1, an open-source library (http://www.ffmpeg.sourceforge.net/download.php) that compresses video into MPEG-2, MPEG-4, Real Video, and many other video formats; (2) Iftop 0.16, an open-source tool (http://www.ex-parrot.com/~pdw/iftop/) that we used for monitoring network activity and bandwidth utilization; and (3) ACE 5.4.3 + TAO 1.4.3, an open-source (http://www.dre.vanderbilt.edu/TAO) implementation of the Real-time CORBA [12] specification upon which HyARM is built. TAO provides the CORBA Audio/Video (A/V) Streaming Service that we use to transmit the video from the UAVs to end receivers via the base station.

4.2 Experiment Configuration

Our experiment consisted of two (emulated) UAVs that simultaneously send video to the base station using the setup described in Section 4.1. At the base station, the video was retransmitted (without any modifications) to the end receivers, where it was stored to a file. Each UAV hosted two applications: one QoS-enabled application (emergency response) and one best-effort application (surveillance). Within each UAV, computational power is shared between the applications, while the network bandwidth is shared among all applications.

To evaluate the QoS provided by HyARM, we monitored CPU utilization at the two UAVs and network bandwidth utilization between the UAVs and the base station. CPU utilization was not monitored at the base station and the end receiver since they performed no
computationally intensive operations. The utilization of the 10 Mbps physical link between the base station and the end receiver does not affect application QoS and is not monitored by HyARM, since its bandwidth is nearly 10 times the 1 Mbps bandwidth of the LAN between the UAVs and the base station. The experiment also monitors properties of the video that affect application QoS, such as latency, jitter, frame rate, and resolution.

The set point on utilization for each resource was specified as 0.69, which is the upper bound typically recommended by scheduling techniques such as the rate monotonic algorithm [9]. Since studies [6] have shown that human eyes can perceive delays of more than 200 ms, we use this as the upper bound on the jitter of the received video. The QoS requirements for each class of application are specified during system initialization and are shown in Table 1.

4.3 Empirical Results and Analysis

This section presents the results obtained from running the experiment described in Section 4.2 on our DRE multimedia system testbed. We used system resource utilization as a metric to evaluate the adaptive resource management capabilities of HyARM under varying input workloads. We also used application QoS as a metric to evaluate HyARM's ability to support the end-to-end QoS requirements of the various classes of applications in the DRE multimedia system. We analyze these results to explain the significant differences in system performance and application QoS.

The comparison of system performance is decomposed into comparisons of resource utilization and application QoS. For system resource utilization, we compare (1) network bandwidth utilization of the local area network and (2) CPU utilization at the two UAV nodes. For application QoS, we compare mean values of video parameters, including (1) picture resolution, (2) frame rate, (3) latency, and (4) jitter.

Comparison of resource utilization. Over-utilization of system resources in DRE
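Using the paper's definition of jitter as inter-frame delay, the 200 ms check can be sketched as follows; the function names and the use of a mean (rather than some other statistic of the inter-frame gaps) are illustrative assumptions:

```python
# Illustrative check of the 200 ms jitter bound, using the paper's definition
# of jitter as inter-frame delay. Function names and the mean-based summary
# are assumptions for the sketch, not the paper's measurement procedure.
def mean_inter_frame_delay_ms(arrival_times_ms):
    """Mean gap between consecutive frame arrival timestamps (milliseconds)."""
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    return sum(gaps) / len(gaps)

def meets_jitter_bound(arrival_times_ms, bound_ms=200):
    return mean_inter_frame_delay_ms(arrival_times_ms) <= bound_ms
```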
systems can yield an unstable system. In contrast, under-utilization of system resources increases system cost. Figures 4 and 5 compare the system resource utilization with and without HyARM. Figure 4 shows that HyARM maintains system utilization close to the desired utilization set point during fluctuations in input workload by transmitting video of higher (or lower) QoS for the QoS-enabled (or best-effort) class of applications during over- (or under-) utilization of system resources. Figure 5 shows that without HyARM, network utilization was as high as 0.9 during increased workload conditions, exceeding the utilization set point of 0.7 by 0.2. As a result of this over-utilization of resources, the QoS of the received video, such as average latency and jitter, was affected significantly.

Without HyARM, system resources were either under-utilized or over-utilized, both of which are undesirable. In contrast, with HyARM, system resource utilization is always close to the desired set point, even during fluctuations in application workload. During sudden fluctuations in application workload, system conditions may be temporarily undesirable, but they are restored to the desired condition within several sampling periods. Temporary over-utilization of resources is permissible in our multimedia system since the quality of the video may be degraded for a short period of time, though application QoS would be degraded significantly if poor quality video were transmitted for a longer period of time.

Comparison of application QoS. Figure 6, Figure 7, and Table 2 compare the latency, jitter, and resolution and frame-rate of the received video, respectively.

Table 1: Application QoS Requirements
  Class        Resolution  Frame Rate  Latency (msec)  Jitter (msec)
  QoS-enabled  1024 x 768  25          200             200
  Best-effort  320 x 240   15          300             250

[Figure 4: Resource utilization with HyARM]
[Figure 5: Resource utilization without HyARM]

Table 2 shows that HyARM increases
the resolution and frame rate of QoS-enabled applications, but decreases the resolution and frame rate of best-effort applications. During over-utilization of system resources, the resolution and frame rate of lower priority applications are reduced to adapt to fluctuations in application workload and to maintain the utilization of resources at the specified set point. Figure 6 and Figure 7 show that HyARM reduces the latency and jitter of the received video significantly. These figures show that the QoS of QoS-enabled applications is greatly improved by HyARM. Although application parameters such as frame rate and resolution, which affect the soft QoS requirements of best-effort applications, may be compromised, the hard QoS requirements, such as latency and jitter, of all applications are met.

HyARM responds to fluctuations in resource availability and/or demand by constantly monitoring resource utilization. As shown in Figure 4, when resource utilization increases above the desired set point, HyARM lowers the utilization by reducing the QoS of best-effort applications. This adaptation ensures that enough resources are available for QoS-enabled applications to meet their QoS needs. Figures 6 and 7 show that the latency and jitter of the received video with HyARM are nearly half the corresponding values without HyARM. With HyARM, these values are well below the specified bounds, whereas without HyARM they are significantly above the specified bounds due to over-utilization of the network bandwidth, which leads to network congestion and results in packet loss. HyARM avoids this by reducing video parameters such as resolution and frame-rate and/or by modifying the compression scheme used to compress the video.

Our conclusions from analyzing the results described above are that applying adaptive middleware via hybrid control to DRE systems helps to (1) improve application QoS, (2)
increase system resource utilization, and (3) provide better predictability (lower latency and inter-frame delay) to QoS-enabled applications. These improvements are achieved largely through monitoring of system resource utilization, efficient system workload management, and adaptive resource provisioning, by means of HyARM's network/CPU resource monitors, application adapters, and central controller, respectively.

5. RELATED WORK

A number of control theoretic approaches have been applied to DRE systems recently. These techniques help overcome the limitations of traditional scheduling approaches, which handle dynamic changes in resource availability poorly and result in a rigidly scheduled system that adapts poorly to change. A survey of these techniques is presented in [1]. One such approach is feedback control scheduling (FCS) [2, 11]. FCS algorithms dynamically adjust resource allocation by means of software feedback control loops. FCS algorithms are modeled and designed using rigorous control-theoretic methodologies, and they provide robust and analytical performance assurances despite uncertainties in resource availability and/or demand.

Although existing FCS algorithms have shown promise, they often assume that the system has continuous control variable(s) that can be adjusted continuously. While this assumption holds for certain classes of systems, there are many classes of DRE systems, such as avionics and total-ship computing environments, that support only a finite, a priori set of discrete configurations. The control variables in such systems are therefore intrinsically discrete. HyARM handles both continuous control variables, such as picture resolution, and discrete control variables, such as a discrete set of frame rates. HyARM can therefore be applied to systems that support continuous and/or discrete sets of control variables. The DRE multimedia system described in Section 2 is an example DRE system that offers both
continuous (picture resolution) and discrete (frame-rate) control variables. These variables are modified by HyARM to achieve efficient resource utilization and improved application QoS.

[Figure 6: Comparison of Video Latency]
[Figure 7: Comparison of Video Jitter]

Table 2: Comparison of Video Quality
  Source                         Picture Size / Frame Rate
                                 With HyARM        Without HyARM
  UAV1 QoS-enabled application   1122 x 1496 / 25  960 x 720 / 20
  UAV1 Best-effort application   288 x 384 / 15    640 x 480 / 20
  UAV2 QoS-enabled application   1126 x 1496 / 25  960 x 720 / 20
  UAV2 Best-effort application   288 x 384 / 15    640 x 480 / 20

6. CONCLUDING REMARKS

Many distributed real-time and embedded (DRE) systems demand end-to-end quality of service (QoS) enforcement from their underlying platforms to operate correctly. These systems increasingly run in open environments, where resource availability is subject to dynamic change. To meet end-to-end QoS in dynamic environments, DRE systems can benefit from an adaptive middleware that monitors system resources, performs efficient application workload management, and enables efficient resource provisioning for executing applications.

This paper described HyARM, an adaptive middleware that provides effective resource management to DRE systems. HyARM employs hybrid control techniques to provide the adaptive middleware capabilities, such as resource monitoring and application adaptation, that are key to providing dynamic resource management for open DRE systems. We applied HyARM to a representative DRE multimedia system implemented using Real-time CORBA and the CORBA A/V Streaming Service. We evaluated the performance of HyARM in a system composed of three distributed resources and two classes of applications with two applications each. Our empirical results indicate that HyARM ensures (1) efficient resource utilization, by maintaining the utilization of system resources within the specified utilization
bounds, and (2) that the QoS requirements of QoS-enabled applications are met at all times. Overall, HyARM ensures efficient, predictable, and adaptive resource management for DRE systems.

7. REFERENCES
[1] T. F. Abdelzaher, J. Stankovic, C. Lu, R. Zhang, and Y. Lu. Feedback Performance Control in Software Services. IEEE Control Systems, 23(3), June 2003.
[2] L. Abeni, L. Palopoli, G. Lipari, and J. Walpole. Analysis of a Reservation-Based Feedback Scheduler. In IEEE Real-Time Systems Symposium, Dec. 2002.
[3] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss. An Architecture for Differentiated Services. Network Information Center RFC 2475, Dec. 1998.
[4] H. Franke, S. Nagar, C. Seetharaman, and V. Kashyap. Enabling Autonomic Workload Management in Linux. In Proceedings of the International Conference on Autonomic Computing (ICAC), New York, New York, May 2004. IEEE.
[5] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA, 1995.
[6] G. Ghinea and J. P. Thomas. QoS Impact on User Perception and Understanding of Multimedia Video Clips. In MULTIMEDIA '98: Proceedings of the Sixth ACM International Conference on Multimedia, pages 49-54, Bristol, United Kingdom, 1998. ACM Press.
[7] Internet Engineering Task Force. Differentiated Services Working Group (diffserv) Charter. www.ietf.org/html.charters/diffserv-charter.html, 2000.
[8] X. Koutsoukos, R. Tekumalla, B. Natarajan, and C. Lu. Hybrid Supervisory Control of Real-Time Systems. In 11th IEEE Real-Time and Embedded Technology and Applications Symposium, San Francisco, California, Mar. 2005.
[9] J. Lehoczky, L. Sha, and Y. Ding. The Rate Monotonic Scheduling Algorithm: Exact Characterization and Average Case Behavior. In Proceedings of the 10th IEEE Real-Time Systems Symposium (RTSS 1989), pages 166-171. IEEE Computer Society Press, 1989.
[10] J. Loyall, J. Gossett, C. Gill, R. Schantz, J.
Zinky, P. Pal, R. Shapiro, C. Rodrigues, M. Atighetchi, and D. Karr. Comparing and Contrasting Adaptive Middleware Support in Wide-Area and Embedded Distributed Object Applications. In Proceedings of the 21st International Conference on Distributed Computing Systems (ICDCS-21), pages 625-634. IEEE, Apr. 2001.
[11] C. Lu, J. A. Stankovic, G. Tao, and S. H. Son. Feedback Control Real-Time Scheduling: Framework, Modeling, and Algorithms. Real-Time Systems Journal, 23(1/2):85-126, July 2002.
[12] Object Management Group. Real-time CORBA Specification, OMG Document formal/02-08-02 edition, Aug. 2002.
[13] D. C. Schmidt, D. L. Levine, and S. Mungee. The Design and Performance of Real-Time Object Request Brokers. Computer Communications, 21(4):294-324, Apr. 1998.
[14] T. Sikora. Trends and Perspectives in Image and Video Coding. In Proceedings of the IEEE, Jan. 2005.
[15] X. Wang, H.-M. Huang, V. Subramonian, C. Lu, and C. Gill. CAMRIT: Control-based Adaptive Middleware for Real-time Image Transmission. In Proceedings of the 10th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), Toronto, Canada, May 2004.
system that distributes video in real-time.\nOur results indicate that HyARM yields predictable, stable, and high system performance, even in the face of fluctuating workload and resource availability.\n1.\nINTRODUCTION\nAchieving end-to-end real-time quality of service (QoS) is particularly important for open distributed real-time and embedded (DRE) systems that face resource constraints, such as limited computing power and network bandwidth.\nOverutilization of these system resources can yield unpredictable and unstable behavior, whereas under-utilization can yield excessive system cost.\nA promising approach to meeting \u2217 Contact author:nshankar@dre.vanderbilt.edu\nthese end-to-end QoS requirements effectively, therefore, is to develop and apply adaptive middleware [10, 15], which is software whose functional and QoS-related properties can be modified either statically or dynamically.\nStatic modifications are carried out to reduce footprint, leverage capabilities that exist in specific platforms, enable functional subsetting, and\/or minimize hardware\/software infrastructure dependencies.\nObjectives of dynamic modifications include optimizing system responses to changing environments or requirements, such as changing component interconnections, power-levels, CPU and network bandwidth availability, latency\/jitter, and workload.\nIn open DRE systems, adaptive middleware must make such modifications dependably, i.e., while meeting stringent end-to-end QoS requirements, which requires the specification and enforcement of upper and lower bounds on system resource utilization to ensure effective use of system resources.\nTo meet these requirements, we have developed the Hybrid Adaptive Resource-management Middleware (HyARM), which is an open-source' distributed resource management middleware.\nHyARM is based on hybrid control theoretic techniques [8], which provide a theoretical framework for designing control of complex system with both continuous and 
discrete dynamics.\nIn our case study, which involves a distributed real-time video distribution system, the task of adaptive resource management is to control the utilization of the different resources, whose utilizations are described by continuous variables.\nWe achieve this by adapting the resolution of the transmitted video, which is modeled as a continuous variable, and by changing the frame-rate and the compression, which are modeled by discrete actions.\nWe have implemented HyARM atop The ACE ORB (TAO) [13], which is an implementation of the Real-time CORBA specification [12].\nOur results show that (1) HyARM ensures effective system resource utilization and (2) end-to-end QoS requirements of higher priority applications are met, even in the face of fluctuations in workload.\nThe remainder of the paper is organized as follows: Section 2 describes the architecture, functionality, and resource utilization model of our DRE multimedia system case study; Section 3 explains the structure and functionality of HyARM; Section 4 evaluates the adaptive behavior of HyARM via experiments on our multimedia system case study; Section 5 compares our research on HyARM with related work; and Section 6 presents concluding remarks . 
The code and examples for HyARM are available at www.dre.vanderbilt.edu/~nshankar/HyARM/.

2. CASE STUDY: DRE MULTIMEDIA SYSTEM

This section describes the architecture and QoS requirements of our DRE multimedia system.

2.1 Multimedia System Architecture

Figure 1: DRE Multimedia System Architecture

The architecture for our DRE multimedia system is shown in Figure 1 and consists of the following entities: (1) Data source (video capture by UAV), where video related to a subject of interest is captured by camera(s) on each UAV, the raw video is encoded using a specific encoding scheme, and the video is transmitted to the next stage in the pipeline. (2) Data distributor (base station), where the video is processed to remove noise and then retransmitted to the next stage in the pipeline. (3) Sinks (command and control center), where the received video is again processed to remove noise, then decoded and finally rendered to the end user via graphical displays.

Significant improvements in video encoding/decoding and (de)compression have resulted from recent advances in these techniques [14]. Common video compression schemes are MPEG-1, MPEG-2, Real Video, and MPEG-4. Each compression scheme is characterized by its resource requirements, e.g., the computational power to (de)compress the video signal and the network bandwidth required to transmit the compressed video signal. Properties of the compressed video, such as resolution and frame rate, determine both the quality and the resource requirements of the video.

Our multimedia system case study has the following end-to-end real-time QoS requirements: (1) latency, (2) inter-frame delay (also known as jitter), (3)
frame rate, and (4) picture resolution. These QoS requirements can be classified as either hard or soft. Hard QoS requirements should be met by the underlying system at all times, whereas soft QoS requirements can be missed occasionally.2 For our case study, we treat QoS requirements such as latency and jitter as harder QoS requirements and strive to meet them at all times. In contrast, we treat QoS requirements such as video frame rate and picture resolution as softer QoS requirements and modify these video properties adaptively to handle dynamic changes in resource availability effectively.

(2 Although hard and soft are often portrayed as two discrete requirement sets, in practice they are usually two ends of a continuum ranging from "softer" to "harder" rather than two disjoint points.)

2.2 DRE Multimedia System Resources

There are two primary types of resources in our DRE multimedia system: (1) processors that provide computational power at the UAVs, base stations, and end receivers and (2) network links that provide communication bandwidth between UAVs, base stations, and end receivers. The computing power required by the video capture and encoding tasks depends on dynamic factors, such as the speed of the UAV, the speed of the subject (if the subject is mobile), and the distance between the UAV and the subject. The wireless network bandwidth available to transmit video captured by the UAVs to the base stations depends on the wireless connectivity between the UAVs and the base station, which in turn depends on dynamic factors such as the speed of the UAVs and the relative distance between the UAVs and the base stations. The bandwidth of the link between the base station and the end receiver is limited, but more stable than the bandwidth of the wireless network. Resource requirements and resource availability are thus subject to dynamic change.

Two classes of applications, QoS-enabled and best-effort, use the multimedia system infrastructure described above to transmit video to their respective receivers. The QoS-enabled class of applications has higher priority than the best-effort class. In our study, emergency response applications belong to the QoS-enabled class and surveillance applications belong to the best-effort class. For example, since a stream from an emergency response application is of higher importance than a video stream from a surveillance application, it receives more resources end-to-end.

Since resource availability significantly affects QoS, we use current resource utilization as the primary indicator of system performance. We refer to the current level of system resource utilization as the system condition. Based on this definition, we can classify system conditions as under-, over-, or effectively utilized.

Under-utilization of system resources occurs when the current resource utilization is lower than the desired lower bound on resource utilization. In this system condition, residual system resources (i.e., network bandwidth and computational power) are available in large amounts after the end-to-end QoS requirements of applications are met. These residual resources can be used to increase the QoS of the applications; for example, residual CPU and network bandwidth can be used to deliver better quality video (with greater resolution and a higher frame rate) to end receivers.

Over-utilization of system resources occurs when the current resource utilization is higher than the desired upper bound on resource utilization. This condition can arise from a loss of resources (network bandwidth and/or computing power at the base station, end receiver, or UAV) or from an increase in resource demands by applications. Over-utilization is generally undesirable since the quality of the received video (such as resolution and frame rate) and timeliness properties (such as latency and jitter) are degraded and may result in an unstable (and thus ineffective)
system.

Effective resource utilization is the desired system condition since it ensures that the end-to-end QoS requirements of the UAV-based multimedia system are met and the utilization of both system resources, i.e., network bandwidth and computational power, stays within the desired utilization bounds.

Article 7

Section 3 describes the techniques we applied to achieve effective utilization, even in the face of fluctuating resource availability and/or demand.

3. OVERVIEW OF HYARM

This section describes the architecture of the Hybrid Adaptive Resource-management Middleware (HyARM). HyARM ensures efficient and predictable system performance by providing adaptive resource management, including monitoring of system resources and enforcing bounds on application resource utilization.

3.1 HyARM Structure and Functionality

Figure 2: HyARM Architecture

HyARM is composed of three types of entities, shown in Figure 2 and described below:

Resource monitors observe the overall resource utilization for each type of resource and the resource utilization per application. In our multimedia system, there are resource monitors for CPU utilization and network bandwidth. CPU monitors observe the CPU utilization of the UAVs, base station, and end receivers. Network bandwidth monitors observe the network utilization of (1) the wireless network link between the UAVs and the base station and (2) the wired network link between the base station and the end receivers.

The central controller maintains the system resource utilization below a desired bound by (1) processing the periodic updates it receives from the resource monitors and (2) modifying the execution of applications accordingly, e.g., by using different execution algorithms or operating the application with increased/decreased QoS. This adaptation process ensures that system resources are utilized efficiently and end-to-end application QoS requirements are met. In our multimedia system, the HyARM controller determines the values of application parameters such as (1) the video compression scheme, such as Real Video or MPEG-4, (2) the frame rate, and (3) the picture resolution. From the perspective of hybrid control theoretic techniques [8], the video compression scheme and frame rate form the discrete variables of application execution and the picture resolution forms the continuous variable.

Application adapters modify application execution according to the parameters recommended by the controller and ensure that the operation of the application is in accordance with the recommended parameters. In the current implementation of HyARM, the application adapter modifies the input parameters that affect application QoS and resource utilization: compression scheme, frame rate, and picture resolution. In future implementations, we plan to use resource reservation mechanisms such as Differentiated Services [7, 3] and Class-based Kernel Resource Management [4] to provision/reserve network and CPU resources. In our multimedia system, the application adapter ensures that the video is encoded at the recommended frame rate and resolution using the specified compression scheme.

3.2 Applying HyARM to the Multimedia System Case Study

HyARM is built atop TAO [13], a widely used open-source implementation of Real-time CORBA [12]. HyARM can be applied to ensure efficient, predictable, and adaptive resource management of any DRE system where resource availability and requirements are subject to dynamic change.

Figure 3 shows the interaction of the various parts of the DRE multimedia system developed with HyARM, TAO, and TAO's A/V Streaming Service. TAO's A/V Streaming Service is an implementation of the CORBA A/V Streaming Service specification: a QoS-enabled video distribution service that can transfer video in real-time to one or more receivers. We use the A/V Streaming Service to transmit the video from the UAVs to the end receivers via the
base station.

Figure 3: Developing the DRE Multimedia System with HyARM

Three entities of HyARM, namely the resource monitors, central controller, and application adapters, are built as CORBA servants, so they can be distributed throughout a DRE system. Resource monitors are remote CORBA objects that update the central controller periodically with the current resource utilization. Application adapters are collocated with applications since the two interact closely.

As shown in Figure 3, the UAVs compress the data using various compression schemes, such as MPEG-1, MPEG-4, and Real Video, and use TAO's A/V Streaming Service to transmit the video to the end receivers. HyARM's resource monitors continuously observe the system resource utilization and notify the central controller of the current utilization.3 The interaction between the controller and the resource monitors uses the Observer pattern [5]. When the controller receives resource utilization updates from the monitors, it computes the necessary modifications to application parameters and notifies the application adapter(s) via a remote operation call. The application adapter(s), collocated with the application, modify the input parameters to the application (in our case, the video encoder) to modify the application's resource utilization and QoS.

(3 The base station is not included in the figure since it only retransmits the video received from the UAVs to the end receivers.)

4. PERFORMANCE RESULTS AND ANALYSIS

This section first describes the testbed that provides the infrastructure for our DRE multimedia system, which was used to evaluate the performance of HyARM. We then describe our experiments and analyze the results obtained to empirically evaluate how HyARM behaves during under- and over-utilization of system resources.

4.1 Overview of the Hardware and Software Testbed

Our experiments were performed on the Emulab testbed at the University of Utah. The hardware configuration consists of two nodes acting as UAVs, one acting as the base station, and one as the end receiver. Video from the two UAVs was transmitted to the base station via a LAN configured with the following properties: an average packet loss ratio of 0.3 and a bandwidth of 1 Mbps. The network bandwidth was chosen to be 1 Mbps since each UAV in the DRE multimedia system is allocated 250 Kbps. These parameters were chosen to emulate an unreliable wireless network with limited bandwidth between the UAVs and the base station. From the base station, the video was retransmitted to the end receiver via a reliable wireline link of 10 Mbps bandwidth with no packet loss.

The hardware configuration of each node was as follows: 600 MHz Intel Pentium III processor, 256 MB physical memory, 4 Intel EtherExpress Pro 10/100 Mbps Ethernet ports, and a 13 GB hard drive. A real-time version of Linux, TimeSys Linux/NET 3.1.214 based on RedHat Linux 9, was used as the operating system for all nodes. The following software packages were also used for our experiments: (1) Ffmpeg 0.4.9-pre1, an open-source library (http://www.ffmpeg.sourceforge.net/download.php) that compresses video into MPEG-2, MPEG-4, Real Video, and many other video formats; (2) Iftop 0.16, an open-source library (http://www.ex-parrot.com/~pdw/iftop/) that we used for monitoring network activity and bandwidth utilization; and (3) ACE 5.4.3 + TAO 1.4.3, an open-source (http://www.dre.vanderbilt.edu/TAO) implementation of the Real-time CORBA [12] specification upon which HyARM is built. TAO provides the CORBA Audio/Video (A/V) Streaming Service that we use to transmit the video from the UAVs to the end receivers via the base station.

4.2 Experiment Configuration

Our experiment consisted of two (emulated) UAVs that simultaneously send video to the base station using the experimental setup described in Section 4.1. At the base station, video was retransmitted to the end receivers (without any modifications), where it was
stored to a file. Each UAV hosted two applications, one QoS-enabled application (emergency response) and one best-effort application (surveillance). Within each UAV, computational power is shared between the applications, while the network bandwidth is shared among all applications.

To evaluate the QoS provided by HyARM, we monitored CPU utilization at the two UAVs and network bandwidth utilization between the UAVs and the base station. CPU utilization was not monitored at the base station and the end receiver since they performed no computationally intensive operations. The utilization of the 10 Mbps physical link between the base station and the end receiver does not affect the QoS of applications and is not monitored by HyARM, since its capacity is nearly 10 times the 1 Mbps bandwidth of the LAN between the UAVs and the base station. The experiment also monitored properties of the video that affect the QoS of the applications, such as latency, jitter, frame rate, and resolution.

The set point on utilization for each resource was specified at 0.69, which is the upper bound typically recommended by scheduling techniques such as the rate monotonic algorithm [9]. Since studies [6] have shown that human eyes can perceive delays greater than 200 ms, we use this as the upper bound on the jitter of the received video. The QoS requirements for each class of application are specified during system initialization and are shown in Table 1.

4.3 Empirical Results and Analysis

This section presents the results obtained from running the experiment described in Section 4.2 on our DRE multimedia system testbed. We used system resource utilization as a metric to evaluate the adaptive resource management capabilities of HyARM under varying input workloads. We also used application QoS as a metric to evaluate HyARM's capabilities to support the end-to-end QoS requirements of the various classes of applications in the DRE multimedia system. We analyze these results to explain the significant differences in system performance and application QoS.

The comparison of system performance is decomposed into a comparison of resource utilization and a comparison of application QoS. For system resource utilization, we compare (1) network bandwidth utilization of the local area network and (2) CPU utilization at the two UAV nodes. For application QoS, we compare mean values of video parameters, including (1) picture resolution, (2) frame rate, (3) latency, and (4) jitter.

Comparison of resource utilization. Over-utilization of system resources in DRE systems can yield an unstable system. In contrast, under-utilization of system resources increases system cost. Figure 4 and Figure 5 compare system resource utilization with and without HyARM. Figure 4 shows that HyARM maintains system utilization close to the desired set point during fluctuations in input workload by transmitting video of higher (or lower) QoS for the QoS-enabled (or best-effort) class of applications during over- (or under-) utilization of system resources. Figure 5 shows that without HyARM, network utilization was as high as 0.9 during increased workload conditions, roughly 0.2 greater than the utilization set point of 0.69. As a result of this over-utilization of resources, the QoS of the received video, such as average latency and jitter, was affected significantly.

Without HyARM, system resources were either under-utilized or over-utilized, both of which are undesirable. In contrast, with HyARM, system resource utilization stays close to the desired set point, even during fluctuations in application workload. During sudden fluctuations in application workload, system conditions may be temporarily undesirable, but they are restored to the desired condition within several sampling periods. Temporary over-utilization of resources is permissible in our multimedia system since the quality of the video may be degraded for a short period of time, though application QoS would be degraded significantly if poor quality video were transmitted for a longer period of time.

Comparison of application QoS. Figure 6, Figure 7, and Table 2 compare the latency, jitter, and resolution and frame rate of the received video, respectively.

Table 1: Application QoS Requirements
Figure 4: Resource utilization with HyARM
Figure 5: Resource utilization without HyARM

Table 2 shows that HyARM increases the resolution and frame rate of QoS-enabled applications, but decreases the resolution and frame rate of best-effort applications. During over-utilization of system resources, the resolution and frame rate of lower priority applications are reduced to adapt to fluctuations in application workload and to maintain the utilization of resources at the specified set point. Figure 6 and Figure 7 show that HyARM reduces the latency and jitter of the received video significantly. These figures show that the QoS of QoS-enabled applications is greatly improved by HyARM. Although application parameters such as frame rate and resolution, which affect the soft QoS requirements of best-effort applications, may be compromised, the hard QoS requirements, such as latency and jitter, of all applications are met.

HyARM responds to fluctuations in resource availability and/or demand by constant monitoring of resource utilization. As shown in Figure 4, when resource utilization rises above the desired set point, HyARM lowers the utilization by reducing the QoS of best-effort applications. This adaptation ensures that enough resources are available for QoS-enabled applications to meet their QoS needs. Figures 6 and 7 show that the latency and jitter of the received video with HyARM are nearly half the corresponding values without HyARM. With HyARM, these parameters are well below the specified bounds, whereas without HyARM they are significantly above the specified bounds due to over-utilization of the network
bandwidth, which leads to network congestion and results in packet loss.\nHyARM avoids this by reducing video parameters such as resolution and frame rate, and\/or modifying the compression scheme used to compress the video.\nOur conclusions from analyzing the results described above are that applying adaptive middleware via hybrid control to DRE systems helps to (1) improve application QoS, (2) increase system resource utilization, and (3) provide better predictability (lower latency and inter-frame delay) to QoS-enabled applications.\nThese improvements are achieved largely due to monitoring of system resource utilization, efficient system workload management, and adaptive resource provisioning by means of HyARM's network\/CPU resource monitors, application adapter, and central controller, respectively.\n5.\nRELATED WORK\nA number of control-theoretic approaches have been applied to DRE systems recently.\nThese techniques aid in overcoming limitations of traditional scheduling approaches, which handle dynamic changes in resource availability poorly and result in a rigidly scheduled system that adapts poorly to change.\nA survey of these techniques is presented in [1].\nOne such approach is feedback control scheduling (FCS) [2, 11].\nFCS algorithms dynamically adjust resource allocation by means of software feedback control loops.\nFCS algorithms are modeled and designed using rigorous control-theoretic methodologies.\nThese algorithms provide robust and analytical performance assurances despite uncertainties in resource availability and\/or demand.\nAlthough existing FCS algorithms have shown promise, these algorithms often assume that the system has continuous control variable(s) that can be adjusted continuously.\nWhile this assumption holds for certain classes of systems, there are many classes of DRE systems, such as avionics and total-ship computing environments, that support only a finite, a priori known set of discrete configurations.\nThe control variables in such
systems are therefore intrinsically discrete.\nHyARM handles both continuous control variables, such as picture resolution, and discrete control variables, such as a discrete set of frame rates.\nHyARM can therefore be applied to systems that support continuous and\/or discrete sets of control variables.\nThe DRE multimedia system described in Section 2 is an example DRE system that offers both continuous (picture resolution) and discrete (frame rate) control variables.\nThese variables are modified by HyARM to achieve efficient resource utilization and improved application QoS.\n6.\nCONCLUDING REMARKS\nFigure 6: Comparison of Video Latency Figure 7: Comparison of Video Jitter\nTable 2: Comparison of Video Quality\nMany distributed real-time and embedded (DRE) systems demand end-to-end quality of service (QoS) enforcement from their underlying platforms to operate correctly.\nThese systems increasingly run in open environments, where resource availability is subject to dynamic change.\nTo meet end-to-end QoS in dynamic environments, DRE systems can benefit from an adaptive middleware that monitors system resources, performs efficient application workload management, and enables efficient resource provisioning for executing applications.\nThis paper described HyARM, an adaptive middleware that provides effective resource management to DRE systems.\nHyARM employs hybrid control techniques to provide adaptive middleware capabilities, such as resource monitoring and application adaptation, that are key to providing dynamic resource management for open DRE systems.\nWe applied HyARM to a representative DRE multimedia system that is implemented using Real-time CORBA and the CORBA A\/V Streaming Service.\nWe evaluated the performance of HyARM in a system composed of three distributed resources and two classes of applications with two applications each.\nOur empirical results indicate that HyARM ensures (1) efficient resource utilization by
maintaining the utilization of system resources within the specified bounds and (2) that the QoS requirements of QoS-enabled applications are met at all times.\nOverall, HyARM ensures efficient, predictable, and adaptive resource management for DRE systems.","keyphrases":["adapt resourc manag","hybrid adapt resourcemanag middlewar","hybrid control techniqu","distribut real-time embed system","servic end-to-end qualiti","real-time video distribut system","real-time corba specif","video encod\/decod","resourc reserv mechan","dynam environ","stream servic","distribut real-time emb system","hybrid system","servic qualiti"],"prmu":["P","P","P","M","R","M","U","M","M","U","M","M","R","R"]} {"id":"C-55","title":"Context Awareness for Group Interaction Support","abstract":"In this paper, we present an implemented system for supporting group interaction in mobile distributed computing environments. First, an introduction to context computing and a motivation for using contextual information to facilitate group interaction is given.
We then present the architecture of our system, which consists of two parts: a subsystem for location sensing that acquires information about the location of users as well as spatial proximities between them, and one for the actual context-aware application, which provides services for group interaction.","lvl-1":"Context Awareness for Group Interaction Support Alois Ferscha, Clemens Holzmann, Stefan Oppl Institut f\u00fcr Pervasive Computing, Johannes Kepler Universit\u00e4t Linz Altenbergerstra\u00dfe 69, A-4040 Linz {ferscha,holzmann,oppl}@soft.uni-linz.ac.at ABSTRACT In this paper, we present an implemented system for supporting group interaction in mobile distributed computing environments.\nFirst, an introduction to context computing and a motivation for using contextual information to facilitate group interaction is given.\nWe then present the architecture of our system, which consists of two parts: a subsystem for location sensing that acquires information about the location of users as well as spatial proximities between them, and one for the actual context-aware application, which provides services for group interaction.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems - distributed applications.\nH.1.2 [Models and Principles]: User\/Machine Systems - human factors.\nH.5.3 [Information Interfaces and Presentation]: Group and Organization Interfaces - asynchronous interaction, collaborative computing, theory and models, synchronous interaction.\nGeneral Terms Design, Experimentation 1.\nINTRODUCTION Today's computing environments are characterized by an increasing number of powerful, wirelessly connected mobile devices.\nUsers can move throughout an environment while carrying their computers with them and having remote access to information and services, anytime and anywhere.\nNew situations appear, where the user's context - for example his current location or nearby people - is more dynamic;
computation does not occur at a single location and in a single context any longer, but comprises a multitude of situations and locations.\nThis development leads to a new class of applications, which are aware of the context in which they run and thus bring virtual and real worlds together.\nMotivated by this and the fact that only a few studies have been done on supporting group communication in such computing environments [12], we have developed a system, which we refer to as Group Interaction Support System (GISS).\nIt supports group interaction in mobile distributed computing environments in a way that group members no longer need to be at the same place in order to interact with each other or just to be aware of the others' situation.\nIn the following subchapters, we will give a short overview of context-aware computing and motivate its benefits for supporting group interaction.\nA software framework for developing context-sensitive applications is presented, which serves as middleware for GISS.\nChapter 2 presents the architecture of GISS, and chapters 3 and 4 discuss the location sensing and group interaction concepts of GISS in more detail.\nChapter 5 gives a final summary of our work.\n1.1 What is Context Computing?\nAccording to Merriam-Webster's Online Dictionary1 , context is defined as the interrelated conditions in which something exists or occurs.\nBecause this definition is very general, many approaches have been made to define the notion of context with respect to computing environments.\nMost definitions of context are done by enumerating examples or by choosing synonyms for context.\nThe term context-aware was first introduced in [10], where context is referred to as location, identities of nearby people and objects, and changes to those objects.\nIn [2], context is also defined by an enumeration of examples, namely location, identities of the people around the user, the time of the day, season, temperature etc.
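The enumerated definitions above all characterize context along a handful of descriptive fields per user. As a minimal illustrative sketch (the class, field, and directory names here are ours, not taken from the cited frameworks), such per-user context and the derivation of secondary context from a primary dimension could look like this:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    """Hypothetical record of the four primary context dimensions."""
    location: str       # symbolic location, e.g. "Room P111"
    identity: str       # user id
    time: datetime
    activity: str       # e.g. "in a meeting"

# Secondary context types can be derived from a primary dimension:
# here, identity is used to look up an e-mail address in a
# (hypothetical) directory.
DIRECTORY = {"user1": "user1@example.org"}

def email_of(ctx: Context) -> str:
    return DIRECTORY.get(ctx.identity, "unknown")

ctx = Context("Room P111", "user1", datetime(2004, 5, 1, 10, 0), "in a meeting")
print(email_of(ctx))  # -> user1@example.org
```

A context-aware system would populate such records from sensors and use them, as described below, to trigger presentation, execution, or tagging features.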
[9] defines context as the user's location, environment, identity and time.\nHere we conform to a widely accepted and more formal definition, which defines context as any information that can be used to characterize the situation of an entity.\nAn entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves [4].\n[4] identifies four primary types of context information (sometimes referred to as context dimensions) that are - with respect to characterizing the situation of an entity - more important than others.\nThese are location, identity, time and activity, which can also be used to derive other sources of contextual information (secondary context types).\nFor example, if we know a person's identity, we can easily derive related information about this person from several data sources (e.g. date of birth or e-mail address).\nAccording to this definition, [4] defines a system to be context-aware if it uses context to provide relevant information and\/or services to the user, where relevancy depends on the user's task.\n[4] also gives a classification of features for context-aware applications, which comprises presentation of information and services to a user, automatic execution of a service and tagging of context to information for later retrieval.\nFigure 1.\nLayers of a context-aware system Context computing is based on two major issues, namely identifying relevant context (identity, location, time, activity) and using obtained context (automatic execution, presentation, tagging).\nIn order to do this, there are a few layers in between (see Figure 1).\nFirst, the obtained low-level context information has to be transformed, aggregated and interpreted (context transformation) and represented in an abstract context world model (context representation), either centralized or decentralized.\nFinally, the stored context information is used to trigger certain
context events (context triggering) [7].\n1.2 Group Interaction in Context After these abstract and formal definitions of what context and context computing are, we will now focus on the main goal of this work, namely how the interaction of mobile group members can be supported by using context information.\nIn [6] we have identified organizational systems to be crucial for supporting mobile groups (see Figure 2).\nFirst, there has to be an Information and Knowledge Management System, which is capable of supporting a team with its information-processing and knowledge-gathering needs.\nThe next part is the Awareness System, which is dedicated to the perceptualisation of the effects of team activity.\nIt does this by communicating work context, agenda and workspace information to the users.\nThe Interaction Systems provide support for the communication among team members, either synchronous or asynchronous, and for the shared access to artefacts, such as documents.\nMobility Systems deploy mechanisms to enable any-place access to team memory as well as the capturing and delivery of awareness information from and to any place.\nFinally, the Organisational Innovation System integrates aspects of the team itself, like roles, leadership and shared facilities.\nWith respect to these five aspects of team support, we focus on interaction and partly cover mobility- and awareness-support.\nGroup interaction includes all means that enable group members to communicate freely with all the other members.\nAt this point, the question of how context information can be used for supporting group interaction comes up.\nWe believe that information about the current situation of a person provides a surplus value to existing group interaction systems.\nContext information facilitates group interaction by allowing each member to be aware of the availability status or the current location of every other group member, which again makes it possible to form groups dynamically,
to place virtual post-its in the real world or to determine which people are around.\nFigure 2.\nSupport for Mobile Groups [6] Most of today's context-aware applications use location and time only, and location is referred to as a crucial type of context information [3].\nWe also see the importance of location information in mobile and ubiquitous environments, wherefore a main focus of our work is on the utilization of location information and information about users in spatial proximity.\nNevertheless, we believe that location, as the only used type of context information, is not sufficient to support group interaction, wherefore we also take advantage of the other three primary types, namely identity, time and activity.\nThis provides a comprehensive description of a user's current situation, thus enabling numerous means for supporting group interaction, which are described in detail in chapter 4.4.\nWhen we look at the types of context information stated above, we can see that all of them are single-user-centred, taking into account only the context of the user itself.\nWe believe that, for the support of group interaction, the status of the group itself also has to be taken into account.\nTherefore, we have added a fifth context dimension, group context, which comprises more than the sum of the individual members' contexts.\nGroup context includes any information about the situation of a whole group, for example how many members a group currently has or whether a certain group meets right now.\n1.3 Context Middleware The Group Interaction Support System (GISS) uses the software framework introduced in [1], which serves as a middleware for developing context-sensitive applications.\nThis so-called Context Framework is based on a distributed communication architecture and it supports different kinds of transport protocols and message coding mechanisms.\nA main feature of the framework is the abstraction of context information retrieval via various sensors and its
delivery to a level where, for the application designer, no difference appears between these different kinds of context retrieval mechanisms; the information retrieval is hidden from the application developer.\nThis is achieved by so-called entities, which describe objects - e.g. a human user - that are important for a certain context scenario.\nEntities express their functionality by the use of so-called attributes, which can be loaded into the entity.\nThese attributes are complex pieces of software, which are implemented as Java classes.\nTypical attributes are encapsulations of sensors, but they can also be used to implement context services, for example to notify other entities about location changes of users.\nEach entity can contain a collection of such attributes, where an entity itself is an attribute.\nThe initial set of attributes an entity contains can change dynamically at runtime, if an entity loads or unloads attributes from the local storage or over the network.\nIn order to load and deploy new attributes, an entity has to reference a class loader and a transport and lookup layer, which manages the lookup mechanism for discovering other entities and the transport.\nXML configuration files specify which initial set of entities should be loaded and which attributes these entities own.\nThe communication between entities and attributes is based on context events.\nEach attribute is able to trigger events, which are addressed to other attributes and entities respectively, independently of which physical computer they are running on.\nAmong other things, an event contains the name of the event and a list of parameters delivering information about the event itself.\nRelated to this event-based architecture is the use of ECA (Event-Condition-Action) rules for defining the behaviour of the context system.\nTherefore, every entity has a rule-interpreter, which catches triggered events, checks conditions associated with them and causes certain actions.\nThese
rules are referenced by the entity's XML configuration.\nA rule itself is even able to trigger the insertion of new rules or the unloading of existing rules at runtime in order to change the behaviour of the context system dynamically.\nTo sum up, the Context Framework provides a flexible, distributed architecture for hiding low-level sensor data from high-level applications and it hides external communication details from the application developer.\nFurthermore, it is able to adapt its behaviour dynamically by loading attributes, entities or ECA-rules at runtime.\n2.\nARCHITECTURE OVERVIEW As GISS uses the Context Framework described in chapter 1.3 as middleware, every user is represented by an entity, as is the central server, which is responsible for context transformation, context representation and context triggering (cf. Figure 1).\nA main part of our work is about the automated acquisition of position information and its sensor-independent provision at application level.\nWe do not only sense the current location of users, but also determine spatial proximities between them.\nDeveloping the architecture, we focused on keeping the client as simple as possible and reducing the communication between client and server to a minimum.\nEach client may have various location- and\/or proximity-sensors attached, which are encapsulated by respective Context Framework-attributes (Sensor Encapsulation).\nThese attributes are responsible for integrating native sensor-implementations into the Context Framework and sending sensor-dependent position information to the server.\nWe consider it very important to support different types of sensors, even at the same time, in order to improve location accuracy on the one hand, while providing a pervasive location-sensing environment with seamless transitions between different location-sensing techniques on the other hand.\nAll location- and proximity-sensors supported are represented by server-side context-attributes, which
correspond to the client-side Sensor Encapsulation-attributes and abstract the sensor-dependent position information received from all users via the wireless network (Sensor Abstraction).\nThis requires a context repository, where the mapping of diverse physical positions to standardized locations is stored.\nThe standardized location- and proximity-information of each user is then passed to the so-called Sensor Fusion-attributes, one for symbolic locations and a second one for spatial proximities.\nTheir job is to merge the location- and proximity-information of clients, respectively, which is described in detail in Chapter 3.3.\nEvery time the symbolic location of a user or the spatial proximity between two users changes, the Sensor Fusion-attributes notify the GISS Core-attribute, which controls the application.\nBecause of the abstraction of sensor-dependent position information, the system can easily be extended by additional sensors, just by implementing the (typically two) attributes for encapsulating sensors (some sensors may not need a client-side part), abstracting physical positions and observing the interface to GISS Core.\nFigure 3.\nArchitecture of the Group Interaction Support System (GISS) The GISS Core-attribute is the central coordinator of the application as it presents itself to the user.\nIt not only serves as an interface to the location-sensing subsystem, but also collects further context information in other dimensions (time, identity or activity).\nEvery time a change in the context of one or more users is detected, GISS Core evaluates the effect of these changes on the user, on the groups he belongs to and on the other members of these groups.\nWhenever necessary, events are thrown to the affected clients to trigger context-aware activities, like changing the presentation of awareness information or the execution of services.\nThe client-side part of the application is kept as simple as possible.\nFurthermore, modular design was not only an issue on
the sensor side but also when designing the user interface architecture.\nThus, the complete user interface can easily be exchanged, if all of the defined events are taken into account and understood by the new interface-attribute.\nThe currently implemented user interface is split up into two parts, which are also represented by two attributes.\nThe central attribute on the client side is the so-called Instant Messenger Encapsulation, which on the one hand interacts with the server through events and on the other hand serves as a proxy for the external application the user interface is built on.\nAs external application, we use an existing open-source instant messenger - the ICQ2 -compliant Simple Instant Messenger (SIM)3 .\nWe have chosen an instant messenger as front-end because it provides a well-known interface for most users and facilitates a seamless integration of group interaction support, thus increasing acceptance and ease of use.\nAs the basic functionality of the instant messenger - to serve as a client in an instant messenger network - remains fully functional, our application is able to use the features already provided by the messenger.\nFor example, the contexts activity and identity are derived from the messenger network, as described later.\nThe Instant Messenger Encapsulation is also responsible for supporting group communication.\nThrough the interface of the messenger, it provides means of synchronous and asynchronous communication as well as a context-aware reminder system and tools for managing groups and one's own availability status.\nThe second part of the user interface is a visualisation of the users' locations, which is implemented in the attribute Viewer.\nThe current implementation provides a two-dimensional map of the campus, but it can easily be replaced by other visualisations, for example a three-dimensional VRML-model.\nFurthermore, this visualisation is used to show the artefacts for asynchronous communication.\nBased on a floor
plan-view of the geographical area the user currently resides in, it gives a quick overview of which people are nearby and their state, and provides means to interact with them.\nIn the following chapters 3 and 4, we describe the location-sensing backend and the application front-end for supporting group interaction in more detail.\n3.\nLOCATION SENSING In the following chapter, we will introduce a location model, which is used for representing locations; afterwards, we will describe the integration of location- and proximity-sensors in more detail.\n2 http:\/\/www.icq.com\/ 3 http:\/\/sim-icq.sourceforge.net\nFinally, we will have a closer look at the fusion of location- and proximity-information acquired by various sensors.\n3.1 Location Model A location model (i.e. a context representation for the context information location) is needed to represent the locations of users, in order to be able to facilitate location-related queries like given a location, return a list of all the objects there or given an object, return its current location.\nIn general, there are two approaches [3,5]: symbolic models, which represent location as abstract symbols, and geometric models, which represent location as coordinates.\nWe have chosen a symbolic location model, which refers to locations as abstract symbols like Room P111 or Physics Building, because we do not require geometric location data.\nInstead, abstract symbols are more convenient for human interaction at application level.\nFurthermore, we use a symbolic location containment hierarchy similar to the one introduced in [11], which consists of top-level regions, which contain buildings, which contain floors, and the floors again contain rooms.\nWe also distinguish four types, namely region (e.g. a whole campus), section (e.g. a building or an outdoor section), level (e.g. a certain floor in a building) and area (e.g.
a certain room).\nWe introduce a fifth type of location, which we refer to as semantic.\nThese so-called semantic locations can appear at any level in the hierarchy and they can be nested, but they do not necessarily have a geographic representation.\nExamples for such semantic locations are tagged objects within a room (e.g. a desk and a printer on this desk) or the name of a department, which contains certain rooms.\nFigure 4.\nSymbolic Location Containment Hierarchy The hierarchy of symbolic locations as well as the type of each position is stored in the context repository.\n3.2 Sensors Our architecture supports two different kinds of sensors: location sensors, which acquire location information, and proximity sensors, which detect spatial proximities between users.\nAs described above, each sensor has a server-side and in most cases a corresponding client-side implementation, too.\nWhile the client-side attributes (Sensor Encapsulation) are responsible for acquiring low-level sensor data and transmitting it to the server, the corresponding Sensor Abstraction-attributes transform it into a uniform and sensor-independent format, namely symbolic locations and IDs of users in spatial proximity, respectively.\nAfterwards, the respective attribute Sensor Fusion is triggered with this sensor-independent information of a certain user, detected by a particular sensor.\nSuch notifications are performed every time the sensor acquires new information.\nAccordingly, the Sensor Encapsulation-attributes are responsible for detecting when a certain sensor is no longer available on the client side (e.g. if it has been unplugged by the user) or when position respectively proximity can no longer be determined (e.g.
RFID reader cannot detect tags) and notifying the corresponding sensor fusion about this.\n3.2.1 Location Sensors In order to sense physical positions, the Sensor Encapsulation-attributes asynchronously transmit sensor-dependent position information to the server.\nThe corresponding location Sensor Abstraction-attributes collect these physical positions delivered by the sensors of all users, and perform a repository-lookup in order to get the associated symbolic location.\nThis requires certain tables for each sensor, which map physical positions to symbolic locations.\nOne physical position may have multiple symbolic locations at different accuracy-levels in the location hierarchy assigned to it, for example if a sensor covers several rooms.\nIf such a mapping can be found, an event is thrown in order to notify the attribute Location Sensor Fusion about the symbolic locations a certain sensor of a particular user has determined.\nWe have prototypically implemented three kinds of location sensors, which are based on WLAN (IEEE 802.11), Bluetooth and RFID (Radio Frequency Identification).\nWe have chosen these three completely different sensors because of their differences concerning accuracy, coverage and administrative effort, in order to evaluate the flexibility of our system (see Table 1).\nThe most accurate one is an RFID sensor, which is based on an active RFID-reader.\nAs soon as the reader is plugged into the client, it scans for active RFID tags in range and transmits their serial numbers to the server, where they are mapped to symbolic locations.\nWe also take into account RSSI (Received Signal Strength Indication), which provides position accuracy of a few centimetres and thus enables us to determine which RFID-tag is nearest.\nDue to this high accuracy, RFID is used for locating users within rooms.\nThe administration is quite simple; once a new RFID tag is placed, its serial number simply has to be assigned to a single symbolic location.\nA drawback is the poor
availability, which can be traced back to the fact that RFID readers are still very expensive.\nThe second one is an 802.11 WLAN sensor.\nHere, we integrated a purely software-based, commercial WLAN positioning system for tracking clients on the university's campus-wide WLAN infrastructure.\nThe position accuracy reached is in the range of a few meters and thus is suitable for location sensing at the granularity of rooms.\nA big disadvantage is that a map of the whole area has to be calibrated with measuring points at a distance of 5 meters each.\nBecause most mobile computers are equipped with WLAN technology and the positioning system is a software-only solution, nearly everyone is able to use this kind of sensor.\nFinally, we have implemented a Bluetooth sensor, which detects Bluetooth tags (i.e. Bluetooth-modules with known position) in range and transmits them to the server, which maps them to symbolic locations.\nBecause we do not use signal-strength information in the current implementation, the accuracy is above 10 meters and therefore a single Bluetooth MAC address is associated with several symbolic locations, according to the physical locations such a Bluetooth module covers.\nThis leads to the disadvantage that the range of each Bluetooth-tag has to be determined and mapped to symbolic locations within this range.\nTable 1.\nComparison of implemented sensors Sensor Accuracy Coverage Administration RFID < 10 cm poor easy WLAN 1-4 m very well very time-consuming Bluetooth ~ 10 m well time-consuming 3.2.2 Proximity Sensors Any sensor that is able to detect whether two users are in spatial proximity is referred to as a proximity sensor.\nSimilar to the location sensors, the Proximity Sensor Abstraction-attributes collect physical proximity information of all users and transform it into mappings of user-IDs.\nWe have implemented two types of proximity-sensors, which are based on Bluetooth on the one hand and on fused symbolic locations (see chapter
3.3.1) on the other hand.\nThe Bluetooth-implementation goes along with the implementation of the Bluetooth-based location sensor.\nThe Bluetooth MAC addresses already determined in range of a certain client are compared with those of all other clients, and each time the attribute Bluetooth Sensor Abstraction detects congruence, it notifies the proximity sensor fusion about this.\nThe second sensor is based on symbolic locations processed by Location Sensor Fusion, wherefore it does not need a client-side implementation.\nEach time the fused symbolic location of a certain user changes, it checks whether he is at the same symbolic location as another user and again notifies the proximity sensor fusion about the proximity between these two users.\nThe range can be restricted to any level of the location containment hierarchy, for example to room granularity.\nA currently unresolved issue is the incomparable granularity of different proximity sensors.\nFor example, the symbolic locations at the same level in the location hierarchy mostly do not cover the same geographic area.\n3.3 Sensor Fusion The core of the location-sensing subsystem is the sensor fusion.\nIt merges data of various sensors, while coping with differences concerning accuracy, coverage and sample rate.\nAccording to the two kinds of sensors described in chapter 3.2, we distinguish between fusion of location sensors on the one hand, and fusion of proximity sensors on the other hand.\nThe fusion of symbolic locations as well as the fusion of spatial proximities operates on standardized information (cf. Figure 3).\nThis has the advantage that additional position- and proximity-sensors can be added easily or the fusion algorithms can be replaced by ones that are more sophisticated.\nFusion is performed for each user separately and takes into account the measurements at a single point in time only (i.e.
no history information is used for determining the current location of a certain user). The algorithm collects all events thrown by the Sensor Abstraction attributes, performs fusion and triggers the GISS Core attribute if the symbolic location of a certain user or the spatial proximity between users has changed. An important feature is the persistent storage of the location and proximity history in a database in order to allow future retrieval. This enables applications to visualize the movement of users, for example.

3.3.1 Location Sensor Fusion
The goal of the fusion of location information is to improve precision and accuracy by merging the sets of symbolic locations supplied by the various location sensors, in order to reduce the number of these locations to a minimum, ideally to a single symbolic location per user. This is quite difficult, because different sensors may differ in accuracy as well as in sample rate. The Location Sensor Fusion attribute is triggered by events thrown by the Location Sensor Abstraction attributes. These events contain information about the identity of the user concerned, his current location and the sensor by which the location has been determined. If the Location Sensor Fusion attribute receives such an event, it checks whether the set of symbolic locations of the user concerned has changed (compared with the last event). If this is the case, it notifies the GISS Core attribute about all symbolic locations this user is currently associated with. However, this information is not very useful on its own if a certain user is associated with several locations. As described in chapter 3.2.1, a single location sensor may deliver multiple symbolic locations. Moreover, a certain user may have several location sensors, which supply symbolic locations differing in accuracy (i.e.
different levels in the location containment hierarchy). To cope with this challenge, we implemented a fusion algorithm that reduces the number of symbolic locations to a minimum (ideally to a single location). In a first step, each symbolic location is associated with its number of occurrences. A symbolic location may occur several times if it is referred to by more than one sensor, or if a single sensor detects multiple tags that in turn refer to several locations. Furthermore, this number is added to the previously calculated number of occurrences of each symbolic location that is a child location of the considered one in the location containment hierarchy. For example, if - in Figure 4 - room2 occurs two times and desk occurs a single time, the value 2 of room2 is added to the value 1 of desk, whereby desk finally gets the value 3. In a final step, only those symbolic locations are kept which are assigned the highest number of occurrences. A further reduction can be achieved by assigning priorities to sensors (based on accuracy and confidence) and accumulating these priorities for each symbolic location instead of just counting the number of occurrences. If the remaining fused locations have changed (i.e. if they differ from the fused locations the considered user is currently associated with), they are provided with the current timestamp and written to the database, and the GISS attribute is notified about where the user is probably located. Finally, the most accurate common location in the location hierarchy is calculated (i.e.
the least upper bound of these symbolic locations) in order to get a single symbolic location. If it changes, the GISS Core attribute is triggered again.

3.3.2 Proximity Sensor Fusion
Proximity sensor fusion is much simpler than the fusion of symbolic locations. The corresponding Proximity Sensor Fusion attribute is triggered by events thrown by the Proximity Sensor Abstraction attributes. These special events contain information about the identity of the two users concerned, whether they are currently in spatial proximity or whether proximity no longer persists, and by which proximity sensor this has been detected. If the sensor fusion attribute is notified by a certain Proximity Sensor Abstraction attribute about an existing spatial proximity, it first checks whether these two users are already known to be in proximity (detected either by another user or by another proximity sensor of the user which caused the event). If not, this change in proximity is written to the context repository with the current timestamp. Similarly, if the Proximity Fusion attribute is notified about an ended proximity, it checks whether the users are still known to be in proximity, and writes this change to the repository if not. Finally, if the spatial proximity between the two users has actually changed, an event is thrown to notify the GISS Core attribute about this.

4. CONTEXT-SENSITIVE INTERACTION
4.1 Overview
In most of today's systems supporting interaction in groups, the provided means lack any awareness of the user's current context and are thus unable to adapt to his needs. In our approach, we use context information to enhance interaction and provide further services, which offer new possibilities to the user. Furthermore, we believe that interaction in groups also has to take into account the current context of the group itself, and not only the context of individual group members. For this reason, we also retrieve information about the group's current context, derived from the
contexts of the group members together with some sort of meta-information (see chapter 4.3). The sources of context used for our application correspond to the four primary context types given in chapter 1.1 - identity (I), location (L), time (T) and activity (A). As stated before, we also take into account the context of the group the user is interacting with, so that we can add a fifth type of context information - group awareness (G) - to the classification. Using this context information, we can trigger context-aware activities in all of the three categories described in chapter 1.1 - presentation of information (P), automatic execution of services (A) and tagging of context to information for later retrieval (T). Table 2 gives an overview of the activities we have already implemented; they are described comprehensively in chapter 4.4. The table also shows which types of context information are used for each activity, and the category each activity can be classified in.

Table 2. Classification of implemented context-aware activities
(context types used: L, T, I, A, G; categories: P, A, T)

  Location Visualisation                  X X X
  Group Building Support                  X X X X
  Support for Synchronous Communication   X X X X
  Support for Asynchronous Communication  X X X X X X X
  Availability Management                 X X X
  Task Management Support                 X X X X
  Meeting Support                         X X X X X X

We implemented these particular features in order to take advantage of all four types of context information and thus support group interaction by utilizing comprehensive knowledge about the situation a single user or a whole group is in. A critical issue for the user acceptance of such a system is the usability of its interface. We have evaluated several ways of presenting context-aware means of interaction to the user, until we arrived at the solution we use right now. Although we think that the user interface implemented now offers the best trade-off between seamless integration of features and ease of use, it would be no problem to
extend the architecture with other user interfaces, even on different platforms. The chosen solution is based on an existing instant messenger, which offers several possibilities to integrate our system (see chapter 4.2). The biggest advantage of this approach is that the user is confronted with a graphical user interface he is, in most cases, already used to. Furthermore, our system uses an instant messenger account as an identifier, so that the user does not have to register a further account anywhere else (for example, the user can use his already existing ICQ account).

4.2 Instant Messenger Integration
Our system is based upon an existing instant messenger, the so-called Simple Instant Messenger (SIM), whose implementation is carried out as a project at Sourceforge (http://sourceforge.net/). SIM supports multiple messenger protocols such as AIM (http://www.aim.com/), ICQ and MSN (http://messenger.msn.com/). It also supports connections to multiple accounts at the same time. Furthermore, full support for SMS notification is given (where provided by the used protocol). SIM is based on a plug-in concept: all protocols as well as parts of the user interface are implemented as plug-ins. Its architecture is also used to extend the application's ability to communicate with external applications. For this purpose, a remote control plug-in is provided, by which SIM can be controlled from external applications via a socket connection. This remote control interface is extensively used by GISS for retrieving the contact list, setting the user's availability state or sending messages. The functionality of the plug-in was extended in several ways, for example to accept messages for an account (as if they had been sent via the messenger network). The messenger, more exactly the contact list (i.e.
a list of profiles of all people registered with the instant messenger, which is visualized by listing their names as can be seen in Figure 5), is also used to display the locations of other members of the groups a user belongs to. This provides location awareness without taking up too much space or requesting the user's full attention. A more comprehensive description of these features is given in chapter 4.4.

4.3 Sources of Context Information
While the location context of a user is obtained from our location sensing subsystem described in chapter 3, we also consider types of context other than location relevant for the support of group interaction. Local time, as a very important context dimension, can easily be retrieved from the real-time clock of the user's system. Besides location and time, we also use context information about the user's activity and identity, where we exploit the functionality provided by the underlying instant messenger system. Identity (or, more exactly, the mapping of IDs to names as well as additional information from the user's profile) can be distilled out of the contents of the user's contact list. Information about the activity of a certain user is only available in a very restricted area, namely the activity at the computer itself. Other activities, like making a phone call or something similar, cannot be recognized with the current implementation of the activity sensor. The only context information used is the instant messenger's availability state, thus providing only a very coarse classification of the user's activity (online, offline, away, busy etc.). Although this may not seem to be very much information, it is surely relevant and can be used to improve or even enable several services. Having collected the context information from all available users, it is now possible to distil some information about the context of a certain group. Information about the context of a group includes how many members the group currently
has, whether the group meets right now, which members are participating in a meeting, how many members have read which of the available posts from other team members, and so on. Therefore, some additional information, like a list of members for each group, is needed. These lists can be assembled manually (by users joining and leaving groups) or retrieved automatically. The context of a group is secondary context and is aggregated from the available contexts of the group members. Every time the context of a single group member changes, the context of the whole group changes and has to be recalculated. With knowledge about a user's context and the context of the groups he belongs to, we can provide several context-aware services to the user, which enhance his interaction abilities. A brief description of these services is given in chapter 4.4.

4.4 Group Interaction Support
4.4.1 Visualisation of Location Information
An important feature is the visualisation of location information, which allows users to be aware of the locations of other users and of members of the groups they have joined, respectively. As already described in chapter 2, we use two different forms of visualisation. The arguably more important one is to display location information in the contact list of the instant messenger, right beside the name, so that it is always visible without drawing the user's attention to it (compared with a two-dimensional view, for example, which requires its own window for displaying a map of the environment). Due to the restricted space in the contact list, it has been necessary to implement some sort of level-of-detail concept. As we use a hierarchical location model, we are able to determine the most accurate common location of two users. In the contact list, the current symbolic location one level below this common location is then displayed. If, for example, user A currently resides in room P121 on the first floor of a building and user B, who has to
be displayed in the contact list of user A, is in room P304 on the third floor, the most accurate common location of these two users is the building they are in. For that reason, the floor (i.e. one level below the common location, namely the building) of user B is displayed in the contact list of user A. If both people reside on the same floor or even in the same room, the floor or the room, respectively, would be taken. Figure 5 shows a screenshot of the Simple Instant Messenger, where the current location of those people whose location is known by GISS is displayed in brackets right beside their names. On top of the image, the highlighted, integrated GISS toolbar is shown, which currently contains the following implemented functionality (from left to right): asynchronous communication for groups (see chapter 4.4.4), context-aware reminders (see chapter 4.4.6), two-dimensional visualisation of location information, forming and managing groups (see chapter 4.4.2), context-aware availability management (see chapter 4.4.5), and finally a button for terminating GISS.

Figure 5. GISS integration in Simple Instant Messenger

As displaying just this short form of location may not be enough for the user, because he may want to see the most accurate position available, a fully qualified position is shown if a name in the contact list is clicked (e.g. in the form of desk@room2@department1@1stfloor@building1@campus). The second possible form of visualisation is a graphical one. We have evaluated a three-dimensional view, which was based on a VRML model of the respective area (cf. Figure 6). Due to navigational and usability shortcomings, we decided to use a two-dimensional view of the floor (referred to as level in the location hierarchy, cf. Figure 4). Other levels of granularity like section (e.g. building) and region (e.g.
campus) are also provided. In this floor-plan-based view, the current locations are shown in the manner of ICQ contacts, which are placed at the currently sensed location of the respective person. The availability status of a user - for example away if he is not at the computer right now, or busy if he does not want to be disturbed - is visualized by colour-coding the ICQ flower left beside the name. Furthermore, the floor-plan view shows so-called virtual post-its, which are virtual counterparts of real-life post-its and serve as our means of asynchronous communication (more about virtual post-its can be found in chapter 4.4.4).

Figure 6. 3D view of the floor (VRML)

Figure 7 shows the two-dimensional map of a certain floor, where several users are currently located (visualized by their name and the flower left beside it). The location of the client on which the map is displayed is visualized by a green circle. Down to the right, two virtual post-its can be seen.

Figure 7. 2D view of the floor

Another feature of the 2D view is the visualisation of the location history of users. As we store the complete history of a user's locations together with timestamps, we are able to provide information about the locations he has been at back in time. When the mouse is moved over the name of a certain user in the 2D view, footprints of that user, placed at the locations he has been at, are faded out the more strongly, the older the location information is.

4.4.2 Forming and Managing Groups
To support interaction in groups, it is first necessary to form groups. As groups can have different purposes, we distinguish two types of groups. So-called static groups are built up manually by people joining and leaving them. Static groups can be further divided into two subtypes. Open static groups, which everybody can join and leave at any time, are useful, for example, to form a group of lecture attendees or some sort of interest group. Closed static groups have an owner,
who decides which persons are allowed to join, although everybody can leave again at any time. Closed groups enable users, for example, to create a group of their friends, thus being able to communicate with them easily. In contrast to that, we also support the creation of dynamic groups. They are formed among persons who are at the same location at the same time. The creation of dynamic groups is only performed at locations where it makes sense to form groups, for example in lecture halls or meeting rooms, but not in corridors or outdoors. It would also not be very meaningful to form a group only of the people residing in the left front sector of a hall; instead, the complete hall should be considered. For these reasons, all the defined locations in the hierarchy are tagged as to whether they allow the formation of groups or not. Dynamic groups are not only formed at the granularity of rooms, but also at higher levels in the hierarchy, for example with the people currently residing in the area of a department. As the members of dynamic groups constantly change, it is possible to create an open static group out of them.

4.4.3 Synchronous Communication for Groups
The most important form of synchronous communication on computers today is instant messaging; some people even consider instant messaging to be the real killer application of the Internet. This has also motivated the decision to build GISS upon an instant messaging system. In today's messenger systems, peer-to-peer communication is extensively supported. However, when it comes to communication in groups, the support is rather poor most of the time. Often, only sending a message to multiple recipients is supported, lacking means to take into account the current state of the recipients. Furthermore, groups can only be formed of members in one's contact list, making it impossible to send messages to a group where not all of its members are known (which may be the case in settings where the participants
of a lecture form a group). Our approach does not have the mentioned restrictions. We introduce group entries in the user's contact list, enabling him or her to send messages to a group easily, without knowing who exactly is currently a member of this group. Furthermore, group messages are only delivered to persons who are currently not busy, thus preventing disturbance by a message which is possibly unimportant for the user. These features cannot be carried out in the messenger network itself, so whenever a message to a group account is sent, we intercept it and route it through our system to all the recipients that are available at that time. Communication via a group account is also stored centrally, enabling people to query missed messages or simply to view the message history.

4.4.4 Asynchronous Communication for Groups
Asynchronous communication in groups is not a new idea. The goal of this approach is not to reinvent the wheel, as e-mail is perhaps the most widely used form of asynchronous communication on computers and is broadly accepted and standardized. In our work, we aim at the combination of asynchronous communication with location awareness. For this reason, we introduce the concept of so-called virtual post-its (cf. [13]), which are messages that are bound to physical locations. These virtual post-its can either be visible to all users that are passing by, or they can be restricted to be visible to certain groups of people only. Moreover, a virtual post-it can also have an expiry date, after which it is dropped and not displayed anymore. Virtual post-its can also be commented on by others, thus providing some form of forum-like interaction, where each post-it forms a thread. Virtual post-its are displayed automatically whenever an available user passes by for the first time. Afterwards, post-its can be accessed via the 2D viewer, where all visible post-its are shown. All readers of a post-it are logged and displayed when
viewing it, providing some sort of awareness about the group members' activities in the past.

4.4.5 Context-aware Availability Management
Instant messengers in general provide some kind of availability information about a user. Although this information can only be defined at a very coarse granularity, we have decided to use these means of gathering activity context, because the introduction of an additional one would strongly decrease the usability of the system. To support the user in managing his availability, we provide an interface that lets the user define rules to adapt his availability to the current context. These rules follow the form on event (E) if condition (C) then action (A), which is directly supported by the ECA rules of the Context Framework described in chapter 1.3. The testing of conditions is triggered by events, which are thrown whenever the context of a user changes. The condition itself is defined by the user, who can demand the change of his availability status as the action of the rule. As a condition, the user can define his location, a certain time (also triggering daily, weekly or monthly) or any logical combination of these criteria.

4.4.6 Context-Aware Reminders
Reminders [14] give the user the opportunity to define tasks and be reminded of them when certain criteria are fulfilled. Thus, a reminder can be seen as a post-it to oneself, which is only visible in certain cases. Reminders can be bound to a certain place or time, but also to the spatial proximity of users or groups. These criteria can be combined with Boolean operators, thus providing a powerful means to remind the user of tasks that he wants to carry out when a certain context occurs. A reminder will only pop up the first time the actual context meets the defined criterion. When the reminder shows up, the user has the chance to resubmit it in order to be reminded again, for example five minutes later or the next time a certain user is in
spatial proximity.

4.4.7 Context-Aware Recognition and Notification of Group Meetings
With the available context information, we try to recognize meetings of a group. The determination of the criteria by which the system recognizes a group as having a meeting is part of ongoing work. In a first approach, we use the location and activity context of the group members to determine a meeting. Whenever more than 50% of the members of a group are available at a location where a meeting is considered to make sense (e.g. not in a corridor), a meeting-minutes post-it is created at this location, and all absent group members are notified of the meeting and the location where it takes place. During the meeting, the comment feature of virtual post-its provides a means to take notes for all of the participants. When members join or leave the meeting, this is automatically added as a note to the list of comments. Like the recognition of the beginning of a meeting, the recognition of its end is still part of ongoing work. Once the end of the meeting is recognized, all group members get the complete list of comments as a meeting protocol.

5. CONCLUSIONS
This paper discussed the potential of supporting group interaction by using context information. First, we introduced the notions of context and context computing and motivated their value for supporting group interaction. An architecture was presented to support context-aware group interaction in mobile, distributed environments. It is built upon a flexible and extensible framework, thus enabling easy adaptation to the available context sources (e.g.
by adding additional sensors) as well as to the required form of representation. We have prototypically developed a set of services which enhance group interaction by taking into account the current context of the users as well as the context of the groups themselves. Important features are the dynamic formation of groups; the visualization of location, both on a two-dimensional map and unobtrusively integrated into an instant messenger; asynchronous communication by virtual post-its, which are bound to certain locations; and a context-aware availability management, which adapts the availability status of a user to his current situation. To provide location information, we have implemented a subsystem for the automated acquisition of location and proximity information provided by various sensors, which offers a technology-independent representation of locations and spatial proximities between users and merges this information using sensor-independent fusion algorithms. A history of locations as well as of spatial proximities is stored in a database, thus enabling services based on context history.

6. REFERENCES
[1] Beer, W., Christian, V., Ferscha, A., Mehrmann, L. Modeling Context-aware Behavior by Interpreted ECA Rules. In Proceedings of the International Conference on Parallel and Distributed Computing (EUROPAR'03). (Klagenfurt, Austria, August 26-29, 2003). Springer Verlag, LNCS 2790, 1064-1073.
[2] Brown, P.J., Bovey, J.D., Chen, X. Context-Aware Applications: From the Laboratory to the Marketplace. IEEE Personal Communications, 4(5) (1997), 58-64.
[3] Chen, H., Kotz, D. A Survey of Context-Aware Mobile Computing Research. Technical Report TR2000-381, Computer Science Department, Dartmouth College, Hanover, New Hampshire, November 2000.
[4] Dey, A.
Providing Architectural Support for Building Context-Aware Applications. Ph.D. Thesis, Department of Computer Science, Georgia Institute of Technology, Atlanta, November 2000.
[5] Domnitcheva, S. Location Modeling: State of the Art and Challenges. In Proceedings of the Workshop on Location Modeling for Ubiquitous Computing. (Atlanta, Georgia, United States, September 30, 2001). 13-19.
[6] Ferscha, A. Workspace Awareness in Mobile Virtual Teams. In Proceedings of the IEEE 9th International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE'00). (Gaithersburg, Maryland, March 14-16, 2000). IEEE Computer Society Press, 272-277.
[7] Ferscha, A. Coordination in Pervasive Computing Environments. In Proceedings of the Twelfth International IEEE Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE'03). (June 9-11, 2003). IEEE Computer Society Press, 3-9.
[8] Leonhard, U. Supporting Location Awareness in Open Distributed Systems. Ph.D. Thesis, Department of Computing, Imperial College, London, May 1998.
[9] Ryan, N., Pascoe, J., Morse, D. Enhanced Reality Fieldwork: the Context-Aware Archaeological Assistant. In Gaffney, V., Van Leusen, M., Exxon, S. (eds.), Computer Applications in Archaeology (1997).
[10] Schilit, B.N., Theimer, M. Disseminating Active Map Information to Mobile Hosts. IEEE Network, 8(5) (1994), 22-32.
[11] Schilit, B.N. A System Architecture for Context-Aware Mobile Computing. Ph.D. Thesis, Columbia University, Department of Computer Science, May 1995.
[12] Wang, B., Bodily, J., Gupta, S.K.S.
Supporting Persistent Social Groups in Ubiquitous Computing Environments Using Context-Aware Ephemeral Group Service. In Proceedings of the Second IEEE International Conference on Pervasive Computing and Communications (PerCom'04). (March 14-17, 2004). IEEE Computer Society Press, 287-296.
[13] Pascoe, J. The Stick-e Note Architecture: Extending the Interface Beyond the User. In Proceedings of the 2nd International Conference on Intelligent User Interfaces (IUI'97). (Orlando, USA, 1997), 261-264.
[14] Dey, A., Abowd, G. CybreMinder: A Context-Aware System for Supporting Reminders. In Proceedings of the 2nd International Symposium on Handheld and Ubiquitous Computing (HUC'00). (Bristol, UK, 2000), 172-186.

Context Awareness for Group Interaction Support

ABSTRACT
In this paper, we present an implemented system for supporting group interaction in mobile distributed computing environments. First, an introduction to context computing and a motivation for using contextual information to facilitate group interaction is given. We then present the architecture of our system, which consists of two parts: a subsystem for location sensing that acquires information about the location of users as well as spatial proximities between them, and one for the actual context-aware application, which provides services for group interaction.

1. INTRODUCTION
Today's computing environments are characterized by an increasing number of powerful, wirelessly connected mobile devices. Users can move throughout an environment while carrying their computers with them and having remote access to information and services, anytime and anywhere. New situations appear where the user's context - for example his current location or nearby people - is more dynamic; computation no longer occurs at a single location and in a single context, but comprises a multitude of situations and locations. This development leads to a new class of applications, which are aware of
the context in which they run, thus bringing virtual and real worlds together. Motivated by this, and by the fact that only few studies have been done on supporting group communication in such computing environments [12], we have developed a system which we refer to as the Group Interaction Support System (GISS). It supports group interaction in mobile distributed computing environments in such a way that group members no longer need to be at the same place in order to interact with each other, or just to be aware of the others' situation. In the following subchapters, we give a short overview of context-aware computing and motivate its benefits for supporting group interaction. A software framework for developing context-sensitive applications is presented, which serves as middleware for GISS. Chapter 2 presents the architecture of GISS, and chapters 3 and 4 discuss the location sensing and group interaction concepts of GISS in more detail. Chapter 5 gives a final summary of our work.

1.1 What is Context Computing?
According to Merriam-Webster's Online Dictionary, context is defined as the "interrelated conditions in which something exists or occurs". Because this definition is very general, many approaches have been made to define the notion of context with respect to computing environments. Most definitions of context are given by enumerating examples or by choosing synonyms for context. The term context-aware was first introduced in [10], where context is referred to as location, identities of nearby people and objects, and changes to those objects. In [2], context is also defined by an enumeration of examples, namely location, identities of the people around the user, the time of day, season, temperature etc.
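These enumeration-style definitions can be made concrete with a small sketch: a record holding the commonly listed context dimensions, from which secondary information can be derived once the identity is known. The class layout, the profile store and all names below are hypothetical illustrations for this survey of definitions, not part of the system described in this paper.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical record of the commonly enumerated context dimensions.
@dataclass
class Context:
    location: str   # e.g. a symbolic location such as "room2"
    identity: str   # e.g. a user ID known to the messenger
    time: datetime  # local time of the observation
    activity: str   # e.g. "online", "away", "busy"

# Made-up profile store: knowing a person's identity allows
# deriving secondary context such as an e-mail address.
PROFILES = {"alice": {"email": "alice@example.org"}}

def derive_email(ctx: Context) -> Optional[str]:
    """Derive a secondary context attribute from the identity."""
    profile = PROFILES.get(ctx.identity)
    return profile["email"] if profile else None

ctx = Context("room2", "alice", datetime(2004, 5, 1, 10, 30), "online")
print(derive_email(ctx))  # -> alice@example.org
```

The point of the sketch is merely that primary context (location, identity, time, activity) acts as a key into further data sources, which is what the literature calls secondary context.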
[9] defines context as the user's location, environment, identity and time. Here we conform to a widely accepted and more formal definition, which defines context as "any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves" [4]. [4] identifies four primary types of context information (sometimes referred to as context dimensions) that are - with respect to characterizing the situation of an entity - more important than others. These are location, identity, time and activity, which can also be used to derive other sources of contextual information (secondary context types). For example, if we know a person's identity, we can easily derive related information about this person from several data sources (e.g. date of birth or e-mail address). Based on this definition, [4] defines a system to be context-aware "if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task". [4] also gives a classification of features for context-aware applications, which comprises the presentation of information and services to a user, the automatic execution of a service, and the tagging of context to information for later retrieval.

Figure 1. Layers of a context-aware system

Context computing is based on two major issues, namely identifying relevant context (identity, location, time, activity) and using the obtained context (automatic execution, presentation, tagging). In between, there are a few layers (see Figure 1). First, the obtained low-level context information has to be transformed, aggregated and interpreted (context transformation) and represented in an abstract context world model (context representation), either centralized or decentralized. Finally, the stored context information is used to trigger certain
context events (context triggering) [7].

1.2 Group Interaction in Context

After these abstract and formal definitions of what context and context computing are, we now focus on the main goal of this work, namely how the interaction of mobile group members can be supported by using context information. In [6] we identified organizational systems as crucial for supporting mobile groups (see Figure 2). First, there has to be an Information and Knowledge Management System, which is capable of supporting a team in its information-processing and knowledge-gathering needs. The next part is the Awareness System, which is dedicated to the perceptualisation of the effects of team activity. It does this by communicating work context, agenda and workspace information to the users. The Interaction Systems provide support for communication among team members, either synchronous or asynchronous, and for shared access to artefacts such as documents. Mobility Systems deploy mechanisms that enable any-place access to team memory as well as the capturing and delivery of awareness information from and to any place. Last but not least, the Organisational Innovation System integrates aspects of the team itself, such as roles, leadership and shared facilities. With respect to these five aspects of team support, we focus on interaction and partly cover mobility and awareness support.

Group interaction includes all means that enable group members to communicate freely with all other members. At this point, the question arises how context information can be used to support group interaction. We believe that information about the current situation of a person provides surplus value to existing group interaction systems. Context information facilitates group interaction by allowing each member to be aware of the availability status or the current location of every other group member, which in turn makes it possible to form groups
dynamically, to place virtual post-its in the real world, or to determine which people are around.

Figure 2. Support for Mobile Groups [6]

Most of today's context-aware applications use location and time only, and location is referred to as a crucial type of context information [3]. We too see the importance of location information in mobile and ubiquitous environments, which is why a main focus of our work is on the utilization of location information and of information about users in spatial proximity. Nevertheless, we believe that location alone, as the only type of context information used, is not sufficient to support group interaction, so we also take advantage of the other three primary types, namely identity, time and activity. This provides a comprehensive description of a user's current situation, thus enabling numerous means of supporting group interaction, which are described in detail in chapter 4.4.

Looking at the types of context information stated above, we can see that all of them are single-user-centred, taking into account only the context of the user itself. We believe that, for the support of group interaction, the status of the group itself also has to be taken into account. Therefore, we have added a fifth context dimension, group context, which comprises more than the sum of the individual members' contexts. Group context includes any information about the situation of a whole group, for example how many members a group currently has or whether a certain group is meeting right now.

1.3 Context Middleware

The Group Interaction Support System (GISS) uses the software framework introduced in [1], which serves as a middleware for developing context-sensitive applications. This so-called Context Framework is based on a distributed communication architecture and supports different kinds of transport protocols and message coding mechanisms. A main feature of the framework is the abstraction of context information retrieval via various sensors
and its delivery at a level where, for the application designer, no difference appears between these different kinds of context retrieval mechanisms; the information retrieval is hidden from the application developer. This is achieved by so-called entities, which describe objects, e.g. a human user, that are important for a certain context scenario. Entities express their functionality through so-called attributes, which can be loaded into an entity. These attributes are complex pieces of software, implemented as Java classes. Typical attributes are encapsulations of sensors, but they can also be used to implement context services, for example to notify other entities about location changes of users. Each entity can contain a collection of such attributes, and an entity is itself an attribute. The initial set of attributes an entity contains can change dynamically at runtime, as an entity may load or unload attributes from local storage or over the network. In order to load and deploy new attributes, an entity references a class loader and a transport and lookup layer, which manages the transport and the lookup mechanism for discovering other entities. XML configuration files specify which initial set of entities should be loaded and which attributes these entities own.

The communication between entities and attributes is based on context events. Each attribute is able to trigger events, which are addressed to other attributes and entities respectively, independently of which physical computer they are running on. Among other things, an event contains the name of the event and a list of parameters delivering information about the event itself. Closely related to this event-based architecture is the use of ECA (Event-Condition-Action) rules for defining the behaviour of the context system. To this end, every entity has a rule interpreter, which catches triggered events, checks the conditions associated with them and causes certain
actions.\nThese rules are referenced by the entity's XML configuration.\nA rule itself is even able to trigger the insertion of new rules or the unloading of existing rules at runtime in order to change the behaviour of the context system dynamically.\nTo sum up, the context framework provides a flexible, distributed architecture for hiding low-level sensor data from high-level applications and it hides external communication details from the application developer.\nFurthermore, it is able to adapt its behaviour dynamically by loading attributes, entities or ECArules at runtime.\n2.\nARCHITECTURE OVERVIEW\n3.\nLOCATION SENSING\n3.1 Location Model\n3.2 Sensors\n3.2.1 Location Sensors\n3.2.2 Proximity Sensors\n3.3 Sensor Fusion\n3.3.1 Location Sensor Fusion\n3.3.2 Proximity Sensor Fusion\n4.\nCONTEXTSENSITIVE INTERACTION 4.1 Overview\n4.2 Instant Messenger Integration\n4.3 Sources of Context Information\n4.4 Group Interaction Support\n4.4.1 Visualisation of Location Information\n4.4.2 Forming and Managing Groups\n4.4.3 Synchronous Communication for Groups\n4.4.4 Asynchronous Communication for Groups\n4.4.5 Context-aware Availability Management\n4.4.6 Context-Aware Reminders\n4.4.7 Context-Aware Recognition and Notification of Group Meetings\n5.\nCONCLUSIONS\nThis paper discussed the potentials of support for group interaction by using context information.\nFirst, we introduced the notions of context and context computing and motivated their value for supporting group interaction.\nAn architecture is presented to support context-aware group interaction in mobile, distributed environments.\nIt is built upon a flexible and extensible framework, thus enabling an easy adoption to available context sources (e.g. 
by adding additional sensors) as well as the required form of representation.\nWe have prototypically developed a set of services, which enhance group interaction by taking into account the current context of the users as well as the context of groups itself.\nImportant features are dynamic formation of groups, visualization of location on a two-dimensional map as well as unobtrusively integrated in an instant-messenger, asynchronous communication by virtual post-its, which are bound to certain locations, and a context-aware availability-management, which adapts the availability-status of a user to his current situation.\nTo provide location information, we have implemented a subsystem for automated acquisition of location - and proximityinformation provided by various sensors, which provides a technology-independent presentation of locations and spatial proximities between users and merges this information using sensor-independent fusion algorithms.\nA history of locations as well as of spatial proximities is stored in a database, thus enabling context history-based services.","lvl-4":"Context Awareness for Group Interaction Support\nABSTRACT\nIn this paper, we present an implemented system for supporting group interaction in mobile distributed computing environments.\nFirst, an introduction to context computing and a motivation for using contextual information to facilitate group interaction is given.\nWe then present the architecture of our system, which consists of two parts: a subsystem for location sensing that acquires information about the location of users as well as spatial proximities between them, and one for the actual context-aware application, which provides services for group interaction.\n1.\nINTRODUCTION\nToday's computing environments are characterized by an increasing number of powerful, wirelessly connected mobile devices.\nUsers can move throughout an environment while carrying their computers with them and having remote access to information 
and services, anytime and anywhere.\nThis development leads to a new class of applications, which are aware of the context in which they run in and thus bringing virtual and real worlds together.\nMotivated by this and the fact, that only a few studies have been done for supporting group communication in such computing environments [12], we have developed a system, which we refer to as Group Interaction Support System (GISS).\nIt supports group interaction in mobile distributed computing environments in a way that group members need not to at the same place any longer in order to interact with each other or just to be aware of the others situation.\nIn the following subchapters, we will give a short overview on context aware computing and motivate its benefits for supporting group interaction.\nA software framework for developing contextsensitive applications is presented, which serves as middleware for GISS.\nChapter 2 presents the architecture of GISS, and chapter 3 and 4 discuss the location sensing and group interaction concepts of GISS in more detail.\n1.1 What is Context Computing?\nAccording to Merriam-Webster's Online Dictionary1, context is defined as the \"interrelated conditions in which something exists or occurs\".\nBecause this definition is very general, many approaches have been made to define the notion of context with respect to computing environments.\nMost definitions of context are done by enumerating examples or by choosing synonyms for context.\nThe term context-aware has been introduced first in [10] where context is referred to as location, identities of nearby people and objects, and changes to those objects.\nHere we conform to a widely accepted and more formal definition, which defines context as \"any information than can be used to characterize the situation of an entity.\nAn entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications 
themselves\".\n[4] [4] identifies four primary types of context information (sometimes referred to as context dimensions), that are--with respect to characterizing the situation of an entity--more important than others.\nThese are location, identity, time and activity, which can also be used to derive other sources of contextual information (secondary context types).\nAccording to this definition, [4] defines a system to be contextaware \"if it uses context to provide relevant information and\/or services to the user, where relevancy depends on the user's task\".\n[4] also gives a classification of features for context-aware applications, which comprises presentation of information and services to a user, automatic execution of a service and tagging of context to information for later retrieval.\nFigure 1.\nLayers of a context-aware system\nContext computing is based on two major issues, namely identifying relevant context (identity, location, time, activity) and using obtained context (automatic execution, presentation, tagging).\nIn order to do this, there are a few layers between (see Figure 1).\nFirst, the obtained low-level context information has to be transformed, aggregated and interpreted (context transformation) and represented in an abstract context world model (context representation), either centralized or decentralized.\nFinally, the stored context information is used to trigger certain context events (context triggering).\n[7]\n1.2 Group Interaction in Context\nAfter these abstract and formal definitions about what context and context computing is, we will now focus on the main goal of this work, namely how the interaction of mobile group members can be supported by using context information.\nIn [6] we have identified organizational systems to be crucial for supporting mobile groups (see Figure 2).\nFirst, there has to be an Information and Knowledge Management System, which is capable of supporting a team with its information processing - and 
knowledge gathering needs.\nIt does this by communicating work context, agenda and workspace information to the users.\nThe Interaction Systems provide support for the communication among team members, either synchronous or asynchronous, and for the shared access to artefacts, such as documents.\nMobility Systems deploy mechanisms to enable any-place access to team memory as well as the capturing and delivery of awareness information from and to any places.\nWith respect to these five aspects of team support, we focus on interaction and partly cover mobility - and awareness-support.\nGroup interaction includes all means that enable group members to communicate freely with all the other members.\nAt this point, the question how context information can be used for supporting group interaction comes up.\nWe believe that information about the current situation of a person provides a surplus value to existing group interaction systems.\nContext information facilitates group interaction by allowing each member to be aware of the availability status or the current location of each other group member, which again makes it possible to form groups dynamically, to place virtual post-its in the real world or to determine which people are around.\nFigure 2.\nSupport for Mobile Groups [6]\nMost of today's context-aware applications use location and time only, and location is referred to as a crucial type of context information [3].\nWe also see the importance of location information in mobile and ubiquitous environments, wherefore a main focus of our work is on the utilization of location information and information about users in spatial proximity.\nNevertheless, we believe that location, as the only used type of context information, is not sufficient to support group interaction, wherefore we also take advantage of the other three primary types, namely identity, time and activity.\nThis provides a comprehensive description of a user's current situation and thus enabling 
numerous means for supporting group interaction, which are described in detail in chapter 4.4.\nWhen we look at the types of context information stated above, we can see that all of them are single user-centred, taking into account only the context of the user itself.\nWe believe, that for the support of group interaction, the status of the group itself has also be taken into account.\nTherefore, we have added a fifth contextdimension group-context, which comprises more than the sum of the individual member's contexts.\nGroup context includes any information about the situation of a whole group, for example how many members a group currently has or if a certain group meets right now.\n1.3 Context Middleware\nThe Group Interaction Support System (GISS) uses the softwareframework introduced in [1], which serves as a middleware for developing context-sensitive applications.\nThis so-called Context Framework is based on a distributed communication architecture and it supports different kinds of transport protocols and message coding mechanisms.\nThis is achieved by so-called entities, which describe objects--e.g. 
a human user--that are important for a certain context scenario.\nEntities express their functionality by the use of so-called attributes, which can be loaded into the entity.\nThese attributes are complex pieces of software, which are implemented as Java classes.\nTypical attributes are encapsulations of sensors, but they can also be used to implement context services, for example to notify other entities about location changes of users.\nEach entity can contain a collection of such attributes, where an entity itself is an attribute.\nThe initial set of attributes an entity contains can change dynamically at runtime, if an entity loads or unloads attributes from the local storage or over the network.\nXML configuration files specify which initial set of entities should be loaded and which attributes these entities own.\nThe communication between entities and attributes is based on context events.\nEach attribute is able to trigger events, which are addressed to other attributes and entities respectively, independently on which physical computer they are running.\nAmong other things, and event contains the name of the event and a list of parameters delivering information about the event itself.\nRelated with this event-based architecture is the use of ECA (Event-Condition-Action) - rules for defining the behaviour of the context system.\nThese rules are referenced by the entity's XML configuration.\nA rule itself is even able to trigger the insertion of new rules or the unloading of existing rules at runtime in order to change the behaviour of the context system dynamically.\nTo sum up, the context framework provides a flexible, distributed architecture for hiding low-level sensor data from high-level applications and it hides external communication details from the application developer.\nFurthermore, it is able to adapt its behaviour dynamically by loading attributes, entities or ECArules at runtime.\n5.\nCONCLUSIONS\nThis paper discussed the potentials of 
support for group interaction by using context information.\nFirst, we introduced the notions of context and context computing and motivated their value for supporting group interaction.\nAn architecture is presented to support context-aware group interaction in mobile, distributed environments.\nWe have prototypically developed a set of services, which enhance group interaction by taking into account the current context of the users as well as the context of groups itself.\nA history of locations as well as of spatial proximities is stored in a database, thus enabling context history-based services.","lvl-2":"Context Awareness for Group Interaction Support\nABSTRACT\nIn this paper, we present an implemented system for supporting group interaction in mobile distributed computing environments.\nFirst, an introduction to context computing and a motivation for using contextual information to facilitate group interaction is given.\nWe then present the architecture of our system, which consists of two parts: a subsystem for location sensing that acquires information about the location of users as well as spatial proximities between them, and one for the actual context-aware application, which provides services for group interaction.\n1.\nINTRODUCTION\nToday's computing environments are characterized by an increasing number of powerful, wirelessly connected mobile devices.\nUsers can move throughout an environment while carrying their computers with them and having remote access to information and services, anytime and anywhere.\nNew situations appear, where the user's context--for example his current location or nearby people--is more dynamic; computation does not occur at a single location and in a single context any longer, but comprises a multitude of situations and locations.\nThis development leads to a new class of applications, which are aware of the context in which they run in and thus bringing virtual and real worlds together.\nMotivated by this and the fact, 
that only a few studies have been done for supporting group communication in such computing environments [12], we have developed a system, which we refer to as Group Interaction Support System (GISS).\nIt supports group interaction in mobile distributed computing environments in a way that group members need not to at the same place any longer in order to interact with each other or just to be aware of the others situation.\nIn the following subchapters, we will give a short overview on context aware computing and motivate its benefits for supporting group interaction.\nA software framework for developing contextsensitive applications is presented, which serves as middleware for GISS.\nChapter 2 presents the architecture of GISS, and chapter 3 and 4 discuss the location sensing and group interaction concepts of GISS in more detail.\nChapter 5 gives a final summary of our work.\n1.1 What is Context Computing?\nAccording to Merriam-Webster's Online Dictionary1, context is defined as the \"interrelated conditions in which something exists or occurs\".\nBecause this definition is very general, many approaches have been made to define the notion of context with respect to computing environments.\nMost definitions of context are done by enumerating examples or by choosing synonyms for context.\nThe term context-aware has been introduced first in [10] where context is referred to as location, identities of nearby people and objects, and changes to those objects.\nIn [2], context is also defined by an enumeration of examples, namely location, identities of the people around the user, the time of the day, season, temperature etc. 
[9] defines context as the user's location, environment, identity and time.\nHere we conform to a widely accepted and more formal definition, which defines context as \"any information than can be used to characterize the situation of an entity.\nAn entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves\".\n[4] [4] identifies four primary types of context information (sometimes referred to as context dimensions), that are--with respect to characterizing the situation of an entity--more important than others.\nThese are location, identity, time and activity, which can also be used to derive other sources of contextual information (secondary context types).\nFor example, if we know a person's identity, we can easily derive related information about this person from several data sources (e.g. day of birth or e-mail address).\nAccording to this definition, [4] defines a system to be contextaware \"if it uses context to provide relevant information and\/or services to the user, where relevancy depends on the user's task\".\n[4] also gives a classification of features for context-aware applications, which comprises presentation of information and services to a user, automatic execution of a service and tagging of context to information for later retrieval.\nFigure 1.\nLayers of a context-aware system\nContext computing is based on two major issues, namely identifying relevant context (identity, location, time, activity) and using obtained context (automatic execution, presentation, tagging).\nIn order to do this, there are a few layers between (see Figure 1).\nFirst, the obtained low-level context information has to be transformed, aggregated and interpreted (context transformation) and represented in an abstract context world model (context representation), either centralized or decentralized.\nFinally, the stored context information is used to trigger certain 
context events (context triggering).\n[7]\n1.2 Group Interaction in Context\nAfter these abstract and formal definitions about what context and context computing is, we will now focus on the main goal of this work, namely how the interaction of mobile group members can be supported by using context information.\nIn [6] we have identified organizational systems to be crucial for supporting mobile groups (see Figure 2).\nFirst, there has to be an Information and Knowledge Management System, which is capable of supporting a team with its information processing - and knowledge gathering needs.\nThe next part is the Awareness System, which is dedicated to the perceptualisation of the effects of team activity.\nIt does this by communicating work context, agenda and workspace information to the users.\nThe Interaction Systems provide support for the communication among team members, either synchronous or asynchronous, and for the shared access to artefacts, such as documents.\nMobility Systems deploy mechanisms to enable any-place access to team memory as well as the capturing and delivery of awareness information from and to any places.\nFinally yet importantly, the organisational innovation system integrates aspects of the team itself, like roles, leadership and shared facilities.\nWith respect to these five aspects of team support, we focus on interaction and partly cover mobility - and awareness-support.\nGroup interaction includes all means that enable group members to communicate freely with all the other members.\nAt this point, the question how context information can be used for supporting group interaction comes up.\nWe believe that information about the current situation of a person provides a surplus value to existing group interaction systems.\nContext information facilitates group interaction by allowing each member to be aware of the availability status or the current location of each other group member, which again makes it possible to form groups 
dynamically, to place virtual post-its in the real world or to determine which people are around.\nFigure 2.\nSupport for Mobile Groups [6]\nMost of today's context-aware applications use location and time only, and location is referred to as a crucial type of context information [3].\nWe also see the importance of location information in mobile and ubiquitous environments, wherefore a main focus of our work is on the utilization of location information and information about users in spatial proximity.\nNevertheless, we believe that location, as the only used type of context information, is not sufficient to support group interaction, wherefore we also take advantage of the other three primary types, namely identity, time and activity.\nThis provides a comprehensive description of a user's current situation and thus enabling numerous means for supporting group interaction, which are described in detail in chapter 4.4.\nWhen we look at the types of context information stated above, we can see that all of them are single user-centred, taking into account only the context of the user itself.\nWe believe, that for the support of group interaction, the status of the group itself has also be taken into account.\nTherefore, we have added a fifth contextdimension group-context, which comprises more than the sum of the individual member's contexts.\nGroup context includes any information about the situation of a whole group, for example how many members a group currently has or if a certain group meets right now.\n1.3 Context Middleware\nThe Group Interaction Support System (GISS) uses the softwareframework introduced in [1], which serves as a middleware for developing context-sensitive applications.\nThis so-called Context Framework is based on a distributed communication architecture and it supports different kinds of transport protocols and message coding mechanisms.\nA main feature of the framework is the abstraction of context information retrieval via various sensors 
and its delivery to a level where no difference appears, for the application designer, between these different kinds of context retrieval mechanisms; the information retrieval is hidden from the application developer.\nThis is achieved by so-called entities, which describe objects--e.g. a human user--that are important for a certain context scenario.\nEntities express their functionality by the use of so-called attributes, which can be loaded into the entity.\nThese attributes are complex pieces of software, which are implemented as Java classes.\nTypical attributes are encapsulations of sensors, but they can also be used to implement context services, for example to notify other entities about location changes of users.\nEach entity can contain a collection of such attributes, where an entity itself is an attribute.\nThe initial set of attributes an entity contains can change dynamically at runtime, if an entity loads or unloads attributes from the local storage or over the network.\nIn order to load and deploy new attributes, an entity has to reference a class loader and a transport and lookup layer, which manages the lookup mechanism for discovering other entities and the transport.\nXML configuration files specify which initial set of entities should be loaded and which attributes these entities own.\nThe communication between entities and attributes is based on context events.\nEach attribute is able to trigger events, which are addressed to other attributes and entities respectively, independently on which physical computer they are running.\nAmong other things, and event contains the name of the event and a list of parameters delivering information about the event itself.\nRelated with this event-based architecture is the use of ECA (Event-Condition-Action) - rules for defining the behaviour of the context system.\nTherefore, every entity has a rule-interpreter, which catches triggered events, checks conditions associated with them and causes certain 
actions.\nThese rules are referenced by the entity's XML configuration.\nA rule itself is even able to trigger the insertion of new rules or the unloading of existing rules at runtime in order to change the behaviour of the context system dynamically.\nTo sum up, the context framework provides a flexible, distributed architecture for hiding low-level sensor data from high-level applications and it hides external communication details from the application developer.\nFurthermore, it is able to adapt its behaviour dynamically by loading attributes, entities or ECArules at runtime.\n2.\nARCHITECTURE OVERVIEW\nAs GISS uses the Context Framework described in chapter 1.3 as middleware, every user is represented by an entity, as well as the central server, which is responsible for context transformation, context representation and context triggering (cf. Figure 1).\nA main part of our work is about the automated acquisition of position information and its sensor-independent provision at application level.\nWe do not only sense the current location of users, but also determine spatial proximities between them.\nDeveloping the architecture, we focused on keeping the client as simple as possible and reducing the communication between client and server to a minimum.\nEach client may have various location and\/or proximity sensors attached, which are encapsulated by respective Context Framework-attributes (\"Sensor Encapsulation\").\nThese attributes are responsible for integrating native sensor-implementations into the Context Framework and sending sensor-dependent position information to the server.\nWe consider it very important to support different types of sensors even at the same time, in order to improve location accuracy on the one hand, while providing a pervasive location-sensing environment with seamless transition between different location sensing techniques on the other hand.\nAll location - and proximity-sensors supported are represented by server-side 
context-attributes, which correspond to the client-side sensor-encapsulation attributes and abstract the sensor-dependent position information received from all users via the wireless network (sensor abstraction).
This requires a context repository, in which the mapping of diverse physical positions to standardized locations is stored.
The standardized location and proximity information of each user is then passed to the so-called "Sensor Fusion" attributes, one for symbolic locations and a second one for spatial proximities.
Their job is to merge the location and proximity information of clients, respectively, which is described in detail in chapter 3.3.
Every time the symbolic location of a user or the spatial proximity between two users changes, the "Sensor Fusion" attributes notify the "GISS Core" attribute, which controls the application.
Because of the abstraction of sensor-dependent position information, the system can easily be extended with additional sensors, just by implementing the (typically two) attributes for encapsulating the sensor (some sensors may not need a client-side part), abstracting physical positions and observing the interface to "GISS Core".
Figure 3. Architecture of the Group Interaction Support System (GISS)
The "GISS Core" attribute is the central coordinator of the application as it appears to the user.
It not only serves as an interface to the location-sensing subsystem, but also collects further context information in other dimensions (time, identity or activity).
Every time a change in the context of one or more users is detected, "GISS Core" evaluates the effect of these changes on the user, on the groups he belongs to and on the other members of these groups.
Whenever necessary, events are thrown to the affected clients to trigger context-aware activities, like changing the presentation of awareness information or the execution of services.
The client-side part of the application is kept as simple as
possible.
Furthermore, modular design was not only an issue on the sensor side, but also in the design of the user interface architecture.
Thus, the complete user interface can easily be exchanged, as long as all of the defined events are taken into account and understood by the new interface attribute.
The currently implemented user interface is split up into two parts, which are also represented by two attributes.
The central attribute on the client side is the so-called "Instant Messenger Encapsulation", which on the one hand interacts with the server through events and on the other hand serves as a proxy for the external application the user interface is built on.
As external application, we use an existing open source instant messenger, the ICQ-compliant Simple Instant Messenger (SIM).
We have chosen an instant messenger as front-end because it provides a well-known interface for most users and facilitates a seamless integration of group interaction support, thus increasing acceptance and ease of use.
As the basic functionality of the instant messenger, to serve as a client in an instant messenger network, remains fully functional, our application is able to use the features already provided by the messenger.
For example, the contexts activity and identity are derived from the messenger network, as described later.
The Instant Messenger Encapsulation is also responsible for supporting group communication.
Through the interface of the messenger, it provides means of synchronous and asynchronous communication as well as a context-aware reminder system and tools for managing groups and one's own availability status.
The second part of the user interface is a visualisation of the users' locations, which is implemented in the attribute "Viewer".
The current implementation provides a two-dimensional map of the campus, but it can easily be replaced by other visualisations, for example a three-dimensional VRML model.
Furthermore, this visualisation is used to
show the artefacts for asynchronous communication.
Based on a floor-plan view of the geographical area the user currently resides in, it gives a quick overview of which people are nearby and their state, and provides means to interact with them.
In the following chapters 3 and 4, we describe the location-sensing backend and the application front-end for supporting group interaction in more detail.
3. LOCATION SENSING
In the following chapter, we will introduce a location model, which is used for representing locations; afterwards, we will describe the integration of location and proximity sensors in more detail.
Finally, we will have a closer look at the fusion of location and proximity information acquired by various sensors.
3.1 Location Model
A location model (i.e. a context representation for the context information location) is needed to represent the locations of users, in order to facilitate location-related queries like "given a location, return a list of all the objects there" or "given an object, return its current location".
In general, there are two approaches [3,5]: symbolic models, which represent location as abstract symbols, and geometric models, which represent location as coordinates.
We have chosen a symbolic location model, which refers to locations as abstract symbols like "Room P111" or "Physics Building", because we do not require geometric location data.
Instead, abstract symbols are more convenient for human interaction at application level.
Furthermore, we use a symbolic location containment hierarchy similar to the one introduced in [11], which consists of top-level regions, which contain buildings, which contain floors, and the floors again contain rooms.
We also distinguish four types, namely region (e.g. a whole campus), section (e.g. a building or an outdoor section), level (e.g. a certain floor in a building) and area (e.g.
a certain room).
We introduce a fifth type of location, which we refer to as semantic.
These so-called semantic locations can appear at any level in the hierarchy and they can be nested, but they do not necessarily have a geographic representation.
Examples of such semantic locations are tagged objects within a room (e.g. a desk and a printer on this desk) or the name of a department, which contains certain rooms.
Figure 4. Symbolic Location Containment Hierarchy
The hierarchy of symbolic locations as well as the type of each position is stored in the context repository.
3.2 Sensors
Our architecture supports two different kinds of sensors: location sensors, which acquire location information, and proximity sensors, which detect spatial proximities between users.
As described above, each sensor has a server-side and, in most cases, a corresponding client-side implementation, too.
While the client-side attributes ("Sensor Encapsulation") are responsible for acquiring low-level sensor data and transmitting it to the server, the corresponding server-side "Sensor Abstraction" attributes transform it into a uniform and sensor-independent format, namely symbolic locations and the IDs of users in spatial proximity, respectively.
Afterwards, the respective "Sensor Fusion" attribute is triggered with this sensor-independent information of a certain user, detected by a particular sensor.
Such notifications are performed every time the sensor acquires new information.
Accordingly, "Sensor Encapsulation" attributes are responsible for detecting when a certain sensor is no longer available on the client side (e.g. if it has been unplugged by the user) or when position or proximity, respectively, can no longer be determined (e.g.
the RFID reader cannot detect any tags) and for notifying the corresponding sensor fusion about this.
3.2.1 Location Sensors
In order to sense physical positions, the "Sensor Encapsulation" attributes asynchronously transmit sensor-dependent position information to the server.
The corresponding location "Sensor Abstraction" attributes collect the physical positions delivered by the sensors of all users, and perform a repository lookup in order to get the associated symbolic location.
This requires certain tables for each sensor, which map physical positions to symbolic locations.
One physical position may have multiple symbolic locations at different accuracy levels in the location hierarchy assigned to it, for example if a sensor covers several rooms.
If such a mapping is found, an event is thrown in order to notify the attribute "Location Sensor Fusion" about the symbolic locations a certain sensor of a particular user has determined.
We have prototypically implemented three kinds of location sensors, based on WLAN (IEEE 802.11), Bluetooth and RFID (Radio Frequency Identification).
We have chosen these three completely different sensors because of their differences concerning accuracy, coverage and administrative effort, in order to evaluate the flexibility of our system (see Table 1).
The most accurate one is the RFID sensor, which is based on an active RFID reader.
As soon as the reader is plugged into the client, it scans for active RFID tags in range and transmits their serial numbers to the server, where they are mapped to symbolic locations.
We also take into account RSSI (Received Signal Strength Indication), which provides a position accuracy of a few centimetres and thus enables us to determine which RFID tag is nearest.
Due to this high accuracy, RFID is used for locating users within rooms.
The administration is quite simple; once a new RFID tag is placed, its serial number simply has to be assigned to a single symbolic location.
A drawback
is the poor availability, which can be traced back to the fact that RFID readers are still very expensive.
The second one is an 802.11 WLAN sensor.
For this, we integrated a purely software-based, commercial WLAN positioning system for tracking clients on the university's campus-wide WLAN infrastructure.
The achieved position accuracy is in the range of a few metres and thus is suitable for location sensing at the granularity of rooms.
A big disadvantage is that a map of the whole area has to be calibrated with measuring points at a distance of 5 metres each.
Because most mobile computers are equipped with WLAN technology and the positioning system is a software-only solution, nearly everyone is able to use this kind of sensor.
Finally, we have implemented a Bluetooth sensor, which detects Bluetooth tags (i.e. Bluetooth modules with known position) in range and transmits them to the server, which maps them to symbolic locations.
Because we do not use signal strength information in the current implementation, the accuracy is above 10 metres, and therefore a single Bluetooth MAC address is associated with several symbolic locations, according to the physical locations such a Bluetooth module covers.
This leads to the disadvantage that the range of each Bluetooth tag has to be determined and mapped to the symbolic locations within this range.
Table 1. Comparison of implemented sensors
Sensor     | Accuracy        | Coverage / granularity  | Administrative effort
RFID       | few centimetres | within rooms            | low: assign each tag to one symbolic location; readers expensive, thus rarely available
WLAN       | few metres      | rooms (campus-wide)     | high: calibration map with measuring points every 5 metres
Bluetooth  | above 10 metres | several rooms per tag   | medium: range of each tag must be mapped to symbolic locations
3.2.2 Proximity Sensors
Any sensor that is able to detect whether two users are in spatial proximity is referred to as a proximity sensor.
Similar to the location sensors, the "Proximity Sensor Abstraction" attributes collect the physical proximity information of all users and transform it into mappings of user IDs.
We have implemented two types of proximity sensors, which are based on Bluetooth on the one hand and on fused symbolic locations (see chapter 3.3.1) on the other hand.
The Bluetooth implementation goes along with the implementation of the Bluetooth-based location
sensor.
The Bluetooth MAC addresses already determined to be in range of a certain client are compared with those of all other clients, and each time the attribute "Bluetooth Sensor Abstraction" detects congruence, it notifies the proximity sensor fusion about this.
The second sensor is based on the symbolic locations processed by "Location Sensor Fusion", and therefore does not need a client-side implementation.
Each time the fused symbolic location of a certain user changes, it checks whether he is at the same symbolic location as another user, and again notifies the proximity sensor fusion about the proximity between these two users.
The range can be restricted to any level of the location containment hierarchy, for example to room granularity.
A currently unresolved issue is the incomparable granularity of different proximity sensors.
For example, symbolic locations at the same level in the location hierarchy mostly do not cover the same geographic area.
3.3 Sensor Fusion
The core of the location-sensing subsystem is the sensor fusion.
It merges the data of various sensors, while coping with differences concerning accuracy, coverage and sample rate.
According to the two kinds of sensors described in chapter 3.2, we distinguish between the fusion of location sensors on the one hand, and the fusion of proximity sensors on the other hand.
The fusion of symbolic locations as well as the fusion of spatial proximities operates on standardized information (cf. Figure 3).
This has the advantage that additional position and proximity sensors can be added easily, or the fusion algorithms can be replaced by more sophisticated ones.
Fusion is performed for each user separately and takes into account the measurements at a single point in time only (i.e.
no history information is used for determining the current location of a certain user).
The algorithm collects all events thrown by the "Sensor Abstraction" attributes, performs fusion and triggers the "GISS Core" attribute if the symbolic location of a certain user or the spatial proximity between users has changed.
An important feature is the persistent storage of the location and proximity history in a database in order to allow future retrieval.
This enables applications to visualize the movement of users, for example.
3.3.1 Location Sensor Fusion
The goal of the fusion of location information is to improve precision and accuracy by merging the sets of symbolic locations supplied by various location sensors, in order to reduce the number of these locations to a minimum, ideally to a single symbolic location per user.
This is quite difficult, because different sensors may differ in accuracy as well as sample rate.
The "Location Sensor Fusion" attribute is triggered by events thrown by the "Location Sensor Abstraction" attributes.
These events contain information about the identity of the user concerned, his current location and the sensor by which the location has been determined.
If the attribute "Location Sensor Fusion" receives such an event, it checks whether the set of symbolic locations of the user concerned has changed (compared with the last event).
If this is the case, it notifies the "GISS Core" attribute about all symbolic locations this user is currently associated with.
However, this information is not very useful on its own if a certain user is associated with several locations.
As described in chapter 3.2.1, a single location sensor may deliver multiple symbolic locations.
Moreover, a certain user may have several location sensors, which supply symbolic locations differing in accuracy (i.e.
at different levels in the location containment hierarchy).
To cope with this challenge, we implemented a fusion algorithm that reduces the number of symbolic locations to a minimum (ideally to a single location).
In a first step, each symbolic location is associated with its number of occurrences.
A symbolic location may occur several times if it is referred to by more than one sensor, or if a single sensor detects multiple tags which again refer to several locations.
Furthermore, this number is added to the previously calculated number of occurrences of each symbolic location which is a child location of the considered one in the location containment hierarchy.
For example, if, in Figure 4, "room2" occurs two times and "desk" occurs a single time, the value 2 of "room2" is added to the value 1 of "desk", whereby "desk" finally gets the value 3.
In a final step, only those symbolic locations are kept which are assigned the highest number of occurrences.
A further reduction can be achieved by assigning priorities to sensors (based on accuracy and confidence) and accumulating these priorities for each symbolic location instead of just counting the number of occurrences.
If the remaining fused locations have changed (i.e. if they differ from the fused locations the considered user is currently associated with), they are provided with the current timestamp, written to the database, and the "GISS" attribute is notified about where the user is probably located.
Finally, the most accurate common location in the location hierarchy is calculated (i.e.
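The counting scheme described above can be sketched as follows, a minimal illustration under our own naming (the framework's actual implementation is not shown in the paper): each reported symbolic location accumulates its own occurrences plus those of all its ancestors in the containment hierarchy, and the location(s) with the highest total survive.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Sketch of the location fusion step: count occurrences, propagate a
// location's count down to reported child locations, keep the maximum.
public class LocationFusionSketch {

    // parent.get(x) is the location directly containing x, e.g. "desk" -> "room2".
    public static Set<String> fuse(List<String> reported, Map<String, String> parent) {
        Map<String, Integer> count = new HashMap<>();
        for (String loc : reported) count.merge(loc, 1, Integer::sum);

        // A reported location accumulates the occurrences of all its ancestors,
        // so a child inherits the evidence for every location containing it.
        Map<String, Integer> total = new HashMap<>();
        for (String loc : count.keySet()) {
            int sum = 0;
            for (String a = loc; a != null; a = parent.get(a)) {
                sum += count.getOrDefault(a, 0);
            }
            total.put(loc, sum);
        }

        // Keep only the location(s) with the highest accumulated count.
        int best = Collections.max(total.values());
        Set<String> result = new TreeSet<>();
        for (var e : total.entrySet())
            if (e.getValue() == best) result.add(e.getKey());
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> parent = Map.of("desk", "room2", "room2", "floor1");
        // "room2" reported twice, "desk" once: desk accumulates 1 + 2 = 3 and wins.
        System.out.println(fuse(List.of("room2", "room2", "desk"), parent));  // [desk]
    }
}
```

Replacing the raw counts with accumulated sensor priorities, as suggested above, only changes the `merge` step; the propagation and maximum selection stay the same.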
the least upper bound of these symbolic locations) in order to get a single symbolic location.
If it changes, the "GISS Core" attribute is triggered again.
3.3.2 Proximity Sensor Fusion
Proximity sensor fusion is much simpler than the fusion of symbolic locations.
The corresponding proximity sensor fusion attribute is triggered by events thrown by the "Proximity Sensor Abstraction" attributes.
These special events contain information about the identity of the two users concerned, whether they are currently in spatial proximity or proximity no longer persists, and by which proximity sensor this has been detected.
If the sensor fusion attribute is notified by a certain "Proximity Sensor Abstraction" attribute about an existing spatial proximity, it first checks whether these two users are already known to be in proximity (detected either by another user or by another proximity sensor of the user which caused the event).
If not, this change in proximity is written to the context repository with the current timestamp.
Similarly, if the attribute "Proximity Fusion" is notified about an ended proximity, it checks whether the users are still known to be in proximity, and writes this change to the repository if not.
Finally, if the spatial proximity between the two users has actually changed, an event is thrown to notify the "GISS Core" attribute about this.
4. CONTEXT-SENSITIVE INTERACTION
4.1 Overview
In most of today's systems supporting interaction in groups, the provided means lack any awareness of the user's current context, and are thus unable to adapt to his needs.
In our approach, we use context information to enhance interaction and provide further services, which offer new possibilities to the user.
Furthermore, we believe that interaction in groups also has to take into account the current context of the group itself, and not only the contexts of individual group members.
For this reason, we also retrieve information about the group's current
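The proximity bookkeeping of chapter 3.3.2 reduces to maintaining a set of unordered user pairs and forwarding a notification only when a pair's state actually changes. A minimal sketch under assumed names (not the framework's API):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of proximity sensor fusion: an unordered pair of user IDs is stored
// while the two users are known to be in proximity; report(...) returns true
// only when the state changed, i.e. when "GISS Core" should be notified,
// regardless of how many sensors report the same proximity.
public class ProximityFusionSketch {

    private final Set<String> pairs = new HashSet<>();

    // Canonical key for an unordered pair of user IDs.
    private static String key(String a, String b) {
        return a.compareTo(b) < 0 ? a + "|" + b : b + "|" + a;
    }

    // Returns true iff the proximity state of this pair actually changed.
    public boolean report(String a, String b, boolean inProximity) {
        return inProximity ? pairs.add(key(a, b)) : pairs.remove(key(a, b));
    }

    public static void main(String[] args) {
        ProximityFusionSketch fusion = new ProximityFusionSketch();
        System.out.println(fusion.report("alice", "bob", true));   // true: new proximity
        System.out.println(fusion.report("bob", "alice", true));   // false: already known
        System.out.println(fusion.report("alice", "bob", false));  // true: proximity ended
    }
}
```

Writing the timestamped change to the context repository would happen exactly where `report` returns true; duplicate reports from different sensors fall through silently.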
context, derived from the contexts of the group members together with some sort of meta-information (see chapter 4.3).
The sources of context used for our application correspond to the four primary context types given in chapter 1.1: identity (I), location (L), time (T) and activity (A).
As stated before, we also take into account the context of the group the user is interacting with, so that we can add a fifth type of context information, group awareness (G), to the classification.
Using this context information, we can trigger context-aware activities in all of the three categories described in chapter 1.1: presentation of information (P), automatic execution of services (A) and tagging of context to information for later retrieval (T).
Table 2 gives an overview of the activities we have already implemented; they are described comprehensively in chapter 4.4.
The table also shows which types of context information are used for each activity and the category each activity can be classified in.
Table 2. Classification of implemented context-aware activities
The reason for implementing these particular features is to take advantage of all four types of context information in order to support group interaction by utilizing comprehensive knowledge about the situation a single user or a whole group is in.
A critical issue for the user acceptance of such a system is the usability of its interface.
We have evaluated several ways of presenting context-aware means of interaction to the user before arriving at the solution we use now.
Although we think that the user interface as implemented now offers the best trade-off between seamless integration of features and ease of use, it would be no problem to extend the architecture with other user interfaces, even on different platforms.
The chosen solution is based on an existing instant messenger, which offers several possibilities for integrating our system (see chapter 4.2).
The biggest advantage of this
approach is that the user is confronted with a graphical user interface he is, in most cases, already used to.
Furthermore, our system uses an instant messenger account as an identifier, so that the user does not have to register a further account anywhere else (for example, the user can use his already existing ICQ account).
4.2 Instant Messenger Integration
Our system is built upon an existing instant messenger, the so-called Simple Instant Messenger (SIM).
The implementation of this messenger is carried out as a project at Sourceforge.
SIM supports multiple messenger protocols such as AIM, ICQ and MSN.
It also supports connections to multiple accounts at the same time.
Furthermore, full support for SMS notification (where provided by the used protocol) is given.
SIM is based on a plug-in concept.
All protocols, as well as parts of the user interface, are implemented as plug-ins.
This architecture is also used to extend the application's ability to communicate with external applications.
For this purpose, a remote control plug-in is provided, by which SIM can be controlled from external applications via a socket connection.
This remote control interface is extensively used by GISS for retrieving the contact list, setting the user's availability state or sending messages.
The functionality of the plug-in was extended in several ways, for example to accept messages for an account (as if they had been sent via the messenger network).
The messenger, more exactly the contact list (i.e.
a list of profiles of all people registered with the instant messenger, which is visualized by listing their names, as can be seen in Figure 5), is also used to display the locations of other members of the groups a user belongs to.
This provides location awareness without taking up too much space or requesting the user's full attention.
A more comprehensive description of these features is given in chapter 4.4.
4.3 Sources of Context Information
While the location context of a user is obtained from our location-sensing subsystem described in chapter 3, we consider further types of context relevant for the support of group interaction, too.
Local time, as a very important context dimension, can easily be retrieved from the real-time clock of the user's system.
Besides location and time, we also use context information on the user's activity and identity, where we exploit the functionality provided by the underlying instant messenger system.
Identity (or, more exactly, the mapping of IDs to names as well as additional information from the user's profile) can be distilled out of the contents of the user's contact list.
Information about the activity of a certain user is only available in a very restricted area, namely the activity at the computer itself.
Other activities, like making a phone call or something similar, cannot be recognized with the current implementation of the activity sensor.
The only context information used is the instant messenger's availability state, thus providing only a very coarse classification of the user's activity (online, offline, away, busy etc.).
Although this may not seem to be very much information, it is certainly relevant and can be used to improve or even enable several services.
Having collected the context information from all available users, it is now possible to distil some information about the context of a certain group.
Information about the context of a group includes how many members the group currently has,
whether the group is meeting right now, which members are participating in a meeting, how many members have read which of the available posts from other team members, and so on.
For this, some additional information, like a list of members for each group, is needed.
These lists can be assembled manually (by users joining and leaving groups) or retrieved automatically.
The context of a group is secondary context and is aggregated from the available contexts of the group members.
Every time the context of a single group member changes, the context of the whole group changes and has to be recalculated.
With knowledge about a user's context and the contexts of the groups he belongs to, we can provide several context-aware services to the user, which enhance his interaction abilities.
A brief description of these services is given in chapter 4.4.
4.4 Group Interaction Support
4.4.1 Visualisation of Location Information
An important feature is the visualisation of location information, which allows users to be aware of the locations of other users and of members of the groups they have joined, respectively.
As already described in chapter 2, we use two different forms of visualisation.
The arguably more important one is to display location information in the contact list of the instant messenger, right beside the name, so that it is always visible without drawing the user's attention to it (compared with a two-dimensional view, for example, which requires its own window for displaying a map of the environment).
Due to the restricted space in the contact list, it has been necessary to implement some sort of level-of-detail concept.
As we use a hierarchical location model, we are able to determine the most accurate common location of two users.
In the contact list, the current symbolic location one level below this common location is then displayed.
If, for example, user A currently resides in room "P121" on the first floor of a building and user B, who has to be
displayed in the contact list of user A, is in room "P304" on the third floor, the most accurate common location of these two users is the building they are in.
For that reason, the floor (i.e. one level below the common location, namely the building) of user B is displayed in the contact list of user A.
If both people reside on the same floor or even in the same room, the room is taken.
Figure 5 shows a screenshot of the Simple Instant Messenger, where the current location of those people whose location is known by GISS is displayed in brackets right beside their names.
At the top of the image, the highlighted, integrated GISS toolbar is shown, which currently contains the following implemented functionality (from left to right): asynchronous communication for groups (see chapter 4.4.4), context-aware reminders (see chapter 4.4.6), two-dimensional visualisation of location information, forming and managing groups (see chapter 4.4.2), context-aware availability management (see chapter 4.4.5) and finally a button for terminating GISS.
Figure 5. GISS integration in Simple Instant Messenger
As displaying just this short form of location may not be enough for the user, because he may want to see the most accurate position available, a "fully qualified" position is shown if a name in the contact list is clicked (e.g. in the form of "desk@room2@department1@1stfloor@building 1@campus").
The second possible form of visualisation is a graphical one.
We have evaluated a three-dimensional view, which was based on a VRML model of the respective area (cf. Figure 6).
Due to shortcomings in navigation and usability, we decided to use a two-dimensional view of the floor (referred to as level in the location hierarchy, cf. Figure 4).
Other levels of granularity, like section (e.g. building) and region (e.g.
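The level-of-detail rule described above amounts to finding the deepest common ancestor of the two users' locations in the containment hierarchy and displaying the other user's location one level below it. A minimal sketch with hypothetical names (locations given as root-to-leaf paths):

```java
import java.util.List;

// Sketch of the contact-list level-of-detail rule: walk both root-to-leaf
// paths in parallel until they diverge; the first differing element on the
// other user's path is exactly one level below the most accurate common
// location, and is what gets displayed.
public class LocationDisplaySketch {

    public static String displayedLocation(List<String> viewer, List<String> other) {
        int i = 0;
        while (i < viewer.size() && i < other.size()
                && viewer.get(i).equals(other.get(i))) i++;
        // If both paths are identical, fall back to the most accurate location.
        return i < other.size() ? other.get(i) : other.get(other.size() - 1);
    }

    public static void main(String[] args) {
        List<String> a = List.of("campus", "building", "floor1", "P121");
        List<String> b = List.of("campus", "building", "floor3", "P304");
        // Common location is the building, so user A sees B's floor:
        System.out.println(displayedLocation(a, b));   // floor3
        // On the same floor, the room itself is shown:
        List<String> c = List.of("campus", "building", "floor1", "P122");
        System.out.println(displayedLocation(a, c));   // P122
    }
}
```

The "fully qualified" position shown on a click is simply the other user's whole path joined from leaf to root.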
campus) are also provided.
In this floor-plan-based view, the current locations are shown in the manner of ICQ contacts, which are placed at the currently sensed location of the respective person.
The availability status of a user, for example "away" if he is not at the computer right now, or "busy" if he does not want to be disturbed, is visualized by colour-coding the ICQ flower beside the name.
Furthermore, the floor-plan view shows the so-called virtual post-its, which are virtual counterparts of real-life post-its and serve as our means of asynchronous communication (more about virtual post-its can be found in chapter 4.4.4).
Figure 6. 3D view of the floor (VRML)
Figure 7 shows the two-dimensional map of a certain floor, where several users are currently located (visualized by their names and the flower beside them).
The location of the client on which the map is displayed is visualized by a green circle.
Down to the right, two virtual post-its can be seen.
Figure 7. 2D view of the floor
Another feature of the 2D view is the visualisation of the location history of users.
As we store the complete history of a user's locations together with timestamps, we are able to provide information about the locations he has been at back in time.
When the mouse is moved over the name of a certain user in the 2D view, "footprints" of that user, placed at the locations he has been, are faded out more strongly the older the location information is.
4.4.2 Forming and Managing Groups
To support interaction in groups, it is first necessary to form groups.
As groups can have different purposes, we distinguish two types of groups.
So-called static groups are groups which are built up manually by people joining and leaving them.
Static groups can be further divided into two subtypes.
In open static groups, everybody can join and leave at any time, useful for example to form a group of lecture attendees or some sort of interest group.
Closed static groups have an
owner, who decides which persons are allowed to join, although everybody can leave again at any time.
Closed groups enable users, for example, to create a group of their friends, thus being able to communicate with them easily.
In contrast to that, we also support the creation of dynamic groups.
They are formed among persons who are at the same location at the same time.
The creation of dynamic groups is only performed at locations where it makes sense to form groups, for example in lecture halls or meeting rooms, but not in corridors or outdoors.
It would also not be very meaningful to form a group only of the people residing in the left front sector of a hall; instead, the complete hall should be considered.
For these reasons, all the defined locations in the hierarchy are tagged as to whether they allow the formation of groups or not.
Dynamic groups are not only formed at the granularity of rooms, but also at higher levels in the hierarchy, for example with the people currently residing in the area of a department.
As the members of dynamic groups constantly change, it is possible to create an open static group out of them.
4.4.3 Synchronous Communication for Groups
The most important form of synchronous communication on computers today is instant messaging; some people even consider instant messaging to be the real killer application on the Internet.
This has also motivated the decision to build GISS upon an instant messaging system.
In today's messenger systems, peer-to-peer communication is extensively supported.
However, when it comes to communication in groups, the support is rather poor most of the time.
Often, only sending a message to multiple recipients is supported, lacking means to take into account the current state of the recipients.
Furthermore, groups can only be formed of members in one's contact list, so it is not possible to send messages to a group where not all of its members are known (which may be the case in settings where the
participants of a lecture form a group). Our approach does not have the mentioned restrictions. We introduce group entries in the user's contact list, enabling him or her to send messages to such a group easily, without knowing who exactly is currently a member of it. Furthermore, group messages are only delivered to persons who are currently not busy, thus preventing disturbance by a message which is possibly unimportant for the user. These features cannot be carried out in the messenger network itself, so whenever a message is sent to a group account, we intercept it and route it through our system to all the recipients who are available at that time. Communication via a group account is also stored centrally, enabling people to query missed messages or simply to view the message history.

4.4.4 Asynchronous Communication for Groups

Asynchronous communication in groups is not a new idea. The goal of this approach is not to reinvent the wheel, as email is perhaps the most widely used form of asynchronous communication on computers and is broadly accepted and standardized. In our work, we aim at the combination of asynchronous communication with location awareness. For this reason, we introduce the concept of so-called virtual post-its (cp. [13]), which are messages that are bound to physical locations. These virtual post-its can either be visible to all users that are passing by, or they can be restricted to be visible only to certain groups of people. Moreover, a virtual post-it can also have an expiry date after which it is dropped and not displayed anymore. Virtual post-its can also be commented on by others, thus providing some form of forum-like interaction, where each post-it forms a thread. Virtual post-its are displayed automatically whenever an available user passes by for the first time. Afterwards, post-its can be accessed via the 2D viewer, where all visible post-its are shown. All readers of a post-it are logged and
displayed when viewing it, providing some sort of awareness about the group members' activities in the past.

4.4.5 Context-aware Availability Management

Instant messengers in general provide some kind of availability information about a user. Although this information can only be defined at a very coarse granularity, we have decided to use it as our means of gathering activity context, because introducing an additional one would strongly decrease the usability of the system. To support the user in managing his availability, we provide an interface that lets the user define rules to adapt his availability to the current context. These rules follow the form "on event (E) if condition (C) then action (A)", which is directly supported by the ECA rules of the Context Framework described in chapter 1.3. The testing of conditions is periodically triggered by throwing events (whenever the context of a user changes). The condition itself is defined by the user, who can demand the change of his availability status as the action of the rule. As a condition, the user can define his location, a certain time (also triggering daily, weekly or monthly) or any logical combination of these criteria.

4.4.6 Context-Aware Reminders

Reminders [14] give the user the opportunity to define tasks and be reminded of them when certain criteria are fulfilled. Thus, a reminder can be seen as a post-it to oneself which is only visible in certain cases. Reminders can be bound to a certain place or time, but also to the spatial proximity of users or groups. These criteria can be combined with Boolean operators, thus providing a powerful means to remind the user of tasks that he wants to carry out when a certain context occurs. A reminder will only pop up the first time the actual context meets the defined criterion. When the reminder shows up, the user has the chance to resubmit it in order to be reminded again, for example five minutes later or the next time a
certain user is in spatial proximity.

4.4.7 Context-Aware Recognition and Notification of Group Meetings

With the available context information, we try to recognize meetings of a group. Determining the criteria by which the system recognizes that a group is having a meeting is part of ongoing work. In a first approach, we use the location and activity context of the group members to determine a meeting. Whenever more than 50% of the members of a group are available at a location where a meeting is considered to make sense (e.g. not in a corridor), a meeting-minutes post-it is created at this location and all absent group members are notified of the meeting and the location where it takes place. During the meeting, the comment feature of virtual post-its provides a means to take notes for all of the participants. When members join or leave the meeting, this is automatically added as a note to the list of comments. Like the recognition of the beginning of a meeting, the recognition of its end is still part of ongoing work. If the end of the meeting is recognized, all group members get the complete list of comments as a meeting protocol at the end of the meeting.

5. CONCLUSIONS

This paper discussed the potential of supporting group interaction by using context information. First, we introduced the notions of context and context computing and motivated their value for supporting group interaction. An architecture was presented to support context-aware group interaction in mobile, distributed environments. It is built upon a flexible and extensible framework, thus enabling easy adaptation to available context sources (e.g.
by adding additional sensors) as well as to the required form of representation. We have prototypically developed a set of services which enhance group interaction by taking into account the current context of the users as well as the context of the groups themselves. Important features are the dynamic formation of groups; the visualization of location on a two-dimensional map, as well as unobtrusively integrated into an instant messenger; asynchronous communication by virtual post-its, which are bound to certain locations; and a context-aware availability management, which adapts the availability status of a user to his current situation. To provide location information, we have implemented a subsystem for the automated acquisition of location and proximity information provided by various sensors, which provides a technology-independent presentation of locations and spatial proximities between users and merges this information using sensor-independent fusion algorithms. A history of locations as well as of spatial proximities is stored in a database, thus enabling context-history-based services.

A Dynamic Pari-Mutuel Market for Hedging, Wagering, and Information Aggregation

Abstract: I develop a new mechanism for risk allocation and information speculation called a dynamic pari-mutuel market (DPM). A DPM acts as a hybrid between a pari-mutuel market and a continuous double auction (CDA), inheriting some of the advantages of both.
Like a pari-mutuel market, a DPM offers infinite buy-in liquidity and zero risk for the market institution; like a CDA, a DPM can continuously react to new information, dynamically incorporate information into prices, and allow traders to lock in gains or limit losses by selling prior to event resolution. The trader interface can be designed to mimic the familiar double auction format with bid-ask queues, though with an additional variable called the payoff per share. The DPM price function can be viewed as an automated market maker always offering to sell at some price, and moving the price appropriately according to demand. Since the mechanism is pari-mutuel (i.e., redistributive), it is guaranteed to pay out exactly the amount of money taken in. I explore a number of variations on the basic DPM, analyzing the properties of each, and solving in closed form for their respective price functions.

David M. Pennock
Yahoo! Research Labs
74 N.
Pasadena Ave, 3rd Floor, Pasadena, CA 91103 USA
pennockd@yahoo-inc.com

Categories and Subject Descriptors: J.4 [Computer Applications]: Social and Behavioral Sciences-Economics

General Terms: Algorithms, Design, Economics, Theory.

1. INTRODUCTION

A wide variety of financial and wagering mechanisms have been developed to support hedging (i.e., insuring) against exposure to uncertain events and/or speculative trading on uncertain events. The dominant mechanism used in financial circles is the continuous double auction (CDA), or in some cases the CDA with market maker (CDAwMM). The primary mechanism used for sports wagering is a bookie or bookmaker, who essentially acts exactly as a market maker. Horse racing and jai alai wagering traditionally employ the pari-mutuel mechanism. Though there is no formal or logical
separation between financial trading and wagering, the two endeavors are socially considered distinct. Recently, there has been a move to employ CDAs or CDAwMMs for all types of wagering, including on sports, horse racing, political events, world news, and many other uncertain events, and a simultaneous and opposite trend to use bookie systems for betting on financial markets. These trends highlight the interchangeable nature of the mechanisms and further blur the line between investing and betting. Some companies at the forefront of these movements are growing exponentially, with some industry observers declaring the onset of a revolution in the wagering business.1

1 http://www.wired.com/news/ebiz/0,1272,61051,00.html

Each mechanism has pros and cons for the market institution and the participating traders. A CDA only matches willing traders, and so poses no risk whatsoever for the market institution. But a CDA can suffer from illiquidity in the form of huge bid-ask spreads, or even empty bid-ask queues if trading is light and thus markets are thin. A successful CDA must overcome a chicken-and-egg problem: traders are attracted to liquid markets, but liquid markets require a large number of traders. A CDAwMM and the similar bookie mechanism have built-in liquidity, but at a cost: the market maker itself, usually affiliated with the market institution, is exposed to significant risk of large monetary losses. Both the CDA and CDAwMM offer incentives for traders to leverage information continuously as soon as that information becomes available. As a result, prices are known to capture the current state of information exceptionally well.

Pari-mutuel markets effectively have infinite liquidity: anyone can place a bet on any outcome at any time, without the need for a matching offer from another bettor or a market maker. Pari-mutuel markets also involve no risk for the market institution, since they only redistribute money from losing wagers to winning wagers. However, pari-mutuel markets are not suitable for situations where information arrives over time, since there is a strong disincentive for placing bets until either (1) all information is revealed, or (2) the market is about to close. For this reason, pari-mutuel prices prior to the market's close cannot be considered a reflection of current information. Pari-mutuel market participants cannot buy low and sell high: they cannot cash out gains (or limit losses) before the event outcome is revealed. Because the process whereby information arrives continuously over time is the rule rather than the exception, the applicability of the standard pari-mutuel mechanism is questionable in a large number of settings.

In this paper, I develop a new mechanism suitable for hedging, speculating, and wagering, called a dynamic pari-mutuel market (DPM). A DPM can be thought of as a hybrid between a pari-mutuel market and a CDA. A DPM is indeed pari-mutuel in nature, meaning that it acts only to redistribute money from some traders to others, and so exposes the market institution to no volatility (no risk). A constant, pre-determined subsidy is required to start the market. The subsidy can in principle be arbitrarily small and might conceivably come from traders (via antes or transaction fees) rather than the market institution, though a nontrivial outside subsidy may actually encourage trading and information aggregation. A DPM has the infinite liquidity of a pari-mutuel market: traders can always purchase shares in any outcome at any time, at some price automatically set by the market institution. A DPM is also able to react to and incorporate information arriving over time, like a CDA. The market institution changes the price for particular outcomes based on the current state of wagering. If a particular outcome receives a relatively large number of wagers, its price increases; if an outcome receives relatively few wagers, its price
decreases. Prices are computed automatically using a price function, which can differ depending on what properties are desired. The price function determines the instantaneous price per share for an infinitesimal quantity of shares; the total cost for purchasing n shares is computed as the integral of the price function from 0 to n. The complexity of the price function can be hidden from traders by communicating only the ask prices for various lots of shares (e.g., lots of 100 shares), as is common practice in CDAs and CDAwMMs. DPM prices do reflect current information, and traders can cash out in an aftermarket to lock in gains or limit losses before the event outcome is revealed. While there is always a market maker willing to accept buy orders, there is not a market maker accepting sell orders, and thus no guaranteed liquidity for selling: instead, selling is accomplished via a standard CDA mechanism. Traders can always hedge-sell by purchasing the opposite outcome to the one they already own.

2. BACKGROUND AND RELATED WORK

2.1 Pari-mutuel markets

Pari-mutuel markets are common at horse races [1, 22, 24, 25, 26], dog races, and jai alai games. In a pari-mutuel market, people place wagers on which of two or more mutually exclusive and exhaustive outcomes will occur at some time in the future. After the true outcome becomes known, all of the money that is lost by those who bet on the incorrect outcome is redistributed to those who bet on the correct outcome, in direct proportion to the amount they wagered. More formally, if there are k mutually exclusive and exhaustive outcomes (e.g., k horses, exactly one of which will win), and M1, M2, ...
, Mk dollars are bet on each outcome, and outcome i occurs, then everyone who bet on an outcome j ≠ i loses their wager, while everyone who bet on outcome i receives (M1 + ··· + Mk)/Mi dollars for every dollar they wagered. That is, every dollar wagered on i receives an equal share of all money wagered. An equivalent way to think about the redistribution rule is that every dollar wagered on i is refunded, then receives an equal share of all remaining money bet on the losing outcomes, or Σj≠i Mj/Mi dollars. In practice, the market institution (e.g., the racetrack) first takes a certain percent of the total amount wagered, usually about 20% in the United States, then redistributes whatever money remains to the winners in proportion to their amount bet.

Consider a simple example with two outcomes, A and B. The outcomes are mutually exclusive and exhaustive, meaning that Pr(A ∧ B) = 0 and Pr(A) + Pr(B) = 1. Suppose $800 is bet on A and $200 on B. Now suppose that A occurs (e.g., horse A wins the race). People who wagered on B lose their money, or $200 in total. People who wagered on A win and each receives a proportional share of the total $1000 wagered (ignoring fees). Specifically, each $1 wager on A entitles its owner to a 1/800 share of the $1000, or $1.25.

Every dollar bet in a pari-mutuel market has an equal payoff, regardless of when the wager was placed or how much money was invested in the various outcomes at the time the wager was placed. The only state that matters is the final state: the final amounts wagered on all the outcomes when the market closes, and the identity of the correct outcome. As a result, there is a disincentive to place a wager early if there is any chance that new information might become available. Moreover, there are no guarantees about the payoff rate of a particular bet, except that it will be nonnegative if the correct outcome is chosen. Payoff rates can fluctuate arbitrarily until the market closes. So a second
reason not to bet early is to wait to get a better sense of the final payout rates. This is in contrast to CDAs and CDAwMMs, like the stock market, where incentives exist to invest as soon as new information is revealed. Pari-mutuel bettors may be allowed to switch their chosen outcome, or even cancel their bet, prior to the market's close. However, they cannot cash out of the market early, to either lock in gains or limit losses, if new information favors one outcome over another, as is possible in a CDA or a CDAwMM. If bettors can cancel or change their bets, then an aftermarket to sell existing wagers is not sensible: every dollar wagered is worth exactly $1 up until the market's close, so no one would buy at greater than $1 and no one would sell at less than $1. Pari-mutuel bettors must wait until the outcome is revealed to realize any profit or loss.

Unlike a CDA, in a pari-mutuel market, anyone can place a wager of any amount at any time; there is, in a sense, infinite liquidity for buying. A CDAwMM also has built-in liquidity, but at the cost of significant risk for the market maker. In a pari-mutuel market, since money is only redistributed among bettors, the market institution itself has no risk. The main drawback of a pari-mutuel market is that it is useful only for capturing the value of an uncertain asset at some instant in time. It is ill-suited for situations where information arrives over time, continuously updating the estimated value of the asset, situations common in almost all trading and wagering scenarios. There is no notion of buying low and selling high, as occurs in a CDA, where buying when few others are buying (and the price is low) is rewarded more than buying when many others are buying (and the price is high). Perhaps for this reason, in most dynamic environments, financial mechanisms like the CDA that can react in real-time to changing information are more typically employed to facilitate speculating and hedging. Since a
pari-mutuel market can estimate the value of an asset at a single instant in time, a repeated pari-mutuel market, where distinct pari-mutuel markets are run at consecutive intervals, could in principle capture changing information dynamics. But running multiple consecutive markets would likely thin out trading in each individual market. Also, in each individual pari-mutuel market, the incentives would still be to wait to bet until just before the ending time of that particular market. This last problem might be mitigated by instituting a random stopping rule for each individual pari-mutuel market. In laboratory experiments, pari-mutuel markets have shown a remarkable ability to aggregate and disseminate information dispersed among traders, at least for a single snapshot in time [17]. A similar ability has been recognized at real racetracks [1, 22, 24, 25, 26].

2.2 Financial markets

In the financial world, wagering on the outcomes of uncertain future propositions is also common. The typical market mechanism used is the continuous double auction (CDA). The term securities market in economics and finance generically encompasses a number of markets where speculating on uncertain events is possible. Examples include stock markets like NASDAQ, options markets like the CBOE [13], futures markets like the CME [21], other derivatives markets, insurance markets, political stock markets [6, 7], idea futures markets [12], decision markets [10] and even market games [3, 15, 16]. Securities markets generally have an economic and social value beyond facilitating speculation or wagering: they allow traders to hedge risk, or to insure against undesirable outcomes. So if a particular outcome has disutility for a trader, he or she can mitigate the risk by wagering for the outcome, to arrange for compensation in case the outcome occurs. In this sense, buying automobile insurance is effectively a bet that an accident or other covered event will occur. Similarly, buying a
put option, which is useful as a hedge for a stockholder, is a bet that the underlying stock will go down. In practice, agents engage in a mixture of hedging and speculating, and there is no clear dividing line between the two [14]. Like pari-mutuel markets, prices in financial markets are often excellent information aggregators, yielding very accurate forecasts of future events [5, 18, 19].

A CDA constantly matches orders to buy an asset with orders to sell. If at any time one party is willing to buy one unit of the asset at a bid price of pbid, while another party is willing to sell one unit of the asset at an ask price of pask, and pbid is greater than or equal to pask, then the two parties transact (at some price between pbid and pask). If the highest bid price is less than the lowest ask price, then no transactions occur. In a CDA, the bid and ask prices rapidly change as new information arrives and traders reassess the value of the asset. Since the auctioneer only matches willing bidders, the auctioneer takes on no risk. However, buyers can only buy as many shares as sellers are willing to sell; for any transaction to occur, there must be a counterparty on the other side willing to accept the trade. As a result, when few traders participate in a CDA, it may become illiquid, meaning that not much trading activity occurs. The spread between the highest bid price and the lowest ask price may be very large, or one or both queues may be completely empty, discouraging trading.2 One way to induce liquidity is to provide a market maker who is willing to accept a large number of buy and sell orders at particular prices. We call this mechanism a CDA with market maker (CDAwMM).3 Conceptually, the market maker is just like any other trader, but typically is willing to accept a much larger volume of trades. The market maker may be a person, or may be an automated algorithm. Adding a market maker to the system increases liquidity, but exposes the market maker
to risk. Now, instead of only matching trades, the system actually takes on risk of its own, and depending on what happens in the future, may lose considerable amounts of money.

2.3 Wagering markets

The typical Las Vegas bookmaker or oddsmaker functions much like a market maker in a CDA. In this case, the market institution (the book or house) sets the odds,4 initially according to expert opinion, and later in response to the relative level of betting on the various outcomes. Unlike in a pari-mutuel environment, whenever a wager is placed with a bookmaker, the odds or terms for that bet are fixed at the time of the bet. The bookmaker profits by offering different odds for the two sides of the bet, essentially defining a bid-ask spread. While odds may change in response to changing information, any bets made at previously set odds remain in effect according to the odds at the time of the bet; this is precisely in analogy to a CDAwMM. One difference between a bookmaker and a market maker is that the former usually operates in a take-it-or-leave-it mode: bettors cannot place their own limit orders on a common queue; they can in effect only place market orders at prices defined by the bookmaker. Still, the bookmaker certainly reacts to bettor demand. Like a market maker, the bookmaker exposes itself to significant risk. Sports betting markets have also been shown to provide high-quality aggregate forecasts [4, 9, 23].

2 Thin markets do occur often in practice, and can be seen in a variety of the less popular markets available on http://TradeSports.com, or in some financial options markets, for example.
3 A very clear example of a CDAwMM is the interactive betting market on http://WSEX.com.
4 Or, alternatively, the bookmaker sets the game line in order to provide even-money odds.

2.4 Market scoring rule

Hanson's [11] market scoring rule (MSR) is a new mechanism for hedging and speculating that shares some properties in common with a DPM. Like a DPM, an MSR can be conceptualized as an automated market maker always willing to accept a trade on any event at some price. An MSR requires a patron to subsidize the market. The patron's final loss is variable, and thus technically implies a degree of risk, though the maximum loss is bounded. An MSR maintains a probability distribution over all events. At any time, any trader who believes the probabilities are wrong can change any part of the distribution by accepting a lottery ticket that pays off according to a scoring rule (e.g., the logarithmic scoring rule) [27], as long as that trader also agrees to pay off the most recent person to change the distribution. In the limit of a single trader, the mechanism behaves like a scoring rule, suitable for polling a single agent for its probability distribution. In the limit of many traders, it produces a combined estimate. Since the market essentially always has a complete set of posted prices for all possible outcomes, the mechanism avoids the problem of thin markets or illiquidity. An MSR is not pari-mutuel in nature, as the patron in general injects a variable amount of money into the system. An MSR provides a two-sided automated market maker, while a DPM provides a one-sided automated market maker. In an MSR, the vector of payoffs across outcomes is fixed at the time of the trade, while in a DPM, the vector of payoffs across outcomes depends both on the state of wagering at the time of the trade and the state of wagering at the market's close. While the mechanisms are quite different (and so trader acceptance and incentives may strongly differ), the properties and motivations of DPMs and MSRs are quite similar. Hanson shows how MSRs are especially well suited for allowing bets on a combinatorial number of outcomes. The patron's payment for subsidizing trading on all 2^n possible combinations of n events is no larger than the sum of subsidizing the n event marginals independently. The mechanism was
planned for use in the Policy Analysis Market (PAM), a futures market in Middle East-related outcomes funded by DARPA [20], until a media firestorm killed the project.5 As of this writing, the founders of PAM were considering reopening under private control.6

5 See http://hanson.gmu.edu/policyanalysismarket.html for more information, or http://dpennock.com/pam.html for commentary.
6 http://www.policyanalysismarket.com/

3. A DYNAMIC PARI-MUTUEL MARKET

3.1 High-level description

In contrast to a standard pari-mutuel market, where each dollar always buys an equal share of the payoff, in a DPM each dollar buys a variable share in the payoff depending on the state of wagering at the time of purchase. So a wager on A at a time when most others are wagering on B offers a greater possible profit than a wager on A when most others are also wagering on A. A natural way to communicate the changing payoff of a bet is to say that, at any given time, a certain amount of money will buy a certain number of shares in one outcome or the other. Purchasing a share entitles its owner to an equal stake in the winning pot should the chosen outcome occur. The payoff is variable, because when few people are betting on an outcome, shares will generally be cheaper than at a time when many people are betting on that outcome. There is no pre-determined limit on the number of shares: new shares can be continually generated as trading proceeds.

For simplicity, all analyses in this paper consider the binary outcome case; generalizing to multiple discrete outcomes should be straightforward. Denote the two outcomes A and B. The outcomes are mutually exclusive and exhaustive. Denote the instantaneous price per share of A as p1 and the price per share of B as p2. Denote the payoffs per share as P1 and P2, respectively. These four numbers, p1, p2, P1, and P2, are the key numbers that traders must track and understand. Note that the price is set at the time of the wager; the payoff per
share is finalized only after the event outcome is revealed. At any time, a trader can purchase an infinitesimal quantity of shares of A at price p1 (and similarly for B). However, since the price changes continuously as shares are purchased, the cost of buying n shares is computed as the integral of a price function from 0 to n. The use of continuous functions and integrals can be hidden from traders by aggregating the automated market maker's sell orders into discrete lots of, say, 100 shares each. These ask orders can be automatically entered into the system by the market institution, so that traders interact with what looks like a more familiar CDA; we examine this interface issue in more detail below in Section 4.2.

For our analysis, we introduce the following additional notation. Denote M1 as the total amount of money wagered on A, M2 as the total amount of money wagered on B, T = M1 + M2 as the total amount of money wagered on both sides, N1 as the total number of shares purchased of A, and N2 as the total number of shares purchased of B. There are many ways to formulate the price function. Several natural price functions are outlined below; each is motivated as the unique solution to a particular constraint on price dynamics.

3.2 Advantages and disadvantages

To my knowledge, a DPM is the only known mechanism for hedging and speculating that exhibits all three of the following properties: (1) guaranteed liquidity, (2) no risk for the market institution, and (3) continuous incorporation of information. A standard pari-mutuel fails (3). A CDA fails (1). A CDAwMM, the bookmaker mechanism, and an MSR all fail (2). Even though technically an MSR exposes its patron to risk (i.e., a variable future payoff), the patron's maximum loss is bounded, so the distinction between a DPM and an MSR in terms of these three properties is more technical than practical. DPM traders can cash out of the market early, just like stock market traders, to lock in a
profit or limit a loss, an action that is simply not possible in a standard pari-mutuel. A DPM also has some drawbacks. The payoff for a wager depends both on the price at the time of the trade, and on the final payoff per share at the market's close. This contrasts with the CDA variants, where the payoff vector across possible future outcomes is fixed at the time of the trade. So a trader's strategic optimization problem is complicated by the need to predict the final values of P1 and P2. If P changes according to a random walk, then traders can take the current P as an unbiased estimate of the final P, greatly decreasing the complexity of their optimization. If P does not change according to a random walk, the mechanism still has utility as a mechanism for hedging and speculating, though optimization may be difficult, and determining a measure of the market's aggregate opinion of the probabilities of A and B may be difficult. We discuss the implications of random walk behavior further below in Section 4.1 in the discussion surrounding Assumption 3. A second drawback of a DPM is its one-sided nature. While an automated market maker always stands ready to accept buy orders, there is no corresponding market maker to accept sell orders. Traders must sell to each other using a standard CDA mechanism, for example by posting an ask order at a price at or below the market maker's current ask price. Traders can also always hedge-sell by purchasing shares in the opposite outcome from the market maker, thereby hedging their bet if not fully liquidating it.

3.3 Redistribution rule

In a standard pari-mutuel market, payoffs can be computed in either of two equivalent ways: (1) each winning $1 wager receives a refund of the initial $1 paid, plus an equal share of all losing wagers, or (2) each winning $1 wager receives an equal share of all wagers, winning or losing. Because each dollar always earns an equal share of the payoff, the two formulations are
precisely the same: $1 + Mlose/Mwin = (Mwin + Mlose)/Mwin. In a dynamic pari-mutuel market, because each dollar is not equally weighted, the two formulations are distinct, and lead to significantly different price functions and mechanisms, each with different potentially desirable properties. We consider each case in turn. The next section analyzes case (1), where only losing money is redistributed. Section 5 examines case (2), where all money is redistributed.

4. DPM I: LOSING MONEY REDISTRIBUTED

For the case where the initial payments on winning bets are refunded, and only losing money is redistributed, the respective payoffs per share are simply:
P1 = M2/N1    P2 = M1/N2.
So, if A occurs, shareholders of A receive all of their initial payment back, plus P1 dollars per share owned, while shareholders of B lose all money wagered. Similarly, if B occurs, shareholders of B receive all of their initial payment back, plus P2 dollars per share owned, while shareholders of A lose all money wagered. Without loss of generality, I will analyze the market from the perspective of A, deriving prices and payoffs for A only. The equations for B are symmetric. The trader's per-share expected value for purchasing an infinitesimal quantity ε of shares of A is
E[ε shares]/ε = Pr(A) · E[P1|A] − (1 − Pr(A)) · p1
             = Pr(A) · E[M2/N1 | A] − (1 − Pr(A)) · p1,
where ε is an infinitesimal quantity of shares of A, Pr(A) is the trader's belief in the probability of A, and p1 is the instantaneous price per share of A for an infinitesimal quantity of shares. E[P1|A] is the trader's expectation of the payoff per share of A after the market closes and given that A occurs. This is a subtle point. The value of P1 does not matter if B occurs, since in this case shares of A are worthless, and the current value of P1 does not necessarily matter, as this may change as trading continues. So, in order to determine the
expected value of shares of A, the trader must estimate what he or she expects the payoff per share to be in the end (after the market closes) if A occurs. If E[ε shares]/ε > 0, a risk-neutral trader should purchase shares of A. How many shares? This depends on the price function determining p1. In general, p1 increases as more shares are purchased. The risk-neutral trader should continue purchasing shares until E[ε shares]/ε = 0. (A risk-averse trader will generally stop purchasing shares before driving E[ε shares]/ε all the way to zero.) Assuming risk-neutrality, the trader's optimization problem is to choose a number of shares n ≥ 0 of A to purchase, in order to maximize
E[n shares] = Pr(A) · n · E[P1|A] − (1 − Pr(A)) · ∫0^n p1(x) dx.   (1)
It's easy to see that the same value of n can be solved for by finding the number of shares required to drive E[ε shares]/ε to zero. That is, find n ≥ 0 satisfying
0 = Pr(A) · E[P1|A] − (1 − Pr(A)) · p1(n),
if such an n exists, and otherwise n = 0.

4.1 Market probability

As traders who believe that E[ε shares of A]/ε > 0 purchase shares of A and traders who believe that E[ε shares of B]/ε > 0 purchase shares of B, the prices p1 and p2 change according to a price function, as prescribed below. The current prices in a sense reflect the market's opinion as a whole of the relative probabilities of A and B.
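The stopping rule described above (buy until the marginal expected value of one more infinitesimal share reaches zero) can be computed numerically for any increasing price function p1(n). The following is a minimal sketch, not part of the paper's mechanism: the function and variable names (`optimal_shares`, `pr_a`, `exp_payoff`) are hypothetical, and the exponential price function used in the example is the one derived later as Equation (7).

```python
import math

def optimal_shares(pr_a, exp_payoff, p1, tol=1e-9, n_cap=1e12):
    """Find n >= 0 with Pr(A)*E[P1|A] - (1 - Pr(A))*p1(n) = 0 by bisection,
    assuming p1 is continuous and increasing in n (so the marginal expected
    value is decreasing in n).  Returns 0 when even the first infinitesimal
    share has non-positive expected value."""
    marginal = lambda n: pr_a * exp_payoff - (1.0 - pr_a) * p1(n)
    if marginal(0.0) <= 0.0:
        return 0.0
    lo, hi = 0.0, 1.0
    while marginal(hi) > 0.0:          # grow the bracket until marginal EV < 0
        hi *= 2.0
        if hi > n_cap:
            raise ValueError("no zero crossing found below n_cap")
    while hi - lo > tol:               # standard bisection on the bracket
        mid = 0.5 * (lo + hi)
        if marginal(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative example with price function I (Eq. 7): p1(n) = (M1/N2) * e^(n/N2).
# For this price function the closed form is
# n* = N2 * ln( Pr(A) E[P1|A] / ((1 - Pr(A)) p1(0)) ).
M1, N2 = 100.0, 100.0
n_star = optimal_shares(0.6, 2.0, lambda n: (M1 / N2) * math.exp(n / N2))
```

With Pr(A) = 0.6 and E[P1|A] = 2, the numeric solution matches the closed form N2 · ln 3. The bisection approach is deliberately generic so the same routine works for any of the price functions derived in this paper.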
Assuming an efficient marketplace, the market as a whole considers E[ε shares]/ε = 0, since the mechanism is a zero-sum game. For example, if market participants in aggregate felt that E[ε shares]/ε > 0, then there would be net demand for A, driving up the price of A until E[ε shares]/ε = 0. Define MPr(A) to be the market probability of A, or the probability of A inferred by assuming that E[ε shares]/ε = 0. We can consider MPr(A) to be the aggregate probability of A as judged by the market as a whole. MPr(A) is the solution to
0 = MPr(A) · E[P1|A] − (1 − MPr(A)) · p1.
Solving, we get
MPr(A) = p1/(p1 + E[P1|A]).   (2)
At this point we make a critical assumption in order to greatly simplify the analysis; we assume that
E[P1|A] = P1.   (3)
That is, we assume that the current value for the payoff per share of A is the same as the expected final value of the payoff per share of A given that A occurs. This is certainly true for the last (infinitesimal) wager before the market closes. It's not obvious, however, that the assumption is true well before the market's close. Basically, we are assuming that the value of P1 moves according to an unbiased random walk: the current value of P1 is the best expectation of its future value. I conjecture that there are reasonable market efficiency conditions under which assumption (3) is true, though I have not been able to prove that it arises naturally from rational trading. We examine scenarios below in which assumption (3) seems especially plausible. Nonetheless, the assumption affects our analysis only. Regardless of whether (3) is true, each price function derived below implies a well-defined zero-sum game in which traders can play. If traders can assume that (3) is true, then their optimization problem (1) is greatly simplified; however, optimizing (1) does not depend on the assumption, and traders can still optimize by strategically projecting the final expected payoff in whatever complicated way
they desire. So, the utility of DPM for hedging and speculating does not necessarily hinge on the truth of assumption (3). On the other hand, the ability to easily infer an aggregate market consensus probability from market prices does depend on (3).

4.2 Price functions

A variety of price functions seem reasonable, each exhibiting various properties and implying differing market probabilities.

4.2.1 Price function I: Price of A equals payoff of B

One natural price function to consider is to set the price per share of A equal to the payoff per share of B, and the price per share of B equal to the payoff per share of A. That is,
p1 = P2    p2 = P1.   (4)
Enforcing this relationship reduces the dimensionality of the system from four to two, simplifying the interface: traders need only track two numbers instead of four. The relationship makes sense, since new information supporting A should encourage purchasing of shares of A, driving up both the price of A and the payoff of B, and driving down the price of B and the payoff of A. In this setting, assumption (3) seems especially reasonable, since if an efficient market hypothesis leads prices to follow a random walk, then payoffs must also follow a random walk. The constraints (4) lead to the following derivation of the market probability:
MPr(A) · P1 = MPr(B) · p1
MPr(A) · P1 = MPr(B) · P2
MPr(A)/MPr(B) = P2/P1
MPr(A)/MPr(B) = (M1/N2)/(M2/N1)
MPr(A)/MPr(B) = M1N1/(M2N2)
MPr(A) = M1N1/(M1N1 + M2N2)   (5)
The constraints (4) specify the instantaneous relationship between payoff and price. From this, we can derive how prices change when (non-infinitesimal) shares are purchased. Let n be the number of shares purchased and let m be the amount of money spent purchasing the n shares. Note that p1 = dm/dn, the instantaneous price per share, and m = ∫0^n p1(x) dx. Substituting into equation (4), we get:
p1 = P2
dm/dn = (M1 + m)/N2
dm/(M1 + m) = dn/N2
∫ dm/(M1 + m) = ∫ dn/N2
ln(M1 + m) = n/N2 + C
m = M1 [e^(n/N2) − 1]   (6)
Equation 6 gives the
cost of purchasing n shares. The instantaneous price per share as a function of n is
p1(n) = dm/dn = (M1/N2) e^(n/N2).   (7)
Note that p1(0) = M1/N2 = P2, as required. The derivation of the price function p2(n) for B is analogous, and the results are symmetric. The notion of buying infinitesimal shares, or integrating costs over a continuous function, is probably foreign to most traders. A more standard interface can be implemented by discretizing the costs into round lots of shares, for example lots of 100 shares. Then ask orders of 100 shares each at the appropriate price can be automatically placed by the market institution. For example, the market institution can place an ask order for 100 shares at price m(100)/100, another ask order for 100 shares at price (m(200) − m(100))/100, a third ask for 100 shares at (m(300) − m(200))/100, etc. In this way, the market looks more familiar to traders, like a typical CDA with a number of ask orders at various prices automatically available. A trader buying fewer than 100 shares would pay a bit more than if the true cost were computed using (6), but the discretized interface would probably be more intuitive and transparent to the majority of traders. The above equations assume that all money that comes in is eventually returned or redistributed. In other words, the mechanism is a zero-sum game, and the market institution takes no portion of the money. This could be generalized so that the market institution always takes a certain amount, a certain percent, a certain amount per transaction, or a certain percent per transaction, before money is returned or redistributed. Finally, note that the above price function is undefined when the amount bet or the number of shares is zero. So the system must begin with some positive amount on both sides, and some positive number of shares outstanding on both sides. These initial amounts can be arbitrarily small in principle, but the size of the initial
subsidy may affect the incentives of traders to participate. Also, the smaller the initial amounts, the more each new dollar affects the prices. The initialization amounts could be funded as a subsidy from the market institution or a patron, which I'll call a seed wager, or from a portion of the fees charged, which I'll call an ante wager.

4.2.2 Price function II: Price of A proportional to money on A

A second price function can be derived by requiring the ratio of prices to be equal to the ratio of money wagered. That is,
p1/p2 = M1/M2.   (8)
In other words, the price of A is proportional to the amount of money wagered on A, and similarly for B. This seems like a particularly natural way to set the price, since the more money that is wagered on one side, the cheaper becomes a share on the other side, in exactly the same proportion. Using Equation 8, along with (2) and (3), we can derive the implied market probability:
M1/M2 = p1/p2 = [MPr(A)/MPr(B) · M2/N1] / [MPr(B)/MPr(A) · M1/N2]
       = (MPr(A))^2/(MPr(B))^2 · M2N2/(M1N1)
(MPr(A))^2/(MPr(B))^2 = (M1)^2 N1/((M2)^2 N2)
MPr(A)/MPr(B) = M1 √N1/(M2 √N2)
MPr(A) = M1 √N1/(M1 √N1 + M2 √N2)   (9)
We can solve for the instantaneous price as follows:
p1 = MPr(A)/MPr(B) · P1 = M1 √N1/(M2 √N2) · M2/N1 = M1/√(N1N2)   (10)
Working from the above instantaneous price, we can derive the implied cost function m as a function of the number n of shares purchased as follows:
dm/dn = (M1 + m)/(√(N1 + n) · √N2)
∫ dm/(M1 + m) = ∫ dn/(√(N1 + n) · √N2)
ln(M1 + m) = (2/N2)·[(N1 + n)N2]^(1/2) + C
m = M1 [e^(2√((N1+n)/N2) − 2√(N1/N2)) − 1]   (11)
From this we get the price function:
p1(n) = dm/dn = M1/√((N1 + n)N2) · e^(2√((N1+n)/N2) − 2√(N1/N2))   (12)
Note that, as required, p1(0) = M1/√(N1N2), and p1(0)/p2(0) = M1/M2. If one uses the above price function, then the market dynamics will be such that the ratio of the (instantaneous) prices of A and B always equals the
ratio of the amounts wagered on A and B, which seems fairly natural. Note that, as before, the mechanism can be modified to collect transaction fees of some kind. Also note that seed or ante wagers are required to initialize the system.

5. DPM II: ALL MONEY REDISTRIBUTED

Above we examined the policy of refunding winning wagers and redistributing only losing wagers. In this section we consider the second policy mentioned in Section 3.3: all money from all wagers is collected and redistributed to winning wagers. For the case where all money is redistributed, the respective payoffs per share are:
P1 = (M1 + M2)/N1 = T/N1    P2 = (M1 + M2)/N2 = T/N2,
where T = M1 + M2 is the total amount of money wagered on both sides. So, if A occurs, shareholders of A lose their initial price paid, but receive P1 dollars per share owned; shareholders of B simply lose all money wagered. Similarly, if B occurs, shareholders of B lose their initial price paid, but receive P2 dollars per share owned; shareholders of A lose all money wagered. In this case, the trader's per-share expected value for purchasing an infinitesimal quantity ε of shares of A is
E[ε shares]/ε = Pr(A) · E[P1|A] − p1.   (13)
A risk-neutral trader optimizes by choosing a number of shares n ≥ 0 of A to purchase, in order to maximize
E[n shares] = Pr(A) · n · E[P1|A] − ∫0^n p1(x) dx = Pr(A) · n · E[P1|A] − m.   (14)
The same value of n can be solved for by finding the number of shares required to drive E[ε shares]/ε to zero. That is, find n ≥ 0 satisfying
0 = Pr(A) · E[P1|A] − p1(n),
if such an n exists, and otherwise n = 0.

5.1 Market probability

In this case MPr(A), the aggregate probability of A as judged by the market as a whole, is the solution to
0 = MPr(A) · E[P1|A] − p1.
Solving, we get
MPr(A) = p1/E[P1|A].   (15)
As before, we make the simplifying assumption (3) that the expected final payoff per share equals the current payoff per share. The
assumption is critical for our analysis, but may not be required for a practical implementation.

5.2 Price functions

For the case where all money is distributed, the constraints (4) that keep the price of A equal to the payoff of B, and vice versa, do not lead to the derivation of a coherent price function. A reasonable price function can be derived from the constraint (8) employed in Section 4.2.2, where we require the ratio of prices to be equal to the ratio of money wagered. That is, p1/p2 = M1/M2. In other words, the price of A is proportional to the amount of money wagered on A, and similarly for B. Using Equations 3, 8, and 15 we can derive the implied market probability:
M1/M2 = p1/p2 = MPr(A)/MPr(B) · (T/N1) · (N2/T) = MPr(A)/MPr(B) · N2/N1
MPr(A)/MPr(B) = M1N1/(M2N2)
MPr(A) = M1N1/(M1N1 + M2N2)   (16)
Interestingly, this is the same market probability derived in Section 4.2.1 for the case of losing-money redistribution with the constraints that the price of A equal the payoff of B and vice versa. The instantaneous price per share for an infinitesimal quantity of shares is:
p1 = ((M1)^2 + M1M2)/(M1N1 + M2N2) = (M1 + M2)/(N1 + (M2/M1)·N2).
Working from the above instantaneous price, we can derive the number of shares n that can be purchased for m dollars, as follows:
dm/dn = (M1 + M2 + m)/(N1 + n + (M2/(M1 + m))·N2)
dn/dm = (N1 + n + (M2/(M1 + m))·N2)/(M1 + M2 + m)   (17)
...
n = m(N1 − N2)/T + (N2(T + m)/M2) · ln[T(M1 + m)/(M1(T + m))].
Note that we solved for n(m) rather than m(n). I could not find a closed-form solution for m(n), as was derived for the two other cases above. Still, n(m) can be used to determine how many shares can be purchased for m dollars, and the inverse function can be approximated to any degree numerically. From n(m) we can also compute the price function:
p1(m) = dm/dn = (M1 + m)·M2·T/denom,   (18)
where
denom = (M1 + m)M2N1 + (M2 − m)M2N2 + T(M1 + m)N2 · ln[T(M1 + m)/(M1(T + m))].
Note that, as required, p1(0)/p2(0) =
M1/M2. If one uses the above price function, then the market dynamics will be such that the ratio of the (instantaneous) prices of A and B always equals the ratio of the amounts wagered on A and B. This price function has another desirable property: it acts such that the expected value of wagering $1 on A and simultaneously wagering $1 on B equals zero, assuming (3). That is, E[$1 of A + $1 of B] = 0. The derivation is omitted.

5.3 Comparing DPM I and II

The main advantage of refunding winning wagers (DPM I) is that every bet on the winning outcome is guaranteed to at least break even. The main disadvantage of refunding winning wagers is that shares are not homogeneous: each share of A, for example, is actually composed of two distinct parts: (1) the refund, or a lottery ticket that pays $p if A occurs, where p is the price paid per share, and (2) one share of the final payoff ($P1) if A occurs. This complicates the implementation of an aftermarket to cash out of the market early, which we will examine below in Section 7. When all money is redistributed (DPM II), shares are homogeneous: each share entitles its owner to an equal slice of the final payoff. Because shares are homogeneous, the implementation of an aftermarket is straightforward, as we shall see in Section 7. On the other hand, because initial prices paid are not refunded for winning bets, there is a chance that, if prices swing wildly enough, a wager on the correct outcome might actually lose money. Traders must be aware that if they buy in at an excessively high price that later tumbles, allowing many others to get in at a much lower price, they may lose money in the end regardless of the outcome. From informal experiments, I don't believe this eventuality would be common, but nonetheless it requires care in communicating to traders the possible risks. One potential fix would be for the market maker to keep track of when the price is going too low, endangering an investor on the correct
outcome. At this point, the market maker could artificially stop lowering the price. Sell orders in the aftermarket might still come in below the market maker's price, but in this way the system could ensure that every wager on the correct outcome at least breaks even.

6. OTHER VARIATIONS

A simple ascending price function would set p1 = αM1 and p2 = αM2, where α > 0. In this case, prices would only go up. For the case of all money being redistributed, this would eliminate the possibility of losing money on a wager on the correct outcome. Even though the market maker's price only rises, the going price may fall well below the market maker's price, as ask orders are placed in the aftermarket. I have derived price functions for several other cases, using the same methodology above. Each price function may have its own desirable properties, but it's not clear which is best, or even that a single best method exists. Further analyses and, more importantly, empirical investigations are required to answer these questions.

7. AFTERMARKETS

A key advantage of DPM over a standard pari-mutuel market is the ability to cash out of the market before it closes, in order to take a profit or limit a loss. This is accomplished by allowing traders to place ask orders on the same queue as the market maker. So traders can sell the shares that they purchased at or below the price set by the market maker. Or traders can place a limit sell order at any price. Buyers will purchase any existing shares for sale at the lower prices first, before purchasing new shares from the market maker.

7.1 Aftermarket for DPM II

For the second main case explored above, where all money is redistributed, allowing an aftermarket is simple. In fact, aftermarket may be a poor descriptor: buying and selling are both fully integrated into the same mechanism. Every share is worth precisely the same amount, so traders can simply place ask orders on the same queue as the
market maker in order to sell their shares. New buyers will accept the lowest ask price, whether it comes from the market maker or another trader. In this way, traders can cash out early and walk away with their current profit or loss, assuming they can find a willing buyer.

7.2 Aftermarket for DPM I

When winning wagers are refunded and only losing wagers are redistributed, each share is potentially worth a different amount, depending on how much was paid for it, so it is not as simple a matter to set up an aftermarket. However, an aftermarket is still possible. In fact, much of the complexity can be hidden from traders, so it looks nearly as simple as placing a sell order on the queue. In this case shares are not homogeneous: each share of A is actually composed of two distinct parts: (1) the refund of p · 1A dollars, and (2) the payoff of P1 · 1A dollars, where p is the per-share price paid and 1A is the indicator function equaling 1 if A occurs, and 0 otherwise. One can imagine running two separate aftermarkets where people can sell these two respective components. However, it is possible to automate the two aftermarkets, by automatically bundling them together in the correct ratio and selling them in the central DPM. In this way, traders can cash out by placing sell orders on the same queue as the DPM market maker, effectively hiding the complexity of explicitly having two separate aftermarkets. The bundling mechanism works as follows. Suppose the current price for 1 share of A is p1. A buyer agrees to purchase the share at p1. The buyer pays p1 dollars and receives p1 · 1A + P1 · 1A dollars. If there is enough inventory in the aftermarkets, the buyer's share is constructed by bundling together p1 · 1A from the first aftermarket, and P1 · 1A from the second aftermarket. The seller in the first aftermarket receives p1·MPr(A) dollars, and the seller in the second aftermarket receives p1·MPr(B) dollars.

7.3 Pseudo
aftermarket for DPM I

There is an alternative pseudo aftermarket possible for the case of DPM I that does not require bundling. Consider a share of A purchased for $5. The share is composed of $5 · 1A and $P1 · 1A. Now suppose the current price has moved from $5 to $10 per share and the trader wants to cash out at a profit. The trader can sell 1/2 share at the market price (1/2 share for $5), receiving all of the initial $5 investment back, and retaining 1/2 share of A. The 1/2 share is worth either some positive amount, or nothing, depending on the outcome and the final payoff. So the trader is left with shares worth a positive expected value and all of his or her initial investment. The trader has essentially cashed out and locked in his or her gains. Now suppose instead that the price moves downward, from $5 to $2 per share. The trader decides to limit his or her loss by selling the share for $2. The buyer gets the 1 share plus $2 · 1A (the buyer's price refunded). The trader (seller) gets the $2 plus what remains of the original price refunded, or $3 · 1A. The trader's loss is now limited to at most $3 instead of $5. If A occurs, the trader breaks even; if B occurs, the trader loses $3. Also note that, in either DPM formulation, traders can always hedge-sell by buying the opposite outcome without the need for any type of aftermarket.

8. CONCLUSIONS

I have presented a new market mechanism for wagering on, or hedging against, a future uncertain event, called a dynamic pari-mutuel market (DPM). The mechanism combines the infinite liquidity and risk-free nature of a pari-mutuel market with the dynamic nature of a CDA, making it suitable for continuous information aggregation. To my knowledge, all existing mechanisms, including the standard pari-mutuel market, the CDA, the CDAwMM, the bookie mechanism, and the MSR, exhibit at most two of the three properties. An MSR is the closest to a DPM in terms of these properties, if not
in terms of mechanics. Given some natural constraints on price dynamics, I have derived in closed form the implied price functions, which encode how prices change continuously as shares are purchased. The interface for traders looks much like the familiar CDA, with the system acting as an automated market maker willing to accept an infinite number of buy orders at some price. I have explored two main variations of a DPM: one where only losing money is redistributed, and one where all money is redistributed. Each has its own pros and cons, and each supports several reasonable price functions. I have described the workings of an aftermarket, so that traders can cash out of the market early, as in a CDA, to lock in their gains or limit their losses, an operation that is not possible in a standard pari-mutuel setting.

9. FUTURE WORK

This paper reports the results of an initial investigation of the concept of a dynamic pari-mutuel market. Many avenues for future work present themselves, including the following:
• Random walk conjecture. The most important question mark in my mind is whether the random walk assumption (3) can be proven under reasonable market efficiency conditions and, if not, how severely it affects the practicality of the system.
• Incentive analysis. Formally, what are the incentives for traders to act on new information, and when? How does the level of initial subsidy affect trader incentives?
• Laboratory experiments and field tests. This paper concentrated on the mathematics and algorithmics of the mechanism. However, the true test of the mechanism's ability to serve as an instrument for hedging, wagering, or information aggregation is to test it with real traders in a realistic environment. In reality, how do people behave when faced with a DPM mechanism?
• DPM call market. I have derived the price functions to react to wagers on one outcome at a time. The mechanism could be generalized to accept orders on
both sides, then update the prices holistically, rather than by assuming a particular sequence of wagers.
• Real-valued variables. I believe the mechanisms in this paper can easily be generalized to multiple discrete outcomes, and to multiple real-valued outcomes that always sum to some constant value (e.g., multiple percentage values that must sum to 100). However, the generalization to real-valued variables with arbitrary range is less clear, and open for future development.
• Compound/combinatorial betting. I believe that DPM may be well suited for compound [8, 11] or combinatorial [2] betting, for many of the same reasons that market scoring rules [11] are well suited for the task. DPM may also have some computational advantages over MSR, though this remains to be seen.

Acknowledgments

I thank Dan Fain, Gary Flake, Lance Fortnow, and Robin Hanson.

10. REFERENCES

[1] Mukhtar M. Ali. Probability and utility estimates for racetrack bettors. Journal of Political Economy, 85(4):803-816, 1977.
[2] Peter Bossaerts, Leslie Fine, and John Ledyard. Inducing liquidity in thin financial markets through combined-value trading mechanisms. European Economic Review, 46:1671-1695, 2002.
[3] Kay-Yut Chen, Leslie R. Fine, and Bernardo A. Huberman. Forecasting uncertain events with small groups. In Third ACM Conference on Electronic Commerce (EC'01), pages 58-64, 2001.
[4] Sandip Debnath, David M. Pennock, C. Lee Giles, and Steve Lawrence. Information incorporation in online in-game sports betting markets. In Fourth ACM Conference on Electronic Commerce (EC'03), 2003.
[5] Robert Forsythe and Russell Lundholm. Information aggregation in an experimental market. Econometrica, 58(2):309-347, 1990.
[6] Robert Forsythe, Forrest Nelson, George R. Neumann, and Jack Wright. Anatomy of an experimental political stock market. American Economic Review, 82(5):1142-1161, 1992.
[7] Robert Forsythe, Thomas A. Rietz, and Thomas W.
Ross. Wishes, expectations, and actions: A survey on price formation in election stock markets. Journal of Economic Behavior and Organization, 39:83-110, 1999.
[8] Lance Fortnow, Joe Kilian, David M. Pennock, and Michael P. Wellman. Betting boolean-style: A framework for trading in securities based on logical formulas. In Proceedings of the Fourth Annual ACM Conference on Electronic Commerce, pages 144-155, 2003.
[9] John M. Gandar, William H. Dare, Craig R. Brown, and Richard A. Zuber. Informed traders and price variations in the betting market for professional basketball games. Journal of Finance, LIII(1):385-401, 1998.
[10] Robin Hanson. Decision markets. IEEE Intelligent Systems, 14(3):16-19, 1999.
[11] Robin Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1), 2002.
[12] Robin D. Hanson. Could gambling save science? Encouraging an honest consensus. Social Epistemology, 9(1):3-33, 1995.
[13] Jens Carsten Jackwerth and Mark Rubinstein. Recovering probability distributions from options prices. Journal of Finance, 51(5):1611-1631, 1996.
[14] Joseph B. Kadane and Robert L. Winkler. Separating probability elicitation from utilities. Journal of the American Statistical Association, 83(402):357-363, 1988.
[15] David M. Pennock, Steve Lawrence, C. Lee Giles, and Finn Årup Nielsen. The real power of artificial markets. Science, 291:987-988, February 9, 2001.
[16] David M. Pennock, Steve Lawrence, Finn Årup Nielsen, and C. Lee Giles. Extracting collective probabilistic forecasts from web games. In Seventh International Conference on Knowledge Discovery and Data Mining, pages 174-183, 2001.
[17] C. R. Plott, J. Wit, and W. C. Yang. Parimutuel betting markets as information aggregation devices: Experimental results. Technical Report Social Science Working Paper 986, California Institute of Technology, April 1997.
[18] Charles R.
Plott.\nMarkets as information gathering tools.\nSouthern Economic Journal, 67(1):1-15, 2000.\n[19] Charles R. Plott and Shyam Sunder.\nRational expectations and the aggregation of diverse information in laboratory security markets.\nEconometrica, 56(5):1085-1118, 1988.\n[20] Charles Polk, Robin Hanson, John Ledyard, and Takashi Ishikida.\nPolicy analysis market: An electronic commerce application of a combinatorial information market.\nIn Proceedings of the Fourth Annual ACM Conference on Electronic Commerce, pages 272-273, 2003.\n[21] R. Roll.\nOrange juice and weather.\nAmerican Economic Review, 74(5):861-880, 1984.\n[22] Richard N. Rosett.\nGambling and rationality.\nJournal of Political Economy, 73(6):595-607, 1965.\n[23] Carsten Schmidt and Axel Werwatz.\nHow accurate do markets predict the outcome of an event?\nThe Euro 2000 soccer championships experiment.\nTechnical Report 09-2002, Max Planck Institute for Research into Economic Systems, 2002.\n[24] Wayne W. Snyder.\nHorse racing: Testing the efficient markets model.\nJournal of Finance, 33(4):1109-1118, 1978.\n[25] Richard H. Thaler and William T. Ziemba.\nAnomalies: Parimutuel betting markets: Racetracks and lotteries.\nJournal of Economic Perspectives, 2(2):161-174, 1988.\n[26] Martin Weitzman.\nUtility analysis and group behavior: An empirical study.\nJournal of Political Economy, 73(1):18-26, 1965.\n[27] Robert L. Winkler and Allan H. Murphy.\nGood probability assessors.\nJ. 
Applied Meteorology, 7:751-758, 1968.\n179","lvl-3":"A Dynamic Pari-Mutuel Market for Hedging, Wagering, and Information Aggregation\nABSTRACT\nI develop a new mechanism for risk allocation and information speculation called a dynamic pari-mutuel market (DPM).\nA DPM acts as hybrid between a pari-mutuel market and a continuous double auction (CDA), inheriting some of the advantages of both.\nLike a pari-mutuel market, a DPM offers infinite buy-in liquidity and zero risk for the market institution; like a CDA, a DPM can continuously react to new information, dynamically incorporate information into prices, and allow traders to lock in gains or limit losses by selling prior to event resolution.\nThe trader interface can be designed to mimic the familiar double auction format with bid-ask queues, though with an addition variable called the payoff per share.\nThe DPM price function can be viewed as an automated market maker always offering to sell at some price, and moving the price appropriately according to demand.\nSince the mechanism is pari-mutuel (i.e., redistributive), it is guaranteed to pay out exactly the amount of money taken in.\nI explore a number of variations on the basic DPM, analyzing the properties of each, and solving in closed form for their respective price functions.\n1.\nINTRODUCTION\nA wide variety of financial and wagering mechanisms have been developed to support hedging (i.e., insuring) against exposure to uncertain events and\/or speculative trading on uncertain events.\nThe dominant mechanism used in financial circles is the continuous double auction (CDA), or in some cases the CDA with market maker (CDAwMM).\nThe primary mechanism used for sports wagering is a bookie or bookmaker, who essentially acts exactly as a market maker.\nHorse racing and jai alai wagering traditionally employ the pari-mutuel mechanism.\nThough there is no formal or logical separation between financial trading and wagering, the two endeavors are socially considered 
distinct.\nRecently, there has been a move to employ CDAs or CDAwMMs for all types of wagering, including on sports, horse racing, political events, world news, and many other uncertain events, and a simultaneous and opposite trend to use bookie systems for betting on financial markets.\nThese trends highlight the interchangeable nature of the mechanisms and further blur the line between investing and betting.\nSome companies at the forefront of these movements are growing exponentially, with some industry observers declaring the onset of a revolution in the wagering business .1 Each mechanism has pros and cons for the market institution and the participating traders.\nA CDA only matches willing traders, and so poses no risk whatsoever for the market institution.\nBut a CDA can suffer from illiquidity in the form huge bid-ask spreads or even empty bid-ask queues if trading is light and thus markets are thin.\nA successful CDA must overcome a chicken-and-egg problem: traders are attracted to liquid markets, but liquid markets require a large number of traders.\nA CDAwMM and the similar bookie mechanism have built-in liquidity, but at a cost: the market maker itself, usually affiliated with the market institution, is exposed to significant risk of large monetary losses.\nBoth the CDA and CDAwMM offer incentives for traders to leverage information continuously as soon as that information becomes available.\nAs a result, prices are known to capture the current state of information exceptionally well.\nPari-mutuel markets effectively have infinite liquidity: anyone can place a bet on any outcome at any time, without the need for a matching offer from another bettor or a market maker.\nPari-mutuel markets also involve no risk for the market institution, since they only redistribute money from losing wagers to winning wagers.\nHowever, pari-mutuel mar\nkets are not suitable for situations where information arrives over time, since there is a strong disincentive for 
placing bets until either (1) all information is revealed, or (2) the market is about to close.\nFor this reason, pari-mutuel \"prices\" prior to the market's close cannot be considered a reflection of current information.\nPari-mutuel market participants cannot \"buy low and sell high\": they cannot cash out gains (or limit losses) before the event outcome is revealed.\nBecause the process whereby information arrives continuously over time is the rule rather than the exception, the applicability of the standard pari-mutuel mechanism is questionable in a large number of settings.\nIn this paper, I develop a new mechanism suitable for hedging, speculating, and wagering, called a dynamic parimutuel market (DPM).\nA DPM can be thought of as a hybrid between a pari-mutuel market and a CDA.\nA DPM is indeed pari-mutuel in nature, meaning that it acts only to redistribute money from some traders to others, and so exposes the market institution to no volatility (no risk).\nA constant, pre-determined subsidy is required to start the market.\nThe subsidy can in principle be arbitrarily small and might conceivably come from traders (via antes or transaction fees) rather than the market institution, though a nontrivial outside subsidy may actually encourage trading and information aggregation.\nA DPM has the infinite liquidity of a pari-mutuel market: traders can always purchase shares in any outcome at any time, at some price automatically set by the market institution.\nA DPM is also able to react to and incorporate information arriving over time, like a CDA.\nThe market institution changes the price for particular outcomes based on the current state of wagering.\nIf a particular outcome receives a relatively large number of wagers, its price increases; if an outcome receives relatively few wagers, its price decreases.\nPrices are computed automatically using a price function, which can differ depending on what properties are desired.\nThe price function determines the 
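The cost-as-integral rule can be checked numerically. The linear price function below is an illustrative assumption of my own, not one of the DPM price functions the paper derives in its later sections:

```python
# Hypothetical increasing price function: the per-share price rises with
# the quantity q already purchased. Illustrative only; the paper's actual
# DPM price functions are derived later.
def price(q, p0=0.40, k=0.001):
    return p0 + k * q

def cost(n, steps=100_000):
    """Total cost of n shares: the integral of price from 0 to n,
    approximated here by a midpoint Riemann sum."""
    dq = n / steps
    return sum(price((i + 0.5) * dq) * dq for i in range(steps))

# For this linear price the closed form is p0*n + k*n**2/2,
# so 100 shares cost 0.40*100 + 0.001*100**2/2 = 45.0.
print(round(cost(100), 6))
```

Because the price moves as shares are bought, a buyer pays the average price along the path from 0 to n shares, not n times the final quoted price.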
The complexity of the price function can be hidden from traders by communicating only the ask prices for various lots of shares (e.g., lots of 100 shares), as is common practice in CDAs and CDAwMMs. DPM prices do reflect current information, and traders can cash out in an aftermarket to lock in gains or limit losses before the event outcome is revealed. While there is always a market maker willing to accept buy orders, there is not a market maker accepting sell orders, and thus no guaranteed liquidity for selling: instead, selling is accomplished via a standard CDA mechanism. Traders can always "hedge-sell" by purchasing the opposite outcome from the one they already own.

2. BACKGROUND AND RELATED WORK

2.1 Pari-mutuel markets

Pari-mutuel markets are common at horse races [1, 22, 24, 25, 26], dog races, and jai alai games. In a pari-mutuel market people place wagers on which of two or more mutually exclusive and exhaustive outcomes will occur at some time in the future. After the true outcome becomes known, all of the money that is lost by those who bet on the incorrect outcome is redistributed to those who bet on the correct outcome, in direct proportion to the amount they wagered. More formally, if there are k mutually exclusive and exhaustive outcomes (e.g., k horses, exactly one of which will win), and M1, M2, ..., Mk dollars are bet on each outcome, and outcome i occurs, then everyone who bet on an outcome j ≠ i loses their wager, while everyone who bet on outcome i receives (Σ_{j=1}^{k} Mj)/Mi dollars for every dollar they wagered. That is, every dollar wagered on i receives an equal share of all money wagered. An equivalent way to think about the redistribution rule is that every dollar wagered on i is refunded, then receives an equal share of all remaining money bet on the losing outcomes, or Σ_{j≠i} Mj/Mi dollars. In practice, the market institution (e.g., the racetrack) first takes a certain percent of the total amount wagered, usually about 20% in the United States, then redistributes whatever money remains to the winners in proportion to their amount bet.

Consider a simple example with two outcomes, A and B. The outcomes are mutually exclusive and exhaustive, meaning that Pr(A ∧ B) = 0 and Pr(A) + Pr(B) = 1. Suppose $800 is bet on A and $200 on B. Now suppose that A occurs (e.g., horse A wins the race). People who wagered on B lose their money, or $200 in total. People who wagered on A win and each receives a proportional share of the total $1000 wagered (ignoring fees). Specifically, each $1 wager on A entitles its owner to a 1/800 share of the $1000, or $1.25.

Every dollar bet in a pari-mutuel market has an equal payoff, regardless of when the wager was placed or how much money was invested in the various outcomes at the time the wager was placed. The only state that matters is the final state: the final amounts wagered on all the outcomes when the market closes, and the identity of the correct outcome. As a result, there is a disincentive to place a wager early if there is any chance that new information might become available. Moreover, there are no guarantees about the payoff rate of a particular bet, except that it will be nonnegative if the correct outcome is chosen. Payoff rates can fluctuate arbitrarily until the market closes. So a second reason not to bet early is to wait to get a better sense of the final payout rates. This is in contrast to CDAs and CDAwMMs, like the stock market, where incentives exist to invest as soon as new information is revealed. Pari-mutuel bettors may be allowed to switch their chosen outcome, or even cancel their bet, prior to the market's close. However, they cannot cash out of the market early, to either lock in gains or limit losses, if new information favors one outcome over another, as is possible in a CDA or a CDAwMM.
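The redistribution rule and the $800/$200 example above can be written out directly; the function and its name below are my own sketch, not code from the paper:

```python
def payoff_per_dollar(wagers, winner, take=0.0):
    """Pari-mutuel redistribution: each dollar wagered on the winning
    outcome receives an equal share of the whole (post-take) pool.

    wagers: dict mapping each outcome to the total dollars bet on it
    winner: the outcome that actually occurred
    take:   fraction kept by the market institution (e.g. 0.20)
    """
    pool = sum(wagers.values()) * (1.0 - take)
    return pool / wagers[winner]

# The example from the text: $800 on A, $200 on B, A occurs, no fees.
print(payoff_per_dollar({"A": 800.0, "B": 200.0}, "A"))  # 1.25
```

Note that the payoff depends only on the final totals, which is exactly why betting early confers no advantage.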
If bettors can cancel or change their bets, then an aftermarket to sell existing wagers is not sensible: every dollar wagered is worth exactly $1 up until the market's close--no one would buy at greater than $1 and no one would sell at less than $1. Pari-mutuel bettors must wait until the outcome is revealed to realize any profit or loss.

Unlike a CDA, in a pari-mutuel market, anyone can place a wager of any amount at any time--there is in a sense infinite liquidity for buying. A CDAwMM also has built-in liquidity, but at the cost of significant risk for the market maker. In a pari-mutuel market, since money is only redistributed among bettors, the market institution itself has no risk. The main drawback of a pari-mutuel market is that it is useful only for capturing the value of an uncertain asset at some instant in time. It is ill-suited for situations where information arrives over time, continuously updating the estimated value of the asset--situations common in almost all trading and wagering scenarios. There is no notion of "buying low and selling high", as occurs in a CDA, where buying when few others are buying (and the price is low) is rewarded more than buying when many others are buying (and the price is high). Perhaps for this reason, in most dynamic environments, financial mechanisms like the CDA that can react in real time to changing information are more typically employed to facilitate speculating and hedging.

Since a pari-mutuel market can estimate the value of an asset at a single instant in time, a repeated pari-mutuel market, where distinct pari-mutuel markets are run at consecutive intervals, could in principle capture changing information dynamics. But running multiple consecutive markets would likely thin out trading in each individual market. Also, in each individual pari-mutuel market, the incentives would still be to wait to bet until just before the ending time of that particular market. This last problem might be mitigated by instituting a random stopping rule for each individual pari-mutuel market. In laboratory experiments, pari-mutuel markets have shown a remarkable ability to aggregate and disseminate information dispersed among traders, at least for a single snapshot in time [17]. A similar ability has been recognized at real racetracks [1, 22, 24, 25, 26].

2.2 Financial markets

In the financial world, wagering on the outcomes of uncertain future propositions is also common. The typical market mechanism used is the continuous double auction (CDA). The term securities market in economics and finance generically encompasses a number of markets where speculating on uncertain events is possible. Examples include stock markets like NASDAQ, options markets like the CBOE [13], futures markets like the CME [21], other derivatives markets, insurance markets, political stock markets [6, 7], idea futures markets [12], decision markets [10], and even market games [3, 15, 16]. Securities markets generally have an economic and social value beyond facilitating speculation or wagering: they allow traders to hedge risk, or to insure against undesirable outcomes. So if a particular outcome has disutility for a trader, he or she can mitigate the risk by wagering for the outcome, to arrange for compensation in case the outcome occurs. In this sense, buying automobile insurance is effectively a bet that an accident or other covered event will occur. Similarly, buying a put option, which is useful as a hedge for a stockholder, is a bet that the underlying stock will go down. In practice, agents engage in a mixture of hedging and speculating, and there is no clear dividing line between the two [14]. Like pari-mutuel markets, prices in financial markets are often excellent information aggregators, yielding very accurate forecasts of future events [5, 18, 19].

A CDA constantly matches orders to buy an asset with orders to sell. If at any time one party is willing to buy one unit of the asset at a bid price of pbid, while another party is willing to sell one unit of the asset at an ask price of pask, and pbid is greater than or equal to pask, then the two parties transact (at some price between pbid and pask).
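This matching rule can be sketched with a deliberately simplified order book of my own (single-unit orders only; real CDAs also track quantities, time priority, and order cancellation):

```python
import heapq

class SimpleCDA:
    """Single-unit continuous double auction: transact whenever the
    best bid price is at least the best ask price."""
    def __init__(self):
        self._bids = []  # max-heap of bid prices (stored negated)
        self._asks = []  # min-heap of ask prices

    def submit(self, side, price):
        """Add a one-unit limit order, then return any resulting trades."""
        if side == "buy":
            heapq.heappush(self._bids, -price)
        else:
            heapq.heappush(self._asks, price)
        return self._match()

    def _match(self):
        trades = []
        while self._bids and self._asks and -self._bids[0] >= self._asks[0]:
            bid = -heapq.heappop(self._bids)
            ask = heapq.heappop(self._asks)
            trades.append((bid + ask) / 2)  # any price in [ask, bid] is valid
        return trades

cda = SimpleCDA()
cda.submit("buy", 0.60)
cda.submit("sell", 0.70)         # best bid 0.60 < best ask 0.70: no trade
print(cda.submit("sell", 0.55))  # crosses the bid: one trade near 0.575
```

The midpoint price here is one arbitrary choice; as the text notes, any price between pask and pbid is acceptable to both parties.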
If the highest bid price is less than the lowest ask price, then no transactions occur. In a CDA, the bid and ask prices rapidly change as new information arrives and traders reassess the value of the asset. Since the auctioneer only matches willing bidders, the auctioneer takes on no risk. However, buyers can only buy as many shares as sellers are willing to sell; for any transaction to occur, there must be a counterparty on the other side willing to accept the trade. As a result, when few traders participate in a CDA, it may become illiquid, meaning that not much trading activity occurs. The spread between the highest bid price and the lowest ask price may be very large, or one or both queues may be completely empty, discouraging trading.2

One way to induce liquidity is to provide a market maker who is willing to accept a large number of buy and sell orders at particular prices. We call this mechanism a CDA with market maker (CDAwMM).3 Conceptually, the market maker is just like any other trader, but typically is willing to accept a much larger volume of trades. The market maker may be a person, or may be an automated algorithm. Adding a market maker to the system increases liquidity, but exposes the market maker to risk. Now, instead of only matching trades, the system actually takes on risk of its own, and depending on what happens in the future, may lose considerable amounts of money.

2.3 Wagering markets

The typical Las Vegas bookmaker or oddsmaker functions much like a market maker in a CDA. In this case, the market institution (the book or house) sets the odds,4 initially according to expert opinion, and later in response to the relative level of betting on the various outcomes. Unlike in a pari-mutuel environment, whenever a wager is placed with a bookmaker, the odds or terms for that bet are fixed at the time of the bet. The bookmaker profits by offering different odds for the two sides of the bet, essentially defining a bid-ask spread. While odds may change in response to changing information, any bets made at previously set odds remain in effect according to the odds at the time of the bet; this is precisely in analogy to a CDAwMM. One difference between a bookmaker and a market maker is that the former usually operates in a "take it or leave it" mode: bettors cannot place their own limit orders on a common queue; they can in effect only place market orders at prices defined by the bookmaker. Still, the bookmaker certainly reacts to bettor demand. Like a market maker, the bookmaker exposes itself to significant risk. Sports betting markets have also been shown to provide high quality aggregate forecasts [4, 9, 23].

2 Thin markets do occur often in practice, and can be seen in a variety of the less popular markets available on http://TradeSports.com, or in some financial options markets, for example.
3 A very clear example of a CDAwMM is the "interactive" betting market on http://WSEX.com.
4 Or, alternatively, the bookmaker sets the game line in order to provide even-money odds.

2.4 Market scoring rule

Hanson's [11] market scoring rule (MSR) is a new mechanism for hedging and speculating that shares some properties in common with a DPM. Like a DPM, an MSR can be conceptualized as an automated market maker always willing to accept a trade on any event at some price. An MSR requires a patron to subsidize the market. The patron's final loss is variable, and thus technically implies a degree of risk, though the maximum loss is bounded. An MSR maintains a probability distribution over all events. At any time any trader who believes the probabilities are wrong can change any part of the distribution by accepting a lottery ticket that pays off according to a scoring rule (e.g., the logarithmic scoring rule) [27], as long as that trader also agrees to pay off the most recent person to change the distribution. In the limit of a single trader, the mechanism behaves like a scoring rule, suitable for polling a single agent for its probability distribution. In the limit of many traders, it produces a combined estimate. Since the market essentially always has a complete set of posted prices for all possible outcomes, the mechanism avoids the problem of thin markets or illiquidity. An MSR is not pari-mutuel in nature, as the patron in general injects a variable amount of money into the system. An MSR provides a two-sided automated market maker, while a DPM provides a one-sided automated market maker. In an MSR, the vector of payoffs across outcomes is fixed at the time of the trade, while in a DPM, the vector of payoffs across outcomes depends both on the state of wagering at the time of the trade and the state of wagering at the market's close. While the mechanisms are quite different--and so trader acceptance and incentives may strongly differ--the properties and motivations of DPMs and MSRs are quite similar. Hanson shows how MSRs are especially well suited for allowing bets on a combinatorial number of outcomes. The patron's payment for subsidizing trading on all 2^n possible combinations of n events is no larger than the sum of subsidizing the n event marginals independently. The mechanism was planned for use in the Policy Analysis Market (PAM), a futures market in Middle East related outcomes and funded by DARPA [20], until a media firestorm killed the project.5 As of this writing, the founders of PAM were considering reopening under private control.6

3. A DYNAMIC PARI-MUTUEL MARKET
3.1 High-level description
3.2 Advantages and disadvantages
3.3 Redistribution rule

4. DPM I: LOSING MONEY REDISTRIBUTED
4.1 Market probability
4.2 Price functions
4.2.1 Price function I: Price of A equals payoff of B
4.2.2 Price function II: Price of A proportional to money on A

5. DPM II: ALL MONEY REDISTRIBUTED
5.1 Market probability
5.2 Price functions
5.3 Comparing DPM I and II

6. OTHER VARIATIONS

7. AFTERMARKETS
7.1 Aftermarket for DPM II
7.2 Aftermarket for DPM I
7.3 Pseudo aftermarket for DPM I

8. CONCLUSIONS

9. FUTURE WORK

This paper reports the results of an initial investigation of the concept of a dynamic pari-mutuel market. Many avenues for future work present themselves, including the following:

• Random walk conjecture. The most important question mark in my mind is whether the random walk assumption (3) can be proven under reasonable market efficiency conditions and, if not, how severely it affects the practicality of the system.

• Incentive analysis. Formally, what are the incentives for traders to act on new information and when? How does the level of initial subsidy affect trader incentives?

• Laboratory experiments and field tests. This paper concentrated on the mathematics and algorithmics of the mechanism. However, the true test of the mechanism's ability to serve as an instrument for hedging, wagering, or information aggregation is to test it with real traders in a realistic environment. In reality, how do people behave when faced with a DPM mechanism?

• DPM call market. I have derived the price functions to react to wagers on one outcome at a time. The mechanism could be generalized to accept orders on both sides, then update the prices holistically, rather than by assuming a particular sequence on the wagers.

• Real-valued variables. I believe the mechanisms in this paper can easily be generalized to multiple discrete outcomes, and multiple real-valued outcomes that always sum to some constant value (e.g., multiple percentage values that must sum to 100). However, the generalization to real-valued variables with arbitrary range is less clear, and open for
future development.\n\u2022 Compound\/combinatorial betting.\nI believe that DPM may be well suited for compound [8, 11] or combinatorial [2] betting, for many of the same reasons that market scoring rules [11] are well suited for the task.\nDPM may also have some computational advantages over MSR, though this remains to be seen.","lvl-4":"A Dynamic Pari-Mutuel Market for Hedging, Wagering, and Information Aggregation\nABSTRACT\nI develop a new mechanism for risk allocation and information speculation called a dynamic pari-mutuel market (DPM).\nA DPM acts as hybrid between a pari-mutuel market and a continuous double auction (CDA), inheriting some of the advantages of both.\nLike a pari-mutuel market, a DPM offers infinite buy-in liquidity and zero risk for the market institution; like a CDA, a DPM can continuously react to new information, dynamically incorporate information into prices, and allow traders to lock in gains or limit losses by selling prior to event resolution.\nThe trader interface can be designed to mimic the familiar double auction format with bid-ask queues, though with an addition variable called the payoff per share.\nThe DPM price function can be viewed as an automated market maker always offering to sell at some price, and moving the price appropriately according to demand.\nSince the mechanism is pari-mutuel (i.e., redistributive), it is guaranteed to pay out exactly the amount of money taken in.\nI explore a number of variations on the basic DPM, analyzing the properties of each, and solving in closed form for their respective price functions.\n1.\nINTRODUCTION\nA wide variety of financial and wagering mechanisms have been developed to support hedging (i.e., insuring) against exposure to uncertain events and\/or speculative trading on uncertain events.\nThe dominant mechanism used in financial circles is the continuous double auction (CDA), or in some cases the CDA with market maker (CDAwMM).\nThe primary mechanism used for sports wagering 
is a bookie or bookmaker, who essentially acts exactly as a market maker.\nHorse racing and jai alai wagering traditionally employ the pari-mutuel mechanism.\nThough there is no formal or logical separation between financial trading and wagering, the two endeavors are socially considered distinct.\nThese trends highlight the interchangeable nature of the mechanisms and further blur the line between investing and betting.\nSome companies at the forefront of these movements are growing exponentially, with some industry observers declaring the onset of a revolution in the wagering business .1 Each mechanism has pros and cons for the market institution and the participating traders.\nA CDA only matches willing traders, and so poses no risk whatsoever for the market institution.\nBut a CDA can suffer from illiquidity in the form huge bid-ask spreads or even empty bid-ask queues if trading is light and thus markets are thin.\nA successful CDA must overcome a chicken-and-egg problem: traders are attracted to liquid markets, but liquid markets require a large number of traders.\nA CDAwMM and the similar bookie mechanism have built-in liquidity, but at a cost: the market maker itself, usually affiliated with the market institution, is exposed to significant risk of large monetary losses.\nBoth the CDA and CDAwMM offer incentives for traders to leverage information continuously as soon as that information becomes available.\nAs a result, prices are known to capture the current state of information exceptionally well.\nPari-mutuel markets effectively have infinite liquidity: anyone can place a bet on any outcome at any time, without the need for a matching offer from another bettor or a market maker.\nPari-mutuel markets also involve no risk for the market institution, since they only redistribute money from losing wagers to winning wagers.\nHowever, pari-mutuel mar\nkets are not suitable for situations where information arrives over time, since there is a strong disincentive 
for placing bets until either (1) all information is revealed, or (2) the market is about to close.\nFor this reason, pari-mutuel \"prices\" prior to the market's close cannot be considered a reflection of current information.\nPari-mutuel market participants cannot \"buy low and sell high\": they cannot cash out gains (or limit losses) before the event outcome is revealed.\nBecause the process whereby information arrives continuously over time is the rule rather than the exception, the applicability of the standard pari-mutuel mechanism is questionable in a large number of settings.\nIn this paper, I develop a new mechanism suitable for hedging, speculating, and wagering, called a dynamic parimutuel market (DPM).\nA DPM can be thought of as a hybrid between a pari-mutuel market and a CDA.\nA DPM is indeed pari-mutuel in nature, meaning that it acts only to redistribute money from some traders to others, and so exposes the market institution to no volatility (no risk).\nA constant, pre-determined subsidy is required to start the market.\nThe subsidy can in principle be arbitrarily small and might conceivably come from traders (via antes or transaction fees) rather than the market institution, though a nontrivial outside subsidy may actually encourage trading and information aggregation.\nA DPM has the infinite liquidity of a pari-mutuel market: traders can always purchase shares in any outcome at any time, at some price automatically set by the market institution.\nA DPM is also able to react to and incorporate information arriving over time, like a CDA.\nThe market institution changes the price for particular outcomes based on the current state of wagering.\nIf a particular outcome receives a relatively large number of wagers, its price increases; if an outcome receives relatively few wagers, its price decreases.\nPrices are computed automatically using a price function, which can differ depending on what properties are desired.\nThe complexity of the price 
function can be hidden from traders by communicating only the ask prices for various lots of shares (e.g., lots of 100 shares), as is common practice in CDAs and CDAwMMs.\nDPM prices do reflect current information, and traders can cash out in an aftermarket to lock in gains or limit losses before the event outcome is revealed.\nWhile there is always a market maker willing to accept buy orders, there is not a market maker accepting sell orders, and thus no guaranteed liquidity for selling: instead, selling is accomplished via a standard CDA mechanism.\nTraders can always \"hedge-sell\" by purchasing the opposite outcome than they already own.\n2.\nBACKGROUND AND RELATED WORK 2.1 Pari-mutuel markets\nPari-mutuel markets are common at horse races [1, 22, 24, 25, 26], dog races, and jai alai games.\nIn a pari-mutuel market people place wagers on which of two or more mutually exclusive and exhaustive outcomes will occur at some time in the future.\nAfter the true outcome becomes known, all of the money that is lost by those who bet on the incorrect outcome is redistributed to those who bet on the correct outcome, in direct proportion to the amount they wagered.\nThat is, every dollar wagered on i receives an equal share of all money wagered.\nAn equivalent way to think about the redistribution rule is that every dollar wagered on i is refunded, then receives an equal share of all remaining money bet on the losing outcomes, or Pj ~ = i Mj\/Mi dollars.\nIn practice, the market institution (e.g., the racetrack) first takes a certain percent of the total amount wagered, usually about 20% in the United States, then redistributes whatever money remains to the winners in proportion to their amount bet.\nConsider a simple example with two outcomes, A and B.\nNow suppose that A occurs (e.g., horse A wins the race).\nPeople who wagered on B lose their money, or $200 in total.\nPeople who wagered on A win and each receives a proportional share of the total $1000 wagered (ignoring 
fees).\nEvery dollar bet in a pari-mutuel market has an equal payoff, regardless of when the wager was placed or how much money was invested in the various outcomes at the time the wager was placed.\nThe only state that matters is the final state: the final amounts wagered on all the outcomes when the market closes, and the identity of the correct outcome.\nAs a result, there is a disincentive to place a wager early if there is any chance that new information might become available.\nMoreover, there are no guarantees about the payoff rate of a particular bet, except that it will be nonnegative if the correct outcome is chosen.\nPayoff rates can fluctuate arbitrarily until the market closes.\nThis is in contrast to CDAs and CDAwMMs, like the stock market, where incentives exist to invest as soon as new information is revealed.\nPari-mutuel bettors may be allowed to switch their chosen outcome, or even cancel their bet, prior to the market's close.\nHowever, they cannot cash out of the market early, to either lock in gains or limit losses, if new information favors one outcome over another, as is possible in a CDA or a CDAwMM.\nIf bettors can cancel or change their bets, then an aftermarket to sell existing wagers is not sensible: every dollar wagered is worth exactly $1 up until the market's close--no one would buy at greater than $1 and no one would sell at less than $1.\nPari-mutuel bettors must wait until the outcome is revealed to realize any profit or loss.\nUnlike a CDA, in a pari-mutuel market, anyone can place a wager of any amount at any time--there is in a sense infinite liquidity for buying.\nA CDAwMM also has built-in liquidity, but at the cost of significant risk for the market maker.\nIn a pari-mutuel market, since money is only redistributed among bettors, the market institution itself has no risk.\nThe main drawback of a pari-mutuel market is that it is useful only for capturing the value of an uncertain asset at some instant in time.\nIt is 
ill-suited for situations where information arrives over time, continuously updating the estimated value of the asset--situations common in al\nmost all trading and wagering scenarios.\nPerhaps for this reason, in most dynamic environments, financial mechanisms like the CDA that can react in real-time to changing information are more typically employed to facilitate speculating and hedging.\nSince a pari-mutuel market can estimate the value of an asset at a single instant in time, a repeated pari-mutuel market, where distinct pari-mutuel markets are run at consecutive intervals, could in principle capture changing information dynamics.\nBut running multiple consecutive markets would likely thin out trading in each individual market.\nAlso, in each individual pari-mutuel market, the incentives would still be to wait to bet until just before the ending time of that particular market.\nThis last problem might be mitigated by instituting a random stopping rule for each individual pari-mutuel market.\nIn laboratory experiments, pari-mutuel markets have shown a remarkable ability to aggregate and disseminate information dispersed among traders, at least for a single snapshot in time [17].\n2.2 Financial markets\nIn the financial world, wagering on the outcomes of uncertain future propositions is also common.\nThe typical market mechanism used is the continuous double auction (CDA).\nThe term securities market in economics and finance generically encompasses a number of markets where speculating on uncertain events is possible.\nSecurities markets generally have an economic and social value beyond facilitating speculation or wagering: they allow traders to hedge risk, or to insure against undesirable outcomes.\nSo if a particular outcome has disutility for a trader, he or she can mitigate the risk by wagering for the outcome, to arrange for compensation in case the outcome occurs.\nIn this sense, buying automobile insurance is effectively a bet that an accident or other 
covered event will occur.\nLike pari-mutuel markets, often prices in financial markets are excellent information aggregators, yielding very accurate forecasts of future events [5, 18, 19].\nA CDA constantly matches orders to buy an asset with orders to sell.\nIf the highest bid price is less than the lowest ask price, then no transactions occur.\nIn a CDA, the bid and ask prices rapidly change as new information arrives and traders reassess the value of the asset.\nSince the auctioneer only matches willing bidders, the auctioneer takes on no risk.\nAs a result, when few traders participate in a CDA, it may become illiquid, meaning that not much trading activity occurs.\nThe spread between the highest bid price and the lowest ask price may be very large, or one or both queues may be completely empty, discouraging trading.2 One way to induce liquidity is to provide a market maker who is willing to accept a large number of buy and sell orders at particular prices.\nWe call this mechanism a CDA with market maker (CDAwMM).3 Conceptually, the market maker is just like any other trader, but typically is willing to accept a much larger volume of trades.\nThe market maker may be a person, or may be an automated algorithm.\nAdding a market maker to the system increases liquidity, but exposes the market maker to risk.\nNow, instead of only matching trades, the system actually takes on risk of its own, and depending on what happens in the future, may lose considerable amounts of money.\n2.3 Wagering markets\nThe typical Las Vegas bookmaker or oddsmaker functions much like a market maker in a CDA.\nIn this case, the market institution (the book or house) sets the odds,4 initially according to expert opinion, and later in response to the relative level of betting on the various outcomes.\nUnlike in a pari-mutuel environment, whenever a wager is placed with a bookmaker, the odds or terms for that bet are fixed at the time of the bet.\nOne difference between a bookmaker and a 
market maker is that the former usually operates in a \"take it or leave it mode\": bettors cannot place their own limit orders on a common queue, they can in effect only place market orders at prices defined by the bookmaker.\nStill, the bookmaker certainly reacts to bettor demand.\nLike a market maker, the bookmaker exposes itself to significant risk.\nSports betting markets have also been shown to provide high quality aggregate forecasts [4, 9, 23].\n2.4 Market scoring rule\nHanson's [11] market scoring rule (MSR) is a new mechanism for hedging and speculating that shares some properties in common with a DPM.\nLike a DPM, an MSR can be conceptualized as an automated market maker always willing to accept a trade on any event at some price.\nAn MSR requires a patron to subsidize the market.\nThe patron's final loss is variable, and thus technically implies a degree of risk, though the maximum loss is bounded.\nAn MSR maintains a probability distribution over all events.\nAt any time, any trader who believes the probabilities are wrong can change any part of the distribution by accepting a lottery ticket that pays off according to a scoring rule, as long as that trader also agrees to pay off the most recent person to change the distribution.\n2 Thin markets do occur often in practice, and can be seen in a variety of the less popular markets available on http:\/\/TradeSports.com, or in some financial options markets, for example.\n3 A very clear example of a CDAwMM is the \"interactive\" betting market on http:\/\/WSEX.com.\n4 Or, alternatively, the bookmaker sets the game line in order to provide even-money odds.\nIn the limit of a single trader, the mechanism behaves like a scoring rule, suitable for polling a single agent for its probability distribution.\nIn the limit of many traders, it produces a combined estimate.\nSince the market essentially always has a complete set of posted prices for all possible outcomes, the mechanism avoids the problem of thin markets or illiquidity.\nAn MSR is not pari-mutuel in nature, as the patron in general injects a variable amount of money into the system.\nAn MSR provides a two-sided automated market maker, while a DPM provides a one-sided automated market maker.\nIn an MSR, the 
vector of payoffs across outcomes is fixed at the time of the trade, while in a DPM, the vector of payoffs across outcomes depends both on the state of wagering at the time of the trade and the state of wagering at the market's close.\nWhile the mechanisms are quite different--and so trader acceptance and incentives may strongly differ--the properties and motivations of DPMs and MSRs are quite similar.\nHanson shows how MSRs are especially well suited for allowing bets on a combinatorial number of outcomes.\n9.\nFUTURE WORK\nThis paper reports the results of an initial investigation of the concept of a dynamic pari-mutuel market.\nMany avenues for future work present themselves, including the following: \u2022 Random walk conjecture.\n\u2022 Incentive analysis.\nFormally, what are the incentives for traders to act on new information and when?\nHow does the level of initial subsidy affect trader incentives?\n\u2022 Laboratory experiments and field tests.\nThis paper concentrated on the mathematics and algorithmics of the mechanism.\nHowever, the true test of the mechanism's ability to serve as an instrument for hedging, wagering, or information aggregation is to test it with real traders in a realistic environment.\nIn reality, how do people behave when faced with a DPM mechanism?\n\u2022 DPM call market.\nI have derived the price functions to react to wagers on one outcome at a time.\nThe mechanism could be generalized to accept orders on both sides, then update the prices holistically, rather than by assuming a particular sequence on the wagers.\n\u2022 Real-valued variables.\nI believe the mechanisms in this paper can easily be generalized to multiple discrete outcomes, and multiple real-valued outcomes that always sum to some constant value (e.g., multiple percentage values that must sum to 100).\nHowever, the generalization to real-valued variables with arbitrary range is less clear, and open for future development.\n\u2022 Compound\/combinatorial betting.\nI believe that DPM may be 
well suited for compound [8, 11] or combinatorial [2] betting, for many of the same reasons that market scoring rules [11] are well suited for the task.","lvl-2":"A Dynamic Pari-Mutuel Market for Hedging, Wagering, and Information Aggregation\nABSTRACT\nI develop a new mechanism for risk allocation and information speculation called a dynamic pari-mutuel market (DPM).\nA DPM acts as a hybrid between a pari-mutuel market and a continuous double auction (CDA), inheriting some of the advantages of both.\nLike a pari-mutuel market, a DPM offers infinite buy-in liquidity and zero risk for the market institution; like a CDA, a DPM can continuously react to new information, dynamically incorporate information into prices, and allow traders to lock in gains or limit losses by selling prior to event resolution.\nThe trader interface can be designed to mimic the familiar double auction format with bid-ask queues, though with an additional variable called the payoff per share.\nThe DPM price function can be viewed as an automated market maker always offering to sell at some price, and moving the price appropriately according to demand.\nSince the mechanism is pari-mutuel (i.e., redistributive), it is guaranteed to pay out exactly the amount of money taken in.\nI explore a number of variations on the basic DPM, analyzing the properties of each, and solving in closed form for their respective price functions.\n1.\nINTRODUCTION\nA wide variety of financial and wagering mechanisms have been developed to support hedging (i.e., insuring) against exposure to uncertain events and\/or speculative trading on uncertain events.\nThe dominant mechanism used in financial circles is the continuous double auction (CDA), or in some cases the CDA with market maker (CDAwMM).\nThe primary mechanism used for sports wagering is a bookie or bookmaker, who essentially acts exactly as a market maker.\nHorse racing and jai alai wagering traditionally employ the pari-mutuel mechanism.\nThough there is no 
formal or logical separation between financial trading and wagering, the two endeavors are socially considered distinct.\nRecently, there has been a move to employ CDAs or CDAwMMs for all types of wagering, including on sports, horse racing, political events, world news, and many other uncertain events, and a simultaneous and opposite trend to use bookie systems for betting on financial markets.\nThese trends highlight the interchangeable nature of the mechanisms and further blur the line between investing and betting.\nSome companies at the forefront of these movements are growing exponentially, with some industry observers declaring the onset of a revolution in the wagering business.1 Each mechanism has pros and cons for the market institution and the participating traders.\nA CDA only matches willing traders, and so poses no risk whatsoever for the market institution.\nBut a CDA can suffer from illiquidity in the form of huge bid-ask spreads or even empty bid-ask queues if trading is light and thus markets are thin.\nA successful CDA must overcome a chicken-and-egg problem: traders are attracted to liquid markets, but liquid markets require a large number of traders.\nA CDAwMM and the similar bookie mechanism have built-in liquidity, but at a cost: the market maker itself, usually affiliated with the market institution, is exposed to significant risk of large monetary losses.\nBoth the CDA and CDAwMM offer incentives for traders to leverage information continuously as soon as that information becomes available.\nAs a result, prices are known to capture the current state of information exceptionally well.\nPari-mutuel markets effectively have infinite liquidity: anyone can place a bet on any outcome at any time, without the need for a matching offer from another bettor or a market maker.\nPari-mutuel markets also involve no risk for the market institution, since they only redistribute money from losing wagers to winning wagers.\nHowever, pari-mutuel markets are 
not suitable for situations where information arrives over time, since there is a strong disincentive for placing bets until either (1) all information is revealed, or (2) the market is about to close.\nFor this reason, pari-mutuel \"prices\" prior to the market's close cannot be considered a reflection of current information.\nPari-mutuel market participants cannot \"buy low and sell high\": they cannot cash out gains (or limit losses) before the event outcome is revealed.\nBecause the process whereby information arrives continuously over time is the rule rather than the exception, the applicability of the standard pari-mutuel mechanism is questionable in a large number of settings.\nIn this paper, I develop a new mechanism suitable for hedging, speculating, and wagering, called a dynamic pari-mutuel market (DPM).\nA DPM can be thought of as a hybrid between a pari-mutuel market and a CDA.\nA DPM is indeed pari-mutuel in nature, meaning that it acts only to redistribute money from some traders to others, and so exposes the market institution to no volatility (no risk).\nA constant, pre-determined subsidy is required to start the market.\nThe subsidy can in principle be arbitrarily small and might conceivably come from traders (via antes or transaction fees) rather than the market institution, though a nontrivial outside subsidy may actually encourage trading and information aggregation.\nA DPM has the infinite liquidity of a pari-mutuel market: traders can always purchase shares in any outcome at any time, at some price automatically set by the market institution.\nA DPM is also able to react to and incorporate information arriving over time, like a CDA.\nThe market institution changes the price for particular outcomes based on the current state of wagering.\nIf a particular outcome receives a relatively large number of wagers, its price increases; if an outcome receives relatively few wagers, its price decreases.\nPrices are computed automatically using a price 
function, which can differ depending on what properties are desired.\nThe price function determines the instantaneous price per share for an infinitesimal quantity of shares; the total cost for purchasing n shares is computed as the integral of the price function from 0 to n.\nThe complexity of the price function can be hidden from traders by communicating only the ask prices for various lots of shares (e.g., lots of 100 shares), as is common practice in CDAs and CDAwMMs.\nDPM prices do reflect current information, and traders can cash out in an aftermarket to lock in gains or limit losses before the event outcome is revealed.\nWhile there is always a market maker willing to accept buy orders, there is not a market maker accepting sell orders, and thus no guaranteed liquidity for selling: instead, selling is accomplished via a standard CDA mechanism.\nTraders can always \"hedge-sell\" by purchasing the opposite outcome from the one they already own.\n2.\nBACKGROUND AND RELATED WORK\n2.1 Pari-mutuel markets\nPari-mutuel markets are common at horse races [1, 22, 24, 25, 26], dog races, and jai alai games.\nIn a pari-mutuel market people place wagers on which of two or more mutually exclusive and exhaustive outcomes will occur at some time in the future.\nAfter the true outcome becomes known, all of the money that is lost by those who bet on the incorrect outcome is redistributed to those who bet on the correct outcome, in direct proportion to the amount they wagered.\nMore formally, if there are k mutually exclusive and exhaustive outcomes (e.g., k horses, exactly one of which will win), and M1, M2,..., Mk dollars are bet on each outcome, and outcome i occurs, then everyone who bet on an outcome j \u2260 i loses their wager, while everyone who bet on outcome i receives \u2211j Mj \/ Mi dollars for every dollar they wagered.\nThat is, every dollar wagered on i receives an equal share of all money wagered.\nAn equivalent way to think about the redistribution rule is that every 
dollar wagered on i is refunded, then receives an equal share of all remaining money bet on the losing outcomes, or \u2211j\u2260i Mj \/ Mi dollars.\nIn practice, the market institution (e.g., the racetrack) first takes a certain percent of the total amount wagered, usually about 20% in the United States, then redistributes whatever money remains to the winners in proportion to their amount bet.\nConsider a simple example with two outcomes, A and B.\nThe outcomes are mutually exclusive and exhaustive, meaning that Pr (A \u2227 B) = 0 and Pr (A) + Pr (B) = 1.\nSuppose $800 is bet on A and $200 on B.\nNow suppose that A occurs (e.g., horse A wins the race).\nPeople who wagered on B lose their money, or $200 in total.\nPeople who wagered on A win and each receives a proportional share of the total $1000 wagered (ignoring fees).\nSpecifically, each $1 wager on A entitles its owner to a 1\/800 share of the $1000, or $1.25.\nEvery dollar bet in a pari-mutuel market has an equal payoff, regardless of when the wager was placed or how much money was invested in the various outcomes at the time the wager was placed.\nThe only state that matters is the final state: the final amounts wagered on all the outcomes when the market closes, and the identity of the correct outcome.\nAs a result, there is a disincentive to place a wager early if there is any chance that new information might become available.\nMoreover, there are no guarantees about the payoff rate of a particular bet, except that it will be nonnegative if the correct outcome is chosen.\nPayoff rates can fluctuate arbitrarily until the market closes.\nSo a second reason not to bet early is to wait to get a better sense of the final payout rates.\nThis is in contrast to CDAs and CDAwMMs, like the stock market, where incentives exist to invest as soon as new information is revealed.\nPari-mutuel bettors may be allowed to switch their chosen outcome, or even cancel their bet, prior to the market's close.\nHowever, they cannot cash 
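The redistribution arithmetic in the worked example can be sketched in a few lines of Python; the helper below is purely illustrative (the function name and the fee parameter are not from the paper):

```python
def parimutuel_payoff_per_dollar(wagers, winner, fee=0.0):
    """Payoff per $1 wagered on the winning outcome of a pari-mutuel
    market: the (fee-adjusted) total pool divided by the amount bet
    on the winner."""
    pool = sum(wagers.values()) * (1.0 - fee)
    return pool / wagers[winner]

# The example from the text: $800 on A, $200 on B, and A occurs.
rate = parimutuel_payoff_per_dollar({"A": 800.0, "B": 200.0}, "A")
# Each $1 wagered on A returns $1.25 (a 1/800 share of the $1000 pool).

# With the roughly 20% track take mentioned in the text, the same
# winning dollar returns only $1.00.
rate_with_take = parimutuel_payoff_per_dollar({"A": 800.0, "B": 200.0},
                                              "A", fee=0.20)
```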
out of the market early, to either lock in gains or limit losses, if new information favors one outcome over another, as is possible in a CDA or a CDAwMM.\nIf bettors can cancel or change their bets, then an aftermarket to sell existing wagers is not sensible: every dollar wagered is worth exactly $1 up until the market's close--no one would buy at greater than $1 and no one would sell at less than $1.\nPari-mutuel bettors must wait until the outcome is revealed to realize any profit or loss.\nUnlike a CDA, in a pari-mutuel market, anyone can place a wager of any amount at any time--there is in a sense infinite liquidity for buying.\nA CDAwMM also has built-in liquidity, but at the cost of significant risk for the market maker.\nIn a pari-mutuel market, since money is only redistributed among bettors, the market institution itself has no risk.\nThe main drawback of a pari-mutuel market is that it is useful only for capturing the value of an uncertain asset at some instant in time.\nIt is ill-suited for situations where information arrives over time, continuously updating the estimated value of the asset--situations common in almost all trading and wagering scenarios.\nThere is no notion of \"buying low and selling high\", as occurs in a CDA, where buying when few others are buying (and the price is low) is rewarded more than buying when many others are buying (and the price is high).\nPerhaps for this reason, in most dynamic environments, financial mechanisms like the CDA that can react in real-time to changing information are more typically employed to facilitate speculating and hedging.\nSince a pari-mutuel market can estimate the value of an asset at a single instant in time, a repeated pari-mutuel market, where distinct pari-mutuel markets are run at consecutive intervals, could in principle capture changing information dynamics.\nBut running multiple consecutive markets would likely thin out trading in each individual market.\nAlso, in each individual 
pari-mutuel market, the incentives would still be to wait to bet until just before the ending time of that particular market.\nThis last problem might be mitigated by instituting a random stopping rule for each individual pari-mutuel market.\nIn laboratory experiments, pari-mutuel markets have shown a remarkable ability to aggregate and disseminate information dispersed among traders, at least for a single snapshot in time [17].\nA similar ability has been recognized at real racetracks [1, 22, 24, 25, 26].\n2.2 Financial markets\nIn the financial world, wagering on the outcomes of uncertain future propositions is also common.\nThe typical market mechanism used is the continuous double auction (CDA).\nThe term securities market in economics and finance generically encompasses a number of markets where speculating on uncertain events is possible.\nExamples include stock markets like NASDAQ, options markets like the CBOE [13], futures markets like the CME [21], other derivatives markets, insurance markets, political stock markets [6, 7], idea futures markets [12], decision markets [10] and even market games [3, 15, 16].\nSecurities markets generally have an economic and social value beyond facilitating speculation or wagering: they allow traders to hedge risk, or to insure against undesirable outcomes.\nSo if a particular outcome has disutility for a trader, he or she can mitigate the risk by wagering for the outcome, to arrange for compensation in case the outcome occurs.\nIn this sense, buying automobile insurance is effectively a bet that an accident or other covered event will occur.\nSimilarly, buying a put option, which is useful as a hedge for a stockholder, is a bet that the underlying stock will go down.\nIn practice, agents engage in a mixture of hedging and speculating, and there is no clear dividing line between the two [14].\nLike pari-mutuel markets, often prices in financial markets are excellent information aggregators, yielding very accurate forecasts 
of future events [5, 18, 19].\nA CDA constantly matches orders to buy an asset with orders to sell.\nIf at any time one party is willing to buy one unit of the asset at a bid price of p_bid, while another party is willing to sell one unit of the asset at an ask price of p_ask, and p_bid is greater than or equal to p_ask, then the two parties transact (at some price between p_bid and p_ask).\nIf the highest bid price is less than the lowest ask price, then no transactions occur.\nIn a CDA, the bid and ask prices rapidly change as new information arrives and traders reassess the value of the asset.\nSince the auctioneer only matches willing bidders, the auctioneer takes on no risk.\nHowever, buyers can only buy as many shares as sellers are willing to sell; for any transaction to occur, there must be a counterparty on the other side willing to accept the trade.\nAs a result, when few traders participate in a CDA, it may become illiquid, meaning that not much trading activity occurs.\nThe spread between the highest bid price and the lowest ask price may be very large, or one or both queues may be completely empty, discouraging trading.2 One way to induce liquidity is to provide a market maker who is willing to accept a large number of buy and sell orders at particular prices.\nWe call this mechanism a CDA with market maker (CDAwMM).3 Conceptually, the market maker is just like any other trader, but typically is willing to accept a much larger volume of trades.\nThe market maker may be a person, or may be an automated algorithm.\nAdding a market maker to the system increases liquidity, but exposes the market maker to risk.\nNow, instead of only matching trades, the system actually takes on risk of its own, and depending on what happens in the future, may lose considerable amounts of money.\n2.3 Wagering markets\nThe typical Las Vegas bookmaker or oddsmaker functions much like a market maker in a CDA.\nIn this case, the market institution (the book or house) sets the odds,4 
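The matching rule described here (transact whenever the highest bid meets or exceeds the lowest ask) can be illustrated with a toy sketch; this is not the paper's mechanism, and it simplifies to unit-size orders clearing at the bid-ask midpoint:

```python
import heapq

class SimpleCDA:
    """Toy continuous double auction: unit-size orders, matched
    whenever the best bid >= the best ask (illustrative only)."""
    def __init__(self):
        self.bids = []  # max-heap via negated prices
        self.asks = []  # min-heap of ask prices

    def submit(self, side, price):
        """Add an order and return the prices of any resulting trades."""
        if side == "buy":
            heapq.heappush(self.bids, -price)
        else:
            heapq.heappush(self.asks, price)
        trades = []
        # Match while the highest bid meets or exceeds the lowest ask.
        while self.bids and self.asks and -self.bids[0] >= self.asks[0]:
            bid = -heapq.heappop(self.bids)
            ask = heapq.heappop(self.asks)
            trades.append((bid + ask) / 2)  # clear between bid and ask
        return trades

cda = SimpleCDA()
cda.submit("sell", 10.5)           # lone ask: no match
cda.submit("buy", 10.0)            # bid 10.0 < ask 10.5: a spread, no trade
trades = cda.submit("buy", 11.0)   # bid 11.0 >= ask 10.5: one trade at 10.75
```

The leftover bid at 10.0 stays queued, mirroring the illiquidity the text describes: with few participants, orders can sit unmatched indefinitely.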
initially according to expert opinion, and later in response to the relative level of betting on the various outcomes.\nUnlike in a pari-mutuel environment, whenever a wager is placed with a bookmaker, the odds or terms for that bet are fixed at the time of the bet.\nThe bookmaker profits by offering different odds for the two sides of the bet, essentially defining a bid-ask spread.\nWhile odds may change in response to changing information, any bets made at previously set odds remain in effect according to the odds at the time of the bet; this is precisely in analogy to a CDAwMM.\nOne difference between a bookmaker and a market maker is that the former usually operates in a \"take it or leave it mode\": bettors cannot place their own limit orders on a common queue, they can in effect only place market orders at prices defined by the bookmaker.\nStill, the bookmaker certainly reacts to bettor demand.\nLike a market maker, the bookmaker exposes itself to significant risk.\nSports betting markets have also been shown to provide high quality aggregate forecasts [4, 9, 23].\n2.4 Market scoring rule\nHanson's [11] market scoring rule (MSR) is a new mechanism for hedging and speculating that shares some properties in common with a DPM.\nLike a DPM, an MSR can be conceptualized as an automated market maker always willing to accept a trade on any event at some price.\nAn MSR requires a patron to subsidize the market.\nThe patron's final loss is variable, and thus technically implies a degree of risk, though the maximum loss is bounded.\nAn MSR maintains a probability distribution over all events.\nAt any time, any trader who believes the probabilities are wrong can change any part of the distribution by accepting a lottery ticket that pays off according to a scoring rule (e.g., the logarithmic scoring rule) [27], as long as that trader also agrees to pay off the most recent person to change the distribution.\n2 Thin markets do occur often in practice, and can be seen in a variety of the less popular markets available on http:\/\/TradeSports.com, or in some financial options markets, for example.\n3 A very clear example of a CDAwMM is the \"interactive\" betting market on http:\/\/WSEX.com.\n4 Or, alternatively, the bookmaker sets the game line in order to provide even-money odds.\nIn the limit of a single trader, the mechanism behaves like a scoring rule, suitable for polling a single agent for its probability distribution.\nIn the limit of many traders, it produces a combined estimate.\nSince the market essentially always has a complete set of posted prices for all possible outcomes, the mechanism avoids the problem of thin markets or illiquidity.\nAn MSR is not pari-mutuel in nature, as the patron in general injects a variable amount of money into the system.\nAn MSR provides a two-sided automated market maker, while a DPM provides a one-sided automated market maker.\nIn an MSR, the vector of payoffs across outcomes is fixed at the time of the trade, while in a DPM, the vector of payoffs across outcomes depends both on the state of wagering at the time of the trade and the state of wagering at the market's close.\nWhile the mechanisms are quite different--and so trader acceptance and incentives may strongly differ--the properties and motivations of DPMs and MSRs are quite similar.\nHanson shows how MSRs are especially well suited for allowing bets on a combinatorial number of outcomes.\nThe patron's payment for subsidizing trading on all 2^n possible combinations of n events is no larger than the sum of subsidizing the n event marginals independently.\nThe mechanism was planned for use in the Policy Analysis Market (PAM), a futures market in Middle East related outcomes and funded by DARPA [20], until a media firestorm killed the project.5 As of this writing, the founders of PAM were considering reopening under private control.6\n3.\nA DYNAMIC PARI-MUTUEL MARKET\n3.1 High-level description\nIn contrast to a standard 
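The MSR payment rule just described, specialized to the logarithmic scoring rule, can be sketched as follows; this is a minimal illustration of the trader-to-trader payment, not Hanson's full mechanism, and the function name and unit scale factor are assumptions:

```python
import math

def msr_log_payment(p_old, p_new, outcome):
    """Trader's realized profit under a logarithmic market scoring rule:
    moving the market's report from p_old to p_new earns
    log p_new[outcome] - log p_old[outcome] once the outcome is known
    (negative if the move made the realized outcome less likely)."""
    return math.log(p_new[outcome]) - math.log(p_old[outcome])

# A trader who raises Pr(A) from 0.5 to 0.8 gains if A occurs...
gain_if_a = msr_log_payment({"A": 0.5, "B": 0.5}, {"A": 0.8, "B": 0.2}, "A")
# ...and pays out if B occurs instead.
loss_if_b = msr_log_payment({"A": 0.5, "B": 0.5}, {"A": 0.8, "B": 0.2}, "B")
```

The bounded patron loss mentioned in the text corresponds to the worst-case sum of these payments over the whole trading sequence, which telescopes to depend only on the first and last reports.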
pari-mutuel market, where each dollar always buys an equal share of the payoff, in a DPM each dollar buys a variable share in the payoff depending on the state of wagering at the time of purchase.\nSo a wager on A at a time when most others are wagering on B offers a greater possible profit than a wager on A when most others are also wagering on A.\nA natural way to communicate the changing payoff of a bet is to say that, at any given time, a certain amount of money will buy a certain number of shares in one outcome or the other.\nPurchasing a share entitles its owner to an equal stake in the winning pot should the chosen outcome occur.\nThe payoff is variable, because when few people are betting on an outcome, shares will generally be cheaper than at a time when many people are betting on that outcome.\nThere is no pre-determined limit on the number of shares: new shares can be continually generated as trading proceeds.\nFor simplicity, all analyses in this paper consider the binary outcome case; generalizing to multiple discrete outcomes should be straightforward.\nDenote the two outcomes A and B.\nThe outcomes are mutually exclusive and exhaustive.\nDenote the instantaneous price per share of A as p1 and the price per share of B as p2.\nDenote the payoffs per share as P1 and P2, respectively.\nThese four numbers, p1, p2, P1, P2, are the key numbers that traders must track and understand.\nNote that the price is set at the time of the wager; the payoff per share is finalized only after the event outcome is revealed.\nAt any time, a trader can purchase an infinitesimal quantity of shares of A at price p1 (and similarly for B).\nHowever, since the price changes continuously as shares are purchased, the cost of buying n shares is computed as the integral of a price function from 0 to n.\nThe use of continuous functions and integrals can be hidden from traders by aggregating the automated market maker's sell orders into discrete lots of, say, 100 shares each.\nThese ask 
orders can be automatically entered into the system by the market institution, so that traders interact with what looks like a more familiar CDA; we examine this interface issue in more detail below in Section 4.2.\nFor our analysis, we introduce the following additional notation.\nDenote M1 as the total amount of money wagered on A, M2 as the total amount of money wagered on B, T = M1 + M2 as the total amount of money wagered on both sides, N1 as the total number of shares purchased of A, and N2 as the total number of shares purchased of B.\nThere are many ways to formulate the price function.\nSeveral natural price functions are outlined below; each is motivated as the unique solution to a particular constraint on price dynamics.\n3.2 Advantages and disadvantages\nTo my knowledge, a DPM is the only known mechanism for hedging and speculating that exhibits all three of the following properties: (1) guaranteed liquidity, (2) no risk for the market institution, and (3) continuous incorporation of information.\nA standard pari-mutuel fails (3).\nA CDA fails (1).\nA CDAwMM, the bookmaker mechanism, and an MSR all fail (2).\nEven though technically an MSR exposes its patron to risk (i.e., a variable future payoff), the patron's maximum loss is bounded, so the distinction between a DPM and an MSR in terms of these three properties is more technical than practical.\nDPM traders can cash out of the market early, just like stock market traders, to lock in a profit or limit a loss, an action that is simply not possible in a standard pari-mutuel.\nA DPM also has some drawbacks.\nThe payoff for a wager depends both on the price at the time of the trade, and on the final payoff per share at the market's close.\nThis contrasts with the CDA variants, where the payoff vector across possible future outcomes is fixed at the time of the trade.\nSo a trader's strategic optimization problem is complicated by the need to predict the final values of P1 and P2.\nIf P changes according to 
a random walk, then traders can take the current P as an unbiased estimate of the final P, greatly decreasing the complexity of their optimization.\nIf P does not change according to a random walk, the mechanism still has utility as a mechanism for hedging and speculating, though optimization may be difficult, and determining a measure of the market's aggregate opinion of the probabilities of A and B may be difficult.\nWe discuss the implications of random walk behavior further below in Section 4.1 in the discussion surrounding Assumption 3.\nA second drawback of a DPM is its one-sided nature.\nWhile an automated market maker always stands ready to accept buy orders, there is no corresponding market maker to accept sell orders.\nTraders must sell to each other using a standard CDA mechanism, for example by posting an ask order at a price at or below the market maker's current ask price.\nTraders can also always \"hedge-sell\" by purchasing shares in the opposite outcome from the market maker, thereby hedging their bet if not fully liquidating it.\n3.3 Redistribution rule\nIn a standard pari-mutuel market, payoffs can be computed in either of two equivalent ways: (1) each winning $1 wager receives a refund of the initial $1 paid, plus an equal share of all losing wagers, or (2) each winning $1 wager receives an equal share of all wagers, winning or losing.\nBecause each dollar always earns an equal share of the payoff, the two formulations are precisely the same: 1 + \u2211j\u2260i Mj \/ Mi = \u2211j Mj \/ Mi.\nIn a dynamic pari-mutuel market, because each dollar is not equally weighted, the two formulations are distinct, and lead to significantly different price functions and mechanisms, each with different potentially desirable properties.\nWe consider each case in turn.\nThe next section analyzes case (1), where only losing money is redistributed.\nSection 5 examines case (2), where all money is redistributed.\n4.\nDPM I: LOSING MONEY REDISTRIBUTED\nFor the case where the initial payments on winning bets are 
refunded, and only losing money is redistributed, the respective payoffs per share are simply: P1 = M2 \/ N1 and P2 = M1 \/ N2.\nSo, if A occurs, shareholders of A receive all of their initial payment back, plus P1 dollars per share owned, while shareholders of B lose all money wagered.\nSimilarly, if B occurs, shareholders of B receive all of their initial payment back, plus P2 dollars per share owned, while shareholders of A lose all money wagered.\nWithout loss of generality, I will analyze the market from the perspective of A, deriving prices and payoffs for A only.\nThe equations for B are symmetric.\nThe trader's per-share expected value for purchasing an infinitesimal quantity \u03b5 of shares of A is E [\u03b5 shares] \/ \u03b5 = Pr (A) \u00b7 E [P1|A] \u2212 (1 \u2212 Pr (A)) \u00b7 p1, where \u03b5 is an infinitesimal quantity of shares of A, Pr (A) is the trader's belief in the probability of A, and p1 is the instantaneous price per share of A for an infinitesimal quantity of shares.\nE [P1|A] is the trader's expectation of the payoff per share of A after the market closes and given that A occurs.\nThis is a subtle point.\nThe value of P1 does not matter if B occurs, since in this case shares of A are worthless, and the current value of P1 does not necessarily matter as this may change as trading continues.\nSo, in order to determine the expected value of shares of A, the trader must estimate what he or she expects the payoff per share to be in the end (after the market closes) if A occurs.\nIf E [\u03b5 shares] \/ \u03b5 > 0, a risk-neutral trader should purchase shares of A.\nHow many shares?\nThis depends on the price function determining p1.\nIn general, p1 increases as more shares are purchased.\nThe risk-neutral trader should continue purchasing shares until E [\u03b5 shares] \/ \u03b5 = 0.\n(A risk-averse trader will generally stop purchasing shares before driving E [\u03b5 shares] \/ \u03b5 all the way to zero.)\nAssuming risk-neutrality, the trader's optimization problem is to choose a number of shares n > 0 of A to purchase, in order to maximize E [n shares] = Pr (A) \u00b7 E [P1|A] \u00b7 n \u2212 (1 \u2212 Pr (A)) \u00b7 \u222b0n p1 (x) dx.\nIt's easy to see that the same value of n can be solved for 
by finding the number of shares required to drive E[ε shares]\/ε to zero.\nThat is, find n > 0 satisfying 0 = Pr(A) · E[P1|A] − (1 − Pr(A)) · p1(n), if such an n exists; otherwise n = 0.\n4.1 Market probability\nAs traders who believe that E[ε shares of A]\/ε > 0 purchase shares of A and traders who believe that E[ε shares of B]\/ε > 0 purchase shares of B, the prices p1 and p2 change according to a price function, as prescribed below.\nThe current prices in a sense reflect the market's opinion as a whole of the relative probabilities of A and B. Assuming an efficient marketplace, the market as a whole considers E[ε shares]\/ε = 0, since the mechanism is a zero-sum game.\nFor example, if market participants in aggregate felt that E[ε shares]\/ε > 0, then there would be net demand for A, driving up the price of A until E[ε shares]\/ε = 0.\nDefine MPr(A) to be the market probability of A, or the probability of A inferred by assuming that E[ε shares]\/ε = 0.\nWe can consider MPr(A) to be the aggregate probability of A as judged by the market as a whole.\nMPr(A) is the solution to\n0 = MPr(A) · E[P1|A] − (1 − MPr(A)) · p1. (2)\nAt this point we make a critical assumption in order to greatly simplify the analysis; we assume that\nE[P1|A] = P1. (3)\nThat is, we assume that the current value for the payoff per share of A is the same as the expected final value of the payoff per share of A given that A occurs.\nThis is certainly true for the last (infinitesimal) wager before the market closes.\nIt's not obvious, however, that the assumption is true well before the market's close.\nBasically, we are assuming that the value of P1 moves according to an unbiased random walk: the current value of P1 is the best expectation of its future value.\nI conjecture that there are reasonable market efficiency conditions under which assumption (3) is true, though I have not been able to prove that it arises naturally from rational trading.\nWe examine scenarios below in which assumption (3) seems especially
plausible.\nNonetheless, the assumption affects our analysis only.\nRegardless of whether (3) is true, each price function derived below implies a well-defined zero-sum game in which traders can play.\nIf traders can assume that (3) is true, then their optimization problem (1) is greatly simplified; however, optimizing (1) does not depend on the assumption, and traders can still optimize by strategically projecting the final expected payoff in whatever complicated way they desire.\nSo, the utility of DPM for hedging and speculating does not necessarily hinge on the truth of assumption (3).\nOn the other hand, the ability to easily infer an aggregate market consensus probability from market prices does depend on (3).\n4.2 Price functions\nA variety of price functions seem reasonable, each exhibiting various properties, and implying differing market probabilities.\n4.2.1 Price function I: Price of A equals payoff of B\nOne natural price function to consider is to set the price per share of A equal to the payoff per share of B, and set the price per share of B equal to the payoff per share of A.\nThat is,\np1 = P2 and p2 = P1. (4)\nEnforcing this relationship reduces the dimensionality of the system from four to two, simplifying the interface: traders need only track two numbers instead of four.\nThe relationship makes sense, since new information supporting A should encourage purchasing of shares of A, driving up both the price of A and the payoff of B, and driving down the price of B and the payoff of A.\nIn this setting, assumption (3) seems especially reasonable, since if an efficient market hypothesis leads prices to follow a random walk, then payoffs must also follow a random walk.\nThe constraints (4) lead to the following derivation of the market probability:\nMPr(A) = p1\/(p1 + P1) = (M1\/N2)\/(M1\/N2 + M2\/N1) = M1N1\/(M1N1 + M2N2).\nThe constraints (4) specify the instantaneous relationship between payoff and price.\nFrom this, we can derive how prices change when (non-infinitesimal) shares are purchased.\nLet n be the number of shares purchased and let m be the
amount of money spent purchasing n shares.\nNote that p1 = dm\/dn, the instantaneous price per share, so dm\/dn = (M1 + m)\/N2, and solving this differential equation gives\nm(n) = M1(e^(n\/N2) − 1), (6)\nwith p1(n) = dm\/dn = (M1\/N2)e^(n\/N2).\nNote that p1(0) = M1\/N2 = P2 as required.\nThe derivation of the price function p2(n) for B is analogous and the results are symmetric.\nThe notion of buying infinitesimal shares, or integrating costs over a continuous function, is probably foreign to most traders.\nA more standard interface can be implemented by discretizing the costs into round lots of shares, for example lots of 100 shares.\nThen ask orders of 100 shares each at the appropriate price can be automatically placed by the market institution.\nFor example, the market institution can place an ask order for 100 shares at price m(100)\/100, another ask order for 100 shares at price (m(200) − m(100))\/100, a third ask for 100 shares at (m(300) − m(200))\/100, etc.\nIn this way, the market looks more familiar to traders, like a typical CDA with a number of ask orders at various prices automatically available.\nA trader buying less than 100 shares would pay a bit more than if the true cost were computed using (6), but the discretized interface would probably be more intuitive and transparent to the majority of traders.\nThe above equations assume that all money that comes in is eventually returned or redistributed.\nIn other words, the mechanism is a zero-sum game, and the market institution takes no portion of the money.\nThis could be generalized so that the market institution always takes a certain amount, or a certain percent, or a certain amount per transaction, or a certain percent per transaction, before money is returned or redistributed.\nFinally, note that the above price function is undefined when the amount bet or the number of shares is zero.\nSo the system must begin with some positive amount on both sides, and some positive number of shares outstanding on both sides.\nThese initial amounts can be arbitrarily small in principle, but the size of the initial
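subsidy still matters in practice.

To make the round-lot discretization concrete, here is a small sketch (my own illustration, not code from the paper) of the cost function m(n) = M1(e^(n/N2) − 1) and the resulting ladder of automatically placed ask orders; the state numbers are made up.

```python
import math

def cost(n, M1, N2):
    """Cost m(n) of buying n shares of A under price function I:
    m(n) = M1*(exp(n/N2) - 1), so p1(n) = dm/dn = (M1/N2)*exp(n/N2)."""
    return M1 * (math.exp(n / N2) - 1.0)

def ask_ladder(M1, N2, lot=100, levels=3):
    """Discretize the continuous cost into round-lot ask orders: the k-th lot
    of `lot` shares is offered at per-share price (m(k*lot) - m((k-1)*lot))/lot."""
    return [(cost(k * lot, M1, N2) - cost((k - 1) * lot, M1, N2)) / lot
            for k in range(1, levels + 1)]

# Made-up state: $100 wagered on A, 1000 shares of B outstanding.
ladder = ask_ladder(M1=100.0, N2=1000.0)
```

Each successive 100-share lot is offered at a slightly higher per-share price, approximating the continuous price function. As noted, the size of the initial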
subsidy may affect the incentives of traders to participate.\nAlso, the smaller the initial amounts, the more each new dollar affects the prices.\nThe initialization amounts could be funded as a subsidy from the market institution or a patron, which I'll call a seed wager, or from a portion of the fees charged, which I'll call an ante wager.\n4.2.2 Price function II: Price of A proportional to money on A\nA second price function can be derived by requiring the ratio of prices to be equal to the ratio of money wagered.\nThat is,\np1\/p2 = M1\/M2. (8)\nIn other words, the price of A is proportional to the amount of money wagered on A, and similarly for B.\nThis seems like a particularly natural way to set the price, since the more money that is wagered on one side, the cheaper becomes a share on the other side, in exactly the same proportion.\nUsing Equation 8, along with (2) and (3), we can derive the implied market probability:\nMPr(A) = M1√N1\/(M1√N1 + M2√N2),\nwith corresponding instantaneous price p1 = M1\/√(N1N2).\nWorking from the above instantaneous price, we can derive the implied cost function m as a function of the number n of shares purchased as follows:\nm(n) = M1(e^(2(√(N1+n) − √N1)\/√N2) − 1).\nThis equation gives the cost of purchasing n shares.\nThe instantaneous price per share as a function of n is\np1(n) = (M1 + m(n))\/√((N1 + n)N2).\nNote that, as required, p1(0) = M1\/√(N1N2), and p1(0)\/p2(0) = M1\/M2.\nIf one uses the above price function, then the market dynamics will be such that the ratio of the (instantaneous) prices of A and B always equals the ratio of the amounts wagered on A and B, which seems fairly natural.\nNote that, as before, the mechanism can be modified to collect transaction fees of some kind.\nAlso note that seed or ante wagers are required to initialize the system.\n5.\nDPM II: ALL MONEY REDISTRIBUTED\nAbove we examined the policy of refunding winning wagers and redistributing only losing wagers.\nIn this section we consider the second policy mentioned in Section 3.3: all money from all wagers is collected and redistributed to winning wagers.\nFor the case where all money is redistributed, the respective payoffs per
share are:\nP1 = T\/N1 and P2 = T\/N2,\nwhere T = M1 + M2 is the total amount of money wagered on both sides.\nSo, if A occurs, shareholders of A lose their initial price paid, but receive P1 dollars per share owned; shareholders of B simply lose all money wagered.\nSimilarly, if B occurs, shareholders of B lose their initial price paid, but receive P2 dollars per share owned; shareholders of A lose all money wagered.\nIn this case, the trader's per-share expected value for purchasing an infinitesimal quantity ε of shares of A is\nE[ε shares]\/ε = Pr(A) · E[P1|A] − p1.\nThe same value of n can be solved for by finding the number of shares required to drive E[ε shares]\/ε to zero.\nThat is, find n ≥ 0 satisfying\n0 = Pr(A) · E[P1|A] − p1(n),\nif such an n exists; otherwise n = 0.\n5.1 Market probability\nIn this case MPr(A), the aggregate probability of A as judged by the market as a whole, is the solution to\n0 = MPr(A) · E[P1|A] − p1. (15)\nAs before, we make the simplifying assumption (3) that the expected final payoff per share equals the current payoff per share.\nThe assumption is critical for our analysis, but may not be required for a practical implementation.\n5.2 Price functions\nFor the case where all money is distributed, the constraints (4) that keep the price of A equal to the payoff of B, and vice versa, do not lead to the derivation of a coherent price function.\nA reasonable price function can be derived from the constraint (8) employed in Section 4.2.2, where we require the ratio of prices to be equal to the ratio of money wagered.\nThat is, p1\/p2 = M1\/M2.\nIn other words, the price of A is proportional to the amount of money wagered on A, and similarly for B.\nUsing Equations 3, 8, and 15 we can derive the implied market probability:\nMPr(A) = M1N1\/(M1N1 + M2N2).\nInterestingly, this is the same market probability derived in Section 4.2.1 for the case of losing-money redistribution with the constraints that the price of A equal the payoff of B and vice versa.\nThe instantaneous price per share for an infinitesimal quantity of shares is:\np1 = M1T\/(M1N1 + M2N2).\nWorking from the above instantaneous price, we can
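compute the number of shares purchasable for m dollars numerically.

The following sketch (my own illustration, not code from the paper) Euler-integrates dn/dm = 1/p1 for DPM II, using the instantaneous price p1 = M1·T/(M1·N1 + M2·N2) with T = M1 + M2 and updating the state as money comes in; the step count and example numbers are arbitrary.

```python
def shares_for_money(m, M1, M2, N1, N2, steps=10000):
    """Numerically invert the DPM II price dynamics: integrate dn/dm = 1/p1,
    where p1 = M1*T/(M1*N1 + M2*N2), T = M1 + M2, and buying m dollars of A
    updates M1, N1, and T along the way."""
    if m <= 0.0:
        return 0.0
    dm = m / steps
    n, m1, n1 = 0.0, M1, N1
    for _ in range(steps):
        T = m1 + M2
        p1 = m1 * T / (m1 * n1 + M2 * N2)  # current instantaneous price of A
        dn = dm / p1
        n += dn
        n1 += dn
        m1 += dm
    return n

# Made-up symmetric state: $100 and 100 shares on each side; buy $1 of A.
# The price starts at exactly 1.0 and rises as the purchase proceeds.
n_bought = shares_for_money(1.0, 100.0, 100.0, 100.0, 100.0)
```

Because the price rises during the purchase, slightly fewer than m/p1(0) shares are obtained. Analytically, working from the same instantaneous price, we can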
derive the number of shares n that can be purchased for m dollars.\nNote that we solved for n(m) rather than m(n).\nI could not find a closed-form solution for m(n), as was derived for the two other cases above.\nStill, n(m) can be used to determine how many shares can be purchased for m dollars, and the inverse function can be approximated to any degree numerically.\nFrom n(m) we can also compute the price function.\nNote that, as required, p1(0)\/p2(0) = M1\/M2.\nIf one uses the above price function, then the market dynamics will be such that the ratio of the (instantaneous) prices of A and B always equals the ratio of the amounts wagered on A and B.\nThis price function has another desirable property: it acts such that the expected value of wagering $1 on A and simultaneously wagering $1 on B equals zero, assuming (3).\nThat is, E[$1 of A + $1 of B] = 0.\nThe derivation is omitted.\n5.3 Comparing DPM I and II\nThe main advantage of refunding winning wagers (DPM I) is that every bet on the winning outcome is guaranteed to at least break even.\nThe main disadvantage of refunding winning wagers is that shares are not homogeneous: each share of A, for example, is actually composed of two distinct parts: (1) the refund, or a lottery ticket that pays $p if A occurs, where p is the price paid per share, and (2) one share of the final payoff ($P1) if A occurs.\nThis complicates the implementation of an aftermarket to cash out of the market early, which we will examine below in Section 7.\nWhen all money is redistributed (DPM II), shares are homogeneous: each share entitles its owner to an equal slice of the final payoff.\nBecause shares are homogeneous, the implementation of an aftermarket is straightforward, as we shall see in Section 7.\nOn the other hand, because initial prices paid are not refunded for winning bets, there is a chance that, if prices swing wildly enough, a wager on the correct outcome might actually lose money.\nTraders
must be aware that if they buy in at an excessively high price that later tumbles, allowing many others to get in at a much lower price, they may lose money in the end regardless of the outcome.\nFrom informal experiments, I don't believe this eventuality would be common, but nonetheless it requires care in communicating to traders the possible risks.\nOne potential fix would be for the market maker to keep track of when the price is going too low, endangering an investor on the correct outcome.\nAt this point, the market maker could artificially stop lowering the price.\nSell orders in the aftermarket might still come in below the market maker's price, but in this way the system could ensure that every wager on the correct outcome at least breaks even.\n6.\nOTHER VARIATIONS\nA simple ascending price function would set p1 = αM1 and p2 = αM2, where α > 0.\nIn this case, prices would only go up.\nFor the case of all money being redistributed, this would eliminate the possibility of losing money on a wager on the correct outcome.\nEven though the market maker's price only rises, the going price may fall well below the market maker's price, as ask orders are placed in the aftermarket.\nI have derived price functions for several other cases, using the same methodology above.\nEach price function may have its own desirable properties, but it's not clear which is best, or even that a single best method exists.\nFurther analyses and, more importantly, empirical investigations are required to answer these questions.\n7.\nAFTERMARKETS\nA key advantage of DPM over a standard pari-mutuel market is the ability to cash out of the market before it closes, in order to take a profit or limit a loss.\nThis is accomplished by allowing traders to place ask orders on the same queue as the market maker.\nSo traders can sell the shares that they purchased at or below the price set by the market maker.\nOr traders can place a limit sell order at any price.\nBuyers will
purchase any existing shares for sale at the lower prices first, before purchasing new shares from the market maker.\n7.1 Aftermarket for DPM II\nFor the second main case explored above, where all money is redistributed, allowing an aftermarket is simple.\nIn fact, "aftermarket" may be a poor descriptor: buying and selling are both fully integrated into the same mechanism.\nEvery share is worth precisely the same amount, so traders can simply place ask orders on the same queue as the market maker in order to sell their shares.\nNew buyers will accept the lowest ask price, whether it comes from the market maker or another trader.\nIn this way, traders can cash out early and walk away with their current profit or loss, assuming they can find a willing buyer.\n7.2 Aftermarket for DPM I\nWhen winning wagers are refunded and only losing wagers are redistributed, each share is potentially worth a different amount, depending on how much was paid for it, so it is not as simple a matter to set up an aftermarket.\nHowever, an aftermarket is still possible.\nIn fact, much of the complexity can be hidden from traders, so it looks nearly as simple as placing a sell order on the queue.\nIn this case shares are not homogeneous: each share of A is actually composed of two distinct parts: (1) the refund of P · 1A dollars, and (2) the payoff of P1 · 1A dollars, where P is the per-share price paid and 1A is the indicator function equalling 1 if A occurs, and 0 otherwise.\nOne can imagine running two separate aftermarkets where people can sell these two respective components.\nHowever, it is possible to automate the two aftermarkets, by automatically bundling them together in the correct ratio and selling them in the central DPM.\nIn this way, traders can cash out by placing sell orders on the same queue as the DPM market maker, effectively hiding the complexity of explicitly having two separate aftermarkets.\nThe bundling mechanism works as follows.\nSuppose the current
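price for one share of A is p1 dollars; the buyer's payment is split between the two component aftermarkets.

A small sketch of that split (my own illustration, not code from the paper): under assumption (3), MPr(A) = p1/(p1 + P1), which implies that the payoff component P1·1A is worth exactly P1·MPr(A) = p1·MPr(B), so the two sellers' proceeds sum to the buyer's payment.

```python
def bundle_settlement(p1, mpr_a):
    """Split a buyer's payment of p1 dollars for one share of A between the two
    component aftermarkets of DPM I.  With MPr(A) = p1/(p1 + P1), the refund
    component p1*1A is worth p1*MPr(A) and the payoff component P1*1A is worth
    P1*MPr(A) = p1*MPr(B); the proceeds sum to p1."""
    refund_seller = p1 * mpr_a          # seller of the refund component
    payoff_seller = p1 * (1.0 - mpr_a)  # seller of the payoff component
    return refund_seller, payoff_seller

# Made-up numbers: price $10, market probability of A is 0.4.
refund_proceeds, payoff_proceeds = bundle_settlement(10.0, 0.4)
```

In more detail, suppose the current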
price for 1 share of A is p1.\nA buyer agrees to purchase the share at p1.\nThe buyer pays p1 dollars and receives p1 · 1A + P1 · 1A dollars.\nIf there is enough inventory in the aftermarkets, the buyer's share is constructed by bundling together p1 · 1A from the first aftermarket, and P1 · 1A from the second aftermarket.\nThe seller in the first aftermarket receives p1MPr(A) dollars, and the seller in the second aftermarket receives p1MPr(B) dollars.\n7.3 Pseudo aftermarket for DPM I\nThere is an alternative "pseudo aftermarket" that's possible for the case of DPM I that does not require bundling.\nConsider a share of A purchased for $5.\nThe share is composed of $5 · 1A and $P1 · 1A.\nNow suppose the current price has moved from $5 to $10 per share and the trader wants to cash out at a profit.\nThe trader can sell 1\/2 share at market price (1\/2 share for $5), receiving all of the initial $5 investment back, and retaining 1\/2 share of A.\nThe 1\/2 share is worth either some positive amount, or nothing, depending on the outcome and the final payoff.\nSo the trader is left with shares worth a positive expected value and all of his or her initial investment.\nThe trader has essentially cashed out and locked in his or her gains.\nNow suppose instead that the price moves downward, from $5 to $2 per share.\nThe trader decides to limit his or her loss by selling the share for $2.\nThe buyer gets the 1 share plus $2 · 1A (the buyer's price refunded).\nThe trader (seller) gets the $2 plus what remains of the original price refunded, or $3 · 1A.\nThe trader's loss is now limited to $3 at most instead of $5.\nIf A occurs, the trader breaks even; if B occurs, the trader loses $3.\nAlso note that--in either DPM formulation--traders can always "hedge sell" by buying the opposite outcome without the need for any type of aftermarket.\n8.\nCONCLUSIONS\nI have presented a new market mechanism for wagering on, or hedging against, a
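future uncertain event.

Before concluding, the pseudo-aftermarket arithmetic above can be sketched as follows (my own illustration; the $5, $10, and $2 figures mirror the example in the text, and the function names are hypothetical).

```python
def cash_out_gain(buy_price, cur_price):
    """Rising price (DPM I pseudo aftermarket): sell buy_price/cur_price of the
    share at cur_price, recovering the full initial investment and keeping the
    rest of the share as pure upside."""
    sold_fraction = buy_price / cur_price
    proceeds = sold_fraction * cur_price  # equals buy_price
    retained = 1.0 - sold_fraction        # residual share, all upside
    return proceeds, retained

def cash_out_loss(buy_price, cur_price):
    """Falling price: sell the whole share at cur_price; the seller keeps
    cur_price in cash plus (buy_price - cur_price)*1A of the original refund,
    capping the worst-case loss at buy_price - cur_price."""
    cash_now = cur_price
    refund_if_a = buy_price - cur_price
    worst_case_loss = buy_price - cash_now
    return cash_now, refund_if_a, worst_case_loss

# The $5 -> $10 and $5 -> $2 examples from the text:
gain = cash_out_gain(5.0, 10.0)   # (5.0, 0.5): $5 back, half a share kept
loss = cash_out_loss(5.0, 2.0)    # (2.0, 3.0, 3.0): loss capped at $3
```

To restate: I have presented a new market mechanism for wagering on, or hedging against, a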
future uncertain event, called a dynamic pari-mutuel market (DPM).\nThe mechanism combines the infinite liquidity and risk-free nature of a pari-mutuel market with the dynamic nature of a CDA, making it suitable for continuous information aggregation.\nTo my knowledge, all existing mechanisms--including the standard pari-mutuel market, the CDA, the CDAwMM, the bookie mechanism, and the MSR--exhibit at most two of the three properties.\nAn MSR is the closest to a DPM in terms of these properties, if not in terms of mechanics.\nGiven some natural constraints on price dynamics, I have derived in closed form the implied price functions, which encode how prices change continuously as shares are purchased.\nThe interface for traders looks much like the familiar CDA, with the system acting as an automated market maker willing to accept an infinite number of buy orders at some price.\nI have explored two main variations of a DPM: one where only losing money is redistributed, and one where all money is redistributed.\nEach has its own pros and cons, and each supports several reasonable price functions.\nI have described the workings of an aftermarket, so that traders can cash out of the market early, as in a CDA, to lock in their gains or limit their losses, an operation that is not possible in a standard pari-mutuel setting.\n9.\nFUTURE WORK\nThis paper reports the results of an initial investigation of the concept of a dynamic pari-mutuel market.\nMany avenues for future work present themselves, including the following:\n• Random walk conjecture.\nThe most important question mark in my mind is whether the random walk assumption (3) can be proven under reasonable market efficiency conditions and, if not, how severely it affects the practicality of the system.\n• Incentive analysis.\nFormally, what are the incentives for traders to act on new information and when?\nHow does the level of initial subsidy affect trader incentives?\n• Laboratory experiments and field
tests.\nThis paper concentrated on the mathematics and algorithmics of the mechanism.\nHowever, the true test of the mechanism's ability to serve as an instrument for hedging, wagering, or information aggregation is to test it with real traders in a realistic environment.\nIn reality, how do people behave when faced with a DPM mechanism?\n• DPM call market.\nI have derived the price functions to react to wagers on one outcome at a time.\nThe mechanism could be generalized to accept orders on both sides, then update the prices holistically, rather than by assuming a particular sequence on the wagers.\n• Real-valued variables.\nI believe the mechanisms in this paper can easily be generalized to multiple discrete outcomes, and to multiple real-valued outcomes that always sum to some constant value (e.g., multiple percentage values that must sum to 100).\nHowever, the generalization to real-valued variables with arbitrary range is less clear, and open for future development.\n• Compound\/combinatorial betting.\nI believe that DPM may be well suited for compound [8, 11] or combinatorial [2] betting, for many of the same reasons that market scoring rules [11] are well suited for the task.\nDPM may also have some computational advantages over MSR, though this remains to be seen.","keyphrases":["dynam pari-mutuel market","pari-mutuel market","hedg","wager","inform aggreg","risk alloc","inform specul","specul","dpm","hybrid","continu doubl auction","cda","zero risk","market institut","price","gain","loss","sell","event resolut","trader interfac","doubl auction format","bid-ask queue","payoff per share","price function","autom market maker","autom market maker","demand","infinit bui-in liquid","compound secur market","combinatori bet","trade","bet","gambl"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","M","U","U","U","U"]} {"id":"C-83","title":"Concept and Architecture of a Pervasive Document
Editing and Managing System","abstract":"Collaborative document processing has been addressed by many approaches so far, most of which focus on document versioning and collaborative editing. We address this issue from a different angle and describe the concept and architecture of a pervasive document editing and managing system. It exploits database techniques and real-time updating for sophisticated collaboration scenarios on multiple devices. Each user is always served with up-to-date documents and can organize his work based on document meta data. For this, we present our conceptual architecture for such a system and discuss it with an example.","lvl-1":"Concept and Architecture of a Pervasive Document Editing and Managing System Stefania Leone, Thomas B. Hodel, Harald Gall, University of Zurich, Switzerland, Department of Informatics leone@ifi.unizh.ch hodel@ifi.unizh.ch gall@ifi.unizh.ch ABSTRACT Collaborative document processing has been addressed by many approaches so far, most of which focus on document versioning and collaborative editing.\nWe address this issue from a different angle and describe the concept and architecture of a pervasive document editing and managing system.\nIt exploits database techniques and real-time updating for sophisticated collaboration scenarios on multiple devices.\nEach user is always served with up-to-date documents and can organize his work based on document meta data.\nFor this, we present our conceptual architecture for such a system and discuss it with an example.\nCategories and Subject Descriptors C.2.4 Distributed Systems [Computer-Communication Networks]: Computer System Organization, Distributed Systems, Distributed Applications General Terms Management, Measurement, Documentation, Economics, Human Factors 1.\nINTRODUCTION Text documents are a valuable resource for virtually any enterprise and
organization.\nDocuments like papers, reports and general business documentation contain a large part of today's (business) knowledge.\nDocuments are mostly stored in a hierarchical folder structure on file servers and it is difficult to organize them with regard to classification, versioning etc., although it is of utmost importance that users can find, retrieve and edit up-to-date versions of documents whenever they want and in a user-friendly way.\n1.1 Problem Description With most of the commonly used word-processing applications, documents can be manipulated by only one user at a time: tools for pervasive collaborative document editing and management are rarely deployed in today's world.\nDespite the fact that people strive for location- and time-independence, the importance of pervasive collaborative work, i.e. collaborative document editing and management, is totally neglected.\nDocuments could therefore be seen as a vulnerable resource in today's world, which demands an appropriate solution: the need to store, retrieve and edit these documents collaboratively anytime, everywhere, with almost every suitable device and with guaranteed mechanisms for security, consistency, availability and access control is obvious.\nIn addition, word processing systems ignore the fact that the history of a text document contains crucial information for its management.\nSuch meta data includes creation date, creator, authors, version, and location-based information such as time and place when\/where a user reads\/edits a document.\nSuch meta data can be gathered during the document's creation process and can be used in versatile ways.\nEspecially in the field of pervasive document management, meta data is of crucial importance since it offers totally new ways of organizing and classifying documents: On the one hand, the user's actual situation influences the user's objectives.\nMeta data could be used to give the user the best possible view on the documents,
dependent on his actual information.\nOn the other hand, as soon as the user starts to work, i.e. reads or edits a document, new meta data can be gathered in order to adapt the system to the user's situation and to offer future users a better view on the documents.\nAs far as we know, no system exists that satisfies the aforementioned requirements.\nA very good overview of real-time communication and collaboration systems is given in [7].\nWe therefore strive for a pervasive document editing and management system, which enables pervasive (and collaborative) document editing and management: users should be able to read and edit documents whenever, wherever, with whomever and with whatever device.\nIn this paper, we present collaborative database-based real-time word processing, which provides pervasive document editing and management functionality.\nIt enables the user to work on documents collaboratively and offers sophisticated document management facilities: the user is always served with up-to-date documents and can organize and manage documents on the basis of meta data.\nAdditionally, document data is treated as a 'first-class citizen' of the database, as demanded in [1].\n1.2 Underlying Concepts The concept of our pervasive document editing and management system requires an appropriate architectural foundation.\nOur concept and implementation are based on the TeNDaX [3] collaborative database-based document editing and management system, which enables pervasive document editing and managing.\nTeNDaX is a Text Native Database eXtension.\nIt enables the storage of text in databases in a native form so that editing text is finally represented as real-time transactions.\nUnder the term 'text editing' we understand the following: writing and deleting text (characters), copying & pasting text, defining text layout & structure, inserting notes, setting access rights, defining business processes, inserting tables, pictures, and so on
i.e. all the actions regularly carried out by word processing users.\nWith 'real-time transaction' we mean that editing text (e.g. writing a character\/word) invokes one or several database transactions, so that everything which is typed appears within the editor as soon as these objects are stored persistently.\nInstead of creating files and storing them in a file system, the content and all of the meta data belonging to the documents is stored in a special way in the database, which enables very fast real-time transactions for all editing tasks [2].\nThe database schema and the above-mentioned transactions are created in such a way that everything can be done within a multi-user environment, as is usually done by database technology.\nAs a consequence, many of the achievements (with respect to data organization and querying, recovery, integrity and security enforcement, multi-user operation, distribution management, uniform tool access, etc.) are now, by means of this approach, also available for word processing.\n2.\nAPPROACH Our pervasive editing and management system is based on the above-mentioned database-based TeNDaX approach, where document data is stored natively in the database, and supports pervasive collaborative text editing and document management.\nWe define a pervasive document editing and management system as a system where documents can easily be accessed and manipulated everywhere (within the network), anytime (independently of the number of users working on the same document) and with any device (desktop, notebook, PDA, mobile phone etc.).\nFigure 1.\nTeNDaX Application Architecture\nIn contrast to documents stored locally on the hard drive or on a file server, our system automatically serves the user with the up-to-date version of a document, and changes done on the document are stored persistently in the database and immediately propagated to all clients who are
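working on the same document.

As a toy sketch of this character-level storage and update propagation (my own simplified illustration in Python, NOT the actual TeNDaX schema or API), each character object carries is-before/is-after links, and every insert is "broadcast" to registered editors:

```python
class ToyDocumentSession:
    """Toy model: each character is an object with prev/next links, and every
    edit is a 'transaction' propagated to all registered editor callbacks."""
    def __init__(self):
        self.chars = {}      # char_id -> (value, prev_id, next_id)
        self.listeners = []  # connected editors, as callbacks
        self._next_id = 0

    def insert_after(self, prev_id, value):
        """Insert one character after prev_id (None creates the head)."""
        cid = self._next_id
        self._next_id += 1
        nxt = self.chars[prev_id][2] if prev_id is not None else None
        self.chars[cid] = (value, prev_id, nxt)
        if prev_id is not None:
            v, p, _ = self.chars[prev_id]
            self.chars[prev_id] = (v, p, cid)
        if nxt is not None:
            v, _, nx = self.chars[nxt]
            self.chars[nxt] = (v, cid, nx)
        for notify in self.listeners:  # real-time propagation to all editors
            notify(("insert", cid, value))
        return cid

    def text(self):
        """Reassemble the document by walking the character chain from the head."""
        head = next((i for i, (_, p, _) in self.chars.items() if p is None), None)
        out = []
        while head is not None:
            v, _, nxt = self.chars[head]
            out.append(v)
            head = nxt
        return "".join(out)
```

For example, `s = ToyDocumentSession(); h = s.insert_after(None, "H"); s.insert_after(h, "i")` yields `s.text() == "Hi"`, and each insert fires a notification to every connected editor. In the real system, such changes are likewise immediately propagated to all clients who are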
working on the same document.\nAdditionally, meta data gathered during the whole document creation process enables sophisticated document management.\nWith the TeXt SQL API as abstract interface, this approach can be used by any tool and for any device.\nThe system is built on the following components (see Figure 1): An editor in Java implements the presentation layer (A-G in Figure 1).\nThe aim of this layer is the integration into a well-known word-processing application such as OpenOffice.\nThe business logic layer represents the interface between the database and the word-processing application.\nIt consists of the following three components: The application server (marked as AS 1-4 in Figure 1) enables text editing within the database environment and takes care of awareness, security, document management etc., all within a collaborative, real-time and multi-user environment.\nThe real-time server component (marked as RTSC 1-4 in Figure 1) is responsible for the propagation of information, i.e.
updates between all of the connected editors.\nThe storage engine (data layer) primarily stores the content of documents as well as all related meta data within the database.\nDatabases can be distributed in a peer-to-peer network (DB 1-4 in Figure 1).\nIn the following, we will briefly present the database schema, the editor and the real-time server component as well as the concept of dynamic folders, which enables sophisticated document management on the basis of meta data.\n2.1 Application Architecture A database-based real-time collaborative editor allows the same document to be opened and edited simultaneously on the same computer or over a network of several computers and mobile devices.\nAll concurrency issues, as well as message propagation, are solved within this approach, while multiple instances of the same document are being opened [3].\nEach insert or delete action is a database transaction and, as such, is immediately stored persistently in the database and propagated to all clients working on the same document.\n2.1.1 Database Schema As mentioned earlier, text is stored in a native way.\nEach character of a text document is stored as a single object in the database [3].\nWhen storing text in such a native form, the performance of the employed database system is of crucial importance.\nThe concept and performance issues of such a text database are described in [3], collaborative layouting in [2], dynamic collaborative business processes within documents in [5], the text editing creation time meta data model in [6] and the relation to XML databases in [7].\nFigure 2 depicts the core database schema.\nBy connecting a client to the database, a Session instance is created.\nOne important attribute of the Session is the DocumentSession.\nThis attribute refers to DocumentSession instances, which administrate all opened documents.\nFor each opened document, a DocumentSession instance is created.\nThe DocumentSession is important for the real-time
server component, which, in case of a change on a document done by a client, is responsible for sending update information to all the clients working on the same document.\nFigure 2.\nTeNDaX Database Schema (Object Role Modeling Diagram)\nThe DocumentId in the class DocumentSession points to a FileNode instance, and corresponds to the ID of the opened document.\nInstances of the class FileNode either represent a folder node or a document node.\nThe folder node corresponds to a folder of a file system and the document node to that of a file.\nInstances of the class Char represent the characters of a document.\nThe value of a character is stored in the attribute CharacterValue.\nThe sequence is defined by the attributes After and Before of the class Char.\nParticular instances of Char mark the beginning and the end of a document.\nThe methods InsertChars and RemoveChars are used to add and delete characters.\n2.1.2 Editor As seen above, each document is natively stored in the database.\nOur editor does not hold a replica of part of the native text database in the sense of database replicas.\nInstead, it has a so-called image as its replica.\nEven if several authors edit the same text at the same time, they work on one unique document at all times.\nThe system guarantees this unique view.\nEditing a document involves a number of steps: first, getting the required information out of the image; secondly, invoking the corresponding methods within the database; thirdly, changing the image; and fourthly, informing all other clients about the changes.\n2.1.3 Real-Time Server Component The real-time server component is responsible for the real-time propagation of any change on a document made within an editor to all the editors which are working on or have opened the same document.\nWhen an editor connects to the application server, which in turn connects to the database, the
database also establishes a connection to the real-time server component (if there isn't already a connection).\nThe database system informs the real-time server component about each new editor session (session), which the real-time server component administers in its SessionManager.\nThen, the editor as well connects to the real-time server component.\nThe real-time server component adds the editor socket to the client's data structure in the SessionManager and is then ready to communicate.\nEach time a change on a document from an editor is persistently stored in the database, the database sends a message to the real-time server component, which in turn sends the changes to all the editors working on the same document.\nFor this purpose, a special communication protocol is used: the update protocol.\nUpdate Protocol The real-time server component uses the update protocol to communicate with the database and the editors.\nMessages are sent from the database to the real-time server component, which sends the messages to the affected editors.\nThe update protocol consists of different message types.\nMessages consist of two packages: package one contains information for the real-time server component whereas package two is passed to the editors and contains the update information, as depicted in Figure 3.\nFigure 3.\nUpdate Protocol: || RTSC || Parameter | ... | Parameter || || Editor Data || (package one between database system and real-time server component, package two between real-time server component and editors)\nIn the following, two message types are presented: ||u|sessionId,...,sessionId||||editor data|| u: update message, sessionId: Id of the client session With this message type the real-time server component sends the editor data package to all editors specified in the sessionId list.\n||ud|fileId||||editor data|| ud: update document message, fileId: Id of the file With this message type, the real-time server component sends the editor data to all editors who have opened the document with the indicated fileId.\nClass Model Figure 4 depicts the class model as well as the environment of the real-time server component.\nThe environment consists mainly of the editor and the database, but any other client application that could make use of the real-time server component can connect.\nConnectionListener: This class is responsible for the connection to the clients, i.e. to the database and the editors.\nDepending on the connection type (database or editor), the connection is passed to an EditorWorker or a DatabaseMessageWorker instance, respectively.\nEditorWorker: This class manages the connections of type 'editor'.\nThe connection (a socket and its input and output stream) is stored in the SessionManager.\nSessionManager: This class is similar to an 'in-memory database': all editor session information, e.g. the editor sockets, which editor has opened which document etc.
are stored within this data structure.\nDatabaseMessageWorker: This class is responsible for the connections of type 'database'.\nAt run-time, only one connection exists for each database.\nUpdate messages from the database are sent to the DatabaseMessageWorker and, with the help of additional information from the SessionManager, forwarded to the corresponding clients.\nServiceClass: This class offers a set of methods for reading, writing and logging messages.\nFigure 4.\nReal-Time Server Component Class Diagram (packages tdb.mp.editor, tdb.mp.database, tdb.mp.mgmt, tdb.mp.listener, tdb.mp.service; classes EditorWorker, DatabaseMessageWorker, SessionManager, MessageHandler, ConnectionListener, ServiceClass, MessageQueue)\n2.1.4 Dynamic Folders As mentioned above, every editing action invoked by a user is immediately transferred to the database.\nAt the same time, more information about the current transaction is gathered.\nAs all information is stored in the database, one character can hold a multitude of information, which can later be used for the retrieval of documents.\nMeta data is collected at character level, at the level of the document structure (layout, workflow, template, semantics, security and notes), at the level of a document section and at the level of the whole document [6].\nAll of the above-mentioned meta data is crucial information for creating content and knowledge out of word processing documents.\nThis meta data can be used to create an alternative storage system for documents.\nIn any case, it is not an easy task to change users' familiarity with the well-known hierarchical file system.\nThis is also the main reason why we do not completely disregard the classical file system, but rather enhance it.\nFolders which correspond to the classical hierarchical file system will be called static folders.\nFolders where the documents are organized according to meta data will be called dynamic folders.\nAs
all information is stored in the database, the file system, too, is based on the database.\nThe dynamic folders build up sub-trees, which are guided by the meta data selected by the user.\nThus, the first step in using a dynamic folder is the definition of how it should be built.\nFor each level of a dynamic folder, exactly one meta data item is used.\nThe following example illustrates the steps which have to be taken in order to define a dynamic folder, and the meta data which should be used.\nAs a first step, the meta data which will be used for the dynamic folder must be chosen (see Table 1).\nThe sequence of the meta data influences the structure of the folder.\nFurthermore, for each meta data item used, restrictions and granularity must be defined by the user; if no restrictions are defined, all accessible documents are listed.\nThe granularity influences the number of sub-folders which will be created for the partitioning of the documents.\nAs the user enters the tree structure of the dynamic folder, he can navigate through the branches to arrive at the document(s) he is looking for.\nThe directory names indicate which meta data determines the content of the sub-folder in question.\nAt each level, the documents which have so far been found to match the meta data can be inspected.\nTable 1.\nDefining dynamic folders (example)\nLevel 1 - Meta data: Creator; Restrictions: only show documents which have been created by the users Leone, Hodel or Gall; Granularity: one folder per creator.\nLevel 2 - Meta data: Current location; Restrictions: only show documents which were read at my current location; Granularity: one folder per task status.\nLevel 3 - Meta data: Authors; Restrictions: only show documents where at least 40% was written by user 'Leone'; Granularity: one folder per 20%.\nAd-hoc changes of granularity and restrictions are possible in order to maximize search comfort for the user.\nIt is possible to predefine dynamic folders for frequent use, e.g.
a location-based folder, as well as to create and modify dynamic folders on an ad-hoc basis.\nFurthermore, the content of such dynamic folders can change from one second to another, depending on the changes made by other users at that moment.\n3.\nVALIDATION The proposed architecture is validated on the example of a character insertion.\nInsert operations are the most frequently used operations in a (collaborative) editing system.\nThe character insertion is based on the TeNDaX Insert Algorithm, which is formally described in the following.\nThe algorithm is simplified for this purpose.\n3.1 Insert Characters Algorithm The symbol c stands for the object character, p stands for the previous character, n stands for the next character of a character object c, and the symbol l stands for a list of character objects.\nc = character p = previous character n = next character l = list of characters The symbol c1 stands for the first character in the list l, ci stands for a character in the list l at position i, where i is a value between 1 and the length of the list l, and cn stands for the last character in the list l.
c1 = first character in list l ci = character at position i in list l cn = last character in list l The symbol β stands for the special character that marks the beginning of a document and ε stands for the special character that marks the end of a document.\nβ = beginning of document ε = end of document The function startTA starts a transaction.\nstartTA = start transaction The function commitTA commits a transaction that was started.\ncommitTA = commit transaction The function checkWriteAccess checks if the write access for a document session s is granted.\ncheckWriteAccess(s) = check if write access for document session s is granted The function lock acquires an exclusive lock for a character c and returns 1 for success and 0 for no success.\nlock(c) = acquire the lock for character c; success: return 1, no success: return 0 The function releaseLocks releases all locks that a transaction has acquired so far.\nreleaseLocks = release all locks The function getPrevious returns the previous character and getNext returns the next character of a character c.\ngetPrevious(c) = return previous character of character c getNext(c) = return next character of character c The function linkBefore links a preceding character p with a succeeding character x and the function linkAfter links a succeeding character n with a preceding character y.\nlinkBefore(p,x) = link character p to character x linkAfter(n,y) = link character n to character y The function updateString links a character p with the first character c1 of a character list l and a character n with the last character cn of the character list l.\nupdateString(l, p, n) = linkBefore(p, c1) ∧ linkAfter(n, cn) The function insertChar inserts a character c in the table Char with the fields After set to a character p and Before set to a character n.
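As an illustrative sketch only, the character objects and linking functions defined above can be modeled in a few lines of Python; all names are our own, and the sketch deliberately ignores transactions, locking and persistent storage, so it is not the actual TeNDaX implementation:

```python
class Char:
    """A single character object with Before/After links (cf. the Char class in Figure 2)."""
    def __init__(self, value):
        self.value = value    # CharacterValue
        self.before = None    # preceding character
        self.after = None     # succeeding character

def link_before(p, x):
    """linkBefore(p, x): link character p to its successor x."""
    p.after = x

def link_after(n, y):
    """linkAfter(n, y): link character n to its predecessor y."""
    n.before = y

def insert_string(chars, p, n):
    """Insert the character list `chars` between p and n,
    mirroring insertChar and updateString from the text."""
    # chain the new characters among themselves
    for a, b in zip(chars, chars[1:]):
        link_before(a, b)
        link_after(b, a)
    # updateString: hook the first and last new character into the document
    link_before(p, chars[0])
    link_after(chars[0], p)
    link_before(chars[-1], n)
    link_after(n, chars[-1])

def to_text(beta):
    """Walk the After links from the begin marker and collect character values."""
    out, c = [], beta.after
    while c is not None and c.value != "<eps>":
        out.append(c.value)
        c = c.after
    return "".join(out)

# begin/end markers (the β and ε characters of the text)
beta, eps = Char("<beta>"), Char("<eps>")
link_before(beta, eps)
link_after(eps, beta)

insert_string([Char(ch) for ch in "Hi"], beta, eps)
print(to_text(beta))  # -> Hi
```

Because the document order is held entirely in the Before/After links, an insertion only rewrites the links of the two neighbouring characters, which is what makes each edit a small, local database transaction.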
insertChar(c, p, n) = linkAfter(c, p) ∧ linkBefore(c, n) ∧ linkBefore(p, c) ∧ linkAfter(n, c) The function checkPreceding returns the CharacterValue of the character preceding a character c, and whether that preceding character's status is active.\ncheckPreceding(c) = return status and CharacterValue of the previous character The function checkSucceeding returns the CharacterValue of the character succeeding a character c, and whether that succeeding character's status is active.\ncheckSucceeding(c) = return status and CharacterValue of the next character The function checkCharValue determines the CharacterValue of a character c.\ncheckCharValue(c) = return CharacterValue of character c The function sendUpdate sends an update message (UpdateMessage) from the database to the real-time server component.\nsendUpdate(UpdateMessage) The function Read is used in the real-time server component to read the UpdateMessage.\nRead(UpdateMessage) The function AllocateEditors determines, on the basis of the UpdateMessage and the SessionManager, which editors have to be informed.\nAllocateEditors(UpdateMessage, SessionManager) = returns the affected editors The function SendMessage(EditorData) sends the editor part of the UpdateMessage to the editors.\nSendMessage(EditorData) In TeNDaX, the Insert Algorithm is implemented in the class method InsertChars of the class Char, which is depicted in Figure 2.\nThe relevant parameters for the definitions below are introduced in the following list: - nextCharacterOID: OID of the character situated next to the string to be inserted - previousCharacterOID: OID of the character situated previous to the string to be inserted - characterOIDs (List): list of characters which have to be inserted Thus, the insertion of characters can be defined stepwise as follows: Start a transaction.\nstartTA Select the character that is situated before the character that follows the string to be inserted.\ngetPrevious(nextCharacterOID) =
PrevChar(prevCharOID) ⇐ Π After (ϑ OID = nextCharacterOID (Char)) Acquire the lock for the character that is situated in the document before the character that follows the string which shall be inserted.\nlock(prevCharOID) At this time the list characterOIDs contains the characters c1 to cn that shall be inserted.\ncharacterOIDs = { c1, ..., cn } Each character of the string is inserted at the appropriate position by linking the preceding and the succeeding character to it.\nFor each character ci of characterOIDs: insertChar(ci, p, n) whereas ci ∈ { c1, ..., cn } Check if the preceding and succeeding characters are active or if it is the beginning or the end of the document.\ncheckPreceding(prevCharOID) = IsOK(IsActive, CharacterValue) ⇐ Π IsActive, CharacterValue (ϑ OID = prevCharOID (Char)) checkSucceeding(nextCharacterOID) = IsOK(IsActive, CharacterValue) ⇐ Π IsActive, CharacterValue (ϑ OID = nextCharacterOID (Char)) Update the characters before and after the string to be inserted.\nupdateString(characterOIDs, prevCharOID, nextCharacterOID) Release all locks and commit the transaction.\nreleaseLocks commitTA Send the update information to the real-time server component.\nsendUpdate(UpdateMessage) Read the update message and inform the affected editors of the change.\nRead(UpdateMessage) AllocateEditors(UpdateMessage, SessionManager) SendMessage(EditorData) 3.2 Insert Characters Example Figure 1 gives a snapshot of the system, i.e.
of its architecture: four databases are distributed over a peer-to-peer network.\nEach database is connected to an application server (AS) and each application server is connected to a real-time server component (RTSC).\nEditors are connected to one or more real-time server components and to the corresponding databases.\nConsider that editor A (connected to databases 1 and 4) and editor B (connected to databases 1 and 2) are working on the same document stored in database 1.\nEditor B now inserts a character into this document.\nThe insert operation is passed to application server 1, which in turn passes it to database 1, where an insert operation is invoked; the characters are inserted according to the algorithm discussed in the previous section.\nAfter the insertion, database 1 sends an update message (according to the update protocol discussed before) to real-time server component 1 (via AS 1).\nRTSC 1 combines the received update information with the information in its SessionManager and sends the editor data to the affected editors, in this case to editors A and B, where the changes are immediately shown.\nCollaboration conflicts that may occur are resolved as described in [3].\n4.\nSUMMARY With the approach presented in this paper and the implemented prototype, we offer real-time collaborative editing and management of documents stored in a special way in a database.\nWith this approach we provide security, consistency and availability of documents, and consequently offer pervasive document editing and management.\nPervasive document editing and management is enabled by the proposed architecture with the embedded real-time server component, which propagates changes to a document immediately and consequently offers up-to-date documents.\nDocument editing and managing is consequently enabled anywhere, anytime and with any device.\nThe above-described system is implemented in a running prototype.\nThe system will soon be tested in a student
workshop next autumn.\nREFERENCES [1] Abiteboul, S., Agrawal, R., et al.: The Lowell Database Research Self-Assessment.\nMassachusetts, USA, 2003.\n[2] Hodel, T. B., Businger, D., and Dittrich, K. R.: Supporting Collaborative Layouting in Word Processing.\nIEEE International Conference on Cooperative Information Systems (CoopIS), Larnaca, Cyprus, IEEE, 2004.\n[3] Hodel, T. B. and Dittrich, K. R.: Concept and prototype of a collaborative business process environment for document processing.\nData & Knowledge Engineering 52, Special Issue: Collaborative Business Process Technologies (1): 61-120, 2005.\n[4] Hodel, T. B., Dubacher, M., and Dittrich, K. R.: Using Database Management Systems for Collaborative Text Editing.\nACM European Conference of Computer-Supported Cooperative Work (ECSCW CEW 2003), Helsinki, Finland, 2003.\n[5] Hodel, T. B., Gall, H., and Dittrich, K. R.: Dynamic Collaborative Business Processes within Documents.\nACM Special Interest Group on Design of Communication (SIGDOC), Memphis, USA, 2004.\n[6] Hodel, T. B., Hacmac, R., and Dittrich, K. R.: Using Text Editing Creation Time Meta Data for Document Management.\nConference on Advanced Information Systems Engineering (CAiSE'05), Porto, Portugal, Springer Lecture Notes, 2005.\n[7] Hodel, T. B., Specker, F., and Dittrich, K.
R.: Embedded SOAP Server on the Operating System Level for ad-hoc Automatic Real-Time Bidirectional Communication.\nInformation Resources Management Association (IRMA), San Diego, USA, 2005.\n[8] O``Kelly, P.: Revolution in Real-Time Communication and Collaboration: For Real This Time.\nApplication Strategies: In-Depth Research Report.\nBurton Group, 2005.\n47","lvl-3":"Concept and Architecture of a Pervasive Document Editing and Managing System\nABSTRACT\nCollaborative document processing has been addressed by many approaches so far, most of which focus on document versioning and collaborative editing.\nWe address this issue from a different angle and describe the concept and architecture of a pervasive document editing and managing system.\nIt exploits database techniques and real-time updating for sophisticated collaboration scenarios on multiple devices.\nEach user is always served with upto-date documents and can organize his work based on document meta data.\nFor this, we present our conceptual architecture for such a system and discuss it with an example.\n1.\nINTRODUCTION\nText documents are a valuable resource for virtually any enterprise and organization.\nDocuments like papers, reports and general business documentations contain a large part of today's (business) knowledge.\nDocuments are mostly stored in a hierarchical folder structure on file servers and it is difficult to organize them in regard to classification, versioning etc., although it is of utmost importance that users can find, retrieve and edit up-to-date versions of documents whenever they want and, in a user-friendly way.\n1.1 Problem Description\nWith most of the commonly used word-processing applications documents can be manipulated by only one user at a time: tools for pervasive collaborative document editing and management, are rarely deployed in today's world.\nDespite the fact, that people strive for location - and time - independence, the importance of pervasive collaborative work, 
i.e. collaborative document editing and management is totally neglected.\nDocuments could therefore be\nseen as a vulnerable source in today's world, which demands for an appropriate solution: The need to store, retrieve and edit these documents collaboratively anytime, everywhere and with almost every suitable device and with guaranteed mechanisms for security, consistency, availability and access control, is obvious.\nIn addition, word processing systems ignore the fact that the history of a text document contains crucial information for its management.\nSuch meta data includes creation date, creator, authors, version, location-based information such as time and place when\/where a user reads\/edits a document and so on.\nSuch meta data can be gathered during the documents creation process and can be used versatilely.\nEspecially in the field of pervasive document management, meta data is of crucial importance since it offers totally new ways of organizing and classifying documents: On the one hand, the user's actual situation influences the user's objectives.\nMeta data could be used to give the user the best possible view on the documents, dependent of his actual information.\nOn the other hand, as soon as the user starts to work, i.e. 
reads or edits a document, new meta data can be gathered in order to make the system more adaptable and in a sense to the users situation and, to offer future users a better view on the documents.\nAs far as we know, no system exists, that satisfies the aforementioned requirements.\nA very good overview about realtime communication and collaboration system is described in [7].\nWe therefore strive for a pervasive document editing and management system, which enables pervasive (and collaborative) document editing and management: users should be able to read and edit documents whenever, wherever, with whomever and with whatever device.\nIn this paper, we present collaborative database-based real-time word processing, which provides pervasive document editing and management functionality.\nIt enables the user to work on documents collaboratively and offers sophisticated document management facility: the user is always served with up-to-date documents and can organize and manage documents on the base of meta data.\nAdditionally document data is treated as ` first class citizen' of the database as demanded in [1].\n1.2 Underlying Concepts\nThe concept of our pervasive document editing and management system requires an appropriate architectural foundation.\nOur concept and implementation are based on the TeNDaX [3] collaborative database-based document editing and management system, which enables pervasive document editing and managing.\nTeNDaX is a Text Native Database eXtension.\nIt enables the storage of text in databases in a native form so that editing text is finally represented as real-time transactions.\nUnder the term ` text editing' we understand the following: writing and deleting text\n(characters), copying & pasting text, defining text layout & structure, inserting notes, setting access rights, defining business processes, inserting tables, pictures, and so on i.e. 
all the actions regularly carried out by word processing users.\nWith ` real-time transaction' we mean that editing text (e.g. writing a character\/word) invokes one or several database transactions so that everything, which is typed appears within the editor as soon as these objects are stored persistently.\nInstead of creating files and storing them in a file system, the content and all of the meta data belonging to the documents is stored in a special way in the database, which enables very fast real-time transactions for all editing tasks [2].\nThe database schema and the above-mentioned transactions are created in such a way that everything can be done within a multiuser environment, as is usual done by database technology.\nAs a consequence, many of the achievements (with respect to data organization and querying, recovery, integrity and security enforcement, multi-user operation, distribution management, uniform tool access, etc.) are now, by means of this approach, also available for word processing.\n2.\nAPPROACH\n2.1 Application Architecture\n2.1.1 Database Schema\n2.1.3 Real-Time Server Component\n2.1.2 Editor\nUpdate Protocol\nClass Model\n2.1.4 Dynamic Folders\n3.\nVALIDATION\n3.1 Insert Characters Algorithm\n3.2 Insert Characters Example\n4.\nSUMMARY\nWith the approach presented in this paper and the implemented prototype, we offer real-time collaborative editing and management of documents stored in a special way in a database.\nWith this approach we provide security, consistency and availability of documents and consequently offer pervasive document editing and management.\nPervasive document editing and management is enabled due to the proposed architecture with the embedded real\ntime server component, which propagates changes to a document immediately and consequently offers up-to-date documents.\nDocument editing and managing is consequently enabled anywhere, anytime and with any device.\nThe above-descried system is implemented in a running 
prototype.\nThe system will be tested soon in line with a student workshop next autumn.","lvl-4":"Concept and Architecture of a Pervasive Document Editing and Managing System\nABSTRACT\nCollaborative document processing has been addressed by many approaches so far, most of which focus on document versioning and collaborative editing.\nWe address this issue from a different angle and describe the concept and architecture of a pervasive document editing and managing system.\nIt exploits database techniques and real-time updating for sophisticated collaboration scenarios on multiple devices.\nEach user is always served with upto-date documents and can organize his work based on document meta data.\nFor this, we present our conceptual architecture for such a system and discuss it with an example.\n1.\nINTRODUCTION\nText documents are a valuable resource for virtually any enterprise and organization.\nDocuments like papers, reports and general business documentations contain a large part of today's (business) knowledge.\n1.1 Problem Description\nWith most of the commonly used word-processing applications documents can be manipulated by only one user at a time: tools for pervasive collaborative document editing and management, are rarely deployed in today's world.\nDespite the fact, that people strive for location - and time - independence, the importance of pervasive collaborative work, i.e. 
collaborative document editing and management is totally neglected.\nDocuments could therefore be\nIn addition, word processing systems ignore the fact that the history of a text document contains crucial information for its management.\nSuch meta data includes creation date, creator, authors, version, location-based information such as time and place when\/where a user reads\/edits a document and so on.\nSuch meta data can be gathered during the documents creation process and can be used versatilely.\nEspecially in the field of pervasive document management, meta data is of crucial importance since it offers totally new ways of organizing and classifying documents: On the one hand, the user's actual situation influences the user's objectives.\nMeta data could be used to give the user the best possible view on the documents, dependent of his actual information.\nOn the other hand, as soon as the user starts to work, i.e. reads or edits a document, new meta data can be gathered in order to make the system more adaptable and in a sense to the users situation and, to offer future users a better view on the documents.\nAs far as we know, no system exists, that satisfies the aforementioned requirements.\nA very good overview about realtime communication and collaboration system is described in [7].\nWe therefore strive for a pervasive document editing and management system, which enables pervasive (and collaborative) document editing and management: users should be able to read and edit documents whenever, wherever, with whomever and with whatever device.\nIn this paper, we present collaborative database-based real-time word processing, which provides pervasive document editing and management functionality.\nIt enables the user to work on documents collaboratively and offers sophisticated document management facility: the user is always served with up-to-date documents and can organize and manage documents on the base of meta data.\nAdditionally document data is treated 
as a 'first class citizen' of the database, as demanded in [1].\n1.2 Underlying Concepts\nThe concept of our pervasive document editing and management system requires an appropriate architectural foundation.\nOur concept and implementation are based on TeNDaX [3], a collaborative database-based document editing and management system that enables pervasive document editing and managing.\nTeNDaX is a Text Native Database eXtension.\nIt enables the storage of text in databases in a native form, so that editing text is represented as real-time transactions.\nUnder the term 'text editing' we understand the following: writing and deleting text (characters), copying & pasting text, and so on.\nInstead of creating files and storing them in a file system, the content and all of the meta data belonging to the documents is stored in a special way in the database, which enables very fast real-time transactions for all editing tasks [2].\n4.\nSUMMARY\nWith the approach presented in this paper and the implemented prototype, we offer real-time collaborative editing and management of documents stored in a special way in a database.\nWith this approach we provide security, consistency and availability of documents, and consequently offer pervasive document editing and management.\nPervasive document editing and management is enabled by the proposed architecture with the embedded real-time server component, which propagates changes to a document immediately and consequently offers up-to-date documents.\nDocument editing and managing is consequently enabled anywhere, anytime and with any device.\nThe above-described system is implemented in a running prototype.\nThe system will soon be tested in a student workshop next autumn.","lvl-2":"Concept and Architecture of a Pervasive Document Editing and Managing System\nABSTRACT\nCollaborative document processing has been addressed by many approaches so far, most of which focus on document versioning and collaborative editing.\nWe address this issue from a different 
angle and describe the concept and architecture of a pervasive document editing and managing system.\nIt exploits database techniques and real-time updating for sophisticated collaboration scenarios on multiple devices.\nEach user is always served with up-to-date documents and can organize his work based on document meta data.\nFor this, we present our conceptual architecture for such a system and discuss it with an example.\n1.\nINTRODUCTION\nText documents are a valuable resource for virtually any enterprise and organization.\nDocuments like papers, reports and general business documentation contain a large part of today's (business) knowledge.\nDocuments are mostly stored in a hierarchical folder structure on file servers, and it is difficult to organize them with regard to classification, versioning etc., although it is of utmost importance that users can find, retrieve and edit up-to-date versions of documents whenever they want and in a user-friendly way.\n1.1 Problem Description\nWith most of the commonly used word-processing applications, documents can be manipulated by only one user at a time: tools for pervasive collaborative document editing and management are rarely deployed in today's world.\nDespite the fact that people strive for location- and time-independence, the importance of pervasive collaborative work, i.e. 
collaborative document editing and management, is totally neglected.\nDocuments could therefore be seen as a vulnerable resource in today's world, which demands an appropriate solution: the need to store, retrieve and edit these documents collaboratively anytime, everywhere, with almost every suitable device, and with guaranteed mechanisms for security, consistency, availability and access control, is obvious.\nIn addition, word processing systems ignore the fact that the history of a text document contains crucial information for its management.\nSuch meta data includes creation date, creator, authors, version, and location-based information such as the time and place when\/where a user reads\/edits a document.\nSuch meta data can be gathered during the document's creation process and can be used in versatile ways.\nEspecially in the field of pervasive document management, meta data is of crucial importance, since it offers entirely new ways of organizing and classifying documents: On the one hand, the user's current situation influences the user's objectives.\nMeta data can be used to give the user the best possible view of the documents, depending on his current situation.\nOn the other hand, as soon as the user starts to work, i.e. 
reads or edits a document, new meta data can be gathered in order to adapt the system to the user's situation and to offer future users a better view of the documents.\nAs far as we know, no system exists that satisfies the aforementioned requirements.\nA very good overview of real-time communication and collaboration systems is given in [7].\nWe therefore strive for a pervasive document editing and management system: users should be able to read and edit documents whenever, wherever, with whomever and with whatever device.\nIn this paper, we present collaborative database-based real-time word processing, which provides pervasive document editing and management functionality.\nIt enables the user to work on documents collaboratively and offers a sophisticated document management facility: the user is always served with up-to-date documents and can organize and manage documents on the basis of meta data.\nAdditionally, document data is treated as a 'first class citizen' of the database, as demanded in [1].\n1.2 Underlying Concepts\nThe concept of our pervasive document editing and management system requires an appropriate architectural foundation.\nOur concept and implementation are based on TeNDaX [3], a collaborative database-based document editing and management system that enables pervasive document editing and managing.\nTeNDaX is a Text Native Database eXtension.\nIt enables the storage of text in databases in a native form, so that editing text is represented as real-time transactions.\nUnder the term 'text editing' we understand the following: writing and deleting text (characters), copying & pasting text, defining text layout & structure, inserting notes, setting access rights, defining business processes, inserting tables, pictures, and so on, i.e. 
all the actions regularly carried out by word processing users.\nWith 'real-time transaction' we mean that editing text (e.g. writing a character\/word) invokes one or several database transactions, so that everything that is typed appears within the editor as soon as these objects are stored persistently.\nInstead of creating files and storing them in a file system, the content and all of the meta data belonging to the documents is stored in a special way in the database, which enables very fast real-time transactions for all editing tasks [2].\nThe database schema and the above-mentioned transactions are designed in such a way that everything can be done within a multi-user environment, as is usually done by database technology.\nAs a consequence, many of the achievements of database technology (with respect to data organization and querying, recovery, integrity and security enforcement, multi-user operation, distribution management, uniform tool access, etc.) are now, by means of this approach, also available for word processing.\n2.\nAPPROACH\nOur pervasive editing and management system is based on the above-mentioned database-based TeNDaX approach, in which document data is stored natively in the database, and supports pervasive collaborative text editing and document management.\nWe define a pervasive document editing and management system as a system where documents can easily be accessed and manipulated everywhere (within the network), anytime (independently of the number of users working on the same document) and with any device (desktop, notebook, PDA, mobile phone etc.).\nFigure 1.\nTeNDaX Application Architecture\nIn contrast to documents stored locally on a hard drive or on a file server, our system automatically serves the user with the up-to-date version of a document; changes made to a document are stored persistently in the database and immediately propagated to all clients working on the same document.\nAdditionally, meta data gathered during the whole 
document creation process enables sophisticated document management.\nWith the TeXt SQL API as an abstract interface, this approach can be used by any tool and on any device.\nThe system is built of the following components (see Figure 1): An editor written in Java implements the presentation layer (A-G in Figure 1).\nThe aim of this layer is the integration into a well-known word-processing application such as OpenOffice.\nThe business logic layer represents the interface between the database and the word-processing application.\nIt consists of the following three components: The application server (marked as AS 1-4 in Figure 1) enables text editing within the database environment and takes care of awareness, security, document management etc., all within a collaborative, real-time and multi-user environment.\nThe real-time server component (marked as RTSC 1-4 in Figure 1) is responsible for the propagation of information, i.e. of updates, between all of the connected editors.\nThe storage engine (data layer) primarily stores the content of documents as well as all related meta data within the database.\nDatabases can be distributed in a peer-to-peer network (DB 1-4 in Figure 1).\nIn the following, we briefly present the database schema, the editor and the real-time server component, as well as the concept of dynamic folders, which enables sophisticated document management on the basis of meta data.\n2.1 Application Architecture\nA database-based real-time collaborative editor allows the same document to be opened and edited simultaneously on the same computer or over a network of several computers and mobile devices.\nAll concurrency issues, as well as message propagation, are solved within this approach, even while multiple instances of the same document are open [3].\nEach insert or delete action is a database transaction and, as such, is immediately stored persistently in the database and propagated to all clients working on the same document.\n2.1.1 Database Schema\nAs 
mentioned earlier, text is stored in a native way.\nEach character of a text document is stored as a single object in the database [3].\nWhen storing text in such a native form, the performance of the employed database system is of crucial importance.\nThe concept and performance issues of such a text database are described in [3], collaborative layouting in [2], dynamic collaborative business processes within documents in [5], the text editing creation time meta data model in [6] and the relation to XML databases in [7].\nFigure 2 depicts the core database schema.\nFigure 2.\nTeNDaX Database Schema (Object Role Modeling Diagram)\nBy connecting a client to the database, a Session instance is created.\nOne important attribute of the Session is the DocumentSession.\nThis attribute refers to DocumentSession instances, which administer all opened documents.\nFor each opened document, a DocumentSession instance is created.\nThe DocumentSession is important for the real-time server component, which, in case of a change on a document done by a client, is responsible for sending update information to all the clients working on the same document.\nThe DocumentId in the class DocumentSession points to a FileNode instance and corresponds to the ID of the opened document.\nInstances of the class FileNode represent either a folder node or a document node.\nThe folder node corresponds to a folder of a file system and the document node to that of a file.\nInstances of the class Char represent the characters of a document.\nThe value of a character is stored in the attribute CharacterValue.\nThe sequence is defined by the attributes After and Before of the class Char.\nParticular instances of Char mark the beginning and the end of a document.\nThe methods InsertChars and RemoveChars are used to add and delete characters.\nEditing a document involves a number of steps: first, getting the required information out of the image, secondly, invoking the corresponding 
methods within the database, thirdly, changing the image, and fourthly, informing all other clients about the changes.\n2.1.2 Editor\nAs seen above, each document is natively stored in the database.\nOur editor does not hold a replica of part of the native text database in the sense of database replicas.\nInstead, it has a so-called image as its replica.\nEven if several authors edit the same text at the same time, they work on one unique document at all times.\nThe system guarantees this unique view.\n2.1.3 Real-Time Server Component\nThe real-time server component is responsible for the real-time propagation of any change on a document done within an editor to all the editors who are working on or have opened the same document.\nWhen an editor connects to the application server, which in turn connects to the database, the database also establishes a connection to the real-time server component (if there isn't already a connection).\nThe database system informs the real-time server component about each new editor session, which the real-time server component administers in its SessionManager.\nThen, the editor connects to the real-time server component as well.\nThe real-time server component adds the editor socket to the client's data structure in the SessionManager and is then ready to communicate.\nEach time a change on a document from an editor is persistently stored in the database, the database sends a message to the real-time server component, which in turn sends the changes to all the editors working on the same document.\nTherefore, a special communication protocol is used: the update protocol.\nUpdate Protocol\nThe real-time server component uses the update protocol to communicate with the database and the editors.\nMessages are sent from the database to the real-time server component, which sends the messages to the affected editors.\nThe update protocol consists of different message types.\nMessages consist of two packages: package one 
contains information for the real-time server component, whereas package two is passed to the editors and contains the update information, as depicted in Figure 3.\nFigure 3.\nUpdate Protocol\nIn the following, two message types are presented:\n| u | sessionId, ..., sessionId | editor data |\n(u: update message; sessionId: id of a client session)\nWith this message type, the real-time server component sends the editor data package to all editors specified in the sessionId list.\n| ud | fileId | editor data |\n(ud: update document message; fileId: id of the file)\nWith this message type, the real-time server component sends the editor data to all editors who have opened the document with the indicated fileId.\nClass Model\nFigure 4 depicts the class model as well as the environment of the real-time server component.\nThe environment consists mainly of the editor and the database, but any other client application that could make use of the real-time server component can connect.\nConnectionListener: This class is responsible for the connections to the clients, i.e. to the database and the editors.\nDepending on the connection type (database or editor), the connection is passed to an EditorWorker or DatabaseMessageWorker instance, respectively.\nEditorWorker: This class manages the connections of type 'editor'.\nThe connection (a socket and its input and output streams) is stored in the SessionManager.\nSessionManager: This class is similar to an 'in-memory database': all editor session information, e.g. the editor sockets and which editor has opened which document, 
are stored within this data structure.\nDatabaseMessageWorker: This class is responsible for the connections of type 'database'.\nAt run-time, only one connection exists for each database.\nUpdate messages from the database are sent to the DatabaseMessageWorker and, with the help of additional information from the SessionManager, sent to the corresponding clients.\nServiceClass: This class offers a set of methods for reading, writing and logging messages.\nFigure 4.\nReal-Time Server Component Class Diagram\n2.1.4 Dynamic Folders\nAs mentioned above, every editing action invoked by a user is immediately transferred to the database.\nAt the same time, more information about the current transaction is gathered.\nAs all information is stored in the database, one character can hold a multitude of information, which can later be used for the retrieval of documents.\nMeta data is collected at character level, from document structure (layout, template, semantics, security, workflow and notes), on the level of a document section and on the level of the whole document [6].\nAll of the above-mentioned meta data is crucial information for creating content and knowledge out of word processing documents.\nThis meta data can be used to create an alternative storage system for documents.\nIn any case, it is not an easy task to change users' familiarity with the well-known hierarchical file system.\nThis is also the main reason why we do not completely disregard the classical file system, but rather enhance it.\nFolders which correspond to the classical hierarchical file system will be called \"static folders\".\nFolders where the documents are organized according to meta data will be called \"dynamic folders\".\nAs all information is stored in the database, the file system, too, is based on the database.\nThe dynamic folders build up sub-trees, which are guided by the meta data selected by the user.\nThus, the first step in using a dynamic folder is the definition of 
how it should be built.\nFor each level of a dynamic folder, exactly one meta data item is used.\nThe following example illustrates the steps which have to be taken in order to define a dynamic folder, and the meta data which should be used.\nAs a first step, the meta data which will be used for the dynamic folder must be chosen (see Table 1): The sequence of the meta data influences the structure of the folder.\nFurthermore, for each meta data item used, restrictions and granularity must be defined by the user; if no restrictions are defined, all accessible documents are listed.\nThe granularity therefore influences the number of sub-folders which will be created for the partitioning of the documents.\nAs the user enters the tree structure of the dynamic folder, he can navigate through the branches to arrive at the document(s) he is looking for.\nThe directory names indicate which meta data determines the content of the sub-folder in question.\nAt each level, the documents which have so far been found to match the meta data can be inspected.\nTable 1.\nDefining dynamic folders (example)\nAd hoc changes of granularity and restrictions are possible in order to maximize search comfort for the user.\nIt is possible to predefine dynamic folders for frequent use, e.g. 
a location-based folder, as well as to create and modify dynamic folders on an ad hoc basis.\nFurthermore, the content of such dynamic folders can change from one second to another, depending on the changes made by other users at that moment.\n3.\nVALIDATION\nThe proposed architecture is validated using the example of a character insertion.\nInsert operations are the most frequently used operations in a (collaborative) editing system.\nThe character insertion is based on the TeNDaX Insert Algorithm, which is formally described in the following.\nThe algorithm is simplified for this purpose.\n3.1 Insert Characters Algorithm\nThe symbol c stands for the object \"character\", p stands for the previous character, n stands for the next character of a character object c, and the symbol l stands for a list of character objects.\nc = character\np = previous character\nn = next character\nl = list of characters\nThe symbol c1 stands for the first character in the list l, ci stands for a character in the list l at position i, where i is a value between 1 and the length of the list l, and cn stands for the last character in the list l. 
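As a small illustrative aside, the character chain that these symbols describe can be modelled in a few lines of Python. This is a sketch under assumptions (a plain in-memory model with two empty marker characters, not the TeNDaX database; the names `Char`, `append_char` and `as_string` are chosen here for illustration), loosely mirroring the Char class with its After\/Before attributes:\n```python\n# Illustrative sketch only: a document as a chain of character objects\n# delimited by two special marker characters (begin and end of document).\n\nclass Char:\n    def __init__(self, value):\n        self.value = value   # CharacterValue\n        self.prev = None     # p: the previous character\n        self.next = None     # n: the next character\n\ndef new_document():\n    """Create an empty document: a begin mark linked to an end mark."""\n    begin, end = Char(""), Char("")\n    begin.next, end.prev = end, begin\n    return begin, end\n\ndef append_char(end, value):\n    """Link a new character between the current last character and the end mark."""\n    c = Char(value)\n    c.prev, c.next = end.prev, end\n    end.prev.next = c\n    end.prev = c\n    return c\n\ndef as_string(begin, end):\n    """Walk the chain from the first character c1 to the last character cn."""\n    out, c = [], begin.next\n    while c is not end:\n        out.append(c.value)\n        c = c.next\n    return "".join(out)\n\nbegin, end = new_document()\nfor v in "abc":\n    append_char(end, v)\n```\nWalking the chain with as_string(begin, end) then reproduces the typed text "abc", which is exactly the property the linking attributes must preserve across concurrent edits.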
c1 = first character in list l\nci = character at position i in list l\ncn = last character in list l\nThe symbol \u03b2 stands for the special character that marks the beginning of a document and \u03b5 stands for the special character that marks the end of a document.\n\u03b2 = beginning of document\n\u03b5 = end of document\nThe function startTA starts a transaction.\nThe function checkWriteAccess checks if write access for a document session s is granted.\nThe function lock acquires an exclusive lock for a character c and returns 1 on success and 0 otherwise.\nThe function releaseLocks releases all locks that a transaction has acquired so far.\nreleaseLocks = release all locks\nThe function getPrevious returns the previous character and getNext returns the next character of a character c.\nThe function linkBefore links a preceding character p with a succeeding character x, and the function linkAfter links a succeeding character n with a preceding character y.\nThe function updateString links a character p with the first character c1 of a character list l, and a character n with the last character cn of l.\nupdateString (l, p, n) = linkBefore (p, c1) \u2227 linkAfter (n, cn)\nThe function insertChar inserts a character c into the table Char with the field After set to a character p and the field Before set to a character n.\nThe function checkPreceding determines the CharacterValue of the character preceding a character c and whether that character's status is active.\ncheckPreceding (c) = return status and CharacterValue of the previous character\nThe function checkSucceeding determines the CharacterValue of the character succeeding a character c and whether that character's status is active.\ncheckSucceeding (c) = return status and CharacterValue of the next character\nThe function checkCharValue determines the CharacterValue of a character c. 
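Before completing the remaining definitions, the functions introduced so far can be composed into a simplified, single-process sketch of the insertion. The assumptions here are an in-memory dict standing in for the Char table, a boolean flag per character simulating the exclusive lock, and a plain list standing in for the update message to the real-time server component; this is not the TeNDaX code itself:\n```python\n# Sketch: lock the character before the insertion point, link each character\n# of the list in between p and n, then release the lock and queue an update\n# message. Field semantics follow the text: After = p, Before = n.\n\ndef insert_chars(chars, locks, prev_id, values, updates):\n    if locks.get(prev_id):                  # lock(prevCharId) not granted\n        return False\n    locks[prev_id] = True                   # acquire the exclusive lock\n    try:\n        n = chars[prev_id]["before"]        # the character following the insert\n        p, inserted = prev_id, []\n        for v in values:                    # insertChar(ci, p, n) for each ci\n            cid = len(chars)\n            chars[cid] = {"value": v, "after": p, "before": n}\n            chars[p]["before"] = cid        # linkBefore(p, ci)\n            chars[n]["after"] = cid         # linkAfter(n, ci)\n            p, inserted = cid, inserted + [cid]\n        updates.append({"type": "u", "inserted": inserted})  # sendUpdate(...)\n        return True\n    finally:\n        locks[prev_id] = False              # releaseLocks\n\n# ids 0 and 1 play the roles of the begin and end marks of the document\nchars = {0: {"value": "", "after": None, "before": 1},\n         1: {"value": "", "after": 0, "before": None}}\nlocks, updates = {}, []\ninsert_chars(chars, locks, 0, "Hi", updates)\n```\nAfter the call, walking the chain from the begin mark via the before links yields the inserted string, and exactly one update message has been queued for propagation by the real-time server component.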
checkCharValue (c) = return CharacterValue of character c\nThe function sendUpdate sends an update message (UpdateMessage) from the database to the real-time server component.\nsendUpdate (UpdateMessage)\nThe function Read is used in the real-time server component to read the UpdateMessage.\nThe function AllocateEditors checks, on the basis of the UpdateMessage and the SessionManager, which editors have to be informed.\nAllocateEditors (UpdateMessage, SessionManager) = returns the affected editors\nThe function SendMessage sends the editor part of the UpdateMessage to the editors.\nSendMessage (EditorData)\nIn TeNDaX, the Insert Algorithm is implemented in the class method InsertChars of the class Char, which is depicted in Figure 2.\nThe relevant parameters for the definitions beneath are introduced in the following list:\n- nextCharacterOID: OID of the character situated next to the string to be inserted\n- previousCharacterOID: OID of the character situated before the string to be inserted\n- characterOIDs (List): list of the characters which have to be inserted\nThus, the insertion of characters can be defined stepwise as follows:\nStart a transaction: startTA\nSelect the character that is situated before the character that follows the string to be inserted.\nAcquire the lock for this character: lock (prevCharId)\nAt this time, the list characterOIDs contains the characters c1 to cn that shall be inserted.\nEach character of the string is inserted at the appropriate position by linking the preceding and the succeeding character to it.\nFor each character ci of characterOIDs: insertChar (ci, p, n), where ci \u2208 {c1, ..., cn}\nCheck if the preceding and succeeding characters are active, or if it is the beginning or the end of the document.\n3.2 Insert Characters Example\nFigure 1 gives a snapshot of the system, i.e. 
of its architecture: four databases are distributed over a peer-to-peer network.\nEach database is connected to an application server (AS) and each application server is connected to a real-time server component (RTSC).\nEditors are connected to one or more real-time server components and to the corresponding databases.\nConsider that editor A (connected to databases 1 and 4) and editor B (connected to databases 1 and 2) are working on the same document, stored in database 1.\nEditor B now inserts a character into this document.\nThe insert operation is passed to application server 1, which in turn passes it to database 1, where an insert operation is invoked; the characters are inserted according to the algorithm discussed in the previous section.\nAfter the insertion, database 1 sends an update message (according to the update protocol discussed before) to real-time server component 1 (via AS 1).\nRTSC 1 combines the received update information with the information in its SessionManager and sends the editor data to the affected editors, in this case editors A and B, where the changes are immediately shown.\nCollaboration conflicts that occur are solved and described in [3].\n4.\nSUMMARY\nWith the approach presented in this paper and the implemented prototype, we offer real-time collaborative editing and management of documents stored in a special way in a database.\nWith this approach we provide security, consistency and availability of documents, and consequently offer pervasive document editing and management.\nPervasive document editing and management is enabled by the proposed architecture with the embedded real-time server component, which propagates changes to a document immediately and consequently offers up-to-date documents.\nDocument editing and managing is consequently enabled anywhere, anytime and with any device.\nThe above-described system is implemented in a running prototype.\nThe system will soon be tested in a student 
workshop next autumn.","keyphrases":["pervas document edit and manag system","pervas document edit and manag system","collabor document process","collabor document","text edit","real-time transact","comput support collabor work","busi logic layer","real-time server compon","collabor layout","hierarch file system","restrict","granular","charact insert","cscw"],"prmu":["P","P","P","P","M","U","M","U","U","M","M","U","U","U","U"]} {"id":"C-54","title":"Remote Access to Large Spatial Databases","abstract":"Enterprises in the public and private sectors have been making their large spatial data archives available over the Internet. However, interactive work with such large volumes of online spatial data is a challenging task. We propose two efficient approaches to remote access to large spatial data. First, we introduce a client-server architecture where the work is distributed between the server and the individual clients for spatial query evaluation, data visualization, and data management. We enable the minimization of the requirements for system resources on the client side while maximizing system responsiveness as well as the number of connections one server can handle concurrently. Second, for prolonged periods of access to large online data, we introduce APPOINT (an Approach for Peer-to-Peer Offloading the INTernet). This is a centralized peer-to-peer approach that helps Internet users transfer large volumes of online data efficiently. 
In APPOINT, active clients of the client-server architecture act on the server's behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions.","lvl-1":"Remote Access to Large Spatial Databases \u2217 Egemen Tanin Franti\u0161ek Brabec Hanan Samet Computer Science Department Center for Automation Research Institute for Advanced Computer Studies University of Maryland, College Park, MD 20742 {egemen,brabec,hjs}@umiacs.umd.edu www.cs.umd.edu\/{~egemen,~brabec,~hjs} ABSTRACT Enterprises in the public and private sectors have been making their large spatial data archives available over the Internet.\nHowever, interactive work with such large volumes of online spatial data is a challenging task.\nWe propose two efficient approaches to remote access to large spatial data.\nFirst, we introduce a client-server architecture where the work is distributed between the server and the individual clients for spatial query evaluation, data visualization, and data management.\nWe enable the minimization of the requirements for system resources on the client side while maximizing system responsiveness as well as the number of connections one server can handle concurrently.\nSecond, for prolonged periods of access to large online data, we introduce APPOINT (an Approach for Peer-to-Peer Offloading the INTernet).\nThis is a centralized peer-to-peer approach that helps Internet users transfer large volumes of online data efficiently.\nIn APPOINT, active clients of the client-server architecture act on the server's behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems-Client\/server, Distributed applications, Distributed databases; H.2.8 [Database Management]: Database Applications-Spatial databases and GIS General Terms Performance, Management 
1.\nINTRODUCTION In recent years, enterprises in the public and private sectors have provided access to large volumes of spatial data over the Internet.\nInteractive work with such large volumes of online spatial data is a challenging task.\nWe have been developing an interactive browser for accessing spatial online databases: the SAND (Spatial and Non-spatial Data) Internet Browser.\nUsers of this browser can interactively and visually manipulate spatial data remotely.\nUnfortunately, interactive remote access to spatial data slows to a crawl without proper data access mechanisms.\nWe developed two separate methods for improving system performance; together, they form a dynamic network infrastructure that is highly scalable and provides a satisfactory user experience for interactions with large volumes of online spatial data.\nThe core functionality responsible for the actual database operations is performed by the server-based SAND system.\nSAND is a spatial database system developed at the University of Maryland [12].\nThe client-side SAND Internet Browser provides a graphical user interface to the facilities of SAND over the Internet.\nUsers specify queries by choosing the desired selection conditions from a variety of menus and dialog boxes.\nThe SAND Internet Browser is Java-based, which makes it deployable across many platforms.\nIn addition, since Java has often been installed on target computers beforehand, our clients can be deployed on these systems with little or no need for any additional software installation or customization.\nThe system can start being utilized immediately without any prior setup, which can be extremely beneficial in time-sensitive usage scenarios such as emergencies.\nThere are two ways to deploy SAND.\nFirst, any standard Web browser can be used to retrieve and run the client piece (SAND Internet Browser) as a Java application or an applet.\nThis way, users across various platforms can continuously access large spatial data on a remote 
location with little or no need for any preceding software installation.\nThe second option is to use a stand-alone SAND Internet Browser along with a locally-installed Internet-enabled database management system (server piece).\nIn this case, the SAND Internet Browser can still be utilized to view data from remote locations.\nHowever, frequently accessed data can be downloaded to the local database on demand, and subsequently accessed locally.\nPower users can also upload large volumes of spatial data back to the remote server using this enhanced client.\nWe focused our efforts in two directions.\nWe first aimed at developing a client-server architecture with efficient caching methods to balance local resources on one side and the significant latency of the network connection on the other.\nThe low bandwidth of this connection is the primary concern in both cases.\nThe outcome of this research primarily addresses the issues of our first type of usage (i.e., as a remote browser application or an applet) for our browser and other similar applications.\nThe second direction aims at helping users who wish to manipulate large volumes of online data for prolonged periods.\nWe have developed a centralized peer-to-peer approach to provide the users with the ability to transfer large volumes of data (i.e., whole data sets to the local database) more efficiently by better utilizing the distributed network resources among active clients of a client-server architecture.\nWe call this architecture APPOINT (Approach for Peer-to-Peer Offloading the INTernet).\nThe results of this research address primarily the issues of the second type of usage for our SAND Internet Browser (i.e., as a stand-alone application).\nThe rest of this paper is organized as follows.\nSection 2 describes our client-server approach in more detail.\nSection 3 focuses on APPOINT, our peer-to-peer approach.\nSection 4 discusses our work in relation to existing work.\nSection 5 outlines a sample SAND 
Internet Browser scenario for both of our remote access approaches. Section 6 contains concluding remarks as well as future research directions.

2. THE CLIENT-SERVER APPROACH

Traditionally, Geographic Information Systems (GIS) such as ArcInfo from ESRI [2] and many spatial databases are designed to be stand-alone products. The spatial database is kept on the same computer or local area network from where it is visualized and queried. This architecture allows for instantaneous transfer of large amounts of data between the spatial database and the visualization module, so that it is perfectly reasonable to use large-bandwidth protocols for communication between them. There are, however, many applications where a more distributed approach is desirable. In these cases, the database is maintained in one location while users need to work with it from possibly distant sites over the network (e.g., the Internet). These connections can be far slower and less reliable than local area networks, and thus it is desirable to limit the data flow between the database (server) and the visualization unit (client) in order to get a timely response from the system.

Our client-server approach (Figure 1) allows the actual database engine to be run in a central location maintained by spatial database experts, while end users acquire a Java-based client component that provides them with a gateway into the SAND spatial database engine.

Figure 1: SAND Internet Browser - Client-Server architecture.

Our client is more than a simple image viewer. Instead, it operates on vector data, allowing the client to execute many operations such as zooming or locational queries locally. In essence, a simple spatial database engine is run on the client. This database keeps a copy of a subset of the whole database, whose full version is maintained on the server. This is a concept similar to 'caching'. In our case, the client acts as a lightweight server in that, given data, it evaluates queries and
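As a minimal sketch of this caching idea (all class and method names here are hypothetical, and a toy point grid stands in for the real spatial database), the client below answers window queries from a locally cached region and contacts the server only when the requested window is not covered:

```java
// Hypothetical sketch of the client-side caching idea: the client keeps the
// objects of one cached window and answers window queries locally whenever
// the cached window covers the requested one.
import java.util.ArrayList;
import java.util.List;

public class ClientCache {
    // Cached axis-aligned window; NaN marks an empty cache.
    private double cx1 = Double.NaN, cy1, cx2, cy2;
    private final List<double[]> objects = new ArrayList<>();
    int serverRequests = 0; // counts round-trips, for illustration only

    // Stand-in for a network call: the "server" holds a grid of points.
    private List<double[]> fetchFromServer(double x1, double y1, double x2, double y2) {
        serverRequests++;
        List<double[]> result = new ArrayList<>();
        for (int x = 0; x <= 100; x += 10)
            for (int y = 0; y <= 100; y += 10)
                if (x >= x1 && x <= x2 && y >= y1 && y <= y2)
                    result.add(new double[]{x, y});
        return result;
    }

    // Window query: served locally if the cached region covers the window.
    public List<double[]> windowQuery(double x1, double y1, double x2, double y2) {
        boolean covered = !Double.isNaN(cx1)
                && cx1 <= x1 && cy1 <= y1 && cx2 >= x2 && cy2 >= y2;
        if (!covered) { // cache miss: repopulate from the server
            objects.clear();
            objects.addAll(fetchFromServer(x1, y1, x2, y2));
            cx1 = x1; cy1 = y1; cx2 = x2; cy2 = y2;
        }
        List<double[]> result = new ArrayList<>();
        for (double[] p : objects) // e.g., zooming in is answered locally
            if (p[0] >= x1 && p[0] <= x2 && p[1] >= y1 && p[1] <= y2)
                result.add(p);
        return result;
    }

    public static void main(String[] args) {
        ClientCache c = new ClientCache();
        c.windowQuery(0, 0, 100, 100); // initial population: one server trip
        c.windowQuery(20, 20, 40, 40); // zoom in: answered locally
        System.out.println("server requests: " + c.serverRequests); // prints "server requests: 1"
    }
}
```

Zooming and panning inside the cached region thus generate no network traffic at all, which is the effect described above.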
provides the visualization module with objects to be displayed. It initiates communication with the server only in cases where it does not have enough data stored locally. Since the locally run database is only updated when additional or newer data is needed, our architecture allows the system to minimize the network traffic between the client and the server when executing the most common user-side operations such as zooming and panning. In fact, as long as the user explores one region at a time (i.e., he or she is not panning all over the database), no additional data needs to be retrieved after the initial population of the client-side database. This makes the system much more responsive than Web-based mapping services. Due to the complexity of evaluating arbitrary queries (i.e., queries more complex than the window queries needed for database visualization), we do not perform user-specified queries on the client. All user queries are still evaluated on the server side and the results are downloaded onto the client for display. However, assuming that the queries are selective enough (i.e., there are far fewer elements returned from the query than the number of elements in the database), the response delay is usually within reasonable limits.

2.1 Client-Server Communication

As mentioned above, the SAND Internet Browser is the client piece of the remotely accessible spatial database server built around the SAND kernel. In order to communicate with the server, whose application programming interface (API) is a Tcl-based scripting language, a servlet specifically designed to interface the SAND Internet Browser with the SAND kernel is required on the server side. This servlet listens on a given port of the server for incoming requests from the client. It translates these requests into the SAND-Tcl language. Next, it transmits these SAND-Tcl commands or scripts to the SAND kernel. After results are provided by the kernel, the servlet fetches and processes
them, and then sends those results back to the originating client. Once the Java servlet is launched, it waits for a client to initiate a connection. It handles both requests for the actual client Java code (needed when the client is run as an applet) and the SAND traffic. Once the client piece is launched, it connects back to the SAND servlet; the communication is driven by the client piece, and the server only responds to the client's queries. The client initiates a transaction by sending a query. The Java servlet parses the query and creates a corresponding SAND-Tcl expression or script in the SAND kernel's native format. It is then sent to the kernel for evaluation or execution. The kernel's response naturally depends on the query and can be a boolean value, a number, a string representing a value (e.g., a default color), or a whole tuple (e.g., in response to a nearest tuple query). If a script was sent to the kernel (e.g., requesting all the tuples matching some criteria), then an arbitrary amount of data can be returned by the SAND server. In this case, the data is first compressed before it is sent over the network to the client. The data stream gets decompressed at the client before the results are parsed. Notice that if another spatial database were to be used instead of the SAND kernel, then only a simple modification to the servlet would be needed in order for the SAND Internet Browser to function properly. In particular, the queries sent by the client would need to be recoded into another query language which is native to this different spatial database. The format of the protocol used for communication between the servlet and the client is unaffected.

3. THE PEER-TO-PEER APPROACH

Many users may want to work on a complete spatial data set for a prolonged period of time. In this case, making an initial investment of downloading the whole data set may be needed to guarantee a satisfactory session. Unfortunately, spatial data
tends to be large. A few download requests to a large data set from a set of idle clients waiting to be served can slow the server to a crawl. This is due to the fact that the common client-server approach to transferring data between the two ends of a connection assumes a designated role for each one of the ends (i.e., some clients and a server).

We built APPOINT as a centralized peer-to-peer system to demonstrate our approach for improving common client-server systems. A server still exists: there is a central source for the data and a decision mechanism for the service. The environment still functions as a client-server environment under many circumstances. Yet, unlike many common client-server environments, APPOINT maintains more information about the clients. This includes inventories of what each client downloads, their availabilities, etc. When the client-server service starts to perform poorly, or a request for a data item comes from a client with a poor connection to the server, APPOINT can start appointing appropriate active clients of the system to serve on behalf of the server, i.e., clients who have already volunteered their services and can take on the role of peers (hence moving from a client-server scheme to a peer-to-peer scheme). The directory service for the active clients is still performed by the server, but the server no longer serves all of the requests. In this scheme, clients are used mainly for the purpose of sharing their networking resources rather than introducing new content, and hence they help offload the server and scale up the service. The existence of a server makes the management of dynamic peers simpler than in pure peer-to-peer approaches, where each peer that needs to make a decision must flood the system with messages to discover who is still active. The server is also the main source of data, and under regular circumstances it may not need to forward the service. Data is assumed to be
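The central appointment decision described above can be sketched as follows (a toy with hypothetical names; the real APPOINT peer selection also weighs location, bandwidth, and load): the server tracks which active clients already hold which files and, when congested, forwards a download request to one of them.

```java
// Minimal sketch of the server-side directory and appointment decision.
// All names are hypothetical; real APPOINT tracks richer client state.
import java.util.*;

public class AppointDirectory {
    private final Map<String, List<String>> holders = new HashMap<>(); // file -> peers holding a copy
    private boolean congested = false;

    // Record that an active client downloaded (and volunteered to share) a file.
    public void registerDownload(String peer, String file) {
        holders.computeIfAbsent(file, f -> new ArrayList<>()).add(peer);
    }

    public void setCongested(boolean c) { congested = c; }

    // Decide who serves a request: the server itself under normal load,
    // otherwise an active client that already holds a copy (if any exists).
    public String appoint(String file) {
        List<String> peers = holders.get(file);
        if (!congested || peers == null || peers.isEmpty())
            return "server"; // unpopular data stays available via the server
        return peers.get(0); // a real scheme would rank by location/bandwidth/load
    }

    public static void main(String[] args) {
        AppointDirectory d = new AppointDirectory();
        d.registerDownload("clientA", "roads.dat");
        d.setCongested(true);
        System.out.println(d.appoint("roads.dat"));  // a holder is appointed
        System.out.println(d.appoint("rivers.dat")); // no holder: server serves
    }
}
```

Note how replication arises as a byproduct of downloads, and how the server remains the fallback for data no peer holds.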
formed of files. A single file forms the atomic means of communication, and APPOINT optimizes requests with respect to these atomic requests. Frequently accessed data sets are replicated as a byproduct of having been requested by a large number of users. This opens up the potential for other users to bypass the server in future downloads of the data, as there are now many new points of access to it. Bypassing the server is useful when the server's bandwidth is limited. The existence of a server ensures that unpopular data is also available at all times. The service depends on the availability of the server. The server is now more resilient to congestion as the service is more scalable. Backups and other maintenance activities are already being performed on the server, and hence no extra administrative effort is needed for the dynamic peers. If a peer goes down, no extra precautions are taken. In fact, APPOINT does not require any additional resources from an already existing client-server environment but, instead, expands its capability. The peers simply get on to or get off from a table on the server.

Uploading data is achieved in a similar manner to downloading data. For uploads, the active clients can again be utilized. Users can upload their data to a set of peers other than the server if the server is busy or resides in a distant location. Eventually the data is propagated to the server. All of the operations are performed in a fashion that is transparent to the clients. Upon initial connection to the server, clients can be queried as to whether or not they want to share their idle networking time and disk space. The rest of the operations follow transparently after the initial contact. APPOINT works at the application layer, not at lower layers. This achieves platform independence and easy deployment of the system. APPOINT is not a replacement for, but an addition to, current client-server architectures. We developed a library of function calls that
when placed in a client-server architecture starts the service. We are developing advanced peer selection schemes that incorporate the location of active clients, the bandwidth among active clients, the size of the data to be transferred, the load on active clients, and the availability of active clients to form a complete means of selecting the best clients that can become efficient alternatives to the server.

With APPOINT we are defining a very simple API that could easily be used within an existing client-server system. Instead of denial of service or a slow connection, this API can be utilized to forward the service appropriately. The API for the server side is:

  start(serverPortNo)
  makeFileAvailable(file, location, boolean)
  callback receivedFile(file, location)
  callback errorReceivingFile(file, location, error)
  stop()

Similarly, the API for the client side is:

  start(clientPortNo, serverPortNo, serverAddress)
  makeFileAvailable(file, location, boolean)
  receiveFile(file, location)
  sendFile(file, location)
  stop()

The server, after starting the APPOINT service, can make all of its data files available to the clients by using the makeFileAvailable method. This will enable APPOINT to treat the server as one of the peers. The two callback methods of the server are invoked when a file is received from a client, or when an error is encountered while receiving a file from a client. APPOINT guarantees that at least one of the callbacks will be called, so that the user (who may not be online anymore) can always be notified (e.g., via email).

Figure 2: The localization operation in APPOINT.

Clients localizing large data files can make these files available to the public by using the makeFileAvailable method on the client side. For example, in our SAND Internet Browser, we have the localization of spatial data as a function that can be chosen from our menus. This functionality enables users to download data sets completely to their local disks before starting their queries or analysis. In our
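The client-side API listed above can be rendered as a Java interface; the toy implementation below (hypothetical, it merely records calls) shows the intended call sequence for localizing a data set and then volunteering it to peers:

```java
// The paper's APPOINT client API rendered as a Java interface, with a toy
// logging implementation (hypothetical) to show the intended call sequence.
public class AppointDemo {
    interface AppointClient {
        void start(int clientPort, int serverPort, String serverAddress);
        void makeFileAvailable(String file, String location, boolean share);
        void receiveFile(String file, String location);
        void sendFile(String file, String location);
        void stop();
    }

    // Toy implementation that just records the calls it receives.
    static class LoggingClient implements AppointClient {
        final java.util.List<String> log = new java.util.ArrayList<>();
        public void start(int c, int s, String a) { log.add("start " + a + ":" + s); }
        public void makeFileAvailable(String f, String l, boolean sh) { log.add("share " + f); }
        public void receiveFile(String f, String l) { log.add("receive " + f); }
        public void sendFile(String f, String l) { log.add("send " + f); }
        public void stop() { log.add("stop"); }
    }

    public static void main(String[] args) {
        LoggingClient c = new LoggingClient();
        c.start(9000, 8000, "sand.example.edu");          // join the service
        c.receiveFile("arsenic.dat", "/tmp");             // localize a data set
        c.makeFileAvailable("arsenic.dat", "/tmp", true); // volunteer it to peers
        c.stop();
        System.out.println(c.log);
    }
}
```

The host name and file names are placeholders; the point is only that a client first joins, then downloads, and only afterwards advertises what it holds.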
implementation, we have calls to the APPOINT service on both the client and the server sides, as mentioned above. Hence, when a localization request comes to the SAND Internet Browser, the browser leaves the decisions for optimally finding and localizing a data set to the APPOINT service. Our server also makes its data files available over APPOINT. The mechanism of the localization operation, together with more details of the APPOINT protocols, is shown in Figure 2. The upload operation is performed in a similar fashion.

4. RELATED WORK

There has been a substantial amount of research on remote access to spatial data. One specific approach has been adopted by numerous Web-based mapping services (MapQuest [5], MapsOnUs [6], etc.). The goal in this approach is to enable remote users, typically equipped only with standard Web browsers, to access the company's spatial database server and retrieve information from it in the form of pictorial maps. The solution presented by most of these vendors is based on performing all the calculations on the server side and transferring only bitmaps that represent the results of user queries and commands. Although the advantage of this solution is the minimization of both hardware and software resources on the client site, the resulting product has severe limitations in terms of available functionality and response time (each user action results in a new bitmap being transferred to the client).

Work described in [9] examines a client-server architecture for viewing large images that operates over a low-bandwidth network connection. It presents a technique based on wavelet transformations that allows the minimization of the amount of data that needs to be transferred over the network between the server and the client. In this case, while the server holds the full representation of the large image, only a limited amount of data needs to be transferred
client side, the image is reconstructed into a pyramid representation to speed up zooming and panning operations.\nBoth the client and the server keep a common mask that indicates what parts of the image are available on the client and what needs to be requested.\nThis also allows dropping unnecessary parts of the image from the main memory on the server.\nOther related work has been reported in [16] where a client-server architecture is described that is designed to provide end users with access to a server.\nIt is assumed that this data server manages vast databases that are impractical to be stored on individual clients.\nThis work blends raster data management (stored in pyramids [22]) with vector data stored in quadtrees [19, 20].\nFor our peer-to-peer transfer approach (APPOINT), Napster is the forefather where a directory service is centralized on a server and users exchange music files that they have stored on their local disks.\nOur application domain, where the data is already freely available to the public, forms a prime candidate for such a peer-to-peer approach.\nGnutella is a pure (decentralized) peer-to-peer file exchange system.\nUnfortunately, it suffers from scalability issues, i.e., floods of messages between peers in order to map connectivity in the system are required.\nOther systems followed these popular systems, each addressing a different flavor of sharing over the Internet.\nMany peer-to-peer storage systems have also recently emerged.\nPAST [18], Eternity Service [7], CFS [10], and OceanStore [15] are some peer-to-peer storage systems.\nSome of these systems have focused on anonymity while others have focused on persistence of storage.\nAlso, other approaches, like SETI@Home [21], made other resources, such as idle CPUs, work together over the Internet to solve large scale computational problems.\nOur goal is different than these approaches.\nWith APPOINT, we want to improve existing client-server systems in terms of performance by using 
idle networking resources among active clients. Hence, other issues like anonymity, decentralization, and persistence of storage were less important in our decisions. Confirming the authenticity of indirectly delivered data sets is not yet addressed with APPOINT. We want to expand our research in the future to address this issue.

From our perspective, although APPOINT employs some of the techniques used in peer-to-peer systems, it is also closely related to current Web caching architectures. Squirrel [13] forms the middle ground: it creates a pure peer-to-peer collaborative Web cache among the Web browser caches of the machines in a local-area network. Except for this recent peer-to-peer approach, Web caching is mostly a well-studied topic in the realm of server/proxy level caching [8, 11, 14, 17]. Collaborative Web caching systems, the most relevant of these for our research, focus on creating hierarchical, hash-based, central directory-based, or multicast-based caching schemes. We do not compete with these approaches. In fact, APPOINT can work in tandem with collaborative Web caching if they are deployed together. We try to address the situation where a request arrives at a server, meaning all the caches report a miss. Hence, the point where the server is reached can be used to make a central decision, but the actual service request, i.e., the download and upload operations, can then be forwarded to a set of active clients. Cache misses are especially common in the type of large data-based services on which we are working. Most of the Web caching schemes in use today employ a replacement policy that gives priority to replacing the largest items over smaller ones. Hence, these policies would lead to the immediate replacement of our relatively large data files even though they may be used frequently. In addition, in our case, the user community that accesses a certain data file may also be very dispersed from a
network point of view and thus cannot take advantage of any of the caching schemes. Finally, none of the Web caching methods address the symmetric issue of large data uploads.

5. A SAMPLE APPLICATION

FedStats [1] is an online source that gives ordinary citizens access to official statistics of numerous federal agencies without their knowing in advance which agency produced them. We are using a FedStats data set as a testbed for our work. Our goal is to provide more power to the users of FedStats by utilizing the SAND Internet Browser. As an example, we looked at two data files corresponding to Environmental Protection Agency (EPA)-regulated facilities that have chlorine and arsenic, respectively. For each file, we had the following information available: EPA-ID, name, street, city, state, zip code, latitude, and longitude, followed by flags that indicate whether that facility is in the following EPA programs: Hazardous Waste, Wastewater Discharge, Air Emissions, Abandoned Toxic Waste Dump, and Active Toxic Release. We put this data into a SAND relation where the spatial attribute 'location' corresponds to the latitude and longitude. Some queries that can be handled with our system on this data include:

1. Find all EPA-regulated facilities that have arsenic and participate in the Air Emissions program, and:
   (a) Lie in Georgia to Illinois, alphabetically.
   (b) Lie within Arkansas or 30 miles within its border.
   (c) Lie within 30 miles of the border of Arkansas (i.e., on both sides of the border).

2. For each EPA-regulated facility that has arsenic, find all EPA-regulated facilities that have chlorine and:
   (a) Are closer to it than to any other EPA-regulated facility that has arsenic.
   (b) Participate in the Air Emissions program and are closer to it than to any other EPA-regulated facility that has arsenic.

In order to avoid reporting a particular facility more than once, we use our 'group by EPA-ID' mechanism. Figure 3 illustrates the output of an example
query that finds all arsenic sites within a given distance of the border of Arkansas. The sites are obtained in an incremental manner with respect to a given point. This ordering is shown by using different color shades. With this example data, it is possible to work with the SAND Internet Browser online as an applet (connecting to a remote server) or after localizing the data and then opening it locally. In the first case, for each action taken, the client-server architecture will decide what to ask for from the server. In the latter case, the browser will use the peer-to-peer APPOINT architecture for first localizing the data.

6. CONCLUDING REMARKS

An overview of our efforts in providing remote access to large spatial data has been given. We have outlined our approaches and introduced their individual elements. Our client-server approach improves system performance by using efficient caching methods when a remote server is accessed from thin clients. APPOINT forms an alternative approach that improves performance under an existing client-server system by using idle client resources when individual users want to work on a data set for longer periods of time on their client computers.

For the future, we envision the development of new efficient algorithms that will support large online data transfers within our peer-to-peer approach using multiple peers simultaneously. We assume that a peer (client) can become unavailable at any time, and hence provisions need to be in place to handle such a situation. To address this, we will augment our methods to include efficient dynamic updates. Upon completion of this step of our work, we also plan to run comprehensive performance studies on our methods. Another issue is how to access data from different sources in different formats. In order to access multiple data sources in real time, it is desirable to look for a mechanism that would support data exchange by design. The XML protocol [3] has emerged to
become virtually a standard for describing and communicating arbitrary data. GML [4] is an XML variant that is becoming increasingly popular for the exchange of geographical data. We are currently working on making SAND XML-compatible so that the user can instantly retrieve spatial data provided by various agencies in the GML format via their Web services and then explore, query, or process this data further within the SAND framework. This will turn the SAND system into a universal tool for accessing any spatial data set, as it will be deployable on most platforms, work efficiently given large amounts of data, be able to tap any GML-enabled data source, and provide an easy-to-use graphical user interface. This will also convert the SAND system from a research-oriented prototype into a product that could be used by end users for accessing, viewing, and analyzing their data efficiently and with minimum effort.

7. REFERENCES

[1] FedStats: The gateway to statistics from over 100 U.S. federal agencies. http://www.fedstats.gov/, 2001.
[2] ArcInfo: Scalable system of software for geographic data creation, management, integration, analysis, and dissemination. http://www.esri.com/software/arcgis/arcinfo/index.html, 2002.
[3] Extensible Markup Language (XML). http://www.w3.org/XML/, 2002.
[4] Geography Markup Language (GML) 2.0. http://opengis.net/gml/01-029/GML2.html, 2002.
[5] MapQuest: Consumer-focused interactive mapping site on the Web. http://www.mapquest.com, 2002.
[6] MapsOnUs: Suite of online geographic services. http://www.mapsonus.com, 2002.
[7] R. Anderson. The Eternity Service. In Proceedings of PRAGOCRYPT'96, pages 242-252, Prague, Czech Republic, September 1996.
[8] L. Breslau, P. Cao, L. Fan, G. Phillips, and S.
Shenker. Web caching and Zipf-like distributions: Evidence and implications. In Proceedings of IEEE Infocom'99, pages 126-134, New York, NY, March 1999.

Figure 3: Sample output from the SAND Internet Browser - Large dark dots indicate the result of a query that looks for all arsenic sites within a given distance from Arkansas. Different color shades are used to indicate ranking order by the distance from a given point.

[9] E. Chang, C. Yap, and T. Yen. Realtime visualization of large images over a thinwire. In R. Yagel and H. Hagen, editors, Proceedings of IEEE Visualization'97 (Late Breaking Hot Topics), pages 45-48, Phoenix, AZ, October 1997.
[10] F. Dabek, M. F. Kaashoek, D. Karger, R. Morris, and I. Stoica. Wide-area cooperative storage with CFS. In Proceedings of ACM SOSP'01, pages 202-215, Banff, AL, October 2001.
[11] A. Dingle and T. Partl. Web cache coherence. Computer Networks and ISDN Systems, 28(7-11):907-920, May 1996.
[12] C. Esperança and H. Samet. Experience with SAND/Tcl: a scripting tool for spatial databases. Journal of Visual Languages and Computing, 13(2):229-255, April 2002.
[13] S. Iyer, A. Rowstron, and P. Druschel. Squirrel: A decentralized peer-to-peer Web cache. Rice University/Microsoft Research, submitted for publication, 2002.
[14] D. Karger, A. Sherman, A. Berkheimer, B. Bogstad, R. Dhanidina, K. Iwamoto, B. Kim, L. Matkins, and Y. Yerushalmi. Web caching with consistent hashing. Computer Networks, 31(11-16):1203-1213, May 1999.
[15] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, W. Weimer, C. Wells, and B. Zhao. OceanStore: An architecture for global-scale persistent store. In Proceedings of ACM ASPLOS'00, pages 190-201, Cambridge, MA, November 2000.
[16] M.
Potmesil. Maps alive: viewing geospatial information on the WWW. Computer Networks and ISDN Systems, 29(8-13):1327-1342, September 1997. Also Hyper Proceedings of the 6th International World Wide Web Conference, Santa Clara, CA, April 1997.
[17] M. Rabinovich, J. Chase, and S. Gadde. Not all hits are created equal: Cooperative proxy caching over a wide-area network. Computer Networks and ISDN Systems, 30(22-23):2253-2259, November 1998.
[18] A. Rowstron and P. Druschel. Storage management and caching in PAST, a large-scale, persistent peer-to-peer storage utility. In Proceedings of ACM SOSP'01, pages 160-173, Banff, AL, October 2001.
[19] H. Samet. Applications of Spatial Data Structures: Computer Graphics, Image Processing, and GIS. Addison-Wesley, Reading, MA, 1990.
[20] H. Samet. The Design and Analysis of Spatial Data Structures. Addison-Wesley, Reading, MA, 1990.
[21] SETI@Home. http://setiathome.ssl.berkeley.edu/, 2001.
[22] L. J. Williams. Pyramidal parametrics. Computer Graphics, 17(3):1-11, July 1983. Also Proceedings of the SIGGRAPH'83 Conference, Detroit, July 1983.
case, the user community that accesses a certain data file may also be very dispersed from a network point of view and thus cannot take advantage of any of the caching schemes.\nFinally, none of the Web caching methods address the symmetric issue of large data uploads.\n5.\nA SAMPLE APPLICATION\n6.\nCONCLUDING REMARKS\nAn overview of our efforts in providing remote access to large spatial data has been given.\nWe have outlined our approaches and introduced their individual elements.\nOur client-server approach improves the system performance by using efficient caching methods when a remote server is accessed from thin-clients.\nAPPOINT forms an alternative approach that improves performance under an existing clientserver system by using idle client resources when individual users want work on a data set for longer periods of time using their client computers.\nFor the future, we envision development of new efficient algorithms that will support large online data transfers within our peer-to-peer approach using multiple peers simultaneously.\nWe assume that a peer (client) can become unavailable at any anytime and hence provisions need to be in place to handle such a situation.\nTo address this, we will augment our methods to include efficient dynamic updates.\nUpon completion of this step of our work, we also plan to run comprehensive performance studies on our methods.\nAnother issue is how to access data from different sources in different formats.\nIn order to access multiple data sources in real time, it is desirable to look for a mechanism that would support data exchange by design.\nThe XML protocol [3] has emerged to become virtually a standard for describing and communicating arbitrary data.\nGML [4] is an XML variant that is becoming increasingly popular for exchange of geographical data.\nWe are currently working on making SAND XML-compatible so that the user can instantly retrieve spatial data provided by various agencies in the GML format via their Web 
services and then explore, query, or process this data further within the SAND framework. This will turn the SAND system into a universal tool for accessing any spatial data set, as it will be deployable on most platforms, work efficiently given large amounts of data, be able to tap any GML-enabled data source, and provide an easy-to-use graphical user interface. This will also convert the SAND system from a research-oriented prototype into a product that could be used by end users for accessing, viewing, and analyzing their data efficiently and with minimum effort.

Remote Access to Large Spatial Databases *

ABSTRACT

Enterprises in the public and private sectors have been making their
large spatial data archives available over the Internet. However, interactive work with such large volumes of online spatial data is a challenging task. We propose two efficient approaches to remote access to large spatial data. First, we introduce a client-server architecture where the work is distributed between the server and the individual clients for spatial query evaluation, data visualization, and data management. We enable the minimization of the requirements for system resources on the client side while maximizing system responsiveness as well as the number of connections one server can handle concurrently. Second, for prolonged periods of access to large online data, we introduce APPOINT (an Approach for Peer-to-Peer Offloading the INTernet). This is a centralized peer-to-peer approach that helps Internet users transfer large volumes of online data efficiently. In APPOINT, active clients of the client-server architecture act on the server's behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions.

1. INTRODUCTION

In recent years, enterprises in the public and private sectors have provided access to large volumes of spatial data over the Internet. Interactive work with such large volumes of online spatial data is a challenging task. We have been developing an interactive browser for accessing spatial online databases: the SAND (Spatial and Non-spatial Data) Internet Browser. Users of this browser can interactively and visually manipulate spatial data remotely. Unfortunately, interactive remote access to spatial data slows to a crawl without proper data access mechanisms. We developed two separate methods for improving the system performance that, together, form a dynamic network infrastructure that is highly scalable and provides a satisfactory user experience for interactions with large volumes of online spatial data.

The core functionality responsible for the actual database operations is performed by the server-based SAND system. SAND is a spatial database system developed at the University of Maryland [12]. The client-side SAND Internet Browser provides a graphical user interface to the facilities of SAND over the Internet. Users specify queries by choosing the desired selection conditions from a variety of menus and dialog boxes. The SAND Internet Browser is Java-based, which makes it deployable across many platforms. In addition, since Java has often been installed on target computers beforehand, our clients can be deployed on these systems with little or no need for any additional software installation or customization. The system can start being utilized immediately without any prior setup, which can be extremely beneficial in time-sensitive usage scenarios such as emergencies.

There are two ways to deploy SAND. First, any standard Web browser can be used to retrieve and run the client piece (SAND Internet Browser) as a Java application or an applet. This way, users across various platforms can continuously access large spatial data in a remote location with little or no need for any preceding software installation. The second option is to use a stand-alone SAND Internet Browser along with a locally-installed Internet-enabled database management system (server piece). In this case, the SAND Internet Browser can still be utilized to view data from remote locations. However, frequently accessed data can be downloaded to the local database on demand, and subsequently accessed locally. Power users can also upload large volumes of spatial data back to the remote server using this enhanced client.

We focused our efforts in two directions. We first aimed at developing a client-server architecture with efficient caching methods to balance local resources on one side and the significant latency of the network connection on the other. The low bandwidth of this connection is the primary concern in both cases. The outcome of this research primarily addresses the issues of our first type of usage (i.e., as a remote browser application or an applet) for our browser and other similar applications. The second direction aims at helping users who wish to manipulate large volumes of online data for prolonged periods. We have developed a centralized peer-to-peer approach to provide the users with the ability to transfer large volumes of data (i.e., whole data sets to the local database) more efficiently by better utilizing the distributed network resources among active clients of a client-server architecture. We call this architecture APPOINT (Approach for Peer-to-Peer Offloading the INTernet). The results of this research primarily address the issues of the second type of usage for our SAND Internet Browser (i.e., as a stand-alone application).

2. THE CLIENT-SERVER APPROACH

Traditionally, Geographic Information Systems (GIS) such as ArcInfo from ESRI [2] and many spatial databases are designed to be stand-alone products. The spatial database is kept on the same computer or local area network from where it is visualized and queried. This architecture allows for instantaneous transfer of large amounts of data between the spatial database and the visualization module, so that it is perfectly reasonable to use large-bandwidth protocols for communication between them. There are, however, many applications where a more distributed approach is desirable. In these cases, the database is maintained in one location while users need to work with it from possibly distant sites over the network
(e.g., the Internet). These connections can be far slower and less reliable than local area networks, and thus it is desirable to limit the data flow between the database (server) and the visualization unit (client) in order to get a timely response from the system. Our client-server approach (Figure 1) allows the actual database engine to be run in a central location maintained by spatial database experts, while end users acquire a Java-based client component that provides them with a gateway into the SAND spatial database engine.

Figure 1: SAND Internet Browser: Client-Server architecture.

Our client is more than a simple image viewer. Instead, it operates on vector data, allowing the client to execute many operations such as zooming or locational queries locally. In essence, a simple spatial database engine is run on the client. This database keeps a copy of a subset of the whole database whose full version is maintained on the server. This is a concept similar to 'caching'. In our case, the client acts as a lightweight server in that, given data, it evaluates queries and provides the visualization module with objects to be displayed. It initiates communication with the server only in cases where it does not have enough data stored locally. Since the locally run database is only updated when additional or newer data is needed, our architecture allows the system to minimize the network traffic between the client and the server when executing the most common user-side operations such as zooming and panning. In fact, as long as the user explores one region at a time (i.e., he or she is not panning all over the database), no additional data needs to be retrieved after the initial population of the client-side database. This makes the system much more responsive than the Web mapping services. Due to the complexity of evaluating arbitrary queries (i.e., queries more complex than the window queries that are needed for database visualization), we do not perform user-specified queries on the client. All user queries are still evaluated on the server side, and the results are downloaded onto the client for display. However, assuming that the queries are selective enough (i.e., there are far fewer elements returned from the query than the number of elements in the database), the response delay is usually within reasonable limits.

2.1 Client-Server Communication

As mentioned above, the SAND Internet Browser is the client piece of the remotely accessible spatial database server built around the SAND kernel. In order to communicate with the server, whose application programming interface (API) is a Tcl-based scripting language, a servlet specifically designed to interface the SAND Internet Browser with the SAND kernel is required on the server side. This servlet listens on a given port of the server for incoming requests from the client. It translates these requests into the SAND-Tcl language. Next, it transmits these SAND-Tcl commands or scripts to the SAND kernel. After results are provided by the kernel, the servlet fetches and processes them, and then sends those results back to the originating client.

Once the Java servlet is launched, it waits for a client to initiate a connection. It handles both requests for the actual client Java code (needed when the client is run as an applet) and the SAND traffic. When the client piece is launched, it connects back to the SAND servlet; from then on, the communication is driven by the client piece, and the server only responds to the client's queries. The client initiates a transaction by sending a query. The Java servlet parses the query and creates a corresponding SAND-Tcl expression or script in the SAND kernel's native format. It is then sent to the kernel for evaluation or execution. The kernel's response naturally depends on the query and can be a boolean value, a number, a string representing a value (e.g., a default color), or a whole tuple (e.g., in response to a nearest tuple query). If a script was sent to the kernel (e.g., requesting all the tuples matching some criteria), then an arbitrary amount of data can be returned by the SAND server. In this case, the data is first compressed before it is sent over the network to the client. The data stream gets decompressed at the client before the results are parsed. Notice that if another spatial database were to be used instead of the SAND kernel, then only a simple modification to the servlet would need to be made in order for the SAND Internet Browser to function properly. In particular, the queries sent by the client would need to be recoded into another query language native to this different spatial database. The format of the protocol used for communication between the servlet and the client is unaffected.

3. THE PEER-TO-PEER APPROACH

Many users may want to work on a complete spatial data set for a prolonged period of time. In this case, making an initial investment of downloading the whole data set may be needed to guarantee a satisfactory session. Unfortunately, spatial data tends to be large. A few download requests for a large data set from a set of idle clients waiting to be served can slow the server to a crawl. This is due to the fact that the common client-server approach to transferring data between the two ends of a connection assumes a designated role for each one of the ends (i.e., some clients and a server).

We built APPOINT as a centralized peer-to-peer system to demonstrate our approach for improving common client-server systems. A server still exists. There is a central source for the data and a decision mechanism for the service. The environment still functions as a client-server environment under many circumstances. Yet, unlike many common client-server environments, APPOINT maintains more information about the clients. This includes inventories of what each client has downloaded, their availabilities, etc.
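This server-side bookkeeping can be sketched in Java, the language the SAND system is written in. The class and method names below are our own illustration, not the actual APPOINT library: a directory that records, for each active client, which files it holds and whether it has volunteered its idle resources, and that appoints a suitable peer for a request when one exists.

```java
import java.util.*;

// Illustrative sketch (names are hypothetical, not from the APPOINT code):
// the directory the server keeps about its active clients.
public class PeerDirectory {
    static class PeerInfo {
        final Set<String> inventory = new HashSet<>(); // files held locally
        boolean available = false;                     // volunteered and online
    }

    private final Map<String, PeerInfo> peers = new HashMap<>();

    // A peer "gets on the table" when it volunteers its idle resources.
    public void setAvailable(String peerId, boolean available) {
        peers.computeIfAbsent(peerId, k -> new PeerInfo()).available = available;
    }

    // Record that a peer has finished downloading (and can now serve) a file.
    public void recordDownload(String peerId, String file) {
        peers.computeIfAbsent(peerId, k -> new PeerInfo()).inventory.add(file);
    }

    // When the server is congested, appoint an available peer that already
    // holds the requested file; an empty result means the server serves itself.
    public Optional<String> appoint(String file, String requester) {
        return peers.entrySet().stream()
                .filter(e -> !e.getKey().equals(requester))
                .filter(e -> e.getValue().available)
                .filter(e -> e.getValue().inventory.contains(file))
                .map(Map.Entry::getKey)
                .findFirst();
    }

    public static void main(String[] args) {
        PeerDirectory dir = new PeerDirectory();
        dir.setAvailable("clientA", true);
        dir.recordDownload("clientA", "roads.shp");
        // clientB asks for roads.shp: clientA is appointed instead of the server.
        System.out.println(dir.appoint("roads.shp", "clientB").orElse("server"));
        // No peer holds rivers.shp yet, so the server handles it itself.
        System.out.println(dir.appoint("rivers.shp", "clientB").orElse("server"));
    }
}
```

A real implementation would also weigh peer location, bandwidth, and load (the selection criteria discussed below) rather than taking the first match.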
When the client-server service starts to perform poorly, or a request for a data item comes from a client with a poor connection to the server, APPOINT can start appointing appropriate active clients of the system to serve on behalf of the server, i.e., clients who have already volunteered their services and can take on the role of peers (hence, moving from a client-server scheme to a peer-to-peer scheme). The directory service for the active clients is still performed by the server, but the server no longer serves all of the requests. In this scheme, clients are used mainly for the purpose of sharing their networking resources rather than introducing new content, and hence they help offload the server and scale up the service. The existence of a server makes the management of dynamic peers simpler than in pure peer-to-peer approaches, where each peer that needs to make a decision must use a flood of messages to discover who is still active in the system. The server is also the main source of data, and under regular circumstances it may not forward the service.

Data is assumed to be formed of files. A single file forms the atomic means of communication, and APPOINT optimizes requests with respect to these atomic requests. Frequently accessed data sets are replicated as a byproduct of having been requested by a large number of users. This opens up the potential for bypassing the server in future downloads of the data by other users, as there are now many new points of access to it. Bypassing the server is useful when the server's bandwidth is limited. The existence of a server assures that unpopular data is also available at all times. The service depends on the availability of the server. The server is now more resilient to congestion, as the service is more scalable. Backups and other maintenance activities are already being performed on the server, and hence no extra administrative effort is needed for the dynamic peers. If a peer goes down, no extra precautions are taken. In fact, APPOINT does not require any additional resources from an already existing client-server environment but, instead, expands its capability. The peers simply add themselves to, or remove themselves from, a table on the server.

Uploading data is achieved in a manner similar to downloading data. For uploads, the active clients can again be utilized. Users can upload their data to a set of peers other than the server if the server is busy or resides in a distant location. Eventually the data is propagated to the server. All of the operations are performed transparently to the clients. Upon initial connection to the server, they can be queried as to whether or not they want to share their idle networking time and disk space. The rest of the operations follow transparently after the initial contact. APPOINT works on the application layer, not on lower layers. This achieves platform independence and easy deployment of the system. APPOINT is not a replacement for, but an addition to, current client-server architectures. We developed a library of function calls that, when placed in a client-server architecture, starts the service. We are developing advanced peer selection schemes that incorporate the location of active clients, the bandwidth among active clients, the size of the data to be transferred, the load on active clients, and the availability of active clients to form a complete means of selecting the best clients that can become efficient alternatives to the server.

With APPOINT we are defining a very simple API that could easily be used within an existing client-server system. Instead of a denial of service or a slow connection, this API can be utilized to forward the service appropriately. The API for the server side is:

The server, after starting the APPOINT service, can make all of the data files available to the clients by using the makeFileAvailable method. This will enable APPOINT to treat the server as one of the peers. The
two callback methods of the server are invoked when a file is received from a client, or when an error is encountered while receiving a file from a client.\nAPPOINT guarantees that at least one of the callbacks will be called so that the user (who may not be online anymore) can always be notified (e.g., via email).\nClients localizing large data files can make these files available to the public by using the makeFileAvailable method on the client side.\nFor example, in our SAND Internet Browser, we have the localization of spatial data as a function that can be chosen from our menus.\nThis functionality enables users to download data sets completely to their local disks before starting their queries or analysis.\nIn our implementation, we have calls to the APPOINT service both on the client and the server sides as mentioned above.\nHence, when a localization request comes to the SAND Internet Browser, the browser leaves the decisions to optimally find and localize a data set to the APPOINT service.\nOur server also makes its data files available over APPOINT.\nThe mechanism for the localization operation is shown in more detail, with the APPOINT protocols, in Figure 2.\nFigure 2: The localization operation in APPOINT.\nThe upload operation is performed in a similar fashion.\n4.\nRELATED WORK\nThere has been a substantial amount of research on remote access to spatial data.\nOne specific approach has been adopted by numerous Web-based mapping services (MapQuest [5], MapsOnUs [6], etc.).\nThe goal in this approach is to enable remote users, typically only equipped with standard Web browsers, to access the company's spatial database server and retrieve information in the form of pictorial maps from it.\nThe solution presented by most of these vendors is based on performing all the calculations on the server side and transferring only bitmaps that represent results of user queries and commands.\nAlthough the advantage of this solution is the minimization of both 
hardware and software resources on the client side, the resulting product has severe limitations in terms of available functionality and response time (each user action results in a new bitmap being transferred to the client).\nWork described in [9] examines a client-server architecture for viewing large images that operates over a low-bandwidth network connection.\nIt presents a technique based on wavelet transformations that minimizes the amount of data that needs to be transferred over the network between the server and the client.\nIn this case, while the server holds the full representation of the large image, only a limited amount of data needs to be transferred to the client to enable it to display a currently requested view into the image.\nOn the client side, the image is reconstructed into a pyramid representation to speed up zooming and panning operations.\nBoth the client and the server keep a common mask that indicates what parts of the image are available on the client and what needs to be requested.\nThis also allows dropping unnecessary parts of the image from the main memory on the server.\nOther related work has been reported in [16] where a client-server architecture is described that is designed to provide end users with access to a server.\nIt is assumed that this data server manages vast databases that are impractical to store on individual clients.\nThis work blends raster data management (stored in pyramids [22]) with vector data stored in quadtrees [19, 20].\nFor our peer-to-peer transfer approach (APPOINT), Napster is the forefather: a directory service is centralized on a server and users exchange music files that they have stored on their local disks.\nOur application domain, where the data is already freely available to the public, forms a prime candidate for such a peer-to-peer approach.\nGnutella is a pure (decentralized) peer-to-peer file exchange system.\nUnfortunately, it suffers from scalability issues, i.e., 
floods of messages between peers in order to map connectivity in the system are required.\nOther systems followed these popular systems, each addressing a different flavor of sharing over the Internet.\nMany peer-to-peer storage systems have also recently emerged.\nPAST [18], Eternity Service [7], CFS [10], and OceanStore [15] are some peer-to-peer storage systems.\nSome of these systems have focused on anonymity while others have focused on persistence of storage.\nAlso, other approaches, like SETI@Home [21], made other resources, such as idle CPUs, work together over the Internet to solve large-scale computational problems.\nOur goal is different from these approaches.\nWith APPOINT, we want to improve existing client-server systems in terms of performance by using idle networking resources among active clients.\nHence, other issues like anonymity, decentralization, and persistence of storage were less important in our decisions.\nConfirming the authenticity of the indirectly delivered data sets is not yet addressed with APPOINT.\nWe want to expand our research, in the future, to address this issue.\nFrom our perspective, although APPOINT employs some of the techniques used in peer-to-peer systems, it is also closely related to current Web caching architectures.\nSquirrel [13] forms the middle ground.\nIt creates a pure peer-to-peer collaborative Web cache among the Web browser caches of the machines in a local-area network.\nExcept for this recent peer-to-peer approach, Web caching is mostly a well-studied topic in the realm of server\/proxy level caching [8, 11, 14, 17].\nCollaborative Web caching systems, the most relevant of these for our research, focus on creating either a hierarchical, hash-based, central directory-based, or multicast-based caching scheme.\nWe do not compete with these approaches.\nIn fact, APPOINT can work in tandem with collaborative Web caching if they are deployed together.\nWe try to address the situation where a request arrives at a 
server, meaning all the caches report a miss.\nHence, the point where the server is reached can be used to take a central decision but then the actual service request can be forwarded to a set of active clients, i.e., the download and upload operations.\nCache misses are especially common in the type of large data-based services on which we are working.\nMost of the Web caching schemes that are in use today employ a replacement policy that gives priority to replacing the largest-sized items over smaller-sized ones.\nHence, these policies would lead to the immediate replacement of our relatively large data files even though they may be used frequently.\nIn addition, in our case, the user community that accesses a certain data file may also be very dispersed from a network point of view and thus cannot take advantage of any of the caching schemes.\nFinally, none of the Web caching methods address the symmetric issue of large data uploads.\n5.\nA SAMPLE APPLICATION\nFedStats [1] is an online source that enables ordinary citizens to access official statistics of numerous federal agencies without knowing in advance which agency produced them.\nWe are using a FedStats data set as a testbed for our work.\nOur goal is to provide more power to the users of FedStats by utilizing the SAND Internet Browser.\nAs an example, we looked at two data files corresponding to Environmental Protection Agency (EPA)-regulated facilities that have chlorine and arsenic, respectively.\nFor each file, we had the following information available: EPA-ID, name, street, city, state, zip code, latitude, longitude, followed by flags to indicate if that facility is in the following EPA programs: Hazardous Waste, Wastewater Discharge, Air Emissions, Abandoned Toxic Waste Dump, and Active Toxic Release.\nWe put this data into a SAND relation where the spatial attribute `location' corresponds to the latitude and longitude.\nSome queries that can be handled with our system on this data 
include:\n1.\nFind all EPA-regulated facilities that have arsenic and participate in the Air Emissions program, and: (a) Lie in Georgia to Illinois, alphabetically.\n(b) Lie within Arkansas or 30 miles within its border.\n(c) Lie within 30 miles of the border of Arkansas (i.e., both sides of the border).\n2.\nFor each EPA-regulated facility that has arsenic, find all EPA-regulated facilities that have chlorine and: (a) That are closer to it than to any other EPA-regulated facility that has arsenic.\n(b) That participate in the Air Emissions program and are closer to it than to any other EPA-regulated facility that has arsenic.\nIn order to avoid reporting a particular facility more than once, we use our `group by EPA-ID' mechanism.\nFigure 3 illustrates the output of an example query that finds all arsenic sites within a given distance of the border of Arkansas.\nThe sites are obtained in an incremental manner with respect to a given point.\nThis ordering is shown by using different color shades.\nWith this example data, it is possible to work with the SAND Internet Browser online as an applet (connecting to a remote server) or after localizing the data and then opening it locally.\nIn the first case, for each action taken, the client-server architecture will decide what to ask for from the server.\nIn the latter case, the browser will use the peer-to-peer APPOINT architecture for first localizing the data.\n6.\nCONCLUDING REMARKS\nAn overview of our efforts in providing remote access to large spatial data has been given.\nWe have outlined our approaches and introduced their individual elements.\nOur client-server approach improves the system performance by using efficient caching methods when a remote server is accessed from thin clients.\nAPPOINT forms an alternative approach that improves performance under an existing client-server system by using idle client resources when individual users want to work on a data set for longer periods of time using their client 
computers.\nFor the future, we envision development of new efficient algorithms that will support large online data transfers within our peer-to-peer approach using multiple peers simultaneously.\nWe assume that a peer (client) can become unavailable at any time and hence provisions need to be in place to handle such a situation.\nTo address this, we will augment our methods to include efficient dynamic updates.\nUpon completion of this step of our work, we also plan to run comprehensive performance studies on our methods.\nAnother issue is how to access data from different sources in different formats.\nIn order to access multiple data sources in real time, it is desirable to look for a mechanism that would support data exchange by design.\nThe XML protocol [3] has emerged to become virtually a standard for describing and communicating arbitrary data.\nGML [4] is an XML variant that is becoming increasingly popular for exchange of geographical data.\nWe are currently working on making SAND XML-compatible so that the user can instantly retrieve spatial data provided by various agencies in the GML format via their Web services and then explore, query, or process this data further within the SAND framework.\nThis will turn the SAND system into a universal tool for accessing any spatial data set as it will be deployable on most platforms, work efficiently given large amounts of data, be able to tap any GML-enabled data source, and provide an easy-to-use graphical user interface.\nThis will also convert the SAND system from a research-oriented prototype into a product that could be used by end users for accessing, viewing, and analyzing their data efficiently and with minimum effort.","keyphrases":["remot access","larg spatial data","internet","spatial queri evalu","data visual","data manag","network latenc","client-server architectur","central peer-to-peer approach","sand","dynam network infrastructur","web browser","internet-enabl databas manag 
system","gi","client-server","peer-to-peer"],"prmu":["P","P","P","P","P","P","P","M","M","U","M","U","M","U","U","U"]} {"id":"J-70","title":"Self-interested Automated Mechanism Design and Implications for Optimal Combinatorial Auctions","abstract":"Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently proposed approach -- called automated mechanism design -- a mechanism is computed for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike the earlier work on automated mechanism design that studied a benevolent designer, in this paper we study automated mechanism design problems where the designer is self-interested. In this case, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we show that designing optimal deterministic mechanisms is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. We then show how allowing for randomization in the mechanism makes problems in this setting computationally easy. 
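A minimal illustration of why randomization makes the problem tractable: for flatly represented types and outcomes, an optimal randomized mechanism is the solution of a linear program over outcome probabilities and payments. The sketch below encodes a single-agent, two-type, payment-maximizing instance; the utilities, prior, and variable names are illustrative assumptions, not taken from the paper.

```python
# Sketch (illustrative numbers, not from the paper): payment-maximizing AMD
# for one agent with two types, solved as a linear program.
from scipy.optimize import linprog

# Types t1, t2 with equal prior; outcomes {o0 (fallback), o1}.
# Assumed utilities: u(t1, o1) = 1, u(t2, o1) = 3; fallback utility is 0.
gamma = [0.5, 0.5]
# Variables: p(o0|t1), p(o1|t1), p(o0|t2), p(o1|t2), pi1, pi2
c = [0, 0, 0, 0, -gamma[0], -gamma[1]]  # linprog minimizes, so negate payments

A_ub = [
    [0, -1, 0,  0,  1,  0],  # IR for t1:  pi1 <= 1 * p(o1|t1)
    [0,  0, 0, -3,  0,  1],  # IR for t2:  pi2 <= 3 * p(o1|t2)
    [0, -1, 0,  1,  1, -1],  # IC: t1 prefers truth to reporting t2
    [0,  3, 0, -3, -1,  1],  # IC: t2 prefers truth to reporting t1
]
b_ub = [0, 0, 0, 0]
A_eq = [[1, 1, 0, 0, 0, 0],  # p(.|t1) is a probability distribution
        [0, 0, 1, 1, 0, 0]]  # p(.|t2) is a probability distribution
b_eq = [1, 1]
bounds = [(0, 1)] * 4 + [(None, None)] * 2  # payments are unbounded reals

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun)  # optimal expected revenue for this toy instance
```

The IR and IC rows are linear precisely because the decision variables are the probabilities p(o|t) and the payments pi(t); the same construction scales to any finite set of types and outcomes, which is the source of the polynomial-time result for randomized mechanisms.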
Finally, we show that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have best-only preferences. We show that here, too, designing an optimal deterministic auction is NP-complete, but designing an optimal randomized auction is easy.","lvl-1":"Self-interested Automated Mechanism Design and Implications for Optimal Combinatorial Auctions\u2217 Vincent Conitzer Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA conitzer@cs.cmu.edu Tuomas Sandholm Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA sandholm@cs.cmu.edu ABSTRACT Often, an outcome must be chosen on the basis of the preferences reported by a group of agents.\nThe key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves.\nMechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen.\nIn a recently proposed approach-called automated mechanism design-a mechanism is computed for the preference aggregation setting at hand.\nThis has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time.\nUnlike the earlier work on automated mechanism design that studied a benevolent designer, in this paper we study automated mechanism design problems where the designer is self-interested.\nIn this case, the center cares only about which outcome is chosen and what payments are made to it.\nThe reason that the agents'' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism.\nIn this setting, we show that designing optimal deterministic mechanisms is NP-complete in two important special cases: when the center 
is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen.\nWe then show how allowing for randomization in the mechanism makes problems in this setting computationally easy.\nFinally, we show that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have best-only preferences.\nWe show that here, too, designing an optimal deterministic auction is NP-complete, but designing an optimal randomized auction is easy.\nCategories and Subject Descriptors F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Algorithms, Economics, Theory 1.\nINTRODUCTION In multiagent settings, often an outcome must be chosen on the basis of the preferences reported by a group of agents.\nSuch outcomes could be potential presidents, joint plans, allocations of goods or resources, etc.\nThe preference aggregator generally does not know the agents'' preferences a priori.\nRather, the agents report their preferences to the coordinator.\nUnfortunately, an agent may have an incentive to misreport its preferences in order to mislead the mechanism into selecting an outcome that is more desirable to the agent than the outcome that would be selected if the agent revealed its preferences truthfully.\nSuch manipulation is undesirable because preference aggregation mechanisms are tailored to aggregate preferences in a socially desirable way, and if the agents reveal their preferences insincerely, a socially undesirable outcome may be chosen.\nManipulability is a pervasive problem across preference aggregation mechanisms.\nA seminal negative result, the Gibbard-Satterthwaite theorem, shows that under any nondictatorial preference aggregation scheme, if there are at least 3 possible outcomes, there are 
preferences under which an agent is better off reporting untruthfully [10, 23].\n(A preference aggregation scheme is called dictatorial if one of the agents dictates the outcome no matter what preferences the other agents report.)\nWhat the aggregator would like to do is design a preference aggregation mechanism so that 1) the self-interested agents are motivated to report their preferences truthfully, and 2) the mechanism chooses an outcome that is desirable from the perspective of some objective.\nThis is the classic setting of mechanism design in game theory.\nIn this paper, we study the case where the designer is self-interested, that is, the designer does not directly care about how the outcome relates to the agents'' preferences, but is rather concerned with its own agenda for which outcome should be chosen, and with maximizing payments to itself.\nThis is the mechanism design setting most relevant to electronic commerce.\nIn the case where the mechanism designer is interested in maximizing some notion of social welfare, the importance of collecting the agents'' preferences is clear.\nIt is perhaps less obvious why they should be collected when the designer is self-interested and hence its objective is not directly related to the agents'' preferences.\nThe reason for this is that often the agents'' preferences impose limits on how the designer chooses the outcome and payments.\nThe most common such constraint is that of individual rationality (IR), which means that the mechanism cannot make any agent worse off than the agent would have been had it not participated in the mechanism.\nFor instance, in the setting of optimal auction design, the designer (auctioneer) is only concerned with how much revenue is collected, and not per se with how well the allocation of the good (or goods) corresponds to the agents'' preferences.\nNevertheless, the designer cannot force an agent to pay more than its valuation for the bundle of goods allocated to it.\nTherefore, 
even a self-interested designer will choose an outcome that makes the agents reasonably well off.\nOn the other hand, the designer will not necessarily choose a social welfare maximizing outcome.\nFor example, if the designer always chooses an outcome that maximizes social welfare with respect to the reported preferences, and forces each agent to pay the difference between the utility it has now and the utility it would have had if it had not participated in the mechanism, it is easy to see that agents may have an incentive to misreport their preferences-and this may actually lead to less revenue being collected.\nIndeed, one of the counterintuitive results of optimal auction design theory is that sometimes the good is allocated to nobody even when the auctioneer has a reservation price of 0.\nClassical mechanism design provides some general mechanisms, which, under certain assumptions, satisfy some notion of nonmanipulability and maximize some objective.\nThe upside of these mechanisms is that they do not rely on (even probabilistic) information about the agents'' preferences (e.g., the Vickrey-Clarke-Groves (VCG) mechanism [24, 4, 11]), or they can be easily applied to any probability distribution over the preferences (e.g., the dAGVA mechanism [8, 2], the Myerson auction [18], and the Maskin-Riley multi-unit auction [17]).\nHowever, the general mechanisms also have significant downsides: \u2022 The most famous and most broadly applicable general mechanisms, VCG and dAGVA, only maximize social welfare.\nIf the designer is self-interested, as is the case in many electronic commerce settings, these mechanisms do not maximize the designer``s objective.\n\u2022 The general mechanisms that do focus on a selfinterested designer are only applicable in very restricted settings-such as Myerson``s expected revenue maximizing auction for selling a single item, and Maskin and Riley``s expected revenue maximizing auction for selling multiple identical units of an 
item.\n\u2022 Even in the restricted settings in which these mechanisms apply, the mechanisms only allow for payment maximization.\nIn practice, the designer may also be interested in the outcome per se.\nFor example, an auctioneer may care which bidder receives the item.\n\u2022 It is often assumed that side payments can be used to tailor the agents'' incentives, but this is not always practical.\nFor example, in barter-based electronic marketplaces-such as Recipco, firstbarter.com, BarterOne, and Intagio-side payments are not allowed.\nFurthermore, among software agents, it might be more desirable to construct mechanisms that do not rely on the ability to make payments, because many software agents do not have the infrastructure to make payments.\nIn contrast, we follow a recent approach where the mechanism is designed automatically for the specific problem at hand.\nThis approach addresses all of the downsides listed above.\nWe formulate the mechanism design problem as an optimization problem.\nThe input is characterized by the number of agents, the agents'' possible types (preferences), and the aggregator``s prior distributions over the agents'' types.\nThe output is a nonmanipulable mechanism that is optimal with respect to some objective.\nThis approach is called automated mechanism design.\nThe automated mechanism design approach has four advantages over the classical approach of designing general mechanisms.\nFirst, it can be used even in settings that do not satisfy the assumptions of the classical mechanisms (such as availability of side payments or that the objective is social welfare).\nSecond, it may allow one to circumvent impossibility results (such as the Gibbard-Satterthwaite theorem) which state that there is no mechanism that is desirable across all preferences.\nWhen the mechanism is designed for the setting at hand, it does not matter that it would not work more generally.\nThird, it may yield better mechanisms (in terms of stronger 
nonmanipulability guarantees and\/or better outcomes) than classical mechanisms because the mechanism capitalizes on the particulars of the setting (the probabilistic information that the designer has about the agents'' types).\nGiven the vast amount of information that parties have about each other today, this approach is likely to lead to tremendous savings over classical mechanisms, which largely ignore that information.\nFor example, imagine a company automatically creating its procurement mechanism based on statistical knowledge about its suppliers, rather than using a classical descending procurement auction.\nFourth, the burden of design is shifted from humans to a machine.\nHowever, automated mechanism design requires the mechanism design optimization problem to be solved anew for each setting.\nHence its computational complexity becomes a key issue.\nPrevious research has studied this question for benevolent designers-that wish to maximize, for example, social welfare [5, 6].\nIn this paper we study the computational complexity of automated mechanism design in the case of a self-interested designer.\nThis is an important setting for automated mechanism design due to the shortage of general mechanisms in this area, and the fact that in most e-commerce settings the designer is self-interested.\nWe also show that this problem is closely related to a particular optimal (revenue-maximizing) combinatorial auction design problem.\nThe rest of this paper is organized as follows.\nIn Section 2, we justify the focus on nonmanipulable mechanisms.\nIn Section 3, we define the problem we study.\nIn Section 4, we show that designing an optimal deterministic mechanism is NP-complete even when the designer only cares about the payments made to it.\nIn Section 5, we show that designing an optimal deterministic mechanism is also NP-complete when payments are not possible and the designer is only interested in the outcome chosen.\nIn Section 6, we show that an optimal 
randomized mechanism can be designed in polynomial time even in the general case.\nFinally, in Section 7, we show that for designing optimal combinatorial auctions under best-only preferences, our results on AMD imply that this problem is NP-complete for deterministic auctions, but easy for randomized auctions.\n2.\nJUSTIFYING THE FOCUS ON NONMANIPULABLE MECHANISMS Before we define the computational problem of automated mechanism design, we should justify our focus on nonmanipulable mechanisms.\nAfter all, it is not immediately obvious that there are no manipulable mechanisms that, even when agents report their types strategically and hence sometimes untruthfully, still reach better outcomes (according to whatever objective we use) than any nonmanipulable mechanism.\nThis does, however, turn out to be the case: given any mechanism, we can construct a nonmanipulable mechanism whose performance is identical, as follows.\nWe build an interface layer between the agents and the original mechanism.\nThe agents report their preferences (or types) to the interface layer; subsequently, the interface layer inputs into the original mechanism the types that the agents would have strategically reported to the original mechanism, if their types were as declared to the interface layer.\nThe resulting outcome is the outcome of the new mechanism.\nSince the interface layer acts strategically on each agent``s behalf, there is never an incentive to report falsely to the interface layer; and hence, the types reported by the interface layer are the strategic types that would have been reported without the interface layer, so the results are exactly as they would have been with the original mechanism.\nThis argument is known in the mechanism design literature as the revelation principle [16].\n(There are computational difficulties with applying the revelation principle in large combinatorial outcome and type spaces [7, 22].\nHowever, because here we focus on flatly represented outcome 
and type spaces, this is not a concern here.)\nGiven this, we can focus on truthful mechanisms in the rest of the paper.\n3.\nDEFINITIONS We now formalize the automated mechanism design setting.\nDefinition 1.\nIn an automated mechanism design setting, we are given: \u2022 a finite set of outcomes O; \u2022 a finite set of N agents; \u2022 for each agent i, 1.\na finite set of types \u0398i, 2.\na probability distribution \u03b3i over \u0398i (in the case of correlated types, there is a single joint distribution \u03b3 over \u03981 \u00d7 ... \u00d7 \u0398N ), and 3.\na utility function ui : \u0398i \u00d7 O \u2192 R; 1 \u2022 An objective function whose expectation the designer wishes to maximize.\nThere are many possible objective functions the designer might have, for example, social welfare (where the designer seeks to maximize the sum of the agents'' utilities), or the minimum utility of any agent (where the designer seeks to maximize the worst utility had by any agent).\nIn both of these cases, the designer is benevolent, because the designer, in some sense, is pursuing the agents'' collective happiness.\nHowever, in this paper, we focus on the case of a self-interested designer.\nA self-interested designer cares only about the outcome chosen (that is, the designer does not care how the outcome relates to the agents'' preferences, but rather has a fixed preference over the outcomes), and about the net payments made by the agents, which flow to the designer.\nDefinition 2.\nA self-interested designer has an objective function given by g(o) + N i=1 \u03c0i, where g : O \u2192 R indicates the designer``s own preference over the outcomes, and \u03c0i is the payment made by agent i.\nIn the case where g = 0 everywhere, the designer is said to be payment maximizing.\nIn the case where payments are not possible, g constitutes the objective function by itself.\nWe now define the kinds of mechanisms under study.\nBy the revelation principle, we can restrict attention 
to truthful, direct revelation mechanisms, where agents report their types directly and never have an incentive to misreport them.\nDefinition 3.\nWe consider the following kinds of mechanism: \u2022 A deterministic mechanism without payments consists of an outcome selection function o : \u03981 \u00d7 \u03982 \u00d7 ... \u00d7 \u0398N \u2192 O. \u2022 A randomized mechanism without payments consists of a distribution selection function p : \u03981 \u00d7 \u03982 \u00d7 ... \u00d7 \u0398N \u2192 P(O), where P(O) is the set of probability distributions over O. \u2022 A deterministic mechanism with payments consists of an outcome selection function o : \u03981 \u00d7 \u03982 \u00d7 ... \u00d7 \u0398N \u2192 O and for each agent i, a payment selection function \u03c0i : \u03981 \u00d7 \u03982 \u00d7 ... \u00d7 \u0398N \u2192 R, where \u03c0i(\u03b81, ... , \u03b8N ) gives the payment made by agent i when the reported types are \u03b81, ... , \u03b8N .\n(Footnote 1: Though this follows standard game theory notation [16], the fact that the agent has both a utility function and a type is perhaps confusing.\nThe types encode the various possible preferences that the agent may turn out to have, and the agent``s type is not known to the aggregator.\nThe utility function is common knowledge, but because the agent``s type is a parameter in the agent``s utility function, the aggregator cannot know what the agent``s utility is without knowing the agent``s type.)\n\u2022 A randomized mechanism with payments consists of a distribution selection function p : \u03981 \u00d7 \u03982 \u00d7 ... \u00d7 \u0398N \u2192 P(O), and for each agent i, a payment selection function \u03c0i : \u03981 \u00d7 \u03982 \u00d7 ... 
\u00d7 \u0398N \u2192 R.2 There are two types of constraint on the designer in building the mechanism.\n3.1 Individual rationality (IR) constraints The first type of constraint is the following.\nThe utility of each agent has to be at least as great as the agent``s fallback utility, that is, the utility that the agent would receive if it did not participate in the mechanism.\nOtherwise that agent would not participate in the mechanism-and no agent``s participation can ever hurt the mechanism designer``s objective because at worst, the mechanism can ignore an agent by pretending the agent is not there.\n(Furthermore, if no such constraint applied, the designer could simply make the agents pay an infinite amount.)\nThis type of constraint is called an IR (individual rationality) constraint.\nThere are three different possible IR constraints: ex ante, ex interim, and ex post, depending on what the agent knows about its own type and the others'' types when deciding whether to participate in the mechanism.\nEx ante IR means that the agent would participate if it knew nothing at all (not even its own type).\nWe will not study this concept in this paper.\nEx interim IR means that the agent would always participate if it knew only its own type, but not those of the others.\nEx post IR means that the agent would always participate even if it knew everybody``s type.\nWe will define the latter two notions of IR formally.\nFirst, we need to formalize the concept of the fallback outcome.\nWe assume that each agent``s fallback utility is zero for each one of its types.\nThis is without loss of generality because we can add a constant term to an agent``s utility function (for a given type), without affecting the decision-making behavior of that expected utility maximizing agent [16].\nDefinition 4.\nIn any automated mechanism design setting with an IR constraint, there is a fallback outcome o0 \u2208 O where, for any agent i and any type \u03b8i \u2208 \u0398i, we have 
ui(θi, o0) = 0. (Additionally, in the case of a self-interested designer, g(o0) = 0.)
We can now define the notions of individual rationality.
Definition 5. Individual rationality (IR) is defined by:
• A deterministic mechanism is ex interim IR if for any agent i, and any type θi ∈ Θi, we have E(θ1,...,θi−1,θi+1,...,θN)|θi [ui(θi, o(θ1, ..., θN)) − πi(θ1, ..., θN)] ≥ 0. A randomized mechanism is ex interim IR if for any agent i, and any type θi ∈ Θi, we have E(θ1,...,θi−1,θi+1,...,θN)|θi Eo|θ1,...,θN [ui(θi, o) − πi(θ1, ..., θN)] ≥ 0.
• A deterministic mechanism is ex post IR if for any agent i, and any type vector (θ1, ..., θN) ∈ Θ1 × ... × ΘN, we have ui(θi, o(θ1, ..., θN)) − πi(θ1, ..., θN) ≥ 0. A randomized mechanism is ex post IR if for any agent i, and any type vector (θ1, ..., θN) ∈ Θ1 × ... × ΘN, we have Eo|θ1,...,θN [ui(θi, o) − πi(θ1, ..., θN)] ≥ 0.
² We do not randomize over payments because, as long as the agents and the designer are risk neutral with respect to payments (that is, their utility is linear in payments), there is no reason to randomize over payments.
The terms involving payments can be left out in the case where payments are not possible.
3.2 Incentive compatibility (IC) constraints
The second type of constraint says that the agents should never have an incentive to misreport their type (as justified above by the revelation principle). For this type of constraint, the two most common variants (or solution concepts) are implementation in dominant strategies and implementation in Bayes-Nash equilibrium.
Definition 6. Given an automated mechanism design setting, a mechanism is said to implement its outcome and payment functions in dominant strategies if truthtelling is always optimal even when the types reported by the other agents are already known. Formally, for any agent i, any type vector (θ1, ..., θi, ..., θN) ∈ Θ1 × ... × Θi × ... × ΘN, and any alternative type report ˆθi ∈ Θi, in the case of deterministic mechanisms we have ui(θi, o(θ1, ..., θi, ..., θN)) − πi(θ1, ..., θi, ..., θN) ≥ ui(θi, o(θ1, ..., ˆθi, ..., θN)) − πi(θ1, ..., ˆθi, ..., θN). In the case of randomized mechanisms we have Eo|θ1,...,θi,...,θN [ui(θi, o) − πi(θ1, ..., θi, ..., θN)] ≥ Eo|θ1,...,ˆθi,...,θN [ui(θi, o) − πi(θ1, ..., ˆθi, ..., θN)]. The terms involving payments can be left out in the case where payments are not possible.
Thus, in dominant strategies implementation, truthtelling is optimal regardless of what the other agents report. If it is optimal only given that the other agents are truthful, and given that one does not know the other agents' types, we have implementation in Bayes-Nash equilibrium.
Definition 7. Given an automated mechanism design setting, a mechanism is said to implement its outcome and payment functions in Bayes-Nash equilibrium if truthtelling is always optimal to an agent when that agent does not yet know anything about the other agents' types, and the other agents are telling the truth. Formally, for any agent i, any type θi ∈ Θi, and any alternative type report ˆθi ∈ Θi, in the case of deterministic mechanisms we have E(θ1,...,θi−1,θi+1,...,θN)|θi [ui(θi, o(θ1, ..., θi, ..., θN)) − πi(θ1, ..., θi, ..., θN)] ≥ E(θ1,...,θi−1,θi+1,...,θN)|θi [ui(θi, o(θ1, ..., ˆθi, ..., θN)) − πi(θ1, ..., ˆθi, ..., θN)]. In the case of randomized mechanisms we have E(θ1,...,θi−1,θi+1,...,θN)|θi Eo|θ1,...,θi,...,θN [ui(θi, o) − πi(θ1, ..., θi, ..., θN)] ≥ E(θ1,...,θi−1,θi+1,...,θN)|θi Eo|θ1,...,ˆθi,...,θN [ui(θi, o) − πi(θ1, ..., ˆθi, ..., θN)]. The terms involving payments can be left out in the case where payments are not possible.
3.3 Automated mechanism design
We can now define the computational problem we study.
Definition 8. (AUTOMATED-MECHANISM-DESIGN (AMD)) We are given:
• an automated mechanism design setting,
• an IR notion (ex interim, ex post, or none),
• a solution concept (dominant strategies or Bayes-Nash),
• whether payments are possible,
• whether randomization is possible,
• (in the decision variant of the problem) a target value G.
We are asked whether there exists a mechanism of the specified kind (in terms of payments and randomization) that satisfies both the IR notion and the solution concept, and gives an expected value of at least G for the objective.
An interesting special case is the setting where there is only one agent. In this case, the reporting agent always knows everything there is to know about the other agents' types, because there are no other agents. Since ex post and ex interim IR only differ on what an agent is assumed to know about other agents' types, the two IR concepts coincide here. Also, because implementation in dominant strategies and implementation in Bayes-Nash equilibrium only differ on what an agent is assumed to know about other agents' types, the two solution concepts coincide here. This observation will prove to be a useful tool in proving hardness results: if we prove computational hardness in the single-agent setting, this immediately implies hardness for both IR concepts, for both solution concepts, for any number of agents.
4. PAYMENT-MAXIMIZING DETERMINISTIC AMD IS HARD
In this section we demonstrate that it is NP-complete to design a deterministic mechanism that maximizes the expected sum of the payments collected from the agents. We show that this problem is hard even in the single-agent setting, thereby immediately showing it hard for both IR concepts, for both solution concepts. To
demonstrate NP-hardness, we reduce from the MINSAT problem.
Definition 9 (MINSAT). We are given a formula φ in conjunctive normal form, represented by a set of Boolean variables V and a set of clauses C, and an integer K (K < |C|). We are asked whether there exists an assignment to the variables in V such that at most K clauses in φ are satisfied.
MINSAT was recently shown to be NP-complete [14]. We can now present our result.
Theorem 1. Payment-maximizing deterministic AMD is NP-complete, even for a single agent, even with a uniform distribution over types.
Proof. It is easy to show that the problem is in NP. To show NP-hardness, we reduce an arbitrary MINSAT instance to the following single-agent payment-maximizing deterministic AMD instance. Let the agent's type set be Θ = {θc : c ∈ C} ∪ {θv : v ∈ V}, where C is the set of clauses in the MINSAT instance, and V is the set of variables. Let the probability distribution over these types be uniform. Let the outcome set be O = {o0} ∪ {oc : c ∈ C} ∪ {ol : l ∈ L}, where L is the set of literals, that is, L = {+v : v ∈ V} ∪ {−v : v ∈ V}. Let the notation v(l) = v denote that v is the variable corresponding to the literal l, that is, l ∈ {+v, −v}. Let l ∈ c denote that the literal l occurs in clause c.
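Before continuing with the reduction, it may help to make Definition 9 concrete. The following brute-force check of a MINSAT instance is a minimal sketch for illustration only (it is exponential in |V|; the encoding of the literal +v as the integer v and −v as −v is our own convention, not the paper's):

```python
from itertools import product

def minsat(clauses, num_vars, K):
    """Decide the MINSAT instance of Definition 9 by brute force:
    is there an assignment satisfying at most K clauses?
    A clause is a list of literals; the literal +v is encoded as the
    integer v and -v as -v, with variables numbered 1..num_vars."""
    best = None
    for bits in product([False, True], repeat=num_vars):
        # A literal is satisfied when its sign matches the variable's value.
        satisfied = sum(
            1 for clause in clauses
            if any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
        )
        best = satisfied if best is None else min(best, satisfied)
    return best <= K

# phi = (x1 or x2) and (not x1 or x2): setting both variables false
# satisfies only the second clause, so with K = 1 this is a "yes" instance.
```

The constraint K < |C| in Definition 9 rules out the trivial case in which every assignment is acceptable.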
Then, let the agent's utility function be given by u(θc, ol) = |Θ| + 1 for all l ∈ L with l ∈ c; u(θc, ol) = 0 for all l ∈ L with l ∉ c; u(θc, oc) = |Θ| + 1; u(θc, oc′) = 0 for all c′ ∈ C with c′ ≠ c; u(θv, ol) = |Θ| for all l ∈ L with v(l) = v; u(θv, ol) = 0 for all l ∈ L with v(l) ≠ v; u(θv, oc) = 0 for all c ∈ C. The goal of the AMD instance is G = |Θ| + (|C| − K)/|Θ|, where K is the goal of the MINSAT instance. We show the instances are equivalent.
First, suppose there is a solution to the MINSAT instance. Let the assignment of truth values to the variables in this solution be given by the function f : V → L (where v(f(v)) = v for all v ∈ V). Then, for every v ∈ V, let o(θv) = of(v) and π(θv) = |Θ|. For every c ∈ C, let o(θc) = oc; let π(θc) = |Θ| + 1 if c is not satisfied in the MINSAT solution, and π(θc) = |Θ| if c is satisfied. It is straightforward to check that the IR constraint is satisfied. We now check that the agent has no incentive to misreport. If the agent's type is some θv, then any other report will give it an outcome that is no better, for a payment that is no less, so it has no incentive to misreport. If the agent's type is some θc where c is a satisfied clause, again, any other report will give it an outcome that is no better, for a payment that is no less, so it has no incentive to misreport. The final case to check is where the agent's type is some θc where c is an unsatisfied clause. In this case, we observe that for none of the types, reporting it leads to an outcome ol for a literal l ∈ c, precisely because the clause is not satisfied in the MINSAT instance. Because also, no type besides θc leads to the outcome oc, reporting any other type will give an outcome with utility 0, while still forcing a payment of at least |Θ| from the agent. Clearly the agent is better off reporting truthfully, for a total utility of 0. This establishes that the agent never has an incentive to misreport. Finally, we show that the goal is reached. If s is the number of satisfied clauses in the MINSAT solution (so that s ≤ K), the expected payment from this mechanism is (|V||Θ| + s|Θ| + (|C| − s)(|Θ| + 1))/|Θ| ≥ (|V||Θ| + K|Θ| + (|C| − K)(|Θ| + 1))/|Θ| = |Θ| + (|C| − K)/|Θ| = G. So there is a solution to the AMD instance.
Now suppose there is a solution to the AMD instance, given by an outcome function o and a payment function π. First, suppose there is some v ∈ V such that o(θv) ∉ {o+v, o−v}. Then the utility that the agent derives from the given outcome for this type is 0, and hence, by IR, no payment can be extracted from the agent for this type. Because, again by IR, the maximum payment that can be extracted for any other type is |Θ| + 1, it follows that the maximum expected payment that could be obtained is at most (|Θ| − 1)(|Θ| + 1)/|Θ| < |Θ| < G, contradicting that this is a solution to the AMD instance. It follows that in the solution to the AMD instance, for every v ∈ V, o(θv) ∈ {o+v, o−v}. We can interpret this as an assignment of truth values to the variables: v is set to true if o(θv) = o+v, and to false if o(θv) = o−v. We claim this assignment is a solution to the MINSAT instance. By the IR constraint, the maximum payment we can extract from any type θv is |Θ|. Because there can be no incentives for the agent to report falsely, for any clause c satisfied by the given assignment, the maximum payment we can extract for the corresponding type θc is |Θ|. (For if we extracted more from this type, the agent's utility in this case would be less than 1; and if v is the variable satisfying c in the assignment, so that o(θv) = ol where l occurs in c, then the agent would be better off reporting θv instead of the truthful report θc, to get an outcome worth |Θ| + 1 to it while having to pay at most |Θ|.) Finally, for any unsatisfied clause c, by the IR constraint, the maximum payment we can extract for the corresponding type θc is |Θ| + 1. It follows that the expected payment from our mechanism is at most (|V||Θ| + s|Θ| + (|C| − s)(|Θ| + 1))/|Θ|, where s is the number of satisfied clauses. Because our mechanism achieves the goal, it follows that (|V||Θ| + s|Θ| + (|C| − s)(|Θ| + 1))/|Θ| ≥ G, which by simple algebraic manipulations is equivalent to s ≤ K. So there is a solution to the MINSAT instance.
Because payment-maximizing AMD is just the special case of AMD for a self-interested designer where the designer has no preferences over the outcome chosen, this immediately implies hardness for the general case of AMD for a self-interested designer where payments are possible. However, it does not yet imply hardness for the special case where payments are not possible. We will prove hardness in this case in the next section.
5. SELF-INTERESTED DETERMINISTIC AMD WITHOUT PAYMENTS IS HARD
In this section we demonstrate that it is NP-complete to design a deterministic mechanism that maximizes the expectation of the designer's objective when payments are not possible. We show that this problem is hard even in the single-agent setting, thereby immediately showing it hard for both IR concepts, for both solution concepts.
Theorem 2. Without payments, deterministic AMD for a self-interested designer is NP-complete, even for a single agent, even with a uniform distribution over types.
Proof. It is easy to show that the problem is in NP. To show NP-hardness, we reduce an arbitrary MINSAT instance to the following single-agent self-interested deterministic AMD without payments instance. Let the agent's type
set be Θ = {θc : c ∈ C} ∪ {θv : v ∈ V}, where C is the set of clauses in the MINSAT instance, and V is the set of variables. Let the probability distribution over these types be uniform. Let the outcome set be O = {o0} ∪ {oc : c ∈ C} ∪ {ol : l ∈ L} ∪ {o∗}, where L is the set of literals, that is, L = {+v : v ∈ V} ∪ {−v : v ∈ V}. Let the notation v(l) = v denote that v is the variable corresponding to the literal l, that is, l ∈ {+v, −v}. Let l ∈ c denote that the literal l occurs in clause c. Then, let the agent's utility function be given by u(θc, ol) = 2 for all l ∈ L with l ∈ c; u(θc, ol) = −1 for all l ∈ L with l ∉ c; u(θc, oc) = 2; u(θc, oc′) = −1 for all c′ ∈ C with c′ ≠ c; u(θc, o∗) = 1; u(θv, ol) = 1 for all l ∈ L with v(l) = v; u(θv, ol) = −1 for all l ∈ L with v(l) ≠ v; u(θv, oc) = −1 for all c ∈ C; u(θv, o∗) = −1. Let the designer's objective function be given by g(o∗) = |Θ| + 1; g(ol) = |Θ| for all l ∈ L; g(oc) = |Θ| for all c ∈ C. The goal of the AMD instance is G = |Θ| + (|C| − K)/|Θ|, where K is the goal of the MINSAT instance. We show the instances are equivalent.
First, suppose there is a solution to the MINSAT instance. Let the assignment of truth values to the variables in this solution be given by the function f : V → L (where v(f(v)) = v for all v ∈ V). Then, for every v ∈ V, let o(θv) = of(v). For every c ∈ C that is satisfied in the MINSAT solution, let o(θc) = oc; for every unsatisfied c ∈ C, let o(θc) = o∗. It is straightforward to check that the IR constraint is satisfied. We now check that the agent has no incentive to misreport. If the agent's type is some θv, it is getting the maximum utility for that type, so it has no incentive to misreport. If the agent's type is some θc where c is a satisfied clause, again, it is getting the maximum utility for that type, so it has no incentive to misreport. The final case to check is where the agent's type is some θc where c is an unsatisfied clause. In this case, we observe that for none of the types, reporting it leads to an outcome ol for a literal l ∈ c, precisely because the clause is not satisfied in the MINSAT instance. Because also, no type leads to the outcome oc, there is no outcome that the mechanism ever selects that would give the agent utility greater than 1 for type θc, and hence the agent has no incentive to report falsely. This establishes that the agent never has an incentive to misreport. Finally, we show that the goal is reached. If s is the number of satisfied clauses in the MINSAT solution (so that s ≤ K), then the expected value of the designer's objective function is (|V||Θ| + s|Θ| + (|C| − s)(|Θ| + 1))/|Θ| ≥ (|V||Θ| + K|Θ| + (|C| − K)(|Θ| + 1))/|Θ| = |Θ| + (|C| − K)/|Θ| = G. So there is a solution to the AMD instance.
Now suppose there is a solution to the AMD instance, given by an outcome function o.
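The incentive and participation checks carried out by hand in this proof direction can also be verified mechanically on small instances. The following helper is our own illustrative sketch (not part of the paper's construction): it checks ex post IR against a zero fallback utility and dominant-strategy incentive compatibility for a single-agent deterministic mechanism without payments, which is exactly the setting of Theorem 2.

```python
def is_truthful_and_ir(types, u, o, fallback=0):
    """For a single-agent deterministic mechanism without payments,
    check ex post IR (utility at least the fallback utility) and
    incentive compatibility (no misreport beats truthful reporting).
    u(theta, outcome) is the utility function; o(theta) the outcome rule."""
    for theta in types:
        if u(theta, o(theta)) < fallback:
            return False  # IR violated for type theta
        for reported in types:
            if u(theta, o(reported)) > u(theta, o(theta)):
                return False  # profitable misreport found
    return True
```

For instance, with two types that each prefer a distinct outcome and a rule assigning each type its preferred outcome, the check passes; forcing both types to one outcome that a type values below its fallback makes it fail.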
First, suppose there is some v ∈ V such that o(θv) ∉ {o+v, o−v}. The only other outcome that the mechanism is allowed to choose under the IR constraint is o0. This has an objective value of 0, and because the highest value the objective function ever takes is |Θ| + 1, it follows that the maximum expected value of the objective function that could be obtained is at most (|Θ| − 1)(|Θ| + 1)/|Θ| < |Θ| < G, contradicting that this is a solution to the AMD instance. It follows that in the solution to the AMD instance, for every v ∈ V, o(θv) ∈ {o+v, o−v}. We can interpret this as an assignment of truth values to the variables: v is set to true if o(θv) = o+v, and to false if o(θv) = o−v. We claim this assignment is a solution to the MINSAT instance. By the above, for any type θv, the value of the objective function in this mechanism will be |Θ|. For any clause c satisfied by the given assignment, the value of the objective function in the case where the agent reports type θc will be at most |Θ|. (This is because we cannot choose the outcome o∗ for such a type, as in this case the agent would have an incentive to report θv instead, where v is the variable satisfying c in the assignment (so that o(θv) = ol where l occurs in c).) Finally, for any unsatisfied clause c, the maximum value the objective function can take in the case where the agent reports type θc is |Θ| + 1, simply because this is the largest value the function ever takes. It follows that the expected value of the objective function for our mechanism is at most (|V||Θ| + s|Θ| + (|C| − s)(|Θ| + 1))/|Θ|, where s is the number of satisfied clauses. Because our mechanism achieves the goal, it follows that (|V||Θ| + s|Θ| + (|C| − s)(|Θ| + 1))/|Θ| ≥ G, which by simple algebraic manipulations is equivalent to s ≤ K. So there is a solution to the MINSAT instance.
Both of our hardness results relied on the constraint that the mechanism should be deterministic. In the next section, we show that the hardness of design disappears when we allow for randomization in the mechanism.
6. RANDOMIZED AMD FOR A SELF-INTERESTED DESIGNER IS EASY
We now show how allowing for randomization over the outcomes makes the problem of self-interested AMD tractable through linear programming, for any constant number of agents.
Theorem 3. Self-interested randomized AMD with a constant number of agents is solvable in polynomial time by linear programming, both with and without payments, both for ex post and ex interim IR, and both for implementation in dominant strategies and for implementation in Bayes-Nash equilibrium, even if the types are correlated.
Proof. Because linear programs can be solved in polynomial time [13], all we need to show is that the number of variables and constraints in our program is polynomial for any constant number of agents, that is, exponential only in N. Throughout, for purposes of determining the size of the linear program, let T = maxi |Θi|. The variables of our linear program will be the probabilities (p(θ1, θ2, ..., θN))(o) (at most T^N |O| variables) and the payments πi(θ1, θ2, ..., θN) (at most N T^N variables). (We show the linear program for the case where payments are possible; the case without payments is easily obtained from this by simply omitting all the payment variables in the program, or by adding additional constraints forcing the payments to be 0.)
First, we show the IR constraints. For ex post IR, we add the following (at most N T^N) constraints to the LP:
• For every i ∈ {1, 2, ..., N}, and for every (θ1, θ2, ..., θN) ∈ Θ1 × Θ2 × ... × ΘN, we add (Σo∈O (p(θ1, θ2, ..., θN))(o) u(θi, o)) − πi(θ1, θ2, ..., θN) ≥ 0.
For ex interim IR, we add the following (at most N T) constraints to the LP:
• For every i ∈ {1, 2, ..., N}, for every θi ∈ Θi, we add Σθ1,...,θN γ(θ1, ..., θN | θi) ((Σo∈O (p(θ1, θ2, ..., θN))(o) u(θi, o)) − πi(θ1, θ2, ..., θN)) ≥ 0.
Now, we show the solution concept constraints. For implementation in dominant strategies, we add the following (at most N T^(N+1)) constraints to the LP:
• For every i ∈ {1, 2, ..., N}, for every (θ1, θ2, ..., θi, ..., θN) ∈ Θ1 × Θ2 × ... × ΘN, and for every alternative type report ˆθi ∈ Θi, we add the constraint (Σo∈O (p(θ1, θ2, ..., θi, ..., θN))(o) u(θi, o)) − πi(θ1, θ2, ..., θi, ..., θN) ≥ (Σo∈O (p(θ1, θ2, ..., ˆθi, ..., θN))(o) u(θi, o)) − πi(θ1, θ2, ..., ˆθi, ..., θN).
Finally, for implementation in Bayes-Nash equilibrium, we add the following (at most N T^2) constraints to the LP:
• For every i ∈ {1, 2, ..., N}, for every θi ∈ Θi, and for every alternative type report ˆθi ∈ Θi, we add the constraint Σθ1,...,θN γ(θ1, ..., θN | θi) ((Σo∈O (p(θ1, θ2, ..., θi, ..., θN))(o) u(θi, o)) − πi(θ1, θ2, ..., θi, ..., θN)) ≥ Σθ1,...,θN γ(θ1, ..., θN | θi) ((Σo∈O (p(θ1, θ2, ..., ˆθi, ..., θN))(o) u(θi, o)) − πi(θ1, θ2, ..., ˆθi, ..., θN)).
All that is left to do is to give the expression the designer is seeking to maximize, which is:
• Σθ1,...,θN γ(θ1, ..., θN) ((Σo∈O (p(θ1, θ2, ..., θN))(o) g(o)) + Σi=1,...,N πi(θ1, θ2, ..., θN)).
As we indicated, the number of variables and constraints is exponential only in N, and hence the linear program is of polynomial size for constant numbers of agents. Thus the problem is solvable in polynomial time.
7. IMPLICATIONS FOR AN OPTIMAL COMBINATORIAL AUCTION DESIGN PROBLEM
In this section, we will demonstrate some interesting consequences of the problem of automated mechanism design for a self-interested designer on designing optimal combinatorial auctions. Consider a combinatorial auction with a set S of items for sale. For any bundle B ⊆ S, let ui(θi, B) be bidder i's utility for receiving bundle B when the bidder's type is θi. The optimal auction design problem is to specify the rules of the auction so as to maximize expected revenue to the auctioneer. (By the revelation principle, without loss of generality, we can assume the auction is truthful.) The optimal auction design problem is solved for the case of a single item by the famous Myerson auction
[18]. However, designing optimal auctions in combinatorial auctions is a recognized open research problem [3, 25]. The problem is open even if there are only two items for sale. (The two-item case with a very special form of complementarity and no substitutability has been solved recently [1].)
Suppose we have free disposal, that is, items can be thrown away at no cost. Also, suppose that the bidders' preferences have the following structure: whenever a bidder receives a bundle of items, the bidder's utility for that bundle is determined by the best item in the bundle only. (We emphasize that which item is the best is allowed to depend on the bidder's type.)
Definition 10. Bidder i is said to have best-only preferences over bundles of items if there exists a function vi : Θi × S → R such that for any θi ∈ Θi, for any B ⊆ S, ui(θi, B) = maxs∈B vi(θi, s).
We make the following useful observation in this setting: there is no sense in awarding a bidder more than one item. The reason is that if the bidder is reporting truthfully, taking all but the highest valued item away from the bidder will not hurt the bidder; and, by free disposal, doing so can only reduce the incentive for this bidder to falsely report this type, when the bidder actually has another type. We now show that the problem of designing a deterministic optimal auction here is NP-complete, by a reduction from the payment-maximizing AMD problem!
Theorem 4. Given an optimal combinatorial auction design problem under best-only preferences (given by a set of items S and for each bidder i, a finite type space Θi and a function vi : Θi × S → R such that for any θi ∈ Θi, for any B ⊆ S, ui(θi, B) = maxs∈B vi(θi, s)), designing the optimal deterministic auction is NP-complete, even for a single bidder with a uniform distribution over types.
Proof. The problem is in NP because we can nondeterministically generate an allocation rule, and then set the payments using linear programming. To show NP-hardness, we reduce an arbitrary payment-maximizing deterministic AMD instance, with a single agent and a uniform distribution over types, to the following optimal combinatorial auction design problem instance with a single bidder with best-only preferences. For every outcome o ∈ O in the AMD instance (besides the outcome o0), let there be one item so ∈ S. Let the type space be the same, and let v(θi, so) = ui(θi, o) (where u is as specified in the AMD instance). Let the expected revenue target value be the same in both instances. We show the instances are equivalent.
First suppose there exists a solution to the AMD instance, given by an outcome function and a payment function. Then, if the AMD solution chooses outcome o for a type, in the optimal auction solution, allocate {so} to the bidder for this type. (Unless o = o0, in which case we allocate the empty bundle {} to the bidder.) Let the payment functions be the same in both instances. Then, the utility that an agent receives for reporting a type (given the true type) in either solution is the same, so we have incentive compatibility in the optimal auction solution. Moreover, because the type distribution and the payment function are the same, the expected revenue to the auctioneer/designer is the same. It follows that there exists a solution to the optimal auction design instance.
Now suppose there exists a solution to the optimal auction design instance. By the at-most-one-item observation, we can assume without loss of generality that the solution never allocates more than one item. Then, if the optimal auction solution allocates item so to the bidder for a type, in the AMD solution, let the mechanism choose outcome o for that type. If the optimal auction solution allocates nothing to the bidder for a type, in the AMD solution, let the mechanism choose outcome o0 for that type. Let the payment functions be the same. Then, the utility that an agent receives for reporting a type (given the true type) in either solution is the same, so we have incentive compatibility in the AMD solution. Moreover, because the type distribution and the payment function are the same, the expected revenue to the designer/auctioneer is the same. It follows that there exists a solution to the AMD instance.
Fortunately, we can also carry through the easiness result for randomized mechanisms to this combinatorial auction setting, giving us one of the few known polynomial-time algorithms for an optimal combinatorial auction design problem.
Theorem 5. Given an optimal combinatorial auction design problem under best-only preferences (given by a set of items S and for each bidder i, a finite type space Θi and a function vi : Θi × S → R such that for any θi ∈ Θi, for any B ⊆ S, ui(θi, B) = maxs∈B vi(θi, s)), if the number of bidders is a constant k, then the optimal randomized auction can be designed in polynomial time. (For any IC and IR constraints.)
Proof. By the at-most-one-item observation, we can without loss of generality restrict ourselves to allocations where each bidder receives at most one item. There are fewer than (|S| + 1)^k such allocations, that is, a polynomial number of allocations. Because we can list the outcomes explicitly, we can simply solve this as a payment-maximizing AMD instance, with linear programming.
8. RELATED RESEARCH ON COMPLEXITY IN MECHANISM DESIGN
There has been considerable recent interest in mechanism design in computer science. Some of it has focused on issues of computational complexity, but most of that work has strived toward designing mechanisms that are easy to execute (e.g.
[20, 15, 19, 9, 12]), rather than studying the complexity of designing the mechanism. The closest piece of earlier work studied the complexity of automated mechanism design by a benevolent designer [5, 6]. Roughgarden has studied the complexity of designing a good network topology for agents that selfishly choose the links they use [21]. This is related to mechanism design, but differs significantly in that the designer only has restricted control over the rules of the game because there is no party that can impose the outcome (or side payments). Also, there is no explicit reporting of preferences.
9. CONCLUSIONS AND FUTURE RESEARCH
Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently emerging approach, called automated mechanism design, a mechanism is computed for the specific preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike earlier work on automated mechanism design that studied a benevolent designer, in this paper we studied automated mechanism design problems where the designer is self-interested, a setting much more relevant for electronic commerce. In this setting, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we showed that designing an optimal deterministic mechanism is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. These hardness results imply hardness in all more general automated mechanism design settings with a self-interested designer. The hardness results apply whether the individual rationality (participation) constraints are applied ex interim or ex post, and whether the solution concept is dominant strategies implementation or Bayes-Nash equilibrium implementation. We then showed that allowing randomization in the mechanism makes the design problem in all these settings computationally easy. Finally, we showed that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have best-only preferences. We showed that here, too, designing an optimal deterministic mechanism is NP-complete even with one agent, but designing an optimal randomized mechanism is easy.
Future research includes studying automated mechanism design with a self-interested designer in more restricted settings such as auctions (where the designer's objective may include preferences about which bidder should receive the good, as well as payments). We also want to study the complexity of automated mechanism design in settings where the outcome and type spaces have special structure so they can be represented more concisely. Finally, we plan to assemble a data set of real-world mechanism design problems, both historical and current, and apply automated mechanism design to those problems.
10. REFERENCES
[1] M. Armstrong. Optimal multi-object auctions. Review of Economic Studies, 67:455-481, 2000.
[2] K. Arrow. The property rights doctrine and demand revelation under incomplete information. In M. Boskin, editor, Economics and Human Welfare. New York Academic Press, 1979.
[3] C. Avery and T.
Hendershott. Bundling and optimal auctions of multiple products. Review of Economic Studies, 67:483-497, 2000.
[4] E. H. Clarke. Multipart pricing of public goods. Public Choice, 11:17-33, 1971.
[5] V. Conitzer and T. Sandholm. Complexity of mechanism design. In Proceedings of the 18th Annual Conference on Uncertainty in Artificial Intelligence (UAI-02), pages 103-110, Edmonton, Canada, 2002.
[6] V. Conitzer and T. Sandholm. Automated mechanism design: Complexity results stemming from the single-agent setting. In Proceedings of the 5th International Conference on Electronic Commerce (ICEC-03), pages 17-24, Pittsburgh, PA, USA, 2003.
[7] V. Conitzer and T. Sandholm. Computational criticisms of the revelation principle. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), New York, NY, 2004. Short paper. Full-length version appeared in the AAMAS-03 workshop on Agent-Mediated Electronic Commerce (AMEC).
[8] C. d'Aspremont and L. A. Gérard-Varet. Incentives and incomplete information. Journal of Public Economics, 11:25-45, 1979.
[9] J. Feigenbaum, C. Papadimitriou, and S. Shenker. Sharing the cost of multicast transmissions. Journal of Computer and System Sciences, 63:21-41, 2001. Early version in Proceedings of the Annual ACM Symposium on Theory of Computing (STOC), 2000.
[10] A. Gibbard. Manipulation of voting schemes. Econometrica, 41:587-602, 1973.
[11] T. Groves. Incentives in teams. Econometrica, 41:617-631, 1973.
[12] J. Hershberger and S. Suri. Vickrey prices and shortest paths: What is an edge worth? In Proceedings of the Annual Symposium on Foundations of Computer Science (FOCS), 2001.
[13] L. Khachiyan. A polynomial algorithm in linear programming. Soviet Math. Doklady, 20:191-194, 1979.
[14] R. Kohli, R. Krishnamurthi, and P. Mirchandani. The minimum satisfiability problem. SIAM Journal of Discrete Mathematics, 7(2):275-283, 1994.
[15] D. Lehmann, L. I. O'Callaghan, and Y. Shoham. Truth revelation in rapid, approximately efficient combinatorial auctions. Journal of the ACM, 49(5):577-602, 2002. Early version appeared in Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), 1999.
[16] A. Mas-Colell, M. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, 1995.
[17] E. S. Maskin and J. Riley. Optimal multi-unit auctions. In F. Hahn, editor, The Economics of Missing Markets, Information, and Games, chapter 14, pages 312-335. Clarendon Press, Oxford, 1989.
[18] R. Myerson. Optimal auction design. Mathematics of Operations Research, 6:58-73, 1981.
[19] N. Nisan and A. Ronen. Computationally feasible VCG mechanisms. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 242-252, Minneapolis, MN, 2000.
[20] N. Nisan and A. Ronen. Algorithmic mechanism design. Games and Economic Behavior, 35:166-196, 2001. Early version in Proceedings of the Annual ACM Symposium on Theory of Computing (STOC), 1999.
[21] T. Roughgarden. Designing networks for selfish users is hard. In Proceedings of the Annual Symposium on Foundations of Computer Science (FOCS), 2001.
[22] T. Sandholm. Issues in computational Vickrey auctions. International Journal of Electronic Commerce, 4(3):107-129, 2000. Special Issue on Applying Intelligent Agents for Electronic Commerce. A short, early version appeared at the Second International Conference on Multi-Agent Systems (ICMAS), pages 299-306, 1996.
[23] M. A. Satterthwaite. Strategy-proofness and Arrow's conditions: existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory, 10:187-217, 1975.
[24] W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:8-37, 1961.
[25] R. V. Vohra. Research problems in combinatorial auctions. Mimeo, version Oct.
29, 2001.

Self-interested Automated Mechanism Design and Implications for Optimal Combinatorial Auctions ∗

ABSTRACT

Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently proposed approach, called automated mechanism design, a mechanism is computed for the preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike the earlier work on automated mechanism design that studied a benevolent designer, in this paper we study automated mechanism design problems where the designer is self-interested. In this case, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we show that designing optimal deterministic mechanisms is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. We then show how allowing for randomization in the mechanism makes problems in this setting computationally easy. Finally, we show that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have "best-only" preferences. We show that here, too, designing an optimal deterministic auction is
NP-complete, but designing an optimal randomized auction is easy.

∗ Supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678.

1. INTRODUCTION

In multiagent settings, often an outcome must be chosen on the basis of the preferences reported by a group of agents. Such outcomes could be potential presidents, joint plans, allocations of goods or resources, etc. The preference aggregator generally does not know the agents' preferences a priori. Rather, the agents report their preferences to the coordinator. Unfortunately, an agent may have an incentive to misreport its preferences in order to mislead the mechanism into selecting an outcome that is more desirable to the agent than the outcome that would be selected if the agent revealed its preferences truthfully. Such manipulation is undesirable because preference aggregation mechanisms are tailored to aggregate preferences in a socially desirable way, and if the agents reveal their preferences insincerely, a socially undesirable outcome may be chosen.

Manipulability is a pervasive problem across preference aggregation mechanisms. A seminal negative result, the Gibbard-Satterthwaite theorem, shows that under any nondictatorial preference aggregation scheme, if there are at least 3 possible outcomes, there are preferences under which an agent is better off reporting untruthfully [10, 23]. (A preference aggregation scheme is called dictatorial if one of the agents dictates the outcome no matter what preferences the other agents report.)

What the aggregator would like to do is design a preference aggregation mechanism so that 1) the self-interested agents are motivated to report their preferences truthfully, and 2) the mechanism chooses an outcome that is desirable from the perspective of some objective. This is the classic setting of mechanism design in game theory. In this paper, we study the case where the designer is self-interested, that is, the designer does not directly care about how the outcome relates to the agents' preferences, but is rather concerned with its own agenda for which outcome should be chosen, and with maximizing payments to itself. This is the mechanism design setting most relevant to electronic commerce.

In the case where the mechanism designer is interested in maximizing some notion of social welfare, the importance of collecting the agents' preferences is clear. It is perhaps less obvious why they should be collected when the designer is self-interested and hence its objective is not directly related to the agents' preferences. The reason is that the agents' preferences often impose limits on how the designer chooses the outcome and payments. The most common such constraint is individual rationality (IR), which means that the mechanism cannot make any agent worse off than the agent would have been had it not participated in the mechanism. For instance, in the setting of optimal auction design, the designer (auctioneer) is only concerned with how much revenue is collected, and not per se with how well the allocation of the good (or goods) corresponds to the agents' preferences. Nevertheless, the designer cannot force an agent to pay more than its valuation for the bundle of goods allocated to it. Therefore, even a self-interested designer will choose an outcome that makes the agents reasonably well off. On the other hand, the designer will not necessarily choose a social welfare maximizing outcome. For example, if the designer always chooses an outcome that maximizes social welfare with respect to the reported preferences, and forces each agent to pay the difference between the utility it has now and the utility it would have had if it had not participated in the mechanism, it is easy to see that agents may have an incentive to misreport their preferences, and this may actually lead to less revenue being collected. Indeed, one of the counterintuitive results of optimal auction design theory is that sometimes the good is allocated to nobody even when the auctioneer has a reservation price of 0.

Classical mechanism design provides some general mechanisms which, under certain assumptions, satisfy some notion of nonmanipulability and maximize some objective. The upside of these mechanisms is that they do not rely on (even probabilistic) information about the agents' preferences (e.g., the Vickrey-Clarke-Groves (VCG) mechanism [24, 4, 11]), or they can be easily applied to any probability distribution over the preferences (e.g., the dAGVA mechanism [8, 2], the Myerson auction [18], and the Maskin-Riley multi-unit auction [17]). However, the general mechanisms also have significant downsides:

- The most famous and most broadly applicable general mechanisms, VCG and dAGVA, only maximize social welfare. If the designer is self-interested, as is the case in many electronic commerce settings, these mechanisms do not maximize the designer's objective.
- The general mechanisms that do focus on a self-interested designer are only applicable in very restricted settings, such as Myerson's expected revenue maximizing auction for selling a single item, and Maskin and Riley's expected revenue maximizing auction for selling multiple identical units of an item.
- Even in the restricted settings in which these mechanisms apply, the mechanisms only allow for payment maximization. In practice, the designer may also be interested in the outcome per se. For example, an auctioneer may care which bidder receives the item.
- It is often assumed that side payments can be used to tailor the agents' incentives, but this is not always practical. For example, in barter-based electronic marketplaces such as Recipco, firstbarter.com, BarterOne, and Intagio, side payments are not allowed. Furthermore, among software agents, it might be more desirable to construct mechanisms that do not rely on the ability to make payments, because many software agents do not have the infrastructure to make payments.

In contrast, we follow a recent approach where the mechanism is designed automatically for the specific problem at hand. This approach addresses all of the downsides listed above. We formulate the mechanism design problem as an optimization problem. The input is characterized by the number of agents, the agents' possible types (preferences), and the aggregator's prior distributions over the agents' types. The output is a nonmanipulable mechanism that is optimal with respect to some objective. This approach is called automated mechanism design.

The automated mechanism design approach has four advantages over the classical approach of designing general mechanisms. First, it can be used even in settings that do not satisfy the assumptions of the classical mechanisms (such as availability of side payments, or that the objective is social welfare). Second, it may allow one to circumvent impossibility results (such as the Gibbard-Satterthwaite theorem) which state that there is no mechanism that is desirable across all preferences. When the mechanism is designed for the setting at hand, it does not matter that it would not work more generally. Third, it may yield better mechanisms (in terms of stronger nonmanipulability guarantees and/or better outcomes) than classical mechanisms, because the mechanism capitalizes on the particulars of the setting (the probabilistic information that the designer has about the agents' types). Given the vast amount of information that parties have about each other today, this approach is likely to lead to tremendous savings over classical mechanisms, which largely ignore that information. For example, imagine a company automatically creating its procurement mechanism based on statistical knowledge about its suppliers, rather than using a classical descending procurement auction. Fourth, the burden of design is shifted from humans to a machine.

However, automated mechanism design requires the mechanism design optimization problem to be solved anew for each setting. Hence its computational complexity becomes a key issue. Previous research has studied this question for benevolent designers, which wish to maximize, for example, social welfare [5, 6]. In this paper we study the computational complexity of automated mechanism design in the case of a self-interested designer. This is an important setting for automated mechanism design due to the shortage of general mechanisms in this area, and the fact that in most e-commerce settings the designer is self-interested. We also show that this problem is closely related to a particular optimal (revenue-maximizing) combinatorial auction design problem.

The rest of this paper is organized as follows. In Section 2, we justify the focus on nonmanipulable mechanisms. In Section 3, we define the problem we study. In Section 4, we show that designing an optimal deterministic mechanism is NP-complete even when the designer only cares about the payments made to it. In Section 5, we show that designing an optimal deterministic mechanism is also NP-complete when payments are not possible and the designer is only interested in the outcome chosen. In Section 6, we show that an optimal randomized mechanism can be designed in polynomial time even in the general case. Finally, in Section 7, we show that for designing optimal combinatorial auctions under best-only preferences, our results on AMD imply that this problem is NP-complete for deterministic auctions, but easy for randomized auctions.

2. JUSTIFYING THE FOCUS ON NONMANIPULABLE MECHANISMS
3. DEFINITIONS
3.1 Individual rationality (IR) constraints
3.2 Incentive compatibility (IC) constraints
3.3 Automated mechanism design
4. PAYMENT-MAXIMIZING DETERMINISTIC AMD IS HARD
5. SELF-INTERESTED DETERMINISTIC AMD WITHOUT PAYMENTS IS HARD
6. RANDOMIZED AMD FOR A SELF-INTERESTED DESIGNER IS EASY
7. IMPLICATIONS FOR AN
OPTIMAL COMBINATORIAL AUCTION DESIGN PROBLEM\n8.\nRELATED RESEARCH ON COMPLEXITY IN MECHANISM DESIGN\n9.\nCONCLUSIONS AND FUTURE RESEARCH\nOften, an outcome must be chosen on the basis of the preferences reported by a group of agents.\nThe key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves.\nMechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen.\nIn a recently emerging approach--called automated mechanism design--a mechanism is computed for the specific preference aggregation setting at hand.\nThis has several advantages,\nbut the downside is that the mechanism design optimization problem needs to be solved anew each time.\nUnlike earlier work on automated mechanism design that studied a benevolent designer, in this paper we studied automated mechanism design problems where the designer is self-interested--a setting much more relevant for electronic commerce.\nIn this setting, the center cares only about which outcome is chosen and what payments are made to it.\nThe reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism.\nIn this setting, we showed that designing an optimal deterministic mechanism is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen.\nThese hardness results imply hardness in all more general automated mechanism design settings with a self-interested designer.\nThe hardness results apply whether the individual rationality (participation) constraints are applied ex interim or ex post, and whether the solution concept is dominant strategies implementation or Bayes-Nash 
equilibrium implementation.\nWe then showed that allowing randomization in the mechanism makes the design problem in all these settings computationally easy.\nFinally, we showed that the paymentmaximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have \"best-only\" preferences.\nWe showed that here, too, designing an optimal deterministic mechanism is NP-complete even with one agent, but designing an optimal randomized mechanism is easy.\nFuture research includes studying automated mechanism design with a self-interested designer in more restricted settings such as auctions (where the designer's objective may include preferences about which bidder should receive the good--as well as payments).\nWe also want to study the complexity of automated mechanism design in settings where the outcome and type spaces have special structure so they can be represented more concisely.\nFinally, we plan to assemble a data set of real-world mechanism design problems--both historical and current--and apply automated mechanism design to those problems.","lvl-4":"Self-interested Automated Mechanism Design and Implications for Optimal Combinatorial Auctions \u2217\nABSTRACT\nOften, an outcome must be chosen on the basis of the preferences reported by a group of agents.\nThe key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves.\nMechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen.\nIn a recently proposed approach--called automated mechanism design--a mechanism is computed for the preference aggregation setting at hand.\nThis has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time.\nUnlike the earlier work on automated 
mechanism design that studied a benevolent designer, in this paper we study automated mechanism design problems where the designer is self-interested.\nIn this case, the center cares only about which outcome is chosen and what payments are made to it.\nThe reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism.\nIn this setting, we show that designing optimal deterministic mechanisms is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen.\nWe then show how allowing for randomization in the mechanism makes problems in this setting computationally easy.\nFinally, we show that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenuemaximizing) combinatorial auction design problem, where the bidders have \"best-only\" preferences.\nWe show that here, too, designing an optimal deterministic auction is NPcomplete, but designing an optimal randomized auction is easy.\n\u2217 Supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678.\n1.\nINTRODUCTION\nIn multiagent settings, often an outcome must be chosen on the basis of the preferences reported by a group of agents.\nThe preference aggregator generally does not know the agents' preferences a priori.\nRather, the agents report their preferences to the coordinator.\nUnfortunately, an agent may have an incentive to misreport its preferences in order to mislead the mechanism into selecting an outcome that is more desirable to the agent than the outcome that would be selected if the agent revealed its preferences truthfully.\nSuch manipulation is undesirable because preference aggregation mechanisms are tailored to aggregate preferences in a socially 
desirable way, and if the agents reveal their preferences insincerely, a socially undesirable outcome may be chosen.\nManipulability is a pervasive problem across preference aggregation mechanisms.\n(A preference aggregation scheme is called dictatorial if one of the agents dictates the outcome no matter what preferences the other agents report.)\nWhat the aggregator would like to do is design a preference aggregation mechanism so that 1) the self-interested agents are motivated to report their preferences truthfully, and 2) the mechanism chooses an outcome that is desirable from the perspective of some objective.\nThis is the classic setting of mechanism design in game theory.\nIn this paper, we study the case where the designer is self-interested, that is, the designer does not directly care about how the out\ncome relates to the agents' preferences, but is rather concerned with its own agenda for which outcome should be chosen, and with maximizing payments to itself.\nThis is the mechanism design setting most relevant to electronic commerce.\nIn the case where the mechanism designer is interested in maximizing some notion of social welfare, the importance of collecting the agents' preferences is clear.\nIt is perhaps less obvious why they should be collected when the designer is self-interested and hence its objective is not directly related to the agents' preferences.\nThe reason for this is that often the agents' preferences impose limits on how the designer chooses the outcome and payments.\nThe most common such constraint is that of individual rationality (IR), which means that the mechanism cannot make any agent worse off than the agent would have been had it not participated in the mechanism.\nFor instance, in the setting of optimal auction design, the designer (auctioneer) is only concerned with how much revenue is collected, and not per se with how well the allocation of the good (or goods) corresponds to the agents' preferences.\nNevertheless, the 
designer cannot force an agent to pay more than its valuation for the bundle of goods allocated to it.\nTherefore, even a self-interested designer will choose an outcome that makes the agents reasonably well off.\nOn the other hand, the designer will not necessarily choose a social welfare maximizing outcome.\nClassical mechanism design provides some general mechanisms, which, under certain assumptions, satisfy some notion of nonmanipulability and maximize some objective.\nHowever, the general mechanisms also have significant downsides: 9 The most famous and most broadly applicable general mechanisms, VCG and dAGVA, only maximize social welfare.\nIf the designer is self-interested, as is the case in many electronic commerce settings, these mechanisms do not maximize the designer's objective.\n9 Even in the restricted settings in which these mechanisms apply, the mechanisms only allow for payment maximization.\nIn practice, the designer may also be interested in the outcome per se.\n9 It is often assumed that side payments can be used to tailor the agents' incentives, but this is not always practical.\nFurthermore, among software agents, it might be more desirable to construct mechanisms that do not rely on the ability to make payments, because many software agents do not have the infrastructure to make payments.\nIn contrast, we follow a recent approach where the mechanism is designed automatically for the specific problem at hand.\nThis approach addresses all of the downsides listed above.\nWe formulate the mechanism design problem as an optimization problem.\nThe input is characterized by the number of agents, the agents' possible types (preferences), and the aggregator's prior distributions over the agents' types.\nThe output is a nonmanipulable mechanism that is optimal with respect to some objective.\nThis approach is called automated mechanism design.\nThe automated mechanism design approach has four advantages over the classical approach of designing general 
mechanisms.\nFirst, it can be used even in settings that do not satisfy the assumptions of the classical mechanisms (such as availability of side payments or that the objective is social welfare).\nSecond, it may allow one to circumvent impossibility results (such as the Gibbard-Satterthwaite theorem) which state that there is no mechanism that is desirable across all preferences.\nWhen the mechanism is designed for the setting at hand, it does not matter that it would not work more generally.\nThird, it may yield better mechanisms (in terms of stronger nonmanipulability guarantees and\/or better outcomes) than classical mechanisms because the mechanism capitalizes on the particulars of the setting (the probabilistic information that the designer has about the agents' types).\nFor example, imagine a company automatically creating its procurement mechanism based on statistical knowledge about its suppliers, rather than using a classical descending procurement auction.\nFourth, the burden of design is shifted from humans to a machine.\nHowever, automated mechanism design requires the mechanism design optimization problem to be solved anew for each setting.\nPrevious research has studied this question for benevolent designers--that wish to maximize, for example, social welfare [5, 6].\nIn this paper we study the computational complexity of automated mechanism design in the case of a self-interested designer.\nThis is an important setting for automated mechanism design due to the shortage of general mechanisms in this area, and the fact that in most e-commerce settings the designer is self-interested.\nWe also show that this problem is closely related to a particular optimal (revenue-maximizing) combinatorial auction design problem.\nIn Section 2, we justify the focus on nonmanipulable mechanisms.\nIn Section 3, we define the problem we study.\nIn Section 4, we show that designing an optimal deterministic mechanism is NP-complete even when the designer only cares about 
the payments made to it.\nIn Section 5, we show that designing an optimal deterministic mechanism is also NP-complete when payments are not possible and the designer is only interested in the outcome chosen.\nIn Section 6, we show that an optimal randomized mechanism can be designed in polynomial time even in the general case.\nFinally, in Section 7, we show that for designing optimal combinatorial auctions under best-only preferences, our results on AMD imply that this problem is NP-complete for deterministic auctions, but easy for randomized auctions.\n9.\nCONCLUSIONS AND FUTURE RESEARCH\nOften, an outcome must be chosen on the basis of the preferences reported by a group of agents.\nThe key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves.\nMechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen.\nIn a recently emerging approach--called automated mechanism design--a mechanism is computed for the specific preference aggregation setting at hand.\nThis has several advantages,\nbut the downside is that the mechanism design optimization problem needs to be solved anew each time.\nUnlike earlier work on automated mechanism design that studied a benevolent designer, in this paper we studied automated mechanism design problems where the designer is self-interested--a setting much more relevant for electronic commerce.\nIn this setting, the center cares only about which outcome is chosen and what payments are made to it.\nThe reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism.\nIn this setting, we showed that designing an optimal deterministic mechanism is NP-complete in two important special cases: when the center is interested only in the 
payments made to it, and when payments are not possible and the center is interested only in the outcome chosen.\nThese hardness results imply hardness in all more general automated mechanism design settings with a self-interested designer.\nWe then showed that allowing randomization in the mechanism makes the design problem in all these settings computationally easy.\nFinally, we showed that the paymentmaximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have \"best-only\" preferences.\nWe showed that here, too, designing an optimal deterministic mechanism is NP-complete even with one agent, but designing an optimal randomized mechanism is easy.\nFuture research includes studying automated mechanism design with a self-interested designer in more restricted settings such as auctions (where the designer's objective may include preferences about which bidder should receive the good--as well as payments).\nWe also want to study the complexity of automated mechanism design in settings where the outcome and type spaces have special structure so they can be represented more concisely.\nFinally, we plan to assemble a data set of real-world mechanism design problems--both historical and current--and apply automated mechanism design to those problems.","lvl-2":"Self-interested Automated Mechanism Design and Implications for Optimal Combinatorial Auctions \u2217\nABSTRACT\nOften, an outcome must be chosen on the basis of the preferences reported by a group of agents.\nThe key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves.\nMechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen.\nIn a recently proposed approach--called automated mechanism design--a mechanism is computed for the 
preference aggregation setting at hand.\nThis has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time.\nUnlike the earlier work on automated mechanism design that studied a benevolent designer, in this paper we study automated mechanism design problems where the designer is self-interested.\nIn this case, the center cares only about which outcome is chosen and what payments are made to it.\nThe reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism.\nIn this setting, we show that designing optimal deterministic mechanisms is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen.\nWe then show how allowing for randomization in the mechanism makes problems in this setting computationally easy.\nFinally, we show that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have \"best-only\" preferences.\nWe show that here, too, designing an optimal deterministic auction is NP-complete, but designing an optimal randomized auction is easy.\n\u2217 Supported by NSF under CAREER Award IRI-9703122, Grant IIS-9800994, ITR IIS-0081246, and ITR IIS-0121678.\n1.\nINTRODUCTION\nIn multiagent settings, often an outcome must be chosen on the basis of the preferences reported by a group of agents.\nSuch outcomes could be potential presidents, joint plans, allocations of goods or resources, etc.
.\nThe preference aggregator generally does not know the agents' preferences a priori.\nRather, the agents report their preferences to the coordinator.\nUnfortunately, an agent may have an incentive to misreport its preferences in order to mislead the mechanism into selecting an outcome that is more desirable to the agent than the outcome that would be selected if the agent revealed its preferences truthfully.\nSuch manipulation is undesirable because preference aggregation mechanisms are tailored to aggregate preferences in a socially desirable way, and if the agents reveal their preferences insincerely, a socially undesirable outcome may be chosen.\nManipulability is a pervasive problem across preference aggregation mechanisms.\nA seminal negative result, the Gibbard-Satterthwaite theorem, shows that under any nondictatorial preference aggregation scheme, if there are at least 3 possible outcomes, there are preferences under which an agent is better off reporting untruthfully [10, 23].\n(A preference aggregation scheme is called dictatorial if one of the agents dictates the outcome no matter what preferences the other agents report.)\nWhat the aggregator would like to do is design a preference aggregation mechanism so that 1) the self-interested agents are motivated to report their preferences truthfully, and 2) the mechanism chooses an outcome that is desirable from the perspective of some objective.\nThis is the classic setting of mechanism design in game theory.\nIn this paper, we study the case where the designer is self-interested, that is, the designer does not directly care about how the outcome relates to the agents' preferences, but is rather concerned with its own agenda for which outcome should be chosen, and with maximizing payments to itself.\nThis is the mechanism design setting most relevant to electronic commerce.\nIn the case where the mechanism designer is interested in maximizing some notion of social welfare, the importance of collecting the
agents' preferences is clear.\nIt is perhaps less obvious why they should be collected when the designer is self-interested and hence its objective is not directly related to the agents' preferences.\nThe reason for this is that often the agents' preferences impose limits on how the designer chooses the outcome and payments.\nThe most common such constraint is that of individual rationality (IR), which means that the mechanism cannot make any agent worse off than the agent would have been had it not participated in the mechanism.\nFor instance, in the setting of optimal auction design, the designer (auctioneer) is only concerned with how much revenue is collected, and not per se with how well the allocation of the good (or goods) corresponds to the agents' preferences.\nNevertheless, the designer cannot force an agent to pay more than its valuation for the bundle of goods allocated to it.\nTherefore, even a self-interested designer will choose an outcome that makes the agents reasonably well off.\nOn the other hand, the designer will not necessarily choose a social welfare maximizing outcome.\nFor example, if the designer always chooses an outcome that maximizes social welfare with respect to the reported preferences, and forces each agent to pay the difference between the utility it has now and the utility it would have had if it had not participated in the mechanism, it is easy to see that agents may have an incentive to misreport their preferences--and this may actually lead to less revenue being collected.\nIndeed, one of the counterintuitive results of optimal auction design theory is that sometimes the good is allocated to nobody even when the auctioneer has a reservation price of 0.\nClassical mechanism design provides some general mechanisms, which, under certain assumptions, satisfy some notion of nonmanipulability and maximize some objective.\nThe upside of these mechanisms is that they do not rely on (even probabilistic) information about the agents' 
preferences (e.g., the Vickrey-Clarke-Groves (VCG) mechanism [24, 4, 11]), or they can be easily applied to any probability distribution over the preferences (e.g., the dAGVA mechanism [8, 2], the Myerson auction [18], and the Maskin-Riley multi-unit auction [17]).\nHowever, the general mechanisms also have significant downsides: \u2022 The most famous and most broadly applicable general mechanisms, VCG and dAGVA, only maximize social welfare.\nIf the designer is self-interested, as is the case in many electronic commerce settings, these mechanisms do not maximize the designer's objective.\n\u2022 The general mechanisms that do focus on a self-interested designer are only applicable in very restricted settings--such as Myerson's expected revenue maximizing auction for selling a single item, and Maskin and Riley's expected revenue maximizing auction for selling multiple identical units of an item.\n\u2022 Even in the restricted settings in which these mechanisms apply, the mechanisms only allow for payment maximization.\nIn practice, the designer may also be interested in the outcome per se.\nFor example, an auctioneer may care which bidder receives the item.\n\u2022 It is often assumed that side payments can be used to tailor the agents' incentives, but this is not always practical.\nFor example, in barter-based electronic marketplaces--such as Recipco, firstbarter.com, BarterOne, and Intagio--side payments are not allowed.\nFurthermore, among software agents, it might be more desirable to construct mechanisms that do not rely on the ability to make payments, because many software agents do not have the infrastructure to make payments.\nIn contrast, we follow a recent approach where the mechanism is designed automatically for the specific problem at hand.\nThis approach addresses all of the downsides listed above.\nWe formulate the mechanism design problem as an optimization problem.\nThe input is characterized by the number of agents, the agents' possible types (preferences), and the
aggregator's prior distributions over the agents' types.\nThe output is a nonmanipulable mechanism that is optimal with respect to some objective.\nThis approach is called automated mechanism design.\nThe automated mechanism design approach has four advantages over the classical approach of designing general mechanisms.\nFirst, it can be used even in settings that do not satisfy the assumptions of the classical mechanisms (such as availability of side payments or that the objective is social welfare).\nSecond, it may allow one to circumvent impossibility results (such as the Gibbard-Satterthwaite theorem) which state that there is no mechanism that is desirable across all preferences.\nWhen the mechanism is designed for the setting at hand, it does not matter that it would not work more generally.\nThird, it may yield better mechanisms (in terms of stronger nonmanipulability guarantees and\/or better outcomes) than classical mechanisms because the mechanism capitalizes on the particulars of the setting (the probabilistic information that the designer has about the agents' types).\nGiven the vast amount of information that parties have about each other today, this approach is likely to lead to tremendous savings over classical mechanisms, which largely ignore that information.\nFor example, imagine a company automatically creating its procurement mechanism based on statistical knowledge about its suppliers, rather than using a classical descending procurement auction.\nFourth, the burden of design is shifted from humans to a machine.\nHowever, automated mechanism design requires the mechanism design optimization problem to be solved anew for each setting.\nHence its computational complexity becomes a key issue.\nPrevious research has studied this question for benevolent designers--that wish to maximize, for example, social welfare [5, 6].\nIn this paper we study the computational complexity of automated mechanism design in the case of a self-interested 
designer.\nThis is an important setting for automated mechanism design due to the shortage of general mechanisms in this area, and the fact that in most e-commerce settings the designer is self-interested.\nWe also show that this problem is closely related to a particular optimal (revenue-maximizing) combinatorial auction design problem.\nThe rest of this paper is organized as follows.\nIn Section 2, we justify the focus on nonmanipulable mechanisms.\nIn Section 3, we define the problem we study.\nIn Section 4, we show that designing an optimal deterministic mechanism is NP-complete even when the designer only cares about the payments made to it.\nIn Section 5, we show that designing an optimal deterministic mechanism is also NP-complete when payments are not possible and the designer is only interested in the outcome chosen.\nIn Section 6, we show that an optimal randomized mechanism can be designed in polynomial time even in the general case.\nFinally, in Section 7, we show that for designing optimal combinatorial auctions under best-only preferences, our results on AMD imply that this problem is NP-complete for deterministic auctions, but easy for randomized auctions.\n2.\nJUSTIFYING THE FOCUS ON NONMANIPULABLE MECHANISMS\nBefore we define the computational problem of automated mechanism design, we should justify our focus on nonmanipulable mechanisms.\nAfter all, it is not immediately obvious that there are no manipulable mechanisms that, even when agents report their types strategically and hence sometimes untruthfully, still reach better outcomes (according to whatever objective we use) than any nonmanipulable mechanism.\nThis does, however, turn out to be the case: given any mechanism, we can construct a nonmanipulable mechanism whose performance is identical, as follows.\nWe build an interface layer between the agents and the original mechanism.\nThe agents report their preferences (or types) to the interface layer; subsequently, the interface layer inputs 
into the original mechanism the types that the agents would have strategically reported to the original mechanism, if their types were as declared to the interface layer.\nThe resulting outcome is the outcome of the new mechanism.\nSince the interface layer acts \"strategically on each agent's behalf\", there is never an incentive to report falsely to the interface layer; and hence, the types reported by the interface layer are the strategic types that would have been reported without the interface layer, so the results are exactly as they would have been with the original mechanism.\nThis argument is known in the mechanism design literature as the revelation principle [16].\n(There are computational difficulties with applying the revelation principle in large combinatorial outcome and type spaces [7, 22].\nHowever, because we focus here on flatly represented outcome and type spaces, this is not a concern.)\nGiven this, we can focus on truthful mechanisms in the rest of the paper.\n3.\nDEFINITIONS\nWe now formalize the automated mechanism design setting.\nDEFINITION 1.\nIn an automated mechanism design setting, we are given:\n\u2022 a finite set of outcomes O; \u2022 a finite set of N agents; \u2022 for each agent i, 1.\na finite set of types \u0398i, 2.\na probability distribution \u03b3i over \u0398i (in the case of correlated types, there is a single joint distribution \u03b3 over \u03981 \u00d7...\u00d7 \u0398N), and 3.\na utility function ui: \u0398i \u00d7 O \u2192 R; 1 \u2022 An objective function whose expectation the designer wishes to maximize.\nThere are many possible objective functions the designer might have, for example, social welfare (where the designer seeks to maximize the sum of the agents' utilities), or the minimum utility of any agent (where the designer seeks to maximize the worst utility had by any agent).\nIn both of these cases, the designer is benevolent, because the designer, in some sense, is pursuing the agents' collective happiness.\nHowever, in this
paper, we focus on the case of a self-interested designer.\nA self-interested designer cares only about the outcome chosen (that is, the designer does not care how the outcome relates to the agents' preferences, but rather has a fixed preference over the outcomes), and about the net payments made by the agents, which flow to the designer.\nDEFINITION 2.\nA self-interested designer has an objective function of the form g (o) + \u2211i \u03c0i, where g: O \u2192 R denotes the designer's own preference over the outcomes, and \u03c0i is the payment made by agent i.\nIn the case where g = 0 everywhere, the designer is said to be payment maximizing.\nIn the case where payments are not possible, g constitutes the objective function by itself.\nWe now define the kinds of mechanisms under study.\nBy the revelation principle, we can restrict attention to truthful, direct revelation mechanisms, where agents report their types directly and never have an incentive to misreport them.\n\u2022 A deterministic mechanism without payments consists of an outcome selection function o: \u03981 \u00d7 \u03982 \u00d7...\u00d7 \u0398N \u2192 O. \u2022 A randomized mechanism without payments consists of a distribution selection function p: \u03981 \u00d7 \u03982 \u00d7...\u00d7 \u0398N \u2192 P (O), where P (O) is the set of probability distributions over O.
\u2022 A deterministic mechanism with payments consists of an outcome selection function o: \u03981 \u00d7 \u03982 \u00d7...\u00d7 \u0398N \u2192 O and for each agent i, a payment selection function \u03c0i: \u03981 \u00d7 \u03982 \u00d7...\u00d7 \u0398N \u2192 R, where \u03c0i (\u03b81,..., \u03b8N) gives the payment made by agent i when the reported types are \u03b81,..., \u03b8N.\n1Though this follows standard game theory notation [16], the fact that the agent has both a utility function and a type is perhaps confusing.\nThe types encode the various possible preferences that the agent may turn out to have, and the agent's type is not known to the aggregator.\nThe utility function is common knowledge, but because the agent's type is a parameter in the agent's utility function, the aggregator cannot know what the agent's utility is without knowing the agent's type.\n\u2022 A randomized mechanism with payments consists of a distribution selection function p: \u03981 \u00d7 \u03982 \u00d7...\u00d7 \u0398N \u2192 P (O), and for each agent i, a payment selection function \u03c0i: \u03981 \u00d7 \u03982 \u00d7...\u00d7 \u0398N \u2192 R.
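The mechanism objects just defined are easy to represent directly. Below is a minimal sketch (not from the paper; the two types, two outcomes, utilities, and payments are all hypothetical) of a single-agent deterministic mechanism with payments, together with brute-force checks of the individual rationality and truthfulness constraints formalized in the next subsections.

```python
# Hypothetical toy instance: two types, two outcomes.
types = ["theta_a", "theta_b"]
outcomes = ["o1", "o2"]

# u[type][outcome]: the agent's utility function (common knowledge).
u = {
    "theta_a": {"o1": 3.0, "o2": 0.0},
    "theta_b": {"o1": 1.0, "o2": 2.0},
}

# The mechanism: an outcome selection function o(theta) and a payment
# selection function pi(theta), here given as lookup tables.
o = {"theta_a": "o1", "theta_b": "o2"}
pi = {"theta_a": 3.0, "theta_b": 2.0}

def ex_post_ir(types, u, o, pi):
    # Each type's utility under truthful reporting must be at least the
    # fallback utility of 0.
    return all(u[t][o[t]] - pi[t] >= 0 for t in types)

def dominant_strategy_ic(types, u, o, pi):
    # Truthful reporting must be at least as good as any misreport.
    return all(
        u[t][o[t]] - pi[t] >= u[t][o[t_hat]] - pi[t_hat]
        for t in types for t_hat in types
    )

print(ex_post_ir(types, u, o, pi))            # True: both types get utility 0
print(dominant_strategy_ic(types, u, o, pi))  # True: misreporting never helps
```

For a single agent, ex post and ex interim IR coincide, so one check suffices; the same exhaustive loops generalize to multiple agents at the cost of iterating over full type vectors.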
There are two types of constraint on the designer in building the mechanism.\n3.1 Individual rationality (IR) constraints\nThe first type of constraint is the following.\nThe utility of each agent has to be at least as great as the agent's fallback utility, that is, the utility that the agent would receive if it did not participate in the mechanism.\nOtherwise that agent would not participate in the mechanism--and no agent's participation can ever hurt the mechanism designer's objective because at worst, the mechanism can ignore an agent by pretending the agent is not there.\n(Furthermore, if no such constraint applied, the designer could simply make the agents pay an infinite amount.)\nThis type of constraint is called an IR (individual rationality) constraint.\nThere are three different possible IR constraints: ex ante, ex interim, and ex post, depending on what the agent knows about its own type and the others' types when deciding whether to participate in the mechanism.\nEx ante IR means that the agent would participate if it knew nothing at all (not even its own type).\nWe will not study this concept in this paper.\nEx interim IR means that the agent would always participate if it knew only its own type, but not those of the others.\nEx post IR means that the agent would always participate even if it knew everybody's type.\nWe will define the latter two notions of IR formally.\nFirst, we need to formalize the concept of the fallback outcome.\nWe assume that each agent's fallback utility is zero for each one of its types.\nThis is without loss of generality because we can add a constant term to an agent's utility function (for a given type), without affecting the decision-making behavior of that expected utility maximizing agent [16].\n2We do not randomize over payments because as long as the agents and the designer are risk neutral with respect to payments, that is, their utility is linear in payments, there is no reason to randomize over payments.\nA randomized mechanism is ex interim IR if for any agent i, and any type \u03b8i \u2208 \u0398i, we have E(\u03b81,...,\u03b8N)|\u03b8i [Eo|\u03b81,...,\u03b8N [ui (\u03b8i, o)] \u2212 \u03c0i (\u03b81,..., \u03b8N)] \u2265 0.\nA
randomized mechanism is ex post IR if for any agent i, and any type vector (\u03b81,..., \u03b8N) \u2208 \u03981 \u00d7...\u00d7 \u0398N, we have Eo|\u03b81,...,\u03b8N [ui (\u03b8i, o)] \u2212 \u03c0i (\u03b81,..., \u03b8N) \u2265 0.\nThe terms involving payments can be left out in the case where payments are not possible.\n3.2 Incentive compatibility (IC) constraints\nThe second type of constraint says that the agents should never have an incentive to misreport their type (as justified above by the revelation principle).\nFor this type of constraint, the two most common variants (or solution concepts) are implementation in dominant strategies, and implementation in Bayes-Nash equilibrium.\nDEFINITION 6.\nGiven an automated mechanism design setting, a mechanism is said to implement its outcome and payment functions in dominant strategies if truthtelling is always optimal even when the types reported by the other agents are already known.\nFormally, for any agent i, any type vector (\u03b81,..., \u03b8i,..., \u03b8N) \u2208 \u03981 \u00d7...\u00d7 \u0398i \u00d7...\u00d7 \u0398N, and any alternative type report \u02c6\u03b8i \u2208 \u0398i, in the case of deterministic mechanisms we have ui (\u03b8i, o (\u03b81,..., \u03b8i,..., \u03b8N)) \u2212 \u03c0i (\u03b81,..., \u03b8i,..., \u03b8N) \u2265 ui (\u03b8i, o (\u03b81,..., \u02c6\u03b8i,..., \u03b8N)) \u2212 \u03c0i (\u03b81,..., \u02c6\u03b8i,..., \u03b8N).\nThe terms involving payments can be left out in the case where payments are not possible.\nThus, in dominant strategies implementation, truthtelling is optimal regardless of what the other agents report.\nIf it is optimal only given that the other agents are truthful, and given that one does not know the other agents' types, we have implementation in Bayes-Nash equilibrium.\nDEFINITION 7.\nGiven an automated mechanism design setting, a mechanism is said to implement its outcome and payment functions in Bayes-Nash equilibrium if truthtelling is always optimal to an agent when that agent does not yet know anything about the other agents' types, and the other agents are telling the truth.\nFormally, for any agent i, any type \u03b8i \u2208 \u0398i, and any alternative type report \u02c6\u03b8i \u2208 \u0398i, in the case of deterministic mechanisms we have E(\u03b81,...,\u03b8N)|\u03b8i [ui (\u03b8i, o (\u03b81,..., \u03b8i,..., \u03b8N)) \u2212 \u03c0i (\u03b81,..., \u03b8i,..., \u03b8N)] \u2265 E(\u03b81,...,\u03b8N)|\u03b8i [ui (\u03b8i, o (\u03b81,..., \u02c6\u03b8i,..., \u03b8N)) \u2212 \u03c0i (\u03b81,..., \u02c6\u03b8i,..., \u03b8N)].\nThe terms involving payments can be left out in the case where payments are not possible.\n3.3 Automated mechanism design\nWe can now define the computational problem we
study.\nDEFINITION 8.\n(AUTOMATED-MECHANISM-DESIGN (AMD)) We are given: \u2022 an automated mechanism design setting, \u2022 an IR notion (ex interim, ex post, or none), \u2022 a solution concept (dominant strategies or Bayes-Nash), \u2022 whether payments are possible, \u2022 whether randomization is possible, \u2022 (in the decision variant of the problem) a target value G.\nWe are asked whether there exists a mechanism of the specified kind (in terms of payments and randomization) that satisfies both the IR notion and the solution concept, and gives an expected value of at least G for the objective.\nAn interesting special case is the setting where there is only one agent.\nIn this case, the reporting agent always knows everything there is to know about the other agents' types--because there are no other agents.\nSince ex post and ex interim IR only differ on what an agent is assumed to know about other agents' types, the two IR concepts coincide here.\nAlso, because implementation in dominant strategies and implementation in Bayes-Nash equilibrium only differ on what an agent is assumed to know about other agents' types, the two solution concepts coincide here.\nThis observation will prove to be a useful tool in proving hardness results: if we prove computational hardness in the single-agent setting, this immediately implies hardness for both IR concepts, for both solution concepts, for any number of agents.\n4.\nPAYMENT-MAXIMIZING DETERMINISTIC AMD IS HARD\nIn this section we demonstrate that it is NP-complete to design a deterministic mechanism that maximizes the expected sum of the payments collected from the agents.\nWe show that this problem is hard even in the single-agent setting, thereby immediately showing it hard for both IR concepts, for both solution concepts.\nTo demonstrate NP-hardness, we reduce from the MINSAT problem.\nDEFINITION 9 (MINSAT).\nWe are given a formula \u03c6 in conjunctive normal form, represented by a set of Boolean variables V and
a set of clauses C, and an integer K (K < | C |).\nWe are asked whether there exists an assignment to the variables in V such that at most K clauses in \u03c6 are satisfied.\nMINSAT was recently shown to be NP-complete [14].\nWe can now present our result.\nTHEOREM 1.\nPayment-maximizing deterministic AMD is NP-complete, even for a single agent, even with a uniform distribution over types.\nPROOF.\nIt is easy to show that the problem is in NP.\nTo show NP-hardness, we reduce an arbitrary MINSAT instance to the following single-agent payment-maximizing deterministic AMD instance.\nLet the agent's type set be \u0398 = {\u03b8c: c \u2208 C} \u222a {\u03b8v: v \u2208 V}, where C is the set of clauses in the MINSAT instance, and V is the set of variables.\nLet the probability distribution over these types be uniform.\nLet the outcome set be O = {o0} \u222a {oc: c \u2208 C} \u222a {ol: l \u2208 L}, where L is the set of literals, that is, L = {+ v: v \u2208 V} \u222a {\u2212 v: v \u2208 V}.\nLet the notation v (l) = v denote that v is the variable corresponding to the literal l, that is, l \u2208 {+ v, \u2212 v}.\nLet l \u2208 c denote that the literal l occurs in clause c.
Then, let the agent's utility function be given by u (\u03b8c, ol) = | \u0398 | + 1 for all l \u2208 L with l \u2208 c; u (\u03b8c, oc) = | \u0398 | + 1; u (\u03b8v, ol) = | \u0398 | for all l \u2208 L with v (l) = v; and u (\u03b8, o) = 0 everywhere else.\nThe goal of the AMD instance is G = | \u0398 | + (| C | \u2212 K) \/ | \u0398 |, where K is the goal of the MINSAT instance.\nWe show the instances are equivalent.\nFirst, suppose there is a solution to the MINSAT instance.\nLet the assignment of truth values to the variables in this solution be given by the function f: V \u2192 L (where v (f (v)) = v for all v \u2208 V).\nThen, for every v \u2208 V, let o (\u03b8v) = of (v) and \u03c0 (\u03b8v) = | \u0398 |.\nFor every c \u2208 C, let o (\u03b8c) = oc; let \u03c0 (\u03b8c) = | \u0398 | + 1 if c is not satisfied in the MINSAT solution, and \u03c0 (\u03b8c) = | \u0398 | if c is satisfied.\nIt is straightforward to check that the IR constraint is satisfied.\nWe now check that the agent has no incentive to misreport.\nIf the agent's type is some \u03b8v, then any other report will give it an outcome that is no better, for a payment that is no less, so it has no incentive to misreport.\nIf the agent's type is some \u03b8c where c is a satisfied clause, again, any other report will give it an outcome that is no better, for a payment that is no less, so it has no incentive to misreport.\nThe final case to check is where the agent's type is some \u03b8c where c is an unsatisfied clause.\nIn this case, we observe that for none of the types, reporting it leads to an outcome ol for a literal l \u2208 c, precisely because the clause is not satisfied in the MINSAT instance.\nBecause also, no type besides \u03b8c leads to the outcome oc, reporting any other type will give an outcome with utility 0, while still forcing a payment of at least | \u0398 | from the agent.\nClearly the agent is better off reporting truthfully, for a total utility of 0.\nThis establishes that the agent never has an incentive to misreport.\nFinally, we show that the goal is reached.\nIf s is the number of satisfied clauses in the MINSAT solution (so that s \u2264 K), the expected payment from this mechanism is (| V || \u0398 | + s | \u0398 | + (| C | \u2212 s) (| \u0398 | + 1)) \/ | \u0398 | = | \u0398 | + (| C | \u2212 s) \/ | \u0398 | \u2265 | \u0398 | + (| C | \u2212 K) \/ | \u0398 | = G.\nSo there is a solution to the AMD instance.\nNow suppose there is a solution to the
AMD instance, given by an outcome function o and a payment function \u03c0.\nFirst, suppose there is some v \u2208 V such that o (\u03b8v) \u2208 \/ {o + v, o \u2212 v}.\nThen the utility that the agent derives from the given outcome for this type is 0, and hence, by IR, no payment can be extracted from the agent for this type.\nBecause, again by IR, the maximum payment that can be extracted for any other type is | \u0398 | + 1, it follows that the maximum expected payment that could be obtained is at most (| \u0398 | \u2212 1) (| \u0398 | + 1) \/ | \u0398 | < | \u0398 | < G, contradicting that this is a solution to the AMD instance.\nIt follows that in the solution to the AMD instance, for every v \u2208 V, o (\u03b8v) \u2208 {o + v, o \u2212 v}.\nWe can interpret this as an assignment of truth values to the variables: v is set to true if o (\u03b8v) = o + v, and to false if o (\u03b8v) = o \u2212 v.\nWe claim this assignment is a solution to the MINSAT instance.\nBy the IR constraint, the maximum payment we can extract from any type \u03b8v is | \u0398 |.\nBecause there can be no incentives for the agent to report falsely, for any clause c satisfied by the given assignment, the maximum payment we can extract for the corresponding type \u03b8c is | \u0398 |.\n(For if we extracted more from this type, the agent's utility in this case would be less than 1; and if v is the variable satisfying c in the assignment, so that o (\u03b8v) = ol where l occurs in c, then the agent would be better off reporting \u03b8v instead of the truthful report \u03b8c, to get an outcome worth | \u0398 | + 1 to it while having to pay at most | \u0398 |.)\nFinally, for any unsatisfied clause c, by the IR constraint, the maximum payment we can extract for the corresponding type \u03b8c is | \u0398 | + 1.\nIt follows that the expected payment from our mechanism is at most (| V || \u0398 | + s | \u0398 | + (| C | \u2212 s) (| \u0398 | + 1)) \/ | \u0398 | = | \u0398 | + (| C | \u2212 s) \/ | \u0398 |, where s is the number of satisfied clauses.\nBecause the mechanism achieves the goal G = | \u0398 | + (| C | \u2212 K) \/ | \u0398 |, we have | \u0398 | + (| C | \u2212 s) \/ | \u0398 | \u2265 | \u0398 | + (| C | \u2212 K) \/ | \u0398 |, which after algebraic manipulations is equivalent to s \u2264 K.\nSo there is a solution to the MINSAT instance.\nBecause payment-maximizing AMD is just the special case
of AMD for a self-interested designer where the designer has no preferences over the outcome chosen, this immediately implies hardness for the general case of AMD for a self-interested designer where payments are possible.\nHowever, it does not yet imply hardness for the special case where payments are not possible.\nWe will prove hardness in this case in the next section.\n5.\nSELF-INTERESTED DETERMINISTIC AMD WITHOUT PAYMENTS IS HARD\nIn this section we demonstrate that it is NP-complete to design a deterministic mechanism that maximizes the expectation of the designer's objective when payments are not possible.\nWe show that this problem is hard even in the single-agent setting, thereby immediately showing it hard for both IR concepts, for both solution concepts.\nTHEOREM 2.\nWithout payments, deterministic AMD for a self-interested designer is NP-complete, even for a single agent, even with a uniform distribution over types.\nPROOF.\nIt is easy to show that the problem is in NP.\nTo show NP-hardness, we reduce an arbitrary MINSAT instance to the following single-agent self-interested deterministic AMD without payments instance.\nLet the agent's type set be \u0398 = {\u03b8c: c \u2208 C} \u222a {\u03b8v: v \u2208 V}, where C is the set of clauses in the MINSAT instance, and V is the set of variables.\nLet the probability distribution over these types be uniform.\nLet the outcome set be O = {o0} \u222a {oc: c \u2208 C} \u222a {ol: l \u2208 L} \u222a {o \u2217}, where L is the set of literals, that is, L = {+ v: v \u2208 V} \u222a {\u2212 v: v \u2208 V}.\nLet the notation v (l) = v denote that v is the variable corresponding to the literal l, that is, l \u2208 {+ v, \u2212 v}.\nLet l \u2208 c denote that the literal l occurs in clause c.
Then, let the agent's utility function be given by u (\u03b8c, ol) = 2 for all l \u2208 L with l \u2208 c; u (\u03b8c, ol) = \u2212 1 for all l \u2208 L with l \u2208 \/ c; u (\u03b8c, oc) = 2; u (\u03b8c, oc') = \u2212 1 for all c' \u2208 C with c \u2260 c'; u (\u03b8c, o \u2217) = 1; u (\u03b8v, ol) = 1 for all l \u2208 L with v (l) = v; u (\u03b8v, ol) = \u2212 1 for all l \u2208 L with v (l) \u2260 v; u (\u03b8v, oc) = \u2212 1 for all c \u2208 C; u (\u03b8v, o \u2217) = \u2212 1.\nLet the designer's objective function be given by g (o \u2217) = | \u0398 | + 1; g (ol) = | \u0398 | for all l \u2208 L; g (oc) = | \u0398 | for all c \u2208 C.\nThe goal of the AMD instance is G = | \u0398 | + (| C | \u2212 K) \/ | \u0398 |, where K is the goal of the MINSAT instance.\nWe show the instances are equivalent.\nFirst, suppose there is a solution to the MINSAT instance.\nLet the assignment of truth values to the variables in this solution be given by the function f: V \u2192 L (where v (f (v)) = v for all v \u2208 V).\nThen, for every v \u2208 V, let o (\u03b8v) = of (v).\nFor every c \u2208 C that is satisfied in the MINSAT solution, let o (\u03b8c) = oc; for every unsatisfied c \u2208 C, let o (\u03b8c) = o \u2217.\nIt is straightforward to check that the IR constraint is satisfied.\nWe now check that the agent has no incentive to misreport.\nIf the agent's type is some \u03b8v, it is getting the maximum utility for that type, so it has no incentive to misreport.\nIf the agent's type is some \u03b8c where c is a satisfied clause, again, it is getting the maximum utility for that type, so it has no incentive to misreport.\nThe final case to check is where the agent's type is some \u03b8c where c is an unsatisfied clause.\nIn this case, we observe that for none of the types, reporting it leads to an outcome ol for a literal l \u2208 c, precisely because the clause is not satisfied in the MINSAT instance.\nBecause also, no type leads to the outcome oc, there is no outcome that the mechanism ever
selects that would give the agent utility greater than 1 for type \u03b8c, and hence the agent has no incentive to report falsely.\nThis establishes that the agent never has an incentive to misreport.\nFinally, we show that the goal is reached.\nIf s is the number of satisfied clauses in the MINSAT solution (so that s \u2264 K), then the expected value of the designer's objective function is (| V || \u0398 | + s | \u0398 | + (| C | \u2212 s) (| \u0398 | + 1)) \/ | \u0398 | = | \u0398 | + (| C | \u2212 s) \/ | \u0398 | \u2265 | \u0398 | + (| C | \u2212 K) \/ | \u0398 | = G.\nSo there is a solution to the AMD instance.\nNow suppose there is a solution to the AMD instance, given by an outcome function o. First, suppose there is some v \u2208 V such that o (\u03b8v) \u2208 \/ {o + v, o \u2212 v}.\nThe only other outcome that the mechanism is allowed to choose under the IR constraint is o0.\nThis has an objective value of 0, and because the highest value the objective function ever takes is | \u0398 | + 1, it follows that the maximum expected value of the objective function that could be obtained is at most (| \u0398 | \u2212 1) (| \u0398 | + 1) \/ | \u0398 | < | \u0398 | < G, contradicting that this is a solution to the AMD instance.\nIt follows that in the solution to the AMD instance, for every v \u2208 V, o (\u03b8v) \u2208 {o + v, o \u2212 v}.\nWe can interpret this as an assignment of truth values to the variables: v is set to true if o (\u03b8v) = o + v, and to false if o (\u03b8v) = o \u2212 v.\nWe claim this assignment is a solution to the MINSAT instance.\nBy the above, for any type \u03b8v, the value of the objective function in this mechanism will be | \u0398 |.\nFor any clause c satisfied by the given assignment, the value of the objective function in the case where the agent reports type \u03b8c will be at most | \u0398 |.\n(This is because we cannot choose the outcome o \u2217 for such a type, as in this case the agent would have an incentive to report \u03b8v instead, where v is the variable satisfying c in the assignment (so that o (\u03b8v) = ol where l occurs in c).)\nFinally, for any unsatisfied clause c, the maximum value the objective function can take in the case where the agent reports type \u03b8c is | \u0398 | + 1, simply because
this is the largest value the function ever takes. It follows that the expected value of the objective function for our mechanism is at most (|V| · |Θ| + s · |Θ| + (|C| − s) · (|Θ| + 1)) / |Θ| = |Θ| + (|C| − s)/|Θ|, where s is the number of satisfied clauses. Because our mechanism achieves the goal, it follows that |Θ| + (|C| − s)/|Θ| ≥ G = |Θ| + (|C| − K)/|Θ|, which is equivalent to s ≤ K. So there is a solution to the MINSAT instance. Both of our hardness results relied on the constraint that the mechanism should be deterministic. In the next section, we show that the hardness of design disappears when we allow for randomization in the mechanism.

6. RANDOMIZED AMD FOR A SELF-INTERESTED DESIGNER IS EASY

We now show how allowing for randomization over the outcomes makes the problem of self-interested AMD tractable through linear programming, for any constant number of agents.

THEOREM 3. Self-interested randomized AMD with a constant number of agents is solvable in polynomial time by linear programming, both with and without payments, both for ex post and ex interim IR, and both for implementation in dominant strategies and for implementation in Bayes-Nash equilibrium--even if the types are correlated.

PROOF. Because linear programs can be solved in polynomial time [13], all we need to show is that the number of variables and constraints in our program is polynomial for any constant number of agents--that is, exponential only in N.
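To make the linear-programming approach concrete, the sketch below solves a miniature single-agent randomized AMD instance with payments, ex post IR, and truthfulness constraints, maximizing the designer's expected payment. The two types, three outcomes, utility numbers, and uniform prior are invented for illustration; this is a sketch of the formulation, not the paper's implementation.

```python
from scipy.optimize import linprog

# Outcomes: o0 (fallback), o1, o2.  Types: t1, t2, uniform prior (0.5 each).
# Hypothetical utilities u(type, outcome), invented for this example:
#   u(t1, .) = [0, 3, 0]   u(t2, .) = [0, 0, 2]
# Variables x = [p10, p11, p12, p20, p21, p22, pi1, pi2], where p[t][o] is the
# probability of outcome o on report t, and pi[t] is the payment charged.
c = [0] * 6 + [-0.5, -0.5]          # linprog minimizes, so negate 0.5*pi1 + 0.5*pi2

A_eq = [[1, 1, 1, 0, 0, 0, 0, 0],   # outcome probabilities sum to 1 per reported type
        [0, 0, 0, 1, 1, 1, 0, 0]]
b_eq = [1, 1]

A_ub = [
    [0, -3, 0,  0, 0, 0,   1,  0],  # IR for t1:  pi1 <= 3*p11
    [0, 0, 0,   0, 0, -2,  0,  1],  # IR for t2:  pi2 <= 2*p22
    [0, -3, 0,  0, 3, 0,   1, -1],  # IC: t1 prefers truth to reporting t2
    [0, 0, 2,   0, 0, -2, -1,  1],  # IC: t2 prefers truth to reporting t1
]
b_ub = [0] * 4

bounds = [(0, 1)] * 6 + [(None, None)] * 2   # probabilities in [0,1], payments free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(-res.fun)   # optimal expected revenue
```

Here full surplus extraction (give each type its favorite outcome and charge its value) is incentive compatible, so the LP attains expected revenue 0.5·3 + 0.5·2 = 2.5; in general the optimum may require genuinely randomized outcomes, which is exactly what the probability variables allow.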
Throughout, for purposes of determining the size of the linear program, let T = max_i |Θi|. The variables of our linear program will be the probabilities p(θ1, θ2, ..., θN)(o) (at most T^N · |O| variables) and the payments πi(θ1, θ2, ..., θN) (at most N · T^N variables). (We show the linear program for the case where payments are possible; the case without payments is easily obtained from this by simply omitting all the payment variables in the program, or by adding additional constraints forcing the payments to be 0.) First, we show the IR constraints. For ex post IR, we add the following (at most N · T^N) constraints to the LP:

• For every i ∈ {1, 2, ..., N}, and for every (θ1, θ2, ..., θN) ∈ Θ1 × Θ2 × ... × ΘN, we add Σ_{o ∈ O} p(θ1, ..., θN)(o) · ui(θi, o) − πi(θ1, ..., θN) ≥ 0.

For ex interim IR, we add the following (at most N · T) constraints to the LP:

• For every i ∈ {1, 2, ..., N}, and for every θi ∈ Θi, we add Σ_{θ−i} γ(θ−i | θi) · (Σ_{o ∈ O} p(θi, θ−i)(o) · ui(θi, o) − πi(θi, θ−i)) ≥ 0.

Now, we show the solution concept constraints. For implementation in dominant strategies, we add the following (at most N · T^{N+1}) constraints to the LP:

• For every i ∈ {1, 2, ..., N}, for every (θ1, ..., θN) ∈ Θ1 × ... × ΘN, and for every alternative type report θ̂i ∈ Θi, we add Σ_{o ∈ O} p(θi, θ−i)(o) · ui(θi, o) − πi(θi, θ−i) ≥ Σ_{o ∈ O} p(θ̂i, θ−i)(o) · ui(θi, o) − πi(θ̂i, θ−i).

For implementation in Bayes-Nash equilibrium, we instead add the following (at most N · T^2) constraints to the LP:

• For every i ∈ {1, 2, ..., N}, for every θi ∈ Θi, and for every alternative type report θ̂i ∈ Θi, we add the same constraint taken in expectation over θ−i, weighted by γ(θ−i | θi).

All that is left to do is to give the expression the designer is seeking to maximize, which is the expected value of the designer's objective plus the payments collected: Σ_θ γ(θ) · (Σ_{o ∈ O} p(θ)(o) · g(o) + Σ_i πi(θ)).

As we indicated, the number of variables and constraints is exponential only in N, and hence the linear program is of polynomial size for constant numbers of agents. Thus the problem is solvable in polynomial time.

7. IMPLICATIONS FOR AN OPTIMAL COMBINATORIAL AUCTION DESIGN PROBLEM

In this section, we demonstrate some interesting consequences of the problem of automated mechanism design for a self-interested designer for the design of optimal combinatorial auctions. Consider a combinatorial auction with a set S of items for sale. For any bundle B ⊆ S, let ui(θi, B) be bidder i's utility for receiving bundle B when the bidder's type is θi. The optimal auction design problem is to specify the rules of the auction so as to
maximize expected revenue to the auctioneer. (By the revelation principle, without loss of generality, we can assume the auction is truthful.) The optimal auction design problem is solved for the case of a single item by the famous Myerson auction [18]. However, designing optimal auctions in combinatorial auctions is a recognized open research problem [3, 25]. The problem is open even if there are only two items for sale. (The two-item case with a very special form of complementarity and no substitutability has been solved recently [1].) Suppose we have free disposal--items can be thrown away at no cost. Also, suppose that the bidders' preferences have the following structure: whenever a bidder receives a bundle of items, the bidder's utility for that bundle is determined by the "best" item in the bundle only. (We emphasize that which item is the best is allowed to depend on the bidder's type.) We make the following useful observation in this setting: there is no sense in awarding a bidder more than one item. The reason is that if the bidder is reporting truthfully, taking all but the highest-valued item away from the bidder will not hurt the bidder; and, by free disposal, doing so can only reduce the incentive for this bidder to falsely report this type when the bidder actually has another type. We now show that the problem of designing a deterministic optimal auction here is NP-complete, by a reduction from the payment-maximizing AMD problem. PROOF. The problem is in NP because we can nondeterministically generate an allocation rule, and then set the payments using linear programming. To show NP-hardness, we reduce an arbitrary payment-maximizing deterministic AMD instance, with a single agent and a uniform distribution over types, to the following optimal combinatorial auction design problem instance with a single bidder with best-only preferences. For every outcome o ∈ O in the AMD instance (besides the outcome o0), let there be one item
s_o ∈ S. Let the type space be the same, and let v(θi, s_o) = ui(θi, o) (where u is as specified in the AMD instance). Let the expected revenue target value be the same in both instances. We show the instances are equivalent. First suppose there exists a solution to the AMD instance, given by an outcome function and a payment function. Then, if the AMD solution chooses outcome o for a type, in the optimal auction solution, allocate {s_o} to the bidder for this type. (Unless o = o0, in which case we allocate {} to the bidder.) Let the payment functions be the same in both instances. Then, the utility that an agent receives for reporting a type (given the true type) in either solution is the same, so we have incentive compatibility in the optimal auction solution. Moreover, because the type distribution and the payment function are the same, the expected revenue to the auctioneer/designer is the same. It follows that there exists a solution to the optimal auction design instance. Now suppose there exists a solution to the optimal auction design instance. By the at-most-one-item observation, we can assume without loss of generality that the solution never allocates more than one item. Then, if the optimal auction solution allocates item s_o to the bidder for a type, in the AMD solution, let the mechanism choose outcome o for that type. If the optimal auction solution allocates nothing to the bidder for a type, in the AMD solution, let the mechanism choose outcome o0 for that type. Let the payment functions be the same. Then, the utility that an agent receives for reporting a type (given the true type) in either solution is the same, so we have incentive compatibility in the AMD solution. Moreover, because the type distribution and the payment function are the same, the expected revenue to the designer/auctioneer is the same. It follows that there exists a solution to the AMD instance. Fortunately, we can also carry through the easiness
result for randomized mechanisms to this combinatorial auction setting--giving us one of the few known polynomial-time algorithms for an optimal combinatorial auction design problem. PROOF. By the at-most-one-item observation, we can without loss of generality restrict ourselves to allocations where each bidder receives at most one item. There are fewer than (|S| + 1)^k such allocations (where k is the number of bidders)--that is, a polynomial number of allocations. Because we can list the outcomes explicitly, we can simply solve this as a payment-maximizing AMD instance, with linear programming.

8. RELATED RESEARCH ON COMPLEXITY IN MECHANISM DESIGN

There has been considerable recent interest in mechanism design in computer science. Some of it has focused on issues of computational complexity, but most of that work has strived toward designing mechanisms that are easy to execute (e.g. [20, 15, 19, 9, 12]), rather than studying the complexity of designing the mechanism. The closest piece of earlier work studied the complexity of automated mechanism design by a benevolent designer [5, 6]. Roughgarden has studied the complexity of designing a good network topology for agents that selfishly choose the links they use [21]. This is related to mechanism design, but differs significantly in that the designer only has restricted control over the rules of the game, because there is no party that can impose the outcome (or side payments). Also, there is no explicit reporting of preferences.

9. CONCLUSIONS AND FUTURE RESEARCH

Often, an outcome must be chosen on the basis of the preferences reported by a group of agents. The key difficulty is that the agents may report their preferences insincerely to make the chosen outcome more favorable to themselves. Mechanism design is the art of designing the rules of the game so that the agents are motivated to report their preferences truthfully, and a desirable outcome is chosen. In a recently emerging approach--called automated mechanism design--a
mechanism is computed for the specific preference aggregation setting at hand. This has several advantages, but the downside is that the mechanism design optimization problem needs to be solved anew each time. Unlike earlier work on automated mechanism design that studied a benevolent designer, in this paper we studied automated mechanism design problems where the designer is self-interested--a setting much more relevant for electronic commerce. In this setting, the center cares only about which outcome is chosen and what payments are made to it. The reason that the agents' preferences are relevant is that the center is constrained to making each agent at least as well off as the agent would have been had it not participated in the mechanism. In this setting, we showed that designing an optimal deterministic mechanism is NP-complete in two important special cases: when the center is interested only in the payments made to it, and when payments are not possible and the center is interested only in the outcome chosen. These hardness results imply hardness in all more general automated mechanism design settings with a self-interested designer. The hardness results apply whether the individual rationality (participation) constraints are applied ex interim or ex post, and whether the solution concept is dominant strategies implementation or Bayes-Nash equilibrium implementation. We then showed that allowing randomization in the mechanism makes the design problem in all these settings computationally easy. Finally, we showed that the payment-maximizing AMD problem is closely related to an interesting variant of the optimal (revenue-maximizing) combinatorial auction design problem, where the bidders have "best-only" preferences. We showed that here, too, designing an optimal deterministic mechanism is NP-complete even with one agent, but designing an optimal randomized mechanism is easy. Future research includes studying automated mechanism design with a
self-interested designer in more restricted settings such as auctions (where the designer's objective may include preferences about which bidder should receive the good--as well as payments).\nWe also want to study the complexity of automated mechanism design in settings where the outcome and type spaces have special structure so they can be represented more concisely.\nFinally, we plan to assemble a data set of real-world mechanism design problems--both historical and current--and apply automated mechanism design to those problems.","keyphrases":["autom mechan design","autom mechan design","mechan design","combinatori auction","desir outcom","prefer aggreg","manipul","individu ration","nonmanipul mechan","statist knowledg","classic mechan","payment maxim","fallback outcom","minsat","self-interest amd","complementar","revenu maxim"],"prmu":["P","P","P","P","P","P","U","U","M","U","M","M","M","U","R","U","U"]} {"id":"C-68","title":"An Evaluation of Availability Latency in Carrier-based Vehicular ad-hoc Networks","abstract":"On-demand delivery of audio and video clips in peer-to-peer vehicular ad-hoc networks is an emerging area of research. Our target environment uses data carriers, termed zebroids, where a mobile device carries a data item on behalf of a server to a client thereby minimizing its availability latency. In this study, we quantify the variation in availability latency with zebroids as a function of a rich set of parameters such as car density, storage per device, repository size, and replacement policies employed by zebroids. Using analysis and extensive simulations, we gain novel insights into the design of carrier-based systems. Significant improvements in latency can be obtained with zebroids at the cost of a minimal overhead. These improvements occur even in scenarios with lower accuracy in the predictions of the car routes. 
Two particularly surprising findings are: (1) a naive random replacement policy employed by the zebroids shows competitive performance, and (2) latency improvements obtained with a simplified instantiation of zebroids are found to be robust to changes in the popularity distribution of the data items.","lvl-1":"An Evaluation of Availability Latency in Carrier-based Vehicular ad-hoc Networks Shahram Ghandeharizadeh Dept of Computer Science Univ of Southern California Los Angeles, CA 90089, USA shahram@usc.edu Shyam Kapadia Dept of Computer Science Univ of Southern California Los Angeles, CA 90089, USA kapadia@usc.edu Bhaskar Krishnamachari Dept of Computer Science Dept of Electrical Engineering Univ of Southern California Los Angeles, CA 90089, USA bkrishna@usc.edu ABSTRACT On-demand delivery of audio and video clips in peer-to-peer vehicular ad-hoc networks is an emerging area of research.\nOur target environment uses data carriers, termed zebroids, where a mobile device carries a data item on behalf of a server to a client thereby minimizing its availability latency.\nIn this study, we quantify the variation in availability latency with zebroids as a function of a rich set of parameters such as car density, storage per device, repository size, and replacement policies employed by zebroids.\nUsing analysis and extensive simulations, we gain novel insights into the design of carrier-based systems.\nSignificant improvements in latency can be obtained with zebroids at the cost of a minimal overhead.\nThese improvements occur even in scenarios with lower accuracy in the predictions of the car routes.\nTwo particularly surprising findings are: (1) a naive random replacement policy employed by the zebroids shows competitive performance, and (2) latency improvements obtained with a simplified instantiation of zebroids are found to be robust to changes in the popularity distribution of the data items.\nCategories and Subject Descriptors: C.2.4 [Distributed Systems]: 
Client/Server General Terms: Algorithms, Performance, Design, Experimentation. 1. INTRODUCTION Technological advances in areas of storage and wireless communications have now made it feasible to envision on-demand delivery of data items, e.g., video and audio clips, in vehicular peer-to-peer networks. In prior work, Ghandeharizadeh et al. [10] introduce the concept of vehicles equipped with a Car-to-Car-Peer-to-Peer device, termed AutoMata, for in-vehicle entertainment. The notable features of an AutoMata include a mass storage device offering hundreds of gigabytes (GB) of storage, a fast processor, and several types of networking cards. Even with today's 500 GB disk drives, a repository of diverse entertainment content may exceed the storage capacity of a single AutoMata. Such repositories constitute the focus of this study. To exchange data, we assume each AutoMata is configured with two types of networking cards: 1) a low-bandwidth networking card with a long radio-range in the order of miles that enables an AutoMata device to communicate with a nearby cellular or WiMax station, 2) a high-bandwidth networking card with a limited radio-range in the order of hundreds of feet. The high-bandwidth connection supports data rates in the order of tens to hundreds of Megabits per second and represents the ad-hoc peer-to-peer network between the vehicles. This is labelled as the data plane and is employed to exchange data items between devices. The low-bandwidth connection serves as the control plane, enabling AutoMata devices to exchange meta-data with one or more centralized servers. This connection offers bandwidths in the order of tens to hundreds of Kilobits per second. The centralized servers, termed dispatchers, compute schedules of data delivery along the data plane using the provided meta-data. These schedules are transmitted to the participating vehicles using the control plane. The technical feasibility of such a two-tier architecture is
presented in [7], with preliminary results to demonstrate that the bandwidth of the control plane is sufficient for exchange of the control information needed for realizing such an application. In a typical scenario, an AutoMata device presents a passenger with a list of data items (without loss of generality, the term data item might be either traditional media such as text or continuous media such as an audio or video clip), showing both the name of each data item and its availability latency. The latter, denoted as δ, is defined as the earliest time at which the client encounters a copy of its requested data item. A data item is available immediately when it resides in the local storage of the AutoMata device serving the request. Due to storage constraints, an AutoMata may not store the entire repository. In this case, availability latency is the time from when the user issues a request until when the AutoMata encounters another car containing the referenced data item. (The terms car and AutoMata are used interchangeably in this study.) The availability latency for an item is a function of the current location of the client, its destination and travel path, the mobility model of the AutoMata-equipped vehicles, the number of replicas constructed for the different data items, and the placement of data item replicas across the vehicles. A method to improve the availability latency is to employ data carriers which transport a replica of the requested data item from a server car containing it to a client that requested it. These data carriers are termed 'zebroids'. Selection of zebroids is facilitated by the two-tiered architecture. The control plane enables centralized information gathering at a dispatcher present at a base-station. (There may be dispatchers deployed at a subset of the base-stations for fault-tolerance and robustness; dispatchers between base-stations may communicate via the wired infrastructure.) Some examples of control information are currently active requests, travel paths of the clients and their destinations, and paths of the other cars. For each client request, the dispatcher may choose a set of z carriers that collaborate to transfer a data item from a server to a client (z-relay zebroids). Here, z is the number of zebroids such that 0 ≤ z < N, where N is the total number of cars. When z = 0 there are no carriers, requiring a server to deliver the data item directly to the client. Otherwise, the chosen relay team of z zebroids hands over the data item transitively to one another to arrive at the client, thereby reducing availability latency (see Section 3.1 for details). To increase robustness, the dispatcher may employ multiple relay teams of z carriers for every request. This may be useful in scenarios where the dispatcher has lower prediction accuracy in the information about the routes of the cars. Finally, storage constraints may require a zebroid to evict existing data items from its local storage to accommodate the client-requested item. In this study, we quantify the following main factors that affect availability latency in the presence of zebroids: (i) data item repository size, (ii) car density, (iii) storage capacity per car, (iv) client trip duration, (v) replacement scheme employed by the zebroids, and (vi) accuracy of the car route predictions. For a significant subset of these factors, we address some key questions pertaining to the use of zebroids both via analysis and extensive simulations. Our main findings are as follows. A naive random replacement policy employed by the zebroids shows competitive performance in terms of availability latency. With such a policy, substantial improvements in latency can be obtained with zebroids at a minimal replacement overhead. In more practical scenarios, where the dispatcher has inaccurate information about the car routes, zebroids continue to provide latency improvements. A surprising result is that changes
in popularity of the data items do not impact the latency gains obtained with a simple instantiation of z-relay zebroids called one-instantaneous zebroids (see Section 3.1). This study suggests a number of interesting directions to be pursued to gain better understanding of the design of carrier-based systems that improve availability latency. Related Work: Replication in mobile ad-hoc networks has been a widely studied topic [11, 12, 15]. However, none of these studies employ zebroids as data carriers to reduce the latency of the client's requests. Several novel and important studies such as ZebraNet [13], DakNet [14], Data Mules [16], Message Ferries [20], and Seek and Focus [17] have analyzed factors impacting intermittently connected networks consisting of data carriers similar in spirit to zebroids. Factors considered by each study are dictated by their assumed environment and target application. A novel characteristic of our study is the impact on availability latency for a given database repository of items. A detailed description of related works can be obtained in [9]. The rest of this paper is organized as follows. Section 2 provides an overview of the terminology along with the factors that impact availability latency in the presence of zebroids. Section 3 describes how the zebroids may be employed. Section 4 provides details of the analysis methodology employed to capture the performance with zebroids. Section 5 describes the details of the simulation environment used for evaluation. Section 6 enlists the key questions examined in this study and answers them via analysis and simulations. Finally, Section 7 presents brief conclusions and future research directions. 2. OVERVIEW AND TERMINOLOGY Table 1 summarizes the notation of the parameters used in the paper. Below we introduce some terminology used in the paper. Assume a network of N AutoMata-equipped cars, each with storage capacity of α bytes. The total storage capacity of the
system is ST = N · α. There are T data items in the database, each with size Si.

Table 1: Terms and their definitions

Database Parameters:
  T      Number of data items.
  Si     Size of data item i.
  fi     Frequency of access to data item i.
Replication Parameters:
  Ri     Normalized frequency of access to data item i.
  ri     Number of replicas for data item i.
  n      Characterizes a particular replication scheme.
  δi     Average availability latency of data item i.
  δagg   Aggregate availability latency, δagg = Σ_{j=1}^{T} δj · fj.
AutoMata System Parameters:
  G      Number of cells in the map (2D-torus).
  N      Number of AutoMata devices in the system.
  α      Storage capacity per AutoMata.
  γ      Trip duration of the client AutoMata.
  ST     Total storage capacity of the AutoMata system, ST = N · α.

The frequency of access to data item i is denoted as fi, with Σ_{j=1}^{T} fj = 1. Let the trip duration of the client AutoMata under consideration be γ. We now define the normalized frequency of access to the data item i, denoted by Ri, as:

  Ri = (fi)^n / Σ_{j=1}^{T} (fj)^n ;  0 ≤ n ≤ ∞   (1)

The exponent n characterizes a particular replication technique. A square-root replication scheme is realized when n = 0.5 [5]. This serves as the baseline for comparison with the case when zebroids are deployed. Ri is normalized to a value between 0 and 1. The number of replicas for data item i, denoted as ri, is ri = min(N, max(1, Ri · N · α / Si)). This captures the case when at least one copy of every data item must be present in the ad-hoc network at all times. In cases where a data item may be lost from the ad-hoc network, this equation becomes ri = min(N, max(0, Ri · N · α / Si)). In this case, a request for the lost data item may need to be satisfied by fetching the item from a remote server. The availability latency for a data item i, denoted as δi, is defined as the earliest time at which a client AutoMata will find the first replica of the item
accessible to it. If this condition is not satisfied, then we set δi to γ. This indicates that data item i was not available to the client during its journey. Note that since there is at least one replica in the system for every data item i, by setting γ to a large value we ensure that the client's request for any data item i will be satisfied. However, in most practical circumstances γ may not be so large as to find every data item. We are interested in the availability latency observed across all data items. Hence, we augment the average availability latency for every data item i with its fi to obtain the following weighted availability latency (δagg) metric: δagg = Σ_{i=1}^{T} δi · fi. Next, we present our solution approach describing how zebroids are selected. 3. SOLUTION APPROACH 3.1 Zebroids When a client references a data item missing from its local storage, the dispatcher identifies all cars with a copy of the data item as servers. Next, the dispatcher obtains the future routes of all cars for a finite time duration equivalent to the maximum time the client is willing to wait for its request to be serviced. Using this information, the dispatcher schedules the quickest delivery path from any of the servers to the client using any other cars as intermediate carriers. Hence, it determines the optimal set of forwarding decisions that will enable the data item to be delivered to the client in the minimum amount of time. Note that the latency along the quickest delivery path that employs a relay team of z zebroids is similar to that obtained with epidemic routing [19] under the assumptions of infinite storage and no interference. A simple instantiation of z-relay zebroids occurs when z = 1 and the client's request triggers a transfer of a copy of the requested data item from a server to a zebroid in its vicinity. Such a zebroid is termed a one-instantaneous zebroid. In some cases, the dispatcher might have
inaccurate information about the routes of the cars. Hence, a zebroid scheduled on the basis of this inaccurate information may not rendezvous with its target client. To minimize the likelihood of such scenarios, the dispatcher may schedule multiple zebroids. This may incur additional overhead due to redundant resource utilization to obtain the same latency improvements. The time required to transfer a data item from a server to a zebroid depends on its size and the available link bandwidth. With small data items, it is reasonable to assume that this transfer time is small, especially in the presence of the high-bandwidth data plane. Large data items may be divided into smaller chunks, enabling the dispatcher to schedule one or more zebroids to deliver each chunk to a client in a timely manner. This remains a future research direction. Initially, the number of replicas for each data item might be computed using Equation 1. This scheme computes the number of data item replicas as a function of their popularity. It is static because the number of replicas in the system does not change and no replacements are performed. Hence, this is referred to as the 'no-zebroids' environment. We quantify the performance of the various replacement policies with reference to this baseline that does not employ zebroids. One may assume a cold-start phase, where initially only one or a few copies of every data item exist in the system. Many storage slots of the cars may be unoccupied. When the cars encounter one another, they construct new replicas of some selected data items to occupy the empty slots. The selection procedure may be to choose the data items uniformly at random. New replicas are created as long as a car has a certain threshold of its storage unoccupied. Eventually, the majority of the storage capacity of a car will be exhausted. 3.2 Carrier-based Replacement Policies The replacement policies considered in this paper are reactive, since a replacement occurs
only in response to a request issued for a certain data item. When the local storage of a zebroid is completely occupied, it needs to replace one of its existing items to carry the client-requested data item. For this purpose, the zebroid must select an appropriate candidate for eviction. This decision process is analogous to that encountered in operating system paging, where the goal is to maximize the cache hit ratio to prevent disk access delay [18]. The carrier-based replacement policies employed in our study are Least Recently Used (LRU), Least Frequently Used (LFU) and Random (where an eviction candidate is chosen uniformly at random). We have considered local and global variants of the LRU/LFU policies, which determine whether local or global knowledge of the contents of the cars known at the dispatcher is used for the eviction decision at a zebroid (see [9] for more details). The replacement policies incur the following overheads. First, the complexity associated with the implementation of a policy. Second, the bandwidth used to transfer a copy of a data item from a server to the zebroid. Third, the average number of replacements incurred by the zebroids. Note that in the no-zebroids case neither overhead is incurred. The metrics considered in this study are aggregate availability latency, δagg, percentage improvement in δagg with zebroids as compared to the no-zebroids case, and average number of replacements incurred per client request, which is an indicator of the overhead incurred by zebroids. Note that the dispatchers, with the help of the control plane, may ensure that no data item is lost from the system. In other words, at least one replica of every data item is maintained in the ad-hoc network at all times. In such cases, even though a car may meet a requesting client earlier than other servers, if its local storage contains data items with only a single copy in the system, then such a car is not chosen as a zebroid. 4. ANALYSIS
METHODOLOGY Here, we present the analytical evaluation methodology and some approximations as closed-form equations that capture the improvements in availability latency that can be obtained with both one-instantaneous and z-relay zebroids. First, we present some preliminaries of our analysis methodology. • Let N be the number of cars in the network performing a 2D random walk on a √G × √G torus. An additional car serves as a client, yielding a total of N + 1 cars. Such a mobility model has been used widely in the literature [17, 16], chiefly because it is amenable to analysis and provides a baseline against which the performance of other mobility models can be compared. Moreover, this class of Markovian mobility models has been used to model the movements of vehicles [3, 21]. • We assume that all cars start from the stationary distribution and perform independent random walks. Although for sparse density scenarios the independence assumption does hold, it is no longer valid when N approaches G. • Let the size of the data item repository of interest be T. Also, data item i has ri replicas. This implies ri cars, identified as servers, have a copy of this data item when the client requests item i.
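The random-walk mobility model described above is easy to prototype. The sketch below (illustrative only; the grid size, trial count, and seed are invented and much smaller than a realistic deployment) estimates the expected meeting time of a client and a single server, each performing an independent 2D random walk on a √G × √G torus starting from the uniform (stationary) distribution.

```python
import random

def meeting_time(side, rng, cap=10_000):
    """Steps until two independent walkers on a side x side torus occupy the same cell."""
    cx, cy = rng.randrange(side), rng.randrange(side)  # client, uniform start
    sx, sy = rng.randrange(side), rng.randrange(side)  # server, uniform start
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for t in range(cap):
        if (cx, cy) == (sx, sy):
            return t
        dx, dy = rng.choice(moves)                      # each car takes one random step
        cx, cy = (cx + dx) % side, (cy + dy) % side
        dx, dy = rng.choice(moves)
        sx, sy = (sx + dx) % side, (sy + dy) % side
    return cap                                          # did not meet within the horizon

rng = random.Random(7)
side, trials = 5, 1000                                  # G = 25 cells; small for speed
avg = sum(meeting_time(side, rng) for _ in range(trials)) / trials
print(f"estimated mean meeting time on a {side}x{side} torus: {avg:.1f} steps")
```

Averaging over many trials gives a Monte Carlo estimate of the pairwise meeting time that the closed-form analysis below models as an exponential random variable.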
All analysis results presented in this section are obtained assuming that the client is willing to wait as long as it takes for its request to be satisfied (unbounded trip duration, γ = ∞). With the random walk mobility model on a 2D-torus, there is a guarantee that as long as there is at least one replica of the requested data item in the network, the client will eventually encounter this replica [2]. Extensions to the analysis that also consider finite trip durations can be obtained in [9].

Consider a scenario where no zebroids are employed. In this case, the expected availability latency for the data item is the expected meeting time of the random walk undertaken by the client with any of the random walks performed by the servers. Aldous et al. [2] show that the meeting time of two random walks in such a setting can be modelled as an exponential distribution with mean $C = c \cdot G \cdot \log G$, where the constant $c \approx 0.17$ for $G \geq 25$. The meeting time, or equivalently the availability latency δi, for the client requesting data item i is the time till it encounters any of these ri replicas for the first time. This is also an exponential distribution with the following expected value (note that this formulation is valid only for sparse cases when $G \gg r_i$):

$$\delta_i = \frac{c \cdot G \cdot \log G}{r_i}$$

The aggregate availability latency without employing zebroids is then this expression averaged over all data items, weighted by their frequency of access:

$$\delta_{agg}(no\text{-}zeb) = \sum_{i=1}^{T} \frac{f_i \cdot c \cdot G \cdot \log G}{r_i} = \sum_{i=1}^{T} \frac{f_i \cdot C}{r_i} \quad (2)$$

4.1 One-instantaneous zebroids

Recall that with one-instantaneous zebroids, for a given request, a new replica is created on a car in the vicinity of the server, provided this car meets the client earlier than any of the ri servers. Moreover, this replica is spawned at the time step when the client issues the request. Let $N_i^c$ be the expected total number of nodes that are in the same cell as any of the ri
servers. Then, we have:

$$N_i^c = (N - r_i) \cdot \left(1 - \left(1 - \frac{1}{G}\right)^{r_i}\right) \quad (3)$$

In the analytical model, we assume that $N_i^c$ new replicas are created, so that the total number of replicas is increased to $r_i + N_i^c$. The availability latency is reduced since the client is more likely to meet a replica earlier. The aggregate expected availability latency in the case of one-instantaneous zebroids is then given by:

$$\delta_{agg}(zeb) = \sum_{i=1}^{T} \frac{f_i \cdot c \cdot G \cdot \log G}{r_i + N_i^c} = \sum_{i=1}^{T} \frac{f_i \cdot C}{r_i + N_i^c} \quad (4)$$

Note that in obtaining this expression, for ease of analysis, we have assumed that the new replicas start from random locations in the torus (not necessarily from the same cell as the original ri servers). It thus treats all the $N_i^c$ carriers independently, just like the ri original servers. As we shall show below by comparison with simulations, this approximation provides an upper bound on the improvements that can be obtained, because it results in a lower expected latency at the client. It should be noted that the procedure listed above will yield a similar latency to that employed by a dispatcher employing one-instantaneous zebroids (see Section 3.1). Since the dispatcher is aware of all future car movements, it would only transfer the requested data item on a single zebroid, if it determines that the zebroid will meet the client earlier than any other server. This selected zebroid is included in the $N_i^c$ new replicas.

4.2 z-relay zebroids

To calculate the expected availability latency with z-relay zebroids, we use a coloring problem analog similar to an approach used by Spyropoulos et al.
[17]. Details of the procedure to obtain a closed-form expression are given in [9]. The aggregate availability latency (δagg) with z-relay zebroids is given by:

$$\delta_{agg}(zeb) = \sum_{i=1}^{T} \left[ \frac{f_i \cdot C}{N+1} \cdot \frac{1}{N+1-r_i} \cdot \left( N \log\frac{N}{r_i} - \log(N+1-r_i) \right) \right] \quad (5)$$

5. SIMULATION METHODOLOGY

The simulation environment considered in this study comprises vehicles such as cars that carry a fraction of the data item repository. A prediction accuracy parameter inherently provides a certain probabilistic guarantee on the confidence of the car route predictions known at the dispatcher. A value of 100% implies that the exact routes of all cars are known at all times. A 70% value for this parameter indicates that the routes predicted for the cars will match the actual ones with probability 0.7. Note that this probability is spread across the car routes for the entire trip duration. We now provide the preliminaries of the simulation study and then describe the parameter settings used in our experiments.

• Similar to the analysis methodology, the map used is a 2D torus. A Markov mobility model representing an unbiased 2D random walk on the surface of the torus describes the movement of the cars across this torus.

• Each grid/cell is a unique state of this Markov chain. In each time slot, every car makes a transition from a cell to any of its 8 neighboring cells. The transition is a function of the current location of the car and a probability transition matrix Q = [qij], where qij is the probability of transition from state i to state j.
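The 8-neighbour random walk on the torus, and the exponential meeting-time model it gives rise to (Equation 2), can be sketched with a short Monte-Carlo simulation. This is our own illustrative code, not the authors' simulator; function names and parameter choices are assumptions:

```python
import random
from math import log

# All 8 neighbouring cells of the current cell (the Markov mobility model above).
MOVES = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def step(pos, side, rng):
    """One time-slot transition of a car to a uniformly chosen neighbouring cell."""
    dx, dy = rng.choice(MOVES)
    return ((pos[0] + dx) % side, (pos[1] + dy) % side)

def meeting_time(side, num_servers, rng, max_steps=50_000):
    """Time slots until the client first shares a cell with any of the servers.
    All cars start from the stationary (uniform) distribution."""
    client = (rng.randrange(side), rng.randrange(side))
    servers = [(rng.randrange(side), rng.randrange(side)) for _ in range(num_servers)]
    for t in range(max_steps):
        if client in servers:
            return t
        client = step(client, side, rng)
        servers = [step(s, side, rng) for s in servers]
    return max_steps

def analytical_latency(G, r_i, c=0.17):
    """delta_i = c * G * log G / r_i (the per-item form of Equation 2), G = cells."""
    return c * G * log(G) / r_i

if __name__ == "__main__":
    rng = random.Random(42)
    side, r_i, trials = 10, 2, 500          # 10 x 10 torus => G = 100 cells
    sim = sum(meeting_time(side, r_i, rng) for _ in range(trials)) / trials
    print(f"simulated ~{sim:.1f} slots vs analytical ~{analytical_latency(side * side, r_i):.1f} slots")
```

The empirical mean need not match the closed form exactly (the constant c ≈ 0.17 is derived for a related walk in the sparse regime), but the qualitative 1/ri scaling is visible as the number of servers grows.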
Only AutoMata-equipped cars within the same cell may communicate with each other.

• The parameters γ, δ have been discretized and expressed in terms of the number of time slots.

• An AutoMata device does not maintain more than one replica of a data item. This is because additional replicas occupy storage without providing benefits.

• Either one-instantaneous or z-relay zebroids may be employed per client request for latency improvement.

• Unless otherwise mentioned, the prediction accuracy parameter is assumed to be 100%. This is because this study aims to quantify the effect of a large number of parameters individually on availability latency.

Here, we set the size of every data item, Si, to be 1. α represents the number of storage slots per AutoMata. Each storage slot stores one data item. γ represents the duration of the client's journey in terms of the number of time slots. Hence, the possible values of availability latency are between 0 and γ. δ is defined as the number of time slots after which a client AutoMata device will encounter a replica of the data item for the first time. If a replica for the data item requested was encountered by the client in the first cell, then we set δ = 0. If δ > γ, then we set δ = γ, indicating that no copy of the requested data item was encountered by the client during its entire journey. In all our simulations, for illustration we consider a 5 × 5 2D-torus with γ set to 10. Our experiments indicate that the trends in the results scale to maps of larger size. We simulated a skewed distribution of access to the T data items that obeys Zipf's law with a mean of 0.27. This distribution is shown to correspond to sale of movie theater tickets in the United States [6]. We employ a replication scheme that allocates replicas for a data item as a function of the square-root of the frequency of access of that item. The square-root
replication scheme is shown to have competitive latency performance over a large parameter space [8]. The data item replicas are distributed uniformly across the AutoMata devices. This serves as the baseline no-zebroids case. The square-root scheme also provides the initial replica distribution when zebroids are employed. Note that the replacements performed by the zebroids will cause changes to the data item replica distribution. Requests generated as per the Zipf distribution are issued one at a time. The client car that issues the request is chosen in a round-robin manner. After a maximum period of γ, the latency encountered by this request is recorded. In all the simulation results, each point is an average of 200,000 requests. Moreover, the 95% confidence intervals determined for the results are quite tight for the metrics of latency and replacement overhead. Hence, we only present them for the metric that captures the percentage improvement in latency with respect to the no-zebroids case.

Figure 1: Availability latency when employing one-instantaneous zebroids as a function of (N, α) values, when the total storage in the system is kept fixed, ST = 200 (curves: lru_global, lfu_global, lru_local, lfu_local, random).

6. RESULTS

In this section, we describe our evaluation results, where the following key questions are addressed. With a wide choice of replacement schemes available for a zebroid, what is their effect on availability latency? A more central question is: do zebroids provide significant improvements in availability latency? What is the associated overhead incurred in employing these zebroids? What happens to these improvements in scenarios where a dispatcher may have imperfect information about the car routes? What inherent trade-offs exist between car density and storage per car with regards to their combined as well as
individual effect on availability latency in the presence of zebroids? We present both simple analysis and detailed simulations to provide answers to these questions, as well as to gain insights into the design of carrier-based systems.

6.1 How does a replacement scheme employed by a zebroid impact availability latency?

For illustration, we present "scale-up" experiments where one-instantaneous zebroids are employed (see Figure 1). By scale-up, we mean that α and N are changed proportionally to keep the total system storage, ST, constant. Here, T = 50 and ST = 200. We choose the following values of (N, α) = {(20,10), (25,8), (50,4), (100,2)}. The figure indicates that a random replacement scheme shows competitive performance. This is because of several reasons. Recall that the initial replica distribution is set as per the square-root replication scheme. The random replacement scheme does not alter this distribution, since it makes replacements blind to the popularity of a data item. However, the replacements cause dynamic data re-organization so as to better serve the currently active request. Moreover, the mobility pattern of the cars is random; hence, the locations from which the requests are issued by clients are also random and not known a priori at the dispatcher. These findings are significant because a random replacement policy can be implemented in a simple decentralized manner.

The lru-global and lfu-global schemes provide a latency performance that is worse than random. This is because these policies rapidly develop a preference for the more popular data items, thereby creating a larger number of replicas for them. During eviction, the more popular data items are almost never selected as a replacement candidate. Consequently, there are fewer replicas for the less popular items. Hence, the initial distribution of the data item replicas changes from square-root to one resembling linear replication. The higher number of replicas for the
popular data items provide marginal additional benefits, while the lower number of replicas for the other data items hurts the latency performance of these global policies. The lfu-local and lru-local schemes have similar performance to random, since they do not have enough history of local data item requests. We speculate that the performance of these local policies will approach that of their global variants for a large enough history of data item requests. On account of the competitive performance shown by a random policy, for the remainder of the paper we present the performance of zebroids that employ a random replacement policy.

6.2 Do zebroids provide significant improvements in availability latency?

We find that in many scenarios employing zebroids provides substantial improvements in availability latency.

6.2.1 Analysis

We first consider the case of one-instantaneous zebroids. Figure 2a shows the variation in δagg as a function of N for T = 10 and α = 1 with a 10 × 10 torus, using Equation 4. Both the x and y axes are drawn to a log-scale. Figure 2b shows the % improvement in δagg obtained with one-instantaneous zebroids. In this case, only the x-axis is drawn to a log-scale. For illustration, we assume that the T data items are requested uniformly. Initially, when the network is sparse, the analytical approximation for improvements in latency with zebroids, obtained from Equations 2 and 4, closely matches the simulation results. However, as N increases, the sparseness assumption for which the analysis is valid, namely N ≪ G, is no longer true. Hence, the two curves rapidly diverge. The point at which the two curves move away from each other corresponds to a value of δagg ≤ 1. Moreover, as mentioned earlier, the analysis provides an upper bound on the latency improvements, as it treats the newly created replicas given by $N_i^c$ independently. However, these $N_i^c$ replicas start from the same cell as one of
the server replicas ri. Finally, the analysis captures a one-shot scenario where, given an initial data item replica distribution, the availability latency is computed. The new replicas created do not affect future requests from the client.

On account of space limitations, here we summarize the observations in the case when z-relay zebroids are employed. The interested reader can obtain further details in [9]. Similar observations to the one-instantaneous zebroid case apply, since the simulation and analysis curves again start diverging when the analysis assumptions are no longer valid. However, the key observation here is that the latency improvement with z-relay zebroids is significantly better than in the one-instantaneous zebroids case, especially for lower storage scenarios. This is because in sparse scenarios, the transitive hand-offs between the zebroids create a higher number of replicas for the requested data item, yielding lower availability latency. Moreover, it is also seen that the simulation validation curve for the improvements in δagg with z-relay zebroids approaches that of the one-instantaneous zebroid case for higher storage (higher N values). This is because one-instantaneous zebroids are a special case of z-relay zebroids.

6.2.2 Simulation

We conduct simulations to examine the entire storage spectrum obtained by changing car density N or storage per car α, to also capture scenarios where the sparseness assumptions for which the analysis is valid do not hold. We separate the effects of N and α by capturing the variation of N while keeping α constant (case 1) and vice-versa (case 2), both with z-relay and one-instantaneous zebroids. Here, we set the repository size as T = 25. Figure 3 captures case 1 mentioned above. Similar trends are observed with case 2; a complete description of those results is available in [9]. With Figure 3b, keeping α constant, initially increasing car density has higher latency
benefits because increasing N introduces more zebroids in the system. As N is further increased, ω reduces because the total storage in the system goes up. Consequently, the number of replicas per data item goes up, thereby increasing the number of servers. Hence, the replacement policy cannot find a zebroid as often to transport the requested data item to the client earlier than any of the servers. On the other hand, the increased number of servers benefits the no-zebroids case in bringing δagg down. The net effect results in a reduction in ω for larger values of N.

Figure 2: Latency performance with one-instantaneous zebroids via simulations along with the analytical approximation for a 10 × 10 torus with T = 10. Panel 2a shows δagg (curves: no-zebroids analytical, no-zebroids simulation, one-instantaneous analytical, one-instantaneous simulation); panel 2b shows the % improvement in δagg with respect to no-zebroids (ω) (curves: analytical upper bound, simulation).

The trends mentioned above are similar to those obtained from the analysis. However, somewhat counter-intuitively, with relatively higher system storage, z-relay zebroids provide slightly lower improvements in latency as compared to one-instantaneous zebroids. We speculate that this is due to the different data item replica distributions enforced by them. Note that replacements performed by the zebroids cause fluctuations in these replica distributions, which may affect future client requests. We are currently exploring suitable choices of parameters that can capture these changing replica distributions.

6.3 What is the overhead incurred with improvements in latency with zebroids?

We find that the improvements in latency with zebroids are obtained at a minimal replacement overhead (< 1 per client request).

6.3.1 Analysis

With
one-instantaneous zebroids, for each client request a maximum of one zebroid is employed for latency improvement. Hence, the replacement overhead per client request can amount to a maximum of one.

Figure 3: Latency performance with both one-instantaneous and z-relay zebroids as a function of the car density when α = 2 and T = 25. Panel 3a shows the aggregate availability latency (δagg) (curves: no-zebroids, one-instantaneous, z-relays); panel 3b shows the % improvement in δagg with respect to no-zebroids (ω) (curves: one-instantaneous, z-relays).

Recall that to calculate the latency with one-instantaneous zebroids, $N_i^c$ new replicas are created in the same cell as the servers. Now a replacement is only incurred if one of these $N_i^c$ newly created replicas meets the client earlier than any of the $r_i$ servers. Let $X_{r_i}$ and $X_{N_i^c}$ respectively be random variables that capture the minimum time till any of the $r_i$ and $N_i^c$ replicas meet the client. Since $X_{r_i}$ and $X_{N_i^c}$ are assumed to be independent, by the property of exponentially distributed random variables we have:

$$\text{Overhead/request} = 1 \cdot P(X_{N_i^c} < X_{r_i}) + 0 \cdot P(X_{r_i} \leq X_{N_i^c}) \quad (6)$$

$$\text{Overhead/request} = \frac{N_i^c / C}{r_i / C + N_i^c / C} = \frac{N_i^c}{r_i + N_i^c} \quad (7)$$

Recall that the number of replicas for data item i, $r_i$, is a function of the total storage in the system, i.e., $r_i = k \cdot N \cdot \alpha$, where k satisfies the constraint $1 \leq r_i \leq N$.
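The per-request overhead expressions can be spot-checked numerically. The sketch below is ours (function names are assumptions); it uses the overhead form $N_i^c/(r_i + N_i^c)$, which is the one consistent with the sparse-regime expression of Equation 8, and compares the two:

```python
def expected_new_replicas(N, r_i, G):
    """Equation 3: expected number of candidate carriers sharing a cell with a server."""
    return (N - r_i) * (1 - (1 - 1 / G) ** r_i)

def overhead_per_request(N, r_i, G):
    """Probability that a newly spawned replica beats all r_i servers to the client."""
    nc = expected_new_replicas(N, r_i, G)
    return nc / (r_i + nc)

def overhead_sparse(N, G, k, alpha):
    """Equation 8: sparse-regime form with r_i = k*N*alpha and (1-x)^n ~ 1 - n*x."""
    return 1 - G / (G + N * (1 - k * alpha))

if __name__ == "__main__":
    N, alpha, G = 50, 2, 400
    r_i = 4                          # implies k = r_i / (N * alpha) = 0.04
    k = r_i / (N * alpha)
    print(overhead_per_request(N, r_i, G), overhead_sparse(N, G, k, alpha))
```

Holding the total storage N·α fixed while raising N (and lowering α) raises the overhead, matching the discussion that follows, and the overhead stays below one replacement per request.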
Using this along with Equation 3, we get:

$$\text{Overhead/request} = 1 - \frac{G}{G + N \cdot (1 - k \cdot \alpha)} \quad (8)$$

Now, if we keep the total system storage N · α constant, since G and T are also constant, increasing N increases the replacement overhead. This is because if N · α is constant, then increasing N causes α to go down. This implies that a higher replacement overhead is incurred for higher N and lower α values. Moreover, when ri = N, every car has a replica of data item i. Hence, no zebroids are employed when this item is requested, yielding an overhead/request for this item of zero. Next, we present simulation results that validate our analysis hypothesis for the overhead associated with deployment of one-instantaneous zebroids.

Figure 4: Replacement overhead (average number of replacements per request) when employing one-instantaneous zebroids as a function of (N, α) values, when the total storage in the system is kept fixed, ST = 200 (curves: (N=20, α=10), (N=25, α=8), (N=50, α=4), (N=100, α=2)).

6.3.2 Simulation

Figure 4 shows the replacement overhead with one-instantaneous zebroids when (N, α) are varied while keeping the total system storage constant. The trends shown by the simulation are in agreement with those predicted by the analysis above. However, the total system storage can be changed either by varying car density (N) or storage per car (α). On account of similar trends, here we present the case when α is kept constant and N is varied (Figure 5). We refer the reader to [9] for the case when α is varied and N is held constant. We present an intuitive argument for the behavior of the per-request replacement overhead curves. When the storage is extremely scarce, so that only one replica per data item exists in the AutoMata network, the number
of replacements performed by the zebroids is zero, since any replacement would cause a data item to be lost from the system. The dispatcher ensures that no data item is lost from the system. At the other end of the spectrum, if storage becomes so abundant that α = T, then the entire data item repository can be replicated on every car. The number of replacements is again zero, since each request can be satisfied locally. A similar scenario occurs if N is increased to such a large value that another car with the requested data item is always available in the vicinity of the client. However, there is a storage spectrum in the middle where replacements by the scheduled zebroids result in improvements in δagg (see Figure 3). Moreover, we observe that for sparse storage scenarios, the higher improvements with z-relay zebroids are obtained at the cost of a higher replacement overhead when compared to the one-instantaneous zebroids case. This is because in the former case, each of the z zebroids selected along the lowest latency path to the client needs to perform a replacement. However, the replacement overhead is still less than 1, indicating that on average less than one replacement per client request is needed, even when z-relay zebroids are employed.

Figure 5: Replacement overhead with zebroids (curves: z-relays, one-instantaneous) for the cases when N is varied keeping α = 2.

Figure 6: δagg for different car densities (N = 50 and N = 200; curves: no-zebroids, one-instantaneous, z-relays) as a function of the prediction accuracy metric with α
= 2 and T = 25.

6.4 What happens to the availability latency with zebroids in scenarios with inaccuracies in the car route predictions?

We find that zebroids continue to provide improvements in availability latency even with lower accuracy in the car route predictions. We use a single parameter p to quantify the accuracy of the car route predictions.

6.4.1 Analysis

Since p represents the probability that a car route predicted by the dispatcher matches the actual one, the latency with zebroids can be approximated by:

$$\delta_{agg}^{err} = p \cdot \delta_{agg}(zeb) + (1-p) \cdot \delta_{agg}(no\text{-}zeb) \quad (9)$$

$$\delta_{agg}^{err} = p \cdot \delta_{agg}(zeb) + (1-p) \cdot \sum_{i=1}^{T} \frac{f_i \cdot C}{r_i} \quad (10)$$

Expressions for $\delta_{agg}(zeb)$ can be obtained from Equation 4 (one-instantaneous) or 5 (z-relay zebroids).

6.4.2 Simulation

Figure 6 shows the variation in δagg as a function of this route prediction accuracy metric. We observe a smooth reduction in the improvement in δagg as the prediction accuracy metric reduces. For zebroids that are scheduled but fail to rendezvous with the client due to the prediction error, we tag any such replacements made by the zebroids as failed. It is seen that failed replacements gradually increase as the prediction accuracy reduces.

6.5 Under what conditions are the improvements in availability latency with zebroids maximized?

Surprisingly, we find that the improvements in latency obtained with one-instantaneous zebroids are independent of the input distribution of the popularity of the data items.

6.5.1 Analysis

The fractional difference (labelled ω) in the latency between the no-zebroids and one-instantaneous zebroids cases is obtained from Equations 2, 3, and 4 as:

$$\omega = \frac{\sum_{i=1}^{T} \frac{f_i \cdot C}{r_i} - \sum_{i=1}^{T} \frac{f_i \cdot C}{r_i + (N-r_i) \cdot \left(1 - \left(1-\frac{1}{G}\right)^{r_i}\right)}}{\sum_{i=1}^{T} \frac{f_i \cdot C}{r_i}} \quad (11)$$

Here $C = c \cdot G \cdot \log G$. This captures the fractional improvement in the availability latency obtained by employing one-instantaneous
zebroids. Let α = 1, making the total storage in the system ST = N. Assuming the initial replica distribution is as per the square-root replication scheme, we have $r_i = \frac{\sqrt{f_i} \cdot N}{\sum_{j=1}^{T} \sqrt{f_j}}$. Hence, we get $f_i = \frac{K^2 \cdot r_i^2}{N^2}$, where $K = \sum_{j=1}^{T} \sqrt{f_j}$. Using this, along with the approximation $(1-x)^n \approx 1 - n \cdot x$ for small x, we simplify the above equation to get:

$$\omega = 1 - \frac{\sum_{i=1}^{T} \frac{r_i}{1 + \frac{N-r_i}{G}}}{\sum_{i=1}^{T} r_i}$$

In order to determine when the gains with one-instantaneous zebroids are maximized, we can frame an optimization problem as follows: maximize ω, subject to $\sum_{i=1}^{T} r_i = S_T$.

THEOREM 1. With a square-root replication scheme, improvements obtained with one-instantaneous zebroids are independent of the input popularity distribution of the data items. (See [9] for proof.)

6.5.2 Simulation

We perform simulations with two different frequency distributions of data items: uniform and Zipfian (with mean = 0.27). Similar latency improvements with one-instantaneous zebroids are obtained in both cases. This result has important implications. In cases with popularity biased toward certain data items, the aggregate improvements in latency across all data item requests still remain the same. Even in scenarios where the frequency of access to the data items changes dynamically, zebroids will continue to provide similar latency improvements.

7. CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS

In this study, we examined the improvements in latency that can be obtained in the presence of carriers that deliver a data item from a server to a client. We quantified the variation in availability latency as a function of a rich set of parameters such as car density, storage per car, title database size, and replacement policies employed by zebroids. Below we summarize some key future research directions we intend to pursue. To better reflect reality, we would like to validate the observations obtained from this study with some real-world simulation
traces of vehicular movements (for example, using CORSIM [1]). This will also serve as a validation for the utility of the Markov mobility model used in this study. We are currently analyzing the performance of zebroids on a real-world data set comprising an ad-hoc network of buses moving around a small neighborhood in Amherst [4]. Zebroids may also be used for delivery of data items that carry delay-sensitive information with a certain expiry. Extensions to zebroids that satisfy such application requirements present an interesting future research direction.

8. ACKNOWLEDGMENTS

This research was supported in part by an Annenberg fellowship and NSF grants numbered CNS-0435505 (NeTS NOSS), CNS-0347621 (CAREER), and IIS-0307908.

9. REFERENCES

[1] Federal Highway Administration. Corridor simulation. Version 5.1, http://www.ops.fhwa.dot.gov/trafficanalysistools/corsim.htm.
[2] D. Aldous and J. Fill. Reversible Markov Chains and Random Walks on Graphs. Under preparation.
[3] A. Bar-Noy, I. Kessler, and M. Sidi. Mobile Users: To Update or Not to Update. In IEEE Infocom, 1994.
[4] J. Burgess, B. Gallagher, D. Jensen, and B. Levine. MaxProp: Routing for Vehicle-Based Disruption-Tolerant Networking. In IEEE Infocom, April 2006.
[5] E. Cohen and S. Shenker. Replication Strategies in Unstructured Peer-to-Peer Networks. In SIGCOMM, 2002.
[6] A. Dan, D. Dias, R. Mukherjee, D. Sitaram, and R. Tewari. Buffering and Caching in Large-Scale Video Servers. In COMPCON, 1995.
[7] S. Ghandeharizadeh, S. Kapadia, and B. Krishnamachari. PAVAN: A Policy Framework for Content Availability in Vehicular ad-hoc Networks. In VANET, New York, NY, USA, 2004. ACM Press.
[8] S. Ghandeharizadeh, S. Kapadia, and B. Krishnamachari. Comparison of Replication Strategies for Content Availability in C2P2 networks. In MDM, May 2005.
[9] S. Ghandeharizadeh, S. Kapadia, and B.
Krishnamachari. An Evaluation of Availability Latency in Carrier-based Vehicular ad-hoc Networks. Technical report, Department of Computer Science, University of Southern California, CENG-2006-1, 2006.
[10] S. Ghandeharizadeh and B. Krishnamachari. C2P2: A peer-to-peer network for on-demand automobile information services. In Globe. IEEE, 2004.
[11] T. Hara. Effective Replica Allocation in ad-hoc Networks for Improving Data Accessibility. In IEEE Infocom, 2001.
[12] H. Hayashi, T. Hara, and S. Nishio. A Replica Allocation Method Adapting to Topology Changes in ad-hoc Networks. In DEXA, 2005.
[13] P. Juang, H. Oki, Y. Wang, M. Martonosi, L. Peh, and D. Rubenstein. Energy-efficient computing for wildlife tracking: design tradeoffs and early experiences with ZebraNet. SIGARCH Comput. Archit. News, 2002.
[14] A. Pentland, R. Fletcher, and A. Hasson. DakNet: Rethinking Connectivity in Developing Nations. Computer, 37(1):78-83, 2004.
[15] F. Sailhan and V. Issarny. Cooperative Caching in ad-hoc Networks. In MDM, 2003.
[16] R. Shah, S. Roy, S. Jain, and W. Brunette. Data MULEs: Modeling and analysis of a three-tier architecture for sparse sensor networks. Elsevier ad-hoc Networks Journal, 1, September 2003.
[17] T. Spyropoulos, K. Psounis, and C. Raghavendra. Single-Copy Routing in Intermittently Connected Mobile Networks. In SECON, April 2004.
[18] A. Tanenbaum. Modern Operating Systems, 2nd Edition, Chapter 4, Section 4.4. Prentice Hall, 2001.
[19] A. Vahdat and D. Becker. Epidemic routing for partially-connected ad-hoc networks. Technical report, Department of Computer Science, Duke University, 2000.
[20] W. Zhao, M. Ammar, and E. Zegura. A message ferrying approach for data delivery in sparse mobile ad-hoc networks. In MobiHoc, pages 187-198, New York, NY, USA, 2004. ACM Press.
[21] M. Zonoozi and P.
Dassanayake. User Mobility Modeling and Characterization of Mobility Pattern. IEEE Journal on Selected Areas in Communications, 15:1239-1252, September 1997.

An Evaluation of Availability Latency in Carrier-based Vehicular Ad-Hoc Networks

Shahram

ABSTRACT

On-demand delivery of audio and video clips in peer-to-peer vehicular ad-hoc networks is an emerging area of research. Our target environment uses data carriers, termed zebroids, where a mobile device carries a data item on behalf of a server to a client, thereby minimizing its availability latency. In this study, we quantify the variation in availability latency with zebroids as a function of a rich set of parameters such as car density, storage per device, repository size, and replacement policies employed by zebroids. Using analysis and extensive simulations, we gain novel insights into the design of carrier-based systems. Significant improvements in latency can be obtained with zebroids at the cost of a minimal overhead. These improvements occur even in scenarios with lower accuracy in the predictions of the car routes. Two particularly surprising findings are: (1) a naive random replacement policy employed by the zebroids shows competitive performance, and (2) latency improvements obtained with a simplified instantiation of zebroids are found to be robust to changes in the popularity distribution of the data items.

Categories and Subject Descriptors: C.2.4 [Distributed Systems]: Client/Server

1. INTRODUCTION

Technological advances in areas of storage and wireless communications have now made it feasible to envision on-demand delivery of data items, e.g., video and audio clips, in vehicular peer-to-peer networks. In prior work, Ghandeharizadeh et al.
[10] introduce the concept of vehicles equipped with a Car-to-Car Peer-to-Peer device, termed AutoMata, for in-vehicle entertainment. The notable features of an AutoMata include a mass storage device offering hundreds of gigabytes (GB) of storage, a fast processor, and several types of networking cards. Even with today's 500 GB disk drives, a repository of diverse entertainment content may exceed the storage capacity of a single AutoMata. Such repositories constitute the focus of this study. To exchange data, we assume each AutoMata is configured with two types of networking cards: 1) a low-bandwidth networking card with a long radio-range on the order of miles that enables an AutoMata device to communicate with a nearby cellular or WiMax station; 2) a high-bandwidth networking card with a limited radio-range on the order of hundreds of feet. The high-bandwidth connection supports data rates on the order of tens to hundreds of Megabits per second and represents the ad hoc peer-to-peer network between the vehicles. This is labelled the data plane and is employed to exchange data items between devices. The low-bandwidth connection serves as the control plane, enabling AutoMata devices to exchange meta-data with one or more centralized servers. This connection offers bandwidths on the order of tens to hundreds of Kilobits per second. The centralized servers, termed dispatchers, compute schedules of data delivery along the data plane using the provided meta-data. These schedules are transmitted to the participating vehicles using the control plane. The technical feasibility of such a two-tier architecture is presented in [7], with preliminary results demonstrating that the bandwidth of the control plane is sufficient for the exchange of control information needed to realize such an application. In a typical scenario, an AutoMata device presents a passenger with a list of data items, showing both the name of each data item and its availability latency. The
latter, denoted as δ, is defined as the earliest time at which the client encounters a copy of its requested data item. A data item is available immediately when it resides in the local storage of the AutoMata device serving the request. Due to storage constraints, an AutoMata may not store the entire repository. In this case, availability latency is the time from when the user issues a request until the AutoMata encounters another car containing the referenced data item. (The terms car and AutoMata are used interchangeably in this study.) The availability latency for an item is a function of the current location of the client, its destination and travel path, the mobility model of the AutoMata-equipped vehicles, the number of replicas constructed for the different data items, and the placement of data item replicas across the vehicles. A method to improve the availability latency is to employ data carriers that transport a replica of the requested data item from a server car containing it to the client that requested it. These data carriers are termed 'zebroids'. Selection of zebroids is facilitated by the two-tiered architecture. The control plane enables centralized information gathering at a dispatcher present at a base-station. Some examples of control information are the currently active requests, the travel paths of the clients and their destinations, and the paths of the other cars. For each client request, the dispatcher may choose a set of z carriers that collaborate to transfer a data item from a server to a client (z-relay zebroids). Here, z is the number of zebroids such that 0 ≤ z < N, where N is the total number of cars. When z = 0 there are no carriers, requiring a server to deliver the data item directly to the client. Otherwise, the chosen relay team of z zebroids hands over the data item transitively from one to another to arrive at the client, thereby reducing availability latency (see Section 3.1 for details). To increase
robustness, the dispatcher may employ multiple relay teams of z carriers for every request. This may be useful in scenarios where the dispatcher has lower prediction accuracy in the information about the routes of the cars. Finally, storage constraints may require a zebroid to evict existing data items from its local storage to accommodate the client-requested item. In this study, we quantify the following main factors that affect availability latency in the presence of zebroids: (i) data item repository size, (ii) car density, (iii) storage capacity per car, (iv) client trip duration, (v) replacement scheme employed by the zebroids, and (vi) accuracy of the car route predictions. For a significant subset of these factors, we address some key questions pertaining to the use of zebroids both via analysis and extensive simulations. Our main findings are as follows. A naive random replacement policy employed by the zebroids shows competitive performance in terms of availability latency. With such a policy, substantial improvements in latency can be obtained with zebroids at a minimal replacement overhead. In more practical scenarios, where the dispatcher has inaccurate information about the car routes, zebroids continue to provide latency improvements. A surprising result is that changes in the popularity of the data items do not impact the latency gains obtained with a simple instantiation of z-relay zebroids called one-instantaneous zebroids (see Section 3.1). This study suggests a number of interesting directions to be pursued to gain a better understanding of the design of carrier-based systems that improve availability latency.

Related Work: Replication in mobile ad-hoc networks has been a widely studied topic [11, 12, 15]. However, none of these studies employ zebroids as data carriers to reduce the latency of the client's requests. Several novel and important studies such as ZebraNet [13], DakNet [14], Data Mules [16], Message Ferries [20], and Seek and Focus
[17] have analyzed factors impacting intermittently connected networks consisting of data carriers similar in spirit to zebroids. The factors considered by each study are dictated by its assumed environment and target application. A novel characteristic of our study is the impact on availability latency for a given database repository of items. A detailed description of related work can be found in [9].

The rest of this paper is organized as follows. Section 2 provides an overview of the terminology along with the factors that impact availability latency in the presence of zebroids. Section 3 describes how the zebroids may be employed. Section 4 provides details of the analysis methodology employed to capture the performance with zebroids. Section 5 describes the details of the simulation environment used for evaluation. Section 6 lists the key questions examined in this study and answers them via analysis and simulations. Finally, Section 7 presents brief conclusions and future research directions.

2. OVERVIEW AND TERMINOLOGY

Table 1 summarizes the notation of the parameters used in the paper. Below we introduce some terminology used in the paper. Assume a network of N AutoMata-equipped cars, each with storage capacity of α bytes. The total storage capacity of the system is ST = N · α. There are T data items in the database, each with size Si.

Table 1: Terms and their definitions

The frequency of access to data item i is denoted as fi. The replication weight of item i is Ri = fi^n / Σ_{j=1}^{T} fj^n, where the exponent n characterizes a particular replication technique. A square-root replication scheme is realized when n = 0.5 [5]. This serves as the base-line for comparison with the case when zebroids are deployed. Ri is normalized to a value between 0 and 1. The number of replicas for data item i, denoted as ri, is: ri = min(N, max(1, ⌊Ri · N · α / Si⌋)). This captures the case when at least one copy of every data item must be present in the ad-hoc network at all
times. In cases where a data item may be lost from the ad-hoc network, this equation becomes ri = min(N, max(0, ⌊Ri · N · α / Si⌋)). In this case, a request for the lost data item may need to be satisfied by fetching the item from a remote server. The availability latency for a data item i, denoted as δi, is defined as the earliest time at which a client AutoMata will find the first replica of the item accessible to it. If this condition is not satisfied, then we set δi to γ. This indicates that data item i was not available to the client during its journey. Note that since there is at least one replica in the system for every data item i, by setting γ to a large value we ensure that the client's request for any data item i will be satisfied. However, in most practical circumstances γ may not be so large as to find every data item. We are interested in the availability latency observed across all data items. Hence, we weight the average availability latency of every data item i by its fi to obtain the following weighted availability latency (δagg) metric: δagg = Σ_{i=1}^{T} δi · fi. Next, we present our solution approach describing how zebroids are selected.
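The replica-allocation rule and the weighted latency metric above can be sketched in a few lines of Python. This is an illustration only: the values of N, α, the item sizes, and the access frequencies are hypothetical and do not come from the paper's experiments.

```python
import math

def num_replicas(freqs, sizes, N, alpha, n=0.5, min_copies=1):
    """r_i = min(N, max(min_copies, floor(R_i * N * alpha / S_i))),
    with R_i = f_i^n / sum_j f_j^n (square-root replication when n = 0.5)."""
    total = sum(f ** n for f in freqs)
    R = [f ** n / total for f in freqs]  # normalized weights in (0, 1]
    return [min(N, max(min_copies, math.floor(Ri * N * alpha / Si)))
            for Ri, Si in zip(R, sizes)]

def weighted_latency(deltas, freqs):
    """Aggregate availability latency: delta_agg = sum_i delta_i * f_i
    (frequencies assumed to sum to 1)."""
    return sum(d * f for d, f in zip(deltas, freqs))

# Hypothetical example: T = 3 items, N = 10 cars, alpha = 4 storage units.
freqs = [0.6, 0.3, 0.1]   # access frequencies, summing to 1
sizes = [1.0, 1.0, 2.0]   # item sizes S_i in the same units as alpha
r = num_replicas(freqs, sizes, N=10, alpha=4)
```

Note how the min/max clamps enforce the paper's invariant: popular items saturate at N copies, while every item keeps at least `min_copies` replica in the network (set `min_copies=0` for the variant where items may be lost).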
3. SOLUTION APPROACH

3.1 Zebroids

When a client references a data item missing from its local storage, the dispatcher identifies all cars with a copy of the data item as servers. Next, the dispatcher obtains the future routes of all cars for a finite time duration equivalent to the maximum time the client is willing to wait for its request to be serviced. Using this information, the dispatcher schedules the quickest delivery path from any of the servers to the client using any other cars as intermediate carriers. Hence, it determines the optimal set of forwarding decisions that will enable the data item to be delivered to the client in the minimum amount of time. Note that the latency along the quickest delivery path that employs a relay team
of z zebroids is similar to that obtained with epidemic routing [19] under the assumptions of infinite storage and no interference. A simple instantiation of z-relay zebroids occurs when z = 1 and the client's request triggers a transfer of a copy of the requested data item from a server to a zebroid in its vicinity. Such a zebroid is termed a one-instantaneous zebroid. In some cases, the dispatcher might have inaccurate information about the routes of the cars. Hence, a zebroid scheduled on the basis of this inaccurate information may not rendezvous with its target client. To minimize the likelihood of such scenarios, the dispatcher may schedule multiple zebroids. This may incur additional overhead due to redundant resource utilization to obtain the same latency improvements. The time required to transfer a data item from a server to a zebroid depends on its size and the available link bandwidth. With small data items, it is reasonable to assume that this transfer time is small, especially in the presence of the high-bandwidth data plane. Large data items may be divided into smaller chunks, enabling the dispatcher to schedule one or more zebroids to deliver each chunk to a client in a timely manner. This remains a future research direction. Initially, the number of replicas for each data item might be computed using Equation 1. This scheme computes the number of data item replicas as a function of their popularity. It is static because the number of replicas in the system does not change and no replacements are performed. Hence, this is referred to as the 'no-zebroids' environment. We quantify the performance of the various replacement policies with reference to this base-line that does not employ zebroids. One may assume a cold-start phase, where initially only one or a few copies of every data item exist in the system. Many storage slots of the cars may be unoccupied. When the cars encounter one another they construct new replicas of some selected
data items to occupy the empty slots.\nThe selection procedure may be to choose the data items uniformly at random.\nNew replicas are created as long as a car has a certain threshold of its storage unoccupied.\nEventually, the majority of the storage capacity of a car will be exhausted.\n3.2 Carrier-based Replacement Policies\nThe replacement policies considered in this paper are reactive, since a replacement occurs only in response to a request issued for a certain data item.\nWhen the local storage of a zebroid is completely occupied, it needs to replace one of its existing items to carry the client-requested data item.\nFor this purpose, the zebroid must select an appropriate candidate for eviction.\nThis decision process is analogous to that encountered in operating system paging, where the goal is to maximize the cache hit ratio to prevent disk access delay [18].\nThe carrier-based replacement policies employed in our study are Least Recently Used (LRU), Least Frequently Used (LFU) and Random (where an eviction candidate is chosen uniformly at random).\nWe have considered local and global variants of the LRU\/LFU policies, which determine whether local or global knowledge of the contents of the cars known at the dispatcher is used for the eviction decision at a zebroid (see [9] for more details).\nThe replacement policies incur the following overheads.\nFirst, the complexity associated with the implementation of a policy.\nSecond, the bandwidth used to transfer a copy of a data item from a server to the zebroid.\nThird, the average number of replacements incurred by the zebroids.\nNote that in the no-zebroids case none of these overheads is incurred.\nThe metrics considered in this study are the aggregate availability latency, δagg, the percentage improvement in δagg with zebroids as compared to the no-zebroids case, and the average number of replacements incurred per client request, which is an indicator of the overhead incurred by zebroids.\nNote that the dispatchers with the help of
the control plane may ensure that no data item is lost from the system.\nIn other words, at least one replica of every data item is maintained in the ad-hoc network at all times.\nIn such cases, even though a car may meet a requesting client earlier than other servers, if its local storage contains data items with only a single copy in the system, then such a car is not chosen as a zebroid.\n4.\nANALYSIS METHODOLOGY\nHere, we present the analytical evaluation methodology and some approximations as closed-form equations that capture the improvements in availability latency that can be obtained with both one-instantaneous and z-relay zebroids.\nFirst, we present some preliminaries of our analysis methodology.\n\u2022 Let N be the number of cars in the network performing a 2D random walk on a √G × √G torus.\nAn additional car serves as a client, yielding a total of N + 1 cars.\nSuch a mobility model has been used widely in the literature [17, 16], chiefly because it is amenable to analysis and provides a baseline against which the performance of other mobility models can be compared.\nMoreover, this class of Markovian mobility models has been used to model the movements of vehicles [3, 21].\n\u2022 We assume that all cars start from the stationary distribution and perform independent random walks.\nAlthough for sparse density scenarios the independence assumption does hold, it is no longer valid when N approaches G. \u2022 Let the size of the data item repository of interest be T.
Also, data item i has ri replicas.\nThis implies ri cars, identified as servers, have a copy of this data item when the client requests item i.\nAll analysis results presented in this section are obtained assuming that the client is willing to wait as long as it takes for its request to be satisfied (unbounded trip duration γ = ∞).\nWith the random walk mobility model on a 2D-torus, there is a guarantee that as long as there is at least one replica of the requested data item in the network, the client will eventually encounter this replica [2].\nExtensions to the analysis that also consider finite trip durations can be obtained in [9].\nConsider a scenario where no zebroids are employed.\nIn this case, the expected availability latency for the data item is the expected meeting time of the random walk undertaken by the client with any of the random walks performed by the servers.\nAldous et al. [2] show that the meeting time of two random walks in such a setting can be modelled as an exponential distribution with mean C = c · G · log G, where the constant c ≈ 0.17 for G > 25.\nThe meeting time, or equivalently the availability latency δi, for the client requesting data item i is the time till it encounters any of these ri replicas for the first time.\nThis is also an exponential distribution with the following expected value (note that this formulation is valid only for sparse cases when G >> ri): δi = c · G · log G \/ ri.\nThe aggregate availability latency without employing zebroids is then this expression averaged over all data items, weighted by their frequency of access: δagg(no-zebroids) = Σ_{i=1}^{T} fi · c · G · log G \/ ri.\n4.1 One-instantaneous zebroids\nRecall that with one-instantaneous zebroids, for a given request, a new replica is created on a car in the vicinity of the server, provided this car meets the client earlier than any of the ri servers.\nMoreover, this replica is spawned at the time step when the client issues the request.\nLet Nic be the expected total number of nodes that are in the
same cell as any of the ri servers.\nIn the analytical model, we assume that Nic new replicas are created, so that the total number of replicas is increased to ri + Nic.\nThe availability latency is reduced since the client is more likely to meet a replica earlier.\nThe aggregated expected availability latency in the case of one-instantaneous zebroids is then given by δagg(one-instantaneous) = Σ_{i=1}^{T} fi · c · G · log G \/ (ri + Nic).\nNote that in obtaining this expression, for ease of analysis, we have assumed that the new replicas start from random locations in the torus (not necessarily from the same cell as the original ri servers).\nIt thus treats all the Nic carriers independently, just like the ri original servers.\nAs we shall show below by comparison with simulations, this approximation provides an upper bound on the improvements that can be obtained, because it results in a lower expected latency at the client.\nIt should be noted that the procedure listed above will yield a latency similar to that employed by a dispatcher employing one-instantaneous zebroids (see Section 3.1).\nSince the dispatcher is aware of all future car movements, it would only transfer the requested data item on a single zebroid if it determines that the zebroid will meet the client earlier than any other server.\nThis selected zebroid is included in the Nic new replicas.\n4.2 z-relay zebroids\nTo calculate the expected availability latency with z-relay zebroids, we use a coloring problem analog similar to an approach used by Spyropoulos et al.
[17].\nDetails of the procedure to obtain a closed-form expression for the aggregate availability latency (δagg) with z-relay zebroids are given in [9].\n5.\nSIMULATION METHODOLOGY\nThe simulation environment considered in this study comprises vehicles such as cars that carry a fraction of the data item repository.\nA prediction accuracy parameter inherently provides a certain probabilistic guarantee on the confidence of the car route predictions known at the dispatcher.\nA value of 100% implies that the exact routes of all cars are known at all times.\nA 70% value for this parameter indicates that the routes predicted for the cars will match the actual ones with probability 0.7.\nNote that this probability is spread across the car routes for the entire trip duration.\nWe now provide the preliminaries of the simulation study and then describe the parameter settings used in our experiments.\n\u2022 Similar to the analysis methodology, the map used is a 2D torus.\nA Markov mobility model representing an unbiased 2D random walk on the surface of the torus describes the movement of the cars across this torus.\n\u2022 Each grid\/cell is a unique state of this Markov chain.\nIn each time slot, every car makes a transition from a cell to any of its neighboring 8 cells.\nThe transition is a function of the current location of the car and a probability transition matrix Q = [qij], where qij is the probability of transition from state i to state j.
Only AutoMata-equipped cars within the same cell may communicate with each other.\n\u2022 The parameters γ and δ have been discretized and expressed in terms of the number of time slots.\n\u2022 An AutoMata device does not maintain more than one replica of a data item.\nThis is because additional replicas occupy storage without providing benefits.\n\u2022 Either one-instantaneous or z-relay zebroids may be employed per client request for latency improvement.\n\u2022 Unless otherwise mentioned, the prediction accuracy parameter is assumed to be 100%.\nThis is because this study aims to quantify the effect of a large number of parameters individually on availability latency.\nHere, we set the size of every data item, Si, to be 1.\nα represents the number of storage slots per AutoMata.\nEach storage slot stores one data item.\nγ represents the duration of the client's journey in terms of the number of time slots.\nHence the possible values of availability latency are between 0 and γ.
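The torus mobility model and the latency measurement described above can be sketched as follows. This is a minimal illustrative simulation under the stated settings (5 × 5 torus, γ = 10, 8-neighbor unbiased random walk); the function names and setup are our own, not the authors' simulator:

```python
import random

G_SIDE = 5   # a 5 x 5 2D-torus, as in the simulations
GAMMA = 10   # trip duration (gamma), in time slots

# In each time slot a car moves to one of its 8 neighboring cells
# (unbiased 2D random walk on the torus).
MOVES = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def step(cell):
    """One transition of the Markov mobility model, wrapping around the torus."""
    dx, dy = random.choice(MOVES)
    return ((cell[0] + dx) % G_SIDE, (cell[1] + dy) % G_SIDE)

def availability_latency(client, servers, seed=0):
    """Smallest delta at which the client shares a cell with some replica;
    delta = 0 if a replica is in the client's first cell, and delta = GAMMA
    if no replica is encountered during the whole journey."""
    random.seed(seed)
    for delta in range(GAMMA):
        if any(client == s for s in servers):
            return delta
        client = step(client)
        servers = [step(s) for s in servers]
    return GAMMA
```

Averaging such δ values over requests, weighted by the fi, yields the δagg metric introduced earlier.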
δ is defined as the number of time slots after which a client AutoMata device will encounter a replica of the data item for the first time.\nIf a replica for the requested data item was encountered by the client in the first cell, then we set δ = 0.\nIf δ > γ, then we set δ = γ, indicating that no copy of the requested data item was encountered by the client during its entire journey.\nIn all our simulations, for illustration we consider a 5 × 5 2D-torus with γ set to 10.\nOur experiments indicate that the trends in the results scale to maps of larger size.\nWe simulated a skewed distribution of access to the T data items that obeys Zipf's law with a mean of 0.27.\nThis distribution is shown to correspond to the sale of movie theater tickets in the United States [6].\nWe employ a replication scheme that allocates replicas for a data item as a function of the square-root of the frequency of access of that item.\nThe square-root replication scheme is shown to have competitive latency performance over a large parameter space [8].\nThe data item replicas are distributed uniformly across the AutoMata devices.\nThis serves as the baseline no-zebroids case.\nThe square-root scheme also provides the initial replica distribution when zebroids are employed.\nNote that the replacements performed by the zebroids will cause changes to the data item replica distribution.\nRequests generated as per the Zipf distribution are issued one at a time.\nThe client car that issues the request is chosen in a round-robin manner.\nAfter a maximum period of γ, the latency encountered by this request is recorded.\nIn all the simulation results, each point is an average of 200,000 requests.\nMoreover, the 95% confidence intervals determined for the results are quite tight for the metrics of latency and replacement overhead.\nHence, we only present them for the metric that captures the percentage improvement in latency with respect to the no-zebroids
case.\n6.\nRESULTS\nIn this section, we describe our evaluation results, where the following key questions are addressed.\nWith a wide choice of replacement schemes available for a zebroid, what is their effect on availability latency?\nA more central question is: Do zebroids provide significant improvements in availability latency?\nWhat is the associated overhead incurred in employing these zebroids?\nWhat happens to these improvements in scenarios where a dispatcher may have imperfect information about the car routes?\nWhat inherent trade-offs exist between car density and storage per car with regards to their combined as well as individual effect on availability latency in the presence of zebroids?\nFigure 1: The availability latency when employing one-instantaneous zebroids as a function of (N, α) values, when the total storage in the system is kept fixed, ST = 200.\nWe present both simple analysis and detailed simulations to provide answers to these questions as well as gain insights into the design of carrier-based systems.\n6.1 How does a replacement scheme employed by a zebroid impact availability latency?\nFor illustration, we present 'scale-up' experiments where one-instantaneous zebroids are employed (see Figure 1).\nBy scale-up, we mean that α and N are changed proportionally to keep the total system storage, ST, constant.\nHere, T = 50 and ST = 200.\nWe choose the following values of (N, α) = {(20,10), (25,8), (50,4), (100,2)}.\nThe figure indicates that a random replacement scheme shows competitive performance.\nThis is because of several reasons.\nRecall that the initial replica distribution is set as per the square-root replication scheme.\nThe random replacement scheme does not alter this distribution since it makes replacements blind to the popularity of a data item.\nHowever, the replacements cause dynamic data re-organization so as to better serve the currently active request.\nMoreover, the mobility pattern of the
cars is random; hence, the locations from which the requests are issued by clients are also random and not known a priori at the dispatcher.\nThese findings are significant because a random replacement policy can be implemented in a simple decentralized manner.\nThe lru-global and lfu-global schemes provide a latency performance that is worse than random.\nThis is because these policies rapidly develop a preference for the more popular data items, thereby creating a larger number of replicas for them.\nDuring eviction, the more popular data items are almost never selected as a replacement candidate.\nConsequently, there are fewer replicas for the less popular items.\nHence, the initial distribution of the data item replicas changes from square-root to one resembling linear replication.\nThe higher number of replicas for the popular data items provides marginal additional benefits, while the lower number of replicas for the other data items hurts the latency performance of these global policies.\nThe lfu-local and lru-local schemes have similar performance to random since they do not have enough history of local data item requests.\nWe speculate that the performance of these local policies will approach that of their global variants for a large enough history of data item requests.\nOn account of the competitive performance shown by a random policy, for the remainder of the paper we present the performance of zebroids that employ a random replacement policy.\n6.2 Do zebroids provide significant improvements in availability latency?\nWe find that in many scenarios employing zebroids provides substantial improvements in availability latency.\n6.2.1 Analysis\nWe first consider the case of one-instantaneous zebroids.\nFigure 2a shows the variation in δagg as a function of N for T = 10 and α = 1 with a 10 × 10 torus using Equation 4.\nBoth the x and y axes are drawn to a log-scale.\nFigure 2b shows the % improvement in δagg obtained with one-instantaneous
zebroids.\nIn this case, only the x-axis is drawn to a log-scale.\nFor illustration, we assume that the T data items are requested uniformly.\nInitially, when the network is sparse, the analytical approximation for improvements in latency with zebroids, obtained from Equations 2 and 4, closely matches the simulation results.\nHowever, as N increases, the sparseness assumption for which the analysis is valid, namely N << G, is no longer true.\nHence, the two curves rapidly diverge.\nThe point at which the two curves move away from each other corresponds to a value of δagg ≤ 1.\nMoreover, as mentioned earlier, the analysis provides an upper bound on the latency improvements, as it treats the newly created replicas given by Nic independently.\nHowever, these Nic replicas start from the same cell as one of the server replicas ri.\nFinally, the analysis captures a one-shot scenario where, given an initial data item replica distribution, the availability latency is computed.\nThe new replicas created do not affect future requests from the client.\nOn account of space limitations, here we summarize the observations in the case when z-relay zebroids are employed.\nThe interested reader can obtain further details in [9].\nObservations similar to the one-instantaneous zebroid case apply, since the simulation and analysis curves again start diverging when the analysis assumptions are no longer valid.\nHowever, the key observation here is that the latency improvement with z-relay zebroids is significantly better than in the one-instantaneous zebroids case, especially for lower storage scenarios.\nThis is because in sparse scenarios, the transitive hand-offs between the zebroids create a higher number of replicas for the requested data item, yielding lower availability latency.\nMoreover, it is also seen that the simulation validation curve for the improvements in δagg with z-relay zebroids approaches that of the one-instantaneous zebroid case for higher storage (higher N
values).\nThis is because one-instantaneous zebroids are a special case of z-relay zebroids.\nFigure 2: The latency performance with one-instantaneous zebroids via simulations along with the analytical approximation for a 10 × 10 torus with T = 10 (2.a: δagg as a function of the number of cars; 2.b: ω).\n6.2.2 Simulation\nWe conduct simulations to examine the entire storage spectrum obtained by changing car density N or storage per car α, to also capture scenarios where the sparseness assumptions for which the analysis is valid do not hold.\nWe separate the effects of N and α by capturing the variation of N while keeping α constant (case 1) and vice-versa (case 2), both with z-relay and one-instantaneous zebroids.\nHere, we set the repository size as T = 25.\nFigure 3 captures case 1 mentioned above.\nSimilar trends are observed with case 2; a complete description of those results is available in [9].\nWith Figure 3b, keeping α constant, initially increasing car density has higher latency benefits because increasing N introduces more zebroids in the system.\nAs N is further increased, ω reduces because the total storage in the system goes up.\nConsequently, the number of replicas per data item goes up, thereby increasing the number of servers.\nHence, the replacement policy cannot find a zebroid as often to transport the requested data item to the client earlier than any of the servers.\nOn the other hand, the increased number of servers benefits the no-zebroids case in bringing δagg down.\nThe net effect results in a reduction in ω for larger values of N.\nThe trends mentioned above are similar to those obtained from the analysis.\nHowever, somewhat counter-intuitively, with relatively higher system storage, z-relay zebroids provide slightly lower improvements in latency as compared to one-instantaneous zebroids.\nWe speculate that this is due to the different data item replica distributions enforced by them.\nNote that replacements performed by the
zebroids cause fluctuations in these replica distributions, which may affect future client requests.\nWe are currently exploring suitable choices of parameters that can capture these changing replica distributions.\nFigure 3: The latency performance with both one-instantaneous and z-relay zebroids as a function of the car density when α = 2 and T = 25.\n6.3 What is the overhead incurred with improvements in latency with zebroids?\nWe find that the improvements in latency with zebroids are obtained at a minimal replacement overhead (< 1 per client request).\n6.3.1 Analysis\nWith one-instantaneous zebroids, for each client request a maximum of one zebroid is employed for latency improvement.\nHence, the replacement overhead per client request can amount to a maximum of one.\nRecall that to calculate the latency with one-instantaneous zebroids, Nic new replicas are created in the same cell as the servers.\nNow a replacement is only incurred if one of these Nic newly created replicas meets the client earlier than any of the ri servers.\nLet Xri and XNic respectively be random variables that capture the minimum time till any of the ri and Nic replicas meet the client.\nSince Xri and XNic are assumed to be independent, by the property of exponentially distributed random variables we have P(XNic < Xri) = Nic \/ (ri + Nic).\nRecall that the number of replicas for data item i, ri, is a function of the total storage in the system, i.e., ri = k · N · α, where k satisfies the constraint 1 ≤ ri ≤ N.
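The exponential race underlying this overhead estimate can be checked numerically. The following Monte Carlo sketch (our own illustration, with arbitrary parameter values) confirms that the probability of a replacement equals Nic \/ (ri + Nic) when all meeting times are independent exponentials with the same mean:

```python
import random

def replacement_probability(r_i, n_ic, mean_c=100.0, trials=200_000, seed=1):
    """Estimate the probability that the earliest of n_ic zebroid meeting
    times beats the earliest of r_i server meeting times, when every
    pairwise meeting time is exponential with mean mean_c."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        x_servers = min(random.expovariate(1.0 / mean_c) for _ in range(r_i))
        x_zebroids = min(random.expovariate(1.0 / mean_c) for _ in range(n_ic))
        hits += x_zebroids < x_servers
    return hits / trials

# By the memoryless property, the exact value is n_ic / (r_i + n_ic),
# independent of the common mean mean_c.
est = replacement_probability(r_i=3, n_ic=2)
exact = 2 / (3 + 2)
```

The estimate converges to the closed-form value, which is why the per-request overhead stays below one.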
Using this along with Equation 2, we can express the expected replacement overhead per request in terms of N and α.\nNow if we keep the total system storage N · α constant, since G and T are also constant, increasing N increases the replacement overhead.\nHowever, if N · α is constant, then increasing N causes α to go down.\nThis implies that a higher replacement overhead is incurred for higher N and lower α values.\nMoreover, when ri = N, this means that every car has a replica of data item i. Hence, no zebroids are employed when this item is requested, yielding an overhead\/request for this item of zero.\nNext, we present simulation results that validate our analysis hypothesis for the overhead associated with the deployment of one-instantaneous zebroids.\nFigure 4: Replacement overhead when employing one-instantaneous zebroids as a function of (N, α) values, when the total storage in the system is kept fixed, ST = 200.\n6.3.2 Simulation\nFigure 4 shows the replacement overhead with one-instantaneous zebroids when (N, α) are varied while keeping the total system storage constant.\nThe trends shown by the simulation are in agreement with those predicted by the analysis above.\nHowever, the total system storage can be changed either by varying car density (N) or storage per car (α).\nOn account of similar trends, here we present the case when α is kept constant and N is varied (Figure 5).\nWe refer the reader to [9] for the case when α is varied and N is held constant.\nWe present an intuitive argument for the behavior of the per-request replacement overhead curves.\nWhen the storage is extremely scarce, so that only one replica per data item exists in the AutoMata network, the number of replacements performed by the zebroids is zero, since any replacement would cause a data item to be lost from the system.\nThe dispatcher ensures that no data item is lost from the system.\nAt the other end of the spectrum, if storage becomes so abundant that α = T, then the entire data
item repository can be replicated on every car.\nThe number of replacements is again zero since each request can be satisfied locally.\nA similar scenario occurs if N is increased to such a large value that another car with the requested data item is always available in the vicinity of the client.\nHowever, there is a storage spectrum in the middle where replacements by the scheduled zebroids result in improvements in δagg (see Figure 3).\nMoreover, we observe that for sparse storage scenarios, the higher improvements with z-relay zebroids are obtained at the cost of a higher replacement overhead when compared to the one-instantaneous zebroids case.\nThis is because in the former case, each of the z zebroids selected along the lowest latency path to the client needs to perform a replacement.\nHowever, the replacement overhead is still less than 1, indicating that on average less than one replacement per client request is needed even when z-relay zebroids are employed.\nFigure 5: The replacement overhead with zebroids for the cases when N is varied keeping α = 2.\nFigure 6: δagg for different car densities as a function of the prediction accuracy metric with α = 2 and T = 25.\n6.4 What happens to the availability latency with zebroids in scenarios with inaccuracies in the car route predictions?\nWe find that zebroids continue to provide improvements in availability latency even with lower accuracy in the car route predictions.\nWe use a single parameter p to quantify the accuracy of the car route predictions.\n6.4.1 Analysis\nSince p represents the probability that a car route predicted by the dispatcher matches the actual one, the latency with zebroids can be approximated by δagg ≈ p · δagg(zeb) + (1 − p) · δagg(no-zebroids).\nExpressions for δagg(zeb) can be obtained from Equations 4 (one-instantaneous) or 5 (z-relay zebroids).\n6.4.2 Simulation\nFigure 6 shows the variation in δagg as a function of this route prediction accuracy metric.\nWe observe a smooth
reduction in the improvement in δagg as the prediction accuracy metric reduces.\nFor zebroids that are scheduled but fail to rendezvous with the client due to the prediction error, we tag any such replacements made by the zebroids as failed.\nIt is seen that failed replacements gradually increase as the prediction accuracy reduces.\n6.5 Under what conditions are the improvements in availability latency with zebroids maximized?\nSurprisingly, we find that the improvements in latency obtained with one-instantaneous zebroids are independent of the input distribution of the popularity of the data items.\n6.5.1 Analysis\nThe fractional difference (labelled ω) in the latency between the no-zebroids and one-instantaneous zebroids cases is obtained from Equations 2, 3, and 4 as ω = (δagg(no-zebroids) − δagg(one-instantaneous)) \/ δagg(no-zebroids).\nHere C = c · G · log G.\nThis captures the fractional improvement in the availability latency obtained by employing one-instantaneous zebroids.\nLet α = 1, making the total storage in the system ST = N.
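The square-root allocation assumed in this analysis can be sketched concretely. This is an illustrative computation with hypothetical frequencies; rounding to integer replica counts is our own simplification:

```python
import math

def sqrt_replication(freqs, total_storage):
    """Allocate r_i proportional to sqrt(f_i), scaled so that the replica
    counts sum (after rounding) to roughly the total storage S_T."""
    weights = [math.sqrt(f) for f in freqs]
    scale = total_storage / sum(weights)
    return [max(1, round(scale * w)) for w in weights]

def aggregate_latency(freqs, replicas, c=0.17, g=100):
    """No-zebroids aggregate latency: sum_i f_i * C / r_i with C = c*G*log G."""
    big_c = c * g * math.log(g)
    return sum(f * big_c / r for f, r in zip(freqs, replicas))

# Zipf-like access frequencies for T = 4 items, normalized to sum to 1.
raw = [1.0 / (i + 1) for i in range(4)]
freqs = [x / sum(raw) for x in raw]
replicas = sqrt_replication(freqs, total_storage=20)
```

More popular items receive more replicas, but only in proportion to the square-root of their frequency, which is what makes the improvement ω insensitive to the popularity distribution.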
Assuming the initial replica distribution is as per the square-root replication scheme, we have ri = ST · √fi \/ Σ_{j=1}^{T} √fj.\nSubstituting this into the above equation yields an expression for ω in terms of the ri.\nIn order to determine when the gains with one-instantaneous zebroids are maximized, we can frame an optimization problem as follows: maximize ω, subject to Σ_{i=1}^{T} ri = ST.\nTHEOREM 1.\nWith a square-root replication scheme, improvements obtained with one-instantaneous zebroids are independent of the input popularity distribution of the data items.\n(See [9] for proof)\n6.5.2 Simulation\nWe perform simulations with two different frequency distributions of data items: Uniform and Zipfian (with mean = 0.27).\nSimilar latency improvements with one-instantaneous zebroids are obtained in both cases.\nThis result has important implications.\nIn cases with popularity biased toward certain data items, the aggregate improvements in latency across all data item requests still remain the same.\nEven in scenarios where the frequency of access to the data items changes dynamically, zebroids will continue to provide similar latency improvements.\n7.\nCONCLUSIONS AND FUTURE RESEARCH DIRECTIONS\nIn this study, we examined the improvements in latency that can be obtained in the presence of carriers that deliver a data item from a server to a client.\nWe quantified the variation in availability latency as a function of a rich set of parameters such as car density, storage per car, title database size, and replacement policies employed by zebroids.\nBelow we summarize some key future research directions we intend to pursue.\nTo better reflect reality, we would like to validate the observations obtained from this study with real-world simulation traces of vehicular movements (for example, using CORSIM [1]).\nThis will also serve as a validation of the utility of the Markov mobility model used in this study.\nWe are currently analyzing the performance of zebroids on a real-world data set comprising an
ad-hoc network of buses moving around a small neighborhood in Amherst [4].\nZebroids may also be used for delivery of data items that carry delay sensitive information with a certain expiry.\nExtensions to zebroids that satisfy such application requirements presents an interesting future research direction.","keyphrases":["avail latenc","latenc","audio and video clip","data carrier","term zebroid","zebroid","mobil devic","mobil","car densiti","storag per devic","repositori size","replac polici","naiv random replac polici","peer-to-peer vehicular ad-hoc network","zebroid simplifi instanti","vehicular network","automaton","markov model"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M","R","R","U","U"]} {"id":"J-58","title":"Towards Truthful Mechanisms for Binary Demand Games: A General Framework","abstract":"The family of Vickrey-Clarke-Groves (VCG) mechanisms is arguably the most celebrated achievement in truthful mechanism design. However, VCG mechanisms have their limitations. They only apply to optimization problems with a utilitarian (or affine) objective function, and their output should optimize the objective function. For many optimization problems, finding the optimal output is computationally intractable. If we apply VCG mechanisms to polynomial-time algorithms that approximate the optimal solution, the resulting mechanisms may no longer be truthful. In light of these limitations, it is useful to study whether we can design a truthful non-VCG payment scheme that is computationally tractable for a given allocation rule O. In this paper, we focus our attention on binary demand games in which the agents' only available actions are to take part in the a game or not to. For these problems, we prove that a truthful mechanism M = (O, P) exists with a proper payment method P iff the allocation rule O satisfies a certain monotonicity property. We provide a general framework to design such P. 
We further propose several general composition-based techniques to compute P efficiently for various types of output.\nIn particular, we show how P can be computed through or\/and combinations, round-based combinations, and some more complex combinations of the outputs from subgames.","lvl-1":"Towards Truthful Mechanisms for Binary Demand Games: A General Framework Ming-Yang Kao \u2217 Dept. of Computer Science Northwestern University Evanston, IL, USA kao@cs.northwestern.edu Xiang-Yang Li \u2020 Dept. of Computer Science Illinois Institute of Technology Chicago, IL, USA xli@cs.iit.edu WeiZhao Wang Dept. of Computer Science Illinois Institute of Technology Chicago, IL, USA wangwei4@iit.edu ABSTRACT The family of Vickrey-Clarke-Groves (VCG) mechanisms is arguably the most celebrated achievement in truthful mechanism design.\nHowever, VCG mechanisms have their limitations.\nThey only apply to optimization problems with a utilitarian (or affine) objective function, and their output should optimize the objective function.\nFor many optimization problems, finding the optimal output is computationally intractable.\nIf we apply VCG mechanisms to polynomial-time algorithms that approximate the optimal solution, the resulting mechanisms may no longer be truthful.\nIn light of these limitations, it is useful to study whether we can design a truthful non-VCG payment scheme that is computationally tractable for a given allocation rule O.\nIn this paper, we focus our attention on binary demand games in which the agents' only available actions are to take part in a game or not to.\nFor these problems, we prove that a truthful mechanism M = (O, P) exists with a proper payment method P iff the allocation rule O satisfies a certain monotonicity property.\nWe provide a general framework to design such P.\nWe further propose several general composition-based techniques to compute P efficiently for various types of output.\nIn particular, we show how P can be computed through
or\/and combinations, round-based combinations, and some more complex combinations of the outputs from subgames.\nCategories and Subject Descriptors F.2 [Analysis of Algorithms and Problem Complexity]: General; J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Computer and Society]: Electronic Commerce General Terms Algorithms, Economics, Theory 1.\nINTRODUCTION In recent years, with the rapid development of the Internet, many protocols and algorithms have been proposed to make the Internet more efficient and reliable.\nThe Internet is a complex distributed system where a multitude of heterogeneous agents cooperate to achieve some common goals, and the existing protocols and algorithms often assume that all agents will follow the prescribed rules without deviation.\nHowever, in some settings where the agents are selfish instead of altruistic, it is more reasonable to assume these agents are rational, i.e., they maximize their own profits, as in neoclassical economics, and new models are needed to cope with the selfish behavior of such agents.\nTowards this end, Nisan and Ronen [14] proposed the framework of algorithmic mechanism design and applied VCG mechanisms to some fundamental problems in computer science, including shortest paths, minimum spanning trees, and scheduling on unrelated machines.\nThe VCG mechanisms [5, 11, 21] are applicable to mechanism design problems whose outputs optimize the utilitarian objective function, which is simply the sum of all agents' valuations.\nUnfortunately, some objective functions are not utilitarian; even for those problems with a utilitarian objective function, sometimes it is impossible to find the optimal output in polynomial time unless P=NP.\nMechanisms other than the VCG mechanism are needed to address these issues.\nArcher and Tardos [2] studied a scheduling problem where it is NP-hard to find the optimal output.\nThey pointed out that a certain monotonicity property of the output work load is a necessary and
sufficient condition for the existence of a truthful mechanism for their scheduling problem.\nAuletta et al. [3] studied a similar scheduling problem.\nThey provided a family of deterministic truthful (2 + ε)-approximation mechanisms for any fixed number of machines and several (1 + ε)-truthful mechanisms for some NP-hard restrictions of their scheduling problem.\nLehmann et al. [12] studied the single-minded combinatorial auction and gave a √m-approximation truthful mechanism, where m is the number of goods.\nThey also pointed out that a certain monotonicity in the allocation rule can lead to a truthful mechanism.\nThe work of Mu'alem and Nisan [13] is the closest in spirit to our work.\nThey characterized all truthful mechanisms based on a certain monotonicity property in a single-minded auction setting.\nThey also showed how to use MAX and IF-THEN-ELSE to combine outputs from subproblems.\nAs shown in this paper, the MAX and IF-THEN-ELSE combinations are special cases of the composition-based techniques that we present for computing the payments in polynomial time under mild assumptions.\nMore generally, we study how to design truthful mechanisms for binary demand games where the allocation of an agent is either selected or not selected.\nWe also assume that the valuations of agents are uncorrelated, i.e., the valuation of an agent only depends on its own allocation and type.\nRecall that a mechanism M = (O, P) consists of two parts, an allocation rule O and a payment scheme P. 
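The greedy √m-approximation of Lehmann et al. [12] mentioned above can be sketched as follows. This is our own illustration, not code from the paper; the function name and the small bid instance are hypothetical. Single-minded bids (Si, ai) are ranked by ai/|Si|^(1/2) and granted greedily when they do not conflict with previously granted bids.

```python
# A sketch (illustrative, hypothetical names) of the greedy allocation of
# Lehmann et al. [12]: rank single-minded bids by a_i / sqrt(|S_i|) and
# grant them greedily, respecting exactness (all of S_i or nothing).
from math import sqrt

def greedy_single_minded(bids):
    """bids: list of (wanted_set, offered_price); returns sorted winner indices."""
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i][1] / sqrt(len(bids[i][0])),
                   reverse=True)
    taken, winners = set(), []
    for i in order:
        wanted, _ = bids[i]
        if not (wanted & taken):   # no conflict with already granted goods
            winners.append(i)
            taken |= wanted
    return sorted(winners)

bids = [({1, 2}, 6.0), ({2, 3}, 5.0), ({4}, 1.0)]
# ranks: 6/sqrt(2), 5/sqrt(2), 1/1 -- bid 0 wins {1,2},
# bid 1 conflicts on good 2 and loses, bid 2 wins {4}
```

The monotonicity noted in the text is visible here: raising ai (or shrinking Si) only moves a bid earlier in the greedy order, so a winning bid cannot become losing.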
Previously, it was often assumed that there is an objective function g and an allocation rule O that optimizes g either exactly or approximately.\nIn contrast to the VCG mechanisms, we do not require that the allocation should optimize the objective function.\nIn fact, we do not even require the existence of an objective function.\nGiven any allocation rule O for a binary demand game, we show that a truthful mechanism M = (O, P) exists for the game if and only if O satisfies a certain monotonicity property.\nThe monotonicity property only guarantees the existence of a payment scheme P such that (O, P) is truthful.\nWe complement this existence theorem with a general framework to design such a payment scheme P. Furthermore, we present general techniques to compute the payment when the output is a composition of the outputs of subgames through the operators or and and; through round-based combinations; or through intermediate results, which may themselves be computed from other subproblems.\nThe remainder of the paper is organized as follows.\nIn Section 2, we discuss preliminaries and previous work, define binary demand games and discuss the basic assumptions about binary demand games.\nIn Section 3, we show that O satisfying a certain monotonicity property is a necessary and sufficient condition for the existence of a truthful mechanism M = (O, P).\nA framework is then proposed in Section 4 to compute the payment P in polynomial time for several types of allocation rules O.\nIn Section 5, we provide several examples to demonstrate the effectiveness of our general framework.\nWe conclude our paper in Section 6 with some possible future directions.\n2.\nPRELIMINARIES 2.1 Mechanism Design As is usually done in the literature on designing algorithms or protocols with inputs from individual agents, we adopt the assumption from neoclassical economics that all agents are rational, i.e., they respond to well-defined incentives and will deviate from the protocol only 
if the deviation improves their gain.\nA standard model for mechanism design is as follows.\nThere are n agents 1, ... , n and each agent i has some private information ti, called its type, known only to itself.\nFor example, the type ti can be the cost that agent i incurs for forwarding a packet in a network or a payment that the agent is willing to pay for a good in an auction.\nThe agents' types define the type vector t = (t1, t2, ... , tn).\nEach agent i has a set of strategies Ai from which it can choose.\nFor each input vector a = (a1, ... , an) where agent i plays strategy ai ∈ Ai, the mechanism M = (O, P) computes an output o = O(a) and a payment vector p(a) = (p1(a), ... , pn(a)).\nHere the payment pi(·) is the money given to agent i and depends on the strategies used by the agents.\nA game is defined as G = (S, M), where S is the setting for the game G. Here, S consists of the parameters of the game that are set before the game starts and do not depend on the players' strategies.\nFor example, in a unicast routing game [14], the setting consists of the topology of the network, the source node and the destination node.\nThroughout this paper, unless explicitly mentioned otherwise, the setting S of the game is fixed and we are only interested in how to design P for a given allocation rule O.\nA valuation function v(ti, o) assigns a monetary amount to agent i for each possible output o. Everything about a game G = (S, M), including the setting S, the allocation rule O and the payment scheme P, is public knowledge except agent i's actual type ti, which is private information to agent i. 
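As a minimal sketch of the standard model above (our own illustration, not from the paper), a direct-revelation mechanism M = (O, P) maps reported types to an output and a payment vector. The concrete instance here is a single-item second-price (Vickrey) auction; the function name is hypothetical.

```python
# Illustrative sketch of a mechanism M = (O, P): agents report types (bids),
# the mechanism computes an output o = O(a) and a payment vector p(a).
# Here O selects the highest bidder and P charges the second-highest bid.
# Payments follow the paper's convention of money GIVEN TO agents, so the
# winner's payment is negative (it pays the mechanism).

def second_price_auction(bids):
    n = len(bids)
    winner = max(range(n), key=lambda i: bids[i])       # allocation rule O
    second = max(b for i, b in enumerate(bids) if i != winner)
    output = [1 if i == winner else 0 for i in range(n)]
    payment = [-second if i == winner else 0 for i in range(n)]  # scheme P
    return output, payment

o, p = second_price_auction([10, 7, 3])
# agent 0 wins and pays the second-highest bid, 7
```

With quasi-linear utility v(ti, o) + pi(a), the winner's utility is its true value minus the second-highest bid, which is why truthful bidding is dominant in this instance.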
Let ui(ti, o) denote the utility of agent i at the outcome of the game o, given its preferences ti.\nHere, following a common assumption in the literature, we assume the utility for agent i is quasi-linear, i.e., ui(ti, o) = v(ti, o) + pi(a).\nLet a|i ai′ = (a1, · · · , ai−1, ai′, ai+1, · · · , an), i.e., each agent j ≠ i plays the action aj while agent i plays ai′.\nLet a−i = (a1, · · · , ai−1, ai+1, · · · , an) denote the actions of all agents except i. Sometimes, we write (a−i, bi) as a|i bi.\nAn action ai is called dominant for i if it (weakly) maximizes the utility of i for all possible strategies b−i of the other agents, i.e., ui(ti, O(b−i, ai)) ≥ ui(ti, O(b−i, ai′)) for all ai′ ≠ ai and all b−i.\nA direct-revelation mechanism is a mechanism in which the only actions available to each agent are to report its private type either truthfully or falsely to the mechanism.\nAn incentive compatible (IC) mechanism is a direct-revelation mechanism in which an agent maximizes its utility by reporting its type ti truthfully.\nThus, in a direct-revelation mechanism satisfying IC, the payment scheme should satisfy the property that, for each agent i, v(ti, O(t)) + pi(t) ≥ v(ti, O(t|i ti′)) + pi(t|i ti′).\nAnother common requirement in the literature for mechanism design is the so-called individual rationality (IR), or voluntary participation: an agent's utility from participating in the mechanism is not less than its utility from not participating.\nA direct-revelation mechanism is strategyproof if it satisfies both the IC and IR properties.\nArguably the most important positive result in mechanism design is the generalized Vickrey-Clarke-Groves (VCG) mechanism by Vickrey [21], Clarke [5], and Groves [11].\nThe VCG mechanism applies to (affine) maximization problems where the objective function is utilitarian, g(o, t) = Σi v(ti, o) (i.e., the sum of 
all agents' valuations), and the set of possible outputs is assumed to be finite.\nA direct-revelation mechanism M = (O(t), P(t)) belongs to the VCG family if (1) the allocation O(t) maximizes Σi v(ti, o), and (2) the payment to agent i is pi(t) = Σj≠i v(tj, O(t)) + hi(t−i), where hi(·) is an arbitrary function of t−i. Under mild assumptions, VCG mechanisms are the only truthful implementations for utilitarian problems [10].\nThe allocation rule of a VCG mechanism is required to maximize the objective function over the range of the allocation function.\nThis makes the mechanism computationally intractable in many cases.\nFurthermore, replacing an optimal algorithm for computing the output with an approximation algorithm usually leads to untruthful mechanisms if a VCG payment scheme is used.\nIn this paper, we study how to design a truthful mechanism that does not optimize a utilitarian objective function.\n2.2 Binary Demand Games A binary demand game is a game G = (S, M), where M = (O, P) and the range of O is {0, 1}^n.\nIn other words, the output is an n-tuple vector O(t) = (O1(t), O2(t), ... 
, On(t)), where Oi(t) = 1 (respectively, 0) means that agent i is (respectively, is not) selected.\nExamples of binary demand games include: unicast [14, 22, 9] and multicast [23, 24, 8] (generally, subgraph construction by selecting some links/nodes to satisfy some property), facility location [7], and certain auctions [12, 2, 13].\nHereafter, we make the following further assumptions.\n1.\nThe valuations of the agents are not correlated, i.e., v(ti, o) is a function of ti and oi only, and is denoted as v(ti, oi).\n2.\nThe valuation v(ti, 0) is a publicly known value and is normalized to 0.\nThis assumption is needed to guarantee the IR property.\nThus, throughout this paper, we only consider direct-revelation mechanisms in which every agent i only needs to reveal its valuation vi = v(ti, 1).\nNotice that in applications where agents provide a service and receive payment, e.g., unicast and job scheduling, the valuation vi of an agent i is usually negative.\nFor convenience of presentation, we define the cost of agent i as ci = −v(ti, 1), i.e., it costs agent i ci to provide the service.\nThroughout this paper, we will use ci instead of vi in our analysis.\nAll our results apply to the case where the agents receive the service rather than provide it, by setting ci to a negative value, as in an auction.\nIn a binary demand game, if we want to optimize an objective function g(o, t), then we call it a binary optimization demand game.\nThe main differences between binary demand games and the problems that can be solved by VCG mechanisms are: 1.\nThe objective function must be utilitarian (or an affine maximization) for a problem to be solvable by VCG, while there is no restriction on the objective function for a binary demand game.\n2.\nThe allocation rule O studied here does not necessarily optimize an objective function, while a VCG mechanism only uses the output that optimizes the objective function.\nWe do not even require the existence of an objective 
function.\n3.\nWe assume that the agents' valuations are not correlated in a binary demand game, while the agents' valuations may be correlated in a VCG mechanism.\nIn this paper, we assume for technical convenience that the objective function g(o, c), if it exists, is continuous with respect to the cost ci, but most of our results are directly applicable to the discrete case without any modification.\n2.3 Previous Work Lehmann et al. [12] studied how to design an efficient truthful mechanism for the single-minded combinatorial auction.\nIn a single-minded combinatorial auction, each agent i (1 ≤ i ≤ n) only wants to buy a subset Si ⊆ S with private price ci.\nA single-minded bidder i declares a bid bi = (Si, ai) with Si ⊆ S and ai ∈ R+.\nIn [12], it is assumed that the set of goods allocated to an agent i is either Si or ∅, which is known as exactness.\nLehmann et al. gave a greedy round-based allocation algorithm, based on the rank ai/|Si|^(1/2), that has an approximation ratio √m, where m is the number of goods in S. Based on this approximation algorithm, they gave a truthful payment scheme.\nFor an allocation rule satisfying (1) exactness: the set of goods allocated to an agent i is either Si or ∅; and (2) monotonicity: proposing more money for fewer goods cannot cause a bidder to lose its bid, they proposed a truthful payment scheme as follows: (1) charge a winning bidder a certain amount that does not depend on its own bidding; (2) charge a losing bidder 0.\nNotice that the assumption of exactness implies that the single-minded auction is indeed a binary demand game.\nTheir payment scheme inspired our payment scheme for binary demand games.\nIn [1], Archer et al. 
studied combinatorial auctions where multiple copies of many different items are on sale, and each bidder i desires only one subset Si.\nThey devised a randomized rounding method that is incentive compatible and gave a truthful mechanism for combinatorial auctions with single-parameter agents that approximately maximizes the social value of the auction.\nAs they pointed out, their method is strongly truthful in the sense that it is truthful with high probability 1 − ε, where ε is an error probability.\nIn contrast, in this paper, we study how to design a deterministic mechanism that is truthful based on some given allocation rules.\nIn [2], Archer and Tardos showed how to design truthful mechanisms for several combinatorial problems where each agent's private information is naturally expressed by a single positive real number, which will always be the cost incurred per unit load.\nThe mechanism's output could be an arbitrary real number, but each agent's valuation is a quasi-linear function t · w, where t is the private per-unit cost and w is the work load.\nArcher and Tardos characterized all truthful mechanisms in this setting: the work curves wi must be decreasing, and the truthful payment must be Pi(bi) = Pi(0) + bi · wi(bi) − ∫0^bi wi(u) du.\nUsing this model, Archer and Tardos designed truthful mechanisms for several scheduling-related problems, including minimizing the makespan, maximizing flow and minimizing the weighted sum of completion times.\nNotice that when the work load is w ∈ {0, 1}, it is indeed a binary demand game.\nIf we apply their characterization of truthful mechanisms, their decreasing work curve condition implies exactly the monotonicity property of the output.\nBut notice that their proof relies heavily on the assumption that the output is a continuous function of the cost; thus their conclusion cannot directly apply to binary demand games.\nThe paper of Ahuva Mu'alem and Noam Nisan [13] is closest in spirit to our work.\nThey clearly 
stated that they only discussed a limited class of bidders, single-minded bidders, introduced by [12].\nThey proved that all truthful mechanisms must have a monotone output, and their payment scheme is based on the cut value.\nWith a simple generalization, we obtain our conclusion for general binary demand games.\nThey proposed several combination methods, including the MAX and IF-THEN-ELSE constructions, to perform partial search.\nAll of their methods require the welfare function associated with the output to satisfy a bitonic property.\nDistinction between our contributions and previous results: It has been shown in [2, 6, 12, 13] that for the single-minded combinatorial auction, there exists a payment scheme which results in a truthful mechanism if the allocation rule satisfies a certain monotonicity property.\nTheorem 4 also depends on the monotonicity property, but it is applicable to a broader setting than the single-minded combinatorial auction.\nIn addition, the binary demand game studied here is different from traditional packing IPs: we only require that the allocation to each agent is binary and the allocation rule satisfies a certain monotonicity property; we do not put any restrictions on the objective function.\nFurthermore, the main focus of this paper is to design some general techniques to find the truthful payment scheme for a given allocation rule O satisfying a certain monotonicity property.\n3.\nGENERAL APPROACHES 3.1 Properties of Strategyproof Mechanisms We discuss several properties that mechanisms need to satisfy in order to be truthful.\nTHEOREM 1.\nIf a mechanism M = (O, P) satisfies IC, then ∀i, if Oi(t|i ti1) = Oi(t|i ti2), then pi(t|i ti1) = pi(t|i ti2).\nCOROLLARY 2.\nFor any strategyproof mechanism for a binary demand game G with setting S, if we fix the cost c−i of all agents other than i, the payment to agent i is a constant pi^1 if Oi(c) = 1, and it is another constant pi^0 if Oi(c) = 0.\nTHEOREM 3.\nFix the 
setting S for a binary demand game, if mechanism M = (O, P) satisfies IC, then mechanism M′ = (O, P′) with the same output method O and payment p′i(c) = pi(c) − δi(c−i), for any function δi(c−i), also satisfies IC.\nThe proofs of the above theorems are straightforward and thus omitted due to space limitations.\nThis theorem implies that for binary demand games we can always normalize the payment to an agent i such that the payment to the agent is 0 when it is not selected.\nHereafter, we will only consider normalized payment schemes.\n3.2 Existence of Strategyproof Mechanisms Notice that, given the setting S, a mechanism design problem is composed of two parts: the allocation rule O and a payment scheme P.\nIn this paper, given an allocation rule O we focus our attention on how to design a truthful payment scheme based on O. Given an allocation rule O for a binary demand game, we first present a necessary and sufficient condition for the existence of a truthful payment scheme P. DEFINITION 1 (MONOTONE NON-INCREASING PROPERTY (MP)).\nAn output method O is said to satisfy the monotone non-increasing property if for every agent i and two of its possible costs ci1 < ci2, Oi(c|i ci2) ≤ Oi(c|i ci1).\nThis definition is not restricted to binary demand games.\nFor binary demand games, this definition implies that if Oi(c|i ci2) = 1 then Oi(c|i ci1) = 1.\nTHEOREM 4.\nFix the setting S and c−i in a binary demand game G with the allocation rule O. The following three conditions are equivalent: 1.\nThere exists a value κi(O, c−i) (which we will call a cut value) such that Oi(c) = 1 if ci < κi(O, c−i) and Oi(c) = 0 if ci > κi(O, c−i).\nWhen ci = κi(O, c−i), Oi(c) can be either 0 or 1 depending on the tie-breaker of the allocation rule O. 
Hereafter, we will not consider the tie-breaker scenario in our proofs.\n2.\nThe allocation rule O satisfies MP.\n3.\nThere exists a truthful payment scheme P for this binary demand game.\nPROOF.\nThe proof that Condition 2 implies Condition 1 is straightforward and is omitted here.\nWe then show that Condition 3 implies Condition 2.\nThe proof of this is similar to a proof in [13].\nTo prove this direction, we assume there exist an agent i and two cost vectors c|i ci1 and c|i ci2, where ci1 < ci2, Oi(c|i ci2) = 1 and Oi(c|i ci1) = 0.\nFrom Corollary 2, we know that pi(c|i ci1) = pi^0 and pi(c|i ci2) = pi^1.\nNow fix c−i; the utility for i when ci = ci1 is ui(ci1) = pi^0.\nWhen agent i misreports its cost as ci2, its utility is pi^1 − ci1.\nSince M = (O, P) is truthful, we have pi^0 ≥ pi^1 − ci1.\nNow consider the scenario when the actual cost of agent i is ci = ci2.\nIts utility is pi^1 − ci2 when it reports its true cost.\nSimilarly, if it misreports its cost as ci1, its utility is pi^0.\nSince M = (O, P) is truthful, we have pi^1 − ci2 ≥ pi^0.\nConsequently, we have pi^1 − ci2 ≥ pi^0 ≥ pi^1 − ci1.\nThis implies that ci1 ≥ ci2, which contradicts ci1 < ci2.\nWe then show that Condition 1 implies Condition 3.\nWe prove this by constructing a payment scheme and proving that this payment scheme is truthful.\nThe payment scheme is: If Oi(c) = 1, then agent i gets payment pi(c) = κi(O, c−i); else it gets payment pi(c) = 0.\nFrom Condition 1, if Oi(c) = 1 then ci < κi(O, c−i).\nThus, its utility is κi(O, c−i) − ci > 0, which implies that the payment scheme satisfies IR.\nIn the following we prove that this payment scheme also satisfies the IC property.\nThere are two cases here.\nCase 1: ci < κi(O, c−i).\nIn this case, when i declares its true cost ci, its utility is κi(O, c−i) − ci > 0.\nNow consider the situation when i declares a cost di ≠ ci.\nIf 
di < \u03bai(O, c\u2212i), then i gets the same payment and utility since it is still selected.\nIf di > \u03bai(O, c\u2212i), then its utility becomes 0 since it is not selected anymore.\nThus, it has no incentive to lie in this case.\nCase 2: ci \u2265 \u03ba(O, c\u2212i).\nIn this case, when i reveals its true valuation, its payment is 0 and the utility is 0.\nNow consider the situation when i declares a valuation di = ci.\nIf di > \u03bai(O, c\u2212i), then i gets the same payment and utility since it is still not selected.\nIf di \u2264 \u03bai(O, c\u2212i), then its utility becomes \u03bai(O, c\u2212i) \u2212 ci \u2264 0 since it is selected now.\nThus, it has no incentive to lie.\nThe equivalence of the monotonicity property of the allocation rule O and the existence of a truthful mechanism using O can be extended to games beyond binary demand games.\nThe details are omitted here due to space limit.\nWe now summarize the process to design a truthful payment scheme for a binary demand game based on an output method O. 
General Framework 1 Truthful mechanism design for a binary demand game Stage 1: Check whether the allocation rule O satisfies MP.\nIf it does not, then there is no payment scheme P such that mechanism M = (O, P) is truthful.\nOtherwise, define the payment scheme P as follows.\nStage 2: Based on the allocation rule O, find the cut value κi(O, c−i) for agent i such that Oi(c|i di) = 1 when di < κi(O, c−i), and Oi(c|i di) = 0 when di > κi(O, c−i).\nStage 3: The payment for agent i is 0 if Oi(c) = 0; the payment is κi(O, c−i) if Oi(c) = 1.\nTHEOREM 5.\nThe payment defined by our general framework is the minimum among all truthful payment schemes using O as the output.\n4.\nCOMPUTING CUT VALUE FUNCTIONS To find the truthful payment scheme using General Framework 1, the most difficult stage seems to be Stage 2.\nNotice that binary search does not work in general, since the valuations of agents may be continuous.\nWe give some general techniques that can help with finding the cut value function under certain circumstances.\nOur basic approach is as follows.\nFirst, we decompose the allocation rule into several allocation rules.\nNext, we find the cut value function for each of these new allocation rules.\nThen, we compute the original cut value function by combining these cut value functions of the new allocation rules.\n4.1 Simple Combinations In this subsection, we introduce techniques to compute the cut value function by combining multiple allocation rules with conjunctions or disjunctions.\nFor simplicity, given an allocation rule O, we will use κ(O, c) to denote an n-tuple vector (κ1(O, c−1), κ2(O, c−2), ... 
, \u03ban(O, c\u2212n)).\nHere, \u03bai(O, c\u2212i) is the cut value for agent i when the allocation rule is O and the costs c\u2212i of all other agents are fixed.\nTHEOREM 6.\nWith a fixed setting S of a binary demand game, assume that there are m allocation rules O1 , O2 , \u00b7 \u00b7 \u00b7 , Om satisfying the monotonicity property, and \u03ba(Oi , c) is the cut value vector for Oi .\nThen the allocation rule O(c) = Wm i=1 Oi (c) satisfies the monotonicity property.\nMoreover, the cut value for O is \u03ba(O, c) = maxm i=1{\u03ba(Oi , c)} Here \u03ba(O, c) = maxm i=1{\u03ba(Oi , c)} means, \u2200j \u2208 [1, n], \u03baj(O, c\u2212j) = maxm i=1{\u03baj(Oi , c\u2212j)} and O(c) =Wm i=1 Oi (c) means, \u2200j \u2208 [1, n], Oj(c) = O1 j (c) \u2228 O2 j (c) \u2228 \u00b7 \u00b7 \u00b7 \u2228 Om j (c).\nPROOF.\nAssume that ci > ci and Oi(c) = 1.\nWithout loss of generality, we assume that Ok i (c) = 1 for some k, 1 \u2264 k \u2264 m. From the assumption that Ok i (c) satisfies MP, we obtain that 216 Ok i (c|i ci) = 1.\nThus, Oi(c|i ci) = Wm j=1 Oj (c) = 1.\nThis proves that O(c) satisfies MP.\nThe correctness of the cut value function follows directly from Theorem 4.\nMany algorithms indeed fall into this category.\nTo demonstrate the usefulness of Theorem 6, we discuss a concrete example here.\nIn a network, sometimes we want to deliver a packet to a set of nodes instead of one.\nThis problem is known as multicast.\nThe most commonly used structure in multicast routing is so called shortest path tree (SPT).\nConsider a network G = (V, E, c), where V is the set of nodes, and vector c is the actual cost of the nodes forwarding the data.\nAssume that the source node is s and the receivers are Q \u2282 V .\nFor each receiver qi \u2208 Q, we compute the shortest path (least cost path), denoted by LCP(s, qi, d), from the source s to qi under the reported cost profile d.\nThe union of all such shortest paths forms the shortest path tree.\nWe then use General Framework 1 
to design the truthful payment scheme P when the SPT structure is used as the output for multicast, i.e., we design a mechanism M = (SPT, P).\nNotice that VCG mechanisms cannot be applied here since SPT is not an affine maximization.\nWe define LCP(s,qi) as the allocation rule corresponding to the path LCP(s, qi, d), i.e., LCP(s,qi)k(d) = 1 if and only if node vk is in LCP(s, qi, d).\nThen the output SPT is defined as the disjunction ∨qi∈Q LCP(s,qi).\nIn other words, SPTk(d) = 1 if and only if vk is selected in some LCP(s, qi, d).\nThe shortest path allocation rule is utilitarian and satisfies MP.\nThus, from Theorem 6, SPT also satisfies MP, and the cut value function vector for SPT can be calculated as κ(SPT, c) = maxqi∈Q κ(LCP(s,qi), c), where κ(LCP(s,qi), c) is the cut value function vector for the shortest path LCP(s, qi, c).\nConsequently, the payment scheme above is truthful and the minimum among all truthful payment schemes when the allocation rule is SPT.\nTHEOREM 7.\nFix the setting S of a binary demand game, assume that there are m output methods O1, O2, · · · , Om satisfying MP, and let κ(Oi, c) be the cut value function for Oi, where i = 1, 2, · · · , m.\nThen the allocation rule O(c) = O1(c) ∧ O2(c) ∧ · · · ∧ Om(c) satisfies MP.\nMoreover, the cut value function for O is κ(O, c) = min1≤i≤m {κ(Oi, c)}.\nWe show that our simple combination generalizes the IF-THEN-ELSE function defined in [13].\nFor an agent i, assume that there are two allocation rules O1 and O2 satisfying MP.\nLet κi(O1, c−i), κi(O2, c−i) be the cut value functions for O1, O2 respectively.\nThen the IF-THEN-ELSE function Oi(c) is actually Oi(c) = [(ci ≤ κi(O1, c−i) + δ1(c−i)) ∧ O2i(c−i, ci)] ∨ (ci < κi(O1, c−i) − δ2(c−i)), where δ1(c−i) and δ2(c−i) are two positive functions.\nBy applying Theorems 6 and 7, we know that the 
allocation rule O satisfies MP and consequently κi(O, c−i) = max{min(κi(O1, c−i) + δ1(c−i), κi(O2, c−i)), κi(O1, c−i) − δ2(c−i)}.\n4.2 Round-Based Allocations Some approximation algorithms are round-based, where each round of the algorithm selects some agents and updates the setting and the cost profile if necessary.\nFor example, several approximation algorithms for minimum weight vertex cover [19], maximum weight independent set, minimum weight set cover [4], and minimum weight Steiner tree [18] fall into this category.\nAs an example, we discuss the minimum weighted vertex cover problem (MWVC) [16, 15] to show how to compute the cut value for a round-based output.\nGiven a graph G = (V, E), where the nodes v1, v2, ... , vn are the agents and each agent vi has a weight ci, we want to find a node set V′ ⊆ V such that for every edge (u, v) ∈ E at least one of u and v is in V′.\nSuch a V′ is called a vertex cover of G.\nThe valuation of a node i is −ci if it is selected; otherwise its valuation is 0.\nFor a subset of nodes V′ ⊆ V , we define its weight as c(V′) = Σi∈V′ ci.\nWe want to find a vertex cover with the minimum weight.\nHence, the objective function to be implemented is utilitarian.\nTo use the VCG mechanism, we would need to find the vertex cover with the minimum weight, which is NP-hard [16].\nSince we are interested in mechanisms that can be computed in polynomial time, we must use polynomial-time computable allocation rules.\nMany algorithms have been proposed in the literature to approximate the optimal solution.\nIn this paper, we use a 2-approximation algorithm given in [16].\nFor the sake of completeness, we briefly review this algorithm here.\nThe algorithm is round-based.\nEach round selects some vertices and discards some vertices.\nFor each node i, w(i) is initialized to its weight ci, and when w(i) drops to 0, i is included in the vertex cover.\nTo make the 
presentation clear, we say an edge (i1, j1) is lexicographically smaller than edge (i2, j2) if (1) min(i1, j1) < min(i2, j2), or (2) min(i1, j1) = min(i2, j2) and max(i1, j1) < max(i2, j2).\nAlgorithm 2 Approximate Minimum Weighted Vertex Cover Input: A node weighted graph G = (V, E, c).\nOutput: A vertex cover V′.\n1: Set V′ = ∅.\nFor each i ∈ V , set w(i) = ci.\n2: while V′ is not a vertex cover do 3: Pick an uncovered edge (i, j) with the least lexicographic order among all uncovered edges.\n4: Let m = min(w(i), w(j)).\n5: Update w(i) to w(i) − m and w(j) to w(j) − m. 6: If w(i) = 0, add i to V′.\nIf w(j) = 0, add j to V′.\nNotice that selecting an edge using the lexicographic order is crucial to guarantee the monotonicity property.\nAlgorithm 2 outputs a vertex cover V′ whose weight is within twice the optimum.\nFor convenience, we use VC(c) to denote the vertex cover computed by Algorithm 2 when the cost vector of the vertices is c. Below we generalize Algorithm 2 to a more general scenario.\nTypically, a round-based output can be characterized as follows (Algorithm 3).\nDEFINITION 2.\nAn updating rule Ur is said to be crossing-independent if, for any agent i not selected in round r, (1) Sr+1 and cr+1−i do not depend on cri, and (2) for fixed cr−i, a smaller round-r cost of agent i yields a smaller round-(r+1) cost, i.e., cri,1 ≤ cri,2 implies cr+1i,1 ≤ cr+1i,2.\nWe have the following theorem about the existence of a truthful payment using a round-based allocation rule A. THEOREM 8.\nA round-based output A, with the framework defined in Algorithm 3, satisfies MP if the output methods Or satisfy MP and all updating rules Ur are crossing-independent.\nPROOF.\nConsider an agent i and fixed c−i.\nWe prove that if agent i is selected with cost ci, then it is also selected with any cost di < ci.\nAssume that i is selected in round r with cost ci.\nThen under cost di, if agent i is selected in a round before r, our claim holds.\nOtherwise, consider round r. 
Clearly, the setting Sr and the costs of all other agents are the same as they would be if agent i had cost ci, since i is not selected in the previous rounds, due to the crossing-independent property.\nSince i is selected in round r with cost ci, i is also selected in round r with di < ci because Or satisfies MP.\nThis finishes the proof.\nAlgorithm 3 A General Round-Based Allocation Rule A 1: Set r = 0, c0 = c, and G0 = G initially.\n2: repeat 3: Compute an output or using a deterministic algorithm Or : Sr × cr → {0, 1}^n.\nHere Or, cr and Sr are the allocation rule, cost vector and game setting of game Gr, respectively.\nRemark: Or is often a simple greedy algorithm, such as selecting the agents that minimize some utilitarian function.\nFor the example of vertex cover, Or will always select the lighter node on the lexicographically least uncovered edge (i, j).\n4: Let r = r + 1.\nUpdate the game Gr−1 to obtain a new game Gr with setting Sr and cost vector cr according to some rule Ur : Or−1 × (Sr−1, cr−1) → (Sr, cr).\nHere we update the cost and setting of the game.\nRemark: For the example of vertex cover, the updating rule decreases the weights of vertices i and j by min(w(i), w(j)).\n5: until a valid output is found 6: Return the union of the sets of selected players of all rounds as the final output.\nFor the example of vertex cover, it is the union of the nodes selected in all rounds.\nAlgorithm 4 Compute Cut Value for Round-Based Algorithms Input: A round-based output A, a game G1 = G, and an updating function vector U. Output: The cut value x for agent k. 1: Set r = 0 and ck = ζ.\nRecall that ζ is a value that guarantees Ak = 0 when agent k reports the cost ζ.\n2: repeat 3: Compute an output or using a deterministic algorithm based on setting Sr using allocation rule Or : Sr × cr → {0, 1}^n.\n4: Find the cut value for agent k based on the allocation rule Or for costs cr−k. 
Let ℓ_r = κ_k(O^r, c^r_{-k}) be this cut value.
5: Set r = r + 1 and obtain a new game G^r from G^{r-1} and o^{r-1} according to the updating rule U^r.
6: Let c^r be the new cost vector for game G^r.
7: until a valid output is found
8: Let g_r(x) denote the cost c^r_k in round r when the original cost vector is c|^k x.
9: Find the minimum value x such that g_r(x) ≥ ℓ_r for every round r, where t is the total number of rounds: g_1(x) ≥ ℓ_1; g_2(x) ≥ ℓ_2; ...; g_t(x) ≥ ℓ_t.
10: Output the value x as the cut value.

If the round-based output satisfies the monotonicity property, the cut value always exists. We then show how to find the cut value for a selected agent k in Algorithm 4. The correctness of Algorithm 4 is straightforward. To compute the cut value, we assume that (1) the cut value ℓ_r for each round r can be computed in polynomial time, and (2) we can solve the equation g_r(x) = ℓ_r for x in polynomial time when the cost vector c_{-k} and the value ℓ_r are given. Now we consider the vertex cover problem. In each round r, we select the vertex with the least weight that is incident on the lexicographically least uncovered edge. This output satisfies MP. For agent i, we update its cost to c^r_i - c^r_j if edge (i, j) is selected. It is easy to verify that this updating rule is crossing-independent, so we can apply Algorithm 4 to compute the cut value for the vertex cover game, as shown in Algorithm 5.

Algorithm 5 Compute Cut Value for MVC
Input: A node-weighted graph G = (V, E, c) and a node k selected by Algorithm 2.
Output: The cut value κ_k(VC, c_{-k}).
1: For each i ∈ V, set w(i) = c_i.
2: Set w(k) = ∞, p_k = 0 and V' = ∅.
3: while V' is not a vertex cover do
4: Pick the uncovered edge (i, j) with the least lexicographic order among all uncovered edges.
5: Set m = min(w(i), w(j)).
6: Update w(i) = w(i) - m and w(j) = w(j) - m.
7: If w(i) = 0, add i to V'; else add j to V'.
8: If i == k or j == k, set p_k = p_k + m.
9: Output p_k as the cut value κ_k(VC, c_{-k}).

4.3 Complex Combinations
In Subsection 4.1, we discussed how to find the cut value function when the output of the binary demand game is a simple combination of outputs whose cut values can be computed through other means (typically VCG). However, some algorithms cannot be decomposed in the way described in Subsection 4.1. Next we present a more complex way to combine allocation rules and, as one might expect, the way to find the cut value is also more complicated. Assume that there are n agents 1 ≤ i ≤ n with cost vector c, and that there are m binary demand games G_i with objective functions f_i(o, c), settings S_i and allocation rules ψ^i, where i = 1, 2, ..., m. There is another binary demand game with setting S and allocation rule O whose input is a cost vector d = (d_1, d_2, ..., d_m). Let f be the function vector (f_1, f_2, ..., f_m), ψ the allocation rule vector (ψ^1, ψ^2, ..., ψ^m), and (S_1, S_2, ..., S_m) the setting vector. For notational simplicity, we define F_i(c) = f_i(ψ^i(c), c) for each 1 ≤ i ≤ m, and F(c) = (F_1(c), F_2(c), ..., F_m(c)). Let us see a concrete example of these combinations. Consider a link-weighted graph G = (V, E, c) and a subset of q nodes Q ⊆ V. The Steiner tree problem is to find a set of links with minimum total cost to connect Q. One way to find an approximation of the Steiner tree is as follows: (1) build a virtual complete graph H using Q as its vertices, where the cost of each edge (i, j) is the cost of LCP(i, j, c) in graph G; (2) build the minimum spanning tree of H, denoted MST(H); (3) an edge of G is selected iff it is selected in some LCP(i, j, c) and edge (i, j) of H is selected in MST(H). In this game, we define q(q - 1)/2 games G_{i,j}, where i, j ∈ Q, with objective functions f_{i,j}(o,
c) being the minimum cost of connecting i and j in graph G, setting being the original graph G, and allocation rule LCP(i, j, c). The game G corresponds to the MST game on graph H. The costs of the q(q - 1)/2 pairwise shortest paths define the input vector d = (d_1, d_2, ..., d_m) for the game MST. More details will be given in Section 5.2.

DEFINITION 3. Given an allocation rule O with setting S, an objective function vector f, an allocation rule vector ψ and a setting vector (S_1, ..., S_m), we define a compound binary demand game with setting S and output O ◦ F as
(O ◦ F)_i(c) = ⋁_{j=1}^{m} ( O_j(F(c)) ∧ ψ^j_i(c) ).

The allocation rule of the above definition can be interpreted as follows: an agent i is selected if and only if there is an index j such that (1) i is selected in ψ^j(c), and (2) the allocation rule O selects index j under cost profile F(c). For simplicity, we will use O ◦ F to denote the output of this compound binary demand game. Notice that a truthful payment scheme using O ◦ F as output exists if and only if it satisfies the monotonicity property. To study when O ◦ F satisfies MP, several definitions are in order.

DEFINITION 4. Function Monotonicity Property (FMP): Given an objective function g and an allocation rule O, a function H(c) = g(O(c), c) is said to satisfy the function monotonicity property if, for fixed c_{-i}: 1. When O_i(c) = 0, H(c) does not increase in c_i. 2. When O_i(c) = 1, H(c) does not decrease in c_i.

DEFINITION 5. Strong Monotonicity Property (SMP): An allocation rule O is said to satisfy the strong monotonicity property if O satisfies MP and, for any agent i with O_i(c) = 1 and any agent j ≠ i, O_i(c|^j c'_j) = 1 whenever c'_j ≥ c_j or O_j(c|^j c'_j) = 0.

LEMMA 1. For a given allocation rule O satisfying SMP and cost vectors c, c' with c_i = c'_i, if O_i(c) = 1 and O_i(c') = 0, then there must exist j ≠ i such that c'_j < c_j and O_j(c') = 1.

From the
definition of the strong monotonicity property, we obtain Lemma 1 directly. We can now give a sufficient condition for O ◦ F to satisfy the monotonicity property.

THEOREM 9. If, for all i ∈ [1, m], F_i satisfies FMP, ψ^i satisfies MP, and the output O satisfies SMP, then O ◦ F satisfies MP.

PROOF. Assuming that for cost vector c we have (O ◦ F)_i(c) = 1, we must prove that for any cost vector c' = c|^i c'_i with c'_i < c_i, (O ◦ F)_i(c') = 1. Since (O ◦ F)_i(c) = 1, without loss of generality we assume that O_k(F(c)) = 1 and ψ^k_i(c) = 1 for some index 1 ≤ k ≤ m. Now consider the output O with the cost vector F(c')|^k F_k(c). There are two scenarios, which we study one by one.

One scenario is that index k is not chosen by the output function O. From Lemma 1, there must exist j ≠ k such that
F_j(c') < F_j(c),   (1)
O_j(F(c')|^k F_k(c)) = 1.   (2)
We then prove that agent i is selected in the output ψ^j(c'), i.e., ψ^j_i(c') = 1. If it is not, then since ψ^j satisfies MP and c'_i < c_i, we have ψ^j_i(c) = ψ^j_i(c') = 0. Since F_j satisfies FMP, we then know F_j(c') ≥ F_j(c), which contradicts inequality (1). Consequently, we have ψ^j_i(c') = 1. From Equation (2), the fact that index k is not selected by allocation rule O, and the definition of SMP, we have O_j(F(c')) = 1. Thus, agent i is selected by O ◦ F because O_j(F(c')) = 1 and ψ^j_i(c') = 1.

The other scenario is that index k is chosen by the output function O.
First, agent i is chosen in ψ^k(c'), since the output ψ^k satisfies the monotonicity property, c'_i < c_i, and ψ^k_i(c) = 1. Second, since the function F_k satisfies FMP, we know that F_k(c') ≤ F_k(c). Recall that the output O satisfies SMP; thus we obtain O_k(F(c')) = 1 from the facts that O_k(F(c')|^k F_k(c)) = 1 and F_k(c') ≤ F_k(c). Consequently, agent i is also selected in the final output O ◦ F. This finishes our proof.

This theorem implies that there is a cut value for the compound output O ◦ F. We then discuss how to find the cut value for this output. Below we give an algorithm to calculate κ_i(O ◦ F) when (1) O satisfies SMP, (2) each ψ^j satisfies MP, and (3) for fixed c_{-i}, F_j(c) is a constant, say h_j, when ψ^j_i(c) = 0, and F_j(c) increases in c_i when ψ^j_i(c) = 1. Notice that h_j can easily be computed by setting c_i = ∞, since ψ^j satisfies the monotonicity property. Given i and fixed c_{-i}, we define (F^i_j)^{-1}(y) as the smallest x such that F_j(c|^i x) = y. For simplicity, we denote (F^i_j)^{-1} as F^{-1}_j if no confusion arises when i is a fixed agent. In this paper, we assume that given any y, we can find such an x in polynomial time.

Algorithm 6 Find Cut Value for Compound Method O ◦ F
Input: Allocation rule O, objective function vector F and inverse function vector F^{-1} = {F^{-1}_1, ..., F^{-1}_m}, allocation rule vector ψ, and fixed c_{-i}.
Output: Cut value for agent i based on O ◦ F.
1: for 1 ≤ j ≤ m do
2: Compute the output ψ^j(c).
3: Compute h_j = F_j(c|^i ∞).
4: Use h = (h_1, h_2, ..., h_m) as the input for the output function O. Denote by τ_j = κ_j(O, h_{-j}) the cut value function of output O based on input h.
5: for 1 ≤ j ≤ m do
6: Set κ_{i,j} = F^{-1}_j(min{τ_j, h_j}).
7: The cut value for i is κ_i(O ◦ F, c_{-i}) = max_{j=1}^{m} κ_{i,j}.

THEOREM 10. Algorithm 6 computes the correct cut value for agent i based on the allocation rule O ◦ F.

PROOF. To prove the correctness of the cut value computed by Algorithm 6, we prove the following two cases. For convenience, we write κ_i for κ_i(O ◦ F, c_{-i}) if no confusion is caused.

First, if d_i < κ_i then (O ◦ F)_i(c|^i d_i) = 1. Without loss of generality, we assume that κ_i = κ_{i,j} for some j. Since the function F_j satisfies FMP and ψ^j_i(c|^i d_i) = 1, we have F_j(c|^i d_i) < F_j(c|^i κ_{i,j}). Notice that d_i < κ_{i,j}; from the definition κ_{i,j} = F^{-1}_j(min{τ_j, h_j}) we have (1) ψ^j_i(c|^i d_i) = 1, and (2) F_j(c|^i d_i) < τ_j, due to the fact that F_j(x) is a non-decreasing function when j is selected. Thus, from the monotonicity property of O and the fact that τ_j is the cut value for output O, we have
O_j(h|^j F_j(c|^i d_i)) = 1.   (3)
If O_j(F(c|^i d_i)) = 1, then (O ◦ F)_i(c|^i d_i) = 1. Otherwise, since O satisfies SMP, Lemma 1 and Equation (3) imply that there exists at least one index k such that O_k(F(c|^i d_i)) = 1 and F_k(c|^i d_i) < h_k. Note that F_k(c|^i d_i) < h_k implies that i is selected in ψ^k(c|^i d_i), since h_k = F_k(c|^i ∞). In other words, agent i is selected in O ◦ F.

Second, if d_i ≥ κ_i(O ◦ F, c_{-i}) then (O ◦ F)_i(c|^i d_i) = 0. Assume, for the sake of contradiction, that (O ◦ F)_i(c|^i d_i) = 1. Then there exists an index 1 ≤ j ≤ m such that O_j(F(c|^i d_i)) = 1 and ψ^j_i(c|^i d_i) = 1. Remember that h_k ≥ F_k(c|^i d_i) for any k.
Thus, from the fact that O satisfies SMP, when changing the cost vector from F(c|^i d_i) to h|^j F_j(c|^i d_i), we still have O_j(h|^j F_j(c|^i d_i)) = 1. This implies that F_j(c|^i d_i) < τ_j. Combining the above inequality with the fact that F_j(c|^i d_i) < h_j, we have F_j(c|^i d_i) < min{h_j, τ_j}. This implies d_i < F^{-1}_j(min{h_j, τ_j}) = κ_{i,j} ≤ κ_i(O ◦ F, c_{-i}), which is a contradiction. This finishes our proof.

In most applications, the allocation rule ψ^j implements the objective function f_j and f_j is utilitarian; thus we can compute the inverse F^{-1}_j efficiently. Another concern is that the conditions under which Algorithm 6 applies may seem restrictive. However, many games in practice satisfy these properties, and here we show how to derive the MAX combination of [13]. Assume A_1 and A_2 are two allocation rules for a single-minded combinatorial auction; then the combination MAX(A_1, A_2) returns the allocation with the larger welfare. If the algorithms A_1 and A_2 satisfy MP and FMP, then the operation max(x, y), which returns the larger of x and y, satisfies SMP. From Theorem 9 we obtain that the combination MAX(A_1, A_2) also satisfies MP. Further, the cut value of the MAX combination can be found by Algorithm 6. As we will show in Section 5, the complex combination applies to some more complicated problems.

5. CONCRETE EXAMPLES
5.1 Set Cover
In the set cover problem, there is a set U of m elements to be covered, and each agent 1 ≤ i ≤ n can cover a subset of elements S_i with a cost c_i. Let S = {S_1, S_2, ..., S_n} and c = (c_1, c_2, ..., c_n). We want to find a subset of agents D such that U ⊆ ∪_{i∈D} S_i. The selected subsets are called a set cover for U. The social efficiency of the output D is defined as Σ_{i∈D} c_i, which is the objective function to be minimized. Clearly, this objective is utilitarian, and thus the VCG mechanism could be applied if we could find the subset
of S that covers U with the minimum cost. It is well known, however, that finding the optimal solution is NP-hard. In [4], an algorithm with approximation ratio H_m was proposed, and it has been proved that this is the best ratio possible for the set cover problem. For completeness of presentation, we review that method here.

Algorithm 7 Greedy Set Cover (GSC)
Input: Each agent i's subset S_i and cost c_i (1 ≤ i ≤ n).
Output: A set of agents that together cover all elements.
1: Initialize r = 1, T_0 = ∅, and R = ∅.
2: while T_r ≠ U do
3: Find the set S_j with the minimum density c_j / |S_j - T_r|.
4: Set T_{r+1} = T_r ∪ S_j and R = R ∪ {j}.
5: r = r + 1.
6: Output R.

Let GSC(S) be the set of agents selected by Algorithm 7. Notice that the output is a function of S and c. Some works assume that the type of an agent is only c_i, i.e., S_i is assumed to be public knowledge. Here we consider a more general case in which the type of an agent is (S_i, c_i); in other words, we assume that every agent i can lie not only about its cost c_i but also about its set S_i. This problem now looks similar to the combinatorial auction with single-minded bidders studied in [12], but with the following differences: in the set cover problem we want to cover all elements and the chosen sets may overlap, while in the combinatorial auction the chosen sets are disjoint. We can show that the mechanism M = (GSC, P^{VCG}), which uses Algorithm 7 to find a set cover and applies the VCG mechanism to compute the payments to the selected agents, is not truthful. Obviously, the set cover problem is a binary demand game. For the moment, we assume that agent i is not able to lie about S_i; we will drop this assumption later. We show how to design a truthful mechanism by applying our general framework.

1. Check the monotonicity property: The output of Algorithm 7 is a round-based output. Thus, for an agent i, we first focus on the output of one round r. In round r, if i
is selected by Algorithm 7, then it has the minimum ratio c_i / |S_i - T_r| among all remaining agents. Now consider the case when i reports a smaller cost c'_i < c_i; obviously c'_i / |S_i - T_r| is still the minimum among all remaining agents. Consequently, agent i is still selected in round r, which means that the output of round r satisfies MP. Now we look at the updating rule. In every round, we only update T_{r+1} = T_r ∪ S_j and R = R ∪ {j}, which is obviously cross-independent. Thus, by applying Theorem 8, we know that the output of Algorithm 7 satisfies MP.

2. Find the cut value: To calculate the cut value for agent i with fixed cost vector c_{-i}, we follow the steps in Algorithm 4. First, we set c_i = ∞ and apply Algorithm 7. Let i_r be the agent selected in round r and T^{-i}_{r+1} the corresponding set. Then the cut value of round r is
ℓ_r = (c_{i_r} / |S_{i_r} - T^{-i}_r|) · |S_i - T^{-i}_r|.
Remember that the updating rule only updates the game setting, not the cost of the agent; thus g_r(x) = x, and the constraints of Algorithm 4 become x ≥ ℓ_r for 1 ≤ r ≤ t.
Therefore, the final cut value for agent i is
κ_i(GSC, c_{-i}) = max_r { (c_{i_r} / |S_{i_r} - T^{-i}_r|) · |S_i - T^{-i}_r| }.
The payment to an agent i is κ_i if i is selected; otherwise its payment is 0.

We now consider the scenario in which agent i can lie about S_i. Assume that agent i cannot lie upward, i.e., it can only report a set S'_i ⊆ S_i. We argue that agent i will not lie about its elements S_i. Notice that the cut value computed for round r is ℓ_r = (c_{i_r} / |S_{i_r} - T^{-i}_r|) · |S_i - T^{-i}_r|. Obviously |S'_i - T^{-i}_r| ≤ |S_i - T^{-i}_r| for any S'_i ⊆ S_i. Thus, reporting the set S'_i will not increase the cut value for any round, and hence lying about S_i will not improve agent i's utility.

5.2 Link-Weighted Steiner Trees
Consider a link-weighted network G = (V, E, c), where E = {e_1, e_2, ..., e_m} is the set of links and c_i is the weight of link e_i. The link-weighted Steiner tree problem is to find a tree rooted at a source node s spanning a given set of nodes Q = {q_1, q_2, ..., q_k} ⊂ V. For simplicity, we assume that q_i = v_i for 1 ≤ i ≤ k. Here the links are the agents. The total cost of the links in a subgraph H ⊆ G is called the weight of H, denoted ω(H). It is NP-hard to find the minimum-cost multicast tree for an arbitrary link-weighted graph G [17, 20]. The currently best polynomial-time method has approximation ratio 1 + (ln 3)/2 [17]. Here we review and discuss the first approximation method, by Takahashi and Matsuyama [20].

Algorithm 8 Find Link-Weighted Steiner Tree (LST)
Input: Network G = (V, E, c), where c is the cost vector for the link set E; source node s; receiver set Q.
Output: A tree LST rooted at s spanning all receivers.
1: Set r = 1, G_1 = G, Q_1 = Q and s_1 = s.
2: repeat
3: In graph G_r, find the receiver, say q_i, that is closest to the source s, i.e., LCP(s, q_i, c) has the least cost among the shortest paths from s to all receivers in Q_r.
4: Select all links on LCP(s, q_i, c) as relay links and set their costs to 0. Denote the new graph by G_{r+1}.
5: Set t_r = q_i and P_r = LCP(s, q_i, c).
6: Set Q_{r+1} = Q_r \ q_i and r = r + 1.
7: until all receivers are spanned

Hereafter, let LST(G) be the final tree constructed by the above method. It is shown in [24] that the mechanism M = (LST, p^{VCG}) is not truthful, where p^{VCG} is the payment calculated based on the VCG mechanism. We now show how to design a truthful payment scheme using our general framework. Observe that the output P_r of any round r satisfies MP, and that the update rule of every round satisfies crossing-independence. Thus, from Theorem 8, the round-based output LST satisfies MP. In round r, the cut value for a link e_i can be obtained by using the VCG mechanism: set c_i = ∞ and execute Algorithm 8. Let w^{-i}_r(c_{-i}) be the cost of the path P_r selected in the r-th round, and let Π^i_r(c_{-i}) be the shortest path selected in round r if the cost of e_i is temporarily set to -∞. Then the cut value for round r is
ℓ_r = w^{-i}_r(c_{-i}) - |Π^i_r(c_{-i})|,
where |Π^i_r(c_{-i})| is the cost of the path Π^i_r(c_{-i}) excluding the cost of e_i. Using Algorithm 4, we obtain the final cut value for agent i: κ_i(LST, c_{-i}) = max_r {ℓ_r}. Thus, the payment to a link e_i is κ_i(LST, d_{-i}) if its reported cost d_i satisfies d_i < κ_i(LST, d_{-i}); otherwise its payment is 0.

5.3 Virtual Minimal Spanning Trees
To connect a given set of receivers to the source node, besides the Steiner tree constructed by the algorithms described before, a virtual minimum spanning tree is also often used. Assume that Q is the set of receivers, including the sender, and that the nodes of a node-weighted graph are all agents. The virtual minimum
spanning tree is constructed as follows.

Algorithm 9 Construct VMST
1: for all pairs of receivers q_i, q_j ∈ Q do
2: Calculate the least cost path LCP(q_i, q_j, d).
3: Construct a virtual complete link-weighted graph K(d) using Q as its node set, where the link q_i q_j corresponds to the least cost path LCP(q_i, q_j, d) and its weight is w(q_i q_j) = |LCP(q_i, q_j, d)|.
4: Build the minimum spanning tree on K(d), denoted VMST(d).
5: for every virtual link q_i q_j in VMST(d) do
6: Find the corresponding least cost path LCP(q_i, q_j, d) in the original network.
7: Mark the agents on LCP(q_i, q_j, d) as selected.

The mechanism M = (VMST, p^{VCG}) is not truthful [24], where the payment p^{VCG} to a node is based on the VCG mechanism. We now show how to design a truthful mechanism based on the framework we described.

1. Check the monotonicity property: Remember that in the complete graph K(d) the weight of a link q_i q_j is |LCP(q_i, q_j, d)|. In other words, we implicitly defined |Q|(|Q| - 1)/2 functions f_{i,j}, for all i < j with q_i ∈ Q and q_j ∈ Q, where f_{i,j}(d) = |LCP(q_i, q_j, d)|. We can show that the function f_{i,j}(d) = |LCP(q_i, q_j, d)| satisfies FMP, LCP satisfies MP, and the output MST satisfies SMP. From Theorem 9, the allocation rule VMST satisfies the monotonicity property.

2. Find the cut value: Since VMST is the combination of MST and the functions f_{i,j}, the cut value for VMST can be computed based on Algorithm 6 as follows.
(a) Given a link-weighted complete graph K(d) on Q, we find the cut value for edge e_k = (q_i, q_j) based on MST. Given a spanning tree T and a pair of terminals p and q, there is clearly a unique path connecting them on T. We denote this path by Π_T(p, q), and the edge with the maximum length on this path by LE(p, q, T). Thus, the cut value can be represented as κ_k(MST, d) = |LE(q_i, q_j, MST(d|^k ∞))|.
(b) We find the value-cost function for LCP. Assume v_k ∈ LCP(q_i, q_j, d); then the value-cost function
is x_k = y_k - |LCP_{v_k}(q_i, q_j, d|^k 0)|. Here, LCP_{v_k}(q_i, q_j, d) is the least cost path between q_i and q_j that passes through node v_k.
(c) Remove v_k and calculate the graph K(d|^k ∞). Set h_{(i,j)} = |LCP(q_i, q_j, d|^k ∞)| for every pair of nodes i ≠ j, and let h = {h_{(i,j)}} be the corresponding vector. Then it is easy to show that τ_{(i,j)} = |LE(q_i, q_j, MST(h|^{(i,j)} ∞))| is the cut value for the output VMST. It is easy to verify that min{h_{(i,j)}, τ_{(i,j)}} = |LE(q_i, q_j, MST(h))|. Thus, we know that κ^{(i,j)}_k(VMST, d) is |LE(q_i, q_j, MST(h))| - |LCP_{v_k}(q_i, q_j, d|^k 0)|. The cut value for agent k is κ_k(VMST, d_{-k}) = max_{i,j} κ^{(i,j)}_k(VMST, d_{-k}).
3. We pay agent k the amount κ_k(VMST, d_{-k}) if and only if k is selected in VMST(d); otherwise we pay it 0.

5.4 Combinatorial Auctions
Lehmann et al. [12] studied how to design an efficient truthful mechanism for the single-minded combinatorial auction. In a single-minded combinatorial auction, there is a set of items S to be sold, and there is a set of agents 1 ≤ i ≤ n who want to buy some of the items: agent i wants to buy a subset S_i ⊆ S at a maximum price m_i. A single-minded bidder i declares a bid b_i = (S_i, a_i) with S_i ⊆ S and a_i ∈ R^+. Two bids (S_i, a_i) and (S_j, a_j) conflict if S_i ∩ S_j ≠ ∅. Given the bids b_1, b_2, ..., b_n, they gave a greedy round-based algorithm as follows. First the bids are sorted by some criterion (a_i / |S_i|^{1/2} is used in [12]) in decreasing order; let L be the list of sorted bids. The first bid is granted. The algorithm then examines each bid of L in order and grants the bid if it does not conflict with any of the bids previously granted; otherwise, the bid is denied. They proved that this greedy allocation scheme using criterion a_i / |S_i|^{1/2} approximates the optimal allocation within a factor of √m, where m is the number of goods in S. In the auction setting, we have c_i = -a_i. It is easy to verify that the output
of the greedy algorithm is a round-based output. Remember that after bidder j is selected in round r, every bidder that conflicts with j will not be selected in any later round. This is equivalent to updating the cost of every bidder conflicting with j to 0, which satisfies crossing-independence. In addition, in any round, if bidder i is selected with bid a_i, then it is still selected when it declares a'_i > a_i. Thus every round satisfies MP, and the cut value of round r is
ℓ_r = |S_i|^{1/2} · a_{j_r} / |S_{j_r}|^{1/2},
where j_r is the bidder selected in round r when agent i is not considered at all. Notice that a_{j_r} / |S_{j_r}|^{1/2} does not increase as r increases, so the final cut value is |S_i|^{1/2} · a_j / |S_j|^{1/2}, where b_j is the first bid that is denied but would have been granted were it not for the presence of bidder i. Thus, the payment by agent i is |S_i|^{1/2} · a_j / |S_j|^{1/2} if a_i ≥ |S_i|^{1/2} · a_j / |S_j|^{1/2}, and 0 otherwise. This payment scheme is exactly the payment scheme in [12].

6. CONCLUSIONS
In this paper, we have studied how to design a truthful mechanism M = (O, P) for a given allocation rule O for a binary demand game. We first showed that the allocation rule O satisfying MP is a necessary and sufficient condition for a truthful mechanism M to exist. We then formulated a general framework for designing a payment P such that the mechanism M = (O, P) is truthful and computable in polynomial time. We further presented several general composition-based techniques to compute P efficiently for various allocation rules O.
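As a concrete illustration, the greedy allocation of Section 5.4 and its cut-value payment can be sketched in Python as follows. This is a minimal sketch under our own naming; the function names and the small example bids are ours, not from the paper, and bids are encoded as (item set, bid amount) pairs.

```python
from math import sqrt

def rank(bid):
    # The ranking criterion from [12]: a / |S|^(1/2).
    items, amount = bid
    return amount / sqrt(len(items))

def greedy_allocate(bids):
    # Grant bids in decreasing rank order, skipping any bid that
    # conflicts (shares an item) with a previously granted bid.
    granted, taken = [], set()
    for i in sorted(range(len(bids)), key=lambda i: -rank(bids[i])):
        items, _ = bids[i]
        if not (items & taken):
            granted.append(i)
            taken |= items
    return granted

def cut_value_payment(bids, i):
    # Payment for a granted bidder i: rerun the greedy rule without i and
    # find the first granted bid j that conflicts with S_i; i then pays
    # |S_i|^(1/2) * a_j / |S_j|^(1/2), or 0 if no such bid exists.
    others = [b for k, b in enumerate(bids) if k != i]
    for j in greedy_allocate(others):
        if others[j][0] & bids[i][0]:
            return sqrt(len(bids[i][0])) * rank(others[j])
    return 0.0

bids = [({1, 2}, 6.0), ({2, 3}, 5.0), ({3, 4}, 4.0)]
print(greedy_allocate(bids))       # bidders 0 and 2 are granted
print(cut_value_payment(bids, 0))  # ~5.0: the critical bid below which bidder 0 loses
```

If bidder 0 lowered its bid below 5.0, bid ({2, 3}, 5.0) would outrank it and block it, so 5.0 is exactly bidder 0's threshold, matching the cut-value characterization above.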
Several concrete examples were discussed to demonstrate our general framework for designing P and our composition-based techniques for computing P in polynomial time. In this paper, we have concentrated on how to compute P in polynomial time; our algorithms do not necessarily have the optimal running time for computing P given O. It would be of interest to design algorithms that compute P in optimal time. We have made some progress in this direction in [22], providing an algorithm that computes the payments for unicast in a node-weighted graph in optimal O(n log n + m) time. Another research direction is to design an approximation allocation rule O satisfying MP with a good approximation ratio for a given binary demand game. Many works in the mechanism design literature [12, 13] are in this direction. We point out that the goal of this paper is not to design a better allocation rule for a problem, but to design an algorithm that computes the payments efficiently when O is given. It would be of significance to design allocation rules with good approximation ratios such that a given binary demand game has a computationally efficient payment scheme. In this paper, we have studied mechanism design for binary demand games. However, some problems cannot be directly formulated as binary demand games; the job scheduling problem in [2] is such an example. For that problem, a truthful payment scheme P exists for an allocation rule O if and only if the workload assigned by O is monotonic in a certain manner. It would be of interest to generalize our framework for designing a truthful payment scheme from binary demand games to non-binary demand games. Toward this research direction, Theorem 4 can be extended to a general allocation rule O whose range is R^+. The remaining difficulty is then how to compute the payment P under mild assumptions about the valuations if a truthful mechanism M = (O, P) does exist.

Acknowledgements
We would like to thank
Rakesh Vohra, Tuomas Sandholm, and the anonymous reviewers for helpful comments and discussions.

7. REFERENCES
[1] A. Archer, C. Papadimitriou, K. Talwar, and E. Tardos. An approximate truthful mechanism for combinatorial auctions with single parameter agents. In ACM-SIAM SODA (2003), pp. 205-214.
[2] A. Archer and E. Tardos. Truthful mechanisms for one-parameter agents. In Proceedings of the 42nd IEEE FOCS (2001), IEEE Computer Society, p. 482.
[3] V. Auletta, R. D. Prisco, P. Penna, and P. Persiano. Deterministic truthful approximation schemes for scheduling related machines.
[4] V. Chvatal. A greedy heuristic for the set covering problem. Mathematics of Operations Research 4, 3 (1979), 233-235.
[5] E. H. Clarke. Multipart pricing of public goods. Public Choice (1971), 17-33.
[6] R. Muller and R. V. Vohra. On dominant strategy mechanisms. Working paper, 2003.
[7] N. R. Devanur, M. Mihail, and V. V. Vazirani. Strategyproof cost-sharing mechanisms for set cover and facility location games. In ACM Electronic Commerce (EC03) (2003).
[8] J. Feigenbaum, A. Krishnamurthy, R. Sami, and S. Shenker. Approximation and collusion in multicast cost sharing (abstract). In ACM Economic Conference (2001).
[9] J. Feigenbaum, C. Papadimitriou, R. Sami, and S. Shenker. A BGP-based mechanism for lowest-cost routing. In Proceedings of the 2002 ACM Symposium on Principles of Distributed Computing (2002), pp. 173-182.
[10] J. Green and J. J. Laffont. Characterization of satisfactory mechanisms for the revelation of preferences for public goods. Econometrica (1977), 427-438.
[11] T. Groves. Incentives in teams. Econometrica (1973), 617-631.
[12] D. Lehmann, L. I. O'Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM 49, 5 (2002), 577-602.
[13] A. Mu'alem and N. Nisan. Truthful approximation mechanisms for restricted combinatorial auctions: extended abstract. In 18th National Conference on Artificial Intelligence (2002), American Association for Artificial Intelligence, pp. 379-384.
[14] N. Nisan and A. Ronen. Algorithmic mechanism design. In Proc. 31st Annual ACM STOC (1999), pp. 129-140.
[15] E. Halperin. Improved approximation algorithms for the vertex cover problem in graphs and hypergraphs. In Proceedings of the 11th Annual ACM-SIAM Symposium on Discrete Algorithms (2000), pp. 329-337.
[16] R. Bar-Yehuda and S. Even. A local ratio theorem for approximating the weighted vertex cover problem. Annals of Discrete Mathematics, Volume 25: Analysis and Design of Algorithms for Combinatorial Problems (1985), G. Ausiello and M. Lucertini, Eds., pp. 27-46.
[17] G. Robins and A. Zelikovsky. Improved Steiner tree approximation in graphs. In Proceedings of the 11th Annual ACM-SIAM SODA (2000), pp. 770-779.
[18] A. Zelikovsky. An 11/6-approximation algorithm for the network Steiner problem. Algorithmica 9, 5 (1993), 463-470.
[19] D. S. Hochbaum. Efficient bounds for the stable set, vertex cover, and set packing problems. Discrete Applied Mathematics 6 (1983), 243-254.
[20] H. Takahashi and A. Matsuyama. An approximate solution for the Steiner problem in graphs. Math. Japonica 24 (1980), 573-577.
[21] W. Vickrey. Counterspeculation, auctions and competitive sealed tenders. Journal of Finance (1961), 8-37.
[22] W. Wang and X.-Y. Li. Truthful low-cost unicast in selfish wireless networks. IEEE Transactions on Mobile Computing (2005), to appear.
[23] W. Wang, X.-Y. Li, and Z. Sun. Design multicast protocols for non-cooperative networks. IEEE INFOCOM 2005, to appear.
[24] W. Wang, X.-Y. Li, and Y. Wang.
Truthful multicast in selfish wireless networks. ACM MobiCom, 2005.

Towards Truthful Mechanisms for Binary Demand Games: A General Framework

ABSTRACT
The family of Vickrey-Clarke-Groves (VCG) mechanisms is arguably the most celebrated achievement in truthful mechanism design. However, VCG mechanisms have their limitations. They only apply to optimization problems with a utilitarian (or affine) objective function, and their output must optimize the objective function. For many optimization problems, finding the optimal output is computationally intractable. If we apply VCG mechanisms to polynomial-time algorithms that approximate the optimal solution, the resulting mechanisms may no longer be truthful. In light of these limitations, it is useful to study whether we can design a truthful non-VCG payment scheme that is computationally tractable for a given allocation rule O. In this paper, we focus our attention on binary demand games, in which an agent's only available actions are to take part in the game or not. For these problems, we prove that a truthful mechanism M = (O, P) with a proper payment method P exists iff the allocation rule O satisfies a certain monotonicity property. We provide a general framework to design such P. We further propose several general composition-based techniques to compute P efficiently for various types of output. In particular, we show how P can be computed through "or/and" combinations, round-based combinations, and some more complex combinations of the outputs from subgames.

1. INTRODUCTION
In recent years, with the rapid development of the Internet, many protocols and algorithms have been proposed to make the Internet more efficient and reliable. The Internet is a complex distributed system where a multitude of heterogeneous agents cooperate to achieve some common goals, and the existing protocols and algorithms often assume that all agents will follow the prescribed rules without
deviation. However, in settings where the agents are selfish rather than altruistic, it is more reasonable to assume, following neoclassical economics, that these agents are rational, i.e., that they maximize their own profits, and new models are needed to cope with the selfish behavior of such agents. Towards this end, Nisan and Ronen [14] proposed the framework of algorithmic mechanism design and applied VCG mechanisms to some fundamental problems in computer science, including shortest paths, minimum spanning trees, and scheduling on unrelated machines. The VCG mechanisms [5, 11, 21] are applicable to mechanism design problems whose outputs optimize the utilitarian objective function, which is simply the sum of all agents' valuations. Unfortunately, some objective functions are not utilitarian; even for problems with a utilitarian objective function, it is sometimes impossible to find the optimal output in polynomial time unless P = NP. Mechanisms other than VCG are needed to address these issues. Archer and Tardos [2] studied a scheduling problem for which it is NP-hard to find the optimal output. They pointed out that a certain monotonicity property of the output workload is a necessary and sufficient condition for the existence of a truthful mechanism for their scheduling problem. Auletta et al. [3] studied a similar scheduling problem. They provided a family of deterministic truthful (2 + ε)-approximation mechanisms for any fixed number of machines and several (1 + ε)-truthful mechanisms for some NP-hard restrictions of their scheduling problem. Lehmann et al.
[12] studied the single-minded combinatorial auction and gave a √m-approximation truthful mechanism, where m is the number of goods. They also pointed out that a certain monotonicity in the allocation rule can lead to a truthful mechanism. The work of Mu'alem and Nisan [13] is the closest in spirit to our work. They characterized all truthful mechanisms based on a certain monotonicity property in a single-minded auction setting. They also showed how to use MAX and IF-THEN-ELSE constructions to combine outputs from subproblems. As shown in this paper, the MAX and IF-THEN-ELSE combinations are special cases of the composition-based techniques that we present for computing the payments in polynomial time under mild assumptions. More generally, we study how to design truthful mechanisms for binary demand games, in which the allocation of an agent is either "selected" or "not selected". We also assume that the valuations of agents are uncorrelated, i.e., the valuation of an agent depends only on its own allocation and type. Recall that a mechanism M = (O, P) consists of two parts, an allocation rule O and a payment scheme P. Previously, it was often assumed that there is an objective function g and an allocation rule O that optimizes g either exactly or approximately. In contrast to the VCG mechanisms, we do not require that the allocation optimize the objective function. In fact, we do not even require the existence of an objective function. Given any allocation rule O for a binary demand game, we show that a truthful mechanism M = (O, P) exists for the game if and only if O satisfies a certain monotonicity property. The monotonicity property only guarantees the existence of a payment scheme P such that (O, P) is truthful. We complement this existence theorem with a general framework to design such a payment scheme P.
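To make the connection between monotonicity and payments concrete, here is a minimal sketch (our illustration, not the paper's algorithm) of the idea in a cost setting: if a monotone allocation rule keeps agent i selected whenever it reports a smaller cost, then there is a critical "cut" value above which i loses, and paying a selected agent exactly that cut value yields a truthful mechanism. The `two_cheapest` rule, the bisection bounds, and the assumption of a continuous cost range are illustrative choices of ours.

```python
from typing import Callable, List

def cut_value(allocation: Callable[[List[float]], List[bool]],
              costs: List[float], i: int,
              hi: float = 1e6, tol: float = 1e-9) -> float:
    """Bisect for agent i's critical cost: the largest reported cost at
    which a monotone allocation rule still selects agent i, with the
    other agents' reports held fixed."""
    lo, up = 0.0, hi
    while up - lo > tol:
        mid = (lo + up) / 2
        bid = costs[:]          # agent i deviates; others report as given
        bid[i] = mid
        if allocation(bid)[i]:  # still selected => critical value is higher
            lo = mid
        else:
            up = mid
    return lo

# Example of a monotone rule: select the two cheapest agents.
def two_cheapest(costs):
    order = sorted(range(len(costs)), key=lambda j: costs[j])
    winners = set(order[:2])
    return [j in winners for j in range(len(costs))]

costs = [3.0, 7.0, 5.0, 9.0]
# Agent 0 wins and is paid its cut value: the third-lowest cost, 7.0.
print(round(cut_value(two_cheapest, costs, 0), 3))  # -> 7.0
```

Note that the payment to a winner does not depend on its own report, only on the point at which the rule would stop selecting it; this is the property the paper's framework formalizes.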
Furthermore, we present general techniques to compute the payment when the output is a composition of the outputs of subgames through the operators "or" and "and"; through round-based combinations; or through intermediate results, which may themselves be computed from other subproblems. The remainder of the paper is organized as follows. In Section 2, we discuss preliminaries and previous work, define binary demand games, and state our basic assumptions about them. In Section 3, we show that O satisfying a certain monotonicity property is a necessary and sufficient condition for the existence of a truthful mechanism M = (O, P). A framework is then proposed in Section 4 to compute the payment P in polynomial time for several types of allocation rules O. In Section 5, we provide several examples to demonstrate the effectiveness of our general framework. We conclude our paper in Section 6 with some possible future directions.

2. PRELIMINARIES

2.1 Mechanism Design

As is usually done in the literature on designing algorithms or protocols with inputs from individual agents, we adopt the assumption from neoclassical economics that all agents are rational, i.e., they respond to well-defined incentives and deviate from the protocol only if the deviation improves their gain. A standard model for mechanism design is as follows. There are n agents 1, ..., n, and each agent i has some private information ti, called its type, known only to itself. For example, the type ti can be the cost that agent i incurs for forwarding a packet in a network, or a payment that the agent is willing to pay for a good in an auction. The agents' types define the type vector t = (t1, t2, ..., tn). Each agent i has a set of strategies Ai from which it can choose. For each input vector a = (a1, ..., an), where agent i plays strategy ai ∈ Ai, the mechanism M = (O, P) computes an output o = O(a) and a payment vector p(a) = (p1(a), ..., pn(a)). Here
the payment pi(·) is the money given to agent i and depends on the strategies used by the agents. A game is defined as G = (S, M), where S is the setting for the game G. Here, S consists of the parameters of the game that are set before the game starts and do not depend on the players' strategies. For example, in a unicast routing game [14], the setting consists of the topology of the network, the source node, and the destination node. Throughout this paper, unless explicitly mentioned otherwise, the setting S of the game is fixed and we are only interested in how to design P for a given allocation rule O. A valuation function v(ti, o) assigns a monetary amount to agent i for each possible output o. Everything about a game (S, M), including the setting S, the allocation rule O, and the payment scheme P, is public knowledge except agent i's actual type ti, which is private information to agent i. Let ui(ti, o) denote the utility of agent i at the outcome o of the game, given its preferences ti. Here, following a common assumption in the literature, we assume the utility for agent i is quasi-linear, i.e., ui(ti, o) = v(ti, o) + pi(a). Let a|i a'i = (a1, ..., ai-1, a'i, ai+1, ..., an), i.e., every agent j ≠ i plays action aj while agent i plays a'i. Let a-i = (a1, ..., ai-1, ai+1, ..., an) denote the actions of all agents except i.
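As a toy instantiation of this model (our example, not taken from the paper), consider a sealed-bid single-item auction run as a direct-revelation mechanism: the allocation O selects the highest bidder, the winner's payment pi is minus the second-highest bid (the money it hands over, since payments here denote money given to the agent), and utilities are quasi-linear as defined above.

```python
# Single-item second-price auction as a mechanism M = (O, P).
def mechanism(bids):
    winner = max(range(len(bids)), key=lambda j: bids[j])
    second = max(b for j, b in enumerate(bids) if j != winner)
    alloc = [j == winner for j in range(len(bids))]
    # p_i is money GIVEN to agent i, so the winner's payment is negative.
    pay = [-second if j == winner else 0.0 for j in range(len(bids))]
    return alloc, pay

def utility(true_value, bids, i):
    """Quasi-linear utility u_i = v(t_i, o_i) + p_i."""
    alloc, pay = mechanism(bids)
    return (true_value if alloc[i] else 0.0) + pay[i]

# Truthful bidding is dominant here: for agent 0 with true value 10,
# no alternative bid does better than reporting 10 against fixed others.
others = [7.0, 12.0]
for deviation in [0.0, 5.0, 11.0, 13.0, 20.0]:
    truthful = utility(10.0, [10.0] + others, 0)
    deviated = utility(10.0, [deviation] + others, 0)
    assert truthful >= deviated
```

This previews the incentive-compatibility condition defined next: with others' reports fixed, a truthful report (weakly) maximizes the agent's utility.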
Sometimes we write (a-i, bi) as a|i bi. An action ai is called dominant for i if it (weakly) maximizes the utility of i for all possible strategies b-i of the other agents, i.e., ui(ti, O(b-i, ai)) ≥ ui(ti, O(b-i, a'i)) for all a'i ≠ ai and all b-i. A direct-revelation mechanism is a mechanism in which the only actions available to each agent are to report its private type, either truthfully or falsely, to the mechanism. An incentive compatible (IC) mechanism is a direct-revelation mechanism in which an agent maximizes its utility by reporting its type ti truthfully. Thus, in a direct-revelation mechanism satisfying IC, the payment scheme should satisfy the property that, for each agent i, v(ti, O(t)) + pi(t) ≥ v(ti, O(t|i t'i)) + pi(t|i t'i). Another common requirement in the mechanism design literature is so-called individual rationality (IR), or voluntary participation: an agent's utility from participating in the output of the mechanism is not less than its utility from not participating. A direct-revelation mechanism is strategyproof if it satisfies both the IC and IR properties. Arguably the most important positive result in mechanism design is the generalized Vickrey-Clarke-Groves (VCG) mechanism by Vickrey [21], Clarke [5], and Groves [11]. The VCG mechanism applies to (affine) maximization problems where the objective function is utilitarian, g(o, t) = Σi v(ti, o) (i.e., the sum of all agents' valuations), and the set of possible outputs is assumed to be finite. A direct-revelation mechanism M = (O(t), P(t)) belongs to the VCG family if (1) the allocation O(t) maximizes Σi v(ti, o), and (2) the payment to agent i is pi(t) = Σ_{j≠i} vj(tj, O(t)) + hi(t-i), where hi(·) is an arbitrary function of t-i. Under mild assumptions, VCG mechanisms are the only truthful implementations for utilitarian problems [10]. The allocation rule of a VCG mechanism is required to maximize the objective function in the range of the
allocation function. This makes the mechanism computationally intractable in many cases. Furthermore, replacing an optimal algorithm for computing the output with an approximation algorithm usually leads to untruthful mechanisms if a VCG payment scheme is used. In this paper, we study how to design a truthful mechanism that does not optimize a utilitarian objective function.

2.2 Binary Demand Games

A binary demand game is a game G = (S, M), where M = (O, P) and the range of O is {0, 1}^n. In other words, the output is an n-tuple vector O(t) = (O1(t), O2(t), ..., On(t)), where Oi(t) = 1 (respectively, 0) means that agent i is (respectively, is not) selected. Examples of binary demand games include unicast [14, 22, 9] and multicast [23, 24, 8] (more generally, subgraph construction by selecting some links/nodes to satisfy some property), facility location [7], and certain auctions [12, 2, 13]. Hereafter, we make the following further assumptions.
1. The valuations of the agents are not correlated, i.e., v(ti, o) is a function of oi only, and is denoted as v(ti, oi).
2. The valuation v(ti, 0) of an unselected agent is a publicly known value and is normalized to 0. This assumption is needed to guarantee the IR property.
Thus, throughout this paper, we only consider direct-revelation mechanisms in which every agent needs to reveal only its valuation vi = v(ti, 1). Notice that in applications where agents provide service and receive payment, e.g., unicast and job scheduling, the valuation vi of an agent i is usually negative. For convenience of presentation, we define the cost of agent i as ci = −v(ti, 1), i.e., it costs agent i the amount ci to provide the service. Throughout this paper, we will use ci instead of vi in our analysis. All our results apply to the case where the agents receive the service rather than provide it, by setting ci negative, as in an auction. In a binary demand game, if we want to optimize an objective function g(o, t), then we call
it a binary optimization demand game. The main differences between binary demand games and the problems that can be solved by VCG mechanisms are:
1. The objective function must be utilitarian (an affine maximization problem) for a problem solvable by VCG, while there is no restriction on the objective function of a binary demand game.
2. The allocation rule O studied here does not necessarily optimize an objective function, while a VCG mechanism only uses an output that optimizes the objective function. We do not even require the existence of an objective function.
3. We assume that the agents' valuations are not correlated in a binary demand game, while the agents' valuations may be correlated in a VCG mechanism.
In this paper, we assume for technical convenience that the objective function g(o, c), if it exists, is continuous with respect to the cost ci, but most of our results apply directly to the discrete case without any modification.

2.3 Previous Work

Lehmann et al. [12] studied how to design an efficient truthful mechanism for the single-minded combinatorial auction. In a single-minded combinatorial auction, each agent i (1 ≤ i ≤ n) wants to buy only a single subset Si ⊆ S, with private price ci. A single-minded bidder i declares a bid bi = (S'i, ai) with S'i ⊆ S and ai ∈ R+. In [12], it is assumed that the set of goods allocated to an agent i is either S'i or ∅, which is known as exactness. Lehmann et al. gave a greedy round-based allocation algorithm, based on ranking bids by ai/√|S'i|, that has an approximation ratio √m, where m is the number of goods in S.
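The greedy allocation just described can be sketched as follows (our illustration of the rule in [12]; the conflict check and tie-breaking details are our assumptions): bids are ranked by ai/√|S'i|, and a bid is granted exactly when its desired set is disjoint from everything granted so far, respecting exactness.

```python
import math

def greedy_allocation(bids):
    """bids: list of (set_of_goods, offered_price) for single-minded bidders.
    Rank bids by a_i / sqrt(|S'_i|); grant a bid only if its whole desired
    set is still free (exactness: each bidder gets all of S'_i or nothing).
    Returns the sorted indices of the winning bids."""
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i][1] / math.sqrt(len(bids[i][0])),
                   reverse=True)
    taken, winners = set(), []
    for i in order:
        goods, _ = bids[i]
        if taken.isdisjoint(goods):
            winners.append(i)
            taken |= goods
    return sorted(winners)

bids = [({'a', 'b'}, 10.0), ({'b', 'c'}, 9.0), ({'c'}, 4.0)]
# Ranks: 10/sqrt(2) > 9/sqrt(2) > 4; bid 0 takes {a, b}, bid 1 conflicts
# on 'b' and is denied, bid 2 takes {c}.
print(greedy_allocation(bids))  # -> [0, 2]
```

The rule is monotone (offering more money, or the same money for fewer goods, can only improve a bid's rank), which is what makes a truthful critical-value payment possible.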
Based on the approximation algorithm, they gave a truthful payment scheme. For an allocation rule satisfying (1) exactness (the set of goods allocated to an agent i is either S'i or ∅) and (2) monotonicity (proposing more money for fewer goods cannot cause a bidder to lose its bid), they proposed a truthful payment scheme as follows: (1) charge a winning bidder a certain amount that does not depend on its own bid; (2) charge a losing bidder 0. Notice that the exactness assumption means the single-minded auction is indeed a binary demand game. Their payment scheme inspired our payment scheme for binary demand games. In [1], Archer et al. studied combinatorial auctions in which multiple copies of many different items are on sale and each bidder i desires only one subset Si. They devised a randomized rounding method that is incentive compatible and gave a truthful mechanism for combinatorial auctions with single-parameter agents that approximately maximizes the social value of the auction. As they pointed out, their method is strongly truthful in the sense that it is truthful with high probability 1 − ε, where ε is an error probability. In contrast, in this paper we study how to design a deterministic mechanism that is truthful, based on a given allocation rule. In [2], Archer and Tardos showed how to design truthful mechanisms for several combinatorial problems in which each agent's private information is naturally expressed by a single positive real number, which is always the cost incurred per unit load. The mechanism's output can be an arbitrary real number, but the valuation is a quasi-linear function t · w, where t is the private per-unit cost and w is the work load. Archer and Tardos showed that all truthful mechanisms must have decreasing "work curves" w and that the truthful payment must be Pi(bi) = Pi(0) + bi·wi(bi) − ∫_0^{bi} wi(u) du. Using this model, Archer and Tardos designed truthful mechanisms for several
scheduling-related problems, including minimizing the makespan, maximizing flow, and minimizing the weighted sum of completion times. Notice that when the load of the problem is w ∈ {0, 1}, it is indeed a binary demand game. If we apply their characterization of truthful mechanisms, their decreasing "work curves" w correspond exactly to the monotonicity property of the output. But notice that their proof relies heavily on the assumption that the output is a continuous function of the cost, so their conclusion cannot be applied directly to binary demand games. The paper of Ahuva Mu'alem and Noam Nisan [13] is closest in spirit to our work. They clearly stated that "we only discussed a limited class of bidders, single minded bidders, that was introduced by" [12]. They proved that all truthful mechanisms must have a monotone output, and their payment scheme is based on the cut value. With a simple generalization, we obtain our conclusion for general binary demand games. They proposed several combination methods, including MAX and IF-THEN-ELSE constructions, to perform a partial search. All of their methods require the welfare function associated with the output to satisfy a bitonic property. Distinction between our contributions and previous results: it has been shown in [2, 6, 12, 13] that for the single-minded combinatorial auction there exists a payment scheme which results in a truthful mechanism if the allocation rule satisfies a certain monotonicity property. Theorem 4 also depends on the monotonicity property, but it is applicable to a broader setting than the single-minded combinatorial auction. In addition, the binary demand game studied here is different from traditional packing IPs: we only require that the allocation to each agent is binary and that the allocation rule satisfies a certain monotonicity property; we put no restrictions on the objective function. Furthermore, the main focus of this paper is to design general techniques to find
the truthful payment scheme for a given allocation rule O satisfying a certain monotonicity property.

3. GENERAL APPROACHES
3.1 Properties of Strategyproof Mechanisms
3.2 Existence of Strategyproof Mechanisms

4. COMPUTING CUT VALUE FUNCTIONS
4.1 Simple Combinations
4.2 Round-Based Allocations
4.3 Complex Combinations

5. CONCRETE EXAMPLES
5.1 Set Cover
5.2 Link Weighted Steiner Trees
5.3 Virtual Minimal Spanning Trees
5.4 Combinatorial Auctions

6. CONCLUSIONS

In this paper, we have studied how to design a truthful mechanism M = (O, P) for a given allocation rule O for a binary demand game. We first showed that the allocation rule O satisfying the MP is a necessary and sufficient condition for a truthful mechanism M to exist. We then formulated a general framework for designing a payment scheme P such that the mechanism M = (O, P) is truthful and computable in polynomial time. We further presented several general composition-based techniques to compute P efficiently for various allocation rules O.
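One simple special case of the "or/and" combinations of Section 4.1 can be sketched as follows (our illustration, under the assumption that each monotone subgame selects agent i exactly when its reported cost lies below that subgame's cut value κj; the paper's general construction is broader): in the "or" combination the agent is selected iff ci < max(κ1, κ2), and in the "and" combination iff ci < min(κ1, κ2), so the combined games stay monotone and their cut values are just the max and min of the subgame cut values.

```python
# Under the threshold assumption above, the combined cut values are:
#   "or"  (selected in at least one subgame): c_i < max(kappa_1, kappa_2)
#   "and" (selected in every subgame):        c_i < min(kappa_1, kappa_2)

def or_cut(kappas):   # cut value of the "or" combination
    return max(kappas)

def and_cut(kappas):  # cut value of the "and" combination
    return min(kappas)

# Sanity check: membership computed per subgame agrees with the combined cut.
kappas = [4.0, 9.0]
for c in [3.0, 6.0, 11.0]:
    selected_or = any(c < k for k in kappas)
    selected_and = all(c < k for k in kappas)
    assert selected_or == (c < or_cut(kappas))
    assert selected_and == (c < and_cut(kappas))
```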
Several concrete examples were discussed to demonstrate our general framework for designing P and for composition-based techniques of computing P in polynomial time.\nIn this paper, we have concentrated on how to compute P in polynomial time.\nOur algorithms do not necessarily have the optimal running time for computing P given O.\nIt would be of interest to design algorithms to compute P in optimal time.\nWe have made some progress in this research direction in [22] by providing an algorithm to compute the payments for unicast in a node weighted graph in optimal O (n log n + m) time.\nAnother research direction is to design an approximation allocation rule O satisfying MP with a good approximation ratio for a given binary demand game.\nMany works [12, 13] in the mechanism design literature are in this direction.\nWe point out here that the goal of this paper is not to design a better allocation rule for a problem, but to design an algorithm to compute the payments efficiently when O is given.\nIt would be of significance to design allocation rules with good approximation ratios such that a given binary demand game has a computationally efficient payment scheme.\nIn this paper, we have studied mechanism design for binary demand games.\nHowever, some problems cannot be directly formulated as binary demand games.\nThe job scheduling problem in [2] is such an example.\nFor this problem, a truthful payment scheme P exists for an allocation rule O if and only if the workload assigned by O is monotonic in a certain manner.\nIt wound be of interest to generalize our framework for designing a truthful payment scheme for a binary demand game to non-binary demand games.\nTowards this research direction, Theorem 4 can be extended to a general allocation rule O, whose range is R +.\nThe remaining difficulty is then how to compute the payment P under mild assumptions about the valuations if a truthful mechanism M = (O, P) does exist.","lvl-4":"Towards Truthful Mechanisms for 
Binary Demand Games: A General Framework\nABSTRACT\nThe family of Vickrey-Clarke-Groves (VCG) mechanisms is arguably the most celebrated achievement in truthful mechanism design.\nHowever, VCG mechanisms have their limitations.\nThey only apply to optimization problems with a utilitarian (or affine) objective function, and their output should optimize the objective function.\nFor many optimization problems, finding the optimal output is computationally intractable.\nIf we apply VCG mechanisms to polynomial-time algorithms that approximate the optimal solution, the resulting mechanisms may no longer be truthful.\nIn light of these limitations, it is useful to study whether we can design a truthful non-VCG payment scheme that is computationally tractable for a given allocation rule O.\nIn this paper, we focus our attention on binary demand games in which the agents' only available actions are to take part in the a game or not to.\nFor these problems, we prove that a truthful mechanism M = (O, P) exists with a proper payment method P iff the allocation rule O satisfies a certain monotonicity property.\nWe provide a general framework to design such P.\nWe further propose several general composition-based techniques to compute P efficiently for various types of output.\nIn particular, we show how P can be computed through \"or\/and\" combinations, round-based combinations, and some more complex combinations of the outputs from subgames.\n1.\nINTRODUCTION\nTowards this end, Nisan and Ronen [14] proposed the framework of algorithmic mechanism design and applied VCG mechanisms to some fundamental problems in computer science, including shortest paths, minimum spanning trees, and scheduling on unrelated machines.\nThe VCG mechanisms [5, 11, 21] are applicable to mechanism design problems whose outputs optimize the utilitarian objective function, which is simply the sum of all agents' valuations.\nUnfortunately, some objective functions are not utilitarian; even for those 
problems with a utilitarian objective function, sometimes it is impossible to find the optimal output in polynomial time unless P = NP.\nSome mechanisms other than VCG mechanism are needed to address these issues.\nArcher and Tardos [2] studied a scheduling problem where it is NP-Hard to find the optimal output.\nThey pointed out that a certain monotonicity property of the output work load is a necessary and sufficient condition for the existence of a truthful mechanism for their scheduling problem.\nAuletta et al. [3] studied a similar scheduling problem.\nThey provided a family of deterministic truthful (2 + e) - approximation mechanisms for any fixed number of machines and several (1 + e) - truthful mechanisms for some NP-hard restrictions of their scheduling problem.\nLehmann et al. \u221a m-approximation truthful mechanism, where m is the number of [12] studied the single-minded combinatorial auction and gave a goods.\nThey also pointed out that a certain monotonicity in the allocation rule can lead to a truthful mechanism.\nThe work of Mu'alem and Nisan [13] is the closest in spirit to our work.\nThey characterized all truthful mechanisms based on a certain monotonicity property in a single-minded auction setting.\nThey also showed how to used MAX and IF-THEN-ELSE to combine outputs from subproblems.\nAs shown in this paper, the MAX and IF-THEN-ELSE combinations are special cases of the composition-based techniques that we present in this paper for computing the payments in polynomial time under mild assumptions.\nMore generally, we study how to design truthful mechanisms for binary demand games where the allocation of an agent is either \"selected\" or \"not selected\".\nWe also assume that the valuations\nof agents are uncorrelated, i.e., the valuation of an agent only depends on its own allocation and type.\nRecall that a mechanism M = (O, P) consists of two parts, an allocation rule O and a payment scheme P. 
Previously, it is often assumed that there is an objective function g and an allocation rule O, that either optimizes g exactly or approximately.\nIn contrast to the VCG mechanisms, we do not require that the allocation should optimize the objective function.\nIn fact, we do not even require the existence of an objective function.\nGiven any allocation rule O for a binary demand game, we showed that a truthful mechanism M = (O, P) exists for the game if and only if O satisfies a certain monotonicity property.\nThe monotonicity property only guarantees the existence of a payment scheme P such that (O, P) is truthful.\nThe remainder of the paper is organized as follows.\nIn Section 2, we discuss preliminaries and previous works, define binary demand games and discuss the basic assumptions about binary demand games.\nIn Section 3, we show that O satisfying a certain monotonicity property is a necessary and sufficient condition for the existence of a truthful mechanism M = (O, P).\nA framework is then proposed in Section 4 to compute the payment P in polynomial time for several types of allocation rules O.\nIn Section 5, we provide several examples to demonstrate the effectiveness of our general framework.\nWe conclude our paper in Section 6 with some possible future directions.\n2.\nPRELIMINARIES 2.1 Mechanism Design\nA standard model for mechanism design is as follows.\nThere are n agents 1,..., n and each agent i has some private information ti, called its type, only known to itself.\nFor example, the type ti can be the cost that agent i incurs for forwarding a packet in a network or can be a payment that the agent is willing to pay for a good in an auction.\nThe agents' types define the type vector t = (t1, t2,..., tn).\nEach agent i has a set of strategies Ai from which it can choose.\nFor each input vector a = (a1,..., an) where agent i plays strategy ai \u2208 Ai, the mechanism M = (O, P) computes an output o = O (a) and a payment vector p (a) = (p1 (a),..., pn 
(a)).\nHere the payment pi (\u00b7) is the money given to agent i and depends on the strategies used by the agents.\nThroughout this paper, unless explicitly mentioned otherwise, the setting S of the game is fixed and we are only interested in how to design P for a given allocation rule O.\nHere, following a common assumption in the literature, we assume the utility for agent i is quasi-linear, i.e., ui (ti, o) = v (ti, o) + Pi (a).\nA direct-revelation mechanism is a mechanism in which the only actions available to each agent are to report its private type either truthfully or falsely to the mechanism.\nAn incentive compatible (IC) mechanism is a direct-revelation mechanism in which if an agent reports its type ti truthfully, then it will maximize its utility.\nThen, in a direct-revelation mechanism satisfying IC, the payment scheme should satisfy the property that, for each agent i, v (ti, O (t)) + pi (t) \u2265 v (ti, O (t | it' i)) + pi (t | it' i).\nAnother common requirement in the literature for mechanism design is so called individual rationality or voluntary participation: the agent's utility of participating in the output of the mechanism is not less than the utility of the agent of not participating.\nA direct-revelation mechanism is strategproof if it satisfies both IC and IR properties.\nArguably the most important positive result in mechanism design is the generalized Vickrey-Clarke-Groves (VCG) mechanism by Vickrey [21], Clarke [5], and Groves [11].\nThe VCG mechanism applies to (affine) maximization problems where the objective function is utilitarian g (o, t) = Ei v (ti, o) (i.e., the sum of all agents' valuations) and the set of possible outputs is assumed to be finite.\nA direct revelation mechanism M = (O (t), P (t)) belongs to the VCG family if (1) the allocation O (t) maximizes Ei v (ti, o), and (2) the payment to agent i is pi (t) vj (tj, O (t)) +\nhi (t_i), where hi () is an arbitrary function of t_i.\nUnder mild assumptions, VCG mechanisms 
are the only truthful implementations for utilitarian problems [10].\nThe allocation rule of a VCG mechanism is required to maximize the objective function in the range of the allocation function.\nThis makes the mechanism computationally intractable in many cases.\nFurthermore, replacing an optimal algorithm for computing the output with an approximation algorithm usually leads to untruthful mechanisms if a VCG payment scheme is used.\nIn this paper, we study how to design a truthful mechanism that does not optimize a utilitarian objective function.\n2.2 Binary Demand Games\nA binary demand game is a game G = (S, M), where M = (O, P) and the range of O is {0, 1} n.\nHereafter, we make the following further assumptions.\n1.\nThe valuation of the agents are not correlated, i.e., v (ti, o) is a function of v (ti, oi) only is denoted as v (ti, oi).\n2.\nThis assumption is needed to guarantee the IR property.\nThus, throughout his paper, we only consider these direct-revelation mechanisms in which every agent only needs to reveal its valuation vi = v (ti, 1).\nNotice that in applications where agents providing service and receiving payment, e.g., unicast and job scheduling, the valuation vi of an agent i is usually negative.\nFor the convenience of presentation, we define the cost of agent as ci = \u2212 v (ti, 1), i.e., it costs agent i ci to provide the service.\nThroughout this paper, we will use ci instead of vi in our analysis.\nAll our results can apply to the case where the agents receive the service rather than provide by setting ci to negative, as in auction.\nIn a binary demand game, if we want to optimize an objective function g (o, t), then we call it a binary optimization demand game.\nThe main differences between the binary demand games and those problems that can be solved by VCG mechanisms are:\n1.\nThe objective function is utilitarian (or affine maximization problem) for a problem solvable by VCG while there is no restriction on the objective function 
for a binary demand game.\n2.\nThe allocation rule O studied here does not necessarily optimize an objective function, while a VCG mechanism only uses the output that optimizes the objective function.\nWe even do not require the existence of an objective function.\n3.\nWe assume that the agents' valuations are not correlated in a binary demand game, while the agents' valuations may be correlated in a VCG mechanism.\n2.3 Previous Work\nLehmann et al. [12] studied how to design an efficient truthful mechanism for single-minded combinatorial auction.\nIn a singleminded combinatorial auction, each agent i (1 <i <n) only wants to buy a subset Si C _ S with private price ci.\nIn [12], it is assumed that the set of goods allocated to an agent i is either S0i or 0, which is known as exactness.\nLehmann et al. gave a greedy round-based allocation algorithm, based on the rank i | l\/z, that has an approximation ratio \u221a m, where m is the num\nber of goods in S. Based on the approximation algorithm, they gave a truthful payment scheme.\nNotice the assumption of exactness reveals that the single minded auction is indeed a binary demand game.\nTheir payment scheme inspired our payment scheme for binary demand game.\nThey devised a randomized rounding method that is incentive compatible and gave a truthful mechanism for combinatorial auctions with single parameter agents that approximately maximizes the social value of the auction.\nOn the contrary, in this paper, we study how to design a deterministic mechanism that is truthful based on some given allocation rules.\nIn [2], Archer and Tardos showed how to design truthful mechanisms for several combinatorial problems where each agent's private information is naturally expressed by a single positive real number, which will always be the cost incurred per unit load.\nThe mechanism's output could be arbitrary real number but their valuation is a quasi-linear function t \u00b7 w, where t is the private per unit cost and w is the 
work load.\nNotice when the load of the problems is w = {0, 1}, it is indeed a binary demand game.\nIf we apply their characterization of the truthful mechanism, their decreasing \"work curves\" w implies exactly the monotonicity property of the output.\nBut notice that their proof is heavily based on the assumption that the output is a continuous function of the cost, thus their conclusion can't directly apply to binary demand games.\nThe paper of Ahuva Mu'alem and Noam Nisan [13] is closest in spirit to our work.\nThey proved that all truthful mechanisms should have a monotonicity output and their payment scheme is based on the cut value.\nWith a simple generalization, we get our conclusion for general binary demand game.\nThey proposed several combination methods including MAX, IF-THEN-ELSE construction to perform partial search.\nAll of their methods required the welfare function associated with the output satisfying bitonic property.\nDistinction between our contributions and previous results: It has been shown in [2, 6, 12, 13] that for the single minded combinatorial auction, there exists a payment scheme which results in a truthful mechanism if the allocation rule satisfies a certain monotonicity property.\nTheorem 4 also depends on the monotonicity property, but it is applicable to a broader setting than the single minded combinatorial auction.\nIn addition, the binary demand game studied here is different from the traditional packing IP's: we only require that the allocation to each agent is binary and the allocation rule satisfies a certain monotonicity property; we do not put any restrictions on the objective function.\nFurthermore, the main focus of this paper is to design some general techniques to find the truthful payment scheme for a given allocation rule O satisfying a certain monotonicity property.\n6.\nCONCLUSIONS\nIn this paper, we have studied how to design a truthful mechanism M = (O, P) for a given allocation rule O for a binary demand 
game. We first showed that the allocation rule O satisfying the MP is a necessary and sufficient condition for a truthful mechanism M to exist. We then formulated a general framework for designing a payment scheme P such that the mechanism M = (O, P) is truthful and computable in polynomial time. We further presented several general composition-based techniques to compute P efficiently for various allocation rules O. Several concrete examples were discussed to demonstrate our general framework for designing P and our composition-based techniques for computing P in polynomial time.

In this paper, we have concentrated on how to compute P in polynomial time. Our algorithms do not necessarily have the optimal running time for computing P given O. It would be of interest to design algorithms that compute P in optimal time. We have made some progress in this research direction in [22] by providing an algorithm to compute the payments for unicast in a node-weighted graph in optimal O(n log n + m) time. Another research direction is to design an approximation allocation rule O satisfying MP with a good approximation ratio for a given binary demand game. Many works [12, 13] in the mechanism design literature are in this direction. We point out here that the goal of this paper is not to design a better allocation rule for a problem, but to design an algorithm to compute the payments efficiently when O is given. It would be of significance to design allocation rules with good approximation ratios such that a given binary demand game has a computationally efficient payment scheme.

In this paper, we have studied mechanism design for binary demand games. However, some problems cannot be directly formulated as binary demand games. The job scheduling problem in [2] is such an example. For this problem, a truthful payment scheme P exists for an allocation rule O if and only if the workload assigned by O is monotonic in a certain manner. It would be of interest to generalize our
framework for designing a truthful payment scheme for a binary demand game to non-binary demand games. Towards this research direction, Theorem 4 can be extended to a general allocation rule O whose range is R+. The remaining difficulty is then how to compute the payment P under mild assumptions about the valuations if a truthful mechanism M = (O, P) does exist.

Towards Truthful Mechanisms for Binary Demand Games: A General Framework

ABSTRACT

The family of Vickrey-Clarke-Groves (VCG) mechanisms is arguably the most celebrated achievement in truthful mechanism design. However, VCG mechanisms have their limitations. They only apply to optimization problems with a utilitarian (or affine) objective function, and their output should optimize the objective function. For many optimization problems, finding the optimal output is computationally intractable. If we apply VCG mechanisms to polynomial-time algorithms that approximate the optimal solution, the resulting mechanisms may no longer be truthful. In light of these limitations, it is useful to study whether we can design a truthful non-VCG payment scheme that is computationally tractable for a given allocation rule O. In this paper, we focus our attention on binary demand games in which the agents' only available actions are to take part in the game or not. For these problems, we prove that a truthful mechanism M = (O, P) exists with a proper payment method P iff the allocation rule O satisfies a certain monotonicity property. We provide a general framework to design such P. We further propose several general composition-based techniques to compute P efficiently for various types of output. In particular, we show how P can be computed through "or/and" combinations, round-based combinations, and some more complex combinations of the outputs from subgames.

1. INTRODUCTION

In recent years, with the rapid development of the Internet, many protocols and algorithms have been proposed to
make the Internet more efficient and reliable. The Internet is a complex distributed system in which a multitude of heterogeneous agents cooperate to achieve some common goals, and the existing protocols and algorithms often assume that all agents will follow the prescribed rules without deviation. However, in settings where the agents are selfish instead of altruistic, it is more reasonable to assume that these agents are rational, i.e., that they maximize their own profits, in the sense of neoclassical economics, and new models are needed to cope with the selfish behavior of such agents. Towards this end, Nisan and Ronen [14] proposed the framework of algorithmic mechanism design and applied VCG mechanisms to some fundamental problems in computer science, including shortest paths, minimum spanning trees, and scheduling on unrelated machines.

The VCG mechanisms [5, 11, 21] are applicable to mechanism design problems whose outputs optimize the utilitarian objective function, which is simply the sum of all agents' valuations. Unfortunately, some objective functions are not utilitarian; and even for problems with a utilitarian objective function, it is sometimes impossible to find the optimal output in polynomial time unless P = NP. Mechanisms other than the VCG mechanism are needed to address these issues. Archer and Tardos [2] studied a scheduling problem where it is NP-hard to find the optimal output. They pointed out that a certain monotonicity property of the output workload is a necessary and sufficient condition for the existence of a truthful mechanism for their scheduling problem. Auletta et al. [3] studied a similar scheduling problem. They provided a family of deterministic truthful (2 + ε)-approximation mechanisms for any fixed number of machines and several (1 + ε)-truthful mechanisms for some NP-hard restrictions of their scheduling problem. Lehmann et al.
[12] studied the single-minded combinatorial auction and gave a √m-approximation truthful mechanism, where m is the number of goods. They also pointed out that a certain monotonicity in the allocation rule can lead to a truthful mechanism. The work of Mu'alem and Nisan [13] is the closest in spirit to our work. They characterized all truthful mechanisms based on a certain monotonicity property in a single-minded auction setting. They also showed how to use MAX and IF-THEN-ELSE to combine outputs from subproblems. As shown in this paper, the MAX and IF-THEN-ELSE combinations are special cases of the composition-based techniques that we present for computing the payments in polynomial time under mild assumptions.

More generally, we study how to design truthful mechanisms for binary demand games, where the allocation of an agent is either "selected" or "not selected". We also assume that the valuations of the agents are uncorrelated, i.e., the valuation of an agent only depends on its own allocation and type. Recall that a mechanism M = (O, P) consists of two parts, an allocation rule O and a payment scheme P. Previously, it was often assumed that there is an objective function g and an allocation rule O that optimizes g either exactly or approximately. In contrast to the VCG mechanisms, we do not require that the allocation optimize the objective function. In fact, we do not even require the existence of an objective function. Given any allocation rule O for a binary demand game, we show that a truthful mechanism M = (O, P) exists for the game if and only if O satisfies a certain monotonicity property. The monotonicity property only guarantees the existence of a payment scheme P such that (O, P) is truthful. We complement this existence theorem with a general framework to design such a payment scheme P.
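For intuition, the central objects just mentioned, a monotone allocation rule, its threshold (the cut value studied later in the paper), and the induced payment, can be previewed on a toy reverse auction. The sketch below is ours, not part of the paper; the rule (buy from the K cheapest agents), all names, and the grid of costs are invented for illustration, and truthfulness is confirmed by brute force.

```python
from itertools import product

K = 2  # number of agents the buyer selects

def allocate(costs):
    # A monotone allocation rule O: select the K cheapest agents,
    # breaking ties toward the smaller index.
    chosen = set(sorted(range(len(costs)), key=lambda i: (costs[i], i))[:K])
    return [int(i in chosen) for i in range(len(costs))]

def cut_value(costs, i):
    # kappa_i(O, c_-i): the K-th smallest cost among the other agents.
    # Agent i is selected exactly when c_i is below this threshold.
    others = sorted(c for j, c in enumerate(costs) if j != i)
    return others[K - 1]

def payment(costs, i):
    # Cut-value payment: pay kappa_i when selected, 0 otherwise.
    return cut_value(costs, i) if allocate(costs)[i] else 0.0

# Brute-force check for agent 0: reporting the true cost is dominant.
grid = [1.0, 2.0, 3.0, 4.0]
for c_others in product(grid, repeat=3):
    for true_c in grid:
        honest = [true_c, *c_others]
        u_truth = payment(honest, 0) - true_c * allocate(honest)[0]
        for lie in grid:
            dev = [lie, *c_others]
            u_lie = payment(dev, 0) - true_c * allocate(dev)[0]
            assert u_lie <= u_truth + 1e-9
```

Note that the payment to a selected agent does not depend on its own report, which is exactly what the enumeration above exploits.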
Furthermore, we present general techniques to compute the payment when the output is a composition of the outputs of subgames through the operators "or" and "and"; through round-based combinations; or through intermediate results, which may themselves be computed from other subproblems.

The remainder of the paper is organized as follows. In Section 2, we discuss preliminaries and previous work, define binary demand games, and discuss our basic assumptions about them. In Section 3, we show that O satisfying a certain monotonicity property is a necessary and sufficient condition for the existence of a truthful mechanism M = (O, P). A framework is then proposed in Section 4 to compute the payment P in polynomial time for several types of allocation rules O. In Section 5, we provide several examples to demonstrate the effectiveness of our general framework. We conclude our paper in Section 6 with some possible future directions.

2. PRELIMINARIES

2.1 Mechanism Design

As is usually done in the literature on designing algorithms or protocols with inputs from individual agents, we adopt the assumption from neoclassical economics that all agents are rational, i.e., they respond to well-defined incentives and deviate from the protocol only if the deviation improves their gain. A standard model for mechanism design is as follows. There are n agents 1, ..., n, and each agent i has some private information ti, called its type, known only to itself. For example, the type ti can be the cost that agent i incurs for forwarding a packet in a network, or a payment that the agent is willing to pay for a good in an auction. The agents' types define the type vector t = (t1, t2, ..., tn). Each agent i has a set of strategies Ai from which it can choose. For each input vector a = (a1, ..., an), where agent i plays strategy ai ∈ Ai, the mechanism M = (O, P) computes an output o = O(a) and a payment vector p(a) = (p1(a), ..., pn(a)). Here
the payment pi(·) is the money given to agent i and depends on the strategies used by the agents. A game is defined as G = (S, M), where S is the setting of the game G. Here, S consists of the parameters of the game that are set before the game starts and do not depend on the players' strategies. For example, in a unicast routing game [14], the setting consists of the topology of the network, the source node, and the destination node. Throughout this paper, unless explicitly mentioned otherwise, the setting S of the game is fixed, and we are only interested in how to design P for a given allocation rule O.

A valuation function v(ti, o) assigns a monetary amount to agent i for each possible output o. Everything about a game (S, M), including the setting S, the allocation rule O, and the payment scheme P, is public knowledge except agent i's actual type ti, which is private information to agent i. Let ui(ti, o) denote the utility of agent i at the outcome o of the game, given its preferences ti. Here, following a common assumption in the literature, we assume the utility of agent i is quasi-linear, i.e., ui(ti, o) = v(ti, o) + pi(a).

Let a|ia′i = (a1, ···, ai−1, a′i, ai+1, ···, an), i.e., every agent j ≠ i plays action aj while agent i plays a′i. Let a−i = (a1, ···, ai−1, ai+1, ···, an) denote the actions of all agents except i.
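The model above can be made concrete on a toy single-item auction. The sketch below is our illustration, not the paper's: O gives the item to the highest bidder, P charges the winner the highest competing bid (a second-price rule), utilities are quasi-linear as defined above, and a brute-force search over a small bid grid confirms that truthful reporting is a dominant strategy.

```python
from itertools import product

def allocate(bids):
    # O: give the single item to the highest bidder (ties to the lowest index).
    winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
    return [int(i == winner) for i in range(len(bids))]

def payments(bids):
    # P: the winner pays the highest competing bid; p_i is money *given* to i,
    # so the winner's payment enters with a negative sign.
    o = allocate(bids)
    return [-max(b for j, b in enumerate(bids) if j != i) if o[i] else 0.0
            for i in range(len(bids))]

def utility(true_value, bids, i):
    # Quasi-linear utility: u_i = v(t_i, o) + p_i.
    return true_value * allocate(bids)[i] + payments(bids)[i]

def truth_is_dominant(true_value, i, bid_grid, others_grid):
    # Check u_i(truth) >= u_i(any lie) against every profile of the others.
    for others in others_grid:
        base = list(others)
        honest = base[:i] + [true_value] + base[i:]
        u_truth = utility(true_value, honest, i)
        for lie in bid_grid:
            if utility(true_value, base[:i] + [lie] + base[i:], i) > u_truth + 1e-9:
                return False
    return True

grid = [0.0, 1.0, 2.0, 3.0]
assert truth_is_dominant(2.0, 0, grid, list(product(grid, repeat=2)))
```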
Sometimes, we write (a−i, bi) as a|ibi. An action ai is called dominant for i if it (weakly) maximizes the utility of i for all possible strategies b−i of the other agents, i.e., ui(ti, O(b−i, ai)) ≥ ui(ti, O(b−i, a′i)) for all a′i ≠ ai and all b−i. A direct-revelation mechanism is a mechanism in which the only actions available to each agent are to report its private type, either truthfully or falsely, to the mechanism. An incentive compatible (IC) mechanism is a direct-revelation mechanism in which each agent maximizes its utility by reporting its type ti truthfully. Thus, in a direct-revelation mechanism satisfying IC, the payment scheme should satisfy the property that, for each agent i, v(ti, O(t)) + pi(t) ≥ v(ti, O(t|it′i)) + pi(t|it′i). Another common requirement in the mechanism design literature is so-called individual rationality (IR), or voluntary participation: an agent's utility from participating in the output of the mechanism is not less than its utility from not participating. A direct-revelation mechanism is strategyproof if it satisfies both the IC and IR properties.

Arguably the most important positive result in mechanism design is the generalized Vickrey-Clarke-Groves (VCG) mechanism by Vickrey [21], Clarke [5], and Groves [11]. The VCG mechanism applies to (affine) maximization problems where the objective function is utilitarian, g(o, t) = Σi v(ti, o) (i.e., the sum of all agents' valuations), and the set of possible outputs is assumed to be finite. A direct-revelation mechanism M = (O(t), P(t)) belongs to the VCG family if (1) the allocation O(t) maximizes Σi v(ti, o), and (2) the payment to agent i is pi(t) = Σj≠i vj(tj, O(t)) + hi(t−i), where hi(·) is an arbitrary function of t−i. Under mild assumptions, VCG mechanisms are the only truthful implementations for utilitarian problems [10]. The allocation rule of a VCG mechanism is required to maximize the objective function in the range of the
allocation function. This makes the mechanism computationally intractable in many cases. Furthermore, replacing an optimal algorithm for computing the output with an approximation algorithm usually leads to untruthful mechanisms if a VCG payment scheme is used. In this paper, we study how to design a truthful mechanism that does not optimize a utilitarian objective function.

2.2 Binary Demand Games

A binary demand game is a game G = (S, M), where M = (O, P) and the range of O is {0, 1}n. In other words, the output is an n-tuple vector O(t) = (O1(t), O2(t), ..., On(t)), where Oi(t) = 1 (respectively, 0) means that agent i is (respectively, is not) selected. Examples of binary demand games include: unicast [14, 22, 9] and multicast [23, 24, 8] (generally, subgraph construction by selecting some links/nodes to satisfy some property), facility location [7], and certain auctions [12, 2, 13]. Hereafter, we make the following further assumptions.

1. The valuations of the agents are not correlated, i.e., v(ti, o) is a function of oi only, and is denoted as v(ti, oi).

2. The valuation v(ti, 0) is a publicly known value and is normalized to 0. This assumption is needed to guarantee the IR property.

Thus, throughout this paper, we only consider direct-revelation mechanisms in which every agent only needs to reveal its valuation vi = v(ti, 1). Notice that in applications where agents provide a service and receive payment, e.g., unicast and job scheduling, the valuation vi of an agent i is usually negative. For convenience of presentation, we define the cost of agent i as ci = −v(ti, 1), i.e., it costs agent i an amount ci to provide the service. Throughout this paper, we will use ci instead of vi in our analysis. All our results apply to the case where the agents receive the service rather than provide it, by setting ci to be negative, as in an auction. In a binary demand game, if we want to optimize an objective function g(o, t), then we call
it a binary optimization demand game. The main differences between binary demand games and the problems that can be solved by VCG mechanisms are:

1. The objective function is utilitarian (an affine maximization problem) for a problem solvable by VCG, while there is no restriction on the objective function for a binary demand game.

2. The allocation rule O studied here does not necessarily optimize an objective function, while a VCG mechanism only uses the output that optimizes the objective function. We do not even require the existence of an objective function.

3. We assume that the agents' valuations are not correlated in a binary demand game, while the agents' valuations may be correlated in a VCG mechanism.

In this paper, we assume for technical convenience that the objective function g(o, c), if it exists, is continuous with respect to the cost ci, but most of our results are directly applicable to the discrete case without any modification.

2.3 Previous Work

Lehmann et al. [12] studied how to design an efficient truthful mechanism for the single-minded combinatorial auction. In a single-minded combinatorial auction, each agent i (1 ≤ i ≤ n) only wants to buy a subset Si ⊆ S with private price ci. A single-minded bidder i declares a bid bi = (S′i, ai) with S′i ⊆ S and ai ∈ R+. In [12], it is assumed that the set of goods allocated to an agent i is either S′i or ∅, which is known as exactness. Lehmann et al. gave a greedy round-based allocation algorithm, which ranks the bids by ai/√|S′i| and has an approximation ratio √m, where m is the number of goods in S.
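A small sketch (ours, not the authors'; the example bids are invented) of this greedy rule, together with a critical-value payment computed by bisection on the reported price, which is valid here because the rank, and hence selection, is monotone in ai:

```python
import math

def greedy_allocation(bids):
    # bids: list of (bundle, price a_i); rank by a_i / sqrt(|S_i|) and greedily
    # grant any bid whose bundle is disjoint from those already granted.
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i][1] / math.sqrt(len(bids[i][0])),
                   reverse=True)
    taken, winners = set(), set()
    for i in order:
        bundle = bids[i][0]
        if taken.isdisjoint(bundle):
            winners.add(i)
            taken |= set(bundle)
    return winners

def critical_payment(bids, i):
    # Charge a winner the smallest offer at which it would still win
    # (its critical value); losers pay 0.  Bisection converges because
    # winning is monotone in the reported price.
    if i not in greedy_allocation(bids):
        return 0.0
    lo, hi = 0.0, bids[i][1]
    for _ in range(60):
        mid = (lo + hi) / 2
        trial = bids[:i] + [(bids[i][0], mid)] + bids[i + 1:]
        if i in greedy_allocation(trial):
            hi = mid
        else:
            lo = mid
    return hi

bids = [(frozenset({1, 2}), 10.0), (frozenset({2}), 4.0), (frozenset({3}), 1.0)]
# Bid 0 outranks bid 1 (10/sqrt(2) > 4), so bids 0 and 2 win.
```

Here bid 0 stays a winner only while its rank beats bid 1's, so its critical payment works out to 4·√2 ≈ 5.66, strictly below its offer of 10.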
Based on this approximation algorithm, they gave a truthful payment scheme. For an allocation rule satisfying (1) exactness: the set of goods allocated to an agent i is either S′i or ∅; and (2) monotonicity: offering more money for fewer goods cannot cause a bidder to lose its bid, they proposed a truthful payment scheme as follows: (1) charge a winning bidder a certain amount that does not depend on its own bid; (2) charge a losing bidder 0. Notice that the exactness assumption reveals that the single-minded auction is indeed a binary demand game. Their payment scheme inspired our payment scheme for binary demand games.

In [1], Archer et al. studied combinatorial auctions where multiple copies of many different items are on sale, and each bidder i desires only one subset Si. They devised a randomized rounding method that is incentive compatible and gave a truthful mechanism for combinatorial auctions with single-parameter agents that approximately maximizes the social value of the auction. As they pointed out, their method is strongly truthful in the sense that it is truthful with high probability 1 − ε, where ε is an error probability. In contrast, in this paper we study how to design a deterministic mechanism that is truthful based on a given allocation rule.

In [2], Archer and Tardos showed how to design truthful mechanisms for several combinatorial problems where each agent's private information is naturally expressed by a single positive real number, which will always be the cost incurred per unit load. The mechanism's output can be an arbitrary real number, but each agent's valuation is a quasi-linear function t · w, where t is the private per-unit cost and w is the work load. Archer and Tardos characterized that every truthful mechanism must have a decreasing "work curve" wi and that the truthful payment should be Pi(bi) = Pi(0) + bi·wi(bi) − ∫0^bi wi(u) du. Using this model, Archer and Tardos designed truthful mechanisms for several
scheduling-related problems, including minimizing the makespan, maximizing flow, and minimizing the weighted sum of completion times. Notice that when the load of the problem is w ∈ {0, 1}, it is indeed a binary demand game. If we apply their characterization of truthful mechanisms, their decreasing "work curves" w imply exactly the monotonicity property of the output. But notice that their proof relies heavily on the assumption that the output is a continuous function of the cost; thus their conclusion cannot be applied directly to binary demand games.

The paper of Mu'alem and Nisan [13] is the closest in spirit to our work. They clearly stated that "we only discussed a limited class of bidders, single minded bidders, that was introduced by" [12]. They proved that all truthful mechanisms must have a monotone output, and their payment scheme is based on the cut value. With a simple generalization, we obtain our conclusion for general binary demand games. They proposed several combination methods, including the MAX and IF-THEN-ELSE constructions, to perform partial search. All of their methods require the welfare function associated with the output to satisfy a bitonic property.

Distinction between our contributions and previous results: It has been shown in [2, 6, 12, 13] that for the single-minded combinatorial auction, there exists a payment scheme which results in a truthful mechanism if the allocation rule satisfies a certain monotonicity property. Theorem 4 also depends on the monotonicity property, but it is applicable to a broader setting than the single-minded combinatorial auction. In addition, the binary demand game studied here is different from traditional packing IPs: we only require that the allocation to each agent is binary and that the allocation rule satisfies a certain monotonicity property; we do not put any restrictions on the objective function. Furthermore, the main focus of this paper is to design some general techniques to find
the truthful payment scheme for a given allocation rule O satisfying a certain monotonicity property.

3. GENERAL APPROACHES

3.1 Properties of Strategyproof Mechanisms

We discuss several properties that mechanisms need to satisfy in order to be truthful. The proofs of the above theorems are straightforward and thus omitted due to space limits. This theorem implies that for binary demand games we can always normalize the payment to an agent i such that the payment to the agent is 0 when it is not selected. Hereafter, we will only consider normalized payment schemes.

3.2 Existence of Strategyproof Mechanisms

Notice that, given the setting S, a mechanism design problem is composed of two parts: the allocation rule O and a payment scheme P. In this paper, given an allocation rule O, we focus our attention on how to design a truthful payment scheme based on O. Given an allocation rule O for a binary demand game, we first present a sufficient and necessary condition for the existence of a truthful payment scheme P.

THEOREM 4. Fix the setting S and the costs c−i in a binary demand game G with the allocation rule O. The following three conditions are equivalent:

1. There exists a value κi(O, c−i) (which we will call a cut value), such that Oi(c) = 1 if ci < κi(O, c−i) and Oi(c) = 0 if ci > κi(O, c−i). When ci = κi(O, c−i), Oi(c) can be either 0 or 1, depending on the tie-breaker of the allocation rule O.
Hereafter, we will not consider the tie-breaker scenario in our proofs.

2. The allocation rule O satisfies MP.

3. There exists a truthful payment scheme P for this binary demand game.

PROOF. The proof that Condition 2 implies Condition 1 is straightforward and is omitted here.

We then show that Condition 3 implies Condition 2. The proof is similar to a proof in [13]. To prove this direction, we assume there exist an agent i and two cost vectors c|ici1 and c|ici2, where ci1 < ci2, Oi(c|ici2) = 1 and Oi(c|ici1) = 0. From Corollary 2, we know that pi(c|ici1) = p0i and pi(c|ici2) = p1i. Now fix c−i. The utility for i when ci = ci1 is ui(ci1) = p0i. When agent i misreports its cost as ci2, its utility is p1i − ci1. Since M = (O, P) is truthful, we have p0i ≥ p1i − ci1. Now consider the scenario when the actual cost of agent i is ci = ci2. Its utility is p1i − ci2 when it reports its true cost. Similarly, if it misreports its cost as ci1, its utility is p0i. Since M = (O, P) is truthful, we have p1i − ci2 ≥ p0i. Consequently, we have p1i − ci2 ≥ p0i ≥ p1i − ci1. This chain of inequalities implies that ci1 ≥ ci2, which is a contradiction.

We then show that Condition 1 implies Condition 3. We prove this by constructing a payment scheme and proving that it is truthful. The payment scheme is: if Oi(c) = 1, then agent i gets payment pi(c) = κi(O, c−i); otherwise, it gets payment pi(c) = 0. From Condition 1, if Oi(c) = 1 then ci ≤ κi(O, c−i). Thus, its utility is κi(O, c−i) − ci ≥ 0, which implies that the payment scheme satisfies IR. In the following, we prove that this payment scheme also satisfies the IC property. There are two cases.

Case 1: ci < κi(O, c−i). In this case, when i declares its true cost ci, its utility is κi(O, c−i) − ci > 0. Now consider the situation when i declares a cost di ≠ ci. If di < κi(O, c−i), then i gets the
same payment and utility, since it is still selected. If di > κi(O, c−i), then its utility becomes 0, since it is no longer selected. Thus, it has no incentive to lie in this case.

Case 2: ci ≥ κi(O, c−i). In this case, when i reveals its true cost, its payment is 0 and its utility is 0. Now consider the situation when i declares a cost di ≠ ci. If di > κi(O, c−i), then i gets the same payment and utility, since it is still not selected. If di ≤ κi(O, c−i), then its utility becomes κi(O, c−i) − ci ≤ 0, since it is selected now. Thus, it has no incentive to lie.

The equivalence of the monotonicity property of the allocation rule O and the existence of a truthful mechanism using O can be extended to games beyond binary demand games. The details are omitted here due to space limits.

We now summarize the process of designing a truthful payment scheme for a binary demand game based on an output method O.

General Framework 1 Truthful mechanism design for a binary demand game
Stage 1: Check whether the allocation rule O satisfies MP. If it does not, then there is no payment scheme P such that the mechanism M = (O, P) is truthful. Otherwise, define the payment scheme P as follows.
Stage 2: Based on the allocation rule O, find the cut value κi(O, c−i) for agent i such that Oi(c|idi) = 1 when di < κi(O, c−i), and Oi(c|idi) = 0 when di > κi(O, c−i).
Stage 3: The payment for agent i is 0 if Oi(c) = 0; the payment is κi(O, c−i) if Oi(c) = 1.

THEOREM 5. The payment defined by our general framework is the minimum among all truthful payment schemes using O as the output.

4. COMPUTING CUT VALUE FUNCTIONS

To find the truthful payment scheme using General Framework 1, the most difficult stage seems to be Stage 2. Notice that binary search does not work in general, since the valuations of the agents may be continuous. We give some general techniques that can help with finding the cut
value function under certain circumstances. Our basic approach is as follows. First, we decompose the allocation rule into several allocation rules. Next, we find the cut value function for each of these new allocation rules. Then, we compute the original cut value function by combining the cut value functions of the new allocation rules.

4.1 Simple Combinations

In this subsection, we introduce techniques to compute the cut value function by combining multiple allocation rules with conjunctions or disjunctions. For simplicity, given an allocation rule O, we will use κ(O, c) to denote the n-tuple vector (κ1(O, c−1), κ2(O, c−2), ···, κn(O, c−n)). Here, κi(O, c−i) is the cut value for agent i when the allocation rule is O and the costs c−i of all other agents are fixed.

THEOREM 6. Fix the setting S of a binary demand game, and assume that there are m allocation rules O1, O2, ···, Om satisfying the monotonicity property, with κ(Oi, c) the cut value vector for Oi. Then the allocation rule O(c) = O1(c) ∨ O2(c) ∨ ··· ∨ Om(c) satisfies the monotonicity property. Moreover, the cut value for O is κ(O, c) = max{κ(O1, c), ..., κ(Om, c)}.

Here, κ(O, c) = max{κ(O1, c), ..., κ(Om, c)} means that, ∀j ∈ [1, n], κj(O, c−j) = max{κj(O1, c−j), ..., κj(Om, c−j)}, and O(c) = O1(c) ∨ ··· ∨ Om(c) means that, ∀j ∈ [1, n], Oj(c) = O1j(c) ∨ O2j(c) ∨ ··· ∨ Omj(c).

PROOF. Assume that ci > c′i and Oi(c) = 1. Without loss of generality, we assume that Oki(c) = 1 for some k, 1 ≤ k ≤ m.
From the assumption that Ok satisfies MP, we obtain that Oki(c|ic′i) = 1, and hence Oi(c|ic′i) = 1. This proves that O(c) satisfies MP. The correctness of the cut value function follows directly from Theorem 4.

Many algorithms indeed fall into this category. To demonstrate the usefulness of Theorem 6, we discuss a concrete example here. In a network, sometimes we want to deliver a packet to a set of nodes instead of one. This problem is known as multicast. The most commonly used structure in multicast routing is the so-called shortest path tree (SPT). Consider a network G = (V, E, c), where V is the set of nodes and the vector c gives the actual costs of the nodes for forwarding data. Assume that the source node is s and the receivers are Q ⊂ V. For each receiver qi ∈ Q, we compute the shortest path (least cost path), denoted by LCP(s, qi, d), from the source s to qi under the reported cost profile d. The union of all such shortest paths forms the shortest path tree. We then use General Framework 1 to design the truthful payment scheme P when the SPT structure is used as the output for multicast, i.e., we design a mechanism M = (SPT, P). Notice that VCG mechanisms cannot be applied here since SPT is not an affine maximization.

We define LCP(s, qi) as the allocation rule corresponding to the path LCP(s, qi, d), i.e., LCP(s, qi)k(d) = 1 if and only if node vk is on LCP(s, qi, d). Then the output SPT is defined as the disjunction of LCP(s, qi) over all qi ∈ Q. In other words, SPTk(d) = 1 if and only if vk is selected in some LCP(s, qi, d). The shortest path allocation rule is utilitarian and satisfies MP. Thus, from Theorem 6, SPT also satisfies MP, and the cut value function vector for SPT can be calculated as κ(SPT, c) = maxqi∈Q κ(LCP(s, qi), c), where κ(LCP(s, qi), c) is the cut value function vector for the shortest path LCP(s, qi, c). Consequently, the payment scheme above is truthful and is the minimum among all truthful payment schemes when the allocation rule is SPT.

THEOREM 7. Fix the
setting S of a binary demand game, and assume that there are m output methods O1, O2, ···, Om satisfying MP, with κ(Oi, c) the cut value function for Oi, i = 1, 2, ···, m. Then the allocation rule O(c) = O1(c) ∧ O2(c) ∧ ··· ∧ Om(c) satisfies MP. Moreover, the cut value function for O is κ(O, c) = min{κ(O1, c), ..., κ(Om, c)}.

We show that our simple combination generalizes the IF-THEN-ELSE function defined in [13]. For an agent i, assume that there are two allocation rules O1 and O2 satisfying MP. Let κi(O1, c−i) and κi(O2, c−i) be the cut value functions for O1 and O2, respectively. Then the IF-THEN-ELSE function Oi(c) is actually Oi(c) = [(ci ≤ κi(O1, c−i) + δ1(c−i)) ∧ O2i(c−i, ci)] ∨ (ci < κi(O1, c−i) − δ2(c−i)), where δ1(c−i) and δ2(c−i) are two positive functions. By applying Theorems 6 and 7, we know that the allocation rule O satisfies MP, and consequently κi(O, c−i) = max{min(κi(O1, c−i) + δ1(c−i), κi(O2, c−i)), κi(O1, c−i) − δ2(c−i)}.

4.2 Round-Based Allocations

Some approximation algorithms are round-based, where each round of the algorithm selects some agents and updates the setting and the cost profile if necessary. For example, several approximation algorithms for minimum weight vertex cover [19], maximum weight independent set, minimum weight set cover [4], and minimum weight Steiner tree [18] fall into this category.

As an example, we discuss the minimum weighted vertex cover problem (MWVC) [16, 15] to show how to compute the cut value for a round-based output. Given a graph G = (V, E), where the nodes v1, v2, ..., vn are the agents and each agent vi has a weight ci, we want to find a node set V′ ⊆ V such that for every edge (u, v) ∈ E, at least one of u and v is in V′. Such a V′ is called a vertex cover of G. The valuation of a node i is
−ci if it is selected; otherwise its valuation is 0. For a subset of nodes V′ ⊆ V, we define its weight as c(V′) = Σi∈V′ ci. We want to find a vertex cover with the minimum weight. Hence, the objective function to be implemented is utilitarian. To use the VCG mechanism, we would need to find the vertex cover with the minimum weight, which is NP-hard [16]. Since we are interested in mechanisms that can be computed in polynomial time, we must use polynomial-time computable allocation rules. Many algorithms have been proposed in the literature to approximate the optimal solution. In this paper, we use a 2-approximation algorithm given in [16]. For the sake of completeness, we briefly review this algorithm here. The algorithm is round-based. Each round selects some vertices and discards some others. For each node i, w(i) is initialized to its weight ci, and when w(i) drops to 0, i is included in the vertex cover. To make the presentation clear, we say an edge (i1, j1) is lexicographically smaller than edge (i2, j2) if (1) min(i1, j1) < min(i2, j2), or (2) min(i1, j1) = min(i2, j2) and max(i1, j1) < max(i2, j2).

Algorithm 2 Approximate Minimum Weighted Vertex Cover
Input: A node-weighted graph G = (V, E, c).
Output: A vertex cover V′.
1: Set V′ = ∅. For each i ∈ V, set w(i) = ci.
2: while V′ is not a vertex cover do
3: Pick the uncovered edge (i, j) with the least lexicographic order among all uncovered edges.
4: Let m = min(w(i), w(j)).
5: Update w(i) to w(i) − m and w(j) to w(j) − m.
6: If w(i) = 0, add i to V′. If w(j) = 0, add j to V′.

Notice that selecting an edge using the lexicographic order is crucial to guarantee the monotonicity property. Algorithm 2 outputs a vertex cover V′ whose weight is within 2 times the optimum. For convenience, we use VC(c) to denote the vertex cover computed by Algorithm 2 when the cost vector of the vertices is c.
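Algorithm 2 is short enough to transcribe directly. The following sketch is ours (the function name, graph encoding, and example weights are invented for illustration) and follows the listing above, processing uncovered edges in lexicographic order and subtracting m = min(w(i), w(j)) from both endpoints:

```python
def approx_weighted_vertex_cover(weights, edges):
    # Algorithm 2 sketch: residual weights w(i) start at c_i; repeatedly take
    # the lexicographically least uncovered edge, subtract m = min(w(i), w(j))
    # from both endpoints, and put a node into the cover when w hits 0.
    w = dict(weights)
    cover = set()

    def uncovered():
        # Uncovered edges in lexicographic (min-endpoint, max-endpoint) order.
        return sorted((min(i, j), max(i, j)) for i, j in edges
                      if i not in cover and j not in cover)

    while uncovered():
        i, j = uncovered()[0]
        m = min(w[i], w[j])
        w[i] -= m
        w[j] -= m
        if w[i] == 0:
            cover.add(i)
        if w[j] == 0:
            cover.add(j)
    return cover

weights = {1: 2.0, 2: 3.0, 3: 2.0}   # c_i for each node (example values)
edges = [(1, 2), (2, 3)]
```

On this path graph the algorithm returns {1, 2} with weight 5, within the factor-2 bound of the optimum {2}, whose weight is 3.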
Below we generalize Algorithm 2 to a more general scenario.\nTypically, a round-based output can be characterized as follows (Algorithm 3).\nWe have the following theorem about the existence of a truthful payment using a round-based allocation rule A. THEOREM 8.\nA round-based output A, with the framework defined in Algorithm 3, satisfies MP if the output methods Or satisfy MP and all updating rules Ur are crossing-independent.\nPROOF.\nConsider an agent i and a fixed c−i.\nWe prove that if agent i is selected with cost ci, then it is also selected with any cost di < ci.\nAssume that i is selected in round r with cost ci.\nThen under cost di, if agent i is selected in a round before r, our claim holds.\nOtherwise, consider round r. Clearly, the setting Sr and the costs of all other agents are the same as if agent i had cost ci, since i was not selected in the previous rounds, due to the crossing-independence property.\nSince i is selected in round r with cost ci, i is also selected in round r with di < ci because Or satisfies MP.\nThis finishes the proof.\nAlgorithm 3 Round-Based Output\n1: Set r = 0, c0 = c, and G0 = G initially.\n2: repeat\n3: Compute an output or using a deterministic algorithm based on setting Sr using allocation rule Or.\nHere Or, cr and Sr are the allocation rule, cost vector and game setting in game Gr, respectively.\nRemark: Or is often a simple greedy algorithm, such as selecting the agents that minimize some utilitarian function.\nFor the example of vertex cover, Or always selects the lighter-weighted node on the lexicographically least uncovered edge (i, j).\n4: Let r = r + 1.\nUpdate the game Gr−1 to obtain a new game Gr with setting Sr and cost vector cr according to some rule Ur.\nHere we update the cost and setting of the game.\nRemark: For the example of vertex cover, the updating rule decreases the weights of vertices i and j by min(w(i), w(j)).\n5: until a valid output is found\n6: Return the union of the sets of selected players of all rounds as the final output.\nFor the
example of vertex cover, it is the union of the nodes selected in all rounds.\nAlgorithm 4 Compute Cut Value for Round-Based Algorithms\nInput: A round-based output A, a game G1 = G, and an updating function vector U.\nOutput: The cut value x for agent k.\n1: Set r = 0 and ck = ζ.\nRecall that ζ is a value that guarantees Ak = 0 when agent k reports the cost ζ.\n2: repeat\n3: Compute an output or using a deterministic algorithm based on setting Sr using allocation rule Or: Sr × cr → {0, 1}n.\n4: Find the cut value for agent k based on the allocation rule Or for costs cr−k. Let ℓr = κk(Or, cr−k) be this cut value.\n5: Set r = r + 1 and obtain a new game Gr from Gr−1 and or according to the updating rule Ur.\n6: Let cr be the new cost vector for game Gr.\n7: until a valid output is found.\n8: Let gr(x) be the value of crk when the original cost vector is c|k x.\n9: Find the minimum value x such that gr(x) ≥ ℓr for every round 1 ≤ r ≤ t.\nHere, t is the total number of rounds.\n10: Output the value x as the cut value.\nIf the round-based output satisfies the monotonicity property, the cut value always exists.\nWe then show how to find the cut value for a selected agent k in Algorithm 4.\nThe correctness of Algorithm 4 is straightforward.\nTo compute the cut value, we assume that (1) the cut value ℓr for each round r can be computed in polynomial time; (2) we can solve the equation gr(x) = ℓr for x in polynomial time when the cost vector c−k is given.\nNow we consider the vertex cover problem.\nIn each round r, we select the vertex with the least weight that is incident on the lexicographically least uncovered edge.\nThis output satisfies MP.\nFor agent i, we update its cost to cri − min(cri, crj) if edge (i, j) is selected.\nIt is easy to verify that this updating rule is crossing-independent, so we can apply Algorithm 4 to compute the cut value for the vertex cover game, as shown in Algorithm 5.\nAlgorithm 5 Compute Cut Value for MWVC\nInput: A node weighted graph G = (V,
E, c) and a node k selected by Algorithm 2.\nOutput: The cut value κk(VC, c−k).\n1: For each i ∈ V, set w(i) = ci.\n2: Set w(k) = ∞, pk = 0 and V′ = ∅.\n3: while V′ is not a vertex cover do\n4: Pick an uncovered edge (i, j) with the least lexicographic order among all uncovered edges.\n5: Set m = min(w(i), w(j)).\n6: Update w(i) = w(i) − m and w(j) = w(j) − m.\n7: If w(i) = 0, add i to V′; if w(j) = 0, add j to V′.\n8: If i == k or j == k then set pk = pk + m.\n9: Output pk as the cut value κk(VC, c−k).\n4.3 Complex Combinations\nIn subsection 4.1, we discussed how to find the cut value function when the output of the binary demand game is a simple combination of some outputs whose cut values can be computed through other means (typically VCG).\nHowever, some algorithms cannot be decomposed in the way described in subsection 4.1.\nNext we present a more complex way to combine allocation rules and, as one might expect, the way to find the cut value is also more complicated.\nAssume that there are n agents 1 ≤ i ≤ n with cost vector c, and that there are m binary demand games Gi with objective functions fi(o, c), settings Si and allocation rules ψi, where i = 1, 2, · · ·, m.\nThere is another binary demand game with setting S and allocation rule O, whose input is a cost vector d = (d1, d2, · · ·, dm).\nLet f be the function vector (f1, f2, · · ·, fm), ψ be the allocation rule vector (ψ1, ψ2, · · ·, ψm) and S be the setting vector (S1, S2, · · ·, Sm).\nFor notational simplicity, we define Fi(c) = fi(ψi(c), c), for each 1 ≤ i ≤ m, and F(c) = (F1(c), F2(c), · · ·, Fm(c)).\nLet us see a concrete example of these combinations.\nConsider a link weighted graph G = (V, E, c) and a subset of q nodes Q ⊆ V.\nThe Steiner tree problem is to find a set of links with minimum total cost to
connect Q.\nOne way to find an approximation of the Steiner tree is as follows: (1) build a virtual complete graph H using Q as its vertices, where the cost of each edge (i, j) is the cost of LCP(i, j, c) in graph G; (2) build the minimum spanning tree of H, denoted as MST(H); (3) an edge of G is selected iff it is selected in some LCP(i, j, c) and edge (i, j) of H is selected in MST(H).\nIn this game, we define q(q − 1)/2 games Gi,j, where i, j ∈ Q, with objective function fi,j(o, c) being the minimum cost of connecting i and j in graph G, setting Si,j being the original graph G, and allocation rule LCP(i, j, c).\nThe game G corresponds to the MST game on graph H.\nThe costs of the pairwise q(q − 1)/2 shortest paths define the input vector d = (d1, d2, · · ·, dm) for the game MST.\nMore details will be given in Section 5.2.\nDEFINITION 3.\nGiven an allocation rule O and setting S, an objective function vector f, an allocation rule vector ψ and setting vector S, we define a compound binary demand game with setting S and output O ◦ F as (O ◦ F)i(c) = ∨_{j=1}^m (Oj(F(c)) ∧ ψj,i(c)).\nThe allocation rule of the above definition can be interpreted as follows.\nAn agent i is selected if and only if there is a j such that (1) i is selected in ψj(c), and (2) the allocation rule O selects index j under cost profile F(c).\nFor simplicity, we will use O ◦ F to denote the output of this compound binary demand game.\nNotice that a truthful payment scheme using O ◦ F as output exists if and only if it satisfies the monotonicity property.\nTo study when O ◦ F satisfies MP, several necessary definitions are in order.\nDEFINITION 4.\nFunction Monotonicity Property (FMP) Given an objective function g and an allocation rule O, a function H(c) = g(O(c), c) is said to satisfy the function monotonicity property if, given fixed c−i, it satisfies:\n1.\nWhen Oi (c) =
0, H(c) does not increase with ci.\n2.\nWhen Oi(c) = 1, H(c) does not decrease with ci.\nFrom the definition of the strong monotonicity property, we have Lemma 1 directly.\nWe can now give a sufficient condition for O ◦ F to satisfy the monotonicity property.\nTHEOREM 9.\nIf for every i ∈ [1, m], Fi satisfies FMP, ψi satisfies MP, and the output O satisfies SMP, then O ◦ F satisfies MP.\nPROOF.\nAssuming that for cost vector c we have (O ◦ F)i(c) = 1, we should prove that for any cost vector c′ = c|i c′i with c′i < ci, (O ◦ F)i(c′) = 1.\nNoticing that (O ◦ F)i(c) = 1, without loss of generality, we assume that Ok(F(c)) = 1 and ψk,i(c) = 1 for some index 1 ≤ k ≤ m.\nNow consider the output O with the cost vector F(c′)|k Fk(c).\nThere are two scenarios, which will be studied one by one as follows.\nOne scenario is that index k is not chosen by the output function O. From Lemma 1, there must exist j ≠ k such that inequality (1) holds.\nWe then prove that agent i will be selected in the output ψj(c′), i.e., ψj,i(c′) = 1.\nIf not, then since ψj satisfies MP, we have ψj,i(c) = ψj,i(c′) = 0 from c′i < ci.\nSince Fj satisfies FMP, we know Fj(c′) > Fj(c), which contradicts inequality (1).\nConsequently, we have ψj,i(c′) = 1.\nFrom Equation (2), the fact that index k is not selected by allocation rule O, and the definition of SMP, we have Oj(F(c′)) = 1.\nThus, agent i is selected by O ◦ F because Oj(F(c′)) = 1 and ψj,i(c′) = 1.\nThe other scenario is that index k is chosen by the output function O.
First, agent i is chosen in ψk(c′), since the output ψk satisfies the monotonicity property, c′i < ci and ψk,i(c) = 1.\nSecond, since the function Fk satisfies FMP, we know that Fk(c′) ≤ Fk(c).\nRemember that output O satisfies SMP; thus we can obtain Ok(F(c′)) = 1 from the fact that Ok(F(c′)|k Fk(c)) = 1 and Fk(c′) ≤ Fk(c).\nConsequently, agent i will also be selected in the final output O ◦ F.\nThis finishes our proof.\nThis theorem implies that there is a cut value for the compound output O ◦ F.\nWe now discuss how to find the cut value for this output.\nBelow we give an algorithm to calculate κi(O ◦ F) when (1) O satisfies SMP, (2) each ψj satisfies MP, and (3) for fixed c−i, Fj(c) is a constant, say hj, when ψj,i(c) = 0, and Fj(c) increases when ψj,i(c) = 1.\nNotice that hj can easily be computed by setting ci = ∞, since ψj satisfies the monotonicity property.\nGiven i and fixed c−i, we define (Fij)−1(y) as the smallest x such that Fj(c|i x) = y.\nFor simplicity, we denote (Fij)−1 as Fj−1 if no confusion arises when i is a fixed agent.\nIn this paper, we assume that given any y, we can find such an x in polynomial time.\nAlgorithm 6 Find Cut Value for Compound Method O ◦ F\nInput: Allocation rule O, objective function vector F and inverse function vector F−1 = {F1−1, · · ·, Fm−1}, allocation rule vector ψ and fixed c−i.\nOutput: Cut value for agent i based on O ◦ F.\n1: for 1 ≤ j ≤ m do\n2: Compute the output ψj(c).\n3: Compute hj = Fj(c|i ∞).\n4: Use h = (h1, h2, · · ·, hm) as the input for the output function O. Denote τj = κj(O, h−j) as the cut value function of output O based on input h.
5: for 1 ≤ j ≤ m do\n6: Set κi,j = Fj−1(min{τj, hj}).\n7: The cut value for i is κi(O ◦ F, c−i) = max_{j=1}^m κi,j.\nTHEOREM 10.\nAlgorithm 6 computes the correct cut value for agent i based on the allocation rule O ◦ F.\nPROOF.\nIn order to prove the correctness of the cut value function calculated by Algorithm 6, we prove the following two cases.\nFor convenience, we will use κi to represent κi(O ◦ F, c−i) if no confusion is caused.\nFirst, if di < κi then (O ◦ F)i(c|i di) = 1.\nWithout loss of generality, we assume that κi = κi,j for some j.\nSince function Fj satisfies FMP and ψj,i(c|i di) = 1, we have Fj(c|i di) ≤ Fj(c|i κi).\nNoticing that di < κi,j, from the definition of κi,j = Fj−1(min{τj, hj}) we have (1) ψj,i(c|i di) = 1, and (2) Fj(c|i di) < τj, due to the fact that Fj is a non-decreasing function when j is selected.\nThus, from the monotonicity property of O and the fact that τj is the cut value for output O, we have Equation (3).\nIf Oj(F(c|i di)) = 1 then (O ◦ F)i(c|i di) = 1.\nOtherwise, since O satisfies SMP, Lemma 1 and Equation (3) imply that there exists at least one index k such that Ok(F(c|i di)) = 1 and Fk(c|i di) < hk.\nNote that Fk(c|i di) < hk implies that i is selected in ψk(c|i di), since hk = Fk(c|i ∞).\nIn other words, agent i is selected in O ◦ F.\nSecond, if di ≥ κi(O ◦ F, c−i) then (O ◦ F)i(c|i di) = 0.\nAssume for the sake of contradiction that (O ◦ F)i(c|i di) = 1.\nThen there exists an index 1 ≤ j ≤ m such that Oj(F(c|i di)) = 1 and ψj,i(c|i di) = 1.\nRemember that hk ≥ Fk(c|i di) for any k.
Thus, from the fact that O satisfies SMP, when changing the cost vector from F(c|i di) to h|j Fj(c|i di), we still have Oj(h|j Fj(c|i di)) = 1.\nThis implies that Fj(c|i di) < τj.\nCombining the above inequality and the fact that Fj(c|i di) < hj, we have Fj(c|i di) < min{hj, τj}.\nThis implies di < Fj−1(min{hj, τj}) = κi,j ≤ κi(O ◦ F, c−i), which is a contradiction.\nThis finishes our proof.\nIn most applications, the allocation rule ψj implements the objective function fj and fj is utilitarian.\nThus, we can compute the inverse Fj−1 efficiently.\nAnother issue is that the conditions under which Algorithm 6 applies may seem restrictive.\nHowever, many games in practice satisfy these properties, and here we show how to derive the MAX combination in [13].\nAssume A1 and A2 are two allocation rules for a single-minded combinatorial auction; then the combination MAX(A1, A2) returns the allocation with the larger welfare.\nIf algorithms A1 and A2 satisfy MP and FMP, the operation max(x, y), which returns the larger of x and y, satisfies SMP.\nFrom Theorem 9 we obtain that the combination MAX(A1, A2) also satisfies MP.\nFurther, the cut value of the MAX combination can be found by Algorithm 6.\nAs we will show in Section 5, the complex combination can be applied to some more complicated problems.\n5.\nCONCRETE EXAMPLES\n5.1 Set Cover\nIn the set cover problem, there is a set U of m elements to be covered, and each agent 1 ≤ i ≤ n can cover a subset of elements Si with a cost ci.\nLet S = {S1, S2, · · ·, Sn} and c = (c1, c2, · · ·, cn).\nWe want to find a subset of agents D such that U ⊆ ∪_{i∈D} Si.\nThe selected collection is called a set cover for U.\nThe social efficiency of the output D is defined as Σ_{i∈D} ci, which is the objective function to be minimized.\nClearly, this objective is utilitarian and thus the VCG mechanism can be applied
if we can find the subset of S that covers U with the minimum cost.\nIt is well-known that finding the optimal solution is NP-hard.\nIn [4], an algorithm with approximation ratio Hm has been proposed, and it has been proved that this is the best ratio possible for the set cover problem.\nFor completeness of presentation, we review their method here.\nAlgorithm 7 Greedy Set Cover (GSC)\nInput: Agent i's covered subset Si and cost ci (1 ≤ i ≤ n).\nOutput: A set of agents that can cover all elements.\n1: Initialize r = 1, T1 = ∅, and R = ∅.\n2: while Tr ≠ U do\n3: Find the set Sj with the minimum density cj/|Sj − Tr|.\n4: Set Tr+1 = Tr ∪ Sj and R = R ∪ {j}.\n5: r = r + 1.\n6: Output R.\nLet GSC(S) be the sets selected by Algorithm 7.\nNotice that the output set is a function of S and c.\nSome works assume that the type of an agent is only ci, i.e., Si is assumed to be public knowledge.\nHere, we consider a more general case in which the type of an agent is (Si, ci).\nIn other words, we assume that every agent i can lie not only about its cost ci but also about the set Si.\nThis problem now looks similar to the combinatorial auction with single-minded bidders studied in [12], but with the following differences: in the set cover problem we want to cover all the elements and the chosen sets can overlap, while in the combinatorial auction the chosen sets are disjoint.\nWe can show that the mechanism M = (GSC, PVCG), which uses Algorithm 7 to find a set cover and applies the VCG mechanism to compute the payment to the selected agents, is not truthful.\nObviously, the set cover problem is a binary demand game.\nFor the moment, we assume that agent i is not able to lie about Si.\nWe will drop this assumption later.\nWe show how to design a truthful mechanism by applying our general framework.\n1.\nCheck the monotonicity property: The output of Algorithm 7 is a round-based output.\nThus, for an agent i, we first focus on the output of one
round r.\nIn round r, if agent i is selected then its density ci/|Si − Tr| is the minimum among all remaining agents; reporting a smaller cost only decreases this density.\nConsequently, agent i is still selected in round r, which means the output of round r satisfies MP.\nNow we look into the updating rules.\nIn every round, we only update Tr+1 = Tr ∪ Sj and R = R ∪ {j}, which is obviously crossing-independent.\nThus, by applying Theorem 8, we know the output of Algorithm 7 satisfies MP.\n2.\nFind the cut value: To calculate the cut value for agent i with fixed cost vector c−i, we follow the steps in Algorithm 4.\nFirst, we set ci = ∞ and apply Algorithm 7.\nLet ir be the agent selected in round r and T−ir+1 be the corresponding set.\nThen the cut value of round r is fr = cir · |Si − T−ir| / |Sir − T−ir|.\nRemember that the updating rule only updates the game setting but not the cost of the agent; thus we have gr(x) = x for 1 ≤ r ≤ t. Therefore, the final cut value for agent i is κi = max_{1≤r≤t} fr.\nThe payment to an agent i is κi if i is selected; otherwise its payment is 0.\nWe now consider the scenario when agent i can lie about Si.\nAssume that agent i cannot lie upward, i.e., it can only report a set S′i ⊆ Si.\nWe argue that agent i will not lie about its elements Si.\nNotice that the cut value computed for round r does not increase when the reported set S′i shrinks.\nThus, lying about Si will not increase the cut value of any round, and hence will not improve agent i's utility.\n5.2 Link Weighted Steiner Trees\nConsider any link weighted network G = (V, E, c), where E = {e1, e2, · · ·, em} is the set of links and ci is the weight of link ei.\nThe link weighted Steiner tree problem is to find a tree rooted at a source node s spanning a given set of nodes Q = {q1, q2, · · ·, qk} ⊂ V.\nFor simplicity, we assume that qi = vi, for 1 ≤ i ≤ k.\nHere the links are the agents.\nThe total cost of the links in a subgraph H ⊆ G is called the weight of H, denoted as ω(H).\nIt is NP-hard to find the minimum cost multicast tree when given an arbitrary link
weighted graph G [17, 20].\nThe currently best polynomial-time method has approximation ratio 1 + (ln 3)/2 ≈ 1.55 [17].\nHere, we review and discuss the first approximation method, by Takahashi and Matsuyama [20].\nAlgorithm 8 Find Link-Weighted Steiner Tree (LST)\nInput: Network G = (V, E, c), where c is the cost vector for the link set E, source node s and receiver set Q.\nOutput: A tree LST rooted at s and spanning all receivers.\n1: Set r = 1, G1 = G, Q1 = Q and s1 = s.\n2: repeat\n3: In graph Gr, find the receiver, say qi, that is closest to the source s, i.e., LCP(s, qi, c) has the least cost among the shortest paths from s to all receivers in Qr.\n4: Select all links on LCP(s, qi, c) as relay links and set their cost to 0.\nThe new graph is denoted as Gr+1.\n5: Set tr = qi and Pr = LCP(s, qi, c).\n6: Set Qr+1 = Qr \ {qi} and r = r + 1.\n7: until all receivers are spanned.\nHereafter, let LST(G) be the final tree constructed using the above method.\nIt is shown in [24] that the mechanism M = (LST, pVCG) is not truthful, where pVCG is the payment calculated based on the VCG mechanism.\nWe then show how to design a truthful payment scheme using our general framework.\nObserve that the output Pr, for any round r, satisfies MP, and the update rule for every round satisfies crossing-independence.\nThus, from Theorem 8, the round-based output LST satisfies MP.\nIn round r, the cut value for a link ei can be obtained by using the VCG mechanism.\nWe set ci = ∞ and execute Algorithm 8.\nLet w−ir be the cost of the path Pr selected in the r-th round, and let Πir be the shortest path selected in round r if the cost ci is temporarily set to −∞.\nThen the cut value for round r is fr = w−ir − |Πir(c−i)|, where |Πir(c−i)| is the cost of the path Πir excluding link ei.\nUsing Algorithm 4, we obtain the final cut value for agent i: κi(LST, c−i) = maxr {fr}.\nThus, the payment to a
link ei is κi(LST, d−i) if its reported cost di < κi(LST, d−i); otherwise, its payment is 0.\n5.3 Virtual Minimal Spanning Trees\nTo connect a given set of receivers to the source node, besides the Steiner tree constructed by the algorithms described before, a virtual minimum spanning tree is also often used.\nAssume that Q is the set of receivers, including the sender, and that the nodes in the node-weighted graph are all agents.\nThe virtual minimum spanning tree is constructed as follows.\nAlgorithm 9 Construct VMST\n1: for all pairs of receivers qi, qj ∈ Q do\n2: Calculate the least cost path LCP(qi, qj, d).\n3: Construct a virtual complete link weighted graph K(d) using Q as its node set, where the link qiqj corresponds to the least cost path LCP(qi, qj, d), and its weight is w(qiqj) = |LCP(qi, qj, d)|.\n4: Build the minimum spanning tree on K(d), denoted as VMST(d).\n5: for every virtual link qiqj in VMST(d) do\n6: Find the corresponding least cost path LCP(qi, qj, d) in the original network.\n7: Mark the agents on LCP(qi, qj, d) selected.\nThe mechanism M = (VMST, pVCG) is not truthful [24], where the payment pVCG to a node is based on the VCG mechanism.\nWe then show how to design a truthful mechanism based on the framework we described.\n1.\nCheck the monotonicity property: Remember that in the complete graph K(d), the weight of a link qiqj is |LCP(qi, qj, d)|.\nIn other words, we implicitly defined |Q|(|Q| − 1)/2 functions fi,j, for all i < j with qi ∈ Q and qj ∈ Q, where fi,j(d) = |LCP(qi, qj, d)|.\nWe can show that the function fi,j(d) = |LCP(qi, qj, d)| satisfies FMP, LCP satisfies MP, and the output MST satisfies SMP.\nFrom Theorem 9, the allocation rule VMST satisfies the monotonicity property.\n2.\nFind the cut value: Notice that VMST is the combination of MST and the functions fi,j, so the cut value for VMST can be computed based on Algorithm 6 as follows.\n(a) Given a link
weighted complete graph K(d) on Q, we find the cut value function for edge ek = (qi, qj) based on MST.\nGiven a spanning tree T and a pair of terminals p and q, clearly there is a unique path connecting them on T.\nWe denote this path as ΠT(p, q), and the edge with the maximum length on this path as LE(p, q, T).\nThus, the cut value can be represented as κk(MST, d) = |LE(qi, qj, MST(d|k ∞))|.\n(b) We find the value-cost function for LCP.\nAssume vk ∈ LCP(qi, qj, d); then the value-cost function is xk = yk − |LCPvk(qi, qj, d|k 0)|.\nHere, LCPvk(qi, qj, d) is the least cost path between qi and qj that passes through node vk.\n(c) Remove vk and calculate the graph K(d|k ∞).\nSet h(i, j) = |LCP(qi, qj, d|k ∞)| for every pair of nodes i ≠ j and let h = {h(i, j)} be the corresponding vector.\nThen it is easy to show that τ(i, j) = |LE(qi, qj, MST(h|(i,j) ∞))| is the cut value for output VMST.\nIt is easy to verify that min{h(i, j), τ(i, j)} = |LE(qi, qj, MST(h))|.\nThus, we know κ(i,j)k(VMST, d) is |LE(qi, qj, MST(h))| − |LCPvk(qi, qj, d|k 0)|.\nThe cut value for agent k is κk(VMST, d−k) = max_{i≠j} κ(i,j)k(VMST, d−k).\n3.\nWe pay agent k κk(VMST, d−k) if and only if k is selected in VMST(d); else we pay it 0.\n5.4 Combinatorial Auctions\nLehmann et al.
[12] studied how to design an efficient truthful mechanism for single-minded combinatorial auctions.\nIn a single-minded combinatorial auction, there is a set of items S to be sold and a set of agents 1 ≤ i ≤ n who want to buy some of the items: agent i wants to buy a subset Si ⊆ S at a maximum price mi.\nA single-minded bidder i declares a bid bi = ⟨S′i, ai⟩ with S′i ⊆ S and ai ∈ R+.\nTwo bids ⟨S′i, ai⟩ and ⟨S′j, aj⟩ conflict if S′i ∩ S′j ≠ ∅.\nGiven the bids b1, b2, · · ·, bn, they gave a greedy round-based algorithm as follows.\nFirst the bids are sorted by some criterion (ai/|S′i|^{1/2} is used in [12]) in decreasing order; let L be the list of sorted bids.\nThe first bid is granted.\nThen the algorithm examines each bid of L in order and grants the bid if it does not conflict with any of the bids previously granted.\nIf it does, it is denied.\nThey proved that this greedy allocation scheme using criterion ai/|S′i|^{1/2} approximates the optimal allocation within a factor of √m, where m is the number of goods in S.\nIn the auction setting, we have ci = −ai.\nIt is easy to verify that the output of the greedy algorithm is a round-based output.\nRemember that after bidder j is selected in round r, every bidder that conflicts with j will not be selected in the following rounds.\nThis is equivalent to updating the bid of every bidder conflicting with j to 0, which satisfies crossing-independence.\nIn addition, in any round, if bidder i is selected with ai then it will still be selected when it declares a′i > ai.\nThus, every round satisfies MP, and the cut value for bidder i is |S′i|^{1/2} · aj / |S′j|^{1/2}, where bj is the first bid that has been denied but would have been selected were it not for the presence of bidder i.
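The greedy allocation and the cut-value payment just described can be sketched as follows. This is a hedged sketch, not the code of [12]; the bid encoding as (item set, amount) pairs and the helper names are our own assumptions.

```python
import math

def greedy_winners(bids, order, skip=None):
    """Grant bids greedily in the given order, skipping conflicts (and `skip`)."""
    winners, taken = [], set()
    for k in order:
        if k == skip:
            continue
        items, _ = bids[k]
        if not (items & taken):       # no conflict with previously granted bids
            winners.append(k)
            taken |= items
    return winners

def greedy_auction(bids):
    """bids: list of (frozenset_of_items, amount). Returns (winners, payments)."""
    # Sort by a_i / |S'_i|^(1/2) in decreasing order.
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i][1] / math.sqrt(len(bids[i][0])),
                   reverse=True)
    winners = greedy_winners(bids, order)
    payments = {}
    for i in winners:
        # b_j = first bid granted when i is absent but denied when i is present;
        # i's payment is then |S'_i|^(1/2) * a_j / |S'_j|^(1/2), else 0.
        without_i = set(greedy_winners(bids, order, skip=i))
        pay = 0.0
        for j in order:
            if j != i and j in without_i and j not in winners:
                pay = (math.sqrt(len(bids[i][0])) * bids[j][1]
                       / math.sqrt(len(bids[j][0])))
                break
        payments[i] = pay
    return winners, payments
```

For instance, with bids ({a, b}, 4), ({b}, 2), ({c}, 1), the first and third bids are granted; the first pays √2 · 2/√1 = 2√2 (set by the denied second bid) and the third pays 0.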
Thus, the payment by agent i is |S′i|^{1/2} · aj / |S′j|^{1/2} if i is granted, and 0 otherwise.\nThis payment scheme is exactly the same as the payment scheme in [12].\n6.\nCONCLUSIONS\nIn this paper, we have studied how to design a truthful mechanism M = (O, P) for a given allocation rule O for a binary demand game.\nWe first showed that the allocation rule O satisfying MP is a necessary and sufficient condition for a truthful mechanism M to exist.\nWe then formulated a general framework for designing a payment P such that the mechanism M = (O, P) is truthful and computable in polynomial time.\nWe further presented several general composition-based techniques to compute P efficiently for various allocation rules O.\nSeveral concrete examples were discussed to demonstrate our general framework for designing P and our composition-based techniques for computing P in polynomial time.\nIn this paper, we have concentrated on how to compute P in polynomial time.\nOur algorithms do not necessarily have the optimal running time for computing P given O.\nIt would be of interest to design algorithms that compute P in optimal time.\nWe have made some progress in this research direction in [22] by providing an algorithm that computes the payments for unicast in a node weighted graph in optimal O(n log n + m) time.\nAnother research direction is to design an approximation allocation rule O satisfying MP with a good approximation ratio for a given binary demand game.\nMany works [12, 13] in the mechanism design literature are in this direction.\nWe point out here that the goal of this paper is not to design a better allocation rule for a problem, but to design an algorithm to compute the payments efficiently when O is given.\nIt would be of significance to design allocation rules with good approximation ratios such that a given binary demand game has a computationally efficient payment scheme.\nIn this paper, we have studied mechanism design for binary demand games.\nHowever, some problems
cannot be directly formulated as binary demand games.\nThe job scheduling problem in [2] is such an example.\nFor this problem, a truthful payment scheme P exists for an allocation rule O if and only if the workload assigned by O is monotonic in a certain manner.\nIt would be of interest to generalize our framework for designing a truthful payment scheme for a binary demand game to non-binary demand games.\nTowards this research direction, Theorem 4 can be extended to a general allocation rule O, whose range is R+.\nThe remaining difficulty is then how to compute the payment P under mild assumptions about the valuations if a truthful mechanism M = (O, P) does exist.","keyphrases":["truth mechan","binari demand game","demand game","mechan design","object function","monoton properti","combin","vickrei-clark-grove","composit-base techniqu","selfish wireless network","price","cut valu function","selfish agent"],"prmu":["P","P","P","P","P","P","P","U","M","U","U","M","M"]} {"id":"H-61","title":"Impedance Coupling in Content-targeted Advertising","abstract":"The current boom of the Web is associated with the revenues originated from on-line advertising. While search-based advertising is dominant, the association of ads with a Web page (during user navigation) is becoming increasingly important. In this work, we study the problem of associating ads with a Web page, referred to as content-targeted advertising, from a computer science perspective. We assume that we have access to the text of the Web page, the keywords declared by an advertiser, and a text associated with the advertiser's business. Using no other information and operating in fully automatic fashion, we propose ten strategies for solving the problem and evaluate their effectiveness. 
Our methods indicate that a matching strategy that takes into account the semantics of the problem (referred to as AAK for ads and keywords) can yield gains in average precision figures of 60% compared to a trivial vector-based strategy. Further, a more sophisticated impedance coupling strategy, which expands the text of the Web page to reduce vocabulary impedance with regard to an advertisement, can yield extra gains in average precision of 50%. These are first results. They suggest that great accuracy in content-targeted advertising can be attained with appropriate algorithms.","lvl-1":"Impedance Coupling in Content-targeted Advertising Berthier Ribeiro-Neto Computer Science Department Federal University of Minas Gerais Belo Horizonte, Brazil berthier@dcc.ufmg.br Marco Cristo Computer Science Department Federal University of Minas Gerais Belo Horizonte, Brazil marco@dcc.ufmg.br Paulo B. Golgher Akwan Information Technologies Av.\nAbraão Caram 430 - Pampulha Belo Horizonte, Brazil golgher@akwan.com.br Edleno Silva de Moura Computer Science Department Federal University of Amazonas Manaus, Brazil edleno@dcc.ufam.edu.br ABSTRACT The current boom of the Web is associated with the revenues originated from on-line advertising.\nWhile search-based advertising is dominant, the association of ads with a Web page (during user navigation) is becoming increasingly important.\nIn this work, we study the problem of associating ads with a Web page, referred to as content-targeted advertising, from a computer science perspective.\nWe assume that we have access to the text of the Web page, the keywords declared by an advertiser, and a text associated with the advertiser's business.\nUsing no other information and operating in fully automatic fashion, we propose ten strategies for solving the problem and evaluate their effectiveness.\nOur methods indicate that a matching strategy that takes into account the semantics of the problem (referred to as AAK for ads and keywords) 
can yield gains in average precision figures of 60% compared to a trivial vector-based strategy.\nFurther, a more sophisticated impedance coupling strategy, which expands the text of the Web page to reduce vocabulary impedance with regard to an advertisement, can yield extra gains in average precision of 50%.\nThese are first results.\nThey suggest that great accuracy in content-targeted advertising can be attained with appropriate algorithms.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; I.5.3 [Pattern Recognition]: Applications-Text processing General Terms Algorithms, Experimentation 1.\nINTRODUCTION The emergence of the Internet has opened up new marketing opportunities.\nIn fact, a company has now the possibility of showing its advertisements (ads) to millions of people at a low cost.\nDuring the 90``s, many companies invested heavily on advertising in the Internet with apparently no concerns about their investment return [16].\nThis situation radically changed in the following decade when the failure of many Web companies led to a dropping in supply of cheap venture capital and a considerable reduction in on-line advertising investments [15,16].\nIt was clear then that more effective strategies for on-line advertising were required.\nFor that, it was necessary to take into account short-term and long-term interests of the users related to their information needs [9,14].\nAs a consequence, many companies intensified the adoption of intrusive techniques for gathering information of users mostly without their consent [8].\nThis raised privacy issues which stimulated the research for less invasive measures [16].\nMore recently, Internet information gatekeepers as, for example, search engines, recommender systems, and comparison shopping services, have employed what is called paid placement strategies [3].\nIn such methods, an advertiser company is given prominent positioning in advertisement lists 
in return for a placement fee.\nAmongst these methods, the most popular one is a non-intrusive technique called keyword targeted marketing [16].\nIn this technique, keywords extracted from the user``s search query are matched against keywords associated with ads provided by advertisers.\nA ranking of the ads, which also takes into consideration the amount that each advertiser is willing to pay, is computed.\nThe top ranked ads are displayed in the search result page together with the answers for the user query.\nThe success of keyword targeted marketing has motivated information gatekeepers to offer their advertisement services in different contexts.\nFor example, as shown in Figure 1, relevant ads could be shown to users directly in the pages of information portals.\nThe motivation is to take advantage of 496 the users immediate information interests at browsing time.\nThe problem of matching ads to a Web page that is browsed, which we also refer to as content-targeted advertising [1], is different from that of keyword marketing.\nIn this case, instead of dealing with users'' keywords, we have to use the contents of a Web page to decide which ads to display.\nFigure 1: Example of content-based advertising in the page of a newspaper.\nThe middle slice of the page shows the beginning of an article about the launch of a DVD movie.\nAt the bottom slice, we can see advertisements picked for this page by Google``s content-based advertising system, AdSense.\nIt is important to notice that paid placement advertising strategies imply some risks to information gatekeepers.\nFor instance, there is the possibility of a negative impact on their credibility which, at long term, can demise their market share [3].\nThis makes investments in the quality of ad recommendation systems even more important to minimize the possibility of exhibiting ads unrelated to the user``s interests.\nBy investing in their ad systems, information gatekeepers are investing in the maintenance of their 
credibility and in the reinforcement of a positive user attitude towards the advertisers and their ads [14].\nFurther, that can translate into higher clickthrough rates that lead to an increase in revenues for information gatekeepers and advertisers, with gains to all parts [3].\nIn this work, we focus on the problem of content-targeted advertising.\nWe propose new strategies for associating ads with a Web page.\nFive of these strategies are referred to as matching strategies.\nThey are based on the idea of matching the text of the Web page directly to the text of the ads and its associated keywords.\nFive other strategies, which we here introduce, are referred to as impedance coupling strategies.\nThey are based on the idea of expanding the Web page with new terms to facilitate the task of matching ads and Web pages.\nThis is motivated by the observation that there is frequently a mismatch between the vocabulary of a Web page and the vocabulary of an advertisement.\nWe say that there is a vocabulary impedance problem and that our technique provides a positive effect of impedance coupling by reducing the vocabulary impedance.\nFurther, all our strategies rely on information that is already available to information gatekeepers that operate keyword targeted advertising systems.\nThus, no other data from the advertiser is required.\nUsing a sample of a real case database with over 93,000 ads and 100 Web pages selected for testing, we evaluate our ad recommendation strategies.\nFirst, we evaluate the five matching strategies.\nThey match ads to a Web page using a standard vector model and provide what we may call trivial solutions.\nOur results indicate that a strategy that matches the ad plus its keywords to a Web page, requiring the keywords to appear in the Web page, provides improvements in average precision figures of roughly 60% relative to a strategy that simply matches the ads to the Web page.\nSuch strategy, which we call AAK (for ads and keywords), is then 
taken as our baseline.\nFollowing we evaluate the five impedance coupling strategies.\nThey are based on the idea of expanding the ad and the Web page with new terms to reduce the vocabulary impedance between their texts.\nOur results indicate that it is possible to generate extra improvements in average precision figures of roughly 50% relative to the AAK strategy.\nThe paper is organized as follows.\nIn section 2, we introduce five matching strategies to solve content-targeted advertising.\nIn section 3, we present our impedance coupling strategies.\nIn section 4, we describe our experimental methodology and datasets and discuss our results.\nIn section 5 we discuss related work.\nIn section 6 we present our conclusions.\n2.\nMATCHING STRATEGIES Keyword advertising relies on matching search queries to ads and its associated keywords.\nContext-based advertising, which we address here, relies on matching ads and its associated keywords to the text of a Web page.\nGiven a certain Web page p, which we call triggering page, our task is to select advertisements related to the contents of p. 
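As a concrete illustration of this task, the sketch below ranks a set of ads against a triggering page and keeps the top slots. This is our own illustrative code, not part of the paper's system: `overlap_score` is a deliberately trivial stand-in for the similarity-based strategies defined next, and all names are hypothetical.

```python
# Illustrative sketch of the content-targeted advertising task: given a
# triggering page p and a set of ads A, score each ad against the page
# and keep the top-ranked ones. All names here are hypothetical.

def rank_ads(page_text, ads, score, slots=3):
    """Return the `slots` highest-scoring ads for the triggering page."""
    scored = [(score(page_text, ad), ad) for ad in ads]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [ad for s, ad in scored[:slots] if s > 0]

def overlap_score(page_text, ad):
    """Trivial word-overlap score, standing in for sim(p, a_i)."""
    page_terms = set(page_text.lower().split())
    ad_terms = set(ad["text"].lower().split())
    return len(page_terms & ad_terms)

ads = [
    {"text": "buy argentinean wines online"},
    {"text": "cheap flights to london"},
    {"text": "red wines from bordeaux grapes"},
]
page = "argentinean wines produced from bordeaux grapes"
top = rank_ads(page, ads, overlap_score, slots=2)
```

Any of the matching strategies defined below can be plugged in as the `score` argument; only the scoring function changes, not the ranking loop.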
Without loss of generality, we consider that an advertisement a_i is composed of a title, a textual description, and a hyperlink. To illustrate, for the first ad by Google shown in Figure 1, the title is "Star Wars Trilogy Full", the description is "Get this popular DVD free. Free w/ free shopping. Sign up now", and the hyperlink points to the site www.freegiftworld.com. Advertisements can be grouped by advertisers into groups called campaigns, such that a campaign can have one or more advertisements.

Given our triggering page p and a set A of ads, a simple way of ranking a_i ∈ A with regard to p is by matching the contents of p to the contents of a_i. For this, we use the vector space model [2], as discussed next. In the vector space model, queries and documents are represented as weighted vectors in an n-dimensional space. Let w_iq be the weight associated with term t_i in the query q and w_ij the weight associated with term t_i in the document d_j. Then q = (w_1q, w_2q, ..., w_iq, ..., w_nq) and d_j = (w_1j, w_2j, ..., w_ij, ..., w_nj) are the weighted vectors used to represent the query q and the document d_j. These weights can be computed using classic tf-idf schemes. In such schemes, weights are taken as the product between factors that quantify the importance of a term in a document (given by the term frequency, or tf, factor) and its rarity in the whole collection (given by the inverse document frequency, or idf, factor); see [2] for details. The ranking of the query q with regard to the document d_j is computed by the cosine similarity formula, that is, the cosine of the angle between the two corresponding vectors:

    sim(q, d_j) = \frac{q \cdot d_j}{|q| \times |d_j|}
                = \frac{\sum_{i=1}^{n} w_{iq} \, w_{ij}}{\sqrt{\sum_{i=1}^{n} w_{iq}^2} \, \sqrt{\sum_{i=1}^{n} w_{ij}^2}}    (1)

By considering p as the query and a_i as the document, we can rank the ads with regard to the Web page p. This is our first matching strategy. It is represented by the function AD, given by:

    AD(p, a_i) = sim(p, a_i)

where AD stands for "direct match of the ad" (composed of title and description) and sim(p, a_i) is computed according to Eq. (1).

In our second method, we use another source of evidence provided by the advertisers: the keywords. With each advertisement a_i an advertiser associates a keyword k_i, which may be composed of one or more terms. We denote the association between an advertisement a_i and a keyword k_i as the pair (a_i, k_i) ∈ K, where K is the set of associations made by the advertisers. In keyword targeted advertising, such keywords are used to match the ads to the user queries. Here, we use them to match ads to the Web page p. This provides our second method for ad matching:

    KW(p, a_i) = sim(p, k_i)

where (a_i, k_i) ∈ K and KW stands for "match the ad keywords".

We notice that most of the keywords selected by advertisers are also present in the ads associated with those keywords. For instance, in our advertisement test collection, this is true for 90% of the ads. Thus, instead of using the keywords as matching devices, we can use them to emphasize the main concepts in an ad, in an attempt to improve our AD strategy. This leads to our third method of ad matching:

    AD_KW(p, a_i) = sim(p, a_i ∪ k_i)

where (a_i, k_i) ∈ K and AD_KW stands for "match the ad and its keywords".

Finally, it is important to notice that the keyword k_i associated with a_i might not appear at all in the triggering page p, even when a_i is highly ranked. However, if we assume that k_i summarizes the main topic of a_i according to the advertiser's viewpoint, it can be interesting to ensure its presence in p. This reasoning suggests that requiring the occurrence of the keyword k_i in the triggering page p as a condition to associate a_i with p might lead to improved results. This leads to two extra matching strategies:

    ANDKW(p, a_i) = sim(p, a_i) if k_i ⊂ p, and 0 otherwise

    AD_ANDKW(p, a_i) = AAK(p, a_i) = sim(p, a_i ∪ k_i) if k_i ⊂ p, and 0 otherwise

where (a_i, k_i) ∈ K, ANDKW stands for "match the ad keywords and force their appearance", and AD_ANDKW (or AAK, for "ads and keywords") stands for "match the ad, its keywords, and force their appearance". As we will see in our results, the best among these simple methods is AAK. Thus, it will be used as the baseline for our impedance coupling strategies, which we now discuss.

3. IMPEDANCE COUPLING STRATEGIES
Two key issues become clear as one works with the content-targeted advertising problem. First, the triggering page normally belongs to a broader contextual scope than that of the advertisements. Second, the association between a good advertisement and the triggering page might depend on a topic that is not mentioned explicitly in the triggering page.

The first issue is due to the fact that Web pages can be about any subject, whereas advertisements are concise in nature. That is, ads tend to be more topic-restricted than Web pages. The second issue is related to the fact that, as we later discuss, most advertisers place a small number of advertisements. As a result, we have few terms describing their interest areas. Consequently, these terms tend to be of a more general nature. For instance, a car shop would probably prefer to use "car" instead of "super sport" to describe its core business topic. As a consequence, many specific terms that appear in the triggering page find no match in the advertisements. To make matters worse, a page might refer to an entity or subject of the world through a label that is distinct from the label selected by an advertiser to refer to the same entity.

A consequence of these two issues is that the vocabularies of pages and ads have low intersection, even when an ad is related to a page. We refer to this problem from now on as the vocabulary impedance problem. In our experiments, we realized that this problem limits the final quality of direct matching strategies. Therefore, we studied alternatives to reduce this vocabulary impedance. For this, we propose to expand the triggering pages with new terms. Figure 2 illustrates our intuition. We already know that the addition of keywords (selected by the advertiser) to the ads leads to improved results. We say that a keyword reduces the vocabulary impedance by providing an alternative matching path. Our idea is to add new terms (words) to the Web page p to also reduce the vocabulary impedance, by providing a second alternative matching path. We refer to our expansion technique as impedance coupling. For this, we proceed as follows.

Figure 2: Addition of new terms to a Web page to reduce the vocabulary impedance.

An advertiser trying to describe a certain topic in a concise way will probably choose general terms to characterize that topic. To facilitate the matching between this ad and our triggering page p, we need to associate new general terms with p. For this, we assume that Web documents similar to the triggering page p share common topics. Therefore, by inspecting the vocabulary of these similar documents we might find good terms for better characterizing the main topics in the page p. We now describe this idea using a Bayesian network model [10,11,13], depicted in Figure 3.

Figure 3: Bayesian network model for our impedance coupling technique.

In our model, which is based on the belief network in [11], the nodes represent pieces of information in the domain. With each node is associated a binary random variable, which takes the value 1 to mean that the corresponding entity (a page or terms) is observed and, thus, relevant in our computations. In this case, we say that the information was observed. Node R represents the page r, a new representation for the triggering page p.
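The first step of this intuition, retrieving the documents of the collection most similar to the triggering page, can be sketched as below. This is an illustrative simplification, not the paper's implementation: raw term frequencies are used in place of tf-idf weights, and all names are ours.

```python
import math
from collections import Counter

# Sketch of selecting the k nearest neighbors of a triggering page in a
# Web collection by cosine similarity (Eq. 1). Raw term frequencies are
# used for simplicity; the paper uses tf-idf weights.

def cosine(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two sparse term-weight vectors."""
    dot = sum(w * b[t] for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest_neighbors(page: str, collection: list, k: int):
    """Return the k (similarity, document) pairs closest to the page."""
    p_vec = Counter(page.lower().split())
    sims = [(cosine(p_vec, Counter(d.lower().split())), d) for d in collection]
    sims.sort(key=lambda pair: pair[0], reverse=True)
    return sims[:k]
```

The vocabulary of the returned neighbors is then the candidate pool from which expansion terms are drawn, as formalized next.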
Let N be the set of the k most similar documents to the triggering page, including the triggering page p itself, in a large enough Web collection C. Root nodes D_0 through D_k represent the documents in N, that is, the triggering page D_0 and its k nearest neighbors, D_1 through D_k, among all pages in C. There is an edge from node D_j to node R if document d_j is in N. Nodes T_1 through T_m represent the terms in the vocabulary of C. There is an edge from node D_j to a node T_i if term t_i occurs in document d_j. In our model, the observation of the pages in N leads to the observation of a new representation of the triggering page p and to a set of terms describing the main topics associated with p and its neighbors.

Given these definitions, we can now use the network to determine the probability that a term t_i is a good term for representing a topic of the triggering page p. In other words, we are interested in the probability of observing the final evidence regarding a term t_i, given that the new representation of the page p has been observed, P(T_i = 1 | R = 1). To simplify our notation, we represent the probabilities P(X = 1) as P(X) and P(X = 0) as P(\bar{X}). This translates into the following equation:

    P(T_i | R) = \frac{1}{P(R)} \sum_{\mathbf{d}} P(T_i | \mathbf{d}) \, P(R | \mathbf{d}) \, P(\mathbf{d})    (2)

where \mathbf{d} represents the set of states of the document nodes. Since we are interested only in the states in which a single document d_j is observed, and P(\mathbf{d}) can be regarded as a constant, we can rewrite Eq. (2) as:

    P(T_i | R) = \frac{\nu}{P(R)} \sum_{j=0}^{k} P(T_i | d_j) \, P(R | d_j)    (3)

where d_j represents the state of the document nodes in which only document d_j is observed and ν is a constant associated with P(d_j). Eq. (3) is the general equation to compute the probability that a term t_i is related to the triggering page. We now define the probabilities P(T_i | d_j) and P(R | d_j) as follows:

    P(T_i | d_j) = \eta \, w_{ij}    (4)

    P(R | d_j) = \begin{cases} 1 - \alpha & j = 0 \\ \alpha \, sim(r, d_j) & 1 \le j \le k \end{cases}    (5)

where η is a normalizing constant, w_ij is the weight associated with term t_i in the document d_j, and sim(r, d_j) is computed according to Eq. (1), i.e., it is the cosine similarity between the triggering page and d_j. The weight w_ij is computed using a classic tf-idf scheme and is zero if term t_i does not occur in document d_j. Notice that P(\bar{T_i} | d_j) = 1 − P(T_i | d_j) and P(\bar{R} | d_j) = 1 − P(R | d_j). By defining the constant α, it is possible to determine how important the influence of the triggering page p should be on its new representation r. By substituting Eq. (4) and Eq. (5) into Eq. (3), we obtain:

    P(T_i | R) = \rho \left( (1 - \alpha) \, w_{i0} + \alpha \sum_{j=1}^{k} w_{ij} \, sim(r, d_j) \right)    (6)

where ρ = η ν is a normalizing constant.

We use Eq. (6) to determine the set of terms that will compose r, as illustrated in Figure 2. Let t_top be the top ranked term according to Eq. (6). The set r is composed of the terms t_i such that P(T_i | R) / P(T_top | R) ≥ β, where β is a given threshold. In our experiments, we have used β = 0.05. Notice that the set r might contain terms that already occur in p. That is, while we will refer to the set r as expansion terms, it is possible that p ∩ r ≠ ∅. By using α = 0, we simply consider the terms originally in page p.
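A minimal sketch of this selection procedure, assuming the term weights w_ij and the similarities sim(r, d_j) have already been computed. The function and variable names are ours, and the normalizing constant ρ is omitted, since only the ratio to the top score matters for the β threshold.

```python
# Sketch of Eq. (6) plus the beta-threshold selection of the set r.
# `weights[0]` holds the tf-idf weights of the triggering page d_0,
# `weights[1..k]` those of its k nearest neighbors, and `sims[j]` plays
# the role of sim(r, d_j) for j >= 1 (sims[0] is unused). All inputs
# are assumed precomputed; names are illustrative, not the paper's.

def expansion_terms(weights, sims, alpha=0.5, beta=0.05):
    vocab = set().union(*(w.keys() for w in weights))
    score = {}
    for t in vocab:
        s = (1 - alpha) * weights[0].get(t, 0.0)          # (1 - alpha) * w_i0
        s += alpha * sum(weights[j].get(t, 0.0) * sims[j]  # alpha * sum w_ij sim
                         for j in range(1, len(weights)))
        score[t] = s  # proportional to P(T_i | R), up to the constant rho
    top = max(score.values())
    # Keep terms whose score is at least beta times the top score.
    return {t for t, s in score.items() if top and s / top >= beta}
```

With α = 0 the function returns only terms of the triggering page itself; increasing α lets neighbor vocabulary enter the representation r, matching the discussion above.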
By increasing α, we relax the context of the page p, adding terms from neighbor pages and turning page p into its new representation r. This is important because, sometimes, a topic apparently not important in the triggering page offers a good opportunity for advertising. For example, consider a triggering page that describes a congress in London about digital photography. Although London is probably not an important topic in this page, advertisements about hotels in London would be appropriate. Thus, adding "hotels" to page p is important. This suggests using α > 0, that is, preserving the contents of p and using the terms in r to expand p. In this paper, we examine both approaches. Thus, in our sixth method we match r, the set of new expansion terms, directly to the ads, as follows:

    AAK_T(p, a_i) = AAK(r, a_i)

where AAK_T stands for "match the ad and keywords to the set r of expansion terms". In our seventh method, we match an expanded page p to the ads as follows:

    AAK_EXP(p, a_i) = AAK(p ∪ r, a_i)

where AAK_EXP stands for "match the ad and keywords to the expanded triggering page".

To improve our ad placement methods, another external source we can use is the content of the page h pointed to by the advertisement's hyperlink, that is, its landing page. After all, this page comprises the real target of the ad and could perhaps present a more detailed description of the product or service being advertised. Given that the advertisement a_i points to the landing page h_i, we denote this association as the pair (a_i, h_i) ∈ H, where H is the set of associations between the ads and the pages they point to. Our eighth method consists of matching the triggering page p to the landing pages pointed to by the advertisements, as follows:

    H(p, a_i) = sim(p, h_i)

where (a_i, h_i) ∈ H and H stands for "match the hyperlink pointed to by the ad". We can also combine this information with the more promising methods previously described, AAK and AAK_EXP, as follows. Given that (a_i, h_i) ∈ H and (a_i, k_i) ∈ K, we have our last two methods:

    AAK_H(p, a_i) = sim(p, a_i ∪ h_i ∪ k_i) if k_i ⊂ p, and 0 otherwise

    AAK_EXP_H(p, a_i) = sim(p ∪ r, a_i ∪ h_i ∪ k_i) if k_i ⊂ (p ∪ r), and 0 otherwise

where AAK_H stands for "match ads and keywords, also considering the page pointed to by the ad" and AAK_EXP_H stands for "match ads and keywords with the expanded triggering page, also considering the page pointed to by the ad". Other combinations were not considered in this study due to space restrictions; they led to poor results in our experimentation and for this reason were discarded.

4. EXPERIMENTS
4.1 Methodology
To evaluate our ad placement strategies, we performed a series of experiments using a sample of a real case ad collection with 93,972 advertisements, 1,744 advertisers, and 68,238 keywords (data in Portuguese, provided by an on-line advertisement company that operates in Brazil). The advertisements are grouped into 2,029 campaigns, with an average of 1.16 campaigns per advertiser.

For the strategies AAK_T and AAK_EXP, we had to generate a set of expansion terms. For that, we used a database of Web pages crawled by the TodoBR search engine [12] (http://www.todobr.com.br/). This database is composed of 5,939,061 pages of the Brazilian Web, under the domain .br. For the strategies H, AAK_H, and AAK_EXP_H, we also crawled the pages pointed to by the advertisers. No filtering method was applied to these pages besides the removal of HTML tags.

Since we are initially interested in the placement of advertisements in the pages of information portals, our test collection was composed of 100 pages extracted from a Brazilian newspaper. These are our triggering pages. They were crawled in such a way that only the contents of their articles were preserved. As we have no preference for particular topics, the crawled pages cover topics as diverse as politics, economy,
sports, and culture.

For each of our 100 triggering pages, we selected the top three ranked ads provided by each of our 10 ad placement strategies. Thus, for each triggering page we selected no more than 30 ads. These top ads were then inserted into a pool for that triggering page. Each pool contained an average of 15.81 advertisements. All advertisements in each pool were submitted to manual evaluation by a group of 15 users. The average number of relevant advertisements per page pool was 5.15. Notice that we adopted the same pooling method used to evaluate the TREC Web-based collection [6]. To quantify the precision of our results, we used 11-point average figures [2]. Since we are not able to evaluate the entire ad collection, recall values are relative to the set of evaluated advertisements.

4.2 Tuning Idf factors
We start by analyzing the impact of different idf factors on our advertisement collection. Idf factors are important because they quantify how discriminative a term is in the collection. In our ad collection, idf factors can be computed by taking ads, advertisers, or campaigns as documents. To exemplify, consider the computation of the ad idf for a term t_i that occurs 9 times in a collection of 100 ads. Then the inverse document frequency of t_i is given by:

    idf_i = \log \frac{100}{9}

Hence, we can compute ad, advertiser, or campaign idf factors. As we observe in Figure 4, for the AD strategy, the best ranking is obtained by the use of campaign idf, that is, by calculating our idf factor so that it discriminates campaigns. Similar results were obtained for all the other methods.

Figure 4: Precision-recall curves obtained for the AD strategy using ad, advertiser, and campaign idf factors.

This reflects the fact that terms might be better discriminators for a business topic than for a specific ad. This effect can be accomplished by calculating the idf factor relative to advertisers or campaigns instead of ads. In fact, campaign idf factors yielded the best results. Thus, they will be used in all the experiments reported from now on.

4.3 Results
Matching Strategies
Figure 5 displays the results for the matching strategies presented in Section 2. As shown, directly matching the contents of the ad to the triggering page (AD strategy) is not very effective. The reason is that the ad contents are very noisy. They may contain messages that do not properly describe the ad topics, such as requests for user actions (e.g., "visit our site") and general sentences that could be applied to any product or service (e.g., "we delivery for the whole country"). On the other hand, an advertiser-provided keyword summarizes well the topic of the ad. As a consequence, the KW strategy is superior to the AD and AD_KW strategies. This situation changes when we require the keywords to appear in the target Web page. By filtering out ads whose keywords do not occur in the triggering page, much noise is discarded. This makes ANDKW a better alternative than KW. Further, in this new situation, the contents of the ad become useful for ranking the most relevant ads, making AD_ANDKW (or AAK, for "ads and keywords") the best among all described methods. For this reason, we adopt AAK as our baseline in the next set of experiments.

Figure 5: Comparison among our five matching strategies. AAK (ads and keywords) is superior.

Table 1 shows the average precision figures corresponding to Figure 5. We also present actual hits per advertisement slot. We call a hit an assignment of an ad (to the triggering page) that was considered relevant by the evaluators. We notice that our AAK strategy provides a gain in average precision of 60% relative to the trivial AD strategy. This shows that careful consideration of the evidence related to the problem does pay
off.

Table 1: Average precision figures, corresponding to Figure 5, for our five matching strategies. Columns labelled #1, #2, and #3 indicate the total of hits in the first, second, and third advertisement slots, respectively. The AAK strategy provides improvements of 60% relative to the AD strategy.

    Methods           Hits                   11-pt average
                      #1   #2   #3   total   score   gain(%)
    AD                41   32   13    86     0.104
    AD_KW             51   28   17    96     0.106   +1.9
    KW                46   34   28   108     0.125   +20.2
    ANDKW             49   37   35   121     0.153   +47.1
    AD_ANDKW (AAK)    51   48   39   138     0.168   +61.5

Impedance Coupling Strategies
Table 2 shows the top ranked terms that occur in a page covering Argentinean wines produced using grapes derived from the Bordeaux region of France. The p column includes the top terms for this page, ranked according to our tf-idf weighting scheme. The r column includes the top ranked expansion terms for p, generated according to Eq. (6).

Table 2: Top ranked terms for the triggering page p according to our tf-idf weighting scheme and top ranked terms for r, the expansion terms for p, generated according to Eq. (6). Ranking scores were normalized to sum up to 1. Terms marked with '*' are not shared by the sets p and r.

    Rank    p                     r
            term       score      term         score
    1       argentina  0.090      wines        0.251
    2       obtained*  0.047      wine*        0.140
    3       class*     0.036      whites       0.091
    4       whites     0.035      red*         0.057
    5       french*    0.031      grape        0.051
    6       origin*    0.029      bordeaux     0.045
    7       france*    0.029      acideness*   0.038
    8       grape      0.017      argentina    0.037
    9       sweet*     0.016      aroma*       0.037
    10      country*   0.013      blanc*       0.036
    ...
    35      wines      0.010      -

Notice that the expansion terms not only emphasize important terms of the target page (by increasing their weights), such as wines and
whites, but also reveal new terms related to the main topic of the page such as aroma and red.\nFurther, they avoid some uninteresting terms such as obtained and country.\nFigure 6 illustrates our results when the set r of expansion terms is used.\nThey show that matching the ads to the terms in the set r instead of to the triggering page p (AAK T strategy) leads to a considerable improvement over our baseline, AAK.\nThe gain is even larger when we use the terms in r to expand the triggering page (AAK EXP method).\nThis confirms our hypothesis that the triggering page could have some interesting terms that should not be completely discarded.\nFinally, we analyze the impact on the ranking of using the contents of pages pointed by the ads.\nFigure 7 displays our results.\nIt is clear that using only the contents of the pages pointed by the ads (H strategy) yields very poor results.\nHowever, combining evidence from the pages pointed by the ads with our baseline yields improved results.\nMost important, combining our best strategy so far (AAK EXP) with pages pointed by ads (AAK EXP H strategy) leads to superior results.\nThis happens because the two additional sources of evidence, expansion terms and pages pointed by the ads, are distinct and complementary, providing extra and valuable information for matching ads to a Web page.\n501 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0 0.2 0.4 0.6 0.8 1 precision recall AAK_EXP AAK_T AAK Figure 6: Impact of using a new representation for the triggering page, one that includes expansion terms.\n0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0 0.2 0.4 0.6 0.8 1 precision recall AAK_EXP_H AAK_H AAK H Figure 7: Impact of using the contents of the page pointed by the ad (the hyperlink).\nFigure 8 and Table 3 summarize all results described in this section.\nIn Figure 8 we show precision-recall curves and in Table 3 we show 11-point average figures.\nWe also present actual hits per advertisement slot and gains in average precision relative to our 
baseline, AAK.\nWe notice that the highest number of hits in the first slot was generated by the method AAK EXP.\nHowever, the method with best overall retrieval performance was AAK EXP H, yielding a gain in average precision figures of roughly 50% over the baseline (AAK).\n4.4 Performance Issues In a keyword targeted advertising system, ads are assigned at query time, thus the performance of the system is a very important issue.\nIn content-targeted advertising systems, we can associate ads with a page at publishing (or updating) time.\nAlso, if a new ad comes in we might consider assigning this ad to already published pages in o\ufb04ine mode.\nThat is, we might design the system such that its performance depends fundamentally on the rate that new pages 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0 0.2 0.4 0.6 0.8 1 precision recall AAK_EXP_H AAK_EXP AAK_T AAK_H AAK H Figure 8: Comparison among our ad placement strategies.\nMethods Hits 11-pt average #1 #2 #3 total score gain(%) H 28 5 6 39 0.026 -84.3 AAK 51 48 39 138 0.168 AAK H 52 50 46 148 0.191 +13.5 AAK T 65 49 43 157 0.226 +34.6 AAK EXP 70 52 53 175 0.242 +43.8 AAK EXP H 64 61 51 176 0.253 +50.3 Table 3: Results for our impedance coupling strategies.\nare published and the rate that ads are added or modified.\nFurther, the data needed by our strategies (page crawling, page expansion, and ad link crawling) can be gathered and processed o\ufb04ine, not affecting the user experience.\nThus, from this point of view, the performance is not critical and will not be addressed in this work.\n5.\nRELATED WORK Several works have stressed the importance of relevance in advertising.\nFor example, in [14] it was shown that advertisements that are presented to users when they are not interested on them are viewed just as annoyance.\nThus, in order to be effective, the authors conclude that advertisements should be relevant to consumer concerns at the time of exposure.\nThe results in [9] enforce this conclusion by pointing out 
that the more targeted the advertising, the more effective it is.\nTherefore it is not surprising that other works have addressed the relevance issue.\nFor instance, in [8] it is proposed a system called ADWIZ that is able to adapt online advertisement to a user``s short-term interests in a non-intrusive way.\nContrary to our work, ADWIZ does not directly use the content of the page viewed by the user.\nIt relies on search keywords supplied by the user to search engines and on the URL of the page requested by the user.\nOn the other hand, in [7] the authors presented an intrusive approach in which an agent sits between advertisers and the user``s browser allowing a banner to be placed into the currently viewed page.\nIn spite of having the opportunity to use the page``s content, 502 the agent infers relevance based on category information and user``s private information collected along the time.\nIn [5] the authors provide a comparison between the ranking strategies used by Google and Overture for their keyword advertising systems.\nBoth systems select advertisements by matching them to the keywords provided by the user in a search query and rank the resulting advertisement list according to the advertisers'' willingness to pay.\nIn particular, Google approach also considers the clickthrough rate of each advertisement as an additional evidence for its relevance.\nThe authors conclude that Google``s strategy is better than that used by Overture.\nAs mentioned before, the ranking problem in keyword advertising is different from that of content-targeted advertising.\nInstead of dealing with keywords provided by users in search queries, we have to deal with the contents of a page which can be very diffuse.\nFinally, the work in [4] focuses on improving search engine results in a TREC collection by means of an automatic query expansion method based on kNN [17].\nSuch method resembles our expansion approach presented in section 3.\nOur method is different from that 
presented by [4]: they expand user queries applied to a document collection with terms extracted from the top k documents returned as the answer to the query in the same collection. In our case, we use two collections: an advertisement collection and a Web collection. We expand triggering pages with terms extracted from the Web collection and then match these expanded pages to the ads from the advertisement collection. By doing this, we emphasize the main topics of the triggering pages, increasing the possibility of associating relevant ads with them.

6. CONCLUSIONS
In this work we investigated ten distinct strategies for associating ads with a Web page that is browsed (content-targeted advertising). Five of our strategies attempt to match the ads directly to the Web page; because of that, they are called matching strategies. The other five strategies recognize that there is a vocabulary impedance problem between ads and Web pages and attempt to solve it by expanding the Web pages and the ads with new terms; because of that, they are called impedance coupling strategies. Using a sample of a real case database with over 93 thousand ads, we evaluated our strategies. For the five matching strategies, our results indicated that careful consideration of additional evidence (such as the keywords provided by the advertisers) yielded gains in average precision (for our test collection) of 60%. This was obtained by a strategy called AAK (for "ads and keywords"), which is taken as the baseline for evaluating our more advanced impedance coupling strategies. For our five impedance coupling strategies, the results indicate that additional gains in average precision of 50% (now relative to the AAK strategy) are possible. These were generated by expanding the Web page with new terms (obtained using a sample Web collection containing over five million pages) and the ads with the contents of the page they point to (a hyperlink provided by the advertisers). These are
first time results indicating that high quality content-targeted advertising is feasible and practical.

7. ACKNOWLEDGEMENTS
This work was supported in part by the GERINDO project, grant MCT/CNPq/CT-INFO 552.087/02-5, by CNPq grant 300.188/95-1 (Berthier Ribeiro-Neto), and by CNPq grant 303.576/04-9 (Edleno Silva de Moura). Marco Cristo is supported by Fucapi, Manaus, AM, Brazil.

8. REFERENCES
[1] Google AdWords. Google content-targeted advertising. http://adwords.google.com/select/ct_faq.html, November 2004.
[2] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley-Longman, 1st edition, 1999.
[3] H. K. Bhargava and J. Feng. Paid placement strategies for internet search engines. In Proceedings of the 11th International Conference on World Wide Web, pages 117-123. ACM Press, 2002.
[4] E. P. Chan, S. Garcia, and S. Roukos. TREC-5 ad-hoc retrieval using k nearest-neighbors re-scoring. In The Fifth Text REtrieval Conference (TREC-5). National Institute of Standards and Technology (NIST), November 1996.
[5] J. Feng, H. K. Bhargava, and D. Pennock. Comparison of allocation rules for paid placement advertising in search engines. In Proceedings of the 5th International Conference on Electronic Commerce, pages 294-299. ACM Press, 2003.
[6] D. Hawking, N. Craswell, and P. B. Thistlewaite. Overview of TREC-7 very large collection track. In The Seventh Text REtrieval Conference (TREC-7), pages 91-104, Gaithersburg, Maryland, USA, November 1998.
[7] Y. Kohda and S. Endo. Ubiquitous advertising on the WWW: merging advertisement on the browser. Computer Networks and ISDN Systems, 28(7-11):1493-1499, 1996.
[8] M. Langheinrich, A. Nakamura, N. Abe, T. Kamba, and Y. Koseki. Unintrusive customization techniques for web advertising. Computer Networks, 31(11-16):1259-1272, 1999.
[9] T. P. Novak and D. L. Hoffman. New metrics for new media: toward the development of web measurement standards. World Wide Web Journal, 2(1):213-246, 1997.
[10] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, 2nd edition, 1988.
[11] B. Ribeiro-Neto and R. Muntz. A belief network model for IR. In Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 253-260, Zurich, Switzerland, August 1996.
[12] A. Silva, E. Veloso, P. Golgher, B. Ribeiro-Neto, A. Laender, and N. Ziviani. CobWeb - a crawler for the Brazilian Web. In Proceedings of the String Processing and Information Retrieval Symposium (SPIRE'99), pages 184-191, Cancun, Mexico, September 1999.
[13] H. Turtle and W. B. Croft. Evaluation of an inference network-based retrieval model. ACM Transactions on Information Systems, 9(3):187-222, July 1991.
[14] C. Wang, P. Zhang, R. Choi, and M. D'Eredita. Understanding consumers attitude toward advertising. In Eighth Americas Conference on Information Systems, pages 1143-1148, August 2002.
[15] M. Weideman. Ethical issues on content distribution to digital consumers via paid placement as opposed to website visibility in search engine results. In The Seventh ETHICOMP International Conference on the Social and Ethical Impacts of Information and Communication Technologies, pages 904-915. Troubador Publishing Ltd, April 2004.
[16] M. Weideman and T. Haig-Smith. An investigation into search engines as a form of targeted advert delivery. In Proceedings of the 2002 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on Enablement Through Technology, pages 258-258. South African Institute for Computer Scientists and Information Technologists, 2002.
[17] Y. Yang. Expert network: effective and efficient learning from human decisions in text categorization and retrieval. In W. B.
Croft and C. J. van Rijsbergen, editors, Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 13-22. Springer-Verlag, 1994.

Impedance Coupling in Content-targeted Advertising

ABSTRACT
The current boom of the Web is associated with the revenues originated from on-line advertising. While search-based advertising is dominant, the association of ads with a Web page (during user navigation) is becoming increasingly important. In this work, we study the problem of associating ads with a Web page, referred to as content-targeted advertising, from a computer science perspective. We assume that we have access to the text of the Web page, the keywords declared by an advertiser, and a text associated with the advertiser's business. Using no other information and operating in fully automatic fashion, we propose ten strategies for solving the problem and evaluate their effectiveness. Our methods indicate that a matching strategy that takes into account the semantics of the problem (referred to as AAK for "ads and keywords") can yield gains in average precision figures of 60% compared to a trivial vector-based strategy. Further, a more sophisticated impedance coupling strategy, which expands the text of the Web page to reduce vocabulary impedance with regard to an advertisement, can yield extra gains in average precision of 50%. These are first results. They suggest that great accuracy in content-targeted advertising can be attained with appropriate algorithms.

1. INTRODUCTION
The emergence of the Internet has opened up new marketing
opportunities.\nIn fact, a company has now the possibility of showing its advertisements (ads) to millions of people at a low cost.\nDuring the 90's, many companies invested heavily on advertising in the Internet with apparently no concerns about their investment return [16].\nThis situation radically changed in the following decade when the failure of many Web companies led to a dropping in supply of cheap venture capital and a considerable reduction in on-line advertising investments [15,16].\nIt was clear then that more effective strategies for on-line advertising were required.\nFor that, it was necessary to take into account short-term and long-term interests of the users related to their information needs [9,14].\nAs a consequence, many companies intensified the adoption of intrusive techniques for gathering information of users mostly without their consent [8].\nThis raised privacy issues which stimulated the research for less invasive measures [16].\nMore recently, Internet information gatekeepers as, for example, search engines, recommender systems, and comparison shopping services, have employed what is called paid placement strategies [3].\nIn such methods, an advertiser company is given prominent positioning in advertisement lists in return for a placement fee.\nAmongst these methods, the most popular one is a non-intrusive technique called keyword targeted marketing [16].\nIn this technique, keywords extracted from the user's search query are matched against keywords associated with ads provided by advertisers.\nA ranking of the ads, which also takes into consideration the amount that each advertiser is willing to pay, is computed.\nThe top ranked ads are displayed in the search result page together with the answers for the user query.\nThe success of keyword targeted marketing has motivated information gatekeepers to offer their advertisement services in different contexts.\nFor example, as shown in Figure 1, relevant ads could be shown to users 
directly in the pages of information portals.\nThe motivation is to take advantage of\nthe users immediate information interests at browsing time.\nThe problem of matching ads to a Web page that is browsed, which we also refer to as content-targeted advertising [1], is different from that of keyword marketing.\nIn this case, instead of dealing with users' keywords, we have to use the contents of a Web page to decide which ads to display.\nFigure 1: Example of content-based advertising in\nthe page of a newspaper.\nThe middle slice of the page shows the beginning of an article about the launch of a DVD movie.\nAt the bottom slice, we can see advertisements picked for this page by Google's content-based advertising system, AdSense.\nIt is important to notice that paid placement advertising strategies imply some risks to information gatekeepers.\nFor instance, there is the possibility of a negative impact on their credibility which, at long term, can demise their market share [3].\nThis makes investments in the quality of ad recommendation systems even more important to minimize the possibility of exhibiting ads unrelated to the user's interests.\nBy investing in their ad systems, information gatekeepers are investing in the maintenance of their credibility and in the reinforcement of a positive user attitude towards the advertisers and their ads [14].\nFurther, that can translate into higher clickthrough rates that lead to an increase in revenues for information gatekeepers and advertisers, with gains to all parts [3].\nIn this work, we focus on the problem of content-targeted advertising.\nWe propose new strategies for associating ads with a Web page.\nFive of these strategies are referred to as matching strategies.\nThey are based on the idea of matching the text of the Web page directly to the text of the ads and its associated keywords.\nFive other strategies, which we here introduce, are referred to as impedance coupling strategies.\nThey are based on the idea 
of expanding the Web page with new terms to facilitate the task of matching ads and Web pages.\nThis is motivated by the observation that there is frequently a mismatch between the vocabulary of a Web page and the vocabulary of an advertisement.\nWe say that there is a vocabulary impedance problem and that our technique provides a positive effect of impedance coupling by reducing the vocabulary impedance.\nFurther, all our strategies rely on information that is already available to information gatekeepers that operate keyword targeted advertising systems.\nThus, no other data from the advertiser is required.\nUsing a sample of a real case database with over 93,000 ads and 100 Web pages selected for testing, we evaluate our ad recommendation strategies.\nFirst, we evaluate the five matching strategies.\nThey match ads to a Web page using a standard vector model and provide what we may call trivial solutions.\nOur results indicate that a strategy that matches the ad plus its keywords to a Web page, requiring the keywords to appear in the Web page, provides improvements in average precision figures of roughly 60% relative to a strategy that simply matches the ads to the Web page.\nSuch a strategy, which we call AAK (for \"ads and keywords\"), is then taken as our baseline.\nNext, we evaluate the five impedance coupling strategies.\nThey are based on the idea of expanding the ad and the Web page with new terms to reduce the vocabulary impedance between their texts.\nOur results indicate that it is possible to generate extra improvements in average precision figures of roughly 50% relative to the AAK strategy.\nThe paper is organized as follows.\nIn section 2, we introduce five matching strategies to solve content-targeted advertising.\nIn section 3, we present our impedance coupling strategies.\nIn section 4, we describe our experimental methodology and datasets and discuss our results.\nIn section 5 we discuss related work.\nIn section 6 we present our
conclusions.\n2.\nMATCHING STRATEGIES\nKeyword advertising relies on matching search queries to ads and their associated keywords.\nContent-based advertising, which we address here, relies on matching ads and their associated keywords to the text of a Web page.\nGiven a certain Web page p, which we call the triggering page, our task is to select advertisements related to the contents of p. Without loss of generality, we consider that an advertisement ai is composed of a title, a textual description, and a hyperlink.\nTo illustrate, for the first ad by Google shown in Figure 1, the title is \"Star Wars Trilogy Full\", the description is \"Get this popular DVD free.\nFree w \/ free shopping.\nSign up now\", and the hyperlink points to the site \u201cwww.freegiftworld.com\u201d.\nAdvertisements can be grouped by advertisers into groups called campaigns, such that a campaign can have one or more advertisements.\nGiven our triggering page p and a set A of ads, a simple way of ranking ai \u2208 A with regard to p is by matching the contents of p to the contents of ai.\nFor this, we use the vector space model [2], as discussed in what follows.\nIn the vector space model, queries and documents are represented as weighted vectors in an n-dimensional space.\nLet wiq be the weight associated with term ti in the query q and wij be the weight associated with term ti in the document dj.\nThen, ~q = (w1q, w2q,..., wiq,..., wnq) and ~dj = (w1j, w2j,..., wij,..., wnj) are the weighted vectors used to represent the query q and the document dj.\nThese weights can be computed using classic tf-idf schemes.\nIn such schemes, weights are taken as the product between factors that quantify the importance of a term in a document (given by the term frequency, or tf, factor) and its rarity in the whole collection (given by the inverse document frequency, or idf, factor); see [2] for details.\nThe ranking of the query q with regard to the document dj is computed by the cosine
similarity formula, that is, the cosine of the angle between the two corresponding vectors:\nsim (q, dj) = (\u03a3i wiq wij) \/ (sqrt (\u03a3i wiq ^ 2) sqrt (\u03a3i wij ^ 2)) (1)\nBy considering p as the query and ai as the document, we can rank the ads with regard to the Web page p.\nThis is our first matching strategy.\nIt is represented by the function AD, given by: AD (p, ai) = sim (p, ai) where AD stands for \"direct match of the ad, composed of title and description\" and sim (p, ai) is computed according to Eq. (1).\nIn our second method, we use another source of evidence provided by the advertisers: the keywords.\nWith each advertisement ai an advertiser associates a keyword ki, which may be composed of one or more terms.\nWe denote the association between an advertisement ai and a keyword ki as the pair (ai, ki) \u2208 K, where K is the set of associations made by the advertisers.\nIn the case of keyword targeted advertising, such keywords are used to match the ads to the user queries.\nHere, we use them to match ads to the Web page p.\nThis provides our second method for ad matching, given by: KW (p, ai) = sim (p, ki) where (ai, ki) \u2208 K and KW stands for \"match the ad keywords\".\nWe notice that most of the keywords selected by advertisers are also present in the ads associated with those keywords.\nFor instance, in our advertisement test collection, this is true for 90% of the ads.\nThus, instead of using the keywords only as matching devices, we can also use them to emphasize the main concepts in an ad, in an attempt to improve our AD strategy.\nThis leads to our third method of ad matching, given by:\nAD KW (p, ai) = sim (p, ai \u222a ki)\nwhere (ai, ki) \u2208 K and AD KW stands for \"match the ad and its keywords\".\nFinally, it is important to notice that the keyword ki associated with ai might not appear at all in the triggering page p, even when ai is highly ranked.\nHowever, if we assume that ki summarizes the main topic of ai according to the advertiser's viewpoint, it can be interesting to ensure its presence in p.\nThis reasoning suggests that requiring the occurrence of the keyword ki in
the triggering page p as a condition to associate ai with p might lead to improved results.\nThis leads to two extra matching strategies, as follows:\nANDKW (p, ai) = sim (p, ki) if ki occurs in p, and 0 otherwise\nAD ANDKW (p, ai) = sim (p, ai \u222a ki) if ki occurs in p, and 0 otherwise\nwhere (ai, ki) \u2208 K, ANDKW stands for \"match the ad keywords and force their appearance\", and AD ANDKW (or AAK for \"ads and keywords\") stands for \"match the ad, its keywords, and force their appearance\".\nAs we will see in our results, the best among these simple methods is AAK.\nThus, it will be used as the baseline for our impedance coupling strategies, which we now discuss.\n3.\nIMPEDANCE COUPLING STRATEGIES\nTwo key issues become clear as one works with the content-targeted advertising problem.\nFirst, the triggering page normally belongs to a broader contextual scope than that of the advertisements.\nSecond, the association between a good advertisement and the triggering page might depend on a topic that is not mentioned explicitly in the triggering page.\nThe first issue is due to the fact that Web pages can be about any subject and that advertisements are concise in nature.\nThat is, ads tend to be more topic restricted than Web pages.\nThe second issue is related to the fact that, as we later discuss, most advertisers place a small number of advertisements.\nAs a result, we have few terms describing their interest areas.\nConsequently, these terms tend to be of a more general nature.\nFor instance, a car shop probably would prefer to use \"car\" instead of \"super sport\" to describe its core business topic.\nAs a consequence, many specific terms that appear in the triggering page find no match in the advertisements.\nTo make matters worse, a page might refer to an entity or subject of the world through a label that is distinct from the label selected by an advertiser to refer to the same entity.\nA consequence of these two issues is that the vocabularies of pages and ads have low intersection, even when an ad is related to a page.\nWe refer to this problem from now on as the vocabulary impedance problem.\nIn our
experiments, we realized that this problem limits the final quality of direct matching strategies.\nTherefore, we studied alternatives to reduce this vocabulary impedance.\nFor this, we propose to expand the triggering pages with new terms.\nFigure 2 illustrates our intuition.\nWe already know that the addition of keywords (selected by the advertiser) to the ads leads to improved results.\nWe say that a keyword reduces the vocabulary impedance by providing an alternative matching path.\nOur idea is to add new terms (words) to the Web page p to also reduce the vocabulary impedance by providing a second alternative matching path.\nWe refer to our expansion technique as impedance coupling.\nFor this, we proceed as follows.\nFigure 2: Addition of new terms to a Web page to reduce the vocabulary impedance.\nAn advertiser trying to describe a certain topic in a concise way will probably choose general terms to characterize that topic.\nTo facilitate the matching between this ad and our triggering page p, we need to associate new general terms with p. For this, we assume that Web documents similar to the triggering page p share common topics.\nTherefore, by inspecting the vocabulary of these similar documents we might find good terms for better characterizing the main topics in the page p.\nWe now describe this idea using a Bayesian network model [10,11,13] depicted in Figure 3.\nFigure 3: Bayesian network model for our impedance coupling technique.\nIn our model, which is based on the belief network in [11], the nodes represent pieces of information in the domain.\nWith each node is associated a binary random variable, which takes the value 1 to mean that the corresponding entity (a page or a term) is observed and, thus, relevant in our computations.\nIn this case, we say that the information was observed.\nNode R represents the page r, a new representation for the triggering page p.
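Before continuing with the network definition, the five matching strategies of section 2 can be made concrete. The sketch below is ours, not the paper's: it assumes simple whitespace tokenization, a toy tf-idf weighting, and represents pages, ads, and keywords as plain strings; all function names are illustrative.

```python
import math
from collections import Counter

def make_vectorizer(docs):
    """Build a toy tf-idf vectorizer from a document collection."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    def vec(text):
        tf = Counter(text.split())
        # terms unseen in the collection get a default document frequency of 1
        return {t: f * math.log(n / df.get(t, 1)) for t, f in tf.items()}
    return vec

def sim(u, v):
    """Cosine similarity between two sparse term-weight vectors (Eq. (1))."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def score(page, ad, kw, vec, strategy="AAK"):
    """Score one ad against a triggering page under one of the five strategies."""
    p = vec(page)
    kw_present = all(t in page.split() for t in kw.split())
    if strategy == "AD":      # direct match of the ad text
        return sim(p, vec(ad))
    if strategy == "KW":      # match the advertiser-chosen keyword only
        return sim(p, vec(kw))
    if strategy == "AD_KW":   # match the ad text plus its keyword
        return sim(p, vec(ad + " " + kw))
    if strategy == "ANDKW":   # keyword match, requiring the keyword in the page
        return sim(p, vec(kw)) if kw_present else 0.0
    # AAK: ad plus keyword, requiring the keyword in the page
    return sim(p, vec(ad + " " + kw)) if kw_present else 0.0
```

Note how ANDKW and AAK simply discard any ad whose keyword is absent from the triggering page, which is the filtering step the paper credits for most of the noise reduction.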
Let N be the set of the k most similar documents to the triggering page, including the triggering page p itself, in a large enough Web collection C. Root nodes D0 through Dk represent the documents in N, that is, the triggering page D0 and its k nearest neighbors, D1 through Dk, among all pages in C.\nThere is an edge from node Dj to node R if document dj is in N. Nodes T1 through Tm represent the terms in the vocabulary of C.\nThere is an edge from node Dj to a node Ti if term ti occurs in document dj.\nIn our model, the observation of the pages in N leads to the observation of a new representation of the triggering page p and to a set of terms describing the main topics associated with p and its neighbors.\nGiven these definitions, we can now use the network to determine the probability that a term ti is a good term for representing a topic of the triggering page p.\nIn other words, we are interested in the probability of observing the final evidence regarding a term ti, given that the new representation of the page p has been observed, P (Ti = 1 | R = 1).\nThis translates into the following equation1:\nP (Ti | R) = (1 \/ P (R)) \u03a3d P (Ti | d) P (R | d) P (d) (2)\nwhere d runs over the sets of states of the document nodes.\nSince we are interested just in the states in which only a single document dj is observed and P (d) can be regarded as a constant, we can rewrite Eq. (2) as:\nP (Ti | R) = \u03bd \u03a3j = 0..k P (Ti | dj) P (R | dj) (3)\nwhere dj represents the state of the document nodes in which only document dj is observed and \u03bd is a constant associated with P (dj).\n1To simplify our notation we represent the probabilities P (X = 1) as P (X) and P (X = 0) as P (X\u0304).\nEq. (3) is the general equation to compute the probability that a term ti is related to the triggering page.\nWe now define the probabilities P (Ti | dj) and P (R | dj) as follows:\nP (Ti | dj) = \u03b7 wij (4)\nP (R | dj) = 1 \u2212 \u03b1 if j = 0, and \u03b1 sim (p, dj) if 1 \u2264 j \u2264 k (5)\nwhere \u03b7 is a normalizing constant, wij is the weight associated with term ti in the document dj, and sim (p, dj) is given by Eq. (1), i.e., the cosine similarity between p and dj.\nThe weight wij is computed using a classic
tf-idf scheme and is zero if term ti does not occur in document dj.\nNotice that P (T\u0304i | dj) = 1 \u2212 P (Ti | dj) and P (R\u0304 | dj) = 1 \u2212 P (R | dj).\nBy defining the constant \u03b1, it is possible to determine how important the influence of the triggering page p should be on its new representation r. By substituting Eq. (4) and Eq. (5) into Eq. (3), we obtain:\nP (Ti | R) = \u03c1 ((1 \u2212 \u03b1) wi0 + \u03b1 \u03a3j = 1..k wij sim (p, dj)) (6)\nwhere \u03c1 = \u03b7 \u03bd is a normalizing constant.\nWe use Eq. (6) to determine the set of terms that will compose r, as illustrated in Figure 2.\nLet ttop be the top ranked term according to Eq. (6).\nThe set r is composed of the terms ti such that P (Ti | R) \/ P (Ttop | R) \u2265 \u03b2, where \u03b2 is a given threshold.\nIn our experiments, we have used \u03b2 = 0.05.\nNotice that the set r might contain terms that already occur in p.\nThat is, while we will refer to the set r as expansion terms, it should be clear that p \u2229 r \u2260 \u2205 is possible.\nBy using \u03b1 = 0, we simply consider the terms originally in page p.
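Assuming the substitution of Eqs. (4) and (5) into Eq. (3) yields term scores of the form (1 \u2212 \u03b1) wi0 + \u03b1 \u03a3j wij sim(p, dj), the selection of the expansion set r reduces to a weighted term scoring followed by the \u03b2 threshold. A minimal sketch (function and variable names are ours; in the paper the k neighbors come from a large Web collection and the wij are tf-idf weights):

```python
def expansion_terms(w, sims, alpha=0.5, beta=0.05):
    """Select the expansion set r for a triggering page.

    w[j]      -- dict mapping term -> tf-idf weight in document dj
                 (j = 0 is the triggering page p, j = 1..k its neighbors)
    sims[j-1] -- cosine similarity sim(p, dj) for j = 1..k
    alpha     -- weight given to the neighbor documents (alpha = 0 keeps
                 only the terms originally in p)
    beta      -- keep terms scoring at least beta times the top score
                 (0.05 in the paper's experiments)
    """
    terms = set().union(*(d.keys() for d in w))
    # unnormalized P(Ti|R): page contribution plus similarity-weighted neighbors
    score = {t: (1 - alpha) * w[0].get(t, 0.0)
                + alpha * sum(w[j].get(t, 0.0) * sims[j - 1]
                              for j in range(1, len(w)))
             for t in terms}
    top = max(score.values(), default=0.0)
    return {t for t, s in score.items() if top > 0 and s >= beta * top}
```

With alpha = 0 the neighbor contribution vanishes and r contains only terms of the page itself, matching the limiting case described above; increasing alpha lets strongly similar neighbors inject new terms such as "aroma" into the representation of a wine page.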
By increasing \u03b1, we relax the context of the page p, adding terms from neighbor pages, turning page p into its new representation r.\nThis is important because, sometimes, a topic apparently not important in the triggering page offers a good opportunity for advertising.\nFor example, consider a triggering page that describes a congress in London about digital photography.\nAlthough London is probably not an important topic in this page, advertisements about hotels in London would be appropriate.\nThus, adding \"hotels\" to page p is important.\nThis suggests using \u03b1 > 0, that is, preserving the contents of p and using the terms in r to expand p.\nIn this paper, we examine both approaches.\nThus, in our sixth method we match r, the set of new expansion terms, directly to the ads, as follows: AAK T (p, ai) = AAK (r, ai) where AAK T stands for \"match the ad and keywords to the set r of expansion terms\".\nIn our seventh method, we match an expanded page p to the ads as follows: AAK EXP (p, ai) = AAK (p \u222a r, ai) where AAK EXP stands for \"match the ad and keywords to the expanded triggering page\".\nTo improve our ad placement methods, another external source we can use is the content of the page h pointed to by the advertisement's hyperlink, that is, its landing page.\nAfter all, this page comprises the real target of the ad and could perhaps present a more detailed description of the product or service being advertised.\nGiven that the advertisement ai points to the landing page hi, we denote this association as the pair (ai, hi) \u2208 H, where H is the set of associations between the ads and the pages they point to.\nOur eighth method consists of matching the triggering page p to the landing pages pointed to by the advertisements, as follows: H (p, ai) = sim (p, hi) where (ai, hi) \u2208 H and H stands for \"match the hyperlink pointed to by the ad\".\nWe can also combine this information with the more promising methods previously described, AAK and AAK EXP
as follows.\nGiven that (ai, hi) \u2208 H and (ai, ki) \u2208 K, we have our last two methods:\nwhere AAK H stands for \"match ads and keywords also considering the page pointed to by the ad\" and AAK EXP H stands for \"match ads and keywords with expanded triggering page, also considering the page pointed to by the ad\".\nNotice that other combinations were not considered in this study due to space restrictions.\nThese other combinations led to poor results in our experimentation and for this reason were discarded.\n4.\nEXPERIMENTS 4.1 Methodology\nTo evaluate our ad placement strategies, we performed a series of experiments using a sample of a real case ad collection with 93,972 advertisements, 1,744 advertisers, and 68,238 keywords2.\nThe advertisements are grouped into 2,029 campaigns with an average of 1.16 campaigns per advertiser.\nFor the strategies AAK T and AAK EXP, we had to generate a set of expansion terms.\nFor that, we used a database of Web pages crawled by the TodoBR search engine [12] (http:\/\/www.todobr.com.br\/).\nThis database is composed of 5,939,061 pages of the Brazilian Web, under the domain \".br\".\nFor the strategies H, AAK H, and AAK EXP H, we also crawled the pages pointed to by the advertisers.\nNo other filtering method was applied to these pages besides the removal of HTML tags.\nSince we are initially interested in the placement of advertisements in the pages of information portals, our test collection was composed of 100 pages extracted from a Brazilian newspaper.\nThese are our triggering pages.\nThey were crawled in such a way that only the contents of their articles were preserved.\nAs we have no preference for particular topics, the crawled pages cover topics as diverse as politics, economy, sports, and culture.\n2Data in Portuguese provided by an on-line advertisement company that operates in Brazil.\nFor each of our 100 triggering pages, we selected the top three ranked ads provided by each of our 10 ad placement strategies.\nThus, for
each triggering page we select no more than 30 ads.\nThese top ads were then inserted in a pool for that triggering page.\nEach pool contained an average of 15.81 advertisements.\nAll advertisements in each pool were submitted to a manual evaluation by a group of 15 users.\nThe average number of relevant advertisements per page pool was 5.15.\nNotice that we adopted the same pooling method used to evaluate the TREC Web-based collection [6].\nTo quantify the precision of our results, we used 11-point average figures [2].\nSince we are not able to evaluate the entire ad collection, recall values are relative to the set of evaluated advertisements.\n4.2 Tuning Idf factors\nWe start by analyzing the impact of different idf factors in our advertisement collection.\nIdf factors are important because they quantify how discriminative a term is in the collection.\nIn our ad collection, idf factors can be computed by taking ads, advertisers or campaigns as documents.\nTo exemplify, consider the computation of \"ad idf\" for a term ti that occurs 9 times in a collection of 100 ads.\nThen, the inverse document frequency of ti is given by:\nidfi = log (100 \/ 9)\nHence, we can compute ad, advertiser or campaign idf factors.\nAs we observe in Figure 4, for the AD strategy, the best ranking is obtained by the use of campaign idf, that is, by calculating our idf factor so that it discriminates campaigns.\nSimilar results were obtained for all the other methods.\nFigure 4: Precision-recall curves obtained for the AD strategy using ad, advertiser, and campaign idf factors.\nThis reflects the fact that terms might be better discriminators for a business topic than for a specific ad.\nThis effect can be captured by calculating the idf factor relative to advertisers or campaigns instead of ads.\nIn fact, campaign idf factors yielded the best results.\nThus, they will be used in all the experiments reported from now on.\n4.3 Results\nMatching Strategies\nFigure 5 displays the results for the matching
strategies presented in Section 2.\nAs shown, directly matching the contents of the ad to the triggering page (AD strategy) is not so effective.\nThe reason is that the ad contents are very noisy.\nThey may contain messages that do not properly describe the ad topics, such as requests for user actions (e.g., \"visit our site\") and general sentences that could be applied to any product or service (e.g., \"we delivery for the whole country\").\nOn the other hand, an advertiser-provided keyword summarizes the topic of the ad well.\nAs a consequence, the KW strategy is superior to the AD and AD KW strategies.\nThis situation changes when we require the keywords to appear in the target Web page.\nBy filtering out ads whose keywords do not occur in the triggering page, much noise is discarded.\nThis makes ANDKW a better alternative than KW.\nFurther, in this new situation, the contents of the ad become useful for ranking the most relevant ads, making AD ANDKW (or AAK for \"ads and keywords\") the best among all described methods.\nFor this reason, we adopt AAK as our baseline in the next set of experiments.\nFigure 5: Comparison among our five matching strategies.\nAAK (\"ads and keywords\") is superior.\nTable 1 illustrates average precision figures for Figure 5.\nWe also present actual hits per advertisement slot.\nWe call a \"hit\" an assignment of an ad (to the triggering page) that was considered relevant by the evaluators.\nWe notice that our AAK strategy provides a gain in average precision of 60% relative to the trivial AD strategy.\nThis shows that careful consideration of the evidence related to the problem does pay off.\nImpedance Coupling Strategies\nTable 2 shows top ranked terms that occur in a page covering Argentinean wines produced using grapes derived from the Bordeaux region of France.\nThe p column includes the top terms for this page ranked according to our tf-idf weighting scheme.\nThe r column includes the top ranked expansion terms generated according
to Eq. (6).\nNotice that the expansion terms not only emphasize important terms of the target page (by increasing their weights) such as \"wines\" and\nTable 1: Average precision figures, corresponding to Figure 5, for our five matching strategies.\nColumns labelled #1, #2, and #3 indicate the total of hits in the first, second, and third advertisement slots, respectively.\nThe AAK strategy provides improvements of 60% relative to the AD strategy.\nTable 2: Top ranked terms for the triggering page p according to our tf-idf weighting scheme and top ranked terms for r, the expansion terms for p, generated according to Eq. (6).\nRanking scores were normalized in order to sum up to 1.\nTerms marked with '*' are not shared by the sets p and r.\n\"whites\", but also reveal new terms related to the main topic of the page such as \"aroma\" and \"red\".\nFurther, they avoid some uninteresting terms such as \"obtained\" and \"country\".\nFigure 6 illustrates our results when the set r of expansion terms is used.\nThey show that matching the ads to the terms in the set r instead of to the triggering page p (AAK T strategy) leads to a considerable improvement over our baseline, AAK.\nThe gain is even larger when we use the terms in r to expand the triggering page (AAK EXP method).\nThis confirms our hypothesis that the triggering page could have some interesting terms that should not be completely discarded.\nFinally, we analyze the impact on the ranking of using the contents of the pages pointed to by the ads.\nFigure 7 displays our results.\nIt is clear that using only the contents of the pages pointed to by the ads (H strategy) yields very poor results.\nHowever, combining evidence from the pages pointed to by the ads with our baseline yields improved results.\nMost importantly, combining our best strategy so far (AAK EXP) with pages pointed to by ads (AAK EXP H strategy) leads to superior results.\nThis happens because the two additional sources of evidence, expansion terms and pages pointed to by
the ads, are distinct and complementary, providing extra and valuable information for matching ads to a Web page.\nFigure 6: Impact of using a new representation for the triggering page, one that includes expansion terms.\nFigure 7: Impact of using the contents of the page pointed to by the ad (the hyperlink).\nFigure 8 and Table 3 summarize all results described in this section.\nIn Figure 8 we show precision-recall curves and in Table 3 we show 11-point average figures.\nWe also present actual hits per advertisement slot and gains in average precision relative to our baseline, AAK.\nWe notice that the highest number of hits in the first slot was generated by the method AAK EXP.\nHowever, the method with the best overall retrieval performance was AAK EXP H, yielding a gain in average precision figures of roughly 50% over the baseline (AAK).\n4.4 Performance Issues\nIn a keyword targeted advertising system, ads are assigned at query time; thus, the performance of the system is a very important issue.\nIn content-targeted advertising systems, we can associate ads with a page at publishing (or updating) time.\nAlso, if a new ad comes in, we might consider assigning this ad to already published pages in offline mode.\nThat is, we might design the system such that its performance depends fundamentally on the rate at which new pages\nFigure 8: Comparison among our ad placement strategies.\nTable 3: Results for our impedance coupling strategies.\nare published and the rate at which ads are added or modified.\nFurther, the data needed by our strategies (page crawling, page expansion, and ad link crawling) can be gathered and processed offline, without affecting the user experience.\nThus, from this point of view, the performance is not critical and will not be addressed in this work.\n5.\nRELATED WORK\nSeveral works have stressed the importance of relevance in advertising.\nFor example, in [14] it was shown that advertisements that are presented to users when they are not interested in them
are viewed simply as an annoyance.\nThus, in order to be effective, the authors conclude that advertisements should be relevant to consumer concerns at the time of exposure.\nThe results in [9] reinforce this conclusion by pointing out that the more targeted the advertising, the more effective it is.\nTherefore it is not surprising that other works have addressed the relevance issue.\nFor instance, in [8] a system called ADWIZ is proposed that is able to adapt online advertisements to a user's short-term interests in a non-intrusive way.\nContrary to our work, ADWIZ does not directly use the content of the page viewed by the user.\nIt relies on search keywords supplied by the user to search engines and on the URL of the page requested by the user.\nOn the other hand, in [7] the authors presented an intrusive approach in which an agent sits between advertisers and the user's browser, allowing a banner to be placed into the currently viewed page.\nIn spite of having the opportunity to use the page's content, the agent infers relevance based on category information and the user's private information collected over time.\nIn [5] the authors provide a comparison between the ranking strategies used by Google and Overture for their keyword advertising systems.\nBoth systems select advertisements by matching them to the keywords provided by the user in a search query and rank the resulting advertisement list according to the advertisers' willingness to pay.\nIn particular, Google's approach also considers the clickthrough rate of each advertisement as additional evidence for its relevance.\nThe authors conclude that Google's strategy is better than that used by Overture.\nAs mentioned before, the ranking problem in keyword advertising is different from that of content-targeted advertising.\nInstead of dealing with keywords provided by users in search queries, we have to deal with the contents of a page, which can be very diffuse.\nFinally, the work in [4] focuses on improving
search engine results in a TREC collection by means of an automatic query expansion method based on kNN [17].\nSuch a method resembles our expansion approach presented in section 3.\nOur method is different from that presented in [4].\nThey expand user queries applied to a document collection with terms extracted from the top k documents returned as an answer to the query in the same collection.\nIn our case, we use two collections: an advertisement collection and a Web collection.\nWe expand triggering pages with terms extracted from the Web collection and then we match these expanded pages to the ads from the advertisement collection.\nBy doing this, we emphasize the main topics of the triggering pages, increasing the possibility of associating relevant ads with them.\n6.\nCONCLUSIONS\nIn this work we investigated ten distinct strategies for associating ads with a Web page that is browsed (content-targeted advertising).\nFive of our strategies attempt to match the ads directly to the Web page.\nBecause of that, they are called matching strategies.\nThe other five strategies recognize that there is a vocabulary impedance problem between ads and Web pages and attempt to solve this problem by expanding the Web pages and the ads with new terms.\nBecause of that, they are called impedance coupling strategies.\nUsing a sample of a real case database with over 93 thousand ads, we evaluated our strategies.\nFor the five matching strategies, our results indicated that careful consideration of additional evidence (such as the keywords provided by the advertisers) yielded gains in average precision figures (for our test collection) of 60%.\nThis was obtained by a strategy called AAK (for \"ads and keywords\"), which is taken as the baseline for evaluating our more advanced impedance coupling strategies.\nFor our five impedance coupling strategies, the results indicate that additional gains in average precision of 50% (now relative to the AAK strategy) are possible.\nThese were generated by
expanding the Web page with new terms (obtained using a sample Web collection containing over five million pages) and the ads with the contents of the page they point to (a hyperlink provided by the advertisers).\nThese are the first results to indicate that high quality content-targeted advertising is feasible and practical.","keyphrases":["content-target advertis","advertis","web","match strategi","ad and keyword","imped coupl strategi","on-line advertis","paid placement strategi","keyword target advertis","bayesian network model","expans term","ad placement strategi","bayesian network","knn"],"prmu":["P","P","P","P","P","P","M","M","M","U","U","M","U","U"]} {"id":"H-49","title":"Performance Prediction Using Spatial Autocorrelation","abstract":"Evaluation of information retrieval systems is one of the core tasks in information retrieval. Problems include the inability to exhaustively label all documents for a topic, nongeneralizability from a small number of topics, and incorporating the variability of retrieval systems. Previous work addresses the evaluation of systems, the ranking of queries by difficulty, and the ranking of individual retrievals by performance. Approaches exist for the case of few and even no relevance judgments. Our focus is on zero-judgment performance prediction of individual retrievals. One common shortcoming of previous techniques is the assumption of uncorrelated document scores and judgments. If documents are embedded in a high-dimensional space (as they often are), we can apply techniques from spatial data analysis to detect correlations between document scores. We find that the low correlation between scores of topically close documents often implies a poor retrieval performance. When compared to a state of the art baseline, we demonstrate that the spatial analysis of retrieval scores provides significantly better prediction performance. 
These new predictors can also be incorporated with classic predictors to improve performance further. We also describe the first large-scale experiment to evaluate zero-judgment performance prediction for a massive number of retrieval systems over a variety of collections in several languages.","lvl-1":"Performance Prediction Using Spatial Autocorrelation Fernando Diaz Center for Intelligent Information Retrieval Department of Computer Science University of Massachusetts Amherst, MA 01003 fdiaz@cs.umass.edu ABSTRACT Evaluation of information retrieval systems is one of the core tasks in information retrieval.\nProblems include the inability to exhaustively label all documents for a topic, nongeneralizability from a small number of topics, and incorporating the variability of retrieval systems.\nPrevious work addresses the evaluation of systems, the ranking of queries by difficulty, and the ranking of individual retrievals by performance.\nApproaches exist for the case of few and even no relevance judgments.\nOur focus is on zero-judgment performance prediction of individual retrievals.\nOne common shortcoming of previous techniques is the assumption of uncorrelated document scores and judgments.\nIf documents are embedded in a high-dimensional space (as they often are), we can apply techniques from spatial data analysis to detect correlations between document scores.\nWe find that the low correlation between scores of topically close documents often implies a poor retrieval performance.\nWhen compared to a state of the art baseline, we demonstrate that the spatial analysis of retrieval scores provides significantly better prediction performance.\nThese new predictors can also be incorporated with classic predictors to improve performance further.\nWe also describe the first large-scale experiment to evaluate zero-judgment performance prediction for a massive number of retrieval systems over a variety of collections in several languages.\nCategories and Subject 
Descriptors H.3.3 [Information Search and Retrieval]: Retrieval models; H.3.4 [Systems and Software]: Performance evaluation (efficiency and effectiveness) General Terms Performance, Design, Reliability, Experimentation 1.\nINTRODUCTION In information retrieval, a user poses a query to a system.\nThe system retrieves n documents, each receiving a real-valued score indicating the predicted degree of relevance.\nIf we randomly select pairs of documents from this set, we expect some pairs to share the same topic and other pairs not to share the same topic.\nTake two topically-related documents from the set and call them a and b.\nIf the scores of a and b are very different, we may be concerned about the performance of our system.\nThat is, if a and b are both on the topic of the query, we would like them both to receive a high score; if a and b are not on the topic of the query, we would like them both to receive a low score.\nWe might become more worried as we find more differences between the scores of related documents.\nWe would be more comfortable with a retrieval where scores are consistent between related documents.\nOur paper studies the quantification of this inconsistency in a retrieval from a spatial perspective.\nSpatial analysis is appropriate since many retrieval models embed documents in some vector space.\nIf documents are embedded in a space, proximity correlates with topical relationships.\nScore consistency can be measured by the spatial version of autocorrelation known as the Moran coefficient or I_M [5, 10].\nIn this paper, we demonstrate a strong correlation between I_M and retrieval performance.\nThe discussion up to this point is reminiscent of the cluster hypothesis.\nThe cluster hypothesis states: closely-related documents tend to be relevant to the same request [12].\nAs we shall see, a retrieval function's spatial autocorrelation measures the degree to which closely-related documents receive similar scores.\nBecause of this, we interpret
autocorrelation as measuring the degree to which a retrieval function satisfies the clustering hypothesis.\nIf this connection is reasonable, then the results in Section 6 provide evidence that failure to satisfy the cluster hypothesis correlates strongly with poor performance.\nIn this work, we provide the following contributions: 1.\nA general, robust method for predicting the performance of retrievals with zero relevance judgments (Section 3).\n2.\nA theoretical treatment of the similarities and motivations behind several state-of-the-art performance prediction techniques (Section 4).\n3.\nThe first large-scale experiments of zero-judgment, single-run performance prediction (Sections 5 and 6).\n2.\nPROBLEM DEFINITION Given a query, an information retrieval system produces a ranking of documents in the collection, encoded as a set of scores associated with documents.\nWe refer to the set of scores for a particular query-system combination as a retrieval.\nWe would like to predict the performance of this retrieval with respect to some evaluation measure (e.g., mean average precision).\nIn this paper, we present results for ranking retrievals from arbitrary systems.\nWe would like this ranking to approximate the ranking of retrievals by the evaluation measure.\nThis is different from ranking queries by the average performance on each query.\nIt is also different from ranking systems by the average performance on a set of queries.\nScores are often only computed for the top n documents from the collection.\nWe place these scores in the length-n vector, y, where y_i refers to the score of the ith-ranked document.\nWe adjust scores to have zero mean and unit variance.\nWe use this method because of its simplicity and its success in previous work [15].\n3.\nSPATIAL CORRELATION In information retrieval, we often assume that the representations of documents exist in some high-dimensional vector space.\nFor example, given a vocabulary, V, this vector space may be an arbitrary
|V|-dimensional space with a cosine inner-product or a multinomial simplex with a distribution-based distance measure.\nAn embedding space is often selected to respect topical proximity; if two documents are near, they are more likely to share a topic.\nBecause of the prevalence and success of spatial models of information retrieval, we believe that the application of spatial data analysis techniques is appropriate.\nWhereas in information retrieval, we are concerned with the score at a point in a space, in spatial data analysis, we are concerned with the value of a function at a point or location in a space.\nWe use the term function here to mean a mapping from a location to a real value.\nFor example, we might be interested in the prevalence of a disease in the neighborhood of some city.\nThe function would map the location of a neighborhood to an infection rate.\nIf we want to quantify the spatial dependencies of a function, we would employ a measure referred to as the spatial autocorrelation [5, 10].\nHigh spatial autocorrelation suggests that knowing the value of a function at location a will tell us a great deal about the value at a neighboring location b.\nThere is high spatial autocorrelation in a function representing the temperature of a location, since knowing the temperature at a location a will tell us a lot about the temperature at a neighboring location b.
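This temperature intuition, and its coin-toss opposite discussed next, can be illustrated numerically. The following is a minimal sketch of our own (not code from the paper), using the normalized autocorrelation corr(y, Wy) that Section 3.2 derives, on a hypothetical ring of locations where each location's two neighbors carry equal weight: a smooth, temperature-like signal scores near 1, while coin-toss-like noise scores near 0.

```python
import numpy as np

# Toy "map": n locations on a ring; each location's two neighbors get equal
# weight, so W is row-normalized like the paper's affinity matrix.
n = 100
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

def moran(y, W):
    """Normalized Moran coefficient in the sense of Section 3.2: corr(y, Wy)."""
    y = (y - y.mean()) / y.std()          # zero mean, unit variance (Section 2)
    ty = W @ y                            # diffused values, y~ = Wy
    return float(y @ ty / (np.linalg.norm(y) * np.linalg.norm(ty)))

smooth = np.sin(np.linspace(0, 4 * np.pi, n, endpoint=False))  # temperature-like
noise = np.random.default_rng(0).standard_normal(n)            # coin-toss-like

print(f"smooth: {moran(smooth, W):.3f}  noise: {moran(noise, W):.3f}")
```

The ring graph and the sine/noise signals are illustrative assumptions; any neighborhood structure with row-normalized weights would show the same contrast.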
Low spatial autocorrelation suggests that knowing the value of a function at location a tells us little about the value at a neighboring location b.\nThere is low spatial autocorrelation in a function measuring the outcome of a coin toss at a and b.\nIn this section, we will begin by describing what we mean by spatial proximity for documents and then define a measure of spatial autocorrelation.\nWe conclude by extending this model to include information from multiple retrievals from multiple systems for a single query.\n3.1 Spatial Representation of Documents Our work does not focus on improving a specific similarity measure or defining a novel vector space.\nInstead, we choose an inner product known to be effective at detecting interdocument topical relationships.\nSpecifically, we adopt tf.idf document vectors, \tilde{d}_i = d_i \log\left(\frac{(n + 0.5) - c_i}{0.5 + c_i}\right) (1) where d is a vector of term frequencies and c is the length-|V| document frequency vector.\nWe use this weighting scheme due to its success for topical link detection in the context of Topic Detection and Tracking (TDT) evaluations [6].\nAssuming vectors are scaled by their L2 norm, we use the inner product, \langle \tilde{d}_i, \tilde{d}_j \rangle, to define similarity.\nGiven documents and some similarity measure, we can construct a matrix which encodes the similarity between pairs of documents.\nRecall that we are given the top n documents retrieved in y.\nWe can compute an n \times n similarity matrix, W.\nAn element of this matrix, W_{ij}, represents the similarity between the documents ranked i and j.\nIn practice, we only include the affinities for a document's k nearest neighbors.\nIn all of our experiments, we have fixed k to 5.\nWe leave exploration of parameter sensitivity to future work.\nWe also row-normalize the matrix so that \sum_{j=1}^{n} W_{ij} = 1 for all i.
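The construction of W described in Section 3.1 can be sketched as follows. This is a hedged reading of the text (the helper names, matrix sizes, and toy data are ours, not the paper's): tf.idf weighting in the spirit of Equation (1), inner products of L2-normalized vectors, retention of each document's k = 5 nearest neighbors, and row normalization.

```python
import numpy as np

def tfidf(D):
    """tf.idf weighting in the spirit of Equation (1): D is an n x |V| matrix
    of raw term frequencies; c is the document-frequency vector."""
    n = D.shape[0]
    c = (D > 0).sum(axis=0)
    return D * np.log((n + 0.5 - c) / (0.5 + c))

def similarity_graph(D, k=5):
    """Affinity matrix W of Section 3.1: inner products of L2-normalized rows,
    kept only for each document's k nearest neighbors, then row-normalized so
    that each row sums to one."""
    X = D / np.linalg.norm(D, axis=1, keepdims=True)   # scale by L2 norm
    S = X @ X.T                                        # pairwise inner products
    np.fill_diagonal(S, -np.inf)                       # a document is not its own neighbor
    W = np.zeros_like(S)
    for i in range(len(S)):
        nn = np.argsort(S[i])[-k:]                     # k most similar documents
        W[i, nn] = S[i, nn]
    return W / W.sum(axis=1, keepdims=True)            # row-normalize

# Hypothetical toy data: 20 documents over a 60-term vocabulary.
rng = np.random.default_rng(1)
D = rng.integers(0, 4, size=(20, 60)).astype(float)
W = similarity_graph(tfidf(D), k=5)
```

Each row of the resulting W holds at most five positive affinities summing to one, mirroring the paper's sparse, row-stochastic neighborhood matrix.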
3.2 Spatial Autocorrelation of a Retrieval Recall that we are interested in measuring the similarity between the scores of spatially-close documents.\nOne suitable measure is the Moran coefficient of spatial autocorrelation.\nAssuming the function y over n locations, this is defined as \tilde{I}_M = \frac{n}{e^T W e} \frac{\sum_{i,j} W_{ij} y_i y_j}{\sum_i y_i^2} = \frac{n}{e^T W e} \frac{y^T W y}{y^T y} (2) where e^T W e = \sum_{ij} W_{ij}.\nWe would like to compare autocorrelation values for different retrievals.\nUnfortunately, the bound for Equation 2 is not consistent for different W and y. Therefore, we use the Cauchy-Schwarz inequality to establish a bound, \tilde{I}_M \le \frac{n}{e^T W e} \sqrt{\frac{y^T W^T W y}{y^T y}} And we define the normalized spatial autocorrelation as I_M = \frac{y^T W y}{\sqrt{y^T y} \sqrt{y^T W^T W y}} Notice that if we let \tilde{y} = Wy, then we can write this formula as I_M = \frac{y^T \tilde{y}}{\|y\|_2 \|\tilde{y}\|_2} (3) which can be interpreted as the correlation between the original retrieval scores and a set of retrieval scores diffused in the space.\nWe present some examples of autocorrelations of functions on a grid in Figure 1.\n[Figure 1: The Moran coefficient, I_M, for several binary functions on a grid: (a) I_M = 0.006, (b) I_M = 0.241, (c) I_M = 0.487.\nThe Moran coefficient is a local measure of function consistency.\nFrom the perspective of information retrieval, each of these grid spaces would represent a document, and documents would be organized so that they lie next to topically-related documents.\nBinary retrieval scores would define a pattern on this grid.\nNotice that, as the Moran coefficient increases, neighboring cells tend to have similar values.]\n3.3 Correlation with Other Retrievals Sometimes we are interested in the performance of a single retrieval but have access to scores from multiple systems for the same query.\nIn this situation, we can use combined information from these scores to construct a surrogate for a high-quality ranking [17].\nWe can treat the correlation between the retrieval we are interested in and the combined scores as a
predictor of performance.\nAssume that we are given m score functions, y_i, for the same n documents.\nWe will represent the mean of these vectors as y_\mu = \frac{1}{m} \sum_{i=1}^{m} y_i.\nWe use the mean vector as an approximation to relevance.\nSince we use zero-mean and unit-variance normalization, work in metasearch suggests that this assumption is justified [15].\nBecause y_\mu represents a very good retrieval, we hypothesize that a strong similarity between y_\mu and y will correlate positively with system performance.\nWe use Pearson's product-moment correlation to measure the similarity between these vectors, \rho(y, y_\mu) = \frac{y^T y_\mu}{\|y\|_2 \|y_\mu\|_2} (4) We will comment on the similarity between Equations 3 and 4 in Section 7.\nOf course, we can combine \rho(y, \tilde{y}) and \rho(y, y_\mu) if we assume that they capture different factors in the prediction.\nOne way to accomplish this is to combine these predictors as independent variables in a linear regression.\nAn alternative means of combination is suggested by the mathematical form of our predictors.\nSince \tilde{y} encodes the spatial dependencies in y and y_\mu encodes the spatial properties of the multiple runs, we can compute a third correlation between these two vectors, \rho(\tilde{y}, y_\mu) = \frac{\tilde{y}^T y_\mu}{\|\tilde{y}\|_2 \|y_\mu\|_2} (5) We can interpret Equation 5 as measuring the correlation between a high-quality ranking (y_\mu) and a spatially smoothed version of the retrieval (\tilde{y}).\n4.\nRELATIONSHIP WITH OTHER PREDICTORS One way to predict the effectiveness of a retrieval is to look at the shared vocabulary of the top n retrieved documents.\nIf we computed the most frequent content words in this set, we would hope that they would be consistent with our topic.\nIn fact, we might believe that a bad retrieval would include documents on many disparate topics, resulting in an overlap of terminological noise.\nThe Clarity of a query attempts to quantify exactly this [7].\nSpecifically, Clarity
measures the similarity of the words most frequently used in retrieved documents to those most frequently used in the whole corpus.\nThe conjecture is that a good retrieval will use language distinct from general text; the overlapping language in a bad retrieval will tend to be more similar to general text.\nMathematically, we can compute a representation of the language used in the initial retrieval as a weighted combination of document language models, P(w|\theta_Q) = \sum_{i=1}^{n} P(w|\theta_i) \frac{P(Q|\theta_i)}{Z} (6) where \theta_i is the language model of the ith-ranked document, P(Q|\theta_i) is the query likelihood score of the ith-ranked document, and Z = \sum_{i=1}^{n} P(Q|\theta_i) is a normalization constant.\nThe similarity between the multinomial P(w|\theta_Q) and a model of general text can be computed using the Kullback-Leibler divergence, D^V_{KL}(\theta_Q \| \theta_C).\nHere, the distribution P(w|\theta_C) is our model of general text, which can be computed using term frequencies in the corpus.\nIn Figure 2a, we present Clarity as measuring the distance between the weighted center of mass of the retrieval (labeled y) and the unweighted center of mass of the collection (labeled O).\nClarity reaches a minimum when a retrieval assigns every document the same score.\nLet us again assume we have a set of n documents retrieved for our query.\nAnother way to quantify the dispersion of a set of documents is to look at how clustered they are.\nWe may hypothesize that a good retrieval will return a single, tight cluster.\nA poorly performing retrieval will return a loosely related set of documents covering many topics.\nOne proposed method of quantifying this dispersion is to measure the distance from a random document a to its nearest neighbor, b.\nA retrieval which is tightly clustered will, on average, have a low distance between a and b; a retrieval which is less tightly clustered will, on average, have high distances between a and b.\nThis average corresponds to using the Cox-Lewis
statistic to measure the randomness of the top n documents retrieved from a system [18].\nIn Figure 2a, this is roughly equivalent to measuring the area of the set n. Notice that we are throwing away information about the retrieval function y. Therefore, the Cox-Lewis statistic is highly dependent on selecting the top n documents.1 Remember that we have n documents and a set of scores.\nLet us assume that we have access to the system which provided the original scores and that we can also request scores for new documents.\nThis suggests a third method for predicting performance.\nTake some document, a, from the retrieved set and arbitrarily add or remove words at random to create a new document \tilde{a}.\nNow, we can ask our system to score \tilde{a} with respect to our query.\nIf, on average over the n documents, the scores of a and \tilde{a} tend to be very different, we might suspect that the system is failing on this query.\nSo, an alternative approach is to measure the similarity between the retrieval and a perturbed version of that retrieval [18, 19].\n[Footnote 1: The authors have suggested coupling the query with the distance measure [18].\nThe information introduced by the query, though, is retrieval-independent, so that if two retrievals return the same set of documents, the approximate Cox-Lewis statistic will be the same regardless of the retrieval scores.]\n[Figure 2: Representation of several performance predictors on a grid: (a) Global Divergence, (b) Score Perturbation, (c) Multirun Averaging.\nIn Figure 2a, we depict predictors which measure the divergence between the center of mass of a retrieval and the center of the embedding space.\nIn Figure 2b, we depict predictors which compare the original retrieval, y, to a perturbed version of the retrieval, \tilde{y}.\nOur approach uses a particular type of perturbation based on score diffusion.\nFinally, in Figure 2c, we depict prediction when given retrievals from several other systems on the same query.\nHere, we can consider the fusion of these retrievals as a surrogate for relevance.]\nThis perturbation can be accomplished by either perturbing the documents or the queries.\nThe similarity between the two retrievals can be measured using some correlation measure.\nThis is depicted in Figure 2b.\nThe upper grid represents the original retrieval, y, while the lower grid represents the function after having been perturbed, \tilde{y}.\nThe nature of the perturbation process requires additional scorings or retrievals.\nOur predictor does not require access to the original scoring function or additional retrievals.\nSo, although our method is similar to other perturbation methods in spirit, it can be applied in situations where the retrieval system is inaccessible or costly to access.\nFinally, assume that we have, in addition to the retrieval we want to evaluate, m retrievals from a variety of different systems.\nIn this case, we might take a document a and compare its rank in the retrieval to its average rank in the m retrievals.\nIf we believe that the m retrievals provide a satisfactory approximation to relevance, then a very large difference in rank would suggest that our retrieval is misranking a.\nIf this difference is large on average over all n documents, then we might predict that the retrieval is bad.\nIf, on the other hand, the retrieval is very consistent with the m retrievals, then we might predict that the retrieval is good.\nThe similarity between the retrieval and the combined retrieval may be computed using some correlation measure.\nThis is depicted in Figure 2c.\nIn previous work, the Kullback-Leibler divergence between the normalized scores of the retrieval and the normalized scores of the combined retrieval provides the similarity [1].\n5.\nEXPERIMENTS Our experiments focus on testing the predictive power of each of our predictors: \rho(y, \tilde{y}), \rho(y, y_\mu), and \rho(\tilde{y}, y_\mu).\nAs stated in Section 2, we are interested in predicting
the performance of the retrieval generated by an arbitrary system.\nOur methodology is consistent with previous research in that we predict the relative performance of a retrieval by comparing a ranking based on our predictor to a ranking based on average precision.\nWe present results for two sets of experiments.\nThe first set of experiments presents detailed comparisons of our predictors to previously-proposed predictors using identical data sets.\nOur second set of experiments demonstrates the generalizability of our approach to arbitrary retrieval methods, corpus types, and corpus languages.\n5.1 Detailed Experiments In these experiments, we will predict the performance of language modeling scores using our autocorrelation predictor, \rho(y, \tilde{y}); we do not consider \rho(y, y_\mu) or \rho(\tilde{y}, y_\mu) because, in these detailed experiments, we focus on ranking the retrievals from a single system.\nWe use the retrievals, values for baseline predictors, and evaluation measures reported in previous work [19].\n5.1.1 Topics and Collections These performance prediction experiments use language model retrievals performed for queries associated with collections in the TREC corpora.\nUsing TREC collections allows us to confidently associate an average precision with a retrieval.\nIn these experiments, we use the following topic collections: TREC 4 ad-hoc, TREC 5 ad-hoc, Robust 2004, Terabyte 2004, and Terabyte 2005.\n5.1.2 Baselines We provide two baselines.\nOur first baseline is the classic Clarity predictor presented in Equation 6.\nClarity is designed to be used with language modeling systems.\nOur second baseline is Zhou and Croft's ranking robustness predictor.\nThis predictor corrupts the top k documents from the retrieval and re-computes the language model scores for these corrupted documents.\nThe value of the predictor is the Spearman rank correlation between the original ranking and the corrupted ranking.\nIn our tables, we will label results for
Clarity using D^V_{KL} and the ranking robustness predictor using P. 5.2 Generalizability Experiments Our predictors do not require a particular baseline retrieval system; the predictors can be computed for an arbitrary retrieval, regardless of how its scores were generated.\nWe believe that this is one of the most attractive aspects of our algorithm.\nTherefore, in a second set of experiments, we demonstrate the ability of our techniques to generalize to a variety of collections, topics, and retrieval systems.\n5.2.1 Topics and Collections We gathered a diverse set of collections from all possible TREC corpora.\nWe cast a wide net in order to locate collections where our predictors might fail.\nOur hypothesis is that documents with high topical similarity should have correlated scores.\nTherefore, we avoided collections where scores were unlikely to be correlated (e.g., question-answering) or were likely to be negatively correlated (e.g., novelty).\nNevertheless, our collections include corpora where correlations are weakly justified (e.g., non-English corpora) or not justified at all (e.g., expert search).\nWe use the ad-hoc tracks from TREC3-8, TREC Robust 2003-2005, TREC Terabyte 2004-2005, TREC4-5 Spanish, TREC5-6 Chinese, and TREC Enterprise Expert Search 2005.\nIn all cases, we use only the automatic runs for ad-hoc tracks submitted to NIST.\nFor all English and Spanish corpora, we construct the matrix W according to the process described in Section 3.1.\nFor Chinese corpora, we use naïve character-based tf.idf vectors.\nFor entities, entries in W are proportional to the number of documents in which two entities co-occur.\n5.2.2 Baselines In our detailed experiments, we used the Clarity measure as a baseline.\nSince we are predicting the performance of retrievals which are not based on language modeling, we use a version of Clarity referred to as ranked-list Clarity [7].\nRanked-list Clarity converts document ranks to P(Q|\theta_i) values.\nThis conversion begins
by replacing all of the scores in y with their respective ranks.\nOur estimation of P(Q|\theta_i) from the ranks, then, is P(Q|\theta_i) = \frac{2(c + 1 - y_i)}{c(c + 1)} if y_i \le c, and 0 otherwise (7) where c is a cutoff parameter.\nAs suggested by the authors, we fix the algorithm parameters c and \lambda_2 so that c = 60 and \lambda_2 = 0.10.\nWe use Equation 6 to estimate P(w|\theta_Q) and D^V_{KL}(\theta_Q \| \theta_C) to compute the value of the predictor.\nWe will refer to this predictor as D^V_{KL}, superscripted by V to indicate that the Kullback-Leibler divergence is with respect to the term embedding space.\nWhen information from multiple runs on the same query is available, we use Aslam and Pavlu's document-space multinomial divergence as a baseline [1].\nThis rank-based method first normalizes the scores in a retrieval as an n-dimensional multinomial.\nAs with ranked-list Clarity, we begin by replacing all of the scores in y with their respective ranks.\nThen, we adjust the elements of y in the following way, \hat{y}_i = \frac{1}{2n}\left(1 + \sum_{k=y_i}^{n} \frac{1}{k}\right) (8) In our multirun experiments, we only use the top 75 documents from each retrieval (n = 75); this is within the range of parameter values suggested by the authors.\nHowever, we admit not tuning this parameter for either our system or the baseline.\nThe predictor is the divergence between the candidate distribution, \hat{y}, and the mean distribution, \hat{y}_\mu.\nWith the uniform linear combination of these m retrievals represented as \hat{y}_\mu, we can compute the divergence as D^n_{KL}(\hat{y} \| \hat{y}_\mu), where we use the superscript n to indicate that the summation is over the set of n documents.\nThis baseline was developed in the context of predicting query difficulty, but we adopt it as a reasonable baseline for predicting retrieval performance.\n5.2.3 Parameter Settings When given multiple retrievals, we use documents in the union of the top k = 75 documents from each of the m retrievals for that query.\nIf the size of this union is
\tilde{n}, then y_\mu and each y_i is of length \tilde{n}.\nIn some cases, a system did not score a document in the union.\nSince we are making a Gaussian assumption about our scores, we can sample scores for these unseen documents from the negative tail of the distribution.\nSpecifically, we sample from the part of the distribution lower than the minimum value in the normalized retrieval.\nThis introduces randomness into our algorithm, but we believe it is more appropriate than assigning an arbitrary fixed value.\nWe optimized the linear regression using the square root of each predictor.\nWe found that this substantially improved fits for all predictors, including the baselines.\nWe considered linear combinations of pairs of predictors (labeled by the components) and of all predictors (labeled as \beta).\n5.3 Evaluation Given a set of retrievals, potentially from a combination of queries and systems, we measure the correlation of the rank ordering of this set by the predictor and by the performance metric.\nIn order to ensure comparability with previous results, we present Kendall's \tau correlation between the predictor's ranking and a ranking based on the average precision of the retrieval.\nUnless explicitly noted, all correlations are significant with p < 0.05.\nPredictors can sometimes perform better when linearly combined [9, 11].\nAlthough previous work has presented the coefficient of determination (R^2) to measure the quality of the regression, this measure cannot be reliably used when comparing slight improvements from combining predictors.\nTherefore, we adopt the adjusted coefficient of determination, which penalizes models with more variables.\nThe adjusted R^2 allows us to evaluate the improvement in prediction achieved by adding a parameter but loses the statistical interpretation of R^2.\nWe will use Kendall's \tau to evaluate the magnitude of the correlation and the adjusted R^2 to evaluate the combination of variables.\n6.\nRESULTS We present results
for our detailed experiments comparing the prediction of language model scores in Table 1.\nAlthough the Clarity measure is theoretically designed for language model scores, it consistently underperforms our system-agnostic predictor.\nRanking robustness was presented as an improvement to Clarity for web collections (represented in our experiments by the terabyte04 and terabyte05 collections), shifting the \tau correlation from 0.139 to 0.150 for terabyte04 and from 0.171 to 0.208 for terabyte05.\nHowever, these improvements are slight compared to the performance of autocorrelation on these collections.\nOur predictor achieves a \tau correlation of 0.454 for terabyte04 and 0.383 for terabyte05.\nThough not always the strongest, autocorrelation achieves correlations competitive with the baseline predictors.\nWhen examining the performance of linear combinations of predictors, we note that in every case, autocorrelation factors as a necessary component of a strong predictor.\nWe also note that the adjusted R^2 for the individual baselines is always significantly improved by incorporating autocorrelation.\nWe present our generalizability results in Table 2.\nWe begin by examining the situation in column (a), where we are presented with a single retrieval and no information from additional retrievals.\nFor every collection except one, we achieve significantly better correlations than ranked-list Clarity.\nSurprisingly, we achieve relatively strong correlations for Spanish and Chinese collections despite our naïve processing.\nWe do not have a ranked-list Clarity correlation for ent05 because entity modeling is itself an open research question.\nHowever, our autocorrelation measure does not achieve high correlations there, perhaps because relevance for entity retrieval does not propagate according to the co-occurrence links we use.\nAs noted above, the poor Clarity performance on web data is consistent with our findings in the detailed experiments.\nClarity also notably
underperforms for several news corpora (trec5, trec7, and robust04).\nOn the other hand, autocorrelation seems robust to the changes between different corpora.\nNext, we turn to the introduction of information from multiple retrievals.\nWe compare the correlations between those predictors which do not use this information in column (a) and those which do in column (b).\nFor every collection, the predictors in column (b) outperform the predictors in column (a), indicating that the information from additional runs can be critical to making good predictions.\nInspecting the predictors in column (b), we only draw weak conclusions.\nOur new predictors tend to perform better on news corpora.\nAnd between our new predictors, the hybrid \u03c1(\u02dcy, y\u00b5) predictor tends to perform better.\nRecall that our \u03c1(\u02dcy, y\u00b5) measure incorporates both spatial and multiple retrieval information.\nTherefore, we believe that the improvement in correlation is the result of incorporating information from spatial behavior.\nIn column (c), we can investigate the utility of incorporating spatial information with information from multiple retrievals.\nNotice that in the cases where autocorrelation, \u03c1(y, \u02dcy), alone performs well (trec3, trec5-spanish, and trec6-chinese), it is substantially improved by incorporating multiple-retrieval information from \u03c1(y, y\u00b5) in the linear regression, \u03b2.\nIn the cases where \u03c1(y, y\u00b5) performs well, incorporating autocorrelation rarely results in a significant improvement in performance.\nIn fact, in every case where our predictor outperforms the baseline, it includes information from multiple runs.\n7.\nDISCUSSION The most important result from our experiments involves prediction when no information is available from multiple runs (Tables 1 and 2a).\nThis situation arises often in system design.\nFor example, a system may need to, at retrieval time, assess its performance before deciding to conduct more 
intensive processing such as pseudo-relevance feedback or interaction.\nAssuming the presence of multiple retrievals is unrealistic in this case.\nWe believe that autocorrelation is, like multiple-retrieval algorithms, approximating a good ranking; in this case by diffusing scores.\nWhy is \u02dcy a reasonable surrogate?\nWe know that diffusion of scores on the web graph and language model graphs improves performance [14, 16].\nTherefore, if score diffusion tends to, in general, improve performance, then diffused scores will, in general, provide a good surrogate for relevance.\nOur results demonstrate that this approximation is not as powerful as information from multiple retrievals.\nNevertheless, in situations where this information is lacking, autocorrelation provides substantial information.\nThe success of autocorrelation as a predictor may also have roots in the clustering hypothesis.\nRecall that we regard autocorrelation as the degree to which a retrieval satisfies the clustering hypothesis.\nOur experiments, then, demonstrate that a failure to respect the clustering hypothesis correlates with poor performance.\nWhy might systems fail to conform to the cluster hypothesis?\nQuery-based information retrieval systems often score documents independently.\nThe score of document a may be computed by examining query term or phrase matches, the document length, and perhaps global collection statistics.\nOnce computed, a system rarely compares the score of a to the score of a topically-related document b. 
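The three predictors discussed above can be sketched end-to-end with a few lines of linear algebra. This is an illustration of our own (function names, the ring-shaped toy W, and the synthetic runs are assumptions, not the paper's code), following Equations 3-5: scores are normalized to zero mean and unit variance as in Section 2, ỹ = Wy supplies the diffusion, and the mean of the other runs serves as the relevance surrogate y_μ.

```python
import numpy as np

def zscore(y):
    return (y - y.mean()) / y.std()       # normalization of Section 2

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predictors(y, W, runs):
    """The three zero-judgment predictors: autocorrelation rho(y, y~),
    multirun rho(y, y_mu), and the hybrid rho(y~, y_mu).  `runs` holds
    score vectors from other systems for the same query."""
    y = zscore(y)
    ty = W @ y                                         # y~ = Wy, diffused scores
    y_mu = np.mean([zscore(r) for r in runs], axis=0)  # surrogate for relevance
    return cosine(y, ty), cosine(y, y_mu), cosine(ty, y_mu)

# Hypothetical setup: a ring-neighborhood W and noisy copies of one retrieval.
n = 60
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5
rng = np.random.default_rng(3)
y = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False)) + 0.1 * rng.standard_normal(n)
runs = [y + 0.3 * rng.standard_normal(n) for _ in range(4)]
rho_auto, rho_multi, rho_hybrid = predictors(y, W, runs)
```

Because the toy retrieval is smooth over the neighborhood graph and the runs agree with it, all three correlations come out high; a noisy or contrarian retrieval would drive them toward zero.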
With some exceptions, the correlation of document scores has largely been ignored.\nWe should make it clear that we have selected tasks where topical autocorrelation is appropriate.\nThere are certainly cases where there is no reason to believe that retrieval scores will have topical autocorrelation.\nFor example, ranked lists which incorporate document novelty should not exhibit spatial autocorrelation; if anything, autocorrelation should be negative for this task.\nSimilarly, answer candidates in a question-answering task may or may not exhibit autocorrelation; in this case, the semantics of links is questionable too.\nIt is important, before applying this measure, to confirm that, given the semantics of some link between two retrieved items, we should expect a correlation between scores.\n8.\nRELATED WORK In this section we draw more general comparisons to other work in performance prediction and spatial data analysis.\nThere is a growing body of work which attempts to predict the performance of individual retrievals [7, 3, 11, 9, 19].\nWe have attempted to place our work in the context of much of this work in Section 4.\nHowever, a complete comparison is beyond the scope of this paper.\nWe note, though, that our experiments cover a larger and more diverse set of retrievals, collections, and topics than previously examined.\nMuch previous work, particularly in the context of TREC, focuses on predicting the performance of systems.\nHere, each system generates k retrievals.\nThe task is, given these retrievals, to predict the ranking of systems according to some performance measure.\nSeveral papers attempt to address this task under the constraint of few judgments [2, 4].\nSome work even attempts to use zero judgments by leveraging multiple retrievals for the same query [17].\nOur task differs because we focus on ranking retrievals independent of the generating system.\nThe task here is not to test the hypothesis system A is superior to system B but to test the
hypothesis "retrieval A is superior to retrieval B".

Autocorrelation manifests itself in many classification tasks. Neville and Jensen define relational autocorrelation for relational learning problems and demonstrate that many classification tasks manifest autocorrelation [13]. Temporal autocorrelation of initial retrievals has also been used to predict performance [9]. However, temporal autocorrelation is performed by projecting the retrieval function into the temporal embedding space. In our work, we focus on the behavior of the function over the relationships between documents.

            Kendall's τ                 adjusted R²
            D^V_KL  P      ρ(y,ỹ)     D^V_KL  P      ρ(y,ỹ)  D^V_KL,P  D^V_KL,ρ(y,ỹ)  P,ρ(y,ỹ)  β
trec4       0.353   0.548  0.513      0.168   0.363  0.422   0.466     0.420           0.557     0.553
trec5       0.311   0.329  0.357      0.116   0.190  0.236   0.238     0.244           0.266     0.269
robust04    0.418   0.398  0.373      0.256   0.304  0.278   0.403     0.373           0.402     0.442
terabyte04  0.139   0.150  0.454      0.059   0.045  0.292   0.076     0.293           0.289     0.284
terabyte05  0.171   0.208  0.383      0.022   0.072  0.193   0.120     0.225           0.218     0.257

Table 1: Comparison to Robustness and Clarity measures for language model scores. Evaluation replicates experiments from [19]. We present correlations between the classic Clarity measure (D^V_KL), the ranking robustness measure (P), and autocorrelation (ρ(y, ỹ)), each with mean average precision, in terms of Kendall's τ. The adjusted coefficient of determination is presented to measure the effectiveness of combining predictors. Measures in bold represent the strongest correlation for that test/collection pair.

               (a) τ               (b) τ                          (c) adjusted R²
               D_KL    ρ(y,ỹ)     D^n_KL  ρ(y,y_µ)  ρ(ỹ,y_µ)    D^n_KL  ρ(y,ỹ)  ρ(y,y_µ)  ρ(ỹ,y_µ)  β
trec3          0.201   0.461      0.461   0.439     0.456        0.444   0.395   0.394     0.386     0.498
trec4          0.252   0.396      0.455   0.482     0.489        0.379   0.263   0.429     0.482     0.483
trec5          0.016   0.277      0.433   0.459     0.393        0.280   0.157   0.375     0.323     0.386
trec6          0.230   0.227      0.352   0.428     0.418        0.203   0.089   0.323     0.325     0.325
trec7          0.083   0.326      0.341   0.430     0.483        0.264   0.182   0.363     0.442     0.400
trec8          0.235   0.396      0.454   0.508     0.567        0.402   0.272   0.490     0.580     0.523
robust03       0.302   0.354      0.377   0.385     0.447        0.269   0.206   0.274     0.392     0.303
robust04       0.183   0.308      0.301   0.384     0.453        0.200   0.182   0.301     0.393     0.335
robust05       0.224   0.249      0.371   0.377     0.404        0.341   0.108   0.313     0.328     0.336
terabyte04     0.043   0.245      0.544   0.420     0.392        0.516   0.105   0.357     0.343     0.365
terabyte05     0.068   0.306      0.480   0.434     0.390        0.491   0.168   0.384     0.309     0.403
trec4-spanish  0.307   0.388      0.488   0.398     0.395        0.423   0.299   0.282     0.299     0.388
trec5-spanish  0.220   0.458      0.446   0.484     0.475        0.411   0.398   0.428     0.437     0.529
trec5-chinese  0.092   0.199      0.367   0.379     0.384        0.379   0.199   0.273     0.276     0.310
trec6-chinese  0.144   0.276      0.265   0.353     0.376        0.115   0.128   0.188     0.223     0.199
ent05          -       0.181      0.324   0.305     0.282        0.211   0.043   0.158     0.155     0.179

Table 2: Large scale prediction experiments. We predict the ranking of large sets of retrievals for various collections and retrieval systems. Kendall's τ correlations are computed between the predicted ranking and a ranking based on the retrieval's average precision. In column (a), we have predictors which do not use information from other retrievals for the same query. In columns (b) and (c) we present performance for predictors which incorporate information from multiple retrievals. The adjusted coefficient of determination is computed to determine the effectiveness of combining predictors. Measures in bold represent the strongest correlation for that test/collection pair.

Finally, regularization-based re-ranking processes are also closely related to our work [8]. These techniques seek to maximize the agreement between scores of related documents by solving a constrained optimization problem. The maximization of consistency is equivalent to maximizing the Moran autocorrelation. Therefore, we believe that our work provides explanation for why
regularization-based re-ranking works.

9. CONCLUSION

We have presented a new method for predicting the performance of a retrieval ranking without any relevance judgments. We consider two cases. First, when making predictions in the absence of retrievals from other systems, our predictors demonstrate robust, strong correlations with average precision. This performance, combined with a simple implementation, makes our predictors very attractive. We have demonstrated this improvement for many diverse settings. To our knowledge, this is the first large-scale examination of zero-judgment, single-retrieval performance prediction. Second, when provided retrievals from other systems, our extended methods demonstrate performance competitive with state-of-the-art baselines. Our experiments also demonstrate the limits of the usefulness of our predictors when information from multiple runs is provided.

Our results suggest two conclusions. First, our results could affect retrieval algorithm design. Retrieval algorithms designed to consider spatial autocorrelation will conform to the cluster hypothesis and improve performance. Second, our results could affect the design of minimal test collection algorithms. Much of the recent work on ranking systems ignores correlations between document labels and scores. We believe that these two directions could be rewarding given the theoretical and experimental evidence in this paper.

10. ACKNOWLEDGMENTS

This work was supported in part by the Center for Intelligent Information Retrieval and in part by the Defense Advanced Research Projects Agency (DARPA) under contract number HR0011-06-C-0023. Any opinions, findings and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsor. We thank Yun Zhou and Desislava Petkova for providing data and Andre Gauthier for technical assistance.

11. REFERENCES

[1] J. Aslam and V. Pavlu. Query hardness estimation using Jensen-Shannon divergence among multiple scoring functions. In ECIR 2007: Proceedings of the 29th European Conference on Information Retrieval, 2007.
[2] J. A. Aslam, V. Pavlu, and E. Yilmaz. A statistical method for system evaluation using incomplete judgments. In S. Dumais, E. N. Efthimiadis, D. Hawking, and K. Jarvelin, editors, Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 541-548. ACM Press, August 2006.
[3] D. Carmel, E. Yom-Tov, A. Darlow, and D. Pelleg. What makes a query difficult? In SIGIR '06: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 390-397, New York, NY, USA, 2006. ACM Press.
[4] B. Carterette, J. Allan, and R. Sitaraman. Minimal test collections for retrieval evaluation. In SIGIR '06: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 268-275, New York, NY, USA, 2006. ACM Press.
[5] A. D. Cliff and J. K. Ord. Spatial Autocorrelation. Pion Ltd., 1973.
[6] M. Connell, A. Feng, G. Kumaran, H. Raghavan, C. Shah, and J. Allan. UMass at TDT 2004. Technical Report IR-357, Department of Computer Science, University of Massachusetts, 2004.
[7] S. Cronen-Townsend, Y. Zhou, and W. B. Croft. Precision prediction based on ranked list coherence. Information Retrieval, 9(6):723-755, 2006.
[8] F. Diaz. Regularizing ad-hoc retrieval scores. In CIKM '05: Proceedings of the 14th ACM International Conference on Information and Knowledge Management, pages 672-679, New York, NY, USA, 2005. ACM Press.
[9] F. Diaz and R. Jones. Using temporal profiles of queries for precision prediction. In SIGIR '04: Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 18-24, New York, NY, USA, 2004. ACM Press.
[10] D. A. Griffith. Spatial Autocorrelation and Spatial Filtering. Springer Verlag, 2003.
[11] B. He and I. Ounis. Inferring query performance using pre-retrieval predictors. In The Eleventh Symposium on String Processing and Information Retrieval (SPIRE), 2004.
[12] N. Jardine and C. J. van Rijsbergen. The use of hierarchic clustering in information retrieval. Information Storage and Retrieval, 7:217-240, 1971.
[13] D. Jensen and J. Neville. Linkage and autocorrelation cause feature selection bias in relational learning. In ICML '02: Proceedings of the Nineteenth International Conference on Machine Learning, pages 259-266, San Francisco, CA, USA, 2002. Morgan Kaufmann Publishers Inc.
[14] O. Kurland and L. Lee. Corpus structure, language models, and ad-hoc information retrieval. In SIGIR '04: Proceedings of the 27th Annual International Conference on Research and Development in Information Retrieval, pages 194-201, New York, NY, USA, 2004. ACM Press.
[15] M. Montague and J. A. Aslam. Relevance score normalization for metasearch. In CIKM '01: Proceedings of the Tenth International Conference on Information and Knowledge Management, pages 427-433, New York, NY, USA, 2001. ACM Press.
[16] T. Qin, T.-Y. Liu, X.-D. Zhang, Z. Chen, and W.-Y. Ma. A study of relevance propagation for web search. In SIGIR '05: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 408-415, New York, NY, USA, 2005. ACM Press.
[17] I. Soboroff, C. Nicholas, and P.
Cahan. Ranking retrieval systems without relevance judgments. In SIGIR '01: Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 66-73, New York, NY, USA, 2001. ACM Press.
[18] V. Vinay, I. J. Cox, N. Milic-Frayling, and K. Wood. On ranking the effectiveness of searches. In SIGIR '06: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 398-404, New York, NY, USA, 2006. ACM Press.
[19] Y. Zhou and W. B. Croft. Ranking robustness: a novel framework to predict query performance. In CIKM '06: Proceedings of the 15th ACM International Conference on Information and Knowledge Management, pages 567-574, New York, NY, USA, 2006. ACM Press.

Performance Prediction Using Spatial Autocorrelation

ABSTRACT

Evaluation of information retrieval systems is one of the core tasks in information retrieval. Problems include the inability to exhaustively label all documents for a topic, non-generalizability from a small number of topics, and incorporating the variability of retrieval systems. Previous work addresses the evaluation of systems, the ranking of queries by difficulty, and the ranking of individual retrievals by performance. Approaches exist for the case of few and even no relevance judgments. Our focus is on zero-judgment performance prediction of individual retrievals. One common shortcoming of previous techniques is the assumption of uncorrelated document scores and judgments. If documents are embedded in a high-dimensional space (as they often are), we can apply techniques from spatial data analysis to detect correlations between document scores. We find that low correlation between the scores of topically close documents often implies poor retrieval performance. When compared to a state-of-the-art baseline, we demonstrate that the spatial analysis of retrieval scores provides significantly better prediction performance. These new predictors can also be incorporated with classic predictors to improve performance further. We also describe the first large-scale experiment to evaluate zero-judgment performance prediction for a massive number of retrieval systems over a variety of collections in several languages.

1. INTRODUCTION

In information retrieval, a user poses a query to a system. The system retrieves n documents, each receiving a real-valued score indicating the predicted degree of relevance. If we randomly select pairs of documents from this set, we expect some pairs to share the same topic and other pairs not to share the same topic. Take two topically-related documents from the set and call them a and b. If the scores of a and b are very different, we may be concerned about the performance of our system. That is, if a and b are both on the topic of the query, we would like them both to receive a high score; if a and b are not on the topic of the query, we would like them both to receive a low score. We might become more worried as we find more differences between scores of related documents. We would be
more comfortable with a retrieval where scores are consistent between related documents. Our paper studies the quantification of this inconsistency in a retrieval from a spatial perspective. Spatial analysis is appropriate since many retrieval models embed documents in some vector space. If documents are embedded in a space, proximity correlates with topical relationships. Score consistency can be measured by the spatial version of autocorrelation known as the Moran coefficient or I_M [5, 10]. In this paper, we demonstrate a strong correlation between I_M and retrieval performance.

The discussion up to this point is reminiscent of the cluster hypothesis. The cluster hypothesis states: closely-related documents tend to be relevant to the same request [12]. As we shall see, a retrieval function's spatial autocorrelation measures the degree to which closely-related documents receive similar scores. Because of this, we interpret autocorrelation as measuring the degree to which a retrieval function satisfies the clustering hypothesis. If this connection is reasonable, then in Section 6 we present evidence that failure to satisfy the cluster hypothesis correlates strongly with poor performance.

In this work, we provide the following contributions:
1. A general, robust method for predicting the performance of retrievals with zero relevance judgments (Section 3).
2. A theoretical treatment of the similarities and motivations behind several state-of-the-art performance prediction techniques (Section 4).
3. The first large-scale experiments of zero-judgment, single-run performance prediction (Sections 5 and 6).

2. PROBLEM DEFINITION

Given a query, an information retrieval system produces a ranking of documents in the collection, encoded as a set of scores associated with documents. We refer to the set of scores for a particular query-system combination as a retrieval. We would like to predict the performance of this retrieval with respect to some evaluation
measure (e.g., mean average precision). In this paper, we present results for ranking retrievals from arbitrary systems. We would like this ranking to approximate the ranking of retrievals by the evaluation measure. This is different from ranking queries by the average performance on each query. It is also different from ranking systems by the average performance on a set of queries.

Scores are often only computed for the top n documents from the collection. We place these scores in the length-n vector, y, where y_i refers to the score of the ith-ranked document. We adjust scores to have zero mean and unit variance. We use this method because of its simplicity and its success in previous work [15].

3. SPATIAL CORRELATION

In information retrieval, we often assume that the representations of documents exist in some high-dimensional vector space. For example, given a vocabulary, V, this vector space may be an arbitrary |V|-dimensional space with a cosine inner product, or a multinomial simplex with a distribution-based distance measure. An embedding space is often selected to respect topical proximity; if two documents are near, they are more likely to share a topic. Because of the prevalence and success of spatial models of information retrieval, we believe that the application of spatial data analysis techniques is appropriate.

Whereas in information retrieval we are concerned with the score at a point in a space, in spatial data analysis we are concerned with the value of a function at a point or location in a space. We use the term function here to mean a mapping from a location to a real value. For example, we might be interested in the prevalence of a disease in the neighborhood of some city. The function would map the location of a neighborhood to an infection rate. If we want to quantify the spatial dependencies of a function, we would employ a measure referred to as the spatial autocorrelation [5, 10]. High spatial autocorrelation suggests
that knowing the value of a function at location a will tell us a great deal about the value at a neighboring location b. There is high spatial autocorrelation for a function representing the temperature of a location, since knowing the temperature at a location a will tell us a lot about the temperature at a neighboring location b. Low spatial autocorrelation suggests that knowing the value of a function at location a tells us little about the value at a neighboring location b. There is low spatial autocorrelation in a function measuring the outcome of a coin toss at a and b. In this section, we begin by describing what we mean by spatial proximity for documents and then define a measure of spatial autocorrelation. We conclude by extending this model to include information from multiple retrievals from multiple systems for a single query.

3.1 Spatial Representation of Documents

Our work does not focus on improving a specific similarity measure or defining a novel vector space. Instead, we choose an inner product known to be effective at detecting interdocument topical relationships. Specifically, we adopt tf.idf document vectors, d̃_w = d_w log(N/c_w), where d is a vector of term frequencies, c is the length-|V| document frequency vector, and N is the number of documents in the collection. We use this weighting scheme due to its success for topical link detection in the context of Topic Detection and Tracking (TDT) evaluations [6]. Assuming vectors are scaled by their L2 norm, we use the inner product, ⟨d̃_i, d̃_j⟩, to define similarity. Given documents and some similarity measure, we can construct a matrix which encodes the similarity between pairs of documents. Recall that we are given the top n documents retrieved in y. We can compute an n × n similarity matrix, W. An element of this matrix, W_ij, represents the similarity between the documents ranked i and j. In practice, we only include the affinities for a document's k nearest neighbors. In all of our experiments, we have fixed k to 5. We leave exploration
of parameter sensitivity to future work. We also row-normalize the matrix so that ∑_{j=1}^{n} W_ij = 1 for all i.

3.2 Spatial Autocorrelation of a Retrieval

Recall that we are interested in measuring the similarity between the scores of spatially-close documents. One such suitable measure is the Moran coefficient of spatial autocorrelation. Assuming the function y over n locations, this is defined as

    I_M = (n / eᵀWe) · (yᵀWy / yᵀy)    (2)

where eᵀWe = ∑_ij W_ij. We would like to compare autocorrelation values for different retrievals. Unfortunately, the bound for Equation 2 is not consistent for different W and y. Therefore, we use the Cauchy-Schwarz inequality to establish a bound, yᵀWy ≤ ‖y‖₂‖Wy‖₂, and we define the normalized spatial autocorrelation as

    ρ(y, ỹ) = yᵀỹ / (‖y‖₂‖ỹ‖₂),  where ỹ = Wy,    (3)

which can be interpreted as the correlation between the original retrieval scores and a set of retrieval scores "diffused" in the space. We present some examples of autocorrelations of functions on a grid in Figure 1.

Figure 1: The Moran coefficient, I_M, for several binary functions on a grid. The Moran coefficient is a local measure of function consistency. From the perspective of information retrieval, each of these grid spaces would represent a document, and documents would be organized so that they lie next to topically-related documents. Binary retrieval scores would define a pattern on this grid. Notice that, as the Moran coefficient increases, neighboring cells tend to have similar values.

3.3 Correlation with Other Retrievals

Sometimes we are interested in the performance of a single retrieval but have access to scores from multiple systems for the same query. In this situation, we can use combined information from these scores to construct a surrogate for a high-quality ranking [17]. We can treat the correlation between the retrieval we are interested in and the combined scores as a predictor of performance. Assume that we are given m score functions, y_i, for the same n documents. We will represent the mean of these
vectors as y_µ = (1/m) ∑_{i=1}^{m} y_i. We use the mean vector as an approximation to relevance. Since we use zero-mean and unit-variance normalization, work in metasearch suggests that this assumption is justified [15]. Because y_µ represents a very good retrieval, we hypothesize that a strong similarity between y_µ and y will correlate positively with system performance. We use Pearson's product-moment correlation to measure the similarity between these vectors,

    ρ(y, y_µ) = yᵀy_µ / (‖y‖₂‖y_µ‖₂).    (4)

We will comment on the similarity between Equations 3 and 4 in Section 7. Of course, we can combine ρ(y, ỹ) and ρ(y, y_µ) if we assume that they capture different factors in the prediction. One way to accomplish this is to combine these predictors as independent variables in a linear regression. An alternative means of combination is suggested by the mathematical form of our predictors. Since ỹ encodes the spatial dependencies in y and y_µ encodes the spatial properties of the multiple runs, we can compute a third correlation between these two vectors,

    ρ(ỹ, y_µ) = ỹᵀy_µ / (‖ỹ‖₂‖y_µ‖₂).    (5)

We can interpret Equation 5 as measuring the correlation between a high-quality ranking (y_µ) and a spatially smoothed version of the retrieval (ỹ).

4. RELATIONSHIP WITH OTHER PREDICTORS

One way to predict the effectiveness of a retrieval is to look at the shared vocabulary of the top n retrieved documents. If we computed the most frequent content words in this set, we would hope that they would be consistent with our topic. In fact, we might believe that a bad retrieval would include documents on many disparate topics, resulting in an overlap of terminological noise. The Clarity of a query attempts to quantify exactly this [7]. Specifically, Clarity measures the similarity of the words most frequently used in retrieved documents to those most frequently used in the whole corpus. The conjecture is that a good retrieval will use language distinct from general text; the overlapping language in a bad retrieval
will tend to be more similar to general text. Mathematically, we can compute a representation of the language used in the initial retrieval as a weighted combination of document language models,

    P(w | θ_Q) = (1/Z) ∑_{i=1}^{n} P(Q | θ_i) P(w | θ_i),    (6)

where θ_i is the language model of the ith-ranked document, P(Q | θ_i) is the query likelihood score of the ith-ranked document, and Z = ∑_{i=1}^{n} P(Q | θ_i) is a normalization constant. The similarity between the multinomial P(w | θ_Q) and a model of "general text" can be computed using the Kullback-Leibler divergence, D^V_KL(θ_Q ‖ θ_C). Here, the distribution P(w | θ_C) is our model of general text, which can be computed using term frequencies in the corpus. In Figure 2a, we present Clarity as measuring the distance between the "weighted center of mass" of the retrieval (labeled y) and the "unweighted center of mass" of the collection (labeled O). Clarity reaches a minimum when a retrieval assigns every document the same score. Let's again assume we have a set of n documents retrieved for our query. Another way to quantify the dispersion of a set of documents is to look at how clustered they are. We may hypothesize that a good retrieval will return a single, tight cluster. A poorly performing retrieval will return a loosely related set of documents covering many topics. One proposed method of quantifying this dispersion is to measure the distance from a random document a to its nearest neighbor, b. A retrieval which is tightly clustered will, on average, have a low distance between a and b; a retrieval which is less tightly clustered will, on average, have high distances between a and b. This average corresponds to using the Cox-Lewis statistic to measure the randomness of the top n documents retrieved from a system [18]. In Figure 2a, this is roughly equivalent to measuring the area of the set n. Notice that we are throwing away information about the retrieval function y.
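As an aside, the Clarity computation just described can be sketched in a few lines. The following illustration is ours, not the paper's: it uses hypothetical toy inputs (dictionary-based unigram language models), builds P(w | θ_Q) as the query-likelihood-weighted mixture of document models, and measures its Kullback-Leibler divergence from the corpus model:

```python
import math

def clarity(doc_models, query_likelihoods, corpus_model):
    """KL divergence between the query-weighted mixture of retrieved-document
    language models, P(w | theta_Q), and the corpus model, P(w | theta_C).

    doc_models        : list of dicts mapping word -> P(w | theta_i)
    query_likelihoods : list of P(Q | theta_i), one per retrieved document
    corpus_model      : dict mapping word -> P(w | theta_C); assumed to cover
                        the vocabulary of the document models
    """
    Z = sum(query_likelihoods)                 # normalization constant
    theta_q = {}                               # mixture model P(w | theta_Q)
    for model, q_lik in zip(doc_models, query_likelihoods):
        for w, p in model.items():
            theta_q[w] = theta_q.get(w, 0.0) + (q_lik / Z) * p
    # D_KL(theta_Q || theta_C), summed over the vocabulary
    return sum(p * math.log(p / corpus_model[w])
               for w, p in theta_q.items() if p > 0)
```

When every retrieved document's model matches the corpus model, the divergence is zero, matching the intuition that a retrieval whose language looks like general text has low Clarity.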
Therefore the Cox-Lewis statistic is highly dependent on selecting the top n documents.¹ Remember that we have n documents and a set of scores. Let's assume that we have access to the system which provided the original scores and that we can also request scores for new documents. This suggests a third method for predicting performance. Take some document, a, from the retrieved set and arbitrarily add or remove words at random to create a new document ã. Now, we can ask our system to score ã with respect to our query. If, on average over the n documents, the scores of a and ã tend to be very different, we might suspect that the system is failing on this query. So, an alternative approach is to measure the similarity between the retrieval and a perturbed version of that retrieval [18, 19]. This can be accomplished by either perturbing the documents or queries. The similarity between the two retrievals can be measured using some correlation measure. This is depicted in Figure 2b.

¹ The authors have suggested coupling the query with the distance measure [18]. The information introduced by the query, though, is retrieval-independent so that, if two retrievals return the same set of documents, the approximate Cox-Lewis statistic will be the same regardless of the retrieval scores.

Figure 2: Representation of several performance predictors on a grid. In Figure 2a, we depict predictors which measure the divergence between the "center of mass" of a retrieval and the center of the embedding space. In Figure 2b, we depict predictors which compare the original retrieval, y, to a perturbed version of the retrieval, ỹ. Our approach uses a particular type of perturbation based on score diffusion. Finally, in Figure 2c, we depict prediction when given retrievals from several other systems on the same query. Here, we can consider the fusion of these retrievals as a surrogate for relevance.

The upper grid represents the original retrieval,
y, while the lower grid represents the function after having been perturbed, ỹ. The nature of the perturbation process requires additional scorings or retrievals. Our predictor does not require access to the original scoring function or additional retrievals. So, although our method is similar to other perturbation methods in spirit, it can be applied in situations when the retrieval system is inaccessible or costly to access. Finally, assume that we have, in addition to the retrieval we want to evaluate, m retrievals from a variety of different systems. In this case, we might take a document a and compare its rank in the retrieval to its average rank in the m retrievals. If we believe that the m retrievals provide a satisfactory approximation to relevance, then a very large difference in rank would suggest that our retrieval is misranking a. If this difference is large on average over all n documents, then we might predict that the retrieval is bad. If, on the other hand, the retrieval is very consistent with the m retrievals, then we might predict that the retrieval is good. The similarity between the retrieval and the combined retrieval may be computed using some correlation measure. This is depicted in Figure 2c. In previous work, the Kullback-Leibler divergence between the normalized scores of the retrieval and the normalized scores of the combined retrieval provides the similarity [1].

5. EXPERIMENTS

Our experiments focus on testing the predictive power of each of our predictors: ρ(y, ỹ), ρ(y, y_µ), and ρ(ỹ, y_µ). As stated in Section 2, we are interested in predicting the performance of the retrieval generated by an arbitrary system. Our methodology is consistent with previous research in that we predict the relative performance of a retrieval by comparing a ranking based on our predictor to a ranking based on average precision. We present results for two sets of experiments. The first set of experiments
presents detailed comparisons of our predictors to previously-proposed predictors using identical data sets. Our second set of experiments demonstrates the generalizability of our approach to arbitrary retrieval methods, corpus types, and corpus languages.

5.1 Detailed Experiments

In these experiments, we predict the performance of language modeling scores using our autocorrelation predictor, ρ(y, ỹ); we do not consider ρ(y, y_µ) or ρ(ỹ, y_µ) because, in these detailed experiments, we focus on ranking the retrievals from a single system. We use retrievals, values for baseline predictors, and evaluation measures reported in previous work [19].

5.1.1 Topics and Collections

These performance prediction experiments use language model retrievals performed for queries associated with collections in the TREC corpora. Using TREC collections allows us to confidently associate an average precision with a retrieval. In these experiments, we use the following topic collections: TREC 4 ad hoc, TREC 5 ad hoc, Robust 2004, Terabyte 2004, and Terabyte 2005.

5.1.2 Baselines

We provide two baselines. Our first baseline is the classic Clarity predictor presented in Equation 6. Clarity is designed to be used with language modeling systems. Our second baseline is Zhou and Croft's "ranking robustness" predictor. This predictor corrupts the top k documents from the retrieval and re-computes the language model scores for these corrupted documents. The value of the predictor is the Spearman rank correlation between the original ranking and the corrupted ranking. In our tables, we will label results for Clarity using D^V_KL and the ranking robustness predictor using P.

5.2 Generalizability Experiments

Our predictors do not require a particular baseline retrieval system; the predictors can be computed for an arbitrary retrieval, regardless of how scores were generated. We believe that this is one of the most attractive aspects of our
algorithm. Therefore, in a second set of experiments, we demonstrate the ability of our techniques to generalize to a variety of collections, topics, and retrieval systems.

5.2.1 Topics and Collections

We gathered a diverse set of collections from all possible TREC corpora. We cast a wide net in order to locate collections where our predictors might fail. Our hypothesis is that documents with high topical similarity should have correlated scores. Therefore, we avoided collections where scores were unlikely to be correlated (e.g., question-answering) or were likely to be negatively correlated (e.g., novelty). Nevertheless, our collections include corpora where correlations are weakly justified (e.g., non-English corpora) or not justified at all (e.g., expert search). We use the ad hoc tracks from TREC 3-8, TREC Robust 2003-2005, TREC Terabyte 2004-2005, TREC 4-5 Spanish, TREC 5-6 Chinese, and TREC Enterprise Expert Search 2005. In all cases, we use only the automatic runs for ad hoc tracks submitted to NIST. For all English and Spanish corpora, we construct the matrix W according to the process described in Section 3.1. For Chinese corpora, we use naïve character-based tf.idf vectors. For entities, entries in W are proportional to the number of documents in which two entities cooccur.

5.2.2 Baselines

In our detailed experiments, we used the Clarity measure as a baseline. Since we are predicting the performance of retrievals which are not based on language modeling, we use a version of Clarity referred to as ranked-list Clarity [7]. Ranked-list Clarity converts document ranks to P(Q | θ_i) values. This conversion begins by replacing all of the scores in Y with the respective ranks. Our estimate of P(Q | θ_i) from the ranks then follows, where c is a cutoff parameter. As suggested by the authors, we fix the algorithm parameters c and λ2 so that c = 60 and λ2 = 0.10. We use Equation 6 to estimate P(w | θ_Q) and D^V_KL
(θ_Q ‖ θ_C) to compute the value of the predictor. We will refer to this predictor as D^V_KL, superscripted by V to indicate that the Kullback-Leibler divergence is with respect to the term embedding space. When information from multiple runs on the same query is available, we use Aslam and Pavlu's document-space multinomial divergence as a baseline [1]. This rank-based method first normalizes the scores in a retrieval as an n-dimensional multinomial. As with ranked-list Clarity, we begin by replacing all of the scores in Y with their respective ranks. Then, we adjust the elements of Y as described in [1]. In our multirun experiments, we only use the top 75 documents from each retrieval (n = 75); this is within the range of parameter values suggested by the authors. However, we admit not tuning this parameter for either our system or the baseline. The predictor is the divergence between the candidate distribution, Ŷ, and the mean distribution, Ŷ_µ. With the uniform linear combination of these m retrievals represented as Ŷ_µ, we can compute the divergence as D^n_KL(Ŷ ‖ Ŷ_µ), where we use the superscript n to indicate that the summation is over the set of n documents. This baseline was developed in the context of predicting query difficulty, but we adopt it as a reasonable baseline for predicting retrieval performance.

5.2.3 Parameter Settings

When given multiple retrievals, we use documents in the union of the top k = 75 documents from each of the m retrievals for that query. If the size of this union is ñ, then Y_µ and each Y_i is of length ñ. In some cases, a system did not score a document in the union. Since we are making a Gaussian assumption about our scores, we can sample scores for these unseen documents from the negative tail of the distribution. Specifically, we sample from the part of the distribution lower than the minimum value in the normalized retrieval. This introduces randomness into our
algorithm, but we believe it is more appropriate than assigning an arbitrary fixed value. We optimized the linear regression using the square root of each predictor. We found that this substantially improved fits for all predictors, including the baselines. We considered linear combinations of pairs of predictors (labeled by the components) and all predictors (labeled as β).

5.3 Evaluation

Given a set of retrievals, potentially from a combination of queries and systems, we measure the correlation of the rank ordering of this set by the predictor and by the performance metric. In order to ensure comparability with previous results, we present Kendall's τ correlation between the predictor's ranking and a ranking based on the average precision of the retrieval. Unless explicitly noted, all correlations are significant with p < 0.05. Predictors can sometimes perform better when linearly combined [9, 11]. Although previous work has presented the coefficient of determination (R²) to measure the quality of the regression, this measure cannot be reliably used when comparing slight improvements from combining predictors. Therefore, we adopt the adjusted coefficient of determination, which penalizes models with more variables. The adjusted R² allows us to evaluate the improvement in prediction achieved by adding a parameter but loses the statistical interpretation of R². We will use Kendall's τ to evaluate the magnitude of the correlation and the adjusted R² to evaluate the combination of variables.

6. RESULTS

We present results for our detailed experiments comparing the prediction of language model scores in Table 1. Although the Clarity measure is theoretically designed for language model scores, it consistently underperforms our system-agnostic predictor. Ranking robustness was presented as an improvement to Clarity for web collections (represented in our experiments by the terabyte04 and terabyte05 collections), shifting the τ correlation from
0.139 to 0.150 for terabyte04 and 0.171 to 0.208 for terabyte05. However, these improvements are slight compared to the performance of autocorrelation on these collections. Our predictor achieves a τ correlation of 0.454 for terabyte04 and 0.383 for terabyte05. Though not always the strongest, autocorrelation achieves correlations competitive with the baseline predictors. When examining the performance of linear combinations of predictors, we note that in every case, autocorrelation factors as a necessary component of a strong predictor. We also note that the adjusted R² for individual baselines are always significantly improved by incorporating autocorrelation. We present our generalizability results in Table 2. We begin by examining the situation in column (a), where we are presented with a single retrieval and no information from additional retrievals. For every collection except one, we achieve significantly better correlations than ranked-list Clarity. Surprisingly, we achieve relatively strong correlations for Spanish and Chinese collections despite our naïve processing. We do not have a ranked-list Clarity correlation for ent05 because entity modeling is itself an open research question. However, our autocorrelation measure does not achieve high correlations there, perhaps because relevance for entity retrieval does not propagate according to the cooccurrence links we use. As noted above, the poor Clarity performance on web data is consistent with our findings in the detailed experiments. Clarity also notably underperforms for several news corpora (trec5, trec7, and robust04). On the other hand, autocorrelation seems robust to the changes between different corpora. Next, we turn to the introduction of information from multiple retrievals. We compare the correlations between those predictors which do not use this information in column (a) and those which do in column (b). For every collection, the predictors in column (b) outperform
the predictors in column (a), indicating that the information from additional runs can be critical to making good predictions. Inspecting the predictors in column (b), we only draw weak conclusions. Our new predictors tend to perform better on news corpora. And between our new predictors, the hybrid ρ(ỹ, y_µ) predictor tends to perform better. Recall that our ρ(ỹ, y_µ) measure incorporates both spatial and multiple-retrieval information. Therefore, we believe that the improvement in correlation is the result of incorporating information from spatial behavior. In column (c), we can investigate the utility of incorporating spatial information with information from multiple retrievals. Notice that in the cases where autocorrelation, ρ(y, ỹ), alone performs well (trec3, trec5-spanish, and trec6-chinese), it is substantially improved by incorporating multiple-retrieval information from ρ(y, y_µ) in the linear regression, β. In the cases where ρ(y, y_µ) performs well, incorporating autocorrelation rarely results in a significant improvement in performance. In fact, in every case where our predictor outperforms the baseline, it includes information from multiple runs.

8. RELATED WORK

In this section we draw more general comparisons to other work in performance prediction and spatial data analysis. There is a growing body of work which attempts to predict the performance of individual retrievals [7, 3, 11, 9, 19]. We have attempted to place our work in the context of much of this work in Section 4. However, a complete comparison is beyond the scope of this paper. We note, though, that our experiments cover a larger and more diverse set of retrievals, collections, and topics than previously examined. Much previous work, particularly in the context of TREC, focuses on predicting the performance of systems. Here, each system generates k retrievals. The task is, given these retrievals, to predict the
ranking of systems according to some performance measure. Several papers attempt to address this task under the constraint of few judgments [2, 4]. Some work even attempts to use zero judgments by leveraging multiple retrievals for the same query [17]. Our task differs because we focus on ranking retrievals independent of the generating system. The task here is not to test the hypothesis "system A is superior to system B" but to test the hypothesis "retrieval A is superior to retrieval B". Autocorrelation manifests itself in many classification tasks. Neville and Jensen define relational autocorrelation for relational learning problems and demonstrate that many classification tasks manifest autocorrelation [13]. Temporal autocorrelation of initial retrievals has also been used to predict performance [9]. However, temporal autocorrelation is performed by projecting the retrieval function into the temporal embedding space. In our work, we focus on the behavior of the function over the relationships between documents.

Table 1: Comparison to Robustness and Clarity measures for language model scores. Evaluation replicates experiments from [19]. We present correlations between the classic Clarity measure (D^V_KL), the ranking robustness measure (P), and autocorrelation (ρ(y, ỹ)), each with mean average precision, in terms of Kendall's τ. The adjusted coefficient of determination is presented to measure the effectiveness of combining predictors. Measures in bold represent the strongest correlation for that test/collection pair.

Table 2: Large scale prediction experiments. We predict the ranking of large sets of retrievals for various collections and retrieval systems. Kendall's τ correlations are computed between the predicted ranking and a ranking based on the retrieval's average precision. In column (a), we have predictors which do not use information from other retrievals for the same query. In columns (b) and (c) we present
performance for predictors which incorporate information from multiple retrievals. The adjusted coefficient of determination is computed to determine the effectiveness of combining predictors. Measures in bold represent the strongest correlation for that test/collection pair.

Finally, regularization-based re-ranking processes are also closely related to our work [8]. These techniques seek to maximize the agreement between scores of related documents by solving a constrained optimization problem. The maximization of consistency is equivalent to maximizing the Moran autocorrelation. Therefore, we believe that our work provides an explanation for why regularization-based re-ranking works.

9. CONCLUSION

We have presented a new method for predicting the performance of a retrieval ranking without any relevance judgments. We consider two cases. First, when making predictions in the absence of retrievals from other systems, our predictors demonstrate robust, strong correlations with average precision. This performance, combined with a simple implementation, makes our predictors particularly attractive. We have demonstrated this improvement for many, diverse settings. To our knowledge, this is the first large-scale examination of zero-judgment, single-retrieval performance prediction. Second, when provided retrievals from other systems, our extended methods demonstrate competitive performance with state-of-the-art baselines. Our experiments also demonstrate the limits of the usefulness of our predictors when information from multiple runs is provided. Our results suggest two conclusions. First, our results could affect retrieval algorithm design. Retrieval algorithms designed to consider spatial autocorrelation will conform to the cluster hypothesis and improve performance. Second, our results could affect the design of minimal test collection algorithms. Much of the recent work in ranking systems sometimes ignores correlations between document labels
and scores.\nWe believe that these two directions could be rewarding given the theoretical and experimental evidence in this paper.","keyphrases":["perform predict","spatial autocorrel","autocorrel","inform retriev","cluster hypothesi","zero relev judgment","predictor relationship","predictor predict power","languag model score","queri rank","regular"],"prmu":["P","P","P","P","U","M","M","M","M","R","U"]} {"id":"C-44","title":"MSP: Multi-Sequence Positioning of Wireless Sensor Nodes","abstract":"Wireless Sensor Networks have been proposed for use in many location-dependent applications. Most of these need to identify the locations of wireless sensor nodes, a challenging task because of the severe constraints on cost, energy and effective range of sensor devices. To overcome limitations in existing solutions, we present a Multi-Sequence Positioning (MSP) method for large-scale stationary sensor node localization in outdoor environments. The novel idea behind MSP is to reconstruct and estimate two-dimensional location information for each sensor node by processing multiple one-dimensional node sequences, easily obtained through loosely guided event distribution. Starting from a basic MSP design, we propose four optimizations, which work together to increase the localization accuracy. We address several interesting issues, such as incomplete (partial) node sequences and sequence flip, found in the Mirage test-bed we built. We have evaluated the MSP system through theoretical analysis, extensive simulation as well as two physical systems (an indoor version with 46 MICAz motes and an outdoor version with 20 MICAz motes). This evaluation demonstrates that MSP can achieve an accuracy within one foot, requiring neither additional costly hardware on sensor nodes nor precise event distribution. 
It also provides a nice tradeoff between physical cost (anchors) and soft cost (events), while maintaining localization accuracy.","lvl-1":"MSP: Multi-Sequence Positioning of Wireless Sensor Nodes\u2217 Ziguo Zhong Computer Science and Engineering University of Minnesota zhong@cs.umn.edu Tian He Computer Science and Engineering University of Minnesota tianhe@cs.umn.edu Abstract Wireless Sensor Networks have been proposed for use in many location-dependent applications.\nMost of these need to identify the locations of wireless sensor nodes, a challenging task because of the severe constraints on cost, energy and effective range of sensor devices.\nTo overcome limitations in existing solutions, we present a Multi-Sequence Positioning (MSP) method for large-scale stationary sensor node localization in outdoor environments.\nThe novel idea behind MSP is to reconstruct and estimate two-dimensional location information for each sensor node by processing multiple one-dimensional node sequences, easily obtained through loosely guided event distribution.\nStarting from a basic MSP design, we propose four optimizations, which work together to increase the localization accuracy.\nWe address several interesting issues, such as incomplete (partial) node sequences and sequence flip, found in the Mirage test-bed we built.\nWe have evaluated the MSP system through theoretical analysis, extensive simulation as well as two physical systems (an indoor version with 46 MICAz motes and an outdoor version with 20 MICAz motes).\nThis evaluation demonstrates that MSP can achieve an accuracy within one foot, requiring neither additional costly hardware on sensor nodes nor precise event distribution.\nIt also provides a nice tradeoff between physical cost (anchors) and soft cost (events), while maintaining localization accuracy.\nCategories and Subject Descriptors C.2.4 [Computer Communications Networks]: Distributed Systems General Terms Algorithms, Measurement, Design, Performance, 
Experimentation

1 Introduction

Although Wireless Sensor Networks (WSN) have shown promising prospects in various applications [5], researchers still face several challenges for massive deployment of such networks. One of these is to identify the location of individual sensor nodes in outdoor environments. Because of unpredictable flow dynamics in airborne scenarios, it is not currently feasible to localize sensor nodes during massive UAV-based deployment. On the other hand, geometric information is indispensable in these networks, since users need to know where events of interest occur (e.g., the location of intruders or of a bomb explosion). Previous research on node localization falls into two categories: range-based approaches and range-free approaches. Range-based approaches [13, 17, 19, 24] compute per-node location information iteratively or recursively based on measured distances among target nodes and a few anchors which precisely know their locations. These approaches generally require costly hardware (e.g., GPS) and have limited effective range due to energy constraints (e.g., ultrasound-based TDOA [3, 17]). Although range-based solutions can be suitably used in small-scale indoor environments, they are considered less cost-effective for large-scale deployments. On the other hand, range-free approaches [4, 8, 10, 13, 14, 15] do not require accurate distance measurements, but localize the node based on network connectivity (proximity) information. Unfortunately, since wireless connectivity is highly influenced by the environment and hardware calibration, existing solutions fail to deliver encouraging empirical results, or require substantial survey [2] and calibration [24] on a case-by-case basis. Realizing the impracticality of existing solutions for the large-scale outdoor environment, researchers have recently proposed solutions (e.g., Spotlight [20] and Lighthouse [18]) for sensor node localization using the spatiotemporal correlation of
controlled events (i.e., inferring nodes' locations based on the detection time of controlled events). These solutions demonstrate that long range and high accuracy localization can be achieved simultaneously with little additional cost at sensor nodes. These benefits, however, come along with an implicit assumption that the controlled events can be precisely distributed to a specified location at a specified time. We argue that precise event distribution is difficult to achieve, especially at large scale when terrain is uneven, the event distribution device is not well calibrated and its position is difficult to maintain (e.g., the helicopter-mounted scenario in [20]).
To address these limitations in current approaches, in this paper we present a multi-sequence positioning (MSP) method for large-scale stationary sensor node localization, in deployments where an event source has line-of-sight to all sensors. The novel idea behind MSP is to estimate each sensor node's two-dimensional location by processing multiple easy-to-get one-dimensional node sequences (e.g., event detection order) obtained through loosely-guided event distribution.
This design offers several benefits. First, compared to a range-based approach, MSP does not require additional costly hardware. It works using sensors typically used by sensor network applications, such as light and acoustic sensors, both of which we specifically consider in this work. Second, compared to a range-free approach, MSP needs only a small number of anchors (theoretically, as few as two), so high accuracy can be achieved economically by introducing more events instead of more anchors. And third, compared to Spotlight, MSP does not require precise and sophisticated event distribution, an advantage that significantly simplifies the system design and reduces calibration cost.
This paper offers the following additional intellectual contributions:
• We are the first to localize sensor nodes using the
concept of node sequence, an ordered list of sensor nodes, sorted by the detection time of a disseminated event. We demonstrate that making full use of the information embedded in one-dimensional node sequences can significantly improve localization accuracy. Interestingly, we discover that repeated reprocessing of one-dimensional node sequences can further increase localization accuracy.
• We propose a distribution-based location estimation strategy that obtains the final location of sensor nodes using the marginal probability of joint distribution among adjacent nodes within the sequence. This new algorithm outperforms the widely adopted Centroid estimation [4, 8].
• To the best of our knowledge, this is the first work to improve the localization accuracy of nodes by adaptive events. The generation of later events is guided by localization results from previous events.
• We evaluate line-based MSP on our new Mirage test-bed, and wave-based MSP in outdoor environments. Through system implementation, we discover and address several interesting issues such as partial sequence and sequence flips. To reveal MSP performance at scale, we provide analytic results as well as a complete simulation study.
All the simulation and implementation code is available online at http://www.cs.umn.edu/~zhong/MSP.
The rest of the paper is organized as follows. Section 2 briefly surveys the related work. Section 3 presents an overview of the MSP localization system. In sections 4 and 5, basic MSP and four advanced processing methods are introduced. Section 6 describes how MSP can be applied in a wave propagation scenario. Section 7 discusses several implementation issues. Section 8 presents simulation results, and Section 9 reports an evaluation of MSP on the Mirage test-bed and an outdoor test-bed. Section 10 concludes the paper.

2 Related Work
Many methods have been proposed to localize wireless sensor devices in the open air. Most of these
can be classified into two categories: range-based and range-free localization.
Range-based localization systems, such as GPS [23], Cricket [17], AHLoS [19], AOA [16], Robust Quadrilaterals [13] and Sweeps [7], are based on fine-grained point-to-point distance estimation or angle estimation to identify per-node location. Constraints on the cost, energy and hardware footprint of each sensor node make these range-based methods undesirable for massive outdoor deployment. In addition, ranging signals generated by sensor nodes have a very limited effective range because of energy and form factor concerns. For example, ultrasound signals usually effectively propagate 20-30 feet using an on-board transmitter [17]. Consequently, these range-based solutions require an undesirably high deployment density. Although the received signal strength indicator (RSSI) related [2, 24] methods were once considered an ideal low-cost solution, the irregularity of radio propagation [26] seriously limits the accuracy of such systems. The recently proposed RIPS localization system [11] superimposes two RF waves together, creating a low-frequency envelope that can be accurately measured. This ranging technique performs very well as long as antennas are well oriented and environmental factors such as multi-path effects and background noise are sufficiently addressed.
Range-free methods don't need to estimate or measure accurate distances or angles. Instead, anchors or controlled-event distributions are used for node localization. Range-free methods can be generally classified into two types: anchor-based and anchor-free solutions.
• For anchor-based solutions such as Centroid [4], APIT [8], SeRLoc [10], Gradient [13], and APS [15], the main idea is that the location of each node is estimated based on the known locations of the anchor nodes. Different anchor combinations narrow the areas in which the target nodes can possibly be located. Anchor-based solutions normally
require a high density of anchor nodes so as to achieve good accuracy. In practice, it is desirable to have as few anchor nodes as possible so as to lower the system cost.
• Anchor-free solutions require no anchor nodes. Instead, external event generators and data processing platforms are used. The main idea is to correlate the event detection time at a sensor node with the known space-time relationship of controlled events at the generator so that detection time-stamps can be mapped into the locations of sensors. Spotlight [20] and Lighthouse [18] work in this fashion. In Spotlight [20], the event distribution needs to be precise in both time and space. Precise event distribution is difficult to achieve without careful calibration, especially when the event-generating devices require certain mechanical maneuvers (e.g., the telescope mount used in Spotlight). All these increase system cost and reduce localization speed. StarDust [21], which works much faster, uses label relaxation algorithms to match light spots reflected by corner-cube retro-reflectors (CCR) with sensor nodes using various constraints. Label relaxation algorithms converge only when a sufficient number of robust constraints are obtained. Due to the environmental impact on RF connectivity constraints, however, StarDust is less accurate than Spotlight.
In this paper, we propose a balanced solution that avoids the limitations of both anchor-based and anchor-free solutions. Unlike anchor-based solutions [4, 8], MSP allows a flexible tradeoff between the physical cost (anchor nodes) and the soft cost (localization events).
Figure 1. The MSP System Overview
MSP uses only a small number of anchors
(theoretically, as few as two). Unlike anchor-free solutions, MSP doesn't need to maintain rigid time-space relationships while distributing events, which makes system design simpler, more flexible and more robust to calibration errors.

3 System Overview
MSP works by extracting relative location information from multiple simple one-dimensional orderings of nodes. Figure 1(a) shows a layout of a sensor network with anchor nodes and target nodes. Target nodes are defined as the nodes to be localized. Briefly, the MSP system works as follows. First, events are generated one at a time in the network area (e.g., ultrasound propagations from different locations, laser scans with diverse angles). As each event propagates, as shown in Figure 1(a), each node detects it at some particular time instance. For a single event, we call the ordering of nodes, which is based on the sequential detection of the event, a node sequence. Each node sequence includes both the targets and the anchors as shown in Figure 1(b). Second, a multi-sequence processing algorithm helps to narrow the possible location of each node to a small area (Figure 1(c)). Finally, a distribution-based estimation method estimates the exact location of each sensor node, as shown in Figure 1(d).
Figure 1 shows that the node sequences can be obtained much more economically than accurate pair-wise distance measurements between target nodes and anchor nodes via ranging methods. In addition, this system does not require a rigid time-space relationship for the localization events, which is critical but hard to achieve in controlled event distribution scenarios (e.g., Spotlight [20]).
For the sake of clarity in presentation, we present our system in two cases:
• Ideal Case, in which all the node sequences obtained from the network are complete and correct, and nodes are time-synchronized [12, 9].
• Realistic Deployment, in which (i) node sequences can be partial (incomplete), (ii) elements in
sequences could flip (i.e., the order obtained is reversed from reality), and (iii) nodes are not time-synchronized.
To introduce the MSP algorithm, we first consider a simple straight-line scan scenario. Then, we describe how to implement straight-line scans as well as other event types, such as sound wave propagation.
Figure 2. Obtaining Multiple Node Sequences

4 Basic MSP
Let us consider a sensor network with N target nodes and M anchor nodes randomly deployed in an area of size S. The top-level idea for basic MSP is to split the whole sensor network area into small pieces by processing node sequences. Because the exact locations of all the anchors in a node sequence are known, all the nodes in this sequence can be divided into O(M+1) parts in the area.
In Figure 2, we use numbered circles to denote target nodes and numbered hexagons to denote anchor nodes. Basic MSP uses two straight lines to scan the area from different directions, treating each scan as an event. All the nodes react to the event sequentially, generating two node sequences. For vertical scan 1, the node sequence is (8,1,5,A,6,C,4,3,7,2,B,9), as shown outside the right boundary of the area in Figure 2; for horizontal scan 2, the node sequence is (3,1,C,5,9,2,A,4,6,B,7,8), as shown under the bottom boundary of the area in Figure 2. Since the locations of the anchor nodes are available, the anchor nodes in the two node sequences actually split the area vertically and horizontally into 16 parts, as shown in Figure 2.
To extend this process, suppose we have M anchor nodes and perform d scans from different angles, obtaining d node sequences and dividing the area into many small parts. Obviously, the number of parts is a function of the number of anchors M, the number of scans d, the anchors' locations, as well as the slope k of each scan line. According
to the pie-cutting theorem [22], the area can be divided into O(M²d²) parts. When M and d are appropriately large, the polygon for each target node may become sufficiently small so that accurate estimation can be achieved. We emphasize that accuracy is affected not only by the number of anchors M, but also by the number of events d. In other words, MSP provides a tradeoff between the physical cost of anchors and the soft cost of events.
Algorithm 1 depicts the computing architecture of basic MSP. Each node sequence is processed within lines 1 to 8. For each node, GetBoundaries() in line 5 searches for the predecessor and successor anchors in the sequence so as to determine the boundaries of this node. Then in line 6 UpdateMap() shrinks the location area of this node according to the newly obtained boundaries. After processing all sequences, Centroid Estimation (line 11) sets the center of gravity of the final polygon as the estimated location of the target node.
Basic MSP only makes use of the order information between a target node and the anchor nodes in each sequence. Actually, we can extract much more location information from each sequence. Section 5 will introduce advanced MSP, in which four novel optimizations are proposed to improve the performance of MSP significantly.

Algorithm 1 Basic MSP Process
Output: The estimated location of each node.
1: repeat
2:   GetOneUnprocessedSequence();
3:   repeat
4:     GetOneNodeFromSequenceInOrder();
5:     GetBoundaries();
6:     UpdateMap();
7:   until All the target nodes are updated;
8: until All the node sequences are processed;
9: repeat
10:   GetOneUnestimatedNode();
11:   CentroidEstimation();
12: until All the target nodes are estimated;

5 Advanced MSP
Four improvements to basic MSP are proposed in this section. The first three improvements do not need additional sensing and communication in the networks but require only slightly more off-line computation. The objective of all these improvements is to make full use of
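As a concrete illustration of Algorithm 1, here is a minimal Python sketch of basic MSP restricted to axis-aligned scans on a rectangular area. The data layout (a candidate bounding box per target) and all helper names are our own simplifying assumptions, not the authors' implementation; a general scan direction would bound a linear coordinate rather than x or y.

```python
def basic_msp(anchors, targets, scans, area):
    """anchors: name -> (x, y); scans: list of (axis, node_sequence);
    area: [xmin, xmax, ymin, ymax]. Returns centroid estimates per target."""
    boxes = {t: list(area) for t in targets}   # candidate box per target node
    for axis, seq in scans:                    # axis 0: scan orders x; 1: orders y
        for i, node in enumerate(seq):
            if node in anchors:
                continue
            # GetBoundaries(): nearest anchors before/after the node in the sequence
            lo = max((anchors[n][axis] for n in seq[:i] if n in anchors),
                     default=area[2 * axis])
            hi = min((anchors[n][axis] for n in seq[i + 1:] if n in anchors),
                     default=area[2 * axis + 1])
            # UpdateMap(): shrink the candidate box along this scan's axis
            box = boxes[node]
            box[2 * axis] = max(box[2 * axis], lo)
            box[2 * axis + 1] = min(box[2 * axis + 1], hi)
    # CentroidEstimation(): center of gravity of the final (rectangular) area
    return {t: ((b[0] + b[1]) / 2, (b[2] + b[3]) / 2) for t, b in boxes.items()}
```

With two anchors at (2,2) and (8,8) and one vertical plus one horizontal scan, a target detected between them is confined to the box [2,8]×[2,8] and estimated at its center.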
the information embedded in the node sequences. The results we have obtained empirically indicate that the implementation of the first two methods can dramatically reduce the localization error, and that the third and fourth methods are helpful for some system deployments.

5.1 Sequence-Based MSP
As shown in Figure 2, each scan line, together with the M anchors, splits the whole area into M+1 parts. Each target node falls into one polygon shaped by scan lines. We noted that in basic MSP, only the anchors are used to narrow down the polygon of each target node, but actually there is more information in the node sequence that we can make use of.
Let's first look at a simple example shown in Figure 3. The previous scans narrow the locations of target node 1 and node 2 into the two dashed rectangles shown in the left part of Figure 3. Then a new scan generates a new sequence (1, 2). With knowledge of the scan's direction, it is easy to tell that node 1 is located to the left of node 2. Thus, we can further narrow the location area of node 2 by eliminating the shaded part of node 2's rectangle. This is because node 2 is located to the right of node 1 while the shaded area is outside the lower boundary of node 1. Similarly, the location area of node 1 can be narrowed by eliminating the shaded part beyond node 2's right boundary. We call this procedure sequence-based MSP, which means that the whole node sequence needs to be processed node by node in order. Specifically, sequence-based MSP follows this exact processing rule:
Figure 3. Rule Illustration in Sequence-Based MSP

Algorithm 2 Sequence-Based MSP Process
Output: The estimated location of each node.
1: repeat
2:   GetOneUnprocessedSequence();
3:   repeat
4:     GetOneNodeByIncreasingOrder();
5:     ComputeLowbound();
6:     UpdateMap();
7:   until The last target node in the sequence;
8:   repeat
9:     GetOneNodeByDecreasingOrder();
10:    ComputeUpbound();
11:    UpdateMap();
12:   until The last target node in the sequence;
13: until All the node sequences are processed;
14: repeat
15:   GetOneUnestimatedNode();
16:   CentroidEstimation();
17: until All the target nodes are estimated;

Elimination Rule: Along a scanning direction, the lower boundary of the successor's area must be equal to or larger than the lower boundary of the predecessor's area, and the upper boundary of the predecessor's area must be equal to or smaller than the upper boundary of the successor's area.
In the case of Figure 3, node 2 is the successor of node 1, and node 1 is the predecessor of node 2. According to the elimination rule, node 2's lower boundary cannot be smaller than that of node 1, and node 1's upper boundary cannot exceed node 2's upper boundary.
Algorithm 2 illustrates the pseudo code of sequence-based MSP. Each node sequence is processed within lines 3 to 13. The sequence processing contains two steps:
Step 1 (lines 3 to 7): Compute and modify the lower boundary for each target node in increasing order in the node sequence. Each node's lower boundary is determined by the lower boundary of its predecessor node in the sequence, thus the processing must start from the first node in the sequence and proceed in increasing order. Then update the map according to the new lower boundary.
Step 2 (lines 8 to 12): Compute and modify the upper boundary for each node in decreasing order in the node sequence. Each node's upper boundary is determined by the upper boundary of its successor node in the sequence, thus the processing must start from the last node in the sequence and proceed in decreasing order. Then update the map according to the new upper boundary.
After processing all the sequences, for each node, a polygon bounding its possible location has been found. Then, center-of-gravity-based estimation is applied to compute the exact location of each node (lines
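The two-pass boundary propagation of Algorithm 2 can be sketched in Python. Here each node's area is reduced to an interval [lo, hi] along the scan direction, a deliberate simplification of the 2-D polygon case, and the function names are ours, not the authors':

```python
def sequence_based_pass(seq, intervals):
    """Apply the elimination rule along one scan direction.
    seq: node sequence in detection order; intervals: node -> (lo, hi)."""
    # Step 1 (increasing order): a successor's lower boundary must be at
    # least its predecessor's lower boundary (ComputeLowbound + UpdateMap).
    for prev, cur in zip(seq, seq[1:]):
        lo, hi = intervals[cur]
        intervals[cur] = (max(lo, intervals[prev][0]), hi)
    # Step 2 (decreasing order): a predecessor's upper boundary must not
    # exceed its successor's upper boundary (ComputeUpbound + UpdateMap).
    for cur, nxt in reversed(list(zip(seq, seq[1:]))):
        lo, hi = intervals[cur]
        intervals[cur] = (lo, min(hi, intervals[nxt][1]))
    return intervals
```

For the Figure 3 example, a sequence (1, 2) raises node 2's lower boundary to node 1's and lowers node 1's upper boundary to node 2's, exactly the two shaded eliminations described above.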
14 to 17).
An example of this process is shown in Figure 4. The third scan generates the node sequence (B,9,2,7,4,6,3,8,C,A,5,1). In addition to the anchor split lines, because nodes 4 and 7 come after node 2 in the sequence, node 4's and node 7's polygons can be narrowed according to node 2's lower boundary (the lower right-shaded area); similarly, the shaded area in node 2's rectangle can be eliminated, since this part is beyond node 7's upper boundary indicated by the dotted line. Similar elimination can be performed for node 3 as shown in the figure.
Figure 4. Sequence-Based MSP Example
Figure 5. Iterative MSP: Reprocessing Scan 1
From the above, we can see that sequence-based MSP makes use of the information embedded in every sequential node pair in the node sequence. The polygon boundaries of the target nodes obtained earlier can be used to further split other target nodes' areas. Our evaluation in Sections 8 and 9 shows that sequence-based MSP considerably enhances system accuracy.

5.2 Iterative MSP
Sequence-based MSP is preferable to basic MSP because it extracts more information from the node sequence. In fact, further useful information still remains! In sequence-based MSP, a sequence processed later benefits from information produced by previously processed sequences (e.g., the third scan in Figure 5). However, the first several sequences can hardly benefit from other scans in this way. Inspired by this phenomenon, we propose iterative MSP. The basic idea of iterative MSP is to process all the sequences iteratively several times so that the processing of each single sequence can benefit from the results of other sequences. To illustrate the idea more clearly, Figure 4 shows the results of three
scans that have provided three sequences. Now if we process the sequence (8,1,5,A,6,C,4,3,7,2,B,9) obtained from scan 1 again, we can make progress, as shown in Figure 5. The reprocessing of node sequence 1 provides information in the way an additional vertical scan would. From sequence-based MSP, we know that the upper boundaries of nodes 3 and 4 along the scan direction must not extend beyond the upper boundary of node 7; therefore the grid parts can be eliminated for nodes 3 and 4, respectively, as shown in Figure 5.
Figure 6. Example of Joint Distribution Estimation ((a) Center of Gravity; (b) Joint Distribution)
Figure 7. Idea of DBE MSP for Each Node
From this example, we can see that iterative processing of the sequences can help further shrink the polygon of each target node, and thus enhance the accuracy of the system. The implementation of iterative MSP is straightforward: process all the sequences multiple times using sequence-based MSP. Like sequence-based MSP, iterative MSP introduces no additional event cost. In other words, reprocessing does not actually repeat the scan physically. Evaluation results in Section 8 will show that iterative MSP contributes noticeably to a lower localization error. Empirical results show that after 5 iterations, improvements become less significant. In summary, iterative processing can achieve better performance with only a small computation overhead.

5.3 Distribution-Based Estimation
After determining the location area polygon for each node, estimation is needed for a final decision. Previous research mostly applied the Center of Gravity (COG) method [4] [8] [10], which minimizes average error. If every node is independent of all others, COG is the statistically best solution. In MSP, however, each node may not be independent. For example, two neighboring nodes in a certain sequence could have overlapping
polygon areas. In this case, if the marginal probability of joint distribution is used for estimation, better statistical results are achieved. Figure 6 shows an example in which node 1 and node 2 are located in the same polygon. If COG is used, both nodes are localized at the same position (Figure 6(a)). However, the node sequences obtained from two scans indicate that node 1 should be to the left of and above node 2, as shown in Figure 6(b).
The high-level idea of distribution-based estimation proposed for MSP, which we call DBE MSP, is illustrated in Figure 7. The distribution of each node under the ith scan (for the ith node sequence) is estimated in node.vmap[i], which is a data structure for remembering the marginal distribution over scan i. Then all the vmaps are combined to get a single map, and weighted estimation is used to obtain the final location.
For each scan, all the nodes are sorted according to the gap, which is the diameter of the polygon along the direction of the scan, to produce a second, gap-based node sequence. Then, the estimation starts from the node with the smallest gap. This is because it is statistically more accurate to assume a uniform distribution for the node with a smaller gap.
Figure 8. Four Cases in DBE Process (Alone: uniformly distributed; predecessor exists: conditional distribution based on the predecessor node's area; successor exists: conditional distribution based on the successor node's area; both exist: conditional distribution based on both of them)
For each node processed in order from the gap-based node sequence, if either no neighbor node in the original event-based node sequence shares an overlapping area, or the neighbors have not been processed due to bigger gaps, a uniform distribution Uniform() is applied to this isolated node (the Alone case in Figure 8). If the distribution of its neighbors
sharing overlapped areas has been processed, we calculate the joint distribution for the node. As shown in Figure 8, there are three possible cases depending on whether the distributions of the overlapping predecessor and/or successor nodes have already been estimated. The estimation strategy of starting from the most accurate node (the smallest-gap node) reduces the problem of estimation error propagation. The results in the evaluation section indicate that applying distribution-based estimation could give statistically better results.

5.4 Adaptive MSP
So far, all the enhancements to basic MSP focus on improving the multi-sequence processing algorithm given a fixed set of scan directions. All these enhancements require only more computing time without any overhead to the sensor nodes. Obviously, it is possible to have some choice and optimization on how events are generated. For example, in military situations, artillery or rocket-launched mini-ultrasound bombs can be used for event generation at some selected locations. In adaptive MSP, we carefully generate each new localization event so as to maximize the contribution of the new event to the refinement of localization, based on feedback from previous events.
Figure 9 depicts the basic architecture of adaptive MSP. Through previous localization events, the whole map has been partitioned into many small location areas. The idea of adaptive MSP is to generate the next localization event to achieve best-effort elimination, which ideally shrinks the location area of each individual node as much as possible.
We use a weighted voting mechanism to evaluate candidate localization events. Every node wants the next event to split its area evenly, which would shrink the area fast. Therefore, every node votes for the parameters of the next event (e.g., the scan angle k of the straight-line scan). Since the area map is maintained centrally, the vote is virtual and there is no need for the real
sensor nodes to participate in it. After gathering all the voting results, the event parameters with the most votes win the election.
There are two factors that determine the weight of each vote:
• The vote for each candidate event is weighted according to the diameter D of the node's location area. Nodes with bigger location areas speak louder in the voting, because overall system error is reduced mostly by splitting the larger areas.
Figure 9. Basic Architecture of Adaptive MSP (map partitioned by the localization events → diameter of each area → candidate localization events evaluation → trigger next localization event)
Figure 10. Candidate Slopes for Node 3 at Anchor 1
• The vote for each candidate event is also weighted according to its elimination efficiency for a location area, which is defined as how equally in size (or in diameter) an event can cut an area. In other words, an optimal scan event cuts an area in the middle, since this cut shrinks the area quickly and thus reduces localization uncertainty quickly.
Combining the above two aspects, the weight for each vote is computed according to the following equation (1):
Weight(k_i^j) = f(D_i, △(k_i^j, k_i^opt))   (1)
where k_i^j is node i's jth supporting parameter for next event generation; D_i is the diameter of node i's location area; and △(k_i^j, k_i^opt) is the distance between k_i^j and the optimal parameter k_i^opt for node i, which should be defined to fit the specific application.
Figure 10 presents an example of node 3's voting for the slopes of the next straight-line scan. In the system, there are a fixed number of candidate slopes for each scan (e.g., k1, k2, k3, k4, ...). The location area of target node 3 is shown in the figure. The candidate events k_3^1, k_3^2, k_3^3, k_3^4,
k_3^5 and k_3^6 are evaluated according to their effectiveness compared to the optimal ideal event, which is shown as a dotted line, with appropriate weights computed according to equation (1). For this specific example, as illustrated in the right part of Figure 10, f(D_i, △(k_i^j, k_i^opt)) is defined as the following equation (2):
Weight(k_i^j) = f(D_i, △(k_i^j, k_i^opt)) = D_i · S_small / S_large   (2)
where S_small and S_large are the sizes of the smaller part and the larger part of the area cut by the candidate line, respectively. In this case, node 3 votes 0 for the candidate lines that do not cross its area, since S_small = 0. We show later that adaptive MSP improves localization accuracy in WSNs with irregularly shaped deployment areas.

5.5 Overhead and MSP Complexity Analysis
This section provides a complexity analysis of the MSP design. We emphasize that MSP adopts an asymmetric design in which sensor nodes need only to detect and report the events. They are blissfully oblivious to the processing methods proposed in previous sections. In this section, we analyze the computational cost on the node sequence processing side, where resources are plentiful.
According to Algorithm 1, the computational complexity of basic MSP is O(d · N · S), and the storage space required is O(N · S), where d is the number of events, N is the number of target nodes, and S is the area size. According to Algorithm 2, the computational complexity of both sequence-based MSP and iterative MSP is O(c · d · N · S), where c is the number of iterations and c = 1 for sequence-based MSP; the storage space required is O(N · S). Both the computational complexity and storage space are equal within a constant factor to those of basic MSP.
The computational complexity of the distribution-based estimation (DBE MSP) is greater. The major overhead comes from the computation of joint distributions when both predecessor and successor nodes exist. In order to
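The weighted vote of equation (2) is simple to compute once a candidate cut of a node's area is known. Below is a small Python sketch in which the geometry (how a candidate line splits a polygon) is abstracted into the two part sizes given as inputs; the function names and the tally structure are our own illustration:

```python
def vote_weight(diameter, s_small, s_large):
    """Equation (2): Weight(k_i^j) = D_i * S_small / S_large.
    A candidate that misses the node's area has s_small == 0 (weight 0);
    a cut through the middle (s_small == s_large) earns the full diameter."""
    if s_large == 0:
        return 0.0
    return diameter * s_small / s_large

def elect_event(candidates):
    """Tally weighted votes, candidate -> list of (D, S_small, S_large)
    tuples (one per node), and return the parameter with the most weight."""
    totals = {k: sum(vote_weight(*v) for v in votes)
              for k, votes in candidates.items()}
    return max(totals, key=totals.get)
```

A node with a large remaining area and an evenly splitting candidate dominates the election, matching the two weighting factors described above.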
compute the marginal probability, MSP needs to enumerate the locations of the predecessor node and the successor node. For example, if node A has predecessor node B and successor node C, then the marginal probability P_A(x,y) of node A's being at location (x,y) is:
P_A(x,y) = Σ_i Σ_j Σ_m Σ_n (1 / N_{B,A,C}) · P_B(i,j) · P_C(m,n)   (3)
where N_{B,A,C} is the number of valid locations for A satisfying the sequence (B, A, C) when B is at (i,j) and C is at (m,n); P_B(i,j) is the probability of node B's being located at (i,j); and P_C(m,n) is the probability of node C's being located at (m,n).
A naive algorithm to compute equation (3) has complexity O(d · N · S³). However, since the marginal probability indeed comes from only one dimension along the scanning direction (e.g., a line), the complexity can be reduced to O(d · N · S^1.5) after algorithm optimization. In addition, the final location areas for every node are much smaller than the original field S; therefore, in practice, DBE MSP can be computed much faster than O(d · N · S^1.5).

6 Wave Propagation Example
So far, the description of MSP has been solely in the context of straight-line scans. However, we note that MSP is conceptually independent of how the event is propagated, as long as node sequences can be obtained. Clearly, we can also support wave-propagation-based events (e.g., ultrasound propagation, air blast propagation), which are polar-coordinate equivalents of the line scans in the Cartesian coordinate system. This section illustrates the effects of MSP's implementation in the wave propagation-based situation. For easy modelling, we have made the following assumptions:
• The wave propagates uniformly in all directions, so the propagation has a circular frontier surface. Since MSP does not rely on an accurate space-time relationship, a certain distortion in wave propagation is tolerable. If any directional wave is used, the
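A direct evaluation of equation (3) can be sketched as follows, specialized (as the paper's optimization note suggests) to one dimension along the scanning direction, so that the validity test for the sequence (B, A, C) is simply i < x < m. The discrete-cell representation and function name are assumptions for illustration:

```python
from itertools import product

def marginal_prob_1d(cells_a, p_b, p_c):
    """Equation (3) in 1-D: P_A(x) = sum over (i, m) of
    P_B(i) * P_C(m) / N_{B,A,C}, where N_{B,A,C} counts the positions of A
    consistent with the detection order B before A before C."""
    p_a = {x: 0.0 for x in cells_a}
    for (i, pb), (m, pc) in product(p_b.items(), p_c.items()):
        valid = [x for x in cells_a if i < x < m]   # positions satisfying (B, A, C)
        if not valid:
            continue                                # this (B, C) placement is impossible
        share = pb * pc / len(valid)                # the 1 / N_{B,A,C} factor
        for x in valid:
            p_a[x] += share
    return p_a
```

With B fixed at 0 and C fixed at 4, the three candidate cells 1, 2, 3 for A each receive probability 1/3, and the distribution sums to 1 as expected.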
propagation frontier surface can be modified accordingly.
Figure 11. Example of Wave Propagation Situation
• Under the line-of-sight condition, we allow obstacles to reflect or deflect the wave. Reflection and deflection are not problems, because each node reacts only to the first detected event; reflected or deflected waves arrive later than the line-of-sight waves. The only thing the system needs to maintain is an appropriate time interval between two successive localization events.
• We assume that background noise exists, and therefore we run a band-pass filter to listen for a particular wave frequency. This reduces the chance of false detection.
The parameter that affects localization event generation here is the source location of the event. The different distances between each node and the event source determine the rank of each node in the node sequence. Using the node sequences, the MSP algorithm divides the whole area into many non-rectangular areas, as shown in Figure 11. In this figure, the stars represent two previous event sources. The previous two propagations split the whole map into many areas bounded by the dashed circles, each of which passes through one of the anchors. Each node is located in one of the small areas.
Since sequence-based MSP, iterative MSP and DBE MSP make no assumptions about the type of localization events or the shape of the area, all three optimization algorithms can be applied to the wave propagation scenario. However, adaptive MSP needs more explanation. Figure 11 illustrates an example of nodes' voting for the next event source location. Unlike the straight-line scan, the critical parameter now is the location of the event source, because the distance between each node and the event source determines the rank of the node in the sequence. In Figure 11, if the next event breaks out
along/near the solid thick gray line, which perpendicularly bisects the solid dark line between anchor C and the center of gravity of node 9's area (the gray area), the wave would reach anchor C and the center of gravity of node 9's area at roughly the same time, which would divide node 9's area relatively equally. Therefore, node 9 prefers to vote for positions around the thick gray line.

7 Practical Deployment Issues

For the sake of presentation, until now we have described MSP in an ideal case where a complete node sequence can be obtained with accurate time synchronization. In this section we describe how to make MSP work well under more realistic conditions.

7.1 Incomplete Node Sequence

For diverse reasons, such as sensor malfunction or natural obstacles, nodes in the network may fail to detect localization events. In such cases, the node sequence will not be complete. This problem has two versions:
• Anchor nodes are missing in the node sequence. If some anchor nodes fail to respond to the localization events, the system has fewer anchors. In this case, the solution is to generate more events to compensate for the loss of anchors, so as to achieve the desired accuracy requirements.
• Target nodes are missing in the node sequence. There are two consequences when target nodes are missing. First, if these nodes are still useful to sensing applications, they need to use other backup localization approaches (e.g., Centroid) to localize themselves with help from neighbors who have already learned their own locations from MSP. Second, since in advanced MSP each node in the sequence may contribute to overall system accuracy, dropping target nodes from sequences can also reduce localization accuracy. Thus, proper compensation procedures, such as adding more localization events, need to be launched.

7.2 Localization without Time Synchronization

In a sensor network without time synchronization
support, nodes cannot be ordered into a sequence using timestamps. For such cases, we propose a listen-detect-assemble-report protocol, which functions independently of time synchronization. The protocol requires every node to listen to the channel for node sequences transmitted by its neighbors. When a node detects the localization event, it assembles itself into the newest node sequence it has heard and reports the updated sequence to other nodes. Figure 12(a) illustrates an example of the listen-detect-assemble-report protocol. For simplicity, in this figure we do not differentiate target nodes from anchor nodes. A solid line between two nodes stands for a communication link. Suppose a straight line scans from left to right. Node 1 detects the event first and broadcasts the sequence (1) into the network. Node 2 and node 3 receive this sequence. When node 2 detects the event, it adds itself to the sequence and broadcasts (1, 2). The sequence propagates in the same direction as the scan, as shown in Figure 12(a). Finally, node 6 obtains a complete sequence (1,2,3,5,7,4,6). In the case of ultrasound propagation, because the event propagates much more slowly than radio, the listen-detect-assemble-report protocol works well when the node density is not very high. For instance, if the distance between two nodes along one direction is 10 meters, sound at 340 m/s needs 29.4 ms to propagate from one node to the other, while at a typical WSN data rate of 250 Kbps (e.g., CC2420 [1]) it takes only about 2-3 ms to transmit an assembled packet over one hop. One problem that may occur with the listen-detect-assemble-report protocol is multiple partial sequences, as shown in Figure 12(b). Two separate paths in the network may result in two sequences that cannot be further combined. In this case, since the two sequences can only be
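The assemble step of the protocol can be sketched as below. This is a minimal sketch under the simplifying assumption that every broadcast is heard by all later detectors (no radio topology is modeled); real nodes would only extend sequences heard from their neighbors.

```python
def assemble(heard, my_id):
    """On detecting the event, extend the newest (here: longest) sequence heard so far."""
    newest = max(heard, key=len, default=())
    return newest + (my_id,)

# Replay the left-to-right scan of Figure 12(a): nodes detect in the order 1,2,3,5,7,4,6
heard = []  # sequences "broadcast" so far
for node in [1, 2, 3, 5, 7, 4, 6]:
    seq = assemble(heard, node)  # listen, detect, assemble...
    heard.append(seq)            # ...and report (broadcast the updated sequence)
print(heard[-1])  # (1, 2, 3, 5, 7, 4, 6)
```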
processed as separate sequences, some order information is lost; therefore the accuracy of the system would decrease.

Figure 12. Node Sequence without Time Synchronization

The other problem is the sequence flip problem. As shown in Figure 12(c), because node 2 and node 3 are too close to each other along the scan direction, they detect the scan almost simultaneously. Due to uncertainty such as media access delay, the two messages could be transmitted out of order. For example, if node 3 sends out its report first, then the order of node 2 and node 3 is flipped in the final node sequence. The sequence flip problem can appear even in an accurately synchronized system, due to random jitter in node detection, if an event arrives at multiple nodes almost simultaneously. A method addressing sequence flip is presented in the next section.

7.3 Sequence Flip and Protection Band

Sequence flip problems can be solved with and without time synchronization. We first consider a scenario with time synchronization. Existing solutions for time synchronization [12, 6] can easily achieve sub-millisecond accuracy; for example, FTSP [12] achieves 16.9 µs (microsecond) average error for a two-node single-hop case. Therefore, we can comfortably assume that the network is synchronized with a maximum error of 1000 µs. However, when multiple nodes are located very near to each other along the event propagation direction, even with time synchronization error below 1 ms, sequence flip may still occur. For example, in the sound wave propagation case, if two nodes are less than 0.34 meters apart, the difference between
their detection timestamps would be smaller than 1 millisecond. We find that sequence flip can not only damage system accuracy, but also cause a fatal error in the MSP algorithm. Figure 13 illustrates both detrimental results. In the left side of Figure 13(a), suppose node 1 and node 2 are so close to each other that it takes less than 0.5 ms for the localization event to propagate from node 1 to node 2. Now, unfortunately, the node sequence is mistaken to be (2,1), so node 1 is expected to be located to the right of node 2, such as at the position of the dashed node 1. According to the elimination rule in sequence-based MSP, the left part of node 1's area is cut off, as shown in the right part of Figure 13(a). This is a potentially fatal error, because node 1 is actually located in the dashed area that has been eliminated by mistake. During subsequent eliminations introduced by other events, node 1's area might be cut off completely; node 1 could consequently be erased from the map! Even in cases where node 1 survives, its area no longer covers its real location.

Figure 13. Sequence Flip and Protection Band

Another problem is not fatal but lowers localization accuracy. If we get the right node sequence (1,2), node 1 gains a new upper boundary which can narrow its area, as in Figure 3. Due to the sequence flip, node 1 loses this new upper boundary. In order to address the sequence flip problem, and especially to prevent nodes from being erased from the map, we propose a protection band compensation approach. The basic idea of the protection band is to extend the boundary of the location area a little so as to make sure that the node
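The band width that makes this work is, in essence, the distance the localization event can travel within the maximum timing error (for sound with a 1 ms synchronization error, about 0.34 m, as used in the text). A one-line sketch:

```python
def protection_band(propagation_speed_mps, max_sync_error_s):
    """Band width B: the distance the event travels within the maximum timing error."""
    return propagation_speed_mps * max_sync_error_s

# Sound at 340 m/s with a 1 ms maximum synchronization error gives the
# roughly 0.34 m band used in the experiments.
print(protection_band(340.0, 0.001))  # about 0.34
```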
will never be erased from the map. This solution is based on the fact that nodes have a high probability of flipping in the sequence only if they are near each other along the event propagation direction; if two nodes are more than some distance, say B, apart, they rarely flip unless the nodes are faulty. The width of the protection band B is largely determined by the maximum error in system time synchronization and the propagation speed of the localization event. Figure 13(b) presents the application of the protection band. Instead of eliminating the dashed part in Figure 13(a) for node 1, the new lower boundary of node 1 is set by shifting the original lower boundary of node 2 to the left by distance B. In this case, the location area still covers node 1 and protects it from being erased. In a practical implementation using ultrasound events, if the maximum error of system time synchronization is 1 ms, two nodes might flip with high probability if the timestamp difference between them is smaller than or equal to 1 ms. Accordingly, we set the protection band B to 0.34 m (the distance sound propagates within 1 millisecond). By adding the protection band, we reduce the chance of fatal errors, at the cost of some localization accuracy. Empirical results obtained from our physical test-bed verify this conclusion. When using the listen-detect-assemble-report protocol, the only change needed is to select the protection band according to the maximum delay uncertainty introduced by MAC operation and the event propagation speed. To bound MAC delay at the node side, a node can drop its report message if it experiences excessive MAC delay. This converts the sequence flip problem into the incomplete sequence problem, which can be more easily addressed by the method proposed in Section 7.1.

8 Simulation Evaluation

Our evaluation of MSP was conducted on three platforms: (i) an indoor system with 46
MICAz motes using straight-line scan, (ii) an outdoor system with 20 MICAz motes using sound wave propagation, and (iii) extensive simulation under various physical settings. In order to understand the behavior of MSP under numerous settings, we start our evaluation with simulations. We then implemented basic MSP and all the advanced MSP methods for the case where time synchronization is available in the network. The simulation and implementation details are omitted from this paper due to space constraints; related documents [25] are provided online at http://www.cs.umn.edu/~zhong/MSP. Full implementation and evaluation of the system without time synchronization remain future work. In simulation, we assume all node sequences are perfect, so as to reveal the performance MSP can achieve in the absence of incomplete node sequences or sequence flips. In our simulations, all anchor nodes and target nodes are assumed to be deployed uniformly. The mean and maximum errors are averaged over 50 runs to obtain high confidence. For legibility, we do not plot confidence intervals. All simulations are based on the straight-line scan example. We implement three scan strategies:
• Random Scan: the slope of the scan line is chosen at random each time.
• Regular Scan: the slope is predetermined to rotate uniformly from 0 to 180 degrees. For example, if the system scans 6 times, the scan angles are 0, 30, 60, 90, 120, and 150 degrees.
• Adaptive Scan: the slope of each scan is determined based on the localization results of previous scans.
We start with basic MSP and then demonstrate the performance improvements one step at a time, by adding (i) sequence-based MSP, (ii) iterative MSP, (iii) DBE MSP and (iv) adaptive MSP.

8.1 Performance of Basic MSP

The evaluation starts with basic MSP, where we compare the performance of random scan and regular scan under
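The regular-scan schedule described above can be written directly; a minimal sketch:

```python
def regular_scan_angles(num_scans):
    """Slopes rotate uniformly over [0, 180) degrees, one slope per scan."""
    step = 180.0 / num_scans
    return [i * step for i in range(num_scans)]

print(regular_scan_angles(6))  # [0.0, 30.0, 60.0, 90.0, 120.0, 150.0]
```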
different configurations. We intend to illustrate the impact of the number of anchors M, the number of scans d, and the target node density (number of target nodes N in a fixed-size region) on the localization error. Table 1 shows the default simulation parameters. The error of each node is defined as the distance between its estimated location and its real position. We note that by default we use only three anchors, which is considerably fewer than existing range-free solutions [8, 4].

Table 1. Default Configuration Parameters
Parameter           Description
Field Area          200 x 200 (Grid Unit)
Scan Type           Regular (Default) / Random Scan
Anchor Number       3 (Default)
Scan Times          6 (Default)
Target Node Number  100 (Default)
Statistics          Error Mean / Max
Random Seeds        50 runs

Impact of the Number of Scans: In this experiment, we compare regular scan with random scan under different numbers of scans, from 3 to 30 in steps of 3. The number of anchors is 3 by default.

Figure 14. Evaluation of Basic MSP under Random and Regular Scans: (a) Error vs. Number of Scans; (b) Error vs. Anchor Number; (c) Error vs. Number of Target Nodes

Figure 15. Improvements of Sequence-Based MSP over Basic MSP: (a) Error vs. Number of Scans; (b) Error vs. Anchor Number; (c) Error vs. Number of Target Nodes

Figure 14(a) indicates the following: (i) as the number of scans increases, the localization error decreases significantly; for example, localization errors drop by more than 60% from 3 scans to 30 scans; (ii) statistically, regular scan achieves better performance than random scan under an identical number of scans. However, the performance gap narrows as the number of scans increases. This is expected, since a large set of randomly chosen slopes converges to a uniform distribution. Figure 14(a) also demonstrates that MSP requires only a small number of anchors to perform very well, compared to existing range-free solutions [8, 4].

Impact of the Number of Anchors: In this experiment, we compare regular scan with random scan under different numbers of anchors, from 3 to 30 in steps of 3. The results in Figure 14(b) indicate that (i) as the number of anchor nodes increases, the localization error decreases, and (ii) statistically, regular scan obtains better results than random scan with an identical number of anchors. Combining Figures 14(a) and 14(b), we conclude that MSP allows a flexible tradeoff between physical cost (anchor nodes) and soft cost (localization events).

Impact of the Target Node Density: In this experiment, we confirm that the density of target nodes has no impact on accuracy, which motivated the design of sequence-based MSP. We compare regular scan with
random scan under different numbers of target nodes, from 10 to 190 in steps of 20. The results in Figure 14(c) show that mean localization errors remain constant across different node densities. However, as the number of target nodes increases, the average maximum error increases.

Summary: From the above experiments, we conclude that in basic MSP, regular scan is better than random scan across different numbers of anchors and scan events. This is because regular scans eliminate the map uniformly from different directions, while random scans can yield sequences with redundant, overlapping information when two scans happen to choose similar slopes.

8.2 Improvements of Sequence-Based MSP

This section evaluates the benefit of exploiting the order information among target nodes by comparing sequence-based MSP with basic MSP. In this and the following sections, regular scan is used for straight-line scan event generation. The purpose of using regular scan is to keep the scan events and node sequences identical for both sequence-based MSP and basic MSP, so that the only difference between them is the sequence processing procedure.

Impact of the Number of Scans: In this experiment, we compare sequence-based MSP with basic MSP under different numbers of scans, from 3 to 30 in steps of 3. Figure 15(a) indicates a significant performance improvement of sequence-based MSP over basic MSP across all scan settings, especially when the number of scans is large. For example, with 30 scans, errors in sequence-based MSP are about 20% of those of basic MSP. We conclude that sequence-based MSP performs extremely well when there are many scan events.

Impact of the Number of Anchors: In this experiment, we use different numbers of anchors, from 3 to 30 in steps of 3. As seen in Figure 15(b), the mean and maximum errors of sequence-based MSP are much smaller than those of basic MSP. Especially when there is a limited number of anchors in the system,
e.g., 3 anchors, the error is almost halved by using sequence-based MSP. This phenomenon has an interesting explanation: the cutting lines created by anchor nodes are exploited by both basic MSP and sequence-based MSP, so as the number of anchor nodes increases, anchors tend to dominate the contribution and the performance gap lessens.

Impact of the Target Node Density: Figure 15(c) demonstrates the benefit of exploiting order information among target nodes. Since sequence-based MSP makes use of the information among target nodes, having more target nodes contributes to overall system accuracy. As the number of target nodes increases, the mean and maximum errors of sequence-based MSP decrease. The mean error of basic MSP, by contrast, is not affected by the number of target nodes, as shown in Figure 15(c).

Figure 16. Improvements of Iterative MSP

Figure 17. Improvements of DBE MSP

Figure 18. The Improvements of Adaptive MSP: (a) Adaptive MSP for 500-by-80 field; (b) Impact of the Number of Candidate Events

Summary: From
the above experiments, we conclude that exploiting order information among target nodes can improve accuracy significantly, especially when the number of events is large but anchors are few.

8.3 Iterative MSP over Sequence-Based MSP

In this experiment, the same node sequences were processed iteratively multiple times. In Figure 16, the two single marks are the results of basic MSP, since basic MSP does not iterate. The two curves present the performance of iterative MSP under different numbers of iterations c. We note that with a single iteration, this method degrades to sequence-based MSP; Figure 16 therefore compares all three methods to one another. Figure 16 shows that the second iteration reduces the mean and maximum errors dramatically. After that, the performance gain diminishes, especially when c > 5. This is because the second iteration allows earlier scans to exploit the new boundaries created by later scans in the first iteration; such exploitation decays quickly over subsequent iterations.

8.4 DBE MSP over Iterative MSP

Figure 17, in which we augment iterative MSP with distribution-based estimation (DBE MSP), shows that DBE MSP brings statistically better performance. Figure 17 presents the cumulative distributions of localization errors. In general, the two curves of DBE MSP lie slightly to the left of those of non-DBE MSP, which indicates that DBE MSP has a smaller statistical mean error and averaged maximum error than non-DBE MSP. We note that because DBE is applied on top of the best solution so far, the performance improvement is not large; when we apply DBE to basic MSP, the improvement is much more significant. We omit these results due to space constraints.

8.5 Improvements of Adaptive MSP

This section illustrates the performance of adaptive MSP over non-adaptive MSP. We note that feedback-based adaptation can be applied to all MSP methods, since it affects only the
scanning angles but not the sequence processing. In this experiment, we evaluated how adaptive MSP can improve the best solution so far. The default angle granularity (step) for adaptive searching is 5 degrees.

Impact of Area Shape: First, if system settings are regular, the adaptive method contributes little. For a square (regular) area, the performance of adaptive MSP and regular scan is very close. However, if the shape of the area is irregular, adaptive MSP helps to choose appropriate localization events to compensate, and can therefore achieve better mean and maximum errors, as shown in Figure 18(a). For example, adaptive MSP improves localization accuracy by 30% when the number of target nodes is 10.

Impact of the Target Node Density: Figure 18(a) shows that adaptive MSP brings more benefit when node density is low than when it is high. This makes statistical sense: by the law of large numbers, node placement approaches a truly uniform distribution as the number of nodes increases, and adaptive MSP has an edge when the layout is not uniform.

Figure 19. The Mirage Test-bed (Line Scan)

Figure 20. The 20-node Outdoor Experiments (Wave)

Impact of Candidate Angle Density: Figure 18(b) shows that the smaller the candidate scan angle step, the better the statistical performance in terms of mean error. The rationale is clear: a richer set of candidate scan angles gives adaptive MSP more opportunity to choose one approaching the optimal angle.

8.6 Simulation Summary

Starting from basic MSP, we have demonstrated step by step how four optimizations can be applied on top of one another to improve localization performance. In other words, these optimizations are compatible with each other and can jointly improve overall performance. We note that our simulations assumed that complete node sequences can be obtained without sequence flips. In the next
section, we present two real-system implementations that reveal and address these practical issues.

9 System Evaluation

In this section, we present system implementations of MSP on two physical test-beds. The first, called Mirage, is a large indoor test-bed composed of six 4-foot by 8-foot boards, illustrated in Figure 19. Each board can be used as an individual sub-system that is powered, controlled and metered separately. Three Hitachi CP-X1250 projectors, connected through a Matrox TripleHead2Go graphics expansion box, are used to create an ultra-wide integrated display across the six boards. Figure 19 shows a long tilted line generated by the projectors. We implemented all five versions of MSP on the Mirage test-bed, running 46 MICAz motes. Unless mentioned otherwise, the default setting is 3 anchors and 6 scans at a scanning line speed of 8.6 feet/s. In all of our graphs, each data point represents the average of 50 trials. In the outdoor system, a Dell A525 speaker is used to generate a 4.7 kHz sound, as shown in Figure 20. We placed 20 MICAz motes in the backyard of a house. Since the location is not completely open, sound waves are reflected, scattered and absorbed by various objects in the vicinity, causing multi-path effects. In the system evaluation, simple time synchronization mechanisms are applied on each node.

9.1 Indoor System Evaluation

During the indoor experiments, we encountered several real-world problems that are not revealed by simulation. First, the obtained sequences were partial due to misdetection and message losses. Second, elements in the sequences could flip due to detection delay, uncertainty in media access, or error in time synchronization. We show that these issues can be addressed using the protection band method described in Section 7.3.

9.1.1 On Scanning Speed and Protection Band

In this experiment, we studied the impact of the scanning speed and the width of the protection band on the
performance of the system. In general, with increasing scanning speed, nodes have less time to respond to the event and the time gap between two adjacent nodes shrinks, leading to an increasing number of partial sequences and sequence flips. Figure 21 shows the node flip situations for six scans with distinct angles under different scan speeds. The x-axis shows the distance between the flipped nodes in the correct node sequence; the y-axis shows the total number of flips in the six scans. The figure shows that a faster scan brings not only more flips, but also longer-distance flips, which require a wider protection band to prevent fatal errors.

Figure 22(a) shows the effectiveness of the protection band in reducing the number of unlocalized nodes. With a moderate scan speed (4.3 feet/s), flipping is rare, and we achieve 0.45 feet mean accuracy (Figure 22(b)) with a 1.6 feet maximum error (Figure 22(c)). With increasing speeds, the protection band needs to be set wider to deal with flipping. An interesting phenomenon can be observed in Figure 22: on one hand, the protection band sharply reduces the number of unlocalized nodes; on the other hand, it enlarges the area in which a target may reside, introducing more uncertainty. Thus both the mean and maximum error curves are concave at a scan speed of 8.6 feet/s.

9.1.2 On MSP Methods and Protection Band

In this experiment, we show the improvements resulting from three different methods. Figure 23(a) shows that a protection band of 0.35 feet is sufficient for a scan speed of 8.57 feet/s. Figures 23(b) and 23(c) show clearly that iterative MSP (with adaptation) achieves the best performance. For example, Figure 23(b) shows that with the protection band at 0.05 feet, iterative MSP achieves 0.7 feet accuracy, which is 42% more accurate than the basic design. Similarly, Figures 23(b) and 23(c)
show the double-edged effect of the protection band on localization accuracy.

Figure 21. Number of Flips for Different Scan Speeds

Figure 22. Impact of Protection Band and Scanning Speed: (a) Number of Unlocalized Nodes; (b) Mean Localization Error; (c) Max Localization Error

Figure 23. Impact of Protection Band under Different MSP Methods: (a) Number of Unlocalized Nodes; (b) Mean Localization Error; (c) Max Localization Error

Figure 24. Impact of the Number of Anchors and Scans: (a) Number of Unlocalized Nodes; (b) Mean Localization Error; (c) Max Localization Error

9.1.3 On Number of Anchors and Scans

In this experiment, we show the tradeoff between hardware cost (anchors) and soft cost (events). Figure 24(a) shows that with more cutting lines created by anchors, the chance of unlocalized nodes increases slightly. We note that with a 0.35 feet protection band, the percentage of unlocalized nodes is very small; e.g., in the worst case with 11 anchors, only 2 out of 46 nodes are not localized due to flipping. Figures 24(b) and 24(c) show the tradeoff between the number of anchors and the number of scans. As the number of anchors increases, the error drops significantly; with 11 anchors we can achieve a localization accuracy of 0.25 to 0.35 feet, nearly a 60% improvement. Similarly, the error drops significantly with an increasing number of scans: we observe about a 30% improvement across all anchor settings when the number of scans increases from 4 to 8. For example, with only 3 anchors, we can achieve 0.6-foot accuracy with 8
scans.

9.2 Outdoor System Evaluation
The outdoor system evaluation contains two parts: (i) effective detection distance evaluation, which shows that the node sequence can be readily obtained, and (ii) sound-propagation-based localization, which shows the results of wave-propagation-based localization.

9.2.1 Effective Detection Distance Evaluation
We first evaluate the sequence flip phenomenon in wave propagation. As shown in Figure 25, 20 motes were placed in five groups in front of the speaker, with four nodes in each group at roughly the same distance to the speaker. The gap between adjacent groups was set to 2, 3, 4 and 5 feet, respectively, in four experiments. Figure 26 shows the results. The x-axis in each subgraph indicates the group index; each group contains four nodes (4 bars). The y-axis shows the detection rank (order) of each node in the node sequence. As the distance between groups increases, the number of flips in the resulting node sequence decreases. For example, in the 2-foot subgraph there are quite a few flips between nodes in adjacent and even non-adjacent groups, while in the 5-foot subgraph flips between different groups disappeared in the test.

Figure 25. Wave Detection
Figure 26. Ranks vs. Distances
Figure 27. Localization Error (Sound)

9.2.2 Sound Propagation Based Localization
As shown in Figure 20, 20 motes are placed as a grid of 5 rows, with 5 feet between rows, and 4 columns, with 4 feet between columns. Six 4 KHz acoustic wave propagation events are generated around the mote grid by a speaker. Figure 27 shows the localization results using iterative MSP (3 iterations) with a protection band of 3 feet. The average localization error is 3 feet and the maximum error is 5 feet, with one unlocalized node. We found that sequence flip in wave propagation is more severe than in the indoor, line-based test. This is expected, due to the high propagation speed of sound. Currently we use MICAz motes, which are equipped with a low-quality microphone. We believe that with a better speaker and more events, the system could yield better accuracy. Despite the hardware constraints, the MSP algorithm still successfully localized most of the nodes with good accuracy.

10 Conclusions
In this paper, we present the first work that exploits the concept of node sequence processing to localize sensor nodes. We demonstrated that localization accuracy can be significantly improved by making full use of the information embedded in multiple easy-to-obtain one-dimensional node sequences. We proposed four novel optimization methods, exploiting order and marginal distribution among non-anchor nodes as well as feedback information from early localization results. Importantly, these optimization methods can be
used together, and improve accuracy additively. The practical issues of partial node sequences and sequence flip were identified and addressed in two physical system test-beds. We also evaluated performance at scale through analysis as well as extensive simulations. Results demonstrate that, requiring neither costly hardware on sensor nodes nor precise event distribution, MSP can achieve sub-foot accuracy with very few anchor nodes, provided sufficient events.

11 References
[1] CC2420 Data Sheet. Available at http://www.chipcon.com/.
[2] P. Bahl and V. N. Padmanabhan. RADAR: An In-Building RF-Based User Location and Tracking System. In IEEE INFOCOM '00.
[3] M. Broxton, J. Lifton, and J. Paradiso. Localizing a Sensor Network via Collaborative Processing of Global Stimuli. In EWSN '05.
[4] N. Bulusu, J. Heidemann, and D. Estrin. GPS-Less Low Cost Outdoor Localization for Very Small Devices. IEEE Personal Communications Magazine, 7(4), 2000.
[5] D. Culler, D. Estrin, and M. Srivastava. Overview of Sensor Networks. IEEE Computer Magazine, 2004.
[6] J. Elson, L. Girod, and D. Estrin. Fine-Grained Network Time Synchronization Using Reference Broadcasts. In OSDI '02.
[7] D. K. Goldenberg, P. Bihler, M. Gao, J. Fang, B. D. Anderson, A. Morse, and Y. Yang. Localization in Sparse Networks Using Sweeps. In MobiCom '06.
[8] T. He, C. Huang, B. M. Blum, J. A. Stankovic, and T. Abdelzaher. Range-Free Localization Schemes in Large-Scale Sensor Networks. In MobiCom '03.
[9] B. Kusy, P. Dutta, P. Levis, M. Mar, A. Ledeczi, and D. Culler. Elapsed Time on Arrival: A Simple and Versatile Primitive for Canonical Time Synchronization Services. International Journal of Ad Hoc and Ubiquitous Computing, 2(1), 2006.
[10] L. Lazos and R. Poovendran. SeRLoc: Secure Range-Independent Localization for Wireless Sensor Networks. In WiSe '04.
[11] M. Maroti, B. Kusy, G. Balogh, P. Volgyesi, A. Nadas, K. Molnar, S. Dora, and A. Ledeczi. Radio Interferometric Geolocation. In SenSys '05.
[12] M. Maroti, B. Kusy, G. Simon, and A. Ledeczi. The Flooding Time Synchronization Protocol. In SenSys '04.
[13] D. Moore, J. Leonard, D. Rus, and S. Teller. Robust Distributed Network Localization with Noisy Range Measurements. In SenSys '04.
[14] R. Nagpal and D. Coore. An Algorithm for Group Formation in an Amorphous Computer. In PDCS '98.
[15] D. Niculescu and B. Nath. Ad Hoc Positioning System. In GlobeCom '01.
[16] D. Niculescu and B. Nath. Ad Hoc Positioning System (APS) Using AOA. In IEEE INFOCOM '03.
[17] N. B. Priyantha, A. Chakraborty, and H. Balakrishnan. The Cricket Location-Support System. In MobiCom '00.
[18] K. Römer. The Lighthouse Location System for Smart Dust. In MobiSys '03.
[19] A. Savvides, C. C. Han, and M. B. Srivastava. Dynamic Fine-Grained Localization in Ad-Hoc Networks of Sensors. In MobiCom '01.
[20] R. Stoleru, T. He, J. A. Stankovic, and D. Luebke. A High-Accuracy, Low-Cost Localization System for Wireless Sensor Networks. In SenSys '05.
[21] R. Stoleru, P. Vicaire, T. He, and J. A. Stankovic. StarDust: A Flexible Architecture for Passive Localization in Wireless Sensor Networks. In SenSys '06.
[22] E. W. Weisstein. Plane Division by Lines. mathworld.wolfram.com.
[23] B. H. Wellenhoff, H. Lichtenegger, and J. Collins. Global Positioning System: Theory and Practice, Fourth Edition. Springer Verlag, 1997.
[24] K. Whitehouse. The Design of Calamari: An Ad-Hoc Localization System for Sensor Networks. University of California at Berkeley, 2002.
[25] Z. Zhong. MSP Evaluation and Implementation Report. Available at http://www.cs.umn.edu/~zhong/MSP.
[26] G. Zhou, T. He, and J. A.
Stankovic.\nImpact of Radio Irregularity on Wireless Sensor Networks.\nIn MobiSys ``04.\n28","lvl-3":"MSP: Multi-Sequence Positioning of Wireless Sensor Nodes *\nAbstract\nWireless Sensor Networks have been proposed for use in many location-dependent applications.\nMost of these need to identify the locations of wireless sensor nodes, a challenging task because of the severe constraints on cost, energy and effective range of sensor devices.\nTo overcome limitations in existing solutions, we present a Multi-Sequence Positioning (MSP) method for large-scale stationary sensor node localization in outdoor environments.\nThe novel idea behind MSP is to reconstruct and estimate two-dimensional location information for each sensor node by processing multiple one-dimensional node sequences, easily obtained through loosely guided event distribution.\nStarting from a basic MSP design, we propose four optimizations, which work together to increase the localization accuracy.\nWe address several interesting issues, such as incomplete (partial) node sequences and sequence flip, found in the Mirage test-bed we built.\nWe have evaluated the MSP system through theoretical analysis, extensive simulation as well as two physical systems (an indoor version with 46 MICAz motes and an outdoor version with 20 MICAz motes).\nThis evaluation demonstrates that MSP can achieve an accuracy within one foot, requiring neither additional costly hardware on sensor nodes nor precise event distribution.\nIt also provides a nice tradeoff between physical cost (anchors) and soft cost (events), while maintaining localization accuracy.\n1 Introduction\nAlthough Wireless Sensor Networks (WSN) have shown promising prospects in various applications [5], researchers still face several challenges for massive deployment of such networks.\nOne of these is to identify the location of individual sensor nodes in outdoor environments.\nBecause of unpredictable flow dynamics in airborne scenarios, it is not 
currently feasible to localize sensor nodes during massive UVA-based deployment.\nOn the other hand, geometric information is indispensable in these networks, since users need to know where events of interest occur (e.g., the location of intruders or of a bomb explosion).\nPrevious research on node localization falls into two categories: range-based approaches and range-free approaches.\nRange-based approaches [13, 17, 19, 24] compute per-node location information iteratively or recursively based on measured distances among target nodes and a few anchors which precisely know their locations.\nThese approaches generally require costly hardware (e.g., GPS) and have limited effective range due to energy constraints (e.g., ultrasound-based TDOA [3, 17]).\nAlthough range-based solutions can be suitably used in small-scale indoor environments, they are considered less cost-effective for large-scale deployments.\nOn the other hand, range-free approaches [4, 8, 10, 13, 14, 15] do not require accurate distance measurements, but localize the node based on network connectivity (proximity) information.\nUnfortunately, since wireless connectivity is highly influenced by the environment and hardware calibration, existing solutions fail to deliver encouraging empirical results, or require substantial survey [2] and calibration [24] on a case-by-case basis.\nRealizing the impracticality of existing solutions for the large-scale outdoor environment, researchers have recently proposed solutions (e.g., Spotlight [20] and Lighthouse [18]) for sensor node localization using the spatiotemporal correlation of controlled events (i.e., inferring nodes' locations based on the detection time of controlled events).\nThese solutions demonstrate that long range and high accuracy localization can be achieved simultaneously with little additional cost at sensor nodes.\nThese benefits, however, come along with an implicit assumption that the controlled events can be precisely distributed to a 
specified location at a specified time.\nWe argue that precise event distribution is difficult to achieve, especially at large scale when terrain is uneven, the event distribution device is not well calibrated and its position is difficult to maintain (e.g., the helicopter-mounted scenario in [20]).\nTo address these limitations in current approaches, in this paper we present a multi-sequence positioning (MSP) method\nfor large-scale stationary sensor node localization, in deployments where an event source has line-of-sight to all sensors.\nThe novel idea behind MSP is to estimate each sensor node's two-dimensional location by processing multiple easy-to-get one-dimensional node sequences (e.g., event detection order) obtained through loosely-guided event distribution.\nThis design offers several benefits.\nFirst, compared to a range-based approach, MSP does not require additional costly hardware.\nIt works using sensors typically used by sensor network applications, such as light and acoustic sensors, both of which we specifically consider in this work.\nSecond, compared to a range-free approach, MSP needs only a small number of anchors (theoretically, as few as two), so high accuracy can be achieved economically by introducing more events instead of more anchors.\nAnd third, compared to Spotlight, MSP does not require precise and sophisticated event distribution, an advantage that significantly simplifies the system design and reduces calibration cost.\nThis paper offers the following additional intellectual contributions: \u2022 We are the first to localize sensor nodes using the concept of node sequence, an ordered list of sensor nodes, sorted by the detection time of a disseminated event.\nWe demonstrate that making full use of the information embedded in one-dimensional node sequences can significantly improve localization accuracy.\nInterestingly, we discover that repeated reprocessing of one-dimensional node sequences can further increase localization 
accuracy.\n\u2022 We propose a distribution-based location estimation strategy that obtains the final location of sensor nodes using the marginal probability of joint distribution among adjacent nodes within the sequence.\nThis new algorithm outperforms the widely adopted Centroid estimation [4, 8].\n\u2022 To the best of our knowledge, this is the first work to improve the localization accuracy of nodes by adaptive events.\nThe generation of later events is guided by localization results from previous events.\n\u2022 We evaluate line-based MSP on our new Mirage test-bed, and wave-based MSP in outdoor environments.\nThrough system implementation, we discover and address several interesting issues such as partial sequence and sequence flips.\nTo reveal MSP performance at scale, we provide analytic results as well as a complete simulation study.\nAll the simulation and implementation code is available online at http:\/\/www.cs.umn.edu\/\u223czhong\/MSP.\nThe rest of the paper is organized as follows.\nSection 2 briefly surveys the related work.\nSection 3 presents an overview of the MSP localization system.\nIn sections 4 and 5, basic MSP and four advanced processing methods are introduced.\nSection 6 describes how MSP can be applied in a wave propagation scenario.\nSection 7 discusses several implementation issues.\nSection 8 presents simulation results, and Section 9 reports an evaluation of MSP on the Mirage test-bed and an outdoor test-bed.\nSection 10 concludes the paper.\n2 Related Work\nMany methods have been proposed to localize wireless sensor devices in the open air.\nMost of these can be classified into two categories: range-based and range-free localization.\nRange-based localization systems, such as GPS [23], Cricket [17], AHLoS [19], AOA [16], Robust Quadrilaterals [13] and Sweeps [7], are based on fine-grained point-topoint distance estimation or angle estimation to identify pernode location.\nConstraints on the cost, energy and hardware footprint of 
each sensor node make these range-based methods undesirable for massive outdoor deployment.\nIn addition, ranging signals generated by sensor nodes have a very limited effective range because of energy and form factor concerns.\nFor example, ultrasound signals usually effectively propagate 20-30 feet using an on-board transmitter [17].\nConsequently, these range-based solutions require an undesirably high deployment density.\nAlthough the received signal strength indicator (RSSI) related [2, 24] methods were once considered an ideal low-cost solution, the irregularity of radio propagation [26] seriously limits the accuracy of such systems.\nThe recently proposed RIPS localization system [11] superimposes two RF waves together, creating a low-frequency envelope that can be accurately measured.\nThis ranging technique performs very well as long as antennas are well oriented and environmental factors such as multi-path effects and background noise are sufficiently addressed.\nRange-free methods don't need to estimate or measure accurate distances or angles.\nInstead, anchors or controlled-event distributions are used for node localization.\nRange-free methods can be generally classified into two types: anchor-based and anchor-free solutions.\n\u2022 For anchor-based solutions such as Centroid [4], APIT [8], SeRLoc [10], Gradient [13], and APS [15], the main\nidea is that the location of each node is estimated based on the known locations of the anchor nodes.\nDifferent anchor combinations narrow the areas in which the target nodes can possibly be located.\nAnchor-based solutions normally require a high density of anchor nodes so as to achieve good accuracy.\nIn practice, it is desirable to have as few anchor nodes as possible so as to lower the system cost.\n\u2022 Anchor-free solutions require no anchor nodes.\nInstead, external event generators and data processing platforms are used.\nThe main idea is to correlate the event detection time at a sensor node with the 
known space-time relationship of controlled events at the generator so that detection time-stamps can be mapped into the locations of sensors.\nSpotlight [20] and Lighthouse [18] work in this fashion.\nIn Spotlight [20], the event distribution needs to be precise in both time and space.\nPrecise event distribution is difficult to achieve without careful calibration, especially when the event-generating devices require certain mechanical maneuvers (e.g., the telescope mount used in Spotlight).\nAll these increase system cost and reduce localization speed.\nStarDust [21], which works much faster, uses label relaxation algorithms to match light spots reflected by corner-cube retro-reflectors (CCR) with sensor nodes using various constraints.\nLabel relaxation algorithms converge only when a sufficient number of robust constraints are obtained.\nDue to the environmental impact on RF connectivity constraints, however, StarDust is less accurate than Spotlight.\nIn this paper, we propose a balanced solution that avoids the limitations of both anchor-based and anchor-free solutions.\nUnlike anchor-based solutions [4, 8], MSP allows a flexible tradeoff between the physical cost (anchor nodes) with the soft\nFigure 1.\nThe MSP System Overview\ncost (localization events).\nMSP uses only a small number of anchors (theoretically, as few as two).\nUnlike anchor-free solutions, MSP doesn't need to maintain rigid time-space relationships while distributing events, which makes system design simpler, more flexible and more robust to calibration errors.\n3 System Overview\n4 Basic MSP\nAlgorithm 1 Basic MSP Process\n5 Advanced MSP\n5.1 Sequence-Based MSP\n5.2 Iterative MSP\n5.3 Distribution-Based Estimation\n5.4 Adaptive MSP\n5.5 Overhead and MSP Complexity Analysis\n6 Wave Propagation Example\n7 Practical Deployment Issues\n7.1 Incomplete Node Sequence\n7.2 Localization without Time Synchronization\n7.3 Sequence Flip and Protection Band\n8 Simulation Evaluation\n8.1 Performance of 
Basic MSP\nImpact of the Number of Anchors: In this experiment, we\nImpact of the Target Node Density: In this experiment, we\n8.2 Improvements of Sequence-Based MSP\nImpact of the Number of Scans: In this experiment, we\nImpact of the Number of Anchors: In this experiment, we\nAdaptive MSP for 500by80\nMax Error of Regualr Scan Max Error of Adaptive Scan Mean Error of Regualr Scan Mean Error of Adaptive Scan\n8.3 Iterative MSP over Sequence-Based MSP\n8.4 DBE MSP over Iterative MSP\n8.5 Improvements of Adaptive MSP\n8.6 Simulation Summary\n9 System Evaluation\n9.1 Indoor System Evaluation\n9.1.1 On Scanning Speed and Protection Band\n9.1.2 On MSP Methods and Protection Band\nUnlocalized Node Number (Scan Line Speed 8.57 feet\/s) Unlocalized Node Number (Scan Line Speed 8.57 feet\/s) Mean Error (Scan Line Speed 8.57 feet\/s) Max Error (Scan Line Speed 8.57 feet\/s) Mean Error (Scan Line Speed 8.57 feet\/s) Max Error (Scan Line Speed 8.57 feet\/s)\n9.1.3 On Number of Anchors and Scans\n9.2 Outdoor System Evaluation\n9.2.1 Effective Detection Distance Evaluation\n9.2.2 Sound Propagation Based Localization\n10 Conclusions\nIn this paper, we present the first work that exploits the concept of node sequence processing to localize sensor nodes.\nWe demonstrated that we could significantly improve localization accuracy by making full use of the information embedded in multiple easy-to-get one-dimensional node sequences.\nWe proposed four novel optimization methods, exploiting order and marginal distribution among non-anchor nodes as well as the feedback information from early localization results.\nImportantly, these optimization methods can be used together, and improve accuracy additively.\nThe practical issues of partial node sequence and sequence flip were identified and addressed in two physical system test-beds.\nWe also evaluated performance at scale through analysis as well as extensive simulations.\nResults demonstrate that requiring neither costly hardware on 
sensor nodes nor precise event distribution, MSP can achieve a sub-foot accuracy with very few anchor nodes provided sufficient events.","lvl-4":"MSP: Multi-Sequence Positioning of Wireless Sensor Nodes *\nAbstract\nWireless Sensor Networks have been proposed for use in many location-dependent applications.\nMost of these need to identify the locations of wireless sensor nodes, a challenging task because of the severe constraints on cost, energy and effective range of sensor devices.\nTo overcome limitations in existing solutions, we present a Multi-Sequence Positioning (MSP) method for large-scale stationary sensor node localization in outdoor environments.\nThe novel idea behind MSP is to reconstruct and estimate two-dimensional location information for each sensor node by processing multiple one-dimensional node sequences, easily obtained through loosely guided event distribution.\nStarting from a basic MSP design, we propose four optimizations, which work together to increase the localization accuracy.\nWe address several interesting issues, such as incomplete (partial) node sequences and sequence flip, found in the Mirage test-bed we built.\nWe have evaluated the MSP system through theoretical analysis, extensive simulation as well as two physical systems (an indoor version with 46 MICAz motes and an outdoor version with 20 MICAz motes).\nThis evaluation demonstrates that MSP can achieve an accuracy within one foot, requiring neither additional costly hardware on sensor nodes nor precise event distribution.\nIt also provides a nice tradeoff between physical cost (anchors) and soft cost (events), while maintaining localization accuracy.\n1 Introduction\nOne of these is to identify the location of individual sensor nodes in outdoor environments.\nBecause of unpredictable flow dynamics in airborne scenarios, it is not currently feasible to localize sensor nodes during massive UVA-based deployment.\nPrevious research on node localization falls into two categories: 
range-based approaches and range-free approaches.\nRange-based approaches [13, 17, 19, 24] compute per-node location information iteratively or recursively based on measured distances among target nodes and a few anchors which precisely know their locations.\nThese approaches generally require costly hardware (e.g., GPS) and have limited effective range due to energy constraints (e.g., ultrasound-based TDOA [3, 17]).\nAlthough range-based solutions can be suitably used in small-scale indoor environments, they are considered less cost-effective for large-scale deployments.\nOn the other hand, range-free approaches [4, 8, 10, 13, 14, 15] do not require accurate distance measurements, but localize the node based on network connectivity (proximity) information.\nRealizing the impracticality of existing solutions for the large-scale outdoor environment, researchers have recently proposed solutions (e.g., Spotlight [20] and Lighthouse [18]) for sensor node localization using the spatiotemporal correlation of controlled events (i.e., inferring nodes' locations based on the detection time of controlled events).\nThese solutions demonstrate that long range and high accuracy localization can be achieved simultaneously with little additional cost at sensor nodes.\nTo address these limitations in current approaches, in this paper we present a multi-sequence positioning (MSP) method\nfor large-scale stationary sensor node localization, in deployments where an event source has line-of-sight to all sensors.\nThe novel idea behind MSP is to estimate each sensor node's two-dimensional location by processing multiple easy-to-get one-dimensional node sequences (e.g., event detection order) obtained through loosely-guided event distribution.\nThis design offers several benefits.\nFirst, compared to a range-based approach, MSP does not require additional costly hardware.\nIt works using sensors typically used by sensor network applications, such as light and acoustic sensors, both of 
which we specifically consider in this work.\nSecond, compared to a range-free approach, MSP needs only a small number of anchors (theoretically, as few as two), so high accuracy can be achieved economically by introducing more events instead of more anchors.\nAnd third, compared to Spotlight, MSP does not require precise and sophisticated event distribution, an advantage that significantly simplifies the system design and reduces calibration cost.\nThis paper offers the following additional intellectual contributions: \u2022 We are the first to localize sensor nodes using the concept of node sequence, an ordered list of sensor nodes, sorted by the detection time of a disseminated event.\nWe demonstrate that making full use of the information embedded in one-dimensional node sequences can significantly improve localization accuracy.\nInterestingly, we discover that repeated reprocessing of one-dimensional node sequences can further increase localization accuracy.\n\u2022 We propose a distribution-based location estimation strategy that obtains the final location of sensor nodes using the marginal probability of joint distribution among adjacent nodes within the sequence.\n\u2022 To the best of our knowledge, this is the first work to improve the localization accuracy of nodes by adaptive events.\nThe generation of later events is guided by localization results from previous events.\n\u2022 We evaluate line-based MSP on our new Mirage test-bed, and wave-based MSP in outdoor environments.\nThrough system implementation, we discover and address several interesting issues such as partial sequence and sequence flips.\nTo reveal MSP performance at scale, we provide analytic results as well as a complete simulation study.\nAll the simulation and implementation code is available online at http:\/\/www.cs.umn.edu\/\u223czhong\/MSP.\nThe rest of the paper is organized as follows.\nSection 2 briefly surveys the related work.\nSection 3 presents an overview of the MSP 
localization system.\nIn sections 4 and 5, basic MSP and four advanced processing methods are introduced.\nSection 6 describes how MSP can be applied in a wave propagation scenario.\nSection 7 discusses several implementation issues.\nSection 8 presents simulation results, and Section 9 reports an evaluation of MSP on the Mirage test-bed and an outdoor test-bed.\nSection 10 concludes the paper.\n2 Related Work\nMany methods have been proposed to localize wireless sensor devices in the open air.\nMost of these can be classified into two categories: range-based and range-free localization.\nConstraints on the cost, energy and hardware footprint of each sensor node make these range-based methods undesirable for massive outdoor deployment.\nIn addition, ranging signals generated by sensor nodes have a very limited effective range because of energy and form factor concerns.\nConsequently, these range-based solutions require an undesirably high deployment density.\nThe recently proposed RIPS localization system [11] superimposes two RF waves together, creating a low-frequency envelope that can be accurately measured.\nRange-free methods don't need to estimate or measure accurate distances or angles.\nInstead, anchors or controlled-event distributions are used for node localization.\nRange-free methods can be generally classified into two types: anchor-based and anchor-free solutions.\nidea is that the location of each node is estimated based on the known locations of the anchor nodes.\nDifferent anchor combinations narrow the areas in which the target nodes can possibly be located.\nAnchor-based solutions normally require a high density of anchor nodes so as to achieve good accuracy.\nIn practice, it is desirable to have as few anchor nodes as possible so as to lower the system cost.\n\u2022 Anchor-free solutions require no anchor nodes.\nInstead, external event generators and data processing platforms are used.\nThe main idea is to correlate the event detection time at 
a sensor node with the known space-time relationship of controlled events at the generator so that detection time-stamps can be mapped into the locations of sensors.\nSpotlight [20] and Lighthouse [18] work in this fashion.\nIn Spotlight [20], the event distribution needs to be precise in both time and space.\nPrecise event distribution is difficult to achieve without careful calibration, especially when the event-generating devices require certain mechanical maneuvers (e.g., the telescope mount used in Spotlight).\nAll these increase system cost and reduce localization speed.\nStarDust [21], which works much faster, uses label relaxation algorithms to match light spots reflected by corner-cube retro-reflectors (CCR) with sensor nodes using various constraints.\nLabel relaxation algorithms converge only when a sufficient number of robust constraints are obtained.\nIn this paper, we propose a balanced solution that avoids the limitations of both anchor-based and anchor-free solutions.\nUnlike anchor-based solutions [4, 8], MSP allows a flexible tradeoff between the physical cost (anchor nodes) with the soft\nFigure 1.\nThe MSP System Overview\ncost (localization events).\nMSP uses only a small number of anchors (theoretically, as few as two).\nUnlike anchor-free solutions, MSP doesn't need to maintain rigid time-space relationships while distributing events, which makes system design simpler, more flexible and more robust to calibration errors.\n10 Conclusions\nIn this paper, we present the first work that exploits the concept of node sequence processing to localize sensor nodes.\nWe demonstrated that we could significantly improve localization accuracy by making full use of the information embedded in multiple easy-to-get one-dimensional node sequences.\nWe proposed four novel optimization methods, exploiting order and marginal distribution among non-anchor nodes as well as the feedback information from early localization results.\nImportantly, these optimization 
methods can be used together, and improve accuracy additively.\nThe practical issues of partial node sequence and sequence flip were identified and addressed in two physical system test-beds.\nWe also evaluated performance at scale through analysis as well as extensive simulations.\nResults demonstrate that requiring neither costly hardware on sensor nodes nor precise event distribution, MSP can achieve a sub-foot accuracy with very few anchor nodes provided sufficient events.","lvl-2":"MSP: Multi-Sequence Positioning of Wireless Sensor Nodes *\nAbstract\nWireless Sensor Networks have been proposed for use in many location-dependent applications.\nMost of these need to identify the locations of wireless sensor nodes, a challenging task because of the severe constraints on cost, energy and effective range of sensor devices.\nTo overcome limitations in existing solutions, we present a Multi-Sequence Positioning (MSP) method for large-scale stationary sensor node localization in outdoor environments.\nThe novel idea behind MSP is to reconstruct and estimate two-dimensional location information for each sensor node by processing multiple one-dimensional node sequences, easily obtained through loosely guided event distribution.\nStarting from a basic MSP design, we propose four optimizations, which work together to increase the localization accuracy.\nWe address several interesting issues, such as incomplete (partial) node sequences and sequence flip, found in the Mirage test-bed we built.\nWe have evaluated the MSP system through theoretical analysis, extensive simulation as well as two physical systems (an indoor version with 46 MICAz motes and an outdoor version with 20 MICAz motes).\nThis evaluation demonstrates that MSP can achieve an accuracy within one foot, requiring neither additional costly hardware on sensor nodes nor precise event distribution.\nIt also provides a nice tradeoff between physical cost (anchors) and soft cost (events), while maintaining 
localization accuracy.\n1 Introduction\nAlthough Wireless Sensor Networks (WSN) have shown promising prospects in various applications [5], researchers still face several challenges for massive deployment of such networks.\nOne of these is to identify the location of individual sensor nodes in outdoor environments.\nBecause of unpredictable flow dynamics in airborne scenarios, it is not currently feasible to localize sensor nodes during massive UVA-based deployment.\nOn the other hand, geometric information is indispensable in these networks, since users need to know where events of interest occur (e.g., the location of intruders or of a bomb explosion).\nPrevious research on node localization falls into two categories: range-based approaches and range-free approaches.\nRange-based approaches [13, 17, 19, 24] compute per-node location information iteratively or recursively based on measured distances among target nodes and a few anchors which precisely know their locations.\nThese approaches generally require costly hardware (e.g., GPS) and have limited effective range due to energy constraints (e.g., ultrasound-based TDOA [3, 17]).\nAlthough range-based solutions can be suitably used in small-scale indoor environments, they are considered less cost-effective for large-scale deployments.\nOn the other hand, range-free approaches [4, 8, 10, 13, 14, 15] do not require accurate distance measurements, but localize the node based on network connectivity (proximity) information.\nUnfortunately, since wireless connectivity is highly influenced by the environment and hardware calibration, existing solutions fail to deliver encouraging empirical results, or require substantial survey [2] and calibration [24] on a case-by-case basis.\nRealizing the impracticality of existing solutions for the large-scale outdoor environment, researchers have recently proposed solutions (e.g., Spotlight [20] and Lighthouse [18]) for sensor node localization using the spatiotemporal 
correlation of controlled events (i.e., inferring nodes' locations based on the detection time of controlled events). These solutions demonstrate that long-range and high-accuracy localization can be achieved simultaneously with little additional cost at sensor nodes. These benefits, however, come along with an implicit assumption that the controlled events can be precisely distributed to a specified location at a specified time. We argue that precise event distribution is difficult to achieve, especially at large scale when terrain is uneven, the event distribution device is not well calibrated and its position is difficult to maintain (e.g., the helicopter-mounted scenario in [20]).

To address these limitations in current approaches, in this paper we present a multi-sequence positioning (MSP) method for large-scale stationary sensor node localization, in deployments where an event source has line-of-sight to all sensors. The novel idea behind MSP is to estimate each sensor node's two-dimensional location by processing multiple easy-to-get one-dimensional node sequences (e.g., event detection order) obtained through loosely guided event distribution. This design offers several benefits. First, compared to a range-based approach, MSP does not require additional costly hardware. It works using sensors typically used by sensor network applications, such as light and acoustic sensors, both of which we specifically consider in this work. Second, compared to a range-free approach, MSP needs only a small number of anchors (theoretically, as few as two), so high accuracy can be achieved economically by introducing more events instead of more anchors. And third, compared to Spotlight, MSP does not require precise and sophisticated event distribution, an advantage that significantly simplifies the system design and reduces calibration cost.

This paper offers the following additional intellectual contributions:

• We are the first to localize sensor nodes
using the concept of node sequence, an ordered list of sensor nodes, sorted by the detection time of a disseminated event. We demonstrate that making full use of the information embedded in one-dimensional node sequences can significantly improve localization accuracy. Interestingly, we discover that repeated reprocessing of one-dimensional node sequences can further increase localization accuracy.

• We propose a distribution-based location estimation strategy that obtains the final location of sensor nodes using the marginal probability of the joint distribution among adjacent nodes within the sequence. This new algorithm outperforms the widely adopted Centroid estimation [4, 8].

• To the best of our knowledge, this is the first work to improve the localization accuracy of nodes by adaptive events, in which the generation of later events is guided by localization results from previous events.

• We evaluate line-based MSP on our new Mirage test-bed, and wave-based MSP in outdoor environments. Through system implementation, we discover and address several interesting issues such as partial sequences and sequence flips.

To reveal MSP performance at scale, we provide analytic results as well as a complete simulation study. All the simulation and implementation code is available online at http://www.cs.umn.edu/~zhong/MSP.

The rest of the paper is organized as follows. Section 2 briefly surveys the related work. Section 3 presents an overview of the MSP localization system. In Sections 4 and 5, basic MSP and four advanced processing methods are introduced. Section 6 describes how MSP can be applied in a wave propagation scenario. Section 7 discusses several implementation issues. Section 8 presents simulation results, and Section 9 reports an evaluation of MSP on the Mirage test-bed and an outdoor test-bed. Section 10 concludes the paper.

2 Related Work

Many methods have been proposed to localize wireless sensor devices in the open
air. Most of these can be classified into two categories: range-based and range-free localization.

Range-based localization systems, such as GPS [23], Cricket [17], AHLoS [19], AOA [16], Robust Quadrilaterals [13] and Sweeps [7], are based on fine-grained point-to-point distance estimation or angle estimation to identify per-node location. Constraints on the cost, energy and hardware footprint of each sensor node make these range-based methods undesirable for massive outdoor deployment. In addition, ranging signals generated by sensor nodes have a very limited effective range because of energy and form factor concerns. For example, ultrasound signals usually effectively propagate only 20-30 feet using an on-board transmitter [17]. Consequently, these range-based solutions require an undesirably high deployment density. Although received signal strength indicator (RSSI) related methods [2, 24] were once considered an ideal low-cost solution, the irregularity of radio propagation [26] seriously limits the accuracy of such systems. The recently proposed RIPS localization system [11] superimposes two RF waves, creating a low-frequency envelope that can be accurately measured. This ranging technique performs very well as long as antennas are well oriented and environmental factors such as multi-path effects and background noise are sufficiently addressed.

Range-free methods do not need to estimate or measure accurate distances or angles. Instead, anchors or controlled-event distributions are used for node localization. Range-free methods can be generally classified into two types: anchor-based and anchor-free solutions.

• For anchor-based solutions such as Centroid [4], APIT [8], SeRLoc [10], Gradient [13], and APS [15], the main idea is that the location of each node is estimated based on the known locations of the anchor nodes. Different anchor combinations narrow the areas in which the target nodes can possibly be located. Anchor-based
solutions normally require a high density of anchor nodes so as to achieve good accuracy. In practice, it is desirable to have as few anchor nodes as possible so as to lower the system cost.

• Anchor-free solutions require no anchor nodes. Instead, external event generators and data processing platforms are used. The main idea is to correlate the event detection time at a sensor node with the known space-time relationship of controlled events at the generator, so that detection time-stamps can be mapped into the locations of sensors. Spotlight [20] and Lighthouse [18] work in this fashion. In Spotlight [20], the event distribution needs to be precise in both time and space. Precise event distribution is difficult to achieve without careful calibration, especially when the event-generating devices require certain mechanical maneuvers (e.g., the telescope mount used in Spotlight). All these increase system cost and reduce localization speed. StarDust [21], which works much faster, uses label relaxation algorithms to match light spots reflected by corner-cube retro-reflectors (CCR) with sensor nodes using various constraints. Label relaxation algorithms converge only when a sufficient number of robust constraints are obtained. Due to the environmental impact on RF connectivity constraints, however, StarDust is less accurate than Spotlight.

In this paper, we propose a balanced solution that avoids the limitations of both anchor-based and anchor-free solutions. Unlike anchor-based solutions [4, 8], MSP allows a flexible tradeoff between the physical cost (anchor nodes) and the soft cost (localization events). MSP uses only a small number of anchors (theoretically, as few as two). Unlike anchor-free solutions, MSP does not need to maintain rigid time-space relationships while distributing events, which makes system design simpler, more flexible and more robust to calibration errors.

Figure 1. The MSP System Overview

3 System Overview

MSP works by
extracting relative location information from multiple simple one-dimensional orderings of nodes. Figure 1 (a) shows a layout of a sensor network with anchor nodes and target nodes. Target nodes are defined as the nodes to be localized. Briefly, the MSP system works as follows. First, events are generated one at a time in the network area (e.g., ultrasound propagations from different locations, laser scans with diverse angles). As each event propagates, as shown in Figure 1 (a), each node detects it at some particular time instance. For a single event, we call the ordering of nodes, which is based on the sequential detection of the event, a node sequence. Each node sequence includes both the targets and the anchors, as shown in Figure 1 (b). Second, a multi-sequence processing algorithm helps to narrow the possible location of each node to a small area (Figure 1 (c)). Finally, a distribution-based estimation method estimates the exact location of each sensor node, as shown in Figure 1 (d).

Figure 1 shows that the node sequences can be obtained much more economically than accurate pair-wise distance measurements between target nodes and anchor nodes via ranging methods. In addition, this system does not require a rigid time-space relationship for the localization events, which is critical but hard to achieve in controlled event distribution scenarios (e.g., Spotlight [20]).

For the sake of clarity in presentation, we present our system in two cases:

• Ideal Case, in which all the node sequences obtained from the network are complete and correct, and nodes are time-synchronized [12, 9].

• Realistic Deployment, in which (i) node sequences can be partial (incomplete), (ii) elements in sequences could flip (i.e., the order obtained is reversed from reality), and (iii) nodes are not time-synchronized.

To introduce the MSP algorithm, we first consider a simple straight-line scan scenario. Then, we describe how to implement straight-line scans as
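The first step of the pipeline above, turning one scan event into a node sequence, can be sketched in a few lines. The following Python sketch is our own illustration with made-up node positions, not code from the paper: for a straight-line scan, detection order is simply the order of each node's projection onto the sweep direction.

```python
import math

def node_sequence(positions, scan_angle_deg):
    """Return the node sequence (ids ordered by detection time) for a
    straight-line scan sweeping across the field.

    `positions` maps node id -> (x, y). A line scan moving at angle
    `scan_angle_deg` reaches each node in order of its projection onto the
    sweep direction, so only the resulting order matters, not exact timing.
    """
    theta = math.radians(scan_angle_deg)
    sweep = (math.cos(theta), math.sin(theta))  # direction the scan line moves
    key = lambda nid: positions[nid][0] * sweep[0] + positions[nid][1] * sweep[1]
    return sorted(positions, key=key)

# Hypothetical layout: one anchor "A" and two targets, scanned twice.
pos = {"A": (1.0, 5.0), "1": (3.0, 2.0), "2": (2.0, 8.0)}
print(node_sequence(pos, 0))   # left-to-right scan: orders by x
print(node_sequence(pos, 90))  # bottom-to-top scan: orders by y
```

Each additional scan angle yields another independent one-dimensional ordering of the same nodes, which is exactly the raw input MSP consumes.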
well as other event types, such as sound wave propagation.

Figure 2. Obtaining Multiple Node Sequences

4 Basic MSP

Let us consider a sensor network with N target nodes and M anchor nodes randomly deployed in an area of size S. The top-level idea of basic MSP is to split the whole sensor network area into small pieces by processing node sequences. Because the exact locations of all the anchors in a node sequence are known, all the nodes in this sequence can be divided into O(M + 1) parts in the area.

In Figure 2, we use numbered circles to denote target nodes and numbered hexagons to denote anchor nodes. Basic MSP uses two straight lines to scan the area from different directions, treating each scan as an event. All the nodes react to the events sequentially, generating two node sequences. For vertical scan 1, the node sequence is (8, 1, 5, A, 6, C, 4, 3, 7, 2, B, 9), as shown outside the right boundary of the area in Figure 2; for horizontal scan 2, the node sequence is (3, 1, C, 5, 9, 2, A, 4, 6, B, 7, 8), as shown under the bottom boundary of the area in Figure 2. Since the locations of the anchor nodes are available, the anchor nodes in the two node sequences actually split the area vertically and horizontally into 16 parts, as shown in Figure 2.

To extend this process, suppose we have M anchor nodes and perform d scans from different angles, obtaining d node sequences and dividing the area into many small parts. Obviously, the number of parts is a function of the number of anchors M, the number of scans d, the anchors' locations, and the slope k of each scan line. According to the pie-cutting theorem [22], the area can be divided into O(M^2 d^2) parts. When M and d are appropriately large, the polygon for each target node may become sufficiently small so that accurate estimation can be achieved. We emphasize that accuracy is affected not only by the number of anchors M, but also by the number of events d. In other words, MSP provides a tradeoff between the
physical cost of anchors and the soft cost of events.

Algorithm 1 depicts the computing architecture of basic MSP. Each node sequence is processed within lines 1 to 8. For each node, GetBoundaries() in line 5 searches for the predecessor and successor anchors in the sequence so as to determine the boundaries of this node. Then, in line 6, UpdateMap() shrinks the location area of this node according to the newly obtained boundaries. After processing all sequences, CentroidEstimation() (line 11) sets the center of gravity of the final polygon as the estimated location of the target node.

Algorithm 1 Basic MSP Process
Output: The estimated location of each node.
1: repeat
2:   GetOneUnprocessedSequence();
3:   repeat
4:     GetOneNodeFromSequenceInOrder();
5:     GetBoundaries();
6:     UpdateMap();
7:   until All the target nodes are updated;
8: until All the node sequences are processed;
9: repeat
10:   GetOneUnestimatedNode();
11:   CentroidEstimation();
12: until All the target nodes are estimated;

Basic MSP only makes use of the order information between a target node and the anchor nodes in each sequence. Actually, we can extract much more location information from each sequence. Section 5 will introduce advanced MSP, in which four novel optimizations are proposed to improve the performance of MSP significantly.

5 Advanced MSP

Four improvements to basic MSP are proposed in this section. The first three improvements do not need additional sensing and communication in the networks, but require only slightly more off-line computation. The objective of all these improvements is to make full use of the information embedded in the node sequences. The results we have obtained empirically indicate that the implementation of the first two methods can dramatically reduce the localization error, and that the third and fourth methods are helpful for some system deployments.

5.1 Sequence-Based MSP

As shown in Figure 2, each scan line, together with the M anchors, splits the whole area into M
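The core of Algorithm 1 (GetBoundaries plus UpdateMap) can be sketched for a single scan direction as follows. This is our own minimal Python rendering under simplifying assumptions (per-axis interval bounds instead of general polygons), not the paper's implementation:

```python
def basic_msp_update(sequence, anchors, bounds, axis):
    """One Basic MSP pass for a single scan (sketch of Algorithm 1's
    GetBoundaries/UpdateMap; names and data layout are ours, not the paper's).

    `sequence` is the node sequence for one scan; `anchors` maps each anchor
    id to its known coordinate along the scan axis; `bounds` maps each target
    id to {axis: [lo, hi]} and is shrunk in place.
    """
    for i, node in enumerate(sequence):
        if node in anchors or node not in bounds:
            continue
        # Nearest predecessor anchor in the sequence gives a lower boundary.
        lo = max((anchors[p] for p in sequence[:i] if p in anchors), default=None)
        # Nearest successor anchor gives an upper boundary.
        hi = min((anchors[s] for s in sequence[i + 1:] if s in anchors), default=None)
        b = bounds[node][axis]
        if lo is not None:
            b[0] = max(b[0], lo)
        if hi is not None:
            b[1] = min(b[1], hi)

# Hypothetical vertical scan: target 5 is detected before anchor A (x = 4),
# so its x-range shrinks from [0, 10] to [0, 4].
bounds = {"5": {"x": [0.0, 10.0]}}
basic_msp_update(["8", "5", "A", "B", "9"], {"A": 4.0, "B": 9.0}, bounds, "x")
print(bounds["5"]["x"])  # [0.0, 4.0]
```

Running several scans from different angles intersects these per-scan constraints, which is how the area is cut into the small polygons described above.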
+ 1 parts. Each target node falls into one polygon shaped by scan lines. We noted that in basic MSP, only the anchors are used to narrow down the polygon of each target node, but there is actually more information in the node sequence that we can make use of.

Let us first look at a simple example shown in Figure 3. The previous scans narrow the locations of target node 1 and node 2 into the two dashed rectangles shown in the left part of Figure 3. Then a new scan generates a new sequence (1, 2). With knowledge of the scan's direction, it is easy to tell that node 1 is located to the left of node 2. Thus, we can further narrow the location area of node 2 by eliminating the shaded part of node 2's rectangle, because node 2 is located to the right of node 1 while the shaded area is outside the lower boundary of node 1. Similarly, the location area of node 1 can be narrowed by eliminating the shaded part beyond node 2's right boundary. We call this procedure sequence-based MSP, which means that the whole node sequence needs to be processed node by node in order. Specifically, sequence-based MSP follows this exact processing rule:

Figure 3. Rule Illustration in Sequence-Based MSP

Algorithm 2 Sequence-Based MSP Process
Output: The estimated location of each node.
1: repeat
2:   GetOneUnprocessedSequence();
3:   repeat
4:     GetOneNodeByIncreasingOrder();
5:     ComputeLowbound();
6:     UpdateMap();
7:   until The last target node in the sequence;
8:   repeat
9:     GetOneNodeByDecreasingOrder();
10:     ComputeUpbound();
11:     UpdateMap();
12:   until The last target node in the sequence;
13: until All the node sequences are processed;
14: repeat
15:   GetOneUnestimatedNode();
16:   CentroidEstimation();
17: until All the target nodes are estimated;

Elimination Rule: Along a scanning direction, the lower boundary of the successor's area must be equal to or larger than the lower boundary of the predecessor's area, and the upper boundary of the predecessor's area must be equal to
or smaller than the upper boundary of the successor's area.

In the case of Figure 3, node 2 is the successor of node 1, and node 1 is the predecessor of node 2. According to the elimination rule, node 2's lower boundary cannot be smaller than that of node 1, and node 1's upper boundary cannot exceed node 2's upper boundary.

Algorithm 2 illustrates the pseudo code of sequence-based MSP. Each node sequence is processed within lines 3 to 13. The sequence processing contains two steps:

Step 1 (lines 3 to 7): Compute and modify the lower boundary for each target node in increasing order in the node sequence. Each node's lower boundary is determined by the lower boundary of its predecessor node in the sequence, so processing must start from the first node in the sequence and proceed in increasing order. Then update the map according to the new lower boundary.

Step 2 (lines 8 to 12): Compute and modify the upper boundary for each node in decreasing order in the node sequence. Each node's upper boundary is determined by the upper boundary of its successor node in the sequence, so processing must start from the last node in the sequence and proceed in decreasing order. Then update the map according to the new upper boundary.

After processing all the sequences, a polygon bounding each node's possible location has been found. Then, center-of-gravity-based estimation is applied to compute the exact location of each node (lines 14 to 17).

An example of this process is shown in Figure 4. The third scan generates the node sequence (B, 9, 2, 7, 4, 6, 3, 8, C, A, 5, 1). In addition to the anchor split lines, because nodes 4 and 7 come after node 2 in the sequence, node 4's and node 7's polygons can be narrowed according to node 2's lower boundary (the lower right-shaded area); similarly, the shaded area in node 2's rectangle can be eliminated, since this part is beyond node 7's upper boundary, indicated by the dotted line. Similar elimination can be performed for node 3, as shown in
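The two-step elimination rule can be sketched directly. Below is a one-dimensional Python sketch of our own (tracking only each node's lower/upper boundary along the scan direction, with anchors represented as nodes whose two boundaries coincide), not the paper's code:

```python
def sequence_based_pass(sequence, lower, upper):
    """Sequence-Based MSP elimination rule along one scan direction (sketch).

    `lower`/`upper` map every node id (targets and anchors) to its current
    boundaries along the scan direction. Two sweeps enforce the rule:
    successor.lower >= predecessor.lower, predecessor.upper <= successor.upper.
    """
    # Step 1: tighten lower bounds in increasing sequence order.
    for prev, node in zip(sequence, sequence[1:]):
        lower[node] = max(lower[node], lower[prev])
    # Step 2: tighten upper bounds in decreasing sequence order.
    for nxt, node in zip(reversed(sequence), list(reversed(sequence))[1:]):
        upper[node] = min(upper[node], upper[nxt])

# The Figure 3 situation, reduced to one dimension (made-up coordinates):
# node 1 is detected before node 2, but their boxes initially overlap.
seq = ["1", "2"]
lower = {"1": 2.0, "2": 0.0}   # node 2's box starts left of node 1's
upper = {"1": 9.0, "2": 6.0}
sequence_based_pass(seq, lower, upper)
print(lower["2"], upper["1"])  # 2.0 6.0
```

After the pass, node 2 inherits node 1's lower boundary and node 1 inherits node 2's upper boundary, exactly the two shaded eliminations described for Figure 3.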
the figure.

Figure 4. Sequence-Based MSP Example

From the above, we can see that sequence-based MSP makes use of the information embedded in every sequential node pair in the node sequence. The polygon boundaries of the target nodes obtained earlier can be used to further split other target nodes' areas. Our evaluation in Sections 8 and 9 shows that sequence-based MSP considerably enhances system accuracy.

5.2 Iterative MSP

Sequence-based MSP is preferable to basic MSP because it extracts more information from the node sequence. In fact, further useful information still remains! In sequence-based MSP, a sequence processed later benefits from information produced by previously processed sequences (e.g., the third scan in Figure 5). However, the first several sequences can hardly benefit from other scans in this way. Inspired by this phenomenon, we propose iterative MSP. The basic idea of iterative MSP is to process all the sequences iteratively several times, so that the processing of each single sequence can benefit from the results of the other sequences.

To illustrate the idea more clearly, Figure 4 shows the results of three scans that have provided three sequences. Now, if we process the sequence (8, 1, 5, A, 6, C, 4, 3, 7, 2, B, 9) obtained from scan 1 again, we can make progress, as shown in Figure 5. Reprocessing node sequence 1 provides information in the way an additional vertical scan would. From sequence-based MSP, we know that the upper boundaries of nodes 3 and 4 along the scan direction must not extend beyond the upper boundary of node 7; therefore, the grid parts can be eliminated for node 3 and node 4, respectively, as shown in Figure 5.

Figure 6. Example of Joint Distribution Estimation

Figure 7. Idea of DBE MSP for Each Node

From this example, we can see that iterative processing of the sequences can help further shrink the polygon of each target node, and thus enhance the accuracy of the system. The
implementation of iterative MSP is straightforward: process all the sequences multiple times using sequence-based MSP. Like sequence-based MSP, iterative MSP introduces no additional event cost. In other words, reprocessing does not actually repeat the scan physically. Evaluation results in Section 8 will show that iterative MSP contributes noticeably to a lower localization error. Empirical results show that after 5 iterations, improvements become less significant. In summary, iterative processing can achieve better performance with only a small computation overhead.

5.3 Distribution-Based Estimation

After determining the location area polygon for each node, estimation is needed for a final decision. Previous research mostly applied the Center of Gravity (COG) method [4, 8, 10], which minimizes average error. If every node were independent of all others, COG would be the statistically best solution. In MSP, however, the nodes may not be independent. For example, two neighboring nodes in a certain sequence could have overlapping polygon areas. In this case, better statistical results are achieved if the marginal probability of the joint distribution is used for estimation. Figure 6 shows an example in which node 1 and node 2 are located in the same polygon. If COG is used, both nodes are localized at the same position (Figure 6 (a)). However, the node sequences obtained from two scans indicate that node 1 should be to the left of and above node 2, as shown in Figure 6 (b).

The high-level idea of the distribution-based estimation proposed for MSP, which we call DBE MSP, is illustrated in Figure 7. The distribution of each node under the ith scan (for the ith node sequence) is estimated in node.vmap[i], a data structure for remembering the marginal distribution over scan i. Then all the vmaps are combined to get a single map, and weighted estimation is used to obtain the final location.

For each scan, all the nodes are sorted according to the gap,
which is the diameter of the polygon along the direction of the scan, to produce a second, gap-based node sequence. The estimation then starts from the node with the smallest gap, because it is statistically more accurate to assume a uniform distribution for a node with a smaller gap. For each node processed in order from the gap-based node sequence, if either no neighbor node in the original event-based node sequence shares an overlapping area, or the neighbors have not yet been processed due to bigger gaps, a uniform distribution Uniform() is applied to this isolated node (the Alone case in Figure 8). If the distribution of its neighbors sharing overlapped areas has been processed, we calculate the joint distribution for the node. As shown in Figure 8, there are three further cases, depending on whether the distributions of the overlapping predecessor and/or successor nodes have already been estimated: if only the successor node's distribution exists, the conditional distribution is based on the successor's area; likewise for the predecessor; if both exist, it is based on both of them. The estimation strategy of starting from the most accurate node (the smallest-gap node) reduces the problem of estimation error propagation. The results in the evaluation section indicate that applying distribution-based estimation gives statistically better results.

Figure 5. Iterative MSP: Reprocessing Scan 1

Figure 8. Four Cases in DBE Process

Figure 9. Basic Architecture of Adaptive MSP

5.4 Adaptive MSP

So far, all the enhancements to basic MSP focus on improving the multi-sequence processing algorithm given a fixed set of scan directions. All these enhancements require only more computing time, without any overhead for the sensor nodes. Obviously, it is also possible to exercise some choice and optimization over how events are generated. For example, in military situations, artillery or rocket-launched mini-ultrasound bombs can be used for event generation at some selected
locations. In adaptive MSP, we carefully generate each new localization event so as to maximize its contribution to the refinement of localization, based on feedback from previous events. Figure 9 depicts the basic architecture of adaptive MSP. Through previous localization events, the whole map has been partitioned into many small location areas. The idea of adaptive MSP is to generate the next localization event so as to achieve best-effort elimination, which ideally shrinks the location area of each individual node as much as possible.

We use a weighted voting mechanism to evaluate candidate localization events. Every node wants the next event to split its area evenly, which would shrink the area quickly. Therefore, every node votes for the parameters of the next event (e.g., the scan angle k of a straight-line scan). Since the area map is maintained centrally, the vote is done virtually, and there is no need for the real sensor nodes to participate in it. After gathering all the voting results, the event parameters with the most votes win the election. Two factors determine the weight of each vote:

• The vote for each candidate event is weighted according to the diameter D of the node's location area. Nodes with bigger location areas speak louder in the voting, because overall system error is reduced mostly by splitting the larger areas.

• The vote for each candidate event is also weighted according to its elimination efficiency for a location area, defined as how equally in size (or in diameter) an event can cut an area. In other words, an optimal scan event cuts an area in the middle, since such a cut shrinks the area quickly and thus reduces localization uncertainty quickly.

Figure 10. Candidate Slopes for Node 3 at Anchor 1

Combining the above two aspects, the weight for each vote is computed according to the following equation (1):

Weight(k_i^j) = f(D_i, Δ(k_i^j, k_i^opt))    (1)

where k_i^j is node i's jth supporting
parameter for the next event generation; D_i is the diameter of node i's location area; and Δ(k_i^j, k_i^opt) is the distance between k_i^j and the optimal parameter k_i^opt for node i, which should be defined to fit the specific application.

Figure 10 presents an example of node 3's voting for the slopes of the next straight-line scan. In the system, there is a fixed number of candidate slopes for each scan (e.g., k1, k2, k3, k4, ...). The location area of target node 3 is shown in the figure. The candidate events k_3^1, k_3^2, k_3^3, k_3^4, k_3^5, k_3^6 are evaluated according to their effectiveness compared to the optimal ideal event, which is shown as a dotted line, with the appropriate weights computed according to equation (1). For this specific example, as illustrated in the right part of Figure 10,

f(D_i, Δ(k_i^j, k_i^opt)) = D_i · S_small / S_large

where S_small and S_large are the sizes of the smaller part and the larger part of the area cut by the candidate line, respectively. In this case, node 3 votes 0 for candidate lines that do not cross its area, since S_small = 0. We show later that adaptive MSP improves localization accuracy in WSNs with irregularly shaped deployment areas.

5.5 Overhead and MSP Complexity Analysis

This section provides a complexity analysis of the MSP design. We emphasize that MSP adopts an asymmetric design in which sensor nodes need only to detect and report the events. They are blissfully oblivious to the processing methods proposed in the previous sections. In this section, we analyze the computational cost on the node sequence processing side, where resources are plentiful.

According to Algorithm 1, the computational complexity of basic MSP is O(d · N · S), and the storage space required is O(N · S), where d is the number of events, N is the number of target nodes, and S is the area size. According to Algorithm 2, the computational complexity of both sequence-based MSP and iterative MSP is O(c · d · N · S), where c is the number of
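The adaptive voting weight of equation (1) can be made concrete with a small sketch. The following Python example is our own illustration, assuming rectangular location areas and vertical candidate scan lines (the paper's areas are general polygons and candidates are arbitrary slopes):

```python
def vote_weight(rect, cut_x):
    """Adaptive MSP vote weight for one node and one candidate vertical cut,
    in the illustrative form weight = D * S_small / S_large (sketch).

    `rect` = (x0, y0, x1, y1) is the node's current location rectangle and
    `cut_x` a candidate scan line. A cut missing the rectangle gets weight 0
    (S_small = 0), and an even split of a large area wins the election.
    """
    x0, y0, x1, y1 = rect
    diameter = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    if not (x0 < cut_x < x1):
        return 0.0                       # no elimination at all
    left = (cut_x - x0) * (y1 - y0)      # area left of the cut
    right = (x1 - cut_x) * (y1 - y0)     # area right of the cut
    s_small, s_large = min(left, right), max(left, right)
    return diameter * s_small / s_large

# Hypothetical 4x3 location rectangle: the middle cut (x = 2) beats an
# off-center cut (x = 1), and a cut outside the area scores zero.
rect = (0.0, 0.0, 4.0, 3.0)
print(vote_weight(rect, 2.0) > vote_weight(rect, 1.0) > vote_weight(rect, -1.0))
# True
```

Summing such weights over all nodes for each candidate parameter and picking the maximum is the "election" described above.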
iterations (c = 1 for sequence-based MSP), and the storage space required is O(N · S). Both the computational complexity and the storage space are equal, within a constant factor, to those of basic MSP.

The computational complexity of distribution-based estimation (DBE MSP) is greater. The major overhead comes from the computation of joint distributions when both predecessor and successor nodes exist. In order to compute the marginal probability, MSP needs to enumerate the locations of the predecessor node and the successor node. For example, if node A has predecessor node B and successor node C, then the marginal probability P_A(x, y) of node A's being at location (x, y) is given by equation (3):

P_A(x, y) = Σ_(i,j) Σ_(m,n) P_B(i, j) · P_C(m, n) / N_{B,A,C}    (3)

where N_{B,A,C} is the number of valid locations for A satisfying the sequence (B, A, C) when B is at (i, j) and C is at (m, n); P_B(i, j) is the available probability of node B's being located at (i, j); and P_C(m, n) is the available probability of node C's being located at (m, n). A naive algorithm to compute equation (3) has complexity O(d · N · S^3). However, since the marginal probability indeed comes from only one dimension along the scanning direction (e.g., a line), the complexity can be reduced to O(d · N · S^1.5) after algorithm optimization. In addition, the final location areas for every node are much smaller than the original field S; therefore, in practice, DBE MSP can be computed much faster than O(d · N · S^1.5).

6 Wave Propagation Example

So far, the description of MSP has been solely in the context of straight-line scans. However, we note that MSP is conceptually independent of how the event is propagated, as long as node sequences can be obtained. Clearly, we can also support wave-propagation-based events (e.g., ultrasound propagation, air blast propagation), which are the polar-coordinate equivalents of line scans in the Cartesian coordinate system. This section illustrates the effects of MSP's implementation in the wave propagation-based
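The marginal computation behind equation (3) can be illustrated with a one-dimensional toy example. The following Python sketch is our own simplification along a single scan direction (coordinates are integer cells; the sequence (B, A, C) forces b < a < c), not the paper's implementation:

```python
from itertools import product

def dbe_marginal(p_b, p_c, cells):
    """Marginal distribution of node A under DBE MSP (1-D sketch of eq. (3)).

    `p_b`/`p_c` map candidate coordinates of predecessor B and successor C to
    their probabilities; `cells` are A's candidate coordinates. For each
    (b, c) pair, A is assumed uniform over its N_{B,A,C} valid cells, and we
    accumulate P_B(b) * P_C(c) / N.
    """
    p_a = {a: 0.0 for a in cells}
    for (b, pb), (c, pc) in product(p_b.items(), p_c.items()):
        valid = [a for a in cells if b < a < c]   # N_{B,A,C} = len(valid)
        if not valid:
            continue
        for a in valid:
            p_a[a] += pb * pc / len(valid)
    total = sum(p_a.values()) or 1.0
    return {a: p / total for a, p in p_a.items()}  # normalize

# B sits at cell 0 or 1 with equal probability; C is surely at 4; A may be
# at cells 1..3. Cells 2 and 3 end up more likely than cell 1, since B at 1
# rules out A at 1.
m = dbe_marginal({0: 0.5, 1: 0.5}, {4: 1.0}, [1, 2, 3])
print(m)
```

This is exactly why DBE beats COG here: the uniform (COG-like) answer would treat cells 1, 2, 3 equally, while the sequence constraint shifts mass away from cell 1.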
situation.

For easy modelling, we have made the following assumptions:

• The wave propagates uniformly in all directions, so the propagation has a circular frontier surface. Since MSP does not rely on an accurate space-time relationship, a certain distortion in wave propagation is tolerable. If a directional wave is used, the propagation frontier surface can be modified accordingly.

• Under line-of-sight conditions, we allow obstacles to reflect or deflect the wave. Reflection and deflection are not problems, because each node reacts only to the first detected event, and reflected or deflected waves arrive later than the line-of-sight waves. The only thing the system needs to maintain is an appropriate time interval between two successive localization events.

• We assume that background noise exists, and therefore we run a band-pass filter to listen for a particular wave frequency. This reduces the chance of false detection.

Figure 11. Example of Wave Propagation Situation

The parameter that affects localization event generation here is the source location of the event. The different distances between each node and the event source determine the rank of each node in the node sequence. Using the node sequences, the MSP algorithm divides the whole area into many non-rectangular areas, as shown in Figure 11. In this figure, the stars represent two previous event sources. The two previous propagations split the whole map into many areas, delimited by the dashed circles that each pass through one of the anchors. Each node is located in one of the small areas. Since sequence-based MSP, iterative MSP and DBE MSP make no assumptions about the type of localization events or the shape of the area, all three optimization algorithms can be applied in the wave propagation scenario.

However, adaptive MSP needs more explanation. Figure 11 illustrates an example of nodes' voting for the next event source location. Unlike the straight-line scan, the
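For a wave event, the node sequence is simply the ordering of nodes by distance to the event source, since a circular wavefront reaches closer nodes first. A minimal Python sketch of our own (hypothetical coordinates, not from the paper):

```python
import math

def wave_sequence(positions, source):
    """Node sequence for a wave-propagation event (sketch).

    With a roughly circular wavefront, detection order is increasing
    Euclidean distance from the event source, so the same MSP machinery
    applies, with circular split lines in place of straight ones.
    """
    return sorted(positions, key=lambda nid: math.dist(positions[nid], source))

# Hypothetical layout: anchor "C" at the origin, two targets farther out.
pos = {"C": (0.0, 0.0), "9": (3.0, 4.0), "2": (1.0, 1.0)}
print(wave_sequence(pos, (0.0, 0.0)))  # ['C', '2', '9']
```

Replacing the projection-based ordering of a line scan with this distance-based ordering is the only change the wave scenario requires of the sequence-processing pipeline.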
critical parameter now is the location of the event source, because the distance between each node and the event source determines the rank of the node in the sequence. In Figure 11, if the next event breaks out along or near the solid thick gray line, which perpendicularly bisects the solid dark line between anchor C and the center of gravity of node 9's area (the gray area), the wave will reach anchor C and the center of gravity of node 9's area at roughly the same time, which would divide node 9's area relatively equally. Therefore, node 9 prefers to vote for positions around the thick gray line.

7 Practical Deployment Issues

For the sake of presentation, until now we have described MSP in an ideal case where a complete node sequence can be obtained with accurate time synchronization. In this section we describe how to make MSP work well under more realistic conditions.

7.1 Incomplete Node Sequence

For diverse reasons, such as sensor malfunction or natural obstacles, nodes in the network may fail to detect localization events. In such cases, the node sequence will not be complete. This problem has two versions:

• Anchor nodes are missing from the node sequence. If some anchor nodes fail to respond to the localization events, then the system has fewer anchors. In this case, the solution is to generate more events to compensate for the loss of anchors, so as to achieve the desired accuracy requirements.

• Target nodes are missing from the node sequence. There are two consequences when target nodes are missing. First, if these nodes are still useful to sensing applications, they need to use other backup localization approaches (e.g., Centroid) to localize themselves, with help from their neighbors who have already learned their own locations from MSP. Second, since in advanced MSP each node in the sequence may contribute to the overall system accuracy, dropping target nodes from sequences could also reduce the accuracy of the
localization.\nThus, proper compensation procedures, such as adding more localization events, need to be launched.\n7.2 Localization without Time Synchronization\nIn a sensor network without time synchronization support, nodes cannot be ordered into a sequence using timestamps.\nFor such cases, we propose a listen-detect-assemble-report protocol, which functions without time synchronization.\nThe protocol requires every node to listen to the channel for the node sequences transmitted by its neighbors.\nThen, when the node detects the localization event, it assembles itself into the newest node sequence it has heard and reports the updated sequence to other nodes.\nFigure 12 (a) illustrates an example of the listen-detect-assemble-report protocol.\nFor simplicity, in this figure we do not differentiate the target nodes from anchor nodes.\nA solid line between two nodes stands for a communication link.\nSuppose a straight line scans from left to right.\nNode 1 detects the event, and then it broadcasts the sequence (1) into the network.\nNode 2 and node 3 receive this sequence.\nWhen node 2 detects the event, node 2 adds itself into the sequence and broadcasts (1, 2).\nThe sequence propagates in the same direction as the scan, as shown in Figure 12 (a).\nFinally, node 6 obtains a complete sequence (1,2,3,5,7,4,6).\nIn the case of ultrasound propagation, because the event propagation speed is much slower than that of radio, the listen-detect-assemble-report protocol can work well when the node density is not very high.\nFor instance, if the distance between two nodes along one direction is 10 meters, the 340m\/s sound needs 29.4 ms to propagate from one node to the other.\nSince the typical communication data rate in a WSN is 250 kbps (e.g., CC2420 [1]), it takes only about 2 \u223c 3 ms to transmit an assembled packet for one hop.\nOne problem that may occur with the listen-detect-assemble-report protocol 
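A toy sketch of the listen-detect-assemble-report assembly step (a deliberately simplified single-channel model, not the authors' implementation):

```python
def assemble_sequences(detection_order):
    """Toy model of listen-detect-assemble-report on a connected network.

    Each node, upon detecting the event, appends itself to the newest
    (longest) sequence it has heard and rebroadcasts it.  Because the
    event propagates much more slowly than radio messages, the last
    detector ends up holding the complete node sequence.
    """
    newest = []                      # newest sequence heard on the channel
    for node in detection_order:     # nodes detect the event in this order
        newest = newest + [node]     # assemble self into the sequence
        # (node then broadcasts `newest`; neighbors update their copies)
    return newest

# Straight-line scan over the topology of Figure 12 (a):
print(assemble_sequences([1, 2, 3, 5, 7, 4, 6]))  # [1, 2, 3, 5, 7, 4, 6]

# Feasibility check from the text: sound needs 10 / 340 s ~ 29.4 ms per
# 10 m hop, while one packet takes only about 2-3 ms at 250 kbps.
```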
is multiple partial sequences, as shown in Figure 12 (b).\nTwo separate paths in the network may result in two sequences that cannot be combined.\nIn this case, since the two sequences can only be processed as separate sequences, some order information is lost.\nTherefore the accuracy of the system decreases.\nFigure 12.\nNode Sequence without Time Synchronization\nThe other problem is the sequence flip problem.\nAs shown in Figure 12 (c), because node 2 and node 3 are too close to each other along the scan direction, they detect the scan almost simultaneously.\nDue to uncertainties such as media access delay, the two messages could be transmitted out of order.\nFor example, if node 3 sends out its report first, then the order of node 2 and node 3 gets flipped in the final node sequence.\nThe sequence flip problem can appear even in an accurately synchronized system, due to random jitter in node detection, if an event arrives at multiple nodes almost simultaneously.\nA method addressing sequence flip is presented in the next section.\n7.3 Sequence Flip and Protection Band\nSequence flip problems can be solved with and without time synchronization.\nWe first consider a scenario with time synchronization.\nExisting solutions for time synchronization [12, 6] can easily achieve sub-millisecond accuracy.\nFor example, FTSP [12] achieves 16.9 \u00b5s (microsecond) average error for a two-node single-hop case.\nTherefore, we can comfortably assume that the network is synchronized with a maximum error of 1000\u00b5s.\nHowever, when multiple nodes are located very near to each other along the event propagation direction, even when time synchronization with less than 1ms error is achieved in the network, sequence flip may still occur.\nFor example, in the sound wave propagation case, if two nodes are less than 0.34 meters apart, their detection timestamps would differ by less than 1 millisecond.\nWe find that sequence flip 
could not only damage system accuracy, but also cause a fatal error in the MSP algorithm.\nFigure 13 illustrates both detrimental results.\nOn the left side of Figure 13 (a), suppose node 1 and node 2 are so close to each other that it takes less than 0.5 ms for the localization event to propagate from node 1 to node 2.\nNow, unfortunately, the node sequence is mistaken to be (2,1).\nSo node 1 is expected to be located to the right of node 2, such as at the position of the dashed node 1.\nAccording to the elimination rule in sequence-based MSP, the left part of node 1's area is cut off, as shown in the right part of Figure 13 (a).\nThis is a potentially fatal error, because node 1 is actually located in the dashed area, which has been eliminated by mistake.\nDuring subsequent eliminations introduced by other events, node 1's area might be cut off completely, and node 1 could consequently be erased from the map!\nEven in cases where node 1 survives, its area does not actually cover its real location.\nFigure 13.\nSequence Flip and Protection Band\nAnother problem is not fatal but lowers the localization accuracy.\nIf we get the right node sequence (1,2), node 1 has a new upper boundary which narrows node 1's area, as in Figure 3.\nDue to the sequence flip, node 1 loses this new upper boundary.\nIn order to address the sequence flip problem, and especially to prevent nodes from being erased from the map, we propose a protection band compensation approach.\nThe basic idea of the protection band is to slightly extend the boundary of the location area so as to make sure that the node is never erased from the map.\nThis solution is based on the fact that nodes have a high probability of flipping in the sequence if they are near each other along the event propagation direction.\nIf two nodes are more than some distance B apart, they rarely flip unless the nodes are faulty.\nThe width of the protection band B is largely 
determined by the maximum error in system time synchronization and the localization event propagation speed.\nFigure 13 (b) presents the application of the protection band.\nInstead of eliminating the dashed part in Figure 13 (a) for node 1, the new lower boundary of node 1 is set by shifting the original lower boundary of node 2 to the left by distance B.\nIn this case, the location area still covers node 1 and protects it from being erased.\nIn a practical implementation, supposing an ultrasound event is used and the maximum error of system time synchronization is 1ms, two nodes are likely to flip if the timestamp difference between them is at most 1ms.\nAccordingly, we set the protection band B to 0.34 m (the distance sound propagates within 1 millisecond).\nBy adding the protection band, we reduce the chance of fatal errors, although at the cost of localization accuracy.\nEmpirical results obtained from our physical test-bed verified this conclusion.\nIn the case of using the listen-detect-assemble-report protocol, the only change we need to make is to select the protection band according to the maximum delay uncertainty introduced by the MAC operation and the event propagation speed.\nTo bound MAC delay at the node side, a node can drop its report message if it experiences excessive MAC delay.\nThis converts the sequence flip problem into the incomplete sequence problem, which can be more easily addressed by the method proposed in Section 7.1.\n8 Simulation Evaluation\nOur evaluation of MSP was conducted on three platforms: (i) an indoor system with 46 MICAz motes using straight-line scan, (ii) an outdoor system with 20 MICAz motes using sound wave propagation, and (iii) an extensive simulation under various kinds of physical settings.\nIn order to understand the behavior of MSP under numerous settings, we start our evaluation with simulations.\nThen, we implemented basic MSP and all the advanced MSP methods for 
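The protection-band arithmetic described in Section 7.3 can be sketched as follows (the boundary positions are hypothetical one-dimensional coordinates along the propagation direction):

```python
def protection_band(max_sync_error_s, propagation_speed_mps):
    """Band width B: the distance the event travels within the maximum
    time-synchronization error."""
    return propagation_speed_mps * max_sync_error_s

def protected_lower_boundary(neighbor_boundary, band):
    """Instead of eliminating up to the (possibly flipped) neighbor's
    boundary, shift it back by B so the node is never erased."""
    return neighbor_boundary - band

B = protection_band(0.001, 340.0)  # 1 ms sync error, 340 m/s sound -> ~0.34 m
print(B)
print(protected_lower_boundary(5.0, B))  # hypothetical boundary at x = 5 m
```

For the listen-detect-assemble-report case, the same sketch applies with the MAC delay uncertainty in place of the synchronization error.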
the case where time synchronization is available in the network.\nThe simulation and implementation details are omitted in this paper due to space constraints, but related documents [25] are provided online at http:\/\/www.cs.umn.edu\/\u223czhong\/MSP.\nA full implementation and evaluation of the system without time synchronization remain future work.\nIn simulation, we assume all the node sequences are perfect, so as to reveal the performance of MSP achievable in the absence of incomplete node sequences or sequence flips.\nIn our simulations, all the anchor nodes and target nodes are assumed to be deployed uniformly.\nThe mean and maximum errors are averaged over 50 runs to obtain high confidence.\nFor legibility, we do not plot the confidence intervals in this paper.\nAll the simulations are based on the straight-line scan example.\nWe implement three scan strategies:\n\u2022 Random Scan: The slope of the scan line is chosen at random for each scan.\n\u2022 Regular Scan: The slope is predetermined to rotate uniformly from 0 to 180 degrees.\nFor example, if the system scans 6 times, the scan angles would be: 0, 30, 60, 90, 120, and 150.\n\u2022 Adaptive Scan: The slope of each scan is determined based on the localization results from previous scans.\nWe start with basic MSP and then demonstrate the performance improvements one step at a time by adding (i) sequence-based MSP, (ii) iterative MSP, (iii) DBE MSP and (iv) adaptive MSP.\n8.1 Performance of Basic MSP\nThe evaluation starts with basic MSP, where we compare the performance of random scan and regular scan under different configurations.\nWe intend to illustrate the impact of the number of anchors M, the number of scans d, and the target node density (number of target nodes N in a fixed-size region) on the localization error.\nTable 1 shows the default simulation parameters.\nThe error of each node is defined as the distance between the estimated location and the real 
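The regular-scan schedule described above can be generated directly; a minimal sketch:

```python
def regular_scan_angles(d):
    """Slopes for d regular scans, rotating uniformly over [0, 180)."""
    return [i * 180.0 / d for i in range(d)]

print(regular_scan_angles(6))  # [0.0, 30.0, 60.0, 90.0, 120.0, 150.0]
```

This reproduces the 6-scan example in the text; random scan would instead draw each slope uniformly from the same interval.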
position.\nWe note that by default we use only three anchors, which is considerably fewer than existing range-free solutions [8, 4].\nImpact of the Number of Scans: In this experiment, we compare regular scan with random scan under different numbers of scans from 3 to 30 in steps of 3.\nThe number of anchors is 3 by default.\nTable 1.\nDefault Configuration Parameters\nFigure 15.\nImprovements of Sequence-Based MSP over Basic MSP\nFigure 14 (a) indicates the following: (i) as the number of scans increases, the localization error decreases significantly; for example, localization errors drop more than 60% from 3 scans to 30 scans; (ii) statistically, regular scan achieves better performance than random scan under an identical number of scans.\nHowever, the performance gap shrinks as the number of scans increases.\nThis is expected, since a large number of random angles approaches a uniform distribution.\nFigure 14 (a) also demonstrates that MSP requires only a small number of anchors to perform very well, compared to existing range-free solutions [8, 4].\nImpact of the Number of Anchors: In this experiment, we compare regular scan with random scan under different numbers of anchors from 3 to 30 in steps of 3.\nThe results shown in Figure 14 (b) indicate that (i) as the number of anchor nodes increases, the localization error decreases, and (ii) statistically, regular scan obtains better results than random scan with an identical number of anchors.\nBy combining Figures 14 (a) and 14 (b), we can conclude that MSP allows a flexible tradeoff between physical cost (anchor nodes) and soft cost (localization events).\nImpact of the Target Node Density: In this experiment, we confirm that the density of target nodes has no impact on the accuracy, which motivated the design of sequence-based MSP.\nWe compare regular scan with random scan under different numbers of target nodes from 10 to 190 in steps of 20.\nResults in Figure 14 (c) show that mean 
localization errors remain constant across different node densities.\nHowever, when the number of target nodes increases, the average maximum error increases.\nSummary: From the above experiments, we can conclude that in basic MSP, regular scans are better than random scans under different numbers of anchors and scan events.\nThis is because regular scans uniformly eliminate the map from different directions, while random scans may obtain sequences with redundant, overlapping information if two scans choose similar scanning slopes.\n8.2 Improvements of Sequence-Based MSP\nThis section evaluates the benefits of exploiting the order information among target nodes by comparing sequence-based MSP with basic MSP.\nIn this and the following sections, regular scan is used for straight-line scan event generation.\nThe purpose of using regular scan is to keep the scan events and the node sequences identical for both sequence-based MSP and basic MSP, so that the only difference between them is the sequence processing procedure.\nImpact of the Number of Scans: In this experiment, we compare sequence-based MSP with basic MSP under different numbers of scans from 3 to 30 in steps of 3.\nFigure 15 (a) indicates a significant performance improvement of sequence-based MSP over basic MSP across all scan settings, especially when the number of scans is large.\nFor example, when the number of scans is 30, errors in sequence-based MSP are about 20% of those of basic MSP.\nWe conclude that sequence-based MSP performs extremely well when there are many scan events.\nImpact of the Number of Anchors: In this experiment, we use different numbers of anchors from 3 to 30 in steps of 3.\nAs seen in Figure 15 (b), the mean error and maximum error of sequence-based MSP are much smaller than those of basic MSP.\nEspecially when there is a limited number of anchors in the system, e.g., 3 anchors, the error is almost halved by using sequence-based MSP.\nThis phenomenon has an interesting 
explanation: the cutting lines created by anchor nodes are exploited by both basic MSP and sequence-based MSP, so as the number of anchor nodes increases, anchors tend to dominate the contribution.\nTherefore the performance gap lessens.\nFigure 16.\nImprovements of Iterative MSP\nFigure 17.\nImprovements of DBE MSP\nFigure 18.\nThe Improvements of Adaptive MSP (500by80 area; max and mean errors of regular vs. adaptive scan)\nImpact of the Target Node Density: Figure 15 (c) demonstrates the benefits of exploiting order information among target nodes.\nSince sequence-based MSP makes use of the information among the target nodes, having more target nodes contributes to the overall system accuracy.\nAs the number of target nodes increases, the mean error and maximum error of sequence-based MSP decrease.\nClearly, the mean error in basic MSP is not affected by the number of target nodes, as shown in Figure 15 (c).\nSummary: From the above experiments, we can conclude that exploiting order information among target nodes can improve accuracy significantly, especially when the number of events is large and the number of anchors is small.\n8.3 Iterative MSP over Sequence-Based MSP\nIn this experiment, the same node sequences were processed iteratively multiple times.\nIn Figure 16, the two single marks are results from basic MSP, since basic MSP does not perform iterations.\nThe two curves present the performance of iterative MSP under different numbers of iterations c.\nWe note that when only a single iteration is used, this method degrades to sequence-based MSP.\nTherefore, Figure 16 compares the three methods to one another.\nFigure 16 shows that the second iteration can reduce the mean error and maximum error dramatically.\nAfter that, the performance gain gradually diminishes, especially when c > 5.\nThis is because the second iteration allows earlier scans to exploit the new boundaries created by later scans in the 
first iteration.\nSuch exploitation decays quickly over iterations.\n8.4 DBE MSP over Iterative MSP\nFigure 17, in which we augment iterative MSP with distribution-based estimation (DBE MSP), shows that DBE MSP brings statistically better performance.\nFigure 17 presents the cumulative distribution of localization errors.\nIn general, the two curves of DBE MSP lie slightly to the left of those of non-DBE MSP, which indicates that DBE MSP has a smaller statistical mean error and average maximum error than non-DBE MSP.\nWe note that because DBE is augmented on top of the best solution so far, the performance improvement is not significant.\nWhen we apply DBE to basic MSP methods, the improvement is much more significant.\nWe omit these results because of space constraints.\n8.5 Improvements of Adaptive MSP\nThis section illustrates the performance of adaptive MSP over non-adaptive MSP.\nWe note that feedback-based adaptation can be applied to all MSP methods, since it affects only the scanning angles, not the sequence processing.\nIn this experiment, we evaluated how adaptive MSP can improve the best solution so far.\nThe default angle granularity (step) for adaptive searching is 5 degrees.\nImpact of Area Shape: First, if system settings are regular, the adaptive method hardly contributes to the results.\nFor a square (regular) area, the performance of adaptive MSP and regular scan are very close.\nHowever, if the shape of the area is not regular, adaptive MSP helps to choose appropriate localization events to compensate.\nTherefore, adaptive MSP can achieve better mean and maximum errors, as shown in Figure 18 (a).\nFor example, adaptive MSP improves localization accuracy by 30% when the number of target nodes is 10.\nImpact of the Target Node Density: Figure 18 (a) shows that when the node density is low, adaptive MSP brings more benefit than when node density is high.\nThis phenomenon makes statistical sense, because the law of large numbers 
tells us that node placement approaches a truly uniform distribution when the number of nodes is increased.\nAdaptive MSP has an edge when the layout is not uniform.\nFigure 19.\nThe Mirage Test-bed (Line Scan)\nFigure 20.\nThe 20-node Outdoor Experiments (Wave)\nImpact of Candidate Angle Density: Figure 18 (b) shows that the smaller the candidate scan angle step, the better the statistical performance in terms of mean error.\nThe rationale is clear: a denser set of candidate scan angles gives adaptive MSP more opportunity to choose one approaching the optimal angle.\n8.6 Simulation Summary\nStarting from basic MSP, we have demonstrated step-by-step how four optimizations can be applied on top of each other to improve localization performance.\nIn other words, these optimizations are compatible with each other and can jointly improve the overall performance.\nWe note that our simulations were done under the assumption that the complete node sequence can be obtained without sequence flips.\nIn the next section, we present two real-system implementations that reveal and address these practical issues.\n9 System Evaluation\nIn this section, we present a system implementation of MSP on two physical test-beds.\nThe first one is called Mirage, a large indoor test-bed composed of six 4-foot by 8-foot boards, illustrated in Figure 19.\nEach board in the system can be used as an individual sub-system, which is powered, controlled and metered separately.\nThree Hitachi CP-X1250 projectors, connected through a Matrox TripleHead2Go graphics expansion box, are used to create an ultra-wide integrated display on the six boards.\nFigure 19 shows that a long tilted line is generated by the projectors.\nWe have implemented all five versions of MSP on the Mirage test-bed, running 46 MICAz motes.\nUnless mentioned otherwise, the default setting is 3 anchors and 6 scans at a scanning line speed of 8.6 feet\/s.\nIn all of our graphs, each data point represents the average value of 50 trials.\nIn 
the outdoor system, a Dell A525 speaker is used to generate 4.7 KHz sound, as shown in Figure 20.\nWe place 20 MICAz motes in the backyard of a house.\nSince the location is not completely open, sound waves are reflected, scattered and absorbed by various objects in the vicinity, causing a multi-path effect.\nIn the system evaluation, simple time synchronization mechanisms are applied on each node.\n9.1 Indoor System Evaluation\nDuring indoor experiments, we encountered several real-world problems that are not revealed in the simulation.\nFirst, the sequences obtained were partial due to misdetection and message losses.\nSecond, elements in the sequences could flip due to detection delay, uncertainty in media access, or error in time synchronization.\nWe show that these issues can be addressed by using the protection band method described in Section 7.3.\n9.1.1 On Scanning Speed and Protection Band\nIn this experiment, we studied the impact of the scanning speed and the width of the protection band on the performance of the system.\nIn general, with increasing scanning speed, nodes have less time to respond to the event and the time gap between two adjacent nodes shrinks, leading to an increasing number of partial sequences and sequence flips.\nFigure 21 shows the node flip situations for six scans with distinct angles under different scan speeds.\nThe x-axis shows the distance between the flipped nodes in the correct node sequence.\nThe y-axis shows the total number of flips in the six scans.\nThis figure tells us that a faster scan brings not only an increasing number of flips, but also longer-distance flips that require a wider protection band to prevent fatal errors.\nFigure 22 (a) shows the effectiveness of the protection band in reducing the number of unlocalized nodes.\nWhen we use a moderate scan speed (4.3 feet\/s), flipping is rare, so we can achieve 0.45 feet mean accuracy (Figure 22 (b)) with 1.6 feet maximum error (Figure 22 
(c)).\nWith increasing speeds, the protection band needs to be set to a larger value to deal with flipping.\nInteresting phenomena can be observed in Figure 22: on one hand, the protection band can sharply reduce the number of unlocalized nodes; on the other hand, the protection band enlarges the area in which a target could potentially reside, introducing more uncertainty.\nThus there is a concave curve for both mean and maximum error when the scan speed is 8.6 feet\/s.\n9.1.2 On MSP Methods and Protection Band\nIn this experiment, we show the improvements resulting from three different methods.\nFigure 23 (a) shows that a protection band of 0.35 feet is sufficient for a scan speed of 8.57 feet\/s.\nFigures 23 (b) and 23 (c) show clearly that iterative MSP (with adaptation) achieves the best performance.\nFor example, Figure 23 (b) shows that when we set the protection band at 0.05 feet, iterative MSP achieves 0.7 feet accuracy, which is 42% more accurate than the basic design.\nSimilarly, Figures 23 (b) and 23 (c) show the double-edged effect of the protection band on localization accuracy.\nFigure 21.\nNumber of Flips for Different Scan Speeds\nFigure 22.\nImpact of Protection Band and Scanning Speed (unlocalized node number, mean error and max error at scan line speed 8.57 feet\/s)\nFigure 23.\nImpact of Protection Band under Different MSP Methods\nFigure 24.\nImpact of the Number of Anchors and Scans\n9.1.3 On Number of Anchors and Scans\nIn this experiment, we show the tradeoff between hardware cost (anchors) and soft cost (events).\nFigure 24 (a) shows that with more cutting lines created by anchors, the chance of unlocalized nodes increases slightly.\nWe note that with a 0.35 feet protection band, the percentage 
of unlocalized nodes is very small, e.g., in the worst case with 11 anchors, only 2 out of 46 nodes are not localized due to flipping.\nFigures 24 (b) and 24 (c) show the tradeoff between the number of anchors and the number of scans.\nObviously, as the number of anchors increases, the error drops significantly.\nWith 11 anchors we can achieve a localization accuracy as low as 0.25 \u223c 0.35 feet, which is nearly a 60% improvement.\nSimilarly, with an increasing number of scans, the error drops significantly as well.\nWe observe about a 30% improvement across all anchor settings when we increase the number of scans from 4 to 8.\nFor example, with only 3 anchors, we can achieve 0.6-foot accuracy with 8 scans.\n9.2 Outdoor System Evaluation\nThe outdoor system evaluation contains two parts: (i) effective detection distance evaluation, which shows that the node sequence can be readily obtained, and (ii) sound propagation based localization, which shows the results of wave-propagation-based localization.\n9.2.1 Effective Detection Distance Evaluation\nWe first evaluate the sequence flip phenomenon in wave propagation.\nAs shown in Figure 25, 20 motes were placed in five groups in front of the speaker, with four nodes in each group at roughly the same distance to the speaker.\nThe gap between groups is set to 2, 3, 4 and 5 feet, respectively, in four experiments.\nFigure 26 shows the results.\nThe x-axis in each subgraph indicates the group index.\nThere are four nodes in each group (4 bars).\nThe y-axis shows the detection rank (order) of each node in the node sequence.\nAs the distance between groups increases, the number of flips in the resulting node sequence\nFigure 26.\nRanks vs. 
Distances\nFigure 27.\nLocalization Error (Sound)\nFigure 25.\nWave Detection\ndecreases.\nFor example, in the 2-foot distance subgraph, there are quite a few flips between nodes in adjacent and even non-adjacent groups, while in the 5-foot subgraph, flips between different groups disappeared in the test.\n9.2.2 Sound Propagation Based Localization\nAs shown in Figure 20, 20 motes are placed in a grid of 5 rows with 5 feet between each row and 4 columns with 4 feet between each column.\nSix 4KHz acoustic wave propagation events are generated around the mote grid by a speaker.\nFigure 27 shows the localization results using iterative MSP (3 iterations) with a protection band of 3 feet.\nThe average error of the localization results is 3 feet and the maximum error is 5 feet, with one unlocalized node.\nWe found that sequence flip in wave propagation is more severe than in the indoor, line-based test.\nThis is expected due to the high propagation speed of sound.\nCurrently we use MICAz motes, which are equipped with low-quality microphones.\nWe believe that with a better speaker and more events, the system can yield better accuracy.\nDespite the hardware constraints, the MSP algorithm still successfully localized most of the nodes with good accuracy.\n10 Conclusions\nIn this paper, we present the first work that exploits the concept of node sequence processing to localize sensor nodes.\nWe demonstrated that we could significantly improve localization accuracy by making full use of the information embedded in multiple easy-to-get one-dimensional node sequences.\nWe proposed four novel optimization methods, exploiting order and marginal distribution among non-anchor nodes as well as feedback information from early localization results.\nImportantly, these optimization methods can be used together and improve accuracy additively.\nThe practical issues of partial node sequences and sequence flips were identified and addressed in two physical 
system test-beds.\nWe also evaluated performance at scale through analysis as well as extensive simulations.\nResults demonstrate that requiring neither costly hardware on sensor nodes nor precise event distribution, MSP can achieve a sub-foot accuracy with very few anchor nodes provided sufficient events.","keyphrases":["wireless sensor network","node local","local","event distribut","multi-sequenc posit","massiv uva-base deploment","spatiotempor correl","rang-base approach","distribut-base locat estim","listen-detect-assembl-report protocol","margin distribut","node sequenc process"],"prmu":["P","P","P","P","M","U","U","U","M","U","M","R"]} {"id":"J-60","title":"On Decentralized Incentive Compatible Mechanisms for Partially Informed Environments","abstract":"Algorithmic Mechanism Design focuses on Dominant Strategy Implementations. The main positive results are the celebrated Vickrey-Clarke-Groves (VCG) mechanisms and computationally efficient mechanisms for severely restricted players (single-parameter domains). As it turns out, many natural social goals cannot be implemented using the dominant strategy concept [35, 32, 22, 20]. This suggests that the standard requirements must be relaxed in order to construct general-purpose mechanisms. We observe that in many common distributed environments computational entities can take advantage of the network structure to collect and distribute information. We thus suggest a notion of partially informed environments. Even if the information is recorded with some probability, this enables us to implement a wider range of social goals, using the concept of iterative elimination of weakly dominated strategies. As a result, cooperation is achieved independent of agents' belief. 
As a case study, we apply our methods to derive Peer-to-Peer network mechanism for file sharing.","lvl-1":"On Decentralized Incentive Compatible Mechanisms for Partially Informed Environments \u2217 Ahuva Mu``alem School of Engineering and Computer Science The Hebrew University of Jerusalem ahumu@cs.huji.ac.il ABSTRACT Algorithmic Mechanism Design focuses on Dominant Strategy Implementations.\nThe main positive results are the celebrated Vickrey-Clarke-Groves (VCG) mechanisms and computationally efficient mechanisms for severely restricted players (single-parameter domains).\nAs it turns out, many natural social goals cannot be implemented using the dominant strategy concept [35, 32, 22, 20].\nThis suggests that the standard requirements must be relaxed in order to construct general-purpose mechanisms.\nWe observe that in many common distributed environments computational entities can take advantage of the network structure to collect and distribute information.\nWe thus suggest a notion of partially informed environments.\nEven if the information is recorded with some probability, this enables us to implement a wider range of social goals, using the concept of iterative elimination of weakly dominated strategies.\nAs a result, cooperation is achieved independent of agents'' belief.\nAs a case study, we apply our methods to derive Peer-to-Peer network mechanism for file sharing.\nCategories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics General Terms Design, Algorithms 1.\nINTRODUCTION Recently, global networks have attracted widespread study.\nThe emergence of popular scalable shared networks with self-interested entities - such as peer-to-peer systems over the Internet and mobile wireless communication ad-hoc networks - poses fundamental challenges.\nNaturally, the study of such giant decentralized systems involves aspects of game theory [32, 34].\nIn particular, the subfield of Mechanism Design deals with the construction of mechanisms: 
for a given social goal the challenge is to design rules for interaction such that selfish behavior of the agents will result in the desired social goal [23, 33].\nAlgorithmic Mechanism Design (AMD) focuses on efficiently computable constructions [32].\nDistributed Algorithmic Mechanism Design (DAMD) studies mechanism design in inherently decentralized settings [30, 12].\nThe standard model assumes rational agents with quasi-linear utilities and private information, playing dominant strategies.\nThe solution concept of dominant strategies - in which each player has a best response strategy regardless of the strategy played by any other player - is well suited to the assumption of private information, in which each player is not assumed to have knowledge or beliefs regarding the other players.\nThe appropriateness of this set-up stems from the strength of the solution concept, which complements the weak information assumption.\nMany mechanisms have been constructed using this set-up, e.g., [1, 4, 6, 11, 14, 22].\nMost of these apply to severely-restricted cases (e.g., single-item auctions with no externalities) in which a player``s preference is described by only one parameter (single-parameter domains).\nTo date, Vickrey-Clarke-Groves (VCG) mechanisms are the only known general method for designing dominant strategy mechanisms for general domains of preferences.\nHowever, in distributed settings without available subsidies from outside sources, VCG mechanisms cannot be accepted as valid solutions due to a serious lack of budget balance.\nAdditionally, for some domains of preferences, VCG mechanisms and weighted VCG mechanisms are faced with computational hardness [22, 20].\nFurther limitations of the set-up are discussed in subsection 1.3.\nIn most distributed environments, players can take advantage of the network structure to collect and distribute information about other players.\nThis paper thus studies the effects of relaxing the private information 
assumption.\nOne model that has been extensively studied recently is the Peer-to-Peer (P2P) network.\nA P2P network is a distributed network with no centralized authority, in which the participants share their individual resources (e.g., processing power, storage capacity, bandwidth and content).\nThe aggregation of such resources provides inexpensive computational platforms.\nThe most popular P2P networks are those for sharing media files, such as Napster, Gnutella, and Kazaa.\nRecent work on P2P incentives includes micropayment methods [15] and reputation-based methods [9, 13].\nThe following description of a P2P network scenario illustrates the relevance of our relaxed informational assumption.\nExample 1.\nConsider a Peer-to-Peer network for file sharing.\nWhenever agent B downloads a file from agent A, all peers along the routing path know that B has loaded the file.\nThey can record this information about agent B.\nIn addition, they can distribute this information.\nHowever, it is impossible to record all the information everywhere.\nFirst, such duplication induces huge costs.\nSecond, as agents dynamically enter and exit from the network, the information might not always be available.\nAnd so it seems natural to consider environments in which the information is locally recorded, that is, the information is recorded in the closest neighborhood with some probability p.\nIn this paper we shall see that if the information is available with some probability, then this enables us to implement a wider range of social goals.\nAs a result, cooperation is achieved independent of agents' beliefs.\nThis demonstrates that in some computational contexts our approach is far less demanding than the Bayesian approach (which assumes that players' types are drawn according to some identified probability density function).\n1.1 Implementations in Complete Information Set-ups In complete information environments, each agent is informed about everyone else.\nThat is, each
agent observes his own preference and the preferences of all other agents.\nHowever, no outsider can observe this information - in particular, neither the mechanism designer nor the court.\nMany positive results were shown for such arguably realistic settings.\nFor recent surveys see [25, 27, 18].\nMoore and Repullo implement a large class of social goals using sequential mechanisms with a small number of rounds [28].\nThe concept they use is subgame-perfect implementation (SPE).\nThe SPE-implementability concept seems natural for the following reasons.\nFirst, the designed mechanisms usually have non-artificial constructs and a small strategy space.\nAs a result, it is straightforward for a player to compute his strategy.1 Second, sequential mechanisms avoid simultaneous moves, and thus can be considered for distributed networks.\nThird, the constructed mechanisms are often decentralized (i.e., lacking a centralized authority or designer) and budget-balanced (i.e., transfers always sum up to zero).\nThis happens essentially if there are at least three players, and a direct network link between any two agents.\nFinally, Moore and Repullo observed that they actually use a relaxed complete information assumption: it is only required that for every player there exists only one other player who is informed about him.\n1 Interestingly, in real life players do not always use their subgame perfect strategies.\nOne such widely studied case is the Ultimatum Bargaining 2-person game.\nIn this simple game, the proposer first makes an offer of how to divide a certain known sum of money, and the responder either agrees or refuses; in the latter case both players earn zero.\nSomewhat surprisingly, experiments show that the responder often rejects the suggested offer, even if it is bounded away from zero and the game is played only once (see e.g. [38]).\n1.2 Implementations in Partially Informed Set-ups and Our Results The complete information assumption is realistic for small groups of players, but not in general.\nIn this paper we consider players that are informed about each other with some probability.\nMore formally, we say that agent B is p-informed about agent A, if B knows the type of A with probability p. For such partially-informed environments, we show how to use the solution concept of iterative elimination of weakly dominated strategies.\nWe demonstrate this concept through some motivating examples that (i) seem natural in distributed settings and (ii) cannot be implemented in dominant strategies even if there is an authorized center with a direct connection to every agent or even if players have single-parameter domains.\n1.\nWe first show how the subgame perfect techniques of Moore and Repullo [28] can be applied to p-informed environments and further adjusted to the concept of iterative elimination of weakly dominated strategies (for large enough p).\n2.\nWe then suggest a certificate-based challenging method that is more natural in computerized p-informed environments and different from the one introduced by Moore and Repullo [28] (for p ∈ (0, 1]).\n3.\nWe consider implementations in various network structures.\nAs a case study we apply our methods to derive: (1) a simplified Peer-to-Peer network for file sharing with no payments in equilibrium.\nOur approach is (agent, file)-specific.\n(2) a web-cache budget-balanced and economically efficient mechanism.\nOur mechanisms use reasonable punishments that inversely
depend on p. And so, if the fines are large then small p is enough to induce cooperation.\nEssentially, large p implies a large amount of recorded information.\n1.2.1 Malicious Agents Decentralized mechanisms often utilize punishing outcomes.\nAs a result, malicious players might cause severe harm to others.\nWe suggest a quantified notion of a malicious player, who benefits from his own gained surplus and from harm caused to others.\n[12] suggests several categories to classify non-cooperating players.\nOur approach is similar to [7] (and the references therein), who considered such players independently in a different context.\nWe show a simple decentralized mechanism in which q-malicious players cooperate and in particular, do not use their punishing actions in equilibrium.\n1.3 Dominant Strategy Implementations In this subsection we shall refer to some recent results demonstrating that the set-up of private information with the concept of dominant strategies is restrictive in general.\nFirst, Roberts' classical impossibility result shows that if players' preferences are not restricted and there are at least 3 different outcomes, then every dominant-strategy mechanism must be weighted VCG (with the social goal that maximizes the weighted welfare) [35].\nFor slightly-restricted preference domains, it is not known how to turn efficiently computable algorithms into dominant strategy mechanisms.\nThis was observed and analyzed in [32, 22, 31].\nRecently, [20] extended Roberts' result to some leading examples.\nThey showed that under mild assumptions any dominant strategy mechanism for a variety of Combinatorial Auctions over multi-dimensional domains must be almost weighted VCG.\nAdditionally, it turns out that the dominant strategy requirement implies that the social goal must be monotone [35, 36, 22, 20, 5, 37].\nThis condition is very restrictive, as many desired natural goals are non-monotone.2 Several recent papers consider relaxations of the dominant strategy
concept: [32, 1, 2, 19, 16, 17, 26, 21].\nHowever, most of these positive results either apply to severely restricted cases (e.g., single-parameter, 2 players) or amount to VCG or almost VCG mechanisms (e.g., [19]).\nRecently, [8, 3] considered implementations for generalized single-parameter players.\nOrganization of this paper: In section 2 we illustrate the concepts of subgame perfect and iterative elimination of weakly dominated strategies in completely-informed and partially-informed environments.\nIn section 3 we show a mechanism for Peer-to-Peer file sharing networks.\nIn section 4 we apply our methods to derive a web cache mechanism.\nFuture work is briefly discussed in section 5.\n2.\nMOTIVATING EXAMPLES In this section we examine the concepts of subgame perfect and iterative elimination of weakly dominated strategies for completely informed and p-informed environments.\nWe also present the notion of q-maliciousness and some other related considerations through two illustrative examples.\n2.1 The Fair Assignment Problem Our first example is an adaptation, to a computerized context, of an ancient procedure to ensure that the wealthiest man in Athens would sponsor a theatrical production, known as the Choregia [27].\nIn the fair assignment problem, Alice and Bob are two workers, and there is a new task to be performed.\nTheir goal is to assign the task to the least loaded worker without any monetary transfers.\nThe informational assumption is that Alice and Bob know both loads and the duration of the new task.3 2 E.g., minimizing the makespan within a factor of 2 [32] and Rawls' Rule over some multi-dimensional domains [20].\n3 At first glance one might ask why the completely informed agents could not simply sign a contract, specifying the desired goal.\nSuch a contract is sometimes infeasible due to the fact that the true state cannot be observed by outsiders, especially not the court.\nClaim 1.\nThe fair assignment goal cannot be implemented in dominant strategies.4 2.1.1 Basic Mechanism The following simple mechanism implements this goal in subgame perfect equilibrium.\n• Stage 1: Alice either agrees to perform the new task or refuses.\n• Stage 2: If she refuses, Bob has to choose between: - (a) Performing the task himself.\n- (b) Exchanging his load with Alice and performing the new task as well.\nLet L^T_A, L^T_B be the true loads of Alice and Bob, and let t > 0 be the load of the new task.\nAssume that load exchanging takes zero time and cost.\nWe shall see that the basic mechanism achieves the goal in a subgame perfect equilibrium.\nIntuitively this means that in equilibrium each player will choose his best action at each point he might reach, assuming similar behavior of others, and thus every SPE is a Nash equilibrium.\nClaim 2.\n([27]) The task is assigned to the least loaded worker in subgame perfect equilibrium.\nProof.\nBy a backward induction argument (look forward and reason backward), consider the following cases: 1.\nL^T_B ≤ L^T_A.\nIf stage 2 is reached then Bob will not exchange.\n2.\nL^T_A < L^T_B < L^T_A + t.\nIf stage 2 is reached Bob will exchange, and this is what Alice prefers.\n3.\nL^T_A + t ≤ L^T_B.\nIf stage 2 is reached then Bob would exchange; as a result it is strictly preferable for Alice to perform the task.\nNote that the basic mechanism does not use monetary transfers at all and is decentralized in the sense that no third party is needed to run the procedure.\nThe goal is achieved in equilibrium (ties are broken in favor of Alice).\nHowever, in the second case an exchange does occur at an equilibrium point.\nRecall the unrealistic assumption that load exchange takes zero time and cost.\nIntroducing fines, the next mechanism overcomes this drawback.\n2.1.2 Elicitation Mechanism In this subsection we shall see a centralized mechanism for the fair assignment goal without load exchange in equilibrium.\nThe additional assumptions are as follows.\nThe cost of performing a load of
duration d is exactly d.\nWe assume that the duration t of the new task is < T.\nThe payoffs of the utility-maximizing agents are quasilinear.\nThe following mechanism is an adaptation of Moore and Repullo's elicitation mechanism [28].5 • Stage 1: (Elicitation of Alice's load) Alice announces LA.\nBob announces L'A ≤ LA.\nIf L'A = LA (Bob agrees) goto the next Stage.\nOtherwise (Bob challenges), Alice is assigned the task.\nShe then has to choose between: - (a) Transferring her original load to Bob and paying him L'A − 0.5 · min{ε, LA − L'A}.\nAlice pays ε to the mechanism.\nBob pays the fine of T + ε to the mechanism.\n- (b) No load transfer.\nAlice pays ε to Bob.\nSTOP.\n• Stage 2: The elicitation of Bob's load is similar to Stage 1 (switching the roles of Alice and Bob).\n• Stage 3: If LA < LB Alice is assigned the task, otherwise Bob.\nSTOP.\nObserve that Alice is assigned the task and fined with ε whenever Bob challenges.\nWe shall see that the bonus of ε is paid to a challenging player only in out-of-equilibrium cases.\nClaim 3.\nIf the mechanism stops at Stage 3, then the payoff of each agent is at least −t and at most 0.\nProposition 1.\nIt is a subgame perfect equilibrium of the elicitation mechanism to report the true load, and to challenge with the true load only if the other agent overreports.\nProof.\nAssume w.l.o.g. that the elicitation of Alice's load is done after Bob's, and that Stage 2 is reached.\nIf Alice truly reports LA = L^T_A, Bob strictly prefers to agree.\nOtherwise, if Bob challenges, Alice would always strictly prefer to transfer (as in this case Bob would perform her load for a smaller cost); as a result Bob would pay T + ε to the mechanism.\nThis punishing outcome is less preferable than the normal outcome of Stage 3 achieved had he agreed.\nIf Alice misreports LA > L^T_A, then Bob can ensure himself the bonus ε (which is always strictly preferable to reaching Stage 3) by challenging with L'A = L^T_A, and so whenever Bob gets the bonus Alice gains the worst of all payoffs.\nReporting a lower load LA < L^T_A is not beneficial for Alice.\nIn this case, Bob would strictly prefer to agree (and not to announce L'A < LA, as he is limited to challenging with a smaller load than what she announces).\nThus such misreporting can only increase the possibility that she is assigned the task.\nAnd so there is no incentive for Alice to do so.\nAll together, Alice would prefer to report the truth in this stage.\nAnd so Stage 2 would not abnormally end by STOP, and similarly Stage 1.\nObserve that the elicitation mechanism is almost balanced: in all outcomes no money comes in or out, except for the non-equilibrium outcome (a), in which both players pay fines to the mechanism.\n5 In [28], if an agent misreports his type then it is always beneficial to the other agent to challenge.\nIn particular, even if the agent reports a lower load.\n2.1.3 Elicitation Mechanism for Partially Informed Agents In this subsection we consider partially informed agents.\nFormally: Definition 1.\nAn agent A is p-informed about agent B, if A knows the type of B with probability p (independently of what B knows).\nIt turns out that a version of the elicitation mechanism works for this relaxed information assumption, if we use the concept of iterative elimination of weakly dominated strategies.6 We replace the fixed fine of ε in the elicitation mechanism with the fine βp = max{L, ((1 − p)/(2p − 1)) · T} + ε, and assume the bounds L^T_A, L^T_B ≤ L.
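As a quick sanity check, the inequality chain used below (overreporting is eliminated once p > 0.5) can be verified numerically. The following Python sketch is ours, not part of the paper; the concrete values for L, T, t and ε are illustrative assumptions, and the two bounds mirror the chain p·(−t − βp) < p·(−t) + (1 − p)·(−t − βp).

```python
# Sketch: check that the fine beta_p makes overreporting a dominated strategy
# for p > 0.5.  The values of L, T, t, eps below are illustrative assumptions.

def beta(p, L, T, eps):
    """Fine beta_p = max{L, ((1 - p) / (2p - 1)) * T} + eps, defined for p > 0.5."""
    return max(L, (1 - p) / (2 * p - 1) * T) + eps

def check_dominance(p, L, T, t, eps):
    """Compare an upper bound on Alice's payoff from overreporting with a
    lower bound on her payoff from truthful reporting (Proposition 2)."""
    b = beta(p, L, T, eps)
    lie_upper = p * (-t - b) + (1 - p) * 0       # informed Bob always challenges
    truth_lower = p * (-t) + (1 - p) * (-t - b)  # worst case: uninformed Bob challenges
    return lie_upper < truth_lower

# Truthful reporting beats overreporting for any p > 0.5 and t < T:
assert all(check_dominance(p, L=10.0, T=5.0, t=4.9, eps=0.01)
           for p in (0.51, 0.6, 0.75, 0.9, 1.0))
```

Note that for p close to 0.5 the term ((1 − p)/(2p − 1))·T dominates and the fine grows without bound, which matches the remark in subsection 1.2 that smaller fines require more recorded information.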
Proposition 2.\nIf all agents are p-informed, p > 0.5, the elicitation mechanism(βp) implements the fair assignment goal with the concept of iterative elimination of weakly dominated strategies.\nThe strategy of each player is to report the true load and to challenge with the true load if the other agent overreports.\nProof.\nAssume w.l.o.g. that the elicitation of Alice's load is done after Bob's, and that Stage 2 is reached.\nFirst observe that underreporting the true value is a dominated strategy, whether Bob is not informed and mistakenly challenges with a lower load (as βp ≥ L) or not, or even if t is very small.\nNow we shall see that overreporting her value is a dominated strategy, as well.\nAlice's expected payoff gained by misreporting ≤ p · (payoff if she lies and Bob is informed) + (1 − p) · (max payoff if Bob is not informed) ≤ p · (−t − βp) < p · (−t) + (1 − p) · (−t − βp) ≤ p · (min payoff of a true report if Bob is informed) + (1 − p) · (min payoff if Bob is not informed) ≤ Alice's expected payoff if she truly reports.\nThe term (−t − βp) in the left hand side is due to the fact that if Bob is informed he will always prefer to challenge.\nIn the right hand side, if he is informed, then challenging is a dominated strategy, and if he is not informed the worst harm he can do is to challenge.\nThus in Stage 2 Alice will report her true load.\nThis implies that challenging without being informed is a dominated strategy for Bob.\nThis argument can be reasoned also for the first stage, when Bob reports his value.\nBob knows the maximum payoff he can gain is at most zero, since he cannot expect to get the bonus in the next stage.\n2.1.4 Extensions The elicitation mechanism for partially informed agents is rather general.\nAs in [28], we need the capability to judge between two distinct declarations in the elicitation rounds, and upper and lower bounds based on the possible payoffs derived from the last stage.6 In addition, for p-informed environments, some structure is needed to ensure that underbidding is a dominated strategy.\nThe Choregia-type mechanisms can be applied to more than 2 players with the same number of stages: the player in the first stage can simply point out the name of the wealthiest agent.\nSimilarly, the elicitation mechanisms can be extended in a straightforward manner.\nThese mechanisms can be budget-balanced, as some player might replace the role of the designer and collect the fines, as observed in [28].\n6 A strategy si of player i is weakly dominated if there exists s'i such that (i) the payoff gained by s'i is at least as high as the payoff gained by si, for all strategies of the other players and all preferences, and (ii) there exist a preference and a combination of strategies for the other players such that the payoff gained by s'i is strictly higher than the payoff gained by si.\nOpen Problem 1.\nDesign a decentralized budget-balanced mechanism with reasonable fines for independently p-informed n players, where p ≤ 1 − (1/2)^(1/(n−1)).\n2.2 Seller and Buyer Scenario A player might cause severe harm to others by choosing a non-equilibrium outcome.\nIn the mechanism for the fair assignment goal, an agent might maliciously challenge even if the other agent truly reports his load.\nIn this subsection we consider such malicious scenarios.\nFor the ease of exposition we present a second example.\nWe demonstrate that equilibria remain unchanged even if players are malicious.\nIn the seller-buyer example there is one item to be traded and two possible future states.\nThe goal is to sell the item for the average low price pl = (ls + lb)/2 in state L, and the higher price ph = (hs + hb)/2 in the other state H, where ls is the seller's cost and lb is the buyer's value in state L, and similarly hs, hb in H.\nThe players fix the prices without knowing what will be the future state.\nAssume that ls < hs < lb < hb, and
that trade can occur at both prices (that is, pl, ph ∈ (hs, lb)).\nOnly the players can observe the realization of the true state.\nThe payoffs are of the form ub = x · vb − tb, us = ts − x · vs, where the binary variable x indicates whether trade occurred, vb, vs denote the buyer's value and the seller's cost in the realized state, and tb, ts are the transfers.\nConsider the following decentralized trade mechanism.\n• Stage 1: If the seller reports H goto Stage 2.\nOtherwise, trade at the low price pl.\nSTOP.\n• Stage 2: The buyer has to choose between: - (a) Trade at the high price ph. - (b) No trade and the seller pays ∆ to the buyer.\nClaim 4.\nLet ∆ = lb − ph + ε.\nThe unique subgame perfect equilibrium of the trade mechanism is to report the true state in Stage 1 and trading if Stage 2 is reached.\nNote that the outcome (b) is never chosen in equilibrium.\n2.2.1 Trade Mechanism for Malicious Agents The buyer might maliciously punish the seller by choosing the outcome (b) when the true state is H.\nThe following notion quantifies the consideration that a player is not indifferent to the private surpluses of others.\nDefinition 2.\nA player is q-malicious if his payoff equals: (1 − q) · (his private surplus) − q · (summation of others' surpluses), q ∈ [0, 1].\nThis definition appeared independently in [7] in a different context.\nWe shall see that the traders would avoid such bad behavior if they are q-malicious, where q < 0.5, that is, if their non-indifference impact is bounded by 0.5.\nEquilibria outcomes remain unchanged, and so cooperation is achieved as in the original case of non-malicious players.\nConsider the trade mechanism with pl = (1 − q) · hs + q · lb, ph = q · hs + (1 − q) · lb, ∆ = (1 − q) · (hb − lb − ε).\nNote that pl < ph for q < 0.5.\nClaim 5.\nIf q < 0.5, then the unique subgame perfect equilibrium for q-malicious players remains unchanged.\nProof.\nBy backward induction we consider two cases.\nIn state H, the q-malicious buyer would prefer to trade if (1 − q)(hb − ph) + q(hs − ph) > (1 − q)∆ + q∆.\nIndeed, (1 − q)hb + qhs > ∆ + ph. Trivially, the seller prefers to trade at the higher price, (1 − q)(pl − hs) + q(pl − hb) < (1 − q)(ph − hs) + q(ph − hb).\nIn state L the buyer prefers the no trade outcome, as (1 − q)(lb − ph) + q(ls − ph) < ∆.\nThe seller prefers to trade at the low price, as (1 − q)(pl − ls) + q(pl − lb) > 0 > −∆.\n2.2.2 Discussion No mechanism can Nash-implement this trading goal if the only possible outcomes are trade at pl and trade at ph. To see this, it is enough to consider normal forms (as any extensive form mechanism can be presented as a normal one).\nConsider a matrix representation, where the seller is the row player and the buyer is the column player, in which every entry includes an outcome.\nSuppose there is an equilibrium entry for the state L.\nThe associated column must be all pl, otherwise the seller would have an incentive to deviate.\nSimilarly, the associated row of the H equilibrium entry must be all ph (otherwise the buyer would deviate) - a contradiction.7 8 The buyer prefers pl and the seller ph, and so the preferences are identical in both states.\nHence reporting preferences over outcomes is not enough - players must supply additional information.\nThis is captured by outcome (b) in the trade mechanism.\nIntuitively, if a goal is not Nash-implementable we need to add more outcomes.\nThe drawback is that some new additional equilibria must be ruled out.\nE.g., an additional Nash equilibrium for the trade mechanism is (trade at pl, (b)).\nThat is, the seller chooses to trade at the low price in either state, and the buyer always chooses the no trade option that fines the seller, if the second stage is reached.\nSuch a buyer's threat is not credible, because if the mechanism is played only once, and Stage 2 is reached in state H, the buyer would strictly decrease his payoff if he chooses (b).\nClearly,
this is not a subgame perfect equilibrium.\nAlthough each extensive game-form is strategically equivalent to a normal form one, the extensive form representation places more structure, and so it seems plausible that the subgame perfect equilibrium will be played.9 7 Formally, this goal is not Maskin monotonic, a necessary condition for Nash-implementability [24].\n8 A similar argument applies for the Fair Assignment Problem.\n9 Interestingly, it is straightforward to construct a sequential mechanism with a unique SPE, and an additional NE with a strictly larger payoff for every player.\n3.\nPEER-TO-PEER NETWORKS In this section we describe a simplified Peer-to-Peer network for file sharing, without payments in equilibrium, using a certificate-based challenging method.\nIn this challenging method - as opposed to [28] - an agent that challenges cannot harm other agents, unless he provides a valid certificate.\nIn general, if agent B copied a file f from agent A, then agent A knows that agent B holds a copy of the file.\nWe denote such information as a certificate(B, f) (we shall omit cryptographic details).\nSuch a certificate can be recorded and distributed along the network, and so we can treat each agent holding the certificate as an informed agent.\nAssumptions: We assume a homogeneous system with files of equal size.\nThe benefit each agent gains by holding a copy of any file is V .\nThe only cost each agent has is the uploading cost C (incurred while transferring a file to an immediate neighbor).\nAll other costs are negligible (e.g., storing the certificates, forwarding messages, providing acknowledgements, digital signatures, etc.).\nLet upA, downA be the numbers of agent A's uploads and downloads if he always cooperates.\nWe assume that each agent A enters the system if upA · C < downA · V .\nEach agent has a quasilinear utility and only cares about his current bandwidth usage.\nIn particular, he ignores future scenarios (e.g., whether forwarding or dropping of a packet might affect future demand).\n3.1 Basic Mechanism We start with a mechanism for a network with 3 p-informed agents: B, A1, A2.\nWe assume that B is directly connected to A1 and A2.\nIf B has the certificate(A1, f), then he can apply directly to A1 and request the file (if A1 refuses, then B can go to court).\nThe following basic sequential mechanism is applicable whenever agent B is not informed and still would like to download the file if it exists in the network.\nNote that this goal cannot be implemented in dominant strategies without payments (similar to Claim 1, where the type of each agent here is the set of files he holds).\nDefine tA,B to be the monetary amount that agent A should transfer to B. • Stage 1: Agent B requests the file f from A1.\n- If A1 replies yes then B downloads the file from A1.\nSTOP.\n- Otherwise, agent B sends A1's no reply to agent A2.\n∗ If A2 declares agree then goto the next stage.\n∗ Else, A2 sends a certificate(A1, f) to agent B.
· If the certificate is correct then tA1,A2 = βp.\nSTOP.\n· Else tA2,A1 = |C| + ε.\nSTOP.\n• Stage 2: Agent B requests the file f from A2.\nSwitch the roles of the agents A1, A2.\nClaim 6.\nThe basic mechanism is budget-balanced (transfers always sum to zero) and decentralized.\nTheorem 1.\nLet βp = |C|/p + ε, p ∈ (0, 1].\nA strategy that survives iterative elimination of weakly dominated strategies is to reply yes if Ai holds the file, and to challenge only with a valid certificate.\nAs a result, B downloads the file if some agent holds it, in equilibrium.\nThere are no payments or transfers in equilibrium.\nProof.\nClearly if the mechanism ends without challenging: −C ≤ u(Ai) ≤ 0.\nAnd so, challenging with an invalid certificate is always a dominated strategy.\nNow, when Stage 2 is reached, A2 is the last to report whether he has the file.\nIf A2 has the file, misreporting is a weakly dominated strategy, whether A1 is informed or not: A2's expected payoff gained by misreporting no ≤ p · (−βp) + (1 − p) · 0 < −C ≤ A2's payoff if she reports yes.\nThis argument can be reasoned also for Stage 1, when A1 reports whether he has the file.\nA1 knows that A2 will report yes if and only if she has the file in the next stage, and so the maximum payoff he can gain is at most zero, since he cannot expect to get a bonus.\n3.2 Chain Networks In a chain network, agent B is directly connected to A1, and Ai is directly connected to agent Ai+1.\nAssume that we have an acknowledgment protocol to confirm the receipt of a particular message.\nTo avoid message dropping, we add the fine (βp + 2ε) to be paid by an agent who hasn't properly forwarded a message.\nThe chain mechanism follows: • Stage i: Agent B forwards a request for the file f to Ai (through {Ak}k≤i).\n• If Ai reports yes, then B downloads f from Ai.\nSTOP.\n• Otherwise Ai reports no.\nIf Aj sends a certificate(Ak, f) to
B, (j, k ≤ i), then - If certificate(Ak, f) is correct, then t(Ak, Aj) = βp.\nSTOP.\n- Else, t(Aj, Ak) = C + ε.\nSTOP.\nIf Ai reports that he has no copy of the file, then any agent in between might challenge.\nUsing digital signatures and acknowledgements, observe that every agent must forward each message, even if it contains a certificate showing that he himself has misreported.\nWe use the same fine, βp, as in the basic mechanism, because the protocol might end at stage 1 (clearly, the former analysis still applies, since the actual p increases with the number of players).\n3.3 Network Mechanism In this subsection we consider general network structures.\nWe need the assumption that there is a ping protocol that checks whether a neighboring agent is on-line or not (that is, an on-line agent cannot hide himself).\nTo limit the amount of information to be recorded, we assume that an agent is committed to keep any downloaded file for at least one hour, and so certificates are valid for a limited amount of time.\nWe assume that each agent has a digitally signed listing of his current immediate neighbors.\nAs in real P2P file sharing applications, we restrict each request for a file to be forwarded at most r times (that is, downloads are possible only inside a neighborhood of radius r).\nThe network mechanism utilizes the chain mechanism in the following way: When agent B requests a file from agent A (at most r − 1 far), then A sends to B the list of his neighbors and the output of the ping protocol to all of these neighbors.\nAs a result, B can explore the network.\nRemark: In this mechanism we assumed that the environment is p-informed.\nAn important design issue that is not addressed here is the incentives for the information propagation phase.\n4.\nWEB CACHE Web caches are a widely used tool to improve overall system efficiency by allowing fast local access.\nThey were listed in [12] as a challenging application of Distributed Algorithmic Mechanism Design.\nNisan [30] considered a single cache shared by strategic agents.\nIn this problem, agent i gains the value v^T_i if a particular item is loaded to the local shared cache.\nThe efficient goal is to load the item if and only if Σ_i v^T_i ≥ C, where C is the loading cost.\nThis goal reduces to the public project problem analyzed by Clarke [10].\nHowever, it is well known that this mechanism is not budget-balanced (e.g., if the valuation of each player is C, then everyone pays zero).\nIn this section we suggest informational and environmental assumptions for which we describe a decentralized budget-balanced efficient mechanism.\nWe consider environments for which the future demand of each agent depends on past demand.\nThe underlying informational and environmental requirements are as follows.\n1.\nAn agent can read the content of a message only if he is the target node (even if he has to forward the message as an intermediate node of some routing path).\nAn agent cannot initiate a message on behalf of other agents.\n2.\nAn acknowledgement protocol is available, so that every agent can provide a certificate indicating that he handled a certain message properly.\n3.\nNegligible costs: we assume p-informed agents, where p is such that the agent's induced cost for keeping records of information is negligible.\nWe also assume that the cost incurred by sending and forwarding messages is negligible.\n4.\nLet qi(t) denote the number of loading requests agent i initiated for the item during the time slot t.\nWe assume that v^T_i(t), the value for caching the item in the beginning of slot t, depends only on the most recent slot; formally v^T_i(t) = max{Vi(qi(t − 1)), C}, where Vi(·) is a non-decreasing real function.\nIn addition, Vi(·) is common knowledge among the players.\n5.\nThe network is homogeneous in the sense that if agent j happens to handle k requests initiated by agent i during the time slot t, then qi(t) = kα, where α
depends on the routing protocol and the environment (α might be smaller than 1 if each request is flooded several times). We assume that the only way agent i can affect the true qi(t) is by superficially increasing his demand for the cached item, but not the other way around (that is, an agent's loss, incurred by giving up a necessary request for the item, is not negligible).

The first requirement is to avoid free riding, and also to avoid the case in which an agent superficially increases the demand of others and as a result decreases his own demand. The second requirement is to avoid the case in which an agent who gets a routing request for the item records it and then drops it. The third is to ensure that the environment stays well informed. In addition, if the forwarding cost is negligible, each agent cooperates and forwards messages, as he would not like to decrease the future demand (which monotonically depends on the current time slot, as assumed in the fourth requirement) of some other agent. Given that the payments are increasing with the declared values, the fourth and fifth requirements ensure that an agent would not increase his demand superficially, and so qi(t) is the true demand.

The following Web-Cache Mechanism implements the efficient goal that shares the cost proportionally. For simplicity it is described for two players, and w.l.o.g. vT_i(t) equals the number of requests initiated by i and observed by any informed j (that is, α = 1 and Vi(qi(t − 1)) = qi(t − 1)).

• Stage 1 (elicitation of vT_A(t)): Alice announces vA. Bob announces v'A ≥ vA. If v'A = vA, go to the next stage. Otherwise (Bob challenges):
- If Bob provides v'A valid records, then Alice pays C to finance the loading of the item into the cache. She also pays βp to Bob. STOP.
- Otherwise, Bob finances the loading of the item into the cache. STOP.

• Stage 2: The elicitation of vT_B(t) is done analogously.

• Stage 3: If vA + vB < C, then
STOP. Otherwise, load the item into the cache; Alice pays pA = (vA / (vA + vB)) · C, and Bob pays pB = (vB / (vA + vB)) · C.

Claim 7. It is a dominated strategy to overreport the true value.

Proof. Let vT_A < vA. There are two cases to consider:

• If vT_A + vB < C and vA + vB ≥ C: we need to show that if the mechanism stops normally, Alice pays more than vT_A, that is, (vA / (vA + vB)) · C > vT_A. Indeed, vA · C > vA · (vT_A + vB) > vT_A · (vA + vB).

• If vT_A + vB ≥ C, then clearly vA / (vA + vB) > vT_A / (vT_A + vB).

Theorem 2. Let βp = max{0, ((1 − 2p)/p) · C} + ε, p ∈ (0, 1]. A strategy that survives iterative elimination of weakly dominated strategies is to report the truth and to challenge only when the agent is informed. The mechanism is efficient, budget-balanced, and exhibits consumer sovereignty, no positive transfers and individual rationality^10.

Proof. Challenging without being informed (that is, without providing enough valid records) is always a dominated strategy in this mechanism. Now, assume w.l.o.g.
Alice is the last to report her value. Alice's expected payoff gained by underreporting ≤ p · (−C − βp) + (1 − p) · C < p · 0 + (1 − p) · 0 ≤ Alice's expected payoff if she honestly reports. The right-hand side equals zero, as the participation costs are negligible. Reasoning back, Bob cannot expect to get the bonus, and so misreporting is a dominated strategy for him.

^10 See [29] or [12] for exact definitions.

5. CONCLUDING REMARKS

In this paper we have seen a new partial informational assumption, and we have demonstrated its suitability to networks in which computational agents can easily collect and distribute information. We then described some mechanisms using the concept of iterative elimination of weakly dominated strategies. Some issues for future work include:

• As we have seen, the implementation issue in p-informed environments is straightforward: it is easy to construct incentive compatible mechanisms even for non-single-parameter cases. The challenge is to find more realistic scenarios in which the partial informational assumption is applicable.

• Mechanisms for information propagation and maintenance. In our examples we chose p such that the maintenance cost over time is negligible. However, the dynamics of the general case are delicate: an agent can use the recorded information to eliminate data that is not likely to be needed, in order to decrease his maintenance costs. As a result, the probability that the environment is informed decreases, and selfish agents would not cooperate. Incentives for information propagation should be considered as well (e.g., for P2P file-sharing networks).

• It seems that some social choice goals cannot be implemented if each player is at least 1/n-malicious (where n is the number of players). It would be interesting to identify these cases.

Acknowledgements

We thank Meitav Ackerman, Moshe Babaioff, Liad Blumrosen, Michal
Feldman, Daniel Lehmann, Noam Nisan, Motty Perry and Eyal Winter for helpful discussions.

6. REFERENCES

[1] A. Archer and E. Tardos. Truthful mechanisms for one-parameter agents. In IEEE Symposium on Foundations of Computer Science, pages 482-491, 2001.
[2] Aaron Archer, Christos Papadimitriou, Kunal Talwar, and Eva Tardos. An approximate truthful mechanism for combinatorial auctions with single parameter agents. In SODA, 2003.
[3] Moshe Babaioff, Ron Lavi, and Elan Pavlov. Single-parameter domains and implementation in undominated strategies, 2004. Working paper.
[4] Yair Bartal, Rica Gonen, and Noam Nisan. Incentive compatible multi-unit combinatorial auctions, 2003. TARK-03.
[5] Sushil Bikhchandani, Shurojit Chatterji, and Arunava Sen. Incentive compatibility in multi-unit auctions, 2003. Working paper.
[6] Liad Blumrosen, Noam Nisan, and Ilya Segal. Auctions with severely bounded communication, 2004. Working paper.
[7] F. Brandt, T. Sandholm, and Y. Shoham. Spiteful bidding in sealed-bid auctions, 2005.
[8] Patrick Briest, Piotr Krysta, and Berthold Voecking. Approximation techniques for utilitarian mechanism design. In STOC, 2005.
[9] Chiranjeeb Buragohain, Divy Agrawal, and Subhash Suri. A game-theoretic framework for incentives in P2P systems. In IEEE P2P, 2003.
[10] E. H. Clarke. Multipart pricing of public goods. Public Choice, 11:17-33, 1971.
[11] Joan Feigenbaum, Christos Papadimitriou, and Scott Shenker. Sharing the cost of multicast transmissions. Journal of Computer and System Sciences, 63(1), 2001.
[12] Joan Feigenbaum and Scott Shenker. Distributed algorithmic mechanism design: Recent results and future directions. In Proceedings of the 6th International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, pages 1-13. ACM Press, New York, 2002.
[13] M. Feldman, K. Lai, I. Stoica, and J. Chuang. Robust incentive techniques for peer-to-peer networks. In EC, 2004.
[14] A. Goldberg, J.
Hartline, A. Karlin, and A. Wright. Competitive auctions, 2004. Working paper.
[15] Philippe Golle, Kevin Leyton-Brown, Ilya Mironov, and Mark Lillibridge. Incentives for sharing in peer-to-peer networks. In EC, 2001.
[16] Ron Holzman, Noa Kfir-Dahav, Dov Monderer, and Moshe Tennenholtz. Bundling equilibrium in combinatorial auctions. Games and Economic Behavior, 47:104-123, 2004.
[17] Ron Holzman and Dov Monderer. Characterization of ex post equilibrium in the VCG combinatorial auctions. Games and Economic Behavior, 47:87-103, 2004.
[18] Matthew O. Jackson. A crash course in implementation theory, 1997. Mimeo, California Institute of Technology.
[19] A. Kothari, D. Parkes, and S. Suri. Approximately-strategyproof and tractable multi-unit auctions. In EC, 2003.
[20] Ron Lavi, Ahuva Mu'alem, and Noam Nisan. Towards a characterization of truthful combinatorial auctions. In FOCS, 2003.
[21] Ron Lavi and Noam Nisan. Online ascending auctions for gradually expiring goods. In SODA, 2005.
[22] Daniel Lehmann, Liadan O'Callaghan, and Yoav Shoham. Truth revelation in approximately efficient combinatorial auctions. Journal of the ACM, 49(5):577-602, 2002.
[23] A. Mas-Colell, M. Whinston, and J. Green. Microeconomic Theory. Oxford University Press, 1995.
[24] Eric Maskin. Nash equilibrium and welfare optimality. Review of Economic Studies, 66:23-38, 1999.
[25] Eric Maskin and Tomas Sjöström. Implementation theory, 2002.
[26] Aranyak Mehta and Vijay Vazirani. Randomized truthful auctions of digital goods are randomizations over truthful auctions. In EC, 2004.
[27] John Moore. Implementation, contract and renegotiation in environments with complete information, 1992.
[28] John Moore and Rafael Repullo. Subgame perfect implementation. Econometrica, 56(5):1191-1220, 1988.
[29] H. Moulin and S.
Shenker. Strategyproof sharing of submodular costs: Budget balance versus efficiency. Economic Theory, 18(3):511-533, 2001.
[30] Noam Nisan. Algorithms for selfish agents. In STACS, 1999.
[31] Noam Nisan and Amir Ronen. Computationally feasible VCG mechanisms. In EC, 2000.
[32] Noam Nisan and Amir Ronen. Algorithmic mechanism design. Games and Economic Behavior, 35:166-196, 2001.
[33] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, 1994.
[34] Christos H. Papadimitriou. Algorithms, games, and the internet. In STOC, 2001.
[35] Kevin Roberts. The characterization of implementable choice rules. In Jean-Jacques Laffont, editor, Aggregation and Revelation of Preferences. Papers presented at the 1st European Summer Workshop of the Econometric Society, pages 321-349. North-Holland, 1979.
[36] Irit Rozenshtrom. Dominant strategy implementation with quasi-linear preferences, 1999. Master's thesis, Dept. of Economics, The Hebrew University, Jerusalem, Israel.
[37] Rakesh Vohra and Rudolf Müller. On dominant strategy mechanisms, 2003. Working paper.
[38] Shmuel Zamir. Rationality and emotions in ultimatum bargaining. Annales d'Economie et de Statistique, 61, 2001.

On Decentralized Incentive Compatible Mechanisms for Partially Informed Environments

ABSTRACT

Algorithmic Mechanism Design focuses on Dominant Strategy Implementations. The main positive results are the celebrated Vickrey-Clarke-Groves (VCG) mechanisms and computationally efficient mechanisms for severely restricted players ("single-parameter domains"). As it turns out, many natural social goals cannot be implemented using the dominant strategy concept [35, 32, 22, 20]. This suggests that the standard requirements must be relaxed in order to construct general-purpose mechanisms. We observe that in many common distributed environments computational entities can take advantage of the network structure to collect and distribute
information. We thus suggest a notion of partially informed environments. Even if the information is recorded with some probability, this enables us to implement a wider range of social goals, using the concept of iterative elimination of weakly dominated strategies. As a result, cooperation is achieved independent of agents' belief. As a case study, we apply our methods to derive a Peer-to-Peer network mechanism for file sharing.

1. INTRODUCTION

Recently, global networks have attracted widespread study. The emergence of popular scalable shared networks with self-interested entities - such as peer-to-peer systems over the Internet and mobile wireless communication ad-hoc networks - poses fundamental challenges. Naturally, the study of such giant decentralized systems involves aspects of game theory [32, 34]. In particular, the subfield of Mechanism Design deals with the construction of mechanisms: for a given social goal, the challenge is to design rules for interaction such that selfish behavior of the agents will result in the desired social goal [23, 33]. Algorithmic Mechanism Design (AMD) focuses on efficiently computable constructions [32]. Distributed Algorithmic Mechanism Design (DAMD) studies mechanism design in inherently decentralized settings [30, 12].

The standard model assumes rational agents with quasi-linear utilities and private information, playing dominant strategies. The solution concept of dominant strategies - in which each player has a best response strategy regardless of the strategy played by any other player - is well suited to the assumption of private information, in which each player is not assumed to have knowledge or beliefs regarding the other players. The appropriateness of this set-up stems from the strength of the solution concept, which complements the weak information assumption. Many mechanisms have been constructed using this set-up, e.g., [1, 4, 6, 11, 14, 22]. Most of these apply to severely restricted cases
(e.g., single-item auctions with no externalities) in which a player's preference is described by only one parameter ("single-parameter domains"). To date, Vickrey-Clarke-Groves (VCG) mechanisms are the only known general method for designing dominant strategy mechanisms for general domains of preferences. However, in distributed settings without available subsidies from outside sources, VCG mechanisms cannot be accepted as valid solutions due to a serious lack of budget balance. Additionally, for some domains of preferences, VCG mechanisms and weighted VCG mechanisms are faced with computational hardness [22, 20]. Further limitations of the set-up are discussed in subsection 1.3.

In most distributed environments, players can take advantage of the network structure to collect and distribute information about other players. This paper thus studies the effects of relaxing the private information assumption. One model that has been extensively studied recently is the Peer-to-Peer (P2P) network. A P2P network is a distributed network with no centralized authority, in which the participants share their individual resources (e.g., processing power, storage capacity, bandwidth and content). The aggregation of such resources provides inexpensive computational platforms. The most popular P2P networks are those for sharing media files, such as Napster, Gnutella, and Kazaa. Recent work on P2P incentives includes micropayment methods [15] and reputation-based methods [9, 13]. The following description of a P2P network scenario illustrates the relevance of our relaxed informational assumption.

EXAMPLE 1. Consider a Peer-to-Peer network for file sharing. Whenever agent B downloads a file from agent A, all peers along the routing path know that B has loaded the file. They can record this information about agent B. In addition, they can distribute this information. However, it is impossible to record all the information everywhere. First, such duplication induces
huge costs. Second, as agents dynamically enter and exit the network, the information might not always be available. And so it seems natural to consider environments in which the information is locally recorded, that is, the information is recorded in the closest neighborhood with some probability p.

In this paper we shall see that if the information is available with some probability, then this enables us to implement a wider range of social goals. As a result, cooperation is achieved independent of agents' belief. This demonstrates that in some computational contexts our approach is far less demanding than the Bayesian approach (which assumes that players' types are drawn according to some identified probability density function).

1.1 Implementations in Complete Information Set-ups

In complete information environments, each agent is informed about everyone else. That is, each agent observes his own preference and the preferences of all other agents. However, no outsider can observe this information - specifically, neither the mechanism designer nor the court. Many positive results were shown for such arguably realistic settings. For recent surveys see [25, 27, 18]. Moore and Repullo implement a large class of social goals using sequential mechanisms with a small number of rounds [28]. The concept they used is subgame-perfect implementation (SPE). The SPE-implementability concept seems natural for the following reasons: First, the designed mechanisms usually have non-artificial constructs and a "small" strategy space. As a result, it is straightforward for a player to compute his strategy.
Second, sequential mechanisms avoid simultaneous moves, and thus can be considered for distributed networks. Third, the constructed mechanisms are often decentralized (i.e., lacking a centralized authority or designer) and budget-balanced (i.e., transfers always sum up to zero). This happens essentially if there are at least three players and a direct network link between any two agents. Finally, Moore and Repullo observed that they actually use a relaxed complete information assumption: it is only required that for every player there exists one other player who is informed about him.

Interestingly, in real life players do not always use their subgame perfect strategies. One such widely studied case is the Ultimatum Bargaining 2-person game. In this simple game, the proposer first makes an offer of how to divide a certain known sum of money, and the responder either agrees or refuses; in the latter case both players earn zero. Somewhat surprisingly, experiments show that the responder often rejects the suggested offer, even if it is bounded away from zero and the game is played only once (see e.g. [38]).

1.2 Implementations in Partially Informed Set-ups and Our Results

The complete information assumption is realistic for small groups of players, but not in general. In this paper we consider players that are informed about each other with some probability. More formally, we say that agent B is p-informed about agent A if B knows the type of A with probability p.
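The quantitative role of p shows up in the fines used by our mechanisms (e.g., βp = max{0, ((1 − 2p)/p) · C} + ε in Theorem 2): the smaller the probability that a deviation is observed, the larger the punishment must be. The sketch below is an illustrative numeric check (Python; the loading cost C and the margin ε are arbitrary values chosen for illustration, not taken from the paper) that the expected gain from misreporting, bounded by p · (−C − βp) + (1 − p) · C, is negative for every p in (0, 1]:

```python
# Illustrative check of the fine beta_p = max(0, (1 - 2p)/p * C) + eps.
# With probability p the environment is informed and a misreporting agent
# pays C plus the fine; with probability 1 - p the deviation goes
# unnoticed and saves at most C.
C = 10.0    # loading cost (arbitrary illustrative value)
EPS = 0.01  # small positive margin epsilon (arbitrary illustrative value)

def fine(p: float) -> float:
    """Fine beta_p, decreasing in the detection probability p."""
    return max(0.0, (1.0 - 2.0 * p) / p * C) + EPS

def misreport_gain_bound(p: float) -> float:
    """Upper bound on the expected gain from misreporting."""
    return p * (-C - fine(p)) + (1.0 - p) * C

# The bound stays strictly negative on (0, 1], so truthful reporting
# survives iterative elimination of weakly dominated strategies.
for p in (0.05, 0.25, 0.5, 0.75, 1.0):
    assert misreport_gain_bound(p) < 0
```

Note how fine(p) grows as p shrinks: for p = 0.05 the fine is an order of magnitude larger than C, matching the observation below that large fines allow a small p to induce cooperation.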
For such partially informed environments, we show how to use the solution concept of iterative elimination of weakly dominated strategies. We demonstrate this concept through some motivating examples that (i) seem natural in distributed settings and (ii) cannot be implemented in dominant strategies, even if there is an authorized center with a direct connection to every agent or even if players have single-parameter domains.

1. We first show how the subgame perfect techniques of Moore and Repullo [28] can be applied to p-informed environments and further adjusted to the concept of iterative elimination of weakly dominated strategies (for large enough p).

2. We then suggest a certificate-based challenging method that is more natural in computerized p-informed environments and different from the one introduced by Moore and Repullo [28] (for p ∈ (0, 1]).

3. We consider implementations in various network structures. As a case study we apply our methods to derive: (1) a simplified Peer-to-Peer network for file sharing with no payments in equilibrium (our approach is (agent, file)-specific); (2) a budget-balanced and economically efficient web-cache mechanism.

Our mechanisms use reasonable punishments that inversely depend on p.
And so, if the fines are large, then a small p is enough to induce cooperation. Essentially, a large p implies a large amount of recorded information.

1.2.1 Malicious Agents

Decentralized mechanisms often utilize punishing outcomes. As a result, malicious players might cause severe harm to others. We suggest a quantified notion of a "malicious" player, who benefits both from his own gained surplus and from the harm caused to others. [12] suggests several categories to classify non-cooperating players. Our approach is similar to that of [7] (and the references therein), who independently considered such players in a different context. We show a simple decentralized mechanism in which q-malicious players cooperate and, in particular, do not use their punishing actions in equilibrium.

1.3 Dominant Strategy Implementations

In this subsection we refer to some recent results demonstrating that the set-up of private information with the concept of dominant strategies is restrictive in general. First, Roberts' classical impossibility result shows that if players' preferences are not restricted and there are at least 3 different outcomes, then every dominant-strategy mechanism must be weighted VCG (with the social goal that maximizes the weighted welfare) [35]. For slightly restricted preference domains, it is not known how to turn efficiently computable algorithms into dominant strategy mechanisms. This was observed and analyzed in [32, 22, 31]. Recently, [20] extended Roberts' result to some leading examples. They showed that under mild assumptions any dominant strategy mechanism for a variety of Combinatorial Auctions over multi-dimensional domains must be almost weighted VCG. Additionally, it turns out that the dominant strategy requirement implies that the social goal must be "monotone" [35, 36, 22, 20, 5, 37]. This condition is very restrictive, as many desired natural goals are non-monotone. Several recent papers consider relaxations of the dominant strategy concept:
[32, 1, 2, 19, 16, 17, 26, 21]. However, most of these positive results either apply to severely restricted cases (e.g., single-parameter, 2 players) or amount to VCG or "almost" VCG mechanisms (e.g., [19]). Recently, [8, 3] considered implementations for generalized single-parameter players.

Organization of this paper: In section 2 we illustrate the concepts of subgame perfect and iterative elimination of weakly dominated strategies in completely informed and partially informed environments. In section 3 we show a mechanism for Peer-to-Peer file-sharing networks. In section 4 we apply our methods to derive a web cache mechanism. Future work is briefly discussed in section 5.

2. MOTIVATING EXAMPLES
2.1 The Fair Assignment Problem
2.1.1 Basic Mechanism
2.1.2 Elicitation Mechanism
2.1.3 Elicitation Mechanism for Partially Informed Agents
2.1.4 Extensions
2.2 Seller and Buyer Scenario
2.2.1 Trade Mechanism for Malicious Agents
2.2.2 Discussion

3. PEER-TO-PEER NETWORKS
3.1 Basic Mechanism
3.2 Chain Networks
3.3 Network Mechanism

4. WEB CACHE

5. CONCLUDING REMARKS

In this paper we have seen a new partial informational assumption, and we have demonstrated its suitability to networks in which computational agents can easily collect and distribute information. We then described some mechanisms using the concept of iterative elimination of weakly dominated strategies. Some issues for future work include:

• As we have seen, the implementation issue in p-informed environments is straightforward: it is easy to construct "incentive compatible" mechanisms even for non-single-parameter cases. The challenge is to find more realistic scenarios in which the partial informational assumption is applicable.

• Mechanisms for information propagation and maintenance. In our examples we chose p such that the maintenance cost over time is negligible. However, the dynamics of the general case are delicate: an agent can use the recorded
information to eliminate data that is not \"likely\" to be needed, in order to decrease his maintenance costs.\nAs a result, the probability that the environment is informed decreases, and selfish agents would not cooperate.\nIncentives for information propagation should be considered as well (e.g., for P2P networks for file sharing).\n\u2022 It seems that some social choice goals cannot be implemented if each player is at least 1\/n-malicious (where n is the number of players).\nIt would be interesting to identify these cases.","lvl-4":"On Decentralized Incentive Compatible Mechanisms for Partially Informed Environments *\nABSTRACT\nAlgorithmic Mechanism Design focuses on Dominant Strategy Implementations.\nThe main positive results are the celebrated Vickrey-Clarke-Groves (VCG) mechanisms and computationally efficient mechanisms for severely restricted players (\"single-parameter domains\").\nAs it turns out, many natural social goals cannot be implemented using the dominant strategy concept [35, 32, 22, 20].\nThis suggests that the standard requirements must be relaxed in order to construct general-purpose mechanisms.\nWe observe that in many common distributed environments computational entities can take advantage of the network structure to collect and distribute information.\nWe thus suggest a notion of partially informed environments.\nEven if the information is recorded with some probability, this enables us to implement a wider range of social goals, using the concept of iterative elimination of weakly dominated strategies.\nAs a result, cooperation is achieved independent of agents' belief.\nAs a case study, we apply our methods to derive Peer-to-Peer network mechanism for file sharing.\n1.\nINTRODUCTION\nRecently, global networks have attracted widespread study.\nAlgorithmic Mechanism Design (AMD) focuses on efficiently computable constructions [32].\nDistributed Algorithmic Mechanism Design (DAMD) studies mechanism design in inherently decentralized 
settings [30, 12].\nThe standard model assumes rational agents with quasi-linear utilities and private information, playing dominant strategies.\nThe solution concept of dominant strategies - in which each player has a best response strategy regardless of the strategy played by any other player - is well suited to the assumption of private information, in which each player is not assumed to have knowledge or beliefs regarding the other players.\nThe appropriateness of this set-up stems from the strength of the solution concept, which complements the weak information assumption.\nMany mechanisms have been constructed using this set-up, e.g., [1, 4, 6, 11, 14, 22].\nMost of these apply to severely-restricted cases (e.g., single-item auctions with no externalities) in which a player's preference is described by only one parameter (\"single-parameter domains\").\nTo date, Vickrey-Clarke-Groves (VCG) mechanisms are the only known general method for designing dominant strategy mechanisms for general domains of preferences.\nAdditionally, for some domains of preferences, VCG mechanisms and weighted VCG mechanisms are faced with computational hardness [22, 20].\nFurther limitations of the set-up are discussed in subsection 1.3.\nIn most distributed environments, players can take advantage of the network structure to collect and distribute information about other players.\nThis paper thus studies the effects of relaxing the private information assumption.\nOne model that has been extensively studied recently is the Peer-to-Peer (P2P) network.\nThe most popular P2P networks are those for sharing media files, such as Napster, Gnutella, and Kazaa.\nThe following description of a P2P network scenario illustrates the relevance of our relaxed informational assumption.\nEXAMPLE 1.\nConsider a Peer-to-Peer network for file sharing.\nThey can record this information about agent B.\nIn addition, they can distribute this information.\nHowever, it is impossible to record all the 
information everywhere.\nFirst, such duplication induces huge costs.\nSecond, as agents dynamically enter and exit from the network, the information might not be always available.\nAnd so it is seems natural to consider environments in which the information is locally recorded, that is, the information is recorded in the closest neighborhood with some probability p.\nIn this paper we shall see that if the information is available with some probability, then this enables us to implement a wider range of social goals.\nAs a result, cooperation is achieved independent of agents' belief.\n1.1 Implementations in Complete Information Set-ups\nIn complete information environments, each agent is informed about everyone else.\nThat is, each agent observes his own preference and the preferences of all other agents.\nHowever, no outsider can observe this information.\nSpecifically, neither the mechanism designer nor the court.\nMany positive results were shown for such arguably realistic settings.\nMoore and Repullo implement a large class of social goals using sequential mechanisms with a small number of rounds [28].\nThe concept they used is subgame-perfect implementations (SPE).\nThe SPE-implementability concept seems natural for the following reasons: the designed mechanisms usually have non-artificial constructs and a \"small\" strategy space.\nAs a result, it is straightforward for a player to compute his strategy . 
'\nSecond, sequential mechanisms avoid simultaneous moves, and thus can be considered for distributed networks.\nThird, the constructed mechanisms are often decentralized (i.e., lacking a centralized authority or designer)' Interestingly, in real life players do not always use their subgame perfect strategies.\nOne such widely studied case is the Ultimatum Bargaining 2-person game.\nThis happens essentially if there are at least three players, and a direct network link between any two agents.\nFinally, Moore and Repullo observed that they actually use a relaxed complete information assumption: it is only required that for every player there exists only one other player who is informed about him.\n1.2 Implementations in Partially Informed Set-ups and Our Results\nThe complete information assumption is realistic for small groups of players, but not in general.\nIn this paper we consider players that are informed about each other with some probability.\nMore formally, we say that agent B is p-informed about agent A, if B knows the type of A with probability p. 
For such partially-informed environments, we show how to use the solution concept of iterative elimination of weakly dominated strategies.\nWe demonstrate this concept through some motivating examples that (i) seem natural in distributed settings and (ii) cannot be implemented in dominant strategies even if there is an authorized center with a direct connection to every agent or even if players have single-parameter domains.\n1.\nWe first show how the subgame perfect techniques of Moore and Repullo [28] can be applied to p-informed environments and further adjusted to the concept of iterative elimination of weakly dominated strategies (for large enough p).\n2.\n3.\nWe consider implementations in various network structures.\nAs a case study we apply our methods to derive: (1) Simplified Peer-to-Peer network for file sharing with no payments in equilibrium.\nOur approach is (agent, file) - specific.\n(2) Web-cache budget-balanced and economically efficient mechanism.\nOur mechanisms use reasonable punishments that inversely depend on p. 
5. CONCLUDING REMARKS
In this paper we have seen a new partial informational assumption, and we have demonstrated its suitability to networks in which computational agents can easily collect and distribute information. We then described some mechanisms using the concept of iterative elimination of weakly dominated strategies. Some issues for future work include:
• As we have seen, the implementation issue in p-informed environments is straightforward: it is easy to construct "incentive compatible" mechanisms even for non-single-parameter cases. The challenge is to find more realistic scenarios in which the partial informational assumption is applicable.
• Mechanisms for information propagation and maintenance. In our examples we choose p such that the maintenance cost over time is negligible. However, the dynamics of the general case is delicate: an agent can use the recorded information to eliminate data that is not "likely" to be needed, in order to decrease his maintenance costs. As a result, the probability that the environment is informed decreases, and selfish agents would not cooperate. Incentives for information propagation should be considered as well (e.g., for P2P networks for file sharing).
• It seems that some social choice goals cannot be implemented if each player is at least
1/n-malicious (where n is the number of players). It would be interesting to identify these cases.

On Decentralized Incentive Compatible Mechanisms for Partially Informed Environments*

ABSTRACT
Algorithmic Mechanism Design focuses on dominant strategy implementations. The main positive results are the celebrated Vickrey-Clarke-Groves (VCG) mechanisms and computationally efficient mechanisms for severely restricted players ("single-parameter domains"). As it turns out, many natural social goals cannot be implemented using the dominant strategy concept [35, 32, 22, 20]. This suggests that the standard requirements must be relaxed in order to construct general-purpose mechanisms. We observe that in many common distributed environments computational entities can take advantage of the network structure to collect and distribute information. We thus suggest a notion of partially informed environments. Even if the information is recorded with some probability, this enables us to implement a wider range of social goals, using the concept of iterative elimination of weakly dominated strategies. As a result, cooperation is achieved independent of agents' beliefs. As a case study, we apply our methods to derive a Peer-to-Peer network mechanism for file sharing.

1. INTRODUCTION
Recently, global networks have attracted widespread study. The emergence of popular scalable shared networks with self-interested entities - such as peer-to-peer systems over the Internet and mobile wireless communication ad-hoc networks - poses fundamental challenges. Naturally, the study of such giant decentralized systems involves aspects of game theory [32, 34]. In particular, the subfield of Mechanism Design deals with the construction of mechanisms: for a given social goal the challenge is to design rules for interaction such that selfish behavior of the agents will result in the desired social goal [23, 33]. Algorithmic Mechanism Design (AMD) focuses on efficiently
computable constructions [32]. Distributed Algorithmic Mechanism Design (DAMD) studies mechanism design in inherently decentralized settings [30, 12]. The standard model assumes rational agents with quasi-linear utilities and private information, playing dominant strategies. The solution concept of dominant strategies - in which each player has a best response strategy regardless of the strategies played by the other players - is well suited to the assumption of private information, in which each player is not assumed to have knowledge or beliefs regarding the other players. The appropriateness of this set-up stems from the strength of the solution concept, which complements the weak informational assumption. Many mechanisms have been constructed using this set-up, e.g., [1, 4, 6, 11, 14, 22]. Most of these apply to severely restricted cases (e.g., single-item auctions with no externalities) in which a player's preference is described by only one parameter ("single-parameter domains"). To date, Vickrey-Clarke-Groves (VCG) mechanisms are the only known general method for designing dominant strategy mechanisms for general domains of preferences. However, in distributed settings without available subsidies from outside sources, VCG mechanisms cannot be accepted as valid solutions due to a serious lack of budget balance. Additionally, for some domains of preferences, VCG mechanisms and weighted VCG mechanisms are faced with computational hardness [22, 20]. Further limitations of the set-up are discussed in subsection 1.3.

In most distributed environments, players can take advantage of the network structure to collect and distribute information about other players. This paper thus studies the effects of relaxing the private information assumption. One model that has been extensively studied recently is the Peer-to-Peer (P2P) network. A P2P network is a distributed network with no centralized authority, in which the participants share their individual resources (e.g., processing power, storage capacity, bandwidth and content). The aggregation of such resources provides inexpensive computational platforms. The most popular P2P networks are those for sharing media files, such as Napster, Gnutella, and Kazaa. Recent work on P2P incentives includes micropayment methods [15] and reputation-based methods [9, 13].

The following description of a P2P network scenario illustrates the relevance of our relaxed informational assumption.

EXAMPLE 1. Consider a Peer-to-Peer network for file sharing. Whenever agent B downloads a file from agent A, all peers along the routing path know that B holds the file. They can record this information about agent B. In addition, they can distribute this information. However, it is impossible to record all the information everywhere. First, such duplication induces huge costs. Second, as agents dynamically enter and exit the network, the information might not always be available. It therefore seems natural to consider environments in which the information is recorded locally, that is, recorded in the closest neighborhood with some probability p.

In this paper we shall see that if the information is available with some probability, then a wider range of social goals can be implemented. As a result, cooperation is achieved independent of agents' beliefs. This demonstrates that in some computational contexts our approach is far less demanding than the Bayesian approach (which assumes that players' types are drawn according to some identified probability density function).

1.1 Implementations in Complete Information Set-ups
In complete information environments, each agent is informed about everyone else. That is, each agent observes his own preference and the preferences of all other agents. However, no outsider can observe this information - in particular, neither the mechanism designer nor the court. Many positive results were shown for
such arguably realistic settings. For recent surveys see [25, 27, 18]. Moore and Repullo implement a large class of social goals using sequential mechanisms with a small number of rounds [28]. The concept they used is subgame-perfect implementation (SPE). The SPE-implementability concept seems natural for the following reasons. First, the designed mechanisms usually have non-artificial constructs and a "small" strategy space; as a result, it is straightforward for a player to compute his strategy. Second, sequential mechanisms avoid simultaneous moves, and thus can be considered for distributed networks. Third, the constructed mechanisms are often decentralized (i.e., lacking a centralized authority or designer) and budget-balanced (i.e., transfers always sum up to zero); this happens essentially if there are at least three players and a direct network link between any two agents. Interestingly, in real life players do not always use their subgame perfect strategies. One such widely studied case is the Ultimatum Bargaining 2-person game. In this simple game, the proposer first makes an offer of how to divide a certain known sum of money, and the responder either agrees or refuses; in the latter case both players earn zero. Somewhat surprisingly, experiments show that the responder often rejects the suggested offer, even if it is bounded away from zero and the game is played only once (see e.g. [38]). Finally, Moore and Repullo observed that they actually use a relaxed complete information assumption: it is only required that for every player there exists one other player who is informed about him.

1.2 Implementations in Partially Informed Set-ups and Our Results
The complete information assumption is realistic for small groups of players, but not in general. In this paper we consider players that are informed about each other with some probability. More formally, we say that agent B is p-informed about agent A if B knows the type of A with probability p. For such partially informed environments, we show how to use the solution concept of iterative elimination of weakly dominated strategies. We demonstrate this concept through some motivating examples that (i) seem natural in distributed settings and (ii) cannot be implemented in dominant strategies even if there is an authorized center with a direct connection to every agent, or even if players have single-parameter domains.

1. We first show how the subgame perfect techniques of Moore and Repullo [28] can be applied to p-informed environments and further adjusted to the concept of iterative elimination of weakly dominated strategies (for large enough p).

2. We then suggest a certificate-based challenging method that is more natural in computerized p-informed environments and different from the one introduced by Moore and Repullo [28] (for p ∈ (0, 1]).

3. We consider implementations in various network structures. As a case study we apply our methods to derive: (1) a simplified Peer-to-Peer network for file sharing with no payments in equilibrium (our approach is (agent, file)-specific); (2) a budget-balanced and economically efficient web-cache mechanism.

Our mechanisms use reasonable punishments that inversely
depend on p. And so, if the fines are large then a small p is enough to induce cooperation. Essentially, a large p implies a large amount of recorded information.

1.2.1 Malicious Agents
Decentralized mechanisms often utilize punishing outcomes. As a result, malicious players might cause severe harm to others. We suggest a quantified notion of a "malicious" player, who benefits from his own gained surplus and from harm caused to others. [12] suggests several categories to classify non-cooperating players. Our approach is similar to [7] (and the references therein), which independently considered such players in a different context. We show a simple decentralized mechanism in which q-malicious players cooperate and, in particular, do not use their punishing actions in equilibrium.

1.3 Dominant Strategy Implementations
In this subsection we refer to some recent results demonstrating that the set-up of private information with the concept of dominant strategies is restrictive in general. First, Roberts' classical impossibility result shows that if players' preferences are not restricted and there are at least 3 different outcomes, then every dominant-strategy mechanism must be weighted VCG (with the social goal that maximizes the weighted welfare) [35]. For slightly-restricted preference domains, it is not known how to turn efficiently computable algorithms into dominant strategy mechanisms. This was observed and analyzed in [32, 22, 31]. Recently, [20] extended Roberts' result to some leading examples. They showed that under mild assumptions any dominant strategy mechanism for a variety of combinatorial auctions over multi-dimensional domains must be almost weighted VCG. Additionally, it turns out that the dominant strategy requirement implies that the social goal must be "monotone" [35, 36, 22, 20, 5, 37]. This condition is very restrictive, as many desired natural goals are non-monotone.² Several recent papers consider relaxations of the dominant strategy concept: [32, 1, 2, 19, 16, 17, 26, 21]. However, most of these positive results either apply to severely restricted cases (e.g., single-parameter, 2 players) or amount to VCG or "almost" VCG mechanisms (e.g., [19]). Recently, [8, 3] considered implementations for generalized single-parameter players.

Organization of this paper: In section 2 we illustrate the concepts of subgame perfect and iterative elimination of weakly dominated strategies in completely-informed and partially-informed environments. In section 3 we show a mechanism for Peer-to-Peer file sharing networks. In section 4 we apply our methods to derive a web cache mechanism. Future work is briefly discussed in section 5.

2. MOTIVATING EXAMPLES
In this section we examine the concepts of subgame perfect and iterative elimination of weakly dominated strategies for completely informed and p-informed environments. We also present the notion of q-maliciousness and some other related considerations through two illustrative examples.

2.1 The Fair Assignment Problem
Our first example is an adjustment to a computerized context of an ancient procedure to ensure that the wealthiest man in Athens would sponsor a theatrical production, known as the Choregia [27]. In the fair assignment problem, Alice and Bob are two workers, and there is a new task to be performed. Their goal is to assign the task to the least loaded worker without any monetary transfers. The informational assumption is that Alice and Bob know both loads and the duration of the new task.³

²E.g., minimizing the makespan within a factor of 2 [32] and Rawls' Rule over some multi-dimensional domains [20].
³At first glance one might ask why the completely informed agents could not simply sign a contract specifying the desired goal. Such a contract is sometimes infeasible due to the fact that the true state cannot be observed by outsiders, especially not the court.

2.1.1 Basic Mechanism
The following simple mechanism implements
this goal in subgame perfect equilibrium.
• Stage 1: Alice either agrees to perform the new task or refuses.
• Stage 2: If she refuses, Bob has to choose between:
  - (a) Performing the task himself.
  - (b) Exchanging his load with Alice and performing the new task as well.

Let L_A^T, L_B^T be the true loads of Alice and Bob, and let t > 0 be the load of the new task. Assume that load exchange takes zero time and cost. We shall see that the basic mechanism achieves the goal in a subgame perfect equilibrium. Intuitively this means that in equilibrium each player will choose his best action at each point he might reach, assuming similar behavior of others; thus every SPE is a Nash equilibrium.

CLAIM 2. ([27]) The task is assigned to the least loaded worker in subgame perfect equilibrium.

PROOF. By a backward induction argument ("look forward and reason backward"), consider the following cases:
1. L_B^T < L_A^T. If stage 2 is reached then Bob will not exchange.
2. L_A^T ≤ L_B^T < L_A^T + t. If stage 2 is reached Bob will exchange, and this is what Alice prefers.
3. L_A^T + t ≤ L_B^T. If stage 2 is reached then Bob would exchange; as a result it is strictly preferable for Alice to perform the task.

Note that the basic mechanism does not use monetary transfers at all and is decentralized in the sense that no third party is needed to run the procedure. The goal is achieved in equilibrium (ties are broken in favor of Alice). However, in the second case an exchange does occur in an equilibrium point. Recall the unrealistic assumption that load exchange takes zero time and cost. Introducing fines, the next mechanism overcomes this drawback.

2.1.2 Elicitation Mechanism
In this subsection we shall see a centralized mechanism for the fair assignment goal without load exchange in equilibrium. The additional assumptions are as follows. The cost of performing a load of duration d is exactly d. We assume that the duration t of the new task is at most T. The payoffs⁴ of the utility-maximizing agents are quasilinear. The following mechanism is an adaptation of Moore and Repullo's elicitation mechanism [28].⁵

⁴Proof: Assume that there exists a mechanism that implements this goal in dominant strategies. Then by the Revelation Principle [23] there exists a mechanism that implements this goal for which the dominant strategy of each player is to report his true load. Clearly, truthful reporting cannot be a dominant strategy for this goal (if monetary transfers are not available), as players would prefer to report higher loads.

• Stage 1: ("Elicitation of Alice's load") Alice announces L_A. Bob announces L'_A ≤ L_A. If L'_A = L_A ("Bob agrees"), go to the next stage. Otherwise ("Bob challenges"), Alice is assigned the task. She then has to choose between:
  - (a) Transferring her original load to Bob and paying him L_A − 0.5 · min{ε, L_A − L'_A}. Alice pays ε to the mechanism. Bob pays the fine of T + ε to the mechanism.
  - (b) No load transfer. Alice pays ε to Bob. STOP.
• Stage 2: The elicitation of Bob's load is similar to Stage 1 (switching the roles of Alice and Bob).
• Stage 3: If L_A ≤ L_B Alice is assigned the task, otherwise Bob. STOP.

Observe that Alice is assigned the task and fined with ε whenever Bob challenges. We shall see that the bonus of ε is paid to a challenging player only in out-of-equilibrium cases.

PROOF. Assume w.l.o.g. that the elicitation of Alice's load is done after Bob's, and that Stage 2 is reached. If Alice truly reports L_A = L_A^T, Bob strictly prefers to agree. Otherwise, if Bob challenges, Alice would always strictly prefer to transfer (as in this case Bob would perform her load for a smaller cost); as a result Bob would pay T + ε to the mechanism. This punishing outcome is less preferable than the "normal" outcome of Stage 3 achieved had he agreed. If Alice misreports L_A > L_A^T, then Bob can ensure himself the bonus (which is always strictly preferable to
reaching Stage 3) by challenging with L'_A = L_A^T, and so whenever Bob gets the bonus Alice gains the worst of all payoffs. Reporting a lower load L_A < L_A^T is not beneficial for Alice either. In this case, Bob would strictly prefer to agree (and not to announce L'_A < L_A, as he is limited to challenging with a smaller load than what she announces). Thus such misreporting can only increase the possibility that she is assigned the task, and so there is no incentive for Alice to do so. All together, Alice would prefer to report the truth in this stage. And so Stage 2 would not abnormally end by STOP, and similarly Stage 1.

Observe that the elicitation mechanism is almost balanced: in all outcomes no money comes in or out, except for the non-equilibrium outcome (a), in which both players pay to the mechanism.⁵

⁵In [28], if an agent misreports his type then it is always beneficial to the other agent to challenge - in particular, even if the agent reports a lower load.

2.1.3 Elicitation Mechanism for Partially Informed Agents
In this subsection we consider partially informed agents: each agent knows the load of the other with probability p. It turns out that a version of the elicitation mechanism works for this relaxed informational assumption, if we use the concept of iterative elimination of weakly dominated strategies.⁶ We replace the fixed fine of ε in the elicitation mechanism with the fine

β_p = max{L, T / (2p − 1)} + ε,

and assume the bounds L_A^T, L_B^T ≤ L.

PROPOSITION 2. If all agents are p-informed, p > 0.5, the elicitation mechanism (β_p) implements the fair assignment goal with the concept of iterative elimination of weakly dominated strategies. The strategy of each player is to report the true load, and to challenge with the true load if the other agent overreports.

PROOF. Assume w.l.o.g. that the elicitation of Alice's load is done after Bob's, and that Stage 2 is reached. First observe that underreporting the true value is a dominated strategy, whether or not Bob is uninformed and "mistakenly" challenges with a lower load (as β_p > L), and even if t is very small. Now we shall see that overreporting her value is a dominated strategy as well:

Alice's expected payoff gained by misreporting
  ≤ p · (payoff if she lies and Bob is informed) + (1 − p) · (max payoff if Bob is not informed)
  ≤ p · (−t − β_p)
  < p · (−t) + (1 − p) · (−t − β_p)
  ≤ p · (min payoff of a true report if Bob is informed) + (1 − p) · (min payoff if Bob is not informed)
  ≤ Alice's expected payoff if she truly reports.

The term (−t − β_p) on the left hand side is due to the fact that if Bob is informed he will always prefer to challenge. On the right hand side, if he is informed, then challenging is a dominated strategy, and if he is not informed the worst harm he can make is to challenge. Thus in Stage 2 Alice will report her true load. This implies that challenging without being informed is a dominated strategy for Bob. This argument can be reasoned also for the first stage, when Bob reports his value: Bob knows the maximum payoff he can gain is at most zero, since he cannot expect to get the bonus in the next stage.

⁶A strategy s_i of player i is weakly dominated if there exists s'_i such that (i) the payoff gained by s'_i is at least as high as the payoff gained by s_i, for all strategies of the other players and all preferences, and (ii) there exist a preference and a combination of strategies for the other players such that the payoff gained by s'_i is strictly higher than the payoff gained by s_i.

2.1.4 Extensions
The elicitation mechanism for partially informed agents is rather general. As in [28], we need the capability to "judge" between two distinct declarations in the elicitation rounds, and upper and lower bounds based on the possible payoffs derived from the last stage. In addition, for p-informed environments, some structure is needed to ensure that underbidding is a dominated strategy.

The Choregia-type mechanisms can be applied to more than 2
players with the same number of stages: the player in the first stage can simply point out the name of the "wealthiest" agent. Similarly, the elicitation mechanisms can be extended in a straightforward manner. These mechanisms can be made budget-balanced, as some player might take over the role of the designer and collect the fines, as observed in [28].

OPEN PROBLEM 1. Design a decentralized budget-balanced mechanism with reasonable fines for independently p-informed n players, where p < 1 − 1/2^(n−1).

2.2 Seller and Buyer Scenario
A player might cause severe harm to others by choosing a non-equilibrium outcome. In the mechanism for the fair assignment goal, an agent might "maliciously" challenge even if the other agent truly reports his load. In this subsection we consider such malicious scenarios. For ease of exposition we present a second example. We demonstrate that equilibria remain unchanged even if players are malicious.

In the seller-buyer example there is one item to be traded and two possible future states. The goal is to sell the item for the average low price p_l = (l_s + l_b)/2 in state L, and the higher average price p_h = (h_s + h_b)/2 in the other state H, where l_s is the seller's cost and l_b is the buyer's value in state L, and similarly h_s, h_b in H. The players fix the prices without knowing what the future state will be. Assume that l_s < h_s < l_b < h_b, and that trade can occur at both prices (that is, p_l, p_h ∈ (h_s, l_b)). Only the players can observe the realization of the true state. The payoffs are of the form u_b = x·v_b − t_b, u_s = t_s − x·v_s, where the binary variable x indicates whether trade occurred, v_b is the buyer's value, v_s is the seller's cost, and t_b, t_s are the transfers.

Consider the following decentralized trade mechanism.
• Stage 1: If the seller reports H, go to Stage 2. Otherwise, trade at the low price p_l. STOP.
• Stage 2: The buyer has to choose between:
  - (a) Trade at the high price p_h.
  - (b) No trade, and the seller pays Δ to the buyer.

CLAIM 4. Let Δ = l_b − p_h + ε. The unique subgame perfect equilibrium of the trade mechanism is to report the true state in Stage 1 and to trade if Stage 2 is reached.

Note that outcome (b) is never chosen in equilibrium.

2.2.1 Trade Mechanism for Malicious Agents
The buyer might maliciously punish the seller by choosing outcome (b) when the true state is H. The following notion quantifies the consideration that a player is not indifferent to the private surpluses of others: a q-malicious player's utility is (1 − q) times his own surplus minus q times the surplus of the other player. This definition appeared independently in [7] in a different context. We shall see that the traders avoid such bad behavior if they are q-malicious with q < 0.5, that is, if their "non-indifference" impact is bounded by 0.5. Equilibria outcomes remain unchanged, and so cooperation is achieved as in the original case of non-malicious players. Consider the trade mechanism with

p_l = (1 − q)·h_s + q·l_b,  p_h = q·h_s + (1 − q)·l_b,  Δ = (1 − q)·(h_b − l_b − ε).

Note that p_l < p_h for q < 0.5.

CLAIM 5. If q < 0.5, then the unique subgame perfect equilibrium for q-malicious players remains unchanged.

PROOF. By backward induction we consider two cases. In state H, the q-malicious buyer prefers to trade, as (1 − q)(h_b − p_h) + q(h_s − p_h) = (1 − q)(h_b − l_b) > Δ. In state L the buyer prefers the no-trade outcome, as (1 − q)(l_b − p_h) + q(l_s − p_h) < Δ. The seller prefers to trade at the low price, as (1 − q)(p_l − l_s) + q(p_l − l_b) > 0 > −Δ.

2.2.2 Discussion
No mechanism can Nash-implement this trading goal if the only possible outcomes are trade at p_l and trade at p_h.
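The payoff comparisons behind Claim 5 can be checked numerically. The following minimal sketch uses hypothetical values for l_s, h_s, l_b, h_b, q, and ε (illustrative choices satisfying l_s < h_s < l_b < h_b, not taken from the paper), with the q-malicious utility weighting one's own surplus by (1 − q) and the other trader's surplus by −q:

```python
# Numeric check of the Claim 5 payoff comparisons (hypothetical numbers).
ls, hs, lb, hb = 0.0, 2.0, 4.0, 6.0   # ls < hs < lb < hb
q, eps = 0.25, 0.5                     # q < 0.5, small epsilon

pl = (1 - q) * hs + q * lb             # low trade price
ph = q * hs + (1 - q) * lb             # high trade price
delta = (1 - q) * (hb - lb - eps)      # fine the seller pays under outcome (b)

def u(own, other):
    """q-malicious utility: weight (1-q) on own surplus, -q on the other's."""
    return (1 - q) * own - q * other

# Prices lie strictly between hs and lb, and pl < ph for q < 0.5.
assert hs < pl < ph < lb

# State H, Stage 2: the buyer prefers trading at ph over taking the fine,
# so the punishing outcome (b) is never used on the equilibrium path.
assert u(hb - ph, ph - hs) > delta

# State L, Stage 2 (off-path): the buyer prefers the fine over trading at ph.
assert u(lb - ph, ph - ls) < delta

# Seller's report: in state L, trading at pl beats reporting H (paying delta);
# in state H, trading at ph beats trading at pl.
assert u(pl - ls, lb - pl) > 0 > -delta
assert u(ph - hs, hb - ph) > u(pl - hs, hb - pl)
```

All assertions pass for these values, mirroring the three inequalities in the proof of Claim 5.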
To see this, it is enough to consider normal forms (as any extensive form mechanism can be presented as a normal one). Consider a matrix representation, where the seller is the row player and the buyer is the column player, and every entry includes an outcome. Suppose there is an equilibrium entry for state L. The associated column must be all p_l, otherwise the seller would have an incentive to deviate. Similarly, the associated row of the H equilibrium entry must be all p_h (otherwise the buyer would deviate) - a contradiction.⁷ ⁸ The buyer prefers p_l and the seller p_h, and so the preferences are identical in both states. Hence reporting preferences over outcomes is not "enough" - players must supply additional "information". This is captured by outcome (b) in the trade mechanism.

Intuitively, if a goal is not Nash-implementable we need to add more outcomes. The drawback is that some "new" additional equilibria must be ruled out. E.g., an additional Nash equilibrium of the trade mechanism is (trade at p_l, (b)): the seller chooses to trade at the low price in either state, and the buyer always chooses the no-trade option that fines the seller if the second stage is reached. Such a threat by the buyer is not credible, because if the mechanism is played only once and Stage 2 is reached in state H, the buyer would strictly decrease his payoff by choosing (b). Clearly, this is not a subgame perfect equilibrium. Although each extensive game-form is strategically equivalent to a normal-form one, the extensive form representation places more structure, and so it seems plausible that the subgame perfect equilibrium will be played.⁹

⁷Formally, this goal is not Maskin monotonic, a necessary condition for Nash-implementability [24].
⁸A similar argument applies to the Fair Assignment Problem.
⁹Interestingly, it is straightforward to construct a sequential mechanism with a unique SPE, and an additional NE with a strictly larger payoff for every
player.

3. PEER-TO-PEER NETWORKS
In this section we describe a simplified Peer-to-Peer network for file sharing, without payments in equilibrium, using a certificate-based challenging method. In this challenging method - as opposed to [28] - an agent that challenges cannot harm other agents unless he provides a valid "certificate". In general, if agent B copied a file f from agent A, then agent A knows that agent B holds a copy of the file. We denote such information as a certificate (B, f) (we omit cryptographic details). Such a certificate can be recorded and distributed along the network, and so we can treat each agent holding the certificate as an informed agent.

Assumptions: We assume a homogeneous system with files of equal size. The benefit each agent gains by holding a copy of any file is V. The only cost each agent incurs is the uploading cost C (induced while transferring a file to an immediate neighbor). All other costs are negligible (e.g., storing the certificates, forwarding messages, providing acknowledgements, digital signatures, etc.). Let up_A, down_A be the numbers of agent A's uploads and downloads if he always cooperates. We assume that each agent A enters the system only if up_A · C < down_A · V. Each agent has a quasilinear utility and only cares about his current bandwidth usage. In particular, he ignores future scenarios (e.g., whether forwarding or dropping a packet might affect future demand).

3.1 Basic Mechanism
We start with a mechanism for a network with three p-informed agents: B, A1, A2. We assume that B is directly connected to A1 and A2. If B has the certificate (A1, f), then he can apply directly to A1 and request the file (if A1 refuses, B can go to court). The following basic sequential mechanism is applicable whenever agent B is not informed and still would like to download the file if it exists in the network. Note that this goal cannot be implemented in dominant strategies without payments (similar to Claim 1, where the type of each agent here is the set of files he holds). Define t_{A,B} to be the monetary amount that agent A should transfer to agent B.

• Stage 1: Agent B requests the file f from A1.
  - If A1 replies "yes", then B downloads the file from A1. STOP.
  - Otherwise, agent B sends A1's "no" reply to agent A2. If A2 declares "agree", go to the next stage. Else, A2 sends a certificate (A1, f) to agent B.
    · If the certificate is correct, then t_{A1,A2} = β_p. STOP.
    · Else, t_{A2,A1} = C + ε. STOP.
• Stage 2: Agent B requests the file f from A2. Switch the roles of the agents A1, A2.

PROOF. Clearly, if the mechanism ends without challenging, −C ≤ u(A_i) ≤ 0, and so challenging with an invalid certificate is always a dominated strategy. Now, when Stage 2 is reached, A2 is the last to report whether he has the file. If A2 has the file, it is a weakly dominated strategy to misreport, whether A1 is informed or not:

A2's expected payoff gained by misreporting "no" ≤ p · (−β_p) + (1 − p) · 0 < −C = A2's payoff if he reports "yes".

This argument can be reasoned also for Stage 1, when A1 reports whether he has the file: A1 knows that A2 will report "yes" if and only if A2 has the file in the next stage, and so the maximum payoff A1 can gain is at most zero, since he cannot expect to get a bonus.

3.2 Chain Networks
In a chain network, agent B is directly connected to A1, and each A_i is directly connected to agent A_{i+1}. Assume that we have an acknowledgment protocol to confirm the receipt of a particular message. To avoid message dropping, we add the fine (β_p + 2ε) to be paid by an agent who has not properly forwarded a message. The chain mechanism follows:

• Stage i: Agent B forwards a request for the file f to A_i (through {A_k}_{k<i}).
  - If A_i reports "yes", then B downloads f from A_i. STOP.
  - Otherwise A_i reports "no". If A_j sends a certificate (A_k, f) to B (j, k ≤ i), then:
    · If
certificate (Ak, f) is correct, then tAk,Aj = β/p. STOP.\n-- Else, tAj,Ak = C + ε. STOP.\nIf Ai reports that he has no copy of the file, then any agent in between might challenge.\nUsing digital signatures and acknowledgements, observe that every agent must forward each message, even if it contains a certificate showing that he himself has misreported.\nWe use the same fine, β/p, as in the basic mechanism, because the protocol might end at stage 1 (clearly, the former analysis still applies, since the actual p increases with the number of players).\n3.3 Network Mechanism\nIn this subsection we consider general network structures.\nWe need the assumption that there is a ping protocol that checks whether a neighboring agent is on-line or not (that is, an on-line agent cannot hide himself).\nTo limit the amount of information to be recorded, we assume that an agent is committed to keep any downloaded file for at least one hour, and so certificates are valid for a limited amount of time.\nWe assume that each agent has a digitally signed listing of his current immediate neighbors.\nAs in real P2P file-sharing applications, we restrict each request for a file to be forwarded at most r times (that is, downloads are possible only inside a neighborhood of radius r).\nThe network mechanism utilizes the chain mechanism in the following way: when agent B requests a file from agent A (at most r − 1 far), A sends to B the list of his neighbors and the output of the ping protocol to all of these neighbors.\nAs a result, B can explore the network.\nRemark: In this mechanism we assumed that the environment is p-informed.\nAn important design issue that is not addressed here is the incentives for the information propagation phase.\n4.\nWEB CACHE\nWeb caches are a widely used tool to improve overall system efficiency by allowing fast local access.\nThey were listed in [12] as a challenging application of Distributed Algorithmic Mechanism Design.\nNisan [30] considered
a single cache shared by strategic agents.\nIn this problem, agent i gains the value vi if a particular item is loaded to the local shared cache.\nThe efficient goal is to load the item if and only if Σi vi ≥ C, where C is the loading cost.\nThis goal reduces to the "public project" problem analyzed by Clarke [10].\nHowever, it is well known that this mechanism is not budget-balanced (e.g., if the valuation of each player is C, then everyone pays zero).\nIn this section we suggest informational and environmental assumptions for which we describe a decentralized budget-balanced efficient mechanism.\nWe consider environments for which future demand of each agent depends on past demand.\nThe underlying informational and environmental requirements are as follows.\n1.\nAn agent can read the content of a message only if he is the target node (even if he has to forward the message as an intermediate node of some routing path).\nAn agent cannot initiate a message on behalf of other agents.\n2.\nAn acknowledgement protocol is available, so that every agent can provide a certificate indicating that he handled a certain message properly.\n3.\nNegligible costs: we assume p-informed agents, where p is such that the agent's induced cost for keeping records of information is negligible.\nWe also assume that the cost incurred by sending and forwarding messages is negligible.\n4.\nLet qi(t) denote the number of loading requests agent\ni initiated for the item during the time slot t.\nWe assume that vi(t), the value for caching the item in the beginning of slot t, depends only on the most recent slot; formally, vi(t) = max {Vi(qi(t − 1)), C}, where Vi(·) is a non-decreasing real function.\nIn addition, Vi(·) is common knowledge among the players.\n5.\nThe network is "homogeneous" in the sense that if agent j happens to handle k requests initiated by agent i during the time slot t, then qi(t) = λk, where λ depends on the routing protocol and the environment (λ might
be smaller than 1, if each request is "flooded" several times).\nWe assume that the only way agent i can affect the true qi(t) is by superficially increasing his demand for the cached item, but not the other way (that is, an agent's loss, incurred by giving up a necessary request for the item, is not negligible).\nThe first requirement is to avoid free riding, and also to avoid the case that an agent superficially increases the demand of others and as a result decreases his own demand.\nThe second requirement is to avoid the case that an agent who gets a routing request for the item records it and then drops it.\nThe third is to ensure that the environment stays well informed.\nIn addition, if the forwarding cost is negligible, each agent cooperates and forwards messages, as he would not like to decrease the future demand (which monotonically depends on the current time slot, as assumed in the fourth requirement) of some other agent.\nGiven that the payments are increasing with the declared values, the fourth and fifth requirements ensure that an agent would not increase his demand superficially, and so qi(t) is the true demand.\nThe following Web-Cache Mechanism implements the efficient goal that shares the cost proportionally.\nFor simplicity it is described for two players, and w.l.o.g. vi(t) equals the number of requests initiated by i and observed by any informed j (that is, λ = 1 and Vi(qi(t − 1)) = qi(t − 1)).\n• Stage 1 ("Elicitation of vA(t)"): Alice announces vA.\nBob announces v′A ≥ vA.
If v′A = vA, goto the next stage.\nOtherwise ("Bob challenges"):\n-- If Bob provides v′A valid records, then Alice pays C to finance the loading of the item into the cache.\nShe also pays β/p to Bob.\nSTOP.\n-- Otherwise, Bob finances the loading of the item into the cache.\nSTOP.\n• Stage 2: The elicitation of vB(t) is done analogously.\n• Stage 3: If vA + vB < C, then STOP.\nOtherwise, load the item to the cache; Alice pays pA = vA/(vA + vB) · C, and Bob pays pB = vB/(vA + vB) · C.\nPROOF.\nChallenging without being informed (that is, without providing enough valid records) is always a dominated strategy in this mechanism.10\nNow, assume w.l.o.g. that Alice is the last to report her value.\nAlice's expected payoff gained by underreporting is p · (−C − β/p) + (1 − p) · C < p · 0 + (1 − p) · 0 = Alice's expected payoff if she honestly reports.\nThe right-hand side equals zero, as the participation costs are negligible.\nReasoning back, Bob cannot expect to get the bonus, and so misreporting is a dominated strategy for him.\n10See [29] or [12] for exact definitions.\n5.\nCONCLUDING REMARKS\nIn this paper we have seen a new partial informational assumption, and we have demonstrated its suitability to networks in which computational agents can easily collect and distribute information.\nWe then described some mechanisms using the concept of iterative elimination of weakly dominated strategies.\nSome issues for future work include: • As we have seen, the implementation issue in p-informed environments is straightforward - it is easy to construct "incentive compatible" mechanisms even for non-single-parameter cases.\nThe challenge is to find more realistic scenarios in which the partial informational assumption is applicable.\n• Mechanisms for information propagation and maintenance.\nIn our examples we chose p such that the maintenance cost over time is negligible.\nHowever, the dynamics of the general case is delicate: an agent can use the
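The proportional cost-sharing step of the Web-Cache Mechanism can be sketched directly; a minimal sketch for the two-player case (the function name and numeric values are illustrative):

```python
# Stage 3 of the Web-Cache Mechanism, two players: load the item iff
# vA + vB >= C, and share the loading cost C in proportion to the
# declared values.

def stage3(vA, vB, C):
    """Return (loaded, pA, pB) for declared values vA, vB and loading cost C."""
    if vA + vB < C:
        return False, 0.0, 0.0
    total = vA + vB
    return True, (vA / total) * C, (vB / total) * C

loaded, pA, pB = stage3(vA=3.0, vB=1.0, C=2.0)
assert loaded and abs((pA + pB) - 2.0) < 1e-9  # budget balanced: payments sum to C
```

Note that the payments always sum to exactly C when the item is loaded, which is the budget-balance property the section aims for.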
recorded information to eliminate data that is not \"likely\" to be needed, in order to decrease his maintenance costs.\nAs a result, the probability that the environment is informed decreases, and selfish agents would not cooperate.\nIncentives for information propagation should be considered as well (e.g., for P2P networks for file sharing).\n\u2022 It seems that some social choice goals cannot be implemented if each player is at least 1\/n-malicious (where n is the number of players).\nIt would be interesting to identify these cases.","keyphrases":["decentr incent compat mechan","partial inform environ","domin strategi implement","distribut environ","comput entiti","cooper","agent","distribut algorithm mechan design","vickrei-clark-grove","weakli domin strategi iter elimin","peer-to-peer","p-inform environ"],"prmu":["P","P","P","P","P","P","P","R","U","R","U","M"]} {"id":"J-74","title":"On Cheating in Sealed-Bid Auctions","abstract":"Motivated by the rise of online auctions and their relative lack of security, this paper analyzes two forms of cheating in sealed-bid auctions. The first type of cheating we consider occurs when the seller spies on the bids of a second-price auction and then inserts a fake bid in order to increase the payment of the winning bidder. In the second type, a bidder cheats in a first-price auction by examining the competing bids before deciding on his own bid. In both cases, we derive equilibrium strategies when bidders are aware of the possibility of cheating. 
These results provide insights into sealed-bid auctions even in the absence of cheating, including some counterintuitive results on the effects of overbidding in a first-price auction.","lvl-1":"On Cheating in Sealed-Bid Auctions Ryan Porter rwporter@stanford.edu Yoav Shoham shoham@stanford.edu Computer Science Department Stanford University Stanford, CA 94305 ABSTRACT Motivated by the rise of online auctions and their relative lack of security, this paper analyzes two forms of cheating in sealed-bid auctions.\nThe first type of cheating we consider occurs when the seller spies on the bids of a second-price auction and then inserts a fake bid in order to increase the payment of the winning bidder.\nIn the second type, a bidder cheats in a first-price auction by examining the competing bids before deciding on his own bid.\nIn both cases, we derive equilibrium strategies when bidders are aware of the possibility of cheating.\nThese results provide insights into sealedbid auctions even in the absence of cheating, including some counterintuitive results on the effects of overbidding in a first-price auction.\nCategories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Economics, Security 1.\nINTRODUCTION Among the types of auctions commonly used in practice, sealed-bid auctions are a good practical choice because they require little communication and can be completed almost instantly.\nEach bidder simply submits a bid, and the winner is immediately determined.\nHowever, sealed-bid auctions do require that the bids be kept private until the auction clears.\nThe increasing popularity of online auctions only makes this disadvantage more troublesome.\nAt an auction house, with all participants present, it is difficult to examine a bid that another bidder gave directly to the auctioneer.\nHowever, in an online auction the auctioneer is often little more than a server with questionable security; and, since all 
participants are in different locations, one can anonymously attempt to break into the server.\nIn this paper, we present a game theoretic analysis of how bidders should behave when they are aware of the possibility of cheating that is based on knowledge of the bids.\nWe investigate this type of cheating along two dimensions: whether it is the auctioneer or a bidder who cheats, and which variant (either first or second-price) of the sealed-bid auction is used.\nNote that two of these cases are trivial.\nIn our setting, there is no incentive for the seller to submit a shill bid in a first price auction, because doing so would either cancel the auction or not affect the payment of the winning bidder.\nIn a second-price auction, knowing the competing bids does not help a bidder because it is dominant strategy to bid truthfully.\nThis leaves us with two cases that we examine in detail.\nA seller can profitably cheat in a second-price auction by looking at the bids before the auction clears and submitting an extra bid.\nThis possibility was pointed out as early as the seminal paper [12] that introduced this type of auction.\nFor example, if the bidders in an eBay auction each use a proxy bidder (essentially creating a second-price auction), then the seller may be able to break into eBay``s server, observe the maximum price that a bidder is willing to pay, and then extract this price by submitting a shill bid just below it using a false identity.\nWe assume that there is no chance that the seller will be caught when it cheats.\nHowever, not all sellers are willing to use this power (or, not all sellers can successfully cheat).\nWe assume that each bidder knows the probability with which the seller will cheat.\nPossible motivation for this knowledge could be a recently published expos\u00b4e on seller cheating in eBay auctions.\nIn this setting, we derive an equilibrium bidding strategy for the case in which each bidder``s value for the good is independently drawn from a 
common distribution (with no further assumptions except for continuity and differentiability).\nThis result shows how first and second-price auctions can be viewed as the endpoints of a spectrum of auctions.\nBut why should the seller have all the fun?\nIn a first-price auction, a bidder must bid below his value for the good (also called shaving his bid) in order to have positive utility if he 76 wins.\nTo decide how much to shave his bid, he must trade off the probability of winning the auction against how much he will pay if he does win.\nOf course, if he could simply examine the other bids before submitting his own, then his problem is solved: bid the minimum necessary to win the auction.\nIn this setting, our goal is to derive an equilibrium bidding strategy for a non-cheating bidder who is aware of the possibility that he is competing against cheating bidders.\nWhen bidder values are drawn from the commonly-analyzed uniform distribution, we show the counterintuitive result that the possibility of other bidders cheating has no effect on the equilibrium strategy of an honest bidder.\nThis result is then extended to show the robustness of the equilibrium of a first-price auction without the possibility of cheating.\nWe conclude this section by exploring other distributions, including some in which the presence of cheating bidders actually induces an honest bidder to lower its bid.\nThe rest of the paper is structured as follows.\nIn Section 2 we formalize the setting and present our results for the case of a seller cheating in a second price auction.\nSection 3 covers the case of bidders cheating in a first-price auction.\nIn Section 4, we quantify the effects that the possibility of cheating has on an honest seller in the two settings.\nWe discuss related work, including other forms of cheating in auctions, in Section 5, before concluding with Section 6.\nAll proofs and derivations are found in the appendix.\n2.\nSECOND-PRICE AUCTION, CHEATING SELLER In this 
section, we consider a second-price auction in which the seller may cheat by inserting a shill bid after observing all of the bids.\nThe formulation for this section will be largely reused in the following section on bidders cheating in a first-price auction.\nWhile no prior knowledge of game theory or auction theory is assumed, good introductions can be found in [2] and [6], respectively.\n2.1 Formulation The setting consists of N bidders, or agents, (indexed by i = 1, \u00b7 \u00b7 \u00b7 , n) and a seller.\nEach agent has a type \u03b8i \u2208 [0, 1], drawn from a continuous range, which represents the agent``s value for the good being auctioned.2 Each agent``s type is independently drawn from a cumulative distribution function (cdf ) F over [0, 1], where F(0) = 0 and F(1) = 1.\nWe assume that F(\u00b7) is strictly increasing and differentiable over the interval [0, 1].\nCall the probability density function (pdf ) f(\u03b8i) = F (\u03b8i), which is the derivative of the cdf.\nEach agent knows its own type \u03b8i, but only the distribution over the possible types of the other agents.\nA bidding strategy for an agent bi : [0, 1] \u2192 [0, 1] maps its type to its bid.3 Let \u03b8 = (\u03b81, \u00b7 \u00b7 \u00b7 , \u03b8n) be the vector of types for all agents, and \u03b8\u2212i = (\u03b81, \u00b7 \u00b7 \u00b7 , \u03b8i\u22121, \u03b8i+1, \u00b7 \u00b7 \u00b7 \u03b8n) be the vector of all types except for that of agent i.\nWe can then combine the vectors so that \u03b8 = (\u03b8i, \u03b8\u2212i).\nWe also define the vector of bids as b(\u03b8) = (b1(\u03b81), ... 
, bn(θn)), and this vector without the bid of agent i as b−i(θ−i).\n2 We can restrict the types to the range [0, 1] without loss of generality because any distribution over a different range can be normalized to this range.\n3 We thus limit agents to deterministic bidding strategies, but, because of our continuity assumption, there always exists a pure strategy equilibrium.\nLet b[1](θ) be the value of the highest bid of the vector b(θ), with a corresponding definition for b[1](θ−i).\nAn agent obviously wins the auction if its bid is greater than all other bids, but ties complicate the formulation.\nFortunately, we can ignore the case of ties in this paper because our continuity assumption will make them a zero-probability event in equilibrium.\nWe assume that the seller does not set a reserve price.4\nIf the seller does not cheat, then the winning agent pays the highest bid by another agent.\nOn the other hand, if the seller does cheat, then the winning agent will pay its bid, since we assume that a cheating seller would take full advantage of its power.\nLet the indicator variable µc be 1 if the seller cheats, and 0 otherwise.\nThe probability that the seller cheats, Pc, is known by all agents.5\nWe can then write the payment of the winning agent as follows.\npi(b(θ), µc) = µc · bi(θi) + (1 − µc) · b[1](θ−i)  (1)\nLet µ(·) be an indicator function that takes an inequality as an argument and returns 1 if it holds, and 0 otherwise.\nThe utility for agent i is zero if it does not win the auction, and the difference between its valuation and its price if it does.\nui(b(θ), µc, θi) = µ(bi(θi) > b[1](θ−i)) · (θi − pi(b(θ), µc))  (2)\nWe will be concerned with the expected utility of an agent, with the expectation taken over the types of the other agents and over whether or not the seller cheats.\nBy
pushing the expectation inward so that it is only over the price (conditioned on the agent winning the auction), we can write the expected utility as:\nEθ−i,µc [ui(b(θ), µc, θi)] = Prob(bi(θi) > b[1](θ−i)) · (θi − Eθ−i,µc [pi(b(θ), µc) | bi(θi) > b[1](θ−i)])  (3)\nWe assume that all agents are rational, expected utility maximizers.\nBecause of the uncertainty over the types of the other agents, we will be looking for a Bayes-Nash equilibrium.\nA vector of bidding strategies b∗ is a Bayes-Nash equilibrium if for each agent i and each possible type θi, agent i cannot increase its expected utility by using an alternate bidding strategy bi, holding the bidding strategies for all other agents fixed.\nFormally, b∗ is a Bayes-Nash equilibrium if:\n∀i, θi, bi:  Eθ−i,µc [ui((b∗i(θi), b∗−i(θ−i)), µc, θi)] ≥ Eθ−i,µc [ui((bi(θi), b∗−i(θ−i)), µc, θi)]  (4)\n2.2 Equilibrium\nWe first present the Bayes-Nash equilibrium for an arbitrary distribution F(·).\n4 This simplifies the analysis, but all of our results can be applied to the case in which the seller announces a reserve price before the auction begins.\n5 Note that common knowledge is not necessary for the existence of an equilibrium.\nTheorem 1.\nIn a second-price auction in which the seller cheats with probability Pc, it is a Bayes-Nash equilibrium for each agent to bid according to the following strategy:\nbi(θi) = θi − [∫0^θi F(x)^((N−1)/Pc) dx] / F(θi)^((N−1)/Pc)  (5)\nIt is useful to consider the extreme points of Pc.\nSetting Pc = 1 yields the correct result for a first-price auction (see, e.g., [10]).\nIn the case of Pc = 0, this solution is not defined.\nHowever, in the limit, bi(θi) approaches θi as Pc approaches 0, which is what we expect as the
auction approaches a standard second-price auction.\nThe position of Pc is perhaps surprising.\nFor example, the linear combination bi(θi) = θi − Pc · [∫0^θi F(x)^(N−1) dx] / F(θi)^(N−1) of the equilibrium bidding strategies of first and second-price auctions would have also given us the correct bidding strategies for the cases of Pc = 0 and Pc = 1.\n2.3 Continuum of Auctions\nAn alternative perspective on the setting is as a continuum between first and second-price auctions.\nConsider a probabilistic sealed-bid auction in which the seller is honest, but the price paid by the winning agent is determined by a weighted coin flip: with probability Pc it is his bid, and with probability 1 − Pc it is the second-highest bid.\nBy adjusting Pc, we can smoothly move between a first and second-price auction.\nFurthermore, the fact that this probabilistic auction satisfies the properties required for the Revenue Equivalence Theorem (see, e.g., [2]) provides a way to verify that the bidding strategy in Equation 5 is the symmetric equilibrium of this auction (see the alternative proof of Theorem 1 in the appendix).\n2.4 Special Case: Uniform Distribution\nAnother way to try to gain insight into Equation 5 is by instantiating the distribution of types.\nWe now consider the often-studied uniform distribution: F(θi) = θi.\nCorollary 2.\nIn a second-price auction in which the seller cheats with probability Pc, and F(θi) = θi, it is a Bayes-Nash equilibrium for each agent to bid according to the following strategy:\nbi(θi) = [(N − 1)/(N − 1 + Pc)] · θi  (6)\nThis equilibrium bidding strategy, parameterized by Pc, can be viewed as an interpolation between two well-known results.\nWhen Pc = 0 the bidding strategy is now well-defined (each agent bids its true type), while when Pc = 1 we get the correct result for a first-price auction: each agent bids according to the strategy bi(θi) = (N − 1)/N
\u03b8i.\n3.\nFIRST-PRICE AUCTION, CHEATING AGENTS We now consider the case in which the seller is honest, but there is a chance that agents will cheat and examine the other bids before submitting their own (or, alternatively, they will revise their bid before the auction clears).\nSince this type of cheating is pointless in a second-price auction, we only analyze the case of a first-price auction.\nAfter revising the formulation from the previous section, we present a fixed point equation for the equilibrium strategy for an arbitrary distribution F(\u00b7).\nThis equation will be useful for the analysis the uniform distribution, in which we show that the possibility of cheating agents does not change the equilibrium strategy of honest agents.\nThis result has implications for the robustness of the symmetric equilibrium to overbidding in a standard first-price auction.\nFurthermore, we find that for other distributions overbidding actually induces a competing agent to shave more off of its bid.\n3.1 Formulation It is clear that if a single agent is cheating, he will bid (up to his valuation) the minimum amount necessary to win the auction.\nIt is less obvious, though, what will happen if multiple agents cheat.\nOne could imagine a scenario similar to an English auction, in which all cheating agents keep revising their bids until all but one cheater wants the good at the current winning bid.\nHowever, we are only concerned with how an honest agent should bid given that it is aware of the possibility of cheating.\nThus, it suffices for an honest agent to know that it will win the auction if and only if its bid exceeds every other honest agent``s bid and every cheating agent``s type.\nThis intuition can be formalized as the following discriminatory auction.\nIn the first stage, each agent``s payment rule is determined.\nWith probability Pa , the agent will pay the second highest bid if it wins the auction (essentially, he is a cheater), and otherwise it will have to 
pay its bid.\nThese selections are recorded by a vector of indicator variables µa = (µa1, ..., µan), where µai = 1 denotes that agent i pays the second-highest bid.\nEach agent knows the probability Pa, but does not know the payment rule for any other agent.\nOtherwise, this auction is a standard, sealed-bid auction.\nIt is thus a dominant strategy for a cheater to bid its true type, making this formulation strategically equivalent to the setting outlined in the previous paragraph.\nThe expression for the utility of an honest agent in this discriminatory auction is as follows.\nui(b(θ), µa, θi) = (θi − bi(θi)) · Πj≠i [µaj · µ(bi(θi) > θj) + (1 − µaj) · µ(bi(θi) > bj(θj))]  (7)\n3.2 Equilibrium\nOur goal is to find the equilibrium in which all cheating agents use their dominant strategy of bidding truthfully and honest agents bid according to a symmetric bidding strategy.\nSince we have left F(·) unspecified, we cannot present a closed-form solution for the honest agent's bidding strategy, and instead give a fixed point equation for it.\nTheorem 3.\nIn a first-price auction in which each agent cheats with probability Pa, it is a Bayes-Nash equilibrium for each non-cheating agent i to bid according to the strategy that is a fixed point of the following equation:\nbi(θi) = θi − [∫0^θi (Pa · F(bi(x)) + (1 − Pa) · F(x))^(N−1) dx] / (Pa · F(bi(θi)) + (1 − Pa) · F(θi))^(N−1)  (8)\n3.3 Special Case: Uniform Distribution\nSince we could not solve Equation 8 in the general case, we can only see how the possibility of cheating affects the equilibrium bidding strategy for particular instances of F(·).\nA natural place to start is the uniform distribution: F(θi) = θi.\nRecall the logic behind the symmetric equilibrium strategy in a first-price auction without cheating: bi(θi) = (N − 1)/N
\u03b8i is the optimal tradeoff between increasing the probability of winning and decreasing the price paid upon winning, given that the other agents are bidding according to the same strategy.\nSince in the current setting the cheating agents do not shave their bid at all and thus decrease an honest agent``s probability of winning (while obviously not affecting the price that an honest agent pays if he wins), it is natural to expect that an honest agent should compensate by increasing his bid.\nThe idea is that sacrificing some potential profit in order to regain some of the lost probability of winning would bring the two sides of the tradeoff back into balance.\nHowever, it turns out that the equilibrium bidding strategy is unchanged.\nCorollary 4.\nIn a first-price auction in which each agent cheats with probability Pa , and F(\u03b8i) = \u03b8i, it is a BayesNash equilibrium for each non-cheating agent to bid according to the strategy bi(\u03b8i) = N\u22121 N \u03b8i.\nThis result suggests that the equilibrium of a first-price auction is particularly robust when types are drawn from the uniform distribution, since the best response is unaffected by deviations of the other agents to the strategy of always bidding their type.\nIn fact, as long as all other agents shave their bid by a fraction (which can differ across the agents) no greater than 1 N , it is still a best response for the remaining agent to bid according to the equilibrium strategy.\nNote that this result holds even if other agents are shaving their bid by a negative fraction, and are thus irrationally bidding above their type.\nTheorem 5.\nIn a first-price auction where F(\u03b8i) = \u03b8i, if each agent j = i bids according a strategy bj(\u03b8j) = N\u22121+\u03b1j N \u03b8j, where \u03b1j \u2265 0, then it is a best response for the remaining agent i to bid according to the strategy bi(\u03b8i) = N\u22121 N \u03b8i.\nObviously, these strategy profiles are not equilibria (unless each \u03b1j = 
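As a numerical sanity check on Corollary 4 (a sketch under the uniform-distribution assumption F(x) = x; the midpoint-rule integration is our own choice, not the paper's), plugging b(x) = (N − 1)/N · x into the right-hand side of Equation 8 returns the same strategy:

```python
# Right-hand side of the fixed-point equation (Equation 8) for uniform F,
# evaluated by midpoint-rule numerical integration.

def rhs(theta, N, Pa, b, steps=10_000):
    g = lambda x: (Pa * b(x) + (1 - Pa) * x) ** (N - 1)  # F(y) = y here
    dx = theta / steps
    integral = sum(g((i + 0.5) * dx) for i in range(steps)) * dx  # midpoint rule
    return theta - integral / g(theta)

N = 5
b = lambda x: (N - 1) / N * x  # candidate fixed point from Corollary 4
for theta in (0.25, 0.6, 1.0):
    assert abs(rhs(theta, N, Pa=0.5, b=b) - b(theta)) < 1e-4
```

The assertions pass for any Pa in (0, 1], which matches the corollary's claim that the honest bidder's strategy is independent of the cheating probability under the uniform distribution.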
0), because each agent j has an incentive to set αj = 0.\nThe point of this theorem is that a wide range of possible beliefs that an agent can hold about the strategies of the other agents will all lead him to play the equilibrium strategy.\nThis is important because a common (and valid) criticism of equilibrium concepts such as Nash and Bayes-Nash is that they are silent on how the agents converge on a strategy profile from which no one wants to deviate.\nHowever, if the equilibrium strategy is a best response to a large set of strategy profiles that are out of equilibrium, then it seems much more plausible that the agents will indeed converge on this equilibrium.\nIt is important to note, though, that while this equilibrium is robust against arbitrary deviations to strategies that shave less, it is not robust to even a single agent shaving more off of its bid.\nIn fact, if we take any strategy profile consistent with the conditions of Theorem 5 and change a single agent j's strategy so that its corresponding αj is negative, then agent i's best response is to shave more than 1/N off of its bid.\n3.4 Effects of Overbidding for Other Distributions\nA natural question is whether the best response bidding strategy is similarly robust to overbidding by competing agents for other distributions.\nIt turns out that Theorem 5 holds for all distributions of the form F(θi) = (θi)^k, where k is some positive integer.\nHowever, taking a simple linear combination of two such distributions to produce F(θi) = (θi² + θi)/2 yields a distribution in which an agent should actually shave its bid more when other agents shave their bids less.\nIn the example we present for this distribution (with the details in the appendix), there are only two players and the deviation by one agent is to bid his type.\nHowever, it can be generalized to a higher number of agents and to other deviations.\nExample 1.\nIn a first-price auction where F(θi) = (θi²
 + θi)/2 and N = 2, if agent 2 always bids its type (b2(θ2) = θ2), then, for all θ1 > 0, agent 1's best response bidding strategy is strictly less than the bidding strategy of the symmetric equilibrium.\nWe also note that the same result holds for the normalized exponential distribution (F(θi) = (e^θi − 1)/(e − 1)).\nIt is certainly the case that distributions can be found that support the intuition given above that agents should shave their bid less when other agents are doing likewise.\nExamples include F(θi) = −(1/2)θi² + (3/2)θi (the solution to the system of equations: F′′(θi) = −1, F(0) = 0, and F(1) = 1), and F(θi) = (e − e^(1−θi))/(e − 1).\nIt would be useful to relate the direction of the change in the best response bidding strategy to a general condition on F(·).\nUnfortunately, we were not able to find such a condition, in part because the integral in the symmetric bidding strategy of a first-price auction cannot be solved without knowing F(·) (or at least some restrictions on it).\nWe do note, however, that the sign of the second derivative of F(θi)/f(θi) is an accurate predictor for all of the distributions that we considered.\n4.\nREVENUE LOSS FOR AN HONEST SELLER\nIn both of the settings we covered, an honest seller suffers a loss in expected revenue due to the possibility of cheating.\nThe equilibrium bidding strategies that we derived allow us to quantify this loss.\nAlthough this is as far as we will take the analysis, it could be applied to more general settings, in which the seller could, for example, choose the market in which he sells his good or pay a trusted third party to oversee the auction.\nIn a second-price auction in which the seller may cheat, an honest seller suffers due to the fact that the agents will shave their bids.\nFor the case in which agent types are drawn from the uniform distribution, every agent will shave its bid by Pc /
N\u22121+P c , which is thus also the fraction by which an honest seller``s revenue decreases due to the possibility of cheating.\nAnalysis of the case of a first-price auction in which agents may cheat is not so straightforward.\nIf Pa = 1 (each agent cheats with certainty), then we simply have a second-price auction, and the seller``s expected revenue will be unchanged.\nAgain considering the uniform distribution for agent types, it is not surprising that Pa = 1 2 causes the seller to lose 79 the most revenue.\nHowever, even in this worst case, the percentage of expected revenue lost is significantly less than it is for the second-price auction in which Pc = 1 2 , as shown in Table 1.6 It turns out that setting Pc = 0.2 would make the expected loss of these two settings comparable.\nWhile this comparison between the settings is unlikely to be useful for a seller, it is interesting to note that agent suspicions of possible cheating by the seller are in some sense worse than agents actually cheating themselves.\nPercentage of Revenue lost for an Honest Seller Agents Second-Price Auction First-Price Auction (Pc = 0.5) (Pa = 0.5) 2 33 12 5 11 4.0 10 5.3 1.8 15 4.0 1.5 25 2.2 0.83 50 1.1 0.38 100 0.50 0.17 Table 1: The percentage of expected revenue lost by an honest seller due to the possibility of cheating in the two settings considered in this paper.\nAgent valuations are drawn from the uniform distribution.\n5.\nRELATED WORK Existing work covers another dimension along which we could analyze cheating: altering the perceived value of N.\nIn this paper, we have assumed that N is known by all of the bidders.\nHowever, in an online setting this assumption is rather tenuous.\nFor example, a bidder``s only source of information about N could be a counter that the seller places on the auction webpage, or a statement by the seller about the number of potential bidders who have indicated that they will participate.\nIn these cases, the seller could arbitrarily manipulate 
the perceived $N$.\nIn a first-price auction, the seller obviously has an incentive to increase the perceived value of $N$ in order to induce agents to bid closer to their true valuation.\nHowever, if agents are aware that the seller has this power, then any communication about $N$ to the agents is cheap talk, and furthermore is not credible.\nThus, in equilibrium the agents would ignore the declared value of $N$, and bid according to their own prior beliefs about the number of agents.\nIf we make the natural assumption of a common prior, then the setting reduces to the one tackled by [5], which derived the equilibrium bidding strategies of a first-price auction when the number of bidders is drawn from a known distribution but not revealed to any of the bidders.\nOf course, instead of assuming that the seller can always exploit this power, we could assume that it can only do so with some probability that is known by the agents.\nThe analysis would then proceed in a similar manner as that of our cheating seller model.\nThe other interesting case of this form of cheating is by bidders in a first-price auction.\nBidders would obviously want to decrease the perceived number of agents in order to induce their competition to lower their bids.\nWhile it is unreasonable for bidders to be able to alter the perceived $N$ arbitrarily, collusion provides an opportunity to decrease the perceived $N$ by having only one of a group of colluding agents participate in the auction.\n(Footnote 6: Note that we have not considered the costs of the seller.\nThus, the expected loss in profit could be much greater than the numbers that appear here.)\nWhile the non-colluding agents would account for this possibility, as long as they are not certain of the collusion they will still be induced to shave more off of their bids than they would if the collusion did not take place.\nThis issue is tackled in [7].\nOther types of collusion are of course related to the general topic of cheating in auctions.\nResults on
collusion in first and second-price auctions can be found in [8] and [3], respectively.\nThe work most closely related to our first setting is [11], which also presents a model in which the seller may cheat in a second-price auction.\nIn their setting, the seller is a participant in the Bayesian game who decides between running a first-price auction (where profitable cheating is never possible) or a second-price auction.\nThe seller makes this choice after observing his type, which is his probability of having the opportunity and willingness to cheat in a second-price auction.\nThe bidders, who know the distribution from which the seller's type is drawn, then place their bids.\nIt is shown that, in equilibrium, only a seller with the maximum probability of cheating would ever choose to run a second-price auction.\nOur work differs in that we focus on the agents' strategies in a second-price auction for a given probability of cheating by the seller.\nAn explicit derivation of the equilibrium strategies then allows us to relate first and second-price auctions.\nAn area of related work that can be seen as complementary to ours is that of secure auctions, which takes the point of view of an auction designer.\nThe goals often extend well beyond simply preventing cheating, including properties such as anonymity of the bidders and nonrepudiation of bids.\nCryptographic methods are the standard weapon of choice here (see [1, 4, 9]).\n6.\nCONCLUSION\nIn this paper we presented the equilibria of sealed-bid auctions in which cheating is possible.\nIn addition to providing strategy profiles that are stable against deviations, these results give us insights into both first and second-price auctions.\nThe results for the case of a cheating seller in a second-price auction allow us to relate the two auctions as endpoints along a continuum.\nThe case of agents cheating in a first-price auction showed the robustness of the first-price auction equilibrium when agent types are drawn from the uniform distribution.\nWe also explored the effect of overbidding on the best response bidding strategy for other distributions, and showed that even for relatively simple distributions it can be positive, negative, or neutral.\nFinally, results from both of our settings allowed us to quantify the expected loss in revenue for a seller due to the possibility of cheating.\n7.\nREFERENCES\n[1] M. Franklin and M. Reiter. The design and implementation of a secure auction service. In Proc. IEEE Symp. on Security and Privacy, 1995.\n[2] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991.\n[3] D. Graham and R. Marshall. Collusive bidder behavior at single-object second-price and English auctions. Journal of Political Economy, 95:579-599, 1987.\n[4] M. Harkavy, J. D. Tygar, and H. Kikuchi. Electronic auctions with private bids. In Proceedings of the 3rd USENIX Workshop on Electronic Commerce, 1998.\n[5] R. Harstad, J. Kagel, and D. Levin. Equilibrium bid functions for auctions with an uncertain number of bidders. Economic Letters, 33:35-40, 1990.\n[6] P. Klemperer. Auction theory: A guide to the literature. Journal of Economic Surveys, 13(3):227-286, 1999.\n[7] K. Leyton-Brown, Y. Shoham, and M. Tennenholtz. Bidding clubs in first-price auctions. In AAAI-02.\n[8] R. McAfee and J. McMillan. Bidding rings. The American Economic Review, 71:579-599, 1992.\n[9] M. Naor, B. Pinkas, and R. Sumner. Privacy preserving auctions and mechanism design. In EC-99.\n[10] J. Riley and W. Samuelson. Optimal auctions. American Economic Review, 71(3):381-392, 1981.\n[11] M. Rothkopf and R. Harstad. Two models of bid-taker cheating in Vickrey auctions. The Journal of Business, 68(2):257-267, 1995.\n[12] W.
Vickrey. Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:15-27, 1961.\nAPPENDIX\nTheorem 1.\nIn a second-price auction in which the seller cheats with probability $P^c$, it is a Bayes-Nash equilibrium for each agent to bid according to the following strategy:\n$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P^c}}(x)\,dx}{F^{\frac{N-1}{P^c}}(\theta_i)} \qquad (5)$$\nProof.\nTo find an equilibrium, we start by guessing that there exists an equilibrium in which all agents bid according to the same function $b_i(\theta_i)$, because the game is symmetric.\nFurther, we guess that $b_i(\theta_i)$ is strictly increasing and differentiable over the range $[0, 1]$.\nWe can also assume that $b_i(0) = 0$, because negative bids are not allowed and a positive bid is not rational when the agent's valuation is 0.\nNote that these are not assumptions on the setting; they are merely limitations that we impose on our search.\nLet $\Phi_i : [0, b_i(1)] \to [0, 1]$ be the inverse function of $b_i(\theta_i)$.\nThat is, it takes a bid for agent $i$ as input and returns the type $\theta_i$ that induced this bid.\nRecall Equation 3:\n$$E_{\theta_{-i},\mu^c}\!\left[u_i(b(\theta), \mu^c, \theta_i)\right] = \mathrm{Prob}\!\left(b_i(\theta_i) > b_{[1]}(\theta_{-i})\right) \cdot \left(\theta_i - E_{\theta_{-i},\mu^c}\!\left[p_i(b(\theta), \mu^c) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right]\right)$$\nThe probability that a single other bid is below that of agent $i$ is equal to the cdf at the type that would induce a bid equal to that of agent $i$, which is formally written as $F(\Phi_i(b_i(\theta_i)))$.\nSince all agents are independent, the probability that all other bids are below agent $i$'s is simply this term raised to the $(N-1)$-th power.\nThus, we can re-write the expected utility as:\n$$E_{\theta_{-i},\mu^c}\!\left[u_i(b(\theta), \mu^c, \theta_i)\right] = F^{N-1}(\Phi_i(b_i(\theta_i))) \cdot \left(\theta_i - E_{\theta_{-i},\mu^c}\!\left[p_i(b(\theta), \mu^c) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right]\right) \qquad (9)$$\nWe now solve for the expected payment.\nPlugging Equation 1 (which gives the price for the winning agent) into the term for the expected price in Equation 9, and then simplifying the expectation yields:\n$$E_{\theta_{-i},\mu^c}\!\left[p_i \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right] = E_{\theta_{-i},\mu^c}\!\left[\mu^c b_i(\theta_i) + (1-\mu^c) b_{[1]}(\theta_{-i}) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right] = P^c b_i(\theta_i) + (1-P^c)\, E_{\theta_{-i}}\!\left[b_{[1]}(\theta_{-i}) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right] = P^c b_i(\theta_i) + (1-P^c) \int_0^{b_i(\theta_i)} b_{[1]}(\theta_{-i}) \cdot \mathrm{pdf}\!\left(b_{[1]}(\theta_{-i}) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right) db_{[1]}(\theta_{-i}) \qquad (10)$$\nNote that the integral on the last line is taken up to $b_i(\theta_i)$ because we are conditioning on the fact that $b_i(\theta_i) > b_{[1]}(\theta_{-i})$.\nTo derive the pdf of $b_{[1]}(\theta_{-i})$ given this condition, we start with the cdf.\nFor a given value $b_{[1]}(\theta_{-i})$, the probability that any one agent's bid is less than this value is equal to $F(\Phi_i(b_{[1]}(\theta_{-i})))$.\nWe then condition on the agent's bid being below $b_i(\theta_i)$ by dividing by $F(\Phi_i(b_i(\theta_i)))$.\nThe cdf for the $N-1$ agents is then this value raised to the $(N-1)$-th power:\n$$\mathrm{cdf}\!\left(b_{[1]}(\theta_{-i}) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right) = \frac{F^{N-1}(\Phi_i(b_{[1]}(\theta_{-i})))}{F^{N-1}(\Phi_i(b_i(\theta_i)))}$$\nThe pdf is then the derivative of the cdf with respect to $b_{[1]}(\theta_{-i})$:\n$$\mathrm{pdf}\!\left(b_{[1]}(\theta_{-i}) \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right) = \frac{N-1}{F^{N-1}(\Phi_i(b_i(\theta_i)))} \cdot F^{N-2}(\Phi_i(b_{[1]}(\theta_{-i}))) \cdot f(\Phi_i(b_{[1]}(\theta_{-i}))) \cdot \Phi_i'(b_{[1]}(\theta_{-i}))$$\nSubstituting the pdf into Equation 10 and pulling terms out of the integral that do not depend on $b_{[1]}(\theta_{-i})$ yields:\n$$E_{\theta_{-i},\mu^c}\!\left[p_i \mid b_i(\theta_i) > b_{[1]}(\theta_{-i})\right] = P^c b_i(\theta_i) + \frac{(1-P^c)(N-1)}{F^{N-1}(\Phi_i(b_i(\theta_i)))} \int_0^{b_i(\theta_i)} b_{[1]}(\theta_{-i}) \cdot F^{N-2}(\Phi_i(b_{[1]}(\theta_{-i}))) \cdot f(\Phi_i(b_{[1]}(\theta_{-i}))) \cdot \Phi_i'(b_{[1]}(\theta_{-i})) \, db_{[1]}(\theta_{-i})$$\nPlugging the expected price back into the expected utility equation (9), and distributing $F^{N-1}(\Phi_i(b_i(\theta_i)))$, yields:\n$$E_{\theta_{-i},\mu^c}\!\left[u_i(b(\theta), \mu^c, \theta_i)\right] = F^{N-1}(\Phi_i(b_i(\theta_i))) \cdot \theta_i - F^{N-1}(\Phi_i(b_i(\theta_i))) \cdot P^c b_i(\theta_i) - (1-P^c)(N-1) \int_0^{b_i(\theta_i)} b_{[1]}(\theta_{-i}) \cdot F^{N-2}(\Phi_i(b_{[1]}(\theta_{-i}))) \cdot f(\Phi_i(b_{[1]}(\theta_{-i}))) \cdot \Phi_i'(b_{[1]}(\theta_{-i})) \, db_{[1]}(\theta_{-i})$$\nWe are now ready to optimize the expected utility by taking the derivative with respect to $b_i(\theta_i)$ and setting it to 0.\nNote that we do not need to solve the integral, because it will disappear when the derivative is taken (by application of the Fundamental Theorem of Calculus).\n$$0 = (N-1) F^{N-2}(\Phi_i(b_i(\theta_i))) f(\Phi_i(b_i(\theta_i))) \Phi_i'(b_i(\theta_i)) \theta_i - F^{N-1}(\Phi_i(b_i(\theta_i))) P^c - P^c (N-1) F^{N-2}(\Phi_i(b_i(\theta_i))) f(\Phi_i(b_i(\theta_i))) \Phi_i'(b_i(\theta_i)) b_i(\theta_i) - (1-P^c)(N-1) b_i(\theta_i) F^{N-2}(\Phi_i(b_i(\theta_i))) f(\Phi_i(b_i(\theta_i))) \Phi_i'(b_i(\theta_i))$$\nDividing through by $F^{N-2}(\Phi_i(b_i(\theta_i)))$ and combining like terms yields:\n$$0 = \left(\theta_i - P^c b_i(\theta_i) - (1-P^c) b_i(\theta_i)\right)(N-1) f(\Phi_i(b_i(\theta_i))) \Phi_i'(b_i(\theta_i)) - P^c F(\Phi_i(b_i(\theta_i)))$$\nSimplifying the expression and rearranging terms produces:\n$$b_i(\theta_i) = \theta_i - \frac{P^c \cdot F(\Phi_i(b_i(\theta_i)))}{(N-1) \cdot f(\Phi_i(b_i(\theta_i))) \cdot \Phi_i'(b_i(\theta_i))}$$\nTo further simplify, we use the formula $f'(x) = \frac{1}{g'(f(x))}$, where $g(x)$ is the inverse function of $f(x)$.\nPlugging in the functions from our setting gives us: $\Phi_i'(b_i(\theta_i)) = \frac{1}{b_i'(\theta_i)}$.\nApplying both this equation and $\Phi_i(b_i(\theta_i)) = \theta_i$ yields:\n$$b_i(\theta_i) = \theta_i - \frac{P^c \cdot F(\theta_i) \cdot b_i'(\theta_i)}{(N-1) \cdot f(\theta_i)} \qquad (11)$$\nAttempts at a derivation of the solution from this point proved fruitless, but we are at a point now where a guessed solution can be quickly verified.\nWe used the solution for the first-price auction (see, e.g., [10]) as our starting point to find the answer:\n$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P^c}}(x)\,dx}{F^{\frac{N-1}{P^c}}(\theta_i)} \qquad (12)$$\nTo verify the solution, we first take its derivative:\n$$b_i'(\theta_i) = 1 - \frac{F^{2\frac{N-1}{P^c}}(\theta_i) - \frac{N-1}{P^c} F^{\frac{N-1}{P^c}-1}(\theta_i) f(\theta_i) \int_0^{\theta_i} F^{\frac{N-1}{P^c}}(x)\,dx}{F^{2\frac{N-1}{P^c}}(\theta_i)}$$\nThis simplifies to:\n$$b_i'(\theta_i) = \frac{\frac{N-1}{P^c} f(\theta_i) \int_0^{\theta_i} F^{\frac{N-1}{P^c}}(x)\,dx}{F^{\frac{N-1}{P^c}+1}(\theta_i)}$$\nWe then plug this derivative into the equation we derived (11):\n$$b_i(\theta_i) = \theta_i - \frac{P^c \cdot F(\theta_i) \cdot \frac{N-1}{P^c} f(\theta_i) \int_0^{\theta_i} F^{\frac{N-1}{P^c}}(x)\,dx}{(N-1) \cdot f(\theta_i) \cdot F^{\frac{N-1}{P^c}+1}(\theta_i)}$$\nCancelling terms yields Equation 12, verifying that our guessed solution is correct.\nAlternative Proof of Theorem 1: The following proof uses the Revenue Equivalence Theorem (RET) and the probabilistic auction given as an interpretation of our cheating seller setting.\nIn a first-price auction without the possibility of cheating, the expected payment for an agent with type $\theta_i$ is simply the product of its bid and the probability that this bid is the highest.\nFor the symmetric equilibrium, this is equal to:\n$$F^{N-1}(\theta_i) \cdot \left(\theta_i - \frac{\int_0^{\theta_i} F^{N-1}(x)\,dx}{F^{N-1}(\theta_i)}\right)$$\nFor our probabilistic auction, the expected payment of the winning agent is a weighted average of its bid and the second highest bid.\nFor the $b_i(\cdot)$ we found in the original interpretation of the setting, it can be written as follows:\n$$F^{N-1}(\theta_i) \cdot \left[P^c \left(\theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P^c}}(x)\,dx}{F^{\frac{N-1}{P^c}}(\theta_i)}\right) + (1-P^c) \frac{1}{F^{N-1}(\theta_i)} \int_0^{\theta_i} \left(x - \frac{\int_0^x F^{\frac{N-1}{P^c}}(y)\,dy}{F^{\frac{N-1}{P^c}}(x)}\right)(N-1) F^{N-2}(x) f(x)\,dx\right]$$\nBy the RET, the expected payments will be the same in the two auctions.\nThus, we can verify our equilibrium bidding strategy by showing that the expected payment in the two auctions is equal.\nSince the expected payment is zero at $\theta_i = 0$ for both functions, it suffices to verify that the derivatives of the expected payment functions with respect to $\theta_i$ are equal, for an arbitrary value $\theta_i$.\nThus, we need to verify the following equation:\n$$F^{N-1}(\theta_i) + (N-1) F^{N-2}(\theta_i) f(\theta_i) \theta_i - F^{N-1}(\theta_i) = P^c \left[F^{N-1}(\theta_i)\left(1 - \frac{F^{2\frac{N-1}{P^c}}(\theta_i) - \frac{N-1}{P^c} F^{\frac{N-1}{P^c}-1}(\theta_i) f(\theta_i) \int_0^{\theta_i} F^{\frac{N-1}{P^c}}(x)\,dx}{F^{2\frac{N-1}{P^c}}(\theta_i)}\right) + (N-1) F^{N-2}(\theta_i) f(\theta_i) \left(\theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P^c}}(x)\,dx}{F^{\frac{N-1}{P^c}}(\theta_i)}\right)\right] + (1-P^c)\left(\theta_i - \frac{\int_0^{\theta_i} F^{\frac{N-1}{P^c}}(y)\,dy}{F^{\frac{N-1}{P^c}}(\theta_i)}\right)(N-1) F^{N-2}(\theta_i) f(\theta_i)$$\nThis simplifies to:\n$$0 = P^c \left[\frac{\frac{N-1}{P^c} F^{N-2}(\theta_i) f(\theta_i) \int_0^{\theta_i} F^{\frac{N-1}{P^c}}(x)\,dx}{F^{\frac{N-1}{P^c}}(\theta_i)} + (N-1) F^{N-2}(\theta_i) f(\theta_i) \left(-\frac{\int_0^{\theta_i} F^{\frac{N-1}{P^c}}(x)\,dx}{F^{\frac{N-1}{P^c}}(\theta_i)}\right)\right] + (1-P^c)\left(-\frac{\int_0^{\theta_i} F^{\frac{N-1}{P^c}}(y)\,dy}{F^{\frac{N-1}{P^c}}(\theta_i)}\right)(N-1) F^{N-2}(\theta_i) f(\theta_i)$$\nAfter distributing $P^c$
, the right-hand side of this equation cancels out, and we have verified our equilibrium bidding strategy.\nCorollary 2.\nIn a second-price auction in which the seller cheats with probability $P^c$, and $F(\theta_i) = \theta_i$, it is a Bayes-Nash equilibrium for each agent to bid according to the following strategy:\n$$b_i(\theta_i) = \frac{N-1}{N-1+P^c}\,\theta_i \qquad (6)$$\nProof.\nPlugging $F(\theta_i) = \theta_i$ into Equation 5 (repeated as 12), we get:\n$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} x^{\frac{N-1}{P^c}}\,dx}{\theta_i^{\frac{N-1}{P^c}}} = \theta_i - \frac{\frac{P^c}{N-1+P^c}\,\theta_i^{\frac{N-1+P^c}{P^c}}}{\theta_i^{\frac{N-1}{P^c}}} = \theta_i - \frac{P^c}{N-1+P^c}\,\theta_i = \frac{N-1}{N-1+P^c}\,\theta_i$$\nTheorem 3.\nIn a first-price auction in which each agent cheats with probability $P^a$, it is a Bayes-Nash equilibrium for each non-cheating agent $i$ to bid according to the strategy that is a fixed point of the following equation:\n$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} \left(P^a F(b_i(x)) + (1-P^a) F(x)\right)^{N-1} dx}{\left(P^a F(b_i(\theta_i)) + (1-P^a) F(\theta_i)\right)^{N-1}} \qquad (8)$$\nProof.\nWe make the same guesses about the equilibrium strategy to aid our search as we did in the proof of Theorem 1.\nWhen simplifying the expectation of this setting's utility equation (7), we use the fact that the probability that agent $i$ will have a higher bid than another honest agent is still $F(\Phi_i(b_i(\theta_i)))$, while the probability is $F(b_i(\theta_i))$ if the other agent cheats.\nThe probability that agent $i$ beats a single other agent is then a weighted average of these two probabilities.\nThus, we can write agent $i$'s expected utility as:\n$$E_{\theta_{-i},\mu^a}\!\left[u_i(b(\theta), \mu^a, \theta_i)\right] = \left(\theta_i - b_i(\theta_i)\right) \cdot \left(P^a F(b_i(\theta_i)) + (1-P^a) F(\Phi_i(b_i(\theta_i)))\right)^{N-1}$$\nAs before, to find the equilibrium $b_i(\theta_i)$, we take the derivative and set it to zero:\n$$0 = \left(\theta_i - b_i(\theta_i)\right)(N-1)\left(P^a F(b_i(\theta_i)) + (1-P^a) F(\Phi_i(b_i(\theta_i)))\right)^{N-2} \cdot \left(P^a f(b_i(\theta_i)) + (1-P^a) f(\Phi_i(b_i(\theta_i))) \Phi_i'(b_i(\theta_i))\right) - \left(P^a F(b_i(\theta_i)) + (1-P^a) F(\Phi_i(b_i(\theta_i)))\right)^{N-1}$$\nApplying the equations $\Phi_i'(b_i(\theta_i)) = \frac{1}{b_i'(\theta_i)}$ and $\Phi_i(b_i(\theta_i)) = \theta_i$, and dividing through, produces:\n$$0 = \left(\theta_i - b_i(\theta_i)\right)(N-1)\left(P^a f(b_i(\theta_i)) + (1-P^a) f(\theta_i) \frac{1}{b_i'(\theta_i)}\right) - \left(P^a F(b_i(\theta_i)) + (1-P^a) F(\theta_i)\right)$$\nRearranging terms yields:\n$$b_i(\theta_i) = \theta_i - \frac{\left(P^a F(b_i(\theta_i)) + (1-P^a) F(\theta_i)\right) b_i'(\theta_i)}{(N-1)\left(P^a f(b_i(\theta_i)) b_i'(\theta_i) + (1-P^a) f(\theta_i)\right)} \qquad (13)$$\nIn this setting, because we leave $F(\cdot)$ unspecified, we cannot present a closed form solution.\nHowever, we can simplify the expression by removing its dependence on $b_i'(\theta_i)$:\n$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} \left(P^a F(b_i(x)) + (1-P^a) F(x)\right)^{N-1} dx}{\left(P^a F(b_i(\theta_i)) + (1-P^a) F(\theta_i)\right)^{N-1}} \qquad (14)$$\nTo verify Equation 14, first take its derivative:\n$$b_i'(\theta_i) = 1 - \frac{\left(P^a F(b_i(\theta_i)) + (1-P^a) F(\theta_i)\right)^{2(N-1)} - (N-1)\left(P^a F(b_i(\theta_i)) + (1-P^a) F(\theta_i)\right)^{N-2}\left(P^a f(b_i(\theta_i)) b_i'(\theta_i) + (1-P^a) f(\theta_i)\right)\int_0^{\theta_i}\left(P^a F(b_i(x)) + (1-P^a) F(x)\right)^{N-1} dx}{\left(P^a F(b_i(\theta_i)) + (1-P^a) F(\theta_i)\right)^{2(N-1)}}$$\nThis equation simplifies to:\n$$b_i'(\theta_i) = \frac{(N-1)\left(P^a f(b_i(\theta_i)) b_i'(\theta_i) + (1-P^a) f(\theta_i)\right)\int_0^{\theta_i}\left(P^a F(b_i(x)) + (1-P^a) F(x)\right)^{N-1} dx}{\left(P^a F(b_i(\theta_i)) + (1-P^a) F(\theta_i)\right)^{N}}$$\nPlugging this equation into the $b_i'(\theta_i)$ in the numerator of Equation 13 yields Equation 14, verifying the
solution.\nCorollary 4.\nIn a first-price auction in which each agent cheats with probability $P^a$, and $F(\theta_i) = \theta_i$, it is a Bayes-Nash equilibrium for each non-cheating agent to bid according to the strategy $b_i(\theta_i) = \frac{N-1}{N}\theta_i$.\nProof.\nInstantiating the fixed point equation (8, and repeated as 14) with $F(\theta_i) = \theta_i$ yields:\n$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i}\left(P^a b_i(x) + (1-P^a) x\right)^{N-1} dx}{\left(P^a b_i(\theta_i) + (1-P^a) \theta_i\right)^{N-1}}$$\nWe can plug the strategy $b_i(\theta_i) = \frac{N-1}{N}\theta_i$ into this equation in order to verify that it is a fixed point:\n$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i}\left(P^a \frac{N-1}{N} x + (1-P^a) x\right)^{N-1} dx}{\left(P^a \frac{N-1}{N}\theta_i + (1-P^a)\theta_i\right)^{N-1}} = \theta_i - \frac{\int_0^{\theta_i} x^{N-1}\, dx}{\theta_i^{N-1}} = \theta_i - \frac{\frac{1}{N}\theta_i^N}{\theta_i^{N-1}} = \frac{N-1}{N}\theta_i$$\nTheorem 5.\nIn a first-price auction where $F(\theta_i) = \theta_i$, if each agent $j \neq i$ bids according to a strategy $b_j(\theta_j) = \frac{N-1+\alpha_j}{N}\theta_j$, where $\alpha_j \geq 0$, then it is a best response for the remaining agent $i$ to bid according to the strategy $b_i(\theta_i) = \frac{N-1}{N}\theta_i$.\nProof.\nWe again use $\Phi_j : [0, b_j(1)] \to [0, 1]$ as the inverse of $b_j(\theta_j)$.\nFor all $j \neq i$ in this setting, $\Phi_j(x) = \frac{N}{N-1+\alpha_j}x$.\nThe probability that agent $i$ has a higher bid than a single agent $j$ is $F(\Phi_j(b_i(\theta_i))) = \frac{N}{N-1+\alpha_j}b_i(\theta_i)$.\nNote, however, that since $\Phi_j(\cdot)$ is only defined over the range $[0, b_j(1)]$, it must be the case that $b_i(1) \leq b_j(1)$, which is why $\alpha_j \geq 0$ is necessary, in addition to being sufficient.\nAssuming that $b_i(\theta_i) = \frac{N-1}{N}\theta_i$, then indeed $\Phi_j(b_i(\theta_i))$ is always well-defined.\nWe will now show that this assumption is correct.\nThe expected utility for agent $i$ can then be written as:\n$$E_{\theta_{-i}}\!\left[u_i(b(\theta), \theta_i)\right] = \prod_{j \neq i}\left(\frac{N}{N-1+\alpha_j}b_i(\theta_i)\right) \cdot \left(\theta_i - b_i(\theta_i)\right) = \prod_{j \neq i}\left(\frac{N}{N-1+\alpha_j}\right) \cdot \left(\theta_i \cdot b_i(\theta_i)^{N-1} - b_i(\theta_i)^N\right)$$\nTaking the derivative with respect to $b_i(\theta_i)$, setting it to zero, and dividing out $\prod_{j \neq i}\frac{N}{N-1+\alpha_j}$ yields:\n$$0 = \theta_i (N-1)\, b_i(\theta_i)^{N-2} - N\, b_i(\theta_i)^{N-1}$$\nThis simplifies to the solution: $b_i(\theta_i) = \frac{N-1}{N}\theta_i$.\nFull Version of Example 1: In a first-price auction where $F(\theta_i) = \frac{\theta_i^2+\theta_i}{2}$ and $N = 2$, if agent 2 always bids its type ($b_2(\theta_2) = \theta_2$), then, for all $\theta_1 > 0$, agent 1's best response bidding strategy is strictly less than the bidding strategy of the symmetric equilibrium.\nAfter calculating the symmetric equilibrium in which both agents shave their bids by the same amount, we find the best response to an agent who instead does not shave its bid.\nWe then show that this best response is strictly less than the equilibrium strategy.\nTo find the symmetric equilibrium bidding strategy, we instantiate $N = 2$ in the general formula found in [10], plug in $F(\theta_i) = \frac{\theta_i^2+\theta_i}{2}$, and simplify:\n$$b_i(\theta_i) = \theta_i - \frac{\int_0^{\theta_i} F(x)\,dx}{F(\theta_i)} = \theta_i - \frac{\frac{1}{2}\int_0^{\theta_i}(x^2+x)\,dx}{\frac{1}{2}(\theta_i^2+\theta_i)} = \theta_i - \frac{\frac{1}{3}\theta_i^3 + \frac{1}{2}\theta_i^2}{\theta_i^2 + \theta_i} = \frac{\frac{2}{3}\theta_i^2 + \frac{1}{2}\theta_i}{\theta_i + 1}$$\nWe now derive the best response for agent 1 to the strategy $b_2(\theta_2) = \theta_2$, denoting the best response strategy $b_1^*(\theta_1)$ to distinguish it from the symmetric case.\nThe probability of agent 1 winning is $F(b_1^*(\theta_1))$, which is the probability that agent 2's type is less than agent 1's bid.\nThus, agent 1's expected utility is:\n$$E_{\theta_2}\!\left[u_1((b_1^*(\theta_1), b_2(\theta_2)), \theta_1)\right] = F(b_1^*(\theta_1)) \cdot \left(\theta_1 - b_1^*(\theta_1)\right) = \frac{(b_1^*(\theta_1))^2 + b_1^*(\theta_1)}{2} \cdot \left(\theta_1 - b_1^*(\theta_1)\right)$$\nTaking the derivative with respect to $b_1^*(\theta_1)$, setting it to zero, and then rearranging terms gives us:\n$$0 = \frac{1}{2}\left(2 b_1^*(\theta_1)\theta_1 - 3(b_1^*(\theta_1))^2 + \theta_1 - 2 b_1^*(\theta_1)\right)$$\n$$0 = 3(b_1^*(\theta_1))^2 + (2 - 2\theta_1)\, b_1^*(\theta_1) - \theta_1$$\nOf the two solutions of this equation, one always produces a negative bid.\nThe other is:\n$$b_1^*(\theta_1) = \frac{\theta_1 - 1 + \sqrt{\theta_1^2 + \theta_1 + 1}}{3}$$\nWe now need to show that $b_1(\theta_1) > b_1^*(\theta_1)$ holds for all $\theta_1 > 0$.\nSubstituting in for both terms, and then simplifying the inequality gives us:\n$$\frac{\frac{2}{3}\theta_1^2 + \frac{1}{2}\theta_1}{\theta_1 + 1} > \frac{\theta_1 - 1 + \sqrt{\theta_1^2 + \theta_1 + 1}}{3}$$\n$$\theta_1^2 + \frac{3}{2}\theta_1 + 1 > (\theta_1 + 1)\sqrt{\theta_1^2 + \theta_1 + 1}$$\nSince $\theta_1 \geq 0$, we can square both sides of the inequality, which then allows us to verify the inequality for all $\theta_1 > 0$:\n$$\theta_1^4 + 3\theta_1^3 + \frac{17}{4}\theta_1^2 + 3\theta_1 + 1 > \theta_1^4 + 3\theta_1^3 + 4\theta_1^2 + 3\theta_1 + 1$$\n$$\frac{1}{4}\theta_1^2 > 0$$","lvl-3":"On Cheating in Sealed-Bid Auctions\nMotivated by the rise of online auctions and their relative lack of security, this paper analyzes two forms of cheating in sealed-bid auctions.\nThe first type of cheating we consider occurs when the seller spies on the bids of a second-price auction and then inserts a fake bid in order to increase the payment of the winning bidder.\nIn the second type, a bidder cheats in a first-price auction by examining the competing bids before deciding on his own bid.\nIn both cases, we derive equilibrium strategies when bidders are aware of the possibility of cheating.\nThese results provide insights into sealed-bid auctions even in the absence of cheating, including some counterintuitive results on the effects of overbidding in a first-price auction.\n1.\nINTRODUCTION\nAmong the types of auctions commonly used in
practice, sealed-bid auctions are a good practical choice because they require little communication and can be completed almost instantly.\nEach bidder simply submits a bid, and the winner is immediately determined.\nHowever, sealed-bid auctions do require that the bids be kept private until the auction clears.\nThe increasing popularity of online auctions only makes this disadvantage more troublesome.\nAt an auction house, with all participants present, it is difficult to examine a bid that another bidder gave directly to the auctioneer.\nHowever, in an online auction the auctioneer is often little more than a server with questionable security; and, since all participants are in different locations, one can anonymously attempt to break into the server.\nIn this paper, we present a game theoretic analysis of how bidders should behave when they are aware of the possibility of cheating that is based on knowledge of the bids.\nWe investigate this type of cheating along two dimensions: whether it is the auctioneer or a bidder who cheats, and which variant (either first or second-price) of the sealed-bid auction is used.\nNote that two of these cases are trivial.\nIn our setting, there is no incentive for the seller to submit a shill bid in a first-price auction, because doing so would either cancel the auction or not affect the payment of the winning bidder.\nIn a second-price auction, knowing the competing bids does not help a bidder because it is a dominant strategy to bid truthfully.\nThis leaves us with two cases that we examine in detail.\nA seller can profitably cheat in a second-price auction by looking at the bids before the auction clears and submitting an extra bid.\nThis possibility was pointed out as early as the seminal paper [12] that introduced this type of auction.\nFor example, if the bidders in an eBay auction each use a proxy bidder (essentially creating a second-price auction), then the seller may be able to break into eBay's server, observe the maximum price that a bidder is willing to pay, and then extract this price by submitting a shill bid just below it using a false identity.\nWe assume that there is no chance that the seller will be caught when it cheats.\nHowever, not all sellers are willing to use this power (or, not all sellers can successfully cheat).\nWe assume that each bidder knows the probability with which the seller will cheat.\nPossible motivation for this knowledge could be a recently published exposé on seller cheating in eBay auctions.\nIn this setting, we derive an equilibrium bidding strategy for the case in which each bidder's value for the good is independently drawn from a common distribution (with no further assumptions except for continuity and differentiability).\nThis result shows how first and second-price auctions can be viewed as the endpoints of a spectrum of auctions.\nBut why should the seller have all the fun?\nIn a first-price auction, a bidder must bid below his value for the good (also called "shaving" his bid) in order to have positive utility if he
the presence of cheating bidders actually induces an honest bidder to lower its bid.

The rest of the paper is structured as follows. In Section 2 we formalize the setting and present our results for the case of a seller cheating in a second-price auction. Section 3 covers the case of bidders cheating in a first-price auction. In Section 4, we quantify the effects that the possibility of cheating has on an honest seller in the two settings. We discuss related work, including other forms of cheating in auctions, in Section 5, before concluding with Section 6. All proofs and derivations are found in the appendix.

5. RELATED WORK

Existing work covers another dimension along which we could analyze cheating: altering the perceived value of N. In this paper, we have assumed that N is known by all of the bidders. However, in an online setting this assumption is rather tenuous. For example, a bidder's only source of information about N could be a counter that the seller places on the auction webpage, or a statement by the seller about the number of potential bidders who have indicated that they will participate. In these cases, the seller could arbitrarily manipulate the perceived N.

In a first-price auction, the seller obviously has an incentive to increase the perceived value of N in order to induce agents to bid closer to their true valuations. However, if agents are aware that the seller has this power, then any communication about N to the agents is "cheap talk", and furthermore is not credible. Thus, in equilibrium the agents would ignore the declared value of N and bid according to their own prior beliefs about the number of agents. If we make the natural assumption of a common prior, then the setting reduces to the one tackled by [5], which derived the equilibrium bidding strategies of a first-price auction when the number of bidders is drawn from a known distribution but not revealed to any of the bidders. Of course, instead of assuming that the seller can always exploit this power, we could assume that it can only do so with some probability that is known by the agents. The analysis would then proceed in a manner similar to that of our cheating-seller model.

The other interesting case of this form of cheating is by bidders in a first-price auction. Bidders would obviously want to decrease the perceived number of agents in order to induce their competition to lower their bids. While it is unreasonable for bidders to be able to alter the perceived N arbitrarily, collusion provides an opportunity to decrease the perceived N by having only one of a group of colluding agents participate in the auction. While the non-colluding agents would account for this possibility, as long as they are not certain of the collusion they will still be induced to shave more off of their bids than they would if the collusion did not take place. This issue is tackled in [7]. Other types of collusion are of course related to the general topic of cheating in auctions. Results on collusion in first-price and second-price auctions can be found in [8] and [3], respectively.

(Footnote 6: Note that we have not considered the costs of the seller. Thus, the expected loss in profit could be much greater than the numbers that appear here.)

The work most closely related to our first setting is [11], which also presents a model in which the seller may cheat in a second-price auction. In their setting, the seller is a participant in the Bayesian game who decides between running a first-price auction (where profitable cheating is never possible) or a second-price auction. The seller makes this choice after
observing his type, which is his probability of having the opportunity and willingness to cheat in a second-price auction. The bidders, who know the distribution from which the seller's type is drawn, then place their bids. It is shown that, in equilibrium, only a seller with the maximum probability of cheating would ever choose to run a second-price auction. Our work differs in that we focus on the agents' strategies in a second-price auction for a given probability of cheating by the seller. An explicit derivation of the equilibrium strategies then allows us to relate first and second-price auctions.

An area of related work that can be seen as complementary to ours is that of secure auctions, which takes the point of view of an auction designer. The goals often extend well beyond simply preventing cheating, including properties such as anonymity of the bidders and nonrepudiation of bids. Cryptographic methods are the standard weapon of choice here (see [1, 4, 9]).

6. CONCLUSION

In this paper we presented the equilibria of sealed-bid auctions in which cheating is possible. In addition to providing strategy profiles that are stable against deviations, these results give us insights into both first and second-price auctions. The results for the case of a cheating seller in a second-price auction allow us to relate the two auctions as endpoints along a continuum. The case of agents cheating in a first-price auction showed the robustness of the first-price auction equilibrium when agent types are drawn from the uniform distribution. We also explored the effect of overbidding on the best-response bidding strategy for other distributions, and showed that even for relatively simple distributions it can be positive, negative, or neutral. Finally, results from both of our settings allowed us to quantify the expected loss in revenue for a seller due to the possibility of cheating.

7. REFERENCES

[1] M. Franklin and M. Reiter. The design and implementation of a secure auction service. In Proc. IEEE Symp. on Security and Privacy, 1995.
[2] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991.
[3] D. Graham and R. Marshall. Collusive bidder behavior at single-object second-price and English auctions. Journal of Political Economy, 95:579--599, 1987.
[4] M. Harkavy, J. D. Tygar, and H. Kikuchi. Electronic auctions with private bids. In Proceedings of the 3rd USENIX Workshop on Electronic Commerce, 1998.
[5] R. Harstad, J. Kagel, and D. Levin. Equilibrium bid functions for auctions with an uncertain number of bidders. Economic Letters, 33:35--40, 1990.
[6] P. Klemperer. Auction theory: A guide to the literature. Journal of Economic Surveys, 13(3):227--286, 1999.
[7] K. Leyton-Brown, Y. Shoham, and M. Tennenholtz. Bidding clubs in first-price auctions. In AAAI-02.
[8] R. McAfee and J. McMillan. Bidding rings. The American Economic Review, 71:579--599, 1992.
[9] M. Naor, B. Pinkas, and R. Sumner. Privacy preserving auctions and mechanism design. In EC-99.
[10] J. Riley and W. Samuelson. Optimal auctions. American Economic Review, 71(3):381--392, 1981.
[11] M. Rothkopf and R. Harstad. Two models of bid-taker cheating in Vickrey auctions. The Journal of Business, 68(2):257--267, 1995.
[12] W.
Vickrey. Counterspeculations, auctions, and competitive sealed tenders. Journal of Finance, 16:15--27, 1961.

On Cheating in Sealed-Bid Auctions

Motivated by the rise of online auctions and their relative lack of security, this paper analyzes two forms of cheating in sealed-bid auctions. The first type of cheating we consider occurs when the seller spies on the bids of a second-price auction and then inserts a fake bid in order to increase the payment of the winning bidder. In the second type, a bidder cheats in a first-price auction by examining the competing bids before deciding on his own bid. In both cases, we derive equilibrium strategies when bidders are aware of the possibility of cheating. These results provide insights into sealed-bid auctions even in the absence of cheating, including some counterintuitive results on the effects of overbidding in a first-price auction.

1. INTRODUCTION

Among the types of auctions commonly used in practice, sealed-bid auctions are a good practical choice because they require little communication and can be completed almost instantly. Each bidder simply submits a bid, and the winner is immediately determined. However, sealed-bid auctions do require that the bids be kept private until the auction clears. The increasing popularity of online auctions only makes this disadvantage more
troublesome. At an auction house, with all participants present, it is difficult to examine a bid that another bidder gave directly to the auctioneer. However, in an online auction the auctioneer is often little more than a server with questionable security; and, since all participants are in different locations, one can anonymously attempt to break into the server.

In this paper, we present a game-theoretic analysis of how bidders should behave when they are aware of the possibility of cheating that is based on knowledge of the bids. We investigate this type of cheating along two dimensions: whether it is the auctioneer or a bidder who cheats, and which variant (either first or second-price) of the sealed-bid auction is used. Note that two of these cases are trivial. In our setting, there is no incentive for the seller to submit a shill bid in a first-price auction, because doing so would either cancel the auction or not affect the payment of the winning bidder. In a second-price auction, knowing the competing bids does not help a bidder because it is a dominant strategy to bid truthfully. This leaves us with two cases that we examine in detail.

A seller can profitably cheat in a second-price auction by looking at the bids before the auction clears and submitting an extra bid. This possibility was pointed out as early as the seminal paper [12] that introduced this type of auction. For example, if the bidders in an eBay auction each use a proxy bidder (essentially creating a second-price auction), then the seller may be able to break into eBay's server, observe the maximum price that a bidder is willing to pay, and then extract this price by submitting a shill bid just below it using a false identity. We assume that there is no chance that the seller will be caught when it cheats. However, not all sellers are willing to use this power (or, not all sellers can successfully cheat). We assume that each bidder knows the probability with which the seller will cheat. Possible motivation for this knowledge could be a recently published exposé on seller cheating in eBay auctions. In this setting, we derive an equilibrium bidding strategy for the case in which each bidder's value for the good is independently drawn from a common distribution (with no further assumptions except for continuity and differentiability). This result shows how first and second-price auctions can be viewed as the endpoints of a spectrum of auctions.

But why should the seller have all the fun? In a first-price auction, a bidder must bid below his value for the good (also called "shaving" his bid) in order to have positive utility if he wins. To decide how much to shave his bid, he must trade off the probability of winning the auction against how much he will pay if he does win. Of course, if he could simply examine the other bids before submitting his own, then his problem is solved: bid the minimum necessary to win the auction. In this setting, our goal is to derive an equilibrium bidding strategy for a non-cheating bidder who is aware of the possibility that he is competing against cheating bidders. When bidder values are drawn from the commonly-analyzed uniform distribution, we show the counterintuitive result that the possibility of other bidders cheating has no effect on the equilibrium strategy of an honest bidder. This result is then extended to show the robustness of the equilibrium of a first-price auction without the possibility of cheating. We conclude this section by exploring other distributions, including some in which the presence of cheating bidders actually induces an honest bidder to lower its bid.

2. SECOND-PRICE AUCTION, CHEATING SELLER

In this section, we consider a second-price auction in which the seller may cheat by inserting a shill bid after observing all of the bids. The formulation for this section will be largely reused in the following section on bidders cheating in a first-price auction. While no prior knowledge of game theory or auction theory is assumed, good introductions can be found in [2] and [6], respectively.

2.1 Formulation

The setting consists of N bidders, or agents (indexed by i = 1, ..., n), and a seller. Each agent has a type θ_i ∈ [0, 1], drawn from a continuous range, which represents the agent's value for the good being auctioned.² Each agent's type is independently drawn from a cumulative distribution function (cdf) F over [0, 1], where F(0) = 0 and F(1) = 1. We assume that F(·) is strictly increasing and differentiable over the interval [0, 1]. Call the probability density function (pdf) f(θ_i) = F′(θ_i), the derivative of the cdf. Each agent knows its own type θ_i, but only the distribution over the possible types of the other agents. A bidding strategy for an agent, b_i : [0, 1] → [0, 1], maps its type to its bid.³ Let θ = (θ_1, ..., θ_n) be the vector of types for all agents, and θ_{−i} = (θ_1, ..., θ_{i−1}, θ_{i+1}, ..., θ_n) be the vector of all types except for that of agent i. We can then combine the vectors so that θ = (θ_i, θ_{−i}). We also define the vector of bids as b(θ) = (b_1(θ_1), ..., b_n(θ_n)), and this vector without the bid of agent i as b_{−i}(θ_{−i}).

(Footnote 2: We can restrict the types to the range [0, 1] without loss of generality, because any distribution over a different range can be normalized to this range.)
(Footnote 3: We thus limit
agents to deterministic bidding strategies, but, because of our continuity assumption, there always exists a pure-strategy equilibrium.)

Let b[1](θ) be the value of the highest bid in the vector b(θ), with a corresponding definition for b[1](θ_{−i}). An agent obviously wins the auction if its bid is greater than all other bids, but ties complicate the formulation. Fortunately, we can ignore the case of ties in this paper, because our continuity assumption makes them a zero-probability event in equilibrium. We assume that the seller does not set a reserve price.⁴

If the seller does not cheat, then the winning agent pays the highest bid by another agent. On the other hand, if the seller does cheat, then the winning agent pays its own bid, since we assume that a cheating seller would take full advantage of its power. Let the indicator variable μ_c be 1 if the seller cheats, and 0 otherwise. The probability that the seller cheats, P_c, is known by all agents.⁵ We can then write the payment of the winning agent i as:

    μ_c · b_i(θ_i) + (1 − μ_c) · b[1](θ_{−i})

Let μ(·) be an indicator function that takes an inequality as an argument and returns 1 if it holds, and 0 otherwise. The utility for agent i is zero if it does not win the auction, and the difference between its valuation and its price if it does. We will be concerned with the expected utility of an agent, with the expectation taken over the types of the other agents and over whether or not the seller cheats. By pushing the expectation inward so that it is only over the price (conditioned on the agent winning the auction), we can write the expected utility as:

    E[u_i] = E[ μ(b_i(θ_i) > b[1](θ_{−i})) ] · ( θ_i − P_c · b_i(θ_i) − (1 − P_c) · E[ b[1](θ_{−i}) | b_i(θ_i) > b[1](θ_{−i}) ] )

We assume that all agents are rational expected-utility maximizers. Because of the uncertainty over the types of the other agents, we will be looking for a Bayes-Nash equilibrium. A vector of bidding strategies b* is a Bayes-Nash equilibrium if, for each agent i and each possible type θ_i, agent i cannot increase its expected utility by using an alternate bidding strategy b′_i, holding the bidding strategies of all other agents fixed. Formally, b* is a Bayes-Nash equilibrium if, for all i, all θ_i, and all b′_i:

    E[ u_i(b*_i(θ_i), b*_{−i}(θ_{−i})) ] ≥ E[ u_i(b′_i(θ_i), b*_{−i}(θ_{−i})) ]

2.2 Equilibrium

We first present the Bayes-Nash equilibrium for an arbitrary distribution F(·): it is a Bayes-Nash equilibrium for each agent to bid according to

    b_i(θ_i) = θ_i − ( ∫₀^{θ_i} F(x)^{(N−1)/P_c} dx ) / F(θ_i)^{(N−1)/P_c}    (5)

It is useful to consider the extreme points of P_c. Setting P_c = 1 yields the correct result for a first-price auction (see, e.g., [10]). In the case of P_c = 0, this solution is not defined. However, in the limit, b_i(θ_i) approaches θ_i as P_c approaches 0, which is what we expect as the auction approaches a standard second-price auction. The position of P_c is perhaps surprising. For example, a simple mixture of the equilibrium bidding strategies of first and second-price auctions would also have given us the correct bidding strategies for the cases of P_c = 0 and P_c = 1.

2.3 Continuum of Auctions

An alternative perspective on the setting is as a continuum between first and second-price auctions. Consider a probabilistic sealed-bid auction in which the seller is honest, but the price paid by the winning agent is determined by a weighted coin flip: with probability P_c it is his own bid, and with probability 1 − P_c it is the second-highest bid. By adjusting P_c, we can smoothly move between a first and a second-price auction. Furthermore, the fact that this probabilistic auction satisfies the properties required for the Revenue Equivalence Theorem (see, e.g., [2]) provides a way to verify that the bidding strategy in Equation 5 is the symmetric equilibrium of this auction (see the alternative proof of Theorem 1 in the appendix).

2.4 Special Case: Uniform Distribution

Another way to gain insight into Equation 5 is to instantiate the distribution of types. We now consider the often-studied uniform distribution: F(θ_i) = θ_i.

COROLLARY 2. In a second-price auction in which the seller cheats with probability P_c, and F(θ_i) = θ_i, it is a Bayes-Nash
equilibrium for each agent to bid according to the following strategy:

    b_i(θ_i) = ( (N − 1) / (N − 1 + P_c) ) · θ_i

This equilibrium bidding strategy, parameterized by P_c, can be viewed as an interpolation between two well-known results. When P_c = 0 the bidding strategy is now well-defined (each agent bids its true type), while when P_c = 1 we get the correct result for a first-price auction: each agent bids according to the strategy b_i(θ_i) = ((N − 1)/N) · θ_i.

3. FIRST-PRICE AUCTION, CHEATING AGENTS

We now consider the case in which the seller is honest, but there is a chance that agents will cheat and examine the other bids before submitting their own (or, alternatively, that they will revise their bids before the auction clears). Since this type of cheating is pointless in a second-price auction, we only analyze the case of a first-price auction. After revising the formulation of the previous section, we present a fixed-point equation for the equilibrium strategy for an arbitrary distribution F(·). This equation will be useful for the analysis of the uniform distribution, in which we show that the possibility of cheating agents does not change the equilibrium strategy of honest agents. This result has implications for the robustness of the symmetric equilibrium to overbidding in a standard first-price auction. Furthermore, we find that for other distributions overbidding can actually induce a competing agent to shave more off of its bid.

3.1 Formulation

It is clear that if a single agent is cheating, he will bid (up to his valuation) the minimum amount necessary to win the auction. It is less obvious, though, what will happen if multiple agents cheat. One could imagine a scenario similar to an English auction, in which all cheating agents keep revising their bids until only one cheater still wants the good at the current winning bid. However, we are only concerned with how an honest agent should bid given that it is aware of the possibility of cheating. Thus, it suffices for an honest agent to know that it will win the auction if and only if its bid exceeds every other honest agent's bid and every cheating agent's type.

This intuition can be formalized as the following discriminatory auction. In the first stage, each agent's payment rule is determined. With probability P_a, the agent will pay the second-highest bid if it wins the auction (essentially, he is a cheater), and otherwise it will have to pay its own bid. These selections are recorded by a vector of indicator variables μ_a = (μ_{a1}, ..., μ_{an}), where μ_{ai} = 1 denotes that agent i pays the second-highest bid. Each agent knows the probability P_a, but does not know the payment rule of any other agent. Otherwise, this auction is a standard sealed-bid auction. It is thus a dominant strategy for a cheater to bid its true type, making this formulation strategically equivalent to the setting outlined in the previous paragraph. The expression for the utility of an honest agent in this discriminatory auction is as follows.

3.2 Equilibrium

Our goal is to find the equilibrium in which all cheating agents use their dominant strategy of bidding truthfully and all honest agents bid according to a symmetric bidding strategy. Since we have left F(·) unspecified, we cannot present a closed-form solution for the honest agents' bidding strategy, and instead give a fixed-point equation for it.

3.3 Special Case: Uniform Distribution

Since we could not solve Equation 8 in the general case, we can only see how the possibility of cheating affects the equilibrium bidding strategy for particular instances of F(·). A natural place to start is the uniform distribution: F(θ_i) = θ_i. Recall the logic behind the symmetric equilibrium strategy in a first-price auction without cheating: b_i(θ_i) = ((N − 1)/N) · θ_i is the optimal tradeoff between increasing the probability of winning and decreasing the price paid upon winning, given that the other agents are bidding according to the same
strategy.\nSince in the current setting the cheating agents do not shave their bid at all and thus decrease an honest agent's probability of winning (while obviously not affecting the price that an honest agent pays if he wins), it is natural to expect that an honest agent should compensate by increasing his bid.\nThe idea is that sacrificing some potential profit in order to regain some of the lost probability of winning would bring the two sides of the tradeoff back into balance.\nHowever, it turns out that the equilibrium bidding strategy is unchanged.\nThis result suggests that the equilibrium of a first-price auction is particularly robust when types are drawn from the uniform distribution, since the best response is unaffected by deviations of the other agents to the strategy of always bidding their type.\nIn fact, as long as all other agents shave their bid by a fraction (which can differ across the agents) no greater than 1N, it is still a best response for the remaining agent to bid according to the equilibrium strategy.\nNote that this result holds even if other agents are shaving their bid by a negative fraction, and are thus irrationally bidding above their type.\nObviously, these strategy profiles are not equilibria (unless each \u03b1j = 0), because each agent j has an incentive to set \u03b1j = 0.\nThe point of this theorem is that a wide range of possible beliefs that an agent can hold about the strategies of the other agents will all lead him to play the equilibrium strategy.\nThis is important because a common (and valid) criticism of equilibrium concepts such as Nash and BayesNash is that they are silent on how the agents converge on a strategy profile from which no one wants to deviate.\nHowever, if the equilibrium strategy is a best response to a large set of strategy profiles that are out of equilibrium, then it seems much more plausible that the agents will indeed converge on this equilibrium.\nIt is important to note, though, that while this 
equilibrium is robust against arbitrary deviations to strategies that shave less, it is not robust to even a single agent shaving more off of its bid.\nIn fact, if we take any strategy profile consistent with the conditions of Theorem 5 and change a single agent j's strategy so that its corresponding αj is negative, then agent i's best response is to shave more than 1/N off of its bid.\n3.4 Effects of Overbidding for Other Distributions\nA natural question is whether the best response bidding strategy is similarly robust to overbidding by competing agents for other distributions.\nIt turns out that Theorem 5 holds for all distributions of the form F(θi) = (θi)^k, where k is some positive integer.\nHowever, taking a simple linear combination of two such distributions to produce F(θi) = (θi + θi^2)/2 yields a distribution in which an agent should actually shave its bid more when other agents shave their bids less.\nIn the example we present for this distribution (with the details in the appendix), there are only two players and the deviation by one agent is to bid his type.\nHowever, it can be generalized to a higher number of agents and to other deviations.\nFor this distribution and N = 2, if agent 2 always bids its type (b2(θ2) = θ2), then, for all θ1 > 0, agent 1's best response bidding strategy is strictly less than the bidding strategy of the symmetric equilibrium.\nWe also note that the same result holds for the normalized version of this distribution.\nIt is certainly the case that distributions can be found that support the intuition given above that agents should shave their bid less when other agents are doing likewise.\nExamples include F(θi) = -(1/2)θi^2 + (3/2)θi (the solution to the system of equations: F''(θi) = -1, F(0) = 0, and F(1) = 1).\nIt would be useful to relate the direction of the change in the best response bidding strategy to a general condition on F(·).\nUnfortunately, we were not able to find such a condition, in part because the integral
in the symmetric bidding strategy of a first-price auction cannot be solved without knowing F(·) (or at least some restrictions on it).\nWe do note, however, that the sign of the second derivative of F(θi)/f(θi) is an accurate predictor for all of the distributions that we considered.\n4.\nREVENUE LOSS FOR AN HONEST SELLER\nIn both of the settings we covered, an honest seller suffers a loss in expected revenue due to the possibility of cheating.\nThe equilibrium bidding strategies that we derived allow us to quantify this loss.\nAlthough this is as far as we will take the analysis, it could be applied to more general settings, in which the seller could, for example, choose the market in which he sells his good or pay a trusted third party to oversee the auction.\nIn a second-price auction in which the seller may cheat, an honest seller suffers due to the fact that the agents will shave their bids.\nFor the case in which agent types are drawn from the uniform distribution, every agent will shave its bid by the fraction Pc/(N - 1 + Pc), which is thus also the fraction by which an honest seller's revenue decreases due to the possibility of cheating.\n(Every bid is scaled by the factor (N - 1)/(N - 1 + Pc), so the second-highest bid, and hence the seller's revenue, is scaled by the same factor.)\nAnalysis of the case of a first-price auction in which agents may cheat is not so straightforward.\nIf Pa = 1 (each agent cheats with certainty), then we simply have a second-price auction, and the seller's expected revenue will be unchanged.\nAgain considering the uniform distribution for agent types, it is not surprising that Pa = 1/2 causes the seller to lose the most revenue.\nHowever, even in this worst case, the percentage of expected revenue lost is significantly less than it is for the second-price auction in which Pc = 1/2, as shown in Table 1.6 It turns out that setting Pc = 0.2 would make the expected loss of these two settings comparable.\nWhile this comparison between the settings is unlikely to be useful for a seller, it is interesting to note that agent suspicions of possible cheating by the seller are in
some sense worse than agents actually cheating themselves.\nTable 1: The percentage of expected revenue lost by an honest seller due to the possibility of cheating in the two settings considered in this paper.\nAgent valuations are drawn from the uniform distribution.\n5.\nRELATED WORK\nExisting work covers another dimension along which we could analyze cheating: altering the perceived value of N.\nIn this paper, we have assumed that N is known by all of the bidders.\nHowever, in an online setting this assumption is rather tenuous.\nFor example, a bidder's only source of information about N could be a counter that the seller places on the auction webpage, or a statement by the seller about the number of potential bidders who have indicated that they will participate.\nIn these cases, the seller could arbitrarily manipulate the perceived N.\nIn a first-price auction, the seller obviously has an incentive to increase the perceived value of N in order to induce agents to bid closer to their true valuations.\nHowever, if agents are aware that the seller has this power, then any communication about N to the agents is \"cheap talk\", and furthermore is not credible.\nThus, in equilibrium the agents would ignore the declared value of N, and bid according to their own prior beliefs about the number of agents.\nIf we make the natural assumption of a common prior, then the setting reduces to the one tackled by [5], which derived the equilibrium bidding strategies of a first-price auction when the number of bidders is drawn from a known distribution but not revealed to any of the bidders.\nOf course, instead of assuming that the seller can always exploit this power, we could assume that it can only do so with some probability that is known by the agents.\nThe analysis would then proceed in a manner similar to that of our cheating seller model.\nThe other interesting case of this form of cheating is by bidders in a first-price auction.\nBidders would obviously want to decrease
the perceived number of agents in order to induce their competition to lower their bids.\n6 Note that we have not considered the costs of the seller.\nThus, the expected loss in profit could be much greater than the numbers that appear here.\nWhile it is unreasonable for bidders to be able to alter the perceived N arbitrarily, collusion provides an opportunity to decrease the perceived N by having only one of a group of colluding agents participate in the auction.\nWhile the non-colluding agents would account for this possibility, as long as they are not certain of the collusion they will still be induced to shave more off of their bids than they would if the collusion did not take place.\nThis issue is tackled in [7].\nOther types of collusion are of course related to the general topic of cheating in auctions.\nResults on collusion in first and second-price auctions can be found in [8] and [3], respectively.\nThe work most closely related to our first setting is [11], which also presents a model in which the seller may cheat in a second-price auction.\nIn their setting, the seller is a participant in the Bayesian game who decides between running a first-price auction (where profitable cheating is never possible) or a second-price auction.\nThe seller makes this choice after observing his type, which is his probability of having the opportunity and willingness to cheat in a second-price auction.\nThe bidders, who know the distribution from which the seller's type is drawn, then place their bids.\nIt is shown that, in equilibrium, only a seller with the maximum probability of cheating would ever choose to run a second-price auction.\nOur work differs in that we focus on the agents' strategies in a second-price auction for a given probability of cheating by the seller.\nAn explicit derivation of the equilibrium strategies then allows us to relate first and second-price auctions.\nAn area of related work that can be seen as complementary to ours is that of secure auctions,
which takes the point of view of an auction designer.\nThe goals often extend well beyond simply preventing cheating, including properties such as anonymity of the bidders and nonrepudiation of bids.\nCryptographic methods are the standard weapon of choice here (see [1, 4, 9]).\n6.\nCONCLUSION\nIn this paper we presented the equilibria of sealed-bid auctions in which cheating is possible.\nIn addition to providing strategy profiles that are stable against deviations, these results provide us with insights into both first and second-price auctions.\nThe results for the case of a cheating seller in a second-price auction allow us to relate the two auctions as endpoints along a continuum.\nThe case of agents cheating in a first-price auction showed the robustness of the first-price auction equilibrium when agent types are drawn from the uniform distribution.\nWe also explored the effect of overbidding on the best response bidding strategy for other distributions, and showed that even for relatively simple distributions it can be positive, negative, or neutral.\nFinally, results from both of our settings allowed us to quantify the expected loss in revenue for a seller due to the possibility of cheating.\n7.\nREFERENCES\n[1] M. Franklin and M. Reiter. The design and implementation of a secure auction service. In Proc. IEEE Symposium on Security and Privacy, 1995.\n[2] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991.\n[3] D. Graham and R. Marshall. Collusive bidder behavior at single-object second-price and English auctions. Journal of Political Economy, 95:579--599, 1987.\n[4] M. Harkavy, J. D. Tygar, and H. Kikuchi. Electronic auctions with private bids. In Proceedings of the 3rd USENIX Workshop on Electronic Commerce, 1998.\n[5] R. Harstad, J. Kagel, and D. Levin. Equilibrium bid functions for auctions with an uncertain number of bidders. Economics Letters, 33:35--40, 1990.\n[6] P.
Klemperer. Auction theory: A guide to the literature. Journal of Economic Surveys, 13(3):227--286, 1999.\n[7] K. Leyton-Brown, Y. Shoham, and M. Tennenholtz. Bidding clubs in first-price auctions. In AAAI-02.\n[8] R. McAfee and J. McMillan. Bidding rings. The American Economic Review, 71:579--599, 1992.\n[9] M. Naor, B. Pinkas, and R. Sumner. Privacy preserving auctions and mechanism design. In EC-99.\n[10] J. Riley and W. Samuelson. Optimal auctions. American Economic Review, 71(3):381--392, 1981.\n[11] M. Rothkopf and R. Harstad. Two models of bid-taker cheating in Vickrey auctions. The Journal of Business, 68(2):257--267, 1995.\n[12] W. Vickrey. Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16:15--27, 1961."} {"id":"C-50","title":"CenWits: A Sensor-Based Loosely Coupled Search and Rescue System Using Witnesses","abstract":"This paper describes the design, implementation and evaluation of a search and rescue system called CenWits. CenWits uses several small, commonly-available RF-based sensors, and a small number of storage and processing devices. It is designed for search and rescue of people in emergency situations in wilderness areas. A key feature of CenWits is that it does not require a continuously connected sensor network for its operation. It is designed for an intermittently connected network that provides only occasional connectivity. It makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate information to a processing center. 
A prototype of CenWits has been implemented using Berkeley Mica2 motes. The paper describes this implementation and reports on the performance measured from it.","lvl-1":"CenWits: A Sensor-Based Loosely Coupled Search and Rescue System Using Witnesses Jyh-How Huang Department of Computer Science University of Colorado, Campus Box 0430 Boulder, CO 80309-0430 huangjh@cs.colorado.edu Saqib Amjad Department of Computer Science University of Colorado, Campus Box 0430 Boulder, CO 80309-0430 Saqib.Amjad@colorado.edu Shivakant Mishra Department of Computer Science University of Colorado, Campus Box 0430 Boulder, CO 80309-0430 mishras@cs.colorado.edu ABSTRACT This paper describes the design, implementation and evaluation of a search and rescue system called CenWits.\nCenWits uses several small, commonly-available RF-based sensors, and a small number of storage and processing devices.\nIt is designed for search and rescue of people in emergency situations in wilderness areas.\nA key feature of CenWits is that it does not require a continuously connected sensor network for its operation.\nIt is designed for an intermittently connected network that provides only occasional connectivity.\nIt makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate information to a processing center.\nA prototype of CenWits has been implemented using Berkeley Mica2 motes.\nThe paper describes this implementation and reports on the performance measured from it.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems General Terms Algorithms, Design, Experimentation 1.\nINTRODUCTION Search and rescue of people in emergency situations in a timely manner is an extremely important service.\nIt has been difficult to provide such a service due to
lack of timely information needed to determine the current location of a person who may be in an emergency situation.\nWith the emergence of pervasive computing, several systems [12, 19, 1, 5, 6, 4, 11] have been developed over the last few years that make use of small devices such as cell phones, sensors, etc.\nAll these systems require a connected network via satellites, GSM base stations, or mobile devices.\nThis requirement severely limits their applicability, particularly in remote wilderness areas where maintaining a connected network is very difficult.\nFor example, a GSM transmitter has to be in the range of a base station to transmit.\nAs a result, it cannot operate in most wilderness areas.\nWhile a satellite transmitter is the only viable solution in wilderness areas, it is typically expensive and cumbersome.\nFurthermore, a line of sight is required to transmit to a satellite, and that makes it infeasible to stay connected in narrow canyons, large cities with skyscrapers, rain forests, or even when there is a roof or some other obstruction above the transmitter, e.g.
in a car.\nAn RF transmitter has a relatively small transmission range.\nSo, while an in-situ sensor is cheap as a single unit, it is expensive to build a large network that can provide connectivity over a large wilderness area.\nIn a mobile environment where sensors are carried by moving people, power-efficient routing is difficult to implement and maintain over a large wilderness area.\nIn fact, building an adhoc sensor network using only the sensors worn by hikers is nearly impossible due to a relatively small number of sensors spread over a large wilderness area.\nIn this paper, we describe the design, implementation and evaluation of a search and rescue system called CenWits (Connection-less Sensor-Based Tracking System Using Witnesses).\nCenWits is comprised of mobile, in-situ sensors that are worn by subjects (people, wild animals, or inanimate objects), access points (AP) that collect information from these sensors, and GPS receivers and location points (LP) that provide location information to the sensors.\nA subject uses GPS receivers (when it can connect to a satellite) and LPs to determine its current location.\nThe key idea of CenWits is that it uses a concept of witnesses to convey a subject's movement and location information to the outside world.\nThis averts the need for maintaining a connected network to transmit location information to the outside world.\nIn particular, there is no need for expensive GSM or satellite transmitters, or maintaining an adhoc network of in-situ sensors in CenWits.\nCenWits employs several important mechanisms to address the key problem of resource constraints (low signal strength, low power and limited memory) in sensors.\nIn particular, it makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to
propagate information to a processing center.\nThe problem of low signal strengths (short range RF communication) is addressed by avoiding a need for maintaining a connected network.\nInstead, CenWits propagates the location information of sensors using the concept of witnesses through an intermittently connected network.\nAs a result, this system can be deployed in remote wilderness areas, as well as in large urban areas with skyscrapers and other tall structures.\nAlso, this makes CenWits cost-effective.\nA subject only needs to wear light-weight and low-cost sensors that have GPS receivers but no expensive GSM or satellite transmitters.\nFurthermore, since there is no need for a connected sensor network, there is no need to deploy sensors in very large numbers.\nThe problem of limited battery life and limited memory of a sensor is addressed by incorporating the concepts of groups and partitions.\nGroups and partitions allow sensors to stay in sleep or receive modes most of the time.\nUsing groups and partitions, the location information collected by a sensor can be distributed among several sensors, thereby reducing the amount of memory needed in one sensor to store that information.\nIn fact, CenWits provides an adaptive tradeoff between memory and power consumption of sensors.\nEach sensor can dynamically adjust its power and memory consumption based on its remaining power or available memory.\nIt has amply been noted that the strength of sensor networks comes from the fact that several sensor nodes can be distributed over a relatively large area to construct a multihop network.\nThis paper demonstrates that important large-scale applications can be built using sensors by judiciously integrating the storage, communication and computation capabilities of sensors.\nThe paper describes important techniques to combine memory, transmission and battery power of many sensors to address resource constraints in the context of a search and rescue application.\nHowever, 
these techniques are quite general.\nWe discuss several other sensor-based applications that can employ these techniques.\nWhile CenWits addresses the general location tracking and reporting problem in a wide-area network, there are two important differences from the earlier work done in this area.\nFirst, unlike earlier location tracking solutions, CenWits does not require a connected network.\nSecond, unlike earlier location tracking solutions, CenWits does not aim for a very high accuracy of localization.\nInstead, the main goal is to provide an approximate, small area where search and rescue efforts can be concentrated.\nThe rest of this paper is organized as follows.\nIn Section 2, we overview some of the recent projects and technologies related to movement and location tracking, and search and rescue systems.\nIn Section 3, we describe the overall architecture of CenWits, and provide a high-level description of its functionality.\nIn the next section, Section 4, we discuss power and memory management in CenWits.\nTo simplify our presentation, we will focus on a specific application of tracking lost/injured hikers in all these sections.\nIn Section 6, we describe a prototype implementation of CenWits and present performance measured from this implementation.\nWe discuss how the ideas of CenWits can be used to build several other applications in Section 7.\nFinally, in Section 8, we discuss some related issues and conclude the paper.\n2.\nRELATED WORK\nA survey of location systems for ubiquitous computing is provided in [11].\nA location tracking system for adhoc sensor networks that uses anchor sensors as references to gain location information and spread it out to outer nodes is proposed in [17].\nMost location tracking systems in adhoc sensor networks are designed to benefit geography-aware routing.\nThey do not fit well for our purposes.\nThe well-known active badge system [19] lets a user carry a badge around.\nAn infrared sensor in the room can detect the presence
of a badge and determine the location and identification of the person.\nThis is a useful system for indoor environments, where GPS doesn't work.\nLocationing using 802.11 devices is probably the cheapest solution for indoor position tracking [8].\nBecause of the popularity and low cost of 802.11 devices, several business solutions based on this technology have been developed [1].\nA system that combines two mature technologies and is viable in suburban areas where a user can see clear sky and has GSM cellular reception at the same time is currently available [5].\nThis system receives a GPS signal from a satellite and locates itself, draws its location on a map, and sends location information through the GSM network to others who are interested in the user's location.\nA very simple system to monitor children consists of an RF transmitter and a receiver.\nThe system alarms the holder of the receiver when the transmitter is about to run out of range [6].\nPersonal Locator Beacons (PLBs) have been used for avalanche rescue for years.\nA skier carries an RF transmitter that emits beacons periodically, so that a rescue team can find his/her location based on the strength of the RF signal.\nA luxury version of the PLB combines a GPS receiver and a COSPAS-SARSAT satellite transmitter that can transmit the user's location in latitude and longitude to the rescue team whenever an accident happens [4].\nHowever, the device either is turned on all the time, resulting in fast battery drain, or must be turned on after the accident to function.\nAnother related technology in widespread use today is the ONSTAR system [3], typically used in several luxury cars.\nIn this system, a GPS unit provides position information, and a powerful transmitter relays that information via satellite to a customer service center.\nDesigned for emergencies, the system can be triggered either by the user with the push of a button, or by a catastrophic accident.\nOnce the system has been triggered, a human representative
attempts to gain communication with the user via a cell phone built as an in-car device.\nIf contact cannot be made, emergency services are dispatched to the location provided by GPS.\nLike PLBs, this system has several limitations.\nFirst, it is heavy and expensive.\nIt requires a satellite transmitter and a connected network.\nIf connectivity with either the GPS network or a communication satellite cannot be maintained, the system fails.\nUnfortunately, these are common obstacles encountered in deep canyons, narrow streets in large cities, parking garages, and a number of other places.\nThe Lifetch system uses a GPS receiver board combined with a GSM/GPRS transmitter and an RF transmitter in one wireless sensor node called an Intelligent Communication Unit (ICU).\nAn ICU first attempts to transmit its location to a control center through the GSM/GPRS network.\nIf that fails, it connects with other ICUs (adhoc network) to forward its location information until the information reaches an ICU that has GSM/GPRS reception.\nThis ICU then transmits the location information of the original ICU via the GSM/GPRS network.\nZebraNet is a system designed to study the moving patterns of zebras [13].\nIt utilizes two protocols: a history-based protocol and a flooding protocol.\nThe history-based protocol is used when the zebras are grazing and not moving around too much.\nWhile this might be useful for tracking zebras, it is not suitable for tracking hikers because two hikers are most likely to meet each other only once on a trail.\nIn the flooding protocol, a node dumps its data to a neighbor whenever it finds one and does not delete its own copy until it finds a base station.\nWithout considering routing loops, packet filtering and grouping, the size of data on a node will grow exponentially and drain the power and memory of a sensor node within a short time.\nInstead, CenWits uses a four-phase handshake protocol to ensure that a node transmits only as much information as the other
node is willing to receive.\nWhile ZebraNet is designed for a big group of sensors moving together in the same direction with the same speed, CenWits is designed to be used in scenarios where sensors move in different directions at different speeds.\nDelay tolerant network architecture addresses some important problems in challenged (resource-constrained) networks [9].\nWhile this work is mainly concerned with interoperability of challenged networks, some problems related to occasionally-connected networks are similar to the ones we have addressed in CenWits.\nAmong all these systems, the luxury PLB and Lifetch are designed for location tracking in wilderness areas.\nHowever, both of these systems require a connected network.\nThe luxury PLB requires the user to transmit a signal to a satellite, while Lifetch requires connection to the GSM/GPRS network.\nThe luxury PLB transmits location information only when an accident happens.\nHowever, if the user is buried in the snow or falls into a deep canyon, there is almost no chance for the signal to go through and be relayed to the rescue team.\nThis is because satellite transmission needs line of sight.\nFurthermore, since there is no known history of the user's location, it is not possible for the rescue team to infer the current location of the user.\nAnother disadvantage of the luxury PLB is that a satellite transmitter is very expensive, costing in the range of $750.\nLifetch attempts to transmit the location information by GSM/GPRS and an adhoc sensor network that uses AODV as the routing protocol.\nHowever, having cellular reception in remote wilderness areas, e.g.
American national parks, is unlikely.\nFurthermore, it is extremely unlikely that ICUs worn by hikers will be able to form an adhoc network in a large wilderness area.\nThis is because the hikers are mobile and it is very unlikely to have several ICUs placed densely enough to forward packets even on a very popular hike route.\nCenWits is designed to address the limitations of systems such as the luxury PLB and Lifetch.\nIt is designed to provide hikers, skiers, and climbers who have their activities mainly in wilderness areas a much higher chance to convey their location information to a control center.\nIt is not reliant upon constant connectivity with any communication medium.\nRather, it passes information along from user to user, finally arriving at a control center.\nUnlike several of the systems discussed so far, it does not require that a user's unit is constantly turned on.\nIn fact, it can discover a victim's location, even if the victim's sensor was off at the time of accident and has remained off since then.\nCenWits solves one of the greatest problems plaguing modern search and rescue systems: it has an inherent on-site storage capability.\nThis means someone within the network will have access to the last-known-location information of a victim, and perhaps his bearing and speed information as well.\nFigure 1: Hiker A and Hiker B are not in the range of each other\n3.\nCENWITS\nWe describe CenWits in the context of locating lost/injured hikers in wilderness areas.\nEach hiker wears a sensor (MICA2 motes in our prototype) equipped with a GPS receiver and an RF transmitter.\nEach sensor is assigned a unique ID and maintains its current location based on the signal received by its GPS receiver.\nIt also emits beacons periodically.\nWhen any two sensors are in the range of one another, they record the presence of each other (witness information), and also exchange the witness information they recorded earlier.\nThe key idea here is that if two sensors
come within range of each other at any time, they become each other's witnesses.\nLater on, if the hiker wearing one of these sensors is lost, the other sensor can convey the last known (witnessed) location of the lost hiker.\nFurthermore, by exchanging the witness information that each sensor recorded earlier, the witness information is propagated beyond a direct contact between two sensors.\nTo convey witness information to a processing center or to a rescue team, access points are established at well-known locations that the hikers are expected to pass through, e.g. at the trail heads, trail ends, intersections of different trails, scenic view points, resting areas, and so on.\nWhenever a sensor node is in the vicinity of an access point, all witness information stored in that sensor is automatically dumped to the access point.\nAccess points are connected to a processing center via satellite or some other network1.\n1 A connection is needed only between access points and a processing center.\nThere is no need for any connection between different access points.\nThe witness information is downloaded to the processing center from various access points at regular intervals.\nIn case connection to an access point is lost, the information from that access point can be downloaded manually, e.g.
by UAVs.\nTo estimate the speed, location and direction of a hiker at any point in time, all witness information of that hiker that has been collected from various access points is processed.\nFigure 2: Hiker A and Hiker B are in the range of each other.\nA records the presence of B and B records the presence of A.\nA and B become each other's witnesses.\nFigure 3: Hiker A is in the range of an access point.\nIt uploads its recorded witness information and clears its memory.\nAn example of how CenWits operates is illustrated in Figures 1, 2 and 3.\nFirst, hikers A and B are on two close trails, but out of range of each other (Figure 1).\nThis is a very common scenario during a hike.\nFor example, on a popular four-hour hike, a hiker might run into as many as 20 other hikers.\nThis accounts for one encounter every 12 minutes on average.\nA slow hiker can go 1 mile (5,280 feet) per hour.\nThus in 12 minutes a slow hiker can go as far as 1,056 feet.\nThis implies that if we were to put 20 hikers on a 4-hour, one-way hike evenly, the range of each sensor node should be at least 1,056 feet for them to communicate with one another continuously.\nThe signal strength starts dropping rapidly for two Mica2 nodes to communicate with each other when they are 180 feet away, and is completely lost when they are 230 feet away from each other [7].\nSo, for the sensors to form a sensor network on a 4-hour hiking trail, there should be at least 120 hikers scattered evenly.\nClearly, this is extremely unlikely.\nIn fact, on a 4-hour, less-popular hiking trail, one might only run into say five other hikers.\nCenWits takes advantage of the fact that sensors can communicate with one another and record their presence.\nGiven a walking speed of one mile per hour (88 feet per minute) and a Mica2 range of about 150 feet for non-line-of-sight radio transmission, two hikers have about 150/88 = 1.7 minutes to discover the presence of each other and exchange their witness information.\nWe
therefore design our system to have each sensor emit a beacon every one-and-a-half minutes.\nIn Figure 2, hiker B's sensor emits a beacon when A is in range; this triggers A to exchange data with B.\nA communicates the following information to B: My ID is A; I saw C at 1:23 PM at (39° 49.3277655', 105° 39.1126776'), I saw E at 3:09 PM at (40° 49.2234879', 105° 20.3290168').\nB then replies with My ID is B; I saw K at 11:20 AM at (39° 51.4531655', 105° 41.6776223').\nIn addition, A records I saw B at 4:17 PM at (41° 29.3177354', 105° 04.9106211') and B records I saw A at 4:17 PM at (41° 29.3177354', 105° 04.9106211').\nB goes on his way to overnight camping while A heads back to the trail head where there is an AP, which emits a beacon every 5 seconds to avoid missing any hiker.\nA dumps all witness information it has collected to the access point.\nThis is shown in Figure 3.\n3.1 Witness Information: Storage\nA critical concern is that there is a limited amount of memory available on motes (4 KB SDRAM memory, 128 KB flash memory, and 4-512 KB EEPROM).\nSo, it is important to organize witness information efficiently.\nCenWits stores witness information at each node as a set of witness records (the format is shown in Figure 4).\nFigure 4: Format of a witness record (Node ID: 1 B, Record Time: 3 B, (X,Y) Location: 8 B, Location Time: 3 B, Hop Count: 1 B)\nWhen two nodes i and j encounter each other, each node generates a new witness record.\nIn the witness record generated by i, Node ID is j, Record Time is the current time in i's clock, (X,Y) are the coordinates of the location of i that i recorded most recently (either from a satellite or an LP), Location Time is the time when this location was recorded, and Hop Count is 0.\nEach node is assigned a unique Node ID when it enters a trail.\nIn our current prototype, we have allocated one byte for Node ID, although this can be increased to two or more bytes if a large number of hikers are
expected to be present at the same time.
Time can be represented to one-second precision in 17 bits, so we have allocated 3 bytes each for Record Time and Location Time. The circumference of the Earth is approximately 40,075 km. If we use a 32-bit number to represent each of longitude and latitude, the precision we get is 40,075,000/2^32 = 0.0093 meter = 0.37 inches, which is more than precise enough for our needs. So, we have allocated 4 bytes each for the X and Y coordinates of the location of a node. In fact, one-foot precision can be achieved using only 27 bits.

3.2 Location Point and Location Inference
Although a GPS receiver provides accurate location information, it has its limitations. In canyons and rainy forests, a GPS receiver does not work, and under heavy cloud cover GPS users have experienced inaccuracy in the reported location as well. Unfortunately, many hiking trails run through dense forests and canyons, and it is not uncommon for rain to begin after hikers set out. To address this, CenWits incorporates the idea of location points (LPs). A location point can update a sensor node with its current location whenever the node is near that LP. LPs are placed at various spots in a wilderness area where GPS receivers do not work. An LP is a very simple device that emits prerecorded location information at some regular time interval. It can be placed in difficult-to-reach places such as deep canyons and dense rain forests by simply dropping it from an airplane. LPs allow a sensor node to determine its current location more accurately. However, they are not an essential requirement of CenWits: if an LP runs out of power, CenWits will continue to work correctly.

Figure 5: The GPS receivers are not working. The sensors then have to rely on an LP to provide coordinates.

In Figure 5, B cannot get GPS reception due to bad weather. It then runs into A on the trail, who does not have GPS reception either. Their sensors record the presence of
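The 16-byte record layout described above can be sketched as follows. This is an illustrative encoding, not the authors' mote code; big-endian byte order and the exact field order are assumptions, only the field widths come from Figure 4.

```python
# Sketch of the 16-byte witness-record layout: 1 B node ID, 3 B record
# time, 4 B + 4 B (X,Y) coordinates, 3 B location time, 1 B hop count.

def pack_witness_record(node_id, record_time, x, y, location_time, hop_count):
    """Serialize one witness record into its 16-byte format."""
    return (node_id.to_bytes(1, "big")
            + record_time.to_bytes(3, "big")   # 17 bits suffice; 3 bytes allocated
            + x.to_bytes(4, "big")             # 32-bit coordinate: ~0.37 inch precision
            + y.to_bytes(4, "big")
            + location_time.to_bytes(3, "big")
            + hop_count.to_bytes(1, "big"))

def unpack_witness_record(buf):
    """Inverse of pack_witness_record."""
    return (buf[0],
            int.from_bytes(buf[1:4], "big"),
            int.from_bytes(buf[4:8], "big"),
            int.from_bytes(buf[8:12], "big"),
            int.from_bytes(buf[12:15], "big"),
            buf[15])

record = pack_witness_record(7, 12345, 4000000, 2000000, 12300, 0)
assert len(record) == 16
assert unpack_witness_record(record) == (7, 12345, 4000000, 2000000, 12300, 0)
```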
each other.
After 10 minutes, A is in the range of an LP that provides accurate location information to A. When A returns to the trail head and uploads its data (Figure 6), the system can draw a circle, centered at the LP from which A fetched its location, bounding the possible locations of the encounter between A and B. By overlapping this circle with the trail map, two or three possible encounter locations can be inferred. Thus, when a rescue is required, the possible location of B can be better inferred (see Figures 7 and 8).

Figure 6: A is back at the trail head. It reports the time of its encounter with B to the AP, but has no location information for the encounter.

Figure 7: B is still missing after sunset. CenWits infers the last contact point and draws the circle of possible current locations based on average hiking speed.

Figure 8: Based on the overlapping landscape, B might have hiked down the wrong branch and fallen off a cliff. Hot rescue areas can thus be determined.

CenWits requires that the clocks of different sensor nodes be loosely synchronized with one another. Such synchronization is trivial when GPS coverage is available. In addition, sensor nodes in CenWits synchronize their clocks whenever they are in the range of an AP or an LP. The synchronization accuracy CenWits needs is on the order of a second or so. As long as the clocks are synchronized to within one second, whether A met B at 12:37:45 or at 12:37:46 does not matter for ordering witness events and inferring the path.

4. MEMORY AND POWER MANAGEMENT
CenWits employs several important mechanisms to conserve power and memory. It is important to note that while current sensor nodes have a limited amount of memory, future sensor nodes are expected to have much more. With this in mind, the main focus of our design is to provide a tradeoff between the amount of memory available and the amount of power consumed.

4.1 Memory Management
The size of the witness information stored at a node can get very large. This is because the node
may come across several other nodes during a hike, and may end up accumulating a large amount of witness information over time. To address this problem, CenWits allows a node to proactively free up parts of its memory periodically. This raises an interesting question: when should a witness record be deleted from the memory of a node, and which one? CenWits uses three criteria to determine this: record count, hop count, and record gap.

Record count refers to the number of witness records with the same node ID that a node has stored in its memory. A node maintains an integer parameter MAX RECORD COUNT and stores at most MAX RECORD COUNT witness records for any node.

Every witness record has a hop count field that stores the number of times (hops) this record has been transferred since being created. Initially this field is set to 0, and whenever a node receives a witness record from another node, it increments the hop count of that record by 1. A node maintains an integer parameter called MAX HOP COUNT and keeps only those witness records whose hop count is less than MAX HOP COUNT. The MAX HOP COUNT parameter balances two conflicting goals: (1) ensuring that a witness record is propagated to, and thus stored at, as many nodes as possible, so that it has a high probability of being dumped at some AP as quickly as possible; and (2) ensuring that a witness record is stored at only a few nodes, so that it does not clog up too much of the combined memory of all sensor nodes. We chose hop count rather than time-to-live to decide when to drop a record, because the probability of a record reaching an AP increases with its hop count. For example, when the hop count of a specific record is 5, the record is present in at least 5 sensor nodes. On the other hand, if we simply discarded old records without considering hop count, there would be no guarantee that the record is present in any other sensor node.

Record gap
refers to the time difference between the record times of two witness records with the same node ID. To save memory, a node n ensures that the record gap between any two witness records with the same node ID is at least MIN RECORD GAP. For each node ID i, n stores the witness record with the most recent record time rt_i, then the witness record with the most recent record time that is at least MIN RECORD GAP time units before rt_i, and so on, until the record count limit (MAX RECORD COUNT) is reached.

When a node is tight on memory, it adjusts the three parameters MAX RECORD COUNT, MAX HOP COUNT and MIN RECORD GAP to free up some memory: it decrements MAX RECORD COUNT and MAX HOP COUNT, and increments MIN RECORD GAP. It then first erases all witness records whose hop count exceeds the reduced MAX HOP COUNT value, and then erases witness records to satisfy the record gap criterion. Conversely, when a node has extra memory available, e.g. after dumping its witness information at an access point, it resets MAX RECORD COUNT, MAX HOP COUNT and MIN RECORD GAP to predefined values.

4.2 Power Management
An important advantage of using sensors for tracking is that we can regulate the behavior of a sensor node based on current conditions. For example, we mentioned earlier that a sensor should emit a beacon every 1.7 minutes, given a hiking speed of 1 mile/hour. However, if a user is moving at 10 feet/sec, a beacon should be emitted every 10 seconds, and if a user is not moving at all, a beacon can be emitted every 10 minutes. At night, when a user is not likely to move for a relatively long period of time, a sensor can be put into sleep mode to save energy. If a user is active for only eight hours in a day, we can put the sensor into sleep mode for the other 16 hours and thus save two-thirds of the energy. In addition, a sensor node can choose not to send any beacons during some time intervals. For example, suppose hiker A has communicated its witness information
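The three pruning criteria can be sketched as a single filtering pass. This is an illustration, not the authors' implementation; witness records are modeled here as hypothetical (node_id, record_time, hop_count) tuples.

```python
# Sketch of the three memory-management criteria: MAX RECORD COUNT,
# MAX HOP COUNT, and MIN RECORD GAP, applied to a node's record store.

def prune(records, max_record_count, max_hop_count, min_record_gap):
    """Return the records that survive all three criteria."""
    by_node = {}
    # Walk records newest-first so the most recent record per node is kept.
    for node_id, t, hop in sorted(records, key=lambda r: r[1], reverse=True):
        if hop >= max_hop_count:            # MAX HOP COUNT criterion
            continue
        kept = by_node.setdefault(node_id, [])
        if len(kept) >= max_record_count:   # MAX RECORD COUNT criterion
            continue
        # MIN RECORD GAP criterion: keep only records at least
        # min_record_gap time units older than the last one kept.
        if not kept or kept[-1][1] - t >= min_record_gap:
            kept.append((node_id, t, hop))
    return [r for recs in by_node.values() for r in recs]

# Records for node 1 at times 10, 15, 16: with a gap of 5, time 15 is
# erased because it is too close to the newest record at time 16.
assert prune([(1, 10, 0), (1, 15, 0), (1, 16, 0)], 5, 5, 5) == [(1, 16, 0), (1, 10, 0)]
```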
to three other hikers in the last five minutes. If it is running low on power, it can go into receive mode or sleep mode for the next ten minutes: receive mode if it is still willing to receive additional witness information from hikers it encounters during those ten minutes, and sleep mode if it is extremely low on power.

The bandwidth and energy limitations of sensor nodes require that the amount of data transferred among the nodes be reduced to a minimum. It has been observed that in some scenarios 3,000 instructions can be executed for the same energy cost as sending one bit 100 m by radio [15]. To reduce the amount of data transfer, CenWits employs a handshake protocol that two nodes execute when they encounter one another. The goal of this protocol is to ensure that a node transmits only as much witness information as the other node is willing to receive. The protocol is initiated when a node i receives a beacon containing the node ID of the sender node j, and i has not exchanged witness information with j in the last δ time units. Assume that i < j. The protocol consists of four phases (see Figure 9):

1. Phase I: Node i sends its receive constraints and the number of witness records it has in its memory.
2. Phase II: On receiving this message from i, j sends its receive constraints and the number of witness records it has in its memory.
3. Phase III: On receiving the above message from j, i sends its witness information (filtered based on the receive constraints received in Phase II).
4. Phase IV: After receiving the witness records from i, j sends its witness information (filtered based on the receive constraints received in Phase I).

Figure 9: Four-phase handshake protocol (i < j).

Receive constraints are a function of memory and power. In the most general case, they are
comprised of the three parameters (record count, hop count and record gap) used for memory management. If i is low on memory, it specifies the maximum number of records it is willing to accept from j. Similarly, i can ask j to send only those records whose hop count value is less than MAX HOP COUNT − 1. Finally, i can include its MIN RECORD GAP value in its receive constraints. Note that the handshake protocol is beneficial to both i and j: they save memory by receiving only as much information as they are willing to accept, and conserve energy by sending only as many witness records as needed.

It turns out that filtering witness records based on MIN RECORD GAP is complex. It requires that the witness records of any given node be kept sorted by their record time values. Maintaining this sorted order in memory is complex, because new witness records with the same node ID can arrive later and may have to be inserted in between to preserve the order. For this reason, the receive constraints in the current CenWits prototype do not include record gap. Suppose i specifies a hop count value of 3. In this case, j checks the hop count field of every witness record before sending it; if the hop count value is greater than 3, the record is not transmitted.

4.3 Groups and Partitions
To further reduce communication and increase the lifetime of our system, we introduce the notion of groups, based on the concept of abstract regions presented in [20]. A group is a set of n nodes defined in terms of radio connectivity, geographic location, or other properties of the nodes. All nodes within a group can communicate directly with one another, and they share information to maintain their common view of the external world. At any point in time, a group has exactly one leader that communicates with external nodes on behalf of the entire group. A group can be static, meaning that the group membership does not change over the
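The four-phase exchange can be sketched as follows. This is an illustration under stated assumptions, not the protocol implementation: records are hypothetical (node_id, record_time, hop_count) tuples, and, as in the prototype, the receive constraints are modeled with only a record-count limit and a hop-count limit (no record gap).

```python
# Sketch of the four-phase handshake: each side transmits only the
# witness records that satisfy the *other* side's receive constraints.

def filter_records(records, constraints):
    """Apply a receiver's constraints before transmission (Phases III/IV)."""
    max_count, max_hop = constraints
    eligible = [r for r in records if r[2] <= max_hop]   # hop-count filter
    return eligible[:max_count]                          # record-count filter

def handshake(records_i, constraints_i, records_j, constraints_j):
    # Phase I:   i -> j : i's constraints and record count.
    # Phase II:  j -> i : j's constraints and record count.
    # Phase III: i -> j : i's records, filtered by j's constraints.
    # Phase IV:  j -> i : j's records, filtered by i's constraints.
    sent_by_i = filter_records(records_i, constraints_j)
    sent_by_j = filter_records(records_j, constraints_i)
    return sent_by_i, sent_by_j

recs_i = [("a", 1, 0), ("b", 2, 4), ("c", 3, 1)]
sent_i, sent_j = handshake(recs_i, (1, 3), [("x", 9, 2)], (2, 3))
assert sent_i == [("a", 1, 0), ("c", 3, 1)]   # hop 4 filtered out, then capped at 2
assert sent_j == [("x", 9, 2)]                # within i's constraints
```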
period of time, or dynamic, in which case nodes can leave or join the group. To keep our analysis simple and to explain the advantages of groups, we first discuss static groups.

A static group is formed at the start of a hiking trail or ski slope. Suppose five family members want to go for a hike in the Rocky Mountain National Park. Before they start their hike, each one of them is given a sensor node, and the fact that the five nodes form a group is entered in the system. Each group member is given a unique ID, every group member knows the other members of the group, and the group as a whole is also assigned an ID to distinguish it from other groups in the system.

Figure 10: A group of five people. Node 2 is the group leader and is communicating on behalf of the group with an external node 17. All other nodes (shown in a lighter shade) are in sleep mode.

As the group moves along the trail, it exchanges information with other nodes or groups that it comes across. At any point in time, only one group member, called the leader, sends and receives information on behalf of the group, and the other n − 1 group members are put into sleep mode (see Figure 10). It is this property of groups that saves energy. The group leadership is time-multiplexed among the group members to make sure that a single node does not run out of battery due to continuous exchange of information. Thus, after every t seconds, the leadership is passed on to another node, called the successor, and the leader (now an ordinary member) is put to sleep. Since energy is dear, we do not implement an extensive election algorithm for choosing the successor; instead, the node with the next highest ID in the group is chosen. The last node, of course, chooses the node with the lowest ID as its successor.

We now discuss the data storage schemes for groups. Memory
is a scarce resource in sensor nodes, and it is therefore important that witness information be stored efficiently among group members. Efficient data storage is not a trivial task when it comes to groups. The tradeoff is between the simplicity of the scheme and its memory savings: a simpler scheme incurs a lower energy cost than a more sophisticated scheme, but offers smaller memory savings, because in a more sophisticated scheme the group members have to coordinate to update and store information. After considering a number of different schemes, we have come to the conclusion that there is no single optimal storage scheme for groups. The system should adapt to its requirements: if the group members are low on battery, the group can adopt a scheme that is more energy efficient; if the group members are running out of memory, they can adopt a scheme that is more memory efficient. We first present a simple scheme that is very energy efficient but does not offer significant memory savings. We then present an alternative scheme that is much more memory efficient.

As already mentioned, a group can receive information only through the group leader. Whenever the leader comes across an external node e, it receives information from that node and saves it. In our first scheme, when the leader's timeslot expires, it passes the new information received from e to its successor. This is important because, if the new leader comes across another external node during the next timeslot, it must be able to pass on information about all the external nodes the group has witnessed so far. Thus the information is fully replicated on all nodes to maintain a correct view of the world. This first scheme offers no memory savings but is highly energy efficient, and may be a good choice when the group members are running low on battery. Except for the time when the leadership is switched, all n − 1 members
are asleep at any given time. This means that a single member is up for t seconds once every n ∗ t seconds, and therefore spends approximately only 1/n-th of its energy. Thus, if there are 5 members in a group, we save 80% of the energy, and more can be saved by increasing the group size.

We now present an alternative data storage scheme that saves memory at the cost of energy. In this scheme we divide the group into what we call partitions, which can be thought of as subgroups within a group. Each partition must have at least two nodes in it. The nodes within a partition are called peers, and each partition has one peer designated as the partition leader. The partition leader stays in receive mode at all times, while all the other peers in a partition stay in sleep mode. Partition leadership is time-multiplexed among the peers to make sure that a single node does not run out of battery. As before, a group has exactly one leader, and the group leadership is time-multiplexed among the partitions; the group leader also serves as the partition leader of the partition it belongs to (see Figure 11).

In this scheme, all partition leaders participate in information exchange. Whenever a group comes across an external node e, every partition leader receives all the witness information, but it stores only a subset of that information after filtering. Information is filtered in such a way that each partition leader has to store only B/K bytes of data, where K is the number of partitions and B is the total number of bytes received from e. Similarly, when the group wants to send witness information to e, each partition leader sends only the B/K bytes stored in its partition. However, before a partition leader can send information, it must switch from receive mode to send mode. Partition leaders must also coordinate with one another to ensure that they do not send their witness information at the same time, i.e. that their messages do not collide. All this is achieved by having the group leader signal each partition leader in turn.

Figure 11: A group of eight nodes divided into four partitions of 2 nodes each. Node 1 is the group leader, whereas nodes 2, 9, and 7 are partition leaders. All other nodes are in sleep mode.

Since the partition leadership is time-multiplexed, it is important that any information received by the partition leader p1 be passed on to the next leader p2. This has to be done to make sure that p2 has all the information that it might need to send when it comes across another external node during its timeslot. One way of achieving this is to wake p2 up just before p1's timeslot expires and have p1 transfer information only to p2. An alternative is to wake all the peers up at the time of the leadership change, and have p1 broadcast the information to all of them; each peer saves the information sent by p1 and then goes back to sleep. In both cases, the peers send an acknowledgement to the partition leader after receiving the information. In the former method, only one node needs to wake up at the time of the leadership change, but the amount of information that has to be transmitted between the nodes grows as time passes. In the latter, all nodes have to be woken up at the leadership change, but only a small piece of information has to be transmitted among the peers each time. Since communication is much more expensive than waking the nodes up, we prefer the second method.

A group can be divided into partitions in more than one way. For example, a group of six members can be divided into three partitions of two peers each, or two partitions of three peers each. The choice once again depends on the requirements of the system. A few big partitions make the system more energy efficient, because a greater number of
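One simple way to realize the B/K split among partition leaders can be sketched as follows. The paper does not specify the filtering function, so the round-robin assignment used here is an assumption for illustration only.

```python
# Sketch of partition-based storage: K partition leaders each keep a
# deterministic 1/K slice of the incoming witness records, so every
# record is stored in exactly one partition.

def partition_slice(records, k, partitions):
    """Records stored by partition leader k out of `partitions` leaders."""
    return [r for i, r in enumerate(records) if i % partitions == k]

def group_store(records, partitions):
    """Distribute the received records over all partition leaders."""
    return [partition_slice(records, k, partitions) for k in range(partitions)]

incoming = list(range(8))            # 8 records received from an external node
stores = group_store(incoming, 4)    # 4 partitions, as in Figure 11
assert stores == [[0, 4], [1, 5], [2, 6], [3, 7]]   # B/K records per leader
assert sorted(sum(stores, [])) == incoming          # nothing lost, nothing duplicated
```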
nodes will stay in sleep mode at any given point in time. On the other hand, several small partitions make the system more memory efficient, since each node has to store less information (see Figure 12).

A group that is divided into partitions must be able to readjust itself when a node leaves or runs out of battery. This is crucial because a partition must have at least two nodes at any point in time to tolerate the failure of one node. For example, in Figure 12(a), if node 2 or node 5 dies, the partition is left with only one node; if that single node later dies as well, all witness information stored in that partition is lost. We have devised a very simple protocol to solve this problem.

Figure 12: Two different ways of partitioning a group of six nodes. In (a), the group is divided into three partitions of two nodes: node 1 is the group leader, nodes 9 and 5 are partition leaders, and nodes 2, 3, and 6 are in sleep mode. In (b), the group is divided into two partitions of three nodes: node 1 is the group leader, node 9 is the partition leader, and nodes 2, 3, 5, and 6 are in sleep mode.

We first explain how partitions are adjusted when a peer dies, and then explain what happens if a partition leader dies. Suppose node 2 in Figure 12(a) dies. When node 5, the partition leader, sends information to node 2, it does not receive an acknowledgement and concludes that node 2 has died². At this point, node 5 contacts the other partition leaders (nodes 1 and
9) using a broadcast message and informs them that one of its peers has died. Upon hearing this, each partition leader tells node 5 (i) the number of nodes in its partition, (ii) a candidate node that node 5 can take over if the number of nodes in its partition is greater than 2, and (iii) the amount of witness information stored in its partition. Upon hearing from every leader, node 5 chooses the candidate node from the partition with the maximum number of peers (which must be greater than 2), and sends a message back to all the leaders. Node 5 then sends its data to its new peer to make sure that the information is replicated within the partition. However, if all partitions have exactly two nodes, then node 5 must itself join another partition. It chooses the partition storing the least amount of witness information and sends its own witness information to that partition's leader. The witness information and membership update are propagated to all peers during the next change of partition leadership.

We now consider the case where a partition leader dies. If this happens, we wait for the partition leadership to change and for the new partition leader to eventually find out that a peer has died. Once the new partition leader finds out that it needs more peers, it proceeds with the protocol explained above. In this case, however, we do lose any information that the previous partition leader received just before it died. This problem could be solved with a more rigorous protocol, but we have chosen to give up some accuracy to save energy.

Our current design uses time-division multiplexing to schedule wakeup and sleep modes in the sensor nodes. Recent work on radio wakeup sensors [10] can be used to do this scheduling more efficiently, and we plan to incorporate radio wakeup sensors in CenWits when the hardware matures.

² The algorithm for concluding that a node has died can be made more rigorous by having the partition leader query the suspected node a
few times.

5. SYSTEM EVALUATION
A sensor is constrained in the amount of memory and power it has. In general, the amount of memory needed and the power consumed depend on a variety of factors such as node density, the number of hiker encounters, and the number of access points. In this section, we estimate how long the power of a MICA2 mote will last under certain assumptions.

First, assume that each sensor node carries about 100 witness records, and that on encountering another hiker, a sensor node transmits 50 witness records and receives 50 new ones. Since each record is 16 bytes long, it takes about 0.34 seconds to transmit 50 records, and another 0.34 seconds to receive 50 records, over a 19,200 bps link. The current draws of the MICA2 due to CPU processing, transmission and reception are approximately 8.0 mA, 8.5 mA and 7.0 mA respectively [18], and the capacity of an alkaline battery is 2,500 mAh. Since the radio module of the Mica2 is half-duplex, and assuming that the CPU is always active when a node is awake, the current draw during transmission is 8 + 8.5 = 16.5 mA and during reception is 8 + 7 = 15 mA, for an average of (16.5 + 15)/2 = 15.75 mA. Given the 2,500 mAh battery capacity, a battery should last for 2,500/15.75 = 159 hours of transmission and reception. An encounter between two hikers results in the exchange of about 50 witness records each way, which takes about 0.68 seconds as calculated above. Thus, a single alkaline battery can support (159 × 60 × 60)/0.68 = 841,764 hiker encounters. Assuming that a node emits a beacon every 90 seconds and that a hiker encounter occurs every time a beacon is emitted (a worst-case scenario), a single alkaline battery lasts 841,764 × 90 seconds, i.e., (841,764 × 90)/(30 × 24 × 60 × 60) ≈ 29 thirty-day months. Thus, even at this pessimistic encounter rate, a Mica2 sensor powered by its two batteries can remain operational for many
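The back-of-the-envelope arithmetic above can be reproduced as a quick sanity check. This is a sketch, not measurement code; the current draws are the figures the text quotes from [18], and the 0.68-second encounter cost is the text's rounded 2 × 0.34 s.

```python
# Sanity check of the battery-life estimate: transmit time per exchange,
# average radio current draw, battery hours, and encounters per battery.

RECORD_BYTES = 16
LINK_BPS = 19200
BATTERY_MAH = 2500

tx_time = 50 * RECORD_BYTES * 8 / LINK_BPS     # seconds to send 50 records (~0.33 s)
cpu_ma, tx_ma, rx_ma = 8.0, 8.5, 7.0           # MICA2 current draws quoted from [18]
avg_ma = ((cpu_ma + tx_ma) + (cpu_ma + rx_ma)) / 2   # half-duplex average
battery_hours = BATTERY_MAH / avg_ma           # hours of radio activity per battery
encounter_s = 0.68                             # send + receive, as rounded in the text
encounters = round(battery_hours) * 3600 / encounter_s

assert avg_ma == 15.75
assert round(battery_hours) == 159
assert int(encounters) == 841764               # hiker encounters per battery
```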
months.
Notice that this calculation is preliminary, because it assumes that hikers are active 24 hours a day and that a hiker encounter occurs every 90 seconds. In a more realistic scenario, the power can be expected to last much longer. The lifetime will also increase significantly when groups of hikers move together. Finally, the lifetime of a sensor running on two batteries can be increased further by using energy scavenging and energy harvesting techniques [16, 14].

6. PROTOTYPE IMPLEMENTATION
We have implemented a prototype of CenWits on MICA2 (900 MHz) sensors running Mantis OS 0.9.1b. One of the sensors is equipped with an MTS420CA GPS module, which is capable of barometric pressure and two-axis acceleration sensing in addition to GPS location tracking. We use SiRF, the serial communication protocol, to control the GPS module. SiRF has a rich command set, but we record only the X and Y coordinates. A witness record is 16 bytes long. When a node starts up, it stores its current location and emits a beacon periodically; in the prototype, a node emits a beacon every minute. We have conducted a number of experiments with this prototype. A detailed report on these experiments, with the raw data collected and photographs of hikers, access points, etc.,
is available at http://csel.cs.colorado.edu/∼huangjh/Cenwits.index.htm. Here we report results from three of them. In all of these experiments, there are three access points (A, B and C) where nodes dump their witness information; these access points also provide location information to the nodes that come within their range. We first show how CenWits can be used to determine the hiking trail a hiker is most likely on and the speed at which he is hiking, and to identify hot search areas in case he is reported missing. Next, we show how the power and memory management techniques of CenWits conserve the power and memory of a sensor node in one of our experiments.

6.1 Locating Lost Hikers
The first experiment is called Direct Contact. It is a very simple experiment in which a single hiker starts from A, goes to B and then C, and finally returns to A (see Figure 13). The goal of this experiment is to illustrate that CenWits can deduce the trail a hiker takes by processing witness information.

Figure 13: Direct Contact Experiment.

Table 1: Witness information collected in the direct contact experiment.

Node ID | Record Time | (X,Y) Location | Location Time | Hop Count
1 | 15 | (12,7) | 15 | 0
1 | 33 | (31,17) | 33 | 0
1 | 46 | (12,23) | 46 | 0
1 | 10 | (12,7) | 10 | 0
1 | 48 | (12,23) | 48 | 0
1 | 16 | (12,7) | 16 | 0
1 | 34 | (31,17) | 34 | 0

The witness information dumped at the three access points was collected and processed at a control center. Part of the witness information collected at the control center is shown in Table 1. The (X,Y) locations in this table correspond to the location information provided by access points A, B, and C: A is located at (12,7), B at (31,17) and C at (12,23). The three encounter points (between hiker 1 and the three access points) extracted from this witness information are shown in rectangular boxes in Figure 13. For example, "A,1 at 16" means that 1 came in contact with A at time 16. Using this information, we can infer the direction in which hiker 1
was moving and the speed at which he was moving. Furthermore, given a map of the hiking trails in this area, it is clearly possible to identify the hiking trail that hiker 1 took.

The second experiment is called Indirect Inference. It is designed to illustrate that the location, direction and speed of a hiker can be inferred by CenWits even if the hiker never comes within the range of any access point, and it illustrates the importance of witness information in search and rescue applications. In this experiment, there are three hikers: 1, 2 and 3. Hiker 1 takes a trail that goes along access points A and B, while hiker 3 takes a trail that goes along access points C and B. Hiker 2 takes a trail that does not come within the range of any access point; however, he meets hikers 1 and 3 during his hike. This is illustrated in Figure 14.

Figure 14: Indirect Inference Experiment.

Table 2: Witness information collected from hiker 1 in the indirect inference experiment.

Node ID | Record Time | (X,Y) Location | Location Time | Hop Count
2 | 16 | (12,7) | 6 | 0
2 | 15 | (12,7) | 6 | 0
1 | 4 | (12,7) | 4 | 0
1 | 6 | (12,7) | 6 | 0
1 | 29 | (31,17) | 29 | 0
1 | 31 | (31,17) | 31 | 0

Table 3: Witness information collected from hiker 3 in the indirect inference experiment.

Node ID | Record Time | (X,Y) Location | Location Time | Hop Count
3 | 78 | (12,23) | 78 | 0
3 | 107 | (31,17) | 107 | 0
3 | 106 | (31,17) | 106 | 0
3 | 76 | (12,23) | 76 | 0
3 | 79 | (12,23) | 79 | 0
2 | 94 | (12,23) | 79 | 0
1 | 16 | (?,?) | ? | 1
1 | 15 | (?,?) | ? | 1

Part of the witness information collected at the control center from access points A, B and C is shown in Tables 2 and 3. There are some interesting data in these tables. For example, the location time in some witness records is not the same as the record time, which means that the node that generated the record did not have an up-to-date location at the encounter time. For instance, when hikers 1 and 2 meet at time 16, the last recorded location of hiker 1 is (12,7), recorded at time 6. So, node 1 generates a witness record with record time 16, location (12,7)
and location time 6. Note also that the last two records in Table 3 have (?,?) as their location. This happened because these witness records were generated by hiker 2 during his encounters with hiker 1 at times 15 and 16; until then, hiker 2 had not come in contact with any location points.

Interestingly, a more accurate location for the encounter between 1 and 2, or between 2 and 3, can be computed by processing the witness information at the control center. It took 25 units of time for hiker 1 to go from A (12,7) to B (31,17). Assuming a constant hiking speed and a relatively straight-line hike, it can be computed that at time 16 hiker 1 must have been at approximately (18,10). Thus (18,10) is a more accurate location for the encounter between 1 and 2.

Finally, our third experiment, called Identifying Hot Search Areas, is designed to determine the trail a hiker has taken and to identify hot search areas for rescue after he is reported missing. There are six hikers (1, 2, 3, 4, 5 and 6) in this experiment. Figure 15 shows the trails that hikers 1, 2, 3, 4 and 5 took, along with the encounter points obtained from witness records collected at the control center. For brevity, we have not shown the entire witness information collected at the control center; this information is available at http://csel.cs.colorado.edu/∼huangjh/Cenwits/index.htm.

Figure 15: Identifying Hot Search Area Experiment (without hiker 6).

Now suppose hiker 6 is reported missing at time 260. To determine the hot search areas, the witness records of hiker 6 are processed to determine the trail he is most likely on, the speed at which he had been moving, the direction in which he had been moving, and his last known location. Based on this information and the hiking trail map, hot search areas are identified. The hiking trail taken by hiker 6 as inferred by CenWits is shown by a dotted line, and the hot search areas identified by CenWits are shown by dark lines inside the dotted circle in
Figure 16.

Figure 16: Identifying Hot Search Area Experiment (with hiker 6)

6.2 Results of Power and Memory Management

The witness information shown in Tables 1, 2 and 3 has not been filtered using the three criteria described in Section 4.1. For example, the witness records generated by 3 at record times 76, 78 and 79 (see Table 3) have all been generated due to a single contact between access point C and node 3. By applying the record gap criterion, two of these three records will be erased. Similarly, the witness records generated by 1 at record times 10, 15 and 16 (see Table 1) have all been generated due to a single contact between access point A and node 1. Again, by applying the record gap criterion, two of these three records will be erased. Our experiments did not generate enough data to test the impact of the record count or hop count criteria. To evaluate the impact of these criteria, we simulated CenWits to generate a significantly large number of records for a given number of hikers and access points. We generated witness records by having the hikers walk randomly. We applied the three criteria to measure the amount of memory savings in a sensor node. The results are shown in Table 4. The number of hikers in this simulation was 10 and the number of access points was 5. The number of witness records reported in this table is the average number of witness records a sensor node stored at the time of dump to an access point.

MAX RECORD COUNT  MIN RECORD GAP  MAX HOP COUNT  # of Witness Records
5                 5               5              628
4                 5               5              421
3                 5               5              316
5                 10              5              311
5                 20              5              207
5                 5               4              462
5                 5               3              341
3                 20              3              161

Table 4: Impact of memory management techniques.

These results show that the three memory management criteria significantly reduce the memory consumption of sensor nodes in CenWits. For example, they can reduce the memory consumption by up to 75%. However, these results are premature at present for two reasons: (1) they are generated via simulation of hikers walking at random; and (2)
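Among the three filtering criteria of Section 4.1, the record gap criterion admits a compact sketch (a minimal illustration in Python; the function name, field names, and dictionary representation are our own assumptions, not CenWits code):

```python
def apply_record_gap(records, min_gap):
    """Apply the MIN RECORD GAP criterion: for each witnessed node,
    keep a new witness record only if it was generated at least
    min_gap time units after the previously kept record for that
    node; later records from the same contact are erased."""
    kept = []
    last_kept = {}  # node_id -> record_time of last kept record
    for rec in sorted(records, key=lambda r: r["record_time"]):
        node = rec["node_id"]
        if node not in last_kept or rec["record_time"] - last_kept[node] >= min_gap:
            kept.append(rec)
            last_kept[node] = rec["record_time"]
    return kept

# Hiker 3's records for its contact with access point C (record times
# 76, 78 and 79): with min_gap = 5 only the first survives, i.e., two
# of the three records are erased, as described in the text.
recs = [{"node_id": "C", "record_time": t} for t in (76, 78, 79)]
assert len(apply_record_gap(recs, 5)) == 1
```

The MAX RECORD COUNT and MAX HOP COUNT criteria could be layered on the same pass by additionally capping the number of kept records per node and discarding records whose hop count exceeds the threshold.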
It is not clear what impact the erasing of witness records has on the accuracy of the inferred location/hot search areas of lost hikers. In our future work, we plan to undertake a major study to address these two concerns.

7. OTHER APPLICATIONS

In addition to hiking in wilderness areas, CenWits can be used in several other applications, e.g., skiing, climbing, wildlife monitoring, and person tracking. Since CenWits relies only on intermittent connectivity, it can take advantage of existing cheap and mature technologies, and thereby make tracking cheaper and fairly accurate. Since CenWits doesn't rely on keeping track of a sensor holder at all times, but relies on maintaining witnesses, the system is relatively cheap and widely applicable. For example, there are dangerous cliffs in most ski resorts, but it is too expensive for a ski resort to deploy a connected wireless sensor network throughout the mountain. Using CenWits, we can deploy some sensors at the cliff boundaries. These boundary sensors emit beacons quite frequently, e.g., every second, and so can record the presence of skiers who cross the boundary and fall off the cliff. Ski patrols can cruise the mountains every hour, and automatically query the boundary sensors when in range using PDAs. If a PDA shows that a skier has been close to a boundary sensor, the ski patrol can use a long-range walkie-talkie to ask the control center at the resort base to check the witness records of the skier. If there is no witness record after the time recorded in the boundary sensor, there is a high chance that a rescue is needed.

In wildlife monitoring, a very popular method is to attach a GPS receiver to the animals. To collect data, either a satellite transmitter is used, or the data collector has to wait until the GPS receiver brace falls off (after a year or so) and then search for the GPS receiver. GPS transmitters are very expensive, e.g.,
the one used in geese tracking is $3,000 each [2]. Also, it is not yet known whether a continuous radio signal is harmful to the birds. Furthermore, a GPS transmitter is quite bulky and uncomfortable, and as a result, birds always try to get rid of it. Using CenWits, not only can we record the presence of wildlife, we can also record the behavior of wild animals, e.g., lions might follow the migration of deer. CenWits does not require any bulky and expensive satellite transmitters, nor is there a need to wait for a year and then search for the braces. CenWits provides a very simple and cost-effective solution in this case. Also, access points can be strategically located, e.g., near a water source, to increase the chances of collecting up-to-date data. In fact, the access points need not be statically located. They can be placed in a low-altitude plane (e.g., a UAV) and flown over a wilderness area to collect data from wildlife. In large cities, CenWits can be used to complement GPS, since GPS doesn't work indoors or near skyscrapers. If a person A is reported missing, and from the witness records we find that his last contacts were C and D, we can trace an approximate location quickly and quite efficiently.

8. DISCUSSION AND FUTURE WORK

This paper presents a new search and rescue system called CenWits that has several advantages over current search and rescue systems. These advantages include a loosely-coupled design that relies only on intermittent network connectivity, power and storage efficiency, and low cost. It solves one of the greatest problems plaguing modern search and rescue systems: it has an inherent on-site storage capability. This means someone within the network will have access to the last-known-location information of a victim, and perhaps his bearing and speed information as well. It utilizes the concept of witnesses to propagate information, infer the current possible location and speed of a subject, and identify hot search and rescue areas
in case of emergencies.

A large part of the CenWits design focuses on addressing the power and memory limitations of current sensor nodes. In fact, power and memory constraints depend on how much weight (of a sensor node) a hiker is willing to carry and on the cost of these sensors. An important goal of CenWits is to build small chips that can be implanted in hiking boots or ski jackets, similar to the avalanche beacons that are currently implanted in ski jackets. We anticipate that power and memory will continue to be constrained in such an environment. While the paper focuses on the development of a search and rescue system, it also provides some innovative, system-level ideas for information processing in a sensor network system.

We have developed and experimented with a basic prototype of CenWits at present. Future work includes developing a more mature prototype addressing important issues such as security, privacy, and high availability. There are several pressing concerns in these areas. For example, an adversary could sniff the witness information to locate endangered animals, women, or children, or could inject false information into the system. An individual may not be comfortable with providing his/her location and movement information, even though he/she is definitely interested in being located in a timely manner at the time of an emergency. In general, people in the hiking community are friendly and usually trustworthy, so bullet-proof security is not really required. However, when CenWits is used in the context of other applications, security requirements may change. In addition, since the sensor nodes used in CenWits are fragile, they can fail. In fact, the nature and level of security, privacy and high availability support needed in CenWits strongly depends on the application for which it is being used and the individual subjects involved. Accordingly, we plan to design multi-level support for
security, privacy and high availability in CenWits. So far, we have experimented with CenWits in a very restricted environment with a small number of sensors. Our next goal is to deploy this system in a much larger and more realistic environment. In particular, discussions are currently underway to deploy CenWits in the Rocky Mountain and Yosemite National Parks.

9. REFERENCES
[1] 802.11-based tracking system. http://www.pangonetworks.com/locator.htm.
[2] Brent geese 2002. http://www.wwt.org.uk/brent/.
[3] The OnStar system. http://www.onstar.com.
[4] Personal locator beacons with GPS receiver and satellite transmitter. http://www.aeromedix.com/.
[5] Personal tracking using GPS and GSM system. http://www.ulocate.com/trimtrac.html.
[6] RF-based kid tracking system. http://www.ion-kids.com/.
[7] F. Alessio. Performance measurements with motes technology. MSWiM '04, 2004.
[8] P. Bahl and V. N. Padmanabhan. RADAR: An in-building RF-based user location and tracking system. IEEE Infocom, 2000.
[9] K. Fall. A delay-tolerant network architecture for challenged internets. In SIGCOMM, 2003.
[10] L. Gu and J. Stankovic. Radio-triggered wake-up capability for sensor networks. In Real-Time Applications Symposium, 2004.
[11] J. Hightower and G. Borriello. Location systems for ubiquitous computing. IEEE Computer, 2001.
[12] W. Jaskowski, K. Jedrzejek, B. Nyczkowski, and S. Skowronek. Lifetch life saving system. CSIDC, 2004.
[13] P. Juang, H. Oki, Y. Wang, M. Martonosi, L. Peh, and D. Rubenstein. Energy-efficient computing for wildlife tracking: design tradeoffs and early experiences with ZebraNet. In ASPLOS, 2002.
[14] K. Kansal and M. Srivastava. Energy harvesting aware power management. In Wireless Sensor Networks: A Systems Perspective, 2005.
[15] G. J. Pottie and W. J. Kaiser. Embedding the internet: wireless integrated network sensors. Communications of the ACM, 43(5), May 2000.
[16] S. Roundy, P. K. Wright, and J.
Rabaey.\nA study of low-level vibrations as a power source for wireless sensor networks.\nComputer Communications, 26(11), 2003.\n[17] C. Savarese, J. M. Rabaey, and J. Beutel.\nLocationing in distributed ad-hoc wireless sensor networks.\nICASSP, 2001.\n[18] V. Shnayder, M. Hempstead, B. Chen, G. Allen, and M. Welsh.\nSimulating the power consumption of large-scale sensor network applications.\nIn Sensys, 2004.\n[19] R. Want and A. Hopper.\nActive badges and personal interactive computing objects.\nIEEE Transactions of Consumer Electronics, 1992.\n[20] M. Welsh and G. Mainland.\nProgramming sensor networks using abstract regions.\nFirst USENIX\/ACM Symposium on Networked Systems Design and Implementation (NSDI ``04), 2004.\n191","lvl-3":"CenWits: A Sensor-Based Loosely Coupled Search and Rescue System Using Witnesses\nUniversity of Colorado, Campus Box 0430 Boulder, CO 80309-0430\nABSTRACT\nThis paper describes the design, implementation and evaluation of a search and rescue system called CenWits.\nCenWits uses several small, commonly-available RF-based sensors, and a small number of storage and processing devices.\nIt is designed for search and rescue of people in emergency situations in wilderness areas.\nA key feature of CenWits is that it does not require a continuously connected sensor network for its operation.\nIt is designed for an intermittently connected network that provides only occasional connectivity.\nIt makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate information to a processing center.\nA prototype of CenWits has been implemented using Berkeley Mica2 motes.\nThe paper describes this implementation and reports on the performance measured from it.\n1.\nINTRODUCTION\nSearch and rescue of people in emergency situation in a 
timely manner is an extremely important service.\nIt has been difficult to provide such a service due to lack of timely\ninformation needed to determine the current location of a person who may be in an emergency situation.\nWith the emergence of pervasive computing, several systems [12, 19, 1, 5, 6, 4, 11] have been developed over the last few years that make use of small devices such as cell phones, sensors, etc. .\nAll these systems require a connected network via satellites, GSM base stations, or mobile devices.\nThis requirement severely limits their applicability, particularly in remote wilderness areas where maintaining a connected network is very difficult.\nFor example, a GSM transmitter has to be in the range of a base station to transmit.\nAs a result, it cannot operate in most wilderness areas.\nWhile a satellite transmitter is the only viable solution in wilderness areas, it is typically expensive and cumbersome.\nFurthermore, a line of sight is required to transmit to satellite, and that makes it infeasible to stay connected in narrow canyons, large cities with skyscrapers, rain forests, or even when there is a roof or some other obstruction above the transmitter, e.g. 
in a car.\nAn RF transmitter has a relatively smaller range of transmission.\nSo, while an in-situ sensor is cheap as a single unit, it is expensive to build a large network that can provide connectivity over a large wilderness area.\nIn a mobile environment where sensors are carried by moving people, power-efficient routing is difficult to implement and maintain over a large wilderness area.\nIn fact, building an adhoc sensor network using only the sensors worn by hikers is nearly impossible due to a relatively small number of sensors spread over a large wilderness area.\nIn this paper, we describe the design, implementation and evaluation of a search and rescue system called CenWits (Connection-less Sensor-Based Tracking System Using Witnesses).\nCenWits is comprised of mobile, in-situ sensors that are worn by subjects (people, wild animals, or in-animate objects), access points (AP) that collect information from these sensors, and GPS receivers and location points (LP) that provide location information to the sensors.\nA subject uses GPS receivers (when it can connect to a satellite) and LPs to determine its current location.\nThe key idea of CenWits is that it uses a concept of witnesses to convey a subject's movement and location information to the outside world.\nThis averts a need for maintaining a connected network to transmit location information to the outside world.\nIn particular, there is no need for expensive GSM or satellite transmitters, or maintaining an adhoc network of in-situ sensors in CenWits.\nCenWits employs several important mechanisms to address the key problem of resource constraints (low signal strength, low power and limited memory) in sensors.\nIn particular, it makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate 
information to a processing center.\nThe problem of low signal strengths (short range RF communication) is addressed by avoiding a need for maintaining a connected network.\nInstead, CenWits propagates the location information of sensors using the concept of witnesses through an intermittently connected network.\nAs a result, this system can be deployed in remote wilderness areas, as well as in large urban areas with skyscrapers and other tall structures.\nAlso, this makes CenWits cost-effective.\nA subject only needs to wear light-weight and low-cost sensors that have GPS receivers but no expensive GSM or satellite transmitters.\nFurthermore, since there is no need for a connected sensor network, there is no need to deploy sensors in very large numbers.\nThe problem of limited battery life and limited memory of a sensor is addressed by incorporating the concepts of groups and partitions.\nGroups and partitions allow sensors to stay in sleep or receive modes most of the time.\nUsing groups and partitions, the location information collected by a sensor can be distributed among several sensors, thereby reducing the amount of memory needed in one sensor to store that information.\nIn fact, CenWits provides an adaptive tradeoff between memory and power consumption of sensors.\nEach sensor can dynamically adjust its power and memory consumption based on its remaining power or available memory.\nIt has amply been noted that the strength of sensor networks comes from the fact that several sensor nodes can be distributed over a relatively large area to construct a multihop network.\nThis paper demonstrates that important large-scale applications can be built using sensors by judiciously integrating the storage, communication and computation capabilities of sensors.\nThe paper describes important techniques to combine memory, transmission and battery power of many sensors to address resource constraints in the context of a search and rescue application.\nHowever, these 
techniques are quite general.\nWe discuss several other sensor-based applications that can employ these techniques.\nWhile CenWits addresses the general location tracking and reporting problem in a wide-area network, there are two important differences from the earlier work done in this area.\nFirst, unlike earlier location tracking solutions, CenWits does not require a connected network.\nSecond, unlike earlier location tracking solutions, CenWits does not aim for a very high accuracy of localization.\nInstead, the main goal is to provide an approximate, small area where search and rescue efforts can be concentrated.\nThe rest of this paper is organized as follows.\nIn Section 2, we overview some of the recent projects and technologies related to movement and location tracking, and search and rescue systems.\nIn Section 3, we describe the overall architecture of CenWits, and provide a high-level description of its functionality.\nIn the next section, Section 4, we discuss power and memory management in CenWits.\nTo simplify our presentation, we will focus on a specific application of tracking lost\/injured hikers in all these sections.\nIn Section 6, we describe a prototype implementation of CenWits and present performance measured from this implementation.\nWe discuss how the ideas of CenWits can be used to build several other applications in Section 7.\nFinally, in Section 8, we discuss some related issues and conclude the paper.\n2.\nRELATED WORK\nA survey of location systems for ubiquitous computing is provided in [11].\nA location tracking system for adhoc sensor networks using anchor sensors as reference to gain location information and spread it out to outer node is proposed in [17].\nMost location tracking systems in adhoc sensor networks are for benefiting geographic-aware routing.\nThey don't fit well for our purposes.\nThe well-known active badge system [19] lets a user carry a badge around.\nAn infrared sensor in the room can detect the presence of a 
badge and determine the location and identification of the person.\nThis is a useful system for indoor environment, where GPS doesn't work.\nLocationing using 802.11 devices is probably the cheapest solution for indoor position tracking [8].\nBecause of the popularity and low cost of 802.11 devices, several business solutions based on this technology have been developed [1].\nA system that combines two mature technologies and is viable in suburban area where a user can see clear sky and has GSM cellular reception at the same time is currently available [5].\nThis system receives GPS signal from a satellite and locates itself, draws location on a map, and sends location information through GSM network to the others who are interested in the user's location.\nA very simple system to monitor children consists an RF transmitter and a receiver.\nThe system alarms the holder of the receiver when the transmitter is about to run out of range [6].\nPersonal Locater Beacons (PLB) has been used for avalanche rescuing for years.\nA skier carries an RF transmitter that emits beacons periodically, so that a rescue team can find his\/her location based on the strength of the RF signal.\nLuxury version of PLB combines a GPS receiver and a COSPASSARSAT satellite transmitter that can transmit user's location in latitude and longitude to the rescue team whenever an accident happens [4].\nHowever, the device either is turned on all the time resulting in fast battery drain, or must be turned on after the accident to function.\nAnother related technology in widespread use today is the ONSTAR system [3], typically used in several luxury cars.\nIn this system, a GPS unit provides position information, and a powerful transmitter relays that information via satellite to a customer service center.\nDesigned for emergencies, the system can be triggered either by the user with the push of a button, or by a catastrophic accident.\nOnce the system has been triggered, a human representative 
attempts to gain communication with the user via a cell phone built as an incar device.\nIf contact cannot be made, emergency services are dispatched to the location provided by GPS.\nLike PLBs, this system has several limitations.\nFirst, it is heavy and expensive.\nIt requires a satellite transmitter and a connected network.\nIf connectivity with either the GPS network or a communication satellite cannot be maintained, the system fails.\nUnfortunately, these are common obstacles encountered in deep canyons, narrow streets in large cities, parking garages, and a number of other places.\nThe Lifetch system uses CPS receiver board combined with a CSM\/CPRS transmitter and an RF transmitter in one wireless sensor node called Intelligent Communication Unit (ICU).\nAn ICU first attempts to transmit its location to a control center through CSM\/CPRS network.\nIf that fails, it connects with other ICUs (adhoc network) to forward its location information until the information reaches an ICU that has CSM\/CPRS reception.\nThis ICU then transmits the location information of the original ICU via the CSM\/CPRS network.\nZebraNet is a system designed to study the moving patterns of zebras [13].\nIt utilizes two protocols: History-based protocol and flooding protocol.\nHistory-based protocol is used when the zebras are grazing and not moving around too much.\nWhile this might be useful for tracking zebras, it's not suitable for tracking hikers because two hikers are most likely to meet each other only once on a trail.\nIn the flooding protocol, a node dumps its data to a neighbor whenever it finds one and doesn't delete its own copy until it finds a base station.\nWithout considering routing loops, packet filtering and grouping, the size of data on a node will grow exponentially and drain the power and memory of a sensor node with in a short time.\nInstead, Cenwits uses a four-phase hand-shake protocol to ensure that a node transmits only as much information as the other node 
is willing to receive.\nWhile ZebraNet is designed for a big group of sensors moving together in the same direction with same speed, Cenwits is designed to be used in the scenario where sensors move in different directions at different speeds.\nDelay tolerant network architecture addresses some important problems in challenged (resource-constrained) networks [9].\nWhile this work is mainly concerned with interoperability of challenged networks, some problems related to occasionally-connected networks are similar to the ones we have addressed in CenWits.\nAmong all these systems, luxury PLB and Lifetch are designed for location tracking in wilderness areas.\nHowever, both of these systems require a connected network.\nLuxury PLB requires the user to transmit a signal to a satellite, while Lifetch requires connection to CSM\/CPRS network.\nLuxury PLB transmits location information, only when an accident happens.\nHowever, if the user is buried in the snow or falls into a deep canyon, there is almost no chance for the signal to go through and be relayed to the rescue team.\nThis is because satellite transmission needs line of sight.\nFurthermore, since there is no known history of user's location, it is not possible for the rescue team to infer the current location of the user.\nAnother disadvantage of luxury PLB is that a satellite transmitter is very expensive, costing in the range of $750.\nLifetch attempts to transmit the location information by CSM\/CPRS and adhoc sensor network that uses AODV as the routing protocol.\nHowever, having a cellular reception in remote areas in wilderness areas, e.g. 
American national parks is unlikely.\nFurthermore, it is extremely unlikely that ICUs worn by hikers will be able to form an adhoc network in a large wilderness area.\nThis is because the hikers are mobile and it is very unlikely to have several ICUs placed dense enough to forward packets even on a very popular hike route.\nCenWits is designed to address the limitations of systems such as luxury PLB and Lifetch.\nIt is designed to provide hikers, skiers, and climbers who have their activities mainly in wilderness areas a much higher chance to convey their location information to a control center.\nIt is not reliant upon constant connectivity with any communication medium.\nRather, it communicates information along from user to user, finally arriving at a control center.\nUnlike several of the systems discussed so far, it does not require that a user's unit is constantly turned on.\nIn fact, it can discover a victim's location, even if the victim's sensor was off at the time of accident and has remained off since then.\nCenWits solves one of the greatest problems plaguing modern search and rescue systems: it has an inherent on-site storage capability.\nThis means someone within the network will have access to the last-known-location information of a victim, and perhaps his bearing and speed information as well.\nFigure 1: Hiker A and Hiker B are are not in the range of each other\n3.\nCENWITS\n3.1 Witness Information: Storage\n3.2 Location Point and Location Inference\n4.\nMEMORY AND POWER MANAGEMENT\n4.1 Memory Management\n4.2 Power Management\n4.3 Groups and Partitions\n5.\nSYSTEM EVALUATION\n6.\nPROTOTYPE IMPLEMENTATION\nWe have implemented a prototype of CenWits on MICA2 sensor 900MHz running Mantis OS 0.9.1 b.\nOne of the sensor is equipped with MTS420CA GPS module, which is capable of barometric pressure and two-axis acceleration sensing in addition to GPS location tracking.\nWe use SiRF, the serial communication protocol, to control GPS module.\nSiRF has a 
rich command set, but we record only X and Y coordinates.\nA witness record is 16 bytes long.\nWhen a node starts up, it stores its current location and emits a beacon periodically--in the prototype, a node emits a beacon every minute.\nWe have conducted a number of experiments with this prototype.\nA detailed report on these experiments with the raw data collected and photographs of hikers, access points etc. is available at http:\/\/csel.cs.colorado.edu\/\ue18ahuangjh\/ Cenwits.index.htm.\nHere we report results from three of them.\nIn all these experiments, there are three access points (A, B and C) where nodes dump their witness information.\nThese access points also provide location information to the nodes that come with in their range.\nWe first show how CenWits can be used to determine the hiking trail a hiker is most likely on and the speed at which he is hiking, and identify hot search areas in case he is reported missing.\nNext, we show the results of power and memory management techniques of CenWits in conserving power and memory of a sensor node in one of our experiments.\n6.1 Locating Lost Hikers\nThe first experiment is called Direct Contact.\nIt is a very simple experiment in which a single hiker starts from A, goes to B and then C, and finally returns to A (See Figure 13).\nThe goal of this experiment is to illustrate that CenWits can deduce the trail a hiker takes by processing witness information.\nFigure 13: Direct Contact Experiment\nTable 1: Witness information collected in the direct contact experiment.\nThe witness information dumped at the three access points was then collected and processed at a control center.\nPart of the witness information collected at the control center is shown in Table 1.\nThe X, Y locations in this table correspond to the location information provided by access points A, B, and C.\nA is located at (12,7), B is located at (31,17) and C is located at (12,23).\nThree encounter points (between hiker 1 and the three 
access points) extracted from\nthis witness information are shown in Figure 13 (shown in rectangular boxes).\nFor example, A,1 at 16 means 1 came in contact with A at time 16.\nUsing this information, we can infer the direction in which hiker 1 was moving and speed at which he was moving.\nFurthermore, given a map of hiking trails in this area, it is clearly possible to identify the hiking trail that hiker 1 took.\nThe second experiment is called Indirect Inference.\nThis experiment is designed to illustrate that the location, direction and speed of a hiker can be inferred by CenWits, even if the hiker never comes in the range of any access point.\nIt illustrates the importance of witness information in search and rescue applications.\nIn this experiment, there are three\nhikers, 1, 2 and 3.\nHiker 1 takes a trail that goes along access points A and B, while hiker 3 takes trail that goes along access points C and B. Hiker 2 takes a trail that does not come in the range of any access points.\nHowever, this hiker meets hiker 1 and 3 during his hike.\nThis is illustrated in Figure 14.\nFigure 14: Indirect Inference Experiment\nTable 2: Witness information collected from hiker 1\nin indirect inference experiment.\nPart of the witness information collected at the control center from access points A, B and C is shown in Tables 2 and 3.\nThere are some interesting data in these tables.\nFor example, the location time in some witness records is not the same as the record time.\nThis means that the node that generated that record did not have its most up-to-date location at the encounter time.\nFor example, when hikers 1 and 2 meet at time 16, the last recorded location time of\nTable 3: Witness information collected from hiker 3 in indirect inference experiment.\nhiker 1 is (12,7) recorded at time 6.\nSo, node 1 generates a witness record with record time 16, location (12,7) and location time 6.\nIn fact, the last two records in Table 3 have (?\n,?)\nas their 
location.\nThis has happened because these witness records were generate by hiker 2 during his encounter with 1 at time 15 and 16.\nUntil this time, hiker 2 hadn't come in contact with any location points.\nInterestingly, a more accurate location information of 1 and 2 encounter or 2 and 3 encounter can be computed by process the witness information at the control center.\nIt took 25 units of time for hiker 1 to go from A (12,7) to B (31,17).\nAssuming a constant hiking speed and a relatively straight-line hike, it can be computed that at time 16, hiker 1 must have been at location (18,10).\nThus (18,10) is a more accurate location of encounter between 1 and 2.\nFinally, our third experiment called Identifying Hot Search Areas is designed to determine the trail a hiker has taken and identify hot search areas for rescue after he is reported missing.\nThere are six hikers (1, 2, 3, 4, 5 and 6) in this experiment.\nFigure 15 shows the trails that hikers 1, 2, 3, 4 and 5 took, along with the encounter points obtained from witness records collected at the control center.\nFor brevity, we have not shown the entire witness information collected at the control center.\nThis information is available at http:\/\/csel.cs.colorado.edu\/\ue18ahuangjh\/Cenwits\/index.htm.\nFigure 15: Identifying Hot Search Area Experiment (without hiker 6)\nNow suppose hiker 6 is reported missing at time 260.\nTo determine the hot search areas, the witness records of hiker 6 are processed to determine the trail he is most likely on, the speed at which he had been moving, direction in which he had been moving, and his last known location.\nBased on this information and the hiking trail map, hot search areas are identified.\nThe hiking trail taken by hiker 6 as inferred by CenWits is shown by a dotted line and the hot search areas identified by CenWits are shown by dark lines inside the dotted circle in Figure 16.\nFigure 16: Identifying Hot Search Area Experiment (with hiker 6)\n6.2 Results of 
6.2 Results of Power and Memory Management

The witness information shown in Tables 1, 2 and 3 has not been filtered using the three criteria described in Section 4.1. For example, the witness records generated by node 3 at record times 76, 78 and 79 (see Table 3) have all been generated due to a single contact between access point C and node 3. By applying the record gap criterion, two of these three records will be erased. Similarly, the witness records generated by node 1 at record times 10, 15 and 16 (see Table 1) have all been generated due to a single contact between access point A and node 1. Again, by applying the record gap criterion, two of these three records will be erased.

Our experiments did not generate enough data to test the impact of the record count or hop count criteria. To evaluate the impact of these criteria, we simulated CenWits to generate a large number of records for a given number of hikers and access points. We generated witness records by having the hikers walk randomly, and applied the three criteria to measure the amount of memory saved in a sensor node. The results are shown in Table 4. The number of hikers in this simulation was 10 and the number of access points was 5. The number of witness records reported in this table is the average number of witness records a sensor node stored at the time of a dump to an access point.

Table 4: Impact of memory management techniques.

These results show that the three memory management criteria significantly reduce the memory consumption of sensor nodes in CenWits; for example, they can reduce memory consumption by up to 75%. However, these results are premature at present for two reasons: (1) they are generated via a simulation of hikers walking at random; and (2) it is not clear what impact the erasing of witness records has on the accuracy of the inferred location/hot search areas of lost hikers. In our future work, we plan to undertake a major study to address these two concerns.

CenWits: A Sensor-Based Loosely Coupled Search and Rescue System Using Witnesses

University of Colorado, Campus Box 0430 Boulder, CO 80309-0430

ABSTRACT

This paper describes the design, implementation and evaluation of a search and rescue system called CenWits. CenWits uses several small, commonly-available RF-based sensors, and a small number of storage and processing devices. It is designed for search and rescue of people in emergency situations in wilderness areas. A key feature of CenWits is that it does not require a
continuously connected sensor network for its operation. It is designed for an intermittently connected network that provides only occasional connectivity. It makes judicious use of the combined storage capability of sensors to filter, organize and store important information, the combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate information to a processing center. A prototype of CenWits has been implemented using Berkeley Mica2 motes. The paper describes this implementation and reports on the performance measured from it.

1. INTRODUCTION

Search and rescue of people in emergency situations in a timely manner is an extremely important service. It has been difficult to provide such a service due to a lack of timely information needed to determine the current location of a person who may be in an emergency situation. With the emergence of pervasive computing, several systems [12, 19, 1, 5, 6, 4, 11] have been developed over the last few years that make use of small devices such as cell phones, sensors, etc. All these systems require a connected network via satellites, GSM base stations, or mobile devices. This requirement severely limits their applicability, particularly in remote wilderness areas where maintaining a connected network is very difficult. For example, a GSM transmitter has to be in the range of a base station to transmit. As a result, it cannot operate in most wilderness areas. While a satellite transmitter is the only viable solution in wilderness areas, it is typically expensive and cumbersome. Furthermore, a line of sight is required to transmit to a satellite, and that makes it infeasible to stay connected in narrow canyons, large cities with skyscrapers, rain forests, or even when there is a roof or some other obstruction above the transmitter, e.g.
in a car.

An RF transmitter has a relatively small range of transmission. So, while an in-situ sensor is cheap as a single unit, it is expensive to build a large network that can provide connectivity over a large wilderness area. In a mobile environment where sensors are carried by moving people, power-efficient routing is difficult to implement and maintain over a large wilderness area. In fact, building an ad hoc sensor network using only the sensors worn by hikers is nearly impossible due to the relatively small number of sensors spread over a large wilderness area.

In this paper, we describe the design, implementation and evaluation of a search and rescue system called CenWits (Connection-less Sensor-Based Tracking System Using Witnesses). CenWits comprises mobile, in-situ sensors that are worn by subjects (people, wild animals, or inanimate objects), access points (AP) that collect information from these sensors, and GPS receivers and location points (LP) that provide location information to the sensors. A subject uses GPS receivers (when it can connect to a satellite) and LPs to determine its current location. The key idea of CenWits is that it uses the concept of witnesses to convey a subject's movement and location information to the outside world. This averts the need for maintaining a connected network to transmit location information to the outside world. In particular, there is no need for expensive GSM or satellite transmitters, or for maintaining an ad hoc network of in-situ sensors, in CenWits.

CenWits employs several important mechanisms to address the key problem of resource constraints (low signal strength, low power and limited memory) in sensors. In particular, it makes judicious use of the combined storage capability of sensors to filter, organize and store important information, the combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate
information to a processing center.

The problem of low signal strengths (short-range RF communication) is addressed by avoiding the need for maintaining a connected network. Instead, CenWits propagates the location information of sensors using the concept of witnesses through an intermittently connected network. As a result, this system can be deployed in remote wilderness areas, as well as in large urban areas with skyscrapers and other tall structures. Also, this makes CenWits cost-effective. A subject only needs to wear lightweight and low-cost sensors that have GPS receivers but no expensive GSM or satellite transmitters. Furthermore, since there is no need for a connected sensor network, there is no need to deploy sensors in very large numbers.

The problem of limited battery life and limited memory of a sensor is addressed by incorporating the concepts of groups and partitions. Groups and partitions allow sensors to stay in sleep or receive modes most of the time. Using groups and partitions, the location information collected by a sensor can be distributed among several sensors, thereby reducing the amount of memory needed in one sensor to store that information. In fact, CenWits provides an adaptive tradeoff between the memory and power consumption of sensors. Each sensor can dynamically adjust its power and memory consumption based on its remaining power or available memory.

It has been amply noted that the strength of sensor networks comes from the fact that several sensor nodes can be distributed over a relatively large area to construct a multihop network. This paper demonstrates that important large-scale applications can be built using sensors by judiciously integrating the storage, communication and computation capabilities of sensors. The paper describes important techniques to combine the memory, transmission and battery power of many sensors to address resource constraints in the context of a search and rescue application. However, these
techniques are quite general. We discuss several other sensor-based applications that can employ these techniques.

While CenWits addresses the general location tracking and reporting problem in a wide-area network, there are two important differences from the earlier work done in this area. First, unlike earlier location tracking solutions, CenWits does not require a connected network. Second, unlike earlier location tracking solutions, CenWits does not aim for a very high accuracy of localization. Instead, the main goal is to provide an approximate, small area where search and rescue efforts can be concentrated.

The rest of this paper is organized as follows. In Section 2, we overview some of the recent projects and technologies related to movement and location tracking, and search and rescue systems. In Section 3, we describe the overall architecture of CenWits and provide a high-level description of its functionality. In the next section, Section 4, we discuss power and memory management in CenWits. To simplify our presentation, we focus on a specific application of tracking lost/injured hikers in all these sections. In Section 6, we describe a prototype implementation of CenWits and present performance measured from this implementation. We discuss how the ideas of CenWits can be used to build several other applications in Section 7. Finally, in Section 8, we discuss some related issues and conclude the paper.

2. RELATED WORK

A survey of location systems for ubiquitous computing is provided in [11]. A location tracking system for ad hoc sensor networks that uses anchor sensors as references to gain location information and spread it out to outer nodes is proposed in [17]. Most location tracking systems in ad hoc sensor networks are designed to benefit geography-aware routing. They do not fit our purposes well. The well-known active badge system [19] lets a user carry a badge around. An infrared sensor in the room can detect the presence of a
badge and determine the location and identification of the person. This is a useful system for indoor environments, where GPS does not work. Locationing using 802.11 devices is probably the cheapest solution for indoor position tracking [8]. Because of the popularity and low cost of 802.11 devices, several business solutions based on this technology have been developed [1].

A system that combines two mature technologies, and that is viable in suburban areas where a user can see a clear sky and has GSM cellular reception at the same time, is currently available [5]. This system receives a GPS signal from a satellite to locate itself, draws its location on a map, and sends the location information through the GSM network to others who are interested in the user's location. A very simple system to monitor children consists of an RF transmitter and a receiver. The system alerts the holder of the receiver when the transmitter is about to run out of range [6].

Personal Locator Beacons (PLBs) have been used for avalanche rescue for years. A skier carries an RF transmitter that emits beacons periodically, so that a rescue team can find his/her location based on the strength of the RF signal. A luxury version of the PLB combines a GPS receiver and a COSPAS-SARSAT satellite transmitter that can transmit the user's location in latitude and longitude to the rescue team whenever an accident happens [4]. However, the device either is turned on all the time, resulting in fast battery drain, or must be turned on after the accident to function.

Another related technology in widespread use today is the ONSTAR system [3], typically used in several luxury cars. In this system, a GPS unit provides position information, and a powerful transmitter relays that information via satellite to a customer service center. Designed for emergencies, the system can be triggered either by the user with the push of a button, or by a catastrophic accident. Once the system has been triggered, a human representative
attempts to gain communication with the user via a cell phone built in as an in-car device. If contact cannot be made, emergency services are dispatched to the location provided by GPS. Like PLBs, this system has several limitations. First, it is heavy and expensive. It requires a satellite transmitter and a connected network. If connectivity with either the GPS network or a communication satellite cannot be maintained, the system fails. Unfortunately, these are common obstacles encountered in deep canyons, narrow streets in large cities, parking garages, and a number of other places.

The Lifetch system uses a GPS receiver board combined with a GSM/GPRS transmitter and an RF transmitter in one wireless sensor node called an Intelligent Communication Unit (ICU). An ICU first attempts to transmit its location to a control center through the GSM/GPRS network. If that fails, it connects with other ICUs (an ad hoc network) to forward its location information until the information reaches an ICU that has GSM/GPRS reception. This ICU then transmits the location information of the original ICU via the GSM/GPRS network.

ZebraNet is a system designed to study the moving patterns of zebras [13]. It utilizes two protocols: a history-based protocol and a flooding protocol. The history-based protocol is used when the zebras are grazing and not moving around too much. While this might be useful for tracking zebras, it is not suitable for tracking hikers, because two hikers are most likely to meet each other only once on a trail. In the flooding protocol, a node dumps its data to a neighbor whenever it finds one, and does not delete its own copy until it finds a base station. Without routing loop avoidance, packet filtering and grouping, the amount of data on a node will grow exponentially and drain the power and memory of a sensor node within a short time. Instead, CenWits uses a four-phase handshake protocol to ensure that a node transmits only as much information as the other node is willing to receive.
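The intent of such a handshake can be sketched as follows; the four phases shown (offer, grant, transfer, acknowledge) are an illustrative assumption about the message structure, not the prototype's actual packet format:

```python
def handshake(sender_records, receiver_free_slots):
    """Sketch of a four-phase exchange in which the sender transmits only
    as many witness records as the receiver is willing to accept."""
    # Phase 1: sender advertises how many records it has to offer.
    offered = len(sender_records)
    # Phase 2: receiver grants a quota based on its free memory.
    granted = min(offered, receiver_free_slots)
    # Phase 3: sender transmits exactly the granted number of records.
    transmitted = sender_records[:granted]
    # Phase 4: receiver acknowledges what it actually stored.
    ack = len(transmitted)
    return transmitted, ack
```

Because the grant in phase 2 is bounded by the receiver's free memory, a node with scarce memory or power never has data flooded at it, in contrast to ZebraNet's flooding protocol.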
is willing to receive.\nWhile ZebraNet is designed for a big group of sensors moving together in the same direction at the same speed, CenWits is designed to be used in scenarios where sensors move in different directions at different speeds.\nThe delay-tolerant network architecture addresses some important problems in challenged (resource-constrained) networks [9].\nWhile this work is mainly concerned with interoperability of challenged networks, some problems related to occasionally-connected networks are similar to the ones we have addressed in CenWits.\nAmong all these systems, the luxury PLB and Lifetch are designed for location tracking in wilderness areas.\nHowever, both of these systems require a connected network.\nThe luxury PLB requires the user to transmit a signal to a satellite, while Lifetch requires a connection to the GSM\/GPRS network.\nThe luxury PLB transmits location information only when an accident happens.\nHowever, if the user is buried in the snow or falls into a deep canyon, there is almost no chance for the signal to go through and be relayed to the rescue team.\nThis is because satellite transmission needs line of sight.\nFurthermore, since there is no known history of the user's location, it is not possible for the rescue team to infer the current location of the user.\nAnother disadvantage of the luxury PLB is that a satellite transmitter is very expensive, costing in the range of $750.\nLifetch attempts to transmit the location information via GSM\/GPRS and an ad hoc sensor network that uses AODV as the routing protocol.\nHowever, having cellular reception in remote wilderness areas, e.g. 
American national parks, is unlikely.\nFurthermore, it is extremely unlikely that ICUs worn by hikers will be able to form an ad hoc network in a large wilderness area.\nThis is because the hikers are mobile and it is very unlikely to have several ICUs placed densely enough to forward packets even on a very popular hiking route.\nCenWits is designed to address the limitations of systems such as the luxury PLB and Lifetch.\nIt is designed to provide hikers, skiers, and climbers whose activities are mainly in wilderness areas a much higher chance of conveying their location information to a control center.\nIt is not reliant upon constant connectivity with any communication medium.\nRather, it passes information along from user to user, finally arriving at a control center.\nUnlike several of the systems discussed so far, it does not require that a user's unit be constantly turned on.\nIn fact, it can discover a victim's location even if the victim's sensor was off at the time of the accident and has remained off since then.\nCenWits solves one of the greatest problems plaguing modern search and rescue systems: it has an inherent on-site storage capability.\nThis means someone within the network will have access to the last-known-location information of a victim, and perhaps his bearing and speed information as well.\nFigure 1: Hiker A and Hiker B are not in range of each other\n3.\nCENWITS\nWe describe CenWits in the context of locating lost\/injured hikers in wilderness areas.\nEach hiker wears a sensor (MICA2 motes in our prototype) equipped with a GPS receiver and an RF transmitter.\nEach sensor is assigned a unique ID and maintains its current location based on the signal received by its GPS receiver.\nIt also emits beacons periodically.\nWhen any two sensors are in range of one another, they record the presence of each other (witness information), and also exchange the witness information they recorded earlier.\nThe key idea here is that if two sensors
come within range of each other at any time, they become each other's witnesses.\nLater on, if the hiker wearing one of these sensors is lost, the other sensor can convey the last known (witnessed) location of the lost hiker.\nFurthermore, by exchanging the witness information that each sensor recorded earlier, the witness information is propagated beyond a direct contact between two sensors.\nTo convey witness information to a processing center or to a rescue team, access points are established at well-known locations that the hikers are expected to pass through, e.g. at trail heads, trail ends, intersections of different trails, scenic view points, resting areas, and so on.\nWhenever a sensor node is in the vicinity of an access point, all witness information stored in that sensor is automatically dumped to the access point.\nAccess points are connected to a processing center via satellite or some other network.\n(A connection is needed only between access points and a processing center; there is no need for any connection between different access points.)\nThe witness information is downloaded to the processing center from various access points at regular intervals.\nIn case the connection to an access point is lost, the information from that access point can be downloaded manually, e.g. 
by UAVs.\nTo estimate the speed, location and direction of a hiker at any point in time, all witness information of that hiker that has been collected from various access points is processed.\nFigure 2: Hiker A and Hiker B are in range of each other.\nA records the presence of B and B records the presence of A.\nA and B become each other's witnesses.\nFigure 3: Hiker A is in the range of an access point.\nIt uploads its recorded witness information and clears its memory.\nAn example of how CenWits operates is illustrated in Figures 1, 2 and 3.\nFirst, hikers A and B are on two close trails, but out of range of each other (Figure 1).\nThis is a very common scenario during a hike.\nFor example, on a popular four-hour hike, a hiker might run into as many as 20 other hikers.\nThis amounts to one encounter every 12 minutes on average.\nA slow hiker can go 1 mile (5,280 feet) per hour.\nThus in 12 minutes a slow hiker can go as far as 1,056 feet.\nThis implies that if we were to distribute 20 hikers evenly over a 4-hour, one-way hike, the range of each sensor node would have to be at least 1,056 feet for them to communicate with one another continuously.\nThe signal strength between two Mica2 nodes starts dropping rapidly when they are 180 feet apart, and communication is completely lost when they are 230 feet away from each other [7].\nSo, for the sensors to form a sensor network on a 4-hour hiking trail, there would have to be at least 120 hikers scattered evenly.\nClearly, this is extremely unlikely.\nIn fact, on a less popular 4-hour hiking trail, one might run into only, say, five other hikers.\nCenWits takes advantage of the fact that sensors can communicate with one another and record their presence.\nGiven a walking speed of one mile per hour (88 feet per minute) and a Mica2 range of about 150 feet for non-line-of-sight radio transmission, two hikers have about 150\/88 = 1.7 minutes to discover the presence of each other and exchange their witness information.\nWe
therefore design our system to have each sensor emit a beacon every one and a half minutes.\nIn Figure 2, hiker B's sensor emits a beacon when A is in range; this triggers A to exchange data with B.\nA communicates the following information to B: "My ID is A; I saw C at 1:23 PM at (39° 49.3277655', 105° 39.1126776'); I saw E at 3:09 PM at (40° 49.2234879', 105° 20.3290168')".\nB then replies with "My ID is B; I saw K at 11:20 AM at (39° 51.4531655', 105° 41.6776223')".\nIn addition, A records "I saw B at 4:17 PM at (41° 29.3177354', 105° 04.9106211')" and B records "I saw A at 4:17 PM at (41° 29.3177354', 105° 04.9106211')".\nB goes on his way to overnight camping while A heads back to the trail head where there is an AP, which emits a beacon every 5 seconds to avoid missing any hiker.\nA dumps all witness information it has collected to the access point.\nThis is shown in Figure 3.\n3.1 Witness Information: Storage\nA critical concern is that there is a limited amount of memory available on motes (4 KB SDRAM memory, 128 KB flash memory, and 4-512 KB EEPROM).\nSo, it is important to organize witness information efficiently.\nCenWits stores witness information at each node as a set of witness records (the format is shown in Figure 4).\nFigure 4: Format of a witness record.\nWhen two nodes i and j encounter each other, each node generates a new witness record.\nIn the witness record generated by i, Node ID is j, Record Time is the current time in i's clock, (X, Y) are the coordinates of the location of i that i recorded most recently (either from a satellite or an LP), Location Time is the time when this location was recorded, and Hop Count is 0.\nEach node is assigned a unique Node ID when it enters a trail.\nIn our current prototype, we have allocated one byte for Node ID, although this can be increased to two or more bytes if a large number of hikers are expected to be present at the same time.\nWe can
represent time in 17 bits to one-second precision.\nSo, we have allocated 3 bytes each for Record Time and Location Time.\nThe circumference of the Earth is approximately 40,075 KM.\nIf we use a 32-bit number to represent both longitude and latitude, the precision we get is 40,075,000 \/ 2³² ≈ 0.0093 meters = 0.37 inches, which is quite precise for our needs.\nSo, we have allocated 4 bytes each for the X and Y coordinates of the location of a node.\nIn fact, one-foot precision can be achieved by using only 27 bits.\n3.2 Location Point and Location Inference\nAlthough a GPS receiver provides accurate location information, it has its limitations.\nIn canyons and rainy forests, a GPS receiver does not work.\nWhen there is a heavy cloud cover, GPS users have experienced inaccuracy in the reported location as well.\nUnfortunately, a lot of hiking trails are in dense forests and canyons, and it is not uncommon for rain to start after hikers set out.\nTo address this, CenWits incorporates the idea of location points (LPs).\nA location point can update a sensor node with its current location whenever the node is near that LP.\nLPs are placed at different locations in a wilderness area where GPS receivers don't work.\nAn LP is a very simple device that emits prerecorded location information at some regular time interval.\nIt can be placed in difficult-to-reach places such as deep canyons and dense rain forests by simply dropping it from an airplane.\nLPs allow a sensor node to determine its current location more accurately.\nHowever, they are not an essential requirement of CenWits.\nIf an LP runs out of power, CenWits will continue to work correctly.\nFigure 5: GPS receiver not working correctly.\nSensors then have to rely on LPs to provide coordinates.\nIn Figure 5, B cannot get GPS reception due to bad weather.\nIt then runs into A on the trail, who doesn't have GPS reception either.\nTheir sensors record the presence of each other.\nAfter 10 minutes, A is in range of an LP
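As a sketch, the 16-byte layout described above (1-byte Node ID, 3-byte Record Time, 4-byte X and Y, 3-byte Location Time, 1-byte Hop Count) can be packed as follows; the field order and helper names are illustrative assumptions, not taken from the prototype:

```python
def pack_witness_record(node_id, record_time, x, y, location_time, hop_count):
    """Pack one 16-byte witness record.

    node_id:       1 byte  (unique id assigned at the trail head)
    record_time:   3 bytes (seconds; 17 bits give one-second precision)
    x, y:          4 bytes each (32-bit fixed-point coordinates,
                   ~0.0093 m precision over the Earth's circumference)
    location_time: 3 bytes
    hop_count:     1 byte  (incremented on every transfer)
    """
    return (node_id.to_bytes(1, "big")
            + record_time.to_bytes(3, "big")
            + x.to_bytes(4, "big")
            + y.to_bytes(4, "big")
            + location_time.to_bytes(3, "big")
            + hop_count.to_bytes(1, "big"))

def unpack_witness_record(rec):
    """Inverse of pack_witness_record."""
    f = int.from_bytes
    return (f(rec[0:1], "big"), f(rec[1:4], "big"), f(rec[4:8], "big"),
            f(rec[8:12], "big"), f(rec[12:15], "big"), f(rec[15:16], "big"))

rec = pack_witness_record(7, 54000, 1234567890, 987654321, 53940, 0)
assert len(rec) == 16
```

The round trip through `unpack_witness_record` recovers the original fields, which is the property the exchange protocol relies on.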
that provides accurate location information to A.\nWhen A returns to the trail head and uploads its data (Figure 6), the system can draw a circle centered at the LP from which A fetched location information, bounding the possible locations of the encounter between A and B. By overlapping this circle with the trail map, two or three possible locations of the encounter can be inferred.\nThus when a rescue is required, the possible location of B can be better inferred (see Figures 7 and 8).\nFigure 6: A is back at the trail head. It reports the time of its encounter with B to the AP, but no location information.\nFigure 7: B is still missing after sunset.\nCenWits infers the last contact point and draws the circle of possible current locations based on average hiking speed.\nFigure 8: Based on the overlapping landscape, B might have hiked to the wrong branch and fallen off a cliff.\nHot rescue areas can thus be determined.\nCenWits requires that the clocks of different sensor nodes be loosely synchronized with one another.\nSuch synchronization is trivial when GPS coverage is available.\nIn addition, sensor nodes in CenWits synchronize their clocks whenever they are in the range of an AP or an LP.\nThe synchronization accuracy CenWits needs is of the order of a second or so.\nAs long as the clocks are synchronized to within a one-second range, whether A met B at 12:37'45" or 12:37'46" does not matter for ordering the witness events and inferring the path.\n4.\nMEMORY AND POWER MANAGEMENT\nCenWits employs several important mechanisms to conserve power and memory.\nIt is important to note that while current sensor nodes have a limited amount of memory, future sensor nodes are expected to have much more.\nWith this in mind, the main focus in our design is to provide a tradeoff between the amount of memory available and the amount of power consumed.\n4.1 Memory Management\nThe size of the witness information stored at a node can get very large.\nThis is because the node may come across several other nodes during a
hike, and may end up accumulating a large amount of witness information over time.\nTo address this problem, CenWits allows a node to pro-actively free up some parts of its memory periodically.\nThis raises an interesting question: when should a witness record be deleted from the memory of a node, and which one?\nCenWits uses three criteria to determine this: record count, hop count, and record gap.\nRecord count refers to the number of witness records with the same node id that a node has stored in its memory.\nA node maintains an integer parameter MAX RECORD COUNT.\nIt stores at most MAX RECORD COUNT witness records of any node.\nEvery witness record has a hop count field that stores the number of times (hops) this record has been transferred since being created.\nInitially this field is set to 0.\nWhenever a node receives a witness record from another node, it increments the hop count of that record by 1.\nA node maintains an integer parameter called MAX HOP COUNT.\nIt keeps only those witness records in its memory whose hop count is less than MAX HOP COUNT.\nThe MAX HOP COUNT parameter provides a balance between two conflicting goals: (1) to ensure that a witness record has been propagated to and thus stored at as many nodes as possible, so that it has a high probability of being dumped at some AP as quickly as possible; and (2) to ensure that a witness record is stored only at a few nodes, so that it does not clog up too much of the combined memory of all sensor nodes.\nWe chose to use hop count instead of time-to-live to decide when to drop a record.\nThe main reason for this is that the probability of a record reaching an AP goes up as the hop count adds up.\nFor example, when the hop count is 5 for a specific record, the record is in at least 5 sensor nodes.\nOn the other hand, if we discard old records without considering hop count, there is no guarantee that the record is present in any other sensor node.\nRecord gap refers to the time difference between the
record times of two witness records with the same node id.\nTo save memory, a node n ensures that the record gap between any two witness records with the same node id is at least MIN RECORD GAP.\nFor each node id i, n stores the witness record with the most recent record time rti, the witness record with the most recent record time that is at least MIN RECORD GAP time units before rti, and so on until the record count limit (MAX RECORD COUNT) is reached.\nWhen a node is low on memory, it adjusts the three parameters, MAX RECORD COUNT, MAX HOP COUNT and MIN RECORD GAP, to free up some memory.\nIt decrements MAX RECORD COUNT and MAX HOP COUNT, and increments MIN RECORD GAP.\nIt then first erases all witness records whose hop count exceeds the reduced MAX HOP COUNT value, and then erases witness records to satisfy the record gap criterion.\nAlso, when a node has extra memory space available, e.g. after dumping its witness information at an access point, it resets MAX RECORD COUNT, MAX HOP COUNT and MIN RECORD GAP to some predefined values.\n4.2 Power Management\nAn important advantage of using sensors for tracking purposes is that we can regulate the behavior of a sensor node based on current conditions.\nFor example, we mentioned earlier that a sensor should emit a beacon every 1.7 minutes, given a hiking speed of 1 mile\/hour.\nHowever, if a user is moving at 10 feet\/sec, a beacon should be emitted every 10 seconds.\nIf a user is not moving at all, a beacon can be emitted every 10 minutes.\nAt night, when a user is not likely to move at all for a relatively long period of time, a sensor can be put into sleep mode to save energy.\nIf a user is active for only eight hours in a day, we can put the sensor into sleep mode for the other 16 hours and thus save two-thirds of the energy.\nIn addition, a sensor node can choose not to send any beacons during some time intervals.\nFor example, suppose hiker A has communicated its witness information to three other hikers in the last five
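The three pruning criteria above (record count, hop count, record gap) can be sketched as follows; the tuple representation and parameter values are assumptions for illustration only:

```python
def prune(records, max_record_count, max_hop_count, min_record_gap):
    """Prune a list of (node_id, record_time, hop_count) witness tuples.

    Per node id, keeps the most recent record, then earlier records
    spaced at least min_record_gap apart, up to max_record_count total.
    Records whose hop count reaches max_hop_count are dropped first.
    """
    by_node = {}
    for nid, t, hops in records:
        if hops < max_hop_count:                    # hop-count criterion
            by_node.setdefault(nid, []).append((t, hops))
    kept = []
    for nid, recs in by_node.items():
        recs.sort(reverse=True)                     # most recent first
        chosen = []
        for t, hops in recs:
            if len(chosen) == max_record_count:     # record-count criterion
                break
            if not chosen or chosen[-1][0] - t >= min_record_gap:  # gap criterion
                chosen.append((t, hops))
        kept += [(nid, t, h) for t, h in chosen]
    return kept

recs = [(1, 100, 0), (1, 150, 1), (1, 400, 2), (2, 300, 5)]
# With MAX HOP COUNT = 5 the record of node 2 is dropped; with
# MIN RECORD GAP = 100 the record at t=150 shadows the one at t=100.
assert sorted(prune(recs, 2, 5, 100)) == [(1, 150, 1), (1, 400, 2)]
```

When memory runs low, a node would call this with decremented count limits and an incremented gap, as described above.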
minutes.\nIf it is running low on power, it can go to receive mode or sleep mode for the next ten minutes.\nIt goes to receive mode if it is still willing to receive additional witness information from hikers that it encounters in the next ten minutes.\nIt goes to sleep mode if it is extremely low on power.\nThe bandwidth and energy limitations of sensor nodes require that the amount of data transferred among the nodes be reduced to a minimum.\nIt has been observed that in some scenarios 3,000 instructions can be executed for the same energy cost as sending a bit 100 m by radio [15].\nTo reduce the amount of data transfer, CenWits employs a handshake protocol that two nodes execute when they encounter one another.\nThe goal of this protocol is to ensure that a node transmits only as much witness information as the other node is willing to receive.\nThis protocol is initiated when a node i receives a beacon containing the node ID of the sender node j and i has not exchanged witness information with j in the last S time units.\nAssume that i < j.\nThe protocol consists of four phases (see Figure 9):\n1.\nPhase I: Node i sends its receive constraints and the number of witness records it has in its memory.\n2.\nPhase II: On receiving this message from i, j sends its receive constraints and the number of witness records it has in its memory.\n3.\nPhase III: On receiving the above message from j, i sends its witness information (filtered based on the receive constraints received in phase II).\n4.\nPhase IV: After receiving the witness records from i, j sends its witness information (filtered based on the receive constraints received in phase I).\nFigure 9: Four-Phase Handshake Protocol (i < j)\nReceive constraints are a function of memory and power.\nIn the most general case, they comprise the three parameters (record count, hop count and record gap) used for memory management.\nIf i is low on memory, it specifies the maximum number of records it is willing to accept from
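The four phases can be sketched as follows; the in-memory message structures and the simple constraint filter (maximum record count and maximum hop count) are illustrative assumptions, not the prototype's wire format:

```python
def filter_records(records, constraints):
    """Apply a peer's receive constraints: drop records with too many
    hops, then truncate to the number of records the peer will accept."""
    ok = [r for r in records if r["hops"] <= constraints["max_hops"]]
    return ok[:constraints["max_records"]]

def handshake(node_i, node_j):
    """Four-phase exchange between nodes i and j (i < j).

    Phase I:   i -> j: i's receive constraints (and record count).
    Phase II:  j -> i: j's receive constraints (and record count).
    Phase III: i -> j: i's records, filtered by j's constraints.
    Phase IV:  j -> i: j's records, filtered by i's constraints.
    """
    ci, cj = node_i["constraints"], node_j["constraints"]   # Phases I and II
    from_i = filter_records(node_i["store"], cj)            # Phase III
    from_j = filter_records(node_j["store"], ci)            # Phase IV
    node_j["store"] = node_j["store"] + from_i
    node_i["store"] = node_i["store"] + from_j

i = {"constraints": {"max_records": 10, "max_hops": 2},
     "store": [{"id": "C", "hops": 0}, {"id": "E", "hops": 3}]}
j = {"constraints": {"max_records": 1, "max_hops": 5},
     "store": [{"id": "K", "hops": 1}]}
handshake(i, j)
assert [r["id"] for r in j["store"]] == ["K", "C"]  # j accepted at most 1 record
```

Each side sends only what the other is willing to store, which is the protocol's stated goal of saving both memory and transmission energy.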
j. Similarly, i can ask j to send only those records that have a hop count value less than MAX HOP COUNT − 1.\nFinally, i can include its MIN RECORD GAP value in its receive constraints.\nNote that the handshake protocol is beneficial to both i and j.\nThey save memory by receiving only as much information as they are willing to accept, and conserve energy by sending only as many witness records as needed.\nIt turns out that filtering witness records based on MIN RECORD GAP is complex.\nIt requires that the witness records of any given node be arranged in an order sorted by their record time values.\nMaintaining this sorted order in memory is complex, because new witness records with the same node id can arrive later and may have to be inserted in between to preserve the sorted order.\nFor this reason, the receive constraints in the current CenWits prototype do not include record gap.\nSuppose i specifies a hop count value of 3.\nIn this case, j checks the hop count field of every witness record before sending it.\nIf the hop count value is greater than 3, the record is not transmitted.\n4.3 Groups and Partitions\nTo further reduce communication and increase the lifetime of our system, we introduce the notion of groups.\nThe idea is based on the concept of abstract regions presented in [20].\nA group is a set of n nodes that can be defined in terms of radio connectivity, geographic location, or other properties of nodes.\nAll nodes within a group can communicate directly with one another and they share information to maintain their view of the external world.\nAt any point in time, a group has exactly one leader that communicates with external nodes on behalf of the entire group.\nA group can be static, meaning that the group membership does not change over time, or it can be dynamic, in which case nodes can leave or join the group.\nTo make our analysis simple and to explain the advantages of groups, we first discuss static
groups.\nA static group is formed at the start of a hiking trail or ski slope.\nSuppose there are five family members who want to go for a hike in the Rocky Mountain National Park.\nBefore these members start their hike, each one of them is given a sensor node, and the information that the five nodes form a group is entered into the system.\nEach group member is given a unique id and every group member knows about the other members of the group.\nThe group, as a whole, is also assigned an id to distinguish it from other groups in the system.\nFigure 10: A group of five people.\nNode 2 is the group leader and it is communicating on behalf of the group with an external node 17.\nAll other nodes (shown in a lighter shade) are in sleep mode.\nAs the group moves through the trail, it exchanges information with other nodes or groups that it comes across.\nAt any point in time, only one group member, called the leader, sends and receives information on behalf of the group, and all other n − 1 group members are put in sleep mode (see Figure 10).\nIt is this property of groups that saves us energy.\nGroup leadership is time-multiplexed among the group members.\nThis is done to make sure that a single node does not run out of battery due to continuous exchange of information.\nThus after every t seconds, the leadership is passed on to another node, called the successor, and the leader (now an ordinary member) is put to sleep.\nSince energy is dear, we do not implement an extensive election algorithm for choosing the successor.\nInstead, we choose the successor on the basis of node id.\nThe node with the next highest id in the group is chosen as the successor.\nThe last node, of course, chooses the node with the lowest id as its successor.\nWe now discuss the data storage schemes for groups.\nMemory is a scarce resource in sensor nodes and it is therefore important that witness information be stored efficiently among group members.\nEfficient data storage is not a trivial task
when it comes to groups.\nThe tradeoff is between the simplicity of the scheme and its memory savings.\nA simpler scheme incurs a lower energy cost than a more sophisticated scheme, but offers smaller memory savings as well.\nThis is because in a more complicated scheme, the group members have to coordinate to update and store information.\nAfter considering a number of different schemes, we have concluded that there is no optimal storage scheme for groups.\nThe system should be able to adapt according to its requirements.\nIf group members are low on battery, then the group can adopt a scheme that is more energy efficient.\nSimilarly, if the group members are running out of memory, they can adopt a scheme that is more memory efficient.\nWe first present a simple scheme that is very energy efficient but does not offer significant memory savings.\nWe then present an alternate scheme that is much more memory efficient.\nAs already mentioned, a group can receive information only through the group leader.\nWhenever the leader comes across an external node e, it receives information from that node and saves it.\nIn our first scheme, when the timeslot for the leader expires, the leader passes the new information it received from e to its successor.\nThis is important because during the next time slot, if the new leader comes across another external node, it should be able to pass on information about all the external nodes this group has witnessed so far.\nThus the information is fully replicated on all nodes to maintain a correct view of the world.\nOur first scheme does not offer any memory savings but is highly energy efficient, and may be a good choice when the group members are running low on battery.\nExcept for the time when the leadership is switched, all n − 1 members are asleep at any given time.\nThis means that a single member is up for t seconds once every n * t seconds and therefore has to spend approximately only 1\/nth of its energy.\nThus,
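Leadership rotation by node id, as described above, amounts to stepping through the sorted member ids and wrapping around; a minimal sketch (the member ids are made up):

```python
def successor(member_ids, current_leader):
    """Return the next group leader: the member with the next-highest id,
    wrapping from the highest id back to the lowest."""
    ids = sorted(member_ids)
    return ids[(ids.index(current_leader) + 1) % len(ids)]

members = [2, 5, 9, 11, 14]
assert successor(members, 2) == 5
assert successor(members, 14) == 2   # the last node wraps to the lowest id
# With n members, each node is awake as leader only ~1/n of the time:
# for n = 5, a node is up 20% of the time, saving roughly 80% energy.
```

Avoiding an election protocol here is deliberate: the successor is a pure function of the (static) membership, so no messages need to be spent choosing it.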
if there are 5 members in a group, we save 80% energy, which is huge.\nMore energy can be saved by increasing the group size.\nWe now present an alternate data storage scheme that aims at saving memory at the cost of energy.\nIn this scheme we divide the group into what we call partitions.\nPartitions can be thought of as subgroups within a group.\nEach partition must have at least two nodes in it.\nThe nodes within a partition are called peers.\nEach partition has one peer designated as the partition leader.\nThe partition leader stays in receive mode at all times, while all other peers in the partition stay in sleep mode.\nPartition leadership is time-multiplexed among the peers to make sure that a single node does not run out of battery.\nLike before, a group has exactly one leader and the leadership is time-multiplexed among partitions.\nThe group leader also serves as the partition leader for the partition it belongs to (see Figure 11).\nIn this scheme, all partition leaders participate in information exchange.\nWhenever a group comes across an external node e, every partition leader receives all witness information, but it stores only a subset of that information after filtering.\nInformation is filtered in such a way that each partition leader has to store only B\/K bytes of data, where K is the number of partitions and B is the total number of bytes received from e. Similarly, when a group wants to send witness information to e, each partition leader sends only the B\/K bytes that are stored in the partition it belongs to.\nHowever, before a partition leader can send information, it must switch from receive mode to send mode.\nAlso, partition leaders must coordinate with one another to ensure that they do not send their witness information at the same time, i.e. 
their messages do not collide.\nAll this is achieved by having the group leader send a signal to every partition leader in turn.\nFigure 11: The figure shows a group of eight nodes divided into four partitions of 2 nodes each.\nNode 1 is the group leader whereas nodes 2, 9, and 7 are partition leaders.\nAll other nodes are in sleep mode.\nSince the partition leadership is time-multiplexed, it is important that any information received by the partition leader, P1, be passed on to the next leader, P2.\nThis has to be done to make sure that P2 has all the information that it might need to send when it comes across another external node during its timeslot.\nOne way of achieving this is to wake P2 up just before P1's timeslot expires and then have P1 transfer information only to P2.\nAn alternative is to wake all the peers up at the time of leadership change, and then have P1 broadcast the information to all peers.\nEach peer saves the information sent by P1 and then goes back to sleep.\nIn both cases, the peers send an acknowledgement to the partition leader after receiving the information.\nIn the former method, only one node needs to wake up at the time of leadership change, but the amount of information that has to be transmitted between the nodes increases as time passes.\nIn the latter case, all nodes have to be woken up at the time of leadership change, but only a small piece of information has to be transmitted each time among the peers.\nSince communication is much more expensive than bringing the nodes up, we prefer the second method over the first one.\nA group can be divided into partitions in more than one way.\nFor example, suppose we have a group of six members.\nWe can divide this group into three partitions of two peers each, or two partitions with three peers each.\nThe choice once again depends on the requirements of the system.\nA few big partitions will make the system more energy efficient.\nThis is because in this configuration, a greater number of nodes
will stay in sleep mode at any given point in time.\nOn the other hand, several small partitions will make the system memory efficient, since each node will have to store less information (see Figure 12).\nA group that is divided into partitions must be able to readjust itself when a node leaves or runs out of battery.\nThis is crucial because a partition must have at least two nodes at any point in time to tolerate the failure of one node.\nFor example, in Figure 12(a), if node 2 or node 5 dies, the partition is left with only one node.\nLater on, if that single node in the partition dies, all witness information stored in that partition will be lost.\nWe have devised a very simple protocol to solve this problem.\nFigure 12: The figure shows two different ways of partitioning a group of six nodes.\nIn (a), a group is divided into three partitions of two nodes.\nNode 1 is the group leader, nodes 9 and 5 are partition leaders, and nodes 2, 3, and 6 are in sleep mode.\nIn (b) the group is divided into two partitions of three nodes.\nNode 1 is the group leader, node 9 is the partition leader and nodes 2, 3, 5, and 6 are in sleep mode.\nWe first explain how partitions are adjusted when a peer dies, and then explain what happens if a partition leader dies.\nSuppose node 2 in Figure 12(a) dies.\nWhen node 5, the partition leader, sends information to node 2, it does not receive an acknowledgement and concludes that node 2 has died.\nAt this point, node 5 contacts the other partition leaders (nodes 1 and 9) using a broadcast message and informs them that one of its peers has died.\nUpon hearing this, each partition leader informs node 5 of (i) the number of nodes in its partition, (ii) a candidate node that node 5 can take if the number of nodes in its partition is greater than 2, and (iii) the amount of witness information stored in its partition.\nUpon hearing from every leader, node 5 chooses the candidate node from the partition with the maximum number (must be greater than
2) of peers, and sends a message back to all leaders.\nNode 5 then sends data to its new peer to make sure that the information is replicated within the partition.\nHowever, if all partitions have exactly two nodes, then node 5 must join another partition.\nIt chooses to join the partition that has the least amount of witness information.\nIt sends its witness information to the new partition leader.\nWitness information and membership updates are propagated to all peers during the next partition leadership change.\nWe now consider the case where the partition leader dies.\nIf this happens, then we wait for the partition leadership to change and for the new partition leader to eventually find out that a peer has died.\nOnce the new partition leader finds out that it needs more peers, it proceeds with the protocol explained above.\nHowever, in this case, we do lose the information that the previous partition leader might have received just before it died.\nThis problem could be solved by implementing a more rigorous protocol, but we have decided to give up some accuracy to save energy.\nOur current design uses time-division multiplexing to schedule wakeup and sleep modes in the sensor nodes.\nHowever, recent work on radio wakeup sensors [10] can be used to do this scheduling more efficiently.\nWe plan to incorporate radio wakeup sensors in CenWits when the hardware is mature.\n(The algorithm to conclude that a node has died can be made more rigorous by having the partition leader query the suspected node a few times.)\n5.\nSYSTEM EVALUATION\nA sensor is constrained in the amount of memory and power.\nIn general, the amount of memory needed and the power consumption depend on a variety of factors such as node density, number of hiker encounters, and the number of access points.\nIn this section, we provide an estimate of how long the power of a MICA2 mote will last under certain assumptions.\nFirst, we assume that each sensor node carries about 100 witness records.\nOn encountering
another hiker, a sensor node transmits 50 witness records and receives 50 new witness records.\nSince each record is 16 bytes long, it will take 0.34 seconds to transmit 50 records and another 0.34 seconds to receive 50 records over a 19200 bps link.\nThe current drawn by a MICA2 due to CPU processing, transmission and reception is approximately 8.0 mA, 7.0 mA and 8.5 mA respectively [18], and the capacity of an alkaline battery is 2500 mAh.\nSince the radio module of the MICA2 is half-duplex and assuming that the CPU is always active when a node is awake, the current drawn during transmission is 8 + 7 = 15 mA and during reception is 8 + 8.5 = 16.5 mA.\nSo, the average current drawn during transmission and reception is (15 + 16.5) \/ 2 = 15.75 mA.\nGiven that the capacity of an alkaline battery is 2500 mAh, a battery should last for 2500\/15.75 = 159 hours of transmission and reception.\nAn encounter between two hikers results in an exchange of about 50 witness records that takes about 0.68 seconds as calculated above.\nThus, a single alkaline battery can last for (159 * 60 * 60) \/ 0.68 = 841764 hiker encounters.\nAssuming that a node emits a beacon every 90 seconds and a hiker encounter occurs every time a beacon is emitted (worst-case scenario), a single alkaline battery will last for (841764 * 90) \/ (30 * 24 * 60 * 60) = 29 days.\nSince a MICA2 is equipped with two batteries, a MICA2 sensor can remain operational for about two months.\nNotice that this calculation is preliminary, because it assumes that hikers are active 24 hours a day and a hiker encounter occurs every 90 seconds.\nIn a more realistic scenario, power is expected to last for a much longer time period.\nAlso, this time period will significantly increase when groups of hikers are moving together.\nFinally, the lifetime of a sensor running on two batteries can definitely be increased significantly by using energy scavenging and energy harvesting 
techniques [16, 14].\n6.\nPROTOTYPE IMPLEMENTATION\nWe have implemented a prototype of CenWits on MICA2 sensors (900 MHz) running Mantis OS 0.9.1b.\nOne of the sensors is equipped with an MTS420CA GPS module, which is capable of barometric pressure and two-axis acceleration sensing in addition to GPS location tracking.\nWe use SiRF, the serial communication protocol, to control the GPS module.\nSiRF has a rich command set, but we record only X and Y coordinates.\nA witness record is 16 bytes long.\nWhen a node starts up, it stores its current location and emits a beacon periodically--in the prototype, a node emits a beacon every minute.\nWe have conducted a number of experiments with this prototype.\nA detailed report on these experiments, with the raw data collected and photographs of hikers, access points, etc., is available at http:\/\/csel.cs.colorado.edu\/~huangjh\/Cenwits\/index.htm.\nHere we report results from three of them.\nIn all these experiments, there are three access points (A, B and C) where nodes dump their witness information.\nThese access points also provide location information to the nodes that come within their range.\nWe first show how CenWits can be used to determine the hiking trail a hiker is most likely on and the speed at which he is hiking, and to identify hot search areas in case he is reported missing.\nNext, we show how the power and memory management techniques of CenWits conserve the power and memory of a sensor node in one of our experiments.\n6.1 Locating Lost Hikers\nThe first experiment is called Direct Contact.\nIt is a very simple experiment in which a single hiker starts from A, goes to B and then C, and finally returns to A (see Figure 13).\nThe goal of this experiment is to illustrate that CenWits can deduce the trail a hiker takes by processing witness information.\nFigure 13: Direct Contact Experiment\nTable 1: Witness information collected in the direct contact experiment.\nThe witness information dumped at the three 
access points was then collected and processed at a control center.\nPart of the witness information collected at the control center is shown in Table 1.\nThe X, Y locations in this table correspond to the location information provided by access points A, B, and C.\nA is located at (12,7), B is located at (31,17) and C is located at (12,23).\nThree encounter points (between hiker 1 and the three access points) extracted from this witness information are shown in Figure 13 (shown in rectangular boxes).\nFor example, A,1 at 16 means 1 came in contact with A at time 16.\nUsing this information, we can infer the direction in which hiker 1 was moving and the speed at which he was moving.\nFurthermore, given a map of hiking trails in this area, it is clearly possible to identify the hiking trail that hiker 1 took.\nThe second experiment is called Indirect Inference.\nThis experiment is designed to illustrate that the location, direction and speed of a hiker can be inferred by CenWits, even if the hiker never comes within the range of any access point.\nIt illustrates the importance of witness information in search and rescue applications.\nIn this experiment, there are three hikers: 1, 2 and 3.\nHiker 1 takes a trail that goes along access points A and B, while hiker 3 takes a trail that goes along access points C and B. 
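The direction-and-speed inference described above reduces to elementary geometry on two timestamped fixes. The following sketch is our own illustration in Python, not part of the CenWits implementation: the helper names are hypothetical, and the time origin of the A encounter is assumed to be 0. It computes speed and heading from the A (12,7) and B (31,17) fixes, taken 25 time units apart, and interpolates an intermediate position under a constant-speed, straight-line assumption:

```python
import math

def infer_motion(p1, t1, p2, t2):
    """Given two timestamped fixes, return the average speed (distance
    units per time unit) and heading (degrees, counter-clockwise from
    the positive x-axis)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    speed = math.hypot(dx, dy) / (t2 - t1)
    heading = math.degrees(math.atan2(dy, dx))
    return speed, heading

def interpolate(p1, t1, p2, t2, t):
    """Estimate the position at time t, assuming constant speed along a
    straight line between the two fixes."""
    f = (t - t1) / (t2 - t1)
    return (p1[0] + f * (p2[0] - p1[0]), p1[1] + f * (p2[1] - p1[1]))

# Hiker 1 was seen at A (12,7) and, 25 time units later, at B (31,17).
speed, heading = infer_motion((12, 7), 0, (31, 17), 25)
print(round(speed, 2), round(heading, 1))  # about 0.86 units/time, 27.8 degrees
```

The same interpolation step is what lets a control center refine encounter locations between access-point sightings, given assumed departure and arrival times at the fixes.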
Hiker 2 takes a trail that does not come within the range of any access points.\nHowever, this hiker meets hikers 1 and 3 during his hike.\nThis is illustrated in Figure 14.\nFigure 14: Indirect Inference Experiment\nTable 2: Witness information collected from hiker 1 in the indirect inference experiment.\nPart of the witness information collected at the control center from access points A, B and C is shown in Tables 2 and 3.\nThere are some interesting data in these tables.\nFor example, the location time in some witness records is not the same as the record time.\nThis means that the node that generated that record did not have its most up-to-date location at the encounter time.\nFor example, when hikers 1 and 2 meet at time 16, the last recorded location of hiker 1 is (12,7), recorded at time 6.\nTable 3: Witness information collected from hiker 3 in the indirect inference experiment.\nSo, node 1 generates a witness record with record time 16, location (12,7) and location time 6.\nIn fact, the last two records in Table 3 have (?,?) as their location.\nThis has happened because these witness records were generated by hiker 2 during his encounters with hiker 1 at times 15 and 16.\nUntil this time, hiker 2 hadn't come in contact with any access points.\nInterestingly, more accurate location information for the encounter between 1 and 2, or between 2 and 3, can be computed by processing the witness information at the control center.\nIt took 25 units of time for hiker 1 to go from A (12,7) to B (31,17).\nAssuming a constant hiking speed and a relatively straight-line hike, it can be computed that at time 16, hiker 1 must have been at location (18,10).\nThus (18,10) is a more accurate location of the encounter between 1 and 2.\nFinally, our third experiment, called Identifying Hot Search Areas, is designed to determine the trail a hiker has taken and identify hot search areas for rescue after he is reported missing.\nThere are six hikers (1, 2, 3, 4, 5 and 6) in this experiment.\nFigure 15 shows 
the trails that hikers 1, 2, 3, 4 and 5 took, along with the encounter points obtained from witness records collected at the control center.\nFor brevity, we have not shown the entire witness information collected at the control center.\nThis information is available at http:\/\/csel.cs.colorado.edu\/~huangjh\/Cenwits\/index.htm.\nFigure 15: Identifying Hot Search Area Experiment (without hiker 6)\nNow suppose hiker 6 is reported missing at time 260.\nTo determine the hot search areas, the witness records of hiker 6 are processed to determine the trail he is most likely on, the speed and direction in which he had been moving, and his last known location.\nBased on this information and the hiking trail map, hot search areas are identified.\nThe hiking trail taken by hiker 6 as inferred by CenWits is shown by a dotted line and the hot search areas identified by CenWits are shown by dark lines inside the dotted circle in Figure 16.\nFigure 16: Identifying Hot Search Area Experiment (with hiker 6)\n6.2 Results of Power and Memory Management\nThe witness information shown in Tables 1, 2 and 3 has not been filtered using the three criteria described in Section 4.1.\nFor example, the witness records generated by 3 at record times 76, 78 and 79 (see Table 3) have all been generated due to a single contact between access point C and node 3.\nBy applying the record gap criterion, two of these three records will be erased.\nSimilarly, the witness records generated by 1 at record times 10, 15 and 16 (see Table 1) have all been generated due to a single contact between access point A and node 1.\nAgain, by applying the record gap criterion, two of these three records will be erased.\nOur experiments did not generate enough data to test the impact of the record count or hop count criteria.\nTo evaluate the impact of these criteria, we simulated CenWits to generate a significantly large number of records for a given number of hikers and access points.\nWe 
generated witness records by having the hikers walk randomly.\nWe applied the three criteria to measure the amount of memory savings in a sensor node.\nThe results are shown in Table 4.\nThe number of hikers in this simulation was 10 and the number of access points was 5.\nThe number of witness records reported in this table is the average number of witness records a sensor node stored at the time of a dump to an access point.\nThese results show that the three memory management criteria significantly reduce the memory consumption of sensor nodes in CenWits.\nFor example, they can reduce the memory consumption by up to 75%.\nTable 4: Impact of memory management techniques.\nHowever, these results are preliminary at present for two reasons: (1) they are generated via simulation of hikers walking at random; and (2) it is not clear what impact the erasing of witness records has on the accuracy of inferred location\/hot search areas of lost hikers.\nIn our future work, we plan to undertake a major study to address these two concerns.","keyphrases":["wit","search and rescu","emerg situat","sensor network","connect network","intermitt network connect","pervas comput","satellit transmitt","group and partit","locat track system","hiker","gp receiv","rf transmitt","beacon"],"prmu":["P","P","P","P","P","P","U","U","M","M","U","U","U","U"]} {"id":"C-78","title":"An Architectural Framework and a Middleware for Cooperating Smart Components","abstract":"In a future networked physical world, a myriad of smart sensors and actuators assess and control aspects of their environments and autonomously act in response to them. Examples range from telematics, traffic management and team robotics to home automation, to name a few. To a large extent, such systems operate proactively and independently of direct human control, driven by the perception of the environment and the ability to organize respective computations dynamically. 
The challenging characteristics of these applications include sentience and autonomy of components, issues of responsiveness and safety criticality, geographical dispersion, mobility and evolution. A crucial design decision is the choice of the appropriate abstractions and interaction mechanisms. Looking at the basic building blocks of such systems, we may find components which comprise mechanical parts, hardware, software and a network interface; these components thus have different characteristics compared to pure software components. They are able to spontaneously disseminate information in response to events observed in the physical environment or to events received from other components via the network interface. Larger autonomous components may be composed recursively from these building blocks. The paper describes an architectural framework and a middleware supporting a component-based system and an integrated view on event-based communication comprising the real-world events and the events generated in the system. It starts with an outline of the component-based system construction. The generic event architecture GEAR is introduced, which describes the event-based interaction between the components via a generic event layer. The generic event layer hides the different communication channels, including the interactions through the environment. An appropriate middleware is presented which reflects these needs and allows events to be specified with quality attributes that express temporal constraints. This is complemented by the notion of event channels, which are abstractions of the underlying network and allow quality attributes to be enforced. 
They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination.","lvl-1":"An Architectural Framework and a Middleware for Cooperating Smart Components \u2217 Ant\u00f3nio Casimiro U.Lisboa casim@di.fc.ul.pt J\u00f6rg Kaiser U.Ulm kaiser@informatik.uni-ulm.de Paulo Ver\u00edssimo U.Lisboa pjv@di.fc.ul.pt ABSTRACT In a future networked physical world, a myriad of smart sensors and actuators assess and control aspects of their environments and autonomously act in response to them.\nExamples range from telematics, traffic management and team robotics to home automation, to name a few.\nTo a large extent, such systems operate proactively and independently of direct human control, driven by the perception of the environment and the ability to organize respective computations dynamically.\nThe challenging characteristics of these applications include sentience and autonomy of components, issues of responsiveness and safety criticality, geographical dispersion, mobility and evolution.\nA crucial design decision is the choice of the appropriate abstractions and interaction mechanisms.\nLooking at the basic building blocks of such systems, we may find components which comprise mechanical parts, hardware, software and a network interface; these components thus have different characteristics compared to pure software components.\nThey are able to spontaneously disseminate information in response to events observed in the physical environment or to events received from other components via the network interface.\nLarger autonomous components may be composed recursively from these building blocks.\nThe paper describes an architectural framework and a middleware supporting a component-based system and an integrated view on event-based communication comprising the real-world events and the events generated in the system.\nIt starts with an outline of the component-based system construction.\nThe 
generic event architecture GEAR is introduced, which describes the event-based interaction between the components via a generic event layer.\nThe generic event layer hides the different communication channels, including the interactions through the environment.\n\u2217This work was partially supported by the EC, through project IST-2000-26031 (CORTEX), and by the FCT, through the Large-Scale Informatic Systems Laboratory (LaSIGE) and project POSI\/1999\/CHS\/33996 (DEFEATS).\nAn appropriate middleware is presented which reflects these needs and allows events to be specified with quality attributes that express temporal constraints.\nThis is complemented by the notion of event channels, which are abstractions of the underlying network and allow quality attributes to be enforced.\nThey are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems-Distributed applications; C.3 [Special-Purpose and Application-Based Systems]: Real-Time and embedded systems General Terms Design 1.\nINTRODUCTION In recent years we have seen the continuous improvement of technologies that are relevant for the construction of distributed embedded systems, including trustworthy visual, auditory, and location sensing [11], communication and processing.\nWe believe that in a future networked physical world a new class of applications will emerge, composed of a myriad of smart sensors and actuators that assess and control aspects of their environments and autonomously act in response to them.\nThe anticipated challenging characteristics of these applications include autonomy, responsiveness and safety criticality, large scale, geographical dispersion, mobility and evolution.\nIn order to deal with these challenges, it is of fundamental importance to use adequate high-level models, abstractions and interaction 
paradigms.\nUnfortunately, when facing the specific characteristics of the target systems, the shortcomings of current architectures and middleware interaction paradigms become apparent.\nLooking at the basic building blocks of such systems, we may find components which comprise mechanical parts, hardware, software and a network interface.\nHowever, classical event\/object models are usually software oriented and, as such, when transported to a real-time, embedded systems setting, their harmony is cluttered by the conflict between, on the one side, send\/receive of software events (message-based), and on the other side, input\/output of hardware or real-world events (register-based).\nIn terms of interaction paradigms, and although the use of event-based models appears to be a convenient solution [10, 22], these often lack the appropriate support for non-functional requirements like reliability, timeliness or security.\nThis paper describes an architectural framework and a middleware, supporting a component-based system and an integrated view on event-based communication comprising the real-world events and the events generated in the system.\nWhen choosing the appropriate interaction paradigm, it is of fundamental importance to address the challenging issues of the envisaged sentient applications.\nUnlike classical approaches that confine the possible interactions to the application boundaries, i.e. 
to its components, we consider that the environment surrounding the application also plays a relevant role in this respect.\nTherefore, the paper starts by clarifying several issues concerning our view of the system, about the interactions that may take place and about the information flows.\nThis view is complemented by providing an outline of the component-based system construction and, in particular, by showing that it is possible to compose larger applications from basic components, following a hierarchical composition approach.\nThis provides the necessary background to introduce the Generic-Events Architecture (GEAR), which describes the event-based interaction between the components via a generic event layer while allowing the seamless integration of physical and computer information flows.\nIn fact, the generic event layer hides the different communication channels, including the interactions through the environment.\nAdditionally, the event layer abstraction is also adequate for the proper handling of the non-functional requirements, namely reliability and timeliness, which are particularly stringent in real-time settings.\nThe paper devotes particular attention to this issue by discussing the temporal aspects of interactions and the needs for predictability.\nAn appropriate middleware is presented which reflects these needs and allows events to be specified with quality attributes that express temporal constraints.\nThis is complemented by the notion of Event Channels (EC), which are abstractions of the underlying network while being abstracted by the event layer.\nIn fact, event channels play a fundamental role in securing the functional and non-functional (e.g. 
reliability and timeliness) properties of the envisaged applications, that is, in allowing the enforcement of quality attributes.\nThey are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination.\nThe paper is organized as follows.\nIn Section 3 we introduce the fundamental notions and abstractions that we adopt in this work to describe the interactions taking place in the system.\nThen, in Section 4, we describe the component-based approach that allows composition of objects.\nGEAR is then described in Section 5 and Section 6 focuses on temporal aspects of the interactions.\nSection 7 describes the COSMIC middleware, which may be used to specify the interaction between sentient objects.\nA simple example to highlight the ideas presented in the paper appears in Section 8 and Section 9 concludes the paper.\n2.\nRELATED WORK Our work considers a wired physical world in which a very large number of autonomous components cooperate.\nIt is inspired by many research efforts in very different areas.\nEvent-based systems in general have been introduced to meet the requirements of applications in which entities spontaneously generate information and disseminate it [1, 25, 22].\nIntended for large systems and requiring quite complex infrastructures, these event systems do not consider stringent quality aspects like timeliness and dependability issues.\nSecondly, they are not created to support inter-operability between tiny smart devices with substantial resource constraints.\nIn [10] a real-time event system for CORBA has been introduced.\nThe events are routed via a central event server which provides scheduling functions to support the real-time requirements.\nSuch a central component is not available in the infrastructure envisaged in our system architecture, and the developed middleware TAO (The ACE ORB) is quite complex and unsuitable for direct integration in smart devices.\nThere are 
efforts to implement CORBA for control networks, tailored to connect sensor and actuator components [15, 19].\nThey are targeted at the CAN-Bus [9], a popular network developed for the automotive industry.\nHowever, in these approaches the support for timeliness or dependability issues does not exist or is only very limited.\nA new scheme to integrate smart devices in a CORBA environment is proposed in [17] and has led to the proposal of a standard by the Object Management Group (OMG) [26].\nSmart transducers are organized in clusters that are connected to a CORBA system by a gateway.\nThe clusters form isolated subnetworks.\nA special master node enforces the temporal properties in the cluster subnet.\nA CORBA gateway allows sensor data to be accessed and actuator data to be written by means of an interface file system (IFS).\nThe basic structure is similar to the WAN-of-CANs structure which has been introduced in the CORTEX project [4].\nIslands of tight control may be realized by a control network and cooperate via wired or wireless networks covering a large number of these subnetworks.\nHowever, in contrast to the event channel model introduced in this paper, all communication inside a cluster relies on a single technical solution of a synchronous communication channel.\nSecondly, although the temporal behaviour of a single cluster is rigorously defined, no model to specify temporal properties for cluster-to-CORBA or cluster-to-cluster interactions is provided.\n3.\nINFORMATION FLOW AND INTERACTION MODEL In this paper we consider a component-based system model that incorporates previous work developed in the context of the IST CORTEX project [5].\nAs mentioned above, a fundamental idea underlying the approach is that applications can be composed of a large number of smart components that are able to sense their surrounding environment and interact with it.\nThese components are referred to as sentient objects, a metaphor elaborated in CORTEX and inspired by the generic 
concept of sentient computing introduced in [12].\nSentient objects accept input events from a variety of different sources (including sensors, but not constrained to that), process them, and produce output events, whereby they actuate on the environment and\/or interact with other objects.\nTherefore, the following kinds of interactions can take place in the system: Environment-to-object interactions: correspond to a flow of information from the environment to application objects, reporting about the state of the former, and\/or notifying about events taking place therein.\nObject-to-object interactions: correspond to a flow of information among sentient objects, serving two purposes.\nThe first is related to complementing the assessment of each individual object about the state of the surrounding space.\nThe second is related to collaboration, in which the object tries to influence other objects into contributing to a common goal, or into reacting to an unexpected situation.\nObject-to-environment interactions: correspond to a flow of information from an object to the environment, with the purpose of forcing a change in the state of the latter.\nBefore continuing, we need to clarify a few issues with respect to these possible forms of interaction.\nWe consider that the environment can be a producer or consumer of information while interacting with sentient objects.\nThe environment is the real (physical) world surrounding an object, not necessarily close to the object or limited to certain boundaries.\nQuite clearly, the information produced by the environment corresponds to the physical representation of real-time entities, of which typical examples include temperature, distance or the state of a door.\nOn the other hand, actuation on the environment implies the manipulation of these real-time entities, like increasing the temperature (applying more heat), changing the distance (applying some movement) or changing the state of the door (closing or opening 
it).\nThe required transformations between system representations of these real-time entities and their physical representations is accomplished, generically, by sensors and actuators.\nWe further consider that there may exist dumb sensors and actuators, which interact with the objects by disseminating or capturing raw transducer information, and smart sensors and actuators, with enhanced processing capabilities, capable of speaking some more elaborate event dialect (see Sections 5 and 6.1).\nInteraction with the environment is therefore done through sensors and actuators, which may, or may not be part of sentient objects, as discussed in Section 4.2.\nState or state changes in the environment are considered as events, captured by sensors (in the environment or within sentient objects) and further disseminated to other potentially interested sentient objects in the system.\nIn consequence, it is quite natural to base the communication and interaction among sentient objects and with the environment on an event-based communication model.\nMoreover, typical properties of event-based models, such as anonymous and non-blocking communication, are highly desirable in systems where sentient objects can be mobile and where interactions are naturally very dynamic.\nA distinguishing aspect of our work from many of the existing approaches, is that we consider that sentient objects may indirectly communicate with each other through the environment, when they act on it.\nThus the environment constitutes an interaction and communication channel and is in the control and awareness loop of the objects.\nIn other words, when a sentient object actuates on the environment it will be able to observe the state changes in the environment by means of events captured by the sensors.\nClearly, other objects might as well capture the same events, thus establishing the above-mentioned indirect communication path.\nIn systems that involve interactions with the environment it is very important 
to consider the possibility of communication through the environment.\nIt has been shown that the hidden channels developing through the latter (e.g., feedback loops) may hinder software-based algorithms that ignore them [30].\nTherefore, any solution to the problem requires the definition of convenient abstractions and appropriate architectural constructs.\nOn the other hand, in order to deal with the information flow through the whole computer system and environment in a seamless way, handling software and hardware events uniformly, it is also necessary to find adequate abstractions.\nAs discussed in Section 5, the Generic-Events Architecture introduces the concept of Generic Event and an Event Layer abstraction which aim at dealing, among others, with these issues.\n4.\nSENTIENT OBJECT COMPOSITION In this section we analyze the most relevant issues related to the sentient object paradigm and the construction of systems composed of sentient objects.\n4.1 Component-based System Construction Sentient objects can take several different forms: they can simply be software-based components, but they can also comprise mechanical and\/or hardware parts, amongst which the very sensorial apparatus that substantiates sentience, mixed with software components to accomplish their task.\nWe refine this notion by considering a sentient object as an encapsulating entity, a component with internal logic and active processing elements, able to receive, transform and produce new events.\nThis interface hides the internal hardware\/software structure of the object, which may be complex, and shields the system from the low-level functional and temporal details of controlling a specific sensor or actuator.\nFurthermore, given the inherent complexity of the envisaged applications, the number of simultaneous input events and the internal size of sentient objects may become too large and difficult to handle.\nTherefore, it should be possible to consider the hierarchical composition of 
sentient objects so that the application logic can be separated across as few or as many of these objects as necessary.\nOn the other hand, composition of sentient objects should normally be constrained by the actual hardware component's structure, preventing the possibility of arbitrarily composing sentient objects.\nThis is illustrated in Figure 1, where a sentient object is internally composed of a few other sentient objects, each of them consuming and producing events, some of which only internally propagated.\nObserving the figure, and recalling our previous discussion about the possible interactions, we identify all of them here: an object-to-environment interaction occurs between the object controlling a WLAN transmitter and some WLAN receiver in the environment; an environment-to-object interaction takes place when the object responsible for the GPS signal reception uses the information transmitted by the satellites; finally, explicit object-to-object interactions occur internally to the container object, through an internal communication network.\nFigure 1: Component-aware sentient object composition (elements depicted: GPS reception, wireless transmission, Doppler radar, physical feedback, object's body, internal network).\nAdditionally, it is interesting to observe that implicit communication can also occur, whether the physical feedback develops through the environment internal to the container object (as depicted) or through the environment external to this object.\nHowever, there is a subtle difference between both cases.\nWhile in the former the feedback can only be perceived by objects internal to the container, bounding the extent to which consistency must be ensured, such bounds do not exist in the latter.\nIn fact, the notion of sentient object as an encapsulating entity may serve other purposes (e.g., the confinement of feedback and of the propagation of events), beyond the mere hierarchical 
composition of objects.

To give a more concrete example of such component-aware object composition, we consider a scenario of cooperating robots. Each robot is made of several components, corresponding, for instance, to axis and manipulator controllers. Together with the control software, each of these controllers may be a sentient object. On the other hand, a robot itself is a sentient object, composed of the objects materialized by the controllers and of the environment internal to its own structure, or body. This means that it should be possible to define cooperation activities using the events produced by robot sentient objects, without the need to know the internal structure of the robots, or the events produced by body objects or by smart sensors within the body. From an engineering point of view, however, this also means that a robot sentient object may have to generate new events that reflect its internal state, which requires the definition of a gateway to bridge the internal and external environments.

4.2 Encapsulation and Scoping

An important question now is how to represent and disseminate events in a large-scale networked world. As we have seen above, any event generated by a sentient object could, in principle, be visible anywhere in the system and thus received by any other sentient object. However, there are substantial obstacles to such universal interactions, originating from the heterogeneity of components in such a large-scale setting. Firstly, the components may have severe performance constraints, particularly because we want to integrate smart sensors and actuators in such an architecture. Secondly, the bandwidth of the participating networks may vary largely. Such networks may be low-power, low-bandwidth fieldbuses, or more powerful wireless networks, as well as high-speed backbones. Thirdly, the networks may have widely different reliability and timeliness characteristics. Consider a platoon of cooperating
vehicles. Inside a vehicle there may be a fieldbus like CAN [8, 9], TTP/A [17], or LIN [20], with a comparatively low bandwidth. On the other hand, the vehicles communicate with others in the platoon via a direct wireless link. Finally, there may be multiple platoons of vehicles which are coordinated by an additional wireless network layer.

At the abstraction level of sentient objects, such heterogeneity is reflected by the notion of body-vs-environment. At the network level, we assume the WAN-of-CANs structure [27] to model the different networks. The notion of body and environment is derived from the recursively defined component-based object model. A body is similar to a cell membrane and represents a quality-of-service container for the sentient objects inside. On the network level, it may be associated with the components coupled by a certain CAN. A CAN defines the dissemination quality which can be expected by the cooperating objects. In the above example, a vehicle may be a sentient object, whose body is composed of the respective lower-level objects (sensors and actuators) connected by the internal network (see Figure 1). Correspondingly, the platoon can itself be seen as an object composed of a collection of cooperating vehicles, its body being the environment encapsulated by the platoon zone. At the network level, the wireless network represents the respective CAN. However, several platoons united by their CANs may interact with each other and with objects further away, through some wider-range, possibly fixed networking substrate, hence the concept of WAN-of-CANs. The notions of body/environment and WAN-of-CANs are very useful when defining interaction properties across such boundaries. Their introduction stems from our belief that a single mechanism to provide quality measures for interactions is not appropriate. Instead, a high-level construct for interaction across boundaries is needed, one which allows the specification of the quality of
dissemination and exploits the knowledge about body and environment to assess the feasibility of quality constraints. As we will see in the following section, the notion of an event channel represents this construct in our architecture. It disseminates events and allows the network-independent specification of quality attributes. These attributes must be mapped to the respective properties of the underlying network structure.

5. A GENERIC EVENTS ARCHITECTURE

In order to successfully apply event-based object-oriented models, addressing the challenges enumerated in the introduction of this paper, it is necessary to use adequate architectural constructs which allow the enforcement of fundamental properties such as timeliness or reliability. We propose the Generic-Events Architecture (GEAR), depicted in Figure 2, which we briefly describe in what follows (for a more detailed description please refer to [29]). The L-shaped structure is crucial to ensure some of the properties described.

Figure 2: Generic-Events architecture (sentient objects produce and consume events through an event layer, on top of translation and communication layers, spanning body and environment).

Environment: The physical surroundings, remote and close, solid and ethereal, of sentient objects.

Body: The physical embodiment of a sentient object (e.g., the hardware where a mechatronic controller resides, the physical structure of a car). Note that due to the compositional approach taken in our model, part of what is environment to a smaller object seen
individually, becomes body for a larger, containing object. In fact, the body is the internal environment of the object. This architectural layering allows composition to take place seamlessly with respect to information flow. Inside a body there may also be implicit knowledge which can be exploited to make interaction more efficient, such as knowledge about the number of cooperating entities, the existence of a specific communication network, or the simple fact that all components are co-located and thus the respective events do not need to specify location in their context attributes. Such intrinsic information is not available outside a body and, therefore, more explicit information has to be carried by an event.

Translation Layer: The layer responsible for transforming physical events from/to their native form to/from the event channel dialect, between environment/body and an event channel. Essentially, it performs observation and actuation operations on the lower side, and transactions of event descriptions on the upper side. On the lower side this layer may also interact with dumb sensors or actuators, therefore talking the language of the specific device. These interactions are done through operational networks (hence the antenna symbol in the figure).

Event Layer: The layer responsible for event propagation in the whole system, through several Event Channels (ECs). In concrete terms, this layer is a kind of middleware that provides important event-processing services which are crucial for any realistic event-based system. For example, services that imply the processing of events may include publishing, subscribing, discrimination (zoning, filtering, fusion, tracing), and queuing.

Communication Layer: The layer responsible for wrapping events (as a matter of fact, event descriptions in EC dialect) into carrier event-messages, to be transported to remote places. For example, a sensing event generated by a smart sensor is wrapped in an
event-message and disseminated, to be caught by whoever is concerned. The same holds for an actuation event produced by a sentient object, to be delivered to a remote smart actuator. Likewise, this may apply to an event-message from one sentient object to another. Dumb sensors and actuators do not send event-messages, since they are unable to understand the EC dialect (they have neither an event layer nor a communication layer; they communicate, if needed, through operational networks).

Regular Network: This is represented in the horizontal axis of the block diagram by the communication layer, which encompasses the usual LAN, TCP/IP, and real-time protocols, desirably augmented with reliable and/or ordered broadcast and other protocols.

The GEAR introduces some innovative ideas in distributed systems architecture. While serving an object model based on the production and consumption of generic events, it treats events produced by several sources (environment, body, objects) in a homogeneous way. This is possible due to the use of a common basic dialect for talking about events and due to the existence of the translation layer, which performs the necessary translation between the physical representation of a real-time entity and the EC-compliant format. Crucial to the architecture is the event layer, which uses event channels to propagate events through regular network infrastructures. The event layer is realized by the COSMIC middleware, as described in Section 7.

5.1 Information Flow in GEAR

The flow of information (external environment and computational part) is seamlessly supported by the L-shaped architecture. It occurs in a number of different ways, which demonstrates the expressiveness of the model with regard to the forms of information encountered in real-time cooperative and embedded systems. Smart sensors produce events which report on the environment. Body sensors produce events which report on the body. They are disseminated by
the local event layer module, on an event channel (EC) propagated through the regular network, to any relevant remote event layer modules where entities have shown an interest in them, normally sentient objects attached to the respective local event layer modules. Sentient objects consume events they are interested in, process them, and produce other events. Some of these events are destined to other sentient objects. They are published on an EC using the same EC dialect that serves, e.g., sensor-originated events. However, these events are semantically of a kind such that they are to be subscribed to by the relevant sentient objects, for example, the sentient objects composing a robot controller system, or, at a higher level, the sentient objects composing the actual robots in a cooperative application. Smart actuators, on the other hand, merely consume events produced by sentient objects, whereby they accept and execute actuation commands.

As an alternative to talking to other sentient objects, sentient objects can produce events of a lower level, for example, actuation commands on the body or environment. They publish these exactly the same way: on an event channel through the local event layer representative. Now, if these commands are of concern to local actuator units (e.g., body, including internal operational networks), they are passed on to the local translation layer. If they are of concern to a remote smart actuator, they are disseminated through the distributed event layer, to reach the former. In any case, if they are also of interest to other entities, such as other sentient objects that wish to be informed of the actuation command, then they are also disseminated through the EC to these sentient objects. A key advantage of this architecture is that event-messages and physical events can be globally ordered, if necessary, since they all pass through the event layer. The model also offers opportunities to solve a long-lasting problem in real-time,
computer control, and embedded systems: the inconsistency between the message-passing and the feedback-loop information flow subsystems.

6. TEMPORAL ASPECTS OF THE INTERACTIONS

Any interaction needs some form of predictability. If safety-critical scenarios are considered, as is done in CORTEX, temporal aspects become crucial and have to be made explicit. The problem is how to define temporal constraints and how to enforce them by appropriate resource usage in a dynamic ad-hoc environment. In a system where interactions are spontaneous, it may also be necessary to determine temporal properties dynamically. To do this, the respective temporal information must be stated explicitly and be available during run-time. Secondly, it is not always ensured that temporal properties can be fulfilled. In these cases, adaptations and timing failure notification must be provided [2, 28].

In most real-time systems, the notion of a deadline is the prevailing scheme to express and enforce timeliness. However, a deadline only weakly reflects the temporal characteristics of the information which is handled. Moreover, a deadline often includes implicit knowledge about the system and the relations between activities. In a rather well-defined, closed environment, it is possible to make such implicit assumptions and map them to execution times and deadlines.
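The kind of implicit mapping just described, from a tolerated error bound and a known speed to a validity time that can back a deadline, can be sketched as follows. This is a toy illustration with assumed names and numbers, not CORTEX code:

```python
# Illustrative sketch (assumed names, not CORTEX code): deriving a
# validity time from a tolerated position error and a vehicle speed,
# the way an engineer would when fixing a deadline implicitly.

def validity_seconds(max_position_error_m: float, speed_m_per_s: float) -> float:
    """Time until a position reading drifts past the tolerated error bound."""
    if speed_m_per_s <= 0:
        raise ValueError("speed must be positive")
    return max_position_error_m / speed_m_per_s

# A 10 m error bound at 5 m/s yields a 2 s validity time.
print(validity_seconds(10.0, 5.0))  # 2.0
```

In a closed system this quotient would be hard-wired into a deadline; in an open environment it must travel with the event, which is exactly what the explicit quality attributes below provide.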
The engineer knows, for example, how long a vehicle position can be used before the vehicle's movement outdates this information. He thus maps this dependency between speed and position onto a deadline, which then ensures that the position error can be assumed to be bounded. In an open environment, this implicit mapping is no longer possible because the relation between speed and position, and thus the error bound, cannot easily be reverse-engineered from a deadline. Therefore, our event model includes explicit quality attributes which allow the temporal attributes of every individual event to be specified. This is of course an overhead compared to the use of implicit knowledge, but in a dynamic environment such information is needed. To illustrate the problem, consider the example of the position of a vehicle. A position is a typical example of a ⟨time, value⟩ entity [30]. Thus, the position is useful if we can determine an error bound which is related to time; e.g., if we want a position error below 10 meters to establish a safety property between cooperating cars moving at 5 m/sec, the position has a validity time of 2 seconds. In a ⟨time, value⟩ entity we can trade time against the precision of the value. This is known as value over time and time over value [18]. Once the time-value relation has been established and captured in event attributes, subscribers of such an event can locally decide about the usefulness of the information.

In the GEAR architecture, temporal validity is used to reason about safety properties in an event-based system [29]. We will briefly review the respective notions and see how they are exploited in our COSMIC event middleware. Consider the timeline of generating an event representing some real-time entity [18], from its occurrence to the notification of a certain sentient object (Figure 3). The real-time entity is captured at the sensor interface of the system and has to be transformed into a form which can be treated by a
computer. During the time interval t_o the sensor reads the real-time entity and a time stamp is associated with the respective value. The derived ⟨time, value⟩ entity represents an observation. It may be necessary to perform substantial local computations to derive application-relevant information from the raw sensor data. However, it should be noted that the time stamp of the observation is associated with the capture time and is thus independent from further signal processing and event generation. This close relationship between capture time and the associated value is supported by the smart sensors described above. The processed sensor information is assembled into an event data structure after t_s, to be published to an event channel. As described later, the event includes the time stamp of generation and the temporal validity as attributes.

The temporal validity is an application-defined measure for the expiration of a ⟨time, value⟩ entity. As explained in the position example above, it may vary depending on application parameters. Temporal validity is a more general concept than that of a deadline. It is independent of a particular technical implementation of a system. While deadlines may be used to schedule the respective steps in event generation and dissemination, a temporal validity is an intrinsic property of the ⟨time, value⟩ entity carried in an event. A temporal validity allows reasoning about the usefulness of information and is beneficial even in systems in which timely dissemination of events cannot be enforced, because it enables timing failure detection at the event consumer. Obviously, deadlines or periods can be derived from the temporal validity of an event. To set a deadline, knowledge of an implementation, of worst-case execution times, or of message dissemination latencies is necessary. Thus, in the timeline of Figure 3 every interval may have a deadline. Event dissemination through soft real-time channels in COSMIC exploits the temporal
validity to define dissemination deadlines. Quality attributes can be defined, for instance, in terms of ⟨validity interval, omission degree⟩ pairs. These characterize the usefulness of the event for a certain application, in a certain context. Because of that, the quality attributes of an event clearly depend on higher-level issues, such as the nature of the sentient object or of the smart sensor that produced the event. For instance, an event containing an indication of some vehicle's speed must have different quality attributes depending on the kind of vehicle from which it originated, or depending on its current speed. The same happens with the position event of the car example above, whose validity depends on the current speed and on a predefined required precision.

Figure 3: Event processing and dissemination. An observation ⟨time stamp, value⟩ of a real-world event is pushed by the event producer through an event channel over the communication network to the event consumer; t_o: time to obtain an observation, t_s: time to process the sensor reading, t_m: time to assemble an event message, t_t: time to transfer the event on the regular network, t_n: time for notification on the consumer site.

However, since quality attributes are strictly related to the semantics of the application or, at least, to some high-level knowledge of the purpose of the system (from which the validity of the information can be derived), the definition of these quality attributes may be done by exploiting the information provided at the programming interface. Therefore, it is important to understand how the system programmer can specify non-functional requirements at the API, and how these requirements translate into quality attributes assigned to events. While temporal validity is
identified as an intrinsic event property, which is exploited to decide on the usefulness of data at a certain point in time, it is still necessary to provide a communication facility which can disseminate the event before its validity expires. In a WAN-of-CANs network structure we have to cope with very different network characteristics and quality-of-service properties. Therefore, when crossing network boundaries, the quality-of-service guarantees available in a certain network will be lost, and it will be very hard, costly, and perhaps impossible to achieve these properties in the next larger area of the WAN-of-CANs structure. CORTEX has a couple of abstractions to cope with this situation (network zones, body/environment), which have been discussed above. From the temporal point of view, we now need a high-level abstraction like the temporal validity of the individual event to express our quality requirements for dissemination over the network. The ⟨bound, coverage⟩ pair, introduced in relation with the TCB [28], seems to be an appropriate approach. It accounts for the inherent uncertainty of networks and allows the quality of dissemination to be traded against the resources which are needed. In relation with the event channel model discussed later, the ⟨bound, coverage⟩ pair allows the quality properties of an event channel to be specified independently of specific technical issues.

Given the typical environments in which sentient applications will operate, where it is difficult or even impossible to provide timeliness or reliability guarantees, we proposed an alternative way to handle non-functional application requirements, in relation with the TCB approach [28]. The proposed approach exploits intrinsic characteristics of applications, such as fail-safety or time-elasticity, in order to secure QoS specifications of the form ⟨bound, coverage⟩. Instead of constructing systems that rely on guaranteed bounds, the idea is to use (possibly changing) bounds that
are secured with a constant probability throughout the execution. This obviously requires an application to be able to adapt to changing conditions (and/or changing bounds) or, if this is not possible, to be able to perform some safety procedure when the operational conditions degrade to an unbearable level. The bounds we mentioned above refer essentially to timeliness bounds associated with the execution of local or distributed activities, or combinations thereof. From these bounds it is then possible to derive the quality attributes, in particular the validity intervals, that characterize the events published in the event channel.

6.1 The Role of Smart Sensors and Actuators

Smart devices encapsulate hardware, software, and mechanical components, and provide information and a set of well-specified functions which are closely related to the interaction with the environment. The built-in computational components and the network interface enable the implementation of a well-defined high-level interface that does not just provide raw transducer data, but a processed, application-related set of events. Moreover, they exhibit an autonomous, spontaneous behaviour. They differ from general-purpose nodes because they are dedicated to a certain functionality which complies with their sensing and actuating capabilities, while a general-purpose node may execute any program. Concerning the sentient object model, smart sensors and actuators may be basic sentient objects themselves, consuming events from the real-world environment and producing the respective generic events for the system's event layer or, vice versa, consuming a generic event and converting it into a real-world event by an actuation. Smart components therefore constitute the periphery, i.e., the real-world interface of a more complex sentient object. The model of sentient objects also constitutes the framework to build more complex virtual sensors by relating multiple (primary, i.e.
sensors which directly sense a physical entity) sensors. Smart components translate events of the environment into an appropriate form available at the event layer or, vice versa, transform a system event into an actuation. For smart components we can assume that:

• Smart components have dedicated resources to perform a specific function.
• These resources are not used for other purposes during normal real-time operation.
• No local temporal conflicts occur that would change the observable temporal behaviour.
• The functions of a component can usually only be changed during a configuration procedure, which is not performed while the component is involved in critical operations.
• An observation of the environment as a ⟨time, value⟩ pair can be obtained with a bounded jitter in time.

Many predictability and scheduling problems arise from the fact that very low-level timing behaviours have to be handled on a single processor. Here, temporal encapsulation of activities is difficult because of the possible side effects of sharing a single processor resource. Consider the control of a simple IR range detector which is used for obstacle avoidance. Depending on its range and the speed of the vehicle, it has to be polled to prevent the vehicle from crashing into an obstacle. On a single central processor, this critical activity has to be coordinated with many similar, possibly less critical functions. This means that a very fine-grained schedule has to be derived, based purely on the artifacts of the low-level device control. In a smart sensor component, all this low-level timing behaviour can be optimized and encapsulated. Thus we can assume temporal encapsulation similar to information hiding in the functional domain. Of course, there is still the problem of guaranteeing that an event will be disseminated and recognized in due time by the respective system components, but this relates to application-related events rather than the low-level artifacts
of a device's timing. The main responsibility for providing timeliness guarantees is shifted to the event layer, where these events are disseminated. Smart sensors thus lead to a network-centric system model. The network constitutes the shared resource which has to be scheduled in a predictable way. The COSMIC middleware introduced in the next section is an approach to providing predictable event dissemination for a network of smart sensors and actuators.

7. AN EVENT MODEL AND MIDDLEWARE FOR COOPERATING SMART DEVICES

An event model and a middleware suitable for smart components must support timely and reliable communication and must also be resource-efficient. COSMIC (COoperating Smart devices) is aimed at supporting the interaction between such components according to the concepts introduced so far. Based on the model of a WAN-of-CANs, we assume that the components are connected to some form of CAN, such as a fieldbus or a special wireless sensor network, which provides specific network properties.
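Interaction between such components follows the publisher/subscriber style discussed below; a minimal sketch of subject-based, anonymous dissemination is given here. The class and method names are assumptions for illustration, not the actual COSMIC API:

```python
# Toy subject-based publish/subscribe channel (illustrative only;
# hypothetical names, not the COSMIC API). Producers and consumers are
# decoupled: publishing needs no references to receivers, only a subject.
from collections import defaultdict
from typing import Any, Callable

class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict = defaultdict(list)

    def subscribe(self, subject: str, handler: Callable[[Any], None]) -> None:
        # Consumers register interest in a subject, not in a producer.
        self._subscribers[subject].append(handler)

    def publish(self, subject: str, event: Any) -> None:
        # Producers emit anonymously; no control dependency on receivers.
        for handler in self._subscribers[subject]:
            handler(event)

received = []
bus = EventBus()
bus.subscribe("vehicle.position", received.append)
bus.publish("vehicle.position", {"x": 3.0, "y": 4.0})
bus.publish("vehicle.speed", 12.5)  # no subscriber: silently dropped
print(received)  # only the position event was delivered
```

The sketch shows the space decoupling the text describes; the real middleware additionally binds subjects to network addresses and attaches quality attributes, as detailed below.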
A fieldbus developed for control applications, for example, usually includes mechanisms for predictable communication, while other networks only support best-effort dissemination. A gateway connects these CANs to the next level in the network hierarchy. The event system should allow dynamic interaction over a hierarchy of such networks and comply with the overall CORTEX generic event model.

Events are typed information carriers and are disseminated in a publisher/subscriber style [24, 7], which is particularly suitable because it supports generative, anonymous communication [3] and does not create any artificial control dependencies between the producers of information and its consumers. This decoupling in space (no references or names of senders or receivers are needed for communication) and the flow decoupling (no control transfer occurs with a data transfer) are well known [24, 7, 14] and are crucial properties for maintaining the autonomy of components and dynamic interactions. It is obvious that not all networks can provide the same QoS guarantees and, secondly, that applications may have widely differing requirements for event dissemination. Additionally, when striving for predictability, resources have to be reserved and data structures must be set up before communication takes place; these things cannot be done predictably on the fly while disseminating an event. Therefore, we introduced the notion of an event channel to cope with differing properties and requirements, and to have an object to which we can assign resources and reservations. The concept of an event channel is not new [10, 25]; however, it has not yet been used to reflect the properties of the underlying heterogeneous communication networks and mechanisms as described by the GEAR architecture. Rather, existing event middleware allows the specification of the priorities or deadlines of events handled in an event server. Event channels allow the communication properties to be specified on the level of the event system in a
fine-grained way. An event channel is defined by:

event channel := ⟨subject, quality attribute list, handlers⟩

The subject determines the types of events which may be issued to the channel. The quality attributes model the properties of the underlying communication network and dissemination scheme. These attributes include latency specifications, dissemination constraints, and reliability parameters. The notion of zones, which represent a guaranteed quality of service in a subnetwork, supports this approach. Our goal is to handle the temporal specifications as ⟨bound, coverage⟩ pairs [28], orthogonal to the more technical questions of how to achieve a certain synchrony property of the dissemination infrastructure. Currently, we support quality attributes of event channels in a CAN-Bus environment, represented by explicit synchrony classes. The COSMIC middleware maps the channel properties to lower-level protocols of the regular network. Based on our previous work on predictable protocols for the CAN-Bus, COSMIC defines an abstract network which provides hard, soft, and non-real-time message classes [21]. Correspondingly, we distinguish three event channel classes according to their synchrony properties: hard real-time channels, soft real-time channels, and non-real-time channels.

Hard real-time channels (HRTCs) guarantee event propagation within the defined time constraints in the presence of a specified number of omission faults. HRTCs are supported by a reservation scheme similar to the scheme used in time-triggered protocols like TTP [16][31], TTP/A [17], and TTCAN [8]. However, a substantial advantage over a TDMA scheme is that, due to CAN-Bus properties, bandwidth which was reserved but is not needed by an HRTC can be used by less critical traffic [21].

Soft real-time channels (SRTCs) exploit the temporal validity interval of events to derive deadlines for scheduling. The validity interval defines the point in time after which an event
becomes temporally inconsistent. Therefore, in a real-time system an event is useless after this point and may be discarded. The transmission deadline (DL) is defined as the latest point in time when a message has to be transmitted, and it is specified within a time interval derived from the expiration time:

t_event_ready < DL < t_expiration − Δ_notification

where t_expiration defines the point in time when the temporal validity expires, and Δ_notification is the expected end-to-end latency, which includes the transfer time over the network and the time the event may be delayed by the local event handling in the nodes. As said before, event deadlines are used to schedule the dissemination by SRTCs. However, deadlines may be missed in transient overload situations or due to arbitrary arrival times of events. On the publisher side, the application's exception handler is called whenever the event deadline expires before event transmission. At this point in time the event can also no longer be expected to arrive at the subscriber side before its validity expires. Therefore, the event is removed from the sending queue. On the subscriber side, the expiration time is used to schedule the delivery of the event. If the event cannot be delivered before its expiration time, it is removed from the respective queues allocated by the COSMIC middleware. This prevents the communication system from being loaded with outdated messages. Non-real-time channels do not assume any temporal specification and disseminate events in a best-effort manner.

An instance of an event channel is created locally whenever a publisher makes an announcement for publication or a subscriber subscribes for an event notification. When a publisher announces publication, the respective data structures of an event channel are created by the middleware. When a subscriber subscribes to an event channel, it may specify context attributes of an event which are used to filter events locally.
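The transmission-deadline rule above, and the discarding of outdated events, can be sketched as follows. This is an illustrative sketch with assumed names, not COSMIC code:

```python
# Illustrative sketch (assumed names, not COSMIC code) of the soft
# real-time channel rule: a message must be sent before
# t_expiration - delta_notification, otherwise it is dropped, since it
# could no longer reach the subscriber while still temporally valid.

def latest_transmission_deadline(t_expiration: float,
                                 delta_notification: float) -> float:
    """Latest time a message may still be sent and arrive while valid."""
    return t_expiration - delta_notification

def should_discard(now: float, t_expiration: float,
                   delta_notification: float) -> bool:
    """Remove the event from the sending queue once the window has closed."""
    return now >= latest_transmission_deadline(t_expiration, delta_notification)

# Validity expires at t=10.0, expected end-to-end latency 1.5:
# the event must leave the queue before t=8.5.
print(should_discard(8.0, 10.0, 1.5))  # False: still sendable
print(should_discard(9.0, 10.0, 1.5))  # True: outdated, drop it
```

The same window check applies symmetrically on the subscriber side, with the expiration time alone deciding whether a queued event is still worth delivering.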
For example, a subscriber may only be interested in events generated at a certain location. Additionally, the subscriber specifies quality properties of the event channel. A more detailed description of the event channels can be found in [13].

Currently, COSMIC handles all event channels which disseminate events beyond the CAN network boundary as non-real-time event channels. This is mainly because we use the TCP/IP protocol to disseminate events over wireless links or to the standard Ethernet. However, there are a number of possible improvements which can easily be integrated into the event channel model. The Timely Computing Base (TCB) [28] can be exploited for timing failure detection and would thus provide awareness for event dissemination in environments where timely delivery of events cannot be enforced. Additionally, there are wireless protocols which can provide timely and reliable message delivery [6, 23] and which may be exploited for the respective event channel classes.

Events are the information carriers which are exchanged between sentient objects through event channels. To cope with the requirements of an ad-hoc environment, an event includes the description of the context in which it has been generated and quality attributes defining requirements for dissemination. This is particularly important in an open, dynamic environment where an event may travel over multiple networks. An event instance is specified as:

event := ⟨subject, context attributeList, quality attributeList, contents⟩

A subject defines the type of the event and is related to the event contents. It supports anonymous communication and is used to route an event. The subject has to match the subject of the event channel through which the event is disseminated. Attributes are complementary to the event contents. They describe individual functional and non-functional properties of the event. The context attributes describe the environment in which the event has been generated, e.g. a location, an operational mode or a time of occurrence. The quality attributes specify timeliness and dependability aspects in terms of ⟨validity interval, omission degree⟩ pairs. The validity interval defines the point in time after which an event becomes temporally inconsistent [18]. As described above, the temporal validity can be mapped to a deadline. However, a deadline is usually an engineering artefact used for scheduling, while the temporal validity is a general property of a ⟨time, value⟩ entity. In an environment where a deadline cannot be enforced, a consumer of an event eventually must decide whether the event is still temporally consistent, i.e. whether it represents a valid ⟨time, value⟩ entity.

7.1 The Architecture of the COSMIC Middleware

On the architectural level, COSMIC distinguishes three layers, roughly depicted in Figure 4. Two of them, the event layer and the abstract network layer, are implemented by the COSMIC middleware. The event layer provides the API for the application and realizes the abstraction of events and event channels. The abstract network implements real-time message classes and adapts the quality requirements to the underlying real network. An event channel handler resides in every node. It supports the programming interface and provides the necessary data structures for event-based communication. Whenever an object subscribes to a channel or a publisher announces a channel, the event channel handler is involved. It initiates the binding of the channel's subject, which is represented by a network-independent unique identifier, to an address of the underlying abstract network to enable communication [14]. The event channel handler then tightly cooperates with the respective handlers of the abstract network layer to disseminate events or receive event notifications. It should be noted that the QoS properties of the event layer in general depend on what the abstract network layer can provide. Thus, it may not always be possible, e.g., to support hard real-time event channels, because the abstract network layer cannot provide the respective guarantees.
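The event and event-channel tuples described above, together with subject matching and local context filtering, can be sketched as follows. The field names follow the paper; the matching and filter logic is an illustrative assumption, not the COSMIC implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    subject: str      # type of the event, used to route it
    context: dict     # e.g. location, operational mode, time of occurrence
    quality: dict     # e.g. validity interval, omission degree
    contents: object

@dataclass
class EventChannel:
    subject: str
    quality: dict     # latency, dissemination and reliability attributes
    subscribers: list = field(default_factory=list)

    def subscribe(self, handler, context_filter=None):
        # A subscriber may specify context attributes used to filter locally.
        self.subscribers.append((handler, context_filter))

    def publish(self, event: Event):
        # The event's subject has to match the channel's subject.
        if event.subject != self.subject:
            raise ValueError("subject mismatch")
        for handler, flt in self.subscribers:
            if flt is None or all(event.context.get(k) == v
                                  for k, v in flt.items()):
                handler(event)
```

A subscriber interested only in rear distance sensors, for instance, would pass `context_filter={"location": "rear"}` and never see events generated elsewhere.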
In [13], we describe the protocols and services of the abstract network layer, particularly for the CAN-Bus. As can be seen in Figure 4, the hard real-time (HRT) message class is supported by a dedicated handler which is able to provide the time-triggered message dissemination.

Figure 4: Architecture layers of COSMIC. (The figure shows the event layer with the event channel handler (ECH) and event channel specifications on top of the abstract network layer with its HRTC and S/NRTC handlers, HRT message list and calendar, SRT and NRT message queues and global time service, above the CAN layer with RX/TX buffers and interrupts.)

The HRT handler maintains the HRT message list, which contains an entry for each local HRT message to be sent. The entry holds the parameters for the message, the activation status and the binding information. Messages are scheduled on the bus according to the HRT message calendar, which comprises the precise start time for each time slot allocated for a message. Soft real-time message queues order outgoing messages according to their transmission deadlines, derived from the temporal validity interval. If the transmission deadline is exceeded, the event message is purged out of the queue. The respective application is notified via the exception
notification interface and can take actions like trying to publish the event again or publishing it to a channel of another class. Incoming event messages are ordered according to their temporal validity. If an event message arrives, the respective applications are notified. At the moment, an outdated message is deleted from the queue, and if the queue runs out of space, the oldest message is discarded. However, other policies are possible, depending on event attributes and available memory space. Non-real-time messages are FIFO-ordered in a fixed-size circular buffer.

7.2 Status of COSMIC

The goal in developing COSMIC was to provide a platform to seamlessly integrate tiny smart components in a large system. Therefore, COSMIC should also run on small, resource-constrained devices built around 16-bit or even 8-bit microcontrollers. The distributed COSMIC middleware has been implemented and tested on various platforms. Under RT-Linux, we support the real-time channels over the CAN-Bus as described above. The RT-Linux version runs on Pentium processors and is currently being evaluated before we intend to port it to a smart sensor or actuator. For interoperability in a WAN-of-CANs environment, we only provide non-real-time channels at the moment. This version includes a gateway between the CAN-Bus and a TCP/IP network. It allows us to use a standard wireless 802.11 network. The non-real-time version of COSMIC is available on Linux, RT-Linux and on the microcontroller families C167 (Infineon) and 68HC908 (Motorola). Both microcontrollers have an on-board CAN controller and thus do not require additional hardware components for the network. The memory footprint of COSMIC is about 13 Kbyte on a C167 and slightly more on the 68HC908, where it fits into the on-board flash memory without problems. Because only a few channels are required on such a smart sensor or actuator component, the requirement of RAM (which is a scarce resource on many single-chip systems) to hold the dynamic data structures of a channel is low.
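The subscriber-side buffering policies described at the start of this subsection (ordering by temporal validity, deleting outdated messages, discarding the oldest on overflow, FIFO for non-real-time traffic) can be sketched as follows. This is an illustrative Python model; the class names and the fixed policies are our assumptions:

```python
from collections import deque

class IncomingSRTQueue:
    """Incoming soft real-time events, ordered by temporal validity."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._events = []                 # (t_expiration, event), kept sorted

    def insert(self, event, t_expiration, now):
        # Outdated messages are deleted from the queue ...
        self._events = [(t, e) for (t, e) in self._events if t > now]
        if t_expiration <= now:
            return                        # already temporally inconsistent
        self._events.append((t_expiration, event))
        self._events.sort(key=lambda te: te[0])
        # ... and if the queue runs out of space, the oldest is discarded
        # (here: the entry whose validity expires first).
        if len(self._events) > self.capacity:
            self._events.pop(0)

    def deliver(self, now):
        return [e for (t, e) in self._events if t > now]

# Non-real-time messages: FIFO order in a fixed-size circular buffer;
# a deque with maxlen silently drops the oldest entry when full.
nrt_buffer = deque(maxlen=8)
```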
The COSMIC middleware makes it very easy to include new smart sensors in an existing system. In particular, the application running on a smart sensor to condition and process the raw physical data need not be aware of any low-level, network-specific details. It seamlessly interacts with other components of the system exclusively via event channels. The demo example, briefly described in the next section, uses a distributed infrastructure of tiny smart sensors and actuators directly cooperating via event channels over heterogeneous networks.

8. AN ILLUSTRATIVE EXAMPLE

A simple example for many important properties of the proposed system, showing the coordination through the environment and events disseminated over the network, is the demo of two cooperating robots depicted in Figure 5. Each robot is equipped with smart distance sensors, speed sensors and acceleration sensors, and one of the robots (the guide (KURT2) in front (Figure 5)) has a tracking camera allowing it to follow a white line. The robots form a WAN-of-CANs system in which their local CANs are interconnected via a wireless 802.11 network. COSMIC provides the event layer for seamless interaction. The blind robot (N.N.) searches for the guide randomly. Whenever the blind robot detects an obstacle (by its front distance sensors), it checks whether this may be the guide. For this purpose, it dynamically subscribes to the event channel disseminating distance events from the rear distance sensors of the guide(s) and compares these with the distance events from its local front sensors. If the distance is approximately the same, it infers that it is really behind a guide. Now N.N. also subscribes to the event channels of the tracking camera and the speed sensors to follow the guide.

Figure 5: Cooperating robots.

The demo application highlights the following properties of the system:

1. Dynamic interaction of robots which is not known in advance. In principle, any two a priori unknown robots can cooperate. All that publishers and subscribers have to know to dynamically interact in this environment is the subject of the respective event class. A problem will be to receive only the events of the robot which is closest. A robot identity does not help much to solve this problem. Rather, the position of the event-generating entity, which is captured in the respective attributes, can be evaluated to filter the relevant event out of the event stream. A suitable wireless protocol which uses proximity to filter events has been proposed by Meier and Cahill [22] in the CORTEX project.

2. Interaction through the environment. The cooperation between the robots is controlled by sensing the distance between them. If the guide detects that the distance grows, it slows down. Respectively, if the blind robot comes too close, it reduces its speed. The local distance sensors produce events which are disseminated through a low-latency, highly predictable event channel. The respective reaction time can be calculated as a function of the speed and the distance of the robots and defines a dynamic dissemination deadline for events. Thus, the interaction through the environment will secure the safety properties of the application, i.e. the follower may not crash into the guide and the guide may not lose the follower.
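The guide-detection check and the dynamic dissemination deadline described above can be sketched as follows. The tolerance, safety margin and concrete formula are illustrative assumptions, not taken from the demo:

```python
def is_behind_guide(local_front_mm, remote_rear_mm, tolerance_mm=50):
    """N.N. infers it is behind a guide if its local front-sensor distance
    approximately matches the guide's remote rear-sensor distance."""
    return abs(local_front_mm - remote_rear_mm) <= tolerance_mm

def dissemination_deadline(distance_m, closing_speed_ms, margin_s=0.1):
    """Dynamic deadline for a distance event: it must arrive before the
    gap between the robots could close, minus a safety margin."""
    if closing_speed_ms <= 0:
        return float("inf")    # robots are not approaching each other
    return distance_m / closing_speed_ms - margin_s
```

The deadline tightens as the robots approach each other or speed up, which is exactly why the distance events need a low-latency, highly predictable channel.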
Additionally, the robots have remote subscriptions to the respective distance events, which are checked against the local sensor readings to validate that they really follow the guide they detect with their local sensors. Because there may be longer latencies and omissions, this check occasionally will not be possible. The unavailability of the remote events will decrease the quality of interaction and probably slow down the robots, but will not affect the safety properties.

3. Cooperative sensing. The blind robot subscribes to the events of the line-tracking camera. Thus it can see through the eye of the guide. Because it knows the distance to the guide and the speed as well, it can foresee the necessary movements. The proposed system provides the architectural framework for such a cooperation. The respective sentient object controlling the actuation of the robot receives as input the position and orientation of the white line to be tracked. In the case of the guide robot, this information is directly delivered as a body event with a low latency and a high reliability over the internal network. For the follower robot, the information also comes via an event channel, but with different quality attributes. These quality attributes are reflected in the event channel description. The sentient object controlling the actuation of the follower is aware of the increased latency and higher probability of omission.

9. CONCLUSION AND FUTURE WORK

The paper addresses the problems of building large distributed systems which interact with the physical environment and are composed of a huge number of smart components. We cannot assume that the network architecture in such a system is homogeneous. Rather, multiple edge networks are fused into a hierarchical, heterogeneous wide-area network. They connect the tiny sensors and actuators perceiving the environment and providing sentience to the application. Additionally, mobility and dynamic deployment of components require dynamic interaction without fixed, a priori known addressing and routing schemes. The work presented in the paper is a contribution towards seamless interaction in such an environment, which should not be restricted by technical obstacles. Rather, it should be possible to control the flow of information by explicitly specifying functional and temporal dissemination constraints. The paper presented the general model of a sentient object to describe composition, encapsulation and interaction in such an environment, and developed the Generic Event Architecture GEAR, which integrates the interaction through the environment and the network. While appropriate abstractions and interaction models can hide the functional heterogeneity of the networks, it is impossible to hide the quality differences. Therefore, one of the main concerns is to define temporal properties in such an open infrastructure. The notion of an event channel has been introduced, which makes it possible to specify quality aspects explicitly. They can be verified at subscription and define a boundary for event dissemination. The COSMIC middleware is a first attempt to put these concepts into operation. COSMIC allows the interoperability of tiny components over multiple network boundaries and supports the definition of different real-time event channel classes.

There are many open questions that emerged from our work. One direction of future research will be the inclusion of real-world communication channels established between sensors and actuators in the temporal analysis, and the ordering of such events in a cause-effect chain. Additionally, the provision of timing failure detection for the adaptation of interactions will be a focus of our research. To reduce network traffic and only disseminate those events to the subscribers which they are really interested in and which have a chance to arrive
timely, the encapsulation and scoping schemes have to be transformed into respective multi-level filtering rules. The event attributes, which describe aspects of the context and temporal constraints for the dissemination, will be exploited for this purpose. Finally, it is intended to integrate the results into the COSMIC middleware to enable experimental assessment.

10. REFERENCES

[1] J. Bacon, K. Moody, J. Bates, R. Hayton, C. Ma, A. McNeil, O. Seidel, and M. Spiteri. Generic support for distributed applications. IEEE Computer, 33(3):68-76, 2000.
[2] L. B. Becker, M. Gergeleit, S. Schemmer, and E. Nett. Using a flexible real-time scheduling strategy in a distributed embedded application. In Proc. of the 9th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Lisbon, Portugal, Sept. 2003.
[3] N. Carriero and D. Gelernter. Linda in context. Communications of the ACM, 32(4):444-458, Apr. 1989.
[4] A. Casimiro (Ed.). Preliminary definition of CORTEX system architecture. CORTEX project, IST-2000-26031, Deliverable D4, Apr. 2002.
[5] CORTEX project Annex 1, Description of Work. Technical report, CORTEX project, IST-2000-26031, Oct. 2000. http://cortex.di.fc.ul.pt.
[6] R. Cunningham and V. Cahill. Time bounded medium access control for ad-hoc networks. In Proceedings of the Second ACM International Workshop on Principles of Mobile Computing (POMC'02), pages 1-8, Toulouse, France, Oct. 2002. ACM Press.
[7] P. T. Eugster, P. Felber, R. Guerraoui, and A.-M. Kermarrec. The many faces of publish/subscribe. Technical Report DSC ID:200104, EPFL, Lausanne, Switzerland, 2001.
[8] T. Führer, B. Müller, W. Dieterle, F. Hartwich, R. Hugel, and M. Walther. Time triggered communication on CAN, 2000. http://www.can-cia.org/can/ttcan/fuehrer.pdf.
[9] Robert Bosch GmbH. CAN Specification Version 2.0. Technical report, Sept. 1991.
[10] T. Harrison, D. Levine, and D. Schmidt. The design and performance of a real-time CORBA event service. In Proceedings of the 1997 Conference on Object Oriented Programming Systems, Languages and Applications (OOPSLA), pages 184-200, Atlanta, Georgia, USA, 1997. ACM Press.
[11] J. Hightower and G. Borriello. Location systems for ubiquitous computing. IEEE Computer, 34(8):57-66, Aug. 2001.
[12] A. Hopper. The Clifford Paterson Lecture, 1999: Sentient Computing. Philosophical Transactions of the Royal Society London, 358(1773):2349-2358, Aug. 2000.
[13] J. Kaiser, C. Mitidieri, C. Brudna, and C. Pereira. COSMIC: A Middleware for Event-Based Interaction on CAN. In Proc. 2003 IEEE Conference on Emerging Technologies and Factory Automation, Lisbon, Portugal, Sept. 2003.
[14] J. Kaiser and M. Mock. Implementing the real-time publisher/subscriber model on the controller area network (CAN). In Proceedings of the 2nd International Symposium on Object-oriented Real-time distributed Computing (ISORC99), Saint-Malo, France, May 1999.
[15] K. Kim, G. Jeon, S. Hong, T. Kim, and S. Kim. Integrating subscription-based and connection-oriented communications into the embedded CORBA for the CAN Bus. In Proceedings of the IEEE Real-time Technology and Application Symposium, May 2000.
[16] H. Kopetz and G. Grünsteidl. TTP - A Time-Triggered Protocol for Fault-Tolerant Real-Time Systems. Technical Report rr-12-92, Institut für Technische Informatik, Technische Universität Wien, Treitlstr. 3/182/1, A-1040 Vienna, Austria, 1992.
[17] H. Kopetz, M. Holzmann, and W. Elmenreich. A Universal Smart Transducer Interface: TTP/A. International Journal of Computer System, Science Engineering, 16(2), Mar. 2001.
[18] H. Kopetz and P. Veríssimo. Real-time and Dependability Concepts. In S. J. Mullender, editor, Distributed Systems, 2nd Edition, chapter 16, pages 411-446. ACM Press/Addison-Wesley, 1993.
[19] S. Lankes, A. Jabs, and T. Bemmerl. Integration of a CAN-based connection-oriented communication model into Real-Time CORBA. In Workshop on Parallel and Distributed Real-Time Systems, Nice, France, Apr. 2003.
[20] Local Interconnect Network: LIN Specification Package Revision 1.2. Technical report, Nov. 2000.
[21] M. Livani, J. Kaiser, and W. Jia. Scheduling hard and soft real-time communication in the controller area network. Control Engineering Practice, 7(12):1515-1523, 1999.
[22] R. Meier and V. Cahill. STEAM: Event-based middleware for wireless ad-hoc networks. In Proceedings of the International Workshop on Distributed Event-Based Systems (ICDCS/DEBS'02), pages 639-644, Vienna, Austria, 2002.
[23] E. Nett and S. Schemmer. Reliable real-time communication in cooperative mobile applications. IEEE Transactions on Computers, 52(2):166-180, Feb. 2003.
[24] B. Oki, M. Pfluegl, A. Seigel, and D. Skeen. The Information Bus - an architecture for extensible distributed systems. Operating Systems Review, 27(5):58-68, 1993.
[25] Object Management Group (OMG). CORBAservices: Common Object Services Specification - Notification Service Specification, Version 1.0, 2000.
[26] Object Management Group (OMG). Smart transducer interface, initial submission, June 2001.
[27] P. Veríssimo, V. Cahill, A. Casimiro, K. Cheverst, A. Friday, and J. Kaiser. CORTEX: Towards supporting autonomous and cooperating sentient entities. In Proceedings of European Wireless 2002, Florence, Italy, Feb. 2002.
[28] P. Veríssimo and A. Casimiro. The Timely Computing Base model and architecture. IEEE Transactions on Computers - Special Issue on Asynchronous Real-Time Systems, 51(8):916-930, Aug. 2002.
[29] P. Veríssimo and A. Casimiro. Event-driven support of real-time sentient objects. In Proceedings of the 8th IEEE International Workshop on Object-oriented Real-time Dependable Systems, Guadalajara, Mexico, Jan. 2003.
[30] P. Veríssimo and L.
Rodrigues.\nDistributed Systems for System Architects.\nKluwer Academic Publishers, 2001.\n39","lvl-3":"An Architectural Framework and a Middleware for Cooperating Smart Components *\nU.Lisboa U.Ulm U.Lisboa casim@di.fc.ul.pt kaiser@informatik.uni- pjv@di.fc.ul.pt ulm.de\nABSTRACT\nIn a future networked physical world, a myriad of smart sensors and actuators assess and control aspects of their environments and autonomously act in response to it.\nExamples range in telematics, traffic management, team robotics or home automation to name a few.\nTo a large extent, such systems operate proactively and independently of direct human control driven by the perception of the environment and the ability to organize respective computations dynamically.\nThe challenging characteristics of these applications include sentience and autonomy of components, issues of responsiveness and safety criticality, geographical dispersion, mobility and evolution.\nA crucial design decision is the choice of the appropriate abstractions and interaction mechanisms.\nLooking to the basic building blocks of such systems we may find components which comprise mechanical components, hardware and software and a network interface, thus these components have different characteristics compared to pure software components.\nThey are able to spontaneously disseminate information in response to events observed in the physical environment or to events received from other component via the network interface.\nLarger autonomous components may be composed recursively from these building blocks.\nThe paper describes an architectural framework and a middleware supporting a component-based system and an integrated view on events-based communication comprising the real world events and the events generated in the system.\nIt starts by an outline of the component-based system construction.\nThe generic event architecture GEAR is introduced which describes the event-based interaction between the components via a 
generic event layer.\nThe generic event layer hides the different communication channels including * This work was partially supported by the EC, through project IST-2000-26031 (CORTEX), and by the FCT, through the Large-Scale Informatic Systems Laboratory (LaSIGE) and project POSI\/1999\/CHS \/ 33996 (DEFEATS).\nthe interactions through the environment.\nAn appropriate middleware is presented which reflects these needs and allows to specify events which have quality attributes to express temporal constraints.\nThis is complemented by the notion of event channels which are abstractions of the underlying network and allow to enforce quality attributes.\nThey are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination.\n1.\nINTRODUCTION\nIn recent years we have seen the continuous improvement of technologies that are relevant for the construction of distributed embedded systems, including trustworthy visual, auditory, and location sensing [11], communication and processing.\nWe believe that in a future networked physical world a new class of applications will emerge, composed of a myriad of smart sensors and actuators to assess and control aspects of their environments and autonomously act in response to it.\nThe anticipated challenging characteristics of these applications include autonomy, responsiveness and safety criticality, large scale, geographical dispersion, mobility and evolution.\nIn order to deal with these challenges, it is of fundamental importance to use adequate high-level models, abstractions and interaction paradigms.\nUnfortunately, when facing the specific characteristics of the target systems, the shortcomings of current architectures and middleware interaction paradigms become apparent.\nLooking to the basic building blocks of such systems we may find components which comprise mechanical parts, hardware, software and a network interface.\nHowever, classical 
event\/object models are usually software oriented and, as such, when trans\nported to a real-time, embedded systems setting, their harmony is cluttered by the conflict between, on the one side, send\/receive of \"software\" events (message-based), and on the other side, input\/output of \"hardware\" or \"real-world\" events, register-based.\nIn terms of interaction paradigms, and although the use of event-based models appears to be a convenient solution [10, 22], these often lack the appropriate support for non-functional requirements like reliability, timeliness or security.\nThis paper describes an architectural framework and a middleware, supporting a component-based system and an integrated view on event-based communication comprising the real world events and the events generated in the system.\nWhen choosing the appropriate interaction paradigm it is of fundamental importance to address the challenging issues of the envisaged sentient applications.\nUnlike classical approaches that confine the possible interactions to the application boundaries, i.e. 
to its components, we consider that the environment surrounding the application also plays a relevant role in this respect.\nTherefore, the paper starts by clarifying several issues concerning our view of the system, about the interactions that may take place and about the information flows.\nThis view is complemented by providing an outline of the component-based system construction and, in particular, by showing that it is possible to compose larger applications from basic components, following an hierarchical composition approach.\nThis provides the necessary background to introduce the Generic-Events Architecture (GEAR), which describes the event-based interaction between the components via a generic event layer while allowing the seamless integration of physical and computer information flows.\nIn fact, the generic event layer hides the different communication channels, including the interactions through the environment.\nAdditionally, the event layer abstraction is also adequate for the proper handling of the non-functional requirements, namely reliability and timeliness, which are particularly stringent in real-time settings.\nThe paper devotes particular attention to this issue by discussing the temporal aspects of interactions and the needs for predictability.\nAn appropriate middleware is presented which reflects these needs and allows to specify events which have quality attributes to express temporal constraints.\nThis is complemented by the notion of Event Channels (EC), which are abstractions of the underlying network while being abstracted by the event layer.\nIn fact, event channels play a fundamental role in securing the functional and non-functional (e.g. 
reliability and timeliness) properties of the envisaged applications, that is, in allowing the enforcement of quality attributes.\nThey are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination.\nThe paper is organized as follows.\nIn Section 3 we introduce the fundamental notions and abstractions that we adopt in this work to describe the interactions taking place in the system.\nThen, in Section 4, we describe the componentbased approach that allows composition of objects.\nGEAR is then described in Section 5 and Section 6 focuses on temporal aspects of the interactions.\nSection 7 describes the COSMIC middleware, which may be used to specify the interaction between sentient objects.\nA simple example to highlight the ideas presented in the paper appears in Section 8 and Section 9 concludes the paper.\n2.\nRELATED WORK\nOur work considers a wired physical world in which a very large number of autonomous components cooperate.\nIt is inspired by many research efforts in very different areas.\nEvent-based systems in general have been introduced to meet the requirements of applications in which entities spontaneously generate information and disseminate it [1, 25, 22].\nIntended for large systems and requiring quite complex infrastructures, these event systems do not consider stringent quality aspects like timeliness and dependability issues.\nSecondly, they are not created to support inter-operability between tiny smart devices with substantial resource constraints.\nIn [10] a real-time event system for CORBA has been introduced.\nThe events are routed via a central event server which provides scheduling functions to support the real-time requirements.\nSuch a central component is not available in an infrastructure envisaged in our system architecture and the developed middleware TAO (The Ace Orb) is quite complex and unsuitable to be directly integrated in smart devices.\nThere are 
efforts to implement CORBA for control networks, tailored to connect sensor and actuator components [15, 19].\nThey are targeted for the CAN-Bus [9], a popular network developed for the automotive industry.\nHowever, in these approaches the support for timeliness or dependability issues does not exist or is only very limited.\nA new scheme to integrate smart devices in a CORBA environment is proposed in [17] and has lead to the proposal of a standard by the Object Management Group (OMG) [26].\nSmart transducers are organized in clusters that are connected to a CORBA system by a gateway.\nThe clusters form isolated subnetworks.\nA special master node enforces the temporal properties in the cluster subnet.\nA CORBA gateway allows to access sensor data and write actuator data by means of an interface file system (IFS).\nThe basic structure is similar to the WAN-of-CANs structure which has been introduced in the CORTEX project [4].\nIslands of tight control may be realized by a control network and cooperate via wired or wireless networks covering a large number of these subnetworks.\nHowever, in contrast to the event channel model introduced in this paper, all communication inside a cluster relies on a single technical solution of a synchronous communication channel.\nSecondly, although the temporal behaviour of a single cluster is rigorously defined, no model to specify temporal properties for clusterto-CORBA or cluster-to-cluster interactions is provided.\n3.\nINFORMATION FLOW AND INTERACTION MODEL\n4.\nSENTIENT OBJECT COMPOSITION\n4.1 Component-based System Construction\n4.2 Encapsulation and Scoping\n5.\nA GENERIC EVENTS ARCHITECTURE\n5.1 Information Flow in GEAR\n6.\nTEMPORAL ASPECTS OF THE INTERACTIONS\n6.1 The Role of Smart Sensors and Actuators\n7.\nAN EVENT MODEL AND MIDDLEWARE FOR COOPERATING SMART DEVICES\n7.1 The Architecture of the COSMIC Middleware\n7.2 Status of COSMIC\n8.\nAN ILLUSTRATIVE EXAMPLE\n9.\nCONCLUSION AND FUTURE WORK\nThe paper addresses 
problems of building large distributed systems interacting with the physical environment and being composed from a huge number of smart components.\nWe cannot assume that the network architecture in such a system is homogeneous.\nRather multiple \"edge -\" networks are fused to a hierarchical, heterogeneous wide area network.\nThey connect the tiny sensors and actuators perceiving the environment and providing sentience to the application.\nAdditionally, mobility and dynamic deployment of components require the dynamic interaction without fixed, a priori known addressing and routing schemes.\nThe work presented in the paper is a contribution towards the seamless interaction in such an environment which should not be restricted by technical obstacles.\nRather it should be possible to control the flow of information by explicitly specifying functional and temporal dissemination constraints.\nThe paper presented the general model of a sentient object to describe composition, encapsulation and interaction in such an environment and developed the Generic Event Architecture GEAR which integrates the interaction through the environment and the network.\nWhile appropriate abstractions and interaction models can hide the functional heterogeneity of the networks, it is impossible to hide the quality differences.\nTherefore, one of the main concerns is to define temporal properties in such an open infrastructure.\nThe notion of an event channel has been introduced which allows to specify quality aspects explicitly.\nThey can be verified at subscription and define a boundary for event dissemination.\nThe COSMIC middleware is a first attempt to put these concepts into operation.\nCOSMIC allows the interoperability of tiny components over multiple network boundaries and supports the definition of different real-time event channel classes.\nThere are many open questions that emerged from our work.\nOne direction of future research will be the inclusion of real-world communication 
channels established between sensors and actuators in the temporal analysis, and the ordering of such events in a cause-effect chain. Additionally, the provision of timing-failure detection for the adaptation of interactions will be a focus of our research. To reduce network traffic and disseminate only those events that subscribers are really interested in and that have a chance to arrive in a timely manner, the encapsulation and scoping schemes have to be transformed into corresponding multi-level filtering rules. The event attributes which describe aspects of the context and temporal constraints for the dissemination will be exploited for this purpose. Finally, it is intended to integrate the results into the COSMIC middleware to enable experimental assessment.
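The central mechanism summarized above, an event channel whose quality attributes are declared up front and verified when a subscriber attaches, can be sketched as follows. This is a minimal illustration under stated assumptions: the names (`QoS`, `Event`, `EventChannel`) and the single-latency admission test are hypothetical and do not reflect COSMIC's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass(frozen=True)
class QoS:
    """Temporal dissemination constraints attached to a channel (illustrative)."""
    max_latency_ms: float          # end-to-end dissemination bound
    min_period_ms: float = 0.0     # minimum inter-arrival time of events

@dataclass
class Event:
    subject: str
    payload: dict
    context: dict = field(default_factory=dict)  # e.g. location, timestamp

class EventChannel:
    """Subject-based channel; quality attributes are fixed at creation and
    checked at subscription time, so the channel defines a boundary for
    event dissemination."""

    def __init__(self, subject: str, qos: QoS):
        self.subject = subject
        self.qos = qos
        self._subscribers: List[Callable[[Event], None]] = []

    def subscribe(self, handler: Callable[[Event], None], required: QoS) -> None:
        # Admission test: the channel must guarantee at least the latency
        # the subscriber requires; otherwise the subscription is refused.
        if self.qos.max_latency_ms > required.max_latency_ms:
            raise ValueError("channel cannot guarantee the required latency")
        self._subscribers.append(handler)

    def publish(self, event: Event) -> None:
        # Dissemination stays within the boundary defined by the channel.
        for handler in self._subscribers:
            handler(event)

# Minimal usage: a 10 ms channel admits a subscriber that tolerates 20 ms,
# but refuses one that demands 5 ms.
channel = EventChannel("vehicle.speed", QoS(max_latency_ms=10.0))
received: List[Event] = []
channel.subscribe(received.append, required=QoS(max_latency_ms=20.0))
channel.publish(Event("vehicle.speed", {"kmh": 42}))
```

The design choice mirrored here is that feasibility is assessed once, at subscription, rather than per event, which is what makes resource reservation prior to interaction possible.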
An Architectural Framework and a Middleware for Cooperating Smart Components*

U. Lisboa (casim@di.fc.ul.pt), U. Ulm (kaiser@informatik.uni-ulm.de), U. Lisboa (pjv@di.fc.ul.pt)

* This work was partially supported by the EC, through project IST-2000-26031 (CORTEX), and by the FCT, through the Large-Scale Informatic Systems Laboratory (LaSIGE) and project POSI/1999/CHS/33996 (DEFEATS).

ABSTRACT

In a future networked physical world, a myriad of smart sensors and actuators assess and control aspects of their environments and autonomously act in response to them. Examples range from telematics, traffic management and team robotics to home automation, to name a few. To a large extent, such systems operate proactively and independently of direct human control, driven by the perception of the environment and the ability to organize the respective computations dynamically. The challenging characteristics of these applications include sentience and autonomy of components, issues of responsiveness and safety criticality, geographical dispersion, mobility and evolution. A crucial design decision is the choice of the appropriate abstractions and interaction mechanisms. Looking at the basic building blocks of such systems, we may find components which comprise mechanical parts, hardware, software and a network interface; these components thus have different characteristics compared to pure software components. They are able to spontaneously disseminate information in response to events observed in the physical environment or to events received from other components via the network interface. Larger autonomous components may be composed recursively from these building blocks. The paper describes an architectural framework and a middleware supporting a component-based system and an integrated view on event-based communication comprising the real-world events and the events generated in the system. It starts with an outline of the component-based system construction. The generic event architecture GEAR is introduced, which describes the event-based interaction between the components via a generic event layer. The generic event layer hides the different communication channels, including the interactions through the environment. An appropriate middleware is presented which reflects these needs and allows the specification of events which have quality attributes to express temporal constraints. This is complemented by the notion of event channels, which are abstractions of the underlying network and allow the enforcement of quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination.

1. INTRODUCTION

In recent years we have seen the continuous improvement of technologies that are relevant for the construction of distributed embedded systems, including trustworthy visual, auditory, and location sensing [11], communication and processing. We believe that in a future networked physical world a new class of applications will emerge, composed of a myriad of smart sensors and actuators that assess and control aspects of their environments and autonomously act in response to them. The anticipated challenging characteristics of these applications include autonomy, responsiveness and safety criticality, large scale, geographical dispersion, mobility and evolution. In order to deal with these challenges, it is of fundamental importance to use adequate high-level models, abstractions and interaction paradigms. Unfortunately, when facing the specific characteristics of the target systems, the shortcomings of current architectures and middleware interaction paradigms become apparent. Looking at the basic building blocks of such systems, we may find components which comprise mechanical parts, hardware, software and a network interface. However, classical event/object models are usually software oriented and, as such, when transported to a real-time, embedded-systems setting, their harmony is cluttered by the conflict between, on the one side, send/receive of "software" events (message-based), and on
the other side, input/output of "hardware" or "real-world" events (register-based). In terms of interaction paradigms, and although the use of event-based models appears to be a convenient solution [10, 22], these often lack appropriate support for non-functional requirements like reliability, timeliness or security. This paper describes an architectural framework and a middleware supporting a component-based system and an integrated view on event-based communication comprising the real-world events and the events generated in the system.

When choosing the appropriate interaction paradigm, it is of fundamental importance to address the challenging issues of the envisaged sentient applications. Unlike classical approaches that confine the possible interactions to the application boundaries, i.e. to its components, we consider that the environment surrounding the application also plays a relevant role in this respect. Therefore, the paper starts by clarifying several issues concerning our view of the system, the interactions that may take place, and the information flows. This view is complemented by an outline of the component-based system construction and, in particular, by showing that it is possible to compose larger applications from basic components, following a hierarchical composition approach. This provides the necessary background to introduce the Generic-Events Architecture (GEAR), which describes the event-based interaction between the components via a generic event layer while allowing the seamless integration of physical and computer information flows. In fact, the generic event layer hides the different communication channels, including the interactions through the environment. Additionally, the event-layer abstraction is also adequate for the proper handling of the non-functional requirements, namely reliability and timeliness, which are particularly stringent in real-time settings. The paper devotes particular attention to this issue by discussing the temporal aspects of interactions and the need for predictability. An appropriate middleware is presented which reflects these needs and allows the specification of events which have quality attributes to express temporal constraints. This is complemented by the notion of Event Channels (ECs), which are abstractions of the underlying network while themselves being abstracted by the event layer. In fact, event channels play a fundamental role in securing the functional and non-functional (e.g. reliability and timeliness) properties of the envisaged applications, that is, in allowing the enforcement of quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination.

The paper is organized as follows. In Section 3 we introduce the fundamental notions and abstractions that we adopt in this work to describe the interactions taking place in the system. Then, in Section 4, we describe the component-based approach that allows composition of objects. GEAR is then described in Section 5, and Section 6 focuses on temporal aspects of the interactions. Section 7 describes the COSMIC middleware, which may be used to specify the interaction between sentient objects. A simple example to highlight the ideas presented in the paper appears in Section 8, and Section 9 concludes the paper.

2. RELATED WORK

Our work considers a wired physical world in which a very large number of autonomous components cooperate. It is inspired by many research efforts in very different areas. Event-based systems in general have been introduced to meet the requirements of applications in which entities spontaneously generate information and disseminate it [1, 25, 22]. Intended for large systems and requiring quite complex infrastructures, these event systems do not consider stringent quality aspects like timeliness and dependability. Secondly, they were not created to support interoperability between tiny smart devices with substantial resource constraints. In [10] a real-time event system for CORBA has been introduced. The events are routed via a central event server which provides scheduling functions to support the real-time requirements. Such a central component is not available in the infrastructure envisaged in our system architecture, and the middleware developed there, TAO (The ACE ORB), is quite complex and unsuitable for direct integration in smart devices. There are efforts to implement CORBA for control networks, tailored to connect sensor and actuator components [15, 19]. They are targeted at the CAN-Bus [9], a popular network developed for the automotive industry. However, in these approaches the support for timeliness or dependability issues does not exist or is only very limited. A new scheme to integrate smart devices in a CORBA environment is proposed in [17] and has led to the proposal of a standard by the Object Management Group (OMG) [26]. Smart transducers are organized in clusters that are connected to a CORBA system by a gateway. The clusters form isolated subnetworks. A special master node enforces the temporal properties in the cluster subnet. A CORBA gateway allows access to sensor data and writing of actuator data by means of an interface file system (IFS). The basic structure is similar to the WAN-of-CANs structure which has been introduced in the CORTEX project [4]. Islands of tight control may be realized by a control network and cooperate via wired or wireless networks covering a large number of these subnetworks. However, in contrast to the event channel model introduced in this paper, all communication inside a cluster relies on a single technical solution of a synchronous communication channel. Secondly, although the temporal behaviour of a single cluster is rigorously defined, no model to specify temporal properties for cluster-to-CORBA or cluster-to-cluster interactions is
provided.

3. INFORMATION FLOW AND INTERACTION MODEL

In this paper we consider a component-based system model that incorporates previous work developed in the context of the IST CORTEX project [5]. As mentioned above, a fundamental idea underlying the approach is that applications can be composed of a large number of smart components that are able to sense their surrounding environment and interact with it. These components are referred to as sentient objects, a metaphor elaborated in CORTEX and inspired by the generic concept of sentient computing introduced in [12]. Sentient objects accept input events from a variety of different sources (including, but not restricted to, sensors), process them, and produce output events, whereby they actuate on the environment and/or interact with other objects. Therefore, the following kinds of interactions can take place in the system:

Environment-to-object interactions: correspond to a flow of information from the environment to application objects, reporting on the state of the former and/or notifying about events taking place therein.

Object-to-object interactions: correspond to a flow of information among sentient objects, serving two purposes. The first is related to complementing the assessment of each individual object about the state of the surrounding space. The second is related to collaboration, in which the object tries to influence other objects into contributing to a common goal, or into reacting to an unexpected situation.

Object-to-environment interactions: correspond to a flow of information from an object to the environment, with the purpose of forcing a change in the state of the latter.

Before continuing, we need to clarify a few issues with respect to these possible forms of interaction. We consider that the environment can be a producer or a consumer of information while interacting with sentient objects. The environment is the real (physical) world surrounding an object, not necessarily close to it or limited to certain boundaries. Quite clearly, the information produced by the environment corresponds to the physical representation of real-time entities, typical examples of which include temperature, distance or the state of a door. On the other hand, actuation on the environment implies the manipulation of these real-time entities, like increasing the temperature (applying more heat), changing the distance (applying some movement) or changing the state of the door (closing or opening it). The required transformations between system representations of these real-time entities and their physical representations are accomplished, generically, by sensors and actuators. We further consider that there may exist dumb sensors and actuators, which interact with the objects by disseminating or capturing raw transducer information, and smart sensors and actuators, with enhanced processing capabilities, capable of "speaking" some more elaborate "event dialect" (see Sections 5 and 6.1). Interaction with the environment is therefore done through sensors and actuators, which may or may not be part of sentient objects, as discussed in Section 4.2. States or state changes in the environment are considered as events, captured by sensors (in the environment or within sentient objects) and further disseminated to other potentially interested sentient objects in the system. In consequence, it is quite natural to base the communication and interaction among sentient objects and with the environment on an event-based communication model. Moreover, typical properties of event-based models, such as anonymous and non-blocking communication, are highly desirable in systems where sentient objects can be mobile and where interactions are naturally very dynamic. A distinguishing aspect of our work from many existing approaches is that we consider that sentient objects may communicate with each other indirectly through the environment, when they act on it. Thus the environment constitutes an interaction and communication channel and is in the control and awareness loop of the objects. In other words, when a sentient object actuates on the environment, it will be able to observe the resulting state changes by means of events captured by the sensors. Clearly, other objects might capture the same events as well, thus establishing the above-mentioned indirect communication path. In systems that involve interactions with the environment it is very important to consider the possibility of communication through the environment. It has been shown that the hidden channels developing through the latter (e.g., feedback loops) may hinder software-based algorithms that ignore them [30]. Therefore, any solution to the problem requires the definition of convenient abstractions and appropriate architectural constructs. On the other hand, in order to deal with the information flow through the whole computer system and environment in a seamless way, handling "software" and "hardware" events uniformly, it is also necessary to find adequate abstractions. As discussed in Section 5, the Generic-Events Architecture introduces the concept of Generic Event and an Event Layer abstraction which aim at dealing, among others, with these issues.

4. SENTIENT OBJECT COMPOSITION

In this section we analyze the most relevant issues related to the sentient object paradigm and the construction of systems composed of sentient objects.

4.1 Component-based System Construction

Sentient objects can take several different forms: they can simply be software-based components, but they can also comprise mechanical and/or hardware parts, amongst which the very sensorial apparatus that substantiates "sentience", mixed with software components to accomplish their task. We refine this notion by considering a sentient object as an encapsulating entity, a component with internal logic and active processing elements,
able to receive, transform and produce new events. This interface hides the internal hardware/software structure of the object, which may be complex, and shields the system from the low-level functional and temporal details of controlling a specific sensor or actuator. Furthermore, given the inherent complexity of the envisaged applications, the number of simultaneous input events and the internal size of sentient objects may become too large and difficult to handle. Therefore, it should be possible to consider the hierarchical composition of sentient objects, so that the application logic can be separated across as few or as many of these objects as necessary. On the other hand, composition of sentient objects should normally be constrained by the actual hardware components' structure, preventing the arbitrary composition of sentient objects. This is illustrated in Figure 1, where a sentient object is internally composed of a few other sentient objects, each of them consuming and producing events, some of which are only propagated internally. Observing the figure, and recalling our previous discussion about the possible interactions, we identify all of them here: an object-to-environment interaction occurs between the object controlling a WLAN transmitter and some WLAN receiver in the environment; an environment-to-object interaction takes place when the object responsible for GPS signal reception uses the information transmitted by the satellites; finally, explicit object-to-object interactions occur internally to the container object, through an internal communication network.

Figure 1: Component-aware sentient object composition.

Additionally, it is interesting to observe that implicit communication can also occur, whether the physical feedback develops through the environment internal to the container object (as depicted) or through the environment external to this object. However, there is a subtle difference between the two cases. While in the former the feedback can only be perceived by objects internal to the container, bounding the extent to which consistency must be ensured, such bounds do not exist in the latter. In fact, the notion of a sentient object as an encapsulating entity may serve other purposes (e.g., the confinement of feedback and of the propagation of events), beyond the mere hierarchical composition of objects. To give a more concrete example of such component-aware object composition, we consider a scenario of cooperating robots. Each robot is made of several components, corresponding, for instance, to axis and manipulator controllers. Together with the control software, each of these controllers may be a sentient object. On the other hand, a robot itself is a sentient object, composed of the objects materialized by the controllers, and of the environment internal to its own structure, or body. This means that it should be possible to define cooperation activities using the events produced by robot sentient objects, without the need to know the internal structure of the robots, or the events produced by body objects or by smart sensors within the body. From an engineering point of view, however, this also means that a robot sentient object may have to generate new events that reflect its internal state, which requires the definition of a gateway to bridge the internal and external environments.

4.2 Encapsulation and Scoping

An important question now is how to represent and disseminate events in a large-scale networked world. As we have seen above, any event generated by a sentient object could, in principle, be visible anywhere in the system and thus received by any other sentient object. However, there are substantial obstacles to such universal interactions, originating from the heterogeneity of components in such a large-scale setting. Firstly, the components may have severe performance constraints, particularly because we want to integrate smart sensors and actuators in such an architecture. Secondly, the bandwidth of the participating networks may vary largely. Such networks may be low-power, low-bandwidth fieldbuses, or more powerful wireless networks, as well as high-speed backbones. Thirdly, the networks may have widely different reliability and timeliness characteristics. Consider a platoon of cooperating vehicles. Inside a vehicle there may be a fieldbus like CAN [8, 9], TTP/A [17] or LIN [20], with a comparatively low bandwidth. On the other hand, the vehicles communicate with others in the platoon via a direct wireless link. Finally, there may be multiple platoons of vehicles which are coordinated by an additional wireless network layer. At the abstraction level of sentient objects, such heterogeneity is reflected by the notion of body versus environment. At the network level, we assume the WAN-of-CANs structure [27] to model the different networks. The notion of body and environment is derived from the recursively defined component-based object model. A body is similar to a cell membrane and represents a quality-of-service container for the sentient objects inside. At the network level, it may be associated with the components coupled by a certain CAN. A CAN defines the dissemination quality which can be expected by the cooperating objects. In the above example, a vehicle may be a sentient object, whose body is composed of the respective lower-level objects (sensors and actuators) which are connected by the internal network (see Figure 1). Correspondingly, the platoon can itself be seen as an object composed of a collection of cooperating vehicles, its body being the environment encapsulated by the platoon zone. At the network level, the wireless network represents the respective CAN. However, several platoons, united by their CANs, may interact with each other and with objects further away, through some wider-range, possibly fixed networking substrate; hence the concept of WAN-of-CANs. The
notions of body-environment and WAN-of-CANs are very useful when defining interaction properties across such boundaries.\nTheir introduction stems from our belief that a single mechanism to provide quality measures for interactions is not appropriate.\nInstead, a high-level construct for interaction across boundaries is needed which allows the quality of dissemination to be specified and exploits the knowledge about body and environment to assess the feasibility of quality constraints.\nAs we will see in the following section, the notion of an event channel represents this construct in our architecture.\nIt disseminates events and allows the network-independent specification of quality attributes.\nThese attributes must be mapped to the respective properties of the underlying network structure.\n5.\nA GENERIC EVENTS ARCHITECTURE\nIn order to successfully apply event-based object-oriented models, addressing the challenges enumerated in the introduction of this paper, it is necessary to use adequate architectural constructs, which allow the enforcement of fundamental properties such as timeliness or reliability.\nWe propose the Generic-Events Architecture (GEAR), depicted in Figure 2, which we briefly describe in what follows (for a more detailed description please refer to [29]).\nThe L-shaped structure is crucial to ensure some of the properties described.\nEnvironment: The physical surroundings, remote and close, solid and ethereal, of sentient objects.\nFigure 2: Generic-Events architecture.\nBody: The physical embodiment of a sentient object (e.g., the hardware where a mechatronic controller resides, the physical structure of a car).\nNote that due to the compositional approach taken in our model, part of what is \"environment\" to a smaller object seen individually becomes \"body\" for a larger, containing object.\nIn fact, the body is the \"internal environment\" of the object.\nThis architecture layering allows composition to take place seamlessly, in what
concerns information flow.\nInside a body there may also be implicit knowledge which can be exploited to make interaction more efficient, such as knowledge about the number of cooperating entities, the existence of a specific communication network, or the simple fact that all components are co-located and thus the respective events do not need to specify location in their context attributes.\nSuch intrinsic information is not available outside a body and, therefore, more explicit information has to be carried by an event.\nTranslation Layer: The layer responsible for transforming physical events between their native form and the event channel dialect, between environment\/body and an event channel.\nEssentially, it performs observation and actuation operations on the lower side, and exchanges event descriptions on the upper side.\nOn the lower side this layer may also interact with dumb sensors or actuators, therefore \"talking\" the language of the specific device.\nThese interactions are done through operational networks (hence the antenna symbol in the figure).\nEvent Layer: The layer responsible for event propagation in the whole system, through several Event Channels (ECs).\nIn concrete terms, this layer is a kind of middleware that provides important event-processing services which are crucial for any realistic event-based system.\nFor example, some of the services that imply the processing of events may include publishing, subscribing, discrimination (zoning, filtering, fusion, tracing), and queuing.\nCommunication Layer: The layer responsible for \"wrapping\" events (as a matter of fact, event descriptions in EC dialect) into \"carrier\" event-messages, to be transported to remote places.\nFor example, a sensing event generated by a smart sensor is wrapped in an event-message and disseminated, to be caught by whoever is concerned.\nThe same holds for an actuation event produced by a sentient object, to be delivered to a remote smart actuator.\nLikewise,
this may apply to an event-message from one sentient object to another.\nDumb sensors and actuators do not send event-messages, since they are unable to understand the EC dialect (they have neither an event layer nor a communication layer--they communicate, if needed, through operational networks).\nRegular Network: This is represented in the horizontal axis of the block diagram by the communication layer, which encompasses the usual LAN, TCP\/IP, and real-time protocols, desirably augmented with reliable and\/or ordered broadcast and other protocols.\nThe GEAR introduces some innovative ideas in distributed systems architecture.\nWhile serving an object model based on the production and consumption of generic events, it treats events produced by several sources (environment, body, objects) in a homogeneous way.\nThis is possible due to the use of a common basic dialect for talking about events and due to the existence of the translation layer, which performs the necessary translation between the physical representation of a real-time entity and the EC-compliant format.\nCrucial to the architecture is the event layer, which uses event channels to propagate events through regular network infrastructures.\nThe event layer is realized by the COSMIC middleware, as described in Section 7.\n5.1 Information Flow in GEAR\nThe flow of information (external environment and computational part) is seamlessly supported by the L-shaped architecture.\nIt occurs in a number of different ways, which demonstrates the expressiveness of the model with regard to the necessary forms of information encountered in real-time cooperative and embedded systems.\nSmart sensors produce events which report on the environment.\nBody sensors produce events which report on the body.\nThey are disseminated by the local event layer module, on an event channel (EC) propagated through the regular network, to any relevant remote event layer modules where entities have shown an interest in them, normally,
sentient objects attached to the respective local event layer modules.\nSentient objects consume events they are interested in, process them, and produce other events.\nSome of these events are destined for other sentient objects.\nThey are published on an EC using the same EC dialect that serves, e.g., sensor-originated events.\nHowever, these events are semantically of a kind that is to be subscribed to by the relevant sentient objects, for example, the sentient objects composing a robot controller system, or, at a higher level, the sentient objects composing the actual robots in a cooperative application.\nSmart actuators, on the other hand, merely consume events produced by sentient objects, whereby they accept and execute actuation commands.\nAs an alternative to \"talking\" to other sentient objects, sentient objects can produce events of a lower level, for example, actuation commands on the body or environment.\nThey publish these exactly the same way: on an event channel through the local event layer representative.\nNow, if these commands are of concern to local actuator units (e.g., body, including internal operational networks), they are passed on to the local translation layer.\nIf they are of concern to a remote smart actuator, they are disseminated through the distributed event layer, to reach the former.\nIn any case, if they are also of interest to other entities, such as other sentient objects that wish to be informed of the actuation command, then they are also disseminated through the EC to these sentient objects.\nA key advantage of this architecture is that event-messages and physical events can be globally ordered, if necessary, since they all pass through the event layer.\nThe model also offers opportunities to solve a long-standing problem in real-time, computer control, and embedded systems: the inconsistency between the message passing and feedback loop information flow subsystems.\n6.\nTEMPORAL ASPECTS OF THE INTERACTIONS\nAny
interaction needs some form of predictability.\nIf safety-critical scenarios are considered, as is done in CORTEX, temporal aspects become crucial and have to be made explicit.\nThe problem is how to define temporal constraints and how to enforce them by appropriate resource usage in a dynamic ad-hoc environment.\nIn a system where interactions are spontaneous, it may also be necessary to determine temporal properties dynamically.\nTo do this, the respective temporal information must be stated explicitly and be available at run-time.\nSecondly, it is not always ensured that temporal properties can be fulfilled.\nIn these cases, adaptations and timing failure notification must be provided [2, 28].\nIn most real-time systems, the notion of a deadline is the prevailing scheme to express and enforce timeliness.\nHowever, a deadline only weakly reflects the temporal characteristics of the information which is handled.\nSecondly, a deadline often includes implicit knowledge about the system and the relations between activities.\nIn a rather well-defined, closed environment, it is possible to make such implicit assumptions and map these to execution times and deadlines.\nE.g.
the engineer knows how long a vehicle position can be used before the vehicle movement outdates this information.\nThus he maps this dependency between speed and position onto a deadline, which then ensures that the position error can be assumed to be bounded.\nIn an open environment, this implicit mapping is no longer possible, for the obvious reason that the relation between speed and position, and thus the error bound, cannot easily be reverse-engineered from a deadline.\nTherefore, our event model includes explicit quality attributes which allow the temporal attributes of every individual event to be specified.\nThis is of course an overhead compared to the use of implicit knowledge, but in a dynamic environment such information is needed.\nTo illustrate the problem, consider the example of the position of a vehicle.\nA position is a typical example of a (time, value) entity [30].\nThus, the position is useful if we can determine an error bound which is related to time, e.g. if we want a position error below 10 meters to establish a safety property between cooperating cars moving at 5 m\/sec, the position has a validity time of 2 seconds.\nIn a (time, value) entity we can trade time against the precision of the value.\nThis is known as value over time and time over value [18].\nOnce the time-value relation has been established and captured in event attributes, subscribers of this event can locally decide about the usefulness of the information.\nIn the GEAR architecture temporal validity is used to reason about safety properties in an event-based system [29].\nWe will briefly review the respective notions and see how they are exploited in our COSMIC event middleware.\nConsider the timeline of generating an event representing some real-time entity [18] from its occurrence to the notification of a certain sentient object (Figure 3).\nThe real-time entity is captured at the sensor interface of the system and has to be transformed into a form which can be treated by
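The speed-versus-error trade-off in the position example can be made concrete with a small sketch; the helper `position_validity_s` and its unit conventions are our own illustration, not part of the COSMIC API:

```c
#include <assert.h>

/* Hypothetical helper (not part of COSMIC): derive the temporal validity
 * of a position event from an application-level error bound and the
 * current speed, as in the example: 10 m at 5 m/s gives 2 s. */
double position_validity_s(double max_error_m, double speed_mps)
{
    if (speed_mps <= 0.0)
        return -1.0; /* stationary vehicle: validity effectively unbounded */
    return max_error_m / speed_mps;
}
```

A subscriber would compare the age of the event against this validity to decide locally whether the position is still usable.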
a computer.\nDuring the time interval t0 the sensor reads the real-time entity and a time stamp is associated with the respective value.\nThe derived (time, value) entity represents an observation.\nIt may be necessary to perform substantial local computations to derive application-relevant information from the raw sensor data.\nHowever, it should be noted that the time stamp of the observation is associated with the capture time and thus is independent of further signal processing and event generation.\nThis close relationship between capture time and the associated value is supported by the smart sensors described above.\nThe processed sensor information is assembled in an event data structure after ts to be published to an event channel.\nAs is described later, the event includes the time stamp of generation and the temporal validity as attributes.\nThe temporal validity is an application-defined measure for the expiration of a (time, value) entity.\nAs we explained in the example of a position above, it may vary depending on application parameters.\nTemporal validity is a more general concept than that of a deadline.\nIt is independent of a certain technical implementation of a system.\nWhile deadlines may be used to schedule the respective steps in event generation and dissemination, a temporal validity is an intrinsic property of a (time, value) entity carried in an event.\nA temporal validity allows reasoning about the usefulness of information and is beneficial even in systems in which timely dissemination of events cannot be enforced, because it enables timing failure detection at the event consumer.\nIt is obvious that deadlines or periods can be derived from the temporal validity of an event.\nTo set a deadline, knowledge of an implementation, worst-case execution times or message dissemination latencies is necessary.\nThus, in the timeline of Figure 3 every interval may have a deadline.\nEvent dissemination through soft real-time channels in COSMIC exploits the
temporal validity to define dissemination deadlines.\nQuality attributes can be defined, for instance, in terms of (validity interval, omission degree) pairs.\nThese allow the usefulness of the event to be characterized for a certain application, in a certain context.\nBecause of that, the quality attributes of an event clearly depend on higher-level issues, such as the nature of the sentient object or of the smart sensor that produced the event.\nFor instance, an event containing an indication of some vehicle speed must have different quality attributes depending on the kind of vehicle\nFigure 3: Event processing and dissemination.\nfrom which it originated, or depending on its current speed.\nThe same happens with the position event of the car example above, whose validity depends on the current speed and on a predefined required precision.\nHowever, since quality attributes are strictly related to the semantics of the application or, at least, to some high-level knowledge of the purpose of the system (from which the validity of the information can be derived), the definition of these quality attributes may be done by exploiting the information provided at the programming interface.\nTherefore, it is important to understand how the system programmer can specify non-functional requirements at the API, and how these requirements translate into quality attributes assigned to events.\nWhile temporal validity is identified as an intrinsic event property, which is exploited to decide on the usefulness of data at a certain point in time, it is still necessary to provide a communication facility which can disseminate the event before its validity has expired.\nIn a WAN-of-CANs network structure we have to cope with very different network characteristics and quality-of-service properties.\nTherefore, when crossing the network boundaries, the quality-of-service guarantees available in a certain network will be lost, and it will be very hard, costly and perhaps impossible to
achieve these properties in the next larger area of the WAN-of-CANs structure.\nCORTEX has a couple of abstractions to cope with this situation (network zones, body\/environment) which have been discussed above.\nFrom the temporal point of view, we now need a high-level abstraction, like the temporal validity of an individual event, to express our quality requirements for dissemination over the network.\nThe (bound, coverage) pair, introduced in relation with the TCB [28], seems to be an appropriate approach.\nIt considers the inherent uncertainty of networks and allows the quality of dissemination to be traded against the resources which are needed.\nIn relation with the event channel model discussed later, the (bound, coverage) pair allows the quality properties of an event channel to be specified independently of specific technical issues.\nGiven the typical environments in which sentient applications will operate, where it is difficult or even impossible to provide timeliness or reliability guarantees, we proposed an alternative way to handle non-functional application requirements, in relation with the TCB approach [28].\nThe proposed approach exploits intrinsic characteristics of applications, such as fail-safety or time-elasticity, in order to secure QoS specifications of the form (bound, coverage).\nInstead of constructing systems that rely on guaranteed bounds, the idea is to use (possibly changing) bounds that are secured with a constant probability throughout the execution.\nThis obviously requires an application to be able to adapt to changing conditions (and\/or changing bounds) or, if this is not possible, to be able to perform some safety procedures when the operational conditions degrade to an unbearable level.\nThe bounds we mentioned above refer essentially to timeliness bounds associated with the execution of local or distributed activities, or combinations thereof.\nFrom these bounds it is then possible to derive the quality attributes, in particular
validity intervals, that characterize the events published in the event channel.\n6.1 The Role of Smart Sensors and Actuators\nSmart devices encapsulate hardware, software and mechanical components and provide information and a set of well-specified functions which are closely related to the interaction with the environment.\nThe built-in computational components and the network interface enable the implementation of a well-defined high-level interface that does not just provide raw transducer data, but a processed, application-related set of events.\nMoreover, they exhibit autonomous, spontaneous behaviour.\nThey differ from general-purpose nodes because they are dedicated to a certain functionality which complies with their sensing and actuating capabilities, while a general-purpose node may execute any program.\nConcerning the sentient object model, smart sensors and actuators may be basic sentient objects themselves, consuming events from the real-world environment and producing the respective generic events for the system's event layer or, vice versa, consuming a generic event and converting it to a real-world event by an actuation.\nSmart components therefore constitute the periphery, i.e. the real-world interface of a more complex sentient object.\nThe model of sentient objects also constitutes the framework to build more complex \"virtual\" sensors by relating multiple (primary, i.e.
sensors which directly sense a physical entity) sensors.\nSmart components translate events of the environment to an appropriate form available at the event layer or, vice versa, transform a system event into an actuation.\nFor smart components we can assume that:\n\u2022 Smart components have dedicated resources to perform a specific function.\n\u2022 These resources are not used for other purposes during normal real-time operation.\n\u2022 No local temporal conflicts occur that will change the observable temporal behaviour.\n\u2022 The functions of a component can usually only be changed during a configuration procedure which is not performed when the component is involved in critical operations.\n\u2022 An observation of the environment as a (time, value) pair can be obtained with a bounded jitter in time.\nMany predictability and scheduling problems arise from the fact that very low-level timing behaviours have to be handled on a single processor.\nHere, temporal encapsulation of activities is difficult because of the possible side effects when sharing a single processor resource.\nConsider the control of a simple IR-range detector which is used for obstacle avoidance.\nDepending on its range and the speed of the vehicle, it has to be polled to prevent the vehicle from crashing into an obstacle.\nOn a single central processor, this critical activity has to be coordinated with many similar, possibly less critical functions.\nThis means that a very fine-grained schedule has to be derived based purely on the artifacts of the low-level device control.\nIn a smart sensor component, all this low-level timing behaviour can be optimized and encapsulated.\nThus we can assume temporal encapsulation similar to information hiding in the functional domain.\nOf course, there is still the problem of guaranteeing that an event will be disseminated and recognized in due time by the respective system components, but this relates to application-related events rather than the low-level
artifacts of device timing.\nThe main responsibility for providing timeliness guarantees is shifted to the event layer where these events are disseminated.\nSmart sensors thus lead to a network-centric system model.\nThe network constitutes the shared resource which has to be scheduled in a predictable way.\nThe COSMIC middleware introduced in the next section is an approach to provide predictable event dissemination for a network of smart sensors and actuators.\n7.\nAN EVENT MODEL AND MIDDLEWARE FOR COOPERATING SMART DEVICES\nAn event model and a middleware suitable for smart components must support timely and reliable communication and must also be resource-efficient.\nCOSMIC (COoperating Smart devices) is aimed at supporting the interaction between those components according to the concepts introduced so far.\nBased on the model of a WAN-of-CANs, we assume that the components are connected to some form of CAN, such as a fieldbus or a special wireless sensor network, which provides specific network properties.\nE.g.
a fieldbus developed for control applications usually includes mechanisms for predictable communication, while other networks only support best-effort dissemination.\nA gateway connects these CANs to the next level in the network hierarchy.\nThe event system should allow dynamic interaction over a hierarchy of such networks and comply with the overall CORTEX generic event model.\nEvents are typed information carriers and are disseminated in a publisher \/ subscriber style [24, 7], which is particularly suitable because it supports generative, anonymous communication [3] and does not create any artificial control dependencies between producers of information and its consumers.\nThis decoupling in space (no references or names of senders or receivers are needed for communication) and the flow decoupling (no control transfer occurs with a data transfer) are well known [24, 7, 14] and are crucial properties for maintaining the autonomy of components and dynamic interactions.\nIt is obvious that not all networks can provide the same QoS guarantees and, secondly, that applications may have widely differing requirements for event dissemination.\nAdditionally, when striving for predictability, resources have to be reserved and data structures must be set up before communication takes place.\nThus, these things cannot predictably be done on the fly while disseminating an event.\nTherefore, we introduced the notion of an event channel to cope with differing properties and requirements and to have an object to which we can assign resources and reservations.\nThe concept of an event channel is not new [10, 25]; however, it has not yet been used to reflect the properties of the underlying heterogeneous communication networks and mechanisms as described by the GEAR architecture.\nRather, existing event middleware allows the priorities or deadlines of events handled in an event server to be specified.\nEvent channels allow the communication properties to be specified at the level of the event system in a
fine-grained way.\nAn event channel is defined by: event channel := (subject, quality attribute list, handlers).\nThe subject determines the types of events which may be issued to the channel.\nThe quality attributes model the properties of the underlying communication network and dissemination scheme.\nThese attributes include latency specifications, dissemination constraints and reliability parameters.\nThe notion of zones, which represent a guaranteed quality of service in a subnetwork, supports this approach.\nOur goal is to handle the temporal specifications as (bound, coverage) pairs [28], orthogonal to the more technical questions of how to achieve a certain synchrony property of the dissemination infrastructure.\nCurrently, we support quality attributes of event channels in a CAN-Bus environment represented by explicit synchrony classes.\nThe COSMIC middleware maps the channel properties to lower-level protocols of the regular network.\nBased on our previous work on predictable protocols for the CAN-Bus, COSMIC defines an abstract network which provides hard, soft and non-real-time message classes [21].\nCorrespondingly, we distinguish three event channel classes according to their synchrony properties: hard real-time channels, soft real-time channels and non-real-time channels.\nHard real-time channels (HRTCs) guarantee event propagation within the defined time constraints in the presence of a specified number of omission faults.\nHRTCs are supported by a reservation scheme which is similar to the scheme used in time-triggered protocols like TTP [16, 31], TTP\/A [17], and TTCAN [8].\nHowever, a substantial advantage over a TDMA scheme is that, due to CAN-Bus properties, bandwidth which was reserved but is not needed by an HRTC can be used by less critical traffic [21].\nSoft real-time channels (SRTCs) exploit the temporal validity interval of events to derive deadlines for scheduling.\nThe validity interval defines the point in time after which an event
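A minimal sketch of the channel triple (subject, quality attributes, handlers) and the three synchrony classes might look as follows in C; all type and field names are our own illustration, not the actual COSMIC data structures:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch of an event channel descriptor; names are our own. */
typedef enum { CHANNEL_HRT, CHANNEL_SRT, CHANNEL_NRT } synchrony_class_t;

typedef struct {
    uint32_t validity_interval_ms;  /* part of a (validity, omission) pair */
    uint8_t  omission_degree;       /* tolerated omission faults */
    synchrony_class_t sync_class;   /* hard, soft or non-real-time */
} quality_attr_t;

typedef void (*event_handler_t)(const void *event_data, uint32_t len);

typedef struct {
    uint64_t        subject;   /* network-independent event type identifier */
    quality_attr_t  quality;   /* models the dissemination properties */
    event_handler_t handler;   /* subscriber notification callback */
} event_channel_t;
```

The middleware would map `quality` onto the message classes of the abstract network when the channel is announced or subscribed to.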
becomes temporally inconsistent.\nTherefore, in a real-time system an event is useless after this point and may be discarded.\nThe transmission deadline (DL) is defined as the latest point in time when a message has to be transmitted and is specified in a time interval which is derived from the expiration time: t_event_ready < DL < t_expiration - \u0394notification.\nt_expiration defines the point in time when the temporal validity expires.\n\u0394notification is the expected end-to-end latency, which includes the transfer time over the network and the time the event may be delayed by the local event handling in the nodes.\nAs said before, event deadlines are used to schedule the dissemination by SRTCs.\nHowever, deadlines may be missed in transient overload situations or due to arbitrary arrival times of events.\nOn the publisher side, the application's exception handler is called whenever the event deadline expires before event transmission.\nAt this point in time the event is also not expected to arrive at the subscriber side before the validity expires.\nTherefore, the event is removed from the sending queue.\nOn the subscriber side, the expiration time is used to schedule the delivery of the event.\nIf the event cannot be delivered by its expiration time, it is removed from the respective queues allocated by the COSMIC middleware.\nThis prevents the communication system from being loaded with outdated messages.\nNon-real-time channels do not assume any temporal specification and disseminate events in a best-effort manner.\nAn instance of an event channel is created locally whenever a publisher makes an announcement for publication or a subscriber subscribes for an event notification.\nWhen a publisher announces publication, the respective data structures of an event channel are created by the middleware.\nWhen a subscriber subscribes to an event channel, it may specify context attributes of an event which are used to filter events locally.\nE.g.
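The deadline window for soft real-time transmission translates directly into code; this is a sketch under our own naming, not COSMIC's implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Latest admissible transmission deadline for a soft real-time event:
 * the message must leave before t_expiration minus the expected
 * end-to-end latency (delta_notification). Illustrative sketch. */
uint32_t transmission_deadline(uint32_t t_expiration, uint32_t delta_notification)
{
    return t_expiration - delta_notification;
}

/* An event still queued at its deadline cannot reach the subscriber
 * before its validity expires, so it is purged from the sending queue. */
bool must_purge(uint32_t now, uint32_t t_expiration, uint32_t delta_notification)
{
    return now >= transmission_deadline(t_expiration, delta_notification);
}
```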
a subscriber may only be interested in events generated at a certain location.\nAdditionally, the subscriber specifies quality properties of the event channel.\nA more detailed description of the event channels can be found in [13].\nCurrently, COSMIC handles all event channels which disseminate events beyond the CAN network boundary as non-real-time event channels.\nThis is mainly because we use the TCP\/IP protocol to disseminate events over wireless links or to the standard Ethernet.\nHowever, there are a number of possible improvements which can easily be integrated in the event channel model.\nThe Timely Computing Base (TCB) [28] can be exploited for timing failure detection and thus would provide awareness for event dissemination in environments where timely delivery of events cannot be enforced.\nAdditionally, there are wireless protocols which can provide timely and reliable message delivery [6, 23] and may be exploited for the respective event channel classes.\nEvents are the information carriers which are exchanged between sentient objects through event channels.\nTo cope with the requirements of an ad-hoc environment, an event includes the description of the context in which it has been generated and quality attributes defining requirements for dissemination.\nThis is particularly important in an open, dynamic environment where an event may travel over multiple networks.\nAn event instance is specified as:\nA subject defines the type of the event and is related to the event contents.\nIt supports anonymous communication and is used to route an event.\nThe subject has to match the subject of the event channel through which the event is disseminated.\nAttributes are complementary to the event contents.\nThey describe individual functional and non-functional properties of the event.\nThe context attributes describe the environment in which the event has been generated, e.g.
a location, an operational mode or a time of occurrence.\nThe quality attributes specify timeliness and dependability aspects in terms of (validity interval, omission degree) pairs.\nThe validity interval defines the point in time after which an event becomes temporally inconsistent [18].\nAs described above, the temporal validity can be mapped to a deadline.\nHowever, usually a deadline is an engineering artefact which is used for scheduling, while the temporal validity is a general property of a (time, value) entity.\nIn an environment where a deadline cannot be enforced, a consumer of an event must eventually decide whether the event is still temporally consistent, i.e. represents a valid (time, value) entity.\n7.1 The Architecture of the COSMIC Middleware\nOn the architectural level, COSMIC distinguishes three layers, roughly depicted in Figure 4.\nTwo of them, the event layer and the abstract network layer, are implemented by the COSMIC middleware.\nThe event layer provides the API for the application and realizes the abstraction of events and event channels.\nThe abstract network implements real-time message classes and adapts the quality requirements to the underlying real network.\nAn event channel handler resides in every node.\nIt supports the programming interface and provides the necessary data structures for event-based communication.\nWhenever an object subscribes to a channel or a publisher announces a channel, the event channel handler is involved.\nIt initiates the binding of the channel's subject, which is represented by a network-independent unique identifier, to an address of the underlying abstract network to enable communication [14].\nThe event channel handler then tightly cooperates with the respective handlers of the abstract network layer to disseminate events or receive event notifications.\nIt should be noted that the QoS properties of the event layer in general depend on what the abstract network layer can provide.\nThus, it may not always be
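The consumer-side consistency decision can be sketched as a simple check against the capture time stamp and the validity interval carried in the event; the struct and names are our own illustration, not the COSMIC event format:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative timing data of a (time, value) entity carried in an event. */
typedef struct {
    uint32_t capture_time_ms;      /* time stamp taken at the sensor interface */
    uint32_t validity_interval_ms; /* quality attribute of the event */
} event_timing_t;

/* A consumer accepts the event only while it is temporally consistent. */
bool temporally_consistent(const event_timing_t *e, uint32_t now_ms)
{
    return now_ms < e->capture_time_ms + e->validity_interval_ms;
}
```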
possible to, e.g., support hard real-time event channels, because the abstract network layer cannot provide the respective guarantees.\nIn [13], we describe the protocols and services of the abstract network layer, particularly for the CAN-Bus.\nAs can be seen in Figure 4, the hard real-time (HRT) message class is supported by a dedicated handler which is able to provide time-triggered message dissemination.\nFigure 4: Architecture layers of COSMIC.\nThe HRT handler maintains the HRT message list, which contains an entry for each local HRT message to be sent.\nThe entry holds the parameters for the message, the activation status and the binding information.\nMessages are scheduled on the bus according to the HRT message calendar, which comprises the precise start time for each time slot allocated for a message.\nSoft real-time message queues order outgoing messages according to their transmission deadlines derived from the temporal validity interval.\nIf the transmission deadline is exceeded, the event message is purged from the queue.\nThe respective application is notified via the exception notification interface and can take actions like trying to publish the event again or publishing it to a channel of another class.\nIncoming event messages are ordered according to their temporal validity.\nIf an event message arrives, the respective applications are notified.\nAt the moment, an outdated message is deleted from the queue, and if the queue runs out of space, the oldest message is discarded.\nHowever, other policies are possible depending on event attributes and available memory space.\nNon-real-time messages are FIFO-ordered in a fixed-size circular buffer.\n7.2 Status of COSMIC\nThe goal of developing COSMIC was to provide a platform to seamlessly integrate tiny smart components in a large system.\nTherefore, COSMIC should also run on small, resource-constrained devices which are built around 16-bit or even 8-bit microcontrollers.\nThe distributed COSMIC
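The subscriber-side queue policy (drop outdated events; discard the oldest entry on overflow) can be sketched as follows; the fixed capacity and names are illustrative, and ordering by temporal validity is omitted for brevity:

```c
#include <assert.h>
#include <stdint.h>

#define QUEUE_CAP 4  /* illustrative small capacity */

typedef struct {
    uint32_t expiration_ms[QUEUE_CAP];
    int      len;
} srt_queue_t;

/* Returns 1 if the event was enqueued, 0 if it was already outdated. */
int srt_enqueue(srt_queue_t *q, uint32_t expiration_ms, uint32_t now_ms)
{
    if (now_ms >= expiration_ms)
        return 0;                      /* outdated: never enqueued */
    if (q->len == QUEUE_CAP) {         /* overflow: discard the oldest entry */
        for (int i = 1; i < q->len; i++)
            q->expiration_ms[i - 1] = q->expiration_ms[i];
        q->len--;
    }
    q->expiration_ms[q->len++] = expiration_ms;
    return 1;
}
```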
The distributed COSMIC middleware has been implemented and tested on various platforms.\nUnder RT-Linux, we support the real-time channels over the CAN bus as described above.\nThe RT-Linux version runs on Pentium processors and is currently being evaluated before we port it to a smart sensor or actuator.\nFor interoperability in a WAN-of-CANs environment, we only provide non real-time channels at the moment.\nThis version includes a gateway between the CAN bus and a TCP\/IP network.\nIt allows us to use a standard wireless 802.11 network.\nThe non real-time version of COSMIC is available on Linux, RT-Linux and on the micro-controller families C167 (Infineon) and 68HC908 (Motorola).\nBoth micro-controllers have an on-board CAN controller and thus do not require additional hardware components for the network.\nThe memory footprint of COSMIC is about 13 Kbyte on a C167 and slightly more on the 68HC908, where it fits into the on-board flash memory without problems.\nBecause only a few channels are required on such a smart sensor or actuator component, the amount of RAM (a scarce resource on many single-chip systems) needed to hold the dynamic data structures of a channel is low.\nThe COSMIC middleware makes it very easy to include new smart sensors in an existing system.\nIn particular, the application running on a smart sensor to condition and process the raw physical data need not be aware of any low-level, network-specific details.\nIt seamlessly interacts with other components of the system exclusively via event channels.\nThe demo example, briefly described in the next chapter, uses a distributed infrastructure of tiny smart sensors and actuators directly cooperating via event channels over heterogeneous networks.\n8.\nAN ILLUSTRATIVE EXAMPLE\nA simple example showing many important properties of the proposed system, in particular coordination through the environment and events disseminated over the network, is the demo of two cooperating robots depicted in Figure 5.\nEach robot
is equipped with smart distance sensors, speed sensors and acceleration sensors, and one of the robots (the \"guide\" (KURT2) in front (Figure 5)) has a tracking camera allowing it to follow a white line.\nThe robots form a WAN-of-CANs system in which their local CANs are interconnected via a wireless 802.11 network.\nCOSMIC provides the event layer for seamless interaction.\nThe \"blind\" robot (N.N.) searches for the guide randomly.\nWhenever the blind robot detects an obstacle (with its front distance sensors), it checks whether this may be the guide.\nFor this purpose, it dynamically subscribes to the event channel disseminating distance events from the rear distance sensors of the guide and compares these with the distance events from its own front sensors.\nIf the distance is approximately the same, it infers that it is really behind a guide.\nNow N.N. also subscribes to the event channels of the tracking camera and the speed sensors to follow the guide.\nFigure 5: Cooperating robots.\nThe demo application highlights the following properties of the system:\n1.\nDynamic interaction of robots which is not known in advance.\nIn principle, any two a priori unknown robots can cooperate.\nAll that publishers and subscribers have to know in order to interact dynamically in this environment is the subject of the respective event class.\nOne problem is to receive only the events of the robot which is closest.\nA robot identity does not help much to solve this problem.\nRather, the position of the event-generating entity, which is captured in the respective attributes, can be evaluated to filter the relevant event out of the event stream.\nA suitable wireless protocol which uses proximity to filter events has been proposed by Meier and Cahill [22] in the CORTEX project.\n2.\nInteraction through the environment.\nThe cooperation between the robots is controlled by sensing the distance between the robots.\nIf the guide detects that the distance grows, it slows down.\nConversely, if
the blind robot comes too close, it reduces its speed.\nThe local distance sensors produce events which are disseminated through a low-latency, highly predictable event channel.\nThe respective reaction time can be calculated as a function of the speed and the distance of the robots and defines a dynamic dissemination deadline for events.\nThus, the interaction through the environment secures the safety properties of the application, i.e. the follower must not crash into the guide and the guide must not lose the follower.\nAdditionally, the robots have remote subscriptions to the respective distance events, which are checked against the local sensor readings to validate that they are really following the guide detected by their local sensors.\nBecause there may be longer latencies and omissions, this check occasionally will not be possible.\nThe unavailability of the remote events will decrease the quality of interaction and probably slow down the robots, but will not affect the safety properties.\n3.\nCooperative sensing.\nThe blind robot subscribes to the events of the line tracking camera.\nThus it can \"see\" through the eye of the guide.\nBecause it knows the distance to the guide and the speed as well, it can foresee the necessary movements.\nThe proposed system provides the architectural framework for such a cooperation.\nThe respective sentient object controlling the actuation of the robot receives as input the position and orientation of the white line to be tracked.\nIn the case of the guide robot, this information is directly delivered as a body event with a low latency and a high reliability over the internal network.\nFor the follower robot, the information also comes via an event channel, but with different quality attributes.\nThese quality attributes are reflected in the event channel description.\nThe sentient object controlling the actuation of the follower is aware of the increased latency and higher probability of omission.
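The dynamic dissemination deadline mentioned under point 2 can be illustrated with a small calculation; the function name, its parameters and the safety margin are illustrative assumptions, not the paper's exact formula.

```python
def dissemination_deadline(distance_m, follower_speed, guide_speed,
                           safety_margin_m=0.2):
    """Illustrative deadline computation: a distance event must reach the
    follower's controller before the remaining safety gap between the
    robots is consumed at the current closing speed (all units SI)."""
    closing_speed = follower_speed - guide_speed   # m/s; > 0 when catching up
    if closing_speed <= 0:
        return float("inf")    # not closing in: no hard dissemination deadline
    return (distance_m - safety_margin_m) / closing_speed   # seconds
```

For example, a follower 1.2 m behind the guide and closing at 0.5 m/s would, under these assumptions, need distance events delivered within 2 seconds.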
9.\nCONCLUSION AND FUTURE WORK\nThe paper addresses the problems of building large distributed systems which interact with the physical environment and are composed of a huge number of smart components.\nWe cannot assume that the network architecture in such a system is homogeneous.\nRather, multiple \"edge\" networks are fused into a hierarchical, heterogeneous wide-area network.\nThey connect the tiny sensors and actuators which perceive the environment and provide sentience to the application.\nAdditionally, mobility and dynamic deployment of components require dynamic interaction without fixed, a priori known addressing and routing schemes.\nThe work presented in the paper is a contribution towards seamless interaction in such an environment, which should not be restricted by technical obstacles.\nRather, it should be possible to control the flow of information by explicitly specifying functional and temporal dissemination constraints.\nThe paper presented the general model of a sentient object to describe composition, encapsulation and interaction in such an environment, and developed the Generic Event Architecture (GEAR), which integrates the interaction through the environment and the network.\nWhile appropriate abstractions and interaction models can hide the functional heterogeneity of the networks, it is impossible to hide the quality differences.\nTherefore, one of the main concerns is to define temporal properties in such an open infrastructure.\nThe notion of an event channel has been introduced, which allows quality aspects to be specified explicitly.\nThey can be verified at subscription time and define a boundary for event dissemination.\nThe COSMIC middleware is a first attempt to put these concepts into operation.\nCOSMIC enables tiny components to interoperate across multiple network boundaries and supports the definition of different real-time event channel classes.\nMany open questions have emerged from our work.\nOne direction of
future research will be the inclusion of real-world communication channels established between sensors and actuators in the temporal analysis, and the ordering of such events in a cause-effect chain.\nAdditionally, the provision of timing failure detection for the adaptation of interactions will be a focus of our research.\nTo reduce network traffic and disseminate to subscribers only those events in which they are really interested and which have a chance to arrive in time, the encapsulation and scoping schemes have to be transformed into corresponding multi-level filtering rules.\nThe event attributes which describe aspects of the context and temporal constraints for the dissemination will be exploited for this purpose.\nFinally, it is intended to integrate the results into the COSMIC middleware to enable experimental assessment.","keyphrases":["smart sensor","sensor and actuat","gener event architectur","tempor constraint","event channel","event channel","event-base system","corba","real-time entiti","sentient object","dissemin qualiti","cortex","gear architectur","tempor valid","soft real-time channel","cosmic middlewar","event-base commun","sentient comput","componentbas system","middlewar architectur"],"prmu":["P","P","P","P","P","P","M","U","U","U","R","U","R","M","M","M","M","M","M","R"]} {"id":"J-49","title":"Information Markets vs. Opinion Pools: An Empirical Comparison","abstract":"In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people's subjective probability judgments on 2003 US National Football League games and compare with the market probabilities given by two different information markets on exactly the same events. We combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions. Prices in information markets are used to derive market predictions.
Our results show that, at the same time point ahead of the game, information markets provide as accurate predictions as pooled expert assessments. In screening pooled expert predictions, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions. The results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods.","lvl-1":"Information Markets vs. Opinion Pools: An Empirical Comparison Yiling Chen Chao-Hsien Chu Tracy Mullen School of Information Sciences & Technology The Pennsylvania State University University Park, PA 16802 {ychen|chu|tmullen}@ist.psu.edu David M. Pennock Yahoo! Research Labs 74 N. Pasadena Ave, 3rd Floor Pasadena, CA 91103 pennockd@yahoo-inc.com ABSTRACT In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation.\nWe leverage a unique data source of almost 2000 people's subjective probability judgments on 2003 US National Football League games and compare with the market probabilities given by two different information markets on exactly the same events.\nWe combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions.\nPrices in information markets are used to derive market predictions.\nOur results show that, at the same time point ahead of the game, information markets provide as accurate predictions as pooled expert assessments.\nIn screening pooled expert predictions, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than
linear aggregation functions.\nThe results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods.\nCategories and Subject Descriptors J.4 [Computer Applications]: Social and Behavioral Sciences-economics General Terms Economics, Performance 1.\nINTRODUCTION Forecasting is a ubiquitous endeavor in human societies.\nFor decades, scientists have been developing and exploring various forecasting methods, which can be roughly divided into statistical and non-statistical approaches.\nStatistical approaches require not only the existence of enough historical data but also that past data contains valuable information about the future event.\nWhen these conditions cannot be met, non-statistical approaches that rely on judgmental information about the future event could be better choices.\nOne widely used non-statistical method is to elicit opinions from experts.\nSince experts are not generally in agreement, many belief aggregation methods have been proposed to combine expert opinions and form a single prediction.\nThese belief aggregation methods are called opinion pools, which have been extensively studied in statistics [20, 24, 38] and management sciences [8, 9, 30, 31], and applied in many domains such as group decision making [29] and risk analysis [12].\nWith the fast growth of the Internet, information markets have recently emerged as a promising non-statistical forecasting tool.\nInformation markets (sometimes called prediction markets, idea markets, or event markets) are markets designed for aggregating information and making predictions about future events.\nTo form the predictions, information markets tie payoffs of securities to outcomes of events.\nFor example, in an information market to predict the result of a US professional National Football League (NFL) game, say New England vs. Carolina, the security pays a certain amount of money per share to its
holders if and only if New England wins the game.\nOtherwise, it pays off nothing.\nThe security price before the game reflects the consensus expectation of market traders about the probability of New England winning the game.\nSuch markets are becoming very popular.\nThe Iowa Electronic Markets (IEM) [2] are real-money futures markets to predict economic and political events such as elections.\nThe Hollywood Stock Exchange (HSX) [3] is a virtual (play-money) exchange for trading securities to forecast future box office proceeds of new movies, the outcomes of entertainment awards, etc.\nTradeSports.com [7], a real-money betting exchange registered in Ireland, hosts markets for sports, political, entertainment, and financial events.\nThe Foresight Exchange (FX) [4] allows traders to wager play money on unresolved scientific questions or other claims of public interest, and NewsFutures.com's World News Exchange [1] has 58 popular sports and financial betting markets, also grounded in a play-money currency.\nDespite the popularity of information markets, one of the most important questions to ask is: how accurately can information markets predict?\nPrevious research in general shows that information markets are remarkably accurate.\nThe political election markets at IEM predict the election outcomes better than polls [16, 17, 18, 19].\nPrices in HSX and FX have been found to give as accurate or more accurate predictions than judgments of individual experts [33, 34, 37].\nHowever, information markets have not been calibrated against opinion pools, except by Servan-Schreiber et al. [36], in which the authors compare two information markets against the arithmetic average of expert opinions.\nSince information markets by nature offer an adaptive and self-organized mechanism to aggregate opinions of market participants, it is interesting to compare them with existing opinion pooling methods, to evaluate the performance of information markets from another
perspective.\nThe comparison will provide beneficial guidance for practitioners in choosing the most appropriate method for their needs.\nThis paper contributes to the literature in two ways: (1) As an initial attempt to compare information markets with opinion pools of multiple experts, it leads to a better understanding of information markets and their promise as an alternative institution for obtaining accurate forecasts; (2) In screening opinion pools to be used in the comparison, we gain insights into the relative performance of different opinion pools.\nIn terms of prediction accuracy, we compare two information markets with several linear and logarithmic opinion pools (LinOP and LogOP) at predicting the results of NFL games.\nOur results show that at the same time point ahead of the game, information markets provide as accurate predictions as our carefully selected opinion pools.\nIn selecting the opinion pools to be used in our comparison, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve the prediction accuracy of opinion pools; and LogOP offers bolder predictions than LinOP.\nThe remainder of the paper is organized as follows.\nSection 2 reviews popular opinion pooling methods.\nSection 3 introduces the basics of information markets.\nData sets and our analysis methods are described in Section 4.\nWe present results and analysis in Section 5, followed by conclusions in Section 6.\n2.\nREVIEW OF OPINION POOLS Clemen and Winkler [12] classify opinion pooling methods into two broad categories: mathematical approaches and behavioral approaches.\nIn mathematical approaches, the opinions of individual experts are expressed as subjective probability distributions over outcomes of an uncertain event.\nThey are combined through various mathematical methods to form an aggregated probability distribution.\nGenest and Zidek [24] and French [20] provide
comprehensive reviews of mathematical approaches.\nMathematical approaches can be further distinguished into axiomatic approaches and Bayesian approaches.\nAxiomatic approaches apply prespecified functions that map expert opinions, expressed as a set of individual probability distributions, to a single aggregated probability distribution.\nThese pooling functions are justified using axioms or certain desirable properties.\nTwo of the most common pooling functions are the linear opinion pool (LinOP) and the logarithmic opinion pool (LogOP).\nUsing LinOP, the aggregate probability distribution is a weighted arithmetic mean of individual probability distributions: $p(\theta) = \sum_{i=1}^{n} w_i p_i(\theta)$, (1) where $p_i(\theta)$ is expert $i$'s probability distribution of uncertain event $\theta$, $p(\theta)$ represents the aggregate probability distribution, the $w_i$ are weights for experts, which are usually nonnegative and sum to 1, and $n$ is the number of experts.\nUsing LogOP, the aggregate probability distribution is a weighted geometric mean of individual probability distributions: $p(\theta) = k \prod_{i=1}^{n} p_i(\theta)^{w_i}$, (2) where $k$ is a normalization constant to ensure that the pooled opinion is a probability distribution.\nOther axiomatic pooling methods are often extensions of LinOP [22], LogOP [23], or both [13].\nWinkler [39] and Morris [29, 30] establish the early framework of Bayesian aggregation methods.\nBayesian approaches assume that there is a decision maker who has a prior probability distribution over event $\theta$ and a likelihood function over expert opinions given the event.\nThis decision maker takes expert opinions as evidence and updates its priors over the event and opinions according to Bayes rule.\nThe resulting posterior probability distribution of $\theta$ is the pooled opinion.\nBehavioral approaches have been widely studied in the field of group decision making and organizational behavior.
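The two axiomatic pooling functions in equations (1) and (2) are straightforward to implement; the sketch below assumes a binary event, so that LogOP's normalization constant k can be computed by renormalizing over the two outcomes (the function names are illustrative).

```python
import math

def lin_op(opinions, weights):
    """Linear opinion pool (eq. 1): weighted arithmetic mean of the
    experts' probabilities for the event occurring."""
    return sum(w * p for w, p in zip(weights, opinions))

def log_op(opinions, weights):
    """Logarithmic opinion pool (eq. 2) for a binary event: weighted
    geometric mean, renormalized so that the probabilities of the two
    outcomes sum to 1 (this renormalization plays the role of k)."""
    pooled_yes = math.prod(p ** w for p, w in zip(opinions, weights))
    pooled_no = math.prod((1 - p) ** w for p, w in zip(opinions, weights))
    return pooled_yes / (pooled_yes + pooled_no)
```

With equal weights and assessments of 0.6 and 0.8, LinOP returns 0.7, while LogOP returns a slightly bolder value of roughly 0.71.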
The important assumption of behavioral approaches is that, through exchanging opinions or information, experts can eventually reach an equilibrium where further interaction won't change their opinions.\nOne of the best known behavioral approaches is the Delphi technique [28].\nTypically, this method and its variants do not allow open discussion, but each expert has the chance to judge the opinions of other experts, and is given feedback.\nExperts can then reassess their opinions and repeat the process until a consensus or a smaller spread of opinions is achieved.\nSome other behavioral methods, such as the Nominal Group technique [14], promote open discussions in controlled environments.\nEach approach has its pros and cons.\nAxiomatic approaches are easy to use.\nBut they don't have a normative basis for choosing weights.\nIn addition, several impossibility results (e.g., Genest [21]) show that no aggregation function can satisfy all desired properties of an opinion pool, unless the pooled opinion degenerates to a single individual opinion, which effectively implies a dictator.\nBayesian approaches are nicely based on the normative Bayesian framework.\nHowever, they are sometimes frustratingly difficult to apply because they require either (1) constructing an obscenely complex joint prior over the event and opinions (often impractical even in terms of storage\/space complexity, not to mention from an elicitation standpoint) or (2) making strong assumptions about the prior, like conditional independence of experts.\nBehavioral approaches allow experts to dynamically improve their information and revise their opinions during interactions, but many of them are not fixed or completely specified, and can't guarantee convergence or repeatability.\n3.\nHOW INFORMATION MARKETS WORK Much of the enthusiasm for information markets stems from the Hayek hypothesis [26] and the efficient market hypothesis [15].\nHayek, in his classic critique of central planning in the 1940s, claims that the price system in a competitive market is a very efficient mechanism to
aggregate dispersed information among market participants.\nThe efficient market hypothesis further states that, in an efficient market, the price of a security almost instantly incorporates all available information.\nThe market price summarizes all relevant information across traders, hence is the market participants' consensus expectation about the future value of the security.\nEmpirical evidence supports both hypotheses to a large extent [25, 27, 35].\nThus, when associating the value of a security with the outcome of an uncertain future event, the market price, by revealing the consensus expectation of the security value, can indirectly predict the outcome of the event.\nThis idea gives rise to information markets.\nFor example, if we want to predict which team will win the NFL game between New England and Carolina, an information market can trade a security \"$100 if New England defeats Carolina\", whose payoff per share at the end of the game is specified as follows: $100 if New England wins the game; $0 otherwise.\nThe security price should roughly equal the expected payoff of the security in an efficient market.\nThe time value of money usually can be ignored because the durations of most information markets are short.\nAssuming exposure to risk is roughly equal for both outcomes, or that there are sufficient effectively risk-neutral speculators in the market, the price should not be biased by the risk attitudes of the various players in the market.\nThus, p = Pr(Patriots win) × 100 + [1 − Pr(Patriots win)] × 0, where p is the price of the security \"$100 if New England defeats Carolina\" and Pr(Patriots win) is the probability that New England will win the game.\nObserving the security price p before the game, we can derive Pr(Patriots win), which is the market participants' collective prediction about how likely it is that New England will win the game.\nThe above security is a winner-takes-all contract.
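Inverting the pricing identity above gives the market's implied probability; a minimal sketch, where the function name is illustrative and the default payoffs match the winner-takes-all contract described here:

```python
def implied_probability(price, payoff_if_yes=100, payoff_if_no=0):
    """Risk-neutral probability implied by a security price, from
    price = p * payoff_if_yes + (1 - p) * payoff_if_no."""
    return (price - payoff_if_no) / (payoff_if_yes - payoff_if_no)
```

A price of 65 thus corresponds to a 65% consensus probability that the team wins.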
It is used when the event to be predicted is a discrete random variable with disjoint outcomes (in this case binary).\nIts price predicts the probability that a specific outcome will be realized.\nWhen the outcome of a prediction problem can be any value in a continuous interval, we can design a security that pays its holder in proportion to the realized value.\nThis kind of security is what Wolfers and Zitzewitz [40] called an index contract.\nIt predicts the expected value of a future outcome.\nMany other aspects of a future event, such as the median value of the outcome, can also be predicted in information markets by designing and trading different securities.\nWolfers and Zitzewitz [40] provide a summary of the main types of securities traded in information markets and what statistical properties they can predict.\nIn practice, conceiving a security for a prediction problem is only one of the many decisions in designing an effective information market.\nSpann and Skiera [37] propose an initial framework for designing information markets.\n4.\nDESIGN OF ANALYSIS 4.1 Data Sets Our data sets cover 210 NFL games held between September 28th, 2003 and December 28th, 2003.\nNFL games are very suitable for our purposes because: (1) two online exchanges and one online prediction contest already exist that provide data on both information markets and the opinions of self-identified experts for the same set of games; (2) the popularity of NFL games in the United States provides natural incentives for people to participate in information markets and\/or the contest, which increases the liquidity of the information markets and improves the quality and number of opinions in the contest; (3) intense media coverage and analysis of the profiles and strengths of teams and individual players provide the public with much information, so that participants in information markets and the contest can be viewed as knowledgeable with regard to the forecasting goal.\nInformation market data was acquired using a specially designed crawler program,
from TradeSports.com's Football-NFL markets [7] and NewsFutures.com's Sports Exchange [1].\nFor each NFL game, both TradeSports and NewsFutures have a winner-takes-all information market to predict the game outcome.\nWe introduce the design of the two markets according to Spann and Skiera's three steps for designing an information market [37], as below.\n• Choice of forecasting goal: Markets at both TradeSports and NewsFutures aim at predicting which of the two teams will win an NFL football game.\nThey trade similar winner-takes-all securities that pay off 100 if a team wins the game and 0 if it loses the game.\nSmall differences exist in how they deal with ties.\nIn the case of a tie, TradeSports will unwind all trades that occurred and refund all exchange fees, whereas the security is worth 50 at NewsFutures.\nSince the probability of a tie is usually very low (much less than 1%), prices at both markets effectively represent the market participants' consensus assessment of the probability that the team will win.\n• Incentive for participation and information revelation: TradeSports and NewsFutures use different incentives for participation and information revelation.\nTradeSports is a real-money exchange.\nA trader needs to open and fund an account with a minimum of $100 to participate in the market.\nBoth profits and losses can occur as a result of trading activity.\nIn contrast, a trader can register at NewsFutures for free and gets 2000 units of Sports Exchange virtual money at the time of registration.\nTraders at NewsFutures will not incur any real financial loss.\nThey can accumulate virtual money by trading securities.\nThe virtual money can then be used to bid for a few real prizes at NewsFutures' online shop.\n• Financial market design: Both markets at TradeSports and NewsFutures use the continuous double auction as their trading mechanism.\nTradeSports charges a small fee on each security transaction and expiry, while NewsFutures
does not.\nWe can see that the main difference between the two information markets is real money vs. virtual money.\nServan-Schreiber et al. [36] have compared the effect of money on the performance of these two information markets and concluded that the prediction accuracy of the two markets is at about the same level.\nNot intending to compare the two markets, we still use both in our analysis to ensure that our findings are not accidental.\nWe obtain the opinions of 1966 self-identified experts for NFL games from the ProbabilityFootball online contest [5], one of several ProbabilitySports contests [6].\nThe contest is free to enter.\nParticipants in the contest are asked to enter their subjective probability that a team will win a game by noon on the day of the game.\nImportantly, the contest evaluates the participants' performance via the quadratic scoring rule: s = 100 − 400 × Prob_Lose², (3) where s represents the score that a participant earns for the game, and Prob_Lose is the probability that the participant assigns to the actual losing team.\nThe quadratic score is one of a family of so-called proper scoring rules, which have the property that an expert's expected score is maximized when the expert reports probabilities truthfully.
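The scoring rule in equation (3) is direct to implement; a one-line sketch:

```python
def quadratic_score(prob_assigned_to_loser):
    """ProbabilityFootball scoring rule (eq. 3): score earned for one
    game, given the probability assigned to the actual losing team."""
    return 100 - 400 * prob_assigned_to_loser ** 2
```

Assigning 0.8 to the eventual winner (hence 0.2 to the loser) scores 84 points; if the 0.8 team instead loses, the score is −156, matching the worked example in the text.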
team B, if a player assigns 0.5 to both team A and team B, his\/her score for the game is 0 no matter which team wins.\nIf he\/she assigns 0.8 to team A and 0.2 to team B, showing confidence in team A's winning, he\/she will score 84 points for the game if team A wins, and lose 156 points if team B wins.\nThis quadratic scoring rule rewards bold predictions that are right, but penalizes bold predictions that turn out to be wrong.\nThe top players, measured by accumulated scores over all games, win the prizes of the contest.\nThe suggested strategy at the contest website is to make picks for each game that match, as closely as possible, the probabilities that each team will win.\nThis strategy is correct if the participant seeks to maximize expected score.\nHowever, as prizes are awarded only to the top few winners, participants' goals are to maximize the probability of winning, not to maximize expected score, resulting in a slightly different and more risk-seeking optimization.1 (Footnote 1: Ideally, prizes would be awarded by lottery in proportion to accumulated score.) Still, as far as we are aware, these data offer the closest thing available to true subjective probability judgments from so many people over so many public events that have corresponding information markets.\n4.2 Methods of Analysis In order to compare the prediction accuracy of information markets and that of opinion pools, we proceed to derive predictions from the market data of TradeSports and NewsFutures, form pooled opinions using expert data from the ProbabilityFootball contest, and specify the performance measures to be used.\n4.2.1 Deriving Predictions For information markets, deriving predictions is straightforward.\nWe can take the security price and divide it by 100 to get the market's prediction of the probability that a team will win.\nTo match the time when participants in the ProbabilityFootball contest are required to report their probability assessments, we derive predictions using the last trade price before noon on the day of the game.\nFor more than half of the games, this time is only about an hour earlier than the game starting time, while it is several hours earlier for other games.\nTwo sets of market predictions are derived:\n• NF: Prediction equals NewsFutures' last trade price before noon of the game day divided by 100.\n• TS: Prediction equals TradeSports' last trade price before noon of the game day divided by 100.\nWe apply LinOP and LogOP to the ProbabilityFootball data to obtain aggregate expert predictions.\nThe reasons that we do not consider other aggregation methods include: (1) data from ProbabilityFootball are only suitable for mathematical pooling methods, so we can rule out behavioral approaches; (2) Bayesian aggregation requires us to make assumptions about the prior probability distribution of game outcomes and the likelihood function of expert opinions: given the large number of games and participants, making reasonable assumptions is difficult; and (3) for axiomatic approaches, previous research has shown that simpler aggregation methods often perform better than more complex methods [12].\nBecause the output of LogOP is indeterminate if there are probability assessments of both 0 and 1 (and because assessments of 0 and 1 are dictatorial using LogOP), we add a small number, 0.01, to an expert opinion if it is 0, and subtract 0.01 from it if it is 1.\nIn pooling opinions, we consider two influencing factors: the weights of experts and the number of expert opinions to be pooled.\nFor the weights of experts, we experiment with equal weights and performance-based weights.\nThe performance-based weights are determined according to previous accumulated score in the contest.\nThe score for each game is calculated according to equation 3, the scoring rule used in the ProbabilityFootball contest.\nFor the first week, since no previous scores are available, we choose equal weights.
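The performance-based weighting used for later weeks (formalized in equation 4 below) shifts cumulative scores so the smallest is zero when any are negative, then normalizes. This sketch adds an equal-weights fallback for the degenerate case where all shifted scores are zero, which is our own assumption rather than something specified in the paper.

```python
def performance_weights(cumulative_scores):
    """Performance-based weights (eq. 4): shift all cumulative scores
    so the smallest is zero when any are negative, then normalize."""
    shift = max(0, -min(cumulative_scores))
    shifted = [s + shift for s in cumulative_scores]
    total = sum(shifted)
    if total == 0:             # all experts tied at the minimum: fall back
        n = len(cumulative_scores)
        return [1 / n] * n     # to equal weights (our assumption)
    return [s / total for s in shifted]
```

For example, cumulative scores of 10, −10 and 20 shift to 20, 0 and 30 and yield weights 0.4, 0.0 and 0.6.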
player.\nBecause the cumulative scores can be negative, we shift everyone's score if needed to ensure the weights are non-negative.\nThus, w_i = (cumulative score_i + shift) \/ Σ_{j=1}^{n} (cumulative score_j + shift), (4)\nwhere shift equals 0 if the smallest cumulative score_j is non-negative, and equals the absolute value of the smallest cumulative score_j otherwise.\nFor simplicity, we call a performance-weighted opinion pool weighted, and an equally weighted opinion pool unweighted.\nWe will use these terms interchangeably in the remainder of the paper.\nAs for the number of opinions used in an opinion pool, we form different opinion pools with different numbers of experts.\nOnly the best performing experts are selected.\nFor example, to form an opinion pool with 20 expert opinions, we choose the top 20 participants.\nSince there is no performance record for the first week, we use opinions of all participants in the first week.\nFor week 2, we select opinions of the 20 individuals whose scores in the first week are among the top 20.\nFor week 3, the 20 individuals whose cumulative scores over weeks 1 and 2 are among the top 20 are selected.\nExperts are chosen in a similar way for later weeks.\nThus, the top 20 participants can change from week to week.\nThe possible opinion pools, varied in pooling functions, weighting methods, and number of expert opinions, are shown in Table 1.\nTable 1: Pooled Expert Predictions\n# Symbol Description\n1 Lin-All-u Unweighted (equally weighted) LinOP of all experts.\n2 Lin-All-w Weighted (performance-weighted) LinOP of all experts.\n3 Lin-n-u Unweighted (equally weighted) LinOP with n experts.\n4 Lin-n-w Weighted (performance-weighted) LinOP with n experts.\n5 Log-All-u Unweighted (equally weighted) LogOP of all experts.\n6 Log-All-w Weighted (performance-weighted) LogOP of all experts.\n7 Log-n-u Unweighted (equally weighted) LogOP with n experts.\n8 Log-n-w Weighted (performance-weighted) LogOP with n experts.\nLin represents linear, and Log represents
Logarithmic.\nn is the number of expert opinions that are pooled, and All indicates that all opinions are combined.\nWe use u to symbolize unweighted (equally weighted) opinion pools.\nw is used for weighted (performance-weighted) opinion pools.\nLin-All-u, the equally weighted LinOP with all participants, is basically the arithmetic mean of all participants' opinions.\nLog-All-u is simply the geometric mean of all opinions.\nWhen a participant did not enter a prediction for a particular game, that participant was removed from the opinion pool for that game.\nThis contrasts with the ProbabilityFootball average reported on the contest website and used by Servan-Schreiber et al. [36], where unreported predictions were converted to 0.5 probability predictions.\n4.2.2 Performance Measures We use three common metrics to assess the prediction accuracy of information markets and opinion pools.\nThese measures have been used by Servan-Schreiber et al. [36] in evaluating the prediction accuracy of information markets.\n1.\nAbsolute Error = Prob Lose, where Prob Lose is the probability assigned to the eventual losing team.\nAbsolute error simply measures the difference between a perfect prediction (1 for the winning team) and the actual prediction.\nA prediction with lower absolute error is more accurate.\n2.\nQuadratic Score = 100 − 400 × (Prob Lose)^2.\nQuadratic score is the scoring function that is used in the ProbabilityFootball contest.\nIt is a linear transformation of squared error, (Prob Lose)^2, which is one of the most widely used metrics in evaluating forecasting accuracy.\nQuadratic score can be negative.\nA prediction with higher quadratic score is more accurate.\n3.\nLogarithmic Score = log(Prob Win), where Prob Win is the probability assigned to the eventual winning team.\nThe logarithmic score, like the quadratic score, is a proper scoring rule.\nA prediction with higher (less negative) logarithmic score is more accurate.\n5.\nEMPIRICAL RESULTS 5.1
Performance of Opinion Pools Depending on how many opinions are used, there can be numerous different opinion pools.\nWe first examine the effect of the number of opinions on prediction accuracy by forming opinion pools with the number of expert opinions varying from 1 to 960.\nIn the ProbabilityFootball competition, not all 1966 registered participants provide probability assessments for every game; 960 is the smallest number of participants across all games.\nFor each game, we sort experts according to their accumulated quadratic score in previous weeks.\nPredictions of the best performing n participants are picked to form an opinion pool with n experts.\nFigure 1 shows the prediction accuracy of LinOP and LogOP in terms of mean values of the three performance measures across all 210 games.\nWe can see the following trends in the figure.\n1.\nUnweighted opinion pools and performance-weighted opinion pools have similar levels of prediction accuracy, especially for LinOP.\n2.\nFor LinOP, increasing the number of experts in general increases or maintains the level of prediction accuracy.\nWhen there are more than 200 experts, the prediction accuracy of LinOP is stable with respect to the number of experts.\n3.\nLogOP seems more accurate than LinOP in terms of mean absolute error.\nBut on all other performance measures, LinOP outperforms LogOP.\n4.\nFor LogOP, increasing the number of experts increases the prediction accuracy at first.\nBut the curves (including the points with all experts) for mean quadratic score and mean logarithmic score are slightly bell-shaped, representing a decrease in prediction accuracy when the number of experts is very large.\nThe curves for mean absolute error, on the other hand, show a consistent increase in accuracy.\nThe first and second trends above imply that when using LinOP, the simplest approach, averaging the opinions of all experts, already has good prediction accuracy.\nWeighting does not seem to improve
performance.\nSelecting experts according to past performance also does not help.\nIt is a very interesting observation that even though many participants of the ProbabilityFootball contest do not provide accurate individual predictions (they have negative quadratic scores in the contest), including their opinions in the opinion pool still increases the prediction accuracy.\nOne explanation of this phenomenon could be that biases of individual judgments offset each other when opinions are diverse, which makes the pooled prediction more accurate.\nThe third trend presents an apparent contradiction: the relative prediction accuracy of LogOP and LinOP flips when using different accuracy measures.\nTo investigate this disagreement, we plot the absolute error of Log-All-u and Lin-All-u for each game in Figure 2.\nWhen the absolute error of an opinion\n[Figure 1: Prediction Accuracy of Opinion Pools. Panels (a) Mean Absolute Error, (b) Mean Quadratic Score, and (c) Mean Logarithmic Score, each plotted against the number of expert opinions for unweighted and weighted LinOP and LogOP.]
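To make the two pooling functions and the three accuracy measures concrete, here is a minimal Python sketch (our own illustration, not the authors' code; function names are ours, and LogOP is shown in its normalized binary-outcome form):

```python
import math

def lin_op(probs, weights=None):
    """Linear opinion pool (LinOP): weighted arithmetic mean of the
    experts' probabilities that a given team wins."""
    n = len(probs)
    w = weights if weights is not None else [1.0 / n] * n
    return sum(wi * pi for wi, pi in zip(w, probs))

def log_op(probs, weights=None):
    """Logarithmic opinion pool (LogOP) for a binary outcome: a weighted
    geometric mean, renormalized over the win/lose alternatives.  With
    equal weights this is the geometric mean of the opinions."""
    n = len(probs)
    w = weights if weights is not None else [1.0 / n] * n
    # As in the paper, nudge extreme assessments away from 0 and 1,
    # since opinions of exactly 0 or 1 are dictatorial under LogOP.
    probs = [min(max(p, 0.01), 0.99) for p in probs]
    win = math.exp(sum(wi * math.log(pi) for wi, pi in zip(w, probs)))
    lose = math.exp(sum(wi * math.log(1.0 - pi) for wi, pi in zip(w, probs)))
    return win / (win + lose)

def accuracy_measures(p_win):
    """The three measures, given the probability assigned to the
    eventual winning team."""
    p_lose = 1.0 - p_win
    return {
        "absolute_error": p_lose,
        "quadratic_score": 100 - 400 * p_lose ** 2,
        "logarithmic_score": math.log(p_win),
    }
```

With equal weights, opinions of 0.8 and 0.7 pool to 0.75 under LinOP, while LogOP pushes the pooled probability slightly further from 0.5, which is one way to see why LogOP's predictions are bolder; and a correct 0.8 pick earns 100 − 400 × 0.2² = 84 points, matching the contest example given earlier.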
pool for a game is less than 0.5, it means that the team favored by the opinion pool wins the game.\nIf it is greater than 0.5, the underdog wins.\nCompared with Lin-All-u, Log-All-u has lower absolute error when it is less than 0.5, and greater absolute error when it is greater than 0.5, which indicates that predictions of Log-All-u are bolder, that is, closer to 0 or 1, than those of Lin-All-u.\nThis is due to the nature of the linear and logarithmic aggregating functions.\nBecause quadratic score and logarithmic score penalize bold predictions that are wrong, LogOP is less accurate when measured in these terms.\nSimilar reasoning accounts for the fourth trend.\nWhen there are more than 500 experts, increasing the number of experts used in LogOP improves the prediction accuracy measured by absolute error, but worsens the accuracy measured by the other two metrics.\nExamining expert opinions, we find that participants who rank lower offer extreme predictions (0 or 1) more frequently than those ranking higher.\nWhen we increase the number of experts in an opinion pool, we incorporate more of these extreme predictions.\nThe resulting LogOP is bolder, and hence has lower mean quadratic score and mean logarithmic score.\n5.2 Comparison of Information Markets and Opinion Pools Through this first screening of various opinion pools, we select Lin-All-u, Log-All-u, Log-All-w, and Log-200-u to compare with predictions from information markets.\nLin-All-u, as shown in Figure 1, can represent what LinOP can achieve.\nHowever, the performance of LogOP is not consistent when evaluated using different metrics.\nLog-All-u and Log-All-w offer either the best or the worst predictions.\nLog-200-u, the LogOP with the 200 top performing experts, provides more stable predictions.\nWe use all three to stand for the performance of LogOP in our later comparison.\nIf a prediction of the probability that a team will win a game, either from an opinion pool or an information
market, is higher than 0.5, we say that the team is the predicted favorite for the game.\nTable 2 presents the number and percentage of games in which the predicted favorites actually won, out of a total of 210 games.\nAll four opinion pools correctly predict a similar number and percentage of games as NF and TS.\nSince NF, TS, and the four opinion pools form their predictions using information available at noon of the game day, information markets and opinion pools have comparable potential at the same time point.\nTable 2: Number and Percentage of Games that Predicted Favorites Win\n NF TS Lin-All-u Log-All-u Log-All-w Log-200-u\nNumber 142 137 144 144 143 141\nPercentage 67.62% 65.24% 68.57% 68.57% 68.10% 67.14%\nTable 3: Mean of Prediction Accuracy Measures\n Absolute Error Quadratic Score Logarithmic Score\nNF 0.4253 (0.0121) 15.4352 (4.6072) -0.6136 (0.0258)\nTS 0.4275 (0.0118) 15.2739 (4.3982) -0.6121 (0.0241)\nLin-All-u 0.4292 (0.0126) 13.0525 (4.8088) -0.6260 (0.0268)\nLog-All-u 0.4024 (0.0173) 10.0099 (6.6594) -0.6546 (0.0418)\nLog-All-w 0.4059 (0.0168) 10.4491 (6.4440) -0.6497 (0.0398)\nLog-200-u 0.4266 (0.0133) 12.3868 (5.0764) -0.6319 (0.0295)\n*Numbers in parentheses are standard errors.\n*Best value for each metric is shown in bold.\n[Figure 2: Absolute Error: Lin-All-u vs. Log-All-u. Scatter plot of the per-game absolute error of Log-All-u against that of Lin-All-u, with a 45-degree reference line.]\nWe then take a closer look at the prediction accuracy of information markets and opinion pools using the three performance measures.\nTable 3 displays mean values of these measures over the 210 games.\nNumbers in parentheses are standard errors, which estimate the standard deviation of the mean.\nTo account for the skewness of the distributions, we also report median values of the accuracy measures in Table 4.\nJudged by the mean values of the accuracy measures in Table 3, all methods have similar accuracy levels, with NF and TS slightly better than the opinion pools.\nHowever, the median values of the accuracy measures indicate that the Log-All-u and Log-All-w opinion pools are more accurate than all other predictions.\nWe employ the randomization test [32] to study whether the differences in prediction accuracy presented in Table 3 and Table 4 are statistically significant.\nThe basic idea of the randomization test is that, by randomly swapping the predictions of two methods numerous times, an empirical distribution for the difference in prediction accuracy can be constructed.\nUsing this empirical distribution, we can then evaluate at what confidence level the observed difference reflects a real difference.\nFor example, the mean absolute error of NF is higher than that of Log-All-u by 0.0229, as shown in Table 3.\nTo test whether this difference is statistically significant, we shuffle the predictions from the two methods, randomly label half of the predictions as NF and the other half as Log-All-u, and compute the difference in mean absolute error of the newly formed NF and Log-All-u data.\nThe above procedure is repeated 10,000 times.\nThe 10,000 differences of mean absolute error result in an empirical distribution of the difference.\nComparing our observed difference, 0.0229, with this distribution, we find that the observed difference is greater than 75.37% of the empirical
differences.\nThis leads us to conclude that the difference in mean absolute error between NF and Log-All-u is not statistically significant, if we choose the level of significance to be 0.05.\nTable 5 and Table 6 present the results of the randomization test for mean and median differences, respectively.\nEach cell of a table corresponds to a pair of prediction methods, identified by the row and column names.\nThe first line of each cell reports results for absolute error; the second and third lines are dedicated to quadratic score and logarithmic score, respectively.\nWe can see that, in terms of mean values of the accuracy measures, the differences among all methods are not statistically significant to any reasonable degree.\nTable 4: Median of Prediction Accuracy Measures\n Absolute Error Quadratic Score Logarithmic Score\nNF 0.3800 42.2400 -0.4780\nTS 0.4000 36.0000 -0.5108\nLin-All-u 0.3639 36.9755 -0.5057\nLog-All-u 0.3417 53.2894 -0.4181\nLog-All-w 0.3498 51.0486 -0.4305\nLog-200-u 0.3996 36.1300 -0.5101\n*Best value for each metric is shown in bold.\nTable 5: Statistical Confidence of Mean Differences in Prediction Accuracy\n TS Lin-All-u Log-All-u Log-All-w Log-200-u\nNF 8.92% 22.07% 75.37% 66.47% 7.76%\n 2.38% 26.60% 50.74% 44.26% 32.24%\n 2.99% 22.81% 59.35% 56.21% 33.26%\nTS 10.13% 77.79% 68.15% 4.35%\n 27.25% 53.65% 44.90% 28.30%\n 32.35% 57.89% 60.69% 38.84%\nLin-All-u 82.19% 68.86% 9.75%\n 28.91% 23.92% 6.81%\n 44.17% 43.01% 17.36%\nLog-All-u 11.14% 72.49%\n 3.32% 18.89%\n 5.25% 39.06%\nLog-All-w 69.89%\n 18.30%\n 30.23%\n*In each table cell, row 1 accounts for absolute error, row 2 for quadratic score, and row 3 for logarithmic score.\nWhen it comes to median values of prediction accuracy, Log-All-u outperforms Lin-All-u at a high confidence level.\nThese results indicate that differences in prediction accuracy between information markets and opinion pools are not statistically significant.\nThis may seem to contradict the result of Servan-Schreiber et al. [36], in which NewsFutures's
information markets have been shown to provide statistically significantly more accurate predictions than the (unweighted) average of all ProbabilityFootball opinions.\nThe discrepancy arises from the treatment of missing data.\nNot all 1966 registered ProbabilityFootball participants offer probability assessments for each game.\nWhen a participant does not provide a probability assessment for a game, the contest treats the prediction as 0.5.\nThis makes sense in the context of the contest, since 0.5 always yields a quadratic score of 0.\nThe ProbabilityFootball average reported on the contest website and used by Servan-Schreiber et al. includes these 0.5 estimates.\nInstead, we remove participants from games for which they do not provide assessments, pooling only the available opinions.\nOur treatment increases the prediction accuracy of Lin-All-u significantly.\n6.\nCONCLUSIONS With the fast growth of the Internet, information markets have recently emerged as an alternative tool for predicting future events.\nPrevious research has shown that information markets give predictions that are as accurate as, or more accurate than, those of individual experts and polls.\nHowever, information markets, as an adaptive mechanism to aggregate the different opinions of market participants, have not been calibrated against many belief aggregation methods.\nIn this paper, we compare the prediction accuracy of information markets with linear and logarithmic opinion pools (LinOP and LogOP) using predictions from two markets and 1966 individuals regarding the outcomes of 210 American football games during the 2003 NFL season.\nIn screening for representative opinion pools to compare with information markets, we investigate the effect of weights and the number of experts on prediction accuracy.\nOur results on both the comparison of information markets and opinion pools and the relative performance of different opinion pools are summarized below.\n1.\nAt the same time point ahead of the events, information
markets offer as accurate predictions as our selected opinion pools.\nWe have selected four opinion pools to represent the prediction accuracy level that LinOP and LogOP can achieve.\nOn all four performance metrics, our two information markets obtain prediction accuracy similar to that of the four opinion pools.\nTable 6: Statistical Confidence of Median Differences in Prediction Accuracy\n TS Lin-All-u Log-All-u Log-All-w Log-200-u\nNF 48.85% 47.3% 84.8% 77.9% 65.36%\n 45.26% 44.55% 85.27% 75.65% 66.75%\n 44.89% 46.04% 84.43% 77.16% 64.78%\nTS 5.18% 94.83% 94.31% 0%\n 5.37% 92.08% 92.53% 0%\n 7.41% 95.62% 91.09% 0%\nLin-All-u 95.11% 91.37% 7.31%\n 96.10% 92.69% 9.84%\n 95.45% 95.12% 7.79%\nLog-All-u 23.47% 95.89%\n 26.68% 93.85%\n 22.47% 96.42%\nLog-All-w 91.3%\n 91.4%\n 90.37%\n*In each table cell, row 1 accounts for absolute error, row 2 for quadratic score, and row 3 for logarithmic score.\n*Confidence above 95% is shown in bold.\n2.\nThe arithmetic average of all opinions (Lin-All-u) is a simple, robust, and efficient opinion pool.\nSimply averaging across all experts seems to result in better predictions than individual opinions and opinion pools with a few experts.\nIt is quite robust in the sense that even if the included individual predictions are less accurate, averaging over all opinions still gives better (or equally good) predictions.\n3.\nWeighting expert opinions according to past performance does not seem to significantly improve the prediction accuracy of either LinOP or LogOP.\nComparing performance-weighted opinion pools with equally weighted opinion pools, we do not observe much difference in prediction accuracy.\nSince we use only one performance-weighting method, calculating the weights according to the accumulated quadratic scores that participants earned, this might be due to the weighting method we chose.\n4.\nLogOP yields bolder predictions than LinOP.\nLogOP yields predictions that are closer to the extremes, 0 or 1.\nAn information market is a self-organizing
mechanism for aggregating information and making predictions.\nCompared with opinion pools, it is less constrained by space and time, and it eliminates the effort of identifying experts and choosing a belief aggregation method.\nThese advantages do not compromise its prediction accuracy to any extent.\nOn the contrary, information markets can provide real-time predictions, which are hard to achieve by resorting to experts.\nIn the future, we are interested in further exploring: • Performance comparison of information markets with other opinion pools and mathematical aggregation procedures.\nIn this paper, we only compare information markets with two simple opinion pools, linear and logarithmic.\nIt would be meaningful to investigate their relative prediction accuracy against other belief aggregation methods such as Bayesian approaches.\nThere are also a number of theoretical expert algorithms with proven worst-case performance bounds [10] whose average-case or practical performance would be instructive to investigate.\n• Whether defining expertise more narrowly can improve the predictions of opinion pools.\nIn our analysis, we broadly treat participants of the ProbabilityFootball contest as experts in all games.\nIf we define expertise more narrowly, selecting experts on certain football teams to predict games involving those teams, will the predictions of opinion pools be more accurate?\n• The possibility of combining information markets with other forecasting methods to achieve better prediction accuracy.\nChen, Fine, and Huberman [11] use an information market to determine the risk attitude of participants, and then perform a nonlinear aggregation of their predictions based on their risk attitudes.\nThe nonlinear aggregation mechanism is shown to outperform both the market and the best individual participants.\nIt deserves more attention whether information markets, as an alternative forecasting method, can be used together with other methods to
improve our predictions.\n7.\nACKNOWLEDGMENTS We thank Brian Galebach, the owner and operator of the ProbabilitySports and ProbabilityFootball websites, for providing us with such unique and valuable data.\nWe thank Varsha Dani, Lance Fortnow, Omid Madani, Sumit Sanghai, and the anonymous reviewers for useful insights and pointers.\nThe authors acknowledge the support of The Penn State eBusiness Research Center.\n8.\nREFERENCES [1] http:\/\/us.newsfutures.com [2] http:\/\/www.biz.uiowa.edu\/iem\/ [3] http:\/\/www.hsx.com\/ [4] http:\/\/www.ideosphere.com\/fx\/ [5] http:\/\/www.probabilityfootball.com\/ [6] http:\/\/www.probabilitysports.com\/ [7] http:\/\/www.tradesports.com\/ [8] A. H. Ashton and R. H. Ashton.\nAggregating subjective forecasts: Some empirical results.\nManagement Science, 31:1499-1508, 1985.\n[9] R. P. Batchelor and P. Dua.\nForecaster diversity and the benefits of combining forecasts.\nManagement Science, 41:68-75, 1995.\n[10] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth.\nHow to use expert advice.\nJournal of the ACM, 44(3):427-485, 1997.\n[11] K. Chen, L. Fine, and B. Huberman.\nPredicting the future.\nInformation Systems Frontiers, 5(1):47-61, 2003.\n[12] R. T. Clemen and R. L. Winkler.\nCombining probability distributions from experts in risk analysis.\nRisk Analysis, 19(2):187-203, 1999.\n[13] R. M. Cooke.\nExperts in Uncertainty: Opinion and Subjective Probability in Science.\nOxford University Press, New York, 1991.\n[14] A. L. Delbecq, A. H. Van de Ven, and D. H. Gustafson.\nGroup Techniques for Program Planners: A Guide to Nominal Group and Delphi Processes.\nScott Foresman and Company, Glenview, IL, 1975.\n[15] E. F. Fama.\nEfficient capital markets: A review of theory and empirical work.\nJournal of Finance, 25:383-417, 1970.\n[16] R. Forsythe and F. Lundholm.\nInformation aggregation in an experimental market.\nEconometrica, 58:309-347, 1990.\n[17] R. Forsythe, F. Nelson, G. R.
Neumann, and J. Wright.\nForecasting elections: A market alternative to polls.\nIn T. R. Palfrey, editor, Contemporary Laboratory Experiments in Political Economy, pages 69-111.\nUniversity of Michigan Press, Ann Arbor, MI, 1991.\n[18] R. Forsythe, F. Nelson, G. R. Neumann, and J. Wright.\nAnatomy of an experimental political stock market.\nAmerican Economic Review, 82(5):1142-1161, 1992.\n[19] R. Forsythe, T. A. Rietz, and T. W. Ross.\nWishes, expectations, and actions: A survey on price formation in election stock markets.\nJournal of Economic Behavior and Organization, 39:83-110, 1999.\n[20] S. French.\nGroup consensus probability distributions: a critical survey.\nBayesian Statistics, 2:183-202, 1985.\n[21] C. Genest.\nA conflict between two axioms for combining subjective distributions.\nJournal of the Royal Statistical Society, 46(3):403-405, 1984.\n[22] C. Genest.\nPooling operators with the marginalization property.\nCanadian Journal of Statistics, 12(2):153-163, 1984.\n[23] C. Genest, K. J. McConway, and M. J. Schervish.\nCharacterization of externally Bayesian pooling operators.\nAnnals of Statistics, 14(2):487-501, 1986.\n[24] C. Genest and J. V. Zidek.\nCombining probability distributions: A critique and an annotated bibliography.\nStatistical Science, 1(1):114-148, 1986.\n[25] S. J. Grossman.\nAn introduction to the theory of rational expectations under asymmetric information.\nReview of Economic Studies, 48(4):541-559, 1981.\n[26] F. A. Hayek.\nThe use of knowledge in society.\nAmerican Economic Review, 35(4):519-530, 1945.\n[27] J. C. Jackwerth and M. Rubinstein.\nRecovering probability distribution from options prices.\nJournal of Finance, 51(5):1611-1631, 1996.\n[28] H. A. Linstone and M. Turoff.\nThe Delphi Method: Techniques and Applications.\nAddison-Wesley, Reading, MA, 1975.\n[29] P. A. Morris.\nDecision analysis expert use.\nManagement Science, 20(9):1233-1241, 1974.\n[30] P. A. 
Morris.\nCombining expert judgments: A Bayesian approach.\nManagement Science, 23(7):679-693, 1977.\n[31] P. A. Morris.\nAn axiomatic approach to expert resolution.\nManagement Science, 29(1):24-32, 1983.\n[32] E. W. Noreen.\nComputer-Intensive Methods for Testing Hypotheses: An Introduction.\nWiley and Sons, Inc., New York, 1989.\n[33] D. M. Pennock, S. Lawrence, C. L. Giles, and F. A. Nielsen.\nThe real power of artificial markets.\nScience, 291:987-988, February 2001.\n[34] D. M. Pennock, S. Lawrence, F. A. Nielsen, and C. L. Giles.\nExtracting collective probabilistic forecasts from web games.\nIn Proceedings of the 7th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 174-183, San Francisco, CA, 2001.\n[35] C. Plott and S. Sunder.\nRational expectations and the aggregation of diverse information in laboratory security markets.\nEconometrica, 56:1085-1118, 1988.\n[36] E. Servan-Schreiber, J. Wolfers, D. M. Pennock, and B. Galebach.\nPrediction markets: Does money matter?\nElectronic Markets, 14(3):243-251, 2004.\n[37] M. Spann and B. Skiera.\nInternet-based virtual stock markets for business forecasting.\nManagement Science, 49(10):1310-1326, 2003.\n[38] M. West.\nBayesian aggregation.\nJournal of the Royal Statistical Society.\nSeries A. General, 147(4):600-607, 1984.\n[39] R. L. Winkler.\nThe consensus of subjective probability distributions.\nManagement Science, 15(2):B61-B75, 1968.\n[40] J. Wolfers and E. Zitzewitz.\nPrediction markets.\nJournal of Economic Perspectives, 18(2):107-126, 2004.\n","lvl-3":"Information Markets vs.
Opinion Pools: An Empirical Comparison\nABSTRACT\nIn this paper, we examine the relative forecast accuracy of information markets versus expert aggregation.\nWe leverage a unique data source of almost 2000 people's subjective probability judgments on 2003 US National Football League games and compare with the \"market probabilities\" given by two different information markets on exactly the same events.\nWe combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions.\nPrices in information markets are used to derive market predictions.\nOur results show that, at the same time point ahead of the game, information markets provide as accurate predictions as pooled expert assessments.\nIn screening pooled expert predictions, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions.\nThe results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods.\n1.\nINTRODUCTION\nForecasting is a ubiquitous endeavor in human societies.\nFor decades, scientists have been developing and exploring various forecasting methods, which can be roughly divided into statistical and non-statistical approaches.\nStatistical approaches require not only the existence of enough historical data but also that past data contains valuable information about the future event.\nWhen these conditions cannot be met, non-statistical approaches that rely on judgmental information about the future event could be better choices.\nOne widely used non-statistical method is to elicit opinions from experts.\nSince experts are not generally in agreement, many belief aggregation methods have been proposed to combine expert opinions together 
and form a single prediction.\nThese belief aggregation methods are called opinion pools, which have been extensively studied in statistics [20, 24, 38], and management sciences [8, 9, 30, 31], and applied in many domains such as group decision making [29] and risk analysis [12].\nWith the fast growth of the Internet, information markets have recently emerged as a promising non-statistical forecasting tool.\nInformation markets (sometimes called prediction markets, idea markets, or event markets) are markets designed for aggregating information and making predictions about future events.\nTo form the predictions, information markets tie payoffs of securities to outcomes of events.\nFor example, in an information market to predict the result of a US professional National Football League (NFL) game, say New England vs Carolina, the security pays a certain amount of money per share to its holders if and only if New England wins the game.\nOtherwise, it pays off nothing.\nThe security price before the game reflects the consensus expectation of market traders about the probability of New England winning the game.\nSuch markets are becoming very popular.\nThe Iowa Electronic Markets (IEM) [2] are real-money futures markets to predict economic and political events such as elections.\nThe Hollywood Stock Exchange (HSX) [3] is a virtual (play-money) exchange for trading securities to forecast future box office proceeds of new movies, and the outcomes of entertainment awards, etc. 
.\nTradeSports.com [7], a real-money betting exchange registered in Ireland, hosts markets for sports, political, entertainment, and financial events.\nThe Foresight Exchange (FX) [4] allows traders to wager play money on unresolved scientific questions or other claims of public interest, and NewsFutures.com's World News Exchange [1] has popular sports and financial betting markets, also grounded in a play-money currency.\nDespite the popularity of information markets, one of the most important questions to ask is: how accurately can information markets predict?\nPrevious research in general shows that information markets are remarkably accurate.\nThe political election markets at IEM predict election outcomes better than polls [16, 17, 18, 19].\nPrices in HSX and FX have been found to give predictions as accurate as, or more accurate than, the judgments of individual experts [33, 34, 37].\nHowever, information markets have not been calibrated against opinion pools, except by Servan-Schreiber et al. [36], in which the authors compare two information markets against the arithmetic average of expert opinions.\nSince information markets by nature offer an adaptive and self-organized mechanism to aggregate the opinions of market participants, it is interesting to compare them with existing opinion pooling methods, to evaluate the performance of information markets from another perspective.\nThe comparison will provide beneficial guidance for practitioners to choose the most appropriate method for their needs.\nThis paper contributes to the literature in two ways: (1) As an initial attempt to compare information markets with opinion pools of multiple experts, it leads to a better understanding of information markets and their promise as an alternative institution for obtaining accurate forecasts; (2) In screening opinion pools to be used in the comparison, we gain insights into the relative performance of different opinion pools.\nIn terms of prediction accuracy, we compare two
information markets with several linear and logarithmic opinion pools (LinOP and LogOP) at predicting the results of NFL games.\nOur results show that at the same time point ahead of the game, information markets provide as accurate predictions as our carefully selected opinion pools.\nIn selecting the opinion pools to be used in our comparison, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performances does not improve the prediction accuracy of opinion pools; and LogOP offers bolder predictions than LinOP.\nThe remainder of the paper is organized as follows.\nSection 2 reviews popular opinion pooling methods.\nSection 3 introduces the basics of information markets.\nData sets and our analysis methods are described in Section 4.\nWe present results and analysis in Section 5, followed by conclusions in Section 6.\n2.\nREVIEW OF OPINION POOLS\n3.\nHOW INFORMATION MARKETS WORK\n4.\nDESIGN OF ANALYSIS\n4.1 Data Sets\n9 Incentive for participation and information rev\n4.2 Methods of Analysis\n4.2.1 Deriving Predictions\n4.2.2 Performance Measures\n2.\nQuadratic Score = 100 \u2212 400 \u00d7 (Prob Lose2).\n5.\nEMPIRICAL RESULTS\n5.1 Performance of Opinion Pools\n5.2 Comparison of Information Markets and Opinion Pools\n6.\nCONCLUSIONS\n4.\nLogOP yields bolder predictions than LinOP.\n7.\nACKNOWLEDGMENTS\nWe thank Brian Galebach, the owner and operator of the ProbabilitySports and ProbabilityFootball websites, for providing us with such unique and valuable data.\nWe thank Varsha Dani, Lance Fortnow, Omid Madani, Sumit Sang\nhai, and the anonymous reviewers for useful insights and pointers.\nThe authors acknowledge the support of The Penn State eBusiness Research Center.","lvl-4":"Information Markets vs. 
Opinion Pools: An Empirical Comparison\nABSTRACT\nIn this paper, we examine the relative forecast accuracy of information markets versus expert aggregation.\nWe leverage a unique data source of almost 2000 people's subjective probability judgments on 2003 US National Football League games and compare with the \"market probabilities\" given by two different information markets on exactly the same events.\nWe combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions.\nPrices in information markets are used to derive market predictions.\nOur results show that, at the same time point ahead of the game, information markets provide as accurate predictions as pooled expert assessments.\nIn screening pooled expert predictions, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions.\nThe results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods.\n1.\nINTRODUCTION\nForecasting is a ubiquitous endeavor in human societies.\nFor decades, scientists have been developing and exploring various forecasting methods, which can be roughly divided into statistical and non-statistical approaches.\nStatistical approaches require not only the existence of enough historical data but also that past data contains valuable information about the future event.\nWhen these conditions cannot be met, non-statistical approaches that rely on judgmental information about the future event could be better choices.\nOne widely used non-statistical method is to elicit opinions from experts.\nSince experts are not generally in agreement, many belief aggregation methods have been proposed to combine expert opinions together 
and form a single prediction.\nWith the fast growth of the Internet, information markets have recently emerged as a promising non-statistical forecasting tool.\nInformation markets (sometimes called prediction markets, idea markets, or event markets) are markets designed for aggregating information and making predictions about future events.\nTo form the predictions, information markets tie payoffs of securities to outcomes of events.\nFor example, in an information market to predict the result of a US professional National Football League (NFL) game, say New England vs Carolina, the security pays a certain amount of money per share to its holders if and only if New England wins the game.\nOtherwise, it pays off nothing.\nThe security price before the game reflects the consensus expectation of market traders about the probability of New England winning the game.\nSuch markets are becoming very popular.\nThe Iowa Electronic Markets (IEM) [2] are real-money futures markets to predict economic and political events such as elections.\nTradeSports.com [7], a real-money betting exchange registered in Ireland, hosts markets for sports, political, entertainment, and financial events.\npopular sports and financial betting markets, also grounded in a play-money currency.\nDespite the popularity of information markets, one of the most important questions to ask is: how accurately can information markets predict?\nPrevious research in general shows that information markets are remarkably accurate.\nThe political election markets at IEM predict the election outcomes better than polls [16, 17, 18, 19].\nPrices in HSX and FX have been found to give as accurate or more accurate predictions than judgment of individual experts [33, 34, 37].\nHowever, information markets have not been calibrated against opinion pools, except for Servan-Schreiber et.\nal [36], in which the authors compare two information markets against arithmetic average of expert opinions.\nSince information 
markets, in nature, offer an adaptive and selforganized mechanism to aggregate opinions of market participants, it is interesting to compare them with existing opinion pooling methods, to evaluate the performance of information markets from another perspective.\nThe comparison will provide beneficial guidance for practitioners to choose the most appropriate method for their needs.\nThis paper contributes to the literature in two ways: (1) As an initial attempt to compare information markets with opinion pools of multiple experts, it leads to a better understanding of information markets and their promise as an alternative institution for obtaining accurate forecasts; (2) In screening opinion pools to be used in the comparison, we cast insights into relative performances of different opinion pools.\nIn terms of prediction accuracy, we compare two information markets with several linear and logarithmic opinion pools (LinOP and LogOP) at predicting the results of NFL games.\nOur results show that at the same time point ahead of the game, information markets provide as accurate predictions as our carefully selected opinion pools.\nIn selecting the opinion pools to be used in our comparison, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performances does not improve the prediction accuracy of opinion pools; and LogOP offers bolder predictions than LinOP.\nThe remainder of the paper is organized as follows.\nSection 2 reviews popular opinion pooling methods.\nSection 3 introduces the basics of information markets.\nData sets and our analysis methods are described in Section 4.\nWe present results and analysis in Section 5, followed by conclusions in Section 6.\n7.\nACKNOWLEDGMENTS\nWe thank Brian Galebach, the owner and operator of the ProbabilitySports and ProbabilityFootball websites, for providing us with such unique and valuable data.","lvl-2":"Information Markets vs. 
Opinion Pools: An Empirical Comparison

ABSTRACT

In this paper, we examine the relative forecast accuracy of information markets versus expert aggregation. We leverage a unique data source of almost 2000 people's subjective probability judgments on 2003 US National Football League games and compare with the "market probabilities" given by two different information markets on exactly the same events. We combine assessments of multiple experts via linear and logarithmic aggregation functions to form pooled predictions. Prices in information markets are used to derive market predictions. Our results show that, at the same time point ahead of the game, information markets provide as accurate predictions as pooled expert assessments. In screening pooled expert predictions, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve the accuracy of pooled predictions; and logarithmic aggregation functions offer bolder predictions than linear aggregation functions. The results provide insights into the predictive performance of information markets, and the relative merits of selecting among various opinion pooling methods.

1. INTRODUCTION

Forecasting is a ubiquitous endeavor in human societies. For decades, scientists have been developing and exploring various forecasting methods, which can be roughly divided into statistical and non-statistical approaches. Statistical approaches require not only the existence of enough historical data but also that past data contain valuable information about the future event. When these conditions cannot be met, non-statistical approaches that rely on judgmental information about the future event can be better choices. One widely used non-statistical method is to elicit opinions from experts. Since experts are not generally in agreement, many belief aggregation methods have been proposed to combine expert opinions together
and form a single prediction. These belief aggregation methods are called opinion pools, which have been extensively studied in statistics [20, 24, 38] and the management sciences [8, 9, 30, 31], and applied in many domains such as group decision making [29] and risk analysis [12].

With the fast growth of the Internet, information markets have recently emerged as a promising non-statistical forecasting tool. Information markets (sometimes called prediction markets, idea markets, or event markets) are markets designed for aggregating information and making predictions about future events. To form the predictions, information markets tie payoffs of securities to outcomes of events. For example, in an information market to predict the result of a US professional National Football League (NFL) game, say New England vs. Carolina, the security pays a certain amount of money per share to its holders if and only if New England wins the game. Otherwise, it pays off nothing. The security price before the game reflects the consensus expectation of market traders about the probability of New England winning the game. Such markets are becoming very popular. The Iowa Electronic Markets (IEM) [2] are real-money futures markets to predict economic and political events such as elections. The Hollywood Stock Exchange (HSX) [3] is a virtual (play-money) exchange for trading securities to forecast future box office proceeds of new movies, the outcomes of entertainment awards, and so on. TradeSports.com [7], a real-money betting exchange registered in Ireland, hosts markets for sports, political, entertainment, and financial events. The Foresight Exchange (FX) [4] allows traders to wager play money on unresolved scientific questions or other claims of public interest, and NewsFutures.com's World News Exchange [1] has popular sports and financial betting markets, also grounded in a play-money currency.

Despite the popularity of information markets, one of the most important questions to ask is: how accurately can information markets predict? Previous research generally shows that information markets are remarkably accurate. The political election markets at IEM predict election outcomes better than polls [16, 17, 18, 19]. Prices in HSX and FX have been found to give predictions as accurate as, or more accurate than, the judgments of individual experts [33, 34, 37]. However, information markets have not been calibrated against opinion pools, except by Servan-Schreiber et al. [36], in which the authors compare two information markets against the arithmetic average of expert opinions. Since information markets, by nature, offer an adaptive and self-organized mechanism for aggregating the opinions of market participants, it is interesting to compare them with existing opinion pooling methods, to evaluate the performance of information markets from another perspective. The comparison will provide beneficial guidance for practitioners to choose the most appropriate method for their needs. This paper contributes to the literature in two ways: (1) as an initial attempt to compare information markets with opinion pools of multiple experts, it leads to a better understanding of information markets and their promise as an alternative institution for obtaining accurate forecasts; (2) in screening opinion pools to be used in the comparison, we offer insights into the relative performance of different opinion pools. In terms of prediction accuracy, we compare two
information markets with several linear and logarithmic opinion pools (LinOP and LogOP) at predicting the results of NFL games. Our results show that at the same time point ahead of the game, information markets provide predictions as accurate as our carefully selected opinion pools. In selecting the opinion pools to be used in our comparison, we find that arithmetic average is a robust and efficient pooling function; weighting expert assessments according to their past performance does not improve the prediction accuracy of opinion pools; and LogOP offers bolder predictions than LinOP.

The remainder of the paper is organized as follows. Section 2 reviews popular opinion pooling methods. Section 3 introduces the basics of information markets. Data sets and our analysis methods are described in Section 4. We present results and analysis in Section 5, followed by conclusions in Section 6.

2. REVIEW OF OPINION POOLS

Clemen and Winkler [12] classify opinion pooling methods into two broad categories: mathematical approaches and behavioral approaches. In mathematical approaches, the opinions of individual experts are expressed as subjective probability distributions over outcomes of an uncertain event. They are combined through various mathematical methods to form an aggregated probability distribution. Genest and Zidek [24] and French [20] provide comprehensive reviews of mathematical approaches. Mathematical approaches can be further distinguished into axiomatic approaches and Bayesian approaches. Axiomatic approaches apply prespecified functions that map expert opinions, expressed as a set of individual probability distributions, to a single aggregated probability distribution. These pooling functions are justified using axioms or certain desirable properties. Two of the most common pooling functions are the linear opinion pool (LinOP) and the logarithmic opinion pool (LogOP). Using LinOP, the aggregate probability distribution is a weighted arithmetic mean of individual probability distributions:

$$p(\theta) = \sum_{i=1}^{n} w_i \, p_i(\theta),$$

where $p_i(\theta)$ is expert $i$'s probability distribution for the uncertain event $\theta$, $p(\theta)$ represents the aggregate probability distribution, the $w_i$ are weights for experts, which are usually nonnegative and sum to 1, and $n$ is the number of experts. Using LogOP, the aggregate probability distribution is a weighted geometric mean of individual probability distributions:

$$p(\theta) = k \prod_{i=1}^{n} p_i(\theta)^{w_i},$$

where $k$ is a normalization constant to ensure that the pooled opinion is a probability distribution. Other axiomatic pooling methods are often extensions of LinOP [22], LogOP [23], or both [13]. Winkler [39] and Morris [29, 30] establish the early framework of Bayesian aggregation methods. Bayesian approaches assume as if there is a decision maker who has a prior probability distribution over the event $\theta$ and a likelihood function over expert opinions given the event. This decision maker takes expert opinions as evidence and updates its priors over the event and opinions according to Bayes' rule. The resulting posterior probability distribution of $\theta$ is the pooled opinion.

Behavioral approaches have been widely studied in the field of group decision making and organizational behavior. The important assumption of behavioral approaches is that, through exchanging opinions or information, experts can eventually reach an equilibrium where further interaction won't change their opinions. One of the best known behavioral approaches is the Delphi technique [28]. Typically, this method and its variants do not allow open discussion, but each expert has a chance to judge the opinions of other experts, and is given feedback. Experts can then reassess their opinions and repeat the process until a consensus or a smaller spread of opinions is achieved. Some other behavioral methods, such as the Nominal Group technique [14], promote open discussion in controlled environments.

Each approach has its pros and cons. Axiomatic approaches are easy to use. But they don't
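To make the two pooling functions concrete, here is a minimal Python sketch (our own illustration, not code from the paper; it assumes each expert reports a single win probability for a binary event, and defaults to equal weights):

```python
from math import prod

def lin_op(probs, weights=None):
    """Linear opinion pool: weighted arithmetic mean of expert probabilities."""
    n = len(probs)
    weights = weights or [1.0 / n] * n
    return sum(w * p for w, p in zip(weights, probs))

def log_op(probs, weights=None):
    """Logarithmic opinion pool for a binary event: weighted geometric mean,
    normalized so the pooled 'win' and 'lose' probabilities sum to 1."""
    n = len(probs)
    weights = weights or [1.0 / n] * n
    win = prod(p ** w for w, p in zip(weights, probs))
    lose = prod((1 - p) ** w for w, p in zip(weights, probs))
    # the normalization constant k is 1 / (win + lose)
    return win / (win + lose)

# Two experts who both favor the same team:
linop = lin_op([0.9, 0.7])   # 0.8
logop = log_op([0.9, 0.7])   # roughly 0.821, further from 0.5 than LinOP
```

With two experts at 0.9 and 0.7, LinOP returns 0.8 while LogOP returns about 0.821, which illustrates the paper's later observation that LogOP produces bolder predictions than LinOP.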
have a normative basis to choose weights. In addition, several impossibility results (e.g., Genest [21]) show that no aggregation function can satisfy all desired properties of an opinion pool, unless the pooled opinion degenerates to a single individual opinion, which effectively implies a dictator. Bayesian approaches are nicely based on the normative Bayesian framework. However, they are sometimes frustratingly difficult to apply, because they require either (1) constructing an obscenely complex joint prior over the event and opinions (often impractical even in terms of storage/space complexity, not to mention from an elicitation standpoint) or (2) making strong assumptions about the prior, such as conditional independence of experts. Behavioral approaches allow experts to dynamically improve their information and revise their opinions during interactions, but many of them are not fixed or completely specified, and can't guarantee convergence or repeatability.

3. HOW INFORMATION MARKETS WORK

Much of the enthusiasm for information markets stems from the Hayek hypothesis [26] and the efficient market hypothesis [15]. Hayek, in his classic critique of central planning in the 1940s, claims that the price system in a competitive market is a very efficient mechanism to aggregate dispersed information among market participants. The efficient market hypothesis further states that, in an efficient market, the price of a security almost instantly incorporates all available information. The market price summarizes all relevant information across traders, and hence is the market participants' consensus expectation about the future value of the security. Empirical evidence supports both hypotheses to a large extent [25, 27, 35]. Thus, when associating the value of a security with the outcome of an uncertain future event, the market price, by revealing the consensus expectation of the security value, can indirectly predict the outcome of the event. This idea gives rise to information markets. For example, if we want to predict which team will win the NFL game between New England and Carolina, an information market can trade a security "$100 if New England defeats Carolina", whose payoff per share at the end of the game is specified as follows: $100 if New England wins the game; $0 otherwise. The security price should roughly equal the expected payoff of the security in an efficient market. The time value of money can usually be ignored because the durations of most information markets are short. Assuming exposure to risk is roughly equal for both outcomes, or that there are sufficient effectively risk-neutral speculators in the market, the price should not be biased by the risk attitudes of the various players in the market. Thus,

$$P = 100 \times \Pr(\text{Patriots win}),$$

where $P$ is the price of the security "$100 if New England defeats Carolina" and $\Pr(\text{Patriots win})$ is the probability that New England will win the game. Observing the security price $P$ before the game, we can derive $\Pr(\text{Patriots win})$, which is the market participants' collective prediction of how likely it is that New England will win the game.

The above security is a winner-takes-all contract. It is used when the event to be predicted is a discrete random variable with disjoint outcomes (in this case binary). Its price predicts the probability that a specific outcome will be realized. When the outcome of a prediction problem can be any value in a continuous interval, we can design a security that pays its holder in proportion to the realized value. This kind of security is what Wolfers and Zitzewitz [40] call an index contract. It predicts the expected value of a future outcome. Many other aspects of a future event, such as the median value of the outcome, can also be predicted in information markets by designing and trading different securities. Wolfers and Zitzewitz [40] provide a summary of the main types of securities traded in information markets and the statistical properties they can predict. In practice,
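Deriving a prediction from such a winner-takes-all security is then a simple rescaling of its price (a sketch of our own with an invented trade price, not data from the paper):

```python
def market_probability(price, payoff=100.0):
    """Implied win probability from a winner-takes-all security price."""
    return price / payoff

# A last trade at 65 implies a consensus win probability of 0.65.
p = market_probability(65)
```

The same rescaling is what the paper later applies to the last trade price before noon on game day to obtain the market predictions.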
conceiving a security for a prediction problem is only one of the many decisions in designing an effective information market. Spann and Skiera [37] propose an initial framework for designing information markets.

4. DESIGN OF ANALYSIS

4.1 Data Sets

Our data sets cover 210 NFL games held between September 28, 2003 and December 28, 2003. NFL games are very suitable for our purposes because: (1) two online exchanges and one online prediction contest already exist that provide data on both information markets and the opinions of self-identified experts for the same set of games; (2) the popularity of NFL games in the United States provides natural incentives for people to participate in information markets and/or the contest, which increases the liquidity of the information markets and improves the quality and number of opinions in the contest; (3) intense media coverage and analysis of the profiles and strengths of teams and individual players provide the public with much information, so that participants in the information markets and the contest can be viewed as knowledgeable with regard to the forecasting goal.

Information market data was acquired, using a specially designed crawler program, from TradeSports.com's Football-NFL markets [7] and NewsFutures.com's Sports Exchange [1]. For each NFL game, both TradeSports and NewsFutures have a winner-takes-all information market to predict the game outcome. We introduce the design of the two markets according to Spann and Skiera's three steps for designing an information market [37]:

• Choice of forecasting goal: Markets at both TradeSports and NewsFutures aim at predicting which of the two teams will win an NFL football game. They trade similar winner-takes-all securities that pay off 100 if a team wins the game and 0 if it loses. Small differences exist in how they deal with ties. In the case of a tie, TradeSports will unwind all trades that occurred and refund all exchange fees, but the security is worth 50 at NewsFutures. Since the probability of a tie is usually very low (much less than 1%), prices at both markets effectively represent the market participants' consensus assessment of the probability that the team will win.

• Incentive for participation and information revelation: TradeSports and NewsFutures use different incentives for participation and information revelation. TradeSports is a real-money exchange. A trader needs to open and fund an account with a minimum of $100 to participate in the market. Both profits and losses can occur as a result of trading activity. By contrast, a trader can register at NewsFutures for free and receive 2000 units of Sports Exchange virtual money at the time of registration. Traders at NewsFutures never incur any real financial loss. They can accumulate virtual money by trading securities. The virtual money can then be used to bid for a few real prizes at NewsFutures' online shop.

• Financial market design: Both markets at TradeSports and NewsFutures use the continuous double auction as their trading mechanism. TradeSports charges a small fee on each security transaction and expiry, while NewsFutures does not.

We can see that the main difference between the two information markets is real money vs. virtual money. Servan-Schreiber et al. [36] have compared the effect of money on the performance of the two information markets and concluded that the prediction accuracy of the two markets is at about the same level. Not intending to compare these two markets, we still use both markets in our analysis to ensure that our findings are not accidental.

We obtain the opinions of 1966 self-identified experts for NFL games from the ProbabilityFootball online contest [5], one of several ProbabilitySports contests [6]. The contest is free to enter. Participants in the contest are asked to enter their subjective probability that a team will win a game by noon on the day of the game. Importantly, the contest evaluates the participants' performance via the quadratic scoring rule:

$$s = 100 - 400 \times (\text{Prob\_Lose})^2,$$

where $s$ represents the score that a participant earns for the game, and Prob_Lose is the probability that the participant assigns to the actual losing team. The quadratic score is one of a family of so-called proper scoring rules, which have the property that an expert's expected score is maximized when the expert reports probabilities truthfully. For example, for a game of team A vs.
team B, if a player assigns 0.5 to both team A and team B, his/her score for the game is 0 no matter which team wins. If he/she assigns 0.8 to team A and 0.2 to team B, showing confidence in team A's winning, he/she will score 84 points for the game if team A wins, and lose 156 points if team B wins. This quadratic scoring rule rewards bold predictions that are right, but penalizes bold predictions that turn out to be wrong. The top players, measured by accumulated scores over all games, win the prizes of the contest. The suggested strategy at the contest website is "to make picks for each game that match, as closely as possible, the probabilities that each team will win". This strategy is correct if the participant seeks to maximize expected score. However, as prizes are awarded only to the top few winners, participants' goal is to maximize the probability of winning, not to maximize expected score, resulting in a slightly different and more risk-seeking optimization. (Ideally, prizes would instead be awarded by lottery in proportion to accumulated score.) Still, as far as we are aware, these data offer the closest thing available to true subjective probability judgments from so many people over so many public events that have corresponding information markets.

4.2 Methods of Analysis

In order to compare the prediction accuracy of information markets with that of opinion pools, we proceed to derive predictions from the market data of TradeSports and NewsFutures, form pooled opinions using expert data from the ProbabilityFootball contest, and specify the performance measures to be used.

4.2.1 Deriving Predictions

For information markets, deriving predictions is straightforward. We can take the security price and divide it by 100 to get the market's prediction of the probability that a team will win. To match the time when participants in the ProbabilityFootball contest are required to report their probability assessments, we derive predictions using the last trade price before noon on the day of the game. For more than half of the games, this time is only about an hour earlier than the game starting time, while it is several hours earlier for other games. Two sets of market predictions are derived:

• NF: Prediction equals NewsFutures' last trade price before noon of the game day, divided by 100.

• TS: Prediction equals TradeSports' last trade price before noon of the game day, divided by 100.

We apply LinOP and LogOP to the ProbabilityFootball data to obtain aggregate expert predictions. The reasons that we do not consider other aggregation methods include: (1) data from ProbabilityFootball is only suitable for mathematical pooling methods, so we can rule out behavioral approaches; (2) Bayesian aggregation requires us to make assumptions about the prior probability distribution of game outcomes and the likelihood function of expert opinions, and given the large number of games and participants, making reasonable assumptions is difficult; and (3) for axiomatic approaches, previous research has shown that simpler aggregation methods often perform better than more complex methods [12]. Because the output of LogOP is indeterminate if there are probability assessments of both 0 and 1 (and because assessments of 0 and 1 are dictatorial under LogOP), we add a small number, 0.01, to an expert opinion if it is 0, and subtract 0.01 from it if it is 1.

In pooling opinions, we consider two influencing factors: the weights of experts and the number of expert opinions to be pooled. For the weights of experts, we experiment with equal weights and performance-based weights. The performance-based weights are determined according to previous accumulated score in the contest. The score for each game is calculated according to the quadratic scoring rule used in the ProbabilityFootball contest. For the first week, since no previous scores are available, we choose equal weights. For later weeks, we calculate accumulated past scores for each
player. Because the cumulative scores can be negative, we shift everyone's score if needed to ensure the weights are non-negative. Thus,

$$w_i \propto \text{cumulative score}_i + \text{shift},$$

where shift equals 0 if the smallest cumulative score is non-negative, and equals the absolute value of the smallest cumulative score otherwise. For simplicity, we refer to the performance-weighted opinion pool as "weighted", and to the equally weighted opinion pool as "unweighted". We will use these terms interchangeably in the remainder of the paper.

As for the number of opinions used in an opinion pool, we form different opinion pools with different numbers of experts. Only the best performing experts are selected. For example, to form an opinion pool with 20 expert opinions, we choose the top 20 participants. Since there is no performance record for the first week, we use the opinions of all participants in the first week. For week 2, we select the opinions of the 20 individuals whose scores in the first week are among the top 20. For week 3, the 20 individuals whose cumulative scores over weeks 1 and 2 are among the top 20 are selected. Experts are chosen in a similar way for later weeks. Thus, the top 20 participants can change from week to week. The possible opinion pools, varied in pooling functions, weighting methods, and number of expert opinions, are shown in Table 1.

Table 1: Pooled Expert Predictions

                LinOP                   LogOP
  Unweighted    Lin-n-u, Lin-All-u      Log-n-u, Log-All-u
  Weighted      Lin-n-w, Lin-All-w      Log-n-w, Log-All-w

"Lin" represents linear, and "Log" represents logarithmic. "n" is the number of expert opinions that are pooled, and "All" indicates that all opinions are combined. We use "u" to symbolize unweighted (equally weighted) opinion pools. "w" is used for weighted (performance-weighted) opinion pools. Lin-All-u, the equally weighted LinOP with all participants, is basically the arithmetic mean of all participants' opinions. Log-All-u is simply the geometric mean of all opinions. When a participant did not enter a prediction for a particular game, that participant was removed from the opinion pool for that game. This contrasts with the
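The shift-based weighting just described can be sketched in a few lines of Python (our own code, not from the paper; normalizing the shifted scores so the weights sum to 1, as the LinOP definition requires, is our assumption):

```python
def performance_weights(cum_scores):
    """Weight experts by cumulative past score, shifted to be non-negative.

    shift is 0 if the smallest cumulative score is non-negative, and the
    absolute value of the smallest cumulative score otherwise.
    """
    shift = max(0.0, -min(cum_scores))
    shifted = [s + shift for s in cum_scores]
    total = sum(shifted)
    if total == 0:  # degenerate case: every expert has the same (minimal) score
        return [1.0 / len(cum_scores)] * len(cum_scores)
    return [s / total for s in shifted]

# Three experts with cumulative scores 300, -100, and 200:
# shift = 100, shifted scores = [400, 0, 300], weights = [4/7, 0, 3/7]
w = performance_weights([300, -100, 200])
```

Note that the worst-performing expert receives weight 0 under this scheme, so the shift effectively drops the bottom expert from the weighted pool.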
"ProbabilityFootball average" reported on the contest website and used by Servan-Schreiber et al. [36], where unreported predictions were converted to 0.5 probability predictions.

4.2.2 Performance Measures

We use three common metrics to assess the prediction accuracy of information markets and opinion pools. These measures were used by Servan-Schreiber et al. [36] in evaluating the prediction accuracy of information markets.

1. Absolute Error = Prob_Lose, where Prob_Lose is the probability assigned to the eventual losing team. Absolute error simply measures the difference between a perfect prediction (1 for the winning team) and the actual prediction. A prediction with lower absolute error is more accurate.

2. Quadratic Score = 100 − 400 × (Prob_Lose)². The quadratic score is the scoring function used in the ProbabilityFootball contest. It is a linear transformation of the squared error, (Prob_Lose)², which is one of the most widely used metrics in evaluating forecasting accuracy. The quadratic score can be negative. A prediction with a higher quadratic score is more accurate.

3. Logarithmic Score = log(Prob_Win), where Prob_Win is the probability assigned to the eventual winning team. The logarithmic score, like the quadratic score, is a proper scoring rule. A prediction with a higher (less negative) logarithmic score is more accurate.

5. EMPIRICAL RESULTS

5.1 Performance of Opinion Pools

Depending on how many opinions are used, there can be numerous different opinion pools. We first examine the effect of the number of opinions on prediction accuracy by forming opinion pools with the number of expert opinions varying from 1 to 960. In the ProbabilityFootball competition, not all 1966 registered participants provide their probability assessments for every game; 960 is the smallest number of participants across all games. For each game, we sort experts according to their accumulated quadratic score in previous weeks. Predictions of the best
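The three measures can be written directly in Python (our own sketch; `prob_lose` and `prob_win` are the probabilities assigned to the eventual loser and winner, respectively):

```python
import math

def absolute_error(prob_lose):
    """Distance from a perfect prediction; lower is better."""
    return prob_lose

def quadratic_score(prob_lose):
    """Contest scoring rule; can be negative; higher is better."""
    return 100 - 400 * prob_lose ** 2

def logarithmic_score(prob_win):
    """Proper log score; higher (less negative) is better."""
    return math.log(prob_win)

# A confident, correct prediction of 0.8 scores about 84 points;
# the same confidence on the wrong team scores about -156.
win_case = round(quadratic_score(0.2))
lose_case = round(quadratic_score(0.8))
```

This reproduces the worked example from the contest rules: assigning 0.8 to the eventual winner earns 84 points, while assigning 0.8 to the eventual loser costs 156 points.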
performing n participants are picked to form an opinion pool with n experts.\nFigure 1 shows the prediction accuracy of LinOP and LogOP in terms of mean values of the three performance measures across all 210 games.\nWe can see the following trends in the figure.\n1.\nUnweighted opinion pools and performance-weighted opinion pools have similar levels of prediction accuracy, especially for LinOP.\n2.\nFor LinOP, increasing the number of experts in general increases, or at least maintains, the level of prediction accuracy.\nWhen there are more than 200 experts, the prediction accuracy of LinOP is stable with respect to the number of experts.\n3.\nLogOP seems more accurate than LinOP in terms of mean absolute error.\nHowever, on all other performance measures, LinOP outperforms LogOP.\n4.\nFor LogOP, increasing the number of experts initially increases the prediction accuracy.\nHowever, the curves (including the points with all experts) for mean quadratic score and mean logarithmic score have slight bell shapes, representing a decrease in prediction accuracy when the number of experts is very large.\nThe curves for mean absolute error, on the other hand, show a consistent increase in accuracy.\nThe first and second trends above imply that, when using LinOP, the simplest approach, averaging the opinions of all experts, already achieves good prediction accuracy.\nWeighting does not seem to improve performance.\nSelecting experts according to past performance also does not help.\nInterestingly, even though many participants of the ProbabilityFootball contest do not provide accurate individual predictions (they have negative quadratic scores in the contest), including their opinions in the opinion pool still increases the prediction accuracy.\nOne explanation of this phenomenon is that biases in individual judgment can offset one another when opinions are diverse, which makes the pooled prediction more accurate.\nThe third trend presents a 
controversy.\nThe relative prediction accuracy of LogOP and LinOP flips when different accuracy measures are used.\nTo investigate this disagreement, we plot the absolute error of Log-All-u and Lin-All-u for each game in Figure 2.\nFigure 1: Prediction Accuracy of Opinion Pools\nWhen the absolute error of an opinion pool for a game is less than 0.5, the team favored by the opinion pool wins the game.\nIf it is greater than 0.5, the underdog wins.\nCompared with Lin-All-u, Log-All-u has lower absolute error when it is less than 0.5, and greater absolute error when it is greater than 0.5, which indicates that predictions of Log-All-u are bolder, closer to 0 or 1, than those of Lin-All-u.\nThis is due to the nature of the linear and logarithmic aggregation functions.\nBecause quadratic score and logarithmic score penalize bold predictions that are wrong, LogOP is less accurate when measured in these terms.\nSimilar reasoning accounts for the fourth trend.\nWhen there are more than 500 experts, increasing the number of experts used in LogOP improves the prediction accuracy measured by absolute error, but worsens the accuracy measured by the other two metrics.\nExamining expert opinions, we find that lower-ranked participants offer extreme predictions (0 or 1) more frequently than higher-ranked ones.\nWhen we increase the number of experts in an opinion pool, we incorporate more extreme predictions into it.\nThe resulting LogOP is bolder, and hence has lower mean quadratic score and mean logarithmic score.\n5.2 Comparison of Information Markets and Opinion Pools\nThrough this first screening of various opinion pools, we select Lin-All-u, Log-All-u, Log-All-w, and Log-200-u to compare with predictions from information markets.\nLin-All-u, as shown in Figure 1, is representative of what LinOP can achieve.\nHowever, the performance of LogOP is not consistent when evaluated using different metrics.\nLog-All-u and Log-All-w offer either the best or 
the worst predictions.\nLog-200-u, the LogOP with the 200 top performing experts, provides more stable predictions.\nWe use all three to represent the performance of LogOP in our later comparison.\nIf a prediction of the probability that a team will win a game, whether from an opinion pool or an information market, is higher than 0.5, we say that the team is the predicted favorite for the game.\nTable 2 presents the number and percentage of games in which predicted favorites actually win, out of a total of 210 games.\nAll four opinion pools correctly predict a similar number and percentage of games as NF and TS.\nSince NF, TS, and the four opinion pools form their predictions using information available at noon of the game day, information markets and opinion pools have comparable potential at the same time point.\nTable 2: Number and Percentage of Games that Predicted Favorites Win\nTable 3: Mean of Prediction Accuracy Measures\nFigure 2: Absolute Error: Lin-All-u vs. Log-All-u\nWe then take a closer look at the prediction accuracy of information markets and opinion pools using the three performance measures.\nTable 3 displays mean values of these measures over the 210 games.\nNumbers in parentheses are standard errors, which estimate the standard deviation of the mean.\nTo account for the skewness of the distributions, we also report median values of the accuracy measures in Table 4.\nJudged by the mean values of the accuracy measures in Table 3, all methods have similar accuracy levels, with NF and TS slightly better than the opinion pools.\nHowever, the median values of the accuracy measures indicate that the Log-All-u and Log-All-w opinion pools are more accurate than all other predictions.\nWe employ the randomization test [32] to study whether the differences in prediction accuracy presented in Table 3 and Table 4 are statistically significant.\nThe basic idea of the randomization test is that, by randomly swapping the predictions of two methods numerous times, an empirical 
distribution for the difference in prediction accuracy can be constructed.\nUsing this empirical distribution, we can then evaluate at what confidence level the observed difference reflects a real difference.\nFor example, the mean absolute error of NF is higher than that of Log-All-u by 0.0229, as shown in Table 3.\nTo test whether this difference is statistically significant, we shuffle the predictions of the two methods, randomly label half of the predictions as NF and the other half as Log-All-u, and compute the difference in mean absolute error of the newly formed NF and Log-All-u data.\nThe above procedure is repeated 10,000 times.\nThe 10,000 differences in mean absolute error form an empirical distribution of the difference.\nComparing our observed difference, 0.0229, with this distribution, we find that the observed difference is greater than 75.37% of the empirical differences.\nThis leads us to conclude that the difference in mean absolute error between NF and Log-All-u is not statistically significant at a significance level of 0.05.\nTable 5 and Table 6 present the results of the randomization test for mean and median differences, respectively.\nEach table cell corresponds to two prediction methods, identified by the names of its row and column.\nThe first line of each cell reports results for absolute error, and the second and third lines report quadratic score and logarithmic score, respectively.\nWe can see that, in terms of mean values of the accuracy measures, the differences among all methods are not statistically significant to any reasonable degree.\nWhen it comes to median values of prediction accuracy, Log-All-u outperforms Lin-All-u at a high confidence level.\nTable 4: Median of Prediction Accuracy Measures\nTable 5: Statistical Confidence of Mean Differences in Prediction Accuracy\n* In each table cell, row 1 accounts for absolute error, row 2 for quadratic score, and row 3 for logarithmic score.\nThese 
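The randomization procedure just described can be sketched as a short permutation test. This is an illustrative sketch (function name and the seeded generator are our own choices), returning the fraction of empirical differences that the observed difference exceeds, as reported in the text:

```python
import random

def randomization_test(errors_a, errors_b, trials=10000, seed=0):
    """Pool the per-game errors of two methods, repeatedly relabel them at
    random into two equal halves, and record the mean difference of each
    relabeling. Returns the fraction of these empirical differences that
    the observed difference exceeds."""
    rng = random.Random(seed)
    observed = sum(errors_a) / len(errors_a) - sum(errors_b) / len(errors_b)
    pooled = list(errors_a) + list(errors_b)
    n = len(errors_a)
    exceeded = 0
    for _ in range(trials):
        rng.shuffle(pooled)  # random relabeling of the pooled predictions
        diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
        if observed > diff:
            exceeded += 1
    return exceeded / trials
```

A returned fraction near 1 would indicate a significant difference; a value such as 0.7537 (as for NF vs. Log-All-u) does not reach significance at the 0.05 level.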
results indicate that differences in prediction accuracy between information markets and opinion pools are not statistically significant.\nThis may seem to contradict the result of Servan-Schreiber et al. [36], in which NewsFutures's information markets were shown to provide statistically significantly more accurate predictions than the (unweighted) average of all ProbabilityFootball opinions.\nThe discrepancy emerges from the handling of missing data.\nNot all 1966 registered ProbabilityFootball participants offer probability assessments for each game.\nWhen a participant does not provide a probability assessment for a game, the contest treats the prediction as 0.5.\nThis makes sense in the context of the contest, since 0.5 always yields a quadratic score of 0.\nThe ProbabilityFootball average reported on the contest website and used by Servan-Schreiber et al. includes these 0.5 estimates.\nInstead, we remove participants from games for which they do not provide assessments, pooling only the available opinions together.\nThis treatment increases the prediction accuracy of Lin-All-u significantly.\n6.\nCONCLUSIONS\nWith the fast growth of the Internet, information markets have recently emerged as an alternative tool for predicting future events.\nPrevious research has shown that information markets give predictions that are as accurate as, or more accurate than, those of individual experts and polls.\nHowever, information markets, as an adaptive mechanism for aggregating the different opinions of market participants, have not been calibrated against many belief aggregation methods.\nIn this paper, we compare the prediction accuracy of information markets with linear and logarithmic opinion pools (LinOP and LogOP) using predictions from two markets and 1966 individuals regarding the outcomes of 210 American football games during the 2003 NFL season.\nIn screening for representative opinion pools to compare with information markets, we investigate the effect of weights and number of experts on 
prediction accuracy.\nOur results on both the comparison of information markets and opinion pools and the relative performance of different opinion pools are summarized below.\n1.\nAt the same time point ahead of the events, information markets offer predictions as accurate as those of our selected opinion pools.\nWe have selected four opinion pools to represent the prediction accuracy level that LinOP and LogOP can achieve.\nOn all four performance metrics, our two information markets obtain prediction accuracy similar to that of the four opinion pools.\nTable 6: Statistical Confidence of Median Differences in Prediction Accuracy\n* In each table cell, row 1 accounts for absolute error, row 2 for quadratic score, and row 3 for logarithmic score.\n* Confidence above 95% is shown in bold.\n2.\nThe arithmetic average of all opinions (Lin-All-u) is a simple, robust, and efficient opinion pool.\nSimply averaging across all experts seems to result in better predictions than individual opinions and opinion pools with a few experts.\nIt is quite robust in the sense that even if the included individual predictions are less accurate, averaging over all opinions still gives better (or equally good) predictions.\n3.\nWeighting expert opinions according to past performance does not seem to significantly improve the prediction accuracy of either LinOP or LogOP.\nComparing performance-weighted opinion pools with equally weighted opinion pools, we do not observe much difference in prediction accuracy.\nSince we use only one performance-weighting method, calculating the weights from the accumulated quadratic scores that participants earned, this might be due to the weighting method we chose.\n4.\nLogOP yields bolder predictions than LinOP.\nLogOP yields predictions that are closer to the extremes, 0 or 1.\nAn information market is a self-organizing mechanism for aggregating information and making predictions.\nCompared with opinion pools, it is less constrained by space and time, 
and it eliminates the effort of identifying experts and choosing belief aggregation methods.\nThese advantages do not compromise its prediction accuracy to any extent.\nOn the contrary, information markets can provide real-time predictions, which are hard to achieve by resorting to experts.\nIn the future, we are interested in further exploring:\n\u2022 Performance comparison of information markets with other opinion pools and mathematical aggregation procedures.\nIn this paper, we only compare information markets with two simple opinion pools, linear and logarithmic.\nIt would be worthwhile to investigate their relative prediction accuracy with other belief aggregation methods such as Bayesian approaches.\nThere are also a number of theoretical expert algorithms with proven worst-case performance bounds [10] whose average-case or practical performance would be instructive to investigate.\n\u2022 Whether defining expertise more narrowly can improve the predictions of opinion pools.\nIn our analysis, we broadly treat participants of the ProbabilityFootball contest as experts in all games.\nIf we define expertise more narrowly, selecting experts on certain football teams to predict games involving those teams, will the predictions of opinion pools be more accurate?\n\u2022 The possibility of combining information markets with other forecasting methods to achieve better prediction accuracy.\nChen, Fine, and Huberman [11] use an information market to determine the risk attitude of participants, and then perform a nonlinear aggregation of their predictions based on their risk attitudes.\nThe nonlinear aggregation mechanism is shown to outperform both the market and the best individual participants.\nWhether information markets, as an alternative forecasting method, can be used together with other methods to improve predictions deserves more attention.\n7.\nACKNOWLEDGMENTS\nWe thank Brian Galebach, the owner and operator of the ProbabilitySports and ProbabilityFootball 
websites, for providing us with such unique and valuable data.\nWe thank Varsha Dani, Lance Fortnow, Omid Madani, Sumit Sanghai, and the anonymous reviewers for useful insights and pointers.\nThe authors acknowledge the support of The Penn State eBusiness Research Center.","keyphrases":["inform market","opinion pool","forecast","expert aggreg","market probabl","pool predict","price","futur event","expertis","contract","expert opinion","predict accuraci"],"prmu":["P","P","P","P","P","P","P","M","U","U","R","R"]} {"id":"C-79","title":"A Cross-Layer Approach to Resource Discovery and Distribution in Mobile ad-hoc Networks","abstract":"This paper describes a cross-layer approach to designing a robust P2P system over mobile ad-hoc networks. The design is based on simple functional primitives that allow routing at both the P2P and network layers to be integrated to reduce overhead. With these primitives, the paper addresses various load balancing techniques. Preliminary simulation results are also presented.","lvl-1":"A Cross-Layer Approach to Resource Discovery and Distribution in Mobile ad-hoc Networks Chaiporn Jaikaeo Computer Engineering Kasetsart University, Thailand (+662) 942-8555 Ext 1424 cpj@cpe.ku.ac.th Xiang Cao Computer and Information Sciences University of Delaware, USA (+1) 302-831-1131 cao@cis.udel.edu Chien-Chung Shen Computer and Information Sciences University of Delaware, USA (+1) 302-831-1951 cshen@cis.udel.edu ABSTRACT This paper describes a cross-layer approach to designing a robust P2P system over mobile ad-hoc networks.\nThe design is based on simple functional primitives that allow routing at both the P2P and network layers to be integrated to reduce overhead.\nWith these primitives, the paper addresses various load balancing techniques.\nPreliminary simulation results are also presented.\nCategories and Subject Descriptors C.2.4 [Distributed Systems]: Distributed Applications General Terms Algorithms and design 1.\nINTRODUCTION Mobile ad-hoc networks 
(MANETs) consist of mobile nodes that autonomously establish connectivity via multi-hop wireless communications.\nWithout relying on any existing, pre-configured network infrastructure or centralized control, MANETs are useful in situations where impromptu communication facilities are required, such as battlefield communications and disaster relief missions.\nAs MANET applications demand collaborative processing and information sharing among mobile nodes, resource (service) discovery and distribution have become indispensable capabilities.\nOne approach to designing resource discovery and distribution schemes over MANETs is to construct a peer-to-peer (P2P) system (or an overlay) which organizes the peers of the system into a logical structure on top of the actual network topology.\nHowever, deploying such P2P systems over MANETs may result in either a large number of flooding operations triggered by the reactive routing process, or inefficient bandwidth utilization in proactive routing schemes.\nEither way, constructing an overlay will potentially create a scalability problem for large-scale MANETs.\nDue to the dynamic nature of MANETs, P2P systems should be robust by being scalable and adaptive to topology changes.\nThese systems should also provide efficient and effective ways for peers to interact, as well as other desirable application-specific features.\nThis paper describes a design paradigm that uses the following two functional primitives to design robust resource discovery and distribution schemes over MANETs.\n1.\nPositive\/negative feedback.\nQuery packets are used to explore a route to other peers holding resources of interest.\nOptionally, advertisement packets are sent out to inform other peers about available resources and routes to them.\nWhen traversing a route, these control packets measure the goodness of the route and leave feedback information on each node along the way to guide subsequent control packets in appropriate 
directions.\n2.\nSporadic random walk.\nAs the network topology and\/or the availability of resources change, existing routes may become stale while better routes become available.\nSporadic random walk allows a control packet to explore different paths and opportunistically discover new and\/or better routes.\nAdopting this paradigm, the whole MANET P2P system operates as a collection of autonomous entities consisting of different types of control packets, such as query and advertisement packets.\nThese packets work collaboratively, but indirectly, to achieve common tasks such as resource discovery, routing, and load balancing.\nWith collaboration among these entities, a MANET P2P system is able to learn the network dynamics by itself and adjust its behavior accordingly, without the overhead of organizing peers into an overlay.\nThe remainder of this paper is organized as follows.\nRelated work is described in the next section.\nSection 3 describes the resource discovery scheme.\nSection 4 describes the resource distribution scheme.\nThe replica invalidation scheme is described in Section 5, followed by its performance evaluation in Section 6.\nSection 7 concludes the paper.\n2.\nRELATED WORK For MANETs, P2P systems can be classified, based on the design principle, into layered and cross-layer approaches.\nA layered approach adopts a P2P-like [1] solution, where resource discovery is facilitated as an application layer protocol and query\/reply messages are delivered by the underlying MANET routing protocols.\nFor instance, Konark [2] makes use of an underlying multicast protocol such that service providers and queriers advertise and search for services via a predefined multicast group, respectively.\nProem [3] is a high-level mobile computing platform for P2P systems over MANETs.\nIt defines a transport protocol that sits on top of the existing TCP\/IP stack, hence relying on an existing routing protocol to operate.\nWith limited control over how control and data 
packets are routed in the network, it is difficult to avoid the inefficiency of general-purpose routing protocols, which are often reactive and flooding-based.\nIn contrast, cross-layer approaches either rely on their own routing mechanisms or augment existing MANET routing algorithms to support resource discovery.\n7DS [4], which is the pioneering work deploying a P2P system on mobile devices, exploits data locality and node mobility to disseminate data in a single-hop fashion.\nHence, long search latency may result, as a 7DS node can get data of interest only if the node that holds the data is within its radio coverage.\nMohan et al. [5] propose an adaptive service discovery algorithm that combines both push and pull models.\nSpecifically, a service provider\/querier broadcasts an advertisement\/query only when the number of nodes advertising or querying, which is estimated from received control packets, is below a threshold during a period of time.\nIn this way, the number of control packets on the network is constrained, thus providing good scalability.\nDespite the mechanism to reduce control packets, high overhead may still be unavoidable, especially when there are many clients trying to locate different services, due to the fact that the algorithm relies on flooding.\nFor resource replication, Yin and Cao [6] design and evaluate cooperative caching techniques for MANETs.\nCaching, however, is performed reactively by intermediate nodes when a querier requests data from a server.\nData items or resources are never pushed to other nodes proactively.\nThanedar et al. 
[7] propose a lightweight content replication scheme using an expanding ring technique.\nIf a server detects that the number of requests exceeds a threshold within a time period, it begins to replicate its data onto nodes capable of storing replicas whose hop counts from the server are of certain values.\nSince data replication is triggered by the request frequency alone, replicas may be unnecessarily created over a large scope even though only nodes within a small range request this data.\nOur proposed resource replication mechanism, in contrast, attempts to replicate a data item in the appropriate areas where the item is requested frequently, instead of in a large area around the server.\n3.\nRESOURCE DISCOVERY We propose a cross-layer, hybrid resource discovery scheme that relies on the multiple interactions of query, reply, and advertisement packets.\nWe assume that each resource is associated with a unique ID.\n(Footnote 1: The assumption of unique IDs is made for brevity in exposition; resources could be specified via attribute-value assertions.)\nInitially, when a node wants to discover a resource, it deploys query packets, which carry the corresponding resource ID and randomly explore the network to search for the requested resource.\nUpon receiving such a query packet, a reply packet is generated by the node providing the requested resource.\nAdvertisement packets can also be used to proactively inform other nodes about the resources available at each node.\nIn addition to discovering the identity of the node providing the requested resource, it may also be necessary to discover a route leading to this node for further interaction.\nTo allow intermediate nodes to make a decision on where to forward query packets, each node maintains two tables: a neighbor table and a pheromone table.\nThe neighbor table maintains a list of all current neighbors obtained via a neighbor discovery protocol.\nThe pheromone table maintains the mapping of a resource ID 
and a neighbor ID to a pheromone value.\nThis table is initially empty and is updated by reply packets generated by successful queries.\nFigure 1 illustrates an example of the neighbor table and pheromone table maintained by node A, which has four neighbors.\nWhen node A receives a query packet searching for a resource, it decides to which neighbor it should forward the query packet by computing the desirability of each of the neighbors that have not been visited before by the same query packet.\nFor a resource ID r, the desirability of choosing a neighbor n, \u03b4(r,n), is obtained from the pheromone value of the entry whose neighbor and resource ID fields are n and r, respectively.\nIf no such entry exists in the pheromone table, \u03b4(r,n) is set to zero.\nOnce the desirabilities of all valid next hops have been calculated, they are normalized to obtain the probability of choosing each neighbor.\nIn addition, a small probability is also assigned to those neighbors with zero desirability to exercise the sporadic random walk primitive.\nBased on these probabilities, a next hop is selected to forward the query packet to.\nWhen a query packet encounters a node with a satisfying resource, a reply packet is returned to the querying node.\nThe returning reply packet also updates the pheromone table of each node on its return trip by increasing the pheromone value in the entry whose resource ID and neighbor ID fields match the ID of the discovered resource and the previous hop, respectively.\nIf such an entry does not exist, a new entry is added to the table.\nTherefore, subsequent query packets looking for the same resource, when encountering this pheromone information, are guided toward the same destination with a small probability of taking an alternate path.\nSince the hybrid discovery scheme neither relies on a MANET routing protocol nor arranges nodes into a logical overlay, query packets traverse the actual network topology.\nIn dense 
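The pheromone-weighted forwarding rule just described can be sketched as follows. This is an illustrative sketch under our own assumptions: the pheromone table is a dict keyed by (resource ID, neighbor ID), and the small exploration weight granted to zero-pheromone neighbors is an arbitrary constant, since the paper does not fix its value:

```python
import random

EXPLORE_WEIGHT = 0.05  # assumed small desirability for zero-pheromone neighbors

def choose_next_hop(pheromone, resource_id, neighbors, visited, rng=random):
    """Pick the neighbor to forward a query packet to: desirability is the
    pheromone for (resource, neighbor), with a small floor so that the
    sporadic random walk primitive can still pick unexplored neighbors."""
    candidates = [n for n in neighbors if n not in visited]
    if not candidates:
        return None
    weights = [max(pheromone.get((resource_id, n), 0.0), EXPLORE_WEIGHT)
               for n in candidates]
    total = sum(weights)
    probs = [w / total for w in weights]  # normalized selection probabilities
    return rng.choices(candidates, weights=probs, k=1)[0]
```

With a strong pheromone entry for one neighbor, the query almost always follows it, while the floor weight keeps a small chance of taking an alternate path.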
networks, relatively large nodal degrees can have a potential impact on this random exploring mechanism.\nTo address this issue, the hybrid scheme also incorporates proactive advertisement in addition to the reactive query.\nTo perform proactive advertisement, each node periodically deploys an advertising packet containing a list of the IDs of its available resources.\nThese packets traverse away from the advertising node in a random walk manner, up to a limited number of hops, and advertise resource information to surrounding nodes in the same way as reply packets.\nIn the hybrid scheme, an increase of pheromone serves as positive feedback which indirectly guides query packets looking for similar resources.\nIntuitively, the amount of pheromone increase is inversely proportional to the distance the reply packet has traveled back, and other metrics, such as the quality of the resource, could contribute to this amount as well.\nEach node also performs implicit negative feedback for resources that have not been given positive feedback for some time by regularly decreasing the pheromone in all of its pheromone table entries over time.\nIn addition, pheromone can be reduced by an explicit negative response, for instance, a reply packet returning from a node that is not willing to provide a resource due to excessive workload.\nAs a result, load balancing can be achieved via positive and negative feedback.\nA node serving too many nodes can either return fewer responses to query packets or generate negative responses.\nThe 3rd International Conference on Mobile Technology, Applications and Systems - Mobility 2006\nFigure 1: Example illustrating the neighbor and pheromone tables maintained by node A: (a) wireless connectivity around A, showing that it currently has four neighbors, (b) A's neighbor table, and (c) a possible pheromone table of A\nFigure 2: Sample scenarios illustrating the three mechanisms supporting load balancing: (a) resource replication, (b) resource 
relocation, and (c) resource division\n4.\nRESOURCE DISTRIBUTION In addition to resource discovery, a querying node usually attempts to access and retrieve the contents of a resource after a successful discovery.\nIn certain situations, it is also beneficial to make a resource readily available at multiple nodes when the resource can be relocated and\/or replicated, such as data files.\nFurthermore, in MANETs, we should consider not only the amount of load handled by a resource provider, but also the load on those intermediate nodes that are located on the communication paths between the provider and other nodes.\nHence, we describe a cross-layer, hybrid resource distribution scheme that achieves load balancing by incorporating the functionalities of resource relocation, resource replication, and resource division.\n4.1 Resource Replication Multiple replicas of a resource in the network help prevent a single node, as well as the nodes surrounding it, from being overloaded by a large number of requests and data transfers.\nFor example, when a node has obtained a data file from another node, the requesting node and the intermediate nodes can cache the file and immediately start sharing it with other surrounding nodes.\nIn addition, replicable resources can also be proactively replicated at other nodes located in certain strategic areas.\nFor instance, to help nodes find a resource quickly, we could replicate the resource so that it becomes reachable by a random walk of a specific number of hops from any node with some probability, as depicted in Figure 2(a).\nTo realize this feature, the hybrid resource distribution scheme employs a different type of control packet, called the resource replication packet, which is responsible for finding an appropriate place to create a replica of a resource.\nA resource replication packet of type R is deployed by the node that provides the resource R itself.\nUnlike a query packet, which follows higher pheromone 
upstream toward a resource it is looking for, a resource replication packet tends to be propelled away from similar resources by moving downstream toward weaker pheromone.\nWhen a resource replication packet finds itself in an area with sufficiently low pheromone, it decides whether it should continue exploring or turn back.\nThe decision depends on conditions such as the current workload and\/or remaining energy of the node being visited, as well as the popularity of the resource itself.\n4.2 Resource Relocation In certain situations, a resource may need to be transferred from one node to another.\nFor example, a node may no longer want to possess a file due to a shortage of storage space, but it cannot simply delete the file since other nodes may still need it in the future.\nIn this case, the node can choose to create replicas of the file by the aforementioned resource replication mechanism and then delete its own copy.\nConsider a situation where a majority of the nodes requesting a resource are located far away from the resource provider, as shown at the top of Figure 2(b).\nIf the resource R is relocatable, it is preferable to relocate it to another area that is closer to those nodes, as at the bottom of the same figure.\nHence, network bandwidth is more efficiently utilized.\nThe hybrid resource distribution scheme incorporates resource relocation algorithms that are adaptive to user requests and aim to reduce communication overhead.\nSpecifically, by following the same pheromone maintenance concept, the hybrid resource distribution scheme introduces another type of pheromone which corresponds to user requests instead of resources.\nThis type of pheromone, called request pheromone, is set up by query packets that are in their exploring phases (not returning ones) to guide a resource to a new location.\n4.3 Resource Division Certain types of resources can 
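The replication packet's behavior, moving toward weaker pheromone and deciding whether the visited node is a suitable replica host, can be sketched as follows. All thresholds and the inverse-weighting formula are our own illustrative assumptions; the paper specifies only the qualitative conditions (low pheromone, workload, remaining energy, resource popularity):

```python
import random

def choose_replication_hop(pheromone, resource_id, neighbors, visited, rng=random):
    """A resource replication packet prefers neighbors with WEAKER pheromone
    for the resource, pushing replicas away from areas that already reach it
    (the inverse of query forwarding)."""
    candidates = [n for n in neighbors if n not in visited]
    if not candidates:
        return None
    levels = [pheromone.get((resource_id, n), 0.0) for n in candidates]
    weights = [1.0 / (1.0 + lvl) for lvl in levels]  # low pheromone -> high weight
    return rng.choices(candidates, weights=weights, k=1)[0]

def should_replicate_here(pheromone_level, workload, energy,
                          pheromone_thresh=0.1, load_thresh=0.8, energy_thresh=0.2):
    """Replicate only where pheromone is sufficiently low (few nearby copies)
    and the visited node has spare capacity; thresholds are illustrative."""
    return (pheromone_level < pheromone_thresh
            and workload < load_thresh
            and energy > energy_thresh)
```

In a fuller sketch, the packet would also weigh the popularity of the resource before deciding to turn back.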
be divided into smaller sub-resources (e.g., a large file being broken into smaller files) and distributed to multiple locations to avoid overloading a single node, as depicted in Figure 2(c).\nThe hybrid resource distribution scheme incorporates a resource division mechanism that operates as a thin layer right above all the other mechanisms described earlier.\nThe resource division mechanism is responsible for decomposing divisible resources into sub-resources and then adding an extra keyword to distinguish each sub-resource from the others.\nTherefore, each of these sub-resources will be seen by the other mechanisms as a single resource which can be independently discovered, replicated, and relocated.\nThe resource division mechanism is also responsible for combining data from these sub-resources (e.g., merging pieces of a file) and delivering the final result to the application.\n5.\nREPLICA INVALIDATION Although replicas improve accessibility and balance load, replica invalidation becomes a critical issue when nodes caching updatable resources may concurrently update their own replicas, which renders the replicas held by other nodes obsolete.\nMost existing solutions to the replica invalidation problem either impose the constraint that only the data source may perform updates and invalidate other replicas, or resort to network-wide flooding, which results in heavy network traffic and leads to scalability problems, or both.\nThe lack of infrastructure support and frequent topology changes in MANETs further complicate the issue.\nWe apply the same cross-layer paradigm to invalidating replicas in MANETs, which allows concurrent updates performed by multiple replicas.\nTo coordinate concurrent updates and disseminate replica invalidations, a special infrastructure, called a validation mesh (or mesh for short), is adaptively maintained among nodes possessing valid replicas of a resource.\nOnce a node has updated its replica, an invalidation packet will only be 
disseminated over the validation mesh to inform other replica-possessing nodes that their replicas have become invalid and should be deleted.\nThe structure (topology) of the validation mesh keeps evolving (1) when nodes request and cache a resource, (2) when nodes update their respective replicas and invalidate other replicas, and (3) when nodes move.\nTo accommodate these dynamics, our scheme integrates the components of swarm intelligence to adaptively maintain the validation mesh without relying on any underlying MANET routing protocol.\nIn particular, the scheme takes into account concurrent updates initiated by multiple nodes to ensure consistency among replicas.\nIn addition, a version number is used to distinguish new from old replicas when invalidating stale replicas.\nSimulation results show that the proposed scheme effectively facilitates concurrent replica updates and efficiently performs replica invalidation without incurring network-wide flooding.\nFigure 3 depicts the idea of the 'validation mesh', which maintains connectivity among nodes holding valid replicas of a resource to avoid network-wide flooding when invalidating replicas.\nFigure 3: Examples showing maintenance of the validation mesh There are eight nodes in the sample network, and we start with only node A holding the valid file, as shown in Figure 3(a).\nLater on, node G issues a query packet for the file and eventually obtains the file from A via nodes B and D.\nSince intermediate nodes are allowed to cache forwarded data, nodes B, D, and G will now hold valid replicas of the file.\nAs a result, a validation mesh is established among nodes A, B, D, and G, as depicted in Figure 3(b).\nIn Figure 3(c), another node, H, has issued a query packet for the same file and obtained it from node B's cache via node E.\nAt this point, six nodes hold valid replicas and are connected through the validation mesh.\nNow we assume node G updates its replica of the file and informs the other nodes by sending an
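The version-based invalidation flood can be sketched as below, with the mesh modeled as an adjacency dict so the flood stays within the mesh rather than the whole network; the data structures are assumptions for illustration.

```python
# Sketch of version-based replica invalidation over a validation mesh.

def invalidate(mesh, replicas, updater, new_version):
    """Flood an invalidation from `updater`; stale replicas are dropped.

    `mesh` maps node -> list of mesh neighbors; `replicas` maps
    node -> version number of the replica it holds.
    """
    replicas[updater] = new_version
    seen = {updater}
    frontier = [updater]
    while frontier:
        node = frontier.pop()
        for nbr in mesh.get(node, ()):
            if nbr in seen:
                continue
            seen.add(nbr)
            if nbr in replicas and replicas[nbr] < new_version:
                del replicas[nbr]  # stale replica removed
            frontier.append(nbr)
    return replicas
```

On a mesh like that of Figure 3(c), an update by node G to version 2 leaves only G's replica valid, matching the scenario described in the text.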
invalidation packet over the validation mesh.\nConsequently, all nodes except G remove their replicas of the file from their storage, and the validation mesh is torn down.\nHowever, query forwarding pheromone, denoted by the dotted arrows in Figure 3(d), is set up at these nodes along the 'reverse paths' that the invalidation packets have traversed, so that future requests for this file will be forwarded to node G.\nIn Figure 3(e), node H makes a new request for the file.\nThis time, its query packet follows the pheromone toward node G, where the updated file can be obtained.\nEventually, a new validation mesh is established over nodes G, B, D, E, and H. To maintain a validation mesh among the nodes holding valid replicas, one of them is designated the focal node.\nInitially, the node that originally holds the data is the focal node.\nAs nodes update replicas, the node that most recently updated a corresponding replica assumes the role of focal node.\nWe also refer to nodes, such as G and H, that originate requests to replicate data as clients, and to nodes B, D, and E, which locally cache passing data, as data nodes.\nFor instance, in Figures 3(a), 3(b), and 3(c), node A is the focal node; in Figures 3(d), 3(e), and 3(f), node G becomes the focal node.\nIn addition, to accommodate newly participating nodes and node mobility, the focal node periodically floods the validation mesh with a keep-alive packet, so that nodes that hear this packet consider themselves part of the validation mesh.\nIf a node holding a valid\/updated replica does not hear a keep-alive packet for a certain time interval, it deploys a search packet, using the resource discovery mechanism described in Section 3, to find another node currently on the validation mesh (termed an attachment point) to which it can attach itself.\nOnce an attachment point is found,
a search_reply packet is returned to the disconnected node that originated the search.\nIntermediate nodes that forward the search_reply packet become part of the validation mesh as well.\nTo illustrate the effect of node mobility, in Figure 3(f), node H has moved to a location where it is not directly connected to the mesh.\nVia the resource discovery mechanism, node H relies on an intermediate node, F, to connect itself to the mesh.\nHere, node F, although part of the validation mesh, does not hold a data replica, and hence is termed a non-data node.\nClients and data nodes that keep hearing the keep-alive packets from the focal node act as if they hold a valid replica, so that they can reply to query packets, as node B does in Figure 3(c) when replying to a request from node H.\nWhile a disconnected node is attempting to discover an attachment point to reattach itself to the mesh, it cannot reply to query packets.\nFor instance, in Figure 3(f), node H does not reply to any query packet before it reattaches itself to the mesh.\nAlthough the validation mesh provides a conceptual topology that (1) connects all replicas together, (2) coordinates concurrent updates, and (3) disseminates invalidation packets, the technical issue is how such a mesh topology can be effectively and efficiently maintained and evolved when (a) nodes request and cache a resource, (b) nodes update their respective replicas and invalidate other replicas, and (c) nodes move.\nWithout relying on any MANET routing protocol, the two primitives work together to facilitate efficient search and adaptive maintenance.\n6.\nPERFORMANCE EVALUATION We have conducted simulation experiments using the QualNet simulator to evaluate the performance of the described resource discovery, resource distribution, and replica invalidation schemes.\nHowever, due to space limitations, only the performance of replica invalidation is reported.\nIn our experiments, eighty nodes are uniformly distributed
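The keep-alive membership rule above can be sketched as follows; the class and the timeout value are assumptions made for illustration.

```python
# Sketch of mesh membership via keep-alive packets: a node stays on
# the mesh while it keeps hearing the focal node's periodic keep-alive;
# after a silent interval it must search for an attachment point and,
# meanwhile, must not reply to query packets.

SILENCE_TIMEOUT = 10.0  # seconds of silence before a node deems itself detached

class MeshMember:
    def __init__(self, now=0.0):
        self.last_keepalive = now

    def hear_keepalive(self, now):
        self.last_keepalive = now

    def on_mesh(self, now):
        return now - self.last_keepalive <= SILENCE_TIMEOUT

    def can_reply(self, now):
        # A disconnected node searching for an attachment point
        # must not reply to query packets.
        return self.on_mesh(now)
```

A node like H in Figure 3(f) would report `can_reply(...) == False` from the moment its silence exceeds the timeout until it hears a keep-alive again via its attachment point.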
over a terrain of size 1000\u00d71000 m2.\nEach node has a communication range of approximately 250 m over a 2 Mbps wireless channel, using IEEE 802.11 as the MAC layer.\nWe use the random-waypoint mobility model with a pause time of 1 second.\nNodes may move at minimum and maximum speeds of 1 m\/s and 5 m\/s, respectively.\nTable 1 lists the other parameter settings used in the simulation.\nInitially, there is one resource server node in the network.\nTwo nodes are randomly picked every 10 seconds as clients.\nEvery \u03b2 seconds, we check the number of nodes, N, that have obtained the data.\nThen we randomly pick Min(\u03b3,N) of these nodes to initiate a data update.\nEach experiment is run for 10 minutes.\nTable 1: Simulation Settings HOP_LIMIT: 10; ADVERTISE_HOP_LIMIT: 1; KEEPALIVE_INTERVAL: 3 seconds; NUM_SEARCH: 1; ADVERTISE_INTERVAL: 5 seconds; EXPIRATION_INTERVAL: 10 seconds; Average query generation rate: 2 queries\/10 sec; Max # of concurrent updates (\u03b3): 2; Frequency of update (\u03b2): 3 s.\nWe evaluate the performance under different mobility speeds, node densities, maximum numbers of concurrently updating nodes, and update frequencies, using two metrics: \u2022 Average overhead per update measures the average number of packets transmitted per update in the network.\n\u2022 Average delay per update measures how long our approach takes to finish an update on average.\nAll figures shown present the results with a 70% confidence interval.\nFigure 4: Overhead vs. speed for 80 nodes Figure 5: Overhead vs. density Figure 6: Overhead vs. max #concurrent updates Figure 7: Overhead vs. freq.\nFigure 8: Delay vs. speed Figure 9: Delay vs. density Figure 10: Delay vs. max #concurrent updates Figure 11: Delay vs.
freq.\nFigures 4, 5, 6, and 7 show the overhead versus various parameter values.\nIn Figure 4, the overhead increases as the speed increases, due to the fact that at higher speeds, nodes move out of the mesh more frequently and send out more search packets.\nHowever, the overhead is not high; even at a speed of 10 m\/s, the overhead is below 100 packets.\nIn contrast, more than 200 packets would be expected at the various speeds if flooding were used.\nFigure 5 shows that the overhead remains almost the same under various densities.\nThis is attributed to flooding only over the mesh instead of the whole network.\nThe size of the mesh does not vary much across densities, so neither does the overhead.\nFigure 6 shows that the overhead also remains almost the same under various maximum numbers of concurrent updates.\nThis is because one more updating node just means one more flood over the mesh during the update process, so the impact is limited.\nFigure 7 shows that if updates happen more frequently, the overhead is higher.\nThis is because when updates happen more quickly, (1) there will be more keep-alive messages over the mesh between two updates, and (2) nodes move out of the mesh more frequently and send out more search packets.\nFigures 8, 9, 10, and 11 show the delay versus various parameter values.\nFrom Figure 8, we see that the delay increases as the speed increases, due to the fact that with increasing speed, clients move out of the mesh with higher probability.\nWhen these clients want to update data, they must first spend time searching for the mesh.\nThe faster the speed, the more time clients need to spend searching for the mesh.\nFigure 9 shows that the delay is negligibly affected by the density.\nThe delay decreases slightly as the number of nodes increases, due to the fact that the more nodes there are in the network, the more nodes receive the advertisement packets, which helps the search packet find its target, so that the delay of update
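The two metrics reported in these figures can be computed from a per-update log; the `events` record format below is a hypothetical structure, not one from the paper.

```python
# Sketch of the two evaluation metrics over a hypothetical update log:
# one record per update, with the packets it generated and its duration.

def average_overhead_per_update(events):
    """Mean number of packets transmitted per update."""
    return sum(e["packets"] for e in events) / len(events)

def average_delay_per_update(events):
    """Mean time (seconds) taken to complete an update."""
    return sum(e["delay"] for e in events) / len(events)
```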
decreases.\nFigure 10 shows that the delay decreases slightly as the maximum number of concurrent updates increases.\nThe larger the maximum number of concurrent updates, the more nodes are picked to perform updates.\nThen, with higher probability, one of these nodes is still in the mesh and finishes the update immediately (without needing to search for the mesh first), which decreases the delay.\nFigure 11 shows how the delay varies with the update frequency.\nWhen updates happen less frequently, the delay is higher: the less frequent the updates, the more time nodes in the mesh have to move out of it, and such nodes then need to take time to search for the mesh when they do update, which increases the delay.\nThe simulation results show that the replica invalidation scheme can significantly reduce the overhead with an acceptable delay.\n7.\nCONCLUSION To facilitate resource discovery and distribution over MANETs, one approach is to design peer-to-peer (P2P) systems over MANETs that construct an overlay by organizing the peers of the system into a logical structure on top of the MANETs' physical topology.\nHowever, deploying an overlay over MANETs may result in either a large number of flooding operations triggered by the routing process, or inefficiency in terms of bandwidth usage.\nSpecifically, overlay routing relies on the network-layer routing protocols.\nIn the case of a reactive routing protocol, routing on the overlay may cause a large number of flooded route discovery messages, since the routing path in each routing step must be discovered on demand.\nOn the other hand, if a proactive routing protocol is adopted, each peer has to periodically broadcast control messages, which leads to poor efficiency in terms of bandwidth usage.\nEither way, constructing an overlay will potentially suffer from scalability problems.\nThe paper describes a design paradigm that uses the functional primitives of positive\/negative feedback and sporadic random walk to design robust resource discovery and
distribution schemes over MANETs.\nIn particular, the scheme offers the features of (1) cross-layer design of P2P systems, which allows the routing process at both the P2P and the network layers to be integrated to reduce overhead, (2) scalability and mobility support, which minimizes the use of global flooding operations and adaptively combines proactive resource advertisement and reactive resource discovery, and (3) load balancing, which is achieved through resource replication, relocation, and division.\n8.\nREFERENCES [1] A. Oram, Peer-to-Peer: Harnessing the Power of Disruptive Technologies.\nO'Reilly, March 2000.\n[2] S. Helal, N. Desai, V. Verma, and C. Lee, Konark - A Service Discovery and Delivery Protocol for Ad-hoc Networks, in the Third IEEE Conference on Wireless Communication Networks (WCNC), New Orleans, Louisiana, 2003.\n[3] G. Kortuem, Proem: A Peer-to-Peer Computing Platform for Mobile Ad-hoc Networks, in Advanced Topic Workshop: Middleware for Mobile Computing, Germany, 2001.\n[4] M. Papadopouli and H. Schulzrinne, A Performance Analysis of 7DS, a Peer-to-Peer Data Dissemination and Prefetching Tool for Mobile Users, in Advances in Wired and Wireless Communications, IEEE Sarnoff Symposium Digest, Ewing, NJ, 2001 (best student paper and poster award).\n[5] U. Mohan, K. Almeroth, and E. Belding-Royer, Scalable Service Discovery in Mobile Ad hoc Networks, in IFIP Networking Conference, Athens, Greece, May 2004.\n[6] L. Yin and G. Cao, Supporting Cooperative Caching in Ad Hoc Networks, in IEEE INFOCOM, 2004.\n[7] V. Thanedar, K. Almeroth, and E.
Belding-Royer, A Lightweight Content Replication Scheme for Mobile Ad-hoc Environments, in IFIP Networking Conference, Athens, Greece, May 2004.","lvl-3":"A Cross-Layer Approach to Resource Discovery and Distribution in Mobile Ad hoc Networks\nABSTRACT\nThis paper describes a cross-layer approach to designing robust P2P systems over mobile ad hoc networks.\nThe design is based on simple functional primitives that allow routing at both the P2P and network layers to be integrated to reduce overhead.\nWith these primitives, the paper addresses various load balancing techniques.\nPreliminary simulation results are also presented.\n1.\nINTRODUCTION\nMobile ad hoc networks (MANETs) consist of mobile nodes that autonomously establish connectivity via multi-hop wireless communications.\nWithout relying on any existing, pre-configured network infrastructure or centralized control, MANETs are useful in situations where impromptu communication facilities are required, such as battlefield communications and disaster relief missions.\nAs MANET applications demand collaborative processing and information sharing among mobile nodes, resource (service) discovery and distribution have become indispensable capabilities.\nOne approach to designing resource discovery and distribution schemes over MANETs is to construct a peer-to-peer (P2P) system (or an overlay) which organizes peers of the system into a logical structure, on top of the actual network topology.\nHowever, deploying such P2P systems over MANETs may result in either a large number of flooding operations triggered by the reactive routing process, or inefficiency in terms of bandwidth utilization in proactive routing schemes.\nEither way, constructing an overlay will potentially create a scalability problem for large-scale MANETs.\nDue to the dynamic nature of MANETs, P2P systems should be robust by being scalable and adaptive to
topology changes.\nThese systems should also provide efficient and effective ways for peers to interact, as well as other desirable application-specific features.\nThis paper describes a design paradigm that uses the following two functional primitives to design robust resource discovery and distribution schemes over MANETs.\n1.\nPositive\/negative feedback.\nQuery packets are used to explore a route to other peers holding resources of interest.\nOptionally, advertisement packets are sent out to advertise routes from other peers about available resources.\nWhen traversing a route, these control packets measure the goodness of the route and leave feedback information on each node along the way to guide subsequent control packets in appropriate directions.\n2.\nSporadic random walk.\nAs the network topology and\/or the availability of resources change, existing routes may become stale while better routes become available.\nSporadic random walk allows a control packet to explore different paths and opportunistically discover new and\/or better routes.\nAdopting this paradigm, the whole MANET P2P system operates as a collection of autonomous entities which consist of different types of control packets, such as query and advertisement packets.\nThese packets work collaboratively, but indirectly, to achieve common tasks, such as resource discovery, routing, and load balancing.\nWith collaboration among these entities, a MANET P2P system is able to 'learn' the network dynamics by itself and adjust its behavior accordingly, without the overhead of organizing peers into an overlay.\nThe remainder of this paper is organized as follows.\nRelated work is described in the next section.\nSection 3 describes the resource discovery scheme.\nSection 4 describes the resource distribution scheme.\nThe replica invalidation scheme is described in Section 5, followed by its performance evaluation in Section 6.\nSection 7 concludes the paper.\n2.\nRELATED WORK\nFor MANETs, P2P systems can be
classified, based on the design principle, into layered and cross-layer approaches.\nA layered approach adopts a P2P-like [1] solution, where resource discovery is facilitated as an application-layer protocol and query\/reply messages are delivered by the underlying MANET routing protocols.\nFor instance, Konark [2] makes use of an underlying multicast protocol such that service providers and queriers advertise and search for services via a predefined multicast group, respectively.\nProem [3] is a high-level mobile computing platform for P2P systems over MANETs.\nIt defines a transport protocol that sits on top of the existing TCP\/IP stack, hence relying on an existing routing protocol to operate.\nWith limited control over how control and data packets are routed in the network, it is difficult to avoid the inefficiency of the general-purpose routing protocols, which are often reactive and flooding-based.\nIn contrast, cross-layer approaches either rely on their own routing mechanisms or augment existing MANET routing algorithms to support resource discovery.\n7DS [4], the pioneering work deploying a P2P system on mobile devices, exploits data locality and node mobility to disseminate data in a single-hop fashion.\nHence, long search latency may result, as a 7DS node can get data of interest only if the node that holds the data is in its radio coverage.\nMohan et al.
[5] propose an adaptive service discovery algorithm that combines both push and pull models.\nSpecifically, a service provider\/querier broadcasts an advertisement\/query only when the number of nodes advertising or querying, which is estimated from received control packets, is below a threshold during a period of time.\nIn this way, the number of control packets on the network is constrained, thus providing good scalability.\nDespite the mechanism to reduce control packets, high overhead may still be unavoidable, especially when there are many clients trying to locate different services, due to the fact that the algorithm relies on flooding.\nFor resource replication, Yin and Cao [6] design and evaluate cooperative caching techniques for MANETs.\nCaching, however, is performed reactively by intermediate nodes when a querier requests data from a server.\nData items or resources are never pushed onto other nodes proactively.\nThanedar et al. [7] propose a lightweight content replication scheme using an expanding ring technique.\nIf a server detects that the number of requests exceeds a threshold within a time period, it begins to replicate its data onto nodes capable of storing replicas, whose hop counts from the server are of certain values.\nSince data replication is triggered by the request frequency alone, replicas may be unnecessarily created over a large scope even though only nodes within a small range request this data.\nOur proposed resource replication mechanism, in contrast, attempts to replicate a data item in the appropriate areas, where the item is requested frequently, instead of a large area around the server.\n3.\nRESOURCE DISCOVERY\n4.\nRESOURCE DISTRIBUTION\n4.1 Resource Replication\n4.2 Resource Relocation\n4.3 Resource Division\n5.\nREPLICA INVALIDATION\n6.\nPERFORMANCE EVALUATION\n7.\nCONCLUSION\nTo facilitate resource discovery and
distribution over MANETs, one approach is to design peer-to-peer (P2P) systems over MANETs that construct an overlay by organizing the peers of the system into a logical structure on top of the MANETs' physical topology.\nHowever, deploying an overlay over MANETs may result in either a large number of flooding operations triggered by the routing process, or inefficiency in terms of bandwidth usage.\nSpecifically, overlay routing relies on the network-layer routing protocols.\nIn the case of a reactive routing protocol, routing on the overlay may cause a large number of flooded route discovery messages, since the routing path in each routing step must be discovered on demand.\nOn the other hand, if a proactive routing protocol is adopted, each peer has to periodically broadcast control messages, which leads to poor efficiency in terms of bandwidth usage.\nEither way, constructing an overlay will potentially suffer from scalability problems.\nThe paper describes a design paradigm that uses the functional primitives of positive\/negative feedback and sporadic random walk to design robust resource discovery and distribution schemes over MANETs.\nIn particular, the scheme offers the features of (1) cross-layer design of P2P systems, which allows the routing process at both the P2P and the network layers to be integrated to reduce overhead, (2) scalability and mobility support, which minimizes the use of global flooding operations and adaptively combines proactive resource advertisement and reactive resource discovery, and (3) load balancing, which is achieved through resource replication, relocation, and division.","lvl-4":"A Cross-Layer Approach to Resource Discovery and Distribution in Mobile Ad hoc Networks\nABSTRACT\nThis paper describes a cross-layer approach to designing robust P2P systems over mobile ad hoc networks.\nThe design is based on simple functional primitives that allow routing at both the P2P and network layers to be integrated to reduce
overhead.\nWith these primitives, the paper addresses various load balancing techniques.\nPreliminary simulation results are also presented.\n1.\nINTRODUCTION\nMobile ad hoc networks (MANETs) consist of mobile nodes that autonomously establish connectivity via multi-hop wireless communications.\nAs MANET applications demand collaborative processing and information sharing among mobile nodes, resource (service) discovery and distribution have become indispensable capabilities.\nOne approach to designing resource discovery and distribution schemes over MANETs is to construct a peer-to-peer (P2P) system (or an overlay) which organizes peers of the system into a logical structure, on top of the actual network topology.\nHowever, deploying such P2P systems over MANETs may result in either a large number of flooding operations triggered by the reactive routing process, or inefficiency in terms of bandwidth utilization in proactive routing schemes.\nEither way, constructing an overlay will potentially create a scalability problem for large-scale MANETs.\nDue to the dynamic nature of MANETs, P2P systems should be robust by being scalable and adaptive to topology changes.\nThis paper describes a design paradigm that uses the following two functional primitives to design robust resource discovery and distribution schemes over MANETs.\n1.\nPositive\/negative feedback.\nQuery packets are used to explore a route to other peers holding resources of interest.\nOptionally, advertisement packets are sent out to advertise routes from other peers about available resources.\nWhen traversing a route, these control packets measure the goodness of the route and leave feedback information on each node along the way to guide subsequent control packets in appropriate directions.\n2.\nSporadic random walk.\nAs the network topology and\/or the availability of resources change, existing routes may become stale while better routes become available.\nSporadic random walk allows a control packet to
explore different paths and opportunistically discover new and\/or better routes.\nAdopting this paradigm, the whole MANET P2P system operates as a collection of autonomous entities which consist of different types of control packets, such as query and advertisement packets.\nThese packets work collaboratively, but indirectly, to achieve common tasks, such as resource discovery, routing, and load balancing.\nWith collaboration among these entities, a MANET P2P system is able to 'learn' the network dynamics by itself and adjust its behavior accordingly, without the overhead of organizing peers into an overlay.\nThe remainder of this paper is organized as follows.\nRelated work is described in the next section.\nSection 3 describes the resource discovery scheme.\nSection 4 describes the resource distribution scheme.\nThe replica invalidation scheme is described in Section 5, followed by its performance evaluation in Section 6.\nSection 7 concludes the paper.\n2.\nRELATED WORK\nFor MANETs, P2P systems can be classified, based on the design principle, into layered and cross-layer approaches.\nA layered approach adopts a P2P-like [1] solution, where resource discovery is facilitated as an application-layer protocol and query\/reply messages are delivered by the underlying MANET routing protocols.\nProem [3] is a high-level mobile computing platform for P2P systems over MANETs.\nIt relies on an existing routing protocol to operate.\nWith limited control over how control and data packets are routed in the network, it is difficult to avoid the inefficiency of the general-purpose routing protocols, which are often reactive and flooding-based.\nIn contrast, cross-layer approaches either rely on their own routing mechanisms or augment existing MANET routing algorithms to support resource discovery.\n7DS [4], the pioneering work deploying a P2P system on mobile devices, exploits data locality and node mobility to disseminate data in a single-hop fashion.\nMohan et al.
[5] propose an adaptive service discovery algorithm that combines both push and pull models.\nSpecifically, a service provider\/querier broadcasts an advertisement\/query only when the number of nodes advertising or querying, which is estimated from received control packets, is below a threshold during a period of time.\nIn this way, the number of control packets on the network is constrained, thus providing good scalability.\nCaching, however, is performed reactively by intermediate nodes when a querier requests data from a server.\nData items or resources are never pushed onto other nodes proactively.\nThanedar et al. [7] propose a lightweight content replication scheme using an expanding ring technique.\nOur proposed resource replication mechanism, in contrast, attempts to replicate a data item in the appropriate areas, where the item is requested frequently, instead of a large area around the server.\n7.\nCONCLUSION\nTo facilitate resource discovery and distribution over MANETs, one approach is to design peer-to-peer (P2P) systems over MANETs that construct an overlay by organizing the peers of the system into a logical structure on top of the MANETs' physical topology.\nHowever, deploying an overlay over MANETs may result in either a large number of flooding operations triggered by the routing process, or inefficiency in terms of bandwidth usage.\nSpecifically, overlay routing relies on the network-layer routing protocols.\nIn the case of a reactive routing protocol, routing on the overlay may cause a large number of flooded route discovery messages, since the routing path in each routing step must be discovered on demand.\nOn the other hand, if a proactive routing protocol is adopted, each peer has to periodically broadcast control messages, which leads to poor efficiency in terms of bandwidth usage.\nEither way, constructing an overlay will potentially suffer from scalability problems.\nThe paper describes a design paradigm that uses the functional primitives of
positive\/negative feedback and sporadic random walk to design robust resource discovery and distribution schemes over MANETs.","lvl-2":"A Cross-Layer Approach to Resource Discovery and Distribution in Mobile Ad hoc Networks\nABSTRACT\nThis paper describes a cross-layer approach to designing robust P2P systems over mobile ad hoc networks.\nThe design is based on simple functional primitives that allow routing at both the P2P and network layers to be integrated to reduce overhead.\nWith these primitives, the paper addresses various load balancing techniques.\nPreliminary simulation results are also presented.\n1.\nINTRODUCTION\nMobile ad hoc networks (MANETs) consist of mobile nodes that autonomously establish connectivity via multi-hop wireless communications.\nWithout relying on any existing, pre-configured network infrastructure or centralized control, MANETs are useful in situations where impromptu communication facilities are required, such as battlefield communications and disaster relief missions.\nAs MANET applications demand collaborative processing and information sharing among mobile nodes, resource (service) discovery and distribution have become indispensable capabilities.\nOne approach to designing resource discovery and distribution schemes over MANETs is to construct a peer-to-peer (P2P) system (or an overlay) which organizes peers of the system into a logical structure, on top of the actual network topology.\nHowever, deploying such P2P systems over MANETs may result in either a large number of flooding operations triggered by the reactive routing process, or inefficiency in terms of bandwidth utilization in proactive routing schemes.\nEither way, constructing an overlay will potentially create a scalability problem for large-scale MANETs.\nDue to the dynamic nature of MANETs, P2P systems should be robust by being scalable and adaptive to topology changes.\nThese systems should also provide efficient and effective ways for peers to interact, as well as
other desirable application-specific features.\nThis paper describes a design paradigm that uses the following two functional primitives to design robust resource discovery and distribution schemes over MANETs.\n1.\nPositive\/negative feedback.\nQuery packets are used to explore a route to other peers holding resources of interest.\nOptionally, advertisement packets are sent out to advertise routes from other peers about available resources.\nWhen traversing a route, these control packets measure the goodness of the route and leave feedback information on each node along the way to guide subsequent control packets in appropriate directions.\n2.\nSporadic random walk.\nAs the network topology and\/or the availability of resources change, existing routes may become stale while better routes become available.\nSporadic random walk allows a control packet to explore different paths and opportunistically discover new and\/or better routes.\nAdopting this paradigm, the whole MANET P2P system operates as a collection of autonomous entities which consist of different types of control packets, such as query and advertisement packets.\nThese packets work collaboratively, but indirectly, to achieve common tasks, such as resource discovery, routing, and load balancing.\nWith collaboration among these entities, a MANET P2P system is able to 'learn' the network dynamics by itself and adjust its behavior accordingly, without the overhead of organizing peers into an overlay.\nThe remainder of this paper is organized as follows.\nRelated work is described in the next section.\nSection 3 describes the resource discovery scheme.\nSection 4 describes the resource distribution scheme.\nThe replica invalidation scheme is described in Section 5, followed by its performance evaluation in Section 6.\nSection 7 concludes the paper.\n2.\nRELATED WORK\nFor MANETs, P2P systems can be classified, based on the design principle, into layered and cross-layer approaches.\nA layered approach adopts a
P2P-like [1] solution, where resource discovery is facilitated as an application-layer protocol and query\/reply messages are delivered by the underlying MANET routing protocols.\nFor instance, Konark [2] makes use of an underlying multicast protocol such that service providers and queriers advertise and search for services via a predefined multicast group, respectively.\nProem [3] is a high-level mobile computing platform for P2P systems over MANETs.\nIt defines a transport protocol that sits on top of the existing TCP\/IP stack, hence relying on an existing routing protocol to operate.\nWith limited control over how control and data packets are routed in the network, it is difficult to avoid the inefficiency of the general-purpose routing protocols, which are often reactive and flooding-based.\nIn contrast, cross-layer approaches either rely on their own routing mechanisms or augment existing MANET routing algorithms to support resource discovery.\n7DS [4], the pioneering work deploying a P2P system on mobile devices, exploits data locality and node mobility to disseminate data in a single-hop fashion.\nHence, long search latency may result, as a 7DS node can get data of interest only if the node that holds the data is in its radio coverage.\nMohan et al. 
[5] propose an adaptive service discovery algorithm that combines both push and pull models.\nSpecifically, a service provider\/querier broadcasts an advertisement\/query only when the number of nodes advertising or querying, which is estimated from received control packets, is below a threshold during a period of time.\nIn this way, the number of control packets on the network is constrained, thus providing good scalability.\nDespite the mechanism to reduce control packets, high overhead may still be unavoidable, especially when there are many clients trying to locate different services, due to the fact that the algorithm relies on flooding.\nFor resource replication, Yin and Cao [6] design and evaluate cooperative caching techniques for MANETs.\nCaching, however, is performed reactively by intermediate nodes when a querier requests data from a server.\nData items or resources are never pushed onto other nodes proactively.\nThanedar et al. [7] propose a lightweight content replication scheme using an expanding ring technique.\nIf a server detects that the number of requests exceeds a threshold within a time period, it begins to replicate its data onto nodes capable of storing replicas, whose hop counts from the server are of certain values.\nSince data replication is triggered by the request frequency alone, it is possible that replicas are unnecessarily created over a large scope even though only nodes within a small range request this data.\nOur proposed resource replication mechanism, in contrast, attempts to replicate a data item in the appropriate areas where the item is requested frequently, instead of a large area around the server.\n3.\nRESOURCE DISCOVERY\nWe propose a cross-layer, hybrid resource discovery scheme that relies on the multiple interactions of query, reply, and advertisement packets.\nWe assume that each resource is associated with a unique ID.\nInitially, when a node wants to discover a resource, it deploys query packets, which carry the corresponding 
resource ID, and randomly explore the network to search for the requested resource.\nUpon receiving such a query packet, the node providing the requested resource generates a reply packet.\nAdvertisement packets can also be used to proactively inform other nodes about what resources are available at each node.\nIn addition to discovering the 'identity' of the node providing the requested resource, it may also be necessary to discover a 'route' leading to this node for further interaction.\nTo allow intermediate nodes to make a decision on where to forward query packets, each node maintains two tables: a neighbor table and a pheromone table.\n(The assumption of a unique ID is made for brevity of exposition; resources could instead be specified via attribute-value assertions.)\nThe neighbor table maintains a list of all current neighbors obtained via a neighbor discovery protocol.\nThe pheromone table maintains the mapping of a resource ID and a neighbor ID to a pheromone value.\nThis table is initially empty, and is updated by a reply packet generated by a successful query.\nFigure 1 illustrates an example of a neighbor table and a pheromone table maintained by node A, which has four neighbors.\nWhen node A receives a query packet searching for a resource, it decides to which neighbor it should forward the query packet by computing the desirability of each of the neighbors that have not been visited before by the same query packet.\nFor a resource ID r, the desirability of choosing a neighbor n, \u03b4 (r, n), is obtained from the pheromone value of the entry whose neighbor and resource ID fields are n and r, respectively.\nIf no such entry exists in the pheromone table, \u03b4 (r, n) is set to zero.\nOnce the desirabilities of all valid next hops have been calculated, they are normalized to obtain the probability of choosing each neighbor.\nIn addition, a small probability is also assigned to those neighbors with zero desirability to exercise the sporadic random 
walk primitive.\nBased on these probabilities, a next hop is selected to forward the query packet to.\nWhen a query packet encounters a node holding the requested resource, a reply packet is returned to the querying node.\nThe returning reply packet also updates the pheromone table at each node on its return trip by increasing the pheromone value in the entry whose resource ID and neighbor ID fields match the ID of the discovered resource and the previous hop, respectively.\nIf such an entry does not exist, a new entry is added to the table.\nTherefore, subsequent query packets looking for the same resource, when encountering this pheromone information, are guided toward the same destination with a small probability of taking an alternate path.\nSince the hybrid discovery scheme neither relies on a MANET routing protocol nor arranges nodes into a logical overlay, query packets traverse the actual network topology.\nIn dense networks, relatively large nodal degrees can adversely impact this random exploration mechanism.\nTo address this issue, the hybrid scheme also incorporates proactive advertisement in addition to the reactive query.\nTo perform proactive advertisement, each node periodically deploys an advertising packet containing a list of its available resources' IDs.\nThese packets traverse away from the advertising node in a random walk manner, up to a limited number of hops, and advertise resource information to surrounding nodes in the same way as reply packets.\nIn the hybrid scheme, an increase of pheromone serves as a positive feedback which indirectly guides query packets looking for similar resources.\nIntuitively, the amount of pheromone added is inversely proportional to the distance the reply packet has traveled back, and other metrics, such as the quality of the resource, could contribute to this amount as well.\nEach node also performs an implicit negative feedback for resources that have not been given a positive feedback 
for some time, by regularly decreasing the pheromone in all of its pheromone table entries over time.\nIn addition, pheromone can be reduced by an explicit negative response, for instance, a reply packet returning from a node that is not willing to provide a resource due to excessive workload.\nAs a result, load balancing can be achieved via positive and negative feedback: a node serving too many nodes can either return fewer responses to query packets or generate negative responses.\nFigure 1: Example illustrating neighbor and pheromone tables maintained by node A: (a) wireless connectivity around A showing that it currently has four neighbors, (b) A's neighbor table, and (c) a possible pheromone table of A\nFigure 2: Sample scenarios illustrating the three mechanisms supporting load-balancing: (a) resource replication, (b) resource relocation, and (c) resource division\n4.\nRESOURCE DISTRIBUTION\nIn addition to resource discovery, a querying node usually attempts to access and retrieve the contents of a resource after a successful discovery.\nIn certain situations, it is also beneficial to make a resource readily available at multiple nodes when the resource can be relocated and\/or replicated, such as data files.\nFurthermore, in MANETs, we should consider not only the amount of load handled by a resource provider, but also the load on those intermediate nodes located on the communication paths between the provider and other nodes.\nHence, we describe a cross-layer, hybrid resource distribution scheme that achieves load balancing by incorporating the functionalities of resource relocation, resource replication, and resource division.\n4.1 Resource Replication\nMultiple replicas of a resource in the network help prevent a single node, as well as the nodes surrounding it, from being overloaded by a large number of requests and data transfers.\nFor example, when a node has obtained a data file from another node, the requesting node and the intermediate 
nodes can cache the file and start sharing that file with other surrounding nodes right away.\nIn addition, replicable resources can also be proactively replicated at other nodes located in certain strategic areas.\nFor instance, to help nodes find a resource quickly, we could replicate the resource so that it becomes reachable by a random walk of a specific number of hops from any node with some probability, as depicted in Figure 2 (a).\nTo realize this feature, the hybrid resource distribution scheme employs a different type of control packet, called a resource replication packet, which is responsible for finding an appropriate place to create a replica of a resource.\nA resource replication packet of type R is deployed by a node that is providing the resource R itself.\nUnlike a query packet, which follows higher pheromone upstream toward a resource it is looking for, a resource replication packet tends to be propelled away from similar resources by moving itself downstream toward weaker pheromone.\nWhen a resource replication packet finds itself in an area with sufficiently low pheromone, it decides whether it should continue exploring or turn back.\nThe decision depends on conditions such as the current workload and\/or remaining energy of the node being visited, as well as the popularity of the resource itself.\n4.2 Resource Relocation\nIn certain situations, a resource may need to be transferred from one node to another.\nFor example, a node may no longer want to possess a file due to a shortage of storage space, but it cannot simply delete the file since other nodes may still need it in the future.\nIn this case, the node can choose to create replicas of the file by the aforementioned resource replication mechanism and then delete its own copy.\nLet us consider a situation where a majority of the nodes requesting a resource are located far away from the resource provider, as shown at the top of Figure 2 (b).\nIf the resource R is relocatable, it is 
preferred to be relocated to another area that is closer to those nodes, as at the bottom of the same figure, so that network bandwidth is more efficiently utilized.\nThe 3rd Conference on Mobile Technology, Applications and Systems--Mobility 2006\nThe hybrid resource distribution scheme incorporates resource relocation algorithms that are adaptive to user requests and aim to reduce communication overhead.\nSpecifically, by following the same pheromone maintenance concept, the hybrid resource distribution scheme introduces another type of pheromone which corresponds to user requests, instead of resources.\nThis type of pheromone, called request pheromone, is set up by query packets that are in their exploring phases (not returning ones) to guide a resource to a new location.\n4.3 Resource Division\nCertain types of resources can be divided into smaller sub-resources (e.g., a large file being broken into smaller files) and distributed to multiple locations to avoid overloading a single node, as depicted in Figure 2 (c).\nThe hybrid resource distribution scheme incorporates a resource division mechanism that operates at a thin layer right above all the other mechanisms described earlier.\nThe resource division mechanism is responsible for decomposing divisible resources into sub-resources and then adding an extra keyword to distinguish the sub-resources from one another.\nTherefore, each of these sub-resources will be seen by the other mechanisms as one single resource which can be independently discovered, replicated, and relocated.\nThe resource division mechanism is also responsible for combining data from these sub-resources (e.g., merging pieces of a file) and delivering the final result to the application.\n5.\nREPLICA INVALIDATION\nAlthough replicas improve accessibility and balance load, replica invalidation becomes a critical issue when nodes caching updatable resources may concurrently update their own replicas, which renders replicas held by 
other nodes obsolete.\nMost existing solutions to the replica invalidation problem either impose the constraint that only the data source can perform updates and invalidate other replicas, or resort to network-wide flooding, which results in heavy network traffic and leads to a scalability problem, or both.\nThe lack of infrastructure support and frequent topology changes in MANETs further complicate the issue.\nWe apply the same cross-layer paradigm to invalidating replicas in MANETs while allowing concurrent updates performed by multiple replicas.\nTo coordinate concurrent updates and disseminate replica invalidations, a special infrastructure, called a validation mesh (or mesh for short), is adaptively maintained among nodes possessing 'valid' replicas of a resource.\nOnce a node has updated its replica, an invalidation packet is disseminated only over the validation mesh to inform other replica-possessing nodes that their replicas have become invalid and should be deleted.\nThe structure (topology) of the validation mesh keeps evolving (1) when nodes request and cache a resource, (2) when nodes update their respective replicas and invalidate other replicas, and (3) when nodes move.\nTo accommodate these dynamics, our scheme integrates the components of swarm intelligence to adaptively maintain the validation mesh without relying on any underlying MANET routing protocol.\nIn particular, the scheme takes into account concurrent updates initiated by multiple nodes to ensure consistency among replicas.\nIn addition, a version number is used to distinguish new from old replicas when invalidating any stale replica.\nSimulation results show that the proposed scheme effectively facilitates concurrent replica updates and efficiently performs replica invalidation without incurring network-wide flooding.\nFigure 3 depicts the idea of the 'validation mesh', which maintains connectivity among nodes holding valid replicas of a resource to avoid network-wide flooding when invalidating 
replicas.\nFigure 3: Examples showing maintenance of the validation mesh\nThere are eight nodes in the sample network, and we start with only node A holding the valid file, as shown in Figure 3 (a).\nLater on, node G issues a query packet for the file and eventually obtains the file from A via nodes B and D.\nSince intermediate nodes are allowed to cache forwarded data, nodes B, D, and G will now hold valid replicas of the file.\nAs a result, a validation mesh is established among nodes A, B, D, and G, as depicted in Figure 3 (b).\nIn Figure 3 (c), another node, H, has issued a query packet for the same file and obtained it from node B's cache via node E.\nAt this point, six nodes hold valid replicas and are connected through the validation mesh.\nNow we assume node G updates its replica of the file and informs the other nodes by sending an invalidation packet over the validation mesh.\nConsequently, all nodes except G remove their replicas of the file from their storage and the validation mesh is torn down.\nHowever, query forwarding pheromone, as denoted by the dotted arrows in Figure 3 (d), is set up at these nodes via the 'reverse paths' along which the invalidation packets have traversed, so that future requests for this file will be forwarded to node G.\nIn Figure 3 (e), node H makes a new request for the file.\nThis time, its query packet follows the pheromone toward node G, where the updated file can be obtained.\nEventually, a new validation mesh is established over nodes G, B, D, E, and H. 
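The Figure 3 walkthrough can be sketched as a toy simulation. This is a minimal model under simplifying assumptions, not the paper's implementation: the `MeshSim` class, its field names, and the flat dictionaries standing in for the real mesh topology and pheromone trails are all hypothetical, and mesh connectivity, keep-alive packets, and node mobility are abstracted away.

```python
# Hypothetical sketch of the validation-mesh bookkeeping from Figure 3:
# nodes cache replicas tagged with a version number; an update floods an
# invalidation only over the mesh members, stale replicas are dropped, and
# reverse-path "pheromone" is left behind pointing toward the updater.

class MeshSim:
    def __init__(self, origin):
        self.version = 0                 # current version of the resource
        self.replicas = {origin: 0}      # node -> version of its replica
        self.focal = origin              # node that holds/last updated the data
        self.pheromone = {}              # node -> next hop toward the updater

    def cache(self, path):
        """A query travels along `path`; every node on it caches a replica."""
        for node in path:
            self.replicas[node] = self.version

    def update(self, node):
        """`node` updates its replica; invalidation spreads over the mesh."""
        self.version += 1
        for n in [m for m in self.replicas if m != node]:
            del self.replicas[n]         # stale replica removed
            self.pheromone[n] = node     # reverse-path pheromone toward updater
        self.replicas[node] = self.version
        self.focal = node                # most recent updater becomes focal

sim = MeshSim("A")
sim.cache(["B", "D", "G"])   # G obtains the file from A via B and D (Fig. 3b)
sim.cache(["E", "H"])        # H obtains it from B's cache via E (Fig. 3c)
sim.update("G")              # G updates; only G keeps a valid replica (Fig. 3d)
print(sim.replicas)          # {'G': 1}
print(sim.pheromone["H"])    # G -- future requests are forwarded toward G
```

After `update("G")`, a fresh `cache(...)` call along H's new query path would rebuild the mesh around G, mirroring Figure 3 (e).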
To maintain a validation mesh among the nodes holding valid replicas, one of them is designated to be the focal node.\nInitially, the node that originally holds the data is the focal node.\nAs nodes update replicas, the node that last (or most recently) updates a corresponding replica assumes the role of focal node.\nWe also name nodes, such as G and H, that originate requests for data as clients, and nodes, such as B, D, and E, that locally cache passing data as data nodes.\nFor instance, in Figures 3 (a), 3 (b), and 3 (c), node A is the focal node; in Figures 3 (d), 3 (e), and 3 (f), node G becomes the focal node.\nIn addition, to accommodate newly participating nodes and the mobility of nodes, the focal node periodically floods the validation mesh with a keep-alive packet, so that nodes that hear this packet consider themselves part of the validation mesh.\nIf a node holding a valid\/updated replica doesn't hear a keep-alive packet for a certain time interval, it will deploy a search packet, using the resource discovery mechanism described in Section 3, to find another node (termed an attachment point) currently on the validation mesh to which it can attach itself.\nOnce an attachment point is found, a search_reply packet is returned to the disconnected node that originated the search.\nIntermediate nodes that forward the search_reply packet become part of the validation mesh as well.\nTo illustrate the effect of node mobility, in Figure 3 (f), node H has moved to a location where it is not directly connected to the mesh.\nVia the resource discovery mechanism, node H relies on an intermediate node F to connect itself to the mesh.\nHere, node F, although part of the validation mesh, does not hold a data replica, and hence is termed a non-data node.\nClient and data nodes that keep hearing the keep-alive packets from the focal node act as if they are holding a valid 
replica, so that they can reply to query packets, like node B in Figure 3 (c) replying to a request from node H.\nWhile a disconnected node is attempting to discover an attachment point to reattach itself to the mesh, it cannot reply to query packets.\nFor instance, in Figure 3 (f), node H does not reply to any query packet before it reattaches itself to the mesh.\nAlthough the validation mesh provides a conceptual topology that (1) connects all replicas together, (2) coordinates concurrent updates, and (3) disseminates invalidation packets, the technical issue is how such a mesh topology can be effectively and efficiently maintained and evolved when (a) nodes request and cache a resource, (b) nodes update their respective replicas and invalidate other replicas, and (c) nodes move.\nWithout relying on any MANET routing protocols, the two primitives work together to facilitate efficient search and adaptive maintenance.\n6.\nPERFORMANCE EVALUATION\nWe have conducted simulation experiments using the QualNet simulator to evaluate the performance of the described resource discovery, resource distribution, and replica invalidation schemes.\nHowever, due to space limitations, only the performance of replica invalidation is reported.\nIn our experiments, eighty nodes are uniformly distributed over a terrain of size 1000x1000 m2.\nEach node has a communication range of approximately 250 m over a 2 Mbps wireless channel, using IEEE 802.11 as the MAC layer.\nWe use the random-waypoint mobility model with a pause time of 1 second.\nNodes may move at the minimum and maximum speeds of 1 m\/s and 5 m\/s, respectively.\nTable 1 lists other parameter settings used in the simulation.\nInitially, there is one resource server node in the network.\nTwo nodes are randomly picked every 10 seconds as clients.\nEvery B seconds, we check the number of nodes, N, that have obtained the data.\nThen we randomly pick min (\u03b3, N) of these nodes to initiate data 
update.\nEach experiment is run for 10 minutes.\nTable 1: Simulation Settings\nWe evaluate the performance under different mobility speeds, node densities, maximum numbers of concurrent updating nodes, and update frequencies, using two metrics:\n\u2022 Average overhead per update measures the average number of packets transmitted per update in the network.\n\u2022 Average delay per update measures how long our approach takes to finish an update on average.\nAll figures shown present the results with a 70% confidence interval.\nFigure 4: Overhead vs. speed\nFigure 5: Overhead vs. density for 80 nodes\nFigure 6: Overhead vs. max #concurrent updates\nFigure 7: Overhead vs. update freq.\nFigure 8: Delay vs. speed\nFigure 9: Delay vs. density\nFigure 10: Delay vs. max #concurrent updates\nFigure 11: Delay vs. update freq.\nFigures 4, 5, 6, and 7 show the overhead for various parameter values.\nIn Figure 4, the overhead increases with speed: as the speed increases, nodes move out of the mesh more frequently and send out more search packets.\nThe overhead remains modest, however; even at a speed of 10 m\/s it stays below 100 packets, whereas more than 200 packets would be expected at various speeds if flooding were used.\nFigure 5 shows that the overhead remains almost the same under various densities.\nThis is because invalidations are flooded only over the mesh rather than over the whole network, and the size of the mesh does not vary much with density.\nFigure 6 shows that the overhead also remains almost the same under various maximum numbers of concurrent updates.\nThis is because one more updating node means only one more flood over the mesh during the update process, so the impact is limited.\nFigure 7 shows that the overhead is higher when updates happen more frequently.\nThis is because the more frequently updates happen, (1) the more keep-alive messages are sent over the mesh between two updates, and (2) nodes 
move out of the mesh more frequently and send out more search packets.\nFigures 8, 9, 10, and 11 show the delay for various parameter values.\nFigure 8 shows that the delay increases as the speed increases: with increasing speed, clients move out of the mesh with higher probability, and when these clients want to update data they must first spend time searching for the mesh.\nThe faster the speed, the more time clients need to spend searching for the mesh.\nFigure 9 shows that the delay is negligibly affected by the density.\nThe delay decreases slightly as the number of nodes increases: the more nodes in the network, the more nodes receive the advertisement packets, which helps a search packet find its target and thus decreases the update delay.\nFigure 10 shows that the delay decreases slightly as the maximum number of concurrent updates increases.\nThe larger the maximum number of concurrent updates, the more nodes are picked to perform updates; with higher probability, one of these nodes is still in the mesh and can finish its update immediately (without searching for the mesh first), which decreases the delay.\nFigure 11 shows how the delay varies with the update frequency.\nWhen updates happen less frequently, the delay is higher: nodes in the mesh have more time to move out of it between updates, and must then spend time searching for the mesh when they do update, which increases the delay.\nThe simulation results show that the replica invalidation scheme can significantly reduce the overhead with an acceptable delay.\n7.\nCONCLUSION\nTo facilitate resource discovery and distribution over MANETs, one approach is to design a peer-to-peer (P2P) system over the MANET, constructing an overlay by organizing the peers of the system into a logical structure on top of the MANET's physical topology.\nHowever, deploying an overlay over MANETs may result in either a large number of flooding operations triggered by the routing 
process, or inefficiency in terms of bandwidth usage.\nSpecifically, overlay routing relies on the network-layer routing protocols.\nIn the case of a reactive routing protocol, routing on the overlay may cause a large number of flooded route discovery messages, since the routing path in each routing step must be discovered on demand.\nOn the other hand, if a proactive routing protocol is adopted, each peer has to periodically broadcast control messages, which leads to poor efficiency in terms of bandwidth usage.\nEither way, constructing an overlay will potentially suffer from a scalability problem.\nThe paper describes a design paradigm that uses the functional primitives of positive\/negative feedback and sporadic random walk to design robust resource discovery and distribution schemes over MANETs.\nIn particular, the scheme offers the features of (1) cross-layer design of P2P systems, which allows the routing process at both the P2P and the network layers to be integrated to reduce overhead, (2) scalability and mobility support, which minimizes the use of global flooding operations and adaptively combines proactive resource advertisement and reactive resource discovery, and (3) load balancing, which is facilitated by resource replication, relocation, and division.","keyphrases":["resourc discoveri","mobil ad-hoc network","manet","manet p2p system","hybrid discoveri scheme","neighbor discoveri protocol","neg feedback","queri packet","replica invalid","valid mesh","invalid packet","manet rout protocol","rout discoveri messag","concurr updat"],"prmu":["P","P","U","M","M","M","U","U","U","U","U","M","M","U"]} {"id":"C-45","title":"StarDust: A Flexible Architecture for Passive Localization in Wireless Sensor Networks","abstract":"The problem of localization in wireless sensor networks where nodes do not use ranging hardware, remains a challenging problem, when considering the required location accuracy, energy expenditure and the duration of the 
localization phase. In this paper we propose a framework, called StarDust, for wireless sensor network localization based on passive optical components. In the StarDust framework, sensor nodes are equipped with optical retro-reflectors. An aerial device projects light towards the deployed sensor network, and records an image of the reflected light. An image processing algorithm is developed for obtaining the locations of sensor nodes. For matching a node ID to a location we propose a constraint-based label relaxation algorithm. We propose and develop localization techniques based on four types of constraints: node color, neighbor information, deployment time for a node and deployment location for a node. We evaluate the performance of a localization system based on our framework by localizing a network of 26 sensor nodes deployed in a 120 \u00d7 60 ft2 area. The localization accuracy ranges from 2 ft to 5 ft while the localization time ranges from 10 milliseconds to 2 minutes.","lvl-1":"StarDust: A Flexible Architecture for Passive Localization in Wireless Sensor Networks \u2217 Radu Stoleru, Pascal Vicaire, Tian He\u2020, John A. 
Stankovic Department of Computer Science, University of Virginia \u2020Department of Computer Science and Engineering, University of Minnesota {stoleru, pv9f}@cs.virginia.edu, tianhe@cs.umn.edu, stankovic@cs.virginia.edu Abstract The problem of localization in wireless sensor networks where nodes do not use ranging hardware, remains a challenging problem, when considering the required location accuracy, energy expenditure and the duration of the localization phase.\nIn this paper we propose a framework, called StarDust, for wireless sensor network localization based on passive optical components.\nIn the StarDust framework, sensor nodes are equipped with optical retro-reflectors.\nAn aerial device projects light towards the deployed sensor network, and records an image of the reflected light.\nAn image processing algorithm is developed for obtaining the locations of sensor nodes.\nFor matching a node ID to a location we propose a constraint-based label relaxation algorithm.\nWe propose and develop localization techniques based on four types of constraints: node color, neighbor information, deployment time for a node and deployment location for a node.\nWe evaluate the performance of a localization system based on our framework by localizing a network of 26 sensor nodes deployed in a 120 \u00d7 60 ft2 area.\nThe localization accuracy ranges from 2 ft to 5 ft while the localization time ranges from 10 milliseconds to 2 minutes.\nCategories and Subject Descriptors: C.2.4 [Computer Communications Networks]: Distributed Systems; C.3 [Special Purpose and Application Based Systems]: Real-time and embedded systems General Terms: Algorithms, Measurement, Performance, Design, Experimentation 1 Introduction Wireless Sensor Networks (WSN) have been envisioned to revolutionize the way humans perceive and interact with the surrounding environment.\nOne vision is to embed tiny sensor devices in outdoor environments, by aerial deployments from unmanned air vehicles.\nThe sensor 
nodes form a network and collaborate (to compensate for the extremely scarce resources available to each of them: computational power, memory size, communication capabilities) to accomplish the mission.\nThrough collaboration, redundancy and fault tolerance, the WSN is then able to achieve unprecedented sensing capabilities.\nA major step forward has been accomplished by developing systems for several domains: military surveillance [1] [2] [3], habitat monitoring [4] and structural monitoring [5].\nEven after these successes, several research problems remain open.\nAmong these open problems is sensor node localization, i.e., how to find the physical position of each sensor node.\nDespite the attention the localization problem in WSN has received, no universally acceptable solution has been developed.\nThere are several reasons for this.\nOn one hand, localization schemes that use ranging are typically high-end solutions.\nGPS ranging hardware consumes energy, is relatively expensive (if high accuracy is required), and poses form-factor challenges that move us away from the vision of dust-size sensor nodes.\nUltrasound has a short range and is highly directional.\nSolutions that use the radio transceiver for ranging either have not produced encouraging results (if the received signal strength indicator is used) or are sensitive to the environment (e.g., multipath).\nOn the other hand, localization schemes that use only connectivity information for inferring location are characterized by low accuracies: \u2248 10 ft in controlled environments, 40\u221250 ft in realistic ones.\nTo address these challenges, we propose a framework for WSN localization, called StarDust, in which the complexity associated with the node localization is completely removed from the sensor node.\nThe basic principle of the framework is localization through passivity: each sensor node is equipped with a corner-cube retro-reflector and possibly an optical filter (a coloring 
device).\nAn aerial vehicle projects light onto the deployment area and records images containing retro-reflected light beams (they appear as luminous spots).\nThrough image processing techniques, the locations of the retro-reflectors (i.e., sensor nodes) are determined.\nFor inferring the identity of the sensor node present at a particular location, the StarDust framework develops a constraint-based node ID relaxation algorithm.\nThe main contributions of our work are the following.\nWe propose a novel framework for node localization in WSNs that is very promising and allows for many future extensions and more accurate results.\nWe propose a constraint-based label relaxation algorithm for mapping node IDs to the locations, and four constraints (node, connectivity, time and space), which are building blocks for very accurate and very fast localization systems.\nWe develop a sensor node hardware prototype, called a SensorBall.\nWe evaluate the performance of a localization system for which we obtain location accuracies of 2 \u2212 5 ft with a localization duration ranging from 10 milliseconds to 2 minutes.\nWe investigate the range of a system built on our framework by considering realities of physical phenomena that occur during light propagation through the atmosphere.\nThe rest of the paper is structured as follows.\nSection 2 is an overview of the state of the art.\nThe design of the StarDust framework is presented in Section 3.\nOne implementation and its performance evaluation are in Sections 4 and 5, followed by a suite of system optimization techniques in Section 6.\nIn Section 7 we present our conclusions.\n2 Related Work We present the prior work in localization in two major categories: the range-based and the range-free schemes.\nThe range-based localization techniques have been designed to use either more expensive hardware (and hence higher accuracy) or just the radio transceiver.\nRanging techniques dependent on hardware are the time-of-flight (ToF) 
and the time-difference-of-arrival (TDoA). Solutions that use the radio are based on the received signal strength indicator (RSSI) and, more recently, on radio interferometry. The most widely used ToF localization technique is GPS. GPS is a costly solution for high accuracy localization of a large scale sensor network. AHLoS [6] employs a TDoA ranging technique that requires extensive hardware and solves relatively large nonlinear systems of equations. The Cricket location-support system (TDoA) [7] can achieve a location granularity of tens of inches with highly directional and short range ultrasound transceivers. In [2] the location of a sniper is determined in an urban terrain by using the TDoA between an acoustic wave and a radio beacon. The PushPin project [8] uses the TDoA between ultrasound pulses and light flashes for node localization. The RADAR system [9] uses the RSSI to build a map of signal strengths as emitted by a set of beacon nodes. A mobile node is located by the best match, in the signal strength space, with a previously acquired signature. In MAL [10], a mobile node assists in measuring the distances (acting as constraints) between nodes until a rigid graph is generated. The localization problem is formulated as an on-line state estimation in a nonlinear dynamic system [11]. A cooperative ranging that attempts to achieve global positioning from distributed local optimizations is proposed in [12]. A very recent, remarkable localization technique is based on radio interferometry: RIPS [13] utilizes two transmitters to create an interfering signal. The frequencies of the emitters are very close to each other, so the interfering signal has a low frequency envelope that can be easily measured. The ranging technique performs very well, but the long time required for localization and multi-path environments pose significant challenges. Real environments create additional challenges for the range based
localization schemes. These have been emphasized by several studies [14] [15] [16]. To address these challenges, and others (hardware cost, energy expenditure, form factor, small range, localization time), several range-free localization schemes have been proposed. Sensor nodes use primarily connectivity information for inferring proximity to a set of anchors. In the Centroid localization scheme [17], a sensor node localizes to the centroid of its proximate beacon nodes. In APIT [18] each node decides its position based on the possibility of being inside or outside of a triangle formed by any three beacons within the node's communication range. The Gradient algorithm [19] leverages knowledge of the network density to infer the average one hop length. This, in turn, can be transformed into distances to nodes with known locations. DV-Hop [20] uses the hop by hop propagation capability of the network to forward distances to landmarks. More recently, several localization schemes that exploit the sensing capabilities of sensor nodes have been proposed. Spotlight [21] creates well controlled (in time and space) events in the network while the sensor nodes detect and timestamp these events. From the spatiotemporal knowledge of the created events and the temporal information provided by sensor nodes, the nodes' spatial information can be obtained. In a similar manner, the Lighthouse system [22] uses a parallel light beam, emitted by an anchor that rotates with a certain period. A sensor node detects the light beam for a period of time, which is dependent on the distance between it and the light emitting device. Many of the above localization solutions target specific sets of requirements and are useful for specific applications. StarDust differs in that it addresses a particularly demanding set of requirements that is not yet solved well. StarDust is meant for localizing air dropped nodes where node passiveness, high accuracy, low
cost, small form factor and rapid localization are all required. Many military applications have such requirements.

3 StarDust System Design

The design of the StarDust system (and its name) was inspired by the similarity between a deployed sensor network, in which sensor nodes indicate their presence by emitting light, and the Universe consisting of luminous and illuminated objects: stars, galaxies, planets, etc. The main difficulty when applying the above ideas to the real world is the complexity of the hardware that needs to be put on a sensor node so that the emitted light can be detected from thousands of feet. The energy expenditure for producing an intense enough light beam is also prohibitive. Instead, what we propose to use for sensor node localization is a passive optical element called a retro-reflector. The most common retro-reflective optical component is a Corner-Cube Retroreflector (CCR), shown in Figure 1(a). It consists of three mutually perpendicular mirrors.

Figure 1. Corner-Cube Retroreflector (a) and an array of CCRs molded in plastic (b)

The interesting property of this optical component is that an incoming beam of light is reflected back, towards the source of the light, irrespective of the angle of incidence. This is in contrast with a mirror, which needs to be precisely positioned to be perpendicular to the incident light. A very common and inexpensive implementation of an array of CCRs is the retroreflective plastic material used on cars and bicycles for night time detection, shown in Figure 1(b). In the StarDust system, each node is equipped with a small (e.g.
0.5 in²) array of CCRs and the enclosure has self-righting capabilities that orient the array of CCRs predominantly upwards. It is critical to understand that the upward orientation does not need to be exact. Even when large angular variations from a perfectly upward orientation are present, a CCR will return the light in the exact same direction from which it came. In the remaining part of the section, we present the architecture of the StarDust system and the design of its main components.

3.1 System Architecture

The envisioned sensor network localization scenario is as follows:
• The sensor nodes are released, possibly in a controlled manner, from an aerial vehicle during the night.
• The aerial vehicle hovers over the deployment area and uses a strobe light to illuminate it. The sensor nodes, equipped with CCRs and optical filters (acting as coloring devices), have self-righting capabilities and retroreflect the incoming strobe light. The retro-reflected light is either white, as the originating source light, or colored, due to the optical filters.
• The aerial vehicle records a sequence of two images very close in time (msec level). One image is taken when the strobe light is on, the other when the strobe light is off. The acquired images are used for obtaining the locations of sensor nodes (which appear as luminous spots in the image).
• The aerial vehicle executes the mapping of node IDs to the identified locations in one of the following ways: a) by using the color of a retro-reflected light, if a sensor node has a unique color; b) by requiring sensor nodes to establish neighborhood information and report it to a base station; c) by controlling the time sequence of sensor node deployment and recording additional images; d) by controlling the location where a sensor node is deployed.
• The computed locations are disseminated to the sensor network.

Figure 2. The StarDust system architecture

The architecture of the StarDust system is shown in Figure 2. It consists of two main components: the first is centralized and located on a more powerful device; the second is distributed and resides on all sensor nodes. The Central Device consists of the following: the Light Emitter, the Image Processing module, the Node ID Mapping module and the Radio Model. The distributed component of the architecture is the Transfer Function, which acts as a filter for the incoming light. The aforementioned modules are briefly described below:
• Light Emitter - It is a strobe light, capable of producing very intense, collimated light pulses. The emitted light is non-monochromatic (unlike a laser) and is characterized by a spectral density Ψ(λ), a function of the wavelength. The emitted light is incident on the CCRs present on sensor nodes.
• Transfer Function Φ(Ψ(λ)) - This is a bandpass filter for the light incident on the CCR. The filter allows a portion of the original spectrum to be retro-reflected. From here on, we will refer to the transfer function as the color of a sensor node.
• Image Processing - The Image Processing module acquires high resolution images. From these images the locations and the colors of sensor nodes are obtained. If only one set of pictures can be taken (i.e., one location of the light emitter/image analysis device), then the map of the field is assumed to be known, as well as the distance between the imaging device and the field. The aforementioned assumptions (field map and distance to it) are not necessary if the images can be simultaneously taken from different locations. It is important to remark here that the identity of a node can not be directly obtained through Image Processing alone, unless a specific
characteristic of a sensor node can be identified in the image.
• Node ID Matching - This module uses the detected locations and, through additional techniques (e.g., sensor node coloring and connectivity information G(Λ,E) from the deployed network), uniquely identifies the sensor nodes observed in the image. The connectivity information is represented by neighbor tables sent from each sensor node to the Central Device.
• Radio Model - This component provides an estimate of the radio range to the Node ID Matching module. It is only used by node ID matching techniques that are based on the radio connectivity in the network. The estimate of the radio range R is based on the sensor node density (obtained through the Image Processing module) and the connectivity information (i.e., G(Λ,E)).

The two main components of the StarDust architecture are the Image Processing and the Node ID Mapping. Their design and analysis are presented in the sections that follow.

3.2 Image Processing

Algorithm 1 Image Processing
1: Background filtering
2: Retro-reflected light recognition through intensity filtering
3: Edge detection to obtain the location of sensor nodes
4: Color identification for each detected sensor node

The goal of the Image Processing Algorithm (IPA) is to identify the location of the nodes and their color. Note that IPA does not identify which node fell where, but only the set of locations where the nodes fell. IPA is executed after an aerial vehicle records two pictures: one in which the field of deployment is illuminated and one when no illumination is present. Let Pdark be the picture of the deployment area, taken when no light was emitted, and Plight be the picture of the same deployment area when a strong light beam was directed towards the sensor nodes. The proposed IPA has several steps, as shown in Algorithm 1. The first step is to obtain a third picture Pfilter where only the differences between Pdark and Plight
remain. Let us assume that Pdark has a resolution of n × m, where n is the number of pixels in a row of the picture, while m is the number of pixels in a column of the picture. Then Pdark is composed of n × m pixels denoted Pdark(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ m. Similarly, Plight is composed of n × m pixels denoted Plight(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ m. Each pixel P is described by an RGB value, where the R value is denoted by P^R, the G value by P^G, and the B value by P^B. IPA then generates the third picture, Pfilter, through the following transformations:

P^R_filter(i, j) = P^R_light(i, j) − P^R_dark(i, j)
P^G_filter(i, j) = P^G_light(i, j) − P^G_dark(i, j)   (1)
P^B_filter(i, j) = P^B_light(i, j) − P^B_dark(i, j)

After this transformation, all the features that appeared in both Pdark and Plight are removed from Pfilter. This simplifies the recognition of light retro-reflected by sensor nodes. The second step consists of identifying the elements contained in Pfilter that retro-reflect light. For this, an intensity filter is applied to Pfilter. First IPA converts Pfilter into a grayscale picture. Then the brightest pixels are identified and used to create Preflect. This step is eased by the fact that the reflecting nodes should appear much brighter than any other illuminated object in the picture.

Figure 3. Probabilistic label relaxation

The third step runs an edge detection algorithm on Preflect to identify the boundary of the nodes present. A tool such as Matlab provides a number of edge detection techniques. We used the bwboundaries function. For the obtained edges, the location (x, y) (in the image) of each node is determined by computing the centroid of the points constituting its edges. Standard computer graphics techniques [23] are then used to transform the 2D locations of sensor nodes detected in multiple images into 3D sensor node locations. The color of the node is obtained as the color of the pixel located at (x, y) in Plight.

3.3 Node ID Matching

The goal of the Node ID Matching module is to obtain the identity (node ID) of a luminous spot in the image, detected to be a sensor node. For this, we define V = {(x1,y1),(x2,y2),...,(xm,ym)} to be the set of locations of the sensor nodes, as detected by the Image Processing module, and Λ = {λ1,λ2,...,λm} to be the set of unique node IDs assigned to the m sensor nodes before deployment. From here on, we refer to node IDs as labels. We model the problem of finding the label λj of a node ni as a probabilistic label relaxation problem, frequently used in image processing/understanding. In the image processing domain, scene labeling (i.e., identifying objects in an image) plays a major role. The goal of scene labeling is to assign a label to each object detected in an image, such that an appropriate image interpretation is achieved. It is prohibitively expensive to consider the interactions among all the objects in an image. Instead, constraints placed among nearby objects generate local consistencies and, through iteration, global consistencies can be obtained. The main idea of the sensor node localization through probabilistic label relaxation is to iteratively compute the probability of each label being the correct label for a sensor node, by taking into
account, at each iteration, the support for a label. The support for a label can be understood as a hint, or proof, that a particular label is more likely to be the correct one when compared with the other potential labels for a sensor node. We pictorially depict this main idea in Figure 3. As shown, node ni has a set of candidate labels {λ1,...,λk}. Each of the labels has a different value for the Support function Q(λk). We defer the explanation of how the Support function is implemented until the subsections that follow, where we provide four concrete techniques. Formally, the algorithm is outlined in Algorithm 2, where the equations necessary for computing the new probability Pni(λk) for a label λk of a node ni are expressed by the following equations:

P^{s+1}_{ni}(λk) = (1 / K_{ni}) · P^s_{ni}(λk) Q^s_{ni}(λk)   (2)

where K_{ni} is a normalizing constant, given by:

K_{ni} = ∑_{k=1}^{N} P^s_{ni}(λk) Q^s_{ni}(λk)   (3)

and Q^s_{ni}(λk) is:

Q^s_{ni}(λk) = support for label λk of node ni   (4)

Algorithm 2 Label Relaxation
1: for each sensor node ni do
2:   assign equal prob. to all possible labels
3: end for
4: repeat
5:   converged ← true
6:   for each sensor node ni do
7:     for each label λj of ni do
8:       compute the Support for label λj: Equation 4
9:     end for
10:    compute K for the node ni: Equation 3
11:    for each label λj do
12:      update probability of label λj: Equation 2
13:      if |new prob. − old prob.| ≥ ε then
14:        converged ← false
15:      end if
16:    end for
17:  end for
18: until converged = true

The label relaxation algorithm is iterative and polynomial in the size of the network (number of nodes). The pseudo-code is shown in Algorithm 2. It initializes the probabilities associated with each possible label, for a node ni, through a uniform distribution. At each iteration s, the algorithm updates the probability associated with each label by considering the Support Q^s_{ni}
(λk) for each candidate label of a sensor node. In the sections that follow, we describe four different techniques for implementing the Support function: based on node coloring, radio connectivity, the time of deployment (time) and the location of deployment (space). While some of these techniques are simplistic, they are primitives which, when combined, can create powerful localization systems. These design techniques have different trade-offs, which we will present in Section 3.3.6.

3.3.1 Relaxation with Color Constraints

The unique mapping between a sensor node's position (identified by the image processing) and a label can be obtained by assigning a unique color to each sensor node. For this we define C = {c1,c2,...,cn} to be the set of unique colors available and M : Λ → C to be a one-to-one mapping of labels to colors. This mapping is known prior to the sensor node deployment (from node manufacturing). In the case of color constrained label relaxation, the support for label λk is expressed as follows:

Q^s_{ni}(λk) = 1   (5)

As a result, the label relaxation algorithm (Algorithm 2) consists of the following steps: one label is assigned to each sensor node (lines 1-3 of the algorithm), implicitly having a probability Pni(λk) = 1; the algorithm executes a single iteration, when the support function simply reiterates the confidence in the unique labeling. However, it is often the case that unique colors for each node will not be available. It is interesting to discuss here the influence that the size of the coloring space (i.e., |C|) has on the accuracy of the localization algorithm. Several cases are discussed below:
• If |C| = 0, no colors are used and the sensor nodes are equipped with simple CCRs that reflect back all the incoming light (i.e., no filtering, and no coloring of the incoming light). From the image processing system, the position of sensor nodes can still be obtained. Since all nodes appear white, no
single sensor node can be uniquely identified.
• If |C| = m − 1, then there are enough unique colors for all nodes (one node remains white, i.e., no coloring) and the problem is trivially solved. Each node can be identified based on its unique color. This is the scenario for the relaxation with color constraints.
• If |C| ≥ 1, there are several options for how to partition the coloring space. If C = {c1}, one possibility is to assign the color c1 to a single node and leave the remaining m−1 sensor nodes white, or to assign the color c1 to more than one sensor node.

One can observe that once a color is assigned uniquely to a sensor node, in effect, that sensor node is given the status of anchor, or node with known location. It is interesting to observe that there is an entire spectrum of possibilities for how to partition the set of sensor nodes in equivalence classes (where an equivalence class is represented by one color), in order to maximize the success of the localization algorithm. One of the goals of this paper is to understand how the size of the coloring space and its partitioning affect localization accuracy. Despite the simplicity of this method of constraining the set of labels that can be assigned to a node, we will show that this technique is very powerful when combined with other relaxation techniques.

3.3.2 Relaxation with Connectivity Constraints

Connectivity information, obtained from the sensor network through beaconing, can provide additional information for locating sensor nodes. In order to gather connectivity information, the following need to occur: 1) after deployment, through beaconing of HELLO messages, sensor nodes build their neighborhood tables; 2) each node sends its neighbor table information to the Central device via a base station. First, let us define G = (Λ,E) to be the weighted connectivity graph built by the Central device from the received neighbor table information. In G the edge
(λi,λj) has a weight gij represented by the number of beacons sent by λj and received by λi. In addition, let R be the radio range of the sensor nodes.

Figure 4. Label relaxation with connectivity constraints

The main idea of the connectivity constrained label relaxation is depicted in Figure 4, in which two nodes ni and nj have been assigned all possible labels. The confidence in each of the candidate labels for a sensor node is represented by a probability, shown in a dotted rectangle. It is important to remark that through beaconing and the reporting of neighbor tables to the Central Device, a global view of all constraints in the network can be obtained. It is critical to observe that these constraints are among labels. As shown in Figure 4, two constraints exist between nodes ni and nj. The constraints are depicted by gi2,j2 and gi2,jM, the number of beacons sent by the labels λj2 and λjM and received by the label λi2. The support for the label λk of sensor node ni, resulting from the interaction (i.e., within radio range) with sensor node nj, is given by:

Q^s_{ni}(λk) = ∑_{m=1}^{M} g_{λk,λm} P^s_{nj}(λm)   (6)

As a result, the localization algorithm (Algorithm 3) consists of the following steps: all labels are assigned to each sensor node (lines 1-3 of the algorithm), and implicitly each label has a probability initialized to Pni(λk) = 1/|Λ|; in each iteration, the probabilities for the labels of a sensor node are updated, when considering the interaction with the labels of sensor nodes within R. It is important to remark that the identity of the nodes within R is not known, only the candidate labels and their probabilities. The relaxation algorithm converges when, during an iteration, the probability of no label is updated by more than
ε. The label relaxation algorithm based on connectivity constraints enforces such constraints between pairs of sensor nodes. For a large scale sensor network deployment, it is not feasible to consider all pairs of sensor nodes in the network. Hence, the algorithm should only consider pairs of sensor nodes that are within a reasonable communication range (R). We assume a circular radio range and a symmetric connectivity.

Algorithm 3 Localization
1: Estimate the radio range R
2: Execute the Label Relaxation Algorithm with Support Function given by Equation 6 for neighbors less than R apart
3: for each sensor node ni do
4:   node identity is λk with max. prob.
5: end for

In the remaining part of the section we propose a simple analytical model that estimates the radio range R for medium-connected networks (less than 20 neighbors per R). We consider the following to be known: the size of the deployment field (L), the number of sensor nodes deployed (N) and the total number of unidirectional (i.e., not symmetric) one-hop radio connections in the network (k). For our analysis, we uniformly distribute the sensor nodes in a square area of length L, by using a grid of unit length L/√N. We use the substitution u = L/√N to simplify the notation, and distinguish the following cases: if u ≤ R ≤ √2u, each node has four neighbors (the expected k = 4N); if √2u ≤ R ≤ 2u, each node has eight neighbors (the expected k = 8N); if 2u ≤ R ≤ √5u, each node has twelve neighbors (the expected k = 12N); if √5u ≤ R ≤ 3u, each node has twenty neighbors (the expected k = 20N). For a given t = k/4N we take R to be the middle of the corresponding interval. As an example, if t = 5 then R = (3 + √5)u/2. A quadratic fitting for R over the possible values of t produces the following closed-form solution for the communication range R, as a function of network connectivity k, assuming L and N constant:
R(k) = (L/√N) · [ −0.051 (k/4N)² + 0.66 (k/4N) + 0.6 ]   (7)

We investigate the accuracy of our model in Section 5.2.1.

3.3.3 Relaxation with Time Constraints

Time constraints can be treated similarly to color constraints. The unique identification of a sensor node can be obtained by deploying sensor nodes individually, one by one, and recording a sequence of images. The sensor node that is identified as new in the last picture (it was not identified in the picture before last) must be the last sensor node dropped. In a similar manner with color constrained label relaxation, the time constrained approach is very simple, but may take too long, especially for large scale systems. While it can be used in practice, it is unlikely that only a time constrained label relaxation is used. As we will see, by combining constraint-based primitives, realistic localization systems can be implemented. The support function for the label relaxation with time constraints is defined identically with the color constrained relaxation:

Q^s_{ni}(λk) = 1   (8)

The localization algorithm (Algorithm 2) consists of the following steps: one label is assigned to each sensor node (lines 1-3 of the algorithm), implicitly having a probability Pni(λk) = 1; the algorithm executes a single iteration, when the support function simply reiterates the confidence in the unique labeling.

Figure 5. Relaxation with space constraints
Figure 6. Probability distribution of distances (σ = 0.5, 1, 2)
Figure 7. Distribution of nodes

3.3.4 Relaxation with Space Constraints

Spatial information related to sensor deployment can also be employed as another input to the label relaxation algorithm. To do that, we use two types of locations: the node location pn and the
label location pl. The former, pn, is defined as the position of a node (xn,yn,zn) after deployment, which can be obtained through Image Processing as mentioned in Section 3.3. The latter, pl, is defined as the location (xl,yl,zl) where a node is dropped. We use D^{ni}_{λm} to denote the horizontal distance between the location of the label λm and the location of the node ni. Clearly, D^{ni}_{λm} = √((xn − xl)² + (yn − yl)²). At the time of a sensor node release, the one-to-one mapping between the node and its label is known. In other words, the label location is the same as the node location at the release time. After release, the label location information is partially lost due to random factors such as wind and surface impact. However, statistically, the node locations are correlated with label locations. Such correlation depends on the airdrop method employed and the environment. For the sake of simplicity, let's assume nodes are dropped from the air by a helicopter hovering over the deployment area. Wind can be decomposed into three components X, Y and Z.
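To make the relaxation machinery concrete, the update of Equations 2-3 driven by a distance-based support (in the spirit of the spatial constraint, using the horizontal distance D just defined) can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation; the function names, the coordinates and the σ value are made-up examples, and the Rayleigh density for the drift distance is one plausible modeling choice:

```python
import math

def rayleigh_pdf(d, sigma):
    # Density of a drift distance D ~ Rayleigh(sigma^2);
    # an assumed wind-drift model, not prescribed by the framework.
    return (d / sigma**2) * math.exp(-(d**2) / (2 * sigma**2))

def spatial_support(node_xy, label_xy, sigma):
    # Support Q(lambda_k): the density evaluated at the horizontal
    # distance D between the detected node position and the label's
    # drop position.
    return rayleigh_pdf(math.dist(node_xy, label_xy), sigma)

def relax_step(probs, supports):
    # One update of Equation 2: P'(k) = P(k) * Q(k) / K,
    # with K the normalizing constant of Equation 3.
    weighted = [p * q for p, q in zip(probs, supports)]
    K = sum(weighted)
    return [w / K for w in weighted]

# Illustrative example (made-up inputs): a node detected at (0.8, 0.3)
# with two candidate labels, dropped at (0, 0) and (5, 0), sigma = 1.
node = (0.8, 0.3)
drops = [(0.0, 0.0), (5.0, 0.0)]
supports = [spatial_support(node, d, sigma=1.0) for d in drops]
probs = [0.5, 0.5]                # uniform init (lines 1-3, Algorithm 2)
probs = relax_step(probs, supports)
# The label dropped nearby now dominates. Note the nearest label is
# favored but not guaranteed: the Rayleigh density vanishes at d = 0.
```

A single step suffices here because the supports are fixed; in the full algorithm the supports themselves change across iterations (e.g., under the connectivity constraint of Equation 6), which is why the loop repeats until convergence.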
Only X and Y affect the horizontal distance a node can travel. According to [24], we can assume that X and Y follow independent normal distributions. Therefore, the absolute value of the wind speed follows a Rayleigh distribution. Obviously, the higher the wind speed is, the further a node will land horizontally from the label location. If we assume that the distance D is a function of the wind speed V [25] [26], we can obtain the probability distribution of D under a given wind speed distribution. Without loss of generality, we assume that D is proportional to the wind speed. Therefore, D follows the Rayleigh distribution as well. As shown in Figure 5, the spatial-based relaxation is a recursive process that assigns the probability that a node has a certain label by using the distances between the location of a node and multiple label locations. We note that the distribution of the distance D affects the probability with which a label is assigned. It is not necessarily true that the nearest label is always chosen. For example, if D follows the Rayleigh(σ²) distribution, we can obtain the Probability Density Function (PDF) of distances as shown in Figure 6. This figure indicates that the probability of a node falling straight down is very small under windy conditions (σ > 0), and that the distance D is affected by σ. The spatial distribution of nodes for σ = 1 is shown in Figure 7. Strong wind with a high σ value leads to a larger node dispersion. More formally, given a probability density function PDF(D), the support for label λk of sensor node ni can be formulated as:

Q^s_{ni}(λk) = PDF(D^{ni}_{λk})   (9)

It is interesting to point out two special cases. First, if all nodes are released at once (i.e., there is only one label location for all released nodes), the distance D from a node to all labels is the same. In this case, P^{s+1}_{ni}(λk) = P^s_{ni}(λk), which indicates that we can not use the spatial-based
relaxation to recursively narrow down the potential labels for a node. Second, if nodes are released at different locations that are far away from each other, we have: (i) if node ni has label λk, P^s_{ni}(λk) → 1 when s → ∞; (ii) if node ni does not have label λk, P^s_{ni}(λk) → 0 when s → ∞. In this second scenario, there are multiple labels (one label per release), hence it is possible to correlate release times (labels) with positions on the ground. These results indicate that spatial-based relaxation can label the node with a very high probability if the physical separation among nodes is large.

3.3.5 Relaxation with Color and Connectivity Constraints

One of the most interesting features of the StarDust architecture is that it allows for hybrid localization solutions to be built, depending on the system requirements. One example is a localization system that uses the color and connectivity constraints. In this scheme, the color constraints are used for reducing the number of candidate labels for sensor nodes to a more manageable value. As a reminder, in the connectivity constrained relaxation, all labels are candidate labels for each sensor node. The color constraints are used in the initialization phase of Algorithm 3 (lines 1-3). After the initialization, the standard connectivity constrained relaxation algorithm is used. For a better understanding of how the label relaxation algorithm works, we give a concrete example, exemplified in Figure 8.

Figure 8. A step through the algorithm, after initialization (a) and after the 1st iteration for node ni (b)

In part (a) of the figure we depict the data structures associated with nodes ni and nj after the initialization steps of the algorithm (lines 1-6), as well as the number of beacons between
different labels (as reported by the network, through G(Λ,E)). As seen, the potential labels (shown inside the vertical rectangles) are assigned to each node; node ni can be any of 11, 8, 4, or 1. Also depicted in the figure are the probabilities associated with each of the labels; after initialization, all probabilities are equal.

Part (b) of Figure 8 shows the result of the first iteration of the localization algorithm for node ni, assuming that node nj is the first wi chosen in line 7 of Algorithm 3. Using Equation 6, the algorithm computes the support Q(λi) for each of the possible labels of node ni. Once the Q(λi)'s are computed, the normalizing constant, given by Equation 3, can be obtained. The last step of the iteration is to update the probabilities associated with all potential labels of node ni, as given by Equation 2.

One interesting problem, which we explore in the performance evaluation section, is to assess the impact that the partitioning of the color set C has on the accuracy of localization. When the size of the coloring set is smaller than the number of sensor nodes (as is the case for our hybrid connectivity/color constrained relaxation), the system designer has the option of assigning a color uniquely to a single node (which then acts as an anchor) or to multiple nodes. Intuitively, by assigning one color to more than one node, more (distributed) constraints can be enforced.

3.3.6 Relaxation Techniques Analysis

The proposed label relaxation techniques have different trade-offs. For our analysis of the trade-offs, we consider the following metrics of interest: the localization time (duration), the energy consumed (overhead), the network size (scale) that can be handled by the technique, and the localization accuracy. The parameters of interest are: the number of sensor nodes (N), the energy spent for one aerial drop (εd), the energy spent in the network for collecting and reporting neighbor
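The relaxation iteration described above (support computation via Equation 6, normalization via Equation 3, and the probability update of Equation 2) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the support values Q(λ) are assumed to be given (in the paper they are derived from neighbor label probabilities), and the concrete numbers below are hypothetical.

```python
def relax_step(p, q):
    """One label relaxation iteration for a node.

    p: current label probabilities for node ni (label -> probability)
    q: support Q(lambda) for each candidate label, computed from the
       probabilities of neighboring nodes (Equation 6; assumed given here)
    Returns the updated, normalized probabilities (Equations 2-3).
    """
    unnorm = {lab: p[lab] * q[lab] for lab in p}
    z = sum(unnorm.values())  # normalizing constant (Equation 3)
    return {lab: v / z for lab, v in unnorm.items()}

# Node ni starts with a uniform distribution over its candidate labels,
# as in part (a) of Figure 8; labels with zero support are eliminated.
p_ni = {11: 0.25, 8: 0.25, 4: 0.25, 1: 0.25}
q_ni = {11: 0.8, 8: 0.0, 4: 1.7, 1: 0.0}  # hypothetical support values
p_ni = relax_step(p_ni, q_ni)
```

With these made-up support values the probability mass concentrates on labels 11 and 4 while the unsupported labels drop to zero, mirroring the qualitative behavior shown in part (b) of Figure 8.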
information (εb), and the time Td taken by a sensor node to reach the ground after being aerially deployed. The cost comparison of the different label relaxation techniques is shown in Table 1.

Criteria  | Color | Connectivity | Time | Space
Duration  | 0     | N·Tb         | N·Td | 0
Overhead  | εd    | εd + N·εb    | N·εd | εd
Scale     | |C|   | |N|          | |N|  | |N|
Accuracy  | High  | Low          | High | Medium

Table 1. Comparison of label relaxation techniques

As shown, the relaxation techniques based on color and space constraints have the lowest localization duration: zero, for all practical purposes. The scalability of the color-based relaxation technique is, however, limited to the number of unique color filters that can be built. The narrower the transfer function Ψ(λ), the larger the number of unique colors that can be created; the manufacturing costs, however, increase as well. The scalability issue is addressed by all the other label relaxation techniques. Most notably, the time-constrained relaxation, which is very similar to the color-constrained relaxation, addresses the scale issue, at a higher deployment cost.

Figure 9. SensorBall with self-righting capabilities (a) and colored CCRs (b)

4 System Implementation

The StarDust localization framework, depicted in Figure 2, is flexible in that it enables the development of new localization systems based on the four proposed label relaxation schemes, or the inclusion of other, yet to be invented, schemes. For our performance evaluation we implemented a version of the StarDust framework, namely the one proposed in Section 3.3.5, where the constraints are based on color and connectivity.

The Central device of the StarDust system consists of the following: for the Light Emitter we used a common off-the-shelf flashlight (QBeam, 3 million candlepower); the image acquisition was done with a 3-megapixel digital camera (Sony DSC-S50), which provided the input to the Image Processing algorithm, implemented in Matlab. For sensor nodes we
built a custom sensor node, called SensorBall, with self-righting capabilities, shown in Figure 9(a). The self-righting capabilities are necessary in order to orient the CCR predominantly upwards. The CCRs that we used were inexpensive, plastic-molded, night-time warning signs commonly available on bicycles, as shown in Figure 9(b). We remark here on the low quality of the CCRs we used: the reflectivity of each CCR (there are tens molded in the plastic container) is extremely low, and each CCR is not built with mirrors; a reflective effect is instead achieved by finely polished plastic surfaces. We had 5 colors available, in addition to the standard CCR, which reflects all the incoming light (white CCR). For a slightly higher price (ours were 20 cents/piece), better quality CCRs can be employed.

Figure 10. The field in the dark
Figure 11. The illuminated field
Figure 12. The difference between Figure 10 and Figure 11

Higher quality CCRs (with better mirrors) would translate into more accurate image processing (better sensor node detection) and a smaller form factor for the optical component (an array of CCRs with a smaller area could be used).

The sensor node platform we used was the micaZ mote. The code that runs on each node is a simple application which broadcasts 100 beacons and maintains a neighbor table containing, for each neighbor, the percentage of successfully received beacons. On demand, the neighbor table is reported to a base station, where the node ID mapping is performed.

5 System Evaluation

In this section we present the performance evaluation of a system implementation of the StarDust localization framework. The three major research questions that our evaluation tries to answer are: the feasibility of the proposed framework (can sensor nodes be optically detected at large distances?), the localization accuracy of one actual implementation of the StarDust framework, and whether or not atmospheric conditions can affect the recognition of sensor nodes in an
image. The first two questions are investigated by evaluating the two main components of the StarDust framework: the Image Processing and the Node ID Matching. These components have been evaluated separately, mainly because of a lack of adequate facilities: we wanted to evaluate the performance of the Image Processing algorithm in a long-range, realistic experimental set-up, while the Node ID Matching required a relatively large area, available for long periods of time (for connectivity data gathering). The third research question is investigated through computer modeling of atmospheric phenomena.

For the evaluation of the Image Processing module, we performed experiments in a football stadium where we deployed 6 sensor nodes in a 3×2 grid. The distance between the Central device and the sensor nodes was approximately 500 ft. The metrics of interest are the number of false positives and false negatives in the Image Processing algorithm.

For the evaluation of the Node ID Matching component, we deployed 26 sensor nodes in a 120×60 ft2 flat area of a stadium. In order to investigate the influence radio connectivity has on localization accuracy, we varied the height above ground of the deployed sensor nodes. Two set-ups were used: one in which the sensor nodes were on the ground, and a second in which the sensor nodes were raised 3 inches above ground. From here on, we refer to these two experimental set-ups as the low connectivity and the high connectivity networks, respectively, because when nodes are on the ground the communication range is lower, resulting in fewer neighbors than when the nodes are elevated and have a greater communication range. The metrics of interest are: the localization error (defined as the distance between the computed location and the true location, known from the manual placement), the percentage of nodes correctly localized, the convergence of the label relaxation algorithm, the time to localize, and the robustness
of the node ID mapping to errors in the Image Processing module. The parameters that we vary experimentally are: the angle under which images are taken, the focus of the camera, and the degree of connectivity. The parameters that we vary in simulations (subsequent to image acquisition and connectivity collection) are: the number of colors, the number of anchors, the number of false positives or negatives given as input to the Node ID Matching component, the distance between the imaging device and the sensor network (i.e., the range), the atmospheric conditions (light attenuation coefficient), and the CCR reflectance (indicative of its quality).

5.1 Image Processing

For the IPA evaluation, we deploy 6 sensor nodes in a 3×2 grid. We take 13 sets of pictures using different orientations of the camera and different zooming factors; all pictures were taken from the same location. Each set is composed of a picture taken in the dark and a picture taken with a light beam pointed at the nodes. We process the pictures offline using a Matlab implementation of the IPA. Since we are interested in the feasibility of identifying colored sensor nodes at a large distance, the end result of our IPA is the 2D location of sensor nodes (their positions in the image); the transformation to 3D coordinates can be done through standard computer graphics techniques [23].

One set of pictures obtained as part of our experiment is shown in Figures 10 and 11. The execution of our IPA algorithm results in Figure 12, which filters out the background, and Figure 13, which shows the output of the edge detection step of the IPA.

Figure 13. Retroreflectors detected in Figure 12
Figure 14. False Positives and Negatives for the 6 nodes

The experimental results are depicted in Figure 14. For each set of pictures the graph shows the number of false positives (the IPA determines that there is a node while there is none), and the number
of false negatives (the IPA determines that there is no node while there is one). In about 45% of the cases, we obtained perfect results, i.e., no false positives and no false negatives. In the remaining cases, we obtained at most one false positive and at most two false negatives.

We exclude two pairs of pictures from Figure 14. In the first excluded pair we obtained 42 false positives, and in the second pair 10 false positives and 7 false negatives. By carefully examining the pictures, we realized that the first pair was taken out of focus, and that a car temporarily appeared in one of the pictures of the second pair. The anomaly in the second set was due to the fact that we waited too long to take the second picture; if the pictures had been taken a few milliseconds apart, the car would have been present in either both or neither of the pictures, and the IPA would have filtered it out.

5.2 Node ID Matching

We evaluate the Node ID Matching component of our system by collecting empirical data (connectivity information) from the outdoor deployment of 26 nodes in the 120×60 ft2 area. We collect 20 sets of data for the high connectivity and low connectivity network deployments. Off-line, we investigate the influence of coloring on the metrics of interest by randomly assigning colors to the sensor nodes; for one experimental data set we generate 50 random assignments of colors to sensor nodes. It is important to observe that, for the evaluation of the Node ID Matching algorithm (color and connectivity constrained), we simulate the color assignment to sensor nodes. As mentioned in Section 4, the size of the coloring space available to us was 5 (5 colors). Through simulations of color assignment (not connectivity) we are able to investigate the influence that the size of the coloring space has on the accuracy of localization. The value of the parameter ε used in Algorithm 2 was 0.001.

Figure 15. The number of existing and missing radio connections in the sparse connectivity experiment
Figure 16. The number of existing and missing radio connections in the high connectivity experiment

The results presented here represent averages over the randomly generated colorings and over all experimental data sets. We first investigate the accuracy of our proposed Radio Model, and subsequently use the derived values for the radio range in the evaluation of the Node ID Matching component.

5.2.1 Radio Model

From experiments, we obtain the average number of observed beacons (k, defined in Section 3.3.2): 180 beacons for the low connectivity network and 420 beacons for the high connectivity network. From our Radio Model (Equation 7), we obtain a radio range R = 25 ft for the low connectivity network and R = 40 ft for the high connectivity network.

To estimate the accuracy of our simple model, we plot the number of radio links that exist in the networks, and the number of links that are missing, as functions of the distance between nodes. The results are shown in Figures 15 and 16. We define the average radio range R to be the distance over which less than 20% of the potential radio links are missing. As shown in Figure 15, the radio range is between 20 ft and 25 ft; for the higher connectivity network, the radio range was between 30 ft and 40 ft. We choose two conservative estimates of the radio range: 20 ft for the low connectivity case and 35 ft for the high connectivity case, which are in good agreement with the values predicted by our Radio Model.

5.2.2 Localization Error vs.
Coloring Space Size

In this experiment we investigate the effect of the number of colors on the localization accuracy. For this, we randomly assign colors from a pool of a given size to the sensor nodes. We then execute the localization algorithm, which uses the empirical data. The algorithm is run for three different radio ranges (15, 20 and 25 ft) to investigate their influence on the localization error.

Figure 17. Localization error
Figure 18. Percentage of nodes correctly localized

The results are depicted in Figure 17 (localization error) and Figure 18 (percentage of nodes correctly localized). As shown, for an estimate of 20 ft for the radio range (as predicted by our Radio Model) we obtain the smallest localization errors, as small as 2 ft when enough colors are used. Both Figures 17 and 18 confirm our intuition that a larger number of available colors significantly decreases the localization error.

The well-known fact that relaxation algorithms do not always converge was observed during our experiments. The percentage of successful runs (when the algorithm converged) is depicted in Figure 19. As shown, in several situations the algorithm failed to converge (the algorithm execution was stopped after 100 iterations per node). If the algorithm does not converge in a predetermined number of steps, it terminates and the label with the highest probability provides the identity of the node. It is quite probable that the chosen label is incorrect, since the probabilities of some of the labels are still changing with each iteration. The convergence of relaxation-based algorithms is a well-known issue.

5.2.3 Localization Error vs.
Color Uniqueness

As mentioned in Section 3.3.1, a unique color gives a sensor node the status of an anchor: a sensor node that is an anchor can be unequivocally identified through the Image Processing module. In this section we investigate the effect unique colors have on the localization accuracy. Specifically, we want to experimentally verify our intuition that assigning more nodes to a color can benefit the localization accuracy, by enforcing more constraints, as opposed to uniquely assigning a color to a single node.

Figure 19. Convergence error
Figure 20. Localization error vs. number of colors

For this, we fix the number of available colors to either 4, 6 or 8 and vary the number of nodes that are given unique colors, from 0 up to the maximum number of colors (4, 6 or 8). Naturally, if we have a maximum of 4 colors, we can assign at most 4 anchors.

The experimental results are depicted in Figure 20 (localization error) and Figure 21 (percentage of sensor nodes correctly localized). As expected, the localization accuracy increases with the number of available colors (a larger coloring space). Also, for a given size of the coloring space (e.g., 6 available colors), if more colors are uniquely assigned to sensor nodes then the localization accuracy decreases. It is interesting to observe that by assigning colors uniquely to nodes, the benefit of having additional colors is diminished: if 8 colors are available and all are assigned uniquely, the system is localized less accurately (error ≈ 7 ft) than in the case of 6 colors and no unique color assignments (≈ 5 ft localization error). The same trend of less accurate localization can be observed in Figure 21, which
shows the percentage of nodes correctly localized (i.e., 0 ft localization error). As shown, if we increase the number of colors that are uniquely assigned, the percentage of nodes correctly localized decreases.

5.2.4 Localization Error vs. Connectivity

We collected empirical data for two network deployments with different degrees of connectivity (high and low) in order to assess the influence of connectivity on location accuracy. The results obtained from running our localization algorithm are depicted in Figure 22 and Figure 23. We varied the number of available colors and assigned no anchors (i.e., no unique assignments of colors).

Figure 21. Percentage of nodes correctly localized vs. number of colors
Figure 22. Localization error vs. number of colors

In both scenarios, as expected, the localization error decreases with an increase in the number of colors. It is interesting to observe, however, that the low connectivity scenario improves its localization accuracy more quickly with the additional available colors. When the number of colors becomes relatively large (twelve, for our 26-sensor-node network), both scenarios (low and high connectivity) have comparable localization errors, of less than 2 ft. The same trend of more accurate location information is evidenced by Figure 23, which shows that the percentage of nodes that are localized correctly grows more quickly for the low connectivity deployment.

5.3 Localization Error vs.
Image Processing Errors

So far we investigated the sources of localization error that are intrinsic to the Node ID Matching component. As previously presented, luminous objects can be mistakenly detected to be sensor nodes during the location detection phase of the Image Processing module. These false positives can be eliminated by the color recognition procedure of the Image Processing module. More problematic are false negatives (when a sensor node does not reflect back enough light to be detected); they need to be handled by the localization algorithm. In this case, the localization algorithm is presented with two sets of nodes of different sizes that need to be matched: one coming from the Image Processing (which misses some nodes) and one coming from the network, with the connectivity information (here we assume a fully connected network, so that all sensor nodes report their connectivity information).

In this experiment we investigate how Image Processing errors (false negatives) influence the localization accuracy. For this evaluation, we ran our localization algorithm with empirical data, but dropped a percentage of nodes from the list of nodes detected by the Image Processing algorithm (i.e., we artificially introduced false negatives into the Image Processing output).

Figure 23. Percentage of nodes correctly localized
Figure 24. Impact of false negatives on the localization error

The effect of false negatives on localization accuracy is depicted in Figure 24. As seen in the figure, if the number of false negatives is 15%, the error in position estimation doubles when 4 colors are available. It is interesting to observe that the scenario where more colors are available (e.g., 12 colors) is affected more drastically than
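The error-injection procedure used in this experiment can be sketched as follows. This is a hypothetical helper, not the actual evaluation code: a fixed fraction of the nodes detected by the Image Processing algorithm is removed before the Node ID Matching runs, simulating false negatives.

```python
import random

def inject_false_negatives(detected, rate, seed=0):
    """Drop a fraction `rate` of the nodes reported by Image Processing,
    simulating CCRs that did not reflect enough light to be detected."""
    rng = random.Random(seed)  # fixed seed for reproducible experiments
    n_drop = int(round(rate * len(detected)))
    dropped = set(rng.sample(detected, n_drop))
    return [node for node in detected if node not in dropped]

# Example: with 26 detected nodes and a 15% false-negative rate,
# 4 nodes are removed from the Image Processing output.
visible = inject_false_negatives(list(range(26)), 0.15)
```

The remaining nodes, together with the full connectivity report from the network, are then handed to the Node ID Matching component, which must match two sets of different sizes.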
the scenario with fewer colors (e.g., 4 colors). The benefit of having more colors available is still maintained, at least for the range of colors we investigated (4 through 12).

5.4 Localization Time

In this section we look more closely at the duration of each of the four proposed relaxation techniques and of two combinations of them: color-connectivity and color-time. We assume that 50 unique color filters can be manufactured, that the sensor network is deployed from 2,400 ft (necessary for the time-constrained relaxation), and that the time required for reporting connectivity grows linearly with the network size, with an initial reporting period of 160 sec, as used in a real-world tracking application [1].

The localization duration results, as presented in Table 1, are depicted in Figure 25. As shown, for all practical purposes the time required by the space-constrained relaxation technique is 0 sec. The same applies to the color-constrained relaxation, for which the localization time is 0 sec (if the number of colors is sufficient). Considering our assumptions, the color-constrained relaxation works only for a network of size 50; the localization duration for all other network sizes (100, 150 and 200) is infinite (i.e., unique color assignments to sensor nodes cannot be made, since only 50 colors are unique) when only the color-constrained relaxation is used. The durations of both the connectivity-constrained and the time-constrained techniques increase linearly with the network size (for the time-constrained technique, the Central device deploys sensor nodes one by one, recording an image after the time a sensor node is expected to reach the ground).

Figure 25. Localization time for different label relaxation schemes
Figure 26. Apparent contrast in a clear atmosphere
Figure 27. Apparent contrast in a hazy atmosphere

It is interesting to notice in Figure 25 the improvement in the localization time obtained by simply combining the color-constrained and the connectivity-constrained techniques: the localization duration in this case is identical to that of the connectivity-constrained technique alone. The combination of the color-constrained and time-constrained relaxations is even more interesting: for a reasonable localization duration of 52 seconds, a perfect (i.e., 0 ft localization error) localization system can be built. In this scenario, the set of sensor nodes is split into batches, with each batch having a set of unique colors. It would be very interesting to consider other scenarios, where the strength of the space-constrained relaxation (0 sec for any sensor network size) is used to improve the other proposed relaxation techniques. We leave the investigation and rigorous classification of such combinations of techniques for future work.

5.5 System Range

In this section we evaluate the feasibility of the StarDust localization framework when considering the realities of light propagation through the atmosphere. The main factor that determines the range of our system is light scattering, which redirects the luminance of the source into the medium (in essence, equally affecting the luminosity of the target and of the background). Scattering limits the visibility range by reducing the apparent contrast between the target and its background (the contrast approaches zero as the distance increases). The apparent contrast $C_r$ is quantitatively expressed by the formula:

    C_r = (N_r^t - N_r^b) / N_r^b    (10)

where $N_r^t$ and $N_r^b$ are the apparent target radiance and the apparent background radiance at distance r from the light source, respectively. The apparent radiance $N_r^t$ of a target at a distance r from the light source,
is given by:

    N_r^t = N_a + (I \rho_t e^{-2\sigma r}) / (\pi r^2)    (11)

where I is the intensity of the light source, $\rho_t$ is the target reflectance, $\sigma$ is the spectral attenuation coefficient (≈ 0.12 km−1 for a clear atmosphere and ≈ 0.60 km−1 for a hazy atmosphere), and $N_a$ is the radiance of the atmospheric backscatter, which can be expressed as follows:

    N_a = (G \sigma^2 I / (2\pi)) \int_{0.02\sigma r}^{2\sigma r} (e^{-x} / x^2) \, dx    (12)

where G = 0.24 is a backscatter gain. The apparent background radiance $N_r^b$ is given by formulas similar to Equations 11 and 12, with the target reflectance $\rho_t$ replaced by the background reflectance $\rho_b$. It is important to remark that when $C_r$ reaches its lower limit, no increase in the source luminance or receiver sensitivity will increase the range of the system. From Equations 11 and 12 it can be observed that the parameter which can be controlled, and which influences the range of the system, is $\rho_t$, the target reflectance.

Figures 26 and 27 depict the apparent contrast $C_r$ as a function of the distance r, for a clear and for a hazy atmosphere, respectively. The apparent contrast is investigated for reflectance coefficients $\rho_t$ ranging from 0.3 to 1.0 (a perfect reflector). For a contrast C of at least 0.5, as can be seen in Figure 26, a range of approximately 4,500 ft can be achieved if the atmosphere is clear. The performance deteriorates dramatically when the atmospheric conditions are problematic: as shown in Figure 27, a range of only up to 1,500 ft is achievable, even when using highly reflective CCR components. While our light source (3 million candlepower) was sufficient for a range of a few hundred feet, we remark that there exist commercially available light sources (20 million candlepower) or military ones (150 million candlepower [27]) powerful enough for ranges of a few thousand feet.

6 StarDust System Optimizations

In this section we describe extensions of the proposed architecture that can
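Equations 10-12 can be evaluated numerically. The sketch below is illustrative, not the code used to produce Figures 26 and 27: it approximates the backscatter integral with a simple trapezoidal rule and assumes I is in arbitrary units, σ in km−1 and r in km; the background reflectance ρb is a hypothetical input.

```python
import math

G = 0.24  # backscatter gain (Equation 12)

def backscatter(intensity, sigma, r, steps=20000):
    """Atmospheric backscatter radiance N_a (Equation 12), trapezoidal rule."""
    lo, hi = 0.02 * sigma * r, 2.0 * sigma * r
    h = (hi - lo) / steps
    f = lambda x: math.exp(-x) / (x * x)
    area = (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, steps))) * h
    return G * sigma ** 2 * intensity / (2.0 * math.pi) * area

def apparent_radiance(intensity, rho, sigma, r):
    """Apparent radiance of a surface with reflectance rho (Equation 11)."""
    return backscatter(intensity, sigma, r) + \
        intensity * rho * math.exp(-2.0 * sigma * r) / (math.pi * r * r)

def apparent_contrast(intensity, rho_t, rho_b, sigma, r):
    """Apparent contrast C_r between target and background (Equation 10)."""
    nt = apparent_radiance(intensity, rho_t, sigma, r)
    nb = apparent_radiance(intensity, rho_b, sigma, r)
    return (nt - nb) / nb
```

With σ = 0.12 km−1 (clear atmosphere) the contrast for a good reflector decays with distance, matching the qualitative shape of Figure 26; with σ = 0.60 km−1 the decay is much faster, as in Figure 27.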
constitute future research directions.

6.1 Chained Constraint Primitives

In this paper we proposed four primitives for constraint-based relaxation algorithms: color, connectivity, time and space. To demonstrate the power that can be obtained by combining them, we proposed and evaluated one combination of such primitives: color and connectivity. An interesting research direction to pursue would be to chain more than two of these primitives (an example of such a chain is: color, temporal, spatial and connectivity). Other research directions could be to use a voting scheme for deciding which primitive to use, or to assign different weights to the different relaxation algorithms.

6.2 Location Learning

If, after several iterations of the algorithm, none of the label probabilities for a node ni converges to a higher value, the confidence in our labeling of that node is relatively low. It would be interesting to associate more than one label (implicitly, more than one location) with a node, and to defer the label assignment decision until events are detected in the network (if the network was deployed for target tracking).

6.3 Localization in Rugged Environments

The initial driving force for the StarDust localization framework was to address sensor node localization in extremely rugged environments. Canopies, dense vegetation and other highly obstructing environments pose significant challenges for sensor node localization. The hope, and our original idea, was to consider the time period between the aerial deployment and the moment when the sensor node disappears under the canopy: by recording the last visible position of a sensor node (as seen from the aircraft), a reasonable estimate of the sensor node location can be obtained. This would require that sensor nodes possess self-righting capabilities while in mid-air. Nevertheless, we remark on the suitability of our localization framework for rugged, non-line-of-sight environments.

7 Conclusions

StarDust solves the
localization problem for aerial deployments where passiveness, low cost, small form factor and rapid localization are required. Results show that the localization accuracy can be within 2 ft and the localization time within milliseconds. StarDust also shows robustness with respect to errors. We predict the influence that atmospheric conditions can have on the range of a system based on the StarDust framework, and show that hazy environments or daylight can pose significant challenges. Most importantly, the properties of StarDust support the potential for even more accurate localization solutions, as well as solutions for rugged, non-line-of-sight environments.

8 References

[1] T. He, S. Krishnamurthy, J. A. Stankovic, T. Abdelzaher, L. Luo, R. Stoleru, T. Yan, L. Gu, J. Hui, and B. Krogh, An energy-efficient surveillance system using wireless sensor networks, in MobiSys, 2004.
[2] G. Simon, M. Maroti, A. Ledeczi, G. Balogh, B. Kusy, A. Nadas, G. Pap, J. Sallai, and K. Frampton, Sensor network-based countersniper system, in SenSys, 2004.
[3] A. Arora, P. Dutta, and B. Bapat, A line in the sand: A wireless sensor network for target detection, classification and tracking, in Computer Networks, 2004.
[4] R. Szewczyk, A. Mainwaring, J. Polastre, J. Anderson, and D. Culler, An analysis of a large scale habitat monitoring application, in ACM SenSys, 2004.
[5] N. Xu, S. Rangwala, K. K. Chintalapudi, D. Ganesan, A. Broad, R. Govindan, and D. Estrin, A wireless sensor network for structural monitoring, in ACM SenSys, 2004.
[6] A. Savvides, C. Han, and M. Srivastava, Dynamic fine-grained localization in ad-hoc networks of sensors, in Mobicom, 2001.
[7] N. Priyantha, A. Chakraborty, and H. Balakrishnan, The cricket location-support system, in Mobicom, 2000.
[8] M. Broxton, J. Lifton, and J. Paradiso, Localizing a sensor network via collaborative processing of global stimuli, in EWSN, 2005.
[9] P. Bahl and V. N.
Padmanabhan, RADAR: An in-building RF-based user location and tracking system, in IEEE Infocom, 2000.
[10] N. Priyantha, H. Balakrishnan, E. Demaine, and S. Teller, Mobile-assisted topology generation for auto-localization in sensor networks, in IEEE Infocom, 2005.
[11] P. N. Pathirana, A. Savkin, S. Jha, and N. Bulusu, Node localization using mobile robots in delay-tolerant sensor networks, IEEE Transactions on Mobile Computing, 2004.
[12] C. Savarese, J. M. Rabaey, and J. Beutel, Locationing in distributed ad-hoc wireless sensor networks, in ICASSP, 2001.
[13] M. Maroti, B. Kusy, G. Balogh, P. Volgyesi, A. Nadas, K. Molnar, S. Dora, and A. Ledeczi, Radio interferometric geolocation, in ACM SenSys, 2005.
[14] K. Whitehouse, A. Woo, C. Karlof, F. Jiang, and D. Culler, The effects of ranging noise on multi-hop localization: An empirical study, in IPSN, 2005.
[15] Y. Kwon, K. Mechitov, S. Sundresh, W. Kim, and G. Agha, Resilient localization for sensor networks in outdoor environment, UIUC, Tech. Rep., 2004.
[16] R. Stoleru and J. A. Stankovic, Probability grid: A location estimation scheme for wireless sensor networks, in SECON, 2004.
[17] N. Bulusu, J. Heidemann, and D. Estrin, GPS-less low cost outdoor localization for very small devices, IEEE Personal Communications Magazine, 2000.
[18] T. He, C. Huang, B. Blum, J. A. Stankovic, and T. Abdelzaher, Range-free localization schemes in large scale sensor networks, in ACM Mobicom, 2003.
[19] R. Nagpal, H. Shrobe, and J. Bachrach, Organizing a global coordinate system from local information on an ad-hoc sensor network, in IPSN, 2003.
[20] D. Niculescu and B. Nath, Ad-hoc positioning system, in IEEE GLOBECOM, 2001.
[21] R. Stoleru, T. He, J. A. Stankovic, and D. Luebke, A high-accuracy low-cost localization system for wireless sensor networks, in ACM SenSys, 2005.
[22] K. Römer, The lighthouse location system for smart dust, in ACM/USENIX MobiSys, 2003.
[23] R. Y.
Tsai, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, IEEE JRA, 1987.
[24] C. L. Archer and M. Z. Jacobson, Spatial and temporal distributions of U.S. winds and wind power at 80m derived from measurements, Journal of Geophysical Research, 2003.
[25] Team for advanced flow simulation and modeling. [Online]. Available: http://www.mems.rice.edu/TAFSM/RES/
[26] K. Stein, R. Benney, T. Tezduyar, V. Kalro, and J. Leonard, 3-D computation of parachute fluid-structure interactions - performance and control, in Aerodynamic Decelerator Systems Conference, 1999.
[27] Headquarters Department of the Army, Technical manual for searchlight infrared AN/GSS-14(V)1, 1982.

StarDust: A Flexible Architecture for Passive Localization in Wireless Sensor Networks *

Abstract

The problem of localization in wireless sensor networks where nodes do not use ranging hardware remains a challenging one, when considering the required location accuracy, energy expenditure and the duration of the localization phase. In this paper we propose a framework, called StarDust, for wireless sensor network localization based on passive optical
components. In the StarDust framework, sensor nodes are equipped with optical retro-reflectors. An aerial device projects light towards the deployed sensor network, and records an image of the reflected light. An image processing algorithm is developed for obtaining the locations of sensor nodes. For matching a node ID to a location we propose a constraint-based label relaxation algorithm. We propose and develop localization techniques based on four types of constraints: node color, neighbor information, deployment time for a node and deployment location for a node. We evaluate the performance of a localization system based on our framework by localizing a network of 26 sensor nodes deployed in a 120 × 60 ft² area. The localization accuracy ranges from 2 ft to 5 ft, while the localization time ranges from 10 milliseconds to 2 minutes.

1 Introduction

Wireless Sensor Networks (WSN) have been envisioned to revolutionize the way humans perceive and interact with the surrounding environment. One vision is to embed tiny sensor devices in outdoor environments, by aerial deployments from unmanned air vehicles. The sensor nodes form a network and collaborate (to compensate for the extremely scarce resources available to each of them: computational power, memory size, communication capabilities) to accomplish the mission. Through collaboration, redundancy and fault tolerance, the WSN is then able to achieve unprecedented sensing capabilities. A major step forward has been accomplished by developing systems for several domains: military surveillance [1] [2] [3], habitat monitoring [4] and structural monitoring [5]. Even after these successes, several research problems remain open. Among these open problems is sensor node localization, i.e., how to find the physical position of each sensor node. Despite the attention the localization problem in WSN has received, no universally acceptable solution has been developed. There are several reasons for this. On one
hand, localization schemes that use ranging are typically high end solutions. GPS ranging hardware consumes energy, it is relatively expensive (if high accuracy is required) and poses form factor challenges that move us away from the vision of dust size sensor nodes. Ultrasound has a short range and is highly directional. Solutions that use the radio transceiver for ranging either have not produced encouraging results (if the received signal strength indicator is used) or are sensitive to the environment (e.g., multipath). On the other hand, localization schemes that only use the connectivity information for inferring location information are characterized by low accuracies: about 10 ft in controlled environments, 40-50 ft in realistic ones. To address these challenges, we propose a framework for WSN localization, called StarDust, in which the complexity associated with the node localization is completely removed from the sensor node. The basic principle of the framework is localization through passivity: each sensor node is equipped with a corner-cube retro-reflector and possibly an optical filter (a coloring device). An aerial vehicle projects light onto the deployment area and records images containing retro-reflected light beams (they appear as luminous spots). Through image processing techniques, the locations of the retro-reflectors (i.e., sensor nodes) are determined. For inferring the identity of the sensor node present at a particular location, the StarDust framework develops a constraint-based node ID relaxation algorithm. The main contributions of our work are the following. We propose a novel framework for node localization in WSNs that is very promising and allows for many future extensions and more accurate results. We propose a constraint-based label relaxation algorithm for mapping node IDs to locations, and four constraints (color, connectivity, time and space), which are building blocks for very accurate and very fast localization
systems. We develop a sensor node hardware prototype, called a SensorBall. We evaluate the performance of a localization system for which we obtain location accuracies of 2-5 ft with a localization duration ranging from 10 milliseconds to 2 minutes. We investigate the range of a system built on our framework by considering the realities of physical phenomena that occur during light propagation through the atmosphere. The rest of the paper is structured as follows. Section 2 is an overview of the state of the art. The design of the StarDust framework is presented in Section 3. One implementation and its performance evaluation are in Sections 4 and 5, followed by a suite of system optimization techniques in Section 6. In Section 7 we present our conclusions.

2 Related Work

We present the prior work in localization in two major categories: the range-based and the range-free schemes. The range-based localization techniques have been designed to use either more expensive hardware (and hence higher accuracy) or just the radio transceiver. Ranging techniques dependent on hardware are the time-of-flight (ToF) and the time-difference-of-arrival (TDoA). Solutions that use the radio are based on the received signal strength indicator (RSSI) and, more recently, on radio interferometry. The ToF localization technique that is most widely used is the GPS. GPS is a costly solution for a high accuracy localization of a large scale sensor network. AHLoS [6] employs a TDoA ranging technique that requires extensive hardware and solves relatively large nonlinear systems of equations. The Cricket location-support system (TDoA) [7] can achieve a location granularity of tens of inches with highly directional and short range ultrasound transceivers. In [2] the location of a sniper is determined in an urban terrain, by using the TDoA between an acoustic wave and a radio beacon. The PushPin project [8] uses the TDoA between ultrasound pulses and light flashes for node
localization. The RADAR system [9] uses the RSSI to build a map of signal strengths as emitted by a set of beacon nodes. A mobile node is located by the best match, in the signal strength space, with a previously acquired signature. In MAL [10], a mobile node assists in measuring the distances (acting as constraints) between nodes until a rigid graph is generated. The localization problem is formulated as an on-line state estimation in a nonlinear dynamic system in [11]. A cooperative ranging that attempts to achieve a global positioning from distributed local optimizations is proposed in [12]. A very recent, remarkable localization technique is based on radio interferometry, RIPS [13], which utilizes two transmitters to create an interfering signal. The frequencies of the emitters are very close to each other, thus the interfering signal will have a low frequency envelope that can be easily measured. The ranging technique performs very well. The long time required for localization and multi-path environments pose significant challenges. Real environments create additional challenges for the range based localization schemes. These have been emphasized by several studies [14] [15] [16]. To address these challenges, and others (hardware cost, the energy expenditure, the form factor, the small range, localization time), several range-free localization schemes have been proposed. Sensor nodes use primarily connectivity information for inferring proximity to a set of anchors. In the Centroid localization scheme [17], a sensor node localizes to the centroid of its proximate beacon nodes. In APIT [18] each node decides its position based on the possibility of being inside or outside of a triangle formed by any three beacons within the node's communication range. The Gradient algorithm [19] leverages the knowledge about the network density to infer the average one hop length. This, in turn, can be transformed into distances to nodes with known
locations. DV-Hop [20] uses the hop by hop propagation capability of the network to forward distances to landmarks. More recently, several localization schemes that exploit the sensing capabilities of sensor nodes have been proposed. Spotlight [21] creates well controlled (in time and space) events in the network while the sensor nodes detect and timestamp these events. From the spatiotemporal knowledge for the created events and the temporal information provided by sensor nodes, the nodes' spatial information can be obtained. In a similar manner, the Lighthouse system [22] uses a parallel light beam that is emitted by an anchor which rotates with a certain period. A sensor node detects the light beam for a period of time, which is dependent on the distance between it and the light emitting device. Many of the above localization solutions target specific sets of requirements and are useful for specific applications. StarDust differs in that it addresses a particularly demanding set of requirements that are not yet solved well. StarDust is meant for localizing air dropped nodes where node passiveness, high accuracy, low cost, small form factor and rapid localization are all required. Many military applications have such requirements.

3 StarDust System Design

The design of the StarDust system (and its name) was inspired by the similarity between a deployed sensor network, in which sensor nodes indicate their presence by emitting light, and the Universe consisting of luminous and illuminated objects: stars, galaxies, planets, etc.
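The simplest of the range-free schemes surveyed in Section 2, the Centroid scheme [17], reduces to averaging the positions of the beacons a node can hear. A minimal sketch of that computation (the beacon coordinates below are hypothetical, purely for illustration, and not taken from any cited system):

```python
def centroid(beacons):
    """Centroid localization: a node estimates its own position as the
    mean of the coordinates of the beacon (anchor) nodes in radio range.

    beacons: list of (x, y) positions of in-range anchors."""
    n = len(beacons)
    x = sum(b[0] for b in beacons) / n
    y = sum(b[1] for b in beacons) / n
    return (x, y)

# Hypothetical anchors heard by one node:
print(centroid([(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]))  # (5.0, 5.0)
```

The estimate degrades as anchors become sparse or asymmetric around the node, which is why such schemes reach only tens of feet of accuracy in realistic deployments.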
The main difficulty when applying the above ideas to the real world is the complexity of the hardware that needs to be put on a sensor node so that the emitted light can be detected from thousands of feet. The energy expenditure for producing an intense enough light beam is also prohibitive. Instead, what we propose to use for sensor node localization is a passive optical element called a retro-reflector. The most common retro-reflective optical component is a Corner-Cube Retroreflector (CCR), shown in Figure 1(a). It consists of three mutually perpendicular mirrors.

Figure 1. Corner-Cube Retroreflector (a) and an array of CCRs molded in plastic (b)

The interesting property of this optical component is that an incoming beam of light is reflected back, towards the source of the light, irrespective of the angle of incidence. This is in contrast with a mirror, which needs to be precisely positioned to be perpendicular to the incident light. A very common and inexpensive implementation of an array of CCRs is the retroreflective plastic material used on cars and bicycles for night time detection, shown in Figure 1(b). In the StarDust system, each node is equipped with a small (e.g.
0.5 in²) array of CCRs, and the enclosure has self-righting capabilities that orient the array of CCRs predominantly upwards. It is critical to understand that the upward orientation does not need to be exact. Even when large angular variations from a perfectly upward orientation are present, a CCR will return the light in the exact same direction from which it came. In the remaining part of the section, we present the architecture of the StarDust system and the design of its main components.

3.1 System Architecture

The envisioned sensor network localization scenario is as follows:

• The sensor nodes are released, possibly in a controlled manner, from an aerial vehicle during the night.
• The aerial vehicle hovers over the deployment area and uses a strobe light to illuminate it. The sensor nodes, equipped with CCRs and optical filters (acting as coloring devices), have self-righting capabilities and retro-reflect the incoming strobe light. The retro-reflected light is either "white", as the originating source light, or colored, due to the optical filters.
• The aerial vehicle records a sequence of two images very close in time (msec level). One image is taken when the strobe light is on, the other when the strobe light is off. The acquired images are used for obtaining the locations of sensor nodes (which appear as luminous spots in the image).
• The aerial vehicle executes the mapping of node IDs to the identified locations in one of the following ways: a) by using the color of a retro-reflected light, if a sensor node has a unique color; b) by requiring sensor nodes to establish neighborhood information and report it to a base station; c) by controlling the time sequence of sensor nodes deployment and recording additional images; d) by controlling the location where a sensor node is deployed.
• The computed locations are disseminated to the sensor network.

Figure 2. The StarDust system architecture

The architecture of the
StarDust system is shown in Figure 2. The architecture consists of two main components: the first is centralized and it is located on a more powerful device; the second is distributed and it resides on all sensor nodes. The Central Device consists of the following: the Light Emitter, the Image Processing module, the Node ID Mapping module and the Radio Model. The distributed component of the architecture is the Transfer Function, which acts as a filter for the incoming light. The aforementioned modules are briefly described below:

• Light Emitter - It is a strobe light, capable of producing very intense, collimated light pulses. The emitted light is non-monochromatic (unlike a laser) and it is characterized by a spectral density Ψ(λ), a function of the wavelength. The emitted light is incident on the CCRs present on sensor nodes.
• Transfer Function Φ(Ψ(λ)) - This is a bandpass filter for the light incident on the CCR. The filter allows a portion of the original spectrum to be retro-reflected. From here on, we will refer to the transfer function as the color of a sensor node.
• Image Processing - The Image Processing module acquires high resolution images. From these images the locations and the colors of sensor nodes are obtained. If only one set of pictures can be taken (i.e., one location of the light emitter/image analysis device), then the map of the field is assumed to be known, as well as the distance between the imaging device and the field. The aforementioned assumptions (field map and distance to it) are not necessary if the images can be simultaneously taken from different locations. It is important to remark here that the identity of a node cannot be directly obtained through Image Processing alone, unless a specific characteristic of a sensor node can be identified in the image.
• Node ID Matching - This module uses the detected locations and, through additional techniques (e.g., sensor
node coloring and connectivity information (G(Λ, E)) from the deployed network), uniquely identifies the sensor nodes observed in the image. The connectivity information is represented by neighbor tables sent from each sensor node to the Central Device.
• Radio Model - This component provides an estimate of the radio range to the Node ID Matching module. It is only used by node ID matching techniques that are based on the radio connectivity in the network. The estimate of the radio range R is based on the sensor node density (obtained through the Image Processing module) and the connectivity information (i.e., G(Λ, E)).

The two main components of the StarDust architecture are the Image Processing and the Node ID Mapping. Their design and analysis is presented in the sections that follow.

3.2 Image Processing

The goal of the Image Processing Algorithm (IPA) is to identify the locations of the nodes and their colors. Note that IPA does not identify which node fell where, but only what is the set of locations where the nodes fell. IPA is executed after an aerial vehicle records two pictures: one in which the field of deployment is illuminated and one in which no illumination is present. Let Pdark be the picture of the deployment area, taken when no light was emitted, and Plight be the picture of the same deployment area when a strong light beam was directed towards the sensor nodes. The proposed IPA has several steps, as shown in Algorithm 1. The first step is to obtain a third picture Pfilter in which only the differences between Pdark and Plight remain. Let us assume that Pdark has a resolution of n × m, where n is the number of pixels in a row of the picture, while m is the number of pixels in a column of the picture. Then Pdark is composed of n × m pixels noted Pdark(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ m. Similarly, Plight is composed of n × m pixels noted Plight(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ m.
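The first IPA step, obtaining Pfilter as a per-channel difference of Plight and Pdark, can be sketched as follows. This is a minimal illustration assuming 8-bit RGB frames stored as numpy arrays; the grayscale-and-threshold helper stands in for the intensity filtering IPA applies next, and the fixed threshold of 200 is a hypothetical value, not one taken from the paper:

```python
import numpy as np

def difference_image(p_dark, p_light):
    # Per-channel difference: features present in both frames cancel,
    # leaving only the retro-reflected light (the Pfilter picture).
    diff = p_light.astype(np.int16) - p_dark.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

def bright_mask(p_filter, threshold=200):
    # Grayscale conversion followed by an intensity filter; reflecting
    # nodes should appear much brighter than other illuminated objects.
    gray = p_filter.mean(axis=2)
    return gray >= threshold

# Toy 2x2 frames: one pixel lights up between the two exposures.
dark = np.zeros((2, 2, 3), dtype=np.uint8)
light = dark.copy()
light[0, 0] = (250, 250, 250)   # a retro-reflecting node
mask = bright_mask(difference_image(dark, light))
print(mask[0, 0], mask[1, 1])   # True False
```

The signed intermediate type (int16) avoids unsigned-integer wraparound when a pixel is darker in Plight than in Pdark.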
Each pixel P is described by an RGB value, where the R value is denoted by PR, the G value is denoted by PG, and the B value is denoted by PB. IPA then generates the third picture, Pfilter, through the following transformations, applied to each pixel (i, j):

Pfilter(i, j)R = Plight(i, j)R − Pdark(i, j)R
Pfilter(i, j)G = Plight(i, j)G − Pdark(i, j)G    (1)
Pfilter(i, j)B = Plight(i, j)B − Pdark(i, j)B

After this transformation, all the features that appeared in both Pdark and Plight are removed from Pfilter. This simplifies the recognition of light retro-reflected by sensor nodes. The second step consists of identifying the elements contained in Pfilter that retro-reflect light. For this, an intensity filter is applied to Pfilter. First IPA converts Pfilter into a grayscale picture. Then the brightest pixels are identified and used to create Preflect. This step is eased by the fact that the reflecting nodes should appear much brighter than any other illuminated object in the picture.

Figure 3. Probabilistic label relaxation

The third step runs an edge detection algorithm on Preflect to identify the boundaries of the nodes present. A tool such as Matlab provides a number of edge detection techniques; we used the bwboundaries function. For the obtained edges, the location (x, y) (in the image) of each node is determined by computing the centroid of the points constituting its edges. Standard computer graphics techniques [23] are then used to transform the 2D locations of sensor nodes detected in multiple images into 3D sensor node locations. The color of the node is obtained as the color of the pixel located at (x, y) in Plight.

3.3 Node ID Matching

The goal of the Node ID Matching module is to obtain the identity (node ID) of a luminous spot in the image, detected to be a sensor node. For this, we define V = {(x1, y1), (x2, y2), ..., (xm, ym)} to be the set of locations of the sensor nodes, as detected by the Image Processing module, and Λ = {λ1, λ2, ..., λm} to be the set of unique node IDs assigned to the m sensor nodes before deployment. From here on, we refer to node IDs as labels. We model the problem of finding the
label λj of a node ni as a probabilistic label relaxation problem, frequently used in image processing/understanding. In the image processing domain, scene labeling (i.e., identifying objects in an image) plays a major role. The goal of scene labeling is to assign a label to each object detected in an image, such that an appropriate image interpretation is achieved. It is prohibitively expensive to consider the interactions among all the objects in an image. Instead, constraints placed among nearby objects generate local consistencies and, through iteration, global consistencies can be obtained. The main idea of the sensor node localization through probabilistic label relaxation is to iteratively compute the probability of each label being the correct label for a sensor node, by taking into account, at each iteration, the "support" for a label. The support for a label can be understood as a hint or proof that a particular label is more likely to be the correct one, when compared with the other potential labels for a sensor node. We pictorially depict this main idea in Figure 3. As shown, node ni has a set of candidate labels {λ1, ..., λk}. Each of the labels has a different value for the Support function Q(λk). We defer the explanation of how the Support function is implemented until the subsections that follow, where we provide four concrete techniques. Formally, the algorithm is outlined in Algorithm 2, where the equations necessary for computing the new probability Pni(λk) for a label λk of a node ni are expressed by the following equations:

Pni^(s+1)(λk) = (1/Kni) · Pni^s(λk) · Qni^s(λk)    (2)

Kni = Σ (j=1..m) Pni^s(λj) · Qni^s(λj)    (3)

and Qni^s(λk) is the support for label λk of node ni at iteration s (Equation 4), whose concrete form depends on the constraint type and is given in the subsections that follow.

Algorithm 2 Label Relaxation
1: for each sensor node ni do
2:   assign equal prob. to all possible labels
3: end for
4: repeat
5:   converged ← true
6:   for each sensor node ni do
7:     for each label λj of ni do
8:       compute the Support for label λj: Equation 4
9:     end for
10:    compute K for the node ni: Equation 3
11:    for each label λj do
12:      update probability of label λj: Equation 2
13:      if |new prob. − old prob.| > ε then
14:        converged ← false
15:      end if
16:    end for
17:  end for
18: until converged = true

The label relaxation algorithm is iterative and it is polynomial in the size of the network (number of nodes). The pseudo-code is shown in Algorithm 2. It initializes the probabilities associated with each possible label for a node ni through a uniform distribution. At each iteration s, the algorithm updates the probability associated with each label, by considering the Support Qni^s(λk) for each candidate label of a sensor node. In the sections that follow, we describe four different techniques for implementing the Support function: based on node coloring, radio connectivity, the time of deployment (time) and the location of deployment (space). While some of these techniques are simplistic, they are primitives which, when combined, can create powerful localization systems. These design techniques have different trade-offs, which we will present in Section 3.3.6.

3.3.1 Relaxation with Color Constraints

The unique mapping between a sensor node's position (identified by the image processing) and a label can be obtained by assigning a unique color to each sensor node. For this we define C = {c1, c2, ..., cn} to be the set of unique colors available and M : Λ → C to be a one-to-one mapping of labels to colors. This mapping is known prior to the sensor node deployment (from node manufacturing). In the case of color constrained label relaxation, the support for label λk is expressed as follows:

Qni^s(λk) = 1    (5)

where λk is the single label whose color M(λk) matches the color of node ni observed in the image. As a result, the label relaxation algorithm (Algorithm 2) consists of the following steps: one label is assigned to each sensor node (lines 1-3 of the algorithm), implicitly having a probability Pni(λk) = 1; the algorithm executes a single iteration, when the support function simply reiterates the confidence in the unique labeling. However, it is often the case that unique colors for each node will not be available. It is interesting to discuss here the
influence that the size of the coloring space (i.e., |C|) has on the accuracy of the localization algorithm. Several cases are discussed below:

• If |C| = 0, no colors are used and the sensor nodes are equipped with simple CCRs that reflect back all the incoming light (i.e., no filtering, and no coloring of the incoming light). From the image processing system, the positions of sensor nodes can still be obtained. Since all nodes appear white, no single sensor node can be uniquely identified.
• If |C| = m − 1, then there are enough unique colors for all nodes (one node remains white, i.e., no coloring) and the problem is trivially solved. Each node can be identified based on its unique color. This is the scenario for the relaxation with color constraints.
• If |C| ≥ 1, there are several options for how to partition the coloring space. If C = {c1}, one possibility is to assign the color c1 to a single node and leave the remaining m − 1 sensor nodes white, or to assign the color c1 to more than one sensor node.

One can observe that once a color is assigned uniquely to a sensor node, in effect, that sensor node is given the status of "anchor", or node with known location. It is interesting to observe that there is an entire spectrum of possibilities for how to partition the set of sensor nodes in equivalence classes (where an equivalence class is represented by one color), in order to maximize the success of the localization algorithm. One of the goals of this paper is to understand how the size of the coloring space and its partitioning affect localization accuracy. Despite the simplicity of this method of constraining the set of labels that can be assigned to a node, we will show that this technique is very powerful when combined with other relaxation techniques.

3.3.2 Relaxation with Connectivity Constraints

Connectivity information, obtained from the sensor network through beaconing, can provide additional information
for locating sensor nodes. In order to gather connectivity information, the following need to occur: 1) after deployment, through beaconing of HELLO messages, sensor nodes build their neighborhood tables; 2) each node sends its neighbor table information to the Central device via a base station. First, let us define G = (Λ, E) to be the weighted connectivity graph built by the Central device from the received neighbor table information. In G, the edge (λi, λj) has a weight gij represented by the number of beacons sent by λj and received by λi. In addition, let R be the radio range of the sensor nodes.

Figure 4. Label relaxation with connectivity constraints

The main idea of the connectivity constrained label relaxation is depicted in Figure 4, in which two nodes ni and nj have been assigned all possible labels. The confidence in each of the candidate labels for a sensor node is represented by a probability, shown in a dotted rectangle. It is important to remark that through beaconing and the reporting of neighbor tables to the Central Device, a global view of all constraints in the network can be obtained. It is critical to observe that these constraints are among labels. As shown in Figure 4, two constraints exist between nodes ni and nj. The constraints are depicted by gi2,j2 and gi2,jM, the number of beacons sent by the labels λj2 and λjM and received by the label λi2. The support for the label λk of sensor node ni, resulting from the "interaction" (i.e., within radio range) with sensor node nj, is given by:

Qni^s(λk) = Σ (λm ∈ Λ) gkm · Pnj^s(λm)    (6)

where gkm is the number of beacons sent by label λm and received by label λk. As a result, the localization algorithm (Algorithm 3) consists of the following steps: all labels are assigned to each sensor node (lines 1-3 of the algorithm), and implicitly each label has a probability initialized to Pni(λk) = 1/|Λ|; in each iteration, the probabilities for the labels of a sensor node are updated, when considering the interaction with the labels of sensor nodes within R. It is important to remark that the identity of
the nodes within R is not known, only the candidate labels and their probabilities. The relaxation algorithm converges when, during an iteration, the probability of no label is updated by more than ε.
The label relaxation algorithm based on connectivity constraints enforces such constraints between pairs of sensor nodes. For a large scale sensor network deployment, it is not feasible to consider all pairs of sensor nodes in the network. Hence, the algorithm should only consider pairs of sensor nodes that are within a reasonable communication range (R). We assume a circular radio range and symmetric connectivity. In the remaining part of the section we propose a simple analytical model that estimates the radio range R for medium-connected networks (less than 20 neighbors per R). We consider the following to be known: the size of the deployment field (L), the number of sensor nodes deployed (N)
Algorithm 3 Localization
1: Estimate the radio range R
2: Execute the Label Relaxation Algorithm with Support Function given by Equation 6 for neighbors less than R apart
3: for each sensor node ni do
4:    node identity is Xk with maximum probability
5: end for
and the total number of unidirectional (i.e., not symmetric) one-hop radio connections in the network (k). For our analysis, we uniformly distribute the sensor nodes in a square area of length L, by using a grid of unit length L/√N. We use the substitution u = L/√N to simplify the notation, in order to distinguish the following cases: if u ≤ R ≤ √2u each node has four neighbors (the expected k = 4N); if √2u ≤ R ≤ 2u each node has eight neighbors (the expected k = 8N); and so on, up to the case in which each node has twenty neighbors (the expected k = 20N). For a given t = k/4N we take R to be the middle of the corresponding interval. As an example, if t = 5 then R = (3 + √5)u/2. A quadratic fitting for R over the possible values of t produces a closed-form solution (Equation 7) for the communication range R, as a function of
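The iterative update of Algorithm 3 can be sketched as follows. This is a hypothetical reading, since the paper's Equation 6 is not reproduced here: we assume the support Q for a candidate label is the beacon-count-weighted sum of a neighbor's label probabilities, followed by the usual relaxation-labeling normalization.

```python
# Hypothetical sketch of the connectivity-constrained relaxation (Algorithm 3).
# Assumption: the support Q for label k of node i is the beacon-count-weighted
# sum of a neighboring node's label probabilities (our reading of the support
# function; the exact Equation 6 is not reproduced in the text).
def relax(num_nodes, labels, g, neighbors, eps=0.001, max_iter=100):
    """g[(k, m)]: beacons sent by label m and received by label k."""
    # Uniform initialization: P_ni(X_k) = 1/|A|
    P = {i: {k: 1.0 / len(labels) for k in labels} for i in range(num_nodes)}
    for _ in range(max_iter):
        delta = 0.0
        for i in range(num_nodes):
            for j in neighbors[i]:  # only nodes within radio range R
                Q = {k: sum(g.get((k, m), 0) * P[j][m] for m in labels)
                     for k in labels}
                norm = sum(P[i][k] * Q[k] for k in labels)
                if norm == 0:
                    continue
                new = {k: P[i][k] * Q[k] / norm for k in labels}
                delta = max(delta, max(abs(new[k] - P[i][k]) for k in labels))
                P[i] = new
        if delta < eps:  # no label probability moved by more than eps
            break
    return {i: max(P[i], key=P[i].get) for i in range(num_nodes)}
```

On a toy two-node network in which label 0 receives far more beacons from label 1 than vice versa, the iteration drives node 0 toward label 0 and node 1 toward label 1.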
network connectivity k, assuming L and N constant. We investigate the accuracy of our model in Section 5.2.1.
3.3.3 Relaxation with Time Constraints
Time constraints can be treated similarly to color constraints. The unique identification of a sensor node can be obtained by deploying sensor nodes individually, one by one, and recording a sequence of images. The sensor node that is identified as new in the last picture (it was not identified in the picture before last) must be the last sensor node dropped.
In a similar manner to color constrained label relaxation, the time constrained approach is very simple, but may take too long, especially for large scale systems. While it can be used in practice, it is unlikely that a time constrained label relaxation would be used alone. As we will see, by combining constraint-based primitives, realistic localization systems can be implemented. The support function for the label relaxation with time constraints is defined identically to that of the color constrained relaxation.
The localization algorithm (Algorithm 2) consists of the following steps: one label is assigned to each sensor node (lines 1-3 of the algorithm), implicitly with a probability Pni(Xk) = 1; the algorithm executes a single iteration, in which the support function simply reiterates the confidence in the unique labeling.
Figure 5. Relaxation with space constraints
Figure 6. Probability distribution of distances
Figure 7. Distribution of nodes
3.3.4 Relaxation with Space Constraints
Spatial information related to sensor deployment can also be employed as another input to the label relaxation algorithm. To do that, we use two types of locations: the node location pn and the label location pl. The former, pn, is defined as the position of a node (xn, yn, zn) after deployment, which can be obtained through Image Processing as mentioned in Section 3.3. The latter, pl, is defined as the location (xl, yl, zl) where a node is dropped. We use Dniλm
to denote the horizontal distance between the location of the label λm and the location of the node ni. Clearly, Dniλm = √((xn − xl)² + (yn − yl)²).
At the time of a sensor node release, the one-to-one mapping between the node and its label is known. In other words, the label location is the same as the node location at the release time. After release, the label location information is partially lost due to random factors such as wind and surface impact. Statistically, however, the node locations remain correlated with the label locations. The correlation depends on the airdrop method employed and on the environment. For the sake of simplicity, let us assume nodes are dropped from a helicopter hovering in the air. Wind can be decomposed into three components, X, Y and Z. Only X and Y affect the horizontal distance a node travels. According to [24], we can assume that X and Y follow independent normal distributions. Therefore, the absolute value of the wind speed follows a Rayleigh distribution. Obviously, the higher the wind speed, the further a node lands horizontally from the label location. If we assume that the distance D is a function of the wind speed V [25] [26], we can obtain the probability distribution of D under a given wind speed distribution. Without loss of generality, we assume that D is proportional to the wind speed. Therefore, D follows a Rayleigh distribution as well.
As shown in Figure 5, the spatial-based relaxation is a recursive process that assigns the probability that a node has a certain label by using the distances between the location of the node and multiple label locations. We note that the distribution of the distance D affects the probability with which a label is assigned. It is not necessarily true that the nearest label is always chosen. For example, if D follows the Rayleigh(σ²) distribution, we can obtain the Probability Density Function (PDF) of distances as
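A minimal sketch of this spatial support, under the Rayleigh assumption stated above (the normalization over candidate labels is our illustrative choice, not a formula from the paper):

```python
import math

# Sketch of the space-constrained support: the landing distance D from a
# release (label) location is assumed Rayleigh(sigma)-distributed, so the
# support of a candidate label is the Rayleigh density of its distance.
def rayleigh_pdf(d, sigma):
    return (d / sigma**2) * math.exp(-(d * d) / (2 * sigma**2))

def spatial_support(node_xy, label_locations, sigma=1.0):
    """Return normalized label probabilities for one node (illustrative)."""
    s = {lbl: rayleigh_pdf(math.hypot(node_xy[0] - x, node_xy[1] - y), sigma)
         for lbl, (x, y) in label_locations.items()}
    total = sum(s.values())
    return {lbl: v / total for lbl, v in s.items()} if total else s
```

Because the Rayleigh density vanishes at zero distance, a label dropped exactly above the node is not necessarily the most likely one, matching the observation that the nearest label is not always chosen.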
shown in Figure 6. This figure indicates that the probability of a node falling vertically is very small under windy conditions (σ > 0), and that the distance D is affected by σ. The spatial distribution of nodes for σ = 1 is shown in Figure 7. Strong wind, with a high σ value, leads to a larger node dispersion. More formally, given a probability density function PDF(D), the support for label λk of sensor node ni can be formulated in terms of the PDF of the distance Dniλk.
It is interesting to point out two special cases. First, if all nodes are released at once (i.e., there is only one label location for all released nodes), the distance D from a node to all labels is the same. In this case, Ps+1ni(λk) = Psni(λk), so that we cannot use the spatial-based relaxation to recursively narrow down the potential labels for a node. Second, if nodes are released at different locations that are far away from each other, we have: (i) if node ni has label λk, Psni(λk) → 1 when s → ∞; (ii) if node ni does not have label λk, Psni(λk) → 0 when s → ∞. In this second scenario, there are multiple labels (one label per release), hence it is possible to correlate release times (labels) with positions on the ground. These results indicate that spatial-based relaxation can label a node with a very high probability if the physical separation among nodes is large.
3.3.5 Relaxation with Color and Connectivity Constraints
One of the most interesting features of the StarDust architecture is that it allows hybrid localization solutions to be built, depending on the system requirements. One example is a localization system that uses the color and connectivity constraints. In this scheme, the color constraints are used to reduce the number of candidate labels for sensor nodes to a more manageable value. As a reminder, in the connectivity constrained relaxation, all labels are candidate labels for each sensor node. The color constraints are used in the initialization phase
of Algorithm 3 (lines 1-3). After the initialization, the standard connectivity constrained relaxation algorithm is used.
For a better understanding of how the label relaxation algorithm works, we give a concrete example, depicted in Figure 8.
Figure 8. A step through the algorithm. After initialization (a) and after the 1st iteration for node ni (b)
In part (a) of the figure we depict the data structures associated with nodes ni and nj after the initialization steps of the algorithm (lines 1-6), as well as the number of beacons between different labels (as reported by the network, through G(Λ, E)). As seen, the potential labels (shown inside the vertical rectangles) are assigned to each node. Node ni can be any of the following: 11, 8, 4, 1. Also depicted in the figure are the probabilities associated with each of the labels. After initialization, all probabilities are equal.
Part (b) of Figure 8 shows the result of the first iteration of the localization algorithm for node ni, assuming that node nj is the first one chosen in line 7 of Algorithm 3. By using Equation 6, the algorithm computes the "support" Q(λi) for each of the possible labels of node ni. Once the Q(λi)'s are computed, the normalizing constant, given by Equation 3, can be obtained. The last step of the iteration is to update the probabilities associated with all potential labels of node ni, as given by Equation 2.
One interesting problem, which we explore in the performance evaluation section, is to assess the impact the partitioning of the color set C has on the accuracy of localization. When the size of the coloring set is smaller than the number of sensor nodes (as is the case for our hybrid connectivity/color constrained relaxation), the system designer has the option of allowing one node to uniquely have a color (acting as an anchor), or multiple nodes. Intuitively, by assigning one color to more than one node, more constraints (distributed) can
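The step through the algorithm can be reproduced numerically. The candidate labels for ni (11, 8, 4, 1) come from the text; the neighbor's candidate labels and the beacon counts g are hypothetical numbers chosen for illustration:

```python
# One relaxation step in the spirit of Figure 8, with hypothetical numbers.
labels_i = [11, 8, 4, 1]
P_i = {k: 0.25 for k in labels_i}            # uniform after initialization
P_j = {12: 0.25, 7: 0.25, 5: 0.25, 2: 0.25}  # neighbor nj's candidates (assumed)
g = {(11, 12): 8, (11, 7): 2, (8, 5): 1}     # beacons between labels (assumed)

# Support for each candidate label of ni (beacon-weighted, as in Sec. 3.3.2).
Q = {k: sum(g.get((k, m), 0) * p for m, p in P_j.items()) for k in labels_i}
norm = sum(P_i[k] * Q[k] for k in labels_i)  # normalizing constant
P_i = {k: P_i[k] * Q[k] / norm for k in labels_i}
```

After this single step the probability mass of ni concentrates on label 11 (≈ 0.91), showing how the beacon evidence between labels reshapes the candidate probabilities.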
be enforced.
3.3.6 Relaxation Techniques Analysis
The proposed label relaxation techniques have different trade-offs. For our analysis of the trade-offs, we consider the following metrics of interest: the localization time (duration), the energy consumed (overhead), the network size (scale) that can be handled by the technique, and the localization accuracy. The parameters of interest are the following: the number of sensor nodes (N), the energy spent for one aerial drop (εd), the energy spent in the network for collecting and reporting neighbor information (εb), and the time Td taken by a sensor node to reach the ground after being aerially deployed. The cost comparison of the different label relaxation techniques is shown in Table 1.
As shown, the relaxation techniques based on color and space constraints have the lowest localization duration, zero, for all practical purposes. The scalability of the color based relaxation technique is, however, limited to the number of unique color filters that can be built.
Figure 9. SensorBall with self-righting capabilities (a) and colored CCRs (b)
The narrower the Transfer Function Ψ(λ), the larger the number of unique colors that can be created. The manufacturing costs, however, increase as well. The scalability issue is addressed by all other label relaxation techniques. Most notably, the time constrained relaxation, which is very similar to the color constrained relaxation, addresses the scale issue, at a higher deployment cost.
Table 1. Comparison of label relaxation techniques
4 System Implementation
The StarDust localization framework, depicted in Figure 2, is flexible in that it enables the development of new localization systems based on the four proposed label relaxation schemes, or the inclusion of other, yet to be invented, schemes. For our performance evaluation we implemented a version of the StarDust framework, namely the one proposed in Section 3.3.5, where the
constraints are based on color and connectivity.
The Central device of the StarDust system consists of the following: for the Light Emitter we used a common off-the-shelf flash light (QBeam, 3 million candlepower); the image acquisition was done with a 3 megapixel digital camera (Sony DSC-S50), which provided the input to the Image Processing algorithm, implemented in Matlab.
For sensor nodes we built a custom sensor node, called SensorBall, with self-righting capabilities, shown in Figure 9(a). The self-righting capabilities are necessary in order to orient the CCR predominantly upwards. The CCRs that we used were inexpensive, plastic molded, night time warning signs commonly available on bicycles, as shown in Figure 9(b). We remark on the low quality of the CCRs we used. The reflectivity of each CCR (there are tens of them molded in the plastic container) is extremely low, and each CCR is not built with mirrors; a reflective effect is achieved by employing finely polished plastic surfaces. We had 5 colors available, in addition to the standard CCR, which reflects all the incoming light (white CCR). For a slightly higher price (ours were 20 cents/piece), better quality CCRs can be employed.
Figure 10. The field in the dark
Figure 11. The illuminated field
Figure 12. The difference between Figures 10 and 11
Higher quality CCRs (better mirrors) would translate into more accurate image processing (better sensor node detection) and a smaller form factor for the optical component (an array of CCRs with a smaller area could be used).
The sensor node platform we used was the micaZ mote. The code that runs on each node is a simple application which broadcasts 100 beacons and maintains a neighbor table containing the percentage of successfully received beacons for each neighbor. On demand, the neighbor table is reported to a base station, where the node ID mapping is performed.
5 System Evaluation
In this section we present the performance evaluation of a system
implementation of the StarDust localization framework. The three major research questions that our evaluation tries to answer are: the feasibility of the proposed framework (can sensor nodes be optically detected at large distances?), the localization accuracy of one actual implementation of the StarDust framework, and whether or not atmospheric conditions can affect the recognition of sensor nodes in an image. The first two questions are investigated by evaluating the two main components of the StarDust framework: the Image Processing and the Node ID Matching. These components have been evaluated separately, mainly because of the lack of adequate facilities. We wanted to evaluate the performance of the Image Processing Algorithm in a long range, realistic experimental set-up, while the Node ID Matching required a relatively large area, available for long periods of time (for connectivity data gathering). The third research question is investigated through computer modeling of atmospheric phenomena.
For the evaluation of the Image Processing module, we performed experiments in a football stadium where we deployed 6 sensor nodes in a 3 × 2 grid. The distance between the Central device and the sensor nodes was approximately 500 ft. The metrics of interest are the number of false positives and false negatives produced by the Image Processing Algorithm.
For the evaluation of the Node ID Matching component, we deployed 26 sensor nodes in a 120 × 60 ft² flat area of a stadium. In order to investigate the influence radio connectivity has on localization accuracy, we varied the height above ground of the deployed sensor nodes. Two set-ups were used: one in which the sensor nodes were on the ground, and a second in which the sensor nodes were raised 3 inches above ground. From here on, we will refer to these two experimental set-ups as the low connectivity and the high connectivity networks, respectively, because when nodes are on the ground the communication range
is low, resulting in fewer neighbors than when the nodes are elevated and have a greater communication range. The metrics of interest are: the localization error (defined as the distance between the computed location and the true location, known from the manual placement), the percentage of nodes correctly localized, the convergence of the label relaxation algorithm, the time to localize, and the robustness of the node ID mapping to errors in the Image Processing module.
The parameters that we vary experimentally are: the angle under which images are taken, the focus of the camera, and the degree of connectivity. The parameters that we vary in simulations (subsequent to image acquisition and connectivity collection) are: the number of colors, the number of anchors, the number of false positives or negatives given as input to the Node ID Matching component, the distance between the imaging device and the sensor network (i.e., the range), the atmospheric conditions (light attenuation coefficient), and the CCR reflectance (indicative of its quality).
5.1 Image Processing
For the IPA evaluation, we deploy 6 sensor nodes in a 3 × 2 grid. We take 13 sets of pictures using different orientations of the camera and different zooming factors. All pictures were taken from the same location. Each set is composed of a picture taken in the dark and of a picture taken with a light beam pointed at the nodes. We process the pictures offline using a Matlab implementation of IPA. Since we are interested in the feasibility of identifying colored sensor nodes at large distances, the end result of our IPA is the 2D location of sensor nodes (their positions in the image). The transformation to 3D coordinates can be done through standard computer graphics techniques [23].
One set of pictures obtained as part of our experiment is shown in Figures 10 and 11. The execution of our IPA algorithm results in Figure 12, which filters out the background, and Figure 13, which shows the output of the edge detection step
of IPA. The experimental results are depicted in Figure 14. For each set of pictures, the graph shows the number of false positives (the IPA determines that there is a node while there is none) and the number of false negatives (the IPA determines that there is no node while there is one).
Figure 13. Retroreflectors detected in Figure 12
Figure 14. False Positives and Negatives for the 6 nodes
In about 45% of the cases we obtained perfect results, i.e., no false positives and no false negatives. In the remaining cases, we obtained at most one false positive and at most two false negatives.
We exclude two pairs of pictures from Figure 14. In the first excluded pair we obtained 42 false positives, and in the second pair 10 false positives and 7 false negatives. By carefully examining the pictures, we realized that the first pair was taken out of focus and that a car temporarily appeared in one of the pictures of the second pair. The anomaly in the second set was due to the fact that we waited too long to take the second picture. If the pictures had been taken a few milliseconds apart, the car would have appeared in either both or neither of the pictures and the IPA would have filtered it out.
5.2 Node ID Matching
We evaluate the Node ID Matching component of our system by collecting empirical data (connectivity information) from the outdoor deployment of 26 nodes in the 120 × 60 ft² area. We collect 20 sets of data for the high connectivity and low connectivity network deployments. Off-line, we investigate the influence of coloring on the metrics of interest by randomly assigning colors to the sensor nodes. For one experimental data set we generate 50 random assignments of colors to sensor nodes. It is important to observe that, for the evaluation of the Node ID Matching algorithm (color and connectivity constrained), we simulate the color assignment to sensor nodes. As mentioned in Section 4, the size of the
coloring space available to us was 5 (5 colors). Through simulations of color assignment (not connectivity) we are able to investigate the influence that the size of the coloring space has on the accuracy of localization. The value of the parameter ε used in Algorithm 2 was 0.001.
Figure 15. The number of existing and missing radio connections in the sparse connectivity experiment
Figure 16. The number of existing and missing radio connections in the high connectivity experiment
The results presented here represent averages over the randomly generated colorings and over all experimental data sets. We first investigate the accuracy of our proposed Radio Model, and subsequently use the derived values for the radio range in the evaluation of the Node ID Matching component.
5.2.1 Radio Model
From experiments, we obtain an average number of observed beacons (k, defined in Section 3.3.2) of 180 beacons for the low connectivity network and of 420 beacons for the high connectivity network. From our Radio Model (Equation 7), we obtain a radio range R = 25 ft for the low connectivity network and R = 40 ft for the high connectivity network.
To estimate the accuracy of our simple model, we plot the number of radio links that exist in the networks, and the number of links that are missing, as functions of the distance between nodes. The results are shown in Figures 15 and 16. We define the average radio range R to be the distance over which less than 20% of the potential radio links are missing. As shown in Figure 15, the radio range is between 20 ft and 25 ft. For the higher connectivity network, the radio range was between 30 ft and 40 ft. We choose two conservative estimates of the radio range: 20 ft for the low connectivity case and 35 ft for the high connectivity case, which are in good agreement with the values predicted by our Radio Model.
5.2.2 Localization Error vs.
Coloring Space Size
In this experiment we investigate the effect of the number of colors on the localization accuracy. For this, we randomly assign colors from a pool of a given size to the sensor nodes.
Figure 17. Localization error
Figure 18. Percentage of nodes correctly localized
Figure 19. Convergence error
We then execute the localization algorithm, which uses the empirical data. The algorithm is run for three different radio ranges: 15, 20 and 25 ft, to investigate their influence on the localization error. The results are depicted in Figure 17 (localization error) and Figure 18 (percentage of nodes correctly localized). As shown, for an estimate of 20 ft for the radio range (as predicted by our Radio Model) we obtain the smallest localization errors, as small as 2 ft, when enough colors are used. Both Figures 17 and 18 confirm our intuition that a larger number of available colors significantly decreases the localization error.
The well known fact that relaxation algorithms do not always converge was observed during our experiments. The percentage of successful runs (in which the algorithm converged) is depicted in Figure 19. As shown, in several situations the algorithm failed to converge (the algorithm execution was stopped after 100 iterations per node). If the algorithm does not converge in a predetermined number of steps, it terminates and the label with the highest probability provides the identity of the node. It is quite probable that the chosen label is incorrect, since the probabilities of some of the labels are constantly changing (with each iteration). The convergence of relaxation based algorithms is a well known issue.
5.2.3 Localization Error vs.
Color Uniqueness
As mentioned in Section 3.3.1, a unique color gives a sensor node the status of an anchor. A sensor node that is an anchor can be unequivocally identified through the Image Processing module. In this section we investigate the effect unique colors have on the localization accuracy. Specifically, we want to experimentally verify our intuition that assigning more nodes to a color can benefit the localization accuracy, by enforcing more constraints, as opposed to uniquely assigning a color to a single node.
Figure 20. Localization error vs. number of colors
For this, we fix the number of available colors to either 4, 6 or 8 and vary the number of nodes that are given unique colors, from 0 up to the maximum number of colors (4, 6 or 8). Naturally, if we have a maximum of 4 colors, we can assign at most 4 anchors. The experimental results are depicted in Figure 20 (localization error) and Figure 21 (percentage of sensor nodes correctly localized). As expected, the localization accuracy increases with the increase in the number of colors available (larger coloring space). Also, for a given size of the coloring space (e.g., 6 colors available), if more colors are uniquely assigned to sensor nodes then the localization accuracy decreases. It is interesting to observe that by assigning colors uniquely to nodes, the benefit of having additional colors is diminished. Specifically, if 8 colors are available and all are assigned uniquely, the system is localized less accurately (error ≈ 7 ft) than in the case of 6 colors and no unique assignments of colors (localization error ≈ 5 ft). The same trend of less accurate localization can be observed in Figure 21, which shows the percentage of nodes correctly localized (i.e., 0 ft localization error). As shown, if we increase the number of colors that are uniquely assigned, the percentage of nodes correctly localized decreases.
5.2.4 Localization Error vs.
Connectivity
We collected empirical data for two network deployments with different degrees of connectivity (high and low) in order to assess the influence of connectivity on location accuracy. The results obtained from running our localization algorithm are depicted in Figures 22 and 23. We varied the number of colors available and assigned no anchors (i.e., no unique assignments of colors).
Figure 21. Percentage of nodes correctly localized vs. number of colors
Figure 22. Localization error vs. number of colors
In both scenarios, as expected, the localization error decreases with an increase in the number of colors. It is interesting to observe, however, that the low connectivity scenario improves its localization accuracy more quickly from the additional colors available. When the number of colors becomes relatively large (twelve for our 26 sensor node network), both scenarios (low and high connectivity) have comparable localization errors, of less than 2 ft. The same trend of more accurate location information is evidenced by Figure 23, which shows that the percentage of nodes that are localized correctly grows more quickly for the low connectivity deployment.
5.3 Localization Error vs.
Image Processing Errors
So far we have investigated the sources of localization error that are intrinsic to the Node ID Matching component. As previously presented, luminous objects can be mistakenly detected as sensor nodes during the location detection phase of the Image Processing module. These false positives can be eliminated by the color recognition procedure of the Image Processing module. More problematic are false negatives (when a sensor node does not reflect back enough light to be detected); they need to be handled by the localization algorithm. In this case, the localization algorithm is presented with two sets of nodes of different sizes that need to be matched: one coming from the Image Processing (which misses some nodes) and one coming from the network, with the connectivity information (here we assume a fully connected network, so that all sensor nodes report their connectivity information).
In this experiment we investigate how Image Processing errors (false negatives) influence the localization accuracy. For this evaluation, we ran our localization algorithm with empirical data, but dropped a percentage of nodes from the list of nodes detected by the Image Processing algorithm (i.e., we artificially introduced false negatives into the Image Processing).
Figure 23. Percentage of nodes correctly localized
Figure 24. Impact of false negatives on the localization error
The effect of false negatives on localization accuracy is depicted in Figure 24. As seen in the figure, if the number of false negatives is 15%, the error in position estimation doubles when 4 colors are available. It is interesting to observe that the scenario in which more colors are available (e.g., 12 colors) is affected more drastically than the scenario with fewer colors (e.g., 4 colors). The benefit of having more colors available is still maintained, at least for the range of colors we investigated (4 through 12 colors).
5.4 Localization
Time
In this section we look more closely at the duration of each of the four proposed relaxation techniques and of two combinations of them: color-connectivity and color-time. We assume that 50 unique color filters can be manufactured, that the sensor network is deployed from 2,400 ft (necessary for the time-constrained relaxation), and that the time required for reporting connectivity grows linearly, with an initial reporting period of 160 sec, as used in a real world tracking application [1].
The localization duration results, as presented in Table 1, are depicted in Figure 25. As shown, for all practical purposes the time required by the space constrained relaxation technique is 0 sec. The same applies to the color constrained relaxation, for which the localization time is 0 sec (if the number of colors is sufficient). Considering our assumptions, the color constrained relaxation works only for a network of size 50. For all other network sizes (100, 150 and 200), the localization duration is infinite (i.e., unique color assignments to sensor nodes cannot be made, since only 50 colors are unique) when only color constrained relaxation is used.
The durations of both the connectivity constrained and time constrained techniques increase linearly with the network size (for the time constrained technique, the Central device deploys sensor nodes one by one, recording an image after the time a sensor node is expected to reach the ground).
Figure 25. Localization time for different label relaxation schemes
Figure 26. Apparent contrast in a clear atmosphere
Figure 27. Apparent contrast in a hazy atmosphere
It is interesting to notice in Figure 25 the improvement in the localization time obtained by simply combining the color and the connectivity constrained techniques. The localization duration in this case is identical to that of the connectivity constrained technique. The combination of color and time constrained relaxations is even more interesting. For a reasonable localization
duration of 52 seconds, a perfect (i.e., 0 ft localization error) localization system can be built. In this scenario, the set of sensor nodes is split into batches, with each batch having a set of unique colors. It would be very interesting to consider other scenarios, where the strength of the space constrained relaxation (0 sec for any sensor network size) is used for improving the other proposed relaxation techniques. We leave the investigation and rigorous classification of such combinations of techniques for future work.
5.5 System Range
In this section we evaluate the feasibility of the StarDust localization framework when considering the realities of light propagation through the atmosphere. The main factor that determines the range of our system is light scattering, which redirects the luminance of the source into the medium (in essence equally affecting the luminosity of the target and of the background). Scattering limits the visibility range by reducing the apparent contrast between the target and its background (the contrast approaches zero as the distance increases). The apparent contrast Cr is quantitatively expressed by the formula:

Cr = (Ntr − Nbr) / Nbr (10)

where Ntr and Nbr are the apparent target radiance and the apparent background radiance at distance r from the light source, respectively. The apparent radiance Ntr of a target at a distance r from the light source is given by:

Ntr = Na + (I ρt e^(−2σr)) / (π r²) (11)

where I is the intensity of the light source, ρt is the target reflectance, σ is the spectral attenuation coefficient (≈ 0.12 km⁻¹ and ≈ 0.60 km⁻¹ for a clear and a hazy atmosphere, respectively) and Na is the radiance of the atmospheric backscatter, expressed in Equation 12 as an integral of e^(−x)/x², scaled by the backscatter gain G = 0.24. The apparent background radiance Nbr is given by formulas similar to Equations 11 and 12, where only the target reflectance ρt is substituted with the background reflectance
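The contrast computation can be sketched as follows. Since the full integral of Equation 12 is not reproduced in the text, the backscatter radiance Na is treated here as a given input rather than computed:

```python
import math

# Sketch of the apparent-contrast model (Equations 10 and 11). Na, the
# atmospheric backscatter radiance, is taken as an input because the full
# integral form of Equation 12 is not reproduced in the text.
def apparent_radiance(I, rho, sigma, r, Na):
    return Na + I * rho * math.exp(-2.0 * sigma * r) / (math.pi * r * r)

def apparent_contrast(I, rho_t, rho_b, sigma, r, Na):
    Nt = apparent_radiance(I, rho_t, sigma, r, Na)  # target
    Nb = apparent_radiance(I, rho_b, sigma, r, Na)  # background
    return (Nt - Nb) / Nb
```

With σ ≈ 0.12 km⁻¹ (clear air) and distances in kilometers, the contrast stays near (ρt − ρb)/ρb at short range and decays toward zero once the backscatter term dominates, which is the behavior plotted in Figures 26 and 27.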
\u03c1b.\nIt is important to remark that when Cr reaches its lower limit, no increase in the source luminance or receiver sensitivity will increase the range of the system.\nFrom Equations 11 and 12 it can be observed that the parameter which can be controlled and can influence the range of the system is \u03c1t, the target reflectance.\nFigures 26 and 27 depict the apparent contrast Cr as a function of the distance r for a clear and for a hazy atmosphere, respectively.\nThe apparent contrast is investigated for reflectance coefficients \u03c1t ranging from 0.3 to 1.0 (a perfect reflector).\nFor a contrast Cr of at least 0.5, as can be seen in Figure 26, a range of approximately 4,500 ft can be achieved if the atmosphere is clear.\nPerformance deteriorates dramatically when the atmospheric conditions are problematic.\nAs shown in Figure 27, a range of up to 1,500 ft is achievable when using highly reflective CCR components.\nWhile our light source (3 million candlepower) was sufficient for a range of a few hundred feet, we remark that there exist commercially available light sources (20 million candlepower) or military ones (150 million candlepower [27]) powerful enough for ranges of a few thousand feet.\n6 StarDust System Optimizations\nIn this section we describe extensions of the proposed architecture that can constitute future research directions.\n6.1 Chained Constraint Primitives\nIn this paper we proposed four primitives for constraint-based relaxation algorithms: color, connectivity, time and space.\nTo demonstrate the power that can be obtained by combining them, we proposed and evaluated one combination of such primitives: color and connectivity.\nAn interesting research direction to pursue could be to chain more than two of these primitives.\nAn example of such a chain is: color, temporal, spatial and connectivity.\nOther research directions could be to use a voting scheme for deciding which primitive to use, or to assign different weights to different relaxation 
algorithms.\n6.2 Location Learning\nIf, after several iterations of the algorithm, none of the label probabilities for a node ni converges to a high value, the confidence in our labeling of that node is relatively low.\nIt would be interesting to associate more than one label (implicitly, more than one location) with a node, and to defer the label assignment decision until events are detected in the network (if the network was deployed for target tracking).\n6.3 Localization in Rugged Environments\nThe initial driving force for the StarDust localization framework was to address sensor node localization in extremely rugged environments.\nCanopies, dense vegetation and other highly obstructing environments pose significant challenges for sensor node localization.\nThe hope, and our original idea, was to consider the time period between the aerial deployment and the time when the sensor node disappears under the canopy.\nBy recording the last visible position of a sensor node (as seen from the aircraft), a reasonable estimate of the sensor node location can be obtained.\nThis would require that sensor nodes possess self-righting capabilities while in mid-air.\nNevertheless, we remark on the suitability of our localization framework for rugged, non-line-of-sight environments.\n7 Conclusions\nStarDust solves the localization problem for aerial deployments where passiveness, low cost, small form factor and rapid localization are required.\nResults show that accuracy can be within 2 ft and localization time within milliseconds.\nStarDust also shows robustness with respect to errors.\nWe predict the influence the atmospheric conditions can have on the range of a system based on the StarDust framework, and show that hazy environments or daylight can pose significant challenges.\nMost importantly, the properties of StarDust support the potential for even more accurate localization solutions, as well as solutions for rugged, non-line-of-sight 
environments.","keyphrases":["local","wireless sensor network","rang","sensor node","imag process","perform","corner-cube retro-reflector","aerial vehicl","scene label","consist","probabl","uniqu map","connect"],"prmu":["P","P","P","P","P","P","M","M","M","U","U","U","U"]} {"id":"J-61","title":"ICE: An Iterative Combinatorial Exchange","abstract":"We present the first design for an iterative combinatorial exchange (ICE). The exchange incorporates a tree-based bidding language that is concise and expressive for CEs. Bidders specify lower and upper bounds on their value for different trades. These bounds allow price discovery and useful preference elicitation in early rounds, and allow termination with an efficient trade despite partial information on bidder valuations. All computation in the exchange is carefully optimized to exploit the structure of the bid-trees and to avoid enumerating trades. A proxied interpretation of a revealedpreference activity rule ensures progress across rounds. A VCG-based payment scheme that has been shown to mitigate opportunities for bargaining and strategic behavior is used to determine final payments. The exchange is fully implemented and in a validation phase.","lvl-1":"ICE: An Iterative Combinatorial Exchange David C. 
Parkes\u2217 \u2020 Ruggiero Cavallo\u2020 Nick Elprin\u2020 Adam Juda\u2020 S\u00e9bastien Lahaie\u2020 Benjamin Lubin\u2020 Loizos Michael\u2020 Jeffrey Shneidman\u2020 Hassan Sultan\u2020 ABSTRACT We present the first design for an iterative combinatorial exchange (ICE).\nThe exchange incorporates a tree-based bidding language that is concise and expressive for CEs.\nBidders specify lower and upper bounds on their value for different trades.\nThese bounds allow price discovery and useful preference elicitation in early rounds, and allow termination with an efficient trade despite partial information on bidder valuations.\nAll computation in the exchange is carefully optimized to exploit the structure of the bid-trees and to avoid enumerating trades.\nA proxied interpretation of a revealed-preference activity rule ensures progress across rounds.\nA VCG-based payment scheme that has been shown to mitigate opportunities for bargaining and strategic behavior is used to determine final payments.\nThe exchange is fully implemented and in a validation phase.\nCategories and Subject Descriptors: I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence; J.4 [Computer Applications]: Social and Behavioral Sciences - Economics General Terms: Algorithms, Economics, Theory.\n1.\nINTRODUCTION Combinatorial exchanges combine and generalize two different mechanisms: double auctions and combinatorial auctions.\nIn a double auction (DA), multiple buyers and sellers trade units of an identical good [20].\nIn a combinatorial auction (CA), a single seller has multiple heterogeneous items up for sale [11].\nBuyers may have complementarities or substitutabilities between goods, and are provided with an expressive bidding language.\nA common goal in both market designs is to determine the efficient allocation, which is the allocation that maximizes total value.\nA combinatorial exchange (CE) [24] is a combinatorial double auction that brings together multiple buyers and 
sellers to trade multiple heterogeneous goods.\nFor example, in an exchange for wireless spectrum, a bidder may declare that she is willing to pay $1 million for a trade where she obtains licenses for New York City, Boston, and Philadelphia, and loses her license for Washington DC.\nThus, unlike a DA, a CE allows all participants to express complex valuations via expressive bids.\nUnlike a CA, a CE allows for fragmented ownership, with multiple buyers and sellers and agents that are both buying and selling.\nCEs have received recent attention both in the context of wireless spectrum allocation [18] and for airport takeoff and landing slot allocation [3].\nIn both of these domains there are incumbents with property rights, and it is important to facilitate a complex multi-way reallocation of resources.\nAnother potential application domain for CEs is to resource allocation in shared distributed systems, such as PlanetLab [13].\nThe instantiation of our general purpose design to specific domains is a compelling next step in our research.\nThis paper presents the first design for an iterative combinatorial exchange (ICE).\nThe genesis of this project was a class, CS 286r Topics at the Interface between Economics and Computer Science, taught at Harvard University in Spring 2004.1 The entire class was dedicated to the design and prototyping of an iterative CE.\nThe ICE design problem is multi-faceted and quite hard.\nThe main innovation in our design is an expressive yet concise tree-based bidding language (which generalizes known languages such as XOR\/OR [23]), and the tight coupling of this language with efficient algorithms for price-feedback to guide bidding, winner-determination to determine trades, and revealed-preference activity rules to ensure progress across rounds.\nThe exchange is iterative: bidders express upper and lower valuations on trades by annotating their bid-tree, and then tighten these bounds in response to price feedback in each round.\nThe 
Threshold payment rule, introduced by Parkes et al. [24], is used to determine final payments.\nThe exchange has a number of interesting theoretical properties.\nFor instance, when there exist linear prices we establish soundness and completeness: for straightforward bidders that adjust their bounds to meet activity rules while keeping their true value within the bounds, the exchange will terminate with the efficient allocation.\n1 http:\/\/www.eecs.harvard.edu\/~parkes\/cs286r\/ice.html\nFigure 1: ICE System Flow of Control\nIn addition, the efficient allocation can often be determined without bidders revealing, or even knowing, their exact value for all trades.\nThis is essential in complex domains where the valuation problem can itself be very challenging for a participant [28].\nWhile we cannot claim that straightforward bidding is an equilibrium of the exchange (and indeed, should not expect to by the Myerson-Satterthwaite impossibility theorem [22]), the Threshold payment rule minimizes the ex post incentive to manipulate across all budget-balanced payment rules.\nThe exchange is implemented in Java and is currently in validation.\nIn describing the exchange we will first provide an overview of the main components and introduce several working examples.\nThen, we introduce the basic components for a simple one-shot variation in which bidders state their exact values for trades in a single round.\nWe then describe the full iterative exchange, with upper and lower values, price-feedback, activity rules, and termination conditions.\nWe state some theoretical properties of the exchange, and end with a 
discussion to motivate our main design decisions, and suggest some next steps.\n2.\nAN OVERVIEW OF THE ICE DESIGN The design has four main components, which we will introduce in order through the rest of the paper: \u2022 Expressive and concise tree-based bidding language.\nThe language describes values for trades, such as my value for selling AB and buying C is $100, or my value for selling ABC is -$50, with negative values indicating that a bidder must receive a payment for the trade to be acceptable.\nThe language allows bidders to express upper and lower bounds on value, which can be tightened across rounds.\n\u2022 Winner Determination.\nWinner-determination (WD) is formulated as a mixed-integer program (MIP), with the structure of the bid-trees captured explicitly in the formulation.\nComparing the solution at upper and lower values allows for a determination to be made about termination, with progress in intermediate rounds driven by an intermediate valuation and the lower values adopted on termination.\n\u2022 Payments.\nPayments are computed using the Threshold payment rule [24], with the intermediate valuations adopted in early rounds and lower values adopted on termination.\n\u2022 Price feedback.\nAn approximate price is computed for each item in the exchange in each round, in terms of the intermediate valuations and the provisional trade.\nThe prices are optimized to approximate competitive equilibrium prices, and further optimized to best approximate the current Threshold payments with remaining ties broken to favor prices that are balanced across different items.\nIn computing the prices, we adopt the methods of constraint-generation to exploit the structure of the bidding language and avoid enumerating all feasible trades.\nThe subproblem to generate new constraints is a variation of the WD problem.\n\u2022 Activity rule.\nA revealed-preference activity rule [1] ensures progress across rounds.\nIn order to remain active, a bidder must tighten bounds 
so that there is enough information to define a trade that maximizes surplus at the current prices.\nAnother variation on the WD problem is formulated, both to verify that the activity rule is met and also to provide feedback to a bidder to explain how to meet the rule.\nAn outline of the ICE system flow of control is provided in Figure 1.\nWe will return to this example later in the paper.\nFor now, just observe in this two-agent example that the agents state lower and upper bounds that are checked in the activity rule, and then passed to winner-determination (WD), and then through three stages of pricing (accuracy, fairness, balance).\nOn passing the closing rule (in which parameters \u03b1eff and \u03b1thresh are checked for convergence of the trade and payments), the exchange goes to a last-and-final round.\nAt the end of this round, the trade and payments are finally determined, based on the lower valuations.\n2.1 Related Work Many ascending-price one-sided CAs are known in the literature [10, 25, 29].\nDirect elicitation approaches have also been proposed for one-sided CAs in which agents respond to explicit queries about their valuations [8, 14, 19].\nA number of ascending CAs are designed to work with simple prices on items [12, 17].\nThe price generation methods that we use in ICE generalize the methods in these earlier papers.\nParkes et al. 
[24] studied sealed-bid combinatorial exchanges and introduced the Threshold payment rule.\nSubsequently, Krych [16] demonstrated experimentally that the Threshold rule promotes efficient allocations.\nWe are not aware of any previous studies of iterative CEs.\nDominant strategy DAs are known for unit demand [20] and also for single-minded agents [2].\nNo dominant strategy mechanisms are known for the general CE problem.\nICE is a hybrid auction design, in that it couples simple item prices to drive bidding in early rounds with combinatorial WD and payments, a feature it shares with the clock-proxy design of Ausubel et al. [1] for one-sided CAs.\nWe adopt a variation on the clock-proxy auction's revealed-preference activity rule.\nThe bidding language shares some structural elements with the LGB language of Boutilier and Hoos [7], but has very different semantics.\nRothkopf et al. [27] also describe a restricted tree-based bidding language.\nIn LGB, the semantics are those of propositional logic, with the same items in an allocation able to satisfy a tree in multiple places.\nAlthough this can make LGB especially concise in some settings, the semantics that we propose appear to provide useful locality, so that the value of one component in a tree can be understood independently from the rest of the tree.\nThe idea of capturing the structure of our bidding language explicitly within a mixed-integer programming formulation follows the developments in Boutilier [6].\n3.\nPRELIMINARIES In our model, we consider a set of goods, indexed {1, ... , m} and a set of bidders, indexed {1, ... , n}.\nThe initial allocation of goods is denoted x0 = (x0 1, ... , x0 n), with x0 i = (x0 i1, ... , x0 im) and x0 ij \u2265 0 for good j indicating the number of units of good j held by bidder i.\nA trade \u03bb = (\u03bb1, ... , \u03bbn) denotes the change in allocation, with \u03bbi = (\u03bbi1, ... 
, \u03bbim) where \u03bbij \u2208 \u2124 is the change in the number of units of item j to bidder i. So, the final allocation is x1 = x0 + \u03bb.\nEach bidder has a value vi(\u03bbi) \u2208 \u211d for a trade \u03bbi.\nThis value can be positive or negative, and represents the change in value between the final allocation x0 i + \u03bbi and the initial allocation x0 i .\nUtility is quasi-linear, with ui(\u03bbi, p) = vi(\u03bbi) \u2212 p for trade \u03bbi and payment p \u2208 \u211d.\nPrice p can be negative, indicating the bidder receives a payment for the trade.\nWe use the term payoff interchangeably with utility.\nOur goal in the ICE design is to implement the efficient trade.\nThe efficient trade, \u03bb\u2217 , maximizes the total increase in value across bidders.\nDefinition 1 (Efficient trade).\nThe efficient trade \u03bb\u2217 solves\nmax (\u03bb1,...,\u03bbn) \u2211i vi(\u03bbi)\ns.t. \u03bbij + x0 ij \u2265 0, \u2200i, \u2200j (1)\n\u2211i \u03bbij \u2264 0, \u2200j (2)\n\u03bbij \u2208 \u2124 (3)\nConstraints (1) ensure that no agent sells more items than it has in its initial allocation.\nConstraints (2) provide free disposal, and allow feasible trades to sell more items than are purchased (but not vice versa).\nLater, we adopt Feas(x0 ) to denote the set of feasible trades, given these constraints and given an initial allocation x0 = (x0 1, ... 
, x0 n).\n3.1 Working Examples In this section, we provide three simple examples of instances that we will use to illustrate various components of the exchange.\nAll three examples have only one seller, but this is purely illustrative.\nExample 1.\nOne seller and one buyer, two goods {A, B}, with the seller having an initial allocation of AB.\nChanges in values for trades:\nseller: AND(\u2212A, \u2212B), \u221210; buyer: AND(+A, +B), +20.\nThe AND indicates that both the buyer and the seller are only interested in trading both goods as a bundle.\nHere, the efficient (value-maximizing) trade is for the seller to sell AB to the buyer, denoted \u03bb\u2217 = ([\u22121, \u22121], [+1, +1]).\nExample 2.\nOne seller and four buyers, four goods {A, B, C, D}, with the seller having an initial allocation of ABCD.\nChanges in values for trades:\nseller: OR(\u2212A, \u2212B, \u2212C, \u2212D), 0; buyer 1: AND(+A, +B), +6; buyer 2: XOR(+A, +B), +4; buyer 3: AND(+C, +D), +3; buyer 4: XOR(+C, +D), +2.\nThe OR indicates that the seller is willing to sell any number of goods.\nThe XOR indicates that buyers 2 and 4 are willing to buy at most one of the two goods in which they are interested.\nThe efficient trade is for bundle AB to go to buyer 1 and bundle CD to buyer 3, denoted \u03bb\u2217 = ([\u22121, \u22121, \u22121, \u22121], [+1, +1, 0, 0], [0, 0, 0, 0], [0, 0, +1, +1], [0, 0, 0, 0]).\nFigure 2: Example Bid Trees.\nExample 3.\nOne seller and two buyers, four goods {A, B, C, D}, with the seller having an initial allocation of ABCD.\nChanges in values for trades:\nseller: AND(\u2212A, \u2212B, \u2212C, \u2212D), \u221218; buyer 1: AND(+A, +B), +11; buyer 2: AND(+C, +D), +8.\nThe efficient trade is for bundle AB to go to buyer 1 and bundle CD to go to buyer 2, denoted 
\u03bb\u2217 = ([\u22121, \u22121, \u22121, \u22121], [+1, +1, 0, 0], [0, 0, +1, +1]).\n4.\nA ONE-SHOT EXCHANGE DESIGN The description of ICE is broken down into two sections: one-shot (sealed-bid) and iterative.\nIn this section we abstract away the iterative aspect and introduce a specialization of the tree-based language that supports only exact values on nodes.\n4.1 Tree-Based Bidding Language The bidding language is designed to be expressive and concise, entirely symmetric with respect to buyers and sellers, and to extend to capture bids from mixed buyers and sellers, ranging from simple swaps to highly complex trades.\nBids are expressed as annotated bid trees, and define a bidder``s value for all possible trades.\nThe language defines changes in values on trades, with leaves annotated with traded items and nodes annotated with changes in values (either positive or negative).\nThe main feature is that it has a general interval-choose logical operator on internal nodes, and that it defines careful semantics for propagating values within the tree.\nWe illustrate the language on each of Examples 1-3 in Figure 2.\nThe language has a tree structure, with trades on items defined on leaves and values annotated on nodes and leaves.\nThe nodes have zero values where no value is indicated.\nInternal nodes are also labeled with interval-choose (IC) ranges.\nGiven a trade, the semantics of the language define which nodes in the tree can be satisfied, or switched-on.\nFirst, if a child is on then its parent must be on.\nSecond, if a parent node is on, then the number of children that are on must be within the IC range on the parent node.\nFinally, leaves in which the bidder is buying items can only be on if the items are provided in the trade.\nFor instance, in Example 2 we can consider the efficient trade, and observe that in this trade all nodes in the trees of buyers 1 and 3 (and also the seller), but none of the nodes in the trees of buyers 2 and 4, can be on.\nOn the 
other hand, in the trade in which A goes to buyer 2 and D to buyer 4, the root and appropriate leaf nodes can be on for buyers 2 and 4, but no nodes can be on for buyers 1 and 3.\nGiven a trade, there are often a number of ways to choose the set of satisfied nodes.\nThe semantics of the language require that the nodes that maximize the summed value across satisfied nodes be activated.\nConsider bid tree Ti from bidder i.\nThis defines nodes \u03b2 \u2208 Ti, of which some are leaves, Leaf (i) \u2286 Ti.\nLet Child(\u03b2) \u2286 Ti denote the children of a node \u03b2 (that is not itself a leaf).\nAll nodes except leaves are labeled with the interval-choose operator [ICx i (\u03b2), ICy i (\u03b2)].\nEvery node is also labeled with a value, vi\u03b2 \u2208 \u211d.\nEach leaf \u03b2 is labeled with a trade, qi\u03b2 \u2208 \u2124m (i.e., leaves can define a bundled trade on more than one type of item).\nGiven a trade \u03bbi to bidder i, the interval-choose operators and trades on leaves define which nodes can be satisfied.\nThere will often be a choice.\nTies are broken to maximize value.\nLet sati\u03b2 \u2208 {0, 1} denote whether node \u03b2 is satisfied.\nSolution sati is valid given tree Ti and trade \u03bbi, written sati \u2208 valid(Ti, \u03bbi), if and only if:\n\u2211\u03b2\u2208Leaf (i) qi\u03b2j \u00b7 sati\u03b2 \u2264 \u03bbij , \u2200i, \u2200j (4)\nICx i (\u03b2)sati\u03b2 \u2264 \u2211\u03b2\u2032\u2208Child(\u03b2) sati\u03b2\u2032 \u2264 ICy i (\u03b2)sati\u03b2, \u2200\u03b2 \u2209 Leaf (i) (5)\nIn words, a set of leaves can only be considered satisfied given trade \u03bbi if the total increase in quantity summed across all such leaves is covered by the trade, for all goods (Eq.\n4).\nThis works for sellers as well as buyers: for sellers a trade is negative and this requires that the total number of items indicated sold in the tree is at least the total number sold as defined in the trade.\nWe also need upwards-propagation: any time a node other than the 
root is satisfied then its parent must be satisfied (by \u2211\u03b2\u2032\u2208Child(\u03b2) sati\u03b2\u2032 \u2264 ICy i (\u03b2)sati\u03b2 in Eq.\n5).\nFinally, we need downwards-propagation: any time an internal node is satisfied then the appropriate number of children must also be satisfied (Eq.\n5).\nThe total value of trade \u03bbi, given bid-tree Ti, is defined as:\nvi(Ti, \u03bbi) = max sat\u2208valid(Ti,\u03bbi) \u2211\u03b2\u2208Ti vi\u03b2 \u00b7 sat\u03b2 (6)\nThe tree-based language generalizes existing languages.\nFor instance: IC(2, 2) on a node with 2 children is equivalent to an AND operator; IC(1, 3) on a node with 3 children is equivalent to an OR operator; and IC(1, 1) on a node with 2 children is equivalent to an XOR operator.\nSimilarly, the XOR\/OR bidding languages can be directly expressed as a bid tree in our language.2 4.2 Winner Determination This section defines the winner determination problem, which is formulated as a MIP and solved in our implementation with a commercial solver.3 The solver uses branch-and-bound search with dynamic cut generation and branching heuristics to solve large MIPs in economically feasible run times.\n2 The OR* language is the OR language with dummy items to provide additional structure.\nOR* is known to be expressive and concise.\nHowever, it is not known whether OR* dominates XOR\/OR in terms of conciseness [23].\n3 CPLEX, www.ilog.com\nIn defining the MIP representation we are careful to avoid an XOR-based enumeration of all bundles.\nA variation on the WD problem is reused many times within the exchange, e.g. for column generation in pricing and for checking revealed preference.\nGiven bid trees T = (T1, ... , Tn) and initial allocation x0 , the mixed-integer formulation for WD is:\nWD(T, x0 ) : max \u03bb,sat \u2211i \u2211\u03b2\u2208Ti vi\u03b2 \u00b7 sati\u03b2 s.t. 
(1), (2), sati\u03b2 \u2208 {0, 1}, \u03bbij \u2208 \u2124, sati \u2208 valid(Ti, \u03bbi), \u2200i\nSome goods may go unassigned because free disposal is allowed within the clearing rules of winner determination.\nThese items can be allocated back to agents that sold the items, i.e. for which \u03bbij < 0.\n4.3 Computing Threshold Payments The Threshold payment rule is based on the payments in the Vickrey-Clarke-Groves (VCG) mechanism [15], which itself is truthful and efficient but does not satisfy budget balance.\nBudget-balance requires that the total payments to the exchange are equal to the total payments made by the exchange.\nIn VCG, the payment paid by agent i is\npvcg,i = \u02c6vi(\u03bb\u2217 i ) \u2212 (V \u2217 \u2212 V\u2212i) (7)\nwhere \u03bb\u2217 is the efficient trade, V \u2217 is the reported value of this trade, and V\u2212i is the reported value of the efficient trade that would be implemented without bidder i.\nWe call \u2206vcg,i = V \u2217 \u2212 V\u2212i the VCG discount.\nFor instance, in Example 1 pvcg,seller = \u221210 \u2212 (+10 \u2212 0) = \u221220 and pvcg,buyer = +20 \u2212 (+10 \u2212 0) = 10, and the exchange would run at a budget deficit of \u221220 + 10 = \u221210.\nThe Threshold payment rule [24] determines budget-balanced payments to minimize the maximal error across all agents with respect to the VCG outcome.\nDefinition 2.\nThe Threshold payment scheme implements the efficient trade \u03bb\u2217 given bids, and sets payments pthresh,i = \u02c6vi(\u03bb\u2217 i ) \u2212 \u2206i, where \u2206 = (\u22061, ... 
, \u2206n) is set to minimize maxi(\u2206vcg,i \u2212 \u2206i) subject to \u2206i \u2264 \u2206vcg,i and \u2211i \u2206i \u2264 V \u2217 (this gives budget-balance).\nExample 4.\nIn Example 2, the VCG discounts are (9, 2, 0, 1, 0) to the seller and four buyers respectively, VCG payments are (\u22129, 4, 0, 2, 0) and the exchange runs at a deficit of \u22123.\nIn Threshold, the discounts are (8, 1, 0, 0, 0) and the payments are (\u22128, 5, 0, 3, 0).\nThis minimizes the worst-case error to VCG discounts across all budget-balanced payment schemes.\nThreshold payments are designed to minimize the maximal ex post incentive to manipulate.\nKrych [16] confirmed that Threshold promotes allocative efficiency in restricted and approximate Bayes-Nash equilibrium.\n5.\nTHE ICE DESIGN We are now ready to introduce the iterative combinatorial exchange (ICE) design.\nSeveral new components are introduced, relative to the design for the one-shot exchange.\nRather than providing precise valuations, bidders can provide lower and upper valuations and revise this bid information across rounds.\nThe exchange provides price-based feedback to guide bidders in this process, and terminates with an efficient (or approximately-efficient) trade with respect to reported valuations.\nIn each round t \u2208 {0, 1, ...} the current lower and upper bounds, vt and v\u0304t , are used to define a provisional valuation profile v\u03b1 (the \u03b1-valuation), together with a provisional trade \u03bbt and provisional prices pt = (pt 1, ... 
, pt m) on items.\nThe \u03b1-valuation is a linear combination of the current upper and lower valuations, with \u03b1EFF \u2208 [0, 1] chosen endogenously based on the closeness of the optimistic trade (at the upper bounds v\u0304) and the pessimistic trade (at the lower bounds v).\nPrices pt are used to inform an activity rule, and drive progress towards an efficient trade.\n5.1 Upper and Lower Valuations The bidding language is extended to allow a bidder i to report a lower and upper value (vi\u03b2, v\u0304i\u03b2) on each node.\nThese take the place of the exact value vi\u03b2 defined in Section 4.1.\nBased on these labels, we can define the valuation functions vi(Ti, \u03bbi) and v\u0304i(Ti, \u03bbi), using the exact same semantics as in Eq.\n(6).\nWe say that such a bid-tree is well-formed if vi\u03b2 \u2264 v\u0304i\u03b2 for all nodes.\nThe following lemma is useful: Lemma 1.\nGiven a well-formed tree, T, then vi(Ti, \u03bbi) \u2264 v\u0304i(Ti, \u03bbi) for all trades.\nProof.\nSuppose there is some \u03bbi for which vi(Ti, \u03bbi) > v\u0304i(Ti, \u03bbi).\nThen, max sat\u2208valid(Ti,\u03bbi) \u2211\u03b2\u2208Ti vi\u03b2 \u00b7 sat\u03b2 > max sat\u2208valid(Ti,\u03bbi) \u2211\u03b2\u2208Ti v\u0304i\u03b2 \u00b7 sat\u03b2.\nBut, this is a contradiction because the satisfying assignment that defines vi(Ti, \u03bbi) is still feasible with upper bounds v\u0304i, and v\u0304i\u03b2 \u2265 vi\u03b2 for all nodes \u03b2 in a well-formed tree.\n5.2 Price Feedback In each round, approximate competitive-equilibrium (CE) prices, pt = (pt 1, ... 
, pt m), are determined.\nGiven these provisional prices, the price on trade \u03bbi for bidder i is pt (\u03bbi) = \u2211j\u2264m pt j \u00b7 \u03bbij.\nDefinition 3 (CE prices).\nPrices p\u2217 are competitive equilibrium prices if the efficient trade \u03bb\u2217 is supported at prices p\u2217 , so that for each bidder:\n\u03bb\u2217 i \u2208 arg max \u03bb\u2208Feas(x0) {vi(\u03bbi) \u2212 p\u2217 (\u03bbi)} (8)\nCE prices will not always exist and we will often need to compute approximate prices [5].\nWe extend ideas due to Rassenti et al. [26], Kwasnica et al. [17] and Dunford et al. [12], and select approximate prices as follows: I: Accuracy.\nFirst, we compute prices that minimize the maximal error in the best-response constraints across all bidders.\nII: Fairness.\nSecond, we break ties to prefer prices that minimize the maximal deviation from Threshold payments across all bidders.\nIII: Balance.\nThird, we break ties to prefer prices that minimize the maximal price across all items.\nTaken together, these steps are designed to promote the informativeness of the prices in driving progress across rounds.\nIn computing prices, we explain how to compute approximate (or exact) prices for structured bidding languages, and without enumerating all possible trades.\nFor this, we adopt constraint generation to efficiently handle an exponential number of constraints.\nEach step is described in detail below.\nI: Accuracy.\nWe adopt a definition of price accuracy that generalizes the notions adopted in previous papers for unstructured bidding languages.\nLet \u03bbt denote the current provisional trade and suppose the provisional valuation is v\u03b1 .\nTo compute accurate CE prices, we consider:\nmin p,\u03b4 \u03b4 (9) s.t. 
v\u03b1 i (\u03bb) \u2212 p(\u03bb) \u2264 v\u03b1 i (\u03bbt i) \u2212 p(\u03bbt i) + \u03b4, \u2200i, \u2200\u03bb (10)\n\u03b4 \u2265 0, pj \u2265 0, \u2200j.\nThis linear program (LP) is designed to find prices that minimize the worst-case error across all agents.\nFrom the definition of CE prices, it follows that CE prices would have \u03b4 = 0 as a solution to (9), at which point trade \u03bbt i would be in the best-response set of every agent (with \u03bbt i = \u2205, i.e. no trade, for all agents with no surplus for trade at the prices).\nExample 5.\nWe can illustrate the formulation (9) on Example 2, assuming for simplicity that v\u03b1 = v (i.e. truth).\nThe efficient trade allocates AB to buyer 1 and CD to buyer 3.\nAccuracy will seek prices p(A), p(B), p(C) and p(D) to minimize the \u03b4 \u2265 0 required to satisfy constraints:\np(A) + p(B) + p(C) + p(D) \u2265 0 (seller)\np(A) + p(B) \u2264 6 + \u03b4 (buyer 1)\np(A) + \u03b4 \u2265 4, p(B) + \u03b4 \u2265 4 (buyer 2)\np(C) + p(D) \u2264 3 (buyer 3)\np(C) + \u03b4 \u2265 2, p(D) + \u03b4 \u2265 2 (buyer 4)\nAn optimal solution requires p(A) = p(B) = 10\/3, with \u03b4 = 2\/3, and with p(C) and p(D) taking values such as p(C) = p(D) = 3\/2.\nBut, (9) has an exponential number of constraints (Eq.\n10).\nRather than solve it explicitly we use constraint generation [4] and dynamically generate a sufficient subset of constraints.\nLet \u039bi denote a manageable subset of all possible feasible trades to bidder i. Then, a relaxed version of (9) (written ACC) is formulated by substituting (10) with\nv\u03b1 i (\u03bb) \u2212 p(\u03bb) \u2264 v\u03b1 i (\u03bbt i) \u2212 p(\u03bbt i) + \u03b4, \u2200i, \u2200\u03bb \u2208 \u039bi , (11)\nwhere \u039bi is a set of trades that are feasible for bidder i given the other bids.\nFixing the prices p\u2217 , we then solve n subproblems (one for each bidder),\nmax \u03bb v\u03b1 i (\u03bbi) \u2212 p\u2217 (\u03bbi) [R-WD(i)] s.t. 
  $\lambda \in Feas(x^0)$,   (12)

to check whether the solution $(p^*, \delta^*)$ to ACC is feasible in problem (9). In R-WD(i) the objective is to determine a most-preferred trade for each bidder at these prices. Let $\hat\lambda_i$ denote the solution to R-WD(i). Check the condition:

  $v^\alpha_i(\hat\lambda_i) - p^*(\hat\lambda_i) \le v^\alpha_i(\lambda^t_i) - p^*(\lambda^t_i) + \delta^*$,   (13)

and if this condition holds for all bidders $i$, then the solution $(p^*, \delta^*)$ is optimal for problem (9). Otherwise, trade $\hat\lambda_i$ is added to $\Lambda_i$ for all bidders $i$ for which this constraint is violated, and we re-solve the LP with the new set of constraints.4

II: Fairness. Second, we break remaining ties to prefer fair prices: choosing prices that minimize the worst-case error with respect to Threshold payoffs (i.e. utility to bidders with Threshold payments), but without choosing prices that are less accurate.5

Example 6. Accuracy in Example 1 (depicted in Figure 1) requires $12 \le p_A + p_B \le 16$ (for $v^\alpha = v$). At these valuations the Threshold payoffs would be 2 to both the seller and the buyer. This can be achieved exactly in pricing, with $p_A + p_B = 14$. The fairness tie-breaking method is formulated as the following LP:

  $\min_{p,\pi} \; \pi$   [FAIR]
  s.t.
  $v^\alpha_i(\lambda) - p(\lambda) \le v^\alpha_i(\lambda^t_i) - p(\lambda^t_i) + \delta^*_i, \quad \forall i, \forall\lambda \in \Lambda_i$   (14)
  $\pi \ge \pi_{vcg,i} - (v^\alpha_i(\lambda^t_i) - p(\lambda^t_i)), \quad \forall i$   (15)
  $\pi \ge 0, \; p_j \ge 0, \; \forall j$,

where $\delta^*$ represents the error in the optimal solution from ACC. The objective here is the same as in the Threshold payment rule (see Section 4.3): minimize the maximal error between bidder payoff (at $v^\alpha$) for the provisional trade and the VCG payoff (at $v^\alpha$). Problem FAIR is also solved through constraint generation, using R-WD(i) to add additional violated constraints as necessary.

III: Balance. Third, we break remaining ties to prefer balanced prices: choosing prices that minimize the maximal price across all items. Returning again to Example 1, depicted in Figure 1, we see that accuracy and fairness require $p(A) + p(B) = 14$. Finally, balance sets $p(A) = p(B) = 7$. Balance is justified when, all else being equal, items are more likely to have similar than dissimilar values.6 The LP for balance is formulated as follows:

  $\min_{p,Y} \; Y$   [BAL]
  s.t.
  $v^\alpha_i(\lambda) - p(\lambda) \le v^\alpha_i(\lambda^t_i) - p(\lambda^t_i) + \delta^*_i, \quad \forall i, \forall\lambda \in \Lambda_i$   (16)
  $\pi^*_i \ge \pi_{vcg,i} - (v^\alpha_i(\lambda^t_i) - p(\lambda^t_i)), \quad \forall i$   (17)
  $Y \ge p_j, \quad \forall j$   (18)
  $Y \ge 0, \; p_j \ge 0, \; \forall j$,

where $\delta^*$ represents the error in the optimal solution from ACC and $\pi^*$ represents the error in the optimal solution from FAIR. Constraint generation is also used to solve BAL, generating new trades for $\Lambda_i$ as necessary.

4 Problem R-WD(i) is a specialization of the WD problem, in which the objective is to maximize the payoff of a single bidder rather than the total value across all bidders. It is solved as a MIP, by rewriting the objective in WD(T, $x^0$) as $\max \{v_{i\beta} \cdot sat_{i\beta} - \sum_j p^*_j \cdot \lambda_{ij}\}$ for agent $i$. Thus, the structure of the bid-tree language is exploited in generating new constraints, because this is solved as a concise MIP. The other bidders are kept in the MIP (but do not appear in the objective), and are used to define the space of feasible trades.
5 The methods of Dunford et al. [12], which use a nucleolus approach, are also closely related.
6 The use of balance was advocated by Kwasnica et al. [17]. Dunford et al.
[12] prefer to smooth prices across rounds.

Comment 1: Lexicographical Refinement. For all three sub-problems we also perform lexicographical refinement (with respect to bidders in ACC and FAIR, and with respect to goods in BAL). For instance, in ACC we successively minimize the maximal error across all bidders. Given an initial solution, we first pin down the error on all bidders for whom a constraint (11) is binding. For such a bidder $i$, the constraint is replaced with

  $v^\alpha_i(\lambda) - p(\lambda) \le v^\alpha_i(\lambda^t_i) - p(\lambda^t_i) + \delta^*_i, \quad \forall\lambda \in \Lambda_i$,   (19)

and the error to bidder $i$ no longer appears explicitly in the objective. ACC is then re-solved, and makes progress by further minimizing the maximal error across all bidders yet to be pinned down. This continues, pinning down any new bidders for whom one of constraints (11) is binding, until the error is lexicographically optimized for all bidders.7 The same process is repeated for FAIR and BAL: in FAIR, bidders are pinned down and constraint (15) is replaced with $\pi^*_i \ge \pi_{vcg,i} - (v^\alpha_i(\lambda^t_i) - p(\lambda^t_i))$ (where $\pi^*_i$ is the current objective); in BAL, items are pinned down and constraint (18) is replaced with $p^*_j \ge p_j$ (where $p^*_j$ represents the target for the maximal price on that item).

Comment 2: Computation. All constraints in $\Lambda_i$ are retained, and this set grows across all stages and across all rounds of the exchange. Thus, the computational effort in constraint generation is re-used. In implementation we are careful to address a number of numerical issues that arise due to floating-point arithmetic. We prefer to err on the side of being conservative in deciding whether or not to add another constraint when performing check (13). This avoids later infeasibility issues. In addition, when pinning down bidders for the purpose of lexicographical refinement we relax the
associated bidder-constraints with a small $\epsilon > 0$ on the right-hand side.

5.3 Revealed-Preference Activity Rules

The role of activity rules in the auction is to ensure both consistency and progress across rounds [21]. Consistency in our exchange requires that bidders tighten bounds as the exchange progresses. Activity rules ensure that bidders are active during early rounds, and promote useful elicitation throughout the exchange.

We adopt a simple revealed-preference (RP) activity rule. The idea is loosely based on the RP rule in Ausubel et al. [1], where it is used for one-sided CAs. The motivation is to require more than simple consistency: we need bidders to provide enough information for the system to be able to prove that an allocation is (approximately) efficient. It is helpful to think of the bidders as interacting with proxy agents that act on their behalf in responding to the provisional prices $p^{t-1}$ determined at the end of round $t-1$. The only knowledge that such a proxy has of the valuation of a bidder is through the bid tree. Suppose a proxy was queried by the exchange and asked which trade the bidder was most interested in at the provisional prices. The RP rule says the following: the proxy must have enough information to be able to determine this surplus-maximizing trade at the current prices.

7 For example, applying this to accuracy on Example 2 we solve once and find that bidders 1 and 2 are binding, for error $\delta^* = 2/3$. We pin these down and then minimize the error to bidders 3 and 4. Finally, this gives $p(A) = p(B) = 10/3$ and $p(C) = p(D) = 5/3$, with accuracy 2/3 to bidders 1 and 2 and 1/3 to bidders 3 and 4.

Consider the following examples:

Example 7. A bidder has XOR(+A, +B) and a value of +5 on the leaf +A and a value range of [5,10] on leaf +B.
Suppose prices are currently 3 for each of A and B. The RP rule is satisfied because the proxy knows that however the remaining value uncertainty on +B is resolved, the bidder will always (weakly) prefer +B to +A.

Example 8. A bidder has XOR(+A, +B), value bounds [5, 10] on the root node, and a value of 1 on leaf +A. Suppose prices are currently 3 for each of A and B. The RP rule is satisfied because the bidder will always prefer +A to +B at equal prices, whichever way the uncertain value on the root node is ultimately resolved.

Overloading notation, let $v_i \in T_i$ denote a valuation that is consistent with the lower and upper valuations in bid tree $T_i$.

Definition 4. Bid tree $T_i$ satisfies RP at prices $p^{t-1}$ if and only if there exists some feasible trade $L^*$ for which

  $v_i(L^*_i) - p^{t-1}(L^*_i) \ge \max_{\lambda \in Feas(x^0)} v_i(\lambda_i) - p^{t-1}(\lambda_i), \quad \forall v_i \in T_i.$   (20)

To make this determination for bidder $i$ we solve a sequence of problems, each of which is a variation on the WD problem. First, we construct a candidate lower-bound trade, which is a feasible trade that solves:

  $\max_\lambda \; \underline{v}_i(\lambda_i) - p^{t-1}(\lambda_i)$   [RP1(i)]
  s.t. $\lambda \in Feas(x^0)$.   (21)

The solution $\pi^*_l$ to RP1(i) represents the maximal payoff that bidder $i$ can achieve across all feasible trades, given its pessimistic valuation. Second, we break ties to find a trade with maximal value uncertainty across all possible solutions to RP1(i):

  $\max_\lambda \; \overline{v}_i(\lambda_i) - \underline{v}_i(\lambda_i)$   [RP2(i)]
  s.t. $\lambda \in Feas(x^0)$   (22)
  $\underline{v}_i(\lambda_i) - p^{t-1}(\lambda_i) \ge \pi^*_l$   (23)

We adopt the solution $L^*_i$ as our candidate for the trade that may satisfy RP. To understand the importance of this tie-breaking rule, consider Example 7. The proxy can prove that +B, but not +A, is a best response for all $v_i \in T_i$, and should choose +B as its candidate. Notice that +B is a counterexample to +A, but not the other way round. Now, we construct a modified valuation $\tilde{v}_i$ by setting

  $\tilde{v}_{i\beta} = \underline{v}_{i\beta}$ if $\beta \in sat(L^*_i)$, and $\tilde{v}_{i\beta} = \overline{v}_{i\beta}$ otherwise,   (24)

where $sat(L^*_i)$ is the set of nodes that are satisfied in the lower-bound tree for trade $L^*_i$. Given this modified valuation, we find $U^*$ to solve:

  $\max_\lambda \; \tilde{v}_i(\lambda_i) - p^{t-1}(\lambda_i)$   [RP3(i)]
  s.t. $\lambda \in Feas(x^0)$   (25)

Let $\pi^*_u$ denote the payoff from this optimal trade at the modified values $\tilde{v}$. We call trade $U^*_i$ the witness trade. We show in Proposition 1 that the RP rule is satisfied if and only if $\pi^*_l \ge \pi^*_u$.
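The RP1-RP3 sequence can be traced on Example 7 with a short script. This is a minimal sketch: the dictionary encoding of the bid tree, the trade names, and the trade-to-node incidence sets are hypothetical illustrations, not the exchange's actual data structures, and the XOR root node is assumed to carry zero value.

```python
# Example 7: XOR(+A, +B); leaf +A worth exactly 5, leaf +B worth [5, 10].
LOWER = {"root": 0, "A": 5, "B": 5}     # lower-bound value on each node
UPPER = {"root": 0, "A": 5, "B": 10}    # upper-bound value on each node
SAT = {"buyA": {"root", "A"}, "buyB": {"root", "B"}, "none": set()}
PRICE = {"buyA": 3, "buyB": 3, "none": 0}

def value(bounds, trade):
    # Value of a trade is the sum over the nodes it satisfies.
    return sum(bounds[n] for n in SAT[trade])

# RP1: maximal payoff at the pessimistic (lower-bound) valuation.
pi_l = max(value(LOWER, t) - PRICE[t] for t in SAT)
best = [t for t in SAT if value(LOWER, t) - PRICE[t] == pi_l]

# RP2: break ties toward the trade with maximal value uncertainty.
L_star = max(best, key=lambda t: value(UPPER, t) - value(LOWER, t))

# Modified valuation: lower values on nodes satisfied by L*, upper elsewhere.
mod = {n: (LOWER[n] if n in SAT[L_star] else UPPER[n]) for n in LOWER}

# RP3: maximal payoff under the modified valuation (the witness trade).
pi_u = max(value(mod, t) - PRICE[t] for t in SAT)

print(L_star, pi_l, pi_u, pi_l >= pi_u)  # buyB 2 2 True
```

At these prices both trades tie at lower-bound payoff 2, RP2 selects +B as the candidate because of its value uncertainty, and the witness payoff does not exceed $\pi^*_l$, so RP holds without further elicitation.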
Constructing the modified valuation $\tilde{v}_i$ recognizes that there is shared uncertainty across trades that satisfy the same nodes in a bid tree. Example 8 helps to illustrate this. Just using $\overline{v}_i$ in RP3(i), we would find that $L^*_i$ is buy A with payoff $\pi^*_l = 3$, but then find that $U^*_i$ is buy B with $\pi^*_u = 7$, and fail RP. We must recognize that however the uncertainty on the root node is resolved, it will affect +A and +B in exactly the same way. For this reason, we set $\tilde{v}_{i\beta} = \underline{v}_{i\beta} = 5$ on the root node, which is exactly the same value that was adopted in determining $\pi^*_l$. Then RP3(i) gives buy A as the witness trade, and the RP test is judged to be passed.

Proposition 1. Bid tree $T_i$ satisfies RP given prices $p^{t-1}$ if and only if any lower-bound trade $L^*_i$ that solves RP1(i) and RP2(i) satisfies:

  $\underline{v}_i(L^*_i) - p^{t-1}(L^*_i) \ge \tilde{v}_i(U^*_i) - p^{t-1}(U^*_i)$,   (26)

where $\tilde{v}_i$ is the modified valuation in Eq. (24).

Proof. For sufficiency, notice that the difference in payoff between trade $L^*_i$ and another trade $\lambda_i$ is unaffected by the way uncertainty is resolved on any node that is satisfied in both $L^*_i$ and $\lambda_i$. Fixing the values in $\tilde{v}_i$ on nodes satisfied in $L^*_i$ has the effect of removing this consideration when a trade $U^*_i$ is selected that satisfies one of these nodes. On the other hand, fixing the values on these nodes has no effect on trades considered in RP3(i) that do not share a node with $L^*_i$. For the necessary direction, we first show that any trade that satisfies RP must solve RP1(i). Suppose otherwise, that some $\lambda_i$ with payoff greater than $\pi^*_l$ satisfies RP. But valuation $\underline{v}_i \in T_i$ together with $L^*_i$ presents a counterexample to RP (Eq. 20). Now, suppose (for contradiction) that some $\lambda_i$ with maximal payoff $\pi^*_l$ but uncertainty less than $L^*_i$ satisfies
RP. Proceed by case analysis. Case (a): only one solution to RP1(i) has uncertain value, and so $\lambda_i$ has certain value. But this cannot satisfy RP, because $L^*_i$ with uncertain value would be a counterexample to RP (Eq. 20). Case (b): two or more solutions to RP1(i) have uncertain value. Here, we first argue that one of these trades must satisfy a (weak) superset of all the nodes with uncertain value that are satisfied by all other trades in this set. This is by RP. Without this, for any choice of trade that solves RP1(i), there is another trade with a disjoint set of uncertain but satisfied nodes that provides a counterexample to RP (Eq. 20). Now, consider the case that some trade contains a superset of all the uncertain satisfied nodes of the other trades. Clearly RP2(i) will choose this trade, $L^*_i$, and $\lambda_i$ must satisfy a subset of these nodes (by assumption). But we now see that $\lambda_i$ cannot satisfy RP, because $L^*_i$ would be a counterexample to RP.

Failure to meet the activity rule must have some consequence. In the current rules, the default action we choose is to set the upper bounds in valuations down to the maximum of the provisional price on a node8 and the lower-bound value on that node.9 Such a bidder can remain active within the exchange, but only with valuations that are consistent with these new bounds.

8 The provisional price on a node is defined as the minimal total price across all feasible trades for which the subtree rooted at that node is satisfied.
9 This is entirely analogous to when a bidder in an ascending clock auction stops bidding at a price: she is not permitted to bid at a higher price again in future rounds.

5.4 Bidder Feedback

In each round, our default design provides every bidder with the provisional trade and also with the current provisional prices. See Section 7 for additional discussion. We also provide guidance to help a bidder meet the RP rule. Let $sat(L^*_i)$ and
$sat(U^*_i)$ denote the nodes that are satisfied in trades $L^*_i$ and $U^*_i$, as computed in RP1-RP3.

Lemma 2. When RP fails, a bidder must increase a lower bound on at least one node in $sat(L^*_i) \setminus sat(U^*_i)$, or decrease an upper bound on at least one node in $sat(U^*_i) \setminus sat(L^*_i)$, in order to meet the activity rule.

Proof. Changing the upper or lower values on nodes that are not satisfied by either trade does not change $L^*_i$ or $U^*_i$, and does not change the payoff from these trades. Thus, the RP condition will continue to fail. Similarly, changing the bounds on nodes that are satisfied in both trades has no effect on revealed preference. A change to a lower bound on a shared node affects both $L^*_i$ and $U^*_i$ identically, because of the use of the modified valuation to determine $U^*_i$. A change to an upper bound on a shared node has no effect in determining either $L^*_i$ or $U^*_i$.

Note that when $sat(U^*_i) = sat(L^*_i)$, condition (26) is always trivially satisfied, and so the guidance in the lemma is always well-defined when RP fails. This is an elegant feedback mechanism because it is adaptive. Once a bidder makes some changes on some subset of these nodes, the bidder can query the exchange. The exchange can then respond yes, or revise the sets of nodes $sat(L^*_i)$ and $sat(U^*_i)$ as necessary.

5.5 Termination Conditions

Once each bidder has committed its new bids (and either met the RP rule or suffered the penalty), round $t$ closes. At this point, the task is to determine the new $\alpha$-valuation, and in turn the provisional allocation $\lambda^t$ and provisional prices $p^t$. A termination condition is also checked, to determine whether to move the exchange to a last-and-final round. To define the $\alpha$-valuation we compute the following two quantities:

Pessimistic at Pessimistic (PP) Determine an efficient trade, $\lambda^*_l$, at pessimistic
values, i.e. to solve $\max_\lambda \sum_i \underline{v}_i(\lambda_i)$, and set PP $= \sum_i \underline{v}_i(\lambda^*_{l,i})$.

Pessimistic at Optimistic (PO) Determine an efficient trade, $\lambda^*_u$, at optimistic values, i.e. to solve $\max_\lambda \sum_i \overline{v}_i(\lambda_i)$, and set PO $= \sum_i \underline{v}_i(\lambda^*_{u,i})$.

First, note that PP $\ge$ PO and PP $\ge 0$ by definition, for all bid trees, although PO can be negative (because the right trade at $\overline{v}$ is not currently a useful trade at $\underline{v}$). Recognizing this, define

  $\gamma^{eff}(PP, PO) = 1 + \dfrac{PP - PO}{PP}$,   (27)

when PP $> 0$, and observe that $\gamma^{eff}(PP, PO) \ge 1$ when this is defined, and that $\gamma^{eff}(PP, PO)$ will start large and then trend towards 1 as the optimistic allocation converges towards the pessimistic allocation. In each round, we define $\alpha^{eff} \in [0, 1]$ as:

  $\alpha^{eff} = 0$ when PP is 0, and $\alpha^{eff} = 1/\gamma^{eff}$ otherwise,   (28)

which is 0 while PP is 0 and then trends towards 1 once PP $> 0$ in some round. This is used to define the $\alpha$-valuation

  $v^\alpha_i = \alpha^{eff} \underline{v}_i + (1 - \alpha^{eff}) \overline{v}_i, \quad \forall i$,   (29)

which is used to define the provisional allocation and provisional prices. The effect is to endogenously define a schedule for moving from optimistic to pessimistic values across rounds, based on how close the trades are to one another.

Termination Condition. In moving to the last-and-final round, and finally closing, we also care about the convergence of payments, in addition to the convergence towards an efficient trade. For this we introduce another parameter, $\alpha^{thresh} \in [0, 1]$, that trends from 0 to 1 as the Threshold payments at the lower and upper valuations converge. Consider the following parameter:

  $\gamma^{thresh} = 1 + \dfrac{\|p_{thresh}(\overline{v}) - p_{thresh}(\underline{v})\|_2}{PP / N_{active}}$,   (30)

which is defined for PP $> 0$, where $p_{thresh}(v)$ denotes the Threshold payments at valuation profile $v$, $N_{active}$ is the number of bidders that are actively engaged in trade in the PP trade, and $\|\cdot\|_2$ is the L2-norm. Note that $\gamma^{thresh}$ is defined for payments and
not payoffs. This is appropriate because it is the accuracy of the outcome of the exchange that matters: i.e. the trade and the payments. Given this, we define

  $\alpha^{thresh} = 0$ when PP is 0, and $\alpha^{thresh} = 1/\gamma^{thresh}$ otherwise,   (31)

which is 0 while PP is 0 and then trends towards 1 as progress is made.

Definition 5 (termination). ICE transitions to a last-and-final round when one of the following holds:
1. $\alpha^{eff} \ge$ CUTOFF$_{eff}$ and $\alpha^{thresh} \ge$ CUTOFF$_{thresh}$,
2. there is no trade at the optimistic values,
where CUTOFF$_{eff}$, CUTOFF$_{thresh} \in (0, 1]$ determine the accuracy required for termination. At the end of the last-and-final round, $v^\alpha = \underline{v}$ is used to define the final trade and the final Threshold payments.

Example 9. Consider again Example 1, and consider the upper and lower bounds as depicted in Figure 1. First, if the seller's bounds were $[-20, -4]$ then there is an optimistic trade but no pessimistic trade, with PO $= -4$, PP $= 0$, and $\alpha^{eff} = 0$. At the bounds depicted, both the optimistic and the pessimistic trades occur, PO $=$ PP $= 4$, and $\alpha^{eff} = 1$. However, we can see that the Threshold payments are $(17, -17)$ at $\overline{v}$ but $(14, -14)$ at $\underline{v}$.
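The threshold-convergence arithmetic in Example 9 can be checked with a short script. This is a sketch; it assumes the mean-normalized L2 norm that the worked example applies to the two bidders' Threshold payments, with PP = 4 and two active bidders.

```python
import math

p_upper = (17, -17)   # Threshold payments at the upper valuations
p_lower = (14, -14)   # Threshold payments at the lower valuations
PP, n_active = 4, 2

# Mean-normalized L2 distance between the two payment vectors.
norm = math.sqrt(sum((a - b) ** 2 for a, b in zip(p_upper, p_lower)) / len(p_upper))
gamma_thresh = 1 + norm / (PP / n_active)
alpha_thresh = 1 / gamma_thresh
print(gamma_thresh, alpha_thresh)  # 2.5 0.4
```

With CUTOFF$_{thresh}$ above 2/5 this round would not yet trigger termination, matching the discussion that follows.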
Evaluating $\gamma^{thresh}$, we have $\gamma^{thresh} = 1 + \sqrt{\frac{1}{2}(3^2 + 3^2)} \,/\, (4/2) = 5/2$, and $\alpha^{thresh} = 2/5$. For CUTOFF$_{thresh} < 2/5$ the exchange would remain open. On the other hand, if the buyer's value for +AB was between [18, 24] and the seller's value for -AB was between [-12, -6], the Threshold payments are $(15, -15)$ at both the upper and lower bounds, and $\alpha^{thresh} = 1$.

Component                        Purpose                                                                     Lines
Agent                            Captures strategic behavior and information revelation decisions              762
Model Support                    Provides XML support to load goods and valuations into world                  200
World                            Keeps track of all agent, good, and valuation details                         998
Exchange Driver & Communication  Controls exchange, and coordinates remote agent behavior                      585
Bidding Language                 Implements the tree-based bidding language                                   1119
Activity Rule Engine             Implements the revealed preference rule with range support                    203
Closing Rule Engine              Checks if auction termination condition reached                               137
WD Engine                        Provides WD-related logic                                                     377
Pricing Engine                   Provides pricing-related logic                                                460
MIP Builders                     Translates logic used by engines into our general optimizer formulation       346
Pricing Builders                 Used by three pricing stages                                                  256
Winner Determination Builders    Used by WD, activity rule, closing rule, and pricing constraint generation    365
Framework                        Support code; eases modular replacement of above components                   510

Table 1: Exchange Component and Code Breakdown.

6. SYSTEMS INFRASTRUCTURE

ICE is approximately 6502 lines of Java code, broken up into the functional packages described in Table 1.10 The prototype is modular so that researchers may easily replace components for experimentation. In addition to the core exchange discussed in this paper, we have developed an agent component that allows a user to simulate the behavior and knowledge of other players in the system, better allowing a user to formulate their strategy in advance of actual play. A user specifies a valuation model in an XML interpretation of our
bidding language, which is revealed to the exchange via the agent's strategy.

Major exchange tasks are handled by engines that dictate the non-optimizer-specific logic. These engines drive the appropriate MIP/LP builders. We realized that all of our optimization formulations boil down to two classes of optimization problem. The first, used by winner determination, the activity rule, the closing rule, and constraint generation in pricing, is a MIP that finds trades that maximize value, holding prices and slacks constant. The second, used by the three pricing stages, is an LP that holds trades constant, seeking to minimize slack, profit, or prices. We take advantage of the commonality of these problems by using common LP/MIP builders that differ only by a few functional hooks to provide the correct variables for optimization. We have generalized our back-end optimization solver interface11 (we currently support CPLEX and the LGPL-licensed LPSolve), and can take advantage of the load-balancing and parallel MIP/LP solving capability that this library provides.

7. DISCUSSION

The bidding language was defined to allow for perfect symmetry between buyers and sellers and to provide expressiveness in an exchange domain, for instance for mixed bidders interested in executing trades such as swaps. This proved especially challenging. The breakthrough came when we focused on changes in value for trades rather than providing absolute values for allocations. For simplicity, we require the same tree structure for both the upper and lower valuations.

10 Code size is measured in physical source lines of code (SLOC), as generated using David A.
Wheeler's SLOCCount. The total of 6502 includes 184 for instrumentation (not shown in the table). The JOpt solver interface is another 1964 lines, and Castor automatically generates around 5200 lines of code for XML file manipulation.
11 http://econcs.eecs.harvard.edu/jopt

This allows the language itself to ensure consistency (with the upper value at least the lower value on all trades) and to enforce monotonic tightening of these bounds for all trades across rounds. It also provides for an efficient method to check the RP activity rule, because it makes it simple to reason about shared uncertainty between trades.

The decision to adopt a direct and proxied approach, in which bidders express their upper and lower values to a trusted proxy agent that interacts with the exchange, was made early in the design process. In many ways this is the clearest and most immediate way to generalize the design in Parkes et al. [24] and make it iterative. In addition, this removes much opportunity for strategic manipulation: bidders are restricted to making (incremental) statements about their valuations. Another advantage is that it makes the activity rule easy to explain: bidders can always meet the activity rule by tightening bounds such that their true value remains in the support.12 Perhaps most importantly, having explicit information on upper and lower values permits progress in early rounds, even while there is no efficient trade at pessimistic values.

Upper- and lower-bound information also provides guidance about when to terminate. Note that, taken by itself, PP = PO does not imply that the current provisional trade is efficient with respect to all values consistent with current value information. The difference in values between different trades, aggregated across all bidders, could be similar at the lower and upper bounds but quite different at intermediate values (including truth). Nevertheless, we conjecture that PP = PO will prove an excellent indicator of
efficiency in practical settings where the shape of the upper and lower valuations does convey useful information. This is worthy of experimental investigation. Moreover, the use of prices and the RP activity rule provides additional guarantees.

We adopted linear prices (prices on individual items) rather than non-linear prices (with the price on a trade not equal to the sum of the prices on the component items) early in the design process. The conciseness of this price representation is very important for computational tractability within the exchange, and also promotes simplicity and transparency for bidders. The RP activity rule was adopted later, and is a good choice because of its excellent theoretical properties when coupled with CE prices. The following can be easily established: given exact CE prices $p^{t-1}$ for provisional trade $\lambda^{t-1}$ at valuations $v^\alpha$, if the upper and lower values at the start of round $t$ already satisfy the RP rule (and without the need for any tie-breaking), then the provisional trade is efficient for all valuations consistent with the current bid trees.

12 This is in contrast to indirect price-based approaches, such as clock-proxy [1], in which bidders must be able to reason about the RP-constraints implied by bids in each round.

When linear CE prices exist, this provides for a soundness and completeness statement: if PP = PO, linear CE prices exist, and the RP rule is satisfied, then the provisional trade is efficient (soundness); if prices are exact CE prices for the provisional trade at $v^\alpha$, but the trade is inefficient with respect to some valuation profile consistent with the current bid trees, then at least one bidder must fail RP with her current bid tree and progress will be made (completeness). Future work must study convergence experimentally, and extend this theory to allow for approximate prices.

Some strategic aspects of our ICE design deserve comment, and further study. First, we do not claim that
truthfully responding to the RP rule is an ex post equilibrium.13 However, the exchange is designed to mimic the Threshold rule in its payment scheme, which is known to have useful incentive properties [16]. We must be careful, though. For instance, we do not suggest providing $\alpha^{eff}$ to bidders, because as $\alpha^{eff}$ approaches 1 it would inform bidders that bid values are becoming irrelevant to determining the trade and are merely used to determine payments (and bidders would become increasingly reluctant to increase their lower valuations). Also, no consideration has been given in this work to collusion by bidders. This is an issue that deserves some attention in future work.

8. CONCLUSIONS

In this work we designed and prototyped a scalable and highly expressive iterative combinatorial exchange. The design includes many interesting features: a new bid-tree language for exchanges, a new method to construct approximate linear prices from expressive languages, and a proxied elicitation method with optimistic and pessimistic valuations, together with a new method to evaluate a revealed-preference activity rule. The exchange is fully implemented in Java and is in a validation phase.

The next steps for our work are to allow bidders to refine the structure of the bid tree in addition to the values on the tree. We intend to study the elicitation properties of the exchange, and we have put together a test suite of exchange problem instances. In addition, we are beginning to engage in collaborations to apply the design to airline takeoff and landing slot scheduling and to resource allocation in wide-area network distributed computational systems.

Acknowledgments. We would like to dedicate this paper to all of the participants in CS 286r at Harvard University in Spring 2004. This work is supported in part by NSF grant IIS-0238147.

9. REFERENCES
[1] L. Ausubel, P. Cramton, and P.
Milgrom.\nThe clock-proxy auction: A practical combinatorial auction design.\nIn Cramton et al. [9], chapter 5.\n[2] M. Babaioff, N. Nisan, and E. Pavlov.\nMechanisms for a spatially distributed market.\nIn Proc.\n5th ACM Conf.\non Electronic Commerce, pages 9-20.\nACM Press, 2001.\n13 Given the Myerson-Satterthwaite impossibility theorem [22] and the method by which we determine the trade we should not expect this.\n[3] M. Ball, G. Donohue, and K. Hoffman.\nAuctions for the safe, efficient, and equitable allocation of airspace system resources.\nIn S. Cramton, Shoham, editor, Combinatorial Auctions.\n2004.\nForthcoming.\n[4] D. Bertsimas and J. Tsitsiklis.\nIntroduction to Linear Optimization.\nAthena Scientific, 1997.\n[5] S. Bikhchandani and J. M. Ostroy.\nThe package assignment model.\nJournal of Economic Theory, 107(2):377-406, 2002.\n[6] C. Boutilier.\nA pomdp formulation of preference elicitation problems.\nIn Proc.\n18th National Conference on Artificial Intelligence (AAAI-02), 2002.\n[7] C. Boutilier and H. Hoos.\nBidding languages for combinatorial auctions.\nIn Proc.\n17th International Joint Conference on Artificial Intelligence (IJCAI-01), 2001.\n[8] W. Conen and T. Sandholm.\nPreference elicitation in combinatorial auctions.\nIn Proc.\n3rd ACM Conf.\non Electronic Commerce (EC-01), pages 256-259.\nACM Press, New York, 2001.\n[9] P. Cramton, Y. Shoham, and R. Steinberg, editors.\nCombinatorial Auctions.\nMIT Press, 2004.\n[10] S. de Vries, J. Schummer, and R. V. Vohra.\nOn ascending Vickrey auctions for heterogeneous objects.\nTechnical report, MEDS, Kellogg School, Northwestern University, 2003.\n[11] S. de Vries and R. V. Vohra.\nCombinatorial auctions: A survey.\nInforms Journal on Computing, 15(3):284-309, 2003.\n[12] M. Dunford, K. Hoffman, D. Menon, R. Sultana, and T. Wilson.\nTesting linear pricing algorithms for use in ascending combinatorial auctions.\nTechnical report, SEOR, George Mason University, 2003.\n[13] Y. Fu, J. Chase, B. Chun, S. 
Schwab, and A. Vahdat.\nSharp: an architecture for secure resource peering.\nIn Proceedings of the nineteenth ACM symposium on Operating systems principles, pages 133-148.\nACM Press, 2003.\n[14] B. Hudson and T. Sandholm.\nEffectiveness of query types and policies for preference elicitation in combinatorial auctions.\nIn Proc.\n3rd Int.\nJoint.\nConf.\non Autonomous Agents and Multi Agent Systems, pages 386-393, 2004.\n[15] V. Krishna.\nAuction Theory.\nAcademic Press, 2002.\n[16] D. Krych.\nCalculation and analysis of Nash equilibria of Vickrey-based payment rules for combinatorial exchanges, Harvard College, April 2003.\n[17] A. M. Kwasnica, J. O. Ledyard, D. Porter, and C. DeMartini.\nA new and improved design for multi-object iterative auctions.\nManagement Science, 2004.\nTo appear.\n[18] E. Kwerel and J. Williams.\nA proposal for a rapid transition to market allocation of spectrum.\nTechnical report, FCC Office of Plans and Policy, Nov 2002.\n[19] S. M. Lahaie and D. C. Parkes.\nApplying learning algorithms to preference elicitation.\nIn Proc.\nACM Conf.\non Electronic Commerce, pages 180-188, 2004.\n[20] R. P. McAfee.\nA dominant strategy double auction.\nJ. of Economic Theory, 56:434-450, 1992.\n[21] P. Milgrom.\nPutting auction theory to work: The simultaneous ascending auction.\nJ.Pol.\nEcon., 108:245-272, 2000.\n[22] R. B. Myerson and M. A. Satterthwaite.\nEfficient mechanisms for bilateral trading.\nJournal of Economic Theory, 28:265-281, 1983.\n[23] N. Nisan.\nBidding and allocation in combinatorial auctions.\nIn Proc.\n2nd ACM Conf.\non Electronic Commerce (EC-00), pages 1-12, 2000.\n[24] D. C. Parkes, J. R. Kalagnanam, and M. Eso.\nAchieving budget-balance with Vickrey-based payment schemes in exchanges.\nIn Proc.\n17th International Joint Conference on Artificial Intelligence (IJCAI-01), pages 1161-1168, 2001.\n[25] D. C. Parkes and L. H. 
Ungar. Iterative combinatorial auctions: Theory and practice. In Proc. 17th National Conference on Artificial Intelligence (AAAI-00), pages 74-81, July 2000.
[26] S. J. Rassenti, V. L. Smith, and R. L. Bulfin. A combinatorial mechanism for airport time slot allocation. Bell Journal of Economics, 13:402-417, 1982.
[27] M. H. Rothkopf, A. Pekeč, and R. M. Harstad. Computationally manageable combinatorial auctions. Management Science, 44(8):1131-1147, 1998.
[28] T. Sandholm and C. Boutilier. Preference elicitation in combinatorial auctions. In Cramton et al. [9], chapter 10.
[29] P. R. Wurman and M. P. Wellman. AkBA: A progressive, anonymous-price combinatorial auction. In Second ACM Conference on Electronic Commerce, pages 21-29, 2000.
ICE: An Iterative Combinatorial Exchange

David C. Parkes*, Ruggiero Cavallo, Nick Elprin, Adam Juda, Sébastien Lahaie

ABSTRACT

We present the first design for an iterative combinatorial exchange (ICE). The exchange incorporates a tree-based bidding language that is concise and expressive for CEs. Bidders specify lower and upper bounds on their value for different trades. These bounds allow price discovery and useful preference elicitation in early rounds, and allow termination with an efficient trade despite partial information on bidder valuations. All computation in the exchange is carefully optimized to exploit the structure of the bid-trees and to avoid enumerating trades. A proxied interpretation of a revealed-preference activity rule ensures progress across rounds. A VCG-based payment scheme that has been shown to mitigate opportunities for bargaining and strategic behavior is used to determine final payments. The exchange is fully implemented and in a validation phase.

Keywords: Combinatorial exchange, Threshold payments, VCG, Preference Elicitation.

1. INTRODUCTION

Combinatorial exchanges combine and generalize two different mechanisms: double auctions and combinatorial auctions.
In a double auction (DA), multiple buyers and sellers trade units of an identical good [20]. In a combinatorial auction (CA), a single seller has multiple heterogeneous items up for sale [11]. Buyers may have complementarities or substitutabilities between goods, and are provided with an expressive bidding language. A common goal in both market designs is to determine the efficient allocation, which is the allocation that maximizes total value.

[* Corresponding author (parkes@eecs.harvard.edu). Remaining authors in alphabetical order. Division of Engineering and Applied Sciences, Harvard University, Cambridge MA 02138.]

A combinatorial exchange (CE) [24] is a combinatorial double auction that brings together multiple buyers and sellers to trade multiple heterogeneous goods. For example, in an exchange for wireless spectrum, a bidder may declare that she is willing to pay $1 million for a trade where she obtains licenses for New York City, Boston, and Philadelphia, and loses her license for Washington DC. Thus, unlike a DA, a CE allows all participants to express complex valuations via expressive bids. Unlike a CA, a CE allows for fragmented ownership, with multiple buyers and sellers and agents that are both buying and selling.

CEs have received recent attention both in the context of wireless spectrum allocation [18] and for airport takeoff and landing slot allocation [3]. In both of these domains there are incumbents with property rights, and it is important to facilitate a complex multi-way reallocation of resources. Another potential application domain for CEs is resource allocation in shared distributed systems, such as PlanetLab [13]. The instantiation of our general-purpose design in specific domains is a compelling next step in our research.

This paper presents the first design for an iterative combinatorial exchange (ICE). The genesis of this project was a class, CS 286r "Topics at the Interface between Economics and Computer Science," taught at Harvard University in Spring 2004.
The entire class was dedicated to the design and prototyping of an iterative CE. The ICE design problem is multi-faceted and quite hard. The main innovation in our design is an expressive yet concise tree-based bidding language (which generalizes known languages such as XOR/OR [23]), and the tight coupling of this language with efficient algorithms for price feedback to guide bidding, winner determination to determine trades, and revealed-preference activity rules to ensure progress across rounds.

The exchange is iterative: bidders express upper and lower valuations on trades by annotating their bid-tree, and then tighten these bounds in response to price feedback in each round. The Threshold payment rule, introduced by Parkes et al. [24], is used to determine final payments.

The exchange has a number of interesting theoretical properties. For instance, when there exist linear prices we establish soundness and completeness: for straightforward bidders that adjust their bounds to meet activity rules while keeping their true value within the bounds, the exchange will terminate with the efficient allocation. In addition, the efficient allocation can often be determined without bidders revealing, or even knowing, their exact value for all trades. This is essential in complex domains where the valuation problem can itself be very challenging for a participant [28]. While we cannot claim that straightforward bidding is an equilibrium of the exchange (and indeed, by the Myerson-Satterthwaite impossibility theorem [22] we should not expect to), the Threshold payment rule minimizes the ex post incentive to manipulate across all budget-balanced payment rules.

[Figure 1: ICE System Flow of Control]

The exchange is implemented in Java and is currently in validation. In describing the exchange we will first provide an overview of the main components and introduce several working examples.
Then, we introduce the basic components for a simple one-shot variation in which bidders state their exact values for trades in a single round. We then describe the full iterative exchange, with upper and lower values, price feedback, activity rules, and termination conditions. We state some theoretical properties of the exchange, and end with a discussion to motivate our main design decisions and suggest some next steps.

2. AN OVERVIEW OF THE ICE DESIGN

The design has four main components, which we will introduce in order through the rest of the paper:

- Expressive and concise tree-based bidding language. The language describes values for trades, such as "my value for selling AB and buying C is $100," or "my value for selling ABC is -$50," with negative values indicating that a bidder must receive a payment for the trade to be acceptable. The language allows bidders to express upper and lower bounds on value, which can be tightened across rounds.

- Winner determination. Winner determination (WD) is formulated as a mixed-integer program (MIP), with the structure of the bid-trees captured explicitly in the formulation. Comparing the solution at upper and lower values allows for a determination to be made about termination, with progress in intermediate rounds driven by an intermediate valuation and the lower values adopted on termination.

- Payments. Payments are computed using the Threshold payment rule [24], with the intermediate valuations adopted in early rounds and lower values adopted on termination.

- Price feedback. An approximate price is computed for each item in the exchange in each round, in terms of the intermediate valuations and the provisional trade. The prices are optimized to approximate competitive-equilibrium prices, and further optimized to best approximate the current Threshold payments, with remaining ties broken to favor prices that are balanced across different items.
In computing the prices, we adopt the methods of constraint generation to exploit the structure of the bidding language and avoid enumerating all feasible trades. The subproblem used to generate new constraints is a variation of the WD problem.

- Activity rule. A revealed-preference activity rule [1] ensures progress across rounds. In order to remain active, a bidder must tighten bounds so that there is enough information to define a trade that maximizes surplus at the current prices. Another variation on the WD problem is formulated, both to verify that the activity rule is met and also to provide feedback to a bidder to explain how to meet the rule.

An outline of the ICE system flow of control is provided in Figure 1. We will return to this example later in the paper. For now, just observe in this two-agent example that the agents state lower and upper bounds that are checked in the activity rule, and then passed to winner determination (WD), and then through three stages of pricing (accuracy, fairness, balance). On passing the closing rule (in which parameters α_eff and α_thresh are checked for convergence of the trade and payments), the exchange goes to a last-and-final round. At the end of this round, the trade and payments are finally determined, based on the lower valuations.
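The round structure just described can be sketched as a simple loop. Everything below is a schematic sketch: each component (`Bidder`, `winner_determination`, `compute_prices`, `converged`) is a stub of our own standing in for an optimization in the real exchange, not the paper's API.

```python
# Schematic of the ICE flow of control (Figure 1): activity rule ->
# winner determination -> pricing -> closing rule, round after round.
# All components are illustrative stubs.

class Bidder:
    def __init__(self, value):
        self.value = value                 # true value, never revealed directly
        self.lo, self.hi = 0.0, 2.0 * value

    def tighten(self):                     # activity rule: bounds must tighten
        self.lo = (self.lo + self.value) / 2.0
        self.hi = (self.hi + self.value) / 2.0

def winner_determination(bidders):
    # stub WD at the intermediate (alpha) valuation: midpoint of bounds
    return [i for i, b in enumerate(bidders) if (b.lo + b.hi) / 2 > 0]

def compute_prices(bidders, trade):
    # stub for the accuracy/fairness/balance pricing stages
    return {i: bidders[i].lo for i in trade}

def converged(bidders, eps=0.1):
    # stub closing rule (the alpha_eff / alpha_thresh checks in the paper)
    return all(b.hi - b.lo < eps for b in bidders)

def run_ice(bidders, max_rounds=20):
    trade, prices = [], {}
    for _ in range(max_rounds):
        for b in bidders:                          # 1. activity rule
            b.tighten()
        trade = winner_determination(bidders)      # 2. provisional trade
        prices = compute_prices(bidders, trade)    # 3. price feedback
        if converged(bidders):                     # 4. closing rule
            break
    return trade, prices       # last-and-final: based on lower valuations

trade, prices = run_ice([Bidder(5.0), Bidder(3.0)])
print(trade)   # [0, 1]
```

With the stub components, both bidders' bounds contract around their true values and the loop exits via the closing rule after a handful of rounds.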
2.1 Related Work

Many ascending-price one-sided CAs are known in the literature [10, 25, 29]. Direct elicitation approaches have also been proposed for one-sided CAs in which agents respond to explicit queries about their valuations [8, 14, 19]. A number of ascending CAs are designed to work with simple prices on items [12, 17]. The price-generation methods that we use in ICE generalize the methods in these earlier papers. Parkes et al. [24] studied sealed-bid combinatorial exchanges and introduced the Threshold payment rule. Subsequently, Krych [16] demonstrated experimentally that the Threshold rule promotes efficient allocations. We are not aware of any previous studies of iterative CEs. Dominant-strategy DAs are known for unit demand [20] and also for single-minded agents [2]. No dominant-strategy mechanisms are known for the general CE problem.

ICE is a "hybrid" auction design, in that it couples simple item prices to drive bidding in early rounds with combinatorial WD and payments, a feature it shares with the clock-proxy design of Ausubel et al. [1] for one-sided CAs. We adopt a variation on the clock-proxy auction's revealed-preference activity rule. The bidding language shares some structural elements with the LGB language of Boutilier and Hoos [7], but has very different semantics. Rothkopf et al. [27] also describe a restricted tree-based bidding language. In LGB, the semantics are those of propositional logic, with the same items in an allocation able to satisfy a tree in multiple places. Although this can make LGB especially concise in some settings, the semantics that we propose appear to provide useful "locality," so that the value of one component in a tree can be understood independently from the rest of the tree. The idea of capturing the structure of our bidding language explicitly within a mixed-integer programming formulation follows the developments in Boutilier [6].

3. PRELIMINARIES

In our model, we consider a set of goods, indexed {1, ..., m}, and a set of bidders, indexed {1, ..., n}. The initial allocation of goods is denoted x^0 = (x^0_1, ..., x^0_n), with x^0_i = (x^0_i1, ..., x^0_im) and x^0_ij ≥ 0 indicating the number of units of good j held by bidder i. A trade λ = (λ_1, ..., λ_n) denotes the change in allocation, with λ_i = (λ_i1, ..., λ_im), where λ_ij ∈ ℤ is the change in the number of units of item j to bidder i. So, the final allocation is x^1 = x^0 + λ.
Each bidder has a value v_i(λ_i) ∈ ℝ for a trade λ_i. This value can be positive or negative, and represents the change in value between the final allocation x^0_i + λ_i and the initial allocation x^0_i. Utility is quasi-linear, with u_i(λ_i, p) = v_i(λ_i) − p for trade λ_i and payment p ∈ ℝ. Price p can be negative, indicating that the bidder receives a payment for the trade. We use the term payoff interchangeably with utility.

Our goal in the ICE design is to implement the efficient trade. The efficient trade, λ*, maximizes the total increase in value across bidders:

    λ* ∈ arg max_λ Σ_i v_i(λ_i)
    s.t. x^0_ij + λ_ij ≥ 0, for all i, for all j    (1)
         Σ_i λ_ij ≤ 0, for all j                    (2)

Constraints (1) ensure that no agent sells more items than it has in its initial allocation. Constraints (2) provide free disposal, and allow feasible trades to sell more items than are purchased (but not vice versa). Later, we adopt Feas(x^0) to denote the set of feasible trades, given these constraints and given an initial allocation x^0 = (x^0_1, ..., x^0_n).

3.1 Working Examples

In this section, we provide three simple examples of instances that we will use to illustrate various components of the exchange. All three examples have only one seller, but this is purely illustrative. The "OR" indicates that the seller is willing to sell any number of goods. The "XOR" indicates that buyers 2 and 4 are willing to buy at most one of the two goods in which they are interested. The efficient trade is for bundle AB to go to buyer 1 and bundle CD to buyer 3, denoted λ* = ([-1, -1, -1, -1], [+1, +1, 0, 0], [0, 0, 0, 0], [0, 0, +1, +1], [0, 0, 0, 0]).

[Figure 2: Example Bid Trees.]
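The feasibility conditions (1) and (2) are easy to check directly. The sketch below (our own illustration; the initial allocation, in which the seller holds one unit of each of A-D, is an assumption made for the example) verifies the efficient trade λ* above:

```python
# Direct check of feasibility conditions (1) and (2) for a trade.
# x0[i][j]: units of good j initially held by bidder i.
# lam[i][j]: change in holdings of good j for bidder i.

def is_feasible(x0, lam):
    n, m = len(x0), len(x0[0])
    # (1) no agent sells more of a good than it initially holds
    if any(x0[i][j] + lam[i][j] < 0 for i in range(n) for j in range(m)):
        return False
    # (2) free disposal: each good may be over-sold but not over-bought
    return all(sum(lam[i][j] for i in range(n)) <= 0 for j in range(m))

# Efficient trade of the working example: the seller gives up A, B, C, D;
# buyer 1 receives AB and buyer 3 receives CD.  The initial allocation
# (seller holds one unit of each good) is our assumption, for illustration.
x0 = [[1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
lam_star = [[-1, -1, -1, -1], [+1, +1, 0, 0], [0, 0, 0, 0],
            [0, 0, +1, +1], [0, 0, 0, 0]]
print(is_feasible(x0, lam_star))      # True

# Final allocation x1 = x0 + lam, applied elementwise.
x1 = [[x0[i][j] + lam_star[i][j] for j in range(4)] for i in range(5)]
print(x1[1])                          # [1, 1, 0, 0] -- buyer 1 holds AB
```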
4. A ONE-SHOT EXCHANGE DESIGN

The description of ICE is broken down into two sections: one-shot (sealed-bid) and iterative. In this section we abstract away the iterative aspect and introduce a specialization of the tree-based language that supports only exact values on nodes.

4.1 Tree-Based Bidding Language

The bidding language is designed to be expressive and concise, entirely symmetric with respect to buyers and sellers, and to extend to capture bids from mixed buyers and sellers, ranging from simple swaps to highly complex trades. Bids are expressed as annotated bid trees, and define a bidder's value for all possible trades. The language defines changes in values on trades, with leaves annotated with traded items and nodes annotated with changes in values (either positive or negative). The main feature is that it has a general "interval-choose" logical operator on internal nodes, and that it defines careful semantics for propagating values within the tree. We illustrate the language on each of Examples 1-3 in Figure 2.

The language has a tree structure, with trades on items defined on leaves and values annotated on nodes and leaves. Nodes have zero value where no value is indicated. Internal nodes are also labeled with interval-choose (IC) ranges. Given a trade, the semantics of the language define which nodes in the tree can be satisfied, or "switched on." First, if a child is on then its parent must be on. Second, if a parent node is on, then the number of children that are on must be within the IC range on the parent node. Finally, leaves in which the bidder is buying items can only be on if the items are provided in the trade. For instance, in Example 2 we can consider the efficient trade, and observe that in this trade all nodes in the trees of buyers 1 and 3 (and also the seller), but none of the nodes in the trees of buyers 2 and 4, can be on. On the other hand, in the trade in which A goes to buyer 2 and D to buyer 4, the root and appropriate leaf nodes can be on for buyers 2 and 4, but no nodes can be on for buyers 1 and 3.

Given a trade there are often a number of ways to choose the set of satisfied nodes. The semantics of the language require that the nodes that maximize the summed value across satisfied nodes be activated. Consider bid tree T_i from bidder i. This defines nodes β ∈ T_i, of which some are leaves, Leaf(i) ⊆ T_i.
Let Child(β) ⊆ T_i denote the children of a node β (that is not itself a leaf). All nodes except leaves are labeled with the interval-choose operator [IC^x_i(β), IC^y_i(β)]. Every node is also labeled with a value, v_iβ. Each leaf β is labeled with a trade, q_iβ ∈ ℤ^m (i.e., leaves can define a bundled trade on more than one type of item). Given a trade λ_i to bidder i, the interval-choose operators and trades on leaves define which nodes can be satisfied. There will often be a choice. Ties are broken to maximize value. Let sat_iβ ∈ {0, 1} denote whether node β is satisfied. Solution sat_i is valid given tree T_i and trade λ_i, written sat_i ∈ valid(T_i, λ_i), if and only if:

    Σ_{β ∈ Leaf(i)} q_iβj · sat_iβ ≤ λ_ij, for all goods j    (4)

    IC^x_i(β) · sat_iβ ≤ Σ_{β' ∈ Child(β)} sat_iβ' ≤ IC^y_i(β) · sat_iβ, for all internal nodes β    (5)

In words, a set of leaves can only be considered satisfied given trade λ_i if the total increase in quantity summed across all such leaves is covered by the trade, for all goods (Eq. 4). This works for sellers as well as buyers: for sellers a trade is negative, and this requires that the total number of items indicated sold in the tree is at least the total number sold as defined in the trade. We also need "upwards propagation": any time a node other than the root is satisfied then its parent must be satisfied (by Σ_{β' ∈ Child(β)} sat_iβ' ≤ IC^y_i(β) · sat_iβ in Eq. 5). Finally, we need "downwards propagation": any time an internal node is satisfied then the appropriate number of children must also be satisfied (Eq. 5). The total value of trade λ_i, given bid-tree T_i, is defined as:

    v_i(T_i, λ_i) = max_{sat_i ∈ valid(T_i, λ_i)} Σ_{β ∈ T_i} v_iβ · sat_iβ    (6)

The tree-based language generalizes existing languages. For instance: IC(2, 2) on a node with 2 children is equivalent to an AND operator; IC(1, 3) on a node with 3 children is equivalent to an OR operator; and IC(1, 1) on a node with 2 children is equivalent to an XOR operator. Similarly, the XOR/OR bidding languages can be directly expressed as a bid tree in our language.²
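A brute-force sketch of these semantics, for intuition only: enumerate every 0/1 labeling of the nodes, keep the labelings satisfying the coverage and interval-choose conditions (Eqs. 4-5), and take the value-maximizing one (Eq. 6). The dict-based tree encoding is our own; the exchange itself encodes the same constraints inside a MIP rather than enumerating.

```python
from itertools import product

# Brute-force evaluation of v_i(T_i, lam_i) for a bid tree.
# Internal nodes: {"value": v, "ic": (x, y), "children": [...]}
# Leaves:         {"value": v, "trade": q}  (q is a per-good change vector)

def flatten(node, nodes):
    idx = len(nodes)
    entry = {"value": node.get("value", 0)}
    nodes.append(entry)
    if "children" in node:                    # internal node with IC range
        entry["ic"] = node["ic"]
        entry["kids"] = [flatten(c, nodes) for c in node["children"]]
    else:                                     # leaf with a trade vector
        entry["trade"] = node["trade"]
    return idx

def tree_value(tree, lam):
    nodes = []
    flatten(tree, nodes)
    best = float("-inf")                      # -inf if no valid labeling
    for sat in product((0, 1), repeat=len(nodes)):
        # Eq. (5): IC ranges; the upper bound also forces child-on => parent-on
        if any("kids" in nd and not
               (nd["ic"][0] * s <= sum(sat[k] for k in nd["kids"])
                <= nd["ic"][1] * s)
               for s, nd in zip(sat, nodes)):
            continue
        # Eq. (4): satisfied leaf trades must be covered by the trade lam
        total = [0] * len(lam)
        for s, nd in zip(sat, nodes):
            if s and "trade" in nd:
                total = [t + q for t, q in zip(total, nd["trade"])]
        if any(t > l for t, l in zip(total, lam)):
            continue
        best = max(best, sum(s * nd["value"] for s, nd in zip(sat, nodes)))
    return best

# XOR of two single-item buys: IC(1,1) over {A at value 8, B at value 6}.
xor_bid = {"ic": (1, 1), "children": [
    {"value": 8, "trade": [1, 0]},        # buy one unit of good A
    {"value": 6, "trade": [0, 1]}]}       # buy one unit of good B
print(tree_value(xor_bid, [1, 0]))        # 8: only the A-leaf can be on
print(tree_value(xor_bid, [1, 1]))        # 8: XOR permits at most one leaf
```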
4.2 Winner Determination

This section defines the winner determination problem, which is formulated as a MIP and solved in our implementation with a commercial solver.³ The solver uses branch-and-bound search with dynamic cut generation and branching heuristics to solve large MIPs in economically feasible run times.

In defining the MIP representation we are careful to avoid an XOR-based enumeration of all bundles. A variation on the WD problem is reused many times within the exchange, e.g., for column generation in pricing and for checking revealed preference. Given bid trees T = (T_1, ..., T_n) and initial allocation x^0, the mixed-integer formulation for WD is:

    max_{λ, sat} Σ_i Σ_{β ∈ T_i} v_iβ · sat_iβ
    s.t. sat_i ∈ valid(T_i, λ_i), for all i
         λ ∈ Feas(x^0)

Some goods may go unassigned because free disposal is allowed within the clearing rules of winner determination. These items can be allocated back to agents that sold the items, i.e., for which λ_ij < 0.

²The OR* language is the OR language with dummy items to provide additional structure. OR* is known to be expressive and concise. However, it is not known whether OR* dominates XOR/OR in terms of conciseness [23].
³CPLEX, www.ilog.com

4.3 Computing Threshold Payments

The Threshold payment rule is based on the payments in the Vickrey-Clarke-Groves (VCG) mechanism [15], which itself is truthful and efficient but does not satisfy budget balance. Budget balance requires that the total payments to the exchange equal the total payments made by the exchange. In VCG, the payment made by agent i is

    p_vcg,i = v_i(λ*_i) − (V* − V_−i),

where λ* is the efficient trade, V* is the reported value of this trade, and V_−i is the reported value of the efficient trade that would be implemented without bidder i. We call Δ_vcg,i = V* − V_−i the VCG discount. For instance, in Example 1, p_vcg,seller = −10 − (+10 − 0) = −20 and p_vcg,buyer = +20 − (+10 − 0) = +10, and the exchange would run at a budget deficit of −20 + 10 = −10. The Threshold payment rule [24] determines budget-balanced payments that minimize the maximal error across all agents to the VCG outcome.
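These payment rules are easy to compute once V*, the V_−i values, and the reported trade values are known. The sketch below reproduces the Example 1 numbers; the characterization of the Threshold rule via a single discount offset C is our reading of [24], so treat it as illustrative rather than definitive:

```python
def vcg_payment(v_at_trade, V_star, V_minus_i):
    """VCG payment: reported value at the trade minus the VCG discount."""
    return v_at_trade - (V_star - V_minus_i)

# Example 1 from the text: the seller values the trade at -10 and the
# buyer at +20, so V* = 10, and removing either agent leaves no trade
# (V_-i = 0).
p_seller = vcg_payment(-10, 10, 0)    # -20: the seller is paid 20
p_buyer = vcg_payment(+20, 10, 0)     # +10: the buyer pays 10
deficit = p_seller + p_buyer          # -10: VCG runs at a deficit

def threshold_discounts(vcg_discounts, surplus):
    """Budget-balanced discounts max(0, d - C), with the offset C chosen
    so the discounts just exhaust the available surplus (our reading of
    the Threshold rule of [24]: it minimizes the maximal deviation from
    the VCG discounts)."""
    lo, hi = 0.0, float(max(vcg_discounts))
    for _ in range(60):               # bisection on the offset C
        C = (lo + hi) / 2
        if sum(max(0.0, d - C) for d in vcg_discounts) > surplus:
            lo = C
        else:
            hi = C
    return [max(0.0, d - hi) for d in vcg_discounts]

# Both agents have VCG discount 10 but only 10 units of surplus exist:
# each receives a discount of 5, so the buyer pays 15 and the seller
# receives 15 -- budget balanced, and each agent is 5 away from VCG.
print([round(d, 6) for d in threshold_discounts([10, 10], 10)])  # [5.0, 5.0]
```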
Threshold payments are designed to minimize the maximal ex post incentive to manipulate. Krych [16] confirmed that Threshold promotes allocative efficiency in restricted and approximate Bayes-Nash equilibrium.

5. THE ICE DESIGN

We are now ready to introduce the iterative combinatorial exchange (ICE) design. Several new components are introduced, relative to the design for the one-shot exchange. Rather than provide precise valuations, bidders can provide lower and upper valuations and revise this bid information across rounds. The exchange provides price-based feedback to guide bidders in this process, and terminates with an efficient (or approximately efficient) trade with respect to reported valuations.

In each round t ∈ {0, 1, ...} the current lower and upper bounds, v̲^t and v̄^t, are used to define a provisional valuation profile v (the α-valuation), together with a provisional trade λ^t and provisional prices p^t = (p^t_1, ..., p^t_m) on items. The α-valuation is a linear combination of the current upper and lower valuations, with α_eff ∈ [0, 1] chosen endogenously based on the "closeness" of the optimistic trade (at v̄) and the pessimistic trade (at v̲). Prices p^t are used to inform an activity rule, and drive progress towards an efficient trade.

5.1 Upper and Lower Valuations

The bidding language is extended to allow a bidder i to report a lower and an upper value (v̲_iβ, v̄_iβ) on each node β. These take the place of the exact value v_iβ defined in Section 4.1. Based on these labels, we can define the valuation functions v̲_i(T_i, λ_i) and v̄_i(T_i, λ_i), using the exact same semantics as in Eq. (6). We say that such a bid-tree is well-formed if v̲_iβ ≤ v̄_iβ for all nodes. The following lemma is useful:

LEMMA 1. Given a well-formed tree T_i, then v̲_i(T_i, λ_i) ≤ v̄_i(T_i, λ_i) for all trades.

PROOF. Suppose there is some λ_i for which v̲_i(T_i, λ_i) > v̄_i(T_i, λ_i). Then max_{sat ∈ valid(T_i, λ_i)} Σ_{β ∈ T_i} v̲_iβ · sat_iβ > max_{sat ∈ valid(T_i, λ_i)} Σ_{β ∈ T_i} v̄_iβ · sat_iβ. But this is a contradiction, because the solution sat' that defines v̲_i(T_i, λ_i) is still valid with upper bounds v̄_i, and v̲_iβ ≤ v̄_iβ for all nodes β in a well-formed tree.
upper bounds $\overline{v}_i$, and $\underline{v}_i \le \overline{v}_i$ for all nodes, in a well-formed tree.\n5.2 Price Feedback\nIn each round, approximate competitive-equilibrium (CE) prices, $p^t = (p^t_1, \ldots, p^t_m)$, are determined.\nGiven these provisional prices, the price on trade $\lambda_i$ for bidder $i$ is $p^t(\lambda_i) = \sum_{j \le m} p^t_j \cdot \lambda_{ij}$.\nCE prices will not always exist and we will often need to compute approximate prices [5].\nWe extend ideas due to Rassenti et al. [26], Kwasnica et al. [17] and Dunford et al. [12], and select approximate prices as follows: I: Accuracy.\nFirst, we compute prices that minimize the maximal error in the best-response constraints across all bidders.\nII: Fairness.\nSecond, we break ties to prefer prices that minimize the maximal deviation from Threshold payments across all bidders.\nIII: Balance.\nThird, we break ties to prefer prices that minimize the maximal price across all items.\nTaken together, these steps are designed to promote the informativeness of the prices in driving progress across rounds.\nIn computing prices, we explain how to compute approximate (or otherwise) prices for structured bidding languages, and without enumerating all possible trades.\nFor this, we adopt constraint generation to efficiently handle an exponential number of constraints.\nEach step is described in detail below.\nI: Accuracy.\nWe adopt a definition of price accuracy that generalizes the notions adopted in previous papers for unstructured bidding languages.\nLet $\lambda^t$ denote the current provisional trade and suppose the provisional valuation is $v$.\nTo compute accurate CE prices, we consider: $\min_{p, \delta} \delta$ (9), subject to the best-response constraints $v_i(\lambda_i) - p(\lambda_i) \le v_i(\lambda^t_i) - p(\lambda^t_i) + \delta$ for all bidders $i$ and all feasible trades $\lambda_i$ (10).\nThis linear program (LP) is designed to find prices that minimize the worst-case error across all agents.\nFrom the definition of CE prices, it follows that CE prices would have $\delta = 0$ as a solution to (9), at which point trade $\lambda^t_i$ would be in the best-response set of every agent (with $\lambda^t_i = \emptyset$, i.e.
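To make the Accuracy step concrete, here is a hedged sketch on a hypothetical one-bundle instance (buyer value 16, seller cost 12, matching the band used for Example 1 later); the LP is replaced by a grid search over the bundle price so the sketch stays self-contained:

```python
# delta(p): worst best-response violation across the two agents when the
# provisional trade is "buyer buys bundle AB from seller at price p".
# The values 16 and 12 are illustrative (they match Example 1's band
# 12 <= pA + pB <= 16); a real solver would minimize delta as an LP.
BUYER_VALUE = 16.0
SELLER_COST = 12.0

def accuracy_error(p):
    buyer_payoff = BUYER_VALUE - p          # payoff from the provisional trade
    seller_payoff = p - SELLER_COST
    # Each agent's best response is the better of trading or abstaining (0).
    buyer_err = max(buyer_payoff, 0.0) - buyer_payoff
    seller_err = max(seller_payoff, 0.0) - seller_payoff
    return max(buyer_err, seller_err)

# Every bundle price in [12, 16] is an exact CE price (delta = 0).
grid = [p / 2 for p in range(0, 41)]        # 0.0, 0.5, ..., 20.0
zero = [p for p in grid if accuracy_error(p) == 0.0]
print(min(zero), max(zero))                 # -> 12.0 16.0
```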
no trade, for all agents with no surplus for trade at the prices.)\nEXAMPLE 5.\nWe can illustrate the formulation (9) on Example 2, assuming for simplicity that $\underline{v} = \overline{v}$ (i.e. truth).\nThe efficient trade allocates AB to buyer 1 and CD to buyer 3.\nAccuracy will seek prices $p(A)$, $p(B)$, $p(C)$ and $p(D)$ to minimize the $\delta \ge 0$ required to satisfy the best-response constraints.\nAn optimal solution requires $p(A) = p(B) = 10/3$, with $\delta = 2/3$, with $p(C)$ and $p(D)$ taking values such as $p(C) = p(D) = 3/2$.\nBut, (9) has an exponential number of constraints (Eq.\n10).\nRather than solve it explicitly we use constraint generation [4] and dynamically generate a sufficient subset of constraints.\nLet $\Gamma_i$ denote a manageable subset of all possible feasible trades to bidder $i$. Then, a relaxed version of (9) (written ACC) is formulated by substituting (10) with the same constraints restricted to trades $\lambda_i \in \Gamma_i$ (11), where $\Gamma_i$ is a set of trades that are feasible for bidder $i$ given the other bids.\nFixing the prices $p^*$, we then solve $n$ subproblems (one for each bidder), written R-WD($i$), to check whether solution $(p^*, \delta^*)$ to ACC is feasible in problem (9).\nIn R-WD($i$) the objective is to determine a most preferred trade for each bidder at these prices.\nLet $\hat{\lambda}_i$ denote the solution to R-WD($i$).\nCheck condition: $v_i(\hat{\lambda}_i) - p^*(\hat{\lambda}_i) \le v_i(\lambda^t_i) - p^*(\lambda^t_i) + \delta^*$ (13), and if this condition holds for all bidders $i$, then solution $(p^*, \delta^*)$ is optimal for problem (9).\nOtherwise, trade $\hat{\lambda}_i$ is added to $\Gamma_i$ for all bidders $i$ for which this constraint is\nviolated and we re-solve the LP with the new set of constraints.4\nII: Fairness.\nSecond, we break remaining ties to prefer fair prices: choosing prices that minimize the worst-case error with respect to Threshold payoffs (i.e.
utility to bidders with Threshold payments), but without choosing prices that are less accurate.5\nEXAMPLE 6.\nFor example, accuracy in Example 1 (depicted in Figure 1) requires $12 \le p_A + p_B \le 16$ (for $v = \overline{v}$).\nAt these valuations the Threshold payoffs would be 2 to both the seller and the buyer.\nThis can be exactly achieved in pricing with $p_A + p_B = 14$.\nThe fairness tie-breaking method is formulated as an LP (written FAIR), where $\delta^*$ represents the error in the optimal solution from ACC.\nThe objective here is the same as in the Threshold payment rule (see Section 4.3): minimize the maximal error between bidder payoff (at $v$) for the provisional trade and the VCG payoff (at $v$).\nProblem FAIR is also solved through constraint generation, using R-WD($i$) to add additional violated constraints as necessary.\nIII: Balance.\nThird, we break remaining ties to prefer balanced prices: choosing prices that minimize the maximal price across all items.\nReturning again to Example 1, depicted in Figure 1, we see that accuracy and fairness require $p(A) + p(B) = 14$.\nFinally, balance sets $p(A) = p(B) = 7$.\nBalance is justified when, all else being equal, items are more likely to have similar than dissimilar values.6\nThe LP for balance (written BAL) is formulated with $\delta^*$ representing the error in the optimal solution from ACC and $\Delta^*$ representing the error in the optimal solution from FAIR.\nConstraint generation is also used to solve BAL, generating new trades for $\Gamma_i$ as necessary.\n4Problem R-WD($i$) is a specialization of the WD problem, in which the objective is to maximize the payoff of a single bidder, rather than the total value across all bidders.\nIt is solved as a MIP, by rewriting the objective in WD(T, x0) as $\max \{v_i \cdot sat_i - \sum_j p^*_j \cdot \lambda_{ij}\}$ for agent $i$.
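On Example 1's numbers the three-stage cascade collapses to closed form; a sketch (each LP is replaced by the arithmetic it reduces to on this one instance):

```python
# Accuracy -> Fairness -> Balance on Example 1 (buyer value 16, seller
# cost 12, Threshold payoff 2 per side). Each LP is replaced by its
# closed-form answer for this instance; this is a sketch, not the solver.
acc_lo, acc_hi = 12.0, 16.0     # Accuracy: any bundle price here has delta = 0
fair_total = acc_hi - 2.0       # Fairness: Threshold payoff 2 to the buyer -> 14
assert fair_total == acc_lo + 2.0   # ... and, symmetrically, 2 to the seller
p_A = p_B = fair_total / 2.0    # Balance: minimize the max item price -> 7 each
print(fair_total, p_A, p_B)     # -> 14.0 7.0 7.0
```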
Thus, the structure of the bid-tree language is exploited in generating new constraints, because this is solved as a concise MIP.\nThe other bidders are kept around in the MIP (but do not appear in the objective), and are used to define the space of feasible trades.\n5The methods of Dunford et al. [12], that use a nucleolus approach, are also closely related.\n6The use of balance was advocated by Kwasnica et al. [17].\nDunford et al. [12] prefer to smooth prices across rounds.\nComment 1: Lexicographical Refinement.\nFor all three sub-problems we also perform lexicographical refinement (with respect to bidders in ACC and FAIR, and with respect to goods in BAL).\nFor instance, in ACC we successively minimize the maximal error across all bidders.\nGiven an initial solution we first \"pin down\" the error on all bidders for whom a constraint (11) is binding.\nFor such a bidder $i$, the constraint is replaced with $v_i(\lambda_i) - p(\lambda_i) \le v_i(\lambda^t_i) - p(\lambda^t_i) + \delta^*_i$ (where $\delta^*_i$ is that bidder's current error), and the error to bidder $i$ no longer appears explicitly in the objective.\nACC is then re-solved, and makes progress by further minimizing the maximal error across all bidders yet to be pinned down.\nThis continues, pinning down any new bidders for whom one of constraints (11) is binding, until the error is lexicographically optimized for all bidders.7\nThe exact same process is repeated for FAIR and BAL, with bidders pinned down and constraints (15) replaced with $\Delta^*_i \ge \pi_{vcg,i} - (v_i(\lambda^t_i) - p(\lambda^t_i))$, $\forall i$ (where $\Delta^*_i$ is the current objective) in FAIR, and items pinned down and constraints (18) replaced with $p_j \le p^*_j$ (where $p^*_j$ represents the target for the maximal price on that item) in BAL.\nComment 2: Computation.\nAll constraints in $\Gamma_i$ are retained, and this set grows across all stages and across all rounds of the exchange.\nThus, the computational effort in constraint generation is re-used.\nIn implementation we are careful to address a number of \"$\epsilon$-issues\" that arise due to floating-point issues.\nWe prefer to err on the side of
being conservative in determining whether or not to add another constraint in performing check (13).\nThis avoids later infeasibility issues.\nIn addition, when pinning-down bidders for the purpose of lexicographical refinement we relax the associated bidder-constraints with a small $\epsilon > 0$ on the right-hand side.\n5.3 Revealed-Preference Activity Rules\nThe role of activity rules in the auction is to ensure both consistency and progress across rounds [21].\nConsistency in our exchange requires that bidders tighten bounds as the exchange progresses.\nActivity rules ensure that bidders are active during early rounds, and promote useful elicitation throughout the exchange.\nWe adopt a simple revealed-preference (RP) activity rule.\nThe idea is loosely based around the RP-rule in Ausubel et al. [1], where it is used for one-sided CAs.\nThe motivation is to require more than simply consistency: we need bidders to provide enough information for the system to be able to prove that an allocation is (approximately) efficient.\nIt is helpful to think about the bidders interacting with \"proxy agents\" that will act on their behalf in responding to provisional prices $p^{t-1}$ determined at the end of round $t - 1$.\nThe only knowledge that such a proxy has of the valuation of a bidder is through the bid-tree.\nSuppose a proxy was queried by the exchange and asked which trade the bidder was most interested in at the provisional prices.\nThe RP rule says the following: the proxy must have enough information to be able to determine this surplus-maximizing trade at current prices.\n7For example, applying this to accuracy on Example 2 we solve once and find bidders 1 and 2 are binding, for error $\delta^* = 2/3$.\nWe pin these down and then minimize the error to bidders 3 and 4.\nFinally, this gives $p(A) = p(B) = 10/3$ and $p(C) = p(D) = 5/3$, with accuracy 2/3 to bidders 1 and 2 and 1/3 to bidders 3 and 4.\nConsider the following examples: EXAMPLE 7.\nA bidder has XOR(+A, +B) and a
value of +5 on the leaf +A and a value range of [5, 10] on leaf +B. Suppose prices are currently 3 for each of A and B.\nThe RP rule is satisfied because the proxy knows that however the remaining value uncertainty on +B is resolved the bidder will always (weakly) prefer +B to +A. EXAMPLE 8.\nA bidder has XOR(+A, +B) and value bounds [5, 10] on the root node and a value of 1 on leaf +A. Suppose prices are currently 3 for each of A and B.\nThe RP rule is satisfied because the bidder will always prefer +A to +B at equal prices, whichever way the uncertain value on the root node is ultimately resolved.\nOverloading notation, let $v_i \in T_i$ denote a valuation that is consistent with lower and upper valuations in bid tree $T_i$.\nDEFINITION 4.\nBid tree $T_i$ satisfies RP at prices $p^{t-1}$ if and only if there exists some feasible trade $\lambda^L$ for which $v_i(T_i, \lambda^L) - p^{t-1}(\lambda^L) \ge v_i(T_i, \lambda_i) - p^{t-1}(\lambda_i)$, for all $v_i \in T_i$ and all feasible trades $\lambda_i$ (20).\nTo make this determination for bidder $i$ we solve a sequence of problems, each of which is a variation on the WD problem.\nFirst, we construct a candidate lower-bound trade, which is a feasible trade that solves $\max_{\lambda_i} [\underline{v}_i(T_i, \lambda_i) - p^{t-1}(\lambda_i)]$ (RP1).\nThe solution $\pi^l$ to RP1($i$) represents the maximal payoff that bidder $i$ can achieve across all feasible trades, given its pessimistic valuation.\nSecond, we break ties to find a trade with maximal value uncertainty across all possible solutions to RP1($i$) (RP2).\nWe adopt solution $\lambda^L_i$ as our candidate for the trade that may satisfy RP.\nTo understand the importance of this tie-breaking rule consider Example 7.\nThe proxy can prove +B but not +A is a best-response for all $v_i \in T_i$, and should choose +B as its candidate.\nNotice that +B is a counterexample to +A, but not the other way round.\nNow, we construct a modified valuation $\tilde{v}_i$, by setting $\tilde{v}_{i\beta} = \underline{v}_{i\beta}$ if node $\beta \in sat(\lambda^L_i)$, and $\tilde{v}_{i\beta} = \overline{v}_{i\beta}$ otherwise (24), where $sat(\lambda^L_i)$ is the set of nodes that are satisfied in the lower-bound tree for trade $\lambda^L_i$.\nGiven this modified valuation, we find $\lambda^U$ to solve $\max_{\lambda_i} [\tilde{v}_i(T_i, \lambda_i) - p^{t-1}(\lambda_i)]$ (RP3).\nLet $\pi^u$ denote the payoff from this optimal trade at modified values $\tilde{v}$.\nWe call trade $\lambda^U_i$ the
witness trade.\nWe show in Proposition 1 that the RP rule is satisfied if and only if $\pi^l \ge \pi^u$.\nConstructing the modified valuation as $\tilde{v}_i$ recognizes that there is \"shared uncertainty\" across trades that satisfy the same nodes in a bid tree.\nExample 8 helps to illustrate this.\nJust using $\overline{v}_i$ in RP3($i$), we would find $\lambda^L_i$ is \"buy A\" with payoff $\pi^l = 3$ but then find $\lambda^U_i$ is \"buy B\" with $\pi^u = 7$ and fail RP.\nWe must recognize that however the uncertainty on the root node is resolved it will affect +A and +B in exactly the same way.\nFor this reason, we set $\tilde{v}_i = \underline{v}_i = 5$ on the root node, which is exactly the same value that was adopted in determining $\pi^l$. Then, RP3($i$) gives \"buy A\" as the witness and the RP test is judged to be passed.\nPROPOSITION 1.\nBid tree $T_i$ satisfies RP at prices $p^{t-1}$ if and only if $\pi^l \ge \pi^u$, where $\tilde{v}_i$ is the modified valuation in Eq.\n(24).\nPROOF.\nFor sufficiency, notice that the difference in payoff between trade $\lambda^L_i$ and another trade $\lambda_i$ is unaffected by the way uncertainty is resolved on any node that is satisfied in both $\lambda^L_i$ and $\lambda_i$.
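Example 8 can be replayed in a few lines; a sketch (the bid tree is flattened into a payoff table, and the candidate/witness search is explicit enumeration):

```python
# Revealed-preference check for Example 8: XOR(+A, +B), root bounds
# [5, 10], an extra value of 1 on leaf +A, item prices 3 each.
ROOT_LO, ROOT_HI = 5.0, 10.0
LEAF_A = 1.0
PRICE = 3.0

def payoff(trade, root_value):
    return root_value + (LEAF_A if trade == "A" else 0.0) - PRICE

# RP1: candidate trade and payoff pi_l at pessimistic values.
cand = max(("A", "B"), key=lambda t: payoff(t, ROOT_LO))   # "A"
pi_l = payoff(cand, ROOT_LO)                               # 3.0

# RP3 with the modified valuation: the root is satisfied by both trades,
# so its shared uncertainty is pinned at the lower value 5.
pi_u = max(payoff(t, ROOT_LO) for t in ("A", "B"))         # 3.0
print(cand, pi_l >= pi_u)                                  # -> A True

# Naively scoring the rival trade at the root's *upper* value instead
# would manufacture a spurious witness ("buy B", payoff 7 > 3) and fail RP.
naive = max(payoff(t, ROOT_HI) for t in ("A", "B") if t != cand)
print(naive)                                               # -> 7.0
```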
Fixing the values in $\tilde{v}_i$ on nodes satisfied in $\lambda^L_i$ has the effect of removing this consideration when a trade $\lambda^U_i$ is selected that satisfies one of these nodes.\nOn the other hand, fixing the values on these nodes has no effect on trades considered in RP3($i$) that do not share a node with $\lambda^L_i$.\nFor the necessary direction, we first show that any trade that satisfies RP must solve RP1($i$).\nSuppose otherwise, that some $\lambda_i$ with payoff greater than $\pi^l$ satisfies RP.\nBut, valuation $\underline{v}_i \in T_i$ together with $\lambda^L_i$ presents a counterexample to RP (Eq.\n20).\nNow, suppose (for contradiction) that some $\lambda_i$ with maximal payoff $\pi^l$ but uncertainty less than $\lambda^L_i$ satisfies RP.\nProceed by case analysis.\nCase a): only one solution to RP1($i$) has uncertain value and so $\lambda_i$ has certain value.\nBut, this cannot satisfy RP because $\lambda^L_i$ with uncertain value would be a counterexample to RP (Eq.\n20).\nCase b): two or more solutions to RP1($i$) have uncertain value.\nHere, we first argue that one of these trades must satisfy a (weak) superset of all the nodes with uncertain value that are satisfied by all other trades in this set.\nThis follows from RP.\nWithout this, then for any choice of trade that solves RP1($i$), there is another trade with a disjoint set of uncertain but satisfied nodes that provides a counterexample to RP (Eq.\n20).\nNow, consider the case that some trade contains a superset of all the uncertain satisfied nodes of the other trades.\nClearly RP2($i$) will choose this trade, $\lambda^L_i$, and $\lambda_i$ must satisfy a subset of these nodes (by assumption).\nBut, we now see that $\lambda_i$ cannot satisfy RP because $\lambda^L_i$ would be a counterexample to RP.\nFailure to meet the activity rule must have some consequence.\nIn the current rules, the default action we choose is to set the upper bounds in valuations down to the maximum of the provisional price on a node8 and the lower-bound value on that node.9\nSuch a bidder can remain active\n8The provisional price on a node is defined as the minimal total price
across all feasible trades for which the subtree rooted at that node is satisfied.\n9This is entirely analogous to when a bidder in an ascending clock auction stops bidding at a price: she is not permitted to bid at a higher price again in future rounds.\nwithin the exchange, but only with valuations that are consistent with these new bounds.\n5.4 Bidder Feedback\nIn each round, our default design provides every bidder with the provisional trade and also with the current provisional prices.\nSee Section 7 for additional discussion.\nWe also provide guidance to help a bidder meet the RP rule.\nLet $sat(\lambda^{L*}_i)$ and $sat(\lambda^{U*}_i)$ denote the nodes that are satisfied in trades $\lambda^{L*}_i$ and $\lambda^{U*}_i$, as computed in RP1--RP3.\nLEMMA 2.\nWhen RP fails, a bidder must increase a lower bound on at least one node in $sat(\lambda^{L*}_i) \setminus sat(\lambda^{U*}_i)$ or decrease an upper bound on at least one node in $sat(\lambda^{U*}_i) \setminus sat(\lambda^{L*}_i)$ in order to meet the activity rule.\nPROOF.\nChanging the upper- or lower-values on nodes that are not satisfied by either trade does not change $\lambda^{L*}_i$ or $\lambda^{U*}_i$, and does not change the payoff from these trades.\nThus, the RP condition will continue to fail.\nSimilarly, changing the bounds on nodes that are satisfied in both trades has no effect on revealed preference.\nA change to a lower bound on a shared node affects both $\lambda^{L*}_i$ and $\lambda^{U*}_i$ identically because of the use of the modified valuation to determine $\lambda^{U*}_i$.\nA change to an upper bound on a shared node has no effect in determining either $\lambda^{L*}_i$ or $\lambda^{U*}_i$.\nNote that when $sat(\lambda^{U*}_i) = sat(\lambda^{L*}_i)$ then condition (26) is always trivially satisfied, and so the guidance in the lemma is always well-defined when RP fails.\nThis is an elegant feedback mechanism because it is adaptive.\nOnce a bidder makes some changes on some subset of these nodes, the bidder can query the exchange.\nThe exchange can then respond \"yes,\" or can revise the sets of nodes $sat(\lambda^{L*})$ and $sat(\lambda^{U*})$ as necessary.\n5.5 Termination Conditions\nOnce each bidder
has committed its new bids (and either met the RP rule or suffered the penalty) then round $t$ closes.\nAt this point, the task is to determine the new $\alpha$-valuation, and in turn the provisional allocation $\lambda^t$ and provisional prices $p^t$.\nA termination condition is also checked, to determine whether to move the exchange to a last-and-final round.\nTo define the $\alpha$-valuation we compute the following two quantities: Pessimistic at Pessimistic (PP) Determine an efficient trade, $\lambda^{*l}$, at pessimistic values, i.e. to solve $\max_\lambda \sum_i \underline{v}_i(\lambda_i)$, and set $PP = \sum_i \underline{v}_i(\lambda^{*l}_i)$.\nPessimistic at Optimistic (PO) Determine an efficient trade, $\lambda^{*u}$, at optimistic values, i.e. to solve $\max_\lambda \sum_i \overline{v}_i(\lambda_i)$, and set $PO = \sum_i \underline{v}_i(\lambda^{*u}_i)$.\nFirst, note that $PP \ge PO$ and $PP \ge 0$ by definition, for all bid-trees, although $PO$ can be negative (because the \"right\" trade at $\overline{v}$ is not currently a useful trade at $\underline{v}$).\nRecognizing this, define $eff(PP, PO) = PP / PO$ when $PP > 0$, and observe that $eff(PP, PO) \ge 1$ when this is defined, and that $eff(PP, PO)$ will start large and then trend towards 1 as the optimistic allocation converges towards the pessimistic allocation.\nIn each round, we define $\alpha_{eff} \in [0, 1]$ as: $\alpha_{eff} = 0$ while $PP = 0$, and $\alpha_{eff} = 1 / eff(PP, PO)$ otherwise, which is 0 while $PP$ is 0 and then trends towards 1 once $PP > 0$ in some round.\nThis is used to define $\alpha$-valuation $v^\alpha = \alpha_{eff}\,\underline{v} + (1 - \alpha_{eff})\,\overline{v}$, which is used to define the provisional allocation and provisional prices.\nThe effect is to endogenously define a schedule for moving from optimistic to pessimistic values across rounds, based on how \"close\" the trades are to one another.\nTermination Condition.\nIn moving to the last-and-final round, and finally closing, we also care about the convergence of payments, in addition to the convergence towards an efficient trade.\nFor this we introduce another parameter, $\alpha_{thresh} \in [0, 1]$, that trends from 0 to 1 as the Threshold payments at lower and upper valuations converge.\nConsider the following parameter: $thresh = 1 + \frac{\|p_{thresh}(\overline{v}) - p_{thresh}(\underline{v})\|_2 / \sqrt{N_{active}}}{PP / N_{active}}$ (30), which is defined for $PP > 0$, where $p_{thresh}(v)$ denotes the Threshold payments at valuation profile $v$, $N_{active}$ is the number
of bidders that are actively engaged in trade in the PP trade, and $\|\cdot\|_2$ is the L2-norm.\nNote that $thresh$ is defined for payments and not payoffs.\nThis is appropriate because it is the accuracy of the outcome of the exchange that matters: i.e. the trade and the payments.\nGiven this, we define $\alpha_{thresh} = 0$ when $PP$ is 0, and $\alpha_{thresh} = 1 / thresh$ otherwise (31), which is 0 while $PP$ is 0 and then trends towards 1 as progress is made.\nDEFINITION 5 (TERMINATION).\nICE transitions to a last-and-final round when one of the following holds:\n1.\n$\alpha_{eff} \ge CUTOFF_{eff}$ and $\alpha_{thresh} \ge CUTOFF_{thresh}$, 2.\nthere is no trade at the optimistic values,\nwhere $CUTOFF_{eff}, CUTOFF_{thresh} \in (0, 1]$ determine the accuracy required for termination.\nAt the end of the last-and-final round $v^\alpha = \underline{v}$ is used to define the final trade and the final Threshold payments.\nEXAMPLE 9.\nConsider again Example 1, and consider the upper and lower bounds as depicted in Figure 1.\nFirst, if the seller's bounds were $[-20, -4]$ then there is an optimistic trade but no pessimistic trade, and $PO = -4$ and $PP = 0$, and $\alpha_{eff} = 0$.\nAt the bounds depicted, both the optimistic and the pessimistic trades occur and $PO = PP = 4$ and $\alpha_{eff} = 1$.\nHowever, we can see the Threshold payments are $(17, -17)$ at $\overline{v}$ but $(14, -14)$ at $\underline{v}$.
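The termination parameters for these numbers can be computed directly; a sketch (the per-bidder, RMS-style reading of the L2-norm is our inference from the example's arithmetic, not stated explicitly in the source):

```python
# Termination parameters for Example 9's numbers: PP = PO = 4, Threshold
# payments (17, -17) at upper and (14, -14) at lower valuations. The
# per-bidder (RMS) normalization of the norm is an assumption made so the
# figures reproduce the example's 5/2 and 2/5.
import math

PP, PO = 4.0, 4.0
alpha_eff = PO / PP if PP > 0 else 0.0          # trades coincide -> 1.0

upper = (17.0, -17.0)
lower = (14.0, -14.0)
n_active = 2
gap = math.sqrt(sum((a - b) ** 2 for a, b in zip(upper, lower)) / n_active)  # 3.0
thresh = 1.0 + gap / (PP / n_active)            # 1 + 3/2 = 5/2
alpha_thresh = 1.0 / thresh if PP > 0 else 0.0
print(alpha_eff, thresh, alpha_thresh)          # -> 1.0 2.5 0.4
```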
Evaluating $\alpha_{thresh}$, we have $thresh = 1 + 3/(4/2) = 5/2$, and $\alpha_{thresh} = 2/5$.\nFor $CUTOFF_{thresh} > 2/5$ the exchange would remain open.\nOn the other hand, if the buyer's value for +AB was between [18, 24] and the seller's value for -AB was between [-12, -6], the Threshold payments are $(15, -15)$ at both upper and lower bounds, and $\alpha_{thresh} = 1$.\nTable 1: Exchange Component and Code Breakdown.\n6.\nSYSTEMS INFRASTRUCTURE\nICE is approximately 6502 lines of Java code, broken up into the functional packages described in Table 1.\nThe prototype is modular so that researchers may easily replace components for experimentation.\nIn addition to the core exchange discussed in this paper, we have developed an agent component that allows a user to simulate the behavior and knowledge of other players in the system, better allowing a user to formulate their strategy in advance of actual play.\nA user specifies a valuation model in an XML interpretation of our bidding language, which is revealed to the exchange via the agent's strategy.\nMajor exchange tasks are handled by \"engines\" that dictate the non-optimizer-specific logic.\nThese engines drive the appropriate MIP\/LP \"builders\".\nWe realized that all of our optimization formulations boil down to two classes of optimization problem.\nThe first, used by winner determination, the activity rule, the closing rule, and constraint generation in pricing, is a MIP that finds trades that maximize value, holding prices and slacks constant.\nThe second, used by the three pricing stages, is an LP that holds trades constant, seeking to minimize slack, profit, or prices.\nWe take advantage of the commonality of these problems by using common LP\/MIP builders that differ only by a few functional hooks to provide the correct variables for optimization.\nWe have generalized our back-end optimization solver interface (we currently support CPLEX and the LGPL-licensed LPSolve), and can take advantage of the load-balancing and parallel MIP\/LP
solving capability that this library provides.\n8.\nCONCLUSIONS\nIn this work we designed and prototyped a scalable and highly-expressive iterative combinatorial exchange.\nThe design includes many interesting features, including: a new bid-tree language for exchanges, a new method to construct approximate linear prices from expressive languages, and a proxied elicitation method with optimistic and pessimistic valuations with a new method to evaluate a revealed-preference activity rule.\nThe exchange is fully implemented in Java and is in a validation phase.\nThe next steps for our work are to allow bidders to refine the structure of the bid tree in addition to values on the tree.\nWe intend to study the elicitation properties of the exchange and we have put together a test suite of exchange problem instances.\nIn addition, we are beginning to engage in collaborations to apply the design to airline takeoff and landing slot scheduling and to resource allocation in wide-area network distributed computational systems.\nMachine Learning for Information Architecture in a Large Governmental Website\nMiles Efron, Jonathan Elsas, Gary Marchionini, Junliang Zhang\nSchool of Information & Library Science, CB#3360, 100 Manning Hall, University of North Carolina, Chapel Hill, NC 27599-3360\nefrom@ils.unc.edu, jelsas@email.unc.edu, march@ils.unc.edu, junliang@email.unc.edu\nABSTRACT This paper describes ongoing research into the application of machine learning techniques for improving access to governmental information in complex digital libraries.\nUnder the auspices of the GovStat Project, our goal is to identify a small number of semantically valid concepts that adequately span the intellectual domain of a collection.\nThe goal of this discovery is twofold.\nFirst we desire a practical aid for information architects.\nSecond, automatically derived document-concept relationships are a necessary precondition for real-world deployment of many dynamic interfaces.\nThe current study compares concept learning strategies based on three document representations: keywords, titles, and full-text.\nIn statistical and user-based studies, human-created keywords provide significant improvements in concept
learning over both title-only and full-text representations.\nCategories and Subject Descriptors H.3.7 [Information Storage and Retrieval]: Digital Libraries-Systems Issues, User Issues; H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval-Clustering\nGeneral Terms Design, Experimentation\n1.\nINTRODUCTION\nThe GovStat Project is a joint effort of the University of North Carolina Interaction Design Lab and the University of Maryland Human-Computer Interaction Lab.1\nCiting end-user difficulty in finding governmental information (especially statistical data) online, the project seeks to create an integrated model of user access to US government statistical information that is rooted in realistic data models and innovative user interfaces.\nTo enable such models and interfaces, we propose a data-driven approach, based on data mining and machine learning techniques.\nIn particular, our work analyzes a particular digital library-the website of the Bureau of Labor Statistics2 (BLS)-in efforts to discover a small number of linguistically meaningful concepts, or bins, that collectively summarize the semantic domain of the site.\nThe project goal is to classify the site's web content according to these inferred concepts as an initial step towards data filtering via active user interfaces (cf.
[13]).\nMany digital libraries already make use of content classification, both explicitly and implicitly; they divide their resources manually by topical relation; they organize content into hierarchically oriented file systems.\nThe goal of the present research is to develop another means of browsing the content of these collections.\n1 http:\/\/www.ils.unc.edu\/govstat\n2 http:\/\/www.bls.gov\nBy analyzing the distribution of terms across documents, our goal is to supplement the agency's pre-existing information structures.\nStatistical learning technologies are appealing in this context insofar as they stand to define a data-driven, as opposed to an agency-driven, navigational structure for a site.\nOur approach combines supervised and unsupervised learning techniques.\nA pure document clustering [12] approach to such a large, diverse collection as BLS led to poor results in early tests [6].\nBut strictly supervised techniques [5] are inappropriate, too.\nAlthough BLS designers have defined high-level subject headings for their collections, as we discuss in Section 2, this scheme is less than optimal.\nThus we hope to learn an additional set of concepts by letting the data speak for themselves.\nThe remainder of this paper describes the details of our concept discovery efforts and subsequent evaluation.\nIn Section 2 we describe the previously existing, human-created conceptual structure of the BLS website.\nThis section also describes evidence that this structure leaves room for improvement.\nNext (Sections 3-5), we turn to a description of the concepts derived via content clustering under three document representations: keyword, title only, and full-text.\nSection 6 describes a two-part evaluation of the derived conceptual structures.\nFinally, we conclude in Section 7 by outlining upcoming work on the project.\n2.\nSTRUCTURING ACCESS TO THE BLS WEBSITE\nThe Bureau of Labor Statistics is a federal government agency charged with compiling and publishing
statistics pertaining to labor and production in the US and abroad.\nGiven this broad mandate, the BLS publishes a wide array of information, intended for diverse audiences.\nThe agency's website acts as a clearinghouse for this process.\nWith over 15,000 text\/html documents (and many more documents if spreadsheets and typeset reports are included), providing access to the collection provides a steep challenge to information architects.\n2.1 The Relation Browser\nThe starting point of this work is the notion that access to information in the BLS website could be improved by the addition of a dynamic interface such as the relation browser described by Marchionini and Brunk [13].\nThe relation browser allows users to traverse complex data sets by iteratively slicing the data along several topics.\nIn Figure 1 we see a prototype instantiation of the relation browser, applied to the FedStats website.3\nThe relation browser supports information seeking by allowing users to form queries in a stepwise fashion, slicing and re-slicing the data as their interests dictate.\nIts motivation is in keeping with Shneiderman's suggestion that queries and their results should be tightly coupled [2].\n3 http:\/\/www.fedstats.gov\nFigure 1: Relation Browser Prototype\nThus in Figure 1, users might limit their search set to those documents about energy.\nWithin this subset of the collection, they might further eliminate documents published more than a year ago.\nFinally, they might request to see only documents published in PDF format.\nAs Marchionini and Brunk discuss, capturing the publication date and format of documents is trivial.\nBut successful implementations of the relation browser also rely on topical classification.\nThis presents two stumbling blocks for system designers: \u2022 Information architects must define the appropriate set of topics for their collection \u2022 Site maintainers must classify each document into its appropriate categories\nThese tasks parallel common
problems in the metadata community: defining appropriate elements and marking up documents to support metadata-aware information access.\nGiven a collection of over 15,000 documents, these hurdles are especially daunting, and automatic methods of approaching them are highly desirable.\n2.2 A Pre-Existing Structure\nPrior to our involvement with the project, designers at BLS created a shallow classificatory structure for the most important documents in their website.\nAs seen in Figure 2, the BLS home page organizes 65 top-level documents into 15 categories.\nThese include topics such as Employment and Unemployment, Productivity, and Inflation and Spending.\nFigure 2: The BLS Home Page\nWe hoped initially that these pre-defined categories could be used to train a 15-way document classifier, thus automating the process of populating the relation browser altogether.\nHowever, this approach proved unsatisfactory.\nIn personal meetings, BLS officials voiced dissatisfaction with the existing topics.\nTheir form, it was argued, owed as much to the institutional structure of BLS as it did to the inherent topology of the website's information space.\nIn other words, the topics reflected official divisions rather than semantic clusters.\nThe BLS agents suggested that re-designing this classification structure would be desirable.\nThe agents' misgivings were borne out in subsequent analysis.\nThe BLS topics comprise a shallow classificatory structure; each of the 15 top-level categories is linked to a small number of related pages.\nThus there are 7 pages associated with Inflation.\nAltogether, the link structure of this classificatory system contains 65 documents; that is, excluding navigational links, there are 65 documents linked from the BLS home page, where each hyperlink connects a document to a topic (pages can be linked to multiple topics).\nBased on this hyperlink structure, we defined M, a symmetric 65\u00d765 matrix, where $m_{ij}$ counts the number of topics in
To analyze the redundancy inherent in the pre-existing structure, we derived the principal components of M (cf. [11]). Figure 3 shows the resultant scree plot. (A scree plot shows the magnitude of the kth eigenvalue versus its rank; during principal component analysis, scree plots visualize the amount of variance captured by each component.)

Figure 3: Scree Plot of BLS Categories (eigenvalue magnitude vs. eigenvalue rank)

Because all 65 documents belong to at least one BLS topic, the rank of M is guaranteed to be less than or equal to 15 (hence, eigenvalues 16 ... 65 = 0). What is surprising about Figure 3, however, is the precipitous decline in magnitude among the first four eigenvalues. The four largest eigenvalues account for 62.2% of the total variance in the data. This fact suggests a high degree of redundancy among the topics. Topical redundancy is not in itself problematic. However, the documents in this very shallow classificatory structure are almost all gateways to more specific information. Thus the listing of the Producer Price Index under three categories could be confusing to the site's users. In light of this potential for confusion and the agency's own request for redesign, we undertook the task of topic discovery described in the following sections.

3. A HYBRID APPROACH TO TOPIC DISCOVERY
To aid in the discovery of a new set of high-level topics for the BLS website, we turned to unsupervised machine learning methods. In an effort to let the data speak for themselves, we desired a means of concept discovery that would be based not on the structure of the agency, but on the content of the material. To begin this process, we crawled the BLS website,
downloading all documents of MIME type text/html. This led to a corpus of 15,165 documents. Based on this corpus, we hoped to derive k ≈ 10 topical categories, such that each document di is assigned to one or more classes. Document clustering (cf. [16]) provided an obvious, but only partial, solution to the problem of automating this type of high-level information architecture discovery. The problems with standard clustering are threefold:
1. Mutually exclusive clusters are inappropriate for identifying the topical content of documents, since documents may be about many subjects.
2. Due to the heterogeneity of the data housed in the BLS collection (tables, lists, surveys, etc.), many documents' terms provide noisy topical information.
3. For application to the relation browser, we require a small number (k ≈ 10) of topics. Without significant data reduction, term-based clustering tends to deliver clusters at too fine a level of granularity.
In light of these problems, we take a hybrid approach to topic discovery. First, we limit the clustering process to a sample of the entire collection, described in Section 4. Working on a focused subset of the data helps to overcome problems two and three, listed above. To address the problem of mutual exclusivity, we combine unsupervised with supervised learning methods, as described in Section 5.

4. FOCUSING ON CONTENT-RICH DOCUMENTS
To derive empirically evidenced topics we initially turned to cluster analysis. Let A be the n x p data matrix with n observations in p variables. Thus aij shows the measurement for the ith observation on the jth variable. As described in [12], the goal of cluster analysis is to assign each of the n observations to one of a small number k of groups, each of which is characterized by high intra-cluster correlation and low inter-cluster correlation. Though the algorithms for accomplishing such an arrangement are legion, our analysis focuses on k-means clustering
, during which each observation oi is assigned to the cluster Ck whose centroid is closest to it in terms of Euclidean distance. Readers interested in the details of the algorithm are referred to [12] for a thorough treatment of the subject. (We have focused on k-means as opposed to other clustering algorithms for several reasons. Chief among these is the computational efficiency enjoyed by the k-means approach. Because we need only a flat clustering, there is little to be gained by the more expensive hierarchical algorithms. In future work we will turn to model-based clustering [7] as a more principled method of selecting the number of clusters and of representing clusters.) Clustering by k-means is well studied in the statistical literature, and has shown good results for text analysis (cf. [8, 16]). However, k-means clustering requires that the researcher specify k, the number of clusters to define. When applying k-means to our 15,000-document collection, indicators such as the gap statistic [17] and an analysis of the mean-squared distance across values of k suggested that k ≈ 80 was optimal. This parameterization led to semantically intelligible clusters. However, 80 clusters are far too many for application to an interface such as the relation browser. Moreover, the granularity of these clusters was unsuitably fine. For instance, the 80-cluster solution derived a cluster whose most highly associated words (in terms of log-odds ratio [1]) were drug, pharmacy, and chemist. These words are certainly related, but they are related at a level of specificity far below what we sought. To remedy the high dimensionality of the data, we resolved to limit the algorithm to a subset of the collection. In consultation with employees of the BLS, we continued our analysis on documents that form a series titled From the Editor's Desk (http://www.bls.gov/opub/ted). These are brief articles, written by BLS employees. BLS agents suggested that we focus on the Editor's Desk because it is intended to span the intellectual domain of the agency.
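The assignment-and-update loop of k-means can be sketched as follows (a toy illustration with numpy, not the authors' implementation; the fixed initialization indices are an assumption made for reproducibility, whereas practical implementations use randomized restarts):

```python
import numpy as np

def kmeans(X, k, init_idx, n_iter=20):
    """Minimal k-means: assign each point to the nearest centroid, then recompute centroids."""
    centroids = X[np.asarray(init_idx)].astype(float)
    for _ in range(n_iter):
        # Euclidean distance from every observation to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated toy clusters; in practice X holds document term vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (5, 2)), rng.normal(10, 0.5, (5, 2))])
labels, centroids = kmeans(X, k=2, init_idx=[0, 9])
```

Choosing k is the step the paper resolves with the gap statistic and mean-squared-distance analysis; the loop itself is agnostic to that choice.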
The column is published daily, and each entry describes an important current issue in the BLS domain. The Editor's Desk column has been written daily (five times per week) since 1998. As such, we operated on a set of N = 1279 documents. Limiting attention to these 1279 documents not only reduced the dimensionality of the problem; it also allowed the clustering process to learn on a relatively clean data set. While the entire BLS collection contains a great deal of non-prose text (i.e. tables, lists, etc.), the Editor's Desk documents are all written in clear, journalistic prose. Each document is highly topical, further aiding the discovery of term-topic relations. Finally, the Editor's Desk column provided an ideal learning environment because it is well supplied with topical metadata. Each of the 1279 documents contains a list of one or more keywords. Additionally, a subset of the documents (1112) contained a subject heading. This metadata informed our learning and evaluation, as described in Section 6.1.

5. COMBINING SUPERVISED AND UNSUPERVISED LEARNING FOR TOPIC DISCOVERY
To derive suitably general topics for the application of a dynamic interface to the BLS collection, we combined document clustering with text classification techniques. Specifically, using k-means, we clustered each of the 1279 documents into one of k clusters, with the number of clusters chosen by analyzing the within-cluster mean squared distance at different values of k (see Section 6.1). Constructing mutually exclusive clusters violates our assumption that documents may belong to multiple classes. However, these clusters mark only the first step in a two-phase process of topic identification. At the end of the process, document-cluster affinity is measured by a real-valued number. Once the Editor's Desk documents were assigned to clusters, we constructed a k-way classifier that estimates the strength of evidence
that a new document di is a member of class Ck. We tested three statistical classification techniques: probabilistic Rocchio (prind), naive Bayes, and support vector machines (SVMs). All were implemented using McCallum's BOW text classification library [14]. Prind is a probabilistic version of the Rocchio classification algorithm [9]; interested readers are referred to Joachims' article for further details of the classification method. Like prind, naive Bayes attempts to classify documents into the most probable class. It is described in detail in [15]. Finally, support vector machines were thoroughly explicated by Vapnik [18], and applied specifically to text in [10]. They define a decision boundary by finding the maximally separating hyperplane in a high-dimensional vector space in which document classes become linearly separable. Having clustered the documents and trained a suitable classifier, the remaining 14,000 documents in the collection are labeled by means of automatic classification. That is, for each document di we derive a k-dimensional vector, quantifying the association between di and each class C1, ..., Ck.
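One way such per-class scores could be produced is with a multinomial naive Bayes model; the sketch below is illustrative (a hand-rolled toy, not the BOW library the authors used, with hypothetical tokenized documents):

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """docs: list of token lists; labels: cluster id per doc.
    Returns classes, vocabulary, log priors, and per-class log likelihoods."""
    classes = sorted(set(labels))
    vocab = sorted({t for d in docs for t in d})
    prior = {c: math.log(labels.count(c) / len(labels)) for c in classes}
    counts = {c: Counter() for c in classes}
    for d, c in zip(docs, labels):
        counts[c].update(d)
    loglik = {}
    for c in classes:
        total = sum(counts[c].values()) + len(vocab)  # Laplace smoothing
        loglik[c] = {t: math.log((counts[c][t] + 1) / total) for t in vocab}
    return classes, vocab, prior, loglik

def score(doc, classes, vocab, prior, loglik):
    """k-vector of log-probability scores for one document (out-of-vocabulary terms dropped)."""
    return [prior[c] + sum(loglik[c][t] for t in doc if t in vocab) for c in classes]

docs = [["price", "inflation"], ["jobs", "unemployment"], ["price", "index"]]
labels = [0, 1, 0]
model = train_nb(docs, labels)
scores = score(["price", "inflation", "index"], *model)
```

The resulting k-vector plays the role of pi described in Section 6.2: rather than forcing a single label, it quantifies the strength of association with every class.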
Deriving topic scores via naive Bayes for the entire 15,000-document collection required less than two hours of CPU time. The output of this process is a score for every document in the collection on each of the automatically discovered topics. These scores may then be used to populate a relation browser interface, or they may be added to a traditional information retrieval system. To use these weights in the relation browser we currently assign to each document the two topics on which it scored highest. In future work we will adopt a more rigorous method of deriving document-topic weight thresholds. Also, evaluation of the utility of the learned topics for users will be undertaken.

6. EVALUATION OF CONCEPT DISCOVERY
Prior to implementing a relation browser interface and undertaking the attendant user studies, it is of course important to evaluate the quality of the inferred concepts, and the ability of the automatic classifier to assign documents to the appropriate subjects. To evaluate the success of the two-stage approach described in Section 5, we undertook two experiments. In the first experiment we compared three methods of document representation for the clustering task. The goal here was to compare the quality of document clusters derived by analysis of full-text documents, documents represented only by their titles, and documents represented by human-created keyword metadata. In the second experiment, we analyzed the ability of the statistical classifiers to discern the subject matter of documents from portions of the database in addition to the Editor's Desk.

6.1 Comparing Document Representations
Documents from the Editor's Desk column came supplied with human-generated keyword metadata. Additionally, the titles of the Editor's Desk documents tend to be germane to the topic of their respective articles. With such an array of distilled evidence of each document's subject matter, we undertook a comparison of document
representations for topic discovery by clustering. We hypothesized that keyword-based clustering would provide a useful model. But we hoped to see whether comparable performance could be attained by methods that did not require extensive human indexing, such as the title-only or full-text representations. To test this hypothesis, we defined three modes of document representation (full-text, title-only, and keyword-only) and generated three sets of topics, Tfull, Ttitle, and Tkw, respectively. Topics based on full-text documents were derived by application of k-means clustering to the 1279 Editor's Desk documents, where each document was represented by a 1908-dimensional vector. These 1908 dimensions captured the TF.IDF weights [3] of each term ti in document dj, for all terms that occurred at least three times in the data. To arrive at the appropriate number of clusters for these data, we inspected the within-cluster mean-squared distance for each value of k = 1 ... 20. As k approached 10 the reduction in error with the addition of more clusters declined notably, suggesting that k ≈ 10 would yield good divisions. To select a single integer value, we calculated which value of k led to the least variation in cluster size. This metric stemmed from a desire to suppress the common result in which one large cluster emerges from the k-means algorithm, accompanied by several accordingly small clusters. Without reason to believe that any single topic should have dramatically high prior odds of document membership, this heuristic led to kfull = 10. Clusters based on document titles were constructed similarly. However, in this case, each document was represented in the vector space spanned by the 397 terms that occur at least twice in document titles. Using the same method of minimizing the variance in cluster membership, ktitle (the number of clusters in the title-based representation) was also set to 10.
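The TF.IDF document vectors described above can be sketched as follows (a minimal tf x log(N/df) variant on toy documents; the exact weighting used in [3] may differ in normalization details):

```python
import math
from collections import Counter

def tfidf_vectors(docs, min_count=3):
    """Build TF.IDF vectors over terms occurring at least min_count times in the collection."""
    df = Counter()     # document frequency per term
    total = Counter()  # collection frequency per term
    for d in docs:
        total.update(d)
        df.update(set(d))
    terms = sorted(t for t, c in total.items() if c >= min_count)
    n = len(docs)
    vectors = []
    for d in docs:
        tf = Counter(d)
        # term frequency weighted by inverse document frequency
        vectors.append([tf[t] * math.log(n / df[t]) for t in terms])
    return terms, vectors

docs = [["price"] * 2 + ["jobs"], ["price", "jobs"], ["jobs", "wages"]]
terms, vecs = tfidf_vectors(docs, min_count=2)
```

Note that a term appearing in every document (here "jobs") receives weight zero, which is exactly the downweighting of uninformative terms that makes the representation useful for clustering.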
The dimensionality of the keyword-based clustering was very similar to that of the title-based approach. There were 299 keywords in the data, all of which were retained. The median number of keywords per document was 7, where a keyword is understood to be either a single word or a multi-word term such as consumer price index. It is worth noting that the keywords were not drawn from any controlled vocabulary; they were assigned to documents by publishers at the BLS. Using the keywords, the documents were clustered into 10 classes. To evaluate the clusters derived by each method of document representation, we used the subject headings that were included with 1112 of the Editor's Desk documents. Each of these 1112 documents was assigned one or more subject headings, which were withheld from all of the cluster applications. Like the keywords, subject headings were assigned to documents by BLS publishers. Unlike the keywords, however, subject headings were drawn from a controlled vocabulary. Our analysis began with the assumption that documents with the same subject headings should cluster together. To facilitate this analysis, we took a conservative approach; we considered multi-subject classifications to be unique. Thus if document di was assigned to a single subject, prices, while document dj was assigned to two subjects, international comparisons, prices, documents di and dj are not considered to come from the same class. Table 1 shows all Editor's Desk subject headings that were assigned to at least 10 documents.

Table 1: Top Editor's Desk Subject Headings
  Subject                              Count
  prices                                  92
  unemployment                            55
  occupational safety & health            53
  international comparisons, prices       48
  manufacturing, prices                   45
  employment                              44
  productivity                            40
  consumer expenditures                   36
  earnings & wages                        27
  employment & unemployment               27
  compensation costs                      25
  earnings & wages, metro. areas          18
  benefits, compensation costs            18
  earnings & wages, occupations           17
  employment, occupations                 14
  benefits                                14
  earnings & wages, regions               13
  work stoppages                          12
  earnings & wages, industries            11
  Total                                  609
As noted in the table, there were 19 such subject headings, which altogether covered 609 (54%) of the documents with subjects assigned. These document-subject pairings formed the basis of our analysis. Limiting analysis to subjects with N > 10 kept the resultant chi-squared tests suitably robust. The clustering derived by each document representation was tested by its ability to collocate documents with the same subjects. Thus for each of the 19 subject headings in Table 1, Si, we calculated the proportion of documents assigned to Si that each clustering co-classified. Further, we assumed that whichever cluster captured the majority of documents for a given class constituted the right answer for that class. For instance, there were 92 documents whose subject heading was prices. Taking the BLS editors' classifications as ground truth, all 92 of these documents should have ended up in the same cluster. Under the full-text representation, 52 of these documents were clustered into category 5, while 35 were in category 3, and 5 documents were in category 6. Taking the majority cluster as the putative right home for these documents, we consider the accuracy of this clustering on this subject to be 52/92 = 0.56. Repeating this process for each topic across all three representations led to the contingency table shown in Table 2.

Table 2: Contingency Table for Three Document Representations
  Representation   Right   Wrong   Accuracy
  Full-text          392     217       0.64
  Title              441     168       0.72
  Keyword            601       8       0.98

The obvious superiority of the keyword-based clustering evidenced by Table 2 was borne out by a chi-squared test on the accuracy proportions. Comparing the proportion right and
wrong achieved by keyword- and title-based clustering led to p < 0.001. Due to this result, in the remainder of this paper we focus our attention on the clusters derived by analysis of the Editor's Desk keywords. The ten keyword-based clusters are shown in Table 3, represented by the three terms most highly associated with each cluster, in terms of the log-odds ratio. Additionally, each cluster has been given a label by the researchers.

Table 3: Keyword-Based Clusters
  Label           Most Highly Associated Terms
  benefits        plans, benefits, employees
  costs           compensation, costs, benefits
  international   import, prices, petroleum
  jobs            employment, jobs, youth
  occupations     workers, earnings, operators
  prices          prices, index, inflation
  productivity    productivity, output, nonfarm
  safety          safety, health, occupational
  spending        expenditures, consumer, spending
  unemployment    unemployment, mass, jobless

Evaluating the results of clustering is notoriously difficult. In order to lend our analysis suitable rigor and utility, we made several simplifying assumptions. Most problematic is the fact that we have assumed that each document belongs in only a single category. This assumption is certainly false. However, by taking an extremely rigid view of what constitutes a subject (that is, by taking a fully qualified and often multipart subject heading as our unit of analysis) we mitigate this problem. Analogically, this is akin to considering the location of books on a library shelf. Although a given book may cover many subjects, a classification system should be able to collocate books that are extremely similar, say books about occupational safety and health. The most serious liability with this evaluation, then, is the fact that we have compressed multiple subject headings, say prices : international, into single subjects. This flattening obscures the multivalence of documents. We turn to a more realistic assessment of document-class relations in Section 6.2.

6.2 Accuracy of the Document Classifiers
Although the keyword-based clusters appear to classify the Editor's Desk documents very well, their discovery only solved half of the problem required for the successful implementation of a dynamic user interface such as the relation browser. The matter of roughly fourteen thousand unclassified documents remained to be addressed.
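The majority-cluster accuracy computation used in Section 6.1 can be sketched as follows (toy subject/cluster assignments, not the paper's data):

```python
from collections import Counter

def majority_accuracy(subjects, clusters):
    """For each subject heading, count documents in its majority cluster as
    'right' and all remaining documents as 'wrong'."""
    right = wrong = 0
    by_subject = {}
    for s, c in zip(subjects, clusters):
        by_subject.setdefault(s, []).append(c)
    for s, cs in by_subject.items():
        majority = Counter(cs).most_common(1)[0][1]
        right += majority
        wrong += len(cs) - majority
    return right, wrong, right / (right + wrong)

# Toy data: subject heading and assigned cluster for each document
subjects = ["prices"] * 5 + ["jobs"] * 3
clusters = [5, 5, 5, 3, 6, 1, 1, 2]
right, wrong, acc = majority_accuracy(subjects, clusters)
```

Aggregating the right/wrong counts across all 19 subjects and all three representations yields a contingency table of the kind shown in Table 2, on which the chi-squared comparison is performed.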
To solve this problem, we trained the statistical classifiers described above in Section 5. For each document in the collection di, these classifiers give pi, a k-vector of probabilities or distances (depending on the classification method used), where pik quantifies the strength of association between the ith document and the kth class. All classifiers were trained on the full text of each document, regardless of the representation used to discover the initial clusters. The different training sets were thus constructed simply by changing the class variable for each instance (document) to reflect its assigned cluster under a given model. To test the ability of each classifier to locate documents correctly, we first performed a 10-fold cross validation on the Editor's Desk documents. During cross-validation the data are split randomly into n subsets (in this case n = 10). The process proceeds by iteratively holding out each of the n subsets as a test collection for a model trained on the remaining n - 1 subsets. Cross validation is described in [15]. Using this methodology, we compared the performance of the three classification models described above. Table 4 gives the results from cross validation.

Table 4: Cross Validation Results for 3 Classifiers
  Method        Avg. Percent Accuracy     SE
  Prind                         59.07   1.07
  Naive Bayes                   75.57   0.40
  SVM                           75.08   0.68

Although naive Bayes is not significantly more accurate for these data than the SVM classifier, we limit the remainder of our attention to analysis of its performance. Our selection of naive Bayes is due to the fact that it appears to work comparably to the SVM approach for these data, while being much simpler, both in theory and implementation. Because we have only 1279 documents and 10 classes, the number of training documents per class is relatively small.
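The 10-fold splitting step can be sketched generically as follows (an illustrative fold generator; in practice the document order would be shuffled first, and any classifier with train/predict steps could stand in):

```python
def ten_fold_indices(n, n_folds=10):
    """Yield (train_idx, test_idx) pairs covering n documents; each document
    appears in exactly one test fold, with the remainder used for training."""
    idx = list(range(n))
    fold_size = n // n_folds
    for f in range(n_folds):
        start = f * fold_size
        end = start + fold_size if f < n_folds - 1 else n  # last fold takes the remainder
        test = idx[start:end]
        train = idx[:start] + idx[end:]
        yield train, test

folds = list(ten_fold_indices(1279))
```

Averaging per-fold accuracy over the ten held-out folds gives the accuracy and standard error figures of the kind reported in Table 4.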
In addition to the models fitted to the Editor's Desk data, then, we constructed a fourth model, supplementing the training sets of each class by querying the Google search engine (http://www.google.com) and applying naive Bayes to the augmented training set. For each class, we created a query by submitting the three terms with the highest log-odds ratio with that class. Further, each query was limited to the domain www.bls.gov. For each class we retrieved up to 400 documents from Google (the actual number varied depending on the size of the result set returned by Google). This led to a training set of 4113 documents in the augmented model, as we call it below. (A more formal treatment of the combination of labeled and unlabeled data is available in [4].) Cross validation suggested that the augmented model decreased classification accuracy (accuracy = 58.16%, with standard error = 0.32). As we discuss below, however, augmenting the training set appeared to help generalization during our second experiment. The results of our cross validation experiment are encouraging. However, the success of our classifiers on the Editor's Desk documents that informed the cross validation study may not be a good predictor of the models' performance on the remainder of the BLS website. To test the generality of the naive Bayes classifier, we solicited input from 11 human judges who were familiar with the BLS website. The sample was chosen by convenience, and consisted of faculty and graduate students who work on the GovStat project. However, none of the reviewers had prior knowledge of the outcome of the classification before their participation. For the experiment, a random sample of 100 documents was drawn from the entire BLS collection.

Table 5: Human-Model Agreement on 100 Sample Docs
  Human Judge 1st Choice:
    Model             Model 1st Choice   Model 2nd Choice
    N. Bayes (aug.)                 14                 24
    N. Bayes                        24                  1
  Human Judge 2nd Choice:
    Model             Model 1st Choice   Model 2nd Choice
    N. Bayes (aug.)                 14                 21
    N. Bayes                        21                  4
On average, each reviewer classified 83 documents, placing each document into as many of the categories shown in Table 3 as he or she saw fit. Results from this experiment suggest that room for improvement remains with respect to generalizing to the whole collection from the class models fitted to the Editor's Desk documents. In Table 5 we see, for each classifier, the number of documents for which its first or second most probable class was voted best or second best by the 11 human judges. In the context of this experiment, we consider a first- or second-place classification by the machine to be accurate, because the relation browser interface operates on a multiway classification, where each document is classified into multiple categories. Thus a document with the correct class as its second choice would still be easily available to a user. Likewise, a correct classification on either the most popular or second most popular category among the human judges is considered correct in cases where a given document was classified into multiple classes. There were 72 multiclass documents in our sample, as seen in Figure 4. The remaining 28 documents were assigned to 1 or 0 classes. Under this rationale, the augmented naive Bayes classifier correctly grouped 73 documents, while the smaller model (not augmented by a Google search) correctly classified 50. The resultant chi-squared test gave p = 0.001, suggesting that increasing the training set improved the ability of the naive Bayes model to generalize from the Editor's Desk documents to the collection as a whole. However, the improvement afforded by the augmented model comes at some cost. In particular, the augmented model is significantly inferior to the model trained solely on Editor's Desk documents if we concern ourselves only with documents selected by the majority of human reviewers, i.e.
only first-choice classes. Limiting the right answers to the left column of Table 5 gives p = 0.02 in favor of the non-augmented model. For the purposes of applying the relation browser to complex digital library content (where documents will be classified along multiple categories), the augmented model is preferable. But this is not necessarily the case in general. It must also be said that 73% accuracy under a fairly liberal test condition leaves room for improvement in our assignment of topics to categories. We may begin to understand the shortcomings of the described techniques by consulting Figure 5, which shows the distribution of categories across documents given by humans and by the augmented naive Bayes model.

Figure 4: Number of Classes Assigned to Documents by Judges (histogram: number of human-assigned classes, 0-7, vs. frequency)

The majority of reviewers put documents into only three categories: jobs, benefits, and occupations. On the other hand, the naive Bayes classifier distributed classes more evenly across the topics. This behavior suggests areas for future improvement. Most importantly, we observed a strong correlation among the three most frequent classes among the human judges (for instance, there was 68% correlation between benefits and occupations). This suggests that improving the clustering to produce topics that were more nearly orthogonal might improve performance.

7. CONCLUSIONS AND FUTURE WORK
Many developers and maintainers of digital libraries share the basic problem pursued here. Given increasingly large, complex bodies of data, how may we improve access to collections without incurring extraordinary cost, and while also keeping systems receptive to changes in content over time? Data mining and machine
learning methods hold a great deal of promise with respect to this problem. Empirical methods of knowledge discovery can aid in the organization and retrieval of information. As we have argued in this paper, these methods may also be brought to bear on the design and implementation of advanced user interfaces. This study explored a hybrid technique for aiding information architects as they implement dynamic interfaces such as the relation browser. Our approach begins with unsupervised learning techniques, applied to a focused subset of the BLS website. The goal of this initial stage is to discover the most basic and far-reaching topics in the collection.

Figure 5: Distribution of Classes Across Documents (human vs. machine classifications over the categories jobs, benefits, unemployment, prices, safety, international, spending, occupations, costs, productivity)

Based on a statistical model of these topics, the second phase of our approach uses supervised learning (in particular, a naive Bayes classifier, trained on individual words) to assign topical relations to the remaining documents in the collection. In the study reported here, this approach has demonstrated promise. In its favor, our approach is highly scalable. It also appears to give fairly good results. Comparing three modes of document representation (full-text, title-only, and keyword) we found 98% accuracy as measured by collocation of documents with identical subject headings. While it is not surprising that editor-generated keywords should give strong evidence for such
learning, their superiority over full-text and titles was dramatic, suggesting that even a small amount of metadata can be very useful for data mining. However, we also found evidence that learning topics from a subset of the collection may lead to overfitted models. After clustering 1279 Editor's Desk documents into 10 categories, we fitted a 10-way naive Bayes classifier to categorize the remaining 14,000 documents in the collection. While we saw fairly good results (classification accuracy of 75% with respect to a small sample of human judges), this experiment forced us to reconsider the quality of the topics learned by clustering. The high correlation among human judgments in our sample suggests that the topics discovered by analysis of the Editor's Desk were not independent. While we do not desire mutually exclusive categories in our setting, we do desire independence among the topics we model. Overall, then, the techniques described here provide an encouraging start to our work on acquiring subject metadata for dynamic interfaces automatically. They also suggest that a more sophisticated modeling approach might yield better results in the future. In upcoming work we will experiment with streamlining the two-phase technique described here. Instead of clustering documents to find topics and then fitting a model to the learned clusters, our goal is to expand the unsupervised portion of our analysis beyond a narrow subset of the collection, such as the Editor's Desk. In current work we have defined algorithms to identify documents likely to help the topic discovery task. Supplied with a more comprehensive training set, we hope to experiment with model-based clustering, which combines the clustering and classification processes into a single modeling procedure. Topic discovery and document classification have long been recognized as fundamental problems in information retrieval and other forms of text mining. What is increasingly clear, however,
as digital libraries grow in scope and complexity, is the applicability of these techniques to problems at the front end of systems, such as information architecture and interface design. Finally, then, in future work we will build on the user studies undertaken by Marchionini and Brunk in efforts to evaluate the utility of automatically populated dynamic interfaces for the users of digital libraries.

8. REFERENCES
[1] A. Agresti. An Introduction to Categorical Data Analysis. Wiley, New York, 1996.
[2] C. Ahlberg, C. Williamson, and B. Shneiderman. Dynamic queries for information exploration: an implementation and evaluation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 619-626, 1992.
[3] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. ACM Press, 1999.
[4] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 92-100. ACM Press, 1998.
[5] H. Chen and S. Dumais. Hierarchical classification of web content. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 256-263, 2000.
[6] M. Efron, G. Marchionini, and J. Zhang. Implications of the recursive representation problem for automatic concept identification in on-line governmental information. In Proceedings of the ASIST Special Interest Group on Classification Research (ASIST SIG-CR), 2003.
[7] C. Fraley and A. E. Raftery. How many clusters? Which clustering method? Answers via model-based cluster analysis. The Computer Journal, 41(8):578-588, 1998.
[8] A. K. Jain, M. N. Murty, and P. J. Flynn. Data clustering: a review. ACM Computing Surveys, 31(3):264-323, September 1999.
[9] T. Joachims. A probabilistic analysis of the Rocchio algorithm with TFIDF for text categorization. In D. H.
Fisher, editor, Proceedings of ICML-97, 14th International Conference on Machine Learning, pages 143-151, Nashville, US, 1997. Morgan Kaufmann Publishers, San Francisco, US.
[10] T. Joachims. Text categorization with support vector machines: learning with many relevant features. In C. Nédellec and C. Rouveirol, editors, Proceedings of ECML-98, 10th European Conference on Machine Learning, pages 137-142, Chemnitz, DE, 1998. Springer Verlag, Heidelberg, DE.
[11] I. T. Jolliffe. Principal Component Analysis. Springer, 2nd edition, 2002.
[12] L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to Cluster Analysis. Wiley, 1990.
[13] G. Marchionini and B. Brunk. Toward a general relation browser: a GUI for information architects. Journal of Digital Information, 4(1), 2003. http://jodi.ecs.soton.ac.uk/Articles/v04/i01/Marchionini/.
[14] A. K. McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow, 1996.
[15] T. Mitchell. Machine Learning. McGraw Hill, 1997.
[16] E. Rasmussen. Clustering algorithms. In W. B. Frakes and R. Baeza-Yates, editors, Information Retrieval: Data Structures and Algorithms, pages 419-442. Prentice Hall, 1992.
[17] R. Tibshirani, G. Walther, and T. Hastie. Estimating the number of clusters in a dataset via the gap statistic, 2000. http://citeseer.nj.nec.com/tibshirani00estimating.html.
[18] V. N.
Vapnik. The Nature of Statistical Learning Theory. Springer, 2000.

Machine Learning for Information Architecture in a Large Governmental Website

ABSTRACT

This paper describes ongoing research into the application of machine learning techniques for improving access to governmental information in complex digital libraries. Under the auspices of the GovStat Project, our goal is to identify a small number of semantically valid concepts that adequately span the intellectual domain of a collection. The goal of this discovery is twofold. First, we desire a practical aid for information architects. Second, automatically derived document-concept relationships are a necessary precondition for real-world deployment of many dynamic interfaces. The current study compares concept learning strategies based on three document representations: keywords, titles, and full-text. In statistical and user-based studies, human-created keywords provide significant improvements in concept learning over both title-only and full-text representations.

1. INTRODUCTION
The GovStat Project is a joint effort of the University of North Carolina Interaction Design Lab and the University of Maryland Human-Computer Interaction Lab. Citing end-user difficulty in finding governmental information (especially statistical data) online, the project seeks to create an integrated model of user access to US government statistical information that is rooted in realistic data models and innovative user interfaces. To enable such models and interfaces, we propose a data-driven approach based on data mining and machine learning techniques. In particular, our work analyzes a particular digital library--the website of the Bureau of Labor Statistics (BLS)--in an effort to discover a small number of linguistically meaningful concepts, or "bins," that collectively summarize the semantic domain of the site. The project goal is to classify the site's web content according to these inferred concepts as an initial step towards data filtering via active user interfaces (cf. [13]).

Many digital libraries already make use of content classification, both explicitly and implicitly: they divide their resources manually by topical relation, and they organize content into hierarchically oriented file systems. The goal of the present research is to develop another means of browsing the content of these collections. By analyzing the distribution of terms across documents, we aim to supplement the agency's pre-existing information structures. Statistical learning technologies are appealing in this context insofar as they stand to define a data-driven--as opposed to an agency-driven--navigational structure for a site.

Our approach combines supervised and unsupervised learning techniques. A pure document clustering [12] approach to such a large, diverse collection as BLS led to poor results in early tests [6]. But strictly supervised techniques [5] are inappropriate, too. Although BLS designers have defined high-level subject headings for their collections,
as we discuss in Section 2, this scheme is less than optimal. Thus we hope to learn an additional set of concepts by letting the data speak for themselves.

The remainder of this paper describes the details of our concept discovery efforts and subsequent evaluation. In Section 2 we describe the previously existing, human-created conceptual structure of the BLS website; this section also presents evidence that this structure leaves room for improvement. Next (Sections 3-5), we turn to a description of the concepts derived via content clustering under three document representations: keyword, title only, and full-text. Section 6 describes a two-part evaluation of the derived conceptual structures. Finally, we conclude in Section 7 by outlining upcoming work on the project.

Figure 1: Relation Browser Prototype

2. STRUCTURING ACCESS TO THE BLS WEBSITE

The Bureau of Labor Statistics is a federal government agency charged with compiling and publishing statistics pertaining to labor and production in the US and abroad. Given this broad mandate, the BLS publishes a wide array of information intended for diverse audiences. The agency's website acts as a clearinghouse for this process. With over 15,000 text/html documents (and many more documents if spreadsheets and typeset reports are included), providing access to the collection presents a steep challenge to information architects.

2.1 The Relation Browser

The starting point of this work is the notion that access to information in the BLS website could be improved by the addition of a dynamic interface such as the relation browser described by Marchionini and Brunk [13]. The relation browser allows users to traverse complex data sets by iteratively slicing the data along several topics. In Figure 1 we see a prototype instantiation of the relation browser, applied to the FedStats website. The relation browser supports information seeking by allowing users to form queries in a stepwise fashion, slicing
and re-slicing the data as their interests dictate. Its motivation is in keeping with Shneiderman's suggestion that queries and their results should be tightly coupled [2]. Thus in Figure 1, users might limit their search set to those documents about "energy." Within this subset of the collection, they might further eliminate documents published more than a year ago. Finally, they might request to see only documents published in PDF format.

As Marchionini and Brunk discuss, capturing the publication date and format of documents is trivial. But successful implementations of the relation browser also rely on topical classification. This presents two stumbling blocks for system designers:

• Information architects must define the appropriate set of topics for their collection
• Site maintainers must classify each document into its appropriate categories

These tasks parallel common problems in the metadata community: defining appropriate elements and marking up documents to support metadata-aware information access. Given a collection of over 15,000 documents, these hurdles are especially daunting, and automatic methods of approaching them are highly desirable.

2.2 A Pre-Existing Structure

Prior to our involvement with the project, designers at BLS created a shallow classificatory structure for the most important documents in their website. As seen in Figure 2, the BLS home page organizes 65 "top-level" documents into 15 categories. These include topics such as Employment and Unemployment, Productivity, and Inflation and Spending.

Figure 2: The BLS Home Page

Figure 3: Scree Plot of BLS Categories

We hoped initially that these pre-defined categories could be used to train a 15-way document classifier, thus automating the process of populating the relation browser altogether. However, this approach proved unsatisfactory. In personal meetings, BLS officials voiced dissatisfaction with the existing topics. Their form, it was argued, owed
as much to the institutional structure of BLS as it did to the inherent topology of the website's information space. In other words, the topics reflected official divisions rather than semantic clusters. The BLS agents suggested that re-designing this classification structure would be desirable.

The agents' misgivings were borne out in subsequent analysis. The BLS topics comprise a shallow classificatory structure; each of the 15 top-level categories is linked to a small number of related pages. Thus there are 7 pages associated with Inflation. Altogether, the link structure of this classificatory system contains 65 documents; that is, excluding navigational links, there are 65 documents linked from the BLS home page, where each hyperlink connects a document to a topic (pages can be linked to multiple topics). Based on this hyperlink structure, we defined M, a symmetric 65 × 65 matrix, where mij counts the number of topics in which documents i and j are both classified on the BLS home page. To analyze the redundancy inherent in the pre-existing structure, we derived the principal components of M (cf. [11]).
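This redundancy analysis can be made concrete in a few lines of NumPy. The 6 × 3 document-topic incidence matrix below is invented for illustration (the actual structure is 65 documents by 15 topics); everything else follows the construction of M described above.

```python
import numpy as np

# Toy stand-in: rows are documents, columns are home-page topics,
# B[i, t] = 1 if document i is listed under topic t.  (Invented 6x3
# incidence; the real matrix covers 65 documents and 15 topics.)
B = np.array([
    [1, 1, 0],
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
])

# m_ij counts the topics in which documents i and j are both classified.
M = B @ B.T

# Eigenvalues of the symmetric matrix M, in descending order.  A few
# large leading eigenvalues relative to the rest indicate redundancy
# among the topics.
eigvals = np.linalg.eigvalsh(M)[::-1]
explained = eigvals / eigvals.sum()  # fraction of variance per component
```

Because M = BBᵀ, it is positive semidefinite and its rank is bounded by the number of topics, so the trailing eigenvalues vanish; a steep drop-off in `explained` is what a scree plot of these eigenvalues would show.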
Figure 3 shows the resultant scree plot. (A scree plot shows the magnitude of the kth eigenvalue versus its rank; in principal component analysis, scree plots visualize the amount of variance captured by each component.) Because all 65 documents belong to at least one BLS topic, the rank of M is guaranteed to be less than or equal to 15 (hence, eigenvalues 16 ... 65 = 0). What is surprising about Figure 3, however, is the precipitous decline in magnitude among the first four eigenvalues. The four largest eigenvalues account for 62.2% of the total variance in the data. This fact suggests a high degree of redundancy among the topics.

Topical redundancy is not in itself problematic. However, the documents in this very shallow classificatory structure are almost all gateways to more specific information. Thus the listing of the Producer Price Index under three categories could be confusing to the site's users. In light of this potential for confusion and the agency's own request for redesign, we undertook the task of topic discovery described in the following sections.

3. A HYBRID APPROACH TO TOPIC DISCOVERY

To aid in the discovery of a new set of high-level topics for the BLS website, we turned to unsupervised machine learning methods. In an effort to let the data speak for themselves, we desired a means of concept discovery that would be based not on the structure of the agency, but on the content of the material. To begin this process, we crawled the BLS website, downloading all documents of MIME type text/html. This led to a corpus of 15,165 documents. Based on this corpus, we hoped to derive k ≈ 10 topical categories, such that each document di is assigned to one or more classes. Document clustering (cf.
[16]) provided an obvious, but only partial, solution to the problem of automating this type of high-level information architecture discovery. The problems with standard clustering are threefold:

1. Mutually exclusive clusters are inappropriate for identifying the topical content of documents, since documents may be about many subjects.
2. Due to the heterogeneity of the data housed in the BLS collection (tables, lists, surveys, etc.), many documents' terms provide noisy topical information.
3. For application to the relation browser, we require a small number (k ≈ 10) of topics. Without significant data reduction, term-based clustering tends to deliver clusters at too fine a level of granularity.

In light of these problems, we take a hybrid approach to topic discovery. First, we limit the clustering process to a sample of the entire collection, described in Section 4. Working on a focused subset of the data helps to overcome problems two and three, listed above. To address the problem of mutual exclusivity, we combine unsupervised with supervised learning methods, as described in Section 5.

4. FOCUSING ON CONTENT-RICH DOCUMENTS

To derive empirically evidenced topics we initially turned to cluster analysis. Let A be the n × p data matrix with n observations in p variables. Thus aij shows the measurement for the ith observation on the jth variable. As described in [12], the goal of cluster analysis is to assign each of the n observations to one of a small number k of groups, each of which is characterized by high intra-cluster correlation and low inter-cluster correlation. Though the algorithms for accomplishing such an arrangement are legion, our analysis focuses on k-means clustering, during which each observation oi is assigned to the cluster Ck whose centroid is closest to it in terms of Euclidean distance. Readers interested in the details of the algorithm are referred to [12] for a thorough treatment of the subject.
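The assignment-and-update loop just described can be sketched in plain NumPy. This is a generic illustration of Lloyd's algorithm, not the implementation used in the study, and the initialization strategy shown (random distinct rows) is one assumption among several possible.

```python
import numpy as np

def kmeans(A, k, iters=100, seed=0):
    """Lloyd's algorithm: assign each row of A to the cluster whose
    centroid is nearest in Euclidean distance, then recompute centroids,
    repeating until the centroids stop moving."""
    rng = np.random.default_rng(seed)
    # Initialize centroids as k distinct observations.
    centroids = A[rng.choice(len(A), size=k, replace=False)]
    for _ in range(iters):
        # Distance from every observation to every centroid (n x k).
        d = np.linalg.norm(A[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)  # nearest-centroid assignment
        # Recompute each centroid as the mean of its assigned rows;
        # an empty cluster keeps its previous centroid.
        new = np.array([A[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

In the setting above, the rows of A would be document vectors, and k would be chosen by inspecting the within-cluster mean squared distance or an indicator such as the gap statistic.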
Clustering by k-means is well-studied in the statistical literature, and has shown good results for text analysis (cf. [8, 16]). However, k-means clustering requires that the researcher specify k, the number of clusters to define. When applying k-means to our 15,000-document collection, indicators such as the gap statistic [17] and an analysis of the mean squared distance across values of k suggested that k ≈ 80 was optimal. This parameterization led to semantically intelligible clusters. However, 80 clusters are far too many for application to an interface such as the relation browser. (We have focused on k-means as opposed to other clustering algorithms for several reasons. Chief among these is the computational efficiency enjoyed by the k-means approach. Because we need only a flat clustering, there is little to be gained from the more expensive hierarchical algorithms. In future work we will turn to model-based clustering [7] as a more principled method of selecting the number of clusters and of representing clusters.) Moreover, the granularity of these clusters was unsuitably fine. For instance, the 80-cluster solution derived a cluster whose most highly associated words (in terms of log-odds ratio [1]) were drug, pharmacy, and chemist. These words are certainly related, but they are related at a level of specificity far below what we sought.

To remedy the high dimensionality of the data, we resolved to limit the algorithm to a subset of the collection. In consultation with employees of the BLS, we continued our analysis on documents that form a series titled From the Editor's Desk. These are brief articles written by BLS employees. BLS agents suggested that we focus on the Editor's Desk because it is intended to span the intellectual domain of the agency. The column has been published daily (five times per week) since 1998, and each entry describes an important current issue in the BLS domain. As such, we
operated on a set of N = 1279 documents. Limiting attention to these 1279 documents not only reduced the dimensionality of the problem; it also allowed the clustering process to learn on a relatively clean data set. While the entire BLS collection contains a great deal of non-prose text (i.e., tables, lists, etc.), the Editor's Desk documents are all written in clear, journalistic prose. Each document is highly topical, further aiding the discovery of term-topic relations. Finally, the Editor's Desk column provided an ideal learning environment because it is well supplied with topical metadata. Each of the 1279 documents contains a list of one or more keywords. Additionally, a subset of the documents (1112) contained a subject heading. This metadata informed our learning and evaluation, as described in Section 6.1.

5. COMBINING SUPERVISED AND UNSUPERVISED LEARNING FOR TOPIC DISCOVERY

To derive suitably general topics for the application of a dynamic interface to the BLS collection, we combined document clustering with text classification techniques. Specifically, using k-means, we clustered each of the 1279 documents into one of k clusters, with the number of clusters chosen by analyzing the within-cluster mean squared distance at different values of k (see Section 6.1). Constructing mutually exclusive clusters violates our assumption that documents may belong to multiple classes. However, these clusters mark only the first step in a two-phase process of topic identification; at the end of the process, document-cluster affinity is measured by a real-valued number. Once the Editor's Desk documents were assigned to clusters, we constructed a k-way classifier that estimates the strength of evidence that a new document di is a member of class Ck. We tested three statistical classification techniques: probabilistic Rocchio (prind), naive Bayes, and support vector machines (SVMs). All were implemented using McCallum's BOW text classification library [14].
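To make the second phase concrete, the toy multinomial naive Bayes scorer below mimics the classify-and-score step in pure Python. The two classes and their four "documents" are invented for the example, and this sketch stands in for, rather than reproduces, the BOW-based implementation.

```python
import math
from collections import Counter

# Invented cluster-labeled training documents (stand-ins for clustered
# Editor's Desk articles); not drawn from the actual data.
labeled = {
    "employment": ["unemployment rate payroll jobs",
                   "jobs hiring unemployment claims"],
    "prices": ["consumer price index inflation",
               "producer price index inflation cost"],
}

vocab = sorted({w for docs in labeled.values() for d in docs for w in d.split()})

def train(labeled, alpha=1.0):
    """Per-class log word-probabilities with Laplace smoothing."""
    model = {}
    for cls, docs in labeled.items():
        counts = Counter(w for d in docs for w in d.split())
        total = sum(counts.values())
        model[cls] = {w: math.log((counts[w] + alpha) / (total + alpha * len(vocab)))
                      for w in vocab}
    return model

model = train(labeled)

def topic_scores(doc):
    """Score a new document against every class (equal priors assumed);
    words outside the training vocabulary are ignored."""
    return {cls: sum(logp[w] for w in doc.split() if w in logp)
            for cls, logp in model.items()}

def top_topics(doc, n=2):
    """Return the n topics on which the document scores highest."""
    scores = topic_scores(doc)
    return sorted(scores, key=scores.get, reverse=True)[:n]
```

`top_topics(doc)` mirrors the use of the two best-scoring topics per document; in the full pipeline the entire k-dimensional score vector would be retained.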
[14].\nPrind is a probabilistic version of the Rocchio classification algorithm [9].\nInterested readers are referred to Joachims' article for further details of the classification method.\nLike prind, naive Bayes attempts to classify documents into the most probable class.\nIt is described in detail in [15].\nFinally, support vector machines were thoroughly explicated by Vapnik [18], and applied specifically to text in [10].\nThey define a decision boundary by finding the maximally separating hyperplane in a high-dimensional vector space in which document classes become linearly separable.\nHaving clustered the documents and trained a suitable classifier, the remaining 14,000 documents in the collection are labeled by means of automatic classification.\nThat is, for each document di we derive a k-dimensional vector, quantifying the association between di and each class C1...Ck.\nDeriving topic scores via naive Bayes for the entire 15,000 document collection required less than two hours of CPU time.\nThe output of this process is a score for every document in the collection on each of the automatically discovered topics.\nThese scores may then be used to populate a relation browser interface, or they may be added to a traditional information retrieval system.\nTo use these weights in the relation browser we currently assign to each document the two topics on which it scored highest.\nIn future work we will adopt a more rigorous method of deriving document-topic weight thresholds.\nAlso, evaluation of the utility of the learned topics for users will be undertaken.\n6.\nEVALUATION OF CONCEPT DISCOVERY\nPrior to implementing a relation browser interface and undertaking the attendant user studies, it is of course important to evaluate the quality of the inferred concepts, and the ability of the automatic classifier to assign documents to the appropriate subjects.\nTo evaluate the success of the two-stage approach described in Section 5, we undertook two 
experiments.\nDuring the first experiment we compared three methods of document representation for the clustering task.\nThe goal here was to compare the quality of document clusters derived by analysis of full-text documents, documents represented only by their titles, and documents represented by human-created keyword metadata.\nDuring the second experiment, we analyzed the ability of the statistical classifiers to discern the subject matter of documents from portions of the database in addition to the Editor's Desk.\n6.1 Comparing Document Representations\nDocuments from The Editor's Desk column came supplied with human-generated keyword metadata.\nAdditionally, the titles of the Editor's Desk documents tend to be germane to the topic of their respective articles.\nWith such an array of distilled evidence of each document's subject matter, we undertook a comparison of document representations for topic discovery by clustering.\nWe hypothesized that keyword-based clustering would provide a useful model.\nBut we hoped to see whether comparable performance could be attained by methods that did not require extensive human indexing, such as the title-only or full-text representations.\nTo test this hypothesis, we defined three modes of document representation--full-text, title-only, and keyword-only--and generated three sets of topics, Tfull, Ttitle, and Tkw, respectively.\nTopics based on full-text documents were derived by application of k-means clustering to the 1279 Editor's Desk documents, where each document was represented by a 1908-dimensional vector.\nThese 1908 dimensions captured the TF.IDF weights [3] of each term ti in document dj, for all terms that occurred at least three times in the data.\nTo arrive at the appropriate number of clusters for these data, we inspected the within-cluster mean-squared distance for each value of k = 1...20.\nAs k approached 10 the reduction in error with the addition of more clusters declined notably, suggesting that k 
\u2248 10 would yield good divisions.\nTo select a single integer value, we calculated which value of k led to the least variation in cluster size.\nThis metric stemmed from a desire to suppress the common result where one large cluster emerges from the k-means algorithm, accompanied by several accordingly small clusters.\nWithout reason to believe that any single topic should have dramatically high prior odds of document membership, this heuristic led to kfull = 10.\nClusters based on document titles were constructed similarly.\nHowever, in this case, each document was represented in the vector space spanned by the 397 terms that occur at least twice in document titles.\nUsing the same method of minimizing the variance in cluster membership, ktitle--the number of clusters in the title-based representation--was also set to 10.\nThe dimensionality of the keyword-based clustering was very similar to that of the title-based approach.\nThere were 299 keywords in the data, all of which were retained.\nThe median number of keywords per document was 7, where a keyword is understood to be either a single word, or a multiword term such as \"consumer price index.\"\nIt is worth noting that the keywords were not drawn from any controlled vocabulary; they were assigned to documents by publishers at the BLS.\nUsing the keywords, the documents were clustered into 10 classes.\nTo evaluate the clusters derived by each method of document representation, we used the subject headings that were included with 1112 of the Editor's Desk documents.\nEach of these 1112 documents was assigned one or more subject headings, which were withheld from all of the cluster applications.\nLike the keywords, subject headings were assigned to documents by BLS publishers.\nUnlike the keywords, however, subject headings were drawn from a controlled vocabulary.\nOur analysis began with the assumption that documents with the same subject headings should cluster together.\nTo facilitate this analysis, we 
took a conservative approach; we considered multi-subject classifications to be unique.\nThus if document di was assigned to a single subject prices, while document dj was assigned to two subjects, international comparisons, prices, documents di and dj are not considered to come from the same class.\nTable 1 shows all Editor's Desk subject headings that were assigned to at least 10 documents.\nAs noted in the table,\nTable 1: Top Editor's Desk Subject Headings\nTable 2: Contingency Table for Three Document Representations\nthere were 19 such subject headings, which altogether covered 609 (54%) of the documents with subjects assigned.\nThese document-subject pairings formed the basis of our analysis.\nLimiting analysis to subjects with N > 10 kept the resultant \u03c72 tests suitably robust.\nThe clustering derived by each document representation was tested by its ability to collocate documents with the same subjects.\nThus for each of the 19 subject headings in Table 1, Si, we calculated the proportion of documents assigned to Si that each clustering co-classified.\nFurther, we assumed that whichever cluster captured the majority of documents for a given class constituted the \"right answer\" for that class.\nFor instance, there were 92 documents whose subject heading was prices.\nTaking the BLS editors' classifications as ground truth, all 92 of these documents should have ended up in the same cluster.\nUnder the full-text representation 52 of these documents were clustered into category 5, while 35 were in category 3, and 5 documents were in category 6.\nTaking the majority cluster as the putative right home for these documents, we consider the accuracy of this clustering on this subject to be 52\/92 = 0.56.\nRepeating this process for each topic across all three representations led to the contingency table shown in Table 2.\nThe obvious superiority of the keyword-based clustering evidenced by Table 2 was borne out by a \u03c72 test on the accuracy 
proportions.\nTable 3: Keyword-Based Clusters\nComparing the proportion right and wrong achieved by keyword and title-based clustering led to p < 0.001.\nDue to this result, in the remainder of this paper, we focus our attention on the clusters derived by analysis of the Editor's Desk keywords.\nThe ten keyword-based clusters are shown in Table 3, represented by the three terms most highly associated with each cluster, in terms of the log-odds ratio.\nAdditionally, each cluster has been given a label by the researchers.\nEvaluating the results of clustering is notoriously difficult.\nIn order to lend our analysis suitable rigor and utility, we made several simplifying assumptions.\nMost problematic is the fact that we have assumed that each document belongs in only a single category.\nThis assumption is certainly false.\nHowever, by taking an extremely rigid view of what constitutes a subject--that is, by taking a fully qualified and often multipart subject heading as our unit of analysis--we mitigate this problem.\nAnalogically, this is akin to considering the location of books on a library shelf.\nAlthough a given book may cover many subjects, a classification system should be able to collocate books that are extremely similar, say books about occupational safety and health.\nThe most serious liability with this evaluation, then, is the fact that we have compressed multiple subject headings, say prices: international into single subjects.\nThis flattening obscures the multivalence of documents.\nWe turn to a more realistic assessment of document-class relations in Section 6.2.\n6.2 Accuracy of the Document Classifiers\nAlthough the keyword-based clusters appear to classify the Editor's Desk documents very well, their discovery only solved half of the problem required for the successful implementation of a dynamic user interface such as the relation browser.\nThe matter of roughly fourteen thousand unclassified documents remained to be addressed.\nTo solve this 
problem, we trained the statistical classifiers described above in Section 5.\nFor each document in the collection di, these classifiers give pi, a k-vector of probabilities or distances (depending on the classification method used), where pik quantifies the strength of association between the ith document and the kth class.\nAll classifiers were trained on the full text of each document, regardless of the representation used to discover the initial clusters.\nThe different training sets were thus constructed simply by changing the class variable for each instance (document) to reflect its assigned cluster under a given model.\nTable 4: Cross Validation Results for 3 Classifiers\nTo test the ability of each classifier to locate documents correctly, we first performed a 10-fold cross validation on the Editor's Desk documents.\nDuring cross-validation the data are split randomly into n subsets (in this case n = 10).\nThe process proceeds by iteratively holding out each of the n subsets as a test collection for a model trained on the remaining n \u2212 1 subsets.\nCross validation is described in [15].\nUsing this methodology, we compared the performance of the three classification models described above.\nTable 4 gives the results from cross validation.\nAlthough naive Bayes is not significantly more accurate for these data than the SVM classifier, we limit the remainder of our attention to analysis of its performance.\nOur selection of naive Bayes is due to the fact that it appears to work comparably to the SVM approach for these data, while being much simpler, both in theory and implementation.\nBecause we have only 1279 documents and 10 classes, the number of training documents per class is relatively small.\nIn addition to models fitted to the Editor's Desk data, then, we constructed a fourth model, supplementing the training sets of each class by querying the Google search engine7 and applying naive Bayes to the augmented training set.\nFor each class, we created a 
query by submitting the three terms with the highest log-odds ratio with that class.\nFurther, each query was limited to the domain www.bls.gov.\nFor each class we retrieved up to 400 documents from Google (the actual number varied depending on the size of the result set returned by Google).\nThis led to a training set of 4113 documents in the \"augmented model,\" as we call it below8.\nCross validation suggested that the augmented model decreased classification accuracy (accuracy = 58.16%, with standard error = 0.32).\nAs we discuss below, however, augmenting the training set appeared to help generalization during our second experiment.\nThe results of our cross validation experiment are encouraging.\nHowever, the success of our classifiers on the Editor's Desk documents that informed the cross validation study may not be a good predictor of the models' performance on the remainder of the BLS website.\nTo test the generality of the naive Bayes classifier, we solicited input from 11 human judges who were familiar with the BLS website.\nThe sample was chosen by convenience, and consisted of faculty and graduate students who work on the GovStat project.\nHowever, none of the reviewers had prior knowledge of the outcome of the classification before their participation.\nFor the experiment, a random sample of 100 documents was drawn from the entire BLS collection.\nTable 5: Human-Model Agreement on 100 Sample Docs.\nOn average each reviewer classified 83 documents, placing each document into as many of the categories shown in Table 3 as he or she saw fit.\nResults from this experiment suggest that room for improvement remains with respect to generalizing to the whole collection from the class models fitted to the Editor's Desk documents.\nIn Table 5, we see, for each classifier, the number of documents for which its first or second most probable class was voted best or second best by the 11 human judges.\nIn the context of this experiment, we consider a first- or 
second-place classification by the machine to be accurate because the relation browser interface operates on a multiway classification, where each document is classified into multiple categories.\nThus a document with the \"correct\" class as its second choice would still be easily available to a user.\nLikewise, a correct classification on either the most popular or second most popular category among the human judges is considered correct in cases where a given document was classified into multiple classes.\nThere were 72 multiclass documents in our sample, as seen in Figure 4.\nThe remaining 28 documents were assigned to 1 or 0 classes.\nUnder this rationale, the augmented naive Bayes classifier correctly grouped 73 documents, while the smaller model (not augmented by a Google search) correctly classified 50.\nThe resultant \u03c72 test gave p = 0.001, suggesting that increasing the training set improved the ability of the naive Bayes model to generalize from the Editor's Desk documents to the collection as a whole.\nHowever, the improvement afforded by the augmented model comes at some cost.\nIn particular, the augmented model is significantly inferior to the model trained solely on Editor's Desk documents if we concern ourselves only with documents selected by the majority of human reviewers--i.e. 
only first-choice classes.\nLimiting the right answers to the left column of Table 5 gives p = 0.02 in favor of the non-augmented model.\nFor the purposes of applying the relation browser to complex digital library content (where documents will be classified along multiple categories), the augmented model is preferable.\nBut this is not necessarily the case in general.\nIt must also be said that 73% accuracy under a fairly liberal test condition leaves room for improvement in our assignment of topics to categories.\nWe may begin to understand the shortcomings of the described techniques by consulting Figure 5, which shows the distribution of categories across documents given by humans and by the augmented naive Bayes model.\nFigure 4: Number of Classes Assigned to Documents by Judges\nThe majority of reviewers put documents into only three categories, jobs, benefits, and occupations.\nOn the other hand, the naive Bayes classifier distributed classes more evenly across the topics.\nThis behavior suggests areas for future improvement.\nMost importantly, we observed a strong correlation among the three most frequent classes among the human judges (for instance, there was 68% correlation between benefits and occupations).\nThis suggests that improving the clustering to produce topics that were more nearly orthogonal might improve performance.\n7.\nCONCLUSIONS AND FUTURE WORK\nMany developers and maintainers of digital libraries share the basic problem pursued here.\nGiven increasingly large, complex bodies of data, how may we improve access to collections without incurring extraordinary cost, and while also keeping systems receptive to changes in content over time?\nData mining and machine learning methods hold a great deal of promise with respect to this problem.\nEmpirical methods of knowledge discovery can aid in the organization and retrieval of information.\nAs we have argued in this paper, these methods may also be brought to bear on the design and 
implementation of advanced user interfaces.\nThis study explored a hybrid technique for aiding information architects as they implement dynamic interfaces such as the relation browser.\nOur approach combines unsupervised learning techniques, applied to a focused subset of the BLS website.\nThe goal of this initial stage is to discover the most basic and far-reaching topics in the collection.\nFigure 5: Distribution of Classes Across Documents\nBased on a statistical model of these topics, the second phase of our approach uses supervised learning (in particular, a naive Bayes classifier, trained on individual words), to assign topical relations to the remaining documents in the collection.\nIn the study reported here, this approach has demonstrated promise.\nIn its favor, our approach is highly scalable.\nIt also appears to give fairly good results.\nComparing three modes of document representation--full-text, title only, and keyword--we found 98% accuracy as measured by collocation of documents with identical subject headings.\nWhile it is not surprising that editor-generated keywords should give strong evidence for such learning, their superiority over full-text and titles was dramatic, suggesting that even a small amount of metadata can be very useful for data mining.\nHowever, we also found evidence that learning topics from a subset of the collection may lead to overfitted models.\nAfter clustering 1279 Editor's Desk documents into 10 categories, we fitted a 10-way naive Bayes classifier to categorize the remaining 14,000 documents in the collection.\nWhile we saw fairly good results (classification accuracy of 75% with respect to a small sample of human judges), this experiment forced us to reconsider the quality of the topics learned by clustering.\nThe high correlation among human judgments in our sample suggests that the topics discovered by analysis of the Editor's Desk were not independent.\nWhile we do not desire mutually 
exclusive categories in our setting, we do desire independence among the topics we model.\nOverall, then, the techniques described here provide an encouraging start to our work on acquiring subject metadata for dynamic interfaces automatically.\nIt also suggests that a more sophisticated modeling approach might yield\nbetter results in the future.\nIn upcoming work we will experiment with streamlining the two-phase technique described here.\nInstead of clustering documents to find topics and then fitting a model to the learned clusters, our goal is to expand the unsupervised portion of our analysis beyond a narrow subset of the collection, such as The Editor's Desk.\nIn current work we have defined algorithms to identify documents likely to help the topic discovery task.\nSupplied with a more comprehensive training set, we hope to experiment with model-based clustering, which combines the clustering and classification processes into a single modeling procedure.\nTopic discovery and document classification have long been recognized as fundamental problems in information retrieval and other forms of text mining.\nWhat is increasingly clear, however, as digital libraries grow in scope and complexity, is the applicability of these techniques to problems at the front-end of systems such as information architecture and interface design.\nFinally, then, in future work we will build on the user studies undertaken by Marchionini and Brunk in efforts to evaluate the utility of automatically populated dynamic interfaces for the users of digital libraries.","keyphrases":["machin learn","inform architectur","machin learn techniqu","access","complex digit librari","digit librari","data-driven approach","supervis and unsupervis learn techniqu","bureau of labor statist","eigenvalu","bl collect","k-mean cluster","multiwai classif","interfac design"],"prmu":["P","P","P","P","P","P","U","M","M","U","M","U","U","M"]} {"id":"C-53","title":"Globally Synchronized Dead-Reckoning with 
Local Lag for Continuous Distributed Multiplayer Games","abstract":"Dead-Reckoning (DR) is an effective method to maintain consistency for Continuous Distributed Multiplayer Games (CDMG). Since DR can filter most unnecessary state updates and improve the scalability of a system, it is widely used in commercial CDMG. However, DR cannot maintain high consistency, and this constrains its application in highly interactive games. With the help of global synchronization, DR can achieve higher consistency, but it still cannot eliminate before inconsistency. In this paper, a method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and Globally Synchronized DR (GS-DR), is presented. Performance evaluation shows that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag.","lvl-1":"Globally Synchronized Dead-Reckoning with Local Lag for Continuous Distributed Multiplayer Games Yi Zhang1 , Ling Chen1, 2 , Gencai Chen1 1College of Computer Science, Zhejiang University, Hangzhou 310027, P.R. 
China 2School of Computer Science and IT, The University of Nottingham, Nottingham NG8 1BB, UK {m05zhangyi, lingchen, chengc}@cs.zju.edu.cn ABSTRACT Dead-Reckoning (DR) is an effective method to maintain consistency for Continuous Distributed Multiplayer Games (CDMG).\nSince DR can filter most unnecessary state updates and improve the scalability of a system, it is widely used in commercial CDMG.\nHowever, DR cannot maintain high consistency, and this constrains its application in highly interactive games.\nWith the help of global synchronization, DR can achieve higher consistency, but it still cannot eliminate before inconsistency.\nIn this paper, a method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and Globally Synchronized DR (GS-DR), is presented.\nPerformance evaluation shows that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems - distributed applications.\nGeneral Terms Algorithms, Performance, Experimentation.\n1.\nINTRODUCTION Nowadays, many distributed multiplayer games adopt replicated architectures.\nIn such games, the states of entities are changed not only by the operations of players, but also by the passing of time [1, 2].\nThese games are referred to as Continuous Distributed Multiplayer Games (CDMG).\nLike other distributed applications, CDMG also suffer from the consistency problem caused by network transmission delay.\nAlthough new network techniques (e.g. 
QoS) can reduce or at least bound the delay, they cannot completely eliminate it, as there exists the physical speed limitation of light; for instance, 100 ms is needed for light to propagate from Europe to Australia [3].\nThere are many studies about the effects of network transmission delay in different applications [4, 5, 6, 7].\nIn replication based games, network transmission delay makes the states of local and remote sites inconsistent, which can cause serious problems, such as reducing the fairness of a game and leading to paradoxical situations.\nIn order to maintain consistency for distributed systems, many different approaches have been proposed, among which local lag and Dead-Reckoning (DR) are two representative approaches.\nMauve et al [1] proposed local lag to maintain high consistency for replicated continuous applications.\nIt synchronizes the physical clocks of all sites in a system.\nAfter an operation is issued at local site, it delays the execution of the operation for a short time.\nDuring this short time period the operation is transmitted to remote sites, and all sites try to execute the operation at a same physical time.\nIn order to tackle the inconsistency caused by exceptional network transmission delay, a time warp based mechanism is proposed to repair the state.\nLocal lag can achieve significantly high consistency, but it is based on operation transmission, which forwards every operation on a shared entity to remote sites.\nSince the operation transmission mechanism requests that all operations should be transmitted in a reliable way, message filtering is difficult to deploy and the scalability of a system is limited.\nDR is based on the state transmission mechanism.\nIn addition to the high fidelity model that maintains the accurate states of its own entities, each site also has a DR model that estimates the states of all entities (including its own entities).\nAfter each update of its own entities, a site compares the accurate 
state with the estimated one.\nIf the difference exceeds a pre-defined threshold, a state update would be transmitted to all sites and all DR models would be corrected.\nThrough state estimation, DR can not only maintain consistency but also decrease the number of transmitted state updates.\nCompared with the aforementioned local lag, DR cannot maintain high consistency.\nDue to network transmission delay, when a remote site receives a state update of an entity, the state of the entity might have changed at the site sending the state update.\nIn order to make DR maintain high consistency, Aggarwal et al [8] proposed Globally Synchronized DR (GS-DR), which synchronizes the physical clocks of all sites in a system and adds time stamps to transmitted state updates.\nA detailed description of GS-DR can be found in Section 3.\nWhen a state update is available, GS-DR immediately updates the state of local site and then transmits the state update to remote sites, which causes the states of local site and remote sites to be inconsistent during the transmission procedure.\nThus with the synchronization of physical clocks, GS-DR can eliminate after inconsistency, but it cannot tackle before inconsistency [8].\nIn this paper, we propose a new method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and GS-DR.\nBy delaying the update to local site, GS-DR-LL can achieve higher consistency than GS-DR.\nThe rest of this paper is organized as follows: Section 2 gives the definition of consistency and corresponding metrics; the cause of the inconsistency of DR is analyzed in Section 3; Section 4 describes how GS-DR-LL works; performance evaluation is presented in Section 5; Section 6 concludes the paper.\n2.\nCONSISTENCY DEFINITIONS AND METRICS The consistency of replicated applications has already been well defined in the discrete domain [9, 10, 11, 12], but little related work has been done in the continuous domain.\nMauve et al [1] have given a definition of 
consistency for replicated applications in the continuous domain, but the definition is based on operation transmission and it is difficult for the definition to describe state transmission based methods (e.g. DR).\nHere, we present an alternative definition of consistency in the continuous domain, which suits state transmission based methods well.\nGiven two distinct sites i and j, which have replicated a shared entity e, at a given time t, the states of e at sites i and j are Si(t) and Sj(t).\nDEFINITION 1: the states of e at sites i and j are consistent at time t, iff: De(i, j, t) = |Si(t) - Sj(t)| = 0 (1) DEFINITION 2: the states of e at sites i and j are consistent between time t1 and t2 (t1 < t2), iff: De(i, j, t1, t2) = \u222b_t1^t2 |Si(t) - Sj(t)| dt = 0 (2) In this paper, formulas (1) and (2) are used to determine whether the states of shared entities are consistent between local and remote sites.\nDue to network transmission delay, it is difficult to maintain the states of shared entities absolutely consistent.\nCorresponding metrics are needed to measure the consistency of shared entities between local and remote sites.\nDe(i, j, t) can be used as a metric to measure the degree of consistency at a certain time point.\nIf De(i, j, t1) > De(i, j, t2), it can be stated that between sites i and j, the consistency of the states of entity e at time point t1 is lower than that at time point t2.\nIf De(i, j, t) > De(l, k, t), it can be stated that, at time point t, the consistency of the states of entity e between sites i and j is lower than that between sites l and k. 
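The two metrics above lend themselves to a direct numerical sketch. The following illustrative Python fragment (our own simplification, not code from the paper: scalar one-dimensional states sampled on a shared time grid) computes De(i, j, t) from formula (1) and approximates the integral of formula (2) by the trapezoidal rule.

```python
# Illustrative sketch of the consistency metrics; states are simplified
# to scalars sampled at shared time points ts.

def divergence_at(si_t, sj_t):
    """Formula (1): De(i, j, t) = |Si(t) - Sj(t)| at one time point."""
    return abs(si_t - sj_t)

def divergence_over(ts, si, sj):
    """Formula (2): trapezoidal approximation of the integral of
    |Si(t) - Sj(t)| dt over [ts[0], ts[-1]]."""
    total = 0.0
    for k in range(len(ts) - 1):
        d0 = divergence_at(si[k], sj[k])
        d1 = divergence_at(si[k + 1], sj[k + 1])
        total += 0.5 * (d0 + d1) * (ts[k + 1] - ts[k])
    return total

# Identical trajectories are consistent (both metrics are 0) ...
assert divergence_at(3.0, 3.0) == 0.0
assert divergence_over([0, 1, 2], [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
# ... while a constant offset of 0.5 over 2 time units integrates to 1.0.
assert divergence_over([0, 1, 2], [1.0, 2.0, 3.0], [1.5, 2.5, 3.5]) == 1.0
```

A larger value of the interval metric over a fixed window indicates lower consistency, matching the orderings discussed above.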
Similarly, De(i, j, t1, t2) can be used as a metric to measure the degree of consistency in a certain time period.\nIf De(i, j, t1, t2) > De(i, j, t3, t4) and |t1 - t2| = |t3 - t4|, it can be stated that between sites i and j, the consistency of the states of entity e between time points t1 and t2 is lower than that between time points t3 and t4.\nIf De(i, j, t1, t2) > De(l, k, t1, t2), it can be stated that between time points t1 and t2, the consistency of the states of entity e between sites i and j is lower than that between sites l and k.\nIn DR, the states of entities are composed of the positions and orientations of entities and some prediction related parameters (e.g. the velocities of entities).\nGiven two distinct sites i and j, which have replicated a shared entity e, at a given time point t, the positions of e at sites i and j are (xit, yit, zit) and (xjt, yjt, zjt), and De(i, j, t) and De(i, j, t1, t2) can be calculated as: De(i, j, t) = \u221a((xit - xjt)^2 + (yit - yjt)^2 + (zit - zjt)^2) (3) De(i, j, t1, t2) = \u222b_t1^t2 \u221a((xit - xjt)^2 + (yit - yjt)^2 + (zit - zjt)^2) dt (4) In this paper, formulas (3) and (4) are used as metrics to measure the consistency of shared entities between local and remote sites.\n3.\nINCONSISTENCY IN DR The inconsistency in DR can be divided into two sections by the time point when a remote site receives a state update.\nThe inconsistency before a remote site receives a state update is referred to as before inconsistency, and the inconsistency after a remote site receives a state update is referred to as after inconsistency.\nBefore inconsistency and after inconsistency are similar to the terms before export error and after export error [8].\nAfter inconsistency is caused by the lack of synchronization between the physical clocks of all sites in a system.\nBy employing physical clock synchronization, GS-DR can accurately calculate the states of shared entities after receiving state updates, and it can eliminate after inconsistency.\nBefore inconsistency is 
caused by two reasons.\nThe first reason is the delay of sending state updates, as local site does not send a state update unless the difference between the accurate state and the estimated one is larger than a predefined threshold.\nThe second reason is network transmission delay, as a shared entity can be synchronized only after remote sites receive the corresponding state update.\nFigure 1.\nThe paths of a shared entity by using GS-DR.\nFor example, it is assumed that the velocity of a shared entity is the only parameter to predict the entity``s position, and current position of the entity can be calculated by its last position and current velocity.\nThe 5th Workshop on Network & System Support for Games 2006 - NETGAMES 2006\nTo simplify the description, it is also assumed that there are only two sites i and j in a game session, site i acts as local site and site j acts as remote site, and t1 is the time point the local site updates the state of the shared entity.\nFigure 1 illustrates the paths of the shared entity at local site and remote site in x axis by using GS-DR.\nAt the beginning, the positions of the shared entity are the same at sites i and j and the velocity of the shared entity is 0.\nBefore time point t0, the paths of the shared entity at sites i and j in x coordinate are exactly the same.\nAt time point t0, the player at site i issues an operation, which changes the velocity in x axis to v0.\nSite i first periodically checks whether the difference between the accurate position of the shared entity and the estimated one, 0 in this case, is larger than a predefined threshold.\nAt time point t1, site i finds that the difference is larger than the threshold and it sends a state update to site j.\nThe state update contains the position and velocity of the shared entity at time point t1 and time point t1 is also attached as a timestamp.\nAt time point t2, the state update reaches site j, and the received state and the time deviation between time points t1 
and t2 are used to calculate the current position of the shared entity. Site j then updates its replicated entity's position and velocity, and the paths of the shared entity at sites i and j overlap again. From Figure 1, it can be seen that the after inconsistency is 0, and that the before inconsistency is composed of two parts, D1 and D2. D1 is De(i, j, t0, t1) and is caused by the state filtering mechanism of DR; D2 is De(i, j, t1, t2) and is caused by network transmission delay.

4. GLOBALLY SYNCHRONIZED DR WITH LOCAL LAG

From the analysis in Section 3, it can be seen that GS-DR can eliminate after inconsistency, but it cannot effectively tackle before inconsistency. In order to decrease before inconsistency, we propose GS-DR-LL, which combines GS-DR with local lag. In GS-DR-LL, the state of a shared entity at a certain time point t is denoted as S = (t, pos, par1, par2, ..., parn), in which pos is the position of the entity and par1 to parn are the parameters used to calculate the position of the entity. In order to simplify the description of GS-DR-LL, it is assumed that there are only one shared entity and one remote site. At the beginning of a game session, the states of the shared entity are the same at the local and remote sites, with the same position p0 and parameters pars0 (pars represents all the parameters). The local site keeps three states: the real state of the entity Sreal, the predicted state at the remote site Sp-remote, and the latest state sent to the remote site Slate. The remote site keeps only one state, Sremote, which is the real state of the entity at the remote site. Therefore, at the beginning of a game session Sreal = Sp-remote = Slate = Sremote = (t0, p0, pars0). In GS-DR-LL, it is assumed that the physical clocks of all sites are synchronized with a deviation of less than 50 ms (using NTP or GPS clocks). Furthermore, it is necessary to make corrections to a physical clock in a way that
does not result in decreasing the value of the clock, for example by slowing down or halting the clock for a period of time rather than setting it backwards. Additionally, it is assumed that the game scene is updated at a fixed frequency; T stands for the time interval between two consecutive updates (for example, if the scene update frequency is 50 Hz, T is 20 ms), n stands for the lag value used by local lag, and t stands for the current physical time.

After updating the scene, the local site waits for a constant amount of time T. During this time period, the local site receives the operations of the player and stores them in a list L; all operations in L are sorted by their issue time. At the end of time period T, the local site executes all stored operations whose issue time is between t - T and t on Slate to get the new Slate, and it also executes all stored operations whose issue time is between t - (n + T) and t - n on Sreal to get the new Sreal. Additionally, the local site uses Sp-remote and the corresponding prediction methods to estimate the new Sp-remote. After the new Slate, Sreal, and Sp-remote are calculated, the local site checks whether the difference between the new Slate and Sp-remote exceeds the predefined threshold. If it does, the local site sends the new Slate to the remote site and Sp-remote is updated to the new Slate; note that the timestamp of the sent state update is t. After that, the local site uses Sreal to update the local scene and deletes from L the operations whose issue time is less than t - n.

After updating the scene, the remote site likewise waits for a constant amount of time T. During this time period, the remote site stores received state update(s) in a list R; all state updates in R are sorted by their timestamps. At the end of time period T, the remote site checks whether R contains state updates whose timestamps are less than t - n.
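A minimal sketch of these per-tick procedures at the local and remote sites, in Python (1-D positions for brevity; the function and field names, the list-based stores for L and R, and the simple velocity-only prediction are our illustrative assumptions, not the paper's implementation):

```python
def local_tick(t, T, n, ops, state, threshold, send):
    """One local-site GS-DR-LL update.

    ops:   list L of (issue_time, velocity_change), sorted by issue time
    state: dict with "s_late", "s_real", "s_p_remote", each a (pos, vel) pair
    """
    def advance_and_apply(s, lo, hi):
        pos, vel = s
        pos += vel * T                        # dead-reckoned motion over one tick
        for issue, dv in ops:
            if lo < issue <= hi:              # execute ops issued in (lo, hi]
                vel += dv
        return (pos, vel)

    state["s_late"] = advance_and_apply(state["s_late"], t - T, t)
    state["s_real"] = advance_and_apply(state["s_real"], t - n - T, t - n)
    pos, vel = state["s_p_remote"]
    state["s_p_remote"] = (pos + vel * T, vel)    # what remote sites extrapolate
    if abs(state["s_late"][0] - state["s_p_remote"][0]) > threshold:
        send({"state": state["s_late"], "timestamp": t})
        state["s_p_remote"] = state["s_late"]
    ops[:] = [op for op in ops if op[0] >= t - n]  # drop ops older than t - n
    return state["s_real"][0]                 # position the local scene renders


def remote_tick(t, T, n, inbox, s_remote):
    """One remote-site GS-DR-LL update.

    inbox: list R of {"state": (pos, vel), "timestamp": ts}, sorted by ts
    """
    due = [u for u in inbox if u["timestamp"] < t - n]
    if due:
        pos, vel = due[-1]["state"]           # newest update that is due
        pos += vel * ((t - n) - due[-1]["timestamp"])  # replay to lagged time
        s_remote = (pos, vel)
    else:
        pos, vel = s_remote
        s_remote = (pos + vel * T, vel)       # plain DR extrapolation
    inbox[:] = [u for u in inbox if u["timestamp"] >= t - n]
    return s_remote
```

In this sketch the local scene renders Sreal, which lags the newest operations by n, while Slate is computed immediately so that threshold-triggered updates are still sent without delay.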
Note that t is the current physical time, which advances during the transmission of state updates. If R contains such updates, the remote site uses them and the corresponding prediction methods to calculate the new Sremote; otherwise it uses Sremote and the corresponding prediction methods to estimate the new Sremote. After that, the remote site uses Sremote to update the local scene and deletes from R the state updates whose timestamps are less than t - n.

From the above description, it can be seen that the main difference between GS-DR and GS-DR-LL is that GS-DR-LL uses the operations whose issue time is less than t - n to calculate Sreal. That means the scene seen by the local player reflects the results of operations issued a period of time (i.e. n) ago. Meanwhile, if the results of issued operations make the difference between Slate and Sp-remote exceed the predefined threshold, corresponding state updates are sent to remote sites immediately. This is the basic mechanism of GS-DR-LL. In the case of multiple shared entities and remote sites, the local site calculates Slate, Sreal, and Sp-remote for each shared entity separately; if multiple Slate states need to be transmitted, the local site packs them into one state update and sends it to all remote sites.

Figure 2 illustrates the paths of a shared entity at the local and remote sites while using GS-DR and GS-DR-LL. All conditions are the same as in the example describing GS-DR in Section 3. Compared with t1, t2, and n, T (i.e.
the time interval between two consecutive updates) is quite small and is ignored in the following description. At time point t0, the player at site i issues an operation that changes the velocity of the shared entity from 0 to v0. With GS-DR-LL, the results of the operation are applied to the local scene at time point t0 + n. However, the operation is immediately used to calculate Slate; thus, under both GS-DR and GS-DR-LL, at time point t1 site i finds that the difference between the accurate position and the estimated one is larger than the threshold and sends a state update to site j. At time point t2, the state update is received by remote site j. Assuming that the timestamp of the state update is less than t - n, site j uses it to update the local scene immediately. With GS-DR, the duration of before inconsistency is (t2 - t1) + (t1 - t0), whereas it decreases to (t2 - t1 - n) + (t1 - t0) with GS-DR-LL. Note that t2 - t1 is caused by network transmission delay and t1 - t0 is caused by the state filtering mechanism of DR.
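The two duration expressions above can be captured in a small helper (a sketch; the function name and the example times are ours, chosen only for illustration):

```python
def before_inconsistency_duration(t0, t1, t2, n=0.0):
    """Duration of before inconsistency in the Figure 2 scenario:
    (t2 - t1) from network transmission delay plus (t1 - t0) from DR
    state filtering; local lag n offsets only the network-delay part,
    and never below zero."""
    return max(t2 - t1 - n, 0.0) + (t1 - t0)
```

For instance, with t0 = 0 s, t1 = 0.4 s, and t2 = 0.7 s, plain GS-DR (n = 0) gives 0.7 s of before inconsistency, while GS-DR-LL with n = 0.3 s reduces it to 0.4 s, the part due to state filtering alone.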
If n is larger than t2 - t1, GS-DR-LL can eliminate the before inconsistency caused by network transmission delay, but it cannot eliminate the before inconsistency caused by the state filtering mechanism of DR (unless the threshold is set to 0). In highly interactive games, which demand high consistency and in which GS-DR-LL might therefore be employed, the results of operations are difficult to estimate and a small threshold must be used. Thus, in practice, most before inconsistency is caused by network transmission delay, and GS-DR-LL has the capability to eliminate exactly this kind of before inconsistency.

Figure 2. The paths of a shared entity by using GS-DR and GS-DR-LL.

For GS-DR-LL, the selection of the lag value n is very important, and both network transmission delay and the effects of local lag on interaction should be considered. According to HCI research, humans cannot perceive the delay imposed on a system when it is smaller than a specific value, which depends on both the system and the task. For example, in a graphical user interface a delay of approximately 150 ms goes unnoticed for keyboard interaction and the threshold increases to 195 ms for mouse interaction [13], and a delay of up to 50 ms is uncritical for a car-racing game [5]. Thus, if the network transmission delay is less than the perception threshold of a game system, n can be set to that threshold; otherwise n can be set in terms of the effects of local lag on the interaction of the system [14]. In the case that a large n must be used, some HCI methods (e.g.
echo [15]) can be used to relieve the negative effects of the large lag. When n is larger than the network transmission delay, GS-DR-LL can eliminate most before inconsistency. Traditional local lag requires the lag value to be larger than the typical network transmission delay; otherwise state repairs would flood the system. GS-DR-LL, however, allows n to be smaller than the typical network transmission delay. In this case, the before inconsistency caused by network transmission delay still exists, but it is decreased.

5. PERFORMANCE EVALUATION

In order to evaluate GS-DR-LL and compare it with GS-DR in a real application, we implemented both methods in a networked game named spaceship [1]. Spaceship is a very simple networked computer game in which players control their spaceships to accelerate, decelerate, turn, and shoot spaceships controlled by remote players with laser beams. If a spaceship is hit by a laser beam, its life points decrease by one; if the life points of a spaceship reach 0, the spaceship is removed from the game and the player controlling it loses the game.

In our implementation, GS-DR-LL and GS-DR coexisted in the game system, and the test bed was composed of two computers connected by 100 Mbps switched Ethernet, with one computer acting as the local site and the other as the remote site. In order to simulate network transmission delay, a dedicated module was developed to delay all packets transmitted between the two computers according to a predefined delay value.

The main purpose of the performance evaluation is to study the effects of GS-DR-LL on decreasing before inconsistency in a particular game system under different thresholds, lags, and network transmission delays. Two different thresholds were used in the evaluation: one is 10 pixels deviation in position or 15 degrees deviation in orientation, and the other is 4 pixels or 5 degrees. Six different combinations of lag and network
transmission delay were used in the evaluation, and they can be divided into two categories. In one category, the lag was fixed at 300 ms and three different network transmission delays (100 ms, 300 ms, and 500 ms) were used. In the other category, the network transmission delay was fixed at 800 ms and three different lags (100 ms, 300 ms, and 500 ms) were used. Therefore, the total number of settings used in the evaluation was 12 (2 thresholds × 6 combinations).

The procedure of the performance evaluation was composed of three steps. In the first step, two participants were employed to play the game, and their operation sequences were recorded. Based on the records, a sub operation sequence, which lasted about one minute and included different operations (e.g. accelerate, decelerate, and turn), was selected. In the second step, the physical clocks of the two computers were synchronized. Under each setting and consistency maintenance approach, the selected sub operation sequence was played back on one computer, driving the two spaceships, one local and one remote, to move. Meanwhile, the tracks of the spaceships on the two computers were recorded separately; such a pair of recordings is called a track couple. Since there are 12 settings and 2 consistency maintenance approaches, the total number of recorded track couples was 24. In the last step, the inconsistency within each track couple was calculated, the unit of inconsistency being the pixel. Since the physical clocks of the two computers were synchronized, the calculation of inconsistency was quite simple: the inconsistency at a particular time point is the distance between the positions of the two spaceships at that time point (i.e.
formula (3)). In order to show the results of inconsistency in a clear way, only parts of the results, lasting about 7 seconds, are used in the following figures, and the figures show almost the same parts of the results.

Figures 3, 4, and 5 show the results of inconsistency when the lag is fixed at 300 ms and the network transmission delays are 100, 300, and 500 ms. It can be seen that inconsistency does exist, but most of the time it is 0. Additionally, inconsistency increases with the network transmission delay, but decreases with the threshold. Compared with GS-DR, GS-DR-LL decreases more inconsistency, and it eliminates most inconsistency when the network transmission delay is 100 ms and the threshold is 4 pixels or 5 degrees. According to the prediction and state filtering mechanisms of DR, inconsistency cannot be completely eliminated if the threshold is not 0. With the definitions of before inconsistency and after inconsistency, it can be concluded that GS-DR and GS-DR-LL can both eliminate after inconsistency, and that GS-DR-LL can effectively decrease before inconsistency. It can be foreseen that with a proper lag and threshold (e.g. a lag larger than the network transmission delay and a threshold of 0), GS-DR-LL can even eliminate before inconsistency.

Figure 3. Inconsistency when the network transmission delay is 100 ms and the lag is 300 ms (two panels: thresholds of 10 pixels or 15 degrees, and 4 pixels or 5 degrees).
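The per-time-point inconsistency of a track couple, i.e. formula (3) restricted to the 2-D screen coordinates of the spaceships, can be sketched as follows (the function name and the sample track data are our illustrative assumptions):

```python
import math

def track_inconsistency(track_local, track_remote):
    """Per-time-point inconsistency, in pixels: the Euclidean distance
    between the two spaceships' recorded (x, y) positions at each
    sampled time point of a track couple."""
    return [math.hypot(xl - xr, yl - yr)
            for (xl, yl), (xr, yr) in zip(track_local, track_remote)]
```

Because the two computers' clocks were synchronized, samples recorded at the same physical time can simply be paired index by index, as `zip` does here.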
Figure 4. Inconsistency when the network transmission delay is 300 ms and the lag is 300 ms (two panels: thresholds of 10 pixels or 15 degrees, and 4 pixels or 5 degrees).

Figure 5. Inconsistency when the network transmission delay is 500 ms and the lag is 300 ms (two panels: thresholds of 10 pixels or 15 degrees, and 4 pixels or 5 degrees).

Figures 6, 7, and 8 show the results of inconsistency when the network transmission delay is fixed at 800 ms and the lags are 100, 300, and 500 ms. It can be seen that with GS-DR-LL, before inconsistency decreases as the lag increases. In traditional local lag, the lag must be set to a value larger than the typical network transmission delay; otherwise the state repairs would flood the system. The above results show that no such constraint applies to the selection of the lag here: with GS-DR-LL, a system works fine even if the lag is much smaller than the network transmission delay. From all the above results, it can be concluded that GS-DR and GS-DR-LL can both eliminate after inconsistency, that GS-DR-LL can effectively decrease before inconsistency, and that the effects increase with the lag.

Figure 6. Inconsistency when the network transmission delay is 800 ms and the lag is 100 ms (two panels: thresholds of 10 pixels or 15 degrees, and 4 pixels or 5 degrees).
Figure 7. Inconsistency when the network transmission delay is 800 ms and the lag is 300 ms (two panels: thresholds of 10 pixels or 15 degrees, and 4 pixels or 5 degrees).

Figure 8. Inconsistency when the network transmission delay is 800 ms and the lag is 500 ms (two panels: thresholds of 10 pixels or 15 degrees, and 4 pixels or 5 degrees).

6. CONCLUSIONS

Compared with traditional DR, GS-DR can eliminate after inconsistency through the synchronization of physical clocks, but it cannot tackle before inconsistency, which significantly influences the usability and fairness of a game. In this paper, we proposed a method named GS-DR-LL, which combines local lag and GS-DR, to decrease before inconsistency by delaying the application of the execution results of local operations to the local scene. Performance evaluation indicates that GS-DR-LL can effectively decrease before inconsistency, and that the effects increase with the lag.

GS-DR-LL has significant implications for consistency maintenance approaches. First, GS-DR-LL shows that improved DR can not only eliminate after inconsistency but also decrease before inconsistency; with a proper lag and threshold, it can even eliminate before inconsistency. As a result, the application of DR can be greatly broadened, and it could be used in systems that require high consistency (e.g.
highly interactive games). Second, GS-DR-LL shows that by combining local lag and GS-DR, the constraint on selecting the lag value is removed, and a lag smaller than the typical network transmission delay can be used. As a result, the application of local lag can be greatly broadened, and it could be used in systems that have a large typical network transmission delay (e.g. Internet based games).

7. REFERENCES

[1] Mauve, M., Vogel, J., Hilt, V., and Effelsberg, W. Local-Lag and Timewarp: Providing Consistency for Replicated Continuous Applications. IEEE Transactions on Multimedia, Vol. 6, No. 1, 2004, 47-57.
[2] Li, F.W., Li, L.W., and Lau, R.W. Supporting Continuous Consistency in Multiplayer Online Games. In Proc. of ACM Multimedia, 2004, 388-391.
[3] Pantel, L. and Wolf, L. On the Suitability of Dead Reckoning Schemes for Games. In Proc. of NetGames, 2002, 79-84.
[4] Alhalabi, M.O., Horiguchi, S., and Kunifuji, S. An Experimental Study on the Effects of Network Delay in Cooperative Shared Haptic Virtual Environment. Computers and Graphics, Vol. 27, No. 2, 2003, 205-213.
[5] Pantel, L. and Wolf, L.C. On the Impact of Delay on Real-Time Multiplayer Games. In Proc. of NOSSDAV, 2002, 23-29.
[6] Meehan, M., Razzaque, S., Whitton, M.C., and Brooks, F.P. Effect of Latency on Presence in Stressful Virtual Environments. In Proc. of IEEE VR, 2003, 141-148.
[7] Bernier, Y.W. Latency Compensation Methods in Client/Server In-Game Protocol Design and Optimization. In Proc. of Game Developers Conference, 2001.
[8] Aggarwal, S., Banavar, H., and Khandelwal, A. Accuracy in Dead-Reckoning based Distributed Multi-Player Games. In Proc. of NetGames, 2004, 161-165.
[9] Raynal, M. and Schiper, A. From Causal Consistency to Sequential Consistency in Shared Memory Systems. In Proc. of Conference on Foundations of Software Technology and Theoretical Computer Science, 1995, 180-194.
[10] Ahamad, M., Burns, J.E., Hutto, P.W., and Neiger, G.
Causal Memory. In Proc. of International Workshop on Distributed Algorithms, 1991, 9-30.
[11] Herlihy, M. and Wing, J. Linearizability: a Correctness Condition for Concurrent Objects. ACM Transactions on Programming Languages and Systems, Vol. 12, No. 3, 1990, 463-492.
[12] Misra, J. Axioms for Memory Access in Asynchronous Hardware Systems. ACM Transactions on Programming Languages and Systems, Vol. 8, No. 1, 1986, 142-153.
[13] Dabrowski, J.R. and Munson, E.V. Is 100 Milliseconds Too Fast? In Proc. of SIGCHI Conference on Human Factors in Computing Systems, 2001, 317-318.
[14] Chen, H., Chen, L., and Chen, G.C. Effects of Local-Lag Mechanism on Cooperation Performance in a Desktop CVE System. Journal of Computer Science and Technology, Vol. 20, No. 3, 2005, 396-401.
[15] Chen, L., Chen, H., and Chen, G.C. Echo: a Method to Improve the Interaction Quality of CVEs. In Proc. of IEEE VR, 2005, 269-270.
architectures.\nIn such games, the states of entities are changed not only by the operations of players, but also by the passing of time [1, 2].\nThese games are referred to as Continuous Distributed Multiplayer Games (CDMG).\nLike other distributed applications, CDMG also suffer from the consistency problem caused by network transmission delay.\nAlthough new network techniques (e.g. QoS) can reduce or at least bound the delay, they cannot Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page.\nTo copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and\/or a fee.\ncompletely eliminate it, as there exists the physical speed limitation of light, for instance, 100 ms is needed for light to propagate from Europe to Australia [3].\nThere are many studies about the effects of network transmission delay in different applications [4, 5, 6, 7].\nIn replication based games, network transmission delay makes the states of local and remote sites to be inconsistent, which can cause serious problems, such as reducing the fairness of a game and leading to paradoxical situations etc. 
.\nIn order to maintain consistency for distributed systems, many different approaches have been proposed, among which local lag and Dead-Reckoning (DR) are two representative approaches.\nMauve et al [1] proposed local lag to maintain high consistency for replicated continuous applications.\nIt synchronizes the physical clocks of all sites in a system.\nAfter an operation is issued at local site, it delays the execution of the operation for a short time.\nDuring this short time period the operation is transmitted to remote sites, and all sites try to execute the operation at a same physical time.\nIn order to tackle the inconsistency caused by exceptional network transmission delay, a time warp based mechanism is proposed to repair the state.\nLocal lag can achieve significant high consistency, but it is based on operation transmission, which forwards every operation on a shared entity to remote sites.\nSince operation transmission mechanism requests that all operations should be transmitted in a reliable way, message filtering is difficult to be deployed and the scalability of a system is limited.\nDR is based on state transmission mechanism.\nIn addition to the high fidelity model that maintains the accurate states of its own entities, each site also has a DR model that estimates the states of all entities (including its own entities).\nAfter each update of its own entities, a site compares the accurate state with the estimated one.\nIf the difference exceeds a pre-defined threshold, a state update would be transmitted to all sites and all DR models would be corrected.\nThrough state estimation, DR cannot only maintain consistency but also decrease the number of transmitted state updates.\nCompared with aforementioned local lag, DR cannot maintain high consistency.\nDue to network transmission delay, when a remote site receives a state update of an entity the state of the entity might have changed at the site sending the state update.\nIn order to make DR 
maintain high consistency, Aggarwal et al [8] proposed Globally Synchronized DR (GS-DR), which synchronizes the physical clocks of all sites in a system and adds time stamps to transmitted state updates.\nDetailed description of GS-DR can be found in Section 3.\nWhen a state update is available, GS-DR immediately updates the state of local site and then transmits the state update to remote\nsites, which causes the states of local site and remote sites to be inconsistent in the transmission procedure.\nThus with the synchronization of physical clocks, GS-DR can eliminate after inconsistency, but it cannot tackle before inconsistency [8].\nIn this paper, we propose a new method named globally synchronized DR with Local Lag (GS-DR-LL), which combines local lag and GS-DR.\nBy delaying the update to local site, GS-DR-LL can achieve higher consistency than GS-DR.\nThe rest of this paper is organized as follows: Section 2 gives the definition of consistency and corresponding metrics; the cause of the inconsistency of DR is analyzed in Section 3; Section 4 describes how GS-DR-LL works; performance evaluation is presented in Section 5; Section 6 concludes the paper.\n2.\nCONSISTENCY DEFINITIONS AND METRICS\n3.\nINCONSISTENCY IN DR\n2 The 5th Workshop on Network & System Supportfor Games 2006--NETGAMES 2006\n4.\nGLOBALLY SYNCHRONIZED DR WITH LOCAL LAG\n5.\nPERFORMANCE EVALUATION\n4 The 5th Workshop on Network & System Supportfor Games 2006--NETGAMES 2006\n6.\nCONCLUSIONS\nCompared with traditional DR, GS-DR can eliminate after inconsistency through the synchronization of physical clocks, but it cannot tackle before inconsistency, which would significantly influence the usability and fairness of a game.\nIn this paper, we proposed a method named GS-DR-LL, which combines local lag and GS-DR, to decrease before inconsistency through delaying updating the execution results of local operations to local scene.\nPerformance evaluation indicates that GS-DR-LL can effectively 
decrease before inconsistency, and the effects increase with the lag.\nGS-DR-LL has significant implications to consistency maintenance approaches.\nFirst, GS-DR-LL shows that improved DR cannot only eliminate after inconsistency but also decrease\nbefore inconsistency, with proper lag and threshold, it would even eliminate before inconsistency.\nAs a result, the application of DR can be greatly broadened and it could be used in the systems which request high consistency (e.g. highly interactive games).\nSecond, GS-DR-LL shows that by combining local lag and GSDR, the constraint on selecting lag value is removed and a lag, which is smaller than typical network transmission delay, could be used.\nAs a result, the application of local lag can be greatly broadened and it could be used in the systems which have large typical network transmission delay (e.g. Internet based games).","lvl-4":"Globally Synchronized Dead-Reckoning with Local Lag for Continuous Distributed Multiplayer Games\nABSTRACT\nDead-Reckoning (DR) is an effective method to maintain consistency for Continuous Distributed Multiplayer Games (CDMG).\nSince DR can filter most unnecessary state updates and improve the scalability of a system, it is widely used in commercial CDMG.\nHowever, DR cannot maintain high consistency, and this constrains its application in highly interactive games.\nWith the help of global synchronization, DR can achieve higher consistency, but it still cannot eliminate before inconsistency.\nIn this paper, a method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and Globally Synchronized DR (GS-DR), is presented.\nPerformance evaluation shows that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag.\n1.\nINTRODUCTION\nNowadays, many distributed multiplayer games adopt replicated architectures.\nIn such games, the states of entities are changed not only by the operations of players, but also by the passing 
of time [1, 2].\nThese games are referred to as Continuous Distributed Multiplayer Games (CDMG).\nLike other distributed applications, CDMG also suffer from the consistency problem caused by network transmission delay.\nThere are many studies about the effects of network transmission delay in different applications [4, 5, 6, 7].\nIn replication based games, network transmission delay makes the states of local and remote sites to be inconsistent, which can cause serious problems, such as reducing the fairness of a game and leading to paradoxical situations etc. .\nIn order to maintain consistency for distributed systems, many different approaches have been proposed, among which local lag and Dead-Reckoning (DR) are two representative approaches.\nMauve et al [1] proposed local lag to maintain high consistency for replicated continuous applications.\nIt synchronizes the physical clocks of all sites in a system.\nAfter an operation is issued at local site, it delays the execution of the operation for a short time.\nDuring this short time period the operation is transmitted to remote sites, and all sites try to execute the operation at a same physical time.\nIn order to tackle the inconsistency caused by exceptional network transmission delay, a time warp based mechanism is proposed to repair the state.\nLocal lag can achieve significant high consistency, but it is based on operation transmission, which forwards every operation on a shared entity to remote sites.\nDR is based on state transmission mechanism.\nIn addition to the high fidelity model that maintains the accurate states of its own entities, each site also has a DR model that estimates the states of all entities (including its own entities).\nAfter each update of its own entities, a site compares the accurate state with the estimated one.\nIf the difference exceeds a pre-defined threshold, a state update would be transmitted to all sites and all DR models would be corrected.\nThrough state estimation, DR 
cannot only maintain consistency but also decrease the number of transmitted state updates.\nCompared with aforementioned local lag, DR cannot maintain high consistency.\nDue to network transmission delay, when a remote site receives a state update of an entity the state of the entity might have changed at the site sending the state update.\nIn order to make DR maintain high consistency, Aggarwal et al [8] proposed Globally Synchronized DR (GS-DR), which synchronizes the physical clocks of all sites in a system and adds time stamps to transmitted state updates.\nDetailed description of GS-DR can be found in Section 3.\nWhen a state update is available, GS-DR immediately updates the state of local site and then transmits the state update to remote\nsites, which causes the states of local site and remote sites to be inconsistent in the transmission procedure.\nThus with the synchronization of physical clocks, GS-DR can eliminate after inconsistency, but it cannot tackle before inconsistency [8].\nIn this paper, we propose a new method named globally synchronized DR with Local Lag (GS-DR-LL), which combines local lag and GS-DR.\nBy delaying the update to local site, GS-DR-LL can achieve higher consistency than GS-DR.\n6.\nCONCLUSIONS\nIn this paper, we proposed a method named GS-DR-LL, which combines local lag and GS-DR, to decrease before inconsistency through delaying updating the execution results of local operations to local scene.\nPerformance evaluation indicates that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag.\nGS-DR-LL has significant implications to consistency maintenance approaches.\nFirst, GS-DR-LL shows that improved DR cannot only eliminate after inconsistency but also decrease\nbefore inconsistency, with proper lag and threshold, it would even eliminate before inconsistency.\nAs a result, the application of DR can be greatly broadened and it could be used in the systems which request high consistency 
(e.g. highly interactive games).\nAs a result, the application of local lag can be greatly broadened and it could be used in the systems which have large typical network transmission delay (e.g. Internet based games).","lvl-2":"Globally Synchronized Dead-Reckoning with Local Lag for Continuous Distributed Multiplayer Games\nABSTRACT\nDead-Reckoning (DR) is an effective method to maintain consistency for Continuous Distributed Multiplayer Games (CDMG).\nSince DR can filter most unnecessary state updates and improve the scalability of a system, it is widely used in commercial CDMG.\nHowever, DR cannot maintain high consistency, and this constrains its application in highly interactive games.\nWith the help of global synchronization, DR can achieve higher consistency, but it still cannot eliminate before inconsistency.\nIn this paper, a method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and Globally Synchronized DR (GS-DR), is presented.\nPerformance evaluation shows that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag.\n1.\nINTRODUCTION\nNowadays, many distributed multiplayer games adopt replicated architectures.\nIn such games, the states of entities are changed not only by the operations of players, but also by the passing of time [1, 2].\nThese games are referred to as Continuous Distributed Multiplayer Games (CDMG).\nLike other distributed applications, CDMG also suffer from the consistency problem caused by network transmission delay.\nAlthough new network techniques (e.g. 
QoS) can reduce or at least bound the delay, they cannot completely eliminate it, as there exists the physical speed limitation of light; for instance, 100 ms is needed for light to propagate from Europe to Australia [3].\nThere are many studies about the effects of network transmission delay in different applications [4, 5, 6, 7].\nIn replication based games, network transmission delay makes the states of local and remote sites inconsistent, which can cause serious problems, such as reducing the fairness of a game and leading to paradoxical situations.\nIn order to maintain consistency for distributed systems, many different approaches have been proposed, among which local lag and Dead-Reckoning (DR) are two representative approaches.\nMauve et al. [1] proposed local lag to maintain high consistency for replicated continuous applications.\nIt synchronizes the physical clocks of all sites in a system.\nAfter an operation is issued at the local site, it delays the execution of the operation for a short time.\nDuring this short time period the operation is transmitted to remote sites, and all sites try to execute the operation at the same physical time.\nIn order to tackle the inconsistency caused by exceptional network transmission delay, a time warp based mechanism is proposed to repair the state.\nLocal lag can achieve significantly high consistency, but it is based on operation transmission, which forwards every operation on a shared entity to remote sites.\nSince the operation transmission mechanism requires that all operations be transmitted in a 
reliable way, message filtering is difficult to deploy and the scalability of a system is limited.\nDR is based on a state transmission mechanism.\nIn addition to the high fidelity model that maintains the accurate states of its own entities, each site also has a DR model that estimates the states of all entities (including its own entities).\nAfter each update of its own entities, a site compares the accurate state with the estimated one.\nIf the difference exceeds a pre-defined threshold, a state update is transmitted to all sites and all DR models are corrected.\nThrough state estimation, DR can not only maintain consistency but also decrease the number of transmitted state updates.\nCompared with the aforementioned local lag, DR cannot maintain high consistency.\nDue to network transmission delay, when a remote site receives a state update of an entity, the state of the entity might already have changed at the site sending the update.\nIn order to make DR maintain high consistency, Aggarwal et al. [8] proposed Globally Synchronized DR (GS-DR), which synchronizes the physical clocks of all sites in a system and adds timestamps to transmitted state updates.\nA detailed description of GS-DR can be found in Section 3.\nWhen a state update is available, GS-DR immediately updates the state of the local site and then transmits the state update to remote sites, which causes the states of the local site and remote sites to be inconsistent during the transmission procedure.\nThus, with the synchronization of physical clocks, GS-DR can eliminate after inconsistency, but it cannot tackle before inconsistency [8].\nIn this paper, we propose a new method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and GS-DR.\nBy delaying the update to the local site, GS-DR-LL can achieve higher consistency than GS-DR.\nThe rest of this paper is organized as follows: Section 2 gives the definition of consistency and corresponding metrics; the cause of the 
inconsistency of DR is analyzed in Section 3; Section 4 describes how GS-DR-LL works; performance evaluation is presented in Section 5; Section 6 concludes the paper.\n2.\nCONSISTENCY DEFINITIONS AND METRICS\nThe consistency of replicated applications has already been well defined in the discrete domain [9, 10, 11, 12], but little related work has been done in the continuous domain.\nMauve et al. [1] have given a definition of consistency for replicated applications in the continuous domain, but the definition is based on operation transmission and it is difficult for it to describe state transmission based methods (e.g. DR).\nHere, we present an alternative definition of consistency in the continuous domain, which suits state transmission based methods well.\nGiven two distinct sites i and j, which have replicated a shared entity e, at a given time t, the states of e at sites i and j are Si (t) and Sj (t).\nIn this paper, formulas (1) and (2) are used to determine whether the states of shared entities are consistent between local and remote sites.\nDue to network transmission delay, it is difficult to keep the states of shared entities absolutely consistent.\nCorresponding metrics are needed to measure the consistency of shared entities between local and remote sites.\nDe (i, j, t) can be used as a metric to measure the degree of consistency at a certain time point.\nIf De (i, j, t1) > De (i, j, t2), it can be stated that between sites i and j, the consistency of the states of entity e at time point t1 is lower than that at time point t2.\nIf De (i, j, t) > De (l, k, t), it can be stated that, at time point t, the consistency of the states of entity e between sites i and j is lower than that between sites l and k. 
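The metric just described can be made concrete. The original formulas (1)-(4) did not survive extraction, so the following Python sketch reconstructs them under the assumption, consistent with Section 5 where inconsistency is measured as a pixel distance, that De (i, j, t) is the Euclidean distance between the two replicas' positions, and that the time-period metric accumulates the pointwise values at sampled time points; the function names are illustrative, not from the paper.

```python
def d_e(pos_i, pos_j):
    """De(i, j, t): degree of inconsistency of entity e between sites i
    and j at one time point, reconstructed here as the Euclidean distance
    between the two replicas' positions (an assumption)."""
    return sum((a - b) ** 2 for a, b in zip(pos_i, pos_j)) ** 0.5

def d_e_period(track_i, track_j):
    """De(i, j, t1, t2): degree of inconsistency over a time period,
    taken here as the sum of the pointwise distances at the sampled
    time points between t1 and t2 (same caveat)."""
    return sum(d_e(p, q) for p, q in zip(track_i, track_j))
```

With these definitions, identical replicas give De = 0, and a larger value means lower consistency, matching the comparisons in the text.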
Similarly, De (i, j, t1, t2) can be used as a metric to measure the degree of consistency in a certain time period.\nIf De (i, j, t1, t2) > De (i, j, t3, t4) and |t1 - t2| = |t3 - t4|, it can be stated that between sites i and j, the consistency of the states of entity e between time points t1 and t2 is lower than that between time points t3 and t4.\nIf De (i, j, t1, t2) > De (l, k, t1, t2), it can be stated that between time points t1 and t2, the consistency of the states of entity e between sites i and j is lower than that between sites l and k.\nIn DR, the states of entities are composed of the positions and orientations of entities and some prediction related parameters (e.g. the velocities of entities).\nGiven two distinct sites i and j, which have replicated a shared entity e, at a given time point t, the positions of e at sites i and j are (xit, yit, zit) and (xjt, yjt, zjt), and De (i, j, t) and De (i, j, t1, t2) can be calculated as:\nIn this paper, formulas (3) and (4) are used as metrics to measure the consistency of shared entities between local and remote sites.\n3.\nINCONSISTENCY IN DR\nThe inconsistency in DR can be divided into two parts by the time point when a remote site receives a state update.\nThe inconsistency before a remote site receives a state update is referred to as before inconsistency, and the inconsistency after a remote site receives a state update is referred to as after inconsistency.\nBefore inconsistency and after inconsistency are similar to the terms before export error and after export error [8].\nAfter inconsistency is caused by the lack of synchronization between the physical clocks of all sites in a system.\nBy employing physical clock synchronization, GS-DR can accurately calculate the states of shared entities after receiving state updates, and it can eliminate after inconsistency.\nBefore inconsistency is caused by two reasons.\nThe first reason is the delay of sending state updates, as the local site does not send a 
state update unless the difference between accurate state and the estimated one is larger than a predefined threshold.\nThe second reason is network transmission delay, as a shared entity can be synchronized only after remote sites receiving corresponding state update.\nFigure 1.\nThe paths of a shared entity by using GS-DR.\nFor example, it is assumed that the velocity of a shared entity is the only parameter to predict the entity's position, and current position of the entity can be calculated by its last position and current velocity.\nTo simplify the description, it is also assumed that there are only two sites i and j in a game session, site i acts as\n2 The 5th Workshop on Network & System Supportfor Games 2006--NETGAMES 2006\nlocal site and site j acts as remote site, and t1 is the time point the local site updates the state of the shared entity.\nFigure 1 illustrates the paths of the shared entity at local site and remote site in x axis by using GS-DR.\nAt the beginning, the positions of the shared entity are the same at sites i and j and the velocity of the shared entity is 0.\nBefore time point t0, the paths of the shared entity at sites i and j in x coordinate are exactly the same.\nAt time point t0, the player at site i issues an operation, which changes the velocity in x axis to v0.\nSite i first periodically checks whether the difference between the accurate position of the shared entity and the estimated one, 0 in this case, is larger than a predefined threshold.\nAt time point t1, site i finds that the difference is larger than the threshold and it sends a state update to site j.\nThe state update contains the position and velocity of the shared entity at time point t1 and time point t1 is also attached as a timestamp.\nAt time point t2, the state update reaches site j, and the received state and the time deviation between time points t1 and t2 are used to calculate the current position of the shared entity.\nThen site j updates its replicated 
entity's position and velocity, and the paths of the shared entity at sites i and j overlap again.\nFrom Figure 1, it can be seen that the after inconsistency is 0, and the before inconsistency is composed of two parts, D1 and D2.\nD1 is De (i, j, t0, t1) and it is caused by the state filtering mechanism of DR. D2 is De (i, j, t1, t2) and it is caused by network transmission delay.\n4.\nGLOBALLY SYNCHRONIZED DR WITH LOCAL LAG\nFrom the analysis in Section 3, it can be seen that GS-DR can eliminate after inconsistency, but it cannot effectively tackle before inconsistency.\nIn order to decrease before inconsistency, we propose GS-DR-LL, which combines GS-DR with local lag and can effectively decrease before inconsistency.\nIn GS-DR-LL, the state of a shared entity at a certain time point t is notated as S = (t, pos, par1, par2, ..., parn), in which pos means the position of the entity and par1 to parn are the parameters used to calculate the position of the entity.\nIn order to simplify the description of GS-DR-LL, it is assumed that there is only one shared entity and one remote site.\nAt the beginning of a game session, the states of the shared entity are the same at the local and remote sites, with the same position p0 and parameters pars0 (pars represents all the parameters).\nThe local site keeps three states: the real state of the entity Sreal, the predicted state at the remote site Sp-remote, and the latest state updated to the remote site Slate.\nThe remote site keeps only one state Sremote, which is the real state of the entity at the remote site.\nTherefore, at the beginning of a game session Sreal = Sp-remote = Slate = Sremote = (t0, p0, pars0).\nIn GS-DR-LL, it is assumed that the physical clocks of all sites are synchronized with a deviation of less than 50 ms (using NTP or GPS clocks).\nFurthermore, it is necessary to make corrections to a physical clock in a way that does not result in decreasing the value of the clock, for example by slowing down or halting the clock for a 
period of time.\nAdditionally, it is assumed that the game scene is updated at a fixed frequency, and T stands for the time interval between two consecutive updates; for example, if the scene update frequency is 50 Hz, T would be 20 ms. n stands for the lag value used by local lag, and t stands for the current physical time.\nAfter updating the scene, the local site waits for a constant amount of time T.\nDuring this time period, the local site receives the operations of the player and stores them in a list L. All operations in L are sorted by their issue time.\nAt the end of time period T, the local site executes all stored operations whose issue time is between t - T and t on Slate to get the new Slate, and it also executes all stored operations whose issue time is between t - (n + T) and t - n on Sreal to get the new Sreal.\nAdditionally, the local site uses Sp-remote and corresponding prediction methods to estimate the new Sp-remote.\nAfter the new Slate, Sreal, and Sp-remote are calculated, the local site checks whether the difference between the new Slate and Sp-remote exceeds the predefined threshold.\nIf so, the local site sends the new Slate to the remote site and Sp-remote is updated with the new Slate.\nNote that the timestamp of the sent state update is t.\nAfter that, the local site uses Sreal to update the local scene and deletes the operations whose issue time is less than t - n from L.\nAfter updating the scene, the remote site waits for a constant amount of time T.\nDuring this time period, the remote site stores received state update(s) in a list R. All state updates in R are sorted by their timestamps.\nAt the end of time period T, the remote site checks whether R contains state updates whose timestamps are less than t - n. 
Note that t is the current physical time and it increases during the transmission of state updates.\nIf so, the remote site uses these state updates and corresponding prediction methods to calculate the new Sremote; otherwise, it uses Sremote and corresponding prediction methods to estimate the new Sremote.\nAfter that, the remote site uses Sremote to update its local scene and deletes the state updates whose timestamps are less than t - n from R. From the above description, it can be seen that the main difference between GS-DR and GS-DR-LL is that GS-DR-LL uses the operations whose issue time is less than t - n to calculate Sreal.\nThat means that the scene seen by the local player is the result of the operations issued a period of time (i.e. n) ago.\nMeanwhile, if the results of issued operations make the difference between Slate and Sp-remote exceed a predefined threshold, corresponding state updates are sent to remote sites immediately.\nThe aforementioned is the basic mechanism of GS-DR-LL.\nIn the case of multiple shared entities and remote sites, the local site calculates Slate, Sreal, and Sp-remote for each shared entity respectively; if multiple Slate states need to be transmitted, the local site packs them into one state update and then sends it to all remote sites.\nFigure 2 illustrates the paths of a shared entity at the local site and a remote site while using GS-DR and GS-DR-LL.\nAll conditions are the same as the conditions used in the aforementioned example describing GS-DR.\nCompared with t1, t2, and n, T (i.e. 
the time interval between two consecutive updates) is quite small and it is ignored in the following description.\nAt time point t0, the player at site i issues an operation, which changes the velocity of the shared entity from 0 to v0.\nWith GS-DR-LL, the results of the operation are updated to the local scene at time point t0 + n.\nHowever, the operation is immediately used to calculate Slate; thus, regardless of whether GS-DR or GS-DR-LL is used, at time point t1 site i finds that the difference between the accurate position and the estimated one is larger than the threshold and it sends a state update to site j.\nAt time point t2, the state update is received by remote site j. Assuming that the timestamp of the state update is less than t - n, site j uses it to update the local scene immediately.\nWith GS-DR, the time period of before inconsistency is (t2 - t1) + (t1 - t0), whereas it decreases to (t2 - t1 - n) + (t1 - t0) with the help of GS-DR-LL.\nNote that t2 - t1 is caused by network transmission delay and t1 - t0 is caused by the state filtering mechanism of DR. 
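The local-site procedure described in Section 4 can be sketched in code. This is a minimal sketch under simplifying assumptions, not the authors' implementation: the state is reduced to a 1-D (position, velocity) pair, operations simply set the velocity, and the names (`local_tick`, `apply_ops`, `advance`, the default threshold) are illustrative.

```python
def advance(state, dt):
    """Dead-reckoning prediction: extrapolate position by velocity."""
    pos, vel = state
    return (pos + vel * dt, vel)

def apply_ops(state, ops, lo, hi):
    """Execute the stored operations whose issue time lies in (lo, hi]."""
    pos, vel = state
    for t_op, new_vel in ops:          # each operation: (issue_time, velocity)
        if lo < t_op <= hi:
            vel = new_vel
    return (pos, vel)

def local_tick(t, T, n, ops, s_late, s_real, s_p_remote, send, threshold=4.0):
    """One scene update at physical time t (tick length T, local lag n).
    Slate sees fresh operations, Sreal sees operations lagged by n, and
    Sp-remote is the DR estimate of the remote replica; a timestamped
    update is sent only when Slate drifts from Sp-remote beyond the
    threshold."""
    s_late = apply_ops(advance(s_late, T), ops, t - T, t)
    s_real = apply_ops(advance(s_real, T), ops, t - (n + T), t - n)
    s_p_remote = advance(s_p_remote, T)
    if abs(s_late[0] - s_p_remote[0]) > threshold:
        send((t, s_late))              # state update timestamped with t
        s_p_remote = s_late
    ops[:] = [op for op in ops if op[0] >= t - n]   # purge old operations
    return s_late, s_real, s_p_remote  # s_real drives the local scene
```

In a run where a single velocity-change operation is issued at 50 ms (with T = 20 ms and n = 300 ms), Slate reflects the operation immediately and triggers a state update as soon as the drift exceeds the threshold, while Sreal, which drives the rendered scene, only picks the operation up n ms later: exactly the delayed local update that lets GS-DR-LL trade responsiveness for lower before inconsistency.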
If n is larger than t2 - t1, GS-DR-LL can eliminate the before inconsistency caused by network transmission delay, but it cannot eliminate the before inconsistency caused by the state filtering mechanism of DR (unless the threshold is set to 0).\nIn highly interactive games, which require high consistency and in which GS-DR-LL might be employed, the results of operations are quite difficult to estimate and a small threshold must be used.\nThus, in practice, most before inconsistency is caused by network transmission delay, and GS-DR-LL has the capability to eliminate such before inconsistency.\nFigure 2.\nThe paths of a shared entity by using GS-DR and GS-DR-LL.\nFor GS-DR-LL, the selection of the lag value n is very important, and both network transmission delay and the effects of local lag on interaction should be considered.\nAccording to the results of HCI related research, humans cannot perceive the delay imposed on a system when it is smaller than a specific value, and the specific value depends on both the system and the task.\nFor example, in a graphical user interface a delay of approximately 150 ms cannot be noticed for keyboard interaction, and the threshold increases to 195 ms for mouse interaction [13], while a delay of up to 50 ms is uncritical for a car-racing game [5].\nThus, if the network transmission delay is less than the specific value of a game system, n can be set to that specific value.\nOtherwise, n can be set in terms of the effects of local lag on the interaction of the system [14].\nIn the case that a large n must be used, some HCI methods (e.g. 
echo [15]) can be used to relieve the negative effects of the large lag.\nIn the case that n is larger than the network transmission delay, GS-DR-LL can eliminate most before inconsistency.\nTraditional local lag requires that the lag value be larger than the typical network transmission delay; otherwise, state repairs would flood the system.\nHowever, GS-DR-LL allows n to be smaller than the typical network transmission delay.\nIn this case, the before inconsistency caused by network transmission delay still exists, but it can be decreased.\n5.\nPERFORMANCE EVALUATION\nIn order to evaluate GS-DR-LL and compare it with GS-DR in a real application, we implemented both methods in a networked game named spaceship [1].\nSpaceship is a very simple networked computer game, in which players can control their spaceships to accelerate, decelerate, turn, and shoot spaceships controlled by remote players with laser beams.\nIf a spaceship is hit by a laser beam, its life points decrease by one.\nIf the life points of a spaceship decrease to 0, the spaceship is removed from the game and the player controlling the spaceship loses the game.\nIn our practical implementation, GS-DR-LL and GS-DR coexisted in the game system, and the test bed was composed of two computers connected by 100 M switched Ethernet, with one computer acting as the local site and the other as the remote site.\nIn order to simulate network transmission delay, a specific module was developed to delay all packets transmitted between the two computers according to a predefined delay value.\nThe main purpose of the performance evaluation is to study the effects of GS-DR-LL on decreasing before inconsistency in a particular game system under different thresholds, lags, and network transmission delays.\nTwo different thresholds were used in the evaluation: one is a deviation of 10 pixels in position or 15 degrees in orientation, and the other is 4 pixels or 5 degrees.\nSix different combinations of lag and network 
transmission delay were used in the evaluation, and they could be divided into two categories.\nIn one category, the lag was fixed at 300 ms and three different network transmission delays (100 ms, 300 ms, and 500 ms) were used.\nIn the other category, the network transmission delay was fixed at 800 ms and three different lags (100 ms, 300 ms, and 500 ms) were used.\nTherefore the total number of settings used in the evaluation was 12 (2 \u00d7 6).\nThe procedure of the performance evaluation was composed of three steps.\nIn the first step, two participants were employed to play the game, and the operation sequences were recorded.\nBased on the records, a sub operation sequence, which lasted about one minute and included different operations (e.g. accelerate, decelerate, and turn), was selected.\nIn the second step, the physical clocks of the two computers were synchronized first.\nUnder different settings and consistency maintenance approaches, the selected sub operation sequence was played back on one computer, and it drove the two spaceships, one local and the other remote, to move.\nMeanwhile, the tracks of the spaceships on the two computers were recorded separately, and together they were called a track couple.\nSince there are 12 settings and 2 consistency maintenance approaches, the total number of recorded track couples was 24.\nIn the last step, for each track couple, the inconsistency between the two tracks was calculated, and the unit of inconsistency was pixels.\nSince the physical clocks of the two computers were synchronized, the calculation of inconsistency was quite simple.\nThe inconsistency at a particular time point was the distance between the positions of the two spaceships at that time point (i.e. 
formula (3)).\nIn order to show the results of inconsistency in a clear way, only parts of the results, which last about 7 seconds, are used in the following figures, and the figures show almost the same parts of the results.\nFigures 3, 4, and 5 show the results of inconsistency when the lag is fixed at 300 ms and the network transmission delays are 100, 300, and 500 ms. It can be seen that inconsistency does exist, but most of the time it is 0.\nAdditionally, inconsistency increases with the network transmission delay, but decreases with the threshold.\nCompared with GS-DR, GS-DR-LL can decrease more inconsistency, and it eliminates most inconsistency when the network transmission delay is 100 ms and the threshold is 4 pixels or 5 degrees.\nAccording to the prediction and state filtering mechanisms of DR, inconsistency cannot be completely eliminated if the threshold is not 0.\nWith the definitions of before inconsistency and after inconsistency, it can be concluded that GS-DR and GS-DR-LL both can eliminate after inconsistency, and GS-DR-LL can effectively decrease before inconsistency.\nIt can be foreseen that with a proper lag and threshold (e.g. the lag is larger than the network transmission delay and the threshold is 0), GS-DR-LL can even eliminate before inconsistency.\nFigure 3.\nInconsistency when the network transmission delay is 100 ms and the lag is 300 ms.\nFigure 4.\nInconsistency when the network transmission delay is 300 ms and the lag is 300 ms.\nFigure 5.\nInconsistency when the network transmission delay is 500 ms and the lag is 300 ms.\nFigures 6, 7, and 8 show the results of inconsistency when the network transmission delay is fixed at 800 ms and the lags are 100, 300, and 500 ms. 
It can be seen that with GS-DR-LL, before inconsistency decreases with the lag.\nIn traditional local lag, the lag must be set to a value larger than the typical network transmission delay; otherwise the state repairs would flood the system.\nFrom the above results it can be seen that there does not exist any such constraint on the selection of the lag: with GS-DR-LL, a system would work fine even if the lag is much smaller than the network transmission delay.\nFrom all the above results, it can be concluded that GS-DR and GS-DR-LL both can eliminate after inconsistency, and GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag.\nFigure 6.\nInconsistency when the network transmission delay is 800 ms and the lag is 100 ms.\nFigure 7.\nInconsistency when the network transmission delay is 800 ms and the lag is 300 ms.\nFigure 8.\nInconsistency when the network transmission delay is 800 ms and the lag is 500 ms.\n6.\nCONCLUSIONS\nCompared with traditional DR, GS-DR can eliminate after inconsistency through the synchronization of physical clocks, but it cannot tackle before inconsistency, which can significantly influence the usability and fairness of a game.\nIn this paper, we proposed a method named GS-DR-LL, which combines local lag and GS-DR, to decrease before inconsistency by delaying the updating of the execution results of local operations to the local scene.\nPerformance evaluation indicates that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag.\nGS-DR-LL has significant implications for consistency maintenance approaches.\nFirst, GS-DR-LL shows that improved DR can not only eliminate after inconsistency but also decrease before inconsistency; with a proper lag and threshold, it can even eliminate before inconsistency.\nAs a result, the application of DR can be greatly broadened and it could be used in the systems which 
request high consistency (e.g. highly interactive games).\nSecond, GS-DR-LL shows that by combining local lag and GSDR, the constraint on selecting lag value is removed and a lag, which is smaller than typical network transmission delay, could be used.\nAs a result, the application of local lag can be greatly broadened and it could be used in the systems which have large typical network transmission delay (e.g. Internet based games).","keyphrases":["dead-reckon","local lag","multiplay game","consist","gs-dr-ll","network transmiss delai","time warp","accur state","correct","physic clock","usabl and fair","distribut multi-player game","continu replic applic"],"prmu":["P","P","P","P","P","U","U","M","U","U","M","M","M"]} {"id":"J-63","title":"Negotiation-Range Mechanisms: Exploring the Limits of Truthful Efficient Markets","abstract":"This paper introduces a new class of mechanisms based on negotiation between market participants. This model allows us to circumvent Myerson and Satterthwaite's impossibility result and present a bilateral market mechanism that is efficient, individually rational, incentive compatible and budget balanced in the single-unit heterogeneous setting. 
The underlying scheme makes this combination of desirable qualities possible by reporting a price range for each buyer-seller pair that defines a zone of possible agreements, while the final price is left open for negotiation.","lvl-1":"Negotiation-Range Mechanisms: Exploring the Limits of Truthful Efficient Markets Yair Bartal \u2217 School of Computer Science and Engineering The Hebrew University of Jerusalem, Israel yair@cs.huji.ac.il Rica Gonen School of Computer Science and Engineering The Hebrew University of Jerusalem, Israel rgonen@cs.huji.ac.il Pierfrancesco La Mura Leipzig Graduate School of Management Leipzig, Germany plamura@hhl.de ABSTRACT This paper introduces a new class of mechanisms based on negotiation between market participants.\nThis model allows us to circumvent Myerson and Satterthwaite``s impossibility result and present a bilateral market mechanism that is efficient, individually rational, incentive compatible and budget balanced in the single-unit heterogeneous setting.\nThe underlying scheme makes this combination of desirable qualities possible by reporting a price range for each buyer-seller pair that defines a zone of possible agreements, while the final price is left open for negotiation.\nCategories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Computers and Society]: Electronic Commerce-payment schemes General Terms Algorithms, Economics, Theory 1.\nINTRODUCTION In this paper we introduce the concept of negotiation based mechanisms in the context of the theory of efficient truthful markets.\nA market consists of multiple buyers and sellers who wish to exchange goods.\nThe market``s main objective is to produce an allocation of sellers'' goods to buyers as to maximize the total gain from trade.\nA commonly studied model of participant behavior is taken from the field of economic mechanism design [3, 4, 11].\nIn this model each player has a private valuation function that assigns real values to each 
possible allocation.\nThe algorithm motivates players to participate truthfully by handing payments to them.\nThe mechanism in an exchange collects buyer bids and seller bids and clears the exchange by computing: (i) a set of trades, and (ii) the payments made and received by players.\nIn designing a mechanism to compute trades and payments we must consider the bidding strategies of self-interested players, i.e. rational players that follow expected-utility maximizing strategies.\nWe set allocative efficiency as our primary goal.\nThat is, the mechanism must compute a set of trades that maximizes gain from trade.\nIn addition we require individual rationality (IR) so that all players have positive expected utility to participate, budget balance (BB) so that the exchange does not run at a loss, and incentive compatibility (IC) so that reporting the truth is a dominant strategy for each player.\nUnfortunately, Myerson and Satterthwaite's (1983) well known result demonstrates that in bilateral trade it is impossible to simultaneously achieve perfect efficiency, BB, and IR using an IC mechanism [10].\nA unique approach to overcoming Myerson and Satterthwaite's impossibility result was attempted by Parkes, Kalagnanam and Eso [12].\nThat work designs both a regular and a combinatorial bilateral trade mechanism (which imposes BB and IR) that approximates truth revelation and allocation efficiency.\nIn this paper we circumvent Myerson and Satterthwaite's impossibility result by introducing a new model of negotiation-range markets.\nA negotiation-range mechanism does not produce payment prices for the market participants.\nRather, it assigns each buyer-seller pair a price range, called the Zone Of Possible Agreements (ZOPA).\nThe buyer is provided with the high end of the range and the seller with the low end of the range.\nThis allows the trading parties to engage in negotiation over the final price with a guarantee that the deal is beneficial for both of them.\nThe negotiation 
process is not considered part of the mechanism but is left up to the interested parties, or to some external mechanism. In effect, a negotiation-range mechanism operates as a mediator between the market participants, offering them the grounds on which to finalize the terms of the trade by themselves. This concept is natural to many real-world market environments, such as the real estate market. We focus on the single-unit heterogeneous setting: n sellers offer one unique good each by placing sealed bids specifying their willingness to sell, and m buyers, each interested in buying a single good, place sealed bids specifying their willingness to pay for each good they may be interested in. Our main result is a single-unit heterogeneous bilateral trade negotiation-range mechanism (ZOPAS) that is efficient, individually rational, incentive compatible and budget balanced. Our result does not contradict Myerson and Satterthwaite's important theorem. Their proof relies on a theorem assuring that in two different efficient IC markets, if the sellers have the same upper bound utility then they receive the same prices in each market, and if the buyers have the same lower bound utility then they receive the same prices in each market. Our single-unit heterogeneous mechanism bypasses this theorem by producing a price range, defined by a seller's floor and a buyer's ceiling, for each pair of matched players. In our market mechanism the sellers' upper bound utilities may be the same while their floors differ, and the buyers' lower bound utilities may be the same while their ceilings differ. Moreover, the final price is not fixed by the mechanism at all. Instead, it is determined by an independent negotiation between the buyer and seller. More specifically, in a negotiation-range mechanism, the range of prices each matched pair is given is resolved by a negotiation stage where a
final price is determined. This negotiation stage is crucial for our mechanism to be IC. Intuitively, a negotiation-range mechanism is incentive compatible if truth telling promises the best ZOPA from the point of view of the player in question; that is, he would tell the truth if this strategy maximizes the upper and lower bounds on his utility as expressed by the ZOPA boundaries. Yet, when carefully examined, it turns out that it is impossible (by [10]) for this goal to always hold, simply because such a mechanism could easily be modified to determine final prices for the players (e.g., by taking the average of the range's boundaries). Here the negotiation stage comes into play. We show that if the above utility-maximizing condition does not hold, then the player cannot influence the negotiation bound that is assigned to his deal partner, no matter what value he declares. This means that the only thing he may achieve by reporting a false valuation is modifying his own negotiation bound, something he could alternatively achieve by reporting his true valuation and incorporating the effect of the modified negotiation bound into his negotiation strategy. This eliminates the benefit of reporting false valuations and allows our mechanism to compute the optimal gain from trade according to the players' true values. The problem of computing the optimal allocation that maximizes the gain from trade can be conceptualized as the problem of finding a maximum weighted matching in a weighted bipartite graph connecting buyers and sellers, where each edge is assigned a weight equal to the difference between the respective buyer bid and seller bid. It is well known that this problem can be solved in polynomial time. VCG IC payment schemes [2, 7, 13] support efficient and IR bilateral trade, but not simultaneously BB. Our approach adapts the VCG payment scheme to achieve budget balance. The
philosophy of the VCG payment schemes in bilateral trade is that the buyer pays the seller's opportunity cost of not selling the good to another buyer and not keeping the good for herself. The seller is paid, in addition to the buyer's price, a compensation for the damage the mechanism did to the seller by not extracting the buyer's full bid. Our philosophy is slightly different: the seller is paid at least her opportunity cost of not selling the good to another buyer and not keeping the good for herself, while the buyer pays at most his alternate seller's opportunity cost of not selling the alternate good to another buyer and not keeping it for herself. The rest of this paper is organized as follows. In Section 2 we describe our model and definitions. In Section 3 we present the single-unit heterogeneous negotiation-range mechanism and show that it is efficient, IR, IC and BB. Finally, we conclude with a discussion in Section 4.
2. NEGOTIATION MARKETS PRELIMINARIES
Let Π denote the set of players, N the set of n selling players, and M the set of m buying players, where Π = N ∪ M. Let Ψ = {1, ..., t} denote the set of goods. Let Ti ∈ {−1, 0, 1}^t denote an exchange vector for a trade, such that player i buys goods {A ∈ Ψ | Ti(A) = 1} and sells goods {A ∈ Ψ | Ti(A) = −1}. Let T = (T1, ..., T|Π|) denote the complete trade between all players. We view T as describing the allocation of goods by the mechanism to the buyers and sellers. In the single-unit heterogeneous setting every good belongs to a specific seller, and every buyer is interested in buying one good. The buyer may bid for several or all goods. At the end of the auction every good is either assigned to one of the buyers who bid for it or kept unsold by the seller. It is convenient to assume the sets of buyers and sellers are disjoint (though it is not required), i.e.
N ∩ M = ∅. Each seller i is associated with exactly one good Ai, for which she has a true valuation ci expressing the price at which it is beneficial for her to sell the good. If the seller reports a false valuation in an attempt to improve the auction results for herself, this valuation is denoted ĉi. A buyer has a valuation vector describing his valuation for each of the goods according to their owner. Specifically, vj(k) denotes buyer j's valuation for good Ak. Similarly, if he reports a false valuation it is denoted v̂j(k). If buyer j is matched by the mechanism with seller i then Ti(Ai) = −1 and Tj(Ai) = 1. Notice that in our setting, for every k ≠ i, Ti(Ak) = 0 and Tj(Ak) = 0, and also for every z ≠ j, Tz(Ai) = 0. For a matched buyer j - seller i pair, the gain from trade on the deal is defined as vj(i) − ci. Given an allocation T, the gain from trade associated with T is
V = Σ_{j∈M, i∈N} (vj(i) − ci) · Tj(Ai).
Let T∗ denote the optimal allocation maximizing the gain from trade, computed according to the players' true valuations, and let V∗ denote the optimal gain from trade associated with this allocation. When players report false valuations we use T̂∗ and V̂∗ to denote the optimal allocation and gain from trade, respectively, when computed according to the reported valuations. We are interested in the design of negotiation-range mechanisms. In contrast to a standard auction mechanism, where the buyer and seller are provided with the prices they should pay, the goal of a negotiation-range mechanism is to provide the players with a range of prices within which they can negotiate the final terms of the deal by themselves. The mechanism provides the buyer with the upper bound of the range and the seller with the lower bound. This gives each of them a promise that it will be beneficial for them to close the deal, but does not provide
information about the other player's terms of negotiation.
Definition 1. Negotiation Range: the Zone Of Possible Agreements (ZOPA) between a matched buyer and seller. The ZOPA is a range (L, H), 0 ≤ L ≤ H, where H is an upper bound (ceiling) price for the buyer and L is a lower bound (floor) price for the seller.
Definition 2. Negotiation-Range Mechanism: a mechanism that computes a ZOPA (L, H) for each matched buyer and seller in T∗, and provides the buyer with the upper bound H and the seller with the lower bound L.
The basic assumption is that participants in the auction are self-interested players; that is, their main goal is to maximize their expected utility. The utility for a buyer who does not participate in the trade is 0. If he does win some good, his utility is the surplus between his valuation for that good and the price he pays. For a seller, if she keeps the good unsold, her utility is just her valuation of the good, and the surplus is 0. If she gets to sell it, her utility is the price she is paid for it, and the surplus is the difference between this price and her valuation. Since negotiation-range mechanisms assign bounds on the range of prices rather than the final price, it is useful to define the upper and lower bounds on the players' utilities implied by the range's limits.
Definition 3. Consider a buyer j - seller i pair matched by a negotiation-range mechanism and let (L, H) be their associated negotiation range.
• The buyer's top utility is vj(i) − L, and the buyer's bottom utility is vj(i) − H.
• The seller's top utility is H, with surplus H − ci, and the seller's bottom utility is L, with surplus L − ci.
3. THE SINGLE-UNIT HETEROGENEOUS MECHANISM (ZOPAS)
3.1 Description of the Mechanism
ZOPAS is a negotiation-range mechanism: it finds the optimal allocation T∗ and uses it to define a ZOPA for each buyer-seller pair. The first stage in applying the mechanism is for the buyers and sellers to submit their sealed bids. The mechanism then allocates buyers to sellers by computing the allocation T∗, which results in the optimal gain from trade V∗, and defines a ZOPA for each buyer-seller pair. Finally, buyers and sellers use the ZOPA to negotiate a final price.
Find the optimal allocation T∗:
  Compute the maximum weighted bipartite matching for the bipartite graph of buyers and sellers, with edge weights equal to the gain from trade.
Calculate sellers' floors:
  For every buyer j allocated good Ai:
    Find the optimal allocation (T−j)∗
    Li = vj(i) + (V−j)∗ − V∗
Calculate buyers' ceilings:
  For every buyer j allocated good Ai:
    Find the optimal allocations (T−i)∗ and (T−i,−j)∗
    Hj = vj(i) + (V−i,−j)∗ − (V−i)∗
Negotiation phase:
  For every matched buyer j - seller i pair (good Ai):
    Report to seller i her floor Li and identify her matched buyer j
    Report to buyer j his ceiling Hj and identify his matched seller i
    i and j negotiate the good's final price
Figure 1: The ZOPAS mechanism
Computing T∗ involves solving the maximum weighted bipartite matching problem for the complete bipartite graph Kn,m constructed by placing the buyers on one side of the graph and the sellers on the other, and giving the edge between buyer j and seller i a weight equal to vj(i) − ci. The maximum weighted matching problem is solvable in polynomial time (e.g., using the Hungarian Method). This results in a matching between buyers and sellers
that maximizes the gain from trade. The next step is to compute for each buyer-seller pair a seller's floor, which provides the lower bound of the ZOPA for this pair, and assign it to the seller. A seller's floor is computed by calculating the difference between the total gain from trade when the buyer is excluded and the total gain from trade of the other participants when the buyer is included (the VCG principle). Let (T−j)∗ denote the optimal allocation when buyer j's bids are discarded, and let (V−j)∗ denote the total gain from trade in the allocation (T−j)∗.
Definition 4. Seller Floor: the lowest price the seller should expect to receive, communicated to the seller by the mechanism. The seller floor for player i who was matched with buyer j on good Ai, i.e., Tj(Ai) = 1, is computed as: Li = vj(i) + (V−j)∗ − V∗. The seller is instructed not to accept less than this price from her matched buyer.
Next, the mechanism computes for each buyer-seller pair a buyer's ceiling, which provides the upper bound of the ZOPA for this pair, and assigns it to the buyer. Each buyer's ceiling is computed by removing the buyer's matched seller and calculating the difference between the total gain from trade when the buyer is excluded and the total gain from trade of the other participants when the buyer is included. Let (T−i)∗ denote the optimal allocation when seller i is removed from the trade, and let (V−i)∗ denote the total gain from trade in the allocation (T−i)∗. Let (T−i,−j)∗ denote the optimal allocation when seller i is removed from the trade and buyer j's bids are discarded, and let (V−i,−j)∗ denote the total gain from trade in the allocation (T−i,−j)∗.
Definition 5. Buyer Ceiling: the highest price the buyer should expect to pay, communicated to the buyer by
the mechanism. The buyer ceiling for player j who was matched with seller i on good Ai, i.e., Tj(Ai) = 1, is computed as: Hj = vj(i) + (V−i,−j)∗ − (V−i)∗. The buyer is instructed not to pay more than this price to his matched seller. Once the negotiation-range lower and upper bounds are computed for every matched pair, the mechanism reports the lower bound to the seller and the upper bound to the buyer. At this point each buyer-seller pair negotiates the final price and concludes the deal. A schematic description of the ZOPAS mechanism is given in Figure 1.
3.2 Analysis of the Mechanism
In this section we analyze the properties of the ZOPAS mechanism.
Theorem 1. The ZOPAS market negotiation-range mechanism is an incentive-compatible bilateral trade mechanism that is efficient, individually rational and budget balanced.
Clearly ZOPAS is an efficient polynomial-time mechanism. Let us show it satisfies the rest of the properties in the theorem.
Claim 1. ZOPAS is individually rational, i.e., the mechanism maintains a nonnegative utility surplus for all participants.
Proof. If a participant does not trade in the optimal allocation then his utility surplus is zero by definition. Consider a pair of buyer j and seller i matched in the optimal allocation T∗. The buyer's utility is at least vj(i) − Hj. Recall that Hj = vj(i) + (V−i,−j)∗ − (V−i)∗, so that vj(i) − Hj = (V−i)∗ − (V−i,−j)∗. Since the optimal gain from trade which includes j is at least that which does not, the utility is nonnegative: vj(i) − Hj ≥ 0. Now consider the seller i. Her utility surplus is at least Li − ci. Recall that Li = vj(i) + (V−j)∗ − V∗. If we remove from the optimal allocation T∗ the contribution of the buyer j - seller i pair, we are left with an allocation
which excludes j and has value V∗ − (vj(i) − ci). This implies that (V−j)∗ ≥ V∗ − vj(i) + ci, which implies that Li − ci ≥ 0.
The fact that ZOPAS is a budget-balanced mechanism follows from the following lemma, which ensures the validity of the negotiation range, i.e., that every seller's floor is below her matched buyer's ceiling. This ensures that they can close the deal at a final price which lies in this range.
Lemma 1. For every buyer j - seller i pair matched by the mechanism: Li ≤ Hj.
Proof. Recall that Li = vj(i) + (V−j)∗ − V∗ and Hj = vj(i) + (V−i,−j)∗ − (V−i)∗. To prove that Li ≤ Hj it is enough to show that
(V−i)∗ + (V−j)∗ ≤ V∗ + (V−i,−j)∗. (1)
The proof of (1) is based on a method which we apply several times in our analysis. We start with the allocations (T−i)∗ and (T−j)∗, which together have value equal to (V−i)∗ + (V−j)∗. We use them to create a pair of new valid allocations, by using the same pairs that were matched in the original allocations. This means that the sum of values of the new allocations is the same as that of the original pair of allocations. We also require that one of the new allocations does not include buyer j or seller i. This means that the sum of values of these new allocations is at most V∗ + (V−i,−j)∗, which proves (1). Let G be the bipartite graph where the nodes on one side represent the buyers and the nodes on the other side represent the sellers, and edge weights represent the gain from trade for the particular pair. The different allocations represent bipartite matchings in G. It will be convenient for the sake of our argument to think of the edges that belong to each of the matchings as being colored with a specific color representing that matching. Assign color 1 to the
edges in the matching (T−i)∗ and assign color 2 to the edges in the matching (T−j)∗. We claim that these edges can be recolored using colors 3 and 4 so that the new coloring represents allocations T (represented by color 3) and (T−i,−j) (represented by color 4). This implies that inequality (1) holds. Figure 2 illustrates the graph G and the colorings of the different matchings. Define an alternating path P starting at j. Let S1 be the seller matched to j in (T−i)∗ (if none exists then P is empty). Let B1 be the buyer matched to S1 in (T−j)∗, S2 the seller matched to B1 in (T−i)∗, B2 the buyer matched to S2 in (T−j)∗, and so on. This defines an alternating path P, starting at j, whose edges' colors alternate between colors 1 and 2 (starting with 1). This path ends either in a seller who is not matched in (T−j)∗ or in a buyer who is not matched in (T−i)∗. Since all sellers in this path are matched in (T−i)∗, seller i does not belong to P. This ensures that the edges in P may be colored by alternating colors 3 and 4 (starting with 3): except for the first edge, all others involve neither i nor j, and thus may be colored 4 and be part of an allocation (T−i,−j). We are left to recolor the edges that do not belong to P. Since none of these edges includes j, the edges that were colored 1, which are part of (T−i)∗, may now be colored 4 and be included in the allocation (T−i,−j). It is also clear that the edges that were colored 2, which are part of (T−j)∗, may now be colored 3 and be included in the allocation T. This completes the proof of the lemma.
3.3 Incentive Compatibility
The basic requirement in mechanism design is for an exchange mechanism to be incentive compatible. This means that its payment structure enforces that truth-telling is the players'
weakly dominant strategy; that is, the strategy by which the player states his true valuation results in utility greater than or equal to that of any other strategy.
Figure 2: Alternating path argument for Lemma 1 (Validity of the Negotiation Range) and Claim 2 (part of the Buyer's IC proof)
Figure 3: Key to Figure 2
The utility surplus is defined as the absolute difference between the player's bid and his price. Negotiation-range mechanisms assign bounds on the range of prices rather than the final price, and therefore the player's valuation only influences the minimum and maximum bounds on his utility. For a buyer, the minimum (bottom) utility is based on the top of the negotiation range (ceiling), and the maximum (top) utility is based on the bottom of the negotiation range (floor); for a seller it is the other way around. Therefore the basic natural requirement for negotiation-range mechanisms would be that stating the player's true valuation results in both a higher bottom utility and a higher top utility for the player, compared with other strategies. Unfortunately, this requirement is still too strong, and it is impossible (by [10]) for it to always hold. Therefore we slightly relax it as follows: we require that it hold whenever the false-valuation strategy changes the player's allocation. When the allocation stays unchanged we require instead that the player not be able to change his matched player's bound (e.g.
a buyer cannot change the seller's floor). This means that the only thing he can influence is his own bound, something he can alternatively achieve through negotiation. The following formally summarizes our incentive compatibility requirements for the negotiation-range mechanism.
Buyer's incentive compatibility:
• Let j be a buyer matched with seller i by the mechanism according to valuation vj, with assigned negotiation range (Li, Hj). Assume that when the mechanism is applied according to valuation v̂j, seller k ≠ i is matched with j and the negotiation range assigned is (L̂k, Ĥj). Then
vj(i) − Hj ≥ vj(k) − Ĥj. (2)
vj(i) − Li ≥ vj(k) − L̂k. (3)
• Let j be a buyer not matched by the mechanism according to valuation vj. Assume that when the mechanism is applied according to valuation v̂j, seller k is matched with j and the negotiation range assigned is (L̂k, Ĥj). Then
vj(k) − Ĥj ≤ vj(k) − L̂k ≤ 0. (4)
• Let j be a buyer matched with seller i by the mechanism according to valuation vj, and let the assigned bottom of the negotiation range (seller's floor) be Li. Assume that when the mechanism is applied according to valuation v̂j, the matching between i and j remains unchanged, and let the assigned bottom of the negotiation range (seller's floor) be L̂i. Then
L̂i = Li. (5)
Notice that the first inequality of (4) always holds for a valid negotiation-range mechanism (Lemma 1).
Seller's incentive compatibility:
• Let i be a seller not matched by the mechanism according to valuation ci. Assume that when the mechanism is applied according to valuation ĉi, buyer z is matched with i and the negotiation range assigned is (L̂i, Ĥz). Then
L̂i − ci ≤ Ĥz − ci ≤ 0. (6)
• Let i be a seller matched with buyer j by the mechanism according to
valuation ci, and let the assigned top of the negotiation range (buyer's ceiling) be Hj. Assume that when the mechanism is applied according to valuation ĉi, the matching between i and j remains unchanged, and let the assigned top of the negotiation range (buyer's ceiling) be Ĥj. Then
Ĥj = Hj. (7)
Notice that the first inequality of (6) always holds for a valid negotiation-range mechanism (Lemma 1). Observe that in the case of sellers in our setting, the case expressed by requirement (6) is the only one in which the seller may change the allocation to her benefit. In particular, it is not possible for a seller i who is matched in T∗ to change her buyer by reporting a false valuation. This follows from the observation that reducing the seller's valuation increases the gain from trade for the current allocation by at least as much as for any other allocation, whereas increasing the seller's valuation decreases the gain from trade for the current allocation by exactly the same amount as for any other allocation in which she is matched. Therefore, the only case in which the optimal allocation may change is when in the new allocation i is not matched, in which case her utility surplus is 0.
Theorem 2. ZOPAS is an incentive compatible negotiation-range mechanism.
Proof. We begin with the incentive compatibility for buyers. Consider a buyer j who is matched with seller i according to his true valuation v.
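The quantities used throughout these claims (V∗, (V−j)∗, (V−i)∗, (V−i,−j)∗, and the resulting floors Li and ceilings Hj) are all outputs of maximum weighted bipartite matchings, so they can be computed concretely. The following is a minimal sketch, not the paper's implementation: it assumes SciPy's Hungarian-method routine `linear_sum_assignment`; the function names (`optimal_gain`, `zopas_ranges`) and the trick of zeroing out a row or column to discard a player are our own illustrative choices.

```python
# Illustrative sketch of the ZOPAS price-range computation (our naming).
import numpy as np
from scipy.optimize import linear_sum_assignment

def optimal_gain(g, drop_buyer=None, drop_seller=None):
    """Optimal gain from trade, optionally discarding a buyer's bids
    and/or removing a seller's good (zeroed entries trade at no gain)."""
    g = g.copy()
    if drop_buyer is not None:
        g[drop_buyer, :] = 0.0
    if drop_seller is not None:
        g[:, drop_seller] = 0.0
    rows, cols = linear_sum_assignment(g, maximize=True)
    return g[rows, cols].sum()

def zopas_ranges(v, c):
    """v[j, i]: buyer j's valuation for seller i's good; c[i]: seller i's
    valuation.  Returns {(j, i): (L_i, H_j)} for every matched pair."""
    g = np.maximum(v - c[None, :], 0.0)   # pairwise gain from trade
    rows, cols = linear_sum_assignment(g, maximize=True)
    V = g[rows, cols].sum()               # optimal gain from trade V*
    ranges = {}
    for j, i in zip(rows, cols):
        if g[j, i] <= 0:                  # zero-gain pairs are not trades
            continue
        # Seller's floor: L_i = v_j(i) + (V_{-j})* - V*
        L = v[j, i] + optimal_gain(g, drop_buyer=j) - V
        # Buyer's ceiling: H_j = v_j(i) + (V_{-i,-j})* - (V_{-i})*
        H = (v[j, i] + optimal_gain(g, drop_buyer=j, drop_seller=i)
                     - optimal_gain(g, drop_seller=i))
        ranges[(j, i)] = (L, H)
    return ranges

# Two buyers, two sellers: costs c = (5, 3), valuations v = [[10, 6], [8, 7]].
print(zopas_ranges(np.array([[10., 6.], [8., 7.]]), np.array([5., 3.])))
```

On small instances one can check directly that the output ranges are valid (Li ≤ Hj, as in Lemma 1) and individually rational (Li ≥ ci and Hj ≤ vj(i), as in Claim 1).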
Consider that j reports instead a false valuation v̂ which results in a different allocation in which j is matched with seller k ≠ i. The following claim shows that a buyer j who changed his allocation due to a false declaration of his valuation cannot improve his top utility.
Claim 2. Let j be a buyer matched to seller i in T∗, and let k ≠ i be the seller matched to j in T̂∗. Then
vj(i) − Hj ≥ vj(k) − Ĥj. (8)
Proof. Recall that Hj = vj(i) + (V−i,−j)∗ − (V−i)∗ and Ĥj = v̂j(k) + (V̂−k,−j)∗ − (V̂−k)∗. Therefore vj(i) − Hj = (V−i)∗ − (V−i,−j)∗ and vj(k) − Ĥj = vj(k) − v̂j(k) + (V̂−k)∗ − (V̂−k,−j)∗. It follows that in order to prove (8) we need to show
(V̂−k)∗ + (V−i,−j)∗ ≤ (V−i)∗ + (V̂−k,−j)∗ + v̂j(k) − vj(k). (9)
Consider first the case where j is matched to i in (T̂−k)∗. If we remove this pair and instead match j with k, we obtain a matching which excludes i; if the gain from trade on the new pair is taken according to the true valuation then we get (V̂−k)∗ − (v̂j(i) − ci) + (vj(k) − ck) ≤ (V−i)∗. Now, since the optimal allocation T̂∗ matches j with k rather than with i, we have (V−i,−j)∗ + (v̂j(i) − ci) ≤ V̂∗ = (V̂−k,−j)∗ + (v̂j(k) − ck), where we have used that (V̂−i,−j)∗ = (V−i,−j)∗ since these allocations exclude j.
Adding up these two inequalities implies (9) in this case. It is left to prove (9) when j is not matched to i in (T̂−k)∗. In fact, in this case we prove the stronger inequality
(V̂−k)∗ + (V−i,−j)∗ ≤ (V−i)∗ + (V̂−k,−j)∗. (10)
It is easy to see that (10) indeed implies (9), since it follows from the fact that k is assigned to j in T̂∗ that v̂j(k) ≥ vj(k). The proof of (10) works as follows. We start with the allocations (T̂−k)∗ and (T−i,−j)∗, which together have value equal to (V̂−k)∗ + (V−i,−j)∗. We use them to create a pair of new valid allocations, by using the same pairs that were matched in the original allocations. This means that the sum of values of the new allocations is the same as that of the original pair. We also require that one of the new allocations does not include seller i and is based on the true valuation v, while the other does not include buyer j or seller k and is based on the false valuation v̂. This means that the sum of values of these new allocations is at most (V−i)∗ + (V̂−k,−j)∗, which proves (10). Let G be the bipartite graph where the nodes on one side represent the buyers and the nodes on the other side represent the sellers, and edge weights represent the gain from trade for the particular pair. The different allocations represent bipartite matchings in G. It will be convenient for the sake of our argument to think of the edges that belong to each of the matchings as being colored with a specific color representing that matching. Assign color 1 to the edges in the matching (T̂−k)∗ and assign color 2 to the edges in the matching (T−i,−j)∗. We claim that these edges can be recolored using colors 3 and 4 so that the new coloring represents
allocations (T−i) (represented by color 3) and (T̂−k,−j) (represented by color 4). This implies that inequality (10) holds. Figure 2 illustrates the graph G and the colorings of the different matchings. Define an alternating path P starting at j. Let S1 ≠ i be the seller matched to j in (T̂−k)∗ (if none exists then P is empty). Let B1 be the buyer matched to S1 in (T−i,−j)∗, S2 the seller matched to B1 in (T̂−k)∗, B2 the buyer matched to S2 in (T−i,−j)∗, and so on. This defines an alternating path P, starting at j, whose edges' colors alternate between colors 1 and 2 (starting with 1). This path ends either in a seller who is not matched in (T−i,−j)∗ or in a buyer who is not matched in (T̂−k)∗. Since all sellers in this path are matched in (T̂−k)∗, seller k does not belong to P. Since in this case S1 ≠ i and the rest of the sellers in P are matched in (T−i,−j)∗, seller i also does not belong to P. This ensures that the edges in P may be colored by alternating colors 3 and 4 (starting with 3). Since S1 ≠ i, we may use color 3 for the first edge and thus assign it to the allocation (T−i). All other edges involve neither i, j nor k, and thus may be either colored 4 and be part of an allocation (T̂−k,−j) or colored 3 and be part of an allocation (T−i), in an alternating fashion. We are left to recolor the edges that do not belong to P. Since none of these edges includes j, the edges that were colored 1, which are part of (T̂−k)∗, may now be colored 4 and be included in the allocation (T̂−k,−j). It is also clear that the edges that were colored 2, which are part of (T−i,−j)∗, may now be colored 3 and be included in the allocation (T−i). This completes the
proof of (10) and the claim.
The following claim shows that a buyer j who changed his allocation due to a false declaration of his valuation cannot improve his bottom utility. The proof is basically the standard VCG argument.
Claim 3. Let j be a buyer matched to seller i in T∗, and let k ≠ i be the seller matched to j in T̂∗. Then
vj(i) − Li ≥ vj(k) − L̂k. (11)
Proof. Recall that Li = vj(i) + (V−j)∗ − V∗, and L̂k = v̂j(k) + (V̂−j)∗ − V̂∗ = v̂j(k) + (V−j)∗ − V̂∗. Therefore vj(i) − Li = V∗ − (V−j)∗ and vj(k) − L̂k = vj(k) − v̂j(k) + V̂∗ − (V−j)∗. It follows that in order to prove (11) we need to show
V∗ ≥ vj(k) − v̂j(k) + V̂∗. (12)
The scenario of this claim occurs when j understates his value for Ai or overstates his value for Ak. Consider these two cases:
• v̂j(k) > vj(k): Since Ak was allocated to j in the allocation T̂∗, using the allocation of T̂∗ according to the true valuation gives an allocation of value U satisfying V̂∗ − v̂j(k) + vj(k) ≤ U ≤ V∗.
• v̂j(k) = vj(k) and v̂j(i) < vj(i): In this case (12) reduces to V∗ ≥ V̂∗. Since j is not allocated i in T̂∗, we have that T̂∗ is an allocation that uses only true valuations. From the optimality of T∗ we conclude that V∗ ≥ V̂∗.
Another case in which a buyer may try to improve his utility is when he does not win any good by stating his true valuation. He may give a false valuation under which he wins some good. The following claim shows that doing this is not beneficial to him.
Claim 4. Let j be a buyer not matched in T∗, and assume seller k is matched to j in T̂∗. Then vj(k)
− L̂_k ≤ 0.
Proof. The scenario of this claim occurs if j did not buy in the truth-telling allocation and overstates his value for A_k, i.e. v̂_j(k) > v_j(k) in his false valuation. Recall that L̂_k = v̂_j(k) + (V̂_{−j})* − V̂*. Thus we need to show that 0 ≥ v_j(k) − v̂_j(k) + V̂* − (V_{−j})*. Since j is not allocated in T*, we have (V_{−j})* = V*. Since j is allocated A_k in T̂*, using the allocation of T̂* according to the true valuations gives an allocation of value U satisfying V̂* − v̂_j(k) + v_j(k) ≤ U ≤ V*. Thus we can conclude that 0 ≥ v_j(k) − v̂_j(k) + V̂* − (V_{−j})*.
Finally, the following claim ensures that a buyer cannot influence the floor bound of the ZOPA for the good he wins.
Claim 5. Let j be a buyer matched to seller i in T*, and assume that T̂* = T*; then L̂_i = L_i.
Proof. Recall that L_i = v_j(i) + (V_{−j})* − V*, and L̂_i = v̂_j(i) + (V̂_{−j})* − V̂* = v̂_j(i) + (V_{−j})* − V̂*. Therefore we need to show that V̂* = V* + v̂_j(i) − v_j(i). Since j is allocated A_i in T*, using the allocation of T* according to the false valuations gives an allocation of value U satisfying V* − v_j(i) + v̂_j(i) ≤ U ≤ V̂*. Similarly, since j is allocated A_i in T̂*, using the allocation of T̂* according to the true valuations gives an allocation of value U satisfying V̂* − v̂_j(i) + v_j(i) ≤ U ≤ V*, which together with the previous inequality completes the proof.
This completes the analysis of the buyers' incentive compatibility. We now turn to prove the sellers' incentive compatibility
properties of our mechanism. The following claim handles the case where a seller who was not matched in T* falsely understates her valuation such that she gets matched in T̂*.
Claim 6. Let i be a seller not matched in T*, and assume buyer z is matched to i in T̂*. Then, Ĥ_z − c_i ≤ 0.
Proof. Recall that Ĥ_z = v_z(i) + (V̂^{−i}_{−z})* − (V̂^{−i})*. Since i is not matched in T* and (T̂^{−i})* involves only true valuations, we have (V̂^{−i})* = V*. Since i is matched with z in T̂*, the allocation T̂* can be obtained by adding the buyer z - seller i pair to (T̂^{−i}_{−z})*. It follows that V̂* = (V̂^{−i}_{−z})* + v_z(i) − ĉ_i. Thus, Ĥ_z = V̂* + ĉ_i − V*. Now, since i is matched in T̂*, using this allocation according to the true valuations gives an allocation of value U satisfying V̂* + ĉ_i − c_i ≤ U ≤ V*. Therefore Ĥ_z − c_i = V̂* + ĉ_i − V* − c_i ≤ 0.
Finally, the following simple claim ensures that a seller cannot influence the ceiling bound of the ZOPA for the good she sells.
Claim 7. Let i be a seller matched to buyer j in T*, and assume that T̂* = T*; then Ĥ_j = H_j.
Proof. Since (V̂^{−i}_{−j})* = (V^{−i}_{−j})* and (V̂^{−i})* = (V^{−i})*, it follows that Ĥ_j = v_j(i) + (V̂^{−i}_{−j})* − (V̂^{−i})* = v_j(i) + (V^{−i}_{−j})* − (V^{−i})* = H_j.
4. CONCLUSIONS AND EXTENSIONS
In this paper we suggest a way to deal with the impossibility of producing mechanisms which are efficient, individually rational, incentive compatible and budget balanced. To this aim we introduce the concept of
negotiation-range mechanisms, which avoid the problem by leaving the final determination of prices to a negotiation between the buyer and seller. The goal of the mechanism is to provide the initial range (ZOPA) for negotiation in a way that makes it beneficial for the participants to close the proposed deals. We present a negotiation-range mechanism that is efficient, individually rational, incentive compatible and budget balanced. The ZOPA produced by our mechanism is based on a natural adaptation of the VCG payment scheme in a way that promises valid negotiation ranges which permit a budget balanced allocation.
The basic question that we aimed to tackle seems very exciting: which properties can we expect a market mechanism to achieve? Are there different market models and requirements from the mechanisms that are more feasible than classic mechanism design goals? In the context of our negotiation-range model, it is natural to further study negotiation-based mechanisms in more general settings. A natural extension is that of a combinatorial market. Unfortunately, finding the optimal allocation in a combinatorial setting is NP-hard, and thus the problem of maintaining BB is compounded by the problem of maintaining IC when efficiency is approximated [1, 5, 6, 9, 11]. Applying the approach in this paper to develop negotiation-range mechanisms for combinatorial markets, even in restricted settings, seems a promising direction for research.
5. REFERENCES
[1] Y. Bartal, R. Gonen, and N. Nisan. Incentive Compatible Multi-Unit Combinatorial Auctions. In Proceedings of the 9th TARK, pages 72-87, June 2003.
[2] E. H. Clarke. Multipart Pricing of Public Goods. Public Choice, 2:17-33, 1971.
[3] J. Feigenbaum, C. Papadimitriou, and S. Shenker. Sharing the Cost of Multicast Transmissions. Journal of Computer and System Sciences, 63(1), 2001.
[4] A. Fiat, A. Goldberg, J. Hartline, and A.
Karlin. Competitive Generalized Auctions. In Proceedings of the 34th ACM Symposium on Theory of Computing, 2002.
[5] R. Gonen and D. Lehmann. Optimal Solutions for Multi-Unit Combinatorial Auctions: Branch and Bound Heuristics. In Proceedings of the ACM Conference on Electronic Commerce (EC'00), pages 13-20, October 2000.
[6] R. Gonen and D. Lehmann. Linear Programming Helps Solving Large Multi-Unit Combinatorial Auctions. In Proceedings of INFORMS 2001, November 2001.
[7] T. Groves. Incentives in Teams. Econometrica, 41:617-631, 1973.
[8] R. Lavi, A. Mu'alem, and N. Nisan. Towards a Characterization of Truthful Combinatorial Auctions. In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 2003.
[9] D. Lehmann, L. I. O'Callaghan, and Y. Shoham. Truth Revelation in Rapid, Approximately Efficient Combinatorial Auctions. In Proceedings of the First ACM Conference on Electronic Commerce, pages 96-102, November 1999.
[10] R. Myerson and M. Satterthwaite. Efficient Mechanisms for Bilateral Trading. Journal of Economic Theory, 28:265-281, 1983.
[11] N. Nisan and A. Ronen. Algorithmic Mechanism Design. In Proceedings of the 31st ACM Symposium on Theory of Computing, 1999.
[12] D. C. Parkes, J. Kalagnanam, and M. Eso. Achieving Budget-Balance with Vickrey-Based Payment Schemes in Exchanges. In Proceedings of the 17th International Joint Conference on Artificial Intelligence, pages 1161-1168, 2001.
[13] W.
Vickrey. Counterspeculation, Auctions, and Competitive Sealed Tenders. Journal of Finance, 16:8-37, 1961.

Negotiation-Range Mechanisms: Exploring the Limits of Truthful Efficient Markets

ABSTRACT
This paper introduces a new class of mechanisms based on negotiation between market participants. This model allows us to circumvent Myerson and Satterthwaite's impossibility result and present a bilateral market mechanism that is efficient, individually rational, incentive compatible and budget balanced in the single-unit heterogeneous setting. The underlying scheme makes this combination of desirable qualities possible by reporting a price range for each buyer-seller pair that defines a zone of possible agreements, while the final price is left open for negotiation.

1. INTRODUCTION
In this paper we introduce the concept of negotiation-based mechanisms in the context of the theory of efficient truthful markets. A market consists of multiple buyers and sellers who wish to exchange goods. The market's main objective is to produce an allocation of sellers' goods to buyers so as to maximize the total gain from trade.
* Supported in part by a grant from the Israeli National Science Foundation
(195/02).
A commonly studied model of participant behavior is taken from the field of economic mechanism design [3, 4, 11]. In this model each player has a private valuation function that assigns real values to each possible allocation. The algorithm motivates players to participate truthfully by handing payments to them. The mechanism in an exchange collects buyer bids and seller bids and clears the exchange by computing: (i) a set of trades, and (ii) the payments made and received by players. In designing a mechanism to compute trades and payments we must consider the bidding strategies of self-interested players, i.e. rational players that follow expected-utility maximizing strategies.
We set allocative efficiency as our primary goal. That is, the mechanism must compute a set of trades that maximizes gain from trade. In addition we require individual rationality (IR), so that all players have positive expected utility to participate; budget balance (BB), so that the exchange does not run at a loss; and incentive compatibility (IC), so that reporting the truth is a dominant strategy for each player.
Unfortunately, Myerson and Satterthwaite's (1983) well-known result demonstrates that in bilateral trade it is impossible to simultaneously achieve perfect efficiency, BB, and IR using an IC mechanism [10]. A unique approach to overcome Myerson and Satterthwaite's impossibility result was attempted by Parkes, Kalagnanam and Eso [12]. That work designs both a regular and a combinatorial bilateral trade mechanism (which imposes BB and IR) that approximates truth revelation and allocation efficiency.
In this paper we circumvent Myerson and Satterthwaite's impossibility result by introducing a new model of negotiation-range markets. A negotiation-range mechanism does not produce payment prices for the market participants. Rather, it assigns each buyer-seller pair a price range, called a Zone Of Possible Agreements (ZOPA). The buyer is provided with the high end
of the range and the seller with the low end of the range. This allows the trading parties to engage in negotiation over the final price with a guarantee that the deal is beneficial for both of them. The negotiation process is not considered part of the mechanism but is left up to the interested parties, or to some external mechanism, to perform. In effect, a negotiation-range mechanism operates as a mediator between the market participants, offering them the grounds on which to finalize the terms of the trade by themselves. This concept is natural in many real-world market environments, such as the real estate market.
We focus on the single-unit heterogeneous setting: n sellers each offer one unique good by placing sealed bids specifying their willingness to sell, and m buyers, each interested in buying a single good, place sealed bids specifying their willingness to pay for each good they may be interested in. Our main result is a single-unit heterogeneous bilateral trade negotiation-range mechanism (ZOPAS) that is efficient, individually rational, incentive compatible and budget balanced.
Our result does not contradict Myerson and Satterthwaite's important theorem. Myerson and Satterthwaite's proof relies on a theorem assuring that in two different efficient IC markets, if the sellers have the same upper bound utility then they will receive the same prices in each market, and if the buyers have the same lower bound utility then they will receive the same prices in each market. Our single-unit heterogeneous mechanism bypasses this theorem by producing a price range, defined by a seller's floor and a buyer's ceiling, for each pair of matched players. In our market mechanism the seller's upper bound utility may be the same while the seller's floor is different, and the buyer's lower bound utility may be the same while the buyer's ceiling is different. Moreover, the final price is not fixed by the mechanism at all. Instead, it is determined
by an independent negotiation between the buyer and seller.
More specifically, in a negotiation-range mechanism the range of prices each matched pair is given is resolved by a negotiation stage in which a final price is determined. This negotiation stage is crucial for our mechanism to be IC. Intuitively, a negotiation-range mechanism is incentive compatible if truth-telling promises the best ZOPA from the point of view of the player in question. That is, he would tell the truth if this strategy maximizes the upper and lower bounds on his utility as expressed by the ZOPA boundaries. Yet, when carefully examined, it turns out that it is impossible (by [10]) for this goal to always hold. This is simply because such a mechanism could easily be modified to determine final prices for the players (e.g. by taking the average of the range's boundaries). Here, the negotiation stage comes into play. We show that if the above utility-maximizing condition does not hold, then the player cannot influence the negotiation bound that is assigned to his deal partner, no matter what value he declares. This means that the only thing he may achieve by reporting a false valuation is modifying his own negotiation bound, something that he could alternatively achieve by reporting his true valuation and incorporating the effect of the modified negotiation bound into his negotiation strategy. This eliminates the benefit of reporting false valuations and allows our mechanism to compute the optimal gain from trade according to the players' true values.
The problem of computing the optimal allocation which maximizes gain from trade can be conceptualized as the problem of finding the maximum weighted matching in a weighted bipartite graph connecting buyers and sellers, where each edge in the graph is assigned a weight equal to the difference between the respective buyer bid and seller bid. It is well known that this problem can be solved efficiently in
polynomial time.\nVCG IC payment schemes [2, 7, 13] support efficient and IR bilateral trade, but not simultaneously BB.\nOur particular approach adapts the VCG payment scheme to achieve budget balance.\nThe philosophy of the VCG payment schemes in bilateral trade is that the buyer pays the seller's opportunity cost of not selling the good to another buyer and not keeping the good to herself.\nThe seller is paid, in addition to the buyer's price, a compensation for the damage the mechanism did to the seller by not extracting the buyer's full bid.\nOur philosophy is a bit different: the seller is paid at least her opportunity cost of not selling the good to another buyer and not keeping the good for herself.\nThe buyer pays at most his alternate seller's opportunity cost of not selling the good to another buyer and not keeping the alternate good for herself.\nThe rest of this paper is organized as follows.\nIn Section 2 we describe our model and definitions.\nIn Section 3 we present the single-unit heterogeneous negotiation-range mechanism and show that it is efficient, IR, IC and BB.\nFinally, we conclude with a discussion in Section 4.\n2.\nNEGOTIATION MARKETS PRELIMINARIES\nLet H denote the set of players, N the set of n selling players, and M the set of m buying players, where H = N ∪ M.
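The optimal-allocation computation described above, a maximum-weight bipartite matching between buyers and sellers with edge weights equal to the gain from trade, can be made concrete with a small sketch. The instance below is hypothetical (two sellers, two buyers; all names and numbers invented), and the brute-force search stands in for the polynomial-time algorithm (e.g., the Hungarian Method) that a real implementation would use.

```python
from itertools import permutations

# Hypothetical instance (names and numbers invented for illustration).
costs = {"s1": 10, "s2": 12}              # c_i: seller i's true valuation
values = {"b1": {"s1": 20, "s2": 15},     # v_j(i): buyer j's valuation for
          "b2": {"s1": 18, "s2": 16}}     # seller i's good

def optimal_gain(costs, values):
    """Max gain from trade over all partial buyer-seller matchings.

    Brute force, suitable only for tiny instances; the mechanism itself
    would use a polynomial-time maximum-weight matching algorithm.
    """
    buyers = list(values)
    # Pad with None so a buyer may stay unmatched (the seller keeps the good).
    slots = list(costs) + [None] * len(buyers)
    best, best_match = 0, {}
    for assign in permutations(slots, len(buyers)):
        # Only keep pairs with strictly positive gain from trade.
        match = {j: i for j, i in zip(buyers, assign)
                 if i is not None and values[j][i] > costs[i]}
        gain = sum(values[j][i] - costs[i] for j, i in match.items())
        if gain > best:
            best, best_match = gain, match
    return best, best_match

V, T = optimal_gain(costs, values)
print(V, T)   # optimal gain 14: b1-s1 (20-10) and b2-s2 (16-12)
```

Here the allocation b1-s1, b2-s2 yields gain 10 + 4 = 14, beating the alternative b1-s2, b2-s1 (3 + 8 = 11).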
Let T = {1, ..., t} denote the set of goods.\nLet Ti ∈ {−1, 0, 1}^t denote an exchange vector for a trade, such that player i buys goods {A ∈ T | Ti(A) = 1} and sells goods {A ∈ T | Ti(A) = −1}.\nLet T = (T1, ..., T|H|) denote the complete trade between all players.\nWe view T as describing the allocation of goods by the mechanism to the buyers and sellers.\nIn the single-unit heterogeneous setting every good belongs to a specific seller, and every buyer is interested in buying one good.\nThe buyer may bid for several or all goods.\nAt the end of the auction every good is either assigned to one of the buyers who bid for it or kept unsold by the seller.\nIt is convenient to assume the sets of buyers and sellers are disjoint (though it is not required), i.e., N ∩ M = ∅.\nEach seller i is associated with exactly one good Ai, for which she has true valuation ci, which expresses the price at which it is beneficial for her to sell the good.\nIf the seller reports a false valuation in an attempt to improve the auction results for her, this valuation is denoted ĉi.\nA buyer has a valuation vector describing his valuation for each of the goods according to their owner.\nSpecifically, vj(k) denotes buyer j's valuation for good Ak.\nSimilarly, if he reports a false valuation it is denoted v̂j(k).\nIf buyer j is matched by the mechanism with seller i then Ti(Ai) = −1 and Tj(Ai) = 1.\nNotice that in our setting, for every k ≠ i, Ti(Ak) = 0 and Tj(Ak) = 0, and also for every z ≠ j, Tz(Ai) = 0.\nFor a matched buyer j - seller i pair, the gain from trade on the deal is defined as vj(i) − ci.\nGiven an allocation T, the gain from trade associated with T is the sum of vj(i) − ci over all matched buyer j - seller i pairs in T.\nLet T* denote the optimal allocation which maximizes the gain from trade, computed according to the players' true valuations.\nLet V* denote the optimal gain from trade associated with this allocation.\nWhen players report false valuations we use T̂* and V̂* to denote the
optimal allocation and gain from trade, respectively, when computed according to the reported valuations.\nWe are interested in the design of negotiation-range mechanisms.\nIn contrast to a standard auction mechanism, where the buyer and seller are provided with the prices they should pay, the goal of a negotiation-range mechanism is to provide the players with a range of prices within which they can negotiate the final terms of the deal by themselves.\nThe mechanism provides the buyer with the upper bound on the range and the seller with the lower bound on the range.\nThis gives each of them a promise that it will be beneficial for them to close the deal, but does not provide information about the other player's terms of negotiation.\nThe basic assumption is that participants in the auction are self-interested players.\nThat is, their main goal is to maximize their expected utility.\nThe utility for a buyer who does not participate in the trade is 0.\nIf he does win some good, his utility is the surplus between his valuation for that good and the price he pays.\nFor a seller, if she keeps the good unsold, her utility is just her valuation of the good, and the surplus is 0.\nIf she gets to sell it, her utility is the price she is paid for it, and the surplus is the difference between this price and her valuation.\nSince negotiation-range mechanisms assign bounds on the range of prices rather than the final price, it is useful to define the upper and lower bounds on the players' utilities induced by the range's limits.\n• The buyer's top utility is vj(i) − L, and the buyer's bottom utility is vj(i) − H.
• The seller's top utility is H, with surplus H − ci, and the seller's bottom utility is L, with surplus L − ci.\n3.\nTHE SINGLE-UNIT HETEROGENEOUS MECHANISM (ZOPAS)\n3.1 Description of the Mechanism\nZOPAS is a negotiation-range mechanism: it finds the optimal allocation T* and uses it to define a ZOPA for each buyer-seller pair.\nThe first stage in applying the mechanism is for the buyers and sellers to submit their sealed bids.\nThe mechanism then allocates buyers to sellers by computing the allocation T*, which results in the optimal gain from trade V*, and defines a ZOPA for each buyer-seller pair.\nFinally, buyers and sellers use the ZOPA to negotiate a final price.\nComputing T* involves solving the maximum weighted bipartite matching problem for the complete bipartite graph Kn,m constructed by placing the buyers on one side of the graph and the sellers on the other, and giving the edge between buyer j and seller i weight equal to vj(i) − ci.\nFigure 1: The ZOPAS mechanism\nThe maximum weighted matching problem is solvable in polynomial time (e.g., using the Hungarian Method).\nThis results in a matching between buyers and sellers that maximizes gain from trade.\nThe next step is to compute for each buyer-seller pair a seller's floor, which provides the lower bound of the ZOPA for this pair, and assigns it to the seller.\nA seller's floor is computed by calculating the difference between the total gain from trade when the buyer is excluded and the total gain from trade of the other participants when the buyer is included (the VCG principle).\nLet (T−j)* denote the optimal allocation when buyer j's bids are discarded.\nDenote by (V−j)* the total gain from trade in the allocation (T−j)*.\nThe seller's floor for seller i matched with buyer j is then Li = vj(i) + (V−j)* − V*.\nThe seller is instructed not to accept less than this price from her matched buyer.\nNext, the mechanism computes for each buyer-seller pair a buyer ceiling, which provides the upper bound of the ZOPA for this pair, and assigns
it to the buyer.\nEach buyer's ceiling is computed by removing the buyer's matched seller and calculating the difference between the total gain from trade when the buyer is excluded and the total gain from trade of the other participants when the buyer is included.\nLet (T−i) denote the optimal allocation when seller i is removed from the trade.\nDenote by (V−i) the total gain from trade in the allocation (T−i).\nLet (T−i−j) denote the optimal allocation when seller i is removed from the trade and buyer j's bids are discarded.\nDenote by (V−i−j) the total gain from trade in the allocation (T−i−j).\nDEFINITION 5.\nBuyer Ceiling: The highest price the buyer should expect to pay, communicated to the buyer by the mechanism.\nThe buyer ceiling for player j who was matched with seller i on good Ai, i.e., Tj(Ai) = 1, is computed as: Hj = vj(i) + (V−i−j) − (V−i).\nThe buyer is instructed not to pay more than this price to his matched seller.\nOnce the negotiation range lower bound and upper bound are computed for every matched pair, the mechanism reports the lower bound price to the seller and the upper bound price to the buyer.\nAt this point each buyer-seller pair negotiates the final price and concludes the deal.\nA schematic description of the ZOPAS mechanism is given in Figure 1: find the optimal allocation T* by computing the maximum weighted bipartite matching for the bipartite graph of buyers and sellers, with edge weights equal to the gain from trade; then, for every matched buyer j and every seller i of good Ai, report to seller i her floor Li and identify her matched buyer j, and report to buyer j his ceiling Hj and identify his matched seller i; finally, i and j negotiate the good's final price.\n3.2 Analysis of the Mechanism\nIn this section we analyze the properties of the ZOPAS mechanism.\nTHEOREM 1.\nThe ZOPAS market negotiation-range mechanism is an incentive-compatible bilateral trade mechanism that is efficient, individually
rational and budget balanced.\nClearly ZOPAS is an efficient polynomial-time mechanism.\nLet us show it satisfies the rest of the properties in the theorem.\nCLAIM 1.\nZOPAS is individually rational, i.e., the mechanism maintains nonnegative utility surplus for all participants.\nPROOF.\nIf a participant does not trade in the optimal allocation then his utility surplus is zero by definition.\nConsider a pair of buyer j and seller i which are matched in the optimal allocation T.\nThen the buyer's utility is at least vj(i) − Hj.\nRecall that Hj = vj(i) + (V−i−j) − (V−i).\nSince the optimal gain from trade which includes j is higher than that which does not, we have that the utility is nonnegative: vj(i) − Hj ≥ 0.\nNow, consider the seller i.\nHer utility surplus is at least Li − ci.\nRecall that Li = vj(i) + (V−j) − V.\nIf we remove from the optimal allocation T the contribution of the buyer j - seller i pair, we are left with an allocation which excludes j, and has value V − (vj(i) − ci).\nThis implies that (V−j) ≥ V − vj(i) + ci, which implies that Li − ci ≥ 0.\nThe fact that ZOPAS is a budget-balanced mechanism follows from the following lemma, which ensures the validity of the negotiation range, i.e., that every seller's floor is below her matched buyer's ceiling.\nThis ensures that they can close the deal at a final price which lies in this range.\nLEMMA 1.\n(Validity of the Negotiation Range) For every buyer j matched with seller i, Li ≤ Hj.\nPROOF.\nRecall that Li = vj(i) + (V−j) − V and Hj = vj(i) + (V−i−j) − (V−i).\nThus it suffices to prove that (V−i) + (V−j) ≤ V + (V−i−j).\n(1)\nThe proof of (1) is based on a method which we apply several times in our analysis.\nWe start with the allocations (T−i) and (T−j), which together have value equal to (V−i) + (V−j).\nWe now use them to create a pair of new valid allocations, by using the same pairs that were matched in the original allocations.\nThis means that the sum of values of the new allocations is the same as for the original pair of allocations.\nWe also require that one of the new allocations does not include buyer j or seller i.\nThis means
that the sum of values of these new allocations is at most V + (V−i−j), which proves (1).\nLet G be the bipartite graph where the nodes on one side of G represent the buyers and the nodes on the other side represent the sellers, and edge weights represent the gain from trade for the particular pair.\nThe different allocations represent bipartite matchings in G.\nIt will be convenient for the sake of our argument to think of the edges that belong to each of the matchings as being colored with a specific color representing this matching.\nAssign color 1 to the edges in the matching (T−i) and assign color 2 to the edges in the matching (T−j).\nWe claim that these edges can be recolored using colors 3 and 4 so that the new coloring represents allocations T (represented by color 3) and (T−i−j) (represented by color 4).\nThis implies that inequality (1) holds.\nFigure 2 illustrates the graph G and the colorings of the different matchings.\nDefine an alternating path P starting at j.
Let S1 be the seller matched to j in (T−i) (if none exists then P is empty).\nLet B1 be the buyer matched to S1 in (T−j), S2 be the seller matched to B1 in (T−i), B2 be the buyer matched to S2 in (T−j), and so on.\nThis defines an alternating path P, starting at j, whose edges' colors alternate between colors 1 and 2 (starting with 1).\nThis path ends either in a seller who is not matched in (T−j) or in a buyer who is not matched in (T−i).\nSince all sellers in this path are matched in (T−i), we have that seller i does not belong to P.\nThis ensures that edges in P may be colored by alternating colors 3 and 4 (starting with 3): except for the first edge, all others do not involve i or j, and thus the edges colored 4 may be part of an allocation (T−i−j).\nWe are left to recolor the edges that do not belong to P.\nSince none of these edges includes j, we have that the edges that were colored 1, which are part of (T−i), may now be colored 4, and be included in the allocation (T−i−j).\nIt is also clear that the edges that were colored 2, which are part of (T−j), may now be colored 3, and be included in the allocation T.\nThis completes the proof of the lemma.\n3.3 Incentive Compatibility\nThe basic requirement in mechanism design is for an exchange mechanism to be incentive compatible.\nThis means that its payment structure enforces that truth-telling is the players' weakly dominant strategy, that is, that the strategy by which the player states his true valuation results in utility at least as large as that of any other strategy.\nFigure 2: Alternating path argument for Lemma 1 (Validity of the Negotiation Range) and Claim 2 (part of Buyer's IC proof)\nFigure 3: Key to Figure 2\nThe utility surplus is defined as the absolute difference between the player's bid and his price.\nNegotiation-range mechanisms assign bounds on the range of prices rather than the final price, and therefore
the player's valuation only influences the minimum and maximum bounds on his utility.\nFor a buyer, the minimum (bottom) utility is based on the top of the negotiation range (ceiling), and the maximum (top) utility is based on the bottom of the negotiation range (floor).\nFor a seller it is the other way around.\nTherefore the basic natural requirement from negotiation-range mechanisms would be that stating the player's true valuation results in both a higher bottom utility and a higher top utility for the player, compared with other strategies.\nUnfortunately, this requirement is still too strong and it is impossible (by [10]) that this will always hold.\nTherefore we slightly relax it as follows: we require that this holds when the false-valuation-based strategy changes the player's allocation.\nWhen the allocation stays unchanged we require instead that the player not be able to change his matched player's bound (e.g., a buyer cannot change the seller's floor).\nThis means that the only thing he can influence is his own bound, something that he can alternatively achieve through means of negotiation.\nThe following formally summarizes our incentive compatibility requirements from the negotiation-range mechanism.\nBuyer's incentive compatibility:\n• Let j be a buyer matched with seller i by the mechanism according to valuation vj, and let the negotiation range assigned be (Li, Hj).\nAssume that when the mechanism is applied according to valuation v̂j, seller k ≠ i is matched with j and the negotiation range assigned is (L̂k, Ĥj).\nThen vj(i) − Li ≥ vj(k) − L̂k and vj(i) − Hj ≥ vj(k) − Ĥj.\n• Let j be a buyer not matched by the mechanism according to valuation vj.\nAssume that when the mechanism is applied according to valuation v̂j, seller k is matched with j and the negotiation range assigned is (L̂k, Ĥj).\nThen vj(k) − Ĥj ≤ vj(k) − L̂k ≤ 0.\n(4)\n• Let j be a buyer matched with seller i by the mechanism according to valuation vj and let the assigned bottom of the negotiation range (seller's floor) be
Li.\nAssume that when the mechanism is applied according to valuation v̂j, the matching between i and j remains unchanged, and let the assigned bottom of the negotiation range (seller's floor) be L̂i.\nThen L̂i = Li.\nNotice that the first inequality of (4) always holds for a valid negotiation range mechanism (Lemma 1).\nSeller's incentive compatibility:\n• Let i be a seller not matched by the mechanism according to valuation ci.\nAssume that when the mechanism is applied according to valuation ĉi, buyer z is matched with i and the negotiation range assigned is (L̂i, Ĥz).\nThen L̂i − ci ≤ Ĥz − ci ≤ 0.\n(6)\n• Let i be a seller matched with buyer j by the mechanism according to valuation ci, and let the assigned top of the negotiation range (buyer's ceiling) be Hj.\nAssume that when the mechanism is applied according to valuation ĉi, the matching between i and j remains unchanged, and let the assigned top of the negotiation range (buyer's ceiling) be Ĥj.\nThen Ĥj = Hj.\n(7)\nNotice that the first inequality of (6) always holds for a valid negotiation range mechanism (Lemma 1).\nObserve that in the case of sellers in our setting, the case expressed by requirement (6) is the only case in which the seller may change the allocation to her benefit.\nIn particular, it is not possible for seller i who is matched in T to change her buyer by reporting a false valuation.\nThis fact simply follows from the observation that reducing the seller's valuation increases the gain from trade for the current allocation by at least as much as for any other allocation, whereas increasing the seller's valuation decreases the gain from trade for the current allocation by exactly the same amount as for any other allocation in which it is matched.\nTherefore, the only case in which the optimal allocation may change is when i is not matched in the new allocation, in which case her utility surplus is 0.\nPROOF.\nWe begin with the incentive compatibility for buyers.\nConsider a buyer j who is matched with seller i according to his true valuation v.
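The quantities recalled throughout these proofs, the seller's floor Li = vj(i) + (V−j) − V and the buyer's ceiling Hj = vj(i) + (V−i−j) − (V−i), can be sanity-checked numerically on a toy instance. The following is a minimal brute-force sketch on a hypothetical two-seller, two-buyer market (all names and numbers invented), verifying Lemma 1 (Li ≤ Hj) and the individual-rationality bounds of Claim 1:

```python
from itertools import permutations

# Hypothetical market (names and numbers invented for illustration).
costs = {"s1": 10, "s2": 12}
values = {"b1": {"s1": 20, "s2": 15},
          "b2": {"s1": 18, "s2": 16}}

def gain(drop_buyers=(), drop_sellers=()):
    """Brute-force optimal gain from trade with some players removed."""
    sellers = [i for i in costs if i not in drop_sellers]
    buyers = [j for j in values if j not in drop_buyers]
    # Pad with None so a buyer may stay unmatched.
    slots = sellers + [None] * len(buyers)
    best = 0
    for assign in permutations(slots, len(buyers)):
        best = max(best, sum(values[j][i] - costs[i]
                             for j, i in zip(buyers, assign)
                             if i is not None and values[j][i] > costs[i]))
    return best

# In this instance the optimal allocation matches b1 with s1 (checked by hand).
j, i = "b1", "s1"
V = gain()                                                 # optimal gain V
L = values[j][i] + gain(drop_buyers=[j]) - V               # seller's floor Li
H = values[j][i] + gain(drop_buyers=[j], drop_sellers=[i]) \
    - gain(drop_sellers=[i])                               # buyer's ceiling Hj
assert L <= H                  # Lemma 1: the negotiation range is valid
assert values[j][i] - H >= 0   # buyer's bottom utility is nonnegative
assert L - costs[i] >= 0       # seller's bottom surplus is nonnegative
print(L, H)
```

Here V = 14, and the pair (b1, s1) is handed the range [L, H] = [14, 20] within which to negotiate; any final price in this range leaves both players with nonnegative surplus.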
Consider that j reports instead a false valuation v̂ which results in a different allocation in which j is matched with seller k ≠ i.\nThe following claim shows that a buyer j who changed his allocation due to a false declaration of his valuation cannot improve his top utility.\nCLAIM 2.\nLet j be a buyer matched to seller i in T, and let k ≠ i be the seller matched to j in T̂.\nThen j's top utility is not improved; call this inequality (9).\nConsider first the case where j is matched to i in (T̂−k).\nIf we remove this pair and instead match j with k, we obtain a matching which excludes i; if the gain from trade on the new pair is taken according to the true valuation then we get (V̂−k) − (v̂j(i) − ci) + (vj(k) − ck) ≤ (V−i).\nNow, since the optimal allocation T̂ matches j with k rather than with i, we obtain a second inequality, in which we use that (V̂−i−j) = (V−i−j) since these allocations exclude j.\nAdding up these two inequalities implies (9) in this case.\nIt is left to prove (9) when j is not matched to i in (T̂−k).\nIn fact, in this case we prove the stronger inequality (V̂−k) + (V−i−j) ≤ (V−i) + (V̂−k−j).\n(10)\nIt is easy to see that (10) indeed implies (9), since it follows from the fact that k is assigned to j in T̂ that v̂j(k) ≥ vj(k).\nThe proof of (10) works as follows.\nWe start with the allocations (T̂−k) and (T−i−j), which together have value equal to (V̂−k) + (V−i−j).\nWe now use them to create a pair of new valid allocations, by using the same pairs that were matched in the original allocations.\nThis means that the sum of values of the new allocations is the same as for the original pair of allocations.\nWe also require that one of the new allocations does not include seller i and is based on the true valuation v, while the other allocation does not include buyer j or seller k and is based on the false valuation v̂.\nThis means that the sum of values of these new allocations is at most (V−i) + (V̂−k
− j), which proves (10).\nLet G be the bipartite graph where the nodes on one side of G represent the buyers and the nodes on the other side represent the sellers, and edge weights represent the gain from trade for the particular pair.\nThe different allocations represent bipartite matchings in G.\nIt will be convenient for the sake of our argument to think of the edges that belong to each of the matchings as being colored with a specific color representing this matching.\nAssign color 1 to the edges in the matching (T̂−k) and assign color 2 to the edges in the matching (T−i−j).\nWe claim that these edges can be recolored using colors 3 and 4 so that the new coloring represents allocations (T−i) (represented by color 3) and (T̂−k−j) (represented by color 4).\nThis implies that inequality (10) holds.\nFigure 2 illustrates the graph G and the colorings of the different matchings.\nDefine an alternating path P starting at j. Let S1 ≠ i be the seller matched to j in (T̂−k) (if none exists then P is empty).\nLet B1 be the buyer matched to S1 in (T−i−j), S2 be the seller matched to B1 in (T̂−k), B2 be the buyer matched to S2 in (T−i−j), and so on.\nThis defines an alternating path P, starting at j, whose edges' colors alternate between colors 1 and 2 (starting with 1).\nThis path ends either in a seller who is not matched in (T−i−j) or in a buyer who is not matched in (T̂−k).\nSince all sellers in this path are matched in (T̂−k), we have that seller k does not belong to P.\nSince in this case S1 ≠ i and the rest of the sellers in P are matched in (T−i−j), we have that seller i does not belong to P either.\nThis ensures that edges in P may be colored by alternating colors 3 and 4 (starting with 3).\nSince S1 ≠ i, we may use color 3 for the first edge and thus assign it to the allocation (T−i).\nAll other edges do not
involve i, j or k, and thus may be either colored 4 and be part of an allocation (T̂−k−j) or colored 3 and be part of an allocation (T−i), in an alternating fashion.\nWe are left to recolor the edges that do not belong to P.\nSince none of these edges includes j, we have that the edges that were colored 1, which are part of (T̂−k), may now be colored 4, and be included in the allocation (T̂−j−k).\nIt is also clear that the edges that were colored 2, which are part of (T−i−j), may now be colored 3, and be included in the allocation (T−i).\nThis completes the proof of (10) and the claim.\nThe following claim shows that a buyer j who changed his allocation due to a false declaration of his valuation cannot improve his bottom utility.\nThe proof is basically the standard VCG argument.\nCLAIM 3.\nLet j be a buyer matched to seller i in T, and k ≠ i be the seller matched to j in T̂.\nThen j's bottom utility is not improved (inequality (11)).\nIt follows that in order to prove (11) we need to show V ≥ V̂ − v̂j(k) + vj(k).\n(12)\nThe scenario of this claim occurs when j understates his value for Ai or overstates his value for Ak.\nConsider these two cases:\n• v̂j(k) > vj(k): Since Ak was allocated to j in the allocation T̂, we have that using the allocation of T̂ according to the true valuation gives an allocation of value U satisfying V̂ − v̂j(k) + vj(k) ≤ U ≤ V.\n• v̂j(k) = vj(k) and v̂j(i)
< vj(i): In this case (12) reduces to V ≥ V̂.\nSince j is not allocated i in T̂, we have that T̂ is an allocation that uses only true valuations.\nFrom the optimality of T we conclude that V ≥ V̂.\nAnother case in which a buyer may try to improve his utility is when he does not win any good by stating his true valuation.\nHe may give a false valuation under which he wins some good.\nThe following claim shows that doing this is not beneficial to him.\nCLAIM 4.\nLet j be a buyer not matched in T, and assume seller k is matched to j in T̂.\nThen vj(k) − L̂k ≤ 0.\nCLAIM 5.\nLet j be a buyer matched to seller i in T, and assume that T̂ = T; then L̂i = Li.\nPROOF.\nRecall that Li = vj(i) + (V−j) − V, and L̂i = v̂j(i) + (V̂−j) − V̂ = v̂j(i) + (V−j) − V̂.\nTherefore we need to show that V̂ = V + v̂j(i) − vj(i).\nSince j is allocated Ai in T, we have that using the allocation of T according to the false valuation gives an allocation of value U satisfying V − vj(i) + v̂j(i) ≤ U ≤ V̂.\nSimilarly, since j is allocated Ai in T̂, we have that using the allocation of T̂ according to the true valuation gives an allocation of value U satisfying V̂ − v̂j(i) + vj(i) ≤ U ≤ V, which together with the previous inequality completes the proof.\nThis completes the analysis of the buyer's incentive compatibility.\nWe now turn to prove the seller's incentive compatibility properties of our mechanism.\nThe following claim handles the case where a seller that was not matched in T falsely understates her valuation such that she gets matched in T̂.\nCLAIM 6.\nLet i be a seller not matched in T, and assume buyer z is matched to i in T̂.\nThen inequality (6) holds.\nSince i is not matched in T and (T̂−i) involves only true valuations, we have that (V̂−i) = V.\nSince i is matched with z in T̂, it can be obtained by adding the buyer z - seller i pair to (T̂−z−
i).\nIt follows that V̂ = (V̂−z−i) + vz(i) − ĉi.\nSince i is matched in T̂, using this allocation according to the true valuation gives an allocation of value U satisfying U = V̂ + ĉi − ci ≤ V; thus V̂ + ĉi − V − ci ≤ 0.\nFinally, the following simple claim ensures that a seller cannot influence the ceiling bound of the ZOPA for the good she sells.\nCLAIM 7.\nLet i be a seller matched to buyer j in T, and assume that T̂ = T; then Ĥj = Hj.\nPROOF.\nSince (V̂−j−i) = (V−j−i) and (V̂−i) = (V−i), it follows that Ĥj = vj(i) + (V̂−i−j) − (V̂−i) = vj(i) + (V−i−j) − (V−i) = Hj.\nPROOF (of CLAIM 4).\nThe scenario of this claim occurs if j did not buy in the truth-telling allocation and overstates his value for Ak, v̂j(k) > vj(k), in his false valuation.\nRecall that L̂k = v̂j(k) + (V̂−j) − V̂.\nThus we need to show that 0 ≥ vj(k) − v̂j(k) + V̂ − (V−j).\nSince j is not allocated in T, (V−j) = V.\nSince j is allocated Ak in T̂, we have that using the allocation of T̂ according to the true valuation gives an allocation of value U satisfying V̂ − v̂j(k) + vj(k) ≤ U ≤ V.\nThus we can conclude that 0 ≥ vj(k) − v̂j(k) + V̂ − (V−j).\nFinally, Claim 5 above ensures that a buyer cannot influence the floor bound of the ZOPA for the good he wins.\n4.\nCONCLUSIONS AND EXTENSIONS\nIn this paper we suggest a way to deal with the impossibility of producing mechanisms which are efficient, individually rational, incentive compatible and budget balanced.\nTo this aim we introduce the concept of negotiation-range mechanisms, which avoid the problem by leaving the final determination of prices to a negotiation between the buyer and seller.\nThe goal of the mechanism is to provide the initial range (ZOPA) for negotiation in a way that makes it beneficial for the participants to close the proposed deals.\nWe present a negotiation-range mechanism that is efficient, individually rational, incentive compatible and budget balanced.\nThe ZOPA produced by our
mechanism is based on a natural adaptation of the VCG payment scheme in a way that promises valid negotiation ranges which permit a budget-balanced allocation.\nThe basic question that we aimed to tackle seems very exciting: which properties can we expect a market mechanism to achieve?\nAre there different market models and requirements from the mechanisms that are more feasible than classic mechanism design goals?\nIn the context of our negotiation-range model, it is natural to further study negotiation-based mechanisms in more general settings.\nA natural extension is that of a combinatorial market.\nUnfortunately, finding the optimal allocation in a combinatorial setting is NP-hard, and thus the problem of maintaining BB is compounded by the problem of maintaining IC when efficiency is approximated [1, 5, 6, 9, 11].\nApplying the approach in this paper to develop negotiation-range mechanisms for combinatorial markets, even in restricted settings, seems a promising direction for research.
Selfish Caching in Distributed Systems: A Game-Theoretic Analysis\nByung-Gon Chun bgchun@cs.berkeley.edu, Kamalika Chaudhuri kamalika@cs.berkeley.edu, Hoeteck Wee hoeteck@cs.berkeley.edu, Marco Barreno barreno@cs.berkeley.edu, Christos H. Papadimitriou christos@cs.berkeley.edu, John Kubiatowicz kubitron@cs.berkeley.edu\nComputer Science Division, University of California, Berkeley\nABSTRACT We analyze replication of resources by server nodes that act selfishly, using a game-theoretic approach.\nWe refer to this as the selfish caching problem.\nIn our model, nodes incur either cost for replicating resources or cost for access to a remote replica.\nWe show the existence of pure strategy Nash equilibria and investigate the price of anarchy, which is the relative cost of the lack of coordination.\nThe price of anarchy can be high due to undersupply problems, but with certain network topologies it has better bounds.\nWith a payment scheme the game can always implement the social optimum in the best case by giving servers incentive to replicate.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems\nGeneral Terms Algorithms, Economics, Theory, Performance\n1.\nINTRODUCTION Wide-area peer-to-peer file systems [2,5,22,32,33], peer-to-peer caches [15, 16], and web caches [6, 10] have become popular over the last few years.\nCaching1 of files in selected servers is widely used to enhance the performance, availability, and reliability of these systems.\nHowever, most such systems assume that servers cooperate with one another by following protocols optimized for overall system performance, regardless of the costs incurred by each server.\nIn reality, servers may behave selfishly - seeking to maximize their own benefit.\nFor example, parties in different
administrative domains utilize their local resources (servers) to better support clients in their own domains.\nThey have obvious incentives to cache objects2 that maximize the benefit in their domains, possibly at the expense of globally optimum behavior.\nIt has been an open question whether these caching scenarios and protocols maintain their desirable global properties (low total social cost, for example) in the face of selfish behavior.\nIn this paper, we take a game-theoretic approach to analyzing the problem of caching in networks of selfish servers through theoretical analysis and simulations.\nWe model selfish caching as a non-cooperative game.\nIn the basic model, the servers have two possible actions for each object.\nIf a replica of a requested object is located at a nearby node, the server may be better off accessing the remote replica.\nOn the other hand, if all replicas are located too far away, the server is better off caching the object itself.\nDecisions about caching the replicas locally are arrived at locally, taking into account only local costs.\nWe also define a more elaborate payment model, in which each server bids for having an object replicated at another site.\nEach site now has the option of replicating an object and collecting the related bids.\nOnce all servers have chosen a strategy, each game specifies a configuration, that is, the set of servers that replicate the object, and the corresponding costs for all servers.\nGame theory predicts that such a situation will end up in a Nash equilibrium, that is, a set of (possibly randomized) strategies with the property that no player can benefit by changing its strategy while the other players keep their strategies unchanged [28].\nFoundational considerations notwithstanding, it is not easy to accept randomized strategies as the behavior of rational agents in a distributed system (see [28] for an extensive discussion) - but this is what classical game theory can guarantee.\nIn certain very 
fortunate situations, however (see [9]), the existence of pure (that is, deterministic) Nash equilibria can be predicted.
With or without randomization, however, the lack of coordination inherent in selfish decision-making may incur costs well beyond what would be globally optimum.
This loss of efficiency is quantified by the price of anarchy [21].
1 We will use caching and replication interchangeably.
2 We use the term object as an abstract entity that represents files and other data objects.
The price of anarchy is the ratio of the social (total) cost of the worst possible Nash equilibrium to the cost of the social optimum.
The price of anarchy bounds the worst possible behavior of a selfish system, when left completely on its own.
However, in reality there are ways whereby the system can be guided, through seeding or incentives, to a preselected Nash equilibrium.
This optimistic version of the price of anarchy [3] is captured by the smallest ratio between a Nash equilibrium and the social optimum.
In this paper we address the following questions:
• Do pure strategy Nash equilibria exist in the caching game?
• If pure strategy Nash equilibria do exist, how efficient are they (in terms of the price of anarchy, or its optimistic counterpart) under different placement costs, network topologies, and demand distributions?
• What is the effect of adopting payments?
Will the Nash equilibria be improved?
We show that pure strategy Nash equilibria always exist in the caching game.
The price of anarchy of the basic game model can be O(n), where n is the number of servers; the intuitive reason is undersupply.
Under certain topologies, the price of anarchy does have tighter bounds.
For complete graphs and stars, it is O(1).
For D-dimensional grids, it is O(n^(D/(D+1))).
Even the optimistic price of anarchy can be O(n).
In the payment model, however, the game can always implement a Nash equilibrium that is the same as the social optimum, so the
optimistic price of anarchy is one.
Our simulation results show several interesting phases.
As the placement cost increases from zero, the price of anarchy increases.
When the placement cost first exceeds the maximum distance between servers, the price of anarchy is at its highest due to undersupply problems.
As the placement cost further increases, the price of anarchy decreases, and the effect of replica misplacement dominates the price of anarchy.
The rest of the paper is organized as follows.
In Section 2 we discuss related work.
Section 3 discusses details of the basic game and analyzes the bounds of the price of anarchy.
In Section 4 we discuss the payment game and analyze its price of anarchy.
In Section 5 we describe our simulation methodology and study the properties of Nash equilibria observed.
We discuss extensions of the game and directions for future work in Section 6.
2.
RELATED WORK There has been considerable research on wide-area peer-to-peer file systems such as OceanStore [22], CFS [5], PAST [32], FARSITE [2], and Pangaea [33], web caches such as NetCache [6] and SummaryCache [10], and peer-to-peer caches such as Squirrel [16].
Most of these systems use caching for performance, availability, and reliability.
The caching protocols assume obedience to the protocol and ignore participants' incentives.
Our work starts from the assumption that servers are selfish and quantifies the cost of the lack of coordination when servers behave selfishly.
The placement of replicas is the most important issue in the caching problem.
There is much work on the placement of web replicas, instrumentation servers, and replicated resources.
All protocols assume obedience and ignore participants' incentives.
In [14], Gribble et al.
discuss the data placement problem in peer-to-peer systems.
Ko and Rubenstein propose a self-stabilizing, distributed graph coloring algorithm for the replicated resource placement [20].
Chen, Katz, and Kubiatowicz propose a dynamic replica placement algorithm exploiting underlying distributed hash tables [4].
Douceur and Wattenhofer describe a hill-climbing algorithm to exchange replicas for reliability in FARSITE [8].
RaDar is a system that replicates and migrates objects for an Internet hosting service [31].
Tang and Chanson propose a coordinated en-route web caching that caches objects along the routing path [34].
Centralized algorithms for the placement of objects, web proxies, mirrors, and instrumentation servers in the Internet have been studied extensively [18, 19, 23, 30].
The facility location problem has been widely studied as a centralized optimization problem in theoretical computer science and operations research [27].
Since the problem is NP-hard, approximation algorithms based on primal-dual techniques, greedy algorithms, and local search have been explored [17, 24, 26].
Our caching game is different from all of these in that the optimization process is performed among distributed selfish servers.
There is little research in non-cooperative facility location games, as far as we know.
Vetta [35] considers a class of problems where the social utility is submodular (submodularity means decreasing marginal utility).
In the case of competitive facility location among corporations he proves that any Nash equilibrium gives an expected social utility within a factor of 2 of optimal plus an additive term that depends on the facility opening cost.
His results are not directly applicable to our problem, however, because we consider each server to be tied to a particular location, while in his model an agent is able to open facilities in multiple locations.
Note that in that paper the increase of the price of anarchy comes from oversupply
problems due to the fact that competing corporations can open facilities at the same location.
On the other hand, the significant problems in our game are undersupply and misplacement.
In a recent paper, Goemans et al. analyze content distribution on ad-hoc wireless networks using a game-theoretic approach [12].
As in our work, they provide monetary incentives to mobile users for caching data items, and provide tight bounds on the price of anarchy and speed of convergence to (approximate) Nash equilibria.
However, their results are incomparable to ours because their payoff functions neglect network latencies between users, they consider multiple data items (markets), and each node has a limited budget to cache items.
Cost sharing in the facility location problem has been studied using cooperative game theory [7, 13, 29].
Goemans and Skutella show strong connections between fair cost allocations and linear programming relaxations for facility location problems [13].
Pál and Tardos develop a method for cost-sharing that is approximately budget-balanced and group strategyproof and show that the method recovers 1/3 of the total cost for the facility location game [29].
Devanur, Mihail, and Vazirani give a strategyproof cost allocation for the facility location problem, but cannot achieve group strategyproofness [7].
3.
BASIC GAME The caching problem we study is to find a configuration that meets certain objectives (e.g., minimum total cost).
Figure 1 shows examples of caching among four servers.
In network (a), A stores an object.
Suppose B wants to access the object.
If it is cheaper to access the remote replica than to cache it, B accesses the remote replica as shown in network (b).
In network (c), C wants to access the object.
If C is far from A, C caches the object instead of accessing the object from A.
It is possible that in an optimal configuration it would be better to place replicas in A and B.
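The caching decision illustrated by Figure 1 can be sketched in a few lines; this is an illustrative fragment, not code from the paper, and the server names, distances, and demand weight are all assumed for the example:

```python
# Illustrative sketch of the selfish caching decision from Figure 1.
# Server names, distances, and the demand weight are assumed.

def prefers_caching(server, replicas, dist, alpha, demand=1.0):
    """A selfish server caches iff the placement cost alpha is below its
    demand-weighted distance to the nearest existing replica."""
    if not replicas:                 # no replica anywhere: caching is forced
        return True
    nearest = min(dist[server][r] for r in replicas)
    return alpha < demand * nearest

# Four servers A..D on a line with unit spacing (assumed topology).
names = ["A", "B", "C", "D"]
dist = {a: {b: abs(i - j) for j, b in enumerate(names)}
        for i, a in enumerate(names)}

replicas = {"A"}                     # as in Figure 1(a), A stores the object
print(prefers_caching("B", replicas, dist, alpha=1.5))   # False: B accesses A
print(prefers_caching("C", replicas, dist, alpha=1.5))   # True: A is too far
```

With the assumed placement cost of 1.5, B (distance 1 from A) accesses remotely while C (distance 2) caches, mirroring networks (b) and (c).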
Understanding the placement of replicas by selfish servers is the focus of our study.
The caching problem is abstracted as follows.
There is a set N of n servers and a set M of m objects.
The distance between servers can be represented as a distance matrix D (i.e., dij is the distance from server i to server j).
Figure 1: Caching.
There are four servers labeled A, B, C, and D.
The rectangles are object replicas.
In (a), A stores an object.
If B incurs less cost accessing A's replica than it would caching the object itself, it accesses the object from A as in (b).
If the distance cost is too high, the server caches the object itself, as C does in (c).
This figure is an example of our caching game model.
D models an underlying network topology.
For our analysis we assume that the distances are symmetric and the triangle inequality holds on the distances (for all servers i, j, k: dij + djk ≥ dik).
Each server has demand from clients that is represented by a demand matrix W (i.e., wij is the demand of server i for object j).
When a server caches objects, the server incurs some placement cost that is represented by a matrix α (i.e., αij is a placement cost of server i for object j).
In this study, we assume that servers have no capacity limit.
As we discuss in the next section, this fact means that the caching behavior with respect to each object can be examined separately.
Consequently, we can talk about configurations of the system with respect to a given object:
DEFINITION 1.
A configuration X for some object O is the set of servers replicating this object.
The goal of the basic game is to find configurations that are achieved when servers optimize their cost functions locally.
3.1 Game Model We take a game-theoretic approach to analyzing the uncapacitated caching problem among networked selfish servers.
We
model the selfish caching problem as a non-cooperative game with n players (servers/nodes) whose strategies are sets of objects to cache.
In the game, each server chooses a pure strategy that minimizes its cost.
Our focus is to investigate the resulting configuration, which is the Nash equilibrium of the game.
It should be emphasized that we consider only pure strategy Nash equilibria in this paper.
The cost model is an important part of the game.
Let Ai be the set of feasible strategies for server i, and let Si ∈ Ai be the strategy chosen by server i. Given a strategy profile S = (S1, S2, ..., Sn), the cost incurred by server i is defined as:
Ci(S) = Σ_{j∈Si} αij + Σ_{j∉Si} wij d_{iσ(i,j)}, (1)
where αij is the placement cost of object j, wij is the demand that server i has for object j, σ(i, j) is the closest server to i that caches object j, and dik is the distance between i and k.
When no server caches the object, we define the distance cost d_{iσ(i,j)} to be dM, large enough that at least one server will choose to cache the object.
The placement cost can be further divided into first-time installation cost and maintenance cost:
αij = k1i + k2i (UpdateSizej / ObjectSizej) (1/T) Pj Σ_k wkj, (2)
where k1i is the installation cost, k2i is the relative weight between the maintenance cost and the installation cost, Pj is the ratio of the number of writes over the number of reads and writes, UpdateSizej is the size of an update, ObjectSizej is the size of the object, and T is the update period.
We see tradeoffs between different parameters in this equation.
For example, placing replicas becomes more expensive as UpdateSizej increases, Pj increases, or T decreases.
However, note that by varying αij itself we can capture the full range of behaviors in the game.
For our analysis, we use only αij.
Since there is no capacity limit on servers, we can look at each single object as a separate game and combine the pure strategy equilibria of
these games to obtain a pure strategy equilibrium of the multi-object game.
Fabrikant, Papadimitriou, and Talwar discuss this existence argument: if two games are known to have pure equilibria, and their cost functions are cross-monotonic, then their union is also guaranteed to have pure Nash equilibria, by a continuity argument [9].
A Nash equilibrium for the multi-object game is the cross product of Nash equilibria for single-object games.
Therefore, we can focus on the single object game in the rest of this paper.
For single object selfish caching, each server i has two strategies - to cache or not to cache.
The object under consideration is j.
We define Si to be 1 when server i caches j and 0 otherwise.
The cost incurred by server i is
Ci(S) = αij Si + wij d_{iσ(i,j)} (1 − Si), (3)
where σ(i, j) is the closest server to i that caches object j.
We refer to this game as the basic game.
The extent to which Ci(S) represents actual cost incurred by server i is beyond the scope of this paper; we will assume that an appropriate cost function of the form of Equation 3 can be defined.
3.2 Nash Equilibrium Solutions In principle, we can start with a random configuration and let this configuration evolve as each server alters its strategy and attempts to minimize its cost.
Game theory is interested in stable solutions called Nash equilibria.
A pure strategy Nash equilibrium is reached when no server can benefit by unilaterally changing its strategy.
A Nash equilibrium3 (S∗i, S∗−i) for the basic game specifies a configuration X such that ∀i ∈ N, i ∈ X ⇔ S∗i = 1.
Thus, we can consider a set E of all pure strategy Nash equilibrium configurations:
X ∈ E ⇔ ∀i ∈ N, ∀Si ∈ Ai, Ci(S∗i, S∗−i) ≤ Ci(Si, S∗−i) (4)
By this definition, no server has incentive to deviate in these configurations since it cannot reduce its cost.
For the basic game, we can easily see that:
X ∈ E ⇔ ∀i ∈ N, ∃j ∈ X
s.t. dji ≤ α and ∀j ∈ X, ¬∃k ∈ X, k ≠ j, s.t. dkj < α (5)
The first condition guarantees that there is a server that places the replica within distance α of each server i.
If the replica is not placed at i, then it is placed at another server within distance α of i, so i has no incentive to cache.
If the replica is placed at i, then the second condition ensures there is no incentive to drop the replica because no two servers separated by distance less than α both place replicas.
3 The notation for strategy profile (S∗i, S∗−i) separates node i's strategy (S∗i) from the strategies of other nodes (S∗−i).
Figure 2: Potential inefficiency of Nash equilibria illustrated by two clusters of n/2 servers.
The intra-cluster distances are all zero and the distance between clusters is α − 1, where α is the placement cost.
The dark nodes replicate the object.
Network (a) shows a Nash equilibrium in the basic game, where one server in a cluster caches the object.
Network (b) shows the social optimum where two replicas, one for each cluster, are placed.
The price of anarchy is O(n) and even the optimistic price of anarchy is O(n).
This high price of anarchy comes from the undersupply of replicas due to the selfish nature of servers.
Network (c) shows a Nash equilibrium in the payment game, where two replicas, one for each cluster, are placed.
Each light node in each cluster pays 2/n to the dark node, and the dark node replicates the object.
Here, the optimistic price of anarchy is one.
3.3 Social Optimum The social cost of a given strategy profile is defined as the total cost incurred by all servers, namely:
C(S) = Σ_{i=0}^{n−1} Ci(S) (6)
where Ci(S) is the cost incurred by
server i given by Equation 1.
The social optimum cost, referred to as C(SO) for the remainder of the paper, is the minimum social cost.
The social optimum cost will serve as an important base case against which to measure the cost of selfish caching.
We define C(SO) as:
C(SO) = min_S C(S) (7)
where S varies over all possible strategy profiles.
Note that in the basic game, this means varying configuration X over all possible configurations.
In some sense, C(SO) represents the best possible caching behavior - if only nodes could be convinced to cooperate with one another.
The social optimum configuration is a solution of a mini-sum facility location problem, which is NP-hard [11].
To find such configurations, we formulate an integer programming problem:
minimize Σ_i Σ_j [ αij xij + Σ_k wij dik yijk ] (8)
subject to ∀i, j: Σ_k yijk = I(wij)
∀i, j, k: xij − ykji ≥ 0
∀i, j: xij ∈ {0, 1}
∀i, j, k: yijk ∈ {0, 1}
Here, xij is 1 if server i replicates object j and 0 otherwise; yijk is 1 if server i accesses object j from server k and 0 otherwise; I(w) returns 1 if w is nonzero and 0 otherwise.
The first constraint specifies that if server i has demand for object j, then it must access j from exactly one server.
The second constraint ensures that server i replicates object j if any other server accesses j from i.
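For the single-object game with no capacity limits, the social optimum can also be found by brute-force enumeration on small instances, since the objective reduces to α|X| plus each non-replicating server's demand-weighted distance to its nearest replica. A minimal sketch (illustrative only; the paper solves the integer program above with a commercial solver), with the instance sizes and costs assumed:

```python
from itertools import combinations

def social_cost(X, dist, alpha, w):
    """C(S) for configuration X: placement cost alpha per replica, plus each
    non-replicating server's demand-weighted distance to its nearest replica."""
    return alpha * len(X) + sum(w[i] * min(dist[i][j] for j in X)
                                for i in range(len(dist)) if i not in X)

def social_optimum(dist, alpha, w):
    """Enumerate all nonempty configurations; exponential, so small n only."""
    n = len(dist)
    best_cost, best_X = float("inf"), None
    for k in range(1, n + 1):
        for X in combinations(range(n), k):
            c = social_cost(set(X), dist, alpha, w)
            if c < best_cost:
                best_cost, best_X = c, set(X)
    return best_cost, best_X

# Two clusters of n/2 servers at inter-cluster distance alpha - 1,
# the scenario of Figure 2 (n = 6, alpha = 4 assumed here).
n, alpha = 6, 4.0
dist = [[0.0 if i // 3 == j // 3 else alpha - 1 for j in range(n)]
        for i in range(n)]
cost, X = social_optimum(dist, alpha, [1.0] * n)
print(cost)   # 8.0 = 2 * alpha: one replica per cluster, as in Figure 2(b)
```

On this instance the optimum places one replica in each cluster at total cost 2α, whereas the single-replica Nash equilibrium discussed below costs α + (α − 1)n/2.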
3.4 Analysis To analyze the basic game, we first give a proof of the existence of pure strategy Nash equilibria.
We discuss the price of anarchy in general and then on specific underlying topologies.
In this analysis we use simply α in place of αij, since we deal with a single object and we assume placement cost is the same for all servers.
In addition, when we compute the price of anarchy, we assume that all nodes have the same demand (i.e., ∀i ∈ N, wij = 1).
THEOREM 1.
Pure strategy Nash equilibria exist in the basic game.
PROOF.
We show a constructive proof.
First, initialize the set V to N. Then, remove all nodes with zero demand from V.
Each node x defines βx, where βx = α/wxj.
Furthermore, let Z(y) = {z : dzy ≤ βz, z ∈ V }; Z(y) represents all nodes z for which y lies within βz from z. Pick a node y ∈ V such that βy ≤ βx for all x ∈ V.
Place a replica at y and then remove y and all z ∈ Z(y) from V.
No such z can have incentive to replicate the object because it can access y's replica at lower (or equal) cost.
Iterate this process of placing replicas until V is empty.
Because at each iteration y is the remaining node with minimum β, no replica will be placed within distance βy of any such y by this process.
The resulting configuration is a pure-strategy Nash equilibrium of the basic game.
The Price of Anarchy (PoA): To quantify the cost of lack of coordination, we use the price of anarchy [21] and the optimistic price of anarchy [3].
The price of anarchy is the ratio of the social costs of the worst-case Nash equilibrium and the social optimum, and the optimistic price of anarchy is the ratio of the social costs of the best-case Nash equilibrium and the social optimum.
We show general bounds on the price of anarchy.
Throughout our discussion, we use C(SW) to represent the cost of the worst case Nash equilibrium, C(SO) to represent the cost of
social optimum, and PoA to represent the price of anarchy, which is C(SW)/C(SO).
The worst case Nash equilibrium maximizes the total cost under the constraint that the configuration meets the Nash condition.
Formally, we can define C(SW) as follows.
C(SW) = max_{X∈E} (α|X| + Σ_i min_{j∈X} dij) (9)
where min_{j∈X} dij is the distance to the closest replica (including i itself) from node i and X varies through Nash equilibrium configurations.
Bounds on the Price of Anarchy: We show bounds of the price of anarchy varying α.
Let dmin = min_{(i,j)∈N×N, i≠j} dij and dmax = max_{(i,j)∈N×N} dij.
We see that if α ≤ dmin, PoA = 1 trivially, since every server caches the object for both Nash equilibrium and social optimum.
Table 1: PoA in the basic game for specific topologies
Topology | PoA
Complete graph | 1
Star | ≤ 2
Line | O(√n)
D-dimensional grid | O(n^(D/(D+1)))
When α > dmax, there is a transition in Nash equilibria: since the placement cost is greater than any distance cost, only one server caches the object and other servers access it remotely.
However, the social optimum may still place multiple replicas.
Since α ≤ C(SO) ≤ α + min_{j∈N} Σ_i dij when α > dmax, we obtain (α + max_{j∈N} Σ_i dij) / (α + min_{j∈N} Σ_i dij) ≤ PoA ≤ (α + max_{j∈N} Σ_i dij) / α.
Note that depending on the underlying topology, even the lower bound of PoA can be O(n).
Finally, there is a transition when α > max_{j∈N} Σ_i dij.
In this case, PoA = (α + max_{j∈N} Σ_i dij) / (α + min_{j∈N} Σ_i dij) and it is upper bounded by 2.
Figure 2 shows an example of the inefficiency of a Nash equilibrium.
In the network there are two clusters of servers whose size is n/2.
The distance between the two clusters is α − 1 where α is the placement cost.
Figure 2(a) shows a Nash equilibrium where one server in a cluster caches the object.
In this
case, C(SW) = α + (α − 1)n/2, since all servers in the other cluster access the remote replica.
However, the social optimum places two replicas, one for each cluster, as shown in Figure 2(b).
Therefore, C(SO) = 2α.
PoA = (α + (α − 1)n/2) / (2α), which is O(n).
This bad price of anarchy comes from an undersupply of replicas due to the selfish nature of the servers.
Note that all Nash equilibria have the same cost; thus even the optimistic price of anarchy is O(n).
In Appendix A, we analyze the price of anarchy with specific underlying topologies and show that PoA can have tighter bounds than O(n) for the complete graph, star, line, and D-dimensional grid.
In these topologies, we set the distance between directly connected nodes to one.
We describe the case where α > 1, since PoA = 1 trivially when α ≤ 1.
A summary of the results is shown in Table 1.
4.
PAYMENT GAME In this section, we present an extension to the basic game with payments and analyze the price of anarchy and the optimistic price of anarchy of the game.
4.1 Game Model The new game, which we refer to as the payment game, allows each player to offer a payment to another player to give the latter an incentive to replicate the object.
The cost of replication is shared among the nodes paying the server that replicates the object.
The strategy for each player i is specified by a triplet (vi, bi, ti) ∈ N × R+ × R+, where vi specifies the player to whom i makes a bid, bi ≥ 0 is the value of the bid, and ti ≥ 0 denotes a threshold for payments beyond which i will replicate the object.
In addition, we use Ri to denote the total amount of bids received by node i (Ri = Σ_{j:vj=i} bj).
A node i replicates the object if and only if Ri ≥ ti, that is, the amount of bids it receives is greater than or equal to its threshold.
Let Ii denote the corresponding indicator variable, that is, Ii equals 1 if i replicates the object,
and 0 otherwise.
We make the rule that if a node i makes a bid to another node j and j replicates the object, then i must pay j the amount bi.
If j does not replicate the object, i does not pay j. Given a strategy profile, the outcome of the game is the set of tuples {(Ii, vi, bi, Ri)}.
Ii tells us whether player i replicates the object or not, bi is the payment player i makes to player vi, and Ri is the total amount of bids received by player i. To compute the payoffs given the outcome, we must now take into account the payments a node makes, in addition to the placement costs and access costs of the basic game.
By our rules, a server node i pays bi to node vi if vi replicates the object, and receives a payment of Ri if it replicates the object itself.
Its net payment is bi Ivi − Ri Ii.
The total cost incurred by each node is the sum of its placement cost, access cost, and net payment.
It is defined as
Ci(S) = αij Ii + wij d_{iσ(i,j)} (1 − Ii) + bi Ivi − Ri Ii. (10)
The cost of the social optimum for the payment game is the same as that for the basic game, since the net payments made cancel out.
4.2 Analysis In analyzing the payment model, we first show that a Nash equilibrium in the basic game is also a Nash equilibrium in the payment game.
We then present an important positive result - in the payment game the socially optimal configuration can always be implemented by a Nash equilibrium.
We know from the counterexample in Figure 2 that this is not guaranteed in the basic game.
In this analysis we use α to represent αij.
THEOREM 2.
Any configuration that is a pure strategy Nash equilibrium in the basic game is also a pure strategy Nash equilibrium in the payment game.
Therefore, the price of anarchy of the payment game is at least that of the basic game.
PROOF.
Consider any Nash equilibrium configuration in the basic game.
For each node i replicating the object, set its threshold ti to 0; everyone else has threshold
α.
Also, for all i, bi = 0.
A node that replicates the object does not have incentive to change its strategy: changing the threshold does not decrease its cost, and it would have to pay at least α to access a remote replica or incentivize a nearby node to cache.
Therefore it is better off keeping its threshold and bid at 0 and replicating the object.
A node that is not replicating the object can access the object remotely at a cost less than or equal to α.
Lowering its threshold does not decrease its cost, since all bi are zero.
The payment necessary for another server to place a replica is at least α.
No player has incentive to deviate, so the current configuration is a Nash equilibrium.
In fact, Appendix B shows that the PoA of the payment game can be more than that of the basic game in a given topology.
Now let us look at what happens to the example shown in Figure 2 in the best case.
Suppose node B's neighbors each decide to pay node B an amount 2/n.
B does not have an incentive to deviate, since accessing the remote replica does not decrease its cost.
The same argument holds for A because of symmetry in the graph.
Since no one has an incentive to deviate, the configuration is a Nash equilibrium.
Its total cost is 2α, the same as in the socially optimal configuration shown in Figure 2(b).
Next we prove that indeed the payment game always has a strategy profile that implements the socially optimal configuration as a Nash equilibrium.
We first present the following observation, which is used in the proof, about thresholds in the payment game.
OBSERVATION 1.
If node i replicates the object, j is the nearest node to i among the other nodes that replicate the object, and dij < α in a Nash equilibrium, then i should have a threshold of at least (α − dij).
Otherwise, it cannot collect enough payment to compensate for the cost of replicating the object and is better off accessing the replica at j.
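To make the bookkeeping in Equation 10 concrete, the following is an illustrative sketch (not the paper's code; the topology, bid format, and numbers are assumed) of the payment-game cost under the Theorem 2 construction, where a basic-game equilibrium is embedded with all bids set to zero:

```python
# Illustrative sketch of the payment-game cost of Equation 10.  Bids are
# assumed to be (bidder, recipient, amount) triples; distances are assumed.

def payment_cost(i, replicas, bids, dist, alpha, w=1.0):
    """Placement cost + access cost + bids paid out - bids received."""
    caches = i in replicas
    place = alpha if caches else 0.0
    access = 0.0 if caches else w * min(dist[i][j] for j in replicas)
    paid = sum(b for (src, dst, b) in bids if src == i and dst in replicas)
    received = sum(b for (src, dst, b) in bids if dst == i and caches)
    return place + access + paid - received

# Two clusters of 3 servers at inter-cluster distance alpha - 1 (Figure 2).
alpha = 4.0
dist = [[0.0 if i // 3 == j // 3 else alpha - 1 for j in range(6)]
        for i in range(6)]

# Theorem 2 embedding of the basic-game equilibrium {0}: all bids are zero,
# so every node's cost equals its basic-game counterpart.
print(payment_cost(0, {0}, [], dist, alpha))   # 4.0 = alpha
print(payment_cost(3, {0}, [], dist, alpha))   # 3.0 = alpha - 1, remote access
```

With zero bids the payment terms vanish and the costs reduce to Equation 3, which is why the basic-game equilibrium carries over; nonzero bids to a replicating node simply shift cost from the recipient to the bidders.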
THEOREM 3.
In the payment game, there is always a pure strategy Nash equilibrium that implements the social optimum configuration.
The optimistic price of anarchy in the payment game is therefore always one.
PROOF.
Consider the socially optimal configuration φopt.
Let No be the set of nodes that replicate the object and Nc = N − No be the rest of the nodes.
Also, for each i in No, let Qi denote the set of nodes that access the object from i, not including i itself.
In the socially optimal configuration, dij ≤ α for all j in Qi.
We want to find a set of payments and thresholds that makes this configuration implementable.
The idea is to look at each node i in No and distribute the minimum payment needed to make i replicate the object among the nodes that access the object from i. For each i in No, and for each j in Qi, we define
δj = min{α, min_{k∈No−{i}} djk} − dji (11)
Note that δj is the difference between j's cost for accessing the replica at i and j's next best option among replicating the object and accessing some replica other than i.
It is clear that δj ≥ 0.
CLAIM 1.
For each i ∈ No, let ℓ be the nearest node to i in No.
Then, Σ_{j∈Qi} δj ≥ α − diℓ.
PROOF.
(of claim) Assume the contrary, that is, Σ_{j∈Qi} δj < α − diℓ.
Consider the new configuration φnew wherein i does not replicate and each node in Qi chooses its next best strategy (either replicating or accessing the replica at some node in No − {i}).
In addition, we still place replicas at each node in No − {i}.
It is easy to see that the cost of φopt minus the cost of φnew is at least:
(α + Σ_{j∈Qi} dji) − (diℓ + Σ_{j∈Qi} min{α, min_{k∈No−{i}} djk}) = α − diℓ − Σ_{j∈Qi} δj > 0,
which contradicts the optimality of φopt.
We set bids as follows.
For each i in No, bi = 0 and for each j in Qi, j bids to i (i.e., vj
= i) the amount
bj = max{0, δj − εi/(|Qi| + 1)}, j ∈ Qi (12)
where εi = Σ_{j∈Qi} δj − α + diℓ ≥ 0, ℓ is the nearest node to i in No, and |Qi| is the cardinality of Qi.
For the thresholds, we have:
ti = α if i ∈ Nc; ti = Σ_{j∈Qi} bj if i ∈ No. (13)
This fully specifies the strategy profile of the nodes, and it is easy to see that the outcome is indeed the socially optimal configuration.
Next, we verify that the strategies stipulated constitute a Nash equilibrium.
Having set ti to α for i in Nc means that any node in N is at least as well off lowering its threshold and replicating as bidding α to some node in Nc to make it replicate, so we may disregard the latter as a profitable strategy.
By Observation 1, to ensure that each i in No does not deviate, we require that if ℓ is the nearest node to i in No, then Σ_{j∈Qi} bj is at least (α − diℓ).
Otherwise, i will raise ti above Σ_{j∈Qi} bj so that it does not replicate and instead accesses the replica at ℓ.
We can easily check that
Σ_{j∈Qi} bj ≥ Σ_{j∈Qi} δj − |Qi| εi/(|Qi| + 1) = α − diℓ + εi/(|Qi| + 1) ≥ α − diℓ.
Figure 3: We present PoA, Ratio, and OPoA results for the basic game, varying α on a 100-node line topology, and we show the number of replicas placed by the Nash equilibria and by the optimal solution.
We see large peaks in PoA and OPoA at α = 100, where a phase transition causes an abrupt transition in the lines.
Therefore, each node i ∈ No does not have incentive to change ti, since i would lose its payments received or there would be no change, and i does not have incentive to change bi since it replicates the object.
Each node j in Nc has no incentive to change tj since changing tj does not reduce its cost.
It also does
not have incentive to reduce bj, since then the node from which j accesses the object would not replicate, and j would have to replicate the object itself or access the next closest replica, which costs at least as much by the definition of bj.
No player has incentive to deviate, so this strategy profile is a Nash equilibrium.
5.
SIMULATION We run simulations to compare Nash equilibria for the single-object caching game with the social optimum computed by solving the integer linear program described in Equation 8 using Mosek [1].
We examine the price of anarchy (PoA), the optimistic price of anarchy (OPoA), and the average ratio of the costs of Nash equilibria and social optima (Ratio), and when relevant we also show the average numbers of replicas placed by the Nash equilibrium (Replica(NE)) and the social optimum (Replica(SO)).
The PoA and OPoA are taken from the worst and best Nash equilibria, respectively, that we observe over the runs.
Each data point in our figures is based on 1000 runs, randomly varying the initial strategy profile and player order.
The details of the simulations, including protocols and a discussion of convergence, are presented in Appendix C.
In our evaluation, we study the effects of variation in four categories: placement cost, underlying topology, demand distribution, and payments.
As we vary the placement cost α, we directly influence the tradeoff between caching and not caching.
In order to get a clear picture of the dependency of PoA on α in a simple case, we first analyze the basic game with a 100-node line topology whose edge distance is one.
We also explore transit-stub topologies generated using the GT-ITM library [36] and power-law topologies (router-level Barabási-Albert model) generated using the BRITE topology generator [25].
For these topologies, we generate an underlying physical graph of 3050 physical nodes.
Both topologies have similar minimum, average, and maximum physical node distances.
The average distance is 0.42.
We create an overlay
of 100 server nodes and use the same overlay for all experiments with the given topology. In the game, each server has a demand whose distribution is Bernoulli(p), where p is the probability of having demand for the object; the default unless otherwise specified is p = 1.0.

[Figure 4: Transit-stub topology: (a) basic game, (b) payment game. We show the PoA, Ratio, OPoA, and the number of replicas placed while varying α between 0 and 2 with 100 servers on a 3050-physical-node transit-stub topology.]

[Figure 5: Power-law topology: (a) basic game, (b) payment game. We show the PoA, Ratio, OPoA, and the number of replicas placed while varying α between 0 and 2 with 100 servers on a 3050-physical-node power-law topology.]

5.1 Varying Placement Cost

Figure 3 shows the PoA, OPoA, and Ratio, as well as the number of replicas placed, for the line topology as α varies. We observe two phases. As α increases, the PoA rises quickly to a peak at α = 100; after 100, there is a gradual decline. The OPoA and Ratio show behavior similar to the PoA. These behaviors can be explained by examining the number of replicas placed by Nash equilibria and by optimal solutions. We see that when α is above one, Nash equilibrium solutions place fewer replicas than optimal on average. For example, when
α is 100, the social optimum places four replicas, but the Nash equilibrium places only one. The peak in PoA at α = 100 occurs at the point for a 100-node line where the worst-case cost of accessing a remote replica is slightly less than the cost of placing a new replica, so selfish servers will never place a second replica. The optimal solution, however, places multiple replicas to decrease the high global cost of access. As α continues to increase, the undersupply problem lessens as the optimal solution places fewer replicas.

5.2 Different Underlying Topologies

In Figure 4(a) we examine an overlay graph on the more realistic transit-stub topology. The trends for the PoA, OPoA, and Ratio are similar to the results for the line topology, with a peak in PoA at α = 0.8 due to maximal undersupply. In Figure 5(a) we examine an overlay graph on the power-law topology. We observe several interesting differences between the power-law and transit-stub results. First, the PoA peaks at a lower level in the power-law graph, around 2.3 (at α = 0.9), while the peak PoA in the transit-stub topology is almost 3.0 (at α = 0.8). After the peak, the PoA and Ratio decrease more slowly as α increases. The OPoA is close to one for the whole range of α values. This can be explained by the observation in Figure 5(a) that there is no significant undersupply problem here like there was in the transit-stub graph. Indeed, the high PoA is due mostly to misplacement problems when α is from 0.7 to 2.0, since there is little decrease in PoA when the number of replicas in the social optimum changes from two to one. The OPoA is equal to one in the figure when the same number of replicas are placed.

5.3 Varying Demand Distribution

Now we examine the effects of varying the demand distribution. The set of servers with demand is random for p < 1, so we calculate the expected PoA by averaging over 5 trials (each data point is based on 5000 runs). We run
simulations for demand levels of p ∈ {0.2, 0.6, 1.0} as α is varied for the 100 servers on top of the transit-stub graph. We observe that as demand falls, so does the expected PoA. As p decreases, the number of replicas placed in the social optimum decreases, but the number in Nash equilibria changes little. Furthermore, when α exceeds the overlay diameter, the number in Nash equilibria stays constant as p varies. Therefore, lower p leads to a lesser undersupply problem, agreeing with intuition. We do not present the graph due to space limitations and redundancy; the PoA for p = 1.0 is identical to the PoA in Figure 4(a), and the lines for p = 0.6 and p = 0.2 are similar but lower and flatter.

5.4 Effects of Payment

Finally, we discuss the effects of payments on the efficiency of Nash equilibria. The results are presented in Figure 4(b) and Figure 5(b). As shown in the analysis, the simulations achieve an OPoA close to one (it is not exactly one because of randomness in the simulations). The Ratio for the payment game is much lower than the Ratio for the basic game, since the protocol for the payment game tends to explore good regions in the space of Nash equilibria. We observe in Figure 4 that for α ≥ 0.4, the average number of replicas in Nash equilibria gets closer with payments to that of the social optimum than it does without. We observe in Figure 5 that more replicas are placed with payments than without when α is between 0.7 and 1.3, the only range of significant undersupply in the power-law case. The results confirm that payments give servers incentive to replicate the object, and this leads to better equilibria.

6. DISCUSSION AND FUTURE WORK

We suggest several interesting extensions and directions. One extension is to consider multiple objects in the capacitated caching game, in which servers have capacity limits when placing objects. Since caching one object affects the ability to cache another, there is no
separability of a multi-object game into multiple single-object games. As studied in [12], one way to formulate this problem is to find the best response of a server by solving a knapsack problem and then to compute Nash equilibria.

In our analyses, we assume that all nodes have the same demand. However, nodes could have different demands for different objects. We intend to examine the effects of heterogeneous demands (or heterogeneous placement costs) analytically. We also want to look at the following aggregation effect. Suppose there are n − 1 clustered nodes at distance α − 1 from a node hosting a replica, and all nodes have demand one. In that case, the price of anarchy is O(n). However, if we aggregate the n − 1 nodes into one node with demand n − 1, the price of anarchy becomes O(1), since α would have to be greater than (n − 1)(α − 1) for a single replica to remain the outcome. Such aggregation can reduce the inefficiency of Nash equilibria.

We intend to compute bounds on the price of anarchy under different underlying topologies such as random graphs or growth-restricted metrics. We want to investigate whether there are certain distance constraints that guarantee an O(1) price of anarchy. In addition, we want to run large-scale simulations to observe the change in the price of anarchy as the network size increases.

Another extension is to consider server congestion. Suppose the distance is the network distance plus γ × (number of accesses), where γ is the extra delay incurred when an additional server accesses the replica. Then, when α > γ, it can be shown that the PoA is bounded by α/γ. As γ increases, the price of anarchy bound decreases, since the load of accesses is balanced across servers.

While exploring the caching problem, we made several observations that seem counterintuitive. First, the PoA in the payment game can be worse than the PoA in the basic game.
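The aggregation effect can be checked numerically. The sketch below assumes the n − 1 clustered nodes are co-located (our reading of the example; the exact cluster geometry is not spelled out in the text):

```python
def cluster_poa(n, alpha, aggregated=False):
    """Price of anarchy for the aggregation example: a node hosting a
    replica (placement cost alpha) plus n-1 co-located demand units at
    distance alpha-1 from it."""
    access = (n - 1) * (alpha - 1)      # total cluster access cost
    so = alpha + min(alpha, access)     # optimum adds a cluster replica iff cheaper
    if aggregated:
        ne = alpha + min(alpha, access)  # one player weighs the total demand
    else:
        ne = alpha + access              # each unit sees alpha-1 < alpha, never caches
    return ne / so

# Unaggregated, the ratio grows like n; aggregated, it is 1.
assert cluster_poa(100, 2.0) > 25
assert cluster_poa(100, 2.0, aggregated=True) == 1.0
```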
Another observation we made was that the number of replicas in a Nash equilibrium can be larger than the number of replicas in the social optimum, even without payments. For example, a graph with diameter slightly more than α may have a Nash equilibrium configuration with two replicas at the two ends, while the social optimum places one replica at the center. We leave the investigation of more examples as an open issue.

7. CONCLUSIONS

In this work we introduce a novel non-cooperative game model to characterize the caching problem among selfish servers without any central coordination. We show that pure strategy Nash equilibria exist in the game and that the price of anarchy can be O(n) in general, where n is the number of servers, due to undersupply problems. With specific topologies, we show that the price of anarchy can have tighter bounds. More importantly, with payments, servers are incentivized to replicate and the optimistic price of anarchy is always one. Non-cooperative caching is a more realistic model than cooperative caching in the competitive Internet; hence this work is an important step toward viable federated caching systems.

8. ACKNOWLEDGMENTS

We thank Kunal Talwar for enlightening discussions regarding this work.

9. REFERENCES

[1] http://www.mosek.com.
[2] A. Adya et al. FARSITE: Federated, Available, and Reliable Storage for an Incompletely Trusted Environment. In Proc. of USENIX OSDI, 2002.
[3] E. Anshelevich, A. Dasgupta, E. Tardos, and T. Wexler. Near-optimal Network Design with Selfish Agents. In Proc. of ACM STOC, 2003.
[4] Y. Chen, R. H. Katz, and J. D. Kubiatowicz. SCAN: A Dynamic, Scalable, and Efficient Content Distribution Network. In Proc. of Intl. Conf. on Pervasive Computing, 2002.
[5] F. Dabek et al. Wide-area Cooperative Storage with CFS. In Proc. of ACM SOSP, Oct. 2001.
[6] P. B. Danzig. NetCache Architecture and Deployment. In Computer Networks and ISDN Systems, 1998.
[7] N. Devanur, M. Mihail, and V.
Vazirani. Strategyproof Cost-sharing Mechanisms for Set Cover and Facility Location Games. In Proc. of ACM EC, 2003.
[8] J. R. Douceur and R. P. Wattenhofer. Large-Scale Simulation of Replica Placement Algorithms for a Serverless Distributed File System. In Proc. of MASCOTS, 2001.
[9] A. Fabrikant, C. H. Papadimitriou, and K. Talwar. The Complexity of Pure Nash Equilibria. In Proc. of ACM STOC, 2004.
[10] L. Fan, P. Cao, J. Almeida, and A. Z. Broder. Summary Cache: A Scalable Wide-area Web Cache Sharing Protocol. IEEE/ACM Trans. on Networking, 8(3):281-293, 2000.
[11] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Co., 1979.
[12] M. X. Goemans, L. Li, V. S. Mirrokni, and M. Thottan. Market Sharing Games Applied to Content Distribution in Ad-hoc Networks. In Proc. of ACM MOBIHOC, 2004.
[13] M. X. Goemans and M. Skutella. Cooperative Facility Location Games. In Proc. of ACM-SIAM SODA, 2000.
[14] S. Gribble et al. What Can Databases Do for Peer-to-Peer? In WebDB Workshop on Databases and the Web, June 2001.
[15] K. P. Gummadi et al. Measurement, Modeling, and Analysis of a Peer-to-Peer File-Sharing Workload. In Proc. of ACM SOSP, October 2003.
[16] S. Iyer, A. Rowstron, and P. Druschel. Squirrel: A Decentralized Peer-to-Peer Web Cache. In Proc. of ACM PODC, 2002.
[17] K. Jain and V. V. Vazirani. Primal-Dual Approximation Algorithms for Metric Facility Location and k-Median Problems. In Proc. of IEEE FOCS, 1999.
[18] S. Jamin et al. On the Placement of Internet Instrumentation. In Proc. of IEEE INFOCOM, pages 295-304, 2000.
[19] S. Jamin et al. Constrained Mirror Placement on the Internet. In Proc. of IEEE INFOCOM, pages 31-40, 2001.
[20] B.-J. Ko and D. Rubenstein. A Distributed, Self-stabilizing Protocol for Placement of Replicated Resources in Emerging Networks. In Proc. of IEEE ICNP, 2003.
[21] E. Koutsoupias and C.
Papadimitriou. Worst-Case Equilibria. In Proc. of STACS, 1999.
[22] J. Kubiatowicz et al. OceanStore: An Architecture for Global-scale Persistent Storage. In Proc. of ACM ASPLOS, November 2000.
[23] B. Li, M. J. Golin, G. F. Italiano, and X. Deng. On the Optimal Placement of Web Proxies in the Internet. In Proc. of IEEE INFOCOM, 1999.
[24] M. Mahdian, Y. Ye, and J. Zhang. Improved Approximation Algorithms for Metric Facility Location Problems. In Proc. of Intl. Workshop on Approximation Algorithms for Combinatorial Optimization Problems, 2002.
[25] A. Medina, A. Lakhina, I. Matta, and J. Byers. BRITE: Universal Topology Generation from a User's Perspective. Technical Report 2001-003, January 2001.
[26] R. R. Mettu and C. G. Plaxton. The Online Median Problem. In Proc. of IEEE FOCS, 2000.
[27] P. B. Mirchandani and R. L. Francis. Discrete Location Theory. Wiley-Interscience Series in Discrete Mathematics and Optimization, 1990.
[28] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, 1994.
[29] M. Pal and E. Tardos. Group Strategyproof Mechanisms via Primal-Dual Algorithms. In Proc. of IEEE FOCS, 2003.
[30] L. Qiu, V. N. Padmanabhan, and G. M. Voelker. On the Placement of Web Server Replicas. In Proc. of IEEE INFOCOM, 2001.
[31] M. Rabinovich, I. Rabinovich, R. Rajaraman, and A. Aggarwal. A Dynamic Object Replication and Migration Protocol for an Internet Hosting Service. In Proc. of IEEE ICDCS, 1999.
[32] A. Rowstron and P. Druschel. Storage Management and Caching in PAST, a Large-scale, Persistent Peer-to-peer Storage Utility. In Proc. of ACM SOSP, October 2001.
[33] Y. Saito, C. Karamanolis, M. Karlsson, and M. Mahalingam. Taming Aggressive Replication in the Pangaea Wide-Area File System. In Proc. of USENIX OSDI, 2002.
[34] X. Tang and S. T. Chanson. Coordinated En-route Web Caching. IEEE Trans. on Computers, 2002.
[35] A.
Vetta. Nash Equilibria in Competitive Societies, with Applications to Facility Location, Traffic Routing, and Auctions. In Proc. of IEEE FOCS, 2002.
[36] E. W. Zegura, K. L. Calvert, and S. Bhattacharjee. How to Model an Internetwork. In Proc. of IEEE INFOCOM, 1996.

APPENDIX

A. ANALYZING SPECIFIC TOPOLOGIES

We now analyze the price of anarchy (PoA) for the basic game with specific underlying topologies and show that the PoA can have better bounds. We look at the complete graph, star, line, and D-dimensional grid. In all these topologies, we set the distance between two directly connected nodes to one. We describe the case where α > 1, since PoA = 1 trivially when α ≤ 1.

[Figure 6: Example where the payment game has a Nash equilibrium which is worse than any Nash equilibrium in the basic game. Nodes A, B, C, D; edge lengths 3α/4 and α/4 between the clusters. The unlabeled distances between the nodes in the cluster are all 1. The thresholds of white nodes are all α and the thresholds of dark nodes are all α/4. The two dark nodes replicate the object in this payment game Nash equilibrium.]

For a complete graph, PoA = 1, and for a star, PoA ≤ 2. For a complete graph, when α > 1, both Nash equilibria and social optima place one replica at one server, so PoA = 1. For a star, when 1 < α < 2, the worst-case Nash equilibrium places replicas at all leaf nodes, whereas the social optimum places one replica at the center node. Therefore PoA = ((n − 1)α + 1)/(α + (n − 1)) ≤ (2(n − 1) + 1)/(1 + (n − 1)) ≤ 2. When α > 2, the worst-case Nash equilibrium places one replica at a leaf node and the other nodes access the remote replica, while the social optimum places one replica at the center, so PoA = (α + 1 + 2(n − 2))/(α + (n − 1)) = 1 + (n − 2)/(α + (n − 1)) ≤ 2.

For a line, the price of anarchy is O(√n).
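The star bound is easy to verify numerically. The sketch below encodes the two worst-case equilibrium costs described above (assuming unit edges, demand at every node, and α > 1):

```python
def star_poa(n, alpha):
    """Worst-case NE cost over SO cost on an n-node star with unit edges,
    every node having demand 1, and alpha > 1."""
    if 1 < alpha < 2:
        ne = (n - 1) * alpha + 1      # all leaves replicate; center accesses a leaf
    else:
        ne = alpha + 1 + 2 * (n - 2)  # one leaf replicates; the rest access it
    so = alpha + (n - 1)              # social optimum: one replica at the center
    return ne / so

# The PoA <= 2 bound from the text holds across a range of n and alpha:
assert all(star_poa(n, a) <= 2 for n in range(3, 50) for a in (1.5, 2.5, 10))
```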
When 1 < α < n, the worst-case Nash equilibrium places replicas every 2α, so that there is no overlap between the areas covered by two adjacent servers that cache the object. The social optimum places replicas at least every √(2α). The placement of replicas for the social optimum is as follows. Suppose there are two replicas separated by distance d. By placing an additional replica in the middle, we want the reduction in total access distance to be at least α. The distance reduction is d/2 + 2{((d/2 − 1) − 1) + ((d/2 − 2) − 2) + ... + ((d/2 − d/4) − d/4)} ≥ d²/8, so d should be at most 2√(2α). Therefore, the distance between replicas in the social optimum is at most √(2α). We have C(S_W) = α·(n − 1)/(2α) + (α(α + 1)/2)·((n − 1)/(2α)) = Θ(αn) and C(S_O) ≥ α·(n − 1)/√(2α) + 2·((√(2α)/2)(√(2α)/2 + 1)/2)·((n − 1)/√(2α)), so C(S_O) = Ω(√α·n). Therefore, PoA = O(√α).

When α > n − 1, the worst-case Nash equilibrium places one replica at a leaf node and C(S_W) = α + (n − 1)n/2, while the social optimum still places replicas every √(2α). If we view the PoA as a continuous function of α and compute its derivative, the derivative becomes 0 when α is Θ(n²), which means the function decreases as α increases from n. Therefore, the PoA is maximized when α is n, and PoA = Θ(n²)/Ω(√n·n) = O(√n). When α > (n − 1)n/2, the social optimum also places only one replica, and the PoA is trivially bounded by 2.

This result holds for the ring, and it can be generalized to the D-dimensional grid. As the dimension of the grid increases, the distance reduction from an additional replica placement becomes Ω(d^(D+1)), where d is the distance between two adjacent replicas. Therefore, PoA = Θ(n²)/Ω(n^(1/(D+1))·n) = O(n^(D/(D+1))).
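The gap between selfish and optimal spacing on the line can be observed directly with a brute-force cost computation. This is an illustrative sketch (the helper and its parameters are ours); it contrasts spacing 2α with spacing roughly √(2α):

```python
def line_cost(n, alpha, spacing):
    """Social cost on an n-node line with unit edges when replicas are
    placed every `spacing` nodes: placement cost plus total access cost."""
    replicas = set(range(0, n, spacing))
    cost = alpha * len(replicas)
    for i in range(n):
        cost += min(abs(i - r) for r in replicas)  # distance to nearest replica
    return cost

n, alpha = 1000, 100
ne = line_cost(n, alpha, 2 * alpha)                 # selfish spacing ~ 2*alpha
so = line_cost(n, alpha, int((2 * alpha) ** 0.5))   # optimal spacing ~ sqrt(2*alpha)
# ne exceeds so by a factor that grows like sqrt(alpha)
assert ne > so > 0
```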
B. PAYMENT CAN DO WORSE

Consider the network in Figure 6, where α > 1 + α/3. Any Nash equilibrium in the basic game model would have exactly two replicas: one in the left cluster and one in the right. It is easy to verify that the worst placement (in terms of social cost) of two replicas occurs when they are placed at nodes A and B. This placement can be achieved as a Nash equilibrium in the payment game, but not in the basic game, since A and B are a distance 3α/4 apart.

Algorithm 1 Initialization for the Basic Game
  L1 = a random subset of servers
  for each node i in N do
    if i ∈ L1 then
      Si = 1   ; replicate the object
    else
      Si = 0

Algorithm 2 Move Selection of i for the Basic Game
  Cost1 = α
  Cost2 = min_{j∈X−{i}} d_ij   ; X is the current configuration
  Costmin = min{Cost1, Cost2}
  if Costnow > Costmin then
    if Costmin == Cost1 then
      Si = 1
    else
      Si = 0

C. NASH DYNAMICS PROTOCOLS

The simulator initializes the game according to the given parameters and a random initial strategy profile, and then iterates through rounds. Initially, the order of player actions is chosen randomly. In each round, each server performs the Nash dynamics protocol, greedily adjusting its strategy in the chosen order. When a round passes without any server changing its strategy, the simulation ends and a Nash equilibrium has been reached.

In the basic game, we pick a random initial subset of servers to replicate the object, as shown in Algorithm 1. After the initialization, each player runs the move selection procedure described in Algorithm 2 (in Algorithms 2 and 4, Costnow represents the current cost for node i). This procedure chooses greedily between replication and non-replication. It is not hard to see that this Nash dynamics protocol converges in two rounds.
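Algorithms 1 and 2 amount to best-response dynamics over a distance matrix. A minimal sketch (names and tie-breaking are ours; only strict improvements trigger a move, matching the Costnow > Costmin test):

```python
import random

def best_response_dynamics(dist, alpha, seed=0):
    """Greedy Nash dynamics for the basic game (Algorithms 1 and 2):
    a server replicates iff that is strictly cheaper than accessing
    the nearest existing replica.  dist[i][j] is the server distance."""
    rng = random.Random(seed)
    n = len(dist)
    cache = {i for i in range(n) if rng.random() < 0.5}  # random initial subset
    order = list(range(n))
    rng.shuffle(order)                                    # random player order
    changed = True
    while changed:
        changed = False
        for i in order:
            nearest = min((dist[i][j] for j in cache if j != i),
                          default=float("inf"))
            current = alpha if i in cache else nearest
            best = min(alpha, nearest)
            if current > best:            # strict improvement only
                if best == alpha:
                    cache.add(i)
                else:
                    cache.discard(i)
                changed = True
    return cache

# Two servers at distance 1 with alpha = 3: exactly one replica survives.
assert len(best_response_dynamics([[0, 1], [1, 0]], 3)) == 1
```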
In the payment game, we pick a random initial subset of servers to replicate the object by setting their thresholds to 0. In addition, we initialize a second random subset of servers to replicate the object with payments from other servers. The details are shown in Algorithm 3. After the initialization, each player runs the move selection procedure described in Algorithm 4. This procedure chooses greedily between replication and accessing a remote replica, with the possibilities of receiving and making payments, respectively.

Algorithm 3 Initialization for the Payment Game
  L1 = a random subset of servers
  for each node i in N do
    bi = 0
    if i ∈ L1 then
      ti = 0   ; replicate the object
    else
      ti = α
  L2 = {}
  for each node i in N do
    if coin toss == head then
      Mi = {j : d(j, i) < min_{k∈L1∪L2} d(j, k)}
      if Mi ≠ ∅ then
        for each node j ∈ Mi do
          bj = max{(α + Σ_{k∈Mi} d(i, k))/|Mi| − d(i, j), 0}
        L2 = L2 ∪ {i}

Algorithm 4 Move Selection of i for the Payment Game
  Cost1 = α − Ri
  Cost2 = min_{j∈N−{i}} {tj − Rj + d_ij}
  Costmin = min{Cost1, Cost2}
  if Costnow > Costmin then
    if Costmin == Cost1 then
      ti = Ri
    else
      ti = Ri + incr
      vi = argmin_j {tj − Rj + d_ij}
      bi = t_{vi} − R_{vi}

In the protocol, each node increases its threshold value by incr if it does not replicate the object. By this ramp-up procedure, the cost of replicating an object is shared fairly among the nodes that access a replica from a server that does cache. If incr is small, cost is shared more fairly, and the game tends to reach equilibria in which more servers store replicas, though convergence takes longer. If incr is large, the protocol converges quickly, but it may miss efficient equilibria. In the simulations we set incr to 0.1.

[Figure 7: An example where the Nash dynamics protocol does not converge in the payment game. Nodes A, B, C and a, b, c; edge lengths α/3 + 1, 2α/3 − 1, and 2α/3.]

Most of our simulation runs converged, but there were a very few cases where the simulation did not converge due to cycles of dynamics. The protocol does not guarantee convergence within a certain number of rounds like the protocol for the basic game.
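One reading of the initialization bid in Algorithm 3 can be sketched as follows (hypothetical helper; the formula splits server i's placement cost α plus its total distance to the group M_i evenly, then discounts each member j by its distance to i):

```python
def initial_bid(alpha, dists_from_i, d_ij):
    """Bid offered to group member j in Algorithm 3 (our reading):
    an even share of (alpha + total distance from i to its group M_i),
    minus j's own distance to i; bids are never negative."""
    share = (alpha + sum(dists_from_i)) / len(dists_from_i)
    return max(share - d_ij, 0)

# Two members, each at distance 1 from i, alpha = 4: each is offered
# (4 + 2)/2 - 1 = 2.
assert initial_bid(4, [1, 1], 1) == 2
```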
game.\nWe provide an example graph and an initial condition such that the Nash dynamics protocol does not converge in the payment game if started from this initial condition.\nThe graph is represented by a shortest path metric on the network shown in Figure 7.\nIn the starting configuration, only A replicates the object, and a pays it an amount \u03b1\/3 to do so.\nThe thresholds for A, B and C are \u03b1\/3 each, and the thresholds for a, b and c are 2\u03b1\/3.\nIt is not hard to verify that the Nash dynamics protocol will never converge if we start with this condition.\nThe Nash dynamics protocol for the payment game needs further investigation.\nThe dynamics protocol for the payment game should avoid cycles of actions to achieve stabilization of the protocol.\nFinding a self-stabilizing dynamics protocol is an interesting problem.\nIn addition, a fixed value of incr cannot adapt to changing environments.\nA small value of incr can lead to efficient equilibria, but it can take long time to converge.\nAn important area for future research is looking at adaptively changing incr.\n30","lvl-3":"Selfish Caching in Distributed Systems: A Game-Theoretic Analysis\nABSTRACT\nWe analyze replication of resources by server nodes that act selfishly, using a game-theoretic approach.\nWe refer to this as the selfish caching problem.\nIn our model, nodes incur either cost for replicating resources or cost for access to a remote replica.\nWe show the existence of pure strategy Nash equilibria and investigate the price of anarchy, which is the relative cost of the lack of coordination.\nThe price of anarchy can be high due to undersupply problems, but with certain network topologies it has better bounds.\nWith a payment scheme the game can always implement the social optimum in the best case by giving servers incentive to replicate.\n1.\nINTRODUCTION\nWide-area peer-to-peer file systems [2,5,22,32,33], peer-to-peer caches [15,16], and web caches [6,10] have become popular over * 
Research supported by NSF career award ANI-9985250, NFS ITR award CCR-0085899, and California MICRO award 00-049.\n\u2020 Research supported by NSF ITR grant 0121555.\n\u2021 Research supported by NSF grant CCR 9984703 and a US-Israel Binational Science Foundation grant.\n\u00a7 Research supported by NSF grant EIA-0122599.\nthe last few years.\nCaching1 of files in selected servers is widely used to enhance the performance, availability, and reliability of these systems.\nHowever, most such systems assume that servers cooperate with one another by following protocols optimized for overall system performance, regardless of the costs incurred by each server.\nIn reality, servers may behave selfishly--seeking to maximize their own benefit.\nFor example, parties in different administrative domains utilize their local resources (servers) to better support clients in their own domains.\nThey have obvious incentives to cache objects2 that maximize the benefit in their domains, possibly at the expense of globally optimum behavior.\nIt has been an open question whether these caching scenarios and protocols maintain their desirable global properties (low total social cost, for example) in the face of selfish behavior.\nIn this paper, we take a game-theoretic approach to analyzing the problem of caching in networks of selfish servers through theoretical analysis and simulations.\nWe model selfish caching as a non-cooperative game.\nIn the basic model, the servers have two possible actions for each object.\nIf a replica of a requested object is located at a nearby node, the server may be better off accessing the remote replica.\nOn the other hand, if all replicas are located too far away, the server is better off caching the object itself.\nDecisions about caching the replicas locally are arrived at locally, taking into account only local costs.\nWe also define a more elaborate payment model, in which each server bids for having an object replicated at another site.\nEach site 
now has the option of replicating an object and collecting the related bids.\nOnce all servers have chosen a strategy, each game specifies a configuration, that is, the set of servers that replicate the object, and the corresponding costs for all servers.\nGame theory predicts that such a situation will end up in a Nash equilibrium, that is, a set of (possibly randomized) strategies with the property that no player can benefit by changing its strategy while the other players keep their strategies unchanged [28].\nFoundational considerations notwithstanding, it is not easy to accept randomized strategies as the behavior of rational agents in a distributed system (see [28] for an extensive discussion)--but this is what classical game theory can guarantee.\nIn certain very fortunate situations, however (see [9]), the existence of pure (that is, deterministic) Nash equilibria can be predicted.\nWith or without randomization, however, the lack of coordination inherent in selfish decision-making may incur costs well beyond what would be globally optimum.\nThis loss of efficiency is\nquantified by the price of anarchy [21].\nThe price of anarchy is the ratio of the social (total) cost of the worst possible Nash equilibrium to the cost of the social optimum.\nThe price of anarchy bounds the worst possible behavior of a selfish system, when left completely on its own.\nHowever, in reality there are ways whereby the system can be guided, through \"seeding\" or incentives, to a preselected Nash equilibrium.\nThis \"optimistic\" version of the price of anarchy [3] is captured by the smallest ratio between a Nash equilibrium and the social optimum.\nIn this paper we address the following questions:\n\u2022 Do pure strategy Nash equilibria exist in the caching game?\n\u2022 If pure strategy Nash equilibria do exist, how efficient are they (in terms of the price of anarchy, or its optimistic counterpart) under different placement costs, network topologies, and demand 
distributions?\n\u2022 What is the effect of adopting payments?\nWill the Nash equilibria be improved?\nWe show that pure strategy Nash equilibria always exist in the caching game.\nThe price of anarchy of the basic game model can be O (n), where n is the number of servers; the intuitive reason is undersupply.\nUnder certain topologies, the price of anarchy does have tighter bounds.\nFor complete graphs and stars, it is O (1).\nFor\nanarchy can be O (n).\nIn the payment model, however, the game can always implement a Nash equilibrium that is same as the social optimum, so the optimistic price of anarchy is one.\nOur simulation results show several interesting phases.\nAs the placement cost increases from zero, the price of anarchy increases.\nWhen the placement cost first exceeds the maximum distance between servers, the price of anarchy is at its highest due to undersupply problems.\nAs the placement cost further increases, the price of anarchy decreases, and the effect of replica misplacement dominates the price of anarchy.\nThe rest of the paper is organized as follows.\nIn Section 2 we discuss related work.\nSection 3 discusses details of the basic game and analyzes the bounds of the price of anarchy.\nIn Section 4 we discuss the payment game and analyze its price of anarchy.\nIn Section 5 we describe our simulation methodology and study the properties of Nash equilibria observed.\nWe discuss extensions of the game and directions for future work in Section 6.\n2.\nRELATED WORK\nThere has been considerable research on wide-area peer-to-peer file systems such as OceanStore [22], CFS [5], PAST [32], FARSITE [2], and Pangaea [33], web caches such as NetCache [6] and SummaryCache [10], and peer-to-peer caches such as Squirrel [16].\nMost of these systems use caching for performance, availability, and reliability.\nThe caching protocols assume obedience to the protocol and ignore participants' incentives.\nOur work starts from the assumption that servers are selfish 
and quantifies the cost of the lack of coordination when servers behave selfishly.\nThe placement of replicas in the caching problem is the most important issue.\nThere is much work on the placement of web replicas, instrumentation servers, and replicated resources.\nAll protocols assume obedience and ignore participants' incentives.\nIn [14], Gribble et al. discuss the data placement problem in peer-to-peer systems.\nKo and Rubenstein propose a self-stabilizing, distributed graph coloring algorithm for the replicated resource placement [20].\nChen, Katz, and Kubiatowicz propose a dynamic replica placement algorithm exploiting underlying distributed hash tables [4].\nDouceur and Wattenhofer describe a hill-climbing algorithm to exchange replicas for reliability in FARSITE [8].\nRaDar is a system that replicates and migrates objects for an Internet hosting service [31].\nTang and Chanson propose a coordinated en-route web caching that caches objects along the routing path [34].\nCentralized algorithms for the placement of objects, web proxies, mirrors, and instrumentation servers in the Internet have been studied extensively [18,19, 23, 30].\nThe facility location problem has been widely studied as a centralized optimization problem in theoretical computer science and operations research [27].\nSince the problem is NP-hard, approximation algorithms based on primal-dual techniques, greedy algorithms, and local search have been explored [17, 24, 26].\nOur caching game is different from all of these in that the optimization process is performed among distributed selfish servers.\nThere is little research in non-cooperative facility location games, as far as we know.\nVetta [35] considers a class of problems where the social utility is submodular (submodularity means decreasing marginal utility).\nIn the case of competitive facility location among corporations he proves that any Nash equilibrium gives an expected social utility within a factor of 2 of optimal plus an 
additive term that depends on the facility opening cost.\nTheir results are not directly applicable to our problem, however, because we consider each server to be tied to a particular location, while in their model an agent is able to open facilities in multiple locations.\nNote that in that paper the increase of the price of anarchy comes from oversupply problems due to the fact that competing corporations can open facilities at the same location.\nOn the other hand, the significant problems in our game are undersupply and misplacement.\nIn a recent paper, Goemans et al. analyze content distribution on ad-hoc wireless networks using a game-theoretic approach [12].\nAs in our work, they provide monetary incentives to mobile users for caching data items, and provide tight bounds on the price of anarchy and speed of convergence to (approximate) Nash equilibria.\nHowever, their results are incomparable to ours because their payoff functions neglect network latencies between users, they consider multiple data items (markets), and each node has a limited budget to cache items.\nCost sharing in the facility location problem has been studied using cooperative game theory [7, 13, 29].\nGoemans and Skutella show strong connections between fair cost allocations and linear programming relaxations for facility location problems [13].\nP \u00b4 al and Tardos develop a method for cost-sharing that is approximately budget-balanced and group strategyproof and show that the method recovers 1\/3 of the total cost for the facility location game [29].\nDevanur, Mihail, and Vazirani give a strategyproof cost allocation for the facility location problem, but cannot achieve group strategyproofness [7].\n3.\nBASIC GAME\n3.1 Game Model\n3.2 Nash Equilibrium Solutions\n3.3 Social Optimum\n3.4 Analysis\n4.\nPAYMENT GAME\n4.1 Game Model\n4.2 Analysis\n5.\nSIMULATION\n5.1 Varying Placement Cost\n5.2 Different Underlying Topologies\n5.3 Varying Demand Distribution\n5.4 Effects of 
Payment\n6.\nDISCUSSION AND FUTURE WORK\n7.\nCONCLUSIONS\n8.\nACKNOWLEDGMENTS\n9.\nREFERENCES\nAPPENDIX A. ANALYZING SPECIFIC TOPOLOGIES\nB. PAYMENT CAN DO WORSE\nC. NASH DYNAMICS PROTOCOLS","lvl-4":"Selfish Caching in Distributed Systems: A Game-Theoretic Analysis\nABSTRACT\nWe analyze replication of resources by server nodes that act selfishly, using a game-theoretic approach.\nWe refer to this as the selfish caching problem.\nIn our model, nodes incur either cost for replicating resources or cost for access to a remote replica.\nWe show the existence of pure strategy Nash equilibria and investigate the price of anarchy, which is the relative cost of the lack of coordination.\nThe price of anarchy can be high due to undersupply problems, but with certain network topologies it has better bounds.\nWith a payment scheme, the game can always implement the social optimum as its best-case Nash equilibrium by giving servers an incentive to replicate.\n1.\nINTRODUCTION\nCaching of files in selected servers is widely used to enhance the performance, availability, and reliability of these systems.\nHowever, most such systems assume that servers cooperate with one another by following protocols optimized for overall system performance, regardless of the costs incurred by each server.\nIn reality, servers may behave selfishly--seeking to maximize their own benefit.\nFor example, parties in different administrative domains utilize their local resources (servers) to better support clients in their own domains.\nThey have obvious incentives to cache objects that maximize the benefit in their domains, possibly at the expense of globally optimum behavior.\nIt has been an open question whether these caching scenarios and protocols maintain their desirable global properties (low total social
cost, for example) in the face of selfish behavior.\nIn this paper, we take a game-theoretic approach to analyzing the problem of caching in networks of selfish servers through theoretical analysis and simulations.\nWe model selfish caching as a non-cooperative game.\nIn the basic model, the servers have two possible actions for each object.\nIf a replica of a requested object is located at a nearby node, the server may be better off accessing the remote replica.\nOn the other hand, if all replicas are located too far away, the server is better off caching the object itself.\nDecisions about caching the replicas locally are arrived at locally, taking into account only local costs.\nWe also define a more elaborate payment model, in which each server bids for having an object replicated at another site.\nEach site now has the option of replicating an object and collecting the related bids.\nOnce all servers have chosen a strategy, each game specifies a configuration, that is, the set of servers that replicate the object, and the corresponding costs for all servers.\nIn certain very fortunate situations, however (see [9]), the existence of pure (that is, deterministic) Nash equilibria can be predicted.\nWith or without randomization, however, the lack of coordination inherent in selfish decision-making may incur costs well beyond what would be globally optimum.\nThis loss of efficiency is quantified by the price of anarchy [21].\nThe price of anarchy is the ratio of the social (total) cost of the worst possible Nash equilibrium to the cost of the social optimum.\nThe price of anarchy bounds the worst possible behavior of a selfish system, when left completely on its own.\nHowever, in reality there are ways whereby the system can be guided, through \"seeding\" or incentives, to a preselected Nash equilibrium.\nThis \"optimistic\" version of the price of anarchy [3] is captured by the smallest ratio between a Nash equilibrium and the social optimum.\nIn this paper we address the following
questions:\n• Do pure strategy Nash equilibria exist in the caching game?\n• If pure strategy Nash equilibria do exist, how efficient are they (in terms of the price of anarchy, or its optimistic counterpart) under different placement costs, network topologies, and demand distributions?\n• What is the effect of adopting payments?\nWill the Nash equilibria be improved?\nWe show that pure strategy Nash equilibria always exist in the caching game.\nThe price of anarchy of the basic game model can be O (n), where n is the number of servers; the intuitive reason is undersupply.\nUnder certain topologies, the price of anarchy does have tighter bounds.\nEven the optimistic price of anarchy can be O (n).\nIn the payment model, however, the game can always implement a Nash equilibrium that is the same as the social optimum, so the optimistic price of anarchy is one.\nOur simulation results show several interesting phases.\nAs the placement cost increases from zero, the price of anarchy increases.\nWhen the placement cost first exceeds the maximum distance between servers, the price of anarchy is at its highest due to undersupply problems.\nAs the placement cost further increases, the price of anarchy decreases, and the effect of replica misplacement dominates the price of anarchy.\nThe rest of the paper is organized as follows.\nIn Section 2 we discuss related work.\nSection 3 discusses details of the basic game and analyzes the bounds of the price of anarchy.\nIn Section 4 we discuss the payment game and analyze its price of anarchy.\nIn Section 5 we describe our simulation methodology and study the properties of Nash equilibria observed.\nWe discuss extensions of the game and directions for future work in Section 6.\n2.\nRELATED WORK\nMost of these systems use caching for performance, availability, and reliability.\nThe caching protocols assume obedience to the protocol and ignore participants' incentives.\nOur work starts from the assumption that servers are selfish and quantifies
the cost of the lack of coordination when servers behave selfishly.\nReplica placement is the central issue in the caching problem.\nThere is much work on the placement of web replicas, instrumentation servers, and replicated resources.\nAll protocols assume obedience and ignore participants' incentives.\nIn [14], Gribble et al. discuss the data placement problem in peer-to-peer systems.\nKo and Rubenstein propose a self-stabilizing, distributed graph coloring algorithm for the replicated resource placement [20].\nChen, Katz, and Kubiatowicz propose a dynamic replica placement algorithm exploiting underlying distributed hash tables [4].\nDouceur and Wattenhofer describe a hill-climbing algorithm to exchange replicas for reliability in FARSITE [8].\nRaDar is a system that replicates and migrates objects for an Internet hosting service [31].\nTang and Chanson propose a coordinated en-route web caching that caches objects along the routing path [34].\nCentralized algorithms for the placement of objects, web proxies, mirrors, and instrumentation servers in the Internet have been studied extensively [18, 19, 23, 30].\nThe facility location problem has been widely studied as a centralized optimization problem in theoretical computer science and operations research [27].\nOur caching game is different from all of these in that the optimization process is performed among distributed selfish servers.\nThere is little research in non-cooperative facility location games, as far as we know.\nVetta's results [35] are not directly applicable to our problem, however, because we consider each server to be tied to a particular location, while in his model an agent is able to open facilities in multiple locations.\nNote that in his model the increased price of anarchy stems from oversupply problems, since competing corporations can open facilities at the same location.\nOn the other hand, the significant problems in our game are undersupply and
misplacement.\nAs in our work, Goemans et al. [12] provide monetary incentives to mobile users for caching data items, and provide tight bounds on the price of anarchy and speed of convergence to (approximate) Nash equilibria.\nCost sharing in the facility location problem has been studied using cooperative game theory [7, 13, 29].\nGoemans and Skutella show strong connections between fair cost allocations and linear programming relaxations for facility location problems [13].\nPál and Tardos develop a method for cost-sharing that is approximately budget-balanced and group strategyproof and show that the method recovers 1\/3 of the total cost for the facility location game [29].\nDevanur, Mihail, and Vazirani give a strategyproof cost allocation for the facility location problem, but cannot achieve group strategyproofness [7].","lvl-2":"Selfish Caching in Distributed Systems: A Game-Theoretic Analysis\nABSTRACT\nWe analyze replication of resources by server nodes that act selfishly, using a game-theoretic approach.\nWe refer to this as the selfish caching problem.\nIn our model, nodes incur either cost for replicating resources or cost for access to a remote replica.\nWe show the existence of pure strategy Nash equilibria and investigate the price of anarchy, which is the relative cost of the lack of coordination.\nThe price of anarchy can be high due to undersupply problems, but with certain network topologies it has better bounds.\nWith a payment scheme, the game can always implement the social optimum as its best-case Nash equilibrium by giving servers an incentive to replicate.\n1.\nINTRODUCTION\nWide-area peer-to-peer file systems [2,5,22,32,33], peer-to-peer caches [15,16], and web caches [6,10] have become popular over the last few years.\n* Research supported by NSF career award ANI-9985250, NSF ITR award CCR-0085899, and California MICRO award 00-049.\n† Research supported by NSF ITR grant 0121555.\n‡ Research supported by NSF grant CCR 9984703 and a US-Israel Binational Science Foundation grant.\n§ Research supported by NSF grant EIA-0122599.\nCaching of files in selected servers is widely used to enhance the performance, availability, and reliability of these systems.\nHowever, most such systems assume that servers cooperate with one another by following protocols optimized for overall system performance, regardless of the costs incurred by each server.\nIn reality, servers may behave selfishly--seeking to maximize their own benefit.\nFor example, parties in different administrative domains utilize their local resources (servers) to better support clients in their own domains.\nThey have obvious incentives to cache objects that maximize the benefit in their domains, possibly at the expense of globally optimum behavior.\nIt has been an open question whether these caching scenarios and protocols maintain their desirable global properties (low total social cost, for example) in the face of selfish behavior.\nIn this paper, we take a game-theoretic approach to analyzing the problem of caching in networks of selfish servers through theoretical analysis and simulations.\nWe model selfish caching as a non-cooperative game.\nIn the basic model, the servers have two possible actions for each object.\nIf a replica of a requested object is located at a nearby node, the server may be better off accessing the remote replica.\nOn the other hand, if all replicas are located too far away, the server is better off caching the object itself.\nDecisions about caching the replicas locally are arrived at locally, taking into account only local costs.\nWe also define a more elaborate payment model, in which each server bids for having an object replicated at another site.\nEach site now has the option of replicating an object and collecting the related bids.\nOnce all servers have chosen a strategy, each game specifies a configuration, that is, the set of servers that replicate the object, and the corresponding costs for all servers.\nGame
theory predicts that such a situation will end up in a Nash equilibrium, that is, a set of (possibly randomized) strategies with the property that no player can benefit by changing its strategy while the other players keep their strategies unchanged [28].\nFoundational considerations notwithstanding, it is not easy to accept randomized strategies as the behavior of rational agents in a distributed system (see [28] for an extensive discussion)--but this is what classical game theory can guarantee.\nIn certain very fortunate situations, however (see [9]), the existence of pure (that is, deterministic) Nash equilibria can be predicted.\nWith or without randomization, however, the lack of coordination inherent in selfish decision-making may incur costs well beyond what would be globally optimum.\nThis loss of efficiency is\nquantified by the price of anarchy [21].\nThe price of anarchy is the ratio of the social (total) cost of the worst possible Nash equilibrium to the cost of the social optimum.\nThe price of anarchy bounds the worst possible behavior of a selfish system, when left completely on its own.\nHowever, in reality there are ways whereby the system can be guided, through \"seeding\" or incentives, to a preselected Nash equilibrium.\nThis \"optimistic\" version of the price of anarchy [3] is captured by the smallest ratio between a Nash equilibrium and the social optimum.\nIn this paper we address the following questions:\n\u2022 Do pure strategy Nash equilibria exist in the caching game?\n\u2022 If pure strategy Nash equilibria do exist, how efficient are they (in terms of the price of anarchy, or its optimistic counterpart) under different placement costs, network topologies, and demand distributions?\n\u2022 What is the effect of adopting payments?\nWill the Nash equilibria be improved?\nWe show that pure strategy Nash equilibria always exist in the caching game.\nThe price of anarchy of the basic game model can be O (n), where n is the number of servers; 
the intuitive reason is undersupply.\nUnder certain topologies, the price of anarchy does have tighter bounds.\nFor complete graphs and stars, it is O (1).\nEven the optimistic price of anarchy can be O (n).\nIn the payment model, however, the game can always implement a Nash equilibrium that is the same as the social optimum, so the optimistic price of anarchy is one.\nOur simulation results show several interesting phases.\nAs the placement cost increases from zero, the price of anarchy increases.\nWhen the placement cost first exceeds the maximum distance between servers, the price of anarchy is at its highest due to undersupply problems.\nAs the placement cost further increases, the price of anarchy decreases, and the effect of replica misplacement dominates the price of anarchy.\nThe rest of the paper is organized as follows.\nIn Section 2 we discuss related work.\nSection 3 discusses details of the basic game and analyzes the bounds of the price of anarchy.\nIn Section 4 we discuss the payment game and analyze its price of anarchy.\nIn Section 5 we describe our simulation methodology and study the properties of Nash equilibria observed.\nWe discuss extensions of the game and directions for future work in Section 6.\n2.\nRELATED WORK\nThere has been considerable research on wide-area peer-to-peer file systems such as OceanStore [22], CFS [5], PAST [32], FARSITE [2], and Pangaea [33], web caches such as NetCache [6] and SummaryCache [10], and peer-to-peer caches such as Squirrel [16].\nMost of these systems use caching for performance, availability, and reliability.\nThe caching protocols assume obedience to the protocol and ignore participants' incentives.\nOur work starts from the assumption that servers are selfish and quantifies the cost of the lack of coordination when servers behave selfishly.\nReplica placement is the central issue in the caching problem.\nThere is much work on the placement of web replicas, instrumentation servers, and replicated resources.\nAll
protocols assume obedience and ignore participants' incentives.\nIn [14], Gribble et al. discuss the data placement problem in peer-to-peer systems.\nKo and Rubenstein propose a self-stabilizing, distributed graph coloring algorithm for the replicated resource placement [20].\nChen, Katz, and Kubiatowicz propose a dynamic replica placement algorithm exploiting underlying distributed hash tables [4].\nDouceur and Wattenhofer describe a hill-climbing algorithm to exchange replicas for reliability in FARSITE [8].\nRaDar is a system that replicates and migrates objects for an Internet hosting service [31].\nTang and Chanson propose a coordinated en-route web caching that caches objects along the routing path [34].\nCentralized algorithms for the placement of objects, web proxies, mirrors, and instrumentation servers in the Internet have been studied extensively [18, 19, 23, 30].\nThe facility location problem has been widely studied as a centralized optimization problem in theoretical computer science and operations research [27].\nSince the problem is NP-hard, approximation algorithms based on primal-dual techniques, greedy algorithms, and local search have been explored [17, 24, 26].\nOur caching game is different from all of these in that the optimization process is performed among distributed selfish servers.\nThere is little research in non-cooperative facility location games, as far as we know.\nVetta [35] considers a class of problems where the social utility is submodular (submodularity means decreasing marginal utility).\nIn the case of competitive facility location among corporations he proves that any Nash equilibrium gives an expected social utility within a factor of 2 of optimal plus an additive term that depends on the facility opening cost.\nHis results are not directly applicable to our problem, however, because we consider each server to be tied to a particular location, while in his model an agent is able to open facilities in multiple
locations.\nNote that in that model the increased price of anarchy stems from oversupply problems, since competing corporations can open facilities at the same location.\nOn the other hand, the significant problems in our game are undersupply and misplacement.\nIn a recent paper, Goemans et al. analyze content distribution on ad-hoc wireless networks using a game-theoretic approach [12].\nAs in our work, they provide monetary incentives to mobile users for caching data items, and provide tight bounds on the price of anarchy and speed of convergence to (approximate) Nash equilibria.\nHowever, their results are incomparable to ours because their payoff functions neglect network latencies between users, they consider multiple data items (markets), and each node has a limited budget to cache items.\nCost sharing in the facility location problem has been studied using cooperative game theory [7, 13, 29].\nGoemans and Skutella show strong connections between fair cost allocations and linear programming relaxations for facility location problems [13].\nPál and Tardos develop a method for cost-sharing that is approximately budget-balanced and group strategyproof and show that the method recovers 1\/3 of the total cost for the facility location game [29].\nDevanur, Mihail, and Vazirani give a strategyproof cost allocation for the facility location problem, but cannot achieve group strategyproofness [7].\n3.\nBASIC GAME\nThe caching problem we study is to find a configuration that meets certain objectives (e.g., minimum total cost).\nFigure 1 shows examples of caching among four servers.\nIn network (a), A stores an object.\nSuppose B wants to access the object.\nIf it is cheaper to access the remote replica than to cache it, B accesses the remote replica as shown in network (b).\nIn network (c), C wants to access the object.\nIf C is far from A, C caches the object instead of accessing the object from A.\nIt is possible that in an optimal
configuration it would be better to place replicas in A and B. Understanding the placement of replicas by selfish servers is the focus of our study.\nThe caching problem is abstracted as follows.\nThere is a set N of n servers and a set M of m objects.\nThe distance between servers can be represented as a distance matrix D (i.e., dij is the distance from server i to server j).\nD models an underlying network topology.\nFor our analysis we assume that the distances are symmetric and that the triangle inequality holds on the distances (for all servers i, j, k: dij + djk ≥ dik).\nEach server has demand from clients that is represented by a demand matrix W (i.e., wij is the demand of server i for object j).\nWhen a server caches objects, the server incurs some placement cost that is represented by a matrix α (i.e., αij is the placement cost of server i for object j).\nIn this study, we assume that servers have no capacity limit.\nAs we discuss in the next section, this fact means that the caching behavior with respect to each object can be examined separately.\nConsequently, we can talk about configurations of the system with respect to a given object: a configuration for object j is the set X ⊆ N of servers that replicate j.\nFigure 1: Caching.\nThere are four servers labeled A, B, C, and D.\nThe rectangles are object replicas.\nIn (a), A stores an object.\nIf B incurs less cost accessing A's replica than it would caching the object itself, it accesses the object from A as in (b).\nIf the distance cost is too high, the server caches the object itself, as C does in (c).\nThis figure is an example of our caching game model.\nThe goal of the basic game is to find configurations that are achieved when servers optimize their cost functions locally.\n3.1 Game Model\nWe take a game-theoretic approach to analyzing the uncapacitated caching problem among networked selfish servers.\nWe model the selfish caching problem as a non-cooperative game with n players (servers\/nodes) whose strategies are sets of objects to cache.\nIn the game, each server
chooses a pure strategy that minimizes its cost.\nOur focus is to investigate the resulting configuration, which is the Nash equilibrium of the game.\nIt should be emphasized that we consider only pure strategy Nash equilibria in this paper.\nThe cost model is an important part of the game.\nLet Ai be the set of feasible strategies for server i, and let Si ∈ Ai be the strategy chosen by server i. Given a strategy profile S = (S1, S2,..., Sn), the cost incurred by server i is defined as:\nCi (S) = Σj∈M (αij 1 {j ∈ Si} + wij di,f (i,j)) (1)\nwhere αij is the placement cost of object j, wij is the demand that server i has for object j, f (i, j) is the closest server to i that caches object j (so di,f (i,j) = 0 when i itself caches j), and dik is the distance between i and k.\nWhen no server caches the object, we define the distance cost di,f (i,j) to be dM--large enough that at least one server will choose to cache the object.\nThe placement cost can be further divided into first-time installation cost and maintenance cost:\nwhere k1i is the installation cost, k2i is the relative weight between the maintenance cost and the installation cost, Pj is the ratio of the number of writes over the number of reads and writes, UpdateSizej is the size of an update, ObjectSizej is the size of the object, and T is the update period.\nWe see tradeoffs between different parameters in this equation.\nFor example, placing replicas becomes more expensive as UpdateSizej increases, Pj increases, or T decreases.\nHowever, note that by varying αij itself we can capture the full range of behaviors in the game.\nFor our analysis, we use only αij.\nSince there is no capacity limit on servers, we can look at each single object as a separate game and combine the pure strategy equilibria of these games to obtain a pure strategy equilibrium of the multi-object game.\nFabrikant, Papadimitriou, and Talwar discuss this existence argument: if two games are known to have pure equilibria, and their cost functions are cross-monotonic, then their union is also guaranteed to have pure
Nash equilibria, by a continuity argument [9].\nA Nash equilibrium for the multi-object game is the cross product of Nash equilibria for single-object games.\nTherefore, we can focus on the single-object game in the rest of this paper.\nFor single-object selfish caching, each server i has two strategies--to cache or not to cache.\nThe object under consideration is j.\nWe define Si to be 1 when server i caches j and 0 otherwise.\nThe cost incurred by server i is\nCi (S) = αij Si + wij di,f (i,j) (1 − Si) (3)\nWe refer to this game as the basic game.\nThe extent to which Ci (S) represents actual cost incurred by server i is beyond the scope of this paper; we will assume that an appropriate cost function of the form of Equation 3 can be defined.\n3.2 Nash Equilibrium Solutions\nIn principle, we can start with a random configuration and let this configuration evolve as each server alters its strategy and attempts to minimize its cost.\nGame theory is interested in stable solutions called Nash equilibria.\nA pure strategy Nash equilibrium is reached when no server can benefit by unilaterally changing its strategy.\nA Nash equilibrium3 (S*i, S*−i) for the basic game specifies a configuration X such that ∀i ∈ N: i ∈ X ⇔ S*i = 1.\nThus, we can consider the set E of all pure strategy Nash equilibrium configurations:\nE = {X: X is the configuration of some pure strategy Nash equilibrium (S*i, S*−i)}.\nBy this definition, no server has incentive to deviate in these configurations since it cannot reduce its cost.\nFor the basic game, we can easily see that (stated here for unit demand and uniform placement cost α):\nX ∈ E ⇔ (1) ∀i ∈ N: mink∈X dik ≤ α, and (2) ∀i, k ∈ X, i ≠ k: dik ≥ α.\nThe first condition guarantees that there is a server that places the replica within distance α of each server i.\nIf the replica is not placed at i, then it is placed at another server within distance α of i, so i has no incentive to cache.\nIf the replica is placed at i, then the second condition ensures there is no incentive to drop the replica because no two servers separated by distance less than α both place replicas.\n3The notation for strategy profile (S*i, S*−i) separates node i's strategy (S*i) from the strategies of other nodes (S*−i).\nFigure 2: Potential inefficiency of Nash equilibria illustrated by two clusters of n\/2 servers.\nThe intra-cluster distances are all zero and the distance between clusters is α − 1, where α is the placement cost.\nThe dark nodes replicate the object.\nNetwork (a) shows a Nash equilibrium in the basic game, where one server in a cluster caches the object.\nNetwork (b) shows the social optimum where two replicas, one for each cluster, are placed.\nThe price of anarchy is O (n) and even the optimistic price of anarchy is O (n).\nThis high price of anarchy comes from the undersupply of replicas due to the selfish nature of servers.\nNetwork (c) shows a Nash equilibrium in the payment game, where two replicas, one for each cluster, are placed.\nEach light node in each cluster pays 2α\/n to the dark node, and the dark node replicates the object.\nHere, the optimistic price of anarchy is one.\n3.3 Social Optimum\nThe social cost of a given strategy profile is defined as the total cost incurred by all servers, namely:\nC (S) = Σi∈N Ci (S)\nwhere Ci (S) is the cost incurred by server i given by Equation 1.\nThe social optimum cost, referred to as C (SO) for the remainder of the paper, is the minimum social cost.\nThe social optimum cost will serve as an important base case against which to measure the cost of selfish caching.\nWe define C (SO) as:\nC (SO) = minS Σi∈N Ci (S)\nwhere S varies over all possible strategy profiles.\nNote that in the basic game, this means varying configuration X over all possible configurations.\nIn some sense, C (SO) represents the best possible caching behavior--if only nodes could be convinced to cooperate with one another.\nThe social optimum configuration is a solution of a mini-sum facility location problem, which is NP-hard [11].\nTo find such configurations, we formulate an integer programming problem:\nminimize Σi,j αij xij + Σi,j,k wij dik yijk\nsubject to Σk yijk = I (wij) for all i, j; yijk ≤ xkj for all i, j, k; xij, yijk ∈ {0, 1}\nHere, xij is 1 if server i
accesses object j from server k and 0 otherwise; I (w) returns 1 if w is nonzero and 0 otherwise.\nThe first constraint specifies that if server i has demand for object j, then it must access j from exactly one server.\nThe second constraint ensures that server i replicates object j if any other server accesses j from i.\n3.4 Analysis\nTo analyze the basic game, we first give a proof of the existence of pure strategy Nash equilibria.\nWe discuss the price of anarchy in general and then on specific underlying topologies.\nIn this analysis we use simply α in place of αij, since we deal with a single object and we assume the placement cost is the same for all servers.\nIn addition, when we compute the price of anarchy, we assume that all nodes have the same demand (i.e., ∀i ∈ N: wij = 1).\nTHEOREM 1.\nPure strategy Nash equilibria always exist in the basic game.\nPROOF.\nWe give a constructive proof.\nFirst, initialize the set V to N. Then, remove all nodes with zero demand from V.\nEach node x defines a threshold βx = α\/wxj.\nFurthermore, let Z (y) = {z: dzy ≤ βz, z ∈ V}; Z (y) represents all nodes z for which y lies within βz of z.
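The constructive procedure (repeatedly place a replica at the remaining node with the smallest threshold β and remove every node it covers) can be sketched in Python; the distance matrix, demands, and placement cost below are illustrative, not from the paper:

```python
def nash_placement(dist, w, alpha):
    """Greedy construction of a pure-strategy Nash equilibrium
    for the single-object basic game (sketch of the proof's procedure)."""
    # beta[x] = alpha / w[x]: node x prefers a replica within distance
    # beta[x] over paying the placement cost alpha itself.
    V = [x for x in range(len(w)) if w[x] > 0]   # drop zero-demand nodes
    beta = {x: alpha / w[x] for x in V}
    replicas = []
    while V:
        y = min(V, key=lambda x: beta[x])        # smallest threshold first
        replicas.append(y)
        # remove y and every z in Z(y), i.e. every z with d(z, y) <= beta[z]
        V = [z for z in V if dist[z][y] > beta[z]]
    return replicas

# Two-cluster network in the spirit of Figure 2: intra-cluster distance 0,
# inter-cluster distance alpha - 1, unit demand everywhere.
n, alpha = 6, 4.0
cluster = lambda i: 0 if i < n // 2 else 1
dist = [[0.0 if cluster(i) == cluster(j) else alpha - 1 for j in range(n)]
        for i in range(n)]
w = [1.0] * n
print(nash_placement(dist, w, alpha))   # a single replica serves all nodes
```

On this instance the construction stops after one placement, matching the undersupplied Nash equilibrium of Figure 2 (a) rather than the two-replica social optimum.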
Pick a node y ∈ V such that βy ≤ βx for all x ∈ V.\nPlace a replica at y and then remove y and all z ∈ Z (y) from V.\nNo such z can have incentive to replicate the object because it can access y's replica at lower (or equal) cost.\nIterate this process of placing replicas until V is empty.\nBecause at each iteration y is the remaining node with minimum β, no replica will be placed within distance βy of any such y by this process.\nThe resulting configuration is a pure-strategy Nash equilibrium of the basic game.\nThe Price of Anarchy (PoA): To quantify the cost of lack of coordination, we use the price of anarchy [21] and the optimistic price of anarchy [3].\nThe price of anarchy is the ratio of the social costs of the worst-case Nash equilibrium and the social optimum, and the optimistic price of anarchy is the ratio of the social costs of the best-case Nash equilibrium and the social optimum.\nWe show general bounds on the price of anarchy.\nThroughout our discussion, we use C (SW) to represent the cost of the worst-case Nash equilibrium, C (SO) to represent the cost of the social optimum, and PoA to represent the price of anarchy, which is C (SW) \/ C (SO).\nThe worst-case Nash equilibrium maximizes the total cost under the constraint that the configuration meets the Nash condition.\nFormally, we can define C (SW) as follows.\nC (SW) = maxX∈E (α |X| + Σi∈N minj∈X dij)\nwhere minj∈X dij is the distance to the closest replica (including i itself) from node i and X varies through Nash equilibrium configurations.\nBounds on the Price of Anarchy: We show bounds on the price of anarchy as α varies.\nLet dmin = min {dij: (i, j) ∈ N × N, i ≠ j} and dmax = max {dij: (i, j) ∈ N × N}.\nWe see that if α ≤ dmin, PoA = 1 trivially, since every server caches the object in both the Nash equilibrium and the social optimum.\nWhen α > dmax, there is a transition in Nash equilibria: since the placement cost is greater than any distance cost, only one server caches the object and other servers access it remotely.\nHowever, the social optimum may still place multiple replicas.\nSince α ≤ C (SO) ≤ α + minj∈N Σi dij when α > dmax, PoA ≤ (α + maxj∈N Σi dij) \/ α.\nNote that depending on the underlying topology, even the lower bound of PoA can be O (n).\nFinally, there is a transition when α > maxj∈N Σi dij.\nIn this case, C (SW) = α + maxj∈N Σi dij and C (SO) = α + minj∈N Σi dij, so PoA < 2.\nFigure 2 shows an example of the inefficiency of a Nash equilibrium.\nIn the network there are two clusters of servers, each of size n\/2.\nThe distance between the two clusters is α − 1, where α is the placement cost.\nFigure 2 (a) shows a Nash equilibrium where one server in a cluster caches the object.\nIn this case, C (SW) = α + (α − 1) n\/2, since all servers in the other cluster access the remote replica.\nHowever, the social optimum places two replicas, one for each cluster, as shown in Figure 2 (b).\nTherefore, C (SO) = 2α, and PoA = O (n).\nThis high price of anarchy comes from an undersupply of replicas due to the selfish nature of the servers.\nNote that all Nash equilibria have the same cost; thus even the optimistic price of anarchy is O (n).\nIn Appendix A, we analyze the price of anarchy with specific underlying topologies and show that PoA can have tighter bounds than O (n) for the complete graph, star, line, and D-dimensional grid.\nIn these topologies, we set the distance between directly connected nodes to one.\nWe describe the case where α > 1, since PoA = 1 trivially when α ≤ 1.\nA summary of the results is shown in Table 1.\nTable 1: PoA in the basic game for specific topologies\n4.\nPAYMENT GAME\nIn this section, we present an extension to the basic game with payments and analyze the price of anarchy and the optimistic price of anarchy of the game.\n4.1 Game Model\nThe new game, which we refer to as the payment game, allows each player to offer a payment to another player to give the latter an incentive to replicate the object.\nThe cost of replication is shared among the nodes paying the
server that replicates the object.\nThe strategy for each player i is specified by a triplet (vi, bi, ti) ∈ N × R+ × R+.\nHere vi specifies the player to whom i makes a bid, bi ≥ 0 is the value of the bid, and ti ≥ 0 denotes a threshold for payments beyond which i will replicate the object.\nIn addition, we use Ri to denote the total amount of bids received by a node i (Ri = Σ_{j: vj=i} bj).\nA node i replicates the object if and only if Ri ≥ ti, that is, the amount of bids it receives is greater than or equal to its threshold.\nLet Ii denote the corresponding indicator variable, that is, Ii equals 1 if i replicates the object, and 0 otherwise.\nWe make the rule that if a node i makes a bid to another node j and j replicates the object, then i must pay j the amount bi.\nIf j does not replicate the object, i does not pay j. Given a strategy profile, the outcome of the game is the set of tuples {(Ii, vi, bi, Ri)}.\nIi tells us whether player i replicates the object or not, bi is the payment player i makes to player vi, and Ri is the total amount of bids received by player i.
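These outcome rules can be sketched directly in code. The following is a minimal illustration, not the paper's implementation; the list-based representation of strategies and all function names are our own.

```python
# Minimal sketch of the payment-game outcome rules described above.
# Strategies: for each node i, (v[i], b[i], t[i]) = (bid target, bid value, threshold).

def outcome(v, b, t):
    n = len(v)
    # R[i]: total bids received by node i
    R = [0.0] * n
    for j in range(n):
        R[v[j]] += b[j]
    # I[i] = 1 iff node i replicates: it does so exactly when R[i] >= t[i]
    I = [1 if R[i] >= t[i] else 0 for i in range(n)]
    # A bidder pays its bid only if the node it bid to actually replicates
    pay = [b[i] if I[v[i]] else 0.0 for i in range(n)]
    return I, R, pay

# Three nodes: nodes 0 and 2 each bid 0.5 to node 1, whose threshold is 1.0,
# so node 1 replicates and collects both bids.
I, R, pay = outcome(v=[1, 0, 1], b=[0.5, 0.0, 0.5], t=[9.9, 1.0, 9.9])
```

Note that a bid to a node that fails to reach its threshold costs the bidder nothing, which is what makes offering payments risk-free in this model.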
To compute the payoffs given the outcome, we must now take into account the payments a node makes, in addition to the placement costs and access costs of the basic game.\nBy our rules, a server node i pays bi to node vi if vi replicates the object, and receives a payment of Ri if it replicates the object itself.\nIts net payment is bi·Ivi − Ri·Ii.\nThe total cost incurred by each node is the sum of its placement cost, access cost, and net payment: Ci = α·Ii + (1 − Ii)·min_{j: Ij=1} dij + bi·Ivi − Ri·Ii.\nThe cost of the social optimum for the payment game is the same as that for the basic game, since the net payments cancel out.\n4.2 Analysis\nIn analyzing the payment model, we first show that a Nash equilibrium in the basic game is also a Nash equilibrium in the payment game.\nWe then present an important positive result: in the payment game the socially optimal configuration can always be implemented by a Nash equilibrium.\nWe know from the counterexample in Figure 2 that this is not guaranteed in the basic game.\nIn this analysis we use α to represent the placement cost αi, which we take to be equal for all nodes.\nTHEOREM 2.\nAny configuration that is a pure strategy Nash equilibrium in the basic game is also a pure strategy Nash equilibrium in the payment game.\nTherefore, the price of anarchy of the payment game is at least that of the basic game.\nPROOF.\nConsider any Nash equilibrium configuration in the basic game.\nFor each node i replicating the object, set its threshold ti to 0; every other node has threshold α.\nAlso, for all i, bi = 0.\nA node that replicates the object does not have incentive to change its strategy: changing the threshold does not decrease its cost, and it would have to pay at least α to access a remote replica or incentivize a nearby node to cache.\nTherefore it is better off keeping its threshold and bid at 0 and replicating the object.\nA node that is not replicating the object can access the object remotely at a cost less than or equal to α.\nLowering its threshold does not decrease its cost, since all bi
are zero.\nThe payment necessary for another server to place a replica is at least α.\nNo player has incentive to deviate, so the current configuration is a Nash equilibrium.\nIn fact, Appendix B shows that the PoA of the payment game can be more than that of the basic game in a given topology.\nNow let us look at what happens to the example shown in Figure 2 in the best case.\nSuppose node B's cluster-mates each decide to pay node B an amount 2α/n, so that B's placement cost is covered.\nB does not have an incentive to deviate, since accessing the remote replica does not decrease its cost.\nThe same argument holds for A because of symmetry in the graph.\nSince no one has an incentive to deviate, the configuration is a Nash equilibrium.\nIts total cost is 2α, the same as in the socially optimal configuration shown in Figure 2(b).\nNext we prove that the payment game indeed always has a strategy profile that implements the socially optimal configuration as a Nash equilibrium.\nWe first present the following observation about thresholds in the payment game, which is used in the proof.\nOBSERVATION 1.\nIn a Nash equilibrium, if a node i replicates the object and ℓ is the closest other node holding a replica, then the total payment i receives must be at least (α − diℓ).\nOtherwise, it cannot collect enough payment to compensate for the cost of replicating the object and is better off accessing the replica at ℓ.
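The accounting in the two-cluster best case can be checked numerically. The sketch below is illustrative only: it assumes intra-cluster distances of zero and the payment profile just described, where each of the n/2 − 1 cluster-mates pays its cluster head 2α/n.

```python
# Illustrative check of the best-case payment equilibrium for a Figure-2-style
# instance: two clusters of n/2 servers, one replicating "head" per cluster,
# funded by bids of 2*alpha/n from each of its (n/2 - 1) cluster-mates.
# Intra-cluster distance 0 is an assumption made for simplicity.

def total_cost(n, alpha):
    per_cluster = n // 2
    bid = 2 * alpha / n
    # Head pays the placement cost alpha, offset by the bids it receives.
    head_cost = alpha - (per_cluster - 1) * bid
    # Each mate pays its bid and accesses the local replica at distance 0.
    mate_cost = bid
    return 2 * (head_cost + (per_cluster - 1) * mate_cost)
```

Since the net payments cancel, the social cost reduces to the two placements, i.e., total_cost(n, alpha) equals 2α for any cluster size, matching the socially optimal configuration of Figure 2(b).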
THEOREM 3.\nIn the payment game, there is always a pure strategy Nash equilibrium that implements the social optimum configuration.\nThe optimistic price of anarchy in the payment game is therefore always one.\nPROOF.\nConsider the socially optimal configuration Oopt.\nLet No be the set of nodes that replicate the object and Nc = N − No be the rest of the nodes.\nAlso, for each i in No, let Qi denote the set of nodes that access the object from i, not including i itself.\nIn the socially optimal configuration, dij < α for all j in Qi.\nWe want to find a set of payments and thresholds that makes this configuration implementable.\nThe idea is to look at each node i in No and distribute the minimum payment needed to make i replicate the object among the nodes that access the object from i.\nFor each i in No and each j in Qi, we define δj to be the difference between j's next best option (the cheaper of replicating the object itself and accessing some replica other than i) and j's cost dij for accessing the replica at i.\nIt is clear that δj ≥ 0.\nCLAIM 1.\nFor each i ∈ No, let ℓ be the nearest node to i in No − {i}.\nThen Σ_{j∈Qi} δj ≥ α − diℓ.\nPROOF (of claim).\nAssume the contrary, that is, Σ_{j∈Qi} δj < α − diℓ.\nConsider the new configuration Onew wherein i does not replicate and each node in Qi chooses its next best strategy (either replicating or accessing the replica at some node in No − {i}).\nIn addition, we still place replicas at each node in No − {i}.\nIt is easy to see that the cost of Oopt minus the cost of Onew is at least α − diℓ − Σ_{j∈Qi} δj > 0, which contradicts the optimality of Oopt.\nWe set bids as follows.\nFor each i in No, bi = 0, and each j in Qi bids to i (i.e., vj = i) the amount bj = max{0, δj − Ei/(|Qi| + 1)}, (12) where Ei = Σ_{j∈Qi} δj − α + diℓ ≥ 0 and |Qi| is the cardinality of Qi.\nFor the thresholds, we set ti = α if i ∈ Nc, and ti = Σ_{j∈Qi} bj if i ∈ No. (13)\nThis fully specifies the
strategy profile of the nodes, and it is easy to see that the outcome is indeed the socially optimal configuration.\nNext, we verify that the strategies stipulated constitute a Nash equilibrium.\nHaving set ti to α for i in Nc means that any node in N is at least as well off lowering its threshold and replicating as bidding α to some node in Nc to make it replicate, so we may disregard the latter as a profitable strategy.\nBy Observation 1, to ensure that each i in No does not deviate, we require that if ℓ is the nearest node to i in No − {i}, then Σ_{j∈Qi} bj is at least (α − diℓ).\nOtherwise, i will raise ti above Σ_{j∈Qi} bj so that it does not replicate and instead accesses the replica at ℓ.\nWe can easily check that the bids in Equation (12) satisfy Σ_{j∈Qi} bj ≥ α − diℓ.\nTherefore, each node i ∈ No does not have incentive to change ti, since i would either lose the payments it receives or see no change, and i does not have incentive to change bi since it replicates the object.\nEach node j in Nc has no incentive to change tj since changing tj does not reduce its cost.\nIt also does not have incentive to reduce bj: the node from which j accesses would then not replicate, and j would have to replicate the object itself or access the next closest replica, which by the definition of bj costs at least as much.\nNo player has incentive to deviate, so this strategy profile is a Nash equilibrium.\nFigure 3: We present PoA, Ratio, and OPoA results for the basic game, varying α on a 100-node line topology, and we show the number of replicas placed by the Nash equilibria and by the optimal solution.\nWe see large peaks in PoA and OPoA at α = 100, where a phase transition causes an abrupt transition in the curves.\n5.\nSIMULATION\nWe run simulations to compare Nash equilibria for the single-object caching game with the social optimum computed by solving the integer linear program described in Equation 8 using Mosek [1].\nWe examine the price of anarchy (PoA), the optimistic price of anarchy (OPoA), and the average ratio of the costs of Nash
equilibria and social optima (Ratio), and when relevant we also show the average numbers of replicas placed by the Nash equilibrium (Replica (NE)) and the social optimum (Replica (SO)).\nThe PoA and OPoA are taken from the worst and best Nash equilibria, respectively, that we observe over the runs.\nEach data point in our figures is based on 1000 runs, randomly varying the initial strategy profile and player order.\nThe details of the simulations, including protocols and a discussion of convergence, are presented in Appendix C.\nIn our evaluation, we study the effects of variation in four categories: placement cost, underlying topology, demand distribution, and payments.\nAs we vary the placement cost α, we directly influence the tradeoff between caching and not caching.\nIn order to get a clear picture of the dependency of PoA on α in a simple case, we first analyze the basic game with a 100-node line topology whose edge distance is one.\nWe also explore transit-stub topologies generated using the GT-ITM library [36] and power-law topologies (router-level Barabasi-Albert model) generated using the BRITE topology generator [25].\nFor these topologies, we generate an underlying physical graph of 3050 physical nodes.\nBoth topologies have similar minimum, average, and maximum physical node distances.\nThe average distance is 0.42.\nWe create an overlay of 100 server nodes and use the same overlay for all experiments with the given topology.\nIn the game, each server has a demand whose distribution is Bernoulli (p), where p is the probability of having demand for the object; the default unless otherwise specified is p = 1.0.\nFigure 4: Transit-stub topology: (a) basic game, (b) payment game.\nWe show the PoA, Ratio, OPoA, and the number of replicas placed while varying α between 0 and 2 with 100 servers on a 3050-physical-node transit-stub topology.\nFigure 5: Power-law topology: (a) basic game, (b) payment game.\nWe show the PoA, Ratio, OPoA, and the
number of replicas placed while varying \u03b1 between 0 and 2 with 100 servers on a 3050-physical-node power-law topology.\n5.1 Varying Placement Cost\nFigure 3 shows PoA, OPoA, and Ratio, as well as number of replicas placed, for the line topology as \u03b1 varies.\nWe observe two phases.\nAs \u03b1 increases the PoA rises quickly to a peak at 100.\nAfter 100, there is a gradual decline.\nOPoA and Ratio show behavior similar to PoA.\nThese behaviors can be explained by examining the number of replicas placed by Nash equilibria and by optimal solutions.\nWe see that when \u03b1 is above one, Nash equilibrium solutions place fewer replicas than optimal on average.\nFor example, when \u03b1 is 100, the social optimum places four replicas, but the Nash equilibrium places only one.\nThe peak in PoA at \u03b1 = 100 occurs at the point for a 100-node line where the worst-case cost of accessing a remote replica is slightly less than the cost of placing a new replica, so selfish servers will never place a second replica.\nThe optimal solution, however, places multiple replicas to decrease the high global cost of access.\nAs \u03b1 continues to increase, the undersupply problem lessens as the optimal solution places fewer replicas.\n5.2 Different Underlying Topologies\nIn Figure 4 (a) we examine an overlay graph on the more realistic transit-stub topology.\nThe trends for the PoA, OPoA, and Ratio are similar to the results for the line topology, with a peak in PoA at \u03b1 = 0.8 due to maximal undersupply.\nIn Figure 5 (a) we examine an overlay graph on the power-law topology.\nWe observe several interesting differences between the power-law and transit-stub results.\nFirst, the PoA peaks at a lower level in the power-law graph, around 2.3 (at \u03b1 = 0.9) while the peak PoA in the transit-stub topology is almost 3.0 (at \u03b1 = 0.8).\nAfter the peak, PoA and Ratio decrease more slowly as \u03b1 increases.\nOPoA is close to one for the whole range of \u03b1 
values.\nThis can be explained by the observation in Figure 5 (a) that there is no significant undersupply problem here like there was in the transit-stub graph.\nIndeed the high PoA is due mostly to misplacement problems when \u03b1 is from 0.7 to 2.0, since there is little decrease in PoA when the number of replicas in social optimum changes from two to one.\nThe OPoA is equal to one in the figure when the same number of replicas are placed.\n5.3 Varying Demand Distribution\nNow we examine the effects of varying the demand distribution.\nThe set of servers with demand is random for p <1, so we calculate the expected PoA by averaging over 5 trials (each data point is based on 5000 runs).\nWe run simulations for demand levels of p \u2208 {0.2, 0.6, 1.0} as \u03b1 is varied on the 100 servers on top of the transit-stub graph.\nWe observe that as demand falls, so does expected PoA.\nAs p decreases, the number of replicas placed in the social optimum decreases, but the number in Nash equilibria changes little.\nFurthermore, when \u03b1 exceeds the overlay diameter, the number in Nash equilibria stays constant when p varies.\nTherefore, lower p leads to a lesser undersupply problem, agreeing with intuition.\nWe do not present the graph due to space limitations and redundancy; the PoA for p = 1.0 is identical to PoA in Figure 4 (a), and the lines for p = 0.6 and p = 0.2 are similar but lower and flatter.\n5.4 Effects of Payment\nFinally, we discuss the effects of payments on the efficiency of Nash equilibria.\nThe results are presented in Figure 4 (b) and Figure 5 (b).\nAs shown in the analysis, the simulations achieve OPoA close to one (it is not exactly one because of randomness in the simulations).\nThe Ratio for the payment game is much lower than the Ratio for the basic game, since the protocol for the payment game tends to explore good regions in the space of Nash equilibria.\nWe observe in Figure 4 that for \u03b1> 0.4, the average number of replicas of Nash 
equilibria gets closer with payments to that of the social optimum than it does without.\nWe observe in Figure 5 that more replicas are placed with payments than without when α is between 0.7 and 1.3, the only range of significant undersupply in the power-law case.\nThe results confirm that payments give servers incentive to replicate the object and this leads to better equilibria.\n6.\nDISCUSSION AND FUTURE WORK\nWe suggest several interesting extensions and directions.\nOne extension is to consider multiple objects in the capacitated caching game, in which servers have capacity limits when placing objects.\nSince caching one object affects the ability to cache another, there is no separability of a multi-object game into multiple single-object games.\nAs studied in [12], one way to formulate this problem is to find the best response of a server by solving a knapsack problem and to compute Nash equilibria.\nIn our analyses, we assume that all nodes have the same demand.\nHowever, nodes could have different demands depending on the object.\nWe intend to examine the effects of heterogeneous demands (or heterogeneous placement costs) analytically.\nWe also want to look at the following "aggregation effect".\nSuppose there are n−1 clustered nodes at distance α−1 from a node hosting a replica.\nAll nodes have demands of one.\nIn that case, the price of anarchy is O(n).\nHowever, if we aggregate the n−1 nodes into one node with demand n−1, the price of anarchy becomes O(1), since α would have to be greater than (n−1)(α−1) for only one replica to be placed.\nSuch aggregation can reduce the inefficiency of Nash equilibria.\nWe intend to compute the bounds of the price of anarchy under different underlying topologies such as random graphs or growth-restricted metrics.\nWe want to investigate whether there are certain distance constraints that guarantee O(1) price of anarchy.\nIn addition, we want to run large-scale simulations to observe the change in the
price of anarchy as the network size increases.\nAnother extension is to consider server congestion.\nSuppose the distance is the network distance plus γ × (number of accesses), where γ is the extra delay incurred when an additional server accesses the replica.\nThen, when α > γ, it can be shown that PoA is bounded by α/γ.\nAs γ increases, the price of anarchy bound decreases, since the load of accesses is balanced across servers.\nWhile exploring the caching problem, we made several observations that seem counterintuitive.\nFirst, the PoA in the payment game can be worse than the PoA in the basic game.\nAnother observation we made was that the number of replicas in a Nash equilibrium can be more than the number of replicas in the social optimum even without payments.\nFor example, a graph with diameter slightly more than α may have a Nash equilibrium configuration with two replicas at the two ends.\nHowever, the social optimum may place one replica at the center.\nWe leave the investigation of more examples as an open issue.\n7.\nCONCLUSIONS\nIn this work we introduce a novel non-cooperative game model to characterize the caching problem among selfish servers without any central coordination.\nWe show that pure strategy Nash equilibria exist in the game and that the price of anarchy can be O(n) in general, where n is the number of servers, due to undersupply problems.\nWith specific topologies, we show that the price of anarchy can have tighter bounds.\nMore importantly, with payments, servers are incentivized to replicate and the optimistic price of anarchy is always one.\nNon-cooperative caching is a more realistic model than cooperative caching in the competitive Internet, hence this work is an important step toward viable federated caching systems.\n8.\nACKNOWLEDGMENTS\nWe thank Kunal Talwar for enlightening discussions regarding this work.\n9.\nREFERENCES\nAPPENDIX A.
ANALYZING SPECIFIC TOPOLOGIES\nWe now analyze the price of anarchy (PoA) for the basic game with specific underlying topologies and show that PoA can have better bounds.\nWe look at the complete graph, star, line, and D-dimensional grid.\nIn all these topologies, we set the distance between two directly connected nodes to one.\nWe describe the case where α > 1, since PoA = 1 trivially when α < 1.\nFigure 6: Example where the payment game has a Nash equilibrium which is worse than any Nash equilibrium in the basic game.\nThe unlabeled distances between the nodes in the cluster are all 1.\nThe thresholds of white nodes are all α and the thresholds of dark nodes are all α/4.\nThe two dark nodes replicate the object in this payment game Nash equilibrium.\nFor a complete graph, PoA = 1, and for a star, PoA < 2.\nFor a complete graph, when α > 1, both Nash equilibria and social optima place one replica at one server, so PoA = 1.\nFor a star, when 1 < α < 2, the worst-case Nash equilibrium places replicas at all leaf nodes.\nHowever, the social optimum places one replica at the center node.\nTherefore, PoA = ((n−1)α + 1)/(α + (n−1)) < (2(n−1) + 1)/(1 + (n−1)) < 2.\nWhen α > 2, the worst-case Nash equilibrium places one replica at a leaf node and the other nodes access the remote replica, while the social optimum places one replica at the center, so PoA = (α + 1 + 2(n−2))/(α + (n−1)) < 2.\nFor a line, the price of anarchy is O(√n).\nWhen 1 < α < n, the worst-case Nash equilibrium places replicas every 2α so that there is no overlap between the areas covered by two adjacent servers that cache the object.\nThe social optimum places replicas at least every 2√(2α).\nThe placement of replicas for the social optimum is as follows.\nSuppose there are two replicas separated by distance d.
By placing an additional replica in the middle, we want the reduction in distance cost to be at least α.\nThe distance reduction is d/2 + 2[((d/2 − 1) − 1) + ((d/2 − 2) − 2) + ... + ((d/2 − d/4) − d/4)] > d²/8, so in an optimal configuration d should be at most 2√(2α).\nTherefore, the distance between adjacent replicas in the social optimum is at most 2√(2α), giving C(SW) = Θ(αn) and C(SO) = Θ(n√α) in this regime, so PoA = O(√α) = O(√n).\nWhen α > n, the worst-case Nash equilibrium places one replica at a leaf node and C(SW) = α + (n−1)n/2.\nHowever, the social optimum still places replicas every Θ(√α).\nIf we view PoA as a continuous function of α and compute its derivative, the derivative becomes 0 when α is Θ(n²), which means the function decreases as α increases from n. Therefore, PoA is maximized when α is Θ(n), and PoA = Θ(√n).\nWhen α = Ω(n²), the social optimum also places only one replica, and PoA is trivially bounded by 2.\nThis result holds for the ring and it can be generalized to the D-dimensional grid.\nAs the dimension of the grid increases, the distance reduction from an additional replica placement becomes Ω(d^(D+1)), where d is the distance between two adjacent replicas.\nTherefore, PoA = O(n^(1/(D+1))).\nB. PAYMENT CAN DO WORSE\nConsider the network in Figure 6 where α > 1 + α/3.\nAny Nash equilibrium in the basic game model would have exactly two replicas - one in the left cluster, and one in the right.\nIt is easy to verify that the worst placement (in terms of social cost) of two replicas occurs when they are placed at nodes A and B.\nThis placement can be achieved as a Nash equilibrium in the payment game, but not in the basic game since A and B are a distance 3α/4 apart.\nC.
NASH DYNAMICS PROTOCOLS\nThe simulator initializes the game according to the given parameters and a random initial strategy profile and then iterates through rounds.\nInitially the order of player actions is chosen randomly.\nIn each round, each server performs the Nash dynamics protocol that adjusts its strategies greedily in the chosen order.\nWhen a round passes without any server changing its strategy, the simulation ends and a Nash equilibrium is reached.\nIn the basic game, we pick a random initial subset of servers to replicate the object as shown in Algorithm 1.\nAfter the initialization, each player runs the move selection procedure described in Algorithm 2 (in Algorithms 2 and 4, Costnow represents the current cost for node i).\nThis procedure chooses greedily between replication and non-replication.\nIt is not hard to see that this Nash dynamics protocol converges in two rounds.\nIn the payment game, we pick a random initial subset of servers to replicate the object by setting their thresholds to 0.\nIn addition, we initialize a second random subset of servers to replicate the object with payments from other servers.\nThe details are shown in Algorithm 3.\nAfter the initialization, each player runs the move selection procedure described in Algorithm 4.\nThis procedure chooses greedily between replication and accessing a remote replica, with the possibilities of receiving and making payments, respectively.\nIn the protocol, each node increases its threshold value by incr if it does not replicate the object.\nBy this ramp-up procedure, the cost of replicating an object is shared fairly among the nodes that access a replica from a server that does cache.\nIf incr is small, cost is shared more fairly, and the game tends to reach equilibria that encourage more servers to store replicas, though convergence takes longer.\nIf incr is large, the protocol converges quickly, but it may miss efficient equilibria.\nIn the simulations we set incr to 0.1.\nMost of
our simulation runs converged, but there were a very few cases where the simulation did not converge due to cycles of dynamics.\nFigure 7: An example where the Nash dynamics protocol does not converge in the payment game.\nThe protocol does not guarantee convergence within a certain number of rounds like the protocol for the basic game does.\nWe provide an example graph and an initial condition such that the Nash dynamics protocol does not converge in the payment game if started from this initial condition.\nThe graph is represented by a shortest path metric on the network shown in Figure 7.\nIn the starting configuration, only A replicates the object, and a pays it an amount α/3 to do so.\nThe thresholds for A, B and C are α/3 each, and the thresholds for a, b and c are 2α/3.\nIt is not hard to verify that the Nash dynamics protocol will never converge if we start with this condition.\nThe Nash dynamics protocol for the payment game needs further investigation.\nThe dynamics protocol for the payment game should avoid cycles of actions to achieve stabilization.\nFinding a self-stabilizing dynamics protocol is an interesting problem.\nIn addition, a fixed value of incr cannot adapt to changing environments.\nA small value of incr can lead to efficient equilibria, but it can take a long time to converge.\nAn important area for future research is looking at adaptively changing incr."} {"id":"J-62","title":"Weak Monotonicity Suffices for Truthfulness on Convex Domains","abstract":"Weak monotonicity is a simple
necessary condition for a social choice function to be implementable by a truthful mechanism. Roberts [10] showed that it is sufficient for all social choice functions whose domain is unrestricted. Lavi, Mu'alem and Nisan [6] proved the sufficiency of weak monotonicity for functions over order-based domains and Gui, Muller and Vohra [5] proved sufficiency for order-based domains with range constraints and for domains defined by other special types of linear inequality constraints. Here we show the more general result, conjectured by Lavi, Mu'alem and Nisan [6], that weak monotonicity is sufficient for functions defined on any convex domain.","lvl-1":"Weak Monotonicity Suffices for Truthfulness on Convex Domains Michael Saks ∗ Dept. of Mathematics Rutgers University 110 Frelinghuysen Road Piscataway, NJ, 08854 saks@math.rutgers.edu Lan Yu † Dept. of Computer Science Rutgers University 110 Frelinghuysen Road Piscataway, NJ, 08854 lanyu@paul.rutgers.edu ABSTRACT Weak monotonicity is a simple necessary condition for a social choice function to be implementable by a truthful mechanism.\nRoberts [10] showed that it is sufficient for all social choice functions whose domain is unrestricted.\nLavi, Mu'alem and Nisan [6] proved the sufficiency of weak monotonicity for functions over order-based domains and Gui, Muller and Vohra [5] proved sufficiency for order-based domains with range constraints and for domains defined by other special types of linear inequality constraints.\nHere we show the more general result, conjectured by Lavi, Mu'alem and Nisan [6], that weak monotonicity is sufficient for functions defined on any convex domain.\nCategories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Computers and Society]: Electronic Commerce-payment schemes General Terms Theory, Economics 1.\nINTRODUCTION Social choice theory centers around the general problem of selecting a single outcome out of a set A of alternative outcomes based
on the individual preferences of a set P of players.\nA method for aggregating player preferences to select one outcome is called a social choice function.\nIn this paper we assume that the range A is finite and that each player's preference is expressed by a valuation function which assigns to each possible outcome a real number representing the benefit the player derives from that outcome.\nThe ensemble of player valuation functions is viewed as a valuation matrix with rows indexed by players and columns by outcomes.\nA major difficulty connected with social choice functions is that players can not be required to tell the truth about their preferences.\nSince each player seeks to maximize his own benefit, he may find it in his interest to misrepresent his valuation function.\nAn important approach for dealing with this problem is to augment a given social choice function with a payment function, which assigns to each player a (positive or negative) payment as a function of all of the individual preferences.\nBy carefully choosing the payment function, one can hope to entice each player to tell the truth.\nA social choice function augmented with a payment function is called a mechanism¹ and the mechanism is said to implement the social choice function.\nA mechanism is truthful (or strategyproof, or said to have a dominant strategy) if each player's best strategy, knowing the preferences of the others, is always to declare his own true preferences.\nA social choice function is truthfully implementable, or truthful, if it has a truthful implementation.\n(The property of truthful implementability is sometimes called dominant strategy incentive compatibility.)\nThis framework leads naturally to the question: which social choice functions are truthful?\nThis question is of the following general type: given a class of functions (here, social choice functions) and a property that holds for some of them (here, truthfulness), characterize the property.\nThe definition of
the property itself provides a characterization, so what more is needed?\nHere are some useful notions of characterization: • Recognition algorithm.\nGive an algorithm which, given an appropriate representation of a function in the class, determines whether the function has the property.\n• Parametric representation.\nGive an explicit parametrized family of functions and show that each function in the family has the property, and that every function with the property is in the family.\n¹The usual definition of mechanism is more general than this (see [8] Chapter 23.C or [9]); the mechanisms we consider here are usually called direct revelation mechanisms.\nA third notion applies in the case of hereditary properties of functions.\nA function g is a subfunction of function f, or f contains g, if g is obtained by restricting the domain of f.\nA property P of functions is hereditary if it is preserved under taking subfunctions.\nTruthfulness is easily seen to be hereditary.\n• Sets of obstructions.\nFor a hereditary property P, a function g that does not have the property is an obstruction to the property in the sense that any function containing g doesn't have the property.\nAn obstruction is minimal if every proper subfunction has the property.\nA set of obstructions is complete if every function that does not have the property contains one of them as a subfunction.\nThe set of all functions that don't satisfy P is a complete (but trivial and uninteresting) set of obstructions; one seeks a set of small (ideally, minimal) obstructions.\nWe are not aware of any work on recognition algorithms for the property of truthfulness, but there are significant results concerning parametric representations and obstruction characterizations of truthfulness.\nIt turns out that the domain of the function, i.e., the set of allowed valuation matrices, is crucial.\nFor functions with unrestricted domain, i.e., whose domain is the set of all real matrices, there
are very good characterizations of truthfulness.\nFor general domains, however, the picture is far from complete.\nTypically, the domains of social choice functions are specified by a system of constraints.\nFor example, an order constraint requires that one specified entry in some row be larger than another in the same row, a range constraint places an upper or lower bound on an entry, and a zero constraint forces an entry to be 0.\nThese are all examples of linear inequality constraints on the matrix entries.\nBuilding on work of Roberts [10], Lavi, Mu'alem and Nisan [6] defined a condition called weak monotonicity (W-MON).\n(Independently, in the context of multi-unit auctions, Bikhchandani, Chatterji and Sen [3] identified the same condition and called it nondecreasing in marginal utilities (NDMU).)\nThe definition of W-MON can be formulated in terms of obstructions: for some specified simple set F of functions each having domains of size 2, a function satisfies W-MON if it contains no function from F.\nThe functions in F are not truthful, and therefore W-MON is a necessary condition for truthfulness.\nLavi, Mu'alem and Nisan [6] showed that W-MON is also sufficient for truthfulness for social choice functions whose domain is order-based, i.e., defined by order constraints and zero constraints, and Gui, Muller and Vohra [5] extended this to other domains.\nThe domain constraints considered in both papers are special cases of linear inequality constraints, and it is natural to ask whether W-MON is sufficient for any domain defined by such constraints.\nLavi, Mu'alem and Nisan [6] conjectured that W-MON suffices for convex domains.\nThe main result of this paper is an affirmative answer to this conjecture: Theorem 1.\nFor any social choice function having convex domain and finite range, weak monotonicity is necessary and sufficient for truthfulness.\nUsing the interpretation of weak monotonicity in terms of obstructions each having domain size 2, this provides
a complete set of minimal obstructions for truthfulness within the class of social choice functions with convex domains.\nThe two hypotheses on the social choice function, that the domain is convex and that the range is finite, can not be omitted as is shown by the examples given in section 7.\n1.1 Related Work There is a simple and natural parametrized set of truthful social choice functions called affine maximizers.\nRoberts [10] showed that for functions with unrestricted domain, every truthful function is an affine maximizer, thus providing a parametrized representation for truthful functions with unrestricted domain.\nThere are many known examples of truthful functions over restricted domains that are not affine maximizers (see [1], [2], [4], [6] and [7]).\nEach of these examples has a special structure and it seems plausible that there might be some mild restrictions on the class of all social choice functions such that all truthful functions obeying these restrictions are affine maximizers.\nLavi, Mu``alem and Nisan [6] obtained a result in this direction by showing that for order-based domains, under certain technical assumptions, every truthful social choice function is almost an affine maximizer.\nThere are a number of results about truthfulness that can be viewed as providing obstruction characterizations, although the notion of obstruction is not explicitly discussed.\nFor a player i, a set of valuation matrices is said to be i-local if all of the matrices in the set are identical except for row i. 
Call a social choice function $f$ $i$-local if its domain is $i$-local, and call it local if it is $i$-local for some $i$. The following easily proved fact is used extensively in the literature:

Proposition 2. The social choice function $f$ is truthful if and only if every local subfunction of $f$ is truthful.

This implies that the set of all local non-truthful functions comprises a complete set of obstructions for truthfulness. This set is much smaller than the set of all non-truthful functions, but is still far from a minimal set of obstructions. Rochet [11], Rozenshtrom [12] and Gui, Muller and Vohra [5] identified a necessary and sufficient condition for truthfulness (see lemma 3 below) called the nonnegative cycle property. This condition can be viewed as providing a minimal complete set of non-truthful functions. As is required by proposition 2, each function in the set is local. Furthermore, it is one-to-one; in particular, its domain has size at most the number of possible outcomes $|A|$. As this complete set of obstructions consists of minimal non-truthful functions, it provides the optimal obstruction characterization of non-truthfulness within the class of all social choice functions. By restricting attention to interesting subclasses of social choice functions, however, one may hope to get simpler sets of obstructions for truthfulness within that class. The condition of weak monotonicity mentioned earlier can be defined by a set of obstructions, each of which is a local function of domain size exactly 2. Thus the results of Lavi, Mu'alem and Nisan [6], and of Gui, Muller and Vohra [5], give a very simple set of obstructions for truthfulness within certain subclasses of social choice functions. Theorem 1 extends these results to a much larger subclass of functions.

1.2 Weak Monotonicity and the Nonnegative Cycle Property

By proposition 2, a function is truthful if and only if each of its local subfunctions is truthful. Therefore, to get a set of obstructions for truthfulness, it suffices to obtain such a set for local functions. The domain of an $i$-local function consists of matrices that are fixed on all rows but row $i$. Fix such a function $f$ and let $D \subseteq \mathbb{R}^A$ be the set of allowed choices for row $i$. Since $f$ depends only on row $i$ and row $i$ is chosen from $D$, we can view $f$ as a function from $D$ to $A$. Therefore $f$ is a social choice function having one player; we refer to such a function as a single player function.

Associated to any single player function $f$ with domain $D$ we define an edge-weighted directed graph $H_f$ whose vertex set is the image of $f$. For convenience, we assume that $f$ is surjective, so this image is $A$. For each $a, b \in A$ and $x \in f^{-1}(a)$ there is an edge $e_x(a, b)$ from $a$ to $b$ with weight $x(a) - x(b)$. The weight of a set of edges is the sum of the weights of the edges. We say that $f$ satisfies:

• the nonnegative cycle property if every directed cycle has nonnegative weight;

• the nonnegative two-cycle property if every directed cycle between two vertices has nonnegative weight.

We say a local function $g$ satisfies the nonnegative cycle property/nonnegative two-cycle property if its associated single player function $f$ does. The graph $H_f$ has a possibly infinite number of edges between any two vertices. We define $G_f$ to be the edge-weighted directed graph with exactly one edge from $a$ to $b$, whose weight $\delta_{ab}$ is the infimum (possibly $-\infty$) of all of the edge weights $e_x(a, b)$ for $x \in f^{-1}(a)$. It is easy to see that $H_f$ has the nonnegative cycle property/nonnegative two-cycle property if and only if $G_f$ does. $G_f$ is called the outcome graph of $f$. The weak monotonicity property mentioned earlier can be defined for arbitrary social choice functions by the condition that every local subfunction satisfies the nonnegative two-cycle property. The following result was obtained by Rochet [11] in a slightly different form and rediscovered by
Rozenshtrom [12] and Gui, Muller and Vohra [5]:

Lemma 3. A local social choice function is truthful if and only if it has the nonnegative cycle property.

Thus a social choice function is truthful if and only if every local subfunction satisfies the nonnegative cycle property. In light of this, theorem 1 follows from:

Theorem 4. For any surjective single player function $f : D \longrightarrow A$, where $D$ is a convex subset of $\mathbb{R}^A$ and $A$ is finite, the nonnegative two-cycle property implies the nonnegative cycle property.

This is the result we will prove.

1.3 Overview of the Proof of Theorem 4

Let $D \subseteq \mathbb{R}^A$ be convex and let $f : D \longrightarrow A$ be a single player function such that $G_f$ has no negative two-cycles. We want to conclude that $G_f$ has no negative cycles. For two vertices $a, b$, let $\delta^*_{ab}$ denote the minimum weight of any path from $a$ to $b$. Clearly $\delta^*_{ab} \le \delta_{ab}$. Our proof shows that the $\delta^*$-weight of every cycle is exactly 0, from which theorem 4 follows.

There seems to be no direct way to compute $\delta^*$, and so we proceed indirectly. Based on geometric considerations, we identify a subset of paths in $G_f$ called admissible paths and a subset of admissible paths called straight paths. We prove that for any two outcomes $a, b$ there is a straight path from $a$ to $b$ (lemma 8 and corollary 10), and that all straight paths from $a$ to $b$ have the same weight, which we denote $\rho_{ab}$ (theorem 12). We show that $\rho_{ab} \le \delta_{ab}$ (lemma 14) and that the $\rho$-weight of every cycle is 0. The key step of this proof is showing that the $\rho$-weight of every directed triangle is 0 (lemma 17). It turns out that $\rho$ is equal to $\delta^*$ (corollary 20), although this equality is not needed in the proof of theorem 4.

To expand on the above summary, we give the definitions of an admissible path and a straight path. These are somewhat technical and rely on the geometry of $f$. We first observe that, without loss of generality, we can assume that $D$ is (topologically) closed (section 2). In section 3, for each $a \in A$, we enlarge the set $f^{-1}(a)$ to a closed convex set $D_a \subseteq D$ in such a way that for $a, b \in A$ with $a \ne b$, $D_a$ and $D_b$ have disjoint interiors. We define an admissible path to be a sequence of outcomes $(a_1, \ldots, a_k)$ such that each of the sets $I_j = D_{a_j} \cap D_{a_{j+1}}$ is nonempty (section 4). An admissible path is straight if there is a straight line that meets one point from each of the sets $I_1, \ldots, I_{k-1}$, in order (section 5).

Finally, we mention how the hypotheses of convex domain and finite range are used in the proof. Both hypotheses are needed to show: (1) the existence of a straight path from $a$ to $b$ for all $a, b$ (lemma 8), and (2) that the $\rho$-weight of a directed triangle is 0 (lemma 17). The convex domain hypothesis is also needed for the convexity of the sets $D_a$ (section 3). The finite range hypothesis is also needed to reduce theorem 4 to the case that $D$ is closed (section 2) and to prove that every straight path from $a$ to $b$ has the same $\delta$-weight (theorem 12).

2. REDUCTION TO CLOSED DOMAIN

We first reduce the theorem to the case that $D$ is closed. Write $D^C$ for the closure of $D$. Since $A$ is finite, $D^C = \cup_{a \in A} (f^{-1}(a))^C$. Thus for each $v \in D^C - D$, there is an $a = a(v) \in A$ such that $v \in (f^{-1}(a))^C$. Extend $f$ to the function $g$ on $D^C$ by defining $g(v) = a(v)$ for $v \in D^C - D$ and $g(v) = f(v)$ for $v \in D$. It is easy to check that $\delta_{ab}(g) = \delta_{ab}(f)$ for all $a, b \in A$, and therefore it suffices to show that the nonnegative two-cycle property for $g$ implies the nonnegative cycle property for $g$. Henceforth we assume $D$ is convex and closed.

3. A DISSECTION OF THE DOMAIN

In this section, we construct a family of closed convex sets $\{D_a : a \in A\}$ with disjoint interiors whose union is $D$, satisfying $f^{-1}(a) \subseteq D_a$ for each $a \in A$.
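As a concrete illustration (ours, not the paper's), the outcome graph $G_f$ and the two properties above can be computed for a finitely sampled single player function. Estimating $\delta_{ab}$ as a minimum over finitely many sampled points of $f^{-1}(a)$ is an assumption of this sketch, since the true $\delta_{ab}$ is an infimum over the whole preimage; the path weights $\delta^*$ are computed by the standard Floyd-Warshall recurrence.

```python
def outcome_graph(samples):
    """Estimate delta[a][b] = inf of x(a) - x(b) over x in f^{-1}(a),
    using a finite sample of each preimage. `samples` maps each outcome
    to a list of valuation dicts x: outcome -> real."""
    A = list(samples)
    return {a: {b: min(x[a] - x[b] for x in samples[a])
                for b in A if b != a}
            for a in A}

def has_nonneg_two_cycles(delta):
    """W-MON for a single player function: every two-cycle of the
    outcome graph has nonnegative weight."""
    return all(delta[a][b] + delta[b][a] >= 0
               for a in delta for b in delta[a])

def min_path_weights(delta):
    """delta*[a][b]: minimum delta-weight over all (a, b)-paths,
    via Floyd-Warshall. Well defined when no negative cycle exists."""
    A = list(delta)
    d = {a: {b: (0 if a == b else delta[a][b]) for b in A} for a in A}
    for m in A:
        for a in A:
            for b in A:
                d[a][b] = min(d[a][b], d[a][m] + d[m][b])
    return d
```

For instance, sampling a two-outcome argmax-style function yields $\delta_{ab} = \delta_{ba} = 0$, so every two-cycle (and hence every cycle) has weight 0, consistent with truthfulness.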
Let $R_a = \{v : \forall b \in A,\ v(a) - v(b) \ge \delta_{ab}\}$. $R_a$ is a closed polyhedron containing $f^{-1}(a)$. The next proposition implies that any two of these polyhedra intersect only on their boundary.

Proposition 5. Let $a, b \in A$. If $v \in R_a \cap R_b$ then $v(a) - v(b) = \delta_{ab} = -\delta_{ba}$.

[Figure 1: A 2-dimensional domain with 5 outcomes, dissected into regions $D_a, D_b, D_c, D_d, D_e$, with labeled points $v, w, x, y, z, u, p$.]

Proof. $v \in R_a$ implies $v(a) - v(b) \ge \delta_{ab}$, and $v \in R_b$ implies $v(b) - v(a) \ge \delta_{ba}$, which, by the nonnegative two-cycle property, implies $v(a) - v(b) \le \delta_{ab}$. Thus $v(a) - v(b) = \delta_{ab}$, and by symmetry $v(b) - v(a) = \delta_{ba}$.

Finally, we restrict the collection of sets $\{R_a : a \in A\}$ to the domain $D$ by defining $D_a = R_a \cap D$ for each $a \in A$. Clearly, $D_a$ is closed and convex, and contains $f^{-1}(a)$. Therefore $\bigcup_{a \in A} D_a = D$. Also, by proposition 5, any point $v \in D_a \cap D_b$ satisfies $v(a) - v(b) = \delta_{ab} = -\delta_{ba}$.

4. PATHS AND D-SEQUENCES

A path of size $k$ is a sequence $\vec{a} = (a_1, \ldots, a_k)$ with each $a_i \in A$ (possibly with repetition). We call $\vec{a}$ an $(a_1, a_k)$-path. For a path $\vec{a}$, we write $|\vec{a}|$ for the size of $\vec{a}$. $\vec{a}$ is simple if the $a_i$'s are distinct. For $b, c \in A$ we write $P_{bc}$ for the set of $(b, c)$-paths and $SP_{bc}$ for the set of simple $(b, c)$-paths. The $\delta$-weight of a path $\vec{a}$ is defined by
$$\delta(\vec{a}) = \sum_{i=1}^{k-1} \delta_{a_i a_{i+1}}.$$

A D-sequence of order $k$ is a sequence $\vec{u} = (u_0, \ldots, u_k)$ with each $u_i \in D$ (possibly with repetition). We call $\vec{u}$ a $(u_0, u_k)$-sequence. For a D-sequence $\vec{u}$, we write $\mathrm{ord}(\vec{u})$ for the order of $\vec{u}$. For $v, w \in D$ we write $S^{vw}$ for the set of $(v, w)$-sequences.

A compatible pair is a pair $(\vec{a}, \vec{u})$ where $\vec{a}$ is a path and $\vec{u}$ is a D-sequence satisfying $\mathrm{ord}(\vec{u}) = |\vec{a}|$ and, for each $i \in [k]$, both $u_{i-1}$ and $u_i$ belong to $D_{a_i}$. We write $C(\vec{a})$ for the set of D-sequences $\vec{u}$ that are compatible with $\vec{a}$. We say that $\vec{a}$ is admissible if $C(\vec{a})$ is nonempty. For $\vec{u} \in C(\vec{a})$ we define
$$\Delta_{\vec{a}}(\vec{u}) = \sum_{i=1}^{|\vec{a}|-1} (u_i(a_i) - u_i(a_{i+1})).$$
For $v, w \in D$ and $b, c \in A$, we define $C^{vw}_{bc}$ to be the set of compatible pairs $(\vec{a}, \vec{u})$ such that $\vec{a} \in P_{bc}$ and $\vec{u} \in S^{vw}$.

To illustrate these definitions, figure 1 gives the dissection of a domain, a 2-dimensional plane, into five regions $D_a, D_b, D_c, D_d, D_e$. The D-sequence $(v, w, x, y, z)$ is compatible with both the path $(a, b, c, e)$ and the path $(a, b, d, e)$; the D-sequence $(v, w, u, y, z)$ is compatible with a unique path $(a, b, d, e)$; the D-sequence $(x, w, p, y, z)$ is compatible with a unique path $(b, a, d, e)$. Hence $(a, b, c, e)$, $(a, b, d, e)$ and $(b, a, d, e)$ are admissible paths. However, neither the path $(a, c, d)$ nor the path $(b, e)$ is admissible.

Proposition 6. For any compatible pair $(\vec{a}, \vec{u})$, $\Delta_{\vec{a}}(\vec{u}) = \delta(\vec{a})$.

Proof. Let $k = \mathrm{ord}(\vec{u}) = |\vec{a}|$. By the definition of a compatible pair, $u_i \in D_{a_i} \cap D_{a_{i+1}}$ for $i \in [k-1]$, so $u_i(a_i) - u_i(a_{i+1}) = \delta_{a_i a_{i+1}}$ by proposition 5. Therefore, $\Delta_{\vec{a}}(\vec{u}) = \sum_{i=1}^{k-1} (u_i(a_i) - u_i(a_{i+1})) = \sum_{i=1}^{k-1} \delta_{a_i a_{i+1}} =$
$\delta(\vec{a})$.

Lemma 7. Let $b, c \in A$ and let $\vec{a}, \vec{a}' \in P_{bc}$. If $C(\vec{a}) \cap C(\vec{a}') \ne \emptyset$ then $\delta(\vec{a}) = \delta(\vec{a}')$.

Proof. Let $\vec{u}$ be a D-sequence in $C(\vec{a}) \cap C(\vec{a}')$. Since, by proposition 6, $\delta(\vec{a}) = \Delta_{\vec{a}}(\vec{u})$ and $\delta(\vec{a}') = \Delta_{\vec{a}'}(\vec{u})$, it suffices to show $\Delta_{\vec{a}}(\vec{u}) = \Delta_{\vec{a}'}(\vec{u})$. Let $k = \mathrm{ord}(\vec{u}) = |\vec{a}| = |\vec{a}'|$. Since
$$\Delta_{\vec{a}}(\vec{u}) = \sum_{i=1}^{k-1} (u_i(a_i) - u_i(a_{i+1})) = u_1(a_1) + \sum_{i=2}^{k-1} (u_i(a_i) - u_{i-1}(a_i)) - u_{k-1}(a_k) = u_1(b) + \sum_{i=2}^{k-1} (u_i(a_i) - u_{i-1}(a_i)) - u_{k-1}(c),$$
we have
$$\Delta_{\vec{a}}(\vec{u}) - \Delta_{\vec{a}'}(\vec{u}) = \sum_{i=2}^{k-1} \big((u_i(a_i) - u_{i-1}(a_i)) - (u_i(a'_i) - u_{i-1}(a'_i))\big) = \sum_{i=2}^{k-1} \big((u_i(a_i) - u_i(a'_i)) - (u_{i-1}(a_i) - u_{i-1}(a'_i))\big).$$
Noticing that both $u_{i-1}$ and $u_i$ belong to $D_{a_i} \cap D_{a'_i}$, we have by proposition 5
$$u_{i-1}(a_i) - u_{i-1}(a'_i) = \delta_{a_i a'_i} = u_i(a_i) - u_i(a'_i).$$
Hence $\Delta_{\vec{a}}(\vec{u}) - \Delta_{\vec{a}'}(\vec{u}) = 0$.

5. LINEAR D-SEQUENCES AND STRAIGHT PATHS

For $v, w \in D$ we write $\overline{vw}$ for the (closed) line segment joining $v$ and $w$. A D-sequence $\vec{u}$ of order $k$ is linear provided that there is a sequence of real numbers $0 = \lambda_0 \le \lambda_1 \le \ldots$
$\le \lambda_k = 1$ such that $u_i = (1 - \lambda_i) u_0 + \lambda_i u_k$. In particular, each $u_i$ belongs to $\overline{u_0 u_k}$. For $v, w \in D$ we write $L^{vw}$ for the set of linear $(v, w)$-sequences. For $b, c \in A$ and $v, w \in D$ we write $LC^{vw}_{bc}$ for the set of compatible pairs $(\vec{a}, \vec{u})$ such that $\vec{a} \in P_{bc}$ and $\vec{u} \in L^{vw}$. For a path $\vec{a}$, we write $L(\vec{a})$ for the set of linear sequences compatible with $\vec{a}$. We say that $\vec{a}$ is straight if $L(\vec{a}) \ne \emptyset$.

For example, in figure 1, the D-sequence $(v, w, x, y, z)$ is linear, while $(v, w, u, y, z)$, $(x, w, p, y, z)$ and $(x, v, w, y, z)$ are not. Hence the paths $(a, b, c, e)$ and $(a, b, d, e)$ are both straight. However, the path $(b, a, d, e)$ is not straight.

Lemma 8. Let $b, c \in A$ and $v \in D_b$, $w \in D_c$. There is a simple path $\vec{a}$ and a D-sequence $\vec{u}$ such that $(\vec{a}, \vec{u}) \in LC^{vw}_{bc}$. Furthermore, for any such path $\vec{a}$, $\delta(\vec{a}) \le v(b) - v(c)$.

Proof. By the convexity of $D$, any sequence of points on $\overline{vw}$ is a D-sequence. If $b = c$, the singleton path $\vec{a} = (b)$ and the D-sequence $\vec{u} = (v, w)$ are obviously compatible, and $\delta(\vec{a}) = 0 = v(b) - v(c)$. So assume $b \ne c$. If $D_b \cap D_c \cap \overline{vw} \ne \emptyset$, we pick an arbitrary $x$ from this set and let $\vec{a} = (b, c) \in SP_{bc}$ and $\vec{u} = (v, x, w) \in L^{vw}$. Again it is easy to check the compatibility of $(\vec{a}, \vec{u})$. Since $v \in D_b$, $v(b) - v(c) \ge \delta_{bc} = \delta(\vec{a})$.

For the remaining case, $b \ne c$ and $D_b \cap D_c \cap \overline{vw} = \emptyset$, notice that $v \ne w$, since otherwise $v = w \in D_b \cap D_c \cap \overline{vw}$. So we can define $\lambda_x$, for every point $x$ on $\overline{vw}$, to be the unique number in $[0, 1]$ such that $x = (1 - \lambda_x) v + \lambda_x w$. For convenience, we write $x \le y$ for $\lambda_x \le \lambda_y$. Let $I_a = D_a \cap \overline{vw}$ for each $a \in A$. Since $D = \cup_{a \in A} D_a$, we have $\overline{vw} = \cup_{a \in A} I_a$. Moreover, by the convexity of $D_a$ and $\overline{vw}$, $I_a$ is a (possibly trivial) closed interval.

We begin by considering the case that $I_b$ and $I_c$ are each a single point, that is, $I_b = \{v\}$ and $I_c = \{w\}$. Let $S$ be a minimal subset of $A$ satisfying $\cup_{s \in S} I_s = \overline{vw}$. For each $s \in S$, $I_s$ is maximal, i.e., not contained in any other $I_t$ for $t \in S$. In particular, the intervals $\{I_s : s \in S\}$ have all left endpoints distinct and all right endpoints distinct, and the order of the left endpoints is the same as that of the right endpoints. Let $k = |S| + 2$ and index $S$ as $a_2, \ldots, a_{k-1}$ in the order defined by the right endpoints. Denote the interval $I_{a_i}$ by $[l_i, r_i]$. Thus $l_2 < l_3 < \ldots < l_{k-1}$ and $r_2 < r_3 < \ldots < r_{k-1}$, and the fact that these intervals cover $\overline{vw}$ implies $l_2 = v$, $r_{k-1} = w$ and, for all $2 \le i \le k-2$, $l_{i+1} \le r_i$, which further implies $l_i < r_i$.

Now we define the path $\vec{a} = (a_1, a_2, \ldots, a_{k-1}, a_k)$ with $a_1 = b$, $a_k = c$ and $a_2, a_3, \ldots, a_{k-1}$ as above. Define the linear D-sequence $\vec{u} = (u_0, u_1, \ldots, u_k)$ by $u_0 = u_1 = v$, $u_k = w$ and, for $2 \le i \le k-1$, $u_i = r_i$. It follows immediately that $(\vec{a}, \vec{u}) \in LC^{vw}_{bc}$. Neither $b$ nor $c$ is in $S$, since $l_b = r_b$ and $l_c = r_c$; thus $\vec{a}$ is simple. Finally, to show $\delta(\vec{a}) \le v(b) - v(c)$, we note
$$v(b) - v(c) = v(a_1) - v(a_k) = \sum_{i=1}^{k-1} (v(a_i) - v(a_{i+1}))$$
and
$$\delta(\vec{a}) = \Delta_{\vec{a}}(\vec{u}) = \sum_{i=1}^{k-1} (u_i(a_i) - u_i(a_{i+1})) = v(a_1) - v(a_2) + \sum_{i=2}^{k-1} (r_i(a_i) - r_i(a_{i+1})).$$
For two outcomes $d, e \in A$, let us define $f_{de}(z) = z(d) - z(e)$ for all $z \in D$. It suffices to show $f_{a_i a_{i+1}}(r_i) \le f_{a_i a_{i+1}}(v)$ for $2 \le i \le k-1$.

Fact 9. For $d, e \in A$, $f_{de}(z)$ is a linear function of $z$.
Furthermore, if $x \in D_d$ and $y \in D_e$ with $x \ne y$, then
$$f_{de}(x) = x(d) - x(e) \ge \delta_{de} \ge -\delta_{ed} \ge -(y(e) - y(d)) = f_{de}(y).$$
Therefore $f_{de}(z)$ is monotonically nonincreasing along the line $\overleftrightarrow{xy}$ as $z$ moves in the direction from $x$ to $y$.

Applying this fact with $d = a_i$, $e = a_{i+1}$, $x = l_i$ and $y = r_i$ gives the desired conclusion. This completes the proof for the case that $I_b = \{v\}$ and $I_c = \{w\}$.

For general $I_b, I_c$, we have $r_b < l_c$, since otherwise $D_b \cap D_c \cap \overline{vw} = I_b \cap I_c \ne \emptyset$. Let $v' = r_b$ and $w' = l_c$. Clearly we can apply the above conclusion to $v' \in D_b$, $w' \in D_c$ and get a compatible pair $(\vec{a}, \vec{u}') \in LC^{v'w'}_{bc}$ with $\vec{a}$ simple and $\delta(\vec{a}) \le v'(b) - v'(c)$. Define the linear D-sequence $\vec{u}$ by $u_0 = v$, $u_k = w$ and $u_i = u'_i$ for $i \in [k-1]$. $(\vec{a}, \vec{u}) \in LC^{vw}_{bc}$ is evident. Moreover, applying the above fact with $d = b$, $e = c$, $x = v'$ and $y = w'$, we get $v(b) - v(c) \ge v'(b) - v'(c) \ge \delta(\vec{a})$.

Corollary 10. For any $b, c \in A$ there is a straight $(b, c)$-path.

The main result of this section (theorem 12) says that for any $b, c \in A$, every straight $(b, c)$-path has the same $\delta$-weight. To prove this, we first fix $v \in D_b$ and $w \in D_c$ and show (lemma 11) that every straight $(b, c)$-path compatible with some linear $(v, w)$-sequence has the same $\delta$-weight $\rho_{bc}(v, w)$. We then show in theorem 12 that $\rho_{bc}(v, w)$ is the same for all choices of $v \in D_b$ and $w \in D_c$.

Lemma 11. For $b, c \in A$, there is a function $\rho_{bc} : D_b \times D_c \longrightarrow \mathbb{R}$ such that for any $(\vec{a}, \vec{u}) \in LC^{vw}_{bc}$, $\delta(\vec{a}) = \rho_{bc}(v, w)$.

Proof. Let $(\vec{a}, \vec{u}), (\vec{a}', \vec{u}') \in LC^{vw}_{bc}$. It suffices to show $\delta(\vec{a}) = \delta(\vec{a}')$. To do this we
construct a linear $(v, w)$-sequence $\vec{u}''$ and paths $\vec{a}^*, \vec{a}^{**} \in P_{bc}$, both compatible with $\vec{u}''$, satisfying $\delta(\vec{a}^*) = \delta(\vec{a})$ and $\delta(\vec{a}^{**}) = \delta(\vec{a}')$. Lemma 7 then implies $\delta(\vec{a}^*) = \delta(\vec{a}^{**})$, which will complete the proof.

Let $|\vec{a}| = \mathrm{ord}(\vec{u}) = k$ and $|\vec{a}'| = \mathrm{ord}(\vec{u}') = l$. We select $\vec{u}''$ to be any linear $(v, w)$-sequence $(u''_0, u''_1, \ldots, u''_t)$ such that $\vec{u}$ and $\vec{u}'$ are both subsequences of $\vec{u}''$, i.e., there are indices $0 = i_0 < i_1 < \cdots < i_k = t$ and $0 = j_0 < j_1 < \cdots < j_l = t$ such that $\vec{u} = (u''_{i_0}, u''_{i_1}, \ldots, u''_{i_k})$ and $\vec{u}' = (u''_{j_0}, u''_{j_1}, \ldots, u''_{j_l})$.

We now construct a $(b, c)$-path $\vec{a}^*$ compatible with $\vec{u}''$ such that $\delta(\vec{a}^*) = \delta(\vec{a})$. (An analogous construction gives $\vec{a}^{**}$ compatible with $\vec{u}''$ such that $\delta(\vec{a}^{**}) = \delta(\vec{a}')$.) This will complete the proof. $\vec{a}^*$ is defined as follows: for $1 \le j \le t$, $a^*_j = a_r$ where $r$ is the unique index satisfying $i_{r-1} < j \le i_r$. Since both $u''_{i_{r-1}} = u_{r-1}$ and $u''_{i_r} = u_r$ belong to $D_{a_r}$, we have $u''_j \in D_{a_r}$ for $i_{r-1} \le j \le i_r$ by the convexity of $D_{a_r}$. The compatibility of $(\vec{a}^*, \vec{u}'')$ follows immediately. Clearly $a^*_1 = a_1 = b$ and $a^*_t = a_k = c$, so $\vec{a}^* \in P_{bc}$. Furthermore, since $\delta_{a^*_j a^*_{j+1}} = \delta_{a_r a_r} = 0$ for each $r \in [k]$ and $i_{r-1} < j < i_r$,
$$\delta(\vec{a}^*) = \sum_{r=1}^{k-1} \delta_{a^*_{i_r} a^*_{i_r+1}} = \sum_{r=1}^{k-1} \delta_{a_r a_{r+1}} = \delta(\vec{a}).$$

We are now ready for the main theorem of the section:

Theorem 12. $\rho_{bc}$ is a
constant map on $D_b \times D_c$. Thus for any $b, c \in A$, every straight $(b, c)$-path has the same $\delta$-weight.

Proof. For a path $\vec{a}$, say that $(v, w)$ is compatible with $\vec{a}$ if there is a linear $(v, w)$-sequence compatible with $\vec{a}$. We write $CP(\vec{a})$ for the set of pairs $(v, w)$ compatible with $\vec{a}$. $\rho_{bc}$ is constant on $CP(\vec{a})$, because for each $(v, w) \in CP(\vec{a})$, $\rho_{bc}(v, w) = \delta(\vec{a})$. By lemma 8, we also have $\bigcup_{\vec{a} \in SP_{bc}} CP(\vec{a}) = D_b \times D_c$. Since $A$ is finite, $SP_{bc}$, the set of simple paths from $b$ to $c$, is finite as well.

Next we prove that for any path $\vec{a}$, $CP(\vec{a})$ is closed. Let $((v^n, w^n) : n \in \mathbb{N})$ be a convergent sequence in $CP(\vec{a})$ and let $(v, w)$ be its limit. We want to show that $(v, w) \in CP(\vec{a})$. For each $n \in \mathbb{N}$, since $(v^n, w^n) \in CP(\vec{a})$, there is a linear $(v^n, w^n)$-sequence $\vec{u}^n$ compatible with $\vec{a}$, i.e., there are $0 = \lambda^n_0 \le \lambda^n_1 \le \ldots \le \lambda^n_k = 1$ (where $k = |\vec{a}|$) such that $u^n_j = (1 - \lambda^n_j) v^n + \lambda^n_j w^n$ for $j = 0, 1, \ldots, k$. Since for each $n$ the vector $\lambda^n = (\lambda^n_0, \lambda^n_1, \ldots, \lambda^n_k)$ belongs to the closed bounded set $[0, 1]^{k+1}$, we can choose an infinite subset $I \subseteq \mathbb{N}$ such that the sequence $(\lambda^n : n \in I)$ converges. Let $\lambda = (\lambda_0, \lambda_1, \ldots, \lambda_k)$ be the limit. Clearly $0 = \lambda_0 \le \lambda_1 \le \cdots \le \lambda_k = 1$. Define the linear $(v, w)$-sequence $\vec{u}$ by $u_j = (1 - \lambda_j) v + \lambda_j w$ for $j = 0, 1, \ldots, k$. Then for each $j \in \{0, \ldots, k\}$, $u_j$ is the limit of the sequence $(u^n_j : n \in I)$. For $j > 0$, each $u^n_j$ belongs to the closed set $D_{a_j}$, so $u_j \in D_{a_j}$. Similarly, for $j < k$, each $u^n_j$ belongs to the closed set $D_{a_{j+1}}$, so $u_j \in D_{a_{j+1}}$. Hence $(\vec{a}, \vec{u})$ is compatible, implying $(v, w) \in CP(\vec{a})$.

Now we have $D_b \times D_c$ covered by finitely many closed subsets, on each of which $\rho_{bc}$ is constant. Suppose for contradiction that there are $(v, w), (v', w') \in D_b \times D_c$ such that $\rho_{bc}(v, w) \ne \rho_{bc}(v', w')$. By the convexity of $D_b$ and $D_c$, $L = \{((1 - \lambda) v + \lambda v', (1 - \lambda) w + \lambda w') : \lambda \in [0, 1]\}$ is a line segment in $D_b \times D_c$. Let $L_1 = \{(x, y) \in L : \rho_{bc}(x, y) = \rho_{bc}(v, w)\}$ and $L_2 = L - L_1$. Clearly $(v, w) \in L_1$ and $(v', w') \in L_2$. Let $P = \{\vec{a} \in SP_{bc} : \delta(\vec{a}) = \rho_{bc}(v, w)\}$. Then $L_1 = \big(\bigcup_{\vec{a} \in P} CP(\vec{a})\big) \cap L$ and $L_2 = \big(\bigcup_{\vec{a} \in SP_{bc} - P} CP(\vec{a})\big) \cap L$ are closed by the finiteness of $SP_{bc}$. This is a contradiction, since it is well known (and easy to prove) that a line segment cannot be expressed as the disjoint union of two nonempty closed sets.

Summarizing corollary 10, lemma 11 and theorem 12, we have:

Corollary 13. For any $b, c \in A$, there is a real number $\rho_{bc}$ with the property that (1) there is at least one straight $(b, c)$-path of $\delta$-weight $\rho_{bc}$, and (2) every straight $(b, c)$-path has $\delta$-weight $\rho_{bc}$.

6. PROOF OF THEOREM 4

Lemma 14. $\rho_{bc} \le \delta_{bc}$ for all $b, c \in A$.
Proof. For contradiction, suppose $\rho_{bc} - \delta_{bc} = \epsilon > 0$. By the definition of $\delta_{bc}$, there exists $v \in f^{-1}(b) \subseteq D_b$ with $v(b) - v(c) < \delta_{bc} + \epsilon = \rho_{bc}$. Pick an arbitrary $w \in D_c$. By lemma 8, there is a compatible pair $(\vec{a}, \vec{u}) \in LC^{vw}_{bc}$ with $\delta(\vec{a}) \le v(b) - v(c)$. Since $\vec{a}$ is a straight $(b, c)$-path, $\rho_{bc} = \delta(\vec{a}) \le v(b) - v(c)$, a contradiction.

Define another edge-weighted complete directed graph $G'_f$ on vertex set $A$, where the weight of arc $(a, b)$ is $\rho_{ab}$. Immediately from lemma 14, the weight of every directed cycle in $G_f$ is bounded below by its weight in $G'_f$. To prove theorem 4, it therefore suffices to show the zero cycle property of $G'_f$, i.e., that every directed cycle has weight zero. We begin by considering two-cycles.

Lemma 15. $\rho_{bc} + \rho_{cb} = 0$ for all $b, c \in A$.

Proof. Let $\vec{a}$ be a straight $(b, c)$-path compatible with a linear sequence $\vec{u}$. Let $\vec{a}'$ be the reverse of $\vec{a}$ and $\vec{u}'$ the reverse of $\vec{u}$. Obviously, $(\vec{a}', \vec{u}')$ is compatible as well, and thus $\vec{a}'$ is a straight $(c, b)$-path. Therefore,
$$\rho_{bc} + \rho_{cb} = \delta(\vec{a}) + \delta(\vec{a}') = \sum_{i=1}^{k-1} \delta_{a_i a_{i+1}} + \sum_{i=1}^{k-1} \delta_{a_{i+1} a_i} = \sum_{i=1}^{k-1} (\delta_{a_i a_{i+1}} + \delta_{a_{i+1} a_i}) = 0,$$
where the final equality uses proposition 5.

Next, for three-cycles, we first consider those arising from collinear triples.

Lemma 16. If there are collinear points $u \in D_a$, $v \in D_b$, $w \in D_c$ ($a, b, c \in A$), then $\rho_{ab} + \rho_{bc} + \rho_{ca} = 0$.

Proof. First, we prove the case where $v$ is between $u$ and $w$.
From lemma 8, there are compatible pairs $(\vec{a}, \vec{u}) \in LC^{uv}_{ab}$ and $(\vec{a}', \vec{u}') \in LC^{vw}_{bc}$. Let $|\vec{a}| = \mathrm{ord}(\vec{u}) = k$ and $|\vec{a}'| = \mathrm{ord}(\vec{u}') = l$. We paste $\vec{a}$ and $\vec{a}'$ together as $\vec{a}'' = (a = a_1, a_2, \ldots, a_{k-1}, a_k, a'_1, \ldots, a'_l = c)$, and $\vec{u}$ and $\vec{u}'$ as $\vec{u}'' = (u = u_0, u_1, \ldots, u_k = v = u'_0, u'_1, \ldots, u'_l = w)$. Clearly $(\vec{a}'', \vec{u}'') \in LC^{uw}_{ac}$ and
$$\delta(\vec{a}'') = \sum_{i=1}^{k-1} \delta_{a_i a_{i+1}} + \delta_{a_k a'_1} + \sum_{i=1}^{l-1} \delta_{a'_i a'_{i+1}} = \delta(\vec{a}) + \delta_{bb} + \delta(\vec{a}') = \delta(\vec{a}) + \delta(\vec{a}').$$
Therefore $\rho_{ac} = \delta(\vec{a}'') = \delta(\vec{a}) + \delta(\vec{a}') = \rho_{ab} + \rho_{bc}$. Moreover, $\rho_{ac} = -\rho_{ca}$ by lemma 15, so we get $\rho_{ab} + \rho_{bc} + \rho_{ca} = 0$.

Now suppose $w$ is between $u$ and $v$. By the above argument, we have $\rho_{ac} + \rho_{cb} + \rho_{ba} = 0$, and by lemma 15, $\rho_{ab} + \rho_{bc} + \rho_{ca} = -\rho_{ba} - \rho_{cb} - \rho_{ac} = 0$. The case that $u$ is between $v$ and $w$ is similar.

Now we are ready for the zero three-cycle property:

Lemma 17. $\rho_{ab} + \rho_{bc} + \rho_{ca} = 0$ for all $a, b, c \in A$.
Proof. Let $S = \{(a, b, c) : \rho_{ab} + \rho_{bc} + \rho_{ca} \ne 0\}$ and, for contradiction, suppose $S \ne \emptyset$. Note that $S$ is finite. For each $a \in A$, choose $v_a \in D_a$ arbitrarily, and let $T$ be the convex hull of $\{v_a : a \in A\}$. For each $(a, b, c) \in S$, let $R_{abc} = (D_a \times D_b \times D_c) \cap T^3$. Clearly, each $R_{abc}$ is nonempty and compact. Moreover, by lemma 16, no $(u, v, w) \in R_{abc}$ is collinear. Define $f : D^3 \to \mathbb{R}$ by $f(u, v, w) = |v - u| + |w - v| + |u - w|$. For $(a, b, c) \in S$, the restriction of $f$ to the compact set $R_{abc}$ attains a minimum $m(a, b, c)$ at some point $(u, v, w) \in R_{abc}$ by the continuity of $f$; i.e., there exists a triangle $\triangle uvw$ of minimum perimeter within $T$ with $u \in D_a$, $v \in D_b$, $w \in D_c$. Choose $(a^*, b^*, c^*) \in S$ so that $m(a^*, b^*, c^*)$ is minimum, and let $(u^*, v^*, w^*) \in R_{a^*b^*c^*}$ be a triple achieving it. Pick an arbitrary point $p$ in the interior of $\triangle u^* v^* w^*$. By the convexity of the domain $D$, there is $d \in A$ such that $p \in D_d$. Consider the triangles $\triangle u^* p w^*$, $\triangle w^* p v^*$ and $\triangle v^* p u^*$. Since each of them has perimeter less than that of $\triangle u^* v^* w^*$, and all three triangles are contained in $T$, the minimality of $m(a^*, b^*, c^*)$ implies $(a^*, d, c^*), (c^*, d, b^*), (b^*, d, a^*) \notin S$.
Thus
$$\rho_{a^*d} + \rho_{dc^*} + \rho_{c^*a^*} = 0, \qquad \rho_{c^*d} + \rho_{db^*} + \rho_{b^*c^*} = 0, \qquad \rho_{b^*d} + \rho_{da^*} + \rho_{a^*b^*} = 0.$$
Summing the three equalities,
$$(\rho_{a^*d} + \rho_{da^*}) + (\rho_{dc^*} + \rho_{c^*d}) + (\rho_{db^*} + \rho_{b^*d}) + (\rho_{c^*a^*} + \rho_{b^*c^*} + \rho_{a^*b^*}) = 0,$$
which, since each of the first three parenthesized sums vanishes by lemma 15, yields $\rho_{a^*b^*} + \rho_{b^*c^*} + \rho_{c^*a^*} = 0$, contradicting $(a^*, b^*, c^*) \in S$.

With the zero two-cycle and three-cycle properties, the zero cycle property of $G'_f$, the graph with arc weights $\rho$, is immediate. As noted earlier, this completes the proof of theorem 4.

Theorem 18. Every directed cycle of $G'_f$ has weight zero.

Proof. Clearly, the zero two-cycle and three-cycle properties imply the triangle equality $\rho_{ab} + \rho_{bc} = \rho_{ac}$ for all $a, b, c \in A$. For a directed cycle $C = a_1 a_2 \ldots a_k a_1$, inductively applying the triangle equality gives $\sum_{i=1}^{k-1} \rho_{a_i a_{i+1}} = \rho_{a_1 a_k}$. Therefore, the weight of $C$ is
$$\sum_{i=1}^{k-1} \rho_{a_i a_{i+1}} + \rho_{a_k a_1} = \rho_{a_1 a_k} + \rho_{a_k a_1} = 0.$$

As final remarks, we note that our result implies the following strengthenings of theorem 12:

Corollary 19. For any $b, c \in A$, every admissible $(b, c)$-path has the same $\delta$-weight $\rho_{bc}$.

Proof. First notice that for any $b, c \in A$, if $D_b \cap D_c \ne \emptyset$, then $\delta_{bc} = \rho_{bc}$. To see this, pick $v \in D_b \cap D_c$ arbitrarily. Obviously, the path $\vec{a} = (b, c)$ is compatible with the linear sequence $\vec{u} = (v, v, v)$ and is thus a straight $(b, c)$-path. Hence $\rho_{bc} = \delta(\vec{a}) = \delta_{bc}$. Now for any $b, c \in A$ and any $(b, c)$-path $\vec{a}$ with $C(\vec{a}) \ne \emptyset$, let $\vec{u} \in C(\vec{a})$. Since $u_i \in D_{a_i} \cap D_{a_{i+1}}$ for $i \in [|\vec{a}| - 1]$,
$$\delta(\vec{a}) = \sum_{i=1}^{|\vec{a}|-1} \delta_{a_i a_{i+1}} = \sum_{i=1}^{|\vec{a}|-1} \rho_{a_i a_{i+1}},$$
which, by theorem 18, equals
\u2212\u03c1a|\u2212\u2192a |a1 = \u03c1a1a|\u2212\u2192a | = \u03c1bc.\nCorollary 20.\nFor any b, c \u2208 A, \u03c1bc is equal to \u03b4\u2217 bc, the minimum \u03b4-weight over all (b, c)-paths.\nProof.\nClearly \u03c1bc \u2265 \u03b4\u2217 bc by corollary 13.\nOn the other hand, for every (b, c)-path \u2212\u2192a = (b = a1, a2, ... , ak = c), by lemma 14, \u03b4(\u2212\u2192a ) = k\u22121X i=1 \u03b4aiai+1 \u2265 k\u22121X i=1 \u03c1aiai+1 , which by theorem 18, = \u2212\u03c1aka1 = \u03c1a1ak = \u03c1bc.\nHence \u03c1bc \u2264 \u03b4\u2217 bc, which completes the proof.\n7.\nCOUNTEREXAMPLES TO STRONGER FORMS OF THEOREM 4 Theorem 4 applies to social choice functions with convex domain and finite range.\nWe now show that neither of these hypotheses can be omitted.\nOur examples are single player functions.\nThe first example illustrates that convexity can not be omitted.\nWe present an untruthful single player social choice function with three outcomes a, b, c satisfying W-MON on a path-connected but non-convex domain.\nThe domain is the boundary of a triangle whose vertices are x = (0, 1, \u22121), y = (\u22121, 0, 1) and z = (1, \u22121, 0).\nx and the open line segment zx is assigned outcome a, y and the open line segment xy is assigned outcome b, and z and the open line segment yz is assigned outcome c. 
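The edge weights of the outcome graph for this example can be checked numerically. The following minimal Python sketch (the sampling scheme and all names are our own illustration, not part of the paper) approximates each δpq = inf{v(p) − v(q) : f(v) = p} by sampling the boundary segments, and recovers δab = δbc = δca = −1 and δba = δcb = δac = 1.

```python
import itertools

# Vertices of the triangle; each point's coordinates are its valuations
# for the outcomes (a, b, c), indexed here as 0, 1, 2.
x, y, z = (0.0, 1.0, -1.0), (-1.0, 0.0, 1.0), (1.0, -1.0, 0.0)

def seg(p, q, n=1000):
    """Sample (1 - t)p + tq for t in (0, 1]: the open segment pq plus endpoint q."""
    return [tuple((1 - t) * pi + t * qi for pi, qi in zip(p, q))
            for t in (i / n for i in range(1, n + 1))]

# f assigns outcome a on segment zx (with x), b on xy (with y), c on yz (with z).
assigned = {0: seg(z, x), 1: seg(x, y), 2: seg(y, z)}

# delta[p][q] approximates inf { v(p) - v(q) : f(v) = p } over the samples.
delta = [[min(v[p] - v[q] for v in assigned[p]) for q in range(3)]
         for p in range(3)]

# Every two-cycle is nonnegative (W-MON holds), up to float rounding ...
for p, q in itertools.combinations(range(3), 2):
    assert delta[p][q] + delta[q][p] >= -1e-9

# ... yet the three-cycle a -> b -> c -> a is negative, so f is not truthful.
print(delta[0][1] + delta[1][2] + delta[2][0])  # -3.0
```

Each infimum is attained at a segment endpoint included in the sample (t = 1), so the printed cycle weight is exactly −3.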
Clearly, δab = −δba = δbc = −δcb = δca = −δac = −1, so every two-cycle has weight zero and W-MON (the nonnegative two-cycle property) holds. Since there is a negative cycle δab + δbc + δca = −3, by lemma 3 this is not a truthful choice function.

We now show that the hypothesis of finite range cannot be omitted. We construct a family of single player social choice functions, each having a convex domain and an infinite number of outcomes, and satisfying weak monotonicity but not truthfulness. Our examples will be specified by a positive integer n and an n × n matrix M satisfying the following properties:

(1) M is non-singular.
(2) M is positive semidefinite.
(3) There are distinct i1, i2, ..., ik ∈ [n] satisfying
(M(i1, i1) − M(i1, i2)) + · · · + (M(ik−1, ik−1) − M(ik−1, ik)) + (M(ik, ik) − M(ik, i1)) < 0.

Here is an example matrix with n = 3 and (i1, i2, i3) = (1, 2, 3):

M = [  0   1  −1 ]
    [ −1   0   1 ]
    [  1  −1   0 ]

Let e1, e2, ..., en denote the standard basis of R^n. Let Sn denote the convex hull of {e1, e2, ..., en}, which is the set of vectors in R^n with nonnegative coordinates that sum to 1. The range of our social choice function will be the set Sn and the domain D will be indexed by Sn; that is, D = {yλ : λ ∈ Sn}, where yλ is defined below. The function f maps yλ to λ. Next we specify yλ. By definition, D must be a set of functions from Sn to R.
For λ ∈ Sn, the domain element yλ : Sn → R is defined by yλ(α) = λᵀMα. The non-singularity of M guarantees that yλ ≠ yµ for λ ≠ µ ∈ Sn. It is easy to see that D is a convex subset of the set of all functions from Sn to R.

The outcome graph Gf is an infinite graph whose vertex set is the outcome set A = Sn. For outcomes λ, µ ∈ A, the edge weight δλµ is equal to

δλµ = inf{v(λ) − v(µ) : f(v) = λ} = yλ(λ) − yλ(µ) = λᵀMλ − λᵀMµ = λᵀM(λ − µ).

We claim that Gf satisfies the nonnegative two-cycle property (W-MON) but has a negative cycle (and hence is not truthful). For outcomes λ, µ ∈ A,

δλµ + δµλ = λᵀM(λ − µ) + µᵀM(µ − λ) = (λ − µ)ᵀM(λ − µ),

which is nonnegative since M is positive semidefinite. Hence the nonnegative two-cycle property holds.

Next we show that Gf has a negative cycle. Let i1, i2, ..., ik be a sequence of indices satisfying property (3) of M. We claim ei1 ei2 · · · eik ei1 is a negative cycle. Since δei ej = eiᵀM(ei − ej) = M(i, i) − M(i, j) for any i, j ∈ [n], the weight of the cycle is

(M(i1, i1) − M(i1, i2)) + · · · + (M(ik−1, ik−1) − M(ik−1, ik)) + (M(ik, ik) − M(ik, i1)) < 0,

which completes the proof.

Finally, we point out that the third property imposed on the matrix M has the following interpretation. Let R(M) = {r1, r2, ..., rn} be the set of row vectors of M and let hM be the single player social choice function with domain R(M) and range {1, 2, ..., n} mapping ri to i.
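Both claimed properties of the example matrix can be verified directly. A small self-contained Python check (our own illustration; the helper names are not from the paper) confirms the nonnegative two-cycle property via (λ − µ)ᵀM(λ − µ) ≥ 0 on random simplex points, and evaluates the weight of the cycle e1 e2 e3 e1.

```python
import random

# The example matrix from the text; it is antisymmetric, so x^T M x = 0 for all x.
M = [[0, 1, -1],
     [-1, 0, 1],
     [1, -1, 0]]

def quad(d):
    """Compute d^T M d."""
    return sum(d[i] * M[i][j] * d[j] for i in range(3) for j in range(3))

def simplex_point():
    """A random point of S3 (nonnegative coordinates summing to 1)."""
    u = sorted(random.random() for _ in range(2))
    return (u[0], u[1] - u[0], 1 - u[1])

# Nonnegative two-cycle property: delta_lm + delta_ml = (l - m)^T M (l - m) >= 0.
for _ in range(1000):
    l, m = simplex_point(), simplex_point()
    d = [li - mi for li, mi in zip(l, m)]
    assert quad(d) >= -1e-12  # zero up to float rounding

# Weight of the cycle e1 -> e2 -> e3 -> e1: sum of M(i, i) - M(i, j) over its edges.
cycle = [0, 1, 2]
w = sum(M[i][i] - M[i][j] for i, j in zip(cycle, cycle[1:] + cycle[:1]))
print(w)  # -3
```

Positive semidefiniteness holds here with equality throughout, which is exactly why the two-cycle bound gives no control over longer cycles.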
Property (3) is equivalent to the condition that the outcome graph GhM has a negative cycle. By lemma 3, this is equivalent to the condition that hM is untruthful.

8. FUTURE WORK

As stated in the introduction, the goal underlying the work in this paper is to obtain useful and general characterizations of truthfulness. Let us say that a set D of P × A real valuation matrices is a WM-domain if any social choice function on D satisfying weak monotonicity is truthful. In this paper, we showed that for finite A, any convex D is a WM-domain. Typically, the domains of social choice functions considered in mechanism design are convex, but there are interesting examples with non-convex domains, e.g., combinatorial auctions with unknown single-minded bidders. It is intriguing to find the most general conditions under which a set D of real matrices is a WM-domain. We believe that convexity is the main part of the story, i.e., a WM-domain is, after excluding some exceptional cases, essentially a convex set.

Turning to parametric representations, let us say a set D of P × A matrices is an AM-domain if any truthful social choice function with domain D is an affine maximizer. Roberts' theorem says that the unrestricted domain is an AM-domain. What are the most general conditions under which a set D of real matrices is an AM-domain?

Acknowledgments

We thank Ron Lavi for helpful discussions and the two anonymous referees for helpful comments.

9. REFERENCES

[1] A. Archer and E. Tardos. Truthful mechanisms for one-parameter agents. In IEEE Symposium on Foundations of Computer Science, pages 482-491, 2001.
[2] Y. Bartal, R. Gonen, and N. Nisan. Incentive compatible multi-unit combinatorial auctions. In TARK '03: Proceedings of the 9th Conference on Theoretical Aspects of Rationality and Knowledge, pages 72-87. ACM Press, 2003.
[3] S. Bikhchandani, S. Chatterji, and A. Sen.
Incentive compatibility in multi-unit auctions. Technical report, UCLA Department of Economics, December 2004.
[4] A. Goldberg, J. Hartline, A. Karlin, M. Saks, and A. Wright. Competitive auctions, 2004.
[5] H. Gui, R. Muller, and R. Vohra. Dominant strategy mechanisms with multidimensional types. Technical Report 047, Maastricht: METEOR, Maastricht Research School of Economics of Technology and Organization, 2004. Available at http://ideas.repec.org/p/dgr/umamet/2004047.html.
[6] R. Lavi, A. Mu'alem, and N. Nisan. Towards a characterization of truthful combinatorial auctions. In FOCS '03: Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, page 574. IEEE Computer Society, 2003.
[7] D. Lehmann, L. O'Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. J. ACM, 49(5):577-602, 2002.
[8] A. Mas-Colell, M. Whinston, and J. Green. Microeconomic Theory. Oxford University Press, 1995.
[9] N. Nisan. Algorithms for selfish agents. Lecture Notes in Computer Science, 1563:1-15, 1999.
[10] K. Roberts. The characterization of implementable choice rules. In Aggregation and Revelation of Preferences, J.-J. Laffont (ed.), North-Holland Publishing Company, 1979.
[11] J.-C. Rochet. A necessary and sufficient condition for rationalizability in a quasi-linear context. Journal of Mathematical Economics, 16:191-200, 1987.
[12] I. Rozenshtrom. Dominant strategy implementation with quasi-linear preferences. Master's thesis, Dept.
of Economics, The Hebrew University, Jerusalem, Israel, 1999.

Weak Monotonicity Suffices for Truthfulness on Convex Domains

* This work was supported in part by NSF grant CCR9988526.
† This work was supported in part by NSF grant CCR9988526 and DIMACS.

ABSTRACT

Weak monotonicity is a simple necessary condition for a social choice function to be implementable by a truthful mechanism. Roberts [10] showed that it is sufficient for all social choice functions whose domain is unrestricted. Lavi, Mu'alem and Nisan [6] proved the sufficiency of weak monotonicity for functions over order-based domains, and Gui, Muller and Vohra [5] proved sufficiency for order-based domains with range constraints and for domains defined by other special types of linear inequality constraints. Here we show the more general result, conjectured by Lavi, Mu'alem and Nisan [6], that weak monotonicity is sufficient for functions defined on any convex domain.

1. INTRODUCTION

Social choice theory centers around the general problem of selecting a single outcome out of a set A of alternative outcomes based on the individual preferences of a set P of players. A method for aggregating player preferences to select one outcome is called a social choice function. In this paper we assume that the range A is finite and that each player's preference is expressed by a valuation function which assigns to each possible outcome a real number representing the "benefit" the player derives from that outcome. The ensemble of player valuation functions is viewed as a valuation matrix with rows indexed by players and columns by outcomes.

A major difficulty connected with social choice functions is that players cannot be required to tell the truth about their preferences. Since each player seeks to maximize his own benefit, he may find it in his interest to misrepresent his valuation function. An important approach for dealing with this problem is to augment a given social choice
function with a payment function, which assigns to each player a (positive or negative) payment as a function of all of the individual preferences. By carefully choosing the payment function, one can hope to entice each player to tell the truth. A social choice function augmented with a payment function is called a mechanism, and the mechanism is said to implement the social choice function. A mechanism is truthful (or strategyproof, or has a dominant strategy) if each player's best strategy, knowing the preferences of the others, is always to declare his own true preferences. A social choice function is truthfully implementable, or truthful, if it has a truthful implementation. (The property of truthful implementability is sometimes called dominant strategy incentive compatibility.)

This framework leads naturally to the question: which social choice functions are truthful? This question is of the following general type: given a class of functions (here, social choice functions) and a property that holds for some of them (here, truthfulness), "characterize" the property. The definition of the property itself provides a characterization, so what more is needed? Here are some useful notions of characterization:

• Recognition algorithm. Give an algorithm which, given an appropriate representation of a function in the class, determines whether the function has the property.

• Parametric representation. Give an explicit parametrized family of functions and show that each function in the family has the property, and that every function with the property is in the family.

A third notion applies in the case of hereditary properties of functions. A function g is a subfunction of function f, or f contains g, if g is obtained by restricting the domain of f. A property P of functions is hereditary if it is preserved under taking subfunctions. Truthfulness is easily seen to be hereditary.

• Sets of obstructions. For a hereditary
property P, a function g that does not have the property is an obstruction to the property in the sense that any function containing g doesn't have the property. An obstruction is minimal if every proper subfunction has the property. A set of obstructions is complete if every function that does not have the property contains one of them as a subfunction. The set of all functions that don't satisfy P is a complete (but trivial and uninteresting) set of obstructions; one seeks a set of small (ideally, minimal) obstructions.

We are not aware of any work on recognition algorithms for the property of truthfulness, but there are significant results concerning parametric representations and obstruction characterizations of truthfulness. It turns out that the domain of the function, i.e., the set of allowed valuation matrices, is crucial. For functions with unrestricted domain, i.e., whose domain is the set of all real matrices, there are very good characterizations of truthfulness. For general domains, however, the picture is far from complete. Typically, the domains of social choice functions are specified by a system of constraints. For example, an order constraint requires that one specified entry in some row be larger than another in the same row, a range constraint places an upper or lower bound on an entry, and a zero constraint forces an entry to be 0. These are all examples of linear inequality constraints on the matrix entries.

Building on work of Roberts [10], Lavi, Mu'alem and Nisan [6] defined a condition called weak monotonicity (W-MON). (Independently, in the context of multi-unit auctions, Bikhchandani, Chatterji and Sen [3] identified the same condition and called it nondecreasing in marginal utilities (NDMU).) The definition of W-MON can be formulated in terms of obstructions: for some specified simple set F of functions each having domains of size 2, a function satisfies W-MON if it contains no function from F. The functions in F are not
truthful, and therefore W-MON is a necessary condition for truthfulness. Lavi, Mu'alem and Nisan [6] showed that W-MON is also sufficient for truthfulness for social choice functions whose domain is order-based, i.e., defined by order constraints and zero constraints, and Gui, Muller and Vohra [5] extended this to other domains. The domain constraints considered in both papers are special cases of linear inequality constraints, and it is natural to ask whether W-MON is sufficient for any domain defined by such constraints. Lavi, Mu'alem and Nisan [6] conjectured that W-MON suffices for convex domains. The main result of this paper is an affirmative answer to this conjecture:

THEOREM 1. Every social choice function with convex domain and finite range that satisfies weak monotonicity is truthful.

Using the interpretation of weak monotonicity in terms of obstructions each having domain size 2, this provides a complete set of minimal obstructions for truthfulness within the class of social choice functions with convex domains. The two hypotheses on the social choice function, that the domain is convex and that the range is finite, cannot be omitted, as is shown by the examples given in section 7.

1.1 Related Work

There is a simple and natural parametrized set of truthful social choice functions called affine maximizers. Roberts [10] showed that for functions with unrestricted domain, every truthful function is an affine maximizer, thus providing a parametrized representation for truthful functions with unrestricted domain. There are many known examples of truthful functions over restricted domains that are not affine maximizers (see [1], [2], [4], [6] and [7]). Each of these examples has a special structure and it seems plausible that there might be some mild restrictions on the class of all social choice functions such that all truthful functions obeying these restrictions are affine maximizers. Lavi, Mu'alem and Nisan [6] obtained a result in this direction by showing that for order-based domains, under certain technical assumptions, every truthful social choice function is
\"almost\" an affine maximizer.\nThere are a number of results about truthfulness that can be viewed as providing obstruction characterizations, although the notion of obstruction is not explicitly discussed.\nFor a player i, a set of valuation matrices is said to be i-local if all of the matrices in the set are identical except for row i. Call a social choice function i-local if its domain is ilocal and call it local if it is i-local for some i.\nThe following easily proved fact is used extensively in the literature:\nThis implies that the set of all local non-truthful functions comprises a complete set of obstructions for truthfulness.\nThis set is much smaller than the set of all non-truthful functions, but is still far from a minimal set of obstructions.\nRochet [11], Rozenshtrom [12] and Gui, Muller and Vohra [5] identified a necessary and sufficient condition for truthfulness (see lemma 3 below) called the nonnegative cycle property.\nThis condition can be viewed as providing a minimal complete set of non-truthful functions.\nAs is required by proposition 2, each function in the set is local.\nFurthermore it is one-to-one.\nIn particular its domain has size at most the number of possible outcomes | A |.\nAs this complete set of obstructions consists of minimal non-truthful functions, this provides the optimal obstruction characterization of non-truthful functions within the class of all social choice functions.\nBut by restricting attention to interesting subclasses of social choice functions, one may hope to get simpler sets of obstructions for truthfulness within that class.\nThe condition of weak monotonicity mentioned earlier can be defined by a set of obstructions, each of which is a local function of domain size exactly 2.\nThus the results of Lavi, Mu'alem and Nisan [6], and of Gui, Muller and Vohra [5] give a very simple set of obstructions for truthfulness within certain subclasses of social choice functions.\nTheorem 1 extends these results to a 
much larger subclass of functions.

1.2 Weak Monotonicity and the Nonnegative Cycle Property

By proposition 2, a function is truthful if and only if each of its local subfunctions is truthful. Therefore, to get a set of obstructions for truthfulness, it suffices to obtain such a set for local functions. The domain of an i-local function consists of matrices that are fixed on all rows but row i. Fix such a function f and let D ⊆ R^A be the set of allowed choices for row i. Since f depends only on row i and row i is chosen from D, we can view f as a function from D to A. Therefore, f is a social choice function having one player; we refer to such a function as a single player function.

Associated to any single player function f with domain D we define an edge-weighted directed graph Hf whose vertex set is the image of f. For convenience, we assume that f is surjective and so this image is A. For each a, b ∈ A and each x ∈ f⁻¹(a), there is an edge ex(a, b) from a to b with weight x(a) − x(b). The weight of a set of edges is just the sum of the weights of the edges. We say that f satisfies:

• the nonnegative cycle property if every directed cycle has nonnegative weight.

• the nonnegative two-cycle property if every directed cycle between two vertices has nonnegative weight.

We say a local function g satisfies the nonnegative cycle property / nonnegative two-cycle property if its associated single player function f does. The graph Hf has a possibly infinite number of edges between any two vertices. We define Gf to be the edge-weighted directed graph with exactly one edge from a to b, whose weight δab is the infimum (possibly −∞) of all of the edge weights ex(a, b) for x ∈ f⁻¹(a). It is easy to see that Hf has the nonnegative cycle property / nonnegative two-cycle property if and only if Gf does. Gf is called the outcome graph of f. The weak monotonicity property mentioned earlier can be defined for arbitrary
social choice functions by the condition that every local subfunction satisfies the nonnegative two-cycle property. The following result was obtained by Rochet [11] in a slightly different form and rediscovered by Rozenshtrom [12] and Gui, Muller and Vohra [5]:

LEMMA 3. A local social choice function is truthful if and only if it has the nonnegative cycle property.

Thus a social choice function is truthful if and only if every local subfunction satisfies the nonnegative cycle property. In light of this, theorem 1 follows from:

THEOREM 4. For any surjective single player function f : D → A, where D is a convex subset of R^A and A is finite, the nonnegative two-cycle property implies the nonnegative cycle property.

This is the result we will prove.

1.3 Overview of the Proof of Theorem 4

Let D ⊆ R^A be convex and let f : D → A be a single player function such that Gf has no negative two-cycles. We want to conclude that Gf has no negative cycles. For two vertices a, b, let δ∗ab denote the minimum weight of any path from a to b.
Clearly δ∗ab ≤ δab. Our proof shows that the δ∗-weight of every cycle is exactly 0, from which theorem 4 follows. There seems to be no direct way to compute δ∗, and so we proceed indirectly. Based on geometric considerations, we identify a subset of paths in Gf called admissible paths and a subset of admissible paths called straight paths. We prove that for any two outcomes a, b, there is a straight path from a to b (lemma 8 and corollary 10), and that all straight paths from a to b have the same weight, which we denote ρab (theorem 12). We show that ρab ≤ δab (lemma 14) and that the ρ-weight of every cycle is 0. The key step of this proof is showing that the ρ-weight of every directed triangle is 0 (lemma 17). It turns out that ρ is equal to δ∗ (corollary 20), although this equality is not needed in the proof of theorem 4.

To expand on the above summary, we give the definitions of an admissible path and a straight path. These are somewhat technical and rely on the geometry of f. We first observe that, without loss of generality, we can assume that D is (topologically) closed (section 2). In section 3, for each a ∈ A, we enlarge the set f⁻¹(a) to a closed convex set Da ⊆ D in such a way that for a, b ∈ A with a ≠ b, Da and Db have disjoint interiors. We define an admissible path to be a sequence of outcomes (a1, ..., ak) such that each of the sets Ij = Daj ∩ Daj+1 is nonempty (section 4). An admissible path is straight if there is a straight line that meets one point from each of the sets I1, ..., Ik−1 in order (section 5).

Finally, we mention how the hypotheses of convex domain and finite range are used in the proof. Both hypotheses are needed to show: (1) the existence of a straight path from a to b for all a, b (lemma 8); and (2) that the ρ-weight of a directed triangle is 0 (lemma 17). The convex domain hypothesis is also needed for the
convexity of the sets Da (section 3). The finite range hypothesis is also needed to reduce theorem 4 to the case that D is closed (section 2) and to prove that every straight path from a to b has the same δ-weight (theorem 12).

2. REDUCTION TO CLOSED DOMAIN

3. A DISSECTION OF THE DOMAIN

4. PATHS AND D-SEQUENCES

5. LINEAR D-SEQUENCES AND STRAIGHT PATHS

6. PROOF OF THEOREM 4

THEOREM 18. Every directed cycle of G̃f has weight zero.

7. COUNTEREXAMPLES TO STRONGER FORMS OF THEOREM 4

8. FUTURE WORK

As stated in the introduction, the goal underlying the work in this paper is to obtain useful and general characterizations of truthfulness. Let us say that a set D of P × A real valuation matrices is a WM-domain if any social choice function on D satisfying weak monotonicity is truthful. In this paper, we showed that for finite A, any convex D is a WM-domain. Typically, the domains of social choice functions considered in mechanism design are convex, but there are interesting examples with non-convex domains, e.g., combinatorial auctions with unknown single-minded bidders. It is intriguing to find the most general conditions under which a set D of real matrices is a WM-domain. We believe that convexity is the main part of the story, i.e., a WM-domain is, after excluding some exceptional cases, "essentially" a convex set.

Turning to parametric representations, let us say a set D of P × A matrices is an AM-domain if any truthful social choice function with domain D is an affine maximizer. Roberts' theorem says that the unrestricted domain is an AM-domain. What are the most general conditions under which a set D of real matrices is an AM-domain?
geometric considerations, we identify a subset of paths in Gf called admissible paths and a subset of admissible paths called straight paths.\nWe show that \u03c1ab \u2264 \u03b4ab (lemma 14) and that the \u03c1-weight of every cycle is 0.\nThe key step to this proof is showing that the \u03c1-weight of every directed triangle is 0 (lemma 17).\nTo expand on the above summary, we give the definitions of an admissible path and a straight path.\nFinally, we mention how the hypotheses of convex domain and finite range are used in the proof.\nBoth hypotheses are needed to show: (1) the existence of a straight path from a to b for all a, b (lemma 8).\n(2) that the \u03c1-weight of a directed triangle is 0 (lemma 17).\nThe convex domain hypothesis is also needed for the convexity of the sets Da (section 3).\n8.\nFUTURE WORK\nAs stated in the introduction, the goal underlying the work in this paper is to obtain useful and general characterizations of truthfulness.\nLet us say that a set D of P x A real valuation matrices is a WM-domain if any social choice function on D satisfying weak monotonicity is truthful.\nIn this paper, we showed that for finite A, any convex D is a WM-domain.\nTypically, the domains of social choice functions considered in mechanism design are convex, but there are interesting examples with non-convex domains, e.g., combinatorial auctions with unknown single-minded bidders.\nIt is intriguing to find the most general conditions under which a set D of real matrices is a WM-domain.\nTurning to parametric representations, let us say a set D of P x A matrices is an AM-domain if any truthful social choice function with domain D is an affine maximizer.\nRoberts' theorem says that the unrestricted domain is an AM-domain.\nWhat are the most general conditions under which a set D of real matrices is an AM-domain?","lvl-2":"Weak Monotonicity Suffices for Truthfulness on Convex Domains\nABSTRACT\nWeak monotonicity is a simple necessary condition for a social 
choice function to be implementable by a truthful mechanism. Roberts [10] showed that it is sufficient for all social choice functions whose domain is unrestricted. Lavi, Mu'alem and Nisan [6] proved the sufficiency of weak monotonicity for functions over order-based domains, and Gui, Muller and Vohra [5] proved sufficiency for order-based domains with range constraints and for domains defined by other special types of linear inequality constraints. Here we show the more general result, conjectured by Lavi, Mu'alem and Nisan [6], that weak monotonicity is sufficient for functions defined on any convex domain.

1. INTRODUCTION

∗ This work was supported in part by NSF grant CCR9988526.
† This work was supported in part by NSF grant CCR9988526 and DIMACS.

Social choice theory centers around the general problem of selecting a single outcome out of a set A of alternative outcomes based on the individual preferences of a set P of players. A method for aggregating player preferences to select one outcome is called a social choice function. In this paper we assume that the range A is finite and that each player's preference is expressed by a valuation function which assigns to each possible outcome a real number representing the "benefit" the player derives from that outcome. The ensemble of player valuation functions is viewed as a valuation matrix with rows indexed by players and columns by outcomes.

A major difficulty connected with social choice functions is that players cannot be required to tell the truth about their preferences. Since each player seeks to maximize his own benefit, he may find it in his interest to misrepresent his valuation function. An important approach for dealing with this problem is to augment a given social choice function with a payment function, which assigns to each player a (positive or negative) payment as a function of all of the individual preferences. By carefully choosing the payment function, one can hope to
entice each player to tell the truth. A social choice function augmented with a payment function is called a mechanism, and the mechanism is said to implement the social choice function. A mechanism is truthful (also said to be strategyproof, or to have a dominant strategy) if each player's best strategy, knowing the preferences of the others, is always to declare his own true preferences. A social choice function is truthfully implementable, or truthful, if it has a truthful implementation. (The property of truthful implementability is sometimes called dominant strategy incentive compatibility.)

This framework leads naturally to the question: which social choice functions are truthful? This question is of the following general type: given a class of functions (here, social choice functions) and a property that holds for some of them (here, truthfulness), "characterize" the property. The definition of the property itself provides a characterization, so what more is needed? Here are some useful notions of characterization:

• Recognition algorithm. Give an algorithm which, given an appropriate representation of a function in the class, determines whether the function has the property.

• Parametric representation. Give an explicit parametrized family of functions and show that each function in the family has the property, and that every function with the property is in the family.

A third notion applies in the case of hereditary properties of functions. A function g is a subfunction of a function f, or f contains g, if g is obtained by restricting the domain of f. A property P of functions is hereditary if it is preserved under taking subfunctions. Truthfulness is easily seen to be hereditary.

• Sets of obstructions. For a hereditary property P, a function g that does not have the property is an obstruction to the property, in the sense that any function containing g doesn't have the property. An obstruction is minimal if every proper
subfunction has the property. A set of obstructions is complete if every function that does not have the property contains one of them as a subfunction. The set of all functions that don't satisfy P is a complete (but trivial and uninteresting) set of obstructions; one seeks a set of small (ideally, minimal) obstructions.

We are not aware of any work on recognition algorithms for the property of truthfulness, but there are significant results concerning parametric representations and obstruction characterizations of truthfulness. It turns out that the domain of the function, i.e., the set of allowed valuation matrices, is crucial. For functions with unrestricted domain, i.e., whose domain is the set of all real matrices, there are very good characterizations of truthfulness. For general domains, however, the picture is far from complete.

Typically, the domains of social choice functions are specified by a system of constraints. For example, an order constraint requires that one specified entry in some row be larger than another in the same row, a range constraint places an upper or lower bound on an entry, and a zero constraint forces an entry to be 0. These are all examples of linear inequality constraints on the matrix entries.

Building on work of Roberts [10], Lavi, Mu'alem and Nisan [6] defined a condition called weak monotonicity (W-MON). (Independently, in the context of multi-unit auctions, Bikhchandani, Chatterji and Sen [3] identified the same condition and called it nondecreasing in marginal utilities (NDMU).) The definition of W-MON can be formulated in terms of obstructions: for some specified simple set F of functions, each having domain of size 2, a function satisfies W-MON if it contains no function from F. The functions in F are not truthful, and therefore W-MON is a necessary condition for truthfulness. Lavi, Mu'alem and Nisan [6] showed that W-MON is also sufficient for truthfulness for social choice functions whose domain is
order-based, i.e., defined by order constraints and zero constraints, and Gui, Muller and Vohra [5] extended this to other domains. The domain constraints considered in both papers are special cases of linear inequality constraints, and it is natural to ask whether W-MON is sufficient for any domain defined by such constraints. Lavi, Mu'alem and Nisan [6] conjectured that W-MON suffices for convex domains. The main result of this paper is an affirmative answer to this conjecture:

THEOREM 1. Every social choice function with convex domain and finite range that satisfies weak monotonicity is truthful.

Using the interpretation of weak monotonicity in terms of obstructions each having domain size 2, this provides a complete set of minimal obstructions for truthfulness within the class of social choice functions with convex domains. The two hypotheses on the social choice function, that the domain is convex and that the range is finite, cannot be omitted, as is shown by the examples given in section 7.

1.1 Related Work

There is a simple and natural parametrized set of truthful social choice functions called affine maximizers. Roberts [10] showed that for functions with unrestricted domain, every truthful function is an affine maximizer, thus providing a parametrized representation for truthful functions with unrestricted domain. There are many known examples of truthful functions over restricted domains that are not affine maximizers (see [1], [2], [4], [6] and [7]). Each of these examples has a special structure, and it seems plausible that there might be some mild restrictions on the class of all social choice functions such that all truthful functions obeying these restrictions are affine maximizers. Lavi, Mu'alem and Nisan [6] obtained a result in this direction by showing that for order-based domains, under certain technical assumptions, every truthful social choice function is "almost" an affine maximizer.

There are a number of results about truthfulness that can be viewed as providing obstruction characterizations, although the notion of obstruction is not explicitly
discussed. For a player i, a set of valuation matrices is said to be i-local if all of the matrices in the set are identical except for row i. Call a social choice function i-local if its domain is i-local, and call it local if it is i-local for some i. The following easily proved fact is used extensively in the literature:

PROPOSITION 2. A social choice function is truthful if and only if each of its local subfunctions is truthful.

This implies that the set of all local non-truthful functions comprises a complete set of obstructions for truthfulness. This set is much smaller than the set of all non-truthful functions, but is still far from a minimal set of obstructions. Rochet [11], Rozenshtrom [12] and Gui, Muller and Vohra [5] identified a necessary and sufficient condition for truthfulness (see lemma 3 below) called the nonnegative cycle property. This condition can be viewed as providing a minimal complete set of non-truthful functions. As is required by proposition 2, each function in the set is local. Furthermore, it is one-to-one; in particular, its domain has size at most the number of possible outcomes |A|. As this complete set of obstructions consists of minimal non-truthful functions, this provides the optimal obstruction characterization of non-truthful functions within the class of all social choice functions. But by restricting attention to interesting subclasses of social choice functions, one may hope to get simpler sets of obstructions for truthfulness within that class. The condition of weak monotonicity mentioned earlier can be defined by a set of obstructions, each of which is a local function of domain size exactly 2. Thus the results of Lavi, Mu'alem and Nisan [6], and of Gui, Muller and Vohra [5], give a very simple set of obstructions for truthfulness within certain subclasses of social choice functions. Theorem 1 extends these results to a much larger subclass of functions.

1.2 Weak Monotonicity and the Nonnegative Cycle Property

By proposition 2, a function is truthful if and only if each of its local subfunctions is
truthful. Therefore, to get a set of obstructions for truthfulness, it suffices to obtain such a set for local functions. The domain of an i-local function consists of matrices that are fixed on all rows but row i. Fix such a function f and let D ⊆ ℝ^A be the set of allowed choices for row i. Since f depends only on row i and row i is chosen from D, we can view f as a function from D to A. Therefore, f is a social choice function having one player; we refer to such a function as a single player function.

Associated to any single player function f with domain D we define an edge-weighted directed graph Hf whose vertex set is the image of f. For convenience, we assume that f is surjective, so this image is A. For each a, b ∈ A and x ∈ f⁻¹(a) there is an edge ex(a, b) from a to b with weight x(a) − x(b). The weight of a set of edges is just the sum of the weights of the edges. We say that f satisfies:

• the nonnegative cycle property if every directed cycle has nonnegative weight;

• the nonnegative two-cycle property if every directed cycle between two vertices has nonnegative weight.

We say a local function g satisfies the nonnegative cycle property / nonnegative two-cycle property if its associated single player function f does.

The graph Hf has a possibly infinite number of edges between any two vertices. We define Gf to be the edge-weighted directed graph with exactly one edge from a to b, whose weight δab is the infimum (possibly −∞) of all of the edge weights ex(a, b) for x ∈ f⁻¹(a). It is easy to see that Hf has the nonnegative cycle property / nonnegative two-cycle property if and only if Gf does. Gf is called the outcome graph of f. The weak monotonicity property mentioned earlier can be defined for arbitrary social choice functions by the condition that every local subfunction satisfies the nonnegative two-cycle property.

The following result was obtained by Rochet [11] in a slightly different
form and rediscovered by Rozenshtrom [12] and Gui, Muller and Vohra [5]:

LEMMA 3. A local social choice function is truthful if and only if it has the nonnegative cycle property.

Thus a social choice function is truthful if and only if every local subfunction satisfies the nonnegative cycle property. In light of this, theorem 1 follows from:

THEOREM 4. For any surjective single player function f: D → A, where D is a convex subset of ℝ^A and A is finite, the nonnegative two-cycle property implies the nonnegative cycle property.

This is the result we will prove.

1.3 Overview of the Proof of Theorem 4

Let D ⊆ ℝ^A be convex and let f: D → A be a single player function such that Gf has no negative two-cycles. We want to conclude that Gf has no negative cycles. For two vertices a, b, let δ∗ab denote the minimum weight of any path from a to b. Clearly δ∗ab ≤ δab. Our proof shows that the δ∗-weight of every cycle is exactly 0, from which theorem 4 follows.

There seems to be no direct way to compute δ∗ and so we proceed indirectly. Based on geometric considerations, we identify a subset of paths in Gf called admissible paths and a subset of admissible paths called straight paths. We prove that for any two outcomes a, b, there is a straight path from a to b (lemma 8 and corollary 10), and that all straight paths from a to b have the same weight, which we denote ρab (theorem 12). We show that ρab ≤ δab (lemma 14) and that the ρ-weight of every cycle is 0. The key step of this proof is showing that the ρ-weight of every directed triangle is 0 (lemma 17). It turns out that ρ is equal to δ∗ (corollary 20), although this equality is not needed in the proof of theorem 4.

To expand on the above summary, we give the definitions of an admissible path and a straight path. These are somewhat technical and rely on the geometry of f. We first
observe that, without loss of generality, we can assume that D is (topologically) closed (section 2). In section 3, for each a ∈ A, we enlarge the set f⁻¹(a) to a closed convex set Da ⊆ D in such a way that for a, b ∈ A with a ≠ b, Da and Db have disjoint interiors. We define an admissible path to be a sequence of outcomes (a1, ..., ak) such that each of the sets Ij = Daj ∩ Daj+1 is nonempty (section 4). An admissible path is straight if there is a straight line that meets one point from each of the sets I1, ..., Ik−1 in order (section 5).

Finally, we mention how the hypotheses of convex domain and finite range are used in the proof. Both hypotheses are needed to show: (1) the existence of a straight path from a to b for all a, b (lemma 8); (2) that the ρ-weight of a directed triangle is 0 (lemma 17). The convex domain hypothesis is also needed for the convexity of the sets Da (section 3). The finite range hypothesis is also needed to reduce theorem 4 to the case that D is closed (section 2) and to prove that every straight path from a to b has the same δ-weight (theorem 12).

2. REDUCTION TO CLOSED DOMAIN

We first reduce the theorem to the case that D is closed. Write cl(D) for the closure of D. Since A is finite, cl(D) = ∪a∈A cl(f⁻¹(a)). Thus for each v ∈ cl(D) − D, there is an a = a(v) ∈ A such that v ∈ cl(f⁻¹(a)). Extend f to the function g on cl(D) by defining g(v) = a(v) for v ∈ cl(D) − D and g(v) = f(v) for v ∈ D. It is easy to check that δab(g) = δab(f) for all a, b ∈ A, and therefore it suffices to show that the nonnegative two-cycle property for g implies the nonnegative cycle property for g. Henceforth we assume D is convex and closed.

3. A DISSECTION OF THE DOMAIN

In this section, we construct a family of closed convex sets {Da : a ∈ A} with disjoint interiors whose union is D, satisfying f⁻¹(a) ⊆ Da for each a ∈ A.
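The dissection below is driven entirely by the edge weights δab of the outcome graph Gf from section 1.2. As a purely illustrative sketch, not taken from the paper (the finite sample, the argmax rule, and all function names are invented here), one can compute δ for a small single player function and test the two properties related by theorem 4:

```python
from itertools import permutations

def outcome_graph(points, f):
    """delta[a][b] = min over sampled x with f(x) = a of x[a] - x[b]."""
    outcomes = sorted({f(x) for x in points})
    delta = {a: {b: float("inf") for b in outcomes} for a in outcomes}
    for x in points:
        a = f(x)
        for b in outcomes:
            delta[a][b] = min(delta[a][b], x[a] - x[b])
    return outcomes, delta

def nonneg_two_cycles(outcomes, delta):
    # weak monotonicity: delta_ab + delta_ba >= 0 for every pair a != b
    return all(delta[a][b] + delta[b][a] >= 0
               for a in outcomes for b in outcomes if a != b)

def nonneg_cycles(outcomes, delta):
    # brute-force check of every simple directed cycle (fine for small |A|)
    for k in range(2, len(outcomes) + 1):
        for cyc in permutations(outcomes, k):
            if sum(delta[cyc[i]][cyc[(i + 1) % k]] for i in range(k)) < 0:
                return False
    return True

# a truthful single player rule: pick the outcome with the largest value
points = [{"a": x, "b": y} for x in range(4) for y in range(4)]
f = lambda v: max(v, key=v.get)
outcomes, delta = outcome_graph(points, f)
print(nonneg_two_cycles(outcomes, delta), nonneg_cycles(outcomes, delta))
# → True True
```

For a truthful rule such as this argmax choice both checks pass; substituting a rule that sometimes selects a dominated outcome can produce a negative two-cycle, i.e., a weak monotonicity violation.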
Let Ra = {v : ∀b ∈ A, v(a) − v(b) ≥ δab}. Ra is a closed polyhedron containing f⁻¹(a). The next proposition implies that any two of these polyhedra intersect only on their boundary.

Figure 1: A 2-dimensional domain with 5 outcomes.

PROPOSITION 5. For all a, b ∈ A, every v ∈ Ra ∩ Rb satisfies v(a) − v(b) = δab = −δba.

PROOF. v ∈ Ra implies v(a) − v(b) ≥ δab, and v ∈ Rb implies v(b) − v(a) ≥ δba, which, by the nonnegative two-cycle property, implies v(a) − v(b) ≤ δab. Thus v(a) − v(b) = δab, and by symmetry v(b) − v(a) = δba.

Finally, we restrict the collection of sets {Ra : a ∈ A} to the domain D by defining Da = Ra ∩ D for each a ∈ A. Clearly, Da is closed and convex, and contains f⁻¹(a). Therefore ∪a∈A Da = D. Also, by proposition 5, any point v in Da ∩ Db satisfies v(a) − v(b) = δab = −δba.

4. PATHS AND D-SEQUENCES

A path of size k is a sequence →a = (a1, ..., ak) with each ai ∈ A (possibly with repetition). We call →a an (a1, ak)-path. For a path →a, we write |→a| for the size of →a. →a is simple if the ai's are distinct. For b, c ∈ A we write Pbc for the set of (b, c)-paths and SPbc for the set of simple (b, c)-paths. The δ-weight of a path →a is defined by δ(→a) = Σ_{i=1}^{k−1} δ_{ai ai+1}.

A D-sequence of order k is a sequence →u = (u0, ..., uk) with each ui ∈ D (possibly with repetition). We call →u a (u0, uk)-sequence. For a D-sequence →u, we write ord(→u) for the order of →u. For v, w ∈ D we write Svw for the set of (v, w)-sequences.

A compatible pair is a pair (→a, →u) where →a is a path and →u is a D-sequence satisfying ord(→u) = |→a| = k and, for each i ∈ [k], both ui−1 and ui belong to Dai. We write C(→a) for the set of D-sequences →u that are compatible with →a. We say that →a is admissible if C(→a) is nonempty. For →u ∈ C(→a) we define Δ→a(→u) = Σ_{i=1}^{k−1} (ui(ai) − ui(ai+1)). Since each ui with 1 ≤ i ≤ k − 1 belongs to Dai ∩ Dai+1, proposition 5 gives:

PROPOSITION 6. For any path →a and any →u ∈ C(→a), δ(→a) = Δ→a(→u).

For v, w ∈ D and b, c ∈ A, we define C^vw_bc to be the set of compatible pairs (→a, →u) such that →a ∈ Pbc and →u ∈ Svw.

To illustrate these definitions, figure 1 gives the dissection of a domain, a 2-dimensional plane, into five regions Da, Db, Dc, Dd, De. D-sequence (v, w, x, y, z) is compatible with both path (a, b, c, e) and path (a, b, d, e); D-sequence (v, w, u, y, z) is compatible with a unique path (a, b, d, e). D-sequence (x, w, p, y, z) is compatible with a unique path (b, a, d, e). Hence (a, b, c, e), (a, b, d, e) and (b, a, d, e) are admissible paths. However, path (a, c, d) or path (b, e) is not admissible.

LEMMA 7. If →a, →a′ ∈ Pbc are both compatible with a common D-sequence →u, then δ(→a) = δ(→a′).

PROOF. Let →u be a D-sequence in C(→a) ∩ C(→a′). By proposition 6, δ(→a) = Δ→a(→u) and δ(→a′) = Δ→a′(→u), so it suffices to show Δ→a(→u) = Δ→a′(→u). Let k = ord(→u) = |→a| = |→a′|. Noticing that both ui−1 and ui belong to Dai ∩ Da′i, we have by proposition 5 that ui(ai) − ui(a′i) = δ_{ai a′i} and ui(ai+1) − ui(a′i+1) = δ_{ai+1 a′i+1} for each i. Summing over i, Δ→a(→u) − Δ→a′(→u) = δ_{a1 a′1} − δ_{ak a′k} = 0, since a1 = a′1 = b, ak = a′k = c, and δaa = 0.

5. LINEAR D-SEQUENCES AND STRAIGHT PATHS

For v, w ∈ D we write vw for the (closed)
line segment joining v and w. A D-sequence →u of order k is linear provided that there is a sequence of real numbers 0 = λ0 ≤ λ1 ≤ ... ≤ λk = 1 such that ui = (1 − λi)u0 + λiuk. In particular, each ui belongs to u0uk. For v, w ∈ D we write Lvw for the set of linear (v, w)-sequences. For b, c ∈ A and v, w ∈ D we write LC^vw_bc for the set of compatible pairs (→a, →u) such that →a ∈ Pbc and →u ∈ Lvw. For a path →a, we write L(→a) for the set of linear sequences compatible with →a. We say that →a is straight if L(→a) ≠ ∅.

For example, in figure 1, D-sequence (v, w, x, y, z) is linear, while (v, w, u, y, z), (x, w, p, y, z), and (x, v, w, y, z) are not. Hence paths (a, b, c, e) and (a, b, d, e) are both straight. However, path (b, a, d, e) is not straight.

LEMMA 8. Let b, c ∈ A and v ∈ Db, w ∈ Dc. There is a simple path →a and a D-sequence →u such that (→a, →u) ∈ LC^vw_bc. Furthermore, for any such path →a, δ(→a) ≤ v(b) − v(c).

PROOF. By the convexity of D, any sequence of points on vw is a D-sequence. If b = c, the singleton path →a = (b) and the D-sequence →u = (v, w) are obviously compatible, and δ(→a) = 0 = v(b) − v(c). So assume b ≠ c. If Db ∩ Dc ∩ vw ≠ ∅, we pick an arbitrary x from this set and let →a = (b, c) ∈ SPbc, →u = (v, x, w) ∈ Lvw. Again it is easy to check the compatibility of (→a, →u). Since v ∈ Db, v(b) − v(c) ≥ δbc = δ(→a).

For the remaining case, b ≠ c and Db ∩ Dc ∩ vw = ∅, notice v ≠ w, since otherwise v = w ∈ Db ∩ Dc ∩ vw. So we can define λx for every point x on vw to be the unique number in [0, 1] such that x = (1 − λx)v + λxw. For convenience, we write x ≤ y for λx ≤ λy. Let Ia = Da ∩ vw for each a ∈ A. Since D = ∪a∈A Da, we have vw = ∪a∈A Ia. Moreover, by the convexity of Da and vw, Ia is a (possibly trivial) closed interval.

We begin by considering the case that Ib and Ic are each a single point, that is, Ib = {v} and Ic = {w}. Let S be a minimal subset of A satisfying ∪s∈S Is = vw. For each s ∈ S, Is is maximal, i.e., not contained in any other It for t ∈ S. In particular, the intervals {Is : s ∈ S} have all left endpoints distinct and all right endpoints distinct, and the order of the left endpoints is the same as that of the right endpoints. Let k = |S| + 2 and index S as a2, ..., ak−1 in the order defined by the right endpoints. Denote the interval Iai by [li, ri]. Thus l2 < l3 < ... < lk−1, r2 < r3 < ... < rk−1, and the fact that these intervals cover vw implies l2 = v, rk−1 = w and, for all 2 ≤ i ≤ k − 2, li+1 ≤ ri, which further implies li < ri. Now we define the path →a = (a1, a2, ..., ak−1, ak) with a1 = b, ak = c and a2, a3, ..., ak−1 as above. Define the linear D-sequence →u = (u0, u1, ..., uk) by u0 = u1 = v, uk = w and, for 2 ≤ i ≤ k − 1, ui = ri. It follows immediately that (→a, →u) ∈ LC^vw_bc. Neither b nor c is in S, since lb = rb and lc = rc. Thus →a is simple. Finally, to show δ(→a) ≤ v(b) − v(c), we note the following fact: for d, e ∈ A with x ∈ Dd and y ∈ De, the affine function fde(z) = z(d) − z(e) satisfies fde(x) ≥ δde ≥ −δed ≥ fde(y), by the nonnegative two-cycle property. Therefore fde(z) is monotonically nonincreasing along the line xy as z moves in the direction from x to y.
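The monotonicity fact just stated can be spelled out in coordinates; the display below is a restatement for the reader, using only the definitions of Dd, De and the nonnegative two-cycle property, nothing beyond what the proof already assumes:

```latex
% For d, e in A, x in D_d, y in D_e, and z = (1-t)x + ty with t in [0,1]:
f_{de}(z) \;=\; z(d) - z(e) \;=\; (1-t)\,f_{de}(x) + t\,f_{de}(y)
\qquad\text{($f_{de}$ is affine in $t$),}
% x in D_d gives f_de(x) >= delta_de; y in D_e gives f_de(y) <= -delta_ed;
% the middle step is the nonnegative two-cycle property delta_de + delta_ed >= 0:
f_{de}(x) \;\ge\; \delta_{de} \;\ge\; -\delta_{ed} \;\ge\; f_{de}(y).
% Hence f_de is nonincreasing in t, i.e., as z moves from x toward y.
```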
Applying this fact with d = ai, e = ai+1, x = li and y = ri gives the desired conclusion. This completes the proof for the case that Ib = {v} and Ic = {w}.

For general Ib, Ic, we have rb < lc, since otherwise Db ∩ Dc ∩ vw = Ib ∩ Ic ≠ ∅. Let v′ = rb and w′ = lc. Clearly we can apply the above conclusion to v′ ∈ Db, w′ ∈ Dc and get a compatible pair (→a, →u′) ∈ LC^v′w′_bc with →a simple and δ(→a) ≤ v′(b) − v′(c). Define the linear D-sequence →u by u0 = v, uk = w and ui = u′i for i ∈ [k − 1]. (→a, →u) ∈ LC^vw_bc is evident. Moreover, applying the above fact with d = b, e = c, x = v and y = w, we get v(b) − v(c) ≥ v′(b) − v′(c) ≥ δ(→a).

The main result of this section (theorem 12) says that for any b, c ∈ A, every straight (b, c)-path has the same δ-weight. To prove this, we first fix v ∈ Db and w ∈ Dc and show (lemma 11) that every straight (b, c)-path compatible with some linear (v, w)-sequence has the same δ-weight ρbc(v, w). We then show in theorem 12 that ρbc(v, w) is the same for all choices of v ∈ Db and w ∈ Dc.

LEMMA 11. Fix b, c ∈ A, v ∈ Db and w ∈ Dc. All paths →a′ such that (→a′, →u′) ∈ LC^vw_bc for some →u′ ∈ Lvw have the same δ-weight, which we denote ρbc(v, w).

PROOF. Let (→a′, →u′) and (→a′′, →u′′) be two compatible pairs in LC^vw_bc. It suffices to show δ(→a′) = δ(→a′′). To do this we construct a linear (v, w)-sequence →u and paths →a∗, →a∗∗ ∈ Pbc, both compatible with →u, satisfying δ(→a∗) = δ(→a′) and δ(→a∗∗) = δ(→a′′). Lemma 7 implies δ(→a∗) = δ(→a∗∗), which will complete the proof.

Let |→a′| = ord(→u′) = k and |→a′′| = ord(→u′′) = l. We select →u to be any linear (v, w)-sequence (u0, u1, ..., ut) such that →u′ and →u′′ are both subsequences of →u, i.e., there are indices 0 = i0 < i1 < ··· < ik = t and 0 = j0 < j1 < ··· < jl = t such that →u′ = (ui0, ui1, ..., uik) and →u′′ = (uj0, uj1, ..., ujl). We now construct a (b, c)-path →a∗ compatible with →u such that δ(→a∗) = δ(→a′). (An analogous construction gives →a∗∗ compatible with →u such that δ(→a∗∗) = δ(→a′′).) This will complete the proof. →a∗ is defined as follows: for 1 ≤ j ≤ t, a∗j = a′r, where r is the unique index satisfying ir−1 < j ≤ ir. Since each uj with ir−1 ≤ j ≤ ir lies on the segment joining u′r−1 and u′r, both of which belong to the convex set Da′r, the pair (→a∗, →u) is compatible; and since consecutive repetitions of an outcome contribute δ-weight δaa = 0, we get δ(→a∗) = δ(→a′).

We are now ready for the main theorem of the section:

THEOREM 12. ρbc is a constant map on Db × Dc. Thus, for any b, c ∈ A, every straight (b, c)-path has the same δ-weight.

PROOF. For a path →a, say (v, w) is compatible with →a if there is a linear (v, w)-sequence compatible with →a. We write CP(→a) for the set of pairs (v, w) compatible with →a. ρbc is constant on CP(→a), because for each (v, w) ∈ CP(→a), ρbc(v, w) = δ(→a). By lemma 8, we also have ∪→a∈SPbc CP(→a) = Db × Dc. Since A is finite, SPbc, the set of simple paths from b to c, is finite as well.

Next we prove that for any path →a, CP(→a) is closed. Let ((vn, wn) : n ∈ ℕ) be a convergent sequence in CP(→a) and let (v, w) be the limit. We want to show that (v, w) ∈ CP(→a). For each n ∈ ℕ, since (vn, wn) ∈ CP(→a), there is a linear (vn, wn)-sequence →un compatible with
→a, i.e., there are 0 = λ^n_0 ≤ λ^n_1 ≤ ··· ≤ λ^n_k = 1 (k = |→a|) such that u^n_j = (1 − λ^n_j) v_n + λ^n_j w_n (j = 0, 1, ..., k).\nSince for each n, λ^n = (λ^n_0, λ^n_1, ..., λ^n_k) belongs to the closed bounded set [0, 1]^{k+1}, we can choose an infinite subset I ⊆ ℕ such that the sequence (λ^n : n ∈ I) converges.\nLet λ = (λ_0, λ_1, ..., λ_k) be the limit.\nClearly 0 = λ_0 ≤ λ_1 ≤ ··· ≤ λ_k = 1.\nDefine the linear (v, w)-sequence →u by u_j = (1 − λ_j) v + λ_j w (j = 0, 1, ..., k).\nThen for each j ∈ {0, ..., k}, u_j is the limit of the sequence (u^n_j : n ∈ I).\nFor j > 0, each u^n_j belongs to the closed set D_{a_j}, so u_j ∈ D_{a_j}.\nSimilarly, for j < k each u^n_j belongs to the closed set D_{a_{j+1}}, so u_j ∈ D_{a_{j+1}}.\nHence (→a, →u) is compatible, implying (v, w) ∈ CP(→a).\nNow we have D_b × D_c covered by finitely many closed subsets, on each of which ρ_bc is constant.\nSuppose for contradiction that there are (v, w), (v′, w′) ∈ D_b × D_c such that ρ_bc(v, w) ≠ ρ_bc(v′, w′).\nThe points of the line segment joining (v, w) and (v′, w′) at which ρ_bc agrees with ρ_bc(v, w), and those at which it does not, then form two disjoint nonempty sets covering the segment, and both are closed by the finiteness of the cover.\nThis is a contradiction, since it is well known (and easy to prove) that a line segment cannot be expressed as the disjoint union of two nonempty closed sets.\nSummarizing Corollary 10, Lemma 11 and Theorem 12, we have\n6.\nPROOF OF THEOREM 4\nLEMMA 14.\nρ_bc ≤ δ_bc for all b, c ∈ A.
PROOF.\nFor contradiction, suppose ρ_bc − δ_bc = ε > 0.\nBy the definition of δ_bc, there exists v ∈ f^{−1}(b) ⊆ D_b with v(b) − v(c) < δ_bc + ε = ρ_bc.\nPick an arbitrary w ∈ D_c.\nBy Lemma 8, there is a compatible pair (→a, →u) ∈ LC^{vw}_{bc} with δ(→a) ≤ v(b) − v(c).\nSince →a is a straight (b, c)-path, ρ_bc = δ(→a) ≤ v(b) − v(c), leading to a contradiction.\nDefine another edge-weighted complete directed graph G′_f on vertex set A, where the weight of arc (a, b) is ρ_ab.\nImmediately from Lemma 14, the weight of every directed cycle in G_f is bounded below by its weight in G′_f. To prove Theorem 4, it suffices to show the zero-cycle property of G′_f, i.e., that every directed cycle has weight zero.\nWe begin by considering two-cycles.\nLEMMA 15.\nρ_bc + ρ_cb = 0 for all b, c ∈ A. PROOF.\nLet →a be a straight (b, c)-path compatible with linear sequence →u.\nLet →a′ be the reverse of →a and →u′ the reverse of →u.\nObviously, (→a′, →u′) is compatible as well and thus →a′ is a straight (c, b)-path.\nTherefore, ρ_bc + ρ_cb = δ(→a) + δ(→a′) = 0, where the final equality uses Proposition 5.\nNext, for three-cycles, we first consider those compatible with linear triples.\nLEMMA 16.\nIf u ∈ D_a, v ∈ D_b and w ∈ D_c are collinear, then ρ_ab + ρ_bc + ρ_ca = 0.\nPROOF.\nFirst, we prove the case where v is between u and w. From Lemma 8, there are compatible pairs (→a′, →u′) ∈ LC^{uv}_{ab} and (→a″, →u″) ∈ LC^{vw}_{bc}, whose concatenation →a‴ is a straight (a, c)-path.\nTherefore, ρ_ac = δ(→a‴) = δ(→a′) + δ(→a″) = ρ_ab + ρ_bc.\nMoreover, ρ_ac = −ρ_ca by Lemma 15, so we get ρ_ab + ρ_bc + ρ_ca = 0.\nNow suppose w is between u and v.
By the above argument, we have ρ_ac + ρ_cb + ρ_ba = 0 and by Lemma 15, ρ_ab + ρ_bc + ρ_ca = −ρ_ba − ρ_cb − ρ_ac = 0.\nThe case that u is between v and w is similar.\nNow we are ready for the zero three-cycle property: LEMMA 17.\nρ_ab + ρ_bc + ρ_ca = 0 for all a, b, c ∈ A. PROOF.\nLet S = {(a, b, c) : ρ_ab + ρ_bc + ρ_ca ≠ 0} and, for contradiction, suppose S ≠ ∅.\nS is finite.\nFor each a ∈ A, choose v_a ∈ D_a arbitrarily and let T be the convex hull of {v_a : a ∈ A}.\nFor each (a, b, c) ∈ S, let R_abc = (D_a × D_b × D_c) ∩ T³.\nClearly, each R_abc is nonempty and compact.\nMoreover, by Lemma 16, no (u, v, w) ∈ R_abc is collinear.\nDefine f : D³ → ℝ by f(u, v, w) = |v − u| + |w − v| + |u − w|.\nFor (a, b, c) ∈ S, the restriction of f to the compact set R_abc attains a minimum m(a, b, c) at some point (u, v, w) ∈ R_abc by the continuity of f, i.e., there exists a triangle Δuvw of minimum perimeter within T with u ∈ D_a, v ∈ D_b, w ∈ D_c.\nChoose (a*, b*, c*) ∈ S so that m(a*, b*, c*) is minimum and let (u*, v*, w*) ∈ R_{a*b*c*} be a triple achieving it.\nPick an arbitrary point p in the interior of Δu*v*w*.\nBy the convexity of the domain D, there is d ∈ A such that p ∈ D_d.\nConsider the triangles Δu*pw*, Δw*pv* and Δv*pu*.\nSince each of them has perimeter less than that of Δu*v*w* and all three triangles are contained in T, by the minimality of Δu*v*w*, (a*, d, c*), (c*, d, b*), (b*, d, a*) ∉ S.
Thus\nρ_{a*d} + ρ_{dc*} + ρ_{c*a*} = 0, ρ_{c*d} + ρ_{db*} + ρ_{b*c*} = 0, and ρ_{b*d} + ρ_{da*} + ρ_{a*b*} = 0.\nSumming up the three equalities and cancelling the pairs ρ_{a*d} + ρ_{da*}, ρ_{b*d} + ρ_{db*} and ρ_{c*d} + ρ_{dc*} by Lemma 15, we get ρ_{a*b*} + ρ_{b*c*} + ρ_{c*a*} = 0, contradicting (a*, b*, c*) ∈ S.\nWith the zero two-cycle and three-cycle properties, the zero-cycle property of G′_f is immediate.\nAs noted earlier, this completes the proof of Theorem 4.\nTHEOREM 18.\nEvery directed cycle of G′_f has weight zero.\nPROOF.\nClearly, the zero two-cycle and three-cycle properties imply the triangle equality ρ_ab + ρ_bc = ρ_ac for all a, b, c ∈ A. For a directed cycle C = a_1 a_2 ... a_k a_1, by inductively applying the triangle equality, the weight of C is Σ_{i=1}^{k−1} ρ_{a_i a_{i+1}} + ρ_{a_k a_1} = ρ_{a_1 a_k} + ρ_{a_k a_1} = 0.\nAs final remarks, we note that our result implies the following strengthening of Theorem 12: COROLLARY 19.\nFor any b, c ∈ A, every admissible (b, c)-path has the same δ-weight ρ_bc.\nPROOF.\nFirst notice that for any b, c ∈ A, if D_b ∩ D_c ≠ ∅, then δ_bc = ρ_bc.\nTo see this, pick v ∈ D_b ∩ D_c arbitrarily.\nObviously, the path →a = (b, c) is compatible with the linear sequence →u = (v, v, v) and is thus a straight (b, c)-path.\nHence ρ_bc = δ(→a) = δ_bc.\nNow for any b, c ∈ A and any (b, c)-path →a with C(→a) ≠ ∅, let →u ∈ C(→a).\nSince u_i ∈ D_{a_i} ∩ D_{a_{i+1}} for i ∈ [|→a| − 1], each arc satisfies δ_{a_i a_{i+1}} = ρ_{a_i a_{i+1}}, so δ(→a) = Σ_i ρ_{a_i a_{i+1}}, which by Theorem 18 equals ρ_{a_1 a_k} = ρ_bc.\nThis completes the proof.\n7.\nCOUNTEREXAMPLES TO STRONGER FORMS OF THEOREM 4\nTheorem 4 applies to social choice functions with convex domain and finite range.\nWe now show that neither of these hypotheses can be omitted.\nOur examples are single-player functions.\nThe first example illustrates that convexity cannot be omitted.\nWe present an untruthful single-player social choice function with three outcomes a, b, c satisfying W-MON on a path-connected but non-convex domain.\nThe domain is the boundary of a triangle whose vertices are x = (0, 1, −1), y = (−1, 0, 1) and z = (1, −1, 0).\nx and the open line segment zx are assigned outcome a, y
and the open line segment xy are assigned outcome b, and z and the open line segment yz are assigned outcome c.\nClearly, δ_ab = −δ_ba = δ_bc = −δ_cb = δ_ca = −δ_ac = −1, so W-MON (the nonnegative two-cycle property) holds.\nSince there is a negative cycle, δ_ab + δ_bc + δ_ca = −3, by Lemma 3 this is not a truthful choice function.\nWe now show that the hypothesis of finite range cannot be omitted.\nWe construct a family of single-player social choice functions, each having a convex domain and an infinite number of outcomes, and satisfying weak monotonicity but not truthfulness.\nOur examples will be specified by a positive integer n and an n × n matrix M satisfying the following properties: (1) M is non-singular.\n(2) M is positive semidefinite.\n(3) There are distinct i_1, i_2, ..., i_k ∈ [n] satisfying Σ_{j=1}^{k} (M_{i_j i_j} − M_{i_j i_{j+1}}) < 0, where i_{k+1} = i_1.\nHere is an example matrix with n = 3 and (i_1, i_2, i_3) = (1, 2, 3):\nLet e_1, e_2, ..., e_n denote the standard basis of ℝ^n.\nLet S_n denote the convex hull of {e_1, e_2, ..., e_n}, which is the set of vectors in ℝ^n with nonnegative coordinates that sum to 1.\nThe range of our social choice function will be the set S_n and the domain D will be indexed by S_n, that is, D = {y_λ : λ ∈ S_n}, where y_λ is defined below.\nThe function f maps y_λ to λ.\nNext we specify y_λ.\nBy definition, D must be a set of functions from S_n to ℝ.
For λ ∈ S_n, the domain element y_λ : S_n → ℝ is defined by y_λ(α) = λᵀMα.\nThe nonsingularity of M guarantees that y_λ ≠ y_µ for λ ≠ µ ∈ S_n.\nIt is easy to see that D is a convex subset of the set of all functions from S_n to ℝ.\nThe outcome graph G_f is an infinite graph whose vertex set is the outcome set A = S_n.\nFor outcomes λ, µ ∈ A, the edge weight δ_λµ is equal to δ_λµ = inf{v(λ) − v(µ) : f(v) = λ} = y_λ(λ) − y_λ(µ) = λᵀMλ − λᵀMµ = λᵀM(λ − µ).\nWe claim that G_f satisfies the nonnegative two-cycle property (W-MON) but has a negative cycle (and hence is not truthful).\nFor outcomes λ, µ ∈ A,\nδ_λµ + δ_µλ = λᵀM(λ − µ) + µᵀM(µ − λ) = (λ − µ)ᵀM(λ − µ),\nwhich is nonnegative since M is positive semidefinite.\nHence the nonnegative two-cycle property holds.\nNext we show that G_f has a negative cycle.\nLet i_1, i_2, ..., i_k be a sequence of indices satisfying property 3 of M.\nWe claim e_{i_1} e_{i_2} ... e_{i_k} e_{i_1} is a negative cycle.\nSince δ_{e_i e_j} = e_iᵀM(e_i − e_j) = M_{ii} − M_{ij}, the weight of this cycle is Σ_{j=1}^{k} (M_{i_j i_j} − M_{i_j i_{j+1}}) < 0 by property 3, which completes the proof.\nFinally, we point out that the third property imposed on the matrix M has the following interpretation.\nLet R(M) = {r_1, r_2, ..., r_n} be the set of row vectors of M and let h_M be the single-player social choice function with domain R(M) and range {1, 2, ..., n} mapping r_i to i.
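The two claims above (W-MON holds, yet the construction is untruthful) can be checked numerically. The sketch below is illustrative only: the paper's own 3×3 example matrix is not reproduced in this text, so it uses an assumed matrix M = [[1,2,0],[0,1,2],[2,0,1]], which is non-singular (det = 9), positive semidefinite in the quadratic-form sense (xᵀMx = (x₁+x₂+x₃)²), and satisfies property 3 with (i₁,i₂,i₃) = (1,2,3).

```python
# Numerical sanity check of the finite-range counterexample, using an
# assumed 3x3 matrix (the paper's own example matrix is not reproduced here).
import random

M = [[1, 2, 0],
     [0, 1, 2],
     [2, 0, 1]]
n = 3

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def delta(lam, mu):
    # Edge weight of the outcome graph G_f: delta_{lam,mu} = lam^T M (lam - mu).
    d = [a - b for a, b in zip(lam, mu)]
    return dot(lam, [dot(row, d) for row in M])

# W-MON: delta_{lam,mu} + delta_{mu,lam} = (lam - mu)^T M (lam - mu) >= 0,
# spot-checked here on random points of the simplex S_n.
random.seed(1)
for _ in range(100):
    lam = [random.random() for _ in range(n)]
    mu = [random.random() for _ in range(n)]
    lam = [x / sum(lam) for x in lam]   # normalize onto the simplex S_n
    mu = [x / sum(mu) for x in mu]
    assert delta(lam, mu) + delta(mu, lam) >= -1e-9

# Negative cycle e_1 -> e_2 -> e_3 -> e_1 (property 3), so f is untruthful:
e = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
cycle_weight = delta(e[0], e[1]) + delta(e[1], e[2]) + delta(e[2], e[0])
print(cycle_weight)  # -3.0
```

Each cycle edge contributes M_{ii} − M_{i,i+1} = 1 − 2 = −1, so the cycle weight is −3; by Lemma 3, the nonnegative two-cycle property together with this negative cycle exhibits a weakly monotone but untruthful choice function.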
Property 3 is equivalent to the condition that the outcome graph G_{h_M} has a negative cycle.\nBy Lemma 3, this is equivalent to the condition that h_M is untruthful.\n8.\nFUTURE WORK\nAs stated in the introduction, the goal underlying the work in this paper is to obtain useful and general characterizations of truthfulness.\nLet us say that a set D of P × A real valuation matrices is a WM-domain if any social choice function on D satisfying weak monotonicity is truthful.\nIn this paper, we showed that for finite A, any convex D is a WM-domain.\nTypically, the domains of social choice functions considered in mechanism design are convex, but there are interesting examples with non-convex domains, e.g., combinatorial auctions with unknown single-minded bidders.\nIt is intriguing to find the most general conditions under which a set D of real matrices is a WM-domain.\nWe believe that convexity is the main part of the story, i.e., a WM-domain is, after excluding some exceptional cases, "essentially" a convex set.\nTurning to parametric representations, let us say a set D of P × A matrices is an AM-domain if any truthful social choice function with domain D is an affine maximizer.\nRoberts' theorem says that the unrestricted domain is an AM-domain.\nWhat are the most general conditions under which a set D of real matrices is an AM-domain?","keyphrases":["weak monoton","truth","truth","convex domain","social choic function","individu prefer","truth implement","recognit algorithm","nonneg cycl properti","affin maxim","non-truth function","domin strategi","mechan design","strategyproof"],"prmu":["P","P","P","P","P","U","R","U","U","U","M","U","M","U"]} {"id":"C-46","title":"TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs","abstract":"Archival storage of sensor data is necessary for applications that query, mine, and analyze such data for interesting features and trends.
We argue that existing storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks, where an application can comprise tens of tethered proxies, each managing tens to hundreds of untethered sensors. We present TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local archiving at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Interval Skip Graph, for efficiently supporting spatio-temporal and value queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarse-grain index. We implement TSAR in a two-tier sensor testbed comprising Stargate-based proxies and Mote-based sensors. 
Our experiments demonstrate the benefits and feasibility of using our energy-efficient storage architecture in multi-tier sensor networks.","lvl-1":"TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs \u2217 Peter Desnoyers, Deepak Ganesan, and Prashant Shenoy Department of Computer Science University of Massachusetts Amherst, MA 01003 pjd@cs.umass.edu, dganesan@cs.umass.edu, shenoy@cs.umass.edu ABSTRACT Archival storage of sensor data is necessary for applications that query, mine, and analyze such data for interesting features and trends.\nWe argue that existing storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks, where an application can comprise tens of tethered proxies, each managing tens to hundreds of untethered sensors.\nWe present TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local archiving at the sensors and distributed indexing at the proxies.\nAt the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Interval Skip Graph, for efficiently supporting spatio-temporal and value queries.\nAt the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarse-grain index.\nWe implement TSAR in a two-tier sensor testbed comprising Stargatebased proxies and Mote-based sensors.\nOur experiments demonstrate the benefits and feasibility of using our energy-efficient storage architecture in multi-tier sensor networks.\nCategories and Subject Descriptors: C.2.4 [Computer - Communication Networks]: Distributed Systems General Terms: Algorithms, performance, experimentation.\n1.\nIntroduction 1.1 Motivation Many different kinds of networked data-centric sensor applications have emerged in recent 
years.\nSensors in these applications sense the environment and generate data that must be processed, filtered, interpreted, and archived in order to provide a useful infrastructure to its users.\nTo achieve its goals, a typical sensor application needs access to both live and past sensor data.\nWhereas access to live data is necessary in monitoring and surveillance applications, access to past data is necessary for applications such as mining of sensor logs to detect unusual patterns, analysis of historical trends, and post-mortem analysis of particular events.\nArchival storage of past sensor data requires a storage system, the key attributes of which are: where the data is stored, whether it is indexed, and how the application can access this data in an energy-efficient manner with low latency.\nThere have been a spectrum of approaches for constructing sensor storage systems.\nIn the simplest, sensors stream data or events to a server for long-term archival storage [3], where the server often indexes the data to permit efficient access at a later time.\nSince sensors may be several hops from the nearest base station, network costs are incurred; however, once data is indexed and archived, subsequent data accesses can be handled locally at the server without incurring network overhead.\nIn this approach, the storage is centralized, reads are efficient and cheap, while writes are expensive.\nFurther, all data is propagated to the server, regardless of whether it is ever used by the application.\nAn alternate approach is to have each sensor store data or events locally (e.g., in flash memory), so that all writes are local and incur no communication overheads.\nA read request, such as whether an event was detected by a particular sensor, requires a message to be sent to the sensor for processing.\nMore complex read requests are handled by flooding.\nFor instance, determining if an intruder was detected over a particular time interval requires the request to be 
flooded to all sensors in the system.\nThus, in this approach, the storage is distributed, writes are local and inexpensive, while reads incur significant network overheads.\nRequests that require flooding, due to the lack of an index, are expensive and may waste precious sensor resources, even if no matching data is stored at those sensors.\nResearch efforts such as Directed Diffusion [17] have, however, attempted to reduce these read costs through intelligent message routing.\nBetween these two extremes lie a number of other sensor storage systems with different trade-offs, summarized in Table 1.\nThe geographic hash table (GHT) approach [24, 26] advocates the use of an in-network index to augment the fully distributed nature of sensor storage.\nIn this approach, each data item has a key associated with it, and a distributed or geographic hash table is used to map keys to nodes that store the corresponding data items.\nThus, writes cause data items to be sent to the hashed nodes and also trigger updates to the in-network hash table.\nA read request requires a lookup in the in-network hash table to locate the node that stores the data item; observe that the presence of an index eliminates the need for flooding in this approach.\nMost of these approaches assume a flat, homogeneous architecture in which every sensor node is energy-constrained.\nIn this paper, we propose a novel storage architecture called TSAR that reflects and exploits the multi-tier nature of emerging sensor networks, where the application comprises tens of tethered sensor proxies (or more), each controlling tens or hundreds of untethered sensors.\nTSAR is a component of our PRESTO [8] predictive storage architecture, which combines archival storage with caching and prediction.\nWe believe that a fundamentally different storage architecture is necessary to address the multi-tier nature of future sensor networks.\nSpecifically, the storage architecture needs to exploit the resource-rich nature
of proxies, while respecting resource constraints at the remote sensors.\nNo existing sensor storage architecture explicitly addresses this dichotomy in the resource capabilities of different tiers.\nAny sensor storage system should also carefully exploit current technology trends, which indicate that the capacities of flash memories continue to rise as per Moore``s Law, while their costs continue to plummet.\nThus it will soon be feasible to equip each sensor with 1 GB of flash storage for a few tens of dollars.\nAn even more compelling argument is the energy cost of flash storage, which can be as much as two orders of magnitude lower than that for communication.\nNewer NAND flash memories offer very low write and erase energy costs - our comparison of a 1GB Samsung NAND flash storage [16] and the Chipcon CC2420 802.15.4 wireless radio [4] in Section 6.2 indicates a 1:100 ratio in per-byte energy cost between the two devices, even before accounting for network protocol overheads.\nThese trends, together with the energy-constrained nature of untethered sensors, indicate that local storage offers a viable, energy-efficient alternative to communication in sensor networks.\nTSAR exploits these trends by storing data or events locally on the energy-efficient flash storage at each sensor.\nSensors send concise identifying information, which we term metadata, to a nearby proxy; depending on the representation used, this metadata may be an order of magnitude or more smaller than the data itself, imposing much lower communication costs.\nThe resource-rich proxies interact with one another to construct a distributed index of the metadata reported from all sensors, and thus an index of the associated data stored at the sensors.\nThis index provides a unified, logical view of the distributed data, and enables an application to query and read past data efficiently - the index is used to pinpoint all data that match a read request, followed by messages to retrieve that data 
from the corresponding sensors.\nIn-network index lookups are eliminated, reducing network overheads for read requests.\nThis separation of data, which is stored at the sensors, and the metadata, which is stored at the proxies, enables TSAR to reduce energy overheads at the sensors by leveraging resources at tethered proxies.\n1.2 Contributions This paper presents TSAR, a novel two-tier storage architecture for sensor networks.\nTo the best of our knowledge, this is the first sensor storage system that is explicitly tailored for emerging multi-tier sensor networks.\nOur design and implementation of TSAR has resulted in four contributions.\nFirst, at the core of the TSAR architecture is a novel distributed index structure based on interval skip graphs that we introduce in this paper.\nThis index structure can store coarse summaries of sensor data and organize them in an ordered manner to be easily searchable (TSAR stands for Tiered Storage ARchitecture for sensor networks).\nThis data structure has O(log n) expected search and update complexity.\nFurther, the index provides a logically unified view of all data in the system.\nSecond, at the sensor level, each sensor maintains a local archive that stores data on flash memory.\nOur storage architecture is fully stateless at each sensor from the perspective of the metadata index; all index structures are maintained at the resource-rich proxies, and only direct requests or simple queries on explicitly identified storage locations are sent to the sensors.\nStorage at the remote sensor is in effect treated as an appendage of the proxy, resulting in low implementation complexity, which makes it ideal for small, resource-constrained sensor platforms.\nFurther, the local store is optimized for time-series access to archived data, as is typical in many applications.\nEach sensor periodically sends a summary of its data to a proxy.\nTSAR employs a novel adaptive summarization technique that adapts the granularity of the data reported in each
summary to the ratio of false hits for application queries.\nMore fine grain summaries are sent whenever more false positives are observed, thereby balancing the energy cost of metadata updates and false positives.\nThird, we have implemented a prototype of TSAR on a multi-tier testbed comprising Stargate-based proxies and Mote-based sensors.\nOur implementation supports spatio-temporal, value, and rangebased queries on sensor data.\nFourth, we conduct a detailed experimental evaluation of TSAR using a combination of EmStar\/EmTOS [10] and our prototype.\nWhile our EmStar\/EmTOS experiments focus on the scalability of TSAR in larger settings, our prototype evaluation involves latency and energy measurements in a real setting.\nOur results demonstrate the logarithmic scaling property of the sparse skip graph and the low latency of end-to-end queries in a duty-cycled multi-hop network .\nThe remainder of this paper is structured as follows.\nSection 2 presents key design issues that guide our work.\nSection 3 and 4 present the proxy-level index and the local archive and summarization at a sensor, respectively.\nSection 5 discusses our prototype implementation, and Section 6 presents our experimental results.\nWe present related work in Section 7 and our conclusions in Section 8.\n2.\nDesign Considerations In this section, we first describe the various components of a multi-tier sensor network assumed in our work.\nWe then present a description of the expected usage models for this system, followed by several principles addressing these factors which guide the design of our storage system.\n2.1 System Model We envision a multi-tier sensor network comprising multiple tiers - a bottom tier of untethered remote sensor nodes, a middle tier of tethered sensor proxies, and an upper tier of applications and user terminals (see Figure 1).\nThe lowest tier is assumed to form a dense deployment of lowpower sensors.\nA canonical sensor node at this tier is equipped with 
low-power sensors, a micro-controller, and a radio as well as a significant amount of flash memory (e.g., 1GB).\nThe common constraint for this tier is energy, and the need for a long lifetime in spite of a finite energy budget.\nThe radio, processor, RAM, and flash memory all consume energy, which needs to be limited.\nIn general, we assume radio communication to be substantially more expensive than accesses to flash memory.\nThe middle tier consists of power-rich sensor proxies that have significant computation, memory and storage resources and can use these resources continuously.\nTable 1: Characteristics of sensor storage systems\nSystem | Data | Index | Reads | Writes | Order preserving\nCentralized store | Centralized | Centralized index | Handled at store | Send to store | Yes\nLocal sensor store | Fully distributed | No index | Flooding, diffusion | Local | No\nGHT\/DCS [24] | Fully distributed | In-network index | Hash to node | Send to hashed node | No\nTSAR\/PRESTO | Fully distributed | Distributed index at proxies | Proxy lookup + sensor query | Local plus index update | Yes\nFigure 1: Architecture of a multi-tier sensor network.\nIn urban environments, the proxy tier would comprise tethered base-station-class nodes (e.g., Crossbow Stargate), each with multiple radios: an 802.11 radio that connects it to a wireless mesh network and a low-power radio (e.g.
802.15.4) that connects it to the sensor nodes.\nIn remote sensing applications [10], this tier could comprise a similar Stargate node with a solar power cell.\nEach proxy is assumed to manage several tens to hundreds of lower-tier sensors in its vicinity.\nA typical sensor network deployment will contain multiple geographically distributed proxies.\nFor instance, in a building monitoring application, one sensor proxy might be placed per floor or hallway to monitor temperature, heat and light sensors in their vicinity.\nAt the highest tier of our infrastructure are applications that query the sensor network through a query interface[20].\nIn this work, we focus on applications that require access to past sensor data.\nTo support such queries, the system needs to archive data on a persistent store.\nOur goal is to design a storage system that exploits the relative abundance of resources at proxies to mask the scarcity of resources at the sensors.\n2.2 Usage Models The design of a storage system such as TSAR is affected by the queries that are likely to be posed to it.\nA large fraction of queries on sensor data can be expected to be spatio-temporal in nature.\nSensors provide information about the physical world; two key attributes of this information are when a particular event or activity occurred and where it occurred.\nSome instances of such queries include the time and location of target or intruder detections (e.g., security and monitoring applications), notifications of specific types of events such as pressure and humidity values exceeding a threshold (e.g., industrial applications), or simple data collection queries which request data from a particular time or location (e.g., weather or environment monitoring).\nExpected queries of such data include those requesting ranges of one or more attributes; for instance, a query for all image data from cameras within a specified geographic area for a certain period of time.\nIn addition, it is often desirable to 
support efficient access to data in a way that maintains spatial and temporal ordering.\nThere are several ways of supporting range queries, such as locality-preserving hashes such as are used in DIMS [18].\nHowever, the most straightforward mechanism, and one which naturally provides efficient ordered access, is via the use of order-preserving data structures.\nOrder-preserving structures such as the well-known B-Tree maintain relationships between indexed values and thus allow natural access to ranges, as well as predecessor and successor operations on their key values.\nApplications may also pose value-based queries that involve determining if a value v was observed at any sensor; the query returns a list of sensors and the times at which they observed this value.\nVariants of value queries involve restricting the query to a geographical region, or specifying a range (v1, v2) rather than a single value v. Value queries can be handled by indexing on the values reported in the summaries.\nSpecifically, if a sensor reports a numerical value, then the index is constructed on these values.\nA search involves finding matching values that are either contained in the search range (v1, v2) or match the search value v exactly.\nHybrid value and spatio-temporal queries are also possible.\nSuch queries specify a time interval, a value range and a spatial region and request all records that match these attributes - find all instances where the temperature exceeded 100o F at location R during the month of August.\nThese queries require an index on both time and value.\nIn TSAR our focus is on range queries on value or time, with planned extensions to include spatial scoping.\n2.3 Design Principles Our design of a sensor storage system for multi-tier networks is based on the following set of principles, which address the issues arising from the system and usage models above.\n\u2022 Principle 1: Store locally, access globally: Current technology allows local storage to be 
significantly more energy-efficient than network communication, while technology trends show no signs of erasing this gap in the near future.\nFor maximum network life a sensor storage system should leverage the flash memory on sensors to archive data locally, substituting cheap memory operations for expensive radio transmission.\nBut without efficient mechanisms for retrieval, the energy gains of local storage may be outweighed by communication costs incurred by the application in searching for data.\nWe believe that if the data storage system provides the abstraction of a single logical store to applications, as TSAR does, then it will have additional flexibility to optimize communication and storage costs.\n• Principle 2: Distinguish data from metadata: Data must be identified so that it may be retrieved by the application without exhaustive search.\nTo do this, we associate metadata with each data record - data fields of known syntax which serve as identifiers and may be queried by the storage system.\nExamples of this metadata are data attributes such as location and time, or selected or summarized data values.\nWe leverage the presence of resource-rich proxies to index metadata for resource-constrained sensors.\nThe proxies share this metadata index to provide a unified logical view of all data in the system, thereby enabling efficient, low-latency lookups.\nSuch a tier-specific separation of data storage from metadata indexing enables the system to exploit the idiosyncrasies of multi-tier networks, while improving performance and functionality.\n• Principle 3: Provide data-centric query support: In a sensor application the specific location (i.e.
offset) of a record in a stream is unlikely to be of significance, except if it conveys information concerning the location and/or time at which the information was generated. We thus expect that applications will be best served by a query interface which allows them to locate data by value or attribute (e.g. location and time), rather than a read interface for unstructured data. This in turn implies the need to maintain metadata in the form of an index that provides low-cost lookups.

2.4 System Design

TSAR embodies these design principles by employing local storage at sensors and a distributed index at the proxies. The key features of the system design are as follows. In TSAR, writes occur at sensor nodes, and are assumed to consist of both opaque data and application-specific metadata. This metadata is a tuple of known types, which may be used by the application to locate and identify data records, and which may be searched on and compared by TSAR in the course of locating data for the application. In a camera-based sensing application, for instance, this metadata might include coordinates describing the field of view, average luminance, and motion values, in addition to basic information such as time and sensor location. Depending on the application, this metadata may be two or three orders of magnitude smaller than the data itself, for instance if the metadata consists of features extracted from image or acoustic data. In addition to storing data locally, each sensor periodically sends a summary of reported metadata to a nearby proxy. The summary contains information such as the sensor ID, the interval (t1, t2) over which the summary was generated, a handle identifying the corresponding data record (e.g.
its location in flash memory), and a coarse-grain representation of the metadata associated with the record. The precise data representation used in the summary is application-specific; for instance, a temperature sensor might choose to report the maximum and minimum temperature values observed in an interval as a coarse-grain representation of the actual time series. The proxy uses the summary to construct an index; the index is global in that it stores information from all sensors in the system, and it is distributed across the various proxies in the system. Thus, applications see a unified view of distributed data, and can query the index at any proxy to get access to data stored at any sensor. Specifically, each query triggers lookups in this distributed index and the list of matches is then used to retrieve the corresponding data from the sensors. There are several distributed index and lookup methods which might be used in this system; however, the index structure described in Section 3 is highly suited for the task. Since the index is constructed using a coarse-grain summary, instead of the actual data, index lookups will yield approximate matches. The TSAR summarization mechanism guarantees that index lookups will never yield false negatives - i.e.
it will never miss summaries which include the value being searched for. However, index lookups may yield false positives, where a summary matches the query but the remote sensor, when queried, finds no matching value, wasting network resources. The more coarse-grained the summary, the lower the update overhead and the greater the fraction of false positives, whereas finer summaries incur higher update overhead while reducing the query overhead due to false positives. Remote sensors may easily distinguish false positives from queries which result in search hits, and calculate the ratio between the two; based on this ratio, TSAR employs a novel adaptive technique that dynamically varies the granularity of sensor summaries to balance the metadata overhead and the overhead of false positives.

3. Data Structures

At the proxy tier, TSAR employs a novel index structure called the Interval Skip Graph, which is an ordered, distributed data structure for finding all intervals that contain a particular point or range of values. Interval skip graphs combine Interval Trees [5], an interval-based binary search tree, with Skip Graphs [1], an ordered, distributed data structure for peer-to-peer systems [13]. The resulting data structure has two properties that make it ideal for sensor networks. First, it has O(log n) search complexity for accessing the first interval that matches a particular value or range, and constant complexity for accessing each successive interval. Second, indexing of intervals rather than individual values makes the data structure ideal for indexing summaries over time or value. Such summary-based indexing is a more natural fit for energy-constrained sensor nodes, since transmitting summaries incurs less energy overhead than transmitting all sensor data.

Definitions: We assume that there are Np proxies and Ns sensors in a two-tier sensor network. Each proxy is responsible for multiple sensor nodes, and no assumption is made about the number of sensors per
proxy. Each sensor transmits interval summaries of data or events regularly to one or more proxies that it is associated with, where interval i is represented as [lowi, highi]. These intervals can correspond to time or value ranges that are used for indexing sensor data. No assumption is made about the size of an interval or about the amount of overlap between intervals. Range queries on the intervals are posed by users to the network of proxies and sensors; each query q needs to determine all index values that overlap the interval [lowq, highq]. The goal of the interval skip graph is to index all intervals such that the set that overlaps a query interval can be located efficiently. In the rest of this section, we describe the interval skip graph in greater detail.

3.1 Skip Graph Overview

In order to inform the description of the Interval Skip Graph, we first provide a brief overview of the Skip Graph data structure; for a more extensive description the reader is referred to [1]. Figure 2 shows a skip graph which indexes 8 keys; the keys may be seen along the bottom, and above each key are the pointers associated with that key. Each data element, consisting of a key and its associated pointers, may reside on a different node in the network, and pointers therefore identify both a remote node as well as a data element on that node.

[Figure 2: Skip Graph of 8 Elements]
[Figure 3: Interval Skip Graph]
[Figure 4: Distributed Interval Skip Graph]

In this figure we may see the following properties of a skip graph:

• Ordered index: The keys are members of an ordered data type, for instance
integers. Lookups make use of ordered comparisons between the search key and existing index entries. In addition, the pointers at the lowest level point directly to the successor of each item in the index.

• In-place indexing: Data elements remain on the nodes where they were inserted, and messages are sent between nodes to establish links between those elements and others in the index.

• Log n height: There are log2 n pointers associated with each element, where n is the number of data elements indexed. Each pointer belongs to a level l in [0 ... log2 n − 1], and together with some other pointers at that level forms a chain of n/2^l elements.

• Probabilistic balance: Rather than relying on re-balancing operations which may be triggered at insert or delete, skip graphs implement a simple random balancing mechanism which maintains close to perfect balance on average, with an extremely low probability of significant imbalance.

• Redundancy and resiliency: Each data element forms an independent search tree root, so searches may begin at any node in the network, eliminating hot spots at a single search root. In addition, the index is resilient against node failure; data on the failed node will not be accessible, but remaining data elements will be accessible through search trees rooted on other nodes.

In Figure 2 we see the process of searching for a particular value in a skip graph. The pointers reachable from a single data element form a binary tree: a pointer traversal at the highest level skips over n/2 elements, n/4 at the next level, and so on. Search consists of descending the tree from the highest level to level 0, at each level comparing the target key with the next element at that level and deciding whether or not to traverse. In the perfectly balanced case shown here there are log2 n levels of pointers, and search will traverse 0 or 1 pointers at each level. We assume that each data element resides on a different node,
and measure search cost by the number of messages sent (i.e. the number of pointers traversed); this will clearly be O(log n). Tree update proceeds from the bottom, as in a B-Tree, with the root(s) being promoted in level as the tree grows. In this way, for instance, the two chains at level 1 always contain n/2 entries each, and there is never a need to split chains as the structure grows. The update process then consists of choosing which of the 2^l chains to insert an element into at each level l, and inserting it in the proper place in each chain. Maintaining a perfectly balanced skip graph as shown in Figure 2 would be quite complex; instead, the probabilistic balancing method introduced in Skip Lists [23] is used, which trades off a small amount of overhead in the expected case in return for simple update and deletion. The basis for this method is the observation that any element which belongs to a particular chain at level l can only belong to one of two chains at level l+1. To insert an element we ascend levels starting at 0, randomly choosing one of the two possible chains at each level, and stopping when we reach an empty chain. One means of implementation (e.g.
as described in [1]) is to assign each element an arbitrarily long random bit string. Each chain at level l is then constructed from those elements whose bit strings match in the first l bits, thus creating 2^l possible chains at each level and ensuring that each chain splits into exactly two chains at the next level. Although the resulting structure is not perfectly balanced, following the analysis in [23] we can show that the probability of it being significantly out of balance is extremely small; in addition, since the structure is determined by the random number stream, input data patterns cannot cause the tree to become imbalanced.

3.2 Interval Skip Graph

A skip graph is designed to store single-valued entries. In this section, we introduce a novel data structure that extends skip graphs to store intervals [lowi, highi] and allows efficient searches for all intervals covering a value v, i.e. {i : lowi ≤ v ≤ highi}. Our data structure can be extended to range searches in a straightforward manner. The interval skip graph is constructed by applying the method of augmented search trees, as described by Cormen, Leiserson, and Rivest [5] and applied to binary search trees to create an Interval Tree. The method is based on the observation that a search structure based on comparison of ordered keys, such as a binary tree, may also be used to search on a secondary key which is non-decreasing in the first key. Given a set of intervals sorted by lower bound, lowi ≤ lowi+1, we define the secondary key as the cumulative maximum, maxi = maxk=0...i (highk). The set of intervals intersecting a value v may then be found by searching for the first interval (and thus the interval with least lowi) such that maxi ≥ v. We then traverse intervals in increasing order of lower bound, until we find the first interval with lowi > v, selecting those intervals which intersect v.
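The search procedure above can be sketched in a few lines of Python. This is a centralized simplification for illustration only: a sorted list plus binary search stands in for the distributed O(log n) skip-graph descent, and `stabbing_query` is a hypothetical helper name, not part of TSAR.

```python
from bisect import bisect_left
from itertools import accumulate

def stabbing_query(intervals, v):
    """Return all intervals (low, high) that contain v.  `intervals` must be
    sorted by lower bound; the cumulative maximum of the upper bounds serves
    as the secondary search key, as in the interval skip graph."""
    # Secondary key: max_i = max(high_0, ..., high_i), non-decreasing in low_i.
    max_i = list(accumulate((high for _, high in intervals), max))
    # Locate the first entry with max_i >= v (binary search over the sorted
    # secondary key stands in for the skip-graph descent).
    i = bisect_left(max_i, v)
    matches = []
    # Traverse in increasing order of lower bound until low_i > v.
    while i < len(intervals) and intervals[i][0] <= v:
        if intervals[i][1] >= v:
            matches.append(intervals[i])
        i += 1
    return matches
```

Running this on the eight sample intervals of Figure 3 with v = 13 returns only (6, 14): the search starts at [6,14] (the first entry whose cumulative maximum, 14, is at least 13), skips [9,12], and halts at [14,16], mirroring the traversal shown in the figure.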
Using this approach we augment the skip graph data structure, as shown in Figure 3, so that each entry stores a range (lower bound and upper bound) and a secondary key (cumulative maximum of upper bound). To efficiently calculate the secondary key maxi for an entry i, we take the greatest of highi and the maximum values reported by each of i's left-hand neighbors. To search for those intervals containing the value v, we first search for v on the secondary index, maxi, and locate the first entry with maxi ≥ v. (By the definition of maxi, for this data element maxi = highi.) If lowi > v, then this interval does not contain v, and no other intervals will either, so we are done. Otherwise we traverse the index in increasing order of lowi, returning matching intervals, until we reach an entry with lowi > v, and we are done. Searches for all intervals which overlap a query range, or which completely contain a query range, are straightforward extensions of this mechanism.

Lookup Complexity: Lookup for the first interval that matches a given value is performed in a manner very similar to an interval tree. The complexity of search is O(log n). The number of intervals that match a range query can vary depending on the amount of overlap in the intervals being indexed, as well as the range specified in the query.

Insert Complexity: In an interval tree or interval skip list, the maximum value for an entry need only be calculated over the subtree rooted at that entry, as this value will be examined only when searching within the subtree rooted at that entry. For a simple interval skip graph, however, this maximum value for an entry must be computed over all entries preceding it in the index, as searches may begin anywhere in the data structure, rather than at a distinguished root element. It may easily be seen that in the worst case the insertion of a single interval (one that covers all existing intervals in the index) will trigger the update of all entries in
the index, for a worst-case insertion cost of O(n).

3.3 Sparse Interval Skip Graph

The final extensions we propose take advantage of the difference between the number of items indexed in a skip graph and the number of systems on which these items are distributed. The cost in network messages of an operation may be reduced by arranging the data structure so that most structure traversals occur locally on a single node, and thus incur zero network cost. In addition, since both congestion and failure occur on a per-node basis, we may eliminate links without adverse consequences if those links only contribute to load distribution and/or resiliency within a single node. These two modifications allow us to achieve reductions in asymptotic complexity of both update and search. As may be seen in Section 3.2, insert and delete cost on an interval skip graph has a worst-case complexity of O(n), compared to O(log n) for an interval tree. The main reason for the difference is that skip graphs have a full search structure rooted at each element, in order to distribute load and provide resilience to system failures in a distributed setting. However, in order to provide load distribution and failure resilience it is only necessary to provide a full search structure for each system. If, as in TSAR, the number of nodes (proxies) is much smaller than the number of data elements (data summaries indexed), then this will result in significant savings.

Implementation: To construct a sparse interval skip graph, we ensure that there is a single distinguished element on each system, the root element for that system; all searches will start at one of these root elements. When adding a new element, rather than splitting lists at increasing levels l until the element is in a list with no others, we stop when we find that the element would be in a list containing no root elements, thus ensuring that the element is reachable from all root elements. An example of applying this optimization
may be seen in Figure 5. (In practice, rather than designating existing data elements as roots, as shown, it may be preferable to insert null values at startup.) When using the technique of membership vectors as in [1], this may be done by broadcasting the membership vectors of each root element to all other systems, and stopping insertion of an element at level l when it does not share an l-bit prefix with any of the Np root elements. The expected number of roots sharing a log2 Np-bit prefix is 1, giving an expected height for each element of log2 Np + O(1). An alternate implementation, which distributes information concerning root elements at pointer establishment time, is omitted due to space constraints; this method eliminates the need for additional messages.

Performance: In a (non-interval) sparse skip graph, since the expected height of an inserted element is now log2 Np + O(1), expected insertion complexity is O(log Np), rather than O(log n), where Np is the number of root elements and thus the number of separate systems in the network. (In the degenerate case of a single system we have a skip list; with splitting probability 0.5 the expected height of an individual element is 1.) Note that since searches are started at root elements of expected height log2 n, search complexity is not improved. For an interval sparse skip graph, update performance is improved considerably compared to the O(n) worst case for the non-sparse case. In an augmented search structure such as this, an element only stores information for nodes which may be reached from that element, e.g. the subtree rooted at that element, in the case of a tree. Thus, when updating the maximum value in an interval tree, the update is only propagated towards the root. In a sparse interval skip graph, updates to a node only propagate towards the Np root elements, for a worst-case cost of Np log2 n.
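The stopping rule for sparse insertion can be made concrete with a minimal Python sketch. This is a hypothetical, centralized helper for illustration only: membership vectors are modeled as bit strings, and `sparse_insert_height` is not a TSAR function name.

```python
def sparse_insert_height(root_vectors, elem_vector):
    """Level at which insertion stops in a sparse skip graph: ascend from
    level 0 while the element's (l+1)-bit prefix still matches the prefix
    of at least one root's membership vector, so the element remains
    reachable from every root.  Expected height is about log2(Np) + O(1)."""
    l = 0
    while l < len(elem_vector) and any(
        r[: l + 1] == elem_vector[: l + 1] for r in root_vectors
    ):
        l += 1
    return l  # element participates in chains at levels 0..l

# With Np = 4 hypothetical roots whose vectors cover all 2-bit prefixes,
# any element shares a 2-bit prefix with some root; this one stops at
# exactly height 2 = log2(4).
roots = ["000", "011", "101", "110"]
print(sparse_insert_height(roots, "010"))  # prints 2
```

Intuitively, a fresh element shares progressively longer prefixes with ever fewer of the Np roots, so the loop terminates after roughly log2 Np iterations, matching the expected height of log2 Np + O(1) stated above.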
Shortcut search: When beginning a search for a value v, rather than beginning at the root on that proxy, we can find the element that is closest to v (e.g. using a secondary local index), and then begin the search at that element. The expected distance between this element and the search terminus is log2 Np, and the search will now take on average log2 Np + O(1) steps. To illustrate this optimization, in Figure 4, depending on the choice of search root, a search for [21, 30] beginning at node 2 may take 3 network hops, traversing to node 1, then back to node 2, and finally to node 3 where the destination is located, for a cost of 3 messages. The shortcut search, however, locates the intermediate data element on node 2, and then proceeds directly to node 3, for a cost of 1 message.

Performance: This technique may be applied to the primary key search which is the first of two insertion steps in an interval skip graph. By combining the short-cut optimization with sparse interval skip graphs, the expected cost of insertion is now O(log Np), independent of the size of the index or the degree of overlap of the inserted intervals.

3.4 Alternative Data Structures

Thus far we have only compared the sparse interval skip graph with similar structures from which it is derived. A comparison with several other data structures which meet at least some of the requirements for the TSAR index is shown in Table 2.

Table 2: Comparison of Distributed Index Structures

Structure                         | Range Query Support | Interval Representation | Re-balancing | Resilience | Small Networks | Large Networks
DHT, GHT                          | no                  | no                      | no           | yes        | good           | good
Local index, flood query          | yes                 | yes                     | no           | yes        | good           | bad
P-tree, RP* (distributed B-Trees) | yes                 | possible                | yes          | no         | good           | good
DIMS                              | yes                 | no                      | yes          | yes        | yes            | yes
Interval Skip Graph               | yes                 | yes                     | no           | yes        | good           | good

[Figure 5: Sparse Interval Skip Graph]

The hash-based systems, DHT [25] and GHT [26], lack the ability to perform range queries
and are thus not well-suited to indexing spatio-temporal data. Indexing locally using an appropriate single-node structure and then flooding queries to all proxies is a competitive alternative for small networks; for large networks the linear dependence on the number of proxies becomes an issue. Two distributed B-Trees were examined: P-Trees [6] and RP* [19]. Each of these supports range queries, and in theory could be modified to support indexing of intervals; however, they both require complex re-balancing, and do not provide the resilience characteristics of the other structures. DIMS [18] provides the ability to perform spatio-temporal range queries, and has the necessary resilience to failures; however, it cannot be used to index intervals, which are used by TSAR's data summarization algorithm.

4. Data Storage and Summarization

Having described the proxy-level index structure, we turn to the mechanisms at the sensor tier. TSAR implements two key mechanisms at the sensor tier. The first is a local archival store at each sensor node that is optimized for resource-constrained devices. The second is an adaptive summarization technique that enables each sensor to adapt to changing data and query characteristics. The rest of this section describes these mechanisms in detail.

4.1 Local Storage at Sensors

Interval skip graphs provide an efficient mechanism to look up sensor nodes containing data relevant to a query. These queries are then routed to the sensors, which locate the relevant data records in the local archive and respond back to the proxy. To enable such lookups, each sensor node in TSAR maintains an archival store of sensor data. While the implementation of such an archival store is straightforward on resource-rich devices that can run a database, sensors are often power- and resource-constrained. Consequently, the sensor archiving subsystem in TSAR is explicitly designed to exploit characteristics of sensor data in a resource-constrained
setting.

[Figure 6: Single storage record, consisting of data/event attributes (timestamp, calibration parameters, size) followed by opaque data]

Sensor data has very distinct characteristics that inform our design of the TSAR archival store. Sensors produce time-series data streams, and therefore temporal ordering of data is a natural and simple way of storing archived sensor data. In addition to simplicity, a temporally ordered store is often suitable for many sensor data processing tasks, since they involve time-series data processing. Examples include signal processing operations such as FFT, wavelet transforms, clustering, similarity matching, and target detection. Consequently, the local archival store is a collection of records, designed as an append-only circular buffer, where new records are appended to the tail of the buffer. The format of each data record is shown in Figure 6. Each record has a metadata field which includes a timestamp, sensor settings, calibration parameters, etc. Raw sensor data is stored in the data field of the record. The data field is opaque and application-specific: the storage system does not know or care about interpreting this field. A camera-based sensor, for instance, may store binary images in this data field. In order to support a variety of applications, TSAR supports variable-length data fields; as a result, record sizes can vary from one record to another. Our archival store supports three operations on records: create, read, and delete. Due to the append-only nature of the store, creation of records is simple and efficient. The create operation simply creates a new record and appends it to the tail of the store. Since records are always written at the tail, the store need not maintain a free-space list. All fields of the record need to be specified at creation time; thus, the size of the record is known a priori and the store simply allocates the corresponding number of bytes at the tail to store the record. Since writes are
immutable, the size of a record does not change once it is created.

[Figure 7: Sensor Summarization; each summary sent to the proxy covers a time interval and a start/end offset range in the local flash archive, and is inserted into the interval skip graph]

The read operation enables stored records to be retrieved in order to answer queries. In a traditional database system, efficient lookups are enabled by maintaining a structure such as a B-tree that indexes certain keys of the records. However, this can be quite complex for a small sensor node with limited resources. Consequently, TSAR sensors do not maintain any index for the data stored in their archive. Instead, they rely on the proxies to maintain this metadata index: sensors periodically send the proxy information summarizing the data contained in a contiguous sequence of records, as well as a handle indicating the location of these records in flash memory. The mechanism works as follows: in addition to the summary of sensor data, each node sends metadata to the proxy containing the time interval corresponding to the summary, as well as the start and end offsets of the flash memory location where the corresponding raw data is stored (as shown in Figure 7). Thus, random access is enabled at the granularity of a summary: the start offset of each chunk of records represented by a summary is known to the proxy. Within this collection, records are accessed sequentially. When a query matches a summary in the index, the sensor uses these offsets to access the relevant records on its local flash by sequentially reading data from the start address until the end address. Any query-specific operation can then be performed on this data. Thus, no index needs to be maintained at the sensor, in line with our goal of simplifying sensor state management. The state of the archive is captured in the metadata associated with the summaries, and is stored and maintained at the proxy. While we
anticipate local storage capacity to be large, eventually there might be a need to overwrite older data, especially in high-data-rate applications. This may be done via techniques such as multi-resolution storage of data [9], or simply by overwriting older data. When older data is overwritten, a delete operation is performed, where an index entry is deleted from the interval skip graph at the proxy and the corresponding storage space in flash memory at the sensor is freed.

4.2 Adaptive Summarization

The data summaries serve as the glue between the storage at the remote sensor and the index at the proxy. Each update from a sensor to the proxy includes three pieces of information: the summary, a time period corresponding to the summary, and the start and end offsets for the flash archive. In general, the proxy can index the time interval representing a summary or the value range reported in the summary (or both). The former index enables quick lookups on all records seen during a certain interval, while the latter index enables quick lookups on all records matching a certain value. As described in Section 2.4, there is a trade-off between the energy used in sending summaries (and thus the frequency and resolution of those summaries) and the cost of false hits during queries. The coarser and less frequent the summary information, the less energy is required, while false query hits in turn waste energy on requests for non-existent data. TSAR employs an adaptive summarization technique that balances the cost of sending updates against the cost of false positives. The key intuition is that each sensor can independently identify the fraction of false hits and true hits for queries that access its local archive. If most queries result in true hits, then the sensor determines that the summary can be coarsened further to reduce update costs without adversely impacting the hit ratio. If many queries result in false hits, then the sensor makes the granularity of each
summary finer to reduce the number and overhead of false hits. The resolution of the summary depends on two parameters: the interval over which summaries of the data are constructed and transmitted to the proxy, as well as the size of the application-specific summary. Our focus in this paper is on the interval over which the summary is constructed. Changing the size of the data summary can be performed in an application-specific manner (e.g. using wavelet compression techniques as in [9]) and is beyond the scope of this paper. Currently, TSAR employs a simple summarization scheme that computes the ratio of false and true hits and decreases (increases) the interval between summaries whenever this ratio increases (decreases) beyond a threshold.

5. TSAR Implementation

We have implemented a prototype of TSAR on a multi-tier sensor network testbed. Our prototype employs Crossbow Stargate nodes to implement the proxy tier. Each Stargate node employs a 400 MHz Intel XScale processor with 64 MB RAM and runs the Linux 2.4.19 kernel and EmStar release 2.1. The proxy nodes are equipped with two wireless radios, a Cisco Aironet 340-based 802.11b radio and a hostmote bridge to the Mica2 sensor nodes using the EmStar transceiver. The 802.11b wireless network is used for inter-proxy communication within the proxy tier, while the wireless bridge enables sensor-proxy communication. The sensor tier consists of Crossbow Mica2s and Mica2dots, each consisting of a 915 MHz CC1000 radio, a BMAC protocol stack, a 4 Mb on-board flash memory and an ATMega 128L processor. The sensor nodes run TinyOS 1.1.8. In addition to the on-board flash, the sensor nodes can be equipped with external MMC/SD flash cards using a custom connector. The proxy nodes can be equipped with external storage such as high-capacity compact flash (up to 4 GB), 6 GB micro-drives, or up to 60 GB 1.8-inch mobile disk drives. Since sensor nodes may be several hops away from the nearest proxy, the sensor tier employs
multi-hop routing to communicate with the proxy tier. In addition, to reduce the power consumption of the radio while still making the sensor node available for queries, low-power listening is enabled, in which the radio receiver is periodically powered up for a short interval to sense the channel for transmissions, and the packet preamble is extended to account for the latency until the next interval when the receiving radio wakes up. Our prototype employs the MultiHopLEPSM routing protocol with the BMAC layer configured in the low-power mode with an 11% duty cycle (one of the default BMAC [22] parameters). Our TSAR implementation on the Mote involves a data-gathering task that periodically obtains sensor readings and logs these readings to flash memory. The flash memory is assumed to be a circular append-only store and the format of the logged data is depicted in Figure 6. The Mote sends a report to the proxy every N readings, summarizing the observed data. The report contains: (i) the address of the Mote, (ii) a handle that contains an offset and the length of the region in flash memory containing the data referred to by the summary, (iii) an interval (t1, t2) over which this report is generated, (iv) a tuple (low, high) representing the minimum and the maximum values observed at the sensor in the interval, and (v) a sequence number. The sensor updates are used to construct a sparse interval skip graph that is distributed across proxies, via network messages between proxies over the 802.11b wireless network. Our current implementation supports queries that request records matching a time interval (t1, t2) or a value range (v1, v2). Spatial constraints are specified using sensor IDs. Given a list of matching intervals from the skip graph, TSAR supports two types of messages to query the sensor: lookup and fetch. A lookup message triggers a search within the corresponding region in flash memory and returns the number of matching records in that memory region
(but does not retrieve data). In contrast, a fetch message not only triggers a search but also returns all matching data records to the proxy. Lookup messages are useful for polling a sensor, for instance, to determine if a query matches too many records.

Figure 8: Skip Graph Insert Performance (number of messages vs. index size for skip-graph insert, sparse skip-graph insert, and initial lookup; (a) James Reserve data, (b) synthetic data).

6. Experimental Evaluation

In this section, we evaluate the efficacy of TSAR using our prototype and simulations. The testbed for our experiments consists of four Stargate proxies and twelve Mica2 and Mica2dot sensors; three sensors are assigned to each proxy. Given the limited size of our testbed, we employ simulations to evaluate the behavior of TSAR in larger settings. Our simulation employs the EmTOS emulator [10], which enables us to run the same code in simulation and on the hardware platform. Rather than using live data from a real sensor, to ensure repeatable experiments, we seed each sensor node with a dataset (i.e., a trace) that dictates the values reported by that node to the proxy. One section of the flash memory on each sensor node is programmed with data points from the trace; these observations are then replayed during an experiment, logged to the local archive (also located in flash memory), and reported to the proxy. The first dataset used to evaluate TSAR is a temperature dataset from James Reserve [27] that includes data from eleven temperature sensor nodes over a period of 34 days. The second dataset is synthetically generated; the trace for each sensor is generated using a uniformly distributed random walk through the value space. Our experimental evaluation has four parts. First, we
run EmTOS simulations to evaluate the lookup, update and delete overhead for sparse interval skip graphs using the real and synthetic datasets. Second, we provide summary results from micro-benchmarks of the storage component of TSAR, which include empirical characterization of the energy costs and latency of reads and writes for the flash memory chip as well as for the whole mote platform, and comparisons to published numbers for other storage and communication technologies. These micro-benchmarks form the basis for our full-scale evaluation of TSAR on a testbed of four Stargate proxies and twelve Motes. We measure the end-to-end query latency in our multi-hop testbed as well as the query processing overhead at the mote tier. Finally, we demonstrate the adaptive summarization capability at each sensor node. The remainder of this section presents our experimental results.

6.1 Sparse Interval Skip Graph Performance

This section evaluates the performance of sparse interval skip graphs by quantifying insert, lookup and delete overheads. We assume a proxy tier with 32 proxies and construct sparse interval skip graphs of various sizes using our datasets.

Figure 9: Skip Graph Lookup Performance (number of messages vs. index size for initial lookup and traversal; (a) James Reserve data, (b) synthetic data).

Figure 10: Skip Graph Overheads ((a) impact of number of proxies on insert and initial lookup cost; (b) impact of redundant summaries on insert and lookup cost).

For each skip graph, we evaluate the cost of inserting a new value into the index. Each entry was deleted after its insertion, enabling
us to quantify the delete overhead as well. Figures 8(a) and (b) quantify the insert overhead for our two datasets: each insert entails an initial traversal that incurs log n messages, followed by neighbor pointer updates at increasing levels, incurring a cost of 4 log n messages. Our results demonstrate this behavior, and show as well that the performance of delete, which also involves an initial traversal followed by pointer updates at each level, incurs a similar cost. Next, we evaluate the lookup performance of the index structure. Again, we construct skip graphs of various sizes using our datasets and evaluate the cost of a lookup on the index structure. Figures 9(a) and (b) depict our results. There are two components for each lookup: the lookup of the first interval that matches the query and, in the case of overlapping intervals, the subsequent linear traversal to identify all matching intervals. The initial lookup can be seen to take log n messages, as expected. The cost of the subsequent linear traversal, however, is highly data dependent. For instance, temperature values for the James Reserve data exhibit significant spatial correlations, resulting in significant overlap between different intervals and variable, high traversal cost (see Figure 9(a)). The synthetic data, however, has less overlap and incurs lower traversal overhead, as shown in Figure 9(b). Since the previous experiments assumed 32 proxies, we next evaluate the impact of the number of proxies on skip graph performance. We vary the number of proxies from 10 to 48 and distribute a skip graph with 4096 entries among these proxies. We construct regular interval skip graphs as well as sparse interval skip graphs using these entries and measure the overhead of inserts and lookups. Thus, the experiment also seeks to demonstrate the benefits of sparse skip graphs over regular skip graphs. Figure 10(a) depicts our results. In regular skip graphs, the complexity of insert is O(log^2 n) in the expected case (and O(n) in the worst case), where n is the number of elements. This complexity is unaffected by changing the number of proxies, as indicated by the flat line in the figure. Sparse skip graphs require fewer pointer updates; however, their overhead depends on the number of proxies, and is O(log^2 Np) in the expected case, independent of n. This can be seen to result in a significant reduction in overhead when the number of proxies is small, a reduction that shrinks as the number of proxies increases. Failure handling is an important issue in a multi-tier sensor architecture, since the architecture relies on many components: proxies, sensor nodes and routing nodes can fail, and wireless links can fade. Handling of many of these failure modes is outside the scope of this paper; however, we consider the resilience of skip graphs to proxy failures. In this case, a skip graph search (and subsequent repair operations) can follow any one of the other links from a root element. Since a sparse skip graph has search trees rooted at each node, searching can resume once the lookup request has routed around the failure. Together, these two properties ensure that even if a proxy fails, the remaining entries in the skip graph will be reachable with high probability; only the entries on the failed proxy and the corresponding data at the sensors become inaccessible. To ensure that all data on sensors remains accessible, even in the event of failure of a proxy holding index entries for that data, we incorporate redundant index entries. TSAR employs a simple redundancy scheme where additional coarse-grain summaries are used to protect regular summaries. Each sensor sends summary data periodically to its local proxy, but less frequently sends a lower-resolution summary to a backup proxy; the backup summary represents all of the data represented by the finer-grained summaries, but in a lossier fashion, thus resulting in higher read overhead (due to false hits) if the backup
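The message counts quoted for Figure 8 can be captured in a rough expected-case cost model. This is our simplification of the traversal plus pointer-update terms from Section 6.1; real counts vary with graph topology and data distribution.

```python
import math

def expected_insert_messages(n):
    """Expected-case insert cost: an initial traversal of ~log2(n)
    messages plus neighbor-pointer updates of ~4*log2(n) messages
    (a coarse model of the Section 6.1 discussion, not an exact count)."""
    lg = math.ceil(math.log2(n))
    return lg + 4 * lg

def expected_lookup_messages(n):
    """Initial lookup takes ~log2(n) messages; the data-dependent linear
    traversal over overlapping intervals is deliberately excluded."""
    return math.ceil(math.log2(n))
```

For a 4096-entry index this model gives a 12-message initial lookup and roughly 60 messages per insert, which is the right scale for the curves in Figures 8 and 9.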
summary is used. The cost of implementing this scheme in our system is low: Figure 10(b) shows the overhead of such a redundancy scheme, where a single coarse summary is sent to a backup proxy for every two summaries sent to the primary proxy. Since a redundant summary is sent for every two summaries, the insert cost is 1.5 times the cost in the normal case. However, these redundant entries result in only a negligible increase in lookup overhead, due to the logarithmic dependence of lookup cost on index size, while providing full resilience to any single proxy failure.

6.2 Storage Microbenchmarks

Since sensors are resource-constrained, the energy consumption and the latency at this tier are important measures for evaluating the performance of a storage architecture. Before performing an end-to-end evaluation of our system, we provide more detailed information on the energy consumption of the storage component used to implement the TSAR local archive, based on empirical measurements. In addition, we compare these figures to those for other local storage technologies, as well as to the energy consumption of wireless communication, using information from the literature. For empirical measurements, we measure energy usage for the storage component itself (i.e.
current drawn by the flash chip), as well as for the entire Mica2 mote. The power measurements in Table 3 were performed for the AT45DB041 [15] flash memory on a Mica2 mote, which is an older NOR flash device. The most promising technology for low-energy storage on sensing devices is NAND flash, such as the Samsung K9K4G08U0M device [16]; published power numbers for this device are provided in the table. Published energy requirements for wireless transmission using the Chipcon [4] CC2420 radio (used in MicaZ and Telos motes) are provided for comparison, assuming zero network and protocol overhead.

Table 3: Storage and Communication Energy Costs (* = measured values; for the Mote flash, the first figure is the flash chip alone, the second the whole mote)

  Operation                                     Energy                   Energy/byte
  Mote flash: read 256-byte page                58µJ* / 136µJ* total     0.23µJ*
  Mote flash: write 256-byte page               926µJ* / 1042µJ* total   3.6µJ*
  NAND flash: read 512-byte page                2.7µJ                    1.8nJ
  NAND flash: write 512-byte page               7.8µJ                    15nJ
  NAND flash: erase 16KB sector                 60µJ                     3.7nJ
  CC2420 radio: transmit 8 bits (-25dBm)        0.8µJ                    0.8µJ
  CC2420 radio: receive 8 bits                  1.9µJ                    1.9µJ
  Mote AVR processor: in-memory search, 256B    1.8µJ                    6.9nJ

Comparing the total energy cost of writing flash (erase + write) to the total cost of communication (transmit + receive), we find that the NAND flash is almost 150 times more efficient than radio communication, even assuming perfect network protocols.

Figure 11: Query Processing Latency ((a) multi-hop query performance: latency vs. number of hops; (b) latency vs. index size, broken down into sensor communication, proxy communication, and sensor lookup/processing).

6.3 Prototype Evaluation

This section reports results from an end-to-end evaluation of the TSAR prototype involving both tiers. In our setup, there are four proxies connected via 802.11 links and three sensors per proxy. The multi-hop topology was preconfigured such that sensor nodes were connected in a line to each proxy, forming a
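The "almost 150 times" figure can be reproduced as a back-of-the-envelope check from the per-byte columns of Table 3, comparing NAND write plus erase against CC2420 transmit plus receive:

```python
# Per-byte energy figures from Table 3, in nanojoules.
NAND_WRITE_NJ = 15.0    # write, amortized over a 512-byte page
NAND_ERASE_NJ = 3.7     # erase, amortized over a 16KB sector
RADIO_TX_NJ = 800.0     # CC2420 transmit at -25dBm (0.8 uJ/byte)
RADIO_RX_NJ = 1900.0    # CC2420 receive (1.9 uJ/byte)

flash_per_byte = NAND_WRITE_NJ + NAND_ERASE_NJ   # 18.7 nJ/byte
radio_per_byte = RADIO_TX_NJ + RADIO_RX_NJ       # 2700 nJ/byte
ratio = radio_per_byte / flash_per_byte
print(f"radio/flash per-byte energy ratio: {ratio:.0f}x")  # ~144x
```

The result, roughly 144x, matches the "almost 150 times" claim, and still ignores network protocol overheads, which would widen the gap further in favor of local storage.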
minimal tree of depth 3.

Figure 12: Query Latency Components ((a) data query and fetch time vs. amount of archived data retrieved; (b) sensor query processing delay vs. number of 34-byte records searched).

Due to resource constraints we were unable to perform experiments with dozens of sensor nodes; however, this topology ensured that the network diameter was as large as for a typical network of significantly larger size. Our evaluation metric is the end-to-end latency of query processing. A query posed on TSAR first incurs the latency of a sparse skip graph lookup, followed by routing to the appropriate sensor node(s). The sensor node reads the required page(s) from its local archive, processes the query on the page that is read, and transmits the response to the proxy, which then forwards it to the user. We first measure query latency for different sensors in our multi-hop topology. Depending on which of the sensors is queried, the total latency increases almost linearly from about 400ms to 1 second as the number of hops increases from 1 to 3 (see Figure 11(a)). Figure 11(b) provides a breakdown of the various components of the end-to-end latency. The dominant component of the total latency is the communication over one or more hops. The typical time to communicate over one hop is approximately 300ms. This large latency is primarily due to the use of a duty-cycled MAC layer; the latency will be larger if the duty cycle is reduced (e.g.
the 2% setting as opposed to the 11.5% setting used in this experiment), and will conversely decrease if the duty cycle is increased. The figure also shows the latency for varying index sizes; as expected, the latency of inter-proxy communication and skip graph lookups increases logarithmically with index size. Not surprisingly, the overhead seen at the sensor is independent of the index size. The latency also depends on the number of packets transmitted in response to a query: the larger the amount of data retrieved by a query, the greater the latency. This result is shown in Figure 12(a). The step function is due to packetization in TinyOS; TinyOS sends one packet so long as the payload is smaller than 30 bytes and splits the response into multiple packets for larger payloads. As the data retrieved by a query increases, the latency increases in steps, where each step denotes the overhead of an additional packet. Finally, Figure 12(b) shows the impact of searching and processing flash memory regions of increasing sizes on a sensor. Each summary represents a collection of records in flash memory, and all of these records need to be retrieved and processed if that summary matches a query. The coarser the summary, the larger the memory region that needs to be accessed. For the search sizes examined, amortization of overhead across multiple flash pages and archival records, as well as within the flash chip and its associated driver, results in an apparently sub-linear increase in latency with search size. In addition, the operation has very low latency, in part due to the simplicity of our query processing, which requires only a compare operation with each stored element. More complex operations, however, will of course incur greater latency.

6.4 Adaptive Summarization

When data is summarized by the sensor before being reported to the proxy, information is lost. With the interval summarization method we are using, this
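The packetization step function for response latency can be modeled as a simple ceiling. The behavior at a payload of exactly 30 bytes is our assumption, since the text only states that payloads smaller than 30 bytes fit in a single packet:

```python
import math

MAX_PAYLOAD = 30  # bytes per TinyOS packet in this deployment (assumed boundary)

def packets_for(response_bytes):
    """Number of packets needed for a query response: one packet while the
    payload fits in a single frame, then one more per additional frame.
    Latency therefore grows in steps of one packet, as in Figure 12(a)."""
    return max(1, math.ceil(response_bytes / MAX_PAYLOAD))
```

Each additional packet adds a fixed per-packet overhead, which is exactly the step height visible in the retrieval-latency curve.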
information loss will never cause the proxy to believe that a sensor node does not hold a value that it in fact does, since all archived values are contained within the reported interval. However, it does cause the proxy to believe that the sensor may hold values that it does not, and to forward query messages to the sensor for those values. These false positives constitute the cost of the summarization mechanism, and need to be balanced against the savings achieved by reducing the number of reports. The goal of adaptive summarization is to dynamically vary the summary size so that these two costs are balanced.

Figure 13: Impact of Summarization Granularity ((a) fraction of true hits vs. summary size in records; (b) summarization size over time for query rates 0.03, 0.1, and 0.2).

Figure 13(a) demonstrates the impact of summary granularity on false hits. As the number of records included in a summary is increased, the fraction of queries forwarded to the sensor that match data held on that sensor (true positives) decreases. Next, in Figure 13(b), we run an EmTOS simulation with our adaptive summarization algorithm enabled. The adaptive algorithm increases the summary granularity (defined as the number of records per summary) when Cost(updates)/Cost(false hits) > 1 + ε and reduces it when Cost(updates)/Cost(false hits) < 1 - ε, where ε is a small constant. To demonstrate the adaptive nature of our technique, we plot a time series of the summarization granularity. We begin with a query rate of 1 query per 5 samples, decrease it to 1 query every 30 samples, and then increase it again to 1 query every 10 samples. As shown in Figure 13(b), the adaptive technique adjusts accordingly by sending more fine-grain summaries at higher query rates (in response to the higher false hit rate), and
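One adaptation step of this rule can be sketched as follows. The fixed step size of one record and the ε value of 0.1 are illustrative choices of ours; the paper does not specify either.

```python
def adapt_granularity(granularity, cost_updates, cost_false_hits, eps=0.1):
    """One step of the adaptive summarization rule: grow the summary
    (more records per summary, hence fewer update reports) when update
    cost dominates, and shrink it when false-hit cost dominates."""
    if cost_false_hits == 0:
        return granularity + 1  # no false hits at all: safe to coarsen
    ratio = cost_updates / cost_false_hits
    if ratio > 1 + eps:
        return granularity + 1          # updates too costly: coarsen summaries
    if ratio < 1 - eps:
        return max(1, granularity - 1)  # false hits too costly: refine summaries
    return granularity                  # costs balanced: leave unchanged
```

The dead band between 1 - ε and 1 + ε prevents the granularity from oscillating when the two costs are roughly equal.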
fewer, coarse-grain summaries at lower query rates.

7. Related Work

In this section, we review prior work on storage and indexing techniques for sensor networks. While our work addresses both problems jointly, much prior work has considered them in isolation. The problem of archival storage of sensor data has received limited attention in the sensor network literature. ELF [7] is a log-structured file system for local storage on flash memory that provides load leveling, and Matchbox is a simple file system that is packaged with the TinyOS distribution [14]. Both these systems focus on local storage, whereas our focus is both on storage at the remote sensors and on providing a unified view of distributed data across all such local archives. Multi-resolution storage [9] is intended for in-network storage and search in systems where there is significant data in comparison to storage resources. In contrast, TSAR addresses the problem of archival storage in two-tier systems where sufficient resources can be placed at the edge sensors. The RISE platform [21], being developed as part of the NODE project at UCR, addresses the issues of hardware platform support for large amounts of storage in remote sensor nodes, but not the indexing and querying of this data. In order to efficiently access a distributed sensor store, an index needs to be constructed over the data. Early work on sensor networks such as Directed Diffusion [17] assumes a system where all useful sensor data is stored locally at each sensor, and spatially scoped queries are routed using geographic coordinates to the locations where the data is stored. Sources publish the events that they detect, and sinks with interest in specific events can subscribe to these events. The Directed Diffusion substrate routes queries to specific locations if the query has geographic information embedded in it (e.g., find temperature in the south-west quadrant); if not, the query is flooded throughout the
network. These schemes have the drawback that for queries that are not geographically scoped, the search cost (O(n) for a network of n nodes) may be prohibitive in large networks with frequent queries. Local storage with in-network indexing approaches address this issue by constructing indexes using frameworks such as Geographic Hash Tables [24] and Quad Trees [9]. Recent research has seen a growing body of work on data indexing schemes for sensor networks [26, 11, 18]. One such scheme is DCS [26], which provides a hash function for mapping from event name to location. DCS constructs a distributed structure that groups events together spatially by their named type. Distributed Index of Features in Sensornets (DIFS [11]) and Multi-dimensional Range Queries in Sensor Networks (DIM [18]) extend the data-centric storage approach to provide spatially distributed hierarchies of indexes to data. While these approaches advocate in-network indexing for sensor networks, we believe that indexing is a task that is far too complicated to be performed at the remote sensor nodes, since it involves maintaining significant state and large tables. TSAR provides a better match between the resource requirements of storage and indexing and the availability of resources at different tiers. Thus, complex operations such as indexing and managing metadata are performed at the proxies, while storage at the sensors remains simple. In addition to storage and indexing techniques specific to sensor networks, many distributed, peer-to-peer and spatio-temporal index structures are relevant to our work. DHTs [25] can be used for indexing events based on their type, spatial index structures such as R-trees [12] can be used for optimizing spatial searches, and K-D trees [2] can be used for multi-attribute search. While this paper focuses on building an ordered index structure for range queries, we will explore the use of other index structures for alternate queries over sensor data.

8. Conclusions

In this
paper, we argued that existing sensor storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks. We presented the design of TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local storage at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Sparse Interval Skip Graph, for efficiently supporting spatio-temporal and range queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the energy cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarser-resolution index structure. We implemented TSAR in a two-tier sensor testbed comprising Stargate-based proxies and Mote-based sensors. Our experimental evaluation of TSAR demonstrated the benefits and feasibility of employing our energy-efficient, low-latency distributed storage architecture in multi-tier sensor networks.

9. References

[1] James Aspnes and Gauri Shah. Skip graphs. In Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 384-393, Baltimore, MD, USA, January 2003.
[2] Jon Louis Bentley. Multidimensional binary search trees used for associative searching. Commun. ACM, 18(9):509-517, 1975.
[3] Philippe Bonnet, J. E. Gehrke, and Praveen Seshadri. Towards sensor database systems. In Proceedings of the Second International Conference on Mobile Data Management, January 2001.
[4] Chipcon. CC2420 2.4 GHz IEEE 802.15.4 / ZigBee-ready RF transceiver, 2004.
[5] Thomas H. Cormen, Charles E. Leiserson, Ronald L.
Rivest, and Clifford Stein. Introduction to Algorithms. The MIT Press and McGraw-Hill, second edition, 2001.
[6] Adina Crainiceanu, Prakash Linga, Johannes Gehrke, and Jayavel Shanmugasundaram. Querying Peer-to-Peer Networks Using P-Trees. Technical Report TR2004-1926, Cornell University, 2004.
[7] Hui Dai, Michael Neufeld, and Richard Han. ELF: an efficient log-structured flash file system for micro sensor nodes. In SenSys '04: Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, pages 176-187, New York, NY, USA, 2004. ACM Press.
[8] Peter Desnoyers, Deepak Ganesan, Huan Li, and Prashant Shenoy. PRESTO: A predictive storage architecture for sensor networks. In Tenth Workshop on Hot Topics in Operating Systems (HotOS X), June 2005.
[9] Deepak Ganesan, Ben Greenstein, Denis Perelyubskiy, Deborah Estrin, and John Heidemann. An evaluation of multi-resolution storage in sensor networks. In Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys), 2003.
[10] L. Girod, T. Stathopoulos, N. Ramanathan, J. Elson, D. Estrin, E. Osterweil, and T. Schoellhammer. A system for simulation, emulation, and deployment of heterogeneous sensor networks. In Proceedings of the Second ACM Conference on Embedded Networked Sensor Systems, Baltimore, MD, 2004.
[11] B. Greenstein, D. Estrin, R. Govindan, S. Ratnasamy, and S. Shenker. DIFS: A distributed index for features in sensor networks. Elsevier Journal of Ad-hoc Networks, 2003.
[12] Antonin Guttman. R-trees: a dynamic index structure for spatial searching. In SIGMOD '84: Proceedings of the 1984 ACM SIGMOD International Conference on Management of Data, pages 47-57, New York, NY, USA, 1984. ACM Press.
[13] Nicholas Harvey, Michael B.
Jones, Stefan Saroiu, Marvin Theimer, and Alec Wolman. SkipNet: A scalable overlay network with practical locality properties. In Proceedings of the 4th USENIX Symposium on Internet Technologies and Systems (USITS '03), Seattle, WA, March 2003.
[14] Jason Hill, Robert Szewczyk, Alec Woo, Seth Hollar, David Culler, and Kristofer Pister. System architecture directions for networked sensors. In Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-IX), pages 93-104, Cambridge, MA, USA, November 2000. ACM.
[15] Atmel Inc. 4-megabit 2.5-volt or 2.7-volt DataFlash AT45DB041B, 2005.
[16] Samsung Semiconductor Inc. K9W8G08U1M, K9K4G08U0M: 512M x 8 bit / 1G x 8 bit NAND flash memory, 2003.
[17] Chalermek Intanagonwiwat, Ramesh Govindan, and Deborah Estrin. Directed diffusion: A scalable and robust communication paradigm for sensor networks. In Proceedings of the Sixth Annual International Conference on Mobile Computing and Networking, pages 56-67, Boston, MA, August 2000. ACM Press.
[18] Xin Li, Young-Jin Kim, Ramesh Govindan, and Wei Hong. Multi-dimensional range queries in sensor networks. In Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys), 2003.
[19] Witold Litwin, Marie-Anne Neimat, and Donovan A. Schneider. RP*: A family of order preserving scalable distributed data structures. In VLDB '94: Proceedings of the 20th International Conference on Very Large Data Bases, pages 342-353, San Francisco, CA, USA, 1994.
[20] Samuel Madden, Michael Franklin, Joseph Hellerstein, and Wei Hong. TAG: a tiny aggregation service for ad-hoc sensor networks. In OSDI, Boston, MA, 2002.
[21] A. Mitra, A. Banerjee, W. Najjar, D. Zeinalipour-Yazti, D. Gunopulos, and V.
Kalogeraki. High performance, low power sensor platforms featuring gigabyte scale storage. In SenMetrics 2005: Third International Workshop on Measurement, Modeling, and Performance Analysis of Wireless Sensor Networks, July 2005.
[22] J. Polastre, J. Hill, and D. Culler. Versatile low power media access for wireless sensor networks. In Proceedings of the Second ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2004.
[23] William Pugh. Skip lists: a probabilistic alternative to balanced trees. Commun. ACM, 33(6):668-676, 1990.
[24] S. Ratnasamy, D. Estrin, R. Govindan, B. Karp, L. Yin, S. Shenker, and F. Yu. Data-centric storage in sensornets. In ACM First Workshop on Hot Topics in Networks, 2001.
[25] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A scalable content addressable network. In Proceedings of the 2001 ACM SIGCOMM Conference, 2001.
[26] S. Ratnasamy, B. Karp, L. Yin, F. Yu, D. Estrin, R. Govindan, and S. Shenker. GHT: a geographic hash-table for data-centric storage. In First ACM International Workshop on Wireless Sensor Networks and their Applications, 2002.
[27] N. Xu, E. Osterweil, M. Hamilton, and D.
Estrin. James Reserve Data. http://www.lecs.cs.ucla.edu/~nxu/ess/.

TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs*

* This work was supported in part by National Science Foundation grants EEC-0313747, CNS-0325868 and EIA-0098060.

ABSTRACT

Archival storage of sensor data is necessary for applications that query, mine, and analyze such data for interesting features and trends. We argue that existing storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks, where an application can comprise tens of tethered proxies, each managing tens to hundreds of untethered sensors. We present TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local archiving at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Interval Skip Graph, for efficiently supporting spatio-temporal and value queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarse-grain index. We implement TSAR in a two-tier sensor testbed comprising Stargate-based proxies and Mote-based sensors. Our experiments demonstrate the benefits and feasibility of using our energy-efficient storage architecture in multi-tier sensor networks.

1. Introduction

1.1 Motivation

Many different kinds of networked data-centric sensor applications have emerged in recent years. Sensors in these applications sense the environment and generate data that must be processed, filtered, interpreted, and archived in order to provide a useful infrastructure to its users. To achieve its goals, a typical sensor application needs access to both live and past sensor
data. Whereas access to live data is necessary in monitoring and surveillance applications, access to past data is necessary for applications such as mining of sensor logs to detect unusual patterns, analysis of historical trends, and post-mortem analysis of particular events. Archival storage of past sensor data requires a storage system, the key attributes of which are: where the data is stored, whether it is indexed, and how the application can access this data in an energy-efficient manner with low latency. There has been a spectrum of approaches to constructing sensor storage systems. In the simplest approach, sensors stream data or events to a server for long-term archival storage [3], where the server often indexes the data to permit efficient access at a later time. Since sensors may be several hops from the nearest base station, network costs are incurred; however, once data is indexed and archived, subsequent data accesses can be handled locally at the server without incurring network overhead. In this approach, the storage is centralized, reads are efficient and cheap, while writes are expensive. Further, all data is propagated to the server, regardless of whether it is ever used by the application. An alternate approach is to have each sensor store data or events locally (e.g., in flash memory), so that all writes are local and incur no communication overheads. A read request, such as whether an event was detected by a particular sensor, requires a message to be sent to the sensor for processing. More complex read requests are handled by flooding. For instance, determining if an intruder was detected over a particular time interval requires the request to be flooded to all sensors in the system. Thus, in this approach, the storage is distributed, writes are local and inexpensive, while reads incur significant network overheads. Requests that require flooding, due to the lack of an index, are expensive and may waste precious sensor resources, even
if no matching data is stored at those sensors. Research efforts such as Directed Diffusion [17] have attempted to reduce these read costs through intelligent message routing. Between these two extremes lie a number of other sensor storage systems with different trade-offs, summarized in Table 1. The geographic hash table (GHT) approach [24, 26] advocates the use of an in-network index to augment the fully distributed nature of sensor storage. In this approach, each data item has a key associated with it, and a distributed or geographic hash table is used to map keys to nodes that store the corresponding data items. Thus, writes cause data items to be sent to the hashed nodes and also trigger updates to the in-network hash table. A read request requires a lookup in the in-network hash table to locate the node that stores the data item; observe that the presence of an index eliminates the need for flooding in this approach. Most of these approaches assume a flat, homogeneous architecture in which every sensor node is energy-constrained. In this paper, we propose a novel storage architecture called TSAR that reflects and exploits the multi-tier nature of emerging sensor networks, where the application is comprised of tens of tethered sensor proxies (or more), each controlling tens or hundreds of untethered sensors. TSAR is a component of our PRESTO [8] predictive storage architecture, which combines archival storage with caching and prediction. We believe that a fundamentally different storage architecture is necessary to address the multi-tier nature of future sensor networks. Specifically, the storage architecture needs to exploit the resource-rich nature of proxies, while respecting resource constraints at the remote sensors. No existing sensor storage architecture explicitly addresses this dichotomy in the resource capabilities of different tiers. Any sensor storage system should also carefully exploit current technology trends, which indicate
that the capacities of flash memories continue to rise as per Moore's Law, while their costs continue to plummet.\nThus it will soon be feasible to equip each sensor with 1 GB of flash storage for a few tens of dollars.\nAn even more compelling argument is the energy cost of flash storage, which can be as much as two orders of magnitude lower than that for communication.\nNewer NAND flash memories offer very low write and erase energy costs--our comparison of a 1GB Samsung NAND flash storage [16] and the Chipcon CC2420 802.15.4 wireless radio [4] in Section 6.2 indicates a 1:100 ratio in per-byte energy cost between the two devices, even before accounting for network protocol overheads.\nThese trends, together with the energy-constrained nature of untethered sensors, indicate that local storage offers a viable, energy-efficient alternative to communication in sensor networks.\nTSAR exploits these trends by storing data or events locally on the energy-efficient flash storage at each sensor.\nSensors send concise identifying information, which we term metadata, to a nearby proxy; depending on the representation used, this metadata may be an order of magnitude or more smaller than the data itself, imposing much lower communication costs.\nThe resource-rich proxies interact with one another to construct a distributed index of the metadata reported from all sensors, and thus an index of the associated data stored at the sensors.\nThis index provides a unified, logical view of the distributed data, and enables an application to query and read past data efficiently--the index is used to pinpoint all data that match a read request, followed by messages to retrieve that data from the corresponding sensors.\nIn-network index lookups are eliminated, reducing network overheads for read requests.\nThis separation of data, which is stored at the sensors, and the metadata, which is stored at the proxies, enables TSAR to reduce energy overheads at the sensors, by leveraging 
resources at tethered proxies.\n1.2 Contributions\nThis paper presents TSAR, a novel two-tier storage architecture for sensor networks.\nTo the best of our knowledge, this is the first sensor storage system that is explicitly tailored for emerging multi-tier sensor networks.\nOur design and implementation of TSAR have resulted in four contributions.\nFirst, at the core of the TSAR architecture is a novel distributed index structure based on interval skip graphs that we introduce in this paper.\nThis index structure can store coarse summaries of sensor data and organize them in an ordered manner to be easily searchable.\n1TSAR: Tiered Storage ARchitecture for sensor networks.\nThis data structure has O(log n) expected search and update complexity.\nFurther, the index provides a logically unified view of all data in the system.\nSecond, at the sensor level, each sensor maintains a local archive that stores data on flash memory.\nOur storage architecture is fully stateless at each sensor from the perspective of the metadata index; all index structures are maintained at the resource-rich proxies, and only direct requests or simple queries on explicitly identified storage locations are sent to the sensors.\nStorage at the remote sensor is in effect treated as an appendage of the proxy, resulting in low implementation complexity, which makes it ideal for small, resource-constrained sensor platforms.\nFurther, the local store is optimized for time-series access to archived data, as is typical in many applications.\nEach sensor periodically sends a summary of its data to a proxy.\nTSAR employs a novel adaptive summarization technique that adapts the granularity of the data reported in each summary to the ratio of false hits for application queries.\nFiner-grained summaries are sent whenever more false positives are observed, thereby balancing the energy cost of metadata updates and false positives.\nThird, we have implemented a prototype of TSAR on a multi-tier testbed comprising 
Stargate-based proxies and Mote-based sensors.\nOur implementation supports spatio-temporal, value, and range-based queries on sensor data.\nFourth, we conduct a detailed experimental evaluation of TSAR using a combination of EmStar\/EmTOS [10] and our prototype.\nWhile our EmStar\/EmTOS experiments focus on the scalability of TSAR in larger settings, our prototype evaluation involves latency and energy measurements in a real setting.\nOur results demonstrate the logarithmic scaling property of the sparse skip graph and the low latency of end-to-end queries in a duty-cycled multi-hop network.\nThe remainder of this paper is structured as follows.\nSection 2 presents key design issues that guide our work.\nSections 3 and 4 present the proxy-level index and the local archive and summarization at a sensor, respectively.\nSection 5 discusses our prototype implementation, and Section 6 presents our experimental results.\nWe present related work in Section 7 and our conclusions in Section 8.\n2.\nDesign Considerations\n2.1 System Model\n2.2 Usage Models\n2.3 Design Principles\n2.4 System Design\n3.\nData Structures\n3.1 Skip Graph Overview\n3.2 Interval Skip Graph\n3.3 Sparse Interval Skip Graph\n3.4 Alternative Data Structures\n4.\nData Storage and Summarization\n4.1 Local Storage at Sensors\n4.2 Adaptive Summarization\n5.\nTSAR Implementation\n6.\nExperimental Evaluation\n6.1 Sparse Interval Skip Graph Performance\n6.2 Storage Microbenchmarks\n6.3 Prototype Evaluation\n6.4 Adaptive Summarization\n7.\nRelated Work\nIn this section, we review prior work on storage and indexing techniques for sensor networks.\nWhile our work addresses both problems jointly, much prior work has considered them in isolation.\nThe problem of archival storage of sensor data has received limited attention in the sensor network literature.\nELF [7] is a log-structured file system for local storage on flash memory that provides load leveling, and Matchbox is a simple file system that is packaged with 
the TinyOS distribution [14].\nBoth of these systems focus on local storage, whereas our focus is on both storage at the remote sensors and a unified view of distributed data across all such local archives.\nMulti-resolution storage [9] is intended for in-network storage and search in systems where there is significant data in comparison to storage resources.\nIn contrast, TSAR addresses the problem of archival storage in two-tier systems where sufficient resources can be placed at the edge sensors.\nThe RISE platform [21] being developed as part of the NODE project at UCR addresses the issues of hardware platform support for large amounts of storage in remote sensor nodes, but not the indexing and querying of this data.\nIn order to efficiently access a distributed sensor store, an index of the data needs to be constructed.\nEarly work on sensor networks such as Directed Diffusion [17] assumes a system where all useful sensor data is stored locally at each sensor, and spatially scoped queries are routed using geographic co-ordinates to locations where the data is stored.\nSources publish the events that they detect, and sinks with interest in specific events can subscribe to these events.\nThe Directed Diffusion substrate routes queries to specific locations if the query has geographic information embedded in it (e.g., find the temperature in the south-west quadrant); if not, the query is flooded throughout the network.\nThese schemes have the drawback that for queries that are not geographically scoped, the search cost (O(n) for a network of n nodes) may be prohibitive in large networks with frequent queries.\nApproaches that combine local storage with in-network indexing address this issue by constructing indexes using frameworks such as Geographic Hash Tables [24] and Quad Trees [9].\nRecent research has seen a growing body of work on data indexing schemes for sensor networks [26, 11, 18].\nOne such scheme is DCS [26], which provides a hash function for 
mapping from event name to location.\nDCS constructs a distributed structure that groups events together spatially by their named type.\nDistributed Index of Features in Sensornets (DIFS [11]) and Multi-dimensional Range Queries in Sensor Networks (DIM [18]) extend the data-centric storage approach to provide spatially distributed hierarchies of indexes to data.\nWhile these approaches advocate in-network indexing for sensor networks, we believe that indexing is a task that is far too complicated to be performed at the remote sensor nodes, since it involves maintaining significant state and large tables.\nTSAR provides a better match between the resource requirements of storage and indexing and the resources available at different tiers.\nThus, complex operations such as indexing and managing metadata are performed at the proxies, while storage at the sensor remains simple.\nIn addition to storage and indexing techniques specific to sensor networks, many distributed, peer-to-peer, and spatio-temporal index structures are relevant to our work.\nDHTs [25] can be used for indexing events based on their type, spatial index structures such as R-trees [12] can be used for optimizing spatial searches, and K-D trees [2] can be used for multi-attribute search.\nWhile this paper focuses on building an ordered index structure for range queries, we will explore the use of other index structures for alternative queries over sensor data.\n8.\nConclusions\nIn this paper, we argued that existing sensor storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks.\nWe presented the design of TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local storage at the sensors and distributed indexing at the proxies.\nAt the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Sparse Interval Skip 
Graph, for efficiently supporting spatio-temporal and range queries.\nAt the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the energy cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarser-resolution index structure.\nWe implemented TSAR in a two-tier sensor testbed comprising Stargate-based proxies and Mote-based sensors.\nOur experimental evaluation of TSAR demonstrated the benefits and feasibility of employing our energy-efficient low-latency distributed storage architecture in multi-tier sensor networks.","lvl-2":"TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs *\nABSTRACT\nArchival storage of sensor data is necessary for applications that query, mine, and analyze such data for interesting features and trends.\nWe argue that existing storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks, where an application can comprise tens of tethered proxies, each managing tens to hundreds of untethered sensors.\nWe present TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local archiving at the sensors and distributed indexing at the proxies.\nAt the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Interval Skip Graph, for efficiently supporting spatio-temporal and value queries.\nAt the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarse-grain index.\nWe implement TSAR in a two-tier sensor testbed comprising Stargate-based proxies and Mote-based sensors.\nOur experiments demonstrate the benefits and feasibility of using our energy-efficient storage architecture in multi-tier sensor networks.\n1.\nIntroduction\n1.1 Motivation\nMany different kinds of networked data-centric sensor applications have emerged in recent years.\nSensors in these applications sense the environment and generate data that must be processed, filtered, interpreted, and archived in order to provide a useful infrastructure to its users.\n(* This work was supported in part by National Science Foundation grants EEC-0313747, CNS-0325868, and EIA-0098060.)\nTo achieve its goals, a typical sensor application needs access to both 
live and past sensor data.\nWhereas access to live data is necessary in monitoring and surveillance applications, access to past data is necessary for applications such as mining of sensor logs to detect unusual patterns, analysis of historical trends, and post-mortem analysis of particular events.\nArchival storage of past sensor data requires a storage system, the key attributes of which are: where the data is stored, whether it is indexed, and how the application can access this data in an energy-efficient manner with low latency.\nThere has been a spectrum of approaches for constructing sensor storage systems.\nIn the simplest, sensors stream data or events to a server for long-term archival storage [3], where the server often indexes the data to permit efficient access at a later time.\nSince sensors may be several hops from the nearest base station, network costs are incurred; however, once data is indexed and archived, subsequent data accesses can be handled locally at the server without incurring network overhead.\nIn this approach, the storage is centralized, reads are efficient and cheap, while writes are expensive.\nFurther, all data is propagated to the server, regardless of whether it is ever used by the application.\nAn alternate approach is to have each sensor store data or events locally (e.g., in flash memory), so that all writes are local and incur no communication overheads.\nA read request, such as whether an event was detected by a particular sensor, requires a message to be sent to the sensor for processing.\nMore complex read requests are handled by flooding.\nFor instance, determining if an intruder was detected over a particular time interval requires the request to be flooded to all sensors in the system.\nThus, in this approach, the storage is distributed, writes are local and inexpensive, while reads incur significant network overheads.\nRequests that require flooding, due to the lack of an index, are expensive and may waste precious 
sensor resources, even if no matching data is stored at those sensors.\nResearch efforts such as Directed Diffusion [17] have attempted to reduce these read costs, however, by intelligent message routing.\nBetween these two extremes lie a number of other sensor storage systems with different trade-offs, summarized in Table 1.\nThe geographic hash table (GHT) approach [24, 26] advocates the use of an in-network index to augment the fully distributed nature of sensor storage.\nIn this approach, each data item has a key associated with it, and a distributed or geographic hash table is used to map keys to nodes that store the corresponding data items.\nThus, writes cause data items to be sent to the hashed nodes and also trigger updates to the in-network hash table.\nA read request requires a lookup in the in-network hash table to locate the node that stores the data\nitem; observe that the presence of an index eliminates the need for flooding in this approach.\nMost of these approaches assume a flat, homogeneous architecture in which every sensor node is energy-constrained.\nIn this paper, we propose a novel storage architecture called TSAR1 that reflects and exploits the multi-tier nature of emerging sensor networks, where the application is comprised of tens of tethered sensor proxies (or more), each controlling tens or hundreds of untethered sensors.\nTSAR is a component of our PRESTO [8] predictive storage architecture, which combines archival storage with caching and prediction.\nWe believe that a fundamentally different storage architecture is necessary to address the multi-tier nature of future sensor networks.\nSpecifically, the storage architecture needs to exploit the resource-rich nature of proxies, while respecting resource constraints at the remote sensors.\nNo existing sensor storage architecture explicitly addresses this dichotomy in the resource capabilities of different tiers.\nAny sensor storage system should also carefully exploit current technology 
trends, which indicate that the capacities of flash memories continue to rise as per Moore's Law, while their costs continue to plummet.\nThus it will soon be feasible to equip each sensor with 1 GB of flash storage for a few tens of dollars.\nAn even more compelling argument is the energy cost of flash storage, which can be as much as two orders of magnitude lower than that for communication.\nNewer NAND flash memories offer very low write and erase energy costs--our comparison of a 1GB Samsung NAND flash storage [16] and the Chipcon CC2420 802.15.4 wireless radio [4] in Section 6.2 indicates a 1:100 ratio in per-byte energy cost between the two devices, even before accounting for network protocol overheads.\nThese trends, together with the energy-constrained nature of untethered sensors, indicate that local storage offers a viable, energy-efficient alternative to communication in sensor networks.\nTSAR exploits these trends by storing data or events locally on the energy-efficient flash storage at each sensor.\nSensors send concise identifying information, which we term metadata, to a nearby proxy; depending on the representation used, this metadata may be an order of magnitude or more smaller than the data itself, imposing much lower communication costs.\nThe resource-rich proxies interact with one another to construct a distributed index of the metadata reported from all sensors, and thus an index of the associated data stored at the sensors.\nThis index provides a unified, logical view of the distributed data, and enables an application to query and read past data efficiently--the index is used to pinpoint all data that match a read request, followed by messages to retrieve that data from the corresponding sensors.\nIn-network index lookups are eliminated, reducing network overheads for read requests.\nThis separation of data, which is stored at the sensors, and the metadata, which is stored at the proxies, enables TSAR to reduce energy overheads at the 
sensors, by leveraging resources at tethered proxies.\n1.2 Contributions\nThis paper presents TSAR, a novel two-tier storage architecture for sensor networks.\nTo the best of our knowledge, this is the first sensor storage system that is explicitly tailored for emerging multitier sensor networks.\nOur design and implementation of TSAR has resulted in four contributions.\nAt the core of the TSAR architecture is a novel distributed index structure based on interval skip graphs that we introduce in this paper.\nThis index structure can store coarse summaries of sensor data and organize them in an ordered manner to be easily search1TSAR: Tiered Storage ARchitecture for sensor networks.\nable.\nThis data structure has O (log n) expected search and update complexity.\nFurther, the index provides a logically unified view of all data in the system.\nSecond, at the sensor level, each sensor maintains a local archive that stores data on flash memory.\nOur storage architecture is fully stateless at each sensor from the perspective of the metadata index; all index structures are maintained at the resource-rich proxies, and only direct requests or simple queries on explicitly identified storage locations are sent to the sensors.\nStorage at the remote sensor is in effect treated as appendage of the proxy, resulting in low implementation complexity, which makes it ideal for small, resourceconstrained sensor platforms.\nFurther, the local store is optimized for time-series access to archived data, as is typical in many applications.\nEach sensor periodically sends a summary of its data to a proxy.\nTSAR employs a novel adaptive summarization technique that adapts the granularity of the data reported in each summary to the ratio of false hits for application queries.\nMore fine grain summaries are sent whenever more false positives are observed, thereby balancing the energy cost of metadata updates and false positives.\nThird, we have implemented a prototype of TSAR on a 
multi-tier testbed comprising Stargate-based proxies and Mote-based sensors.
Our implementation supports spatio-temporal, value, and range-based queries on sensor data.
Fourth, we conduct a detailed experimental evaluation of TSAR using a combination of EmStar/EmTOS [10] and our prototype.
While our EmStar/EmTOS experiments focus on the scalability of TSAR in larger settings, our prototype evaluation involves latency and energy measurements in a real setting.
Our results demonstrate the logarithmic scaling property of the sparse skip graph and the low latency of end-to-end queries in a duty-cycled multi-hop network.
The remainder of this paper is structured as follows.
Section 2 presents key design issues that guide our work.
Sections 3 and 4 present the proxy-level index and the local archive and summarization at a sensor, respectively.
Section 5 discusses our prototype implementation, and Section 6 presents our experimental results.
We present related work in Section 7 and our conclusions in Section 8.
2. Design Considerations
In this section, we first describe the various components of a multi-tier sensor network assumed in our work.
We then present a description of the expected usage models for this system, followed by several principles addressing these factors which guide the design of our storage system.
2.1 System Model
We envision a multi-tier sensor network comprising multiple tiers--a bottom tier of untethered remote sensor nodes, a middle tier of tethered sensor proxies, and an upper tier of applications and user terminals (see Figure 1).
The lowest tier is assumed to form a dense deployment of low-power sensors.
A canonical sensor node at this tier is equipped with low-power sensors, a micro-controller, and a radio as well as a significant amount of flash memory (e.g., 1GB).
The common constraint for this tier is energy, and the need for a long lifetime in spite of a finite energy constraint.
The use of radio, processor, RAM, and the
flash memory all consume energy, which needs to be limited.
In general, we assume radio communication to be substantially more expensive than accesses to flash memory.
The middle tier consists of power-rich sensor proxies that have significant computation, memory and storage resources and can use these resources continuously.
Table 1: Characteristics of sensor storage systems
Figure 1: Architecture of a multi-tier sensor network.
In urban environments, the proxy tier would comprise tethered base-station-class nodes (e.g., Crossbow Stargate), each with multiple radios--an 802.11 radio that connects it to a wireless mesh network and a low-power radio (e.g. 802.15.4) that connects it to the sensor nodes.
In remote sensing applications [10], this tier could comprise a similar Stargate node with a solar power cell.
Each proxy is assumed to manage several tens to hundreds of lower-tier sensors in its vicinity.
A typical sensor network deployment will contain multiple geographically distributed proxies.
For instance, in a building monitoring application, one sensor proxy might be placed per floor or hallway to monitor the temperature, heat and light sensors in its vicinity.
At the highest tier of our infrastructure are applications that query the sensor network through a query interface [20].
In this work, we focus on applications that require access to past sensor data.
To support such queries, the system needs to archive data on a persistent store.
Our goal is to design a storage system that exploits the relative abundance of resources at proxies to mask the scarcity of resources at the sensors.
2.2 Usage Models
The design of a storage system such as TSAR is affected by the queries that are likely to be posed to it.
A large fraction of queries on sensor data can be expected to be spatio-temporal in nature.
Sensors provide information about the physical world; two key attributes of this information are when a particular event or activity occurred
and where it occurred.
Some instances of such queries include the time and location of target or intruder detections (e.g., security and monitoring applications), notifications of specific types of events such as pressure and humidity values exceeding a threshold (e.g., industrial applications), or simple data collection queries which request data from a particular time or location (e.g., weather or environment monitoring).
Expected queries of such data include those requesting ranges of one or more attributes; for instance, a query for all image data from cameras within a specified geographic area for a certain period of time.
In addition, it is often desirable to support efficient access to data in a way that maintains spatial and temporal ordering.
There are several ways of supporting range queries, such as the locality-preserving hashes used in DIMS [18].
However, the most straightforward mechanism, and one which naturally provides efficient ordered access, is via the use of order-preserving data structures.
Order-preserving structures such as the well-known B-Tree maintain relationships between indexed values and thus allow natural access to ranges, as well as predecessor and successor operations on their key values.
Applications may also pose value-based queries that involve determining if a value v was observed at any sensor; the query returns a list of sensors and the times at which they observed this value.
Variants of value queries involve restricting the query to a geographical region, or specifying a range (v1, v2) rather than a single value v.
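To make the preceding point concrete, the following sketch (our own illustration, not part of TSAR) shows how an order-preserving structure supports range and predecessor queries: a sorted list with binary search stands in for a B-Tree, and the `OrderedIndex` name is hypothetical.

```python
import bisect

class OrderedIndex:
    """Sorted-list stand-in for an order-preserving structure such as
    a B-Tree: range, predecessor, and successor queries all reduce to
    binary search on the key order."""

    def __init__(self, entries):
        # entries: (key, payload) pairs, e.g. (observed value, sensor id)
        self.entries = sorted(entries)
        self.keys = [k for k, _ in self.entries]

    def range_query(self, v1, v2):
        """Return all entries with v1 <= key <= v2, in key order."""
        lo = bisect.bisect_left(self.keys, v1)
        hi = bisect.bisect_right(self.keys, v2)
        return self.entries[lo:hi]

    def predecessor(self, v):
        """Return the entry with the largest key strictly less than v."""
        i = bisect.bisect_left(self.keys, v)
        return self.entries[i - 1] if i > 0 else None
```

A hash table, by contrast, supports only exact-match lookups on v; the ordered key layout is what makes the (v1, v2) slice a contiguous scan.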
Value queries can be handled by indexing on the values reported in the summaries.
Specifically, if a sensor reports a numerical value, then the index is constructed on these values.
A search involves finding matching values that are either contained in the search range (v1, v2) or match the search value v exactly.
Hybrid value and spatio-temporal queries are also possible.
Such queries specify a time interval, a value range and a spatial region and request all records that match these attributes--"find all instances where the temperature exceeded 100°F at location R during the month of August".
These queries require an index on both time and value.
In TSAR our focus is on range queries on value or time, with planned extensions to include spatial scoping.
2.3 Design Principles
Our design of a sensor storage system for multi-tier networks is based on the following set of principles, which address the issues arising from the system and usage models above.
• Principle 1: Store locally, access globally: Current technology allows local storage to be significantly more energy-efficient than network communication, while technology trends show no signs of erasing this gap in the near future.
For maximum network life a sensor storage system should leverage the flash memory on sensors to archive data locally, substituting cheap memory operations for expensive radio transmission.
But without efficient mechanisms for retrieval, the energy gains of local storage may be outweighed by communication costs incurred by the application in searching for data.
We believe that if the data storage system provides the abstraction of a single logical store to applications, as does TSAR, then it will have additional flexibility to optimize communication and storage costs.
• Principle 2: Distinguish data from metadata: Data must be identified so that it may be retrieved by the application without exhaustive search.
To do this, we associate metadata with each data
record--data fields of known syntax which serve as identifiers and may be queried by the storage system.
Examples of this metadata are data attributes such as location and time, or selected or summarized data values.
We leverage the presence of resource-rich proxies to index metadata for resource-constrained sensors.
The proxies share this metadata index to provide a unified logical view of all data in the system, thereby enabling efficient, low-latency lookups.
Such a tier-specific separation of data storage from metadata indexing enables the system to exploit the idiosyncrasies of multi-tier networks, while improving performance and functionality.
• Principle 3: Provide data-centric query support: In a sensor application the specific location (i.e. offset) of a record in a stream is unlikely to be of significance, except if it conveys information concerning the location and/or time at which the information was generated.
We thus expect that applications will be best served by a query interface which allows them to locate data by value or attribute (e.g.
location and time), rather than a read interface for unstructured data.
This in turn implies the need to maintain metadata in the form of an index that provides low cost lookups.
2.4 System Design
TSAR embodies these design principles by employing local storage at sensors and a distributed index at the proxies.
The key features of the system design are as follows: In TSAR, writes occur at sensor nodes, and are assumed to consist of both opaque data as well as application-specific metadata.
This metadata is a tuple of known types, which may be used by the application to locate and identify data records, and which may be searched on and compared by TSAR in the course of locating data for the application.
In a camera-based sensing application, for instance, this metadata might include coordinates describing the field of view, average luminance, and motion values, in addition to basic information such as time and sensor location.
Depending on the application, this metadata may be two or three orders of magnitude smaller than the data itself, for instance if the metadata consists of features extracted from image or acoustic data.
In addition to storing data locally, each sensor periodically sends a summary of reported metadata to a nearby proxy.
The summary contains information such as the sensor ID, the interval (t1, t2) over which the summary was generated, a handle identifying the corresponding data record (e.g.
its location in flash memory), and a coarse-grain representation of the metadata associated with the record.
The precise data representation used in the summary is application-specific; for instance, a temperature sensor might choose to report the maximum and minimum temperature values observed in an interval as a coarse-grain representation of the actual time series.
The proxy uses the summary to construct an index; the index is global in that it stores information from all sensors in the system and it is distributed across the various proxies in the system.
Thus, applications see a unified view of distributed data, and can query the index at any proxy to get access to data stored at any sensor.
Specifically, each query triggers lookups in this distributed index and the list of matches is then used to retrieve the corresponding data from the sensors.
There are several distributed index and lookup methods which might be used in this system; however, the index structure described in Section 3 is highly suited for the task.
Since the index is constructed using a coarse-grain summary, instead of the actual data, index lookups will yield approximate matches.
The TSAR summarization mechanism guarantees that index lookups will never yield false negatives - i.e.
it will never miss summaries which include the value being searched for.
However, index lookups may yield false positives, where a summary matches the query but when queried the remote sensor finds no matching value, wasting network resources.
The coarser the summary, the lower the update overhead but the greater the fraction of false positives; finer summaries incur higher update overhead but reduce the query overhead due to false positives.
Remote sensors may easily distinguish false positives from queries which result in search hits, and calculate the ratio between the two; based on this ratio, TSAR employs a novel adaptive technique that dynamically varies the granularity of sensor summaries to balance the metadata overhead and the overhead of false positives.
3. Data Structures
At the proxy tier, TSAR employs a novel index structure called the Interval Skip Graph, which is an ordered, distributed data structure for finding all intervals that contain a particular point or range of values.
Interval skip graphs combine Interval Trees [5], an interval-based binary search tree, with Skip Graphs [1], an ordered, distributed data structure for peer-to-peer systems [13].
The resulting data structure has two properties that make it ideal for sensor networks.
First, it has O(log n) search complexity for accessing the first interval that matches a particular value or range, and constant complexity for accessing each successive interval.
Second, indexing of intervals rather than individual values makes the data structure ideal for indexing summaries over time or value.
Such summary-based indexing is a more natural fit for energy-constrained sensor nodes, since transmitting summaries incurs less energy overhead than transmitting all sensor data.
Definitions: We assume that there are N_p proxies and N_s sensors in a two-tier sensor network.
Each proxy is responsible for multiple sensor nodes, and no assumption is made about the number of sensors per
proxy.
Each sensor transmits interval summaries of data or events regularly to one or more proxies that it is associated with, where interval i is represented as [low_i, high_i].
These intervals can correspond to time or value ranges that are used for indexing sensor data.
No assumption is made about the size of an interval or about the amount of overlap between intervals.
Range queries on the intervals are posed by users to the network of proxies and sensors; each query q needs to determine all index values that overlap the interval [low_q, high_q].
The goal of the interval skip graph is to index all intervals such that the set that overlaps a query interval can be located efficiently.
In the rest of this section, we describe the interval skip graph in greater detail.
3.1 Skip Graph Overview
In order to inform the description of the Interval Skip Graph, we first provide a brief overview of the Skip Graph data structure; for a more extensive description the reader is referred to [1].
Figure 2 shows a skip graph which indexes 8 keys; the keys may be seen along the bottom, and above each key are the pointers associated with that key.
Each data element, consisting of a key and its associated pointers, may reside on a different node in the network, and pointers therefore identify both a remote node as well as a data element on that node.
Figure 2: Skip Graph of 8 Elements
Figure 3: Interval Skip Graph
Figure 4: Distributed Interval Skip Graph
In this figure we may see the following properties of a skip graph:
• Ordered index: The keys are members of an ordered data type, for instance integers.
Lookups make use of ordered comparisons between the search key and existing index entries.
In addition, the pointers at the lowest level point directly to the successor of each item in the index.
• In-place indexing: Data elements remain on the nodes where they were inserted, and messages are sent between
nodes to establish links between those elements and others in the index.
• Log n height: There are log_2 n pointers associated with each element, where n is the number of data elements indexed.
Each pointer belongs to a level l in [0 ... log_2 n − 1], and together with some other pointers at that level forms a chain of n/2^l elements.
• Probabilistic balance: Rather than relying on re-balancing operations which may be triggered at insert or delete, skip graphs implement a simple random balancing mechanism which maintains close to perfect balance on average, with an extremely low probability of significant imbalance.
• Redundancy and resiliency: Each data element forms an independent search tree root, so searches may begin at any node in the network, eliminating hot spots at a single search root.
In addition the index is resilient against node failure; data on the failed node will not be accessible, but remaining data elements will be accessible through search trees rooted on other nodes.
In Figure 2 we see the process of searching for a particular value in a skip graph.
The pointers reachable from a single data element form a binary tree: a pointer traversal at the highest level skips over n/2 elements, n/4 at the next level, and so on.
Search consists of descending the tree from the highest level to level 0, at each level comparing the target key with the next element at that level and deciding whether or not to traverse.
In the perfectly balanced case shown here there are log_2 n levels of pointers, and search will traverse 0 or 1 pointers at each level.
We assume that each data element resides on a different node, and measure search cost by the number of messages sent (i.e.
the number of pointers traversed); this will clearly be O(log n).
Tree update proceeds from the bottom, as in a B-Tree, with the root(s) being promoted in level as the tree grows.
In this way, for instance, the two chains at level 1 always contain n/2 entries each, and there is never a need to split chains as the structure grows.
The update process then consists of choosing which of the 2^l chains at each level l to insert an element into, and inserting it in the proper place in each chain.
Maintaining a perfectly balanced skip graph as shown in Figure 2 would be quite complex; instead, the probabilistic balancing method introduced in Skip Lists [23] is used, which trades off a small amount of overhead in the expected case in return for simple update and deletion.
The basis for this method is the observation that any element which belongs to a particular chain at level l can only belong to one of two chains at level l + 1.
To insert an element we ascend levels starting at 0, randomly choosing one of the two possible chains at each level, and stopping when we reach an empty chain.
One means of implementation (e.g.
as described in [1]) is to assign each element an arbitrarily long random bit string.
Each chain at level l is then constructed from those elements whose bit strings match in the first l bits, thus creating 2^l possible chains at each level and ensuring that each chain splits into exactly two chains at the next level.
Although the resulting structure is not perfectly balanced, following the analysis in [23] we can show that the probability of it being significantly out of balance is extremely small; in addition, since the structure is determined by the random number stream, input data patterns cannot cause the tree to become imbalanced.
3.2 Interval Skip Graph
A skip graph is designed to store single-valued entries.
In this section, we introduce a novel data structure that extends skip graphs to store intervals [low_i, high_i] and allows efficient searches for all intervals covering a value v, i.e. {i : low_i ≤ v ≤ high_i}.
Our data structure can be extended to range searches in a straightforward manner.
The interval skip graph is constructed by applying the method of augmented search trees, as described by Cormen, Leiserson, and Rivest [5] and applied to binary search trees to create an Interval Tree.
The method is based on the observation that a search structure based on comparison of ordered keys, such as a binary tree, may also be used to search on a secondary key which is non-decreasing in the first key.
Given a set of intervals sorted by lower bound--low_i ≤ low_{i+1}--we define the secondary key as the cumulative maximum, max_i = max_{k=0...i}(high_k).
The set of intervals intersecting a value v may then be found by searching for the first interval (and thus the interval with least low_i) such that max_i ≥ v.
We then traverse intervals in increasing order of lower bound, until we find the first interval with low_i > v, selecting those intervals which intersect v.
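The cumulative-maximum search just described can be sketched in a few lines of Python (a centralized, flat-array illustration that ignores the distributed aspects; `build_index` and `stabbing_query` are hypothetical names of our own):

```python
import bisect

def build_index(intervals):
    """Sort intervals by lower bound and attach the secondary key:
    the cumulative maximum of the upper bounds (max_i)."""
    out, cum_max = [], float("-inf")
    for low, high in sorted(intervals):
        cum_max = max(cum_max, high)
        out.append((low, high, cum_max))
    return out

def stabbing_query(index, v):
    """All intervals containing v: locate the first entry with
    max_i >= v, then scan forward in order of lower bound while
    low_i <= v, keeping the intervals that actually cover v."""
    sec_keys = [m for (_, _, m) in index]
    # binary search is valid because max_i is non-decreasing
    i = bisect.bisect_left(sec_keys, v)
    matches = []
    while i < len(index) and index[i][0] <= v:
        low, high, _ = index[i]
        if high >= v:
            matches.append((low, high))
        i += 1
    return matches
```

In the interval skip graph the binary search on max_i becomes a pointer descent and the forward scan becomes successor traversal at level 0, giving O(log n) to the first match and constant cost per additional match.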
Using this approach we augment the skip graph data structure, as shown in Figure 3, so that each entry stores a range (lower bound and upper bound) and a secondary key (cumulative maximum of upper bound).
To efficiently calculate the secondary key max_i for an entry i, we take the greatest of high_i and the maximum values reported by each of i's left-hand neighbors.
To search for those intervals containing the value v, we first search for v on the secondary index, max_i, and locate the first entry with max_i ≥ v (by the definition of max_i, for this data element max_i = high_i).
If low_i > v, then this interval does not contain v, and no other intervals will, either, so we are done.
Otherwise we traverse the index in increasing order of low_i, returning matching intervals, until we reach an entry with low_i > v and we are done.
Searches for all intervals which overlap a query range, or which completely contain a query range, are straightforward extensions of this mechanism.
Lookup Complexity: Lookup for the first interval that matches a given value is performed in a manner very similar to an interval tree.
The complexity of search is O(log n).
The number of intervals that match a range query can vary depending on the amount of overlap in the intervals being indexed, as well as the range specified in the query.
Insert Complexity: In an interval tree or interval skip list, the maximum value for an entry need only be calculated over the subtree rooted at that entry, as this value will be examined only when searching within the subtree rooted at that entry.
For a simple interval skip graph, however, this maximum value for an entry must be computed over all entries preceding it in the index, as searches may begin anywhere in the data structure, rather than at a distinguished root element.
It may be easily seen that in the worst case the insertion of a single interval (one that covers all existing intervals in the index) will trigger the update of all entries in
the index, for a worst-case insertion cost of O(n).
3.3 Sparse Interval Skip Graph
The final extensions we propose take advantage of the difference between the number of items indexed in a skip graph and the number of systems on which these items are distributed.
The cost in network messages of an operation may be reduced by arranging the data structure so that most structure traversals occur locally on a single node, and thus incur zero network cost.
In addition, since both congestion and failure occur on a per-node basis, we may eliminate links without adverse consequences if those links only contribute to load distribution and/or resiliency within a single node.
These two modifications allow us to achieve reductions in asymptotic complexity of both update and search.
As may be seen in Section 3.2, insert and delete cost on an interval skip graph has a worst-case complexity of O(n), compared to O(log n) for an interval tree.
The main reason for the difference is that skip graphs have a full search structure rooted at each element, in order to distribute load and provide resilience to system failures in a distributed setting.
However, in order to provide load distribution and failure resilience it is only necessary to provide a full search structure for each system.
If, as in TSAR, the number of nodes (proxies) is much smaller than the number of data elements (data summaries indexed), then this will result in significant savings.
Implementation: To construct a sparse interval skip graph, we ensure that there is a single distinguished element on each system, the root element for that system; all searches will start at one of these root elements.
When adding a new element, rather than splitting lists at increasing levels l until the element is in a list with no others, we stop when we find that the element would be in a list containing no root elements, thus ensuring that the element is reachable from all root elements.
An example of applying this
optimization may be seen in Figure 5.
(In practice, rather than designating existing data elements as roots, as shown, it may be preferable to insert null values at startup.)
When using the technique of membership vectors as in [1], this may be done by broadcasting the membership vectors of each root element to all other systems, and stopping insertion of an element at level l when it does not share an l-bit prefix with any of the N_p root elements.
The expected number of roots sharing a log_2 N_p-bit prefix is 1, giving an expected height for each element of log_2 N_p + O(1).
An alternate implementation, which distributes information concerning root elements at pointer establishment time, is omitted due to space constraints; this method eliminates the need for additional messages.
Performance: In a (non-interval) sparse skip graph, since the expected height of an inserted element is now log_2 N_p + O(1), expected insertion complexity is O(log N_p), rather than O(log n), where N_p is the number of root elements and thus the number of separate systems in the network.
(In the degenerate case of a single system we have a skip list; with splitting probability 0.5 the expected height of an individual element is 1.)
Note that since searches are started at root elements of expected height log_2 n, search complexity is not improved.
For an interval sparse skip graph, update performance is improved considerably compared to the O(n) worst case for the non-sparse case.
In an augmented search structure such as this, an element only stores information for nodes which may be reached from that element--e.g. the subtree rooted at that element, in the case of a tree.
Thus, when updating the maximum value in an interval tree, the update is only propagated towards the root.
In a sparse interval skip graph, updates to a node only propagate towards the N_p root elements, for a worst-case cost of N_p log_2 n.
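Under the membership-vector scheme of [1], the stopping rule for sparse insertion can be sketched as follows (our own illustration; `membership_vector` and `stop_level` are hypothetical names, bit strings are modeled as tuples, and the exact level/prefix bookkeeping is our reading of the rule above):

```python
import random

def membership_vector(bits=32):
    """Each element is assigned an arbitrarily long random bit string."""
    return tuple(random.randint(0, 1) for _ in range(bits))

def stop_level(elem, roots):
    """Sparse skip graph stopping rule: insertion stops at the first
    level l at which the element's l-bit prefix matches no root
    element's prefix, i.e. the level-l chain would contain no root.
    Every lower-level chain the element joins still holds a root, so
    the element stays reachable from the root elements."""
    l = 0
    while l < len(elem) and any(elem[:l] == r[:l] for r in roots):
        l += 1
    return l
```

With N_p roots, a random element's prefix is expected to diverge from the last matching root after about log_2 N_p bits, which is the source of the log_2 N_p + O(1) expected height cited above.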
Shortcut search: When beginning a search for a value v, rather than beginning at the root on that proxy, we can find the element that is closest to v (e.g. using a secondary local index), and then begin the search at that element.
The expected distance between this element and the search terminus is log_2 N_p, and the search will now take on average log_2 N_p + O(1) steps.
To illustrate this optimization, in Figure 4, depending on the choice of search root, a search for [21, 30] beginning at node 2 may take 3 network hops, traversing to node 1, then back to node 2, and finally to node 3 where the destination is located, for a cost of 3 messages.
The shortcut search, however, locates the intermediate data element on node 2, and then proceeds directly to node 3 for a cost of 1 message.
Performance: This technique may be applied to the primary key search which is the first of two insertion steps in an interval skip graph.
By combining the shortcut optimization with sparse interval skip graphs, the expected cost of insertion is now O(log N_p), independent of the size of the index or the degree of overlap of the inserted intervals.
3.4 Alternative Data Structures
Thus far we have only compared the sparse interval skip graph with similar structures from which it is derived.
A comparison with several other data structures which meet at least some of the requirements for the TSAR index is shown in Table 2.
Table 2: Comparison of Distributed Index Structures
Figure 5: Sparse Interval Skip Graph
The hash-based systems, DHT [25] and GHT [26], lack the ability to perform range queries and are thus not well-suited to indexing spatio-temporal data.
Indexing locally using an appropriate single-node structure and then flooding queries to all proxies is a competitive alternative for small networks; for large networks the linear dependence on the number of proxies becomes an issue.
Two distributed B-Trees were examined - P-Trees [6] and RP* [19].
Each of these supports
range queries, and in theory could be modified to support indexing of intervals; however, they both require complex re-balancing, and do not provide the resilience characteristics of the other structures.
DIMS [18] provides the ability to perform spatio-temporal range queries, and has the necessary resilience to failures; however, it cannot be used to index intervals, which are used by TSAR's data summarization algorithm.
4. Data Storage and Summarization
Having described the proxy-level index structure, we turn to the mechanisms at the sensor tier.
TSAR implements two key mechanisms at the sensor tier.
The first is a local archival store at each sensor node that is optimized for resource-constrained devices.
The second is an adaptive summarization technique that enables each sensor to adapt to changing data and query characteristics.
The rest of this section describes these mechanisms in detail.
4.1 Local Storage at Sensors
Interval skip graphs provide an efficient mechanism to look up sensor nodes containing data relevant to a query.
These queries are then routed to the sensors, which locate the relevant data records in the local archive and respond back to the proxy.
To enable such lookups, each sensor node in TSAR maintains an archival store of sensor data.
While the implementation of such an archival store is straightforward on resource-rich devices that can run a database, sensors are often power and resource-constrained.
Consequently, the sensor archiving subsystem in TSAR is explicitly designed to exploit characteristics of sensor data in a resource-constrained setting.
Figure 6: Single storage record
Sensor data has very distinct characteristics that inform our design of the TSAR archival store.
Sensors produce time-series data streams, and therefore, temporal ordering of data is a natural and simple way of storing archived sensor data.
In addition to simplicity, a temporally ordered store is often suitable for many sensor data processing
tasks since they involve time-series data processing.
Examples include signal processing operations such as FFT, wavelet transforms, clustering, similarity matching, and target detection.
Consequently, the local archival store is a collection of records, designed as an append-only circular buffer, where new records are appended to the tail of the buffer.
The format of each data record is shown in Figure 6.
Each record has a metadata field which includes a timestamp, sensor settings, calibration parameters, etc.
Raw sensor data is stored in the data field of the record.
The data field is opaque and application-specific--the storage system does not know or care about interpreting this field.
A camera-based sensor, for instance, may store binary images in this data field.
In order to support a variety of applications, TSAR supports variable-length data fields; as a result, record sizes can vary from one record to another.
Our archival store supports three operations on records: create, read, and delete.
Due to the append-only nature of the store, creation of records is simple and efficient.
The create operation simply creates a new record and appends it to the tail of the store.
Since records are always written at the tail, the store need not maintain a "free space" list.
All fields of the record need to be specified at creation time; thus, the size of the record is known a priori and the store simply allocates the corresponding number of bytes at the tail to store the record.
Since writes are immutable, the size of a record does not change once it is created.
Figure 7: Sensor Summarization
The read operation enables stored records to be retrieved in order to answer queries.
In a traditional database system, efficient lookups are enabled by maintaining a structure such as a B-tree that indexes certain keys of the records.
However, this can be quite complex for a small sensor node with limited resources.
Consequently, TSAR sensors do not
maintain any index for the data stored in their archive.\nInstead, they rely on the proxies to maintain this metadata index--sensors periodically send the proxy information summarizing the data contained in a contiguous sequence of records, as well as a handle indicating the location of these records in flash memory.\nThe mechanism works as follows: in addition to the summary of sensor data, each node sends metadata to the proxy containing the time interval corresponding to the summary, as well as the start and end offsets of the flash memory location where the corresponding raw data is stored (as shown in Figure 7).\nThus, random access is enabled at the granularity of a summary--the start offset of each chunk of records represented by a summary is known to the proxy.\nWithin this collection, records are accessed sequentially.\nWhen a query matches a summary in the index, the sensor uses these offsets to access the relevant records on its local flash by sequentially reading data from the start address until the end address.\nAny query-specific operation can then be performed on this data.\nThus, no index needs to be maintained at the sensor, in line with our goal of simplifying sensor state management.\nThe state of the archive is captured in the metadata associated with the summaries, and is stored and maintained at the proxy.\nWhile we anticipate local storage capacity to be large, eventually there might be a need to overwrite older data, especially in high data rate applications.\nThis may be done via techniques such as multi-resolution storage of data [9], or simply by overwriting older data.\nWhen older data is overwritten, a delete operation is performed, where an index entry is deleted from the interval skip graph at the proxy and the corresponding storage space in flash memory at the sensor is freed.\n4.2 Adaptive Summarization\nThe data summaries serve as glue between the storage at the remote sensor and the index at the proxy.\nEach update from a sensor
to the proxy includes three pieces of information: the summary, a time period corresponding to the summary, and the start and end offsets for the flash archive.\nIn general, the proxy can index the time interval representing a summary or the value range reported in the summary (or both).\nThe former index enables quick lookups on all records seen during a certain interval, while the latter index enables quick lookups on all records matching a certain value.\nAs described in Section 2.4, there is a trade-off between the energy used in sending summaries (and thus the frequency and resolution of those summaries) and the cost of false hits during queries.\nThe coarser and less frequent the summary information, the less energy is required, while false query hits in turn waste energy on requests for non-existent data.\nTSAR employs an adaptive summarization technique that balances the cost of sending updates against the cost of false positives.\nThe key intuition is that each sensor can independently identify the fraction of false hits and true hits for queries that access its local archive.\nIf most queries result in true hits, then the sensor determines that the summary can be coarsened further to reduce update costs without adversely impacting the hit ratio.\nIf many queries result in false hits, then the sensor makes the granularity of each summary finer to reduce the number and overhead of false hits.\nThe resolution of the summary depends on two parameters--the interval over which summaries of the data are constructed and transmitted to the proxy, as well as the size of the application-specific summary.\nOur focus in this paper is on the interval over which the summary is constructed.\nChanging the size of the data summary can be performed in an application-specific manner (e.g.
using wavelet compression techniques as in [9]) and is beyond the scope of this paper.\nCurrently, TSAR employs a simple summarization scheme that computes the ratio of false and true hits and decreases (increases) the interval between summaries whenever this ratio increases (decreases) beyond a threshold.\n5.\nTSAR Implementation\nWe have implemented a prototype of TSAR on a multi-tier sensor network testbed.\nOur prototype employs Crossbow Stargate nodes to implement the proxy tier.\nEach Stargate node employs a 400MHz Intel XScale processor with 64MB RAM and runs the Linux 2.4.19 kernel and EmStar release 2.1.\nThe proxy nodes are equipped with two wireless radios, a Cisco Aironet 340-based 802.11b radio and a hostmote bridge to the Mica2 sensor nodes using the EmStar transceiver.\nThe 802.11b wireless network is used for inter-proxy communication within the proxy tier, while the wireless bridge enables sensor-proxy communication.\nThe sensor tier consists of Crossbow Mica2s and Mica2dots, each consisting of a 915MHz CC1000 radio, a BMAC protocol stack, a 4 Mb on-board flash memory and an ATMega 128L processor.\nThe sensor nodes run TinyOS 1.1.8.\nIn addition to the on-board flash, the sensor nodes can be equipped with external MMC/SD flash cards using a custom connector.\nThe proxy nodes can be equipped with external storage such as high-capacity compact flash (up to 4GB), 6GB micro-drives, or up to 60GB 1.8 inch mobile disk drives.\nSince sensor nodes may be several hops away from the nearest proxy, the sensor tier employs multi-hop routing to communicate with the proxy tier.\nIn addition, to reduce the power consumption of the radio while still making the sensor node available for queries, low power listening is enabled, in which the radio receiver is periodically powered up for a short interval to sense the channel for transmissions, and the packet preamble is extended to account for the latency until the next interval when the receiving radio wakes
up.\nOur prototype employs the MultiHopLEPSM routing protocol with the BMAC layer configured in the low-power mode with an 11% duty cycle (one of the default BMAC [22] parameters).\nOur TSAR implementation on the Mote involves a data gathering task that periodically obtains sensor readings and logs these readings to flash memory.\nThe flash memory is assumed to be a circular append-only store and the format of the logged data is depicted in Figure 6.\nThe Mote sends a report to the proxy every N readings, summarizing the observed data.\nThe report contains: (i) the address of the Mote, (ii) a handle that contains an offset and the length of the region in flash memory containing the data referred to by the summary, (iii) an interval (t1, t2) over which this report is generated, (iv) a tuple (low, high) representing the minimum and the maximum values observed at the sensor in the interval, and (v) a sequence number.\nThe sensor updates are used to construct a sparse interval skip graph that is distributed across proxies, via network messages between proxies over the 802.11b wireless network.\nOur current implementation supports queries that request records matching a time interval (t1, t2) or a value range (v1, v2).\nSpatial constraints are specified using sensor IDs.\nGiven a list of matching intervals from the skip graph, TSAR supports two types of messages to query the sensor: lookup and fetch.\nA lookup message triggers a search within the corresponding region in flash memory and returns the number of matching records in that memory region (but does not retrieve data).\nIn contrast, a fetch message not only triggers a search but also returns all matching data records to the proxy.\nFigure 8: Skip Graph Insert Performance\nLookup messages are useful for polling a sensor, for instance, to determine if a query matches too many records.\n6.\nExperimental Evaluation\nIn this section, we evaluate the efficacy of TSAR using our prototype and simulations.\nThe testbed for our
experiments consists of four Stargate proxies and twelve Mica2 and Mica2dot sensors; three sensors are assigned to each proxy.\nGiven the limited size of our testbed, we employ simulations to evaluate the behavior of TSAR in larger settings.\nOur simulation employs the EmTOS emulator [10], which enables us to run the same code in simulation and on the hardware platform.\nRather than using live data from a real sensor, to ensure repeatable experiments, we seed each sensor node with a dataset (i.e., a trace) that dictates the values reported by that node to the proxy.\nOne section of the flash memory on each sensor node is programmed with data points from the trace; these \"observations\" are then replayed during an experiment, logged to the local archive (located in flash memory as well), and reported to the proxy.\nThe first dataset used to evaluate TSAR is a temperature dataset from James Reserve [27] that includes data from eleven temperature sensor nodes over a period of 34 days.\nThe second dataset is synthetically generated; the trace for each sensor is generated using a uniformly distributed random walk through the value space.\nOur experimental evaluation has four parts.\nFirst, we run EmTOS simulations to evaluate the lookup, update and delete overhead for sparse interval skip graphs using the real and synthetic datasets.\nSecond, we provide summary results from micro-benchmarks of the storage component of TSAR, which include empirical characterization of the energy costs and latency of reads and writes for the flash memory chip as well as the whole mote platform, and comparisons to published numbers for other storage and communication technologies.\nThese micro-benchmarks form the basis for our full-scale evaluation of TSAR on a testbed of four Stargate proxies and twelve Motes.\nWe measure the end-to-end query latency in our multi-hop testbed as well as the query processing overhead at the mote tier.\nFinally, we demonstrate the adaptive summarization
capability at each sensor node.\nThe remainder of this section presents our experimental results.\n6.1 Sparse Interval Skip Graph Performance\nThis section evaluates the performance of sparse interval skip graphs by quantifying insert, lookup and delete overheads.\nWe assume a proxy tier with 32 proxies and construct sparse interval skip graphs of various sizes using our datasets.\nFor each skip graph, we evaluate the cost of inserting a new value into the index.\nFigure 10: Skip Graph Overheads\nEach entry was deleted after its insertion, enabling us to quantify the delete overhead as well.\nFigure 8 (a) and (b) quantify the insert overhead for our two datasets: each insert entails an initial traversal that incurs log n messages, followed by neighbor pointer updates at increasing levels, incurring a cost of 4 log n messages.\nOur results demonstrate this behavior, and show as well that the performance of delete--which also involves an initial traversal followed by pointer updates at each level--incurs a similar cost.\nNext, we evaluate the lookup performance of the index structure.\nAgain, we construct skip graphs of various sizes using our datasets and evaluate the cost of a lookup on the index structure.\nFigures 9 (a) and (b) depict our results.\nThere are two components for each lookup--the lookup of the first interval that matches the query and, in the case of overlapping intervals, the subsequent linear traversal to identify all matching intervals.\nThe initial lookup can be seen to take log n messages, as expected.\nThe costs of the subsequent linear traversal, however, are highly data dependent.\nFor instance, temperature values for the James Reserve data exhibit significant spatial correlations, resulting in significant overlap between different intervals and variable, high traversal cost (see Figure 9 (a)).\nThe synthetic data, however, has less overlap and incurs lower traversal overhead, as shown in Figure 9 (b).\nSince the previous experiments assumed 32
proxies, we evaluate the impact of the number of proxies on skip graph performance.\nWe vary the number of proxies from 10 to 48 and distribute a skip graph with 4096 entries among these proxies.\nWe construct regular interval skip graphs as well as sparse interval skip graphs using these entries and measure the overhead of inserts and lookups.\nThus, the experiment also seeks to demonstrate the benefits of sparse skip graphs over regular skip graphs.\nFigure 10 (a) depicts our results.\nIn regular skip graphs, the complexity of insert is O(log2 n) in the expected case (and O(n) in the worst case), where n is the number of elements.\nFigure 9: Skip Graph Lookup Performance\nThis complexity is unaffected by changing the number of proxies, as indicated by the flat line in the figure.\nSparse skip graphs require fewer pointer updates; however, their overhead depends on the number of proxies, and is O(log2 Np) in the expected case, independent of n.\nThis results in a significant reduction in overhead when the number of proxies is small, a benefit that diminishes as the number of proxies increases.\nFailure handling is an important issue in a multi-tier sensor architecture since it relies on many components--proxies, sensor nodes and routing nodes can fail, and wireless links can fade.\nHandling of many of these failure modes is outside the scope of this paper; however, we consider the resilience of skip graphs to proxy failures.\nIn this case, skip graph search (and subsequent repair operations) can follow any one of the other links from a root element.\nSince a sparse skip graph has search trees rooted at each node, searching can resume once the lookup request has routed around the failure.\nTogether, these two properties ensure that even if a proxy fails, the remaining entries in the skip graph will be reachable with high probability--only the entries on the failed proxy and the corresponding data at the sensors become inaccessible.\nTo ensure
that all data on sensors remains accessible, even in the event of failure of a proxy holding index entries for that data, we incorporate redundant index entries.\nTSAR employs a simple redundancy scheme where additional coarse-grain summaries are used to protect regular summaries.\nEach sensor sends summary data periodically to its local proxy, but less frequently sends a lower-resolution summary to a backup proxy--the backup summary represents all of the data represented by the finer-grained summaries, but in a lossier fashion, thus resulting in higher read overhead (due to false hits) if the backup summary is used.\nThe cost of implementing this in our system is low--Figure 10 (b) shows the overhead of such a redundancy scheme, where a single coarse summary is sent to a backup for every two summaries sent to the primary proxy.\nSince a redundant summary is sent for every two summaries, the insert cost is 1.5 times the cost in the normal case.\nHowever, these redundant entries result in only a negligible increase in lookup overhead, due to the logarithmic dependence of lookup cost on the index size, while providing full resilience to any single proxy failure.\n6.2 Storage Microbenchmarks\nSince sensors are resource-constrained, the energy consumption and the latency at this tier are important measures for evaluating the performance of a storage architecture.\nBefore performing an end-to-end evaluation of our system, we provide more detailed information on the energy consumption of the storage component used to implement the TSAR local archive, based on empirical measurements.\nIn addition, we compare these figures to those for other local storage technologies, as well as to the energy consumption of wireless communication, using information from the literature.\nFor empirical measurements, we measure energy usage for the storage component itself (i.e.
current drawn by the flash chip), as well as for the entire Mica2 mote.\nThe power measurements in Table 3 were performed for the AT45DB041 [15] flash memory on a Mica2 mote, which is an older NOR flash device.\nThe most promising technology for low-energy storage on sensing devices is NAND flash, such as the Samsung K9K4G08U0M device [16]; published power numbers for this device are provided in the table.\nPublished energy requirements for wireless transmission using the Chipcon [4] CC2420 radio (used in MicaZ and Telos motes) are provided for comparison, assuming zero network and protocol overhead.\nTable 3: Storage and Communication Energy Costs (* measured)\nFigure 11: Query Processing Latency\nComparing the total energy cost for writing flash (erase + write) to the total cost for communication (transmit + receive), we find that the NAND flash is almost 150 times more efficient than radio communication, even assuming perfect network protocols.\n6.3 Prototype Evaluation\nThis section reports results from an end-to-end evaluation of the TSAR prototype involving both tiers.\nIn our setup, there are four proxies connected via 802.11 links and three sensors per proxy.\nThe multi-hop topology was preconfigured such that sensor nodes were connected in a line to each proxy, forming a minimal tree of depth 3.\nFigure 12: Query Latency Components\nDue to resource constraints we were unable to perform experiments with dozens of sensor nodes; however, this topology ensured that the network diameter was as large as that of a typical network of significantly larger size.\nOur evaluation metric is the end-to-end latency of query processing.\nA query posed on TSAR first incurs the latency of a sparse skip graph lookup, followed by routing to the appropriate sensor node(s).\nThe sensor node reads the required page(s) from its local archive, processes the query on the page that is read, and transmits the response to the proxy, which then forwards it to the
user.\nWe first measure query latency for different sensors in our multi-hop topology.\nDepending on which of the sensors is queried, the total latency increases almost linearly from about 400ms to 1 second as the number of hops increases from 1 to 3 (see Figure 11 (a)).\nFigure 11 (b) provides a breakdown of the various components of the end-to-end latency.\nThe dominant component of the total latency is the communication over one or more hops.\nThe typical time to communicate over one hop is approximately 300ms.\nThis large latency is primarily due to the use of a duty-cycled MAC layer; the latency will be larger if the duty cycle is reduced (e.g. the 2% setting as opposed to the 11.5% setting used in this experiment), and will conversely decrease if the duty cycle is increased.\nThe figure also shows the latency for varying index sizes; as expected, the latency of inter-proxy communication and skip graph lookups increases logarithmically with index size.\nNot surprisingly, the overhead seen at the sensor is independent of the index size.\nThe latency also depends on the number of packets transmitted in response to a query--the larger the amount of data retrieved by a query, the greater the latency.\nThis result is shown in Figure 12 (a).\nThe step function is due to packetization in TinyOS; TinyOS sends one packet as long as the payload is smaller than 30 bytes and splits the response into multiple packets for larger payloads.\nAs the data retrieved by a query is increased, the latency increases in steps, where each step denotes the overhead of an additional packet.\nFinally, Figure 12 (b) shows the impact of searching and processing flash memory regions of increasing sizes on a sensor.\nEach summary represents a collection of records in flash memory, and all of these records need to be retrieved and processed if that summary matches a query.\nThe coarser the summary, the larger the memory region that needs to be accessed.\nFor the search sizes examined,
amortization of overhead when searching multiple flash pages and archival records, as well as within the flash chip and its associated driver, results in an apparently sub-linear increase in latency with search size.\nIn addition, the operation can be seen to have very low latency, in part due to the simplicity of our query processing, which requires only a compare operation with each stored element.\nMore complex operations, however, will of course incur greater latency.\n6.4 Adaptive Summarization\nWhen data is summarized by the sensor before being reported to the proxy, information is lost.\nWith the interval summarization method we are using, this information loss will never cause the proxy to believe that a sensor node does not hold a value which it in fact does, as all archived values will be contained within the interval reported.\nHowever, it does cause the proxy to believe that the sensor may hold values which it does not, and to forward query messages to the sensor for these values.\nThese false positives constitute the cost of the summarization mechanism, and need to be balanced against the savings achieved by reducing the number of reports.\nThe goal of adaptive summarization is to dynamically vary the summary size so that these two costs are balanced.\nFigure 13: Impact of Summarization Granularity\nFigure 13 (a) demonstrates the impact of summary granularity on false hits.\nAs the number of records included in a summary is increased, the fraction of queries forwarded to the sensor which match data held on that sensor (\"true positives\") decreases.\nNext, in Figure 13 (b) we run an EmTOS simulation with our adaptive summarization algorithm enabled.\nThe adaptive algorithm increases the summary granularity (defined as the number of records per summary) when the cost of updates, Cost(updates), outweighs the cost of false hits, and decreases it otherwise.\nTo demonstrate the adaptive nature of our technique, we plot a time series of the summarization granularity.\nWe begin with a query rate of 1 query per 5 samples, decrease it to 1 every 30
samples, and then increase it again to 1 query every 10 samples.\nAs shown in Figure 13 (b), the adaptive technique adjusts accordingly by sending more fine-grain summaries at higher query rates (in response to the higher false hit rate), and fewer, coarse-grain summaries at lower query rates.\n7.\nRelated Work\nIn this section, we review prior work on storage and indexing techniques for sensor networks.\nWhile our work addresses both problems jointly, much prior work has considered them in isolation.\nThe problem of archival storage of sensor data has received limited attention in the sensor network literature.\nELF [7] is a log-structured file system for local storage on flash memory that provides load leveling, and Matchbox is a simple file system that is packaged with the TinyOS distribution [14].\nBoth these systems focus on local storage, whereas our focus is both on storage at the remote sensors and on providing a unified view of distributed data across all such local archives.\nMulti-resolution storage [9] is intended for in-network storage and search in systems where there is significant data in comparison to storage resources.\nIn contrast, TSAR addresses the problem of archival storage in two-tier systems where sufficient resources can be placed at the edge sensors.\nThe RISE platform [21], being developed as part of the NODE project at UCR, addresses the issues of hardware platform support for large amounts of storage in remote sensor nodes, but not the indexing and querying of this data.\nIn order to efficiently access a distributed sensor store, an index needs to be constructed of the data.\nEarly work on sensor networks such as Directed Diffusion [17] assumes a system where all useful sensor data is stored locally at each sensor, and spatially scoped queries are routed using geographic co-ordinates to locations where the data is stored.\nSources publish the events that they detect, and sinks with interest in specific events can subscribe to these
events.\nThe Directed Diffusion substrate routes queries to specific locations if the query has geographic information embedded in it (e.g., find the temperature in the south-west quadrant); if not, the query is flooded throughout the network.\nThese schemes have the drawback that for queries that are not geographically scoped, the search cost (O(n) for a network of n nodes) may be prohibitive in large networks with frequent queries.\nLocal storage with in-network indexing approaches address this issue by constructing indexes using frameworks such as Geographic Hash Tables [24] and Quad Trees [9].\nRecent research has seen a growing body of work on data indexing schemes for sensor networks [26] [11] [18].\nOne such scheme is DCS [26], which provides a hash function for mapping from event name to location.\nDCS constructs a distributed structure that groups events together spatially by their named type.\nDistributed Index of Features in Sensornets (DIFS [11]) and Multi-dimensional Range Queries in Sensor Networks (DIM [18]) extend the data-centric storage approach to provide spatially distributed hierarchies of indexes to data.\nWhile these approaches advocate in-network indexing for sensor networks, we believe that indexing is a task that is far too complicated to be performed at the remote sensor nodes, since it involves maintaining significant state and large tables.\nTSAR provides a better match between the resource requirements of storage and indexing and the availability of resources at different tiers.\nThus complex operations such as indexing and managing metadata are performed at the proxies, while storage at the sensor remains simple.\nIn addition to storage and indexing techniques specific to sensor networks, many distributed, peer-to-peer and spatio-temporal index structures are relevant to our work.\nDHTs [25] can be used for indexing events based on their type, quad-tree variants such as R-trees [12] can be used for optimizing spatial searches, and K-D trees
[2] can be used for multi-attribute search.\nWhile this paper focuses on building an ordered index structure for range queries, we will explore the use of other index structures for alternate queries over sensor data.\n8.\nConclusions\nIn this paper, we argued that existing sensor storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks.\nWe presented the design of TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local storage at the sensors and distributed indexing at the proxies.\nAt the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Sparse Interval Skip Graph, for efficiently supporting spatio-temporal and range queries.\nAt the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the energy cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarser resolution index structure.\nWe implemented TSAR in a two-tier sensor testbed comprising Stargate-based proxies and Mote-based sensors.\nOur experimental evaluation of TSAR demonstrated the benefits and feasibility of employing our energy-efficient low-latency distributed storage architecture in multi-tier sensor networks.\nFairness in Dead-Reckoning based Distributed Multi-Player Games\nABSTRACT\nIn a distributed multi-player game that uses dead-reckoning vectors to exchange movement information among players, there is inaccuracy
in rendering the objects at the receiver due to network delay between the sender and the receiver. The object is placed at the receiver at the position indicated by the dead-reckoning vector, but by that time, the real position could have changed considerably at the sender. This inaccuracy would be tolerable if it were consistent among all players; that is, if at the same physical time, all players saw inaccurate (with respect to the real position of the object) but identical positions and trajectories for an object. But due to varying network delays between the sender and different receivers, the inaccuracy differs across players as well. This leads to unfairness in game playing. In this paper, we first introduce an error measure for estimating this inaccuracy. Then we develop an algorithm for scheduling the sending of dead-reckoning vectors at a sender that strives to make this error equal at different receivers over time. This algorithm makes the game very fair at the expense of increasing the overall mean error of all players. To mitigate this effect, we propose a budget based algorithm that provides improved fairness without increasing the mean error, thereby maintaining the accuracy of game playing. We have implemented both the scheduling algorithm and the budget based algorithm as part of BZFlag, a popular distributed multi-player game. We show through experiments that these algorithms provide fairness among players in spite of widely varying network delays.
An additional property of the proposed algorithms is that they require fewer DRs to be exchanged (compared to the current implementation of BZFlag) to achieve the same level of accuracy in game playing.\nSudhir Aggarwal, Hemant Banavar (Department of Computer Science, Florida State University, Tallahassee, FL; Email: {sudhir, banavar}@cs.fsu.edu), Sarit Mukherjee, Sampath Rangarajan (Center for Networking Research, Bell Laboratories, Holmdel, NJ; Email: {sarit, sampath}@bell-labs.com)\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems--Distributed applications\nGeneral Terms Algorithms, Design, Experimentation, Performance\n1.\nINTRODUCTION\nIn a distributed multi-player game, players are normally distributed across the Internet and have varying delays to one another or to a central game server.\nUsually, in such games, the players are part of the game, and in addition they may control entities that make up the game.\nDuring the course of the game, the players and the entities move within the game space.\nA player sends information about her movement, as well as the movement of the entities she controls, to the other players using a Dead-Reckoning (DR) vector.\nA DR vector contains information about the current position of the player/entity in terms of x, y and z coordinates (at the time the DR vector was sent) as well as the trajectory of the entity in terms of the velocity component in each of the dimensions.\nEach of the participating players receives such DR vectors from one another and renders the other players/entities on the local console until a new DR vector is received for that player/entity.\nIn a peer-to-peer game, players send DR vectors directly to each other; in a client-server game, these DR vectors may be forwarded through a game server.\nDR is used because it is almost impossible for players/entities to exchange their current positions at every time unit.\nDR vectors are a quantization of the real trajectory (which we refer to as the real path) at a
player. Normally, a new DR vector is computed and sent whenever the real path deviates from the path extrapolated using the previous DR vector (say, in terms of distance in the x, y, z space) by more than some threshold. We refer to the trajectory that can be computed using the sequence of DR vectors as the exported path. Therefore, at the sending player, there is a deviation between the real path and the exported path. The error due to this deviation could be removed only if each movement of a player/entity were communicated to the other players at every time unit; that is, if a DR vector were generated at every time unit, thereby making the real and exported paths the same. Given that this is not feasible due to bandwidth limitations, this error is not of practical interest. Therefore, the receiving players can, at best, follow the exported path. Because of the network delay between the sending and receiving players, when a DR vector is received and rendered at a player, the original trajectory of the player/entity may have already changed. Thus, in physical time, there is a deviation at the receiving player between the exported path and the rendered trajectory (which we refer to as the placed path). We refer to this error as the export error. Note that the export error, in turn, results in a deviation between the real and the placed paths. The export error manifests itself as the deviation between the exported path at the sender and the placed path at the receiver (i) before the DR vector is received at the receiver (referred to as the before export error), and (ii) after the DR vector is received at the receiver (referred to as the after export error). In an earlier paper [1], we showed that by synchronizing the clocks at all the players and by using a technique based on time-stamping the messages that carry the DR vectors, we can guarantee that the after export error is made zero. That is, the placed and the exported paths match after the
DR vector is received. We also showed that the before export error can never be eliminated, since there is always a non-zero network delay, but that it can be significantly reduced using our technique [1]. Henceforth we assume that the players use such a technique, which results in an unavoidable but small overall export error. In this paper we consider the problem of different and varying network delays between each sender-receiver pair of a DR vector and, consequently, the different and varying export errors at the receivers. Due to the difference in the export errors among the receivers, the same entity is rendered at different physical times at different receivers. This introduces unfairness in game playing. For instance, a player with a large delay would always see an entity late in physical time compared to the other players, and therefore her action on the entity would be delayed (in physical time) even if she reacted instantaneously after the entity was rendered. Our goal in this paper is to improve the fairness of these games in spite of the varying network delays by equalizing the export error at the players. We explore whether the time-average of the export errors (the cumulative export error over a period of time, averaged over the time period) at all the players can be made the same by scheduling the sending of the DR vectors appropriately at the sender. We propose two algorithms to achieve this. Both algorithms are based on delaying (or dropping) the sending of DR vectors to some players on a continuous basis to try to make the export error the same at all the players. At an abstract level, the algorithms delay sending DR vectors to players whose accumulated error so far in the game is smaller than that of others; the export error due to this DR vector at these players will then be larger than that at the other players, thereby making the errors the same. The goal is to make this error at least approximately equal at every DR vector, with
the deviation in the error becoming smaller as time progresses. The first algorithm (which we refer to as the scheduling algorithm) is based on estimating the delay between players and refining the sending of DR vectors by scheduling them to be sent to different players at different times at every DR generation point. Through an implementation of this algorithm in the open source game BZFlag, we show that it makes the game very fair (we measure fairness in terms of the standard deviation of the error). The drawback of this algorithm is that it tends to push the error of all the players towards that of the player with the worst error (the error at the player farthest, in terms of delay, from the sender of the DR). To alleviate this effect, we propose a budget-based algorithm which budgets how the DRs are sent to different players. At a high level, the algorithm is based on the idea of sending more DRs to players who are farther away from the sender than to those who are closer. Experimental results from BZFlag illustrate that the budget-based algorithm follows a more balanced approach: it improves the fairness of the game, but does so without pushing up the mean error of the players, thereby maintaining the accuracy of the game. In addition, the budget-based algorithm is shown to achieve the same level of accuracy of game playing as the current implementation of BZFlag using far fewer DR vectors.

2. PREVIOUS WORK

Earlier work on network games dealing with network latency has mostly focused on compensation techniques for packet delay and loss [2, 3, 4]. These methods are aimed at making large delays and message loss tolerable for players, but they do not consider the problems that may be introduced by varying delays from the server to different players or from the players to one another. For example, the concept of local lag has been used in [3], where each player delays every local operation for a certain
amount of time so that remote players can receive information about the local operation and execute the same operation at about the same time, thus reducing state inconsistencies. The online multi-player game MiMaze [2, 5, 6], for example, takes a static bucket synchronization approach to compensate for variable network delays. In MiMaze, each player delays all events by 100 ms, regardless of whether they are generated locally or remotely. Players with a network delay larger than 100 ms simply cannot participate in the game. In general, techniques based on bucket synchronization depend on imposing a worst-case delay on all the players. A few papers have studied the problem of fairness in a distributed game through more sophisticated message delivery mechanisms. But these works [7, 8] assume the existence of a global view of the game, where a game server maintains a view (or state) of the game. Players can introduce objects into the game or delete objects that are already part of the game (for example, in a first-person shooter game, by shooting down the object). These additions and deletions are communicated to the game server using action messages. Based on these action messages, the state of the game is changed at the game server, and these changes are communicated to the players using update messages. Fairness is achieved by ordering the delivery of action and update messages at the game server and the players, respectively, based on the notion of a fair order, which takes into account the delays between the game server and the different players. Objects that are part of the game may move, but how this information is communicated to the players seems to be beyond the scope of these works. In this sense, these works are very limited in scope and may be applicable only to first-person shooter games, and then only to games where players are not themselves part of the game. DR vectors can be exchanged directly among the players (peer-to-peer model) or using
a central server as a relay (client-server model). It has been shown in [9] that multi-player games that use DR vectors together with bucket synchronization are not cheat-proof unless additional mechanisms are put in place. Both the scheduling algorithm and the budget-based algorithm described in our paper use DR vectors and hence are not cheat-proof. For example, a receiver could skew the delay estimate at the sender to make the sender believe that the delay between the sender and the receiver is high, thereby gaining undue advantage. We emphasize that the focus of this paper is on fairness; we do not address the issue of cheating. In the next section, we describe the game model that we use and illustrate how senders and receivers exchange DR vectors and how entities are rendered at the receivers based on the time-stamp augmented DR vector exchange described in [1]. In Section 4, we describe the DR vector scheduling algorithm that aims to make the export error equal across players with varying delays from the sender of a DR vector, followed by experimental results obtained by instrumenting the scheduling algorithm in the open source game BZFlag. Section 5 describes the budget-based algorithm, which achieves improved fairness without reducing the level of accuracy of game playing. Conclusions are presented in Section 6.

3. GAME MODEL

The game architecture is based on players distributed across the Internet and exchanging DR vectors with each other. The DR vectors could either be sent directly from one player to another (peer-to-peer model) or be sent through a game server which receives the DR vector from a player and forwards it to the other players (client-server model). As mentioned before, we assume synchronized clocks among the participating players. Each DR vector sent from one player to another specifies the trajectory of exactly one player/entity. We assume a linear DR vector, in that the information contained in the DR vector is
only enough for the receiving player to compute the trajectory and render the entity along a straight-line path. Such a DR vector contains information about the starting position and velocity of the player/entity, where the velocity is constant. (Other types of DR vectors include quadratic DR vectors, which specify the acceleration of the entity, and cubic spline DR vectors, which consider the starting position and velocity and the ending position and velocity of the entity.) Thus, each DR vector sent by a player specifies the current time at the player when the DR vector is computed (not the time at which it is sent to the other players, as we will explain later), the current position of the player/entity in terms of the x, y, z coordinates, and the velocity vector in the x, y and z directions. Specifically, the ith DR vector sent by player j about the kth entity is denoted by DR^j_ik and is represented by the tuple (T^j_ik, x^j_ik, y^j_ik, z^j_ik, vx^j_ik, vy^j_ik, vz^j_ik). Without loss of generality, in the rest of the discussion we consider a sequence of DR vectors sent by only one player and for only one entity. For simplicity, we consider a two-dimensional game space rather than a three-dimensional one. Hence we use DR_i to denote the ith such DR vector, represented as the tuple (T_i, x_i, y_i, vx_i, vy_i). The receiving player computes the starting position for the entity based on x_i, y_i and the time difference between when the DR vector is received and the time T_i at which it was computed. Note that the computation of this time difference is feasible since all the clocks are synchronized. The receiving player then uses the velocity components to project and render the trajectory of the entity. This trajectory is followed until a new DR vector is received, which changes the position and/or velocity of the entity.

Figure 1: Trajectories and deviations. (DR_0 = (T_0, x_0, y_0, vx_0, vy_0), computed at time T_0 and sent to the receiver; DR_1 = (T_1, x_1, y_1, vx_1, vy_1), computed at time T_1 and sent to the receiver.)

Based on this model, Figure 1 illustrates the sending and receiving of DR vectors and the different errors that are encountered. The figure shows the reception of DR vectors at a player (henceforth called the receiver). The horizontal axis shows time, which is synchronized among all the players. The vertical axis conceptually captures the two-dimensional position of an entity. Assume that at time T_0 a DR vector DR_0 is computed by the sender and immediately sent to the receiver. Assume that DR_0 is received at the receiver after a delay of dt_0 time units. The receiver computes the initial position of the entity as (x_0 + vx_0 × dt_0, y_0 + vy_0 × dt_0) (shown as point E). The thick line EBD represents the projected and rendered trajectory at the receiver based on the velocity components vx_0 and vy_0 (the placed path). At time T_1 a DR vector DR_1 is computed for the same entity and immediately sent to the receiver. Assume that DR_1 is received at the receiver after a delay of dt_1 time units. When this DR vector is received, assume that the entity is at point D. A new position for the entity is computed as (x_1 + vx_1 × dt_1, y_1 + vy_1 × dt_1) and the entity is moved to this position (point C). The velocity components vx_1 and vy_1 are used to project and render this entity further. Let us now consider the error due to network delay. Although DR_1 was computed at time T_1 and sent to the receiver, it did not reach the receiver until time T_1 + dt_1. This means that although the exported path based on DR_1 at the sender at time T_1 is the trajectory AC, until time T_1 + dt_1 this entity was being rendered at the receiver along trajectory BD based on DR_0. Only at time T_1 + dt_1 did the entity get moved to point C, from which point onwards the exported and the placed paths are the same. The deviation between the exported and placed paths
creates an error component which we refer to as the export error. One way to represent the export error is to compute the integral of the distance between the two trajectories over the time during which they are out of sync. We represent the integral of the distance between the placed and exported paths due to some DR vector DR_i over a time interval [t1, t2] as Err(DR_i, t1, t2). In the figure, the export error due to DR_1 is computed as the integral of the distance between the trajectories AC and BD over the time interval [T_1, T_1 + dt_1]. There could be other ways of representing this error as well, but in this paper we use the integral of the distance between the two trajectories as the measure of the export error. Note that an export error would also have been created due to the reception of DR_0, at which time the placed path would have been based on a previous DR vector. This is not shown in the figure, but it serves to remind the reader that the export error is cumulative when a sequence of DR vectors is received. Starting from time T_1 onwards, there is also a deviation between the real and the exported paths. As we discussed earlier, this error is unavoidable. The figure and example above illustrate one receiver only. In reality, DR vectors DR_0 and DR_1 are sent by the sender to all the participating players. Each of these players receives DR_0 and DR_1 after varying delays, thereby creating different export error values at different players. The goal of the DR vector scheduling algorithm to be described in the next section is to make this (cumulative) export error equal at every player, independently for each of the entities that make up the game.

4. SCHEDULING ALGORITHM FOR SENDING DR VECTORS

In Section 3 we showed how delay from the sender of a new DR vector to the receiver of that DR vector can lead to export error, because of the deviation of the placed path from the exported path at the receiver until the new DR vector is received. (Normally, DR vectors are computed not on a periodic basis but on demand, where the decision to compute a new DR vector is based on the deviation between the real path and the path exported by the previous DR vector exceeding some threshold.) We also mentioned that the goal of the DR vector scheduling algorithm is to make the export error equal at all receivers over a period of time. Since the game is played in a distributed environment, it makes sense for the sender of an entity to keep track of all the errors at the receivers and try to make them equal. However, the sender cannot know the actual error at a receiver until it gets some information about the error back from the receiver. Our algorithm uses estimated errors to compute a schedule for sending DR vectors to the receivers, and corrects the errors when it gets feedback from the receivers. In this section we motivate the algorithm and describe the steps it goes through. Throughout this section, we use the following example to illustrate the algorithm.

Figure 2: DR vector flow between a sender and two receivers, and the evolution of estimated and actual placed paths at the receivers. DR_0 = (T_0, T_0, x_0, y_0, vx_0, vy_0), sent at time T_0 to both receivers. DR_1 = (T_1, T^1_1, x_1, y_1, vx_1, vy_1), sent at time T^1_1 = T_1 + δ_1 to receiver 1, and DR_1 = (T_1, T^2_1, x_1, y_1, vx_1, vy_1), sent at time T^2_1 = T_1 + δ_2 to receiver 2.

Consider the example in Figure 2. The figure shows a single sender sending DR vectors for an entity to two different receivers 1 and 2. DR_0, computed at T_0, is sent and received
by the receivers sometime between T_0 and T_1, at which time they move the location of the entity to match the exported path. Thus, the path of the entity is shown only from the point where the placed path matches the exported path for DR_0. Now consider DR_1. At time T_1, DR_1 is computed by the sender, but assume that it is not immediately sent to the receivers: it is sent after time δ_1 to receiver 1 (at time T^1_1 = T_1 + δ_1) and after time δ_2 to receiver 2 (at time T^2_1 = T_1 + δ_2). Note that the sender includes the sending timestamp with the DR vector, as shown in the figure. Assume that the sender estimates (it will become clear shortly why the sender has to estimate the delay) that receiver 1 will receive DR_1 after a delay of dt_1, will use the coordinate and velocity parameters to compute the entity's current location and move it there (point C), and that from this time onwards the exported and the placed paths will become the same. However, in reality, receiver 1 receives DR_1 after a delay of da_1 (which is less than the sender's estimate dt_1), and moves the corresponding entity to point H.
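The receiver-side handling just described, extrapolating the entity to the current synchronized clock reading and recording the actual delay for feedback to the sender, can be sketched as follows. This is an illustrative helper, not taken from the BZFlag implementation; a two-dimensional linear DR vector carrying both the computation timestamp and the sending timestamp (as in Figure 2) is assumed:

```python
def on_dr_received(dr, now):
    """Place an entity from a received DR vector.

    dr:  (T_comp, T_sent, x, y, vx, vy) -- both timestamps are on the
         sender's clock, which is assumed synchronized with ours.
    now: current local (synchronized) clock reading.

    Returns the position at which to render the entity, and the actual
    network delay da that the receiver feeds back to the sender.
    """
    T_comp, T_sent, x, y, vx, vy = dr
    da = now - T_sent          # actual delay experienced by this DR vector
    dt = now - T_comp          # extrapolation interval since computation
    pos = (x + vx * dt, y + vy * dt)
    return pos, da
```

From this point on, the entity is rendered along (vx, vy) starting from pos until the next DR vector arrives.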
Similarly, the sender estimates that receiver 2 will receive DR_1 after a delay of dt_2, will compute the current location of the entity and move it to that point (point E), while in reality it receives DR_1 after a delay of da_2 > dt_2 and moves the entity to point N. The other points shown on the placed and exported paths will be used later in the discussion to describe the different error components.

4.1 Computation of Relative Export Error

Referring back to the discussion in Section 3, from the sender's perspective, the export error at receiver 1 due to DR_1 is given by Err(DR_1, T_1, T_1 + δ_1 + dt_1) (the integral of the distance between the trajectories AC and DB of Figure 2 over the time interval [T_1, T_1 + δ_1 + dt_1]). This is because the sender uses the estimated delay dt_1 to compute this error. Similarly, the export error from the sender's perspective at receiver 2 due to DR_1 is given by Err(DR_1, T_1, T_1 + δ_2 + dt_2) (the integral of the distance between the trajectories AE and DF over the time interval [T_1, T_1 + δ_2 + dt_2]). Note that the above errors from the sender's perspective are only estimates. In reality, the export error will be either smaller or larger than the estimated value, depending on whether the delay estimate was larger or smaller than the actual delay that DR_1 experienced. This difference between the estimated and the actual export error is the relative export error (which can be either positive or negative); it occurs for every DR vector that is sent and is accumulated at the sender. The concept of relative export error is illustrated in Figure 2. Since the actual delay to receiver 1 is da_1, the export error induced by DR_1 at receiver 1 is Err(DR_1, T_1, T_1 + δ_1 + da_1). This means that there is an error in the estimated export error, and the sender can compute this error only after it gets feedback from the receiver about the actual delay experienced by DR_1, i.e., the value of da_1. We propose that
once receiver 1 receives DR_1, it sends the value of da_1 back to the sender. The receiver can compute this value because it knows the time at which DR_1 was sent (T^1_1 = T_1 + δ_1, which is appended to the DR vector as shown in Figure 2) and the local receiving time (which is synchronized with the sender's clock). Therefore, the sender computes the relative export error for receiver 1, denoted R_1, as

R_1 = Err(DR_1, T_1, T_1 + δ_1 + dt_1) − Err(DR_1, T_1, T_1 + δ_1 + da_1) = Err(DR_1, T_1 + δ_1 + dt_1, T_1 + δ_1 + da_1).

Similarly, the relative export error for receiver 2 is computed as

R_2 = Err(DR_1, T_1, T_1 + δ_2 + dt_2) − Err(DR_1, T_1, T_1 + δ_2 + da_2) = Err(DR_1, T_1 + δ_2 + dt_2, T_1 + δ_2 + da_2).

Note that R_1 > 0 since da_1 < dt_1, and R_2 < 0 since da_2 > dt_2. Relative export errors are computed by the sender as and when it receives the feedback from the receivers. This example shows the relative export error values after DR_1 is sent and the corresponding feedback is received.

4.2 Equalization of Error Among Receivers

We now explain what we mean by making the errors equal at all the receivers and how this can be achieved. As stated before, the sender keeps estimates of the delays to the receivers, dt_1 and dt_2 in the example of Figure 2. Thus, at time T_1 when DR_1 is computed, the sender already knows how long it may take the messages carrying this DR vector to reach the receivers. The sender uses this information to compute the export errors, which are Err(DR_1, T_1, T_1 + δ_1 + dt_1) and Err(DR_1, T_1, T_1 + δ_2 + dt_2) for receivers 1 and 2, respectively. Note that the areas of these error components are functions of δ_1 and δ_2 as well as of the network delays dt_1 and dt_2. If we are to make the export errors due to DR_1 the same at both receivers, the sender needs to choose δ_1 and δ_2 such that Err(DR_1, T_1, T_1 + δ_1 + dt_1) = Err(DR_1, T_1, T_1 + δ_2 + dt_2). But when DR_1 is computed at T_1, there could
already have been accumulated relative export errors due to previous DR vectors (DR_0 and the ones before it). Let us represent the accumulated relative error up to DR_i for receiver j as R^i_j. To accommodate these accumulated relative errors, the sender should now choose δ_1 and δ_2 such that

R^0_1 + Err(DR_1, T_1, T_1 + δ_1 + dt_1) = R^0_2 + Err(DR_1, T_1, T_1 + δ_2 + dt_2).

The δ_i determine the scheduling instants of the DR vector at the sender for the receivers. This method of computing the δ's ensures that the accumulated export error (i.e., the total actual error) for each receiver equalizes at the transmission of each DR vector. To establish this, assume that the feedback for DR vector D_i from a receiver reaches the sender before the schedule for D_{i+1} is computed. Let S^i_m and A^i_m denote, respectively, the estimated error for receiver m used for computing the schedule for D_i, and the accumulated error for receiver m computed after receiving the feedback for D_i. Then R^i_m = A^i_m − S^i_m. To compute the scheduling instants (i.e., the δ's) for D_i, for any pair of receivers m and n, we set R^{i−1}_m + S^i_m = R^{i−1}_n + S^i_n. The following theorem establishes that the accumulated export error is equalized at every scheduling instant.

THEOREM 4.1. When the scheduling instants for sending D_i are computed, for any pair of receivers m and n the following condition is satisfied: Σ_{k=1}^{i−1} A^k_m + S^i_m = Σ_{k=1}^{i−1} A^k_n + S^i_n.
Proof: By induction. The base case for i = 1 holds since initially R^0_m = R^0_n = 0, and S^1_m = S^1_n is used to compute the scheduling instants. Assume that the premise holds for some i; we show that it holds for i + 1. To compute the schedule for D_{i+1}, we first compute the relative errors as R^i_m = A^i_m − S^i_m and R^i_n = A^i_n − S^i_n. Then, to compute the δ's, we set

R^i_m + S^{i+1}_m = R^i_n + S^{i+1}_n, i.e., A^i_m − S^i_m + S^{i+1}_m = A^i_n − S^i_n + S^{i+1}_n.

Adding the condition of the premise to both sides, we get

Σ_{k=1}^{i} A^k_m + S^{i+1}_m = Σ_{k=1}^{i} A^k_n + S^{i+1}_n.

4.3 Computation of the Export Error

Let us now consider how the export errors can be computed. From the previous section, to find δ_1 and δ_2 we need to find Err(DR_1, T_1, T_1 + δ_1 + dt_1) and Err(DR_1, T_1, T_1 + δ_2 + dt_2). Note that the values of R^0_1 and R^0_2 are already known at the sender. Consider the computation of Err(DR_1, T_1, T_1 + δ_1 + dt_1). This is the integral of the distance between the trajectory AC due to DR_1 and the trajectory BD due to DR_0. From DR_0 and DR_1, point A is (X_1, Y_1) = (x_1, y_1) and point B is (X_0, Y_0) = (x_0 + (T_1 − T_0) × vx_0, y_0 + (T_1 − T_0) × vy_0). The trajectory AC can be represented as a function of time as (X_1(t), Y_1(t)) = (X_1 + vx_1 × t, Y_1 + vy_1 × t), and the trajectory BD as (X_0(t), Y_0(t)) = (X_0 + vx_0 × t, Y_0 + vy_0 × t). The distance between the two trajectories as a function of time then becomes

dist(t) = √((X_1(t) − X_0(t))² + (Y_1(t) − Y_0(t))²)
= √(((X_1 − X_0) + (vx_1 − vx_0)t)² + ((Y_1 − Y_0) + (vy_1 − vy_0)t)²)
= √(((vx_1 − vx_0)² + (vy_1 − vy_0)²)t² + 2((X_1 − X_0)(vx_1 − vx_0) + (Y_1 − Y_0)(vy_1 − vy_0))t + (X_1 − X_0)² + (Y_1 − Y_0)²).

Let

a = (vx_1 − vx_0)² + (vy_1 − vy_0)²,
b = 2((X_1 − X_0)(vx_1 − vx_0) + (Y_1 − Y_0)(vy_1 − vy_0)),
c = (X_1 − X_0)² + (Y_1 − Y_0)².

Then dist(t) can be written as dist(t) = √(a t² + b t + c), and Err(DR_1, t1, t2) for some time interval [t1, t2] becomes ∫_{t1}^{t2} dist(t) dt = ∫_{t1}^{t2} √(a t² + b t + c) dt. A closed-form solution for the indefinite integral is

∫ √(a t² + b t + c) dt = (2at + b)√(a t² + b t + c)/(4a) + ((4ac − b²)/(8a^{3/2})) ln((at + b/2)/√a + √(a t² + b t + c)) + C.

Err(DR_1, T_1, T_1 + δ_1 + dt_1) and Err(DR_1, T_1, T_1 + δ_2 + dt_2) can then be calculated by applying the appropriate limits to this solution. In the next section, we consider the computation of the δ's for N receivers.

4.4 Computation of Scheduling Instants

We again look at the computation of the δ's by referring to Figure 2. The sender chooses δ_1 and δ_2 such that R^0_1 + Err(DR_1, T_1, T_1 + δ_1 + dt_1) = R^0_2 + Err(DR_1, T_1, T_1 + δ_2 + dt_2). If R^0_1 and R^0_2 are both zero, then δ_1 and δ_2 should be chosen such that Err(DR_1, T_1, T_1 + δ_1 + dt_1) = Err(DR_1, T_1, T_1 + δ_2 + dt_2). This equality will hold if δ_1 + dt_1 = δ_2 + dt_2. Thus, if there is no accumulated relative export error, all the sender needs to do is choose the δ's in such a way that they counteract the difference in the delays to the two receivers, so that the receivers receive the DR vector at the same time. As discussed earlier, because the sender is not able to learn the delay a priori, there will always be an accumulated relative export error from a previous DR vector that has to be taken into account. To delve deeper into this, consider the computation of the export error as illustrated in the previous section. To compute the δ's we require that R^0_1 + Err(DR_1, T_1, T_1 + δ_1 + dt_1) = R^0_2 + Err(DR_1, T_1, T_1 + δ_2 + dt_2); that is,

R^0_1 + ∫_{T_1}^{T_1+δ_1+dt_1} dist(t) dt = R^0_2 + ∫_{T_1}^{T_1+δ_2+dt_2} dist(t) dt,

which can be rewritten as

R^0_1 + ∫_{T_1}^{T_1+dt_1} dist(t) dt + ∫_{T_1+dt_1}^{T_1+dt_1+δ_1} dist(t) dt = R^0_2 + ∫_{T_1}^{T_1+dt_2} dist(t) dt + ∫_{T_1+dt_2}^{T_1+dt_2+δ_2} dist(t) dt.

The components R^0_1, R^0_2,
are already known to (or estimated by) the sender. Further, the error components ∫_{T_1}^{T_1+dt_1} dist(t) dt and ∫_{T_1}^{T_1+dt_2} dist(t) dt can be computed a priori by the sender using the estimated values of dt_1 and dt_2. Let us use E_1 to denote R^0_1 + ∫_{T_1}^{T_1+dt_1} dist(t) dt and E_2 to denote R^0_2 + ∫_{T_1}^{T_1+dt_2} dist(t) dt. Then we require that

E_1 + ∫_{T_1+dt_1}^{T_1+dt_1+δ_1} dist(t) dt = E_2 + ∫_{T_1+dt_2}^{T_1+dt_2+δ_2} dist(t) dt.

Assume that E_1 > E_2. Then, for the above equation to hold, we require that ∫_{T_1+dt_1}^{T_1+dt_1+δ_1} dist(t) dt < ∫_{T_1+dt_2}^{T_1+dt_2+δ_2} dist(t) dt. To make the game as fast as possible within this framework, the δ values should be made as small as possible, so that DR vectors are sent to the receivers as soon as possible subject to the fairness requirement. Given this, we would choose δ_1 to be zero and compute δ_2 from the equation E_1 = E_2 + ∫_{T_1+dt_2}^{T_1+dt_2+δ_2} dist(t) dt. In general, if there are N receivers 1, ..., N, when a sender generates a DR vector and decides to schedule it to be sent, it first computes the E_i values for all of the receivers from the accumulated relative export errors and the delay estimates. It then finds the largest of these values. Let E_k be the largest value. The sender sets δ_k to zero and computes the rest of the δ's from the equality

E_i + ∫_{T_1+dt_i}^{T_1+dt_i+δ_i} dist(t) dt = E_k, ∀i, 1 ≤ i ≤ N, i ≠ k. (1)

The δ's thus obtained give the scheduling instants of the DR vector for the receivers.

4.5 Steps of the Scheduling Algorithm

For the purpose of the discussion below, as before, let us denote the accumulated relative export error at a sender for receiver k up until DR_i as R^i_k, and the scheduled delay at the sender before DR_i is sent to receiver k as δ^i_k.
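Equation (1) above can also be solved numerically. The following is an illustrative Python sketch, not the paper's BZFlag implementation: make_dist builds dist(t) = √(at² + bt + c) from two DR vectors as in Section 4.3 (with t measured from the instant the new DR vector is computed), err integrates it with the trapezoidal rule (the closed form of Section 4.3 could be used instead), and schedule finds the δ's by bisection, giving δ = 0 to the receiver with the largest E value:

```python
import math

def make_dist(dr0, dr1):
    # dr = (T, x, y, vx, vy).  Returns dist(t), the distance between the
    # trajectory exported by dr1 and the one still placed from dr0,
    # with t measured from the instant T1 at which dr1 was computed.
    T0, x0, y0, vx0, vy0 = dr0
    T1, x1, y1, vx1, vy1 = dr1
    X1, Y1 = x1, y1                     # point A
    X0 = x0 + (T1 - T0) * vx0           # point B
    Y0 = y0 + (T1 - T0) * vy0
    a = (vx1 - vx0) ** 2 + (vy1 - vy0) ** 2
    b = 2 * ((X1 - X0) * (vx1 - vx0) + (Y1 - Y0) * (vy1 - vy0))
    c = (X1 - X0) ** 2 + (Y1 - Y0) ** 2
    return lambda t: math.sqrt(max(a * t * t + b * t + c, 0.0))

def err(dist, t1, t2, steps=2000):
    # Err over [t1, t2] by the trapezoidal rule.
    if t2 <= t1:
        return 0.0
    h = (t2 - t1) / steps
    s = 0.5 * (dist(t1) + dist(t2)) + sum(dist(t1 + i * h) for i in range(1, steps))
    return s * h

def schedule(dist, R, dt, horizon=10.0):
    # R[i]: accumulated relative export error for receiver i,
    # dt[i]: estimated delay to receiver i.
    # E[i] = R[i] + Err over [0, dt[i]]; the receiver with the largest E
    # gets delta = 0 and the others catch up, per Equation (1).
    E = [R[i] + err(dist, 0.0, dt[i]) for i in range(len(dt))]
    k = max(range(len(E)), key=lambda i: E[i])
    deltas = []
    for i in range(len(E)):
        if i == k:
            deltas.append(0.0)
            continue
        lo, hi = 0.0, horizon   # bisection on the monotone integral
        for _ in range(60):
            mid = (lo + hi) / 2
            if E[i] + err(dist, dt[i], dt[i] + mid) < E[k]:
                lo = mid
            else:
                hi = mid
        deltas.append((lo + hi) / 2)
    return deltas
```

For example, with DR_0 = (0, 0, 0, 1, 0) and DR_1 = (1, 1, 0, 1, 1) (so that dist(t) = t), zero accumulated errors and delay estimates of 0.1 s and 0.3 s, the farther receiver gets δ = 0 and the nearer one gets δ ≈ 0.2 s, so both accumulate the same export error.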
Given the above discussion, the algorithm steps are as follows:

1. The sender computes DR_i at (say) time T_i and then computes δ^i_k and R^{i−1}_k, ∀k, 1 ≤ k ≤ N, based on the delay estimates dt_k, ∀k, 1 ≤ k ≤ N, as per Equation (1). It schedules DR_i to be sent to receiver k at time T_i + δ^i_k.

2. The DR vectors are sent to the receivers at the scheduled times and are received after delays of da_k, ∀k, 1 ≤ k ≤ N, where da_k may be smaller or larger than dt_k. The receivers send the values of da_k back to the sender (each receiver can compute this value from the time stamps on the DR vector, as described earlier).

3. The sender computes R^i_k as described earlier and illustrated in Figure 2. The sender also recomputes the estimate of the delay dt_k from the new value of da_k for receiver k (using an exponential averaging method similar to round-trip time estimation by TCP [10]).

4. Go back to Step 1 to compute DR_{i+1} when it is required, and follow the steps of the algorithm to schedule and send this DR vector to the receivers.

4.6 Handling Cases in Practice

So far we implicitly assumed that DR_i is sent out to all receivers before a decision is made to compute the next DR vector DR_{i+1}, and that the receivers send the values of da_k corresponding to DR_i back to the sender before it computes DR_{i+1}, so that the sender can compute R^i_k and then use it in the computation of δ^{i+1}_k. Two issues need consideration with respect to the above algorithm when it is used in practice:

• It may happen that a new DR vector is computed even before the previous DR vector has been sent out to all receivers. How will this situation be handled?

• What happens if the feedback does not arrive before DR_{i+1} is computed and scheduled to be sent?

Let us consider the first scenario. We assume that DR_i has been scheduled to be sent and the scheduling instants are such that δ^i_1 < δ^i_2 < ··· < δ^i_N
.\nAssume that DRi+1 is to be computed (because the real path has deviated exceeding a threshold from the path exported by DRi) at time Ti+1 where Ti + \u03b4i k < Ti+1 < Ti + \u03b4i k+1.\nThis means, DRi has been sent only to receivers up to k in the scheduled order.\nIn our algorithm, in this case, the scheduled delay ordering queue is flushed which means DRi is not sent to receivers still queued to receive it, but a new scheduling order is computed for all the receivers to send DRi+1.\nFor those receivers who have been sent DRi, assume for now that daj, 1 \u2264 j \u2264 k has been received from all receivers (the scenario where daj has not been received will be considered as a part of the second scenario later).\nFor these receivers, Ei j, 1 \u2264 j \u2264 k can be computed.\nFor those receivers j, k + 1 \u2264 j \u2264 N to whom DRi was not sent Ei j does not apply.\nConsider a receiver j, k + 1 \u2264 j \u2264 N to whom DRi was not sent.\nRefer to Figure 3.\nFor such a receiver j, when DRi+1 is to be scheduled and 6 timeTi Exported path dtj A B C D Ti-1 Gi j DRi+1 computed by sender and DRi for receiver k+1 to N is removed from queue DRi+1 scheduled for receiver k+1 Ti+1 G H E F DRi scheduled for receiver j DRi computed by sender Placed path at receiver k+1 Gi+1 j Figure 3: Schedule computation when DRi is not sent to receiver j, k + 1 \u2264 j \u2264 N. 
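The delay re-estimation in Step 3 of the algorithm follows TCP-style exponential averaging [10]; a minimal sketch, assuming the classic EWMA form (the gain value `alpha` is our assumption, not given in the paper):

```python
class DelayEstimator:
    """Sketch of the Step 3 delay re-estimation: an exponential weighted moving
    average of the measured delays, analogous to TCP round-trip time smoothing.
    The smoothing gain alpha = 0.125 is an assumption; the paper gives no value."""

    def __init__(self, initial_dt: float, alpha: float = 0.125):
        self.dt = initial_dt   # current estimate dt_k for this receiver
        self.alpha = alpha     # weight given to each new delay sample

    def update(self, da: float) -> float:
        # da is the actual delay fed back by the receiver for the last DR vector
        self.dt = (1.0 - self.alpha) * self.dt + self.alpha * da
        return self.dt
```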
$\delta_j^{i+1}$ needs to be computed, the total export error is the accumulated relative export error at time $T_i$ (when the schedule for $DR_i$ was computed), plus the integral of the distance between the two trajectories $AC$ and $BD$ of Figure 3 over the time interval $[T_i,\ T_{i+1} + \delta_j^{i+1} + dt_j]$. Note that this integral is given by $Err(DR_i, T_i, T_{i+1}) + Err(DR_{i+1}, T_{i+1}, T_{i+1} + \delta_j^{i+1} + dt_j)$. Therefore, instead of $E_j^i$ in Equation (1), we use the value $R_j^{i-1} + Err(DR_i, T_i, T_{i+1}) + Err(DR_{i+1}, T_{i+1}, T_{i+1} + \delta_j^{i+1} + dt_j)$, where $R_j^{i-1}$ is the relative export error used when the schedule for $DR_i$ was computed.

Now consider the second scenario, in which the feedback $da_k$ corresponding to $DR_i$ has not arrived before $DR_{i+1}$ is computed and scheduled. In this case $R_k^i$ cannot be computed, so it is assumed to be zero in the computation of $\delta_k$ for $DR_{i+1}$. We do assume that a reliable mechanism is used to send $da_k$ back to the sender. When this information arrives at a later time, $R_k^i$ is computed and accumulated into future relative export errors (for example, into $R_k^{i+1}$ if $da_k$ is received before $DR_{i+2}$ is computed) and used in the computation of $\delta_k$ when a future DR vector (for example $DR_{i+2}$) is scheduled.

4.7 Experimental Results

In order to evaluate the effectiveness of the scheduling algorithm and quantify the benefits it provides, we implemented the proposed algorithm in the game BZFlag (Battle Zone Flag) [11]. BZFlag is a first-person shooter game in which players, organized in teams, drive tanks within a battle field. The aim of each player is to navigate the field, capture flags belonging to the other team, and bring them back to her own area; players shoot at each other's tanks. The movement of the tanks, as well as that of the shots, is exchanged among the players using DR vectors. We modified the implementation of BZFlag to incorporate synchronized clocks among the players and the server, and to exchange time-stamps
with the DR vector. We set up a testbed with four players running the instrumented version of BZFlag, one acting as a sender and the other three as receivers. The scheduling approach and the base case, in which each DR vector is sent to all the receivers concurrently at every trigger point, were implemented in the same run by tagging each DR vector with the approach used to send it. NISTNet [12] was used to introduce delays between the sender and the three receivers: mean delays of 800 ms, 500 ms and 200 ms between the sender and the first, second and third receiver, respectively. We introduced a variance of 100 ms around the mean delay of each receiver to model variability in delay. The sender logged the error of each receiver every 100 milliseconds for both the scheduling approach and the base case, and also calculated the mean and the standard deviation of the accumulated export error of all the receivers every 100 milliseconds. Figure 4 plots the mean and standard deviation of the accumulated export error of all the receivers in the scheduling case against the base case. Note that the x-axis of these graphs (and of the other graphs that follow) represents the system time when the snapshot of the game was taken. Observe that the standard deviation of the error with scheduling is much lower than in the base case. This implies that the accumulated errors of the receivers in the scheduling case are closer to one another, showing that the scheduling approach achieves fairness among the receivers even when they are at different distances (i.e., latencies) from the sender. Observe also that the mean of the accumulated error increased severalfold with scheduling in comparison to the base case. Further exploration of the reason for this rise led to the conclusion that every time the DR vectors are scheduled so as to equalize the total error, each receiver's total error is pushed higher. Also, as the accumulated
error has an estimated component, the schedule does not equalize the errors exactly, and a DR vector may reach a receiver earlier or later than the schedule intended. In either case the errors are not equalized; if the DR vector arrives late, the error of that receiver is actually pushed beyond the highest accumulated error. This means that at the next trigger this receiver will be the one with the highest error, and every other receiver's error will be pushed up to this value. This flip-flop effect leads to the increase in the accumulated error of all the receivers. Scheduling for fairness thus decreases the standard deviation (i.e., increases the fairness among different players), but at the cost of a higher mean error, which may not be a desirable feature. This led us to explore different ways of equalizing the accumulated errors. The approach discussed in the following section is a heuristic based on the following idea: using the same number of DR vectors over time as in the base case, but instead of sending them to all the receivers at the same frequency, send DR vectors more frequently to the receiver with higher accumulated error and less frequently to the receiver with lower accumulated error, so as to equalize the export error of all receivers over time. At the same time, we wish to decrease the error of the receiver that has the highest accumulated error in the base case (this receiver is, of course, sent more DR vectors than in the base case). We refer to such an algorithm as a budget based algorithm.

5. BUDGET BASED ALGORITHM

In a game, the sender of an entity sends DR vectors to all the receivers every time a threshold is crossed by the entity. The lower the threshold, the more DR vectors are generated during a given time period. Since the DR vectors are sent to all the receivers and the network delay between the
sender-receiver pairs cannot be avoided, the before export error³ of the most distant player will always be higher than the rest. In order to mitigate this imbalance in the error, we propose to send DR vectors selectively to different players based on their accumulated errors. The budget based algorithm is built on this idea, and there are two variations of it: a probabilistic budget based scheme and a deterministic budget based scheme.

³Note that the after export error is eliminated by using synchronized clocks among the players.

Figure 4: Mean and standard deviation of error with scheduling and without (i.e., base case).

5.1 Probabilistic budget based scheme

The probabilistic budget based scheme has three main steps: (a) lower the dead reckoning threshold, while keeping the total number of DRs sent the same as in the base case; (b) at every trigger, probabilistically pick one player to send the DR vector to; and (c) send the DR vector to the chosen player. These steps are described below.

The lowering of the DR threshold is implemented as follows. Lowering the threshold is equivalent to increasing the number of trigger points at which DR vectors are generated. Suppose the threshold is such that the number of triggers it causes in the base case is $t$, and at each trigger $n$ DR vectors are sent by the sender, for a total of $nt$ DR vectors. Our goal is to keep the total number of DR vectors sent by the sender fixed at $nt$, but to lower the number of DR vectors sent at each trigger (i.e., not send the DR vector to all the receivers). Let $n'$ and $t'$ be the number of DR vectors sent at each trigger and the number of
triggers, respectively, in the modified case. We want to ensure $n't' = nt$. Since we want to increase the number of trigger points, i.e., $t' > t$, this means that $n' < n$: not all receivers are sent the DR vector at every trigger. In the probabilistic budget based scheme, at each trigger a probability is calculated for each receiver, and only one receiver is sent the DR vector ($n' = 1$). This probability is based on the relative weights of the receivers' accumulated errors: a receiver with a higher accumulated error has a higher probability of being sent the DR vector. Suppose the accumulated errors of three players are $a_1$, $a_2$ and $a_3$, respectively. Then the probability of player 1 receiving the DR vector is $a_1/(a_1 + a_2 + a_3)$, and similarly for the other players. Once a player is picked, the DR vector is sent to that player.

To compare the probabilistic budget based algorithm with the base case, we needed to lower the threshold of the base case as well (for a fair comparison). As the dead reckoning threshold in the base case was already very fine, we decided that instead of lowering the threshold, the probabilistic budget based approach would be compared against a modified base case that uses the same threshold but sends a DR vector to all three receivers used in our experiments only at every third trigger. We call this the 1/3 base case, as it results in 1/3 the number of DR vectors of the base case. The budget for the probabilistic approach was set at one DR vector per trigger, compared to three DR vectors at every third trigger in the 1/3 base case; thus the two cases send the same number of DR vectors over time. In order to evaluate the effectiveness of the probabilistic budget based algorithm, we instrumented the BZFlag game to use this
approach. We used the same testbed consisting of one sender and three receivers with delays of 800 ms, 500 ms and 200 ms from the sender, with low delay variance (100 ms) and moderate delay variance (180 ms). The results are shown in Figures 5 and 6. As mentioned earlier, the x-axis of these graphs represents the system time when the snapshot of the game was taken. Observe from the figures that the standard deviation of the accumulated error among the receivers with the probabilistic budget based algorithm is less than in the 1/3 base case, while the mean is a little higher. This implies that the game is fairer than the 1/3 base case, at the cost of a small increase in mean error. The increase in mean error can be attributed to the fact that even though the probabilistic approach on average sends the same number of DR vectors as the 1/3 base case, due to its probabilistic nature it sometimes sends DR vectors to a receiver less frequently, and sometimes more frequently, than the 1/3 base case. When a receiver does not receive a DR vector for a long time, its trajectory drifts farther and farther from the sender's trajectory, and hence the rate at which its error builds up increases. At times when a receiver receives DR vectors more frequently, its error builds up at a lower rate, but there is no way of reversing the error that was built up while it did not receive a DR vector for a long time. This leads the receivers to build up more error in the probabilistic case than in the 1/3 base case, where the receivers receive a DR vector almost periodically.
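The weighted pick in the probabilistic scheme, where player $i$ is chosen with probability $a_i/(a_1 + \cdots + a_N)$, can be sketched as follows (a hypothetical helper for illustration, not code from the BZFlag instrumentation):

```python
import random

def pick_receiver(acc_errors, rng=random):
    """Pick exactly one receiver per trigger (n' = 1), with probability
    proportional to its accumulated error, e.g. a1 / (a1 + a2 + a3)."""
    total = sum(acc_errors)
    if total == 0:
        # No error accumulated yet: fall back to a uniform pick (our assumption;
        # the paper does not specify this corner case).
        return rng.randrange(len(acc_errors))
    r = rng.random() * total       # uniform point on [0, a1 + ... + aN)
    cum = 0.0
    for i, a in enumerate(acc_errors):
        cum += a
        if r < cum:                # r falls inside receiver i's weight interval
            return i
    return len(acc_errors) - 1     # guard against floating-point edge cases
```

A receiver with zero accumulated error is never picked while others have positive error, which is exactly the behavior that lets high-error receivers catch up.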
Figure 5: Mean and standard deviation of error for different algorithms (including budget based algorithms) for low delay variance.

Figure 6: Mean and standard deviation of error for different algorithms (including budget based algorithms) for moderate delay variance.

5.2 Deterministic budget based scheme

To bound the increase in mean error, we modified the budget based algorithm to be deterministic. The first two steps of the algorithm are the same as in the probabilistic algorithm: the trigger points are increased to lower the threshold, and the accumulated errors are used to compute the probability that a receiver will receive a DR vector. Once these steps are completed, a deterministic schedule for the receivers is computed as follows:

1. If any receiver(s) are tagged to receive a DR vector at the current trigger, the sender sends the DR vector to the respective receiver(s). If at least one receiver was sent a DR vector, the sender calculates the probability of each receiver receiving a DR vector as explained before and follows steps 2 to 6; otherwise it does nothing.

2. For each receiver, the probability value is multiplied by the budget available at each trigger (which is set to 1, as explained below) to give the frequency of sending the DR vector to that receiver.

3. If any receiver's frequency exceeds 1 after multiplying by the budget, that receiver's frequency is set to 1 and the surplus amount is
equally distributed to the other receivers by adding it to their existing frequencies. This process is repeated until every receiver has a frequency less than or equal to 1. This is because, at a given trigger, we cannot send more than one DR vector to a receiver; doing so would waste DR vectors by sending redundant information.

4. The reciprocal of the frequency (1/frequency) gives the schedule at which the sender should send DR vectors to the respective receiver. Any credit obtained previously (explained in step 5) is subtracted from the schedule. The resulting value of the schedule may not be an integer, so it is rounded up by taking its ceiling. For example, if the frequency is 1/3.5, we would like to send a DR vector every 3.5 triggers; however, we are constrained to send it at the 4th trigger, giving us a credit of 0.5. The next time we send a DR vector, the credit of 0.5 allows us to send it on the 3rd trigger.

5. The difference between the ceiling of the schedule and the schedule itself is the credit that the receiver has obtained, which is remembered and used the next time as explained in step 4.

6. Each receiver that was sent a DR vector at the current trigger is tagged to receive its next DR vector at the trigger that is exactly schedule (the ceiling of the schedule) triggers away from the current trigger. Observe that no other receiver's schedule is modified at this point, as they are all running schedules calculated at some previous point in time; those schedules are automatically modified at the trigger at which they are scheduled to receive their next DR vector.

At the first trigger, the sender sends the DR vector to all the receivers, uses a relative probability of 1/n for each receiver, and follows steps 2 to 6 to calculate the next schedule for each receiver in the same way as
mentioned for other triggers. This algorithm ensures that every receiver has a guaranteed schedule for receiving DR vectors, and hence there is none of the irregularity in sending DR vectors to a receiver that was observed in the probabilistic budget based algorithm.

We used the testbed described earlier (three receivers with varying delays) to evaluate the deterministic algorithm, using a budget of 1 DR vector per trigger so as to use the same number of DR vectors as in the 1/3 base case. Results from our experiments are shown in Figures 5 and 6. It can be observed that the standard deviation of error in the deterministic budget based algorithm is less than in the 1/3 base case, while the mean error is the same as in the 1/3 base case. This indicates that the deterministic algorithm is fairer than the 1/3 base case and at the same time does not increase the mean error, thereby leading to better game quality than the probabilistic algorithm. In general, when comparing the deterministic approach to the probabilistic approach, we found that the mean accumulated error was always lower in the deterministic approach. With respect to the standard deviation of the accumulated error, the deterministic approach was generally lower in the fixed or low variance cases, but in the higher variance cases it was harder to draw conclusions, as the probabilistic approach was sometimes better than the deterministic approach.

6. CONCLUSIONS AND FUTURE WORK

In distributed multi-player games played across the Internet, object and player trajectories within the game space are exchanged in terms of DR vectors. Due to the variable delay between players, these DR vectors reach different players at different times. Receivers that are closer to the sender of a DR vector gain an unfair advantage, as they are able to render the sender's position more accurately in real time. In this paper, we first developed a model for estimating the error in rendering player trajectories
at the receivers. We then presented an algorithm that schedules the DR vectors to be sent to different players at different times, thereby equalizing the error across players. This algorithm is aimed at making the game fair to all players, but tends to increase the mean error of the players. To counter this effect, we presented budget based algorithms in which the DR vectors are still scheduled to be sent to different players at different times, but which balance the need for fairness with the requirement that the error of the worst case players (those furthest from the sender) not be increased relative to the base case (where all DR vectors are sent to all players every time a DR vector is generated). We presented two variations of the budget based algorithms and showed through experimentation that they reduce the standard deviation of the error, thereby making the game fairer, while at the same time maintaining a mean error comparable to the base case.

7. REFERENCES

[1] S. Aggarwal, H. Banavar, A. Khandelwal, S. Mukherjee, and S. Rangarajan, Accuracy in Dead-Reckoning based Distributed Multi-Player Games, in Proc. of ACM SIGCOMM 2004 Workshop on Network and System Support for Games (NetGames 2004), Aug. 2004.
[2] L. Gautier and C. Diot, Design and Evaluation of MiMaze, a Multiplayer Game on the Internet, in Proc. of IEEE Multimedia (ICMCS'98), 1998.
[3] M. Mauve, Consistency in Replicated Continuous Interactive Media, in Proc. of the ACM Conference on Computer Supported Cooperative Work (CSCW'00), 2000, pp. 181-190.
[4] S.K. Singhal and D.R. Cheriton, Exploiting Position History for Efficient Remote Rendering in Networked Virtual Reality, Presence: Teleoperators and Virtual Environments, vol. 4, no. 2, pp. 169-193, 1995.
[5] C. Diot and L. Gautier, A Distributed Architecture for Multiplayer Interactive Applications on the Internet, IEEE Network Magazine, vol. 13, pp. 6-15, 1999.
[6] L. Pantel and L.C.
Wolf, On the Impact of Delay on Real-Time Multiplayer Games, in Proc. of ACM NOSSDAV'02, May 2002.
[7] Y. Lin, K. Guo, and S. Paul, Sync-MS: Synchronized Messaging Service for Real-Time Multi-Player Distributed Games, in Proc. of the 10th IEEE International Conference on Network Protocols (ICNP), Nov. 2002.
[8] K. Guo, S. Mukherjee, S. Rangarajan, and S. Paul, A Fair Message Exchange Framework for Distributed Multi-Player Games, in Proc. of NetGames 2003, May 2003.
[9] N.E. Baughman and B.N. Levine, Cheat-Proof Playout for Centralized and Distributed Online Games, in Proc. of IEEE INFOCOM'01, April 2001.
[10] M. Allman and V. Paxson, On Estimating End-to-End Network Path Properties, in Proc. of ACM SIGCOMM'99, Sept. 1999.
[11] BZFlag Forum, BZFlag Game, URL: http://www.bzflag.org.
[12] National Institute of Standards and Technology, NIST Net, URL: http://snad.ncsl.nist.gov/nistnet/.

Fairness in Dead-Reckoning based Distributed Multi-Player Games

ABSTRACT

In a distributed multi-player game that uses dead-reckoning vectors to exchange movement information among players, there is inaccuracy in rendering the objects at the receiver due to the network delay between the sender and the receiver. The object is placed at the receiver at the position indicated by the dead-reckoning vector, but by that time the real position could have changed considerably at the sender. This inaccuracy would be tolerable if it were consistent among all players; that is, if at the same physical time all players saw inaccurate (with respect to the real position of the object) but identical positions and trajectories for an object. But due to varying network delays between the sender and different receivers, the inaccuracy differs across players as well. This leads to unfairness in game playing. In this paper, we first introduce an "error" measure for estimating this inaccuracy. We then develop an algorithm for scheduling the sending of dead-reckoning
vectors at a sender that strives to make this error equal at the different receivers over time. This algorithm makes the game very fair, but at the expense of increasing the overall mean error of all players. To mitigate this effect, we propose a budget based algorithm that provides improved fairness without increasing the mean error, thereby maintaining the accuracy of game playing. We have implemented both the scheduling algorithm and the budget based algorithm as part of BZFlag, a popular distributed multi-player game, and we show through experiments that these algorithms provide fairness among players in spite of widely varying network delays. An additional property of the proposed algorithms is that they require fewer DR vectors to be exchanged (compared to the current implementation of BZFlag) to achieve the same level of accuracy in game playing.

1. INTRODUCTION

In a distributed multi-player game, players are normally distributed across the Internet and have varying delays to each other or to a central game server. Usually, in such games, the players are part of the game, and in addition they may control entities that make up the game. During the course of the game, the players and the entities move within the game space. A player sends information about her movement, as well as the movement of the entities she controls, to the other players using a Dead-Reckoning (DR) vector. A DR vector contains the current position of the player/entity in terms of x, y and z coordinates (at the time the DR vector was sent) as well as the trajectory of the entity in terms of the velocity component in each of the dimensions. Each participating player receives such DR vectors from the others and renders the other players/entities on the local console until a new DR vector is received for that player/entity. In a peer-to-peer game, players send DR vectors directly to each other; in a client-server game, these DR vectors may be forwarded through a
game server. DR is used because it is practically impossible for players/entities to exchange their current positions at every time unit. DR vectors are a quantization of the real trajectory (which we refer to as the real path) at a player. Normally, a new DR vector is computed and sent whenever the real path deviates from the path extrapolated using the previous DR vector (say, in terms of distance in the x, y, z space) by more than some threshold. We refer to the trajectory that can be computed using the sequence of DR vectors as the exported path. Therefore, at the sending player, there is a deviation between the real path and the exported path. The error due to this deviation could be removed only if each movement of a player/entity were communicated to the other players at every time unit, that is, if a DR vector were generated at every time unit, making the real and exported paths identical. Given that this is not feasible due to bandwidth limitations, this error is not of practical interest; the receiving players can, at best, follow the exported path. Because of the network delay between the sending and receiving players, by the time a DR vector is received and rendered at a player, the original trajectory of the player/entity may already have changed. Thus, in physical time, there is a deviation at the receiving player between the exported path and the rendered trajectory (which we refer to as the placed path). We refer to this error as the export error. Note that the export error, in turn, results in a deviation between the real and the placed paths. The export error manifests itself through the deviation between the exported path at the sender and the placed path at the receiver (i) before the DR vector is received at the receiver (referred to as the before export error), and (ii) after the DR vector is received at the receiver (referred to as the after export error). In an earlier paper [1], we showed that by
synchronizing the clocks at all the players and by using a technique based on time-stamping the messages that carry the DR vectors, we can guarantee that the after export error is made zero; that is, the placed and exported paths match after the DR vector is received. We also showed that the before export error can never be eliminated, since there is always a non-zero network delay, but that it can be significantly reduced using our technique [1]. Henceforth we assume that the players use such a technique, which results in an unavoidable but small overall export error.

In this paper we consider the problem of different and varying network delays between each sender-receiver pair of a DR vector and, consequently, of different and varying export errors at the receivers. Due to the difference in export errors among the receivers, the same entity is rendered at different physical times at different receivers. This introduces unfairness into game playing. For instance, a player with a large delay would always see an entity "late" in physical time compared to the other players, and therefore her action on the entity would be delayed (in physical time) even if she reacted instantaneously after the entity was rendered. Our goal in this paper is to improve the fairness of these games in spite of the varying network delays by equalizing the export error at the players. We explore whether the time-average of the export errors (the cumulative export error over a period of time, averaged over that period) at all the players can be made the same by appropriately scheduling the sending of the DR vectors at the sender. We propose two algorithms to achieve this. Both are based on delaying (or dropping) the sending of DR vectors to some players on a continuous basis so as to make the export error the same at all the players. At an abstract level, the algorithms delay sending DR vectors to players whose accumulated error so far in the game is smaller
than others; this means that the export error due to the next DR vector at these players will be larger than at the other players, bringing the errors closer together. The goal is to make this error at least approximately equal at every DR vector, with the deviation in the error becoming smaller as time progresses. The first algorithm (which we refer to as the scheduling algorithm) is based on estimating the delay between players and refining the sending of DR vectors by scheduling them to be sent to different players at different times at every DR generation point. Through an implementation of this algorithm in the open source game BZFlag, we show that it makes the game very fair (we measure fairness in terms of the standard deviation of the error). The drawback of this algorithm is that it tends to push the error of all the players towards that of the player with the worst error (the error at the player farthest, in terms of delay, from the sender of the DR). To alleviate this effect, we propose a budget based algorithm that budgets how the DRs are sent to different players. At a high level, this algorithm sends more DRs to players who are farther away from the sender than to those who are closer. Experimental results from BZFlag illustrate that the budget based algorithm follows a more balanced approach: it improves the fairness of the game, but does so without pushing up the mean error of the players, thereby maintaining the accuracy of the game. In addition, the budget based algorithm is shown to achieve the same level of accuracy in game playing as the current implementation of BZFlag using a much smaller number of DR vectors.

2. PREVIOUS WORK

Earlier work on dealing with network latency in network games has mostly focused on compensation techniques for packet delay and loss [2, 3, 4]. These methods are aimed at making large delays and message loss tolerable for players, but they do
not consider the problems that may be introduced by varying delays from the server to different players or from the players to one another. For example, the concept of local lag has been used in [3], where each player delays every local operation for a certain amount of time so that remote players can receive information about the local operation and execute the same operation at about the same time, thus reducing state inconsistencies. The online multi-player game MiMaze [2, 5, 6], for example, takes a static bucket synchronization approach to compensate for variable network delays: each player delays all events by 100 ms, regardless of whether they are generated locally or remotely. Players with a network delay larger than 100 ms simply cannot participate in the game. In general, techniques based on bucket synchronization depend on imposing a worst case delay on all the players.

A few papers have studied the problem of fairness in a distributed game through more sophisticated message delivery mechanisms. But these works [7, 8] assume the existence of a global view of the game, where a game server maintains a view (or state) of the game. Players can introduce objects into the game or delete objects that are already part of the game (for example, in a first-person shooter game, by shooting down an object). These additions and deletions are communicated to the game server using "action" messages. Based on these action messages, the state of the game is changed at the game server, and these changes are communicated to the players using "update" messages. Fairness is achieved by ordering the delivery of action and update messages at the game server and the players, respectively, based on the notion of a "fair-order" that takes into account the delays between the game server and the different players. Objects that are part of the game may move, but how this information is communicated to the players appears to be beyond the scope of
these works.\nIn this sense, these works are very limited in scope and may be applicable only to firstperson shooter games and that too to only games where players are not part of the game.\nDR vectors can be exchanged directly among the players (peerto-peer model) or using a central server as a relay (client-server model).\nIt has been shown in [9] that multi-player games that use DR vectors together with bucket synchronization are not cheatproof unless additional mechanisms are put in place.\nBoth the scheduling algorithm and the budget-based algorithm described in our paper use DR vectors and hence are not cheat-proof.\nFor example, a receiver could skew the delay estimate at the sender to make the sender believe that the delay between the sender and the receiver is high thereby gaining undue advantage.\nWe emphasize that the focus of this paper is on fairness without addressing the issue of cheating.\nIn the next section, we describe the game model that we use and illustrate how senders and receivers exchange DR vectors and how entities are rendered at the receivers based on the time-stamp augmented DR vector exchange as described in [1].\nIn Section 4, we describe the DR vector scheduling algorithm that aims to make the export error equal across the players with varying delays from the sender of a DR vector, followed by experimental results obtained from instrumentation of the scheduling algorithm on the open source game BZFlag.\nSection 5, describes the budget based algorithm that achieves improved fairness but without reducing the level accuracy of game playing.\nConclusions are presented in Section 6.\n3.\nGAME MODEL\n4.\nSCHEDULING ALGORITHM FOR SENDING DR VECTORS\n4.1 Computation of Relative Export Error\n4.2 Equalization of Error Among Receivers\n4.3 Computation of the Export Error\n4.4 Computation of Scheduling Instants\n4.5 Steps of the Scheduling Algorithm\n4.6 Handling Cases in Practice\n4.7 Experimental Results\n5.\nBUDGET BASED ALGORITHM\n5.1 
Probabilistic budget based scheme\n5.2 Deterministic budget based scheme\n6.\nCONCLUSIONS AND FUTURE WORK\nIn distributed multi-player games played across the Internet, object and player trajectory within the game space are exchanged in terms of DR vectors.\nDue to the variable delay between players, these DR vectors reach different players at different times.\nThere is unfair advantage gained by receivers who are closer to the sender of the DR as they are able to render the sender's position more accurately in real time.\nIn this paper, we first developed a model for estimating the \"error\" in rendering player trajectories at the receivers.\nWe then presented an algorithm based on scheduling the DR vectors to be sent to different players at different times thereby \"equalizing\" the error at different players.\nThis algorithm is aimed at making the game fair to all players, but tends to increase the mean error of the players.\nTo counter this effect, we presented \"budget\" based algorithms where the DR vectors are still scheduled to be sent at different players at different times but the algorithm balances the need for \"fairness\" with the requirement that the error of the \"worst\" case players (who are furthest from the sender) are not increased compared to the base case (where all DR vectors are sent to all players every time a DR vector is generated).\nWe presented two variations of the budget based algorithms and through experimentation showed that the algorithms reduce the standard deviation of the error thereby making the game more fair and at the same time has comparable mean error to the base case.","lvl-4":"Fairness in Dead-Reckoning based Distributed Multi-Player Games\nABSTRACT\nIn a distributed multi-player game that uses dead-reckoning vectors to exchange movement information among players, there is inaccuracy in rendering the objects at the receiver due to network delay between the sender and the receiver.\nThe object is placed at the receiver at 
the position indicated by the dead-reckoning vector, but by that time, the real position could have changed considerably at the sender.\nThis inaccuracy would be tolerable if it is consistent among all players; that is, at the same physical time, all players see inaccurate (with respect to the real position of the object) but the same position and trajectory for an object.\nBut due to varying network delays between the sender and different receivers, the inaccuracy is different at different players as well.\nThis leads to unfairness in game playing.\nIn this paper, we first introduce an \"error\" measure for estimating this inaccuracy.\nThen we develop an algorithm for scheduling the sending of dead-reckoning vectors at a sender that strives to make this error equal at different receivers over time.\nThis algorithm makes the game very fair at the expense of increasing the overall mean error of all players.\nTo mitigate this effect, we propose a budget based algorithm that provides improved fairness without increasing the mean error thereby maintaining the accuracy of game playing.\nWe have implemented both the scheduling algorithm and the budget based algorithm as part of BZFlag, a popular distributed multi-player game.\nWe show through experiments that these algorithms provide fairness among players in spite of widely varying network delays.\nAn additional property of the proposed algorithms is that they require less number of DRs to be exchanged (compared to the current implementation of BZflag) to achieve the same level of accuracy in game playing.\n1.\nINTRODUCTION\nIn a distributed multi-player game, players are normally distributed across the Internet and have varying delays to each other or to a central game server.\nUsually, in such games, the players are part of the game and in addition they may control entities that make up the game.\nDuring the course of the game, the players and the entities move within the game space.\nA player sends information about 
her movement as well as the movement of the entities she controls to the other players using a Dead-Reckoning (DR) vector.\nEach of the participating players receives such DR vectors from one another and renders the other players\/entities on the local consoles until a new DR vector is received for that player\/entity.\nIn a peer-to-peer game, players send DR vectors directly to each other; in a client-server game, these DR vectors may be forwarded through a game server.\nDR vectors are \"quantization\" of the real trajectory (which we refer to as real path) at a player.\nWe refer to the trajectory that can be computed using the sequence of DR vectors as the exported path.\nTherefore, at the sending player, there is a deviation between the real path and the exported path.\nThe error due to this deviation can be removed if each movement of player\/entity is communicated to the other players at every time unit; that is a DR vector is generated at every time unit thereby making the real and exported paths the same.\nTherefore, the receiving players can, at best, follow the exported path.\nBecause of the network delay between the sending and receiving players, when a DR vector is received and rendered at a player, the original trajectory of the player\/entity may have already changed.\nThus, in physical time, there is a deviation at the receiving player between the exported path and the rendered trajectory (which we refer to as placed path).\nWe refer to this error as the export error.\nNote that the export error, in turn, results in a deviation between the real and the placed paths.\nThe export error manifests itself due to the deviation between the exported path at the sender and the placed path at the receiver (i)\nbefore the DR vector is received at the receiver (referred to as the before export error, and (ii) after the DR vector is received at the receiver (referred to as the after export error).\nIn an earlier paper [1], we showed that by synchronizing the 
clocks at all the players and by using a technique based on time-stamping messages that carry the DR vectors, we can guarantee that the after export error is made zero.\nThat is, the placed and the exported paths match after the DR vector is received.\nWe also showed that the before export error can never be eliminated since there is always a non-zero network delay, but can be significantly reduced using our technique [1].\nHenceforth we assume that the players use such a technique which results in unavoidable but small overall export error.\nIn this paper we consider the problem of different and varying network delays between each sender-receiver pair of a DR vector, and consequently, the different and varying export errors at the receivers.\nDue to the difference in the export errors among the receivers, the same entity is rendered at different physical time at different receivers.\nThis brings in unfairness in game playing.\nOur goal in this paper is to improve the fairness of these games in spite of the varying network delays by equalizing the export error at the players.\nWe explore whether the time-average of the export errors (which is the cumulative export error over a period of time averaged over the time period) at all the players can be made the same by scheduling the sending of the DR vectors appropriately at the sender.\nWe propose two algorithms to achieve this.\nBoth the algorithms are based on delaying (or dropping) the sending of DR vectors to some players on a continuous basis to try and make the export error the same at all the players.\nAt an abstract level, the algorithm delays sending DR vectors to players whose accumulated error so far in the game is smaller than others; this would mean that the export error due to this DR vector at these players will be larger than that of the other players, thereby making them the same.\nThe goal is to make this error at least approximately equal at every DR vector with the deviation in the error becoming 
smaller as time progresses.\nThe first algorithm (which we refer to as the \"scheduling algorithm\") is based on \"estimating\" the delay between players and refining the sending of DR vectors by scheduling them to be sent to different players at different times at every DR generation point.\nThrough an implementation of this algorithm using the open source game BZflag, we show that this algorithm makes the game very fair (we measure fairness in terms of the standard deviation of the error).\nThe drawback of this algorithm is that it tends to push the error of all the players towards that of the player with the worst error (which is the error at the farthest player, in terms of delay, from the sender of the DR).\nTo alleviate this effect, we propose a budget based algorithm which budgets how the DRs are sent to different players.\nAt a high level, the algorithm is based on the idea of sending more DRs to players who are farther away from the sender compared to those who are closer.\nExperimental results from BZflag illustrates that the budget based algorithm follows a more balanced approach.\nIt improves the fairness of the game but at the same time does so without pushing up the mean error of the players thereby maintaining the accuracy of the game.\nIn addition, the budget based algorithm is shown to achieve the same level of accuracy of game playing as the current implementation of BZflag using much less number of DR vectors.\n2.\nPREVIOUS WORK\nEarlier work on network games to deal with network latency has mostly focussed on compensation techniques for packet delay and loss [2, 3, 4].\nThese methods are aimed at making large delays and message loss tolerable for players but does not consider the problems that may be introduced by varying delays from the server to different players or from the players to one another.\nThe online multi-player game MiMaze [2, 5, 6], for example, takes a static bucket synchronization approach to compensate for variable network 
delays.\nIn MiMaze, each player delays all events by 100 ms regardless of whether they are generated locally or remotely.\nPlayers with a network delay larger than 100 ms simply cannot participate in the game.\nIn general, techniques based on bucket synchronization depend on imposing a worst case delay on all the players.\nThere have been a few papers which have studied the problem of fairness in a distributed game by more sophisticated message delivery mechanisms.\nPlayers can introduce objects into the game or delete objects that are already part of the game (for example, in a first-person shooter game, by shooting down the object).\nThese additions and deletions are communicated to the game server using \"action\" messages.\nBased on these action messages, the state of the game is changed at the game server and these changes are communicated to the players using \"update\" messages.\nFairness is achieved by ordering the delivery of action and update messages at the game server and players respectively based on the notion of a \"fair-order\" which takes into account the delays between the game server and the different players.\nObjects that are part of the game may move but how this information is communicated to the players seems to be beyond the scope of these works.\nIn this sense, these works are very limited in scope and may be applicable only to firstperson shooter games and that too to only games where players are not part of the game.\nDR vectors can be exchanged directly among the players (peerto-peer model) or using a central server as a relay (client-server model).\nIt has been shown in [9] that multi-player games that use DR vectors together with bucket synchronization are not cheatproof unless additional mechanisms are put in place.\nBoth the scheduling algorithm and the budget-based algorithm described in our paper use DR vectors and hence are not cheat-proof.\nIn the next section, we describe the game model that we use and illustrate how senders 
and receivers exchange DR vectors and how entities are rendered at the receivers based on the time-stamp augmented DR vector exchange as described in [1].\nIn Section 4, we describe the DR vector scheduling algorithm that aims to make the export error equal across the players with varying delays from the sender of a DR vector, followed by experimental results obtained from instrumentation of the scheduling algorithm on the open source game BZFlag.\nSection 5, describes the budget based algorithm that achieves improved fairness but without reducing the level accuracy of game playing.\nConclusions are presented in Section 6.\n6.\nCONCLUSIONS AND FUTURE WORK\nIn distributed multi-player games played across the Internet, object and player trajectory within the game space are exchanged in terms of DR vectors.\nDue to the variable delay between players, these DR vectors reach different players at different times.\nIn this paper, we first developed a model for estimating the \"error\" in rendering player trajectories at the receivers.\nWe then presented an algorithm based on scheduling the DR vectors to be sent to different players at different times thereby \"equalizing\" the error at different players.\nThis algorithm is aimed at making the game fair to all players, but tends to increase the mean error of the players.\nWe presented two variations of the budget based algorithms and through experimentation showed that the algorithms reduce the standard deviation of the error thereby making the game more fair and at the same time has comparable mean error to the base case.","lvl-2":"Fairness in Dead-Reckoning based Distributed Multi-Player Games\nABSTRACT\nIn a distributed multi-player game that uses dead-reckoning vectors to exchange movement information among players, there is inaccuracy in rendering the objects at the receiver due to network delay between the sender and the receiver.\nThe object is placed at the receiver at the position indicated by the dead-reckoning 
vector, but by that time, the real position could have changed considerably at the sender. This inaccuracy would be tolerable if it were consistent among all players; that is, if at the same physical time all players saw inaccurate (with respect to the real position of the object) but identical positions and trajectories for an object. But due to varying network delays between the sender and different receivers, the inaccuracy differs from player to player as well. This leads to unfairness in game playing. In this paper, we first introduce an "error" measure for estimating this inaccuracy. Then we develop an algorithm for scheduling the sending of dead-reckoning vectors at a sender that strives to make this error equal at different receivers over time. This algorithm makes the game very fair at the expense of increasing the overall mean error of all players. To mitigate this effect, we propose a budget based algorithm that provides improved fairness without increasing the mean error, thereby maintaining the accuracy of game playing. We have implemented both the scheduling algorithm and the budget based algorithm as part of BZFlag, a popular distributed multi-player game. We show through experiments that these algorithms provide fairness among players in spite of widely varying network delays. An additional property of the proposed algorithms is that they require fewer DRs to be exchanged (compared to the current implementation of BZFlag) to achieve the same level of accuracy in game playing.

1. INTRODUCTION

In a distributed multi-player game, players are normally distributed across the Internet and have varying delays to each other or to a central game server. Usually, in such games, the players are part of the game, and in addition they may control entities that make up the game. During the course of the game, the players and the entities move within the game space. A player sends information about her movement, as well as the movement of the entities she controls, to the other players using a Dead-Reckoning (DR) vector. A DR vector contains the current position of the player/entity in terms of the x, y and z coordinates (at the time the DR vector was sent) as well as the trajectory of the entity in terms of the velocity component in each of the dimensions. Each of the participating players receives such DR vectors from one another and renders the other players/entities on the local consoles until a new DR vector is received for that player/entity. In a peer-to-peer game, players send DR vectors directly to each other; in a client-server game, these DR vectors may be forwarded through a game server.

DR is used because it is practically impossible for players/entities to exchange their current positions at every time unit. DR vectors are a "quantization" of the real trajectory (which we refer to as the real path) at a player. Normally, a new DR vector is computed and sent whenever the real path deviates from the path extrapolated using the previous DR vector (say, in terms of distance in the x, y, z plane) by more than some threshold. We refer to the trajectory that can be computed using the sequence of DR vectors as the exported path. Therefore, at the sending player, there is a deviation between the real path and the exported path. The error due to this deviation could be removed if each movement of a player/entity were communicated to the other players at every time unit; that is, if a DR vector were generated at every time unit, the real and exported paths would be the same. Given that this is not feasible due to bandwidth limitations, this error is not of practical interest. The receiving players can, at best, follow the exported path.

Because of the network delay between the sending and receiving players, when a DR vector is received and rendered at a player, the original trajectory of the player/entity may have already changed. Thus, in physical time, there is a deviation at the receiving player between the exported path and the rendered trajectory (which we refer to as the placed path). We refer to this error as the export error. Note that the export error, in turn, results in a deviation between the real and the placed paths. The export error manifests itself as the deviation between the exported path at the sender and the placed path at the receiver (i) before the DR vector is received at the receiver (referred to as the before export error), and (ii) after the DR vector is received at the receiver (referred to as the after export error).

In an earlier paper [1], we showed that by synchronizing the clocks at all the players and by using a technique based on time-stamping the messages that carry the DR vectors, we can guarantee that the after export error is made zero; that is, the placed and the exported paths match after the DR vector is received. We also showed that the before export error can never be eliminated, since there is always a non-zero network delay, but it can be significantly reduced using our technique [1]. Henceforth we assume that the players use such a technique, which results in an unavoidable but small overall export error.

In this paper we consider the problem of different and varying network delays between each sender-receiver pair of a DR vector and, consequently, the different and varying export errors at the receivers. Due to the difference in the export errors among the receivers, the same entity is rendered at different physical times at different receivers. This brings unfairness into game playing. For instance, a player with a large delay would always see an entity "late" in physical time compared to the other players, and therefore her action on the entity would be delayed (in physical time) even if she reacted instantaneously after the entity was rendered. Our goal in this paper is to improve the fairness of these games in spite of the varying network delays
by equalizing the export error at the players. We explore whether the time-average of the export errors (the cumulative export error over a period of time, averaged over that period) at all the players can be made the same by scheduling the sending of the DR vectors appropriately at the sender. We propose two algorithms to achieve this. Both algorithms are based on delaying (or dropping) the sending of DR vectors to some players on a continuous basis to try to make the export error the same at all the players. At an abstract level, the algorithm delays sending DR vectors to players whose accumulated error so far in the game is smaller than that of others; the export error due to this DR vector at these players will then be larger than at the other players, bringing the errors closer together. The goal is to make this error at least approximately equal at every DR vector, with the deviation in the error becoming smaller as time progresses.

The first algorithm (which we refer to as the "scheduling algorithm") is based on estimating the delay between players and refining the sending of DR vectors by scheduling them to be sent to different players at different times at every DR generation point. Through an implementation of this algorithm in the open source game BZFlag, we show that this algorithm makes the game very fair (we measure fairness in terms of the standard deviation of the error). The drawback of this algorithm is that it tends to push the error of all the players towards that of the player with the worst error (the error at the player farthest, in terms of delay, from the sender of the DR). To alleviate this effect, we propose a budget based algorithm which budgets how the DRs are sent to different players. At a high level, the algorithm sends more DRs to players who are farther away from the sender than to those who are closer. Experimental results from BZFlag illustrate that the budget based algorithm follows a more balanced approach: it improves the fairness of the game without pushing up the mean error of the players, thereby maintaining the accuracy of the game. In addition, the budget based algorithm achieves the same level of accuracy as the current implementation of BZFlag using far fewer DR vectors.

2. PREVIOUS WORK

Earlier work on dealing with network latency in networked games has mostly focused on compensation techniques for packet delay and loss [2, 3, 4]. These methods aim to make large delays and message loss tolerable for players but do not consider the problems that may be introduced by varying delays from the server to different players or from the players to one another. For example, the concept of local lag has been used in [3], where each player delays every local operation for a certain amount of time so that remote players can receive information about the local operation and execute the same operation at about the same time, thus reducing state inconsistencies. The online multi-player game MiMaze [2, 5, 6], for example, takes a static bucket synchronization approach to compensate for variable network delays. In MiMaze, each player delays all events by 100 ms regardless of whether they are generated locally or remotely. Players with a network delay larger than 100 ms simply cannot participate in the game. In general, techniques based on bucket synchronization depend on imposing a worst case delay on all the players.

A few papers have studied the problem of fairness in a distributed game through more sophisticated message delivery mechanisms. But these works [7, 8] assume the existence of a global view of the game, where a game server maintains a view (or state) of the game. Players can introduce objects into the game or delete objects that are already part of the game (for example, in a first-person shooter game, by shooting down the object). These additions and deletions are communicated to the game server using "action" messages. Based on these action messages, the state of the game is changed at the game server, and these changes are communicated to the players using "update" messages. Fairness is achieved by ordering the delivery of action and update messages at the game server and players, respectively, based on the notion of a "fair-order" that takes into account the delays between the game server and the different players. Objects that are part of the game may move, but how this information is communicated to the players seems to be beyond the scope of these works. In this sense, these works are limited in scope and may be applicable only to first-person shooter games, and then only to games where the players themselves are not part of the game.

DR vectors can be exchanged directly among the players (peer-to-peer model) or using a central server as a relay (client-server model). It has been shown in [9] that multi-player games that use DR vectors together with bucket synchronization are not cheat-proof unless additional mechanisms are put in place. Both the scheduling algorithm and the budget-based algorithm described in our paper use DR vectors and hence are not cheat-proof. For example, a receiver could skew the delay estimate at the sender to make the sender believe that the delay between the sender and the receiver is high, thereby gaining undue advantage. We emphasize that the focus of this paper is on fairness, without addressing the issue of cheating.

In the next section, we describe the game model that we use and illustrate how senders and receivers exchange DR vectors and how entities are rendered at the receivers based on the time-stamp augmented DR vector exchange described in [1]. In Section 4, we describe the DR vector scheduling algorithm, which aims to make the export error equal across players with varying delays from the sender of a DR vector, followed by experimental results obtained from instrumentation of the scheduling algorithm on the open source game BZFlag. Section 5 describes the budget based algorithm, which achieves improved fairness without reducing the accuracy of game playing. Conclusions are presented in Section 6.

3. GAME MODEL

The game architecture is based on players distributed across the Internet and exchanging DR vectors with each other. The DR vectors could either be sent directly from one player to another (peer-to-peer model) or through a game server that receives the DR vector from a player and forwards it to the other players (client-server model). As mentioned before, we assume synchronized clocks among the participating players. Each DR vector sent from one player to another specifies the trajectory of exactly one player/entity. We assume a linear DR vector, in that the information contained in the DR vector is only enough for the receiving player to compute the trajectory and render the entity along a straight line path. Such a DR vector contains the starting position and velocity of the player/entity, where the velocity is constant (see footnote 1). Thus, the DR vectors sent by a player specify the current time at the player when the DR vector is computed (not the time at which this DR vector is sent to the other players, as we will explain later), the current position of the player/entity in terms of the x, y, z coordinates, and the velocity vector in the direction of the x, y and z coordinates. Specifically, the i-th DR vector sent by player j about the k-th entity is denoted DRjik and is represented by the tuple (Tjik, xjik, yjik, zjik, vxjik, vyjik, vzjik). Without loss of generality, in the rest of the discussion we consider a sequence of DR vectors sent by only one player and for only one entity. For simplicity, we consider a two-dimensional game space rather than a three-dimensional one. Hence we use DRi to denote the i-th such DR
vector represented as the tuple (Ti, xi, yi, vxi, vyi).\nThe receiving player computes the starting position for the entity based on xi, yi and the time difference between when the DR vector is received and the time Ti at which it was computed.\nNote that the computation of time difference is feasible since all the clocks are synchronized.\nThe receiving player then uses the velocity components to project and render the trajectory of the entity.\nThis trajectory is followed until a new DR vector is received which changes the position and\/or velocity of the entity.\nFigure 1: Trajectories and deviations.\nBased on this model, Figure 1 illustrates the sending and receiv1Other type of DR vectors include quadratic DR vectors which specify the acceleration of the entity and cubic spline DR vectors that consider the starting position and velocity and the ending position and velocity of the entity.\ning of DR vectors and the different errors that are encountered.\nThe figure shows the reception of DR vectors at a player (henceforth called the receiver).\nThe horizontal axis shows the time which is synchronized among all the players.\nThe vertical axis tries to conceptually capture the two-dimensional position of an entity.\nAssume that at time To a DR vector DRo is computed by the sender and immediately sent to the receiver.\nAssume that DRo is received at the receiver after a delay of dto time units.\nThe receiver computes the initial position of the entity as (xo + vxo \u00d7 dto, yo + vyo \u00d7 dto) (shown as point E).\nThe thick line EBD represents the projected and rendered trajectory at the receiver based on the velocity components vxo and vyo (placed path).\nAt time Tl a DR vector DRl is computed for the same entity and immediately sent to the receiver2.\nAssume that DRl is received at the receiver after a delay of dtl time units.\nWhen this DR vector is received, assume that the entity is at point D.\nA new position for the entity is computed as (xl + vxl \u00d7 
dtl, yl + vyo \u00d7 dtl) and the entity is moved to this position (point C).\nThe velocity components vxl and vyl are used to project and render this entity further.\nLet us now consider the error due to network delay.\nAlthough DRl was computed at time Tl and sent to the receiver, it did not reach the receiver until time Tl + dtl.\nThis means, although the exported path based on DRl at the sender at time Tl is the trajectory AC, until time Tl + dtl, at the receiver, this entity was being rendered at trajectory BD based on DRo.\nOnly at time Tl + dtl did the entity get moved to point C from which point onwards the exported and the placed paths are the same.\nThe deviation between the exported and placed paths creates an error component which we refer to as the export error.\nA way to represent the export error is to compute the integral of the distance between the two trajectories over the time when they are out of sync.\nWe represent the integral of the distances between the placed and exported paths due to some DR DRi over a time interval [tl, t2] as Err (DRi, tl, t2).\nIn the figure, the export error due to DRl is computed as the integral of the distance between the trajectories AC and BD over the time interval [Tl, Tl + dtl].\nNote that there could be other ways of representing this error as well, but in this paper, we use the integral of the distance between the two trajectories as a measure of the export error.\nNote that there would have been an export error created due to the reception of DRo at which time the placed path would have been based on a previous DR vector.\nThis is not shown in the figure but it serves to remind the reader that the export error is cumulative when a sequence of DR vectors are received.\nStarting from time Tl onwards, there is a deviation between the real and the exported paths.\nAs we discussed earlier, this export error is unavoidable.\nThe above figure and example illustrates one receiver only.\nBut in reality, DR vectors DRo 
and DR1 are sent by the sender to all the participating players.\nEach of these players receives DR0 and DR1 after varying delays, thereby creating different export error values at different players.\nThe goal of the DR vector scheduling algorithm to be described in the next section is to make this (cumulative) export error equal at every player independently for each of the entities that make up the game.\n4.\nSCHEDULING ALGORITHM FOR SENDING DR VECTORS\nIn Section 3 we showed how delay from the sender of a new DR vector\u00b2 to the receiver of the DR vector could lead to export error because of the deviation of the placed path from the exported path at the receiver until this new DR vector is received.\n\u00b2Normally, DR vectors are computed not on a periodic basis but on an on-demand basis, where the decision to compute a new DR vector is based on some threshold being exceeded on the deviation between the real path and the path exported by the previous DR vector.\nWe also mentioned that the goal of the DR vector scheduling algorithm is to make the export error \"equal\" at all receivers over a period of time.\nSince the game is played in a distributed environment, it makes sense for the sender of an entity to keep track of all the errors at the receivers and try to make them equal.\nHowever, the sender cannot know the actual error at a receiver until it gets some information regarding the error back from the receiver.\nOur algorithm estimates the error to compute a schedule for sending DR vectors to the receivers and corrects the error when it gets feedback from the receivers.\nIn this section we provide motivation for the algorithm and describe the steps it goes through.\nThroughout this section, we will use the following example to illustrate the algorithm.\nFigure 2: DR vector flow between a sender and two receivers and the evolution of estimated and actual placed paths at the receivers.\nDR0 = (T0, T0, x0, y0, vx0, vy0), sent at time T0 to both receivers.\nDR1 = (T1,
T11, x1, y1, vx1, vy1) sent at time T11 = T1 + \u03b41 to receiver 1 and DR1 = (T1, T12, x1, y1, vx1, vy1) sent at time T12 = T1 + \u03b42 to receiver 2.\nConsider the example in Figure 2.\nThe figure shows a single sender sending DR vectors for an entity to two different receivers 1 and 2.\nDR0, computed at T0, is sent and received by the receivers sometime between T0 and T1, at which time they move the location of the entity to match the exported path.\nThus, the path of the entity is shown only from the point where the placed path matches the exported path for DR0.\nNow consider DR1.\nAt time T1, DR1 is computed by the sender but assume that it is not immediately sent to the receivers; it is only sent after time \u03b41 to receiver 1 (at time T11 = T1 + \u03b41) and after time \u03b42 to receiver 2 (at time T12 = T1 + \u03b42), with the sending time carried as a time stamp with the DR vector as shown in the figure.\nAssume that the sender estimates (it will be clear shortly why the sender has to estimate the delay) that after a delay of dt1, receiver 1 will receive it, will use the coordinate and velocity parameters to compute the entity's current location and move it there (point C), and from this time onwards the exported and the placed paths will become the same.\nHowever, in reality, receiver 1 receives DR1 after a delay of da1 (which is less than the sender's estimate of dt1), and moves the corresponding entity to point H.
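As a concrete illustration of this bookkeeping, the following minimal Python sketch computes the export error Err over a time window for two straight-line 2-D trajectories (exported path vs. placed path) by numeric integration, and derives the relative export error as the estimated minus the actual export error. All function names, the 2-D assumption, and the numeric-integration choice are illustrative assumptions for this sketch, not taken from the paper's implementation (which uses the closed-form integral).

```python
import math

def export_error(A, vA, B, vB, t1, t2, steps=10_000):
    """Integral over [t1, t2] of the distance between two linear
    trajectories: exported path A + vA*t and placed path B + vB*t.
    Trapezoidal numeric integration; the paper's closed-form
    solution of the same integral could be substituted here."""
    def dist(t):
        dx = (A[0] + vA[0] * t) - (B[0] + vB[0] * t)
        dy = (A[1] + vA[1] * t) - (B[1] + vB[1] * t)
        return math.hypot(dx, dy)

    h = (t2 - t1) / steps
    total = 0.5 * (dist(t1) + dist(t2))
    total += sum(dist(t1 + k * h) for k in range(1, steps))
    return total * h

def relative_export_error(A, vA, B, vB, T, delta, dt_est, da_actual):
    """Estimated minus actual export error for one DR vector.
    Positive when the DR vector arrived earlier than the sender
    estimated (actual delay smaller than the estimate)."""
    estimated = export_error(A, vA, B, vB, T, T + delta + dt_est)
    actual = export_error(A, vA, B, vB, T, T + delta + da_actual)
    return estimated - actual
```

For instance, with two parallel trajectories a constant 3 units apart, the export error over a 2-second window is 6.0; if the sender estimated a delay of 2 s but the actual delay was 1 s (with delta = 0), the relative export error is +3.0, matching the sign convention in the text (R > 0 when the actual delay is smaller than the estimate).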
Similarly, the sender estimates that after a delay of dt2, receiver 2 will receive DR1, will compute the current location of the entity and move it to that point (point E), while in reality it receives DR1 after a delay of da2 > dt2 and moves the entity to point N.\nThe other points shown on the placed and exported paths will be used later in the discussion to describe different error components.\n4.1 Computation of Relative Export Error\nReferring back to the discussion from Section 3, from the sender's perspective, the export error at receiver 1 due to DR1 is given by Err (DR1, T1, T1 + \u03b41 + dt1) (the integral of the distance between the trajectories AC and DB over the time interval [T1, T1 + \u03b41 + dt1]) of Figure 2.\nThis is due to the fact that the sender uses the estimated delay dt1 to compute this error.\nSimilarly, the export error from the sender's perspective at receiver 2 due to DR1 is given by Err (DR1, T1, T1 + \u03b42 + dt2) (the integral of the distance between the trajectories AE and DF over the time interval [T1, T1 + \u03b42 + dt2]).\nNote that the above errors from the sender's perspective are only estimates.\nIn reality, the export error will be either smaller or larger than the estimated value, based on whether the delay estimate was larger or smaller than the actual delay that DR1 experienced.\nThis difference between the estimated and the actual export error is the relative export error (which could be either positive or negative), which occurs for every DR vector that is sent and is accumulated at the sender.\nThe concept of relative export error is illustrated in Figure 2.\nSince the actual delay to receiver 1 is da1, the export error induced by DR1 at receiver 1 is Err (DR1, T1, T1 + \u03b41 + da1).\nThis means there is an error in the estimated export error, and the sender can compute this error only after it gets feedback from the receiver about the actual delay for the delivery of DR1, i.e., the value of da1.\nWe propose that once receiver 1 receives
DR1, it sends the value of da1 back to the sender.\nThe receiver can compute this information as it knows the time at which DR1 was sent (T11 = T1 + \u03b41, which is appended to the DR vector as shown in Figure 2) and the local receiving time (which is synchronized with the sender's clock).\nTherefore, the sender computes the relative export error for receiver 1, represented using R1, as R1 = Err (DR1, T1, T1 + \u03b41 + dt1) \u2212 Err (DR1, T1, T1 + \u03b41 + da1), and similarly R2 for receiver 2.\nNote that R1 > 0 as da1 < dt1, and R2 < 0 as da2 > dt2.\nRelative export errors are computed by the sender as and when it receives the feedback from the receivers.\nThis example shows the relative export error values after DR1 is sent and the corresponding feedbacks are received.\n4.2 Equalization of Error Among Receivers\nWe now explain what we mean by making the errors \"equal\" at all the receivers and how this can be achieved.\nAs stated before, the sender keeps estimates of the delays to the receivers, dt1 and dt2 in the example of Figure 2.\nThis means that at time T1, when DR1 is computed, the sender already knows how long it may take messages carrying this DR vector to reach the receivers.\nThe sender uses this information to compute the export errors, which are Err (DR1, T1, T1 + \u03b41 + dt1) and Err (DR1, T1, T1 + \u03b42 + dt2) for receivers 1 and 2, respectively.\nNote that the areas of these error components are a function of \u03b41 and \u03b42 as well as the network delays dt1 and dt2.\nIf we are to make the export errors due to DR1 the same at both receivers, the sender needs to choose \u03b41 and \u03b42 such that Err (DR1, T1, T1 + \u03b41 + dt1) = Err (DR1, T1, T1 + \u03b42 + dt2).\nBut when DR1 was computed there could already have been accumulated relative export errors due to previous DR vectors (DR0 and the ones before).\nLet us represent the accumulated relative error up to DRi for receiver j as Rij.\nTo accommodate these accumulated relative errors, the sender should now choose \u03b41 and \u03b42 such that R01 + Err (DR1, T1, T1 + \u03b41 + dt1) = R02 + Err (DR1, T1, T1 + \u03b42 + dt2).\nThe \u03b4i determines the scheduling instant of the DR vector at the sender for receiver i.\nThis method of computation of \u03b4's ensures that the accumulated export error
(i.e., total actual error) for each receiver equalizes at the transmission of each DR vector.\nIn order to establish this, assume that the feedback for DR vector Di from a receiver reaches the sender before the schedule for Di+1 is computed.\nLet Sim and Aim denote the estimated error for receiver m used in computing the schedule for Di and the accumulated error for receiver m computed after receiving feedback for Di, respectively.\nThen Rim = Aim \u2212 Sim.\nIn order to compute the schedule instances (i.e., \u03b4's) for Di, for any pair of receivers m and n, we require R(i\u22121)m + Sim = R(i\u22121)n + Sin.\nThe following theorem establishes the fact that the accumulated export error is equalized at every scheduling instant.\nTHEOREM 4.1.\nWhen the schedule instances for sending Di are computed for any pair of receivers m and n, the following condition is satisfied: \u03a3k=1..i\u22121 Akm + Sim = \u03a3k=1..i\u22121 Akn + Sin.\nProof: By induction.\nThe base case for i = 1 holds since initially R0m = R0n = 0, and S1m = S1n is used to compute the scheduling instances.\nAssume that the premise holds for some i; we show that it holds for i + 1.\nIn order to compute the schedule for Di+1, we first compute the relative errors as Rim = Aim \u2212 Sim and Rin = Ain \u2212 Sin.\nThen, to compute the \u03b4's, we require Rim + S(i+1)m = Rin + S(i+1)n, i.e., (Aim \u2212 Sim) + S(i+1)m = (Ain \u2212 Sin) + S(i+1)n.\nAdding the condition of the premise on both sides we get \u03a3k=1..i Akm + S(i+1)m = \u03a3k=1..i Akn + S(i+1)n.\n4.3 Computation of the Export Error\nLet us now consider how the export errors can be computed.\nFrom the previous section, to find \u03b41 and \u03b42 we need to solve R01 + Err (DR1, T1, T1 + \u03b41 + dt1) = R02 + Err (DR1, T1, T1 + \u03b42 + dt2).\nNote that the values of R01 and R02 are already known at the sender.\nConsider the computation of Err (DR1, T1, T1 + \u03b41 + dt1).\nThis is the integral of the distance between the trajectories AC due to DR1 and BD due to DR0.\nFrom DR0 and DR1, point A is (X1, Y1) = (x1, y1) and point B is (X0, Y0) = (x0 + (T1 \u2212 T0) \u00d7 vx0, y0 + (T1 \u2212 T0) \u00d7 vy0).\nThe trajectory AC can be represented as a function of time as (X1 (t), Y1 (t)) = (X1 + vx1 \u00d7 t, Y1 + vy1
\u00d7 t) and the trajectory BD can be represented as (X0 (t), Y0 (t)) = (X0 + vx0 \u00d7 t, Y0 + vy0 \u00d7 t).\nThe distance between the two trajectories as a function of time is then dist (t) = \u221a ((X1 (t) \u2212 X0 (t))\u00b2 + (Y1 (t) \u2212 Y0 (t))\u00b2), whose argument under the square root is a quadratic in t and therefore has a closed-form integral; Err (DR1, T1, T1 + \u03b41 + dt1) can then be calculated by applying the appropriate limits to this solution.\nIn the next section, we consider the computation of the \u03b4's for N receivers.\n4.4 Computation of Scheduling Instants\nWe again look at the computation of \u03b4's by referring to Figure 2.\nThe sender chooses \u03b41 and \u03b42 such that R01 + Err (DR1, T1, T1 + \u03b41 + dt1) = R02 + Err (DR1, T1, T1 + \u03b42 + dt2).\nIf R01 and R02 are both zero, then \u03b41 and \u03b42 should be chosen such that Err (DR1, T1, T1 + \u03b41 + dt1) = Err (DR1, T1, T1 + \u03b42 + dt2).\nThis equality will hold if \u03b41 + dt1 = \u03b42 + dt2.\nThus, if there is no accumulated relative export error, all that the sender needs to do is to choose the \u03b4's in such a way that they counteract the difference in the delay to the two receivers, so that they receive the DR vector at the same time.\nAs discussed earlier, because the sender cannot learn the delay a priori, there will always be an accumulated relative export error from a previous DR vector that does have to be taken into account.\nTo delve deeper into this, consider the computation of the export error as illustrated in the previous section.\nTo compute the \u03b4's we require that R01 + Err (DR1, T1, T1 + \u03b41 + dt1) = R02 + Err (DR1, T1, T1 + \u03b42 + dt2).\nLet Ei denote the total estimated export error at receiver i if the DR vector were sent immediately, i.e., Ei = R0i + Err (DR1, T1, T1 + dti).\nAssume that E1 > E2.\nThen, for the above equation to hold, we require that \u03b41 < \u03b42.\nTo make the game as fast as possible within this framework, the \u03b4 values should be made as small as possible so that DR vectors are sent to the receivers as soon as possible, subject to the fairness requirement.\nGiven this, we would choose \u03b41 to be zero and compute \u03b42 from the equation E1 = R02 + Err (DR1, T1, T1 + \u03b42 + dt2).\nIn general, if there are N receivers 1,..., N, when a sender generates a DR vector and decides to schedule them to be sent, it first computes
the Ei values for all of them from the accumulated relative export errors and the estimates of delays.\nThen, it finds the smallest of these values.\nLet Ek be the smallest value.\nThe sender sets \u03b4k to zero and computes the rest of the \u03b4's from the equality Ek = R(i\u22121)j + Err (DRi, Ti, Ti + \u03b4j + dtj) for each receiver j \u2260 k.\nThe \u03b4's thus obtained give the scheduling instants of the DR vector for the receivers.\n4.5 Steps of the Scheduling Algorithm\nFor the purpose of the discussion below, as before, let us denote the accumulated relative export error at a sender for receiver k up until DRi as Rik, and the scheduled delay at the sender before DRi is sent to receiver k as \u03b4ik.\nGiven the above discussion, the algorithm steps are as follows:\n1.\nThe sender computes DRi at (say) time Ti and then computes \u03b4ik from R(i\u22121)k, \u2200k, 1 \u2264 k \u2264 N, based on the estimates of the delays dtk, \u2200k, 1 \u2264 k \u2264 N, as per Equation (1).\nIt schedules DRi to be sent to receiver k at time Ti + \u03b4ik.\n2.\nThe DR vectors are sent to the receivers at the scheduled times and are received after a delay of dak, \u2200k, 1 \u2264 k \u2264 N, where dak may be smaller or larger than dtk.\nThe receivers send the value of dak back to the sender (the receiver can compute this value based on the time stamps on the DR vector as described earlier).\n3.\nThe sender computes Rik as described earlier and illustrated in Figure 2.\nThe sender also recomputes the estimate of delay dtk from the new value of dak for receiver k (using an exponential averaging method similar to round-trip time estimation in TCP [10]).\n4.\nGo back to Step 1 to compute DRi+1 when it is required, and follow the steps of the algorithm to schedule and send this DR vector to the receivers.\n4.6 Handling Cases in Practice\nSo far we implicitly assumed that DRi is sent out to all receivers before a decision is made to compute the next DR vector DRi+1, and that the receivers send the value of dak corresponding to DRi and this information reaches the sender before it computes DRi+1 so that it can compute Ri
+1 k and then use it in the computation of \u03b4i +1 k. Two issues need consideration with respect to the above algorithm when it is used in practice.\n\u2022 It may so happen that a new DR vector is computed even before the previous DR vector is sent out to all receivers.\nHow will this situation be handled?\n\u2022 What happens if the feedback does not arrive before DRi +1 is computed and scheduled to be sent?\nLet us consider the first scenario.\nWe assume that DRi has been scheduled to be sent and the scheduling instants are such that \u03b4i1 < \u03b4i2 < \u2022 \u2022 \u2022 < \u03b4iN.\nAssume that DRi +1 is to be computed (because the real path has deviated, exceeding a threshold, from the path exported by DRi) at time Ti +1 where Ti + \u03b4ik < Ti +1 < Ti + \u03b4i (k +1).\nThis means DRi has been sent only to receivers up to k in the scheduled order.\nIn our algorithm, in this case, the scheduled delay ordering queue is flushed, which means DRi is not sent to receivers still queued to receive it, but a new scheduling order is computed for all the receivers to send DRi +1.\nFor those receivers who have been sent DRi, assume for now that daj, 1 \u2264 j \u2264 k, has been received from all of them (the scenario where daj has not been received will be considered as a part of the second scenario later).\nFor these receivers, Eij, 1 \u2264 j \u2264 k, can be computed.\nFor those receivers j, k +1 \u2264 j \u2264 N, to whom DRi was not sent, Eij does not apply.\nConsider a receiver j, k +1 \u2264 j \u2264 N, to whom DRi was not sent (refer to Figure 3).\nFigure 3: Schedule computation when DRi is not sent to receiver j.\nFor such a receiver j, when DRi +1 is to be scheduled and \u03b4i +1 j needs to be computed, the total export error is the accumulated relative export error at time Ti when the schedule for DRi was computed, plus the integral of the distance between the two trajectories AC and BD of Figure 3 over the time interval [Ti, Ti +1 + \u03b4i +1 j + dtj].\nNote that this integral is given by Err (DRi, Ti, Ti +1) + Err (DRi +1, Ti +1,
Ti +1 + \u03b4i +1 j + dtj).\nTherefore, instead of Eij of Equation (1), we use the value Ri \u22121 j + Err (DRi, Ti, Ti +1) + Err (DRi +1, Ti +1, Ti +1 + \u03b4i +1 j + dtj), where Ri \u22121 j is the relative export error used when the schedule for DRi was computed.\nNow consider the second scenario.\nHere the feedback dak corresponding to DRi has not arrived before DRi +1 is computed and scheduled.\nIn this case, Rik cannot be computed.\nThus, in the computation of \u03b4k for DRi +1, this term will be assumed to be zero.\nWe do assume that a reliable mechanism is used to send dak back to the sender.\nWhen this information arrives at a later time, Rik will be computed and accumulated into future relative export errors (for example, Ri +1 k if dak is received before DRi +2 is computed) and used in the computation of \u03b4k when a future DR vector is to be scheduled (for example, DRi +2).\n4.7 Experimental Results\nIn order to evaluate the effectiveness and quantify the benefits obtained through the use of the scheduling algorithm, we implemented the proposed algorithm in the BZFlag (Battle Zone Flag) [11] game.\nIt is a first-person shooter game where the players in teams drive tanks and move within a battle field.\nThe aim of the players is to navigate and capture flags belonging to the other team and bring them back to their own area.\nThe players shoot each other's tanks using \"shooting bullets\".\nThe movement of the tanks as well as that of the shots are exchanged among the players using DR vectors.\nWe have modified the implementation of BZFlag to incorporate synchronized clocks among the players and the server and to exchange time stamps with the DR vector.\nWe set up a testbed with four players running the instrumented version of BZFlag, with one as a sender and the rest as receivers.\nThe scheduling approach and the base case, where each DR vector was sent to all the receivers concurrently at every trigger point, were implemented in the same run by tagging the DR vectors according to the type of approach used to send the DR vector.\nNISTNet [12] was used to introduce delays across
the sender and the three receivers.\nMean delays of 800ms, 500ms and 200ms were introduced between the sender and the first, second and third receiver, respectively.\nWe introduced a variance of 100ms (around the mean delay of each receiver) to model variability in delay.\nThe sender logged the errors of each receiver every 100 milliseconds for both the scheduling approach and the base case.\nThe sender also calculated the standard deviation and the mean of the accumulated export error of all the receivers every 100 milliseconds.\nFigure 4 plots the mean and standard deviation of the accumulated export error of all the receivers in the scheduling case against the base case.\nNote that the x-axis of these graphs (and the other graphs that follow) represents the system time when the snapshot of the game was taken.\nObserve that the standard deviation of the error with scheduling is much lower as compared to the base case.\nThis implies that the accumulated errors of the receivers in the scheduling case are closer to one another.\nThis shows that the scheduling approach achieves fairness among the receivers even if they are at different distances (i.e., latencies) from the sender.\nObserve, however, that the mean of the accumulated error increased severalfold with scheduling in comparison to the base case.\nFurther exploration of the reason for the rise in the mean led to the conclusion that every time the DR vectors are scheduled in a way to equalize the total error, each receiver's total error is pushed higher.\nAlso, as the accumulated error has an estimated component, the schedule does not equalize the errors for the receivers exactly, leading to the DR vector arriving earlier or later than the actual schedule.\nIn either case, the error is not equalized, and if the DR vector arrives late, it actually increases the error for a receiver beyond the highest accumulated error.\nThis means that at the next trigger, this receiver will be the one with the highest error, and every other
receiver's error will be pushed to this error value.\nThis \"flip-flop\" effect leads to the increase in the accumulated error for all the receivers.\nThe scheduling for fairness leads to the decrease in standard deviation (i.e., it increases the fairness among different players), but it comes at the cost of higher mean error, which may not be a desirable feature.\nThis led us to explore different ways of equalizing the accumulated errors.\nThe approach discussed in the following section is a heuristic approach based on the following idea.\nUsing the same number of DR vectors over time as in the base case, instead of sending the DR vectors to all the receivers at the same frequency as in the base case, if we can increase the frequency of sending DR vectors to the receiver with higher accumulated error and decrease the frequency of sending DR vectors to the receiver with lower accumulated error, we can equalize the export error of all receivers over time.\nAt the same time, we wish to decrease the error of the receiver with the highest accumulated error in the base case (of course, this receiver would be sent more DR vectors than in the base case).\nWe refer to such an algorithm as a budget based algorithm.\n5.\nBUDGET BASED ALGORITHM\nIn a game, the sender of an entity sends DR vectors to all the receivers every time a threshold is crossed by the entity.\nThe lower the threshold, the more DR vectors are generated during a given time period.\nSince the DR vectors are sent to all the receivers and the network delay between the sender-receiver pairs cannot be avoided, the before export error\u00b3 with the most distant player will always be higher than the rest.\n\u00b3Note that the after export error is eliminated by using synchronized clocks among the players.\nFigure 4: Mean and standard deviation of error with scheduling and without (i.e., base case).\nIn order to mitigate the imbalance in the error, we propose to send DR vectors selectively to different players based on the
accumulated errors of these players.\nThe budget based algorithm is based on this idea, and there are two variations of it: a probabilistic budget based scheme and a deterministic budget based scheme.\n5.1 Probabilistic budget based scheme\nThe probabilistic budget based scheme has three main steps: a) lower the dead reckoning threshold but at the same time keep the total number of DRs sent the same as in the base case, b) at every trigger, probabilistically pick a player to send the DR vector to, and c) send the DR vector to the chosen player.\nThese steps are described below.\nThe lowering of the DR threshold is implemented as follows.\nLowering the threshold is equivalent to increasing the number of trigger points where DR vectors are generated.\nSuppose the threshold is such that the number of triggers caused by it in the base case is t, and at each trigger n DR vectors are sent by the sender, which results in a total of nt DR vectors.\nOur goal is to keep the total number of DR vectors sent by the sender fixed at nt, but lower the number of DR vectors sent at each trigger (i.e., not send the DR vector to all the receivers).\nLet n' and t' be the number of DR vectors sent at each trigger and the number of triggers, respectively, in the modified case.\nWe want to ensure n't' = nt.\nSince we want to increase the number of trigger points, i.e., t' > t, this means that n' < n.\nThat is, not all receivers will be sent the DR vector at every trigger.\nIn the probabilistic budget based scheme, at each trigger, a probability is calculated for each receiver to be sent a DR vector and only one receiver is sent the DR (n' = 1).\nThis probability is based on the relative weights of the receivers' accumulated errors.\nThat is, a receiver with a higher accumulated error will have a higher probability of being sent the DR vector.\nConsider that the accumulated errors for three players are a1, a2 and a3, respectively.\nThen the probability of player 1 receiving the DR vector
would be a1 \/ (a1 + a2 + a3), and similarly for the other players.\nOnce the player is picked, the DR vector is sent to that player.\nTo compare the probabilistic budget based algorithm with the base case, we needed to lower the threshold for the base case (for a fair comparison).\nAs the dead reckoning threshold in the base case was already very fine, it was decided that instead of lowering the threshold, the probabilistic budget based approach would be compared against a modified base case that used the same (normal) threshold as the budget based algorithm, but in which only every third trigger was actually used to send out a DR vector to all the three receivers used in our experiments.\nThis was called the 1\/3 base case, as it resulted in 1\/3 the number of DR vectors being sent as compared to the base case.\nThe budget per trigger for the probability based approach was calculated as one DR vector at each trigger, as compared to three DR vectors at every third trigger in the 1\/3 base case; thus the two cases lead to the same number of DR vectors being sent out over time.\nIn order to evaluate the effectiveness of the probabilistic budget based algorithm, we instrumented the BZFlag game to use this approach.\nWe used the same testbed consisting of one sender and three receivers with delays of 800ms, 500ms and 200ms from the sender and with low delay variance (100ms) and moderate delay variance (180ms).\nThe results are shown in Figures 5 and 6.\nAs mentioned earlier, the x-axis of these graphs represents the system time when the snapshot of the game was taken.\nObserve from the figures that the standard deviation of the accumulated error among the receivers with the probabilistic budget based algorithm is less than in the 1\/3 base case and the mean is a little higher than in the 1\/3 base case.\nThis implies that the game is fairer as compared to the 1\/3 base case, at the cost of increasing the mean error by a small amount as compared to the 1\/3 base
case.\nThe increase in mean error in the probabilistic case compared to the 1\/3 base case can be attributed to the fact that, even though the probabilistic approach on average sends the same number of DR vectors as the 1\/3 base case, it sometimes sends DR vectors to a receiver less frequently and sometimes more frequently than the 1\/3 base case due to its probabilistic nature.\nWhen a receiver does not receive a DR vector for a long time, the receiver's trajectory drifts further and further from the sender's trajectory, and hence the rate of buildup of the error at the receiver is higher.\nAt times when a receiver receives DR vectors more frequently, it builds up error at a lower rate, but there is no way of reversing the error that was built up when it did not receive a DR vector for a long time.\nThis leads the receivers to build up more error in the probabilistic case as compared to the 1\/3 base case, where the receivers receive a DR vector almost periodically.\nFigure 5: Mean and standard deviation of error for different algorithms (including budget based algorithms) for low delay variance.\nFigure 6: Mean and standard deviation of error for different algorithms (including budget based algorithms) for moderate delay variance.\n5.2 Deterministic budget based scheme\nTo bound the increase in mean error, we decided to modify the budget based algorithm to be \"deterministic\".\nThe first two steps of the algorithm are the same as in the probabilistic algorithm; the trigger points are increased to lower the threshold, and accumulated errors are used to compute the probability that a receiver will receive a DR vector.\nOnce these steps are completed, a deterministic schedule for the receivers is computed as follows: 1.\nIf there is any receiver (s) tagged to receive a DR vector at the current trigger, the sender sends out the DR vector to the respective receiver (s).\nIf at least one receiver was sent a DR vector, the sender calculates the probabilities of each receiver
receiving a DR vector as explained before and follows steps 2 to 6; else it does not do anything.\n2.\nFor each receiver, the probability value is multiplied by the budget available at each trigger (which is set to 1 as explained below) to give the frequency of sending the DR vector to each receiver.\n3.\nIf any receiver's frequency after multiplying by the budget goes over 1, that receiver's frequency is set to 1 and the surplus amount is equally distributed to all the other receivers by adding the amount to their existing frequencies.\nThis process is repeated until all the receivers have a frequency of less than or equal to 1.\nThis is due to the fact that at a trigger we cannot send more than one DR vector to a given receiver; that would waste DR vectors by sending redundant information.\n4.\n(1\/frequency) gives us the schedule at which the sender should send DR vectors to the respective receiver.\nCredit obtained previously (explained in step 5), if any, is subtracted from the schedule.\nObserve that the resulting value of the schedule might not be an integer; hence, the value is rounded off by taking the ceiling of the schedule.\nFor example, if the frequency is 1\/3.5, this implies that we would like to have a DR vector sent every 3.5 triggers.\nHowever, we are constrained to send it at the 4th trigger, giving us a credit of 0.5.\nWhen we do send the DR vector the next time, we would be able to send it on the 3rd trigger because of the 0.5 credit.\n5.\nThe difference between the schedule and the ceiling of the schedule is the credit that the receiver has obtained, which is remembered for the future and used at the next time as explained in step 4.\n6.\nFor each of those receivers who were sent a DR vector at the current trigger, the receivers are tagged to receive the next DR vector at the trigger that happens exactly schedule (the ceiling of the schedule) triggers away from the current trigger.\nObserve that no other receiver's schedule
is modified at this point, as they are all running a schedule calculated at some previous point in time.\nThose schedules will be automatically modified at the trigger when they are scheduled to receive the next DR vector.\nAt the first trigger, the sender sends the DR vector to all the receivers, uses a relative probability of 1\/n for each receiver, and follows steps 2 to 6 to calculate the next schedule for each receiver in the same way as mentioned for the other triggers.\nThis algorithm ensures that every receiver has a guaranteed schedule of receiving DR vectors and hence there is no irregularity in sending the DR vector to any receiver, as was observed in the probabilistic budget based algorithm.\nWe used the testbed described earlier (three receivers with varying delays) to evaluate the deterministic algorithm using the budget of 1 DR vector per trigger so as to use the same number of DR vectors as in the 1\/3 base case.\nResults from our experiments are shown in Figures 5 and 6.\nIt can be observed that the standard deviation of error in the deterministic budget based algorithm is less than in the 1\/3 base case and the mean error is the same as in the 1\/3 base case.\nThis indicates that the deterministic algorithm is fairer than the 1\/3 base case and at the same time does not increase the mean error, thereby leading to a better game quality compared to the probabilistic algorithm.\nIn general, when comparing the deterministic approach to the probabilistic approach, we found that the mean accumulated error was always less in the deterministic approach.\nWith respect to the standard deviation of the accumulated error, we found that in the fixed or low variance cases the deterministic approach was generally lower, but in the higher variance cases it was harder to draw conclusions, as the probabilistic approach was sometimes better than the deterministic approach.\n6.\nCONCLUSIONS AND FUTURE WORK\nIn distributed multi-player games played across the Internet, object
and player trajectories within the game space are exchanged in terms of DR vectors.\nDue to the variable delay between players, these DR vectors reach different players at different times.\nReceivers who are closer to the sender of the DR gain an unfair advantage, as they are able to render the sender's position more accurately in real time.\nIn this paper, we first developed a model for estimating the \"error\" in rendering player trajectories at the receivers.\nWe then presented an algorithm based on scheduling the DR vectors to be sent to different players at different times, thereby \"equalizing\" the error at different players.\nThis algorithm is aimed at making the game fair to all players, but tends to increase the mean error of the players.\nTo counter this effect, we presented \"budget\" based algorithms where the DR vectors are still scheduled to be sent to different players at different times, but the algorithm balances the need for \"fairness\" with the requirement that the error of the \"worst\" case players (who are furthest from the sender) is not increased compared to the base case (where all DR vectors are sent to all players every time a DR vector is generated).\nWe presented two variations of the budget based algorithms and through experimentation showed that the algorithms reduce the standard deviation of the error, thereby making the game more fair, while at the same time having a mean error comparable to the base case.","keyphrases":["fair","dead-reckon","dead-reckon vector","accuraci","mean error","budget base algorithm","schedul algorithm","distribut multi-player game","quantiz","export error","bucket synchron","network delai","clock synchron"],"prmu":["P","P","P","P","P","P","P","M","U","M","U","M","U"]} {"id":"H-98","title":"Using Asymmetric Distributions to Improve Text Classifier Probability Estimates","abstract":"Text classifiers that give probability estimates are more readily applicable in a variety of scenarios.
For example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model to issue a run-time decision which minimizes a user-specified cost function dynamically chosen at prediction time. However, the quality of the probability estimates is crucial. We review a variety of standard approaches to converting scores (and poor probability estimates) from text classifiers to high quality estimates and introduce new models motivated by the intuition that the empirical score distribution for the extremely irrelevant, hard to discriminate, and obviously relevant items are often significantly different. Finally, we analyze the experimental performance of these models over the outputs of two text classifiers. The analysis demonstrates that one of these models is theoretically attractive (introducing few new parameters while increasing flexibility), computationally efficient, and empirically preferable.","lvl-1":"Using Asymmetric Distributions to Improve Text Classifier Probability Estimates Paul N. Bennett Computer Science Dept. 
Carnegie Mellon University Pittsburgh, PA 15213 pbennett+@cs.cmu.edu ABSTRACT Text classifiers that give probability estimates are more readily applicable in a variety of scenarios.\nFor example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model to issue a run-time decision which minimizes a user-specified cost function dynamically chosen at prediction time.\nHowever, the quality of the probability estimates is crucial.\nWe review a variety of standard approaches to converting scores (and poor probability estimates) from text classifiers to high quality estimates and introduce new models motivated by the intuition that the empirical score distributions for the extremely irrelevant, hard to discriminate, and obviously relevant items are often significantly different.\nFinally, we analyze the experimental performance of these models over the outputs of two text classifiers.\nThe analysis demonstrates that one of these models is theoretically attractive (introducing few new parameters while increasing flexibility), computationally efficient, and empirically preferable.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; I.2.6 [Artificial Intelligence]: Learning; I.5.2 [Pattern Recognition]: Design Methodology General Terms Algorithms, Experimentation, Reliability.\n1.\nINTRODUCTION Text classifiers that give probability estimates are more flexible in practice than those that give only a simple classification or even a ranking.\nFor example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model [8] to issue a run-time decision which minimizes the expected cost of a user-specified cost function dynamically chosen at prediction time.\nThis can be used to minimize a linear utility cost function for filtering tasks where pre-specified costs of relevant\/irrelevant are not available during training but are specified at prediction 
time.\nFurthermore, the costs can be changed without retraining the model.\nAdditionally, probability estimates are often used as the basis of deciding which document's label to request next during active learning [17, 23].\nEffective active learning can be key in many information retrieval tasks where obtaining labeled data can be costly, since it can severely reduce the amount of labeled data needed to reach the same performance as when new labels are requested randomly [17].\nFinally, they are also amenable to making other types of cost-sensitive decisions [26] and to combining decisions [3].\nHowever, in all of these tasks, the quality of the probability estimates is crucial.\nParametric models generally use assumptions that the data conform to the model to trade off flexibility with the ability to estimate the model parameters accurately with little training data.\nSince many text classification tasks often have very little training data, we focus on parametric methods.\nHowever, most of the existing parametric methods that have been applied to this task have an assumption we find undesirable.\nWhile some of these methods allow the distributions of the documents relevant and irrelevant to the topic to have different variances, they typically enforce the unnecessary constraint that the documents are symmetrically distributed around their respective modes.\nWe introduce several asymmetric parametric models that allow us to relax this assumption without significantly increasing the number of parameters and demonstrate how we can efficiently fit the models.\nAdditionally, these models can be interpreted as assuming the scores produced by the text classifier have three basic types of empirical behavior: one corresponding to each of the extremely irrelevant, hard to discriminate, and obviously relevant items.\nWe first review related work on improving probability estimates and score modeling in information retrieval.\nThen, we discuss in further detail the need for 
asymmetric models.\nAfter this, we describe two specific asymmetric models and, using two standard text classifiers, na\u00efve Bayes and SVMs, demonstrate how they can be efficiently used to recalibrate poor probability estimates or produce high quality probability estimates from raw scores.\nWe then review experiments using previously proposed methods and the asymmetric methods over several text classification corpora to demonstrate the strengths and weaknesses of the various methods.\nFinally, we summarize our contributions and discuss future directions.\n2.\nRELATED WORK Parametric models have been employed to obtain probability estimates in several areas of information retrieval.\nLewis & Gale [17] use logistic regression to recalibrate na\u00efve Bayes, though the quality of the probability estimates is not directly evaluated; it is simply performed as an intermediate step in active learning.\nManmatha et al. [20] introduced models appropriate to produce probability estimates from relevance scores returned from search engines and demonstrated how the resulting probability estimates could be subsequently employed to combine the outputs of several search engines.\nThey use a different parametric distribution for the relevant and irrelevant classes, but do not pursue two-sided asymmetric distributions for a single class as described here.\nThey also survey the long history of modeling the relevance scores of search engines.\nOur work is similar in flavor to these previous attempts to model search engine scores, but we target text classifier outputs, which we have found demonstrate a different type of score distribution behavior because of the role of training data.\nFocus on improving probability estimates has been growing lately.\nZadrozny & Elkan [26] provide a corrective measure for decision trees (termed curtailment) and a non-parametric method for recalibrating na\u00efve Bayes.\nIn more recent work [27], they investigate using a 
semi-parametric method that uses a monotonic piecewise-constant fit to the data and apply the method to na\u00efve Bayes and a linear SVM.\nWhile they compared their methods to other parametric methods based on symmetry, they fail to provide significance test results.\nOur work provides asymmetric parametric methods which complement the non-parametric and semi-parametric methods they propose when data scarcity is an issue.\nIn addition, their methods reduce the resolution of the scores output by the classifier (the number of distinct values output), but the methods here do not have such a weakness since they are continuous functions.\nThere is a variety of other work that this paper extends.\nPlatt [22] uses a logistic regression framework that models noisy class labels to produce probabilities from the raw output of an SVM.\nHis work showed that this post-processing method not only can produce probability estimates of similar quality to SVMs directly trained to produce probabilities (regularized likelihood kernel methods), but it also tends to produce sparser kernels (which generalize better).\nFinally, Bennett [1] obtained moderate gains by applying Platt's method to the recalibration of na\u00efve Bayes but found there were more problematic areas than when it was applied to SVMs.\nRecalibrating poorly calibrated classifiers is not a new problem.\nLindley et al. [19] first proposed the idea of recalibrating classifiers, and DeGroot & Fienberg [5, 6] gave the now accepted standard formalization for the problem of assessing calibration initiated by others [4, 24].\n3.\nPROBLEM DEFINITION & APPROACH Our work differs from earlier approaches primarily in three points: (1) We provide asymmetric parametric models suitable for use when little training data is available; (2) We explicitly analyze the quality of probability estimates these and competing methods produce and provide significance tests for these results; (3) We target text classifier outputs 
where a majority of the previous literature targeted the output of search engines.\n3.1 Problem Definition The general problem we are concerned with is highlighted in Figure 1.\nA text classifier produces a prediction about a document and gives a score s(d) indicating the strength of its decision that the document belongs to the positive class (relevant to the topic).\nWe assume throughout there are only two classes: the positive and the negative (or irrelevant) class ('+' and '\u2212' respectively).\nThere are two general types of parametric approaches.\nThe first of these tries to fit the posterior function directly, i.e. there is one function estimator that performs a direct mapping of the score s to the probability P(+|s(d)).\n[Figure 1: We are concerned with how to perform the box highlighted in grey.\nThe internals are for one type of approach.]\nThe second type of approach breaks the problem down as shown in the grey box of Figure 1.\nAn estimator for each of the class-conditional densities (i.e. 
p(s|+) and p(s|\u2212)) is produced, then Bayes' rule and the class priors are used to obtain the estimate for P(+|s(d)).\n3.2 Motivation for Asymmetric Distributions Most of the previous parametric approaches to this problem either directly or indirectly (when fitting only the posterior) correspond to fitting Gaussians to the class-conditional densities; they differ only in the criterion used to estimate the parameters.\nWe can visualize this as depicted in Figure 2.\nSince increasing s usually indicates increased likelihood of belonging to the positive class, the rightmost distribution usually corresponds to p(s|+).\n[Figure 2: Typical View of Discrimination based on Gaussians]\nHowever, using standard Gaussians fails to capitalize on a basic characteristic commonly seen.\nNamely, if we have a raw output score that can be used for discrimination, then the empirical behavior between the modes (label B in Figure 2) is often very different from that outside of the modes (labels A and C in Figure 2).\nIntuitively, the area between the modes corresponds to the hard examples, which are difficult for this classifier to distinguish, while the areas outside the modes are the extreme examples that are usually easily distinguished.\nThis suggests that we may want to uncouple the scale of the outside and inside segments of the distribution (as depicted by the curve denoted as A-Gaussian in Figure 3).\nAs a result, an asymmetric distribution may be a more appropriate choice for application to the raw output score of a classifier.\nIdeally (i.e. 
perfect classification) there will exist scores \u03b8\u2212 and \u03b8+ such that all examples with score greater than \u03b8+ are relevant and all examples with scores less than \u03b8\u2212 are irrelevant.\nFurthermore, no examples fall between \u03b8\u2212 and \u03b8+.\nThe distance | \u03b8\u2212 \u2212 \u03b8+ | corresponds to the margin in some classifiers, and an attempt is often made to maximize this quantity.\nBecause text classifiers have training data to use to separate the classes, the final behavior of the score distributions is primarily a factor of the amount of training data and the consequent separation in the classes achieved.\nThis is in contrast to search engine retrieval, where the distribution of scores is more a factor of language distribution across documents, the similarity function, and the length and type of query.\nPerfect classification corresponds to using two very asymmetric distributions, but in this case, the probabilities are actually one and zero and many methods will work for typical purposes.\nPractically, some examples will fall between \u03b8\u2212 and \u03b8+, and it is often important to estimate the probabilities of these examples well (since they correspond to the hard examples).\nJustifications can be given both for why you may find more examples between \u03b8\u2212 and \u03b8+ than outside of them and for why you may find fewer, but there are few empirical reasons to believe that the distributions should be symmetric.\nA natural first candidate for an asymmetric distribution is to generalize a common symmetric distribution, e.g. 
the Laplace or the Gaussian.\nAn asymmetric Laplace distribution can be achieved by placing two exponentials around the mode in the following manner: p(x | \u03b8, \u03b2, \u03b3) = (\u03b2\u03b3\/(\u03b2+\u03b3)) exp[\u2212\u03b2(\u03b8 \u2212 x)] for x \u2264 \u03b8, and (\u03b2\u03b3\/(\u03b2+\u03b3)) exp[\u2212\u03b3(x \u2212 \u03b8)] for x > \u03b8, with \u03b2, \u03b3 > 0, (1) where \u03b8, \u03b2, and \u03b3 are the model parameters.\n\u03b8 is the mode of the distribution, \u03b2 is the inverse scale of the exponential to the left of the mode, and \u03b3 is the inverse scale of the exponential to the right.\nWe will use the notation \u039b(X | \u03b8, \u03b2, \u03b3) to refer to this distribution.\n[Figure 3: Gaussians vs. Asymmetric Gaussians.\nA Shortcoming of Symmetric Distributions - The vertical lines show the modes as estimated nonparametrically.]\nWe can create an asymmetric Gaussian in the same manner: p(x | \u03b8, \u03c3l, \u03c3r) = (2\/(\u221a(2\u03c0)(\u03c3l+\u03c3r))) exp[\u2212(x\u2212\u03b8)^2\/(2\u03c3l^2)] for x \u2264 \u03b8, and (2\/(\u221a(2\u03c0)(\u03c3l+\u03c3r))) exp[\u2212(x\u2212\u03b8)^2\/(2\u03c3r^2)] for x > \u03b8, with \u03c3l, \u03c3r > 0, (2) where \u03b8, \u03c3l, and \u03c3r are the model parameters.\nTo refer to this asymmetric Gaussian, we use the notation \u0393(X | \u03b8, \u03c3l, \u03c3r).\nWhile these distributions are composed of halves, the resulting function is a single continuous distribution.\nThese distributions allow us to fit our data with much greater flexibility at the cost of only fitting six parameters.\nWe could instead try mixture models for each component or other extensions, but most other extensions require at least as many parameters (and can often be more computationally expensive).\nIn addition, the motivation above should provide significant cause to believe the underlying distributions actually behave in this way.\nFurthermore, this 
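The two densities above are simple to evaluate directly. As a minimal illustrative sketch (not code from the paper), the piecewise definitions in Eqs. (1) and (2) can be written as:

```python
import math

def asym_laplace_pdf(x, theta, beta, gamma):
    # Eq. (1): exponentials with inverse scales beta (left of the mode
    # theta) and gamma (right), sharing one normalizing constant so the
    # density is continuous at theta.
    norm = beta * gamma / (beta + gamma)
    if x <= theta:
        return norm * math.exp(-beta * (theta - x))
    return norm * math.exp(-gamma * (x - theta))

def asym_gauss_pdf(x, theta, sigma_l, sigma_r):
    # Eq. (2): two Gaussian halves with scales sigma_l and sigma_r,
    # again joined continuously at the mode theta.
    norm = 2.0 / (math.sqrt(2.0 * math.pi) * (sigma_l + sigma_r))
    if x <= theta:
        return norm * math.exp(-((x - theta) ** 2) / (2.0 * sigma_l ** 2))
    return norm * math.exp(-((x - theta) ** 2) / (2.0 * sigma_r ** 2))
```

With beta = gamma (or sigma_l = sigma_r) these reduce to the usual symmetric Laplace and Gaussian, which is the sense in which the family "can still fit a symmetric distribution."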
family of distributions can still fit a symmetric distribution, and finally, in the empirical evaluation, evidence is presented that demonstrates this asymmetric behavior (see Figure 4).\nTo our knowledge, neither family of distributions has been previously used in machine learning or information retrieval.\nBoth are termed generalizations of an Asymmetric Laplace in [14], but we refer to them as described above to reflect the nature of how we derived them for this task.\n3.3 Estimating the Parameters of the Asymmetric Distributions This section develops the method for finding maximum likelihood estimates (MLE) of the parameters for the above asymmetric distributions.\nIn order to find the MLEs, we have two choices: (1) use numerical estimation to estimate all three parameters at once, or (2) fix the value of \u03b8, estimate the other two (\u03b2 and \u03b3, or \u03c3l and \u03c3r) given our choice of \u03b8, and then consider alternate values of \u03b8.\nBecause of the simplicity of analysis in the latter alternative, we choose that method.\n3.3.1 Asymmetric Laplace MLEs For D = {x1, x2, ... , xN } where the xi are i.i.d. 
and X \u223c \u039b(X | \u03b8, \u03b2, \u03b3), the likelihood is \u220f_{i=1}^{N} \u039b(xi | \u03b8, \u03b2, \u03b3).\nNow, we fix \u03b8 and compute the maximum likelihood for that choice of \u03b8.\nThen, we can simply consider all choices of \u03b8 and choose the one with the maximum likelihood over all choices of \u03b8.\nThe complete derivation is omitted because of space but is available in [2].\nWe define the following values: Nl = | {x \u2208 D | x \u2264 \u03b8} |, Nr = | {x \u2208 D | x > \u03b8} |, Sl = \u2211_{x\u2208D, x\u2264\u03b8} x, Sr = \u2211_{x\u2208D, x>\u03b8} x, Dl = Nl\u03b8 \u2212 Sl, Dr = Sr \u2212 Nr\u03b8.\nNote that Dl and Dr are the sum of the absolute differences between the x belonging to the left and right halves of the distribution (respectively) and \u03b8.\nFinally, the MLEs for \u03b2 and \u03b3 for a fixed \u03b8 are: \u03b2MLE = N \/ (Dl + \u221a(DrDl)), \u03b3MLE = N \/ (Dr + \u221a(DrDl)).\n(3) These estimates are not wholly unexpected since we would obtain Nl \/ Dl if we were to estimate \u03b2 independently of \u03b3.\nThe elegance of the formulae is that the estimates will tend to be symmetric only insofar as the data dictate it (i.e. the closer Dl and Dr are to being equal, the closer the resulting inverse scales).\nBy continuity arguments, when N = 0, we assign \u03b2 = \u03b3 = \u03b50, where \u03b50 is a small constant that acts to disperse the distribution to a uniform.\nSimilarly, when N \u2260 0 and Dl = 0, we assign \u03b2 = \u03b5\u221e, where \u03b5\u221e is a very large constant that corresponds to an extremely sharp distribution (i.e. 
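The closed-form estimates of Eq. (3), combined with the sweep over candidate \u03b8s, can be sketched as follows. This is an illustrative reimplementation, not the authors' code; for brevity it restricts candidate \u03b8s to the observed scores, whereas the paper also tests points between adjacent scores.

```python
import math

def fit_asym_laplace(scores):
    # Sweep candidate thetas in sorted order, maintaining the counts and
    # sums (Nl, Sl, Nr, Sr) incrementally, and apply the fixed-theta MLEs
    # of Eq. (3): beta = N/(Dl + sqrt(Dr*Dl)), gamma = N/(Dr + sqrt(Dr*Dl)).
    xs = sorted(scores)
    N = len(xs)
    best_params, best_ll = None, -math.inf
    Sl, Sr = 0.0, float(sum(xs))
    for i, theta in enumerate(xs):
        Sl += theta                      # theta itself joins the left half
        Sr -= theta
        Nl, Nr = i + 1, N - (i + 1)
        Dl = Nl * theta - Sl
        Dr = Sr - Nr * theta
        root = math.sqrt(Dl * Dr)
        if Dl + root <= 0.0 or Dr + root <= 0.0:
            continue  # degenerate split; the paper handles these with limiting constants
        beta, gamma = N / (Dl + root), N / (Dr + root)
        # log-likelihood: N*log(beta*gamma/(beta+gamma)) - beta*Dl - gamma*Dr
        ll = N * math.log(beta * gamma / (beta + gamma)) - beta * Dl - gamma * Dr
        if ll > best_ll:
            best_params, best_ll = (theta, beta, gamma), ll
    return best_params
```

Since each candidate \u03b8 is evaluated in constant time after the sort, the whole fit is dominated by the O(n log n) sort, matching the complexity argument in the text.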
almost all mass at \u03b8 for that half).\nDr = 0 is handled similarly.\nAssuming that \u03b8 falls in some range [\u03c6, \u03c8] dependent upon only the observed documents, this alternative is also easily computable.\nGiven Nl, Sl, Nr, Sr, we can compute the posterior and the MLEs in constant time.\nIn addition, if the scores are sorted, then we can perform the whole process quite efficiently.\nStarting with the minimum \u03b8 = \u03c6 we would like to try, we loop through the scores once and set Nl, Sl, Nr, Sr appropriately.\nThen we increase \u03b8 and just step past the scores that have shifted from the right side of the distribution to the left.\nAssuming the number of candidate \u03b8s is O(n), this process is O(n), and the overall process is dominated by sorting the scores, O(n log n) (or expected linear time).\n3.3.2 Asymmetric Gaussian MLEs For D = {x1, x2, ... , xN } where the xi are i.i.d. and X \u223c \u0393(X | \u03b8, \u03c3l, \u03c3r), the likelihood is \u220f_{i=1}^{N} \u0393(xi | \u03b8, \u03c3l, \u03c3r).\nThe MLEs can be worked out similar to the above.\nWe assume the same definitions as above (the complete derivation omitted for space is available in [2]), and in addition, let: Sl2 = \u2211_{x\u2208D, x\u2264\u03b8} x^2, Sr2 = \u2211_{x\u2208D, x>\u03b8} x^2, Dl2 = Sl2 \u2212 2Sl\u03b8 + \u03b8^2 Nl, Dr2 = Sr2 \u2212 2Sr\u03b8 + \u03b8^2 Nr, so that Dl2 and Dr2 are the sums of squared differences between the x in each half and \u03b8.\nThe analytical solution for the MLEs for a fixed \u03b8 is: \u03c3l,MLE = \u221a((Dl2 + Dl2^{2\/3} Dr2^{1\/3}) \/ N) (4) and \u03c3r,MLE = \u221a((Dr2 + Dr2^{2\/3} Dl2^{1\/3}) \/ N).\n(5) By continuity arguments, when N = 0, we assign \u03c3r = \u03c3l = \u03b5\u221e, and when N \u2260 0 and Dl2 = 0 (resp.\nDr2 = 0), we assign \u03c3l = \u03b50 (resp.\n\u03c3r = \u03b50).\nAgain, the same computational complexity analysis applies to estimating these parameters.\n4.\nEXPERIMENTAL ANALYSIS 4.1 Methods For each of the methods that use a class prior, we use a smoothed add-one estimate, i.e. 
P(c) = (|c|+1)\/(N+2), where N is the number of documents.\nFor methods that fit the class-conditional densities, p(s|+) and p(s|\u2212), the resulting densities are inverted using Bayes' rule as described above.\nAll of the methods below are fit using maximum likelihood estimates.\nFor recalibrating a classifier (i.e. correcting poor probability estimates output by the classifier), it is usual to use the log-odds of the classifier's estimate as s(d).\nThe log-odds are defined to be log[P(+|d)\/P(\u2212|d)].\nThe normal decision threshold (minimizing error) in terms of log-odds is at zero (i.e. P(+|d) = P(\u2212|d) = 0.5).\nSince it scales the outputs to a space [\u2212\u221e, \u221e], the log-odds make normal (and similar distributions) applicable [19].\nLewis & Gale [17] give a more motivating viewpoint: fitting the log-odds is a dampening effect for the inaccurate independence assumption and a bias correction for inaccurate estimates of the priors.\nIn general, fitting the log-odds can serve to boost or dampen the signal from the original classifier as the data dictate.\nGaussians A Gaussian is fit to each of the class-conditional densities, using the usual maximum likelihood estimates.\nThis method is denoted in the tables below as Gauss.\nAsymmetric Gaussians An asymmetric Gaussian is fit to each of the class-conditional densities using the maximum likelihood estimation procedure described above.\nIntervals between adjacent scores are divided by 10 in testing candidate \u03b8s, i.e. 8 points between actual scores occurring in the data set are tested.\nThis method is denoted as A. 
Gauss.\nLaplace Distributions Even though Laplace distributions are not typically applied to this task, we also tried this method to isolate why benefit is gained from the asymmetric form.\nThe usual MLEs were used for estimating the location and scale of a classical symmetric Laplace distribution as described in [14].\nWe denote this method as Laplace below.\nAsymmetric Laplace Distributions An asymmetric Laplace is fit to each of the class-conditional densities using the maximum likelihood estimation procedure described above.\nAs with the asymmetric Gaussian, intervals between adjacent scores are divided by 10 in testing candidate \u03b8s.\nThis method is denoted as A. Laplace below.\nLogistic Regression This method is the first of two methods we evaluated that directly fit the posterior, P(+|s(d)).\nBoth methods restrict the set of families to a two-parameter sigmoid family; they differ primarily in their model of class labels.\nAs opposed to the above methods, one can argue that an additional boon of these methods is that they completely preserve the ranking given by the classifier.\nWhen this is desired, these methods may be more appropriate.\nThe previous methods will mostly preserve the rankings, but they can deviate if the data dictate it.\nThus, they may model the data behavior better at the cost of departing from a monotonicity constraint in the output of the classifier.\nLewis & Gale [17] use logistic regression to recalibrate na\u00efve Bayes for subsequent use in active learning.\nThe model they use is: P(+|s(d)) = exp(a + b s(d)) \/ (1 + exp(a + b s(d))).\n(6) Instead of using the probabilities directly output by the classifier, they use the log-likelihood ratio of the probabilities, log[P(d|+)\/P(d|\u2212)], as the score s(d).\nInstead of using this below, we will use the log-odds ratio.\nThis does not affect the model as it simply shifts all of the scores by a constant determined by the priors.\nWe refer to this method as LogReg below.\nLogistic 
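As a hedged illustration of the sigmoid model of Eq. (6) (not the fitting procedure used by Lewis & Gale or Platt), the two parameters a and b can be fit by simple batch gradient ascent on the log-likelihood:

```python
import math

def fit_logreg(scores, labels, iters=2000, lr=0.1):
    # Fit P(+|s) = exp(a + b*s) / (1 + exp(a + b*s)) from Eq. (6)
    # by gradient ascent on the log-likelihood; labels are 0/1.
    a, b = 0.0, 0.0
    n = float(len(scores))
    for _ in range(iters):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a + b * s)))
            ga += y - p          # gradient w.r.t. the intercept a
            gb += (y - p) * s    # gradient w.r.t. the slope b
        a += lr * ga / n
        b += lr * gb / n
    return a, b
```

Because the model is monotonic in s(d), any fit of this form preserves the classifier's ranking, which is the property the text contrasts with the density-based methods.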
Regression with Noisy Class Labels Platt [22] proposes a framework that extends the logistic regression model above to incorporate noisy class labels and uses it to produce probability estimates from the raw output of an SVM.\nThis model differs from the LogReg model only in how the parameters are estimated.\nThe parameters are still fit using maximum likelihood estimation, but a model of noisy class labels is used in addition to allow for the possibility that the class was mislabeled.\nThe noise is modeled by assuming there is a finite probability of mislabeling a positive example and of mislabeling a negative example; these two noise estimates are determined by the number of positive examples and the number of negative examples (using Bayes' rule to infer the probability of an incorrect label).\nEven though the performance of this model would not be expected to deviate much from LogReg, we evaluate it for completeness.\nWe refer to this method below as LR+Noise.\n4.2 Data We examined several corpora, including the MSN Web Directory, Reuters, and TREC-AP.\nMSN Web Directory The MSN Web Directory is a large collection of heterogeneous web pages (from a May 1999 web snapshot) that have been hierarchically classified.\nWe used the same train\/test split of 50078\/10024 documents as that reported in [9].\nThe MSN Web hierarchy is a seven-level hierarchy; we used all 13 of the top-level categories.\nThe class proportions in the training set vary from 1.15% to 22.29%.\nIn the testing set, they range from 1.14% to 21.54%.\nThe classes are general subjects such as Health & Fitness and Travel & Vacation.\nHuman indexers assigned the documents to zero or more categories.\nFor the experiments below, we used only the top 1000 words with highest mutual information for each class; approximately 195K words appear in at least three training documents.\nReuters The Reuters 21578 corpus [16] contains Reuters news articles from 1987.\nFor this data set, we used the ModApte standard 
train\/test split of 9603\/3299 documents (8676 unused documents).\nThe classes are economic subjects (e.g., acq for acquisitions, earn for earnings, etc.) that human taggers applied to the document; a document may have multiple subjects.\nThere are actually 135 classes in this domain (only 90 of which occur in the training and testing set); however, we only examined the ten most frequent classes since small numbers of testing examples make interpreting some performance measures difficult due to high variance.1 Limiting to the ten largest classes allows us to compare our results to previously published results [10, 13, 21, 22].\nThe class proportions in the training set vary from 1.88% to 29.96%.\nIn the testing set, they range from 1.7% to 32.95%.\nFor the experiments below we used only the top 300 words with highest mutual information for each class; approximately 15K words appear in at least three training documents.\nTREC-AP The TREC-AP corpus is a collection of AP news stories from 1988 to 1990.\nWe used the same train\/test split of 142791\/66992 documents that was used in [18].\nAs described in [17] (see also [15]), the categories are defined by keywords in a keyword field.\nThe title and body fields are used in the experiments below.\nThere are twenty categories in total.\nThe class proportions in the training set vary from 0.06% to 2.03%.\nIn the testing set, they range from 0.03% to 4.32%.\nFor the experiments described below, we use only the top 1000 words with the highest mutual information for each class; approximately 123K words appear in at least 3 training documents.\n4.3 Classifiers We selected two classifiers for evaluation: a linear SVM classifier, which is a discriminative classifier that does not normally output probability values, and a na\u00efve Bayes classifier, whose probability outputs are often poor [1, 7] but can be improved [1, 26, 27].\n1 A separate comparison of only LogReg, LR+Noise, and A. 
Laplace over all 90 categories of Reuters was also conducted.\nAfter accounting for the variance, that evaluation also supported the claims made here.\nSVM For linear SVMs, we use the Smox toolkit, which is based on Platt's Sequential Minimal Optimization algorithm.\nThe features were represented as continuous values.\nWe used the raw output score of the SVM as s(d) since it has been shown to be appropriate before [22].\nThe normal decision threshold (assuming we are seeking to minimize errors) for this classifier is at zero.\nNa\u00efve Bayes The na\u00efve Bayes classifier model is a multinomial model [21].\nWe smoothed word and class probabilities using a Bayesian estimate (with the word prior) and a Laplace m-estimate, respectively.\nWe use the log-odds estimated by the classifier as s(d).\nThe normal decision threshold is at zero.\n4.4 Performance Measures We use log-loss [12] and squared error [4, 6] to evaluate the quality of the probability estimates.\nFor a document d with class c(d) \u2208 {+, \u2212} (i.e. 
the data have known labels and not probabilities), log-loss is defined as \u03b4(c(d), +) log P(+|d) + \u03b4(c(d), \u2212) log P(\u2212|d), where \u03b4(a, b) = 1 if a = b and 0 otherwise.\nThe squared error is \u03b4(c(d), +)(1 \u2212 P(+|d))^2 + \u03b4(c(d), \u2212)(1 \u2212 P(\u2212|d))^2.\nWhen the class of a document is correctly predicted with a probability of one, log-loss is zero and squared error is zero.\nWhen the class of a document is incorrectly predicted with a probability of one, log-loss is \u2212\u221e and squared error is one.\nThus, both measures assess how close an estimate comes to correctly predicting the item's class but vary in how harshly incorrect predictions are penalized.\nWe report only the sum of these measures and omit the averages for space.\nTheir averages, average log-loss and mean squared error (MSE), can be computed from these totals by dividing by the number of binary decisions in a corpus.\nIn addition, we also compare the error of the classifiers at their default thresholds and with the probabilities.\nThis evaluates how the probability estimates have improved with respect to the decision threshold P(+|d) = 0.5.\nThus, error only indicates how the methods would perform if a false positive were penalized the same as a false negative, and not the general quality of the probability estimates.\nIt is presented simply to provide the reader with a more complete understanding of the empirical tendencies of the methods.\nWe use a standard paired micro sign test [25] to determine statistical significance in the difference of all measures.\nOnly pairs that the methods disagree on are used in the sign test.\nThis test compares pairs of scores from two systems with the null hypothesis that the number of items they disagree on is binomially distributed.\nWe use a significance level of p = 0.01.\n4.5 Experimental Methodology As the categories under consideration in the experiments are not mutually exclusive, the classification was done 
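The two per-document measures are direct to compute; a small sketch, assuming class labels are encoded here as the strings '+' and '-':

```python
import math

def log_loss(p_pos, label):
    # delta(c(d),+)*log P(+|d) + delta(c(d),-)*log P(-|d)
    return math.log(p_pos) if label == '+' else math.log(1.0 - p_pos)

def squared_error(p_pos, label):
    # delta(c(d),+)*(1 - P(+|d))^2 + delta(c(d),-)*(1 - P(-|d))^2
    return (1.0 - p_pos) ** 2 if label == '+' else p_pos ** 2
```

Summing these over all binary decisions in a corpus gives totals of the kind reported in the results; dividing by the number of decisions gives average log-loss and MSE.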
by training n binary classifiers, where n is the number of classes.\nIn order to generate the scores that each method uses to fit its probability estimates, we use five-fold cross-validation on the training data.\nWe note that even though it is computationally efficient to perform leave-one-out cross-validation for the na\u00efve Bayes classifier, this may not be desirable since the distribution of scores can be skewed as a result.\nOf course, as with any application of n-fold cross-validation, it is also possible to bias the results by holding n too low and underestimating the performance of the final classifier.\n4.6 Results & Discussion The results for recalibrating na\u00efve Bayes are given in Table 1a.\nTable 1b gives results for producing probabilistic outputs for SVMs.\nTable 1a: Results for na\u00efve Bayes\nCorpus    Method        Log-loss       Error^2      Errors\nMSN Web   Gauss         -60656.41      10503.30     10754\nMSN Web   A. Gauss      -57262.26      8727.47      9675\nMSN Web   Laplace       -45363.84      8617.59      10927\nMSN Web   A. Laplace    -36765.88      6407.84\u2020     8350\nMSN Web   LogReg        -36470.99      6525.47      8540\nMSN Web   LR+Noise      -36468.18      6534.61      8563\nMSN Web   na\u00efve Bayes   -1098900.83    17117.50     17834\nReuters   Gauss         -5523.14       1124.17      1654\nReuters   A. Gauss      -4929.12       652.67       888\nReuters   Laplace       -5677.68       1157.33      1416\nReuters   A. Laplace    -3106.95\u2021      554.37\u2021      726\nReuters   LogReg        -3375.63       603.20       786\nReuters   LR+Noise      -3374.15       604.80       785\nReuters   na\u00efve Bayes   -52184.52      1969.41      2121\nTREC-AP   Gauss         -57872.57      8431.89      9705\nTREC-AP   A. Gauss      -66009.43      7826.99      8865\nTREC-AP   Laplace       -61548.42      9571.29      11442\nTREC-AP   A. Laplace    -48711.55      7251.87\u2021     8642\nTREC-AP   LogReg        -48250.81      7540.60      8797\nTREC-AP   LR+Noise      -48251.51      7544.84      8801\nTREC-AP   na\u00efve Bayes   -1903487.10    41770.21     43661\nTable 1b: Results for SVM\nCorpus    Method        Log-loss       Error^2      Errors\nMSN Web   Gauss         -54463.32      9090.57      10555\nMSN Web   A. Gauss      -44363.70      6907.79      8375\nMSN Web   Laplace       -42429.25      7669.75      10201\nMSN Web   A. Laplace    -31133.83      5003.32      6170\nMSN Web   LogReg        -30209.36      5158.74      6480\nMSN Web   LR+Noise      -30294.01      5209.80      6551\nMSN Web   Linear SVM    N\/A            N\/A          6602\nReuters   Gauss         -3955.33       589.25       735\nReuters   A. Gauss      -4580.46       428.21       532\nReuters   Laplace       -3569.36       640.19       770\nReuters   A. Laplace    -2599.28       412.75       505\nReuters   LogReg        -2575.85       407.48       509\nReuters   LR+Noise      -2567.68       408.82       516\nReuters   Linear SVM    N\/A            N\/A          516\nTREC-AP   Gauss         -54620.94      6525.71      7321\nTREC-AP   A. Gauss      -77729.49      6062.64      6639\nTREC-AP   Laplace       -54543.19      7508.37      9033\nTREC-AP   A. Laplace    -48414.39      5761.25\u2021     6572\u2021\nTREC-AP   LogReg        -48285.56      5914.04      6791\nTREC-AP   LR+Noise      -48214.96      5919.25      6794\nTREC-AP   Linear SVM    N\/A            N\/A          6718\nTable 1: (a) Results for na\u00efve Bayes and (b) SVM.\nThe best entry for a corpus is in bold.\nEntries that are statistically significantly better than all other entries are underlined.\nA \u2020 denotes the method is significantly better than all other methods except for na\u00efve Bayes.\nA \u2021 denotes the entry is significantly better than all other methods except for A. Gauss (and na\u00efve Bayes for Table 1a).\nThe reason for this distinction in significance tests is described in the text.\nWe start with general observations that result from examining the performance of these methods over the various corpora.\nThe first is that A. Laplace, LR+Noise, and LogReg quite clearly outperform the other methods.\nThere is usually little difference between the performance of LR+Noise and LogReg (both as shown here and on a decision by decision basis), but this is unsurprising since LR+Noise just adds noisy class labels to the LogReg model.\nWith respect to the three different measures, LR+Noise and LogReg tend to perform slightly better (but never significantly) than A. Laplace at some tasks with respect to log-loss and squared error.\nHowever, A. 
Laplace always produces the fewest errors across all of the tasks, though at times the degree of improvement is not significant.

In order to give the reader a better sense of the behavior of these methods, Figures 4-5 show the fits produced by the most competitive of these methods versus the actual data behavior (as estimated nonparametrically by binning) for class Earn in Reuters. Figure 4 shows the class-conditional densities, and thus only A. Laplace is shown, since LogReg fits the posterior directly. Figure 5 shows the estimates of the log-odds, i.e. log [P(Earn|s(d)) / P(¬Earn|s(d))]. Viewing the log-odds (rather than the posterior) usually makes errors in estimation easier to detect by eye.

We can break things down as the sign test does and just look at wins and losses on the items that the methods disagree on. Viewed this way, only two methods (naïve Bayes and A. Gauss) ever have more pairwise wins than A. Laplace; those two sometimes have more pairwise wins on log-loss and squared error even though their totals never win (i.e., they are dragged down by heavy penalties). In addition, this comparison of pairwise wins means that for those cases where LogReg and LR+Noise have better scores than A. Laplace, the difference would not be deemed significant by the sign test at any level, since they do not have more wins. For example, of the 130K binary decisions over the MSN Web dataset, A. Laplace had approximately 101K pairwise wins versus LogReg and LR+Noise. No method ever has more pairwise wins than A.
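The pairwise-wins comparison feeding the sign test can be made concrete with a small sketch (an exact two-sided binomial sign test over the disagreeing items; this is an illustrative implementation, not necessarily the exact micro sign test code of [25], and the win counts below are invented):

```python
from math import comb

def sign_test_p_value(wins_a, wins_b):
    """Two-sided exact sign test on items where methods A and B disagree.

    Under the null hypothesis, each disagreement is a win for A with
    probability 1/2, so the number of A-wins is Binomial(n, 0.5).
    """
    n = wins_a + wins_b
    k = max(wins_a, wins_b)
    # P(X >= k) for X ~ Binomial(n, 0.5), doubled for a two-sided test.
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)

# A method with more pairwise wins is deemed significantly better
# when the p-value falls below the chosen level (p = 0.01 here).
p = sign_test_p_value(wins_a=20, wins_b=4)
```

This makes explicit why a method with better totals but fewer pairwise wins can never be deemed significantly better by this test: the p-value depends only on the win counts, not on the sizes of the differences.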
Laplace for the error comparison, nor does any method ever achieve a better total.

The basic observation made about naïve Bayes in previous work is that it tends to produce estimates very close to zero and one [1, 17]. This means that if it tends to be right enough of the time, it will produce results that do not appear significant in a sign test that ignores the size of the difference (as the one used here does). The totals of the squared error and log-loss bear out the previous observation that when it is wrong, it is really wrong.

There are several interesting points about the performance of the asymmetric distributions as well. First, A. Gauss performs poorly because (similar to naïve Bayes) there are some examples where it is penalized a large amount. This behavior results from a general tendency to perform like the picture shown in Figure 3 (note the crossover at the tails). While the asymmetric Gaussian tends to place the mode much more accurately than a symmetric Gaussian, its asymmetric flexibility combined with its distance function causes it to distribute too much mass to the outside tails while failing to fit around the mode accurately enough to compensate. Figure 3 is actually a result of fitting the two distributions to real data. As a result, at the tails there can be a large discrepancy between the likelihood of belonging to each class. Thus, when there are no outliers A. Gauss can perform quite competitively, but when there is an outlier A. Gauss is penalized quite heavily. There are enough such cases overall that it seems clearly inferior to the top three methods.

[Figure 4 (plots omitted): two panels; x-axes "naïve Bayes log-odds" and "linear SVM raw score", y-axis p(s(d)|Class = {+, −}); curves: Train, Test, A.Laplace.] Figure 4: The empirical distribution of classifier scores for documents in the training and the test set for class Earn in Reuters. Also shown is the fit of the asymmetric Laplace distribution to the training score distribution. The positive class (i.e. Earn) is the distribution on the right in each graph, and the negative class (i.e. ¬Earn) is that on the left in each graph.

[Figure 5 (plots omitted): two panels; x-axes "naïve Bayes log-odds" and "linear SVM raw score", y-axis LogOdds = log P(+|s(d)) − log P(−|s(d)); curves: Train, Test, A.Laplace, LogReg.] Figure 5: The fit produced by various methods compared to the empirical log-odds of the training data for class Earn in Reuters.

However, the asymmetric Laplace places much more emphasis around the mode (Figure 4) because of its different distance function (think of the sharp peak of an exponential). As a result, most of the mass stays centered around the mode, while the asymmetric parameters still allow more flexibility than the standard Laplace. Since the standard Laplace also corresponds to a piecewise fit in the log-odds space, this highlights that part of the power of the asymmetric methods is their sensitivity in placing the knots at the actual modes, rather than the symmetric assumption that the means correspond to the modes. Additionally, the asymmetric methods have greater flexibility in fitting the slopes of the line segments. Even in cases where the test distribution differs from the training distribution (Figure 4), A. Laplace still yields a solution that gives a better fit than LogReg (Figure 5), the next best competitor.

Finally, we can make a few observations about the usefulness of the various performance metrics. First, log-loss only awards a finite amount of credit as the degree to which something is correct improves (i.e.
there are diminishing returns as it approaches zero), but it can penalize a wrong estimate infinitely. Thus, it is possible for one outlier to skew the totals, even though misclassifying that example may not matter for any but a handful of the utility functions actually used in practice. Secondly, squared error has a weakness in the other direction. That is, its penalty and reward are bounded in [0, 1], but if the number of errors is small enough, a method can appear better while producing what we would generally consider unhelpful probability estimates. For example, consider a method that only estimates probabilities as zero or one (which naïve Bayes tends toward, but does not quite reach if smoothing is used). This method could win according to squared error, but with just one error it would never perform better on log-loss than any method that assigns some non-zero probability to each outcome. For these reasons, we recommend that neither measure be used in isolation, as each gives slightly different insight into the quality of the estimates produced. These observations are straightforward from the definitions but are underscored by the evaluation.

5. FUTURE WORK
A promising extension to the work presented here is a hybrid distribution with Gaussian outside slopes and exponential inner slopes. From the empirical evidence presented in [22], the expectation is that such a distribution might allow more of the probability mass to be placed around the modes (as with the exponential) while still providing more accurate estimates toward the tails. Just as logistic regression allows the log-odds of the posterior distribution to be fit directly with a line, we could directly fit the log-odds of the posterior with a three-piece line (a spline) instead of indirectly doing the same thing by fitting the asymmetric Laplace. This approach may provide more power, since it retains the asymmetry assumption but not the assumption that the
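The contrast between the two measures described above can be illustrated with a toy example (the two estimators and the counts here are invented for illustration, not drawn from the experiments):

```python
import math

def totals(estimates):
    """Return (total log-loss, total squared error) for (label, P(+|d)) pairs."""
    ll = se = 0.0
    for label, p_pos in estimates:
        p = p_pos if label == '+' else 1.0 - p_pos
        ll += math.log(p) if p > 0.0 else float('-inf')
        se += (1.0 - p) ** 2
    return ll, se

# "Sharp" estimator: probabilities of exactly 0 or 1, with a single error
# out of ten decisions (the zero/one behavior naive Bayes tends toward).
sharp = [('+', 1.0)] * 9 + [('+', 0.0)]
# "Hedged" estimator: always assigns 0.6 to the correct class.
hedged = [('+', 0.6)] * 10

ll_sharp, se_sharp = totals(sharp)    # log-loss -inf, squared error 1.0
ll_hedged, se_hedged = totals(hedged)
```

The sharp estimator wins on total squared error (1.0 versus 1.6) despite its single fully confident mistake, yet that one mistake drives its log-loss to −∞, so it can never beat any method that assigns non-zero probability to each outcome on that measure.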
class-conditional densities are from an asymmetric Laplace.

Finally, extending these methods to the outputs of other discriminative classifiers is an open area. We are currently evaluating the appropriateness of these methods for the output of a voted perceptron [11]. By analogy to the log-odds, the operative score that appears promising is log [(weight of perceptrons voting +) / (weight of perceptrons voting −)].

6. SUMMARY AND CONCLUSIONS
We have reviewed a wide variety of parametric methods for producing probability estimates from the raw scores of a discriminative classifier and for recalibrating an uncalibrated probabilistic classifier. In addition, we have introduced two new families that attempt to capitalize on the asymmetric behavior that tends to arise from learning a discrimination function, and we have given an efficient way to estimate the parameters of these distributions. While these distributions attempt to strike a balance between the generalization power of parametric distributions and the flexibility that the added asymmetric parameters give, the asymmetric Gaussian appears to place too great an emphasis away from the modes. In striking contrast, the asymmetric Laplace distribution appears preferable to the primary competing parametric methods over several large text domains and a variety of performance measures, though comparable performance is sometimes achieved by one of the two varieties of logistic regression. Given the ease of estimating its parameters, the asymmetric Laplace is a good first choice for producing quality probability estimates.

Acknowledgments
We are grateful to Francisco Pereira for the sign test code, Anton Likhodedov for logistic regression code, and John Platt for the code support for the linear SVM classifier toolkit Smox. Also, we sincerely thank Chris Meek and John Platt for the very useful advice provided in the early stages of this work. Thanks also to Jaime Carbonell and John Lafferty for their useful
feedback on the final versions of this paper.

7. REFERENCES
[1] P. N. Bennett. Assessing the calibration of naive Bayes' posterior estimates. Technical Report CMU-CS-00-155, Carnegie Mellon, School of Computer Science, 2000.
[2] P. N. Bennett. Using asymmetric distributions to improve classifier probabilities: A comparison of new and standard parametric methods. Technical Report CMU-CS-02-126, Carnegie Mellon, School of Computer Science, 2002.
[3] H. Bourlard and N. Morgan. A continuous speech recognition system embedding MLP into HMM. In NIPS '89, 1989.
[4] G. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78:1-3, 1950.
[5] M. H. DeGroot and S. E. Fienberg. The comparison and evaluation of forecasters. Statistician, 32:12-22, 1983.
[6] M. H. DeGroot and S. E. Fienberg. Comparing probability forecasters: Basic binary concepts and multivariate extensions. In P. Goel and A. Zellner, editors, Bayesian Inference and Decision Techniques. Elsevier Science Publishers B.V., 1986.
[7] P. Domingos and M. Pazzani. Beyond independence: Conditions for the optimality of the simple Bayesian classifier. In ICML '96, 1996.
[8] R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley & Sons, Inc., 2001.
[9] S. T. Dumais and H. Chen. Hierarchical classification of web content. In SIGIR '00, 2000.
[10] S. T. Dumais, J. Platt, D. Heckerman, and M. Sahami. Inductive learning algorithms and representations for text categorization. In CIKM '98, 1998.
[11] Y. Freund and R. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296, 1999.
[12] I. Good. Rational decisions. Journal of the Royal Statistical Society, Series B, 1952.
[13] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In ECML '98, 1998.
[14] S. Kotz, T. J. Kozubowski, and K. Podgorski. The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance. Birkhäuser, 2001.
[15] D. D. Lewis. A sequential algorithm for training text classifiers: Corrigendum and additional data. SIGIR Forum, 29(2):13-19, Fall 1995.
[16] D. D. Lewis. Reuters-21578, distribution 1.0. http://www.daviddlewis.com/resources/testcollections/reuters21578, January 1997.
[17] D. D. Lewis and W. A. Gale. A sequential algorithm for training text classifiers. In SIGIR '94, 1994.
[18] D. D. Lewis, R. E. Schapire, J. P. Callan, and R. Papka. Training algorithms for linear text classifiers. In SIGIR '96, 1996.
[19] D. Lindley, A. Tversky, and R. Brown. On the reconciliation of probability assessments. Journal of the Royal Statistical Society, 1979.
[20] R. Manmatha, T. Rath, and F. Feng. Modeling score distributions for combining the outputs of search engines. In SIGIR '01, 2001.
[21] A. McCallum and K. Nigam. A comparison of event models for naive Bayes text classification. In AAAI '98, Workshop on Learning for Text Categorization, 1998.
[22] J. C. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In A. J. Smola, P. Bartlett, B. Scholkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers. MIT Press, 1999.
[23] M. Saar-Tsechansky and F. Provost. Active learning for class probability estimation and ranking. In IJCAI '01, 2001.
[24] R. L. Winkler. Scoring rules and the evaluation of probability assessors. Journal of the American Statistical Association, 1969.
[25] Y. Yang and X. Liu. A re-examination of text categorization methods. In SIGIR '99, 1999.
[26] B. Zadrozny and C. Elkan. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In ICML '01, 2001.
[27] B. Zadrozny and C.
Elkan.\nReducing multiclass to binary by coupling probability estimates.\nIn KDD ``02, 2002.","lvl-3":"Using Asymmetric Distributions to Improve Text Classifier Probability Estimates\nABSTRACT\nText classifiers that give probability estimates are more readily applicable in a variety of scenarios.\nFor example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model to issue a run-time decision which minimizes a userspecified cost function dynamically chosen at prediction time.\nHowever, the quality of the probability estimates is crucial.\nWe review a variety of standard approaches to converting scores (and poor probability estimates) from text classifiers to high quality estimates and introduce new models motivated by the intuition that the empirical score distribution for the \"extremely irrelevant\", \"hard to discriminate\", and \"obviously relevant\" items are often significantly different.\nFinally, we analyze the experimental performance of these models over the outputs of two text classifiers.\nThe analysis demonstrates that one of these models is theoretically attractive (introducing few new parameters while increasing flexibility), computationally efficient, and empirically preferable.\n1.\nINTRODUCTION\nText classifiers that give probability estimates are more flexible in practice than those that give only a simple classification or even a ranking.\nFor example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model [8] to issue a runtime decision which minimizes the expected cost of a user-specified\ncost function dynamically chosen at prediction time.\nThis can be used to minimize a linear utility cost function for filtering tasks where pre-specified costs of relevant\/irrelevant are not available during training but are specified at prediction time.\nFurthermore, the costs can be changed without retraining the model.\nAdditionally, probability estimates are often used as the basis 
of deciding which document's label to request next during active learning [17, 23].\nEffective active learning can be key in many information retrieval tasks where obtaining labeled data can be costly--severely reducing the amount of labeled data needed to reach the same performance as when new labels are requested randomly [17].\nFinally, they are also amenable to making other types of cost-sensitive decisions [26] and for combining decisions [3].\nHowever, in all of these tasks, the quality of the probability estimates is crucial.\nParametric models generally use assumptions that the data conform to the model to trade-off flexibility with the ability to estimate the model parameters accurately with little training data.\nSince many text classification tasks often have very little training data, we focus on parametric methods.\nHowever, most of the existing parametric methods that have been applied to this task have an assumption we find undesirable.\nWhile some of these methods allow the distributions of the documents relevant and irrelevant to the topic to have different variances, they typically enforce the unnecessary constraint that the documents are symmetrically distributed around their respective modes.\nWe introduce several asymmetric parametric models that allow us to relax this assumption without significantly increasing the number of parameters and demonstrate how we can efficiently fit the models.\nAdditionally, these models can be interpreted as assuming the scores produced by the text classifier have three basic types of empirical behavior--one corresponding to each of the \"extremely irrelevant\", \"hard to discriminate\", and \"obviously relevant\" items.\nWe first review related work on improving probability estimates and score modeling in information retrieval.\nThen, we discuss in further detail the need for asymmetric models.\nAfter this, we describe two specific asymmetric models and, using two standard text classifiers, na \u00a8 \u0131ve 
Bayes and SVMs, demonstrate how they can be efficiently used to recalibrate poor probability estimates or produce high quality probability estimates from raw scores.\nWe then review experiments using previously proposed methods and the asymmetric methods over several text classification corpora to demonstrate the strengths and weaknesses of the various methods.\nFinally, we summarize our contributions and discuss future directions.\n2.\nRELATED WORK\nParametric models have been employed to obtain probability estimates in several areas of information retrieval.\nLewis & Gale [17]\nuse logistic regression to recalibrate na \u00a8 \u0131ve Bayes though the quality of the probability estimates are not directly evaluated; it is simply performed as an intermediate step in active learning.\nManmatha et.\nal [20] introduced models appropriate to produce probability estimates from relevance scores returned from search engines and demonstrated how the resulting probability estimates could be subsequently employed to combine the outputs of several search engines.\nThey use a different parametric distribution for the relevant and irrelevant classes, but do not pursue two-sided asymmetric distributions for a single class as described here.\nThey also survey the long history of modeling the relevance scores of search engines.\nOur work is similar in flavor to these previous attempts to model search engine scores, but we target text classifier outputs which we have found demonstrate a different type of score distribution behavior because of the role of training data.\nFocus on improving probability estimates has been growing lately.\nZadrozny & Elkan [26] provide a corrective measure for decision trees (termed curtailment) and a non-parametric method for recalibrating na \u00a8 \u0131ve Bayes.\nIn more recent work [27], they investigate using a semi-parametric method that uses a monotonic piecewiseconstant fit to the data and apply the method to na \u00a8 \u0131ve Bayes and a 
linear SVM.\nWhile they compared their methods to other parametric methods based on symmetry, they fail to provide significance test results.\nOur work provides asymmetric parametric methods which complement the non-parametric and semi-parametric methods they propose when data scarcity is an issue.\nIn addition, their methods reduce the resolution of the scores output by the classifier (the number of distinct values output), but the methods here do not have such a weakness since they are continuous functions.\nThere is a variety of other work that this paper extends.\nPlatt [22] uses a logistic regression framework that models noisy class labels to produce probabilities from the raw output of an SVM.\nHis work showed that this post-processing method not only can produce probability estimates of similar quality to SVMs directly trained to produce probabilities (regularized likelihood kernel methods), but it also tends to produce sparser kernels (which generalize better).\nFinally, Bennett [1] obtained moderate gains by applying Platt's method to the recalibration of na \u00a8 \u0131ve Bayes but found there were more problematic areas than when it was applied to SVMs.\nRecalibrating poorly calibrated classifiers is not a new problem.\nLindley et.\nal [19] first proposed the idea of recalibrating classifiers, and DeGroot & Fienberg [5, 6] gave the now accepted standard formalization for the problem of assessing calibration initiated by others [4, 24].\n3.\nPROBLEM DEFINITION & APPROACH\n3.1 Problem Definition\n3.2 Motivation for Asymmetric Distributions\n3.3 Estimating the Parameters of the Asymmetric Distributions\n3.3.1 Asymmetric Laplace MLEs\n3.3.2 Asymmetric Gaussian MLEs\n4.\nEXPERIMENTAL ANALYSIS 4.1 Methods\nGaussians\nAsymmetric Gaussians\nLaplace Distributions\nAsymmetric Laplace Distributions\nLogistic Regression\nLogistic Regression with Noisy Class Labels\n4.2 Data\nMSN Web Directory\nReuters\n4.3 Classifiers\nNa \u00a8 \u0131ve Bayes\n4.4 Performance 
Measures\n4.5 Experimental Methodology\n4.6 Results & Discussion\n5.\nFUTURE WORK\n6.\nSUMMARY AND CONCLUSIONS\nWe have reviewed a wide variety of parametric methods for producing probability estimates from the raw scores of a discriminative classifier and for recalibrating an uncalibrated probabilistic classifier.\nIn addition, we have introduced two new families that attempt to capitalize on the asymmetric behavior that tends to arise from learning a discrimination function.\nWe have given an efficient way to estimate the parameters of these distributions.\nWhile these distributions attempt to strike a balance between the generalization power of parametric distributions and the flexibility that the added asymmetric parameters give, the asymmetric Gaussian appears to have too great of an emphasis away from the modes.\nIn striking contrast, the asymmetric Laplace distribution appears to be preferable over several large text domains and a variety of performance measures to the primary competing parametric methods, though comparable performance is sometimes achieved with one of two varieties of logistic regression.\nGiven the ease of estimating the parameters of this distribution, it is a good first choice for producing quality probability estimates.","lvl-4":"Using Asymmetric Distributions to Improve Text Classifier Probability Estimates\nABSTRACT\nText classifiers that give probability estimates are more readily applicable in a variety of scenarios.\nFor example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model to issue a run-time decision which minimizes a userspecified cost function dynamically chosen at prediction time.\nHowever, the quality of the probability estimates is crucial.\nWe review a variety of standard approaches to converting scores (and poor probability estimates) from text classifiers to high quality estimates and introduce new models motivated by the intuition that the empirical score distribution for the 
\"extremely irrelevant\", \"hard to discriminate\", and \"obviously relevant\" items are often significantly different.\nFinally, we analyze the experimental performance of these models over the outputs of two text classifiers.\nThe analysis demonstrates that one of these models is theoretically attractive (introducing few new parameters while increasing flexibility), computationally efficient, and empirically preferable.\n1.\nINTRODUCTION\nText classifiers that give probability estimates are more flexible in practice than those that give only a simple classification or even a ranking.\nFor example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model [8] to issue a runtime decision which minimizes the expected cost of a user-specified\ncost function dynamically chosen at prediction time.\nThis can be used to minimize a linear utility cost function for filtering tasks where pre-specified costs of relevant\/irrelevant are not available during training but are specified at prediction time.\nFurthermore, the costs can be changed without retraining the model.\nAdditionally, probability estimates are often used as the basis of deciding which document's label to request next during active learning [17, 23].\nHowever, in all of these tasks, the quality of the probability estimates is crucial.\nParametric models generally use assumptions that the data conform to the model to trade-off flexibility with the ability to estimate the model parameters accurately with little training data.\nSince many text classification tasks often have very little training data, we focus on parametric methods.\nHowever, most of the existing parametric methods that have been applied to this task have an assumption we find undesirable.\nWe introduce several asymmetric parametric models that allow us to relax this assumption without significantly increasing the number of parameters and demonstrate how we can efficiently fit the models.\nWe first review related 
work on improving probability estimates and score modeling in information retrieval.\nThen, we discuss in further detail the need for asymmetric models.\nAfter this, we describe two specific asymmetric models and, using two standard text classifiers, na \u00a8 \u0131ve Bayes and SVMs, demonstrate how they can be efficiently used to recalibrate poor probability estimates or produce high quality probability estimates from raw scores.\nWe then review experiments using previously proposed methods and the asymmetric methods over several text classification corpora to demonstrate the strengths and weaknesses of the various methods.\nFinally, we summarize our contributions and discuss future directions.\n2.\nRELATED WORK\nParametric models have been employed to obtain probability estimates in several areas of information retrieval.\nLewis & Gale [17]\nuse logistic regression to recalibrate na \u00a8 \u0131ve Bayes though the quality of the probability estimates are not directly evaluated; it is simply performed as an intermediate step in active learning.\nManmatha et.\nal [20] introduced models appropriate to produce probability estimates from relevance scores returned from search engines and demonstrated how the resulting probability estimates could be subsequently employed to combine the outputs of several search engines.\nThey use a different parametric distribution for the relevant and irrelevant classes, but do not pursue two-sided asymmetric distributions for a single class as described here.\nThey also survey the long history of modeling the relevance scores of search engines.\nOur work is similar in flavor to these previous attempts to model search engine scores, but we target text classifier outputs which we have found demonstrate a different type of score distribution behavior because of the role of training data.\nFocus on improving probability estimates has been growing lately.\nZadrozny & Elkan [26] provide a corrective measure for decision trees (termed 
curtailment) and a non-parametric method for recalibrating na \u00a8 \u0131ve Bayes.\nIn more recent work [27], they investigate using a semi-parametric method that uses a monotonic piecewiseconstant fit to the data and apply the method to na \u00a8 \u0131ve Bayes and a linear SVM.\nWhile they compared their methods to other parametric methods based on symmetry, they fail to provide significance test results.\nOur work provides asymmetric parametric methods which complement the non-parametric and semi-parametric methods they propose when data scarcity is an issue.\nIn addition, their methods reduce the resolution of the scores output by the classifier (the number of distinct values output), but the methods here do not have such a weakness since they are continuous functions.\nThere is a variety of other work that this paper extends.\nPlatt [22] uses a logistic regression framework that models noisy class labels to produce probabilities from the raw output of an SVM.\nHis work showed that this post-processing method not only can produce probability estimates of similar quality to SVMs directly trained to produce probabilities (regularized likelihood kernel methods), but it also tends to produce sparser kernels (which generalize better).\nRecalibrating poorly calibrated classifiers is not a new problem.\nLindley et.\n6.\nSUMMARY AND CONCLUSIONS\nWe have reviewed a wide variety of parametric methods for producing probability estimates from the raw scores of a discriminative classifier and for recalibrating an uncalibrated probabilistic classifier.\nIn addition, we have introduced two new families that attempt to capitalize on the asymmetric behavior that tends to arise from learning a discrimination function.\nWe have given an efficient way to estimate the parameters of these distributions.\nGiven the ease of estimating the parameters of this distribution, it is a good first choice for producing quality probability estimates.","lvl-2":"Using Asymmetric Distributions 
to Improve Text Classifier Probability Estimates\nABSTRACT\nText classifiers that give probability estimates are more readily applicable in a variety of scenarios.\nFor example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model to issue a run-time decision which minimizes a userspecified cost function dynamically chosen at prediction time.\nHowever, the quality of the probability estimates is crucial.\nWe review a variety of standard approaches to converting scores (and poor probability estimates) from text classifiers to high quality estimates and introduce new models motivated by the intuition that the empirical score distribution for the \"extremely irrelevant\", \"hard to discriminate\", and \"obviously relevant\" items are often significantly different.\nFinally, we analyze the experimental performance of these models over the outputs of two text classifiers.\nThe analysis demonstrates that one of these models is theoretically attractive (introducing few new parameters while increasing flexibility), computationally efficient, and empirically preferable.\n1.\nINTRODUCTION\nText classifiers that give probability estimates are more flexible in practice than those that give only a simple classification or even a ranking.\nFor example, rather than choosing one set decision threshold, they can be used in a Bayesian risk model [8] to issue a runtime decision which minimizes the expected cost of a user-specified\ncost function dynamically chosen at prediction time.\nThis can be used to minimize a linear utility cost function for filtering tasks where pre-specified costs of relevant\/irrelevant are not available during training but are specified at prediction time.\nFurthermore, the costs can be changed without retraining the model.\nAdditionally, probability estimates are often used as the basis of deciding which document's label to request next during active learning [17, 23].\nEffective active learning can be key in many 
information retrieval tasks where obtaining labeled data can be costly--severely reducing the amount of labeled data needed to reach the same performance as when new labels are requested randomly [17]. Finally, they are also amenable to making other types of cost-sensitive decisions [26] and to combining decisions [3]. However, in all of these tasks, the quality of the probability estimates is crucial.
Parametric models generally use assumptions that the data conform to the model to trade off flexibility with the ability to estimate the model parameters accurately with little training data. Since many text classification tasks often have very little training data, we focus on parametric methods. However, most of the existing parametric methods that have been applied to this task have an assumption we find undesirable. While some of these methods allow the distributions of the documents relevant and irrelevant to the topic to have different variances, they typically enforce the unnecessary constraint that the documents are symmetrically distributed around their respective modes. We introduce several asymmetric parametric models that allow us to relax this assumption without significantly increasing the number of parameters and demonstrate how we can efficiently fit the models. Additionally, these models can be interpreted as assuming the scores produced by the text classifier have three basic types of empirical behavior--one corresponding to each of the "extremely irrelevant", "hard to discriminate", and "obviously relevant" items.
We first review related work on improving probability estimates and score modeling in information retrieval. Then, we discuss in further detail the need for asymmetric models. After this, we describe two specific asymmetric models and, using two standard text classifiers, naïve Bayes and SVMs, demonstrate how they can be efficiently used to recalibrate poor probability estimates or produce high quality
probability estimates from raw scores. We then review experiments using previously proposed methods and the asymmetric methods over several text classification corpora to demonstrate the strengths and weaknesses of the various methods. Finally, we summarize our contributions and discuss future directions.
2. RELATED WORK
Parametric models have been employed to obtain probability estimates in several areas of information retrieval. Lewis & Gale [17] use logistic regression to recalibrate naïve Bayes, though the quality of the probability estimates is not directly evaluated; it is simply performed as an intermediate step in active learning. Manmatha et al. [20] introduced models appropriate to produce probability estimates from relevance scores returned from search engines and demonstrated how the resulting probability estimates could be subsequently employed to combine the outputs of several search engines. They use a different parametric distribution for the relevant and irrelevant classes, but do not pursue two-sided asymmetric distributions for a single class as described here. They also survey the long history of modeling the relevance scores of search engines. Our work is similar in flavor to these previous attempts to model search engine scores, but we target text classifier outputs, which we have found demonstrate a different type of score distribution behavior because of the role of training data.
Focus on improving probability estimates has been growing lately. Zadrozny & Elkan [26] provide a corrective measure for decision trees (termed curtailment) and a non-parametric method for recalibrating naïve Bayes. In more recent work [27], they investigate using a semi-parametric method that uses a monotonic piecewise-constant fit to the data and apply the method to naïve Bayes and a linear SVM. While they compared their methods to other parametric methods based on symmetry, they fail to provide significance
test results. Our work provides asymmetric parametric methods which complement the non-parametric and semi-parametric methods they propose when data scarcity is an issue. In addition, their methods reduce the resolution of the scores output by the classifier (the number of distinct values output), but the methods here do not have such a weakness since they are continuous functions.
There is a variety of other work that this paper extends. Platt [22] uses a logistic regression framework that models noisy class labels to produce probabilities from the raw output of an SVM. His work showed that this post-processing method not only can produce probability estimates of similar quality to SVMs directly trained to produce probabilities (regularized likelihood kernel methods), but it also tends to produce sparser kernels (which generalize better). Finally, Bennett [1] obtained moderate gains by applying Platt's method to the recalibration of naïve Bayes but found there were more problematic areas than when it was applied to SVMs.
Recalibrating poorly calibrated classifiers is not a new problem. Lindley et al. [19] first proposed the idea of recalibrating classifiers, and DeGroot & Fienberg [5, 6] gave the now accepted standard formalization for the problem of assessing calibration initiated by others [4, 24].
3. PROBLEM DEFINITION & APPROACH
Our work differs from earlier approaches primarily in three points: (1) We provide asymmetric parametric models suitable for use when little training data is available; (2) We explicitly analyze the quality of probability estimates these and competing methods produce and provide significance tests for these results; (3) We target text classifier outputs where a majority of the previous literature targeted the output of search engines.
3.1 Problem Definition
The general problem we are concerned with is highlighted in Figure 1. A text classifier produces a prediction about a document and gives a score s(d)
indicating the strength of its decision that the document belongs to the positive class (relevant to the topic). We assume throughout there are only two classes: the positive and the negative (or irrelevant) class ('+' and '−' respectively).
There are two general types of parametric approaches. The first of these tries to fit the posterior function directly, i.e. there is one function estimator that performs a direct mapping of the score s to the probability P(+ | s(d)). The second type of approach breaks the problem down as shown in the grey box of Figure 1. An estimator for each of the class-conditional densities (i.e. p(s | +) and p(s | −)) is produced, then Bayes' rule and the class priors are used to obtain the estimate for P(+ | s(d)).

Figure 1: We are concerned with how to perform the box highlighted in grey. The internals are for one type of approach.

3.2 Motivation for Asymmetric Distributions
Most of the previous parametric approaches to this problem either directly or indirectly (when fitting only the posterior) correspond to fitting Gaussians to the class-conditional densities; they differ only in the criterion used to estimate the parameters. We can visualize this as depicted in Figure 2. Since increasing s usually indicates increased likelihood of belonging to the positive class, the rightmost distribution usually corresponds to p(s | +).

Figure 2: Typical View of Discrimination based on Gaussians

However, using standard Gaussians fails to capitalize on a basic characteristic commonly seen. Namely, if we have a raw output score that can be used for discrimination, then the empirical behavior between the modes (label B in Figure 2) is often very different than that outside of the modes (labels A and C in Figure 2). Intuitively, the area between the modes corresponds to the hard examples, which are difficult for this classifier to distinguish, while the areas outside the modes are the extreme examples that are usually easily
distinguished. This suggests that we may want to uncouple the scale of the outside and inside segments of the distribution (as depicted by the curve denoted as A-Gaussian in Figure 3). As a result, an asymmetric distribution may be a more appropriate choice for application to the raw output score of a classifier.
Ideally (i.e. perfect classification) there will exist scores θ− and θ+ such that all examples with score greater than θ+ are relevant and all examples with scores less than θ− are irrelevant. Furthermore, no examples fall between θ− and θ+. The distance |θ− − θ+| corresponds to the margin in some classifiers, and an attempt is often made to maximize this quantity. Because text classifiers have training data to use to separate the classes, the final behavior of the score distributions is primarily a factor of the amount of training data and the consequent separation in the classes achieved. This is in contrast to search engine retrieval, where the distribution of scores is more a factor of language distribution across documents, the similarity function, and the length and type of query. Perfect classification corresponds to using two very asymmetric distributions, but in this case, the probabilities are actually one and zero and many methods will work for typical purposes. Practically, some examples will fall between θ− and θ+, and it is often important to estimate the probabilities of these examples well (since they correspond to the "hard" examples). Justifications can be given both for why you may find more and for why you may find fewer examples between θ− and θ+ than outside of them, but there are few empirical reasons to believe that the distributions should be symmetric.
A natural first candidate for an asymmetric distribution is to generalize a common symmetric distribution, e.g.
the Laplace or the Gaussian. An asymmetric Laplace distribution can be achieved by placing two exponentials around the mode in the following manner:

Λ(x | θ, β, γ) = (βγ / (β + γ)) exp(−β(θ − x)) for x ≤ θ, and (βγ / (β + γ)) exp(−γ(x − θ)) for x > θ,

where θ, β, and γ are the model parameters. θ is the mode of the distribution, β is the inverse scale of the exponential to the left of the mode, and γ is the inverse scale of the exponential to the right. We will use the notation Λ(X | θ, β, γ) to refer to this distribution.

Figure 3: Gaussians vs. Asymmetric Gaussians. A Shortcoming of Symmetric Distributions--The vertical lines show the modes as estimated nonparametrically.

We can create an asymmetric Gaussian in the same manner:

Γ(x | θ, σl, σr) = (2 / (√(2π)(σl + σr))) exp(−(x − θ)² / (2σl²)) for x ≤ θ, and (2 / (√(2π)(σl + σr))) exp(−(x − θ)² / (2σr²)) for x > θ,

where θ, σl, and σr are the model parameters. To refer to this asymmetric Gaussian, we use the notation Γ(X | θ, σl, σr). While these distributions are composed of "halves", the resulting function is a single continuous distribution. These distributions allow us to fit our data with much greater flexibility at the cost of only fitting six parameters. We could instead try mixture models for each component or other extensions, but most other extensions require at least as many parameters (and can often be more computationally expensive). In addition, the motivation above should provide significant cause to believe the underlying distributions actually behave in this way. Furthermore, this family of distributions can still fit a symmetric distribution, and finally, in the empirical evaluation, evidence is presented that demonstrates this asymmetric behavior (see Figure 4). To our knowledge, neither family of distributions has been previously used in machine learning or information retrieval. Both are termed generalizations of an Asymmetric Laplace in [14], but we refer to them as described above to reflect the nature of how we derived them for this task.
3.3 Estimating the Parameters of the Asymmetric Distributions
This section develops
the method for finding maximum likelihood estimates (MLEs) of the parameters for the above asymmetric distributions. In order to find the MLEs, we have two choices: (1) use numerical estimation to estimate all three parameters at once; (2) fix the value of θ, estimate the other two (β and γ, or σl and σr) given our choice of θ, then consider alternate values of θ. Because of the simplicity of analysis in the latter alternative, we choose this method.
3.3.1 Asymmetric Laplace MLEs
For D = {x1, x2, ..., xN} where the xi are i.i.d. and X ∼ Λ(X | θ, β, γ), the likelihood is ∏_{i=1}^{N} Λ(xi | θ, β, γ). Now, we fix θ and compute the maximum likelihood for that choice of θ. Then, we can simply consider all choices of θ and choose the one with the maximum likelihood over all choices of θ. The complete derivation is omitted because of space but is available in [2]. We define the following values:

Nl = |{xi : xi ≤ θ}|,  Nr = |{xi : xi > θ}|,  Sl = Σ_{xi ≤ θ} xi,  Sr = Σ_{xi > θ} xi,  Dl = θNl − Sl,  Dr = Sr − θNr.

Note that Dl and Dr are the sums of the absolute differences between the x belonging to the left and right halves of the distribution (respectively) and θ. Finally, the MLEs for β and γ for a fixed θ are:

β = N / (Dl + √(Dl Dr)),  γ = N / (Dr + √(Dl Dr)).

These estimates are not wholly unexpected, since we would obtain Nl/Dl if we were to estimate β independently of γ. The elegance of the formulae is that the estimates will tend to be symmetric only insofar as the data dictate it (i.e. the closer Dl and Dr are to being equal, the closer the resulting inverse scales). By continuity arguments, when N = 0, we assign β = γ = ε, where ε is a small constant that acts to disperse the distribution to a uniform. Similarly, when N ≠ 0 and Dl = 0, we assign β = E, where E is a very large constant that corresponds to an extremely sharp distribution (i.e.
almost all mass at θ for that half). Dr = 0 is handled similarly. Assuming that θ falls in some range [φ, ψ] dependent only upon the observed documents, this alternative is also easily computable. Given Nl, Sl, Nr, Sr, we can compute the posterior and the MLEs in constant time. In addition, if the scores are sorted, then we can perform the whole process quite efficiently. Starting with the minimum θ = φ we would like to try, we loop through the scores once and set Nl, Sl, Nr, Sr appropriately. Then we increase θ and just step past the scores that have shifted from the right side of the distribution to the left. Assuming the number of candidate θs is O(n), this process is O(n), and the overall process is dominated by sorting the scores, O(n log n) (or expected linear time).
3.3.2 Asymmetric Gaussian MLEs
For D = {x1, x2, ..., xN} where the xi are i.i.d. and X ∼ Γ(X | θ, σl, σr), the likelihood is ∏_{i=1}^{N} Γ(xi | θ, σl, σr). The MLEs can be worked out similar to the above. We assume the same definitions as above (the complete derivation omitted for space is available in [2]), and in addition, let:

Dl² = Σ_{xi ≤ θ} (θ − xi)²,  Dr² = Σ_{xi > θ} (xi − θ)².

The MLEs for σl and σr for a fixed θ are then:

σl = √((Dl² + (Dl²)^{2/3}(Dr²)^{1/3}) / N),  σr = √((Dr² + (Dr²)^{2/3}(Dl²)^{1/3}) / N).

By continuity arguments, when N = 0, we assign σr = σl = E, and when N ≠ 0 and Dl² = 0 (resp. Dr² = 0), we assign σl = ε (resp. σr = ε). Again, the same computational complexity analysis applies to estimating these parameters.
4. EXPERIMENTAL ANALYSIS
4.1 Methods
For each of the methods that use a class prior, we use a smoothed add-one estimate, i.e. P(c) = (|c| + 1)/(N + 2), where N is the number of documents. For methods that fit the class-conditional densities, p(s | +) and p(s | −), the resulting densities are inverted using Bayes' rule as described above. All of the methods below are fit using maximum likelihood estimates.
For recalibrating a classifier (i.e.
correcting poor probability estimates output by the classifier), it is usual to use the log-odds of the classifier's estimate as s(d). The log-odds are defined to be log [P(+ | d) / P(− | d)]. The normal decision threshold (minimizing error) in terms of log-odds is at zero (i.e. P(+ | d) = P(− | d) = 0.5). Since it scales the outputs to the space (−∞, ∞), the log-odds make the normal (and similar distributions) applicable [19]. Lewis & Gale [17] give a more motivating viewpoint: fitting the log-odds is a dampening effect for the inaccurate independence assumption and a bias correction for inaccurate estimates of the priors. In general, fitting the log-odds can serve to boost or dampen the signal from the original classifier as the data dictate.
Gaussians
A Gaussian is fit to each of the class-conditional densities, using the usual maximum likelihood estimates. This method is denoted in the tables below as Gauss.
Asymmetric Gaussians
An asymmetric Gaussian is fit to each of the class-conditional densities using the maximum likelihood estimation procedure described above. Intervals between adjacent scores are divided by 10 in testing candidate θs, i.e. 8 points between actual scores occurring in the data set are tested. This method is denoted as A. Gauss.
Laplace Distributions
Even though Laplace distributions are not typically applied to this task, we also tried this method to isolate why benefit is gained from the asymmetric form. The usual MLEs were used for estimating the location and scale of a classical symmetric Laplace distribution as described in [14]. We denote this method as Laplace below.
Asymmetric Laplace Distributions
An asymmetric Laplace is fit to each of the class-conditional densities using the maximum likelihood estimation procedure described above. As with the asymmetric Gaussian, intervals between adjacent scores are divided by 10 in testing candidate θs. This method is denoted as A.
Laplace below.
Logistic Regression
This method is the first of two methods we evaluated that directly fit the posterior, P(+ | s(d)). Both methods restrict the set of families to a two-parameter sigmoid family; they differ primarily in their model of class labels. As opposed to the above methods, one can argue that an additional boon of these methods is that they completely preserve the ranking given by the classifier. When this is desired, these methods may be more appropriate. The previous methods will mostly preserve the rankings, but they can deviate if the data dictate it. Thus, they may model the data behavior better at the cost of departing from a monotonicity constraint in the output of the classifier.
Lewis & Gale [17] use logistic regression to recalibrate naïve Bayes for subsequent use in active learning. The model they use is:

P(+ | s(d)) = exp(a·s(d) + b) / (1 + exp(a·s(d) + b)),

where a and b are the model parameters and s(d) is the score. Lewis & Gale use the estimate output by naïve Bayes as the score s(d). Instead of using this below, we will use the log-odds ratio. This does not affect the model, as it simply shifts all of the scores by a constant determined by the priors. We refer to this method as LogReg below.
Logistic Regression with Noisy Class Labels
Platt [22] proposes a framework that extends the logistic regression model above to incorporate noisy class labels and uses it to produce probability estimates from the raw output of an SVM. This model differs from the LogReg model only in how the parameters are estimated. The parameters are still fit using maximum likelihood estimation, but a model of noisy class labels is used in addition to allow for the possibility that the class was mislabeled. The noise is modeled by assuming there is a finite probability of mislabeling a positive example and of mislabeling a negative example; these two noise estimates are determined by the number of positive examples and the number of negative examples (using Bayes' rule to infer the probability of incorrect label). Even though the performance of this model would not be expected to deviate much
from LogReg, we evaluate it for completeness. We refer to this method below as LR+Noise.
4.2 Data
We examined several corpora, including the MSN Web Directory, Reuters, and TREC-AP.
MSN Web Directory
The MSN Web Directory is a large collection of heterogeneous web pages (from a May 1999 web snapshot) that have been hierarchically classified. We used the same train/test split of 50078/10024 documents as that reported in [9]. The MSN Web hierarchy is a seven-level hierarchy; we used all 13 of the top-level categories. The class proportions in the training set vary from 1.15% to 22.29%. In the testing set, they range from 1.14% to 21.54%. The classes are general subjects such as Health & Fitness and Travel & Vacation. Human indexers assigned the documents to zero or more categories. For the experiments below, we used only the top 1000 words with highest mutual information for each class; approximately 195K words appear in at least three training documents.
Reuters
The Reuters 21578 corpus [16] contains Reuters news articles from 1987. For this data set, we used the ModApte standard train/test split of 9603/3299 documents (8676 unused documents). The classes are economic subjects (e.g., "acq" for acquisitions, "earn" for earnings, etc.)
that human taggers applied to the document; a document may have multiple subjects. There are actually 135 classes in this domain (only 90 of which occur in the training and testing set); however, we only examined the ten most frequent classes, since small numbers of testing examples make interpreting some performance measures difficult due to high variance.¹ Limiting to the ten largest classes allows us to compare our results to previously published results [10, 13, 21, 22]. The class proportions in the training set vary from 1.88% to 29.96%. In the testing set, they range from 1.7% to 32.95%. For the experiments below we used only the top 300 words with highest mutual information for each class; approximately 15K words appear in at least three training documents.
TREC-AP
The TREC-AP corpus is a collection of AP news stories from 1988 to 1990. We used the same train/test split of 142791/66992 documents that was used in [18]. As described in [17] (see also [15]), the categories are defined by keywords in a keyword field. The title and body fields are used in the experiments below. There are twenty categories in total. The class proportions in the training set vary from 0.06% to 2.03%. In the testing set, they range from 0.03% to 4.32%. For the experiments described below, we use only the top 1000 words with the highest mutual information for each class; approximately 123K words appear in at least 3 training documents.
4.3 Classifiers
We selected two classifiers for evaluation: a linear SVM classifier, which is a discriminative classifier that does not normally output probability values, and a naïve Bayes classifier, whose probability outputs are often poor [1, 7] but can be improved [1, 26, 27].
¹A separate comparison of only LogReg, LR+Noise, and A.
Laplace over all 90 categories of Reuters was also conducted. After accounting for the variance, that evaluation also supported the claims made here.
SVM
For linear SVMs, we use the Smox toolkit, which is based on Platt's Sequential Minimal Optimization algorithm. The features were represented as continuous values. We used the raw output score of the SVM as s(d) since it has been shown to be appropriate before [22]. The normal decision threshold (assuming we are seeking to minimize errors) for this classifier is at zero.
Naïve Bayes
The naïve Bayes classifier model is a multinomial model [21]. We smoothed word and class probabilities using a Bayesian estimate (with the word prior) and a Laplace m-estimate, respectively. We use the log-odds estimated by the classifier as s(d). The normal decision threshold is at zero.
4.4 Performance Measures
We use log-loss [12] and squared error [4, 6] to evaluate the quality of the probability estimates. For a document d with class c(d) ∈ {+, −} (i.e.
the data have known labels and not probabilities), log-loss is defined as δ(c(d), +) log P(+ | d) + δ(c(d), −) log P(− | d), where δ(a, b) = 1 if a = b and 0 otherwise. The squared error is δ(c(d), +)(1 − P(+ | d))² + δ(c(d), −)(1 − P(− | d))². When the class of a document is correctly predicted with a probability of one, log-loss is zero and squared error is zero. When the class of a document is incorrectly predicted with a probability of one, log-loss is −∞ and squared error is one. Thus, both measures assess how close an estimate comes to correctly predicting the item's class but vary in how harshly incorrect predictions are penalized. We report only the sum of these measures and omit the averages for space. Their averages, average log-loss and mean squared error (MSE), can be computed from these totals by dividing by the number of binary decisions in a corpus.
In addition, we also compare the error of the classifiers at their default thresholds and with the probabilities. This evaluates how the probability estimates have improved with respect to the decision threshold P(+ | d) = 0.5. Thus, error only indicates how the methods would perform if a false positive were penalized the same as a false negative, and not the general quality of the probability estimates. It is presented simply to provide the reader with a more complete understanding of the empirical tendencies of the methods.
We use a standard paired micro sign test [25] to determine statistical significance in the difference of all measures. Only pairs that the methods disagree on are used in the sign test. This test compares pairs of scores from two systems with the null hypothesis that the number of items they disagree on is binomially distributed. We use a significance level of p = 0.01.
4.5 Experimental Methodology
As the categories under consideration in the experiments are not mutually exclusive, the
classification was done by training n binary classifiers, where n is the number of classes. In order to generate the scores that each method uses to fit its probability estimates, we use five-fold cross-validation on the training data. We note that even though it is computationally efficient to perform leave-one-out cross-validation for the naïve Bayes classifier, this may not be desirable since the distribution of scores can be skewed as a result. Of course, as with any application of n-fold cross-validation, it is also possible to bias the results by holding n too low and underestimating the performance of the final classifier.
4.6 Results & Discussion
The results for recalibrating naïve Bayes are given in Table 1a. Table 1b gives results for producing probabilistic outputs for SVMs.
Table 1: (a) Results for naïve Bayes (left) and (b) SVM (right). The best entry for a corpus is in bold. Entries that are statistically significantly better than all other entries are underlined. A † denotes the method is significantly better than all other methods except for naïve Bayes. A ‡ denotes the entry is significantly better than all other methods except for A. Gauss (and naïve Bayes for the table on the left). The reason for this distinction in significance tests is described in the text.
We start with general observations that result from examining the performance of these methods over the various corpora. The first is that A. Laplace, LR+Noise, and LogReg quite clearly outperform the other methods. There is usually little difference between the performance of LR+Noise and LogReg (both as shown here and on a decision-by-decision basis), but this is unsurprising since LR+Noise just adds noisy class labels to the LogReg model. With respect to the three different measures, LR+Noise and LogReg tend to perform slightly better (but never significantly) than A.
Laplace at some tasks with respect to log-loss and squared error. However, A. Laplace always produces the least number of errors for all of the tasks, though at times the degree of improvement is not significant.
In order to give the reader a better sense of the behavior of these methods, Figures 4-5 show the fits produced by the most competitive of these methods versus the actual data behavior (as estimated nonparametrically by binning) for class Earn in Reuters. Figure 4 shows the class-conditional densities, and thus only A. Laplace is shown since LogReg fits the posterior directly. Figure 5 shows the estimations of the log-odds, i.e. log [P(Earn | s(d)) / P(¬Earn | s(d))]. Viewing the log-odds (rather than the posterior) usually enables errors in estimation to be detected by the eye more easily.
We can break things down as the sign test does and just look at wins and losses on the items that the methods disagree on. Looked at in this way, only two methods (naïve Bayes and A. Gauss) ever have more pairwise wins than A. Laplace; those two sometimes have more pairwise wins on log-loss and squared error even though the total never wins (i.e. they are dragged down by heavy penalties). In addition, this comparison of pairwise wins means that for those cases where LogReg and LR+Noise have better scores than A. Laplace, it would not be deemed significant by the sign test at any level since they do not have more wins. For example, of the 130K binary decisions over the MSN Web dataset, A. Laplace had approximately 101K pairwise wins versus LogReg and LR+Noise. No method ever has more pairwise wins than A.
Laplace for the error comparison, nor does any method ever achieve a better total.
The basic observation made about naïve Bayes in previous work is that it tends to produce estimates very close to zero and one [1, 17]. This means that if it tends to be right enough of the time, it will produce results that do not appear significant in a sign test that ignores the size of the difference (as the one here does). The totals of the squared error and log-loss bear out the previous observation that "when it's wrong it's really wrong".
There are several interesting points about the performance of the asymmetric distributions as well. First, A. Gauss performs poorly because (similar to naïve Bayes) there are some examples where it is penalized a large amount. This behavior results from a general tendency to perform like the picture shown in Figure 3 (note the crossover at the tails). While the asymmetric Gaussian tends to place the mode much more accurately than a symmetric Gaussian, its asymmetric flexibility combined with its distance function causes it to distribute too much mass to the outside tails while failing to fit around the mode accurately enough to compensate. Figure 3 is actually a result of fitting the two distributions to real data. As a result, at the tails there can be a large discrepancy between the likelihood of belonging to each class. Thus, when there are no outliers A. Gauss can perform quite competitively, but when there is an outlier A. Gauss is penalized quite heavily.

Figure 4: The empirical distribution of classifier scores for documents in the training and the test set for class Earn in Reuters. Also shown is the fit of the asymmetric Laplace distribution to the training score distribution. The positive class (i.e. Earn) is the distribution on the right in each graph, and the negative class (i.e. ¬Earn) is that on the left in each graph.

Figure 5: The fit produced by various methods compared to the empirical log-odds of the training data for class Earn in Reuters.

There are enough such cases overall that it seems clearly inferior to the top three methods. However, the asymmetric Laplace places much more emphasis around the mode (Figure 4) because of the different distance function (think of the "sharp peak" of an exponential). As a result, most of the mass stays centered around the mode, while the asymmetric parameters still allow more flexibility than the standard Laplace. Since the standard Laplace also corresponds to a piecewise fit in the log-odds space, this highlights that part of the power of the asymmetric methods is their sensitivity in placing the knots at the actual modes--rather than the symmetric assumption that the means correspond to the modes. Additionally, the asymmetric methods have greater flexibility in fitting the slopes of the line segments as well. Even in cases where the test distribution differs from the training distribution (Figure 4), A. Laplace still yields a solution that gives a better fit than LogReg (Figure 5), the next best competitor.
Finally, we can make a few observations about the usefulness of the various performance metrics. First, log-loss only awards a finite amount of credit as the degree to which something is correct improves (i.e.
there are diminishing returns as it approaches zero), but it can infinitely penalize for a wrong estimate.\nThus, it is possible for one outlier to skew the totals, but misclassifying this example may not matter for any but a handful of actual utility functions used in practice.\nSecondly, squared error has a weakness in the other direction.\nThat is, its penalty and reward are bounded in [0, 1], but if the number of errors is small enough, it is possible for a method to appear better when it is producing what we generally consider unhelpful probability estimates.\nFor example, consider a method that only estimates probabilities as zero or one (which naïve Bayes tends toward but doesn't quite reach if you use smoothing).\nThis method could win according to squared error, but with just one error it would never perform better on log-loss than any method that assigns some non-zero probability to each outcome.\nFor these reasons, we recommend that neither of these be used in isolation, as they each give slightly different insights into the quality of the estimates produced.\nThese observations are straightforward from the definitions but are underscored by the evaluation.\n5.\nFUTURE WORK\nA promising extension to the work presented here is a hybrid distribution of a Gaussian (on the outside slopes) and exponentials (on the inner slopes).\nFrom the empirical evidence presented in [22], the expectation is that such a distribution might allow more emphasis of the probability mass around the modes (as with the exponential) while still providing more accurate estimates toward the tails.\nJust as logistic regression allows the log-odds of the posterior distribution to be fit directly with a line, we could directly fit the log-odds of the posterior with a three-piece line (a spline) instead of indirectly doing the same thing by fitting the asymmetric Laplace.\nThis approach may provide more power since it retains the asymmetry assumption but not the assumption that 
the class-conditional densities are from an asymmetric Laplace.\nFinally, extending these methods to the outputs of other discriminative classifiers is an open area.\nWe are currently evaluating the appropriateness of these methods for the output of a voted perceptron [11].\nBy analogy to the log-odds, the operative score that appears promising is \log(w_{+} \/ w_{-}), where w_{+} and w_{-} denote the total weight of the perceptrons voting positive and negative, respectively.\n6.\nSUMMARY AND CONCLUSIONS\nWe have reviewed a wide variety of parametric methods for producing probability estimates from the raw scores of a discriminative classifier and for recalibrating an uncalibrated probabilistic classifier.\nIn addition, we have introduced two new families that attempt to capitalize on the asymmetric behavior that tends to arise from learning a discrimination function.\nWe have given an efficient way to estimate the parameters of these distributions.\nWhile these distributions attempt to strike a balance between the generalization power of parametric distributions and the flexibility that the added asymmetric parameters give, the asymmetric Gaussian appears to place too great an emphasis away from the modes.\nIn striking contrast, the asymmetric Laplace distribution appears to be preferable over several large text domains and a variety of performance measures to the primary competing parametric methods, though comparable performance is sometimes achieved with one of two varieties of logistic regression.\nGiven the ease of estimating the parameters of this distribution, it is a good first choice for producing quality probability estimates.","keyphrases":["text classifi","probabl estim","decis threshold","bayesian risk model","empir score distribut","parametr model","inform retriev","logist regress framework","posterior function","asymmetr laplac distribut","search engin retriev","symmetr distribut","asymmetr gaussian","maximum likelihood estim","class-condit densiti","text classif","cost-sensit learn","activ learn","classifi 
combin"],"prmu":["P","P","P","P","P","M","U","U","M","M","U","M","M","M","U","M","U","U","M"]} {"id":"H-73","title":"Unified Utility Maximization Framework for Resource Selection","abstract":"This paper presents a unified utility framework for resource selection of distributed text information retrieval. This new framework shows an efficient and effective way to infer the probabilities of relevance of all the documents across the text databases. With the estimated relevance information, resource selection can be made by explicitly optimizing the goals of different applications. Specifically, when used for database recommendation, the selection is optimized for the goal of high-recall (include as many relevant documents as possible in the selected databases); when used for distributed document retrieval, the selection targets the high-precision goal (high precision in the final merged list of documents). This new model provides a more solid framework for distributed information retrieval. 
Empirical studies show that it is at least as effective as other state-of-the-art algorithms.","lvl-1":"Unified Utility Maximization Framework for Resource Selection Luo Si Language Technology Inst.\nSchool of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 lsi@cs.cmu.edu Jamie Callan Language Technology Inst.\nSchool of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 callan@cs.cmu.edu ABSTRACT This paper presents a unified utility framework for resource selection of distributed text information retrieval.\nThis new framework shows an efficient and effective way to infer the probabilities of relevance of all the documents across the text databases.\nWith the estimated relevance information, resource selection can be made by explicitly optimizing the goals of different applications.\nSpecifically, when used for database recommendation, the selection is optimized for the goal of high-recall (include as many relevant documents as possible in the selected databases); when used for distributed document retrieval, the selection targets the high-precision goal (high precision in the final merged list of documents).\nThis new model provides a more solid framework for distributed information retrieval.\nEmpirical studies show that it is at least as effective as other state-of-the-art algorithms.\nCategories and Subject Descriptors H.3.3 [Information Search and Retrieval]: General Terms Algorithms 1.\nINTRODUCTION Conventional search engines such as Google or AltaVista use an ad-hoc information retrieval solution, assuming that all the searchable documents can be copied into a single centralized database for the purpose of indexing.\nDistributed information retrieval, also known as federated search [1,4,7,11,14,22], is different from ad-hoc information retrieval as it addresses the cases when documents cannot be acquired and stored in a single database.\nFor example, Hidden Web contents (also called invisible or deep Web contents) are information on 
the Web that cannot be accessed by conventional search engines.\nHidden Web contents have been estimated to be 2-50 [19] times larger than the contents that can be searched by conventional search engines.\nTherefore, it is very important to be able to search this type of valuable information.\nThe architecture of a distributed search solution is highly influenced by different environmental characteristics.\nIn a small local area network, such as a small company environment, the information providers may cooperate to provide corpus statistics or use the same type of search engine.\nEarly distributed information retrieval research focused on this type of cooperative environment [1,8].\nOn the other hand, in a wide area network, such as a very large corporate environment or the Web, there are many types of search engines and it is difficult to assume that all the information providers will cooperate as required.\nEven if they are willing to cooperate in these environments, it may be hard to enforce a single solution for all the information providers or to detect whether information sources provide the correct information as required.\nMany applications fall into the latter type of uncooperative environment, such as the Mind project [16], which integrates non-cooperating digital libraries, or the QProber system [9], which supports browsing and searching of uncooperative hidden Web databases.\nIn this paper, we focus mainly on uncooperative environments that contain multiple types of independent search engines.\nThere are three important sub-problems in distributed information retrieval.\nFirst, information about the contents of each individual database must be acquired (resource representation) [1,8,21].\nSecond, given a query, a set of resources must be selected to do the search (resource selection) [5,7,21].\nThird, the results retrieved from all the selected resources have to be merged into a single final list before it can be presented to the end user 
(retrieval and results merging) [1,5,20,22].\nMany types of solutions exist for distributed information retrieval.\nInvisible-web.net provides guided browsing of hidden Web databases by collecting the resource descriptions of these databases and building hierarchies of classes that group them by similar topics.\nA database recommendation system goes a step further than a browsing system like Invisible-web.net by recommending the most relevant information sources for users' queries.\nIt is composed of the resource description and the resource selection components.\nThis solution is useful when the users want to browse the selected databases by themselves instead of asking the system to retrieve relevant documents automatically.\nDistributed document retrieval is a more sophisticated task.\nIt selects relevant information sources for users' queries as the database recommendation system does.\nFurthermore, users' queries are forwarded to the corresponding selected databases and the returned individual ranked lists are merged into a single list to present to the users.\nThe goal of a database recommendation system is to select a small set of resources that contain as many relevant documents as possible, which we call a high-recall goal.\nOn the other hand, the effectiveness of distributed document retrieval is often measured by the Precision of the final merged document result list, which we call a high-precision goal.\nPrior research indicated that these two goals are related but not identical [4,21].\nHowever, most previous solutions simply reuse the resource selection algorithms of database recommendation systems for distributed document retrieval, or resolve the inconsistency with heuristic methods [1,4,21].\nThis paper presents a unified utility maximization framework to integrate the resource selection problems of both database recommendation and distributed document retrieval by treating them as different optimization goals.\nFirst, a 
centralized sample database is built by randomly sampling a small number of documents from each database with query-based sampling [1]; database size statistics are also estimated [21].\nA logistic transformation model is learned off line with a small number of training queries to map the centralized document scores in the centralized sample database to the corresponding probabilities of relevance.\nSecond, after a new query is submitted, the query can be used to search the centralized sample database, which produces a score for each sampled document.\nThe probability of relevance for each document in the centralized sample database can be estimated by applying the logistic model to each document's score.\nThen, the probabilities of relevance of all the (mostly unseen) documents among the available databases can be estimated using the probabilities of relevance of the documents in the centralized sample database and the database size estimates.\nFor the task of resource selection for a database recommendation system, the databases can be ranked by the expected number of relevant documents to meet the high-recall goal.\nFor resource selection for a distributed document retrieval system, databases containing a small number of documents with large probabilities of relevance are favored over databases containing many documents with small probabilities of relevance.\nThis selection criterion meets the high-precision goal of the distributed document retrieval application.\nFurthermore, the semi-supervised learning (SSL) [20,22] algorithm is applied to merge the returned documents into a final ranked list.\nThe unified utility framework makes very few assumptions and works in uncooperative environments.\nTwo key features make it a more solid model for distributed information retrieval: i) It formalizes the resource selection problems of different applications as various utility functions, and optimizes the utility functions to achieve the optimal results accordingly; and ii) 
It shows an effective and efficient way to estimate the probabilities of relevance of all documents across databases.\nSpecifically, the framework builds logistic models on the centralized sample database to transform centralized retrieval scores to the corresponding probabilities of relevance and uses the centralized sample database as the bridge between individual databases and the logistic model.\nThe human effort (relevance judgment) required to train the single centralized logistic model does not scale with the number of databases.\nThis is a large advantage over previous research, which required the amount of human effort to be linear with the number of databases [7,15].\nThe unified utility framework is not only more theoretically solid but also very effective.\nEmpirical studies show the new model to be at least as accurate as the state-of-the-art algorithms in a variety of configurations.\nThe next section discusses related work.\nSection 3 describes the new unified utility maximization model.\nSection 4 explains our experimental methodology.\nSections 5 and 6 present our experimental results for resource selection and document retrieval.\nSection 7 concludes.\n2.\nPRIOR RESEARCH There has been considerable research on all the sub-problems of distributed information retrieval.\nWe survey the most related work in this section.\nThe first problem of distributed information retrieval is resource representation.\nThe STARTS protocol is one solution for acquiring resource descriptions in cooperative environments [8].\nHowever, in uncooperative environments, even if the databases are willing to share their information, it is not easy to judge whether the information they provide is accurate.\nFurthermore, it is not easy to coordinate the databases to provide resource representations that are compatible with each other.\nThus, in uncooperative environments, one common choice is query-based sampling, which randomly generates and sends queries to individual 
search engines and retrieves some documents to build the descriptions.\nAs the sampled documents are selected by random queries, query-based sampling is not easily fooled by any adversarial spammer that is interested in attracting more traffic.\nExperiments have shown that rather accurate resource descriptions can be built by sending about 80 queries and downloading about 300 documents [1].\nMany resource selection algorithms such as gGlOSS\/vGlOSS [8] and CORI [1] have been proposed in the last decade.\nThe CORI algorithm represents each database by its terms, the document frequencies and a small number of corpus statistics (details in [1]).\nAs prior research on different datasets has shown the CORI algorithm to be the most stable and effective of the three algorithms [1,17,18], we use it as a baseline algorithm in this work.\nThe relevant document distribution estimation (ReDDE [21]) resource selection algorithm is a recent algorithm that tries to estimate the distribution of relevant documents across the available databases and ranks the databases accordingly.\nAlthough the ReDDE algorithm has been shown to be effective, it relies on heuristic constants that are set empirically [21].\nThe last step of the document retrieval sub-problem is results merging, which is the process of transforming database-specific document scores into comparable database-independent document scores.\nThe semi-supervised learning (SSL) [20,22] result merging algorithm uses the documents acquired by query-based sampling as training data and linear regression to learn the database-specific, query-specific merging models.\nThese linear models are used to convert the database-specific document scores into the approximated centralized document scores.\nThe SSL algorithm has been shown to be effective [22].\nIt serves as an important component of our unified utility maximization framework (Section 3).\nIn order to achieve accurate document retrieval results, many previous methods simply use 
resource selection algorithms that are effective for database recommendation systems.\nBut as pointed out above, a good resource selection algorithm optimized for high-recall may not work well for document retrieval, which targets the high-precision goal.\nThis type of inconsistency has been observed in previous research [4,21].\nThe research in [21] tried to solve the problem with a heuristic method.\nThe research most similar to what we propose here is the decision-theoretic framework (DTF) [7,15].\nThis framework computes a selection that minimizes the overall costs (e.g., retrieval quality, time) of the document retrieval system, and several methods [15] have been proposed to estimate the retrieval quality.\nHowever, two points distinguish our research from the DTF model.\nFirst, the DTF is a framework designed specifically for document retrieval, but our new model integrates two distinct applications with different requirements (database recommendation and distributed document retrieval) into the same unified framework.\nSecond, the DTF builds a model for each database to calculate the probabilities of relevance.\nThis requires human relevance judgments for the results retrieved from each database.\nIn contrast, our approach only builds one logistic model for the centralized sample database.\nThe centralized sample database can serve as a bridge to connect the individual databases with the centralized logistic model, so the probabilities of relevance of documents in different databases can be estimated.\nThis strategy can save a large amount of human judgment effort and is a big advantage of the unified utility maximization framework over the DTF, especially when there are a large number of databases.\n3.\nUNIFIED UTILITY MAXIMIZATION FRAMEWORK The Unified Utility Maximization (UUM) framework is based on estimating the probabilities of relevance of the (mostly unseen) documents available in the distributed search environment.\nIn this section we describe how the 
probabilities of relevance are estimated and how they are used by the Unified Utility Maximization model.\nWe also describe how the model can be optimized for the high-recall goal of a database recommendation system and the high-precision goal of a distributed document retrieval system.\n3.1 Estimating Probabilities of Relevance As pointed out above, the purpose of resource selection is high-recall and the purpose of document retrieval is high-precision.\nIn order to meet these diverse goals, the key issue is to estimate the probabilities of relevance of the documents in various databases.\nThis is a difficult problem because we can only observe a sample of the contents of each database using query-based sampling.\nOur strategy is to make full use of all the available information to calculate the probability estimates.\n3.1.1 Learning Probabilities of Relevance In the resource description step, the centralized sample database is built by query-based sampling and the database sizes are estimated using the sample-resample method [21].\nAt the same time, an effective retrieval algorithm (Inquery [2]) is applied on the centralized sample database with a small number (e.g., 50) of training queries.\nFor each training query, the CORI resource selection algorithm [1] is applied to select some number (e.g., 10) of databases and retrieve 50 document ids from each database.\nThe SSL results merging algorithm [20,22] is used to merge the results.\nThen, we can download the top 50 documents in the final merged list and calculate their corresponding centralized scores using Inquery and the corpus statistics of the centralized sample database.\nThe centralized scores are further normalized (divided by the maximum centralized score for each query), as this method has been suggested to improve estimation accuracy in previous research [15].\nHuman judgment is acquired for those documents and a logistic model is built to transform the normalized centralized document scores to 
probabilities of relevance as follows: R(d) = P(rel | d) = \frac{\exp(a_c + b_c \bar{S}_c(d))}{1 + \exp(a_c + b_c \bar{S}_c(d))} (1) where \bar{S}_c(d) is the normalized centralized document score and a_c and b_c are the two parameters of the logistic model.\nThese two parameters are estimated by maximizing the probabilities of relevance of the training queries.\nThe logistic model provides us the tool to calculate the probabilities of relevance from centralized document scores.\n3.1.2 Estimating Centralized Document Scores When the user submits a new query, the centralized document scores of the documents in the centralized sample database are calculated.\nHowever, in order to calculate the probabilities of relevance, we need to estimate centralized document scores for all documents across the databases instead of only the sampled documents.\nThis goal is accomplished using: the centralized scores of the documents in the centralized sample database, and the database size statistics.\nWe define the database scale factor for the ith database as the ratio of the estimated database size and the number of documents sampled from this database as follows: SF_{db_i} = \frac{\hat{N}_{db_i}}{N_{db_i\_samp}} (2) where \hat{N}_{db_i} is the estimated database size and N_{db_i\_samp} is the number of documents from the ith database in the centralized sample database.\nThe intuition behind the database scale factor is that, for a database whose scale factor is 50, if one document from this database in the centralized sample database has a centralized document score of 0.5, we may guess that there are about 50 documents in that database which have scores of about 0.5.\nActually, we can apply a finer non-parametric linear interpolation method to estimate the centralized document score curve for each database.\nFormally, we rank all the sampled documents from the ith database by their centralized document scores to get the sampled centralized document score list {S_c(ds_{i1}), S_c(ds_{i2}), S_c(ds_{i3}), ...} for the ith database; we assume that 
if we could calculate the centralized document scores for all the documents in this database and get the complete centralized document score list, the top document in the sampled list would have rank SF_{db_i}\/2, the second document in the sampled list would have rank 3SF_{db_i}\/2, and so on.\nTherefore, the data points of sampled documents in the complete list are: {(SF_{db_i}\/2, S_c(ds_{i1})), (3SF_{db_i}\/2, S_c(ds_{i2})), (5SF_{db_i}\/2, S_c(ds_{i3})), ...}.\nPiecewise linear interpolation is applied to estimate the centralized document score curve, as illustrated in Figure 1.\nThe complete centralized document score list can be estimated by calculating the values of different ranks on the centralized document score curve as: \hat{S}_c(d_{ij}), j \in [1, \hat{N}_{db_i}].\nIt can be seen from Figure 1 that more sample data points produce more accurate estimates of the centralized document score curves.\nHowever, for databases with large database scale ratios, this kind of linear interpolation may be rather inaccurate, especially for the top ranked (e.g., [1, SF_{db_i}\/2]) documents.\nTherefore, an alternative solution is proposed to estimate the centralized document scores of the top ranked documents for databases with large scale ratios (e.g., larger than 100).\nSpecifically, a logistic model is built for each of these databases.\nThe logistic model is used to estimate the centralized document score of the top document in the corresponding database by using the two sampled documents from that database with the highest centralized scores. 
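The linear-interpolation construction described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the function and variable names are ours, and `db_size_est` stands in for the sample-resample size estimate.

```python
import numpy as np

def estimate_score_curve(sampled_scores, db_size_est):
    """Estimate the complete centralized-score curve for one database by
    piecewise linear interpolation over the sampled scores (illustrative
    sketch; names are hypothetical, not from the paper)."""
    # Sort sampled centralized scores in descending order (rank 1 = best).
    s = np.sort(np.asarray(sampled_scores, dtype=float))[::-1]
    n_samp = len(s)
    # Database scale factor: estimated size / number of sampled docs (Eq. 2).
    sf = db_size_est / n_samp
    # The k-th sampled document is assumed to sit at rank SF*(2k-1)/2
    # of the unseen complete ranking: SF/2, 3SF/2, 5SF/2, ...
    sample_ranks = sf * (2 * np.arange(1, n_samp + 1) - 1) / 2.0
    all_ranks = np.arange(1, int(round(db_size_est)) + 1)
    # np.interp clamps ranks outside the sampled range to the end-point
    # scores; the paper instead refines the top segment with Eqs. 3-6.
    return np.interp(all_ranks, sample_ranks, s)

curve = estimate_score_curve([0.9, 0.7, 0.4, 0.2], db_size_est=200)
print(len(curve), round(curve[0], 2))  # prints: 200 0.9
```

Note that this simple clamp at the top of the ranking is exactly where the paper observes the interpolation to be inaccurate for large scale factors, motivating the logistic/exponential adjustment that follows.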
\hat{S}_c(d_{i1}) = \frac{\exp(\alpha_{i0} + \alpha_{i1} S_c(ds_{i1}) + \alpha_{i2} S_c(ds_{i2}))}{1 + \exp(\alpha_{i0} + \alpha_{i1} S_c(ds_{i1}) + \alpha_{i2} S_c(ds_{i2}))} (3) where \alpha_{i0}, \alpha_{i1} and \alpha_{i2} are the parameters of the logistic model.\nFor each training query, the top retrieved document of each database is downloaded and the corresponding centralized document score is calculated.\nTogether with the scores of the top two sampled documents, these parameters can be estimated.\nAfter the centralized score of the top document is estimated, an exponential function is fitted for the top part ([1, SF_{db_i}\/2]) of the centralized document score curve as: \hat{S}_c(d_{ij}) = \exp(\beta_{i0} + \beta_{i1} \cdot j), j \in [1, SF_{db_i}\/2] (4) \beta_{i0} = \log(\hat{S}_c(d_{i1})) - \beta_{i1} (5) \beta_{i1} = \frac{\log(S_c(ds_{i1})) - \log(\hat{S}_c(d_{i1}))}{SF_{db_i}\/2 - 1} (6) The two parameters \beta_{i0} and \beta_{i1} are fitted to make sure the exponential function passes through the two points (1, \hat{S}_c(d_{i1})) and (SF_{db_i}\/2, S_c(ds_{i1})).\nThe exponential function is only used to adjust the top part of the centralized document score curve and the lower part of the curve is still fitted with the linear interpolation method described above.\nThe adjustment by fitting an exponential function to the top ranked documents has been shown empirically to produce more accurate results.\nFrom the centralized document score curves, we can estimate the complete centralized document score lists for all the available databases.\nAfter the estimated centralized document scores are normalized, the complete lists of probabilities of relevance can be constructed from the complete centralized document score lists by Equation 1.\nFormally, for the ith database, the complete list of probabilities of relevance is: \hat{R}(d_{ij}), j \in [1, \hat{N}_{db_i}].\n3.2 The Unified Utility Maximization Model In this section, we formally define the new unified utility maximization model, which optimizes the resource selection problems for the two goals of high-recall (database 
recommendation) and high-precision (distributed document retrieval) in the same framework.\nIn the task of database recommendation, the system needs to decide how to rank databases.\nIn the task of document retrieval, the system not only needs to select the databases but also needs to decide how many documents to retrieve from each selected database.\nWe generalize the database recommendation selection process, which implicitly recommends all documents in every selected database, as a special case of the selection decision for the document retrieval task.\nFormally, we denote d_i as the number of documents we would like to retrieve from the ith database and d = {d_1, d_2, ...} as a selection action for all the databases.\nThe database selection decision is made based on the complete lists of probabilities of relevance for all the databases.\nThe complete lists of probabilities of relevance are inferred from all the available information: specifically R_s, which stands for the resource descriptions acquired by query-based sampling and the database size estimates acquired by sample-resample, and S_c, which stands for the centralized document scores of the documents in the centralized sample database.\nIf the method of estimating centralized document scores and probabilities of relevance in Section 3.1 is acceptable, then the most probable complete lists of probabilities of relevance can be derived, and we denote them as \theta^* = {(\hat{R}(d_{1j}), j \in [1, \hat{N}_{db_1}]), (\hat{R}(d_{2j}), j \in [1, \hat{N}_{db_2}]), ...}.\nA random vector \theta denotes an arbitrary set of complete lists of probabilities of relevance, and P(\theta | R_s, S_c) the probability of generating this set of lists.\nFigure 1.\nLinear interpolation construction of the complete centralized document score list (database scale factor is 50).\nFinally, to each selection action d and each set of complete lists of probabilities of relevance \theta, we associate a utility function U(\theta, d) which indicates the benefit from 
making the d selection when the true complete lists of probabilities of relevance are \theta.\nTherefore, the selection decision defined by the Bayesian framework is: d^* = \arg\max_d \int U(\theta, d) P(\theta | R_s, S_c) \, d\theta (7) One common approach to simplify the computation in the Bayesian framework is to only calculate the utility function at the most probable parameter values instead of calculating the whole expectation.\nIn other words, we only need to calculate U(\theta^*, d) and Equation 7 is simplified as follows: d^* = \arg\max_d U(\theta^*, d) (8) This equation serves as the basic model for both the database recommendation system and the document retrieval system.\n3.3 Resource Selection for High-Recall High-recall is the goal of the resource selection algorithm in federated search tasks such as database recommendation.\nThe goal is to select a small set of resources (e.g., less than N_{sdb} databases) that contain as many relevant documents as possible, which can be formally defined as: U(\theta^*, d) = \sum_i I(d_i) \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij}) (9) I(d_i) is the indicator function, which is 1 when the ith database is selected and 0 otherwise.\nPlugging this equation into the basic model in Equation 8 and adding the selected-database-number constraint, we obtain the following: d^* = \arg\max_d \sum_i I(d_i) \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij}), subject to: \sum_i I(d_i) = N_{sdb} (10) The solution of this optimization problem is very simple.\nWe can calculate the expected number of relevant documents for each database as follows: \hat{N}_{Rd_i} = \sum_{j=1}^{\hat{N}_{db_i}} \hat{R}(d_{ij}) (11) The N_{sdb} databases with the largest expected number of relevant documents can be selected to meet the high-recall goal.\nWe call this the UUM\/HR algorithm (Unified Utility Maximization for High-Recall).\n3.4 Resource Selection for High-Precision High-precision is the goal of the resource selection algorithm in federated search tasks such as distributed document retrieval.\nIt is measured by the Precision at the top part of the final merged 
document list.\nThis high-precision criterion is realized by the following utility function, which measures the Precision of retrieved documents from the selected databases: U(\theta^*, d) = \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij}) (12) Note that the key difference between Equation 12 and Equation 9 is that Equation 9 sums up the probabilities of relevance of all the documents in a database, while Equation 12 only considers a much smaller part of the ranking.\nSpecifically, we can calculate the optimal selection decision by: d^* = \arg\max_d \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij}) (13) Different kinds of constraints caused by different characteristics of the document retrieval tasks can be associated with the above optimization problem.\nThe most common one is to select a fixed number (N_{sdb}) of databases and retrieve a fixed number (N_{rdoc}) of documents from each selected database, formally defined as: d^* = \arg\max_d \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij}), subject to: \sum_i I(d_i) = N_{sdb}; d_i = N_{rdoc} if d_i \neq 0 (14) This optimization problem can be solved easily by calculating the expected number of relevant documents in the top part of each database's complete list of probabilities of relevance: \hat{N}_{Top\_Rd_i} = \sum_{j=1}^{N_{rdoc}} \hat{R}(d_{ij}) (15) Then the databases can be ranked by these values and selected.\nWe call this the UUM\/HP-FL algorithm (Unified Utility Maximization for High-Precision with Fixed Length document rankings from each selected database).\nA more complex situation is to vary the number of retrieved documents from each selected database.\nMore specifically, we allow different selected databases to return different numbers of documents.\nFor simplification, the result list lengths are required to be multiples of a baseline number, 10.\n(This value can also be varied, but for simplification it is set to 10 in this paper.)\nThis restriction is set to simulate the behavior of commercial search engines on the Web.\n(Search engines such as Google and AltaVista return only 10 or 20 document ids for every 
This procedure saves computation time when calculating the optimal database selection, by allowing the dynamic programming step to be 10 instead of 1 (more detail is discussed below). For further simplification, we restrict the algorithm to select at most 100 documents from each database (d_i ≤ 100). The selection optimization problem is then formalized as follows:

d^* = \arg\max_d \sum_i I(d_i) \sum_{j=1}^{d_i} \hat{R}(d_{ij})
Subject to: \sum_i I(d_i) = N_{sdb}
            \sum_i d_i = N_{Total\_rdoc}
            d_i = 10 k, \; k \in [0, 1, 2, \ldots, 10]    (16)

N_{Total_rdoc} is the total number of documents to be retrieved. Unfortunately, there is no simple solution to this optimization problem as there is for Equations 10 and 14. However, a dynamic programming algorithm can be applied to calculate the optimal solution. The basic steps of this dynamic programming method are described in Figure 2. As this algorithm allows result lists of varying lengths to be retrieved from each selected database, it is called the UUM/HP-VL algorithm.

After the selection decisions are made, the selected databases are searched and the corresponding document ids are retrieved from each database. The final step of document retrieval is to merge the returned results into a single ranked list with the semi-supervised learning (SSL) algorithm. As pointed out before, the SSL algorithm maps the database-specific scores into centralized document scores and builds the final ranked list accordingly, which is consistent with all our selection procedures, where documents with higher probabilities of relevance (and thus higher centralized document scores) are selected.

4. EXPERIMENTAL METHODOLOGY

4.1 Testbeds
It is desirable to evaluate distributed information retrieval algorithms with testbeds that closely simulate real-world applications. The TREC Web collections WT2g and WT10g [4,13] provide a way to partition documents by Web server. In this way, a large number (O(1000)) of databases with rather diverse contents could be created, which may
make this testbed a good candidate for simulating operational environments such as the open-domain hidden Web. However, two weaknesses of this testbed are: i) each database contains only a small number of documents (259 on average for WT2g) [4]; and ii) the contents of WT2g and WT10g were arbitrarily crawled from the Web. A hidden-Web database is unlikely to serve personal homepages or pages that are under construction and contain no useful information, yet such pages are common in the WT2g/WT10g datasets. This noisy Web data is therefore not similar to the contents of high-quality hidden-Web databases, which are usually organized by domain experts.

Another choice is the TREC news/government data [1,15,17,18,21], which concentrates on relatively narrow topics. Compared with the TREC Web data: i) the news/government documents are much more similar to the contents provided by a topic-oriented database than an arbitrary web page is; and ii) a database in this testbed is larger than one built from the TREC Web data. On average, a database contains thousands of documents, which is more realistic than the roughly 250 documents of a TREC Web database. As the contents and sizes of the databases in the TREC news/government testbed are more similar to those of a topic-oriented database, it is a good candidate for simulating the distributed information retrieval environments of large organizations (companies) or domain-specific hidden-Web sites, such as West, which provides access to legal, financial and news text databases [3]. As most current distributed information retrieval systems are developed for the environments of large organizations or the domain-specific hidden Web rather than the open-domain hidden Web, the TREC news/government data was chosen for this work.

The trec123-100col-bysource testbed is one of the most frequently used TREC news/government testbeds [1,15,17,21]. It was chosen in this
work. Three testbeds from [21] with skewed database size distributions and different types of relevant document distributions were also used, to give a more thorough simulation of real environments.

Trec123-100col-bysource: 100 databases were created from TREC CDs 1, 2 and 3. They were organized by source and publication date [1]. The sizes of the databases are not skewed. Details are in Table 1.

The three testbeds built in [21] are based on the trec123-100col-bysource testbed. Each contains many small databases and two large databases created by merging about 10-20 small databases together.

Input: Complete lists of probabilities of relevance for all the |DB| databases.
Output: Optimal selection solution for Equation 16.
i) Create the three-dimensional array Sel(1..|DB|, 1..N_{Total_rdoc}/10, 1..N_{sdb}). Each Sel(x, y, z) is associated with a selection decision d_{xyz}, which represents the best selection decision under the condition that only databases 1 through x are considered for selection, y*10 documents will be retrieved in total, and exactly z databases are selected out of the x candidates. Sel(x, y, z) stores the utility value of this best selection.
ii) Initialize Sel(1, 1..N_{Total_rdoc}/10, 1..N_{sdb}) with only the estimated relevance information of the 1st database.
iii) Iterate the current database candidate i from 2 to |DB|. For each entry Sel(i, y, z), find

k^* = \arg\max_k \left( Sel(i-1, y-k, z-1) + \sum_{j \leq 10k} \hat{R}(d_{ij}) \right), \; subject to \; 1 \leq k \leq \min(10, y)

If Sel(i-1, y-k^*, z-1) + \sum_{j \leq 10k^*} \hat{R}(d_{ij}) > Sel(i-1, y, z), then 10 k^* documents should be retrieved from the ith database; otherwise this database should not be selected, and the previous best solution Sel(i-1, y, z) is kept. Set the values of d_{iyz} and Sel(i, y, z) accordingly.
iv) The best selection solution is given by d_{|DB|, N_{Total_rdoc}/10, N_{sdb}} and the corresponding utility value is Sel(|DB|, N_{Total_rdoc}/10, N_{sdb}).

Figure 2. The dynamic programming optimization procedure for Equation 16.
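The dynamic program of Figure 2 can be written compactly. This is an illustrative reimplementation sketch, not the authors' code; `rel_probs[i]` is an assumed input listing the estimated probabilities of relevance of database i in decreasing order, and the state is rolled over one database at a time so each database is used at most once.

```python
# Sketch of the Figure 2 dynamic program for Equation 16.
# best[y][z] = (utility, allocation) for retrieving y*step documents from
# exactly z selected databases, using only the databases processed so far.

def uum_hp_vl_select(rel_probs, n_sdb, n_total_rdoc, step=10):
    slots = n_total_rdoc // step          # the y dimension of Figure 2
    NEG = float("-inf")
    best = [[(NEG, None) for _ in range(n_sdb + 1)] for _ in range(slots + 1)]
    best[0][0] = (0.0, [])                # nothing retrieved, nothing selected
    for i, probs in enumerate(rel_probs):
        # prefix[k] = expected relevant documents in the top k*step positions
        prefix = [0.0]
        for k in range(1, min(10, slots) + 1):
            prefix.append(prefix[-1] + sum(probs[(k - 1) * step:k * step]))
        new = [row[:] for row in best]
        for y in range(1, slots + 1):
            for z in range(1, n_sdb + 1):
                for k in range(1, min(10, y) + 1):
                    util, alloc = best[y - k][z - 1]
                    if util != NEG and util + prefix[k] > new[y][z][0]:
                        new[y][z] = (util + prefix[k], alloc + [(i, k * step)])
        best = new
    # allocation: list of (database index, number of documents to retrieve)
    return best[slots][n_sdb][1]
```

The step size of 10 corresponds to the multiple-of-10 constraint in Equation 16; with `step=1` the same sketch solves the unrestricted problem at higher cost.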
Table 1. Testbed statistics.

Testbed   Size (GB)   Number of documents (Min / Avg / Max)   Size in MB (Min / Avg / Max)
Trec123   3.2         752 / 10,782 / 39,713                   28 / 32 / 42

Table 2. Query set statistics.

Name      TREC Topic Set   TREC Topic Field   Average Length (Words)
Trec123   51-150           Title              3.1

Trec123-2ldb-60col (representative): The databases in trec123-100col-bysource were sorted in alphabetical order. Two large databases were created by merging 20 small databases with a round-robin method. Thus, the two large databases have more relevant documents due to their large sizes, even though their densities of relevant documents are roughly the same as those of the small databases.

Trec123-AP-WSJ-60col (relevant): The 24 Associated Press collections and the 16 Wall Street Journal collections in the trec123-100col-bysource testbed were collapsed into two large databases, APall and WSJall. The other 60 collections were left unchanged. The APall and WSJall databases have higher densities of documents relevant to TREC queries than the small databases. Thus, the two large databases have many more relevant documents than the small databases.

Trec123-FR-DOE-81col (nonrelevant): The 13 Federal Register collections and the 6 Department of Energy collections in the trec123-100col-bysource testbed were collapsed into two large databases, FRall and DOEall. The other 80 collections were left unchanged. The FRall and DOEall databases have lower densities of documents relevant to TREC queries than the small databases, even though they are much larger.

One hundred queries were created from the title fields of TREC topics 51-150. Queries 101-150 were used as training queries and queries 51-100 were used as test queries (details in Table 2).

4.2 Search Engines
In the uncooperative distributed information retrieval environments of large organizations (companies) or
domain-specific hidden Web, different databases may use different types of search engines. To simulate this multiple-engine environment, three different types of search engines were used in the experiments: INQUERY [2], a unigram statistical language model with linear smoothing [12,20], and a TFIDF retrieval algorithm with ltc weights [12,20]. All of these algorithms were implemented with the Lemur toolkit [12]. The three kinds of search engines were assigned to the databases in the four testbeds in a round-robin manner.

5. RESULTS: RESOURCE SELECTION OF DATABASE RECOMMENDATION

All four testbeds described in Section 4 were used in the experiments to evaluate the resource selection effectiveness of the database recommendation system. The resource descriptions were created using query-based sampling: about 80 queries were sent to each database to download 300 unique documents. The database size statistics were estimated by the sample-resample method [21]. Fifty queries (101-150) were used as training queries to train the logistic model of relevance and to fit the exponential functions of the centralized document score curves for large-ratio databases (details in Section 3.1). The other 50 queries (51-100) were used as test data.

Resource selection algorithms of database recommendation systems are typically compared using the recall metric R_n [1,17,18,21]. Let B denote a baseline ranking, which is often the RBR (relevance-based ranking), and E a ranking provided by a resource selection algorithm. Let B_i and E_i denote the number of relevant documents in the ith ranked database of B or E. Then R_k is defined as follows:

R_k = \frac{\sum_{i=1}^{k} E_i}{\sum_{i=1}^{k} B_i}    (17)

Usually the goal is to search only a few databases, so our figures show results only for selecting up to 20 databases. The experiments summarized in Figure 3 compared the effectiveness of three resource selection algorithms: CORI, ReDDE and UUM/HR. The UUM/HR algorithm is described in Section 3.3.
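The recall metric of Equation 17 translates directly into code. In this sketch (an illustration, not the authors' evaluation code), `e` and `b` are assumed lists giving the number of relevant documents in each ranked database of E and of the baseline B.

```python
# Sketch of the recall metric R_k (Equation 17): the fraction of relevant
# documents reachable in the top k databases of ranking E, relative to the
# relevance-based baseline ranking B.

def recall_at(e, b, k):
    """e[i], b[i]: number of relevant documents in the (i+1)-th ranked
    database of ranking E and of baseline B, respectively."""
    return sum(e[:k]) / sum(b[:k])
```

For example, if the top two databases of E hold 5 and 3 relevant documents while the baseline's top two hold 6 and 2, then R_2 = 8/8 = 1.0 even though R_1 = 5/6.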
It can be seen from Figure 3 that the ReDDE and UUM/HR algorithms are either more effective than the CORI resource selection algorithm (on the representative, relevant and nonrelevant testbeds) or as good as it (on the Trec123-100Col testbed). The UUM/HR algorithm is more effective than the ReDDE algorithm on the representative and relevant testbeds, and about the same as the ReDDE algorithm on the Trec123-100Col and nonrelevant testbeds. This suggests that the UUM/HR algorithm is more robust than the ReDDE algorithm. It can be noted that when only a few databases are selected on the Trec123-100Col or nonrelevant testbeds, the ReDDE algorithm has a small advantage over the UUM/HR algorithm. We attribute this to two causes: i) the ReDDE algorithm was tuned on the Trec123-100Col testbed; and ii) although the difference is small, it may suggest that our logistic model for estimating probabilities of relevance is not accurate enough. More training data or a more sophisticated model may help to solve this minor puzzle.

Figure 3. Resource selection experiments on the four testbeds (one panel per testbed: Trec123-100Col, representative, relevant and nonrelevant; x-axis: number of collections selected).

6. RESULTS: DOCUMENT RETRIEVAL EFFECTIVENESS

For document retrieval, the selected databases are searched and the returned results are merged into a single final list. In all of the experiments discussed in this section, the results retrieved from individual databases were combined with the semi-supervised learning (SSL) results merging algorithm. This version of the SSL algorithm [22] is allowed to download a small number of returned document texts on the fly to create additional training data in the process of learning the linear models that map database-specific document scores into estimated centralized document scores. It has been shown to be very effective in environments where only short result lists are retrieved from each selected database [22].
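The core of that merging step is a per-database linear model. As a minimal sketch, assume for illustration that each database's model is a simple least-squares line fitted on the documents that appear both in its result list and in the centralized sample database (the actual SSL algorithm is described in [22]):

```python
# Illustrative sketch of the linear score mapping used in results merging:
# fit centralized_score ≈ a * db_score + b on overlap documents, then apply
# the fitted model to the database's remaining scores.

def fit_linear_map(db_scores, centralized_scores):
    """Least-squares fit of centralized_scores = a * db_scores + b."""
    n = len(db_scores)
    mean_x = sum(db_scores) / n
    mean_y = sum(centralized_scores) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(db_scores, centralized_scores))
    var = sum((x - mean_x) ** 2 for x in db_scores)
    a = cov / var
    b = mean_y - a * mean_x
    return lambda score: a * score + b
```

Once every selected database has such a mapping, all returned documents can be sorted by their estimated centralized scores to produce the final merged list.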
This is a common scenario in operational environments and was the case in our experiments. Document retrieval effectiveness was measured by Precision at the top part of the final document list.

The experiments in this section study the document retrieval effectiveness of five selection algorithms: CORI, ReDDE, UUM/HR, UUM/HP-FL and UUM/HP-VL. The last three algorithms were proposed in Section 3. The first four algorithms selected 3 or 5 databases, and 50 documents were retrieved from each selected database. The UUM/HP-VL algorithm also selected 3 or 5 databases, but was allowed to adjust the number of documents retrieved from each selected database; the number retrieved was constrained to be between 10 and 100, and a multiple of 10. The Trec123-100Col and representative testbeds were selected for document retrieval because they represent two extreme cases of resource selection effectiveness: in one case the CORI algorithm is as good as the other algorithms, and in the other case it is a lot worse. Tables 3 and 4 show the results on the Trec123-100Col testbed, and Tables 5 and 6 show the results on the representative testbed.

Table 3. Precision on the trec123-100col-bysource testbed when 3 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)

Precision  CORI    ReDDE            UUM/HR           UUM/HP-FL                 UUM/HP-VL
5 docs     0.3640  0.3480 (-4.4%)   0.3960 (+8.8%)   0.4680 (+28.6%)(+18.1%)   0.4640 (+27.5%)(+17.2%)
10 docs    0.3360  0.3200 (-4.8%)   0.3520 (+4.8%)   0.4240 (+26.2%)(+20.5%)   0.4220 (+25.6%)(+19.9%)
15 docs    0.3253  0.3187 (-2.0%)   0.3347 (+2.9%)   0.3973 (+22.2%)(+15.7%)   0.3920 (+20.5%)(+17.1%)
20 docs    0.3140  0.2980 (-5.1%)   0.3270 (+4.1%)   0.3720 (+18.5%)(+13.8%)   0.3700 (+17.8%)(+13.2%)
30 docs    0.2780  0.2660 (-4.3%)   0.2973 (+6.9%)   0.3413 (+22.8%)(+14.8%)   0.3400 (+22.3%)(+14.4%)

Table 4. Precision on the trec123-100col-bysource testbed when 5 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)

Precision  CORI    ReDDE            UUM/HR           UUM/HP-FL                 UUM/HP-VL
5 docs     0.4000  0.3920 (-2.0%)   0.4280 (+7.0%)   0.4680 (+17.0%)(+9.4%)    0.4600 (+15.0%)(+7.5%)
10 docs    0.3800  0.3760 (-1.1%)   0.3800 (+0.0%)   0.4180 (+10.0%)(+10.0%)   0.4320 (+13.7%)(+13.7%)
15 docs    0.3560  0.3560 (+0.0%)   0.3720 (+4.5%)   0.3920 (+10.1%)(+5.4%)    0.4080 (+14.6%)(+9.7%)
20 docs    0.3430  0.3390 (-1.2%)   0.3550 (+3.5%)   0.3710 (+8.2%)(+4.5%)     0.3830 (+11.7%)(+7.9%)
30 docs    0.3240  0.3140 (-3.1%)   0.3313 (+2.3%)   0.3500 (+8.0%)(+5.6%)     0.3487 (+7.6%)(+5.3%)

Table 5. Precision on the representative testbed when 3 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)

Precision  CORI    ReDDE            UUM/HR           UUM/HP-FL                 UUM/HP-VL
5 docs     0.3720  0.4080 (+9.7%)   0.4640 (+24.7%)  0.4600 (+23.7%)(-0.9%)    0.5000 (+34.4%)(+7.8%)
10 docs    0.3400  0.4060 (+19.4%)  0.4600 (+35.3%)  0.4540 (+33.5%)(-1.3%)    0.4640 (+36.5%)(+0.9%)
15 docs    0.3120  0.3880 (+24.4%)  0.4320 (+38.5%)  0.4240 (+35.9%)(-1.9%)    0.4413 (+41.4%)(+2.2%)
20 docs    0.3000  0.3750 (+25.0%)  0.4080 (+36.0%)  0.4040 (+34.7%)(-1.0%)    0.4240 (+41.3%)(+4.0%)
30 docs    0.2533  0.3440 (+35.8%)  0.3847 (+51.9%)  0.3747 (+47.9%)(-2.6%)    0.3887 (+53.5%)(+1.0%)

Table 6. Precision on the representative testbed when 5 databases were selected. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)

Precision  CORI    ReDDE            UUM/HR           UUM/HP-FL                 UUM/HP-VL
5 docs     0.3960  0.4080 (+3.0%)   0.4560 (+15.2%)  0.4280 (+8.1%)(-6.1%)     0.4520 (+14.1%)(-0.9%)
10 docs    0.3880  0.4060 (+4.6%)   0.4280 (+10.3%)  0.4460 (+15.0%)(+4.2%)    0.4560 (+17.5%)(+6.5%)
15 docs    0.3533  0.3987 (+12.9%)  0.4227 (+19.6%)  0.4440 (+25.7%)(+5.0%)    0.4453 (+26.0%)(+5.4%)
20 docs    0.3330  0.3960 (+18.9%)  0.4140 (+24.3%)  0.4290 (+28.8%)(+3.6%)    0.4350 (+30.6%)(+5.1%)
30 docs    0.2967  0.3740 (+26.1%)  0.4013 (+35.3%)  0.3987 (+34.4%)(-0.7%)    0.4060 (+36.8%)(+1.2%)
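The parenthesized entries in Tables 3-6 are relative improvements over a baseline, and can be reproduced as a quick sanity check (a minimal sketch, not part of the paper's tooling):

```python
# Each parenthesized table entry is the percentage improvement of a method's
# precision over its baseline (CORI for the first value; UUM/HR for the
# second value reported for the UUM/HP methods).

def rel_improvement(precision, baseline):
    return 100.0 * (precision - baseline) / baseline
```

For instance, in Table 5 at 5 documents, UUM/HR reaches 0.4640 against CORI's 0.3720, a relative gain of about +24.7%, matching the tabulated value.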
On the Trec123-100Col testbed, the document retrieval effectiveness of the CORI selection algorithm is roughly the same as or slightly better than that of the ReDDE algorithm, but both are worse than the other three algorithms (Tables 3 and 4). The UUM/HR algorithm has a small advantage over the CORI and ReDDE algorithms. One main difference between the UUM/HR algorithm and the ReDDE algorithm was pointed out before: UUM/HR uses training data and linear interpolation to estimate the centralized document score curves, while the ReDDE algorithm [21] uses a heuristic method that assumes the centralized document score curves are step functions and makes no distinction within the top part of the curves. This difference makes UUM/HR better than ReDDE at distinguishing documents with high probabilities of relevance from those with low probabilities. Therefore, UUM/HR reflects the high-precision retrieval goal better than ReDDE and is thus more effective for document retrieval.

The UUM/HR algorithm does not explicitly optimize the selection decision with respect to the high-precision goal, as the UUM/HP-FL and UUM/HP-VL algorithms are designed to do. It can be seen that on this testbed the UUM/HP-FL and UUM/HP-VL algorithms are much more effective than all the other algorithms. This indicates that their power comes from explicitly optimizing the high-precision goal of document retrieval in Equations 14 and 16.

On the representative testbed, CORI is much less effective than the other algorithms for distributed document retrieval (Tables 5 and 6). The document retrieval results of the ReDDE algorithm are better than those of the CORI algorithm but still worse than the results of the UUM/HR algorithm. On this testbed the three UUM algorithms are about equally effective. Detailed analysis shows that the overlap of the selected databases between the
UUM/HR, UUM/HP-FL and UUM/HP-VL algorithms is much larger than in the experiments on the Trec123-100Col testbed, since all of them tend to select the two large databases. This explains why they are about equally effective for document retrieval.

In real operational environments, databases may return no document scores and report only ranked lists of results. As the unified utility maximization model uses only the retrieval scores of sampled documents under a centralized retrieval algorithm to calculate the probabilities of relevance, it makes database selection decisions without referring to the document scores from individual databases, and can easily be generalized to this case of ranked lists without document scores. The only adjustment is that the SSL algorithm merges ranked lists without document scores by assigning each document a pseudo-document score normalized by its rank (in a ranked list of 50 documents, the first document has a score of 1, the second a score of 0.98, and so on), which has been studied in [22]. The experimental results on the trec123-100col-bysource testbed with 3 selected databases are shown in Table 7. The experiment setting was the same as before, except that the document scores were intentionally eliminated so that the selected databases returned only ranked lists of document ids. It can be seen from the results that UUM/HP-FL and UUM/HP-VL work well with databases that return no document scores and are still more effective than the alternatives. Other experiments with databases that return no document scores are not reported, but they show similar results, confirming the effectiveness of the UUM/HP-FL and UUM/HP-VL algorithms.

The above experiments suggest that it is very important to explicitly optimize the high-precision goal in document retrieval. The new algorithms based on this principle achieve results that are better than, or at least as good as, those of the prior state-of-the-art algorithms in several environments.
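The rank-based pseudo-scores described above can be sketched in one line; the 0.02 decrement per rank is inferred from the "1, 0.98, ..." example in the text and is an assumption of this sketch:

```python
# Pseudo-document scores for merging ranked lists that carry no retrieval
# scores: rank 1 maps to 1.0, rank 2 to 0.98, and so on, decreasing linearly.

def pseudo_scores(num_docs, step=0.02):
    return [1.0 - step * rank for rank in range(num_docs)]
```

With the default step, `pseudo_scores(50)` assigns 1.0 to the top document and 0.02 to the 50th, after which the SSL mapping proceeds exactly as in the scored case.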
Table 7. Precision on the trec123-100col-bysource testbed when 3 databases were selected and the search engines do not return document scores. (The first baseline is CORI; the second baseline for UUM/HP methods is UUM/HR.)

Precision  CORI    ReDDE            UUM/HR           UUM/HP-FL                 UUM/HP-VL
5 docs     0.3520  0.3240 (-8.0%)   0.3680 (+4.6%)   0.4520 (+28.4%)(+22.8%)   0.4520 (+28.4%)(+22.8%)
10 docs    0.3320  0.3140 (-5.4%)   0.3340 (+0.6%)   0.4120 (+24.1%)(+23.4%)   0.4020 (+21.1%)(+20.4%)
15 docs    0.3227  0.2987 (-7.4%)   0.3280 (+1.6%)   0.3920 (+21.5%)(+19.5%)   0.3733 (+15.7%)(+13.8%)
20 docs    0.3030  0.2860 (-5.6%)   0.3130 (+3.3%)   0.3670 (+21.2%)(+17.3%)   0.3590 (+18.5%)(+14.7%)
30 docs    0.2727  0.2640 (-3.2%)   0.2900 (+6.3%)   0.3273 (+20.0%)(+12.9%)   0.3273 (+20.0%)(+12.9%)

7. CONCLUSION

Distributed information retrieval solves the problem of finding information that is scattered among many text databases on local area networks and the Internet. Most previous research simply uses the effective resource selection algorithms of database recommendation systems for the distributed document retrieval application. We argue that the high-recall resource selection goal of database recommendation and the high-precision goal of document retrieval are related but not identical. This kind of inconsistency has also been observed in previous work, but the prior solutions either used heuristic methods or assumed cooperation by individual databases (e.g., that all the databases use the same kind of search engine), which is frequently not true in uncooperative environments.

In this work we propose a unified utility maximization model that integrates the resource selection of database recommendation and document retrieval into a single unified framework. In this framework, selection decisions are obtained by optimizing different objective functions. As far as we know, this is the first work that views and theoretically models the distributed information retrieval task in an integrated manner.

The new framework continues a recent research trend of studying the use of query-based sampling and a centralized sample database. A single logistic model was trained on the centralized sample database to estimate the probabilities of relevance of documents from their centralized retrieval scores, while the centralized sample database serves as a bridge connecting the individual databases with the centralized logistic model. Therefore, the probabilities of relevance of all the documents across the databases can be estimated with a very small amount of human relevance judgment, which is much more efficient than previous methods that build a separate model for each database.

This framework is not only more theoretically solid but also very effective. One algorithm for resource selection (UUM/HR) and two algorithms for document retrieval (UUM/HP-FL and UUM/HP-VL) are derived from this framework. Empirical studies have been conducted on testbeds that simulate the distributed search environments of large organizations or the domain-specific hidden Web. Furthermore, the UUM/HP-FL and UUM/HP-VL resource selection algorithms are extended with a variant of the SSL results merging algorithm to address the distributed document retrieval task when selected databases do not return document scores. Experiments have shown that these algorithms achieve results that are at least as good as the prior state-of-the-art, and sometimes considerably better. Detailed analysis indicates that the advantage of these algorithms comes from explicitly optimizing the goals of the specific tasks.

The unified utility maximization framework is open to different extensions. When cost is associated with searching the online databases, the utility framework can be adjusted to automatically estimate the best number of databases to search, so that a large number of relevant documents can be retrieved at relatively small cost. Another extension of the framework is to consider the retrieval effectiveness of the online databases, which is an important issue
in operational environments. All of these are directions for future research.

ACKNOWLEDGEMENT
This research was supported by NSF grants EIA-9983253 and IIS-0118767. Any opinions, findings, conclusions, or recommendations expressed in this paper are the authors', and do not necessarily reflect those of the sponsor.

REFERENCES
[1] J. Callan. (2000). Distributed information retrieval. In W.B. Croft, editor, Advances in Information Retrieval. Kluwer Academic Publishers. (pp. 127-150).
[2] J. Callan, W.B. Croft, and J. Broglio. (1995). TREC and TIPSTER experiments with INQUERY. Information Processing and Management, 31(3). (pp. 327-343).
[3] J.G. Conrad, X.S. Guo, P. Jackson and M. Meziou. (2002). Database selection using actual physical and acquired logical collection resources in a massive domain-specific operational environment. In Proceedings of the 28th International Conference on Very Large Databases (VLDB).
[4] N. Craswell. (2000). Methods for distributed information retrieval. Ph.D. thesis, The Australian National University.
[5] N. Craswell, D. Hawking, and P. Thistlewaite. (1999). Merging results from isolated search engines. In Proceedings of the 10th Australasian Database Conference.
[6] D. D'Souza, J. Thom, and J. Zobel. (2000). A comparison of techniques for selecting text collections. In Proceedings of the 11th Australasian Database Conference.
[7] N. Fuhr. (1999). A decision-theoretic approach to database selection in networked IR. ACM Transactions on Information Systems, 17(3). (pp. 229-249).
[8] L. Gravano, C. Chang, H. Garcia-Molina, and A. Paepcke. (1997). STARTS: Stanford proposal for internet metasearching. In Proceedings of the 20th ACM-SIGMOD International Conference on Management of Data.
[9] L. Gravano, P. Ipeirotis and M. Sahami. (2003). QProber: A system for automatic classification of hidden-Web databases. ACM Transactions on Information Systems, 21(1).
[10] P. Ipeirotis and L. Gravano. (2002). Distributed search over the hidden web: Hierarchical database sampling and selection. In Proceedings of the 28th International Conference on Very Large Databases (VLDB).
[11] InvisibleWeb.com. http://www.invisibleweb.com
[12] The Lemur toolkit. http://www.cs.cmu.edu/~lemur
[13] J. Lu and J. Callan. (2003). Content-based information retrieval in peer-to-peer networks. In Proceedings of the 12th International Conference on Information and Knowledge Management.
[14] W. Meng, C.T. Yu and K.L. Liu. (2002). Building efficient and effective metasearch engines. ACM Computing Surveys, 34(1).
[15] H. Nottelmann and N. Fuhr. (2003). Evaluating different methods of estimating retrieval quality for resource selection. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
[16] H. Nottelmann and N. Fuhr. (2003). The MIND architecture for heterogeneous multimedia federated digital libraries. ACM SIGIR 2003 Workshop on Distributed Information Retrieval.
[17] A.L. Powell, J.C. French, J. Callan, M. Connell, and C.L. Viles. (2000). The impact of database selection on distributed searching. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
[18] A.L. Powell and J.C. French. (2003). Comparing the performance of database selection algorithms. ACM Transactions on Information Systems, 21(4). (pp. 412-456).
[19] C. Sherman. (2001). Search for the invisible web. Guardian Unlimited.
[20] L. Si and J. Callan. (2002). Using sampled data and regression to merge search engine results. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
[21] L. Si and J.
Callan. (2003). Relevant document distribution estimation method for resource selection. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
[22] L. Si and J. Callan. (2003). A semi-supervised learning method to merge search engine results. ACM Transactions on Information Systems, 21(4). (pp. 457-491).
conventional search engines.\nHidden web contents have been estimated to be 2-50 [19] times larger than the contents that can be searched by conventional search engines.\nTherefore, it is very important to search this type of valuable information.\nThe architecture of distributed search solution is highly influenced by different environmental characteristics.\nIn a small local area network such as small company environments, the information providers may cooperate to provide corpus statistics or use the same type of search engines.\nEarly distributed information retrieval research focused on this type of cooperative environments [1,8].\nOn the other side, in a wide area network such as very large corporate environments or on the Web there are many types of search engines and it is difficult to assume that all the information providers can cooperate as they are required.\nEven if they are willing to cooperate in these environments, it may be hard to enforce a single solution for all the information providers or to detect whether information sources provide the correct information as they are required.\nMany applications fall into the latter type of uncooperative environments such as the Mind project [16] which integrates non-cooperating digital libraries or the QProber system [9] which supports browsing and searching of uncooperative hidden Web databases.\nIn this paper, we focus mainly on uncooperative environments that contain multiple types of independent search engines.\nThere are three important sub-problems in distributed information retrieval.\nFirst, information about the contents of each individual database must be acquired (resource representation) [1,8,21].\nSecond, given a query, a set of resources must be selected to do the search (resource selection) [5,7,21].\nThird, the results retrieved from all the selected resources have to be merged into a single final list before it can be presented to the end user (retrieval and results merging) 
[1,5,20,22].\nMany types of solutions exist for distributed information retrieval.\nInvisible-web.\nnet1 provides guided browsing of hidden Web databases by collecting the resource descriptions of these databases and building hierarchies of classes that group them by similar topics.\nA database recommendation system goes a step further than a browsing system like Invisible-web.\nnet by recommending most relevant information sources to users' queries.\nIt is composed of the resource description and the\nresource selection components.\nThis solution is useful when the users want to browse the selected databases by themselves instead of asking the system to retrieve relevant documents automatically.\nDistributed document retrieval is a more sophisticated task.\nIt selects relevant information sources for users' queries as the database recommendation system does.\nFurthermore, users' queries are forwarded to the corresponding selected databases and the returned individual ranked lists are merged into a single list to present to the users.\nThe goal of a database recommendation system is to select a small set of resources that contain as many relevant documents as possible, which we call a high-recall goal.\nOn the other side, the effectiveness of distributed document retrieval is often measured by the Precision of the final merged document result list, which we call a high-precision goal.\nPrior research indicated that these two goals are related but not identical [4,21].\nHowever, most previous solutions simply use effective resource selection algorithm of database recommendation system for distributed document retrieval system or solve the inconsistency with heuristic methods [1,4,21].\nThis paper presents a unified utility maximization framework to integrate the resource selection problem of both database recommendation and distributed document retrieval together by treating them as different optimization goals.\nFirst, a centralized sample database is built by 
randomly sampling a small number of documents from each database with query-based sampling [1]; database size statistics are also estimated [21]. A logistic transformation model is learned offline with a small number of training queries to map centralized document scores in the centralized sample database to the corresponding probabilities of relevance. Second, after a new query is submitted, it is used to search the centralized sample database, which produces a score for each sampled document. The probability of relevance for each document in the centralized sample database is estimated by applying the logistic model to that document's score. Then, the probabilities of relevance of all the (mostly unseen) documents among the available databases are estimated from the probabilities of relevance of the documents in the centralized sample database and the database size estimates.

For resource selection for a database recommendation system, the databases can be ranked by the expected number of relevant documents they contain, meeting the high-recall goal. For resource selection for a distributed document retrieval system, databases containing a small number of documents with large probabilities of relevance are favored over databases containing many documents with small probabilities of relevance; this criterion meets the high-precision goal of the distributed document retrieval application. Furthermore, the semi-supervised learning (SSL) [20,22] algorithm is applied to merge the returned documents into a final ranked list.

The unified utility framework makes very few assumptions and works in uncooperative environments. Two key features make it a more solid model for distributed information retrieval: i) it formalizes the resource selection problems of different applications as utility functions and optimizes those functions to achieve the optimal results accordingly; and ii) it provides an effective and efficient way 
to estimate the probabilities of relevance of all documents across databases. Specifically, the framework builds logistic models on the centralized sample database to transform centralized retrieval scores into the corresponding probabilities of relevance, and uses the centralized sample database as the bridge between the individual databases and the logistic model. The human effort (relevance judgment) required to train the single centralized logistic model does not scale with the number of databases. This is a large advantage over previous research, which required an amount of human effort linear in the number of databases [7,15]. The unified utility framework is not only more theoretically solid but also very effective: empirical studies show the new model to be at least as accurate as the state-of-the-art algorithms in a variety of configurations.

The next section discusses related work. Section 3 describes the new unified utility maximization model. Section 4 explains our experimental methodology. Sections 5 and 6 present our experimental results for resource selection and document retrieval. Section 7 concludes.

2. PRIOR RESEARCH
3. UNIFIED UTILITY MAXIMIZATION FRAMEWORK
3.1 Estimating Probabilities of Relevance
3.1.1 Learning Probabilities of Relevance
3.1.2 Estimating Centralized Document Scores
3.2 The Unified Utility Maximization Model
3.4 Resource Selection for High-Precision
4. EXPERIMENTAL METHODOLOGY
4.1 Testbeds
4.2 Search Engines
5. RESULTS: RESOURCE SELECTION OF DATABASE RECOMMENDATION
6. RESULTS: DOCUMENT RETRIEVAL EFFECTIVENESS
7. CONCLUSION

Distributed information retrieval solves the problem of finding information that is scattered among many text databases on local area networks and the Internet. Most previous research uses the effective resource selection algorithms of database recommendation systems for the distributed document retrieval application. We argue that the high-recall resource selection goal of database 
recommendation and the high-precision goal of document retrieval are related but not identical. This kind of inconsistency has also been observed in previous work, but the prior solutions either used heuristic methods or assumed cooperation by individual databases (e.g., that all databases use the same kind of search engine), which is frequently not true in uncooperative environments. In this work we propose a unified utility maximization model that integrates the resource selection of the database recommendation and document retrieval tasks into a single framework, in which the selection decisions are obtained by optimizing different objective functions. As far as we know, this is the first work that views and theoretically models the distributed information retrieval task in an integrated manner.

The new framework continues a recent research trend studying the use of query-based sampling and a centralized sample database. A single logistic model is trained on the centralized sample database to estimate the probabilities of relevance of documents from their centralized retrieval scores, while the centralized sample database serves as a bridge connecting the individual databases with the centralized logistic model. Therefore, the probabilities of relevance for all the documents across the databases can be estimated with a very small amount of human relevance judgment, which is much more efficient than previous methods that build a separate model for each database. This framework is not only more theoretically solid but also very effective. One algorithm for resource selection (UUM/HR) and two algorithms for document retrieval (UUM/HP-FL and UUM/HP-VL) are derived from this framework. Empirical studies have been conducted on testbeds that simulate the distributed search solutions of large organizations (companies) or the domain-specific hidden Web. Furthermore, the UUM/HP-FL and UUM/HP-VL resource selection algorithms are extended with a 
variant of the SSL results merging algorithm to address the distributed document retrieval task when selected databases do not return document scores. Experiments have shown that these algorithms achieve results that are at least as good as the prior state of the art, and sometimes considerably better. Detailed analysis indicates that their advantage comes from explicitly optimizing the goals of the specific tasks.

The unified utility maximization framework is open to different extensions. When a cost is associated with searching the online databases, the framework can be adjusted to estimate automatically the best number of databases to search, so that a large number of relevant documents can be retrieved at relatively small cost. Another extension of the framework is to consider the retrieval effectiveness of the online databases, an important issue in operational environments. All of these are directions for future research.

Unified Utility Maximization Framework for Resource Selection

ABSTRACT

This paper presents a unified utility framework for resource selection in distributed text information retrieval. The new framework shows an efficient and effective way to infer the probabilities of relevance of all the documents across the text databases. With the estimated relevance information, resource selection can be made by explicitly optimizing the goals of different applications. Specifically, when used for database recommendation, the selection is optimized for the goal of high recall (include as many relevant documents as possible in the selected databases); when used for distributed document retrieval, the selection targets the high-precision goal (high precision in the final merged list of documents). This new model provides a more solid framework for distributed information retrieval. Empirical studies show that it is at least as effective as other state-of-the-art algorithms.

1. INTRODUCTION

Conventional search engines such as Google or AltaVista use ad-hoc 
information retrieval solutions, assuming that all the searchable documents can be copied into a single centralized database for the purpose of indexing. Distributed information retrieval, also known as federated search [1,4,7,11,14,22], differs from ad-hoc information retrieval in that it addresses the cases where documents cannot be acquired and stored in a single database. For example, "Hidden Web" contents (also called "invisible" or "deep" Web contents) are information on the Web that cannot be accessed by conventional search engines. Hidden Web contents have been estimated to be 2 to 50 times larger [19] than the contents that can be searched by conventional search engines, so it is very important to be able to search this valuable information.

The architecture of a distributed search solution is strongly influenced by its environment. In a small local area network, such as a small company, the information providers may cooperate to provide corpus statistics or use the same type of search engine; early distributed information retrieval research focused on this type of cooperative environment [1,8]. In a wide area network, such as a very large corporate environment or the Web, there are many types of search engines, and it is difficult to assume that all information providers will cooperate as required. Even when they are willing to cooperate, it may be hard to enforce a single solution for all providers, or to detect whether information sources provide the correct information. Many applications fall into this latter, uncooperative type of environment, such as the Mind project [16], which integrates non-cooperating digital libraries, and the QProber system [9], which supports browsing and searching of uncooperative hidden Web databases. In this paper, we focus mainly on uncooperative environments that contain multiple types of 
independent search engines.

There are three important sub-problems in distributed information retrieval. First, information about the contents of each individual database must be acquired (resource representation) [1,8,21]. Second, given a query, a set of resources must be selected to do the search (resource selection) [5,7,21]. Third, the results retrieved from all the selected resources must be merged into a single final list before being presented to the end user (retrieval and results merging) [1,5,20,22].

Many types of solutions exist for distributed information retrieval. Invisible-web.net provides guided browsing of hidden Web databases by collecting the resource descriptions of these databases and building hierarchies of classes that group them by similar topics. A database recommendation system goes a step further than a browsing system like Invisible-web.net by recommending the information sources most relevant to users' queries; it is composed of the resource description and resource selection components. This solution is useful when users want to browse the selected databases themselves instead of asking the system to retrieve relevant documents automatically. Distributed document retrieval is a more sophisticated task: it selects relevant information sources for users' queries, as a database recommendation system does, but users' queries are then forwarded to the selected databases and the returned individual ranked lists are merged into a single list to present to the users.

The goal of a database recommendation system is to select a small set of resources that contain as many relevant documents as possible, which we call a high-recall goal. On the other hand, the effectiveness of distributed document retrieval is often measured by the precision of the final merged document result list, which we call a high-precision goal. Prior research indicated that these two goals are related but not identical 
[4,21]. However, most previous solutions simply reuse the resource selection algorithms of database recommendation systems for distributed document retrieval, or resolve the inconsistency with heuristic methods [1,4,21].

This paper presents a unified utility maximization framework that integrates the resource selection problems of database recommendation and distributed document retrieval by treating them as different optimization goals. First, a centralized sample database is built by randomly sampling a small number of documents from each database with query-based sampling [1]; database size statistics are also estimated [21]. A logistic transformation model is learned offline with a small number of training queries to map centralized document scores in the centralized sample database to the corresponding probabilities of relevance. Second, after a new query is submitted, it is used to search the centralized sample database, which produces a score for each sampled document. The probability of relevance for each document in the centralized sample database is estimated by applying the logistic model to that document's score. Then, the probabilities of relevance of all the (mostly unseen) documents among the available databases are estimated from the probabilities of relevance of the documents in the centralized sample database and the database size estimates. For resource selection for a database recommendation system, the databases can be ranked by the expected number of relevant documents they contain, meeting the high-recall goal. For resource selection for a distributed document retrieval system, databases containing a small number of documents with large probabilities of relevance are favored over databases containing many documents with small probabilities of relevance; this criterion meets the high-precision goal of the distributed document retrieval application. Furthermore, the semi-supervised learning 
(SSL) [20,22] algorithm is applied to merge the returned documents into a final ranked list.

The unified utility framework makes very few assumptions and works in uncooperative environments. Two key features make it a more solid model for distributed information retrieval: i) it formalizes the resource selection problems of different applications as utility functions and optimizes those functions to achieve the optimal results accordingly; and ii) it provides an effective and efficient way to estimate the probabilities of relevance of all documents across databases. Specifically, the framework builds logistic models on the centralized sample database to transform centralized retrieval scores into the corresponding probabilities of relevance, and uses the centralized sample database as the bridge between the individual databases and the logistic model. The human effort (relevance judgment) required to train the single centralized logistic model does not scale with the number of databases. This is a large advantage over previous research, which required an amount of human effort linear in the number of databases [7,15]. The unified utility framework is not only more theoretically solid but also very effective: empirical studies show the new model to be at least as accurate as the state-of-the-art algorithms in a variety of configurations.

The next section discusses related work. Section 3 describes the new unified utility maximization model. Section 4 explains our experimental methodology. Sections 5 and 6 present our experimental results for resource selection and document retrieval. Section 7 concludes.

2. PRIOR RESEARCH

There has been considerable research on all the sub-problems of distributed information retrieval. We survey the most related work in this section. The first problem of distributed information retrieval is resource representation. The STARTS protocol is one solution for acquiring resource descriptions in cooperative 
environments [8]. However, in uncooperative environments, even if the databases are willing to share their information, it is not easy to judge whether the information they provide is accurate, nor to coordinate the databases to provide resource representations that are compatible with each other. Thus, in uncooperative environments, one common choice is query-based sampling, which randomly generates and sends queries to individual search engines and retrieves some documents to build the descriptions. As the sampled documents are selected by random queries, query-based sampling is not easily fooled by an adversarial spammer interested in attracting more traffic. Experiments have shown that fairly accurate resource descriptions can be built by sending about 80 queries and downloading about 300 documents [1].

Many resource selection algorithms, such as gGlOSS/vGlOSS [8] and CORI [1], have been proposed in the last decade. The CORI algorithm represents each database by its terms, their document frequencies, and a small number of corpus statistics (details in [1]). As prior research on different datasets has shown the CORI algorithm to be the most stable and effective of these algorithms [1,17,18], we use it as a baseline algorithm in this work. The relevant document distribution estimation (ReDDE [21]) resource selection algorithm is a recent algorithm that tries to estimate the distribution of relevant documents across the available databases and ranks the databases accordingly. Although the ReDDE algorithm has been shown to be effective, it relies on heuristic constants that are set empirically [21].

The last step of the document retrieval sub-problem is results merging, which is the process of transforming database-specific document scores into comparable database-independent document scores. The semi-supervised learning (SSL) [20,22] results merging algorithm uses the documents acquired by query-based sampling as 
training data and linear regression to learn database-specific, query-specific merging models. These linear models are used to convert the database-specific document scores into approximated centralized document scores. The SSL algorithm has been shown to be effective [22], and it serves as an important component of our unified utility maximization framework (Section 3).

To achieve accurate document retrieval results, many previous methods simply use resource selection algorithms that are effective for database recommendation. But as pointed out above, a good resource selection algorithm optimized for high recall may not work well for document retrieval, which targets the high-precision goal. This type of inconsistency has been observed in previous research [4,21]; the research in [21] tried to solve the problem with a heuristic method.

The research most similar to what we propose here is the decision-theoretic framework (DTF) [7,15]. This framework computes a selection that minimizes the overall costs (e.g., retrieval quality, time) of the document retrieval system, and several methods [15] have been proposed to estimate the retrieval quality. However, two points distinguish our research from the DTF model. First, the DTF is designed specifically for document retrieval, whereas our new model integrates two distinct applications with different requirements (database recommendation and distributed document retrieval) into the same unified framework. Second, the DTF builds a model for each database to calculate the probabilities of relevance, which requires human relevance judgments for the results retrieved from each database. In contrast, our approach builds only one logistic model for the centralized sample database. The centralized sample database serves as a bridge connecting the individual databases with the centralized logistic model, so the probabilities of relevance of documents in different databases can be 
estimated. This strategy can save a large amount of human judgment effort and is a big advantage of the unified utility maximization framework over the DTF, especially when there are a large number of databases.

3. UNIFIED UTILITY MAXIMIZATION FRAMEWORK

The Unified Utility Maximization (UUM) framework is based on estimating the probabilities of relevance of the (mostly unseen) documents available in the distributed search environment. In this section we describe how the probabilities of relevance are estimated and how they are used by the Unified Utility Maximization model. We also describe how the model can be optimized for the high-recall goal of a database recommendation system and the high-precision goal of a distributed document retrieval system.

3.1 Estimating Probabilities of Relevance

As pointed out above, the purpose of resource selection is high recall and the purpose of document retrieval is high precision. To meet these diverse goals, the key issue is to estimate the probabilities of relevance of the documents in the various databases. This is a difficult problem because we can observe only a sample of the contents of each database, obtained by query-based sampling. Our strategy is to make full use of all the available information to calculate the probability estimates.

3.1.1 Learning Probabilities of Relevance

In the resource description step, the centralized sample database is built by query-based sampling and the database sizes are estimated using the sample-resample method [21]. At the same time, an effective retrieval algorithm (Inquery [2]) is applied to the centralized sample database with a small number (e.g., 50) of training queries. For each training query, the CORI resource selection algorithm [1] is applied to select some number (e.g., 10) of databases and retrieve 50 document ids from each database. The SSL results merging algorithm [20,22] is used to merge the results. Then, we can download the top 50 documents in the final 
merged list and calculate their corresponding centralized scores using Inquery and the corpus statistics of the centralized sample database. The centralized scores are further normalized (divided by the maximum centralized score for each query), as this method has been suggested to improve estimation accuracy in previous research [15]. Human judgment is acquired for those documents and a logistic model is built to transform the normalized centralized document scores into probabilities of relevance as follows:

P(rel \mid d) = \frac{\exp(a_c S_c(d) + b_c)}{1 + \exp(a_c S_c(d) + b_c)}

where S_c(d) is the normalized centralized document score and a_c and b_c are the two parameters of the logistic model. These two parameters are estimated by maximizing the probabilities of relevance of the training queries. The logistic model provides the tool to calculate probabilities of relevance from centralized document scores.

3.1.2 Estimating Centralized Document Scores

When the user submits a new query, the centralized document scores of the documents in the centralized sample database are calculated. However, in order to calculate the probabilities of relevance, we need to estimate centralized document scores for all documents across the databases, not only the sampled documents. This goal is accomplished using the centralized scores of the documents in the centralized sample database together with the database size statistics. We define the database scale factor for the ith database as the ratio of the estimated database size to the number of documents sampled from this database:

SF_{db_i} = \frac{N_{db_i}}{N_{db_i\_samp}}

where N_{db_i} is the estimated size of the ith database and N_{db_i\_samp} is the number of documents from the ith database in the centralized sample database. The intuition behind the database scale factor is that, for a database whose scale factor is 50, if one document from this database in the centralized sample database has a centralized document score of 0.5, we may guess that there are about 50 documents in that database with scores of about 0.5. Actually, we can 
apply a finer non-parametric linear interpolation method to estimate the centralized document score curve for each database.\nFormally, we rank all the sampled documents from the ith database by their centralized document scores to get the sampled centralized document score list {Sc (dsi1), Sc (dsi2), Sc (dsi3), ...} for the ith database; we assume that if we could calculate the centralized document scores for all the documents in this database and get the complete centralized document score list, the top document in the sampled list would have rank SFdbi\/2, the second document in the sampled list would have rank 3*SFdbi\/2, and so on.\nTherefore, the data points of sampled documents in the complete list are: {(SFdbi\/2, Sc (dsi1)), (3*SFdbi\/2, Sc (dsi2)), (5*SFdbi\/2, Sc (dsi3)), ...}.\nPiecewise linear interpolation is applied to estimate the centralized document score curve, as illustrated in Figure 1.\nThe complete centralized document score list can then be estimated by reading the value of the interpolated centralized document score curve at each rank.\nIt can be seen from Figure 1 that more sample data points produce more accurate estimates of the centralized document score curves.\nHowever, for databases with large database scale ratios, this kind of linear interpolation may be rather inaccurate, especially for the top ranked (e.g., [1, SFdbi\/2]) documents.\nTherefore, an alternative solution is proposed to estimate the centralized document scores of the top ranked documents for databases with large scale ratios (e.g., larger than 100).\nSpecifically, a logistic model is built for each of these databases.\nThe logistic model is used to estimate the centralized document score of the top-ranked document in the corresponding database from the two sampled documents from that database with the highest centralized scores:\nSc^ (di1) = exp (\u03b1i0 + \u03b1i1 * Sc (dsi1) + \u03b1i2 * Sc (dsi2)) \/ (1 + exp (\u03b1i0 + \u03b1i1 * Sc (dsi1) + \u03b1i2 * Sc (dsi2)))\nwhere \u03b1i0, \u03b1i1 and \u03b1i2 are the parameters of the logistic model.\nFor each training query, the top retrieved document of each database is downloaded and
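The rank-placement and interpolation step just described can be sketched as follows (a pure-Python illustration with made-up sampled scores; flat clamping at the top of the curve stands in for the exponential adjustment used for large-scale-factor databases):

```python
def estimate_score_curve(sampled_scores, scale_factor, n_docs):
    """Piecewise-linear estimate of the centralized score at every rank.

    The j-th best sampled document (0-based) is assumed to sit at rank
    (2j+1) * scale_factor / 2 in the database's complete ranking.
    """
    scores = sorted(sampled_scores, reverse=True)
    ranks = [(2 * j + 1) * scale_factor / 2.0 for j in range(len(scores))]
    curve = []
    for r in range(1, n_docs + 1):
        if r <= ranks[0]:
            # Above the first sample point we simply clamp; the paper
            # instead fits an exponential here for large scale factors.
            curve.append(scores[0])
        elif r >= ranks[-1]:
            curve.append(scores[-1])
        else:
            j = next(k for k in range(len(ranks) - 1) if ranks[k + 1] >= r)
            frac = (r - ranks[j]) / (ranks[j + 1] - ranks[j])
            curve.append(scores[j] + frac * (scores[j + 1] - scores[j]))
    return curve

# Made-up sampled scores for a database with scale factor 50:
curve = estimate_score_curve([0.9, 0.5, 0.1], scale_factor=50, n_docs=150)
```

With scale factor 50, the three sampled documents are placed at ranks 25, 75 and 125, and every other rank is interpolated between them.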
the corresponding centralized document score is calculated.\nTogether with the scores of the top two sampled documents, these parameters can be estimated.\nAfter the centralized score of the top document is estimated, an exponential function is fitted for the top part ([1, SFdbi\/2]) of the centralized document score curve as:\nSc^ (dij) = \u03b2i0 * exp (\u03b2i1 * j), for j in [1, SFdbi\/2]\nThe two parameters \u03b2i0 and \u03b2i1 are fitted to make sure the exponential function passes through the two points (1, Sc^ (di1)) and (SFdbi\/2, Sc (dsi1)).\nThe exponential function is only used to adjust the top part of the centralized document score curve; the lower part of the curve is still fitted with the linear interpolation method described above.\nAdjusting the top-ranked part of the curve by fitting an exponential function has been shown empirically to produce more accurate results.\nFigure 1.\nLinear interpolation construction of the complete centralized document score list (database scale factor is 50).\nFrom the centralized document score curves, we can estimate the complete centralized document score lists for all the available databases.\nAfter the estimated centralized document scores are normalized, the complete lists of probabilities of relevance can be constructed from the complete centralized document score lists by Equation 1.\nFormally, for the ith database, the complete list of probabilities of relevance is: Ri^ = {R (di1), R (di2), R (di3), ...}.\n3.2 The Unified Utility Maximization Model\nIn this section, we formally define the new unified utility maximization model, which optimizes the resource selection problem for the two goals of high-recall (database recommendation) and high-precision (distributed document retrieval) in the same framework.\nIn the task of database recommendation, the system needs to decide how to rank databases.\nIn the task of document retrieval, the system not only needs to select the databases but also needs to decide how many documents to retrieve from each selected database.\nWe generalize the database
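The two-point exponential fit has a closed form, sketched below (the function name and score values are invented for illustration; the first argument plays the role of the estimated top-document score Sc^(di1), the second that of the best sampled score Sc(dsi1)):

```python
import math

def fit_top_exponential(top_doc_score, best_sample_score, scale_factor):
    """Solve s(r) = b0 * exp(b1 * r) so the curve passes through
    (1, top_doc_score) and (scale_factor / 2, best_sample_score)."""
    r2 = scale_factor / 2.0
    b1 = math.log(best_sample_score / top_doc_score) / (r2 - 1.0)
    b0 = top_doc_score * math.exp(-b1)
    return b0, b1

# Hypothetical values: estimated top-document score 0.95, best sampled
# score 0.9, database scale factor 100 (so the anchor ranks are 1 and 50).
b0, b1 = fit_top_exponential(0.95, 0.9, 100.0)
```

Since scores decrease with rank, b1 comes out negative, giving a smoothly decaying head for the score curve instead of the flat segment linear interpolation would produce.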
recommendation selection process, which implicitly recommends all documents in every selected database, as a special case of the selection decision for the document retrieval task.\nFormally, we denote di as the number of documents we would like to retrieve from the ith database and d = {d1, d2, ...} as a selection action for all the databases.\nThe database selection decision is made based on the complete lists of probabilities of relevance for all the databases.\nThe complete lists of probabilities of relevance are inferred from all the available information: specifically Rs, which stands for the resource descriptions acquired by query-based sampling together with the database size estimates acquired by sample-resample, and Sc, which stands for the centralized document scores of the documents in the centralized sample database.\nIf the method of estimating centralized document scores and probabilities of relevance in Section 3.1 is acceptable, then the most probable complete lists of probabilities of relevance can be derived, and we denote them as \u03b8^ = {R1^, R2^, ..., R|DB|^}.\nGiven complete lists of probabilities of relevance \u03b8, we associate a utility function U (\u03b8, d) which indicates the benefit from making the d selection when the true complete lists of probabilities of relevance are \u03b8.\nTherefore, the selection decision defined by the Bayesian framework is:\nd* = argmax_d \u222b U (\u03b8, d) P (\u03b8 | Rs, Sc) d\u03b8\nOne common approach to simplify the computation in the Bayesian framework is to only calculate the utility function at the most probable parameter values instead of calculating the whole expectation.\nIn other words, we only need to calculate:\nd* = argmax_d U (\u03b8^, d)   (Equation 8)\n3.3 Resource Selection for High-Recall\nHigh-recall is the goal of the resource selection algorithm in federated search tasks such as database recommendation.\nThe goal is to select a small set of resources (e.g., no more than Nsdb databases) that contain as many relevant documents as possible, which can be formally defined by the utility function:\nU (\u03b8^, d) = \u03a3i I (di) \u03a3j R^ (dij)   (Equation 9)\nwhere I (di) is the indicator function, which is 1 when the ith database is selected and 0 otherwise.\nPlugging this utility function into the basic model in Equation 8 and adding the constraint on the number of selected databases (\u03a3i I (di) = Nsdb) yields the high-recall optimization problem.\nThe solution of this optimization problem is very simple.\nWe can calculate the expected number of relevant documents for each database as follows:\nERi = \u03a3j R^ (dij)\nThe Nsdb databases with the largest expected numbers of relevant documents can be selected to meet the high-recall goal.\nWe call this the UUM\/HR algorithm (Unified Utility Maximization for High-Recall).\n3.4 Resource Selection for High-Precision\nHigh-precision is the goal of the resource selection algorithm in federated search tasks such as distributed document retrieval.\nIt is measured by the Precision at the top part of the final merged document list.\nThis high-precision criterion is realized by the following utility function, which measures the Precision of retrieved documents from the selected databases:\nU (\u03b8^, d) = \u03a3i I (di) \u03a3j=1..di R^ (dij)   (Equation 12)\nNote that the key difference between Equation 12 and Equation 9 is that Equation 9 sums up the probabilities of relevance of all the documents in a database, while Equation 12 only considers a much smaller part of the ranking.\nSpecifically, we can calculate the optimal selection decision by:\nd* = argmax_d U (\u03b8^, d)   (Equation 13)\nDifferent kinds of constraints caused by different characteristics of the document retrieval tasks can be associated with the above optimization problem.\nThe most common one is to select a fixed number (Nsdb) of databases and retrieve a fixed number (Nrdoc) of documents from each selected database, formally defined as:\ndi = Nrdoc or di = 0, with \u03a3i I (di) = Nsdb   (Equation 14)\nThis optimization problem can be solved easily by calculating the expected number of relevant documents in the top part of each database's complete list of probabilities of relevance:\nERi_top = \u03a3j=1..Nrdoc R^ (dij)\nThen the databases can be ranked by these values and selected.\nWe call this the UUM\/HP-FL algorithm (Unified Utility Maximization for High-Precision with Fixed Length document rankings from each selected database).\nA more complex situation is to vary the number of retrieved documents from each selected database.\nMore specifically,
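Both selection rules above reduce to ranking databases by sums of estimated probabilities of relevance. A minimal sketch (the function names, database names and probability lists are invented for illustration):

```python
def select_high_recall(prob_lists, n_sdb):
    """UUM/HR: rank databases by the expected number of relevant
    documents over the whole estimated list, keep the top n_sdb."""
    totals = {db: sum(probs) for db, probs in prob_lists.items()}
    return sorted(totals, key=totals.get, reverse=True)[:n_sdb]

def select_high_precision_fl(prob_lists, n_sdb, n_rdoc):
    """UUM/HP-FL: rank databases by the expected number of relevant
    documents in the top n_rdoc positions only."""
    tops = {db: sum(probs[:n_rdoc]) for db, probs in prob_lists.items()}
    return sorted(tops, key=tops.get, reverse=True)[:n_sdb]

# Made-up estimated probability-of-relevance lists for two databases:
pl = {"A": [0.9] + [0.05] * 100, "B": [0.4] * 20}
```

On these invented lists the two goals disagree: the high-recall rule prefers database B (many moderately promising documents), while the high-precision rule with a shallow retrieval depth prefers database A (one almost certainly relevant document at the top).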
we allow different selected databases to return different numbers of documents.\nFor simplification, the result list lengths are required to be multiples of a baseline number 10.\n(This value can also be varied, but for simplification it is set to 10 in this paper.)\nThis restriction is set to simulate the behavior of commercial search engines on the Web.\n(Search engines such as Google and AltaVista return only 10 or 20 document ids on every result page.)\nThis restriction saves the computation time of calculating the optimal database selection by allowing the step of the dynamic programming to be 10 instead of 1 (discussed in more detail below).\nFor further simplification, we restrict the selection to at most Nsdb databases.\nTogether with the constraint \u03a3i di = NTotal_rdoc, where NTotal_rdoc is the total number of documents to be retrieved, this defines the optimization problem of Equation 16.\nA dynamic programming algorithm can be applied to calculate the optimal solution.\nThe basic steps of this dynamic programming method are described in Figure 2.\nAs this algorithm allows retrieving result lists of varying lengths from each selected database, it is called the UUM\/HP-VL algorithm.\nAfter the selection decisions are made, the selected databases are searched and the corresponding document ids are retrieved from each database.\nThe final step of document retrieval is to merge the returned results into a single ranked list with the semi-supervised learning (SSL) algorithm.\nIt was pointed out before that the SSL algorithm maps the database-specific scores into centralized document scores and builds the final ranked list accordingly, which is consistent with all our selection procedures, where documents with higher probabilities of relevance (thus higher centralized document scores) are selected.\n4.\nEXPERIMENTAL METHODOLOGY\n4.1 Testbeds\nIt is desirable to evaluate distributed information retrieval algorithms with testbeds that closely simulate real-world applications.\nThe TREC Web collections WT2g and WT10g [4,13] provide a way to partition documents by different Web servers.\nIn this way, a large
number (O (1000)) of databases with rather diverse contents could be created, which may make this testbed a good candidate to simulate operational environments such as the open-domain hidden Web.\nTable 1: Testbed statistics.\nHowever, two weaknesses of this testbed are: i) each database contains only a small number of documents (259 documents on average for WT2g) [4]; and ii) the contents of WT2g and WT10g were arbitrarily crawled from the Web.\nA hidden Web database is not likely to provide personal homepages or pages that merely indicate they are under construction and contain no useful information at all, yet these types of web pages are contained in the WT2g\/WT10g datasets.\nTherefore, the noisy Web data is not similar to high-quality hidden Web database contents, which are usually organized by domain experts.\nAnother choice is the TREC news\/government data [1,15,17,18,21].\nTREC news\/government data is concentrated on relatively narrow topics.\nCompared with TREC Web data: i) the news\/government documents are much more similar to the contents provided by a topic-oriented database than an arbitrary web page is; and ii) a database in this testbed is larger than one of TREC Web data.\nOn average a database contains thousands of documents, which is more realistic than a database of TREC Web data with about 250 documents.\nAs the contents and sizes of the databases in the TREC news\/government testbed are more similar to those of a topic-oriented database, it is a good candidate to simulate the distributed information retrieval environments of large organizations (companies) or domain-specific hidden Web sites, such as West, which provides access to legal, financial and news text databases [3].\nAs most current distributed information retrieval systems are developed for the environments of large organizations (companies) or the domain-specific hidden Web rather than the open-domain hidden Web, the TREC news\/government testbed was chosen in this
work.\nThe trec123-100col-bysource testbed is one of the most frequently used TREC news\/government testbeds [1,15,17,21], and it was chosen for this work.\nThree testbeds in [21] with skewed database size distributions and different types of relevant document distributions were also used to give a more thorough simulation of real environments.\nTrec123-100col-bysource: 100 databases were created from TREC CDs 1, 2 and 3.\nThey were organized by source and publication date [1].\nThe sizes of the databases are not skewed.\nDetails are in Table 1.\nThree testbeds built in [21] were based on the trec123-100col-bysource testbed.\nEach testbed contains many "small" databases and two large databases created by merging about 10-20 small databases together.\ni) Define the selection decision d_xyz, which represents the best selection decision under the condition that only databases from number 1 to number x are considered for selection, y * 10 documents in total will be retrieved, and z databases are selected out of the x database candidates; Sel (x, y, z) is the corresponding utility value achieved by choosing that best selection.\nii) Initialize Sel (1, 1..NTotal_rdoc \/ 10, 1..Nsdb) with only the estimated relevance information of the 1st database.\niii) Iterate the current database candidate i from 2 to |DB|: if it is beneficial to retrieve documents from the ith database, update the solution; otherwise this database should not be selected and the previous best solution Sel (i-1, y, z) should be kept.\nThen set the values of d_iyz and Sel (i, y, z) accordingly.\niv) The final optimal utility value is Sel (|DB|, NTotal_rdoc \/ 10, Nsdb), with the corresponding selection decision.\nFigure 2.\nThe dynamic programming optimization procedure for Equation 16.\nTo simulate the multiple-engine environment, three different types of search engines were used in the experiments: INQUERY [2], a unigram statistical language model with linear smoothing [12,20] and a TFIDF retrieval algorithm with "ltc" weight [12,20].\nAll these algorithms were implemented with the Lemur toolkit [12].\nThese three kinds of search engines were assigned to the databases among the
four testbeds in a round-robin manner.\n5.\nRESULTS: RESOURCE SELECTION OF DATABASE RECOMMENDATION\nAll four testbeds described in Section 4 were used in the experiments to evaluate the resource selection effectiveness of the database recommendation system.\nThe resource descriptions were created using query-based sampling.\nAbout 80 queries were sent to each database to download 300 unique documents.\nThe database size statistics were estimated by the sample-resample method [21].\nFifty queries (101-150) were used as training queries to build the relevance logistic model and to fit the exponential functions of the centralized document score curves for large-ratio databases (details in Section 3.1).\nAnother 50 queries (51-100) were used as test data.\nFigure 3.\nResource selection experiments on the four testbeds (one panel per testbed, including the relevant and nonrelevant testbeds; x-axis: number of collections selected).\nTrec123-2ldb-60col ("representative"): The databases in the trec123-100col-bysource were sorted in alphabetical order.\nTwo large databases were created by merging 20 small databases with the round-robin method.\nThus, the two large databases have more relevant documents due to their large sizes, even though the densities of relevant documents are roughly the same as in the small databases.\nTrec123-AP-WSJ-60col ("relevant"): The 24 Associated Press collections and the 16 Wall Street Journal collections in the trec123-100col-bysource testbed were collapsed into two large databases APall and WSJall.\nThe other 60 collections were left unchanged.\nThe APall and WSJall databases have higher densities of documents relevant to TREC queries than the small databases.\nThus, the two large databases have many more relevant documents than the small databases.\nTrec123-FR-DOE-81col ("nonrelevant"): The 13 Federal Register collections and the 6 Department of Energy collections in the trec123-100col-bysource testbed were collapsed into two large databases FRall and DOEall.\nThe
other 80 collections were left unchanged.\nThe FRall and DOEall databases have lower densities of documents relevant to TREC queries than the small databases, even though they are much larger.\n100 queries were created from the title fields of TREC topics 51-150.\nThe queries 101-150 were used as training queries and the queries 51-100 were used as test queries (details in Table 2).\n4.2 Search Engines\nIn the uncooperative distributed information retrieval environments of large organizations (companies) or the domain-specific hidden Web, different databases may use different types of search engines.\nResource selection algorithms of database recommendation systems are typically compared using the recall metric Rn [1,17,18,21].\nLet B denote a baseline ranking, which is often the RBR (relevance based ranking), and E a ranking provided by a resource selection algorithm.\nLet Bi and Ei denote the number of relevant documents in the ith ranked database of B or E, respectively.\nThen Rn is defined as follows:\nRn = \u03a3i=1..n Ei \/ \u03a3i=1..n Bi\nUsually the goal is to search only a few databases, so our figures only show results for selecting up to 20 databases.\nThe experiments summarized in Figure 3 compared the effectiveness of three resource selection algorithms, namely CORI, ReDDE and UUM\/HR.\nThe UUM\/HR algorithm is described in Section 3.3.\nIt can be seen from Figure 3 that the ReDDE and UUM\/HR algorithms are more effective than (on the representative, relevant and nonrelevant testbeds) or as good as (on the Trec123-100Col testbed) the CORI resource selection algorithm.\nThe UUM\/HR algorithm is more effective than the ReDDE algorithm on the representative and relevant testbeds and is about the same as the ReDDE algorithm on the Trec123-100Col and the nonrelevant testbeds.\nThis suggests that the UUM\/HR algorithm is more robust than the ReDDE algorithm.\nIt can be noted that when selecting only a few databases on the Trec123-100Col or the nonrelevant testbeds, the
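Computed directly from the per-database relevant-document counts Bi and Ei, the Rn metric can be sketched as follows (the count lists below are invented for illustration):

```python
def recall_at_n(baseline_counts, estimated_counts, n):
    """R_n: relevant documents covered by the top n databases of ranking
    E, relative to the top n databases of the relevance-based ranking B."""
    return sum(estimated_counts[:n]) / sum(baseline_counts[:n])

# Invented per-database relevant-document counts:
B = [10, 5, 2]   # databases ordered by the RBR baseline (best first)
E = [5, 10, 0]   # databases in the order ranked by a selection algorithm
assert recall_at_n(B, E, 1) == 0.5
assert recall_at_n(B, E, 2) == 1.0
```

Because the denominator uses the best possible ordering, Rn is at most 1, and a ranking that merely permutes the truly best databases (as E does here for n = 2) still reaches 1.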
ReDDE algorithm has a small advantage over the UUM\/HR algorithm.\nWe attribute this to two causes: i) the ReDDE algorithm was tuned on the Trec123-100Col testbed; and ii) although the difference is small, this may suggest that our logistic model of estimating probabilities of relevance is not accurate enough.\nMore training data or a more sophisticated model may help to solve this minor puzzle.\nTable 3.\nPrecision on the trec123-100col-bysource testbed when 3 databases were selected.\n(The first baseline is CORI; the second baseline for UUM\/HP methods is UUM\/HR.)\nTable 4.\nPrecision on the trec123-100col-bysource testbed when 5 databases were selected.\n(The first baseline is CORI; the second baseline for UUM\/HP methods is UUM\/HR.)\nTable 5.\nPrecision on the representative testbed when 3 databases were selected.\n(The first baseline is CORI; the second baseline for UUM\/HP methods is UUM\/HR.)\nTable 6.\nPrecision on the representative testbed when 5 databases were selected.\n(The first baseline is CORI; the second baseline for UUM\/HP methods is UUM\/HR.)\n6.\nRESULTS: DOCUMENT RETRIEVAL EFFECTIVENESS\nFor document retrieval, the selected databases are searched and the returned results are merged into a single final list.\nIn all of the experiments discussed in this section the results retrieved from individual databases were combined by the semi-supervised learning results merging algorithm.\nThis version of the SSL algorithm [22] is allowed to download a small number of returned document texts "on the fly" to create additional training data in the process of learning the linear models which map database-specific document scores into estimated centralized document scores.\nIt has been shown to be very effective in environments where only short result-lists are retrieved from each selected database [22].\nThis is a common scenario in operational environments and was the case for our experiments.\nDocument retrieval effectiveness was measured by Precision at the top part of the final document list.\nThe experiments in this section were
conducted to study the document retrieval effectiveness of five selection algorithms, namely the CORI, ReDDE, UUM\/HR, UUM\/HP-FL and UUM\/HP-VL algorithms.\nThe last three algorithms were proposed in Section 3.\nThe first four algorithms selected 3 or 5 databases, and 50 documents were retrieved from each selected database.\nThe UUM\/HP-VL algorithm also selected 3 or 5 databases, but it was allowed to adjust the number of documents to retrieve from each selected database; the number retrieved was constrained to be from 10 to 100, and a multiple of 10.\nThe Trec123-100Col and representative testbeds were selected for document retrieval as they represent two extreme cases of resource selection effectiveness; in one case the CORI algorithm is as good as the other algorithms and in the other case it is quite a lot worse than the other algorithms.\nTable 7.\nPrecision on the trec123-100col-bysource testbed when 3 databases were selected.\n(The first baseline is CORI; the second baseline for UUM\/HP methods is UUM\/HR.\nSearch engines do not return document scores.)\nTables 3 and 4 show the results on the Trec123-100Col testbed, and Tables 5 and 6 show the results on the representative testbed.\nOn the Trec123-100Col testbed, the document retrieval effectiveness of the CORI selection algorithm is roughly the same as or a little better than that of the ReDDE algorithm, but both of them are worse than the other three algorithms (Tables 3 and 4).\nThe UUM\/HR algorithm has a small advantage over the CORI and ReDDE algorithms.\nOne main difference between the UUM\/HR algorithm and the ReDDE algorithm was pointed out before: the UUM\/HR uses training data and linear interpolation to estimate the centralized document score curves, while the ReDDE algorithm [21] uses a heuristic method, assumes the centralized document score curves are step functions, and makes no distinction among the top part of the curves.\nThis difference makes UUM\/HR better than the ReDDE algorithm at
distinguishing documents with high probabilities of relevance from those with low probabilities of relevance.\nTherefore, the UUM\/HR algorithm reflects the high-precision retrieval goal better than the ReDDE algorithm and thus is more effective for document retrieval.\nThe UUM\/HR algorithm does not explicitly optimize the selection decision with respect to the high-precision goal as the UUM\/HP-FL and UUM\/HP-VL algorithms are designed to do.\nIt can be seen that on this testbed, the UUM\/HP-FL and UUM\/HP-VL algorithms are much more effective than all the other algorithms.\nThis indicates that their power comes from explicitly optimizing the high-precision goal of document retrieval in Equations 14 and 16.\nOn the representative testbed, CORI is much less effective than the other algorithms for distributed document retrieval (Tables 5 and 6).\nThe document retrieval results of the ReDDE algorithm are better than those of the CORI algorithm but still worse than the results of the UUM\/HR algorithm.\nOn this testbed the three UUM algorithms are about equally effective.\nDetailed analysis shows that the overlap of the selected databases between the UUM\/HR, UUM\/HP-FL and UUM\/HP-VL algorithms is much larger than in the experiments on the Trec123-100Col testbed, since all of them tend to select the two large databases.\nThis explains why they are about equally effective for document retrieval.\nIn real operational environments, databases may return no document scores and report only ranked lists of results.\nAs the unified utility maximization model only utilizes retrieval scores of sampled documents with a centralized retrieval algorithm to calculate the probabilities of relevance, it makes database selection decisions without referring to the document scores from individual databases and can be easily generalized to this case of ranked lists without document scores.\nThe only adjustment is that the SSL algorithm merges ranked lists without document scores by assigning the documents with
pseudo-document scores determined by their ranks (in a ranked list of 50 documents, the first has a score of 1, the second a score of 0.98, etc.), which has been studied in [22].\nThe experiment results on the trec123-100Col-bysource testbed with 3 selected databases are shown in Table 7.\nThe experiment setting was the same as before except that the document scores were intentionally eliminated and the selected databases returned only ranked lists of document ids.\nIt can be seen from the results that the UUM\/HP-FL and UUM\/HP-VL algorithms work well with databases returning no document scores and are still more effective than the alternatives.\nOther experiments with databases that return no document scores are not reported, but they show similar results, confirming the effectiveness of the UUM\/HP-FL and UUM\/HP-VL algorithms.\nThe above experiments suggest that it is very important to optimize the high-precision goal explicitly in document retrieval.\nThe new algorithms based on this principle achieve results that are at least as good as, and often better than, the prior state-of-the-art algorithms in several environments.\n7.\nCONCLUSION\nDistributed information retrieval solves the problem of finding information that is scattered among many text databases on local area networks and the Internet.\nMost previous research uses the resource selection algorithms of database recommendation systems for the distributed document retrieval application.\nWe argue that the high-recall resource selection goal of database recommendation and the high-precision goal of document retrieval are related but not identical.\nThis kind of inconsistency has also been observed in previous work, but the prior solutions either used heuristic methods or assumed cooperation by individual databases (e.g., that all the databases use the same kind of search engine), which is frequently not true in uncooperative environments.\nIn this work we propose a unified utility maximization model to integrate the resource selection of database
recommendation and document retrieval tasks into a single unified framework.\nIn this framework, the selection decisions are obtained by optimizing different objective functions.\nAs far as we know, this is the first work that tries to view and theoretically model the distributed information retrieval task in an integrated manner.\nThe new framework continues a recent research trend of studying the use of query-based sampling and a centralized sample database.\nA single logistic model was trained on the centralized sample database to estimate the probabilities of relevance of documents from their centralized retrieval scores, while the centralized sample database serves as a bridge to connect the individual databases with the centralized logistic model.\nTherefore, the probabilities of relevance for all the documents across the databases can be estimated with a very small amount of human relevance judgment, which is much more efficient than previous methods that build a separate model for each database.\nThis framework is not only more theoretically solid but also very effective.\nOne algorithm for resource selection (UUM\/HR) and two algorithms for document retrieval (UUM\/HP-FL and UUM\/HP-VL) are derived from this framework.\nEmpirical studies have been conducted on testbeds that simulate the distributed search environments of large organizations (companies) or the domain-specific hidden Web.\nFurthermore, the UUM\/HP-FL and UUM\/HP-VL resource selection algorithms are extended with a variant of the SSL results merging algorithm to address the distributed document retrieval task when selected databases do not return document scores.\nExperiments have shown that these algorithms achieve results that are at least as good as the prior state-of-the-art, and sometimes considerably better.\nDetailed analysis indicates that the advantage of these algorithms comes from explicitly optimizing the goals of the specific tasks.\nThe unified utility maximization framework is open for
different extensions.\nWhen cost is associated with searching the online databases, the utility framework can be adjusted to automatically estimate the best number of databases to search, so that a large number of relevant documents can be retrieved at relatively small cost.\nAnother extension of the framework is to consider the retrieval effectiveness of the online databases, which is an important issue in operational environments.\nAll of these are directions for future research.\nHITS Hits TREC: Exploring IR Evaluation Results with Network Analysis\nStefano Mizzaro Dept.
of Mathematics and Computer Science University of Udine Via delle Scienze, 206 - 33100 Udine, Italy mizzaro@dimi.uniud.it Stephen Robertson Microsoft Research 7 JJ Thomson Avenue Cambridge CB3 0FB, UK ser@microsoft.com ABSTRACT We propose a novel method of analysing data gathered from TREC or similar information retrieval evaluation experiments.\nWe define two normalized versions of average precision, that we use to construct a weighted bipartite graph of TREC systems and topics.\nWe analyze the meaning of well known - and somewhat generalized - indicators from social network analysis on the Systems-Topics graph.\nWe apply this method to an analysis of TREC 8 data; among the results, we find that authority measures systems performance, that hubness of topics reveals that some topics are better than others at distinguishing more or less effective systems, that with current measures a system that wants to be effective in TREC needs to be effective on easy topics, and that by using different effectiveness measures this is no longer the case.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval General Terms Measurement, Experimentation 1.\nINTRODUCTION Evaluation is a primary concern in the Information Retrieval (IR) field.\nTREC (Text REtrieval Conference) [12, 15] is an annual benchmarking exercise that has become a de facto standard in IR evaluation: before the actual conference, TREC provides to participants a collection of documents and a set of topics (representations of information needs).\nParticipants use their systems to retrieve, and submit to TREC, a list of documents for each topic.\nAfter the lists have been submitted and pooled, the TREC organizers employ human assessors to provide relevance judgements on the pooled set.\nThis defines a set of relevant documents for each topic.\nSystem effectiveness is then measured by well established metrics (Mean Average Precision being the most used).\nOther 
conferences such as NTCIR, INEX, CLEF provide comparable data.\nNetwork analysis is a discipline that studies features and properties of (usually large) networks, or graphs.\nOf particular importance is Social Network Analysis [16], that studies networks made up by links among humans (friendship, acquaintance, co-authorship, bibliographic citation, etc.).\nNetwork analysis and IR fruitfully meet in Web Search Engine implementation, as is already described in textbooks [3,6].\nCurrent search engines use link analysis techniques to help rank the retrieved documents.\nSome indicators (and the corresponding algorithms that compute them) have been found useful in this respect, and are nowadays well known: Inlinks (the number of links to a Web page), PageRank [7], and HITS (Hyperlink-Induced Topic Search) [5].\nSeveral extensions to these algorithms have been and are being proposed.\nThese indicators and algorithms might be quite general in nature, and can be used for applications which are very different from search result ranking.\nOne example is using HITS for stemming, as described by Agosti et al. 
[1].
In this paper, we propose and demonstrate a method for constructing a network, specifically a weighted complete bidirectional directed bipartite graph, on a set of TREC topics and participating systems. Links represent effectiveness measurements on system-topic pairs. We then apply analysis methods originally developed for search applications to the resulting network. This reveals phenomena previously hidden in TREC data. In passing, we also provide a small generalization to Kleinberg's HITS algorithm, as well as to Inlinks and PageRank. The paper is organized as follows: Sect. 2 gives some motivations for the work. Sect. 3 presents the basic ideas of normalizing average precision and of constructing a Systems-Topics graph, whose properties are analyzed in Sect. 4; Sect. 5 presents some experiments on TREC 8 data; Sect. 6 discusses some issues and Sect. 7 closes the paper.

2. MOTIVATIONS

We are interested in the following hypotheses:

1. Some systems are more effective than others;
2. Some topics are easier than others;
3. Some systems are better than others at distinguishing easy and difficult topics;
4. Some topics are better than others at distinguishing more or less effective systems.

Table 1: AP, MAP and AAP. (a) The general AP table; (b) a numeric example (partially filled).

  (a)      t1            ...  tn            | MAP
    s1     AP(s1, t1)    ...  AP(s1, tn)    | MAP(s1)
    ...    ...                ...           | ...
    sm     AP(sm, t1)    ...  AP(sm, tn)    | MAP(sm)
    --------------------------------------------------
    AAP    AAP(t1)       ...  AAP(tn)       |

  (b)      t1     t2     ... | MAP
    s1     0.5    0.4    ... | 0.6
    s2     0.4    ...    ... | 0.3
    ...    ...    ...        | ...
    -----------------------------
    AAP    0.6    0.3    ... |

The first of these hypotheses needs no further justification - every reported significant difference between any two systems supports it. There is now also quite a lot of evidence for the second, centered on the TREC Robust Track [14]. Our primary interest is in the third and fourth. The third might be regarded as being of purely academic interest; however, the fourth has the potential for being of major practical importance in evaluation studies. If we could identify a relatively small number of topics which were really good at distinguishing effective and ineffective systems, we could save considerable effort in evaluating systems. One possible direction from this point would be to attempt direct identification of such small sets of topics. However, in the present paper, we seek instead to explore the relationships suggested by the hypotheses, between what different topics tell us about systems and what different systems tell us about topics. We seek methods of building and analysing a matrix of system-topic normalised performances, with a view to giving insight into the issue and confirming or refuting the third and fourth hypotheses. It turns out that the obvious symmetry implied by the above formulation of the hypotheses is a property worth investigating, and the investigation does indeed give us valuable insights.

3. THE IDEA

3.1 1st step: average precision table

From TREC results, one can produce an Average Precision (AP) table (see Tab. 1a): each AP(si, tj) value measures the AP of system si on topic tj. Besides AP values, the table shows Mean Average Precision (MAP) values i.e., the mean of the AP values for a single system over all topics, and what we call Average Average
Precision (AAP) values i.e., the average of the AP values for a single topic over all systems:

  MAP(si) = (1/n) · Σ_{j=1..n} AP(si, tj),    (1)

  AAP(tj) = (1/m) · Σ_{i=1..m} AP(si, tj).    (2)

MAPs are indicators of system performance: a higher MAP means a better system. AAPs are indicators of the performance on a topic: a higher AAP means an easier topic - a topic on which all or most systems have good performance.

3.2 Critique of pure AP

MAP is a standard, well known, and widely used IR effectiveness measure. Single AP values are used too (e.g., in AP histograms). Topic difficulty is often discussed (e.g., in the TREC Robust track [14]), although AAP values are not used and, to the best of our knowledge, have never been proposed (the median, not the average, of AP on a topic is used to produce TREC AP histograms [11]). However, the AP values in Tab. 1 present two limitations, which are symmetric in some respect:

• Problem 1. They are not reliable for comparing the effectiveness of a system on different topics, relative to the other systems. If, for example, AP(s1, t1) > AP(s1, t2), can we infer that s1 is a good system (i.e., has a good performance) on t1 and a bad system on t2? The answer is no: t1 might be an easy topic (with high AAP) and t2 a difficult one (low AAP). See an example in Tab. 1b: s1 is outperformed (on average) by the other systems on t1, and it outperforms the other systems on t2.

• Problem 2. Conversely, if, for example, AP(s1, t1) > AP(s2, t1), can we infer that t1 is considered easier by s1 than by s2? No, we cannot: s1 might be a good system (with high MAP) and s2 a bad one (low MAP); see an example in Tab. 1b.

These two problems are a sort of breakdown of the well known high influence of topics on IR evaluation; again, our formulation makes explicit the topics / systems symmetry.

3.3 2nd step: normalizations

To avoid these two problems, we can normalize the AP table in two ways. The first normalization removes the influence of the single topic ease on
system performance. Each AP(si, tj) value in the table depends on both system goodness and topic ease (the value will increase if a system is good and/or the topic is easy). By subtracting from each AP(si, tj) the AAP(tj) value, we obtain normalized AP values (APA(si, tj), Normalized AP according to AAP):

  APA(si, tj) = AP(si, tj) − AAP(tj),    (3)

that depend on system performance only (the value will increase only if system performance is good). See Tab. 2a. The second normalization removes the influence of the single system effectiveness on topic ease: by subtracting from each AP(si, tj) the MAP(si) value, we obtain normalized AP values (APM(si, tj), Normalized AP according to MAP):

  APM(si, tj) = AP(si, tj) − MAP(si),    (4)

that depend on topic ease only (the value will increase only if the topic is easy, i.e., all systems perform well on that topic). See Tab. 2b.

Table 2: Normalizations: normalized AP (APA) and normalized MAP (MAP̄) (a); normalized AP (APM) and normalized AAP (AAP̄) (b); a numeric example (c) and (d).

  (a)      t1             ...  tn             | MAP̄
    s1     APA(s1, t1)    ...  APA(s1, tn)    | MAP̄(s1)
    ...    ...                 ...            | ...
    sm     APA(sm, t1)    ...  APA(sm, tn)    | MAP̄(sm)
    ----------------------------------------------------
           0              ...  0              | 0

  (b)      t1             ...  tn             |
    s1     APM(s1, t1)    ...  APM(s1, tn)    | 0
    ...    ...                 ...            | ...
    sm     APM(sm, t1)    ...  APM(sm, tn)    | 0
    ----------------------------------------------------
    AAP̄    AAP̄(t1)        ...  AAP̄(tn)        | 0

  (c)      t1     t2    ... | MAP̄        (d)      t1     t2    ... |
    s1     −0.1   0.1   ... | ...          s1     −0.1   −0.2  ... | 0
    s2     0.2    ...   ... | ...          s2     0.1    ...   ... | 0
           0      0     ... |              AAP̄    ...    ...   ... |

In other words, APA avoids Problem 1: APA(s, t) values measure the performance of system s on topic t normalized according to the ease of the topic (easy topics will not have higher APA values). Now, if, for example, APA(s1, t2) > APA(s1, t1), we can infer that s1 is a good system on t2 and a bad system on t1 (see Tab. 2c). Vice versa, APM avoids Problem 2: APM(s, t) values measure the ease of topic t according to system s, normalized according to the goodness of the system (good systems will not lead to higher APM values). If, for example, APM(s2, t1) > APM(s1, t1), we can infer that t1 is considered easier by s2 than by s1 (see Tab. 2d).

On the basis of Tables 2a and 2b, we can also define two new measures of system effectiveness and topic ease, i.e., a Normalized MAP (MAP̄), obtained by averaging the APA values on one row in Tab. 2a, and a Normalized AAP (AAP̄), obtained by averaging the APM values on one column in Tab. 2b:

  MAP̄(si) = (1/n) · Σ_{j=1..n} APA(si, tj)    (5)

  AAP̄(tj) = (1/m) · Σ_{i=1..m} APM(si, tj).    (6)

Thus, overall system performance can be measured, besides by means of MAP, also by means of MAP̄. Moreover, MAP̄ is equivalent to MAP, as can be immediately proved by using Eqs. (5), (3), and (1):

  MAP̄(si) = (1/n) · Σ_{j=1..n} (AP(si, tj) − AAP(tj)) = MAP(si) − (1/n) · Σ_{j=1..n} AAP(tj)

(and (1/n) · Σ_{j=1..n} AAP(tj) is the same for all systems). And, conversely, overall topic ease can be measured, besides by means of AAP, also by means of AAP̄, and this is equivalent (the proof is analogous, and relies on Eqs. (6), (4), and (2)).

Figure 1: Construction of the adjacency matrix. Tables 2a and 2b are arranged into an (m+n)×(m+n) block matrix: the upper-right block is APM, the lower-left block is APAᵀ (the transpose of APA), and the two diagonal blocks are zero.

The two Tables 2a and 2b are interesting per se, and can be analyzed in several different ways. In the following we propose an analysis based on network analysis techniques, mainly Kleinberg's HITS algorithm. There is a little further discussion of these normalizations in Sect. 6.

3.4 3rd step: Systems-Topics Graph

The two tables 2a and 2b can be merged into a single one with the procedure shown in Fig. 1. The obtained matrix can be interpreted as the adjacency matrix of a complete weighted bipartite graph, that we call Systems-Topics graph. Arcs and weights in the graph can be interpreted as follows:

• (weight on) arc s → t: how much the system s thinks that the topic t is easy - assuming that a system has no knowledge of the other systems (or, in other words, how easy we might think the topic is, knowing only the results for this one system). This corresponds to APM values, i.e., to normalized topic ease (Fig. 2a).

• (weight on) arc s ← t: how much the topic t thinks that the system s is good - assuming that a topic has no knowledge of the other topics (or, in other words, how good we might think the system is, knowing only the results for this one topic). This corresponds to APA (normalized system effectiveness, Fig. 2b).

Figs. 2c and 2d show the Systems-Topics complete weighted bipartite graph, on a toy example with 4 systems and 2 topics; the graph is split in two parts to have an understandable graphical representation: arcs in Fig. 2c are labeled with APM values; arcs in Fig.
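As a concrete illustration of Eqs. (1)-(4) and of the construction in Fig. 1, the whole pipeline fits in a few lines of numpy. This is our own sketch, not the authors' code, and the AP values below are invented toy data:

```python
import numpy as np

# Toy AP table (invented values): rows = systems, columns = topics.
AP = np.array([[0.5, 0.4, 0.6],
               [0.4, 0.2, 0.3],
               [0.6, 0.3, 0.5]])

MAP = AP.mean(axis=1)    # Eq. (1): mean AP per system, over topics
AAP = AP.mean(axis=0)    # Eq. (2): mean AP per topic, over systems

APA = AP - AAP           # Eq. (3): subtract topic ease (columns now average to 0)
APM = AP - MAP[:, None]  # Eq. (4): subtract system effectiveness (rows average to 0)

# Adjacency matrix of the Systems-Topics graph (Fig. 1):
# arcs s -> t carry APM values, arcs t -> s carry APA values.
m, n = AP.shape
A = np.zeros((m + n, m + n))
A[:m, m:] = APM
A[m:, :m] = APA.T
```

Both normalized tables sum to zero along the appropriate axis, which is exactly the property exploited in Sect. 4 when the weighted outlinks all turn out to be zero.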
2d are labeled with APA values.

4. ANALYSIS OF THE GRAPH

4.1 Weighted Inlinks, Outlinks, PageRank

The sum of weighted outlinks, i.e., the sum of the weights on the outgoing arcs from each node, is always zero:

• The outlinks on each node corresponding to a system s (Fig. 2c) is the sum of all the corresponding APM values on one row of the matrix in Tab. 2b.

• The outlinks on each node corresponding to a topic t (Fig. 2d) is the sum of all the corresponding APA values on one row of the transpose of the matrix in Tab. 2a.

Figure 2: The relationships between systems and topics (a) and (b); and the Systems-Topics graph for a toy example (c) and (d). Dashed arcs correspond to negative values.

Figure 3: Hub and Authority computation (the block matrix products h = A·a and a = Aᵀ·h).

The average[1] of weighted inlinks is:

• MAP̄ for each node corresponding to a system s; this corresponds to the average of all the corresponding APA values on one column of the APAᵀ part of the adjacency matrix (see Fig. 1).

• AAP̄ for each node corresponding to a topic t; this corresponds to the average of all the corresponding APM values on one column of the APM part of the adjacency matrix (see Fig.
1).

Therefore, weighted inlinks measure either system effectiveness or topic ease; weighted outlinks are not meaningful. We could also apply the PageRank algorithm to the network; the meaning of the PageRank of a node is not quite so obvious as Inlinks and Outlinks, but it also seems a sensible measure of either system effectiveness or topic ease: if a system is effective, it will have several incoming links with high weights (APA); if a topic is easy it will have high weights (APM) on the incoming links too. We will see experimental confirmation in the following.

[1] Usually, the sum of the weights on the incoming arcs to each node is used in place of the average; since the graph is complete, it makes no difference.

4.2 Hubs and Authorities

Let us now turn to more sophisticated indicators. Kleinberg's HITS algorithm defines, for a directed graph, two indicators: hubness and authority; we reiterate here some of the basic details of the HITS algorithm in order to emphasize both the nature of our generalization and the interpretation of the HITS concepts in this context. Usually, hubness and authority are defined as h(x) = Σ_{x→y} a(y) and a(x) = Σ_{y→x} h(y), and described intuitively as "a good hub links many good authorities; a good authority is linked from many good hubs". As is well known, an equivalent formulation in linear algebra terms is (see also Fig.
3):

  h = A·a  and  a = Aᵀ·h    (7)

(where h is the hubness vector, with the hub values for all the nodes; a is the authority vector; A is the adjacency matrix of the graph; and Aᵀ its transpose). Usually, A contains 0s and 1s only, corresponding to presence and absence of unweighted directed arcs, but Eq. (7) can be immediately generalized to (in fact, it is already valid for) A containing any real value, i.e., to weighted graphs. Therefore we can have a generalized version (or rather a generalized interpretation, since the formulation is still the original one) of hubness and authority for all nodes in a graph. An intuitive formulation of this generalized HITS is still available, although slightly more complex: a good hub links, by means of arcs having high weights, many good authorities; a good authority is linked, by means of arcs having high weights, from many good hubs. Since arc weights can be, in general, negative, hub and authority values can be negative, and one could speak of unhubness and unauthority; the intuitive formulation could be completed by adding that a good hub links good unauthorities by means of links with highly negative weights; a good authority is linked by good unhubs by means of links with highly negative weights. And, also, a good unhub links positively good unauthorities and negatively good authorities; a good unauthority is linked positively from good unhubs and negatively from good hubs. Let us now apply generalized HITS to our Systems-Topics graph. We compute a(s), h(s), a(t), and h(t). Intuitively, we expect that a(s) is somehow similar to Inlinks, so it should be a measure of either system effectiveness or topic ease. Similarly, hubness should be more similar to Outlinks, thus less meaningful, although the interplay between hub and authority might lead to the discovery of something different. Let us start by remarking that authority of topics and hubness of systems depend only on each other; similarly, hubness of topics and
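The matrix form of Eq. (7) applies verbatim to weighted (even negative) adjacency matrices, so the generalized indicators can be computed by ordinary power iteration. A minimal sketch follows; it is our own illustration, and the unit-norm scaling is one of several reasonable choices:

```python
import numpy as np

def weighted_hits(A, iters=100):
    """Hub and authority vectors of a real-valued adjacency matrix A.

    With negative weights, entries of h and a may come out negative,
    matching the "unhubness"/"unauthority" reading in the text.
    """
    h = np.ones(A.shape[0])
    a = np.ones(A.shape[0])
    for _ in range(iters):
        a = A.T @ h              # a(x) = weighted sum of h(y) over arcs y -> x
        h = A @ a                # h(x) = weighted sum of a(y) over arcs x -> y
        a /= np.linalg.norm(a)   # scale to unit length; signs are preserved
        h /= np.linalg.norm(h)
    return h, a
```

Run on the bipartite adjacency matrix of Fig. 1, this yields a(s), h(s), a(t) and h(t) in one pass; as noted in the text, the two directions decouple, so running it on the APM arcs or the APA arcs alone gives the corresponding pair of indicators.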
authority of systems depend only on each other: see Figs. 2c, 2d and 3. Thus the two graphs in Figs. 2c and 2d can be analyzed independently. In fact the entire HITS analysis could be done in one direction only, with just APM(s, t) values or alternatively with just APA(s, t). As discussed below, probably most interest resides in the hubness of topics and the authority of systems, so the latter makes sense. However, in this paper, we pursue both analyses together, because the symmetry itself is interesting. Considering Fig. 2c we can state that:

• Authority a(t) of a topic node t increases when:
  - if h(si) > 0, APM(si, t) increases (or if APM(si, t) > 0, h(si) increases);
  - if h(si) < 0, APM(si, t) decreases (or if APM(si, t) < 0, h(si) decreases).

• Hubness h(s) of a system node s increases when:
  - if a(tj) > 0, APM(s, tj) increases (or, if APM(s, tj) > 0, a(tj) increases);
  - if a(tj) < 0, APM(s, tj) decreases (or, if APM(s, tj) < 0, a(tj) decreases).

We can summarize this as: a(t) is high if APM(s, t) is high for those systems with high h(s); h(s) is high if APM(s, t) is high for those topics with high a(t). Intuitively, authority a(t) of a topic measures topic ease; hubness h(s) of a system measures the system's capability to recognize easy topics. A system with high unhubness (negative hubness) would tend to regard easy topics as hard and hard ones as easy. The situation for Fig.
2d, i.e., for a(s) and h(t), is analogous. Authority a(s) of a system node s measures system effectiveness: it increases with the weight on the arc (i.e., APA(s, tj)) and the hubness of the incoming topic nodes tj. Hubness h(t) of a topic node t measures topic capability to recognize effective systems: if h(t) > 0, it increases further if APA(s, tj) increases; if h(t) < 0, it increases if APA(s, tj) decreases. Intuitively, we can state that "a system has a higher authority if it is more effective on topics with high hubness" and "a topic has a higher hubness if it is easier for those systems which are more effective in general". Conversely for system hubness and topic authority: "a topic has a higher authority if it is easier on systems with high hubness" and "a system has a higher hubness if it is more effective for those topics which are easier in general". Therefore, for each system we have two indicators: authority (a(s)), measuring system effectiveness, and hubness (h(s)), measuring system capability to estimate topic ease. And for each topic, we have two indicators: authority (a(t)), measuring topic ease, and hubness (h(t)), measuring topic capability to estimate system effectiveness. We can define them formally as

  a(s) = Σ_t h(t) · APA(s, t),    h(t) = Σ_s a(s) · APA(s, t),
  a(t) = Σ_s h(s) · APM(s, t),    h(s) = Σ_t a(t) · APM(s, t).

We observe that the hubness of topics may be of particular interest for evaluation studies. It may be that we can evaluate the effectiveness of systems efficiently by using relatively few high-hubness topics.

5. EXPERIMENTS

We now turn to discuss whether these indicators are meaningful and useful in practice, and how they correlate with standard measures used in TREC. We have built the Systems-Topics graph for TREC 8 data (featuring 128 systems[2] - actually, runs - on 50 topics). This section illustrates the results obtained mining these data according to the method presented in previous sections.

[2] Actually, TREC 8 data features 129 systems; due to some bug in our scripts, we did not include one system (8manexT3D1N0), but the results should not be affected.

Figure 4: Distributions of AP, APA, and APM values in TREC 8 data.

Fig. 4 shows the distributions of AP, APA, and APM: whereas AP is very skewed, both APA and APM are much more symmetric (as it should be, since they are constructed by subtracting the mean).

Table 3: Correlations between network analysis measures and MAP (a) and AAP (b).

  (a)           MAP   In    PR    H     A
    MAP         1     1.0   1.0   .80   .99
    Inlinks           1     1.0   .80   .99
    PageRank                1     .80   .99
    Hub                           1     .87

  (b)           AAP   In    PR    H     A
    AAP         1     1.0   1.0   .92   1.0
    Inlinks           1     1.0   .92   1.0
    PageRank                1     .92   1.0
    Hub                           1     .93

Tables 3a and 3b show the Pearson correlation values between Inlinks, PageRank, Hub, Authority and, respectively, MAP or AAP (Outlinks values are not shown since they are always zero, as seen in Sect. 4). As expected, Inlinks and PageRank have a perfect correlation with MAP and AAP. Authority has a very high correlation too with MAP and AAP; Hub assumes slightly lower values. Let us analyze the correlations in more detail. The correlation charts in Figs. 5a and 5b demonstrate the high correlation between Authority and MAP or AAP. Hubness presents interesting phenomena: both Fig. 5c (correlation with MAP) and Fig.
5d (correlation with AAP) show that correlation is not exact, but neither is it random. This, given the meaning of hubness (capability in estimating topic ease and system effectiveness), means two things: (i) more effective systems are better at estimating topic ease; and (ii) easier topics are better at estimating system effectiveness. Whereas the first statement is fine (there is nothing against it), the second is a bit worrying. It means that system effectiveness in TREC is affected more by easy topics than by difficult topics, which is rather undesirable for quite obvious reasons: a system capable of performing well on a difficult topic, i.e., on a topic on which the other systems perform badly, would be an important result for IR effectiveness; conversely, a system capable of performing well on easy topics is just a confirmation of the state of the art.

Figure 5: Correlations: MAP (x axis) and authority (y axis) of systems (a); AAP and authority of topics (b); MAP and hub of systems (c); and AAP and hub of topics (d).

Indeed, the correlation between hubness and AAP (statement (i) above) is higher than the correlation between hubness and MAP (corresponding to statement (ii)): 0.92 vs.
0.80. However, this phenomenon is quite strong. This is also confirmed by the work being done on the TREC Robust Track [14]. In this respect, it is interesting to see what happens if we use a different measure from MAP (and AAP). The GMAP (Geometric MAP) metric is defined as the geometric mean of AP values, or equivalently as the arithmetic mean of the logarithms of AP values [8]. GMAP has the property of giving more weight to the low end of the AP scale (i.e., to low AP values), and this seems reasonable, since, intuitively, a performance increase in MAP values from 0.01 to 0.02 should be more important than an increase from 0.81 to 0.82. To use GMAP in place of MAP and AAP, we only need to take the logarithms of the initial AP values, i.e., those in Tab. 1a (zero values are modified into ε = 0.00001). We then repeat the same normalization process (with GMAP and GAAP - Geometric AAP - replacing MAP and AAP): whereas authority values still perfectly correlate with GMAP (0.99) and GAAP (1.00), the correlation with hubness largely disappears (values are −0.16 and −0.09 - slightly negative but not enough to concern us). This is yet another confirmation that TREC effectiveness as measured by MAP depends mainly on easy topics; GMAP appears to be a more balanced measure. Note that, perhaps surprisingly, GMAP is indeed fairly well balanced, not biased in the opposite direction - that is, it does not overemphasize the difficult topics. In Sect. 6.3 below, we discuss another transformation, replacing the log function used in GMAP with logit. This has a similar effect: the correlations of mean logitAP and average logitAP with hubness are now small positive numbers (0.23 and 0.15 respectively), still comfortably away from the high correlations with regular MAP and AAP, i.e., not presenting the problematic phenomenon (ii) above (over-dependency on easy topics). We also observe that hub values are positive, whereas authority assumes, as predicted, both
positive and negative values. An intuitive justification is that negative hubness would indicate a node which disagrees with the other nodes, e.g., a system which does better on difficult topics, or a topic on which bad systems do better; such systems and topics would be quite strange, and probably do not appear in TREC. Finally, although one might think that topics with several relevant documents are more important and difficult, this is not the case: there is no correlation between hub (or any other indicator) and the number of documents relevant to a topic.

6. DISCUSSION

6.1 Related work

There has been considerable interest in recent years in questions of statistical significance of effectiveness comparisons between systems (e.g. [2, 9]), and related questions of how many topics might be needed to establish differences (e.g. [13]). We regard some results of the present study as in some way complementary to this work, in that we make a step towards answering the question "Which topics are best for establishing differences?". The results on evaluation without relevance judgements such as [10] show that, to some extent, good systems agree on which are the good documents. We have not addressed the question of individual documents in the present analysis, but this effect is certainly analogous to our results.

6.2 Are normalizations necessary?

At this point it is also worthwhile to analyze what would happen without the MAP- and AAP-normalizations defined in Sect. 3.3. Indeed, the process of graph construction (Sect. 3.4) is still valid: both the APM and APA matrices are replaced by the AP one, and then everything goes on as above. Therefore, one might think that the normalizations are not useful in this setting. This is not the case. From the theoretical point of view, the AP-only graph does not present the interesting properties discussed above: since the AP-only graph is symmetrical (the weight on each incoming link is equal to the weight on the
corresponding outgoing link), Inlinks and Outlinks assume the same values. There is symmetry also in computing hub and authority, which assume the same value for each node since the weights on the incoming and outgoing arcs are the same. This could be stated in more precise and formal terms, but one might still wonder if on the overall graph there are some sort of counterbalancing effects. It is therefore easier to look at experimental data, which confirm that the normalizations are needed: the correlations between AP, Inlinks, Outlinks, Hub, and/or Authority are all very close to one (none of them is below 0.98).

6.3 Are these normalizations sufficient?

It might be argued that (in the case of APA, for example) the amount we have subtracted from each AP value is topic-dependent, therefore the range of the resulting APA value is also topic-dependent (e.g. the maximum is 1 − AAP(tj) and the minimum is −AAP(tj)). This suggests that the cross-topic comparisons of these values suggested in Sect. 3.3 may not be reliable. A similar issue arises for APM and comparisons across systems. One possible way to overcome this would be to use an unconstrained measure whose range is the full real line. Note that in applying the method to GMAP by using log AP, we avoid the problem with the lower limit but retain it for the upper limit. One way to achieve an unconstrained range would be to use the logit function rather than the log [4, 8]. We have also run this variant (as already reported in Sect. 5 above), and it appears to provide very similar results to the GMAP results already given. This is not surprising, since in practice the two functions are very similar over most of the operative range. The normalizations thus seem reliable.

6.4 On AAᵀ and AᵀA

It is well known that the h and a vectors are the principal left eigenvectors of AAᵀ and AᵀA, respectively (this can be easily derived from Eq. (7)), and that, in the case of citation graphs, AAᵀ and AᵀA
represent, respectively, bibliographic coupling and co-citations. What is the meaning, if any, of AAᵀ and AᵀA in our Systems-Topics graph? It is easy to derive that:

  AAᵀ[i, j] = 0                       if i ∈ S ∧ j ∈ T or i ∈ T ∧ j ∈ S
  AAᵀ[i, j] = Σ_k A[i, k] · A[j, k]   otherwise

  AᵀA[i, j] = 0                       if i ∈ S ∧ j ∈ T or i ∈ T ∧ j ∈ S
  AᵀA[i, j] = Σ_k A[k, i] · A[k, j]   otherwise

(where S is the set of indices corresponding to systems and T the set of indices corresponding to topics). Thus AAᵀ and AᵀA are block diagonal matrices, with two blocks each, one relative to systems and one relative to topics:

(a) if i, j ∈ S, then AAᵀ[i, j] = Σ_{k∈T} APM(i, k) · APM(j, k) measures how much the two systems i and j agree in estimating topic ease (APM): high values mean that the two systems agree on topic ease.

(b) if i, j ∈ T, then AAᵀ[i, j] = Σ_{k∈S} APA(k, i) · APA(k, j) measures how much the two topics i and j agree in estimating system effectiveness (APA): high values mean that the two topics agree on system effectiveness (and that TREC results would not change by leaving out one of the two topics).

(c) if i, j ∈ S, then AᵀA[i, j] = Σ_{k∈T} APA(i, k) · APA(j, k) measures how much agreement on the effectiveness of two systems i and j there is over all topics: high values mean that many topics quite agree on the two systems' effectiveness; low values single out systems that are somehow controversial, and that need several topics to have a correct effectiveness assessment.

(d) if i, j ∈ T, then AᵀA[i, j] = Σ_{k∈S} APM(k, i) · APM(k, j) measures how much agreement on the ease of the two topics i and j there is over all systems: high values mean that many systems quite agree on the two topics' ease.

Therefore, these matrices are meaningful and somehow interesting. For instance, the submatrix (b) corresponds to a weighted undirected complete graph, whose nodes are the topics and whose
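The block structure of AAᵀ and AᵀA is easy to check numerically. In this sketch (our own, with invented toy values for the normalized tables), the off-diagonal blocks vanish and the diagonal blocks reduce to the four agreement matrices (a)-(d):

```python
import numpy as np

# Invented toy normalized tables for m = 2 systems and n = 3 topics.
# APM rows and APA columns average to zero, as the normalizations require.
APM = np.array([[ 0.1, -0.2,  0.1],
                [-0.1,  0.2, -0.1]])
APA = np.array([[ 0.2,  0.1, -0.3],
                [-0.2, -0.1,  0.3]])
m, n = APM.shape

# Adjacency matrix laid out as in Fig. 1: systems first, then topics.
A = np.zeros((m + n, m + n))
A[:m, m:] = APM
A[m:, :m] = APA.T

AAt = A @ A.T
AtA = A.T @ A

# Diagonal blocks of A·Aᵀ: cases (a) and (b) in the text.
systems_agree_on_ease = AAt[:m, :m]           # equals APM @ APM.T
topics_agree_on_systems = AAt[m:, m:]         # equals APA.T @ APA

# Diagonal blocks of Aᵀ·A: cases (c) and (d) in the text.
systems_agree_on_effectiveness = AtA[:m, :m]  # equals APA @ APA.T
topics_agree_on_ease = AtA[m:, m:]            # equals APM.T @ APM
```

The zero off-diagonal blocks confirm that the two analyses (systems vs. topics) never mix, which is why each agreement matrix can be read as a standalone weighted graph over only systems or only topics.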
arc weights are a measure of how much two topics agree on system effectiveness. Two topics that are very close on this graph give the same information, and therefore one of them could be discarded without changes in TREC results. It would be interesting to cluster the topics on this graph. Furthermore, the matrix / graph (a) could be useful in TREC pool formation: systems that do not agree on topic ease would probably find different relevant documents, and should therefore be complementary in pool formation. Note that no notion of single documents is involved in the above analysis.

6.5 Insights

As indicated, the primary contribution of this paper has been a method of analysis. However, in the course of applying this method to one set of TREC results, we have achieved some insights relating to the hypotheses formulated in Sect. 2:

• We confirm Hypothesis 2 above, that some topics are easier than others.

• Differences in the hubness of systems reveal that some systems are better than others at distinguishing easy and difficult topics; thus we have some confirmation of Hypothesis 3.

• There are some relatively idiosyncratic systems which do badly on some topics generally considered easy but well on some hard topics. However, on the whole, the more effective systems are better at distinguishing easy and difficult topics. This is to be expected: a really bad system will do badly on everything, while even a good system may have difficulty with some topics.

• Differences in the hubness of topics reveal that some topics are better than others at distinguishing more or less effective systems; thus we have some confirmation of Hypothesis 4.

• If we use MAP as the measure of effectiveness, it is also true that the easiest topics are better at distinguishing more or less effective systems. As argued in Sect. 5, this is an undesirable property. GMAP is more balanced.

Clearly these ideas need to be tested on other data sets. However,
they reveal that the method of analysis proposed in this paper can provide valuable information.\n6.6 Selecting topics The confirmation of Hypothesis 4 leads, as indicated, to the idea that we could do reliable system evaluation on a much smaller set of topics, provided we could select such an appropriate set.\nThis selection may not be straightforward, however.\nIt is possible that simply selecting the high hubness topics will achieve this end; however, it is also possible that there are significant interactions between topics which would render such a simple rule ineffective.\nThis investigation would therefore require serious experimentation.\nFor this reason we have not attempted in this paper to point to the specific high hubness topics as being good for evaluation.\nThis is left for future work.\n7.\nCONCLUSIONS AND FUTURE DEVELOPMENTS The contribution of this paper is threefold: \u2022 we propose a novel way of normalizing AP values; \u2022 we propose a novel method to analyse TREC data; \u2022 the method applied on TREC data does indeed reveal some hidden properties.\nMore particularly, we propose Average Average Precision (AAP), a measure of topic ease, and a novel way of normalizing the average precision measure in TREC, on the basis of both MAP (Mean Average Precision) and AAP.\nThe normalized measures (APM and APA) are used to build a bipartite weighted Systems-Topics graph, that is then analyzed by means of network analysis indicators widely known in the (social) network analysis field, but somewhat generalised.\nWe note that no such approach to TREC data analysis has been proposed so far.\nThe analysis shows that, with current measures, a system that wants to be effective in TREC needs to be effective on easy topics.\nAlso, it is suggested that a cluster analysis on topic similarity can lead to relying on a lower number of topics.\nOur method of analysis, as described in this paper, can be applied only a posteriori, i.e., once we have all the topics 
and all the systems available. Adding (removing) a new system / topic would mean re-computing hubness and authority indicators. Moreover, we are not explicitly proposing a change to current TREC methodology, although this could be a by-product of these - and further - analyses.

This is an initial work, and further analyses could be performed. For instance, other effectiveness metrics could be used, in place of AP. Other centrality indicators, widely used in social network analysis, could be computed, although probably with similar results to PageRank. It would be interesting to compute the higher-order eigenvectors of A^T A and AA^T. The same kind of analysis could be performed at the document level, measuring document ease. Hopefully, further analyses of the graph defined in this paper, according to the approach described, can be insightful for a better understanding of TREC or similar data.

Acknowledgments

We would like to thank Nick Craswell for insightful discussions and the anonymous referees for useful remarks. Part of this research has been carried out while the first author was visiting Microsoft Research Cambridge, whose financial support is acknowledged.

8. REFERENCES

[1] M. Agosti, M. Bacchin, N. Ferro, and M. Melucci. Improving the automatic retrieval of text documents. In Proceedings of the 3rd CLEF Workshop, volume 2785 of LNCS, pages 279-290, 2003.
[2] C. Buckley and E. Voorhees. Evaluating evaluation measure stability. In 23rd SIGIR, pages 33-40, 2000.
[3] S. Chakrabarti. Mining the Web. Morgan Kaufmann, 2003.
[4] G. V. Cormack and T. R. Lynam. Statistical precision of information retrieval evaluation. In 29th SIGIR, pages 533-540, 2006.
[5] J. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632, 1999.
[6] M. Levene. An Introduction to Search Engines and Web Navigation. Addison Wesley, 2006.
[7] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank Citation Ranking: Bringing Order to the Web, 1998. http://dbpubs.stanford.edu:8090/pub/1999-66.
[8] S. Robertson. On GMAP - and other transformations. In 13th CIKM, pages 78-83, 2006.
[9] M. Sanderson and J. Zobel. Information retrieval system evaluation: effort, sensitivity, and reliability. In 28th SIGIR, pages 162-169, 2005. http://doi.acm.org/10.1145/1076034.1076064.
[10] I. Soboroff, C. Nicholas, and P. Cahan. Ranking retrieval systems without relevance judgments. In 24th SIGIR, pages 66-73, 2001.
[11] TREC Common Evaluation Measures, 2005. http://trec.nist.gov/pubs/trec14/appendices/CE.MEASURES05.pdf (Last visit: Jan. 2007).
[12] Text REtrieval Conference (TREC). http://trec.nist.gov/ (Last visit: Jan. 2007).
[13] E. Voorhees and C. Buckley. The effect of topic set size on retrieval experiment error. In 25th SIGIR, pages 316-323, 2002.
[14] E. M. Voorhees. Overview of the TREC 2005 Robust Retrieval Track. In TREC 2005 Proceedings, 2005.
[15] E. M. Voorhees and D. K. Harman. TREC: Experiment and Evaluation in Information Retrieval. MIT Press, 2005.
[16] S. Wasserman and K.
Faust. Social Network Analysis. Cambridge University Press, Cambridge, UK, 1994.

SIGIR 2007 Proceedings Session 20: Link Analysis

HITS Hits TREC -- Exploring IR Evaluation Results with Network Analysis

ABSTRACT

We propose a novel method of analysing data gathered from TREC or similar information retrieval evaluation experiments. We define two normalized versions of average precision, that we use to construct a weighted bipartite graph of TREC systems and topics. We analyze the meaning of well known -- and somewhat generalized -- indicators from social network analysis on the Systems-Topics graph. We apply this method to an analysis of TREC 8 data; among the results, we find that authority measures systems performance, that hubness of topics reveals that some topics are better than others at distinguishing more or less effective systems, that
with current measures a system that wants to be effective in TREC needs to be effective on easy topics, and that by using different effectiveness measures this is no longer the case.

1. INTRODUCTION

Evaluation is a primary concern in the Information Retrieval (IR) field. TREC (Text REtrieval Conference) [12, 15] is an annual benchmarking exercise that has become a de facto standard in IR evaluation: before the actual conference, TREC provides to participants a collection of documents and a set of topics (representations of information needs). Participants use their systems to retrieve, and submit to TREC, a list of documents for each topic. After the lists have been submitted and pooled, the TREC organizers employ human assessors to provide relevance judgements on the pooled set. This defines a set of relevant documents for each topic. System effectiveness is then measured by well established metrics (Mean Average Precision being the most used). Other conferences such as NTCIR, INEX, CLEF provide comparable data.

Network analysis is a discipline that studies features and properties of (usually large) networks, or graphs. Of particular importance is Social Network Analysis [16], that studies networks made up by links among humans (friendship, acquaintance, co-authorship, bibliographic citation, etc.). Network analysis and IR fruitfully meet in Web Search Engine implementation, as is already described in textbooks [3, 6]. Current search engines use link analysis techniques to help rank the retrieved documents. Some indicators (and the corresponding algorithms that compute them) have been found useful in this respect, and are nowadays well known: Inlinks (the number of links to a Web page), PageRank [7], and HITS (Hyperlink-Induced Topic Search) [5]. Several extensions to these algorithms have been and are being proposed. These indicators and algorithms might be quite general in nature, and can be used for applications which are very different from
search result ranking. One example is using HITS for stemming, as described by Agosti et al. [1].

In this paper, we propose and demonstrate a method for constructing a network, specifically a weighted complete bidirectional directed bipartite graph, on a set of TREC topics and participating systems. Links represent effectiveness measurements on system-topic pairs. We then apply analysis methods originally developed for search applications to the resulting network. This reveals phenomena previously hidden in TREC data. In passing, we also provide a small generalization to Kleinberg's HITS algorithm, as well as to Inlinks and PageRank.

The paper is organized as follows: Sect. 2 gives some motivations for the work. Sect. 3 presents the basic ideas of normalizing average precision and of constructing a Systems-Topics graph, whose properties are analyzed in Sect. 4; Sect. 5 presents some experiments on TREC 8 data; Sect. 6 discusses some issues and Sect. 7 closes the paper.

2. MOTIVATIONS

Table 1: AP, MAP and AAP

We consider four hypotheses:

1. Some systems are more effective than others;
2. Some topics are easier than others;
3. Some systems are better than others at distinguishing easy and difficult topics;
4. Some topics are better than others at distinguishing more or less effective systems.

The first of these hypotheses needs no further justification -- every reported significant difference between any two systems supports it. There is now also quite a lot of evidence for the second, centered on the TREC Robust Track [14]. Our primary interest is in the third and fourth. The third might be regarded as being of purely academic interest; however, the fourth has the potential for being of major practical importance in evaluation studies. If we could identify a relatively small number of topics which were really good at distinguishing effective and ineffective systems, we could save considerable effort in evaluating systems.

One possible direction from this point would be to attempt direct identification of such small
sets of topics. However, in the present paper, we seek instead to explore the relationships suggested by the hypotheses, between what different topics tell us about systems and what different systems tell us about topics. We seek methods of building and analysing a matrix of system-topic normalised performances, with a view to giving insight into the issue and confirming or refuting the third and fourth hypotheses. It turns out that the obvious symmetry implied by the above formulation of the hypotheses is a property worth investigating, and the investigation does indeed give us valuable insights.

3. THE IDEA

3.1 1st step: average precision table

From TREC results, one can produce an Average Precision (AP) table (see Tab. 1a): each AP(si, tj) value measures the AP of system si on topic tj. Besides AP values, the table shows Mean Average Precision (MAP) values, i.e., the mean of the AP values for a single system over all topics, and what we call Average Average Precision (AAP) values, i.e., the average of the AP values for a single topic over all systems:

MAP(si) = (1/|T|) Σj AP(si, tj),   (1)
AAP(tj) = (1/|S|) Σi AP(si, tj),   (2)

where S is the set of systems and T the set of topics. MAPs are indicators of system performance: higher MAP means good system. AAPs are indicators of the performance on a topic: higher AAP means easy topic -- a topic on which all or most systems have good performance.

3.2 Critique of pure AP

MAP is a standard, well known, and widely used IR effectiveness measure. Single AP values are used too (e.g., in AP histograms). Topic difficulty is often discussed (e.g., in the TREC Robust track [14]), although AAP values are not used and, to the best of our knowledge, have never been proposed (the median, not the average, of AP on a topic is used to produce TREC AP histograms [11]). However, the AP values in Tab. 1 present two limitations, which are symmetric in some respect:

• Problem 1. They are not reliable for comparing the effectiveness of a system on different topics, relative to the other systems. If, for example, AP(s1, t1) > AP(s1, t2), can we infer
that s1 is a good system (i.e., has a good performance) on t1 and a bad system on t2? The answer is no: t1 might be an easy topic (with high AAP) and t2 a difficult one (low AAP). See an example in Tab. 1b: s1 is outperformed (on average) by the other systems on t1, and it outperforms the other systems on t2.

• Problem 2. Conversely, if, for example, AP(s1, t1) > AP(s2, t1), can we infer that t1 is considered easier by s1 than by s2? No, we cannot: s1 might be a good system (with high MAP) and s2 a bad one (low MAP); see an example in Tab. 1b.

These two problems are a sort of breakdown of the well known high influence of topics on IR evaluation; again, our formulation makes explicit the topics / systems symmetry.

3.3 2nd step: normalizations

To avoid these two problems, we can normalize the AP table in two ways. The first normalization removes the influence of the single topic ease on system performance. Each AP(si, tj) value in the table depends on both system goodness and topic ease (the value will increase if a system is good and/or the topic is easy). By subtracting from each AP(si, tj) the AAP(tj) value, we obtain "normalized" AP values (APA(si, tj), Normalized AP according to AAP):

APA(si, tj) = AP(si, tj) - AAP(tj),   (3)

that depend on system performance only (the value will increase only if system performance is good). See Tab. 2a. The second normalization removes the influence of the single system effectiveness on topic ease: by subtracting from each AP(si, tj) the MAP(si) value, we obtain "normalized" AP values (APM(si, tj), Normalized AP according to MAP):

APM(si, tj) = AP(si, tj) - MAP(si),   (4)

that depend on topic ease only (the value will increase only if the topic is easy, i.e., all systems perform well on that topic). See Tab. 2b.

In other words, APA avoids Problem 1: APA(s, t) values measure the performance of system s on topic t normalized

Table 2: Normalizations: normalized AP (APA) and MAP (MAP-bar) (a); normalized AP (APM) and AAP (AAP-bar) (b); a numeric example (c) and (d)
according to the ease of the topic (easy topics will not have higher APA values). Now, if, for example, APA(s1, t2) > APA(s1, t1), we can infer that s1 is a good system on t2 and a bad system on t1 (see Tab. 2c). Vice versa, APM avoids Problem 2: APM(s, t) values measure the ease of topic t according to system s, normalized according to goodness of the system (good systems will not lead to higher APM values). If, for example, APM(s2, t1) > APM(s1, t1), we can infer that t1 is considered easier by s2 than by s1 (see Tab. 2d).

On the basis of Tables 2a and 2b, we can also define two new measures of system effectiveness and topic ease, i.e., a Normalized MAP (MAP-bar), obtained by averaging the APA values on one row in Tab. 2a, and a Normalized AAP (AAP-bar), obtained by averaging the APM values on one column in Tab. 2b:

MAP-bar(si) = (1/|T|) Σj APA(si, tj),   (5)
AAP-bar(tj) = (1/|S|) Σi APM(si, tj).   (6)

Thus, overall system performance can be measured, besides by means of MAP, also by means of MAP-bar. Moreover, MAP-bar is equivalent to MAP, as can be immediately proved by using Eqs. (5), (3), and (1):

MAP-bar(si) = (1/|T|) Σj (AP(si, tj) - AAP(tj)) = MAP(si) - (1/|T|) Σj AAP(tj),

i.e., MAP-bar differs from MAP only by a quantity that is the same for all systems; conversely, overall topic ease can be measured, besides by means of AAP, also by means of AAP-bar, and this is equivalent (the proof is analogous, and relies on Eqs. (6), (4), and (2)).

Figure 1: Construction of the adjacency matrix. APA^T is the transpose of APA.

The two Tables 2a and 2b are interesting per se, and can be analyzed in several different ways. In the following we propose an analysis based on network analysis techniques, mainly Kleinberg's HITS algorithm. There is a little further discussion of these normalizations in Sect. 6.

3.4 3rd step: Systems-Topics Graph

The two tables 2a and 2b can be merged into a single one with the procedure shown in Fig.
1. The obtained matrix can be interpreted as the adjacency matrix of a complete weighted bipartite graph, that we call Systems-Topics graph. Arcs and weights in the graph can be interpreted as follows:

• (weight on) arc s → t: how much the system s "thinks" that the topic t is easy -- assuming that a system has no knowledge of the other systems (or in other words, how easy we might think the topic is, knowing only the results for this one system). This corresponds to APM values, i.e., to normalized topic ease (Fig. 2a).
• (weight on) arc s ← t: how much the topic t "thinks" that the system s is good -- assuming that a topic has no knowledge of the other topics (or in other words, how good we might think the system is, knowing only the results for this one topic). This corresponds to APA (normalized system effectiveness, Fig. 2b).

Figs. 2c and 2d show the Systems-Topics complete weighted bipartite graph, on a toy example with 4 systems and 2 topics; the graph is split in two parts to have an understandable graphical representation: arcs in Fig. 2c are labeled with APM values; arcs in Fig. 2d are labeled with APA values.

4. ANALYSIS OF THE GRAPH

4.1 Weighted Inlinks, Outlinks, PageRank

The sum of weighted outlinks, i.e., the sum of the weights on the outgoing arcs from each node, is always zero:

• The outlinks on each node corresponding to a system s (Fig.
2c) is the sum of all the corresponding APM values on one row of the matrix in Tab. 2b.
• The outlinks on each node corresponding to a topic t is the sum of all the corresponding APA values on one row of the transpose of the matrix in Tab. 2a.

Figure 2: The relationships between systems and topics (a) and (b); and the Systems-Topics graph for a toy example (c) and (d). Dashed arcs correspond to negative values.

Figure 3: Hub and Authority computation

The average(1) of weighted inlinks is:

• MAP for each node corresponding to a system s; this corresponds to the average of all the corresponding APA values on one column of the APA part of the adjacency matrix (see Fig. 1).
• AAP for each node corresponding to a topic t; this corresponds to the average of all the corresponding APM values on one column of the APM part of the adjacency matrix (see Fig. 1).

Therefore, weighted inlinks measure either system effectiveness or topic ease; weighted outlinks are not meaningful. We could also apply the PageRank algorithm to the network; the meaning of the PageRank of a node is not quite so obvious as Inlinks and Outlinks, but it also seems a sensible measure of either system effectiveness or topic ease: if a system is effective, it will have several incoming links with high weights (APA); if a topic is easy it will have high weights (APM) on the incoming links too. We will see experimental confirmation in the following.

(1) Usually, the sum of the weights on the incoming arcs to each node is used in place of the average; since the graph is complete, it makes no difference.

4.2 Hubs and Authorities

Let us now turn to more sophisticated indicators. Kleinberg's HITS algorithm defines, for a directed graph, two indicators: hubness and authority; we reiterate here some of the basic details of the HITS algorithm in order to emphasize both the nature of our generalization and the interpretation of the HITS concepts in this context. Usually, hubness and authority are defined as

h(x) = Σ_{x→y} a(y) and a(x) = Σ_{y→x} h(y),

and described intuitively as "a good hub links many good authorities; a good authority is linked from many good hubs". As it is well known, an equivalent formulation in linear algebra terms is (see also Fig. 3):

h = A a and a = A^T h   (7)

(where h is the hubness vector, with the hub values for all the nodes; a is the authority vector; A is the adjacency matrix of the graph; and A^T its transpose). Usually, A contains 0s and 1s only, corresponding to presence and absence of unweighted directed arcs, but Eq. (7) can be immediately generalized to (in fact, it is already valid for) A containing any real value, i.e., to weighted graphs. Therefore we can have a "generalized version" (or rather a generalized interpretation, since the formulation is still the original one) of hubness and authority for all nodes in a graph. An intuitive formulation of this generalized HITS is still available, although slightly more complex: "a good hub links, by means of arcs having high weights, many good authorities; a good authority is linked, by means of arcs having high weights, from many good hubs". Since arc weights can be, in general, negative, hub and authority values can be negative, and one could speak of unhubness and unauthority; the intuitive formulation could be completed by adding that "a good hub links good unauthorities by means of links with highly negative weights; a good authority is linked by good unhubs by means of links with highly negative weights". And, also, "a good unhub links positively good unauthorities and negatively good authorities; a good unauthority is linked positively from good unhubs and negatively from good hubs".

Let us now apply generalized HITS to our Systems-Topics graph. We compute a(s), h(s), a(t), and h(t). Intuitively, we expect that a(s) is somehow similar to Inlinks, so it should be a measure of either system effectiveness or topic ease. Similarly, hubness should be more similar to Outlinks, thus
less meaningful, although the interplay between hub and authority might lead to the discovery of something different. Let us start by remarking that authority of topics and hubness of systems depend only on each other; similarly, hubness of topics and authority of systems depend only on each other: see Figs. 2c, 2d and 3. Thus the two graphs in Figs. 2c and 2d can be analyzed independently. In fact the entire HITS analysis could be done in one direction only, with just APM(s, t) values or alternatively with just APA(s, t). As discussed below, probably most interest resides in the hubness of topics and the authority of systems, so the latter makes sense. However, in this paper, we pursue both analyses together, because the symmetry itself is interesting.

Considering Fig. 2c we can state that:

a(t) = Σs APM(s, t) · h(s) and h(s) = Σt APM(s, t) · a(t).

We can summarize this as: a(t) is high if APM(s, t) is high for those systems with high h(s); h(s) is high if APM(s, t) is high for those topics with high a(t). Intuitively, authority a(t) of a topic measures topic ease; hubness h(s) of a system measures the system's "capability" to recognize easy topics. A system with high unhubness (negative hubness) would tend to regard easy topics as hard and hard ones as easy.

The situation for Fig.
2d, i.e., for a(s) and h(t), is analogous. Authority a(s) of a system node s measures system effectiveness: it increases with the weight on the arc (i.e., APA(s, tj)) and the hubness of the incoming topic nodes tj. Hubness h(t) of a topic node t measures topic capability to recognize effective systems: if h(t) > 0, it increases further if APA(s, tj) increases; if h(t) < 0, it increases if APA(s, tj) decreases. Intuitively, we can state that "A system has a higher authority if it is more effective on topics with high hubness"; and "A topic has a higher hubness if it is easier for those systems which are more effective in general". Conversely for system hubness and topic authority: "A topic has a higher authority if it is easier on systems with high hubness"; and "A system has a higher hubness if it is more effective for those topics which are easier in general".

Therefore, for each system we have two indicators: authority (a(s)), measuring system effectiveness, and hubness (h(s)), measuring system capability to estimate topic ease. And for each topic, we have two indicators: authority (a(t)), measuring topic ease, and hubness (h(t)), measuring topic capability to estimate system effectiveness. We can define them formally as

a(s) = Σt APA(s, t) · h(t) and h(t) = Σs APA(s, t) · a(s).

We observe that the hubness of topics may be of particular interest for evaluation studies. It may be that we can evaluate the effectiveness of systems efficiently by using relatively few high-hubness topics.

5. EXPERIMENTS

We now turn to discuss if these indicators are meaningful and useful in practice, and how they correlate with standard measures used in TREC. We have built the Systems-Topics graph for TREC 8 data (featuring 128 systems(2) -- actually, runs -- on 50 topics).

(2) Actually, TREC 8 data features 129 systems; due to some bug in our scripts, we did not include one system (8manexT3D1N0), but the results should not be affected.

Figure 4: Distributions of AP, APA, and APM values in TREC 8 data

Table 3: Correlations between network analysis measures and MAP (a) and AAP (b)

This section illustrates the results obtained mining these data according to the method presented in previous sections. Fig. 4 shows the distributions of AP, APA, and APM: whereas AP is very skewed, both APA and APM are much more symmetric (as it should be, since they are constructed by subtracting the mean). Tables 3a and 3b show the Pearson's correlation values between Inlinks, PageRank, Hub, Authority and, respectively, MAP or AAP (Outlinks values are not shown since they are always zero, as seen in Sect. 4). As expected, Inlinks and PageRank have a perfect correlation with MAP and AAP. Authority has a very high correlation too with MAP and AAP; Hub assumes slightly lower values.

Let us analyze the correlations more in detail. The correlation charts in Figs. 5a and 5b demonstrate the high correlation between Authority and MAP or AAP. Hubness presents interesting phenomena: both Fig. 5c (correlation with MAP) and Fig.
5d (correlation with AAP) show that the correlation is not exact, but neither is it random. This, given the meaning of hubness (capability in estimating topic ease and system effectiveness), means two things: (i) more effective systems are better at estimating topic ease; and (ii) easier topics are better at estimating system effectiveness. Whereas the first statement is fine (there is nothing against it), the second is a bit worrying. It means that system effectiveness in TREC is affected more by easy topics than by difficult topics, which is rather undesirable for quite obvious reasons: a system capable of performing well on a difficult topic, i.e., on a topic on which the other systems perform badly, would be an important result for IR effectiveness; conversely, a system capable of performing well on easy topics is just a confirmation of the state of the art.

Figure 5: Correlations: MAP (x axis) and authority (y axis) of systems (a); AAP and authority of topics (b); MAP and hub of systems (c); and AAP and hub of topics (d)

Indeed, the correlation between hubness and AAP (statement (i) above) is higher than the correlation between hubness and MAP (corresponding to statement (ii)): 0.92 vs. 0.80. However, this phenomenon is quite strong. This is also confirmed by the work being done on the TREC Robust Track [14]. In this respect, it is interesting to see what happens if we use a different measure from MAP (and AAP). The GMAP (Geometric MAP) metric is defined as the geometric mean of AP values, or equivalently as the arithmetic mean of the logarithms of AP values [8]. GMAP has the property of giving more weight to the low end of the AP scale (i.e., to low AP values), and this seems reasonable, since, intuitively, a performance increase in MAP values from 0.01 to 0.02 should be more important than an increase from 0.81 to 0.82. To use GMAP in place of MAP and AAP, we only need to take the logarithms of the initial AP values, i.e., those in Tab. 1a (zero values are modified into ε = 0.00001). We then repeat the same normalization process (with GMAP and GAAP -- Geometric AAP -- replacing MAP and AAP): whereas authority values still perfectly correlate with GMAP (0.99) and GAAP (1.00), the correlation with hubness largely disappears (values are -0.16 and -0.09 -- slightly negative but not enough to concern us). This is yet another confirmation that TREC effectiveness as measured by MAP depends mainly on easy topics; GMAP appears to be a more balanced measure. Note that, perhaps surprisingly, GMAP is indeed fairly well balanced, not biased in the opposite direction -- that is, it does not overemphasize the difficult topics.

In Sect. 6.3 below, we discuss another transformation, replacing the log function used in GMAP with logit. This has a similar effect: the correlations of mean logitAP and average logitAP with hubness are now small positive numbers (0.23 and 0.15 respectively), still comfortably away from the high correlations with regular MAP and AAP, i.e., not presenting the problematic phenomenon (ii) above (over-dependency on easy topics). We also observe that hub values are positive, whereas authority assumes, as predicted, both
positive and negative values.\nAn intuitive justification is that negative hubness would indicate a node that disagrees with the other nodes, e.g., a system which does better on difficult topics, or a topic on which bad systems do better; such systems and topics would be quite strange, and probably do not appear in TREC.\nFinally, although one might think that topics with several relevant documents are more important and more difficult, this is not the case: there is no correlation between hub values (or any other indicator) and the number of documents relevant to a topic.\n6.\nDISCUSSION 6.1 Related work\nThere has been considerable interest in recent years in questions of the statistical significance of effectiveness comparisons between systems (e.g. [2,9]), and in the related question of how many topics might be needed to establish differences (e.g. [13]).\nWe regard some results of the present study as complementary to this work, in that they make a step towards answering the question \"Which topics are best for establishing differences?\"\nThe results on evaluation without relevance judgements, such as [10], show that, to some extent, good systems agree on which are the good documents.\nWe have not addressed the question of individual documents in the present analysis, but this effect is certainly analogous to our results.\n6.2 Are normalizations necessary?\nAt this point it is also worthwhile to analyze what would happen without the MAP- and AAP-normalizations defined in Sect.\n3.3.\nIndeed, the graph construction process (Sect.\n3.4) is still valid: both the APM and APA matrices are replaced by the AP matrix, and then everything proceeds as above.\nTherefore, one might think that the normalizations are not useful in this setting.\nThis is not the case.\nFrom the theoretical point of view, the AP-only graph does not present the interesting properties discussed above: since the AP-only graph is symmetric (the weight on each incoming link is equal to the weight on the
corresponding outgoing link), Inlinks and Outlinks assume the same values.\nThere is symmetry also in computing hub and authority, which assume the same value for each node, since the weights on the incoming and outgoing arcs are the same.\nThis could be stated in more precise and formal terms, but one might still wonder whether, on the overall graph, there is some sort of counterbalancing effect.\nIt is therefore easier to look at experimental data, which confirm that the normalizations are needed: the correlations between AP, Inlinks, Outlinks, Hub, and/or Authority are all very close to one (none of them is below 0.98).\n6.3 Are these normalizations sufficient?\nIt might be argued that (in the case of APA, for example) the amount we have subtracted from each AP value is topic-dependent, and therefore the range of the resulting APA value is also topic-dependent (e.g. the maximum is 1 − AAP(tj) and the minimum is − AAP(tj)).\nThis suggests that the cross-topic comparisons of these values suggested in Sect.\n3.3 may not be reliable.\nA similar issue arises for APM and comparisons across systems.\nOne possible way to overcome this would be to use an unconstrained measure whose range is the full real line.\nNote that in applying the method to GMAP by using log AP, we avoid the problem with the lower limit but retain it for the upper limit.\nOne way to achieve an unconstrained range would be to use the logit function rather than the log [4,8].\nWe have also run this variant (as already reported in Sect.\n5 above), and it appears to provide very similar results to the GMAP results already given.\nThis is not surprising, since in practice the two functions are very similar over most of the operative range.\nThe normalizations thus seem reliable.\n6.4 On AAT and ATA\nIt is well known that the h and a vectors are the principal eigenvectors of AAT and ATA, respectively (this can be easily derived from Eqs.\n(7)), and that, in the case of citation graphs, AAT and ATA
represent, respectively, bibliographic coupling and co-citations.\nWhat is the meaning, if any, of AAT and ATA in our Systems-Topics graph?\nLetting S be the set of indices corresponding to systems and T the set of indices corresponding to topics, it is easy to derive that AAT and ATA are block diagonal matrices, with two blocks each, one relative to systems and one relative to topics: (a) if i, j ∈ S, then AAT[i, j] = Σ_{k∈T} APM(i, k) · APM(j, k) measures how much the two systems i and j agree in estimating topic ease (APM): high values mean that the two systems agree on topic ease.\n(b) if i, j ∈ T, then AAT[i, j] = Σ_{k∈S} APA(k, i) · APA(k, j) measures how much the two topics i and j agree in estimating system effectiveness (APA): high values mean that the two topics agree on system effectiveness (and that TREC results would not change by leaving out one of the two topics).\n(c) if i, j ∈ S, then ATA[i, j] = Σ_{k∈T} APA(i, k) · APA(j, k) measures how much agreement on the effectiveness of the two systems i and j there is over all topics: high values mean that many topics largely agree on the two systems' effectiveness; low values single out systems that are somehow controversial, and that need several topics for a correct effectiveness assessment.\n(d) if i, j ∈ T, then ATA[i, j] = Σ_{k∈S} APM(k, i) · APM(k, j) measures how much agreement on the ease of the two topics i and j there is over all systems: high values mean that many systems largely agree on the ease of the two topics.\nTherefore, these matrices are meaningful and interesting in their own right.\nFor instance, the submatrix (b) corresponds to a weighted undirected complete graph, whose nodes are the topics and whose arc weights are a measure of how much two topics agree on system effectiveness.\nTwo topics that are very close on this graph give the same information, and therefore one of them could be discarded without changes in TREC
results.\nIt would be interesting to cluster the topics on this graph.\nFurthermore, the matrix/graph (a) could be useful in TREC pool formation: systems that do not agree on topic ease would probably find different relevant documents, and should therefore be complementary in pool formation.\nNote that no notion of single documents is involved in the above analysis.\n6.5 Insights\nAs indicated, the primary contribution of this paper is a method of analysis.\nHowever, in the course of applying this method to one set of TREC results, we have achieved some insights relating to the hypotheses formulated in Sect.\n2:\n• We confirm Hypothesis 2 above, that some topics are easier than others.\n• Differences in the hubness of systems reveal that some systems are better than others at distinguishing easy and difficult topics; thus we have some confirmation of Hypothesis 3.\n• There are some relatively idiosyncratic systems which do badly on some topics generally considered easy but well on some hard topics.\nHowever, on the whole, the more effective systems are better at distinguishing easy and difficult topics.\nThis is to be expected: a really bad system will do badly on everything, while even a good system may have difficulty with some topics.\n• Differences in the hubness of topics reveal that some topics are better than others at distinguishing more or less effective systems; thus we have some confirmation of Hypothesis 4.\n• If we use MAP as the measure of effectiveness, it is also true that the easiest topics are better at distinguishing more or less effective systems.\nAs argued in Sect.\n5, this is an undesirable property.\nGMAP is more balanced.\nSIGIR 2007 Proceedings Session 20: Link Analysis\nClearly these ideas need to be tested on other data sets.\nHowever, they reveal that the method of analysis proposed in this paper can provide valuable information.\n6.6 Selecting topics\nThe confirmation of Hypothesis 4 leads, as indicated,
to the idea that we could do reliable system evaluation on a much smaller set of topics, provided we could select an appropriate set.\nThis selection may not be straightforward, however.\nIt is possible that simply selecting the high-hubness topics will achieve this end; however, it is also possible that there are significant interactions between topics which would render such a simple rule ineffective.\nThis investigation would therefore require serious experimentation.\nFor this reason we have not attempted in this paper to point to specific high-hubness topics as being good for evaluation.\nThis is left for future work.\n7.\nCONCLUSIONS AND FUTURE DEVELOPMENTS\nThe contribution of this paper is threefold:\n• we propose a novel way of normalizing AP values; • we propose a novel method to analyse TREC data; • the method, applied to TREC data, does indeed reveal some hidden properties.\nMore particularly, we propose Average Average Precision (AAP), a measure of topic ease, and a novel way of normalizing the average precision measure in TREC, on the basis of both MAP (Mean Average Precision) and AAP.\nThe normalized measures (APM and APA) are used to build a bipartite weighted Systems-Topics graph, which is then analyzed by means of network analysis indicators widely known in the (social) network analysis field, but somewhat generalised here.\nWe note that no such approach to TREC data analysis has been proposed so far.\nThe analysis shows that, with current measures, a system that wants to be effective in TREC needs to be effective on easy topics.\nAlso, it suggests that a cluster analysis on topic similarity could allow us to rely on a smaller number of topics.\nOur method of analysis, as described in this paper, can be applied only a posteriori, i.e., once all the topics and all the systems are available.\nAdding (or removing) a system or topic would mean re-computing the hubness and authority indicators.\nMoreover, we are not explicitly proposing
a change to current TREC methodology, although this could be a by-product of these--and further--analyses.\nThis is an initial work, and further analyses could be performed.\nFor instance, other effectiveness metrics could be used, in place of AP.\nOther centrality indicators, widely used in social network analysis, could be computed, although probably with similar results to PageRank.\nIt would be interesting to compute the higher-order eigenvectors of ATA and AAT.\nThe same kind of analysis could be performed at the document level, measuring document ease.\nHopefully, further analyses of the graph defined in this paper, according to the approach described, can be insightful for a better understanding of TREC or similar data.","keyphrases":["hit","trec","ir evalu","network analysi","inform retriev evalu experi","weight bipartit graph","social network analysi","system-topic graph","hit algorithm","human assessor","mean averag precis","web search engin implement","link analysi techniqu","inlink","pagerank","stem","kleinberg' hit algorithm"],"prmu":["P","P","P","P","P","P","P","M","M","U","R","U","M","U","U","U","M"]} {"id":"H-95","title":"Handling Locations in Search Engine Queries","abstract":"This paper proposes simple techniques for handling place references in search engine queries, an important aspect of geographical information retrieval. We address not only the detection, but also the disambiguation of place references, by matching them explicitly with concepts at an ontology. Moreover, when a query does not reference any locations, we propose to use information from documents matching the query, exploiting geographic scopes previously assigned to these documents. Evaluation experiments, using topics from CLEF campaigns and logs from real search engine queries, show the effectiveness of the proposed approaches.","lvl-1":"Handling Locations in Search Engine Queries Bruno Martins, M\u00e1rio J. 
Silva, Sérgio Freitas and Ana Paula Afonso Faculdade de Ciências da Universidade de Lisboa 1749-016 Lisboa, Portugal {bmartins,mjs,sfreitas,apa}@xldb.di.fc.ul.pt ABSTRACT This paper proposes simple techniques for handling place references in search engine queries, an important aspect of geographical information retrieval.\nWe address not only the detection, but also the disambiguation of place references, by matching them explicitly with concepts at an ontology.\nMoreover, when a query does not reference any locations, we propose to use information from documents matching the query, exploiting geographic scopes previously assigned to these documents.\nEvaluation experiments, using topics from CLEF campaigns and logs from real search engine queries, show the effectiveness of the proposed approaches.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval 1.\nINTRODUCTION Search engine queries are often associated with geographical locations, either explicitly (i.e. a location reference is given as part of the query) or implicitly (i.e.
the location reference is not present in the query string, but the query clearly has a local intent [17]).\nOne of the concerns of geographical information retrieval (GIR) lies in appropriately handling such queries, bringing better targeted search results and improving user satisfaction.\nNowadays, GIR is getting increasing attention.\nSystems that access resources on the basis of geographic context are starting to appear, both in the academic and commercial domains [4, 7].\nAccurately and effectively detecting location references in search engine queries is a crucial aspect of these systems, as they are generally based on interpreting geographical terms differently from the others.\nDetecting locations in queries is also important for general-purpose search engines, as this information can be used to improve ranking algorithms.\nQueries with a local intent are best answered with localized pages, while queries without any geographical references are best answered with broad pages [5].\nText mining methods have been successfully used in GIR to detect and disambiguate geographical references in text [9], or even to infer geographic scopes for documents [1, 13].\nHowever, this body of research has been focused on processing Web pages and full-text documents.\nSearch engine queries are more difficult to handle, in the sense that they are very short and carry implicit and subjective user intents.\nMoreover, the data is also noisier and more varied in form, and we have to deal with misspellings, multilingualism and acronyms.\nHow to automatically understand what the user intended, given a search query, without putting the burden on the user himself, remains an open text mining problem.\nKey challenges in handling locations in search engine queries include their detection and disambiguation, the ranking of possible candidates, the detection of false positives (i.e. not all contained location names refer to geographical locations), and the detection of implied locations
by the context of the query (i.e. when the query does not explicitly contain a place reference but is nonetheless geographical).\nSimple named entity recognition (NER) algorithms, based on dictionary look-ups for geographical names, may introduce many false positives for queries whose location names do not constitute place references.\nFor example, the query Denzel Washington contains the place name Washington, but the query is not geographical.\nQueries can also be geographic without containing any explicit reference to locations in the dictionary.\nIn these cases, place name extraction and disambiguation does not give any results, and we need to access other sources of information.\nThis paper proposes simple and yet effective techniques for handling place references in queries.\nEach query is split into a triple <what, relation, where>, where what specifies the non-geographic aspect of the information need, where specifies the geographic areas of interest, and relation specifies a spatial relationship connecting what and where.\nWhen this is not possible, i.e. the query does not contain any place references, we try using information from documents matching the query, exploiting geographic scopes previously assigned to these documents.\nDisambiguating place references is one of the most important aspects.\nWe use a search procedure that combines textual patterns with geographical names defined at an ontology, and we use heuristics to disambiguate the discovered references (e.g.
more important places are preferred).\nDisambiguation results in having the where term, from the triple above, associated with the most likely corresponding concepts from the ontology.\nWhen we cannot detect any locations, we attempt to use geographical scopes previously inferred for the documents at the top search results.\nBy doing this, we assume that the most frequent geographical scope in the results should correspond to the geographical context implicit in the query.\nExperiments with CLEF topics [4] and sample queries from a Web search engine show that the proposed methods are accurate, and may have applications in improving search results.\nThe rest of this paper is organized as follows.\nWe first formalize the problem and describe work related to our research.\nNext, we describe our approach for handling place names in queries, starting with the general approach for disambiguating place references over textual strings, then presenting the method for splitting a query into a <what, relation, where> triple, and finally discussing the technique for exploiting geographic scopes previously assigned to documents in the result set.\nSection 4 presents evaluation results.\nFinally, we give some conclusions and directions for future research.\n2.\nCONCEPTS AND RELATED WORK Search engine performance depends on the ability to capture the most likely meaning of a query as intended by the user [16].\nPrevious studies showed that a significant portion of the queries submitted to search engines are geographic [8, 14].\nA recent enhancement to search engine technology is the addition of geographic reasoning, combining geographic information systems and information retrieval in order to build search engines that find information associated with given locations.\nThe ability to recognize and reason about the geographical terminology, given in text documents and user queries, is a crucial aspect of these geographical information retrieval (GIR) systems [4,
7].\nExtracting and distinguishing different types of entities in text is usually referred to as Named Entity Recognition (NER).\nFor at least a decade, this has been an important text mining task, and a key feature of the Message Understanding Conferences (MUC) [3].\nNER has been successfully automated with near-human performance, but the specific problem of recognizing geographical references presents additional challenges [9].\nWhen handling named entities with a high level of detail, ambiguity problems arise more frequently.\nAmbiguity in geographical references is bi-directional [15].\nThe same name can be used for more than one location (referent ambiguity), and the same location can have more than one name (reference ambiguity).\nThe former has another twist, since the same name can be used for locations as well as for other class of entities, such as persons or company names (referent class ambiguity).\nBesides the recognition of geographical expressions, GIR also requires that the recognized expressions be classified and grounded to unique identifiers [11].\nGrounding the recognized expressions (e.g. 
associating them with coordinates or concepts at an ontology) assures that they can be used in more advanced GIR tasks.\nPrevious works have addressed the tagging and grounding of locations in Web pages, as well as the assignment of geographic scopes to these documents [1, 7, 13].\nThis is a complementary aspect to the techniques described in this paper, since if we have the Web pages tagged with location information, a search engine can conveniently return pages with a geographical scope related to the scope of the query.\nThe task of handling geographical references over documents is however considerably different from that of handling geographical references over queries.\nIn our case, queries are usually short and often do not constitute proper sentences.\nText mining techniques that make use of context information are difficult to apply with high accuracy.\nPrevious studies have also addressed the use of text mining and automated classification techniques over search engine queries [16, 10].\nHowever, most of these works did not consider place references or geographical categories.\nAgain, these previously proposed methods are difficult to apply to the geographic domain.\nGravano et al. studied the classification of Web queries into two types, namely local and global [5].\nThey defined a query as local if its best matches on a Web search engine are likely to be local pages, such as houses for sale.\nA number of classification algorithms have been evaluated using search engine queries.\nHowever, their experimental results showed that only a rather low precision and recall could be achieved.\nThe problem addressed in this paper is also slightly different, since we are trying not only to detect local queries but also to disambiguate the locale of interest.\nWang et al. proposed to go further than detecting local queries, by also disambiguating the implicit locale of interest [17].\nThe proposed approach works both for queries containing place references and for queries not containing them, by looking for dominant geographic references over query logs and text from search results.\nIn comparison, we propose simpler techniques based on matching names from a geographic ontology.\nOur approach looks for spatial relationships in the query string, and it also associates the place references with ontology concepts.\nIn the case of queries not containing explicit place references, we use geographical scopes previously assigned to the documents, whereas Wang et al. proposed to extract locations from the text of the top search results.\nThere are nowadays many geocoding, reverse-geocoding, and mapping services on the Web that can be easily integrated with other applications.\nGeocoding is the process of locating points on the surface of the Earth from alphanumeric addressing data.\nTaking a string with an address, a geocoder queries a geographical information system and returns interpolated coordinate values for the given location.\nInstead of computing coordinates for a given place reference, the technique described in this paper aims at assigning references to the corresponding ontology concepts.\nHowever, if each concept at the ontology contains associated coordinate information, the approach described here could also be used to build a geocoding service.\nMost such existing services are commercial in nature, and there are no technical publications describing them.\nA number of commercial search services have also started to support location-based searches.\nGoogle Local, for instance, initially required the user to specify a location qualifier separately from the search query.\nMore recently, it added location look-up capabilities that extract locations from query strings.\nFor example, in a search for Pizza Seattle, Google Local returns
local results for pizza near Seattle, WA.\nHowever, the internals of their solution are not published, and their approach also does not handle location-implicit queries.\nMoreover, Google Local does not take spatial relations into account.\nIn sum, there are already some studies on tagging geographical references, but Web queries pose additional challenges which have not been addressed.\nIn this paper, we explain the proposed solutions for the identified problems.\n3.\nHANDLING QUERIES IN GIR SYSTEMS Most GIR queries can be parsed into a <what, relation, where> triple, where the what term is used to specify the general non-geographical aspect of the information need, the where term is used to specify the geographical areas of interest, and the relation term is used to specify a spatial relationship connecting what and where.\nWhile the what term can assume any form, in order to reflect any information need, the relation and where terms should be part of a controlled vocabulary.\nIn particular, the relation term should refer to a well-known geographical relation that the underlying GIR system can interpret (e.g. near or contained at), and the where term should be disambiguated into a set of unique identifiers, corresponding to concepts at the ontology.\nDifferent systems can use alternative schemes to take input queries from the users.\nThree general strategies can be identified, and GIR systems often support more than one of the following schemes:\nFigure 1: Strategies for processing queries in Geographical Information Retrieval systems.\n1.\nInput to the system is a textual query string.\nThis is the hardest case, since we need to separate the query into the three different components, and then we need to disambiguate the where term into a set of unique identifiers.\n2.\nInput to the system is provided in two separate strings, one concerning the what term, and the other concerning the where.\nThe relation term can be either fixed (e.g.
always assume the near relation), specified together with the where string, or provided separately by the user from a set of possible choices.\nAlthough there is no need to separate the query string into the different components, we still need to disambiguate the where term into a set of unique identifiers.\n3.\nInput to the system is provided through a query string together with an unambiguous description of the geographical area of interest (e.g. a sketch on a map, spatial coordinates or a selection from a set of possible choices).\nNo disambiguation is required, and therefore the techniques described in this paper do not have to be applied.\nThe first two schemes depend on place name disambiguation.\nFigure 1 illustrates how we propose to handle geographic queries in these first two schemes.\nA common component is the algorithm for disambiguating place references into corresponding ontology concepts, which is described next.\n3.1 From Place Names to Ontology Concepts A required task in handling GIR queries consists of associating a string containing a geographical reference with the set of corresponding concepts at the geographic ontology.\nWe propose to do this according to the pseudo-code listed in Algorithm 1.\nThe algorithm considers the cases where a second (or even more than one) location is given to qualify a first (e.g. Paris, France).\nIt makes recursive calls to match each location, and relies on hierarchical part-of relations to detect whether two locations share a common hierarchy path.\nOne of the provided locations should be more general and the other more specific, in the sense that there must exist a part-of relationship among the associated concepts at the ontology (either direct or transitive).\nThe most specific location is a sub-region of the most general, and the algorithm returns the most specific one (i.e.
for Paris, France the algorithm returns the ontology concept associated with Paris, the capital city of France).\nWe also consider the cases where a geographical type expression is used to qualify a given name (e.g. city of Lisbon or state of New York).\nFor instance, the name Lisbon can correspond to many different concepts at a geographical ontology, and type qualifiers can provide useful information for disambiguation.\nAlgorithm 1 Matching a place name with ontology concepts\nRequire: O = A geographic ontology\nRequire: GN = A string with the geographic name to be matched\n1: L = An empty list\n2: INDEX = The position in GN of the first occurrence of a comma, semi-colon or bracket character\n3: if INDEX is defined then\n4: GN1 = The substring of GN from position 0 to INDEX\n5: GN2 = The substring of GN from INDEX + 1 to length(GN)\n6: L1 = Algorithm1(O, GN1)\n7: L2 = Algorithm1(O, GN2)\n8: for each C1 in L1 do\n9: for each C2 in L2 do\n10: if C1 is an ancestor of C2 at O then\n11: L = The list L after adding element C2\n12: else if C1 is a descendant of C2 at O then\n13: L = The list L after adding element C1\n14: end if\n15: end for\n16: end for\n17: else\n18: GN = The string GN after removing case and diacritics\n19: if GN contains a geographic type qualifier then\n20: T = The substring of GN containing the type qualifier\n21: GN = The substring of GN with the type qualifier removed\n22: L = The list of concepts from O with name GN and type T\n23: else\n24: L = The list of concepts from O with name GN\n25: end if\n26: end if\n27: return The list L\nThe considered type qualifiers should also be described at the ontology (e.g. each geographic concept should be associated with a type that is also defined at the ontology, such as country, district or city).\nIdeally, the geographical reference provided by the user should be disambiguated into a single ontology concept.\nHowever, this is not always possible, since the user may not provide all the required information (i.e.
a type expression or a second qualifying location).\nThe output is therefore a list with the possible concepts being referred to by the user.\nIn a final step, we propose to sort this list, so that if a single concept is required as output, we can use the one that is ranked higher.\nThe sorting procedure reflects the likelihood of each concept being indeed the one referred to.\nWe propose to rank concepts according to the following heuristics: 1.\nThe geographical type expression associated with the ontology concept.\nFor the same name, a country is more likely to be referenced than a city, and in turn a city more likely to be referenced than a street.\n2.\nNumber of ancestors at the ontology.\nTop places at the ontology tend to be more general, and are therefore more likely to be referenced in search engine queries.\n3.\nPopulation count.\nHighly populated places are better known, and therefore more likely to be referenced in queries.\n4.\nPopulation counts from direct ancestors at the ontology.\nSubregions of highly populated places are better known, and also more likely to be referenced in search engine queries.\n5.\nOccurrence frequency over Web documents (e.g. 
Google counts) for the geographical names.\nPlace names that occur more frequently over Web documents are also more likely to be referenced in search engine queries.\n6.\nNumber of descendants at the ontology.\nPlaces that have more sub-regions tend to be more general, and are therefore more likely to be mentioned in search engine queries.\n7.\nString size for the geographical names.\nShort names are more likely to be mentioned in search engine queries.\nAlgorithm 1, plus the ranking procedure, can already handle GIR queries where the where term is given separately from the what and relation terms.\nHowever, if the query is given in a single string, we require the identification of the associated <what, relation, where> triple before disambiguating the where term into the corresponding ontology concepts.\nThis is described in the following Section.\n3.2 Handling Single Query Strings Algorithm 2 provides the mechanism for separating a query string into a <what, relation, where> triple.\nIt uses Algorithm 1 to find the where term, disambiguating it into a set of ontology concepts.\nThe algorithm starts by tokenizing the query string into individual words, also taking care of removing case and diacritics.\nWe have a simple tokenizer that uses the space character as a word delimiter, but we could also adopt a tokenization approach similar to the proposal of Wang et al., which relies on Web occurrence statistics to avoid breaking collocations [17].\nIn the future, we plan to test whether this different tokenization scheme can improve results.\nNext, the algorithm tests different possible splits of the query, building the what, relation and where terms through concatenations of the individual tokens.\nThe relation term is matched against a list of possible values (e.g.
near, at, around, or south of), corresponding to the operators that are supported by the GIR system. Note that it is also the responsibility of the underlying GIR system to interpret the actual meaning of the different spatial relations. Algorithm 1 is used to check whether a where term constitutes a geographical reference or not. We also check if the last word in the what term belongs to a list of exceptions, containing for instance first names of people in different languages. This ensures that a query like Denzel Washington is appropriately handled. If the algorithm succeeds in finding valid relation and where terms, the corresponding triple is returned. Otherwise, we return a triple with the what term equaling the query string, and the relation and where terms set as empty. If the entire query string constitutes a geographical reference, we return a triple with the what term set to empty, the where term equaling the query string, and the relation term set to DEFINITION (i.e.
these queries should be answered with information about the given place references). The algorithm also handles query strings where more than one geographical reference is provided using and or an equivalent preposition, through a recursive call to Algorithm 2.

Algorithm 2 Get <what, relation, where> from a query string
Require: O = A geographical ontology
Require: Q = A non-empty string with the query
1: Q = The string Q after removing case and diacritics
2: TOKENS[0..N-1] = An array of strings with the individual words of Q
3: N = The size of the TOKENS array
4: for INDEX = 0 to N do
5:   if INDEX > 0 then
6:     WHAT = Concatenation of TOKENS[0..INDEX-1]
7:     LASTWHAT = TOKENS[INDEX-1]
8:   else
9:     WHAT = An empty string
10:    LASTWHAT = An empty string
11:  end if
12:  WHERE = Concatenation of TOKENS[INDEX..N-1]
13:  RELATION = An empty string
14:  for INDEX2 = INDEX to N-1 do
15:    RELATION2 = Concatenation of TOKENS[INDEX..INDEX2]
16:    if RELATION2 is a valid geographical relation then
17:      WHERE = Concatenation of TOKENS[INDEX2+1..N-1]
18:      RELATION = RELATION2
19:    end if
20:  end for
21:  if RELATION = empty AND LASTWHAT is in the exception list then
22:    TESTGEO = FALSE
23:  else
24:    TESTGEO = TRUE
25:  end if
26:  if TESTGEO AND Algorithm1(WHERE) <> EMPTY then
27:    if WHERE ends with AND SURROUNDINGS then
28:      RELATION = The string NEAR
29:      WHERE = The substring of WHERE with AND SURROUNDINGS removed
30:    end if
31:    if WHAT ends with AND (or a similar preposition) then
32:      <WHAT, RELATION, WHERE2> = Algorithm2(WHAT)
33:      WHERE = Concatenation of WHERE with WHERE2
34:    end if
35:    if RELATION = An empty string then
36:      if WHAT = An empty string then
37:        RELATION = The string DEFINITION
38:      else
39:        RELATION = The string CONTAINED-AT
40:      end if
41:    end if
42:  else
43:    WHAT = The string Q
44:    WHERE = An empty string
45:    RELATION = An empty string
46:  end if
47: end for
48: return <WHAT, RELATION, WHERE>

A query like Diamond trade in Angola and South Africa is therefore
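As an illustrative aside, the core split search of Algorithm 2 can be sketched in Python. This is a minimal sketch under simplifying assumptions: lookup_concepts stands in for Algorithm 1, the RELATIONS and EXCEPTIONS sets are tiny illustrative stand-ins for the real resources, the and-coordination and surroundings handling are omitted, and the sketch returns the first valid split rather than scanning all of them.

```python
# Illustrative sketch of Algorithm 2's split search (not the full algorithm:
# the AND-coordination and "surroundings" handling are omitted).
import unicodedata

RELATIONS = {"in", "near", "at", "around", "south of"}  # illustrative stand-ins
EXCEPTIONS = {"denzel"}                                 # e.g. first names

def normalize(text):
    # Step 1 of Algorithm 2: remove case and diacritics.
    decomposed = unicodedata.normalize("NFKD", text.lower())
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def split_query(query, lookup_concepts):
    """Return a <what, relation, where> triple for the query string.

    lookup_concepts(where) stands in for Algorithm 1: it should return a
    truthy value iff the string is a known geographical reference.
    """
    tokens = normalize(query).split()
    n = len(tokens)
    for index in range(n + 1):
        what = " ".join(tokens[:index])
        last_what = tokens[index - 1] if index > 0 else ""
        where, relation = " ".join(tokens[index:]), ""
        # Try to peel a (possibly multi-word) relation off the where part.
        for index2 in range(index, n):
            candidate = " ".join(tokens[index:index2 + 1])
            if candidate in RELATIONS:
                relation = candidate
                where = " ".join(tokens[index2 + 1:])
        # Skip the geographic test when "what" ends with an exception word.
        test_geo = bool(relation) or last_what not in EXCEPTIONS
        if where and test_geo and lookup_concepts(where):
            if not relation:
                relation = "DEFINITION" if not what else "CONTAINED-AT"
            return what, relation, where
    return normalize(query), "", ""  # no geographical reference found
```

For Car bombings near Madrid, with a lookup that recognizes madrid, the sketch yields the triple <car bombings, near, madrid>; for Denzel Washington, the exception list suppresses the geographic test and the whole string is kept as the what term.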
appropriately handled. Finally, if the geographical reference in the query is complemented with an expression similar to and its surroundings, the spatial relation (which is assumed to be CONTAINED-AT if none is provided) is changed to NEAR.

3.3 From Search Results to Query Locality

The procedures given so far are appropriate for handling queries where a place reference is explicitly mentioned. However, the fact that a query can be associated with a geographical context may not be directly observable in the query itself, but rather in the results returned. For instance, queries like recommended hotels for SIGIR 2006 or SeaFair 2006 lodging can be seen to refer to the city of Seattle. Although they do not contain an explicit place reference, we expect results to be about hotels in Seattle.

Table 1: Example topics from the GeoCLEF evaluation campaigns and the corresponding <what, relation, where> triples.

Topic                                  | What                          | Relation     | Where           | TGN concepts | ML concepts
Vegetable Exporters of Europe          | Vegetable Exporters           | CONTAINED-AT | Europe          | 1            | 1
Trade Unions in Europe                 | Trade Unions                  | CONTAINED-AT | Europe          | 1            | 1
Roman cities in the UK and Germany     | Roman cities                  | CONTAINED-AT | UK and Germany  | 6            | 2
Cathedrals in Europe                   | Cathedrals                    | CONTAINED-AT | Europe          | 1            | 1
Car bombings near Madrid               | Car bombings                  | NEAR         | Madrid          | 14           | 2
Volcanos around Quito                  | Volcanos                      | NEAR         | Quito           | 4            | 1
Cities within 100km of Frankfurt       | Cities                        | NEAR         | Frankfurt       | 3            | 1
Russian troops in south(ern) Caucasus  | Russian troops in south(ern)  | CONTAINED-AT | Caucasus        | 2            | 1
Cities near active volcanoes           | (could not be appropriately handled - the relation and where terms are returned empty)
Japanese rice imports                  | (could not be appropriately handled - the relation and where terms are returned empty)

In the cases where a query does not contain place references, we start by assuming that the top results from a search engine represent the most popular and correct context and usage for the query. We then propose to use the distributional
characteristics of the geographical scopes previously assigned to the documents corresponding to these top results. In previous work, we presented a text mining approach for assigning documents with corresponding geographical scopes, defined at an ontology, which worked as an offline pre-processing stage in a GIR system [13]. This pre-processing step is a fundamental stage of GIR, and it is reasonable to assume that this kind of information would be available in any system. Similarly to Wang et al., we could also attempt to process the results online, in order to detect place references in the documents [17]. However, a GIR system already requires the offline stage. For the top N documents given in the results, we check the geographic scopes that were assigned to them. If a significant portion of the results are assigned to the same scope, then the query can be seen to be related to the corresponding geographic concept. This assumption could even be relaxed, for instance by checking whether the documents belong to scopes that are hierarchically related.

4. EVALUATION EXPERIMENTS

We used three different ontologies in our evaluation experiments, namely the Getty Thesaurus of Geographic Names (TGN) [6] and two resources developed at our group, here referred to as the PT and ML ontologies [2]. TGN and ML include global geographical information in multiple languages (although TGN is considerably larger), while the PT ontology covers the Portuguese territory in high detail. Place types also differ across the ontologies: for instance, PT includes street names and postal addresses, whereas ML only goes to the level of cities. The reader should refer to [2, 6] for a complete description of these resources. Our initial experiments used Portuguese and English topics from the GeoCLEF 2005 and 2006 evaluation campaigns. Topics in GeoCLEF correspond to query strings that can be used as input to a GIR system [4]. ImageCLEF 2006 also included topics
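The scope-voting procedure just described can be sketched as follows; this is a minimal illustration assuming each top result carries a precomputed scope identifier, with the majority threshold being an assumed parameter rather than a value prescribed in the paper.

```python
# Sketch of inferring an implicit query scope from the scopes previously
# assigned to the top-N search results. The 0.5 threshold is an assumption.
from collections import Counter

def infer_query_scope(result_scopes, threshold=0.5):
    """result_scopes: scope identifiers of the top-N results (None when a
    document has no assigned scope). Returns the dominant scope if a
    significant portion of the results share it, otherwise None."""
    assigned = [s for s in result_scopes if s is not None]
    if not assigned:
        return None
    scope, count = Counter(assigned).most_common(1)[0]
    return scope if count / len(result_scopes) >= threshold else None
```

For example, if 16 of the top 20 results were previously assigned the scope Porto, the query would be associated with that concept; if no scope dominates, the query is treated as non-geographical.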
specifying place references, and participants were encouraged to run their GIR systems on them. Our experiments also considered this dataset. For each topic, we measured whether Algorithm 2 was able to find the corresponding <what, relation, where> triple. The ontologies used in this experiment were TGN and ML, as the topics were given in multiple languages and covered the whole globe.

Table 2: Summary of results over CLEF topics.

Dataset       | Queries | Correct Triples (ML) | Correct Triples (TGN)
GeoCLEF05 EN  | 25      | 19                   | 20
GeoCLEF05 PT  | 25      | 20                   | 18
GeoCLEF06 EN  | 32      | 28                   | 19
GeoCLEF06 PT  | 25      | 23                   | 11
ImgCLEF06 EN  | 24      | 16                   | 18
Average time per query: 288.1 msec (ML) and 334.5 msec (TGN).

Table 1 illustrates some of the topics, and Table 2 summarizes the obtained results. The tables show that the proposed technique adequately handles most of these queries. A manual inspection of the ontology concepts that were returned for each case also revealed that the where term was being correctly disambiguated. Note that the TGN ontology indeed added some ambiguity, as names like Madrid can correspond to many different places around the globe. It should also be noted that some of the considered topics are very hard for an automated system to handle. Some of them are ambiguous (e.g. Japanese rice imports can be said to refer either to rice imports in Japan or to imports of Japanese rice), and others contain no direct geographical references (e.g. cities near active volcanoes). Besides these very hard cases, we also missed some topics due to their usage of place adjectives and specific regions that are not defined in the ontologies (e.g.
environmental concerns around the Scottish Trossachs). In a second experiment, we used a sample of around 100,000 real search engine queries. The objective was to see if a significant number of these queries were geographical in nature, while also checking that the algorithm did not make many mistakes by classifying a query as geographical when that was not the case. The Portuguese ontology was used in this experiment, and the queries were taken from the logs of a Portuguese Web search engine available at www.tumba.pt.

Table 3: Results from an experiment with search engine logs.

Num. Queries                                 | 110,916
Num. Queries without Geographical References | 107,159 (96.6%)
Num. Queries with Geographical References    | 3,757 (3.4%)

Table 3 summarizes the obtained results. Many queries were indeed geographical (around 3.4%, although previous studies reported values above 14% [8]). A manual inspection showed that the algorithm did not produce many false positives, and that the geographical queries were indeed split into correct <what, relation, where> triples. The few mistakes we encountered were related to place names that are more frequently used in other contexts (e.g. in Teófilo Braga we have the problem that Braga is a Portuguese district, while Teófilo Braga was a well-known Portuguese writer and politician). The addition of more names to the exception list can provide a workaround for most of these cases. We also tested the procedure for detecting queries that are implicitly geographical with a small sample of queries from the logs. For instance, for the query Estádio do Dragão (i.e.
home stadium of a soccer team from Porto), the correct geographical context can be discovered from the analysis of the results (more than 75% of the top 20 results are assigned the scope Porto). For future work, we plan on using a larger collection of queries to evaluate this aspect. Besides queries from the search engine logs, we also plan on using the names of well-known buildings, monuments and other landmarks, as they have a strong geographical connotation. Finally, we also made a comparative experiment with two popular geocoders, Maporama and Microsoft's MapPoint. The objective was to compare Algorithm 1 with other approaches, in terms of being able to correctly disambiguate a string with a place reference.

Table 4: Results from a comparison with geocoding services.

Civil Parishes from Lisbon  | Maporama   | MapPoint   | Ours
Coded refs. (out of 53)     | 9 (16.9%)  | 30 (56.6%) | 15 (28.3%)
Avg. time per ref. (msec)   | 506.23     | 1235.87    | 143.43

Civil Parishes from Porto   | Maporama   | MapPoint   | Ours
Coded refs. (out of 15)     | 0 (0%)     | 2 (13.3%)  | 5 (33.3%)
Avg. time per ref. (msec)   | 514.45     | 991.88     | 132.14

The Portuguese ontology was used in this experiment, taking as input the names of civil parishes from the Portuguese municipalities of Lisbon and Porto, and checking whether the systems were able to disambiguate the full name (e.g. Campo Grande, Lisboa or Foz do Douro, Porto) into the correct geocode. We specifically measured whether our approach was better at unambiguously returning geocodes given the place reference (i.e.
return the single correct code), and at providing results rapidly. Table 4 shows the obtained results, and the accuracy of our method seems comparable to that of the commercial geocoders. Note that for Maporama and MapPoint, the times given in Table 4 include fetching results from the Web, as we have no direct way of accessing the geocoding algorithms (in both cases, fetching static content from the Web servers takes around 125 milliseconds). Although our approach cannot unambiguously return the correct geocode in most cases (only 20 out of a total of 68 cases), it nonetheless returns results that a human user can disambiguate (e.g. for Madalena, Lisboa we return both a street and a civil parish), whereas the other systems often did not produce results at all. Moreover, if we consider the top geocode according to the ranking procedure described in Section 3.1, or if we use a type qualifier in the name (e.g. civil parish of Campo Grande, Lisboa), our algorithm always returns the correct geocode.

5. CONCLUSIONS

This paper presented simple approaches for handling place references in search engine queries. This is a hard text mining problem, as queries are often ambiguous or underspecify information needs. However, our initial experiments indicate that for many queries, the referenced places can be determined effectively. Unlike the techniques proposed by Wang et al.
[17], we mainly focused on recognizing spatial relations and associating place names with ontology concepts. The proposed techniques were employed in the prototype system that we used for participating in GeoCLEF 2006. In queries where a geographical reference is not explicitly mentioned, we propose to use the results returned for the query, exploiting geographic scopes previously assigned to these documents. In the future, we plan on doing a careful evaluation of this last approach. Another idea that we would like to test involves the integration of a spelling correction mechanism [12] into Algorithm 1, so that incorrectly spelled place references can be matched to ontology concepts. The proposed techniques for handling geographic queries can have many applications in improving GIR systems or even general-purpose search engines. After place references are appropriately disambiguated into ontology concepts, a GIR system can use them to retrieve relevant results, through the use of appropriate index structures (e.g. indexing the spatial coordinates associated with ontology concepts), provided that the documents are also assigned to scopes corresponding to ontology concepts. A different GIR strategy can involve query expansion, taking the where terms from the query and using the ontology to add names from neighboring locations. In a general-purpose search engine, if a local query is detected, we can forward users to a GIR system, which should be better suited for properly handling the query. The regular Google search interface already does this, by presenting a link to Google Local when it detects a geographical query.

6. REFERENCES

[1] E. Amitay, N. Har'El, R. Sivan, and A. Soffer. Web-a-Where: Geotagging Web content. In Proceedings of SIGIR-04, the 27th Conference on Research and Development in Information Retrieval, 2004.

[2] M. Chaves, M. J. Silva, and B.
Martins. A Geographic Knowledge Base for Semantic Web Applications. In Proceedings of SBBD-05, the 20th Brazilian Symposium on Databases, 2005.

[3] N. A. Chinchor. Overview of MUC-7/MET-2. In Proceedings of MUC-7, the 7th Message Understanding Conference, 1998.

[4] F. Gey, R. Larson, M. Sanderson, H. Joho, and P. Clough. GeoCLEF: the CLEF 2005 cross-language geographic information retrieval track. In Working Notes for the CLEF 2005 Workshop, 2005.

[5] L. Gravano, V. Hatzivassiloglou, and R. Lichtenstein. Categorizing Web queries according to geographical locality. In Proceedings of CIKM-03, the 12th Conference on Information and Knowledge Management, 2003.

[6] P. Harpring. Proper words in proper places: The Thesaurus of Geographic Names. MDA Information, 3, 1997.

[7] C. Jones, R. Purves, A. Ruas, M. Sanderson, M. Sester, M. van Kreveld, and R. Weibel. Spatial information retrieval and geographical ontologies: An overview of the SPIRIT project. In Proceedings of SIGIR-02, the 25th Conference on Research and Development in Information Retrieval, 2002.

[8] J. Kohler. Analyzing search engine queries for the use of geographic terms. MSc Thesis, 2003.

[9] A. Kornai and B. Sundheim, editors. Proceedings of the NAACL-HLT Workshop on the Analysis of Geographic References, 2003.

[10] Y. Li, Z. Zheng, and H. Dai. KDD CUP-2005 report: Facing a great challenge. SIGKDD Explorations, 7, 2006.

[11] D. Manov, A. Kiryakov, B. Popov, K. Bontcheva, D. Maynard, and H. Cunningham. Experiments with geographic knowledge for information extraction. In Proceedings of the NAACL-HLT Workshop on the Analysis of Geographic References, 2003.

[12] B. Martins and M. J. Silva. Spelling correction for search engine queries. In Proceedings of EsTAL-04, España for Natural Language Processing, 2004.

[13] B. Martins and M. J.
Silva.\nA graph-ranking algorithm for geo-referencing documents.\nIn Proceedings of ICDM-05, the 5th IEEE International Conference on Data Mining, 2005.\n[14] L. Souza, C. J. Davis, K. Borges, T. Delboni, and A. Laender.\nThe role of gazetteers in geographic knowledge discovery on the web.\nIn Proceedings of LA-Web-05, the 3rd Latin American Web Congress, 2005.\n[15] E. Tjong, K. Sang, and F. D. Meulder.\nIntroduction to the CoNLL-2003 shared task: Language-Independent Named Entity Recognition.\nIn Proceedings of CoNLL-2003, the 7th Conference on Natural Language Learning, 2003.\n[16] D. Vogel, S. Bickel, P. Haider, R. Schimpfky, P. Siemen, S. Bridges, and T. Scheffer.\nClassifying search engine queries using the Web as background knowledge.\nSIGKDD Explorations Newsletter, 7(2):117-122, 2005.\n[17] L. Wang, C. Wang, X. Xie, J. Forman, Y. Lu, W.-Y.\nMa, and Y. Li.\nDetecting dominant locations from search queries.\nIn Proceedings of SIGIR-05, the 28th Conference on Research and development in information retrieval, 2005.","lvl-3":"Handling Locations in Search Engine Queries\nABSTRACT\nThis paper proposes simple techniques for handling place references in search engine queries, an important aspect of geographical information retrieval.\nWe address not only the detection, but also the disambiguation of place references, by matching them explicitly with concepts at an ontology.\nMoreover, when a query does not reference any locations, we propose to use information from documents matching the query, exploiting geographic scopes previously assigned to these documents.\nEvaluation experiments, using topics from CLEF campaigns and logs from real search engine queries, show the effectiveness of the proposed approaches.\n1.\nINTRODUCTION\nSearch engine queries are often associated with geographical locations, either explicitly (i.e. a location reference is given as part of the query) or implicitly (i.e. 
the location reference is not present in the query string, but the query clearly has a local intent [17]).\nOne of the concerns of geographical information retrieval (GIR) lies in appropriately handling such queries, bringing better targeted search results and improving user satisfaction.\nNowadays, GIR is getting increasing attention.\nSystems that access resources on the basis of geographic context are starting to appear, both in the academic and commercial domains [4, 7].\nAccurately and effectively detecting location references in search engine queries is a crucial aspect of these systems, as they are generally based on interpreting geographical terms differently from the others.\nDetecting locations in queries is also important for generalpropose search engines, as this information can be used to improve ranking algorithms.\nQueries with a local intent are best answered This research was partially supported Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia, under grants POSI\/SRI\/40193 \/ 2001 and SFRH\/BD\/10757 \/ 2002.\nwith \"localized\" pages, while queries without any geographical references are best answered with \"broad\" pages [5].\nText mining methods have been successfully used in GIR to detect and disambiguate geographical references in text [9], or even to infer geographic scopes for documents [1, 13].\nHowever, this body of research has been focused on processing Web pages and full-text documents.\nSearch engine queries are more difficult to handle, in the sense that they are very short and with implicit and subjective user intents.\nMoreover, the data is also noisier and more versatile in form, and we have to deal with misspellings, multilingualism and acronyms.\nHow to automatically understand what the user intended, given a search query, without putting the burden in the user himself, remains an open text mining problem.\nKey challenges in handling locations over search engine queries include their detection and disambiguation, the ranking 
of possible candidates, the detection of false positives (i.e not all contained location names refer to geographical locations), and the detection of implied locations by the context of the query (i.e. when the query does not explicitly contain a place reference but it is nonetheless geographical).\nSimple named entity recognition (NER) algorithms, based on dictionary look-ups for geographical names, may introduce high false positives for queries whose location names do not constitute place references.\nFor example the query \"Denzel Washington\" contains the place name \"Washington,\" but the query is not geographical.\nQueries can also be geographic without containing any explicit reference to locations at the dictionary.\nIn these cases, place name extraction and disambiguation does not give any results, and we need to access other sources of information.\nThis paper proposes simple and yet effective techniques for handling place references over queries.\nEach query is split into a triple <what, relation, where>, where what specifies the non-geographic aspect of the information need, where specifies the geographic areas of interest, and relation specifies a spatial relationship connecting what and where.\nWhen this is not possible, i.e. the query does not contain any place references, we try using information from documents matching the query, exploiting geographic scopes previously assigned to these documents.\nDisambiguating place references is one of the most important aspects.\nWe use a search procedure that combines textual patterns with geographical names defined at an ontology, and we use heuristics to disambiguate the discovered references (e.g. 
more important places are preferred).\nDisambiguation results in having the where term, from the triple above, associated with the most likely corresponding concepts from the ontology.\nWhen we cannot detect any locations, we attempt to use geographical scopes previously inferred for the documents at the top search results.\nBy doing this, we assume that the most frequent geographical scope in the results should correspond to the geographical context implicit in the query.\nExperiments with CLEF topics [4] and sample queries from a Web search engine show that the proposed methods are accurate, and may have applications in improving search results.\nThe rest of this paper is organized as follows.\nWe first formalize the problem and describe related work to our research.\nNext, we describe our approach for handling place names in queries, starting with the general approach for disambiguating place references over textual strings, then presenting the method for splitting a query into a <what, relation, where> triple, and finally discussing the technique for exploiting geographic scopes previously assigned to documents in the result set.\nSection 4 presents evaluation results.\nFinally, we give some conclusions and directions for future research.\n2.\nCONCEPTS AND RELATED WORK\n3.\nHANDLING QUERIES IN GIR SYSTEMS\n3.1 From Place Names to Ontology Concepts\n3.2 Handling Single Query Strings\n3.3 From Search Results to Query Locality\n4.\nEVALUATION EXPERIMENTS\n5.\nCONCLUSIONS\nThis paper presented simple approaches for handling place references in search engine queries.\nThis is a hard text mining problem, as queries are often ambiguous or underspecify information needs.\nHowever, our initial experiments indicate that for many queries, the referenced places can be determined effectively.\nUnlike the techniques proposed by Wang et.\nal. 
[17], we mainly focused on recognizing spatial relations and associating place names to ontology concepts.\nThe proposed techniques were employed in the prototype system that we used for participating in GeoCLEF 2006.\nIn queries where a geographical reference is not explicitly mentioned, we propose to use the results for the query, exploiting geographic scopes previously assigned to these documents.\nIn the future, we plan on doing a careful evaluation of this last approach.\nAnother idea that we would like to test involves the integration of a spelling correction mechanism [12] into Algorithm 1, so that incorrectly spelled place references can be matched to ontology concepts.\nThe proposed techniques for handling geographic queries can have many applications in improving GIR systems or even general purpose search engines.\nAfter place references are appropriately disambiguated into ontology concepts, a GIR system can use them to retrieve relevant results, through the use of appropriate index structures (e.g. 
indexing the spatial coordinates associated with ontology concepts) and provided that the documents are also assigned to scopes corresponding to ontology concepts.\nA different GIR strategy can involve query expansion, by taking the where terms from the query and using the ontology to add names from neighboring locations.\nIn a general purpose search engine, and if a local query is detected, we can forward users to a GIR system, which should be better suited for properly handling the query.\nThe regular Google search interface already does this, by presenting a link to Google Local when it detects a geographical query.","lvl-4":"Handling Locations in Search Engine Queries\nABSTRACT\nThis paper proposes simple techniques for handling place references in search engine queries, an important aspect of geographical information retrieval.\nWe address not only the detection, but also the disambiguation of place references, by matching them explicitly with concepts at an ontology.\nMoreover, when a query does not reference any locations, we propose to use information from documents matching the query, exploiting geographic scopes previously assigned to these documents.\nEvaluation experiments, using topics from CLEF campaigns and logs from real search engine queries, show the effectiveness of the proposed approaches.\n1.\nINTRODUCTION\nSearch engine queries are often associated with geographical locations, either explicitly (i.e. a location reference is given as part of the query) or implicitly (i.e. 
the location reference is not present in the query string, but the query clearly has a local intent [17]).\nOne of the concerns of geographical information retrieval (GIR) lies in appropriately handling such queries, bringing better targeted search results and improving user satisfaction.\nAccurately and effectively detecting location references in search engine queries is a crucial aspect of these systems, as they are generally based on interpreting geographical terms differently from the others.\nDetecting locations in queries is also important for generalpropose search engines, as this information can be used to improve ranking algorithms.\nQueries with a local intent are best answered This research was partially supported Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia, under grants POSI\/SRI\/40193 \/ 2001 and SFRH\/BD\/10757 \/ 2002.\nwith \"localized\" pages, while queries without any geographical references are best answered with \"broad\" pages [5].\nText mining methods have been successfully used in GIR to detect and disambiguate geographical references in text [9], or even to infer geographic scopes for documents [1, 13].\nHowever, this body of research has been focused on processing Web pages and full-text documents.\nSearch engine queries are more difficult to handle, in the sense that they are very short and with implicit and subjective user intents.\nHow to automatically understand what the user intended, given a search query, without putting the burden in the user himself, remains an open text mining problem.\nSimple named entity recognition (NER) algorithms, based on dictionary look-ups for geographical names, may introduce high false positives for queries whose location names do not constitute place references.\nFor example the query \"Denzel Washington\" contains the place name \"Washington,\" but the query is not geographical.\nQueries can also be geographic without containing any explicit reference to locations at the dictionary.\nIn these 
cases, place name extraction and disambiguation does not give any results, and we need to access other sources of information.\nThis paper proposes simple and yet effective techniques for handling place references over queries.\nWhen this is not possible, i.e. the query does not contain any place references, we try using information from documents matching the query, exploiting geographic scopes previously assigned to these documents.\nDisambiguating place references is one of the most important aspects.\nWe use a search procedure that combines textual patterns with geographical names defined at an ontology, and we use heuristics to disambiguate the discovered references (e.g. more important places are preferred).\nDisambiguation results in having the where term, from the triple above, associated with the most likely corresponding concepts from the ontology.\nWhen we cannot detect any locations, we attempt to use geographical scopes previously inferred for the documents at the top search results.\nBy doing this, we assume that the most frequent geographical scope in the results should correspond to the geographical context implicit in the query.\nExperiments with CLEF topics [4] and sample queries from a Web search engine show that the proposed methods are accurate, and may have applications in improving search results.\nWe first formalize the problem and describe related work to our research.\nNext, we describe our approach for handling place names in queries, starting with the general approach for disambiguating place references over textual strings, then presenting the method for splitting a query into a <what, relation, where> triple, and finally discussing the technique for exploiting geographic scopes previously assigned to documents in the result set.\nSection 4 presents evaluation results.\nFinally, we give some conclusions and directions for future research.\n5.\nCONCLUSIONS\nThis paper presented simple approaches for handling place references in search 
engine queries.\nThis is a hard text mining problem, as queries are often ambiguous or underspecify information needs.\nHowever, our initial experiments indicate that for many queries, the referenced places can be determined effectively.\nUnlike the techniques proposed by Wang et.\nal. [17], we mainly focused on recognizing spatial relations and associating place names to ontology concepts.\nThe proposed techniques were employed in the prototype system that we used for participating in GeoCLEF 2006.\nIn queries where a geographical reference is not explicitly mentioned, we propose to use the results for the query, exploiting geographic scopes previously assigned to these documents.\nIn the future, we plan on doing a careful evaluation of this last approach.\nAnother idea that we would like to test involves the integration of a spelling correction mechanism [12] into Algorithm 1, so that incorrectly spelled place references can be matched to ontology concepts.\nThe proposed techniques for handling geographic queries can have many applications in improving GIR systems or even general purpose search engines.\nA different GIR strategy can involve query expansion, by taking the where terms from the query and using the ontology to add names from neighboring locations.\nIn a general purpose search engine, and if a local query is detected, we can forward users to a GIR system, which should be better suited for properly handling the query.\nThe regular Google search interface already does this, by presenting a link to Google Local when it detects a geographical query.","lvl-2":"Handling Locations in Search Engine Queries\nABSTRACT\nThis paper proposes simple techniques for handling place references in search engine queries, an important aspect of geographical information retrieval.\nWe address not only the detection, but also the disambiguation of place references, by matching them explicitly with concepts at an ontology.\nMoreover, when a query does not reference any 
locations, we propose to use information from documents matching the query, exploiting geographic scopes previously assigned to these documents. Evaluation experiments, using topics from CLEF campaigns and logs from real search engine queries, show the effectiveness of the proposed approaches.

1. INTRODUCTION

Search engine queries are often associated with geographical locations, either explicitly (i.e. a location reference is given as part of the query) or implicitly (i.e. the location reference is not present in the query string, but the query clearly has a local intent [17]). One of the concerns of geographical information retrieval (GIR) lies in appropriately handling such queries, bringing better targeted search results and improving user satisfaction. Nowadays, GIR is getting increasing attention. Systems that access resources on the basis of geographic context are starting to appear, both in the academic and commercial domains [4, 7]. Accurately and effectively detecting location references in search engine queries is a crucial aspect of these systems, as they are generally based on interpreting geographical terms differently from the others. Detecting locations in queries is also important for general-purpose search engines, as this information can be used to improve ranking algorithms. Queries with a local intent are best answered with "localized" pages, while queries without any geographical references are best answered with "broad" pages [5].

(This research was partially supported by Fundação para a Ciência e Tecnologia, under grants POSI/SRI/40193/2001 and SFRH/BD/10757/2002.)

Text mining methods have been successfully used in GIR to detect and disambiguate geographical references in text [9], or even to infer geographic scopes for documents [1, 13]. However, this body of research has been focused on processing Web pages and full-text documents. Search engine queries are more difficult to handle, in the sense that
they are very short and carry implicit and subjective user intents. Moreover, the data is also noisier and more versatile in form, and we have to deal with misspellings, multilingualism and acronyms. How to automatically understand what the user intended, given a search query, without putting the burden on the user himself, remains an open text mining problem. Key challenges in handling locations over search engine queries include their detection and disambiguation, the ranking of possible candidates, the detection of false positives (i.e. not all contained location names refer to geographical locations), and the detection of locations implied by the context of the query (i.e. when the query does not explicitly contain a place reference but it is nonetheless geographical). Simple named entity recognition (NER) algorithms, based on dictionary look-ups for geographical names, may introduce high false positives for queries whose location names do not constitute place references. For example, the query "Denzel Washington" contains the place name "Washington," but the query is not geographical. Queries can also be geographic without containing any explicit reference to locations at the dictionary. In these cases, place name extraction and disambiguation does not give any results, and we need to access other sources of information. This paper proposes simple and yet effective techniques for handling place references over queries. Each query is split into a triple <what, relation, where>, where what specifies the non-geographic aspect of the information need, where specifies the geographic areas of interest, and relation specifies a spatial relationship connecting what and where. When this is not possible, i.e.
the query does not contain any place references, we try using information from documents matching the query, exploiting geographic scopes previously assigned to these documents. Disambiguating place references is one of the most important aspects. We use a search procedure that combines textual patterns with geographical names defined at an ontology, and we use heuristics to disambiguate the discovered references (e.g. more important places are preferred). Disambiguation results in having the where term, from the triple above, associated with the most likely corresponding concepts from the ontology. When we cannot detect any locations, we attempt to use geographical scopes previously inferred for the documents at the top search results. By doing this, we assume that the most frequent geographical scope in the results should correspond to the geographical context implicit in the query. Experiments with CLEF topics [4] and sample queries from a Web search engine show that the proposed methods are accurate, and may have applications in improving search results.

The rest of this paper is organized as follows. We first formalize the problem and describe work related to our research. Next, we describe our approach for handling place names in queries, starting with the general approach for disambiguating place references over textual strings, then presenting the method for splitting a query into a <what, relation, where> triple, and finally discussing the technique for exploiting geographic scopes previously assigned to documents in the result set. Section 4 presents evaluation results. Finally, we give some conclusions and directions for future research.

2. CONCEPTS AND RELATED WORK

Search engine performance depends on the ability to capture the most likely meaning of a query as intended by the user [16]. Previous studies showed that a significant portion of the queries submitted to search engines are geographic [8, 14]. A recent enhancement to search
engine technology is the addition of geographic reasoning, combining geographic information systems and information retrieval in order to build search engines that find information associated with given locations. The ability to recognize and reason about the geographical terminology, given in the text documents and user queries, is a crucial aspect of these geographical information retrieval (GIR) systems [4, 7]. Extracting and distinguishing different types of entities in text is usually referred to as Named Entity Recognition (NER). For at least a decade, this has been an important text mining task, and a key feature of the Message Understanding Conferences (MUC) [3]. NER has been successfully automated with near-human performance, but the specific problem of recognizing geographical references presents additional challenges [9]. When handling named entities with a high level of detail, ambiguity problems arise more frequently. Ambiguity in geographical references is bi-directional [15]. The same name can be used for more than one location (referent ambiguity), and the same location can have more than one name (reference ambiguity). The former has another twist, since the same name can be used for locations as well as for other classes of entities, such as persons or company names (referent class ambiguity). Besides the recognition of geographical expressions, GIR also requires that the recognized expressions be classified and grounded to unique identifiers [11]. Grounding the recognized expressions (e.g.
associating them to coordinates or concepts at an ontology) assures that they can be used in more advanced GIR tasks. Previous works have addressed the tagging and grounding of locations in Web pages, as well as the assignment of geographic scopes to these documents [1, 7, 13]. This is a complementary aspect to the techniques described in this paper, since if we have the Web pages tagged with location information, a search engine can conveniently return pages with a geographical scope related to the scope of the query. The task of handling geographical references over documents is however considerably different from that of handling geographical references over queries. In our case, queries are usually short and often do not constitute proper sentences. Text mining techniques that make use of context information are difficult to apply with high accuracy. Previous studies have also addressed the use of text mining and automated classification techniques over search engine queries [16, 10]. However, most of these works did not consider place references or geographical categories. Again, these previously proposed methods are difficult to apply to the geographic domain. Gravano et al. studied the classification of Web queries into two types, namely local and global [5]. They defined a query as local if its best matches on a Web search engine are likely to be local pages, such as "houses for sale." A number of classification algorithms have been evaluated using search engine queries. However, their experimental results showed that only a rather low precision and recall could be achieved. The problem addressed in this paper is also slightly different, since we are trying not only to detect local queries but also to disambiguate the location of interest. Wang et al.
proposed to go further than detecting local queries, by also disambiguating the implicit location of interest [17]. The proposed approach works for both queries containing place references and queries not containing them, by looking for dominant geographic references over query logs and text from search results. In comparison, we propose simpler techniques based on matching names from a geographic ontology. Our approach looks for spatial relationships at the query string, and it also associates the place references to ontology concepts. In the case of queries not containing explicit place references, we use geographical scopes previously assigned to the documents, whereas Wang et al. proposed to extract locations from the text of the top search results. There are nowadays many geocoding, reverse-geocoding, and mapping services on the Web that can be easily integrated with other applications. Geocoding is the process of locating points on the surface of the Earth from alphanumeric addressing data. Taking a string with an address, a geocoder queries a geographical information system and returns interpolated coordinate values for the given location. Instead of computing coordinates for a given place reference, the technique described in this paper aims at assigning references to the corresponding ontology concepts. However, if each concept at the ontology contains associated coordinate information, the approach described here could also be used to build a geocoding service. Most such existing services are commercial in nature, and there are no technical publications describing them. A number of commercial search services have also started to support location-based searches. Google Local, for instance, initially required the user to specify a location qualifier separately from the search query. More recently, it added location look-up capabilities that extract locations from query strings. For example, in a search for "Pizza Seattle", Google Local
returns "local results for pizza near Seattle, WA." However, the intrinsics of their solution are not published, and their approach also does not handle location-implicit queries. Moreover, Google Local does not take spatial relations into account. In sum, there are already some studies on tagging geographical references, but Web queries pose additional challenges which have not been addressed. In this paper, we explain the proposed solutions for the identified problems.

3. HANDLING QUERIES IN GIR SYSTEMS

Most GIR queries can be parsed into a <what, relation, where> triple, where the what term is used to specify the general non-geographical aspect of the information need, the where term is used to specify the geographical areas of interest, and the relation term is used to specify a spatial relationship connecting what and where. While the what term can assume any form, in order to reflect any information need, the relation and where terms should be part of a controlled vocabulary. In particular, the relation term should refer to a well-known geographical relation that the underlying GIR system can interpret (e.g. "near" or "contained at"), and the where term should be disambiguated into a set of unique identifiers, corresponding to concepts at the ontology. Different systems can use alternative schemes to take input queries from the users. Three general strategies can be identified, and GIR systems often support more than one of the following schemes:

Figure 1: Strategies for processing queries in Geographical Information Retrieval systems.

1. Input to the system is a textual query string. This is the hardest case, since we need to separate the query into the three different components, and then we need to disambiguate the where term into a set of unique identifiers.

2. Input to the system is provided in two separate strings, one concerning the what term, and the other concerning the where. The relation term can be either fixed (e.g.
always assume the "near" relation), specified together with the where string, or provided separately by the users from a set of possible choices. Although there is no need for separating the query string into the different components, we still need to disambiguate the where term into a set of unique identifiers.

3. Input to the system is provided through a query string together with an unambiguous description of the geographical area of interest (e.g. a sketch on a map, spatial coordinates or a selection from a set of possible choices). No disambiguation is required, and therefore the techniques described in this paper do not have to be applied.

The first two schemes depend on place name disambiguation. Figure 1 illustrates how we propose to handle geographic queries in these first two schemes. A common component is the algorithm for disambiguating place references into corresponding ontology concepts, which is described next.

3.1 From Place Names to Ontology Concepts

A required task in handling GIR queries consists of associating a string containing a geographical reference with the set of corresponding concepts at the geographic ontology. We propose to do this according to the pseudo-code listed at Algorithm 1. The algorithm considers the cases where a second (or even more than one) location is given to qualify a first (e.g. "Paris, France"). It makes recursive calls to match each location, and relies on hierarchical part-of relations to detect if two locations share a common hierarchy path. One of the provided locations should be more general and the other more specific, in the sense that there must exist a part-of relationship among the associated concepts at the ontology (either direct or transitive). The most specific location is a sub-region of the most general, and the algorithm returns the most specific one (i.e.
for "Paris, France" the algorithm returns the ontology concept associated with Paris, the capital city of France).

Algorithm 1 (inputs: ontology O, geographical name GN):
1: L = An empty list
2: INDEX = The position in GN of the first occurrence of a comma, semi-colon or bracket character
3: if INDEX is defined then
4:   GN1 = The substring of GN from position 0 to INDEX
5:   GN2 = The substring of GN from INDEX + 1 to length(GN)
6:   L1 = Algorithm1(O, GN1)
7:   L2 = Algorithm1(O, GN2)
8:   for each C1 in L1 do
9:     for each C2 in L2 do
10:      if C1 is an ancestor of C2 at O then
11:        L = The list L after adding element C2
12:      else if C1 is a descendant of C2 at O then
13:        L = The list L after adding element C1
14:      end if
15:    end for
16:  end for
17: else
18:  GN = The string GN after removing case and diacritics
19:  if GN contains a geographic type qualifier then
20:    T = The substring of GN containing the type qualifier
21:    GN = The substring of GN with the type qualifier removed
22:    L = The list of concepts from O with name GN and type T
23:  else
24:    L = The list of concepts from O with name GN
25:  end if
26: end if
27: return The list L

We also consider the cases where a geographical type expression is used to qualify a given name (e.g. "city of Lisbon" or "state of New York"). For instance, the name "Lisbon" can correspond to many different concepts at a geographical ontology, and type qualifiers can provide useful information for disambiguation. The considered type qualifiers should also be described at the ontologies (e.g. each geographic concept should be associated with a type that is also defined at the ontology, such as country, district or city). Ideally, the geographical reference provided by the user should be disambiguated into a single ontology concept. However, this is not always possible, since the user may not provide all the required information (i.e.
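To make the recursive matching in Algorithm 1 concrete, the following is a minimal Python sketch of the same idea. The toy ontology, the part-of table, and all identifiers are invented for illustration; they are not the paper's actual data structures, and case/diacritics removal and type-qualifier handling are elided.

```python
# Toy data standing in for the geographic ontology: part-of relations
# between concepts, and the concepts associated with each name.
PART_OF = {"paris_fr": "france", "paris_tx": "texas", "texas": "usa"}
NAMES = {"paris": ["paris_fr", "paris_tx"], "france": ["france"]}

def is_ancestor(a, b):
    """True if concept a is a direct or transitive part-of ancestor of b."""
    while b in PART_OF:
        b = PART_OF[b]
        if a == b:
            return True
    return False

def disambiguate(gn):
    """Return the candidate ontology concepts for place reference gn."""
    if "," in gn:  # qualified reference, e.g. "Paris, France": recurse on both parts
        first, rest = gn.split(",", 1)
        l1, l2 = disambiguate(first.strip()), disambiguate(rest.strip())
        result = []
        for c1 in l1:
            for c2 in l2:
                if is_ancestor(c1, c2):   # keep the most specific concept
                    result.append(c2)
                elif is_ancestor(c2, c1):
                    result.append(c1)
        return result
    # case/diacritics normalization and type-qualifier handling elided
    return NAMES.get(gn.lower(), [])

print(disambiguate("Paris, France"))  # ['paris_fr']
```

As in the paper's example, the qualified reference "Paris, France" resolves to the single concept for the French capital, while "Paris" alone would return all homonymous candidates.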
a type expression or a second qualifying location). The output is therefore a list with the possible concepts being referred to by the user. In a final step, we propose to sort this list, so that if a single concept is required as output, we can use the one that is ranked higher. The sorting procedure reflects the likelihood of each concept being indeed the one referred to. We propose to rank concepts according to the following heuristics:

1. The geographical type expression associated with the ontology concept. For the same name, a country is more likely to be referenced than a city, and in turn a city is more likely to be referenced than a street.

2. Number of ancestors at the ontology. Top places at the ontology tend to be more general, and are therefore more likely to be referenced in search engine queries.

3. Population count. Highly populated places are better known, and therefore more likely to be referenced in queries.

4. Population counts from direct ancestors at the ontology. Sub-regions of highly populated places are better known, and also more likely to be referenced in search engine queries.

5. Occurrence frequency over Web documents (e.g.
Google counts) for the geographical names. Place names that occur more frequently over Web documents are also more likely to be referenced in search engine queries.

6. Number of descendants at the ontology. Places that have more sub-regions tend to be more general, and are therefore more likely to be mentioned in search engine queries.

7. String size for the geographical names. Short names are more likely to be mentioned in search engine queries.

Algorithm 1, plus the ranking procedure, can already handle GIR queries where the where term is given separately from the what and relation terms. However, if the query is given in a single string, we require the identification of the associated <what, relation, where> triple, before disambiguating the where term into the corresponding ontology concepts. This is described in the following section.

3.2 Handling Single Query Strings

Algorithm 2 provides the mechanism for separating a query string into a <what, relation, where> triple. It uses Algorithm 1 to find the where term, disambiguating it into a set of ontology concepts. The algorithm starts by tokenizing the query string into individual words, also taking care of removing case and diacritics. We have a simple tokenizer that uses the space character as a word delimiter, but we could also have a tokenization approach similar to the proposal of Wang et al., which relies on Web occurrence statistics to avoid breaking collocations [17]. In the future, we plan on testing whether this different tokenization scheme can improve results. Next, the algorithm tests different possible splits of the query, building the what, relation and where terms through concatenations of the individual tokens. The relation term is matched against a list of possible values (e.g.
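One simple way to realize the seven ranking heuristics of Section 3.1 is as a lexicographic sort key, where each criterion breaks ties left by the previous ones. The sketch below is an assumption about how this could be implemented; the field names and the numeric type ordering are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

# Heuristic 1: for the same name, a country outranks a city, a city a street.
TYPE_RANK = {"country": 0, "district": 1, "city": 2, "street": 3}

@dataclass
class Concept:
    name: str
    type: str
    n_ancestors: int = 0          # heuristic 2: fewer ancestors = more general
    population: int = 0           # heuristic 3
    ancestor_population: int = 0  # heuristic 4
    web_frequency: int = 0        # heuristic 5 (e.g. Web occurrence counts)
    n_descendants: int = 0        # heuristic 6

def rank_key(c):
    # Lexicographic key: earlier heuristics dominate, later ones break ties.
    return (TYPE_RANK.get(c.type, 99),
            c.n_ancestors,
            -c.population,
            -c.ancestor_population,
            -c.web_frequency,
            -c.n_descendants,
            len(c.name))          # heuristic 7: shorter names first

candidates = [Concept("Lisbon", "street"), Concept("Lisbon", "city")]
best = sorted(candidates, key=rank_key)[0]
print(best.type)  # city
```

Sorting by this key and taking the first element yields the single most likely concept when one is required as output.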
"near," "at," "around," or "south of"), corresponding to the operators that are supported by the GIR system. Note that it is also the responsibility of the underlying GIR system to interpret the actual meaning of the different spatial relations. Algorithm 1 is used to check whether a where term constitutes a geographical reference or not. We also check if the last word in the what term belongs to a list of exceptions, containing for instance first names of people in different languages. This ensures that a query like "Denzel Washington" is appropriately handled. If the algorithm succeeds in finding valid relation and where terms, then the corresponding triple is returned. Otherwise, we return a triple with the what term equaling the query string, and the relation and where terms set as empty. If the entire query string constitutes a geographical reference, we return a triple with the what term set to empty, the where term equaling the query string, and the relation term set to "DEFINITION" (i.e.
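A rough Python sketch of this splitting behaviour follows. It is not the paper's actual Algorithm 2: the relation vocabulary, the exception list and the `lookup` stub standing in for Algorithm 1 are all invented for illustration.

```python
RELATIONS = {"near", "at", "around", "south of"}  # supported spatial operators
EXCEPTIONS = {"denzel"}        # e.g. first names, to catch "Denzel Washington"
GAZETTEER = {"washington", "lisbon", "angola"}    # stand-in for the ontology

def lookup(where):
    """Stub for Algorithm 1: return matching concepts for a place name."""
    return [where] if where in GAZETTEER else []

def split_query(q):
    tokens = q.lower().split()
    n = len(tokens)
    for i in range(n):           # end of the what term
        for j in range(i, n):    # end of the relation term
            what = " ".join(tokens[:i])
            relation = " ".join(tokens[i:j])
            where = " ".join(tokens[j:])
            if relation and relation not in RELATIONS:
                continue
            if i > 0 and tokens[i - 1] in EXCEPTIONS:
                continue         # last what-word is a known first name
            if where and lookup(where):
                if not relation:
                    relation = "DEFINITION" if not what else "CONTAINED-AT"
                return (what, relation, where)
    return (q.lower(), "", "")   # no valid place reference found

print(split_query("hotels near Lisbon"))  # ('hotels', 'near', 'lisbon')
print(split_query("Denzel Washington"))   # ('denzel washington', '', '')
```

As in the text, a query that is entirely a place reference yields the "DEFINITION" relation, and the exception list keeps "Denzel Washington" from being treated as geographical.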
these queries should be answered with information about the given place references). The algorithm also handles query strings where more than one geographical reference is provided, using "and" or an equivalent preposition, together with a recursive call to Algorithm 2. A query like "Diamond trade in Angola and South Africa" is therefore appropriately handled. Finally, if the geographical reference in the query is complemented with an expression similar to "and its surroundings," the spatial relation (which is assumed to be "CONTAINED-AT" if none is provided) is changed to "NEAR".

Algorithm 2 (input: query string Q; output: a <what, relation, where> triple):
1: Q = The string Q after removing case and diacritics
2: TOKENS[0..N] = An array of strings with the individual words of Q
3: N = The size of the TOKENS array
4: for INDEX = 0 to N do
5:   if INDEX > 0 then
6:     WHAT = Concatenation of TOKENS[0..INDEX - 1]
7:     LASTWHAT = TOKENS[INDEX - 1]
8:   else
9:     WHAT = An empty string
10:    LASTWHAT = An empty string
...
26: if TESTGEO AND Algorithm1(WHERE) <> EMPTY then
27:   if WHERE ends with "AND SURROUNDINGS" then
28:     RELATION = The string "NEAR"
29:     WHERE = The substring of WHERE with "AND SURROUNDINGS" removed
30:   end if
31:   if WHAT ends with "AND" (or similar) then
32:     <WHAT, RELATION, WHERE2> = Algorithm2(WHAT)
33:     WHERE = Concatenation of WHERE with WHERE2
34:   end if
35:   if RELATION = An empty string then
36:     if WHAT = An empty string then
37:       RELATION = The string "DEFINITION"
38:     else
39:       RELATION = The string "CONTAINED-AT"
40:     end if
41:   end if
42: else
43:   WHAT = The string Q
44:   WHERE = An empty string
45:   RELATION = An empty string
46: end if
47: end for
48: return <WHAT, RELATION, WHERE>

3.3 From Search Results to Query Locality

The procedures given so far are appropriate for handling queries where a place reference is explicitly mentioned. However, the fact that a query can be associated with a geographical context may not be directly observable in the query itself, but rather from the
results returned. For instance, queries like "recommended hotels for SIGIR 2006" or "SeaFair 2006 lodging" can be seen to refer to the city of Seattle. Although they do not contain an explicit place reference, we expect results to be about hotels in Seattle. In the cases where a query does not contain place references, we start by assuming that the top results from a search engine represent the most popular and correct context and usage for the query. We then propose to use the distributional characteristics of geographical scopes previously assigned to the documents corresponding to these top results. In a previous work, we presented a text mining approach for assigning documents with corresponding geographical scopes, defined at an ontology, that worked as an offline preprocessing stage in a GIR system [13]. This pre-processing step is a fundamental stage of GIR, and it is reasonable to assume that this kind of information would be available on any system. Similarly to Wang et al., we could also attempt to process the results on-line, in order to detect place references in the documents [17]. However, a GIR system already requires the offline stage. For the top N documents given at the results, we check the geographic scopes that were assigned to them. If a significant portion of the results are assigned to the same scope, then the query can be seen to be related to the corresponding geographic concept. This assumption could even be relaxed, for instance by checking if the documents belong to scopes that are hierarchically related.

Table 1: Example topics from the GeoCLEF evaluation campaigns and the corresponding <what, relation, where> triples.

4. EVALUATION EXPERIMENTS

We used three different ontologies in evaluation experiments, namely the Getty thesaurus of geographic names (TGN) [6] and two specific resources developed at our group, here referred to as the PT and ML ontologies [2]. TGN and ML include global geographical information in
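The implicit-locality heuristic of Section 3.3 amounts to a majority vote over the scopes of the top results. The sketch below illustrates this; the 50% threshold is an assumption for illustration, not a value from the paper.

```python
from collections import Counter

def implicit_scope(result_scopes, threshold=0.5):
    """result_scopes: geographic scopes previously assigned to the top-N
    documents (None where a document has no scope). Returns the dominant
    scope if it covers at least `threshold` of the results, else None."""
    scopes = [s for s in result_scopes if s is not None]
    if not scopes:
        return None
    scope, count = Counter(scopes).most_common(1)[0]
    return scope if count / len(result_scopes) >= threshold else None

# e.g. "SeaFair 2006 lodging": most of the top 20 results carry scope "Seattle"
top20 = ["Seattle"] * 16 + ["Portland"] * 2 + [None] * 2
print(implicit_scope(top20))  # Seattle
```

The relaxation mentioned in the text, accepting scopes that are hierarchically related, would replace the exact-equality count with a walk over the ontology's part-of relations.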
multiple languages (although TGN is considerably larger), while the PT ontology focuses on the Portuguese territory with a high detail. Place types are also different across ontologies, as for instance PT includes street names and postal addresses, whereas ML only goes to the level of cities. The reader should refer to [2, 6] for a complete description of these resources. Our initial experiments used Portuguese and English topics from the GeoCLEF 2005 and 2006 evaluation campaigns. Topics in GeoCLEF correspond to query strings that can be used as input to a GIR system [4]. ImageCLEF 2006 also included topics specifying place references, and participants were encouraged to run their GIR systems on them. Our experiments also considered this dataset. For each topic, we measured if Algorithm 2 was able to find the corresponding <what, relation, where> triple. The ontologies used in this experiment were the TGN and ML, as topics were given in multiple languages and covered the whole globe.

Table 2: Summary of results over CLEF topics.

Table 1 illustrates some of the topics, and Table 2 summarizes the obtained results. The tables show that the proposed technique adequately handles most of these queries. A manual inspection of the ontology concepts that were returned for each case also revealed that the where term was being correctly disambiguated. Note that the TGN ontology indeed added some ambiguity, as for instance names like "Madrid" can correspond to many different places around the globe. It should also be noted that some of the considered topics are very hard for an automated system to handle. Some of them were ambiguous (e.g. in "Japanese rice imports," the query can be said to refer either to rice imports in Japan or to imports of Japanese rice), and others contained no direct geographical references (e.g.
cities near active volcanoes). Besides these very hard cases, we also missed some topics due to their usage of place adjectives and specific regions that are not defined at the ontologies (e.g. environmental concerns around the Scottish Trossachs). In a second experiment, we used a sample of around 100,000 real search engine queries. The objective was to see if a significant number of these queries were geographical in nature, also checking that the algorithm did not produce many mistakes by classifying a query as geographical when that was not the case. The Portuguese ontology was used in this experiment, and queries were taken from the logs of a Portuguese Web search engine available at www.tumba.pt.

Table 3: Results from an experiment with search engine logs.

Table 3 summarizes the obtained results. Many queries were indeed geographical (around 3.4%, although previous studies reported values above 14% [8]). A manual inspection showed that the algorithm did not produce many false positives, and the geographical queries were indeed correctly split into correct <what, relation, where> triples. The few mistakes we encountered were related to place names that are more frequently used in other contexts (e.g. in "Teófilo Braga" we have the problem that "Braga" is a Portuguese district, while "Teófilo Braga" was a well-known Portuguese writer and politician). The addition of more names to the exception list can provide a workaround for most of these cases. We also tested the procedure for detecting queries that are implicitly geographical with a small sample of queries from the logs. For instance, for the query "Estádio do Dragão" (i.e.
the home stadium of a soccer team from Porto), the correct geographical context can be discovered from the analysis of the results (more than 75% of the top 20 results are assigned the scope "Porto"). For future work, we plan on using a larger collection of queries to evaluate this aspect. Besides queries from the search engine logs, we also plan on using the names of well-known buildings, monuments and other landmarks, as they have a strong geographical connotation. Finally, we also made a comparative experiment with two popular geocoders, Maporama and Microsoft's MapPoint. The objective was to compare Algorithm 1 with other approaches, in terms of being able to correctly disambiguate a string with a place reference.

Table 4: Results from a comparison with geocoding services.

The Portuguese ontology was used in this experiment, taking as input the names of civil parishes from the Portuguese municipalities of Lisbon and Porto, and checking if the systems were able to disambiguate the full name (e.g. "Campo Grande, Lisboa" or "Foz do Douro, Porto") into the correct geocode. We specifically measured whether our approach was better at unambiguously returning geocodes given the place reference (i.e. returning the single correct code), and at providing results rapidly. Table 4 shows the obtained results, and the accuracy of our method seems comparable to the commercial geocoders. Note that for Maporama and MapPoint, the times given in Table 4 include fetching results from the Web, but we have no direct way of accessing the geocoding algorithms (in both cases, fetching static content from the Web servers takes around 125 milliseconds). Although our approach cannot unambiguously return the correct geocode in most cases (only 20 out of a total of 68 cases), it nonetheless returns results that a human user can disambiguate (e.g.
for "Madalena, Lisboa" we return both a street and a civil parish), as opposed to the other systems, which often did not produce results. Moreover, if we consider the top geocode according to the ranking procedure described in Section 3.1, or if we use a type qualifier in the name (e.g. "civil parish of Campo Grande, Lisboa"), our algorithm always returns the correct geocode.

5. CONCLUSIONS

This paper presented simple approaches for handling place references in search engine queries. This is a hard text mining problem, as queries are often ambiguous or underspecify information needs. However, our initial experiments indicate that for many queries, the referenced places can be determined effectively. Unlike the techniques proposed by Wang et al. [17], we mainly focused on recognizing spatial relations and associating place names to ontology concepts. The proposed techniques were employed in the prototype system that we used for participating in GeoCLEF 2006. In queries where a geographical reference is not explicitly mentioned, we propose to use the results for the query, exploiting geographic scopes previously assigned to these documents. In the future, we plan on doing a careful evaluation of this last approach. Another idea that we would like to test involves the integration of a spelling correction mechanism [12] into Algorithm 1, so that incorrectly spelled place references can be matched to ontology concepts. The proposed techniques for handling geographic queries can have many applications in improving GIR systems or even general-purpose search engines. After place references are appropriately disambiguated into ontology concepts, a GIR system can use them to retrieve relevant results, through the use of appropriate index structures (e.g.
indexing the spatial coordinates associated with ontology concepts) and provided that the documents are also assigned to scopes corresponding to ontology concepts. A different GIR strategy can involve query expansion, by taking the where terms from the query and using the ontology to add names from neighboring locations. In a general-purpose search engine, if a local query is detected, we can forward users to a GIR system, which should be better suited for properly handling the query. The regular Google search interface already does this, by presenting a link to Google Local when it detects a geographical query.
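The query-expansion strategy mentioned above (adding names of neighboring locations, taken from the ontology, to the where term) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the adjacency table, place names and function names are invented stand-ins for lookups against the geographic ontology described earlier.

```python
# Toy stand-in for the geographic ontology: place name -> adjacent places.
ADJACENT = {
    "Campo Grande": {"Alvalade", "Lumiar"},
    "Alvalade": {"Campo Grande"},
    "Lumiar": {"Campo Grande"},
}

def expand_where_term(where_term: str) -> list[str]:
    """Return the place name followed by its neighbors (sorted for stability)."""
    return [where_term] + sorted(ADJACENT.get(where_term, set()))

def expand_query(what_terms: list[str], where_term: str) -> str:
    """Build an expanded keyword query from the <what, where> parts."""
    places = expand_where_term(where_term)
    return " ".join(what_terms) + " (" + " OR ".join(places) + ")"

print(expand_query(["hotels"], "Campo Grande"))
# hotels (Campo Grande OR Alvalade OR Lumiar)
```

Expanding the where term as a disjunction keeps documents assigned to neighboring scopes retrievable, at the cost of some precision.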
Distance Measures for MPEG-7-based Retrieval
Horst Eidenberger
Vienna University of Technology, Institute of Software Technology and Interactive Systems, Favoritenstrasse 9-11, A-1040 Vienna, Austria
Tel. +43-1-58801-18853
eidenberger@ims.tuwien.ac.at

ABSTRACT
In visual information retrieval the careful choice of suitable proximity measures is a crucial success factor. The evaluation presented in this paper aims at showing that the distance measures suggested by the MPEG-7 group for the visual descriptors can be beaten by general-purpose measures. Eight visual MPEG-7 descriptors were selected and 38 distance measures implemented. Three media collections were created and assessed, performance indicators developed and more than 22500 tests performed. Additionally, a quantisation model was developed to be able to use predicate-based distance measures on continuous data as well. The evaluation shows that the distance measures recommended in the MPEG-7 standard are among the best but that other measures perform even better.

Categories and Subject Descriptors
H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - Information filtering, Query formulation, Retrieval models.

General Terms
Algorithms, Measurement, Experimentation, Performance, Theory.

1. INTRODUCTION
The MPEG-7 standard defines - among others - a set of descriptors for visual media. Each descriptor consists of a feature extraction mechanism, a description (in binary and XML format) and guidelines that define how to apply the descriptor to different kinds of media (e.g. to temporal media). The MPEG-7 descriptors have been carefully designed to meet - partially complementary - requirements of different application domains: archival, browsing, retrieval, etc.
[9]. In the following, we will exclusively deal with the visual MPEG-7 descriptors in the context of media retrieval. The visual MPEG-7 descriptors fall into five groups: colour, texture, shape, motion and others (e.g. face description) and sum up to 16 basic descriptors. For retrieval applications, a rule for each descriptor is mandatory that defines how to measure the similarity of two descriptions. Common rules are distance functions, like the Euclidean distance and the Mahalanobis distance. Unfortunately, the MPEG-7 standard does not include distance measures in the normative part, because it was not designed to be (and should not exclusively be understood to be) retrieval-specific. However, the MPEG-7 authors give recommendations on which distance measure to use on a particular descriptor. These recommendations are based on accurate knowledge of the descriptors' behaviour and the description structures. In the present study a large number of successful distance measures from different areas (statistics, psychology, medicine, social and economic sciences, etc.)
were implemented and applied to MPEG-7 data vectors to verify whether or not the recommended MPEG-7 distance measures are really the best for any reasonable class of media objects. From the MPEG-7 tests and the recommendations it does not become clear how many and which distance measures have been tested on the visual descriptors and the MPEG-7 test datasets. The hypothesis is that analytically derived distance measures may be good in general, but only a quantitative analysis is capable of identifying the best distance measure for a specific feature extraction method.

The paper is organised as follows. Section 2 gives a minimum of background information on the MPEG-7 descriptors and distance measurement in visual information retrieval (VIR, see [3], [16]). Section 3 gives an overview of the implemented distance measures. Section 4 describes the test setup, including the test data and the implemented evaluation methods. Finally, Section 5 presents the results per descriptor and over all descriptors.

2. BACKGROUND

2.1 MPEG-7: visual descriptors
The visual part of the MPEG-7 standard defines several descriptors. Not all of them are really descriptors in the sense that they extract properties from visual media. Some of them are just structures for descriptor aggregation or localisation. The basic descriptors are Color Layout, Color Structure, Dominant Color, Scalable Color, Edge Histogram, Homogeneous Texture, Texture Browsing, Region-based Shape, Contour-based Shape, Camera Motion, Parametric Motion and Motion Activity. Other descriptors are based on low-level descriptors or semantic information: Group-of-Frames/Group-of-Pictures Color (based on Scalable Color), Shape 3D (based on 3D mesh information), Motion Trajectory (based on object segmentation) and Face Recognition (based on face extraction). Descriptors for spatiotemporal aggregation and localisation are: Spatial 2D Coordinates, Grid Layout, Region Locator (spatial), Time Series, Temporal
Interpolation (temporal) and SpatioTemporal Locator (combined). Finally, other structures exist for colour spaces, colour quantisation and multiple 2D views of 3D objects. These additional structures allow combining the basic descriptors in multiple ways and on different levels. But they do not change the characteristics of the extracted information. Consequently, structures for aggregation and localisation were not considered in the work described in this paper.

2.2 Similarity measurement on visual data
Generally, similarity measurement on visual information aims at imitating human visual similarity perception. Unfortunately, human perception is much more complex than any of the existing similarity models (it includes perception, recognition and subjectivity). The common approach in visual information retrieval is measuring dissimilarity as distance. Both the query object and the candidate object are represented by their corresponding feature vectors. The distance between these objects is measured by computing the distance between the two vectors. Consequently, the process is independent of the employed querying paradigm (e.g. query by example). The query object may be natural (e.g. a real object) or artificial (e.g. properties of a group of objects). The goal of the measurement process is to express a relationship between the two objects by their distance. Iteration over multiple candidates then allows defining a partial order over the candidates and addressing those in a (to be defined) neighbourhood as being similar to the query object. At this point, it has to be mentioned that in a multi-descriptor environment - especially in MPEG-7 - we are only half way towards a statement on similarity. If multiple descriptors are used (e.g.
a descriptor scheme), a rule has to be defined for how to combine all distances into a global value for each object. Still, distance measurement is the most important first step in similarity measurement. Obviously, the main task of a good distance measure is to reorganise descriptor space in such a way that the media objects with the highest similarity are nearest to the query object. If distance is defined as minimal, the query object is always at the origin of distance space and similar candidates should form clusters around the origin that are as large as possible. Consequently, many well-known distance measures are based on geometric assumptions about descriptor space (e.g. the Euclidean distance is based on the metric axioms). Unfortunately, these measures do not fit ideally with human similarity perception (e.g. due to human subjectivity). To overcome this shortcoming, researchers from different areas have developed alternative models that are mostly predicate-based (descriptors are assumed to contain just binary elements, e.g. Tversky's Feature Contrast Model [17]) and fit better with human perception. In the following, distance measures from both groups of approaches will be considered.

3. DISTANCE MEASURES
The distance measures used in this work have been collected from various areas (Subsection 3.1). Because they work on differently quantised data, Subsection 3.2 sketches a model for unification on the basis of quantitative descriptions. Finally, Subsection 3.3 introduces the distance measures as well as their origin and the idea they implement.

3.1 Sources
Distance measurement is used in many research areas such as psychology, sociology (e.g. comparing test results), medicine (e.g. comparing parameters of test persons), economics (e.g.
comparing balance sheet ratios), etc. Naturally, the character of the data available in these areas differs significantly. Essentially, there are two extreme cases of data vectors (and distance measures): predicate-based (all vector elements are binary, e.g. {0, 1}) and quantitative (all vector elements are continuous, e.g. [0, 1]). Predicates express the existence of properties and represent high-level information, while quantitative values can be used to measure and mostly represent low-level information. Predicates are often employed in psychology, sociology and other human-related sciences, and most predicate-based distance measures were therefore developed in these areas. Descriptions in visual information retrieval are nearly always quantitative (if they do not integrate semantic information). Consequently, mostly quantitative distance measures are used in visual information retrieval. The goal of this work is to compare the MPEG-7 distance measures with the most powerful distance measures developed in other areas. Since MPEG-7 descriptions are purely quantitative but some of the most sophisticated distance measures are defined exclusively on predicates, a model is mandatory that allows the application of predicate-based distance measures to quantitative data. The model developed for this purpose is presented in the next section.

3.2 Quantisation model
The goal of the quantisation model is to redefine, on continuous data, the set operators that are usually used in predicate-based distance measures. The first in visual information retrieval to follow this approach were Santini and Jain, who tried to apply Tversky's Feature Contrast Model [17] to content-based image retrieval [12], [13]. They interpreted continuous data as fuzzy predicates and used fuzzy set operators. Unfortunately, their model suffered from several shortcomings they described in [12], [13] (for example, the quantitative model worked only for one specific version of the original
predicate-based measure). The main idea of the presented quantisation model is that set operators are replaced by statistical functions. In [5] the authors could show that this interpretation of set operators is reasonable. The model offers a solution for the descriptors considered in the evaluation. It is not specific to one distance measure, but can be applied to any predicate-based measure. Below, it will be shown that the model does not only work for predicate data but for quantitative data as well. Each measure implementing the model can be used as a substitute for the original predicate-based measure. Generally, binary properties of two objects (e.g. media objects) can exist in both objects (denoted as a), in just one (b, c) or in none of them (d). The operators needed for these relationships are UNION, MINUS and NOT. In the quantisation model they are replaced as follows (see [5] for further details):

a = X_i ∩ X_j = Σ_k s_k,  where s_k = (x_ik + x_jk)/2 if M − (x_ik + x_jk)/2 ≤ ε₁, else 0

b = X_i − X_j = Σ_k s_k,  where s_k = x_ik − x_jk if M − (x_ik − x_jk) ≤ ε₂, else 0

c = X_j − X_i = Σ_k s_k,  where s_k = x_jk − x_ik if M − (x_jk − x_ik) ≤ ε₂, else 0

d = ¬X_i ∩ ¬X_j = Σ_k s_k,  where s_k = M − (x_ik + x_jk)/2 if (x_ik + x_jk)/2 ≤ ε₁, else 0

with:

X_i = (x_ik),  x_ik ∈ [x_min, x_max],  M = x_max − x_min,
ε₁ = M(1 − 1/(p·µ)) if p·µ ≥ 1, else 0,  where µ = Σ_i Σ_k x_ik / (i·k),
ε₂ = M(1 − 1/(p·σ)) if p·σ ≥ 1, else 0,  where σ² = Σ_i Σ_k (x_ik − µ)² / (i·k),
p ∈ R⁺ \ {0}.

a selects properties that are present in both data vectors (X_i, X_j representing media objects), b and c select properties that are present in just one of them and d selects properties that are present in neither of the two data vectors. Every property is selected by the extent to which it is present (a and d: mean, b and c: difference) and only if the amount to which it is present
exceeds a certain threshold (depending on the mean and standard deviation over all elements of descriptor space). The implementation of these operators is based on one assumption. It is assumed that vector elements measure on an interval scale. That means each element expresses that the measured property is ``more or less'' present (``0'': not at all, ``M'': fully present). This is true for most visual descriptors and all MPEG-7 descriptors. A natural origin as it is assumed here (``0'') is not needed. Introducing p (called the discriminance-defining parameter) for the thresholds ε₁, ε₂ has the positive consequence that a, b, c, d can then be controlled through a single parameter. p is an additional criterion for the behaviour of a distance measure and determines the thresholds used in the operators. It expresses how accurately data items are present (quantisation) and, consequently, how accurately they should be investigated. p can be set by the user or automatically. Interesting are the limits:

1. p → ∞ ⇒ ε₁, ε₂ → M. In this case, all elements (= properties) are assumed to be continuous (high quantisation). In consequence, all properties of a descriptor are used by the operators. The distance measure is then not discriminant for properties.

2. p → 0 ⇒ ε₁, ε₂ → 0. In this case, all properties are assumed to be predicates. In consequence, only binary elements (= predicates) are used by the operators (1-bit quantisation). The distance measure is then highly discriminant for properties.

Between these limits, a distance measure that uses the quantisation model is - depending on p - more or less discriminant for properties. This means it selects a subset of all available description vector elements for distance measurement. For both predicate data and quantitative data it can be shown that the quantisation model is reasonable. If description vectors consist of binary elements only, p should be used as
follows (for example, p can easily be set automatically):

p → 0, e.g. p = min(µ, σ) ⇒ ε₁ = ε₂ = 0

In this case, a, b, c, d measure like the set operators they replace. For example, Table 1 shows their behaviour for two one-dimensional feature vectors X_i and X_j. As can be seen, the statistical measures work like set operators. Actually, the quantisation model works accurately on predicate data for any p ≠ ∞.

Table 1. Quantisation model on predicate vectors.
X_i  X_j  a  b  c  d
(1)  (1)  1  0  0  0
(1)  (0)  0  1  0  0
(0)  (1)  0  0  1  0
(0)  (0)  0  0  0  1

To show that the model is reasonable for quantitative data the following fact is used. It is easy to show that for predicate data some quantitative distance measures degenerate to predicate-based measures. For example, the L1 metric (Manhattan metric) degenerates to the Hamming distance (from [9], without weights):

L1 = Σ_k |x_ik − x_jk| ≡ b + c = Hamming distance

If it can be shown that the quantisation model is able to reconstruct the quantitative measure from the degenerated predicate-based measure, the model is obviously able to extend predicate-based measures to the quantitative domain. This is easy to illustrate. For purely quantitative feature vectors, p should be used as follows (again, p can easily be set automatically):

p → ∞ ⇒ ε₁ = ε₂ = 1

Then, a and d become continuous functions:

M − (x_ik + x_jk)/2 ≤ M ≡ true ⇒ a = Σ_k s_k,  where s_k = (x_ik + x_jk)/2
(x_ik + x_jk)/2 ≤ M ≡ true ⇒ d = Σ_k s_k,  where s_k = M − (x_ik + x_jk)/2

b and c can be made continuous for the following expressions:

M − (x_ik − x_jk) ≤ M ≡ x_ik − x_jk ≥ 0 ⇒ b = Σ_k s_k,  where s_k = x_ik − x_jk if x_ik − x_jk ≥ 0, else 0
M − (x_jk − x_ik) ≤ M ≡ x_jk − x_ik ≥ 0 ⇒ c = Σ_k s_k,  where s_k = x_jk − x_ik if x_jk − x_ik ≥ 0, else 0

b + c = Σ_k s_k,  where s_k = |x_ik − x_jk|
b − c = Σ_k s_k,  where s_k = x_ik − x_jk
c − b = Σ_k s_k,  where s_k = x_jk − x_ik

This means that, for sufficiently high p, every predicate-based distance measure that either does not use b and c, or uses them just as b+c, b−c or c−b, can be transformed into a continuous quantitative distance measure. For example, the Hamming distance (again, without weights):

b + c = Σ_k s_k,  where s_k = |x_ik − x_jk|  ⇒  b + c = Σ_k |x_ik − x_jk| = L1

The quantisation model successfully reconstructs the L1 metric and no distance measure-specific modification has to be made to the model. This demonstrates that the model is reasonable. In the following it will be used to extend successful predicate-based distance measures to the quantitative domain. The major advantages of the quantisation model are: (1) it is application domain independent, (2) the implementation is straightforward, (3) the model is easy to use and, finally, (4) the new parameter p allows controlling the similarity measurement process in a new way (discriminance on property level).

3.3 Implemented measures
For the evaluation described in this work, next to predicate-based measures (based on the quantisation model) and quantitative measures, the distance measures recommended in the MPEG-7 standard were implemented (altogether 38 different distance measures). Table 2 summarises those predicate-based measures that performed best in the evaluation (in sum, 20 predicate-based measures were investigated). For these measures, K is the number of predicates in the data vectors X_i and X_j. In P1, the sum is used for Tversky's f() (as Tversky himself does in [17]) and α, β are weights for elements b and c. In [5] the authors investigated Tversky's Feature Contrast Model and found α=1, β=0 to be the optimum parameters. Some of the predicate-based measures are very simple (e.g.
P2, P4) but have been heavily exploited in psychological research. Pattern difference (P6) - a very powerful measure - is used in the statistics package SPSS for cluster analysis. P7 is a correlation coefficient for predicates developed by Pearson. Table 3 shows the best quantitative distance measures that were used. Q1 and Q2 are metric-based and were implemented as representatives of the entire group of Minkowski distances. The w_i are weights. In Q5, µ_i, σ_i are the mean and standard deviation of the elements of descriptor X_i. In Q6, m is M/2 (= 0.5). Q3, the Canberra metric, is a normalised form of Q1. Similarly, Q4, Clark's divergence coefficient, is a normalised version of Q2. Q6 is a further-developed correlation coefficient that is invariant against sign changes. This measure is used even though its particular properties are of minor importance for this application domain. Finally, Q8 is a measure that takes the differences between adjacent vector elements into account. This makes it structurally different from all other measures. Obviously, one important distance measure is missing. The Mahalanobis distance was not considered, because different descriptors would require different covariance matrices and for some descriptors it is simply impossible to define a covariance matrix. If the identity matrix were used in this case, the Mahalanobis distance would degenerate to a Minkowski distance. Additionally, the recommended MPEG-7 distances were implemented with the following parameters: In the distance measure of the Color Layout descriptor all weights were set to ``1'' (as in all other implemented measures). In the distance measure of the Dominant Color descriptor the following parameters were used: w₁ = 0.7, w₂ = 0.3, α = 1, T_d = 20 (as recommended). In the Homogeneous Texture descriptor's distance all α(k) were set to ``1'' and matching was done rotation- and scale-invariant. Important! Some of the measures presented in this
section are distance measures while others are similarity measures. For the tests, it is important to notice that all similarity measures were inverted to distance measures.

4. TEST SETUP
Subsection 4.1 describes the descriptors (including parameters) and the collections (including ground truth information) that were used in the evaluation. Subsection 4.2 discusses the evaluation method that was implemented and Subsection 4.3 sketches the test environment used for the evaluation process.

4.1 Test data
For the evaluation eight MPEG-7 descriptors were used. All colour descriptors: Color Layout, Color Structure, Dominant Color, Scalable Color; all texture descriptors: Edge Histogram, Homogeneous Texture, Texture Browsing; and one shape descriptor: Region-based Shape. Texture Browsing was used even though the MPEG-7 standard suggests that it is not suitable for retrieval. The other basic shape descriptor, Contour-based Shape, was not used, because it produces structurally different descriptions that cannot be transformed to data vectors with elements measuring on interval scales. The motion descriptors were not used, because they integrate the temporal dimension of visual media and would only be comparable if the basic colour, texture and shape descriptors were aggregated over time. This was not done. Finally, no high-level descriptors were used (Localisation, Face Recognition, etc., see Subsection 2.1), because - in the author's opinion - the behaviour of the basic descriptors on elementary media objects should be evaluated before conclusions on aggregated structures can be drawn.

Table 2. Predicate-based distance measures.
No.  Measure                                   Comment
P1   a − α·b − β·c                             Feature Contrast Model, Tversky 1977 [17]
P2   a                                         No. of co-occurrences
P3   b + c                                     Hamming distance
P4   a / K                                     Russel 1940 [14]
P5   a / (b + c)                               Kulczynski 1927 [14]
P6   b·c / K²                                  Pattern difference [14]
P7   (a·d − b·c) / ((a+b)(a+c)(b+d)(c+d))      Pearson 1926 [11]

The Texture Browsing
descriptions had to be transformed from a five-bin to an eight-bin representation in order that all elements of the descriptor measure on an interval scale. A Manhattan metric was used to measure proximity (see [6] for details). Descriptor extraction was performed using the MPEG-7 reference implementation. In the extraction process each descriptor was applied to the entire content of each media object and the following extraction parameters were used. Colour in Color Structure was quantised to 32 bins. For Dominant Color the colour space was set to YCrCb, 5-bit default quantisation was used and the default value for spatial coherency was used. Homogeneous Texture was quantised to 32 components. Scalable Color values were quantised to sizeof(int)-3 bits and 64 bins were used. Finally, Texture Browsing was used with five components. These descriptors were applied to three media collections with image content: the Brodatz dataset (112 images, 512x512 pixels), a subset of the Corel dataset (260 images, 460x300 pixels, portrait and landscape) and a dataset with coats-of-arms images (426 images, 200x200 pixels). Figure 1 shows examples from the three collections. Designing appropriate test sets for a visual evaluation is a highly difficult task (for example, see the TREC video 2002 report [15]). Of course, for identifying the best distance measure for a descriptor, it should be tested on an infinite number of media objects. But this is not the aim of this study. It is just evaluated whether - for likely image collections - better proximity measures than those suggested by the MPEG-7 group can be found. Collections of this relatively small size were used in the evaluation, because the applied evaluation methods are, above a certain minimum size, invariant against collection size, and for smaller collections it is easier to define a high-quality ground truth. Still, the average ratio of ground truth size to collection size is at least 1:7. In particular, no collection from
the MPEG-7 dataset was used in the evaluation, because the evaluations should show how well the descriptors and the recommended distance measures perform on ``unknown'' material. When the descriptor extraction was finished, the resulting XML descriptions were transformed into a data matrix with 798 lines (media objects) and 314 columns (descriptor elements). To be usable with distance measures that do not integrate domain knowledge, the elements of this data matrix were normalised to [0, 1]. For the distance evaluation - next to the normalised data matrix - human similarity judgement is needed. In this work, the ground truth is built of twelve groups of similar images (four for each dataset). Group membership was rated by humans based on semantic criteria. Table 4 summarises the twelve groups and the underlying descriptions. It has to be noticed that some of these groups (especially 5, 7 and 10) are much harder to find with low-level descriptors than others.

Table 3. Quantitative distance measures.
No.  Measure                                                                  Comment
Q1   Σ_k w_i |x_ik − x_jk|                                                    City block distance (L1)
Q2   Σ_k w_i (x_ik − x_jk)²                                                   Euclidean distance (L2)
Q3   Σ_k |x_ik − x_jk| / (x_ik + x_jk)                                        Canberra metric, Lance, Williams 1967 [8]
Q4   (1/K) Σ_k ((x_ik − x_jk) / (x_ik + x_jk))²                               Divergence coefficient, Clark 1952 [1]
Q5   Σ_k (x_ik − µ_i)(x_jk − µ_j) / √(Σ_k (x_ik − µ_i)² · Σ_k (x_jk − µ_j)²)  Correlation coefficient
Q6   (Σ_k x_ik·x_jk − m·Σ_k x_ik − m·Σ_k x_jk + K·m²) / √((Σ_k x_ik² − 2m·Σ_k x_ik + K·m²)(Σ_k x_jk² − 2m·Σ_k x_jk + K·m²))  Cohen 1969 [2]
Q7   Σ_k x_ik·x_jk / √(Σ_k x_ik² · Σ_k x_jk²)                                 Angular distance, Gower 1967 [7]
Q8   Σ_{k=1..K−1} ((x_ik − x_jk) − (x_i,k+1 − x_j,k+1))²                      Meehl index [10]

Table 4. Ground truth information.
Coll.    No.  Images  Description
Brodatz  1    19      Regular, chequered patterns
         2    38      Dark white noise
         3    33      Moon-like surfaces
         4    35      Water-like surfaces
Corel    5    73      Humans in nature (difficult)
         6    17      Images with snow (mountains, skiing)
         7    76      Animals in nature (difficult)
         8    27      Large coloured flowers
Arms     9    12      Bavarian communal arms
         10   10      All Bavarian arms (difficult)
         11   18      Dark objects / light unsegmented shield
         12   14      Major charges on blue or red shield

4.2 Evaluation method
Usually, retrieval evaluation is performed based on a ground truth with recall and precision (see, for example, [3], [16]). In multi-descriptor environments this leads to a problem, because the resulting recall and precision values are strongly influenced by the method used to merge the distance values for one media object. Even though it is nearly impossible to say how big the influence of a single distance measure was on the resulting recall and precision values, this problem has been almost ignored so far. In Subsection 2.2 it was stated that the major task of a distance measure is to bring the relevant media objects as close to the origin (where the query object lies) as possible. Even in a multi-descriptor environment it is then simple to identify the similar objects in a large distance space. Consequently, it was decided to use, for this evaluation, indicators measuring the distribution in distance space of candidates similar to the query object, instead of recall and precision. Identifying clusters of similar objects (based on the given ground truth) is relatively easy, because the resulting distance space for one descriptor and any distance measure is always one-dimensional. Clusters are found by searching from the origin of distance space to the first similar object, grouping all following similar objects in the cluster, breaking off the cluster with the first un-similar object and so forth. For the evaluation two indicators were defined. The first measures the average distance of all cluster means to the origin:

µ_d = ( Σ_i^{no_clusters} ( Σ_j^{cluster_size_i} distance_ij / cluster_size_i ) / no_clusters ) / avg_distance
where distance_ij is the distance value of the j-th element in the i-th cluster,

avg_distance = Σ_i^{no_clusters} Σ_j^{cluster_size_i} distance_ij / Σ_i^{no_clusters} cluster_size_i,

no_clusters is the number of found clusters and cluster_size_i is the size of the i-th cluster. The resulting indicator is normalised by the distribution characteristics of the distance measure (avg_distance). Additionally, the standard deviation is used. In the evaluation process this measure turned out to produce valuable results and to be relatively robust against parameter p of the quantisation model. In Subsection 3.2 we noted that p affects the discriminance of a predicate-based distance measure: the smaller p is set, the larger are the resulting clusters, because the quantisation model is then more discriminant against properties and fewer elements of the data matrix are used. This causes a side-effect that is measured by the second indicator: more and more un-similar objects come out with exactly the same distance value as similar objects (a problem that does not exist for large p's) and become indiscernible from similar objects. Consequently, they are (false) cluster members. This phenomenon (conceptually similar to the ``false negatives'' indicator) was named ``cluster pollution'' and the indicator measures the average cluster pollution over all clusters:

cp = Σ_i^{no_clusters} Σ_j^{cluster_size_i} no_doubles_ij / no_clusters

where no_doubles_ij is the number of indiscernible un-similar objects associated with the j-th element of cluster i.
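The two indicators above can be computed directly from a clustered distance space. The following is an illustrative sketch, not the authors' code: each cluster is represented simply as the list of distance values of its (similar) members, and the no_doubles counts are supplied as a parallel structure; all data below is invented toy input.

```python
# Sketch of the two evaluation indicators from Section 4.2.
# clusters[i][j] is distance_ij, the distance of the j-th element of the
# i-th cluster of similar objects; doubles[i][j] is no_doubles_ij, the
# number of indiscernible un-similar objects tied with that element.

def mu_d(clusters: list[list[float]]) -> float:
    """Average distance of all cluster means to the origin, normalised by
    the average distance over all cluster elements (avg_distance)."""
    no_clusters = len(clusters)
    avg_distance = (sum(d for c in clusters for d in c)
                    / sum(len(c) for c in clusters))
    mean_of_means = sum(sum(c) / len(c) for c in clusters) / no_clusters
    return mean_of_means / avg_distance

def cluster_pollution(doubles: list[list[int]]) -> float:
    """Average number of indiscernible un-similar objects per cluster."""
    return sum(sum(c) for c in doubles) / len(doubles)

clusters = [[0.1, 0.2], [0.3, 0.5]]   # two clusters of similar objects
doubles = [[0, 1], [2, 0]]            # false members caused by distance ties
print(mu_d(clusters), cluster_pollution(doubles))
```

A smaller µ_d means the similar objects sit closer to the query object at the origin of distance space; a larger cp flags clusters inflated by un-similar objects with identical distance values.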
Remark: Even though there is a certain influence, it could be proven in [5] that no significant correlation exists between parameter p of the quantisation model and cluster pollution.

4.3 Test environment
As pointed out above, to generate the descriptors, the MPEG-7 reference implementation in version 5.6 was used (provided by TU Munich). Image processing was done with Adobe Photoshop, and normalisation and all evaluations were done with Perl. The querying process was performed in the following steps: (1) random selection of a ground truth group, (2) random selection of a query object from this group, (3) distance comparison for all other objects in the dataset, (4) clustering of the resulting distance space based on the ground truth and, finally, (5) evaluation. For each combination of dataset and distance measure 250 queries were issued and evaluations were aggregated over all datasets and descriptors. The next section shows the - partially surprising - results.

Figure 1. Test datasets. Left: Brodatz dataset, middle: Corel dataset, right: coats-of-arms dataset.

5. RESULTS
In the results presented below the first indicator from Subsection 4.2 was used to evaluate distance measures. In a first step parameter p had to be set in a way that all measures are equally discriminant. Distance measurement is fair if the following condition holds true for any predicate-based measure d_P and any continuous measure d_C:

cp(d_P, p) ≈ cp(d_C)

Then, it is guaranteed that predicate-based measures do not create larger clusters (with a higher number of similar objects) at the price of higher cluster pollution. In more than 1000 test queries the optimum value was found to be p = 1. Results are organised as follows: Subsection 5.1 summarises the best distance measures per descriptor, Section 5.2 shows the best overall distance measures and Section 5.3 points out other interesting results (for example, distance measures that work particularly well on specific ground
truth groups).

5.1 Best measure per descriptor

Figure 2 shows the evaluation results for the first indicator. For each descriptor, the best measure and the performance of the MPEG-7 recommendation are shown. The results are aggregated over the tested datasets. At first sight it becomes clear that the MPEG-7 recommendations are mostly relatively good but never the best. For Color Layout the difference between MP7 and the best measure, the Meehl index (Q8), is just 4%, and the MPEG-7 measure has a smaller standard deviation. The reason why the Meehl index is better may be that this descriptor generates descriptions with elements that have very similar variance; statistical analysis confirmed this (see [6]).

For Color Structure, Edge Histogram, Homogeneous Texture, Region-based Shape and Scalable Color, by far the best measure is pattern difference (P6). Psychological research on human visual perception has revealed that in many situations differences between the query object and a candidate weigh much more strongly than common properties. The pattern difference measure implements this insight in the most rigorous way; in the author's opinion, this is why pattern difference performs so extremely well on many descriptors. Additional advantages of pattern difference are that it usually has a very low variance and, because it is a predicate-based measure, its discriminance (and cluster structure) can be tuned with parameter p.

The best measure for Dominant Color turned out to be Clark's divergence coefficient (Q4), a measure similar to pattern difference on the continuous domain. The Texture Browsing descriptor is a special problem. The MPEG-7 standard recommends using it exclusively for browsing; after testing it for retrieval with various distance measures, the author supports this recommendation. It is very difficult to find a good distance measure for Texture Browsing. The proposed Manhattan metric, for example, performs
very badly. The best measure is predicate-based (P7). It works on common properties (a, d) but produces clusters with very high cluster pollution: for this descriptor the second indicator is up to eight times higher than for predicate-based measures on other descriptors.

5.2 Best overall measures

Figure 3 summarises the results over all descriptors and media collections. The diagram should give an indication of the general potential of the investigated distance measures for visual information retrieval. It can be seen that the best overall measure is a predicate-based one. The top performance of pattern difference (P6) proves that the quantisation model is a reasonable method to extend predicate-based distance measures to the continuous domain. The second best group of measures are the MPEG-7 recommendations, which have a slightly higher mean but a lower standard deviation than pattern difference. The third best measure is the Meehl index (Q8), a measure developed for psychological applications but, because of its characteristic properties, tailor-made for certain (homogeneous) descriptors. Minkowski metrics are also among the best measures: the average mean and variance of the Manhattan metric (Q1) and the Euclidean metric (Q2) are in the range of Q8, although these measures do not perform outstandingly well for any single descriptor. Remarkably for a predicate-based measure, Tversky's Feature Contrast Model (P1) is also in the group of very good measures (even though it is not among the best), a group that ends with Q5, the correlation coefficient. The other measures either have a significantly higher mean or a very large standard deviation.

5.3 Other interesting results

Distance measures that perform on average worse than others may in certain situations (e.g.
on specific content) still perform better. For Color Layout, for example, Q7 is a very good measure on colour photos: it performs as well as Q8 and has a lower standard deviation. For artificial images the pattern difference and the Hamming distance produce comparable results as well. If colour information is available in media objects, pattern difference performs well on Dominant Color (just 20% worse than Q4), and in case of difficult ground truth (groups 5, 7, 10) the Meehl index is as strong as P6.

Figure 2. Results per measure and descriptor. The horizontal axis shows the best measure and the performance of the MPEG-7 recommendation for each descriptor (Color Layout: Q8/MP7, Color Structure: P6/MP7, Dominant Color: Q4/MP7, Edge Histogram: P6/MP7, Homogeneous Texture: P6/MP7, Region Shape: P6/MP7, Scalable Color: P6/MP7, Texture Browsing: P7/Q2). The vertical axis shows the values for the first indicator (smaller value = better cluster structure). Shades have the following meaning: black = µ-σ (good cases), black + dark grey = µ (average) and black + dark grey + light grey = µ+σ (bad).

6. CONCLUSION

The evaluation presented in this paper aims at testing the recommended distance measures and finding better ones for the basic visual MPEG-7 descriptors. Eight descriptors were selected, 38 distance measures were implemented, media collections were created and assessed, performance indicators were defined and more than 22500 tests were performed. To be able to use predicate-based distance measures next to quantitative measures, a quantisation model was defined that allows the application of predicate-based measures on continuous data. In the evaluation, the best overall distance measures for visual content, as extracted by the visual MPEG-7 descriptors, turned out to be the pattern difference measure and the Meehl index (for homogeneous descriptions). Since these two measures perform significantly
better than the MPEG-7 recommendations, they should be further tested on large collections of image and video content (e.g. from [15]). The choice of the right distance function for similarity measurement depends on the descriptor, the queried media collection and the semantic level of the user's idea of similarity. This work offers suitable distance measures for various situations. In consequence, the distance measures identified as the best will be implemented in the open MPEG-7 based visual information retrieval framework VizIR [4].

ACKNOWLEDGEMENTS

The author would like to thank Christian Breiteneder for his valuable comments and suggestions for improvement. The work presented in this paper is part of the VizIR project funded by the Austrian Scientific Research Fund FWF under grant no. P16111.

REFERENCES

[1] Clark, P.S. An extension of the coefficient of divergence for use with multiple characters. Copeia, 2 (1952), 61-64.

[2] Cohen, J. A profile similarity coefficient invariant over variable reflection. Psychological Bulletin, 71 (1969), 281-284.

[3] Del Bimbo, A. Visual information retrieval. Morgan Kaufmann Publishers, San Francisco CA, 1999.

[4] Eidenberger, H., and Breiteneder, C. A framework for visual information retrieval. In Proceedings Visual Information Systems Conference (HsinChu, Taiwan, March 2002), LNCS 2314, Springer Verlag, 105-116.

[5] Eidenberger, H., and Breiteneder, C. Visual similarity measurement with the Feature Contrast Model. In Proceedings SPIE Storage and Retrieval for Media Databases Conference (Santa Clara CA, January 2003), SPIE Vol. 5021, 64-76.

[6] Eidenberger, H. How good are the visual MPEG-7 features? In Proceedings SPIE Visual Communications and Image Processing Conference (Lugano, Switzerland, July 2003), SPIE Vol. 5150, 476-488.

[7] Gower, J.G. Multivariate analysis and multidimensional geometry. The Statistician, 17 (1967), 13-25.

[8] Lance, G.N., and Williams, W.T.
Mixed-data classificatory programs. I. Agglomerative systems. Australian Computer Journal, 9 (1967), 373-380.

[9] Manjunath, B.S., Ohm, J.R., Vasudevan, V.V., and Yamada, A. Color and texture descriptors. IEEE Transactions on Circuits and Systems for Video Technology (Special Issue on MPEG-7), 11/6 (June 2001), 703-715.

[10] Meehl, P.E. The problem is epistemology, not statistics: Replace significance tests by confidence intervals and quantify accuracy of risky numerical predictions. In Harlow, L.L., Mulaik, S.A., and Steiger, J.H. (Eds.), What if there were no significance tests? Erlbaum, Mahwah NJ, 393-425.

[11] Pearson, K. On the coefficients of racial likeness. Biometrika, 18 (1926), 105-117.

[12] Santini, S., and Jain, R. Similarity is a geometer. Multimedia Tools and Applications, 5/3 (1997), 277-306.

[13] Santini, S., and Jain, R. Similarity measures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21/9 (September 1999), 871-883.

[14] Sint, P.P. Similarity structures and similarity measures. Austrian Academy of Sciences Press, Vienna, Austria, 1975 (in German).

[15] Smeaton, A.F., and Over, P. The TREC-2002 video track report. NIST Special Publication SP 500-251 (March 2003), available from: http://trec.nist.gov/pubs/trec11/papers/VIDEO.OVER.pdf (last visited: 2003-07-29).

[16] Smeulders, A.W.M., Worring, M., Santini, S., Gupta, A., and Jain, R. Content-based image retrieval at the end of the early years. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22/12 (December 2000), 1349-1380.

[17] Tversky, A.
Features of similarity. Psychological Review, 84/4 (July 1977), 327-351.

Figure 3. Overall results (ordered by the first indicator): P6, MP7, Q8, Q1, Q4, Q2, P2, P4, Q6, Q3, Q7, P1, Q5, P3, P5, P7. The vertical axis shows the values for the first indicator (smaller value = better cluster structure). Shades have the following meaning: black = µ-σ, black + dark grey = µ and black + dark grey + light grey = µ+σ.

Distance Measures for MPEG-7-based Retrieval

ABSTRACT

In visual information retrieval the careful choice of suitable proximity measures is a crucial success factor. The evaluation presented in this paper aims at showing that the distance measures suggested by the MPEG-7 group for the visual descriptors can be beaten by general-purpose measures. Eight visual MPEG-7 descriptors were selected and 38 distance measures implemented. Three media collections were created and assessed, performance indicators developed and more than 22500 tests performed. Additionally, a quantisation model was developed to be able to use predicate-based distance measures on continuous data as well. The evaluation shows that the distance measures recommended in the MPEG-7 standard are among the best but that other measures perform even better.

1. INTRODUCTION

The MPEG-7 standard defines, among others, a set of descriptors for visual media. Each descriptor consists of a feature extraction mechanism, a description (in binary and XML format) and guidelines that define how to apply the descriptor to different kinds of media (e.g. temporal media). The MPEG-7 descriptors have been carefully designed to meet the partially complementary requirements of different application domains: archival, browsing, retrieval, etc.
[9]. In the following we will exclusively deal with the visual MPEG-7 descriptors in the context of media retrieval. The visual MPEG-7 descriptors fall into five groups: colour, texture, shape, motion and others (e.g. face description) and add up to 16 basic descriptors. For retrieval applications, each descriptor requires a rule that defines how to measure the similarity of two descriptions. Common rules are distance functions, like the Euclidean distance and the Mahalanobis distance. Unfortunately, the MPEG-7 standard does not include distance measures in its normative part, because it was not designed to be (and should not be exclusively understood as) retrieval-specific. However, the MPEG-7 authors give recommendations on which distance measure to use for a particular descriptor. These recommendations are based on accurate knowledge of the descriptors' behaviour and the description structures. In the present study a large number of successful distance measures from different areas (statistics, psychology, medicine, social and economic sciences, etc.)
were implemented and applied to MPEG-7 data vectors to verify whether or not the recommended MPEG-7 distance measures are really the best for any reasonable class of media objects. From the MPEG-7 tests and the recommendations it does not become clear how many and which distance measures have been tested on the visual descriptors and the MPEG-7 test datasets. The hypothesis is that analytically derived distance measures may be good in general, but only a quantitative analysis is capable of identifying the best distance measure for a specific feature extraction method.

The paper is organised as follows. Section 2 gives a minimum of background information on the MPEG-7 descriptors and distance measurement in visual information retrieval (VIR, see [3], [16]). Section 3 gives an overview of the implemented distance measures. Section 4 describes the test setup, including the test data and the implemented evaluation methods. Finally, Section 5 presents the results per descriptor and over all descriptors.

2. BACKGROUND

2.1 MPEG-7: visual descriptors

The visual part of the MPEG-7 standard defines several descriptors. Not all of them are really descriptors in the sense that they extract properties from visual media; some of them are just structures for descriptor aggregation or localisation. The basic descriptors are Color Layout, Color Structure, Dominant Color, Scalable Color, Edge Histogram, Homogeneous Texture, Texture Browsing, Region-based Shape, Contour-based Shape, Camera Motion, Parametric Motion and Motion Activity. Other descriptors are based on low-level descriptors or semantic information: Group-of-Frames/Group-of-Pictures Color (based on Scalable Color), Shape 3D (based on 3D mesh information), Motion Trajectory (based on object segmentation) and Face Recognition (based on face extraction). Descriptors for spatiotemporal aggregation and localisation are: Spatial 2D Coordinates, Grid Layout, Region Locator (spatial), Time Series, Temporal
Interpolation (temporal) and SpatioTemporal Locator (combined). Finally, other structures exist for colour spaces, colour quantisation and multiple 2D views of 3D objects. These additional structures allow combining the basic descriptors in multiple ways and on different levels, but they do not change the characteristics of the extracted information. Consequently, structures for aggregation and localisation were not considered in the work described in this paper.

2.2 Similarity measurement on visual data

Generally, similarity measurement on visual information aims at imitating human visual similarity perception. Unfortunately, human perception is much more complex than any of the existing similarity models (it includes perception, recognition and subjectivity). The common approach in visual information retrieval is to measure dis-similarity as distance. Both the query object and the candidate object are represented by their corresponding feature vectors, and the distance between the two objects is measured by computing the distance between the two vectors. Consequently, the process is independent of the employed querying paradigm (e.g. query by example). The query object may be natural (e.g. a real object) or artificial (e.g. properties of a group of objects). The goal of the measurement process is to express a relationship between the two objects by their distance. Iterating over multiple candidates then allows defining a partial order over the candidates and addressing those in a (to-be-defined) neighbourhood as similar to the query object. At this point it has to be mentioned that in a multi-descriptor environment, especially in MPEG-7, we are only halfway towards a statement on similarity. If multiple descriptors are used (e.g.
a descriptor scheme), a rule has to be defined for combining all distances into a global value for each object. Still, distance measurement is the most important first step in similarity measurement. Obviously, the main task of good distance measures is to reorganise descriptor space in such a way that the media objects with the highest similarity are nearest to the query object. Since distance is defined to be minimal for identical objects, the query object always lies at the origin of distance space, and similar candidates should form clusters around the origin that are as large as possible. Consequently, many well-known distance measures are based on geometric assumptions about descriptor space (e.g. the Euclidean distance is based on the metric axioms). Unfortunately, these measures do not fit ideally with human similarity perception (e.g. due to human subjectivity). To overcome this shortcoming, researchers from different areas have developed alternative models that are mostly predicate-based (descriptors are assumed to contain just binary elements, e.g.
Tversky's Feature Contrast Model [17]) and fit better with human perception.\nIn the following, distance measures from both groups of approaches will be considered.\n3.\nDISTANCE MEASURES\nThe distance measures used in this work have been collected from various areas (Subsection 3.1).\nBecause they work on differently quantised data, Subsection 3.2 sketches a model for unification on the basis of quantitative descriptions.\nFinally, Subsection 3.3 introduces the distance measures as well as their origin and the idea they implement.\n3.1 Sources\nDistance measurement is used in many research areas such as psychology, sociology (e.g. comparing test results), medicine (e.g. comparing parameters of test persons), economics (e.g. comparing balance sheet ratios), etc.\nNaturally, the character of the data available in these areas differs significantly.\nEssentially, there are two extreme cases of data vectors (and distance measures): predicate-based (all vector elements are binary, e.g. {0, 1}) and quantitative (all vector elements are continuous, e.g. 
[0, 1]).\nPredicates express the existence of properties and represent high-level information, while quantitative values can be used to measure and mostly represent low-level information.\nPredicates are often employed in psychology, sociology and other human-related sciences, and most predicate-based distance measures were therefore developed in these areas.\nDescriptions in visual information retrieval are nearly always (unless they integrate semantic information) quantitative.\nConsequently, mostly quantitative distance measures are used in visual information retrieval.\nThe goal of this work is to compare the MPEG-7 distance measures with the most powerful distance measures developed in other areas.\nSince MPEG-7 descriptions are purely quantitative but some of the most sophisticated distance measures are defined exclusively on predicates, a model is mandatory that allows the application of predicate-based distance measures to quantitative data.\nThe model developed for this purpose is presented in the next section.\n3.2 Quantisation model\nThe goal of the quantisation model is to redefine the set operators that are usually used in predicate-based distance measures on continuous data.\nThe first in visual information retrieval to follow this approach were Santini and Jain, who tried to apply Tversky's Feature Contrast Model [17] to content-based image retrieval [12], [13].\nThey interpreted continuous data as fuzzy predicates and used fuzzy set operators.\nUnfortunately, their model suffered from several shortcomings, which they describe in [12], [13] (for example, the quantitative model worked only for one specific version of the original predicate-based measure).\nThe main idea of the presented quantisation model is that set operators are replaced by statistical functions.\nIn [5] the authors showed that this interpretation of set operators is reasonable.\nThe model offers a solution for the descriptors considered in the evaluation.\nIt is not specific to one 
distance measure, but can be applied to any predicate-based measure.\nBelow, it will be shown that the model works not only for predicate data but for quantitative data as well.\nEach measure implementing the model can be used as a substitute for the original predicate-based measure.\nGenerally, binary properties of two objects (e.g. media objects) can exist in both objects (denoted as a), in just one (b, c) or in none of them (d).\nThe operators needed for these relationships are UNION, MINUS and NOT.\nIn the quantisation model they are replaced by statistical functions (see [5] for further details).\nFor both predicate data and quantitative data it can be shown that the quantisation model is reasonable.\nIf description vectors consist of binary elements only, p should be used as follows (for example, p can easily be set automatically): p \u2192 0 \u21d2 \u03b51, \u03b52 = 0, e.g. p = min.\nIn this case, a, b, c, d measure like the set operators they replace (1-bit quantisation) and the distance measure is then highly discriminant for properties.\nBetween the limits, a distance measure that uses the quantisation model is--depending on p--more or less discriminant for properties.\nThis means it selects a subset of all available description vector elements for distance measurement.\nFor example, Table 1 shows the behaviour of a, b, c, d for two one-dimensional feature vectors Xi and Xj.\nAs can be seen, the statistical measures work like set operators.\nActually, the quantisation model works accurately on predicate data for any p \u2260 \u221e.\nTo show that the model is reasonable for quantitative data the following fact is used.\nIt is easy to show that for predicate data some quantitative distance measures degenerate to predicate-based measures.\nFor example, the L1 metric (Manhattan metric) degenerates to the Hamming distance (from [9], without weights): dL1 (Xi, Xj) = \u2211k |xik - xjk| = b + c = dHamming (Xi, Xj) for binary Xi, Xj.\nIf it can be shown that the quantisation model is able to reconstruct the quantitative measure from the degenerated 
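The degeneration just mentioned is easy to check numerically: on binary (predicate) vectors the unweighted L1 metric simply counts the differing positions, i.e. b + c (a small sketch, not the paper's implementation):

```python
def l1(x, y):
    # Manhattan (L1) metric, without weights.
    return sum(abs(a - b) for a, b in zip(x, y))

def hamming(x, y):
    # Number of positions in which two predicate vectors differ (b + c).
    return sum(1 for a, b in zip(x, y) if a != b)

xi = [1, 0, 1, 1, 0]
xj = [0, 0, 1, 0, 1]
assert l1(xi, xj) == hamming(xi, xj) == 3
```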
predicate-based measure, the model is obviously able to extend predicate-based measures to the quantitative domain.\nThis is easy to illustrate.\nIn the operators, a selects properties that are present in both data vectors (Xi, Xj representing media objects), b and c select properties that are present in just one of them, and d selects properties that are present in neither of the two data vectors.\nEvery property is selected by the extent to which it is present (a and d: mean, b and c: difference) and only if the amount to which it is present exceeds a certain threshold (depending on the mean and standard deviation over all elements of descriptor space).\nThe implementation of these operators is based on one assumption: it is assumed that vector elements measure on an interval scale.\nThat means each element expresses that the measured property is \"more or less\" present (\"0\": not at all, \"M\": fully present).\nThis is true for most visual descriptors and all MPEG-7 descriptors.\nA natural origin as it is assumed here (\"0\") is not needed.\nIntroducing p (called the discriminance-defining parameter) for the thresholds \u03b51, \u03b52 has the positive consequence that a, b, c, d can then be controlled through a single parameter.\np is an additional criterion for the behaviour of a distance measure and determines the thresholds used in the operators.\nIt expresses how accurately data items are present (quantisation) and consequently, how accurately they should be investigated.\np can be set by the user or automatically.\nFor purely quantitative feature vectors, p should be used as follows (again, p can easily be set automatically): p \u2192 \u221e.\nIn this case, all elements (= properties) are assumed to be continuous (high quantisation).\nIn consequence, all properties of a descriptor are used by the operators.\nThen, the distance measure is not discriminant for properties.\nTable 2.\nPredicate-based distance measures.\nThis means, for sufficiently high p every 
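Following the verbal description above (a and d select by mean, b and c by difference, each only above a threshold), the four statistical operators can be sketched as below. The exact selection functions and the derivation of ε1, ε2 from p are assumptions of this sketch; see [5] for the actual definitions:

```python
def quantisation_operators(x, y, eps1, eps2, M=1.0):
    # Statistical substitutes for the set operators UNION, MINUS and NOT.
    # eps1, eps2 would be derived from the mean and standard deviation
    # over descriptor space via parameter p; here they are passed in.
    a = b = c = d = 0.0
    for xi, yi in zip(x, y):
        m = (xi + yi) / 2.0          # extent to which a property is in both
        if m > eps1:
            a += m                   # present in both vectors
        if xi - yi > eps2:
            b += xi - yi             # present only in x
        if yi - xi > eps2:
            c += yi - xi             # present only in y
        if M - m > eps1:
            d += M - m               # present in neither vector
    return a, b, c, d
```

For two identical binary vectors the sketch behaves like the set operators it replaces: b and c vanish, a counts the shared properties and d the shared absences.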
predicate-based distance measure that is either not using b and c or using them just as b + c, b - c or c - b, can be transformed into a continuous quantitative distance measure.\nFor example, the Hamming distance (again, without weights): dHamming (Xi, Xj) = b + c, which under the quantisation model on quantitative data becomes \u2211k |xik - xjk|, i.e. the L1 metric.\nThe quantisation model successfully reconstructs the L1 metric and no distance measure-specific modification has to be made to the model.\nThis demonstrates that the model is reasonable.\nIn the following it will be used to extend successful predicate-based distance measures to the quantitative domain.\nThe major advantages of the quantisation model are: (1) it is application domain independent, (2) the implementation is straightforward, (3) the model is easy to use and finally, (4) the new parameter p allows the similarity measurement process to be controlled in a new way (discriminance on the property level).\n3.3 Implemented measures\nFor the evaluation described in this work, the distance measures recommended in the MPEG-7 standard were implemented next to predicate-based measures (based on the quantisation model) and quantitative measures (altogether 38 different distance measures).\nTable 2 summarises those predicate-based measures that performed best in the evaluation (in sum, 20 predicate-based measures were investigated).\nFor these measures, K is the number of predicates in the data vectors Xi and Xj.\nIn P1, the sum is used for Tversky's f () (as Tversky himself does in [17]) and \u03b1, \u03b2 are weights for elements b and c.\nIn [5] the authors investigated Tversky's Feature Contrast Model and found \u03b1 = 1, \u03b2 = 0 to be the optimum parameters.\nSome of the predicate-based measures are very simple (e.g. 
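A few of the predicate-based measures can be written directly in terms of a, b, c, d. In this sketch the pattern difference normalisation (bc\/n\u00b2) follows the common SPSS definition and Tversky's model uses f() = sum with the α = 1, β = 0 parameters reported above; both exact forms are assumptions, not copies of the paper's Table 2:

```python
def hamming_pred(a, b, c, d):
    # Hamming distance: properties present in exactly one of the objects.
    return b + c

def pattern_difference(a, b, c, d):
    # P6; commonly defined as b*c / n^2 (e.g. in SPSS cluster analysis).
    n = a + b + c + d
    return (b * c) / (n * n) if n else 0.0

def tversky_fcm(a, b, c, d, alpha=1.0, beta=0.0):
    # P1, Tversky's Feature Contrast Model with f() = sum; a similarity,
    # negated here so that smaller values mean "more similar".
    return -(a - alpha * b - beta * c)
```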
P2, P4) but have been heavily exploited in psychological research.\nPattern difference (P6)--a very powerful measure--is used in the statistics package SPSS for cluster analysis.\nP7 is a correlation coefficient for predicates developed by Pearson.\nTable 3 shows the best quantitative distance measures that were used.\nQ1 and Q2 are metric-based and were implemented as representatives of the entire group of Minkowski distances.\nThe wi are weights.\nIn Q5, \u00b5i, \u03c3i are the mean and standard deviation for the elements of descriptor Xi.\nIn Q6, m is 2 (= 0.5).\nQ3, the Canberra metric, is a normalised form of Q1.\nSimilarly, Q4, Clark's divergence coefficient, is a normalised version of Q2.\nQ6 is a further-developed correlation coefficient that is invariant against sign changes.\nThis measure is used even though its particular properties are of minor importance for this application domain.\nFinally, Q8 is a measure that takes the differences between adjacent vector elements into account.\nThis makes it structurally different from all other measures.\nObviously, one important distance measure is missing.\nThe Mahalanobis distance was not considered, because different descriptors would require different covariance matrices and for some descriptors it is simply impossible to define a covariance matrix.\nIf the identity matrix were used in this case, the Mahalanobis distance would degenerate to a Minkowski distance.\nAdditionally, the recommended MPEG-7 distances were implemented with the following parameters: in the distance measure of the Color Layout descriptor all weights were set to \"1\" (as in all other implemented measures).\nIn the distance measure of the Dominant Color descriptor the following parameters were used: w1 = 0.7, w2 = 0.3, \u03b1 = 1, Td = 20 (as recommended).\nIn the Homogeneous Texture descriptor's distance all \u03b1 (k) were set to \"1\" and matching was done rotation- and scale-invariantly.\nImportant!\nSome of the measures presented in 
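The quantitative measures Q1-Q4 named above can be sketched as follows (the normalisation of Clark's coefficient by the number of contributing elements is an assumption of this sketch):

```python
import math

def manhattan(x, y):
    # Q1, the L1 / Manhattan metric.
    return sum(abs(a - b) for a, b in zip(x, y))

def euclidean(x, y):
    # Q2, the L2 / Euclidean metric.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def canberra(x, y):
    # Q3, the Canberra metric: a normalised form of Q1.
    return sum(abs(a - b) / (a + b) for a, b in zip(x, y) if a + b > 0)

def clark(x, y):
    # Q4, Clark's divergence coefficient: a normalised version of Q2.
    terms = [((a - b) / (a + b)) ** 2 for a, b in zip(x, y) if a + b > 0]
    return math.sqrt(sum(terms) / len(terms)) if terms else 0.0
```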
this section are distance measures while others are similarity measures.\nFor the tests it is important to note that all similarity measures were inverted to distance measures.\n4.\nTEST SETUP\nSubsection 4.1 describes the descriptors (including parameters) and the collections (including ground truth information) that were used in the evaluation.\nSubsection 4.2 discusses the evaluation method that was implemented and Subsection 4.3 sketches the test environment used for the evaluation process.\n4.1 Test data\nFor the evaluation eight MPEG-7 descriptors were used.\nAll colour descriptors: Color Layout, Color Structure, Dominant Color, Scalable Color, all texture descriptors: Edge Histogram, Homogeneous Texture, Texture Browsing and one shape descriptor: Region-based Shape.\nTexture Browsing was used even though the MPEG-7 standard suggests that it is not suitable for retrieval.\nThe other basic shape descriptor, Contour-based Shape, was not used, because it produces structurally different descriptions that cannot be transformed to data vectors with elements measuring on interval scales.\nThe motion descriptors were not used, because they integrate the temporal dimension of visual media and would only be comparable if the basic colour, texture and shape descriptors were aggregated over time.\nThis was not done.\nFinally, no high-level descriptors were used (Localisation, Face Recognition, etc., see Subsection 2.1), because--in the author's opinion--the behaviour of the basic descriptors on elementary media objects should be evaluated before conclusions on aggregated structures can be drawn.\nTable 3.\nQuantitative distance measures.\nThe Texture Browsing descriptions had to be transformed from a five-bin to an eight-bin representation so that all elements of the descriptor measure on an interval scale.\nA Manhattan metric was used to measure proximity (see [6] for details).\nDescriptor extraction was performed using the MPEG-7 reference 
implementation.\nIn the extraction process each descriptor was applied to the entire content of each media object and the following extraction parameters were used.\nColour in Color Structure was quantised to 32 bins.\nFor Dominant Color, the colour space was set to YCrCb, 5-bit default quantisation was used and the default value for spatial coherency was used.\nHomogeneous Texture was quantised to 32 components.\nScalable Color values were quantised to sizeof (int) -3 bits and 64 bins were used.\nFinally, Texture Browsing was used with five components.\nThese descriptors were applied to three media collections with image content: the Brodatz dataset (112 images, 512x512 pixel), a subset of the Corel dataset (260 images, 460x300 pixel, portrait and landscape) and a dataset with coats-of-arms images (426 images, 200x200 pixel).\nFigure 1 shows examples from the three collections.\nDesigning appropriate test sets for a visual evaluation is a highly difficult task (for example, see the TREC video 2002 report [15]).\nIdeally, to identify the best distance measure for a descriptor, it would be tested on an unlimited number of media objects; but this is not the aim of this study.\nIt is only evaluated whether--for likely image collections--better proximity measures than those suggested by the MPEG-7 group can be found.\nCollections of this relatively small size were used in the evaluation, because the applied evaluation methods are, above a certain minimum size, invariant against collection size, and for smaller collections it is easier to define a high-quality ground truth.\nStill, the average ratio of ground truth size to collection size is at least 1:7.\nIn particular, no collection from the MPEG-7 dataset was used in the evaluation, because the evaluations should show how well the descriptors and the recommended distance measures perform on \"unknown\" material.\nWhen the descriptor extraction was finished, the resulting XML descriptions were transformed into a data matrix 
with 798 lines (media objects) and 314 columns (descriptor elements).\nTo be usable with distance measures that do not integrate domain knowledge, the elements of this data matrix were normalised to [0, 1].\nFor the distance evaluation--next to the normalised data matrix--human similarity judgement is needed.\nIn this work, the ground truth is built of twelve groups of similar images (four for each dataset).\nGroup membership was rated by humans based on semantic criteria.\nTable 4 summarises the twelve groups and the underlying descriptions.\nIt has to be noted that some of these groups (especially 5, 7 and 10) are much harder to find with low-level descriptors than others.\n4.2 Evaluation method\nUsually, retrieval evaluation is performed based on a ground truth with recall and precision (see, for example, [3], [16]).\nIn multi-descriptor environments this leads to a problem, because the resulting recall and precision values are strongly influenced by the method used to merge the distance values for one media object.\nEven though it is nearly impossible to say how big the influence of a single distance measure is on the resulting recall and precision values, this problem has been largely ignored so far.\nIn Subsection 2.2 it was stated that the major task of a distance measure is to bring the relevant media objects as close to the origin (where the query object lies) as possible.\nEven in a multi-descriptor environment it is then simple to identify the similar objects in a large distance space.\nConsequently, it was decided to use indicators measuring the distribution in distance space of candidates similar to the query object for this evaluation instead of recall and precision.\nTable 4.\nGround truth information.\nFigure 1.\nTest datasets.\nLeft: Brodatz dataset, middle: Corel dataset, right: coats-of-arms dataset.\nIdentifying clusters of similar objects (based on the given ground truth) is relatively easy, because the resulting distance space for one 
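The normalisation step can be sketched as a min-max scaling of every column (descriptor element) to [0, 1]; that the scaling is done per column is an assumption of this sketch:

```python
def normalise_columns(matrix):
    # Min-max normalisation of every column of the data matrix to [0, 1],
    # so that measures without domain knowledge can be applied.
    cols = list(zip(*matrix))
    norm_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = hi - lo
        # Constant columns carry no information; map them to 0.
        norm_cols.append([(v - lo) / span if span else 0.0 for v in col])
    return [list(row) for row in zip(*norm_cols)]
```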
descriptor and any distance measure is always one-dimensional.\nClusters are found by searching from the origin of distance space to the first similar object, grouping all following similar objects in the cluster, breaking off the cluster with the first un-similar object and so forth.\nFor the evaluation two indicators were defined.\nThe first measures the average distance of all cluster means to the origin, normalised by the distribution characteristics of the distance measure (avg_distance): (1 \/ avg_distance) \u00b7 (1 \/ no_clusters) \u00b7 \u2211i (1 \/ cluster_sizei) \u00b7 \u2211j distanceij, where distanceij is the distance value of the j-th element in the i-th cluster, no_clusters is the number of found clusters and cluster_sizei is the size of the i-th cluster.\nAdditionally, the standard deviation is used.\nIn the evaluation process this measure turned out to produce valuable results and to be relatively robust against parameter p of the quantisation model.\nIn Subsection 3.2 we noted that p affects the discriminance of a predicate-based distance measure: the smaller p is set, the larger the resulting clusters are, because the quantisation model is then more discriminant against properties and fewer elements of the data matrix are used.\nThis causes a side-effect that is measured by the second indicator: more and more un-similar objects come out with exactly the same distance value as similar objects (a problem that does not exist for large p's) and become indiscernible from similar objects.\nConsequently, they are (false) cluster members.\nThis phenomenon (conceptually similar to the \"false negatives\" indicator) was named \"cluster pollution\" and the indicator measures the average cluster pollution over all clusters: (1 \/ no_clusters) \u00b7 \u2211i (1 \/ cluster_sizei) \u00b7 \u2211j no_doublesij, where no_doublesij is the number of indiscernible un-similar objects associated with the j-th element of cluster i. 
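The cluster search and the first indicator can be sketched directly from the description above (variable names follow the text; the scan itself is a sketch, not the evaluation scripts):

```python
def find_clusters(distances, is_similar):
    # distances: distance of every candidate to the query object;
    # is_similar: ground-truth flags.  Candidates are scanned in order of
    # increasing distance; each run of similar objects forms one cluster,
    # broken off by the first un-similar object.
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    clusters, current = [], []
    for i in order:
        if is_similar[i]:
            current.append(distances[i])
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    return clusters

def mean_cluster_distance(clusters, avg_distance):
    # First indicator: average distance of all cluster means to the
    # origin, normalised by the measure's distribution (avg_distance).
    means = [sum(c) / len(c) for c in clusters]
    return (sum(means) / len(means)) / avg_distance
```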
Remark: Even though there is a certain influence, it was shown in [5] that no significant correlation exists between parameter p of the quantisation model and cluster pollution.\n4.3 Test environment\nAs pointed out above, to generate the descriptors, the MPEG-7 reference implementation in version 5.6 was used (provided by TU Munich).\nImage processing was done with Adobe Photoshop and normalisation and all evaluations were done with Perl.\nThe querying process was performed in the following steps: (1) random selection of a ground truth group, (2) random selection of a query object from this group, (3) distance comparison for all other objects in the dataset, (4) clustering of the resulting distance space based on the ground truth and finally, (5) evaluation.\nFor each combination of dataset and distance measure 250 queries were issued and evaluations were aggregated over all datasets and descriptors.\nThe next section shows the--partially surprising--results.\n5.\nRESULTS\nIn the results presented below the first indicator from Subsection 4.2 was used to evaluate distance measures.\nIn a first step, parameter p had to be set so that all measures are equally discriminant.\nDistance measurement is fair if the following condition holds true for any predicate-based measure dP and any continuous measure dC: cluster_pollution(dP) \u2264 cluster_pollution(dC).\nThen, it is guaranteed that predicate-based measures do not create larger clusters (with a higher number of similar objects) at the price of higher cluster pollution.\nIn more than 1000 test queries the optimum value was found to be p = 1.\nResults are organised as follows: Subsection 5.1 summarises the\nFigure 2.\nResults per measure and descriptor.\nThe horizontal axis shows the best measure and the performance of the MPEG-7\nrecommendation for each descriptor.\nThe vertical axis shows the values for the first indicator (smaller value = better cluster structure).\nShades have the following meaning: black = \u00b5-\u03c3 (good cases), black + dark grey 
= \u00b5 (average) and black + dark grey + light grey = \u00b5 + \u03c3 (bad).\nbest distance measures per descriptor, Subsection 5.2 shows the best overall distance measures and Subsection 5.3 points out other interesting results (for example, distance measures that work particularly well on specific ground truth groups).\n5.1 Best measure per descriptor\nFigure 2 shows the evaluation results for the first indicator.\nFor each descriptor the best measure and the performance of the MPEG-7 recommendation are shown.\nThe results are aggregated over the tested datasets.\nAt first sight, it becomes clear that the MPEG-7 recommendations are mostly relatively good but never the best.\nFor Color Layout the difference between MP7 and the best measure, the Meehl index (Q8), is just 4% and the MPEG-7 measure has a smaller standard deviation.\nThe reason why the Meehl index is better may be that this descriptor generates descriptions with elements that have very similar variance.\nStatistical analysis confirmed this (see [6]).\nFor Color Structure, Edge Histogram, Homogeneous Texture, Region-based Shape and Scalable Color, by far the best measure is pattern difference (P6).\nPsychological research on human visual perception has revealed that in many situations differences between the query object and a candidate weigh much more strongly than common properties.\nThe pattern difference measure implements this insight in the most rigorous way.\nIn the author's opinion, this is the reason why pattern difference performs so extremely well on many descriptors.\nAdditional advantages of pattern difference are that it usually has a very low variance and--because it is a predicate-based measure--its discriminance (and cluster structure) can be tuned with parameter p.\nThe best measure for Dominant Color turned out to be Clark's divergence coefficient (Q4).\nThis is a measure similar to pattern difference on the continuous domain.\nThe Texture Browsing descriptor is a special 
problem.\nIn the MPEG-7 standard it is recommended to use it exclusively for browsing.\nAfter testing it for retrieval with various distance measures, the author supports this opinion.\nIt is very difficult to find a good distance measure for Texture Browsing.\nThe proposed Manhattan metric, for example, performs very badly.\nThe best measure is predicate-based (P7).\nIt works on common properties (a, d) but produces clusters with very high cluster pollution.\nFor this descriptor the second indicator is up to eight times higher than for predicate-based measures on other descriptors.\n5.2 Best overall measures\nFigure 3 summarises the results over all descriptors and media collections.\nThe diagram should give an indication of the general potential of the investigated distance measures for visual information retrieval.\nIt can be seen that the best overall measure is a predicate-based one.\nThe top performance of pattern difference (P6) proves that the quantisation model is a reasonable method to extend predicate-based distance measures to the continuous domain.\nThe second best group of measures are the MPEG-7 recommendations, which have a slightly higher mean but a lower standard deviation than pattern difference.\nThe third best measure is the Meehl index (Q8), a measure developed for psychological applications but, because of its characteristic properties, tailor-made for certain (homogeneous) descriptors.\nMinkowski metrics are also among the best measures: the average mean and variance of the Manhattan metric (Q1) and the Euclidean metric (Q2) are in the range of Q8.\nOf course, these measures do not perform particularly well for any of the descriptors.\nRemarkably for a predicate-based measure, Tversky's Feature Contrast Model (P1) is also in the group of very good measures (even though it is not among the best) that ends with Q5, the correlation coefficient.\nThe other measures either have a significantly higher mean or a very large standard deviation.\n5.3 Other 
interesting results\nDistance measures that perform on average worse than others may in certain situations (e.g. on specific content) still perform better.\nFor Color Layout, for example, Q7 is a very good measure on colour photos.\nIt performs as well as Q8 and has a lower standard deviation.\nFor artificial images the pattern difference and the Hamming distance produce comparable results as well.\nIf colour information is available in media objects, pattern difference performs well on Dominant Color (just 20% worse than Q4) and in case of difficult ground truth (groups 5, 7 and 10) the Meehl index is as strong as P6.\nFigure 3.\nOverall results (ordered by the first indicator).\nThe vertical axis shows the values for the first indicator (smaller value = better cluster structure).\nShades have the following meaning: black = \u00b5-\u03c3, black + dark grey = \u00b5 and black + dark grey + light grey = \u00b5 + \u03c3.\n6.\nCONCLUSION\nThe evaluation presented in this paper aims at testing the recommended distance measures and finding better ones for the basic visual MPEG-7 descriptors.\nEight descriptors were selected, 38 distance measures were implemented, media collections were created and assessed, performance indicators were defined and more than 22500 tests were performed.\nTo be able to use predicate-based distance measures next to quantitative measures, a quantisation model was defined that allows the application of predicate-based measures to continuous data.\nIn the evaluation the best overall distance measures for visual content--as extracted by the visual MPEG-7 descriptors--turned out to be the pattern difference measure and the Meehl index (for homogeneous descriptions).\nSince these two measures perform significantly better than the MPEG-7 recommendations they should be further tested on large collections of image and video content (e.g. 
from [15]).\nThe choice of the right distance function for similarity measurement depends on the descriptor, the queried media collection and the semantic level of the user's idea of similarity.\nThis work offers suitable distance measures for various situations.\nIn consequence, the distance measures identified as the best will be implemented in the open MPEG-7 based visual information retrieval framework VizIR [4].","keyphrases":["distanc measur","distanc measur","mpeg-7","visual inform retriev","visual descriptor","media collect","perform indic","mpeg-7-base retriev","visual media","meehl index","human similar percept","predic-base model","content-base imag retriev","content-base video retriev","similar measur","similar percept"],"prmu":["P","P","P","P","P","P","P","M","R","U","U","M","M","M","M","U"]} {"id":"J-53","title":"A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters","abstract":"In this paper we formulate the fixed budget resource allocation game to understand the performance of a distributed market-based resource allocation system. Multiple users decide how to distribute their budget (bids) among multiple machines according to their individual preferences to maximize their individual utility. We look at both the efficiency and the fairness of the allocation at the equilibrium, where fairness is evaluated through the measures of utility uniformity and envy-freeness. 
We show analytically and through simulations that despite being highly decentralized, such a system converges quickly to an equilibrium and unlike the social optimum that achieves high efficiency but poor fairness, the proposed allocation scheme achieves a nice balance of high degrees of efficiency and fairness at the equilibrium.","lvl-1":"A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters Michal Feldman\u2217 mfeldman@sims.berkeley.edu Kevin Lai\u2020 kevin.lai@hp.com Li Zhang\u2020 l.zhang@hp.com ABSTRACT In this paper we formulate the fixed budget resource allocation game to understand the performance of a distributed marketbased resource allocation system.\nMultiple users decide how to distribute their budget (bids) among multiple machines according to their individual preferences to maximize their individual utility.\nWe look at both the efficiency and the fairness of the allocation at the equilibrium, where fairness is evaluated through the measures of utility uniformity and envy-freeness.\nWe show analytically and through simulations that despite being highly decentralized, such a system converges quickly to an equilibrium and unlike the social optimum that achieves high efficiency but poor fairness, the proposed allocation scheme achieves a nice balance of high degrees of efficiency and fairness at the equilibrium.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; C.4 [Performance of Systems]; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems; J.4 [Social and Behavioral Sciences]: Economics General Terms Algorithms, Performance, Design, Economics 1.\nINTRODUCTION The primary advantage of distributed shared clusters like the Grid [7] and PlanetLab [1] is their ability to pool together shared computational resources.\nThis allows increased throughput because of statistical multiplexing and the bursty utilization pattern of typical 
users.\nSharing nodes that are dispersed in the network allows lower delay because applications can store data close to users.\nFinally, sharing allows greater reliability because of redundancy in hosts and network connections.\nHowever, resource allocation in these systems remains the major challenge.\nThe problem is how to allocate a shared resource both fairly and efficiently (where efficiency is the ratio of the achieved social welfare to the social optimum) in the presence of strategic users who act in their own interests.\nSeveral non-economic allocation algorithms have been proposed, but these typically assume that task values (i.e., their importance) are the same, or are inversely proportional to the resources required, or are set by an omniscient administrator.\nHowever, in many cases, task values vary significantly, are not correlated to resource requirements, and are difficult and time-consuming for an administrator to set.\nInstead, we examine a market-based resource allocation system (others are described in [2, 4, 6, 21, 26, 27]) that allows users to express their preferences for resources through a bidding mechanism.\nIn particular, we consider a price-anticipating [12] scheme in which a user bids for a resource and receives the ratio of his bid to the sum of bids for that resource.\nThis proportional scheme is simpler, more scalable, and more responsive [15] than auction-based schemes [6, 21, 26].\nPrevious work has analyzed price-anticipating schemes in the context of allocating network capacity for flows for users with unlimited budgets.\nIn this work, we examine a price-anticipating scheme in the context of allocating computational capacity for users with private preferences and limited budgets, resulting in a qualitatively different game (as discussed in Section 6).\nIn this paper, we formulate the fixed budget resource allocation game and study the existence and performance of the Nash equilibria of this game.\nFor evaluating the Nash 
equilibria, we consider both their efficiency, measuring how close the social welfare at equilibrium is to the social optimum, and fairness, measuring how different the users' utilities are.\nAlthough rarely considered in previous game-theoretical studies, we believe fairness is a critical metric for a resource allocation scheme because the perception of unfairness will cause some users to reject a system with more efficient but less fair resource allocation in favor of one with less efficient, more fair resource allocation.\nWe use both utility uniformity and envy-freeness to measure fairness.\nUtility uniformity, which is common in Computer Science work, measures the closeness of utilities of different users.\nEnvy-freeness, which is more from the Economic perspective, measures the happiness of users with their own resources compared to the resources of others.\nOur contributions are as follows: \u2022 We analyze the existence and performance of Nash equilibria.\nUsing analysis, we show that there is always a Nash equilibrium in the fixed budget game if the utility functions satisfy a fairly weak and natural condition of strong competitiveness.\nWe also show the worst case performance bounds: for m players the efficiency at equilibrium is \u2126(1\/\u221am), the utility uniformity is \u2265 1\/m, and the envy-freeness is \u2265 2\u221a2 \u2212 2 \u2248 0.83.\nAlthough these bounds are quite low, the simulations described below indicate these bounds are overly pessimistic.\n\u2022 We describe algorithms that allow strategic users to optimize their utility.\nAs part of the fixed budget game analysis, we show that strategic users with linear utility functions can calculate their bids using a best response algorithm that quickly results in an allocation with high efficiency with little computational and communication overhead.\nWe present variations of the best response algorithm for both finite and infinite parallelism tasks.\nIn addition, we present a local 
greedy adjustment algorithm that converges more slowly than best response, but accommodates non-linear utility functions and utility functions that cannot be expressed in closed form. • We show that the price-anticipating resource allocation mechanism achieves a high degree of efficiency and fairness. Using simulation, we find that although the socially optimal allocation results in perfect efficiency, it also results in very poor fairness. Likewise, allocating according to only the users' preference weights results in high fairness, but mediocre efficiency. Intuition would suggest that efficiency and fairness are mutually exclusive. Surprisingly, the Nash equilibrium, reached by each user iteratively applying the best-response algorithm to adapt his bids, achieves nearly the efficiency of the social optimum and nearly the fairness of the weight-proportional allocation: the efficiency is ≥ 0.90, the utility uniformity is ≥ 0.65, and the envy-freeness is ≥ 0.97, independent of the number of users in the system. In addition, the time to converge to the equilibrium is ≤ 5 iterations when all users use the best-response strategy. The local adjustment algorithm performs similarly when there is sufficient competitiveness, but takes 25 to 90 iterations to stabilize. As a result, we believe that shared distributed systems based on the fixed-budget game can be highly decentralized, yet achieve a high degree of efficiency and fairness. The rest of the paper is organized as follows. We describe the model in Section 2 and derive the performance at the Nash equilibria for the infinite parallelism model in Section 3. In Section 4, we describe algorithms for users to optimize their own utility in the fixed-budget game. In Section 5, we describe our simulator and simulation results. We describe related work in Section 6. We conclude by discussing some limitations of our model and future work in Section 7. 2. THE MODEL Price-Anticipating Resource Allocation. We study the problem of allocating a set
of divisible resources (or machines). Suppose that there are m users and n machines. Each machine can be continuously divided for allocation to multiple users. An allocation scheme ω = (r_1, ..., r_m), where r_i = (r_i1, ..., r_in) with r_ij representing the share of machine j allocated to user i, satisfies r_ij ≥ 0 and Σ_{i=1}^m r_ij ≤ 1 for any 1 ≤ i ≤ m and 1 ≤ j ≤ n. Let Ω denote the set of all allocation schemes. We consider the price-anticipating mechanism in which each user places a bid on each machine, and the price of a machine is determined by the total bids placed on it. Formally, suppose that user i submits a non-negative bid x_ij to machine j. The price of machine j is then set to Y_j = Σ_{i=1}^m x_ij, the total of the bids placed on machine j. Consequently, user i receives a fraction r_ij = x_ij/Y_j of machine j. When Y_j = 0, i.e. when there is no bid on a machine, the machine is not allocated to anyone. We call x_i = (x_i1, ..., x_in) the bidding vector of user i. In addition, each user i has a budget constraint X_i: user i's total bids must sum to his budget, i.e. Σ_{j=1}^n x_ij = X_i. The budget constraint reflects the fact that users do not have unlimited money. Utility Functions. Each user i's utility is represented by a function U_i of the fractions (r_i1, ..., r_in) the user receives from each machine. Given the problem domain we consider, we assume that each user has different and relatively independent preferences for different machines. Therefore, the basic utility function we consider is the linear utility function: U_i(r_i1, ..., r_in) = w_i1 r_i1 + ... + w_in r_in, where w_ij ≥ 0 is user i's private preference, also called his weight, on machine j.
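The mechanics of the mechanism are simple to state in code. The following minimal Python sketch (the function names and example numbers are our own illustration, not from the paper) derives each user's shares and linear utility from a bid matrix:

```python
def allocate(bids):
    """bids[i][j] = bid x_ij of user i on machine j.
    Returns shares[i][j] = x_ij / Y_j, where Y_j is the total bid on j
    (an unbid machine is allocated to no one, modeled here as share 0)."""
    m, n = len(bids), len(bids[0])
    prices = [sum(bids[i][j] for i in range(m)) for j in range(n)]  # Y_j
    return [[bids[i][j] / prices[j] if prices[j] > 0 else 0.0
             for j in range(n)] for i in range(m)]

def linear_utility(weights, shares):
    """U_i = sum_j w_ij * r_ij for a single user."""
    return sum(w * r for w, r in zip(weights, shares))

# Two users and two machines; each user has budget X_i = 1 spread over the machines.
bids = [[0.7, 0.3],
        [0.3, 0.7]]
shares = allocate(bids)               # each machine's price is Y_j = 1.0
u1 = linear_utility([0.8, 0.2], shares[0])   # user 1's weights sum to 1
```

Here user 1's share of machine 1 is 0.7/1.0 = 0.7, so his utility is 0.8·0.7 + 0.2·0.3 = 0.62, illustrating how shares scale with the ratio of one's own bid to the machine's price.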
For example, suppose machine 1 has a faster CPU but less memory than machine 2, and user 1 runs CPU-bound applications while user 2 runs memory-bound applications. As a result, w_11 > w_12 and w_21 < w_22. This definition of utility functions corresponds to the user having enough jobs, or enough parallelism within jobs, to utilize all the machines. Consequently, the user's goal is to grab as much of each resource as possible. We call this the infinite parallelism model. In practice, a user's application may have an inherent limit on parallelization (e.g., some computations must be done sequentially) or there may be a system limit (e.g., the application's data is being served from a file server with limited capacity). To model this, we also consider the more realistic finite parallelism model, where user i's parallelism is bounded by k_i and his utility U_i is the sum of the k_i largest w_ij r_ij. In this model, the user submits bids to at most k_i machines. This abstraction captures the essence of the problem and facilitates our analysis. In Section 7, we discuss the limitations of the above definition of utility functions. Best Response. As is typical, we assume that users are selfish and strategic: each acts to maximize his own utility, as defined by his utility function. From the perspective of user i, if the total bid of the other users on each machine j is y_j, then the best response of user i to the system is the solution of the following optimization problem: maximize U_i(x_i1/(x_i1 + y_1), ..., x_in/(x_in + y_n)) subject to Σ_{j=1}^n x_ij = X_i and x_ij ≥ 0. The difficulty of this optimization problem depends on the formulation of U_i. We show later how to solve it for the infinite parallelism model and provide a heuristic for the finite parallelism model. Nash Equilibrium. By the assumption that users are selfish, each user's bidding vector is the best response to the system. The question we are most interested in is whether there exists a
collection of bidding vectors, one for each user, such that each user's bidding vector is the best response to those of the other users. Such a state is known as a Nash equilibrium, a central concept in Game Theory. Formally, the bidding vectors x_1, ..., x_m form a Nash equilibrium if for any 1 ≤ i ≤ m, x_i is the best response to the system, i.e., for any other bidding vector x'_i, U_i(x_1, ..., x_i, ..., x_m) ≥ U_i(x_1, ..., x'_i, ..., x_m). The Nash equilibrium is desirable because it is a stable state at which no one has an incentive to change his strategy. But a game may not have an equilibrium. Indeed, a Nash equilibrium may not exist in the price-anticipating scheme defined above. This can be shown by a simple example with two players and two machines. Let U_1(r_1, r_2) = r_1 and U_2(r_1, r_2) = r_1 + r_2. Then player 1 should never bid on machine 2 because it has no value to him. Now, player 2 must place a positive bid on machine 2 to claim the machine, but any positive bid can be profitably lowered, so player 2 has no best response and no Nash equilibrium exists. We note that even a mixed strategy equilibrium does not exist in this example. Clearly, this happens whenever there is a resource that is wanted by only one player. To rule out this case, we consider strongly competitive games.1 Under the infinite parallelism model, a game is called strongly competitive if for any 1 ≤ j ≤ n, there exist i ≠ k such that w_ij, w_kj > 0. Under this condition we have (see [5] for a proof): Theorem 1. There always exists a pure strategy Nash equilibrium in a strongly competitive game. Given the existence of the Nash equilibrium, the next important question concerns the performance at a Nash equilibrium, which is often measured by its efficiency and fairness. Efficiency (Price of Anarchy). For an allocation scheme ω ∈ Ω, denote by U(ω) = Σ_i U_i(r_i) the social welfare under ω. Let U* = max_{ω∈Ω}
U(ω) denote the optimal social welfare, i.e., the maximum possible aggregate user utility. The efficiency of an allocation scheme ω is defined as π(ω) = U(ω)/U*. Let Ω0 denote the set of allocations at Nash equilibria. When a Nash equilibrium exists, i.e. Ω0 ≠ ∅, define the efficiency of a game Q to be π(Q) = min_{ω∈Ω0} π(ω). It is usually the case that π < 1, i.e. there is an efficiency loss at a Nash equilibrium. This is the price of anarchy [18] paid for not having central enforcement of the users' good behavior. This price is interesting because central control results in the best possible outcome, but is not feasible in most cases. Fairness. While the definition of efficiency is standard, there are multiple ways to define fairness. We consider two metrics. One compares the users' utilities. The utility uniformity τ(ω) of an allocation scheme ω is defined to be min_i U_i(ω) / max_i U_i(ω), the ratio of the minimum utility to the maximum utility among the users. This definition (or the utility discrepancy, defined analogously as max_i U_i(ω) / min_i U_i(ω)) is used extensively in the Computer Science literature. Under this definition, the utility uniformity τ(Q) of a game Q is defined to be τ(Q) = min_{ω∈Ω0} τ(ω). The other metric, studied extensively in Economics, is the concept of envy-freeness [25]. Unlike utility uniformity, envy-freeness concerns how a user perceives the value of the share assigned to him, compared to the shares other users receive. Within this framework, define the envy-freeness of an allocation scheme ω by ρ(ω) = min_{i,j} U_i(r_i)/U_i(r_j). 1Alternatives include adding a reservation price or limiting the lowest allowable bid on each machine. These alternatives, however, introduce the problem of choosing the right price or limit. When ρ(ω)
≥ 1, the scheme is known as an envy-free allocation scheme. Likewise, the envy-freeness ρ(Q) of a game Q is defined to be ρ(Q) = min_{ω∈Ω0} ρ(ω). 3. NASH EQUILIBRIUM In this section, we present some theoretical results regarding the performance at Nash equilibria under the infinite parallelism model. We assume that the game is strongly competitive to guarantee the existence of equilibria. For a meaningful discussion of efficiency and fairness, we assume that the users are symmetric, requiring that X_i = 1 and Σ_{j=1}^n w_ij = 1 for all 1 ≤ i ≤ m. Informally, we require that all users have the same budget and that each would derive the same utility from owning all the resources. This precludes the case in which a user has an extremely high budget, resulting in very low efficiency or low fairness at equilibrium. We first provide a characterization of the equilibria. By definition, the bidding vectors x_1, ..., x_m form a Nash equilibrium if and only if each player's strategy is the best response to the group's bids. Since U_i is a linear function and the domain of each user's bids, {(x_i1, ..., x_in) | Σ_j x_ij = X_i and x_ij ≥ 0}, is a convex set, the optimality condition is that there exists λ_i > 0 such that ∂U_i/∂x_ij = w_ij (Y_j − x_ij)/Y_j² = λ_i if x_ij > 0, and < λ_i if x_ij = 0. (1) Intuitively, at an equilibrium, each user has the same marginal value on the machines where he places positive bids and a lower marginal value on the machines where he does not bid. Under the infinite parallelism model, it is easy to compute the social optimum U* because it is achieved by allocating each machine wholly to the user who has the maximum weight on that machine, i.e.
U* = Σ_{j=1}^n max_{1≤i≤m} w_ij. 3.1 Two-player Games We first show that even in the simplest nontrivial case, with two users and two machines, the game has interesting properties. We start with two special cases to provide some intuition about the game. The weight matrices are shown in Figures 1(a) and (b), which correspond respectively to the equal-weight and opposite-weight games. Let x and y denote the respective bids of users 1 and 2 on machine 1. Denote s = x + y and δ = (2 − s)/s. Equal-weight game. In Figure 1(a), both users have equal valuations for the two machines. By the optimality condition (1), for the bid vectors to be in equilibrium, they need to satisfy α y/(x + y)² = (1 − α)(1 − y)/(2 − x − y)² and α x/(x + y)² = (1 − α)(1 − x)/(2 − x − y)². By simplifying these equations, we obtain x = y = α (equivalently, δ = 1/α − 1). Thus, there exists a unique Nash equilibrium of the game, at which the two users have the same bidding vector. At the equilibrium, the utility of each user is 1/2, and the social welfare is 1. On the other hand, the social optimum is clearly 1. Thus, the equal-weight game is ideal: the efficiency, utility uniformity, and envy-freeness are all 1. Figure 1: Two special cases of two-player games: (a) the equal-weight game, in which u_1 and u_2 both have weights (α, 1 − α) on machines (m_1, m_2); (b) the opposite-weight game, in which u_1 has weights (α, 1 − α) and u_2 has weights (1 − α, α). Opposite-weight game. The situation is different for the opposite-weight game, in which the two users place exactly opposite weights on the two machines. Assume that α ≥ 1/2. Similarly, for the bid vectors to be at equilibrium, they need to satisfy α y/(x + y)² = (1 − α)(1 − y)/(2 − x − y)² and (1 − α) x/(x + y)² = α(1 − x)/(2 − x − y)². By simplifying the above
equations, we find that each Nash equilibrium corresponds to a nonnegative root of the cubic equation f(δ) = δ³ − cδ² + cδ − 1 = 0, where c = 1/(2α(1 − α)) − 1. Clearly, δ = 1 is a root of f(δ). When δ = 1, we have x = α and y = 1 − α, which is the symmetric equilibrium consistent with our intuition: each user places a bid proportional to his preference for the machine. At this equilibrium, U = 2 − 4α(1 − α), U* = 2α, and U/U* = 2α + 1/α − 2, which is minimized at α = √2/2 with minimum value 2√2 − 2 ≈ 0.828. However, when α is large enough, there exist two other roots, corresponding to less intuitive asymmetric equilibria. Intuitively, an asymmetric equilibrium arises when user 1 values machine 1 a lot, but by placing even a relatively small bid on machine 1, he can get most of the machine because user 2 values machine 1 very little, and thus places an even smaller bid. In this case, user 1 gets most of machine 1 and almost half of machine 2. The threshold is at f'(1) = 0, i.e.
when c = 3, or equivalently 1/(2α(1 − α)) = 4. This solves to α_0 = (2 + √2)/4 ≈ 0.854. The asymmetric equilibria with δ ≠ 1 are bad, as they yield lower efficiency than the symmetric equilibrium. Let δ_0 be the minimum root. When α → 1, c → +∞, and δ_0 = 1/c + o(1/c) → 0. Then x, y → 1. Thus, U → 3/2, U* → 2, and U/U* → 0.75. From this simple game, we already observe that the Nash equilibrium may not be unique, in contrast to many congestion games in which the Nash equilibrium is unique. For the general two-player game, we can show that 0.75 is in fact the worst-case efficiency bound, with a proof in [5]. Further, at the asymmetric equilibrium, the utility uniformity approaches 1/2 as α → 1. This is the worst possible for two-player games because, as we show in Section 3.2, a user's utility at any Nash equilibrium is at least 1/m in the m-player game. Another consequence is that the two-player game is always envy-free. Suppose that the two users' shares are r_1 = (r_11, ..., r_1n) and r_2 = (r_21, ..., r_2n) respectively. Then U_1(r_1) + U_1(r_2) = U_1(r_1 + r_2) = U_1(1, ..., 1) = 1 because r_1j + r_2j = 1 for all 1 ≤ j ≤ n. Since U_1(r_1) ≥ 1/2, we have U_1(r_1) ≥ U_1(r_2), i.e. any equilibrium allocation is envy-free. Theorem 2. For a two-player game, π(Q) ≥ 3/4, τ(Q) ≥ 0.5, and ρ(Q) = 1. All the bounds are tight in the worst case. 3.2 Multi-player Game For large numbers of players, the loss in social welfare can unfortunately be large. The following example shows the worst-case bound. Consider a system with m = n² + n players and n machines. Of the players, n² have the same weights on all the machines, i.e.
1/n on each machine. The other n players have weight 1, each on a different machine, and 0 (or a sufficiently small ε) on all the other machines. Clearly, U* = n. The following allocation is an equilibrium: the first n² players evenly distribute their money among all the machines, while the other n players invest all of their money on their respective favorite machines. Hence, the total money on each machine is n + 1. At this equilibrium, each of the first n² players receives a share (1/n)/(n + 1) of each machine, for a utility of (1/n) · (1/n)/(n + 1) = 1/(n²(n + 1)) per machine; summed over all n machines and all n² such players, this gives a total utility of n³ · 1/(n²(n + 1)) = n/(n + 1) < 1. Each of the other n players receives a share 1/(n + 1) of his favorite machine, for a combined utility of n · 1/(n + 1) < 1. Therefore, the total utility at the equilibrium is < 2, while the social optimum is n = Θ(√m). This bound is the worst possible. What about the utility uniformity of the multi-player allocation game? We next show that the utility uniformity of the m-player allocation game is at least 1/m. Let (S_1, ..., S_n) be the current total bids on the n machines, excluding user i.
User i can ensure a utility of 1/m by distributing his budget proportionally to the current bids. That is, user i, by bidding s_ij = X_i S_j / Σ_{k=1}^n S_k on machine j, obtains a resource share of r_ij = s_ij/(s_ij + S_j) = 1/(1 + Σ_{k=1}^n S_k), where Σ_{j=1}^n S_j = Σ_{k=1}^m X_k − X_i = m − 1. Therefore, r_ij = 1/(1 + m − 1) = 1/m. The total utility of user i is then Σ_{j=1}^n r_ij w_ij = (1/m) Σ_{j=1}^n w_ij = 1/m. Since each user's utility cannot exceed 1, the minimum possible uniformity is 1/m. While the utility uniformity can be small, the envy-freeness, on the other hand, is bounded by the constant 2√2 − 2 ≈ 0.828, as shown in [29]. To summarize, we have: Theorem 3. For the m-player game Q, π(Q) = Ω(1/√m), τ(Q) ≥ 1/m, and ρ(Q) ≥ 2√2 − 2. All of these bounds are tight in the worst case. 4. ALGORITHMS In the previous section, we presented the performance bounds of the game under the infinite parallelism model. However, the more interesting questions in practice are how the equilibrium can be reached and what the performance is at the Nash equilibrium for typical distributions of utility functions. In particular, we would like to know whether the intuitive strategy of each player constantly re-adjusting his bids according to the best-response algorithm leads to the equilibrium. To answer these questions, we resort to simulations. In this section, we present the algorithms that we use to compute or approximate the best response and the social optimum in our experiments. We consider both the infinite parallelism and finite parallelism models. 4.1 Infinite Parallelism Model As we mentioned before, it is easy to compute the social optimum under the infinite parallelism model: we simply assign each machine to the user who likes it the most. We now present the algorithm for computing the best response. Recall that for weights w_1, ..., w_n, total bids y_1, ...
, y_n, and budget X, the best response solves the following optimization problem: maximize U = Σ_{j=1}^n w_j x_j/(x_j + y_j) subject to Σ_{j=1}^n x_j = X and x_j ≥ 0. To compute the best response, we first sort the machines by w_j/y_j in decreasing order. Without loss of generality, suppose that w_1/y_1 ≥ w_2/y_2 ≥ ... ≥ w_n/y_n. Suppose that x* = (x*_1, ..., x*_n) is the optimal solution. We show that if x*_i = 0, then x*_j = 0 for any j > i. Suppose this were not true, i.e. x*_j > 0 for some j > i. Then ∂U/∂x_j(x*) = w_j y_j/(x*_j + y_j)² < w_j y_j/y_j² = w_j/y_j ≤ w_i/y_i = ∂U/∂x_i(x*), contradicting the optimality condition (1). Let k = max{i | x*_i > 0}. Again by the optimality condition, there exists λ such that w_i y_i/(x*_i + y_i)² = λ for 1 ≤ i ≤ k, and x*_i = 0 for i > k. Equivalently, x*_i = √(w_i y_i/λ) − y_i for 1 ≤ i ≤ k, and x*_i = 0 for i > k. Substituting into Σ_{i=1}^n x*_i = X, we can solve for λ = (Σ_{i=1}^k √(w_i y_i))² / (X + Σ_{i=1}^k y_i)². Thus, x*_i = [√(w_i y_i) / Σ_{l=1}^k √(w_l y_l)] (X + Σ_{l=1}^k y_l) − y_i. The remaining question is how to determine k: it is the largest value such that x*_k > 0. We thus obtain the following algorithm to compute the best response of a user: 1. Sort the machines in decreasing order of w_i/y_i. 2. Find the largest k such that [√(w_k y_k) / Σ_{i=1}^k √(w_i y_i)] (X + Σ_{i=1}^k y_i) − y_k ≥ 0. 3. Set x_j = 0 for j > k, and for 1 ≤ j ≤ k set x_j = [√(w_j y_j) / Σ_{i=1}^k √(w_i y_i)] (X + Σ_{i=1}^k y_i) − y_j. The computational complexity of this algorithm is O(n log n), dominated by the sorting. In practice, the best response can be computed infrequently (e.g.
once a minute), so for a typical modern host this cost is negligible. The best-response algorithm must send and receive O(n) messages because the user must obtain the total bids at each host. In practice, this is more significant than the computational cost. Note that hosts reveal to users only the sum of the bids on them. As a result, hosts do not reveal one user's private preferences, or even his individual bids, to another user. 4.2 Finite Parallelism Model Recall that in the finite parallelism model, each user i places bids on at most k_i machines. Of course, the infinite parallelism model is just the special case of the finite parallelism model in which k_i = n for all i. In the finite parallelism model, computing the social optimum is no longer trivial, due to the bounded parallelism. It can instead be computed using a maximum weight matching algorithm. Consider the weighted complete bipartite graph G = U × V, where U = {u_iℓ | 1 ≤ i ≤ m and 1 ≤ ℓ ≤ k_i} contains k_i copies of each user i, V = {v_1, v_2, ..., v_n} contains one node per machine, and edge (u_iℓ, v_j) has weight w_ij. A matching of G is a set of edges with disjoint nodes, and the weight of a matching is the total weight of its edges. As a result, the following lemma holds. Lemma 1. The social optimum is the same as the maximum weight matching of G.
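Lemma 1 can be checked directly on small instances. The brute-force Python sketch below is our own illustration of the replicated-user construction (it enumerates matchings exhaustively, unlike the polynomial-time Hungarian algorithm the paper implements), so it is only suitable for tiny m, n, and k_i:

```python
from itertools import permutations

def social_optimum(weights, k):
    """Max-weight matching of the bipartite graph G in Lemma 1, by brute force.
    weights[i][j] = w_ij; k[i] = parallelism bound k_i of user i.
    Each user i appears as k[i] copy-nodes; each machine matches at most one copy."""
    copies = [i for i, ki in enumerate(k) for _ in range(ki)]  # user id per copy
    n = len(weights[0])
    # Pad with None so a machine may stay unmatched when there are few copies.
    nodes = copies + [None] * max(0, n - len(copies))
    best = 0.0
    for perm in permutations(nodes, n):  # perm[j] = copy assigned to machine j
        total = sum(weights[u][j] for j, u in enumerate(perm) if u is not None)
        best = max(best, total)
    return best

# Two users, three machines, each user limited to one machine (k_i = 1):
w = [[0.9, 0.5, 0.1],
     [0.8, 0.4, 0.2]]
opt = social_optimum(w, [1, 1])  # e.g. user 1 on machine 1, user 2 on machine 2
```

With k = [1, 1] the optimum is 0.9 + 0.4 = 1.3; raising the bounds to k = [2, 2] lets user 1 also take machine 2, and the optimum grows to 1.6, matching the intuition that the infinite parallelism model is the k_i = n special case.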
Thus, we can use a maximum weight matching algorithm to compute the social optimum. Maximum weight matching is a classical network problem and can be solved in polynomial time [8, 9, 14]. We choose to implement the Hungarian algorithm [14, 19] because of its simplicity. There may exist a more efficient algorithm for computing the maximum matching that exploits the special structure of G; this remains an interesting open question. However, we do not know an efficient algorithm to compute the best response under the finite parallelism model. Instead, we provide the following local search heuristic. Suppose we again have n machines with weights w_1, ..., w_n and total bids y_1, ..., y_n. Let the user's budget be X and the parallelism bound be k. Our goal is to compute an allocation of X to up to k machines that maximizes the user's utility. For a subset A of the machines, denote by x(A) the best response on A without a parallelism bound, and by U(A) the utility obtained by the best response algorithm. The local search works as follows: 1. Set A to be the k machines with the highest w_i/y_i. 2. Compute U(A) with the infinite parallelism best-response algorithm (Sec 4.1) on A.
3. For each i ∈ A and each j ∉ A, repeat: 4. Let B = A − {i} + {j} and compute U(B). 5. If U(B) > U(A), let A ← B and go to step 2. 6. Output x(A). Intuitively, the local search heuristic tests whether we can swap a machine in A for one not in A to improve the best-response utility. If yes, we swap the machines and repeat the process. Otherwise, we have reached a local maximum and output that allocation. We suspect that the local maximum this algorithm finds is also the global maximum (with respect to an individual user) and that the process terminates after a small number of iterations, but we are unable to prove it. However, in our simulations, this algorithm quickly converges to a high (≥ 0.7) efficiency. 4.3 Local Greedy Adjustment The above best-response algorithms only work for the linear utility functions described earlier. In practice, utility functions may have a more complicated form or, even worse, a user may not have a formulation of his utility function at all. We do assume that the user still has a way to measure his utility, which is the minimum assumption necessary for any market-based resource allocation mechanism. In these situations, users can use a more general strategy, the local greedy adjustment method, which works as follows. A user finds the two machines that provide him with the highest and lowest marginal utility. He then moves a fixed small amount of money from the machine with the lower marginal utility to the machine with the higher one. This strategy aims to adjust the bids so that the marginal utilities on all machines being bid on are the same, a condition that guarantees an optimal allocation when the utility function is concave. The tradeoff is that local greedy adjustment takes longer to stabilize than best response. 5. SIMULATION RESULTS While the analytic results provide us with worst-case analysis for the infinite parallelism model, in this section we employ simulations to study the
properties of the Nash equilibria in more realistic scenarios and for the finite parallelism model. First, we determine whether the user bidding process converges, and if so, what the rate of convergence is. Second, in cases of convergence, we look at the performance at equilibrium, using the efficiency and fairness metrics defined above. Iterative Method. In our simulations, each user starts with an initial bid vector and then iteratively updates his bids until a convergence criterion (described below) is met. The initial bid is set proportional to the user's weights on the machines. We experiment with two update methods: the best-response methods described in Sections 4.1 and 4.2, and the local greedy adjustment method described in Section 4.3. Convergence Criteria. Convergence time measures how quickly the system reaches equilibrium. It is particularly important in the highly dynamic environment of distributed shared clusters, in which the system's conditions may change before the equilibrium is reached. Thus, a high convergence rate may be more significant than the efficiency at the equilibrium. There are several different criteria for convergence. The strongest criterion requires that there be only negligible change in the bids of each user. The problem with this criterion is that it is too strict: users may see negligible change in their utilities, but according to this definition the system has not converged. The less strict utility gap criterion requires only negligible change in the users' utilities. Given users' concern for utility, this is a more natural definition. Indeed, in practice, a user is probably not willing to re-allocate his bids dramatically for a small utility gain. Therefore, we use the utility gap criterion to measure convergence time for the best-response update method, i.e.
we consider that the system has converged if the utility gap of each user is smaller than ε (0.001 in our experiments). However, this criterion does not work for the local greedy adjustment method, because its users experience constant fluctuations in utility as they move money around. For this method, we use the marginal utility gap criterion: we compare the highest and lowest marginal utilities across the machines, and if the difference is negligible, we consider the system to have converged. In addition to convergence to the equilibrium, we also consider a criterion from the system provider's point of view, the social welfare stabilization criterion. Under this criterion, a system has stabilized if the change in social welfare is ≤ ε, even if individual users' utilities have not converged. This criterion is useful for evaluating how quickly the system as a whole reaches a particular efficiency level. User preferences. We experiment with two models of user preferences: random distribution and correlated distribution. With random distribution, users' weights on the different machines are independently and identically distributed, according to the uniform distribution. In practice, users' preferences are probably correlated, based on factors like the hosts' locations and the types of applications that users run. To capture these correlations, we associate with each user and each machine a resource profile vector in which each dimension represents one resource (e.g., CPU, memory, and network bandwidth). For a user i with profile p_i = (p_i1, ..., p_iℓ), p_ik represents user i's need for resource k. For a machine j with profile q_j = (q_j1, ..., q_jℓ), q_jk represents machine j's strength with respect to resource k. Then w_ij is the dot product of user i's and machine j's resource profiles, i.e.
w_ij = p_i · q_j = Σ_{k=1}^ℓ p_ik q_jk. By using these profiles, we compress the parameter space and introduce correlations between users and machines. In the following simulations, we fix the number of machines at 100 and vary the number of users from 5 to 250 (but we only report the results for the range of 5-150 users, since the results remain similar for larger numbers of users). Sections 5.1 and 5.2 present the simulation results when we apply the infinite parallelism and finite parallelism models, respectively. If the system converges, we report the number of iterations until convergence. A convergence time of 200 iterations indicates non-convergence, in which case we report the efficiency and fairness values at the point at which we terminate the simulation. 5.1 Infinite parallelism In this section, we apply the infinite parallelism model, which assumes that users can use an unlimited number of machines. We present the efficiency and fairness at the equilibrium, compared to two baseline allocation methods: the social optimum, and weight-proportional allocation, in which users distribute their bids proportionally to their weights on the machines (which may intuitively seem a reasonable distribution method). We present results for the two user preference models. With uniform preferences, users' weights for the different machines are independently and identically distributed according to the uniform distribution, U ~ (0, 1) (and are normalized thereafter). With correlated preferences, each user's and each machine's resource profile vector has three dimensions, and their values are also taken from the uniform distribution, U ~ (0, 1). Convergence Time. Figure 2 shows the convergence time, efficiency, and fairness of the infinite parallelism model under uniform (left) and correlated (right) preferences. Plots (a) and (b) show the convergence and stabilization time of the best-response and local greedy adjustment methods. Figure 2: Efficiency, utility uniformity, envy-freeness, and convergence time as a function of the number of users under the infinite parallelism model, with uniform and correlated preferences. n = 100. Figure 3: Efficiency level over time under the infinite parallelism model. Number of users = 40. n = 100. The best-response algorithm converges within a few iterations for any number of users. In contrast, the local greedy adjustment algorithm does not converge even within 500 iterations when the number of users is smaller than 60, but does converge for larger numbers of users. We believe that for small numbers of users, there are dependency cycles among
Regardless, the local greedy adjustment method stabilizes within 100 iterations. Figure 3 presents the efficiency over time for a system with 40 users. It demonstrates that while both adjustment methods reach the same social welfare, the best-response algorithm is faster. In the remainder of this paper, we refer simply to the (Nash) equilibrium, independent of the adjustment method used to reach it.

Efficiency. Figures 2(c) and (d) present the efficiency as a function of the number of users. We present the efficiency at equilibrium, and use the social optimum and the weight-proportional static allocation methods for comparison. The social optimum provides an efficient allocation by definition. For both user preference models, the efficiency at the equilibrium is approximately 0.9, independent of the number of users, which is only slightly worse than the social optimum. The efficiency at the equilibrium is a ≈50% improvement over the weight-proportional allocation method for uniform preferences, and a ≈30% improvement for correlated preferences.

Fairness. Figures 2(e) and (f) present the utility uniformity as a function of the number of users, and Figures 2(g) and (h) present the envy-freeness. While the social optimum yields perfect efficiency, it has poor fairness. The weight-proportional method achieves the highest fairness among the three allocation methods, but the fairness at the equilibrium is close to it. Utility uniformity is slightly better at the equilibrium under uniform preferences (> 0.7) than under correlated preferences (> 0.6): when users' preferences are more aligned, one user's happiness is more likely to come at the expense of another's. Although utility uniformity decreases in the number of users, it remains reasonable even for a large number of users, and flattens out at some point.
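The three measures compared above can be computed directly from an allocation matrix. The sketch below uses our assumed formulations — utility uniformity as the min/max ratio of realized utilities, and envy-freeness as the worst-case ratio of a user's utility from his own bundle to his utility from any other user's bundle — which are consistent with the discussion here but may differ in detail from the paper's earlier definitions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 100                                   # users, machines
w = rng.uniform(size=(m, n))
w /= w.sum(axis=1, keepdims=True)                # normalized uniform preferences

def social_opt():
    # welfare-maximizing for linear utilities: each machine goes to its top user
    x = np.zeros((m, n))
    x[w.argmax(axis=0), np.arange(n)] = 1.0
    return x

def weight_proportional():
    # each user's share of machine j is proportional to his weight on j
    return w / w.sum(axis=0, keepdims=True)

def metrics(x):
    vals = w @ x.T                               # vals[i, k]: i's utility from k's bundle
    u = np.diag(vals)                            # each user's own utility
    efficiency = u.sum() / (w * social_opt()).sum()
    uniformity = u.min() / u.max()
    envy_freeness = (u[:, None] / np.maximum(vals, 1e-12)).min()
    return efficiency, uniformity, envy_freeness
```

Running `metrics` on the two baselines reproduces the qualitative tradeoff described above: the social optimum scores 1.0 on efficiency but can score arbitrarily poorly on the fairness measures, while weight-proportional allocation trades efficiency for fairness.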
At the social optimum, utility uniformity can be infinitely poor, as some users may be allocated no resources at all. The same is true with respect to envy-freeness. The difference between uniform and correlated preferences is best demonstrated in the social optimum results. When the number of users is small, it may be possible to satisfy all users to some extent if their preferences are not aligned; but if they are aligned, even with a very small number of users some users get no resources, so both utility uniformity and envy-freeness go to zero. As the number of users increases, it becomes almost impossible to satisfy all users, whether or not their preferences are correlated. These results demonstrate the tradeoff between the different allocation methods. The efficiency at the equilibrium is lower than at the social optimum, but the equilibrium performs much better with respect to fairness. The equilibrium allocation is completely envy-free under uniform preferences and almost envy-free under correlated preferences.

5.2 Finite parallelism

Figure 4: Convergence time under the finite parallelism model. n = 100.

Figure 5: Efficiency level over time under the finite parallelism model with the local search algorithm. n = 100.

We also consider the finite parallelism model and use the local search algorithm, as described in Section 4.2, to adjust users' bids. We again experimented with both the uniform and correlated preference distributions and did not find significant differences in the results, so we present the simulation results for only the uniform distribution. In our experiments, the local search algorithm stops quickly: it usually discovers a local maximum within two iterations. As mentioned before, we cannot prove that a local maximum is the global maximum, but our experiments indicate that the local search heuristic leads to high efficiency.
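Alongside the subset-choosing local search, the local greedy adjustment method used throughout the comparisons can be sketched as a single bid-shifting step. The step size, in-place update, and tie-breaking below are our assumptions, not the paper's exact Section 4.3 procedure.

```python
import numpy as np

def greedy_step(w, bids, i, step=0.01):
    """One local greedy adjustment for user i: shift `step` of budget from the
    funded machine with the lowest marginal utility to the machine with the
    highest.  Under proportional sharing, the marginal utility of bidding on
    machine j is d/db [w_ij * b / (b + o_j)] = w_ij * o_j / (b + o_j)^2."""
    others = bids.sum(axis=0) - bids[i]                    # competitors' bids
    denom = np.maximum(bids[i] + others, 1e-12)            # guard empty machines
    marg = w[i] * others / denom ** 2
    src = np.argmin(np.where(bids[i] > 0, marg, np.inf))   # worst funded machine
    dst = np.argmax(marg)                                  # best machine
    delta = min(step, bids[i, src])
    bids[i, src] -= delta                                  # in-place budget shift
    bids[i, dst] += delta
    return bids
```

Each step conserves the user's budget, which is why repeated steps can stabilize (hold total allocations roughly fixed) even when the bid vector itself does not converge.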
Convergence time. Let Δ denote the parallelism bound that limits the maximum number of machines each user can bid on. We experiment with Δ = 5 and Δ = 20. In both cases, we use 100 machines and vary the number of users. Figure 4 shows that the system does not always converge, but when it does, convergence happens quickly. Non-convergence occurs when the number of users is between 20 and 40 for Δ = 5, and between 5 and 10 for Δ = 20. We believe that the non-convergence is caused by moderate competition. No competition allows the system to equilibrate quickly because users do not have to change their bids in reaction to changes in others' bids. High competition also allows convergence because each user's decision has only a small impact on other users, so the system is more stable and can gradually reach convergence. Under moderate competition, however, one user's decisions may cause dramatic changes in another's decisions and cause large fluctuations in bids. In both cases of non-convergence, the ratio of competitors per machine, δ = m × Δ / n for m users and n machines, is in the interval [1, 2]. Although the system does not converge in these ranges, it nonetheless achieves and maintains a high level of overall efficiency after a few iterations (as shown in Figure 5).

Performance. In Figure 6, we present the efficiency, utility uniformity, and envy-freeness at the Nash equilibrium for the finite parallelism model. When the system does not converge, we measure performance by taking the minimum value we observe after running for many iterations. When Δ = 5, there is a performance drop, in particular with respect to the fairness metrics, in the range between 20 and 40 users (where the system does not converge). For a larger number of users, the system converges and achieves a lower level of utility uniformity, but a high degree of efficiency and envy-freeness, similar to those under the infinite parallelism model.
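The competition ratio is a one-line check; the helper name below is ours. Both reported non-convergent bands land exactly in the interval [1, 2].

```python
def competition_ratio(m, parallelism_bound, n=100):
    """delta = m * Δ / n: competitors per machine for m users, n machines,
    and parallelism bound Δ."""
    return m * parallelism_bound / n

# the two non-convergent bands reported above
print(competition_ratio(20, 5), competition_ratio(40, 5))   # Δ = 5:  1.0 2.0
print(competition_ratio(5, 20), competition_ratio(10, 20))  # Δ = 20: 1.0 2.0
```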
As described above, this is due to the competition ratio falling into the problematic range identified earlier. When the parallelism bound is large (Δ = 20), the performance is closer to the infinite parallelism model, and we do not observe this drop in performance.

6. RELATED WORK

There are two main groups of related work in resource allocation: those that incorporate an economic mechanism, and those that do not. One non-economic approach is scheduling (surveyed by Pinedo [20]). Examples of this approach are queueing in first-come, first-served (FCFS) order, queueing using the resource consumption of tasks (e.g., [28]), and scheduling using combinatorial optimization [19]. These all assume that the values and resource consumption of tasks are reported accurately, which does not hold in the presence of strategic users. We view scheduling and resource allocation as two separate functions: resource allocation divides a resource among different users, while scheduling takes a given allocation and orders a user's jobs. Examples of the economic approach are Spawn [26], work by Stoica et al. [24], the Millennium resource allocator [4], work by Wellman et al. [27], Bellagio [2], and Tycoon [15]. Spawn and the work by Wellman et al.
use a reservation abstraction similar to the way airline seats are allocated. Unfortunately, reservations have a high latency to acquire resources, unlike the price-anticipating scheme we consider. The tradeoff of price-anticipating schemes is that users face uncertainty about exactly how much of the resources they will receive. Bellagio [3] uses the SHARE centralized allocator. SHARE allocates resources using a centralized combinatorial auction that allows users to express preferences with complementarities. Solving the NP-complete combinatorial auction problem provides an optimally efficient allocation. The price-anticipating scheme that we consider does not explicitly operate on complementarities, thereby possibly losing some efficiency, but it also avoids the complexity and overhead of combinatorial auctions. There have been several analyses [10, 11, 12, 13, 23] of variations of price-anticipating allocation schemes in the context of allocating network capacity for flows. Their methodology follows the study of congestion (potential) games [17, 22] by relating the Nash equilibrium to the solution of a (usually convex) global optimization problem. Those techniques no longer apply to our game, however, because we model users as having fixed budgets and private preferences for machines. For example, unlike in those games, there may exist multiple Nash equilibria in our game. Milchtaich [16] studied congestion games with private preferences, but the technique in [16] is specific to the congestion game.

7. CONCLUSIONS

This work studies the performance of a market-based mechanism for distributed shared clusters using both analytical and simulation methods. We show that despite the worst-case bounds, the system can reach a high performance level at the Nash equilibrium in terms of both efficiency and fairness metrics. In addition, with a few exceptions under the finite parallelism model, the system reaches equilibrium quickly by using the best response
algorithm and, when the number of users is not too small, by the greedy local adjustment method. While our work indicates that the price-anticipating scheme may work well for resource allocation in shared clusters, there are many interesting directions for future work. One direction is to consider more realistic utility functions. For example, we assume that there is no parallelization cost and no performance degradation when multiple users share the same machine. In practice, neither assumption may hold: a user must copy code and data to a machine before running his application there, and there is overhead for multiplexing resources on a single machine. When the job size is large enough and the degree of multiplexing is sufficiently low, we can probably ignore those effects, but they should be taken into account in a more realistic model. Another assumption is that users have infinite work, so that the more resources they can acquire, the better. In practice, users have finite work. One approach to address this is to model a user's utility according to the time to finish a task rather than the amount of resources he receives. Another direction is to study the dynamic properties of the system when users' needs change over time according to some statistical model. In addition to the usual questions concerning repeated games, it would also be important to understand how users should allocate their budgets wisely over time to accommodate future needs.

Figure 6: Efficiency, utility uniformity and envy-freeness under the finite parallelism model. n = 100.

8. ACKNOWLEDGEMENTS

We thank Bernardo Huberman, Lars Rasmusson,
Eytan Adar and Moshe Babaioff for fruitful discussions. We also thank the anonymous reviewers for their useful comments.

9. REFERENCES

[1] http://planet-lab.org.
[2] A. AuYoung, B. N. Chun, A. C. Snoeren, and A. Vahdat. Resource Allocation in Federated Distributed Computing Infrastructures. In Proceedings of the 1st Workshop on Operating System and Architectural Support for the On-demand IT InfraStructure, 2004.
[3] B. Chun, C. Ng, J. Albrecht, D. C. Parkes, and A. Vahdat. Computational Resource Exchanges for Distributed Resource Allocation. 2004.
[4] B. N. Chun and D. E. Culler. Market-based Proportional Resource Sharing for Clusters. Technical Report CSD-1092, University of California at Berkeley, Computer Science Division, January 2000.
[5] M. Feldman, K. Lai, and L. Zhang. A Price-anticipating Resource Allocation Mechanism for Distributed Shared Clusters. Technical report, arXiv, 2005. http://arxiv.org/abs/cs.DC/0502019.
[6] D. Ferguson, Y. Yemini, and C. Nikolaou. Microeconomic Algorithms for Load Balancing in Distributed Computer Systems. In International Conference on Distributed Computer Systems, pages 491-499, 1988.
[7] I. Foster and C. Kesselman. Globus: A Metacomputing Infrastructure Toolkit. The International Journal of Supercomputer Applications and High Performance Computing, 11(2):115-128, Summer 1997.
[8] M. L. Fredman and R. E. Tarjan. Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms. Journal of the ACM, 34(3):596-615, 1987.
[9] H. N. Gabow. Data Structures for Weighted Matching and Nearest Common Ancestors with Linking. In Proceedings of the 1st Annual ACM-SIAM Symposium on Discrete Algorithms, pages 434-443, 1990.
[10] B. Hajek and S. Yang. Strategic Buyers in a Sum Bid Game for Flat Networks. Manuscript, http://tesla.csl.uiuc.edu/~hajek/Papers/HajekYang.pdf, 2004.
[11] R. Johari and J. N.
Tsitsiklis. Efficiency Loss in a Network Resource Allocation Game. Mathematics of Operations Research, 2004.
[12] F. P. Kelly. Charging and Rate Control for Elastic Traffic. European Transactions on Telecommunications, 8:33-37, 1997.
[13] F. P. Kelly and A. K. Maulloo. Rate Control in Communication Networks: Shadow Prices, Proportional Fairness and Stability. Journal of the Operational Research Society, 49:237-252, 1998.
[14] H. W. Kuhn. The Hungarian Method for the Assignment Problem. Naval Research Logistics Quarterly, 2:83-97, 1955.
[15] K. Lai, L. Rasmusson, S. Sorkin, L. Zhang, and B. A. Huberman. Tycoon: an Implementation of a Distributed Market-Based Resource Allocation System. Manuscript, http://www.hpl.hp.com/research/tycoon/papers_and_presentations, 2004.
[16] I. Milchtaich. Congestion Games with Player-Specific Payoff Functions. Games and Economic Behavior, 13:111-124, 1996.
[17] D. Monderer and L. S. Shapley. Potential Games. Games and Economic Behavior, 14:124-143, 1996.
[18] C. Papadimitriou. Algorithms, Games, and the Internet. In Proceedings of the 33rd STOC, 2001.
[19] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization. Dover Publications, Inc., 1982.
[20] M. Pinedo. Scheduling. Prentice Hall, 2002.
[21] O. Regev and N. Nisan. The Popcorn Market: Online Markets for Computational Resources. In Proceedings of the 1st International Conference on Information and Computation Economies, pages 148-157, 1998.
[22] R. W. Rosenthal. A Class of Games Possessing Pure-Strategy Nash Equilibria. International Journal of Game Theory, 2:65-67, 1973.
[23] S. Sanghavi and B. Hajek. Optimal Allocation of a Divisible Good to Strategic Buyers. Manuscript, http://tesla.csl.uiuc.edu/~hajek/Papers/OptDivisible.pdf, 2004.
[24] I. Stoica, H. Abdel-Wahab, and A. Pothen. A Microeconomic Scheduler for Parallel Computers. In Proceedings of the Workshop on Job Scheduling Strategies for Parallel Processing, pages 122-135, April 1995.
[25] H. R.
Varian.\nEquity, Envy, and Efficiency.\nJournal of Economic Theory, 9:63-91, 1974.\n[26] C. A. Waldspurger, T. Hogg, B. A. Huberman, J. O. Kephart, and S. Stornetta.\nSpawn: A Distributed Computational Economy.\nIEEE Transactions on Software Engineering, 18(2):103-117, February 1992.\n[27] M. P. Wellman, W. E. Walsh, P. R. Wurman, and J. K. MacKie-Mason.\nAuction Protocols for Decentralized Scheduling.\nGames and Economic Behavior, 35:271-303, 2001.\n[28] A. Wierman and M. Harchol-Balter.\nClassifying Scheduling Policies with respect to Unfairness in an M\/GI\/1.\nIn Proceedings of the ACM SIGMETRICS 2003 Conference on Measurement and Modeling of Computer Systems, 2003.\n[29] L. Zhang.\nOn the Efficiency and Fairness of a Fixed Budget Resource Allocation Game.\nManuscript, 2004.\n136","lvl-3":"A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters\nABSTRACT\nIn this paper we formulate the fixed budget resource allocation\ngame to understand the performance of a distributed marketbased resource allocation system.\nMultiple users decide how to distribute their budget (bids) among multiple machines according to their individual preferences to maximize their individual utility.\nWe look at both the efficiency and the fairness of the allocation at the equilibrium, where fairness is evaluated through the measures of utility uniformity and envy-freeness.\nWe show analytically and through simulations that despite being highly decentralized, such a system converges quickly to an equilibrium and unlike the social optimum that achieves high efficiency but poor fairness, the proposed allocation scheme achieves a nice balance of high degrees of efficiency and fairness at the equilibrium.\n1.\nINTRODUCTION\nThe primary advantage of distributed shared clusters like the Grid [7] and PlanetLab [1] is their ability to pool together shared computational resources.\nThis allows increased throughput because of statistical multiplexing and the bursty 
utilization pattern of typical users.\nSharing nodes that are dispersed in the network allows lower delay because applications can store data close to users.\nFinally, sharing allows\ngreater reliability because of redundancy in hosts and network connections.\nHowever, resource allocation in these systems remains the major challenge.\nThe problem is how to allocate a shared resource both fairly and efficiently (where efficiency is the ratio of the achieved social welfare to the social optimal) with the presence of strategic users who act in their own interests.\nSeveral non-economic allocation algorithms have been proposed, but these typically assume that task values (i.e., their importance) are the same, or are inversely proportional to the resources required, or are set by an omniscient administrator.\nHowever, in many cases, task values vary significantly, are not correlated to resource requirements, and are difficult and time-consuming for an administrator to set.\nInstead, we examine a market-based resource allocation system (others are described in [2, 4, 6, 21, 26, 27]) that allows users to express their preferences for resources through a bidding mechanism.\nIn particular, we consider a price-anticipating [12] scheme in which a user bids for a resource and receives the ratio of his bid to the sum of bids for that resource.\nThis proportional scheme is simpler, more scalable, and more responsive [15] than auction-based schemes [6, 21, 26].\nPrevious work has analyzed price-anticipating schemes in the context of allocating network capacity for flows for users with unlimited budgets.\nIn this work, we examine a price-anticipating scheme in the context of allocating computational capacity for users with private preferences and limited budgets, resulting in a qualitatively different game (as discussed in Section 6).\nIn this paper, we formulate the fixed budget resource allocation game and study the existence and performance of the Nash equilibria of this 
game.\nFor evaluating the Nash equilibria, we consider both their efficiency, measuring how close the social welfare at equilibrium is to the social optimum, and fairness, measuring how different the users' utilities are.\nAlthough rarely considered in previous game theoretical study, we believe fairness is a critical metric for a resource allocation schemes because the perception of unfairness will cause some users to reject a system with more efficient, but less fair resource allocation in favor of one with less efficient, more fair resource allocation.\nWe use both utility uniformity and envy-freeness to measure fairness.\nUtility uniformity, which is common in Computer Science work, measures the closeness of utilities of different users.\nEnvyfreeness, which is more from the Economic perspective, measures the happiness of users with their own resources compared to the resources of others.\nOur contributions are as follows:\n\u2022 We analyze the existence and performance of\nNash equilibria.\nUsing analysis, we show that there is always a Nash equilibrium in the fixed budget game if the utility functions satisfy a fairly weak and natural condition of strong competitiveness.\nWe also show the worst case performance bounds: for m players the efficiency at equilibrium is SZ (1 \/ -, \/ m), the utility uniformity is> 1\/m, and the envyfreeness> 2 -, \/ 2--2 Pz 0.83.\nAlthough these bounds are quite low, the simulations described below indicate these bounds are overly pessimistic.\n9 We describe algorithms that allow strategic users to optimize their utility.\nAs part of the fixed budget game analysis, we show that strategic users with linear utility functions can calculate their bids using a best response algorithm that quickly results in an allocation with high efficiency with little computational and communication overhead.\nWe present variations of the best response algorithm for both finite and infinite parallelism tasks.\nIn addition, we present a local greedy 
adjustment algorithm that converges more slowly than best response, but allows for non-linear or unformulatable utility functions.\n9 We show that the price-anticipating resource allocation mechanism achieves a high degree of efficiency and fairness.\nUsing simulation, we find that although the socially optimal allocation results in perfect efficiency, it also results in very poor fairness.\nLikewise, allocating according to only users' preference weights results in a high fairness, but a mediocre efficiency.\nIntuition would suggest that efficiency and fairness are exclusive.\nSurprisingly, the Nash equilibrium, reached by each user iteratively applying the best response algorithm to adapt his bids, achieves nearly the efficiency of the social optimum and nearly the fairness of the weight-proportional allocation: the efficiency is> 0.90, the utility uniformity is> 0.65, and the envyfreeness is> 0.97, independent of the number of users in the system.\nIn addition, the time to converge to the equilibrium is <5 iterations when all users use the best response strategy.\nThe local adjustment algorithm performs similarly when there is sufficient competitiveness, but takes 25 to 90 iterations to stabilize.\nAs a result, we believe that shared distributed systems based on the fixed budget game can be highly decentralized, yet achieve a high degree of efficiency and fairness.\nThe rest of the paper is organized as follows.\nWe describe the model in Section 2 and derive the performance at the Nash equilibria for the infinite parallelism model in Section 3.\nIn Section 4, we describe algorithms for users to optimize their own utility in the fixed budget game.\nIn Section 5, we describe our simulator and simulation results.\nWe describe related work in Section 6.\nWe conclude by discussing some limit of our model and future work in Section 7.\n2.\nTHE MODEL\nEfficiency (Price of Anarchy).\nFor an allocation scheme\n3.\nNASH EQUILIBRIUM\n3.1 Two-player Games\n3.2 Multi-player 
Game\n4.\nALGORITHMS\n4.1 Infinite Parallelism Model\n4.2 Finite Parallelism Model\n4.3 Local Greedy Adjustment\n5.\nSIMULATION RESULTS\n5.1 Infinite parallelism\n5.2 Finite parallelism\n6.\nRELATED WORK\nThere are two main groups of related work in resource allocation: those that incorporate an economic mechanism, and those that do not.\nOne non-economic approach is scheduling (surveyed by Pinedo [20]).\nExamples of this approach are queuing in first-come, first-served (FCFS) order, queueing using the resource consumption of tasks (e.g., [28]), and scheduling using combinatorial optimization [19].\nThese all assume that the values and resource consumption of tasks are reported accurately, which does not apply in the presence of strategic users.\nWe view scheduling and resource allocation as two separate functions.\nResource allocation divides a resource among different users while scheduling takes a given allocation and orders a user's jobs.\nExamples of the economic approach are Spawn [26]), work by Stoica, et al. [24].\n, the Millennium resource allocator [4], work by Wellman, et al. [27], Bellagio [2]), and Tycoon [15]).\nSpawn and the work by Wellman, et al. 
uses a reservation abstraction similar to the way airline seats are allocated.\nUnfortunately, reservations have a high latency to acquire resources, unlike the price-anticipating scheme we consider.\nThe tradeoff of the price-anticipating schemes is that users have uncertainty about exactly how much of the resources they will receive.\nBellagio [3] uses the SHARE centralized allocator.\nSHARE allocates resources using a centralized combinatorial auction that allows users to express preferences with complementarities.\nSolving the NP-complete combinatorial auction problem provides an optimally efficient allocation.\nThe priceanticipating scheme that we consider does not explicitly operate on complementarities, thereby possibly losing some efficiency, but it also avoids the complexity and overhead of combinatorial auctions.\nThere have been several analyses [10, 11, 12, 13, 23] of variations of price-anticipating allocation schemes in the context of allocation of network capacity for flows.\nTheir methodology follows the study of congestion (potential) games [17, 22] by relating the Nash equilibrium to the solution of a (usually convex) global optimization problem.\nBut those techniques no longer apply to our game because we model users as having fixed budgets and private preferences for machines.\nFor example, unlike those games, there may exist multiple Nash equilibria in our game.\nMilchtaich [16] studied congestion games with private preferences but the technique in [16] is specific to the congestion game.\n7.\nCONCLUSIONS\nThis work studies the performance of a market-based mechanism for distributed shared clusters using both analyatical and simulation methods.\nWe show that despite the worst case bounds, the system can reach a high performance level at the Nash equilibrium in terms of both efficiency and fairness metrics.\nIn addition, with a few exceptions under the finite parallelism model, the system reaches equilibrium quickly by using the best response 
algorithm and, when the number of users is not too small, by the greedy local adjustment method.\nWhile our work indicates that the price-anticipating scheme may work well for resource allocation for shared clusters, there are many interesting directions for future work.\nOne direction is to consider more realistic utility functions.\nFor example, we assume that there is no parallelization cost, and there is no performance degradation when multiple users share the same machine.\nIn practice, both assumptions may not be correct.\nFor examples, the user must copy code and data to a machine before running his application there, and there is overhead for multiplexing resources on a single machine.\nWhen the job size is large enough and the degree of multiplexing is sufficiently low, we can probably ignore those effects, but those costs should be taken into account for a more realistic modeling.\nAnother assumption is that users have infinite work, so the more resources they can acquire, the better.\nIn practice, users have finite work.\nOne approach to address this is to model the user's utility according to the time to finish a task rather than the amount of resources he receives.\nAnother direction is to study the dynamic properties of the system when the users' needs change over time, according to some statistical model.\nIn addition to the usual questions concerning repeated games, it would also be important to understand how users should allocate their budgets wisely over time to accomodate future needs.\nFigure 6: Efficiency, utility uniformity and envy-freeness under the finite parallelism model.\nn = 100.","lvl-4":"A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters\nABSTRACT\nIn this paper we formulate the fixed budget resource allocation\ngame to understand the performance of a distributed marketbased resource allocation system.\nMultiple users decide how to distribute their budget (bids) among multiple machines according to 
their individual preferences to maximize their individual utility.\nWe look at both the efficiency and the fairness of the allocation at the equilibrium, where fairness is evaluated through the measures of utility uniformity and envy-freeness.\nWe show analytically and through simulations that despite being highly decentralized, such a system converges quickly to an equilibrium and unlike the social optimum that achieves high efficiency but poor fairness, the proposed allocation scheme achieves a nice balance of high degrees of efficiency and fairness at the equilibrium.\n1.\nINTRODUCTION\nThe primary advantage of distributed shared clusters like the Grid [7] and PlanetLab [1] is their ability to pool together shared computational resources.\nThis allows increased throughput because of statistical multiplexing and the bursty utilization pattern of typical users.\nSharing nodes that are dispersed in the network allows lower delay because applications can store data close to users.\nFinally, sharing allows\nHowever, resource allocation in these systems remains the major challenge.\nThe problem is how to allocate a shared resource both fairly and efficiently (where efficiency is the ratio of the achieved social welfare to the social optimal) with the presence of strategic users who act in their own interests.\nHowever, in many cases, task values vary significantly, are not correlated to resource requirements, and are difficult and time-consuming for an administrator to set.\nInstead, we examine a market-based resource allocation system (others are described in [2, 4, 6, 21, 26, 27]) that allows users to express their preferences for resources through a bidding mechanism.\nIn particular, we consider a price-anticipating [12] scheme in which a user bids for a resource and receives the ratio of his bid to the sum of bids for that resource.\nPrevious work has analyzed price-anticipating schemes in the context of allocating network capacity for flows for users with 
unlimited budgets.\nIn this work, we examine a price-anticipating scheme in the context of allocating computational capacity for users with private preferences and limited budgets, resulting in a qualitatively different game (as discussed in Section 6).\nIn this paper, we formulate the fixed budget resource allocation game and study the existence and performance of the Nash equilibria of this game.\nFor evaluating the Nash equilibria, we consider both their efficiency, measuring how close the social welfare at equilibrium is to the social optimum, and fairness, measuring how different the users' utilities are.\nWe use both utility uniformity and envy-freeness to measure fairness.\nUtility uniformity, which is common in Computer Science work, measures the closeness of utilities of different users.\nEnvyfreeness, which is more from the Economic perspective, measures the happiness of users with their own resources compared to the resources of others.\nOur contributions are as follows:\n\u2022 We analyze the existence and performance of\nNash equilibria.\nUsing analysis, we show that there is always a Nash equilibrium in the fixed budget game if the utility functions satisfy a fairly weak and natural condition of strong competitiveness.\nAlthough these bounds are quite low, the simulations described below indicate these bounds are overly pessimistic.\n9 We describe algorithms that allow strategic users to optimize their utility.\nAs part of the fixed budget game analysis, we show that strategic users with linear utility functions can calculate their bids using a best response algorithm that quickly results in an allocation with high efficiency with little computational and communication overhead.\nWe present variations of the best response algorithm for both finite and infinite parallelism tasks.\nIn addition, we present a local greedy adjustment algorithm that converges more slowly than best response, but allows for non-linear or unformulatable utility functions.\n9 
We show that the price-anticipating resource allocation mechanism achieves a high degree of efficiency and fairness.\nUsing simulation, we find that although the socially optimal allocation results in perfect efficiency, it also results in very poor fairness.\nLikewise, allocating according to only users' preference weights results in a high fairness, but a mediocre efficiency.\nIntuition would suggest that efficiency and fairness are exclusive.\nSurprisingly, the Nash equilibrium, reached by each user iteratively applying the best response algorithm to adapt his bids, achieves nearly the efficiency of the social optimum and nearly the fairness of the weight-proportional allocation.\nIn addition, the time to converge to the equilibrium is < 5 iterations when all users use the best response strategy.\nAs a result, we believe that shared distributed systems based on the fixed budget game can be highly decentralized, yet achieve a high degree of efficiency and fairness.\nThe rest of the paper is organized as follows.\nWe describe the model in Section 2 and derive the performance at the Nash equilibria for the infinite parallelism model in Section 3.\nIn Section 4, we describe algorithms for users to optimize their own utility in the fixed budget game.\nIn Section 5, we describe our simulator and simulation results.\nWe describe related work in Section 6.\nWe conclude by discussing some limits of our model and future work in Section 7.\n6.\nRELATED WORK\nThere are two main groups of related work in resource allocation: those that incorporate an economic mechanism, and those that do not.\nExamples of the non-economic approach are queuing in first-come, first-served (FCFS) order, queuing using the resource consumption of tasks (e.g., [28]), and scheduling using combinatorial optimization [19].\nThese all assume that the values and resource consumption of tasks are reported accurately, which does not apply in the presence of strategic users.\nWe view scheduling and resource allocation as two separate functions.\nResource allocation divides a resource among different users while scheduling takes a given allocation and orders a user's jobs.\nExamples of the economic approach are Spawn [26], work by Stoica, et al. [24], the Millennium resource allocator [4], work by Wellman, et al. [27], Bellagio [2], and Tycoon [15].\nSpawn and the work by Wellman, et al. use a reservation abstraction similar to the way airline seats are allocated.\nUnfortunately, reservations have a high latency to acquire resources, unlike the price-anticipating scheme we consider.\nThe tradeoff of the price-anticipating schemes is that users have uncertainty about exactly how much of the resources they will receive.\nBellagio [3] uses the SHARE centralized allocator.\nSHARE allocates resources using a centralized combinatorial auction that allows users to express preferences with complementarities.\nSolving the NP-complete combinatorial auction problem provides an optimally efficient allocation.\nThere have been several analyses [10, 11, 12, 13, 23] of variations of price-anticipating allocation schemes in the context of allocation of network capacity for flows.\nBut those techniques no longer apply to our game because we model users as having fixed budgets and private preferences for machines.\nFor example, unlike those games, there may exist multiple Nash equilibria in our game.\nMilchtaich [16] studied congestion games with private preferences but the technique in [16] is specific to the congestion game.\n7.\nCONCLUSIONS\nThis work studies the performance of a market-based mechanism for distributed shared clusters using both analytical and simulation methods.\nWe show that despite the worst case bounds, the system can reach a high performance level at the Nash equilibrium in terms of both efficiency and fairness metrics.\nIn addition, with a few exceptions under the finite parallelism model, the system reaches equilibrium quickly by using the best response algorithm and, when the number of users is not too small, by the greedy local adjustment method.\nWhile our work indicates that the price-anticipating scheme may work well for resource allocation for shared clusters, there are many 
interesting directions for future work.\nOne direction is to consider more realistic utility functions.\nFor example, we assume that there is no parallelization cost, and there is no performance degradation when multiple users share the same machine.\nIn practice, the user must copy code and data to a machine before running his application there, and there is overhead for multiplexing resources on a single machine.\nAnother assumption is that users have infinite work, so the more resources they can acquire, the better.\nIn practice, users have finite work.\nOne approach to address this is to model the user's utility according to the time to finish a task rather than the amount of resources he receives.\nAnother direction is to study the dynamic properties of the system when the users' needs change over time, according to some statistical model.\nIn addition to the usual questions concerning repeated games, it would also be important to understand how users should allocate their budgets wisely over time to accommodate future needs.\nFigure 6: Efficiency, utility uniformity and envy-freeness under the finite parallelism model.\nn = 100.","lvl-2":"A Price-Anticipating Resource Allocation Mechanism for Distributed Shared Clusters\nABSTRACT\nIn this paper we formulate the fixed budget resource allocation game to understand the performance of a distributed market-based resource allocation system.\nMultiple users decide how to distribute their budget (bids) among multiple machines according to their individual preferences to maximize their individual utility.\nWe look at both the efficiency and the fairness of the allocation at the equilibrium, where fairness is evaluated through the measures of utility uniformity and envy-freeness.\nWe show analytically and through simulations that despite being highly decentralized, such a system converges quickly to an equilibrium and unlike the social optimum that achieves high efficiency but poor fairness, the proposed allocation 
scheme achieves a nice balance of high degrees of efficiency and fairness at the equilibrium.\n1.\nINTRODUCTION\nThe primary advantage of distributed shared clusters like the Grid [7] and PlanetLab [1] is their ability to pool together shared computational resources.\nThis allows increased throughput because of statistical multiplexing and the bursty utilization pattern of typical users.\nSharing nodes that are dispersed in the network allows lower delay because applications can store data close to users.\nFinally, sharing allows greater reliability because of redundancy in hosts and network connections.\nHowever, resource allocation in these systems remains the major challenge.\nThe problem is how to allocate a shared resource both fairly and efficiently (where efficiency is the ratio of the achieved social welfare to the social optimum) in the presence of strategic users who act in their own interests.\nSeveral non-economic allocation algorithms have been proposed, but these typically assume that task values (i.e., their importance) are the same, or are inversely proportional to the resources required, or are set by an omniscient administrator.\nHowever, in many cases, task values vary significantly, are not correlated to resource requirements, and are difficult and time-consuming for an administrator to set.\nInstead, we examine a market-based resource allocation system (others are described in [2, 4, 6, 21, 26, 27]) that allows users to express their preferences for resources through a bidding mechanism.\nIn particular, we consider a price-anticipating [12] scheme in which a user bids for a resource and receives the ratio of his bid to the sum of bids for that resource.\nThis proportional scheme is simpler, more scalable, and more responsive [15] than auction-based schemes [6, 21, 26].\nPrevious work has analyzed price-anticipating schemes in the context of allocating network capacity for flows for users with unlimited budgets.\nIn this work, we examine a 
price-anticipating scheme in the context of allocating computational capacity for users with private preferences and limited budgets, resulting in a qualitatively different game (as discussed in Section 6).\nIn this paper, we formulate the fixed budget resource allocation game and study the existence and performance of the Nash equilibria of this game.\nFor evaluating the Nash equilibria, we consider both their efficiency, measuring how close the social welfare at equilibrium is to the social optimum, and fairness, measuring how different the users' utilities are.\nAlthough rarely considered in previous game-theoretic studies, we believe fairness is a critical metric for a resource allocation scheme because the perception of unfairness will cause some users to reject a system with more efficient, but less fair resource allocation in favor of one with less efficient, more fair resource allocation.\nWe use both utility uniformity and envy-freeness to measure fairness.\nUtility uniformity, which is common in Computer Science work, measures the closeness of utilities of different users.\nEnvy-freeness, which comes more from the Economics perspective, measures the happiness of users with their own resources compared to the resources of others.\nOur contributions are as follows:\n\u2022 We analyze the existence and performance of Nash equilibria.\nUsing analysis, we show that there is always a Nash equilibrium in the fixed budget game if the utility functions satisfy a fairly weak and natural condition of strong competitiveness.\nWe also show the worst case performance bounds: for m players the efficiency at equilibrium is Ω(1\/√m), the utility uniformity is ≥ 1\/m, and the envy-freeness is ≥ 2√2 \u2212 2 ≈ 0.83.\nAlthough these bounds are quite low, the simulations described below indicate these bounds are overly pessimistic.\n\u2022 We describe algorithms that allow strategic users to optimize their utility.\nAs part of the fixed budget game analysis, we show that 
strategic users with linear utility functions can calculate their bids using a best response algorithm that quickly results in an allocation with high efficiency with little computational and communication overhead.\nWe present variations of the best response algorithm for both finite and infinite parallelism tasks.\nIn addition, we present a local greedy adjustment algorithm that converges more slowly than best response, but allows for non-linear or unformulatable utility functions.\n\u2022 We show that the price-anticipating resource allocation mechanism achieves a high degree of efficiency and fairness.\nUsing simulation, we find that although the socially optimal allocation results in perfect efficiency, it also results in very poor fairness.\nLikewise, allocating according to only users' preference weights results in a high fairness, but a mediocre efficiency.\nIntuition would suggest that efficiency and fairness are exclusive.\nSurprisingly, the Nash equilibrium, reached by each user iteratively applying the best response algorithm to adapt his bids, achieves nearly the efficiency of the social optimum and nearly the fairness of the weight-proportional allocation: the efficiency is ≥ 0.90, the utility uniformity is ≥ 0.65, and the envy-freeness is ≥ 0.97, independent of the number of users in the system.\nIn addition, the time to converge to the equilibrium is < 5 iterations when all users use the best response strategy.\nThe local adjustment algorithm performs similarly when there is sufficient competitiveness, but takes 25 to 90 iterations to stabilize.\nAs a result, we believe that shared distributed systems based on the fixed budget game can be highly decentralized, yet achieve a high degree of efficiency and fairness.\nThe rest of the paper is organized as follows.\nWe describe the model in Section 2 and derive the performance at the Nash equilibria for the infinite parallelism model in Section 3.\nIn Section 4, we describe algorithms for users to optimize their 
own utility in the fixed budget game.\nIn Section 5, we describe our simulator and simulation results.\nWe describe related work in Section 6.\nWe conclude by discussing some limits of our model and future work in Section 7.\n2.\nTHE MODEL\nPrice-Anticipating Resource Allocation.\nWe study the problem of allocating a set of divisible resources (or machines).\nSuppose that there are m users and n machines.\nEach machine can be continuously divided for allocation to multiple users.\nAn allocation scheme ω = (r1,..., rm), where ri = (ri1,..., rin) with rij representing the share of machine j allocated to user i, satisfies that for any 1 ≤ i ≤ m and 1 ≤ j ≤ n, rij ≥ 0 and r1j + ··· + rmj ≤ 1.\nLet Ω denote the set of all the allocation schemes.\nWe consider the price anticipating mechanism in which each user places a bid to each machine, and the price of the machine is determined by the total bids placed.\nFormally, suppose that user i submits a non-negative bid xij to machine j.\nThe price of machine j is then set to Yj = x1j + ··· + xmj, the total bids placed on machine j. Consequently, user i receives a fraction rij = xij\/Yj of machine j.\nWhen Yj = 0, i.e. when there is no bid on a machine, the machine is not allocated to anyone.\nWe call xi = (xi1,..., xin) the bidding vector of user i.\nThe additional consideration we have is that each user i has a budget constraint Xi.\nTherefore, user i's total bids have to sum up to his budget, i.e. xi1 + ··· + xin = Xi.\nThe budget constraints come from the fact that the users have limited budgets.\nUtility Functions.\nEach user i's utility is represented by a function Ui of the fractions (ri1,..., rin) the user receives from each machine.\nGiven the problem domain we consider, we assume that each user has different and relatively independent preferences for different machines.\nTherefore, the basic utility function we consider is the linear utility function: Ui (ri1,..., rin) = wi1 ri1 + ··· + win rin, where wij ≥ 0 is user i's private preference, also called his weight, on machine j. For example, suppose machine 1 has a faster CPU but less memory than machine 2, and user 1 runs CPU-bound applications, while user 2 runs memory-bound applications.\nAs a result, w11 > w12 and w21 < w22.\nOur definition of utility functions corresponds to the user having enough jobs or enough parallelism within jobs to utilize all the machines.\nConsequently, the user's goal is to grab as much of a resource as possible.\nWe call this the infinite parallelism model.\nIn practice, a user's application may have an inherent limit on parallelization (e.g., some computations must be done sequentially) or there may be a system limit (e.g., the application's data is being served from a file server with limited capacity).\nTo model this, we also consider the more realistic finite parallelism model, where the user's parallelism is bounded by ki, and the user's utility Ui is the sum of the ki largest wij rij.\nIn this model, the user only submits bids to up to ki machines.\nOur abstraction is meant to capture the essence of the problem and facilitate our analysis.\nIn Section 7, we discuss the limits of the above definition of utility functions.\nBest Response.\nAs is typical, we assume the users are selfish and strategic: they all act to maximize their own utility, defined by their utility functions.\nFrom the perspective of user i, if the total bids of the other users placed on each machine j is 
yj, then the best response of user i to the system is the solution of the following optimization problem: maximize Ui (xi1\/(xi1 + y1),..., xin\/(xin + yn)) subject to xi1 + ··· + xin = Xi and xij ≥ 0.\nThe difficulty of the above optimization problem depends on the formulation of Ui.\nWe will show later how to solve it for the infinite parallelism model and provide a heuristic for the finite parallelism model.\nNash Equilibrium.\nBy the assumption that the user is selfish, each user's bidding vector is the best response to the system.\nThe question we are most interested in is whether there exists a collection of bidding vectors, one for each user, such that each user's bidding vector is the best response to those of the other users.\nSuch a state is known as a Nash equilibrium, a central concept in Game Theory.\nFormally, the bidding vectors x1,..., xm are a Nash equilibrium if for any 1 ≤ i ≤ m, xi is the best response to the system, i.e. for any other bidding vector x'i, Ui (x1,..., xi,..., xm) ≥ Ui (x1,..., x'i,..., xm).\nThe Nash equilibrium is desirable because it is a stable state at which no one has incentive to change his strategy.\nBut a game may not have an equilibrium.\nIndeed, a Nash equilibrium may not exist in the price anticipating scheme we define above.\nThis can be shown by a simple example of two players and two machines.\nFor example, let U1 (r1, r2) = r1 and U2 (r1, r2) = r1 + r2.\nThen player 1 should never bid on machine 2 because it has no value to him.\nNow, player 2 has to put a positive bid on machine 2 to claim the machine, but there is no lower limit, resulting in the non-existence of the Nash equilibrium.\nWe should note that even a mixed strategy equilibrium does not exist in this example.\nClearly, this happens whenever there is a resource that is \"wanted\" by only one player.\nTo rule out this case, we consider strongly competitive games.1 Under the infinite parallelism model, a game is called strongly competitive if for any 1 ≤ j ≤ n, there exist i ≠ k such that wij, wkj > 0.\nUnder such a condition, 
we have that every strongly competitive game admits a Nash equilibrium (see [5] for a proof).\nGiven the existence of the Nash equilibrium, the next important question is the performance at the Nash equilibrium, which is often measured by its efficiency and fairness.\nEfficiency (Price of Anarchy).\nFor an allocation scheme ω ∈ Ω, denote by U (ω) = Σi Ui (ri) the social welfare under ω.\nLet U* = maxω∈Ω U (ω) denote the optimal social welfare: the maximum possible aggregated user utilities.\nThe efficiency of an allocation scheme ω is defined as π (ω) = U (ω)\/U*.\nLet Ω0 denote the set of the allocations at the Nash equilibria.\nWhen there exists a Nash equilibrium, i.e. Ω0 ≠ ∅, define the efficiency of a game Q to be π (Q) = minω∈Ω0 π (ω).\nIt is usually the case that π (Q) < 1, i.e. there is an efficiency loss at a Nash equilibrium.\nThis is the price of anarchy [18] paid for not having central enforcement of the users' good behavior.\nThis price is interesting because central control results in the best possible outcome, but is not possible in most cases.\nFairness.\nWhile the definition of efficiency is standard, there are multiple ways to define fairness.\nWe consider two metrics.\nOne is by comparing the users' utilities.\nThe utility uniformity τ (ω) of an allocation scheme ω is defined to be mini Ui (ω)\/maxi Ui (ω), the ratio of the minimum utility to the maximum utility among the users.\nSuch a definition (or utility discrepancy, defined similarly as maxi Ui (ω)\/mini Ui (ω)) is used extensively in the Computer Science literature.\nUnder this definition, the utility uniformity τ (Q) of a game Q is defined to be τ (Q) = minω∈Ω0 τ (ω).\nThe other metric extensively studied in Economics is the concept of envy-freeness [25].\nUnlike the utility uniformity metric, envy-freeness concerns how a user perceives the value of the share assigned to him, compared to the shares other users receive.\nWithin such a framework, define the envy-freeness of an allocation scheme by ρ (ω) = mini,j Ui (ri)\/Ui (rj).\n1Alternatives include adding a reservation price or limiting the lowest 
allowable bid to each machine.\nThese alternatives, however, introduce the problem of coming up with the \"right\" price or limit.\nWhen ρ (ω) ≥ 1, the scheme is known as an envy-free allocation scheme.\nLikewise, the envy-freeness ρ (Q) of a game Q is defined to be ρ (Q) = minω∈Ω0 ρ (ω).\n3.\nNASH EQUILIBRIUM\nIn this section, we present some theoretical results regarding the performance at Nash equilibrium under the infinite parallelism model.\nWe assume that the game is strongly competitive to guarantee the existence of equilibria.\nFor a meaningful discussion of efficiency and fairness, we assume that the users are symmetric by requiring that Xi = 1 and wi1 + ··· + win = 1 for all 1 ≤ i ≤ m. Or informally, we require that all the users have the same budget, and that they have the same utility when they own all the resources.\nThis precludes the case when a user has an extremely high budget, resulting in very low efficiency or low fairness at equilibrium.\nWe first provide a characterization of the equilibria.\nBy definition, the bidding vectors x1,..., xm are a Nash equilibrium if and only if each player's strategy is the best response to the group's bids.\nLet yj denote the total bids of the other users on machine j. Since Ui is a linear function and the domain of each user's bids {(xi1,..., xin) | Σj xij = Xi, and xij ≥ 0} is a convex set, the optimality condition is that there exists λi > 0 such that wij yj\/(xij + yj)² = λi whenever xij > 0, and wij yj\/(xij + yj)² ≤ λi whenever xij = 0.\n(1)\nOr intuitively, at an equilibrium, each user has the same marginal value on machines where they place positive bids and has lower marginal values on those machines where they do not bid.\nUnder the infinite parallelism model, it is easy to compute the social optimum U* as it is achieved when we allocate each machine wholly to the person who has the maximum weight on the machine, i.e. 
U* = Σnj=1 max1≤i≤m wij.\n3.1 Two-player Games\nWe first show that even in the simplest nontrivial case when there are two users and two machines, the game has interesting properties.\nWe start with two special cases to provide some intuition about the game.\nThe weight matrices are shown in Figure 1 (a) and (b), which correspond respectively to the equal-weight and opposite-weight games.\nLet x and y denote the respective bids of users 1 and 2 on machine 1, so that their bids on machine 2 are 1 \u2212 x and 1 \u2212 y.\nDenote by s = x + y and β = (2 \u2212 s)\/s. Equal-weight game.\nIn Figure 1 (a), both users have equal valuations for the two machines, i.e. weight 1\/2 on each.\nBy the optimality condition, for the bid vectors to be in equilibrium, they need to satisfy, according to (1), y β² = 1 \u2212 y and x β² = 1 \u2212 x.\nBy simplifying the above equations, we obtain that β = 1 and x = y = 1\/2.\nThus, there exists a unique Nash equilibrium of the game where the two users have the same bidding vector.\nAt the equilibrium, the utility of each user is 1\/2, and the social welfare is 1.\nOn the other hand, the social optimum is clearly 1.\nThus, the equal-weight game is ideal as the efficiency, utility uniformity, and the envy-freeness are all 1.\nFigure 1: Two special cases of two-player games.\nOpposite-weight game.\nThe situation is different for the opposite game in which the two users put the exact opposite weights on the two machines: user 1 has weights (α, 1 \u2212 α) and user 2 has weights (1 \u2212 α, α).\nAssume that α ≥ 1\/2.\nSimilarly, for the bid vectors to be at the equilibrium, they need to satisfy α y β² = (1 \u2212 α)(1 \u2212 y) and (1 \u2212 α) x β² = α (1 \u2212 x).\nBy simplifying the above equations, we have that each Nash equilibrium corresponds to a nonnegative root of the cubic equation f (β) = β³ \u2212 c β² + c β \u2212 1 = 0, where c = (α² + (1 \u2212 α)²)\/(2 α (1 \u2212 α)). Clearly, β = 1 is a root of f (β).\nWhen β = 1, we have that x = α, y = 1 \u2212 α, which is the symmetric equilibrium that is consistent with our intuition: each user puts a bid proportional to his preference for the machine.\nAt this equilibrium, U = 2 \u2212 4 α (1 \u2212 α), U* = 2 α, and U\/U* = (2 α² + 1)\/α \u2212 2, which 
is minimized when α = 1\/√2, with the minimum value of 2√2 \u2212 2 ≈ 0.828.\nHowever, when α is large enough, there exist two other roots, corresponding to less intuitive asymmetric equilibria.\nIntuitively, the asymmetric equilibrium arises when user 1 values machine 1 a lot, but by placing even a relatively small bid on machine 1, he can get most of the machine because user 2 values machine 1 very little, and thus places an even smaller bid.\nIn this case, user 1 gets most of machine 1 and almost half of machine 2.\nThe threshold is at the α0 where f' (1) = 0, i.e. when c = 3.\nThis solves to α0 = (2 + √2)\/4 ≈ 0.854.\nThose asymmetric equilibria at β ≠ 1 are \"bad\" as they yield lower efficiency than the symmetric equilibrium.\nLet β0 be the minimum root.\nWhen α → 1, c → +∞, and β0 = 1\/c + o (1\/c) → 0.\nThen, x, y → 1.\nThus, U → 3\/2, U* → 2, and U\/U* → 0.75.\nFrom the above simple game, we already observe that the Nash equilibrium may not be unique, which is different from many congestion games in which the Nash equilibrium is unique.\nFor the general two-player game, we can show that 0.75 is actually the worst efficiency bound, with a proof in [5].\nFurther, at the asymmetric equilibrium, the utility uniformity approaches 1\/2 when α → 1.\nThis is the worst possible for two-player games because, as we show in Section 3.2, a user's utility at any Nash equilibrium is at least 1\/m in the m-player game.\nAnother consequence is that the two-player game is always envy-free.\nSuppose that the two users' shares are r1 = (r11,..., r1n) and r2 = (r21,..., r2n) respectively.\nThen U1 (r1) + U1 (r2) = U1 (r1 + r2) = U1 (1,..., 1) = 1 because r1j + r2j = 1 for all 1 ≤ j ≤ n. Again, since U1 (r1) ≥ 1\/2, we have that U1 (r1) ≥ U1 (r2), i.e. 
any equilibrium allocation is envy-free.\nTHEOREM 2.\nFor a two-player game Q, π (Q) ≥ 3\/4, τ (Q) ≥ 1\/2, and ρ (Q) = 1.\nAll the bounds are tight in the worst case.\n3.2 Multi-player Game\nFor large numbers of players, the loss in social welfare can be unfortunately large.\nThe following example shows the worst case bound.\nConsider a system with m = n² + n players and n machines.\nOf the players, there are n² who have the same weights on all the machines, i.e. 1\/n on each machine.\nThe other n players have weight 1, each on a different machine, and 0 (or a sufficiently small ε) on all the other machines.\nClearly, U* = n.\nThe following allocation is an equilibrium: the first n² players evenly distribute their money among all the machines, while the other n players invest all of their money on their respective favorite machines.\nHence, the total money on each machine is n + 1.\nAt this equilibrium, each of the first n² players receives a (1\/n)\/(n + 1) = 1\/(n (n + 1)) share of each machine, resulting in a total utility of n³ \u00b7 1\/(n² (n + 1)) < 1.\nThe other n players each receives a 1\/(n + 1) share of their favorite machine, resulting in a total utility of n \u00b7 1\/(n + 1) < 1.\nTherefore, the total utility of the equilibrium is < 2, while the social optimum is n = Θ(√m).\nThis bound is the worst possible.\nWhat about the utility uniformity of the multi-player allocation game?\nWe next show that the utility uniformity of the m-player allocation game is at least 1\/m. Let (S1,..., Sn) be the current total bids on the n machines, excluding user i. 
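Bidding in proportion to these totals Sj gives user i exactly a 1\/m share of every machine, which is the heart of the bound. A minimal numerical sketch of this argument (the machine totals below are made up, and it assumes the symmetric budgets Xi = 1 used throughout this section, so the other m \u2212 1 users' bids sum to m \u2212 1):

```python
# Sketch: user i bids proportionally to the current totals S_j (all assumed > 0).
# With symmetric budgets X_i = 1, the other m - 1 users' bids sum to m - 1,
# and the resulting share of every machine is exactly 1/m.
def proportional_shares(S, X=1.0):
    total = sum(S)                      # equals m - 1 under symmetric budgets
    bids = [X * s / total for s in S]   # s_ij = X_i * S_j / (S_1 + ... + S_n)
    return [b / (s + b) for s, b in zip(S, bids)]

m = 5
S = [1.3, 0.7, 2.0]  # made-up totals of the other m - 1 = 4 users' bids
shares = proportional_shares(S)
assert all(abs(r - 1.0 / m) < 1e-12 for r in shares)  # every share is 1/m
```

Since every share equals 1\/m and each user's weights sum to 1, the linear utility of this fallback strategy evaluates to exactly 1\/m, matching the uniformity guarantee derived next.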
User i can ensure a utility of 1\/m by distributing his budget proportionally to the current bids.\nThat is, user i, by bidding sij = Xi Sj\/(S1 + ··· + Sn) on machine j, obtains a resource level of rij = sij\/(Sj + sij), where S1 + ··· + Sn = X1 + ··· + Xm \u2212 Xi = m \u2212 1.\nTherefore, rij = 1\/m on every machine, and user i's utility is (wi1 + ··· + win)\/m = 1\/m.\nSince each user's utility cannot exceed 1, the minimal possible uniformity is 1\/m.\nWhile the utility uniformity can be small, the envy-freeness, on the other hand, is bounded by a constant of 2√2 \u2212 2 ≈ 0.828, as shown in [29].\nTo summarize, we have that:\nTHEOREM 3.\nFor the m-player game Q, π (Q) = Ω(1\/√m), τ (Q) ≥ 1\/m, and ρ (Q) ≥ 2√2 \u2212 2.\nAll of these bounds are tight in the worst case.\n4.\nALGORITHMS\nIn the previous section, we presented the performance bounds of the game under the infinite parallelism model.\nHowever, the more interesting questions in practice are how the equilibrium can be reached and what the performance at the Nash equilibrium is for typical distributions of utility functions.\nIn particular, we would like to know if the intuitive strategy of each player constantly re-adjusting his bids according to the best response algorithm leads to the equilibrium.\nTo answer these questions, we resort to simulations.\nIn this section, we present the algorithms that we use to compute or approximate the best response and the social optimum in our experiments.\nWe consider both the infinite parallelism and finite parallelism models.\n4.1 Infinite Parallelism Model\nAs we mentioned before, it is easy to compute the social optimum under the infinite parallelism model: we simply assign each machine to the user who likes it the most.\nWe now present the algorithm for computing the best response.\nRecall that for weights w1,..., wn, total bids y1,..., yn, and the budget X, the best response is to solve the following optimization problem: maximize U = Σnj=1 wj xj\/(xj + yj) subject to x1 + ··· + xn = X and xj ≥ 0.\nTo compute the best response, we first sort wj\/yj in decreasing order.\nWithout loss of generality, suppose 
that w1\/y1 ≥ w2\/y2 ≥ ··· ≥ wn\/yn.\nSuppose that x* = (x*1,..., x*n) is the optimum solution.\nWe show that if x*i = 0, then for any j > i, x*j = 0 too.\nSuppose this were not true.\nThen, for some j > i with x*j > 0, we would have λ = wj yj\/(x*j + yj)² < wj\/yj ≤ wi\/yi, while the marginal value on the unbid machine i is wi yi\/yi² = wi\/yi > λ.\nThus it contradicts the optimality condition (1).\nSuppose that k = max {i | x*i > 0}.\nAgain, by the optimality condition, there exists λ such that wi yi\/(x*i + yi)² = λ for 1 ≤ i ≤ k, and x*i = 0 for i > k. Equivalently, we have that: x*i = (√(wi yi)\/Σkj=1 √(wj yj)) (X + Σkj=1 yj) \u2212 yi for 1 ≤ i ≤ k.\nThe remaining question is how to determine k.\nIt is the largest value such that x*k > 0.\nThus, we obtain the following algorithm to compute the best response of a user:\n1.\nSort the machines according to wi\/yi in decreasing order.\n2.\nCompute the largest k such that (√(wk yk)\/Σkj=1 √(wj yj)) (X + Σkj=1 yj) \u2212 yk > 0, and set the bids to x*i as above for i ≤ k and x*i = 0 for i > k.\nThe computational complexity of this algorithm is O (n log n), dominated by the sorting.\nIn practice, the best response can be computed infrequently (e.g. once a minute), so for a typically powerful modern host, this cost is negligible.\nThe best response algorithm must send and receive O (n) messages because each user must obtain the total bids from each host.\nIn practice, this is more significant than the computational cost.\nNote that hosts only reveal to users the sum of the bids on them.\nAs a result, hosts do not reveal the private preferences and even the individual bids of one user to another.\n4.2 Finite Parallelism Model\nRecall that in the finite parallelism model, each user i only places bids on at most ki machines.\nOf course, the infinite parallelism model is just a special case of the finite parallelism model in which ki = n for all the i's.\nIn the finite parallelism model, computing the social optimum is no longer trivial due to the bounded parallelism.\nIt can instead be computed by using a maximum matching algorithm.\nConsider the weighted complete bipartite graph G = U \u00d7 V, where U = {uiℓ | 1 ≤ i ≤ m, 1 ≤ ℓ ≤ ki} contains ki copies of each user i, V = {v1, v2,..., vn} contains the machines, and edge weight wij is assigned to the edge (uiℓ, vj).\nA matching of G is a set of edges with disjoint nodes, and the weight of a matching is the total weights of 
the edges in the matching.\nAs a result, the following lemma holds.\nLEMMA 1.\nThe social optimum is the same as the maximum weight matching of G. Thus, we can use the maximum weight matching algorithm to compute the social optimum.\nThe maximum weight matching is a classical network problem and can be solved in polynomial time [8, 9, 14].\nWe choose to implement the Hungarian algorithm [14, 19] because of its simplicity.\nThere may exist a more efficient algorithm for computing the maximum matching by exploiting the special structure of G.\nThis remains an interesting open question.\nHowever, we do not know an efficient algorithm to compute the best response under the finite parallelism model.\nInstead, we provide the following local search heuristic.\nSuppose we again have n machines with weights w1,..., wn and total bids y1,..., yn.\nLet the user's budget be X and the parallelism bound be k.\nOur goal is to compute an allocation of X to up to k machines to maximize the user's utility.\nFor a subset of machines A, denote by x (A) the best response on A without parallelism bound and by U (A) the utility obtained by the best response algorithm.\nThe local search works as follows:\n1.\nSet A to be the k machines with the highest wi\/yi.\n2.\nCompute U (A) by the infinite parallelism best response algorithm (Sec 4.1) on A. 
3.\nFor each i ∈ A and each j ∉ A, repeat steps 4 and 5.\n4.\nLet B = A \u2212 {i} + {j}, and compute U (B).\n5.\nIf U (B) > U (A), let A ← B, and go to step 2.\n6.\nOutput x (A).\nIntuitively, by the local search heuristic, we test if we can swap a machine in A for one not in A to improve the best response utility.\nIf yes, we swap the machines and repeat the process.\nOtherwise, we have reached a local maximum and output that value.\nWe suspect that the local maximum that this algorithm finds is also the global maximum (with respect to an individual user) and that this process stops after a small number of iterations, but we are unable to establish it.\nHowever, in our simulations, this algorithm quickly converges to a high (0.7) efficiency.\n4.3 Local Greedy Adjustment\nThe above best response algorithms only work for the linear utility functions described earlier.\nIn practice, utility functions may have a more complicated form, or even worse, a user may not have a formulation of his utility function.\nWe do assume that the user still has a way to measure his utility, which is the minimum assumption necessary for any market-based resource allocation mechanism.\nIn these situations, users can use a more general strategy, the local greedy adjustment method, which works as follows.\nA user finds the two machines that provide him with the highest and lowest marginal utility.\nHe then moves a fixed small amount of money from the machine with the low marginal utility to the machine with the higher one.\nThis strategy aims to adjust the bids so that the marginal values at each machine being bid on are the same.\nThis condition guarantees the allocation is the optimum when the utility function is concave.\nThe tradeoff for local greedy adjustment is that it takes longer to stabilize than best-response.\n5.\nSIMULATION RESULTS\nWhile the analytic results provide us with worst-case analysis for the infinite parallelism model, in this section we employ simulations to study the properties of the Nash 
equilibria in more realistic scenarios and for the finite parallelism model.\nFirst, we determine whether the user bidding process converges, and if so, what the rate of convergence is.\nSecond, in cases of convergence, we look at the performance at equilibrium, using the efficiency and fairness metrics defined above.\nIterative Method.\nIn our simulations, each user starts with an initial bid vector and then iteratively updates his bids until a convergence criterion (described below) is met.\nThe initial bid is set proportional to the user's weights on the machines.\nWe experiment with two update methods: the best response methods, as described in Sections 4.1 and 4.2, and the local greedy adjustment method, as described in Section 4.3.\nConvergence Criteria.\nConvergence time measures how quickly the system reaches equilibrium.\nIt is particularly important in the highly dynamic environment of distributed shared clusters, in which the system's conditions may change before reaching the equilibrium.\nThus, a high convergence rate may be more significant than the efficiency at the equilibrium.\nThere are several different criteria for convergence.\nThe strongest criterion is to require that there is only negligible change in the bids of each user.\nThe problem with this criterion is that it is too strict: users may see negligible change in their utilities, but according to this definition the system has not converged.\nThe less strict utility gap criterion requires there to be only negligible change in the users' utility.\nGiven users' concern for utility, this is a more natural definition.\nIndeed, in practice, a user is probably not willing to re-allocate his bids dramatically for a small utility gain.\nTherefore, we use the utility gap criterion to measure convergence time for the best response update method, i.e.
we consider that the system has converged if the utility gap of each user is smaller than \u03b5 (0.001 in our experiments).\nHowever, this criterion does not work for the local greedy adjustment method because users of that method will experience constant fluctuations in utility as they move money around.\nFor this method, we use the marginal utility gap criterion.\nWe compare the highest and lowest utility margins on the machines.\nIf the difference is negligible, then we consider the system to be converged.\nIn addition to convergence to the equilibrium, we also consider the criterion from the system provider's view, the social welfare stabilization criterion.\nUnder this criterion, a system has stabilized if the change in social welfare is < \u03b5.\nIndividual users' utilities may not have converged.\nThis criterion is useful to evaluate how quickly the system as a whole reaches a particular efficiency level.\nUser preferences.\nWe experiment with two models of user preferences, random distribution and correlated distribution.\nWith random distribution, users' weights on the different machines are independently and identically distributed, according to the uniform distribution.\nIn practice, users' preferences are probably correlated based on factors like the hosts' location and the types of applications that users run.\nTo capture these correlations, we associate with each user and machine a resource profile vector where each dimension of the vector represents one resource (e.g., CPU, memory, and network bandwidth).\nFor a user i with a profile pi = (pi1,..., pi\u2113), pik represents user i's need for resource k. For machine j with profile qj = (qj1,..., qj\u2113), qjk represents machine j's strength with respect to resource k. Then, wij is the dot product of user i's and machine j's resource profiles, i.e.
wij = pi \u00b7 qj = \u2211k=1..\u2113 pik qjk.\nBy using these profiles, we compress the parameter space and introduce correlations between users and machines.\nIn the following simulations, we fix the number of machines to 100 and vary the number of users from 5 to 250 (but we only report the results for the range of 5 \u2212 150 users since the results remain similar for a larger number of users).\nSections 5.1 and 5.2 present the simulation results when we apply the infinite parallelism and finite parallelism models, respectively.\nIf the system converges, we report the number of iterations until convergence.\nA convergence time of 200 iterations indicates non-convergence, in which case we report the efficiency and fairness values at the point we terminate the simulation.\n5.1 Infinite parallelism\nIn this section, we apply the infinite parallelism model, which assumes that users can use an unlimited number of machines.\nWe present the efficiency and fairness at the equilibrium, compared to two baseline allocation methods: social optimum and weight-proportional, in which users distribute their bids proportionally to their weights on the machines (which may intuitively seem a reasonable distribution method).\nWe present results for the two user preference models.\nWith uniform preferences, users' weights for the different machines are independently and identically distributed according to the uniform distribution, U (0, 1) (and are normalized thereafter).\nIn correlated preferences, each user's and each machine's resource profile vector has three dimensions, and their values are also taken from the uniform distribution, U (0, 1).\nConvergence Time.\nFigure 2 shows the convergence time, efficiency and fairness of the infinite parallelism model under uniform (left) and correlated (right) preferences.\nPlots (a) and (b) show the convergence and stabilization time of the best-response and local greedy adjustment methods.\nFigure 2: Efficiency, utility uniformity, envy-freeness and
convergence time as a function of the number of users under the infinite parallelism model, with uniform and correlated preferences.\nn = 100.\nFigure 3: Efficiency level over time under the infinite parallelism model.\nnumber of users = 40.\nn = 100.\nThe best-response algorithm converges within a few iterations for any number of users.\nIn contrast, the local greedy adjustment algorithm does not converge even within 500 iterations when the number of users is smaller than 60, but does converge for a larger number of users.\nWe believe that for small numbers of users, there are dependency cycles among the users that prevent the system from converging because one user's decisions affect another user, whose decisions affect another user, and so on.\nRegardless, the local greedy adjustment method stabilizes within 100 iterations.\nFigure 3 presents the efficiency over time for a system with 40 users.\nIt demonstrates that while both adjustment methods reach the same social welfare, the best-response algorithm is faster.\nIn the remainder of this paper, we will refer to the (Nash) equilibrium, independent of the adjustment method used to reach it.\nEfficiency.\nFigure 2 (c) and (d) present the efficiency as a function of the number of users.\nWe present the efficiency at equilibrium, and use the social optimum and the weight-proportional static allocation methods for comparison.\nSocial optimum provides an efficient allocation by definition.\nFor both user preference models, the efficiency at the equilibrium is approximately 0.9, independent of the number of users, which is only slightly worse than the social optimum.\nThe efficiency at the equilibrium represents a \u2248 50% improvement over the weight-proportional allocation method for uniform preferences, and a \u2248 30% improvement for correlated preferences.\nFairness.\nFigure 2 (e) and (f) present the utility uniformity as a function of the number of users, and figures (g) and (h) present the envy-freeness.\nWhile the social
optimum yields perfect efficiency, it has poor fairness.\nThe weight-proportional method achieves the highest fairness among the three allocation methods, but the fairness at the equilibrium is close.\nThe utility uniformity is slightly better at the equilibrium under uniform preferences (> 0.7) than under correlated preferences (> 0.6), since when users' preferences are more aligned, one user's satisfaction is more likely to come at the expense of another's.\nAlthough utility uniformity decreases in the number of users, it remains reasonable even for a large number of users, and flattens out at some point.\nAt the social optimum, utility uniformity can be infinitely poor, as some users may be allocated no resources at all.\nThe same is true with respect to envy-freeness.\nThe difference between uniform and correlated preferences is best demonstrated in the social optimum results.\nWhen the number of users is small, it may be possible to satisfy all users to some extent if their preferences are not aligned, but if they are aligned, even with a very small number of users, some users get no resources, thus both utility uniformity and envy-freeness go to zero.\nAs the number of users increases, it becomes almost impossible to satisfy all users, independent of the existence of correlation.\nThese results demonstrate the tradeoff between the different allocation methods.\nThe efficiency at the equilibrium is lower than the social optimum, but it performs much better with respect to fairness.\nThe equilibrium allocation is completely envy-free under uniform preferences and almost envy-free under correlated preferences.\n5.2 Finite parallelism\nFigure 4: Convergence time under the finite parallelism model.\nn = 100.\nFigure 5: Efficiency level over time under the finite parallelism model with local search algorithm.\nn = 100.\nWe also consider the finite parallelism model and use the local search algorithm, as described in Section 4.2, to adjust users' bids.\nWe again
experimented with both the uniform and correlated preference distributions, and did not find significant differences in the results, so we present the simulation results for only the uniform distribution.\nIn our experiments, the local search algorithm stops quickly--it usually discovers a local maximum within two iterations.\nAs mentioned before, we cannot prove that a local maximum is the global maximum, but our experiments indicate that the local search heuristic leads to high efficiency.\nConvergence time.\nLet A denote the parallelism bound that limits the maximum number of machines each user can bid on.\nWe experiment with A = 5 and A = 20.\nIn both cases, we use 100 machines and vary the number of users.\nFigure 4 shows that the system does not always converge, but if it does, the convergence happens quickly.\nThe non-convergence occurs when the number of users is between 20 and 40 for A = 5, and between 5 and 10 for A = 20.\nWe believe that the non-convergence is caused by moderate competition.\nNo competition allows the system to equilibrate quickly because users do not have to change their bids in reaction to changes in others' bids.\nHigh competition also allows convergence because each user's decision has only a small impact on other users, so the system is more stable and can gradually reach convergence.\nHowever, when there is moderate competition, one user's decisions may cause dramatic changes in another's decisions and cause large fluctuations in bids.\nIn both cases of non-convergence, the ratio of \"competitors\" per machine, S = m \u00d7 A\/n for m users and n machines, is in the interval [1, 2].\nAlthough the system does not converge in these \"bad\" ranges, the system nonetheless achieves and maintains a high level of overall efficiency after a few iterations (as shown in Figure 5).\nPerformance.\nIn Figure 6, we present the efficiency, utility uniformity, and envy-freeness at the Nash equilibrium for the finite parallelism model.\nWhen the system
does not converge, we measure performance by taking the minimum value we observe after running for many iterations.\nWhen A = 5, there is a performance drop, in particular with respect to the fairness metrics, in the range between 20 and 40 users (where it does not converge).\nFor a larger number of users, the system converges and achieves a lower level of utility uniformity, but a high degree of efficiency and envy-freeness, similar to those under the infinite parallelism model.\nAs described above, this is due to the competition ratio falling into the \"head-to-head\" range.\nWhen the parallelism bound is large (A = 20), the performance is closer to the infinite parallelism model, and we do not observe this drop in performance.\n6.\nRELATED WORK\nThere are two main groups of related work in resource allocation: those that incorporate an economic mechanism, and those that do not.\nOne non-economic approach is scheduling (surveyed by Pinedo [20]).\nExamples of this approach are queuing in first-come, first-served (FCFS) order, queueing using the resource consumption of tasks (e.g., [28]), and scheduling using combinatorial optimization [19].\nThese all assume that the values and resource consumption of tasks are reported accurately, which does not apply in the presence of strategic users.\nWe view scheduling and resource allocation as two separate functions.\nResource allocation divides a resource among different users while scheduling takes a given allocation and orders a user's jobs.\nExamples of the economic approach are Spawn [26], work by Stoica et al. [24], the Millennium resource allocator [4], work by Wellman et al. [27], Bellagio [2], and Tycoon [15].\nSpawn and the work by Wellman et al.
use a reservation abstraction similar to the way airline seats are allocated.\nUnfortunately, reservations have a high latency to acquire resources, unlike the price-anticipating scheme we consider.\nThe tradeoff of the price-anticipating schemes is that users have uncertainty about exactly how much of the resources they will receive.\nBellagio [3] uses the SHARE centralized allocator.\nSHARE allocates resources using a centralized combinatorial auction that allows users to express preferences with complementarities.\nSolving the NP-complete combinatorial auction problem provides an optimally efficient allocation.\nThe price-anticipating scheme that we consider does not explicitly operate on complementarities, thereby possibly losing some efficiency, but it also avoids the complexity and overhead of combinatorial auctions.\nThere have been several analyses [10, 11, 12, 13, 23] of variations of price-anticipating allocation schemes in the context of allocation of network capacity for flows.\nTheir methodology follows the study of congestion (potential) games [17, 22] by relating the Nash equilibrium to the solution of a (usually convex) global optimization problem.\nBut those techniques no longer apply to our game because we model users as having fixed budgets and private preferences for machines.\nFor example, unlike those games, there may exist multiple Nash equilibria in our game.\nMilchtaich [16] studied congestion games with private preferences but the technique in [16] is specific to the congestion game.\n7.\nCONCLUSIONS\nThis work studies the performance of a market-based mechanism for distributed shared clusters using both analytical and simulation methods.\nWe show that despite the worst-case bounds, the system can reach a high performance level at the Nash equilibrium in terms of both efficiency and fairness metrics.\nIn addition, with a few exceptions under the finite parallelism model, the system reaches equilibrium quickly by using the best response
algorithm and, when the number of users is not too small, by the greedy local adjustment method.\nWhile our work indicates that the price-anticipating scheme may work well for resource allocation for shared clusters, there are many interesting directions for future work.\nOne direction is to consider more realistic utility functions.\nFor example, we assume that there is no parallelization cost, and there is no performance degradation when multiple users share the same machine.\nIn practice, neither assumption may hold.\nFor example, the user must copy code and data to a machine before running his application there, and there is overhead for multiplexing resources on a single machine.\nWhen the job size is large enough and the degree of multiplexing is sufficiently low, we can probably ignore those effects, but those costs should be taken into account in a more realistic model.\nAnother assumption is that users have infinite work, so the more resources they can acquire, the better.\nIn practice, users have finite work.\nOne approach to address this is to model the user's utility according to the time to finish a task rather than the amount of resources he receives.\nAnother direction is to study the dynamic properties of the system when the users' needs change over time, according to some statistical model.\nIn addition to the usual questions concerning repeated games, it would also be important to understand how users should allocate their budgets wisely over time to accommodate future needs.\nFigure 6: Efficiency, utility uniformity and envy-freeness under the finite parallelism model.\nn = 100.","keyphrases":["resourc alloc","distribut share cluster","util","effici","fair","simul","algorithm","bid mechan","price-anticip scheme","nash equilibrium","parallel","anarchi price","price-anticip mechan"],"prmu":["P","P","P","P","P","P","U","R","R","M","U","U","R"]} {"id":"C-77","title":"Tracking Immediate Predecessors in Distributed
Computations","abstract":"A distributed computation is usually modeled as a partially ordered set of relevant events (the relevant events are a subset of the primitive events produced by the computation). An important causality-related distributed computing problem, that we call the Immediate Predecessors Tracking (IPT) problem, consists in associating with each relevant event, on the fly and without using additional control messages, the set of relevant events that are its immediate predecessors in the partial order. So, IPT is the on-the-fly computation of the transitive reduction (i.e., Hasse diagram) of the causality relation defined by a distributed computation. This paper addresses the IPT problem: it presents a family of protocols that provides each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. 
Two of them are exhibited.","lvl-1":"Tracking Immediate Predecessors in Distributed Computations Emmanuelle Anceaume Jean-Michel H\u00e9lary Michel Raynal IRISA, Campus Beaulieu 35042 Rennes Cedex, France FirstName.LastName@irisa.fr ABSTRACT A distributed computation is usually modeled as a partially ordered set of relevant events (the relevant events are a subset of the primitive events produced by the computation).\nAn important causality-related distributed computing problem, that we call the Immediate Predecessors Tracking (IPT) problem, consists in associating with each relevant event, on the fly and without using additional control messages, the set of relevant events that are its immediate predecessors in the partial order.\nSo, IPT is the on-the-fly computation of the transitive reduction (i.e., Hasse diagram) of the causality relation defined by a distributed computation.\nThis paper addresses the IPT problem: it presents a family of protocols that provides each relevant event with a timestamp that exactly identifies its immediate predecessors.\nThe family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes).\nIn that sense, this family defines message size-efficient IPT protocols.\nAccording to the way the general condition is implemented, different IPT protocols can be obtained.\nTwo of them are exhibited.\nCategories and Subject Descriptors C.2.4 [Distributed Systems]: General Terms Asynchronous Distributed Computations 1.\nINTRODUCTION A distributed computation consists of a set of processes that cooperate to achieve a common goal.\nA main characteristic of these computations lies in the fact that the processes do not share a common global memory, and communicate only by exchanging messages over a communication network.\nMoreover, message transfer delays are finite but unpredictable.\nThis computation model defines what is known as the
asynchronous distributed system model.\nIt is particularly important as it includes systems that span large geographic areas, and systems that are subject to unpredictable loads.\nConsequently, the concepts, tools and mechanisms developed for asynchronous distributed systems prove to be both important and general.\nCausality is a key concept to understand and master the behavior of asynchronous distributed systems [18].\nMore precisely, given two events e and f of a distributed computation, a crucial problem that has to be solved in many distributed applications is to know whether they are causally related, i.e., if the occurrence of one of them is a consequence of the occurrence of the other.\nThe causal past of an event e is the set of events from which e is causally dependent.\nEvents that are not causally dependent are said to be concurrent.\nVector clocks [5, 16] have been introduced to allow processes to track causality (and concurrency) between the events they produce.\nThe timestamp of an event produced by a process is the current value of the vector clock of the corresponding process.\nIn that way, by associating vector timestamps with events it becomes possible to safely decide whether two events are causally related or not.\nUsually, according to the problem he focuses on, a designer is interested only in a subset of the events produced by a distributed execution (e.g., only the checkpoint events are meaningful when one is interested in determining consistent global checkpoints [12]).\nIt follows that detecting causal dependencies (or concurrency) on all the events of the distributed computation is not desirable in all applications [7, 15].\nIn other words, among all the events that may occur in a distributed computation, only a subset of them are relevant.\nIn this paper, we are interested in the restriction of the causality relation to the subset of events defined as being the relevant events of the computation.\nBeing a strict partial order, the
causality relation is transitive.\nAs a consequence, among all the relevant events that causally precede a given relevant event e, only a subset are its immediate predecessors: those are the events f such that there is no relevant event on any causal path from f to e. Unfortunately, given only the vector timestamp associated with an event it is not possible to determine which events of its causal past are its immediate predecessors.\nThis comes from the fact that the vector timestamp associated with e determines, for each process, the last relevant event belonging to the causal past of e, but such an event is not necessarily an immediate predecessor of e. However, some applications [4, 6] require associating with each relevant event only the set of its immediate predecessors.\nThose applications are mainly related to the analysis of distributed computations.\nSome of those analyses require the construction of the lattice of consistent cuts produced by the computation [15, 16].\nIt is shown in [4] that the tracking of immediate predecessors allows an efficient on-the-fly construction of this lattice.\nMore generally, these applications are interested in the very structure of the causal past.\nIn this context, the determination of the immediate predecessors becomes a major issue [6].\nAdditionally, in some circumstances, this determination has to satisfy behavior constraints.\nIf the communication pattern of the distributed computation cannot be modified, the determination has to be done without adding control messages.\nWhen the immediate predecessors are used to monitor the computation, it has to be done on the fly.\nWe call Immediate Predecessor Tracking (IPT) the problem that consists in determining on the fly and without additional messages the immediate predecessors of relevant events.\nThis problem consists actually in determining the transitive reduction (Hasse diagram) of the causality graph generated by the relevant events of the computation.\nSolving
this problem requires tracking causality, hence using vector clocks.\nPrevious works have addressed the efficient implementation of vector clocks to track causal dependence on relevant events.\nTheir aim was to reduce the size of timestamps attached to messages.\nAn efficient vector clock implementation suited to systems with fifo channels is proposed in [19].\nAnother efficient implementation that does not depend on the channel ordering property is described in [11].\nThe notion of causal barrier is introduced in [2, 17] to reduce the size of control information required to implement causal multicast.\nHowever, none of these papers considers the IPT problem.\nThis problem has been addressed for the first time (to our knowledge) in [4, 6] where an IPT protocol is described, but without a correctness proof.\nMoreover, in this protocol, timestamps attached to messages are of size n.\nThis raises the following question which, to our knowledge, has never been answered: Are there efficient vector clock implementation techniques that are suitable for the IPT problem?\nThis paper has three main contributions: (1) a positive answer to the previous open question, (2) the design of a family of efficient IPT protocols, and (3) a formal correctness proof of the associated protocols.\nFrom a methodological point of view the paper uses a top-down approach.\nIt states abstract properties from which more concrete properties and protocols are derived.\nThe family of IPT protocols is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than the system size (i.e., smaller than the number of processes composing the system).\nIn that sense, this family defines low-cost IPT protocols when we consider the message size.\nIn addition to efficiency, the proposed approach has an interesting design property.\nNamely, the family is incrementally built in three steps.\nThe basic vector clock protocol is first enriched by adding
to each process a boolean vector whose management allows the processes to track the immediate predecessor events.\nThen, a general condition is stated to reduce the size of the control information carried by messages.\nFinally, according to the way this condition is implemented, three IPT protocols are obtained.\nThe paper is composed of seven sections.\nSection 2 introduces the computation model, vector clocks and the notion of relevant events.\nSection 3 presents the first step of the construction that results in an IPT protocol in which each message carries a vector clock and a boolean array, both of size n (the number of processes).\nSection 4 improves this protocol by providing the general condition that allows a message to carry control information whose size can be smaller than n. Section 5 provides instantiations of this condition.\nSection 6 provides a simulation study comparing the behaviors of the proposed protocols.\nFinally, Section 7 concludes the paper.\n(Due to space limitations, proofs of lemmas and theorems are omitted.\nThey can be found in [1].)\n2.\nMODEL AND VECTOR CLOCK 2.1 Distributed Computation A distributed program is made up of sequential local programs which communicate and synchronize only by exchanging messages.\nA distributed computation describes the execution of a distributed program.\nThe execution of a local program gives rise to a sequential process.\nLet {P1, P2, ...
, Pn} be the finite set of sequential processes of the distributed computation.\nEach ordered pair of communicating processes (Pi, Pj) is connected by a reliable channel cij through which Pi can send messages to Pj.\nWe assume that each message is unique and a process does not send messages to itself1 .\nMessage transmission delays are finite but unpredictable.\nMoreover, channels are not necessarily fifo.\nProcess speeds are positive but arbitrary.\nIn other words, the underlying computation model is asynchronous.\nThe local program associated with Pi can include send, receive and internal statements.\nThe execution of such a statement produces a corresponding send\/receive\/internal event.\nThese events are called primitive events.\nLet e_i^x be the x-th event produced by process Pi.\nThe sequence hi = e_i^1 e_i^2 ... e_i^x ... constitutes the history of Pi, denoted Hi.\nLet H = \u222a_{i=1}^{n} Hi be the set of events produced by a distributed computation.\nThis set is structured as a partial order by Lamport's happened before relation [14] (denoted hb\u2192) and defined as follows: e_i^x hb\u2192 e_j^y if and only if (i = j \u2227 x + 1 = y) (local precedence), or (\u2203m : e_i^x = send(m) \u2227 e_j^y = receive(m)) (message precedence), or (\u2203 e_k^z : e_i^x hb\u2192 e_k^z \u2227 e_k^z hb\u2192 e_j^y) (transitive closure).\nmax(e_i^x, e_j^y) is a partial function defined only when e_i^x and e_j^y are ordered.\nIt is defined as follows: max(e_i^x, e_j^y) = e_i^x if e_j^y hb\u2192 e_i^x, and max(e_i^x, e_j^y) = e_j^y if e_i^x hb\u2192 e_j^y.\nClearly the restriction of hb\u2192 to Hi, for a given i, is a total order.\nThus we will use the notation e_i^x < e_i^y iff x < y.
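The three clauses of the happened-before definition above can be evaluated offline by a naive fixed-point closure. The following sketch is ours, not the paper's (the paper tracks causality on the fly with vector clocks); the event names and message list in it are purely illustrative.

```python
# Offline computation of Lamport's happened-before relation from the
# three clauses of its definition: local precedence, message precedence,
# and transitive closure (iterated to a fixed point).
def happened_before(histories, messages):
    """histories: list of per-process event sequences (lists of event names).
    messages: list of (send_event, receive_event) pairs.
    Returns the set of ordered pairs (e, f) such that e ->hb f."""
    hb = set()
    # Clause 1: local precedence within each process history.
    for h in histories:
        for x in range(len(h) - 1):
            hb.add((h[x], h[x + 1]))
    # Clause 2: message precedence, send(m) ->hb receive(m).
    hb.update(messages)
    # Clause 3: transitive closure, repeated until no new pair is added.
    changed = True
    while changed:
        changed = False
        for (e, k) in list(hb):
            for (k2, f) in list(hb):
                if k == k2 and (e, f) not in hb:
                    hb.add((e, f))
                    changed = True
    return hb
```

Two events neither of which happened before the other are concurrent, which the returned set makes easy to test: a pair is concurrent iff it appears in neither orientation.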
Throughout the paper, we will use the following notation: if e \u2208 Hi is not the first event produced by Pi, then pred(e) denotes the event immediately preceding e in the sequence Hi.\nIf e is the first event produced by Pi, then pred(e) is denoted by \u22a5 (meaning that there is no such event), and \u2200e \u2208 Hi : \u22a5 < e.\nThe partial order \u0124 = (H, hb\u2192) constitutes a formal model of the distributed computation it is associated with.\n1 This assumption is only in order to get simple protocols.\nFigure 1: Timestamped Relevant Events and Immediate Predecessors Graph (Hasse Diagram) 2.2 Relevant Events For a given observer of a distributed computation, only some events are relevant2 [7, 9, 15].\nAn interesting example of an observation is the detection of predicates on consistent global states of a distributed computation [3, 6, 8, 9, 13, 15].\nIn that case, a relevant event corresponds to the modification of a local variable involved in the global predicate.\nAnother example is the checkpointing problem where a relevant event is the definition of a local checkpoint [10, 12, 20].\nThe left part of Figure 1 depicts a distributed computation using the classical space-time diagram.\nIn this figure, only relevant events are represented.\nThe sequence of relevant events produced by process Pi is denoted by Ri, and R = \u222a_{i=1}^{n} Ri \u2286 H denotes the set of all relevant events.\nLet \u2192 be the relation on R defined in the following way: \u2200 (e, f) \u2208 R \u00d7 R : (e \u2192 f) \u21d4 (e hb\u2192 f).\nThe poset (R, \u2192) constitutes an abstraction of the distributed computation [7].\nIn the following we consider a distributed computation at such an abstraction level.\nMoreover, without loss of generality we consider that the set of
relevant events is a subset of the internal events (if a communication event has to be observed, a relevant internal event can be generated just before a send and just after a receive communication event occurred).\nEach relevant event is identified by a pair (process id, sequence number) (see Figure 1).\nDefinition 1.\nThe relevant causal past of an event e \u2208 H is the (partially ordered) subset of relevant events f such that f hb\u2192 e.\nIt is denoted \u2191(e).\nWe have \u2191(e) = {f \u2208 R | f hb\u2192 e}.\nNote that, if e \u2208 R then \u2191(e) = {f \u2208 R | f \u2192 e}.\nIn the computation described in Figure 1, we have, for the event e identified (2, 2): \u2191(e) = {(1, 1), (1, 2), (2, 1), (3, 1)}.\nThe following properties are immediate consequences of the previous definitions.\nLet e \u2208 H. CP1 If e is not a receive event then \u2191(e) = \u2205 if pred(e) = \u22a5; \u2191(pred(e)) \u222a {pred(e)} if pred(e) \u2208 R; \u2191(pred(e)) if pred(e) \u2209 R. CP2 If e is a receive event (of a message m) then \u2191(e) = \u2191(send(m)) if pred(e) = \u22a5; \u2191(pred(e)) \u222a \u2191(send(m)) \u222a {pred(e)} if pred(e) \u2208 R; \u2191(pred(e)) \u222a \u2191(send(m)) if pred(e) \u2209 R.
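Properties CP1 and CP2 are directly recursive in pred(e) and, for a receive event, in send(m), which suggests a straightforward offline computation of the relevant causal past. The sketch below is ours (the paper's protocols compute this information on the fly via timestamps); the event representation, with explicit pred and send links, is an assumption made for illustration.

```python
# Offline sketch of properties CP1/CP2: the relevant causal past up(e)
# is computed by recursion on pred(e) and, for receives, on send(m).
# Since relevant events are internal events, a send is never itself
# relevant, matching CP2, which unions up(send(m)) without adding send(m).
class Event:
    def __init__(self, name, pred=None, relevant=False, send=None):
        self.name = name          # identifier, e.g. "(2,2)"
        self.pred = pred          # preceding event on the same process, or None (= bottom)
        self.relevant = relevant  # True iff this is a relevant (internal) event
        self.send = send          # matching send event if this is a receive, else None

def up(e):
    """Relevant causal past of e, as a set of event names (CP1/CP2)."""
    past = set()
    if e.pred is not None:        # cases pred(e) != bottom of CP1/CP2
        past |= up(e.pred)
        if e.pred.relevant:       # add pred(e) itself only when it is relevant
            past.add(e.pred.name)
    if e.send is not None:        # extra term of CP2 for a receive event
        past |= up(e.send)
    return past
```

On a small chain (a relevant event a1 followed by a send on one process; a relevant event b1 followed by the matching receive and a relevant event c2 on another), up applied to c2 yields {a1, b1}, as CP1 and CP2 prescribe.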
² Those events are sometimes called observable events.

Definition 2. Let e ∈ Hi. For every j such that ↑(e) ∩ Rj ≠ ∅, the last relevant event of Pj with respect to e is lastr(e, j) = max{f | f ∈ ↑(e) ∩ Rj}. When ↑(e) ∩ Rj = ∅, lastr(e, j) is denoted by ⊥ (meaning that there is no such event).

Let us consider the event e identified (2,2) in Figure 1. We have lastr(e, 1) = (1, 2), lastr(e, 2) = (2, 1), and lastr(e, 3) = (3, 1).

The following properties relate the events lastr(e, j) and lastr(f, j) for all the predecessors f of e in the relation hb→. They follow directly from the definitions. Let e ∈ Hi.

LR0 lastr(e, i) is equal to: ⊥ if pred(e) = ⊥; pred(e) if pred(e) ∈ R; lastr(pred(e), i) if pred(e) ∉ R.

LR1 If e is not a receive event: ∀j ≠ i : lastr(e, j) = lastr(pred(e), j).

LR2 If e is the receive event of m: ∀j ≠ i : lastr(e, j) = max(lastr(pred(e), j), lastr(send(m), j)).

2.3 Vector Clock System

Definition. As a fundamental concept associated with causality theory, vector clocks were introduced in 1988, simultaneously and independently, by Fidge [5] and Mattern [16]. A vector clock system is a mechanism that associates timestamps with events in such a way that comparing two timestamps indicates whether the corresponding events are causally related (and, if they are, which one came first). More precisely, each process Pi has a vector of integers VCi[1..n] such that VCi[j] is the number of relevant events produced by Pj that belong to the current relevant causal past of Pi. Note that VCi[i] counts the number of relevant events produced so far by Pi. When a process Pi produces a (relevant) event e, it associates with e a vector timestamp whose value (denoted e.VC) is equal to the current value of VCi.

Vector Clock Implementation. The following implementation of vector clocks [5, 16] is based on
the observation that ∀i, ∀e ∈ Hi, ∀j : e.VCi[j] = y ⇔ lastr(e, j) is the y-th relevant event of Pj, where e.VCi is the value of VCi just after the occurrence of e (this relation results directly from the properties LR0, LR1, and LR2). Each process Pi manages its vector clock VCi[1..n] according to the following rules:

VC0 VCi[1..n] is initialized to [0, ..., 0].

VC1 Each time it produces a relevant event e, Pi increments its vector clock entry VCi[i] (VCi[i] := VCi[i] + 1) to indicate it has produced one more relevant event; then Pi associates with e the timestamp e.VC = VCi.

VC2 When a process Pi sends a message m, it attaches to m the current value of VCi. Let m.VC denote this value.

VC3 When Pi receives a message m, it updates its vector clock as follows: ∀k : VCi[k] := max(VCi[k], m.VC[k]).

3. IMMEDIATE PREDECESSORS

In this section, the Immediate Predecessor Tracking (IPT) problem is stated (Section 3.1). Then, some technical properties of immediate predecessors are stated and proved (Section 3.2). These properties are used to design the basic IPT protocol and prove its correctness (Section 3.3). This IPT protocol, previously presented in [4] without proof, is built from a vector clock protocol by adding the management of a local boolean array at each process.

3.1 The IPT Problem

As indicated in the introduction, some applications (e.g., analysis of distributed executions [6], detection of distributed properties [7]) require determining, on the fly and without additional messages, the transitive reduction of the relation → (i.e., transitive causal dependencies must not be considered). Given two relevant events f and e, we say that f is an immediate predecessor of e if f → e and there is no relevant event g such that f → g → e.
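Since the IPT protocols below are built on top of the vector clock rules VC0–VC3, it is worth having that baseline concrete. A minimal sketch (class and method names are ours, for illustration only):

```python
# Minimal sketch of the vector clock rules VC0-VC3 for n processes.

class VectorClockProcess:
    def __init__(self, i, n):
        self.i = i
        self.vc = [0] * n                 # VC0: initialized to [0, ..., 0]

    def relevant_event(self):
        self.vc[self.i] += 1              # VC1: one more relevant event on Pi
        return list(self.vc)              # the timestamp e.VC associated with e

    def send(self):
        return list(self.vc)              # VC2: m.VC piggybacked on the message

    def receive(self, m_vc):
        # VC3: component-wise maximum of the local clock and m.VC
        self.vc = [max(a, b) for a, b in zip(self.vc, m_vc)]
```

The protocols of Sections 3.3 and 5 extend exactly this skeleton: each process additionally maintains the boolean array IPi (and, later, the boolean matrix Mi), updated alongside the clock.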
Definition 3. The Immediate Predecessor Tracking (IPT) problem consists in associating with each relevant event e the set of relevant events that are its immediate predecessors. Moreover, this has to be done on the fly and without additional control messages (i.e., without modifying the communication pattern of the computation).

As noted in the Introduction, the IPT problem amounts to computing the Hasse diagram associated with the partially ordered set of relevant events produced by a distributed computation.

3.2 Formal Properties of IPT

In order to design a protocol solving the IPT problem, it is useful to consider the notion of immediate relevant predecessor of any event, whether relevant or not. First, observe that, by definition, the immediate predecessor of e on Pj is necessarily the event lastr(e, j). Second, for lastr(e, j) to be an immediate predecessor of e, no other event lastr(e, k) may lie on a path between lastr(e, j) and e. These observations are formalized in the following definition.

Definition 4. Let e ∈ Hi. The set of immediate relevant predecessors of e, denoted IP(e), is the set of the relevant events lastr(e, j) (j = 1, ..., n) such that ∀k : lastr(e, j) ∉ ↑(lastr(e, k)).

It follows from this definition that IP(e) ⊆ {lastr(e, j) | j = 1, ...
, n} ⊂ ↑(e). In Figure 1, the graph depicted in the right part describes the immediate predecessors of the relevant events of the computation defined in the left part; more precisely, a directed edge (e, f) means that the relevant event e is an immediate predecessor of the relevant event f (³).

The following lemmas show how the set of immediate predecessors of an event is related to those of its predecessors in the relation hb→. They will be used to design and prove the protocols solving the IPT problem. To ease the reading of the paper, their proofs are presented in Appendix A.

The intuitive meaning of the first lemma is the following: if e is not a receive event, all the causal paths arriving at e have pred(e) as next-to-last event (see CP1). So, if pred(e) is a relevant event, all the relevant events belonging to its relevant causal past are separated from e by pred(e), and pred(e) becomes the only immediate predecessor of e. In other words, the event pred(e) constitutes a reset with respect to the set of immediate predecessors of e. On the other hand, if pred(e) is not relevant, it does not separate its relevant causal past from e.

Lemma 1. If e is not a receive event, IP(e) is equal to: ∅ if pred(e) = ⊥; {pred(e)} if pred(e) ∈ R; IP(pred(e)) if pred(e) ∉ R.

The intuitive meaning of the next lemma is as follows: if e is a receive event receive(m), the causal paths arriving at e have either pred(e) or send(m) as next-to-last event. If pred(e) is relevant then, as explained for the previous lemma, this event hides from e all of its relevant causal past and becomes an immediate predecessor of e. Concerning the last relevant predecessors of send(m), only those that are not predecessors of pred(e) remain immediate predecessors of e.
Lemma 2. Let e ∈ Hi be the receive event of a message m. If pred(e) ∈ Ri, then, ∀j, IP(e) ∩ Rj is equal to: {pred(e)} if j = i; ∅ if lastr(pred(e), j) ≥ lastr(send(m), j); IP(send(m)) ∩ Rj if lastr(pred(e), j) < lastr(send(m), j).

The intuitive meaning of the next lemma is the following: if e is a receive event receive(m) and pred(e) is not relevant, the last relevant events in the relevant causal past of e are obtained by merging those of pred(e) and those of send(m), taking the latest on each process. So, the immediate predecessors of e are either those of pred(e) or those of send(m). On a process where the last relevant event of pred(e) and that of send(m) are the same event f, none of the paths from f to e may contain another relevant event; thus, f must be an immediate predecessor of both pred(e) and send(m).

Lemma 3. Let e ∈ Hi be the receive event of a message m. If pred(e) ∉ Ri, then, ∀j, IP(e) ∩ Rj is equal to: IP(pred(e)) ∩ Rj if lastr(pred(e), j) > lastr(send(m), j); IP(send(m)) ∩ Rj if lastr(pred(e), j) < lastr(send(m), j); IP(pred(e)) ∩ IP(send(m)) ∩ Rj if lastr(pred(e), j) = lastr(send(m), j).

³ Actually, this graph is the Hasse diagram of the partial order associated with the distributed computation.

3.3 A Basic IPT Protocol

The basic protocol proposed here associates with each relevant event e an attribute encoding the set IP(e) of its immediate predecessors. From the previous lemmas, the set IP(e) of any event e depends on the sets IP of the events pred(e) and/or send(m) (when e = receive(m)). Hence the idea of introducing a data structure allowing the sets IP to be managed inductively on the poset (H, hb→). To take into account the information from pred(e), each process manages a boolean array IPi such that, ∀e ∈ Hi, the value of IPi when e occurs (denoted e.IPi) is the boolean array representation of the set IP(e). More precisely, ∀j :
IPi[j] = 1 ⇔ lastr(e, j) ∈ IP(e). As recalled in Section 2.3, the knowledge of lastr(e, j) (for every e and every j) is based on the management of the vectors VCi. Thus, the set IP(e) is determined in the following way: IP(e) is the set of events identified by the pairs {(j, y) | e.VCi[j] = y ∧ e.IPi[j] = 1, j = 1, ..., n}. Each process Pi updates IPi according to Lemmas 1, 2, and 3:

1. It follows from Lemma 1 that, if e is not a receive event, the current value of IPi is sufficient to determine e.IPi. It follows from Lemmas 2 and 3 that, if e is a receive event (e = receive(m)), then determining e.IPi involves information related to the event send(m). More precisely, this information involves IP(send(m)) and the timestamp of send(m) (needed to compare the events lastr(send(m), j) and lastr(pred(e), j), for every j). So, both vectors send(m).VCj and send(m).IPj (assuming send(m) is produced by Pj) are attached to the message m.

2. Moreover, IPi must be updated upon the occurrence of each event. In fact, the value of IPi just after an event e is used to determine the value succ(e).IPi. In particular, as stated in the lemmas, the determination of succ(e).IPi depends on whether e is relevant or not. Thus, the value of IPi just after the occurrence of event e must keep track of this event.

The following protocol, previously presented in [4] without proof, ensures the correct management of the arrays VCi (as in Section 2.3) and IPi (according to the lemmas of Section 3.2). The timestamp associated with a relevant event e is denoted e.TS.

R0 Initialization: Both VCi[1..n] and IPi[1..n] are initialized to [0, ...
, 0].

R1 Each time it produces a relevant event e:
- Pi associates with e the timestamp e.TS defined as follows: e.TS = {(k, VCi[k]) | IPi[k] = 1},
- Pi increments its vector clock entry VCi[i] (namely, it executes VCi[i] := VCi[i] + 1),
- Pi resets IPi: ∀ℓ ≠ i : IPi[ℓ] := 0; IPi[i] := 1.

R2 When Pi sends a message m to Pj, it attaches to m the current values of VCi (denoted m.VC) and of the boolean array IPi (denoted m.IP).

R3 When it receives a message m from Pj, Pi executes the following updates: ∀k ∈ [1..n]:
case VCi[k] < m.VC[k] then VCi[k] := m.VC[k]; IPi[k] := m.IP[k]
     VCi[k] = m.VC[k] then IPi[k] := min(IPi[k], m.IP[k])
     VCi[k] > m.VC[k] then skip
endcase

The proof of the following theorem follows directly from Lemmas 1, 2 and 3.

Theorem 1. The protocol described in Section 3.3 solves the IPT problem: for any relevant event e, the timestamp e.TS contains the identifiers of all its immediate predecessors and no other event identifiers.

4. A GENERAL CONDITION

This section addresses a previously open problem, namely: how to solve the IPT problem without requiring each application message to piggyback a whole vector clock and a whole boolean array? First, a general condition is defined that characterizes which entries of the vectors VCi and IPi can be omitted from the control information attached to a message sent in the computation (Section 4.1). It is then shown (Section 4.2) that this condition is both sufficient and necessary. However, this general condition cannot be locally evaluated by a process that is about to send a message. Thus, locally evaluable approximations of this general condition must be defined. To each approximation corresponds a protocol, implemented with additional local data structures. In that sense, the general condition defines a family of IPT protocols that solve the previously open problem. This issue is addressed in Section 5.

4.1 To Transmit or Not to Transmit Control Information

Let us
consider the previous IPT protocol (Section 3.3). Rule R3 shows that a process Pj does not systematically update each entry VCj[k] each time it receives a message m from a process Pi: there is no update of VCj[k] when VCj[k] ≥ m.VC[k]. In such a case, the value m.VC[k] is useless and could be omitted from the control information transmitted with m by Pi to Pj. Similarly, some entries IPj[k] are not updated when a message m from Pi is received by Pj. This occurs when 0 < VCj[k] = m.VC[k] ∧ m.IP[k] = 1, or when VCj[k] > m.VC[k], or when m.VC[k] = 0 (in the latter case, as m.IP[k] = IPi[k] = 0, no update of IPj[k] is necessary). Differently, some other entries are systematically reset to 0 (this occurs when 0 < VCj[k] = m.VC[k] ∧ m.IP[k] = 0). These observations lead to the definition of the condition K(m, k) that characterizes which entries of the vectors VCi and IPi can be omitted from the control information attached to a message m sent by a process Pi to a process Pj.

Definition 5. K(m, k) ≡ (send(m).VCi[k] = 0) ∨ (send(m).VCi[k] < pred(receive(m)).VCj[k]) ∨ ((send(m).VCi[k] = pred(receive(m)).VCj[k]) ∧ (send(m).IPi[k] = 1)).

4.2 A Necessary and Sufficient Condition

We show here that the condition K(m, k) is both necessary and sufficient to decide which triples of the form (k, send(m).VCi[k], send(m).IPi[k]) can be omitted from an outgoing message m sent by Pi to Pj. A triple attached to m will also be denoted (k, m.VC[k], m.IP[k]). Due to space limitations, the proofs of Lemma 4 and Lemma 5 are given in [1]. (The proof of Theorem 2 follows directly from these lemmas.)

Lemma 4. (Sufficiency) If K(m, k) is true, then the triple (k, m.VC[k], m.IP[k]) is useless with respect to the correct management of IPj[k] and VCj[k].

Lemma 5. (Necessity) If K(m, k) is false, then the triple (k, m.VC[k], m.IP[k]) is necessary to ensure the correct management of IPj[k] and VCj
[k].

Theorem 2. When a process Pi sends m to a process Pj, the condition K(m, k) is both necessary and sufficient for not transmitting the triple (k, send(m).VCi[k], send(m).IPi[k]).

5. A FAMILY OF IPT PROTOCOLS BASED ON EVALUABLE CONDITIONS

It follows from the previous theorem that, if Pi could evaluate K(m, k) when it sends m to Pj, the previous IPT protocol could be improved in the following way: in rule R2, the triple (k, VCi[k], IPi[k]) is transmitted with m only if ¬K(m, k); moreover, rule R3 is appropriately modified to consider only the triples carried by m. However, as previously mentioned, Pi cannot locally evaluate K(m, k) when it is about to send m. More precisely, when Pi sends m to Pj, Pi knows the exact values of send(m).VCi[k] and send(m).IPi[k] (they are the current values of VCi[k] and IPi[k]). But, as far as the value of pred(receive(m)).VCj[k] is concerned, two cases are possible. Case (i): if pred(receive(m)) hb→ send(m), then Pi can know the value of pred(receive(m)).VCj[k] and consequently can evaluate K(m, k). Case (ii): if pred(receive(m)) and send(m) are concurrent, Pi cannot know the value of pred(receive(m)).VCj[k] and consequently cannot evaluate K(m, k). Moreover, when it sends m to Pj, whichever case (i or ii) actually occurs, Pi has no way to know which case occurs. Hence the idea of defining evaluable approximations of the general condition. Let K′(m, k) be an approximation of K(m, k) that can be evaluated by a process Pi when it sends a message m.
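For reference, the exact condition K of Definition 5 is trivial to state as a predicate; what makes it unusable is that one of its inputs exists only at the receiver. A sketch (the argument names are ours, one per term of Definition 5):

```python
def K(send_vc_i_k, send_ip_i_k, pred_recv_vc_j_k):
    """Exact condition K(m, k) of Definition 5.
    send_vc_i_k:      send(m).VCi[k]               -- known to the sender
    send_ip_i_k:      send(m).IPi[k]               -- known to the sender
    pred_recv_vc_j_k: pred(receive(m)).VCj[k]      -- known only at the receiver,
                      which is why the protocols of Section 5 approximate K
                      with a locally evaluable condition rather than evaluate it.
    """
    return (send_vc_i_k == 0
            or send_vc_i_k < pred_recv_vc_j_k
            or (send_vc_i_k == pred_recv_vc_j_k and send_ip_i_k == 1))
```

When K(m, k) holds, the triple (k, VCi[k], IPi[k]) can safely be dropped from m; the approximations K′ below must imply K, erring only on the side of transmitting.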
To be correct, the condition K′ must ensure that, every time Pi should transmit a triple (k, VCi[k], IPi[k]) according to Theorem 2 (i.e., each time ¬K(m, k)), Pi does transmit this triple when it uses condition K′. Hence the definition of a correct evaluable approximation:

Definition 6. A condition K′, locally evaluable by a process when it sends a message m to another process, is correct if ∀(m, k) : ¬K(m, k) ⇒ ¬K′(m, k) or, equivalently, ∀(m, k) : K′(m, k) ⇒ K(m, k).

This definition means that a protocol evaluating K′ to decide which triples must be attached to messages does not miss triples whose transmission is required by Theorem 2. Let us consider the constant condition (denoted K1) that is always false, i.e., ∀(m, k) : K1(m, k) = false. This trivially correct approximation of K corresponds to the particular IPT protocol described in Section 3 (in which each message carries a whole vector clock and a whole boolean vector). The next section presents a better approximation of K (denoted K2).

5.1 A Boolean Matrix-Based Evaluable Condition

Condition K2 is based on the observation that condition K is composed of sub-conditions, some of which can be locally evaluated while the others cannot.

Figure 2: The Evaluable Condition K2

More precisely, K ≡ a ∨ α ∨ (β ∧ b), where a ≡ (send(m).VCi[k] = 0) and b ≡ (send(m).IPi[k] = 1) are locally evaluable, whereas α ≡ (send(m).VCi[k] < pred(receive(m)).VCj[k]) and β ≡ (send(m).VCi[k] = pred(receive(m)).VCj[k]) are not. But, by easy boolean calculus, a ∨ ((α ∨ β) ∧ b) ⇒ a ∨ α ∨ (β ∧ b) ≡ K. This leads to the condition K′ ≡ a ∨ (γ ∧ b), where γ ≡ α ∨ β ≡ (send(m).VCi[k] ≤ pred(receive(m)).VCj[k]), i.e., K′ ≡ (send(m).VCi[k]
\u2264 pred(receive(m)).\nV Cj[k] \u2227 send(m).\nIPi[k] = 1) \u2228 send(m).\nV Ci[k] = 0.\nSo, Pi needs to approximate the predicate send(m).\nV Ci[k] \u2264 pred(receive(m)).\nV Cj[k].\nTo be correct, this approximation has to be a locally evaluable predicate ci(j, k) such that, when Pi is about to send a message m to Pj, ci(j, k) \u21d2 (send(m).\nV Ci[k] \u2264 pred(receive(m)).\nV Cj[k]).\nInformally, that means that, when ci(j, k) holds, the local context of Pi allows to deduce that the receipt of m by Pj will not lead to V Cj[k] update (Pj knows as much as Pi about Pk).\nHence, the concrete condition K2 is the following: K2 \u2261 send(m).\nV Ci[k] = 0 \u2228 (ci(j, k) \u2227 send(m).\nIPi[k] = 1).\nLet us now examine the design of such a predicate (denoted ci).\nFirst, the case j = i can be ignored, since it is assumed (Section 2.1) that a process never sends a message to itself.\nSecond, in the case j = k, the relation send(m).\nV Ci[j] \u2264 pred(receive(m)).\nV Cj [j] is always true, because the receipt of m by Pj cannot update V Cj[j].\nThus, \u2200j = i : ci(j, j) must be true.\nNow, let us consider the case where j = i and j = k (Figure 2).\nSuppose that there exists an event e = receive(m ) with e < send(m), m sent by Pj and piggybacking the triple (k, m .\nV C[k], m .\nIP[k]), and m .\nV C[k] \u2265 V Ci[k] (hence m .\nV C[k] = receive(m ).\nV Ci[k]).\nAs V Cj[k] cannot decrease this means that, as long as V Ci[k] does not increase, for every message m sent by Pi to Pj we have the following: send(m).\nV Ci[k] = receive(m ).\nV Ci[k] = send(m ).\nV Cj[k] \u2264 receive(m).\nV Cj [k], i.e., ci(j, k) must remain true.\nIn other words, once ci(j, k) is true, the only event of Pi that could reset it to false is either the receipt of a message that increases V Ci[k] or, if k = i, the occurrence of a relevant event (that increases V Ci[i]).\nSimilarly, once ci(j, k) is false, the only event that can set it to true is the receipt of a message m from Pj, 
piggybacking the triple (k, m′.VC[k], m′.IP[k]) with m′.VC[k] ≥ VCi[k]. In order to implement the local predicates ci(j, k), each process Pi is equipped with a boolean matrix Mi (as in [11]) such that Mi[j, k] = 1 ⇔ ci(j, k). It follows from the previous discussion that this matrix is managed according to the following rules (note that its i-th line is not significant (case j = i), and that its diagonal is always equal to 1):

M0 Initialization: ∀(j, k) : Mi[j, k] is initialized to 1.

M1 Each time it produces a relevant event e, Pi resets⁴ the i-th column of its matrix: ∀j ≠ i : Mi[j, i] := 0.

M2 When Pi sends a message: no update of Mi occurs.

M3 When it receives a message m from Pj, Pi executes the following updates: ∀k ∈ [1..n]:
case VCi[k] < m.VC[k] then ∀ℓ ≠ i, j, k : Mi[ℓ, k] := 0; Mi[j, k] := 1
     VCi[k] = m.VC[k] then Mi[j, k] := 1
     VCi[k] > m.VC[k] then skip
endcase

The following lemma results from rules M0-M3. The theorem that follows shows that condition K2(m, k) is correct. (Both are proved in [1].)

Lemma 6. ∀i, ∀m sent by Pi to Pj, ∀k, we have: send(m).Mi[j, k] = 1 ⇒ send(m).VCi[k] ≤ pred(receive(m)).VCj[k].

Theorem 3. Let m be a message sent by Pi to Pj. Let K2(m, k) ≡ ((send(m).Mi[j, k] = 1) ∧ (send(m).IPi[k] = 1)) ∨ (send(m).VCi[k] = 0). We have: K2(m, k) ⇒ K(m, k).

5.2 Resulting IPT Protocol

The complete text of the IPT protocol based on the previous discussion follows.

RM0 Initialization: Both VCi[1..n] and IPi[1..n] are set to [0, ...
, 0], and ∀(j, k) : Mi[j, k] is set to 1.

RM1 Each time it produces a relevant event e:
- Pi associates with e the timestamp e.TS defined as follows: e.TS = {(k, VCi[k]) | IPi[k] = 1},
- Pi increments its vector clock entry VCi[i] (namely, it executes VCi[i] := VCi[i] + 1),
- Pi resets IPi: ∀ℓ ≠ i : IPi[ℓ] := 0; IPi[i] := 1,
- Pi resets the i-th column of its boolean matrix: ∀j ≠ i : Mi[j, i] := 0.

RM2 When Pi sends a message m to Pj, it attaches to m the set of triples (each made up of a process id, an integer and a boolean): {(k, VCi[k], IPi[k]) | (Mi[j, k] = 0 ∨ IPi[k] = 0) ∧ (VCi[k] > 0)}.

RM3 When Pi receives a message m from Pj, it executes the following updates: ∀(k, m.VC[k], m.IP[k]) carried by m:
case VCi[k] < m.VC[k] then VCi[k] := m.VC[k]; IPi[k] := m.IP[k]; ∀ℓ ≠ i, j, k : Mi[ℓ, k] := 0; Mi[j, k] := 1
     VCi[k] = m.VC[k] then IPi[k] := min(IPi[k], m.IP[k]); Mi[j, k] := 1
     VCi[k] > m.VC[k] then skip
endcase

⁴ Actually, the value of this column remains constant after its first update. In fact, ∀j, Mi[j, i] can be set to 1 only upon the receipt of a message from Pj carrying the value VCj[i] (see R3). But, as Mj[i, i] = 1, Pj does not send VCj[i] to Pi. So, it is possible to improve the protocol by executing the reset of the column Mi[∗, i] only when Pi produces its first relevant event.

5.3 A Tradeoff

The condition K2(m, k) shows that a triple need not be transmitted when (Mi[j, k] = 1 ∧ IPi[k] = 1) ∨ (VCi[k] = 0). Let us first observe that the management of IPi[k] is governed by the application program. More precisely, the IPT protocol does not define which events are relevant; it only has to guarantee a correct management of IPi[k]. Differently, the matrix Mi does not belong to the problem specification: it is an auxiliary variable of the IPT protocol, which manages it so as to satisfy the following implication when Pi sends m to Pj: (Mi[j, k] = 1) ⇒
(pred(receive(m)).VCj[k] ≥ send(m).VCi[k]). The fact that the management of Mi is governed by the protocol and not by the application program leaves open the possibility of designing a protocol where more entries of Mi are equal to 1. This can make the condition K2(m, k) more often satisfied⁵ and can consequently allow the protocol to transmit fewer triples. We show here that it is possible to transmit fewer triples at the price of transmitting a few additional boolean vectors. The previous matrix-based IPT protocol (Section 5.2) is modified in the following way: the rules RM2 and RM3 are replaced with the modified rules RM2′ and RM3′ (Mi[∗, k] denotes the k-th column of Mi).

RM2′ When Pi sends a message m to Pj, it attaches to m the following set of 4-uples (each made up of a process id, an integer, a boolean and a boolean vector): {(k, VCi[k], IPi[k], Mi[∗, k]) | (Mi[j, k] = 0 ∨ IPi[k] = 0) ∧ VCi[k] > 0}.

RM3′ When Pi receives a message m from Pj, it executes the following updates: ∀(k, m.VC[k], m.IP[k], m.M[1..n, k]) carried by m:
case VCi[k] < m.VC[k] then VCi[k] := m.VC[k]; IPi[k] := m.IP[k]; ∀ℓ ≠ i : Mi[ℓ, k] := m.M[ℓ, k]
     VCi[k] = m.VC[k] then IPi[k] := min(IPi[k], m.IP[k]); ∀ℓ ≠ i : Mi[ℓ, k] := max(Mi[ℓ, k], m.M[ℓ, k])
     VCi[k] > m.VC[k] then skip
endcase

Similarly to the proofs described in [1], it is possible to prove that this protocol still satisfies the property proved in Lemma 6, namely: ∀i, ∀m sent by Pi to Pj, ∀k, we have (send(m).Mi[j, k] = 1) ⇒ (send(m).VCi[k] ≤ pred(receive(m)).VCj[k]).

⁵ Let us consider the previously described protocol (Section 5.2) where the value of each matrix entry Mi[j, k] is always equal to 0. The reader can easily verify that this setting correctly implements the matrix. Moreover, K2(m, k) is then always false: it actually coincides with K1(m, k) (which corresponds to the case where whole vectors have to be transmitted
with each message).

Intuitively, the fact that some columns of the matrices M are attached to application messages allows a transitive transmission of information. More precisely, the relevant history of Pk known by Pj is transmitted to a process Pi via a causal sequence of messages from Pj to Pi. In contrast, the protocol described in Section 5.2 used only a direct transmission of this information. In fact, as explained in Section 5.1, the predicate c (locally implemented by the matrix M) was based on the existence of a message m′ sent by Pj to Pi, piggybacking the triple (k, m′.VC[k], m′.IP[k]) with m′.VC[k] ≥ VCi[k], i.e., on the existence of a direct transmission of information (by the message m′). The resulting IPT protocol (defined by the rules RM0, RM1, RM2′ and RM3′) uses the same condition K2(m, k) as the previous one. It exhibits an interesting tradeoff between the number of triples (k, VCi[k], IPi[k]) whose transmission is saved and the number of boolean vectors that have to be additionally piggybacked. It is worth noticing that the size of this additional information is bounded, while each triple includes an unbounded integer (namely, a vector clock value).

6. EXPERIMENTAL STUDY

This section compares the behaviors of the previous protocols by means of a simulation study. IPT1 denotes the protocol presented in Section 3.3 that uses the condition K1(m, k) (which is always false). IPT2 denotes the protocol presented in Section 5.2 that uses the condition K2(m, k), where messages carry triples. Finally, IPT3 denotes the protocol presented in Section 5.3 that also uses the condition K2(m, k), but where messages carry additional boolean vectors. This section does not aim to provide an in-depth simulation study of the protocols, but rather presents a general view of their behaviors. To this end, it compares IPT2 and IPT3 with regard to IPT1. More precisely, for IPT2 the aim was to evaluate the
gain in terms of triples (k, VCi[k], IPi[k]) not transmitted, with respect to the systematic transmission of whole vectors as done in IPT1. For IPT3, the aim was to evaluate the tradeoff between the additional boolean vectors transmitted and the number of saved triples. The behavior of each protocol was analyzed on a set of programs.

6.1 Simulation Parameters

The simulator provides different parameters enabling the tuning of both the communication and the process features. These parameters allow one to set the number of processes of the simulated computation, to vary the rate of communication (send/receive) events, and to alter the time duration between two consecutive relevant events. Moreover, to be independent of a particular topology of the underlying network, a fully connected network is assumed. Internal events have not been considered. Since the presence of the triples (k, VCi[k], IPi[k]) piggybacked by a message strongly depends on the frequency at which relevant events are produced by a process, different time distributions between two consecutive relevant events have been implemented (e.g., normal, uniform, and Poisson distributions). The senders of messages are chosen according to a random law. To exhibit particular configurations of a distributed computation, a given scenario can be provided to the simulator. Message transmission delays follow a standard normal distribution. Finally, the last parameter of the simulator is the number of send events occurring during a simulation.

6.2 Parameter Settings

To compare the behavior of the three IPT protocols, we performed a large number of simulations using different parameter settings. We set the number of processes participating in a distributed computation to 10. The number of communication events during the simulation has been set to 10,000. The parameter λ of the Poisson time distribution (λ is the average number of relevant events in a given time interval) has been set so that the
relevant events are generated at the beginning of the simulation. With the uniform time distribution, a relevant event is generated (on average) every 10 communication events. The location parameter of the standard normal time distribution has been set so that the occurrence of relevant events is shifted around the third part of the simulation experiment. As noted previously, the simulator can be fed with a given scenario. This allows the analysis of the worst-case scenarios for IPT2 and IPT3. These scenarios correspond to the case where the relevant events are generated at the maximal frequency (i.e., each time a process sends or receives a message, it produces a relevant event). Finally, the three IPT protocols are analyzed with the same simulation parameters.

6.3 Simulation Results

The results are displayed in Figures 3.a-3.d. These figures plot the gain of the protocols in terms of the number of triples that are not transmitted (y axis) with respect to the number of communication events (x axis). From these figures, we observe that, whatever the time distribution followed by the relevant events, both IPT2 and IPT3 exhibit a better behavior than IPT1 (i.e., the total number of piggybacked triples is lower for IPT2 and IPT3 than for IPT1), even in the worst case (see Figure 3.d). Let us consider the worst-case scenario. In that case, the gain is obtained at the very beginning of the simulation and lasts as long as there exists a process Pj for which ∀k : VCj[k] = 0; in that case, the condition ∀k : K(m, k) is satisfied. As soon as ∃k : VCj[k] ≠ 0, both IPT2 and IPT3 behave as IPT1 (the shape of the curve becomes flat), since the condition K(m, k) is no longer satisfied. Figure 3.a shows that during the first events of the simulation, the slopes of the curves of IPT2 and IPT3 are steep. The same occurs in Figure 3.d (which depicts the worst-case scenario). Then the slope of these curves decreases and remains constant until the end of the
simulation. In fact, as soon as VCj[k] becomes greater than 0, the condition ¬K(m, k) reduces to (Mi[j, k] = 0 ∨ IPi[k] = 0). Figure 3.b displays an interesting feature. It considers λ = 100. As the relevant events are generated only during the very beginning of the simulation, this figure exhibits a very steep slope, like the other figures. The figure shows that, as soon as no more relevant events are generated, on average 45% of the triples are not piggybacked by the messages. This shows the importance of the matrix Mi. Furthermore, IPT3 benefits from transmitting additional boolean vectors to save triple transmissions. Figures 3.a-3.c show that the average gain of IPT3 with respect to IPT2 is close to 10%. Finally, Figure 3.c underlines even more the importance of the matrix Mi. When very few relevant events are generated, IPT2 and IPT3 turn out to be very efficient. Indeed, this figure shows that, very quickly, the gain in the number of saved triples is very high (actually, 92% of the triples are saved).

6.4 Lessons Learned from the Simulation

Of course, all simulation results are consistent with the theoretical results: IPT3 is always better than or equal to IPT2, and IPT2 is always better than IPT1. The simulation results teach us more:

• The first lesson we have learned concerns the matrix Mi. Its benefit is quite significant, but mainly depends on the time distribution followed by the relevant events. On the one hand, when observing Figure 3.b, where a large number of relevant events are generated in a very short time, IPT2 can save up to 45% of the triples. However, we could have expected a higher gain for IPT2, since the boolean vector IP tends to stabilize to [1, ..., 1] when no relevant events are generated. In fact, as discussed in Section 5.3, the management of the matrix Mi within IPT2 does not allow a transitive transmission of information, but only a direct transmission. This explains why some columns of Mi
may remain equal to 0 while they could potentially be equal to 1.\nDifferently, as IPT3 benefits from transmitting additional boolean vectors (providing a transitive transmission information) it reaches a gain of 50%.\nOn the other hand, when very few relevant events are taken in a large period of time (see Figure 3.\nc), the behavior of IPT2 and IPT3 turns out to be very efficient since the transmission of up to 92% of the triples is saved.\nThis comes from the fact that very quickly the boolean vector IPi tends to stabilize to [1, ..., 1] and that matrix Mi contains very few 0 since very few relevant events have been taken.\nThus, a direct transmission of the information is sufficient to quickly get matrices Mi equal to [1, ..., 1], ... , [1, ..., 1].\n\u2022 The second lesson concerns IPT3, more precisely, the tradeoff between the additional piggybacking of boolean vectors and the number of triples whose transmission is saved.\nWith n = 10, adding 10 booleans to a triple does not substantially increases its size.\nThe Figures 3.a-3.c exhibit the number of triples whose transmission is saved: the average gain (in number of triples) of IPT3 with respect to IPT2 is about 10%.\n7.\nCONCLUSION This paper has addressed an important causality-related distributed computing problem, namely, the Immediate Predecessors Tracking problem.\nIt has presented a family of protocols that provide each relevant event with a timestamp that exactly identify its immediate predecessors.\nThe family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes).\nIn that sense, this family defines message size-efficient IPT protocols.\nAccording to the way the general condition is implemented, different IPT protocols can be obtained.\nThree of them have been described and analyzed with simulation experiments.\nInterestingly, it has also been shown that the efficiency of the protocols 
(measured in terms of the size of the control information that is not piggybacked by an application message) depends on the pattern defined by the communication events and the relevant events. Last but not least, it is interesting to note that if one is not interested in tracking the immediate predecessor events, the protocols presented in the paper can be simplified by suppressing the IPi boolean vectors (but keeping the boolean matrices Mi). The resulting protocols, which implement a vector clock system, are particularly efficient as far as the size of the timestamp carried by each message is concerned. Interestingly, this efficiency is not obtained at the price of additional assumptions (such as FIFO channels).

8. REFERENCES
[1] Anceaume E., Hélary J.-M. and Raynal M., Tracking Immediate Predecessors in Distributed Computations. Research Report #1344, IRISA, Univ. Rennes (France), 2001.
[2] Baldoni R., Prakash R., Raynal M. and Singhal M., Efficient Δ-Causal Broadcasting. Journal of Computer Systems Science and Engineering, 13(5):263-270, 1998.
[3] Chandy K.M. and Lamport L., Distributed Snapshots: Determining Global States of Distributed Systems. ACM Transactions on Computer Systems, 3(1):63-75, 1985.
[4] Diehl C., Jard C. and Rampon J.-X., Reachability Analysis of Distributed Executions. Proc. TAPSOFT'93, Springer-Verlag LNCS 668, pp. 629-643, 1993.
[5] Fidge C.J., Timestamps in Message-Passing Systems that Preserve Partial Ordering. Proc. 11th Australian Computing Conference, pp. 56-66, 1988.
[6] Fromentin E., Jard C., Jourdan G.-V. and Raynal M., On-the-fly Analysis of Distributed Computations. IPL, 54:267-274, 1995.
[7] Fromentin E. and Raynal M., Shared Global States in Distributed Computations. JCSS, 55(3):522-528, 1997.
[8] Fromentin E., Raynal M., Garg V.K. and Tomlinson A., On-the-Fly Testing of Regular Patterns in Distributed Computations. Proc. ICPP'94, Vol. 2:73-76, 1994.
[9] Garg V.K., Principles of Distributed Systems. Kluwer Academic Press, 274 pages, 1996.
[10] Hélary J.-M., Mostéfaoui A., Netzer R.H.B. and Raynal M., Communication-Based Prevention of Useless Checkpoints in Distributed Computations. Distributed Computing, 13(1):29-43, 2000.
[11] Hélary J.-M., Melideo G. and Raynal M., Tracking Causality in Distributed Systems: a Suite of Efficient Protocols. Proc. SIROCCO'00, Carleton University Press, pp. 181-195, L'Aquila (Italy), June 2000.
[12] Hélary J.-M., Netzer R. and Raynal M., Consistency Issues in Distributed Checkpoints. IEEE TSE, 25(4):274-281, 1999.
[13] Hurfin M., Mizuno M., Raynal M. and Singhal M., Efficient Distributed Detection of Conjunctions of Local Predicates in Asynchronous Computations. IEEE TSE, 24(8):664-677, 1998.
[14] Lamport L., Time, Clocks and the Ordering of Events in a Distributed System. Comm. ACM, 21(7):558-565, 1978.
[15] Marzullo K. and Sabel L., Efficient Detection of a Class of Stable Properties. Distributed Computing, 8(2):81-91, 1994.
[16] Mattern F., Virtual Time and Global States of Distributed Systems. Proc. Int. Conf. on Parallel and Distributed Algorithms (Cosnard, Quinton, Raynal, Robert Eds), North-Holland, pp. 215-226, 1988.
[17] Prakash R., Raynal M. and Singhal M., An Adaptive Causal Ordering Algorithm Suited to Mobile Computing Environments. JPDC, 41:190-204, 1997.
[18] Raynal M. and Singhal S., Logical Time: Capturing Causality in Distributed Systems. IEEE Computer, 29(2):49-57, 1996.
[19] Singhal M.
and Kshemkalyani A., An Efficient Implementation of Vector Clocks.\nIPL, 43:47-52, 1992.\n[20] Wang Y.M., Consistent Global Checkpoints That Contain a Given Set of Local Checkpoints.\nIEEE TOC, 46(4):456-468, 1997.\n218 0 1000 2000 3000 4000 5000 6000 0 2000\u00a04000\u00a06000\u00a08000 10000 gaininnumberoftriples communication events number IPT1 IPT2 IPT3 relevant events (a) The relevant events follow a uniform distribution (ratio=1\/10) -5000 0 5000 10000 15000 20000 25000 30000 35000 40000 45000 50000 0 2000\u00a04000\u00a06000\u00a08000 10000 gaininnumberoftriples communication events number IPT1 IPT2 IPT3 relevant events (b) The relevant events follow a Poisson distribution (\u03bb = 100) 0 10000 20000 30000 40000 50000 60000 70000 80000 90000 100000 0 2000\u00a04000\u00a06000\u00a08000 10000 gaininnumberoftriples communication events number IPT1 IPT2 IPT3 relevant events (c) The relevant events follow a normal distribution 0 50 100 150 200 250 300 350 400 450 1 10\u00a0100\u00a01000\u00a010000 gaininnumberoftriples communication events number IPT1 IPT2 IPT3 relevant events (d) For each pi, pi takes a relevant event and broadcast to all processes Figure 3: Experimental Results 219","lvl-3":"Tracking Immediate Predecessors in Distributed Computations\nABSTRACT\nA distributed computation is usually modeled as a partially ordered set of relevant events (the relevant events are a subset of the primitive events produced by the computation).\nAn important causality-related distributed computing problem, that we call the Immediate Predecessors Tracking (IPT) problem, consists in associating with each relevant event, on the fly and without using additional control messages, the set of relevant events that are its immediate predecessors in the partial order.\nSo, IPT is the on-the-fly computation of the transitive reduction (i.e., Hasse diagram) of the causality relation defined by a distributed computation.\nThis paper addresses the IPT problem: it presents a family 
of protocols that provides each relevant event with a timestamp that exactly identifies its immediate predecessors.\nThe family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes).\nIn that sense, this family defines message size-efficient IPT protocols.\nAccording to the way the general condition is implemented, different IPT protocols can be obtained.\nTwo of them are exhibited.\n1.\nINTRODUCTION\nA distributed computation consists of a set of processes that cooperate to achieve a common goal.\nA main characteristic of these computations lies in the fact that the\nprocesses do not share a common global memory, and communicate only by exchanging messages over a communication network.\nMoreover, message transfer delays are finite but unpredictable.\nThis computation model defines what is known as the asynchronous distributed system model.\nIt is particularly important as it includes systems that span large geographic areas, and systems that are subject to unpredictable loads.\nConsequently, the concepts, tools and mechanisms developed for asynchronous distributed systems reveal to be both important and general.\nCausality is a key concept to understand and master the behavior of asynchronous distributed systems [18].\nMore precisely, given two events e and f of a distributed computation, a crucial problem that has to be solved in a lot of distributed applications is to know whether they are causally related, i.e., if the occurrence of one of them is a consequence of the occurrence of the other.\nThe causal past of an event e is the set of events from which e is causally dependent.\nEvents that are not causally dependent are said to be concurrent.\nVector clocks [5, 16] have been introduced to allow processes to track causality (and concurrency) between the events they produce.\nThe timestamp of an event produced by a process is the current value of the vector 
clock of the corresponding process.\nIn that way, by associating vector timestamps with events it becomes possible to safely decide whether two events are causally related or not.\nUsually, according to the problem he focuses on, a designer is interested only in a subset of the events produced by a distributed execution (e.g., only the checkpoint events are meaningful when one is interested in determining consistent global checkpoints [12]).\nIt follows that detecting causal dependencies (or concurrency) on all the events of the distributed computation is not desirable in all applications [7, 15].\nIn other words, among all the events that may occur in a distributed computation, only a subset of them are relevant.\nIn this paper, we are interested in the restriction of the causality relation to the subset of events defined as being the relevant events of the computation.\nBeing a strict partial order, the causality relation is transitive.\nAs a consequence, among all the relevant events that causally precede a given relevant event e, only a subset are its immediate predecessors: those are the events f such that there is no relevant event on any causal path from f to e. Unfortunately, given only the vector timestamp associated with an event it is not possible to determine which events of its causal past are its immediate predecessors.\nThis comes from the fact that the vector timestamp associated with e determines, for each process, the last relevant event belong210 ing to the causal past of e, but such an event is not necessarily an immediate predecessor of e. 
However, some applications [4, 6] require to associate with each relevant event only the set of its immediate predecessors.\nThose applications are mainly related to the analysis of distributed computations.\nSome of those analyses require the construction of the lattice of consistent cuts produced by the computation [15, 16].\nIt is shown in [4] that the tracking of immediate predecessors allows an efficient on the fly construction of this lattice.\nMore generally, these applications are interested in the very structure of the causal past.\nIn this context, the determination of the immediate predecessors becomes a major issue [6].\nAdditionally, in some circumstances, this determination has to satisfy behavior constraints.\nIf the communication pattern of the distributed computation cannot be modified, the determination has to be done without adding control messages.\nWhen the immediate predecessors are used to monitor the computation, it has to be done on the fly.\nWe call Immediate Predecessor Tracking (IPT) the problem that consists in determining on the fly and without additional messages the immediate predecessors of relevant events.\nThis problem consists actually in determining the transitive reduction (Hasse diagram) of the causality graph generated by the relevant events of the computation.\nSolving this problem requires tracking causality, hence using vector clocks.\nPrevious works have addressed the efficient implementation of vector clocks to track causal dependence on relevant events.\nTheir aim was to reduce the size of timestamps attached to messages.\nAn efficient vector clock implementation suited to systems with FIFO channels is proposed in [19].\nAnother efficient implementation that does not depend on channel ordering property is described in [11].\nThe notion of causal barrier is introduced in [2, 17] to reduce the size of control information required to implement causal multicast.\nHowever, none of these papers considers the IPT 
problem.\nThis problem has been addressed for the first time (to our knowledge) in [4, 6] where an IPT protocol is described, but without correctness proof.\nMoreover, in this protocol, timestamps attached to messages are of size n.\nThis raises the following question which, to our knowledge, has never been answered: \"Are there efficient vector clock implementation techniques that are suitable for the IPT problem?\"\n.\nThis paper has three main contributions: (1) a positive answer to the previous open question, (2) the design of a family of efficient IPT protocols, and (3) a formal correctness proof of the associated protocols.\nFrom a methodological point of view the paper uses a top-down approach.\nIt states abstract properties from which more concrete properties and protocols are derived.\nThe family of IPT protocols is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than the system size (i.e., smaller than the number of processes composing the system).\nIn that sense, this family defines low cost IPT protocols when we consider the message size.\nIn addition to efficiency, the proposed approach has an interesting design property.\nNamely, the family is incrementally built in three steps.\nThe basic vector clock protocol is first enriched by adding to each process a boolean vector whose management allows the processes to track the immediate predecessor events.\nThen, a general condition is stated to reduce the size of the control information carried by messages.\nFinally, according to the way this condition is implemented, three IPT protocols are obtained.\nThe paper is composed of seven sections.\nSections 2 introduces the computation model, vector clocks and the notion of relevant events.\nSection 3 presents the first step of the construction that results in an IPT protocol in which each message carries a vector clock and a boolean array, both of size n (the number of 
processes).\nSection 4 improves this protocol by providing the general condition that allows a message to carry control information whose size can be smaller than n. Section 5 provides instantiations of this condition.\nSection 6 provides a simulation study comparing the behaviors of the proposed protocols.\nFinally, Section 7 concludes the paper.\n(Due to space limitations, proofs of lemmas and theorems are omitted.\nThey can be found in [1].)\n2.\nMODEL AND VECTOR CLOCK 2.1 Distributed Computation\n2.2 Relevant Events\n2.3 Vector Clock System\n3.\nIMMEDIATE PREDECESSORS\n3.1 The IPT Problem\n3.2 Formal Properties of IPT\n3.3 A Basic IPT Protocol\n4.\nA GENERAL CONDITION\n4.1 To Transmit or Not to Transmit Control Information\n4.2 A Necessary and Sufficient Condition\n5.\nA FAMILY OF IPT PROTOCOLS BASED ON EVALUABLE CONDITIONS\n5.1 A Boolean Matrix-Based Evaluable Condition\n5.2 Resulting IPT Protocol\n5.3 A Tradeoff\n6.\nEXPERIMENTAL STUDY\n6.1 Simulation Parameters\n6.2 Parameter Settings\n6.3 Simulation Results\n6.4 Lessons Learned from the Simulation\n7.\nCONCLUSION\nThis paper has addressed an important causality-related distributed computing problem, namely, the Immediate Predecessors Tracking problem.\nIt has presented a family of protocols that provide each relevant event with a timestamp that exactly identify its immediate predecessors.\nThe family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes).\nIn that sense, this family defines message size-efficient IPT protocols.\nAccording to the way the general condition is implemented, different IPT protocols can be obtained.\nThree of them have been described and analyzed with simulation experiments.\nInterestingly, it has also been shown that the efficiency of the protocols (measured in terms of the size of the control information that is not piggybacked by an application message) depends on the 
pattern defined by the communication events and the relevant events.\nLast but not least, it is interesting to note that if one is not interested in tracking the immediate predecessor events, the protocols presented in the paper can be simplified by suppressing the IPi booleans vectors (but keeping the boolean matrices Mi).\nThe resulting protocols, that implement a vector clock system, are particularly efficient as far as the size of the timestamp carried by each message is concerned.\nInterestingly, this efficiency is not obtained at the price of additional assumptions (such as FIFO channels).","lvl-4":"Tracking Immediate Predecessors in Distributed Computations\nABSTRACT\nA distributed computation is usually modeled as a partially ordered set of relevant events (the relevant events are a subset of the primitive events produced by the computation).\nAn important causality-related distributed computing problem, that we call the Immediate Predecessors Tracking (IPT) problem, consists in associating with each relevant event, on the fly and without using additional control messages, the set of relevant events that are its immediate predecessors in the partial order.\nSo, IPT is the on-the-fly computation of the transitive reduction (i.e., Hasse diagram) of the causality relation defined by a distributed computation.\nThis paper addresses the IPT problem: it presents a family of protocols that provides each relevant event with a timestamp that exactly identifies its immediate predecessors.\nThe family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes).\nIn that sense, this family defines message size-efficient IPT protocols.\nAccording to the way the general condition is implemented, different IPT protocols can be obtained.\nTwo of them are exhibited.\n1.\nINTRODUCTION\nA distributed computation consists of a set of processes that cooperate to achieve a common 
goal.\nA main characteristic of these computations lies in the fact that the\nprocesses do not share a common global memory, and communicate only by exchanging messages over a communication network.\nMoreover, message transfer delays are finite but unpredictable.\nThis computation model defines what is known as the asynchronous distributed system model.\nConsequently, the concepts, tools and mechanisms developed for asynchronous distributed systems reveal to be both important and general.\nCausality is a key concept to understand and master the behavior of asynchronous distributed systems [18].\nThe causal past of an event e is the set of events from which e is causally dependent.\nEvents that are not causally dependent are said to be concurrent.\nVector clocks [5, 16] have been introduced to allow processes to track causality (and concurrency) between the events they produce.\nThe timestamp of an event produced by a process is the current value of the vector clock of the corresponding process.\nIn that way, by associating vector timestamps with events it becomes possible to safely decide whether two events are causally related or not.\nIt follows that detecting causal dependencies (or concurrency) on all the events of the distributed computation is not desirable in all applications [7, 15].\nIn other words, among all the events that may occur in a distributed computation, only a subset of them are relevant.\nIn this paper, we are interested in the restriction of the causality relation to the subset of events defined as being the relevant events of the computation.\nBeing a strict partial order, the causality relation is transitive.\nThose applications are mainly related to the analysis of distributed computations.\nSome of those analyses require the construction of the lattice of consistent cuts produced by the computation [15, 16].\nIt is shown in [4] that the tracking of immediate predecessors allows an efficient on the fly construction of this lattice.\nMore 
generally, these applications are interested in the very structure of the causal past.\nIn this context, the determination of the immediate predecessors becomes a major issue [6].\nIf the communication pattern of the distributed computation cannot be modified, the determination has to be done without adding control messages.\nWhen the immediate predecessors are used to monitor the computation, it has to be done on the fly.\nWe call Immediate Predecessor Tracking (IPT) the problem that consists in determining on the fly and without additional messages the immediate predecessors of relevant events.\nThis problem consists actually in determining the transitive reduction (Hasse diagram) of the causality graph generated by the relevant events of the computation.\nSolving this problem requires tracking causality, hence using vector clocks.\nPrevious works have addressed the efficient implementation of vector clocks to track causal dependence on relevant events.\nTheir aim was to reduce the size of timestamps attached to messages.\nAn efficient vector clock implementation suited to systems with FIFO channels is proposed in [19].\nAnother efficient implementation that does not depend on channel ordering property is described in [11].\nThe notion of causal barrier is introduced in [2, 17] to reduce the size of control information required to implement causal multicast.\nHowever, none of these papers considers the IPT problem.\nThis problem has been addressed for the first time (to our knowledge) in [4, 6] where an IPT protocol is described, but without correctness proof.\nMoreover, in this protocol, timestamps attached to messages are of size n.\nThis raises the following question which, to our knowledge, has never been answered: \"Are there efficient vector clock implementation techniques that are suitable for the IPT problem?\"\n.\nFrom a methodological point of view the paper uses a top-down approach.\nIt states abstract properties from which more concrete properties and 
protocols are derived.\nThe family of IPT protocols is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than the system size (i.e., smaller than the number of processes composing the system).\nIn that sense, this family defines low cost IPT protocols when we consider the message size.\nIn addition to efficiency, the proposed approach has an interesting design property.\nNamely, the family is incrementally built in three steps.\nThe basic vector clock protocol is first enriched by adding to each process a boolean vector whose management allows the processes to track the immediate predecessor events.\nThen, a general condition is stated to reduce the size of the control information carried by messages.\nFinally, according to the way this condition is implemented, three IPT protocols are obtained.\nThe paper is composed of seven sections.\nSections 2 introduces the computation model, vector clocks and the notion of relevant events.\nSection 3 presents the first step of the construction that results in an IPT protocol in which each message carries a vector clock and a boolean array, both of size n (the number of processes).\nSection 4 improves this protocol by providing the general condition that allows a message to carry control information whose size can be smaller than n. 
Section 5 provides instantiations of this condition.\nSection 6 provides a simulation study comparing the behaviors of the proposed protocols.\nFinally, Section 7 concludes the paper.\n7.\nCONCLUSION\nThis paper has addressed an important causality-related distributed computing problem, namely, the Immediate Predecessors Tracking problem.\nIt has presented a family of protocols that provide each relevant event with a timestamp that exactly identify its immediate predecessors.\nThe family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes).\nIn that sense, this family defines message size-efficient IPT protocols.\nAccording to the way the general condition is implemented, different IPT protocols can be obtained.\nThree of them have been described and analyzed with simulation experiments.\nInterestingly, it has also been shown that the efficiency of the protocols (measured in terms of the size of the control information that is not piggybacked by an application message) depends on the pattern defined by the communication events and the relevant events.\nLast but not least, it is interesting to note that if one is not interested in tracking the immediate predecessor events, the protocols presented in the paper can be simplified by suppressing the IPi booleans vectors (but keeping the boolean matrices Mi).\nThe resulting protocols, that implement a vector clock system, are particularly efficient as far as the size of the timestamp carried by each message is concerned.","lvl-2":"Tracking Immediate Predecessors in Distributed Computations\nABSTRACT\nA distributed computation is usually modeled as a partially ordered set of relevant events (the relevant events are a subset of the primitive events produced by the computation).\nAn important causality-related distributed computing problem, that we call the Immediate Predecessors Tracking (IPT) problem, consists in 
associating with each relevant event, on the fly and without using additional control messages, the set of relevant events that are its immediate predecessors in the partial order.\nSo, IPT is the on-the-fly computation of the transitive reduction (i.e., Hasse diagram) of the causality relation defined by a distributed computation.\nThis paper addresses the IPT problem: it presents a family of protocols that provides each relevant event with a timestamp that exactly identifies its immediate predecessors.\nThe family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes).\nIn that sense, this family defines message size-efficient IPT protocols.\nAccording to the way the general condition is implemented, different IPT protocols can be obtained.\nTwo of them are exhibited.\n1.\nINTRODUCTION\nA distributed computation consists of a set of processes that cooperate to achieve a common goal.\nA main characteristic of these computations lies in the fact that the\nprocesses do not share a common global memory, and communicate only by exchanging messages over a communication network.\nMoreover, message transfer delays are finite but unpredictable.\nThis computation model defines what is known as the asynchronous distributed system model.\nIt is particularly important as it includes systems that span large geographic areas, and systems that are subject to unpredictable loads.\nConsequently, the concepts, tools and mechanisms developed for asynchronous distributed systems reveal to be both important and general.\nCausality is a key concept to understand and master the behavior of asynchronous distributed systems [18].\nMore precisely, given two events e and f of a distributed computation, a crucial problem that has to be solved in a lot of distributed applications is to know whether they are causally related, i.e., if the occurrence of one of them is a consequence of the 
occurrence of the other.\nThe causal past of an event e is the set of events from which e is causally dependent.\nEvents that are not causally dependent are said to be concurrent.\nVector clocks [5, 16] have been introduced to allow processes to track causality (and concurrency) between the events they produce.\nThe timestamp of an event produced by a process is the current value of the vector clock of the corresponding process.\nIn that way, by associating vector timestamps with events it becomes possible to safely decide whether two events are causally related or not.\nUsually, according to the problem he focuses on, a designer is interested only in a subset of the events produced by a distributed execution (e.g., only the checkpoint events are meaningful when one is interested in determining consistent global checkpoints [12]).\nIt follows that detecting causal dependencies (or concurrency) on all the events of the distributed computation is not desirable in all applications [7, 15].\nIn other words, among all the events that may occur in a distributed computation, only a subset of them are relevant.\nIn this paper, we are interested in the restriction of the causality relation to the subset of events defined as being the relevant events of the computation.\nBeing a strict partial order, the causality relation is transitive.\nAs a consequence, among all the relevant events that causally precede a given relevant event e, only a subset are its immediate predecessors: those are the events f such that there is no relevant event on any causal path from f to e. Unfortunately, given only the vector timestamp associated with an event it is not possible to determine which events of its causal past are its immediate predecessors.\nThis comes from the fact that the vector timestamp associated with e determines, for each process, the last relevant event belong210 ing to the causal past of e, but such an event is not necessarily an immediate predecessor of e. 
However, some applications [4, 6] require to associate with each relevant event only the set of its immediate predecessors.\nThose applications are mainly related to the analysis of distributed computations.\nSome of those analyses require the construction of the lattice of consistent cuts produced by the computation [15, 16].\nIt is shown in [4] that the tracking of immediate predecessors allows an efficient on the fly construction of this lattice.\nMore generally, these applications are interested in the very structure of the causal past.\nIn this context, the determination of the immediate predecessors becomes a major issue [6].\nAdditionally, in some circumstances, this determination has to satisfy behavior constraints.\nIf the communication pattern of the distributed computation cannot be modified, the determination has to be done without adding control messages.\nWhen the immediate predecessors are used to monitor the computation, it has to be done on the fly.\nWe call Immediate Predecessor Tracking (IPT) the problem that consists in determining on the fly and without additional messages the immediate predecessors of relevant events.\nThis problem consists actually in determining the transitive reduction (Hasse diagram) of the causality graph generated by the relevant events of the computation.\nSolving this problem requires tracking causality, hence using vector clocks.\nPrevious works have addressed the efficient implementation of vector clocks to track causal dependence on relevant events.\nTheir aim was to reduce the size of timestamps attached to messages.\nAn efficient vector clock implementation suited to systems with FIFO channels is proposed in [19].\nAnother efficient implementation that does not depend on channel ordering property is described in [11].\nThe notion of causal barrier is introduced in [2, 17] to reduce the size of control information required to implement causal multicast.\nHowever, none of these papers considers the IPT 
problem. This problem has been addressed for the first time (to our knowledge) in [4, 6], where an IPT protocol is described, but without a correctness proof. Moreover, in this protocol, timestamps attached to messages are of size n. This raises the following question which, to our knowledge, has never been answered: "Are there efficient vector clock implementation techniques that are suitable for the IPT problem?" This paper has three main contributions: (1) a positive answer to the previous open question, (2) the design of a family of efficient IPT protocols, and (3) a formal correctness proof of the associated protocols. From a methodological point of view, the paper uses a top-down approach. It states abstract properties from which more concrete properties and protocols are derived. The family of IPT protocols is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than the system size (i.e., smaller than the number of processes composing the system). In that sense, this family defines low-cost IPT protocols when we consider the message size. In addition to efficiency, the proposed approach has an interesting design property. Namely, the family is incrementally built in three steps. The basic vector clock protocol is first enriched by adding to each process a boolean vector whose management allows the processes to track the immediate predecessor events. Then, a general condition is stated to reduce the size of the control information carried by messages. Finally, according to the way this condition is implemented, three IPT protocols are obtained. The paper is composed of seven sections. Section 2 introduces the computation model, vector clocks and the notion of relevant events. Section 3 presents the first step of the construction, which results in an IPT protocol in which each message carries a vector clock and a boolean array, both of size n (the number of
processes). Section 4 improves this protocol by providing the general condition that allows a message to carry control information whose size can be smaller than n. Section 5 provides instantiations of this condition. Section 6 provides a simulation study comparing the behaviors of the proposed protocols. Finally, Section 7 concludes the paper. (Due to space limitations, proofs of lemmas and theorems are omitted. They can be found in [1].)

2. MODEL AND VECTOR CLOCK

2.1 Distributed Computation

A distributed program is made up of sequential local programs which communicate and synchronize only by exchanging messages. A distributed computation describes the execution of a distributed program. The execution of a local program gives rise to a sequential process. Let {P1, P2, ..., Pn} be the finite set of sequential processes of the distributed computation. Each ordered pair of communicating processes (Pi, Pj) is connected by a reliable channel cij through which Pi can send messages to Pj. We assume that each message is unique and that a process does not send messages to itself1. Message transmission delays are finite but unpredictable. Moreover, channels are not necessarily FIFO. Process speeds are positive but arbitrary. In other words, the underlying computation model is asynchronous. The local program associated with Pi can include send, receive and internal statements. The execution of such a statement produces a corresponding send/receive/internal event. These events are called primitive events. Let e_i^x be the x-th event produced by process Pi. The sequence h_i = e_i^1 e_i^2 ... e_i^x ... constitutes the history of Pi, denoted Hi. Let H = H1 ∪ ... ∪ Hn be the set of events produced by a distributed computation. This set is structured as a partial order by Lamport's happened before relation [14] (denoted →hb). Clearly, the restriction of →hb to Hi, for a given i, is a total order. Thus we will use the notation e_i^x < e_i^y iff x < y.
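As stated in the Introduction, the IPT problem amounts to computing the transitive reduction (Hasse diagram) of the causality relation on relevant events. As a reference point, that target can be illustrated offline: if the whole relation were known globally, the diagram could be computed as below (the protocols developed in this paper compute the same diagram on the fly and without extra messages; the event identities and vector timestamps here are hypothetical):

```python
def transitive_reduction(events, precedes):
    """Keep an edge (f, e) iff f -> e with no relevant g in between."""
    edges = set()
    for e in events:
        past = [f for f in events if precedes(f, e)]
        for f in past:
            # f is an immediate predecessor of e iff no g with f -> g -> e.
            if not any(precedes(f, g) and precedes(g, e) for g in past):
                edges.add((f, e))
    return edges

# Relevant events identified by (process id, sequence number); their
# (hypothetical) vector timestamps define the causality relation:
ts = {(1, 1): [1, 0, 0], (1, 2): [2, 0, 0], (2, 1): [0, 1, 0],
      (3, 1): [0, 0, 1], (2, 2): [2, 2, 1]}
prec = lambda f, e: all(x <= y for x, y in zip(ts[f], ts[e])) and ts[f] != ts[e]
hasse = transitive_reduction(list(ts), prec)
# (1, 1) -> (2, 2) holds, but (1, 1) is not an immediate predecessor:
assert ((1, 2), (2, 2)) in hasse and ((1, 1), (2, 2)) not in hasse
```

The quadratic scan over the causal past is affordable offline; the point of the IPT protocols is to obtain the same edges without ever materializing the full relation.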
Throughout the paper, we will use the following notation: if e ∈ Hi is not the first event produced by Pi, then pred(e) denotes the event immediately preceding e in the sequence Hi. If e is the first event produced by Pi, then pred(e) is denoted by ⊥ (meaning that there is no such event), and ∀e ∈ Hi: ⊥ < e. The partial order Ĥ = (H, →hb) constitutes a formal model of the distributed computation it is associated with.

Figure 1: Timestamped Relevant Events and Immediate Predecessors Graph (Hasse Diagram)

2.2 Relevant Events

For a given observer of a distributed computation, only some events are relevant2 [7, 9, 15]. An interesting example of "what an observation is" is the detection of predicates on consistent global states of a distributed computation [3, 6, 8, 9, 13, 15]. In that case, a relevant event corresponds to the modification of a local variable involved in the global predicate. Another example is the checkpointing problem, where a relevant event is the definition of a local checkpoint [10, 12, 20]. The left part of Figure 1 depicts a distributed computation using the classical space-time diagram. In this figure, only relevant events are represented. The sequence of relevant events produced by process Pi is denoted by Ri, and R = R1 ∪ ... ∪ Rn ⊆ H denotes the set of all relevant events. Let → be the relation on R defined in the following way: ∀(e, f) ∈ R × R: (e → f) ⇔ (e →hb f). The poset (R, →) constitutes an abstraction of the distributed computation [7]. In the following we consider a distributed computation at such an abstraction level. Moreover, without loss of generality, we consider that the set of relevant events is a subset of the internal events (if a communication event has to be observed, a relevant internal event can be generated just before a send and just after a receive communication event). Each relevant event is identified by a pair (process id, sequence number) (see Figure 1). Note that, if e ∈ R then
its relevant causal past is the set ↓(e) = {f ∈ R | f → e}. In the computation described in Figure 1, we have, for the event e identified (2, 2): ↓(e) = {(1, 1), (1, 2), (2, 1), (3, 1)}. For e ∈ H and j ∈ {1, ..., n}, let lastr(e, j) denote the last relevant event of Pj belonging to the causal past of e (⊥ if there is no such event). The following properties are immediate consequences of the previous definitions. Let us consider the event e identified (2, 2) in Figure 1. We have lastr(e, 1) = (1, 2), lastr(e, 2) = (2, 1), lastr(e, 3) = (3, 1). The following properties relate the events lastr(e, j) and lastr(f, j) for all the predecessors f of e in the relation →. These properties follow directly from the definitions. LR2: If e is a receive event of m: ∀j ≠ i: lastr(e, j) = max(lastr(pred(e), j), lastr(send(m), j)).

2.3 Vector Clock System

Definition. As a fundamental concept associated with causality theory, vector clocks have been introduced in 1988, simultaneously and independently, by Fidge [5] and Mattern [16]. A vector clock system is a mechanism that associates timestamps with events in such a way that the comparison of their timestamps indicates whether the corresponding events are or are not causally related (and, if they are, which one is the first). More precisely, each process Pi has a vector of integers VCi[1..n] such that VCi[j] is the number of relevant events produced by Pj that belong to the current relevant causal past of Pi. Note that VCi[i] counts the number of relevant events produced so far by Pi. When a process Pi produces a (relevant) event e, it associates with e a vector timestamp whose value (denoted e.VC) is equal to the current value of VCi.

Vector Clock Implementation. The following implementation of vector clocks [5, 16] is based on the observation that ∀i, ∀e ∈ Hi, ∀j: e.VCi[j] = y ⇔ lastr(e, j) = e_j^y, where e.VCi is the value of VCi just after the occurrence of e (this relation results directly from the properties LR0, LR1, and LR2). Each process Pi manages its vector clock VCi[1..n] according to the following rules: VC1 Each time it produces a relevant event e, Pi first increments its vector clock entry VCi[i] (VCi[i] := VCi[i] + 1) to indicate it has produced
one more relevant event, then Pi associates with e the timestamp e.VC = VCi. VC2 When a process Pi sends a message m, it attaches to m the current value of VCi. Let m.VC denote this value. VC3 When Pi receives a message m, it updates its vector clock as follows: ∀k: VCi[k] := max(VCi[k], m.VC[k]).

3. IMMEDIATE PREDECESSORS

In this section, the Immediate Predecessor Tracking (IPT) problem is stated (Section 3.1). Then, some technical properties of immediate predecessors are stated and proved (Section 3.2). These properties are used to design the basic IPT protocol and prove its correctness (Section 3.3). This IPT protocol, previously presented in [4] without proof, is built from a vector clock protocol by adding the management of a local boolean array at each process.

3.1 The IPT Problem

As indicated in the introduction, some applications (e.g., analysis of distributed executions [6], detection of distributed properties [7]) require determining (on the fly and without additional messages) the transitive reduction of the relation → (i.e., we must not consider transitive causal dependency). Given two relevant events f and e, we say that f is an immediate predecessor of e if f → e and there is no relevant event g such that f → g → e. As noted in the Introduction, the IPT problem is the computation of the Hasse diagram associated with the partially ordered set of the relevant events produced by a distributed computation.

3.2 Formal Properties of IPT

In order to design a protocol solving the IPT problem, it is useful to consider the notion of immediate relevant predecessor of any event, whether relevant or not. First, we observe that, by definition, the immediate predecessor on Pj of an event e is necessarily the event lastr(e, j). Second, for lastr(e, j) to be an immediate predecessor of e, there must not be another lastr(e, k) event on a path between lastr(e, j) and e. These observations are formalized in the following definition: It
follows from this definition that IP(e) ⊆ {lastr(e, j) | j = 1, ..., n} ⊆ ↓(e). When we consider Figure 1, the graph depicted in its right part describes the immediate predecessors of the relevant events of the computation defined in its left part; more precisely, a directed edge (e, f) means that the relevant event e is an immediate predecessor of the relevant event f (3). The following lemmas show how the set of immediate predecessors of an event is related to those of its predecessors in the relation →hb. They will be used to design and prove the protocols solving the IPT problem. To ease the reading of the paper, their proofs are presented in Appendix A. The intuitive meaning of the first lemma is the following: if e is not a receive event, all the causal paths arriving at e have pred(e) as next-to-last event (see CP1). So, if pred(e) is a relevant event, all the relevant events belonging to its relevant causal past are "separated" from e by pred(e), and pred(e) becomes the only immediate predecessor of e. In other words, the event pred(e) constitutes a "reset" w.r.t. the set of immediate predecessors of e. On the other hand, if pred(e) is not relevant, it does not separate its relevant causal past from e. The intuitive meaning of the next lemma is as follows: if e is a receive event receive(m), the causal paths arriving at e have either pred(e) or send(m) as next-to-last event. If pred(e) is relevant, as explained in the previous lemma, this event "hides" from e all its relevant causal past and becomes an immediate predecessor of e.
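Before the remaining cases are detailed, the mechanics suggested so far — a relevant pred(e) "resetting" the immediate-predecessor set, and a receive merging the sender's knowledge with the local one — can be sketched as a small per-process state machine. This is our own simplified rendering (0-based process indices; class and method names are illustrative), not the paper's protocol text:

```python
class Process:
    """Per-process state: a vector clock vc and a boolean array ip such
    that ip[k] = 1 iff Pk's last known relevant event is an immediate
    predecessor of the next relevant event produced here."""
    def __init__(self, i, n):
        self.i, self.vc, self.ip = i, [0] * n, [0] * n

    def relevant(self):
        # The timestamp lists exactly the immediate predecessors.
        ts = {(k, self.vc[k]) for k in range(len(self.vc)) if self.ip[k]}
        self.vc[self.i] += 1
        self.ip = [0] * len(self.ip)  # this event "hides" its causal past
        self.ip[self.i] = 1
        return ts

    def send(self):
        return list(self.vc), list(self.ip)

    def receive(self, m_vc, m_ip):
        for k in range(len(self.vc)):
            if m_vc[k] > self.vc[k]:     # sender knows a later event of Pk
                self.vc[k], self.ip[k] = m_vc[k], m_ip[k]
            elif m_vc[k] == self.vc[k]:  # same last event of Pk: it stays
                self.ip[k] = self.ip[k] and m_ip[k]  # immediate only if on both sides

# A hypothetical 3-process run consistent with the example of Section 2.2:
p1, p2, p3 = Process(0, 3), Process(1, 3), Process(2, 3)
p1.relevant(); p1.relevant(); p2.relevant(); p3.relevant()
p2.receive(*p1.send()); p2.receive(*p3.send())
e22 = p2.relevant()
assert e22 == {(0, 2), (1, 1), (2, 1)}  # (1,2), (2,1), (3,1) in 1-based ids
```

Note how (1, 1) is absent from the timestamp of (2, 2): it is hidden by (1, 2), exactly the "reset" behavior the first lemma describes.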
Concerning the last relevant predecessors of send(m), only those that are not predecessors of pred(e) remain immediate predecessors of e. The intuitive meaning of the next lemma is the following: if e is a receive event receive(m) and pred(e) is not relevant, the last relevant events in the relevant causal past of e are obtained by merging those of pred(e) and those of send(m), and by taking the latest on each process. So, the immediate predecessors of e are either those of pred(e) or those of send(m). On a process where the last relevant events of pred(e) and of send(m) are the same event f, none of the paths from f to e must contain another relevant event, and thus f must be an immediate predecessor of both events pred(e) and send(m).

3.3 A Basic IPT Protocol

The basic protocol proposed here associates with each relevant event e an attribute encoding the set IP(e) of its immediate predecessors. From the previous lemmas, the set IP(e) of any event e depends on the sets IP of the events pred(e) and/or send(m) (when e = receive(m)). Hence the idea to introduce a data structure allowing to manage the sets IP inductively on the poset (H, →hb). To take into account the information from pred(e), each process manages a boolean array IPi such that, ∀e ∈ Hi, the value of IPi when e occurs (denoted e.IPi) is the boolean array representation of the set IP(e). More precisely, ∀j: IPi[j] = 1 ⇔ lastr(e, j) ∈ IP(e). As recalled in Section 2.3, the knowledge of lastr(e, j) (for every e and every j) is based on the management of the vectors VCi. Thus, the set IP(e) is determined in the following way. Each process Pi updates IPi according to Lemmas 1, 2, and 3:

3 Actually, this graph is the Hasse diagram of the partial order associated with the distributed computation.

1. It results from Lemma 1 that, if e is not a receive event, the current value of IPi is sufficient to determine e.IPi. It results from Lemmas 2 and 3 that, if
e is a receive event (e = receive(m)), then determining e.IPi involves information related to the event send(m). More precisely, this information involves IP(send(m)) and the timestamp of send(m) (needed to compare the events lastr(send(m), j) and lastr(pred(e), j), for every j). So, both vectors send(m).VCj and send(m).IPj (assuming send(m) is produced by Pj) are attached to message m. 2. Moreover, IPi must be updated upon the occurrence of each event. In fact, the value of IPi just after an event e is used to determine the value succ(e).IPi. In particular, as stated in the Lemmas, the determination of succ(e).IPi depends on whether e is relevant or not. Thus, the value of IPi just after the occurrence of event e must "keep track" of this event. The following protocol, previously presented in [4] without proof, ensures the correct management of the arrays VCi (as in Section 2.3) and IPi (according to the Lemmas of Section 3.2). The timestamp associated with a relevant event e is denoted e.TS. R0 Initialization: Both VCi[1..n] and IPi[1..n] are initialized to [0, ..., 0]. R1 Each time it produces a relevant event e: - Pi associates with e the timestamp e.TS defined as follows: e.TS = {(k, VCi[k]) | IPi[k] = 1}, - Pi increments its vector clock entry VCi[i] (namely, it executes VCi[i] := VCi[i] + 1), - Pi resets IPi: ∀ℓ ≠ i: IPi[ℓ] := 0; IPi[i] := 1. R2 When Pi sends a message m to Pj, it attaches to m the current values of VCi (denoted m.VC) and of the boolean array IPi (denoted m.IP). R3 When it receives a message m from Pj, Pi executes the corresponding updates. The proof of the following theorem directly follows from Lemmas 1, 2 and 3.

4. A GENERAL CONDITION

This section addresses a previously open problem, namely, "How to solve the IPT problem without requiring each application message to piggyback a whole vector clock and a whole boolean array?" First, a general condition that characterizes which entries of the
vectors VCi and IPi can be omitted from the control information attached to a message sent in the computation is defined (Section 4.1). It is then shown (Section 4.2) that this condition is both sufficient and necessary. However, this general condition cannot be locally evaluated by a process that is about to send a message. Thus, locally evaluable approximations of this general condition must be defined. To each approximation corresponds a protocol, implemented with additional local data structures. In that sense, the general condition defines a family of IPT protocols that solve the previously open problem. This issue is addressed in Section 5.

4.1 To Transmit or Not to Transmit Control Information

Let us consider the previous IPT protocol (Section 3.3). Rule R3 shows that a process Pj does not systematically update each entry VCj[k] each time it receives a message m from a process Pi: there is no update of VCj[k] when VCj[k] > m.VC[k]. In such a case, the value m.VC[k] is useless and could be omitted from the control information transmitted with m by Pi to Pj. Similarly, some entries IPj[k] are not updated when a message m from Pi is received by Pj. This occurs when 0 < VCj[k] = m.VC[k] ∧ m.IP[k] = 1, or when VCj[k] > m.VC[k], or when m.VC[k] = 0 (in the latter case, as m.IP[k] = IPi[k] = 0, no update of IPj[k] is necessary). Differently, some other entries are systematically reset to 0 (this occurs when 0 < VCj[k] = m.VC[k] ∧ m.IP[k] = 0). These observations lead to the definition of the condition K(m, k) that characterizes which entries of the vectors VCi and IPi can be omitted from the control information attached to a message m sent by a process Pi to a process Pj.

4.2 A Necessary and Sufficient Condition

We show here that the condition K(m, k) is both necessary and sufficient to decide which triples of the form (k, send(m).VCi[k], send(m).IPi[k]) can be omitted from an outgoing message m sent by Pi to
Pj. A triple attached to m will also be denoted (k, m.VC[k], m.IP[k]). Due to space limitations, the proofs of Lemma 4 and Lemma 5 are given in [1]. (The proof of Theorem 2 follows directly from these lemmas.)

5. A FAMILY OF IPT PROTOCOLS BASED ON EVALUABLE CONDITIONS

It results from the previous theorem that, if Pi could evaluate K(m, k) when it sends m to Pj, this would allow us to improve the previous IPT protocol in the following way: in rule R2, the triple (k, VCi[k], IPi[k]) is transmitted with m only if ¬K(m, k). Moreover, rule R3 is appropriately modified to consider only the triples carried by m. However, as previously mentioned, Pi cannot locally evaluate K(m, k) when it is about to send m. More precisely, when Pi sends m to Pj, Pi knows the exact values of send(m).VCi[k] and send(m).IPi[k] (they are the current values of VCi[k] and IPi[k]). But, as far as the value of pred(receive(m)).VCj[k] is concerned, two cases are possible. Case (i): if pred(receive(m)) →hb send(m), then Pi can know the value of pred(receive(m)).VCj[k] and consequently can evaluate K(m, k). Case (ii): if pred(receive(m)) and send(m) are concurrent, Pi cannot know the value of pred(receive(m)).VCj[k] and consequently cannot evaluate K(m, k). Moreover, when it sends m to Pj, whichever case (i or ii) actually occurs, Pi has no way to know which case does occur. Hence the idea to define evaluable approximations of the general condition. Let K'(m, k) be an approximation of K(m, k) that can be evaluated by a process Pi when it sends a message m.
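The generic shape of such a scheme can be sketched as follows (our illustration, with hypothetical sample values: the first predicate is the always-false approximation discussed next in the text, and the second keeps only the VCi[k] = 0 clause of K, which a sender can always check locally):

```python
def triples_to_attach(vc, ip, k_prime):
    # Attach the triple (k, VC[k], IP[k]) only when K'(m, k) does not hold;
    # K' is a correct approximation when K'(m, k) implies the exact K(m, k).
    return [(k, vc[k], ip[k]) for k in range(len(vc)) if not k_prime(k)]

vc, ip = [3, 0, 2], [1, 0, 1]    # hypothetical sender state (n = 3)

# Always-false approximation: whole vectors are transmitted, as in Section 3.
assert len(triples_to_attach(vc, ip, lambda k: False)) == 3

# A sharper, still locally evaluable approximation: omit the triple when
# VC[k] = 0, since the receiver then has nothing to update for entry k.
assert triples_to_attach(vc, ip, lambda k: vc[k] == 0) == [(0, 3, 1), (2, 2, 1)]
```

The protocols that follow differ only in how sharp a locally evaluable K' they manage to maintain.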
To be correct, the condition K' must ensure that, every time Pi should transmit a triple (k, VCi[k], IPi[k]) according to Theorem 2 (i.e., each time ¬K(m, k)), then Pi transmits this triple when it uses condition K'. Hence the definition of a correct evaluable approximation: K'(m, k) ⇒ K(m, k). This definition means that a protocol evaluating K' to decide which triples must be attached to messages does not miss triples whose transmission is required by Theorem 2. Let us consider the "constant" condition (denoted K1) that is always false, i.e., ∀(m, k): K1(m, k) = false. This trivially correct approximation of K actually corresponds to the particular IPT protocol described in Section 3 (in which each message carries a whole vector clock and a whole boolean vector). The next section presents a better approximation of K (denoted K2).

5.1 A Boolean Matrix-Based Evaluable Condition

Figure 2: The Evaluable Condition K2

Some parts of the condition K(m, k) can be locally evaluated while the others cannot. More precisely, K ≡ a ∨ α ∨ (β ∧ b), where a ≡ (send(m).VCi[k] = 0) and b ≡ (send(m).IPi[k] = 1) are locally evaluable, whereas α ≡ (send(m).VCi[k] < pred(receive(m)).VCj[k]) and β ≡ (send(m).VCi[k] = pred(receive(m)).VCj[k]) are not. But, from easy boolean calculus, a ∨ ((α ∨ β) ∧ b) ⇒ a ∨ α ∨ (β ∧ b). So, Pi needs to approximate the predicate send(m).VCi[k] ≤ pred(receive(m)).VCj[k]. To be correct, this approximation has to be a locally evaluable predicate ci(j, k) such that, when Pi is about to send a message m to Pj, ci(j, k) ⇒ (send(m).VCi[k] ≤ pred(receive(m)).VCj[k]). Informally, this means that, when ci(j, k) holds, the local context of Pi allows it to deduce that the receipt of m by Pj will not lead to an update of VCj[k] ("Pj knows as much as Pi about Pk"). Hence, the "concrete" condition K2 is the following: K2 ≡ (send(m).VCi[k] = 0) ∨ (ci(j, k) ∧ send(m).IPi[k] = 1). Let us now examine the design of such a predicate (denoted
ci). First, the case j = i can be ignored, since it is assumed (Section 2.1) that a process never sends a message to itself. Second, in the case j = k, the relation send(m).VCi[j] ≤ pred(receive(m)).VCj[j] is always true, because the receipt of m by Pj cannot update VCj[j]. Thus, ∀j ≠ i: ci(j, j) must be true. Now, let us consider the case where j ≠ i and j ≠ k (Figure 2). Suppose that there exists an event e' = receive(m') with e' < send(m), m' sent by Pj and piggybacking the triple (k, m'.VC[k], m'.IP[k]), and m'.VC[k] > VCi[k] (hence m'.VC[k] = receive(m').VCi[k]). As VCj[k] cannot decrease, this means that, as long as VCi[k] does not increase, for every message m sent by Pi to Pj we have the following: send(m).VCi[k] = receive(m').VCi[k] = send(m').VCj[k] ≤ receive(m).VCj[k], i.e., ci(j, k) must remain true. In other words, once ci(j, k) is true, the only event of Pi that could reset it to false is either the receipt of a message that increases VCi[k] or, if k = i, the occurrence of a relevant event (that increases VCi[i]). Similarly, once ci(j, k) is false, the only event that can set it to true is the receipt of a message m' from Pj piggybacking the triple (k, m'.VC[k], m'.IP[k]) with m'.VC[k] > VCi[k]. In order to implement the local predicates ci(j, k), each process Pi is equipped with a boolean matrix Mi (as in [11]) such that Mi[j, k] = 1 ⇔ ci(j, k). It follows from the previous discussion that this matrix is managed according to the following rules (note that its i-th line is not significant (case j = i), and that its diagonal is always equal to 1). The following lemma results from rules M0-M3. The theorem that follows shows that condition K2(m, k) is correct. (Both are proved in [1].)

5.2 Resulting IPT Protocol

The complete text of the IPT protocol based on the previous discussion follows. RM0 Initialization: Both VCi[1..n] and IPi[1..n] are set
to [0, ..., 0], and ∀(j, k): Mi[j, k] is set to 1. RM1 Each time it produces a relevant event e: - Pi associates with e the timestamp e.TS defined as follows: e.TS = {(k, VCi[k]) | IPi[k] = 1}, - Pi increments its vector clock entry VCi[i] (namely, it executes VCi[i] := VCi[i] + 1), - Pi resets IPi: ∀ℓ ≠ i: IPi[ℓ] := 0; IPi[i] := 1, - Pi resets the i-th column of its boolean matrix: ∀j ≠ i: Mi[j, i] := 0. RM2 When Pi sends a message m to Pj, it attaches to m the set of triples (each made up of a process id, an integer and a boolean): {(k, VCi[k], IPi[k]) | (Mi[j, k] = 0 ∨ IPi[k] = 0) ∧ VCi[k] > 0}.

4 Actually, the value of this column remains constant after its first update. In fact, ∀j, Mi[j, i] can be set to 1 only upon the receipt of a message from Pj carrying the value VCj[i] (see R3). But, as Mj[i, i] = 1, Pj does not send VCj[i] to Pi. So, it is possible to improve the protocol by executing this "reset" of the column Mi[*, i] only when Pi produces its first relevant event.

5.3 A Tradeoff

The condition K2(m, k) shows that a triple need not be transmitted when (Mi[j, k] = 1 ∧ IPi[k] = 1) ∨ (VCi[k] = 0). Let us first observe that the management of IPi[k] is governed by the application program. More precisely, the IPT protocol does not define which events are relevant; it only has to guarantee a correct management of IPi[k]. Differently, the matrix Mi does not belong to the problem specification; it is an auxiliary variable of the IPT protocol, which manages it so as to satisfy the following implication when Pi sends m to Pj: (Mi[j, k] = 1) ⇒ (pred(receive(m)).VCj[k] ≥ send(m).VCi[k]). The fact that the management of Mi is governed by the protocol and not by the application program leaves open the possibility to design a protocol where more entries of Mi are equal to 1. This can make the condition K2(m, k) more often satisfied5 and can consequently allow the protocol
to transmit fewer triples. We show here that it is possible to transmit fewer triples at the price of transmitting a few additional boolean vectors. The previous matrix-based IPT protocol (Section 5.2) is modified in the following way. The rules RM2 and RM3 are replaced with the modified rules RM2' and RM3' (Mi[*, k] denotes the k-th column of Mi). RM2' When Pi sends a message m to Pj, it attaches to m the following set of 4-uples (each made up of a process id, an integer, a boolean and a boolean vector): {(k, VCi[k], IPi[k], Mi[*, k]) | (Mi[j, k] = 0 ∨ IPi[k] = 0) ∧ VCi[k] > 0}. RM3' When Pi receives a message m from Pj, it executes the corresponding updates. Similarly to the proofs described in [1], it is possible to prove that the previous protocol still satisfies the property proved in Lemma 6, namely, ∀i, ∀m sent by Pi to Pj, ∀k, we have (send(m).Mi[j, k] = 1) ⇒ (send(m).VCi[k] ≤ pred(receive(m)).VCj[k]).

5 Let us consider the previously described protocol (Section 5.2) where the value of each matrix entry Mi[j, k] is always equal to 0. The reader can easily verify that this setting correctly implements the matrix. Moreover, K2(m, k) is then always false: it actually coincides with K1(m, k) (which corresponds to the case where whole vectors have to be transmitted with each message).

Intuitively, the fact that some columns of the matrices Mi are attached to application messages allows a transitive transmission of information. More precisely, the relevant history of Pk known by Pj is transmitted to a process Pi via a causal sequence of messages from Pj to Pi. In contrast, the protocol described in Section 5.2 used only a direct transmission of this information. In fact, as explained in Section 5.1, the predicate ci (locally implemented by the matrix Mi) was based on the existence of a message m' sent by Pj to Pi, piggybacking the triple (k, m'.VC[k], m'.IP[k]) with m'.VC[k] > VCi
[k], i.e., on the existence of a direct transmission of information (by the message m'). The resulting IPT protocol (defined by the rules RM0, RM1, RM2' and RM3') uses the same condition K2(m, k) as the previous one. It exhibits an interesting tradeoff between the number of triples (k, VCi[k], IPi[k]) whose transmission is saved and the number of boolean vectors that have to be additionally piggybacked. It is interesting to notice that the size of this additional information is bounded, while each triple includes a non-bounded integer (namely, a vector clock value).

6. EXPERIMENTAL STUDY

This section compares the behaviors of the previous protocols. This comparison is done with a simulation study. IPT1 denotes the protocol presented in Section 3.3 that uses the condition K1(m, k) (which is always equal to false). IPT2 denotes the protocol presented in Section 5.2 that uses the condition K2(m, k), where messages carry triples. Finally, IPT3 denotes the protocol presented in Section 5.3 that also uses the condition K2(m, k), but where messages carry additional boolean vectors. This section does not aim to provide an in-depth simulation study of the protocols, but rather presents a general view of the protocol behaviors. To this end, it compares IPT2 and IPT3 with regard to IPT1. More precisely, for IPT2 the aim was to evaluate the gain in terms of triples (k, VCi[k], IPi[k]) not transmitted, with respect to the systematic transmission of whole vectors as done in IPT1. For IPT3, the aim was to evaluate the tradeoff between the additional boolean vectors transmitted and the number of saved triples. The behavior of each protocol was analyzed on a set of programs.

6.1 Simulation Parameters

The simulator provides different parameters enabling the tuning of both the communication and the process features. These parameters allow one to set the number of processes of the simulated computation, to vary the rate of communication (send/receive) events, and to
alter the time duration between two consecutive relevant events. Moreover, to be independent of a particular topology of the underlying network, a fully connected network is assumed. Internal events have not been considered. Since the presence of the triples (k, VCi[k], IPi[k]) piggybacked by a message strongly depends on the frequency at which relevant events are produced by a process, different time distributions between two consecutive relevant events have been implemented (e.g., normal, uniform, and Poisson distributions). The senders of messages are chosen according to a random law. To exhibit particular configurations of a distributed computation, a given scenario can be provided to the simulator. Message transmission delays follow a standard normal distribution. Finally, the last parameter of the simulator is the number of send events occurring during a simulation.

6.2 Parameter Settings

To compare the behavior of the three IPT protocols, we performed a large number of simulations using different parameter settings. We set the number of processes participating in a distributed computation to 10. The number of communication events during the simulation has been set to 10,000. The parameter λ of the Poisson time distribution (λ is the average number of relevant events in a given time interval) has been set so that the relevant events are generated at the beginning of the simulation. With the uniform time distribution, a relevant event is generated (on average) every 10 communication events. The location parameter of the standard normal time distribution has been set so that the occurrence of relevant events is shifted around the third part of the simulation experiment. As noted previously, the simulator can be fed with a given scenario. This allows analyzing the worst case scenarios for IPT2 and IPT3. These scenarios correspond to the case where the relevant events are generated at the maximal frequency (i.e., each time a
process sends or receives a message, it produces a relevant event). Finally, the three IPT protocols are analyzed with the same simulation parameters.

6.3 Simulation Results

The results are displayed in Figures 3.a-3.d. These figures plot the gain of the protocols in terms of the number of triples that are not transmitted (y axis) with respect to the number of communication events (x axis). From these figures, we observe that, whatever the time distribution followed by the relevant events, both IPT2 and IPT3 exhibit a better behavior than IPT1 (i.e., the total number of piggybacked triples is lower in IPT2 and IPT3 than in IPT1), even in the worst case (see Figure 3.d). Let us consider the worst scenario. In that case, the gain is obtained at the very beginning of the simulation and lasts as long as there exists a process Pj for which ∀k: VCj[k] = 0. In that case, the condition ∀k: K(m, k) is satisfied. As soon as ∃k: VCj[k] ≠ 0, both IPT2 and IPT3 behave as IPT1 (the shape of the curve becomes flat), since the condition K(m, k) is no longer satisfied. Figure 3.a shows that, during the first events of the simulation, the slopes of the IPT2 and IPT3 curves are steep. The same occurs in Figure 3.d (which depicts the worst case scenario). Then the slope of these curves decreases and remains constant until the end of the simulation. In fact, as soon as VCj[k] becomes greater than 0, the condition ¬K(m, k) reduces to (Mi[j, k] = 0 ∨ IPi[k] = 0). Figure 3.b displays an interesting feature. It considers λ = 100. As the relevant events are taken only during the very beginning of the simulation, this figure exhibits a very steep slope, like the other figures. The figure shows that, as soon as no more relevant events are taken, on average 45% of the triples are not piggybacked by the messages. This shows the importance of matrix Mi. Furthermore, IPT3 benefits from transmitting additional boolean vectors to save triple transmissions. The
Figures 3.a-3.c show that the average gain of IPT3 with respect to IPT2 is close to 10%. Finally, Figure 3.c underlines even more the importance of matrix Mi. When very few relevant events are taken, IPT2 and IPT3 turn out to be very efficient. Indeed, this figure shows that, very quickly, the gain in the number of triples that are saved is very high (actually, 92% of the triples are saved).

6.4 Lessons Learned from the Simulation

Of course, all simulation results are consistent with the theoretical results. IPT3 is always better than or equal to IPT2, and IPT2 is always better than IPT1. The simulation results teach us more:
• The first lesson we have learned concerns the matrix Mi. Its benefit is quite significant but depends mainly on the time distribution followed by the relevant events. On the one hand, when observing Figure 3.b, where a large number of relevant events are taken in a very short time, IPT2 can save up to 45% of the triples. However, we could have expected a larger gain from IPT2, since the boolean vector IP tends to stabilize to [1,...,1] when no relevant events are taken. In fact, as discussed in Section 5.3, the management of matrix Mi within IPT2 does not allow a transitive transmission of information but only a direct transmission of this information. This explains why some columns of Mi may remain equal to 0 while they could potentially be equal to 1. Differently, as IPT3 benefits from transmitting additional boolean vectors (providing a transitive transmission of information), it reaches a gain of 50%. On the other hand, when very few relevant events are taken over a large period of time (see Figure 3.c), IPT2 and IPT3 turn out to be very efficient, since the transmission of up to 92% of the triples is saved. This comes from the fact that very quickly the boolean vector IPi tends to stabilize to [1,...,1] and that matrix Mi contains very few 0 entries, since very few relevant events have been taken. Thus, a direct
transmission of the information is sufficient to quickly get matrices Mi equal to [[1,...,1],...,[1,...,1]].
• The second lesson concerns IPT3, more precisely, the tradeoff between the additional piggybacking of boolean vectors and the number of triples whose transmission is saved. With n = 10, adding 10 booleans to a triple does not substantially increase its size. Figures 3.a-3.c exhibit the number of triples whose transmission is saved: the average gain (in number of triples) of IPT3 with respect to IPT2 is about 10%.

7. CONCLUSION

This paper has addressed an important causality-related distributed computing problem, namely, the Immediate Predecessors Tracking problem. It has presented a family of protocols that provide each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. Three of them have been described and analyzed with simulation experiments. Interestingly, it has also been shown that the efficiency of the protocols (measured in terms of the size of the control information that is not piggybacked by an application message) depends on the pattern defined by the communication events and the relevant events. Last but not least, it is interesting to note that if one is not interested in tracking the immediate predecessor events, the protocols presented in the paper can be simplified by suppressing the IPi boolean vectors (but keeping the boolean matrices Mi). The resulting protocols, which implement a vector clock system, are particularly efficient as far as the size of the timestamp carried by each message is concerned. Interestingly, this efficiency is not obtained at
the price of additional assumptions (such as FIFO channels).","keyphrases":["immedi predecessor","distribut comput","relev event","immedi predecessor track","transit reduct","hass diagram","timestamp","piggyback","control inform","ipt protocol","common global memori","messag transfer delai","vector clock","track causal","vector timestamp","channel order properti","checkpoint problem","causal track","messag-pass"],"prmu":["P","P","P","P","P","P","P","P","P","P","U","M","U","R","M","M","M","R","U"]} {"id":"J-47","title":"On the Computational Power of Iterative Auctions","abstract":"We embark on a systematic analysis of the power and limitations of iterative combinatorial auctions. Most existing iterative combinatorial auctions are based on repeatedly suggesting prices for bundles of items, and querying the bidders for their demand under these prices. We prove a large number of results showing the boundaries of what can be achieved by auctions of this kind. We first focus on auctions that use a polynomial number of demand queries, and then we analyze the power of different kinds of ascending-price auctions.","lvl-1":"On the Computational Power of Iterative Auctions\u2217 [Extended Abstract] Liad Blumrosen School of Engineering and Computer Science The Hebrew University of Jerusalem Jerusalem, Israel liad@cs.huji.ac.il Noam Nisan School of Engineering and Computer Science The Hebrew University of Jerusalem Jerusalem, Israel noam@cs.huji.ac.il ABSTRACT We embark on a systematic analysis of the power and limitations of iterative combinatorial auctions.\nMost existing iterative combinatorial auctions are based on repeatedly suggesting prices for bundles of items, and querying the bidders for their demand under these prices.\nWe prove a large number of results showing the boundaries of what can be achieved by auctions of this kind.\nWe first focus on auctions that use a polynomial number of demand queries, and then we analyze the power of different kinds of ascending-price 
auctions.

Categories and Subject Descriptors
F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences - Economics

General Terms
Algorithms, Economics, Theory

1. INTRODUCTION

Combinatorial auctions have recently received a lot of attention. In a combinatorial auction, a set M of m non-identical items is sold in a single auction to n competing bidders. The bidders have preferences regarding the bundles of items that they may receive. The preferences of bidder i are specified by a valuation function vi : 2^M → R+, where vi(S) denotes the value that bidder i attaches to winning the bundle of items S. We assume free disposal, i.e., that the vi's are monotone non-decreasing. The usual goal of the auctioneer is to optimize the social welfare Σi vi(Si), where the allocation S1...Sn must be a partition of the items. Applications include many complex resource allocation problems and, in fact, combinatorial auctions may be viewed as the common abstraction of many complex resource allocation problems. Combinatorial auctions face both economic and computational difficulties and are a central problem in the recently active border of economic theory and computer science. A forthcoming book [11] addresses many of the issues involved in the design and implementation of combinatorial auctions. The design of a combinatorial auction involves many considerations. In this paper we focus on just one central issue: the communication between bidders and the allocation mechanism - preference elicitation. Transferring all information about bidders' preferences requires an infeasible (exponential in m) amount of communication. Thus, direct revelation auctions in which bidders simply declare their preferences to the mechanism are only practical for very small auction sizes or for very limited families of bidder preferences. We have therefore seen a multitude of suggested iterative auctions in
which the auction protocol repeatedly interacts with the different bidders, aiming to adaptively elicit enough information about the bidders' preferences to be able to find a good (optimal or close to optimal) allocation. Most of the suggested iterative auctions proceed by maintaining temporary prices for the bundles of items, repeatedly querying the bidders as to their preferences between the bundles under the current set of prices, and then updating the set of bundle prices according to the replies received (e.g., [22, 12, 17, 37, 3]). Effectively, such an iterative auction accesses the bidders' preferences by repeatedly making the following type of demand query to bidders: Query to bidder i: a vector of bundle prices p = {p(S)}_{S⊆M}; Answer: a bundle of items S ⊆ M that maximizes vi(S) − p(S). These types of queries are very natural in an economic setting as they capture the revealed preferences of the bidders. Some auctions, called item-price or linear-price auctions, specify a price pi for each item, and the price of any given bundle S is always linear, p(S) = Σ_{i∈S} pi. Other auctions, called bundle-price auctions, allow specifying arbitrary (non-linear) prices p(S) for bundles. Another important differentiation between models of iterative auctions is based on whether they use anonymous or non-anonymous prices: In some auctions the prices that are presented to the bidders are always the same (anonymous prices). In other auctions (non-anonymous), different bidders may face different (discriminatory) vectors of prices. In ascending-price auctions, forcing prices to be anonymous may be a significant restriction. In this paper, we embark on a systematic analysis of the computational power of iterative auctions that are based on demand queries. We do not aim to present auctions for practical use but rather to understand the limitations and possibilities of these kinds of auctions. In the first part of this paper, our main
question is what can be done using a polynomial number of these types of queries? That is, polynomial in the main parameters of the problem: n, m and the number of bits t needed for representing a single value vi(S). Note that from an algorithmic point of view we are talking about sub-linear time algorithms: the input size here is really n(2^m − 1) numbers - the descriptions of the valuation functions of all bidders. There are two aspects to computational efficiency in these settings: the first is the communication with the bidders, i.e., the number of queries made, and the second is the usual computational tractability. Our lower bounds will depend only on the number of queries and hold independently of any computational assumptions like P ≠ NP. Our upper bounds will always be computationally efficient both in terms of the number of queries and in terms of regular computation. As mentioned, this paper concentrates on the single aspect of preference elicitation and on its computational consequences and does not address issues of incentives. This strengthens our lower bounds, but means that the upper bounds require evaluation from this perspective as well before being used in any real combinatorial auction.1 The second part of this paper studies the power of ascending-price auctions. Ascending auctions are iterative auctions where the published prices cannot decrease over time. In this work, we try to systematically analyze what the differences between various models of ascending auctions mean. We try to answer the following questions: (i) Which models of ascending auctions can find the optimal allocation, and for which classes of valuations? (ii) In cases where the optimal allocation cannot be determined by ascending auctions, how well can such auctions approximate the social welfare? (iii) How do the different models of ascending auctions compare? Are some models computationally stronger than others? Ascending auctions have been extensively
studied in the literature (see the recent survey by Parkes [35]). Most of this work presented "upper bounds", i.e., proposed mechanisms with ascending prices and analyzed their properties. A result closer in spirit to ours is by Gul and Stacchetti [17], who showed that no item-price ascending auction can always determine the VCG prices, even for substitutes valuations.2 Our framework is more general than the traditional line of research that concentrates on the final allocation and payments and, in particular, on reaching "Walrasian equilibria" or "competitive equilibria".

1 We do observe, however, that some weak incentive property comes for free in demand-query auctions, since myopic players will answer all demand queries truthfully. We also note that in some cases (but not always!) the incentives issues can be handled orthogonally to the preference-elicitation issues, e.g., by using Vickrey-Clarke-Groves (VCG) prices (e.g., [4, 34]).
2 We further discuss this result in Section 5.3.

Figure 1: The diagram classifies the following auctions according to their properties (iterative auctions, demand auctions, item-price auctions, anonymous-price auctions, ascending auctions): (1) The adaptation [12] of Kelso & Crawford's [22] auction. (2) The Proxy Auction [3] by Ausubel & Milgrom. (3) iBundle(3) by Parkes & Ungar [34]. (4) iBundle(2) by Parkes & Ungar [37]. (5) Our descending adaptation of the 2-approximation for submodular valuations by [25] (see Subsection 5.4). (6) Ausubel's [4] auction for substitutes valuations. (7) The adaptation by Nisan & Segal [32] of the O(√m) approximation by [26]. (8) The duplicate-item auction by [5]. (9) The auction for Read-Once formulae by [43]. (10) The AkBA Auction by Wurman & Wellman [42].

A Walrasian equilibrium3 is known to exist in the case of substitutes valuations, and is known to be impossible for any wider class of valuations [16]. This does not rule out other allocations by
ascending auctions: in this paper we view the auctions as a computational process where the outcome - both the allocation and the payments - can be determined according to all the data elicited throughout the auction; this general framework strengthens our negative results.4 We find the study of ascending auctions appealing for various reasons. First, ascending auctions are widely used in many real-life settings, from the FCC spectrum auctions [15] to almost any e-commerce website (e.g., [2, 1]). Actually, this is maybe the most straightforward way to sell items: ask the bidders what they would like to buy under certain prices, and increase the prices of over-demanded goods. Ascending auctions are also considered more intuitive for many bidders, and are believed to increase the trust of the bidders in the auctioneer, as they see the result gradually emerging from the bidders' responses. Ascending auctions also have other desirable economic properties, e.g., they incur smaller information revelation (consider, for example, English auctions vs.
second-price sealed-bid auctions).

3 A Walrasian equilibrium is a vector of item prices for which all the items are sold when each bidder receives a bundle in his demand set.
4 In a few recent auction designs (e.g., [4, 28]) the payments are not necessarily the final prices of the auctions.

1.1 Extant Work

Many iterative combinatorial auction mechanisms rely on demand queries (see the survey in [35]). Figure 1 summarizes the basic classes of auctions implied by combinations of the above properties and classifies some of the auctions proposed in the literature according to this classification.

Figure 2: The best approximation factors currently achievable by computationally-efficient combinatorial auctions, for several classes of valuations. All lower bounds in the table apply to all iterative auctions (except the one marked by *); all upper bounds in the table are achieved with item-price demand queries.

Valuation family | Upper bound | Reference | Lower bound | Reference
General | min(n, O(√m)) | [26], Section 4.2 | min(n, m^(1/2−ε)) | [32]
Substitutes | 1 | [32] | - | -
Submodular | 2 | [25] | 1 + 1/(2m), 1 − 1/e (*) | [32], [23]
Subadditive | O(log m) | [13] | 2 | [13]
k-duplicates | O(m^(1/(k+1))) | [14] | O(m^(1/(k+1))) | [14]
Procurement | ln m | [32] | (log m)/2 | [29, 32]

For our purposes, two families of these auctions serve as the main motivating starting points: the first is the ascending item-price auctions of [22, 17] that with computational efficiency find an optimal allocation among (gross) substitutes valuations, and the second is the ascending bundle-price auctions of [37, 3] that find an optimal allocation among general valuations - but not necessarily with computational efficiency. The main lower bound in this area, due to [32], states that indeed, due to inherent communication requirements, it is not possible for any iterative auction to find the optimal allocation among general valuations with sub-exponentially many queries. A similar exponential lower bound was shown in [32] also for even
approximating the optimal allocation to within a factor of m^(1/2−ε). Several lower bounds and upper bounds for approximation are known for some natural classes of valuations - these are summarized in Figure 2. In [32], the universal generality of demand queries is also shown: any non-deterministic communication protocol for finding an allocation that optimizes the social welfare can be converted into one that only uses demand queries (with bundle prices). In [41] this was generalized also to non-deterministic protocols for finding allocations that satisfy other natural types of economic requirements (e.g., approximate social efficiency, envy-freeness). However, in [33] it was demonstrated that this completeness of demand queries holds only in the non-deterministic setting, while in the usual deterministic setting, demand queries (even with bundle prices) may be exponentially weaker than general communication. Bundle-price auctions are a generalization of (the more natural and intuitive) item-price auctions. It is known that item-price auctions may indeed be exponentially weaker: a nice example is the case of valuations that are a XOR of k bundles5, where k is small (say, polynomial). Lahaie and Parkes [24] show an economically-efficient bundle-price auction that uses a polynomial number of queries whenever k is polynomial. In contrast, [7] show that there exist valuations that are XORs of k = √m bundles such that any item-price auction that finds an optimal allocation between them requires exponentially many queries.

5 These are valuations where bidders have values for k specific packages, and the value of each bundle is the maximal value of one of these packages that it contains.

These results are part of a recent line of research ([7, 43, 24, 40]) that studies the preference elicitation problem in combinatorial auctions and its relation to the full elicitation problem (i.e., learning the exact valuations of the bidders). These papers adapt
methods from machine-learning theory to the combinatorial-auction setting. The preference elicitation problem and the full elicitation problem relate to a well-studied problem in microeconomics known as the integrability problem (see, e.g., [27]). This problem studies if and when one can derive the utility function of a consumer from her demand function.

Paper organization: Due to the relatively large number of results we present, we start with a survey of our new results in Section 2. After describing our formal model in Section 3, we present our results concerning the power of demand queries in Section 4. Then, we describe the power of item-price ascending auctions (Section 5) and bundle-price ascending auctions (Section 6). Readers who are mainly interested in the self-contained discussion of ascending auctions can skip Section 4. Missing proofs from Section 4 can be found in part I of the full paper ([8]). Missing proofs from Sections 5 and 6 can be found in part II of the full paper ([9]).

2. A SURVEY OF OUR RESULTS

Our systematic analysis is composed of the combination of a rather large number of results characterizing the power and limitations of various classes of auctions. In this section, we present an exposition describing our new results. We first discuss the power of demand-query iterative auctions, and then we turn our attention to ascending auctions. Figure 3 summarizes some of our main results.

2.1 Demand Queries

Comparison of query types. We first ask what other natural types of queries could we imagine iterative auctions using? Here is a list of such queries that are either natural, have been used in the literature, or that we found useful.

1. Value query: The auctioneer presents a bundle S; the bidder reports his value v(S) for this bundle.
2. Marginal-value query: The auctioneer presents a bundle A and an item j; the bidder reports how much he is willing to pay for j, given that he already owns A, i.e., v(j|A) = v(A ∪ {j}) − v(A).
3. Demand query (with item prices): The auctioneer presents a vector of item prices p1...pm; the bidder reports his demand under these prices, i.e., some set S that maximizes v(S) − Σ_{i∈S} pi.6
4. Indirect-utility query: The auctioneer presents a set of item prices p1...pm, and the bidder responds with his indirect utility under these prices, that is, the highest utility he can achieve from a bundle under these prices: max_{S⊆M} (v(S) − Σ_{i∈S} pi).7
5. Relative-demand query: The auctioneer presents a set of non-zero prices p1...pm, and the bidder reports the bundle that maximizes his value per unit of money, i.e., some set that maximizes v(S)/Σ_{i∈S} pi.8

6 A tie-breaking rule should be specified. All of our results apply for any fixed tie-breaking rule.
7 This is exactly the utility achieved by the bundle which would be returned in a demand query with the same prices. This notion relates to the indirect-utility function studied in the microeconomic literature (see, e.g., [27]).
8 Note that when all the prices are 1, the bidder actually reports the bundle with the highest per-item price. We found this type of query useful, for example, in the design of the approximation algorithm described in Figure 5 in Section 4.2.

Figure 3: This paper studies the economic efficiency of auctions that follow certain communication constraints. For each class of auctions, the table shows whether the optimal allocation can be achieved, or else, how well it can be approximated (both upper bounds and lower bounds). New results are highlighted. Abbreviations: Poly. (polynomial number/size), AA (ascending auctions). "-" means that nothing is currently known except trivial solutions.

Communication constraint | Can find an optimal allocation? | Upper bound for welfare approx. | Lower bound for welfare approx.
Item-price demand queries | Yes | 1 | 1
Poly. communication | No [32] | min(n, O(m^(1/2))) [26] | min(n, m^(1/2−ε)) [32]
Poly. item-price demand queries | No [32] | min(n, O(m^(1/2))) | min(n, m^(1/2−ε)) [32]
Poly. value queries | No [32] | O(m/√(log m)) [19] | O(m/log m)
Anonymous item-price AA | No | - | min(O(n), O(m^(1/2−ε)))
Non-anonymous item-price AA | No | - | -
Anonymous bundle-price AA | No | - | min(O(n), O(m^(1/2−ε)))
Non-anonymous bundle-price AA | Yes [37] | 1 | 1
Poly. number of item-price AA | No | min(n, O(m^(1/2))) | -

Theorem: Each of these queries can be efficiently (i.e., in time polynomial in n, m, and the number of bits of precision t needed to represent a single value vi(S)) simulated by a sequence of demand queries with item prices. In particular this shows that demand queries can elicit all information about a valuation by simulating all 2^m − 1 value queries. We also observe that value queries and marginal-value queries can simulate each other in polynomial time and that demand queries and indirect-utility queries can also simulate each other in polynomial time. We prove that exponentially many value queries may be needed in order to simulate a single demand query. It is interesting to note that for the restricted class of substitutes valuations, demand queries may be simulated by a polynomial number of value queries [6].

Welfare approximation. The next question that we ask is how well can a computationally-efficient auction that uses only demand queries approximate the optimal allocation? Two separate obstacles are known: In [32], a lower bound of min(n, m^(1/2−ε)), for any fixed ε > 0, was shown for the approximation factor obtained using any polynomial amount of communication. A computational bound with the same value applies even for the case of single-minded bidders, but under the assumption that ZPP ≠ NP [39]. As noted in [32], the computationally-efficient greedy algorithm of [26] can be adapted to become a polynomial-time iterative auction that achieves a nearly matching approximation factor of min(n, O(√m)). This iterative auction may be implemented with bundle-price demand queries
but, as far as we see, not as one with item prices. Since in a single bundle-price demand query an exponential number of prices can be presented, this algorithm can have an exponential communication cost. In Section 4.2, we describe a different item-price auction that achieves the same approximation factor with a polynomial number of queries (and thus with polynomial communication). Theorem: There exists a computationally-efficient iterative auction with item-price demand queries that finds an allocation that approximates the optimal welfare between arbitrary valuations to within a factor of min(n, O(√m)). One may then attempt obtaining such an approximation factor using iterative auctions that use only the weaker value queries. However, we show that this is impossible: Theorem: Any iterative auction that uses a polynomial (in n and m) number of value queries cannot achieve an approximation factor that is better than O(m/log m).9 Note however that auctions with only value queries are not completely trivial in power: the bundling auctions of Holzman et al.
[19] can easily be implemented by a polynomial number of value queries and can achieve an approximation factor of O(m/√(log m)) by using O(log m) equi-sized bundles. We do not know how to close the (tiny) gap between this upper bound and our lower bound.

9 This was also proven independently by Shahar Dobzinski and Michael Schapira.

Representing bundle-prices. We then deal with a critical issue with bundle-price auctions that was side-stepped by our model, as well as by all previous works that used bundle-price auctions: how are the bundle prices represented? For item-price auctions this is not an issue, since a query needs only to specify a small number, m, of prices. In bundle-price auctions the situation is more difficult, since there are exponentially many bundles that require pricing. Our basic model (like all previous work that used bundle prices, e.g., [37, 34, 3]) ignores this issue, and only requires that the prices be determined, somehow, by the protocol. A finer model would fix a specific language for denoting bundle prices, force the protocol to represent the bundle prices in this language, and require that the representations of the bundle prices also be polynomial. What could such a language for denoting prices for all bundles look like? First note that specifying a price for each bundle is equivalent to specifying a valuation. Second, as noted in [31], most of the proposed bidding languages are really just languages for representing valuations, i.e., a syntactic representation of valuations - thus we could use any of them. This point of view opens up the general issue of which language should be used in bundle-price auctions and what the implications of this choice are. Here we initiate this line of investigation. We consider bundle-price auctions where the prices must be given as a XOR-bid, i.e., the protocol must explicitly indicate the price of every bundle whose value is different from that of all of its proper subsets. Note
that all bundle-price auctions that do not explicitly specify a bidding language must implicitly use this language or a weaker one, since without a specific language one would need to list prices for all bundles, except perhaps for trivial ones (those with value 0, or more generally, those with a value that is determined by one of their proper subsets). We show that once the representation length of bundle prices is taken into account (using the XOR-language), bundle-price auctions are no longer strictly stronger than item-price auctions. Define the cost of an iterative auction as the total length of the queries and answers used throughout the auction (in the worst case). Theorem: For some class of valuations, bundle-price auctions that use the XOR-language require an exponential cost for finding the optimal allocation. In contrast, item-price auctions can find the optimal allocation for this class within polynomial cost.10 This casts doubt on the applicability of bundle-price auctions like [3, 37], and it may justify the use of hybrid pricing methods such as Ausubel, Cramton and Milgrom's Clock-Proxy auction ([10]).

10 Our proof relies on the sophisticated known lower bounds for constant-depth circuits. We were not able to find an elementary proof.

Demand queries and linear programs. The winner determination problem in combinatorial auctions may be formulated as an integer program. In many cases solving the linear-program relaxation of this integer program is useful: for some restricted classes of valuations it finds the optimum of the integer program (e.g., substitutes valuations [22, 17]) or helps approximate the optimum (e.g., by randomized rounding [13, 14]). However, the linear program has an exponential number of variables. Nisan and Segal [32] observed the surprising fact that despite the exponential number of variables, this linear program may be solved within polynomial communication. The basic idea is to solve the dual program using the
Ellipsoid method (see, e.g., [20]). The dual program has a polynomial number of variables, but an exponential number of constraints. The Ellipsoid algorithm runs in polynomial time even on such programs, provided that a separation oracle is given for the set of constraints. Surprisingly, such a separation oracle can be implemented using a single demand query (with item prices) to each of the bidders. The treatment of [32] was somewhat ad-hoc to the problem at hand (the case of substitutes valuations). Here we give a somewhat more general form of this important observation. Let us call the following class of linear programs generalized-winner-determination-relaxation (GWDR) LPs:

Maximize Σ_{i∈N, S⊆M} w_i x_{i,S} v_i(S)
s.t.
  Σ_{i∈N, S: j∈S} x_{i,S} ≤ q_j   ∀j ∈ M
  Σ_{S⊆M} x_{i,S} ≤ d_i   ∀i ∈ N
  x_{i,S} ≥ 0   ∀i ∈ N, S ⊆ M

The case where w_i = 1, d_i = 1, q_j = 1 (for every i, j) is the usual linear relaxation of the winner determination problem. More generally, w_i may be viewed as the weight given to bidder i's welfare, q_j may be viewed as the quantity of units of good j, and d_i may be viewed as the number of duplicate bidders of type i.
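To make the demand-query separation oracle concrete, here is a minimal Python sketch (not from the paper) for the standard case w_i = d_i = q_j = 1. The dual of the winner-determination relaxation has a variable u_i per bidder and a price p_j per item, with one constraint u_i ≥ v_i(S) − Σ_{j∈S} p_j for every (bidder, bundle) pair; a single item-price demand query per bidder then finds a violated constraint if one exists. The dict-based valuation encoding and the brute-force demand oracle (exponential in m, standing in for a real bidder) are illustrative assumptions, not part of the paper's model.

```python
from itertools import chain, combinations

def all_bundles(m):
    # Enumerate every subset of the m items (illustrative brute force).
    items = range(m)
    return chain.from_iterable(combinations(items, r) for r in range(m + 1))

def demand_query(v, prices):
    """Item-price demand query: return a bundle maximizing v(S) - sum of prices.
    v is a dict mapping frozensets of items to values; missing bundles are worth 0."""
    best, best_u = frozenset(), 0.0
    for S in all_bundles(len(prices)):
        S = frozenset(S)
        u = v.get(S, 0.0) - sum(prices[j] for j in S)
        if u > best_u:
            best, best_u = S, u
    return best, best_u

def separation_oracle(valuations, prices, utilities):
    """Check feasibility of the dual point (prices, utilities), i.e.,
    u_i >= v_i(S) - sum_{j in S} p_j for all bidders i and bundles S.
    Returns a violated (bidder, bundle) pair, or None if feasible.
    One demand query per bidder suffices, despite exponentially many constraints."""
    for i, v in enumerate(valuations):
        S, u = demand_query(v, prices)
        if u > utilities[i] + 1e-9:
            return i, S  # the dual constraint for (i, S) is violated
    return None
```

Plugging this oracle into an Ellipsoid (or any cutting-plane) solver for the dual yields the polynomial-communication result sketched above; the code only illustrates the oracle itself.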
Theorem: Any GWDR linear program may be solved in polynomial time (in n, m, and the number of bits of precision t) using only demand queries with item prices.11

11 The produced optimal solution will have polynomial support and thus can be listed fully.

2.2 Ascending Auctions

Ascending item-price auctions: It is well known that the item-price ascending auction of Kelso and Crawford [22] and its variants [12, 16] find the optimal allocation as long as all players' valuations have the substitutes property. The obvious question is whether the optimal allocation can be found for a larger class of valuations. Our main result here is a strong negative result: Theorem: There is a 2-item 2-player problem where no ascending item-price auction can find the optimal allocation. This is in contrast to both the power of bundle-price ascending auctions and to the power of general item-price demand queries (see above), both of which can always find the optimal allocation and in fact even provide full preference elicitation. The same proof establishes a similar impossibility result for other types of auctions (e.g., descending auctions, non-anonymous auctions). More extensions of this result:
• Eliciting some classes of valuations requires an exponential number of ascending item-price trajectories.
• At least k − 1 ascending item-price trajectories are needed to elicit XOR formulae with k terms. This result is in some sense tight, since we show that any k-term XOR formula can be fully elicited by k − 1 non-deterministic (i.e., when some exogenous teacher instructs the auctioneer on how to increase the prices) ascending auctions.12

We also show that item-price ascending auctions and iterative auctions that are limited to a polynomial number of queries (of any kind, not necessarily ascending) are incomparable in their power: ascending auctions, with small enough increments, can elicit the preferences in cases where any polynomial number of queries
cannot.\nMotivated by several recent papers that studied the relation between eliciting and fully-eliciting the preferences in combinatorial auctions (e.g., [7, 24]), we explore the difference between these problems in the context of ascending auctions.\nWe show that although a single ascending auction can determine the optimal allocation among any number of bidders with substitutes valuations, it cannot fully-elicit such a valuation even for a single bidder.\nWhile it was shown in [25] that the set of substitutes valuations has measure zero in the space of general valuations, its dimension is not known, and in particular it is still open whether a polynomial amount of information suffices to describe a substitutes valuation.\nWhile our result may be a small step in that direction (a polynomial full elicitation may still be possible with other communication protocols), we note that our impossibility result also holds for valuations in the class OXS defined by [25], valuations that we are able to show have a compact representation.\nWe also give several results separating the power of different models for ascending combinatorial auctions that use item-prices: we prove, not surprisingly, that adaptive ascending auctions are more powerful than oblivious ascending auctions and that non-deterministic ascending auctions are more powerful than deterministic ascending auctions.\nWe also compare different kinds of non-anonymous auctions (e.g., simultaneous or sequential), and observe that anonymous bundle-price auctions and non-anonymous item-price auctions are incomparable in their power.\nFinally, motivated by Dutch auctions, we consider descending auctions, and how they compare to ascending ones; we show classes of valuations that can be elicited by ascending item-price auctions but not by descending item-price auctions, and vice versa.\nAscending bundle-price auctions: All known ascending bundle-price auctions that are able to find the optimal allocation between general 
valuations (with free disposal) use non-anonymous prices. Anonymous ascending-price auctions (e.g., [42, 21, 37]) are only known to find the optimal allocation among superadditive valuations or a few other simple classes ([36]). We show that this is no coincidence:

Theorem: No ascending auction with anonymous prices can find the optimal allocation between general valuations.

^12 Non-deterministic computation is widely used in CS and also in economics (e.g., a Walrasian equilibrium or [38]). In some settings, deterministic and non-deterministic models have equal power (e.g., computation with finite automata).

This bound holds regardless of the running time, and it also holds for descending auctions and non-deterministic auctions. We strengthen this result significantly by showing that anonymous ascending auctions cannot produce a better than O(√m) approximation, the approximation ratio that can be achieved with a polynomial number of queries ([26, 32]) and, as mentioned, with a polynomial number of item-price demand queries. The same lower bound clearly holds for anonymous item-price ascending auctions, since such auctions can be simulated by anonymous bundle-price ascending auctions. We currently do not have any lower bound on the approximation achievable by non-anonymous item-price ascending auctions.

Finally, we study the performance of the existing computationally-efficient ascending auctions. These protocols ([37, 3]) require exponential time in the worst case, and this is unavoidable, as shown by [32]. However, we also observe that these auctions, as well as the whole class of similar ascending bundle-price auctions, require exponential time even for simple additive valuations. This is avoidable: indeed, the ascending item-price auctions of [22] can find the optimal allocation for these simple valuations with polynomial communication.

3. THE MODEL

3.1 Discrete Auctions for Continuous Values

Our model aims to capture iterative auctions
that operate on real-valued valuations. There is a slight technical difficulty here in bridging the gap between the discrete nature of an iterative auction and the continuous nature of the valuations. This is exactly the same problem as in modeling a simple English auction. There are three standard formal ways to model it:

1. Model the auction as a continuous process and study its trajectory in time. For example, the so-called Japanese auction is basically a continuous model of the English auction.^13

2. Model the auction as discrete and the valuations as continuously valued. In this case we introduce a parameter ε and usually require the auction to produce results that are ε-close to optimal.

3. Model the valuations as discrete. In this case we assume that all valuations are integer multiples of some small fixed quantity δ, e.g., 1 penny. All communication in this case is then naturally finite.

In this paper we use the last formulation and assume that all values are multiples of some δ. Thus, in some parts of the paper we assume without loss of generality that δ = 1, hence all valuations are integral. Almost all (if not all) of our results can be translated to the other two models with little effort.

3.2 Valuations

A single auctioneer is selling m indivisible non-homogeneous items in a single auction; let M be the set of these items and N the set of bidders. Each of the n bidders in the auction has a valuation function v_i : 2^M → {0, δ, 2δ, ..., L}, where for every bundle of items S ⊆ M, v_i(S) denotes the value of bidder i for the bundle S and is a multiple of δ in the range 0..L. We will sometimes denote the number of bits needed to represent such values in the range 0..L by t = log L. We assume free disposal, i.e., S ⊆ T implies v_i(S) ≤ v_i(T), and that v_i(∅) = 0 for all bidders.

^13 Another similar model is the moving-knives model in the cake-cutting literature.

We will
mention the following classes of valuations:

• A valuation is called sub-modular if for all sets of items A and B we have v(A ∪ B) + v(A ∩ B) ≤ v(A) + v(B).

• A valuation is called super-additive if for all disjoint sets of items A and B we have v(A ∪ B) ≥ v(A) + v(B).

• A valuation is called a k-bundle XOR if it can be represented as a XOR combination of at most k atomic bids [30], i.e., if there are at most k bundles S_i and prices p_i such that for all S, v(S) = max_{i | S ⊇ S_i} p_i. Such valuations will be denoted by v = (S_1 : p_1) ⊕ (S_2 : p_2) ⊕ ... ⊕ (S_k : p_k).^14

3.3 Iterative Auctions

The auctioneer sets up a protocol (equivalently, an algorithm), where at each stage of the protocol some information q, termed the query, is sent to some bidder i, and bidder i then replies with an answer that depends on the query as well as on his own valuation. In this paper, we assume that we have complete control over the bidders' behavior, and thus the protocol also defines a reply function r_i(q, v_i) that specifies bidder i's reply to query q. The protocol may be adaptive: the query value as well as the queried bidder may depend on the replies received for past queries. At the end of the protocol, an allocation S_1, ..., S_n must be declared, where S_i ∩ S_j = ∅ for i ≠ j. We say that the auction finds an optimal allocation if it finds the allocation that maximizes the social welfare ∑_i v_i(S_i). We say that it finds a c-approximation if ∑_i v_i(S_i) ≥ ∑_i v_i(T_i)/c, where T_1, ..., T_n is an optimal allocation. The running time of the auction on a given instance of the bidders' valuations is the total number of queries made on that instance; the running time of a protocol is the worst-case cost over all instances. Note that we impose no computational limitations on the protocol or on the players.^15 This of course only strengthens our hardness results. Yet, our positive results will not use this power and
will be efficient also in the usual computational sense. Our goal will be to design computationally-efficient protocols. We will deem a protocol computationally efficient if its cost is polynomial in the relevant parameters: the number of bidders n, the number of items m, and t = log L, where L is the largest possible value of a bundle. However, when we discuss ascending-price auctions and their variants, a computationally-efficient protocol will only be required to be pseudo-polynomial, i.e., to ask a number of queries that is polynomial in m, n, and L/δ. This is because ascending auctions usually cannot achieve such running times (consider even the English auction on a single item).^16 Note that all of our results give concrete bounds, where the dependence on the parameters is given explicitly; we use the standard big-Oh notation just as a shorthand.

We say that an auction elicits some class V of valuations if it determines the optimal allocation for any profile of valuations drawn from V; we say that an auction fully elicits a class of valuations V if it can fully learn any single valuation v ∈ V (i.e., learn v(S) for every S).

^14 For example, v = (abcd : 5) ⊕ (ab : 3) ⊕ (c : 4) denotes the XOR valuation with the terms abcd, ab, c and prices 5, 3, 4, respectively. For this valuation, v(abcd) = 5, v(abd) = 3, v(abc) = 4.
^15 The running time really measures communication costs, not computational running time.

3.4 Demand Queries and Ascending Auctions

Most of the paper will be concerned with a common special case of iterative auctions that we term demand auctions. In such auctions, the queries sent to bidders are demand queries: the query specifies a price p(S) ∈ ℝ₊ for each bundle S. The reply of bidder i is simply the set most desired, i.e., demanded, under these prices: formally, a set S that maximizes v_i(S) − p(S). It may happen that more than one set S maximizes this value, in which case ties
are broken according to some fixed tie-breaking rule, e.g., the lexicographically first such set is returned. All of our results hold for any fixed tie-breaking rule. Ascending auctions are iterative auctions with non-decreasing prices:

Definition 1. In an ascending auction, the prices in the queries to the same bidder can only increase over time. Formally, let p be a query made to bidder i, and q a query made to bidder i at a later stage of the protocol; then for all sets S, q(S) ≥ p(S).

A similar variant, which we also study and which is also common in real life, is descending auctions, in which prices can only decrease over time. Note that the term ascending auction refers to an auction with a single ascending trajectory of prices. It may be useful to define multi-trajectory ascending auctions, in which the prices may be reset to zero a number of times (see, e.g., [4]). We consider two main restrictions on the types of allowed demand queries:

Definition 2. Item prices: The prices in each query are given by prices p_j for each item j. The price of a set S is additive: p(S) = ∑_{j∈S} p_j.

Definition 3. Anonymous prices: The prices seen by the bidders at any stage in the auction are the same, i.e.
whenever a query is made to some bidder, the same query is also made to all other bidders (with the prices unchanged). In auctions with non-anonymous (discriminatory) prices, each bidder i has personalized prices, denoted by p^i(S).^17 In this paper, all auctions are anonymous unless otherwise specified. Note that even though in our model valuations are integral (or multiples of some δ), we allow the demand query to use arbitrary real numbers in ℝ₊. That is, we assume that the increment ε we use in the ascending auctions may be significantly smaller than δ. All our hardness results hold for any ε, even for continuous price increments. A practical issue here is how the query is specified: in the general case, an exponential number of prices needs to be sent in a single query. Formally, this is not a problem, as the model does not limit the length of queries in any way; the protocol must simply define what the prices are in terms of the replies received for previous queries. We look into this issue further in Section 4.3.

^16 Most of the auctions we present may be adapted to run in time polynomial in log L, using a binary-search-like procedure, losing their ascending nature.
^17 Note that a non-anonymous auction can clearly be simulated by n parallel anonymous auctions.

4. THE POWER OF DEMAND QUERIES

In this section, we study the power of iterative auctions that use demand queries (not necessarily ascending). We start by comparing demand queries to other types of queries. Then, we discuss how well one can approximate the optimal welfare using a polynomial number of demand queries. We also initiate the study of the representation of bundle-price demand queries, and finally we show how demand queries help solve the linear-programming relaxation of combinatorial auctions in polynomial time.

4.1 The Power of Different Types of Queries

In this section we compare the power of the various types of queries defined in Section 2. We will present
computationally-efficient simulations of these query types using item-price demand queries. In Section 5.1 we show that these simulations can also be done using item-price ascending auctions.

Lemma 4.1. A value query can be simulated by m marginal-value queries. A marginal-value query can be simulated by two value queries.

Lemma 4.2. A value query can be simulated by mt demand queries (where t = log L is the number of bits needed to represent a single bundle value).^18

As a direct corollary we get that demand auctions can always fully elicit the bidders' valuations, by simulating all possible 2^m − 1 value queries, and thus elicit enough information for determining the optimal allocation. Note, however, that this elicitation may be computationally inefficient. The next lemma shows that demand queries can be exponentially more powerful than value queries.

Lemma 4.3. An exponential number of value queries may be required for simulating a single demand query.

Indirect-utility queries are, however, equivalent in power to demand queries:

Lemma 4.4. An indirect-utility query can be simulated by mt + 1 demand queries. A demand query can be simulated by m + 1 indirect-utility queries.

Demand queries can also simulate relative-demand queries:^19

^18 Note that t bundle-price demand queries can easily simulate a value query by setting the prices of all the bundles except S (the bundle with the unknown value) to be L, and performing a binary search on the price of S.
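The binary-search construction sketched in footnote 18 can be written as a short Python sketch. Here `make_bidder` is our own toy stand-in for the bidder's black-box demand reply under a bundle-price query in which every bundle other than S is priced at the prohibitive L, and we assume, as a fixed tie-breaking rule, that the bidder demands S rather than the empty set when indifferent; values are integral, as in Section 3.1 with δ = 1:

```python
def value_query_via_demand(demand, S, L):
    """Simulate a value query with O(log L) bundle-price demand queries
    (the construction of footnote 18): every bundle except S is priced
    at the prohibitive L, and we binary-search the price of S.
    demand(S, p) is the bidder's reply when S costs p."""
    lo, hi = 0, L                      # invariant: v(S) lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if demand(S, mid) == S:        # S still demanded, so v(S) >= mid
            lo = mid
        else:                          # empty set preferred, so v(S) < mid
            hi = mid - 1
    return lo

def make_bidder(v):
    """A toy bidder with valuation v, answering the queries above."""
    def demand(S, p):
        # The bidder compares S at price p with the free empty set; all
        # other bundles cost L and are never preferred. Ties go to S.
        return S if v(S) - p >= 0 else frozenset()
    return demand

# Hypothetical valuation for illustration: v(ab) = 7, v(a) = 3, else 0.
values = {frozenset("ab"): 7, frozenset("a"): 3}
bidder = make_bidder(lambda S: values.get(S, 0))
print(value_query_via_demand(bidder, frozenset("ab"), L=16))  # 7
```

Each iteration halves the interval containing v(S), so t = log L queries recover the exact value, matching the footnote's count.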
^19 Note: although in our model values are integral (or multiples of δ), we allow the query prices to be arbitrary real numbers; thus we may have bundles with arbitrarily close relative demands. In this sense the simulation above is only up to any given ε (and the number of queries is O(log L + log 1/ε)). When the relative-demand query prices are given as rational numbers, exact simulation is implied when log 1/ε is linear in the input length.

       V     MV     D      IU      RD
V      1     2      exp    exp     exp
MV     m     1      exp    exp     exp
D      mt    poly   1      mt+1    poly
IU     1     2      m+1    1       poly
RD     -     -      -      -       1

Figure 4: Each entry in the table specifies how many queries of the row type are needed to simulate a query of the column type. Abbreviations: V (value query), MV (marginal-value query), D (demand query), IU (indirect-utility query), RD (relative-demand query).

Lemma 4.5. Relative-demand queries can be simulated by a polynomial number of demand queries.

According to our definition of relative-demand queries, they clearly cannot simulate even value queries. Figure 4 summarizes the relations between these query types.

4.2 Approximating the Social Welfare with Value and Demand Queries

We know from [32] that iterative combinatorial auctions that use only a polynomial number of queries cannot find an optimal allocation among general valuations, and in fact cannot even approximate it to within a factor better than min{n, m^{1/2−ε}}. In this section we ask how well this approximation can be done using demand queries with item prices, or using the weaker value queries. We show that, using demand queries, the lower bound can be matched, while value queries can only do much worse.

Figure 5 describes a polynomial-time algorithm that achieves a min{n, O(√m)} approximation ratio. This algorithm greedily picks the bundles that maximize the bidders' per-item value (using relative-demand queries, see Section 4.1). As a final step, it allocates all the items to a single bidder if that improves the social welfare (this can be checked using value queries). Since both value queries and relative-demand queries can be simulated by a polynomial number of demand queries with item prices (Lemmas 4.2 and 4.5), this algorithm can be implemented with a polynomial number of demand queries with item prices.^20

^20 In the full paper [8], we observe that this algorithm can be implemented by two descending item-price auctions (where we allow removing items along the auction).

Theorem 4.6. The auction described in Figure 5 can be implemented by a polynomial number of demand queries and achieves a min{n, 4√m}-approximation for the social welfare.

We now ask how well the optimal welfare can be approximated by a polynomial number of value queries. First we note that value queries are not completely powerless: in [19] it is shown that if the m items are split into k fixed bundles of size m/k each, and these fixed bundles are auctioned as though each was indivisible, then the social welfare generated by such an auction is at least an m/√k-approximation of that possible in the original auction. Notice that such an auction can be implemented by 2^k − 1 value queries to each bidder, querying the value of each union of the fixed bundles. Thus, if we choose k = log m bundles we get an m/√(log m)-approximation while still using a polynomial number of queries. The following lemma shows that not much more is possible using value queries:

Lemma 4.7. Any iterative auction that uses only value queries and distinguishes between k-tuples of 0/1 valuations where the optimal allocation has value 1, and those where the optimal allocation has value k, requires at least 2^{m/k} queries.

Proof. Consider the following family of valuations: for every S such that |S| > m/2, v(S) = 1; and there exists a single set T such that for |S| ≤ m/2, v(S) = 1 iff T ⊆ S, and v(S) = 0 otherwise. Now look at the behavior of the protocol when all valuations v_i have T = {1...m}. Clearly in this
case the value of the best allocation is 1, since no set of size m/2 or smaller has non-zero value for any player. Fix the sequence of queries and answers received on this k-tuple of valuations. Now consider a k-tuple of valuations chosen at random as follows: a partition of the m items into k sets T_1, ..., T_k, each of size m/k, is chosen uniformly at random among all such partitions. Consider the k-tuple of valuations from our family that corresponds to this partition; clearly T_i can be allocated to bidder i, for each i, for a total value of k. Now run the protocol on these valuations and compare its behavior to the original case. Note that the answer to a query S to player i differs between the case of T_i and the original case of T = {1...m} only if |S| ≤ m/2 and T_i ⊆ S. Since T_i is distributed uniformly among all sets of size exactly m/k, we have for any fixed query S to player i with |S| ≤ m/2:

Pr[T_i ⊆ S] ≤ (|S|/m)^{|T_i|} ≤ 2^{−m/k}

By the union bound, if the original sequence of queries was of length less than 2^{m/k}, then with positive probability none of the queries in the sequence would receive a different answer than for the original input tuple. This is forbidden, since the protocol must distinguish between this case and the original case, which cannot happen if all queries receive the same answer. Hence there must have been at least 2^{m/k} queries for the original tuple of valuations.

We conclude that a polynomial-time protocol that uses only value queries cannot obtain a better than O(m/log m) approximation of the welfare:

Theorem 4.8. An iterative auction that uses a polynomial number of value queries cannot achieve better than an O(m/log m)-approximation for the social welfare.

Proof. Immediate from Lemma 4.7: achieving any approximation ratio k that is asymptotically smaller than m/log m requires 2^{m/k}, a super-polynomial number, of value queries.

An Approximation Algorithm:
Initialization: Let T ← M be the current items for sale, K ← N the currently participating bidders, and S*_1 ← ∅, ..., S*_n ← ∅ the provisional allocation.

Repeat until T = ∅ or K = ∅:
  Ask each bidder i in K for the bundle S_i that maximizes her per-item value, i.e., S_i ∈ argmax_{S⊆T} v_i(S)/|S|.
  Let i be the bidder with the maximal per-item value, i.e., i ∈ argmax_{i∈K} v_i(S_i)/|S_i|.
  Set: S*_i = S_i, K = K \ {i}, T = T \ S_i.

Finally: Ask the bidders for their values v_i(M) of the grand bundle. If allocating all the items to some bidder i improves the social welfare achieved so far (i.e., ∃i ∈ N such that v_i(M) > ∑_{i∈N} v_i(S*_i)), then allocate all items to this bidder i.

Figure 5: This algorithm achieves a min{n, 4√m}-approximation for the social welfare, which is asymptotically the best worst-case approximation possible with polynomial communication. This algorithm can be implemented with a polynomial number of demand queries.

4.3 The Representation of Bundle Prices

In this section we explicitly fix the language in which bundle prices are presented to the bidders in bundle-price auctions. This language requires the algorithm to explicitly list the price of every bundle with a non-trivial price; trivial in this context means a price equal to that of one of the bundle's proper subsets (which was listed explicitly). This representation is equivalent to the XOR-language for expressing valuations. Formally, each query q is given by an expression: q = (S_1 : p_1) ⊕ (S_2 : p_2) ⊕ ...
⊕ (S_l : p_l). In this representation, the price demanded for every set S is simply p(S) = max_{k=1..l | S_k ⊆ S} p_k.

Definition 4. The length of the query q = (S_1 : p_1) ⊕ (S_2 : p_2) ⊕ ... ⊕ (S_l : p_l) is l. The cost of an algorithm is the sum of the lengths of the queries asked during the operation of the algorithm on the worst-case input.

Note that under this definition, bundle-price auctions are not necessarily stronger than item-price auctions. An item-price query that prices each item at 1 translates to an exponentially long bundle-price query that needs to specify the price |S| for each bundle S. But perhaps bundle-price auctions can still find optimal allocations whenever item-price auctions can, without directly simulating such queries? We show that this is not the case: indeed, when the representation length is taken into account, bundle-price auctions are sometimes seriously inferior to item-price auctions. Consider the following family of valuations: each item is valued at 3, except that some single set S is valued a bit higher: 3|S| + b, where b ∈ {0, 1, 2}. Note that an item-price auction can easily find the optimal allocation between any two such valuations: set the price of each item to 3 + ε. If the demand sets of the two players are both empty, then b = 0 for both valuations, and an arbitrary allocation is fine. If one of them is empty and the other non-empty, allocate the non-empty demand set to its bidder, and the rest to the other. If both demand sets are non-empty then, unless they form an exact partition, we need to see which b is larger, which we can do by increasing the price of a single item in each demand set.

We will show that any bundle-price auction that uses only the XOR-language to describe bundle prices requires an exponential cost (which includes the sum of all description lengths of prices) to find an optimal allocation between any two such valuations.

Lemma 4.9. Every bundle-price auction that uses XOR-expressions to denote bundle prices requires 2^{Ω(√m)} cost in order to find the optimal allocation among two valuations from the above family.

The complication in the proof
stems from the fact that, using XOR-expressions, the length of the price description depends on the number of bundles whose price is strictly larger than that of each of their subsets; this may be significantly smaller than the number of bundles that have a non-zero price. (The proof becomes easy if we require the protocol to explicitly name every bundle with a non-zero price.) We do not know of any elementary proof for this lemma (although we believe that one can be found). Instead, we reduce the problem to a well-known lower bound in Boolean circuit complexity [18], stating that Boolean circuits of depth 3 that compute the majority function on m variables require 2^{Ω(√m)} size.

4.4 Demand Queries and Linear Programming

Consider the following linear-programming relaxation for the generalized winner-determination problem in combinatorial auctions (the primal program):

Maximize  ∑_{i∈N, S⊆M} w_i x_{i,S} v_i(S)
s.t.  ∑_{i∈N, S | j∈S} x_{i,S} ≤ q_j    ∀j ∈ M
      ∑_{S⊆M} x_{i,S} ≤ d_i             ∀i ∈ N
      x_{i,S} ≥ 0                       ∀i ∈ N, S ⊆ M

Note that the primal program has an exponential number of variables. Yet, we will be able to solve it in polynomial time using demand queries to the bidders. The solution will have a polynomial-size support (non-zero values for x_{i,S}), and thus we will be able to describe it in polynomial time. Here is its dual:

Minimize  ∑_{j∈M} q_j p_j + ∑_{i∈N} d_i u_i
s.t.
      u_i + ∑_{j∈S} p_j ≥ w_i v_i(S)    ∀i ∈ N, S ⊆ M
      p_j ≥ 0 ∀j ∈ M,  u_i ≥ 0 ∀i ∈ N

Notice that the dual problem has exactly n + m variables but an exponential number of constraints. Thus, the dual can be solved in polynomial time using the Ellipsoid method, provided a separation oracle can be implemented in polynomial time. Recall that a separation oracle, when given a possible solution, either confirms that it is feasible or responds with a constraint that it violates. We construct a separation oracle for solving the dual program using a single demand query to each of the bidders. Consider a possible solution (u, p) for the dual program. We can rewrite the constraints of the dual program as:

u_i / w_i ≥ v_i(S) − ∑_{j∈S} p_j / w_i

Now a demand query to bidder i with prices p_j/w_i reveals exactly the set S that maximizes the right-hand side of this inequality. Thus, in order to check whether (u, p) is feasible it suffices to (1) query each bidder i for his demand D_i under the prices p_j/w_i, and (2) check only the n constraints u_i + ∑_{j∈D_i} p_j ≥ w_i v_i(D_i) (where v_i(D_i) can be computed using a polynomial sequence of demand queries, as shown in Lemma 4.2). If none of these is violated, then we are assured that (u, p) is feasible; otherwise, we get a violated constraint.

What is left to be shown is how the primal program can be solved. (Recall that the primal program has an exponential number of variables.) Since the Ellipsoid algorithm runs in polynomial time, it encounters only a polynomial number of constraints during its operation. Clearly, if all other constraints were removed from the dual program, it would still have the same solution (adding constraints can only decrease the space of feasible solutions). Now take the reduced dual in which only the encountered constraints appear, and consider its dual. It has the same solution as the original dual and hence as the original
primal. However, consider the form of this dual of the reduced dual: it is just a version of the primal program with a polynomial number of variables, those corresponding to the constraints that remained in the reduced dual. Thus, it can be solved in polynomial time, and this solution (with all other variables set to zero) clearly solves the original primal program.

5. ITEM-PRICE ASCENDING AUCTIONS

In this section we characterize the power of ascending item-price auctions. We first show that this power is not trivial: such auctions can in general elicit an exponential amount of information. On the other hand, we show that the optimal allocation cannot always be determined by a single ascending auction, and in some cases not even by an exponential number of ascending-price trajectories. Finally, we separate the power of different models of ascending auctions.

5.1 The Power of Item-Price Ascending Auctions

We first show that if small enough increments are allowed, a single ascending trajectory of item prices can elicit preferences that cannot be elicited with polynomial communication. As mentioned, all our hardness results hold for any increment, even an infinitesimal one.

Theorem 5.1. Some classes of valuations can be elicited by item-price ascending auctions, but cannot be elicited by a polynomial number of queries of any kind.

Proof. (sketch) Consider two bidders with v(S) = 1 if |S| > m/2, v(S) = 0 if |S| < m/2, and every S such that |S| = m/2 has an unknown value from {0, 1}. Due to [32], determining the optimal allocation here requires exponential communication in the worst case. Nevertheless, we show (see [9]) that an item-price ascending auction can do it, as long as it can use exponentially small increments.

We now describe another positive result for the power of item-price ascending auctions.

           v(ab)   v(a)         v(b)
Bidder 1   2       α ∈ (0, 1)   β ∈ (0, 1)
Bidder 2   2       2            2

Figure 6: No item-price ascending auction can
determine the optimal allocation for this class of valuations.

In Section 4.1, we showed that a value query can be simulated with a (truly) polynomial number of item-price demand queries. Here, we show that every value query can be simulated by a (pseudo-)polynomial number of ascending item-price demand queries. (In the next subsection, we show that we cannot always simulate even a pair of value queries using a single item-price ascending auction.) In the full paper (part II, [9]), we show that we can simulate other types of queries using item-price ascending auctions.

Proposition 5.2. A value query can be simulated by an item-price ascending auction. This simulation requires a polynomial number of queries.

Actually, the proof of Proposition 5.2 establishes a stronger useful result about the information elicited by iterative auctions. It says that in any iterative auction in which the changes of prices are small enough at each stage (pseudo-continuous auctions), the value of every bundle demanded during the auction can be computed. The basic idea is that when the bidder moves from demanding some bundle T_i to demanding another bundle T_{i+1}, there is a point at which she is indifferent between these two bundles. Thus, knowing the value of some demanded bundle (e.g., the empty set) enables computing the values of all other demanded bundles. We say that an auction is pseudo-continuous if it only uses demand queries, and in each step the price of at most one item is changed, by at most ε (for some ε ∈ (0, δ]), with respect to the previous query.

Proposition 5.3. Consider any pseudo-continuous auction (not necessarily ascending) in which bidder i demands the empty set at least once along the auction. Then, the value of every bundle demanded by bidder i throughout the auction can be calculated at the end of the auction.

5.2 Limitations of Item-Price Ascending Auctions

Although we observed that demand queries can solve any combinatorial auction problem, when the queries are restricted to
be ascending, some classes of valuations can be neither elicited nor fully elicited. An example of such a class is given in Figure 6.

Theorem 5.4. There are classes of valuations that cannot be elicited nor fully elicited by any item-price ascending auction.

Proof. Let bidder 1 have the valuation described in the first row of Figure 6, where α and β are unknown values in (0, 1). First, we prove that this class cannot be fully elicited by a single ascending auction; specifically, an ascending auction cannot reveal the values of both α and β. As long as p_a and p_b are both below 1, the bidder will always demand the whole bundle ab: her utility from ab is strictly greater than her utility from either a or b separately. For example, we show that u_1(ab) > u_1(a):

u_1(ab) = 2 − (p_a + p_b) = (1 − p_a) + (1 − p_b) > (v_1(a) − p_a) + (1 − p_b) > u_1(a)

Thus, in order to gain any information about α or β, the price of one of the items must reach at least 1; w.l.o.g.
Now, assume that bidder 2 is known to have the valuation described in the second row of Figure 6. The optimal allocation depends on whether α is greater than β (in bidder 1's valuation), and we proved that an ascending auction cannot determine this.

The proof of the theorem above shows that for an unknown value to be revealed, the price of one item must be greater than 1 while the other price is smaller than 1. Therefore, in a price-monotonic trajectory of prices, only one of these values can be revealed. An immediate conclusion is that this impossibility result also holds for item-price descending auctions. Since no suitable trajectory exists at all, the same conclusion even holds for non-deterministic item-price auctions (in which exogenous data tells us how to increase the prices). Also note that since the hardness stems from the impossibility of fully eliciting the valuation of a single bidder, this result also holds for non-anonymous ascending item-price auctions.

5.3 Limitations of Multi-Trajectory Ascending Auctions

According to Theorem 5.4, no ascending item-price auction can always elicit the preferences (we prove a similar result for bundle prices in Section 6). But can two ascending trajectories do the job? Or a polynomial number of ascending trajectories? We give negative answers to such suggestions.

We define a k-trajectory ascending auction as a demand-query iterative auction in which the demand queries can be partitioned into k sets of queries, where the prices published within each set only increase over time. Note that this is a general definition: it allows the trajectories to run in parallel or sequentially, and to use information elicited in some trajectories to determine the future queries in other trajectories. The power of multiple-trajectory auctions can be demonstrated by the
negative result of Gul and Stacchetti [17], who showed that even for an auction among substitutes valuations, an anonymous ascending item-price auction cannot compute VCG prices for all players.21 Ausubel [4] overcame this impossibility result and designed auctions that do compute VCG prices by organizing the auction as a sequence of n + 1 ascending auctions. Here, we prove that one cannot elicit XOR valuations with k terms by fewer than k − 1 ascending trajectories. On the other hand, we show that an XOR formula can be fully elicited by k − 1 non-deterministic ascending auctions (or by k − 1 deterministic ascending auctions if the auctioneer knows the atomic bundles).22

21 A recent unpublished paper by Mishra and Parkes extends this result, and shows that non-anonymous bundle prices are necessary in order for an ascending auction to end up with a universal competitive equilibrium (which leads to VCG payments).
22 This result actually separates the power of deterministic and non-deterministic iterative auctions: our proof shows that a non-deterministic iterative auction can elicit the k-term XOR valuations with a polynomial number of demand queries, and [7] show that this elicitation must take an exponential number of demand queries.

Proposition 5.5. XOR valuations with k terms cannot be elicited (or fully elicited) by any (k−2)-trajectory item-price ascending auction, even when the atomic bundles are known to the elicitor. However, these valuations can be elicited (and fully elicited) by (k−1)-trajectory non-deterministic non-anonymous item-price ascending auctions.

Moreover, an exponential number of trajectories is required for eliciting some classes of valuations:

Proposition 5.6. Elicitation and full elicitation of some classes of valuations cannot be done by any k-trajectory item-price ascending auction, where k = o(2^m).

Proof. (sketch) Consider the following class of valuations: for |S| < m/2, v(S) = 0, and for |S| > m/2, v(S) = 2; every bundle S of size m/2 has some unknown value in (0, 1). We show ([9]) that a single item-price ascending auction can reveal the value of at most one bundle of size m/2, and therefore an exponential number of ascending trajectories is needed in order to
elicit such valuations.

We observe that the algorithm we presented in Section 4.2 can be implemented by a polynomial number of ascending auctions (each item-price demand query can be considered as a separate ascending auction), and therefore a min{n, 4√m}-approximation can be achieved by a polynomial number of ascending auctions. We do not currently have a better upper bound, or any lower bound.

5.4 Separating the Various Models of Ascending Auctions

Various models for ascending auctions have been suggested in the literature. In this section, we compare the power of the different models. As mentioned, all auctions are considered anonymous and deterministic, unless specified otherwise.

Ascending vs. Descending Auctions: We begin the discussion of the relation between ascending and descending auctions with an example. The algorithm of Lehmann, Lehmann and Nisan [25] can be implemented by a simple item-price descending auction (see the full paper for details [9]). This algorithm guarantees at least half of the optimal efficiency for submodular valuations. However, we are not familiar with any ascending auction that guarantees a similar fraction of the efficiency. This raises a more general question: can ascending auctions solve any combinatorial-auction problem that is solvable using a descending auction (and vice versa)? We give negative answers to these questions. The idea behind the proofs is that the information the auctioneer can get for free at the beginning of each type of auction is different.23

23 In ascending auctions, the auctioneer can reveal the most valuable bundle (besides M) before she starts raising the prices, and she can thus use this information to adaptively choose the subsequent queries. In descending auctions, one can easily find the bundle with the highest average per-item price, while keeping all other bundles at non-positive utilities, and use this information in the adaptive price change.
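For concreteness, the greedy allocation rule that underlies the Lehmann-Lehmann-Nisan algorithm can be sketched as follows; this is our own illustrative rendering with made-up toy valuations, and the descending-price implementation itself is the one deferred to [9]:

```python
def greedy_allocation(items, valuations):
    """Greedy rule of [25]: hand out the items one at a time, each to the
    bidder whose marginal value for it (given the bundle he already holds)
    is highest.  For submodular valuations this achieves at least half of
    the optimal welfare."""
    bundles = [set() for _ in valuations]
    for item in items:
        gains = [v(b | {item}) - v(b) for v, b in zip(valuations, bundles)]
        bundles[gains.index(max(gains))].add(item)
    return bundles

# Two bidders with unit-demand (hence submodular) valuations, for illustration:
v1 = lambda S: max([0] + [{'x': 3, 'y': 1}[i] for i in S])
v2 = lambda S: max([0] + [{'x': 2, 'y': 2}[i] for i in S])
alloc = greedy_allocation(['x', 'y'], [v1, v2])
welfare = v1(alloc[0]) + v2(alloc[1])  # on this toy instance the greedy outcome is optimal
```

On this instance, bidder 1 receives x (marginal value 3 vs. 2) and bidder 2 receives y, for a welfare of 5.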
Proposition 5.7. There are classes that cannot be elicited (fully elicited) by item-price ascending auctions, but can be elicited (resp. fully elicited) by an item-price descending auction.

Proposition 5.8. There are classes that cannot be elicited (fully elicited) by item-price descending auctions, but can be elicited (resp. fully elicited) by item-price ascending auctions.

Deterministic vs. Non-Deterministic Auctions: Non-deterministic ascending auctions can be viewed as auctions in which some benevolent teacher with complete information guides the auctioneer on how she should raise the prices. That is, preference elicitation can be done by a non-deterministic ascending auction if there is some ascending trajectory that elicits enough information to determine the optimal allocation (and to verify that it is indeed optimal). We show that non-deterministic ascending auctions are more powerful than deterministic ascending auctions:

Proposition 5.9. Some classes can be elicited (fully elicited) by an item-price non-deterministic ascending auction, but cannot be elicited (resp. fully elicited) by item-price deterministic ascending auctions.

Anonymous vs. Non-Anonymous Auctions: As will be shown in Section 6, the power of anonymous and non-anonymous bundle-price ascending auctions differs significantly. Here, we show that a difference also exists for item-price ascending auctions.

Proposition 5.10. Some classes cannot be elicited by anonymous item-price ascending auctions, but can be elicited by a non-anonymous item-price ascending auction.

Sequential vs.
Simultaneous Auctions: A non-anonymous auction is called simultaneous if at each stage the price of some item is raised by ε for every bidder. The auctioneer can use the information gathered up to each stage, across all the personalized trajectories, to determine the next queries. A non-anonymous auction is called sequential if the auctioneer performs an auction for each bidder separately, in sequential order. The auctioneer can determine the next query based on the information gathered in the trajectories completed so far and on the history of the current trajectory.

Proposition 5.11. There are classes that cannot be elicited by simultaneous non-anonymous item-price ascending auctions, but can be elicited by a sequential non-anonymous item-price ascending auction.

Adaptive vs. Oblivious Auctions: If the auctioneer determines the queries regardless of the bidders' responses (i.e., the queries are predefined), we say that the auction is oblivious; otherwise, the auction is adaptive. We prove that adaptive behaviour of the auctioneer may be beneficial.

Proposition 5.12. There are classes that cannot be elicited (fully elicited) using oblivious item-price ascending auctions, but can be elicited (resp. fully elicited) by an adaptive item-price ascending auction.

5.5 Preference Elicitation vs.
Full Elicitation

Preference elicitation and full elicitation are closely related problems. If full elicitation is easy (e.g., takes polynomial time), then clearly elicitation is also easy (by a non-anonymous auction, simply by learning all the valuations separately24). On the other hand, there are examples where preference elicitation is considered easy but learning is hard (typically, elicitation requires a smaller amount of information; some examples can be found in [7]).

The tatonnement algorithms of [22, 12, 16] end up with the optimal allocation for substitutes valuations.25 We prove that we cannot fully elicit substitutes valuations (or even their sub-class of OXS valuations, defined in [25]), even for a single bidder, by an item-price ascending auction (although the optimal allocation can be found by an ascending auction for any number of bidders!).

Theorem 5.13. Substitutes valuations cannot be fully elicited by ascending item-price auctions. Moreover, they cannot be fully elicited by any m/2 ascending trajectories (m > 3).

Whether substitutes valuations have a compact representation (i.e., polynomial in the number of goods) is an important open question. As a step in this direction, we show that their sub-class of OXS valuations does have a compact representation: every OXS valuation can be represented by at most m² values.26

Lemma 5.14. Any OXS valuation can be represented by no more than m² values.

6. BUNDLE-PRICE ASCENDING AUCTIONS

All the ascending auctions in the literature that provably find the optimal allocation for unrestricted valuations are non-anonymous bundle-price auctions (iBundle(3) by Parkes and Ungar [37] and the Proxy Auction by Ausubel and Milgrom [3]). Yet, several anonymous ascending auctions have been suggested (e.g., AkBA [42], [21] and iBundle(2) [37]). In this section, we prove that anonymous bundle-price ascending auctions achieve poor results in the worst case. We also show that the family of non-anonymous bundle-price
ascending auctions can run exponentially slower than simple item-price ascending auctions.

6.1 Limitations of Anonymous Bundle-Price Ascending Auctions

We present a class of valuations that cannot be elicited by anonymous bundle-price ascending auctions. These valuations are described in Figure 7. The basic idea: to determine some unknown value of one bidder, we must raise the price of a bundle that should be demanded by the other bidder in the future.

24 Note that an anonymous ascending auction cannot necessarily elicit a class that can be fully elicited by an ascending auction.
25 Substitutes valuations are defined, e.g., in [16]. Roughly speaking, a bidder with a substitutes valuation will continue demanding a certain item after the prices of some other items are increased. For completeness, we present in the full paper [9] a proof of the efficiency of such auctions for substitutes valuations.
26 A unit-demand valuation is an XOR valuation in which all the atomic bundles are singletons. OXS valuations can be interpreted as an aggregation (OR) of any number of unit-demand bidders.

Bid. 1   v1(ac) = 2   v1(bd) = 2   v1(cd) = α ∈ (0, 1)
Bid. 2   v2(ab) = 2   v2(cd) = 2   v2(bd) = β ∈ (0, 1)
Figure 7: Anonymous ascending bundle-price auctions cannot determine the optimal allocation for this class of valuations.

Theorem 6.1. Some classes of valuations cannot be elicited by anonymous bundle-price ascending auctions.

Proof. Consider a pair of XOR valuations as described in Figure 7. To find the optimal allocation, we must know which of α and β is greater.27 However, we cannot learn the values of both α and β along a single ascending trajectory: assume w.l.o.g.
that bidder 1 demands cd before bidder 2 demands bd (no information will be elicited if neither happens). In this case, the price of bd must be greater than 1 (otherwise, bidder 1 would prefer bd to cd). Thus, bidder 2 will never demand the bundle bd, and no information will be elicited about β.

The valuations described in the proof of Theorem 6.1 can easily be elicited by a non-anonymous item-price ascending auction. On the other hand, the valuations in Figure 6 can easily be elicited by an anonymous bundle-price ascending auction. We conclude that the power of these two families of ascending auctions is incomparable.

We strengthen the impossibility result above by showing that anonymous bundle-price auctions cannot even achieve better than a min{O(n), O(√m)}-approximation for the social welfare. This approximation ratio can be achieved with polynomial communication, and specifically with a polynomial number of item-price demand queries.28

Theorem 6.2. An anonymous bundle-price ascending auction cannot guarantee better than a min{n/2, (√m)/2}-approximation for the optimal welfare.

Proof. (Sketch) Assume we have n bidders and n² items for sale, and that n is prime. We construct n² distinct bundles with the following properties: for each bidder i, we define a partition S^i = (S^i_1, ..., S^i_n) of the n² items into n bundles, such that any two bundles from different partitions intersect. In the full paper, part II [9], we show an explicit construction using the properties of linear functions over finite fields. The rest of the proof is independent of the specific construction. Using these n² bundles we construct a hard-to-elicit class: every bidder has an atomic bid, in his XOR valuation, for each of these n² bundles. Bidder i has a value of 2 for every bundle S^i_j in his own partition. For all bundles in the other partitions, he has a value of either 0 or 1 − δ, and these values are unknown to the auctioneer. Since every pair of bundles from different partitions intersect, only one bidder can receive a bundle with a value of 2.
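The explicit construction is deferred to [9] and is not reproduced here, but one standard construction with exactly these properties (our own illustrative sketch, not necessarily the paper's) takes the items to be the n² points of the plane Z_n × Z_n and bundle S^i_j to be the line of slope i and intercept j; lines of equal slope partition the plane, and, since n is prime, lines of different slopes always meet:

```python
def bundle(i, j, n):
    """The 'line' of slope i and intercept j in the plane Z_n x Z_n."""
    return {(x, (i * x + j) % n) for x in range(n)}

n = 5  # any prime works
all_items = {(x, y) for x in range(n) for y in range(n)}
for i in range(n):
    # the n bundles S^i_1, ..., S^i_n of bidder i partition the n^2 items
    assert set().union(*(bundle(i, j, n) for j in range(n))) == all_items
    for i2 in range(i + 1, n):
        for j in range(n):
            for j2 in range(n):
                # bundles from different partitions meet in exactly one item
                assert len(bundle(i, j, n) & bundle(i2, j2, n)) == 1
```

The second assertion holds because i·x + j ≡ i2·x + j2 (mod n) has a unique solution x whenever i ≠ i2 and n is prime.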
27 If α > β, the optimal allocation gives cd to bidder 1 and ab to bidder 2; otherwise, we give bd to bidder 2 and ac to bidder 1. Note that the two bidders cannot both gain a value of 2 in the same allocation, due to the intersections of the high-valued bundles.
28 Note that bundle-price queries may use exponential communication, thus the lower bound of [32] does not hold.

Non-anonymous Bundle-Price Economically-Efficient Ascending Auctions:
Initialization: All prices are initialized to zero (non-anonymous bundle prices).
Repeat:
- Each bidder submits a bundle that maximizes his utility under his current personalized prices.
- The auctioneer calculates a provisional allocation that maximizes his revenue under the current prices.
- The prices of bundles that were demanded by losing bidders are increased by ε.
Finally: Terminate when the provisional allocation assigns to each bidder the bundle he demanded.
Figure 8: Auctions from this family (denoted NBEA auctions) are known to achieve the optimal welfare.

No bidder will demand a low-valued bundle as long as the price of one of his high-valued bundles is below 1 (such a bundle gains him a utility greater than 1). Therefore, to elicit any information about the low-valued bundles, the auctioneer must first arbitrarily choose a bidder (w.l.o.g. bidder 1) and raise the prices of all the bundles (S^1_1, ..., S^1_n) above 1. Since the prices cannot decrease, the other bidders will clearly never demand these bundles in future stages. An adversary may choose the values such that the low values of all the bidders for the bundles outside bidder 1's partition are zero (i.e., vi(S^1_j) = 0 for every i ≠ 1 and every j); however, allocating each bidder a different bundle from bidder 1's partition might achieve a welfare of n + 1 − (n − 1)δ (bidder 1's valuation is 2, and 1
− δ for all other bidders); if these bundles are wrongly allocated, a welfare of only 2 might be achieved (2 for bidder 1's high-valued bundle, 0 for all other bidders). At this point, the auctioneer cannot have any information about the identity of the bundles with the non-zero values. Therefore, an adversary can choose the values of the bundles received by bidders 2, ..., n in the final allocation to be zero. We conclude that anonymous bundle-price auctions cannot guarantee a welfare greater than 2 for this class, where the optimal welfare can be arbitrarily close to n + 1.

6.2 Bundle Prices vs. Item Prices

The core of the auctions in [37, 3] is the scheme described in Figure 8 (in the spirit of [35]) for auctions with non-anonymous bundle prices. Auctions from this scheme end up with the optimal allocation for any class of valuations. We denote this family of ascending auctions as NBEA auctions.29 NBEA auctions can elicit k-term XOR valuations in a polynomial (in k) number of steps, although the elicitation of such valuations may require an exponential number of item-price queries ([7]), and item-price ascending auctions cannot do it at all (Theorem 5.4). Nevertheless, we show that NBEA auctions (and in particular iBundle(3) and the proxy auction) are sometimes inferior to simple item-price demand auctions. This may justify the use of hybrid auctions that use both linear and non-linear prices (e.g., the clock-proxy auction [10]). We show that auctions from this family may use an exponential number of queries even for determining the optimal allocation among two bidders with additive valuations30, where such valuations can be elicited by a simple item-price ascending auction. We actually prove this property for a wider class of auctions we call conservative auctions.

29 Non-anonymous Bundle-price Economically-efficient Ascending auctions. For completeness, we give in the full paper [9] a simple proof of the efficiency (up to an ε) of auctions of this scheme.
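The Figure 8 scheme can be sketched in code for very small instances; this is our own illustrative rendering (the function names, the value of ε, and the toy valuations are all assumptions), using brute force both for the bidders' demand and for the auctioneer's revenue-maximizing provisional allocation:

```python
from itertools import combinations

def nbea(items, valuations, eps=0.1):
    """Sketch of the Figure 8 scheme (NBEA): non-anonymous bundle prices,
    all initialized to zero.  Each round, every bidder demands a utility-
    maximizing bundle; the auctioneer keeps a revenue-maximizing provisional
    allocation of the demanded bundles and raises the price of every losing
    bidder's demanded bundle by eps.  Brute force, for tiny instances only."""
    n = len(valuations)
    all_bundles = [frozenset(c) for r in range(len(items) + 1)
                   for c in combinations(items, r)]
    prices = [dict.fromkeys(all_bundles, 0.0) for _ in range(n)]  # personalized
    while True:
        demands = [max(all_bundles, key=lambda S: valuations[i](S) - prices[i][S])
                   for i in range(n)]

        def disjoint(group):
            seen = set()
            for i in group:
                if seen & demands[i]:
                    return False
                seen |= demands[i]
            return True

        groups = [g for r in range(n + 1) for g in combinations(range(n), r)
                  if disjoint(g)]
        # revenue-maximizing provisional allocation (prefer serving more bidders)
        winners = max(groups, key=lambda g: (sum(prices[i][demands[i]] for i in g),
                                             len(g)))
        if len(winners) == n:  # every bidder got the bundle he demanded
            return {i: demands[i] for i in range(n)}
        for i in range(n):
            if i not in winners:
                prices[i][demands[i]] += eps

# One item, two single-minded bidders (illustrative values):
v1 = lambda S: 1.0 if 'x' in S else 0.0
v2 = lambda S: 0.6 if 'x' in S else 0.0
result = nbea(['x'], [v1, v2])  # bidder 0 ends up with the item
```

On this instance the two bidders push each other's personalized price for {x} upwards until bidder 1 drops to demanding the empty bundle, at which point the provisional allocation serves both bidders and the auction terminates.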
We also observe that in conservative auctions, allowing the bidders to submit all the bundles in their demand sets ensures that the auction runs a polynomial number of steps, provided L is not too high (but with exponential communication, of course). An ascending auction is called conservative if it is non-anonymous, uses bundle prices initialized to zero, and at every stage the auctioneer can only raise the prices of bundles demanded by the bidders up to this stage. In addition, each bidder can only receive bundles he demanded during the auction. Note that NBEA auctions are by definition conservative.

Proposition 6.3. If every bidder demands a single bundle in each step of the auction, conservative auctions may run for an exponential number of steps even for additive valuations. If the bidders are allowed to submit all the bundles in their demand sets in each step, then conservative auctions can run in a polynomial number of steps for any profile of valuations, as long as the maximal valuation L is polynomial in m, n and 1/δ.

Acknowledgments: The authors thank Moshe Babaioff, Shahar Dobzinski, Ron Lavi, Daniel Lehmann, Ahuva Mu'alem, David Parkes, Michael Schapira and Ilya Segal for helpful discussions. Supported by grants from the Israeli Academy of Sciences and the USA-Israel Binational Science Foundation.

7. REFERENCES
[1] amazon. Web page: http://www.amazon.com.
[2] ebay. Web page: http://www.ebay.com.
[3] L. M. Ausubel and P. R.
Milgrom. Ascending auctions with package bidding. Frontiers of Theoretical Economics, 1:1-42, 2002.
[4] Lawrence Ausubel. An efficient dynamic auction for heterogeneous commodities, 2000. Working paper, University of Maryland.
[5] Yair Bartal, Rica Gonen, and Noam Nisan. Incentive compatible multi-unit combinatorial auctions. In TARK 03, 2003.
[6] Alejandro Bertelsen. Substitutes valuations and M♮-concavity. M.Sc. thesis, The Hebrew University of Jerusalem, 2005.
[7] Avrim Blum, Jeffrey C. Jackson, Tuomas Sandholm, and Martin A. Zinkevich. Preference elicitation and query learning. Journal of Machine Learning Research, 5:649-667, 2004.
[8] Liad Blumrosen and Noam Nisan. On the computational power of iterative auctions I: demand queries. Working paper, The Hebrew University of Jerusalem. Available from http://www.cs.huji.ac.il/~noam/mkts.html.

30 Valuations are called additive if for any disjoint bundles A and B, v(A ∪ B) = v(A) + v(B). Additive valuations are both sub-additive and super-additive and are determined by the m values assigned to the singletons.

[9] Liad Blumrosen and Noam Nisan. On the computational power of iterative auctions II: ascending auctions. Working paper, The Hebrew University of Jerusalem. Available from http://www.cs.huji.ac.il/~noam/mkts.html.
[10] P. Cramton, L. M. Ausubel, and P. R. Milgrom. The clock-proxy auction: A practical combinatorial auction design. In P. Cramton, Y. Shoham, and R. Steinberg, editors, Combinatorial Auctions, chapter 5. MIT Press. Forthcoming, 2005.
[11] P. Cramton, Y. Shoham, and R. Steinberg, editors. Combinatorial Auctions. MIT Press. Forthcoming, 2005.
[12] G. Demange, D. Gale, and M.
Sotomayor. Multi-item auctions. Journal of Political Economy, 94:863-872, 1986.
[13] Shahar Dobzinski, Noam Nisan, and Michael Schapira. Approximation algorithms for CAs with complement-free bidders. In The 37th ACM Symposium on Theory of Computing (STOC), 2005.
[14] Shahar Dobzinski and Michael Schapira. Optimal upper and lower approximation bounds for k-duplicates combinatorial auctions. Working paper, The Hebrew University.
[15] Combinatorial bidding conference. Web page: http://wireless.fcc.gov/auctions/conferences/combin2003.
[16] Faruk Gul and Ennio Stacchetti. Walrasian equilibrium with gross substitutes. Journal of Economic Theory, 87:95-124, 1999.
[17] Faruk Gul and Ennio Stacchetti. The English auction with differentiated commodities. Journal of Economic Theory, 92(3):66-95, 2000.
[18] J. Håstad. Almost optimal lower bounds for small depth circuits. In 18th STOC, pages 6-20, 1986.
[19] Ron Holzman, Noa Kfir-Dahav, Dov Monderer, and Moshe Tennenholtz. Bundling equilibrium in combinatorial auctions. Games and Economic Behavior, 47:104-123, 2004.
[20] H. Karloff. Linear Programming. Birkhäuser Verlag, 1991.
[21] Frank Kelly and Richard Steinberg. A combinatorial auction with multiple winners for universal service. Management Science, 46:586-596, 2000.
[22] A. S. Kelso and V. P. Crawford. Job matching, coalition formation, and gross substitutes. Econometrica, 50:1483-1504, 1982.
[23] Subhash Khot, Richard J. Lipton, Evangelos Markakis, and Aranyak Mehta. Inapproximability results for combinatorial auctions with submodular utility functions. Working paper, 2004.
[24] Sebastien Lahaie and David C. Parkes. Applying learning algorithms to preference elicitation. In EC'04.
[25] Benny Lehmann, Daniel Lehmann, and Noam Nisan. Combinatorial auctions with decreasing marginal utilities. In ACM Conference on Electronic Commerce, 2001. To appear, Games and Economic Behavior.
[26] D. Lehmann, L.
O'Callaghan, and Y. Shoham. Truth revelation in approximately efficient combinatorial auctions. JACM, 49(5):577-602, Sept. 2002.
[27] A. Mas-Colell, M. Whinston, and J. Green. Microeconomic Theory. Oxford University Press, 1995.
[28] Debasis Mishra and David Parkes. Ascending price Vickrey auctions using primal-dual algorithms, 2004. Working paper, Harvard University.
[29] Noam Nisan. The communication complexity of approximate set packing and covering. In ICALP 2002.
[30] Noam Nisan. Bidding and allocation in combinatorial auctions. In ACM Conference on Electronic Commerce, 2000.
[31] Noam Nisan. Bidding languages. In P. Cramton, Y. Shoham, and R. Steinberg, editors, Combinatorial Auctions, chapter 1. MIT Press. Forthcoming, 2005.
[32] Noam Nisan and Ilya Segal. The communication requirements of efficient allocations and supporting prices, 2003. Working paper. Available from http://www.cs.huji.ac.il/~noam/mkts.html. Forthcoming in the Journal of Economic Theory.
[33] Noam Nisan and Ilya Segal. Exponential communication inefficiency of demand queries, 2004. Working paper. Available from http://www.stanford.edu/~isegal/queries1.pdf.
[34] D. C. Parkes and L. H. Ungar. An ascending-price generalized Vickrey auction. Technical report, Harvard University, 2002.
[35] David Parkes. Iterative combinatorial auctions. In P. Cramton, Y. Shoham, and R. Steinberg, editors, Combinatorial Auctions, chapter 3. MIT Press. Forthcoming, 2005.
[36] David C. Parkes. Iterative combinatorial auctions: Achieving economic and computational efficiency. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania, 2001.
[37] David C. Parkes and Lyle H.
Ungar. Iterative combinatorial auctions: Theory and practice. In AAAI/IAAI, pages 74-81, 2000.
[38] Ariel Rubinstein. Why are certain properties of binary relations relatively more common in natural languages. Econometrica, 64:343-356, 1996.
[39] Tuomas Sandholm. Algorithm for optimal winner determination in combinatorial auctions. Artificial Intelligence, 135:1-54, 2002.
[40] P. Santi, V. Conitzer, and T. Sandholm. Towards a characterization of polynomial preference elicitation with value queries in combinatorial auctions. In The 17th Annual Conference on Learning Theory, 2004.
[41] Ilya Segal. The communication requirements of social choice rules and supporting budget sets, 2004. Working paper. Available from http://www.stanford.edu/~isegal/rules.pdf.
[42] P. R. Wurman and M. P. Wellman. AkBA: A progressive, anonymous-price combinatorial auction. In Second ACM Conference on Electronic Commerce, 2000.
[43] Martin A. Zinkevich, Avrim Blum, and Tuomas Sandholm. On polynomial-time preference elicitation with value queries. In ACM Conference on Electronic Commerce, 2003.

On the Computational Power of Iterative Auctions *

ABSTRACT

We embark on a systematic analysis of the power and limitations of iterative combinatorial auctions. Most existing iterative combinatorial auctions are based on repeatedly suggesting prices for bundles of items, and querying the bidders for their "demand" under these prices. We prove a large number of results showing the boundaries of what can be achieved by auctions of this kind. We first focus on auctions that use a polynomial number of demand queries, and then we analyze the power of different kinds of ascending-price auctions.

1. INTRODUCTION

Combinatorial auctions have recently received a lot of attention. In a combinatorial auction, a set M of m non-identical items is sold in a single auction to n competing bidders. The bidders have preferences regarding the bundles of
items that they may receive. The preferences of bidder i are specified by a valuation function vi : 2^M → R+, where vi(S) denotes the value that bidder i attaches to winning the bundle of items S. We assume "free disposal", i.e., that the vi's are monotone non-decreasing. The usual goal of the auctioneer is to optimize the social welfare Σi vi(Si), where the allocation S1, ..., Sn must be a partition of the items. Applications include many complex resource allocation problems and, in fact, combinatorial auctions may be viewed as the common abstraction of many complex resource allocation problems. Combinatorial auctions face both economic and computational difficulties and are a central problem on the recently active border of economic theory and computer science. A forthcoming book [11] addresses many of the issues involved in the design and implementation of combinatorial auctions.

* This paper is a merger of two papers accepted to EC'05, merged at the request of the program committee. Full versions of the two papers [8, 9] can be obtained from our webpages.

The design of a combinatorial auction involves many considerations. In this paper we focus on just one central issue: the communication between bidders and the allocation mechanism, i.e., "preference elicitation". Transferring all information about bidders' preferences requires an infeasible (exponential in m) amount of communication. Thus, "direct revelation" auctions in which bidders simply declare their preferences to the mechanism are only practical for very small auction sizes or for very limited families of bidder preferences. We have therefore seen a multitude of suggested "iterative auctions" in which the auction protocol repeatedly interacts with the different bidders, aiming to adaptively elicit enough information about the bidders' preferences to be able to find a good (optimal or close to optimal) allocation. Most of the suggested iterative auctions proceed by maintaining
temporary prices for the bundles of items, repeatedly querying the bidders as to their preferences between the bundles under the current set of prices, and then updating the set of bundle prices according to the replies received (e.g., [22, 12, 17, 37, 3]). Effectively, such an iterative auction accesses the bidders' preferences by repeatedly making the following type of demand query to bidders: "Query to bidder i: a vector of bundle prices p = {p(S)}_{S⊆M}; Answer: a bundle of items S ⊆ M that maximizes vi(S) − p(S)." These types of queries are very natural in an economic setting, as they capture the "revealed preferences" of the bidders. Some auctions, called item-price or linear-price auctions, specify a price pi for each item, and the price of any given bundle S is always linear, p(S) = Σ_{i∈S} pi. Other auctions, called bundle-price auctions, allow specifying arbitrary (non-linear) prices p(S) for bundles. Another important differentiation between models of iterative auctions is based on whether they use anonymous or non-anonymous prices: in some auctions the prices that are presented to the bidders are always the same (anonymous prices), while in other auctions (non-anonymous) different bidders may face different (discriminatory) vectors of prices. In ascending-price auctions, forcing prices to be anonymous may be a significant restriction.
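As a concrete illustration of an item-price demand query (our own sketch; the valuation and prices are made up), the answer can be computed by brute force over the 2^m bundles:

```python
from itertools import chain, combinations

def demand_query(valuation, item_prices):
    """A demand query under item (linear) prices: return a bundle S
    maximizing v(S) - p(S), where p(S) is the sum of the item prices."""
    items = list(item_prices)
    bundles = chain.from_iterable(combinations(items, r)
                                  for r in range(len(items) + 1))
    return max(bundles, key=lambda S: valuation(frozenset(S))
               - sum(item_prices[i] for i in S))

# A bidder who values only the pair {a, b} (at 10):
v = lambda S: 10 if S == frozenset('ab') else 0
assert set(demand_query(v, {'a': 3, 'b': 4})) == {'a', 'b'}  # utility 10 - 7 > 0
assert demand_query(v, {'a': 6, 'b': 5}) == ()               # utility would be -1
```

A bundle-price query is answered the same way, except that p(S) is looked up in an arbitrary price table instead of being summed from item prices.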
sub-linear time algorithms: the input size here is really n (2m \u2212 1) numbers--the descriptions of the valuation functions of all bidders.\nThere are two aspects to computational efficiency in these settings: the first is the communication with the bidders, i.e., the number of queries made, and the second is the \"usual\" computational tractability.\nOur lower bounds will depend only on the number of queries--and hold independently of any computational assumptions like P = NP.\nOur upper bounds will always be computationally efficient both in terms of the number of queries and in terms of regular computation.\nAs mentioned, this paper concentrates on the single aspect of preference elicitation and on its computational consequences and does not address issues of incentives.\nThis strengthens our lower bounds, but means that the upper bounds require evaluation from this perspective also before being used in any real combinatorial auction .1 The second part of this paper studies the power of ascending - price auctions.\nAscending auctions are iterative auctions where the published prices cannot decrease in time.\nIn this work, we try to systematically analyze what do the differences between various models of ascending auctions mean.\nWe try to answer the following questions: (i) Which models of ascending auctions can find the optimal allocation, and for which classes of valuations?\n(ii) In cases where the optimal allocation cannot be determined by ascending auctions, how well can such auctions approximate the social welfare?\n(iii) How do the different models for ascending auctions compare?\nAre some models computationally stronger than others?\nAscending auctions have been extensively studied in the literature (see the recent survey by Parkes [35]).\nMost of this work presented' upper bounds', i.e., proposed mechanisms with ascending prices and analyzed their properties.\nA result which is closer in spirit to ours, is by Gul and Stacchetti [17], who showed that 
no item-price ascending auction can always determine the VCG prices, even for substitutes valuations.2 Our framework is more general than the traditional line of research that concentrates on the final allocation and payments and, in particular, on reaching "Walrasian equilibria" or "Competitive equilibria". A Walrasian equilibrium3 is known to exist in the case of substitutes valuations, and is known to be impossible for any wider class of valuations [16]. This does not rule out other allocations by ascending auctions: in this paper we view the auctions as a computational process whose outcome -- both the allocation and the payments -- can be determined according to all the data elicited throughout the auction; this general framework strengthens our negative results.4

1We do observe, however, that some weak incentive property comes for free in demand-query auctions, since "myopic" players will answer all demand queries truthfully. We also note that in some cases (but not always!) the incentive issues can be handled orthogonally to the preference-elicitation issues, e.g., by using Vickrey-Clarke-Groves (VCG) prices (e.g., [4, 34]).
2We further discuss this result in Section 5.3.

Figure 1: The diagram classifies the following auctions according to their properties: (1) The adaptation [12] of Kelso & Crawford's [22] auction. (2) The Proxy Auction [3] by Ausubel & Milgrom. (3) iBundle(3) by Parkes & Ungar [34]. (4) iBundle(2) by Parkes & Ungar [37]. (5) Our descending adaptation of the 2-approximation for submodular valuations by [25] (see Subsection 5.4). (6) Ausubel's [4] auction for substitutes valuations. (7) The adaptation by Nisan & Segal [32] of the O(√m) approximation by [26]. (8) The duplicate-item auction by [5]. (9) The auction for Read-Once formulae by [43]. (10) The AkBA auction by Wurman & Wellman [42].

We find the study of ascending auctions appealing for various reasons. First, ascending auctions are widely used in many
real-life settings, from the FCC spectrum auctions [15] to almost any e-commerce website (e.g., [2, 1]). Indeed, this is perhaps the most straightforward way to sell items: ask the bidders what they would like to buy under certain prices, and increase the prices of over-demanded goods. Ascending auctions are also considered more intuitive for many bidders, and are believed to increase the "trust" of the bidders in the auctioneer, as they see the result gradually emerging from the bidders' responses. Ascending auctions also have other desirable economic properties; e.g., they require less information revelation (consider, for example, English auctions vs. second-price sealed-bid auctions).

1.1 Extant Work

Many iterative combinatorial auction mechanisms rely on demand queries (see the survey in [35]). Figure 1 summarizes the basic "classes" of auctions implied by combinations of the above properties, and classifies some of the auctions proposed in the literature according to this classification.

3A Walrasian equilibrium is a vector of item prices for which all the items are sold when each bidder receives a bundle in his demand set.
4In a few recent auction designs (e.g., [4, 28]) the payments are not necessarily the final prices of the auction.

Figure 2: The best approximation factors currently achievable by computationally-efficient combinatorial auctions, for several classes of valuations. All lower bounds in the table apply to all iterative auctions (except the one marked by *); all upper bounds in the table are achieved with item-price demand queries.

For our purposes, two families of these auctions serve as the main motivating starting points: the first is the ascending item-price auctions of [22, 17] that, with computational efficiency, find an optimal allocation among "(gross) substitutes" valuations, and the second is the ascending bundle-price auctions of [37, 3] that find an optimal allocation among general valuations -- but not necessarily with
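The ascending-auction template just described -- query each bidder's demand at the current prices, then raise the prices of over-demanded goods -- can be sketched for the simple special case of unit-demand bidders. This is our own illustration, not a mechanism from the paper; the bidders, values, and increment are made-up toy choices:

```python
# Minimal sketch (hypothetical toy setting, not from the paper) of an
# item-price ascending auction for unit-demand bidders: repeatedly ask
# each bidder which single item they demand at the current prices, and
# raise the price of every item demanded by more than one bidder.

EPS = 1.0  # illustrative price increment


def demand(values, prices):
    """A unit-demand bidder's answer to an item-price demand query:
    the item maximizing v_j - p_j, or None if no item yields
    strictly positive utility."""
    best, best_u = None, 0.0
    for j, v in values.items():
        u = v - prices[j]
        if u > best_u:
            best, best_u = j, u
    return best


def ascending_auction(bidders, items):
    """bidders: list of dicts item -> value. Prices never decrease."""
    prices = {j: 0.0 for j in items}
    while True:
        demands = [demand(v, prices) for v in bidders]
        counts = {}
        for d in demands:
            if d is not None:
                counts[d] = counts.get(d, 0) + 1
        over = [j for j, c in counts.items() if c > 1]
        if not over:              # no item is over-demanded: stop
            return prices, demands
        for j in over:            # ascending step
            prices[j] += EPS
```

For example, with bidders `[{'a': 5.0, 'b': 3.0}, {'a': 4.0, 'b': 6.0}]` the auction ends immediately with bidder 0 demanding `'a'` and bidder 1 demanding `'b'`; with two bidders competing for a single item, the price rises until the weaker bidder drops out.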
computational efficiency. The main lower bound in this area, due to [32], states that indeed, due to inherent communication requirements, it is not possible for any iterative auction to find the optimal allocation among general valuations with sub-exponentially many queries. A similar exponential lower bound was shown in [32] for even approximating the optimal allocation to within a factor of m^(1/2−ε). Several lower bounds and upper bounds for approximation are known for some natural classes of valuations; these are summarized in Figure 2.

In [32], the universal generality of demand queries is also shown: any non-deterministic communication protocol for finding an allocation that optimizes the social welfare can be converted into one that only uses demand queries (with bundle prices). In [41] this was generalized to nondeterministic protocols for finding allocations that satisfy other natural types of economic requirements (e.g., approximate social efficiency, envy-freeness). However, in [33] it was demonstrated that this "completeness" of demand queries holds only in the nondeterministic setting, while in the usual deterministic setting, demand queries (even with bundle prices) may be exponentially weaker than general communication.

Bundle-price auctions are a generalization of (the more natural and intuitive) item-price auctions. It is known that item-price auctions may indeed be exponentially weaker: a nice example is the case of valuations that are a XOR of k bundles5, where k is small (say, polynomial). Lahaie and Parkes [24] show an economically-efficient bundle-price auction that uses a polynomial number of queries whenever k is polynomial. In contrast, [7] show that there exist valuations that are XORs of k = √m bundles such that any item-price auction that finds an optimal allocation among them requires exponentially many queries. These results are part of a recent line of research ([7, 43, 24, 40]) that studies the
"preference elicitation" problem in combinatorial auctions and its relation to the "full elicitation" problem (i.e., learning the exact valuations of the bidders). These papers adapt methods from machine-learning theory to the combinatorial-auction setting. The preference elicitation problem and the full elicitation problem relate to a well-studied problem in microeconomics known as the integrability problem (see, e.g., [27]). This problem studies whether, and when, one can derive the utility function of a consumer from her demand function.

5These are valuations where bidders have values for k specific packages, and the value of a bundle is the maximal value of one of these packages that it contains.

Paper organization: Due to the relatively large number of results we present, we start with a survey of our new results in Section 2. After describing our formal model in Section 3, we present our results concerning the power of demand queries in Section 4. Then, we describe the power of item-price ascending auctions (Section 5) and bundle-price ascending auctions (Section 6). Readers who are mainly interested in the self-contained discussion of ascending auctions can skip Section 4. Missing proofs from Section 4 can be found in part I of the full paper ([8]). Missing proofs from Sections 5 and 6 can be found in part II of the full paper ([9]).

Acknowledgments: The authors thank Moshe Babaioff, Shahar Dobzinski, Ron Lavi, Daniel Lehmann, Ahuva Mu'alem, David Parkes, Michael Schapira and Ilya Segal for helpful discussions. Supported by grants from the Israeli Academy of Sciences and the US-Israel Binational Science Foundation.

On the Computational Power of Iterative Auctions*

ABSTRACT

We embark on a systematic analysis of the power and limitations of iterative combinatorial auctions. Most existing iterative combinatorial auctions are based on repeatedly suggesting prices for bundles of items, and querying the bidders for their "demand" under these prices. We prove a large number of results showing the boundaries of what can be achieved by auctions of this kind. We first focus on auctions that use a polynomial number of demand queries, and then we analyze the power of different kinds of ascending-price auctions.

1. INTRODUCTION

Combinatorial auctions have recently received a lot of attention. In a combinatorial auction, a set M of m non-identical items is sold in a single auction to n competing bidders. The bidders have preferences regarding the bundles of items that they may receive. The preferences of bidder i are specified by a valuation function vi : 2^M → R+, where vi(S) denotes the value that bidder i attaches to winning the bundle of items S. We assume "free disposal", i.e., that the vi's are monotone non-decreasing. The usual goal of the auctioneer is to optimize the social welfare Σi vi(Si), where the allocation S1, ..., Sn must be a partition of the items. Applications include many complex resource allocation problems and, in fact, combinatorial auctions may be viewed as the common abstraction of many complex resource allocation problems.

*This paper is a merger of two papers accepted to EC'05, merged at the request of the program committee. Full versions of the two papers [8, 9] can be obtained from our webpages.

Combinatorial auctions
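The auctioneer's objective Σi vi(Si) over partitions of the items can be made concrete with a brute-force welfare maximizer. The enumeration below is exponential in m, which is exactly why direct revelation and exhaustive search are infeasible beyond tiny instances; the function name and toy values are our own illustration, not from the paper:

```python
from itertools import product

def optimal_welfare(valuations, items):
    """Brute-force social-welfare maximization (toy sketch).

    valuations[i] maps frozenset-of-items -> value, with v(empty) = 0 and
    monotone values ("free disposal"). Assigning each item independently to
    one of the n bidders enumerates n^m candidate partitions, which suffices
    under monotonicity."""
    n = len(valuations)
    best, best_bundles = 0.0, None
    for assign in product(range(n), repeat=len(items)):
        bundles = [frozenset(it for it, b in zip(items, assign) if b == i)
                   for i in range(n)]
        w = sum(valuations[i][bundles[i]] for i in range(n))
        if w > best:
            best, best_bundles = w, bundles
    return best, best_bundles
```

For two additive bidders over items {a, b} with values (1, 4) and (3, 2), the maximizer gives item b to the first bidder and item a to the second, for welfare 7.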
face both economic and computational difficulties, and are a central problem on the recently active border of economic theory and computer science. A forthcoming book [11] addresses many of the issues involved in the design and implementation of combinatorial auctions.

The design of a combinatorial auction involves many considerations. In this paper we focus on just one central issue: the communication between bidders and the allocation mechanism -- "preference elicitation". Transferring all information about bidders' preferences requires an infeasible (exponential in m) amount of communication. Thus, "direct revelation" auctions in which bidders simply declare their preferences to the mechanism are only practical for very small auction sizes or for very limited families of bidder preferences. We have therefore seen a multitude of suggested "iterative auctions" in which the auction protocol repeatedly interacts with the different bidders, aiming to adaptively elicit enough information about the bidders' preferences to be able to find a good (optimal or close to optimal) allocation.

Most of the suggested iterative auctions proceed by maintaining temporary prices for the bundles of items, repeatedly querying the bidders as to their preferences between the bundles under the current set of prices, and then updating the set of bundle prices according to the replies received (e.g., [22, 12, 17, 37, 3]). Effectively, such an iterative auction accesses the bidders' preferences by repeatedly making the following type of demand query to the bidders: "Query to bidder i: a vector of bundle prices p = {p(S)}, S ⊆ M; Answer: a bundle of items S ⊆ M that maximizes vi(S) − p(S)." These types of queries are very natural in an economic setting, as they capture the "revealed preferences" of the bidders.

Some auctions, called item-price or linear-price auctions, specify a price pi for each item, and the price of any given bundle S is always linear, p(S) =
Σi∈S pi. Other auctions, called bundle-price auctions, allow specifying arbitrary (non-linear) prices p(S) for bundles. Another important differentiation between models of iterative auctions is based on whether they use anonymous or non-anonymous prices: in some auctions the prices that are presented to the bidders are always the same (anonymous prices); in other auctions (non-anonymous), different bidders may face different (discriminatory) vectors of prices. In ascending-price auctions, forcing prices to be anonymous may be a significant restriction.

In this paper, we embark on a systematic analysis of the computational power of iterative auctions that are based on demand queries. We do not aim to present auctions for practical use, but rather to understand the limitations and possibilities of these kinds of auctions. In the first part of this paper, our main question is: what can be done using a polynomial number of these types of queries? That is, polynomial in the main parameters of the problem: n, m and the number of bits t needed for representing a single value vi(S).

2. A SURVEY OF OUR RESULTS

Our systematic analysis is composed of a rather large number of results characterizing the power and limitations of various classes of auctions. In this section, we present an exposition describing our new results. We first discuss the power of demand-query iterative auctions, and then we turn our attention to ascending auctions. Figure 3 summarizes some of our main results.

2.1 Demand Queries

Comparison of query types. We first ask: what other natural types of queries could we imagine iterative auctions using? Here is a list of such queries that are either natural, have been used in the literature, or that we found useful.

1. Value query: The auctioneer presents a bundle S; the bidder reports his value v(S) for this bundle.

2. Marginal-value query: The auctioneer presents a bundle A and an item j; the bidder reports how much he is willing to pay for j given that he already owns A, i.e., v(j | A) = v(A ∪ {j}) − v(A).

3. Demand query (with item prices): The
auctioneer presents a vector of item prices p1, ..., pm; the bidder reports his demand under these prices, i.e., some set S that maximizes v(S) − Σ_{i∈S} p_i (Footnote 6).

Figure 3: This paper studies the economic efficiency of auctions that follow certain communication constraints. For each class of auctions, the table shows whether the optimal allocation can be achieved, or else how well it can be approximated (both upper bounds and lower bounds). New results are highlighted. Abbreviations: "Poly." (polynomial number/size), AA (ascending auctions). "-" means that nothing is currently known except trivial solutions.

4. Indirect-utility query: The auctioneer presents a set of item prices p1, ..., pm, and the bidder responds with his "indirect utility" under these prices, that is, the highest utility he can achieve from a bundle under these prices: max_{S⊆M} (v(S) − Σ_{i∈S} p_i) (Footnote 7).
5. Relative-demand query: The auctioneer presents a set of non-zero prices p1, ..., pm, and the bidder reports a bundle that maximizes his value per unit of money, i.e., some set that maximizes v(S) / Σ_{i∈S} p_i (Footnote 8).

Theorem: Each of these queries can be efficiently (i.e., in time polynomial in n, m, and the number of bits of precision t needed to represent a single value v_i(S)) simulated by a sequence of demand queries with item prices.

In particular this shows that demand queries can elicit all information about a valuation by simulating all 2^m − 1 value queries. We also observe that value queries and marginal-value queries can simulate each other in polynomial time, and that demand queries and indirect-utility queries can simulate each other in polynomial time. We prove that exponentially many value queries may be needed in order to simulate a single demand query. It is interesting to note that for the restricted class of substitutes valuations, demand queries may be simulated by a polynomial number of value queries [6].

Welfare approximation

The next question that we ask is
how well can a computationally-efficient auction that uses only demand queries approximate the optimal allocation? Two separate obstacles are known. In [32], a lower bound of min(n, m^{1/2−ε}), for any fixed ε > 0, was shown for the approximation factor obtained using any polynomial amount of communication. A computational bound with the same value applies even for the case of single-minded bidders, but under the assumption NP ≠ ZPP [39]. As noted in [32], the computationally-efficient greedy algorithm of [26] can be adapted to become a polynomial-time iterative auction that achieves a nearly matching approximation factor of min(n, O(√m)). This iterative auction may be implemented with bundle-price demand queries but, as far as we can see, not with item prices. Since a single bundle-price demand query can present an exponential number of prices, this algorithm can have an exponential communication cost. In Section 4.2, we describe a different item-price auction that achieves the same approximation factor with a polynomial number of queries (and thus with polynomial communication).

Footnote 6: All our results apply for any fixed tie-breaking rule.
Footnote 7: This is exactly the utility achieved by the bundle which would be returned in a demand query with the same prices. This notion relates to the indirect-utility function studied in the microeconomic literature (see, e.g., [27]).
Footnote 8: Note that when all the prices are 1, the bidder actually reports the bundle with the highest per-item value. We found this type of query useful, for example, in the design of the approximation algorithm described in Figure 5 in Section 4.2.

Theorem: There exists a computationally-efficient iterative auction with item-price demand queries that finds an allocation that approximates the optimal welfare between arbitrary valuations to within a factor of min(n, O(√m)).

One may then attempt to obtain such an approximation factor using iterative auctions that use only the weaker value
queries. However, we show that this is impossible:

Theorem: Any iterative auction that uses a polynomial (in n and m) number of value queries cannot achieve an approximation factor better than O(m/log m).

Note, however, that auctions with only value queries are not completely trivial in power: the bundling auctions of Holzman et al. [19] can easily be implemented by a polynomial number of value queries and can achieve an approximation factor of O(m/log m) by using O(log m) equi-sized bundles. We do not know how to close the (tiny) gap between this upper bound and our lower bound.

Representing bundle prices

We then deal with a critical issue with bundle-price auctions that was side-stepped by our model, as well as by all previous works that used bundle-price auctions: how are the bundle prices represented? For item-price auctions this is not an issue, since a query needs only to specify a small number, m, of prices. In bundle-price auctions the situation is more difficult, since there are exponentially many bundles that require pricing. Our basic model (like all previous work that used bundle prices, e.g., [37, 34, 3]) ignores this issue, and only requires that the prices be determined, somehow, by the protocol. A finer model would fix a specific language for denoting bundle prices, force the protocol to represent the bundle prices in this language, and require that the representations of the bundle prices also be polynomial. What could such a language for denoting prices for all bundles look like? First note that specifying a price for each bundle is equivalent to specifying a valuation. Second, as noted in [31], most of the proposed bidding languages are really just languages for representing valuations, i.e., syntactic representations of valuations; thus we could use any of them. This point of view opens up the general issue of which language should be used in bundle-price auctions and what the implications of this choice are. Here we initiate
this line of investigation. We consider bundle-price auctions where the prices must be given as a XOR-bid, i.e., the protocol must explicitly indicate the price of every bundle whose value is different from that of all of its proper subsets. Note that all bundle-price auctions that do not explicitly specify a bidding language must implicitly use this language or a weaker one, since without a specific language one would need to list prices for all bundles, perhaps except for trivial ones (those with value 0, or more generally, those with a value that is determined by one of their proper subsets). We show that once the representation length of bundle prices is taken into account (using the XOR-language), bundle-price auctions are no longer strictly stronger than item-price auctions. Define the cost of an iterative auction as the total length of the queries and answers used throughout the auction (in the worst case).

Theorem: For some class of valuations, bundle-price auctions that use the XOR-language require an exponential cost for finding the optimal allocation. In contrast, item-price auctions can find the optimal allocation for this class within polynomial cost (Footnote 10).

This casts doubt on the applicability of bundle-price auctions like [3, 37], and it may justify the use of "hybrid" pricing methods such as Ausubel, Cramton and Milgrom's Clock-Proxy auction ([10]).

Footnote 10: Our proof relies on the sophisticated known lower bounds for constant-depth circuits. We were not able to find an elementary proof.

Demand queries and linear programs

The winner determination problem in combinatorial auctions may be formulated as an integer program. In many cases solving the linear-program relaxation of this integer program is useful: for some restricted classes of valuations it finds the optimum of the integer program (e.g., substitutes valuations [22, 17]), or it helps approximate the optimum (e.g., by randomized rounding [13, 14]). However, the linear program has an exponential number of variables. Nisan and Segal [32] observed the surprising fact that despite the exponential number of variables, this linear program may be solved within polynomial communication. The basic idea is to solve the dual program using the Ellipsoid method (see, e.g., [20]). The dual program has a polynomial number of variables, but an exponential number of constraints. The Ellipsoid algorithm runs in polynomial time even on such programs, provided that a "separation oracle" is given for the set of constraints. Surprisingly, such a separation oracle can be implemented using a single demand query (with item prices) to each of the bidders. The treatment of [32] was somewhat ad hoc to the problem at hand (the case of substitutes valuations). Here we give a somewhat more general form of this important observation. Let us call the following class of linear programs "generalized-winner-determination-relaxation (GWDR) LPs":

  maximize Σ_i Σ_{S⊆M} w_i x_{i,S} v_i(S)
  subject to Σ_i Σ_{S: j∈S} x_{i,S} ≤ q_j (for each item j)
             Σ_{S⊆M} x_{i,S} ≤ d_i (for each bidder i)
             x_{i,S} ≥ 0 (for each i and S ⊆ M)

The case where w_i = 1, d_i = 1, q_j = 1 (for every i, j) is the usual linear relaxation of the winner determination problem. More generally, w_i may be viewed as the weight given to bidder i's welfare, q_j as the quantity of units of good j, and d_i as the multiplicity of bidders of type i.
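The separation-oracle observation above can be illustrated with a short sketch: a candidate dual point (p, u) is feasible iff each bidder's maximal utility at prices p is at most u_i, which a single demand query per bidder reveals. The demand-oracle interface used here (each bidder returns a utility-maximizing bundle together with its value) is an assumed simplification for illustration, not the paper's formal model.

```python
def separation_oracle(bidders, prices, utilities):
    """Dual-feasibility check for the winner-determination LP relaxation.

    The dual constraints read u_i >= v_i(S) - sum_{j in S} p_j for every
    bidder i and bundle S.  `bidders` is a list of demand oracles: called
    with item prices, bidder i returns (S, v_i(S)) for a bundle S that
    maximizes his utility.  Returns None if (prices, utilities) is
    feasible, else (i, S) naming a violated constraint, which is exactly
    what the Ellipsoid method needs in order to continue.
    """
    for i, demand in enumerate(bidders):
        bundle, value = demand(prices)
        if value - sum(prices[j] for j in bundle) > utilities[i] + 1e-9:
            return i, bundle  # violated dual constraint
    return None
```

Since a bidder's demanded bundle witnesses the largest right-hand side over all S, one query per bidder suffices to check the exponentially many constraints.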
Theorem: Any GWDR linear program may be solved in polynomial time (in n, m, and the number of bits of precision t) using only demand queries with item prices (Footnote 11).

Footnote 11: The produced optimal solution will have polynomial support and thus can be listed fully.

2.2 Ascending Auctions

Ascending item-price auctions: It is well known that the item-price ascending auctions of Kelso and Crawford [22] and their variants [12, 16] find the optimal allocation as long as all players' valuations have the substitutes property. The obvious question is whether the optimal allocation can be found for a larger class of valuations. Our main result here is a strong negative result:

Theorem: There is a 2-item 2-player problem in which no ascending item-price auction can find the optimal allocation.

This is in contrast both to the power of bundle-price ascending auctions and to the power of general item-price demand queries (see above), both of which can always find the optimal allocation and in fact even provide full preference elicitation. The same proof establishes a similar impossibility result for other types of auctions (e.g., descending auctions, non-anonymous auctions). Further extensions of this result:

• Eliciting some classes of valuations requires an exponential number of ascending item-price trajectories.
• At least k − 1 ascending item-price trajectories are needed to elicit XOR formulae with k terms. This result is in some sense tight, since we show that any k-term XOR formula can be fully elicited by k − 1 non-deterministic (i.e., when some exogenous "teacher" instructs the auctioneer on how to increase the prices) ascending auctions (Footnote 12).

We also show that item-price ascending auctions and iterative auctions that are limited to a polynomial number of queries (of any kind, not necessarily ascending) are incomparable in their power: ascending auctions, with small enough increments, can elicit the preferences in cases where any polynomial number of queries
cannot.\nMotivated by several recent papers that studied the relation between eliciting and fully-eliciting the preferences in combinatorial auctions (e.g., [7, 24]), we explore the difference between these problems in the context of ascending auctions.\nWe show that although a single ascending auction can determine the optimal allocation among any number of bidders with substitutes valuations, it cannot fully-elicit such a valuation even for a single bidder.\nWhile it was shown in [25] that the set of substitutes valuations has measure zero in the space of general valuations, its dimension is not known, and in particular it is still open whether a polynomial amount of information suffices to describe a substitutes valuation.\nWhile our result may be a small step in that direction (a polynomial full elicitation may still be possible with other communication protocols), we note that our impossibility result also holds for valuations in the class OXS defined by [25], valuations that we are able to show have a compact representation.\nWe also give several results separating the power of different models for ascending combinatorial auctions that use item-prices: we prove, not surprisingly, that adaptive ascending auctions are more powerful than oblivious ascending auctions and that non-deterministic ascending auctions are more powerful than deterministic ascending auctions.\nWe also compare different kinds of non-anonymous auctions (e.g., simultaneous or sequential), and observe that anonymous bundle-price auctions and non-anonymous item-price auctions are incomparable in their power.\nFinally, motivated by Dutch auctions, we consider descending auctions, and how they compare to ascending ones; we show classes of valuations that can be elicited by ascending item-price auctions but not by descending item-price auctions, and vice versa.\nAscending bundle-price auctions:\nAll known ascending bundle-price auctions that are able to find the optimal allocation between 
general valuations (with "free disposal") use non-anonymous prices. Anonymous ascending-price auctions (e.g., [42, 21, 37]) are only known to be able to find the optimal allocation among super-additive valuations and a few other simple classes ([36]). We show that this is no mistake:

Theorem: No ascending auction with anonymous prices can find the optimal allocation between general valuations.

Footnote 12: Non-deterministic computation is widely used in CS and also in economics (e.g., a Walrasian equilibrium or [38]). In some settings, deterministic and non-deterministic models have equal power (e.g., computation with finite automata).

This bound holds regardless of the running time, and it also holds for descending auctions and non-deterministic auctions. We strengthen this result significantly by showing that anonymous ascending auctions cannot produce a better than O(√m) approximation; this is the approximation ratio that can be achieved with a polynomial number of queries ([26, 32]) and, as mentioned, with a polynomial number of item-price demand queries. The same lower bound clearly holds for anonymous item-price ascending auctions, since such auctions can be simulated by anonymous bundle-price ascending auctions. We currently do not have any lower bound on the approximation achievable by non-anonymous item-price ascending auctions.

Finally, we study the performance of the existing computationally-efficient ascending auctions. These protocols ([37, 3]) require exponential time in the worst case, and this is unavoidable as shown by [32]. However, we also observe that these auctions, as well as the whole class of similar ascending bundle-price auctions, require exponential time even for simple additive valuations. This is avoidable, and indeed the ascending item-price auctions of [22] can find the optimal allocation for these simple valuations with polynomial communication.

3. THE MODEL

3.1 Discrete Auctions for Continuous Values

Our model aims to capture iterative
auctions that operate on real-valued valuations. There is a slight technical difficulty here in bridging the gap between the discrete nature of an iterative auction and the continuous nature of the valuations. This is exactly the same problem as in modeling a simple English auction. There are three standard formal ways to model it:

1. Model the auction as a continuous process and study its trajectory in time. For example, the so-called Japanese auction is basically a continuous model of an English auction (Footnote 13).
2. Model the auction as discrete and the valuations as continuously valued. In this case we introduce a parameter ε and usually require the auction to produce results that are ε-close to optimal.
3. Model the valuations as discrete. In this case we assume that all valuations are integer multiples of some small fixed quantity δ, e.g., 1 penny. All communication in this case is then naturally finite.

In this paper we use the latter formulation and assume that all values are multiples of some δ.
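The third modeling choice can be made concrete with a one-line helper that maps any real value onto the grid {0, δ, 2δ, ...}. This is a sketch of the discretization only; the model itself simply assumes values already lie on this grid.

```python
def discretize(v, delta):
    """Round a real valuation v down to the grid {0, delta, 2*delta, ...}.

    With delta = 1 (the normalization used below) this is plain integer
    truncation; with delta = 0.01 it rounds down to whole pennies.
    """
    return int(v // delta) * delta
```

For instance, discretize(7.9, 0.5) yields 7.5, and with delta = 1 all values become integers, which is the assumption adopted in the rest of the paper.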
Thus, in some parts of the paper we assume without loss of generality that δ = 1, hence all valuations are integral. Almost all (if not all) of our results can be translated to the other two models with little effort.

3.2 Valuations

A single auctioneer is selling m indivisible non-homogeneous items in a single auction; let M be the set of these items and N be the set of bidders. Each of the n bidders in the auction has a valuation function v_i : 2^M → {0, δ, 2δ, ..., L}, where for every bundle of items S ⊆ M, v_i(S) denotes the value of bidder i for the bundle S and is a multiple of δ in the range 0...L. We will sometimes denote the number of bits needed to represent such values in the range 0...L by t = log L. We assume free disposal, i.e., S ⊆ T implies v_i(S) ≤ v_i(T), and that v_i(∅) = 0 for all bidders.

Footnote 13: Another similar model is the "moving knives" model in the cake-cutting literature.

We will mention the following classes of valuations:

• A valuation is called sub-modular if for all sets of items A and B we have v(A ∪ B) + v(A ∩ B) ≤ v(A) + v(B).
• A valuation is called super-additive if for all disjoint sets of items A and B we have v(A ∪ B) ≥ v(A) + v(B).
• A valuation is called a k-bundle XOR if it can be represented as a XOR combination of at most k atomic bids [30], i.e., if there are at most k bundles S_i and prices p_i such that for all S, v(S) = max_{i : S ⊇ S_i} p_i. Such valuations will be denoted by v = (S_1 : p_1) ⊕ (S_2 : p_2) ⊕ ... ⊕ (S_k : p_k) (Footnote 14).

3.3 Iterative Auctions

The auctioneer sets up a protocol (equivalently, an "algorithm") in which at each stage of the protocol some information q, termed the "query", is sent to some bidder i, and then bidder i replies with some answer that depends on the query as well as on his own valuation. In this paper, we assume that we have complete control over the bidders' behavior, and thus the protocol also defines a reply function r_i(q, v_i) that specifies
bidder i's reply to query q. The protocol may be adaptive: the query value as well as the queried bidder may depend on the replies received for past queries. At the end of the protocol, an allocation S_1, ..., S_n must be declared, where S_i ∩ S_j = ∅ for i ≠ j. We say that the auction finds an optimal allocation if it finds the allocation that maximizes the social welfare Σ_i v_i(S_i). We say that it finds a c-approximation if Σ_i v_i(S_i) ≥ Σ_i v_i(T_i)/c, where T_1, ..., T_n is an optimal allocation. The running time of the auction on a given instance of the bidders' valuations is the total number of queries made on this instance. The running time of a protocol is the worst-case cost over all instances. Note that we impose no computational limitations on the protocol or on the players (Footnote 15). This of course only strengthens our hardness results. Yet, our positive results will not use this power and will be efficient also in the usual computational sense.

Footnote 14: For example, v = (abcd : 5) ⊕ (ab : 3) ⊕ (c : 4) denotes the XOR valuation with the terms abcd, ab, c and prices 5, 3, 4 respectively. For this valuation, v(abcd) = 5, v(abd) = 3, v(abc) = 4.
Footnote 15: The running time really measures communication costs and not computational running time.

Our goal will be to design computationally-efficient protocols. We will deem a protocol computationally efficient if its cost is polynomial in the relevant parameters: the number of bidders n, the number of items m, and t = log L, where L is the largest possible value of a bundle. However, when we discuss ascending-price auctions and their variants, a computationally-efficient protocol will only be required to be "pseudo-polynomial", i.e., it should ask a number of queries that is polynomial in m, n and L. This is because ascending auctions usually cannot achieve such running times (consider even the English auction on a single item) (Footnote 16). Note that all of our results give concrete bounds, where the dependence on the parameters
is given explicitly; we use the standard big-Oh notation just as a shorthand. We say that an auction elicits some class V of valuations if it determines the optimal allocation for any profile of valuations drawn from V. We say that an auction fully elicits some class of valuations V if it can fully learn any single valuation v ∈ V (i.e., learn v(S) for every S).

3.4 Demand Queries and Ascending Auctions

Most of the paper will be concerned with a common special case of iterative auctions that we term "demand auctions". In such auctions, the queries that are sent to bidders are demand queries: the query specifies a price p(S) ∈ R+ for each bundle S. The reply of bidder i is simply the set most desired, i.e. "demanded", under these prices: formally, a set S that maximizes v_i(S) − p(S). It may happen that more than one set S maximizes this value, in which case ties are broken according to some fixed tie-breaking rule, e.g., the lexicographically first such set is returned. All of our results hold for any fixed tie-breaking rule.

Ascending auctions are iterative auctions with non-decreasing prices. Note that the term "ascending auction" refers to an auction with a single ascending trajectory of prices. It may be useful to define multi-trajectory ascending auctions, in which the prices may be reset to zero a number of times (see, e.g., [4]). We consider two main restrictions on the types of allowed demand queries (Footnote 17). Note that even though in our model valuations are integral (or multiples of some δ), we allow the demand query to use arbitrary real numbers in R+. That is, we assume that the increment ε we use in the ascending auctions may be significantly smaller than δ.

Footnote 16: Most of the auctions we present may be adapted to run in time polynomial in log L, using a binary-search-like procedure, losing their ascending nature.
Footnote 17: Note that a non-anonymous auction can clearly be simulated by n parallel anonymous auctions.
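The binary-search-like procedure alluded to above (recovering a hidden integral value from yes/no demand responses in about log L steps) can be sketched as follows. The boolean demand interface, where the bidder merely reports whether the bundle is still demanded at a given price, is an assumption made for this sketch.

```python
def value_via_demand(demanded, bundle, L):
    """Recover v(bundle), an integer in {0, ..., L}, from demand replies.

    `demanded(bundle, price)` is assumed to answer True iff the bidder
    still demands `bundle` when it is priced at `price` and every other
    non-empty bundle is priced above L (ties with the empty set broken
    toward the bundle).  Then demanded(bundle, p) holds iff
    v(bundle) >= p, and binary search finds v(bundle) in O(log L) queries.
    """
    lo, hi = 0, L  # invariant: lo <= v(bundle) <= hi
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if demanded(bundle, mid):
            lo = mid   # still demanded, so v(bundle) >= mid
        else:
            hi = mid - 1
    return lo
```

This uses about log L = t queries per bundle value, matching the query counts quoted for the simulations in Section 4.1.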
All our hardness results hold for any ε, even for continuous price increments. A practical issue here is how the query will be specified: in the general case, an exponential number of prices needs to be sent in a single query. Formally, this is not a problem, as the model does not limit the length of queries in any way; the protocol must simply define what the prices are in terms of the replies received for previous queries. We look into this issue further in Section 4.3.

4. THE POWER OF DEMAND QUERIES

In this section, we study the power of iterative auctions that use demand queries (not necessarily ascending). We start by comparing demand queries to other types of queries. Then, we discuss how well one can approximate the optimal welfare using a polynomial number of demand queries. We also initiate the study of the representation of bundle-price demand queries, and finally, we show how demand queries help solve the linear-programming relaxation of combinatorial auctions in polynomial time.

4.1 The Power of Different Types of Queries

In this section we compare the power of the various types of queries defined in Section 2. We present computationally-efficient simulations of these query types using item-price demand queries. In Section 5.1 we show that these simulations can also be done using item-price ascending auctions.

LEMMA 4.2. A value query can be simulated by mt demand queries (where t = log L is the number of bits needed to represent a single bundle value) (Footnote 18).

As a direct corollary we get that demand auctions can always fully elicit the bidders' valuations, by simulating all possible 2^m − 1 value queries, and thus elicit enough information for determining the optimal allocation. Note, however, that this elicitation may be computationally inefficient. The next lemma shows that demand queries can be exponentially more powerful than value queries.

LEMMA 4.3. An exponential number of value queries may be required for simulating a single
demand query.

Indirect-utility queries are, however, equivalent in power to demand queries:

LEMMA 4.4. An indirect-utility query can be simulated by mt + 1 demand queries. A demand query can be simulated by m + 1 indirect-utility queries.

Demand queries can also simulate relative-demand queries (Footnote 19):

LEMMA 4.5. Relative-demand queries can be simulated by a polynomial number of demand queries.

According to our definition of relative-demand queries, they clearly cannot simulate even value queries. Figure 4 summarizes the relations between these query types.

Footnote 18: Note that t bundle-price demand queries can easily simulate a value query by setting the prices of all the bundles except S (the bundle with the unknown value) to be L, and performing a binary search on the price of S.
Footnote 19: Although in our model values are integral (or multiples of δ), we allow the query prices to be arbitrary real numbers; thus we may have bundles with arbitrarily close relative demands. In this sense the simulation above is only up to any given ε (and the number of queries is O(log L + log(1/ε))). When the relative-demand query prices are given as rational numbers, exact simulation is implied when log(1/ε) is linear in the input length.

Figure 4: Each entry in the table specifies how many queries of this row are needed to simulate a query from the relevant column. Abbreviations: V (value query), MV (marginal-value query), D (demand query), IU (indirect-utility query), RD (relative-demand query).

4.2 Approximating the Social Welfare with Value and Demand Queries

We know from [32] that iterative combinatorial auctions that only use a polynomial number of queries cannot find an optimal allocation among general valuations, and in fact cannot even approximate it to within a factor better than min{n, m^{1/2−ε}}. In this section we ask how well this approximation can be done using demand queries with item prices, or using the weaker value queries. We show that, using demand queries, the lower bound can be matched, while value queries can only do much worse.

Figure 5 describes a polynomial-time algorithm that achieves a min(n, O(√m)) approximation ratio. This algorithm greedily picks the bundles that maximize the bidders' per-item value (using relative-demand queries, see Section 4.1). As a final step, it allocates all the items to a single bidder if that improves the social welfare (this can be checked using value queries). Since both value queries and relative-demand queries can be simulated by a polynomial number of demand queries with item prices (Lemmas 4.2 and 4.5), this algorithm can be implemented by a polynomial number of demand queries with item prices (Footnote 20).

Footnote 20: In the full paper [8], we observe that this algorithm can be implemented by two descending item-price auctions (where we allow removing items along the auction).

THEOREM 4.6. The auction described in Figure 5 can be implemented by a polynomial number of demand queries and achieves a min{n, 4√m}-approximation for the social welfare.

We now ask how well the optimal welfare can be approximated by a polynomial number of value queries. First we note that value queries are not completely powerless: in [19] it is shown that if the m items are split into k fixed bundles of size m/k each, and these fixed bundles are auctioned as though each was indivisible, then the social welfare generated by such an auction is at least an m/k-approximation of that possible in the original auction. Notice that such an auction can be implemented by 2^k − 1 value queries to each bidder, querying the value of each combination of the fixed bundles. Thus, if we choose k = log m bundles we get an m/log m-approximation while still using a polynomial number of queries. The following lemma shows that not much more is possible using value queries:

LEMMA 4.7. Any iterative auction that uses only value queries and distinguishes between k-tuples of 0/1 valuations where the optimal allocation
has value 1, and those where the optimal allocation has value k requires at least 2 mk queries.\nPROOF.\nConsider the following family of valuations: for every S, such that | S |> m\/2, v (S) = 1, and there exists a single set T, such that for | S | m\/2, v (S) = 1 iff T S and v (S) = 0 otherwise.\nNow look at the behavior of the protocol when all valuations vi have T = {1...m}.\nClearly in this case the value of the best allocation is 1 since no set of size m2 or lower has non-zero value for any player.\nFix the sequence of queries and answers received on this k-tuple of valuations.\nNow consider the k-tuple of valuations chosen at random as follows: a partition of the m items into k sets T1...Tk each of size mk each is chosen uniformly at random among all such partitions.\nNow consider the k-tuple of valuations from our family that correspond to this partition--clearly Ti can be allocated to i, for each i, getting a total value of k.\nNow look at the protocol when running on these valuations and compare its behavior to the original case.\nNote that the answer to a query S to player i differs between the case of Ti and the original case of T = {1...m} only if | S | m2 and Ti S.\nSince Ti is distributed uniformly among all sets of size exactly mk, we have that for any fixed query S to player i, where | S | m2, \u201e | S | \"| Ti | 2 mk m Using the union-bound, if the original sequence of queries was of length less than 2mk, then with positive probability none of the queries in the sequence would receive a different answer than for the original input tuple.\nThis is forbidden since the protocol must distinguish between this case and the original case--which cannot happen if all queries receive the same answer.\nHence there must have been at least 2 mk queries for the original tuple of valuations.\nWe conclude that a polynomial time protocol that uses only value queries cannot obtain a better than O (m log m) approximation of the welfare:\nO (m\nlog m) - 
approximation for the social welfare.\nPROOF.\nImmediate from Lemma 4.7: achieving any approximation ratio k for which m/k is asymptotically greater than log m needs an exponential (2^{m/k}) number of value queries.\nAn Approximation Algorithm:\nInitialization: Let T ← M be the current items for sale.\nLet K ← N be the currently participating bidders.\nRepeat until K = ∅ or T = ∅: Ask each bidder i ∈ K for the bundle Si ⊆ T with the maximal per-item value (a relative-demand query), and let i ∈ K be a bidder maximizing vi(Si)/|Si|.\nSet: s*i = Si, K = K \ {i}, T = T \ Si.\nFinally: Ask the bidders for their values vi(M) for the grand bundle.\nIf allocating all the items to some bidder i improves the social welfare achieved so far (i.e., ∃i ∈ N such that vi(M) > Σi∈N vi(s*i)), then allocate all items to this bidder i. Figure 5: This algorithm achieves a min{n, 4√m}-approximation for the social welfare, which is asymptotically the best worst-case approximation possible with polynomial communication.\nThis algorithm can be implemented with a polynomial number of demand queries.\n4.3 The Representation of Bundle Prices\nIn this section we explicitly fix the language in which bundle prices are presented to the bidders in bundle-price auctions.\nThis language requires the algorithm to explicitly list the price of every bundle with a non-trivial price.\n\"Trivial\" in this context means a price that is equal to that of one of the bundle's proper subsets (which was listed explicitly).\nThis representation is equivalent to the XOR-language for expressing valuations.\nFormally, each query q is given by an expression: q = (S1: p1) (S2: p2)... (Sl: pl).\nIn this representation, the price demanded for every set S is simply p(S) = max{pk : k = 1...l, Sk ⊆ S}.\nDEFINITION 4.\nThe length of the query q = (S1: p1) (S2: p2)...
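The XOR pricing rule p(S) = max{p_k : S_k ⊆ S} can be sketched in a few lines of Python (illustrative names are ours; we also assume, as is natural for the XOR-language, that a set containing no listed bundle has price 0 -- the text only defines the maximum over listed subsets):

```python
def xor_price(query, bundle):
    """Price demanded for `bundle` under an XOR bundle-price query.
    `query` lists (S_k, p_k) pairs; p(S) = max{p_k : S_k subset of S}.
    Assumption: the price is 0 when no listed bundle is contained in S."""
    return max((p for s, p in query if s <= bundle), default=0)

q = [(frozenset("a"), 2), (frozenset("ab"), 5)]
xor_price(q, frozenset("abc"))  # -> 5
xor_price(q, frozenset("c"))    # -> 0
```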
(Sl: pl) is l.\nThe cost of an algorithm is the sum of the lengths of the queries asked during the operation of the algorithm on the worst-case input.\nNote that under this definition, bundle-price auctions are not necessarily stronger than item-price auctions.\nAn item-price query that prices each item at 1 is translated to an exponentially long bundle-price query that needs to specify the price |S| for each bundle S.\nBut perhaps bundle-price auctions can still find optimal allocations whenever item-price auctions can, without directly simulating such queries?\nWe show that this is not the case: indeed, when the representation length is taken into account, bundle-price auctions are sometimes seriously inferior to item-price auctions.\nConsider the following family of valuations: each item is valued at 3, except that for some single set S, its value is a bit more: 3|S| + b, where b ∈ {0, 1, 2}.\nNote that an item-price auction can easily find the optimal allocation between any two such valuations: set the price of each item to 3 + ε; if the demand sets of the two players are both empty, then b = 0 for both valuations, and an arbitrary allocation is fine.\nIf one of them is empty and the other non-empty, allocate the non-empty demand set to its bidder, and the rest to the other.\nIf both demand sets are non-empty then, unless they form an exact partition, we need to see which b is larger, which we can do by increasing the price of a single item in each demand set.\nWe will show that any bundle-price auction that uses only the XOR-language to describe bundle prices requires an exponential cost (which includes the sum of all description lengths of prices) to find an optimal allocation between any two such valuations.\nLEMMA 4.9.\nEvery bundle-price auction that uses XOR-expressions to denote bundle prices requires 2^{Ω(√m)} cost in order to find the optimal allocation among two valuations from the above family.\nThe complication in the proof stems from the fact that
using XOR-expressions, the length of the price description depends on the number of bundles whose price is strictly larger than that of each of their subsets--this may be significantly smaller than the number of bundles that have a non-zero price.\n(The proof becomes easy if we require the protocol to explicitly name every bundle with non-zero price.)\nWe do not know of any elementary proof for this lemma (although we believe that one can be found).\nInstead we reduce the problem to a well-known lower bound in boolean circuit complexity [18] stating that boolean circuits of depth 3 that compute the majority function on m variables require 2^{Ω(√m)} size.\n4.4 Demand Queries and Linear Programming\nConsider the following linear-programming relaxation for the generalized winner-determination problem in combinatorial auctions (the \"primal\" program): maximize Σi∈N ΣS⊆M wi · xi,S · vi(S) subject to ΣS⊆M xi,S ≤ 1 for every i ∈ N, Σi∈N ΣS:j∈S xi,S ≤ 1 for every j ∈ M, and xi,S ≥ 0 for every i ∈ N, S ⊆ M.\nNote that the primal program has an exponential number of variables.\nYet, we will be able to solve it in polynomial time using demand queries to the bidders.\nThe solution will have a polynomial-size support (non-zero values for xi,S), and thus we will be able to describe it in polynomial time.\nHere is its dual: minimize Σi∈N ui + Σj∈M pj subject to ui + Σj∈S pj ≥ wi · vi(S) for every i ∈ N, S ⊆ M, with pj ≥ 0, ui ≥ 0 for every j ∈ M, i ∈ N.\nNotice that the dual problem has exactly n + m variables but an exponential number of constraints.\nThus, the dual can be solved using the Ellipsoid method in polynomial time--if a \"separation oracle\" can be implemented in polynomial time.\nRecall that a separation oracle, when given a possible solution, either confirms that it is a feasible solution, or responds with a constraint that is violated by the possible solution.\nWe construct a separation oracle for solving the dual program, using a single demand query to each of the bidders.\nConsider a possible solution (u, p) for the dual program.\nWe can re-write the constraints of the dual program as: ui/wi ≥ vi(S) − Σj∈S pj/wi for every i ∈ N, S ⊆ M.\nNow a demand query to bidder i with prices pj/wi reveals exactly the set S that maximizes the RHS of the previous inequality.\nThus, in
order to check whether (u, p) is feasible it suffices to (1) query each bidder i for his demand Di under the prices pj/wi; and (2) check only the n constraints ui + Σj∈Di pj ≥ wi · vi(Di) (where vi(Di) can be simulated using a polynomial sequence of demand queries, as shown in Lemma 4.2).\nIf none of these is violated then we are assured that (u, p) is feasible; otherwise we get a violated constraint.\nWhat is left to be shown is how the primal program can be solved.\n(Recall that the primal program has an exponential number of variables.)\nSince the Ellipsoid algorithm runs in polynomial time, it encounters only a polynomial number of constraints during its operation.\nClearly, if all other constraints were removed from the dual program, it would still have the same solution (adding constraints can only decrease the space of feasible solutions).\nNow take the \"reduced dual\" in which only the encountered constraints exist, and look at its dual.\nIt will have the same solution as the original dual, and hence as the original primal.\nHowever, look at the form of this \"dual of the reduced dual\".\nIt is just a version of the primal program with a polynomial number of variables--those corresponding to constraints that remained in the reduced dual.\nThus, it can be solved in polynomial time, and this solution clearly solves the original primal program, setting all other variables to zero.\n5.\nITEM-PRICE ASCENDING AUCTIONS\nIn this section we characterize the power of ascending item-price auctions.\nWe first show that this power is not trivial: such auctions can in general elicit an exponential amount of information.\nOn the other hand, we show that the optimal allocation cannot always be determined by a single ascending auction, nor, in some cases, by any sub-exponential number of ascending-price trajectories.\nFinally, we separate the power of different models of ascending auctions.\n5.1 The Power of Item-Price Ascending Auctions\nWe first show that if small enough increments
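The separation oracle for the dual can be sketched as follows. This is an illustrative Python sketch with all names ours and brute-force subset enumeration standing in for the bidders' demand oracles: one demand query per bidder at prices p_j/w_i either certifies feasibility of (u, p) or returns a violated dual constraint.

```python
from itertools import chain, combinations

def subsets(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def demand(v, prices):
    """Demand query: a bundle maximizing v(S) - sum of prices.
    (Brute force over all bundles; a stand-in for the bidder's oracle.)"""
    return max(subsets(prices), key=lambda S: v(frozenset(S)) - sum(prices[j] for j in S))

def separation_oracle(u, p, v, w):
    """Given a candidate dual solution (u, p), return a violated constraint
    (i, D) or None if (u, p) is feasible.  For bidder i, the demand query at
    prices p_j / w_i finds the bundle maximizing w_i * v_i(S) - sum_{j in S} p_j,
    so checking that single constraint per bidder suffices."""
    for i, (vi, wi) in enumerate(zip(v, w)):
        D = frozenset(demand(vi, {j: pj / wi for j, pj in p.items()}))
        if u[i] + sum(p[j] for j in D) < wi * vi(D):
            return (i, D)  # violated: u_i + sum_{j in D} p_j >= w_i v_i(D)
    return None
```

For example, with an additive bidder valuing each of two items at 3 and a second bidder valuing any non-empty bundle at 1, the candidate (u, p) = ((0, 0), {0: 2, 1: 2}) is rejected with the first bidder's grand-bundle constraint, while ((2, 0), p) is feasible.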
are allowed, a single ascending trajectory of item-prices can elicit preferences that cannot be elicited with polynomial communication.\nAs mentioned, all our hardness results hold for any increment, even infinitesimal.\nTHEOREM 5.1.\nSome classes of valuations can be elicited by item-price ascending auctions, but cannot be elicited by a polynomial number of queries of any kind.\nPROOF.\n(sketch) Consider two bidders with v(S) = 1 if |S| > m/2, v(S) = 0 if |S| < m/2, and every S such that |S| = m/2 has an unknown value from {0, 1}.\nDue to [32], determining the optimal allocation here requires exponential communication in the worst case.\nNevertheless, we show (see [9]) that an item-price ascending auction can do it, as long as it can use exponentially small increments.\nWe now describe another positive result for the power of item-price ascending auctions.\nIn Section 4.1, we showed that a value query can be simulated with a (truly) polynomial number of item-price demand queries.\nHere, we show that every value query can be simulated by a (pseudo) polynomial number of ascending item-price demand queries.\n(In the next subsection, we show that we cannot always simulate even a pair of value queries using a single item-price ascending auction.)\nIn the full paper (part II, [9]), we show that we can simulate other types of queries using item-price ascending auctions.\nFigure 6: No item-price ascending auction can determine the optimal allocation for this class of valuations.\nPROPOSITION 5.2.\nA value query can be simulated by an item-price ascending auction.\nThis simulation requires a polynomial number of queries.\nActually, the proof of Proposition 5.2 proves a stronger useful result regarding the information elicited by iterative auctions.\nIt says that in any iterative auction in which the changes of prices are small enough in each stage (\"pseudo-continuous\" auctions), the value of all bundles demanded during the auction can be computed.\nThe basic idea is that when
the bidder moves from demanding some bundle Ti to demanding another bundle Ti+1, there is a point at which she is indifferent between these two bundles.\nThus, knowing the value of some demanded bundle (e.g., the empty set) enables computing the values of all other demanded bundles.\nWe say that an auction is \"pseudo-continuous\" if it only uses demand queries, and in each step, the price of at most one item is changed by ε (for some ε ∈ (0, δ]) with respect to the previous query.\nPROPOSITION 5.3.\nConsider any pseudo-continuous auction (not necessarily ascending), in which bidder i demands the empty set at least once along the auction.\nThen, the value of every bundle demanded by bidder i throughout the auction can be calculated at the end of the auction.\n5.2 Limitations of Item-Price Ascending Auctions\nAlthough we observed that demand queries can solve any combinatorial auction problem, when the queries are restricted to be ascending, some classes of valuations can be neither elicited nor fully elicited.\nAn example of such a class of valuations is given in Figure 6.\nTHEOREM 5.4.\nThere are classes of valuations that cannot be elicited nor fully elicited by any item-price ascending auction.\nPROOF.\nLet bidder 1 have the valuation described in the first row of Figure 6, where α and β are unknown values in (0, 1).\nFirst, we prove that this class cannot be fully elicited by a single ascending auction.\nSpecifically, an ascending auction cannot reveal the values of both α and β.\nAs long as pa and pb are both below 1, the bidder will always demand the whole bundle ab: her utility from ab is strictly greater than the utility from either a or b separately.\nFor example, we show that u1(ab) > u1(a): u1(ab) = 2 − (pa + pb) = (1 − pa) + (1 − pb) > (v1(a) − pa) + (1 − pb) > v1(a) − pa = u1(a).\nThus, in order to gain any information about α or β, the price of one of the items should become at least 1, w.l.o.g.
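The computation behind Proposition 5.3 can be sketched as follows (an illustrative sketch under our own naming; it treats the switch points as exact indifference points, so recovered values are only accurate up to O(ε)):

```python
def values_from_trajectory(history, base=frozenset(), base_value=0.0):
    """Recover values of demanded bundles from a pseudo-continuous auction.

    `history` is a list of (prices, demanded_bundle) pairs in which consecutive
    queries change at most one item's price by at most epsilon.  When the
    demanded bundle switches from T to T', the bidder is (nearly) indifferent,
    so v(T) - p(T) = v(T') - p(T') at the switch, up to O(epsilon).
    Starting from one bundle of known value (e.g. the empty set), the
    indifference equations chain to value every demanded bundle."""
    def cost(prices, S):
        return sum(prices[j] for j in S)

    values = {base: base_value}
    changed = True
    while changed:  # sweep until no new values can be inferred
        changed = False
        for (_, T1), (p2, T2) in zip(history, history[1:]):
            if T1 == T2:
                continue
            for A, B in ((T1, T2), (T2, T1)):
                if A in values and B not in values:
                    values[B] = values[A] - cost(p2, A) + cost(p2, B)
                    changed = True
    return values

# Item "a" priced upward; the bidder stops demanding {a} once the price hits 3:
h = [({"a": 2.9}, frozenset("a")), ({"a": 3.0}, frozenset())]
values_from_trajectory(h)  # -> {frozenset(): 0.0, frozenset({'a'}): 3.0}
```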
pa ≥ 1.\nBut then, the bundle a will not be demanded by bidder 1 throughout the auction, thus no information at all will be gained about α.\nNow, assume that bidder 2 is known to have the valuation described in the second row of Figure 6.\nThe optimal allocation depends on whether α is greater than β (in bidder 1's valuation), and we proved that an ascending auction cannot determine this.\nThe proof of the theorem above shows that for an unknown value to be revealed, the price of one item should be greater than 1, and the other price should be smaller than 1.\nTherefore, in a price-monotonic trajectory of prices, only one of these values can be revealed.\nAn immediate conclusion is that this impossibility result also holds for item-price descending auctions.\nSince no such trajectory exists, the same conclusion holds even for non-deterministic item-price auctions (in which exogenous data tells us how to increase the prices).\nAlso note that since the hardness stems from the impossibility of fully eliciting a valuation of a single bidder, this result also holds for non-anonymous ascending item-price auctions.\n5.3 Limitations of Multi-Trajectory Ascending Auctions\nAccording to Theorem 5.4, no ascending item-price auction can always elicit the preferences (we prove a similar result for bundle prices in Section 6).\nBut can two ascending trajectories do the job?\nOr a polynomial number of ascending trajectories?\nWe give negative answers to such suggestions.\nWe define a k-trajectory ascending auction as a demand-query iterative auction in which the demand queries can be partitioned into k sets of queries, where the prices published in each set only increase in time.\nNote that we use a general definition; it allows the trajectories to run in parallel or sequentially, and to use information elicited in some trajectories for determining the future queries in other trajectories.\nThe power of multiple-trajectory auctions can be demonstrated by the negative result of Gul
and Stacchetti [17], who showed that even for an auction among substitutes valuations, an anonymous ascending item-price auction cannot compute VCG prices for all players.21 Ausubel [4] overcame this impossibility result and designed auctions that do compute VCG prices by organizing the auction as a sequence of n + 1 ascending auctions.\nHere, we prove that one cannot elicit XOR valuations with k terms by fewer than k − 1 ascending trajectories.\nOn the other hand, we show that an XOR formula can be fully elicited by k − 1 non-deterministic ascending auctions (or by k − 1 deterministic ascending auctions if the auctioneer knows the atomic bundles).22 21A recent unpublished paper by Mishra and Parkes extends this result, and shows that non-anonymous bundle prices are necessary in order that an ascending auction end up with a \"universal-competitive-equilibrium\" (which leads to VCG payments).\n22This result actually separates the power of deterministic\nPROPOSITION 5.5.\nXOR valuations with k terms cannot be elicited (or fully elicited) by any (k−2)-trajectory item-price ascending auction, even when the atomic bundles are known to the elicitor.\nHowever, these valuations can be elicited (and fully elicited) by (k−1)-trajectory non-deterministic non-anonymous item-price ascending auctions.\nMoreover, an exponential number of trajectories is required for eliciting some classes of valuations: PROPOSITION 5.6.\nElicitation and full elicitation of some classes of valuations cannot be done by any k-trajectory item-price ascending auction, where k = o(2^m).\nPROOF.\n(sketch) Consider the following class of valuations: for |S| < m/2, v(S) = 0 and for |S| > m/2, v(S) = 2; every bundle S of size m/2 has some unknown value in (0, 1).\nWe show ([9]) that a single item-price ascending auction can reveal the value of at most one bundle of size m/2, and therefore an exponential number of ascending trajectories is needed in order to elicit such valuations.\nWe observe
that the algorithm we presented in Section 4.2 can be implemented by a polynomial number of ascending auctions (each item-price demand query can be considered as a separate ascending auction), and therefore a min{n, 4√m}-approximation can be achieved by a polynomial number of ascending auctions.\nWe do not currently have a better upper bound or any lower bound.\n5.4 Separating the Various Models of Ascending Auctions\nVarious models for ascending auctions have been suggested in the literature.\nIn this section, we compare the power of the different models.\nAs mentioned, all auctions are considered anonymous and deterministic, unless specified otherwise.\nAscending vs. Descending Auctions: We begin the discussion of the relation between ascending auctions and descending auctions with an example.\nThe algorithm by Lehmann, Lehmann and Nisan [25] can be implemented by a simple item-price descending auction (see the full paper for details [9]).\nThis algorithm guarantees at least half of the optimal efficiency for submodular valuations.\nHowever, we are not familiar with any ascending auction that guarantees a similar fraction of the efficiency.\nThis raises a more general question: can ascending auctions solve any combinatorial-auction problem that is solvable using a descending auction (and vice versa)?\nWe give negative answers to these questions.\nThe idea behind the proofs is that the information that the auctioneer can get \"for free\" at the beginning of each type of auction is different.23 and non-deterministic iterative auctions: our proof shows that a non-deterministic iterative auction can elicit the k-term XOR valuations with a polynomial number of demand queries, and [7] show that this elicitation must take an exponential number of demand queries.\n23In ascending auctions, the auctioneer can reveal the most valuable bundle (besides M) before she starts raising the prices, thus she can use this information for adaptively choosing the subsequent
queries.\nIn descending auctions, one can easily find the bundle with the highest average per-item price, keeping all other bundles with non-positive utilities, and use this information in the adaptive price change.\nPROPOSITION 5.7.\nThere are classes that cannot be elicited (fully elicited) by ascending item-price auctions, but can be elicited (resp.\nfully elicited) by a descending item-price auction.\nPROPOSITION 5.8.\nThere are classes that cannot be elicited (fully elicited) by item-price descending auctions, but can be elicited (resp.\nfully elicited) by item-price ascending auctions.\nDeterministic vs. Non-Deterministic Auctions: Non-deterministic ascending auctions can be viewed as auctions where some benevolent teacher with complete information guides the auctioneer on how she should raise the prices.\nThat is, preference elicitation can be done by a non-deterministic ascending auction if there is some ascending trajectory that elicits enough information for determining the optimal allocation (and verifying that it is indeed optimal).\nWe show that non-deterministic ascending auctions are more powerful than deterministic ascending auctions: PROPOSITION 5.9.\nSome classes can be elicited (fully elicited) by an item-price non-deterministic ascending auction, but cannot be elicited (resp.\nfully elicited) by item-price deterministic ascending auctions.\nAnonymous vs. Non-Anonymous Auctions: As will be shown in Section 6, the power of anonymous and non-anonymous bundle-price ascending auctions differs significantly.\nHere, we show that a difference also exists for item-price ascending auctions.\nPROPOSITION 5.10.\nSome classes cannot be elicited by anonymous item-price ascending auctions, but can be elicited by a non-anonymous item-price ascending auction.\nSequential vs.
Simultaneous Auctions: A non-anonymous auction is called simultaneous if at each stage, the price of some item is raised by ε for every bidder.\nThe auctioneer can use the information gathered until each stage, in all the personalized trajectories, to determine the next queries.\nA non-anonymous auction is called sequential if the auctioneer performs an auction for each bidder separately, in sequential order.\nThe auctioneer can determine the next query based on the information gathered in the trajectories completed so far and on the history of the current trajectory.\nAdaptive vs. Oblivious Auctions: If the auctioneer determines the queries regardless of the bidders' responses (i.e., the queries are predefined), we say that the auction is oblivious.\nOtherwise, the auction is adaptive.\nWe prove that an adaptive behaviour of the auctioneer may be beneficial.\nPROPOSITION 5.12.\nThere are classes that cannot be elicited (fully elicited) using oblivious item-price ascending auctions, but can be elicited (resp.\nfully elicited) by an adaptive item-price ascending auction.\n5.5 Preference Elicitation vs.
Full Elicitation\nPreference elicitation and full elicitation are closely related problems.\nIf full elicitation is \"easy\" (e.g., possible in polynomial time) then clearly elicitation is also easy (by a non-anonymous auction, simply by learning all the valuations separately24).\nOn the other hand, there are examples where preference elicitation is considered \"easy\" but learning is hard (typically, elicitation requires a smaller amount of information; some examples can be found in [7]).\nThe tatonnement algorithms by [22, 12, 16] end up with the optimal allocation for substitutes valuations.25 We prove that we cannot fully elicit substitutes valuations (or even their sub-class of OXS valuations defined in [25]), even for a single bidder, by an item-price ascending auction (although the optimal allocation can be found by an ascending auction for any number of bidders!).\nTHEOREM 5.13.\nSubstitutes valuations cannot be fully elicited by ascending item-price auctions.\nMoreover, they cannot be fully elicited by any m/2 ascending trajectories (m ≥ 3).\nWhether substitutes valuations have a compact representation (i.e., polynomial in the number of goods) is an important open question.\nAs a step in this direction, we show that their sub-class of OXS valuations does have a compact representation: every OXS valuation can be represented by at most m² values.26 LEMMA 5.14.\nAny OXS valuation can be represented by no more than m² values.\n6.\nBUNDLE-PRICE ASCENDING AUCTIONS\nAll the ascending auctions in the literature that are proved to find the optimal allocation for unrestricted valuations are non-anonymous bundle-price auctions (iBundle(3) by Parkes and Ungar [37] and the \"Proxy Auction\" by Ausubel and Milgrom [3]).\nYet, several anonymous ascending auctions have been suggested (e.g., AkBA [42], [21] and iBundle(2) [37]).\nIn this section, we prove that anonymous bundle-price ascending auctions achieve poor results in the worst case.\nWe also show that the family of
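The representation behind Lemma 5.14 stores, for each of at most m underlying unit-demand bidders, one value per item (at most m² numbers, with OXS understood as in footnote 26 as an \"OR\" of unit-demand bidders); evaluating the valuation on a bundle is then a maximum-weight matching. A brute-force illustrative sketch (names are ours):

```python
from itertools import permutations

def oxs_value(weights, bundle):
    """Value of `bundle` under an OXS valuation stored as an n x m table:
    weights[b][j] is unit-demand bidder b's value for item j (at most m^2
    numbers when n <= m).  The bundle's value is the maximum weight of a
    matching of unit-demand bidders to distinct items of the bundle
    (brute force over partial matchings; fine for small instances)."""
    items = sorted(bundle)
    best = 0
    for r in range(min(len(weights), len(items)) + 1):
        for bidders in permutations(range(len(weights)), r):
            for assigned in permutations(items, r):
                best = max(best, sum(weights[b][j] for b, j in zip(bidders, assigned)))
    return best

# Two unit-demand bidders over items {0, 1}:
w = [[3, 0], [2, 2]]
oxs_value(w, {0, 1})  # -> 5 (bidder 0 takes item 0, bidder 1 takes item 1)
```

For larger instances one would replace the brute force with a polynomial max-weight bipartite matching algorithm; the point here is only that m² numbers determine the whole valuation.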
non-anonymous bundle-price ascending auctions can run exponentially slower than simple item-price ascending auctions.\n6.1 Limitations of Anonymous Bundle-Price Ascending Auctions\nWe present a class of valuations that cannot be elicited by anonymous bundle-price ascending auctions.\nThese valuations are described in Figure 7.\nThe basic idea: for determining some unknown value of one bidder, we must raise a price of a bundle that should be demanded by the other bidder in the future.\n24Note that an anonymous ascending auction cannot necessarily elicit a class that can be fully elicited by an ascending auction.\n25Substitutes valuations are defined, e.g., in [16].\nRoughly speaking, a bidder with a substitutes valuation will continue to demand a certain item after the price of some other items has increased.\nFor completeness, we present in the full paper [9] a proof of the efficiency of such auctions for substitutes valuations.\n26A unit-demand valuation is an XOR valuation in which all the atomic bundles are singletons.\nOXS valuations can be interpreted as an aggregation (\"OR\") of any number of unit-demand bidders.\nFigure 7: Anonymous ascending bundle-price auctions cannot determine the optimal allocation for this class of valuations.\nTHEOREM 6.1.\nSome classes of valuations cannot be elicited by anonymous bundle-price ascending auctions.\nPROOF.\nConsider a pair of XOR valuations as described in Figure 7.\nFor finding the optimal allocation we must know which value is greater, α or β.27 However, we cannot learn the values of both α and β by a single ascending trajectory: assume w.l.o.g.
that bidder 1 demands cd before bidder 2 demands bd (no information will be elicited if neither of these happens).\nIn this case, the price of bd must be greater than 1 (otherwise, bidder 1 prefers bd to cd).\nThus, bidder 2 will never demand the bundle bd, and no information will be elicited about β.\nThe valuations described in the proof of Theorem 6.1 can be easily elicited by a non-anonymous item-price ascending auction.\nOn the other hand, the valuations in Figure 6 can be easily elicited by an anonymous bundle-price ascending auction.\nWe conclude that the power of these two families of ascending auctions is incomparable.\nWe strengthen the impossibility result above by showing that anonymous bundle-price auctions cannot even achieve better than a min{O(n), O(√m)}-approximation for the social welfare.\nThis approximation ratio can be achieved with polynomial communication, and specifically with a polynomial number of item-price demand queries.28\nTHEOREM 6.2.\nAn anonymous bundle-price ascending auction cannot guarantee better than a min{n/2, √m/2}-approximation for the optimal welfare.\nPROOF.\n(Sketch) Assume we have n bidders and n² items for sale, and that n is prime.\nWe construct n² distinct bundles with the following properties: for each bidder i, we define a partition Si = (Si1, ..., Sin) of the n² items into n bundles, such that any two bundles from different partitions intersect.\nIn the full paper, part II [9], we show an explicit construction using the properties of linear functions over finite fields.\nThe rest of the proof is independent of the specific construction.\nUsing these n² bundles we construct a \"hard-to-elicit\" class.\nEvery bidder has an atomic bid, in his XOR valuation, for each of these n² bundles.\nBidder i has a value of 2 for every bundle Sij in his own partition.\nFor all bundles in the other partitions, he has a value of either 0 or 1 − δ, and these values are unknown to the auctioneer.\nSince every pair of bundles
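For concreteness, here is one explicit construction with the stated properties (an illustrative sketch; not necessarily the construction in [9]): identify the n² items with the points of F_n × F_n and let partition i consist of the n lines of slope i. Lines of distinct slopes meet in exactly one point, so any two bundles taken from different partitions intersect.

```python
def intersecting_partitions(n):
    """For prime n: partition the n*n points of F_n x F_n once per slope i,
    partition i being the n parallel lines {(x, (i*x + c) % n) : x in F_n}
    for c = 0..n-1.  Two lines of distinct slopes i != j meet in exactly one
    point (solve (i - j) * x = c' - c in F_n), so any two bundles taken from
    different partitions intersect."""
    return [[frozenset((x, (i * x + c) % n) for x in range(n)) for c in range(n)]
            for i in range(n)]

parts = intersecting_partitions(3)
all(a & b                       # every cross-partition pair of bundles meets
    for i, Pi in enumerate(parts) for j, Pj in enumerate(parts) if i != j
    for a in Pi for b in Pj)    # -> True
```

This yields n partitions (one per bidder) of the n² items into n bundles of size n each, i.e. exactly the n² distinct bundles the proof needs; primality of n guarantees that i − j is invertible mod n.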
from different partitions intersect, only one bidder can receive a bundle with a value of 2.\n27If α > β, the optimal allocation will allocate cd to bidder 1 and ab to bidder 2.\nOtherwise, we give bd to bidder 2 and ac to bidder 1.\nNote that both bidders cannot gain a value of 2 in the same allocation, due to the intersections of the high-valued bundles.\n28Note that bundle-price queries may use exponential communication, thus the lower bound of [32] does not hold.\nNon-anonymous Bundle-Price Economically-Efficient Ascending Auctions:\nInitialization: All prices are initialized to zero (non-anonymous bundle prices).\nRepeat: - Each bidder submits a bundle that maximizes his utility under his current personalized prices.\n- The auctioneer calculates a provisional allocation that maximizes his revenue under the current prices.\n- The prices of bundles that were demanded by losing bidders are increased by ε.\nFinally: Terminate when the provisional allocation assigns to each bidder the bundle he demanded.\nFigure 8: Auctions from this family (denoted NBEA auctions) are known to achieve the optimal welfare.\nNo bidder will demand a low-valued bundle as long as the price of one of his high-valued bundles is below 1 (and thus gains him a utility greater than 1).\nTherefore, for eliciting any information about the low-valued bundles, the auctioneer should first arbitrarily choose a bidder (w.l.o.g. bidder 1) and raise the prices of all the bundles (S11, ..., S1n) to be greater than 1.\nSince the prices cannot decrease, the other bidders will clearly never demand these bundles in future stages.\nAn adversary may choose the values such that the low values of all the bidders for the bundles not in bidder 1's partition are zero (i.e., vi(Slj) = 0 for every l ≠ 1 and every j); however, allocating each bidder a different bundle from bidder 1's partition might achieve a welfare of n + 1 − (n − 1)δ (bidder 1's valuation is 2, and 1 − δ for all other bidders);
If these bundles were wrongly allocated, only a welfare of 2 might be achieved (2 for bidder 1's high-valued bundle, 0 for all other bidders).\nAt this point, the auctioneer cannot have any information about the identity of the bundles with the non-zero values.\nTherefore, an adversary can choose the values of the bundles received by bidders 2, ..., n in the final allocation to be zero.\nWe conclude that anonymous bundle-price auctions cannot guarantee a welfare greater than 2 for this class, where the optimal welfare can be arbitrarily close to n + 1.\n6.2 Bundle Prices vs. Item Prices\nThe core of the auctions in [37, 3] is the scheme described in Figure 8 (in the spirit of [35]) for auctions with non-anonymous bundle prices.\nAuctions from this scheme end up with the optimal allocation for any class of valuations.\nWe denote this family of ascending auctions as NBEA auctions.29 NBEA auctions can elicit k-term XOR valuations by a polynomial (in k) number of steps, although the elicitation of such valuations may require an exponential number of item-price queries ([7]), and item-price ascending auctions cannot do it at all (Theorem 5.4).\nNevertheless, we show that NBEA auctions (and in particular, iBundle(3) and the \"proxy\" auction) are sometimes inferior to simple item-price demand auctions.\nThis may justify the use of hybrid auctions that use both linear and non-linear prices (e.g., the clock-proxy auction [10]).\nWe show that auctions from this family may use an exponential number of queries even for determining the optimal allocation among two bidders with additive valuations,30 where such valuations can be elicited by a simple item-price ascending auction.\n29Non-anonymous Bundle-price Economically-efficient Ascending auctions.\nFor completeness, we give in the full paper [9] a simple proof of the efficiency (up to an ε) of auctions of this scheme.\nWe actually prove this property for a wider class of auctions we call conservative auctions.\nWe also
observe that in conservative auctions, allowing the bidders to submit all the bundles in their demand sets ensures that the auction runs a polynomial number of steps--if L is not too high (but with exponential communication, of course).\nAn ascending auction is called conservative if it is non-anonymous, uses bundle prices initialized to zero, and at every stage the auctioneer can only raise prices of bundles demanded by the bidders up to this stage.\nIn addition, each bidder can only receive bundles he demanded during the auction.\nNote that NBEA auctions are by definition conservative.\nPROPOSITION 6.3.\nIf every bidder demands a single bundle in each step of the auction, conservative auctions may run for an exponential number of steps even for additive valuations.\nIf the bidders are allowed to submit all the bundles in their demand sets in each step, then conservative auctions can run in a polynomial number of steps for any profile of valuations, as long as the maximal valuation L is polynomial in m, n and 1/ε.\nAcknowledgments:\nThe authors thank Moshe Babaioff, Shahar Dobzinski, Ron Lavi, Daniel Lehmann, Ahuva Mu'alem, David Parkes, Michael Schapira and Ilya Segal for helpful discussions.\nSupported by grants from the Israeli Academy of Sciences and the US-Israel Binational Science Foundation.","keyphrases":["combinatori auction","price","bidder","demand queri","polynomi demand","ascend-price auction","bound","approxim factor","optim alloc","prefer elicit","ascend auction","commun complex"],"prmu":["P","P","P","P","R","M","U","U","U","U","M","U"]} {"id":"C-62","title":"Network Monitors and Contracting Systems: Competition and Innovation","abstract":"Today's Internet industry suffers from several well-known pathologies, but none is as destructive in the long term as its resistance to evolution.\nRather than introducing new services, ISPs are presently moving towards greater commoditization. 
It is apparent that the network's primitive system of contracts does not align incentives properly. In this study, we identify the network's lack of accountability as a fundamental obstacle to correcting this problem: Employing an economic model, we argue that optimal routes and innovation are impossible unless new monitoring capability is introduced and incorporated with the contracting system. Furthermore, we derive the minimum requirements a monitoring system must meet to support first-best routing and innovation characteristics. Our work does not constitute a new protocol; rather, we provide practical and specific guidance for the design of monitoring systems, as well as a theoretical framework to explore the factors that influence innovation.","lvl-1":"Network Monitors and Contracting Systems: Competition and Innovation Paul Laskowski John Chuang UC Berkeley {paul,chuang}@sims.berkeley.edu ABSTRACT Today's Internet industry suffers from several well-known pathologies, but none is as destructive in the long term as its resistance to evolution.\nRather than introducing new services, ISPs are presently moving towards greater commoditization.\nIt is apparent that the network's primitive system of contracts does not align incentives properly.\nIn this study, we identify the network's lack of accountability as a fundamental obstacle to correcting this problem: Employing an economic model, we argue that optimal routes and innovation are impossible unless new monitoring capability is introduced and incorporated with the contracting system.\nFurthermore, we derive the minimum requirements a monitoring system must meet to support first-best routing and innovation characteristics.\nOur work does not constitute a new protocol; rather, we provide practical and specific guidance for the design of monitoring systems, as well as a theoretical framework to explore the factors that influence innovation.\nCategories and Subject Descriptors C.2.4 [Computer-Communication
Networks]: Distributed Systems; J.4 [Social And Behavioral Sciences]: Economics General Terms Economics, Theory, Measurement, Design, Legal Aspects.\n1.\nINTRODUCTION Many studies before us have noted the Internet's resistance to new services and evolution.\nIn recent decades, numerous ideas have been developed in universities, implemented in code, and even written into the routers and end systems of the network, only to languish as network operators fail to turn them on on a large scale.\nThe list includes Multicast, IPv6, IntServ, and DiffServ.\nLacking the incentives just to activate services, there seems to be little hope of ISPs devoting adequate resources to developing new ideas.\nIn the long term, this pathology stands out as a critical obstacle to the network's continued success (Ratnasamy, Shenker, and McCanne provide extensive discussion in [11]).\nOn a smaller time scale, ISPs shun new services in favor of cost-cutting measures.\nThus, the network has characteristics of a commodity market.\nAlthough in theory, ISPs have a plethora of routing policies at their disposal, the prevailing strategy is to route in the cheapest way possible [2].\nOn one hand, this leads directly to suboptimal routing.\nMore importantly, commoditization in the short term is surely related to the lack of innovation in the long term.\nWhen the routing decisions of others ignore quality characteristics, ISPs are motivated only to lower costs.\nThere is simply no reward for introducing new services or investing in quality improvements.\nIn response to these pathologies and others, researchers have put forth various proposals for improving the situation.\nThese can be divided according to three high-level strategies: The first attempts to improve the status quo by empowering end-users.\nClark, et al., suggest that giving end-users control over routing would lead to greater service diversity, recognizing that some payment mechanism must also be provided [5].\nRatnasamy, Shenker, and 
McCanne postulate a link between network evolution and user-directed routing [11].\nThey propose a system of Anycast to give end-users the ability to tunnel their packets to an ISP that introduces a desirable protocol.\nThe extra traffic to the ISP, the authors suggest, will motivate the initial investment.\nThe second strategy suggests a revision of the contracting system.\nThis is exemplified by MacKie-Mason and Varian, who propose a smart market to control access to network resources [10].\nPrices are set to the market-clearing level based on bids that users associate to their traffic.\nIn another direction, Afergan and Wroclawski suggest that prices should be explicitly encoded in the routing protocols [2].\nThey argue that such a move would improve stability and align incentives.\nThe third high-level strategy calls for greater network accountability.\nIn this vein, Argyraki, et al., propose a system of packet obituaries to provide feedback as to which ISPs drop packets [3].\nThey argue that such feedback would help reveal which ISPs were adequately meeting their contractual obligations.\nUnlike the first two strategies, we are not aware of any previous studies that have connected accountability with the pathologies of commoditization or lack of innovation.\nIt is clear that these three strategies are closely linked to each other (for example, [2], [5], and [9] each argue that giving end-users routing control within the current contracting system is problematic).\nUntil today, however, the relationship between them has been poorly understood.\nThere is currently little theoretical foundation to compare the relative merits of each proposal, and a particular lack of evidence linking accountability with innovation and service differentiation.\nThis paper will address both issues.\nWe will begin by introducing an economic network model that relates accountability, contracts, competition, and innovation.\nOur model is highly stylized and may be considered 
preliminary: it is based on a single source sending data to a single destination.\nNevertheless, the structure is rich enough to expose previously unseen features of network behavior.\nWe will use our model for two main purposes: First, we will use our model to argue that the lack of accountability in today's network is a fundamental obstacle to overcoming the pathologies of commoditization and lack of innovation.\nIn other words, unless new monitoring capabilities are introduced, and integrated with the system of contracts, the network cannot achieve optimal routing and innovation characteristics.\nThis result provides motivation for the remainder of the paper, in which we explore how accountability can be leveraged to overcome these pathologies and create a sustainable industry.\nWe will approach this problem from a clean-slate perspective, deriving the level of accountability needed to sustain an ideal competitive structure.\nWhen we say that today's Internet has poor accountability, we mean that it reveals little information about the behavior - or misbehavior - of ISPs.\nThis well-known trait is largely rooted in the network's history.\nIn describing the design philosophy behind the Internet protocols, Clark lists accountability as the least important among seven second-level goals [4].\nAccordingly, accountability received little attention during the network's formative years.\nClark relates this to the network's military context, and finds that had the network been designed for commercial development, accountability would have been a top priority.\nArgyraki, et al., conjecture that applying the principles of layering and transparency may have led to the network's lack of accountability [3].\nAccording to these principles, end hosts should be informed of network problems only to the extent that they are required to adapt.\nThey notice when packet drops occur so that they can perform congestion control and retransmit packets.\nDetails of where and why 
drops occur are deliberately concealed.\nThe network's lack of accountability is highly relevant to a discussion of innovation because it constrains the system of contracts.\nThis is because contracts depend upon external institutions to function - the judge in the language of incomplete contract theory, or simply the legal system.\nUltimately, if a judge cannot verify that some condition holds, she cannot enforce a contract based on that condition.\nOf course, the vast majority of contracts never end up in court.\nEspecially when a judge's ruling is easily predicted, the parties will typically comply with the contract terms of their own volition.\nThis would not be possible, however, without the judge acting as a last resort.\nAn institution to support contracts is typically complex, but we abstract it as follows: We imagine that a contract is an algorithm that outputs a payment transfer among a set of ISPs (the parties) at every time.\nThis payment is a function of the past and present behaviors of the participants, but only those that are verifiable.\nHence, we imagine that a contract only accepts proofs as inputs.\nWe will call any process that generates these proofs a contractible monitor.\nSuch a monitor includes metering or sensing devices on the physical network, but it is a more general concept.\nConstructing a proof of a particular behavior may require readings from various devices distributed among many ISPs.\nThe contractible monitor includes whatever distributed algorithmic mechanism is used to motivate ISPs to share this private information.\nFigure 1 demonstrates how our model of contracts fits together.\nWe make the assumption that all payments are mediated by contracts.\nThis means that without contractible monitors that attest to, say, latency, payments cannot be conditioned on latency.\nFigure 1: Relationship between monitors and contracts\nWith this model, we may conclude that the level of accountability in today's Internet only permits best 
effort contracts.\nNodes cannot condition payments on either quality or path characteristics.\nIs there anything wrong with best-effort contracts?\nThe reader might wonder why the Internet needs contracts at all.\nAfter all, in non-network industries, traditional firms invest in research and differentiate their products, all in the hopes of keeping their customers and securing new ones.\nOne might believe that such market forces apply to ISPs as well.\nWe may adopt this as our null hypothesis: Null hypothesis: Market forces are sufficient to maintain service diversity and innovation on a network, at least to the same extent as they do in traditional markets.\nThere is a popular intuitive argument that supports this hypothesis, and it may be summarized as follows: Intuitive argument supporting null hypothesis: 1.\nAccess providers try to increase their quality to get more consumers.\n2.\nAccess providers are themselves customers for second hop ISPs, and the second hops will therefore try to provide high-quality service in order to secure traffic from access providers.\nAccess providers try to select high quality transit because that increases their quality.\n3.\nThe process continues through the network, giving every ISP a competitive reason to increase quality.\nWe are careful to model our network in continuous time, in order to capture the essence of this argument.\nWe can, for example, specify equilibria in which nodes switch to a new next hop in the event of a quality drop.\nMoreover, our model allows us to explore any theoretically possible punishments against cheaters, including those that are costly for end-users to administer.\nBy contrast, customers in the real world rarely respond collectively, and often simply seek the best deal currently offered.\nThese constraints limit their ability to punish cheaters.\nEven with these liberal assumptions, however, we find that we must reject our null hypothesis.\nOur model will demonstrate that identifying a cheating 
ISP is difficult under low accountability, limiting the threat of market-driven punishment.\nWe will define an index of commoditization and show that it increases without bound as data paths grow long.\nFurthermore, we will demonstrate a framework in which an ISP's maximum research investment decreases hyperbolically with its distance from the end-user.\nTo summarize, we argue that the Internet's lack of accountability must be addressed before the pathologies of commoditization and lack of innovation can be resolved.\nThis leads us to our next topic: How can we leverage accountability to overcome these pathologies?\nWe approach this question from a clean-slate perspective.\nInstead of focusing on incremental improvements, we try to imagine how an ideal industry would behave, then derive the level of accountability needed to meet that objective.\nAccording to this approach, we first craft a new equilibrium concept appropriate for network competition.\nOur concept includes the following requirements: First, we require that punishing ISPs that cheat is done without rerouting the path.\nRerouting is likely to prompt end-users to switch providers, punishing access providers who administer punishments correctly.\nNext, we require that the equilibrium cannot be threatened by a coalition of ISPs that exchanges illicit side payments.\nFinally, we require that the punishment mechanism that enforces contracts does not punish innocent nodes that are not in the coalition.\nThe last requirement is somewhat unconventional from an economic perspective, but we maintain that it is crucial for any reasonable solution.\nAlthough ISPs provide complementary services when they form a data path together, they are likely to be horizontal competitors as well.\nIf innocent nodes may be punished, an ISP may decide to deliberately cheat and draw punishment onto itself and its neighbors.\nBy cheating, the ISP may save resources, thereby 
ensuring that the punishment is more damaging to the other ISPs, which probably compete with the cheater directly for some customers.\nIn the extreme case, the cheater may force the other ISPs out of business, thereby gaining a monopoly on some routes.\nApplying this equilibrium concept, we derive the monitors needed to maintain innovation and optimize routes.\nThe solution is surprisingly simple: contractible monitors must report the quality of the rest of the path, from each ISP to the destination.\nIt turns out that this is the correct minimum accountability requirement, as opposed to either end-to-end monitors or hop-by-hop monitors, as one might initially suspect.\nRest of path monitors can be implemented in various ways.\nThey may be purely local algorithms that listen for packet echoes.\nAlternately, they can be distributed in nature.\nWe describe a way to construct a rest of path monitor out of monitors for individual ISP quality and for the data path.\nThis requires a mechanism to motivate ISPs to share their monitor outputs with each other.\nThe rest of path monitor then includes the component monitors and the distributed algorithmic mechanism that ensures that information is shared as required.\nThis example shows that other types of monitors may be useful as building blocks, but must be combined to form rest of path monitors in order to achieve ideal innovation characteristics.\nOur study has several practical implications for future protocol design.\nWe show that new monitors must be implemented and integrated with the contracting system before the pathologies of commoditization and lack of innovation can be overcome.\nMoreover, we derive exactly what monitors are needed to optimize routes and support innovation.\nIn addition, our results provide useful input for clean-slate architectural design, and we use several novel techniques that we expect will be applicable to a variety of future research.\nThe rest of this paper is organized as follows: In 
section 2, we lay out our basic network model.\nIn section 3, we present a low-accountability network, modeled after today's Internet.\nWe demonstrate how poor monitoring causes commoditization and a lack of innovation.\nIn section 4, we present verifiable monitors, and show that proofs, even without contracts, can improve the status quo.\nIn section 5, we turn our attention to contractible monitors.\nWe show that rest of path monitors can support competition games with optimal routing and innovation.\nWe further show that rest of path monitors are required to support such competition games.\nWe continue by discussing how such monitors may be constructed using other monitors as building blocks.\nIn section 6, we conclude and present several directions for future research.\n2.\nBASIC NETWORK MODEL A source, S, wants to send data to destination, D. S and D are nodes on a directed, acyclic graph, with a finite set of intermediate nodes, V = {1, 2,..., N}, representing ISPs.\nAll paths lead to D, and every node not connected to D has at least two choices for next hop.\nWe will represent quality by a finite dimensional vector space, Q, called the quality space.\nEach dimension represents a distinct network characteristic that end-users care about.\nFor example, latency, loss probability, jitter, and IP version can each be assigned to a dimension.\nTo each node, i, we associate a vector in the quality space, q_i ∈ Q.\nThis corresponds to the quality a user would experience if i were the only ISP on the data path.\nLet q ∈ Q^N be the vector of all node qualities.\nOf course, when data passes through multiple nodes, their qualities combine in some way to yield a path quality.\nWe represent this by an associative binary operation, *: Q × Q → Q.\nFor path (v_1, v_2,..., v_n), the quality is given by q_{v_1} * q_{v_2} * ... * q_{v_n}.\nThe * operation reflects the characteristics of each dimension of quality.\nFor example, * can act as an addition in the case of 
latency, multiplication in the case of loss probability, or a minimum-argument function in the case of security.\nWhen data flows along a complete path from S to D, the source and destination, generally regarded as a single player, enjoy utility given by a function of the path quality, u: Q → ℝ.\nEach node along the path, i, experiences some cost of transmission, c_i.\n2.1 Game Dynamics Ultimately, we are most interested in policies that promote innovation on the network.\nIn this study, we will use innovation in a fairly general sense.\nInnovation describes any investment by an ISP that alters its quality vector so that at least one potential data path offers higher utility.\nThis includes researching a new routing algorithm that decreases the amount of jitter users experience.\nIt also includes deploying a new protocol that supports quality of service.\nEven more broadly, buying new equipment to decrease latency may also be regarded as innovation.\nInnovation may be thought of as the micro-level process by which the network evolves.\nOur analysis is limited in one crucial respect: We focus on inventions that a single ISP can implement to improve the end-user experience.\nThis excludes technologies that require adoption by all ISPs on the network to function.\nBecause such technologies do not create a competitive advantage, rewarding them is difficult and may require intellectual property or some other market distortion.\nWe defer this interesting topic to future work.\nAt first, it may seem unclear how a large-scale distributed process such as innovation can be influenced by mechanical details like network monitors.\nOur model must draw this connection in a realistic fashion.\nThe rate of innovation depends on the profits that potential innovators expect in the future.\nThe reward generated by an invention must exceed the total cost to develop it, or the inventor will not rationally invest.\nThis reward, in turn, is governed by the competitive 
environment in which the firm operates, including the process by which firms select prices, and agree upon contracts with each other.\nOf course, these decisions depend on how routes are established, and how contracts determine actual monetary exchanges.\nAny model of network innovation must therefore relate at least three distinct processes: innovation, competition, and routing.\nWe select a game dynamics that makes the relation between these processes as explicit as possible.\nThis is represented schematically in Figure 2.\nThe innovation stage occurs first, at time t = -2.\nIn this stage, each agent decides whether or not to make research investments.\nIf she chooses not to, her quality remains fixed.\nIf she makes an investment, her quality may change in some way.\nIt is not necessary for us to specify how such changes take place.\nThe agents' choices in this stage determine the vector of qualities, q, common knowledge for the rest of the game.\nNext, at time t = -1, agents participate in the competition stage, in which contracts are agreed upon.\nIn today's industry, these contracts include prices for transit access, and peering agreements.\nSince access is provided on a best-effort basis, a transit agreement can simply be represented by its price.\nOther contracting systems we will explore will require more detail.\nFinally, beginning at t = 0, firms participate in the routing stage.\nOther research has already employed repeated games to study routing, for example [1], [12].\nRepetition reveals interesting effects not visible in a single stage game, such as informal collusion to elevate prices in [12].\nWe use a game in continuous time in order to study such properties.\nFor example, we will later ask whether a player will maintain higher quality than her contracts require, in the hope of keeping her customer base or attracting future customers.\nOur dynamics reflect the fact that ISPs make innovation decisions infrequently.\nAlthough real firms have 
multiple opportunities to innovate, each opportunity is followed by a substantial length of time in which qualities are fixed.\nThe decision to invest focuses on how the firm's new quality will improve the contracts it can enter into.\nHence, our model places innovation at the earliest stage, attempting to capture a single investment decision.\nContracting decisions are made on an intermediate time scale, thus appearing next in the dynamics.\nRouting decisions are made very frequently, mainly to maximize immediate profit flows, so they appear in the last stage.\nBecause of this ordering, our model does not allow firms to route strategically to affect future innovation or contracting decisions.\nIn opposition, Afergan and Wroclawski argue that contracts are formed in response to current traffic patterns, in a feedback loop [2].\nAlthough we are sympathetic to their observation, such an addition would make our analysis intractable.\nOur model is most realistic when contracting decisions are infrequent.\nThroughout this paper, our solution concept will be a subgame perfect equilibrium (SPE).\nAn SPE is a strategy point that is a Nash equilibrium when restricted to each subgame.\nThree important subgames have been labeled in Figure 2.\nThe innovation game includes all three stages.\nThe competition game includes only the competition stage and the routing stage.\nThe routing game includes only the routing stage.\nAn SPE guarantees that players are forward-looking.\nThis means, for example, that in the competition stage, firms must act rationally, maximizing their expected profits in the routing stage.\nThey cannot carry out threats they made in the innovation stage if it lowers their expected payoff.\nOur schematic already suggests that the routing game is crucial for promoting innovation.\nTo support innovation, the competition game must somehow reward ISPs with high quality.\nBut that means that the routing game must tend to route to nodes with high quality.\nIf the 
routing game always selects the lowest-cost routes, for example, innovation will not be supported.\nWe will support this observation with analysis later.\n2.2 The Routing Game The routing game proceeds in continuous time, with all players discounting by a common factor, r.\nThe outputs from previous stages, q and the set of contracts, are treated as exogenous parameters for this game.\nFor each time t ≥ 0, each node must select a next hop to route data to.\nData flows across the resultant path, causing utility flow to S and D, and a flow cost to the nodes on the path, as described above.\nPayment flows are also created, based on the contracts in place.\nRelating our game to the familiar repeated prisoners' dilemma, imagine that we are trying to impose a high quality, but costly path.\nAs we argued loosely above, such paths must be sustainable in order to support innovation.\nEach ISP on the path tries to maximize her own payment, net of costs, so she may not want to cooperate with our plan.\nRather, if she can find a way to save on costs, at the expense of the high quality we desire, she will be tempted to do so.\nFigure 2: Game Dynamics\nAnalogously to the prisoners' dilemma, we will call such a decision cheating.\nA little more formally, Cheating refers to any action that an ISP can take, contrary to some target strategy point that we are trying to impose, that enhances her immediate payoff, but compromises the quality of the data path.\nOne type of cheating relates to the data path.\nEach node on the path has to pay the next node to deliver its traffic.\nIf the next node offers high quality transit, we may expect that a lower quality node will offer a lower price.\nEach node on the path will be tempted to route to a cheaper next hop, increasing her immediate profits, but lowering the path 
quality.\nWe will call this type of action cheating in route.\nAnother possibility we can model is that a node finds a way to save on its internal forwarding costs, at the expense of its own quality.\nWe will call this cheating internally to distinguish it from cheating in route.\nFor example, a node might drop packets beyond the rate required for congestion control, in order to throttle back TCP flows and thus save on forwarding costs [3].\nAlternately, a node employing quality of service could give high priority packets a lower class of service, thus saving on resources and perhaps allowing itself to sell more high priority service.\nIf either cheating in route or cheating internally is profitable, the specified path will not be an equilibrium.\nWe assume that cheating can never be caught instantaneously.\nRather, a cheater can always enjoy the payoff from cheating for some positive time, which we label t_0.\nThis includes the time for other players to detect and react to the cheating.\nIf the cheater has a contract which includes a customer lock-in period, t_0 also includes the time until customers are allowed to switch to a new ISP.\nAs we will see later, it is socially beneficial to decrease t_0, so such lock-in is detrimental to welfare.\n3.\nPATHOLOGIES OF A LOW-ACCOUNTABILITY NETWORK In order to motivate an exploration of monitoring systems, we begin in this section by considering a network with a poor degree of accountability, modeled after today's Internet.\nWe will show how the lack of monitoring necessarily leads to poor routing and diminishes the rate of innovation.\nThus, the network's lack of accountability is a fundamental obstacle to resolving these pathologies.\n3.1 Accountability in the Current Internet First, we reflect on what accountability characteristics the present Internet has.\nArgyraki, et al., point out that end hosts are given minimal information about packet drops [3].\nUsers know when drops occur, but not where they occur, nor 
why.\nDropped packets may represent the innocent signaling of congestion, or, as we mentioned above, they may be a form of cheating internally.\nThe problem is similar for other dimensions of quality, or in fact more acute.\nFinding an ISP that gives high priority packets a lower class of service, for example, is further complicated by the lack of even basic diagnostic tools.\nIn fact, it is similarly difficult to identify an ISP that cheats in route.\nHuston notes that Internet traffic flows do not always correspond to routing information [8].\nAn ISP may hand a packet off to a neighbor regardless of what routes that neighbor has advertised.\nFurthermore, blocks of addresses are summarized together for distant hosts, so a destination may not even be resolvable until packets are forwarded closer.\nOne might argue that diagnostic tools like ping and traceroute can identify cheaters.\nUnfortunately, Argyraki, et al., explain that these tools only reveal whether probe packets are echoed, not the fate of past packets [3].\nThus, for example, they are ineffective in detecting low-frequency packet drops.\nEven more fundamentally, a sophisticated cheater can always spot diagnostic packets and give them special treatment.\nAs a further complication, a cheater may assume different aliases for diagnostic packets arriving over different routes.\nAs we will see below, this gives the cheater a significant advantage in escaping punishment for bad behavior, even if the data path is otherwise observable.\n3.2 Modeling Low-Accountability As the above evidence suggests, the current industry allows for very little insight into the behavior of the network.\nIn this section, we attempt to capture this lack of accountability in our model.\nWe begin by defining a monitor, our model of the way that players receive external information about network behavior, A monitor is any distributed algorithmic mechanism that runs on the network graph, and outputs, to specific nodes, informational 
statements about current or past network behavior.\nWe assume that all external information about network behavior is mediated in this way.\nThe accountability properties of the Internet can be represented by the following monitors: E2E (End to End): A monitor that informs S\/D about what the total path quality is at any time (this is the quality they experience).\nROP (Rest of Path): A monitor that informs each node along the data path what the quality is for the rest of the path to the destination.\nPRc (Packets Received): A monitor that tells nodes how much data they accept from each other, so that they can charge by volume.\nIt is important to note, however, that this information is aggregated over many source-destination pairs.\nHence, for the sake of realism, it cannot be used to monitor what the data path is.\nPlayers cannot measure the qualities of other, single nodes, just the rest of the path.\nNodes cannot see the path past the next hop.\nThis last assumption is stricter than needed for our results.\nThe critical ingredient is that nodes cannot verify that the path avoids a specific hop.\nThis holds, for example, if the path is generally visible, except nodes can use different aliases for different parents.\nSimilar results also hold if alternate paths always converge after some integer number, m, of hops.\nIt is important to stress that E2E and ROP are not the contractible monitors we described in the introduction - they do not generate proofs.\nThus, even though a player observes certain information, she generally cannot credibly share it with another player.\nFor example, if a node after the first hop starts cheating, the first hop will detect the sudden drop in quality for the rest of the path, but the first hop cannot make the source believe this observation - the source will suspect that the first hop was the cheater, and fabricated the claim against the rest of the path.\nTypically, E2E and ROP are envisioned as algorithms that run on a single 
node, and listen for packet echoes.\nThis is not the only way that they could be implemented, however; an alternate strategy is to aggregate quality measurements from multiple points in the network.\nThese measurements can originate in other monitors, located at various ISPs.\nThe monitor then includes the component monitors as well as whatever mechanisms are in place to motivate nodes to share information honestly as needed.\nFor example, if the source has monitors that reveal the qualities of individual nodes, they could be combined with path information to create an ROP monitor.\nSince we know that contracts only accept proofs as input, we can infer that payments in this environment can only depend on the number of packets exchanged between players.\nIn other words, contracts are best-effort.\nFor the remainder of this section, we will assume that contracts are also linear - there is a constant payment flow so long as a node accepts data, and all conditions of the contract are met.\nOther, more complicated tariffs are also possible, and are typically used to generate lock-in.\nWe believe that our parameter t_0 is sufficient to describe lock-in effects, and we believe that the insights in this section apply equally to any tariffs that are bounded so that the routing game remains continuous at infinity.\nRestricting attention to linear contracts allows us to represent some node i's contract by its price, p_i.\nBecause we further know that nodes cannot observe the path after the next hop, we can infer that contracts exist only between neighboring nodes on the graph.\nWe will call this arrangement of contracts bilateral.\nWhen a competition game exclusively uses bilateral contracts, we will call it a bilateral contract competition game.\nWe first focus on the routing game and ask whether a high quality route can be maintained, even when a low quality route is cheaper.\nRecall that this is a requirement in order for nodes to have any incentive to innovate.\nIf nodes 
tend to route to low-price next hops regardless of quality, we say that the network is commoditized. To measure this tendency, we define an index of commoditization as follows. For a node i on the data path, define its quality premium, $d_i = p_j - p_{\min}$, where $p_j$ is the flow payment to the next hop in equilibrium and $p_{\min}$ is the price of the lowest-cost next hop.

Definition: The index of commoditization, $I_C$, is the average, over the nodes i on the data path, of i's flow profit as a fraction of i's quality premium, $(p_i - c_i - p_j)/d_i$.

$I_C$ ranges from 0, when each node spends all of its potential profit on its quality premium, to infinity, when a node absorbs positive profit but uses the lowest-price next hop. A high value of $I_C$ implies that nodes spend little of their money inflow on purchasing high quality for the rest of the path. As the next claim shows, this is exactly what happens as the path grows long:

Claim 1. If the only monitors are E2E, ROP, and PRc, then $I_C \to \infty$ as $n \to \infty$, where n is the number of nodes on the data path.

To show that this is true, we first need the following lemma, which establishes the difficulty of punishing nodes in the network. First, a bit of notation: recall that a cheater can benefit from its actions for a time $t_0 > 0$ before other players can react. When a node cheats, it can expect a higher profit flow, at least until it is caught and other players react, perhaps by diverting traffic. Let node i's normal profit flow be $\pi_i$, and her profit flow during cheating be some greater value, $y_i$. We call the ratio $y_i/\pi_i$ the temptation to cheat.

Lemma 1. If the only monitors are E2E, ROP, and PRc, the discounted time, $\int_0^{t_n} e^{-rt}\,dt$, needed to punish a cheater increases at least as fast as the product of the temptations to cheat along the data path,

$$\int_0^{t_n} e^{-rt}\,dt \;\ge\; \Bigl(\prod_{i\,\text{on data path}} \frac{y_i}{\pi_i}\Bigr) \int_0^{t_0} e^{-rt}\,dt. \qquad (1)$$

Corollary. If nodes share a minimum temptation
to cheat, $y/\pi$, the discounted time needed to punish cheating increases at least exponentially in the length of the data path, n,

$$\int_0^{t_n} e^{-rt}\,dt \;\ge\; \Bigl(\frac{y}{\pi}\Bigr)^{n} \int_0^{t_0} e^{-rt}\,dt. \qquad (2)$$

Since it is the discounted time that increases exponentially, the actual time increases faster than exponentially. If n is so large that $t_n$ is undefined, the given path cannot be maintained in equilibrium.

Proof. The proof proceeds by induction on the number of nodes on the equilibrium data path, n. For $n = 1$, there is a single node, say i. By cheating, the node earns extra profit $(y_i - \pi_i)\int_0^{t_0} e^{-rt}\,dt$. If node i is then punished until time $t_1$, the extra profit must be cancelled out by the lost profit between times $t_0$ and $t_1$, $\pi_i \int_{t_0}^{t_1} e^{-rt}\,dt$. A little manipulation gives $\int_0^{t_1} e^{-rt}\,dt = (y_i/\pi_i)\int_0^{t_0} e^{-rt}\,dt$, as required.

For $n > 1$, assume for induction that the claim holds for $n - 1$. The source does not know whether the cheater is the first hop or a node after the first hop. Because the source does not know the data path after the first hop, it is unable to punish nodes beyond it; if it chooses a new first hop, it might not affect the rest of the data path. Because of this, the source must rely on the first hop to punish cheating nodes farther along the path. By the induction hypothesis, the first hop needs discounted time $\bigl(\prod_{i\,\text{after first hop}} y_i/\pi_i\bigr)\int_0^{t_0} e^{-rt}\,dt$ to accomplish this, so the source must give the first hop this much discounted time in order to punish defectors farther down the line (and the source will expect poor quality during this period). Next, the source must be protected against a first hop that cheats and pretends that the problem is later in the path. The first hop can do this for the full discounted time, $\bigl(\prod_{i\,\text{after first hop}} y_i/\pi_i\bigr)\int_0^{t_0} e^{-rt}\,dt$, so the source must punish the first hop long enough to remove the extra profit it can make. Following the same argument as for $n = 1$, we can
show that the full discounted time is $\bigl(\prod_{i\,\text{on data path}} y_i/\pi_i\bigr)\int_0^{t_0} e^{-rt}\,dt$, which completes the proof.

The above lemma and its corollary show that punishing cheaters becomes more and more difficult as the data path grows long, until doing so is impossible. To capture some intuition behind this result, imagine that you are an end user, and you notice a sudden drop in service quality. If your data only travels through your access provider, you know it is that provider's fault. You can therefore take your business elsewhere, at least for some time. This threat should motivate your provider to maintain high quality. Suppose, on the other hand, that your data traverses two providers. When you complain to your ISP, he responds, "Yes, we know your quality went down, but it's not our fault, it's the next ISP. Give us some time to punish them and then normal quality will resume." If your access provider is telling the truth, you will want to listen, since switching access providers may not even route around the actual offender. Thus, you will have to accept lower-quality service for some longer time. On the other hand, you may want to punish your access provider as well, in case he is lying. This means you have to wait longer to resume normal service. As more ISPs are added to the path, the time increases in a recursive fashion. With this lemma in hand, we can return to prove Claim 1.

Proof of Claim 1. Fix an equilibrium data path of length n. Label the path nodes 1, 2, ..., n.
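As a numeric aside before completing the proof, the corollary's bound (2) can be sanity-checked with a short sketch. All parameter values below (the discount rate r, reaction delay t_0, and the common temptation y/π) are hypothetical choices for illustration, not values from the paper:

```python
import math

# Sketch of Lemma 1's corollary: with only E2E, ROP, and PRc monitors, the
# discounted punishment time must satisfy D(t_n) >= (y/pi)^n * D(t_0), where
# D(t) = (1 - e^{-rt}) / r is the discounted time up to t.
# Hypothetical parameters (assumptions, not from the paper):
r, t0 = 0.05, 1.0          # discount rate, reaction delay
temptation = 1.5           # common temptation to cheat, y/pi

def discounted_time(t):
    return (1.0 - math.exp(-r * t)) / r

def punishment_time(n):
    """Smallest t_n with D(t_n) >= (y/pi)^n * D(t0), or None if no finite
    (even infinite) punishment horizon can deter cheating on a path of n nodes."""
    required = temptation ** n * discounted_time(t0)
    if required >= 1.0 / r:      # D(t) is bounded above by 1/r
        return None
    return -math.log(1.0 - r * required) / r

# The horizon grows faster than exponentially in n, then becomes undefined.
for n in (1, 5, 10, 20):
    print(n, punishment_time(n))
```

Since the discounted time D(t) saturates at 1/r, the actual punishment horizon grows faster than the exponentially growing right-hand side, and beyond some path length no horizon suffices, matching the corollary's remark.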
For each node i, let i's quality premium be $d_i = p_{i+1} - p'_{i+1}$, where $p_{i+1}$ is the equilibrium payment to i's next hop and $p'_{i+1}$ is the price of i's lowest-price next hop. Node i's normal profit flow is then $\pi_i = p_i - c_i - p_{i+1}$, while cheating by routing to the lowest-price next hop yields $y_i = p_i - c_i - p'_{i+1}$, so that $y_i - \pi_i = d_i$. Then we have

$$I_C \;=\; \frac{1}{n}\sum_{i=1}^{n} \frac{p_i - c_i - p_{i+1}}{d_i} \;=\; \frac{1}{n}\sum_{i=1}^{n} \frac{p_i - c_i - p_{i+1}}{p_{i+1} - p'_{i+1}} \;=\; \frac{1}{n}\sum_{i=1}^{n} \frac{1}{g_i - 1}, \qquad (3)$$

where $g_i = y_i/\pi_i$ is node i's temptation to cheat by routing to the lowest-price next hop. Lemma 1 tells us that $\prod_{i=1}^{n} g_i < T$, where $T = (1 - e^{-rt_0})^{-1}$. It requires a bit of calculus to show that $I_C$ is minimized by setting each $g_i$ equal to $T^{1/n}$. However, as $n \to \infty$, we have $T^{1/n} \to 1$, which shows that $I_C \to \infty$.

According to the claim, as the data path grows long, it increasingly resembles a lowest-price path. Since lowest-price routing does not support innovation, we may speculate that innovation degrades with the length of the data path. Though we suspect stronger claims are possible, we can demonstrate one such result by including an extra assumption:

Available Bargain Path: A competitive market exists for low-cost transit, such that every node can route to the destination for no more than flow payment $p_l$.

Claim 2. Under the available bargain path assumption, if node i, a distance n from S, can invest to alter its quality, and the source will spend no more than $P_s$ for a route including node i's new quality, then the payment to node i, p, decreases hyperbolically with n,

$$p \;\le\; p_l + \bigl(T^{1/(n-1)} - 1\bigr) P_s, \qquad (4)$$

where $T = (1 - e^{-rt_0})^{-1}$ is the bound on the product of temptations from the previous claim. Thus, i will spend no more than $\frac{1}{r}\bigl(p_l + (T^{1/(n-1)} - 1)P_s\bigr)$ on this quality improvement, which approaches the bargain path's payment, $p_l/r$, as $n \to \infty$.

The proof is given in the appendix. As a node gets farther from the source, its maximum payment approaches the bargain price, $p_l$. Hence, the reward for innovation is bounded by the same
amount. Large innovations, meaning those costing substantially more than $p_l/r$, will not be pursued deep into the network. Claim 2 can alternately be viewed as a lower bound on how much it costs to elicit innovation in a network. If the source S wants node i to innovate, it needs to get a motivating payment, p, to i during the routing stage. However, it must also pay the nodes on the way to i a premium in order to motivate them to route properly. The claim shows that this premium increases with the distance to i, until it dwarfs the original payment, p. Our claims stand in sharp contrast to our null hypothesis from the introduction. Comparing the intuitive argument that supported our hypothesis with these claims, we can see that we implicitly used an oversimplified model of market pressure (as either present or not). As is now clear, market pressure relies on the decisions of customers, but these are limited by the lack of information. Hence, competitive forces degrade as the network deepens.

4. VERIFIABLE MONITORS

In this section, we begin to introduce more accountability into the network. Recall that in the previous section, we assumed that players couldn't convince each other of their private information. What would happen if they could? If a monitor's informational signal can be credibly conveyed to others, we will call it a verifiable monitor. The monitor's output in this case can be thought of as a statement accompanied by a proof: a string that can be processed by any player to determine that the statement is true. A verifiable monitor is a distributed algorithmic mechanism that runs on the network graph and outputs, to specific nodes, proofs about current or past network behavior. Along these lines, we can imagine verifiable counterparts to E2E and ROP, which we label E2Ev and ROPv. With these monitors, each node observes the quality of the rest of the path and can also convince other players of these observations by giving them a
proof.

By adding verifiability to our monitors, identifying a single cheater is straightforward: the cheater is the node that cannot produce proof that the rest-of-path quality decreased. This means that the negative results of the previous section no longer hold. For example, the following lemma stands in contrast to Lemma 1.

Lemma 2. With monitors E2Ev, ROPv, and PRc, and provided that the node before each potential cheater has an alternate next hop that isn't more expensive, it is possible to enforce any data path in SPE so long as the maximum temptation is less than what can be deterred in finite time,

$$\frac{y_{\max}}{\pi} \;\le\; \frac{1}{r \int_0^{t_0} e^{-rt}\,dt}. \qquad (5)$$

Proof. This lemma follows because nodes can share proofs to identify who the cheater is. Only that node must be punished in equilibrium, and the preceding node does not lose any payoff in administering the punishment.

With this lemma in mind, it is easy to construct counterexamples to Claim 1 and Claim 2 in this new environment. Unfortunately, there are at least four reasons not to be satisfied with this improved monitoring system. The first, and weakest, reason is that the maximum temptation remains finite, causing some distortion in routes or payments: each node along a route must extract some positive profit unless the next hop is also the cheapest. Of course, if $t_0$ is small, this effect is minimal. The second, and more serious, reason is that we have always given our source the ability to commit to any punishment. Real-world users are less likely to act collectively, and may simply search for the best service currently offered. Since punishment phases are generally characterized by a drop in quality, real-world end users may take this opportunity to shop for a new access provider. This will make nodes less motivated to administer punishments. The third reason is that Lemma 2 does not apply to cheating by coalitions. A coalition node may pretend to punish its successor, but instead enjoy a
secret payment from the cheating node. Alternately, a node may bribe its successor to cheat if the punishment phase is profitable, and so forth. The required discounted time for punishment may increase exponentially in the number of coalition members, just as in the previous section! The final reason not to accept this monitoring system is that when a cheater is punished, the path will often be routed around not just the offender, but other nodes as well. Effectively, innocent nodes will be punished along with the guilty. In our abstract model, this doesn't cause trouble, since the punishment falls off the equilibrium path. The effects are not so benign in the real world. When ISPs lie in sequence along a data path, they contribute complementary services, and their relationship is vertical. From the perspective of other source-destination pairs, however, these same firms are likely to be horizontal competitors. Because of this, a node might deliberately cheat in order to trigger punishment for itself and its neighbors. By cheating, the node will save money to some extent, so the cheater is likely to emerge from the punishment phase better off than the innocent nodes. This may give the cheater a strategic advantage against its competitors. In the extreme, the cheater may use such a strategy to drive neighbors out of business, and thereby gain a monopoly on some routes.

5. CONTRACTIBLE MONITORS

At the end of the last section, we identified several drawbacks that persist in an environment with E2Ev, ROPv, and PRc. In this section, we will show how all of these drawbacks can be overcome. To do this, we require our third and final category of monitor: a contractible monitor is simply a verifiable monitor that generates proofs that can serve as input to a contract. Thus, contractibility is jointly a property of the monitor and the institutions that must verify its proofs. Contractibility requires that a court:

1. Can verify the monitor's
proofs.

2. Can understand what the proofs and contracts represent, to the extent required to police illegal activity.

3. Can enforce payments among contracting parties.

Understanding the agreements between companies has traditionally been a matter of reading contracts on paper. This may prove to be a harder task in a future network setting: contracts may plausibly be negotiated by machine, be numerous (even per-flow), and be further complicated by the many dimensions of quality. When a monitor (together with institutional infrastructure) meets these criteria, we label it with a subscript c, for contractible. The reader may recall that this is how we labeled the packets-received monitor, PRc, which allows ISPs to form contracts with per-packet payments. Similarly, E2Ec and ROPc are contractible versions of the monitors we are now familiar with. At the end of the previous section, we argued for some desirable properties that we'd like our solution to have. Briefly, we would like to enforce optimal data paths with an equilibrium concept that doesn't rely on re-routing for punishment, is coalition-proof, and doesn't punish innocent nodes when a coalition cheats. We call such an equilibrium a fixed-route coalition-proof protect-the-innocent equilibrium. As the next claim shows, ROPc allows us to create a system of linear (price, quality) contracts under just such an equilibrium.

Claim 3. With ROPc, for any feasible and consistent assignment of rest-of-path qualities to nodes, and any corresponding payment schedule that yields non-negative payoffs, these qualities can be maintained with bilateral contracts in a fixed-route coalition-proof protect-the-innocent equilibrium.

Proof: Fix any data path consistent with the given rest-of-path qualities. Select some monetary punishment, P, large enough to prevent any cheating for time $t_0$ (the discounted total payment from the source will work). Let each node on the path enter into a contract with its
parent, which fixes an arbitrary payment schedule so long as the rest-of-path quality is as prescribed. When the parent node, which has ROPc, submits a proof that the rest-of-path quality is less than expected, the contract awards her an instantaneous transfer, P, from the downstream node. Such proofs can be submitted every $t_0$ for the previous interval. Suppose now that a coalition, C, decides to cheat. The source measures a decrease in quality and, according to her contract, is awarded P from the first hop. This means that there is a net outflow of P from the ISPs as a whole. Suppose that node i is not in C. In order for i's parent node to claim P from i, it must submit proof that the quality of the path starting at i is not as prescribed. This means that there is a cheater after i. Hence, i would also have detected a change in quality, so i can claim P from the next node on the path. Thus, innocent nodes are not punished. The sequence of payments must end by the destination, so the net outflow of P must come from the nodes in C. This establishes all necessary conditions of the equilibrium.

Essentially, ROPc allows for an implementation of (price, quality) contracts. Building upon this result, we can construct competition games in which nodes offer various qualities to each other at specified prices, and can credibly commit to meet these performance targets, even allowing for coalitions and a desire to damage other ISPs.

Example 1. Define a Stackelberg price-quality competition game as follows: extend the partial order of nodes induced by the graph to any complete ordering, such that downstream nodes appear before their parents. In this order, each node selects a contract to offer to its parents, consisting of a rest-of-path quality and a linear price. In the routing game, each node selects a next hop at every time, consistent with its advertised rest-of-path quality. The Stackelberg price-quality competition game can be implemented in our
model with ROPc monitors, by using the strategy in the proof above. It has the following useful property:

Claim 4. The Stackelberg price-quality competition game yields optimal routes in SPE.

The proof is given in the appendix. This property is favorable from an innovation perspective, since firms that invest in high quality will tend to fall on the optimal path, gaining positive payoff. In general, however, investments may be over- or under-rewarded. Extra conditions may be given under which innovation decisions approach perfect efficiency for large innovations; we omit the full analysis here.

Example 2. Alternately, we can imagine that players report their private information to a central authority, which then assigns all contracts. For example, contracts could be computed to implement the cost-minimizing VCG mechanism proposed by Feigenbaum, et al. in [7]. With ROPc monitors, we can adapt this mechanism to maximize welfare. For a node, i, on the optimal path, L, the net payment must equal, essentially, its contribution to the welfare of S, D, and the other nodes. If L' is an optimal path in the graph with i removed, the profit flow to i is

$$\bigl(u(q_L) - u(q_{L'})\bigr) \;-\; \sum_{j \in L,\, j \ne i} c_j \;+\; \sum_{j \in L'} c_j, \qquad (6)$$

where $q_L$ and $q_{L'}$ are the qualities of the two paths. Here, (price, quality) contracts ensure that nodes report their qualities honestly, while the incentive structure of the VCG mechanism motivates nodes to report their costs accurately. A nice feature of this game is that individual innovation decisions are efficient, meaning that a node will invest in an innovation whenever the investment cost is less than the resulting increase in the welfare of the optimal data path. Unfortunately, the source may end up paying more than the utility of the path. Notice that with just E2Ec, a weaker version of Claim 3 holds: bilateral (price, quality) contracts can be maintained in an equilibrium that is fixed-route and coalition-proof, but not
protect-the-innocent. This is done by writing contracts to punish everyone on the path when the end-to-end quality drops. If the path length is n, the first hop pays nP to the source, the second hop pays (n − 1)P to the first, and so forth. This ensures that every node is punished sufficiently to make cheating unprofitable. For the reasons we gave previously, we believe that this solution concept is less than ideal, since it allows malicious nodes to deliberately trigger punishments for potential competitors. Up to this point, we have adopted fixed-route coalition-proof protect-the-innocent equilibrium as our desired solution concept, and shown that ROPc monitors are sufficient to create some competition games that are desirable in terms of service diversity and innovation. As the next claim will show, rest-of-path monitoring is also necessary to construct such games under our solution concept. Before we proceed, what does it mean for a game to be desirable from the perspective of service diversity and innovation? We will use a very weak assumption: essentially, that the game is not fully commoditized for any node. The claim will hold for this entire class of games.

Definition: A competition game is nowhere-commoditized if for each node, i, not adjacent to D, there is some assignment of qualities and marginal costs to nodes such that the optimal data path includes i, and i has a positive temptation to cheat.

In the case of linear contracts, it is sufficient to require that $I_C < \infty$, and that every node make positive profit under some assignment of qualities and marginal costs. Strictly speaking, ROPc monitors are not the only way to construct these desirable games. To prove the next claim, we must broaden our notion of rest-of-path monitoring to include the similar ROPc′ monitor, which attests to the quality starting at its own node, through the end of the path. Compare the two monitors below:

ROPc: gives a node proof that the path quality
from the next node to the destination is not correct.

ROPc′: gives a node proof that the path quality from that node to the destination is correct.

We present a simplified version of this claim by including an assumption that only one node on the path can cheat at a time (though conspirators can still exchange side payments). We will discuss the full version after the proof.

Claim 5. Assume a set of monitors, and a nowhere-commoditized bilateral contract competition game that always maintains the optimal quality in fixed-route coalition-proof protect-the-innocent equilibrium, with only one node allowed to cheat at a time. Then for each node, i, not adjacent to D, either i has an ROPc monitor, or i's children each have an ROPc′ monitor.

Proof: First, because of the fixed-route assumption, punishments must be purely monetary. Next, when cheating occurs, if the payment does not go to the source or destination, it may go to another coalition member, rendering it ineffective. Thus, the source must accept some monetary compensation, net of its normal flow payment, when cheating occurs. Since the source only contracts with the first hop, it must accept this money from the first hop. The source's contract must therefore distinguish when the path quality is normal from when it is lowered by cheating. To do so, it can either accept proofs from the source, that the quality is lower than required, or it can accept proofs from the first hop, that the quality is correct. These nodes will not rationally offer the opposing type of proof. By definition, any monitor that gives the source proof that the path quality is wrong is an ROPc monitor. Any monitor that gives the first hop proof that the quality is correct is an ROPc′ monitor. Thus, at least one of these monitors must exist. By the protect-the-innocent assumption, if cheating occurs but the first hop is not a cheater, she must be able to claim the same size reward from the next ISP on the path, and
thus pass on the punishment. The first hop's contract with the second must then distinguish when cheating occurs after the first hop. By an argument similar to that for the source, either the first hop has an ROPc monitor, or the second has an ROPc′ monitor. This argument can be iterated along the entire path to the penultimate node before D. Since the marginal costs and qualities can be arranged to make any path the optimal path, these statements must hold for all nodes and their children, which completes the proof.

The two possibilities for the monitor correspond to which node has the burden of proof. In one case, the prior node must prove the suboptimal quality to claim its reward. In the other, the subsequent node must prove that the quality was correct to avoid penalty. Because the two monitors are similar, it seems likely that they require comparable costs to implement. If submitting the proofs is costly, it seems natural that nodes would prefer to use the ROPc monitor, placing the burden of proof on the upstream node. Finally, we note that it is straightforward to derive the full version of the claim, which allows for multiple cheaters. The only complication is that cheaters can exchange side payments, which makes any money transfers between them redundant. Because of this, we have to further generalize our rest-of-path monitors, so they are less constrained in the case that there are cheaters on either side.

5.1 Implementing Monitors

Claim 5 should not be interpreted as a statement that each node must compute the rest-of-path quality locally, without input from other nodes. Other monitors besides ROPc and ROPc′ can still be used, loosely speaking, as building blocks. For instance, network tomography is concerned with measuring properties of the network interior with tools located at the edge. Using such techniques, our source might learn both individual node qualities and the data path. This is represented by the following two monitors:

SHOPc^i
: (source-based hop quality) A monitor that gives the source proof of what the quality of node i is.

SPATHc: (source-based path) A monitor that gives the source proof of what the data path is at any time, at least as far as it matches the equilibrium path.

With these monitors, a punishment mechanism can be designed to fulfill the conditions of Claim 5. It involves the source sharing the proofs it generates with nodes further down the path, which use them to determine bilateral payments. Ultimately, however, the proof of Claim 5 shows us that each node i's bilateral contracts require proof of the rest-of-path quality. This means that node i (or possibly its children) will have to combine the proofs that they receive to generate a proof of the rest-of-path quality. Thus, the combined process is itself a rest-of-path monitor. What we have done, all in all, is construct a rest-of-path monitor using SPATHc and SHOPc^i as building blocks. Our new monitor includes both the component monitors and whatever distributed algorithmic mechanism exists to make sure nodes share their proofs correctly. This mechanism can potentially involve external institutions. For a concrete example, suppose that when node i suspects it is getting poor rest-of-path quality from its successor, it takes the downstream node to court. During the discovery process, the court subpoenas proofs of the path and of node qualities from the source (ultimately, there must be some threat to ensure the source complies). Finally, for the court to issue a judgment, one party or the other must compile a proof of what the rest-of-path quality was. Hence, the entire discovery process acts as a rest-of-path monitor, albeit a rather costly one in this case. Of course, mechanisms can be designed to combine these monitors at much lower cost. Typically, such mechanisms would call for automatic sharing of proofs, with court intervention only as a last resort. We defer these interesting mechanisms to
future work.

As an aside, intuition might dictate that SHOPc^i generates more information than ROPc; after all, inferring individual node qualities seems a much harder problem. Yet, without path information, SHOPc^i is not sufficient for our first-best innovation result. The proof of this demonstrates a useful technique:

Claim 6. With monitors E2E, ROP, SHOPc^i, and PRc, and a nowhere-commoditized bilateral contract competition game, the optimal quality cannot be maintained for all assignments of quality and marginal cost in fixed-route coalition-proof protect-the-innocent equilibrium.

Proof: Because nodes cannot verify the data path, they cannot form a proof of what the rest-of-path quality is. Hence, ROPc monitors do not exist, and therefore the requirements of Claim 5 cannot hold.

6. CONCLUSIONS AND FUTURE WORK

It is our hope that this study will have a positive impact in at least three different ways. The first is practical: we believe our analysis has implications for the design of future monitoring protocols and for public policy. For protocol designers, we first provide fresh motivation to create monitoring systems. We have argued that the poor accountability of the Internet is a fundamental obstacle to alleviating the pathologies of commoditization and lack of innovation; unless accountability improves, these pathologies are guaranteed to remain. Second, we suggest directions for future advances in monitoring. We have shown that adding verifiability to monitors allows for some improvements in the characteristics of competition. At the same time, this does not present a fully satisfying solution. This paper has suggested a novel standard for monitors to aspire to: one of supporting optimal routes in innovative competition games under fixed-route coalition-proof protect-the-innocent equilibrium. We have shown that under bilateral contracts, this specifically requires contractible rest-of-path monitors. This is not to say that other types
of monitors are unimportant. We included an example in which individual hop-quality monitors and a path monitor can also meet our standard for sustaining competition. However, in order for this to happen, a mechanism must be included to combine proofs from these monitors to form a proof of rest-of-path quality. In other words, the monitors must ultimately be combined to form contractible rest-of-path monitors. To support service differentiation and innovation, it may be easier to design rest-of-path monitors directly, thereby avoiding the task of designing mechanisms for combining component monitors. As far as policy implications, our analysis points to the need for legal institutions to enforce contracts based on quality. These institutions must be equipped to verify proofs of quality and to police illegal contracting behavior. As quality-based contracts become numerous and complicated, and possibly negotiated by machine, this may become a challenging task, and new standards and regulations may have to emerge in response. This remains an interesting and unexplored area for research.

The second area we hope our study will benefit is that of clean-slate architectural design. Traditionally, clean-slate design tends to focus on creating effective and elegant networks for a static set of requirements. Thus, the approach is often one of engineering, which tends to neglect competitive effects. We agree with Ratnasamy, Shenker, and McCanne that designing for evolution should be a top priority [11]. We have demonstrated that the network's monitoring ability is critical to supporting innovation, as are the institutions that support contracting. These elements should feature prominently in new designs. Our analysis specifically suggests that architectures based on bilateral contracts should include contractible rest-of-path monitoring. From a clean-slate perspective, these monitors can be transparently and fully integrated with the routing and contracting
systems.

Finally, the last contribution our study makes is methodological. We believe that the mathematical formalization we present is applicable to a variety of future research questions. While a significant literature addresses innovation in the presence of network effects, to the best of our knowledge, ours is the first model of innovation in a network industry that successfully incorporates the actual topological structure as input. This allows the discovery of new properties, such as the weakening of market forces with the number of ISPs on a data path that we observe under low accountability. Our method also stands in contrast to the typical approach of distributed algorithmic mechanism design. Because that field is based on a principal-agent framework, contracts are usually proposed by the source, who is allowed to make a take-it-or-leave-it offer to network nodes. Our technique allows contracts to emerge from a competitive framework, so the source is limited to selecting the most desirable contract. We believe this is a closer reflection of the industry.

Based on the insights in this study, the possible directions for future research are numerous and exciting. To some degree, contracting based on quality opens a Pandora's box of pressing questions: Do quality-based contracts stand counter to the principle of network neutrality? Should ISPs be allowed to offer a choice of contracts at different quality levels? What anti-competitive behaviors are enabled by quality-based contracts? Can a contracting system support optimal multicast trees? In this study, we have focused on bilateral contracts. This system has seemed natural, especially since it is the prevalent system on the current network. Perhaps its most important benefit is that each contract is local in nature, so both parties share a common, familiar legal jurisdiction. There is no need to worry about who will enforce a punishment against another ISP on the opposite side of the planet,
nor is there a dispute over whose legal rules to apply in interpreting a contract.\nAlthough this benefit is compelling, it is worth considering other systems.\nThe clearest alternative is to form a contract between the source and every node on the path.\nWe may call these source contracts.\nSource contracting may present surprising advantages.\nFor instance, since ISPs do not exchange money with each other, an ISP cannot save money by selecting a cheaper next hop.\nAdditionally, if the source only has contracts with nodes on the intended path, other nodes won``t even be willing to accept packets from this source since they won``t receive compensation for carrying them.\nThis combination seems to eliminate all temptation for a single cheater to cheat in route.\nBecause of this and other encouraging features, we believe source contracts are a fertile topic for further study.\nAnother important research task is to relax our assumption that quality can be measured fully and precisely.\nOne possibility is to assume that monitoring is only probabilistic or suffers from noise.\nEven more relevant is the possibility that quality monitors are fundamentally incomplete.\nA quality monitor can never anticipate every dimension of quality that future applications will care about, nor can it anticipate a new and valuable protocol that an ISP introduces.\nWe may define a monitor space as a subspace of the quality space that a monitor can measure, QM \u2282 , and a corresponding monitoring function that simply projects the full range of qualities onto the monitor space, MQm \u2192: .\nClearly, innovations that leave quality invariant under m are not easy to support - they are invisible to the monitoring system.\nIn this environment, we expect that path monitoring becomes more important, since it is the only way to ensure data reaches certain innovator ISPs.\nFurther research is needed to understand this process.\n7.\nACKNOWLEDGEMENTS We would like to thank the anonymous reviewers, 
Jens Grossklags, Moshe Babaioff, Scott Shenker, Sylvia Ratnasamy, and Hal Varian for their comments. This work is supported in part by the National Science Foundation under ITR award ANI-0331659.

8. REFERENCES

[1] Afergan, M. Using Repeated Games to Design Incentive-Based Routing Systems. In Proceedings of IEEE INFOCOM (April 2006).
[2] Afergan, M. and Wroclawski, J. On the Benefits and Feasibility of Incentive Based Routing Infrastructure. In ACM SIGCOMM'04 Workshop on Practice and Theory of Incentives in Networked Systems (PINS) (August 2004).
[3] Argyraki, K., Maniatis, P., Cheriton, D., and Shenker, S. Providing Packet Obituaries. In Third Workshop on Hot Topics in Networks (HotNets) (November 2004).
[4] Clark, D. D. The Design Philosophy of the DARPA Internet Protocols. In Proceedings of ACM SIGCOMM (1988).
[5] Clark, D. D., Wroclawski, J., Sollins, K. R., and Braden, R. Tussle in Cyberspace: Defining Tomorrow's Internet. In Proceedings of ACM SIGCOMM (August 2002).
[6] Dang-Nguyen, G. and Pénard, T. Interconnection Agreements: Strategic Behaviour and Property Rights. In Brousseau, E. and Glachant, J. M., Eds., The Economics of Contracts: Theories and Applications, Cambridge University Press, 2002.
[7] Feigenbaum, J., Papadimitriou, C., Sami, R., and Shenker, S. A BGP-based Mechanism for Lowest-Cost Routing. Distributed Computing 18 (2005), pp. 61-72.
[8] Huston, G. Interconnection, Peering, and Settlements. Telstra, Australia.
[9] Liu, Y., Zhang, H., Gong, W., and Towsley, D. On the Interaction Between Overlay Routing and Traffic Engineering. In Proceedings of IEEE INFOCOM (2005).
[10] MacKie-Mason, J. and Varian, H. Pricing the Internet. In Kahin, B. and Keller, J., Eds., Public Access to the Internet. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[11] Ratnasamy, S., Shenker, S., and McCanne, S. Towards an Evolvable Internet Architecture. In Proceedings of ACM SIGCOMM (2005).
[12] Shakkottai, S., and Srikant, R.
Economics of Network Pricing with Multiple ISPs. In Proceedings of IEEE INFOCOM (2005).

9. APPENDIX

Proof of Claim 2. Node i must fall on the equilibrium data path to receive any payment. Let the prices along the data path be P_S = p_1, p_2, ..., p_n, with marginal costs c_1, ..., c_n. We may assume the prices on the path are greater than p_l, or the claim follows trivially. Each node along the data path can cheat in route by giving data to the bargain path at price no more than p_l. So node j's temptation to cheat is at least (p_j − p_l)/(p_j − p_{j+1}), since

(p_j − c_j − p_l)/(p_j − c_j − p_{j+1}) ≥ (p_j − p_l)/(p_j − p_{j+1}).

Then Lemma 1 gives

T ≥ [(p_1 − p_l)/(p_1 − p_2)] · [(p_2 − p_l)/(p_2 − p_3)] · ... · [(p_{n−1} − p_l)/(p_{n−1} − p_n)] > [(p_1 − p_l)/(P_S − p_1 + p_l)]^{n−1}.  (7)

This can be rearranged to give p_1 − p_l ≤ (T^{1/(1−n)} + 1)^{−1} P_S, as required. The rest of the claim simply recognizes that p/r is the greatest reward node i can receive for its investment, so it will not invest sums greater than this.

Proof of Claim 4. Label the nodes 1, 2, ..., N in the order in which they select contracts. Let subgame n be the game that begins with n choosing its contract. Let L_n be the set of possible paths restricted to nodes n, ..., N. That is, L_n is the set of possible routes from S to reach some node that has already moved. For subgame n, define the local welfare over paths l ∈ L_n and their possible next hops j < n as follows:

V(l, j) = u*(q_l, q_pathj) − Σ_{i ∈ l} c_i − p_j,  (8)

where q_l is the quality of path l in the set {n, ..., N}, and q_pathj and p_j are the quality and price of the contract j has offered. For induction, assume that subgame n + 1 maximizes local welfare. We show that subgame n does as well. If node n selects next hop k, we can write the following relation:

V(l, n) = V((l, n), k) − (p_n − c_n − p_k) = V((l, n), k) − π_n,  (9)

where π_n is node n's profit if the path to n is chosen. This path is chosen whenever
V(l, n) is maximal over L_{n+1} and possible next hops. If V((l, n), k) is maximal over L_n, it is also maximal over the paths in L_{n+1} that do not lead to n. This means that node n can choose some π_n small enough so that V(l, n) is maximal over L_{n+1}, so the route will lead to k. Conversely, if V((l, n), k) is not maximal over L_n, either V is greater for another of n's next hops, in which case n will select that one in order to increase π_n, or V is greater for some path in L_{n+1} that does not lead to n, in which case V(l, n) cannot be maximal for any nonnegative π_n. Thus, we conclude that subgame n maximizes local welfare. For the initial case, observe that this assumption holds for the source. Finally, we deduce that subgame 1, which is the entire game, maximizes local welfare, which is equivalent to actual welfare. Hence, the Stackelberg price-quality game yields an optimal route.

Network Monitors and Contracting Systems: Competition and Innovation

ABSTRACT

Today's Internet industry suffers from several well-known pathologies, but none is as destructive in the long term as its resistance to evolution. Rather than introducing new services, ISPs are presently moving towards greater commoditization. It is apparent that the network's primitive system of contracts does not align incentives properly. In this study, we identify the network's lack of accountability as a fundamental obstacle to correcting this problem: Employing an economic model, we argue that optimal routes and innovation are impossible unless new monitoring capability is introduced and incorporated with the contracting system. Furthermore, we derive the minimum requirements a monitoring system must meet to support first-best routing and innovation characteristics. Our work does not constitute a new protocol; rather, we provide practical and specific guidance for the design of monitoring systems, as well as a theoretical framework to explore the factors that influence
innovation.

1. INTRODUCTION

Many studies before us have noted the Internet's resistance to new services and evolution. In recent decades, numerous ideas have been developed in universities, implemented in code, and even written into the routers and end systems of the network, only to languish as network operators fail to activate them on a large scale. The list includes Multicast, IPv6, IntServ, and DiffServ. Lacking the incentives just to activate services, there seems to be little hope of ISPs devoting adequate resources to developing new ideas. In the long term, this pathology stands out as a critical obstacle to the network's continued success (Ratnasamy, Shenker, and McCanne provide extensive discussion in [11]). On a smaller time scale, ISPs shun new services in favor of cost-cutting measures. Thus, the network has characteristics of a commodity market. Although in theory ISPs have a plethora of routing policies at their disposal, the prevailing strategy is to route in the cheapest way possible [2]. On one hand, this leads directly to suboptimal routing. More importantly, commoditization in the short term is surely related to the lack of innovation in the long term. When the routing decisions of others ignore quality characteristics, ISPs are motivated only to lower costs. There is simply no reward for introducing new services or investing in quality improvements. In response to these pathologies and others, researchers have put forth various proposals for improving the situation. These can be divided according to three high-level strategies: The first attempts to improve the status quo by empowering end-users. Clark, et al., suggest that giving end-users control over routing would lead to greater service diversity, recognizing that some payment mechanism must also be provided [5]. Ratnasamy, Shenker, and McCanne postulate a link between network evolution and user-directed routing [11]. They propose a system of Anycast to give end-users the
ability to tunnel their packets to an ISP that introduces a desirable protocol. The extra traffic to the ISP, the authors suggest, will motivate the initial investment. The second strategy suggests a revision of the contracting system. This is exemplified by MacKie-Mason and Varian, who propose a "smart market" to control access to network resources [10]. Prices are set to the market-clearing level based on bids that users associate with their traffic. In another direction, Afergan and Wroclawski suggest that prices should be explicitly encoded in the routing protocols [2]. They argue that such a move would improve stability and align incentives. The third high-level strategy calls for greater network accountability. In this vein, Argyraki, et al., propose a system of packet obituaries to provide feedback as to which ISPs drop packets [3]. They argue that such feedback would help reveal which ISPs were adequately meeting their contractual obligations. Unlike the first two strategies, we are not aware of any previous studies that have connected accountability with the pathologies of commoditization or lack of innovation. It is clear that these three strategies are closely linked to each other (for example, [2], [5], and [9] each argue that giving end-users routing control within the current contracting system is problematic). Until today, however, the relationship between them has been poorly understood. There is currently little theoretical foundation to compare the relative merits of each proposal, and a particular lack of evidence linking accountability with innovation and service differentiation. This paper will address both issues. We will begin by introducing an economic network model that relates accountability, contracts, competition, and innovation. Our model is highly stylized and may be considered preliminary: it is based on a single source sending data to a single destination. Nevertheless, the structure is rich enough to expose
previously unseen features of network behavior. We will use our model for two main purposes: First, we will use our model to argue that the lack of accountability in today's network is a fundamental obstacle to overcoming the pathologies of commoditization and lack of innovation. In other words, unless new monitoring capabilities are introduced, and integrated with the system of contracts, the network cannot achieve optimal routing and innovation characteristics. This result provides motivation for the remainder of the paper, in which we explore how accountability can be leveraged to overcome these pathologies and create a sustainable industry. We will approach this problem from a clean-slate perspective, deriving the level of accountability needed to sustain an ideal competitive structure. When we say that today's Internet has poor accountability, we mean that it reveals little information about the behavior--or misbehavior--of ISPs. This well-known trait is largely rooted in the network's history. In describing the design philosophy behind the Internet protocols, Clark lists accountability as the least important among seven "second level goals" [4]. Accordingly, accountability received little attention during the network's formative years. Clark relates this to the network's military context, and finds that had the network been designed for commercial development, accountability would have been a top priority. Argyraki, et al., conjecture that applying the principles of layering and transparency may have led to the network's lack of accountability [3]. According to these principles, end hosts should be informed of network problems only to the extent that they are required to adapt. They notice when packet drops occur so that they can perform congestion control and retransmit packets. Details of where and why drops occur are deliberately concealed. The network's lack of accountability is highly relevant to a discussion of innovation because it
constrains the system of contracts. This is because contracts depend upon external institutions to function--the "judge" in the language of incomplete contract theory, or simply the legal system. Ultimately, if a judge cannot verify that some condition holds, she cannot enforce a contract based on that condition. Of course, the vast majority of contracts never end up in court. Especially when a judge's ruling is easily predicted, the parties will typically comply with the contract terms of their own volition. This would not be possible, however, without the judge acting as a last resort. An institution to support contracts is typically complex, but we abstract it as follows: We imagine that a contract is an algorithm that outputs a payment transfer among a set of ISPs (the parties) at every point in time. This payment is a function of the past and present behaviors of the participants, but only those that are verifiable. Hence, we imagine that a contract only accepts "proofs" as inputs. We will call any process that generates these proofs a contractible monitor. Such a monitor includes metering or sensing devices on the physical network, but it is a more general concept. Constructing a proof of a particular behavior may require readings from various devices distributed among many ISPs. The contractible monitor includes whatever distributed algorithmic mechanism is used to motivate ISPs to share this private information. Figure 1 demonstrates how our model of contracts fits together. We make the assumption that all payments are mediated by contracts. This means that without contractible monitors that attest to, say, latency, payments cannot be conditioned on latency.

Figure 1: Relationship between monitors and contracts

With this model, we may conclude that the level of accountability in today's Internet only permits best-effort contracts. Nodes cannot condition payments on either quality or path characteristics. Is there
anything wrong with best-effort contracts? The reader might wonder why the Internet needs contracts at all. After all, in non-network industries, traditional firms invest in research and differentiate their products, all in the hopes of keeping their customers and securing new ones. One might believe that such market forces apply to ISPs as well. We may adopt this as our null hypothesis:

Null hypothesis: Market forces are sufficient to maintain service diversity and innovation on a network, at least to the same extent as they do in traditional markets.

There is a popular intuitive argument that supports this hypothesis, and it may be summarized as follows:

Intuitive argument supporting null hypothesis:
1. Access providers try to increase their quality to get more consumers.
2. Access providers are themselves customers for second-hop ISPs, and the second hops will therefore try to provide high-quality service in order to secure traffic from access providers. Access providers try to select high-quality transit because that increases their quality.
3. The process continues through the network, giving every ISP a competitive reason to increase quality.

We are careful to model our network in continuous time, in order to capture the essence of this argument. We can, for example, specify equilibria in which nodes switch to a new next hop in the event of a quality drop. Moreover, our model allows us to explore any theoretically possible punishments against cheaters, including those that are costly for end-users to administer. By contrast, customers in the real world rarely respond collectively, and often simply seek the best deal currently offered. These constraints limit their ability to punish cheaters. Even with these liberal assumptions, however, we find that we must reject our null hypothesis. Our model will demonstrate that identifying a cheating ISP is difficult under low accountability, limiting the threat of market-driven punishment. We will
define an index of commoditization and show that it increases without bound as data paths grow long. Furthermore, we will demonstrate a framework in which an ISP's maximum research investment decreases hyperbolically with its distance from the end-user. To summarize, we argue that the Internet's lack of accountability must be addressed before the pathologies of commoditization and lack of innovation can be resolved. This leads us to our next topic: How can we leverage accountability to overcome these pathologies? We approach this question from a clean-slate perspective. Instead of focusing on incremental improvements, we try to imagine how an ideal industry would behave, then derive the level of accountability needed to meet that objective. According to this approach, we first craft a new equilibrium concept appropriate for network competition. Our concept includes the following requirements: First, we require that punishing ISPs that cheat is done without rerouting the path. Rerouting is likely to prompt end-users to switch providers, punishing access providers who administer punishments correctly. Next, we require that the equilibrium cannot be threatened by a coalition of ISPs that exchanges illicit side payments. Finally, we require that the punishment mechanism that enforces contracts does not punish innocent nodes that are not in the coalition. The last requirement is somewhat unconventional from an economic perspective, but we maintain that it is crucial for any reasonable solution. Although ISPs provide complementary services when they form a data path together, they are likely to be horizontal competitors as well. If innocent nodes may be punished, an ISP may decide to deliberately cheat and draw punishment onto itself and its neighbors. By cheating, the ISP may save resources, thereby ensuring that the punishment is more damaging to the other ISPs, which probably compete with the cheater directly for some customers. In the extreme case, the
cheater may force the other ISPs out of business, thereby gaining a monopoly on some routes. Applying this equilibrium concept, we derive the monitors needed to maintain innovation and optimize routes. The solution is surprisingly simple: contractible monitors must report the quality of the rest of the path, from each ISP to the destination. It turns out that this is the correct minimum accountability requirement, as opposed to either end-to-end monitors or hop-by-hop monitors, as one might initially suspect. Rest of path monitors can be implemented in various ways. They may be purely local algorithms that listen for packet echoes. Alternatively, they can be distributed in nature. We describe a way to construct a rest of path monitor out of monitors for individual ISP quality and for the data path. This requires a mechanism to motivate ISPs to share their monitor outputs with each other. The rest of path monitor then includes the component monitors and the distributed algorithmic mechanism that ensures that information is shared as required. This example shows that other types of monitors may be useful as building blocks, but must be combined to form rest of path monitors in order to achieve ideal innovation characteristics. Our study has several practical implications for future protocol design. We show that new monitors must be implemented and integrated with the contracting system before the pathologies of commoditization and lack of innovation can be overcome. Moreover, we derive exactly what monitors are needed to optimize routes and support innovation. In addition, our results provide useful input for clean-slate architectural design, and we use several novel techniques that we expect will be applicable to a variety of future research. The rest of this paper is organized as follows: In section 2, we lay out our basic network model. In section 3, we present a low-accountability network, modeled after today's Internet. We demonstrate how poor
monitoring causes commoditization and a lack of innovation. In section 4, we present verifiable monitors, and show that proofs, even without contracts, can improve the status quo. In section 5, we turn our attention to contractible monitors. We show that rest of path monitors can support competition games with optimal routing and innovation. We further show that rest of path monitors are required to support such competition games. We continue by discussing how such monitors may be constructed using other monitors as building blocks. In section 6, we conclude and present several directions for future research.

2. BASIC NETWORK MODEL
2.1 Game Dynamics
2.2 The Routing Game
3. PATHOLOGIES OF A LOW-ACCOUNTABILITY NETWORK
3.1 Accountability in the Current Internet
3.2 Modeling Low-Accountability
4. VERIFIABLE MONITORS
5. CONTRACTIBLE MONITORS
5.1 Implementing Monitors
6. CONCLUSIONS AND FUTURE WORK

It is our hope that this study will have a positive impact in at least three different ways. The first is practical: we believe our analysis has implications for the design of future monitoring protocols and for public policy. For protocol designers, we first provide fresh motivation to create monitoring systems. We have argued that the poor accountability of the Internet is a fundamental obstacle to alleviating the pathologies of commoditization and lack of innovation. Unless accountability improves, these pathologies are guaranteed to remain. Secondly, we suggest directions for future advances in monitoring. We have shown that adding verifiability to monitors allows for some improvements in the characteristics of competition. At the same time, this does not present a fully satisfying solution. This paper has suggested a novel standard for monitors to aspire to--one of supporting optimal routes in innovative competition games under fixed-route coalition-proof protect-the-innocent equilibrium. We have shown that under bilateral contracts, this
specifically requires contractible rest of path monitors. This is not to say that other types of monitors are unimportant. We included an example in which individual hop quality monitors and a path monitor can also meet our standard for sustaining competition. However, in order for this to happen, a mechanism must be included to combine proofs from these monitors to form a proof of rest of path quality. In other words, the monitors must ultimately be combined to form contractible rest of path monitors. To support service differentiation and innovation, it may be easier to design rest of path monitors directly, thereby avoiding the task of designing mechanisms for combining component monitors. As for policy implications, our analysis points to the need for legal institutions to enforce contracts based on quality. These institutions must be equipped to verify proofs of quality, and police illegal contracting behavior. As quality-based contracts become numerous and complicated, and possibly negotiated by machine, this may become a challenging task, and new standards and regulations may have to emerge in response. This remains an interesting and unexplored area for research. The second area we hope our study will benefit is that of clean-slate architectural design. Traditionally, clean-slate design tends to focus on creating effective and elegant networks for a static set of requirements. Thus, the approach is often one of engineering, which tends to neglect competitive effects. We agree with Ratnasamy, Shenker, and McCanne that designing for evolution should be a top priority [11]. We have demonstrated that the network's monitoring ability is critical to supporting innovation, as are the institutions that support contracting. These elements should feature prominently in new designs. Our analysis specifically suggests that architectures based on bilateral contracts should include contractible rest of path monitoring. From a clean-slate
perspective, these monitors can be transparently and fully integrated with the routing and contracting systems.\nFinally, the last contribution our study makes is methodological.\nWe believe that the mathematical formalization we present is applicable to a variety of future research questions.\nWhile a significant literature addresses innovation in the presence of network effects, to the best of our knowledge, ours is the first model of innovation in a network industry that successfully incorporates the actual topological structure as input.\nThis allows the discovery of new properties, such as the weakening of market forces with the number of ISPs on a data path that we observe with lowaccountability.\nOur method also stands in contrast to the typical approach of distributed algorithmic mechanism design.\nBecause this field is based on a principle-agent framework, contracts are usually proposed by the source, who is allowed to make a take it or leave it offer to network nodes.\nOur technique allows contracts to emerge from a competitive framework, so the source is limited to selecting the most desirable contract.\nWe believe this is a closer reflection of the industry.\nBased on the insights in this study, the possible directions for future research are numerous and exciting.\nTo some degree, contracting based on quality opens a Pandora's Box of pressing questions: Do quality-based contracts stand counter to the principle of network neutrality?\nShould ISPs be allowed to offer a choice of contracts at different quality levels?\nWhat anti-competitive behaviors are enabled by quality-based contracts?\nCan a contracting system support optimal multicast trees?\nIn this study, we have focused on bilateral contracts.\nThis system has seemed natural, especially since it is the prevalent system on the current network.\nPerhaps its most important benefit is that each contract is local in nature, so both parties share a common, familiar legal jurisdiction.\nThere is no need 
to worry about who will enforce a punishment against another ISP on the opposite side of the planet, nor is there a dispute over whose legal rules to apply in interpreting a contract.\nAlthough this benefit is compelling, it is worth considering other systems.\nThe clearest alternative is to form a contract between the source and every node on the path.\nWe may call these source contracts.\nSource contracting may present surprising advantages.\nFor instance, since ISPs do not exchange money with each other, an ISP cannot save money by selecting a cheaper next hop.\nAdditionally, if the source only has contracts with nodes on the intended path, other nodes won't even be willing to accept packets from this source since they won't receive compensation for carrying them.\nThis combination seems to eliminate all temptation for a single cheater to cheat in route.\nBecause of this and other encouraging features, we believe source contracts are a fertile topic for further study.\nAnother important research task is to relax our assumption that quality can be measured fully and precisely.\nOne possibility is to assume that monitoring is only probabilistic or suffers from noise.\nEven more relevant is the possibility that quality monitors are fundamentally incomplete.\nA quality monitor can never anticipate every dimension of quality that future applications will care about, nor can it anticipate a new and valuable protocol that an ISP introduces.\nWe may define a monitor space as a subspace of the quality space that a monitor can measure, M c Q, and a corresponding monitoring function that simply projects the full range of qualities onto the monitor space, m: Q--> M.\nClearly, innovations that leave quality invariant under m are not easy to support--they are invisible to the monitoring system.\nIn this environment, we expect that path monitoring becomes more important, since it is the only way to ensure data reaches certain innovator ISPs.\nFurther research is needed to 
understand this process.","lvl-4":"Network Monitors and Contracting Systems: Competition and Innovation\nABSTRACT\nToday's Internet industry suffers from several well-known pathologies, but none is as destructive in the long term as its resistance to evolution.\nRather than introducing new services, ISPs are presently moving towards greater commoditization.\nIt is apparent that the network's primitive system of contracts does not align incentives properly.\nIn this study, we identify the network's lack of accountability as a fundamental obstacle to correcting this problem: Employing an economic model, we argue that optimal routes and innovation are impossible unless new monitoring capability is introduced and incorporated with the contracting system.\nFurthermore, we derive the minimum requirements a monitoring system must meet to support first-best routing and innovation characteristics.\nOur work does not constitute a new protocol; rather, we provide practical and specific guidance for the design of monitoring systems, as well as a theoretical framework to explore the factors that influence innovation.\n1.\nINTRODUCTION\nMany studies before us have noted the Internet's resistance to new services and evolution.\nThe list includes Multicast, IPv6, IntServ, and DiffServ.\nLacking the incentives just to activate services, there seems to be little hope of ISPs devoting adequate resources to developing new ideas.\nIn the long term, this pathology stands out as a critical obstacle to the network's continued success (Ratnasamy, Shenker, and McCanne provide extensive discussion in [11]).\nOn a smaller time scale, ISPs shun new services in favor of cost cutting measures.\nThus, the network has characteristics of a commodity market.\nAlthough in theory, ISPs have a plethora of routing policies at their disposal, the prevailing strategy is to route in the cheapest way possible [2].\nOn one hand, this leads directly to suboptimal routing.\nMore importantly, commoditization in 
the short term is surely related to the lack of innovation in the long term.\nWhen the routing decisions of others ignore quality characteristics, ISPs are motivated only to lower costs.\nThere is simply no reward for introducing new services or investing in quality improvements.\nThese can be divided according to three high-level strategies: The first attempts to improve the status quo by empowering end-users.\nClark, et al., suggest that giving end-users control over routing would lead to greater service diversity, recognizing that some payment mechanism must also be provided [5].\nRatnasamy, Shenker, and McCanne postulate a link between network evolution and user-directed routing [11].\nThey propose a system of Anycast to give end-users the ability to tunnel their packets to an ISP that introduces a desirable protocol.\nThe second strategy suggests a revision of the contracting system.\nThis is exemplified by MacKie-Mason and Varian, who propose a \"smart market\" to control access to network resources [10].\nIn another direction, Afergan and Wroclawski suggest that prices should be explicitly encoded in the routing protocols [2].\nThey argue that such a move would improve stability and align incentives.\nThe third high-level strategy calls for greater network accountability.\nIn this vein, Argyraki, et al., propose a system of packet obituaries to provide feedback as to which ISPs drop packets [3].\nThey argue that such feedback would help reveal which ISPs were adequately meeting their contractual obligations.\nUnlike the first two strategies, we are not aware of any previous studies that have connected accountability with the pathologies of commoditization or lack of innovation.\nUntil today, however, the relationship between them has been poorly understood.\nThere is currently little theoretical foundation to compare the relative merits of each proposal, and a particular lack of evidence linking accountability with innovation and service 
differentiation.\nThis paper will address both issues.\nWe will begin by introducing an economic network model that relates accountability, contracts, competition, and innovation.\nOur model is highly stylized and may be considered preliminary: it is based on a single source sending data to a single destination.\nNevertheless, the structure is rich enough to expose previously unseen features of network behavior.\nWe will use our model for two main purposes: First, we will argue that the lack of accountability in today's network is a fundamental obstacle to overcoming the pathologies of commoditization and lack of innovation.\nIn other words, unless new monitoring capabilities are introduced, and integrated with the system of contracts, the network cannot achieve optimal routing and innovation characteristics.\nThis result provides motivation for the remainder of the paper, in which we explore how accountability can be leveraged to overcome these pathologies and create a sustainable industry.\nWe will approach this problem from a clean-slate perspective, deriving the level of accountability needed to sustain an ideal competitive structure.\nWhen we say that today's Internet has poor accountability, we mean that it reveals little information about the behavior--or misbehavior--of ISPs.\nThis well-known trait is largely rooted in the network's history.\nIn describing the design philosophy behind the Internet protocols, Clark lists accountability as the least important among seven \"second level goals\" [4].\nAccordingly, accountability received little attention during the network's formative years.\nClark relates this to the network's military context, and finds that had the network been designed for commercial development, accountability would have been a top priority.\nArgyraki, et al., conjecture that applying the principles of layering and transparency may have led to the network's lack of accountability [3].\nAccording to these principles, end hosts should be informed of network problems only to the extent that they are required to adapt.\nDetails of where and why drops occur 
are deliberately concealed.\nThe network's lack of accountability is highly relevant to a discussion of innovation because it constrains the system of contracts.\nThis is because contracts depend upon external institutions to function--the \"judge\" in the language of incomplete contract theory, or simply the legal system.\nUltimately, if a judge cannot verify that some condition holds, she cannot enforce a contract based on that condition.\nOf course, the vast majority of contracts never end up in court.\nEspecially when a judge's ruling is easily predicted, the parties will typically comply with the contract terms of their own volition.\nThis would not be possible, however, without the judge acting as a last resort.\nAn institution to support contracts is typically complex, but we abstract it as follows: We imagine that a contract is an algorithm that outputs a payment transfer among a set of ISPs (the parties) at every time.\nThis payment is a function of the past and present behaviors of the participants, but only those that are verifiable.\nHence, we imagine that a contract only accepts \"proofs\" as inputs.\nWe will call any process that generates these proofs a contractible monitor.\nSuch a monitor includes metering or sensing devices on the physical network, but it is a more general concept.\nConstructing a proof of a particular behavior may require readings from various devices distributed among many ISPs.\nThe contractible monitor includes whatever distributed algorithmic mechanism is used to motivate ISPs to share this private information.\nFigure 1 demonstrates how our model of contracts fits together.\nWe make the assumption that all payments are mediated by contracts.\nThis means that without contractible monitors that attest to, say, latency, payments cannot be conditioned on latency.\nFigure 1: Relationship between monitors and contracts\nWith this model, we may conclude that the level of accountability in today's Internet only permits best-effort contracts.\nNodes cannot condition payments on either quality or path characteristics.\nIs 
there anything wrong with best-effort contracts?\nThe reader might wonder why the Internet needs contracts at all.\nAfter all, in non-network industries, traditional firms invest in research and differentiate their products, all in the hopes of keeping their customers and securing new ones.\nOne might believe that such market forces apply to ISPs as well.\nWe may adopt this as our null hypothesis:\nNull hypothesis: Market forces are sufficient to maintain service diversity and innovation on a network, at least to the same extent as they do in traditional markets.\nThere is a popular intuitive argument that supports this hypothesis, and it may be summarized as follows:\n1.\nAccess providers try to increase their quality to get more consumers.\n2.\nAccess providers are themselves customers for second hop ISPs, and the second hops will therefore try to provide high-quality service in order to secure traffic from access providers.\nAccess providers try to select high-quality transit because that increases their quality.\n3.\nThe process continues through the network, giving every ISP a competitive reason to increase quality.\nWe are careful to model our network in continuous time, in order to capture the essence of this argument.\nWe can, for example, specify equilibria in which nodes switch to a new next hop in the event of a quality drop.\nMoreover, our model allows us to explore any theoretically possible punishments against cheaters, including those that are costly for end-users to administer.\nBy contrast, customers in the real world rarely respond collectively, and often simply seek the best deal currently offered.\nThese constraints limit their ability to punish cheaters.\nEven with these liberal assumptions, however, we find that we must reject our null hypothesis.\nOur model will demonstrate that identifying a cheating ISP is difficult under low accountability, limiting the threat of market-driven punishment.\nWe will define an index of commoditization and show that it increases without bound as data paths grow long.\nTo summarize, we argue that the Internet's lack of accountability must be addressed before the pathologies of commoditization and lack of innovation can be resolved.\nThis leads us to our next topic: How can we leverage accountability to overcome these pathologies?\nWe approach this question from a clean-slate perspective.\nInstead of focusing on 
incremental improvements, we try to imagine how an ideal industry would behave, then derive the level of accountability needed to meet that objective.\nAccording to this approach, we first craft a new equilibrium concept appropriate for network competition.\nOur concept includes the following requirements: First, we require that punishing ISPs that cheat is done without rerouting the path.\nRerouting is likely to prompt end-users to switch providers, punishing access providers who administer punishments correctly.\nNext, we require that the equilibrium cannot be threatened by a coalition of ISPs that exchanges illicit side payments.\nFinally, we require that the punishment mechanism that enforces contracts does not punish innocent nodes that are not in the coalition.\nAlthough ISPs provide complementary services when they form a data path together, they are likely to be horizontal competitors as well.\nIf innocent nodes may be punished, an ISP may decide to deliberately cheat and draw punishment onto itself and its neighbors.\nIn the extreme case, the cheater may force the other ISPs out of business, thereby gaining a monopoly on some routes.\nApplying this equilibrium concept, we derive the monitors needed to maintain innovation and optimize routes.\nThe solution is surprisingly simple: contractible monitors must report the quality of the rest of the path, from each ISP to the destination.\nIt turns out that this is the correct minimum accountability requirement, as opposed to either end-to-end monitors or hop-by-hop monitors, as one might initially suspect.\nRest of path monitors can be implemented in various ways.\nWe describe a way to construct a rest of path monitor out of monitors for individual ISP quality and for the data path.\nThis requires a mechanism to motivate ISPs to share their monitor outputs with each other.\nThe rest of path monitor then includes the component monitors and the distributed algorithmic mechanism that ensures that information is shared as required.\nThis example shows that other types of monitors may be useful as building blocks, but must be combined to 
form rest of path monitors in order to achieve ideal innovation characteristics.\nOur study has several practical implications for future protocol design.\nWe show that new monitors must be implemented and integrated with the contracting system before the pathologies of commoditization and lack of innovation can be overcome.\nMoreover, we derive exactly what monitors are needed to optimize routes and support innovation.\nThe rest of this paper is organized as follows: In section 2, we lay out our basic network model.\nIn section 3, we present a low-accountability network, modeled after today's Internet.\nWe demonstrate how poor monitoring causes commoditization and a lack of innovation.\nIn section 4, we present verifiable monitors, and show that proofs, even without contracts, can improve the status quo.\nIn section 5, we turn our attention to contractible monitors.\nWe show that rest of path monitors can support competition games with optimal routing and innovation.\nWe further show that rest of path monitors are required to support such competition games.\nWe continue by discussing how such monitors may be constructed using other monitors as building blocks.\nIn section 6, we conclude and present several directions for future research.\n6.\nCONCLUSIONS AND FUTURE WORK\nOur study makes three contributions.\nThe first is practical: we believe our analysis has implications for the design of future monitoring protocols and for public policy.\nFor protocol designers, we first provide fresh motivation to create monitoring systems.\nWe have argued that the poor accountability of the Internet is a fundamental obstacle to alleviating the pathologies of commoditization and lack of innovation.\nUnless accountability improves, these pathologies are guaranteed to remain.\nSecondly, we suggest directions for future advances in monitoring.\nWe have shown that adding verifiability to monitors allows for some improvements in the characteristics of competition.\nAt the same time, this does not present a fully 
satisfying solution.\nThis paper has suggested a novel standard for monitors to aspire to--one of supporting optimal routes in innovative competition games under fixed-route coalition-proof protect-the-innocent equilibrium.\nWe have shown that under bilateral contracts, this specifically requires contractible rest of path monitors.\nThis is not to say that other types of monitors are unimportant.\nWe included an example in which individual hop quality monitors and a path monitor can also meet our standard for sustaining competition.\nHowever, in order for this to happen, a mechanism must be included to combine proofs from these monitors to form a proof of rest of path quality.\nIn other words, the monitors must ultimately be combined to form contractible rest of path monitors.\nTo support service differentiation and innovation, it may be easier to design rest of path monitors directly, thereby avoiding the task of designing mechanisms for combining component monitors.\nAs for policy implications, our analysis points to the need for legal institutions to enforce contracts based on quality.\nThese institutions must be equipped to verify proofs of quality, and police illegal contracting behavior.\nThis remains an interesting and unexplored area for research.\nThe second area we hope our study will benefit is that of clean-slate architectural design.\nTraditionally, clean-slate design tends to focus on creating effective and elegant networks for a static set of requirements.\nWe have demonstrated that the network's monitoring ability is critical to supporting innovation, as are the institutions that support contracting.\nThese elements should feature prominently in new designs.\nOur analysis specifically suggests that architectures based on bilateral contracts should include contractible rest of path monitoring.\nFrom a clean-slate perspective, these monitors can be transparently and fully integrated with the routing and contracting systems.\nFinally, the last 
contribution our study makes is methodological.\nWe believe that the mathematical formalization we present is applicable to a variety of future research questions.\nThis allows the discovery of new properties, such as the weakening of market forces as the number of ISPs on a data path grows, which we observe under low accountability.\nOur method also stands in contrast to the typical approach of distributed algorithmic mechanism design.\nBecause this field is based on a principal-agent framework, contracts are usually proposed by the source, who is allowed to make a take-it-or-leave-it offer to network nodes.\nOur technique allows contracts to emerge from a competitive framework, so the source is limited to selecting the most desirable contract.\nWe believe this is a closer reflection of the industry.\nBased on the insights in this study, the possible directions for future research are numerous and exciting.\nTo some degree, contracting based on quality opens a Pandora's Box of pressing questions: Do quality-based contracts stand counter to the principle of network neutrality?\nShould ISPs be allowed to offer a choice of contracts at different quality levels?\nWhat anti-competitive behaviors are enabled by quality-based contracts?\nCan a contracting system support optimal multicast trees?\nIn this study, we have focused on bilateral contracts.\nThis system has seemed natural, especially since it is the prevalent system on the current network.\nPerhaps its most important benefit is that each contract is local in nature, so both parties share a common, familiar legal jurisdiction.\nAlthough this benefit is compelling, it is worth considering other systems.\nThe clearest alternative is to form a contract between the source and every node on the path.\nWe may call these source contracts.\nSource contracting may present surprising advantages.\nFor instance, since ISPs do not exchange money with each other, an ISP cannot save money by selecting a cheaper next hop.\nAdditionally, 
if the source only has contracts with nodes on the intended path, other nodes won't even be willing to accept packets from this source since they won't receive compensation for carrying them.\nThis combination seems to eliminate all temptation for a single cheater to cheat in route.\nBecause of this and other encouraging features, we believe source contracts are a fertile topic for further study.\nAnother important research task is to relax our assumption that quality can be measured fully and precisely.\nEven more relevant is the possibility that quality monitors are fundamentally incomplete.\nA quality monitor can never anticipate every dimension of quality that future applications will care about, nor can it anticipate a new and valuable protocol that an ISP introduces.\nClearly, innovations that leave quality invariant under m are not easy to support--they are invisible to the monitoring system.\nIn this environment, we expect that path monitoring becomes more important, since it is the only way to ensure data reaches certain innovator ISPs.\nFurther research is needed to understand this process.","lvl-2":"Network Monitors and Contracting Systems: Competition and Innovation\nABSTRACT\nToday's Internet industry suffers from several well-known pathologies, but none is as destructive in the long term as its resistance to evolution.\nRather than introducing new services, ISPs are presently moving towards greater commoditization.\nIt is apparent that the network's primitive system of contracts does not align incentives properly.\nIn this study, we identify the network's lack of accountability as a fundamental obstacle to correcting this problem: Employing an economic model, we argue that optimal routes and innovation are impossible unless new monitoring capability is introduced and incorporated with the contracting system.\nFurthermore, we derive the minimum requirements a monitoring system must meet to support first-best routing and innovation characteristics.\nOur 
work does not constitute a new protocol; rather, we provide practical and specific guidance for the design of monitoring systems, as well as a theoretical framework to explore the factors that influence innovation.\n1.\nINTRODUCTION\nMany studies before us have noted the Internet's resistance to new services and evolution.\nIn recent decades, numerous ideas have been developed in universities, implemented in code, and even written into the routers and end systems of the network, only to languish as network operators fail to activate them on a large scale.\nThe list includes Multicast, IPv6, IntServ, and DiffServ.\nIf ISPs lack the incentive even to activate existing services, there is little hope of their devoting adequate resources to developing new ideas.\nIn the long term, this pathology stands out as a critical obstacle to the network's continued success (Ratnasamy, Shenker, and McCanne provide extensive discussion in [11]).\nOn a smaller time scale, ISPs shun new services in favor of cost-cutting measures.\nThus, the network has characteristics of a commodity market.\nAlthough, in theory, ISPs have a plethora of routing policies at their disposal, the prevailing strategy is to route in the cheapest way possible [2].\nOn one hand, this leads directly to suboptimal routing.\nMore importantly, commoditization in the short term is surely related to the lack of innovation in the long term.\nWhen the routing decisions of others ignore quality characteristics, ISPs are motivated only to lower costs.\nThere is simply no reward for introducing new services or investing in quality improvements.\nIn response to these pathologies and others, researchers have put forth various proposals for improving the situation.\nThese can be divided according to three high-level strategies: The first attempts to improve the status quo by empowering end-users.\nClark, et al., suggest that giving end-users control over routing would lead to greater service diversity, recognizing that some 
payment mechanism must also be provided [5].\nRatnasamy, Shenker, and McCanne postulate a link between network evolution and user-directed routing [11].\nThey propose a system of Anycast to give end-users the ability to tunnel their packets to an ISP that introduces a desirable protocol.\nThe extra traffic to the ISP, the authors suggest, will motivate the initial investment.\nThe second strategy suggests a revision of the contracting system.\nThis is exemplified by MacKie-Mason and Varian, who propose a \"smart market\" to control access to network resources [10].\nPrices are set to the market-clearing level based on bids that users associate with their traffic.\nIn another direction, Afergan and Wroclawski suggest that prices should be explicitly encoded in the routing protocols [2].\nThey argue that such a move would improve stability and align incentives.\nThe third high-level strategy calls for greater network accountability.\nIn this vein, Argyraki, et al., propose a system of packet obituaries to provide feedback as to which ISPs drop packets [3].\nThey argue that such feedback would help reveal which ISPs were adequately meeting their contractual obligations.\nUnlike the first two strategies, the third has not, to our knowledge, been connected in previous studies with the pathologies of commoditization or lack of innovation.\nIt is clear that these three strategies are closely linked to each other (for example, [2], [5], and [9] each argue that giving end-users routing control within the current contracting system is problematic).\nUntil now, however, the relationship between them has been poorly understood.\nThere is currently little theoretical foundation to compare the relative merits of each proposal, and a particular lack of evidence linking accountability with innovation and service 
innovation.\nOur model is highly stylized and may be considered preliminary: it is based on a single source sending data to a single destination.\nNevertheless, the structure is rich enough to expose previously unseen features of network behavior.\nWe will use our model for two main purposes: First, we will argue that the lack of accountability in today's network is a fundamental obstacle to overcoming the pathologies of commoditization and lack of innovation.\nIn other words, unless new monitoring capabilities are introduced, and integrated with the system of contracts, the network cannot achieve optimal routing and innovation characteristics.\nThis result provides motivation for the remainder of the paper, in which we explore how accountability can be leveraged to overcome these pathologies and create a sustainable industry.\nWe will approach this problem from a clean-slate perspective, deriving the level of accountability needed to sustain an ideal competitive structure.\nWhen we say that today's Internet has poor accountability, we mean that it reveals little information about the behavior--or misbehavior--of ISPs.\nThis well-known trait is largely rooted in the network's history.\nIn describing the design philosophy behind the Internet protocols, Clark lists accountability as the least important among seven \"second level goals\" [4].\nAccordingly, accountability received little attention during the network's formative years.\nClark relates this to the network's military context, and finds that had the network been designed for commercial development, accountability would have been a top priority.\nArgyraki, et al., conjecture that applying the principles of layering and transparency may have led to the network's lack of accountability [3].\nAccording to these principles, end hosts should be informed of network problems only to the extent that they are required to adapt.\nThey notice when packet drops occur so that they can perform congestion 
control and retransmit packets.\nDetails of where and why drops occur are deliberately concealed.\nThe network's lack of accountability is highly relevant to a discussion of innovation because it constrains the system of contracts.\nThis is because contracts depend upon external institutions to function--the \"judge\" in the language of incomplete contract theory, or simply the legal system.\nUltimately, if a judge cannot verify that some condition holds, she cannot enforce a contract based on that condition.\nOf course, the vast majority of contracts never end up in court.\nEspecially when a judge's ruling is easily predicted, the parties will typically comply with the contract terms of their own volition.\nThis would not be possible, however, without the judge acting as a last resort.\nAn institution to support contracts is typically complex, but we abstract it as follows: We imagine that a contract is an algorithm that outputs a payment transfer among a set of ISPs (the parties) at every time.\nThis payment is a function of the past and present behaviors of the participants, but only those that are verifiable.\nHence, we imagine that a contract only accepts \"proofs\" as inputs.\nWe will call any process that generates these proofs a contractible monitor.\nSuch a monitor includes metering or sensing devices on the physical network, but it is a more general concept.\nConstructing a proof of a particular behavior may require readings from various devices distributed among many ISPs.\nThe contractible monitor includes whatever distributed algorithmic mechanism is used to motivate ISPs to share this private information.\nFigure 1 demonstrates how our model of contracts fits together.\nWe make the assumption that all payments are mediated by contracts.\nThis means that without contractible monitors that attest to, say, latency, payments cannot be conditioned on latency.\nFigure 1: Relationship between monitors and contracts\nWith this model, 
we may conclude that the level of accountability in today's Internet only permits best-effort contracts.\nNodes cannot condition payments on either quality or path characteristics.\nIs there anything wrong with best-effort contracts?\nThe reader might wonder why the Internet needs contracts at all.\nAfter all, in non-network industries, traditional firms invest in research and differentiate their products, all in the hopes of keeping their customers and securing new ones.\nOne might believe that such market forces apply to ISPs as well.\nWe may adopt this as our null hypothesis:\nNull hypothesis: Market forces are sufficient to maintain service diversity and innovation on a network, at least to the same extent as they do in traditional markets.\nThere is a popular intuitive argument that supports this hypothesis, and it may be summarized as follows: Intuitive argument supporting null hypothesis:\n1.\nAccess providers try to increase their quality to get more consumers.\n2.\nAccess providers are themselves customers for second hop ISPs, and the second hops will therefore try to provide high-quality service in order to secure traffic from access providers.\nAccess providers try to select high-quality transit because that increases their quality.\n3.\nThe process continues through the network, giving every ISP a competitive reason to increase quality.\nWe are careful to model our network in continuous time, in order to capture the essence of this argument.\nWe can, for example, specify equilibria in which nodes switch to a new next hop in the event of a quality drop.\nMoreover, our model allows us to explore any theoretically possible punishments against cheaters, including those that are costly for end-users to administer.\nBy contrast, customers in the real world rarely respond collectively, and often simply seek the best deal currently offered.\nThese constraints limit their ability to punish cheaters.\nEven with these liberal assumptions, however, we find that we 
must reject our null hypothesis.\nOur model will demonstrate that identifying a cheating ISP is difficult under low accountability, limiting the threat of market-driven punishment.\nWe will define an index of commoditization and show that it increases without bound as data paths grow long.\nFurthermore, we will demonstrate a framework in which an ISP's maximum research investment decreases hyperbolically with its distance from the end-user.\nTo summarize, we argue that the Internet's lack of accountability must be addressed before the pathologies of commoditization and lack of innovation can be resolved.\nThis leads us to our next topic: How can we leverage accountability to overcome these pathologies?\nWe approach this question from a clean-slate perspective.\nInstead of focusing on incremental improvements, we try to imagine how an ideal industry would behave, then derive the level of accountability needed to meet that objective.\nAccording to this approach, we first craft a new equilibrium concept appropriate for network competition.\nOur concept includes the following requirements: First, we require that punishing ISPs that cheat is done without rerouting the path.\nRerouting is likely to prompt end-users to switch providers, punishing access providers who administer punishments correctly.\nNext, we require that the equilibrium cannot be threatened by a coalition of ISPs that exchanges illicit side payments.\nFinally, we require that the punishment mechanism that enforces contracts does not punish innocent nodes that are not in the coalition.\nThe last requirement is somewhat unconventional from an economic perspective, but we maintain that it is crucial for any reasonable solution.\nAlthough ISPs provide complementary services when they form a data path together, they are likely to be horizontal competitors as well.\nIf innocent nodes may be punished, an ISP may decide to deliberately cheat and draw punishment onto itself and its neighbors.\nBy cheating, the 
ISP may save resources, thereby ensuring that the punishment is more damaging to the other ISPs, which probably compete with the cheater directly for some customers.\nIn the extreme case, the cheater may force the other ISPs out of business, thereby gaining a monopoly on some routes.\nApplying this equilibrium concept, we derive the monitors needed to maintain innovation and optimize routes.\nThe solution is surprisingly simple: contractible monitors must report the quality of the rest of the path, from each ISP to the destination.\nIt turns out that this is the correct minimum accountability requirement, as opposed to either end-to-end monitors or hop-by-hop monitors, as one might initially suspect.\nRest of path monitors can be implemented in various ways.\nThey may be purely local algorithms that listen for packet echoes.\nAlternately, they can be distributed in nature.\nWe describe a way to construct a rest of path monitor out of monitors for individual ISP quality and for the data path.\nThis requires a mechanism to motivate ISPs to share their monitor outputs with each other.\nThe rest of path monitor then includes the component monitors and the distributed algorithmic mechanism that ensures that information is shared as required.\nThis example shows that other types of monitors may be useful as building blocks, but must be combined to form rest of path monitors in order to achieve ideal innovation characteristics.\nOur study has several practical implications for future protocol design.\nWe show that new monitors must be implemented and integrated with the contracting system before the pathologies of commoditization and lack of innovation can be overcome.\nMoreover, we derive exactly what monitors are needed to optimize routes and support innovation.\nIn addition, our results provide useful input for clean-slate architectural design, and we use several novel techniques that we expect will be applicable to a variety of future research.\nThe rest of this paper 
is organized as follows: In section 2, we lay out our basic network model.\nIn section 3, we present a low-accountability network, modeled after today's Internet.\nWe demonstrate how poor monitoring causes commoditization and a lack of innovation.\nIn section 4, we present verifiable monitors, and show that proofs, even without contracts, can improve the status quo.\nIn section 5, we turn our attention to contractible monitors.\nWe show that rest of path monitors can support competition games with optimal routing and innovation.\nWe further show that rest of path monitors are required to support such competition games.\nWe continue by discussing how such monitors may be constructed using other monitors as building blocks.\nIn section 6, we conclude and present several directions for future research.\n2.\nBASIC NETWORK MODEL\nA source, S, wants to send data to destination, D. S and D are nodes on a directed, acyclic graph, with a finite set of intermediate nodes, V = {1, 2, ..., N}, representing ISPs.\nAll paths lead to D, and every node not connected to D has at least two choices for next hop.\nWe will represent quality by a finite-dimensional vector space, Q, called the quality space.\nEach dimension represents a distinct network characteristic that end-users care about.\nFor example, latency, loss probability, jitter, and IP version can each be assigned to a dimension.\nTo each node, i, we associate a vector in the quality space, q_i ∈ Q.\nThis corresponds to the quality a user would experience if i were the only ISP on the data path.\nLet q ∈ Q^N be the vector of all node qualities.\nOf course, when data passes through multiple nodes, their qualities combine in some way to yield a path quality.\nWe represent this by an associative binary operation, * : Q × Q → Q.\nFor path (v_1, v_2, ..., v_n), the quality is given by q_v1 * q_v2 * ... * q_vn.\nThe * operation reflects the characteristics of each dimension of quality.\nFor example, * can act as an addition in the case of latency, 
multiplication in the case of loss probability, or a minimum-argument function in the case of security.\nWhen data flows along a complete path from S to D, the source and destination, generally regarded as a single player, enjoy utility given by a function of the path quality, u : Q → R. Each node along the path, i, experiences some cost of transmission, c_i.\n2.1 Game Dynamics\nUltimately, we are most interested in policies that promote innovation on the network.\nIn this study, we will use innovation in a fairly general sense.\nInnovation describes any investment by an ISP that alters its quality vector so that at least one potential data path offers higher utility.\nThis includes researching a new routing algorithm that decreases the amount of jitter users experience.\nIt also includes deploying a new protocol that supports quality of service.\nEven more broadly, buying new equipment to decrease latency may also be regarded as innovation.\nFigure 2: Game Dynamics\nInnovation may be thought of as the micro-level process by which the network evolves.\nOur analysis is limited in one crucial respect: We focus on inventions that a single ISP can implement to improve the end-user experience.\nThis excludes technologies that require adoption by all ISPs on the network to function.\nBecause such technologies do not create a competitive advantage, rewarding them is difficult and may require intellectual property or some other market distortion.\nWe defer this interesting topic to future work.\nAt first, it may seem unclear how a large-scale distributed process such as innovation can be influenced by mechanical details like network monitors.\nOur model must draw this connection in a realistic fashion.\nThe rate of innovation depends on the profits that potential innovators expect in the future.\nThe reward generated by an invention must exceed the total cost to develop it, or the inventor will not rationally invest.\nThis reward, in turn, is governed by the competitive 
environment in which the firm operates, including the process by which firms select prices, and agree upon contracts with each other.\nOf course, these decisions depend on how routes are established, and how contracts determine actual monetary exchanges.\nAny model of network innovation must therefore relate at least three distinct processes: innovation, competition, and routing.\nWe select a game dynamics that makes the relation between these processes as explicit as possible.\nThis is represented schematically in Figure 2.\nThe innovation stage occurs first, at time t = \u2212 2.\nIn this stage, each agent decides whether or not to make research investments.\nIf she chooses not to, her quality remains fixed.\nIf she makes an investment, her quality may change in some way.\nIt is not necessary for us to specify how such changes take place.\nThe agents' choices in this stage determine the vector of qualities, q, common knowledge for the rest of the game.\nNext, at time t = \u2212 1, agents participate in the competition stage, in which contracts are agreed upon.\nIn today's industry, these contracts include prices for transit access, and peering agreements.\nSince access is provided on a best-effort basis, a transit agreement can simply be represented by its price.\nOther contracting systems we will explore will require more detail.\nFinally, beginning at t = 0, firms participate in the routing stage.\nOther research has already employed repeated games to study routing, for example [1], [12].\nRepetition reveals interesting effects not visible in a single stage game, such as informal collusion to elevate prices in [12].\nWe use a game in continuous time in order to study such properties.\nFor example, we will later ask whether a player will maintain higher quality than her contracts require, in the hope of keeping her customer base or attracting future customers.\nOur dynamics reflect the fact that ISPs make innovation decisions infrequently.\nAlthough real firms 
have multiple opportunities to innovate, each opportunity is followed by a substantial length of time in which qualities are fixed.\nThe decision to invest focuses on how the firm's new quality will improve the contracts it can enter into.\nHence, our model places innovation at the earliest stage, attempting to capture a single investment decision.\nContracting decisions are made on an intermediate time scale, thus appearing next in the dynamics.\nRouting decisions are made very frequently, mainly to maximize immediate profit flows, so they appear in the last stage.\nBecause of this ordering, our model does not allow firms to route strategically to affect future innovation or contracting decisions.\nIn opposition, Afergan and Wroclawski argue that contracts are formed in response to current traffic patterns, in a feedback loop [2].\nAlthough we are sympathetic to their observation, such an addition would make our analysis intractable.\nOur model is most realistic when contracting decisions are infrequent.\nThroughout this paper, our solution concept will be a subgame perfect equilibrium (SPE).\nAn SPE is a strategy point that is a Nash equilibrium when restricted to each subgame.\nThree important subgames have been labeled in Figure 2.\nThe innovation game includes all three stages.\nThe competition game includes only the competition stage and the routing stage.\nThe routing game includes only the routing stage.\nAn SPE guarantees that players are \"forward-looking.\"\nThis means, for example, that in the competition stage, firms must act rationally, maximizing their expected profits in the routing stage.\nThey cannot carry out threats they made in the innovation stage if it lowers their expected payoff.\nOur schematic already suggests that the routing game is crucial for promoting innovation.\nTo support innovation, the competition game must somehow reward ISPs with \"high\" quality.\nBut that means that the routing game must tend to route to nodes with high 
quality. If the routing game always selects the lowest-cost routes, for example, innovation will not be supported. We will support this observation with analysis later.

2.2 The Routing Game

The routing game proceeds in continuous time, with all players discounting by a common factor, r. The outputs from previous stages, q and the set of contracts, are treated as exogenous parameters for this game. For each time t ≥ 0, each node must select a next hop to route data to. Data flows across the resultant path, causing utility flow to S and D, and a flow cost to the nodes on the path, as described above. Payment flows are also created, based on the contracts in place.

Relating our game to the familiar repeated prisoners' dilemma, imagine that we are trying to impose a high-quality, but costly, path. As we argued loosely above, such paths must be sustainable in order to support innovation. Each ISP on the path tries to maximize her own payment, net of costs, so she may not want to cooperate with our plan. Rather, if she can find a way to save on costs, at the expense of the high quality we desire, she will be tempted to do so. Analogously to the prisoners' dilemma, we will call such a decision cheating. A little more formally, cheating refers to any action that an ISP can take, contrary to some target strategy point that we are trying to impose, that enhances her immediate payoff, but compromises the quality of the data path.

One type of cheating relates to the data path. Each node on the path has to pay the next node to deliver its traffic. If the next node offers high-quality transit, we may expect that a lower-quality node will offer a lower price. Each node on the path will be tempted to route to a cheaper next hop, increasing her immediate profits, but lowering the path quality. We will call this type of action cheating in route. Another possibility we can model is that a node finds a way to save on its internal forwarding costs, at the expense of
its own quality. We will call this cheating internally to distinguish it from cheating in route. For example, a node might drop packets beyond the rate required for congestion control, in order to throttle back TCP flows and thus save on forwarding costs [3]. Alternately, a node employing quality of service could give high-priority packets a lower class of service, thus saving on resources and perhaps allowing itself to sell more high-priority service. If either cheating in route or cheating internally is profitable, the specified path will not be an equilibrium.

We assume that cheating can never be caught instantaneously. Rather, a cheater can always enjoy the payoff from cheating for some positive time, which we label t0. This includes the time for other players to detect and react to the cheating. If the cheater has a contract which includes a customer lock-in period, t0 also includes the time until customers are allowed to switch to a new ISP. As we will see later, it is socially beneficial to decrease t0, so such lock-in is detrimental to welfare.

3. PATHOLOGIES OF A LOW-ACCOUNTABILITY NETWORK

In order to motivate an exploration of monitoring systems, we begin in this section by considering a network with a poor degree of accountability, modeled after today's Internet. We will show how the lack of monitoring necessarily leads to poor routing and diminishes the rate of innovation. Thus, the network's lack of accountability is a fundamental obstacle to resolving these pathologies.

3.1 Accountability in the Current Internet

First, we reflect on what accountability characteristics the present Internet has. Argyraki, et al., point out that end hosts are given minimal information about packet drops [3]. Users know when drops occur, but not where they occur, nor why. Dropped packets may represent the innocent signaling of congestion, or, as we mentioned above, they may be a form of cheating internally. The problem is similar for other dimensions of
quality, or in fact more acute.\nFinding an ISP that gives high priority packets a lower class of service, for example, is further complicated by the lack of even basic diagnostic tools.\nIn fact, it is similarly difficult to identify an ISP that cheats in route.\nHuston notes that Internet traffic flows do not always correspond to routing information [8].\nAn ISP may hand a packet off to a neighbor regardless of what routes that neighbor has advertised.\nFurthermore, blocks of addresses are summarized together for distant hosts, so a destination may not even be resolvable until packets are forwarded closer.\nOne might argue that diagnostic tools like ping and traceroute can identify cheaters.\nUnfortunately, Argyraki, et al., explain that these tools only reveal whether probe packets are echoed, not the fate of past packets [3].\nThus, for example, they are ineffective in detecting low-frequency packet drops.\nEven more fundamentally, a sophisticated cheater can always spot diagnostic packets and give them special treatment.\nAs a further complication, a cheater may assume different aliases for diagnostic packets arriving over different routes.\nAs we will see below, this gives the cheater a significant advantage in escaping punishment for bad behavior, even if the data path is otherwise observable.\n3.2 Modeling Low-Accountability\nAs the above evidence suggests, the current industry allows for very little insight into the behavior of the network.\nIn this section, we attempt to capture this lack of accountability in our model.\nWe begin by defining a monitor, our model of the way that players receive external information about network behavior, A monitor is any distributed algorithmic mechanism that runs on the network graph, and outputs, to specific nodes, informational statements about current or past network behavior.\nWe assume that all external information about network behavior is mediated in this way.\nThe accountability properties of the Internet can be 
represented by the following monitors: E2E (End to End): A monitor that informs S\/D about what the total path quality is at any time (this is the quality they experience).\nROP (Rest of Path): A monitor that informs each node along the data path what the quality is for the rest of the path to the destination.\nPRc (Packets Received): A monitor that tells nodes how much data they accept from each other, so that they can charge by volume.\nIt is important to note, however, that this information is aggregated over many source-destination pairs.\nHence, for the sake of realism, it cannot be used to monitor what the data path is.\nPlayers cannot measure the qualities of other, single nodes, just the rest of the path.\nNodes cannot see the path past the next hop.\nThis last assumption is stricter than needed for our results.\nThe critical ingredient is that nodes cannot verify that the path avoids a specific hop.\nThis holds, for example, if the path is generally visible, except nodes can use different aliases for different parents.\nSimilar results also hold if alternate paths always converge after some integer number, m, of hops.\nIt is important to stress that E2E and ROP are not the contractible monitors we described in the introduction--they do not generate proofs.\nThus, even though a player observes certain information, she generally cannot credibly share it with another player.\nFor example, if a node after the first hop starts cheating, the first hop will detect the sudden drop in quality for the rest of the path, but the first hop cannot make the source believe this observation--the\nsource will suspect that the first hop was the cheater, and fabricated the claim against the rest of the path.\nTypically, E2E and ROP are envisioned as algorithms that run on a single node, and listen for packet echoes.\nThis is not the only way that they could be implemented, however; an alternate strategy is to aggregate quality measurements from multiple points in the 
network.\nThese measurements can originate in other monitors, located at various ISPs.\nThe monitor then includes the component monitors as well as whatever mechanisms are in place to motivate nodes to share information honestly as needed.\nFor example, if the source has monitors that reveal the qualities of individual nodes, they could be combined with path information to create an ROP monitor.\nSince we know that contracts only accept proofs as input, we can infer that payments in this environment can only depend on the number of packets exchanged between players.\nIn other words, contracts are best-effort.\nFor the remainder of this section, we will assume that contracts are also linear--there is a constant payment flow so long as a node accepts data, and all conditions of the contract are met.\nOther, more complicated tariffs are also possible, and are typically used to generate lock-in.\nWe believe that our parameter t0 is sufficient to describe lock-in effects, and we believe that the insights in this section apply equally to any tariffs that are bounded so that the routing game remains continuous at infinity.\nRestricting attention to linear contracts allows us to represent some node i's contract by its price, pi.\nBecause we further know that nodes cannot observe the path after the next hop, we can infer that contracts exist only between neighboring nodes on the graph.\nWe will call this arrangement of contracts bilateral.\nWhen a competition game exclusively uses bilateral contracts, we will call it a bilateral contract competition game.\nWe first focus on the routing game and ask whether a high quality route can be maintained, even when a low quality route is cheaper.\nRecall that this is a requirement in order for nodes to have any incentive to innovate.\nIf nodes tend to route to low price next hops, regardless of quality, we say that the network is commoditized.\nTo measure this tendency, we define an index of commoditization as follows: For a node on 
the data path, i, define its quality premium, di = pj − pmin, where pj is the flow payment to the next hop in equilibrium, and pmin is the price of the lowest-cost next hop.

Definition: The index of commoditization, IC, is the average, over each node on the data path, i, of i's flow profit as a fraction of i's quality premium, (pi − ci − pj) / di.

IC ranges from 0, when each node spends all of its potential profit on its quality premium, to infinity, when a node absorbs positive profit, but uses the lowest-price next hop. A high value for IC implies that nodes are spending little of their money inflow on purchasing high quality for the rest of the path. As the next claim shows, this is exactly what happens as the path grows long:

Claim 1. If the only monitors are E2E, ROP, and PRc, then IC → ∞ as n → ∞, where n is the number of nodes on the data path.

To show that this is true, we first need the following lemma, which will establish the difficulty of punishing nodes in the network. First, a bit of notation: Recall that a cheater can benefit from its actions for t0 > 0 before other players can react. When a node cheats, it can expect a higher profit flow, at least until it is caught and other players react, perhaps by diverting traffic. Let node i's normal profit flow be πi, and her profit flow during cheating be some greater value, γi. We will call the ratio, γi / πi, the temptation to cheat.

Lemma 1. If the only monitors are E2E, ROP, and PRc, the discounted time, (1 − e^(−r tn)) / r, needed to punish a cheater increases at least in proportion to the product of the temptations on the data path:

(1 − e^(−r tn)) / r ≥ (∏i on data path γi / πi − 1) · (1 − e^(−r t0)) / r.

Corollary. If nodes share a minimum temptation to cheat, γ / π, the discounted time needed to punish cheating increases at least exponentially in the length of the data path, n:

(1 − e^(−r tn)) / r ≥ ((γ / π)^n − 1) · (1 − e^(−r t0)) / r.

Since it is the discounted time that increases exponentially, the actual time increases faster than exponentially. If n is so large that tn is undefined, the given path cannot be maintained in equilibrium.

Proof. The proof proceeds by
induction on the number of nodes on the equilibrium data path, n. For n = 1, there is a single node, say i. By cheating, the node earns extra profit (γi − πi)(1 − e^(−r t0)) / r. To deter this, the source must withhold the node's normal profit flow, πi, for a discounted time of at least (1 − e^(−r t1)) / r ≥ (γi / πi − 1)(1 − e^(−r t0)) / r, which establishes the base case.

For n > 1, assume for induction that the claim holds for n − 1. The source does not know whether the cheater is the first hop, or after the first hop. Because the source does not know the data path after the first hop, it is unable to punish nodes beyond it. If it chooses a new first hop, it might not affect the rest of the data path. Because of this, the source must rely on the first hop to punish cheating nodes farther along the path. The first hop needs discounted time (1 − e^(−r t(n−1))) / r to accomplish this by assumption, so the source must give the first hop this much discounted time in order to punish defectors further down the line (and the source will expect poor quality during this period). Next, the source must be protected against a first hop that cheats, and pretends that the problem is later in the path. The first hop can do this for the full discounted time, (1 − e^(−r t(n−1))) / r, so the source must punish the first hop long enough to remove the extra profit it can make. Following the same argument as for n = 1, we obtain

(1 − e^(−r tn)) / r ≥ (∏i γi / πi − 1) · (1 − e^(−r t0)) / r, the product taken over nodes i on the data path,

which completes the proof. ❑

The above lemma and its corollary show that punishing cheaters becomes more and more difficult as the data path grows long, until doing so is impossible. To capture some intuition behind this result, imagine that you are an end user, and you notice a sudden drop in service quality. If your data only travels through your access provider, you know it is that provider's fault. You can therefore take your business elsewhere, at least for some time. This threat should motivate your provider to maintain high quality. Suppose, on the other hand, that your data traverses two providers. When you complain to your ISP, he responds, "yes, we know your quality went down, but it's not our fault, it's the next ISP. Give us some time to punish them and then normal
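The blow-up in punishment time can be illustrated numerically. The sketch below assumes the required discounted punishment time takes the closed form (g^n − 1) times the discounted cheating window for a shared temptation g; this closed form is our reading of the recursion, not a formula taken verbatim from the paper, and all parameter values are invented:

```python
# Numerical sketch: the required discounted punishment time grows like
# (g**n - 1) * discounted(t0), while discounted time is capped at 1/r,
# so sufficiently long paths become unenforceable.
import math

def discounted(t, r):
    """Discounted duration of an interval [0, t]: (1 - e^(-r t)) / r."""
    return (1.0 - math.exp(-r * t)) / r

def required_punish_time(n, r, t0, g):
    """Actual punishment time t_n for an n-node path with shared temptation g,
    or None if no finite time suffices (the path cannot be maintained)."""
    target = (g ** n - 1.0) * discounted(t0, r)
    if target >= 1.0 / r:  # discounted time can never reach 1/r
        return None
    return -math.log(1.0 - r * target) / r

# Invented parameters: discount rate r, reaction time t0, temptation g.
r, t0, g = 0.1, 1.0, 1.5
times = [required_punish_time(n, r, t0, g) for n in range(1, 7)]
```

With these particular parameters, the required actual time grows much faster than the discounted target, and beyond some path length no finite punishment time exists at all.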
quality will resume." If your access provider is telling the truth, you will want to listen, since switching access providers may not even route around the actual offender. Thus, you will have to accept lower-quality service for some longer time. On the other hand, you may want to punish your access provider as well, in case he is lying. This means you have to wait longer to resume normal service. As more ISPs are added to the path, the time increases in a recursive fashion.

With this lemma in hand, we can return to prove Claim 1.

Proof of Claim 1. Fix an equilibrium data path of length n. Label the path nodes 1, 2, ..., n. For each node i, let i's quality premium be di = pi+1 − p'i+1, where pi+1 is the equilibrium payment to i's next hop and p'i+1 is the price of i's lowest-price next hop. Node i's temptation to cheat by routing to the lowest-price next hop is then gi = (pi − ci − p'i+1) / (pi − ci − pi+1) = 1 + di / (pi − ci − pi+1). By Lemma 1, the product of the gi must remain bounded for the path to be enforceable in equilibrium, so as n grows, the gi must approach 1 on average. Hence the average of (pi − ci − pi+1) / di = 1 / (gi − 1), which is IC, grows without bound. ❑

According to the claim, as the data path grows long, it increasingly resembles a lowest-price path. Since lowest-price routing does not support innovation, we may speculate that innovation degrades with the length of the data path. Though we suspect stronger claims are possible, we can demonstrate one such result by including an extra assumption:

Available Bargain Path: A competitive market exists for low-cost transit, such that every node can route to the destination for no more than flow payment, pl.

Claim 2. Under the available bargain path assumption, if node i, a distance n from S, can invest to alter its quality, and the source will spend no more than Ps for a route including node i's new quality, then the maximum payment to node i, p, decreases hyperbolically with n:

p ≤ pl + (Ps − pl) / (1 + (n − 1) r t0).

The proof is given in the appendix. As a node gets farther from the source, its maximum payment approaches the bargain price, pl. Hence, the reward for innovation is bounded by the same amount. Large innovations, meaning substantially more expensive than pl / r, will not be pursued deep into the network. Claim 2 can alternately be viewed as a lower bound on how much it costs to elicit innovation in a
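The index of commoditization itself can be computed directly from its definition; a minimal sketch, with invented prices and costs (the dictionary layout is ours):

```python
# Illustrative computation of the index of commoditization, IC, straight
# from its definition. All prices and costs are invented for the example.

def index_of_commoditization(path):
    """path: list of dicts, one per node i on the data path, with keys
       p_in   : flow payment node i receives (p_i),
       c      : node i's transmission cost (c_i),
       p_next : flow payment to i's chosen next hop (p_j),
       p_min  : price of i's lowest-cost next hop (p_min).
    Returns the average over nodes of flow profit / quality premium."""
    ratios = []
    for node in path:
        premium = node["p_next"] - node["p_min"]            # d_i = p_j - p_min
        profit = node["p_in"] - node["c"] - node["p_next"]  # p_i - c_i - p_j
        ratios.append(profit / premium)
    return sum(ratios) / len(ratios)

# A node spending all potential profit on its quality premium: ratio 0.
commodity_free = [{"p_in": 10.0, "c": 2.0, "p_next": 8.0, "p_min": 0.0}]
# A node keeping positive profit over a tiny premium: a large ratio.
commoditized = [{"p_in": 10.0, "c": 2.0, "p_next": 4.0, "p_min": 3.9}]
```

The first node forwards to an expensive, high-quality next hop and keeps no profit; the second keeps a large profit while paying almost nothing extra for quality, which drives IC up.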
network.\nIf the source S wants node i to innovate, it needs to get a motivating payment, p, to i during the routing stage.\nHowever, it must also pay the nodes on the way to i a premium in order to motivate them to route properly.\nThe claim shows that this premium increases with the distance to i, until it dwarfs the original payment, p.\nOur claims stand in sharp contrast to our null hypothesis from the introduction.\nComparing the intuitive argument that supported our hypothesis with these claims, we can see that we implicitly used an oversimplified model of market pressure (as either present or not).\nAs is now clear, market pressure relies on the decisions of customers, but these are limited by the lack of information.\nHence, competitive forces degrade as the network deepens.\n4.\nVERIFIABLE MONITORS\nIn this section, we begin to introduce more accountability into the network.\nRecall that in the previous section, we assumed that players couldn't convince each other of their private information.\nWhat would happen if they could?\nIf a monitor's informational signal can be credibly conveyed to others, we will call it a verifiable monitor.\nThe monitor's output in this case can be thought of as a statement accompanied by a proof, a string that can be processed by any player to determine that the statement is true.\nA verifiable monitor is a distributed algorithmic mechanism that runs on the network graph, and outputs, to specific nodes, proofs about current or past network behavior.\nAlong these lines, we can imagine verifiable counterparts to E2E and ROP.\nWe will label these E2Ev and ROPv.\nWith these monitors, each node observes the quality of the rest of the path and can also convince other players of these observations by giving them a proof.\nBy adding verifiability to our monitors, identifying a single cheater is straightforward.\nThe cheater is the node that cannot produce proof that the rest of path quality decreased.\nThis means that the negative 
results of the previous section no longer hold. For example, the following lemma stands in contrast to Lemma 1.

Lemma 2. With monitors E2Ev, ROPv, and PRc, and provided that the node before each potential cheater has an alternate next hop that isn't more expensive, it is possible to enforce any data path in SPE, so long as the maximum temptation is less than what can be deterred in finite time.

Proof. This lemma follows because nodes can share proofs to identify who the cheater is. Only that node must be punished in equilibrium, and the preceding node does not lose any payoff in administering the punishment. ❑

With this lemma in mind, it is easy to construct counterexamples to Claim 1 and Claim 2 in this new environment. Unfortunately, there are at least four reasons not to be satisfied with this improved monitoring system.

The first, and weakest, reason is that the maximum temptation remains finite, causing some distortion in routes or payments. Each node along a route must extract some positive profit unless the next hop is also the cheapest. Of course, if t0 is small, this effect is minimal.

The second, and more serious, reason is that we have always given our source the ability to commit to any punishment. Real-world users are less likely to act collectively, and may simply search for the best service currently offered. Since punishment phases are generally characterized by a drop in quality, real-world end-users may take this opportunity to shop for a new access provider. This will make nodes less motivated to administer punishments.

The third reason is that Lemma 2 does not apply to cheating by coalitions. A coalition node may pretend to punish its successor, but instead enjoy a secret payment from the cheating node. Alternately, a node may bribe its successor to cheat, if the punishment phase is profitable, and so forth. The required discounted time for punishment may increase exponentially in the number of coalition members, just as in
the previous section!\nThe final reason not to accept this monitoring system is that when a cheater is punished, the path will often be routed around not just the offender, but around other nodes as well.\nEffectively, innocent nodes will be punished along with the guilty.\nIn our abstract model, this doesn't cause trouble since the punishment falls off the equilibrium path.\nThe effects are not so benign in the real world.\nWhen ISPs lie in sequence along a data path, they contribute complementary services, and their relationship is vertical.\nFrom the perspective of other source-destination pairs, however, these same firms are likely to be horizontal competitors.\nBecause of this, a node might deliberately cheat, in order to trigger punishment for itself and its neighbors.\nBy cheating, the node will save money to some extent, so the cheater is likely to emerge from the punishment phase better off than the innocent nodes.\nThis may give the cheater a strategic advantage against its competitors.\nIn the extreme, the cheater may use such a strategy to drive neighbors out of business, and thereby gain a monopoly on some routes.\n5.\nCONTRACTIBLE MONITORS\nAt the end of the last section, we identified several drawbacks that persist in an environment with E2Ev, ROPv, and PRc.\nIn this section, we will show how all of these drawbacks can be overcome.\nTo do this, we will require our third and final category of monitor: A contractible monitor is simply a verifiable monitor that generates proofs that can serve as input to a contract.\nThus, contractible is jointly a property of the monitor and the institutions that must verify its proofs.\nContractibility requires that a court,\n1.\nCan verify the monitor's proofs.\n2.\nCan understand what the proofs and contracts represent to the extent required to police illegal activity.\n3.\nCan enforce payments among contracting parties.\nUnderstanding the agreements between companies has traditionally been a matter of reading 
contracts on paper. This may prove to be a harder task in a future network setting. Contracts may plausibly be negotiated by machine, be numerous, even per-flow, and be further complicated by the many dimensions of quality. When a monitor (together with institutional infrastructure) meets these criteria, we will label it with a subscript c, for contractible. The reader may recall that this is how we labeled the packets received monitor, PRc, which allows ISPs to form contracts with per-packet payments. Similarly, E2Ec and ROPc are contractible versions of the monitors we are now familiar with.

At the end of the previous section, we argued for some desirable properties that we'd like our solution to have. Briefly, we would like to enforce optimal data paths with an equilibrium concept that doesn't rely on re-routing for punishment, is coalition-proof, and doesn't punish innocent nodes when a coalition cheats. We will call such an equilibrium a fixed-route coalition-proof protect-the-innocent equilibrium. As the next claim shows, ROPc allows us to create a system of linear (price, quality) contracts under just such an equilibrium.

Claim 3. With ROPc, for any feasible and consistent assignment of rest of path qualities to nodes, and any corresponding payment schedule that yields non-negative payoffs, these qualities can be maintained with bilateral contracts in a fixed-route coalition-proof protect-the-innocent equilibrium.

Proof: Fix any data path consistent with the given rest of path qualities. Select some monetary punishment, P, large enough to prevent any cheating for time t0 (the discounted total payment from the source will work). Let each node on the path enter into a contract with its parent, which fixes an arbitrary payment schedule so long as the rest of path quality is as prescribed. When the parent node, which has ROPc, submits a proof that the rest of path quality is less than expected, the contract awards her an instantaneous transfer, P,
from the downstream node.\nSuch proofs can be submitted every t0 for the previous interval.\nSuppose now that a coalition, C, decides to cheat.\nThe source measures a decrease in quality, and according to her contract, is awarded P from the first hop.\nThis means that there is a net outflow of P from the ISPs as a whole.\nSuppose that node i is not in C.\nIn order for the parent node to claim P from i, it must submit proof that the quality of the path starting at i is not as prescribed.\nThis means\nthat there is a cheater after i. Hence, i would also have detected a change in quality, so i can claim P from the next node on the path.\nThus, innocent nodes are not punished.\nThe sequence of payments must end by the destination, so the net outflow of P must come from the nodes in C.\nThis establishes all necessary conditions of the equilibrium.\n\u2751 Essentially, ROPc allows for an implementation of (price, quality) contracts.\nBuilding upon this result, we can construct competition games in which nodes offer various qualities to each other at specified prices, and can credibly commit to meet these performance targets, even allowing for coalitions and a desire to damage other ISPs.\nExample 1.\nDefine a Stackelberg price-quality competition game as follows: Extend the partial order of nodes induced by the graph to any complete ordering, such that downstream nodes appear before their parents.\nIn this order, each node selects a contract to offer to its parents, consisting of a rest of path quality, and a linear price.\nIn the routing game, each node selects a next hop at every time, consistent with its advertised rest of path quality.\nThe Stackelberg price-quality competition game can be implemented in our model with ROPc monitors, by using the strategy in the proof, above.\nIt has the following useful property: Claim 4.\nThe Stackelberg price-quality competition game yields optimal routes in SPE.\nThe proof is given in the appendix.\nThis property is favorable 
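The cascade of penalty transfers in the proof of Claim 3 can be sketched as a small simulation. The node labels, function name, and data layout below are ours, invented for illustration:

```python
# Sketch of Claim 3's penalty cascade: every node before the cheater can
# prove that its rest-of-path quality dropped and claims P from its
# successor, so innocent nodes net zero and the outflow P comes from the
# cheater, while the source is compensated.

def settle_penalties(path_len, cheater, P):
    """Net transfers when hop `cheater` (1-indexed) cheats.
    Node 0 is the source; nodes 1..path_len are the hops."""
    net = {i: 0.0 for i in range(path_len + 1)}
    # Each node strictly before the cheater observes (and can prove) the
    # degraded rest-of-path quality, claiming P from its successor.
    for i in range(cheater):
        net[i] += P          # node i collects P from node i + 1
        net[i + 1] -= P      # node i + 1 pays P to node i
    return net

net = settle_penalties(path_len=4, cheater=3, P=100.0)
```

Nodes after the cheater are untouched as well: the rest of the path starting at them has the prescribed quality, so no valid proof can be submitted against them.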
from an innovation perspective, since firms that invest in high quality will tend to fall on the optimal path, gaining positive payoff. In general, however, investments may be over- or under-rewarded. Extra conditions may be given under which innovation decisions approach perfect efficiency for large innovations. We omit the full analysis here. ❑

Example 2. Alternately, we can imagine that players report their private information to a central authority, which then assigns all contracts. For example, contracts could be computed to implement the cost-minimizing VCG mechanism proposed by Feigenbaum, et al. in [7]. With ROPc monitors, we can adapt this mechanism to maximize welfare. For node, i, on the optimal path, L, the net payment must equal, essentially, its contribution to the welfare of S, D, and the other nodes. If L' is an optimal path in the graph with i removed, the profit flow to i is

(u(qL) − Σj∈L, j≠i cj) − (u(qL') − Σj∈L' cj),

where qL and qL' are the qualities of the two paths. Here, (price, quality) contracts ensure that nodes report their qualities honestly. The incentive structure of the VCG mechanism is what motivates nodes to report their costs accurately. A nice feature of this game is that individual innovation decisions are efficient, meaning that a node will invest in an innovation whenever the investment cost is less than the increased welfare of the optimal data path. Unfortunately, the source may end up paying more than the utility of the path. ❑

Notice that with just E2Ec, a weaker version of Claim 3 holds. Bilateral (price, quality) contracts can be maintained in an equilibrium that is fixed-route and coalition-proof, but not protect-the-innocent. This is done by writing contracts to punish everyone on the path when the end-to-end quality drops. If the path length is n, the first hop pays nP to the source, the second hop pays (n − 1)P to the first, and so forth. This ensures that every node is punished sufficiently to make cheating unprofitable. For
the reasons we gave previously, we believe that this solution concept is less than ideal, since it allows for malicious nodes to deliberately trigger punishments for potential competitors.

Up to this point, we have adopted fixed-route coalition-proof protect-the-innocent equilibrium as our desired solution concept, and shown that ROPc monitors are sufficient to create some competition games that are desirable in terms of service diversity and innovation. As the next claim will show, rest of path monitoring is also necessary to construct such games under our solution concept. Before we proceed, what does it mean for a game to be desirable from the perspective of service diversity and innovation? We will use a very weak assumption, essentially, that the game is not fully commoditized for any node. The claim will hold for this entire class of games.

Definition: A competition game is nowhere-commoditized if for each node, i, not adjacent to D, there is some assignment of qualities and marginal costs to nodes, such that the optimal data path includes i, and i has a positive temptation to cheat. In the case of linear contracts, it is sufficient to require that IC < ∞, and that every node make positive profit under some assignment of qualities and marginal costs.

Strictly speaking, ROPc monitors are not the only way to construct these desirable games. To prove the next claim, we must broaden our notion of rest of path monitoring to include the similar ROPc' monitor, which attests to the quality starting at its own node, through the end of the path. Compare the two monitors below:

ROPc: gives a node proof that the path quality from the next node to the destination is not correct.

ROPc': gives a node proof that the path quality from that node to the destination is correct.

We present a simplified version of this claim, by including an assumption that only one node on the path can cheat at a time (though conspirators can still exchange side payments). We will
discuss the full version after the proof.\nClaim 5.\nAssume a set of monitors, and a nowhere-commoditized bilateral contract competition game that always maintains the optimal quality in fixed-route coalition-proof protect-the-innocent equilibrium, with only one node allowed to cheat at a time.\nThen for each node, i, not adjacent to D, either i has an ROPc monitor, or i's children each have an ROPc' monitor.\nProof: First, because of the fixed-route assumption, punishments must be purely monetary.\nNext, when cheating occurs, if the payment does not go to the source or destination, it may go to another coalition member, rendering it ineffective.\nThus, the source must accept some monetary compensation, net of its normal flow payment, when cheating occurs.\nSince the source only contracts with the first hop, it must accept this money from the first hop.\nThe source's contract must therefore distinguish when the path quality is normal from when it is lowered by cheating.\nTo do so, it can either accept proofs\nfrom the source, that the quality is lower than required, or it can accept proofs from the first hop, that the quality is correct.\nThese nodes will not rationally offer the opposing type of proof.\nBy definition, any monitor that gives the source proof that the path quality is wrong is an ROPc monitor.\nAny monitor that gives the first hop proof that the quality is correct is a ROPc' monitor.\nThus, at least one of these monitors must exist.\nBy the protect-the-innocent assumption, if cheating occurs, but the first hop is not a cheater, she must be able to claim the same size reward from the next ISP on the path, and thus \"pass on\" the punishment.\nThe first hop's contract with the second must then distinguish when cheating occurs after the first hop.\nBy argument similar to that for the source, either the first hop has a ROPc monitor, or the second has a ROPc' monitor.\nThis argument can be iterated along the entire path to the penultimate node before 
D.\nSince the marginal costs and qualities can be arranged to make any path the optimal path, these statements must hold for all nodes and their children, which completes the proof.\n\u2751 The two possibilities for monitor correspond to which node has the burden of proof.\nIn one case, the prior node must prove the suboptimal quality to claim its reward.\nIn the other, the subsequent node must prove that the quality was correct to avoid penalty.\nBecause the two monitors are similar, it seems likely that they require comparable costs to implement.\nIf submitting the proofs is costly, it seems natural that nodes would prefer to use the ROPc monitor, placing the burden of proof on the upstream node.\nFinally, we note that it is straightforward to derive the full version of the claim, which allows for multiple cheaters.\nThe only complication is that cheaters can exchange side payments, which makes any money transfers between them redundant.\nBecause of this, we have to further generalize our rest of path monitors, so they are less constrained in the case that there are cheaters on either side.\n5.1 Implementing Monitors\nClaim 5 should not be interpreted as a statement that each node must compute the rest of path quality locally, without input from other nodes.\nOther monitors, besides ROPc and ROPc' can still be used, loosely speaking, as building blocks.\nFor instance, network tomography is concerned with measuring properties of the network interior with tools located at the edge.\nUsing such techniques, our source might learn both individual node qualities and the data path.\nThis is represented by the following two monitors: SHOPci: (source-based hop quality) A monitor that gives the source proof of what the quality of node i is.\nSPATHc: (source-based path) A monitor that gives the source proof of what the data path is at any time, at least as far as it matches the equilibrium path.\nWith these monitors, a punishment mechanism can be designed to fulfill the 
conditions of Claim 5.\nIt involves the source sharing the proofs it generates with nodes further down the path, which use them to determine bilateral payments.\nUltimately however, the proof of Claim 5 shows us that each node i's bilateral contracts require proof of the rest of path quality.\nThis means that node i (or possibly its children) will have to combine the proofs that they receive to generate a proof of the rest of path quality.\nThus, the combined process is itself a rest of path monitor.\nWhat we have done, all in all, is constructed a rest of path monitor using SPATHc and SHOPci as building blocks.\nOur new monitor includes both the component monitors and whatever distributed algorithmic mechanism exists to make sure nodes share their proofs correctly.\nThis mechanism can potentially involve external institutions.\nFor a concrete example, suppose that when node i suspects it is getting poor rest of path quality from its successor, it takes the downstream node to court.\nDuring the discovery process, the court subpoenas proofs of the path and of node qualities from the source (ultimately, there must be some threat to ensure the source complies).\nFinally, for the court to issue a judgment, one party or the other must compile a proof of what the rest of path quality was.\nHence, the entire discovery process acts as a rest of path monitor, albeit a rather costly monitor in this case.\nOf course, mechanisms can be designed to combine these monitors at much lower cost.\nTypically, such mechanisms would call for automatic sharing of proofs, with court intervention only as a last resort.\nWe defer these interesting mechanisms to future work.\nAs an aside, intuition might dictate that SHOPci generates more information than ROPc; after all, inferring individual node qualities seems a much harder problem.\nYet, without path information, SHOPci is not sufficient for our first-best innovation result.\nThe proof of this demonstrates a useful technique: Claim 
6. With monitors E2E, ROP, SHOPci and PRc, and a nowhere-commoditized bilateral contract competition game, the optimal quality cannot be maintained for all assignments of quality and marginal cost, in fixed-route coalition-proof protect-the-innocent equilibrium.

Proof: Because nodes cannot verify the data path, they cannot form a proof of what the rest of path quality is. Hence, ROPc monitors do not exist, and therefore the requirements of Claim 5 cannot hold. ❑

6. CONCLUSIONS AND FUTURE WORK

It is our hope that this study will have a positive impact in at least three different ways. The first is practical: we believe our analysis has implications for the design of future monitoring protocols and for public policy. For protocol designers, we first provide fresh motivation to create monitoring systems. We have argued that the poor accountability of the Internet is a fundamental obstacle to alleviating the pathologies of commoditization and lack of innovation. Unless accountability improves, these pathologies are guaranteed to remain. Secondly, we suggest directions for future advances in monitoring. We have shown that adding verifiability to monitors allows for some improvements in the characteristics of competition. At the same time, this does not present a fully satisfying solution. This paper has suggested a novel standard for monitors to aspire to: one of supporting optimal routes in innovative competition games under fixed-route coalition-proof protect-the-innocent equilibrium. We have shown that under bilateral contracts, this specifically requires contractible rest of path monitors. This is not to say that other types of monitors are unimportant. We included an example in which individual hop quality monitors and a path monitor can also meet our standard for sustaining competition. However, in order for this to happen, a mechanism must be included to combine proofs from these monitors to form a proof of rest of path quality. In other
words, the monitors must ultimately be combined to form contractible rest of path monitors.\nTo support service differentiation and innovation, it may be easier to design rest of path monitors directly, thereby avoiding the task of designing mechanisms for combining component monitors.\nAs far as policy implications, our analysis points to the need for legal institutions to enforce contracts based on quality.\nThese institutions must be equipped to verify proofs of quality, and police illegal contracting behavior.\nAs quality-based contracts become numerous and complicated, and possibly negotiated by machine, this may become a challenging task, and new standards and regulations may have to emerge in response.\nThis remains an interesting and unexplored area for research.\nThe second area we hope our study will benefit is that of clean-slate architectural design.\nTraditionally, clean-slate design tends to focus on creating effective and elegant networks for a static set of requirements.\nThus, the approach is often one of engineering, which tends to neglect competitive effects.\nWe agree with Ratnasamy, Shenker, and McCanne, that designing for evolution should be a top priority [11].\nWe have demonstrated that the network's monitoring ability is critical to supporting innovation, as are the institutions that support contracting.\nThese elements should feature prominently in new designs.\nOur analysis specifically suggests that architectures based on bilateral contracts should include contractible rest of path monitoring.\nFrom a clean-slate perspective, these monitors can be transparently and fully integrated with the routing and contracting systems.\nFinally, the last contribution our study makes is methodological.\nWe believe that the mathematical formalization we present is applicable to a variety of future research questions.\nWhile a significant literature addresses innovation in the presence of network effects, to the best of our knowledge, ours is the first 
model of innovation in a network industry that successfully incorporates the actual topological structure as input. This allows the discovery of new properties, such as the weakening of market forces with the number of ISPs on a data path that we observe with low accountability. Our method also stands in contrast to the typical approach of distributed algorithmic mechanism design. Because this field is based on a principal-agent framework, contracts are usually proposed by the source, who is allowed to make a take-it-or-leave-it offer to network nodes. Our technique allows contracts to emerge from a competitive framework, so the source is limited to selecting the most desirable contract. We believe this is a closer reflection of the industry. Based on the insights in this study, the possible directions for future research are numerous and exciting. To some degree, contracting based on quality opens a Pandora's Box of pressing questions: Do quality-based contracts stand counter to the principle of network neutrality? Should ISPs be allowed to offer a choice of contracts at different quality levels? What anti-competitive behaviors are enabled by quality-based contracts? Can a contracting system support optimal multicast trees? In this study, we have focused on bilateral contracts. This system has seemed natural, especially since it is the prevalent system on the current network. Perhaps its most important benefit is that each contract is local in nature, so both parties share a common, familiar legal jurisdiction. There is no need to worry about who will enforce a punishment against another ISP on the opposite side of the planet, nor is there a dispute over whose legal rules to apply in interpreting a contract. Although this benefit is compelling, it is worth considering other systems. The clearest alternative is to form a contract between the source and every node on the path. We may call these source contracts. Source contracting may present
surprising advantages. For instance, since ISPs do not exchange money with each other, an ISP cannot save money by selecting a cheaper next hop. Additionally, if the source only has contracts with nodes on the intended path, other nodes won't even be willing to accept packets from this source since they won't receive compensation for carrying them. This combination seems to eliminate all temptation for a single cheater to cheat in route. Because of this and other encouraging features, we believe source contracts are a fertile topic for further study. Another important research task is to relax our assumption that quality can be measured fully and precisely. One possibility is to assume that monitoring is only probabilistic or suffers from noise. Even more relevant is the possibility that quality monitors are fundamentally incomplete. A quality monitor can never anticipate every dimension of quality that future applications will care about, nor can it anticipate a new and valuable protocol that an ISP introduces. We may define a monitor space as a subspace of the quality space that a monitor can measure, M ⊂ Q, and a corresponding monitoring function that simply projects the full range of qualities onto the monitor space, m: Q → M. Clearly, innovations that leave quality invariant under m are not easy to support; they are invisible to the monitoring system. In this environment, we expect that path monitoring becomes more important, since it is the only way to ensure data reaches certain innovator ISPs. Further research is needed to understand this process.","keyphrases":["network monitor","monitor","contract system","contract","innov","commodit","incent","smart market","rout stagequ","verifi monitor","contract monitor","rout polici","clean-slate architectur design"],"prmu":["P","P","P","P","P","P","P","U","M","M","R","M","M"]} {"id":"J-52","title":"Hidden-Action in Multi-Hop Routing","abstract":"In multi-hop networks, the actions taken by individual
intermediate nodes are typically hidden from the communicating endpoints; all the endpoints can observe is whether or not the end-to-end transmission was successful. Therefore, in the absence of incentives to the contrary, rational (i.e., selfish) intermediate nodes may choose to forward packets at a low priority or simply not forward packets at all. Using a principal-agent model, we show how the hidden-action problem can be overcome through appropriate design of contracts, in both the direct (the endpoints contract with each individual router) and recursive (each router contracts with the next downstream router) cases. We further demonstrate that per-hop monitoring does not necessarily improve the utility of the principal or the social welfare in the system. In addition, we generalize existing mechanisms that deal with hidden-information to handle scenarios involving both hidden-information and hidden-action.","lvl-1":"Hidden-Action in Multi-Hop Routing Michal Feldman1 mfeldman@sims.berkeley.edu John Chuang1 chuang@sims.berkeley.edu Ion Stoica2 istoica@cs.berkeley.edu Scott Shenker2 shenker@icir.org 1 School of Information Management and Systems U.C. Berkeley 2 Computer Science Division U.C. 
Berkeley

ABSTRACT

In multi-hop networks, the actions taken by individual intermediate nodes are typically hidden from the communicating endpoints; all the endpoints can observe is whether or not the end-to-end transmission was successful. Therefore, in the absence of incentives to the contrary, rational (i.e., selfish) intermediate nodes may choose to forward packets at a low priority or simply not forward packets at all. Using a principal-agent model, we show how the hidden-action problem can be overcome through appropriate design of contracts, in both the direct (the endpoints contract with each individual router) and recursive (each router contracts with the next downstream router) cases. We further demonstrate that per-hop monitoring does not necessarily improve the utility of the principal or the social welfare in the system. In addition, we generalize existing mechanisms that deal with hidden-information to handle scenarios involving both hidden-information and hidden-action.

Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems; J.4 [Social And Behavioral Sciences]: Economics

General Terms: Design, Economics

1. INTRODUCTION

Endpoints wishing to communicate over a multi-hop network rely on intermediate nodes to forward packets from the sender to the receiver. In settings where the intermediate nodes are independent agents (such as individual nodes in ad-hoc and peer-to-peer networks or autonomous systems on the Internet), this poses an incentive problem; the intermediate nodes may incur significant communication and computation costs in the forwarding of packets without deriving any direct benefit from doing so. Consequently, a rational (i.e., utility maximizing) intermediate node may choose to forward packets at a low priority or not forward the packets at all. This rational behavior may lead to suboptimal system performance. The endpoints can provide incentives, e.g., in the form of payments, to encourage
the intermediate nodes to forward their packets. However, the actions of the intermediate nodes are often hidden from the endpoints. In many cases, the endpoints can only observe whether or not the packet has reached the destination, and cannot attribute failure to a specific node on the path. Even if some form of monitoring mechanism allows them to pinpoint the location of the failure, they may still be unable to attribute the cause of failure to either the deliberate action of the intermediate node, or to some external factors beyond the control of the intermediate node, such as network congestion, channel interference, or data corruption. The problem of hidden action is hardly unique to networks. Also known as moral hazard, this problem has long been of interest in the economics literature concerning information asymmetry, incentive and contract theory, and agency theory. We follow this literature by formalizing the problem as a principal-agent model with multiple agents making sequential hidden actions [17, 27]. Our results are threefold. First, we show that it is possible to design contracts to induce cooperation when intermediate nodes can choose to forward or drop packets, as well as when the nodes can choose to forward packets with different levels of quality of service. If the path and transit costs are known prior to transmission, the principal achieves the first-best solution, and can implement the contracts either directly with each intermediate node or recursively through the network (each node making a contract with the following node) without any loss in utility. Second, we find that introducing per-hop monitoring has no impact on the principal's expected utility in equilibrium. For a principal who wishes to induce an equilibrium in which all intermediate nodes cooperate, its expected total payment is the same with or without monitoring. However, monitoring provides a dominant strategy equilibrium, which is a stronger solution concept than
the Nash equilibrium achievable in the absence of monitoring. Third, we show that in the absence of a priori information about transit costs on the packet forwarding path, it is possible to generalize existing mechanisms to overcome scenarios that involve both hidden-information and hidden-action. In these scenarios, the principal pays a premium compared to scenarios with known transit costs.

2. BASELINE MODEL

We consider a principal-agent model, where the principal is a pair of communication endpoints who wish to communicate over a multi-hop network, and the agents are the intermediate nodes capable of forwarding packets between the endpoints. The principal (who in practice can be either the sender, the receiver, or both) makes individual take-it-or-leave-it offers (contracts) to the agents. If the contracts are accepted, the agents choose their actions sequentially to maximize their expected payoffs based on the payment schedule of the contract. When necessary, agents can in turn make subsequent take-it-or-leave-it offers to their downstream agents. We assume that all participants are risk neutral and that standard assumptions about the global observability of the final outcome and the enforceability of payments by guaranteeing parties hold. For simplicity, we assume that each agent has only two possible actions: one involving significant effort and one involving little effort. We denote the action choice of agent i by a_i ∈ {0, 1}, where a_i = 0 and a_i = 1 stand for the low-effort and high-effort actions, respectively. Each action is associated with a cost (to the agent) C(a_i), and we assume:

C(a_i = 1) > C(a_i = 0)

At this stage, we assume that all nodes have the same C(a_i) for presentation clarity, but we relax this assumption later. Without loss of generality we normalize C(a_i = 0) to be zero, and denote the high-effort cost by c, so C(a_i = 0) = 0 and C(a_i = 1) = c. The utility of agent i, denoted by u_i, is a function of the payment it
receives from the principal (s_i), the action it takes (a_i), and the cost it incurs (c_i), as follows:

u_i(s_i, c_i, a_i) = s_i − a_i c_i

The outcome is denoted by x ∈ {x^G, x^B}, where x^G stands for the Good outcome, in which the packet reaches the destination, and x^B stands for the Bad outcome, in which the packet is dropped before it reaches the destination. The outcome is a function of the vector of actions taken by the agents on the path, a = (a_1, ..., a_n) ∈ {0, 1}^n, and the loss rate on the channels, k. The benefit of the sender from the outcome is denoted by w(x), where:

w(x^G) = w^G and w(x^B) = w^B = 0

The utility of the sender is consequently:

u(x, S) = w(x) − S, where S = Σ_{i=1}^{n} s_i

A sender who wishes to induce an equilibrium in which all nodes engage in the high-effort action needs to satisfy two constraints for each agent i:

(IR) Individual rationality (participation constraint)¹: the expected utility from participation should (weakly) exceed its reservation utility (which we normalize to 0).

(IC) Incentive compatibility: the expected utility from exerting high effort should (weakly) exceed its expected utility from exerting low effort.

¹ We use the notion of ex ante individual rationality, in which the agents choose to participate before they know the state of the system.

In some network scenarios, the topology and costs are common knowledge. That is, the sender knows in advance the path that its packet will take and the costs on that path. In other routing scenarios, the sender does not have this a priori information. We show that our model can be applied to both scenarios with known and unknown topologies and costs, and highlight the implications of each scenario in the context of contracts. We also distinguish between direct contracts, where the principal signs an individual contract with each node, and recursive contracts, where each node enters a contractual relationship with its downstream node.

Figure 1: Multi-hop path from sender to destination (source S, n intermediate nodes, destination D).
Figure 2: Structure of the multihop routing game under known topology and transit costs.

The remainder of this paper is organized as follows. In Section 3 we consider agents who decide whether to drop or forward packets, with and without monitoring, when the transit costs are common knowledge. In Section 4, we extend the model to scenarios with unknown transit costs. In Section 5, we distinguish between recursive and direct contracts and discuss their relationship. In Section 6, we show that the model applies to scenarios in which agents choose between different levels of quality of service. We consider Internet routing as a case study in Section 7. In Section 8 we present related work, and Section 9 concludes the paper.

3. KNOWN TRANSIT COSTS

In this section we analyze scenarios in which the principal knows in advance the nodes on the path to the destination and their costs, as shown in Figure 1. We consider agents who decide whether to drop or forward packets, and distinguish between scenarios with and without monitoring.

3.1 Drop versus Forward without Monitoring

In this scenario, the agents decide whether to drop (a = 0) or forward (a = 1) packets. The principal uses no monitoring to observe per-hop outcomes. Consequently, the principal makes the payment schedule to each agent contingent on the final outcome, x, as follows:

s_i(x) = (s_i^B, s_i^G), where s_i^B = s_i(x = x^B) and s_i^G = s_i(x = x^G)

The timeline of this scenario is shown in Figure 2. Given a per-hop loss rate of k, we can express the probability that a packet is successfully delivered from node i to its successor i + 1 as:

Pr(x^G_{i→i+1} | a_i) = (1 − k) a_i   (1)

where x^G_{i→j} denotes a successful transmission from node i to j.
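To make the drop-versus-forward setup concrete, here is a minimal Python sketch of the model, assuming uniform high-effort cost c and per-hop loss rate k, and using the payment schedule s_i^B = 0, s_i^G = c/(1 − k)^(n−i+1) that the section goes on to derive in Proposition 3.1; the function names are illustrative, not from the paper:

```python
# Sketch (not from the paper): drop-vs-forward contracting model of Section 3,
# with n intermediate nodes, uniform per-hop loss rate k, uniform high-effort
# cost c, and the payment schedule of Proposition 3.1:
#   s_i^B = 0,  s_i^G = c / (1 - k)^(n - i + 1).

def success_prob(a_i: int, k: float) -> float:
    """Eq. 1: Pr(x^G_{i -> i+1} | a_i) = (1 - k) * a_i (0 if the node drops)."""
    return (1 - k) * a_i

def payment_schedule(n: int, c: float, k: float) -> list:
    """Contingent payments (s_i^B, s_i^G) for nodes i = 1..n."""
    return [(0.0, c / (1 - k) ** (n - i + 1)) for i in range(1, n + 1)]

def expected_payment(i: int, n: int, c: float, k: float) -> float:
    """Expected payment to node i when every node forwards: s_i^G is paid
    only if the packet survives all n + 1 channels to the destination."""
    s_b, s_g = payment_schedule(n, c, k)[i - 1]
    p_good = (1 - k) ** (n + 1)
    return p_good * s_g + (1 - p_good) * s_b

if __name__ == "__main__":
    n, c, k = 3, 1.0, 0.1
    for i in range(1, n + 1):
        # Expected cost of node i: it incurs c iff the packet reaches it,
        # which happens with probability (1 - k)^i when all nodes forward.
        expected_cost = (1 - k) ** i * c
        assert abs(expected_payment(i, n, c, k) - expected_cost) < 1e-12
```

The final loop checks numerically the property the section establishes next: under this schedule each node's expected payment equals its expected cost, which is why the IR constraint binds at the optimal contract.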
PROPOSITION 3.1. Under the optimal contract that induces high-effort behavior from all intermediate nodes in the Nash Equilibrium², the expected payment to each node is the same as its expected cost, with the following payment schedule:

s_i^B = s_i(x = x^B) = 0   (2)

s_i^G = s_i(x = x^G) = c / (1 − k)^{n−i+1}   (3)

PROOF. The principal needs to satisfy the IC and IR constraints for each agent i, which can be expressed as follows:

(IC) Pr(x^G | a_{j≥i} = 1) s_i^G + Pr(x^B | a_{j≥i} = 1) s_i^B − c ≥ Pr(x^G | a_i = 0, a_{j>i} = 1) s_i^G + Pr(x^B | a_i = 0, a_{j>i} = 1) s_i^B   (4)

This constraint says that the expected utility from forwarding is greater than or equal to its expected utility from dropping, if all subsequent nodes forward as well.

(IR) Pr(x^G_{S→i} | a_{j<i} = 1) (Pr(x^G | a_{j≥i} = 1) s_i^G + Pr(x^B | a_{j≥i} = 1) s_i^B − c) + Pr(x^B_{S→i} | a_{j<i} = 1) s_i^B ≥ 0   (5)

This constraint says that the expected utility from participating is greater than or equal to zero (reservation utility), if all other nodes forward. The above constraints can be expressed as follows, based on Eq. 1:

(IC): (1 − k)^{n−i+1} s_i^G + (1 − (1 − k)^{n−i+1}) s_i^B − c ≥ s_i^B

(IR): (1 − k)^i ((1 − k)^{n−i+1} s_i^G + (1 − (1 − k)^{n−i+1}) s_i^B − c) + (1 − (1 − k)^i) s_i^B ≥ 0

It is a standard result that both constraints bind at the optimal contract (see [23]). Solving the two equations, we obtain the solution that is presented in Eqs. 2 and 3. We next prove that the expected payment to a node equals its expected cost in equilibrium. The expected cost of node i is its transit cost multiplied by the probability that it faces this cost (i.e., the probability that the packet reaches node i), which is (1 − k)^i c. The expected payment that node i receives is:

Pr(x^G) s_i^G + Pr(x^B) s_i^B = (1 − k)^{n+1} c / (1 − k)^{n−i+1} = (1 − k)^i c

Note that the expected payment to a node decreases as the node gets closer to the
destination due to the asymmetric distribution of risk. The closer the node is to the destination, the lower the probability that a packet will fail to reach the destination, resulting in the low payment being made to the node. The expected payment by the principal is:

E[S] = (1 − k)^{n+1} Σ_{i=1}^{n} s_i^G + (1 − (1 − k)^{n+1}) Σ_{i=1}^{n} s_i^B = (1 − k)^{n+1} Σ_{i=1}^{n} c / (1 − k)^{n−i+1}   (6)

The expected payment made by the principal depends not only on the total cost, but also on the number of nodes on the path.

² Since transit nodes perform actions sequentially, this is really a subgame-perfect equilibrium (SPE), but we will refer to it as Nash equilibrium in the remainder of the paper.

Figure 3: Two paths of equal total costs but different lengths and individual costs.

PROPOSITION 3.2. Given two paths with respective lengths of n1 and n2 hops, per-hop transit costs of c1 and c2, and per-hop loss rates of k1 and k2, such that:

• c1 n1 = c2 n2 (equal total cost)
• (1 − k1)^{n1+1} = (1 − k2)^{n2+1} (equal expected benefit)
• n1 < n2 (path 1 is shorter than path 2)

the expected total payment made by the principal is lower on the shorter path.

PROOF. The expected payment on path j is

E[S]_j = Σ_{i=1}^{n_j} c_j (1 − k_j)^i = c_j (1 − k_j) (1 − (1 − k_j)^{n_j}) / k_j

So we have to show that:

c1 (1 − k1) (1 − (1 − k1)^{n1}) / k1 < c2 (1 − k2) (1 − (1 − k2)^{n2}) / k2

Let M = c1 n1 = c2 n2 and N = (1 − k1)^{n1+1} = (1 − k2)^{n2+1}. We have to show that

M N^{1/(n1+1)} (1 − N^{n1/(n1+1)}) / (n1 (1 − N^{1/(n1+1)})) < M N^{1/(n2+1)} (1 − N^{n2/(n2+1)}) / (n2 (1 − N^{1/(n2+1)}))   (7)

Let

f = N^{1/(n+1)} (1 − N^{n/(n+1)}) / (n (1 − N^{1/(n+1)}))

Then it is enough to show that f is monotonically increasing in n. Now

∂f/∂n = g(N, n) / h(N, n)

where:

g(N, n) = −((ln(N) n − (n + 1)^2)(N^{1/(n+1)} − N^{(n+2)/(n+1)}) − (n + 1)^2 (N + N^{2/(n+1)}))

and

h(N, n) = (n + 1)^2 n^2 (−1 + N^{1/(n+1)})^2

but h(N, n) > 0 ∀N, n; therefore, it is
enough to show that g(N, n) > 0. Because N ∈ (0, 1): (i) ln(N) < 0, and (ii) N^{1/(n+1)} > N^{(n+2)/(n+1)}. Therefore, g(N, n) > 0 ∀N, n. This means that, ceteris paribus, shorter paths should always be preferred over longer ones. For example, consider the two topologies presented in Figure 3. While the paths are of equal total cost, the total expected payment by the principal is different. Based on Eqs. 2 and 3, the expected total payment for the top path is:

E[S] = Pr(x^G)(s_A^G + s_B^G) = (c_1 / (1 − k_1)^2 + c_1 / (1 − k_1)) (1 − k_1)^3   (8)

while the expected total payment for the bottom path is:

E[S] = Pr(x^G)(s_A^G + s_B^G + s_C^G) = (c_2 / (1 − k_2)^3 + c_2 / (1 − k_2)^2 + c_2 / (1 − k_2)) (1 − k_2)^4

For n_1 = 2, c_1 = 1.5, k_1 = 0.5, n_2 = 3, c_2 = 1, k_2 = 0.405, we have equal total cost and equal expected benefit, but E[S]_1 = 0.948 and E[S]_2 = 1.313.

3.2 Drop versus Forward with Monitoring

Suppose the principal obtains per-hop monitoring information.³ Per-hop information broadens the set of mechanisms the principal can use. For example, the principal can make the payment schedule contingent on arrival at the next hop instead of arrival at the final destination. Can such information be of use to a principal wishing to induce an equilibrium in which all intermediate nodes forward the packet?

PROPOSITION 3.3. In the drop versus forward model, the principal derives the same expected utility whether it obtains per-hop monitoring information or not.

PROOF. The proof of this proposition is already implied in the findings of the previous section. We found that in the absence of per-hop information, the expected cost of each intermediate node equals its expected payment. In order to satisfy the IR constraint, it is essential to pay each intermediate node an expected amount of at least its expected cost; otherwise, the node would be better off not participating. Therefore, no other payment scheme can reduce the expected payment from the
principal to the intermediate nodes. In addition, if all nodes are incentivized to forward packets, the probability that the packet reaches the destination is the same in both scenarios, thus the expected benefit of the principal is the same. Indeed, we have found that even in the absence of per-hop monitoring information, the principal achieves the first-best solution. To convince the reader that this is indeed the case, we provide an example of a mechanism that conditions payments on arrival at the next hop. This is possible only if per-hop monitoring information is provided. In the new mechanism, the principal makes the payment schedule contingent on whether the packet has reached the next hop or not. That is, the payment to node i is s_i^G if the packet has reached node i + 1, and s_i^B otherwise. We assume costless monitoring, giving us the best-case scenario for the use of monitoring. As before, we consider a principal who wishes to induce an equilibrium in which all intermediate nodes forward the packet. The expected utility of the principal is the difference between its expected benefit and its expected payment. Because the expected benefit when all nodes forward is the same under both scenarios, we only need to show that the expected total payment is identical as well. Under the monitoring mechanism, the principal has to satisfy the following constraints:

(IC) Pr(x^G_{i→i+1} | a_i = 1) s^G + Pr(x^B_{i→i+1} | a_i = 1) s^B − c ≥ Pr(x^G_{i→i+1} | a_i = 0) s^G + Pr(x^B_{i→i+1} | a_i = 0) s^B   (9)

(IR) Pr(x^G_{S→i} | a_{j<i} = 1) (Pr(x^G_{i→i+1} | a_i = 1) s^G + Pr(x^B_{i→i+1} | a_i = 1) s^B − c) ≥ 0   (10)

³ For a recent proposal of an accountability framework that provides such monitoring information, see [4].

These constraints can be expressed as follows:

(IC): (1 − k) s^G + k s^B − c ≥ s^B

(IR): (1 − k)^i ((1 − k) s^G + k s^B − c) ≥ 0

The two constraints bind at the optimal contract as before, and we get the following payment
schedule:

$$s^B_i = 0, \qquad s^G_i = \frac{c_i}{1-k}$$

Because the payment $s^G_i$ is made whenever the packet reaches node $i+1$, which occurs with probability $(1-k)^{i+1}$, the expected total payment under this scenario is:

$$E[S] = \sum_{i=1}^{n} (1-k)^{i+1} s^G_i = (1-k)^{n+1} \sum_{i=1}^{n} \frac{c_i}{(1-k)^{n-i+1}}$$

as in the scenario without monitoring (see Equation 6). While the expected total payment is the same with or without monitoring, there are some differences between the two scenarios. First, the payment structure is different. If no per-hop monitoring is used, the payment to each node depends on its location ($i$). In contrast, monitoring provides contracts that do not depend on a node's position along the path ($n$ identical contracts when all transit costs are equal). Second, the solution concept used is different. If no monitoring is used, the strategy profile of $a_i = 1 \; \forall i$ is a Nash equilibrium, which means that no agent has an incentive to deviate unilaterally from the strategy profile. In contrast, with the use of monitoring, the action chosen by node $i$ is independent of the other agents' forwarding behavior. Therefore, monitoring provides us with a dominant strategy equilibrium, which is a stronger solution concept than Nash equilibrium. [15] and [16] discuss the appropriateness of different solution concepts in the context of online environments.

4. UNKNOWN TRANSIT COSTS

In certain network settings, the transit costs of nodes along the forwarding path may not be common knowledge, i.e., there exists a problem of hidden information. In this section, we address the following questions:

1. Is it possible to design contracts that induce cooperative behavior in the presence of both hidden action and hidden information?

2. What is the principal's loss due to the lack of knowledge of the transit costs?

In hidden-information problems, the principal employs mechanisms to induce truthful revelation of private information from the agents. In the routing game, the principal wishes to extract transit cost information from the network routers in order to determine the lowest-cost path (LCP) for a given source-destination pair. The network
routers act strategically and declare transit costs to maximize their profit. Mechanisms that have been proposed in the literature for the routing game [24, 13] assume that once the transit costs have been obtained and the LCP has been determined, the nodes on the LCP obediently forward all packets, and that there is no loss in the network, i.e., $k = 0$. In this section, we consider both hidden information and hidden action, and generalize these mechanisms to induce both truth revelation and high-effort action in equilibrium, where nodes transmit over a lossy communication channel, i.e., $k \geq 0$.

4.1 VCG Mechanism

In their seminal paper [24], Nisan and Ronen present a VCG mechanism that induces truthful revelation of transit costs by edges in a biconnected network, such that lowest-cost paths can be chosen.

Figure 4: Game structure for FPSS, where only hidden information is considered.

Figure 5: Game structure for FPSS', where both hidden information and hidden action are considered.

Like all VCG mechanisms, it is a strategyproof mechanism, meaning that it induces truthful revelation in a dominant strategy equilibrium. In [13] (FPSS), Feigenbaum et al.
slightly modify the model to have the routers as the selfish agents instead of the edges, and present a distributed algorithm that computes the VCG payments. The timeline of the FPSS game is presented in Figure 4. Under FPSS, transit nodes keep track of the amount of traffic routed through them via counters, and payments are periodically transferred from the principals to the transit nodes based on the counter values. FPSS assumes that transit nodes are obedient in packet forwarding behavior, and will not update the counters without exerting high effort in packet forwarding.

In this section, we present FPSS', which generalizes FPSS to operate in an environment with lossy communication channels (i.e., $k \geq 0$) and strategic behavior in terms of packet forwarding. We will show that FPSS' induces an equilibrium in which all nodes truthfully reveal their transit costs and forward packets if they are on the LCP. Figure 5 presents the timeline of FPSS'. In the first stage, the sender declares two payment functions, $(s^G_i, s^B_i)$, that will be paid upon success or failure of packet delivery. Given these payments, nodes have an incentive to reveal their costs truthfully, and later to forward packets. Payments are transferred based on the final outcome.

In FPSS', each node $i$ submits a bid $b_i$, which is its reported transit cost. Node $i$ is said to be truthful if $b_i = c_i$. We write $b$ for the vector $(b_1, \ldots, b_n)$ of bids submitted by all transit nodes. Let $I_i(b)$ be the indicator function for the LCP given the bid vector $b$, such that $I_i(b) = 1$ if $i$ is on the LCP, and $0$ otherwise. Following FPSS [13], the payment received by node $i$ at equilibrium is:

$$p_i = b_i I_i(b) + \left[\sum_r I_r(b|^i \infty) b_r - \sum_r I_r(b) b_r\right] = \sum_r I_r(b|^i \infty) b_r - \sum_{r \neq i} I_r(b) b_r \quad (11)$$

where the expression $b|^i x$ means that $(b|^i x)_j = b_j$ for all $j \neq i$, and $(b|^i x)_i = x$. In FPSS', we compute $s^B_i$ and $s^G_i$ as a function of $p_i$, $k$, and $n$.
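As a concrete illustration of the payment rule in Eq. 11, the following is a minimal sketch (not from the paper) specialized to the case of exactly two node-disjoint transit paths between source and destination; the function name and bid values are illustrative assumptions.

```python
# Sketch of the Eq. 11 payment, specialized to two node-disjoint paths.
# Bids below are hypothetical.

def vcg_payments(path_a, path_b):
    """Per-node payments p_i for the transit nodes on the lowest-cost path.

    path_a, path_b: lists of bids b_i on each of the two disjoint paths.
    """
    cost_a, cost_b = sum(path_a), sum(path_b)
    # Choose the lowest-cost path (LCP); the other path is the alternative.
    lcp, lcp_cost, alt_cost = (
        (path_a, cost_a, cost_b) if cost_a <= cost_b else (path_b, cost_b, cost_a)
    )
    # With b_i raised to infinity, every path through node i is excluded, so
    # the sum of I_r(b|^i oo) b_r is the alternative path's cost, while the
    # sum over r != i of I_r(b) b_r is the LCP cost without b_i.
    return [alt_cost - (lcp_cost - b_i) for b_i in lcp]

print(vcg_payments([1, 2], [2, 3]))  # [3, 4]
```

In this disjoint-paths case, each LCP node is paid its own bid plus the cost gap between the two paths, which matches the property used in the proof below that $p_i \geq c_i$ for truthful nodes.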
First, we recognize that $s^B_i$ must be less than or equal to zero in order for the true LCP to be chosen. Otherwise, strategic nodes may have an incentive to report extremely small costs to mislead the principal into believing that they are on the LCP. Then, these nodes can drop any packets they receive, incur zero transit cost, collect a payment of $s^B_i > 0$, and earn a positive profit.

PROPOSITION 4.1. Let the payments of FPSS' be:

$$s^B_i = 0, \qquad s^G_i = \frac{p_i}{(1-k)^{n-i+1}}$$

Then, FPSS' has a Nash equilibrium in which all nodes truthfully reveal their transit costs and all nodes on the LCP forward packets.

PROOF. In order to prove the proposition above, we have to show that nodes have no incentive to engage in the following misbehaviors:

1. truthfully reveal cost but drop packet,
2. lie about cost and forward packet,
3. lie about cost and drop packet.

If all nodes truthfully reveal their costs and forward packets, the expected utility of node $i$ on the LCP is:

$$E[u]_i = Pr(x^G_{S\to i})(E[s_i] - c_i) + Pr(x^B_{S\to i}) s^B_i$$
$$= (1-k)^i \left((1-k)^{n-i+1} s^G_i + (1 - (1-k)^{n-i+1}) s^B_i - c_i\right) + (1 - (1-k)^i) s^B_i$$
$$= (1-k)^i \left((1-k)^{n-i+1} \frac{p_i}{(1-k)^{n-i+1}} - c_i\right) = (1-k)^i (p_i - c_i) \geq 0 \quad (12)$$

The last inequality is derived from the fact that FPSS is a truthful mechanism, and thus $p_i \geq c_i$. The expected utility of a node not on the LCP is 0. A node that drops a packet receives $s^B_i = 0$, which is smaller than or equal to $E[u]_i$ for $i \in LCP$ and equals $E[u]_i$ for $i \notin LCP$. Therefore, nodes cannot gain utility from misbehaviors (1) or (3). We next show that nodes cannot gain utility from misbehavior (2).

1. If $i \in LCP$, $E[u]_i > 0$.

(a) If it reports $b_i > c_i$:

i. If $b_i < \sum_r I_r(b|^i \infty) b_r - \sum_{r \neq i} I_r(b) b_r$, it is still on the LCP, and since the payment is independent of $b_i$, its utility does not change.

ii. If $b_i > \sum_r I_r(b|^i \infty) b_r - \sum_{r \neq i} I_r(b) b_r$, it will not be on the LCP and obtains $E[u]_i = 0$, which is less than its expected utility if truthfully revealing its cost.

(b) If it reports $b_i < c_i$, it is still on the LCP, and since the payment is independent of $b_i$, its utility does not change.

2. If $i \notin LCP$, $E[u]_i = 0$.

(a) If it reports $b_i > c_i$, it remains out of the LCP, so its utility does not change.

(b) If it reports $b_i < c_i$:

i. If $b_i < \sum_r I_r(b|^i \infty) b_r - \sum_{r \neq i} I_r(b) b_r$, it joins the LCP and gains an expected utility of

$$E[u]_i = (1-k)^i (p_i - c_i)$$

However, $i \notin LCP$ means that

$$c_i > \sum_r I_r(c|^i \infty) c_r - \sum_{r \neq i} I_r(c) c_r$$

But if all nodes truthfully reveal their costs,

$$p_i = \sum_r I_r(c|^i \infty) c_r - \sum_{r \neq i} I_r(c) c_r < c_i$$

and therefore $E[u]_i < 0$.

ii. If $b_i > \sum_r I_r(b|^i \infty) b_r - \sum_{r \neq i} I_r(b) b_r$, it remains out of the LCP, so its utility does not change.

Therefore, there exists an equilibrium in which all nodes truthfully reveal their transit costs and forward the received packets.

We note that in the hidden-information-only context, FPSS induces truthful revelation as a dominant strategy equilibrium. In the current setting with both hidden information and hidden action, FPSS' achieves a Nash equilibrium in the absence of per-hop monitoring, and a dominant strategy equilibrium in the presence of per-hop monitoring, consistent with the results in Section 3, where there is hidden action only. In particular, with per-hop monitoring, the principal declares the payments $s^B_i$ and $s^G_i$ to each node upon failure or success of delivery to the next node. Given the payments $s^B_i = 0$ and $s^G_i = p_i/(1-k)$, it is a dominant strategy for the nodes to reveal costs truthfully and forward packets.

4.2 Discussion

More generally, for any mechanism $M$ that induces a bid
vector $b$ in equilibrium by making a payment of $p_i(b)$ to node $i$ on the LCP, there exists a mechanism $M'$ that induces an equilibrium with the same bid vector and packet forwarding by making a payment of:

$$s^B_i = 0, \qquad s^G_i = \frac{p_i(b)}{(1-k)^{n-i+1}}$$

A sketch of the proof is as follows:

1. $I^{M'}_i(b) = I^M_i(b) \; \forall i$, since $M'$ uses the same choice metric.

2. The expected utility of an LCP node is $E[u]_i = (1-k)^i (p_i(b) - c_i) \geq 0$ if it forwards and 0 if it drops, and the expected utility of a non-LCP node is 0.

3. From 1 and 2, we get that if a node $i$ can increase its expected utility by deviating from $b_i$ under $M'$, it can also increase its utility by deviating from $b_i$ in $M$, but this is in contradiction to $b_i$ being an equilibrium in $M$.

4. Nodes have no incentive to drop packets, since they derive an expected utility of 0 if they do.

In addition to the generalization of FPSS into FPSS', we can also consider the generalization of the first-price auction (FPA) mechanism, where the principal determines the LCP and pays each node on the LCP its bid, $p_i(b) = b_i$. First-price auctions achieve a Nash equilibrium as opposed to a dominant strategy equilibrium. Therefore, we should expect the generalization of FPA to achieve a Nash equilibrium with or without monitoring.

We make two additional comments concerning this class of mechanisms. First, we find that the expected total payment made by the principal under the proposed mechanisms is

$$E[S] = \sum_{i=1}^{n} (1-k)^i p_i(b)$$

and the expected benefit realized by the principal is

$$E[w] = (1-k)^{n+1} w^G$$

where $\sum_{i=1}^{n} p_i$ and $w^G$ are the expected payment and expected benefit, respectively, when only the hidden-information problem is considered. When hidden action is also taken into consideration, the generalized mechanism handles strategic forwarding behavior by conditioning payments upon the final outcome, and accounts for lossy communication channels by designing payments that reflect the distribution of
risk. The difference between expected payment and expected benefit is due not to strategic forwarding behavior, but to lossy communications. Therefore, in a lossless network, we should see no gap between expected benefits and payments, independent of strategic or non-strategic forwarding behavior. Second, the loss to the principal due to unknown transit costs is also known as the price of frugality, and is an active field of research [2, 12]. This price greatly depends on the network topology and on the mechanism employed. While it is simple to characterize the principal's loss in some special cases, it is not a trivial problem in general. For example, in topologies with parallel disjoint paths from source to destination, we can prove that under first-price auctions, the loss to the principal is the difference between the cost of the shortest path and the second-shortest path, and that the loss is higher under the FPSS mechanism.

5. RECURSIVE CONTRACTS

In this section, we distinguish between direct and recursive contracts. In direct contracts, the principal contracts directly with each node on the path and pays it directly. In recursive contracts, the principal contracts with the first node on the path, which in turn contracts with the second, and so on, such that each node contracts with its downstream node and pays it based on the final result, as demonstrated in Figure 6. With direct payments, the principal needs to know the identity and cost of each node on the path and to have some communication channel with it. With recursive payments, every node needs to communicate only with its downstream node. Several questions arise in this context:

• What knowledge should the principal have in order to induce cooperative behavior through recursive contracts?

• What should be the structure of recursive contracts that induce cooperative behavior?

• What is the relation between the total expected payment under direct and recursive
contracts?

• Is it possible to design recursive contracts in scenarios of unknown transit costs?

Figure 6: Structure of the multi-hop routing game under known topology and recursive contracts.

In order to answer the questions outlined above, we look at the IR and IC constraints that the principal needs to satisfy when contracting with the first node on the path. When the principal designs a contract with the first node, he should take into account the incentives that the first node should provide to the second node, and so on all the way to the destination. For example, consider the topology given in Figure 3(a). When the principal comes to design a contract with node A, he needs to consider the subsequent contract that A should sign with B, which should satisfy the following constraints:

$$(IR): Pr(x^G_{A\to B}|a_A = 1)\left(E[s|a_B = 1] - c\right) + Pr(x^B_{A\to B}|a_A = 1) s^B_{A\to B} \geq 0$$
$$(IC): E[s|a_B = 1] - c \geq E[s|a_B = 0]$$

where:

$$E[s|a_B = 1] = Pr(x^G_{B\to D}|a_B = 1) s^G_{A\to B} + Pr(x^B_{B\to D}|a_B = 1) s^B_{A\to B}$$

and

$$E[s|a_B = 0] = Pr(x^G_{B\to D}|a_B = 0) s^G_{A\to B} + Pr(x^B_{B\to D}|a_B = 0) s^B_{A\to B}$$

These (binding) constraints yield the values of $s^B_{A\to B}$ and $s^G_{A\to B}$:

$$s^B_{A\to B} = 0, \qquad s^G_{A\to B} = \frac{c}{1-k}$$

Based on these values, S can express the constraints it should satisfy in a contract with A.
$$(IR): Pr(x^G_{S\to A}|a_S = 1)\left(E[s_{S\to A} - s_{A\to B}|a_i = 1 \; \forall i] - c\right) + Pr(x^B_{S\to A}|a_S = 1) s^B_{S\to A} \geq 0$$
$$(IC): E[s_{S\to A} - s_{A\to B}|a_i = 1 \; \forall i] - c \geq E[s_{S\to A} - s_{A\to B}|a_A = 0, a_B = 1]$$

where:

$$E[s_{S\to A} - s_{A\to B}|a_i = 1 \; \forall i] = Pr(x^G_{A\to D}|a_i = 1 \; \forall i)(s^G_{S\to A} - s^G_{A\to B}) + Pr(x^B_{A\to D}|a_i = 1 \; \forall i)(s^B_{S\to A} - s^B_{A\to B})$$

and

$$E[s_{S\to A} - s_{A\to B}|a_A = 0, a_B = 1] = Pr(x^G_{A\to D}|a_A = 0, a_B = 1)(s^G_{S\to A} - s^G_{A\to B}) + Pr(x^B_{A\to D}|a_A = 0, a_B = 1)(s^B_{S\to A} - s^B_{A\to B})$$

Solving for $s^B_{S\to A}$ and $s^G_{S\to A}$, we get:

$$s^B_{S\to A} = 0, \qquad s^G_{S\to A} = \frac{c(2-k)}{(1-k)^2}$$

The expected total payment is

$$E[S] = s^G_{S\to A} Pr(x^G_{S\to D}) + s^B_{S\to A} Pr(x^B_{S\to D}) = c(2-k)(1-k) \quad (13)$$

which is equal to the expected total payment under direct contracts (see Eq. 8).

PROPOSITION 5.1. The expected total payments by the principal under direct and recursive contracts are equal.

PROOF. In order to calculate the expected total payment, we have to find the payment to the first node on the path that will induce appropriate behavior. Because $s^B_i = 0$ in the drop versus forward model, both constraints can be reduced to:

$$Pr(x^G_{i\to R}|a_j = 1 \; \forall j)(s^G_i - s^G_{i+1}) - c_i = 0 \iff (1-k)^{n-i+1}(s^G_i - s^G_{i+1}) - c_i = 0$$

which yields, for all $1 \leq i \leq n$:

$$s^G_i = \frac{c_i}{(1-k)^{n-i+1}} + s^G_{i+1}$$

Thus,

$$s^G_n = \frac{c_n}{1-k}, \qquad s^G_{n-1} = \frac{c_{n-1}}{(1-k)^2} + s^G_n = \frac{c_{n-1}}{(1-k)^2} + \frac{c_n}{1-k}, \qquad \cdots \qquad s^G_1 = \frac{c_1}{(1-k)^n} + s^G_2 = \ldots
$$= \sum_{i=1}^{n} \frac{c_i}{(1-k)^{n-i+1}}$$

and the expected total payment is

$$E[S] = (1-k)^{n+1} s^G_1 = (1-k)^{n+1} \sum_{i=1}^{n} \frac{c_i}{(1-k)^{n-i+1}}$$

which equals the total expected payment under direct payments, as expressed in Eq. 6. Because the payment is contingent on the final outcome, and the expected payment to a node equals its expected cost, nodes have no incentive to offer their downstream nodes lower payments than necessary; if they did, their downstream nodes would not forward the packet.

What information should the principal possess in order to implement recursive contracts? As in direct payments, the expected total payment depends not only on the sum of payments along the path, but also on the topology. Therefore, while the principal only needs to communicate with the first node on the forwarding path and does not have to know the identities of the other nodes, it still needs to know the number of nodes on the path and their individual transit costs.

Finally, is it possible to design recursive contracts under unknown transit costs, and, if so, what should be the structure of such contracts? Suppose the principal has implemented the distributed algorithm that calculates the payments $p_i$ necessary for truthful revelation; would the following payment schedule to the first node induce cooperative behavior?

$$s^B_1 = 0, \qquad s^G_1 = \sum_{i=1}^{n} \frac{p_i}{(1-k)^{n-i+1}}$$

The answer is not clear. Unlike contracts under known transit costs, the expected payment to a node usually exceeds its expected cost. Therefore, transit nodes may not have the appropriate incentive to follow the principal's guarantee during the payment phase. For example, in FPSS', the principal guarantees to pay each node an expected payment of $p_i > c_i$. We assume that payments are enforceable if made by the same entity that pledged to pay. However, in the case of recursive contracts, the entity that pledges to pay in the cost discovery stage (the principal) is not the same as the entity that defines and executes
the payments in the forwarding stage (the transit nodes). Transit nodes, who design the contracts in the second stage, know that their downstream nodes will forward the packet as long as the expected payment exceeds the expected cost, even if it is less than the promised amount. Thus, every node has an incentive to offer lower payments than promised and keep the difference. Transit nodes, knowing this is a plausible scenario, may no longer truthfully reveal their costs. Therefore, while recursive contracts under known transit costs are strategically equivalent to direct contracts, it is not clear whether this is the case under unknown transit costs.

6. HIGH-QUALITY VERSUS LOW-QUALITY FORWARDING

So far, we have considered the agents' strategy space to be limited to the drop ($a = 0$) and forward ($a = 1$) actions. In this section, we consider a variation of the model where the agents choose between providing a low-quality service ($a = 0$) and a high-quality service ($a = 1$). This may correspond to a differentiated-services model where packets are forwarded on a best-effort or a priority basis [6]. In contrast to drop versus forward, a packet may still reach the next hop (albeit with a lower probability) even if the low-effort action is taken. As a second example, consider the practice of hot-potato routing in the inter-domain routing of today's Internet. Individual autonomous systems (AS's) can adopt either hot-potato (early-exit) routing ($a = 0$), where a packet is handed off to the downstream AS at the first possible exit, or late-exit routing ($a = 1$), where an AS carries the packet longer than it needs to, handing off the packet at an exit closer to the destination. In the absence of explicit incentives, it is not surprising that AS's choose hot-potato routing to minimize their costs, even though it leads to suboptimal routes [28, 29]. In both examples, in the absence of contracts, a rational node would exert low effort, resulting in lower
performance. Nevertheless, this behavior can be avoided with an appropriate design of contracts. Formally, the probability that a packet successfully gets from node $i$ to node $i+1$ is:

$$Pr(x^G_{i\to i+1}|a_i) = 1 - (k - q a_i) \quad (14)$$

where $q \in (0, 1]$ and $k \in (q, 1]$. In the drop versus forward model, a low-effort action by any node results in a delivery failure. In contrast, a node in the high/low scenario may exert low effort and hope to free-ride on the high effort exerted by the other agents.

PROPOSITION 6.1. In the high-quality versus low-quality forwarding model, where transit costs are common knowledge, the principal derives the same expected utility whether it obtains per-hop monitoring information or not.

PROOF. The IC and IR constraints are the same as those specified in the proof of Proposition 3.1, but their values change, based on Eq. 14, to reflect the different model:

$$(IC): (1-k+q)^{n-i+1} s^G_i + (1 - (1-k+q)^{n-i+1}) s^B_i - c \geq (1-k)(1-k+q)^{n-i} s^G_i + (1 - (1-k)(1-k+q)^{n-i}) s^B_i$$

$$(IR): (1-k+q)^i \left((1-k+q)^{n-i+1} s^G_i + (1 - (1-k+q)^{n-i+1}) s^B_i - c\right) + (1 - (1-k+q)^i) s^B_i \geq 0$$

For this set of constraints, we obtain the following solution:

$$s^B_i = (1-k+q)^i \, \frac{c(k-1)}{q} \quad (15)$$

$$s^G_i = (1-k+q)^i \, \frac{c\left(k - 1 + (1-k+q)^{-n}\right)}{q} \quad (16)$$

We observe that in this version, both the high and the low payments depend on $i$. If monitoring is used, we obtain the following constraints:

$$(IC): (1-k+q) s^G_i + (k-q) s^B_i - c \geq (1-k) s^G_i + k s^B_i$$

$$(IR): (1-k+q)^i \left((1-k+q) s^G_i + (k-q) s^B_i - c\right) \geq 0$$

and we get the solution:

$$s^B_i = \frac{c(k-1)}{q}, \qquad s^G_i = \frac{ck}{q}$$

The expected payment by the principal with or without monitoring is the same, and equals:

$$E[S] = \frac{c(1-k+q)\left(1 - (1-k+q)^n\right)}{k-q} \quad (17)$$

and this concludes the
proof.

The payment structure in the high-quality versus low-quality forwarding model is different from that in the drop versus forward model. In particular, at the optimal contract, the low-outcome payment $s^B_i$ is now less than zero. A negative payment means that the agent must pay the principal in the event that the packet fails to reach the destination. In some settings, it may be necessary to impose a limited liability constraint, i.e., $s_i \geq 0$. This prevents the first-best solution from being achieved.

PROPOSITION 6.2. In the high-quality versus low-quality forwarding model, if negative payments are disallowed, the expected payment to each node exceeds its expected cost under the optimal contract.

PROOF. The proof is a direct outcome of the following statements, which are proved above:

1. The optimal contract is the contract specified in Equations 15 and 16.

2. Under the optimal contract, $E[s_i]$ equals node $i$'s expected cost.

3. Under the optimal contract, $s^B_i = (1-k+q)^i \frac{c(k-1)}{q} < 0$.

Therefore, under any other contract the sender will have to compensate each node with an expected payment that is higher than its expected transit cost.

There is an additional difference between the two models. In drop versus forward, the principal either signs a contract with all $n$ nodes along the path or with none, because a single node dropping the packet results in failure. In contrast, in high-quality versus low-quality forwarding, a success may occur under low-effort actions as well, and payments are used to increase the probability of success. Therefore, it may be possible for the principal to maximize its utility by contracting with only $m$ of the $n$ nodes along the path. While the expected outcome depends on $m$, it is independent of which specific $m$ nodes are induced to exert high effort. At the same time, the individual expected payments decrease in $i$ (see Eq. 16). Therefore, a principal who wishes to sign a contract with only $m$ out of the $n$ nodes should
do so with the nodes that are closest to the destination; namely, nodes $(n-m+1, \ldots, n-1, n)$. Solving the high-quality versus low-quality forwarding model with unknown transit costs is left for future work.

7. CASE STUDY: INTERNET ROUTING

We can map various deployed and proposed Internet routing schemes to the models we have considered in this work. Border Gateway Protocol (BGP), the current inter-domain routing protocol in the Internet, computes routes based on path vectors. Since the protocol reveals only the autonomous systems (AS's) along a route but not the costs associated with them, current BGP routing is best characterized by a lack of a priori information about transit costs. In this case, the principal (e.g., a multi-homed site or a tier-1 AS) can implement one of the mechanisms proposed in Section 4 by contracting with individual nodes on the path. Such contracts involve paying some premium over the real cost, and it is not clear whether recursive contracts can be implemented in this scenario. In addition, the current protocol does not have the infrastructure to support the implementation of direct contracts between endpoints and the network.

Recently, several new architectures have been proposed in the context of the Internet to provide the principal not only with a set of paths from which it can choose (as BGP does) but also with the performance along those paths and the network topology. One approach to obtaining such information is through end-to-end probing [1]. Another approach is to have the edge networks perform measurements and discover the network topology [32]. Yet another approach is to delegate the task of obtaining topology and performance information to a third party, as in the routing-as-a-service proposal [21]. These proposals are quite different in nature, but they share an attempt to provide more visibility and transparency into the network. If information about topology and transit
costs is obtained, the scenario maps to the known transit costs model (Section 3). In this case, first-best contracts can be achieved through individual contracts with the nodes along the path. However, as we have shown in Section 5, as long as each agent can choose the next hop, the principal can obtain the full benefit by contracting with only the first hop (through the implementation of recursive contracts). However, the various proposals for acquiring network topology and performance information do not deal with strategic behavior by the intermediate nodes. With the realization that the information collected may be used by the principal in subsequent contractual relationships, the intermediate nodes may behave strategically, misrepresenting their true costs to the entities that collect and aggregate such information. One recent approach that can alleviate this problem is to provide packet obituaries by having each packet confirm its delivery or report its last successful AS hop [4]. Another approach is to have third parties like Keynote independently monitor the network performance.

8. RELATED WORK

The study of non-cooperative behavior in communication networks, and the design of incentives, has received significant attention in the context of wireless ad-hoc routing. [22] considers the problem of malicious behavior, where nodes respond positively to route requests but then fail to forward the actual packets. It proposes to mitigate this behavior with detection and report mechanisms that help to route around the malicious nodes. However, rather than penalizing nodes that do not forward traffic, this approach bypasses the misbehaving nodes, thereby relieving their burden; it is therefore not effective against selfish behavior. In order to mitigate selfish behavior, some approaches [7, 8, 9] require reputation exchange between nodes, or simply first-hand observations [5]. Other approaches propose payment schemes [10, 20, 31] to encourage
cooperation. [31] is the closest to our work in that it designs payment schemes in which the sender pays the intermediate nodes in order to prevent several types of selfish behavior. In their approach, nodes are supposed to send receipts to a third-party entity. We show that this type of per-hop monitoring may not be needed.

In the context of Internet routing, [4] proposes an accountability framework that provides end hosts and service providers with after-the-fact audits on the fate of their packets. This proposal is part of a broader approach to provide end hosts with greater control over the path of their packets [3, 30]. If senders have transit cost information and can fully control the paths of their packets, they can design contracts that yield them first-best utility. The accountability framework proposed in [4] can serve two main goals: informing nodes of network conditions to help them make informed decisions, and helping entities establish whether individual AS's have performed their duties adequately. While such a framework can be used for the first task, we propose a different approach to the second problem that does not require per-hop auditing information.

Research in distributed algorithmic mechanism design (DAMD) has been applied to BGP routing [13, 14]. These works propose mechanisms to tackle the hidden-information problem, but ignore the problem of forwarding enforcement. Inducing desired behavior is also the objective in [26], which attempts to respond to the challenge of distributed AMD raised in [15]: if the same agents that seek to manipulate the system also run the mechanism, what prevents them from deviating from the mechanism's proposed rules to maximize their own welfare? They start with the mechanism presented in [13] and use mostly auditing mechanisms to prevent deviation from the algorithm.

The focus of this work is the design of a payment scheme that provides the appropriate incentives within the context of multi-hop
routing. Like other works in this field, we assume that all accounting services are provided using out-of-band mechanisms. Security issues within this context, such as node authentication or message encryption, are orthogonal to the problem presented in this paper; see, for example, [18, 19, 25].

The problem of information asymmetry and hidden action (also known as moral hazard) is well studied in the economics literature [11, 17, 23, 27]. [17] identifies the problem of moral hazard in production teams, and shows that it is impossible to design a sharing rule that is both efficient and budget-balanced. [27] shows that this task becomes possible when production takes place sequentially.

9. CONCLUSIONS AND FUTURE DIRECTIONS

In this paper we show that in a multi-hop routing setting, where the actions of the intermediate nodes are hidden from the source and/or destination, it is possible to design payment schemes that induce cooperative behavior from the intermediate nodes. We conclude that monitoring per-hop outcomes may not improve the utility of the participants or the network performance. In addition, in scenarios of unknown transit costs, it is also possible to design mechanisms that induce cooperative behavior in equilibrium, but the sender pays a premium for extracting information from the transit nodes. Our model and results suggest several natural and intriguing research avenues:

• Consider manipulative or collusive behaviors that may arise under the proposed payment schemes.

• Analyze the feasibility of recursive contracts under hidden information of transit costs.

• While the proposed payment schemes sustain cooperation in equilibrium, it is not a unique equilibrium. We plan to study under what mechanisms this strategy profile may emerge as a unique equilibrium (e.g., penalty by successor nodes).

• Consider the effect of congestion and capacity constraints on the proposed mechanisms. Our preliminary results
show that when several senders compete for a single transit node's capacity, the sender with the highest demand pays a premium even if transit costs are common knowledge. The premium can be expressed as a function of the second-highest demand. In addition, if congestion affects the probability of successful delivery, a sender with a lower-cost alternate path may end up with a lower utility level than a rival with a higher-cost alternate path.

• Fully characterize the full-information Nash equilibrium in first-price auctions, and use this characterization to derive its overcharging compared to truthful mechanisms.

10. ACKNOWLEDGEMENTS

We thank Hal Varian for his useful comments. This work is supported in part by the National Science Foundation under ITR awards ANI-0085879 and ANI-0331659, and Career award ANI-0133811.

11. REFERENCES

[1] ANDERSEN, D. G., BALAKRISHNAN, H., KAASHOEK, M. F., AND MORRIS, R. Resilient Overlay Networks. In 18th ACM SOSP (2001).

[2] ARCHER, A., AND TARDOS, E. Frugal path mechanisms.

[3] ARGYRAKI, K., AND CHERITON, D. Loose Source Routing as a Mechanism for Traffic Policies. In Proceedings of SIGCOMM FDNA (August 2004).

[4] ARGYRAKI, K., MANIATIS, P., CHERITON, D., AND SHENKER, S. Providing Packet Obituaries. In Third Workshop on Hot Topics in Networks (HotNets) (November 2004).

[5] BANSAL, S., AND BAKER, M. Observation-based cooperation enforcement in ad-hoc networks. Technical report, Stanford University (2003).

[6] BLAKE, S., BLACK, D., CARLSON, M., DAVIES, E., WANG, Z., AND WEISS, W. An Architecture for Differentiated Services. RFC 2475, 1998.

[7] BUCHEGGER, S., AND LE BOUDEC, J.-Y. Performance Analysis of the CONFIDANT Protocol: Cooperation of Nodes - Fairness in Dynamic Ad-hoc Networks. In IEEE/ACM Symposium on Mobile Ad Hoc Networking and Computing (MobiHOC) (2002).

[8] BUCHEGGER, S., AND LE BOUDEC, J.-Y.
Coping with False Accusations in Misbehavior Reputation Systems For Mobile ad-hoc Networks.\nIn EPFL, Technical report (2003).\n[9] BUCHEGGER, S., AND BOUDEC, J.-Y.\nL.\nThe effect of rumor spreading in reputation systems for mobile ad-hoc networks.\nIn WiOpt``03: Modeling and Optimization in Mobile ad-hoc and Wireless Networks (2003).\n[10] BUTTYAN, L., AND HUBAUX, J. Stimulating Cooperation in Self-Organizing Mobile ad-hoc Networks.\nACM\/Kluwer Journal on Mobile Networks and Applications (MONET) (2003).\n[11] CAILLAUD, B., AND HERMALIN, B. Hidden Action and Incentives.\nTeaching Notes.\nU.C. Berkeley.\n[12] ELKIND, E., SAHAI, A., AND STEIGLITZ, K. Frugality in path auctions, 2004.\n[13] FEIGENBAUM, J., PAPADIMITRIOU, C., SAMI, R., AND SHENKER, S.\nA BGP-based Mechanism for Lowest-Cost Routing.\nIn Proceedings of the ACM Symposium on Principles of Distributed Computing (2002).\n[14] FEIGENBAUM, J., SAMI, R., AND SHENKER, S. Mechanism Design for Policy Routing.\nIn Yale University, Technical Report (2003).\n[15] FEIGENBAUM, J., AND SHENKER, S. Distributed Algorithmic Mechanism Design: Recent Results and Future Directions.\nIn Proceedings of the International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications (2002).\n[16] FRIEDMAN, E., AND SHENKER, S. Learning and implementation on the internet.\nIn Manuscript.\nNew Brunswick: Rutgers University, Department of Economics (1997).\n[17] HOLMSTROM, B. Moral Hazard in Teams.\nBell Journal of Economics 13 (1982), 324-340.\n[18] HU, Y., PERRIG, A., AND JOHNSON, D. Ariadne: A Secure On-Demand Routing Protocol for ad-hoc Networks.\nIn Eighth Annual International Conference on Mobile Computing and Networking (Mobicom) (2002), pp. 12-23.\n[19] HU, Y., PERRIG, A., AND JOHNSON, D. 
SEAD: Secure Efficient Distance Vector Routing for Mobile ad-hoc Networks.\nIn 4th IEEE Workshop on Mobile Computing Systems and Applications (WMCSA) (2002).\n[20] JAKOBSSON, M., HUBAUX, J.-P., AND BUTTYAN, L.\nA Micro-Payment Scheme Encouraging Collaboration in Multi-Hop Cellular Networks.\nIn Financial Cryptography (2003).\n[21] LAKSHMINARAYANAN, K., STOICA, I., AND SHENKER, S. Routing as a service.\nIn UCB Technical Report No.\nUCB\/CSD-04-1327 (January 2004).\n[22] MARTI, S., GIULI, T. J., LAI, K., AND BAKER, M. Mitigating Routing Misbehavior in Mobile ad-hoc Networks.\nIn Proceedings of MobiCom (2000), pp. 255-265.\n[23] MASS-COLELL, A., WHINSTON, M., AND GREEN, J. Microeconomic Theory.\nOxford University Press, 1995.\n[24] NISAN, N., AND RONEN, A. Algorithmic Mechanism Design.\nIn Proceedings of the 31st Symposium on Theory of Computing (1999).\n[25] SANZGIRI, K., DAHILL, B., LEVINE, B., SHIELDS, C., AND BELDING-ROYER, E.\nA Secure Routing Protocol for ad-hoc Networks.\nIn International Conference on Network Protocols (ICNP) (2002).\n[26] SHNEIDMAN, J., AND PARKES, D. C. Overcoming rational manipulation in mechanism implementation, 2004.\n[27] STRAUSZ, R. Moral Hazard in Sequential Teams.\nDepartmental Working Paper.\nFree University of Berlin (1996).\n[28] TEIXEIRA, R., GRIFFIN, T., SHAIKH, A., AND VOELKER, G. Network sensitivity to hot-potato disruptions.\nIn Proceedings of ACM SIGCOMM (September 2004).\n[29] TEIXEIRA, R., SHAIKH, A., GRIFFIN, T., AND REXFORD, J. Dynamics of hot-potato routing in IP networks.\nIn Proceedings of ACM SIGMETRICS (June 2004).\n[30] YANG, X. NIRA: A New Internet Routing Architecture.\nIn Proceedings of SIGCOMM FDNA (August 2003).\n[31] ZHONG, S., CHEN, J., AND YANG, Y. R. Sprite: A Simple, Cheat-Proof, Credit-Based System for Mobile ad-hoc Networks.\nIn 22nd Annual Joint Conference of the IEEE Computer and Communications Societies (2003).\n[32] ZHU, D., GRITTER, M., AND CHERITON, D. 
Feedback-based Routing. In Proc. Hotnets-I (2002).

Hidden-Action in Multi-Hop Routing

ABSTRACT

In multi-hop networks, the actions taken by individual intermediate nodes are typically hidden from the communicating endpoints; all the endpoints can observe is whether or not the end-to-end transmission was successful. Therefore, in the absence of incentives to the contrary, rational (i.e., selfish) intermediate nodes may choose to forward packets at a low priority or simply not forward packets at all. Using a principal-agent model, we show how the
hidden-action problem can be overcome through appropriate design of contracts, in both the direct (the endpoints contract with each individual router) and recursive (each router contracts with the next downstream router) cases. We further demonstrate that per-hop monitoring does not necessarily improve the utility of the principal or the social welfare in the system. In addition, we generalize existing mechanisms that deal with hidden-information to handle scenarios involving both hidden-information and hidden-action.

1. INTRODUCTION

Endpoints wishing to communicate over a multi-hop network rely on intermediate nodes to forward packets from the sender to the receiver. In settings where the intermediate nodes are independent agents (such as individual nodes in ad hoc and peer-to-peer networks or autonomous systems on the Internet), this poses an incentive problem; the intermediate nodes may incur significant communication and computation costs in the forwarding of packets without deriving any direct benefit from doing so. Consequently, a rational (i.e., utility maximizing) intermediate node may choose to forward packets at a low priority or not forward the packets at all. This rational behavior may lead to suboptimal system performance.

2 Computer Science Division, U.C. Berkeley.

The endpoints can provide incentives, e.g., in the form of payments, to encourage the intermediate nodes to forward their packets. However, the actions of the intermediate nodes are often hidden from the endpoints. In many cases, the endpoints can only observe whether or not the packet has reached the destination, and cannot attribute failure to a specific node on the path. Even if some form of monitoring mechanism allows them to pinpoint the location of the failure, they may still be unable to attribute the cause of failure to either the deliberate action of the intermediate node, or to some external factors beyond the control of the intermediate node, such as network congestion, channel interference, or data corruption.

The problem of hidden action is hardly unique to networks. Also known as moral hazard, this problem has long been of interest in the economics literature concerning information asymmetry, incentive and contract theory, and agency theory. We follow this literature by formalizing the problem as a principal-agent model in which multiple agents make sequential hidden actions [17, 27].

Our results are threefold. First, we show that it is possible to design contracts to induce cooperation when intermediate nodes can choose to forward or drop packets, as well as when the nodes can choose to forward packets with different levels of quality of service. If the path and transit costs are known prior to transmission, the principal achieves the first-best solution, and can implement the contracts either directly with each intermediate node or recursively through the network (each node making a contract with the following node) without any loss in utility.

Second, we find that introducing per-hop monitoring has no impact on the principal's expected utility in equilibrium. For a principal who wishes to induce
an equilibrium in which all intermediate nodes cooperate, its expected total payment is the same with or without monitoring. However, monitoring provides a dominant strategy equilibrium, which is a stronger solution concept than the Nash equilibrium achievable in the absence of monitoring.

Third, we show that in the absence of a priori information about transit costs on the packet forwarding path, it is possible to generalize existing mechanisms to overcome scenarios that involve both hidden-information and hidden-action. In these scenarios, the principal pays a premium compared to scenarios with known transit costs.

2. BASELINE MODEL

We consider a principal-agent model, where the principal is a pair of communication endpoints who wish to communicate over a multi-hop network, and the agents are the intermediate nodes capable of forwarding packets between the endpoints. The principal (who in practice can be either the sender, the receiver, or both) makes individual take-it-or-leave-it offers (contracts) to the agents. If the contracts are accepted, the agents choose their actions sequentially to maximize their expected payoffs based on the payment schedule of the contract. When necessary, agents can in turn make subsequent take-it-or-leave-it offers to their downstream agents. We assume that all participants are risk neutral and that standard assumptions about the global observability of the final outcome and the enforceability of payments by guaranteeing parties hold.

For simplicity, we assume that each agent has only two possible actions: one involving significant effort and one involving little effort. We denote the action choice of agent i by a_i ∈ {0, 1}, where a_i = 0 and a_i = 1 stand for the low-effort and high-effort actions, respectively. Each action is associated with a cost (to the agent) C(a_i), and we assume:

C(a_i = 1) > C(a_i = 0)

At this stage, we assume that all nodes have the same C(a_i) for presentation clarity, but we relax this assumption later. Without loss of generality we normalize C(a_i = 0) to be zero, and denote the high-effort cost by c, so C(a_i = 0) = 0 and C(a_i = 1) = c. The utility of agent i, denoted by u_i, is a function of the payment it receives from the principal (s_i), the action it takes (a_i), and the cost it incurs (c_i), as follows:

u_i(s_i, c_i, a_i) = s_i − a_i c_i

The outcome is denoted by x ∈ {x_G, x_B}, where x_G stands for the "Good" outcome in which the packet reaches the destination, and x_B stands for the "Bad" outcome in which the packet is dropped before it reaches the destination. The outcome is a function of the vector of actions taken by the agents on the path, a = (a_1, ..., a_n) ∈ {0, 1}^n, and the loss rate on the channels, k. The benefit of the sender from the outcome is denoted by w(x), where: w(x_G) = w_G and w(x_B) = w_B = 0. The utility of the sender is consequently:

u(x, S) = w(x) − S, where S = Σ_{i=1}^{n} s_i

A sender who wishes to induce an equilibrium in which all nodes engage in the high-effort action needs to satisfy two constraints for each agent i:

(IR) Individual rationality (participation constraint)1: the expected utility from participation should (weakly) exceed its reservation utility (which we normalize to 0).

(IC) Incentive compatibility: the expected utility from exerting high-effort should (weakly) exceed its expected utility from exerting low-effort.

1 We use the notion of ex ante individual rationality, in which the agents choose to participate before they know the state of the system.

In some network scenarios, the topology and costs are common knowledge. That is, the sender knows in advance the path that its packet will take and the costs on that path. In other routing scenarios, the sender does not have this a priori information. We show that our model can be applied to both scenarios with known and unknown topologies and costs, and highlight the implications of each scenario in the context of contracts. We also distinguish between direct contracts, where the principal signs an individual contract with each node, and recursive contracts, where each node enters a contractual relationship with its downstream node.

Figure 1: Multi-hop path from sender to destination.

Figure 2: Structure of the multi-hop routing game under known topology and transit costs.

The remainder of this paper is organized as follows. In Section 3 we consider agents who decide whether to drop or forward packets, with and without monitoring, when the transit costs are common knowledge. In Section 4, we extend the model to scenarios with unknown transit costs. In Section 5, we distinguish between recursive and direct contracts and discuss their relationship. In Section 6, we show that the model applies to scenarios in which agents choose between different levels of quality of service. We consider Internet routing as a case study in Section 7. In Section 8 we present related work, and Section 9 concludes the paper.

3. KNOWN TRANSIT COSTS

In this section we analyze scenarios in which the principal knows in advance the nodes on the path to the destination and their costs, as shown in Figure 1. We consider agents who decide whether to drop or forward packets, and distinguish between scenarios with and without monitoring.

3.1 Drop versus Forward without Monitoring

In this scenario, the agents decide whether to drop (a = 0) or forward (a = 1) packets. The principal uses no monitoring to observe per-hop outcomes. Consequently, the principal makes the payment schedule to each agent contingent on the final outcome, x, as follows: the payment to node i is s_Gi if x = x_G, and s_Bi if x = x_B. The timeline of this scenario is shown in Figure 2. Given a per-hop loss rate of k, we can express the probability that a packet is successfully delivered from node i to its successor i+1 as:

Pr(x_G^{i→i+1} | a_i = 1) = 1 − k and Pr(x_G^{i→i+1} | a_i = 0) = 0   (1)

where x_G^{i→j} denotes a successful transmission from node i to j.

PROOF. The principal needs to satisfy the IC and IR constraints for each agent i, which can be expressed as follows:

E[u_i | a_i = 1, a_{-i} = 1] ≥ E[u_i | a_i = 0, a_{-i} = 1]   (IC)

This constraint says that the expected utility from
forwarding is greater than or equal to its expected utility from dropping, if all subsequent nodes forward as well.

E[u_i | a_i = 1, a_{-i} = 1] ≥ 0   (IR)

This constraint says that the expected utility from participating is greater than or equal to zero (reservation utility), if all other nodes forward. The above constraints can be expressed as follows, based on Eq. 1:

(1 − k)^{n+1−i} s_Gi + (1 − (1 − k)^{n+1−i}) s_Bi − c ≥ s_Bi   (IC)

(1 − k)^i [(1 − k)^{n+1−i} s_Gi + (1 − (1 − k)^{n+1−i}) s_Bi − c] + (1 − (1 − k)^i) s_Bi ≥ 0   (IR)

It is a standard result that both constraints bind at the optimal contract (see [23]). Solving the two equations, we obtain the solution that is presented in Eqs. 2 and 3:

s_Gi = c / (1 − k)^{n+1−i}   (2)

s_Bi = 0   (3)

We next prove that the expected payment to a node equals its expected cost in equilibrium. The expected cost of node i is its transit cost multiplied by the probability that it faces this cost (i.e., the probability that the packet reaches node i), which is (1 − k)^i c. The expected payment that node i receives is:

(1 − k)^{n+1} s_Gi + (1 − (1 − k)^{n+1}) s_Bi = (1 − k)^{n+1} · c / (1 − k)^{n+1−i} = (1 − k)^i c

Note that the expected payment to a node decreases as the node gets closer to the destination due to the asymmetric distribution of risk. The closer the node is to the destination, the lower the probability that a packet will fail to reach the destination, resulting in the lower payment being made to the node. The expected payment by the principal is:

E[S] = Σ_{i=1}^{n} (1 − k)^i c

The expected payment made by the principal depends not only on the total cost, but also on the number of nodes on the path.

PROPOSITION 3.2. Given two paths with respective lengths of n_1 and n_2 hops, per-hop transit costs of c_1 and c_2, and per-hop loss rates of k_1 and k_2, such that:

n_1 < n_2, c_1 n_1 = c_2 n_2, and (1 − k_1)^{n_1+1} = (1 − k_2)^{n_2+1}

the expected total payment made by the principal is lower on the shorter path.

Figure 3: Two paths of equal total costs but different lengths and individual costs.

PROOF. The expected payment in path j is

E[S]_j = Σ_{i=1}^{n_j} (1 − k_j)^i c_j

So, we have to show that:

Σ_{i=1}^{n_1} (1 − k_1)^i c_1 < Σ_{i=1}^{n_2} (1 − k_2)^i c_2

Let M = c_1 n_1 = c_2 n_2 and N = (1 − k_1)^{n_1+1} = (1 − k_2)^{n_2+1}. Substituting, the claim reduces to showing that the resulting difference, g(N, n), is positive; therefore, g(N, n) > 0 for all N, n. This means that, ceteris paribus, shorter paths should always be preferred over longer ones.

For example, consider the two topologies presented in Figure 3. While the
paths are of equal total cost, the total expected payment by the principal is different. Based on Eqs. 2 and 3, we have equal total cost and equal expected benefit, but E[S]_1 = 0.948 and E[S]_2 = 1.313.

3.2 Drop versus Forward with Monitoring

Suppose the principal obtains per-hop monitoring information.3 Per-hop information broadens the set of mechanisms the principal can use. For example, the principal can make the payment schedule contingent on arrival at the next hop instead of arrival at the final destination. Can such information be of use to a principal wishing to induce an equilibrium in which all intermediate nodes forward the packet?

3 For a recent proposal of an accountability framework that provides such monitoring information, see [4].

PROPOSITION 3.3. In the drop versus forward model, the principal derives the same expected utility whether it obtains per-hop monitoring information or not.

PROOF. The proof of this proposition is already implied by the findings of the previous section. We found that in the absence of per-hop information, the expected cost of each intermediate node equals its expected payment. In order to satisfy the IR constraint, it is essential to pay each intermediate node an expected amount of at least its expected cost; otherwise, the node would be better off not participating. Therefore, no other payment scheme can reduce the expected payment from the principal to the intermediate nodes. In addition, if all nodes are incentivized to forward packets, the probability that the packet reaches the destination is the same in both scenarios, thus the expected benefit of the principal is the same.

Indeed, we have found that even in the absence of per-hop monitoring information, the principal achieves the first-best solution. To convince the reader that this is indeed the case, we provide an example of a mechanism that conditions payments on arrival at the next hop. This is possible only if per-hop monitoring information is provided. In the new mechanism, the principal makes the payment schedule contingent on whether the packet has reached the next hop or not. That is, the payment to node i is s_Gi if the packet has reached node i+1, and s_Bi otherwise. We assume costless monitoring, giving us the best-case scenario for the use of monitoring. As before, we consider a principal who wishes to induce an equilibrium in which all intermediate nodes forward the packet.

The expected utility of the principal is the difference between its expected benefit and its expected payment. Because the expected benefit when all nodes forward is the same under both scenarios, we only need to show that the expected total payment is identical as well. Under the monitoring mechanism, the principal has to satisfy the IC and IR constraints for each agent i, now conditioned on arrival at the next hop. These constraints can be expressed as follows:

(1 − k) s_Gi + k s_Bi − c ≥ s_Bi   (IC)

(1 − k)^i [(1 − k) s_Gi + k s_Bi − c] + (1 − (1 − k)^i) s_Bi ≥ 0   (IR)

The two constraints bind at the optimal contract as before, and we get the payment schedule s_Gi = c/(1 − k), s_Bi = 0, whose expected total payment is as in the scenario without monitoring (see Equation 6).

While the expected total payment is the same with or without monitoring, there are some differences between the two scenarios. First, the payment structure is different. If no per-hop monitoring is used, the payment to each node depends on its location (i). In contrast, monitoring provides us with n identical contracts. Second, the solution concept used is different. If no monitoring is used, the strategy profile of a_i = 1 ∀i is a Nash equilibrium, which means that no agent has an incentive to deviate unilaterally from the strategy profile. In contrast, with the use of monitoring, the action chosen by node i is independent of the other agents' forwarding behavior. Therefore, monitoring provides us with a dominant strategy equilibrium, which is a stronger solution concept than Nash equilibrium. [15], [16] discuss the appropriateness of different solution concepts in the context of online
environments.

4. UNKNOWN TRANSIT COSTS

In certain network settings, the transit costs of nodes along the forwarding path may not be common knowledge, i.e., there exists the problem of hidden information. In this section, we address the following questions:

1. Is it possible to design contracts that induce cooperative behavior in the presence of both hidden action and hidden information?
2. What is the principal's loss due to the lack of knowledge of the transit costs?

In hidden-information problems, the principal employs mechanisms to induce truthful revelation of private information from the agents. In the routing game, the principal wishes to extract transit cost information from the network routers in order to determine the lowest-cost path (LCP) for a given source-destination pair. The network routers act strategically and declare transit costs to maximize their profit. Mechanisms that have been proposed in the literature for the routing game [24, 13] assume that once the transit costs have been obtained, and the LCP has been determined, the nodes on the LCP obediently forward all packets, and that there is no loss in the network, i.e., k = 0. In this section, we consider both hidden information and hidden action, and generalize these mechanisms to induce both truth revelation and high-effort action in equilibrium, where nodes transmit over a lossy communication channel, i.e., k > 0.

4.1 VCG Mechanism

In their seminal paper [24], Nisan and Ronen present a VCG mechanism that induces truthful revelation of transit costs by edges in a biconnected network, such that lowest-cost paths can be chosen.

Figure 4: Game structure for FPSS, where only hidden information is considered.

Figure 5: Game structure for FPSS', where both hidden information and hidden action are considered.

Like all VCG mechanisms, it is a strategyproof mechanism, meaning that it induces truthful revelation in a dominant strategy equilibrium. In [13] (FPSS), Feigenbaum et al.
slightly modify the model to have the routers, rather than the edges, as the selfish agents, and present a distributed algorithm that computes the VCG payments. The timeline of the FPSS game is presented in Figure 4. Under FPSS, transit nodes keep track of the amount of traffic routed through them via counters, and payments are periodically transferred from the principals to the transit nodes based on the counter values. FPSS assumes that transit nodes are obedient in their packet forwarding behavior, and will not update the counters without exerting high effort in packet forwarding.

In this section, we present FPSS', which generalizes FPSS to operate in an environment with lossy communication channels (i.e., k ≥ 0) and strategic behavior in terms of packet forwarding. We will show that FPSS' induces an equilibrium in which all nodes truthfully reveal their transit costs and forward packets if they are on the LCP. Figure 5 presents the timeline of FPSS'. In the first stage, the sender declares two payment functions, (s_i^G, s_i^B), that will be paid upon success or failure of packet delivery. Given these payments, nodes have an incentive to reveal their costs truthfully, and later to forward packets. Payments are transferred based on the final outcome.

In FPSS', each node i submits a bid b_i, which is its reported transit cost. Node i is said to be truthful if b_i = c_i. We write b for the vector (b_1, ..., b_n) of bids submitted by all transit nodes. Let I_i(b) be the indicator function for the LCP given the bid vector b, such that I_i(b) = 1 if node i is on the LCP under b and 0 otherwise, where the expression (b |^i x) means that (b |^i x)_j = b_j for all j ≠ i, and (b |^i x)_i = x. In FPSS', we compute s_i^B and s_i^G as a function of p_i, k, and n.
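To make the payment computation concrete: the VCG payment FPSS assigns to an on-path node is its bid plus the externality it imposes, i.e., the cost of the best path avoiding it minus the cost of the chosen LCP. The brute-force sketch below illustrates this on a toy biconnected graph. The graph, bids, and loss rate k are invented illustration values, and the lossy-channel scaling at the end is an assumption by analogy with the drop-versus-forward analysis, not the paper's own closed form.

```python
# Sketch (not the paper's implementation): FPSS-style VCG payments on a
# small biconnected network, plus one candidate lossy-channel scaling.

def simple_paths(graph, node, dest, seen=()):
    """Enumerate simple paths from `node` to `dest` by depth-first search."""
    seen = seen + (node,)
    if node == dest:
        yield seen
        return
    for nxt in graph[node]:
        if nxt not in seen:
            yield from simple_paths(graph, nxt, dest, seen)

def lcp_and_vcg_payments(graph, src, dest, bids):
    """LCP under reported costs, and VCG payment p_i = b_i + externality."""
    cost = lambda path: sum(bids.get(v, 0) for v in path)  # transit nodes only
    paths = list(simple_paths(graph, src, dest))
    lcp = min(paths, key=cost)
    payments = {}
    for i in lcp:
        if i in bids:  # i is a transit node on the LCP
            # Biconnectivity guarantees some path avoiding i exists.
            without_i = min(cost(p) for p in paths if i not in p)
            payments[i] = bids[i] + without_i - cost(lcp)
    return lcp, payments

graph = {"S": ["A", "C"], "A": ["B"], "B": ["D"], "C": ["D"], "D": []}
bids = {"A": 2, "B": 1, "C": 4}   # reported transit costs (made-up values)
lcp, p = lcp_and_vcg_payments(graph, "S", "D", bids)
print(lcp)  # ('S', 'A', 'B', 'D') -- cost 3 beats the S-C-D path of cost 4
print(p)    # {'A': 3, 'B': 2} -- each payment is at least the node's bid

# One way to fold in the lossy channel (assumption by analogy with the
# drop-versus-forward schedule): pay s_i^G = p_i / (1-k)^(n-i+1) only upon
# end-to-end delivery, and s_i^B = 0, so each node's expected payment is
# unchanged by the per-hop loss it still faces downstream.
k = 0.1
transit = [v for v in lcp if v in bids]
n = len(transit)
sG = {v: p[v] / (1 - k) ** (n - i + 1) for i, v in enumerate(transit, start=1)}
```

Note that each p_i is no smaller than the corresponding bid, which is the premium over true cost that truthfulness buys; a non-LCP node such as C receives nothing.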
First, we recognize that s_i^B must be less than or equal to zero in order for the true LCP to be chosen. Otherwise, strategic nodes may have an incentive to report extremely small costs to mislead the principal into believing that they are on the LCP. These nodes could then drop any packets they receive, incur zero transit cost, collect a payment of s_i^B > 0, and earn a positive profit.

Then, FPSS' has a Nash equilibrium in which all nodes truthfully reveal their transit costs and all nodes on the LCP forward packets.

PROOF. In order to prove the proposition above, we have to show that nodes have no incentive to engage in the following misbehaviors:

1. truthfully reveal cost but drop the packet,
2. lie about cost and forward the packet,
3. lie about cost and drop the packet.

If all nodes truthfully reveal their costs and forward packets, the expected utility of node i on the LCP is:

The last inequality is derived from the fact that FPSS is a truthful mechanism, and thus p_i ≥ c_i. The expected utility of a node not on the LCP is 0. A node that drops a packet receives s_i^B = 0, which is smaller than or equal to E[u]_i for i ∈ LCP, and equals E[u]_i for i ∉ LCP. Therefore, nodes cannot gain utility from misbehaviors (1) or (3). We next show that nodes cannot gain utility from misbehavior (2).

1. If i ∈ LCP, then E[u]_i ≥ 0.
   (a) If it reports b_i > c_i:
       i. If b_i < Σ_r I_r(b |^i ∞) b_r − Σ_{r≠i} I_r(b) b_r, it is still on the LCP, and since the payment is independent of b_i, its utility does not change.
       ii. If b_i > Σ_r I_r(b |^i ∞) b_r − Σ_{r≠i} I_r(b) b_r, it will not be on the LCP and will obtain E[u]_i = 0, which is less than its expected utility if truthfully revealing its cost.
   (b) If it reports b_i < c_i, it is still on the LCP, and since the payment is independent of b_i, its utility does not change.
2. If i ∉ LCP, then E[u]_i = 0.
   (a) If it reports b_i > c_i, it remains out of the LCP, so its utility does not change.
   (b) If it reports b_i < c_i:
       i.
If b_i < Σ_r I_r(b |^i ∞) b_r − Σ_{r≠i} I_r(b) b_r, it joins the LCP, and gains an expected utility of
       ii. If b_i > Σ_r I_r(b |^i ∞) b_r − Σ_{r≠i} I_r(b) b_r, it remains out of the LCP, so its utility does not change.

Therefore, there exists an equilibrium in which all nodes truthfully reveal their transit costs and forward the received packets.

We note that in the hidden-information-only context, FPSS induces truthful revelation as a dominant strategy equilibrium. In the current setting with both hidden information and hidden action, FPSS' achieves a Nash equilibrium in the absence of per-hop monitoring, and a dominant strategy equilibrium in the presence of per-hop monitoring, consistent with the results in Section 3, where there is hidden action only. In particular, with per-hop monitoring, the principal declares the payments s_i^B and s_i^G to each node upon failure or success of delivery to the next node. Given the payments s_i^B = 0 and s_i^G = p_i / (1 − k), it is a dominant strategy for the nodes to reveal their costs truthfully and forward packets.

4.2 Discussion

More generally, for any mechanism M that induces a bid vector b in equilibrium by making a payment of p_i(b) to node i on the LCP, there exists a mechanism M' that induces an equilibrium with the same bid vector and packet forwarding by making a payment of:

1. I_i^M(b) = I_i^{M'}(b) for all i, since M' uses the same choice metric.
2. The expected utility of an LCP node is E[u]_i = (1 − k)^i (p_i(b) − c_i) ≥ 0 if it forwards, and 0 if it drops, and the expected utility of a non-LCP node is 0.
3. From 1 and 2, we get that if a node i can increase its expected utility by deviating from b_i under M', it can also increase its utility by deviating from b_i in M, but this is in contradiction to b_i being an equilibrium in M.
4. Nodes have no incentive to drop packets, since they derive an expected utility of 0 if they do.

In addition to the generalization of FPSS into FPSS', we can also consider the generalization of the first-price auction (FPA) mechanism, where the principal determines the LCP and pays each node on the LCP its bid, p_i(b) = b_i. First-price auctions achieve a Nash equilibrium as opposed to a dominant strategy equilibrium. Therefore, we should expect the generalization of FPA to achieve a Nash equilibrium with or without monitoring.

We make two additional comments concerning this class of mechanisms. First, we find that the expected total payment made by the principal under the proposed mechanisms is

and the expected benefit realized by the principal is

where Σ_{i=1}^n p_i and w^G are the expected payment and expected benefit, respectively, when only the hidden-information problem is considered. When hidden action is also taken into consideration, the generalized mechanism handles strategic forwarding behavior by conditioning payments upon the final outcome, and accounts for lossy communication channels by designing payments that reflect the distribution of risk. The difference between expected payment and benefit is due not to strategic forwarding behavior, but to lossy communications. Therefore, in a lossless network, we should not see any gap between expected benefits and payments, independent of strategic or non-strategic forwarding behavior.

Second, the loss to the principal due to unknown transit costs is also known as the price of frugality, and is an active field of research [2, 12]. This price greatly depends on the network topology and on the mechanism employed. While it is simple to characterize the principal's loss in some special cases, it is not a trivial problem in general. For example, in topologies with parallel disjoint paths from source to destination, we can prove that under first-price auctions, the loss to the principal is the difference
between the cost of the shortest path and the second-shortest path, and that the loss is higher under the FPSS mechanism.

5. RECURSIVE CONTRACTS

In this section, we distinguish between direct and recursive contracts. In direct contracts, the principal contracts directly with each node on the path and pays it directly. In recursive payment, the principal contracts with the first node on the path, which in turn contracts with the second, and so on, such that each node contracts with its downstream node and makes the payment based on the final result, as demonstrated in Figure 6. With direct payments, the principal needs to know the identity and cost of each node on the path and to have some communication channel with the node. With recursive payments, every node needs to communicate only with its downstream node. Several questions arise in this context:

• What knowledge should the principal have in order to induce cooperative behavior through recursive contracts?
• What should be the structure of recursive contracts that induce cooperative behavior?
• What is the relation between the total expected payment under direct and recursive contracts?
• Is it possible to design recursive contracts in scenarios of unknown transit costs?

Figure 6: Structure of the multi-hop routing game under known topology and recursive contracts.

In order to answer the questions outlined above, we look at the IR and IC constraints that the principal needs to satisfy when contracting with the first node on the path. When the principal designs a contract with the first node, he should take into account the incentives that the first node should provide to the second node, and so on all the way to the destination. For example, consider the topology given in Figure 3(a). When the principal comes to design a contract with node A, he needs to consider the subsequent contract that A should sign with B, which should satisfy the following constraints. Solving for s^B_{S→A} and s^G_{S→A},
we get s^B_{S→A} = 0. The expected total payment is

which is equal to the expected total payment under direct contracts (see Eq. 8).

PROPOSITION 5.1. The expected total payments by the principal under direct and recursive contracts are equal.

PROOF. In order to calculate the expected total payment, we have to find the payment to the first node on the path that will induce the appropriate behavior. Because s_i^B = 0 in the drop/forward model, both constraints can be reduced to:

and the expected total payment is

where:

which equals the total expected payment under direct payments, as expressed in Eq. 6.

Because the payment is contingent on the final outcome, and the expected payment to a node equals its expected cost, nodes have no incentive to offer their downstream nodes a lower payment than necessary; if they did, their downstream nodes would not forward the packet.

What information should the principal possess in order to implement recursive contracts? As with direct payments, the expected payment is affected not solely by the total payment on the path, but also by the topology. Therefore, while the principal only needs to communicate with the first node on the forwarding path and does not have to know the identities of the other nodes, it still needs to know the number of nodes on the path and their individual transit costs.

Finally, is it possible to design recursive contracts under unknown transit costs, and, if so, what should be the structure of such contracts? Suppose the principal has implemented the distributed algorithm that calculates the payments p_i necessary for truthful revelation; would the following payment schedule to the first node induce cooperative behavior?

The answer is not clear. Unlike contracts under known transit costs, the expected payment to a node usually exceeds its expected cost. Therefore, transit nodes may not have the appropriate incentive to follow the principal's guarantee during the payment phase. For example, in
FPSS, the principal guarantees to pay each node an expected payment of p_i > c_i. We assume that payments are enforceable if made by the same entity that pledged to pay. However, in the case of recursive contracts, the entity that pledges to pay in the cost discovery stage (the principal) is not the same as the entity that defines and executes the payments in the forwarding stage (the transit nodes). Transit nodes, who design the contracts in the second stage, know that their downstream nodes will forward the packet as long as the expected payment exceeds the expected cost, even if it is less than the promised amount. Thus, every node has an incentive to offer lower payments than promised and keep the profit. Transit nodes, who know this is a plausible scenario, may no longer truthfully reveal their costs. Therefore, while recursive contracts under known transit costs are strategically equivalent to direct contracts, it is not clear whether this is the case under unknown transit costs.

6. HIGH-QUALITY VERSUS LOW-QUALITY FORWARDING

So far, we have considered the agents' strategy space to be limited to the drop (a = 0) and forward (a = 1) actions. In this section, we consider a variation of the model where the agents choose between providing a low-quality service (a = 0) and a high-quality service (a = 1). This may correspond to a differentiated services model, where packets are forwarded on a best-effort or a priority basis [6]. In contrast to drop versus forward, a packet may still reach the next hop (albeit with a lower probability) even if the low-effort action is taken. As a second example, consider the practice of hot-potato routing in the inter-domain routing of today's Internet. Individual autonomous systems (AS's) can either adopt hot-potato, or early-exit, routing (a = 0), where a packet is handed off to the downstream AS at the first possible exit, or late-exit routing (a = 1), where an AS carries the packet longer than it needs to, handing
off the packet at an exit closer to the destination. In the absence of explicit incentives, it is not surprising that AS's choose hot-potato routing to minimize their costs, even though it leads to suboptimal routes [28, 29]. In both examples, in the absence of contracts, a rational node would exert low effort, resulting in lower performance. Nevertheless, this behavior can be avoided with an appropriate design of contracts. Formally, the probability that a packet successfully gets from node i to node i + 1 is:

where q ∈ (0, 1] and k ∈ (q, 1].

In the drop versus forward model, a low-effort action by any node results in a delivery failure. In contrast, a node in the high/low scenario may exert low effort and hope to free-ride on the high effort exerted by the other agents.

PROPOSITION 6.1. In the high-quality versus low-quality forwarding model, where transit costs are common knowledge, the principal derives the same expected utility whether it obtains per-hop monitoring information or not.

PROOF. The IC and IR constraints are the same as those specified in the proof of Proposition 3.1, but their values change, based on Eq. 14, to reflect the different model:

We observe that in this version, both the high and the low payments depend on i. If monitoring is used, we obtain the following constraints:

and we get the solution:

The expected payment by the principal, with or without monitoring, is the same, and equals:

and this concludes the proof.

The payment structure in the high-quality versus low-quality forwarding model is different from that in the drop versus forward model. In particular, at the optimal contract, the low-outcome payment s_i^B is now less than zero. A negative payment means that the agent must pay the principal in the event that the packet fails to reach the destination. In some settings, it may be necessary to impose a limited liability constraint, i.e., s_i ≥ 0. This prevents the first-best solution from being achieved.

PROPOSITION
6.2. In the high-quality versus low-quality forwarding model, if negative payments are disallowed, the expected payment to each node exceeds its expected cost under the optimal contract.

PROOF. The proof is a direct outcome of the following statements, which are proved above:

1. The optimal contract is the contract specified in Equations 15 and 16.
2. Under the optimal contract, E[s_i] equals node i's expected cost.
3. Under the optimal contract, s_i^B = (1 − k + q)^i c (k − 1) / q < 0.

Therefore, under any other contract the sender will have to compensate each node with an expected payment that is higher than its expected transit cost.

There is an additional difference between the two models. In drop versus forward, the principal either signs a contract with all n nodes along the path or with none, because a single node dropping the packet determines a failure. In contrast, in high- versus low-quality forwarding, a success may occur under the low-effort actions as well, and payments are used to increase the probability of success. Therefore, it may be possible for the principal to maximize its utility by contracting with only m of the n nodes along the path. While the expected outcome depends on m, it is independent of which specific m nodes are induced. At the same time, the individual expected payments decrease in i (see Eq. 16). Therefore, a principal who wishes to sign a contract with only m of the n nodes should do so with the nodes that are closest to the destination, namely nodes (n − m + 1, ..., n − 1, n). Solving the high-quality versus low-quality forwarding model with unknown transit costs is left for future work.

7. CASE STUDY: INTERNET ROUTING

We can map various deployed and proposed Internet routing schemes to the models we have considered in this work. Border Gateway Protocol (BGP), the current inter-domain routing protocol in the Internet, computes routes based on path vectors. Since the protocol reveals
only the autonomous systems (AS's) along a route but not the costs associated with them, current BGP routing is best characterized by a lack of a priori information about transit costs. In this case, the principal (e.g., a multi-homed site or a tier-1 AS) can implement one of the mechanisms proposed in Section 4 by contracting with individual nodes on the path. Such contracts involve paying some premium over the real cost, and it is not clear whether recursive contracts can be implemented in this scenario. In addition, the current protocol does not have the infrastructure to support the implementation of direct contracts between endpoints and the network.

Recently, several new architectures have been proposed in the context of the Internet to provide the principal not only with a set of paths from which it can choose (as BGP does) but also with the performance along those paths and the network topology. One approach to obtaining such information is through end-to-end probing [1]. Another approach is to have the edge networks perform measurements and discover the network topology [32]. Yet another approach is to delegate the task of obtaining topology and performance information to a third party, as in the routing-as-a-service proposal [21]. These proposals are quite different in nature, but they share an attempt to provide more visibility and transparency into the network.

If information about topology and transit costs is obtained, the scenario maps to the "known transit costs" model (Section 3). In this case, first-best contracts can be achieved through individual contracts with the nodes along the path. Moreover, as we have shown in Section 5, as long as each agent can choose the next hop, the principal can gain the full benefit by contracting with only the first hop (through the implementation of recursive contracts). However, the various proposals for acquiring network topology and performance information do not deal with strategic behavior by
the intermediate nodes. With the realization that the information collected may be used by the principal in subsequent contractual relationships, the intermediate nodes may behave strategically, misrepresenting their true costs to the entities that collect and aggregate such information. One recent approach that can alleviate this problem is to provide packet obituaries, by having each packet confirm its delivery or report its last successful AS hop [4]. Another approach is to have third parties like Keynote independently monitor the network performance.

8. RELATED WORK

The study of non-cooperative behavior in communication networks, and the design of incentives, has received significant attention in the context of wireless ad-hoc routing. [22] considers the problem of malicious behavior, where nodes respond positively to route requests but then fail to forward the actual packets. It proposes to mitigate this through detection and report mechanisms that essentially help to route around the malicious nodes. However, rather than penalizing nodes that do not forward traffic, this approach bypasses the misbehaving nodes, thereby relieving their burden. Such a mechanism is therefore not effective against selfish behavior. In order to mitigate selfish behavior, some approaches [7, 8, 9] require reputation exchange between nodes, or simply first-hand observations [5]. Other approaches propose payment schemes [10, 20, 31] to encourage cooperation. [31] is the closest to our work in that it designs payment schemes in which the sender pays the intermediate nodes in order to prevent several types of selfish behavior. In that approach, nodes are supposed to send receipts to a third-party entity. We show that this type of per-hop monitoring may not be needed.

In the context of Internet routing, [4] proposes an accountability framework that provides end hosts and service providers with after-the-fact audits on the fate of their packets. This proposal is part of a broader
approach to provide end hosts with greater control over the path of their packets [3, 30]. If senders have transit cost information and can fully control the path of their packets, they can design contracts that yield them first-best utility. The accountability framework proposed in [4] can serve two main goals: informing nodes of network conditions to help them make informed decisions, and helping entities establish whether individual AS's have performed their duties adequately. While such a framework can be used for the first task, we propose a different approach to the second problem that does not need per-hop auditing information.

Research in distributed algorithmic mechanism design (DAMD) has been applied to BGP routing [13, 14]. These works propose mechanisms to tackle the hidden-information problem, but ignore the problem of forwarding enforcement. Inducing desired behavior is also the objective in [26], which attempts to respond to the challenge of distributed AMD raised in [15]: if the same agents that seek to manipulate the system also run the mechanism, what prevents them from deviating from the mechanism's proposed rules to maximize their own welfare? They start with the mechanism presented in [13] and use mostly auditing mechanisms to prevent deviation from the algorithm.

The focus of this work is the design of a payment scheme that provides the appropriate incentives within the context of multi-hop routing. Like other works in this field, we assume that all accounting services are provided by out-of-band mechanisms. Security issues in this context, such as node authentication or message encryption, are orthogonal to the problem presented in this paper, and are addressed, for example, in [18, 19, 25].

The problem of information asymmetry and hidden action (also known as moral hazard) is well studied in the economics literature [11, 17, 23, 27]. [17] identifies the problem of moral hazard in production teams, and shows
that it is impossible to design a sharing rule which is efficient and budget-balanced. [27] shows that this task becomes possible when production takes place sequentially.

9. CONCLUSIONS AND FUTURE DIRECTIONS

In this paper we show that in a multi-hop routing setting, where the actions of the intermediate nodes are hidden from the source and/or destination, it is possible to design payment schemes that induce cooperative behavior from the intermediate nodes. We conclude that monitoring per-hop outcomes may not improve the utility of the participants or the network performance. In addition, in scenarios of unknown transit costs, it is also possible to design mechanisms that induce cooperative behavior in equilibrium, but the sender pays a premium for extracting information from the transit nodes. Our model and results suggest several natural and intriguing research avenues:

• Consider manipulative or collusive behaviors that may arise under the proposed payment schemes.
• Analyze the feasibility of recursive contracts under hidden information of transit costs.
• While the proposed payment schemes sustain cooperation in equilibrium, it is not a unique equilibrium. We plan to study under what mechanisms this strategy profile may emerge as a unique equilibrium (e.g., penalty by successor nodes).
• Consider the effect of congestion and capacity constraints on the proposed mechanisms. Our preliminary results show that when several senders compete for a single transit node's capacity, the sender with the highest demand pays a premium even if transit costs are common knowledge. The premium can be expressed as a function of the second-highest demand. In addition, if congestion affects the probability of successful delivery, a sender with a lower-cost alternate path may end up with a lower utility level than a rival with a higher-cost alternate path.
• Fully characterize the full-information Nash equilibrium in first-price auctions,
and use this characterization to derive its overcharging compared to truthful mechanisms.

Assured Service Quality by Improved Fault Management: Service-Oriented Event Correlation

Andreas Hanemann, Munich Network Management Team, Leibniz Supercomputing Center, Barer Str. 21, D-80333 Munich, Germany, hanemann@lrz.de
Martin Sailer, Munich Network Management Team, University of Munich (LMU), Oettingenstr. 67, D-80538 Munich, Germany, sailer@nm.ifi.lmu.de
David Schmitz, Munich Network Management Team, Leibniz Supercomputing Center, Barer Str. 21, D-80333 Munich, Germany, schmitz@lrz.de

ABSTRACT

The paradigm shift from device-oriented to service-oriented management also has implications for the area of event correlation. Today's event correlation mainly addresses the correlation of events as reported from management tools. However, a correlation of user trouble reports concerning services should also be performed. This is necessary to improve the resolution time and to reduce the effort for keeping the service agreements. We refer to such a type of correlation as service-oriented event correlation. The necessity to use this kind of event correlation is motivated in the paper. To introduce service-oriented event correlation for an IT service provider, an appropriate modeling of the correlation workflow and of the information is necessary. Therefore, we examine the process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM) for their contribution to the workflow modeling in this area. The different kinds of dependencies that we find in our general scenario are then used to develop a workflow for the service-oriented event correlation. The MNM Service Model, which is a generic model for IT service management proposed by the Munich Network Management (MNM) Team, is used to derive an appropriate information modeling. An example scenario, the Web
Hosting Service of the Leibniz Supercomputing Center (LRZ), is used to demonstrate the application of service-oriented event correlation.

Categories and Subject Descriptors: C.2.4 [Computer Systems Organization]: Computer-Communication Networks - Distributed Applications

General Terms: Management, Performance, Reliability

1. INTRODUCTION

In huge networks a single fault can cause a burst of failure events. To handle the flood of events and to find the root cause of a fault, event correlation approaches like rule-based reasoning, case-based reasoning, or the codebook approach have been developed. The main idea of correlation is to condense and structure events to retrieve meaningful information. Until now, these approaches have addressed primarily the correlation of events as reported from management tools or devices. Therefore, we call them device-oriented.

In this paper we define a service as a set of functions which are offered by a provider to a customer at a customer-provider interface. This definition of a service is therefore more general than the definition of a Web Service, but a Web Service is included in it. As a consequence, the results are applicable to Web Services as well as to other kinds of services. A service level agreement (SLA) is defined as a contract between customer and provider about guaranteed service performance.

As in today's IT environments the offering of such services with an agreed service quality becomes more and more important, this change also affects event correlation. It has become a necessity for providers to offer such guarantees for a differentiation from other providers. To avoid SLA violations it is especially important for service providers to identify the root cause of a fault in a very short time, or even to act proactively. The latter refers to the case of recognizing the influence of a device breakdown on the offered services. As in this scenario the knowledge about services and their SLAs is
used, we call it service-oriented. It can be addressed from two directions.

Top-down perspective: Several customers report a problem in a certain time interval. Are these trouble reports correlated? How can a resource be identified as the problem's root cause?

Bottom-up perspective: A device (e.g., router, server) breaks down. Which services, and especially which customers, are affected by this fault?

The rest of the paper is organized as follows. In Section 2 we describe how event correlation is performed today and present a selection of state-of-the-art event correlation techniques. Section 3 describes the motivation for service-oriented event correlation and its benefits. Having motivated the need for this type of correlation, we use two well-known IT service management models to derive requirements for an appropriate workflow modeling and present our proposal for it (Section 4). In Section 5 we present our information modeling, which is derived from the MNM Service Model. An application of the approach to a web hosting scenario is presented in Section 6. The last section concludes the paper and presents future work.

2. TODAY'S EVENT CORRELATION TECHNIQUES
In [11] the task of event correlation is defined as a conceptual interpretation procedure in the sense that a new meaning is assigned to a set of events that happen within a certain time interval. Three aspects of event correlation can be distinguished.

Functional aspect: The correlation focuses on the functions which are provided by each network element. It also considers which other functions are used to provide a specific function.

Topology aspect: The correlation takes into account how the network elements are connected to each other and how they interact.

Time aspect: When time constraints are treated explicitly, a start and end time have to be defined for each event. The correlation can then use time relationships between the events. This aspect is only mentioned in some papers [11], but it has to be treated in an event correlation system.

In event correlation it is also important to distinguish between knowledge acquisition/representation and the correlation algorithm. Examples of approaches to knowledge acquisition/representation are Gruschke's dependency graphs [6] and Ensel's dependency detection by neural networks [3]. It is also possible to find the dependencies by analyzing interactions [7]. In addition, there is an approach to manage service dependencies with XML and to define a resource description framework [4]. To give an overview of device-oriented event correlation, a selection of event correlation techniques used for this kind of correlation is presented below.

Model-based reasoning: Model-based reasoning (MBR) [15, 10, 20] represents a system by modeling each of its components. A model can represent either a physical entity or a logical entity (e.g., LAN, WAN, domain, service, business process). The models for physical entities are called the functional model, while the models for all logical entities are called the logical model. A description of each model contains three categories of information: attributes, relations to other models, and behavior. The event correlation is a result of the collaboration among models. As all components of a network are represented with their behavior in the model, it is possible to perform simulations to predict how the whole network will behave. A comparison in [20] showed that a large MBR system is not in all cases easy to maintain: it can be difficult to model the behavior of all components and their interactions correctly and completely. An example system for MBR is NetExpert [16] from OSI, a hybrid MBR/RBR system (in 2000 OSI was acquired by Agilent Technologies).

Rule-based reasoning: Rule-based reasoning (RBR) [15, 10] uses a set of rules for event correlation. The rules have the form conclusion if
condition. The condition uses received events and information about the system, while the conclusion contains actions which can either lead to system changes or use system parameters to choose the next rule. An advantage of the approach is that the rules are more or less human-readable, so their effect is intuitive. The correlation has proved to be very fast in practice when the RETE algorithm is used. In the literature [20, 1], however, RBR systems are characterized as relatively inflexible: frequent changes in the modeled IT environment would lead to many rule updates, and these changes would have to be performed by experts, as no automation has been established so far. In some systems the information about the network topology which is needed for the event correlation is not used explicitly, but is encoded into the rules; this opaque usage makes rule updates for topology changes quite difficult. System brittleness is also a problem for RBR systems: the system fails if an unknown case occurs, because the case cannot be mapped onto similar cases. The output of RBR systems can also be difficult to predict, because of unforeseen rule interactions in a large rule set. According to [15], an RBR system is only appropriate if the domain for which it is used is small, non-changing, and well understood. The GTE IMPACT system [11] is an example of a rule-based system; it also uses MBR (GTE merged with Bell Atlantic in 1998 and is now called Verizon [19]).

Codebook approach: The codebook approach [12, 21] has similarities to RBR, but goes a step further and encodes the rules into a correlation matrix. The approach starts from a dependency graph with two kinds of nodes: the faults (denoted as problems in the cited papers) which have to be detected, and the observable events (symptoms in the papers) which are caused by the faults or other events. The dependencies between the nodes are denoted as directed edges. It is possible to assign weights to the edges, e.g., a weight for the probability that fault/event A causes event B; another possible weighting could indicate time dependencies. There are several possibilities to reduce the initial graph. If, e.g., a cyclic dependency of events exists and there are no probabilities for the cycle's edges, all events in the cycle can be treated as one event. After a final input graph is chosen, the graph is transformed into a correlation matrix whose columns contain the faults and whose rows contain the events. If there is a dependency in the graph, the weight of the corresponding edge is put into the corresponding matrix cell; if no weights are used, the matrix cells get the value 1 for a dependency and 0 otherwise. Afterwards, a simplification can be performed in which events that do not help to discriminate faults are deleted. There is a trade-off between the minimization of the matrix and the robustness of the results: if the matrix is minimized as much as possible, some faults can only be distinguished by a single event, and if this event cannot be reliably detected, the event correlation system cannot discriminate between those faults. A measure of how many event observation errors can be compensated by the system is the Hamming distance. The number of rows (events) that can be deleted from the matrix can differ greatly depending on the relationships [15]. The codebook approach has the advantage that it builds on long-term experience with graphs and coding. This experience is used to minimize the dependency graph and to select an optimal group of events with respect to processing time and robustness against noise. A disadvantage of the approach is that, similar to RBR, frequent changes in the environment make it necessary to frequently edit the input graph. SMARTS InCharge [12, 17] is an example of such a correlation system.

Case-based reasoning: In
contrast to the other approaches, case-based reasoning (CBR) [14, 15] systems do not use any knowledge about the system structure. The knowledge base stores cases together with their values for system parameters and the successful recovery actions for these cases. The recovery actions are usually not performed by the CBR system itself, but in most cases by a human operator. If a new case appears, the CBR system compares the current system parameters with the system parameters of prior cases and tries to find a similar one. To identify such a match, it has to be defined for which parameters the cases may differ and for which they have to be the same. If a match is found, a learned action can be performed automatically, or the operator can be informed together with a recovery proposal. An advantage of this approach is that the ability to learn is an integral part of it, which is important in rapidly changing environments. There are also difficulties in applying the approach [15]: the fields which are used to find a similar case, and their importance, have to be defined appropriately, and if there is a match with a similar case, an adaptation of the previous solution to the current situation has to be found. An example system for CBR is SpectroRx from Cabletron Systems. The part of Cabletron that developed SpectroRx became an independent software company in 2002 and is now called Aprisma Management Technologies [2].

In this section four event correlation approaches were presented which have evolved into commercial event correlation systems. The correlation approaches have different focuses: MBR mainly deals with knowledge acquisition and representation, while RBR and the codebook approach propose a correlation algorithm. The focus of CBR is its ability to learn from prior cases.

3. MOTIVATION OF SERVICE-ORIENTED EVENT CORRELATION
Fig. 1 shows a general service scenario upon which we discuss the importance of service-oriented correlation. Several services, such as SSH, a web hosting service, or a video conference service, are offered by a provider to its customers at the customer-provider interface. A customer can allow several users to use a subscribed service. The quality and cost of the subscribed services are agreed between customer and provider in SLAs. On the provider side the services use subservices for their provisioning; for the services mentioned above such subservices are DNS, proxy service, and IP service. Both services and subservices depend on resources upon which they are provisioned. As displayed in the figure, a service can depend on more than one resource, and a resource can be used by one or more services.

[Figure 1: Scenario - users and a customer on the customer side, connected via an SLA to the provider's services (SSH, web hosting, video conference), which depend on subservices (DNS, proxy, IP service) and on resources via service and resource dependencies.]

To establish a common understanding, we distinguish between different types of events.

Resource event: We use the term resource event for network events and system events. A network event refers to events like node up/down or link up/down, whereas system events refer to events like server down or authentication failure.

Service event: A service event indicates that a service does not work properly. A trouble ticket which is generated from a customer report is one kind of such an event. Other service events can be generated by the provider of a service, if the provider himself detects a service malfunction.

In such a scenario the provider may receive service events from customers indicating that SSH, the web hosting service, and the video conference service are not available. Referring to the service hierarchy, the provider can conclude in such a case that all these services depend on DNS. Therefore, it seems more likely that a common resource which is
necessary for this service does not work properly or is not available than to assume three independent service failures. In contrast to a resource-oriented perspective, where all of the service events would have to be processed separately, the service events can be linked together; their information can be aggregated and processed only once. If, e.g., the problem is solved, one common message informing the customers that their services are available again is generated and distributed using the list of linked service events. This is certainly a simplified example, but it shows the general principle of identifying the common subservices and common resources of different services.

It is important to note that the service-oriented perspective is needed to integrate service aspects, especially QoS aspects. One example of such an aspect is a fault that does not lead to a total failure of a service, but whose QoS parameters, respectively agreed service levels, at the customer-provider interface might not be met. A degradation of service quality caused by high traffic load on the backbone is another example. In the resource-oriented perspective it would be possible to define events which indicate that a link usage is higher than a threshold, but no mechanism has been established so far to find out which services are affected and whether a QoS violation occurs.

To summarize, the reasons for the necessity of service-oriented event correlation are the following:

Keeping of SLAs (top-down perspective): The time interval between the first symptom (recognized either by the provider, network management tools, or customers) that a service does not perform properly and the verified fault repair needs to be minimized. This is especially important with respect to SLAs, as such agreements often contain guarantees like a mean time to repair.

Effort reduction (top-down perspective): If several user trouble reports are symptoms of the same fault, fault processing should be performed only once and not several times. Once the fault has been repaired, the affected customers should be informed automatically.

Impact analysis (bottom-up perspective): In case of a fault in a resource, its influence on the associated services and the affected customers can be determined. This analysis can be performed for short-term (a current resource failure) or long-term (e.g., network optimization) considerations.

4. WORKFLOW MODELING
In the following we examine the established IT process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM). The aim is to find out where event correlation can be located in the process structure and how detailed the frameworks currently are. After that we present our proposal for a workflow modeling for service-oriented event correlation.

4.1 IT Infrastructure Library (ITIL)
The British Office of Government Commerce (OGC) and the IT Service Management Forum (itSMF) [9] provide a collection of best practices for IT processes in the area of IT service management, called ITIL. Service management is described by 11 modules which are grouped into the Service Support Set (provider-internal processes) and the Service Delivery Set (processes at the customer-provider interface). Each module describes processes, functions, roles, and responsibilities as well as the necessary databases and interfaces. In general, ITIL describes contents, processes, and aims at a high abstraction level and contains no information about management architectures and tools. Fault management is divided into the Incident Management process and the Problem Management process.

Incident Management: Incident Management contains the service desk as the interface to customers (e.g., it receives reports about service problems). In case of severe errors, structured queries are transferred to Problem Management.

Problem Management: The Problem Management's tasks are to solve
problems, to take care of priorities, to minimize the recurrence of problems, and to provide management information. After receiving requests from Incident Management, the problem has to be identified, and information about the necessary countermeasures is transferred to Change Management.

The ITIL processes describe only what has to be done, but contain no information about how this can actually be performed. As a consequence, event correlation is not part of the modeling. The ITIL incidents could be regarded as input for service-oriented event correlation, while the output could be used as a query to the ITIL Problem Management.

4.2 Enhanced Telecom Operations Map (eTOM)
The TeleManagement Forum (TMF) [18] is an international non-profit organization of service providers and suppliers in the area of telecommunications services. Similar to ITIL, a process-oriented framework was developed first, but the framework was designed with a narrower focus, i.e., the market of information and communications service providers. A horizontal grouping into processes for customer care, service development & operations, network & systems management, and partner/supplier is performed. The vertical grouping (fulfillment, assurance, billing) reflects the service life cycle. In the area of fault management, three processes have been defined along the horizontal process grouping.

Problem Handling: The purpose of this process is to receive trouble reports from customers and to solve them by using the Service Problem Management. The aim is also to keep the customer informed about the current status of the trouble report processing as well as about the general network status (e.g., planned maintenance). It is also a task of this process to inform the QoS/SLA management about the impact of current errors on the SLAs.

Service Problem Management: In this process, reports about customer-affecting service failures are received and transformed. Their root causes are identified, and a problem solution or a temporary workaround is established. The task of the Diagnose Problem subprocess is to find the root cause of the problem by performing appropriate tests; nothing is said about how this can be done (e.g., no event correlation is mentioned).

Resource Trouble Management: One subprocess of Resource Trouble Management is responsible for resource failure event analysis, alarm correlation & filtering, and failure event detection & reporting. Another subprocess is used to execute different tests to find a resource failure. A further subprocess keeps track of the status of the trouble report processing; this subprocess is similar to the functionality of a trouble ticket system.

The process description in eTOM is not very detailed. It is useful as a checklist of which aspects of these processes have to be taken into account, but there is no detailed modeling of the relationships and no methodology for applying the framework. Event correlation is only mentioned in resource management; it is not used on the service level.

4.3 Workflow Modeling for the Service-Oriented Event Correlation
Fig.
2 shows a general service scenario which we use as the basis for the workflow modeling for service-oriented event correlation. We assume that the dependencies are already known (e.g., by using the approaches mentioned in Section 2). The provider offers different services which depend on other services, called subservices (service dependencies). Another kind of dependency exists between services/subservices and resources; these are called resource dependencies. These two kinds of dependencies are in most cases not used for the event correlation performed today: resource-oriented event correlation deals only with relationships on the resource level (e.g., network topology).

[Figure 2: Different kinds of dependencies for the service-oriented event correlation - provider services depend on subservices (service dependency), which in turn depend on resources (resource dependency).]

The dependencies depicted in Figure 2 reflect a situation with no redundancy in the service provisioning; the relationships can be seen as AND relationships. In case of redundancy, e.g., if a provider has 3 independent web servers, another modeling (see Figure 3) should be used (OR relationship). In such a case different relationships are possible: the service could be seen as working properly if one of the servers is working, or if a certain percentage of them is working.

[Figure 3: Modeling of no redundancy (a, AND relationship between a service and its resources) and of redundancy (b, OR relationship).]

As both ITIL and eTOM contain no description of how event correlation, and especially service-oriented event correlation, should actually be performed, we propose the following design for such a workflow (see Fig. 4). The additional components which are not part of a device-oriented event correlation are depicted with a gray background. The workflow is divided into the phases fault detection, fault diagnosis, and fault recovery.

In the fault detection phase, resource events and service events can be generated from different sources. The resource events are issued during the use of a resource, e.g., via SNMP traps. The service events originate from customer trouble reports, which are reported via the Customer Service Management access point (see below). In addition to these two passive ways of receiving events, a provider can also perform active tests. These tests can either deal with the resources (resource active probing) or assume the role of a virtual customer and test a service or one of its subservices by performing interactions at the service access points (service active probing).

The fault diagnosis phase is composed of three event correlation steps. The first one is performed by the resource event correlator, which can be regarded as the event correlator in today's commercial systems; it therefore deals only with resource events. The service event correlator performs a correlation of the service events, while the aggregate event correlator finally performs a correlation of both resource and service events. If the correlation result of one of the correlation steps shall be improved, it is possible to go back to the fault detection phase and start active probing to get additional events. These events can be helpful to confirm a correlation result or to reduce the list of possible root causes. After the event correlation, an ordered list of possible root causes is checked by the resource management. When the root cause is found, the failure repair starts; this last step is performed in the fault recovery phase.

The next subsections present different elements of the event correlation process.

4.4 Customer Service Management and Intelligent
Assistant
The Customer Service Management (CSM) access point was proposed by [13] as a single interface between customer and provider.

[Figure 4: Event correlation workflow - fault detection (resource events from resource usage, service events via the CSM access point and Intelligent Assistant, resource and service active probing), fault diagnosis (resource event correlator, service event correlator, aggregate event correlator producing a candidate list for the resource management), and fault recovery.]

The CSM's functionality is to provide information to the customer about his subscribed services, e.g., reports about the fulfillment of agreed SLAs. It can also be used to subscribe to services or to allow the customer to manage his services in a restricted way. Reports about problems with a service can be sent to the customer via CSM. The CSM is also contained in the MNM Service Model (see Section 5).

To reduce the effort for the provider's first-level support, an Intelligent Assistant can be added to the CSM. The Intelligent Assistant structures the customer's information about a service problem. The information which is needed for a preclassification of the problem is gathered via a list of questions to the customer. The list is not static, as the current question depends on the answers to prior questions or on the results of specific tests. A decision tree is used to structure the questions and tests. The tests allow the customer controlled access to the provider's management. At the LRZ, a customer of the E-Mail Service can, e.g., use the Intelligent Assistant to perform ping requests to the mail server. More complex requests would also be possible, e.g., requests for a combination of SNMP variables.

4.5 Active Probing
Active probing is useful for the provider to check his offered services. The aim is to identify and react to problems before a customer notices them. The probing can be done from a customer point of view or by testing the resources which are part of the services. It can also be useful to perform tests of subservices (the provider's own subservices or subservices offered by suppliers).

Different schedules are possible for active probing. The provider could choose to test important services and resources at regular time intervals. Other tests could be initiated by a user who traverses the decision tree of the Intelligent Assistant, including active tests. Another possibility for the use of active probing is a request from the event correlator, if the current correlation result needs to be improved. The results of active probing are reported via service or resource events to the event correlator (if the test was demanded by the Intelligent Assistant, the result is reported to it, too). While the events that are received from management tools and customers denote negative events (something does not work), the events from active probing should also contain positive events (something works) for a better discrimination.

4.6 Event Correlator
The event correlation should not be performed by a single event correlator, but in several steps. The reason for this is the different characteristics of the dependencies (see Fig.
1). On the resource level there are only relationships between resources (network topology, systems configuration). An example is a switch linking separate LANs: if the switch is down, events are reported that other network components located behind the switch are also unreachable. When correlating these events it can be figured out that the switch is the likely error cause. At this stage, the integration of service events does not seem to be helpful. The result of this step is a list of resources which could be the problem's root cause. The resource event correlator is used to perform this step.

In the service-oriented scenario there are also service and resource dependencies. As the next step in the event correlation process, the service events should be correlated with each other using the service dependencies, because the service dependencies have no direct relationship to the resource level. This step is performed by the service event correlator. If, e.g., there are service events from customers stating that two services do not work, and both services depend on a common subservice, it seems more likely that the resource failure can be found inside the subservice. The output of this correlation is a list of services/subservices which could be affected by a failure in an associated resource.

In the last step the aggregate event correlator matches the lists from the resource event correlator and the service event correlator to find the problem's possible root cause. This is done by using the resource dependencies.

The event correlation techniques presented in Section 2 could be used to perform the correlation inside the three event correlators. If the dependencies can be determined precisely, an RBR or codebook approach seems appropriate. A case database (CBR) could be used for cases which could not be covered by RBR or the codebook approach. These cases could then be used to improve the modeling so that RBR or the codebook approach can deal with them in future correlations.

5. INFORMATION MODELING
In this section we use a generic model for IT service management to derive the information necessary for the event correlation process.

5.1 MNM Service Model
The MNM Service Model [5] is a generic model for IT service management. A distinction is made between the customer side and the provider side. The customer side contains the basic roles customer and user, while the provider side contains the role provider. The provider makes the service available to the customer side. The service as a whole is divided into usage, which is accessed by the role user, and management, which is used by the role customer. The model consists of two main views. The Service View (see Fig. 5) shows a common perspective of the service for customer and provider. Everything that is only important for the service realization is not contained in this view. For these details another perspective, the Realization View, is defined (see Fig.
6).

[Figure 5: Service View - the side-independent service with its usage functionality, management functionality, QoS parameters, service access point, and CSM access point, connecting the roles user and customer (customer side) with the role provider (provider side) via the service agreement.]

The Service View contains the service, whose functionality is defined for usage as well as for management. There are two access points (the service access point and the CSM access point) where user and customer can access the usage and management functionality, respectively. Associated with each service is a list of QoS parameters which have to be met by the service at the service access point. The QoS surveillance is performed by the management.

[Figure 6: Realization View - on the provider side, the service implementation (service logic, service client, sub-service clients, resources) and the service management implementation (service management logic, CSM client, sub-service management clients, basic management functionality).]

In the Realization View the service implementation and the service management implementation are described in detail. For both there are provider-internal resources and subservices. For the service implementation, a service logic uses internal resources (devices, knowledge, staff) and external subservices to provide the service. Analogously, the service management implementation includes a service management logic using basic management functionalities [8] and external management subservices. The
MNM Service Model can be used for a similar modeling of the used subservices, i.e., the model can be applied recursively.\nAs the service-oriented event correlation has to use dependencies of a service from subservices and resources, the model is used in the following to derive the needed information for service events.\n5.2 Information Modeling for Service Events Today``s event correlation deals mainly with events which are originated from resources.\nBeside a resource identifier these events contain information about the resource status, e.g., SNMP variables.\nTo perform a service-oriented event correlation it is necessary to define events which are related to services.\nThese events can be generated from the provider``s own service surveillance or from customer reports at the CSM interface.\nThey contain information about the problems with the agreed QoS.\nIn our information modeling we define an event superclass which contains common attributes (e.g., time stamp).\nResource event and service event inherit from this superclass.\nDerived from the MNM Service Model we define the information necessary for a service event.\nService: As a service event shall represent the problems of a single service, a unique identification of the affected service is contained here.\nEvent description: This field has to contain a description of the problem.\nDepending on the interactions at the service access point (Service View) a classification of the problem into different categories should be defined.\nIt should also be possible to add an informal description of the problem.\nQoS parameters: For each service QoS parameters (Service View) are defined between the provider and the customer.\nThis field represents a list of these QoS parameters and agreed service levels.\nThe list can help the provider to set the priority of a problem with respect to the service levels agreed.\nResource list: This list contains the resources (Realization View) which are needed to provide the 
service. This list is used by the provider to check whether one of these resources causes the problem.

Subservice service event identification: In the service hierarchy (Realization View), the service for which this service event has been issued may depend on subservices. If there is a suspicion that one of these subservices causes the problem, child service events are issued from this service event for the subservices. In such a case this field contains links to the corresponding events.

Other event identifications: In the event correlation process the service event can be correlated with other service events or with resource events. This field then contains links to the other events which have been correlated to this service event. This is useful, e.g., to send a common message to all affected customers when their subscribed services are available again.

Issuer's identification: This field can contain an identification of the customer who reported the problem, an identification of a service provider's employee (in case the failure has been detected by the provider's own service active probing), or a link to a parent service event. The identification is needed if there are ambiguities in the service event or if the issuer should be informed (e.g., that the service is available again). The possible issuers refer to the basic roles (customer, provider) in the Service Model.

Assignee: To keep track of the processing, the name and address of the provider's employee who is solving or has solved the problem is also noted. This is a specialization of the provider role in the Service Model.

Dates: This field contains key dates in the processing of the service event, such as the initial date, the problem identification date, and the resolution date. These dates are important to keep track of how quickly problems have been solved.

Status: This field represents the service event's current status (e.g., active, suspended, solved).

Priority: The priority indicates the importance the service event has from the provider's perspective. The importance is derived from the service agreement, especially the agreed QoS parameters (Service View).

The fields dates, status, and other event identifications are not derived directly from the Service Model, but are necessary for the event correlation process.

6. APPLICATION OF SERVICE-ORIENTED EVENT CORRELATION FOR A WEB HOSTING SCENARIO

The Leibniz Supercomputing Center (LRZ) is the joint computing center for the Munich universities and research institutions. It also runs the Munich Scientific Network and offers related services. One of these services is the Virtual WWW Server, a web hosting offer for smaller research institutions. It currently has approximately 200 customers. A subservice of the Virtual WWW Server is the Storage Service, which stores the static and dynamic web pages and uses caching techniques for fast access. Other subservices are the DNS and IP services. When a user accesses a hosted web site via one of the LRZ's Virtual Private Networks, the VPN service is also used. The resources of the Virtual WWW Server include a load balancer and 5 redundant servers. The network connections are also part of the resources, as well as the Apache web server application running on the servers. Figure 7 shows the dependencies of the Virtual WWW Server.

6.1 Customer Service Management and Intelligent Assistant

The Intelligent Assistant that is available at the Leibniz Supercomputing Center can currently be used for connectivity or performance problems or for problems with the LRZ E-Mail Service. A selection of possible customer problem reports for the Virtual WWW Server is given in the following:

• The hosted web site is not reachable.
• The web site access is (too) slow.
• The web site contains outdated content.
• The transfer of new content to the LRZ does not change the provided content.
• The web site looks strange (e.g., caused by problems with the HTML version).

[Figure 7: Dependencies of the Virtual WWW Server]

These customer reports have to be mapped onto failures in resources. For an unreachable web site, e.g., different root causes are possible, such as a DNS problem, a connectivity problem, or a wrong configuration of the load balancer.

6.2 Active Probing

In general, active probing can be used for services or resources. For the service active probing of the Virtual WWW Server, a virtual customer could be installed. This customer performs typical HTTP requests of web sites and compares the answers with the known content. To check the up-to-dateness of a test web site, the content could contain a time stamp. The service active probing could also include the testing of subservices, e.g., sending requests to the DNS. The resource active probing performs tests of the resources. Examples are connectivity tests, requests to application processes, and tests of available disk space.

6.3 Event Correlation for the Virtual WWW Server

Figure 8 shows the example processing. At first, a customer who takes a look at his hosted web site reports that the content that he had changed is not displayed correctly. This report is transferred to the service management via the CSM interface. An Intelligent Assistant could be used to structure the customer report. The service management translates the customer report into a service event. Independent of the customer report, the service provider's own service active probing tries to change the content of a test web site. Because this is not possible, a service event is issued. Meanwhile, a resource event has been reported to the event correlator, because an access of the content caching server to one of the
WWW servers failed.

[Figure 8: Example processing of a customer report]

As there are no other events at the moment, the resource event correlation cannot correlate this event to other events. At this stage it would be possible for the event correlator to ask the resource management to perform an active probing of related resources. Both service events are now transferred to the service event correlator and are correlated. From the correlation of these events it seems likely that either the WWW server itself or the link to the WWW server is the problem's root cause. A wrong web site update procedure inside the content caching server seems less likely, as this would only explain the customer report and not the service active probing result. At this stage a service active probing could be started, but this does not seem useful, as this correlation only deals with the Web Hosting Service and its resources and not with other services. After the separate correlation of both resource and service events, which can be performed in parallel, the aggregate event correlator is used to correlate both types of events. The additional resource event makes it seem much more likely that the problems are caused by a broken link to the WWW server or by the WWW server itself, and not by the content caching server. In this case the event correlator asks the resource management to check the link and the WWW server. The decision between these two likely error causes cannot be further automated here. Later,
the resource management finds out that a broken link is the failure's root cause. It informs the event correlator about this, and it can be determined that this explains all previous events. Therefore, the event correlation can be stopped at this point. Depending on the provider's customer relationship management, the root cause found and an expected repair time could be reported to the customers. After the link has been repaired, it is possible to report this event via the CSM interface. Even though many details of this event correlation process could also be performed differently, the example shows an important advantage of the service-oriented event correlation: the relationship between the service provisioning and the provider's resources is explicitly modeled. This allows a mapping of the customer report onto the provider-internal resources.

6.4 Event Correlation for Different Services

If a provider like the LRZ offers several services, the service-oriented event correlation can be used to reveal relationships that are not obvious in the first place. If the LRZ E-Mail Service and its events are viewed in relationship with the events for the Virtual WWW Server, it is possible to identify failures in common subservices and resources. Both services depend on the DNS, which means that customer reports like "I cannot retrieve new e-mail" and "The web site of my research institute is not available" can have a common cause, e.g., the DNS does not work properly.

7. CONCLUSION AND FUTURE WORK

In our paper we showed the need for a service-oriented event correlation. For an IT service provider this new kind of event correlation makes it possible to automatically map problems with the current service quality onto resource failures. This helps to find a failure's root cause earlier and to reduce the costs of SLA violations. In addition, customer reports can be linked together, and therefore the processing effort can be reduced. To receive these benefits
we presented our approach for performing the service-oriented event correlation as well as a modeling of the necessary correlation information. In the future we are going to apply our workflow and information modeling to services offered by the Leibniz Supercomputing Center in further detail. Several issues have not been treated in detail so far, e.g., the consequences for the service-oriented event correlation if a subservice is offered by another provider. If a service does not perform properly, it has to be determined whether this is caused by the provider himself or by the subservice. In the latter case appropriate information has to be exchanged between the providers via the CSM interface. Another issue is the use of active probing in the event correlation process, which can improve the result, but can also lead to a correlation delay. Another important point is the precise definition of dependency, which has also been left out by many other publications. To avoid having too many dependencies in a certain situation, one could try to check whether the dependencies currently exist. In the case of a download from a web site there is only a dependency on the DNS subservice at the beginning; after the address is resolved, a download failure is unlikely to have been caused by the DNS. Another possibility to reduce the dependencies is to divide a service into its possible user interactions (e.g., an e-mail service into transactions like get mail, send mail, etc.) and to define the dependencies for each user interaction.

Acknowledgments

The authors wish to thank the members of the Munich Network Management (MNM) Team for helpful discussions and valuable comments on previous versions of the paper. The MNM Team, directed by Prof. Dr.
Heinz-Gerd Hegering, is a group of researchers of the Munich universities and the Leibniz Supercomputing Center of the Bavarian Academy of Sciences. Its web server is located at wwwmnmteam.informatik.uni-muenchen.de.

8. REFERENCES

[1] K. Appleby, G. Goldszmidt, and M. Steinder. Yemanja - A Layered Event Correlation Engine for Multi-domain Server Farms. In Proceedings of the Seventh IFIP/IEEE International Symposium on Integrated Network Management, pages 329-344. IFIP/IEEE, May 2001.
[2] Spectrum, Aprisma Corporation. http://www.aprisma.com.
[3] C. Ensel. New Approach for Automated Generation of Service Dependency Models. In Network Management as a Strategy for Evolution and Development; Second Latin American Network Operation and Management Symposium (LANOMS 2001). IEEE, August 2001.
[4] C. Ensel and A. Keller. An Approach for Managing Service Dependencies with XML and the Resource Description Framework. Journal of Network and Systems Management, 10(2), June 2002.
[5] M. Garschhammer, R. Hauck, H.-G. Hegering, B. Kempter, M. Langer, M. Nerb, I. Radisic, H. Roelle, and H. Schmidt. Towards generic Service Management Concepts - A Service Model Based Approach. In Proceedings of the Seventh IFIP/IEEE International Symposium on Integrated Network Management, pages 719-732. IFIP/IEEE, May 2001.
[6] B. Gruschke. Integrated Event Management: Event Correlation using Dependency Graphs. In Proceedings of the 9th IFIP/IEEE International Workshop on Distributed Systems: Operations & Management (DSOM 98). IEEE/IFIP, October 1998.
[7] M. Gupta, A. Neogi, M. Agarwal, and G. Kar. Discovering Dynamic Dependencies in Enterprise Environments for Problem Determination. In Proceedings of the 14th IFIP/IEEE Workshop on Distributed Systems: Operations and Management. IFIP/IEEE, October 2003.
[8] H.-G. Hegering, S. Abeck, and B. Neumair. Integrated Management of Networked Systems - Concepts, Architectures and their Operational Application. Morgan Kaufmann Publishers, 1999.
[9] IT Infrastructure Library, Office of Government Commerce and IT Service Management Forum. http://www.itil.co.uk.
[10] G. Jakobson and M. Weissman. Alarm Correlation. IEEE Network, 7(6), November 1993.
[11] G. Jakobson and M. Weissman. Real-time Telecommunication Network Management: Extending Event Correlation with Temporal Constraints. In Proceedings of the Fourth IEEE/IFIP International Symposium on Integrated Network Management, pages 290-301. IEEE/IFIP, May 1995.
[12] S. Kliger, S. Yemini, Y. Yemini, D. Ohsie, and S. Stolfo. A Coding Approach to Event Correlation. In Proceedings of the Fourth IFIP/IEEE International Symposium on Integrated Network Management, pages 266-277. IFIP/IEEE, May 1995.
[13] M. Langer, S. Loidl, and M. Nerb. Customer Service Management: A More Transparent View To Your Subscribed Services. In Proceedings of the 9th IFIP/IEEE International Workshop on Distributed Systems: Operations & Management (DSOM 98), Newark, DE, USA, October 1998.
[14] L. Lewis. A Case-based Reasoning Approach for the Resolution of Faults in Communication Networks. In Proceedings of the Third IFIP/IEEE International Symposium on Integrated Network Management. IFIP/IEEE, 1993.
[15] L. Lewis. Service Level Management for Enterprise Networks. Artech House, Inc., 1999.
[16] NETeXPERT, Agilent Technologies. http://www.agilent.com/comms/OSS.
[17] InCharge, Smarts Corporation. http://www.smarts.com.
[18] Enhanced Telecom Operations Map, TeleManagement Forum. http://www.tmforum.org.
[19] Verizon Communications. http://www.verizon.com.
[20] H. Wietgrefe, K.-D. Tuchs, K. Jobmann, G. Carls, P. Froelich, W. Nejdl, and S.
Steinfeld. Using Neural Networks for Alarm Correlation in Cellular Phone Networks. In International Workshop on Applications of Neural Networks to Telecommunications (IWANNT), May 1997.
[21] S. Yemini, S. Kliger, E. Mozes, Y. Yemini, and D. Ohsie. High Speed and Robust Event Correlation. IEEE Communications Magazine, 34(5), May 1996.

Assured Service Quality by Improved Fault Management: Service-Oriented Event Correlation

ABSTRACT

The paradigm shift from device-oriented to service-oriented management also has implications for the area of event correlation. Today's event correlation mainly addresses the correlation of events as reported from management tools. However, a correlation of user trouble reports concerning services should also be performed. This is necessary to
improve the resolution time and to reduce the effort for keeping the service agreements. We refer to such a type of correlation as service-oriented event correlation. The necessity of this kind of event correlation is motivated in the paper. To introduce service-oriented event correlation for an IT service provider, an appropriate modeling of the correlation workflow and of the information is necessary. Therefore, we examine the process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM) for their contribution to the workflow modeling in this area. The different kinds of dependencies that we find in our general scenario are then used to develop a workflow for the service-oriented event correlation. The MNM Service Model, a generic model for IT service management proposed by the Munich Network Management (MNM) Team, is used to derive an appropriate information modeling. An example scenario, the Web Hosting Service of the Leibniz Supercomputing Center (LRZ), is used to demonstrate the application of service-oriented event correlation.

1. INTRODUCTION

In huge networks a single fault can cause a burst of failure events. To handle the flood of events and to find the root cause of a fault, event correlation approaches like rule-based reasoning, case-based reasoning, or the codebook approach have been developed. The main idea of correlation is to condense and structure events to retrieve meaningful information. Until now, these approaches have addressed primarily the correlation of events as reported from management tools or devices. Therefore, we call them device-oriented. In this paper we define a service as a set of functions which are offered by a provider to a customer at a customer-provider interface. The definition of a "service" is therefore more general than the definition of a "Web Service", but a "Web Service" is included in this "service" definition. As a consequence, the results are applicable to Web Services as well as to other kinds of services. A service level agreement (SLA) is defined as a contract between customer and provider about guaranteed service performance. As in today's IT environments the offering of such services with an agreed service quality becomes more and more important, this change also affects the event correlation. It has become a necessity for providers to offer such guarantees to differentiate themselves from other providers. To avoid SLA violations it is especially important for service providers to identify the root cause of a fault in a very short time or even to act proactively. The latter refers to the case of recognizing the influence of a device breakdown on the offered services. As the knowledge about services and their SLAs is used in this scenario, we call it service-oriented. It can be addressed from two directions.

Top-down perspective: Several customers report a problem in a certain time interval. Are these trouble reports correlated? How can a resource be identified as the problem's root cause?

Bottom-up perspective: A device (e.g., router, server) breaks down. Which services, and especially which customers, are affected by this fault?

The rest of the paper is organized as follows. In Section 2 we describe how event correlation is performed today and present a selection of state-of-the-art event correlation techniques. Section 3 describes the motivation for service-oriented event correlation and its benefits. After having motivated the need for this type of correlation, we use two well-known IT service management models to derive requirements for an appropriate workflow modeling and present our proposal for it (see Section 4). In Section 5 we present our information modeling, which is derived from the MNM Service Model. An application of the approach to a web hosting scenario is described in Section 6. The last section concludes the paper and presents future work.

2. TODAY'S EVENT
CORRELATION TECHNIQUES

In [11] the task of event correlation is defined as "a conceptual interpretation procedure in the sense that a new meaning is assigned to a set of events that happen in a certain time interval". We can distinguish between three aspects of event correlation.

Functional aspect: The correlation focuses on the functions which are provided by each network element. It is also regarded which other functions are used to provide a specific function.

Topology aspect: The correlation takes into account how the network elements are connected to each other and how they interact.

Time aspect: When explicitly regarding time constraints, a start and end time has to be defined for each event. The correlation can use time relationships between the events to perform the correlation. This aspect is only mentioned in some papers [11], but it has to be treated in an event correlation system.

In event correlation it is also important to distinguish between the knowledge acquisition/representation and the correlation algorithm. Examples of approaches to knowledge acquisition/representation are Gruschke's dependency graphs [6] and Ensel's dependency detection by neural networks [3]. It is also possible to find the dependencies by analyzing interactions [7]. In addition, there is an approach to manage service dependencies with XML and to define a resource description framework [4]. To give an overview of device-oriented event correlation, a selection of event correlation techniques used for this kind of correlation is presented.

Model-based reasoning: Model-based reasoning (MBR) [15, 10, 20] represents a system by modeling each of its components. A model can either represent a physical entity or a logical entity (e.g., LAN, WAN, domain, service, business process). The models for physical entities are called the functional model, while the models for all logical entities are called the logical model. A description of each model contains
three categories of information: attributes, relations to other models, and behavior. The event correlation is a result of the collaboration among models. As all components of a network are represented with their behavior in the model, it is possible to perform simulations to predict how the whole network will behave. A comparison in [20] showed that a large MBR system is not in all cases easy to maintain. It can be difficult to model the behavior of all components and their interactions correctly and completely. An example system for MBR is NetExpert [16] from OSI, which is a hybrid MBR/RBR system (in 2000 OSI was acquired by Agilent Technologies).

Rule-based reasoning: Rule-based reasoning (RBR) [15, 10] uses a set of rules for event correlation. The rules have the form "conclusion if condition". The condition uses received events and information about the system, while the conclusion contains actions which can either lead to system changes or use system parameters to choose the next rule. An advantage of the approach is that the rules are more or less human-readable and therefore their effect is intuitive. The correlation has proved to be very fast in practice by using the RETE algorithm. In the literature [20, 1] it is claimed that RBR systems are relatively inflexible: frequent changes in the modeled IT environment would lead to many rule updates, and these changes would have to be performed by experts, as no automation has currently been established. In some systems, information about the network topology which is needed for the event correlation is not used explicitly, but is encoded into the rules. This intransparent usage would make rule updates for topology changes quite difficult. System brittleness would also be a problem for RBR systems: the system fails if an unknown case occurs, because the case cannot be mapped onto similar cases. The output of RBR systems would also be difficult to predict,
because of unforeseen rule interactions in a large rule set. According to [15] an RBR system is only appropriate if the domain for which it is used is small, nonchanging, and well understood. The GTE IMPACT system [11] is an example of a rule-based system; it also uses MBR (GTE merged with Bell Atlantic in 1998 and is now called Verizon [19]).

Codebook approach: The codebook approach [12, 21] has similarities to RBR, but goes a step further and encodes the rules into a correlation matrix. The approach starts from a dependency graph with two kinds of nodes. The first kind of node represents the faults (denoted as problems in the cited papers) which have to be detected, while the second kind represents observable events (symptoms in the papers) which are caused by the faults or by other events. The dependencies between the nodes are denoted as directed edges. It is possible to assign weights to the edges, e.g., a weight for the probability that fault/event A causes event B.
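As a minimal illustration of this encoding (the fault and event names are hypothetical, and the graph is taken to be unweighted), each fault's set of caused events can be written as a binary code word, and an observed set of events can then be attributed to the fault with the most similar code word:

```python
# Sketch of the codebook idea: encode fault -> event dependencies as
# binary code words, then pick the fault whose code word is closest
# (in Hamming distance) to the observed symptom vector.

EVENTS = ["link_down", "server_unreachable", "dns_timeout"]  # observable events

# Hypothetical unweighted dependency graph: the events each fault causes.
CODEBOOK = {
    "broken_link":  {"link_down", "server_unreachable"},
    "server_crash": {"server_unreachable"},
    "dns_failure":  {"dns_timeout", "server_unreachable"},
}

def code_word(caused_events):
    """Matrix column for one fault: 1 if the fault causes the event, else 0."""
    return [1 if e in caused_events else 0 for e in EVENTS]

def hamming(a, b):
    """Number of positions in which two code words differ."""
    return sum(x != y for x, y in zip(a, b))

def diagnose(observed):
    """Return the faults ordered by distance to the observed symptom vector."""
    vector = [1 if e in observed else 0 for e in EVENTS]
    return sorted(CODEBOOK, key=lambda f: hamming(code_word(CODEBOOK[f]), vector))

print(diagnose({"link_down", "server_unreachable"})[0])  # -> broken_link
```

Because the match is by distance rather than exact lookup, a noisy or missing event observation still yields the nearest plausible fault, which is the robustness property discussed next.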
Another possible weighting could indicate time dependencies. There are several possibilities to reduce the initial graph: if, e.g., a cyclic dependency of events exists and there are no probabilities for the cycle's edges, all of these events can be treated as one event. After a final input graph is chosen, the graph is transformed into a correlation matrix whose columns contain the faults and whose rows contain the events. If there is a dependency in the graph, the weight of the corresponding edge is put into the according matrix cell; in case no weights are used, a matrix cell gets the value 1 for a dependency and 0 otherwise. Afterwards a simplification can be performed in which events that do not help to discriminate faults are deleted. There is a trade-off between the minimization of the matrix and the robustness of the results: if the matrix is minimized as much as possible, some faults can only be distinguished by a single event, and if this event cannot be reliably detected, the event correlation system cannot discriminate between the two faults. A measure of how many event observation errors can be compensated by the system is the Hamming distance. The number of rows (events) that can be deleted from the matrix differs greatly depending on the relationships [15].

The codebook approach has the advantage that it builds on long-term experience with graphs and coding. This experience is used to minimize the dependency graph and to select an optimal group of events with respect to processing time and robustness against noise. A disadvantage of the approach could be that, similar to RBR, frequent changes in the environment make it necessary to edit the input graph frequently. SMARTS InCharge [12, 17] is an example of such a correlation system.

Case-based reasoning: In contrast to the other approaches, case-based reasoning (CBR) [14, 15] systems do not use any knowledge about the system structure. The knowledge base saves cases together with their values for system parameters and the successful recovery actions for these cases. The recovery actions are not performed by the CBR system in the first place, but in most cases by a human operator. If a new case appears, the CBR system compares the current system parameters with the system parameters of prior cases and tries to find a similar one. To identify such a match it has to be defined for which parameters the cases may differ and for which they have to be the same. If a match is found, a learned action can be performed automatically, or the operator can be informed with a recovery proposal. An advantage of this approach is that the ability to learn is an integral part of it, which is important for rapidly changing environments. There are also difficulties in applying the approach [15]: the fields which are used to find a similar case, and their importance, have to be defined appropriately, and if there is a match with a similar case, an adaptation of the previous solution to the current situation has to be found. An example system for CBR is SpectroRx from Cabletron Systems. The part of Cabletron that developed SpectroRx became an independent software company in 2002 and is now called Aprisma Management Technologies [2].

In this section four event correlation approaches were presented which have evolved into commercial event correlation systems. The approaches have different focuses: MBR mainly deals with knowledge acquisition and representation, while RBR and the codebook approach propose a correlation algorithm. The focus of CBR is its ability to learn from prior cases.

3. MOTIVATION OF SERVICE-ORIENTED EVENT CORRELATION

Fig.
1 shows a general service scenario upon which we will discuss the importance of service-oriented correlation. Several services such as SSH, a web hosting service, or a video conference service are offered by a provider to its customers at the customer-provider interface. A customer can allow several users to use a subscribed service. The quality and cost issues of the subscribed services are agreed upon between customer and provider in SLAs. On the provider side the services use subservices for their provisioning; for the services mentioned above such subservices are DNS, proxy service, and IP service. Both services and subservices depend on resources upon which they are provisioned. As displayed in the figure, a service can depend on more than one resource, and a resource can be used by one or more services.

Figure 1: Scenario

To establish a common understanding, we distinguish between different types of events:

Resource event: We use the term resource event for network events and system events. A network event refers to events like node up/down or link up/down, whereas system events refer to events like server down or authentication failure.

Service event: A service event indicates that a service does not work properly. A trouble ticket generated from a customer report is one kind of such event. Other service events can be generated by the provider of a service, if the provider himself detects a service malfunction.

In such a scenario the provider may receive service events from customers indicating that SSH, the web hosting service, and the video conference service are not available. Referring to the service hierarchy, the provider can conclude in such a case that all of these services depend on DNS. It therefore seems more likely that a common resource which is necessary for this service does not work properly or is not available than that three independent service failures have occurred. In contrast to a resource-oriented perspective, where all of the service events would have to be processed separately, the service events can be linked together. Their information can be aggregated and processed only once. If, e.g., the problem is solved, one common message to the customers that their services are available again is generated and distributed using the list of linked service events.

This is certainly a simplified example, but it shows the general principle of identifying the common subservices and common resources of different services. It is important to note that the service-oriented perspective is needed to integrate service aspects, especially QoS aspects. An example of such an aspect is that a fault does not lead to a total failure of a service, but its QoS parameters, i.e., the agreed service levels at the customer-provider interface, might not be met. A degradation in service quality caused by high traffic load on the backbone is another example. In the resource-oriented perspective it would be possible to define events indicating that a link usage is higher than a threshold, but no mechanism has been established so far to find out which services are affected and whether a QoS violation occurs.

To summarize, the reasons for the necessity of a service-oriented event correlation are the following:

Keeping of SLAs (top-down perspective): The time interval between the first symptom (recognized either by the provider, by network management tools, or by customers) that a service does not perform properly and the verified fault repair needs to be minimized. This is especially needed with respect to SLAs, as such agreements often contain guarantees like a mean time to repair.

Effort reduction (top-down perspective): If several user trouble reports are symptoms of the same fault, fault processing should be performed only once and not several times. If the fault has been repaired, the affected customers should be informed automatically.

Impact analysis (bottom-up
perspective): In case of a fault in a resource, its influence on the associated services and affected customers can be determined. This analysis can be performed for short-term (when there is currently a resource failure) or long-term (e.g., network optimization) considerations.

4. WORKFLOW MODELING

In the following we examine the established IT process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM). The aim is to find out where event correlation can be found in the process structure and how detailed the frameworks currently are. After that we present our solution for a workflow modeling for the service-oriented event correlation.

4.1 IT Infrastructure Library (ITIL)

The British Office of Government Commerce (OGC) and the IT Service Management Forum (itSMF) [9] provide a collection of best practices for IT processes in the area of IT service management which is called ITIL. Service management is described by 11 modules which are grouped into the Service Support Set (provider-internal processes) and the Service Delivery Set (processes at the customer-provider interface). Each module describes processes, functions, roles, and responsibilities as well as necessary databases and interfaces. In general, ITIL describes contents, processes, and aims at a high abstraction level and contains no information about management architectures and tools.

Fault management is divided into the Incident Management process and the Problem Management process.

Incident Management: Incident Management contains the service desk as the interface to customers (e.g., it receives reports about service problems). In case of severe errors, structured queries are transferred to Problem Management.

Problem Management: Problem Management's tasks are to solve problems, take care of keeping priorities, minimize the reoccurrence of problems, and provide management information. After receiving requests from Incident Management, the problem has to be identified and information about necessary countermeasures is transferred to Change Management.

The ITIL processes describe only what has to be done, but contain no information about how it can actually be performed. As a consequence, event correlation is not part of the modeling. The ITIL incidents could be regarded as input for the service-oriented event correlation, while the output could be used as a query to the ITIL Problem Management.

4.2 Enhanced Telecom Operations Map (eTOM)

The TeleManagement Forum (TMF) [18] is an international non-profit organization of service providers and suppliers in the area of telecommunications services. Similar to ITIL, a process-oriented framework was developed first, but the framework was designed for a narrower focus, i.e., the market of information and communications service providers. A horizontal grouping into processes for customer care, service development & operations, network & systems management, and partner/supplier is performed. The vertical grouping (fulfillment, assurance, billing) reflects the service life cycle. In the area of fault management three processes have been defined along the horizontal process grouping.

Problem Handling: The purpose of this process is to receive trouble reports from customers and to solve them by using the Service Problem Management. The aim is also to keep the customer informed about the current status of the trouble report processing as well as about the general network status (e.g., planned maintenance). It is also a task of this process to inform the QoS/SLA management about the impact of current errors on the SLAs.

Service Problem Management: In this process reports about customer-affecting service failures are received and transformed. Their root causes are identified, and a problem solution or a temporary workaround is established. The task of the "Diagnose Problem" subprocess is to find the root cause of the problem by performing
appropriate tests. Nothing is said about how this can be done (e.g., no event correlation is mentioned).

Resource Trouble Management: A subprocess of Resource Trouble Management is responsible for resource failure event analysis, alarm correlation & filtering, and failure event detection & reporting. Another subprocess is used to execute different tests to find a resource failure. There is also a subprocess which keeps track of the status of the trouble report processing; it is similar in functionality to a trouble ticket system.

The process description in eTOM is not very detailed. It is useful as a checklist of which aspects of these processes have to be taken into account, but there is no detailed modeling of the relationships and no methodology for applying the framework. Event correlation is only mentioned in the resource management; it is not used on the service level.

4.3 Workflow Modeling for the Service-Oriented Event Correlation

Fig. 2 shows a general service scenario which we will use as the basis for the workflow modeling for the service-oriented event correlation. We assume that the dependencies are already known (e.g., by using the approaches mentioned in Section 2). The provider offers different services which depend on other services called subservices (service dependencies). Another kind of dependency exists between services/subservices and resources; these are called resource dependencies. These two kinds of dependencies are in most cases not used for the event correlation performed today: this resource-oriented event correlation deals only with relationships on the resource level (e.g., network topology).

Figure 2: Different kinds of dependencies for the service-oriented event correlation

The dependencies depicted in Figure 2 reflect a situation with no redundancy in the service provisioning; the relationships can be seen as AND relationships. In case of redundancy, if, e.g., a provider has 3 independent web servers, another modeling (see Figure 3) should be used (OR relationship). In such a case different relationships are possible: the service could be regarded as working properly if one of the servers is working, or if a certain percentage of them is working.

Figure 3: Modeling of no redundancy (a) and of redundancy (b)

As both ITIL and eTOM contain no description of how event correlation, and especially service-oriented event correlation, should actually be performed, we propose the following design for such a workflow (see Fig. 4). The additional components which are not part of a device-oriented event correlation are depicted with a gray background. The workflow is divided into the phases fault detection, fault diagnosis, and fault recovery.

In the fault detection phase resource events and service events can be generated from different sources. The resource events are issued during the use of a resource, e.g., via SNMP traps. The service events originate from customer trouble reports, which are reported via the Customer Service Management access point (see below). In addition to these two "passive" ways of getting events, a provider can also perform active tests. These tests can either deal with the resources (resource active probing) or assume the role of a virtual customer and test a service or one of its subservices by performing interactions at the service access points (service active probing).

The fault diagnosis phase is composed of three event correlation steps. The first one is performed by the resource event correlator, which can be regarded as the event correlator in today's commercial systems; it therefore deals only with resource events. The service event correlator performs a correlation of the service events, while the aggregate event correlator finally performs a correlation of both resource and service events. If the correlation result of one of the correlation steps shall be improved, it is possible to go back to
the fault detection phase and start active probing to get additional events. These events can be helpful to confirm a correlation result or to reduce the list of possible root causes. After the event correlation, an ordered list of possible root causes is checked by the resource management. When the root cause is found, the failure repair starts. This last step is performed in the fault recovery phase.

The next subsections present different elements of the event correlation process.

4.4 Customer Service Management and Intelligent Assistant

The Customer Service Management (CSM) access point was proposed in [13] as a single interface between customer and provider.

Figure 4: Event correlation workflow

Its functionality is to provide information to the customer about his subscribed services, e.g., reports about the fulfillment of agreed SLAs. It can also be used to subscribe to services or to allow the customer to manage his services in a restricted way. Reports about problems with a service can be sent to the customer via CSM. The CSM is also contained in the MNM Service Model (see Section 5).

To reduce the effort for the provider's first level support, an Intelligent Assistant can be added to the CSM. The Intelligent Assistant structures the customer's information about a service problem. The information needed for a preclassification of the problem is gathered through a list of questions to the customer. The list is not static, as the current question depends on the answers to prior questions or on the results of specific tests. A decision tree is used to structure the questions and tests. The tests allow the customer controlled access to the provider's management. At the LRZ a customer of the E-Mail Service can, e.g., use the Intelligent Assistant to perform "ping" requests to the mail server. More complex requests would also be possible, e.g., requests for a combination of SNMP variables.

4.5 Active Probing

Active probing is useful for the provider to check his offered services. The aim is to identify and react to problems before a customer notices them. The probing can be done from a customer point of view or by testing the resources which are part of the services. It can also be useful to perform tests of subservices (own subservices or subservices offered by suppliers).

Different schedules are possible for the active probing. The provider could choose to test important services and resources at regular time intervals. Other tests could be initiated by a user who traverses the decision tree of the Intelligent Assistant, including active tests. Another possibility is a request from the event correlator, if the current correlation result needs to be improved. The results of active probing are reported to the event correlator via service or resource events (if the test was demanded by the Intelligent Assistant, the result is also reported back to it). While the events that are received from management tools and customers denote negative events (something does not work), the events from active probing should also contain positive events for a better discrimination.

4.6 Event Correlator

The event correlation should not be performed by a single event correlator, but in different steps. The reason for this is the different characteristics of the dependencies (see Fig.
1). On the resource level there are only relationships between resources (network topology, systems configuration). An example is a switch linking separate LANs: if the switch is down, events are reported that other network components located behind the switch are also not reachable. When correlating these events it can be figured out that the switch is the likely error cause. At this stage the integration of service events does not seem to be helpful. The result of this step is a list of resources which could be the problem's root cause. The resource event correlator is used to perform this step.

In the service-oriented scenario there are also service and resource dependencies. As the next step in the event correlation process, the service events should be correlated with each other using the service dependencies, because the service dependencies have no direct relationship to the resource level. This step is performed by the service event correlator. If, e.g., there are service events from customers that two services do not work and both services depend on a common subservice, it seems more likely that the resource failure can be found inside the subservice. The output of this correlation is a list of services/subservices which could be affected by a failure in an associated resource.

In the last step the aggregate event correlator matches the lists from the resource event correlator and the service event correlator to find the problem's possible root cause. This is done by using the resource dependencies.

The event correlation techniques presented in Section 2 could be used to perform the correlation inside the three event correlators. If the dependencies can be determined precisely, an RBR or codebook approach seems appropriate. A case database (CBR) could be used for cases which could not be covered by RBR or the codebook approach. These cases could then be used to improve the modeling so that RBR or the codebook approach can deal with them in future correlations.

5. INFORMATION MODELING

In this section we use a generic model for IT service management to derive the information necessary for the event correlation process.

5.1 MNM Service Model

The MNM Service Model [5] is a generic model for IT service management. A distinction is made between the customer side and the provider side. The customer side contains the basic roles customer and user, while the provider side contains the role provider. The provider makes the service available to the customer side. The service as a whole is divided into usage, which is accessed by the role user, and management, which is used by the role customer.

The model consists of two main views. The Service View (see Fig. 5) shows a common perspective of the service for customer and provider. Everything that is only important for the service realization is not contained in this view. For these details another perspective, the Realization View, is defined (see Fig.
6).

Figure 5: Service View

The Service View contains the service, whose functionality is defined for usage as well as for management. There are two access points (the service access point and the CSM access point) where user and customer can access the usage and management functionality, respectively. Associated with each service is a list of QoS parameters which have to be met by the service at the service access point. The QoS surveillance is performed by the management.

Figure 6: Realization View

In the Realization View the service implementation and the service management implementation are described in detail. For both there are provider-internal resources and subservices. For the service implementation a service logic uses internal resources (devices, knowledge, staff) and external subservices to provide the service. Analogously, the service management implementation includes a service management logic using basic management functionalities [8] and external management subservices. The MNM Service Model can be used for a similar modeling of the subservices used, i.e., the model can be applied recursively.

As the service-oriented event correlation has to use the dependencies of a service on subservices and resources, the model is used in the following to derive the information needed for service events.

5.2 Information Modeling for Service Events

Today's event correlation deals mainly with events which originate from resources. Besides a resource identifier these events contain information about the resource status, e.g., SNMP variables. To perform a service-oriented event correlation it is necessary to define events which are related to services. These events can be generated from the provider's own service surveillance or from customer reports at the CSM interface. They contain information about problems with the agreed QoS.

In our information modeling we define an event superclass which contains common attributes (e.g., time stamp). Resource event and service event inherit from this superclass. Derived from the MNM Service Model, we define the information necessary for a service event as follows.

Service: As a service event shall represent the problems of a single service, a unique identification of the affected service is contained here.

Event description: This field has to contain a description of the problem. Depending on the interactions at the service access point (Service View), a classification of the problem into different categories should be defined. It should also be possible to add an informal description of the problem.

QoS parameters: For each service, QoS parameters (Service View) are defined between the provider and the customer. This field represents a list of these QoS parameters and the agreed service levels. The list can help the provider to set the priority of a problem with respect to the agreed service levels.

Resource list: This list contains the resources (Realization View) which are needed to provide the service. It is used by the provider to check whether one of these resources causes the problem.

Subservice service event identification: In the service hierarchy (Realization View) the service for which this service event has been issued may depend on subservices. If there is a suspicion that one of these subservices causes the problem, child service events are issued from this service event for the subservices. In such a case this field contains links to the corresponding events.

Other event identifications: In the event correlation process the service event can be correlated with other service events or with resource events. This field then contains links to the other events which have been correlated with this service event. This is useful, e.g., to send a common message to all affected customers when their subscribed services are available again.

Issuer's identification: This field can contain either an identification of the customer
who reported the problem, an identification of a service provider's employee (in case the failure has been detected by the provider's own service active probing), or a link to a parent service event. The identification is needed if there are ambiguities in the service event or if the issuer should be informed (e.g., that the service is available again). The possible issuers refer to the basic roles (customer, provider) in the Service Model.

Assignee: To keep track of the processing, the name and address of the provider's employee who is solving or has solved the problem is also noted. This is a specialization of the provider role in the Service Model.

Dates: This field contains key dates in the processing of the service event, such as initial date, problem identification date, and resolution date. These dates are important for keeping track of how quickly problems have been solved.

Status: This field represents the service event's current status (e.g., active, suspended, solved).

Priority: The priority shows the importance of the service event from the provider's perspective. The importance is derived from the service agreement, especially the agreed QoS parameters (Service View).

The fields dates, status, and other event identifications are not derived directly from the Service Model, but are necessary for the event correlation process.

6. APPLICATION OF SERVICE-ORIENTED EVENT CORRELATION FOR A WEB HOSTING SCENARIO

The Leibniz Supercomputing Center (LRZ) is the joint computing center for the Munich universities and research institutions. It also runs the Munich Scientific Network and offers related services. One of these services is the Virtual WWW Server, a web hosting offer for smaller research institutions, which currently has approximately 200 customers. A subservice of the Virtual WWW Server is the Storage Service, which stores the static and dynamic web pages and uses caching techniques for fast access. Other subservices are DNS and IP service. When a user accesses a hosted web site via one of the LRZ's Virtual Private Networks, the VPN service is also used. The resources of the Virtual WWW Server include a load balancer and 5 redundant servers. The network connections are also part of the resources, as is the Apache web server application running on the servers. Figure 7 shows the dependencies of the Virtual WWW Server.

6.1 Customer Service Management and Intelligent Assistant

The Intelligent Assistant available at the Leibniz Supercomputing Center can currently be used for connectivity or performance problems or for problems with the LRZ E-Mail Service. A selection of possible customer problem reports for the Virtual WWW Server is given in the following:

• The hosted web site is not reachable.
• The web site access is (too) slow.
• The web site contains outdated content.
• The transfer of new content to the LRZ does not change the provided content.
• The web site looks strange (e.g., caused by problems with the HTML version).

Figure 7: Dependencies of the Virtual WWW Server

These customer reports have to be mapped onto failures in resources. For an unreachable web site, e.g., different root causes are possible, such as a DNS problem, a connectivity problem, or a wrong configuration of the load balancer.

6.2 Active Probing

In general, active probing can be used for services or resources. For the service active probing of the Virtual WWW Server a virtual customer could be installed. This customer performs typical HTTP requests for web sites and compares the answers with the known content. To check the up-to-dateness of a test web site, the content could contain a time stamp. The service active probing could also include the testing of subservices, e.g., sending requests to the DNS. The resource active probing performs tests of the resources; examples are connectivity tests, requests to application processes, and tests of available disk space.

6.3 Event Correlation for the Virtual WWW
Server\nFigure 8 shows the example processing.\nAt first, a customer who takes a look at his hosted web site reports that the content that he had changed is not displayed correctly.\nThis report is transferred to the service management via the CSM interface.\nAn Intelligent Assistant could be used to structure the customer report.\nThe service management translates the customer report into a service event.\nIndependent from the customer report the service provider's own service active probing tries to change the content of a test web site.\nBecause this is not possible, a service event is issued.\nMeanwhile, a resource event has been reported to the event correlator, because an access of the content caching server to one of the WWW servers failed.\nAs there are no other events at the moment the resource event correlation\nFigure 8: Example processing of a customer report\ncannot correlate this event to other events.\nAt this stage it would be possible that the event correlator asks the resource management to perform an active probing of related resources.\nBoth service events are now transferred to the service event correlator and are correlated.\nFrom the correlation of these events it seems likely that either the WWW server itself or the link to the WWW server is the problem's root cause.\nA wrong web site update procedure inside the content caching server seems to be less likely as this would only explain the customer report and not the service active probing result.\nAt this stage a service active probing could be started, but this does not seem to be useful as this correlation only deals with the Web Hosting Service and its resources and not with other services.\nAfter the separate correlation of both resource and service events, which can be performed in parallel, the aggregate event correlator is used to correlate both types of events.\nThe additional resource event makes it seem much more likely that the problems are caused by a broken link to the WWW 
server or by the WWW server itself and not by the content caching server.\nIn this case the event correlator asks the resource management to check the link and the WWW server.\nThe decision between these two likely error causes cannot be further automated here.\nLater, the resource management finds out that a broken link is the failure's root cause.\nIt informs the event correlator about this, and it can be determined that this explains all previous events.\nTherefore, the event correlation can be stopped at this point.\nDepending on the provider's customer relationship management, the finding of the root cause and an expected repair time could be reported to the customers.\nAfter the link has been repaired, it is possible to report this event via the CSM interface.\nEven though many details of this event correlation process could also be performed differently, the example showed an important advantage of the service-oriented event correlation.\nThe relationship between the service provisioning and the provider's resources is explicitly modeled.\nThis allows a mapping of the customer report onto the provider-internal resources.\n6.4 Event Correlation for Different Services\nIf a provider like the LRZ offers several services, the service-oriented event correlation can be used to reveal relationships that are not obvious in the first place.\nIf the LRZ E-Mail Service and its events are viewed in relationship with the events for the Virtual WWW Server, it is possible to identify failures in common subservices and resources.\nBoth services depend on the DNS, which means that customer reports like \"I cannot retrieve new e-mail\" and \"The web site of my research institute is not available\" can have a common cause, e.g., the DNS does not work properly.\n7.\nCONCLUSION AND FUTURE WORK In our paper we showed the need for a service-oriented event correlation.\nFor an IT service provider this new kind of event correlation makes it possible to automatically map problems with the
current service quality onto resource failures.\nThis helps to find the failure's root cause earlier and to reduce costs for SLA violations.\nIn addition, customer reports can be linked together and therefore the processing effort can be reduced.\nTo achieve these benefits, we presented our approach for performing the service-oriented event correlation as well as a modeling of the necessary correlation information.\nIn the future we are going to apply our workflow and information modeling in more detail to services offered by the Leibniz Supercomputing Center.\nSeveral issues have not been treated in detail so far, e.g., the consequences for the service-oriented event correlation if a subservice is offered by another provider.\nIf a service does not perform properly, it has to be determined whether this is caused by the provider himself or by the subservice.\nIn the latter case appropriate information has to be exchanged between the providers via the CSM interface.\nAnother issue is the use of active probing in the event correlation process, which can improve the result, but can also lead to a correlation delay.\nAnother important point is the precise definition of \"dependency\", which has also been left out by many other publications.\nTo avoid having too many dependencies in a certain situation, one could try to check whether the dependencies currently exist.\nIn the case of a download from a web site there is only a dependency on the DNS subservice at the beginning, but after the address is resolved a download failure is unlikely to have been caused by the DNS.\nAnother possibility to reduce the dependencies is to divide a service into its possible user interactions (e.g., an e-mail service into transactions like get mail, send mail, etc.) and to define the dependencies for each user interaction.","keyphrases":["fault manag","event correl","process manag framework","servic manag","servic-orient event correl","servic level agreement","qo","custom servic
manag","servic-orient manag","case-base reason","rule-base reason"],"prmu":["P","P","P","P","M","M","U","M","M","U","U"]} {"id":"H-43","title":"Combining Content and Link for Classification using Matrix Factorization","abstract":"The world wide web contains rich textual contents that are interconnected via complex hyperlinks. This huge database violates the assumption held by most conventional statistical methods that each web page is considered as an independent, identically distributed sample. It is thus difficult to apply traditional mining or learning methods for solving web mining problems, e.g., web page classification, by exploiting both the content and the link structure. The research in this direction has recently received considerable attention but is still in an early stage. Though a few methods exploit both the link structure and the content information, some of them combine only the authority information with the content information, and the others first decompose the link structure into hub and authority features, then apply them as additional document features. Being practically attractive for its great simplicity, this paper aims to design an algorithm that exploits both the content and linkage information, by carrying out a joint factorization on both the linkage adjacency matrix and the document-term matrix, and derives a new representation for web pages in a low-dimensional factor space, without explicitly separating them as content, hub or authority factors. Further analysis can be performed based on the compact representation of web pages. In the experiments, the proposed method is compared with state-of-the-art methods and demonstrates an excellent accuracy in hypertext classification on the WebKB and Cora benchmarks.","lvl-1":"Combining Content and Link for Classification using Matrix Factorization Shenghuo Zhu Kai Yu Yun Chi Yihong Gong {zsh,kyu,ychi,ygong}@sv.\nnec-labs.\ncom NEC Laboratories America, Inc.
10080 North Wolfe Road SW3-350 Cupertino, CA 95014, USA ABSTRACT The world wide web contains rich textual contents that are interconnected via complex hyperlinks.\nThis huge database violates the assumption held by most conventional statistical methods that each web page is considered as an independent, identically distributed sample.\nIt is thus difficult to apply traditional mining or learning methods for solving web mining problems, e.g., web page classification, by exploiting both the content and the link structure.\nThe research in this direction has recently received considerable attention but is still in an early stage.\nThough a few methods exploit both the link structure and the content information, some of them combine only the authority information with the content information, and the others first decompose the link structure into hub and authority features, then apply them as additional document features.\nBeing practically attractive for its great simplicity, this paper aims to design an algorithm that exploits both the content and linkage information, by carrying out a joint factorization on both the linkage adjacency matrix and the document-term matrix, and derives a new representation for web pages in a low-dimensional factor space, without explicitly separating them as content, hub or authority factors.\nFurther analysis can be performed based on the compact representation of web pages.\nIn the experiments, the proposed method is compared with state-of-the-art methods and demonstrates an excellent accuracy in hypertext classification on the WebKB and Cora benchmarks.\nCategories and Subject Descriptors: H.3.3 [Information Systems]: Information Search and Retrieval General Terms: Algorithms, Experimentation 1.\nINTRODUCTION With the advance of the World Wide Web, more and more hypertext documents become available on the Web.\nSome examples of such data include organizational and personal web pages (e.g., the WebKB benchmark data set, which contains
university web pages), research papers (e.g., data in CiteSeer), online news articles, and customer-generated media (e.g., blogs).\nCompared to data in traditional information management, in addition to content, these data on the Web also contain links: e.g., hyperlinks from a student's homepage pointing to the homepage of her advisor, paper citations, sources of a news article, comments of one blogger on posts from another blogger, and so on.\nPerforming information management tasks on such structured data raises many new research challenges.\nIn the following discussion, we use the task of web page classification as an illustrative example, while the techniques we develop in later sections apply equally well to many other tasks in information retrieval and data mining.\nFor the classification problem of web pages, a simple approach is to treat web pages as independent documents.\nThe advantage of this approach is that many off-the-shelf classification tools can be directly applied to the problem.\nHowever, this approach relies only on the content of web pages and ignores the structure of links among them.\nLink structures provide invaluable information about properties of the documents as well as relationships among them.\nFor example, in the WebKB dataset, the link structure provides additional insights about the relationship among documents (e.g., links often pointing from a student to her advisor or from a faculty member to his projects).\nSince some links among these documents imply the inter-dependence among the documents, the usual i.i.d.
(independent and identically distributed) assumption of documents does not hold any more.\nFrom this point of view, the traditional classification methods that ignore the link structure may not be suitable.\nOn the other hand, a few studies, for example [25], rely solely on link structures.\nIt is, however, very rare that content information can be ignored.\nFor example, in the Cora dataset, the content of a research article abstract largely determines the category of the article.\nTo improve the performance of web page classification, therefore, both link structure and content information should be taken into consideration.\nTo achieve this goal, a simple approach is to convert one type of information to the other.\nFor example, in spam blog classification, Kolari et al. [13] concatenate outlink features with the content features of the blog.\nIn document classification, Kurland and Lee [14] convert content similarity among documents into weights of links.\nHowever, link and content information have different properties.\nFor example, a link is an actual piece of evidence that represents an asymmetric relationship, whereas the content similarity is usually defined conceptually for every pair of documents in a symmetric way.\nTherefore, directly converting one type of information to the other usually degrades the quality of information.\nOn the other hand, there exist some studies, as we will discuss in detail in related work, that consider link information and content information separately and then combine them.\nWe argue that such an approach ignores the inherent consistency between link and content information and therefore fails to combine the two seamlessly.\nSome work, such as [3], incorporates link information using cocitation similarity, but this may not fully capture the global link structure.\nIn Figure 1, for example, web pages v6 and v7 co-cite web page v8, implying that v6 and v7 are similar to each other.\nIn turn, v4 and v5 should be similar to
each other, since v4 and v5 cite similar web pages v6 and v7, respectively.\nBut using cocitation similarity, the similarity between v4 and v5 is zero without considering other information.\nFigure 1: An example of link structure\nIn this paper, we propose a simple technique for analyzing inter-connected documents, such as web pages, using factor analysis [18].\nIn the proposed technique, both content information and link structures are seamlessly combined through a single set of latent factors.\nOur model contains two components.\nThe first component captures the content information.\nThis component has a form similar to that of the latent topics in Latent Semantic Indexing (LSI) [8] in traditional information retrieval.\nThat is, documents are decomposed into latent topics\/factors, which in turn are represented as term vectors.\nThe second component captures the information contained in the underlying link structure, such as links from homepages of students to those of faculty members.\nA factor can be loosely considered as a type of document (e.g., those homepages belonging to students).\nIt is worth noting that we do not explicitly define the semantics of a factor a priori.\nInstead, similar to LSI, the factors are learned from the data.\nTraditional factor analysis models the variables associated with entities through the factors.\nHowever, in analysis of link structures, we need to model the relationship of the two ends of links, i.e., edges between vertex pairs.\nTherefore, the model should involve factors of both vertices of the edge.\nThis is a key difference between traditional factor analysis and our model.\nIn our model, we connect the two components through a set of shared factors, that is, the latent factors in the second component (for links) are tied to the factors in the first component (for contents).\nBy doing this, we search for a unified set of latent factors that best explains both content and link structures simultaneously
and seamlessly.\nIn the formulation, we perform factor analysis based on matrix factorization: the solution to the first component is based on factorizing the term-document matrix derived from content features; the solution to the second component is based on factorizing the adjacency matrix derived from links.\nBecause the two factorizations share a common base, the discovered bases (latent factors) explain both content information and link structures, and are then used in further information management tasks such as classification.\nThis paper is organized as follows.\nSection 2 reviews related work.\nSection 3 presents the proposed approach to analyze web pages based on the combined information of links and content.\nSection 4 extends the basic framework with a few variants for fine-tuning.\nSection 5 shows the experimental results.\nSection 6 discusses the details of this approach and Section 7 concludes.\n2.\nRELATED WORK In the content analysis part, our approach is closely related to Latent Semantic Indexing (LSI) [8].\nLSI maps documents into a lower dimensional latent space.\nThe latent space implicitly captures a large portion of the information of documents; therefore it is called the latent semantic space.\nThe similarity between documents can be defined by the dot products of the corresponding vectors of documents in the latent space.\nAnalysis tasks, such as classification, can be performed on the latent space.\nThe commonly used singular value decomposition (SVD) method ensures that the data points in the latent space can optimally reconstruct the original documents.\nThough our approach also uses a latent space to represent web pages (documents), we consider the link structure as well as the content of web pages.\nIn the link analysis approach, the framework of hubs and authorities (HITS) [12] puts web pages into two categories, hubs and authorities.\nUsing a recursive notion, a hub is a web page with many outgoing links to authorities, while an authority is a web
page with many incoming links from hubs.\nInstead of using two categories, PageRank [17] uses a single category for the recursive notion: an authority is a web page with many incoming links from authorities.\nHe et al. [9] propose a clustering algorithm for web document clustering.\nThe algorithm incorporates the link structure and the co-citation patterns.\nIn the algorithm, all links are treated as undirected edges of the link graph.\nThe content information is only used for weighting the links by the textual similarity of both ends of the links.\nZhang et al. [23] use the undirected graph regularization framework for document classification.\nAchlioptas et al. [2] decompose the web into hub and authority attributes and then combine them with content.\nZhou et al. [25] and [24] propose a directed graph regularization framework for semi-supervised learning.\nThe framework combines the hub and authority information of web pages.\nBut it is difficult to combine the content information into that framework.\nOur approach considers the content and the directed linkage between topics of source and destination web pages in one step, which implies that the topics combine the information of web pages as authorities and as hubs in a single set of factors.\nCohn and Hofmann [6] construct the latent space from both content and link information, using content analysis based on probabilistic LSI (PLSI) [10] and link analysis based on PHITS [5].\nThe major difference between the approach of [6] (PLSI+PHITS) and our approach is in the part of link analysis.\nIn PLSI+PHITS, the link is constructed with the linkage from the topic of the source web page to the destination web page.\nIn the model, the outgoing links of the destination web page have no effect on the source web page.\nIn other words, the overall link structure is not utilized in PHITS.\nIn our approach, the link is constructed with the linkage between the factor of the source web page and the factor of the destination web page, instead
of the destination web page itself.\nThe factor of the destination web page contains information about its outgoing links.\nIn turn, such information is passed to the factor of the source web page.\nAs a result of the matrix factorization, the factors form a factor graph, a miniature of the original graph, preserving the major structure of the original graph.\nTaskar et al. [19] propose relational Markov networks (RMNs) for entity classification, by describing a conditional distribution of entity classes given entity attributes and relationships.\nThe model was applied to web page classification, where web pages are entities and hyperlinks are treated as relationships.\nRMNs apply conditional random fields to define a set of potential functions on cliques of random variables, where the link structure provides hints to form the cliques.\nHowever, the model does not give an off-the-shelf solution, because its success highly depends on the art of designing the potential functions.\nOn the other hand, the inference for RMNs is intractable and requires belief propagation.\nThe following are some works on combining documents and links, but these methods are only loosely related to our approach.\nThe experiments of [21] show that using terms from the linked document improves the classification accuracy.\nChakrabarti et al. [3] use co-citation information in their classification model.\nJoachims et al. [11] combine text kernels and co-citation kernels for classification.\nOh et al. [16] use the naive Bayes framework to combine link information with content.\n3.\nOUR APPROACH In this section we will first introduce a novel matrix factorization method, which is more suitable than conventional matrix factorization methods for link analysis.\nThen we will introduce our approach that jointly factorizes the document-term matrix and the link matrix and obtains compact and highly indicative factors for representing documents or web pages.\n3.1 Link Matrix Factorization Suppose we have a directed graph
G = (V, E), where the vertex set $V = \{v_i\}_{i=1}^{n}$ represents the web pages and the edge set E represents the hyperlinks between web pages.\nLet $A = \{a_{sd}\}$ denote the n \u00d7 n adjacency matrix of G, which is also called the link matrix in this paper.\nFor a pair of vertices, $v_s$ and $v_d$, let $a_{sd} = 1$ when there is an edge from $v_s$ to $v_d$, and $a_{sd} = 0$ otherwise.\nNote that A is an asymmetric matrix, because hyperlinks are directed.\nMost machine learning algorithms assume a feature-vector representation of instances.\nFor web page classification, however, the link graph does not readily give such a vector representation for web pages.\nIf one directly uses each row or column of A for the job, she will suffer a very high computational cost because the dimensionality equals the number of web pages.\nOn the other hand, it will produce a poor classification accuracy (see our experiments in Section 5), because A is extremely sparse (see footnote 1).\nThe idea of link matrix factorization is to derive a high-quality feature representation Z of web pages based on analyzing the link matrix A, where Z is an n \u00d7 l matrix, with each row being the l-dimensional feature vector of a web page.\nThe new representation of web pages captures the principal factors of the link structure and makes further processing more efficient.\nOne may use a method similar to LSI, applying the well-known principal component analysis (PCA) to derive Z from A.\nThe corresponding optimization problem (see footnote 2) is $\min_{Z,U} \|A - ZU^\top\|_F^2 + \gamma \|U\|_F^2$ (1) where $\gamma$ is a small positive number, U is an n \u00d7 l matrix, and $\|\cdot\|_F$ is the Frobenius norm.\nThe optimization aims to approximate A by $ZU^\top$, a product of two low-rank matrices, with a regularization on U.\nIn the end, the i-th row vector of Z can be thought of as the hub feature vector of vertex $v_i$, and the i-th row vector of U can be thought of as its authority features.\nA link generation model proposed in [2] is similar to the PCA approach.\nSince A is a nonnegative
matrix here, one can also consider putting nonnegative constraints on U and Z, which produces an algorithm similar to PLSA [10] and NMF [20].\nFootnote 1: Due to the sparsity of A, links from two similar pages may not share any common target pages, which makes them appear dissimilar.\nHowever, the two pages may be indirectly linked to many common pages via their neighbors.\nFootnote 2: Another equivalent form is $\min_{Z,U} \|A - ZU^\top\|_F^2$, s.t. $U^\top U = I$.\nThe solution Z is identical up to a scaling factor.\nHowever, despite its popularity in matrix analysis, PCA (or other similar methods like PLSA) is restrictive for link matrix factorization.\nThe major problem is that PCA ignores the fact that the rows and columns of A are indexed by exactly the same set of objects (i.e., web pages).\nThe approximating matrix $\tilde{A} = ZU^\top$ shows no evidence that the links are within the same set of objects.\nTo see the drawback, let's consider a link transitivity situation $v_i \to v_s \to v_j$, where page i is linked to page s, which itself is linked to page j.\nSince $\tilde{A} = ZU^\top$ treats A as links from web pages $\{v_i\}$ to a different set of objects, let it be denoted by $\{o_i\}$, $\tilde{A} = ZU^\top$ actually splits a linked object $o_s$ from $v_s$ and breaks down the link path into two parts, $v_i \to o_s$ and $v_s \to o_j$.\nThis is obviously a misinterpretation of the original link path.\nTo overcome the problem of PCA, in this paper we suggest using a different factorization: $\min_{Z,U} \|A - ZUZ^\top\|_F^2 + \gamma \|U\|_F^2$ (2) where U is an l \u00d7 l full matrix.\nNote that U is not symmetric; thus $ZUZ^\top$ produces an asymmetric matrix, which is the case for A.
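As a minimal sketch of Eq. (2) (not the authors' implementation — the paper solves it with conjugate-gradient or quasi-Newton methods; step size, iteration count, and initialization scale below are illustrative assumptions), the factorization can be approximated with plain gradient descent in NumPy:

```python
import numpy as np

def factorize_links(A, l, gamma=0.01, lr=0.005, iters=20000, seed=0):
    """Sketch of Eq. (2): min_{Z,U} ||A - Z U Z^T||_F^2 + gamma ||U||_F^2.

    Plain gradient descent; hyperparameters are illustrative choices,
    not values from the paper.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    Z = 0.3 * rng.standard_normal((n, l))   # one l-dim factor vector per page
    U = 0.3 * rng.standard_normal((l, l))   # factor-to-factor link loadings
    for _ in range(iters):
        R = Z @ U @ Z.T - A                 # reconstruction residual
        gU = 2 * (Z.T @ R @ Z) + 2 * gamma * U
        gZ = 2 * (R @ Z @ U.T + R.T @ Z @ U)
        U -= lr * gU
        Z -= lr * gZ
    loss = np.linalg.norm(A - Z @ U @ Z.T) ** 2 + gamma * np.linalg.norm(U) ** 2
    return Z, U, loss
```

The rows of the returned Z are the per-page factor vectors that can then be fed to any vector-based classifier or clustering method.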
Again, each row vector of Z corresponds to a feature vector of a web page.\nThe new approximating form $\tilde{A} = ZUZ^\top$ makes it clear that the links are between the same set of objects, represented by the features Z.\nThe factor model actually maps each vertex, $v_i$, into a vector $z_i = \{z_{i,k}; 1 \le k \le l\}$ in the $R^l$ space.\nWe call the $R^l$ space the factor space.\nThen, $\{z_i\}$ encodes the information of the incoming and outgoing connectivity of the vertices $\{v_i\}$.\nThe factor loadings, U, explain how these observed connections happened based on $\{z_i\}$.\nOnce we have the vectors $z_i$, we can use many traditional classification methods (such as SVMs) or clustering tools (such as K-Means) to perform the analysis.\nIllustration Based on a Synthetic Problem To further illustrate the advantages of the proposed link matrix factorization Eq. (2), let us consider the graph in Figure 1.\nFigure 2: Summarize Figure 1 with a factor graph\nGiven these observations, we can summarize the graph by grouping, as the factor graph depicted in Figure 2.\nNext we perform the two factorization methods, Eq. (2) and Eq. (1), on this link matrix.\nA good low-rank representation should reveal the structure of the factor graph.\nFirst we try the PCA-like decomposition, solving Eq. (1) and obtaining factor matrices Z and U (the numerical matrices are omitted here).\nWe can see that the row vectors of v6 and v7 are the same in Z, indicating that v6 and v7 have the same hub attributes.\nThe row vectors of v2 and v3 are the same in U, indicating that v2 and v3 have the same authority attributes.\nIt is not easy to see the similarity between v4 and v5, because their inlinks (and outlinks) are different.\nThen, we factorize A by $ZUZ^\top$ via solving Eq. (2).\nThe resulting Z is very consistent with the clustering structure of the vertices: the row vectors of v2 and v3 are the same, those of v4 and v5 are the same, and those of v6 and v7 are the same.\nEven more interestingly, if we add constraints to ensure that Z and U are nonnegative, the resulting Z clearly tells the assignment of vertices to clusters, and U gives the links of the factor graph.\nWhen interpretability is not critical in some tasks, for example classification, we found that better accuracies are achieved without the nonnegative constraints.\nGiven our above analysis, it is clear that the factorization $ZUZ^\top$ is more expressive than $ZU^\top$ in representing the link matrix A.
3.2 Content Matrix Factorization Now let us consider the content information on the vertices.\nTo combine the link information and content information, we want to use the same latent space to approximate the content as the latent space for the links.\nUsing the bag-of-words approach, we denote the content of web pages by an n \u00d7 m matrix C, each of whose rows represents a document and each of whose columns represents a keyword, where m is the number of keywords.\nLike latent semantic indexing (LSI) [8], the l-dimensional latent space for words is denoted by an m \u00d7 l matrix V.\nTherefore, we use $ZV^\top$ to approximate the matrix C, $\min_{V,Z} \|C - ZV^\top\|_F^2 + \beta \|V\|_F^2$, (3) where $\beta$ is a small positive number and $\beta \|V\|_F^2$ serves as a regularization term to improve the robustness.\n3.3 Joint Link-Content Matrix Factorization There are many ways to employ both the content and link information for web page classification.\nOur idea in this paper is not to simply combine them, but rather to fuse them into a single, consistent, and compact feature representation.\nTo achieve this goal, we solve the following problem, $\min_{U,V,Z} \{ \mathcal{J}(U, V, Z) \overset{\mathrm{def}}{=} \|A - ZUZ^\top\|_F^2 + \alpha \|C - ZV^\top\|_F^2 + \gamma \|U\|_F^2 + \beta \|V\|_F^2 \}$. (4) Eq. (4) is the joint matrix factorization of A and C with regularization.\nThe new representation Z is ensured to capture both the structure of the link matrix A and that of the content matrix C.\nOnce we find the optimal Z, we can apply traditional classification or clustering methods to the vectorial data Z.\nThe relationship among these matrices is depicted in Figure 3.\nFigure 3: Relationship among the matrices.\nNode Y is the target of classification.\nEq. (4) can be solved using gradient methods, such as the conjugate gradient method and quasi-Newton methods.\nThe main computation of gradient methods is evaluating the objective function $\mathcal{J}$ and its gradients with respect to the variables, $\frac{\partial \mathcal{J}}{\partial U} = Z^\top Z U Z^\top Z - Z^\top A Z + \gamma U$, $\frac{\partial \mathcal{J}}{\partial V} = \alpha (V Z^\top Z - C^\top Z) + \beta V$, $\frac{\partial \mathcal{J}}{\partial Z} = Z U^\top Z^\top Z U + Z U Z^\top Z U^\top - A^\top Z U - A Z U^\top + \alpha (Z V^\top V - C V)$.\nBecause of the sparsity of A, the computational complexity of the multiplication of A and Z is $O(\mu_A l)$, where $\mu_A$ is the number of nonzero entries in A. Similarly, the computational complexity of $C^\top Z$ and $CV$ is $O(\mu_C l)$, where $\mu_C$ is the number of nonzero entries in C.\nThe computational complexity of the remaining multiplications in the gradient computation is $O(n l^2)$.\nTherefore, the total computational complexity of one iteration is $O(\mu_A l + \mu_C l + n l^2)$.\nThe number of links and the number of words in a web page are relatively small compared to the number of web pages, and are almost constant as the number of web pages\/documents increases, i.e., $\mu_A = O(n)$ and $\mu_C = O(n)$.\nTherefore, theoretically the computation time is almost linear in the number of web pages\/documents, n. 4.\nSUPERVISED MATRIX FACTORIZATION Consider a web page classification problem.\nWe can solve Eq. (4) to obtain Z as in Section 3, then use a traditional classifier to perform the classification.\nHowever, this approach does not take the data labels into account in the first step.\nBelieving that using the data labels improves the accuracy by obtaining a better Z for the classification, we consider using the data labels to guide the matrix factorization; this is called supervised matrix factorization [22].\nBecause some data used in the matrix factorization have no label information, supervised matrix factorization falls into the category of semi-supervised learning.\nLet $\mathcal{C}$ be the set of classes.\nFor simplicity, we first consider the binary class problem, i.e., $\mathcal{C} = \{-1, 1\}$.\nAssume we know the labels $\{y_i\}$ for the vertices in $T \subset V$.\nWe want to find a hypothesis $h : V \to \mathbb{R}$, such that we assign $v_i$ to 1 when $h(v_i) \ge 0$, and to $-1$ otherwise.\nWe assume the transform from the latent space to $\mathbb{R}$ is linear, i.e.
$h(v_i) = w^\top \phi(v_i) + b = w^\top z_i + b$, (5) where w and b are parameters to estimate.\nHere, w is the normal of the decision boundary.\nSchool | course | dept. | faculty | other | project | staff | student | total\nCornell | 44 | 1 | 34 | 581 | 18 | 21 | 128 | 827\nTexas | 36 | 1 | 46 | 561 | 20 | 2 | 148 | 814\nWashington | 77 | 1 | 30 | 907 | 18 | 10 | 123 | 1166\nWisconsin | 85 | 0 | 38 | 894 | 25 | 12 | 156 | 1210\nTable 1: Dataset of WebKB\nSimilar to Support Vector Machines (SVMs) [7], we can use the hinge loss to measure the loss, $\sum_{i: v_i \in T} [1 - y_i h(v_i)]_+$, where $[x]_+$ is x if $x \ge 0$, and 0 if $x < 0$.\nHowever, the hinge loss is not smooth at the hinge point, which makes it difficult to apply gradient methods to the problem.\nTo overcome the difficulty, we use a smoothed version of the hinge loss for each data point, $g(y_i h(v_i))$, (6) where $g(x) = 0$ when $x \ge 2$, $g(x) = 1 - x$ when $x \le 0$, and $g(x) = \frac{1}{4}(x - 2)^2$ when $0 < x < 2$.\nWe reduce a multiclass problem into multiple binary ones.\nOne simple scheme of reduction is the one-against-rest coding scheme.\nIn the one-against-rest scheme, we assign a label vector to each class label.\nThe element of a label vector is 1 if the data point belongs to the corresponding class, $-1$ if the data point does not belong to the corresponding class, and 0 if the data point is not labeled.\nLet Y be the label matrix, each column of which is a label vector.\nTherefore, Y is an n \u00d7 c matrix, where c is the number of classes, $|\mathcal{C}|$.\nThen the values of Eq. (5) form a matrix $H = ZW^\top + \mathbf{1}b^\top$, (7) where $\mathbf{1}$ is a vector of size n whose elements are all one, W is a c \u00d7 l parameter matrix, and b is a parameter vector of size c.\nThe total loss is proportional to the sum of Eq. (6) over all labeled data points and all classes, $L_Y(W, b, Z) = \lambda \sum_{i: v_i \in T,\, j \in \mathcal{C}} g(Y_{ij} H_{ij})$, where $\lambda$ is the parameter to scale the term.\nTo derive a robust solution, we also use Tikhonov regularization for W, $\Omega_W(W) = \frac{\nu}{2} \|W\|_F^2$, where $\nu$ is the parameter to
Then the supervised matrix factorization problem becomes

min_{U,V,Z,W,b} J_s(U, V, Z, W, b),    (8)

where J_s(U, V, Z, W, b) = J(U, V, Z) + L_Y(W, b, Z) + \Omega_W(W).

We can also use gradient methods to solve the problem of Eq. (8). The gradients are

\partial J_s / \partial U = \partial J / \partial U,
\partial J_s / \partial V = \partial J / \partial V,
\partial J_s / \partial Z = \partial J / \partial Z + \lambda G W,
\partial J_s / \partial W = \lambda G^T Z + \nu W,
\partial J_s / \partial b = \lambda G^T 1,

where G is an n x c matrix whose ik-th element is Y_{ik} g'(Y_{ik} H_{ik}), and

g'(x) = 0               when x >= 2,
        -1              when x <= 0,
        (1/2)(x - 2)    when 0 < x < 2.

Once we obtain W, b, and Z, we can apply h to the vertices with unknown class labels, or apply traditional classification algorithms to Z to get the classification results.

5. EXPERIMENTS

5.1 Data Description

In this section, we perform classification on two datasets to demonstrate our approach: the WebKB data set [1] and the Cora data set [15]. The WebKB data set consists of about 6,000 web pages from the computer science departments of four schools (Cornell, Texas, Washington, and Wisconsin). The web pages are classified into seven categories. The numbers of pages in each category are shown in Table 1. The Cora data set consists of the abstracts and references of about 34,000 computer science research papers. We use the part of them categorized into one of the subfields of data structure (DS), hardware and architecture (HA), machine learning (ML), and programming language (PL). We remove those articles without references to other articles in the set. The number of papers and the number of subfields in each area are shown in Table 2.

area                             # of papers   # of subfields
Data structure (DS)                      751                9
Hardware and architecture (HA)           400                7
Machine learning (ML)                   1617                7
Programming language (PL)               1575                9

Table 2: Dataset of Cora

5.2 Methods

The task of the experiments is to classify the data based on their content information and/or link structure.
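The gradient formulas above can be verified against finite differences. The following self-contained sketch uses synthetic sizes and data, and covers only the L_Y term of J_s (the J and \nu W contributions are omitted); it is illustrative, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):   # smoothed hinge loss of Eq. (6)
    return np.where(x >= 2, 0.0, np.where(x <= 0, 1.0 - x, 0.25 * (x - 2) ** 2))

def gp(x):  # its derivative g'(x)
    return np.where(x >= 2, 0.0, np.where(x <= 0, -1.0, 0.5 * (x - 2)))

n, l, c = 8, 3, 4                       # points, latent factors, classes (toy sizes)
Z = rng.normal(size=(n, l))
W = rng.normal(size=(c, l))
b = rng.normal(size=c)
Y = -np.ones((n, c))                    # one-against-rest coding with +1 / -1;
Y[np.arange(n), rng.integers(0, c, size=n)] = 1.0   # every point labeled here
lam = 0.7

def loss(W, b, Z):
    H = Z @ W.T + b                     # Eq. (7): H = Z W^T + 1 b^T (b broadcast)
    return lam * g(Y * H).sum()

H = Z @ W.T + b
G = Y * gp(Y * H)                       # G_ik = Y_ik g'(Y_ik H_ik)
dW = lam * G.T @ Z                      # lambda G^T Z   (nu W term omitted)
dZ = lam * G @ W                        # lambda G W     (dJ/dZ term omitted)
db = lam * G.T @ np.ones(n)             # lambda G^T 1

# one-sided finite differences on single entries of each gradient
eps = 1e-6
Wp = W.copy(); Wp[1, 2] += eps
assert abs((loss(Wp, b, Z) - loss(W, b, Z)) / eps - dW[1, 2]) < 1e-4
Zp = Z.copy(); Zp[2, 1] += eps
assert abs((loss(W, b, Zp) - loss(W, b, Z)) / eps - dZ[2, 1]) < 1e-4
bp = b.copy(); bp[0] += eps
assert abs((loss(W, bp, Z) - loss(W, b, Z)) / eps - db[0]) < 1e-4
```

On this toy instance each analytic gradient entry matches its finite-difference estimate to well within the 1e-4 tolerance, which is the expected behavior since g' is continuous.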
We use the following methods:

(Figure: bar chart of the classification accuracy (%) of each method on the four WebKB datasets; the values are those of Table 3.)

method                          Cornell         Texas           Washington      Wisconsin
SVM on content                  81.00 ± 0.90    77.00 ± 0.60    85.20 ± 0.90    84.30 ± 0.80
SVM on links                    70.10 ± 0.80    75.80 ± 1.20    80.50 ± 0.30    74.90 ± 1.00
SVM on link-content             80.00 ± 0.80    78.30 ± 1.00    85.20 ± 0.70    84.00 ± 0.90
Directed graph regularization   89.47 ± 1.41    91.28 ± 0.75    91.08 ± 0.51    89.26 ± 0.45
PLSI+PHITS                      80.74 ± 0.88    76.15 ± 1.29    85.12 ± 0.37    83.75 ± 0.87
link-content MF                 93.50 ± 0.80    96.20 ± 0.50    93.60 ± 0.50    92.60 ± 0.60
link-content sup. MF            93.80 ± 0.70    97.07 ± 1.11    93.70 ± 0.60    93.00 ± 0.30

Table 3: Classification accuracy (mean ± std-err %) on WebKB data set

• SVM on content: We apply support vector machines (SVM) to the content of documents. The features are the bag-of-words, and all words are stemmed. This method ignores the link structure in the data. A linear SVM is used, and its regularization parameter is selected by cross-validation. The implementation of SVM used in the experiments is libSVM [4].

• SVM on links: We treat links as the features of each document, i.e.
the i-th feature is link-to-page_i. We apply SVM to the link features. This method uses link information, but not the link structure.

• SVM on link-content: We combine the features of the above two methods, with different weights for the two sets of features. The weights are also selected by cross-validation.

• Directed graph regularization: This method is described in [25] and [24]. It is based solely on the link structure.

• PLSI+PHITS: This method is described in [6]. It combines text content information and link structure for analysis. The PHITS algorithm is in spirit similar to Eq. (1), with an additional nonnegative constraint. It models the out-going and in-coming structures separately.

• Link-content MF: This is our approach of matrix factorization described in Section 3. We use 50 latent factors for Z. After we compute Z, we train a linear SVM using the training portion of Z as the feature vectors, then apply the SVM to the testing portion of Z to obtain the final result, because of the multiclass output.

• Link-content sup. MF: This is our approach of supervised matrix factorization described in Section 4. We use 50 latent factors for Z. After we compute Z, we train a linear SVM on the training portion of Z, then apply the SVM to the testing portion of Z to obtain the final result, because of the multiclass output.

We randomly split the data into five folds and repeat the experiment five times; each time we use one fold for testing and the other four folds for training. During the training process, we use cross-validation to select all model parameters. We measure the results by the classification accuracy, i.e., the percentage of correctly classified documents in the entire data set. The results are reported as the average classification accuracy and its standard deviation over the five repeats.

5.3 Results

The average classification accuracies for the WebKB data set are shown in Table 3.
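The five-fold protocol just described can be sketched as follows. The features, labels, and classifier here are all hypothetical: synthetic data stands in for the latent features Z, and a nearest-centroid rule stands in for the linear SVM so the sketch needs no extra libraries.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for latent features Z and class labels.
n, l, c = 100, 10, 4
labels = rng.integers(0, c, size=n)
centers = 3.0 * rng.normal(size=(c, l))
Z = centers[labels] + rng.normal(size=(n, l))

folds = np.array_split(rng.permutation(n), 5)   # random five-fold split
accs = []
for k in range(5):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(5) if j != k])
    # "train" a nearest-centroid classifier on the four training folds
    centroids = np.stack([Z[train][labels[train] == y].mean(axis=0)
                          for y in range(c)])
    # evaluate on the held-out fold
    d = ((Z[test][:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    accs.append((d.argmin(axis=1) == labels[test]).mean())

print(f"five-fold accuracy: {100 * np.mean(accs):.1f} +/- {100 * np.std(accs):.1f} %")
```

Reporting the mean and standard deviation over the five held-out folds mirrors the mean ± std-err entries of Tables 3 and 4.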
For this task, the accuracies of SVM on links are worse than those of SVM on content. But directed graph regularization, which is also based on links alone, achieves a much higher accuracy. This implies that the link structure plays an important role in the classification of this dataset, but that individual links in a web page give little information. The combination of link and content using SVM achieves an accuracy similar to that of SVM on content alone, which confirms that individual links in a web page give little information. Since our approach considers the link structure as well as the content information, our two methods achieve the highest accuracies among these approaches. The difference between the results of our two methods is not significant; however, the experiments below show the difference between them.

The classification accuracies for the Cora data set are shown in Table 4. In this experiment, the accuracies of SVM on the combination of links and content are higher than those of either SVM on content or SVM on links.

(Figure: bar chart of the classification accuracy (%) of each method on the four Cora areas; the values are those of Table 4.)

method                          DS              HA              ML              PL
SVM on content                  53.70 ± 0.50    67.50 ± 1.70    68.30 ± 1.60    56.40 ± 0.70
SVM on links                    48.90 ± 1.70    65.80 ± 1.40    60.70 ± 1.10    58.20 ± 0.70
SVM on link-content             63.70 ± 1.50    70.50 ± 2.20    70.56 ± 0.80    62.35 ± 1.00
Directed graph regularization   46.07 ± 0.82    65.50 ± 2.30    59.37 ± 0.96    56.06 ± 0.84
PLSI+PHITS                      53.60 ± 1.78    67.40 ± 1.48    67.51 ± 1.13    57.45 ± 0.68
link-content MF                 61.00 ± 0.70    74.20 ± 1.20    77.50 ± 0.80    62.50 ± 0.80
link-content sup. MF            69.38 ± 1.80    74.20 ± 0.70    78.70 ± 0.90    68.76 ± 1.32

Table 4: Classification accuracy (mean ± std-err %) on Cora data set

This indicates that both content and links are informative for classifying the articles into
subfields. The method of directed graph regularization does not perform as well as SVM on link-content, which confirms the importance of the article content in this task. Though our method of link-content matrix factorization performs slightly better than the other methods, our method of link-content supervised matrix factorization outperforms them significantly.

5.4 The Number of Factors

As we discussed in Section 3, the computational complexity of each iteration for solving the optimization problem is quadratic in the number of factors. We perform experiments to study how the number of factors affects the accuracy of prediction. We use different numbers of factors for the Cornell data of the WebKB data set and the machine learning (ML) data of the Cora data set. The results are shown in Figures 4(a) and 4(b).

(Figure 4: Accuracy vs. number of factors for link-content MF and link-content sup. MF; (a) Cornell data, (b) ML data.)

The figures show that the accuracy increases as the number of factors increases. This differs from choosing the optimal number of clusters in a clustering application: the number of factors determines how much information the latent variables represent. We have imposed regularization over the factors, which avoids overfitting for a large number of factors. To choose the number of factors, we need to consider the trade-off between the accuracy and the computation time, which is quadratic in the number of factors. The difference between the method of matrix factorization and the supervised one decreases as the number of factors increases. This indicates the usefulness of supervised matrix factorization at lower numbers of factors.

6. DISCUSSIONS

The loss functions L_A in Eq. (2) and L_C in Eq. (3) use squared loss for computational convenience. In fact,
squared loss does not precisely describe the underlying noise model, because the weights of the adjacency matrix can take only nonnegative values (in our case, zero or one only), and the components of the content matrix C can take only nonnegative integers. Therefore, we can apply other types of loss, such as the hinge loss or the smoothed hinge loss, e.g.,

L_A(U, Z) = \mu h(A, Z U Z^T),  where  h(A, B) = \sum_{i,j} [1 - A_{ij} B_{ij}]_+ .

In this paper, we mainly discuss the application of classification. An entry of matrix Z represents the relationship between a web page and a factor. The values of the entries are the weights of the linear model, rather than the probabilities of web pages belonging to latent topics. Therefore, we allow the components to take any possible real values. When we come to the clustering application, we can use this model to find Z, then apply K-means to partition the web pages into clusters. Actually, we can use the idea of nonnegative matrix factorization for clustering [20] to directly cluster web pages. As in the example with nonnegative constraints shown in Section 3, we represent each cluster by a latent topic, i.e., the dimensionality of the latent space is set to the number of clusters we want.
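The K-means route mentioned above can be sketched with a toy Z (a hypothetical factorization output) and a few Lloyd iterations; this is an illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy latent matrix Z drawn around k well-separated centers, standing in
# for the factorization output; K-means then partitions the rows (pages).
n, l, k = 60, 5, 3
Z = 4.0 * rng.normal(size=(k, l))[rng.integers(0, k, size=n)] \
    + rng.normal(size=(n, l))

centroids = Z[rng.choice(n, size=k, replace=False)].copy()
for _ in range(20):                     # Lloyd iterations
    # squared distance of every row of Z to every centroid
    d = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    assign = d.argmin(axis=1)           # nearest-centroid assignment
    for j in range(k):
        if np.any(assign == j):         # skip empty clusters
            centroids[j] = Z[assign == j].mean(axis=0)

print(np.bincount(assign, minlength=k)) # sizes of the k clusters
```

With the nonnegative variant of Eq. (9), one could instead read the cluster of each page directly off the dominant component of its row of Z.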
Then the problem of Eq. (4) becomes

min_{U,V,Z} J(U, V, Z),  s.t. Z >= 0.    (9)

Solving Eq. (9), we can obtain more interpretable results, which could be used for clustering.

7. CONCLUSIONS

In this paper, we study the problem of how to combine the information of content and links for web page analysis, mainly for the classification application. We propose a simple approach using factors to model the text content and link structure of web pages/documents. The directed links are generated from the linear combination of the linkage between source and destination factors. By sharing factors between text content and link structure, it is easy to combine both the content information and the link structure. Our experiments show that our approach is effective for classification. We also discuss an extension for the clustering application.

Acknowledgment

We would like to thank Dr. Dengyong Zhou for sharing the code of his algorithm. Thanks also to the reviewers for their constructive comments.

8. REFERENCES

[1] CMU world wide knowledge base (WebKB) project. Available at http://www.cs.cmu.edu/~WebKB/.
[2] D. Achlioptas, A. Fiat, A. R. Karlin, and F. McSherry. Web search via hub synthesis. In IEEE Symposium on Foundations of Computer Science, pages 500-509, 2001.
[3] S. Chakrabarti, B. E. Dom, and P. Indyk. Enhanced hypertext categorization using hyperlinks. In L. M. Haas and A. Tiwary, editors, Proceedings of SIGMOD-98, ACM International Conference on Management of Data, pages 307-318, Seattle, US, 1998. ACM Press, New York, US.
[4] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[5] D. Cohn and H. Chang. Learning to probabilistically identify authoritative documents. In Proceedings of ICML 2000, pages 167-174, 2000.
[6] D. Cohn and T.
Hofmann. The missing link - a probabilistic model of document content and hypertext connectivity. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 430-436. MIT Press, 2001.
[7] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[8] S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407, 1990.
[9] X. He, H. Zha, C. Ding, and H. Simon. Web document clustering using hyperlink structures. Computational Statistics and Data Analysis, 41(1):19-45, 2002.
[10] T. Hofmann. Probabilistic latent semantic indexing. In Proceedings of the Twenty-Second Annual International SIGIR Conference, 1999.
[11] T. Joachims, N. Cristianini, and J. Shawe-Taylor. Composite kernels for hypertext categorisation. In C. Brodley and A. Danyluk, editors, Proceedings of ICML-01, 18th International Conference on Machine Learning, pages 250-257, Williams College, US, 2001. Morgan Kaufmann Publishers, San Francisco, US.
[12] J. M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632, 1999.
[13] P. Kolari, T. Finin, and A. Joshi. SVMs for the blogosphere: Blog identification and splog detection. In AAAI Spring Symposium on Computational Approaches to Analysing Weblogs, March 2006.
[14] O. Kurland and L. Lee. PageRank without hyperlinks: structural re-ranking using links induced by language models. In SIGIR '05: Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, pages 306-313, New York, NY, USA, 2005. ACM Press.
[15] A. McCallum, K. Nigam, J. Rennie, and K. Seymore. Automating the construction of internet portals with machine learning. Information Retrieval Journal, 3:127-163, 2000.
[16] H.-J. Oh, S. H.
Myaeng, and M.-H. Lee. A practical hypertext categorization method using links and incrementally available class information. In SIGIR '00: Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 264-271, New York, NY, USA, 2000. ACM Press.
[17] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Stanford Digital Library working paper 1997-0072, 1997.
[18] C. Spearman. "General intelligence," objectively determined and measured. The American Journal of Psychology, 15(2):201-292, Apr 1904.
[19] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In Proceedings of the 18th International UAI Conference, 2002.
[20] W. Xu, X. Liu, and Y. Gong. Document clustering based on non-negative matrix factorization. In SIGIR '03: Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, pages 267-273. ACM Press, 2003.
[21] Y. Yang, S. Slattery, and R. Ghani. A study of approaches to hypertext categorization. Journal of Intelligent Information Systems, 18(2-3):219-241, 2002.
[22] K. Yu, S. Yu, and V. Tresp. Multi-label informed latent semantic indexing. In SIGIR '05: Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, pages 258-265, New York, NY, USA, 2005. ACM Press.
[23] T. Zhang, A. Popescul, and B. Dom. Linear prediction models with graph regularization for web-page categorization. In KDD '06: Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 821-826, New York, NY, USA, 2006. ACM Press.
[24] D. Zhou, J. Huang, and B. Schölkopf. Learning from labeled and unlabeled data on a directed graph. In Proceedings of the 22nd International Conference on Machine Learning, Bonn, Germany, 2005.
[25] D. Zhou, B.
Schölkopf, and T. Hofmann. Semi-supervised learning on directed graphs. In Proceedings of Neural Information Processing Systems, 2004.
this paper, we propose a simple technique for analyzing inter-connected documents, such as web pages, using factor analysis [18].\nIn the proposed technique, both content information and link structures are seamlessly combined through a single set of latentfactors.\nOur model contains two components.\nThe first component captures the content information.\nThis component has a form similar to that of the latent topics in the Latent Semantic Indexing (LSI) [8] in traditional information retrieval.\nThat is, documents are decomposed into latent topics\/factors, which in turn are represented as term vectors.\nThe second component captures the information contained in the underlying link structure, such as links from homepages of students to those of faculty members.\nA factor can be loosely considered as a type of documents (e.g., those homepages belonging to students).\nInstead, similar to LSI, the factors are learned from the data.\nTraditional factor analysis models the variables associated with entities through the factors.\nHowever, in analysis of link structures, we need to model the relationship of two ends of links, i.e., edges between vertex pairs.\nTherefore, the model should involve factors of both vertices of the edge.\nThis is a key difference between traditional factor analysis and our model.\nIn our model, we connect two components through a set of shared factors, that is, the latent factors in the second component (for contents) are tied to the factors in the first component (for links).\nBy doing this, we search for a unified set of latent factors that best explains both content and link structures simultaneously and seamlessly.\nBecause the two factorizations share a common base, the discovered bases (latent factors) explain both content information and link structures, and are then used in further information management tasks such as classification.\nThis paper is organized as follows.\nSection 2 reviews related work.\nSection 3 presents the proposed 
approach to analyze the web page based on the combined information of links and content.\nSection 5 shows the experiment results.\nSection 6 discusses the details of this approach and Section 7 concludes.\n2.\nRELATED WORK\nIn the content analysis part, our approach is closely related to Latent Semantic Indexing (LSI) [8].\nLSI maps documents into a lower dimensional latent space.\nThe latent space implicitly captures a large portion of information of documents, therefore it is called the latent semantic space.\nThe similarity between documents could be defined by the dot products of the corresponding vectors of documents in the latent space.\nAnalysis tasks, such as classification, could be performed on the latent space.\nThe commonly used singular value decomposition (SVD) method ensures that the data points in the latent space can optimally reconstruct the original documents.\nThough our approach also uses latent space to represent web pages (documents), we consider the link structure as well as the content of web pages.\nIn the link analysis approach, the framework of hubs and authorities (HITS) [12] puts web page into two categories, hubs and authorities.\nUsing recursive notion, a hub is a web page with many outgoing links to authorities, while an authority is a web page with many incoming links from hubs.\nInstead of using two categories, PageRank [17] uses a single category for the recursive notion, an authority is a web page with many incoming links from authorities.\nHe et al. [9] propose a clustering algorithm for web document clustering.\nThe algorithm incorporates link structure and the co-citation patterns.\nIn the algorithm, all links are treated as undirected edge of the link graph.\nThe content information is only used for weighing the links by the textual similarity of both ends of the links.\nZhang et al. 
[23] uses the undirected graph regularization framework for document classification.\nAchlioptas et al [2] decompose the web into hub and authority attributes then combine them with content.\nThe framework combines the hub and authority information of web pages.\nBut it is difficult to combine the content information into that framework.\nOur approach consider the content and the directed linkage between topics of source and destination web pages in one step, which implies the topic combines the information of web page as authorities and as hubs in a single set offactors.\nCohn and Hofmann [6] construct the latent space from both content and link information, using content analysis based on probabilistic LSI (PLSI) [10] and link analysis based on PHITS [5].\nThe major difference between the approach of [6] (PLSI+PHITS) and our approach is in the part of link analysis.\nIn PLSI+PHITS, the link is constructed with the linkage from the topic of the source web page to the destination web page.\nIn the model, the outgoing links of the destination web page have no effect on the source web page.\nIn other words, the overall link structure is not utilized in PHITS.\nIn our approach, the link is constructed with the linkage between the factor of the source web page and the factor of the destination web page, instead of the destination web page itself.\nThe factor of the destination web page contains information of its outgoing links.\nIn turn, such information is passed to the factor of the source web page.\nThe model was applied to web page classification, where web pages are entities and hyperlinks are treated as relationships.\nRMNs apply conditional random fields to define a set of potential functions on cliques of random variables, where the link structure provides hints to form the cliques.\nthe potential functions.\nThe following are some work on combining documents and links, but the methods are loosely related to our approach.\nThe experiments of [21] show that 
using terms from the linked document improves the classification accuracy.\nChakrabarti et al. [3] use co-citation information in their classification model.\nJoachims et al. [11] combine text kernels and co-citation kernels for classification.\nOh et al [16] use the Naive Bayesian frame to combine link information with content.\n7.\nCONCLUSIONS\nIn this paper, we study the problem of how to combine the information of content and links for web page analysis, mainly on classification application.\nWe propose a simple approach using factors to model the text content and link structure of web pages\/documents.\nThe directed links are generated from the linear combination of linkage of between source and destination factors.\nBy sharing factors between text content and link structure, it is easy to combine both the content information and link structure.\nOur experiments show our approach is effective for classification.\nWe also discuss an extension for clustering application.","lvl-2":"Combining Content and Link for Classification using Matrix Factorization\nABSTRACT\nThe world wide web contains rich textual contents that are interconnected via complex hyperlinks.\nThis huge database violates the assumption held by most of conventional statistical methods that each web page is considered as an independent and identical sample.\nIt is thus difficult to apply traditional mining or learning methods for solving web mining problems, e.g., web page classification, by exploiting both the content and the link structure.\nThe research in this direction has recently received considerable attention but are still in an early stage.\nThough a few methods exploit both the link structure or the content information, some of them combine the only authority information with the content information, and the others first decompose the link structure into hub and authority features, then apply them as additional document features.\nBeing practically attractive for its great simplicity, 
this paper aims to design an algorithm that exploits both the content and linkage information, by carrying out a joint factorization on both the linkage adjacency matrix and the document-term matrix, and derives a new representation for web pages in a low-dimensional factor space, without explicitly separating them into content, hub, or authority factors. Further analysis can be performed based on this compact representation of web pages. In the experiments, the proposed method is compared with state-of-the-art methods and demonstrates an excellent accuracy in hypertext classification on the WebKB and Cora benchmarks.

1. INTRODUCTION

With the advance of the World Wide Web, more and more hypertext documents become available on the Web. Some examples of such data include organizational and personal web pages (e.g., the WebKB benchmark data set, which contains university web pages), research papers (e.g., data in CiteSeer), online news articles, and customer-generated media (e.g., blogs). Compared to data in traditional information management, in addition to content, these data on the Web also contain links: e.g., hyperlinks from a student's homepage pointing to the homepage of her advisor, paper citations, sources of a news article, comments of one blogger on posts from another blogger, and so on. Performing information management tasks on such structured data raises many new research challenges. In the following discussion, we use the task of web page classification as an illustrating example, while the techniques we develop in later sections apply equally well to many other tasks in information retrieval and data mining.

For the classification problem of web pages, a simple approach is to treat web pages as independent documents. The advantage of this approach is that many off-the-shelf classification tools can be directly applied to the problem. However, this approach relies only on the content of web pages and ignores the structure of links
among them. Link structures provide invaluable information about properties of the documents as well as relationships among them. For example, in the WebKB dataset, the link structure provides additional insights about the relationships among documents (e.g., links often point from a student to her advisor or from a faculty member to his projects). Since some links among these documents imply inter-dependence among the documents, the usual i.i.d. (independent and identically distributed) assumption of documents no longer holds. From this point of view, traditional classification methods that ignore the link structure may not be suitable. On the other hand, a few studies, for example [25], rely solely on link structures. It is, however, a very rare case that content information is ignorable. For example, in the Cora dataset, the content of a research article abstract largely determines the category of the article. To improve the performance of web page classification, therefore, both link structure and content information should be taken into consideration.

To achieve this goal, a simple approach is to convert one type of information into the other. For example, in spam blog classification, Kolari et al. [13] concatenate outlink features with the content features of the blog. In document classification, Kurland and Lee [14] convert content similarity among documents into weights of links. However, link and content information have different properties. For example, a link is an actual piece of evidence that represents an asymmetric relationship, whereas content similarity is usually defined conceptually for every pair of documents in a symmetric way. Therefore, directly converting one type of information into the other usually degrades the quality of the information. On the other hand, there exist some studies, as we will discuss in detail in related work, that consider link information and content information separately and then combine them. We argue that such an approach ignores the inherent consistency between link and content information and therefore fails to combine the two seamlessly. Some work, such as [3], incorporates link information using co-citation similarity, but this may not fully capture the global link structure. In Figure 1, for example, web pages v6 and v7 co-cite web page v8, implying that v6 and v7 are similar to each other. In turn, v4 and v5 should be similar to each other, since v4 and v5 cite the similar web pages v6 and v7, respectively. But using co-citation similarity, the similarity between v4 and v5 is zero without considering other information.

Figure 1: An example of link structure

In this paper, we propose a simple technique for analyzing inter-connected documents, such as web pages, using factor analysis [18]. In the proposed technique, both content information and link structures are seamlessly combined through a single set of latent factors. Our model contains two components. The first component captures the content information. This component has a form similar to that of the latent topics in Latent Semantic Indexing (LSI) [8] in traditional information retrieval. That is, documents are decomposed into latent
topics/factors, which in turn are represented as term vectors. The second component captures the information contained in the underlying link structure, such as links from homepages of students to those of faculty members. A factor can be loosely considered as a type of document (e.g., the homepages belonging to students). It is worth noting that we do not explicitly define the semantics of a factor a priori. Instead, similar to LSI, the factors are learned from the data. Traditional factor analysis models the variables associated with entities through the factors. However, in the analysis of link structures, we need to model the relationship between the two ends of a link, i.e., edges between vertex pairs. Therefore, the model should involve the factors of both vertices of an edge. This is a key difference between traditional factor analysis and our model. In our model, we connect the two components through a set of shared factors; that is, the latent factors in the second component (for links) are tied to the factors in the first component (for contents). By doing this, we search for a unified set of latent factors that best explains both content and link structures simultaneously and seamlessly. In the formulation, we perform factor analysis based on matrix factorization: the solution to the first component is based on factorizing the term-document matrix derived from content features; the solution to the second component is based on factorizing the adjacency matrix derived from links. Because the two factorizations share a common base, the discovered bases (latent factors) explain both content information and link structures, and are then used in further information management tasks such as classification.

This paper is organized as follows. Section 2 reviews related work. Section 3 presents the proposed approach to analyzing web pages based on the combined information of links and content. Section 4 extends the basic framework with a few variants for fine-tuning. Section 5 shows the experimental results. Section 6 discusses the details of this approach and Section 7 concludes.

2. RELATED WORK

In the content analysis part, our approach is closely related to Latent Semantic Indexing (LSI) [8]. LSI maps documents into a lower-dimensional latent space. The latent space implicitly captures a large portion of the information of documents; therefore it is called the latent semantic space. The similarity between documents can be defined by the dot products of the corresponding vectors of documents in the latent space. Analysis tasks, such as classification, can be performed in the latent space. The commonly used singular value decomposition (SVD) method ensures that the data points in the latent space optimally reconstruct the original documents. Though our approach also uses a latent space to represent web pages (documents), we consider the link structure as well as the content of web pages.

In the link analysis literature, the framework of hubs and authorities (HITS) [12] puts web pages into two categories, hubs and authorities. Using a recursive notion, a hub is a web page with many outgoing links to authorities, while an authority is a web page with many incoming links from hubs. Instead of using two categories, PageRank [17] uses a single category for the recursive notion: an authority is a web page with many incoming links from authorities. He et al. [9] propose a clustering algorithm for web document clustering. The algorithm incorporates link structure and co-citation patterns. In the algorithm, all links are treated as undirected edges of the link graph. The content information is only used for weighting the links by the textual similarity of both ends of each link. Zhang et al. [23] use an undirected graph regularization framework for document classification. Achlioptas et al. [2] decompose the web into hub and authority attributes and then combine them with content. Zhou et al.
[25] and [24] propose a directed graph regularization framework for semi-supervised learning. The framework combines the hub and authority information of web pages, but it is difficult to combine the content information into that framework. Our approach considers the content and the directed linkage between topics of source and destination web pages in one step, which implies that a topic combines the information of a web page as an authority and as a hub in a single set of factors.

Cohn and Hofmann [6] construct the latent space from both content and link information, using content analysis based on probabilistic LSI (PLSI) [10] and link analysis based on PHITS [5]. The major difference between the approach of [6] (PLSI+PHITS) and our approach is in the link analysis part. In PLSI+PHITS, a link is modeled as linkage from the topic of the source web page to the destination web page. In that model, the outgoing links of the destination web page have no effect on the source web page; in other words, the overall link structure is not utilized in PHITS. In our approach, a link is modeled as linkage between the factor of the source web page and the factor of the destination web page, instead of the destination web page itself. The factor of the destination web page contains information about its outgoing links, and in turn, such information is passed to the factor of the source web page. As the result of matrix factorization, the factors form a factor graph, a miniature of the original graph that preserves its major structure.

Taskar et al. [19] propose relational Markov networks (RMNs) for entity classification, describing a conditional distribution of entity classes given entity attributes and relationships. The model was applied to web page classification, where web pages are entities and hyperlinks are treated as relationships. RMNs apply conditional random fields to define a set of potential functions on cliques of random variables, where the link structure provides hints for forming the cliques. However, the model does not give an off-the-shelf solution, because its success highly depends on the art of designing the potential functions. Moreover, inference for RMNs is intractable and requires belief propagation.

The following are some works on combining documents and links, but the methods are only loosely related to our approach. The experiments of [21] show that using terms from the linked documents improves the classification accuracy. Chakrabarti et al. [3] use co-citation information in their classification model. Joachims et al.
[11] combine text kernels and co-citation kernels for classification. Oh et al. [16] use the naive Bayes framework to combine link information with content.

3. OUR APPROACH

In this section we first introduce a novel matrix factorization method, which is more suitable than conventional matrix factorization methods for link analysis. We then introduce our approach, which jointly factorizes the document-term matrix and the link matrix and obtains compact and highly indicative factors for representing documents or web pages.

3.1 Link Matrix Factorization

Suppose we have a directed graph G = (V, E), where the vertex set V = {v_i}, i = 1, ..., n, represents the web pages and the edge set E represents the hyperlinks between web pages. Let A = {a_sd} denote the n × n adjacency matrix of G, which is also called the link matrix in this paper. For a pair of vertices v_s and v_d, let a_sd = 1 when there is an edge from v_s to v_d, and a_sd = 0 otherwise. Note that A is an asymmetric matrix, because hyperlinks are directed.

Most machine learning algorithms assume a feature-vector representation of instances. For web page classification, however, the link graph does not readily give such a vector representation for web pages. If one directly uses each row or column of A for the job, she will suffer a very high computational cost, because the dimensionality equals the number of web pages. It also produces a poor classification accuracy (see our experiments in Section 5), because A is extremely sparse¹. The idea of link matrix factorization is to derive a high-quality feature representation Z of web pages by analyzing the link matrix A, where Z is an n × l matrix, each row being the l-dimensional feature vector of a web page. The new representation captures the principal factors of the link structure and makes further processing more efficient. One may use a method similar to LSI, applying the well-known principal
component analysis (PCA) for deriving Z from A. The corresponding optimization problem² is

    min_{Z,U} ||A − ZU^T||_F^2 + γ ||U||_F^2,    (1)

where γ is a small positive number, U is an n × l matrix, and ||·||_F is the Frobenius norm. The optimization aims to approximate A by ZU^T, a product of two low-rank matrices, with a regularization on U. In the end, the i-th row vector of Z can be thought of as the hub feature vector of vertex v_i, and the row vectors of U can be thought of as the authority features. A link generation model proposed in [2] is similar to this PCA approach. Since A is a nonnegative matrix here, one can also consider putting nonnegative constraints on U and Z, which produces an algorithm similar to PLSA [10] and NMF [20].

¹ Due to the sparsity of A, links from two similar pages may not share any common target pages, which makes the pages appear "dissimilar". However, the two pages may be indirectly linked to many common pages via their neighbors.
² Another equivalent form is min_{Z,U} ||A − ZU^T||_F^2, s.t. U^T U = I. The solution Z is identical up to a scaling factor.

However, despite its popularity in matrix analysis, PCA (or other similar methods like PLSA) is restrictive for link matrix factorization. The major problem is that PCA ignores the fact that the rows and columns of A are indexed by exactly the same set of objects (i.e., web pages). The approximating matrix Ã = ZU^T shows no evidence that links are within the same set of objects. To see the drawback, consider a link-transitivity situation v_i → v_s → v_j, where page i is linked to page s, which itself is linked to page j. Because Ã = ZU^T in effect treats link targets as a separate set of objects, it cannot express that the target of the first link and the source of the second are the same page. To overcome this problem of PCA, in this paper we suggest a different factorization:

    min_{Z,U} ||A − ZUZ^T||_F^2 + γ ||U||_F^2,    (2)

where U is an l × l full matrix. Note that U is not symmetric; thus ZUZ^T produces an asymmetric matrix, which is the case of A.
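To make Eq. (2) concrete, the following is a minimal numpy sketch that fits A ≈ ZUZ^T by plain gradient descent; the 6-page graph, step size, iteration count, and γ are illustrative choices of ours (this is not the paper's Figure 1 graph, and the paper itself suggests conjugate-gradient or quasi-Newton methods). The gradient expressions follow directly from differentiating the objective in Eq. (2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 6-page directed link matrix A (a_sd = 1 iff page s links to page d).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],   # pages 1 and 2 have identical in/out links
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
], dtype=float)

n, l = A.shape[0], 2      # l latent factors per page
gamma, step = 0.1, 0.005  # regularization weight and step size (illustrative)

Z = rng.normal(scale=0.3, size=(n, l))  # one l-dim factor vector per vertex
U = rng.normal(scale=0.3, size=(l, l))  # asymmetric factor-to-factor link strengths

def objective(Z, U):
    """Value of Eq. (2): ||A - Z U Z^T||_F^2 + gamma * ||U||_F^2."""
    E = A - Z @ U @ Z.T
    return np.sum(E * E) + gamma * np.sum(U * U)

history = [objective(Z, U)]
for _ in range(3000):
    E = A - Z @ U @ Z.T                          # residual of the approximation
    grad_Z = -2.0 * (E @ Z @ U.T + E.T @ Z @ U)  # d/dZ of the fit term
    grad_U = -2.0 * (Z.T @ E @ Z) + 2.0 * gamma * U
    Z -= step * grad_Z
    U -= step * grad_U
    history.append(objective(Z, U))

# The rows of Z are the link-based feature vectors of the pages, usable as
# input to an SVM or K-Means as described in the text.
```

Because ZUZ^T keeps a single set of row features Z on both sides of every link, the transitivity chain v_i → v_s → v_j is represented through the shared factor vector of v_s, which the ZU^T form cannot express.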
Again, each row vector of Z corresponds to a feature vector of a web page. The new approximating form Ã = ZUZ^T makes it explicit that the links are between one and the same set of objects, represented by the features Z. The factor model actually maps each vertex v_i into a vector z_i = {z_{i,k}; 1 ≤ k ≤ l} in the R^l space. We call this R^l space the factor space. Then {z_i} encodes the information of the incoming and outgoing connectivity of the vertices {v_i}. The factor loadings, U, explain how these observed connections arise from {z_i}. Once we have the vectors z_i, we can use many traditional classification methods (such as SVMs) or clustering tools (such as K-Means) to perform the analysis.

Illustration Based on a Synthetic Problem

To further illustrate the advantages of the proposed link matrix factorization, Eq. (2), let us consider the graph in Figure 1. Given these observations, we can summarize the graph by grouping it into the factor graph depicted in Figure 2.

Figure 2: Summarize Figure 1 with a factor graph

Next we perform the two factorization methods, Eq. (2) and Eq. (1), on this link matrix. A good low-rank representation should reveal the structure of the factor graph. First we try the PCA-like decomposition, solving Eq. (1). In the result, the row vectors of v6 and v7 are the same in Z, indicating that v6 and v7 have the same hub attributes. The row vectors of v2 and v3 are the same in U, indicating that v2 and v3 have the same authority attributes. It is not clear to see the similarity between v4 and v5, because their inlinks (and outlinks) are different. Then, we
factorize A by ZUZ^T via solving Eq. (2), and obtain the results. The resultant Z is very consistent with the clustering structure of the vertices: the row vectors of v2 and v3 are the same, those of v4 and v5 are the same, and those of v6 and v7 are the same. Even more interestingly, if we add constraints to ensure that Z and U are nonnegative, we obtain a solution that clearly tells the assignment of vertices to clusters from Z and the links of the factor graph from U. When interpretability is not critical in some tasks, for example classification, we found that better accuracies are achieved without the nonnegative constraints. Given the above analysis, it is clear that the factorization ZUZ^T is more expressive than ZU^T in representing the link matrix A.

3.2 Content Matrix Factorization

Now let us consider the content information on the vertices. To combine the link information and content information, we want to use the same latent space to approximate the content as the latent space for the links. Using the bag-of-words approach, we denote the content of the web pages by an n × m matrix C, each of whose rows represents a document and each of whose columns represents a keyword, where m is the number of keywords. As in Latent Semantic Indexing (LSI) [8], the l-dimensional latent space for words is denoted by an m × l matrix V. Therefore, we use ZV^T to approximate the matrix C:

    min_{Z,V} ||C − ZV^T||_F^2 + β ||V||_F^2,    (3)

where β is a small positive number; β||V||_F^2 serves as a regularization term to improve the robustness.

3.3 Joint Link-Content Matrix Factorization

There are many ways to employ both the content and link information for web page classification. Our idea in this paper is not to simply combine them, but rather to fuse them into a single, consistent, and compact feature representation. To achieve this goal, we solve the following problem:

    min_{U,V,Z} J(U, V, Z) = ||A − ZUZ^T||_F^2 + ||C − ZV^T||_F^2 + γ ||U||_F^2 + β ||V||_F^2.    (4)

The new representation Z is ensured to capture both the structure of the link matrix A and that of the content matrix C. Once we find the optimal Z, we can apply traditional classification or clustering methods to the vectorial data Z. The relationship among these matrices is depicted in Figure 3.

Figure 3: Relationship among the matrices. Node Y is the target of classification.

Eq. (4) can be solved using gradient methods, such as the conjugate gradient method and quasi-Newton methods. The main computation in gradient methods is evaluating the objective function J and its gradients with respect to the variables. Because of the sparsity of A, the computational complexity of multiplying A and Z is O(µ_A l), where µ_A is the number of nonzero entries in A. Similarly, the computational complexity of C^T Z and CV is O(µ_C l), where µ_C is the number of nonzero entries in C. The computational complexity of the remaining multiplications in the gradient computation is O(n l^2). Therefore, the total computational complexity of one iteration is O(µ_A l + µ_C l + n l^2). The number of links and the number of words in a web page are relatively small compared to the number of web pages, and are almost constant as the number of web pages/documents increases, i.e.
\u00b5A = O (n) and \u00b5C = O (n).\nTherefore, theoretically the computation time is almost linear to the number of web pages\/documents, n.\n4.\nSUPERVISED MATRIX FACTORIZATION\nConsider a web page classification problem.\nWe can solve Eq.\n(4) to obtain Z as Section 3, then use a traditional classifier to perform classification.\nHowever, this approach does not take data labels into account in the first step.\nBelieving that using data labels improves the accuracy by obtaining a better Z for the classification, we consider to use the data labels to guide the matrix factorization, called supervised matrix factorization [22].\nBecause some data used in the matrix factorization have no label information, the supervised matrix factorization falls into the category of semi-supervised learning.\nLet C be the set of classes.\nFor simplicity, we first consider binary class problem, i.e. C = f \u2212 1, 1}.\nAssume we know the labels fyi} for vertices in T \u2282 V.\nWe want to find a hypothesis h: V--+ R, such that we assign vi to 1 when h (vi)> 0, \u2212 1 otherwise.\nWe assume a transform from the latent space to R is linear, i.e.\nTable 1: Dataset of WebKB\nwhere w and b are parameters to estimate.\nHere, w is the norm of the decision boundary.\nSimilar to Support Vector Machines (SVMs) [7], we can use the hinge loss to measure the loss,\nwhere [x] + is x if x> 0, 0 if x <0.\nHowever, the hinge loss is not smooth at the hinge point, which makes it difficult to apply gradient methods on the problem.\nTo overcome the difficulty, we use a smoothed version of hinge loss for each data point,\nwhere and\nWe reduce a multiclass problem into multiple binary ones.\nOne simple scheme of reduction is the one-against-rest coding scheme.\nIn the one-against-rest scheme, we assign a label vector for each class label.\nThe element of a label vector is 1 if the data point belongs the corresponding class,--1, if the data point does not belong the corresponding class, 0, if the data 
point is not labeled.\nLet Y be the label matrix, each column of which is a label vector.\nTherefore, Y is an n \u00d7 c matrix, where c is the number of classes, |C|.\nThen the values of Eq. (5) form a matrix ZW\u22a4 + 1b\u22a4,\nwhere 1 is a vector of size n whose elements are all one, W is a c \u00d7 l parameter matrix, and b is a parameter vector of size c.\nThe total loss is proportional to the sum of Eq. (6) over all labeled data points and the classes,\nwhere \u03bb is the parameter to scale the term.\nTo derive a robust solution, we also use Tikhonov regularization for W,\nwhere \u03bd is the parameter to scale the term.\nOnce we obtain W, b, and Z, we can apply h to the vertices with unknown class labels, or apply traditional classification algorithms to Z to get the classification results.\n5.\nEXPERIMENTS\n5.1 Data Description\nIn this section, we perform classification on two datasets to demonstrate our approach.\nThe two datasets are the WebKB data set [1] and the Cora data set [15].\nThe WebKB data set consists of about 6000 web pages from the computer science departments of four schools (Cornell, Texas, Washington, and Wisconsin).\nThe web pages are classified into seven categories.\nThe numbers of pages in each category are shown in Table 1.\nThe Cora data set consists of the abstracts and references of about 34,000 computer science research papers.\nWe use part of them, categorized into one of the subfields of data structure (DS), hardware and architecture (HA), machine learning (ML), and programming language (PL).\nWe remove those articles without references to other articles in the set.\nThe number of papers and the number of subfields in each area are shown in Table 2.\nTable 2: Dataset of Cora (area, # of papers, and # of subfields, for DS, HA, ML, and PL)\nThen the supervised matrix factorization problem becomes\nmin_{U, V, Z, W, b} Js(U, V, Z, W, b) (8)\nwhere Js(U, V, Z, W, b) = J(U, V, Z) + LY (W, b, Z) + 
\u03a9W (W).\nWe can also use gradient methods to solve the problem of Eq. (8).\n5.2 Methods\nThe task of the experiments is to classify the data based on their content information and\/or link structure.\nWe use the following methods:\nTable 3: Classification accuracy (mean \u00b1 std-err %) on the WebKB data set\n\u2022 SVM on content: We apply support vector machines (SVM) to the content of documents.\nThe features are the bag-of-words, and all words are stemmed.\nThis method ignores the link structure in the data.\nA linear SVM is used.\nThe regularization parameter of the SVM is selected using cross-validation.\nThe implementation of SVM used in the experiments is libSVM [4].\n\u2022 SVM on links: We treat links as the features of each document, i.e. the i-th feature is link-to-page-i.\nWe apply SVM to the link features.\nThis method uses link information, but not the link structure.\n\u2022 SVM on link-content: We combine the features of the above two methods.\nWe use different weights for these two sets of features.\nThe weights are also selected using cross-validation.\n\u2022 Directed graph regularization: This method is described in [25] and [24].\nIt is based solely on link structure.\n\u2022 PLSI+PHITS: This method is described in [6].\nIt combines text content information and link structure for analysis.\nThe PHITS algorithm is in spirit similar to Eq. (1), with an additional nonnegative constraint.\nIt models the outgoing and incoming structures separately.\n\u2022 Link-content MF: This is our approach of matrix factorization described in Section 3.\nWe use 50 latent factors for Z.\nAfter we compute Z, we train a linear SVM using the training portion of Z as the feature vectors, then apply the SVM to the testing portion of Z to obtain the final result, because the output is multiclass.\n\u2022 Link-content sup. MF: This method is our approach of supervised matrix factorization in Section 4.\nWe use 50 latent factors for Z.\nAfter we compute Z, we train a linear SVM on the training portion of Z, then apply SVM on 
the testing portion of Z to obtain the final result, because the output is multiclass.\nWe randomly split the data into five folds and repeat the experiment five times; each time we use one fold for testing and the other four folds for training.\nDuring the training process, we use cross-validation to select all model parameters.\nWe measure the results by the classification accuracy, i.e., the percentage of correctly classified documents in the entire data set.\nThe results are shown as the average classification accuracies and their standard errors over the five repeats.\n5.3 Results\nThe average classification accuracies for the WebKB data set are shown in Table 3.\nFor this task, the accuracies of SVM on links are worse than those of SVM on content.\nBut directed graph regularization, which is also based on links alone, achieves a much higher accuracy.\nThis implies that the link structure plays an important role in the classification of this dataset, but individual links in a web page give little information.\nThe combination of link and content using SVM achieves accuracy similar to that of SVM on content alone, which confirms that individual links in a web page give little information.\nSince our approach considers the link structure as well as the content information, our two methods achieve the highest accuracies among these approaches.\nThe difference between the results of our two methods is not significant.\nHowever, in the experiments below, we show the difference between them.\nThe classification accuracies for the Cora data set are shown in Table 4.\nIn this experiment, the accuracies of SVM on the combination of links and content are higher than those of either SVM on content or SVM on links.\nThis indicates both content and links are informative for classifying the articles into subfields.\nTable 4: Classification accuracy (mean \u00b1 std-err %) on the Cora data set\nThe method of directed graph regularization does not perform as well as SVM on link-content, 
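The evaluation protocol above can be sketched in pure Python. This is a minimal sketch under assumptions: the exact fold-assignment rule and the std-err computation are details the text leaves open, and the function names are hypothetical.

```python
import random

def five_fold_split(n, seed=0):
    # Randomly assign n item indices to five folds.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[f::5] for f in range(5)]

def repeated_cv_accuracy(n, evaluate, seed=0):
    # evaluate(train_idx, test_idx) -> accuracy on the held-out fold.
    # Each fold serves as the test set once; the other four train.
    folds = five_fold_split(n, seed)
    accs = []
    for f in range(5):
        test = folds[f]
        train = [i for g in range(5) if g != f for i in folds[g]]
        accs.append(evaluate(train, test))
    mean = sum(accs) / 5.0
    var = sum((a - mean) ** 2 for a in accs) / 4.0  # sample variance
    return mean, (var / 5.0) ** 0.5  # mean accuracy and its standard error
```

Reporting the mean and standard error over the five repeats matches the "mean ± std-err %" format used in Tables 3 and 4.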
which confirms the importance of the article content in this task.\nThough our method of link-content matrix factorization performs slightly better than the other methods, our method of link-content supervised matrix factorization outperforms them significantly.\n5.4 The Number of Factors\nAs we discussed in Section 3, the computational complexity of each iteration for solving the optimization problem is quadratic in the number of factors.\nWe perform experiments to study how the number of factors affects the accuracy of prediction.\nWe use different numbers of factors for the Cornell data of the WebKB data set and the machine learning (ML) data of the Cora data set.\nThe results are shown in Figures 4 (a) and 4 (b).\nThe figures show that the accuracy increases as the number of factors increases.\nFigure 4: Accuracy vs. number of factors\nThis differs from choosing the "optimal" number of clusters in a clustering application; the number of factors instead controls how much information the latent variables represent.\nWe have applied regularization over the factors, which avoids the overfitting problem for a large number of factors.\nTo choose the number of factors, we need to consider the trade-off between the accuracy and the computation time, which is quadratic in the number of factors.\nThe difference between the method of matrix factorization and the supervised one decreases as the number of factors increases.\nThis indicates that supervised matrix factorization is most useful at lower numbers of factors.\n6.\nDISCUSSIONS\nThe loss functions LA in Eq. (2) and LC in Eq. (3) use squared loss for computational convenience.\nActually, squared loss does not precisely describe the underlying noise model, because the weights of the adjacency matrix can only take nonnegative values - in our case, zero or one only - and the components of the content matrix C can only take nonnegative integers.\nTherefore, we can apply other types of loss, such as hinge loss or smoothed hinge loss, e.g. 
LA (U, Z) = \u00b5 h (A, ZUZ\u22a4), where h (A, B) is the corresponding elementwise hinge-type loss.\nIn our paper, we mainly discuss the application of classification.\nAn entry of matrix Z represents the relationship between a web page and a factor.\nThe values of the entries are the weights of a linear model, instead of the probabilities of web pages belonging to latent topics.\nTherefore, we allow the components to take any possible real values.\nWhen we come to the clustering application, we can use this model to find Z, then apply K-means to partition the web pages into clusters.\nActually, we can use the idea of nonnegative matrix factorization for clustering [20] to directly cluster web pages.\nAs in the example with nonnegative constraints shown in Section 3, we represent each cluster by a latent topic, i.e. the dimensionality of the latent space is set to the number of clusters we want.\nThen, solving Eq. (9), we can obtain more interpretable results, which could be used for clustering.\n7.\nCONCLUSIONS\nIn this paper, we study the problem of how to combine the information of content and links for web page analysis, mainly for the classification application.\nWe propose a simple approach using factors to model the text content and link structure of web pages\/documents.\nThe directed links are generated from the linear combination of linkages between source and destination factors.\nBy sharing factors between text content and link structure, it is easy to combine both the content information and the link structure.\nOur experiments show our approach is effective for classification.\nWe also discuss an extension for the clustering application.","keyphrases":["combin content and link","classif","matrix factor","web mine problem","link structur","content inform","author inform","joint factor","linkag adjac matrix","document-term matrix","low-dimension factor space","webkb and cora benchmark","relationship","asymmetr relationship","cocit similar","text content","factor 
analysi"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","U","U","U","M","R"]} {"id":"H-69","title":"Ranking Web Objects from Multiple Communities","abstract":"Vertical search is a promising direction as it leverages domain-specific knowledge and can provide more precise information for users. In this paper, we study the Web object-ranking problem, one of the key issues in building a vertical search engine. More specifically, we focus on this problem in cases when objects lack relationships between different Web communities, and take high-quality photo search as the test bed for this investigation. We proposed two score fusion methods that can automatically integrate as many Web communities (Web forums) with rating information as possible. The proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and aims at minimizing score differences of duplicate photos in different forums. Both intermediate results and user studies show the proposed fusion methods are practical and efficient solutions to Web object ranking in cases we have described. Though the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other ranking problems, such as movie ranking and music ranking.","lvl-1":"Ranking Web Objects from Multiple Communities Le Chen \u2217 Le.Chen@idiap.ch Lei Zhang leizhang@ microsoft.com Feng Jing fengjing@ microsoft.com Ke-Feng Deng kefengdeng@hotmail.com Wei-Ying Ma wyma@microsoft.com Microsoft Research Asia 5F, Sigma Center, No. 
49, Zhichun Road, Haidian District, Beijing, 100080, P R China ABSTRACT Vertical search is a promising direction as it leverages domain-specific knowledge and can provide more precise information for users.\nIn this paper, we study the Web object-ranking problem, one of the key issues in building a vertical search engine.\nMore specifically, we focus on this problem in cases when objects lack relationships between different Web communities, and take high-quality photo search as the test bed for this investigation.\nWe proposed two score fusion methods that can automatically integrate as many Web communities (Web forums) with rating information as possible.\nThe proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and aim at minimizing the score differences of duplicate photos in different forums.\nBoth intermediate results and user studies show the proposed fusion methods are practical and efficient solutions to Web object ranking in the cases we have described.\nThough the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other ranking problems, such as movie ranking and music ranking.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; G.2.2 [Discrete Mathematics]: Graph Theory; H.3.5 [Information Storage and Retrieval]: Online Information Services - Web-based services General Terms Algorithms, Experimentation 1.\nINTRODUCTION Despite numerous refinements and optimizations, general-purpose search engines still fail to find relevant results for many queries.\nAs a new trend, vertical search has shown promise because it can leverage domain-specific knowledge and is more effective in connecting users with the information they want.\nThere are many vertical search engines, including some for paper search (e.g. Libra [21], Citeseer [7] and Google Scholar [4]), product search (e.g. 
Froogle [5]), movie search [6], image search [1, 8], video search [6], local search [2], as well as news search [3].\nWe believe the vertical search engine trend will continue to grow.\nEssentially, building vertical search engines includes data crawling, information extraction, object identification and integration, and object-level Web information retrieval (or Web object ranking) [20], among which ranking is one of the most important factors.\nThis is because it deals with the core problem of how to combine and rank objects coming from multiple communities.\nAlthough object-level ranking has been well studied in building vertical search engines, there are still some kinds of vertical domains in which objects cannot be effectively ranked.\nFor example, algorithms that evolved from PageRank [22], PopRank [21] and LinkFusion [27] were proposed to rank objects coming from multiple communities, but can only work on well-defined graphs of heterogeneous data.\nWell-defined means that like objects (e.g. authors in paper search) can be identified in multiple communities (e.g. conferences).\nThis allows heterogeneous objects to be well linked to form a graph through leveraging all the relationships (e.g. 
cited-by, authored-by and published-by) among the multiple communities.\nHowever, this assumption does not always hold in some domains.\nHigh-quality photo search, movie search and news search are exceptions.\nFor example, a photograph forum website usually includes three kinds of objects: photos, authors and reviewers.\nYet different photo forums seem to lack any relationships, as there are no cited-by relationships.\nThis makes it difficult to judge whether two cited authors are the same author, or whether two photos are indeed identical photos.\nConsequently, although each photo has a rating score in a forum, it is non-trivial to rank photos coming from different photo forums.\nSimilar problems also exist in movie search and news search.\nAlthough two movie titles can be identified as the same one by title and director in different movie discussion groups, it is non-trivial to combine rating scores from different discussion groups and rank movies effectively.\nWe call such non-trivial object relationships, in which identification is difficult, incomplete relationships.\nOther related work includes rank aggregation for the Web [13, 14], and learning algorithms for ranking, such as RankBoost [15], RankSVM [17, 19], and RankNet [12].\nWe will contrast these methods with the proposed methods after we have described the problem and our approach.\nWe will specifically focus on the Web object-ranking problem in cases that lack object relationships or have incomplete object relationships, and take high-quality photo search as the test bed for this investigation.\nIn the following, we will introduce the rationale for building high-quality photo search.\n1.1 High-Quality Photo Search In the past ten years, the Internet has grown to become an incredible resource, allowing users to easily access a huge number of images.\nHowever, compared to the more than 1 billion images indexed by commercial search engines, actual queries submitted to image search engines are 
relatively few, occupying only 8-10 percent of the total image and text queries submitted to commercial search engines [24].\nThis is partially because user requirements for image search are far lower than those for general text search.\nOn the other hand, current commercial search engines still cannot meet various user requirements well, because there is no effective and practical solution to understanding image content.\nTo better understand user needs in image search, we conducted a query log analysis based on a commercial search engine.\nThe result shows that more than 20% of image search queries are related to the nature and places and daily life categories.\nUsers apparently are interested in enjoying high-quality photos or searching for beautiful images of locations or other kinds.\nHowever, such user needs are not well supported by current image search engines because of the difficulty of the quality assessment problem.\nIdeally, the most critical part of a search engine - the ranking function - can be simplified as consisting of two key factors: relevance and quality.\nFor the relevance factor, current commercial image search engines mostly return images that are quite relevant to queries, except for some ambiguity.\nHowever, as to the quality factor, there is still no way to give an optimal rank to an image.\nThough content-based image quality assessment has been investigated over many years [23, 25, 26], it is still far from ready to provide a realistic quality measure in the immediate future.\nIt therefore looks pessimistic to build an image search engine that can fulfill the potentially large requirement of enjoying high-quality photos.\nThe proliferation of various Web communities, however, shows us that people today have created and shared a lot of high-quality photos on the Web on virtually any topic, which provides a rich source for building a better image search engine.\nIn general, photos from various photo forums are of higher quality 
than personal photos, and are also much more appealing to public users than personal photos.\nIn addition, photos uploaded to photo forums generally require rich metadata about title, camera setting, category and description to be provided by photographers.\nThese metadata are actually the most precise descriptions for photos and undoubtedly can be indexed to help search engines find relevant results.\nMore importantly, there are volunteer users in Web communities actively providing valuable ratings for these photos.\nThe rating information is generally of great value in solving the photo quality ranking problem.\nMotivated by such observations, we have been attempting to build a vertical photo search engine by extracting rich metadata and integrating information from various photo Web forums.\nIn this paper, we specifically focus on how to rank photos from multiple Web forums.\nIntuitively, the rating scores from different photo forums can be empirically normalized based on the number of photos and the number of users in each forum.\nHowever, such a straightforward approach usually requires large manual effort in both tedious parameter tuning and subjective results evaluation, which makes it impractical when there are tens or hundreds of photo forums to combine.\nTo address this problem, we seek to build relationships\/links between different photo forums.\nThat is, we first adopt an efficient algorithm to find duplicate photos, which can be considered as hidden links connecting multiple forums.\nWe then formulate the ranking challenge as an optimization problem, which eventually results in an optimal ranking function.\n1.2 Main Contributions and Organization.\nThe main contributions of this paper are: 1.\nWe have proposed and built a vertical image search engine by leveraging rich metadata from various photo forum Web sites to meet user requirements of searching for and enjoying high-quality photos, which is impossible in traditional image search engines.\n2.\nWe 
have proposed two kinds of Web object-ranking algorithms for photos with incomplete relationships, which can automatically and efficiently integrate as many Web communities with rating information as possible, and achieve results qualitatively equal to those of the manually tuned fusion scheme.\nThe rest of this paper is organized as follows.\nIn Section 2, we present in detail the proposed solutions to the ranking problem, including how to find hidden links between different forums, normalize rating scores, obtain the optimal ranking function, and contrast our methods with some other related research.\nIn Section 3, we describe the experimental setting and the experiments and user studies conducted to evaluate our algorithm.\nOur conclusion and a discussion of future work are in Section 4.\nIt is worth noting that although we treat vertical photo search as the test bed in this paper, the proposed ranking algorithm can also be applied to rank other content, including video clips, poems, short stories, drawings, sculptures, music, and so on.\n2.\nALGORITHM 2.1 Overview The difficulty of integrating multiple Web forums lies in their different rating systems, where there are generally two kinds of freedom.\nThe first kind of freedom is the rating interval or rating scale, including the minimal and maximal ratings for each Web object.\nFor example, some forums use a 5-point rating scale whereas other forums use 3-point or 10-point rating scales.\nIt seems easy to fix this freedom, but detailed analysis of the data and experiments shows that it is a non-trivial problem.\nThe second kind of freedom is the varying rating criteria found in different Web forums.\nThat is, the same score does not mean the same quality in different forums.\nIntuitively, if we can detect the same photographers or the same photographs, we can build relationships between any two photo forums and therefore can standardize the rating criterion by score normalization and transformation.\nFortunately, we 
find that quite a number of duplicate photographs exist in various Web photo forums.\nThis fact is reasonable when considering that photographers sometimes submit a photo to more than one forum to obtain critiques or in hopes of widespread publicity.\nIn this work, we adopt an efficient duplicate photo detection algorithm [10] to find these photos.\nThe proposed methods below are based on the following considerations.\nFaced with the need to overcome a ranking problem, a standardized rating criterion rather than a reasonable rating criterion is needed.\nTherefore, we can take a large-scale forum as the reference forum, and align other forums by taking into account duplicate Web objects (duplicate photos in this work).\nIdeally, the scores of duplicate photos should be equal even though they are in different forums.\nYet we can deem that scores in different forums - except for the reference forum - can vary in a parametric space.\nThis can be determined by minimizing the objective function defined by the sum of squares of the score differences.\nBy formulating the ranking problem as an optimization problem that attempts to make the scores of duplicate photos in non-reference forums as close as possible to those in the reference forum, we can effectively solve the ranking problem.\nFor convenience, the following notations are employed.\nS^k_i and \u00afS^k_i denote the total score and mean score of the ith Web object (photo) on the kth Web site, respectively.\nThe total score refers to the sum of the various rating scores (e.g., novelty rating and aesthetic rating), and the mean score refers to the mean of the various rating scores.\nSuppose there are a total of K Web sites.\nWe further use {S^{kl}_i | i = 1, ..., Ikl; k, l = 1, ..., K; k \u2260 l} to denote the set of scores for Web objects (photos) in the kth Web forum that are duplicates of those in the lth Web forum, where Ikl is the total number of duplicate Web objects between these two Web sites.\nIn general, score fusion can be seen as 
the procedure of finding K transforms \u03c8k(S^k_i) = \u02dcS^k_i, k = 1, ..., K, such that \u02dcS^k_i can be used to rank Web objects from different Web sites.\nFigure 1: Web community integration.\nEach Web community forms a subgraph, and all communities are linked together by some hidden links (dashed lines).\nThe objective function described in the above paragraph can then be formulated as\nmin_{\u03c8k | k=2,...,K} \u03a3_{k=2}^{K} \u03a3_{i=1}^{I_{k1}} \u00afw^k_i [ S^{1k}_i \u2212 \u03c8k(S^{k1}_i) ]^2 (1)\nwhere we use k = 1 as the reference forum and thus \u03c81(S^1_i) = S^1_i.\n\u00afw^k_i (\u2265 0) is the weight coefficient that can be set heuristically according to the numbers of voters (reviewers or commenters) in both the reference forum and the non-reference forum.\nThe more reviewers, the more popular the photo is and the larger the corresponding weight \u00afw^k_i should be.\nIn this work, we do not investigate the problem of how to choose \u00afw^k_i and simply set them to one.\nBut we believe the proper use of \u00afw^k_i , which leverages more information, can significantly improve the results.\nFigure 1 illustrates the aforementioned idea.\nWeb Community 1 is the reference community.\nThe dashed lines are links indicating that the two linked Web objects are actually the same.\nThe proposed algorithm will try to find the best \u03c8k (k = 2, ..., K), which has certain parametric forms according to certain models.\nSo as to minimize the cost function defined in Eq. (1), the summation is taken over all the red dashed lines.\nWe will first discuss the score normalization methods in Section 2.2, which serve as the basis for the following work.\nBefore we describe the proposed ranking algorithms, we first introduce a manually tuned method in Section 2.3, which is laborious and even impractical when the number of communities becomes large.\nIn Section 2.4, we will briefly explain how to precisely find duplicate photos between Web forums.\nThen we will describe the two proposed methods: Linear fusion and 
Non-linear fusion, and a performance measure for result evaluation in Section 2.5.\nFinally, in Section 2.6 we will discuss the relationship of the proposed methods with some other related work.\n2.2 Score Normalization Since different Web (photo) forums on the Web usually have different rating criteria, it is necessary to normalize them before applying different kinds of fusion methods.\nIn addition, as there are many kinds of ratings, such as ratings for novelty, ratings for aesthetics, etc., it is reasonable to choose a common one - total score or average score - that can always be extracted in any Web forum or calculated from the corresponding ratings.\nThis allows the normalization method on the total score or average score to be viewed as an impartial rating method between different Web forums.\nIt is straightforward to normalize average scores by linearly transforming them to a fixed interval.\nWe call this kind of score the Scaled Mean Score.\nThe difficulty, however, of using this normalization method is that, if there are only a few users rating an object, say a photo in a photo forum, the average score for the object is likely to be spammed or skewed.\nThe total score can avoid such drawbacks, as it contains more information, such as a Web object's quality and popularity.\nThe problem is thus how to normalize total scores in different Web forums.\nThe simplest way may be normalization by the maximal and minimal scores.\nThe drawback of this normalization method is that it is not robust, or in other words, it is sensitive to outliers.\nTo make the normalization insensitive to unusual data, we propose the Mode-90% Percentile normalization method.\nHere, the mode score represents the total score that has been assigned to more photos than any other total score.\nThe high-percentile score (e.g., 90%) represents the total score for which the high percentile of images have a lower total score.\nThis normalization method utilizes the mode and 90% percentile as two reference 
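The Mode-90% Percentile idea can be sketched as follows. This is a minimal sketch under assumptions: the text does not spell out at this point how the two reference points align two rating systems, so a linear map through the (mode, 90% percentile) pair is assumed, and both function names are hypothetical.

```python
from collections import Counter

def mode_and_p90(total_scores):
    # The mode: the total score assigned to more photos than any other.
    mode = Counter(total_scores).most_common(1)[0][0]
    # The 90% percentile: the score below-or-at which 90% of photos fall.
    s = sorted(total_scores)
    p90 = s[min(len(s) - 1, int(0.9 * len(s)))]
    return float(mode), float(p90)

def mode_p90_normalize(scores, ref_scores):
    # Linearly map `scores` so that its (mode, 90% percentile) pair lands
    # on the reference forum's pair; assumes the two points are distinct.
    m, p = mode_and_p90(scores)
    rm, rp = mode_and_p90(ref_scores)
    a = (rp - rm) / (p - m)
    return [rm + a * (x - m) for x in scores]
```

Unlike min-max normalization, the mode and the 90% percentile are both insensitive to a handful of outlier scores, which is the point of the method.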
points to align two rating systems, which makes the distributions of total scores in different forums more consistent.\nThe underlying assumption, for example in different photo forums, is that even though the qualities of top photos in different forums may vary greatly and depend on the forum quality, the distribution of photos of middle-level quality (from the mode to the 90% percentile) should be almost the same across forums, up to the freedom which reflects the rating criterion (strictness) of each Web forum.\nPhotos of this middle level in a Web forum usually occupy more than 70% of the total photos in that forum.\nWe will give a more detailed analysis of the scores in Section 3.2.\n2.3 Manual Fusion The Web movie forum IMDB [16] proposed to use a Bayesian-ranking function to normalize rating scores within one community.\nMotivated by this ranking function, we propose this manual fusion method: for the kth Web site, we use the following formula\n\u02dcS^k_i = \u03b1k \u00b7 ( nk \u00b7 \u00afS^k_i \/ (nk + n\u2217k) + n\u2217k \u00b7 S\u2217k \/ (nk + n\u2217k) ) (2)\nto rank photos, where nk is the number of votes and n\u2217k, S\u2217k and \u03b1k are three parameters.\nThis ranking function first takes a balance between the original mean score \u00afS^k_i and a reference score S\u2217k to get a weighted mean score, which may be more reliable than \u00afS^k_i.\nThen the weighted mean score is scaled by \u03b1k to get the final score \u02dcS^k_i.\nFor n Web communities, there are then about 3n parameters in {(\u03b1k, n\u2217k, S\u2217k) | k = 1, ..., n} to tune.\nThough this method can achieve pretty good results after careful and thorough manual tuning of these parameters, when n becomes increasingly large, say when there are tens or hundreds of Web communities crawled and indexed, this method becomes more and more laborious and eventually impractical.\nIt is therefore desirable to find an effective fusion method whose parameters can be automatically determined.\n2.4 Duplicate 
Photo Detection We use Dedup [10], an efficient and effective duplicate image detection algorithm, to find duplicate photos between any two photo forums.\nThis algorithm uses a hash function to map a high-dimensional feature to a 32-bit hash code (see below for how to construct the hash code).\nIts computational complexity to find all the duplicate images among n images is about O(n log n).\nThe low-level visual feature for each photo is extracted on k \u00d7 k regular grids.\nBased on all features extracted from the image database, a PCA model is built.\nThe visual features are then transformed to a relatively low-dimensional, zero-mean PCA space, or 29 dimensions in our system.\nThen the hash code for each photo is built as follows: each dimension is transformed to 1 if the value in this dimension is greater than 0, and 0 otherwise.\nPhotos in the same bucket are deemed potential duplicates and are further filtered by a threshold in terms of Euclidean similarity in the visual feature space.\nFigure 2 illustrates the hashing procedure, where visual features - mean gray values - are extracted on both 6 \u00d7 6 and 7 \u00d7 7 grids.\nThe 85-dimensional features are transformed to a 32-dimensional vector, and the hash code is generated according to the signs.\nFigure 2: Hashing procedure for duplicate photo detection 2.5 Score Fusion In this section, we will present two solutions for score fusion based on different parametric form assumptions of \u03c8k in Eq. (1).\n2.5.1 Linear Fusion by Duplicate Photos Intuitively, the most straightforward way to factor out the uncertainties caused by the different criteria is to scale, relative to a given center, the total scores of each non-reference Web photo forum with respect to the reference forum.\nMore strictly, we assume \u03c8k has the following form\n\u03c8k(S^k_i) = \u03b1k S^k_i + tk, k = 2, ..., K (3)\n\u03c81(S^1_i) = S^1_i (4)\nwhich means that the scores of the k(\u2260 1)th forum should be scaled by \u03b1k relative to the 
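The sign-based hashing step described for Dedup can be sketched as follows. This is a minimal sketch under assumptions: the PCA projection matrix is taken as given, and Dedup's actual implementation details beyond those described in the text are not reproduced.

```python
def pca_project(feature, components, mean):
    # Transform a raw grid feature into the zero-mean PCA space.
    centered = [f - m for f, m in zip(feature, mean)]
    return [sum(c * x for c, x in zip(comp, centered)) for comp in components]

def hash_code(feature, components, mean):
    # One bit per retained PCA dimension: 1 if the projected value is
    # greater than 0, and 0 otherwise (a 32-bit code for 32 dimensions).
    code = 0
    for v in pca_project(feature, components, mean):
        code = (code << 1) | (1 if v > 0 else 0)
    return code
```

Photos whose codes collide fall into the same bucket, and candidate pairs are then confirmed or rejected by a Euclidean-distance threshold in the feature space.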
center t_k / (1 - \alpha_k), as shown in Figure 3. Substituting this \psi_k into Eq. (1), we obtain the objective function

\min_{\{\alpha_k, t_k \mid k=2,\ldots,K\}} \sum_{k=2}^{K} \sum_{i=1}^{I_{k1}} \bar{w}_i^k \left[ S_i^{1k} - \alpha_k S_i^{k1} - t_k \right]^2 .   (5)

Solving the system

\partial f / \partial \alpha_k = 0, \quad \partial f / \partial t_k = 0, \quad k = 2, \ldots, K,

where f is the objective function defined in Eq. (5), yields the closed-form solution

(\alpha_k, t_k)^T = A_k^{-1} L_k   (6)

where

A_k = \begin{pmatrix} \sum_i \bar{w}_i (S_i^{k1})^2 & \sum_i \bar{w}_i S_i^{k1} \\ \sum_i \bar{w}_i S_i^{k1} & \sum_i \bar{w}_i \end{pmatrix}   (7)

L_k = \begin{pmatrix} \sum_i \bar{w}_i S_i^{1k} S_i^{k1} \\ \sum_i \bar{w}_i S_i^{1k} \end{pmatrix}   (8)

for k = 2, ..., K. This linear fusion method enjoys simplicity and shows excellent performance in the experiments below.

Figure 3: Linear fusion method

2.5.2 Nonlinear Fusion by Duplicate Photos

Sometimes we want a method that can adjust scores on an interval while leaving the two endpoints unchanged. As illustrated in Figure 4, such a method can tune scores between [C_0, C_1] while keeping the scores C_0 and C_1 fixed. This kind of fusion is much finer-grained than the linear one; it contains many more parameters to tune and might be expected to further improve the results. Here we propose a nonlinear fusion solution that satisfies these constraints. First, we introduce the transform

\eta_{c_0,c_1,\alpha}(x) = \begin{cases} \left( \frac{x - c_0}{c_1 - c_0} \right)^{\alpha} (c_1 - c_0) + c_0, & \text{if } x \in (c_0, c_1] \\ x, & \text{otherwise} \end{cases}

where \alpha > 0. For x \in [c_0, c_1] this transform satisfies \eta_{c_0,c_1,\alpha}(x) \in [c_0, c_1], with \eta_{c_0,c_1,\alpha}(c_0) = c_0 and \eta_{c_0,c_1,\alpha}(c_1) = c_1. We can then use this nonlinear transform to adjust the scores in a given interval, say (M, T]:

\psi_k(S_{ki}) = \eta_{M,T,\alpha}(S_{ki}) .   (9)

Figure 4: Nonlinear fusion method. We intend to finely adjust the shape of the curve in each segment.

Although the optimization problem

\min_{\{\alpha_k \mid k=2,\ldots,K\}} \sum_{k=2}^{K} \sum_{i=1}^{I_{k1}} \bar{w}_i^k \left[ S_i^{1k} - \eta_{M,T,\alpha_k}(S_i^{k1}) \right]^2

has no closed-form solution, a numerical solution is not hard to obtain. Under the same assumptions as in Section 2.2, we use this method to adjust scores at the middle level (from the mode to the 90th percentile). This more complicated nonlinear fusion method might be expected to achieve better results than the linear one; however, the difficulty of evaluating ranking results prevented us from tuning its parameters extensively, and the experiments in Section 3.5 do not reveal any advantage over the simple linear model.

2.5.3 Performance Measure of the Fusion Results

Since our objective is to make the scores of the same Web objects (e.g., duplicate photos) in a non-reference forum and the reference forum as close as possible, it is natural to investigate how close they become, and how the scores of the same Web objects in two non-reference forums change before and after score fusion. Taking Figure 1 as an example, the proposed algorithms minimize the score differences of the same Web objects between the reference forum (Web Community 1) and each non-reference forum, which corresponds to minimizing the objective function over the red dashed (hidden) links. After the optimization, what happens to the score differences of the same Web objects in two non-reference forums? In other words, do the scores of two objects linked by the green dashed (hidden) links become more consistent? We therefore define the following performance measure, the \delta measure, to quantify how the scores of the same Web objects change across forums:

\delta_{kl} = \mathrm{Sim}(S_*^{lk}, S_*^{kl}) - \mathrm{Sim}(S^{lk}, S^{kl})   (10)

where S^{kl} = (S_1^{kl}, \ldots, S_{I_{kl}}^{kl})^T, S_*^{kl} = (\tilde{S}_1^{kl}, \ldots, \tilde{S}_{I_{kl}}^{kl})^T and \mathrm{Sim}(a, b) = \frac{a \cdot b}{\|a\| \, \|b\|}. \delta_{kl} > 0 means that after score fusion the scores of the same Web objects in the kth and lth Web forums become more consistent, which is what we expect.
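The pieces of Section 2.5 are simple enough to sketch directly. The sketch below (our own function names; the \delta measure is oriented so that a positive value means fusion improved consistency) solves the 2 × 2 weighted least-squares system of Eqs. (6)-(8), implements the endpoint-preserving transform used in Eq. (9), and computes the cosine-based \delta of Eq. (10):

```python
from math import sqrt

def linear_fusion_params(s_ref, s_other, w):
    """Closed-form weighted least squares for one forum k, cf. Eqs. (5)-(8):
    find (alpha, t) minimizing sum_i w_i * (s_ref_i - alpha*s_other_i - t)^2."""
    a11 = sum(wi * x * x for wi, x in zip(w, s_other))   # sum w_i (S^k1_i)^2
    a12 = sum(wi * x for wi, x in zip(w, s_other))       # sum w_i S^k1_i
    a22 = sum(w)                                         # sum w_i
    l1 = sum(wi * y * x for wi, x, y in zip(w, s_other, s_ref))
    l2 = sum(wi * y for wi, y in zip(w, s_ref))
    det = a11 * a22 - a12 * a12
    alpha = (a22 * l1 - a12 * l2) / det                  # (alpha, t) = A^-1 L
    t = (a11 * l2 - a12 * l1) / det
    return alpha, t

def eta(x, c0, c1, alpha):
    """Nonlinear transform of Section 2.5.2: reshapes scores inside (c0, c1]
    while keeping both endpoints fixed."""
    if c0 < x <= c1:
        return ((x - c0) / (c1 - c0)) ** alpha * (c1 - c0) + c0
    return x

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def delta_measure(s_k, s_l, s_k_fused, s_l_fused):
    """delta_kl of Eq. (10): similarity of fused scores minus similarity of
    the original scores, on duplicate pairs shared by forums k and l."""
    return cosine(s_k_fused, s_l_fused) - cosine(s_k, s_l)
```

On duplicates obeying an exact affine relation, `linear_fusion_params` recovers the scale and offset exactly; with noisy duplicates it returns the weighted best fit.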
On the contrary, if \delta_{kl} < 0, those scores become more inconsistent. Although we cannot rely on this measure to evaluate the final fusion results, since ranking photos by popularity and quality is so subjective a process that each person may produce a different ranking, it helps us understand the intermediate ranking results and provides insight into the final performance of the different ranking methods.

2.6 Contrasts with Other Related Work

We have already discussed in Section 1 how the proposed methods differ from traditional methods such as the PageRank [22], PopRank [21], and LinkFusion [27] algorithms. Here we discuss some other related work. The current problem can also be viewed as a rank aggregation problem [13, 14], since we deal with combining several ranked lists. However, there are fundamental differences. First, unlike Web pages, which can be easily and accurately detected as identical, detecting identical photos in different Web forums is a non-trivial task that can only be carried out by delicate algorithms, and only with limited precision and recall. Second, the numbers of duplicate photos between Web forums are small relative to the whole photo sets (see Table 1); in other words, the top-K ranked lists of different Web forums are almost disjoint for a given query. Under this condition, both the algorithms proposed in [13] and their measures, the Kendall tau distance and the Spearman footrule distance, degenerate to trivial cases. Another category of rank fusion (aggregation) methods is based on machine learning algorithms, such as RankSVM [17, 19], RankBoost [15], and RankNet [12]. All of these methods require labelled datasets to train a model. In the current setting it is difficult, or even impossible, to obtain such labels, since judgments of the professionalism or popularity of photos are too vague and subjective. Instead, the problem here is how to combine several
ordered sub-lists into one totally ordered list.

3. EXPERIMENTS

In this section we apply the proposed methods to high-quality photo search. We first briefly introduce the newly proposed vertical image search engine, EnjoyPhoto, in Section 3.1, and then focus on how to rank photos from different Web forums: we normalize the scores (ratings) of photos from the different Web forums in Section 3.2, find duplicate photos in Section 3.3, discuss some intermediate results using the \delta measure in Section 3.4, and finally describe a carefully conducted set of user studies that justifies the proposed methods in Section 3.5.

3.1 EnjoyPhoto: a High-Quality Photo Search Engine

To meet users' requirement of enjoying high-quality photos, we propose and build a high-quality photo search engine, EnjoyPhoto, which accounts for three key issues: (1) how to crawl and index photos, (2) how to determine the quality of each photo, and (3) how to display the search results so as to make the search process enjoyable. For a given text query, the system ranks photos by a combination of the relevance of each photo to the query (Issue 1) and the quality of the photo (Issue 2), and finally displays them in an enjoyable manner (Issue 3).

As for Issue 3, we deliberately designed the interface of the system to smooth the users' process of enjoying high-quality photos. Techniques such as fisheye views and slide shows are utilized in the current system; Figure 5 shows the interface. We do not discuss this issue further, as it is not an emphasis of this paper.

Figure 5: EnjoyPhoto, an enjoyable high-quality photo search engine, where 26,477 records are returned for the query "fall" in about 0.421 seconds

As for Issue 1, we extracted from a commercial search engine a subset of photos coming from various photo forums all over the world, and explicitly parsed the Web pages containing these photos. The number of
photos in the data collection is about 2.5 million. After parsing, each photo was associated with its title, category, description, camera settings, EXIF data¹ (when available for digital images), location (when available in some photo forums), and various kinds of ratings. All these metadata are generally precise descriptions or annotations of the image content, and are indexed by general text-based search technologies [9, 18, 11]. In the current system, the ranking function was specifically tuned to emphasize title, categorization, and rating information.

Issue 2 is essentially dealt with in the following sections, which derive the quality of photos by analyzing the ratings provided by various Web photo forums. Here we chose six photo forums to study the ranking problem, denoted Web-A, Web-B, Web-C, Web-D, Web-E and Web-F.

¹ Digital cameras save JPEG (.jpg) files with EXIF (Exchangeable Image File) data; camera settings and scene information are recorded by the camera into the image file. www.digicamhelp.com/what-is-exif/

3.2 Photo Score Normalization

Different score normalization methods are analyzed in detail in this section. In this analysis, the zero scores, which occupy about 30% of the total number of photos in some Web forums, are not taken into account; how to utilize these photos is left for future exploration. Figure 6 shows the distributions of the mean scores, transformed to a fixed interval [0, 10].

Figure 6: Distributions of mean scores normalized to [0, 10]. Panels (a)-(f): Web-A to Web-F; axes: normalized score vs. total number. (Axis data omitted.)
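The mode/90th-percentile alignment assumed in Section 2.2 amounts to a piecewise-linear map. The sketch below is our own illustration, not the paper's code; the anchor values 5 and 8 follow Section 3.2, while the overall range [0, 10] is an assumption:

```python
def mode_90_normalize(score, mode, p90, low=0.0, high=10.0):
    """Piecewise-linear normalization: map the distribution mode to 5 and
    the 90th percentile to 8, interpolating linearly on each of the three
    segments [low, mode], (mode, p90], (p90, high].
    Assumes low < mode < p90 < high."""
    if score <= mode:
        return low + (score - low) * (5.0 - low) / (mode - low)
    if score <= p90:
        return 5.0 + (score - mode) * (8.0 - 5.0) / (p90 - mode)
    return 8.0 + (score - p90) * (high - 8.0) / (high - p90)
```

Because each forum supplies its own (mode, p90) pair, the same photo score lands on a comparable scale regardless of how strict the forum's raters are.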
The distributions of the average scores of these Web forums look quite different. The distributions in Figures 6(a), 6(b), and 6(e) look like Gaussian distributions, while those in Figures 6(d) and 6(f) are dominated by the top score. The reason for these eccentric distributions of Web-D and Web-F lies in their coarse rating systems: Web-D and Web-F use 2- or 3-point rating scales, whereas the other Web forums use 7- or 14-point rating scales. It would therefore be problematic to use these averaged scores directly. Furthermore, the average score is very likely to be spammed if only a few users have rated a photo.

Figure 7 shows the total-score normalization by maximal and minimal scores, which is one of our baseline systems. All total scores of a given Web forum are normalized to [0, 100] according to the maximal and minimal scores of the corresponding forum. We notice that the total-score distribution of Web-A in Figure 7(a) has two larger tails than all the others; to show the shapes of the distributions more clearly, we only show the range [0, 25] in Figures 7(b)-7(f). Figure 8 shows the Mode-90% Percentile normalization, where the modes of the six distributions are normalized to 5 and the 90th percentiles to 8. This normalization makes the distributions of total scores in the different forums more consistent. The two proposed algorithms are both based on these normalization methods.

3.3 Duplicate Photo Detection

Aiming at computational efficiency, the Dedup algorithm may sacrifice some recall but achieves a high precision; we likewise focus on finding precise hidden links rather than all hidden links. Figure 9 shows some duplicate-detection examples. The results, shown in Table 1, verify that large numbers of duplicate photos exist between any two Web forums, even under the strict condition for Dedup in which we chose the first 29 bits as the hash code.
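The sign-based hashing described in Section 2.4 is easy to sketch. This is our own minimal illustration under stated assumptions: `pca_components` and `pca_mean` stand in for a PCA model trained on the photo collection, and the tiny 2-dimensional example below is purely hypothetical:

```python
def sign_hash(feature, pca_components, pca_mean, n_bits=29):
    """Dedup-style hash sketch: center a grid feature with the PCA mean,
    project it onto the leading PCA components, and take one bit per
    dimension (1 if the projection is positive, else 0).  Photos sharing a
    hash bucket are candidate duplicates, to be re-checked by Euclidean
    similarity in feature space."""
    centered = [x - m for x, m in zip(feature, pca_mean)]
    code = 0
    for row in pca_components[:n_bits]:
        proj = sum(r * c for r, c in zip(row, centered))
        code = (code << 1) | (1 if proj > 0 else 0)
    return code

# Hypothetical 2-D example: identity "PCA" with mean (1, 1).
components = [[1.0, 0.0], [0.0, 1.0]]
mean = [1.0, 1.0]
bucket = sign_hash([2.0, 0.0], components, mean, n_bits=2)
```

Identical photos map to identical codes, so bucketing by code turns all-pairs comparison into the roughly O(n log n) grouping step the paper describes.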
Since there are only a few parameters to estimate in the proposed fusion methods, the numbers of duplicate photos shown in Table 1 are sufficient to determine these parameters. The last column of the table lists the total number of photos in the corresponding Web forum.

Figure 7: Maxmin normalization. Panels (a)-(f): Web-A to Web-F; axes: normalized score vs. total number. (Axis data omitted.)

Figure 8: Mode-90% Percentile normalization. Panels (a)-(f): Web-A to Web-F; axes: normalized score vs. total number. (Axis data omitted.)

Table 1: Number of duplicate photos between each pair of Web forums

          A        B        C        D        E        F     Scale
A         0      316    1,386      178      302        0      130k
B       316        0   14,708      909    8,023      348      675k
C     1,386   14,708        0    1,508   19,271    1,083    1,003k
D       178      909    1,508        0    1,084       21      155k
E       302    8,023   19,271    1,084        0       98      448k
F         0      348    1,083       21       98        0      122k

Figure 9: Some results of duplicate photo detection

3.4 \delta Measure

The parameters of the proposed linear and nonlinear algorithms are calculated from the duplicate data shown in Table 1, where Web-C is chosen as the reference Web forum since it shares the most duplicate photos with the other forums. Tables 2 and 3 show the \delta measure for the linear and nonlinear models. As \delta_{kl} is symmetric and \delta_{kk} = 0, we only show the upper triangular part. The NaN values in both tables arise where no duplicate photos were detected by the Dedup algorithm, as reported in Table 1.

Table 2: \delta measure for the linear model

         Web-B    Web-C    Web-D    Web-E    Web-F
Web-A   0.0659   0.0911   0.0956   0.0928     NaN
Web-B      -     0.0672   0.0578   0.0791   0.4618
Web-C      -        -     0.0105   0.0070   0.2220
Web-D      -        -        -     0.0566   0.0232
Web-E      -        -        -        -     0.6525

The linear model theoretically guarantees that the \delta measures involving the reference community are no less than 0, and this is indeed the case (see the entries of Table 2 that involve the reference forum Web-C). The model cannot guarantee that the \delta measures between non-reference communities are also non-negative, as the normalization steps are based only on duplicate photos between the reference community and each non-reference community. The results show, however, that all values of the \delta measure are greater than 0 (see the remaining entries of Table 2), which suggests that this model is likely to give near-optimal results.

On the contrary, the nonlinear model does not guarantee that the \delta measures involving the reference community are non-negative, as not all duplicate photos between two Web forums can be used when optimizing this model: duplicate photos that lie in different intervals are not used, and it is precisely these photos that can make the \delta measure negative. As a result, there are both negative and positive entries in Table 3, but positive ones outnumber negative ones (9:5). This indicates the model may be better than the normalization-only method (see the next subsection), whose \delta measure is all zero, and worse than the linear model.

Table 3: \delta measure for the nonlinear model

         Web-B    Web-C    Web-D    Web-E    Web-F
Web-A   0.0559   0.0054  -0.0185  -0.0054     NaN
Web-B      -    -0.0162  -0.0345  -0.0301   0.0466
Web-C      -        -     0.0136   0.0071   0.1264
Web-D      -        -        -     0.0032   0.0143
Web-E      -        -        -        -     0.214

3.5 User Study

Because it is hard to find an objective criterion for evaluating which ranking function is better, we employed user studies for subjective evaluation. Ten subjects, recruited from nearby universities, participated in the user study. As both text and image search engines are familiar to university students, there was no prerequisite criterion for choosing students. We conducted the user studies using Internet Explorer 6.0 on Windows XP, with 17-inch LCD monitors set to 1,280 × 1,024 pixels in 32-bit color. Data were recorded with server logs and paper-based surveys after each task.

Figure 10: User study interface

We devised an interface specifically for the user study, as shown in Figure 10. For each pair of fusion methods, participants were encouraged to try any query they wished; for those without specific ideas, two combo boxes (a category list and a query list) were provided on the bottom panel, offering the top 1,000 image search queries from a commercial search engine. After a participant submitted a query, the system randomly selected the left or right frame to display each of the two ranking results. The participant was then required to judge which ranking result was better, or whether the two were of equal quality, and to submit the judgment by choosing the corresponding radio button and clicking the Submit button. For example, in Figure 10 the query "sunset" is submitted to the system; 79,092 photos are returned, ranked by the Maxmin fusion method in the left frame and by the linear fusion method in the right frame. A participant then compares the two ranking results (without knowing which methods produced them) and submits his or her feedback by choosing answers under "Your option".

Table 4: Results of user study

            Norm.Only   Manually    Linear
Linear      29:13:10    14:22:15
Nonlinear   29:15:9     12:27:12    6:4:45

Table 4 shows the experimental results,
where Linear denotes the linear fusion method, Nonlinear denotes the nonlinear fusion method, Norm.Only denotes the Maxmin normalization method, and Manually denotes the manually tuned method. The three numbers in each entry, say 29:13:10, mean that 29 judgments preferred the linear fusion results, 13 judgments considered the two methods equivalent, and 10 judgments preferred the normalization-only method. We conducted an ANOVA analysis and obtained the following conclusions:

1. Both the linear and nonlinear methods are significantly better than the Norm.Only method, with respective P-values 0.00165 (< 0.05) and 0.00073 (<< 0.05). This result is consistent with the \delta-measure evaluation. The Norm.Only method assumes that the top 10% of photos in different forums are of the same quality; this assumption does not hold in general, as a top-10% photo in a top-tier photo forum is generally of higher quality than a top-10% photo in a second-tier forum, much as the top 10% of students at a top-tier university and those at a second-tier university are generally of different quality. Both the linear and nonlinear fusion methods acknowledge such differences and aim at quantifying them; therefore they perform better than the Norm.Only method.

2. The linear fusion method is significantly better than the nonlinear one, with P-value 1.195 × 10^{-10}. This result is rather surprising, as the more complicated method was expected to tune the ranking more finely than the linear one. The main reason may be the difficulty of finding the best intervals on which the nonlinear tuning should be carried out; we simply chose the middle part of the Mode-90% Percentile normalization. The time-consuming and subjective evaluation method, user studies, prevented us from tuning these parameters extensively.

3. The proposed linear and nonlinear methods perform almost the same
as, or slightly better than, the manually tuned method. Given that the linear and nonlinear fusion methods are fully automatic, they are practical and efficient solutions when more communities (e.g., dozens of communities) need to be integrated.

4. CONCLUSIONS AND FUTURE WORK

In this paper we studied the Web object-ranking problem in cases of lacking object relationships, where traditional ranking algorithms are no longer valid, and took high-quality photo search as the test bed for this investigation. We built a vertical high-quality photo search engine and proposed score-fusion methods that can automatically integrate as many data sources (Web forums) as possible. The proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and minimize the score differences of duplicate photos in different forums. Both the intermediate results and the user studies show that the proposed fusion methods are a practical and efficient solution to Web object ranking in the aforementioned settings. Though the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other kinds of Web objects, including video clips, poems, short stories, music, drawings, sculptures, and so on.

The current system is far from perfect. To make it more effective, more delicate analysis of the vertical domain (e.g., Web photo forums) is needed. The following directions, for example, may improve the search results and will be our future work: 1. more subtle analysis, and then utilization, of different kinds of ratings (e.g., novelty ratings, aesthetic ratings); 2. differentiating various communities, which may have different interests and preferences or even distinct cultural understandings; 3. incorporating more useful information, including photographers' and reviewers' information, to model the photos in a heterogeneous data space instead of the current
homogeneous one. We will further utilize collaborative filtering to recommend relevant high-quality photos to browsers. One open problem is whether we can find an objective and efficient criterion for evaluating the ranking results, instead of employing subjective and inefficient user studies, which prevented us from trying more ranking algorithms and from tuning the parameters of any one algorithm extensively.

5. ACKNOWLEDGMENTS

We thank Bin Wang and Zhiwei Li for providing the Dedup code to detect duplicate photos; Zhen Li for helping us design the interface of EnjoyPhoto; and Mingjing Li, Longbin Chen, Changhu Wang, Yuanhao Chen, Li Zhuang, and others for useful discussions. Special thanks go to Dwight Daniels for helping us revise the language of this paper.

6. REFERENCES
[1] Google image search. http://images.google.com.
[2] Google local search. http://local.google.com.
[3] Google news search. http://news.google.com.
[4] Google paper search. http://scholar.google.com.
[5] Google product search. http://froogle.google.com.
[6] Google video search. http://video.google.com.
[7] Scientific literature digital library. http://citeseer.ist.psu.edu.
[8] Yahoo image search. http://images.yahoo.com.
[9] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. ACM Press / Addison-Wesley, 1999.
[10] B. Wang, Z. Li, M. Li, and W.-Y. Ma. Large-scale duplicate detection for Web image search. In Proceedings of the International Conference on Multimedia and Expo, page 353, 2006.
[11] S. Brin and L. Page. The anatomy of a large-scale hypertextual Web search engine. Computer Networks, 30:107-117, 1998.
[12] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning, pages 89-96, 2005.
[13] C. Dwork, R. Kumar, M. Naor, and D.
Sivakumar. Rank aggregation methods for the Web. In Proceedings of the 10th International Conference on World Wide Web, pages 613-622, Hong Kong, 2001.
[14] R. Fagin, R. Kumar, and D. Sivakumar. Comparing top k lists. SIAM Journal on Discrete Mathematics, 17(1):134-160, 2003.
[15] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933-969, 2004.
[16] IMDB. Formula for calculating the top rated 250 titles in IMDB. http://www.imdb.com/chart/top.
[17] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 133-142, 2002.
[18] J. M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632, 1999.
[19] R. Nallapati. Discriminative models for information retrieval. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 64-71, 2004.
[20] Z. Nie, Y. Ma, J.-R. Wen, and W.-Y. Ma. Object-level Web information retrieval. Technical Report MSR-TR-2005-11, Microsoft Research, 2005.
[21] Z. Nie, Y. Zhang, J.-R. Wen, and W.-Y. Ma. Object-level ranking: Bringing order to Web objects. In Proceedings of the 14th International Conference on World Wide Web, pages 567-574, Chiba, Japan, 2005.
[22] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the Web. Technical report, Stanford Digital Libraries, 1998.
[23] A. Savakis, S. Etz, and A. Loui. Evaluation of image appeal in consumer photography. In SPIE Human Vision and Electronic Imaging, pages 111-120, 2000.
[24] D. Sullivan. Hitwise search engine ratings. Search Engine Watch Articles, http://searchenginewatch.com/reports/article.php/3099931, August 23, 2005.
[25] S. Susstrunk and S.
Winkler. Color image quality on the Internet. In IS&T/SPIE Electronic Imaging 2004: Internet Imaging V, volume 5304, pages 118-131, 2004.
[26] H. Tong, M. Li, H.-J. Zhang, J. He, and C. Zhang. Classification of digital photos taken by photographers or home users. In Pacific-Rim Conference on Multimedia (PCM), pages 198-205, 2004.
[27] W. Xi, B. Zhang, Z. Chen, Y. Lu, S. Yan, W.-Y. Ma, and E. A. Fox. Link fusion: a unified link analysis framework for multi-type interrelated data objects. In Proceedings of the 13th International Conference on World Wide Web, pages 319-327, 2004.

Ranking Web Objects from Multiple Communities

ABSTRACT

Vertical search is a promising direction, as it leverages domain-specific knowledge and can provide more precise information for users. In this paper we study the Web object-ranking problem, one of the key issues in building a vertical search engine. More specifically, we focus on this problem in cases when objects lack relationships between different Web communities, and take high-quality photo search as the test bed for this investigation. We propose two score fusion methods that can automatically integrate as many Web communities (Web forums) with rating information as possible. The proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and aim at minimizing the score differences of duplicate photos in different forums. Both intermediate results and user studies show that the proposed fusion methods are practical and efficient solutions to Web object ranking in the cases we have described. Though the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other ranking problems, such as movie ranking and music ranking.

1. INTRODUCTION

Despite numerous refinements and optimizations, general-purpose search engines still fail to find relevant results for many queries. As a new trend, vertical search has shown promise
because it can leverage domain-specific knowledge and is more effective in connecting users with the information they want. There are many vertical search engines, including some for paper search (e.g., Libra [21], Citeseer [7] and Google Scholar [4]), product search (e.g., Froogle [5]), movie search [6], image search [1, 8], video search [6], local search [2], and news search [3]. We believe the vertical search engine trend will continue to grow.

Essentially, building vertical search engines involves data crawling, information extraction, object identification and integration, and object-level Web information retrieval (or Web object ranking) [20], among which ranking is one of the most important factors, because it deals with the core problem of how to combine and rank objects coming from multiple communities. Although object-level ranking has been well studied in building vertical search engines, there are still some kinds of vertical domains in which objects cannot be effectively ranked. For example, algorithms that evolved from PageRank [22], such as PopRank [21] and LinkFusion [27], were proposed to rank objects coming from multiple communities, but can only work on well-defined graphs of heterogeneous data. "Well-defined" means that like objects (e.g., authors in paper search) can be identified in multiple communities (e.g., conferences). This allows heterogeneous objects to be well linked to form a graph by leveraging all the relationships (e.g.,
cited-by, authored-by and published-by) among the multiple communities. However, this assumption does not always hold in some domains; high-quality photo search, movie search and news search are exceptions. For example, a photograph forum website usually includes three kinds of objects: photos, authors and reviewers. Yet different photo forums seem to lack any relationships, as there are no cited-by relationships. This makes it difficult to judge whether two authors in different forums are the same person, or whether two photos are indeed identical. Consequently, although each photo has a rating score within its forum, it is non-trivial to rank photos coming from different photo forums. Similar problems also exist in movie search and news search. Although two movie titles can be identified as the same one by title and director in different movie discussion groups, it is non-trivial to combine rating scores from different discussion groups and rank movies effectively. We call such non-trivial object relationships, in which identification is difficult, incomplete relationships.

Other related work includes rank aggregation for the Web [13, 14], and learning algorithms for ranking, such as RankBoost [15], RankSVM [17, 19], and RankNet [12]. We will contrast these methods with the proposed methods after we have described the problem and our methods. We specifically focus on the Web object-ranking problem in cases that lack object relationships or have incomplete object relationships, and take high-quality photo search as the test bed for this investigation. In the following, we introduce the rationale for building high-quality photo search.

1.1 High-Quality Photo Search

In the past ten years, the Internet has grown to become an incredible resource, allowing users to easily access a huge number of images. However, compared to the more than 1 billion images indexed by commercial search engines, the actual queries submitted to image search engines are relatively
few, occupying only 8-10 percent of the total image and text queries submitted to commercial search engines [24]. This is partially because user demand for image search is far lower than that for general text search. On the other hand, current commercial search engines still cannot meet various user requirements well, because there is no effective and practical solution for understanding image content. To better understand user needs in image search, we conducted a query-log analysis based on a commercial search engine. The results show that more than 20% of image search queries are related to the "nature and places" and "daily life" categories. Users apparently are interested in enjoying high-quality photos or searching for beautiful images of locations or other kinds. However, such user needs are not well supported by current image search engines, because of the difficulty of the quality assessment problem.

Ideally, the most critical part of a search engine, the ranking function, can be simplified as consisting of two key factors: relevance and quality. As to the relevance factor, current commercial image search engines return mostly images quite relevant to queries, except for some ambiguity. As to the quality factor, however, there is still no way to give an optimal rank to an image. Though content-based image quality assessment has been investigated over many years [23, 25, 26], it is still far from able to provide a realistic quality measure in the immediate future. It may therefore look pessimistic to build an image search engine that can fulfill the potentially large demand for enjoying high-quality photos. The proliferation of various Web communities, however, reminds us that people today have created and shared a great many high-quality photos on the Web on virtually any topic, which provides a rich source for building a better image search engine. In general, photos from various photo forums are of higher quality than personal
photos, and are also much more appealing to public users than personal photos.\nIn addition, photos uploaded to photo forums generally require rich metadata about title, camera setting, category and description to be provide by photographers.\nThese metadata are actually the most precise descriptions for photos and undoubtedly can be indexed to help search engines find relevant results.\nMore important, there are volunteer users in Web communities actively providing valuable ratings for these photos.\nThe rating information is generally of great value in solving the photo quality ranking problem.\nMotivated by such observations, we have been attempting to build a vertical photo search engine by extracting rich metadata and integrating information form various photo Web forums.\nIn this paper, we specifically focus on how to rank photos from multiple Web forums.\nIntuitively, the rating scores from different photo forums can be empirically normalized based on the number of photos and the number of users in each forum.\nHowever, such a straightforward approach usually requires large manual effort in both tedious parameter tuning and subjective results evaluation, which makes it impractical when there are tens or hundreds of photo forums to combine.\nTo address this problem, we seek to build relationships\/links between different photo forums.\nThat is, we first adopt an efficient algorithm to find duplicate photos which can be considered as hidden links connecting multiple forums.\nWe then formulate the ranking challenge as an optimization problem, which eventually results in an optimal ranking function.\n1.2 Main Contributions and Organization.\nThe main contributions of this paper are: 1.\nWe have proposed and built a vertical image search engine by leveraging rich metadata from various photo forum Web sites to meet user requirements of searching for and enjoying high-quality photos, which is impossible in traditional image search engines.\n2.\nWe have proposed two 
kinds of Web object-ranking algorithms for photos with incomplete relationships, which can automatically and efficiently integrate as many as possible Web communities with rating information and achieves an equal qualitative result compared with the manually tuned fusion scheme.\nThe rest of this paper is organized as follows.\nIn Section 2, we present in detail the proposed solutions to the ranking problem, including how to find hidden links between different forums, normalize rating scores, obtain the optimal ranking function, and contrast our methods with some other related research.\nIn Section 3, we describe the experimental setting and experiments and user studies conducted to evaluate our algorithm.\nOur conclusion and a discussion of future work is in Section 4.\nIt is worth noting that although we treat vertical photo search as the test bed in this paper, the proposed ranking algorithm can also be applied to rank other content that includes video clips, poems, short stories, drawings, sculptures, music, and so on.\n2.\nALGORITHM 2.1 Overview\n2.2 Score Normalization\n2.3 Manual Fusion\n2.4 Duplicate Photo Detection\n2.5 Score Fusion\n2.5.1 Linear Fusion by Duplicate Photos\n3.\n2.5.2 Nonlinear Fusion by Duplicate Photos\n2.5.3 Performance Measure of the Fusion Results\n2.6 Contrasts with Other Related Work\n3.\nEXPERIMENTS\n3.1 EnjoyPhoto: high-quality Photo Search Engine\n3.2 Photo Score Normalization\n3.3 Duplicate photo detection\n3.4 \u03b4 Measure\n3.5 User Study\n4.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we studied the Web object-ranking problem in the cases of lacking object relationships where traditional ranking algorithms are no longer valid, and took high-quality photo search as the test bed for this investigation.\nWe have built a vertical high-quality photo search engine, and proposed score fusion methods which can automatically integrate as many data sources (Web forums) as possible.\nThe proposed fusion methods leverage the hidden 
links discovered by duplicate photo detection algorithm, and minimize score differences of duplicate photos in different forums.\nBoth the intermediate results and the user studies show that the proposed fusion methods are a practical and efficient solution to Web object ranking in the aforesaid relationships.\nThough the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other kinds of Web objects including video clips, poems, short stories, music, drawings, sculptures, and so on.\nCurrent system is far from being perfect.\nIn order to make this system more effective, more delicate analysis for the vertical domain (e.g., Web photo forums) are needed.\nThe following points, for example, may improve the searching results and will be our future work: 1.\nmore subtle analysis and then utilization of different kinds of ratings (e.g., novelty ratings, aesthetic ratings); 2.\ndifferentiating various communities who may have different interests and preferences or even distinct culture understandings; 3.\nincorporating more useful information, including photographers' and reviewers' information, to model the photos in a heterogeneous data space instead of the current homogeneous one.\nWe will further utilize collaborative filtering to recommend relevant high-quality photos to browsers.\nOne open problem is whether we can find an objective and efficient criterion for evaluating the ranking results, instead of employing subjective and inefficient user studies, which blocked us from trying more ranking algorithms and tuning parameters in one algorithm.","lvl-4":"Ranking Web Objects from Multiple Communities\nABSTRACT\nVertical search is a promising direction as it leverages domainspecific knowledge and can provide more precise information for users.\nIn this paper, we study the Web object-ranking problem, one of the key issues in building a vertical search engine.\nMore specifically, we focus on this problem in cases when 
objects lack relationships between different Web communities, and take high-quality photo search as the test bed for this investigation.\nWe proposed two score fusion methods that can automatically integrate as many Web communities (Web forums) with rating information as possible.\nThe proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and aims at minimizing score differences of duplicate photos in different forums.\nBoth intermediate results and user studies show the proposed fusion methods are practical and efficient solutions to Web object ranking in cases we have described.\nThough the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other ranking problems, such as movie ranking and music ranking.\n1.\nINTRODUCTION\nDespite numerous refinements and optimizations, general purpose search engines still fail to find relevant results for many queries.\nAs a new trend, vertical search has shown promise because it can leverage domain-specific knowledge and is more effective in connecting users with the information they want.\nWe believe the vertical search engine trend will continue to grow.\nEssentially, building vertical search engines includes data crawling, information extraction, object identification and integration, and object-level Web information retrieval (or Web object ranking) [20], among which ranking is one of the most important factors.\nThis is because it deals with the core problem of how to combine and rank objects coming from multiple communities.\nAlthough object-level ranking has been well studied in building vertical search engines, there are still some kinds of vertical domains in which objects cannot be effectively ranked.\n\"Well-defined\" means that like objects (e.g. authors in paper search) can be identified in multiple communities (e.g. 
conferences).\nThis allows heterogeneous objects to be well linked to form a graph through leveraging all the relationships (e.g. cited-by, authored-by and published-by) among the multiple communities.\nHigh-quality photo search, movie search and news search are exceptions.\nFor example, a photograph forum\nwebsite usually includes three kinds of objects: photos, authors and reviewers.\nYet different photo forums seem to lack any relationships, as there are no cited-by relationships.\nThis makes it difficult to judge whether two authors cited are the same author, or two photos are indeed identical photos.\nConsequently, although each photo has a rating score in a forum, it is non-trivial to rank photos coming from different photo forums.\nSimilar problems also exist in movie search and news search.\nWe call such non-trivial object relationship in which identification is difficult, incomplete relationships.\nWe will contrast differences of these methods with the proposed methods after we have described the problem and our methods.\nWe will specifically focus on Web object-ranking problem in cases that lack object relationships or have with incomplete object relationships, and take high-quality photo search as the test bed for this investigation.\nIn the following, we will introduce rationale for building high-quality photo search.\n1.1 High-Quality Photo Search\nThis is partially because user requirements for image search are far less than those for general text search.\nOn the other hand, current commercial search engines still cannot well meet various user requirements, because there is no effective and practical solution to understand image content.\nTo better understand user needs in image search, we conducted a query log analysis based on a commercial search engine.\nThe result shows that more than 20% of image search queries are related to nature and places and daily life categories.\nUsers apparently are interested in enjoying high-quality photos or searching 
for beautiful images of locations or other kinds.\nHowever, such user needs are not well supported by current image search engines because of the difficulty of the quality assessment problem.\nIdeally, the most critical part of a search engine--the ranking function--can be simplified as consisting of two key factors: relevance and quality.\nFor the relevance factor, search in current commercial image search engines provide most returned images that are quite relevant to queries, except for some ambiguity.\nHowever, as to quality factor, there is still no way to give an optimal rank to an image.\nSeemingly, it really looks pessimistic to build an image search engine that can fulfill the potentially large requirement of enjoying high-quality photos.\nVarious proliferating Web communities, however, notices us that people today have created and shared a lot of high-quality photos on the Web on virtually any topics, which provide a rich source for building a better image search engine.\nIn general, photos from various photo forums are of higher quality than personal photos, and are also much more appealing to public users than personal photos.\nIn addition, photos uploaded to photo forums generally require rich metadata about title, camera setting, category and description to be provide by photographers.\nThese metadata are actually the most precise descriptions for photos and undoubtedly can be indexed to help search engines find relevant results.\nMore important, there are volunteer users in Web communities actively providing valuable ratings for these photos.\nThe rating information is generally of great value in solving the photo quality ranking problem.\nMotivated by such observations, we have been attempting to build a vertical photo search engine by extracting rich metadata and integrating information form various photo Web forums.\nIn this paper, we specifically focus on how to rank photos from multiple Web forums.\nIntuitively, the rating scores from different 
photo forums can be empirically normalized based on the number of photos and the number of users in each forum.\nTo address this problem, we seek to build relationships\/links between different photo forums.\nThat is, we first adopt an efficient algorithm to find duplicate photos which can be considered as hidden links connecting multiple forums.\nWe then formulate the ranking challenge as an optimization problem, which eventually results in an optimal ranking function.\n1.2 Main Contributions and Organization.\nThe main contributions of this paper are: 1.\nWe have proposed and built a vertical image search engine by leveraging rich metadata from various photo forum Web sites to meet user requirements of searching for and enjoying high-quality photos, which is impossible in traditional image search engines.\n2.\nWe have proposed two kinds of Web object-ranking algorithms for photos with incomplete relationships, which can automatically and efficiently integrate as many as possible Web communities with rating information and achieves an equal qualitative result compared with the manually tuned fusion scheme.\nThe rest of this paper is organized as follows.\nIn Section 3, we describe the experimental setting and experiments and user studies conducted to evaluate our algorithm.\nOur conclusion and a discussion of future work is in Section 4.\nIt is worth noting that although we treat vertical photo search as the test bed in this paper, the proposed ranking algorithm can also be applied to rank other content that includes video clips, poems, short stories, drawings, sculptures, music, and so on.\n4.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we studied the Web object-ranking problem in the cases of lacking object relationships where traditional ranking algorithms are no longer valid, and took high-quality photo search as the test bed for this investigation.\nWe have built a vertical high-quality photo search engine, and proposed score fusion methods which can 
automatically integrate as many data sources (Web forums) as possible.\nThe proposed fusion methods leverage the hidden links discovered by duplicate photo detection algorithm, and minimize score differences of duplicate photos in different forums.\nBoth the intermediate results and the user studies show that the proposed fusion methods are a practical and efficient solution to Web object ranking in the aforesaid relationships.\nThough the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other kinds of Web objects including video clips, poems, short stories, music, drawings, sculptures, and so on.\nCurrent system is far from being perfect.\nIn order to make this system more effective, more delicate analysis for the vertical domain (e.g., Web photo forums) are needed.\nThe following points, for example, may improve the searching results and will be our future work: 1.\nmore subtle analysis and then utilization of different kinds of ratings (e.g., novelty ratings, aesthetic ratings); 2.\ndifferentiating various communities who may have different interests and preferences or even distinct culture understandings; 3.\nincorporating more useful information, including photographers' and reviewers' information, to model the photos in a heterogeneous data space instead of the current homogeneous one.\nWe will further utilize collaborative filtering to recommend relevant high-quality photos to browsers.","lvl-2":"Ranking Web Objects from Multiple Communities\nABSTRACT\nVertical search is a promising direction as it leverages domainspecific knowledge and can provide more precise information for users.\nIn this paper, we study the Web object-ranking problem, one of the key issues in building a vertical search engine.\nMore specifically, we focus on this problem in cases when objects lack relationships between different Web communities, and take high-quality photo search as the test bed for this investigation.\nWe proposed 
two score fusion methods that can automatically integrate as many Web communities (Web forums) with rating information as possible. The proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and aim at minimizing the score differences of duplicate photos in different forums. Both intermediate results and user studies show that the proposed fusion methods are practical and efficient solutions to Web object ranking in the cases we have described. Though the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other ranking problems, such as movie ranking and music ranking.

1. INTRODUCTION

Despite numerous refinements and optimizations, general-purpose search engines still fail to find relevant results for many queries. As a new trend, vertical search has shown promise because it can leverage domain-specific knowledge and is more effective in connecting users with the information they want. There are many vertical search engines, including some for paper search (e.g. Libra [21], Citeseer [7] and Google Scholar [4]), product search (e.g.
Froogle [5]), movie search [6], image search [1, 8], video search [6], local search [2], and news search [3]. We believe the vertical search engine trend will continue to grow.

Essentially, building a vertical search engine involves data crawling, information extraction, object identification and integration, and object-level Web information retrieval (or Web object ranking) [20], among which ranking is one of the most important factors, because it deals with the core problem of how to combine and rank objects coming from multiple communities. Although object-level ranking has been well studied in building vertical search engines, there are still some vertical domains in which objects cannot be effectively ranked. For example, algorithms that evolved from PageRank [22], such as PopRank [21] and LinkFusion [27], were proposed to rank objects coming from multiple communities, but can only work on well-defined graphs of heterogeneous data. "Well-defined" means that like objects (e.g. authors in paper search) can be identified in multiple communities (e.g. conferences). This allows heterogeneous objects to be well linked to form a graph by leveraging all the relationships (e.g.
cited-by, authored-by and published-by) among the multiple communities. However, this assumption does not always hold in some domains; high-quality photo search, movie search and news search are exceptions. For example, a photograph forum website usually includes three kinds of objects: photos, authors and reviewers. Yet different photo forums seem to lack any relationships: there are no cited-by relationships, which makes it difficult to judge whether two authors are the same person, or whether two photos are indeed identical. Consequently, although each photo has a rating score within its forum, it is non-trivial to rank photos coming from different photo forums. Similar problems also exist in movie search and news search. Although two movie titles can be identified as the same by title and director in different movie discussion groups, it is non-trivial to combine rating scores from different discussion groups and rank movies effectively. We call such relationships, in which object identification is difficult, incomplete relationships.

Other related work includes rank aggregation for the Web [13, 14] and learning-to-rank algorithms such as RankBoost [15], RankSVM [17, 19] and RankNet [12]. We will contrast these methods with the proposed methods after we have described the problem and our approach. We specifically focus on the Web object-ranking problem in cases that lack object relationships or have incomplete object relationships, and take high-quality photo search as the test bed for this investigation. In the following, we introduce the rationale for building high-quality photo search.

1.1 High-Quality Photo Search

In the past ten years, the Internet has grown to become an incredible resource, allowing users to easily access a huge number of images. However, compared to the more than 1 billion images indexed by commercial search engines, actual queries submitted to image search engines are relatively
minor, occupying only 8-10 percent of the total image and text queries submitted to commercial search engines [24]. This is partially because user demand for image search is far lower than that for general text search. On the other hand, current commercial search engines still cannot meet various user requirements well, because there is no effective and practical solution for understanding image content.

To better understand user needs in image search, we conducted a query log analysis based on a commercial search engine. The results show that more than 20% of image search queries are related to the nature, places and daily life categories. Users apparently are interested in enjoying high-quality photos or searching for beautiful images of locations and other subjects. However, such user needs are not well supported by current image search engines because of the difficulty of the quality assessment problem. Ideally, the most critical part of a search engine, the ranking function, can be simplified as consisting of two key factors: relevance and quality. As to the relevance factor, current commercial image search engines mostly return images that are quite relevant to queries, except for some ambiguity. As to the quality factor, however, there is still no way to assign an optimal rank to an image. Though content-based image quality assessment has been investigated for many years [23, 25, 26], it is still far from ready to provide a realistic quality measure in the immediate future. It may therefore look pessimistic to build an image search engine that can fulfill the potentially large demand for enjoying high-quality photos.

The proliferation of Web communities, however, reminds us that people today have created and shared a great many high-quality photos on the Web on virtually any topic, which provides a rich source for building a better image search engine. In general, photos from various photo forums are of higher quality than personal
photos, and are also much more appealing to public users. In addition, photos uploaded to photo forums generally come with rich metadata (title, camera settings, category and description) provided by photographers. These metadata are the most precise descriptions of the photos and can be indexed to help search engines find relevant results. More importantly, volunteer users in Web communities actively provide valuable ratings for these photos. This rating information is of great value in solving the photo quality ranking problem.

Motivated by such observations, we have been attempting to build a vertical photo search engine by extracting rich metadata and integrating information from various photo Web forums. In this paper, we specifically focus on how to rank photos from multiple Web forums. Intuitively, the rating scores from different photo forums could be normalized empirically based on the number of photos and the number of users in each forum. However, such a straightforward approach usually requires a large manual effort in both tedious parameter tuning and subjective result evaluation, which makes it impractical when there are tens or hundreds of photo forums to combine. To address this problem, we seek to build relationships (links) between different photo forums. That is, we first adopt an efficient algorithm to find duplicate photos, which can be considered hidden links connecting multiple forums. We then formulate the ranking challenge as an optimization problem, which eventually results in an optimal ranking function.

1.2 Main Contributions and Organization

The main contributions of this paper are:
1. We have proposed and built a vertical image search engine that leverages rich metadata from various photo forum Web sites to meet user needs of searching for and enjoying high-quality photos, which is impossible in traditional image search engines.
2. We have proposed two
kinds of Web object-ranking algorithms for photos with incomplete relationships, which can automatically and efficiently integrate as many Web communities with rating information as possible, and achieve results qualitatively comparable to a manually tuned fusion scheme.

The rest of this paper is organized as follows. In Section 2, we present the proposed solutions to the ranking problem in detail, including how to find hidden links between different forums, normalize rating scores, and obtain the optimal ranking function, and we contrast our methods with related research. In Section 3, we describe the experimental setting and the experiments and user studies conducted to evaluate our algorithm. Our conclusions and a discussion of future work are in Section 4. It is worth noting that although we use vertical photo search as the test bed in this paper, the proposed ranking algorithm can also be applied to rank other content, including video clips, poems, short stories, drawings, sculptures and music.

2. ALGORITHM

2.1 Overview

The difficulty of integrating multiple Web forums lies in their different rating systems, which exhibit two kinds of freedom. The first is the rating interval or rating scale, including the minimal and maximal ratings for each Web object. For example, some forums use a 5-point rating scale whereas others use 3-point or 10-point scales. It may seem easy to fix this freedom, but detailed analysis of the data and our experiments show that it is a non-trivial problem. The second kind of freedom is the varying rating criteria found in different Web forums: the same score does not mean the same quality in different forums. Intuitively, if we can detect the same photographers or the same photographs, we can build relationships between any two photo forums and therefore standardize the rating criteria by score normalization and transformation. Fortunately, we find that quite a
number of duplicate photographs exist in various Web photo forums. This is reasonable considering that photographers sometimes submit a photo to more than one forum to obtain critiques or in hopes of widespread publicity. In this work, we adopt an efficient duplicate photo detection algorithm [10] to find these photos.

The proposed methods are based on the following considerations. To overcome the ranking problem, a standardized rating criterion, rather than merely a reasonable one, is needed. We can therefore take a large-scale forum as the reference forum, and align the other forums to it by taking into account duplicate Web objects (duplicate photos in this work). Ideally, the scores of duplicate photos should be equal even though they appear in different forums. We allow the scores in each forum other than the reference forum to vary in a parametric space, with parameters determined by minimizing an objective function defined as the sum of squared score differences. By formulating the ranking problem as an optimization problem that attempts to make the scores of duplicate photos in non-reference forums as close as possible to those in the reference forum, we can effectively solve the ranking problem.

For convenience, the following notation is employed. S_ki and S̄_ki denote the total score and mean score of the i-th Web object (photo) on the k-th Web site, respectively. The total score is the sum of the various rating scores (e.g., novelty rating and aesthetic rating), and the mean score is the mean of those rating scores. Suppose there are K Web sites in total. We further use the set {S_ki : i = 1, ..., I_kl} to denote the scores of the Web objects (photos) in the k-th Web forum that are duplicated in the l-th Web forum, where I_kl is the total number of duplicate Web objects between the two sites. In general, score fusion can be seen as the procedure of finding K transforms ψ_k, k = 1, ..., K, such that the transformed scores S̃_ki = ψ_k(S_ki) can be used
to rank Web objects from different Web sites.

Figure 1: Web community integration. Each Web community forms a subgraph, and all communities are linked together by some hidden links (dashed lines).

The objective function described in the above paragraph can then be formulated as

  f = Σ_{k=2,...,K} Σ_i w̄_ki (ψ_k(S_ki) − S_1i)²,   (1)

where we use k = 1 as the reference forum and thus ψ_1(S_1i) = S_1i, and the inner summation is taken over the photos in forum k that are duplicated in the reference forum. w̄_ki (≥ 0) is a weight coefficient that can be set heuristically according to the numbers of voters (reviewers or commenters) in both the reference forum and the non-reference forum: the more reviewers, the more popular the photo is, and the larger the corresponding weight w̄_ki should be. In this work, we do not inspect the problem of how to choose w̄_ki and simply set them to one; but we believe the proper use of w̄_ki, which leverages more information, could significantly improve the results.

Figure 1 illustrates this idea. Web Community 1 is the reference community. The dashed lines are links indicating that the two linked Web objects are actually the same. The proposed algorithm tries to find the best ψ_k (k = 2, ..., K), which has a certain parametric form according to a certain model, so as to minimize the cost function defined in Eq. 1; the summation is taken over all the dashed lines.

We first discuss the score normalization methods in Section 2.2, which serve as the basis for the following work. Before describing the proposed ranking algorithms, we introduce a manually tuned method in Section 2.3, which is laborious and even impractical when the number of communities becomes large. In Section 2.4, we briefly explain how to precisely find duplicate photos between Web forums. We then describe the two proposed methods, linear fusion and non-linear fusion, and a performance measure for result evaluation in Section 2.5. Finally, in Section 2.6 we discuss the relationship of the proposed methods to some other related
work.

2.2 Score Normalization

Since different Web (photo) forums usually have different rating criteria, it is necessary to normalize them before applying any fusion method. In addition, as there are many kinds of ratings, such as ratings for novelty, ratings for aesthetics, etc., it is reasonable to choose a common one, the total score or average score, which can always be extracted from any Web forum or calculated from the corresponding ratings. This allows the normalization method on the total or average score to be viewed as an impartial rating method across different Web forums.

It is straightforward to normalize average scores by linearly transforming them to a fixed interval; we call the result the Scaled Mean Score. The difficulty with this normalization method is that if only a few users have rated an object, say a photo in a photo forum, the average score for the object is likely to be spammed or skewed. The total score avoids such drawbacks, as it carries more information, reflecting both a Web object's quality and its popularity. The problem is then how to normalize total scores across different Web forums. The simplest way is normalization by the maximal and minimal scores; its drawback is that it is not robust, in other words, it is sensitive to outliers.

To make the normalization insensitive to unusual data, we propose the Mode-90% Percentile normalization method. Here, the mode score is the total score that has been assigned to more photos than any other total score, and the 90th-percentile score is the total score below which 90 percent of the images' total scores fall. This normalization method utilizes the mode and the 90th percentile as two reference points to align two rating systems, which makes the distributions of total scores in different forums more consistent. The underlying assumption, for example in different photo forums, is that
even though the qualities of top photos in different forums may vary greatly and depend strongly on forum quality, photos of middle-level quality (from the mode to the 90% percentile) should be of roughly the same quality, up to a degree of freedom that reflects the rating criterion (strictness) of each Web forum.\nPhotos of this middle level usually account for more than 70% of the photos in a Web forum.\nWe will give a more detailed analysis of the scores in Section 3.2.\n2.3 Manual Fusion\nThe Web movie forum IMDB [16] uses a Bayesian-ranking function to normalize rating scores within one community.\nMotivated by this ranking function, we propose the following manual fusion method: for the kth Web site, we use the following formula\nto rank photos, where nk is the number of votes and n\u2217k, S\u2217k, and \u03b1k are three parameters.\nThis ranking function first takes a balance between the original mean score \u00afSki and a reference score S\u2217k to obtain a weighted mean score, which may be more reliable than \u00afSki.\nThen the weighted mean score is scaled by \u03b1k to get the final score Ski.\nFor n Web communities, there are then about 3n parameters in {(\u03b1k, n\u2217k, S\u2217k) | k = 1,..., n} to tune.\nThough this method can achieve pretty good results after careful and thorough manual tuning of these parameters, when n becomes increasingly large, say tens or hundreds of Web communities crawled and indexed, the method becomes more and more laborious and eventually impractical.\nIt is therefore desirable to find an effective fusion method whose parameters can be automatically determined.\n2.4 Duplicate Photo Detection\nWe use Dedup [10], an efficient and effective duplicate image detection algorithm, to find duplicate photos between any two photo forums.\nThis algorithm uses a hash function to map a high-dimensional feature to a 32-bit hash code (see below for how to construct the hash code).\nIts
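The Mode-90% Percentile normalization described above can be sketched in a few lines. This is an illustrative sketch rather than the authors' implementation; the function name is ours, and the mapping targets (mode to 5, 90th percentile to 8) follow the values used later in Section 3.2.

```python
from collections import Counter

def mode_percentile_normalize(scores, mode_target=5.0, pct_target=8.0, pct=0.90):
    """Affinely map a forum's total scores so that the mode of the
    distribution lands on mode_target and the 90th-percentile score
    lands on pct_target (the two reference points of the method)."""
    mode = Counter(scores).most_common(1)[0][0]      # most frequent total score
    ranked = sorted(scores)
    p90 = ranked[min(len(ranked) - 1, int(pct * len(ranked)))]
    if p90 == mode:
        raise ValueError("degenerate distribution: mode equals 90th percentile")
    a = (pct_target - mode_target) / (p90 - mode)    # slope of the affine map
    return [mode_target + a * (s - mode) for s in scores]
```

Because the same two reference points are aligned in every forum, the middle-level score ranges become directly comparable across forums.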
computational complexity to find all the duplicate images among n images is about O(n log n).\nThe low-level visual feature for each photo is extracted on k \u00d7 k regular grids.\nBased on all features extracted from the image database, a PCA model is built.\nThe visual features are then transformed to a relatively low-dimensional, zero-mean PCA space, of 29 dimensions in our system.\nThe hash code for each photo is then built as follows: each dimension is mapped to 1 if its value is greater than 0, and to 0 otherwise.\nPhotos in the same bucket are deemed potential duplicates and are further filtered by a threshold on the Euclidean similarity in the visual feature space.\nFigure 2 illustrates the hashing procedure, where visual features--mean gray values--are extracted on both 6 \u00d7 6 and 7 \u00d7 7 grids.\nThe 85-dimensional features are transformed to a 32-dimensional vector, and the hash code is generated according to the signs.\nFigure 2: Hashing procedure for duplicate photo detection\n2.5 Score Fusion\nIn this section, we present two solutions for score fusion based on different parametric-form assumptions on \u03c8k in Eq. 1.\n2.5.1 Linear Fusion by Duplicate Photos\nIntuitively, the most straightforward way to factor out the uncertainties caused by the different criteria is to scale, relative to a given center, the total scores of each non-reference Web photo forum with respect to the reference forum.\nMore strictly, we assume \u03c8k has the following form,\nwhich means that the scores of the kth (k \u2260 1) forum should be scaled by \u03b1k relative to the center tk\/(1 \u2212 \u03b1k), as shown in Figure 3.\nThen, if we substitute this \u03c8k into Eq. 1, we get the following objective function.\nBy solving the set of equations \u2202f\/\u2202\u03b1k = 0 and \u2202f\/\u2202tk = 0, where f is the objective function defined in Eq. 5, we get the closed-form solution for \u03b1k and tk, k = 2,..., K.\nThis is a linear fusion method.\nIt enjoys simplicity and excellent
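The sign-quantization hashing step described above can be sketched as follows. This is a simplified stand-in for the actual Dedup algorithm [10]: here the PCA model is fit on the candidate photo set itself, and the function name, feature shape, and bit count are illustrative assumptions.

```python
import numpy as np

def sign_hash_codes(features, n_bits=8):
    """Sketch of sign quantization: project zero-mean features onto
    their top principal components (via SVD) and emit one bit per
    dimension -- 1 if the coordinate is positive, 0 otherwise.
    Photos sharing a hash code become candidate duplicates."""
    X = np.asarray(features, dtype=float)
    X = X - X.mean(axis=0)                    # zero-mean, as in the PCA model
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:n_bits].T                     # coordinates in PCA space
    bits = (Z > 0).astype(int)
    return [int("".join(map(str, row)), 2) for row in bits]
```

Exact duplicates necessarily collide in the same bucket; near-duplicates that collide are then re-checked with the Euclidean-distance threshold mentioned in the text.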
performance in the following experiments.\nFigure 3: Linear Fusion method\n2.5.2 Nonlinear Fusion by Duplicate Photos\nSometimes we want a method that can adjust scores on intervals while leaving the two endpoints unchanged.\nAs illustrated in Figure 4, the method can tune scores between [C0, C1] while leaving the scores C0 and C1 unchanged.\nThis kind of fusion method is finer-grained than the linear one and contains many more parameters to tune, which we expect to further improve the results.\nHere, we propose a nonlinear fusion solution to satisfy such constraints.\nFirst, we introduce a transform,\nwhere \u03b1 > 0.\nThis transform satisfies \u03b7c0,c1,\u03b1(x) \u2208 [c0, c1] for all x \u2208 [c0, c1], with \u03b7c0,c1,\u03b1(c0) = c0 and \u03b7c0,c1,\u03b1(c1) = c1.\nWe can then utilize this nonlinear transform to adjust the scores in a certain interval, say (M, T].\nFigure 4: Nonlinear Fusion method.\nWe intend to finely adjust the shape of the curves in each segment.\nAlthough there is no closed-form solution for the resulting optimization problem, it is not hard to obtain a numeric one.\nUnder the same assumptions made in Section 2.2, we can use this method to adjust scores of the middle level (from the mode point to the 90% percentile).\nThis more complicated nonlinear fusion method is expected to achieve better results than the linear one.\nHowever, difficulties in evaluating the ranking results prevent us from tuning these parameters extensively.\nThe current experiments in Section 3.5 do not reveal any advantages over the simple linear model.\n2.5.3 Performance Measure of the Fusion Results\nSince our objective function is to make the scores of the same Web objects (e.g.
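A power-law map is one standard transform with the stated properties (both endpoints fixed, a single parameter alpha > 0 bending the curve in between); it is shown here only for illustration, and the paper's exact eta may differ.

```python
def eta(x, c0, c1, alpha):
    """A transform on [c0, c1] that fixes both endpoints:
    eta(c0) = c0, eta(c1) = c1, and alpha > 0 controls whether
    scores in between are pushed up (alpha < 1) or down (alpha > 1)."""
    assert alpha > 0 and c1 > c0
    t = (x - c0) / (c1 - c0)            # rescale to [0, 1]
    return c0 + (c1 - c0) * t ** alpha
```

With alpha = 1 the map is the identity, so the linear fusion is recovered as a special case on each interval.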
duplicate photos) between a non-reference forum and the reference forum as close as possible, it is natural to investigate how close they become to each other and how the scores of the same Web objects change between two non-reference forums before and after score fusion.\nTaking Figure 1 as an example, the proposed algorithms minimize the score differences of the same Web objects in two Web forums: the reference forum (Web Community 1) and a non-reference forum, which corresponds to minimizing the objective function on the red dashed (hidden) links.\nAfter the optimization, we must ask what happens to the score differences of the same Web objects in two non-reference forums.\nOr, in other words, do the scores of two objects linked by the green dashed (hidden) links become more consistent?\nWe therefore define the \u03b4 measure as a performance measure to quantify the changes in the scores of the same Web objects in different Web forums.\nHere \u03b4kl > 0 means that after score fusion, scores of the same Web objects between the kth and lth Web forums become more consistent, which is what we expect.\nOn the contrary, if \u03b4kl < 0, those scores become more inconsistent.\nAlthough we cannot rely on this measure to evaluate our final fusion results--ranking photos by their popularity and quality is so subjective a process that everyone may have their own preferred ranking--it can help us understand the intermediate ranking results and provide insights into the final performance of different ranking methods.\n2.6 Contrasts with Other Related Work\nWe have already mentioned the differences between the proposed methods and traditional methods, such as the PageRank [22], PopRank [21], and LinkFusion [27] algorithms, in Section 1.\nHere, we discuss some other related work.\nThe current problem can also be viewed as a rank aggregation problem [13, 14], as we deal with the question of how to combine several rank lists.\nHowever, there are fundamental differences
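The sign convention of the delta measure can be illustrated with a toy sketch: whatever its exact form, delta is positive exactly when the fused scores of duplicate photos move closer together. The function name and the use of the mean absolute difference are our own assumptions for illustration.

```python
def delta_measure(before_pairs, after_pairs):
    """Sketch of the delta measure's sign convention: the reduction in
    mean absolute score difference over the same duplicate photos in two
    forums, before vs. after fusion.  Positive means the forums' scores
    became more consistent; negative means less consistent."""
    def mean_gap(pairs):
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    return mean_gap(before_pairs) - mean_gap(after_pairs)
```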
between them.\nFirst of all, unlike Web pages, which can be easily and accurately detected as identical, detecting the same photos in different Web forums is non-trivial work, and can only be accomplished by carefully designed algorithms operating at a certain precision and recall.\nSecond, the numbers of duplicate photos across different Web forums are small relative to the whole photo sets (see Table 1).\nIn other words, the top-K rank lists of different Web forums are almost disjoint for a given query.\nUnder this condition, both the algorithms proposed in [13] and their measures--Kendall tau distance and Spearman footrule distance--degenerate to trivial cases.\nAnother category of rank fusion (aggregation) methods is based on machine learning algorithms, such as RankSVM [17, 19], RankBoost [15], and RankNet [12].\nAll of these methods require labelled datasets to train a model.\nIn the current setting, it is difficult or even impossible to obtain such labels for professionalism or popularity, since ranking photos is too vague and subjective a task.\nInstead, the problem here is how to combine several ordered sublists to form a totally ordered list.\n3.\nEXPERIMENTS\nIn this section, we carry out our research on high-quality photo search.\nWe first briefly introduce the newly proposed vertical image search engine, EnjoyPhoto, in Section 3.1.\nThen we focus on how to rank photos from different Web forums.\nIn order to do so, we first normalize the scores (ratings) of photos from multiple Web forums in Section 3.2.\nThen we find duplicate photos in Section 3.3.\nSome intermediate results are discussed using the \u03b4 measure in Section 3.4.\nFinally, a set of user studies is carried out in Section 3.5 to validate the proposed methods.\n3.1 EnjoyPhoto: high-quality Photo Search Engine\nIn order to meet users' requirements for enjoying high-quality photos, we propose and build a high-quality photo search
engine--EnjoyPhoto--which accounts for the following three key issues: 1. how to crawl and index photos, 2. how to determine the quality of each photo, and 3. how to display the search results so as to make the search process enjoyable.\nFor a given text-based query, the system ranks photos based on a combination of the relevance of each photo to the query (Issue 1) and the quality of the photo (Issue 2), and finally displays them in an enjoyable manner (Issue 3).\nAs for Issue 3, we deliberately designed the system's interface to smooth the users' process of enjoying high-quality photos.\nTechniques such as fisheye views and slide shows are utilized in the current system.\nFigure 5 shows the interface.\nWe do not discuss this issue further, as it is not an emphasis of this paper.\nFigure 5: EnjoyPhoto: an enjoyable high-quality photo search engine, where 26,477 records are returned for the query \"fall\" in about 0.421 seconds\nAs for Issue 1, we extracted from a commercial search engine a subset of photos coming from various photo forums all over the world, and explicitly parsed the Web pages containing these photos.\nThe number of photos in the data collection is about 2.5 million.\nAfter the parsing, each photo was associated with its title, category, description, camera setting, EXIF data 1 (when available for digital images), location (when available in some photo forums), and many kinds of ratings.\nAll these metadata are generally precise descriptions or annotations of the image content, and are indexed by general text-based search technologies [9, 18, 11].\nIn the current system, the ranking function was specifically tuned to emphasize title, categorization, and rating information.\nIssue 2 is essentially dealt with in the following sections, which derive the quality of photos by analyzing ratings provided by various Web photo forums.\nHere we chose six photo forums to study the ranking problem and denote them as Web-A, Web-B, Web-C,
Web-D, Web-E and Web-F.\n3.2 Photo Score Normalization\nDifferent score normalization methods are analyzed in this section.\nIn this analysis, the zero scores, which usually account for about 30% of the total number of photos in some Web forums, are not taken into account.\nHow to utilize these photos is left for future exploration.\nFigure 6: Distributions of mean scores normalized to [0, 10]\nIn Figure 6, we list the distributions of the mean score, transformed to the fixed interval [0, 10].\nThe distributions of the average scores of these Web forums look quite different.\nThe distributions in Figures 6(a), 6(b), and 6(e) look like Gaussian distributions, while those in Figures 6(d) and 6(f) are dominated by the top score.\nThe reason for these eccentric distributions for Web-D and Web-F lies in their coarse rating systems.\nIn fact, Web-D and Web-F use 2- or 3-point rating scales whereas the other Web forums use 7- or 14-point rating scales.\nTherefore, it would be problematic to use these averaged scores directly.\nFurthermore, the average score is very likely to be spammed if only a few users rate a photo.\nFigure 7 shows the total-score normalization method by maximal and minimal scores, which is one of our baseline systems.\nAll the total scores of a given Web forum are normalized to [0, 100] according to the maximal and minimal scores of the corresponding Web forum.\nWe notice that the total score distribution of Web-A in Figure 7(a) has two larger tails than all the others.\nTo show the shape of the distributions more clearly, we only show the distributions on [0, 25] in Figures 7(b), 7(c), 7(d), 7(e), and 7(f).\nFigure 8 shows the Mode-90% Percentile normalization method, where the modes of the six distributions are normalized to 5 and the 90% percentiles to 8.\nWe can see that this normalization method makes the distributions of total scores in different forums more consistent.\nThe two proposed algorithms are
all based on these normalization methods.\n3.3 Duplicate Photo Detection\nTargeting computational efficiency, the Dedup algorithm may lose some recall, but can achieve a high precision rate.\nWe also focus on finding precise hidden links rather than all hidden links.\nFigure 9 shows some duplicate detection examples.\nThe results are shown in Table 1 and verify that large numbers of duplicate photos exist between any two Web forums, even under the strict condition for Dedup in which we chose the first 29 bits as the hash code.\nSince there are only a few parameters to estimate in the proposed fusion methods, the numbers of duplicate photos shown in Table 1 are sufficient to determine these parameters.\nThe last column of the table lists the total number of photos in the corresponding Web forum.\nFigure 7: Maxmin Normalization\nFigure 8: Mode-90% Percentile Normalization\n3.4 \u03b4 Measure\nThe parameters of the proposed linear and nonlinear algorithms are calculated using the duplicate data shown in Table 1, where Web-C is chosen as the reference Web forum since it shares the most duplicate photos with the other forums.\nTables 2 and 3 show the \u03b4 measure for the linear model and the nonlinear model.\nAs \u03b4kl is symmetric and \u03b4kk = 0, we only show the upper triangular part.\nThe NaN values in both tables arise because no duplicate photos were detected by the Dedup algorithm, as reported in Table 1.\nTable 1: Number of duplicate photos between each pair of Web forums\nFigure 9: Some results of duplicate photo detection\nTable 2: \u03b4 measure on the linear model.\nThe linear model theoretically guarantees that the \u03b4 measures related to the reference community are no less than 0.\nThis is indeed the case (see the underlined numbers in Table 2).\nBut this model cannot guarantee that the \u03b4 measures on the non-reference communities are also no less than 0, as the normalization steps are based on duplicate photos between the reference community and a nonreference
community.\nResults show that all the numbers in the \u03b4 measure are greater than 0 (see all the non-underlined numbers in Table 2), which indicates that this model is likely to give good results.\nOn the contrary, the nonlinear model does not guarantee that the \u03b4 measures related to the reference community are no less than 0, as not all duplicate photos between two Web forums can be used when optimizing this model.\nIn fact, the duplicate photos that lie in different intervals are not used in this model.\nIt is these specific duplicate photos that make the \u03b4 measure negative.\nAs a result, there are both negative and positive items in Table 3, but overall the number of positive ones is greater than that of negative ones (9:5), which indicates that the model may be better than the \"normalization only\" method (see the next subsection), which has an all-zero \u03b4 measure, and worse than the linear model.\n3.5 User Study\nBecause it is hard to find an objective criterion to evaluate which ranking function is better, we chose to employ user studies for subjective evaluation.\nTable 3: \u03b4 measure on the nonlinear model.\nTen subjects were invited to participate in the user study.\nThey were recruited from nearby universities.\nAs search engines for both text and image search are familiar to university students, there were no prerequisite criteria for choosing students.\nWe conducted the user studies using Internet Explorer 6.0 on Windows XP with 17-inch LCD monitors set at 1,280 pixels by 1,024 pixels in 32-bit color.\nData was recorded with server logs and paper-based surveys after each task.\nFigure 10: User study interface\nWe devised a dedicated interface for the user study, as shown in Figure 10.\nFor each pair of fusion methods, participants were encouraged to try any query they wished.\nFor those without specific ideas, two combo boxes (category list and query list) were listed on the bottom panel, where the top 1,000 image search queries from a commercial
search engine were provided.\nAfter a participant submitted a query, the system randomly selected the left or right frame to display each of the two ranking results.\nThe participant was then required to judge which of the two ranking results was better, or whether the two were of equal quality, and to submit the judgment by choosing the corresponding radio button and clicking the \"Submit\" button.\nFor example, in Figure 10, the query \"sunset\" is submitted to the system.\nThen 79,092 photos were returned and ranked by the Maxmin fusion method in the left frame and the linear fusion method in the right frame.\nA participant then compares the two ranking results (without knowing the ranking methods) and submits his\/her feedback by choosing an answer in \"Your option.\"\nTable 4: Results of user study\nTable 4 shows the experimental results, where \"Linear\" denotes the linear fusion method, \"Nonlinear\" denotes the nonlinear fusion method, \"Norm. Only\" means the Maxmin normalization-only method, and \"Manually\" means the manually tuned method.\nThe three numbers in each item, say 29:13:10, mean that 29 judgments preferred the linear fusion results, 13 judgments considered the two methods equivalent, and 10 judgments preferred the normalization-only method.\nWe conducted an ANOVA analysis and obtained the following conclusions: 1.\nBoth the linear and nonlinear methods are significantly better than the \"Norm. Only\" method, with respective P-values 0.00165 (<0.05) and 0.00073 (<<0.05).\nThis result is consistent with the \u03b4-measure evaluation result.\nThe \"Norm. Only\" method assumes that the top 10% of photos in different forums are of the same quality.\nHowever, this assumption does not hold in general.\nFor example, a top-10% photo in a top-tier photo forum is generally of higher quality than a top-10% photo in a second-tier photo forum.\nSimilarly, the top 10% of students in a top-tier university and those in a second-tier
university are generally of different quality.\nBoth the linear and nonlinear fusion methods acknowledge the existence of such differences and aim at quantifying them.\nTherefore, they perform better than the \"Norm. Only\" method.\n2.\nThe linear fusion method is significantly better than the nonlinear one, with P-value 1.195 \u00d7 10^-10.\nThis result is rather surprising, as the more complicated ranking method was expected to tune the ranking more finely than the linear one.\nThe main reason for this result may be that it is difficult to find the best intervals on which the nonlinear tuning should be carried out, and we simply chose the middle part of the Mode-90% Percentile normalization method.\nThe time-consuming and subjective evaluation method--user studies--prevented us from tuning these parameters extensively.\n3.\nThe proposed linear and nonlinear methods perform about the same as, or slightly better than, the manually tuned method.\nGiven that the linear\/nonlinear fusion methods are fully automatic approaches, they are considered practical and efficient solutions when more communities (e.g.
dozens of communities) need to be integrated.\n4.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we studied the Web object-ranking problem in cases lacking object relationships, where traditional ranking algorithms are no longer valid, and took high-quality photo search as the test bed for this investigation.\nWe have built a vertical high-quality photo search engine, and proposed score fusion methods that can automatically integrate as many data sources (Web forums) as possible.\nThe proposed fusion methods leverage the hidden links discovered by a duplicate photo detection algorithm, and minimize the score differences of duplicate photos in different forums.\nBoth the intermediate results and the user studies show that the proposed fusion methods are a practical and efficient solution to Web object ranking in the absence of such relationships.\nThough the experiments were conducted on high-quality photo ranking, the proposed algorithms are also applicable to other kinds of Web objects, including video clips, poems, short stories, music, drawings, sculptures, and so on.\nThe current system is far from perfect.\nIn order to make it more effective, more detailed analysis of the vertical domain (e.g., Web photo forums) is needed.\nThe following points, for example, may improve the search results and will be our future work: 1.\nmore subtle analysis and utilization of different kinds of ratings (e.g., novelty ratings, aesthetic ratings); 2.\ndifferentiating various communities, which may have different interests and preferences or even distinct cultural understandings; 3.\nincorporating more useful information, including photographers' and reviewers' information, to model the photos in a heterogeneous data space instead of the current homogeneous one.\nWe will further utilize collaborative filtering to recommend relevant high-quality photos to browsers.\nOne open problem is whether we can find an objective and efficient criterion for evaluating the ranking
results, instead of employing subjective and inefficient user studies, which prevented us from trying more ranking algorithms and from tuning the parameters within one algorithm.","keyphrases":["rank","web object","vertic search","web object-rank","web object-rank problem","score fusion method","duplic photo detect algorithm","algorithm","domain specif knowledg","high-qualiti photo search","imag search queri","rank photo","multipl web forum","nonlinear fusion method","rank function","imag search"],"prmu":["P","P","P","P","P","P","P","P","M","M","M","R","R","M","M","M"]} {"id":"H-41","title":"HITS on the Web: How does it Compare?","abstract":"This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, when used in combination with a state-of-the-art text retrieval algorithm exploiting anchor text. We quantified their effectiveness using three common performance measures: the mean reciprocal rank, the mean average precision, and the normalized discounted cumulative gain measurements. The evaluation is based on two large data sets: a breadth-first search crawl of 463 million web pages containing 17.6 billion hyperlinks and referencing 2.9 billion distinct URLs; and a set of 28,043 queries sampled from a query log, each query having on average 2,383 results, about 17 of which were labeled by judges. We found that HITS outperforms PageRank, but is about as effective as web-page in-degree. The same holds true when any of the link-based features are combined with the text retrieval algorithm. Finally, we studied the relationship between query specificity and the effectiveness of selected features, and found that link-based features perform better for general queries, whereas BM25F performs better for specific queries.","lvl-1":"HITS on the Web: How does it Compare?\nMarc Najork Microsoft Research 1065 La Avenida Mountain View, CA, USA najork@microsoft.com Hugo Zaragoza \u2217 Yahoo!
Research Barcelona Ocata 1 Barcelona 08003, Spain hugoz@es.yahoo-inc.com Michael Taylor Microsoft Research 7 J J Thompson Ave Cambridge CB3 0FB, UK mitaylor@microsoft.com ABSTRACT This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, when used in combination with a state-of-the-art text retrieval algorithm exploiting anchor text.\nWe quantified their effectiveness using three common performance measures: the mean reciprocal rank, the mean average precision, and the normalized discounted cumulative gain measurements.\nThe evaluation is based on two large data sets: a breadth-first search crawl of 463 million web pages containing 17.6 billion hyperlinks and referencing 2.9 billion distinct URLs; and a set of 28,043 queries sampled from a query log, each query having on average 2,383 results, about 17 of which were labeled by judges.\nWe found that HITS outperforms PageRank, but is about as effective as web-page in-degree.\nThe same holds true when any of the link-based features are combined with the text retrieval algorithm.\nFinally, we studied the relationship between query specificity and the effectiveness of selected features, and found that link-based features perform better for general queries, whereas BM25F performs better for specific queries.\nCategories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Information Storage and Retrieval-search process, selection process General Terms Algorithms, Measurement, Experimentation 1.\nINTRODUCTION Link graph features such as in-degree and PageRank have been shown to significantly improve the performance of text retrieval algorithms on the web.\nThe HITS algorithm is also believed to be of interest for web search; to some degree, one may expect HITS to be more informative than other link-based features because it is query-dependent: it tries to measure the interest of pages with respect to a given query.\nHowever, it remains
unclear today whether there are practical benefits of HITS over other link graph measures.\nThis is even more true when we consider that modern retrieval algorithms used on the web use a document representation which incorporates the document's anchor text, i.e. the text of incoming links.\nThis, at least to some degree, takes the link graph into account in a query-dependent manner.\nComparing HITS to PageRank or in-degree empirically is no easy task.\nThere are two main difficulties: scale and relevance.\nScale is important because link-based features are known to improve in quality as the document graph grows.\nIf we carry out a small experiment, our conclusions won't carry over to large graphs such as the web.\nHowever, computing HITS efficiently on a graph the size of a realistic web crawl is extraordinarily difficult.\nRelevance is also crucial because we cannot measure the performance of a feature in the absence of human judgments: what is crucial is the ranking of the top ten or so documents that a user will peruse.\nTo our knowledge, this paper is the first attempt to evaluate HITS at a large scale and compare it to other link-based features with respect to human-evaluated judgments.\nOur results confirm many of the intuitions we have about link-based features and their relationship to text retrieval methods exploiting anchor text.\nThis is reassuring: in the absence of a theoretical model capable of tying these measures to relevance, the only way to validate our intuitions is to carry out realistic experiments.\nHowever, we were quite surprised to find that HITS, a query-dependent feature, is about as effective as web page in-degree, the most simpleminded query-independent link-based feature.\nThis continues to be true when the link-based features are combined with a text retrieval algorithm exploiting anchor text.\nThe remainder of this paper is structured as follows: Section 2 surveys related work.\nSection 3 describes the data sets we used in our
study.\nSection 4 reviews the performance measures we used.\nSections 5 and 6 describe the PageRank and HITS algorithms in more detail, and sketch the computational infrastructure we employed to carry out large scale experiments.\nSection 7 presents the results of our evaluations, and Section 8 offers concluding remarks.\n2.\nRELATED WORK The idea of using hyperlink analysis for ranking web search results arose around 1997, and manifested itself in the HITS [16, 17] and PageRank [5, 21] algorithms.\nThe popularity of these two algorithms and the phenomenal success of the Google search engine, which uses PageRank, have spawned a large amount of subsequent research.\nThere are numerous attempts at improving the effectiveness of HITS and PageRank.\nQuery-dependent link-based ranking algorithms inspired by HITS include SALSA [19], Randomized HITS [20], and PHITS [7], to name a few.\nQuery-independent link-based ranking algorithms inspired by PageRank include TrafficRank [22], BlockRank [14], and TrustRank [11], and many others.\nAnother line of research is concerned with analyzing the mathematical properties of HITS and PageRank.\nFor example, Borodin et al. [3] investigated various theoretical properties of PageRank, HITS, SALSA, and PHITS, including their similarity and stability, while Bianchini et al. [2] studied the relationship between the structure of the web graph and the distribution of PageRank scores, and Langville and Meyer examined basic properties of PageRank such as existence and uniqueness of an eigenvector and convergence of power iteration [18].\nGiven the attention that has been paid to improving the effectiveness of PageRank and HITS, and the thorough studies of the mathematical properties of these algorithms, it is somewhat surprising that very few evaluations of their effectiveness have been published.\nWe are aware of two studies that have attempted to formally evaluate the effectiveness of HITS and of PageRank.\nAmento et al. 
[1] employed quantitative measures, but based their experiments on the result sets of just 5 queries and the web-graph induced by topical crawls around the result set of each query.\nA more recent study by Borodin et al. [4] is based on 34 queries, result sets of 200 pages per query obtained from Google, and a neighborhood graph derived by retrieving 50 in-links per result from Google.\nBy contrast, our study is based on over 28,000 queries and a web graph covering 2.9 billion URLs.\n3.\nOUR DATA SETS Our evaluation is based on two data sets: a large web graph and a substantial set of queries with associated results, some of which were labeled by human judges.\nOur web graph is based on a web crawl that was conducted in a breadth-first-search fashion, and successfully retrieved 463,685,607 HTML pages.\nThese pages contain 17,672,011,890 hyperlinks (after eliminating duplicate hyperlinks embedded in the same web page), which refer to a total of 2,897,671,002 URLs.\nThus, at the end of the crawl there were 2,433,985,395 URLs in the frontier set of the crawler that had been discovered, but not yet downloaded.\nThe mean out-degree of crawled web pages is 38.11; the mean in-degree of discovered pages (whether crawled or not) is 6.10.\nAlso, it is worth pointing out that there is a lot more variance in in-degrees than in out-degrees; some popular pages have millions of incoming links.\nAs we will see, this property affects the computational cost of HITS.\nOur query set was produced by sampling 28,043 queries from the MSN Search query log, and retrieving a total of 66,846,214 result URLs for these queries (using commercial search engine technology), or about 2,838 results per query on average.\nIt is important to point out that our 2.9 billion URL web graph does not cover all these result URLs.\nIn fact, only 9,525,566 of the result URLs (about 14.25%) were covered by the graph.\n485,656 of the results in the query set (about 0.73% of all results, or about 17.3 results 
per query) were rated by human judges as to their relevance to the given query, and labeled on a six-point scale (the labels being definitive, excellent, good, fair, bad, and detrimental). Results were selected for judgment based on their commercial search engine placement; in other words, the subset of labeled results is not random, but biased towards documents considered relevant by pre-existing ranking algorithms. Involving a human in the evaluation process is extremely cumbersome and expensive; however, human judgments are crucial for the evaluation of search engines. This is so because no document features have been found yet that can effectively estimate the relevance of a document to a user query. Since content-match features are very unreliable (and link features even more so, as we will see), we need to ask a human to evaluate the results in order to compare the quality of features. Evaluating retrieval results from document scores and human judgments is not trivial, and has been the subject of many investigations in the IR community. A good performance measure should correlate with user satisfaction, taking into account that users dislike having to delve deep into the results to find relevant documents. For this reason, standard correlation measures (such as the correlation coefficient between the score and the judgment of a document), or order correlation measures (such as Kendall tau between the score-induced and judgment-induced orders) are not adequate.

4. MEASURING PERFORMANCE

In this study, we quantify the effectiveness of various ranking algorithms using three measures: NDCG, MRR, and MAP. The normalized discounted cumulative gain (NDCG) measure [13] discounts the contribution of a document to the overall score as the document's rank increases (assuming that the best document has the lowest rank). Such a measure is particularly appropriate for search engines, as studies have shown that search engine users rarely consider anything beyond
the first few results [12]. NDCG values are normalized to be between 0 and 1, with 1 being the NDCG of a perfect ranking scheme that completely agrees with the assessment of the human judges. The discounted cumulative gain at a particular rank-threshold T (DCG@T) is defined to be

$\mathrm{DCG@}T = \sum_{j=1}^{T} \frac{1}{\log(1+j)} \left(2^{r(j)} - 1\right)$

where r(j) is the rating (0=detrimental, 1=bad, 2=fair, 3=good, 4=excellent, and 5=definitive) at rank j. The NDCG is computed by dividing the DCG of a ranking by the highest possible DCG that can be obtained for that query. Finally, the NDCGs of all queries in the query set are averaged to produce a mean NDCG. The reciprocal rank (RR) of the ranked result set of a query is defined to be the reciprocal value of the rank of the highest-ranking relevant document in the result set. The RR at rank-threshold T is defined to be 0 if none of the highest-ranking T documents is relevant. The mean reciprocal rank (MRR) of a query set is the average reciprocal rank of all queries in the query set. Given a ranked set of n results, let rel(i) be 1 if the result at rank i is relevant and 0 otherwise. The precision P(j) at rank j is defined to be $P(j) = \frac{1}{j} \sum_{i=1}^{j} \mathrm{rel}(i)$, i.e.
the fraction of the relevant results among the j highest-ranking results. The average precision (AP) at rank-threshold k is defined to be

$\mathrm{AP@}k = \frac{\sum_{i=1}^{k} P(i)\,\mathrm{rel}(i)}{\sum_{i=1}^{n} \mathrm{rel}(i)}$

The mean average precision (MAP) of a query set is the mean of the average precisions of all queries in the query set. The above definitions of MRR and MAP rely on the notion of a relevant result. We investigated two definitions of relevance: one where all documents rated fair or better were deemed relevant, and one where all documents rated good or better were deemed relevant. For reasons of space, we only report MAP and MRR values computed using the latter definition; using the former definition does not change the qualitative nature of our findings. Similarly, we computed NDCG, MAP, and MRR values for a wide range of rank-thresholds; we report results here at rank 10; again, changing the rank-threshold never led us to different conclusions. Recall that over 99% of documents are unlabeled. We chose to treat all these documents as irrelevant to the query. For some queries, however, not all relevant documents have been judged. This introduces a bias into our evaluation: features that bring new documents to the top of the rank may be penalized. This will be more acute for features less correlated to the pre-existing commercial ranking algorithms used to select documents for judgment. On the other hand, most queries have few perfect relevant documents (i.e.
home page or item searches), and those will most often be within the judged set.

5. COMPUTING PAGERANK ON A LARGE WEB GRAPH

PageRank is a query-independent measure of the importance of web pages, based on the notion of peer-endorsement: a hyperlink from page A to page B is interpreted as an endorsement of page B's content by page A's author. The following recursive definition captures this notion of endorsement:

$R(v) = \sum_{(u,v) \in E} \frac{R(u)}{\mathrm{Out}(u)}$

where R(v) is the score (importance) of page v, (u, v) is an edge (hyperlink) from page u to page v contained in the edge set E of the web graph, and Out(u) is the out-degree (number of embedded hyperlinks) of page u. However, this definition suffers from a severe shortcoming: in the fixed point of this recursive equation, only pages that are part of a strongly connected component receive a non-zero score. In order to overcome this deficiency, Page et al. grant each page a guaranteed minimum score, giving rise to the definition of standard PageRank:

$R(v) = \frac{d}{|V|} + (1-d) \sum_{(u,v) \in E} \frac{R(u)}{\mathrm{Out}(u)}$

where |V| is the size of the vertex set (the number of known web pages), and d is a damping factor, typically set to be between 0.1 and 0.2. Assuming that scores are normalized to sum up to 1, PageRank can be viewed as the stationary probability distribution of a random walk on the web graph, where at each step of the walk, the walker with probability 1 - d moves from its current node u to a neighboring node v, and with probability d selects a node uniformly at random from all nodes in the graph and jumps to it. In the limit, the random walker is at node v with probability R(v). One issue that has to be addressed when implementing PageRank is how to deal with sink nodes, nodes that do not have any outgoing links. One possibility would be to select another node uniformly at random and transition to it; this is equivalent to adding edges from each sink node to all other nodes in the graph. We chose the
alternative approach of introducing a single phantom node. Each sink node has an edge to the phantom node, and the phantom node has an edge to itself. In practice, PageRank scores can be computed using power iteration. Since PageRank is query-independent, the computation can be performed off-line ahead of query time. This property has been key to PageRank's success, since it is a challenging engineering problem to build a system that can perform any non-trivial computation on the web graph at query time. In order to compute PageRank scores for all 2.9 billion nodes in our web graph, we implemented a distributed version of PageRank. The computation consists of two distinct phases. In the first phase, the link files produced by the web crawler, which contain page URLs and their associated link URLs in textual form, are partitioned among the machines in the cluster used to compute PageRank scores, and converted into a more compact format along the way. Specifically, URLs are partitioned across the machines in the cluster based on a hash of the URLs' host component, and each machine in the cluster maintains a table mapping the URL to a 32-bit integer. The integers are drawn from a densely packed space, so as to make suitable indices into the array that will later hold the PageRank scores. The system then translates our log of pages and their associated hyperlinks into a compact representation where both page URLs and link URLs are represented by their associated 32-bit integers. Hashing the host component of the URLs guarantees that all URLs from the same host are assigned to the same machine in our scoring cluster. Since over 80% of all hyperlinks on the web are relative (that is, are between two pages on the same host), this property greatly reduces the amount of network communication required by the second stage of the distributed scoring computation. The second phase performs the actual PageRank power iteration. Both the link data and the current
PageRank vector reside on disk and are read in a streaming fashion, while the new PageRank vector is maintained in memory. We represent PageRank scores as 64-bit floating point numbers. PageRank contributions to pages assigned to remote machines are streamed to the remote machine via a TCP connection. We used a three-machine cluster, each machine equipped with 16 GB of RAM, to compute standard PageRank scores for all 2.9 billion URLs that were contained in our web graph. We used a damping factor of 0.15, and performed 200 power iterations. Starting at iteration 165, the L-infinity norm of the change in the PageRank vector from one iteration to the next had stopped decreasing, indicating that we had reached as much of a fixed point as the limitations of 64-bit floating point arithmetic would allow.

[Figure 1: Effectiveness of authority scores computed using different parameterizations of HITS. Three graphs plot NDCG@10, MAP@10, and MRR@10 against the number of back-links sampled per result, with one curve each for hits-aut-all, hits-aut-ih, and hits-aut-id.]

A post-processing phase uses the final PageRank vectors (one per machine) and the table mapping URLs to 32-bit integers (representing indices into each PageRank vector) to score the result URLs in our query log. As mentioned above, our web graph covered 9,525,566 of the 66,846,214 result URLs. These URLs were annotated with their computed PageRank score; all other URLs received a score of 0.

6. HITS

HITS, unlike PageRank, is a query-dependent ranking algorithm. HITS (which stands for Hypertext Induced Topic Search) is based on the following two intuitions: First, hyperlinks can be viewed as topical endorsements: a hyperlink from a page u devoted to topic T to another page v is likely to endorse the authority of
v with respect to topic T. Second, the result set of a particular query is likely to have a certain amount of topical coherence. Therefore, it makes sense to perform link analysis not on the entire web graph, but rather on just the neighborhood of pages contained in the result set, since this neighborhood is more likely to contain topically relevant links. But while the set of nodes immediately reachable from the result set is manageable (given that most pages have only a limited number of hyperlinks embedded into them), the set of pages immediately leading to the result set can be enormous. For this reason, Kleinberg suggests sampling a fixed-size random subset of the pages linking to any high-in-degree page in the result set. Moreover, Kleinberg suggests considering only links that cross host boundaries, the rationale being that links between pages on the same host (intrinsic links) are likely to be navigational or nepotistic and not topically relevant. Given a web graph (V, E) with vertex set V and edge set E ⊆ V × V, and the set of result URLs to a query (called the root set R ⊆ V) as input, HITS computes a neighborhood graph consisting of a base set B ⊆ V (the root set and some of its neighboring vertices) and some of the edges in E induced by B. In order to formalize the definition of the neighborhood graph, it is helpful to first introduce a sampling operator and the concept of a link-selection predicate. Given a set A, the notation $S_n[A]$ draws n elements uniformly at random from A; $S_n[A] = A$ if $|A| \le n$. A link-selection predicate P takes an edge (u, v) ∈ E. In this study, we use the following three link-selection predicates:

$\mathit{all}(u,v) \Leftrightarrow \mathrm{true}$
$\mathit{ih}(u,v) \Leftrightarrow \mathrm{host}(u) \ne \mathrm{host}(v)$
$\mathit{id}(u,v) \Leftrightarrow \mathrm{domain}(u) \ne \mathrm{domain}(v)$

where host(u) denotes the host of URL u, and domain(u) denotes the domain of URL u.
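The sampling operator, the three link-selection predicates, and the base-set construction can be sketched as follows. This is a minimal, hypothetical Python rendering: the `host` and `domain` helpers are crude stand-ins (the paper's notion of a domain is the name purchased from a registrar, which a real implementation would resolve via a public-suffix list), and the graph is represented as plain adjacency dictionaries.

```python
import random
from urllib.parse import urlsplit

def sample(items, n):
    """The sampling operator S_n[A]: draw n elements uniformly at
    random from A, or return A itself if |A| <= n."""
    items = list(items)
    return items if len(items) <= n else random.sample(items, n)

def host(url):
    return urlsplit(url).netloc.lower()

def domain(url):
    # Crude stand-in: take the last two host labels.  A real system
    # would consult a public-suffix list to find the registered domain.
    return ".".join(host(url).split(".")[-2:])

# The three link-selection predicates over an edge (u, v):
def p_all(u, v): return True
def p_ih(u, v):  return host(u) != host(v)      # inter-host links only
def p_id(u, v):  return domain(u) != domain(v)  # inter-domain links only

def neighborhood(root, out_links, in_links, pred, s):
    """Build the base set B = R | I | O (root set, sampled in-linking
    set, out-linked set) and the induced, predicate-filtered edge set N."""
    O = {v for u in root for v in out_links.get(u, ()) if pred(u, v)}
    I = {u for v in root
           for u in sample([u for u in in_links.get(v, ()) if pred(u, v)], s)}
    B = set(root) | I | O
    N = [(u, v) for u in B for v in out_links.get(u, ()) if v in B and pred(u, v)]
    return B, N
```

Swapping `pred` between `p_all`, `p_ih`, and `p_id` reproduces the three variants studied below, and `s` corresponds to the back-link sampling value.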
So, all is true for all links, whereas ih is true only for inter-host links, and id is true only for inter-domain links. The outlinked-set $O^P$ of the root set R w.r.t. a link-selection predicate P is defined to be:

$O^P = \bigcup_{u \in R} \{v \in V : (u,v) \in E \wedge P(u,v)\}$

The inlinking-set $I^P_s$ of the root set R w.r.t. a link-selection predicate P and a sampling value s is defined to be:

$I^P_s = \bigcup_{v \in R} S_s[\{u \in V : (u,v) \in E \wedge P(u,v)\}]$

The base set $B^P_s$ of the root set R w.r.t. P and s is defined to be:

$B^P_s = R \cup I^P_s \cup O^P$

The neighborhood graph $(B^P_s, N^P_s)$ has the base set $B^P_s$ as its vertex set and an edge set $N^P_s$ containing those edges in E that are covered by $B^P_s$ and permitted by P:

$N^P_s = \{(u,v) \in E : u \in B^P_s \wedge v \in B^P_s \wedge P(u,v)\}$

To simplify notation, we write B to denote $B^P_s$, and N to denote $N^P_s$. For each node u in the neighborhood graph, HITS computes two scores: an authority score A(u), estimating how authoritative u is on the topic induced by the query, and a hub score H(u), indicating whether u is a good reference to many authoritative pages. This is done using the following algorithm:

1. For all $u \in B$ do $H(u) := \sqrt{1/|B|}$, $A(u) := \sqrt{1/|B|}$.
2. Repeat until H and A converge:
   (a) For all $v \in B$: $A'(v) := \sum_{(u,v) \in N} H(u)$
   (b) For all $u \in B$: $H'(u) := \sum_{(u,v) \in N} A'(v)$
   (c) $H := H' / \|H'\|_2$, $A := A' / \|A'\|_2$

where $\|X\|_2$ denotes the Euclidean norm, so that each vector is normalized to unit length in Euclidean space, i.e.
the squares of its elements sum up to 1. In practice, implementing a system that can compute HITS within the time constraints of a major search engine (where the peak query load is in the thousands of queries per second, and the desired query response time is well below one second) is a major engineering challenge. Among other things, the web graph cannot reasonably be stored on disk, since seek times of modern hard disks are too slow to retrieve the links within the time constraints, and the graph does not fit into the main memory of a single machine, even when using the most aggressive compression techniques.

[Figure 2: Effectiveness of different features. Three bar charts report NDCG@10, MAP@10, and MRR@10 for each isolated feature. bm25f leads every panel (.221 NDCG@10, .100 MAP@10, .273 MRR@10), followed by the in-degree variants, HITS authority scores, and PageRank; out-degree and HITS hub scores trail far behind, and a random ordering comes last.]

In order to experiment with HITS and other query-dependent link-based ranking algorithms that require non-regular accesses to arbitrary nodes and edges in the web graph, we implemented a system called the Scalable Hyperlink Store, or SHS for short. SHS is a special-purpose database, distributed over an arbitrary number of machines, that keeps a highly compressed version of the
web graph in memory and allows very fast lookup of nodes and edges.\nOn our hardware, it takes an average of 2 microseconds to map a URL to a 64-bit integer handle called a UID, 15 microseconds to look up all incoming or outgoing link UIDs associated with a page UID, and 5 microseconds to map a UID back to a URL (the last functionality not being required by HITS).\nThe RPC overhead is about 100 microseconds, but the SHS API allows many lookups to be batched into a single RPC request.\nWe implemented the HITS algorithm using the SHS infrastructure.\nWe compiled three SHS databases, one containing all 17.6 billion links in our web graph (all), one containing only links between pages that are on different hosts (ih, for inter-host), and one containing only links between pages that are on different domains (id).\nWe consider two URLs to belong to different hosts if the host portions of the URLs differ (in other words, we make no attempt to determine whether two distinct symbolic host names refer to the same computer), and we consider a domain to be the name purchased from a registrar (for example, we consider news.bbc.co.uk and www.bbc.co.uk to be different hosts belonging to the same domain).\nUsing each of these databases, we computed HITS authority and hub scores for various parameterizations of the sampling operator S, sampling between 1 and 100 back-links of each page in the root set.\nResult URLs that were not covered by our web graph automatically received authority and hub scores of 0, since they were not connected to any other nodes in the neighborhood graph and therefore did not receive any endorsements.\nWe performed forty-five different HITS computations, each combining one of the three link selection predicates (all, ih, and id) with a sampling value.\nFor each combination, we loaded one of the three databases into an SHS system running on six machines (each equipped with 16 GB of RAM), and computed HITS authority and hub scores, one query at a time.\nThe 
longest-running combination (using the all database and sampling 100 back-links of each root set vertex) required 30,456 seconds to process the entire query set, or about 1.1 seconds per query on average.

7. EXPERIMENTAL RESULTS

For a given query Q, we need to rank the set of documents satisfying Q (the result set of Q). Our hypothesis is that good features should be able to rank relevant documents in this set higher than non-relevant ones, and this should result in an increase in each performance measure over the query set. We are specifically interested in evaluating the usefulness of HITS and other link-based features. In principle, we could do this by sorting the documents in each result set by their feature value, and comparing the resulting NDCGs. We call this ranking with isolated features. Let us first examine the relative performance of the different parameterizations of the HITS algorithm. Recall that we computed HITS for each combination of the three link-selection schemes, all links (all), inter-host links only (ih), and inter-domain links only (id), with back-link sampling values ranging from 1 to 100. Figure 1 shows the impact of the number of sampled back-links on the retrieval performance of HITS authority scores. Each graph is associated with one performance measure. The horizontal axis of each graph represents the number of sampled back-links, the vertical axis represents performance under the appropriate measure, and each curve depicts a link-selection scheme. The id scheme slightly outperforms ih, and both vastly outperform the all scheme: eliminating nepotistic links pays off. The performance of the all scheme increases as more back-links of each root set vertex are sampled, while the performance of the id and ih schemes peaks at between 10 and 25 samples and then plateaus or even declines, depending on the performance measure. Having compared different parameterizations of HITS, we will now fix the number of sampled
back-links at 100 and compare the three link-selection schemes against other isolated features: PageRank; in-degree and out-degree counting links of all pages, of different hosts only, and of different domains only (the all, ih, and id datasets, respectively); and a text retrieval algorithm exploiting anchor text: BM25F [24]. BM25F is a state-of-the-art ranking function based solely on the textual content of the documents and their associated anchor texts. BM25F is a descendant of BM25 that combines the different textual fields of a document, namely title, body, and anchor text. This model has been shown to be one of the best-performing web search scoring functions over the last few years [8, 24]. BM25F has a number of free parameters (2 per field, 6 in our case); we used the parameter values described in [24].

[Figure 3: Effectiveness measures for linear combinations of link-based features with BM25F. Three bar charts report NDCG@10, MAP@10, and MRR@10 for each combination. The combinations with in-degree, PageRank, and HITS authority features cluster at the top (e.g. NDCG@10 around .34), the combinations with out-degree and HITS hub features form a second tier (around .31), and all combinations outperform the BM25F baseline (.231 NDCG@10).]

Figure 2 shows the NDCG, MRR, and MAP measures of these features. Again, all performance measures (at all rank-thresholds we explored) agree. As expected, BM25F
outperforms all link-based features by a large margin. The link-based features are divided into two groups, with a noticeable performance drop between the groups. The better-performing group consists of the features that are based on the number and/or quality of incoming links (in-degree, PageRank, and HITS authority scores); the worse-performing group consists of the features that are based on the number and/or quality of outgoing links (out-degree and HITS hub scores). In the group of features based on incoming links, features that ignore nepotistic links perform better than their counterparts using all links. Moreover, using only inter-domain (id) links seems to be marginally better than using inter-host (ih) links. The fact that features based on outgoing links underperform those based on incoming links matches our expectations; if anything, it is mildly surprising that outgoing links provide a useful signal for ranking at all. On the other hand, the fact that in-degree features outperform PageRank under all measures is quite surprising. A possible explanation is that link spammers have been targeting the published PageRank algorithm for many years, and that this has led to anomalies in the web graph that affect PageRank, but not other link-based features that explore only a distance-1 neighborhood of the result set. Likewise, it is surprising that simple query-independent features such as in-degree, which might estimate global quality but cannot capture relevance to a query, would outperform query-dependent features such as HITS authority scores. However, we cannot investigate the effect of these features in isolation, without regard to the overall ranking function, for several reasons. First, features based on the textual content of documents (as opposed to link-based features) are the best predictors of relevance. Second, link-based features can be strongly correlated with textual features for several reasons, mainly the correlation between
in-degree and number of textual anchor matches. Therefore, one must consider the effect of link-based features in combination with textual features. Otherwise, we may find a link-based feature that is very good in isolation but is strongly correlated with textual features and results in no overall improvement; and vice versa, we may find a link-based feature that is weak in isolation but significantly improves overall performance. For this reason, we have studied the combination of the link-based features above with BM25F. All feature combinations were done by considering the linear combination of two features as a document score, using the formula $\mathrm{score}(d) = \sum_{i=1}^{n} w_i T_i(F_i(d))$, where d is a document (or document-query pair, in the case of BM25F), $F_i(d)$ (for $1 \le i \le n$) is a feature extracted from d, $T_i$ is a transform, and $w_i$ is a free scalar weight that needs to be tuned. We chose transform functions that we empirically determined to be well-suited. Table 1 shows the chosen transform functions.

Table 1: Near-optimal feature transform functions.
  Feature        Transform function
  bm25f          T(s) = s
  pagerank       T(s) = log(s + 3·10^-12)
  degree-in-*    T(s) = log(s + 3·10^-2)
  degree-out-*   T(s) = log(s + 3·10^3)
  hits-aut-*     T(s) = log(s + 3·10^-8)
  hits-hub-*     T(s) = log(s + 3·10^-1)

This type of linear combination is appropriate if we assume features to be independent with respect to relevance and an exponential model for link features, as discussed in [8]. We tuned the weights by selecting a random subset of 5,000 queries from the query set, used an iterative refinement process to find weights that maximized a given performance measure on that training set, and used the remaining 23,043 queries to measure the performance of the scoring functions thus derived. We explored the pairwise combination of BM25F with every link-based scoring function. Figure 3 shows the NDCG, MRR, and MAP measures of these feature
combinations, together with a baseline BM25F score (the right-most bar in each graph), which was computed using the same subset of 23,043 queries that were used as the test set for the feature combinations. Regardless of the performance measure applied, we can make the following general observations:

1. Combining any of the link-based features with BM25F results in a substantial performance improvement over BM25F in isolation.
2. The combination of BM25F with features based on incoming links (PageRank, in-degree, and HITS authority scores) performs substantially better than the combination with features based on outgoing links (HITS hub scores and out-degree).
3. The performance differences between the various combinations of BM25F with features based on incoming links are comparatively small, and the relative ordering of the feature combinations is fairly stable across the different performance measures used.

[Figure 4: Effectiveness measures for selected isolated features, broken down by query specificity. The chart plots MAP@10 for bm25fnorm, pagerank, degree-in-id, and hits-aut-id-100 over 13 query-specificity buckets, whose sizes range from 26 to 4,284 queries.]

However, the combination of BM25F with any in-degree variant, and in particular with id in-degree, consistently outperforms the combination of BM25F with PageRank or HITS authority scores, and can be computed much more easily and quickly. Finally, we investigated whether certain features are better for some queries than for others. In particular, we are interested in the relationship between the specificity of a query and the performance of different ranking features. The most straightforward measure of the specificity of a query Q would be the number of documents in a search engine's corpus that satisfy Q.
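Before turning to query specificity, the linear feature combination described above can be sketched in a few lines. This is an illustrative rendering only: the transforms come from Table 1, but the feature values and weights below are made-up placeholders (the paper tuned the actual weights on a 5,000-query training set).

```python
import math

# Transform functions T_i from Table 1, one entry per feature family.
TRANSFORMS = {
    "bm25f":      lambda s: s,
    "pagerank":   lambda s: math.log(s + 3e-12),
    "degree-in":  lambda s: math.log(s + 3e-2),
    "degree-out": lambda s: math.log(s + 3e3),
    "hits-aut":   lambda s: math.log(s + 3e-8),
    "hits-hub":   lambda s: math.log(s + 3e-1),
}

def combined_score(features, weights):
    """score(d) = sum_i w_i * T_i(F_i(d)) over the features of document d."""
    return sum(weights[name] * TRANSFORMS[name](value)
               for name, value in features.items())

# Illustrative pairwise combination of BM25F with in-degree
# (feature values and weights are invented for the example):
doc = {"bm25f": 12.5, "degree-in": 120.0}
w = {"bm25f": 1.0, "degree-in": 0.4}
s = combined_score(doc, w)
```

The small additive offsets inside the logarithms keep the transforms defined at zero feature values, which matters because most result URLs receive a score of 0 for the link-based features.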
Unfortunately, the query set available to us did not contain this information. Therefore, we chose to approximate the specificity of Q by summing up the inverse document frequencies of the individual query terms comprising Q. The inverse document frequency (IDF) of a term t with respect to a corpus C is defined to be $\log(N/\mathrm{doc}(t))$, where doc(t) is the number of documents in C containing t and N is the total number of documents in C. By summing up the IDFs of the query terms, we make the (flawed) assumption that the individual query terms are independent of each other. However, while not perfect, this approximation is at least directionally correct. We broke down our query set into 13 buckets, each bucket associated with an interval of query IDF values, and we computed performance metrics for all ranking functions applied (in isolation) to the queries in each bucket. In order to keep the graphs readable, we will not show the performance of all the features, but rather restrict ourselves to the four most interesting ones: PageRank, id HITS authority scores, id in-degree, and BM25F. Figure 4 shows the MAP@10 for all 13 query-specificity buckets. Buckets on the far left of each graph represent very general queries; buckets on the far right represent very specific queries. The numbers along the top of each graph show the number of queries in each bucket (e.g. the right-most bucket contains 1,629 queries). BM25F performs best for medium-specific queries, peaking at the buckets representing the IDF sum interval [12,14). By comparison, HITS peaks at the bucket representing the IDF sum interval [4,6), and PageRank and in-degree peak at the bucket representing the interval [6,8), i.e.
more general queries.

8. CONCLUSIONS AND FUTURE WORK

This paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, in particular PageRank and in-degree, when applied in isolation or in combination with a text retrieval algorithm exploiting anchor text (BM25F). The evaluation is carried out with respect to a large number of human-evaluated queries, using three different measures of effectiveness: NDCG, MRR, and MAP. Evaluating link-based features in isolation, we found that web page in-degree outperforms PageRank, and is about as effective as HITS authority scores. HITS hub scores and web page out-degree are much less effective ranking features, but still outperform a random ordering. A linear combination of any link-based feature with BM25F produces a significant improvement in performance, and there is a clear difference between combining BM25F with a feature based on incoming links (in-degree, PageRank, or HITS authority scores) and a feature based on outgoing links (HITS hub scores and out-degree), but within those two groups the precise choice of link-based feature matters relatively little. We believe that the measurements presented in this paper provide a solid evaluation of the best-known link-based ranking schemes. There are many possible variants of these schemes, and many other link-based ranking algorithms have been proposed in the literature, hence we do not claim this work to be the last word on this subject, but rather a first step on a long road. Future work includes the evaluation of different parameterizations of PageRank and HITS. In particular, we would like to study the impact of changes to the PageRank damping factor on effectiveness, the impact of various schemes meant to counteract the effects of link spam, and the effect of weighing hyperlinks differently depending on whether they are nepotistic or not. Going beyond PageRank and HITS, we would like to measure the
effectiveness of other link-based ranking algorithms, such as SALSA. Finally, we are planning to experiment with more complex feature combinations.

9. REFERENCES
[1] B. Amento, L. Terveen, and W. Hill. Does authority mean quality? Predicting expert quality ratings of web documents. In Proc. of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 296-303, 2000.
[2] M. Bianchini, M. Gori, and F. Scarselli. Inside PageRank. ACM Transactions on Internet Technology, 5(1):92-128, 2005.
[3] A. Borodin, G. O. Roberts, and J. S. Rosenthal. Finding authorities and hubs from link structures on the World Wide Web. In Proc. of the 10th International World Wide Web Conference, pages 415-429, 2001.
[4] A. Borodin, G. O. Roberts, J. S. Rosenthal, and P. Tsaparas. Link analysis ranking: algorithms, theory, and experiments. ACM Transactions on Internet Technology, 5(1):231-297, 2005.
[5] S. Brin and L. Page. The anatomy of a large-scale hypertextual Web search engine. Computer Networks and ISDN Systems, 30(1-7):107-117, 1998.
[6] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proc. of the 22nd International Conference on Machine Learning, pages 89-96, 2005.
[7] D. Cohn and H. Chang. Learning to probabilistically identify authoritative documents. In Proc. of the 17th International Conference on Machine Learning, pages 167-174, 2000.
[8] N. Craswell, S. Robertson, H. Zaragoza, and M. Taylor. Relevance weighting for query independent evidence. In Proc. of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 416-423, 2005.
[9] E. Garfield. Citation analysis as a tool in journal evaluation. Science, 178(4060):471-479, 1972.
[10] Z. Gyöngyi and H.
Garcia-Molina.\nWeb spam taxonomy.\nIn 1st International Workshop on Adversarial Information Retrieval on the Web, 2005.\n[11] Z. Gy\u00a8ongyi, H. Garcia-Molina, and J. Pedersen.\nCombating web spam with TrustRank.\nIn Proc.\nof the 30th International Conference on Very Large Databases, pages 576-587, 2004.\n[12] B. J. Jansen, A. Spink, J. Bateman, and T. Saracevic.\nReal life information retrieval: a study of user queries on the web.\nACM SIGIR Forum, 32(1):5-17, 1998.\n[13] K. J\u00a8arvelin and J. Kek\u00a8al\u00a8ainen.\nCumulated gain-based evaluation of IR techniques.\nACM Transactions on Information Systems, 20(4):422-446, 2002.\n[14] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and G. H. Golub.\nExtrapolation methods for accelerating PageRank computations.\nIn Proc.\nof the 12th International World Wide Web Conference, pages 261-270, 2003.\n[15] M. M. Kessler.\nBibliographic coupling between scientific papers.\nAmerican Documentation, 14(1):10-25, 1963.\n[16] J. M. Kleinberg.\nAuthoritative sources in a hyperlinked environment.\nIn Proc.\nof the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 668-677, 1998.\n[17] J. M. Kleinberg.\nAuthoritative sources in a hyperlinked environment.\nJournal of the ACM, 46(5):604-632, 1999.\n[18] A. N. Langville and C. D. Meyer.\nDeeper inside PageRank.\nInternet Mathematics, 1(3):2005, 335-380.\n[19] R. Lempel and S. Moran.\nThe stochastic approach for link-structure analysis (SALSA) and the TKC effect.\nComputer Networks and ISDN Systems, 33(1-6):387-401, 2000.\n[20] A. Y. Ng, A. X. Zheng, and M. I. Jordan.\nStable algorithms for link analysis.\nIn Proc.\nof the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 258-266, 2001.\n[21] L. Page, S. Brin, R. Motwani, and T. Winograd.\nThe PageRank citation ranking: Bringing order to the web.\nTechnical report, Stanford Digital Library Technologies Project, 1998.\n[22] J. A. 
Tomlin.\nA new paradigm for ranking pages on the World Wide Web.\nIn Proc.\nof the 12th International World Wide Web Conference, pages 350-355, 2003.\n[23] T. Upstill, N. Craswell, and D. Hawking.\nPredicting fame and fortune: Pagerank or indegree?\nIn Proc.\nof the Australasian Document Computing Symposium, pages 31-40, 2003.\n[24] H. Zaragoza, N. Craswell, M. Taylor, S. Saria, and S. Robertson.\nMicrosoft Cambridge at TREC-13: Web and HARD tracks.\nIn Proc.\nof the 13th Text Retrieval Conference, 2004.","lvl-3":"SIGIR 2007 Proceedings Session 20: Link Analysis HITS on the Web: How does it Compare?\n*\nABSTRACT\nThis paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, when used in combination with a state-ofthe-art text retrieval algorithm exploiting anchor text.\nWe quantified their effectiveness using three common performance measures: the mean reciprocal rank, the mean average precision, and the normalized discounted cumulative gain measurements.\nThe evaluation is based on two large data sets: a breadth-first search crawl of 463 million web pages containing 17.6 billion hyperlinks and referencing 2.9 billion distinct URLs; and a set of 28,043 queries sampled from a query log, each query having on average 2,383 results, about 17 of which were labeled by judges.\nWe found that HITS outperforms PageRank, but is about as effective as web-page in-degree.\nThe same holds true when any of the link-based features are combined with the text retrieval algorithm.\nFinally, we studied the relationship between query specificity and the effectiveness of selected features, and found that link-based features perform better for general queries, whereas BM25F performs better for specific queries.\n1.\nINTRODUCTION\nLink graph features such as in-degree and PageRank have been shown to significantly improve the performance of text retrieval algorithms on the web.\nThe HITS algorithm is also believed to be 
of interest for web search; to some degree, one may expect HITS to be more informative that other link-based features because it is query-dependent: it tries to measure the interest of pages with respect to a given query.\nHowever, it remains unclear today whether there are practical benefits of HITS over other link graph measures.\nThis is even more true when we consider that modern retrieval algorithms used on the web use a document representation which incorporates the document's anchor text, i.e. the text of incoming links.\nThis, at least to some degree, takes the link graph into account, in a query-dependent manner.\nComparing HITS to PageRank or in-degree empirically is no easy task.\nThere are two main difficulties: scale and relevance.\nScale is important because link-based features are known to improve in quality as the document graph grows.\nIf we carry out a small experiment, our conclusions won't carry over to large graphs such as the web.\nHowever, computing HITS efficiently on a graph the size of a realistic web crawl is extraordinarily difficult.\nRelevance is also crucial because we cannot measure the performance of a feature in the absence of human judgments: what is crucial is ranking at the top of the ten or so documents that a user will peruse.\nTo our knowledge, this paper is the first attempt to evaluate HITS at a large scale and compare it to other link-based features with respect to human evaluated judgment.\nOur results confirm many of the intuitions we have about link-based features and their relationship to text retrieval methods exploiting anchor text.\nThis is reassuring: in the absence of a theoretical model capable of tying these measures with relevance, the only way to validate our intuitions is to carry out realistic experiments.\nHowever, we were quite surprised to find that HITS, a query-dependent feature, is about as effective as web page in-degree, the most simpleminded query-independent link-based feature.\nThis continues to 
be true when the link-based features are combined with a text retrieval algorithm exploiting anchor text.\nThe remainder of this paper is structured as follows: Section 2 surveys related work.\nSection 3 describes the data sets we used in our study.\nSection 4 reviews the performance measures we used.\nSections 5 and 6 describe the PageRank and HITS algorithms in more detail, and sketch the computational infrastructure we employed to carry out large scale experiments.\nSection 7 presents the results of our evaluations, and Section 8 offers concluding remarks.\n2.\nRELATED WORK\nThe idea of using hyperlink analysis for ranking web search results arose around 1997, and manifested itself in the HITS [16, 17] and PageRank [5, 21] algorithms.\nThe popularity of these two algorithms and the phenomenal success of the Google search engine, which uses PageRank, have spawned a large amount of subsequent research.\nThere are numerous attempts at improving the effectiveness of HITS and PageRank.\nQuery-dependent link-based ranking algorithms inspired by HITS include SALSA [19], Randomized HITS [20], and PHITS [7], to name a few.\nQuery-independent link-based ranking algorithms inspired by PageRank include TrafficRank [22], BlockRank [14], and TrustRank [11], and many others.\nAnother line of research is concerned with analyzing the mathematical properties of HITS and PageRank.\nFor example, Borodin et al. [3] investigated various theoretical properties of PageRank, HITS, SALSA, and PHITS, including their similarity and stability, while Bianchini et al. 
[2] studied the relationship between the structure of the web graph and the distribution of PageRank scores, and Langville and Meyer examined basic properties of PageRank such as existence and uniqueness of an eigenvector and convergence of power iteration [18].\nGiven the attention that has been paid to improving the effectiveness of PageRank and HITS, and the thorough studies of the mathematical properties of these algorithms, it is somewhat surprising that very few evaluations of their effectiveness have been published.\nWe are aware of two studies that have attempted to formally evaluate the effectiveness of HITS and of PageRank.\nAmento et al. [1] employed quantitative measures, but based their experiments on the result sets of just 5 queries and the web-graph induced by topical crawls around the result set of each query.\nA more recent study by Borodin et al. [4] is based on 34 queries, result sets of 200 pages per query obtained from Google, and a neighborhood graph derived by retrieving 50 in-links per result from Google.\nBy contrast, our study is based on over 28,000 queries and a web graph covering 2.9 billion URLs.\n3.\nOUR DATA SETS\n4.\nMEASURING PERFORMANCE\n5.\nCOMPUTING PAGERANK ON A LARGE WEB GRAPH\n6.\nHITS\n7.\nEXPERIMENTAL RESULTS\n8.\nCONCLUSIONS AND FUTURE WORK\nThis paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, in particular PageRank and in-degree, when applied in isolation or in combination with a text retrieval algorithm exploiting anchor text (BM25F).\nEvaluation is carried out with respect to a large number of human evaluated queries, using three different measures of effectiveness: NDCG, MRR, and MAP.\nEvaluating link-based features in isolation, we found that web page in-degree outperforms PageRank, and is about as effwective as HITS authority scores.\nHITS hub scores and web page out-degree are much less effective ranking features, but still outperform a 
random ordering.\nA linear combination of any link-based features with BM25F produces a significant improvement in performance, and there is a clear difference between combining BM25F with a feature based on incoming links (indegree, PageRank, or HITS authority scores) and a feature based on outgoing links (HITS hub scores and out-degree), but within those two groups the precise choice of link-based feature matters relatively little.\nWe believe that the measurements presented in this paper provide a solid evaluation of the best well-known link-based ranking schemes.\nThere are many possible variants of these schemes, and many other link-based ranking algorithms have been proposed in the literature, hence we do not claim this work to be the last word on this subject, but rather the first step on a long road.\nFuture work includes evaluation of different parameterizations of PageRank and HITS.\nIn particular, we would like to study the impact of changes to the PageRank damping factor on effectiveness, the impact of various schemes meant to counteract the effects of link spam, and the effect of weighing hyperlinks differently depending on whether they are nepotistic or not.\nGoing beyond PageRank and HITS, we would like to measure the effectiveness of other link-based ranking algorithms, such as SALSA.\nFinally, we are planning to experiment with more complex feature combinations.","lvl-4":"SIGIR 2007 Proceedings Session 20: Link Analysis HITS on the Web: How does it Compare?\n*\nABSTRACT\nThis paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, when used in combination with a state-ofthe-art text retrieval algorithm exploiting anchor text.\nWe quantified their effectiveness using three common performance measures: the mean reciprocal rank, the mean average precision, and the normalized discounted cumulative gain measurements.\nThe evaluation is based on two large data sets: a breadth-first 
search crawl of 463 million web pages containing 17.6 billion hyperlinks and referencing 2.9 billion distinct URLs; and a set of 28,043 queries sampled from a query log, each query having on average 2,383 results, about 17 of which were labeled by judges.\nWe found that HITS outperforms PageRank, but is about as effective as web-page in-degree.\nThe same holds true when any of the link-based features are combined with the text retrieval algorithm.\nFinally, we studied the relationship between query specificity and the effectiveness of selected features, and found that link-based features perform better for general queries, whereas BM25F performs better for specific queries.\n1.\nINTRODUCTION\nLink graph features such as in-degree and PageRank have been shown to significantly improve the performance of text retrieval algorithms on the web.\nThe HITS algorithm is also believed to be of interest for web search; to some degree, one may expect HITS to be more informative that other link-based features because it is query-dependent: it tries to measure the interest of pages with respect to a given query.\nHowever, it remains unclear today whether there are practical benefits of HITS over other link graph measures.\nThis is even more true when we consider that modern retrieval algorithms used on the web use a document representation which incorporates the document's anchor text, i.e. 
the text of incoming links.\nThis, at least to some degree, takes the link graph into account, in a query-dependent manner.\nComparing HITS to PageRank or in-degree empirically is no easy task.\nThere are two main difficulties: scale and relevance.\nScale is important because link-based features are known to improve in quality as the document graph grows.\nIf we carry out a small experiment, our conclusions won't carry over to large graphs such as the web.\nHowever, computing HITS efficiently on a graph the size of a realistic web crawl is extraordinarily difficult.\nTo our knowledge, this paper is the first attempt to evaluate HITS at a large scale and compare it to other link-based features with respect to human evaluated judgment.\nOur results confirm many of the intuitions we have about link-based features and their relationship to text retrieval methods exploiting anchor text.\nHowever, we were quite surprised to find that HITS, a query-dependent feature, is about as effective as web page in-degree, the most simpleminded query-independent link-based feature.\nThis continues to be true when the link-based features are combined with a text retrieval algorithm exploiting anchor text.\nThe remainder of this paper is structured as follows: Section 2 surveys related work.\nSection 3 describes the data sets we used in our study.\nSection 4 reviews the performance measures we used.\nSections 5 and 6 describe the PageRank and HITS algorithms in more detail, and sketch the computational infrastructure we employed to carry out large scale experiments.\nSection 7 presents the results of our evaluations, and Section 8 offers concluding remarks.\n2.\nRELATED WORK\nThe idea of using hyperlink analysis for ranking web search results arose around 1997, and manifested itself in the HITS [16, 17] and PageRank [5, 21] algorithms.\nThe popularity of these two algorithms and the phenomenal success of the Google search engine, which uses PageRank, have spawned a large amount of 
subsequent research.\nThere are numerous attempts at improving the effectiveness of HITS and PageRank.\nQuery-dependent link-based ranking algorithms inspired by HITS include SALSA [19], Randomized HITS [20], and PHITS [7], to name a few.\nQuery-independent link-based ranking algorithms inspired by PageRank include TrafficRank [22], BlockRank [14], and TrustRank [11], and many others.\nAnother line of research is concerned with analyzing the mathematical properties of HITS and PageRank.\nGiven the attention that has been paid to improving the effectiveness of PageRank and HITS, and the thorough studies of the mathematical properties of these algorithms, it is somewhat surprising that very few evaluations of their effectiveness have been published.\nWe are aware of two studies that have attempted to formally evaluate the effectiveness of HITS and of PageRank.\nAmento et al. [1] employed quantitative measures, but based their experiments on the result sets of just 5 queries and the web-graph induced by topical crawls around the result set of each query.\nBy contrast, our study is based on over 28,000 queries and a web graph covering 2.9 billion URLs.\n8.\nCONCLUSIONS AND FUTURE WORK\nThis paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, in particular PageRank and in-degree, when applied in isolation or in combination with a text retrieval algorithm exploiting anchor text (BM25F).\nEvaluation is carried out with respect to a large number of human evaluated queries, using three different measures of effectiveness: NDCG, MRR, and MAP.\nEvaluating link-based features in isolation, we found that web page in-degree outperforms PageRank, and is about as effwective as HITS authority scores.\nHITS hub scores and web page out-degree are much less effective ranking features, but still outperform a random ordering.\nWe believe that the measurements presented in this paper provide a solid evaluation of 
the best well-known link-based ranking schemes.\nFuture work includes evaluation of different parameterizations of PageRank and HITS.\nGoing beyond PageRank and HITS, we would like to measure the effectiveness of other link-based ranking algorithms, such as SALSA.\nFinally, we are planning to experiment with more complex feature combinations.","lvl-2":"SIGIR 2007 Proceedings Session 20: Link Analysis HITS on the Web: How does it Compare?\n*\nABSTRACT\nThis paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, when used in combination with a state-ofthe-art text retrieval algorithm exploiting anchor text.\nWe quantified their effectiveness using three common performance measures: the mean reciprocal rank, the mean average precision, and the normalized discounted cumulative gain measurements.\nThe evaluation is based on two large data sets: a breadth-first search crawl of 463 million web pages containing 17.6 billion hyperlinks and referencing 2.9 billion distinct URLs; and a set of 28,043 queries sampled from a query log, each query having on average 2,383 results, about 17 of which were labeled by judges.\nWe found that HITS outperforms PageRank, but is about as effective as web-page in-degree.\nThe same holds true when any of the link-based features are combined with the text retrieval algorithm.\nFinally, we studied the relationship between query specificity and the effectiveness of selected features, and found that link-based features perform better for general queries, whereas BM25F performs better for specific queries.\n1.\nINTRODUCTION\nLink graph features such as in-degree and PageRank have been shown to significantly improve the performance of text retrieval algorithms on the web.\nThe HITS algorithm is also believed to be of interest for web search; to some degree, one may expect HITS to be more informative that other link-based features because it is query-dependent: it tries to 
measure the interest of pages with respect to a given query. However, it remains unclear today whether there are practical benefits of HITS over other link graph measures. This is even more true when we consider that modern retrieval algorithms used on the web use a document representation which incorporates the document's anchor text, i.e. the text of incoming links. This, at least to some degree, takes the link graph into account, in a query-dependent manner.

Comparing HITS to PageRank or in-degree empirically is no easy task. There are two main difficulties: scale and relevance. Scale is important because link-based features are known to improve in quality as the document graph grows. If we carry out a small experiment, our conclusions won't carry over to large graphs such as the web. However, computing HITS efficiently on a graph the size of a realistic web crawl is extraordinarily difficult. Relevance is also crucial because we cannot measure the performance of a feature in the absence of human judgments: what is crucial is ranking at the top of the ten or so documents that a user will peruse.

To our knowledge, this paper is the first attempt to evaluate HITS at a large scale and compare it to other link-based features with respect to human-evaluated judgments. Our results confirm many of the intuitions we have about link-based features and their relationship to text retrieval methods exploiting anchor text. This is reassuring: in the absence of a theoretical model capable of tying these measures with relevance, the only way to validate our intuitions is to carry out realistic experiments. However, we were quite surprised to find that HITS, a query-dependent feature, is about as effective as web page in-degree, the most simpleminded query-independent link-based feature. This continues to be true when the link-based features are combined with a text retrieval algorithm exploiting anchor text.

The remainder of this paper is structured as follows: Section 2 surveys related work. Section 3 describes the data sets we used in our study. Section 4 reviews the performance measures we used. Sections 5 and 6 describe the PageRank and HITS algorithms in more detail, and sketch the computational infrastructure we employed to carry out large-scale experiments. Section 7 presents the results of our evaluations, and Section 8 offers concluding remarks.

2. RELATED WORK
The idea of using hyperlink analysis for ranking web search results arose around 1997, and manifested itself in the HITS [16, 17] and PageRank [5, 21] algorithms. The popularity of these two algorithms and the phenomenal success of the Google search engine, which uses PageRank, have spawned a large amount of subsequent research. There are numerous attempts at improving the effectiveness of HITS and PageRank. Query-dependent link-based ranking algorithms inspired by HITS include SALSA [19], Randomized HITS [20], and PHITS [7], to name a few. Query-independent link-based ranking algorithms inspired by PageRank include TrafficRank [22], BlockRank [14], and TrustRank [11], and many others.

Another line of research is concerned with analyzing the mathematical properties of HITS and PageRank. For example, Borodin et al. [3] investigated various theoretical properties of PageRank, HITS, SALSA, and PHITS, including their similarity and stability, while Bianchini et al.
[2] studied the relationship between the structure of the web graph and the distribution of PageRank scores, and Langville and Meyer examined basic properties of PageRank such as existence and uniqueness of an eigenvector and convergence of power iteration [18].

Given the attention that has been paid to improving the effectiveness of PageRank and HITS, and the thorough studies of the mathematical properties of these algorithms, it is somewhat surprising that very few evaluations of their effectiveness have been published. We are aware of two studies that have attempted to formally evaluate the effectiveness of HITS and of PageRank. Amento et al. [1] employed quantitative measures, but based their experiments on the result sets of just 5 queries and the web graph induced by topical crawls around the result set of each query. A more recent study by Borodin et al. [4] is based on 34 queries, result sets of 200 pages per query obtained from Google, and a neighborhood graph derived by retrieving 50 in-links per result from Google. By contrast, our study is based on over 28,000 queries and a web graph covering 2.9 billion URLs.

3. OUR DATA SETS
Our evaluation is based on two data sets: a large web graph and a substantial set of queries with associated results, some of which were labeled by human judges. Our web graph is based on a web crawl that was conducted in a breadth-first-search fashion, and successfully retrieved 463,685,607 HTML pages. These pages contain 17,672,011,890 hyperlinks (after eliminating duplicate hyperlinks embedded in the same web page), which refer to a total of 2,897,671,002 URLs. Thus, at the end of the crawl there were 2,433,985,395 URLs in the "frontier" set of the crawler that had been discovered, but not yet downloaded. The mean out-degree of crawled web pages is 38.11; the mean in-degree of discovered pages (whether crawled or not) is 6.10. Also, it is worth pointing out that there is a lot more variance in in-degrees than in
out-degrees; some popular pages have millions of incoming links. As we will see, this property affects the computational cost of HITS.

Our query set was produced by sampling 28,043 queries from the MSN Search query log, and retrieving a total of 66,846,214 result URLs for these queries (using commercial search engine technology), or about 2,383 results per query on average. It is important to point out that our 2.9 billion URL web graph does not cover all these result URLs. In fact, only 9,525,566 of the result URLs (about 14.25%) were covered by the graph. 485,656 of the results in the query set (about 0.73% of all results, or about 17.3 results per query) were rated by human judges as to their relevance to the given query, and labeled on a six-point scale (the labels being "definitive", "excellent", "good", "fair", "bad" and "detrimental"). Results were selected for judgment based on their commercial search engine placement; in other words, the subset of labeled results is not random, but biased towards documents considered relevant by pre-existing ranking algorithms.

Involving a human in the evaluation process is extremely cumbersome and expensive; however, human judgments are crucial for the evaluation of search engines. This is so because no document features have been found yet that can effectively estimate the relevance of a document to a user query. Since content-match features are very unreliable (and link features even more so, as we will see), we need to ask a human to evaluate the results in order to compare the quality of features. Evaluating the retrieval results from document scores and human judgments is not trivial and has been the subject of many investigations in the IR community. A good performance measure should correlate with user satisfaction, taking into account that users will dislike having to delve deep into the results to find relevant documents. For this reason, standard correlation measures (such as the correlation coefficient between the score and the judgment of a document), or order correlation measures (such as the Kendall tau between the score- and judgment-induced orders), are not adequate.

4. MEASURING PERFORMANCE
In this study, we quantify the effectiveness of various ranking algorithms using three measures: NDCG, MRR, and MAP. The normalized discounted cumulative gain (NDCG) measure [13] discounts the contribution of a document to the overall score as the document's rank increases (assuming that the best document has the lowest rank). Such a measure is particularly appropriate for search engines, as studies have shown that search engine users rarely consider anything beyond the first few results [12]. NDCG values are normalized to be between 0 and 1, with 1 being the NDCG of a "perfect" ranking scheme that completely agrees with the assessment of the human judges. The discounted cumulative gain at a particular rank-threshold T (DCG@T) is defined to be

DCG@T = \sum_{j=1}^{T} \frac{2^{r(j)} - 1}{\log(1 + j)}

where r(j) is the rating (0 = detrimental, 1 = bad, 2 = fair, 3 = good, 4 = excellent, and 5 = definitive) of the result at rank j. The NDCG is computed by dividing the DCG of a ranking by the highest possible DCG that can be obtained for that query. Finally, the NDCGs of all queries in the query set are averaged to produce a mean NDCG.

The reciprocal rank (RR) of the ranked result set of a query is defined to be the reciprocal value of the rank of the highest-ranking relevant document in the result set. The RR at rank-threshold T is defined to be 0 if none of the highest-ranking T documents is relevant. The mean reciprocal rank (MRR) of a query set is the average reciprocal rank of all queries in the query set.

Given a ranked set of n results, let rel(i) be 1 if the result at rank i is relevant and 0 otherwise. The precision P(j) at rank j is defined to be the fraction

P(j) = \frac{1}{j} \sum_{i=1}^{j} rel(i)

of the relevant results among the j highest-ranking results. The average precision (AP) at rank-threshold k is defined to be

AP@k = \frac{\sum_{j=1}^{k} P(j) \, rel(j)}{\sum_{i=1}^{k} rel(i)}

The mean average precision (MAP) of a query set is the mean of the average
precisions of all queries in the query set.

The above definitions of MRR and MAP rely on the notion of a "relevant" result. We investigated two definitions of relevance: one where all documents rated "fair" or better were deemed relevant, and one where all documents rated "good" or better were deemed relevant. For reasons of space, we only report MAP and MRR values computed using the latter definition; using the former definition does not change the qualitative nature of our findings. Similarly, we computed NDCG, MAP, and MRR values for a wide range of rank-thresholds; we report results here at rank 10; again, changing the rank-threshold never led us to different conclusions.

Recall that over 99% of documents are unlabeled. We chose to treat all these documents as irrelevant to the query. For some queries, however, not all relevant documents have been judged. This introduces a bias into our evaluation: features that bring new documents to the top of the rank may be penalized. This will be more acute for features less correlated to the pre-existing commercial ranking algorithms used to select documents for judgment. On the other hand, most queries have few perfect relevant documents (i.e. home page or item searches), and these will most often be within the judged set.

5. COMPUTING PAGERANK ON A LARGE WEB GRAPH
PageRank is a query-independent measure of the importance of web pages, based on the notion of peer-endorsement: a hyperlink from page A to page B is interpreted as an endorsement of page B's content by page A's author. The following recursive definition captures this notion of endorsement:

R(v) = \sum_{(u,v) \in E} \frac{R(u)}{Out(u)}

where R(v) is the score (importance) of page v, (u, v) is an edge (hyperlink) from page u to page v contained in the edge set E of the web graph, and Out(u) is the out-degree (number of embedded hyperlinks) of page u.
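As a concrete illustration, the recursive definition can be evaluated by repeatedly substituting the current score estimates into the right-hand side, i.e. by power iteration. The following minimal Python sketch is a hypothetical toy example (the graph, function name, and iteration count are invented for illustration), not the authors' distributed implementation:

```python
# Power-iteration sketch of the recursive score definition:
#   R(v) = sum over edges (u, v) of R(u) / Out(u)
# Toy illustration only; graph and names are hypothetical.

def naive_pagerank(edges, iterations=100):
    nodes = {n for edge in edges for n in edge}
    out_deg = {n: 0 for n in nodes}
    for u, _ in edges:
        out_deg[u] += 1                      # Out(u): number of outgoing links
    r = {n: 1.0 / len(nodes) for n in nodes}  # start from uniform scores
    for _ in range(iterations):
        nxt = {n: 0.0 for n in nodes}
        for u, v in edges:
            nxt[v] += r[u] / out_deg[u]      # u endorses v with a share of its score
        r = nxt
    return r

# A strongly-connected toy graph, so the recursion has a non-trivial fixed point.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")]
scores = naive_pagerank(edges)               # converges to a≈0.4, b≈0.2, c≈0.4
```

On this toy graph the iteration converges because every page lies on a cycle; adding a page with no incoming links would drive its score to zero, which is exactly the shortcoming of the undamped definition discussed next.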
However, this definition suffers from a severe shortcoming: in the fixed point of this recursive equation, only pages that are part of a strongly-connected component receive a non-zero score. In order to overcome this deficiency, Page et al. grant each page a guaranteed "minimum score", giving rise to the definition of standard PageRank:

R(v) = \frac{d}{|V|} + (1 - d) \sum_{(u,v) \in E} \frac{R(u)}{Out(u)}

where |V| is the size of the vertex set (the number of known web pages), and d is a "damping factor", typically set to be between 0.1 and 0.2. Assuming that scores are normalized to sum up to 1, PageRank can be viewed as the stationary probability distribution of a random walk on the web graph, where at each step of the walk, the walker with probability 1 - d moves from its current node u to a neighboring node v, and with probability d selects a node uniformly at random from all nodes in the graph and jumps to it. In the limit, the random walker is at node v with probability R(v).

One issue that has to be addressed when implementing PageRank is how to deal with "sink" nodes, nodes that do not have any outgoing links. One possibility would be to select another node uniformly at random and transition to it; this is equivalent to adding edges from each sink node to all other nodes in the graph. We chose the alternative approach of introducing a single "phantom" node. Each sink node has an edge to the phantom node, and the phantom node has an edge to itself.

In practice, PageRank scores can be computed using power iteration. Since PageRank is query-independent, the computation can be performed off-line, ahead of query time. This property has been key to PageRank's success, since it is a challenging engineering problem to build a system that can perform any non-trivial computation on the web graph at query time. In order to compute PageRank scores for all 2.9 billion nodes in our web graph, we implemented a distributed version of PageRank. The computation consists of two distinct phases. In the first phase, the
link files produced by the web crawler, which contain page URLs and their associated link URLs in textual form, are partitioned among the machines in the cluster used to compute PageRank scores, and converted into a more compact format along the way.\nSpecifically, URLs are partitioned across the machines in the cluster based on a hash of the URLs' host component, and each machine in the cluster maintains a table mapping each URL to a 32-bit integer.\nThe integers are drawn from a densely packed space, so as to make suitable indices into the array that will later hold the PageRank scores.\nThe system then translates our log of pages and their associated hyperlinks into a compact representation where both page URLs and link URLs are represented by their associated 32-bit integers.\nHashing the host component of the URLs guarantees that all URLs from the same host are assigned to the same machine in our scoring cluster.\nSince over 80% of all hyperlinks on the web are relative (that is, are between two pages on the same host), this property greatly reduces the amount of network communication required by the second stage of the distributed scoring computation.\nThe second phase performs the actual PageRank power iteration.\nBoth the link data and the current PageRank vector reside on disk and are read in a streaming fashion, while the new PageRank vector is maintained in memory.\nWe represent PageRank scores as 64-bit floating point numbers.\nPageRank contributions to pages assigned to remote machines are streamed to the remote machine via a TCP connection.\nWe used a three-machine cluster, each machine equipped with 16 GB of RAM, to compute standard PageRank scores for all 2.9 billion URLs that were contained in our web graph.\nWe used a damping factor of 0.15, and performed 200 power iterations.\nStarting at iteration 165, the L\u221e norm of the change in the PageRank vector from one iteration to the next had stopped decreasing, indicating that we had reached as
much of a fixed point as the limitations of 64-bit floating point arithmetic would allow.\nFigure 1: Effectiveness of authority scores computed using different parameterizations of HITS.\nA post-processing phase uses the final PageRank vectors (one per machine) and the table mapping URLs to 32-bit integers (representing indices into each PageRank vector) to score the result URLs in our query log.\nAs mentioned above, our web graph covered 9,525,566 of the 66,846,214 result URLs.\nThese URLs were annotated with their computed PageRank score; all other URLs received a score of 0.\n6.\nHITS\nHITS, unlike PageRank, is a query-dependent ranking algorithm.\nHITS (which stands for \"Hypertext Induced Topic Search\") is based on the following two intuitions: First, hyperlinks can be viewed as topical endorsements: A hyperlink from a page u devoted to topic T to another page v is likely to endorse the authority of v with respect to topic T. Second, the result set of a particular query is likely to have a certain amount of topical coherence.\nTherefore, it makes sense to perform link analysis not on the entire web graph, but rather on just the neighborhood of pages contained in the result set, since this neighborhood is more likely to contain topically relevant links.\nBut while the set of nodes immediately reachable from the result set is manageable (given that most pages have only a limited number of hyperlinks embedded into them), the set of pages immediately leading to the result set can be enormous.\nFor this reason, Kleinberg suggests sampling a fixed-size random subset of the pages linking to any high-indegree page in the result set.\nMoreover, Kleinberg suggests considering only links that cross host boundaries, the rationale being that links between pages on the same host (\"intrinsic links\") are likely to be navigational or nepotistic and not topically relevant.\nGiven a web graph (V, E) with vertex set V and edge set E \u2286 V \u00d7 V, and the set of result URLs
to a query (called the root set R \u2286 V) as input, HITS computes a neighborhood graph consisting of a base set B \u2286 V (the root set and some of its neighboring vertices) and some of the edges in E induced by B.\nIn order to formalize the definition of the neighborhood graph, it is helpful to first introduce a sampling operator and the concept of a link-selection predicate.\nGiven a set A, the notation Sn [A] draws n elements uniformly at random from A; Sn [A] = A if | A | \u2264 n.\nA link-selection predicate P takes an edge (u, v) \u2208 E and returns true or false.\nIn this study, we use the following three link-selection predicates:\nall (u, v) = true; ih (u, v) = true iff host (u) \u2260 host (v); id (u, v) = true iff domain (u) \u2260 domain (v)\nwhere host (u) denotes the host of URL u, and domain (u) denotes the domain of URL u. So, all is true for all links, whereas ih is true only for inter-host links, and id is true only for inter-domain links.\nThe outlinked-set OP of the root set R w.r.t. a link-selection predicate P is defined to be:\nOP = \u222au\u2208R {v \u2208 V: (u, v) \u2208 E \u2227 P (u, v)}\nThe inlinking-set IsP of the root set R w.r.t. a link-selection predicate P and a sampling value s is defined to be:\nIsP = \u222av\u2208R Ss [{u \u2208 V: (u, v) \u2208 E \u2227 P (u, v)}]\nThe base set BPs is defined to be BPs = R \u222a OP \u222a IsP.\nThe neighborhood graph (BPs, NsP) has the base set BPs as its vertex set and an edge set NsP containing those edges in E that are covered by BPs and permitted by P: NsP ={(u, v) \u2208 E: u \u2208 BPs \u2227 v \u2208 BPs \u2227 P (u, v)} To simplify notation, we write B to denote BPs, and N to denote NsP.\nFor each node u in the neighborhood graph, HITS computes two scores: an authority score A (u), estimating how authoritative u is on the topic induced by the query, and a hub score H (u), indicating whether u is a good reference to many authoritative pages.\nThis is done using the following algorithm:\n1.\nFor all u \u2208 B: H (u): = 1, A (u): = 1.\n2.\nRepeat until H and A converge: (a) For all v \u2208 B: A~ (v): = \u2211(u,v)\u2208N H (u) (b) For all u \u2208 B: H~ (u): = \u2211(u,v)\u2208N A (v) (c) H: = H~\/\u2016H~\u20162, A: = A~\/\u2016A~\u20162\nwhere X\/\u2016X\u20162 normalizes the vector X to unit length in Euclidean space, i.e.
the squares of its elements sum up to 1.\nIn practice, implementing a system that can compute HITS within the time constraints of a major search engine (where the peak query load is in the thousands of queries per second, and the desired query response time is well below one second) is a major engineering challenge.\nAmong other things, the web graph cannot reasonably be stored on disk, since seek times of modern hard disks are too slow to retrieve the links within the time constraints, and the graph does not fit into the main memory of a single machine, even when using the most aggressive compression techniques.\nFigure 2: Effectiveness of different features.\nIn order to experiment with HITS and other query-dependent link-based ranking algorithms that require non-regular accesses to arbitrary nodes and edges in the web graph, we implemented a system called the Scalable Hyperlink Store, or SHS for short.\nSHS is a special-purpose database, distributed over an arbitrary number of machines, that keeps a highly compressed version of the web graph in memory and allows very fast lookup of nodes and edges.\nOn our hardware, it takes an average of 2 microseconds to map a URL to a 64-bit integer handle called a UID, 15 microseconds to look up all incoming or outgoing link UIDs associated with a page UID, and 5 microseconds to map a UID back to a URL (the last functionality not being required by HITS).\nThe RPC overhead is about 100 microseconds, but the SHS API allows many lookups to be batched into a single RPC request.\nWe implemented the HITS algorithm using the SHS infrastructure.\nWe compiled three SHS databases, one containing all 17.6 billion links in our web graph (all), one containing only links between pages that are on different hosts (ih, for \"inter-host\"), and one containing only links between pages that are on different domains (id).\nWe consider two URLs to belong to different hosts if the host portions of the URLs differ (in other words, we make no
attempt to determine whether two distinct symbolic host names refer to the same computer), and we consider a domain to be the name purchased from a registrar (for example, we consider news.bbc.co.uk and www.bbc.co.uk to be different hosts belonging to the same domain).\nUsing each of these databases, we computed HITS authority and hub scores for various parameterizations of the sampling operator S, sampling between 1 and 100 back-links of each page in the root set.\nResult URLs that were not covered by our web graph automatically received authority and hub scores of 0, since they were not connected to any other nodes in the neighborhood graph and therefore did not receive any endorsements.\nWe performed forty-five different HITS computations, each combining one of the three link-selection predicates (all, ih, and id) with a sampling value.\nFor each combination, we loaded one of the three databases into an SHS system running on six machines (each equipped with 16 GB of RAM), and computed HITS authority and hub scores, one query at a time.\nThe longest-running combination (using the all database and sampling 100 back-links of each root set vertex) required 30,456 seconds to process the entire query set, or about 1.1 seconds per query on average.\n7.\nEXPERIMENTAL RESULTS\nFor a given query Q, we need to rank the set of documents satisfying Q (the \"result set\" of Q).\nOur hypothesis is that good features should be able to rank relevant documents in this set higher than non-relevant ones, and this should result in an increase in each performance measure over the query set.\nWe are specifically interested in evaluating the usefulness of HITS and other link-based features.\nIn principle, we could do this by sorting the documents in each result set by their feature value, and comparing the resulting NDCGs.\nWe call this ranking with isolated features.\nLet us first examine the relative performance of the different parameterizations of the HITS algorithm we
examined.\nRecall that we computed HITS for each combination of three link-selection schemes--all links (all), inter-host links only (ih), and inter-domain links only (id)--with back-link sampling values ranging from 1 to 100.\nFigure 1 shows the impact of the number of sampled back-links on the retrieval performance of HITS authority scores.\nEach graph is associated with one performance measure.\nThe horizontal axis of each graph represents the number of sampled back-links, the vertical axis represents performance under the appropriate measure, and each curve depicts a link-selection scheme.\nThe id scheme slightly outperforms ih, and both vastly outperform the all scheme--eliminating nepotistic links pays off.\nThe performance of the all scheme increases as more back-links of each root set vertex are sampled, while the performance of the id and ih schemes peaks at between 10 and 25 samples and then plateaus or even declines, depending on the performance measure.\nHaving compared different parameterizations of HITS, we will now fix the number of sampled back-links at 100 and compare the three link-selection schemes against other isolated features: PageRank, in-degree and out-degree (counting links of all pages, of different hosts only, and of different domains only--the all, ih and id datasets, respectively), and a text retrieval algorithm exploiting anchor text: BM25F [24].\nBM25F is a state-of-the-art ranking function based solely on the textual content of the documents and their associated anchor texts.\nBM25F is a descendant of BM25 that combines the different textual fields of a document, namely title, body and anchor text.\nThis model has been shown to be one of the best-performing web search scoring functions over the last few years [8, 24].\nBM25F has a number of free parameters (2 per field, 6 in our case); we used the parameter values described in [24].\nFigure 3: Effectiveness measures for linear combinations of link-based features with BM25F.\nFigure 2 shows the
NDCG, MRR, and MAP measures of these features.\nAgain, all performance measures agree (and they do so for all rank-thresholds we explored).\nAs expected, BM25F outperforms all link-based features by a large margin.\nThe link-based features are divided into two groups, with a noticeable performance drop between the groups.\nThe better-performing group consists of the features that are based on the number and\/or quality of incoming links (in-degree, PageRank, and HITS authority scores); and the worse-performing group consists of the features that are based on the number and\/or quality of outgoing links (out-degree and HITS hub scores).\nIn the group of features based on incoming links, features that ignore nepotistic links perform better than their counterparts using all links.\nMoreover, using only inter-domain (id) links seems to be marginally better than using inter-host (ih) links.\nThe fact that features based on outgoing links underperform those based on incoming links matches our expectations; if anything, it is mildly surprising that outgoing links provide a useful signal for ranking at all.\nOn the other hand, the fact that in-degree features outperform PageRank under all measures is quite surprising.\nA possible explanation is that link-spammers have been targeting the published PageRank algorithm for many years, and that this has led to anomalies in the web graph that affect PageRank, but not other link-based features that explore only a distance-1 neighborhood of the result set.\nLikewise, it is surprising that simple query-independent features such as in-degree, which might estimate global quality but cannot capture relevance to a query, would outperform query-dependent features such as HITS authority scores.\nHowever, we cannot investigate the effect of these features in isolation, without regard to the overall ranking function, for several reasons.\nFirst, features based on the textual content of documents (as opposed to link-based features) are the best
predictors of relevance.\nSecond, link-based features can be strongly correlated with textual features for several reasons, mainly the correlation between in-degree and number of textual anchor matches.\nTable 1: Near-optimal feature transform functions.\nTherefore, one must consider the effect of link-based features in combination with textual features.\nOtherwise, we may find a link-based feature that is very good in isolation but is strongly correlated with textual features and results in no overall improvement; and vice versa, we may find a link-based feature that is weak in isolation but significantly improves overall performance.\nFor this reason, we have studied the combination of the link-based features above with BM25F.\nAll feature combinations were done by considering the linear combination of two features as a document score, using the formula:\nscore (d) = \u2211i wi Ti (Fi (d))\nwhere Fi (d) (for 1 \u2264 i \u2264 n) is a feature extracted from d (or from the document-query pair, in the case of BM25F), Ti is a transform, and wi is a free scalar weight that needs to be tuned.\nWe chose transform functions that we empirically determined to be well-suited.\nTable 1 shows the chosen transform functions.\nThis type of linear combination is appropriate if we assume features to be independent with respect to relevance and an exponential model for link features, as discussed in [8].\nWe tuned the weights by selecting a random subset of 5,000 queries from the query set, used an iterative refinement process to find weights that maximized a given performance measure on that training set, and used the remaining 23,043 queries to measure the performance of the thus derived scoring functions.\nWe explored the pairwise combination of BM25F with every link-based scoring function.\nFigure 3 shows the NDCG, MRR, and MAP measures of these feature combinations, together with a baseline BM25F score (the right-most bar in each graph), which was computed using the same subset of 23,043 queries that were used as the test set for the
feature combinations.\nRegardless of the performance measure applied, we can make the following general observations:\n1.\nCombining any of the link-based features with BM25F results in a substantial performance improvement over BM25F in isolation.\n2.\nThe combination of BM25F with features based on incoming links (PageRank, in-degree, and HITS authority scores) performs substantially better than the combination with features based on outgoing links (HITS hub scores and out-degree).\n3.\nThe performance differences between the various combinations of BM25F with features based on incoming links are comparatively small, and the relative ordering of feature combinations is fairly stable across the different performance measures used.\nFigure 4: Effectiveness measures for selected isolated features, broken down by query specificity.\nHowever, the combination of BM25F with any in-degree variant, and in particular with id in-degree, consistently outperforms the combination of BM25F with PageRank or HITS authority scores, and is much easier and faster to compute.\nFinally, we investigated whether certain features are better for some queries than for others.\nParticularly, we are interested in the relationship between the specificity of a query and the performance of different ranking features.\nThe most straightforward measure of the specificity of a query Q would be the number of documents in a search engine's corpus that satisfy Q. Unfortunately, the query set available to us did not contain this information.\nTherefore, we chose to approximate the specificity of Q by summing up the inverse document frequencies of the individual query terms comprising Q.\nThe inverse document frequency (IDF) of a term t with respect to a corpus C is defined to be log (N\/doc (t)), where doc (t) is the number of documents in C containing t and N is the total number of documents in C.
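The IDF-sum specificity approximation can be sketched as follows; the mini-corpus and example queries are hypothetical, chosen only to show that a query of rare terms scores as more specific than a query of common terms:

```python
import math

def idf(term, corpus):
    """IDF of a term w.r.t. a corpus: log(N / doc(t))."""
    n_docs = len(corpus)
    doc_t = sum(1 for doc in corpus if term in doc)
    # Terms absent from the corpus get infinite IDF (maximally specific).
    return math.log(n_docs / doc_t) if doc_t else float("inf")

def specificity(query, corpus):
    """Approximate query specificity as the sum of per-term IDFs."""
    return sum(idf(t, corpus) for t in query.split())

# Hypothetical mini-corpus: each document is modeled as a set of terms.
corpus = [{"web", "search"}, {"web", "graph"}, {"rare", "term"}, {"web"}]
# "web" appears in 3 of 4 docs (low IDF); "rare" in only 1 (high IDF),
# so "rare term" is judged more specific than "web search".
```

Queries would then be assigned to buckets by intervals of this IDF sum, as done in the text.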
By summing up the IDFs of the query terms, we make the (flawed) assumption that the individual query terms are independent of each other.\nHowever, while not perfect, this approximation is at least directionally correct.\nWe broke down our query set into 13 buckets, each bucket associated with an interval of query IDF values, and we computed performance metrics for all ranking functions applied (in isolation) to the queries in each bucket.\nIn order to keep the graphs readable, we will not show the performance of all the features, but rather restrict ourselves to the four most interesting ones: PageRank, id HITS authority scores, id in-degree, and BM25F.\nFigure 4 shows the MAP@10 for all 13 query specificity buckets.\nBuckets on the far left of each graph represent very general queries; buckets on the far right represent very specific queries.\nThe figures on the upper x axis of each graph show the number of queries in each bucket (e.g. the right-most bucket contains 1,629 queries).\nBM25F performs best for medium-specific queries, peaking at the buckets representing the IDF sum interval [12,14).\nBy comparison, HITS peaks at the bucket representing the IDF sum interval [4,6), and PageRank and in-degree peak at the bucket representing the interval [6,8), i.e. 
more general queries.\n8.\nCONCLUSIONS AND FUTURE WORK\nThis paper describes a large-scale evaluation of the effectiveness of HITS in comparison with other link-based ranking algorithms, in particular PageRank and in-degree, when applied in isolation or in combination with a text retrieval algorithm exploiting anchor text (BM25F).\nEvaluation is carried out with respect to a large number of human-evaluated queries, using three different measures of effectiveness: NDCG, MRR, and MAP.\nEvaluating link-based features in isolation, we found that web page in-degree outperforms PageRank, and is about as effective as HITS authority scores.\nHITS hub scores and web page out-degree are much less effective ranking features, but still outperform a random ordering.\nA linear combination of any link-based features with BM25F produces a significant improvement in performance, and there is a clear difference between combining BM25F with a feature based on incoming links (in-degree, PageRank, or HITS authority scores) and a feature based on outgoing links (HITS hub scores and out-degree), but within those two groups the precise choice of link-based feature matters relatively little.\nWe believe that the measurements presented in this paper provide a solid evaluation of the best-known link-based ranking schemes.\nThere are many possible variants of these schemes, and many other link-based ranking algorithms have been proposed in the literature, hence we do not claim this work to be the last word on this subject, but rather the first step on a long road.\nFuture work includes evaluation of different parameterizations of PageRank and HITS.\nIn particular, we would like to study the impact of changes to the PageRank damping factor on effectiveness, the impact of various schemes meant to counteract the effects of link spam, and the effect of weighing hyperlinks differently depending on whether they are nepotistic or not.\nGoing beyond PageRank and HITS, we would like to measure
the effectiveness of other link-based ranking algorithms, such as SALSA.\nFinally, we are planning to experiment with more complex feature combinations."} {"id":"H-82","title":"Downloading Textual Hidden Web Content Through Keyword Queries","abstract":"An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only entry point to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration. We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising.
For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries.","lvl-1":"Downloading Textual Hidden Web Content Through Keyword Queries Alexandros Ntoulas UCLA Computer Science ntoulas@cs.ucla.edu Petros Zerfos UCLA Computer Science pzerfos@cs.ucla.edu Junghoo Cho UCLA Computer Science cho@cs.ucla.edu ABSTRACT An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites.\nThese pages are often referred to as the Hidden Web or the Deep Web.\nSince there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results.\nHowever, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users.\nIn this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web.\nSince the only entry point to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site.\nHere, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically.\nOur policies proceed iteratively, issuing a different query in every iteration.\nWe experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising.\nFor instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries.\nCategories and Subject Descriptors: H.3.7 [Information Systems]: 
Digital Libraries; H.3.1 [Information Systems]: Content Analysis and Indexing; H.3.3 [Information Systems]: Information Search and Retrieval.\nGeneral Terms: Algorithms, Performance, Design.\n1.\nINTRODUCTION Recent studies show that a significant fraction of Web content cannot be reached by following links [7, 12].\nIn particular, a large part of the Web is hidden behind search forms and is reachable only when users type in a set of keywords, or queries, to the forms.\nThese pages are often referred to as the Hidden Web [17] or the Deep Web [7], because search engines typically cannot index the pages and do not return them in their results (thus, the pages are essentially hidden from a typical Web user).\nAccording to many studies, the size of the Hidden Web increases rapidly as more organizations put their valuable content online through an easy-to-use Web interface [7].\nIn [12], Chang et al. estimate that well over 100,000 Hidden-Web sites currently exist on the Web.\nMoreover, the content provided by many Hidden-Web sites is often of very high quality and can be extremely valuable to many users [7].\nFor example, PubMed hosts many high-quality papers on medical research that were selected from careful peer-review processes, while the site of the US Patent and Trademarks Office1 makes existing patent documents available, helping potential inventors examine prior art.\nIn this paper, we study how we can build a Hidden-Web crawler2 that can automatically download pages from the Hidden Web, so that search engines can index them.\nConventional crawlers rely on the hyperlinks on the Web to discover pages, so current search engines cannot index the Hidden-Web pages (due to the lack of links).\nWe believe that an effective Hidden-Web crawler can have a tremendous impact on how users search information on the Web: \u2022 Tapping into unexplored information: The Hidden-Web crawler will allow an average Web user to easily explore the vast amount of information that is
mostly hidden at present.\nSince a majority of Web users rely on search engines to discover pages, when pages are not indexed by search engines, they are unlikely to be viewed by many Web users.\nUnless users go directly to Hidden-Web sites and issue queries there, they cannot access the pages at the sites.\n\u2022 Improving user experience: Even if a user is aware of a number of Hidden-Web sites, the user still has to waste a significant amount of time and effort, visiting all of the potentially relevant sites, querying each of them and exploring the result.\nBy making the Hidden-Web pages searchable at a central location, we can significantly reduce the user's wasted time and effort in searching the Hidden Web.\n\u2022 Reducing potential bias: Due to the heavy reliance of many Web users on search engines for locating information, search engines influence how the users perceive the Web [28].\nUsers do not necessarily perceive what actually exists on the Web, but what is indexed by search engines [28].\nAccording to a recent article [5], several organizations have recognized the importance of bringing information of their Hidden Web sites onto the surface, and committed considerable resources towards this effort.\n1 US Patent Office: http:\/\/www.uspto.gov\n2 Crawlers are the programs that traverse the Web automatically and download pages for search engines.\nFigure 1: A single-attribute search interface\nOur Hidden-Web crawler attempts to automate this process for Hidden Web sites with textual content, thus minimizing the associated costs and effort required.\nGiven that the only entry to Hidden Web pages is through querying a search form, there are two core challenges to implementing an effective Hidden Web crawler: (a) The crawler has to be able to understand and model a query interface, and (b) The crawler has to come up with meaningful queries to issue to the query interface.\nThe first challenge was addressed by Raghavan and Garcia-Molina in [29], where a
method for learning search interfaces was presented.\nHere, we present a solution to the second challenge, i.e. how a crawler can automatically generate queries so that it can discover and download the Hidden Web pages.\nClearly, when the search forms list all possible values for a query (e.g., through a drop-down list), the solution is straightforward.\nWe exhaustively issue all possible queries, one query at a time.\nWhen the query forms have a free text input, however, an infinite number of queries are possible, so we cannot exhaustively issue all possible queries.\nIn this case, what queries should we pick?\nCan the crawler automatically come up with meaningful queries without understanding the semantics of the search form?\nIn this paper, we provide a theoretical framework to investigate the Hidden-Web crawling problem and propose effective ways of generating queries automatically.\nWe also evaluate our proposed solutions through experiments conducted on real Hidden-Web sites.\nIn summary, this paper makes the following contributions: \u2022 We present a formal framework to study the problem of Hidden-Web crawling (Section 2).\n\u2022 We investigate a number of crawling policies for the Hidden Web, including the optimal policy that can potentially download the maximum number of pages through the minimum number of interactions.\nUnfortunately, we show that the optimal policy is NP-hard and cannot be implemented in practice (Section 2.2).\n\u2022 We propose a new adaptive policy that approximates the optimal policy.\nOur adaptive policy examines the pages returned from previous queries and adapts its query-selection policy automatically based on them (Section 3).\n\u2022 We evaluate various crawling policies through experiments on real Web sites.\nOur experiments will show the relative advantages of various crawling policies and demonstrate their potential.\nThe results from our experiments are very promising.\nIn one experiment, for example, our adaptive
policy downloaded more than 90% of the pages within PubMed (that contains 14 million documents) after it issued fewer than 100 queries.\n2.\nFRAMEWORK In this section, we present a formal framework for the study of the Hidden-Web crawling problem.\nIn Section 2.1, we describe our assumptions on Hidden-Web sites and explain how users interact with the sites.\nBased on this interaction model, we present a high-level algorithm for a Hidden-Web crawler in Section 2.2.\nFinally, in Section 2.3, we formalize the Hidden-Web crawling problem.\n2.1 Hidden-Web database model There exists a variety of Hidden Web sources that provide information on a multitude of topics.\nDepending on the type of information, we may categorize a Hidden-Web site either as a textual database or a structured database.\nFigure 2: A multi-attribute search interface\nA textual database is a site that mainly contains plain-text documents, such as PubMed and LexisNexis (an online database of legal documents [1]).\nSince plain-text documents do not usually have well-defined structure, most textual databases provide a simple search interface where users type a list of keywords in a single search box (Figure 1).\nIn contrast, a structured database often contains multi-attribute relational data (e.g., a book on the Amazon Web site may have the fields title=`Harry Potter'', author=`J.K.
Rowling'' and isbn=`0590353403'') and supports multi-attribute search interfaces (Figure 2).\nIn this paper, we will mainly focus on textual databases that support single-attribute keyword queries.\nWe discuss how we can extend our ideas for the textual databases to multi-attribute structured databases in Section 6.1.\nTypically, the users need to take the following steps in order to access pages in a Hidden-Web database: 1.\nStep 1.\nFirst, the user issues a query, say liver, through the search interface provided by the Web site (such as the one shown in Figure 1).\n2.\nStep 2.\nShortly after the user issues the query, she is presented with a result index page.\nThat is, the Web site returns a list of links to potentially relevant Web pages, as shown in Figure 3(a).\n3.\nStep 3.\nFrom the list in the result index page, the user identifies the pages that look interesting and follows the links.\nClicking on a link leads the user to the actual Web page, such as the one shown in Figure 3(b), that the user wants to look at.\n2.2 A generic Hidden Web crawling algorithm Given that the only entry to the pages in a Hidden-Web site is its search from, a Hidden-Web crawler should follow the three steps described in the previous section.\nThat is, the crawler has to generate a query, issue it to the Web site, download the result index page, and follow the links to download the actual pages.\nIn most cases, a crawler has limited time and network resources, so the crawler repeats these steps until it uses up its resources.\nIn Figure 4 we show the generic algorithm for a Hidden-Web crawler.\nFor simplicity, we assume that the Hidden-Web crawler issues single-term queries only.3 The crawler first decides which query term it is going to use (Step (2)), issues the query, and retrieves the result index page (Step (3)).\nFinally, based on the links found on the result index page, it downloads the Hidden Web pages from the site (Step (4)).\nThis same process is repeated until all the 
available resources are used up (Step (1)).

Given this algorithm, we can see that the most critical decision a crawler has to make is what query to issue next. If the crawler can issue successful queries that return many matching pages, it can finish its crawling early on, using minimum resources. In contrast, if the crawler issues completely irrelevant queries that do not return any matching pages, it may waste all of its resources simply issuing queries without ever retrieving actual pages. Therefore, how the crawler selects the next query can greatly affect its effectiveness. In the next section, we formalize this query selection problem.

3 For most Web sites that assume AND for multi-keyword queries, single-term queries return the maximum number of results. Extending our work to multi-keyword queries is straightforward.

Figure 3: Pages from the PubMed Web site. (a) List of matching pages for query liver. (b) The first matching page for liver.

ALGORITHM 2.1. Crawling a Hidden Web site

    Procedure
    (1) while ( there are available resources ) do
            // select a term to send to the site
    (2)     qi = SelectTerm()
            // send query and acquire result index page
    (3)     R(qi) = QueryWebSite( qi )
            // download the pages of interest
    (4)     Download( R(qi) )
    (5) done

Figure 4: Algorithm for crawling a Hidden Web site.

Figure 5: A set-formalization of the optimal query selection problem.

2.3 Problem formalization

Theoretically, the problem of query selection can be formalized as follows. We assume that the crawler downloads pages from a Web site that has a set of pages S (the rectangle in Figure 5). We represent each Web page in S as a point (the dots in Figure 5). Every potential query qi that we may issue can be viewed as a subset of S, containing all the points (pages) that are returned when we issue qi to the site. Each subset is associated with a weight that represents the cost of issuing the query. Under this formalization, our goal
is to find which subsets (queries) cover the maximum number of points (Web pages) with the minimum total weight (cost). This problem is equivalent to the set-covering problem in graph theory [16].

There are two main difficulties that we need to address in this formalization. First, in a practical situation, the crawler does not know which Web pages will be returned by which queries, so the subsets of S are not known in advance. Without knowing these subsets, the crawler cannot decide which queries to pick to maximize the coverage. Second, the set-covering problem is known to be NP-hard [16], so an efficient algorithm to solve this problem optimally in polynomial time has yet to be found.

In this paper, we will present an approximation algorithm that can find a near-optimal solution at a reasonable computational cost. Our algorithm leverages the observation that although we do not know which pages will be returned by each query qi that we issue, we can predict how many pages will be returned. Based on this information, our query selection algorithm can then select the best queries to cover the content of the Web site. We present our prediction method and our query selection algorithm in Section 3.

2.3.1 Performance Metric

Before we present our ideas for the query selection problem, we briefly discuss some of our notation and the cost/performance metrics. Given a query qi, we use P(qi) to denote the fraction of pages that we will get back if we issue query qi to the site. For example, if a Web site has 10,000 pages in total, and 3,000 pages are returned for the query qi = medicine, then P(qi) = 0.3. We use P(q1 ∧ q2) to represent the fraction of pages that are returned from both q1 and q2 (i.e., the intersection of the result sets of q1 and q2). Similarly, we use P(q1 ∨ q2) to represent the fraction of pages that are returned from either q1 or q2 (i.e., the union of the result sets of q1 and q2). We also use Cost(qi) to represent the cost of issuing the query
qi. Depending on the scenario, the cost can be measured in time, network bandwidth, the number of interactions with the site, or a function of all of these. As we will see later, our proposed algorithms are independent of the exact cost function.

In the most common case, the query cost consists of a number of factors, including the cost of submitting the query to the site, retrieving the result index page (Figure 3(a)), and downloading the actual pages (Figure 3(b)). We assume that submitting a query incurs a fixed cost of cq. The cost of downloading the result index page is proportional to the number of documents matching the query, with proportionality constant cr, while downloading each matching document incurs a fixed cost cd. The overall cost of query qi is then

    Cost(qi) = cq + cr P(qi) + cd P(qi).    (1)

In certain cases, some of the documents matched by qi may have already been downloaded from previous queries. In this case, the crawler may skip downloading these documents, and the cost of qi becomes

    Cost(qi) = cq + cr P(qi) + cd Pnew(qi).    (2)

Here, we use Pnew(qi) to represent the fraction of documents returned by qi that have not been retrieved by previous queries. Later, in Section 3.1, we will study how we can estimate P(qi) and Pnew(qi) to estimate the cost of qi. Since our algorithms are independent of the exact cost function, we will assume a generic cost function Cost(qi) in this paper. When we need a concrete cost function, however, we will use Equation 2.

Given this notation, we can formalize the goal of a Hidden-Web crawler as follows:

PROBLEM 1. Find the set of queries q1, ..., qn that maximizes P(q1 ∨ ··· ∨ qn) under the constraint Σ_{i=1}^{n} Cost(qi) ≤ t.
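To make the cost model of Equation 2 and the budget-constrained objective of Problem 1 concrete, the following sketch (not from the paper; the constants cq, cr, cd and the toy query-to-pages corpus are illustrative assumptions) greedily picks the query with the best new-coverage-per-cost ratio until the budget t is exhausted, previewing the approximation developed in Section 3 rather than solving the NP-hard optimum:

```python
def cost(p_q, p_new, cq=1.0, cr=0.1, cd=0.5):
    # Equation (2): fixed per-query cost, plus a result-index cost
    # proportional to the fraction of matching pages, plus a download
    # cost proportional to the fraction of *new* pages.
    return cq + cr * p_q + cd * p_new

def greedy_crawl(queries, n_pages, budget):
    # Problem 1 (maximize coverage subject to total cost <= t) is
    # NP-hard, so at every step pick the query with the highest
    # new-coverage-per-unit-cost ratio.
    downloaded, spent = set(), 0.0
    remaining = dict(queries)
    while remaining:
        best, best_cost, best_eff = None, 0.0, 0.0
        for q, pages in remaining.items():
            new = pages - downloaded
            if not new:
                continue
            c = cost(len(pages) / n_pages, len(new) / n_pages)
            eff = (len(new) / n_pages) / c
            if eff > best_eff:
                best, best_cost, best_eff = q, c, eff
        if best is None or spent + best_cost > budget:
            break
        spent += best_cost
        downloaded |= remaining.pop(best)
    return downloaded, spent

# Toy "site": each query maps to the set of page ids it matches.
corpus = {
    "liver":     {1, 2, 3, 4},
    "disease":   {3, 4, 5, 6, 7},
    "medicine":  {1, 5, 8},
    "xylophone": set(),   # matches nothing; never worth its fixed cost
}
pages, spent = greedy_crawl(corpus, n_pages=8, budget=10.0)
print(sorted(pages), round(spent, 2))   # all 8 pages for ~3.65 units
```

On this toy instance, the greedy rule first picks "disease" (largest coverage per cost), then fills in the remaining pages with "medicine" and "liver", and never issues the useless query.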
Here, t is the maximum download resource that the crawler has.

3. KEYWORD SELECTION

How should a crawler select the queries to issue? Given that the goal is to download the maximum number of unique documents from a textual database, we may consider one of the following options:

• Random: We select random keywords from, say, an English dictionary and issue them to the database. The hope is that a random query will return a reasonable number of matching documents.

• Generic-frequency: We analyze a generic document corpus collected elsewhere (say, from the Web) and obtain the generic frequency distribution of each keyword. Based on this generic distribution, we start with the most frequent keyword, issue it to the Hidden-Web database, and retrieve the result. We then continue to the second most frequent keyword and repeat this process until we exhaust all download resources. The hope is that the frequent keywords in a generic corpus will also be frequent in the Hidden-Web database, returning many matching documents.

• Adaptive: We analyze the documents returned from the previous queries issued to the Hidden-Web database and estimate which keyword is most likely to return the most documents. Based on this analysis, we issue the most promising query, and repeat the process.

Among these three general policies, we may consider the random policy as the baseline, since it is expected to perform the worst. Between the generic-frequency and adaptive policies, both may show similar performance if the crawled database has a generic document collection without a specialized topic. The adaptive policy, however, may perform significantly better than the generic-frequency policy if the database has a very specialized collection that is different from the generic corpus. We will experimentally compare these three policies in Section 4. While the first two policies (random and generic-frequency) are easy to implement, we
need to understand how we can analyze the downloaded pages to identify the most promising query in order to implement the adaptive policy. We address this issue in the rest of this section.

3.1 Estimating the number of matching pages

In order to identify the most promising query, we need to estimate how many new documents we will download if we issue qi as the next query. That is, assuming we have issued queries q1, ..., qi−1, we need to estimate P(q1 ∨ ··· ∨ qi−1 ∨ qi) for every potential next query qi and compare these values. In estimating this number, we note that we can rewrite P(q1 ∨ ··· ∨ qi−1 ∨ qi) as:

    P((q1 ∨ ··· ∨ qi−1) ∨ qi)
      = P(q1 ∨ ··· ∨ qi−1) + P(qi) − P((q1 ∨ ··· ∨ qi−1) ∧ qi)
      = P(q1 ∨ ··· ∨ qi−1) + P(qi) − P(q1 ∨ ··· ∨ qi−1) P(qi | q1 ∨ ··· ∨ qi−1)    (3)

In the above formula, note that we can precisely measure P(q1 ∨ ··· ∨ qi−1) and P(qi | q1 ∨ ··· ∨ qi−1) by analyzing previously downloaded pages: we know P(q1 ∨ ··· ∨ qi−1), the fraction of pages covered by the union of all pages downloaded from q1, ..., qi−1, since we have already issued q1, ..., qi−1 and downloaded the matching pages.4 We can also measure P(qi | q1 ∨ ··· ∨ qi−1), the probability that qi appears in the pages from q1, ..., qi−1, by counting how many times qi appears in those pages. Therefore, we only need to estimate P(qi) to evaluate P(q1 ∨ ··· ∨ qi).

We may consider a number of different ways to estimate P(qi), including the following:

1. Independence estimator: We assume that the appearance of the term qi is independent of the terms q1, ..., qi−1. That is, we assume that P(qi) = P(qi | q1 ∨ ··· ∨ qi−1).

2. Zipf estimator: In [19], Ipeirotis et al. proposed a method to estimate how many times a particular term occurs in the entire corpus based on a subset of documents from the corpus. Their method exploits the fact that the frequency of terms inside text collections follows a power-law distribution [30, 25]. That is, if we rank all terms based on their occurrence frequency (with the most frequent term having rank 1, the second most frequent rank 2, etc.), then the frequency f of a term inside the text collection is given by:

    f = α(r + β)^(−γ)    (4)

where r is the rank of the term and α, β, and γ are constants that depend on the text collection. Their main idea is (1) to estimate the three parameters α, β, and γ based on the subset of documents that we have downloaded from previous queries, and (2) to use the estimated parameters to predict f given the ranking r of a term within the subset. For a more detailed description of how we can use this method to estimate P(qi), we refer the reader to the extended version of this paper [27].

After we estimate the P(qi) and P(qi | q1 ∨ ··· ∨ qi−1) values, we can calculate P(q1 ∨ ··· ∨ qi). In Section 3.3, we explain how we can efficiently compute P(qi | q1 ∨ ··· ∨ qi−1) by maintaining a succinct summary table. In the next section, we first examine how we can use this value to decide which query we should issue next to the Hidden-Web site.

3.2 Query selection algorithm

The goal of the
Hidden-Web crawler is to download the maximum number of unique documents from a database using its limited download resources. Given this goal, the Hidden-Web crawler has to take two factors into account: (1) the number of new documents that can be obtained from the query qi, and (2) the cost of issuing the query qi. For example, if two queries qi and qj incur the same cost, but qi returns more new pages than qj, then qi is more desirable. Similarly, if qi and qj return the same number of new documents, but qi incurs less cost than qj, then qi is more desirable. Based on this observation, the Hidden-Web crawler may use the following efficiency metric to quantify the desirability of the query qi:

    Efficiency(qi) = Pnew(qi) / Cost(qi)

Here, Pnew(qi) represents the fraction of new documents returned by qi (the pages that have not been returned by previous queries), and Cost(qi) represents the cost of issuing the query qi. Intuitively, the efficiency of qi measures how many new documents are retrieved per unit cost, and can be used as an indicator of how well our resources are spent when issuing qi.4

4 For exact estimation, we need to know the total number of pages in the site. However, in order to compare only relative values among queries, this information is not actually needed.

ALGORITHM 3.1. Greedy SelectTerm()

    Parameters:
        T: The list of potential query keywords
    Procedure
    (1) Foreach tk in T do
    (2)     Estimate Efficiency(tk) = Pnew(tk) / Cost(tk)
    (3) done
    (4) return tk with maximum Efficiency(tk)

Figure 6: Algorithm for selecting the next query term.

Thus, the Hidden-Web crawler can estimate the efficiency of every candidate qi and select the one with the highest value. By using its resources more efficiently, the crawler may eventually download the maximum number of unique documents. In Figure 6, we show the query selection function that uses the concept of efficiency. In principle, this algorithm takes a greedy approach and tries to maximize the potential gain in every step.

We can estimate the efficiency of every query using the estimation method described in Section 3.1. That is, the fraction of new documents from the query qi, Pnew(qi), is

    Pnew(qi) = P(q1 ∨ ··· ∨ qi−1 ∨ qi) − P(q1 ∨ ··· ∨ qi−1)
             = P(qi) − P(q1 ∨ ··· ∨ qi−1) P(qi | q1 ∨ ··· ∨ qi−1)

from Equation 3, where P(qi) can be estimated using one of the methods described in Section 3.1. We can also estimate Cost(qi) similarly. For example, if Cost(qi) is Cost(qi) = cq + cr P(qi) + cd Pnew(qi) (Equation 2), we can estimate Cost(qi) by estimating P(qi) and Pnew(qi).

3.3 Efficient calculation of query statistics

In estimating the efficiency of queries, we found that we need to measure P(qi | q1 ∨ ··· ∨ qi−1) for every potential query qi. This calculation can be very time-consuming if we repeat it from scratch for every query qi in every iteration of our algorithm. In this section, we explain how we can compute P(qi | q1 ∨ ··· ∨ qi−1) efficiently by maintaining a small table that we call a query statistics table.

The main idea of the query statistics table is that P(qi | q1 ∨ ··· ∨ qi−1) can be measured by counting how many times the keyword qi appears within the documents downloaded from q1, ...
, qi−1. We record these counts in a table, as shown in Figure 7(a). The left column of the table contains all potential query terms, and the right column contains the number of previously downloaded documents containing the respective term. For example, the table in Figure 7(a) shows that we have downloaded 50 documents so far, and the term model appears in 10 of these documents. Given this number, we can compute that P(model | q1 ∨ ··· ∨ qi−1) = 10/50 = 0.2.

We note that the query statistics table needs to be updated whenever we issue a new query qi and download more documents. This update can be done efficiently, as we illustrate in the following example.

EXAMPLE 1. After examining the query statistics table of Figure 7(a), we have decided to use the term computer as our next query qi. From the new query qi = computer, we downloaded 20 more new pages. Out of these, 12 contain the keyword model and 18 the keyword disk.

Figure 7: Updating the query statistics table.

    (a) After q1, ..., qi−1 (Total pages: 50)
        Term tk    N(tk)
        model      10
        computer   38
        digital    50

    (b) New from qi = computer (New pages: 20)
        Term tk    N(tk)
        model      12
        computer   20
        disk       18

    (c) After q1, ..., qi (Total pages: 50 + 20 = 70)
        Term tk    N(tk)
        model      10 + 12 = 22
        computer   38 + 20 = 58
        disk        0 + 18 = 18
        digital    50 +  0 = 50

Figure 8: A Web site that does not return all the results.

The table in Figure 7(b) shows the frequency of each term in the newly downloaded pages. We can update the old table (Figure 7(a)) to include this new information by simply adding the corresponding entries in Figures 7(a) and (b). The result is shown in Figure 7(c). For example, the keyword model exists in 10 + 12 = 22 pages within the pages retrieved from q1, ..., qi. According to this new table, P(model | q1 ∨ ··· ∨ qi) is now 22/70 ≈ 0.3.

3.4 Crawling sites that limit the number of results

In certain cases, when a query matches a large number of pages, the Hidden-Web site returns only a portion of those pages. For example, the Open Directory Project [2] allows users to see only up to 10,000 results after they issue a query. Obviously, this kind of limitation has an immediate effect on our Hidden-Web crawler. First, since we can only retrieve up to a specific number of pages per query, our crawler will need to issue more queries (and potentially use up more resources) in order to download all the pages. Second, the query selection method that we presented in Section 3.2 assumes that for every potential query qi we can find P(qi | q1 ∨ ··· ∨ qi−1); that is, for every query qi we can find the fraction of documents in the whole text database that contain qi together with at least one of q1, ..., qi−1. However, if the text database returned only a portion of the results for any of q1, ..., qi−1, then the value P(qi | q1 ∨ ··· ∨ qi−1) is not accurate, and may affect our decision for the next query qi and, potentially, the performance of our crawler. Since we cannot retrieve more results per query than the maximum number the Web site allows, our crawler has no choice other than submitting more queries. However, there is a way to estimate the correct value of P(qi | q1 ∨ ··· ∨ qi−1) in the case where the Web site returns only a portion of the results.

Again, assume that the Hidden-Web site we are currently crawling is represented as the rectangle in Figure 8 and its pages as points in the figure. Assume that we have already issued queries q1, ...
, qi−1, each of which returned a number of results less than the maximum number that the site allows, and therefore we have downloaded all the pages for these queries (big circle in Figure 8). At this point, our estimation for P(qi | q1 ∨ ··· ∨ qi−1) is accurate. Now assume that we submit query qi to the Web site, but due to the limitation in the number of results that we get back, we retrieve only the set q′i (small circle in Figure 8) instead of the full set qi (dashed circle in Figure 8). Now we need to update our query statistics table so that it has accurate information for the next step. That is, although we got only the set q′i back, for every potential query qi+1 we need to find P(qi+1 | q1 ∨ ··· ∨ qi):

    P(qi+1 | q1 ∨ ··· ∨ qi) = (1 / P(q1 ∨ ··· ∨ qi)) · [ P(qi+1 ∧ (q1 ∨ ··· ∨ qi−1))
        + P(qi+1 ∧ qi) − P(qi+1 ∧ qi ∧ (q1 ∨ ··· ∨ qi−1)) ]    (5)

In the previous equation, we can find P(q1 ∨ ··· ∨ qi) by estimating P(qi) with the method shown in Section 3.1. Additionally, we can calculate P(qi+1 ∧ (q1 ∨ ··· ∨ qi−1)) and P(qi+1 ∧ qi ∧ (q1 ∨ ··· ∨ qi−1)) by directly examining the documents that we have downloaded from queries q1, ..., qi−1. The term P(qi+1 ∧ qi), however, is unknown, and we need to estimate it. Assuming that q′i is a random sample of qi, we have:

    P(qi+1 ∧ q′i) / P(qi+1 ∧ qi) = P(q′i) / P(qi)    (6)

From Equation 6 we can calculate P(qi+1 ∧ qi), and after we substitute this value into Equation 5 we can find P(qi+1 | q1 ∨ ··· ∨ qi).

4. EXPERIMENTAL EVALUATION

In this section we experimentally evaluate the performance of the various algorithms for Hidden-Web crawling presented in this paper. Our goal is to validate our theoretical analysis through real-world experiments, by crawling popular Hidden-Web sites of textual databases. Since the number of documents that are discovered and downloaded from a textual database depends on the selection of the words that will be issued as queries5 to the search interface of each site, we compare the various selection policies described in Section 3, namely the random, generic-frequency, and adaptive algorithms.

The adaptive algorithm learns new keywords and terms from the documents that it downloads, and its selection process is driven by a cost model, as described in Section 3.2. To keep our experiment and its analysis simple at this point, we will assume that the cost for every query is constant. That is, our goal is to maximize the number of downloaded pages by issuing the fewest queries. Later, in Section 4.4, we will present a comparison of our policies based on a more elaborate cost model. In addition, we use the independence estimator (Section 3.1) to estimate P(qi) from downloaded pages. Although the independence estimator is a simple estimator, our experiments will show that it can work very well in practice.6

For the generic-frequency policy, we compute the frequency distribution of words that appear in a 5.5-million-Web-page corpus

5 Throughout our experiments, once an algorithm has submitted a query to a database, we exclude the query from subsequent submissions to the same
database from the same algorithm.

6 We defer the reporting of results based on the Zipf estimator to future work.

downloaded from 154 Web sites of various topics [26]. Keywords are selected in order of decreasing frequency in this document set, with the most frequent one being selected first, followed by the second most frequent keyword, and so on.7 For the random policy, we use the same set of words collected from the Web corpus, but in this case, instead of selecting keywords based on their relative frequency, we choose them randomly (with uniform distribution). In order to further investigate how the quality of the potential query-term list affects the random-based algorithm, we construct two sets: one with the 16,000 most frequent words of the term collection used in the generic-frequency policy (hereafter, the random policy with the set of 16,000 words will be referred to as random-16K), and another with the 1 million most frequent words of the same collection (hereafter referred to as random-1M). The former set contains frequent words that appear in a large number of documents (at least 10,000 in our collection), and can therefore be considered high-quality terms. The latter set, though, contains a much larger collection of words, among which some might be bogus or meaningless.

The experiments were conducted by employing each of the aforementioned algorithms (adaptive, generic-frequency, random-16K, and random-1M) to crawl and download contents from three Hidden-Web sites: the PubMed Medical Library,8 Amazon,9 and the Open Directory Project [2]. According to the information on PubMed's Web site, its collection contains approximately 14 million abstracts of biomedical articles. We consider these abstracts as the documents in the site, and in each iteration of the adaptive policy we use them as input to the algorithm. Thus, our goal is to discover as many unique abstracts as possible by repeatedly querying the Web query interface provided by PubMed. The Hidden-Web crawling on the PubMed Web site can be considered topic-specific, since all abstracts within PubMed are related to the fields of medicine and biology.

In the case of the Amazon Web site, we are interested in downloading all the hidden pages that contain information on books. The querying of Amazon is performed through the Software Developer's Kit that Amazon provides for interfacing to its Web site, which returns results in XML form. The generic keyword field is used for searching, and as input to the adaptive policy we extract the product description and the text of customer reviews when present in the XML reply. Since Amazon does not provide any information on how many books it has in its catalogue, we use random sampling on the 10-digit ISBN numbers of the books to estimate the size of the collection. Out of 10,000 random ISBN numbers queried, 46 are found in the Amazon catalogue; therefore, the size of its book collection is estimated to be 46/10,000 · 10^10 = 4.6 million books. It is also worth noting that Amazon poses an upper limit on the number of results (books, in our case) returned by each query, which is set to 32,000.

As for the third Hidden-Web site, the Open Directory Project (hereafter also referred to as dmoz), the site maintains links to 3.8 million sites, together with a brief summary of each listed site. The links are searchable through a keyword-search interface. We consider each indexed link together with its brief summary as a document of the dmoz site, and we provide the short summaries to the adaptive algorithm to drive the selection of new keywords for querying. On the dmoz Web site, we perform two Hidden-Web crawls: the first is on its generic collection of 3.8 million indexed

7 We did not manually exclude stop words (e.g., the, is, of, etc.)
from the keyword list. As it turns out, all Web sites except PubMed return matching documents for stop words such as the.

8 PubMed Medical Library: http://www.pubmed.org

9 Amazon Inc.: http://www.amazon.com

sites, regardless of the category that they fall into. The other crawl is performed specifically on the Arts section of dmoz (http://dmoz.org/Arts), which comprises approximately 429,000 indexed sites that are relevant to Arts, making this crawl topic-specific, as in PubMed. Like Amazon, dmoz also enforces an upper limit on the number of returned results, which is 10,000 links with their summaries.

Figure 9: Coverage of policies for PubMed (cumulative fraction of unique documents vs. query number).

Figure 10: Coverage of policies for Amazon (cumulative fraction of unique documents vs. query number).

4.1 Comparison of policies

The first question that we seek to answer concerns the evolution of the coverage metric as we submit queries to the sites. That is, what fraction of the collection of documents stored in a Hidden-Web site can we download as we continuously query for new words selected using the policies described above? More formally, we are interested in the value of P(q1 ∨ ··· ∨ qi−1 ∨ qi) after we submit q1, ...
, qi queries, as i increases. In Figures 9, 10, 11, and 12 we present the coverage metric for each policy, as a function of the query number, for the Web sites of PubMed, Amazon, general dmoz, and the Arts-specific dmoz, respectively. On the y-axis the fraction of the total documents downloaded from the website is plotted, while the x-axis represents the query number.

Figure 11: Coverage of policies for general dmoz (cumulative fraction of unique documents vs. query number).

Figure 12: Coverage of policies for the Arts section of dmoz (cumulative fraction of unique documents vs. query number).

A first observation from these graphs is that, in general, the generic-frequency and adaptive policies perform much better than the random-based algorithms. In all of the figures, the graphs for random-1M and random-16K lie significantly below those of the other policies. Between the generic-frequency and adaptive policies, we can see that the latter outperforms the former when the site is topic-specific. For example, for the PubMed site (Figure 9), the adaptive algorithm issues only 83 queries to download almost 80% of the documents stored in PubMed, while the generic-frequency algorithm requires 106 queries for the same coverage. For the dmoz/Arts crawl (Figure 12), the difference is even more substantial: the adaptive policy is able to download 99.98% of the total sites indexed in the Directory by issuing 471 queries, while the frequency-based algorithm is much less effective with the same number of queries, discovering only 72% of the total number of indexed sites.

The adaptive algorithm, by examining the contents of the pages that it downloads at each iteration, is able to identify the topic of the site as expressed by the words that appear most frequently in the result set. Consequently, it is able to select words for subsequent queries that are more relevant to the site than those preferred by the generic-frequency policy, which are drawn from a large, generic collection. Table 1 shows a sample of 10 keywords out of 211 chosen and submitted to the PubMed Web site by the adaptive algorithm, but not by the other policies. For each keyword, we present the number of the iteration in which it was selected, along with the number of results that it returned. As one can see from the table, these keywords are highly relevant to the topics of medicine and biology of the Public Medical Library, and match numerous articles stored in its Web site.

Table 1: Sample of keywords queried to PubMed exclusively by the adaptive policy.

    Iteration   Keyword      Number of Results
    23          department   2,719,031
    34          patients     1,934,428
    53          clinical     1,198,322
    67          treatment    4,034,565
    69          medical      1,368,200
    70          hospital       503,307
    146         disease      1,520,908
    172         protein      2,620,938

In both cases examined in Figures 9 and 12, the random-based policies perform much worse than the adaptive algorithm and the generic-frequency policy. It is worth noting, however, that the random-based policy with the small, carefully selected set of 16,000 quality words manages to download a considerable fraction, 42.5%, of the PubMed Web site after 200 queries, while its coverage for the Arts section of dmoz reaches 22.7% after 471 queried keywords. On the other hand, the random-based approach that makes use of the vast collection of 1 million words, among which a large number are bogus keywords, fails to download even a mere 1% of the total collection after submitting the same number of query words.

For the generic collections of Amazon and the dmoz sites, shown in Figures 10 and 11 respectively, we get mixed results:
The generic-frequency policy shows slightly better performance than the adaptive policy for the Amazon site (Figure 10), while the adaptive method clearly outperforms the generic-frequency policy for the general dmoz site (Figure 11). A closer look at the log files of the two Hidden Web crawlers reveals the main reason: Amazon was functioning in a very flaky way when the adaptive crawler visited it, resulting in a large number of lost results. Thus, we suspect that the slightly worse performance of the adaptive policy is due to this experimental variance. We are currently running another experiment to verify whether this is indeed the case. Aside from this experimental variance, the Amazon result indicates that if the collection and the words that a Hidden Web site contains are generic enough, then the generic-frequency approach may be a good candidate algorithm for effective crawling. As in the case of topic-specific Hidden Web sites, the random-based policies also exhibit poor performance compared to the other two algorithms when crawling generic sites: for the Amazon Web site, random-16K succeeds in downloading almost 36.7% after issuing 775 queries, while for the generic collection of dmoz, the fraction of the collection of links downloaded is 13.5% after the 770th query. Finally, as expected, random-1M is even worse than random-16K, downloading only 14.5% of Amazon and 0.3% of the generic dmoz. In summary, the adaptive algorithm performs remarkably well in all cases: it is able to discover and download most of the documents stored in Hidden Web sites by issuing the least number of queries. When the collection refers to a specific topic, it is able to identify the keywords most relevant to the topic of the site and consequently ask for terms that are most likely to return a large number of results. On the other hand, the generic-frequency policy proves to be quite effective too, though less so than the adaptive policy: it is able to quickly retrieve a large
portion of the collection, and when the site is not topic-specific, its effectiveness can reach that of the adaptive policy (e.g. Amazon). Finally, the random policy performs poorly in general, and should not be preferred.

4.2 Impact of the initial query
An interesting issue that deserves further examination is whether the initial choice of the keyword used as the first query issued by the adaptive algorithm affects its effectiveness in subsequent iterations. The choice of this keyword cannot be made by the selection process of the adaptive algorithm itself and has to be set manually, since its query statistics tables have not been populated yet. Thus, the selection is generally arbitrary, so for purposes of fully automating the whole process, some additional investigation seems necessary. For this reason, we initiated three adaptive Hidden Web crawlers targeting the PubMed Web site with different seed words: the word data, which returns 1,344,999 results; the word information, which reports 308,474 documents; and the word return, which retrieves 29,707 pages, out of 14 million. These keywords represent varying degrees of term popularity in PubMed, with the first one being of high popularity, the second of medium, and the third of low. We also show results for the keyword pubmed, used in the experiments for coverage of Section 4.1, which returns 695 articles.

Figure 13: Convergence of the adaptive algorithm using different initial queries for crawling the PubMed Web site

As we can see from Figure 13, after a small number of queries, all four crawlers roughly download the same fraction of the collection, regardless of their starting point: their coverages are roughly equivalent from the 25th query onward. Eventually, all four crawlers use the same set of terms for their queries, regardless
of the initial query. In the specific experiment, from the 36th query onward, all four crawlers use the same terms for their queries in each iteration, or the same terms offset by one or two query positions. Our result confirms the observation of [11] that the choice of the initial query has minimal effect on the final performance. We can explain this intuitively as follows: our algorithm approximates the optimal set of queries to use for a particular Web site. Once the algorithm has issued a significant number of queries, it has an accurate estimation of the content of the Web site, regardless of the initial query. Since this estimation is similar for all runs of the algorithm, the crawlers will use roughly the same queries.

4.3 Impact of the limit in the number of results
While the Amazon and dmoz sites have respective limits of 32,000 and 10,000 on their result sizes, these limits may be larger than those imposed by other Hidden Web sites. In order to investigate how a tighter limit on the result size affects the performance of our algorithms, we performed two additional crawls of the generic dmoz site: we ran the generic-frequency and adaptive policies but retrieved only up to the top 1,000 results for every query. In Figure 14 we plot the coverage for the two policies as a function of the number of queries.

Figure 14: Coverage of general dmoz after limiting the number of results to 1,000

As one might expect, by comparing the new result in Figure 14 to that of Figure 11, where the result limit was 10,000, we conclude that the tighter limit requires a higher number of queries to achieve the same coverage. For example, when the result limit was 10,000, the adaptive policy could download 70%
of the site after issuing 630 queries, while it had to issue 2,600 queries to download 70% of the site when the limit was 1,000. On the other hand, our new result shows that even with a tight result limit, it is still possible to download most of a Hidden Web site after issuing a reasonable number of queries. The adaptive policy could download more than 85% of the site after issuing 3,500 queries when the limit was 1,000. Finally, our result shows that our adaptive policy consistently outperforms the generic-frequency policy regardless of the result limit. In both Figure 14 and Figure 11, our adaptive policy shows significantly larger coverage than the generic-frequency policy for the same number of queries.

4.4 Incorporating the document download cost
For brevity of presentation, the performance evaluation results provided so far assumed a simplified cost model in which every query involved a constant cost. In this section we present results on the performance of the adaptive and generic-frequency algorithms when Equation 2 drives our query selection process. As we discussed in Section 2.3.1, this query cost model includes the cost of submitting the query to the site, retrieving the result index page, and downloading the actual pages. To set these costs, we examined the size of every result in the index page and the sizes of the documents, and we chose cq = 100, cr = 100, and cd = 10,000 as the parameter values of Equation 2 for the particular experiment that we ran on the PubMed Web site. The values that we selected imply that the cost of issuing one query and that of retrieving one result from the result index page are roughly the same, while the cost of downloading an actual page is 100 times larger. We believe that these values are reasonable for the PubMed Web site. Figure 15 shows the coverage of the adaptive and generic-frequency algorithms as a function of the resource units used during the download process. The horizontal axis is
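With these parameter values, the cost of a single query under Equation 2 can be computed directly from the fractions P(qi) and Pnew(qi). The helper below is a hypothetical transcription of that formula for illustration, not code from the paper.

```python
def query_cost(p, p_new, cq=100, cr=100, cd=10000):
    """Cost of one query under Equation 2, with the parameter values
    used for the PubMed experiment (cq = cr = 100, cd = 10000).

    p:     fraction of the site's pages matching the query, P(qi)
    p_new: fraction that are new, i.e. not downloaded before, Pnew(qi)
    """
    # fixed submission cost + per-result index-page cost
    # + download cost charged only for new documents
    return cq + cr * p + cd * p_new
```

For instance, a query matching 30% of the site of which only 10% is new costs 100 + 100*0.3 + 10000*0.1 = 1,130 units, dominated by the document-download term, which is why the coverage gap in Figure 15 is smaller than in Figure 9.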
the amount of resources used, and the vertical axis is the coverage. As is evident from the graph, the adaptive policy makes more efficient use of the available resources, as it is able to download more articles than the generic-frequency policy using the same amount of resource units. However, the difference in coverage is less dramatic in this case compared to the graph of Figure 9. The smaller difference is due to the fact that, under the current cost metric, the download cost of documents constitutes a significant portion of the total cost. Therefore, when both policies have downloaded the same number of documents, the savings of the adaptive policy are not as dramatic as before. That is, the savings in the query cost and the result-index download cost are only a relatively small portion of the overall cost.

Figure 15: Coverage of PubMed after incorporating the document download cost

Still, we observe noticeable savings from the adaptive policy. At a total cost of 8,000, for example, the coverage of the adaptive policy is roughly 0.5, while the coverage of the frequency policy is only 0.3.

5. RELATED WORK
In a recent study, Raghavan and Garcia-Molina [29] present an architectural model for a Hidden Web crawler. The main focus of this work is to learn Hidden-Web query interfaces, not to generate queries automatically. The potential queries are either provided manually by users or collected from the query interfaces. In contrast, our main focus is to generate queries automatically without any human intervention. The idea of automatically issuing queries to a database and examining the results has been previously used in different contexts. For example, in [10, 11], Callan and Connell try to acquire an accurate language model by
collecting a uniform random sample from the database. In [22] Lawrence and Giles issue random queries to a number of Web search engines in order to estimate the fraction of the Web that has been indexed by each of them. In a similar fashion, Bharat and Broder [8] issue random queries to a set of search engines in order to estimate the relative size and overlap of their indexes. In [6], Barbosa and Freire experimentally evaluate methods for building multi-keyword queries that can return a large fraction of a document collection. Our work differs from the previous studies in two ways. First, it provides a theoretical framework for analyzing the process of generating queries for a database and examining the results, which can help us better understand the effectiveness of the methods presented in the previous work. Second, we apply our framework to the problem of Hidden Web crawling and demonstrate the efficiency of our algorithms. Cope et al. [15] propose a method to automatically detect whether a particular Web page contains a search form. This work is complementary to ours; once we detect search interfaces on the Web using the method in [15], we may use our proposed algorithms to download pages automatically from those Web sites. Reference [4] reports methods to estimate what fraction of a text database can be eventually acquired by issuing queries to the database. In [3] the authors study query-based techniques that can extract relational data from large text databases. Again, these works study orthogonal issues and are complementary to our work. In order to make documents in multiple textual databases searchable at a central place, a number of harvesting approaches have been proposed (e.g., OAI [21], DP9 [24]). These approaches essentially assume cooperative document databases that willingly share some of their metadata and/or documents to help a third-party search engine to index the documents. Our approach assumes uncooperative databases that
do not share their data publicly and whose documents are accessible only through search interfaces. There exists a large body of work studying how to identify the most relevant database given a user query [20, 19, 14, 23, 18]. This body of work is often referred to as the meta-searching or database selection problem over the Hidden Web. For example, [19] suggests the use of focused probing to classify databases into a topical category, so that given a query, a relevant database can be selected based on its topical category. Our vision is different from this body of work in that we intend to download and index the Hidden-Web pages at a central location in advance, so that users can access all the information at their convenience from one single location.

6. CONCLUSION AND FUTURE WORK
Traditional crawlers normally follow links on the Web to discover and download pages. Therefore they cannot reach the Hidden-Web pages, which are only accessible through query interfaces. In this paper, we studied how we can build a Hidden-Web crawler that can automatically query a Hidden-Web site and download pages from it. We proposed three different query generation policies for the Hidden Web: a policy that picks queries at random from a list of keywords, a policy that picks queries based on their frequency in a generic text collection, and a policy that adaptively picks a good query based on the content of the pages downloaded from the Hidden-Web site. Experimental evaluation on four real Hidden-Web sites shows that our policies have great potential. In particular, in certain cases the adaptive policy can download more than 90% of a Hidden-Web site after issuing approximately 100 queries. Given these results, we believe that our work provides a potential mechanism to improve the search-engine coverage of the Web and the user experience of Web search.

6.1 Future Work
We briefly discuss some future research avenues.

Multi-attribute Databases. We are currently investigating how
to extend our ideas to structured multi-attribute databases. While generating queries for multi-attribute databases is clearly a more difficult problem, we may exploit the following observation to address it: when a site supports multi-attribute queries, the site often returns pages that contain values for each of the query attributes. For example, when an online bookstore supports queries on title, author and isbn, the pages returned from a query typically contain the title, author and ISBN of the corresponding books. Thus, if we can analyze the returned pages and extract the values for each field (e.g., title = 'Harry Potter', author = 'J.K. Rowling', etc.), we can apply the same idea that we used for the textual database: estimate the frequency of each attribute value and pick the most promising one. The main challenge is to automatically segment the returned pages so that we can identify the sections of the pages that present the values corresponding to each attribute. Since many Web sites follow limited formatting styles in presenting multiple attributes - for example, most book titles are preceded by the label "Title:" - we believe we may learn page-segmentation rules automatically from a small set of training examples.

Other Practical Issues. In addition to the automatic query generation problem, there are many practical issues to be addressed to build a fully automatic Hidden-Web crawler. For example, in this paper we assumed that the crawler already knows all query interfaces for Hidden-Web sites. But how can the crawler discover the query interfaces? The method proposed in [15] may be a good starting point. In addition, some Hidden-Web sites return their results in batches of, say, 20 pages, so the user has to click on a "next" button in order to see more results. In this case, a fully automatic Hidden-Web crawler should know that the first result index page contains only a partial result and press the "next" button automatically. Finally, some
Hidden Web sites may contain an infinite number of Hidden Web pages which do not contribute much significant content (e.g. a calendar with links for every day). In this case the Hidden-Web crawler should be able to detect that the site does not have much more new content and stop downloading pages from the site. Page similarity detection algorithms may be useful for this purpose [9, 13].

7. REFERENCES
[1] LexisNexis, http://www.lexisnexis.com.
[2] The Open Directory Project, http://www.dmoz.org.
[3] E. Agichtein and L. Gravano. Querying text databases for efficient information extraction. In ICDE, 2003.
[4] E. Agichtein, P. Ipeirotis, and L. Gravano. Modeling query-based access to text databases. In WebDB, 2003.
[5] Old Search Engine, the Library, Tries to Fit Into a Google World. New York Times, June 2004. Available at: http://www.nytimes.com/2004/06/21/technology/21LIBR.html.
[6] L. Barbosa and J. Freire. Siphoning hidden-web data through keyword-based interfaces. In SBBD, 2004.
[7] M. K. Bergman. The deep web: Surfacing hidden value. http://www.press.umich.edu/jep/07-01/bergman.html.
[8] K. Bharat and A. Broder. A technique for measuring the relative size and overlap of public web search engines. In WWW, 1998.
[9] A. Z. Broder, S. C. Glassman, M. S. Manasse, and G. Zweig. Syntactic clustering of the web. In WWW, 1997.
[10] J. Callan, M. Connell, and A. Du. Automatic discovery of language models for text databases. In SIGMOD, 1999.
[11] J. P. Callan and M. E. Connell. Query-based sampling of text databases. Information Systems, 19(2):97-130, 2001.
[12] K. C.-C. Chang, B. He, C. Li, and Z. Zhang. Structured databases on the web: Observations and implications. Technical report, UIUC.
[13] J. Cho, N. Shivakumar, and H. Garcia-Molina. Finding replicated web collections. In SIGMOD, 2000.
[14] W. Cohen and Y.
Singer. Learning to query the web. In AAAI Workshop on Internet-Based Information Systems, 1996.
[15] J. Cope, N. Craswell, and D. Hawking. Automated discovery of search interfaces on the web. In 14th Australasian Conference on Database Technologies, 2003.
[16] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms, 2nd Edition. MIT Press/McGraw-Hill, 2001.
[17] D. Florescu, A. Y. Levy, and A. O. Mendelzon. Database techniques for the world-wide web: A survey. SIGMOD Record, 27(3):59-74, 1998.
[18] B. He and K. C.-C. Chang. Statistical schema matching across web query interfaces. In SIGMOD Conference, 2003.
[19] P. Ipeirotis and L. Gravano. Distributed search over the hidden web: Hierarchical database sampling and selection. In VLDB, 2002.
[20] P. G. Ipeirotis, L. Gravano, and M. Sahami. Probe, count, and classify: Categorizing hidden web databases. In SIGMOD, 2001.
[21] C. Lagoze and H. V. Sompel. The Open Archives Initiative: Building a low-barrier interoperability framework. In JCDL, 2001.
[22] S. Lawrence and C. L. Giles. Searching the World Wide Web. Science, 280(5360):98-100, 1998.
[23] V. Z. Liu, R. C. Luo, J. Cho, and W. W. Chu. DPro: A probabilistic approach for hidden web database selection using dynamic probing. In ICDE, 2004.
[24] X. Liu, K. Maly, M. Zubair, and M. L. Nelson. DP9 - an OAI gateway service for web crawlers. In JCDL, 2002.
[25] B. B. Mandelbrot. The Fractal Geometry of Nature. W. H. Freeman & Co., 1982.
[26] A. Ntoulas, J. Cho, and C. Olston. What's new on the web? The evolution of the web from a search engine perspective. In WWW, 2004.
[27] A. Ntoulas, P. Zerfos, and J. Cho. Downloading hidden web content. Technical report, UCLA, 2004.
[28] S. Olsen. Does search engine's power threaten web's independence? http://news.com.com/2009-1023-963618.html.
[29] S. Raghavan and H. Garcia-Molina. Crawling the hidden web. In VLDB, 2001.
[30] G. K.
Zipf. Human Behavior and the Principle of Least Effort. Addison-Wesley, Cambridge, MA, 1949.

Downloading Textual Hidden Web Content Through Keyword Queries

ABSTRACT
An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only "entry point" to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration. We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries.

1. INTRODUCTION
Recent studies show that a significant fraction of Web content cannot be reached by following links [7, 12]. In particular, a large part of the Web is "hidden" behind search forms and is reachable only when users type in a set of keywords, or queries, to the forms. These pages
are often referred to as the Hidden Web [17] or the Deep Web [7], because search engines typically cannot index the pages and do not return them in their results (thus, the pages are essentially "hidden" from a typical Web user). According to many studies, the size of the Hidden Web increases rapidly as more organizations put their valuable content online through an easy-to-use Web interface [7]. In [12], Chang et al. estimate that well over 100,000 Hidden-Web sites currently exist on the Web. Moreover, the content provided by many Hidden-Web sites is often of very high quality and can be extremely valuable to many users [7]. For example, PubMed hosts many high-quality papers on medical research that were selected through careful peer-review processes, while the site of the US Patent and Trademarks Office makes existing patent documents available, helping potential inventors examine "prior art." In this paper, we study how we can build a Hidden-Web crawler that can automatically download pages from the Hidden Web, so that search engines can index them. Conventional crawlers rely on the hyperlinks on the Web to discover pages, so current search engines cannot index the Hidden-Web pages (due to the lack of links). We believe that an effective Hidden-Web crawler can have a tremendous impact on how users search for information on the Web:

• Tapping into unexplored information: The Hidden-Web crawler will allow an average Web user to easily explore the vast amount of information that is mostly "hidden" at present. Since a majority of Web users rely on search engines to discover pages, when pages are not indexed by search engines, they are unlikely to be viewed by many Web users. Unless users go directly to Hidden-Web sites and issue queries there, they cannot access the pages at the sites.

• Improving user experience: Even if a user is aware of a number of Hidden-Web sites, the user still has to waste a significant amount of time and effort, visiting all of
the potentially relevant sites, querying each of them and exploring the results. By making the Hidden-Web pages searchable at a central location, we can significantly reduce the user's wasted time and effort in searching the Hidden Web.

• Reducing potential bias: Due to the heavy reliance of many Web users on search engines for locating information, search engines influence how the users perceive the Web [28]. Users do not necessarily perceive what actually exists on the Web, but what is indexed by search engines [28].

According to a recent article [5], several organizations have recognized the importance of bringing information of their Hidden Web sites onto the surface, and committed considerable resources towards this effort. Our Hidden-Web crawler attempts to automate this process for Hidden Web sites with textual content, thus minimizing the associated costs and effort required.

Figure 1: A single-attribute search interface
Figure 2: A multi-attribute search interface

Given that the only "entry" to Hidden Web pages is through querying a search form, there are two core challenges to implementing an effective Hidden Web crawler: (a) the crawler has to be able to understand and model a query interface, and (b) the crawler has to come up with meaningful queries to issue to the query interface. The first challenge was addressed by Raghavan and Garcia-Molina in [29], where a method for learning search interfaces was presented. Here, we present a solution to the second challenge, i.e.
how a crawler can automatically generate queries so that it can discover and download the Hidden Web pages. Clearly, when the search forms list all possible values for a query (e.g., through a drop-down list), the solution is straightforward: we exhaustively issue all possible queries, one query at a time. When the query forms have a "free text" input, however, an infinite number of queries are possible, so we cannot exhaustively issue all possible queries. In this case, what queries should we pick? Can the crawler automatically come up with meaningful queries without understanding the semantics of the search form? In this paper, we provide a theoretical framework to investigate the Hidden-Web crawling problem and propose effective ways of generating queries automatically. We also evaluate our proposed solutions through experiments conducted on real Hidden-Web sites. In summary, this paper makes the following contributions:

• We present a formal framework to study the problem of Hidden-Web crawling (Section 2).

• We investigate a number of crawling policies for the Hidden Web, including the optimal policy that can potentially download the maximum number of pages through the minimum number of interactions. Unfortunately, we show that the optimal policy is NP-hard and cannot be implemented in practice (Section 2.2).

• We propose a new adaptive policy that approximates the optimal policy. Our adaptive policy examines the pages returned from previous queries and adapts its query-selection policy automatically based on them (Section 3).

• We evaluate various crawling policies through experiments on real Web sites. Our experiments will show the relative advantages of various crawling policies and demonstrate their potential. The results from our experiments are very promising. In one experiment, for example, our adaptive policy downloaded more than 90% of the pages within PubMed (which contains 14 million documents) after it issued
fewer than 100 queries.

2. FRAMEWORK
In this section, we present a formal framework for the study of the Hidden-Web crawling problem. In Section 2.1, we describe our assumptions on Hidden-Web sites and explain how users interact with the sites. Based on this interaction model, we present a high-level algorithm for a Hidden-Web crawler in Section 2.2. Finally, in Section 2.3, we formalize the Hidden-Web crawling problem.

2.1 Hidden-Web database model
There exists a variety of Hidden Web sources that provide information on a multitude of topics. Depending on the type of information, we may categorize a Hidden-Web site either as a textual database or a structured database. A textual database is a site that mainly contains plain-text documents, such as PubMed and LexisNexis (an online database of legal documents [1]). Since plain-text documents do not usually have well-defined structure, most textual databases provide a simple search interface where users type a list of keywords in a single search box (Figure 1). In contrast, a structured database often contains multi-attribute relational data (e.g., a book on the Amazon Web site may have the fields title = 'Harry Potter', author = 'J.K.
Rowling', and isbn = '0590353403') and supports multi-attribute search interfaces (Figure 2). In this paper, we will mainly focus on textual databases that support single-attribute keyword queries. We discuss how we can extend our ideas for the textual databases to multi-attribute structured databases in Section 6.1. Typically, the users need to take the following steps in order to access pages in a Hidden-Web database:

Step 1. First, the user issues a query, say "liver," through the search interface provided by the Web site (such as the one shown in Figure 1).

Step 2. Shortly after the user issues the query, she is presented with a result index page. That is, the Web site returns a list of links to potentially relevant Web pages, as shown in Figure 3(a).

Step 3. From the list in the result index page, the user identifies the pages that look "interesting" and follows the links. Clicking on a link leads the user to the actual Web page, such as the one shown in Figure 3(b), that the user wants to look at.

2.2 A generic Hidden Web crawling algorithm
Given that the only "entry" to the pages in a Hidden-Web site is its search form, a Hidden-Web crawler should follow the three steps described in the previous section. That is, the crawler has to generate a query, issue it to the Web site, download the result index page, and follow the links to download the actual pages. In most cases, a crawler has limited time and network resources, so the crawler repeats these steps until it uses up its resources. In Figure 4 we show the generic algorithm for a Hidden-Web crawler. For simplicity, we assume that the Hidden-Web crawler issues single-term queries only.3 The crawler first decides which query term it is going to use (Step (2)), issues the query, and retrieves the result index page (Step (3)). Finally, based on the links found on the result index page, it downloads the Hidden Web pages from the site (Step (4)). This same process is
repeated until all the available resources are used up (Step (1)). Given this algorithm, we can see that the most critical decision that a crawler has to make is what query to issue next. If the crawler can issue successful queries that will return many matching pages, the crawler can finish its crawling early on using minimum resources. In contrast, if the crawler issues completely irrelevant queries that do not return any matching pages, it may waste all of its resources simply issuing queries without ever retrieving actual pages. Therefore, how the crawler selects the next query can greatly affect its effectiveness. In the next section, we formalize this query selection problem.

3 For most Web sites that assume "AND" for multi-keyword queries, single-term queries return the maximum number of results. Extending our work to multi-keyword queries is straightforward.

Figure 3: Pages from the PubMed Web site. (a) List of matching pages for query "liver". (b) The first matching page for "liver".

ALGORITHM 2.1. Crawling a Hidden Web site
Procedure
(1) while (there are available resources) do
(2)   qi = SelectTerm()          // select a term to send to the site
(3)   R(qi) = QueryWebSite(qi)   // send query and acquire result index page
(4)   Download(R(qi))            // download the pages of interest
(5) done

Figure 4: Algorithm for crawling a Hidden Web site.
Figure 5: A set-formalization of the optimal query selection problem.

2.3 Problem formalization
Theoretically, the problem of query selection can be formalized as follows: we assume that the crawler downloads pages from a Web site that has a set of pages S (the rectangle in Figure 5). We represent each Web page in S as a point (dots in Figure 5). Every potential query qi that we may issue can be viewed as a subset of S, containing all the points (pages) that are returned when we issue qi to the site. Each subset is associated with a weight that represents the cost of issuing the query. Under this
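Algorithm 2.1 can be rendered as a short Python loop. The callables here (`select_term`, `query_web_site`, `download`, `has_resources`) are stand-ins for the paper's SelectTerm, QueryWebSite, and Download procedures and the resource check of Step (1); their concrete implementations are site-specific and not specified here.

```python
def crawl_hidden_web(select_term, query_web_site, download, has_resources):
    """Sketch of Algorithm 2.1: repeatedly pick a term, query the site,
    and download the pages linked from the result index page, until
    the resource budget is exhausted."""
    retrieved = []
    while has_resources():                  # Step (1)
        q = select_term()                   # Step (2): next query term
        links = query_web_site(q)           # Step (3): result index page
        retrieved.extend(download(links))   # Step (4): fetch actual pages
    return retrieved
```

The loop makes the key design point of the paper concrete: everything except `select_term` is mechanical, so the crawler's effectiveness hinges entirely on how the next query is chosen.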
formalization, our goal is to find which subsets (queries) cover the maximum number of points (Web pages) with the minimum total weight (cost).\nThis problem is equivalent to the set-covering problem in graph theory [16].\nThere are two main difficulties that we need to address in this formalization.\nFirst, in a practical situation, the crawler does not know which Web pages will be returned by which queries, so the subsets of S are not known in advance.\nWithout knowing these subsets the crawler cannot decide which queries to pick to maximize the coverage.\nSecond, the set-covering problem is known to be NP-Hard [16], so an efficient algorithm to solve this problem optimally in polynomial time has yet to be found.\nIn this paper, we will present an approximation algorithm that can find a near-optimal solution at a reasonable computational cost.\nOur algorithm leverages the observation that although we do not know which pages will be returned by each query qi that we issue, we can predict how many pages will be returned.\nBased on this information our query selection algorithm can then select the \"best\" queries that cover the content of the Web site.\nWe present our prediction method and our query selection algorithm in Section 3.\n2.3.1 Performance Metric\nBefore we present our ideas for the query selection problem, we briefly discuss some of our notation and the cost\/performance metrics.\nGiven a query qi, we use P (qi) to denote the fraction of pages that we will get back if we issue query qi to the site.\nFor example, if a Web site has 10,000 pages in total, and if 3,000 pages are returned for the query qi = \"medicine\", then P (qi) = 0.3.\nWe use P (q1 \u2227 q2) to represent the fraction of pages that are returned from both q1 and q2 (i.e., the intersection of P (q1) and P (q2)).\nSimilarly, we use P (q1 \u2228 q2) to represent the fraction of pages that are returned from either q1 or q2 (i.e., the union of P (q1) and P (q2)).\nWe also use Cost (qi) to 
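The approximation idea described above can be illustrated with the standard greedy heuristic for weighted set cover: repeatedly pick the query that yields the most not-yet-covered pages per unit cost. This is only a sketch under simplifying assumptions, not the paper's algorithm: here the result sets are known up front, whereas the crawler must estimate them (Section 3), and the candidate queries, page sets, and costs below are invented for illustration:

```python
def greedy_query_selection(candidates, cost, budget):
    """Greedy weighted set cover: at each step, issue the query that
    maximizes (number of new pages) / (cost), until the budget runs out
    or no query adds new pages.

    candidates: dict query term -> set of page ids the query returns
    cost:       dict query term -> positive cost of issuing it
    budget:     total cost the crawler may spend
    """
    covered, plan, spent = set(), [], 0.0
    remaining = dict(candidates)
    while remaining:
        # pick the most "efficient" remaining query
        q = max(remaining, key=lambda t: len(remaining[t] - covered) / cost[t])
        if spent + cost[q] > budget or not (remaining[q] - covered):
            break  # over budget, or nothing new to gain
        covered |= remaining.pop(q)
        plan.append(q)
        spent += cost[q]
    return plan, covered


# Hypothetical example: three candidate queries over pages {1..6}.
candidates = {"liver": {1, 2, 3, 4}, "cancer": {3, 4, 5}, "rare": {6}}
cost = {"liver": 2.0, "cancer": 2.0, "rare": 1.0}
plan, covered = greedy_query_selection(candidates, cost, budget=5.0)
```

On this toy input the greedy order is "liver" (4 new pages for cost 2), then "rare" (1 for 1), then "cancer" (1 for 2), covering all six pages at total cost 5; the classical set-cover analysis guarantees such a greedy plan is within a logarithmic factor of optimal.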
represent the cost of issuing the query qi. Depending on the scenario, the cost can be measured in time, network bandwidth, the number of interactions with the site, or a function of all of these. As we will see later, our proposed algorithms are independent of the exact cost function.

In the most common case, the query cost consists of a number of factors, including the cost of submitting the query to the site, retrieving the result index page (Figure 3(a)), and downloading the actual pages (Figure 3(b)). We assume that submitting a query incurs a fixed cost of cq. The cost of downloading the result index page is proportional to the number of documents matching the query, with a fixed per-document cost cr, while the cost cd of downloading a matching document is also fixed. Then the overall cost of query qi is

    Cost(qi) = cq + cr P(qi) + cd P(qi).    (1)

In certain cases, some of the documents matching qi may have already been downloaded by previous queries. The crawler may then skip downloading these documents, and the cost of qi becomes

    Cost(qi) = cq + cr P(qi) + cd Pnew(qi).    (2)

Here, Pnew(qi) denotes the fraction of documents matching qi that have not been retrieved by previous queries. Later, in Section 3.1, we will study how we can estimate P(qi) and Pnew(qi) in order to estimate the cost of qi. Since our algorithms are independent of the exact cost function, we will assume a generic cost function Cost(qi) in this paper; when we need a concrete cost function, however, we will use Equation 2.

Given this notation, we can formalize the goal of a Hidden-Web crawler as follows: find the set of queries q1, ..., qn that maximizes the fraction of pages downloaded, P(q1 ∨ ... ∨ qn), while the total cost Σi Cost(qi) stays within the crawler's resource budget.

3. KEYWORD SELECTION
3.1 Estimating the number of matching pages
3.2 Query selection algorithm
3.3 Efficient calculation of query statistics
3.4 Crawling sites that limit the number of results

4. EXPERIMENTAL EVALUATION
4.1 Comparison of policies
4.2 Impact of the initial query
4.3 Impact of the limit in the number of results
4.4 Incorporating the document download cost

5. RELATED WORK

In a recent study, Raghavan and Garcia-Molina [29] present an
architectural model for a Hidden Web crawler. The main focus of that work is to learn Hidden-Web query interfaces, not to generate queries automatically. The potential queries are either provided manually by users or collected from the query interfaces. In contrast, our main focus is to generate queries automatically without any human intervention.

The idea of automatically issuing queries to a database and examining the results has been used before in different contexts. For example, in [10, 11], Callan and Connell try to acquire an accurate language model by collecting a uniform random sample from the database. In [22], Lawrence and Giles issue random queries to a number of Web search engines in order to estimate the fraction of the Web that each of them has indexed. In a similar fashion, Bharat and Broder [8] issue random queries to a set of search engines in order to estimate the relative size and overlap of their indexes. In [6], Barbosa and Freire experimentally evaluate methods for building multi-keyword queries that can return a large fraction of a document collection. Our work differs from these previous studies in two ways. First, it provides a theoretical framework for analyzing the process of generating queries for a database and examining the results, which can help us better understand the effectiveness of the methods presented in the previous work. Second, we apply our framework to the problem of Hidden-Web crawling and demonstrate the efficiency of our algorithms.

Cope et al.
[15] propose a method to automatically detect whether a particular Web page contains a search form. This work is complementary to ours; once we detect search interfaces on the Web using the method in [15], we may use our proposed algorithms to download pages automatically from those Web sites. Reference [4] reports methods to estimate what fraction of a text database can eventually be acquired by issuing queries to the database. In [3], the authors study query-based techniques that can extract relational data from large text databases. Again, these works study orthogonal issues and are complementary to our work.

In order to make documents in multiple textual databases searchable at a central place, a number of "harvesting" approaches have been proposed (e.g., OAI [21], DP9 [24]). These approaches essentially assume cooperative document databases that willingly share some of their metadata and/or documents to help a third-party search engine index the documents. Our approach assumes uncooperative databases that do not share their data publicly and whose documents are accessible only through search interfaces.

There exists a large body of work studying how to identify the most relevant database given a user query [20, 19, 14, 23, 18]. This body of work is often referred to as meta-searching or the database selection problem over the Hidden Web. For example, [19] suggests the use of focused probing to classify databases into topical categories, so that given a query, a relevant database can be selected based on its topical category. Our vision is different from this body of work in that we intend to download and index the Hidden-Web pages at a central location in advance, so that users can access all the information at their convenience from a single location.

6. CONCLUSION AND FUTURE WORK

Traditional crawlers normally follow links on the Web to discover and download pages. Therefore, they cannot reach Hidden-Web pages, which are only accessible
through query interfaces. In this paper, we studied how to build a Hidden-Web crawler that can automatically query a Hidden-Web site and download pages from it. We proposed three different query generation policies for the Hidden Web: a policy that picks queries at random from a list of keywords, a policy that picks queries based on their frequency in a generic text collection, and a policy that adaptively picks a good query based on the content of the pages downloaded from the Hidden-Web site so far. Experimental evaluation on 4 real Hidden-Web sites shows that our policies have great potential. In particular, in certain cases the adaptive policy can download more than 90% of a Hidden-Web site after issuing approximately 100 queries. Given these results, we believe that our work provides a potential mechanism to improve the search-engine coverage of the Web and the user experience of Web search.

6.1 Future Work

We briefly discuss some future-research avenues.

Multi-attribute Databases. We are currently investigating how to extend our ideas to structured multi-attribute databases. While generating queries for multi-attribute databases is clearly a more difficult problem, we may exploit the following observation to address it: when a site supports multi-attribute queries, it often returns pages that contain values for each of the query attributes. For example, when an online bookstore supports queries on title, author and ISBN, the pages returned from a query typically contain the title, author and ISBN of the corresponding books. Thus, if we can analyze the returned pages and extract the values for each field (e.g., title = 'Harry Potter', author = 'J.K. Rowling', etc.), we can apply the same idea that we used for the textual database: estimate the frequency of each attribute value and pick the most promising one. The main challenge is to automatically segment the returned pages so that we can identify the sections of the pages that present the values corresponding to each attribute. Since many Web sites follow limited formatting styles in presenting multiple attributes -- for example, most book titles are preceded by the label "Title:" -- we believe we may learn page-segmentation rules automatically from a small set of training examples.

Other Practical Issues. In addition to the automatic query generation problem, there are many practical issues to be addressed to build a fully automatic Hidden-Web crawler. For example, in this paper we assumed that the crawler already knows all query interfaces for Hidden-Web sites. But how can the crawler discover the query interfaces? The method proposed in [15] may be a good starting point. In addition, some Hidden-Web sites return their results in batches of, say, 20 pages, so the user has to click on a "next" button in order to see more results. In this case, a fully automatic Hidden-Web crawler should recognize that the first result index page contains only a partial result and "press" the next button automatically. Finally, some Hidden-Web sites may contain an infinite number of Hidden-Web pages which do not contribute much significant content (e.g.
a calendar with links for every day).\nIn this case the Hidden-Web crawler should be able to detect that the site does not have much more new content and stop downloading pages from the site.\nPage similarity detection algorithms may be useful for this purpose [9, 13].","lvl-4":"Downloading Textual Hidden Web Content Through Keyword Queries\nABSTRACT\nAn ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites.\nThese pages are often referred to as the Hidden Web or the Deep Web.\nSince there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results.\nHowever, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users.\nIn this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web.\nSince the only \"entry point\" to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site.\nHere, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically.\nOur policies proceed iteratively, issuing a different query in every iteration.\nWe experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising.\nFor instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries.\n1.\nINTRODUCTION\nRecent studies show that a significant fraction of Web content cannot be reached by following links [7, 12].\nIn 
particular, a large part of the Web is "hidden" behind search forms and is reachable only when users type in a set of keywords, or queries, into the forms. These pages are often referred to as the Hidden Web [17] or the Deep Web [7], because search engines typically cannot index the pages and do not return them in their results (thus, the pages are essentially "hidden" from a typical Web user). In [12], Chang et al. estimate that well over 100,000 Hidden-Web sites currently exist on the Web. Moreover, the content provided by many Hidden-Web sites is often of very high quality and can be extremely valuable to many users [7]. In this paper, we study how we can build a Hidden-Web crawler2 that can automatically download pages from the Hidden Web, so that search engines can index them. Conventional crawlers rely on the hyperlinks on the Web to discover pages, so current search engines cannot index the Hidden-Web pages (due to the lack of links). Since a majority of Web users rely on search engines to discover pages, when pages are not indexed by search engines, they are unlikely to be viewed by many Web users. Unless users go directly to Hidden-Web sites and issue queries there, they cannot access the pages at the sites. By making the Hidden-Web pages searchable at a central location, we can significantly reduce the user's wasted time and effort in searching the Hidden Web. Users do not necessarily perceive what actually exists on the Web, but what is indexed by search engines [28].

Figure 1: A single-attribute search interface
Figure 2: A multi-attribute search interface

Our Hidden-Web crawler attempts to automate this process for Hidden-Web sites with textual content, thus minimizing the associated costs and effort required. The first challenge was addressed by Raghavan and Garcia-Molina in [29], where a method for learning search interfaces was presented. Here, we present a solution to the second challenge, i.e.
how a crawler can automatically generate queries so that it can discover and download the Hidden-Web pages. Clearly, when a search form lists all possible values for a query (e.g., through a drop-down list), the solution is straightforward: we exhaustively issue all possible queries, one query at a time. When the query form has a "free text" input, however, an infinite number of queries are possible, so we cannot exhaustively issue them all. In this case, which queries should we pick? Can the crawler automatically come up with meaningful queries without understanding the semantics of the search form? In this paper, we provide a theoretical framework to investigate the Hidden-Web crawling problem and propose effective ways of generating queries automatically. We also evaluate our proposed solutions through experiments conducted on real Hidden-Web sites. In summary, this paper makes the following contributions:

• We present a formal framework to study the problem of Hidden-Web crawling (Section 2).

• We investigate a number of crawling policies for the Hidden Web, including the optimal policy that can potentially download the maximum number of pages through the minimum number of interactions.

• We propose a new adaptive policy that approximates the optimal policy. Our adaptive policy examines the pages returned from previous queries and adapts its query-selection policy automatically based on them (Section 3).

• We evaluate various crawling policies through experiments on real Web sites. Our experiments show the relative advantages of the various crawling policies and demonstrate their potential. The results from our experiments are very promising. In one experiment, for example, our adaptive policy downloaded more than 90% of the pages within PubMed (which contains 14 million documents) after it issued fewer than 100 queries.

2. FRAMEWORK

In this section, we present a formal framework for the study of the
Hidden-Web crawling problem.\nIn Section 2.1, we describe our assumptions on Hidden-Web sites and explain how users interact with the sites.\nBased on this interaction model, we present a highlevel algorithm for a Hidden-Web crawler in Section 2.2.\nFinally in Section 2.3, we formalize the Hidden-Web crawling problem.\n2.1 Hidden-Web database model There exists a variety of Hidden Web sources that provide information on a multitude of topics.\nDepending on the type of information, we may categorize a Hidden-Web site either as a textual database or a structured database.\nA textual database is a site that mainly contains plain-text documents, such as PubMed and LexisNexis (an online database of legal documents [1]).\nSince plaintext documents do not usually have well-defined structure, most textual databases provide a simple search interface where users type a list of keywords in a single search box (Figure 1).\nIn this paper, we will mainly focus on textual databases that support single-attribute keyword queries.\nWe discuss how we can extend our ideas for the textual databases to multi-attribute structured databases in Section 6.1.\nTypically, the users need to take the following steps in order to access pages in a Hidden-Web database: 1.\nStep 1.\nFirst, the user issues a query, say \"liver,\" through the search interface provided by the Web site (such as the one shown in Figure 1).\n2.\nStep 2.\nShortly after the user issues the query, she is presented with a result index page.\nThat is, the Web site returns a list of links to potentially relevant Web pages, as shown in Figure 3 (a).\n3.\nStep 3.\nFrom the list in the result index page, the user identifies the pages that look \"interesting\" and follows the links.\nClicking on a link leads the user to the actual Web page, such as the one shown in Figure 3 (b), that the user wants to look at.\n2.2 A generic Hidden Web crawling algorithm Given that the only \"entry\" to the pages in a Hidden-Web site is its search 
form, a Hidden-Web crawler should follow the three steps described in the previous section. That is, the crawler has to generate a query, issue it to the Web site, download the result index page, and follow the links to download the actual pages. In most cases, a crawler has limited time and network resources, so the crawler repeats these steps until it uses up its resources. In Figure 4 we show the generic algorithm for a Hidden-Web crawler. For simplicity, we assume that the Hidden-Web crawler issues single-term queries only.3 The crawler first decides which query term it is going to use (Step (2)), issues the query, and retrieves the result index page (Step (3)). Finally, based on the links found on the result index page, it downloads the Hidden-Web pages from the site (Step (4)). Given this algorithm, we can see that the most critical decision a crawler has to make is what query to issue next. If the crawler issues successful queries that return many matching pages, it can finish its crawl early while consuming minimal resources. In contrast, if it issues completely irrelevant queries that return no matching pages, it may waste all of its resources simply issuing queries without ever retrieving actual pages. Therefore, how the crawler selects the next query can greatly affect its effectiveness. In the next section, we formalize this query selection problem.

3 For most Web sites that assume "AND" semantics for multi-keyword queries, single-term queries return the maximum number of results. Extending our work to multi-keyword queries is straightforward.

(a) List of matching pages for query "liver". (b) The first matching page for "liver".
Figure 3: Pages from the PubMed Web site.

ALGORITHM 2.1. Crawling a Hidden Web site Procedure
Figure 4: Algorithm for crawling a Hidden Web site.
Figure 5: A set-formalization of the optimal query selection problem.

2.3 Problem formalization

Theoretically, the problem of
query selection can be formalized as follows: We assume that the crawler downloads pages from a Web site that has a set of pages S (the rectangle in Figure 5).\nWe represent each Web page in S as a point (dots in Figure 5).\nEvery potential query qi that we may issue can be viewed as a subset of S, containing all the points (pages) that are returned when we issue qi to the site.\nEach subset is associated with a weight that represents the cost of issuing the query.\nUnder this formalization, our goal is to find which subsets (queries) cover the maximum number of points (Web pages) with the minimum total weight (cost).\nFirst, in a practical situation, the crawler does not know which Web pages will be returned by which queries, so the subsets of S are not known in advance.\nWithout knowing these subsets the crawler cannot decide which queries to pick to maximize the coverage.\nIn this paper, we will present an approximation algorithm that can find a near-optimal solution at a reasonable computational cost.\nOur algorithm leverages the observation that although we do not know which pages will be returned by each query qi that we issue, we can predict how many pages will be returned.\nBased on this information our query selection algorithm can then select the \"best\" queries that cover the content of the Web site.\nWe present our prediction method and our query selection algorithm in Section 3.\n2.3.1 Performance Metric\nBefore we present our ideas for the query selection problem, we briefly discuss some of our notation and the cost\/performance metrics.\nGiven a query qi, we use P (qi) to denote the fraction of pages that we will get back if we issue query qi to the site.\nFor example, if a Web site has 10,000 pages in total, and if 3,000 pages are returned for the query qi = \"medicine\", then P (qi) = 0.3.\nWe also use Cost (qi) to represent the cost of issuing the query qi.\nAs we will see later, our proposed algorithms are independent of the exact cost 
function.

In the most common case, the query cost consists of a number of factors, including the cost of submitting the query to the site, retrieving the result index page (Figure 3(a)), and downloading the actual pages (Figure 3(b)). We assume that submitting a query incurs a fixed cost of cq. The cost of downloading the result index page is proportional to the number of documents matching the query, with a fixed per-document cost cr, while the cost cd of downloading a matching document is also fixed. Then the overall cost of query qi is

    Cost(qi) = cq + cr P(qi) + cd P(qi).    (1)

In certain cases, some of the documents matching qi may have already been downloaded by previous queries. The crawler may then skip downloading these documents, and the cost of qi becomes

    Cost(qi) = cq + cr P(qi) + cd Pnew(qi).    (2)

Here, Pnew(qi) denotes the fraction of documents matching qi that have not been retrieved by previous queries. Later, in Section 3.1, we will study how we can estimate P(qi) and Pnew(qi) to estimate the cost of qi. Since our algorithms are independent of the exact cost function, we will assume a generic cost function Cost(qi) in this paper. Given this notation, we can formalize the goal of a Hidden-Web crawler as follows: find the set of queries that maximizes the fraction of pages downloaded within the crawler's resource budget.

5. RELATED WORK

In a recent study, Raghavan and Garcia-Molina [29] present an architectural model for a Hidden Web crawler. The main focus of that work is to learn Hidden-Web query interfaces, not to generate queries automatically. The potential queries are either provided manually by users or collected from the query interfaces. In contrast, our main focus is to generate queries automatically without any human intervention. The idea of automatically issuing queries to a database and examining the results has been used before in different contexts. In [22], Lawrence and Giles issue random queries to a number of Web search engines in order to estimate the fraction of the Web that each of them has indexed. In a similar fashion, Bharat and Broder [8] issue random queries to a set of search engines in order
to estimate the relative size and overlap of their indexes. In [6], Barbosa and Freire experimentally evaluate methods for building multi-keyword queries that can return a large fraction of a document collection. Our work differs from these previous studies in two ways. First, it provides a theoretical framework for analyzing the process of generating queries for a database and examining the results, which can help us better understand the effectiveness of the methods presented in the previous work. Second, we apply our framework to the problem of Hidden-Web crawling and demonstrate the efficiency of our algorithms. Cope et al. [15] propose a method to automatically detect whether a particular Web page contains a search form. This work is complementary to ours; once we detect search interfaces on the Web using the method in [15], we may use our proposed algorithms to download pages automatically from those Web sites. Reference [4] reports methods to estimate what fraction of a text database can eventually be acquired by issuing queries to the database. In [3], the authors study query-based techniques that can extract relational data from large text databases. Again, these works study orthogonal issues and are complementary to our work. In order to make documents in multiple textual databases searchable at a central place, a number of "harvesting" approaches have been proposed (e.g., OAI [21], DP9 [24]). These approaches essentially assume cooperative document databases that willingly share some of their metadata and/or documents to help a third-party search engine index the documents. Our approach assumes uncooperative databases that do not share their data publicly and whose documents are accessible only through search interfaces. There exists a large body of work studying how to identify the most relevant database given a user query [20, 19, 14, 23, 18]. This body of work is often referred to as meta-searching or the database selection problem over the Hidden Web. Our vision is different
from this body of work in that we intend to download and index the Hidden pages at a central location in advance, so that users can access all the information at their convenience from one single location.\n6.\nCONCLUSION AND FUTURE WORK\nTraditional crawlers normally follow links on the Web to discover and download pages.\nTherefore they cannot get to the Hidden Web pages which are only accessible through query interfaces.\nIn this paper, we studied how we can build a Hidden Web crawler that can automatically query a Hidden Web site and download pages from it.\nExperimental evaluation on 4 real Hidden Web sites shows that our policies have a great potential.\nIn particular, in certain cases the adaptive policy can download more than 90% of a Hidden Web site after issuing approximately 100 queries.\nGiven these results, we believe that our work provides a potential mechanism to improve the search-engine coverage of the Web and the user experience of Web search.\n6.1 Future Work\nWe briefly discuss some future-research avenues.\nMulti-attribute Databases We are currently investigating how to extend our ideas to structured multi-attribute databases.\nWhile generating queries for multi-attribute databases is clearly a more difficult problem, we may exploit the following observation to address this problem: When a site supports multi-attribute queries, the site often returns pages that contain values for each of the query attributes.\nFor example, when an online bookstore supports queries on title, author and isbn, the pages returned from a query typically contain the title, author and ISBN of corresponding books.\nThe main challenge is to automatically segment the returned pages so that we can identify the sections of the pages that present the values corresponding to each attribute.\nOther Practical Issues In addition to the automatic query generation problem, there are many practical issues to be addressed to build a fully automatic Hidden-Web crawler.\nFor example, 
in this paper we assumed that the crawler already knows all query interfaces for Hidden-Web sites.\nBut how can the crawler discover the query interfaces?\nThe method proposed in [15] may be a good starting point.\nIn addition, some Hidden-Web sites return their results in batches of, say, 20 pages, so the user has to click on a \"next\" button in order to see more results.\nIn this case, a fully automatic Hidden-Web crawler should know that the first result index page contains only a partial result and \"press\" the next button automatically.\nFinally, some Hidden Web sites may contain an infinite number of Hidden Web pages which do not contribute much significant content (e.g. a calendar with links for every day).\nIn this case the Hidden-Web crawler should be able to detect that the site does not have much more new content and stop downloading pages from the site.\nPage similarity detection algorithms may be useful for this purpose [9, 13].","lvl-2":"Downloading Textual Hidden Web Content Through Keyword Queries\nABSTRACT\nAn ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites.\nThese pages are often referred to as the Hidden Web or the Deep Web.\nSince there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results.\nHowever, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users.\nIn this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web.\nSince the only \"entry point\" to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the 
site.\nHere, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically.\nOur policies proceed iteratively, issuing a different query in every iteration.\nWe experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising.\nFor instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries.\n1.\nINTRODUCTION\nRecent studies show that a significant fraction of Web content cannot be reached by following links [7, 12].\nIn particular, a large part of the Web is \"hidden\" behind search forms and is reachable only when users type in a set of keywords, or queries, to the forms.\nThese pages are often referred to as the Hidden Web [17] or the\nDeep Web [7], because search engines typically cannot index the pages and do not return them in their results (thus, the pages are essentially \"hidden\" from a typical Web user).\nAccording to many studies, the size of the Hidden Web increases rapidly as more organizations put their valuable content online through an easy-to-use Web interface [7].\nIn [12], Chang et al. 
estimate that well over 100,000 Hidden-Web sites currently exist on the Web. Moreover, the content provided by many Hidden-Web sites is often of very high quality and can be extremely valuable to many users [7]. For example, PubMed hosts many high-quality papers on medical research that were selected through careful peer-review processes, while the site of the US Patent and Trademark Office1 makes existing patent documents available, helping potential inventors examine "prior art."

In this paper, we study how we can build a Hidden-Web crawler2 that can automatically download pages from the Hidden Web, so that search engines can index them. Conventional crawlers rely on the hyperlinks on the Web to discover pages, so current search engines cannot index the Hidden-Web pages (due to the lack of links). We believe that an effective Hidden-Web crawler can have a tremendous impact on how users search for information on the Web:

• Tapping into unexplored information: The Hidden-Web crawler will allow an average Web user to easily explore the vast amount of information that is mostly "hidden" at present. Since a majority of Web users rely on search engines to discover pages, when pages are not indexed by search engines, they are unlikely to be viewed by many Web users. Unless users go directly to Hidden-Web sites and issue queries there, they cannot access the pages at the sites.

• Improving user experience: Even if a user is aware of a number of Hidden-Web sites, the user still has to waste a significant amount of time and effort visiting all of the potentially relevant sites, querying each of them and exploring the results. By making the Hidden-Web pages searchable at a central location, we can significantly reduce the user's wasted time and effort in searching the Hidden Web.

• Reducing potential bias: Due to the heavy reliance of many Web users on search engines for locating information, search engines influence how the users perceive the Web [28]. Users do not
necessarily perceive what actually exists on the Web, but what is indexed by search engines [28]. According to a recent article [5], several organizations have recognized the importance of bringing the information on their Hidden-Web sites to the surface, and have committed considerable resources towards this effort. Our Hidden-Web crawler attempts to automate this process for Hidden-Web sites with textual content, thus minimizing the associated costs and effort required.
Figure 1: A single-attribute search interface
Figure 2: A multi-attribute search interface
Given that the only "entry" to Hidden-Web pages is through querying a search form, there are two core challenges to implementing an effective Hidden-Web crawler: (a) the crawler has to be able to understand and model a query interface, and (b) the crawler has to come up with meaningful queries to issue to the query interface. The first challenge was addressed by Raghavan and Garcia-Molina in [29], where a method for learning search interfaces was presented. Here, we present a solution to the second challenge, i.e.
how a crawler can automatically generate queries so that it can discover and download the Hidden-Web pages. Clearly, when the search forms list all possible values for a query (e.g., through a drop-down list), the solution is straightforward: we exhaustively issue all possible queries, one query at a time. When the query forms have a "free text" input, however, an infinite number of queries are possible, so we cannot exhaustively issue all possible queries. In this case, what queries should we pick? Can the crawler automatically come up with meaningful queries without understanding the semantics of the search form? In this paper, we provide a theoretical framework to investigate the Hidden-Web crawling problem and propose effective ways of generating queries automatically. We also evaluate our proposed solutions through experiments conducted on real Hidden-Web sites. In summary, this paper makes the following contributions:
• We present a formal framework to study the problem of Hidden-Web crawling (Section 2).
• We investigate a number of crawling policies for the Hidden Web, including the optimal policy that can potentially download the maximum number of pages through the minimum number of interactions. Unfortunately, we show that the optimal policy is NP-hard and cannot be implemented in practice (Section 2.2).
• We propose a new adaptive policy that approximates the optimal policy. Our adaptive policy examines the pages returned from previous queries and adapts its query-selection policy automatically based on them (Section 3).
• We evaluate various crawling policies through experiments on real Web sites. Our experiments show the relative advantages of the various crawling policies and demonstrate their potential. The results from our experiments are very promising. In one experiment, for example, our adaptive policy downloaded more than 90% of the pages within PubMed (which contains 14 million documents) after issuing
fewer than 100 queries.
2. FRAMEWORK
In this section, we present a formal framework for the study of the Hidden-Web crawling problem. In Section 2.1, we describe our assumptions on Hidden-Web sites and explain how users interact with the sites. Based on this interaction model, we present a high-level algorithm for a Hidden-Web crawler in Section 2.2. Finally, in Section 2.3, we formalize the Hidden-Web crawling problem.
2.1 Hidden-Web database model
There exists a variety of Hidden-Web sources that provide information on a multitude of topics. Depending on the type of information, we may categorize a Hidden-Web site either as a textual database or a structured database. A textual database is a site that mainly contains plain-text documents, such as PubMed and LexisNexis (an online database of legal documents [1]). Since plain-text documents do not usually have a well-defined structure, most textual databases provide a simple search interface where users type a list of keywords in a single search box (Figure 1). In contrast, a structured database often contains multi-attribute relational data (e.g., a book on the Amazon Web site may have the fields title = `Harry Potter', author = `J.K.
Rowling' and isbn = `0590353403') and supports multi-attribute search interfaces (Figure 2). In this paper, we will mainly focus on textual databases that support single-attribute keyword queries. We discuss how we can extend our ideas for textual databases to multi-attribute structured databases in Section 6.1. Typically, a user needs to take the following steps in order to access pages in a Hidden-Web database:
Step 1. First, the user issues a query, say "liver," through the search interface provided by the Web site (such as the one shown in Figure 1).
Step 2. Shortly after the user issues the query, she is presented with a result index page. That is, the Web site returns a list of links to potentially relevant Web pages, as shown in Figure 3(a).
Step 3. From the list in the result index page, the user identifies the pages that look "interesting" and follows the links. Clicking on a link leads the user to the actual Web page, such as the one shown in Figure 3(b), that the user wants to look at.
2.2 A generic Hidden-Web crawling algorithm
Given that the only "entry" to the pages in a Hidden-Web site is its search form, a Hidden-Web crawler should follow the three steps described in the previous section. That is, the crawler has to generate a query, issue it to the Web site, download the result index page, and follow the links to download the actual pages. In most cases, a crawler has limited time and network resources, so the crawler repeats these steps until it uses up its resources. In Figure 4 we show the generic algorithm for a Hidden-Web crawler. For simplicity, we assume that the Hidden-Web crawler issues single-term queries only.3 The crawler first decides which query term it is going to use (Step (2)), issues the query, and retrieves the result index page (Step (3)). Finally, based on the links found on the result index page, it downloads the Hidden-Web pages from the site (Step (4)). This same process is repeated until all the available resources are used up (Step (1)). Given this algorithm, we can see that the most critical decision that a crawler has to make is what query to issue next. If the crawler can issue successful queries that return many matching pages, the crawler can finish its crawling early on using minimal resources. In contrast, if the crawler issues completely irrelevant queries that do not return any matching pages, it may waste all of its resources simply issuing queries without ever retrieving actual pages. Therefore, how the crawler selects the next query can greatly affect its effectiveness. In the next section, we formalize this query selection problem.
3For most Web sites that assume "AND" semantics for multi-keyword queries, single-term queries return the maximum number of results. Extending our work to multi-keyword queries is straightforward.
(a) List of matching pages for query "liver". (b) The first matching page for "liver".
Figure 3: Pages from the PubMed Web site.
ALGORITHM 2.1. Crawling a Hidden-Web site
Procedure
(1) while (there are available resources) do
      // select a term to send to the site
(2)   qi = SelectTerm()
      // send query and acquire result index page
(3)   R(qi) = QueryWebSite(qi)
      // download the pages of interest
(4)   Download(R(qi))
(5) done
Figure 4: Algorithm for crawling a Hidden-Web site.
Figure 5: A set-formalization of the optimal query selection problem.
2.3 Problem formalization
Theoretically, the problem of query selection can be formalized as follows. We assume that the crawler downloads pages from a Web site that has a set of pages S (the rectangle in Figure 5). We represent each Web page in S as a point (the dots in Figure 5). Every potential query qi that we may issue can be viewed as a subset of S, containing all the points (pages) that are returned when we issue qi to the site. Each subset is associated with a weight that represents the cost of issuing the query. Under this
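The query/download loop of Algorithm 2.1 can be sketched in a few lines of Python. This is a minimal sketch, not the paper's implementation: `select_term`, `query_web_site`, and `download` are hypothetical stand-ins for SelectTerm(), QueryWebSite(), and Download(), and the resource budget is simplified to a fixed number of queries.

```python
def crawl_hidden_web(select_term, query_web_site, download, budget):
    """Repeat query/download cycles until the resource budget runs out."""
    downloaded = set()
    while budget > 0:                     # Step (1): resources still available?
        q = select_term()                 # Step (2): pick the next query term
        result_links = query_web_site(q)  # Step (3): fetch the result index page
        for link in result_links:         # Step (4): download the actual pages
            if link not in downloaded:    # skip pages seen in earlier queries
                download(link)
                downloaded.add(link)
        budget -= 1
    return downloaded
```

Tracking the `downloaded` set is what later lets the crawler charge only new pages to a query's cost (Pnew in Section 2.3.1).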
formalization, our goal is to find which subsets (queries) cover the maximum number of points (Web pages) with the minimum total weight (cost). This problem is equivalent to the set-covering problem in graph theory [16]. There are two main difficulties that we need to address in this formalization. First, in a practical situation, the crawler does not know which Web pages will be returned by which queries, so the subsets of S are not known in advance. Without knowing these subsets, the crawler cannot decide which queries to pick to maximize the coverage. Second, the set-covering problem is known to be NP-Hard [16], so an efficient algorithm to solve this problem optimally in polynomial time has yet to be found. In this paper, we will present an approximation algorithm that can find a near-optimal solution at a reasonable computational cost. Our algorithm leverages the observation that although we do not know which pages will be returned by each query qi that we issue, we can predict how many pages will be returned. Based on this information, our query selection algorithm can then select the "best" queries that cover the content of the Web site. We present our prediction method and our query selection algorithm in Section 3.
2.3.1 Performance Metric
Before we present our ideas for the query selection problem, we briefly discuss some of our notation and the cost/performance metrics. Given a query qi, we use P(qi) to denote the fraction of pages that we will get back if we issue query qi to the site. For example, if a Web site has 10,000 pages in total, and 3,000 pages are returned for the query qi = "medicine", then P(qi) = 0.3. We use P(q1 ∧ q2) to represent the fraction of pages that are returned by both q1 and q2 (i.e., the intersection of the result sets of q1 and q2). Similarly, we use P(q1 ∨ q2) to represent the fraction of pages that are returned by either q1 or q2 (i.e., the union of the result sets of q1 and q2). We also use Cost(qi) to
represent the cost of issuing the query qi. Depending on the scenario, the cost can be measured in time, network bandwidth, the number of interactions with the site, or a function of all of these. As we will see later, our proposed algorithms are independent of the exact cost function. In the most common case, the query cost consists of a number of factors, including the cost of submitting the query to the site, retrieving the result index page (Figure 3(a)), and downloading the actual pages (Figure 3(b)). We assume that submitting a query incurs a fixed cost of cq. The cost for downloading the result index page is proportional to the number of matching documents to the query (with proportionality constant cr), while the cost cd for downloading a matching document is also fixed. Then the overall cost of query qi is

    Cost(qi) = cq + cr P(qi) + cd P(qi).    (1)

In certain cases, some of the documents from qi may have already been downloaded from previous queries. In this case, the crawler may skip downloading these documents, and the cost of qi can be

    Cost(qi) = cq + cr P(qi) + cd Pnew(qi).    (2)

Here, we use Pnew(qi) to represent the fraction of documents returned by qi that have not been retrieved by previous queries. Later, in Section 3.1, we will study how we can estimate P(qi) and Pnew(qi) to estimate the cost of qi. Since our algorithms are independent of the exact cost function, we will assume a generic cost function Cost(qi) in this paper. When we need a concrete cost function, however, we will use Equation 2. Given this notation, we can formalize the goal of a Hidden-Web crawler as follows: find the queries q1, ..., qn that maximize the coverage P(q1 ∨ · · · ∨ qn) while keeping the total cost Σi Cost(qi) within the available resource budget.
3. KEYWORD SELECTION
How should a crawler select the queries to issue? Given that the goal is to download the maximum number of unique documents from a textual database, we may consider one of the following options:
• Random: We select random keywords from, say, an English dictionary and issue them to the database. The hope is that a random query will return a reasonable number of matching documents.
• Generic-frequency: We analyze a generic document
corpus collected elsewhere (say, from the Web) and obtain the generic frequency distribution of each keyword. Based on this generic distribution, we start with the most frequent keyword, issue it to the Hidden-Web database, and retrieve the result. We then continue to the second-most frequent keyword and repeat this process until we exhaust all download resources. The hope is that the frequent keywords in a generic corpus will also be frequent in the Hidden-Web database, returning many matching documents.
• Adaptive: We analyze the documents returned by the previous queries issued to the Hidden-Web database and estimate which keyword is most likely to return the most documents. Based on this analysis, we issue the most "promising" query, and repeat the process.
Among these three general policies, we may consider the random policy as the base comparison point, since it is expected to perform the worst. Between the generic-frequency and the adaptive policies, both may show similar performance if the crawled database has a generic document collection without a specialized topic. The adaptive policy, however, may perform significantly better than the generic-frequency policy if the database has a very specialized collection that is different from the generic corpus. We will experimentally compare these three policies in Section 4. While the first two policies (random and generic-frequency) are easy to implement, to implement the adaptive policy we need to understand how we can analyze the downloaded pages to identify the most "promising" query. We address this issue in the rest of this section.
3.1 Estimating the number of matching pages
In order to identify the most promising query, we need to estimate how many new documents we will download if we issue the query qi as the next query. That is, assuming that we have issued queries q1, ..., qi−1, we need to estimate P(q1 ∨ · · · ∨ qi−1 ∨ qi) for every potential
next query qi and compare these values. In estimating this number, we note that we can rewrite P(q1 ∨ · · · ∨ qi−1 ∨ qi) as:

    P(q1 ∨ · · · ∨ qi−1 ∨ qi)
      = P(q1 ∨ · · · ∨ qi−1) + P(qi) − P((q1 ∨ · · · ∨ qi−1) ∧ qi)
      = P(q1 ∨ · · · ∨ qi−1) + P(qi) − P(q1 ∨ · · · ∨ qi−1) P(qi | q1 ∨ · · · ∨ qi−1)    (3)

In the above formula, note that we can precisely measure P(q1 ∨ · · · ∨ qi−1) and P(qi | q1 ∨ · · · ∨ qi−1) by analyzing previously-downloaded pages: we know P(q1 ∨ · · · ∨ qi−1), the fraction covered by the union of all pages downloaded from q1, ..., qi−1, since we have already issued q1, ..., qi−1 and downloaded the matching pages.4 We can also measure P(qi | q1 ∨ · · · ∨ qi−1), the probability that qi appears in the pages from q1, ..., qi−1, by counting how many of the pages from q1, ..., qi−1 contain qi. Therefore, we only need to estimate P(qi) to evaluate P(q1 ∨ · · · ∨ qi). We may consider a number of different ways to estimate P(qi), including the following:
1. Independence estimator: We assume that the appearance of the term qi is independent of the terms q1, ..., qi−1. That is, we assume that P(qi) = P(qi | q1 ∨ · · · ∨ qi−1).
2. Zipf estimator: In [19], Ipeirotis et al.
proposed a method to estimate how many times a particular term occurs in the entire corpus based on a subset of documents from the corpus. Their method exploits the fact that the frequency of terms inside text collections follows a power-law distribution [30, 25]. That is, if we rank all terms based on their occurrence frequency (with the most frequent term having a rank of 1, the second most frequent a rank of 2, etc.), then the frequency f of a term inside the text collection is given by:

    f = α (r + β)^−γ

where r is the rank of the term and α, β, and γ are constants that depend on the text collection. Their main idea is (1) to estimate the three parameters, α, β, and γ, based on the subset of documents that we have downloaded from previous queries, and (2) to use the estimated parameters to predict f given the ranking r of a term within the subset. For a more detailed description of how we can use this method to estimate P(qi), we refer the reader to the extended version of this paper [27].
After we estimate the P(qi) and P(qi | q1 ∨ · · · ∨ qi−1) values, we can calculate P(q1 ∨ · · · ∨ qi). In Section 3.3, we explain how we can efficiently compute P(qi | q1 ∨ · · · ∨ qi−1) by maintaining a succinct summary table. In the next section, we first examine how we can use this value to decide which query we should issue next to the Hidden-Web site.
3.2 Query selection algorithm
The goal of the Hidden-Web crawler is to download the maximum number of unique documents from a database using its limited download resources. Given this goal, the Hidden-Web crawler has to take two factors into account: (1) the number of new documents that can be obtained from the query qi, and (2) the cost of issuing the query qi. For example, if two queries, qi and qj, incur the same cost, but qi returns more new pages than qj, then qi is more desirable than qj. Similarly, if qi and qj return the same number of new documents, but qi incurs less cost than
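The coverage update of Section 3.1 can be sketched in Python: given the measured coverage so far and the measured conditional P(qi | q1 ∨ · · · ∨ qi−1), the new coverage follows from the inclusion-exclusion rewrite, with the independence estimator substituting the measured conditional for the unknown P(qi). This is a minimal sketch; the function and parameter names are our own.

```python
def new_coverage(prev_union, p_cond, p_qi=None):
    """P(q1 ∨ ... ∨ qi) from P(q1 ∨ ... ∨ qi-1) and P(qi | q1 ∨ ... ∨ qi-1).

    prev_union: measured fraction of the site covered by q1, ..., qi-1.
    p_cond:     measured P(qi | q1 ∨ ... ∨ qi-1).
    p_qi:       estimate of the full P(qi); under the independence
                estimator this defaults to p_cond itself.
    """
    if p_qi is None:
        p_qi = p_cond  # independence estimator: P(qi) = P(qi | q1 ∨ ... ∨ qi-1)
    # inclusion-exclusion: add P(qi), subtract the overlap with earlier queries
    return prev_union + p_qi - prev_union * p_cond
```

For example, with 50% of the site covered and a candidate term appearing in 40% of the downloaded pages, `new_coverage(0.5, 0.4)` yields 0.5 + 0.4 − 0.5·0.4 = 0.7 under the independence assumption.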
qj, qi is more desirable. Based on this observation, the Hidden-Web crawler may use the following efficiency metric to quantify the desirability of a query qi:

    Efficiency(qi) = Pnew(qi) / Cost(qi)    (4)

Here, Pnew(qi) represents the fraction of new documents returned by qi (the pages that have not been returned by previous queries), and Cost(qi) represents the cost of issuing the query qi. Intuitively, the efficiency of qi measures how many new documents are retrieved per unit cost, and can be used as an indicator of how well our resources are spent when issuing qi. Thus, the Hidden-Web crawler can estimate the efficiency of every candidate qi and select the one with the highest value. By using its resources more efficiently, the crawler may eventually download the maximum number of unique documents. In Figure 6, we show the query selection function that uses this concept of efficiency. In principle, this algorithm takes a greedy approach and tries to maximize the "potential gain" in every step. We can estimate the efficiency of every query using the estimation method described in Section 3.1. That is, the size of the set of new documents from the query qi, Pnew(qi), is

    Pnew(qi) = P(q1 ∨ · · · ∨ qi) − P(q1 ∨ · · · ∨ qi−1)
             = P(qi) − P(q1 ∨ · · · ∨ qi−1) P(qi | q1 ∨ · · · ∨ qi−1)

from Equation 3, where P(qi) can be estimated using one of the methods described in Section 3.1. We can also estimate Cost(qi) similarly. For example, if Cost(qi) is given by Equation 2, we can estimate Cost(qi) by estimating P(qi) and Pnew(qi).
4For exact estimation, we need to know the total number of pages in the site. However, in order to compare only relative values among queries, this information is not actually needed.
ALGORITHM 3.1. Greedy SelectTerm()
Parameters: T: The list of potential query keywords
Procedure
(1) Foreach tk in T do
(2)   Estimate Efficiency(tk) = Pnew(tk) / Cost(tk)
(3) done
(4) return tk with maximum Efficiency(tk)
Figure 6: Algorithm for selecting the next query term.
3.3 Efficient calculation of query statistics
In estimating the efficiency of queries, we found that we need to measure P(qi | q1
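The greedy rule of Algorithm 3.1 amounts to one line per candidate. In this sketch, `p_new` and `cost` are assumed to be callables returning the estimates Pnew(tk) and Cost(tk) (e.g., via Equation 2); the names are our own.

```python
def greedy_select_term(candidates, p_new, cost):
    """Return the candidate term with the highest estimated efficiency
    Pnew(t) / Cost(t), mirroring Algorithm 3.1."""
    best_term, best_eff = None, -1.0
    for t in candidates:              # Step (1): scan every potential keyword
        eff = p_new(t) / cost(t)      # Step (2): estimate its efficiency
        if eff > best_eff:
            best_term, best_eff = t, eff
    return best_term                  # Step (4): the most efficient term
```

With a constant cost function this reduces to picking the term expected to return the most new documents, which is the setting used in the experiments of Section 4.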
∨ · · · ∨ qi−1) for every potential query qi. This calculation can be very time-consuming if we repeat it from scratch for every query qi in every iteration of our algorithm. In this section, we explain how we can compute P(qi | q1 ∨ · · · ∨ qi−1) efficiently by maintaining a small table that we call a query statistics table. The main idea behind the query statistics table is that P(qi | q1 ∨ · · · ∨ qi−1) can be measured by counting how many times the keyword qi appears within the documents downloaded from q1, ..., qi−1. We record these counts in a table, as shown in Figure 7(a). The left column of the table contains all potential query terms and the right column contains the number of previously-downloaded documents containing the respective term. For example, the table in Figure 7(a) shows that we have downloaded 50 documents so far, and the term model appears in 10 of these documents. Given these numbers, we can compute that P(model | q1 ∨ · · · ∨ qi−1) = 10/50 = 0.2. We note that the query statistics table needs to be updated whenever we issue a new query qi and download more documents. This update can be done efficiently, as we illustrate in the following example.
EXAMPLE 1. After examining the query statistics table of Figure 7(a), we have decided to use the term "computer" as our next query qi. From the new query qi = "computer," we downloaded 20 more new pages. Out of these, 12 contain the keyword "model" and 18 the keyword "disk."
Figure 7: Updating the query statistics table.
Figure 8: A Web site that does not return all the results.
The table in Figure 7(b) shows the frequency of each term in the newly-downloaded pages. We can update the old table (Figure 7(a)) to include this new information by simply adding the corresponding entries in Figures 7(a) and (b). The result is shown in Figure 7(c). For example, the keyword "model" exists in 10 + 12 = 22 pages within the pages retrieved from q1, ..., qi. According to this new table, P(model | q1 ∨ · · · ∨ qi) = 22/70.
3.4 Crawling sites that limit the number of results
In certain cases, when a query matches a large number of pages, the Hidden-Web site returns only a portion of those pages. For example, the Open Directory Project [2] allows users to see only up to 10,000 results after they issue a query. Obviously, this kind of limitation has an immediate effect on our Hidden-Web crawler. First, since we can only retrieve up to a specific number of pages per query, our crawler will need to issue more queries (and potentially use up more resources) in order to download all the pages. Second, the query selection method that we presented in Section 3.2 assumes that for every potential query qi, we can find P(qi | q1 ∨ · · · ∨ qi−1). That is, for every query qi we can find the fraction of documents in the whole text database that contain qi together with at least one of q1, ..., qi−1. However, if the text database returned only a portion of the results for any of q1, ..., qi−1, then the value P(qi | q1 ∨ · · · ∨ qi−1) is not accurate and may affect our decision for the next query qi, and potentially the performance of our crawler. Since we cannot retrieve more results per query than the maximum number the Web site allows, our crawler has no choice but to submit more queries. However, there is a way to estimate the correct value of P(qi | q1 ∨ · · · ∨ qi−1) in the case where the Web site returns only a portion of the results. Again, assume that the Hidden-Web site we are currently crawling is represented as the rectangle in Figure 8 and its pages as points in the figure. Assume that we have already issued queries q1, ..., qi−1, which returned fewer results than the maximum number that the site allows, and therefore we have
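The table update of Example 1 can be checked with a plain counter. This is a minimal sketch: the counts for "model" and the page totals follow the example, while the remaining initial entries are made-up values for illustration.

```python
from collections import Counter

def update_stats(stats, new_doc_counts):
    """Add per-term document counts from newly-downloaded pages to the
    query statistics table (Figure 7(a) + Figure 7(b) -> Figure 7(c))."""
    for term, count in new_doc_counts.items():
        stats[term] += count

# Figure 7(a): 50 documents downloaded so far; "model" appears in 10.
# (Counts for "computer" and "disk" here are illustrative only.)
stats = Counter({"model": 10, "computer": 38, "disk": 20})
total_docs = 50

# Query "computer" returns 20 new pages: 12 contain "model", 18 "disk".
update_stats(stats, {"model": 12, "computer": 20, "disk": 18})
total_docs += 20

p_model = stats["model"] / total_docs  # P(model | q1 ∨ ... ∨ qi) = 22/70
```

Because the update only adds counts for terms seen in the new pages, its cost is proportional to the size of the new result set, not to the whole vocabulary.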
downloaded all the pages for these queries (the big circle in Figure 8). That is, at this point, our estimate of P(qi | q1 ∨ · · · ∨ qi−1) is accurate. Now assume that we submit query qi to the Web site, but due to a limitation in the number of results that we get back, we retrieve the set q′i (the small circle in Figure 8) instead of the set qi (the dashed circle in Figure 8). Now we need to update our query statistics table so that it has accurate information for the next step. That is, although we got the set q′i back, for every potential query qi+1 we need to find

    P(qi+1 | q1 ∨ · · · ∨ qi) =
      [P(qi+1 ∧ (q1 ∨ · · · ∨ qi−1)) + P(qi+1 ∧ qi) − P(qi+1 ∧ qi ∧ (q1 ∨ · · · ∨ qi−1))] / P(q1 ∨ · · · ∨ qi)    (5)

In the previous equation, we can find P(q1 ∨ · · · ∨ qi) by estimating P(qi) with the method shown in Section 3. Additionally, we can calculate P(qi+1 ∧ (q1 ∨ · · · ∨ qi−1)) and P(qi+1 ∧ qi ∧ (q1 ∨ · · · ∨ qi−1)) by directly examining the documents that we have downloaded from queries q1, ..., qi−1. The term P(qi+1 ∧ qi), however, is unknown, and we need to estimate it. Assuming that q′i is a random sample of qi, then:

    P(qi+1 ∧ qi) = P(qi+1 ∧ q′i) · P(qi) / P(q′i)    (6)

From Equation 6 we can calculate P(qi+1 ∧ qi), and after we substitute this value into Equation 5 we can find P(qi+1 | q1 ∨ · · · ∨ qi).
4. EXPERIMENTAL EVALUATION
In this section we experimentally evaluate the performance of the various algorithms for Hidden-Web crawling presented in this paper. Our goal is to validate our theoretical analysis through real-world experiments, by crawling popular Hidden-Web sites of textual databases. Since the number of documents that are discovered and downloaded from a textual database depends on the selection of the words that will be issued as queries5 to the search interface of each site, we compare the various selection policies that were described in Section 3, namely the random, generic-frequency, and adaptive algorithms. The adaptive algorithm learns new keywords and terms from the
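The Equation-6 correction for truncated result sets scales the joint fraction measured on the returned sample back up to the full result set, under the stated assumption that q′i is a random sample of qi. A minimal sketch (names are our own):

```python
def corrected_joint(p_joint_sample, p_qi_est, p_qi_sample):
    """Estimate P(qi+1 ∧ qi) from the truncated result set q'i.

    p_joint_sample: P(qi+1 ∧ q'i), measured on the pages actually returned.
    p_qi_est:       estimate of the full P(qi), e.g., via the Zipf estimator.
    p_qi_sample:    P(q'i), the fraction of the site actually returned for qi.
    """
    # Random-sample assumption: the joint scales by how much of qi was cut off.
    return p_joint_sample * p_qi_est / p_qi_sample
```

For instance, if the site returned only a third of qi's matches (P(q′i) = 0.1 versus an estimated P(qi) = 0.3), a measured joint of 0.02 is corrected to 0.06 before being plugged into Equation 5.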
documents that it downloads, and its selection process is driven by a cost model as described in Section 3.2. To keep our experiment and its analysis simple at this point, we will assume that the cost for every query is constant. That is, our goal is to maximize the number of downloaded pages by issuing the fewest queries. Later, in Section 4.4, we will present a comparison of our policies based on a more elaborate cost model. In addition, we use the independence estimator (Section 3.1) to estimate P(qi) from downloaded pages. Although the independence estimator is a simple estimator, our experiments will show that it can work very well in practice.6
For the generic-frequency policy, we compute the frequency distribution of words that appear in a 5.5-million-Web-page corpus downloaded from 154 Web sites of various topics [26]. Keywords are selected based on the decreasing frequency with which they appear in this document set, with the most frequent one being selected first, followed by the second-most frequent keyword, etc.7 Regarding the random policy, we use the same set of words collected from the Web corpus, but in this case, instead of selecting keywords based on their relative frequency, we choose them randomly (uniform distribution). In order to further investigate how the quality of the potential query-term list affects the random-based algorithm, we construct two sets: one with the 16,000 most frequent words of the term collection used in the generic-frequency policy (hereafter, the random policy with the set of 16,000 words will be referred to as random-16K), and another set with the 1 million most frequent words of the same collection (hereafter referred to as random-1M). The former set has frequent words that appear in a large number of documents (at least 10,000 in our collection), and therefore can be considered "high-quality" terms. The latter set, though, contains a much larger collection of words, among which some might be bogus or meaningless.
5Throughout our experiments, once an algorithm has submitted a query to a database, we exclude the query from subsequent submissions to the same database by the same algorithm.
6We defer the reporting of results based on the Zipf estimation to future work.
The experiments were conducted by employing each of the aforementioned algorithms (adaptive, generic-frequency, random-16K, and random-1M) to crawl and download contents from three Hidden-Web sites: the PubMed Medical Library,8 Amazon,9 and the Open Directory Project [2]. According to the information on PubMed's Web site, its collection contains approximately 14 million abstracts of biomedical articles. We consider these abstracts as the "documents" in the site, and in each iteration of the adaptive policy, we use these abstracts as input to the algorithm. Thus our goal is to "discover" as many unique abstracts as possible by repeatedly querying the Web query interface provided by PubMed. The Hidden-Web crawling on the PubMed Web site can be considered topic-specific, since all abstracts within PubMed are related to the fields of medicine and biology. In the case of the Amazon Web site, we are interested in downloading all the hidden pages that contain information on
books. Querying Amazon is performed through the Software Developer's Kit that Amazon provides for interfacing to its Web site, which returns results in XML form. The generic "keyword" field is used for searching, and as input to the adaptive policy we extract the product description and the text of customer reviews when present in the XML reply. Since Amazon does not provide any information on how many books it has in its catalogue, we use random sampling on the 10-digit ISBN numbers of the books to estimate the size of the collection. Out of the 10,000 random ISBN numbers queried, 46 are found in the Amazon catalogue, so the size of its book collection is estimated to be (46/10,000) · 10^9 = 4.6 million books. It is also worth noting here that Amazon poses an upper limit on the number of results (books in our case) returned by each query, which is set to 32,000. As for the third Hidden-Web site, the Open Directory Project (hereafter also referred to as dmoz), the site maintains links to 3.8 million sites together with a brief summary of each listed site. The links are searchable through a keyword-search interface. We consider each indexed link together with its brief summary as a document of the dmoz site, and we provide the short summaries to the adaptive algorithm to drive the selection of new keywords for querying. On the dmoz Web site, we perform two Hidden-Web crawls: the first is on its generic collection of 3.8 million indexed sites, regardless of the category that they fall into. The other crawl is performed specifically on the Arts section of dmoz (http://dmoz.org/Arts), which comprises approximately 429,000 indexed sites that are relevant to Arts, making this crawl topic-specific, as in PubMed.
Figure 9: Coverage of policies for PubMed
Figure 10: Coverage of policies for Amazon
Figure 11: Coverage of policies for general dmoz
Figure 12: Coverage of policies for the Arts section of dmoz
Like Amazon, dmoz also enforces an
upper limit on the number of returned results, which is 10,000 links with their summaries.
4.1 Comparison of policies
The first question that we seek to answer concerns the evolution of the coverage metric as we submit queries to the sites. That is, what fraction of the collection of documents stored in a Hidden-Web site can we download as we continuously query for new words selected using the policies described above? More formally, we are interested in the value of P(q1 ∨ · · · ∨ qi−1 ∨ qi) after we submit queries q1, ..., qi, as i increases. In Figures 9, 10, 11, and 12 we present the coverage metric for each policy, as a function of the query number, for the Web sites of PubMed, Amazon, general dmoz, and the Arts-specific dmoz, respectively. On the y-axis the fraction of the total documents downloaded from the Web site is plotted, while the x-axis represents the query number. A first observation from these graphs is that, in general, the generic-frequency and adaptive policies perform much better than the random-based algorithms. In all of the figures, the curves for random-1M and random-16K are significantly below those of the other policies. Between the generic-frequency and adaptive policies, we can see that the latter outperforms the former when the site is topic-specific. For example, for the PubMed site (Figure 9), the adaptive algorithm issues only 83 queries to download almost 80% of the documents stored in PubMed, while the generic-frequency algorithm requires 106 queries for the same coverage. For the dmoz/Arts crawl (Figure 12), the difference is even more substantial: the adaptive policy is able to download 99.98% of the total sites indexed in the Directory by issuing 471 queries, while the frequency-based algorithm is much less effective using the same number of queries, discovering only 72% of the total number of indexed sites. The adaptive algorithm, by examining the contents of the pages that it
downloads at each iteration, is able to identify the topic of the site as expressed by the words that appear most frequently in the result-set.\nConsequently, it is able to select words for subsequent queries that are more relevant to the site than those preferred by the generic-frequency policy, which are drawn from a large, generic collection.\nTable 1 shows a sample of 10 keywords out of 211 chosen and submitted to the PubMed Web site by the adaptive algorithm, but not by the other policies.\nFor each keyword, we present the number of the iteration, along with the number of results that it returned.\nAs one can see from the table, these keywords are highly relevant to the topics of medicine and biology of the Public Medical Library, and match against numerous articles stored in its Web site.\nIn both cases examined in Figures 9 and 12, the random-based policies perform much worse than the adaptive and the generic-frequency algorithms.\nIt is worth noting, however, that the random-based policy with the small, carefully selected set of 16,000 \"quality\" words manages to download a considerable fraction of 42.5% from the PubMed Web site after 200 queries, while the coverage for the Arts section of dmoz reaches 22.7% after 471 queried keywords.\nOn the other hand, the random-based approach that makes use of the vast collection of 1 million words, a large number of which are bogus keywords, fails to download even a mere 1% of the total collection after submitting the same number of query words.\nTable 1: Sample of keywords queried to PubMed exclusively by the adaptive algorithm\nFor the generic collections of Amazon and the dmoz sites, shown in Figures 10 and 11 respectively, we get mixed results: the generic-frequency policy shows slightly better performance than the adaptive policy for the Amazon site (Figure 10), and the adaptive method clearly outperforms the generic-frequency for the general dmoz site (Figure 11).\nA closer look at the log files of the two Hidden Web crawlers 
reveals the main reason: Amazon was functioning in a very unreliable way when the adaptive crawler visited it, resulting in a large number of lost results.\nThus, we suspect that the slightly poorer performance of the adaptive policy is due to this experimental variance.\nWe are currently running another experiment to verify whether this is indeed the case.\nAside from this experimental variance, the Amazon result indicates that if the collection and the words that a Hidden Web site contains are generic enough, then the generic-frequency approach may be a good candidate algorithm for effective crawling.\nAs in the case of topic-specific Hidden Web sites, the random-based policies also exhibit poor performance compared to the other two algorithms when crawling generic sites: for the Amazon Web site, random-16K succeeds in downloading almost 36.7% after issuing 775 queries, while for the generic collection of dmoz, the fraction of the collection of links downloaded is 13.5% after the 770th query.\nFinally, as expected, random-1M is even worse than random-16K, downloading only 14.5% of Amazon and 0.3% of the generic dmoz.\nIn summary, the adaptive algorithm performs remarkably well in all cases: it is able to discover and download most of the documents stored in Hidden Web sites by issuing the fewest queries.\nWhen the collection refers to a specific topic, it is able to identify the keywords most relevant to the topic of the site and consequently query for terms that are most likely to return a large number of results.\nOn the other hand, the generic-frequency policy proves to be quite effective too, though less so than the adaptive: it is able to retrieve a large portion of the collection relatively quickly, and when the site is not topic-specific, its effectiveness can reach that of the adaptive policy (e.g. 
Amazon).\nFinally, the random policy performs poorly in general, and should not be preferred.\n4.2 Impact of the initial query\nAn interesting issue that deserves further examination is whether the initial choice of the keyword used as the first query issued by the adaptive algorithm affects its effectiveness in subsequent iterations.\nFigure 13: Convergence of the adaptive algorithm using different initial queries for crawling the PubMed Web site\nThe choice of this keyword is not made by the selection process of the adaptive algorithm itself and has to be set manually, since its query statistics tables have not yet been populated.\nThus, the selection is generally arbitrary, so for purposes of fully automating the whole process, some additional investigation seems necessary.\nFor this reason, we initiated three adaptive Hidden Web crawlers targeting the PubMed Web site with different seed-words: the word \"data\", which returns 1,344,999 results, the word \"information\", which reports 308,474 documents, and the word \"return\", which retrieves 29,707 pages, out of 14 million.\nThese keywords represent varying degrees of term popularity in PubMed, with the first one being of high popularity, the second of medium, and the third of low.\nWe also show results for the keyword \"pubmed\", used in the experiments for coverage of Section 4.1, and which returns 695 articles.\nAs we can see from Figure 13, after a small number of queries, all four crawlers download roughly the same fraction of the collection, regardless of their starting point: their coverages are roughly equivalent from the 25th query.\nEventually, all four crawlers use the same set of terms for their queries, regardless of the initial query.\nIn the specific experiment, from the 36th query onward, all four crawlers use the same terms for their queries in each iteration, or the same terms offset by one or two query numbers.\nOur result confirms the observation of [11] that the choice of the initial query 
has minimal effect on the final performance.\nWe can explain this intuitively as follows: our algorithm approximates the optimal set of queries to use for a particular Web site.\nOnce the algorithm has issued a significant number of queries, it has an accurate estimation of the content of the Web site, regardless of the initial query.\nSince this estimation is similar for all runs of the algorithm, the crawlers will use roughly the same queries.\n4.3 Impact of the limit in the number of results\nWhile the Amazon and dmoz sites impose result-size limits of 32,000 and 10,000 respectively, these limits may be larger than those imposed by other Hidden Web sites.\nIn order to investigate how a \"tighter\" limit on the result size affects the performance of our algorithms, we performed two additional crawls of the generic dmoz site: we ran the generic-frequency and adaptive policies but retrieved only up to the top 1,000 results for every query.\nIn Figure 14 we plot the coverage for the two policies as a function of the number of queries.\nFigure 14: Coverage of general dmoz after limiting the number of results to 1,000\nAs one might expect, by comparing the new result in Figure 14 to that of Figure 11, where the result limit was 10,000, we conclude that the tighter limit requires a higher number of queries to achieve the same coverage.\nFor example, when the result limit was 10,000, the adaptive policy could download 70% of the site after issuing 630 queries, while it had to issue 2,600 queries to download 70% of the site when the limit was 1,000.\nOn the other hand, our new result shows that even with a tight result limit, it is still possible to download most of a Hidden Web site after issuing a reasonable number of queries.\nThe adaptive policy could download more than 85% of the site after issuing 3,500 queries when the limit was 1,000.\nFinally, our result shows that our adaptive policy consistently outperforms the generic-frequency policy regardless 
of the result limit.\nIn both Figure 14 and Figure 11, our adaptive policy shows significantly larger coverage than the generic-frequency policy for the same number of queries.\n4.4 Incorporating the document download cost\nFor brevity of presentation, the performance evaluation results provided so far assumed a simplified cost model where every query involved a constant cost.\nIn this section we present results regarding the performance of the adaptive and generic-frequency algorithms using Equation 2 to drive our query selection process.\nAs we discussed in Section 2.3.1, this query cost model includes the cost for submitting the query to the site, retrieving the result index page, and also downloading the actual pages.\nFor these costs, we examined the size of every result in the index page and the sizes of the documents, and we chose cq = 100, cr = 100, and cd = 10,000 as values for the parameters of Equation 2, for the particular experiment that we ran on the PubMed website.\nThe values that we selected imply that the costs of issuing one query and of retrieving one result from the result index page are roughly the same, while the cost of downloading an actual page is 100 times larger.\nWe believe that these values are reasonable for the PubMed Web site.\nFigure 15 shows the coverage of the adaptive and generic-frequency algorithms as a function of the resource units used during the download process.\nThe horizontal axis is the amount of resources used, and the vertical axis is the coverage.\nAs is evident from the graph, the adaptive policy makes more efficient use of the available resources, as it is able to download more articles than the generic-frequency policy using the same amount of resource units.\nHowever, the difference in coverage is less dramatic in this case, compared to the graph of Figure 9.\nThe smaller difference is due to the fact that under the current cost metric, the download cost of documents constitutes a significant portion of the 
cost.\nTherefore, when both policies downloaded the same number of documents, the savings of the adaptive policy are not as dramatic as before.\nThat is, the savings in the query cost and the result index download cost are only a relatively small portion of the overall cost.\nFigure 15: Coverage of PubMed after incorporating the document download cost\nStill, we observe noticeable savings from the adaptive policy.\nAt a total cost of 8,000, for example, the coverage of the adaptive policy is roughly 0.5 while the coverage of the frequency policy is only 0.3.\n5.\nRELATED WORK\nIn a recent study, Raghavan and Garcia-Molina [29] present an architectural model for a Hidden Web crawler.\nThe main focus of this work is to learn Hidden-Web query interfaces, not to generate queries automatically.\nThe potential queries are either provided manually by users or collected from the query interfaces.\nIn contrast, our main focus is to generate queries automatically without any human intervention.\nThe idea of automatically issuing queries to a database and examining the results has been previously used in different contexts.\nFor example, in [10, 11], Callan and Connell try to acquire an accurate language model by collecting a uniform random sample from the database.\nIn [22] Lawrence and Giles issue random queries to a number of Web Search Engines in order to estimate the fraction of the Web that has been indexed by each of them.\nIn a similar fashion, Bharat and Broder [8] issue random queries to a set of Search Engines in order to estimate the relative size and overlap of their indexes.\nIn [6], Barbosa and Freire experimentally evaluate methods for building multi-keyword queries that can return a large fraction of a document collection.\nOur work differs from the previous studies in two ways.\nFirst, it provides a theoretical framework for analyzing the process of generating queries for a database and examining the results, which can help us better understand the 
effectiveness of the methods presented in the previous work.\nSecond, we apply our framework to the problem of Hidden Web crawling and demonstrate the efficiency of our algorithms.\nCope et al. [15] propose a method to automatically detect whether a particular Web page contains a search form.\nThis work is complementary to ours; once we detect search interfaces on the Web using the method in [15], we may use our proposed algorithms to download pages automatically from those Web sites.\nReference [4] reports methods to estimate what fraction of a text database can be eventually acquired by issuing queries to the database.\nIn [3] the authors study query-based techniques that can extract relational data from large text databases.\nAgain, these works study orthogonal issues and are complementary to our work.\nIn order to make documents in multiple textual databases searchable at a central place, a number of \"harvesting\" approaches have been proposed (e.g., OAI [21], DP9 [24]).\nThese approaches essentially assume cooperative document databases that willingly share some of their metadata and\/or documents to help a third-party search engine to index the documents.\nOur approach assumes uncooperative databases that do not share their data publicly and whose documents are accessible only through search interfaces.\nThere exists a large body of work studying how to identify the most relevant database given a user query [20, 19, 14, 23, 18].\nThis body of work is often referred to as the meta-searching or database selection problem over the Hidden Web.\nFor example, [19] suggests the use of focused probing to classify databases into a topical category, so that given a query, a relevant database can be selected based on its topical category.\nOur vision is different from this body of work in that we intend to download and index the Hidden pages at a central location in advance, so that users can access all the information at their convenience from one single 
location.\n6.\nCONCLUSION AND FUTURE WORK\nTraditional crawlers normally follow links on the Web to discover and download pages.\nTherefore, they cannot reach the Hidden Web pages, which are only accessible through query interfaces.\nIn this paper, we studied how we can build a Hidden Web crawler that can automatically query a Hidden Web site and download pages from it.\nWe proposed three different query generation policies for the Hidden Web: a policy that picks queries at random from a list of keywords, a policy that picks queries based on their frequency in a generic text collection, and a policy that adaptively picks a good query based on the content of the pages downloaded from the Hidden Web site.\nExperimental evaluation on four real Hidden Web sites shows that our policies have great potential.\nIn particular, in certain cases the adaptive policy can download more than 90% of a Hidden Web site after issuing approximately 100 queries.\nGiven these results, we believe that our work provides a potential mechanism to improve the search-engine coverage of the Web and the user experience of Web search.\n6.1 Future Work\nWe briefly discuss some future research avenues.\nMulti-attribute Databases We are currently investigating how to extend our ideas to structured multi-attribute databases.\nWhile generating queries for multi-attribute databases is clearly a more difficult problem, we may exploit the following observation to address it: when a site supports multi-attribute queries, the site often returns pages that contain values for each of the query attributes.\nFor example, when an online bookstore supports queries on title, author and ISBN, the pages returned from a query typically contain the title, author and ISBN of the corresponding books.\nThus, if we can analyze the returned pages and extract the values for each field (e.g., title = `Harry Potter', author = `J.K. 
Rowling', etc), we can apply the same idea that we used for the textual database: estimate the frequency of each attribute value and pick the most promising one.\nThe main challenge is to automatically segment the returned pages so that we can identify the sections of the pages that present the values corresponding to each attribute.\nSince many Web sites follow limited formatting styles in presenting multiple attributes--for example, most book titles are preceded by the label \"Title:\"--we believe we may learn page-segmentation rules automatically from a small set of training examples.\nOther Practical Issues In addition to the automatic query generation problem, there are many practical issues to be addressed to build a fully automatic Hidden-Web crawler.\nFor example, in this paper we assumed that the crawler already knows all query interfaces for Hidden-Web sites.\nBut how can the crawler discover the query interfaces?\nThe method proposed in [15] may be a good starting point.\nIn addition, some Hidden-Web sites return their results in batches of, say, 20 pages, so the user has to click on a \"next\" button in order to see more results.\nIn this case, a fully automatic Hidden-Web crawler should know that the first result index page contains only a partial result and \"press\" the next button automatically.\nFinally, some Hidden Web sites may contain an infinite number of Hidden Web pages which do not contribute much significant content (e.g. 
a calendar with links for every day).\nIn this case the Hidden-Web crawler should be able to detect that the site does not have much more new content and stop downloading pages from the site.\nPage similarity detection algorithms may be useful for this purpose [9, 13].","keyphrases":["hidden web","keyword queri","deep web","hidden web crawler","queri-select polici","crawl polici","textual databas","gener-frequenc polici","independ estim","adapt polici","hide web crawl","deep web crawler","adapt algorithm","queri select"],"prmu":["P","P","P","P","M","M","M","M","U","M","M","R","U","M"]} {"id":"H-96","title":"A Study of Factors Affecting the Utility of Implicit Relevance Feedback","abstract":"Implicit relevance feedback (IRF) is the process by which a search system unobtrusively gathers evidence on searcher interests from their interaction with the system. IRF is a new method of gathering information on user interest and, if IRF is to be used in operational IR systems, it is important to establish when it performs well and when it performs poorly. In this paper we investigate how the use and effectiveness of IRF is affected by three factors: search task complexity, the search experience of the user and the stage in the search. Our findings suggest that all three of these factors contribute to the utility of IRF.","lvl-1":"A Study of Factors Affecting the Utility of Implicit Relevance Feedback Ryen W. White Human-Computer Interaction Laboratory Institute for Advanced Computer Studies University of Maryland College Park, MD 20742, USA ryen@umd.edu Ian Ruthven Department of Computer and Information Sciences University of Strathclyde Glasgow, Scotland.\nG1 1XH.\nir@cis.strath.ac.uk Joemon M. 
Jose Department of Computing Science University of Glasgow Glasgow, Scotland.\nG12 8RZ.\njj@dcs.gla.ac.uk ABSTRACT Implicit relevance feedback (IRF) is the process by which a search system unobtrusively gathers evidence on searcher interests from their interaction with the system.\nIRF is a new method of gathering information on user interest and, if IRF is to be used in operational IR systems, it is important to establish when it performs well and when it performs poorly.\nIn this paper we investigate how the use and effectiveness of IRF is affected by three factors: search task complexity, the search experience of the user and the stage in the search.\nOur findings suggest that all three of these factors contribute to the utility of IRF.\nCategories and Subject Descriptors H.3.3 [Information Search and Retrieval] General Terms Experimentation, Human Factors.\n1.\nINTRODUCTION Information Retrieval (IR) systems are designed to help searchers solve problems.\nIn the traditional interaction metaphor employed by Web search systems such as Yahoo! 
and MSN Search, the system generally only supports the retrieval of potentially relevant documents from the collection.\nHowever, it is also possible to offer support to searchers for different search activities, such as selecting the terms to present to the system or choosing which search strategy to adopt [3, 8]; both of which can be problematic for searchers.\nAs the quality of the query submitted to the system directly affects the quality of search results, the issue of how to improve search queries has been studied extensively in IR research [6].\nTechniques such as Relevance Feedback (RF) [11] have been proposed as a way in which the IR system can support the iterative development of a search query by suggesting alternative terms for query modification.\nHowever, in practice RF techniques have been underutilised as they place an increased cognitive burden on searchers to directly indicate relevant results [10].\nImplicit Relevance Feedback (IRF) [7] has been proposed as a way in which search queries can be improved by passively observing searchers as they interact.\nIRF has been implemented either through the use of surrogate measures based on interaction with documents (such as reading time, scrolling or document retention) [7] or using interaction with browse-based result interfaces [5].\nIRF has been shown to display mixed effectiveness because the factors that are good indicators of user interest are often erratic and the inferences drawn from user interaction are not always valid [7].\nIn this paper we present a study into the use and effectiveness of IRF in an online search environment.\nThe study aims to investigate the factors that affect IRF, in particular three research questions: (i) is the use of and perceived quality of terms generated by IRF affected by the search task?\n(ii) is the use of and perceived quality of terms generated by IRF affected by the level of search experience of system users?\n(iii) is IRF equally used and does it generate 
terms that are equally useful at all search stages?\nThis study aims to establish when, and under what circumstances, IRF performs well in terms of its use and the query modification terms selected as a result of its use.\nThe main experiment from which the data are taken was designed to test techniques for selecting query modification terms and techniques for displaying retrieval results [13].\nIn this paper we use data derived from that experiment to study factors affecting the utility of IRF.\n2.\nSTUDY In this section we describe the user study conducted to address our research questions.\n2.1 Systems Our study used two systems both of which suggested new query terms to the user.\nOne system suggested terms based on the user``s interaction (IRF), the other used Explicit RF (ERF) asking the user to explicitly indicate relevant material.\nBoth systems used the same term suggestion algorithm, [15], and used a common interface.\n2.1.1 Interface Overview In both systems, retrieved documents are represented at the interface by their full-text and a variety of smaller, query-relevant representations, created at retrieval time.\nWe used the Web as the test collection in this study and Google1 as the underlying search engine.\nDocument representations include the document title and a summary of the document; a list of top-ranking sentences (TRS) extracted from the top documents retrieved, scored in relation to the query, a sentence in the document summary, and each summary sentence in the context it occurs in the document (i.e., with the preceding and following sentence).\nEach summary sentence and top-ranking sentence is regarded as a representation of the document.\nThe default display contains the list of top-ranking sentences and the list of the first ten document titles.\nInteracting with a representation guides searchers to a different representation from the same document, e.g., moving the mouse over a document title displays a summary of the document.\nThis 
presentation of progressively more information from documents to aid relevance assessments has been shown to be effective in earlier work [14, 16].\nIn Appendix A we show the complete interface to the IRF system with the document representations marked and in Appendix B we show a fragment from the ERF interface with the checkboxes used by searchers to indicate relevant information.\nBoth systems provide an interactive query expansion feature by suggesting new query terms to the user.\nThe searcher has the responsibility for choosing which, if any, of these terms to add to the query.\nThe searcher can also add or remove terms from the query at will.\n2.1.2 Explicit RF system This version of the system implements explicit RF.\nNext to each document representation are checkboxes that allow searchers to mark individual representations as relevant; marking a representation is an indication that its contents are relevant.\nOnly the representations marked relevant by the user are used for suggesting new query terms.\nThis system was used as a baseline against which the IRF system could be compared.\n2.1.3 Implicit RF system This system makes inferences about searcher interests based on the information with which they interact.\nAs described in Section 2.1.1 interacting with a representation highlights a new representation from the same document.\nTo the searcher this is a way they can find out more information from a potentially interesting source.\nTo the implicit RF system each interaction with a representation is interpreted as an implicit indication of interest in that representation; interacting with a representation is assumed to be an indication that its contents are relevant.\nThe query modification terms are selected using the same algorithm as in the Explicit RF system.\nTherefore the only difference between the systems is how relevance is communicated to the system.\nThe results of the main experiment [13] indicated that these two systems were comparable in 
terms of effectiveness.\n2.2 Tasks Search tasks were designed to encourage realistic search behaviour by our subjects.\nThe tasks were phrased in the form of simulated work task situations [2], i.e., short search scenarios that were designed to reflect real-life search situations and allow subjects to develop personal assessments of relevance.\nWe devised six search topics (i.e., applying to university, allergies in the workplace, art galleries in Rome, Third Generation mobile phones, Internet music piracy and petrol prices) based on pilot testing with a small representative group of subjects.\nThese subjects were not involved in the main experiment.\nFor each topic, three versions of each work task situation were devised, each version differing in their predicted level of task complexity.\nAs described in [1] task complexity is a variable that affects subject perceptions of a task and their interactive behaviour, e.g., subjects perform more filtering activities with highly complex search tasks.\nBy developing tasks of different complexity we can assess how the nature of the task affects the subjects'' interactive behaviour and hence the evidence supplied to IRF algorithms.\nTask complexity was varied according to the methodology described in [1], specifically by varying the number of potential information sources and types of information required, to complete a task.\nIn our pilot tests (and in a posteriori analysis of the main experiment results) we verified that subjects reporting of individual task complexity matched our estimation of the complexity of the task.\nSubjects attempted three search tasks: one high complexity, one moderate complexity and one low complexity2 .\nThey were asked to read the task, place themselves in the situation it described and find the information they felt was required to complete the task.\nFigure 1 shows the task statements for three levels of task complexity for one of the six search topics.\nHC Task: High Complexity Whilst 
having dinner with an American colleague, they comment on the high price of petrol in the UK compared to other countries, despite large volumes coming from the same source.\nUnaware of any major differences, you decide to find out how and why petrol prices vary worldwide.\nMC Task: Moderate Complexity Whilst out for dinner one night, one of your friends'' guests is complaining about the price of petrol and the factors that cause it.\nThroughout the night they seem to be complaining about everything they can, reducing the credibility of their earlier statements so you decide to research which factors actually are important in determining the price of petrol in the UK.\nLC Task: Low Complexity While out for dinner one night, your friend complains about the rising price of petrol.\nHowever, as you have not been driving for long, you are unaware of any major changes in price.\nYou decide to find out how the price of petrol has changed in the UK in recent years.\nFigure 1.\nVarying task complexity (Petrol Prices topic).\n2.3 Subjects 156 volunteers expressed an interest in participating in our study.\n48 subjects were selected from this set with the aim of populating two groups, each with 24 subjects: inexperienced (infrequent\/ inexperienced searchers) and experienced (frequent\/ experienced searchers).\nSubjects were not chosen and classified into their groups until they had completed an entry questionnaire that asked them about their search experience and computer use.\nThe average age of the subjects was 22.83 years (maximum 51, minimum 18, \u03c3 = 5.23 years) and 75% had a university diploma or a higher degree.\n47.91% of subjects had, or were pursuing, a qualification in a discipline related to Computer Science.\nThe subjects were a mixture of students, researchers, academic staff and others, with different levels of computer and search experience.\nThe subjects were divided into the two groups depending on their search experience, how often they searched and the 
types of searches they performed.\nAll were familiar with Web searching, and some with searching in other domains.\n2.4 Methodology The experiment had a factorial design, with 2 levels of search experience, 3 experimental systems (although we only report on the findings from the ERF and IRF systems) and 3 levels of search task complexity.\nSubjects attempted one task of each complexity, switched systems after each task and used each system once.\n2 The main experiment from which these results are drawn had a third comparator system which had a different interface.\nEach subject carried out three tasks, one on each system.\nWe only report on the results from the ERF and IRF systems as these are the only pertinent ones for this paper.\nThe order in which systems were used and search tasks attempted was randomised according to a Latin square experimental design.\nQuestionnaires used Likert scales, semantic differentials and open-ended questions to elicit subject opinions [4].\nSystem logging was also used to record subject interaction.\nA tutorial carried out prior to the experiment allowed subjects to use a non-feedback version of the system to attempt a practice task before using the first experimental system.\nExperiments lasted between one-and-a-half and two hours, dependent on variables such as the time spent completing questionnaires.\nSubjects were offered a 5-minute break after the first hour.\nIn each experiment: i. 
the subject was welcomed and asked to read an introduction to the experiments and sign consent forms.\nThis set of instructions was written to ensure that each subject received precisely the same information.\nii.\nthe subject was asked to complete an introductory questionnaire.\nThis contained questions about the subject``s education, general search experience, computer experience and Web search experience.\niii.\nthe subject was given a tutorial on the interface, followed by a training topic on a version of the interface with no RF.\niv.\nthe subject was given three task sheets and asked to choose one task from the six topics on each sheet.\nNo guidelines were given to subjects when choosing a task other than they could not choose a task from any topic more than once.\nTask complexity was rotated by the experimenter so each subject attempted one high complexity task, one moderate complexity task and one low complexity task.\nv. the subject was asked to perform the search and was given 15 minutes to search.\nThe subject could terminate a search early if they were unable to find any more information they felt helped them complete the task.\nvi.\nafter completion of the search, the subject was asked to complete a post-search questionnaire.\nvii.\nthe remaining tasks were attempted by the subject, following steps v. 
and vi.\nviii.\nthe subject completed a post-experiment questionnaire and participated in a post-experiment interview.\nSubjects were told that their interaction may be used by the IRF system to help them as they searched.\nThey were not told which behaviours would be used or how it would be used.\nWe now describe the findings of our analysis.\n3.\nFINDINGS In this section we use the data derived from the experiment to answer our research questions about the effect of search task complexity, search experience and stage in search on the use and effectiveness of IRF.\nWe present our findings per research question.\nDue to the ordinal nature of much of the data non-parametric statistical testing is used in this analysis and the level of significance is set to p < .05, unless otherwise stated.\nWe use the method proposed by [12] to determine the significance of differences in multiple comparisons and that of [9] to test for interaction effects between experimental variables, the occurrence of which we report where appropriate.\nAll Likert scales and semantic differentials were on a 5-point scale where a rating closer to 1 signifies more agreement with the attitude statement.\nThe category labels HC, MC and LC are used to denote the high, moderate and low complexity tasks respectively.\nThe highest, or most positive, values in each table are shown in bold.\nOur analysis uses data from questionnaires, post-experiment interviews and background system logging on the ERF and IRF systems.\n3.1 Search Task Searchers attempted three search tasks of varying complexity, each on a different experimental system.\nIn this section we present an analysis on the use and usefulness of IRF for search tasks of different complexities.\nWe present our findings in terms of the RF provided by subjects and the terms recommended by the systems.\n3.1.1 Feedback We use questionnaires and system logs to gather data on subject perceptions and provision of RF for different search tasks.\nIn the 
post-search questionnaire subjects were asked how RF was conveyed, using semantic differentials to elicit their opinion on:

1. the value of the feedback technique: "How you conveyed relevance to the system (i.e. ticking boxes or viewing information) was:" easy/difficult, effective/ineffective, useful/not useful.
2. the process of providing the feedback: "How you conveyed relevance to the system made you feel:" comfortable/uncomfortable, in control/not in control.

The average differential values obtained are shown in Table 1 for IRF and each task category. The value for the differential "All" represents the mean of all differentials for a particular attitude statement. This gives some overall understanding of the subjects' feelings, which can be useful as the subjects may not answer individual differentials very precisely. The values for ERF are included for reference in this table and in all other tables and figures in the Findings section. Since the aim of the paper is to investigate situations in which IRF might perform well, not a direct comparison between IRF and ERF, we make only limited comparisons between these two types of feedback. Each cell in Table 1 summarises the subject responses for 16 task-system pairs (16 subjects who ran a high complexity (HC) task on the ERF system, 16 subjects who ran a medium complexity (MC) task on the ERF system, etc.). Kruskal-Wallis tests were applied to each differential for each type of RF. Since this analysis involved many differentials, we use a Bonferroni correction to control the experiment-wise error rate, setting the alpha level (α) to .0167 for statement 1 and .0250 for statement 2, i.e., .05 divided by the number of differentials in each statement. This correction reduces the number of Type I errors, i.e., rejections of null hypotheses that are true.
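The Bonferroni-corrected Kruskal-Wallis testing described above is straightforward to reproduce. The sketch below is illustrative only: the 5-point ratings are made up, not the study's data, and the H statistic omits the tie correction a full analysis would apply.

```python
from itertools import chain

def rank(values):
    """Midranks: tied values receive the average of their positions (1-based)."""
    s = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[s[j + 1]] == values[s[i]]:
            j += 1
        r = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            ranks[s[k]] = r
        i = j + 1
    return ranks

def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic for k independent samples (no tie correction)."""
    data = list(chain.from_iterable(groups))
    n = len(data)
    ranks = rank(data)
    h, start = 0.0, 0
    for g in groups:
        r_sum = sum(ranks[start:start + len(g)])
        h += r_sum ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical 5-point differential ratings for HC, MC and LC task groups
hc = [1, 2, 2, 1, 2, 3]
mc = [2, 2, 3, 2, 3, 3]
lc = [3, 3, 2, 3, 4, 3]
h = kruskal_wallis_h([hc, mc, lc])
alpha = 0.05 / 3  # Bonferroni: .05 over the three differentials of statement 1
```

H is then compared against the χ² distribution with k − 1 degrees of freedom at the corrected α.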
Table 1. Subject perceptions of RF method (lower = better).

              Explicit RF        Implicit RF
Differential  HC    MC    LC     HC    MC    LC
Easy          2.78  2.47  2.12   1.86  1.81  1.93
Effective     2.94  2.68  2.44   2.04  2.41  2.66
Useful        2.76  2.51  2.16   1.91  2.37  2.56
All (1)       2.83  2.55  2.24   1.94  2.20  2.38
Comfortable   2.27  2.28  2.35   2.11  2.15  2.16
In control    2.01  1.97  1.93   2.73  2.68  2.61
All (2)       2.14  2.13  2.14   2.42  2.42  2.39

Subject responses suggested that IRF was most effective and useful for more complex search tasks (effective: χ²(2) = 11.62, p = .003; useful: χ²(2) = 12.43, p = .002) and that the differences in all pair-wise comparisons between tasks were significant (Dunn's post-hoc tests, multiple comparisons using rank sums; all Z ≥ 2.88, all p ≤ .002). Subject perceptions of IRF elicited using the other differentials did not appear to be affected by the complexity of the search task (all χ²(2) ≤ 2.85, all p ≥ .24, Kruskal-Wallis tests). To determine whether a relationship exists between the effectiveness and usefulness of the IRF process and task complexity, we applied Spearman's rank-order correlation coefficient to participant responses. The results suggest that the effectiveness and usefulness of IRF are both related to task complexity: as task complexity increases, subject preference for IRF also increases (effective: all r ≥ 0.644, p ≤ .002; useful: all r ≥ 0.541, p ≤ .009). On the other hand, subjects felt ERF was more effective and useful for low complexity tasks (effective: χ²(2) = 7.01, p = .03; useful: χ²(2) = 6.59, p = .037, Kruskal-Wallis tests; all pair-wise differences significant, all Z ≥ 2.34, all p ≤ .01, Dunn's post-hoc tests). Their verbal reporting of ERF, where perceived utility and effectiveness increased as task complexity decreased, supports this finding. In tasks of lower complexity the subjects felt they were better able to provide feedback on whether or not documents were relevant to the task.

We analyse interaction logs generated by both interfaces to investigate the amount of RF subjects provided. To do this we use a measure of search "precision": the number of document representations a searcher assessed, divided by the total number they could assess. In ERF this is the proportion of all possible representations that were explicitly marked relevant by the searcher. In IRF this is the proportion of all possible representations that were viewed by the searcher. This proportion measures the searcher's level of interaction with a document, and we take it to measure the user's interest in the document: the more document representations viewed, the more interested we assume a user is in the content of the document. There are a maximum of 14 representations per document: 4 top-ranking sentences, 1 title, 1 summary, 4 summary sentences and 4 summary sentences in document context. Since the interface shows document representations from the top 30 documents, there are 420 representations that a searcher can assess. Table 2 shows the proportion of representations provided as RF by subjects.

Table 2. Feedback and documents viewed.

                     Explicit RF          Implicit RF
Measure              HC     MC     LC     HC     MC     LC
Proportion Feedback  2.14   2.39   2.65   21.50  19.36  15.32
Documents Viewed     10.63  10.43  10.81  10.84  12.19  14.81

For IRF there is a clear pattern: as complexity increases the subjects viewed fewer documents but viewed more representations for each document. This suggests a pattern where users investigate retrieved documents in more depth. It also means that the amount of feedback varies with the complexity of the search task: since IRF is based on the interaction of the searcher, the more they interact, the more feedback they provide. This has no effect on the number of RF terms chosen, but may affect the quality of the terms selected.
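The "search precision" measure defined above reduces to simple counting over the 420 assessable representations. A minimal sketch (the 90-representation session is hypothetical):

```python
# Each of the top-30 result documents has 14 representations:
# 4 top-ranking sentences, 1 title, 1 summary, 4 summary sentences
# and 4 summary sentences shown in document context.
REPS_PER_DOC = 4 + 1 + 1 + 4 + 4   # = 14
TOP_DOCS = 30
TOTAL_REPS = REPS_PER_DOC * TOP_DOCS  # 420 assessable representations

def feedback_proportion(assessed_reps):
    """'Search precision' (%): share of all possible representations a searcher
    assessed (viewed, for IRF; explicitly marked relevant, for ERF)."""
    return 100.0 * assessed_reps / TOTAL_REPS

# e.g. a searcher who viewed 90 representations during an IRF session
p = feedback_proportion(90)
```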
Correlation analysis revealed a strong negative correlation between the number of documents viewed and the amount of feedback searchers provided (r = −0.696, p = .001, Pearson's correlation coefficient); as the number of documents viewed increases, the proportion of feedback falls (searchers view fewer representations of each document). This may be a natural consequence of there being less time to view documents in a time-constrained task environment but, as we will show, as complexity changes the nature of the information searchers interact with also appears to change. In the next section we investigate the effect of task complexity on the terms chosen as a result of IRF.

3.1.2 Terms

The same RF algorithm was used to select query modification terms in all systems [16]. We use subject opinions of terms recommended by the systems as a measure of the effectiveness of IRF with respect to the terms generated for different search tasks. To test this, subjects were asked to complete two semantic differentials that completed the statement: "The words chosen by the system were:" relevant/irrelevant and useful/not useful. "Relevant" was explained as being related to their task, whereas "useful" was for terms seen as being helpful in the search task. Table 3 presents average responses grouped by search task.

Table 3. Subject perceptions of system terms (lower = better).

              Explicit RF        Implicit RF
Differential  HC    MC    LC     HC    MC    LC
Relevant      2.50  2.46  2.41   1.94  2.35  2.68
Useful        2.61  2.61  2.59   2.06  2.54  2.70

Kruskal-Wallis tests were applied within each type of RF. The results indicate that the relevance and usefulness of the terms chosen by IRF are affected by the complexity of the search task; the terms chosen are more relevant and useful when the search task is more complex (relevant: χ²(2) = 13.82, p = .001; useful: χ²(2) = 11.04, p = .004; α = .025). For ERF, the results indicate that the terms generated are perceived to be more relevant and useful for less complex search tasks, although the differences between tasks were not significant (all χ²(2) ≤ 2.28, all p ≥ .32, Kruskal-Wallis tests). This suggests that subject perceptions of the terms chosen for query modification are affected by task complexity. Comparison between ERF and IRF shows that subject perceptions also vary for different types of RF (all T(16) ≥ 102, all p ≤ .021, Wilcoxon signed-rank test).

As well as using data on the relevance and utility of the terms chosen, we used data on term acceptance to measure the perceived value of the terms suggested. The Explicit and Implicit RF systems made recommendations about which terms could be added to the original search query. In Table 4 we show the proportion of the top six terms (footnote 13) shown to the searcher that were added to the search query, for each type of task and each type of RF.

Table 4. Term acceptance (percentage of top six terms).

                     Explicit RF          Implicit RF
Proportion of terms  HC     MC     LC     HC     MC     LC
Accepted             65.31  67.32  68.65  67.45  67.24  67.59

The average number of terms accepted from IRF is approximately the same across all search tasks and generally the same as that of ERF (footnote 14). As Table 2 shows, subjects marked fewer documents relevant for highly complex tasks. Therefore, when task complexity increases the ERF system has fewer examples of relevant documents, and the expansion terms generated may be poorer. This could explain the difference in the proportion of recommended terms accepted in ERF as task complexity increases. For IRF there is little difference in how many of the recommended terms were chosen by subjects for each level of task complexity (footnote 15). Subjects may have perceived IRF terms as more useful for high complexity tasks, but this was not reflected in the proportion of IRF terms accepted. Differences may reside in the nature of the terms accepted; future work will investigate this issue.

3.1.3 Summary

In this section we have presented an investigation of the effect of search task complexity on the utility of IRF. From the results there
appears to be a strong relation between the complexity of the task and the subject interaction: subjects prefer IRF for highly complex tasks. Task complexity did not affect the proportion of terms accepted in either RF method, despite there being a difference in how relevant and useful subjects perceived the terms to be for different complexities; complexity may affect term selection in ways other than the proportion of terms accepted.

3.2 Search Experience

Experienced searchers may interact differently, and give different types of evidence to RF, than inexperienced searchers. As such, levels of search experience may affect searchers' use and perceptions of IRF. In our experiment subjects were divided into two groups based on their level of search experience: the frequency with which they searched and the types of searches they performed. In this section we use their perceptions and logging to address the next research question: the relationship between the usefulness and use of IRF and the search experience of experimental subjects. The data are the same as those analysed in the previous section, but here we focus on search experience rather than the search task.

3.2.1 Feedback

We analyse the results from the attitude statements described at the beginning of Section 3.1.1 (i.e., "How you conveyed relevance to the system was..."
and "How you conveyed relevance to the system made you feel..."). These differentials elicited opinion from experimental subjects about the RF method used. In Table 5 we show the mean responses for the inexperienced and experienced subject groups on ERF and IRF; 24 subjects per cell.

Footnote 13: This was the smallest number of query modification terms offered in both systems.
Footnote 14: all T(16) ≥ 80, all p ≤ .31 (Wilcoxon signed-rank test).
Footnote 15: ERF: χ²(2) = 3.67, p = .16; IRF: χ²(2) = 2.55, p = .28 (Kruskal-Wallis tests).

Table 5. Subject perceptions of RF method (lower = better).

              Explicit RF     Implicit RF
Differential  Inexp.  Exp.    Inexp.  Exp.
Easy          2.46    2.46    1.84    1.98
Effective     2.75    2.63    2.32    2.43
Useful        2.50    2.46    2.28    2.27
All (1)       2.57    2.52    2.14    2.23
Comfortable   2.46    2.14    2.05    2.24
In control    1.96    1.98    2.73    2.64
All (2)       2.21    2.06    2.39    2.44

The results demonstrate a strong preference among inexperienced subjects for IRF; they found it more easy and effective than experienced subjects did (easy: U(24) = 391, p = .016; effective: U(24) = 399, p = .011; α = .0167, Mann-Whitney tests). The differences for all other IRF differentials were not statistically significant. For all differentials apart from "in control", inexperienced subjects generally preferred IRF over ERF (all T(24) ≥ 231, all p ≤ .001, Wilcoxon signed-rank test). Inexperienced subjects also felt that IRF was more difficult to control than experienced subjects did (U(24) = 390, p = .018; α = .0250, Mann-Whitney test). As these subjects have less search experience, they may be less able to understand RF processes and may be more comfortable with the system gathering feedback implicitly from their interaction. Experienced subjects tended to like ERF more than inexperienced subjects and felt more comfortable with this feedback method (T(24) = 222, p = .020, Wilcoxon signed-rank test). It appears from these results that experienced subjects found ERF more useful and were more at ease with the ERF process. In a similar way to Section 3.1.1, we analysed the proportion of feedback that searchers provided to the experimental systems. Our analysis suggested that search experience does not affect the amount of feedback subjects provide (ERF: all U(24) ≤ 319, p ≥ .26; IRF: all U(24) ≤ 313, p ≥ .30, Mann-Whitney tests).

3.2.2 Terms

We used questionnaire responses to gauge subject opinion on the relevance and usefulness of the terms from the perspective of experienced and inexperienced subjects. Table 6 shows the average differential responses obtained from both subject groups.

Table 6. Subject perceptions of system terms (lower = better).

              Explicit RF     Implicit RF
Differential  Inexp.  Exp.    Inexp.  Exp.
Relevant      2.58    2.44    2.33    2.21
Useful        2.88    2.63    2.33    2.23

The differences between subject groups were significant (ERF: all U(24) ≥ 388, p ≤ .020; IRF: all U(24) ≥ 384, p ≤ .024). Experienced subjects generally reacted to the query modification terms chosen by the system more positively than inexperienced subjects. This finding was supported by the proportion of query modification terms these subjects accepted. In the same way as in Section 3.1.2, we analysed the number of recommended query modification terms that were used by experimental subjects. Table 7 shows the average proportion of accepted terms per subject group.

Table 7. Term acceptance (percentage of top six terms).

                     Explicit RF     Implicit RF
Proportion of terms  Inexp.  Exp.    Inexp.  Exp.
Accepted             63.76   70.44   64.43   71.35

Our analysis of the data shows that the differences between subject groups for each type of RF are significant; experienced subjects accepted more expansion terms regardless of the type of RF. However, the differences between the same groups for different types of RF are not significant; subjects chose roughly the same percentage of the expansion terms offered irrespective of the
type of RF (footnote 22).

3.2.3 Summary

In this section we have analysed data gathered from two subject groups, inexperienced searchers and experienced searchers, on how they perceive and use IRF. The results indicate that inexperienced subjects found IRF more easy and effective than experienced subjects did, who in turn found the terms chosen as a result of IRF more relevant and useful. We also showed that inexperienced subjects generally accepted fewer recommended terms than experienced subjects, perhaps because they were less comfortable with RF or generally submitted shorter search queries. Search experience appears to affect how subjects use the terms recommended as a result of the RF process.

3.3 Search Stage

From our observations of experimental subjects as they searched, we conjectured that RF may be used differently at different times during a search. To test this, our third research question concerned the use and usefulness of IRF during the course of a search. In this section we investigate whether the amount of RF provided by searchers, or the proportion of terms accepted, is affected by how far through their search they are. For the purposes of this analysis a search begins when a subject poses the first query to the system and progresses until they terminate the search or reach the maximum allowed time of 15 minutes for a search task. We do not divide tasks based on this limit, as subjects often terminated their search in less than 15 minutes. In this section we use data gathered from interaction logs and subject opinions to investigate the extent to which RF was used, and the extent to which it appeared to benefit our experimental subjects, at different stages in their search.

3.3.1 Feedback

The interaction logs for all searches on the Explicit RF and Implicit RF systems were analysed, and each search was divided into nine equal-length time slices.
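Dividing a log into nine equal-length slices and then grouping them into start, middle and end stages can be sketched as follows; the event timestamps below are hypothetical, not drawn from the study's logs.

```python
def slice_events(events, start, end, n_slices=9):
    """Bin timestamped interaction events into n equal-length time slices."""
    width = (end - start) / n_slices
    slices = [0] * n_slices
    for t in events:
        # clamp so an event at exactly t == end falls in the last slice
        idx = min(int((t - start) / width), n_slices - 1)
        slices[idx] += 1
    return slices

def stage_totals(slices):
    """Group slices 1-3, 4-6 and 7-9 into start / middle / end stages."""
    return sum(slices[0:3]), sum(slices[3:6]), sum(slices[6:9])

# Hypothetical event times (seconds) within a 900-second (15-minute) search
events = [10, 40, 300, 310, 450, 500, 600, 880]
slices = slice_events(events, 0, 900)
start, middle, end = stage_totals(slices)
```

Dividing the counts by the 420 assessable representations would turn these stage totals into the "precision" values plotted per slice in Figure 2.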
This number of slices gave an equal number of slices per stage and a sufficient level of granularity to identify trends in the results. Slices 1-3 correspond to the start of the search, 4-6 to the middle and 7-9 to the end. In Figure 2 we plot the measure of precision described in Section 3.1.1 (i.e., the proportion of all possible representations that were provided as RF) at each of the nine slices, per search task, averaged across all subjects; this allows us to see how the provision of RF was distributed during a search. The total amount of feedback for a single RF method/task complexity pairing across all nine slices corresponds to the value recorded in the first row of Table 2 (e.g., the sum of the RF for IRF/HC across all nine slices of Figure 2 is 21.50%). To simplify the statistical analysis and comparison we use the grouping of start, middle and end.

Footnote 22: IRF: U(24) = 403, p = .009; ERF: U(24) = 396, p = .013.

[Figure 2. Distribution of RF provision per search task. Six series are plotted (Explicit RF and Implicit RF, each for HC, MC and LC tasks); the x-axis is the slice (1-9) and the y-axis is search "precision" (% of total representations provided as RF).]

Figure 2 appears to show the existence of a relationship between the stage in the search and the amount of relevance information provided to the different types of feedback algorithm. These are essentially differences in the way users assess documents. In the case of ERF, subjects provide explicit relevance assessments throughout most of the search, but there is generally a steep increase in the end phase, towards the completion of the search (IRF: all Z ≥ 1.87, p ≤ .031; ERF: start vs. end Z = 2.58, p = .005, Dunn's post-hoc tests). When using the IRF system, the data indicate that at the start of the search subjects provide little relevance information (although it increases toward the end of the start stage), which corresponds to interacting with few document representations. At this stage the subjects are perhaps concentrating more on reading the retrieved results. Implicit relevance information is generally offered extensively in the middle of the search as subjects interact with results, and it then tails off towards the end of the search. This would appear to correspond to stages of initial exploration, detailed analysis of document representations, and storage and presentation of findings. Figure 2 also shows the proportion of feedback for tasks of different complexity. The results appear to show a difference in how IRF is used that relates to the complexity of the search task, although it is not statistically significant (χ²(2) = 3.54, p = .17, Friedman rank sum test). More specifically, as complexity increases it appears that subjects take longer to reach their most interactive point. This suggests that task complexity affects how IRF is distributed during the search, and that subjects may spend more time initially interpreting search results for more complex tasks.

3.3.2 Terms

The terms recommended by the system are chosen based on the frequency of their occurrence in the relevant items. That is, non-stopword, non-query terms occurring frequently in search results regarded as relevant are likely to be recommended to the searcher for query modification. Since there is a direct association between the RF and the terms selected, we use the number of terms accepted by searchers at different points in the search as an indication of how effective the RF has been up to the current point in the search. In this section we analyse the average number of terms, from the top six recommended by Explicit RF and Implicit RF, accepted over the course of a search. The average proportion of the top six recommended terms that were accepted at each stage is shown in Table 8; each cell contains data from all 48 subjects.

Table 8. Term acceptance (proportion of top six terms).

                     Explicit RF            Implicit RF
Proportion of terms  start  middle  end     start  middle  end
Accepted             66.87  66.98   67.34   61.85  68.54   73.22
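The frequency-based selection criterion described at the start of Section 3.3.2 can be sketched as below. The stopword list, whitespace tokenisation and example documents are illustrative assumptions, not the actual implementation used in the experimental systems.

```python
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "in", "to", "is"}  # illustrative subset

def recommend_terms(relevant_texts, query_terms, k=6):
    """Rank non-stopword, non-query terms by frequency of occurrence in the
    results regarded as relevant; return the top-k as query modification terms."""
    query = {t.lower() for t in query_terms}
    counts = Counter()
    for text in relevant_texts:
        for term in text.lower().split():
            if term not in STOPWORDS and term not in query:
                counts[term] += 1
    return [t for t, _ in counts.most_common(k)]

# Hypothetical result texts treated as relevant for the query "petrol"
docs = ["petrol price rise in the uk",
        "uk petrol tax and price debate",
        "price of crude oil"]
terms = recommend_terms(docs, ["petrol"])
```

Under ERF the `relevant_texts` would be the representations explicitly marked relevant; under IRF, the representations the searcher viewed.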
The results show an apparent association between the stage in the search and the number of feedback terms subjects accept. Search stage affects term acceptance in IRF but not in ERF (ERF: χ²(2) = 2.22, p = .33; IRF: χ²(2) = 7.73, p = .021, Friedman rank sum tests; IRF: all pair-wise comparisons significant at Z ≥ 1.77, all p ≤ .038, Dunn's post-hoc tests). The further into a search a searcher progresses, the more likely they are to accept terms recommended via IRF (significantly more than via ERF: all T(48) ≥ 786, all p ≤ .002, Wilcoxon signed-rank test). A correlation analysis between the proportion of terms accepted at each search stage and cumulative RF (i.e., the sum of all precision at each slice in Figure 2 up to and including the end of the search stage) suggests that in both types of RF the quality of system terms improves as more RF is provided (footnote 28).

3.3.3 Summary

The results from this section indicate that the location in a search affects the amount of feedback given by the user to the system, and hence the amount of information that the RF mechanism has when deciding which terms to offer the user. Further, trends in the data suggest that the complexity of the task affects how subjects provide IRF and the proportion of system terms accepted.

4. DISCUSSION AND IMPLICATIONS

In this section we discuss the implications of the findings presented in the previous section for each research question.

4.1 Search Task

The results of our study showed that ERF was preferred for less complex tasks and IRF for more complex tasks. From observations and subject comments we perceived that when using ERF systems subjects generally forgot to provide the feedback, but also employed different criteria during the ERF process (i.e., they were assessing relevance rather than expressing an interest). When the search was more complex, subjects rarely found results they regarded as completely relevant. Therefore they struggled to find relevant information and were unable to communicate RF to the search system.
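The stage-wise correlation between cumulative feedback and term acceptance reported in Section 3.3.2 uses a standard Pearson coefficient, which can be computed directly. The cumulative-feedback percentages below are hypothetical, paired with the IRF acceptance row of Table 8 purely for illustration; this is not a re-computation of the study's data.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient for paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical cumulative feedback (%) at end of start/middle/end stages,
# paired with the IRF term-acceptance percentages from Table 8
cumulative_rf = [8.0, 17.5, 21.5]
accepted = [61.85, 68.54, 73.22]
r = pearson_r(cumulative_rf, accepted)
```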
Footnote 28: IRF: r = .712, p < .001; ERF: r = .695, p = .001 (Pearson correlation coefficient).

In these situations subjects appeared to prefer IRF, as they do not need to make a relevance decision to obtain the benefits of RF (i.e., term suggestions), whereas in ERF they do. The association between RF method and task complexity has implications for the design of user studies of RF systems and of the RF systems themselves. It implies that in the design of user studies involving ERF or IRF systems, care should be taken to include tasks of varying complexities to avoid task bias. It also implies that, since different types of RF may be appropriate for different task complexities, a system that could automatically detect complexity could use both ERF and IRF simultaneously to benefit the searcher. For example, on the IRF system we noticed that as task complexity falls, search behaviour shifts from the results interface to the retrieved documents. Monitoring such interaction across a number of studies may lead to a set of criteria that could help IR systems automatically detect task complexity and tailor support to suit.

4.2 Search Experience

We analysed the effect of search experience on the utility of IRF. Our analysis revealed a general preference across all subjects for IRF over ERF; that is, the average ratings assigned to IRF were generally more positive than those assigned to ERF. IRF was generally liked by both subject groups (perhaps because it removed the burden of providing relevance information), and ERF was preferred by experienced subjects more than by inexperienced subjects (perhaps because it allowed them to specify which results were used by the system when generating term recommendations). All subjects felt more in control with ERF than with IRF, but for inexperienced subjects this did not appear to affect their overall preferences (footnote 29). These subjects may
understand the RF process less, but may be more willing to sacrifice control over feedback in favour of IRF, a process that they perceive more positively.

4.3 Search Stage

We also analysed the effects of search stage on the use and usefulness of IRF. Through analysis of this nature we can build a more complete picture of how searchers used RF and how this varies with the RF method. The results suggest that IRF is used more in the middle of the search than at the beginning or end, whereas ERF is used more towards the end. The results also show the effects of task complexity on the IRF process and on how rapidly subjects reach their most interactive point. Without an analysis of this type it would not have been possible to establish the existence of such patterns of behaviour. The findings suggest that searchers interact differently for IRF and ERF. Since ERF is traditionally not used until towards the end of the search, it may be possible to incorporate both IRF and ERF into the same IR system, with IRF being used to gather evidence until subjects decide to use ERF. The development of such a system represents part of our ongoing work in this area.

5. CONCLUSIONS

In this paper we have presented an investigation of Implicit Relevance Feedback (IRF). We aimed to answer three research questions about factors that may affect the provision and usefulness of IRF: search task complexity, the subjects' search experience and the stage in the search. Our overall conclusion was that all factors appear to have some effect on the use and effectiveness of IRF, although the interaction effects between factors are not statistically significant.

Footnote 29: This may also be true for experienced subjects, but the data we have is insufficient to draw this conclusion.

Our conclusions per research question are: (i) IRF is generally more useful for complex search tasks, where searchers want to focus on the search task and get new ideas for their search from
the system; (ii) IRF is preferred to ERF overall, and generally preferred by inexperienced subjects wanting to reduce the burden of providing RF; and (iii) within a single search session IRF is affected by temporal location in the search (i.e., it is used in the middle, not the beginning or end) and by task complexity. Studies of this nature are important to establish the circumstances in which a promising technique such as IRF is useful and those in which it is not. It is only after such studies have been run and analysed in this way that we can develop an understanding of IRF that allows it to be successfully implemented in operational IR systems.

6. REFERENCES

[1] Bell, D.J. and Ruthven, I. (2004). Searchers' assessments of task complexity for Web searching. Proceedings of the 26th European Conference on Information Retrieval, 57-71.
[2] Borlund, P. (2000). Experimental components for the evaluation of interactive information retrieval systems. Journal of Documentation, 56(1): 71-90.
[3] Brajnik, G., Mizzaro, S., Tasso, C. and Venuti, F. (2002). Strategic help for user interfaces for information retrieval. Journal of the American Society for Information Science and Technology, 53(5): 343-358.
[4] Busha, C.H. and Harter, S.P. (1980). Research methods in librarianship: Techniques and interpretation. Library and information science series. New York: Academic Press.
[5] Campbell, I. and Van Rijsbergen, C.J. (1996). The ostensive model of developing information needs. Proceedings of the 3rd International Conference on Conceptions of Library and Information Science, 251-268.
[6] Harman, D. (1992). Relevance feedback and other query modification techniques. In Information retrieval: Data structures and algorithms. New York: Prentice-Hall.
[7] Kelly, D. and Teevan, J. (2003). Implicit feedback for inferring user preference. SIGIR Forum, 37(2): 18-28.
[8] Koenemann, J. and Belkin, N.J. (1996). A case for interaction: A study of interactive information retrieval behavior and effectiveness. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 205-212.
[9] Meddis, R. (1984). Statistics using ranks: A unified approach. Oxford: Basil Blackwell, 303-308.
[10] Morita, M. and Shinoda, Y. (1994). Information filtering based on user behavior analysis and best match text retrieval. Proceedings of the 17th Annual ACM SIGIR Conference on Research and Development in Information Retrieval, 272-281.
[11] Salton, G. and Buckley, C. (1990). Improving retrieval performance by relevance feedback. Journal of the American Society for Information Science, 41(4): 288-297.
[12] Siegel, S. and Castellan, N.J. (1988). Nonparametric statistics for the behavioural sciences. 2nd ed. Singapore: McGraw-Hill.
[13] White, R.W. (2004). Implicit feedback for interactive information retrieval. Unpublished doctoral dissertation, University of Glasgow, Glasgow, United Kingdom.
[14] White, R.W., Jose, J.M. and Ruthven, I. (2005). An implicit feedback approach for interactive information retrieval. Information Processing and Management, in press.
[15] White, R.W., Jose, J.M., Ruthven, I. and Van Rijsbergen, C.J. (2004). A simulated study of implicit feedback models. Proceedings of the 26th European Conference on Information Retrieval, 311-326.
[16] Zellweger, P.T., Regli, S.H., Mackinlay, J.D. and Chang, B.-W. (2000). The impact of fluid documents on reading and browsing: An observational study. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 249-256.

Appendix B. Checkboxes to mark relevant document titles in the Explicit RF system.

Appendix A.
Interface to Implicit RF system.
[Figure: the Implicit RF interface, with document representations labelled 1. Top-Ranking Sentence, 2. Title, 3. Summary, 4. Summary Sentence, 5. Sentence in Context.]

A Study of Factors Affecting the Utility of Implicit Relevance Feedback

ABSTRACT
Implicit relevance feedback (IRF) is the process by which a search system unobtrusively gathers evidence on searcher interests from their interaction with the system. IRF is a new method of gathering information on user interest and, if IRF is to be used in operational IR systems, it is important to establish when it performs well and when it performs poorly. In this paper we investigate how the use and effectiveness of IRF is affected by three factors: search task complexity, the search experience of the user and the stage in the search. Our findings suggest that all three of these factors contribute to the utility of IRF.

1. INTRODUCTION
Information Retrieval (IR) systems are designed to help searchers solve problems. In the traditional interaction metaphor employed by Web search systems such as Yahoo!
and MSN Search, the system generally only supports the retrieval of potentially relevant documents from the collection. However, it is also possible to offer searchers support for different search activities, such as selecting the terms to present to the system or choosing which search strategy to adopt [3, 8], both of which can be problematic for searchers. As the quality of the query submitted to the system directly affects the quality of the search results, the issue of how to improve search queries has been studied extensively in IR research [6]. Techniques such as Relevance Feedback (RF) [11] have been proposed as a way in which the IR system can support the iterative development of a search query by suggesting alternative terms for query modification. However, in practice RF techniques have been underutilised, as they place an increased cognitive burden on searchers to indicate relevant results directly [10]. Implicit Relevance Feedback (IRF) [7] has been proposed as a way in which search queries can be improved by passively observing searchers as they interact. IRF has been implemented either through surrogate measures based on interaction with documents (such as reading time, scrolling or document retention) [7] or through interaction with browse-based result interfaces [5]. IRF has shown mixed effectiveness, because the factors that are good indicators of user interest are often erratic and the inferences drawn from user interaction are not always valid [7]. In this paper we present a study of the use and effectiveness of IRF in an online search environment. The study aims to investigate the factors that affect IRF, in particular three research questions: (i) is the use of, and the perceived quality of terms generated by, IRF affected by the search task? (ii) is the use of, and the perceived quality of terms generated by, IRF affected by the level of search experience of system users? (iii) is IRF equally used, and does it generate
terms that are equally useful at all search stages? This study aims to establish when, and under what circumstances, IRF performs well in terms of its use and the query modification terms selected as a result of its use. The main experiment from which the data are taken was designed to test techniques for selecting query modification terms and techniques for displaying retrieval results [13]. In this paper we use data derived from that experiment to study factors affecting the utility of IRF.

2. STUDY
In this section we describe the user study conducted to address our research questions.

2.1 Systems
Our study used two systems, both of which suggested new query terms to the user. One system suggested terms based on the user's interaction (IRF); the other used Explicit RF (ERF), asking the user to explicitly indicate relevant material. Both systems used the same term suggestion algorithm [15] and a common interface.

2.1.1 Interface Overview
In both systems, retrieved documents are represented at the interface by their full text and a variety of smaller, query-relevant representations created at retrieval time. We used the Web as the test collection in this study and Google as the underlying search engine. Document representations include: the document title; a summary of the document; a list of top-ranking sentences (TRS) extracted from the top documents retrieved and scored in relation to the query; each sentence in the document summary; and each summary sentence in the context in which it occurs in the document (i.e., with the preceding and following sentence). Each summary sentence and top-ranking sentence is regarded as a representation of the document. The default display contains the list of top-ranking sentences and the list of the first ten document titles. Interacting with a representation guides searchers to a different representation from the same document, e.g., moving the mouse over a document title displays a summary of the document. This
presentation of progressively more information from documents to aid relevance assessments has been shown to be effective in earlier work [14, 16]. In Appendix A we show the complete interface of the IRF system with the document representations marked, and in Appendix B we show a fragment of the ERF interface with the checkboxes used by searchers to indicate relevant information. Both systems provide an interactive query expansion feature by suggesting new query terms to the user. The searcher has the responsibility for choosing which, if any, of these terms to add to the query. The searcher can also add or remove query terms at will.

2.1.2 Explicit RF system
This version of the system implements explicit RF. Next to each document representation are checkboxes that allow searchers to mark individual representations as relevant; marking a representation is an indication that its contents are relevant. Only the representations marked relevant by the user are used for suggesting new query terms. This system was used as a baseline against which the IRF system could be compared.

2.1.3 Implicit RF system
This system makes inferences about searcher interests from the information with which they interact. As described in Section 2.1.1, interacting with a representation highlights a new representation from the same document. To the searcher, this is a way of finding out more information from a potentially interesting source. To the implicit RF system, each interaction with a representation is interpreted as an implicit indication of interest in that representation; interacting with a representation is assumed to be an indication that its contents are relevant. The query modification terms are selected using the same algorithm as in the Explicit RF system; therefore the only difference between the systems is how relevance is communicated to the system. The results of the main experiment [13] indicated that these two systems were comparable in
terms of effectiveness.

2.2 Tasks
Search tasks were designed to encourage realistic search behaviour by our subjects. The tasks were phrased in the form of simulated work task situations [2], i.e., short search scenarios designed to reflect real-life search situations and to allow subjects to develop personal assessments of relevance. We devised six search topics (applying to university, allergies in the workplace, art galleries in Rome, "Third Generation" mobile phones, Internet music piracy and petrol prices) based on pilot testing with a small representative group of subjects. These subjects were not involved in the main experiment. For each topic, three versions of the work task situation were devised, each version differing in its predicted level of task complexity. As described in [1], task complexity is a variable that affects subject perceptions of a task and their interactive behaviour; e.g., subjects perform more filtering activities with highly complex search tasks. By developing tasks of different complexity we can assess how the nature of the task affects the subjects' interactive behaviour and hence the evidence supplied to IRF algorithms. Task complexity was varied according to the methodology described in [1], specifically by varying the number of potential information sources and the types of information required to complete a task. In our pilot tests (and in a posteriori analysis of the main experiment results) we verified that subjects' reporting of individual task complexity matched our estimation of the complexity of the task. Subjects attempted three search tasks: one high complexity, one moderate complexity and one low complexity.2 They were asked to read the task, place themselves in the situation it described and find the information they felt was required to complete the task. Figure 1 shows the task statements for the three levels of task complexity for one of the six search topics.

HC Task: High Complexity
Whilst
having dinner with an American colleague, they comment on the high price of petrol in the UK compared to other countries, despite large volumes coming from the same source. Unaware of any major differences, you decide to find out how and why petrol prices vary worldwide.

MC Task: Moderate Complexity
Whilst out for dinner one night, one of your friends' guests is complaining about the price of petrol and the factors that cause it. Throughout the night they seem to be complaining about everything they can, reducing the credibility of their earlier statements, so you decide to research which factors actually are important in determining the price of petrol in the UK.

LC Task: Low Complexity
While out for dinner one night, your friend complains about the rising price of petrol. However, as you have not been driving for long, you are unaware of any major changes in price. You decide to find out how the price of petrol has changed in the UK in recent years.

Figure 1. Varying task complexity ("Petrol Prices" topic).

2.3 Subjects
156 volunteers expressed an interest in participating in our study. 48 subjects were selected from this set with the aim of populating two groups, each with 24 subjects: inexperienced (infrequent/inexperienced searchers) and experienced (frequent/experienced searchers). Subjects were not chosen and classified into their groups until they had completed an entry questionnaire that asked them about their search experience and computer use. The average age of the subjects was 22.83 years (maximum 51, minimum 18, σ = 5.23 years) and 75% had a university diploma or a higher degree. 47.91% of subjects had, or were pursuing, a qualification in a discipline related to Computer Science. The subjects were a mixture of students, researchers, academic staff and others, with different levels of computer and search experience. The subjects were divided into the two groups depending on their search experience, how often they searched
and the types of searches they performed. All were familiar with Web searching, and some with searching in other domains.

2.4 Methodology
The experiment had a factorial design, with 2 levels of search experience, 3 experimental systems (although we report only on the findings from the ERF and IRF systems) and 3 levels of search task complexity. Subjects attempted one task of each complexity,2 switched systems after each task and used each system once. The order in which systems were used and search tasks attempted was randomised according to a Latin square experimental design. Questionnaires used Likert scales, semantic differentials and open-ended questions to elicit subject opinions [4]. System logging was also used to record subject interaction. A tutorial carried out prior to the experiment allowed subjects to use a non-feedback version of the system to attempt a practice task before using the first experimental system. Experiments lasted between one-and-a-half and two hours, dependent on variables such as the time spent completing questionnaires. Subjects were offered a 5-minute break after the first hour.

2 The main experiment from which these results are drawn had a third comparator system with a different interface. Each subject carried out three tasks, one on each system. We report only on the results from the ERF and IRF systems as these are the only ones pertinent to this paper.

In each experiment:
i.
the subject was welcomed and asked to read an introduction to the experiments and sign consent forms. This set of instructions was written to ensure that each subject received precisely the same information.
ii. the subject was asked to complete an introductory questionnaire. This contained questions about the subject's education, general search experience, computer experience and Web search experience.
iii. the subject was given a tutorial on the interface, followed by a training topic on a version of the interface with no RF.
iv. the subject was given three task sheets and asked to choose one task from the six topics on each sheet. No guidelines were given to subjects when choosing a task other than that they could not choose a task from any topic more than once. Task complexity was rotated by the experimenter so that each subject attempted one high complexity task, one moderate complexity task and one low complexity task.
v. the subject was asked to perform the search and was given 15 minutes to do so. The subject could terminate a search early if they were unable to find any more information they felt helped them complete the task.
vi. after completion of the search, the subject was asked to complete a post-search questionnaire.
vii. the remaining tasks were attempted by the subject, following steps v.
and vi.
viii. the subject completed a post-experiment questionnaire and participated in a post-experiment interview.
Subjects were told that their interaction might be used by the IRF system to help them as they searched. They were not told which behaviours would be used or how they would be used. We now describe the findings of our analysis.

3. FINDINGS
In this section we use the data derived from the experiment to answer our research questions about the effect of search task complexity, search experience and stage in the search on the use and effectiveness of IRF. We present our findings per research question. Due to the ordinal nature of much of the data, non-parametric statistical testing is used in this analysis and the level of significance is set to p < .05, unless otherwise stated. We use the method proposed by [12] to determine the significance of differences in multiple comparisons and that of [9] to test for interaction effects between experimental variables, the occurrence of which we report where appropriate. All Likert scales and semantic differentials were on a 5-point scale, where a rating closer to 1 signifies more agreement with the attitude statement. The category labels HC, MC and LC denote the high, moderate and low complexity tasks respectively. The highest, or most positive, values in each table are shown in bold. Our analysis uses data from questionnaires, post-experiment interviews and background system logging on the ERF and IRF systems.

3.1 Search Task
Searchers attempted three search tasks of varying complexity, each on a different experimental system. In this section we present an analysis of the use and usefulness of IRF for search tasks of different complexities. We present our findings in terms of the RF provided by subjects and the terms recommended by the systems.

3.1.1 Feedback
We use questionnaires and system logs to gather data on subject perceptions and provision of RF for different search tasks. In the
post-search questionnaire subjects were asked how RF was conveyed, using differentials to elicit their opinion on:
1. the value of the feedback technique: How you conveyed relevance to the system (i.e., ticking boxes or viewing information) was: "easy"/"difficult", "effective"/"ineffective", "useful"/"not useful".
2. the process of providing the feedback: How you conveyed relevance to the system made you feel: "comfortable"/"uncomfortable", "in control"/"not in control".
The average obtained differential values are shown in Table 1 for IRF and each task category. The value corresponding to the differential "All" represents the mean of all differentials for a particular attitude statement. This gives some overall understanding of the subjects' feelings, which can be useful as the subjects may not answer individual differentials very precisely. The values for ERF are included for reference in this table and in all other tables and figures in the "Findings" section. Since the aim of the paper is to investigate situations in which IRF might perform well, not a direct comparison between IRF and ERF, we make only limited comparisons between these two types of feedback.
Table 1. Subject perceptions of RF method (lower = better).
Each cell in Table 1 summarises the subject responses for 16 task-system pairs (16 subjects who ran a high complexity (HC) task on the ERF system, 16 subjects who ran a moderate complexity (MC) task on the ERF system, etc.). Kruskal-Wallis Tests were applied to each differential for each type of RF.3 Subject responses suggested that IRF was most "effective" and "useful" for more complex search tasks4 and that the differences in all pair-wise comparisons between tasks were significant5. Subject perceptions of IRF elicited using the other differentials did not appear to be affected by the complexity of the search task6. To determine whether a relationship exists between the effectiveness and usefulness of the IRF process and task complexity, we applied Spearman's Rank Order Correlation Coefficient to participant responses. The results of this analysis suggest that the effectiveness and usefulness of IRF are both related to task complexity; as task complexity increases, subject preference for IRF also increases7. On the other hand, subjects felt ERF was more "effective" and "useful" for low complexity tasks8. Their verbal reporting of ERF, where perceived utility and effectiveness increased as task complexity decreased, supports this finding. In tasks of lower complexity the subjects felt they were better able to provide feedback on whether or not documents were relevant to the task.

3 Since this analysis involved many differentials, we use a Bonferroni correction to control the experiment-wise error rate and set the alpha level (α) to .0167 and .0250 for statements 1 and 2 respectively, i.e., .05 divided by the number of differentials. This correction reduces the number of Type I errors, i.e., rejecting null hypotheses that are true.

We analyse the interaction logs generated by both interfaces to investigate the amount of RF subjects provided. To do this we use a measure of search "precision": the number of document representations that a searcher assessed divided by the total number they could assess. In ERF this is the proportion of all possible representations that were explicitly marked relevant by the searcher. In IRF this is the proportion of representations viewed by a searcher over all possible representations that could have been viewed. This proportion measures the searcher's level of interaction with a document, and we take it to measure the user's interest in the document: the more document representations viewed, the more interested we assume a user is in the content of the document. There are a maximum of 14 representations per
document: 4 top-ranking sentences, 1 title, 1 summary, 4 summary sentences and 4 summary sentences in document context. Since the interface shows document representations from the top 30 documents, there are 420 representations that a searcher can assess. Table 2 shows the proportion of representations provided as RF by subjects.
Table 2. Feedback and documents viewed.
For IRF there is a clear pattern: as complexity increases, the subjects viewed fewer documents but viewed more representations for each document. This suggests a pattern where users are investigating retrieved documents in more depth. It also means that the amount of feedback varies with the complexity of the search task. Since IRF is based on the interaction of the searcher, the more they interact, the more feedback they provide. This has no effect on the number of RF terms chosen, but may affect the quality of the terms selected. Correlation analysis revealed a strong negative correlation between the number of documents viewed and the amount of feedback searchers provided9; as the number of documents viewed increases, the proportion of feedback falls (searchers view fewer representations of each document). This may be a natural consequence of there being less time to view documents in a time-constrained task environment, but as we will show, as complexity changes the nature of the information searchers interact with also appears to change.

4 effective: χ2(2) = 11.62, p = .003; useful: χ2(2) = 12.43, p = .002.
5 Dunn's post-hoc tests (multiple comparisons using rank sums); all Z ≥ 2.88, all p ≤ .002.
6 all χ2(2) ≤ 2.85, all p ≥ .24 (Kruskal-Wallis Tests).
7 effective: all r ≥ 0.644, p ≤ .002; useful: all r ≥ 0.541, p ≤ .009.
8 effective: χ2(2) = 7.01, p = .03; useful: χ2(2) = 6.59, p = .037 (Kruskal-Wallis Test); all pair-wise differences significant, all Z ≥ 2.34, all p ≤ .01 (Dunn's post-hoc tests).

In the next section
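The feedback "precision" measure just described reduces to simple arithmetic: 14 representations per document over the top 30 documents gives 420 assessable representations, and a searcher's feedback proportion is the count they assessed divided by that total. A minimal sketch follows; the function and variable names are our own illustrative assumptions, not code from the study.

```python
# Sketch of the feedback "precision" measure: the share of assessable
# document representations a searcher actually gave feedback on.

REPS_PER_DOC = 14   # 4 top-ranking sentences + 1 title + 1 summary
                    # + 4 summary sentences + 4 sentences-in-context
TOP_DOCS = 30       # the interface shows representations from the top 30 documents

TOTAL_REPS = REPS_PER_DOC * TOP_DOCS  # 420 assessable representations


def feedback_proportion(assessed: int, total: int = TOTAL_REPS) -> float:
    """Proportion of all possible representations the searcher assessed.

    For ERF, `assessed` counts representations explicitly marked relevant;
    for IRF, it counts representations the searcher viewed.
    """
    if not 0 <= assessed <= total:
        raise ValueError("assessed must lie between 0 and total")
    return assessed / total


# e.g. a searcher who viewed 63 of the 420 representations
print(TOTAL_REPS)                   # 420
print(feedback_proportion(63))      # 0.15
```

Under this reading, the ERF and IRF columns of Table 2 are directly comparable: both are computed over the same denominator of 420, differing only in what counts as "assessed".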
we investigate the effect of task complexity on the terms chosen as a result of IRF.

3.1.2 Terms
The same RF algorithm was used to select query modification terms in all systems [15]. We use subject opinions of the terms recommended by the systems as a measure of the effectiveness of IRF with respect to the terms generated for different search tasks. To test this, subjects were asked to complete two semantic differentials that completed the statement: The words chosen by the system were: "relevant"/"irrelevant" and "useful"/"not useful". Table 3 presents average responses grouped by search task.
Table 3. Subject perceptions of system terms (lower = better).
Kruskal-Wallis Tests were applied within each type of RF. The results indicate that the relevance and usefulness of the terms chosen by IRF are affected by the complexity of the search task; the terms chosen are more "relevant" and "useful" when the search task is more complex10. "Relevant" was explained here as being related to the task, whereas "useful" described terms seen as helpful in completing the search task. For ERF, the results indicate that the terms generated are perceived to be more "relevant" and "useful" for less complex search tasks, although the differences between tasks were not significant11. This suggests that subject perceptions of the terms chosen for query modification are affected by task complexity. Comparison between ERF and IRF shows that subject perceptions also vary for different types of RF12. As well as using data on the relevance and utility of the terms chosen, we used data on term acceptance to measure the perceived value of the terms suggested. The Explicit and Implicit RF systems made recommendations about which terms could be added to the original search query. In Table 4 we show the proportion of the top six terms13 shown to the searcher that were added to the search query, for each type of task and each type of RF.
Table 4. Term Acceptance
(percentage of top six terms).

The average number of terms accepted from IRF is approximately the same across all search tasks and generally the same as that of ERF14. As Table 2 shows, subjects marked fewer documents relevant for highly complex tasks. Therefore, when task complexity increases the ERF system has fewer examples of relevant documents, and the expansion terms generated may be poorer. This could explain the difference in the proportion of recommended terms accepted in ERF as task complexity increases. For IRF there is little difference in how many of the recommended terms were chosen by subjects for each level of task complexity15. Subjects may have perceived IRF terms as more useful for high-complexity tasks, but this was not reflected in the proportion of IRF terms accepted. Differences may reside in the nature of the terms accepted; future work will investigate this issue.

3.1.3 Summary

In this section we have presented an investigation of the effect of search task complexity on the utility of IRF. From the results there appears to be a strong relation between the complexity of the task and subject interaction: subjects preferred IRF for highly complex tasks. Task complexity did not affect the proportion of terms accepted in either RF method, despite there being a difference in how "relevant" and "useful" subjects perceived the terms to be for different complexities; complexity may affect term selection in ways other than the proportion of terms accepted.

3.2 Search Experience

Experienced searchers may interact differently and give different types of evidence to RF than inexperienced searchers. As such, levels of search experience may affect searchers' use and perceptions of IRF. In our experiment subjects were divided into two groups based on their level of search experience: the frequency with which they searched and the types of searches they performed. In this section we use their perceptions and logging to address the
next research question: the relationship between the usefulness and use of IRF and the search experience of the experimental subjects. The data are the same as those analysed in the previous section, but here we focus on search experience rather than the search task.

3.2.1 Feedback

We analyse the results from the attitude statements described at the beginning of Section 3.1.1 (i.e., How you conveyed relevance to the system was ... and How you conveyed relevance to the system made you feel ...). These differentials elicited opinion from experimental subjects about the RF method used. In Table 5 we show the mean average responses for the inexperienced and experienced subject groups on ERF and IRF; 24 subjects per cell.

14 all T (16) ≥ 80, all p ≤ .31 (Wilcoxon Signed-Rank Test)
15 ERF: χ2(2) = 3.67, p = .16; IRF: χ2(2) = 2.55, p = .28 (Kruskal-Wallis Tests)

Table 5. Subject perceptions of RF method (lower = better).

The results demonstrate a strong preference among inexperienced subjects for IRF; they found it more "easy" and "effective" than experienced subjects did16. The differences for all other IRF differentials were not statistically significant. For all differentials apart from "in control", inexperienced subjects generally preferred IRF over ERF17. Inexperienced subjects also felt that IRF was more difficult to control than experienced subjects did18. As these subjects have less search experience, they may be less able to understand RF processes and may be more comfortable with the system gathering feedback implicitly from their interaction. Experienced subjects tended to like ERF more than inexperienced subjects did and felt more "comfortable" with this feedback method19. It appears from these results that experienced subjects found ERF more useful and were more at ease with the ERF process. In a similar way to Section 3.1.1 we analysed the proportion of feedback that searchers provided to the experimental systems. Our analysis
suggested that search experience does not affect the amount of feedback subjects provide20.

3.2.2 Terms

We used questionnaire responses to gauge subject opinion on the relevance and usefulness of the terms from the perspective of experienced and inexperienced subjects. Table 6 shows the average differential responses obtained from both subject groups.

Table 6. Subject perceptions of system terms (lower = better).

This finding was supported by the proportion of query modification terms these subjects accepted. In the same way as in Section 3.1.2, we analysed the number of query modification terms recommended by the system that were used by experimental subjects. Table 7 shows the average number of accepted terms per subject group.

Table 7. Term Acceptance (percentage of top six terms).

Our analysis of the data shows that the differences between subject groups for each type of RF are significant; experienced subjects accepted more expansion terms regardless of the type of RF. However, the differences between the same groups for different types of RF are not significant; subjects chose roughly the same percentage of the expansion terms offered irrespective of the type of RF22.

3.2.3 Summary

In this section we have analysed data gathered from two subject groups, inexperienced searchers and experienced searchers, on how they perceive and use IRF. The results indicate that inexperienced subjects found IRF more "easy" and "effective" than experienced subjects, who in turn found the terms chosen as a result of IRF more "relevant" and "useful". We also showed that inexperienced subjects generally accepted fewer recommended terms than experienced subjects, perhaps because they were less comfortable with RF or generally submitted shorter search queries. Search experience appears to affect how subjects use the terms recommended as a result of the RF process.

3.3 Search Stage

From our observations of experimental subjects as they searched, we conjectured that RF may
be used differently at different times during a search. To test this, our third research question concerned the use and usefulness of IRF during the course of a search. In this section we investigate whether the amount of RF provided by searchers or the proportion of terms accepted is affected by how far through their search they are. For the purposes of this analysis a search begins when a subject poses the first query to the system and progresses until they terminate the search or reach the maximum allowed time of 15 minutes for a search task. We do not divide tasks based on this limit, as subjects often terminated their search in less than 15 minutes. In this section we use data gathered from interaction logs and subject opinions to investigate the extent to which RF was used and the extent to which it appeared to benefit our experimental subjects at different stages in their search.

3.3.1 Feedback

The interaction logs for all searches on the Explicit RF and Implicit RF systems were analysed, and each search was divided into nine equal-length time slices. This number of slices gave us an equal number per stage and was a sufficient level of granularity to identify trends in the results. Slices 1-3 correspond to the "start" of the search, 4-6 to the "middle" of the search and 7-9 to the "end". In Figure 2 we plot the measure of "precision" described in Section 3.1.1 (i.e., the proportion of all possible representations that were provided as RF) at each of the nine slices, per search task, averaged across all subjects; this allows us to see how the provision of RF was distributed during a search. The total amount of feedback for a single RF method/task complexity pairing across all nine slices corresponds to the value recorded in the first row of Table 2 (e.g., the sum of the RF for IRF/HC across all nine slices of Figure 2 is 21.50%).

22 IRF: U (24) = 403, p = .009; ERF: U (24) = 396, p = .013

To simplify the statistical analysis and comparison we
use the grouping of "start", "middle" and "end".

Figure 2. Distribution of RF provision per search task.

Figure 2 appears to show the existence of a relationship between the stage in the search and the amount of relevance information provided to the different types of feedback algorithm. These are essentially differences in the way users are assessing documents. In the case of ERF, subjects provide explicit relevance assessments throughout most of the search, but there is generally a steep increase in the "end" phase towards the completion of the search23. When using the IRF system, the data indicate that at the start of the search subjects provide little relevance information24, which corresponds to interacting with few document representations. At this stage the subjects are perhaps concentrating more on reading the retrieved results. Implicit relevance information is generally offered extensively in the middle of the search as subjects interact with results, and it then tails off towards the end of the search. This would appear to correspond to stages of initial exploration, detailed analysis of document representations, and storage and presentation of findings.

Figure 2 also shows the proportion of feedback for tasks of different complexity. The results appear to show a difference25 in how IRF is used that relates to the complexity of the search task. More specifically, as complexity increases it appears as though subjects take longer to reach their most interactive point. This suggests that task complexity affects how IRF is distributed during the search and that subjects may be spending more time initially interpreting search results for more complex tasks.

23 IRF: all Z ≥ 1.87, p ≤ .031; ERF: "start" vs. "end" Z = 2.58, p = .005 (Dunn's post-hoc tests)
24 Although increasing toward the end of the "start" stage.
25 Although not statistically significant; χ2(2) = 3.54, p = .17 (Friedman Rank Sum Test)

3.3.2 Terms

The terms recommended by the system are chosen based on the frequency of their occurrence in the relevant items. That is, non-stopword, non-query terms occurring frequently in search results regarded as relevant are likely to be recommended to the searcher for query modification. Since there is a direct association between the RF and the terms selected, we use the number of terms accepted by searchers at different points in the search as an indication of how effective the RF has been up to the current point in the search. In this section we analyse the average number of terms from the top six terms recommended by Explicit RF and Implicit RF over the course of a search. The average proportion of the top six recommended terms that were accepted at each stage is shown in Table 8; each cell contains data from all 48 subjects.

Table 8. Term Acceptance (proportion of top six terms).

The results show an apparent association between the stage in the search and the number of feedback terms subjects accept. Search stage affects term acceptance in IRF but not in ERF26. The further into a search a searcher progresses, the more likely they are to accept terms recommended via IRF (significantly more than in ERF27). A correlation analysis between the proportion of terms accepted at each search stage and cumulative RF (i.e., the sum of all "precision" at each slice in Figure 2 up to and including the end of the search stage) suggests that in both types of RF the quality of system terms improves as more RF is provided28.

3.3.3 Summary

The results from this section indicate that the location in a search affects the amount of feedback given by the user to the system, and hence the amount of information that the RF mechanism has for deciding which terms to offer the user. Further, trends in the data suggest that the complexity of the task affects how subjects provide IRF and the proportion of system terms accepted.

5. CONCLUSIONS

In this paper we
have presented an investigation of Implicit Relevance Feedback (IRF). We aimed to answer three research questions about factors that may affect the provision and usefulness of IRF. These factors were search task complexity, the subjects' search experience, and the stage in the search. Our overall conclusion was that all factors appear to have some effect on the use and effectiveness of IRF, although the interaction effects between factors are not statistically significant.

29 This may also be true for experienced subjects, but the data we have are insufficient to draw this conclusion.

Our conclusions for each research question are: (i) IRF is generally more useful for complex search tasks, where searchers want to focus on the search task and get new ideas for their search from the system; (ii) IRF is preferred to ERF overall and is generally preferred by inexperienced subjects wanting to reduce the burden of providing RF; and (iii) within a single search session IRF is affected by temporal location in a search (i.e., it is used in the middle, not the beginning or end) and by task complexity. Studies of this nature are important to establish the circumstances in which a promising technique such as IRF is useful and those in which it is not. It is only after such studies have been run and analysed in this way that we can develop an understanding of IRF that allows it to be successfully implemented in operational IR systems.

Multi-dimensional Range Queries in Sensor Networks

ABSTRACT

In many sensor networks, data or events are named by attributes.
Many of these attributes have scalar values, so one natural way to query events of interest is to use a multi-dimensional range query. An example is: List all events whose temperature lies between 50° and 60°, and whose light levels lie between 10 and 15. Such queries are useful for correlating events occurring within the network. In this paper, we describe the design of a distributed index that scalably supports multi-dimensional range queries. Our distributed index for multi-dimensional data (or DIM) uses a novel geographic embedding of a classical index data structure, and is built upon the GPSR geographic routing algorithm. Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O(√N)). In detailed simulations, we show that in practice, the insertion and query costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized networks.
Finally, experiments on a small-scale testbed validate the feasibility of DIMs.

Xin Li, Young Jin Kim, Ramesh Govindan, Wei Hong

Categories and Subject Descriptors: C.2.4 [Computer Communication Networks]: Distributed Systems; C.3 [Special-Purpose and Application-Based Systems]: Embedded Systems

General Terms: Embedded Systems, Sensor Networks, Storage

1. INTRODUCTION

In wireless sensor networks, data or events will be named by attributes [15] or represented as virtual relations in a distributed database [18, 3]. Many of these attributes will have scalar values: e.g., temperature and light levels, soil moisture conditions, etc. In these systems, we argue, one natural way to query for events of interest
will be to use multi-dimensional range queries on these attributes. For example, scientists analyzing the growth of marine micro-organisms might be interested in events that occurred within certain temperature and light conditions: List all events that have temperatures between 50°F and 60°F, and light levels between 10 and 20.

Such range queries can be used in two distinct ways. They can help users efficiently drill down in their search for events of interest. The query described above illustrates this, where the scientist is presumably interested in discovering, and perhaps mapping, the combined effect of temperature and light on the growth of marine micro-organisms. More importantly, they can be used by application software running within a sensor network for correlating events and triggering actions. For example, if in a habitat monitoring application a bird alighting on its nest is indicated by a certain range of thermopile sensor readings and a certain range of microphone readings, a multi-dimensional range query on those attributes enables higher-confidence detection of the arrival of a flock of birds, and can trigger a system of cameras.

In traditional database systems, such range queries are supported using pre-computed indices. Indices trade off some initial pre-computation cost to achieve a significantly more efficient querying capability. For sensor networks, we assert that a centralized index for multi-dimensional range queries may not be feasible for energy-efficiency reasons (as well as the fact that the access bandwidth to this central index will be limited, particularly for queries emanating from within the network). Rather, we believe, there will be situations when it is more appropriate to build an in-network distributed data structure for efficiently answering multi-dimensional range queries.

In this paper, we present just such a data structure, which we call a DIM1. DIMs are
essentially embeddings of such indices within the sensor network. DIMs leverage two key ideas: in-network data-centric storage, and a novel locality-preserving geographic hash (Section 3).

1 Distributed Index for Multi-dimensional data.

DIMs trace their lineage to data-centric storage systems [23]. The underlying mechanism in these systems allows nodes to consistently hash an event to some location within the network, which allows efficient retrieval of events. Building upon this, DIMs use a technique whereby events whose attribute values are close are likely to be stored at the same or nearby nodes. DIMs then use an underlying geographic routing algorithm (GPSR [16]) to route events and queries to their corresponding nodes in an entirely distributed fashion.

We discuss the design of a DIM, presenting algorithms for event insertion and querying, for maintaining a DIM in the event of node failure, and for making DIMs robust to data or packet loss (Section 3). We then extensively evaluate DIMs using analysis (Section 4), simulation (Section 5), and actual implementation (Section 6). Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O(√N)). In detailed simulations, we show that in practice, the event insertion and querying costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized networks. Experiments on a small-scale testbed validate the feasibility of DIMs (Section 6). Much work remains, including efficient support for skewed data distributions, existential queries, and node heterogeneity.

We believe that DIMs will be an essential, but perhaps not the only, distributed data structure supporting efficient queries in sensor networks. DIMs will be part of a suite of such systems that enable feature extraction [7], simple range querying [10], exact-match queries [23], or
continuous queries [15, 18]. All such systems will likely be integrated into a sensor network database system such as TinyDB [17]. Application designers could then choose the appropriate method of information access. For instance, a fire tracking application would use DIM to detect hotspots, and would then use mechanisms that enable continuous queries [15, 18] to track the spatio-temporal progress of the hotspots. Finally, we note that DIMs are applicable not just to sensor networks, but to other deeply distributed systems (embedded networks for home and factory automation) as well.

2. RELATED WORK

The basic problem that this paper addresses, multi-dimensional range queries, is typically solved in database systems using indexing techniques. The database community has focused mostly on centralized indices, but distributed indexing has received some attention in the literature. Indexing techniques essentially trade off some data insertion cost to enable efficient querying. Indexing has long been a classical research problem in the database community [5, 2]. Our work draws its inspiration from the class of multi-key constant-branching index structures, exemplified by k-d trees [2], where k represents the dimensionality of the data space. Our approach essentially represents a geographic embedding of such structures in a sensor field. There is one important difference: the classical indexing structures are data-dependent (as are some indexing schemes that use locality-preserving hashes, developed in the theory literature [14, 8, 13]). The index structure is decided not only by the data, but also by the order in which data is inserted. Our current design is not data-dependent. Finally, tangentially related to our work is the class of spatial indexing systems [21, 6, 11].

While there has been some work on distributed indexing, the problem has not been extensively explored. There exist distributed indices of a restricted kind: those that allow
exact match or partial prefix match queries. Examples of such systems, of course, are the Internet Domain Name System, and the class of distributed hash table (DHT) systems exemplified by Freenet [4], Chord [24], and CAN [19]. Our work is superficially similar to CAN in that both construct a zone-based overlay atop the underlying physical network. The underlying details make the two systems very different: CAN's overlay is purely logical, while our overlay is consistent with the underlying physical topology. More recent work in the Internet context has addressed support for range queries in DHT systems [1, 12], but it is unclear if these results directly translate to the sensor network context.

Several research efforts have expressed the vision of a database interface to sensor networks [9, 3, 18], and there are examples of systems that contribute to this vision [18, 3, 17]. Our work is similar in spirit to this body of literature. In fact, DIMs could become an important component of a sensor network database system such as TinyDB [17]. Our work departs from prior work in this area in two significant respects. Unlike these approaches, in our work the data generated at a node are hashed (in general) to different locations. This hashing is the key to scaling multi-dimensional range searches. In all the other systems described above, queries are flooded throughout the network, and can dominate the total cost of the system. Our work avoids query flooding by an appropriate choice of hashing. Madden et al.
[17] also describe a distributed index, called Semantic Routing Trees (SRT). This index is used to direct queries to nodes that have detected relevant data. Our work differs from SRT in three key aspects. First, SRT is built on single attributes while DIM supports multiple attributes. Second, SRT constructs a routing tree based on historical sensor readings, and therefore works well only for slowly-changing sensor values. Finally, in SRT queries are issued from a fixed node, while in DIM queries can be issued from any node. A similar differentiation applies with respect to work on data-centric routing in sensor networks [15, 25], where data generated at a node is assumed to be stored at the node, and queries are either flooded throughout the network [15], or each source sets up a network-wide overlay announcing its presence so that mobile sinks can rendezvous with sources at the nearest node on the overlay [25]. These approaches work well for relatively long-lived queries.

Finally, our work is most closely related to data-centric storage systems [23], which include geographic hash tables (GHTs) [20], DIMENSIONS [7], and DIFS [10]. In a GHT, data is hashed by name to a location within the network, enabling highly efficient rendezvous. GHTs are built upon the GPSR [16] protocol and leverage some interesting properties of that protocol, such as the ability to route to a node nearest to a given location. We also leverage properties in GPSR (as we describe later), but we use a locality-preserving hash to store data, enabling efficient multi-dimensional range queries. DIMENSIONS and DIFS can be thought of as using the same set of primitives as GHT (storage using consistent hashing), but for different ends: DIMENSIONS allows drill-down search for features within a sensor network, while DIFS allows range queries on a single key in addition to other operations.

3. THE DESIGN OF DIMS

Most sensor networks are deployed to collect data from the environment. In
these networks, nodes (either individually or collaboratively) will generate events. An event can generally be described as a tuple of attribute values, ⟨A1, A2, ..., Ak⟩, where each attribute Ai represents a sensor reading, or some value corresponding to a detection (e.g., a confidence level). The focus of this paper is the design of systems to efficiently answer multi-dimensional range queries of the form: ⟨x1 − y1, x2 − y2, ..., xk − yk⟩. Such a query returns all events whose attribute values fall into the corresponding ranges. Notice that point queries, i.e., queries that ask for events with specified values for each attribute, are a special case of range queries.

As we have discussed in Section 1, range queries can enable efficient correlation and triggering within the network. It is possible to implement range queries by flooding a query within the network. However, as we show in later sections, this alternative can be inefficient, particularly as the system scales, and if nodes within the network issue such queries relatively frequently. The other alternative, sending all events to an external storage node, results in the access link being a bottleneck, especially if nodes within the network issue queries. Shenker et al.
[23] also make similar arguments with respect to data-centric storage schemes in general; DIMs are an instance of such schemes.

The system we present in this paper, the DIM, relies upon two foundations: a locality-preserving geographic hash, and an underlying geographic routing scheme. The key to resolving range queries efficiently is data locality: i.e., events with comparable attribute values are stored nearby. The basic insight underlying DIM is that data locality can be obtained by a locality-preserving geographic hash function. Our geographic hash function finds a locality-preserving mapping from the multi-dimensional space (described by the set of attributes) to a 2-d geographic space; this mapping is inspired by k-d trees [2] and is described later. Moreover, each node in the network self-organizes to claim part of the attribute space for itself (we say that each node owns a zone), so events falling into that space are routed to and stored at that node.

Having established the mapping and the zone structure, DIMs use a geographic routing algorithm previously developed in the literature to route events to their corresponding nodes, or to resolve queries. This algorithm, GPSR [16], essentially enables the delivery of a packet to a node at a specified location. The routing mechanism is simple: when a node receives a packet destined to a node at location X, it forwards the packet to the neighbor closest to X. In GPSR, this is called greedy-mode forwarding. When no such neighbor exists (as when there exists a void in the network), the node starts the packet on a perimeter-mode traversal, using the well-known right-hand rule to circumnavigate voids. GPSR includes efficient techniques for perimeter traversal that are based on graph planarization algorithms amenable to distributed implementation.

For all of this to work, DIMs make two assumptions that are consistent with the literature [23]. First, all nodes know the approximate geographic boundaries of
the network. These boundaries may either be configured in nodes at the time of deployment, or may be discovered using a simple protocol. Second, each node knows its geographic location. Node locations can be automatically determined by a localization system or by other means.

Although the basic idea of DIMs may seem straightforward, it is challenging to design a completely distributed data structure that must be robust to packet losses and node failures, yet must support efficient query distribution and deal with communication voids and obstacles. We now describe the complete design of DIMs.

3.1 Zones

The key idea behind DIMs, as we have discussed, is a geographic locality-preserving hash that maps a multi-attribute event to a geographic zone. Intuitively, a zone is a subdivision of the geographic extent of a sensor field. A zone is defined by the following constructive procedure. Consider a rectangle R on the x-y plane. Intuitively, R is the bounding rectangle that contains all sensors within the network. We call a sub-rectangle Z of R a zone if Z is obtained by dividing R k times, k ≥ 0, using a procedure that satisfies the following property: After the i-th division, 0 ≤ i ≤ k, R is partitioned into 2^i equal-sized rectangles. If i is an odd (even) number, the i-th division is parallel to the y-axis (x-axis). That is, the bounding rectangle R is first sub-divided into two zones at level 0 by a vertical line that splits R into two equal pieces; each of these sub-zones can be split into two zones at level 1 by a horizontal line, and so on. We call the non-negative integer k the level of zone Z, i.e.,
level(Z) = k.

A zone can be identified either by a zone code code(Z) or by an address addr(Z). The code code(Z) is a 0-1 bit string of length level(Z), and is defined as follows. If Z lies in the left half of R, the first (from the left) bit of code(Z) is 0, else 1. If Z lies in the bottom half of R, the second bit of code(Z) is 0, else 1. The remaining bits of code(Z) are then recursively defined on each of the four quadrants of R. This definition of the zone code matches the definition of zones given above, encoding divisions of the sensor field geography by bit strings. Thus, in Figure 2, the zone in the top-right corner of the rectangle R has a zone code of 1111. Note that the zone codes collectively define a zone tree such that individual zones are at the leaves of this tree. The address of a zone Z, addr(Z), is defined to be the centroid of the rectangle defined by Z. The two representations of a zone (its code and its address) can each be computed from the other, assuming the level of the zone is known.

Two zones are called sibling zones if their zone codes are the same except for the last bit. For example, if code(Z1) = 01101 and code(Z2) = 01100, then Z1 and Z2 are sibling zones. The sibling subtree of a zone is the subtree rooted at the left or right sibling of the zone in the zone tree. We uniquely define a backup zone for each zone as follows: if the sibling subtree of the zone is on the left, the backup zone is the right-most zone in the sibling subtree; otherwise, the backup zone is the left-most zone in the sibling subtree. For a zone Z, let p be the first level(Z) − 1 digits of code(Z), and let backup(Z) be the backup zone of zone Z. If code(Z) = p1, then code(backup(Z)) = p01* with the most trailing 1's (* means 0 or 1 occurrences). If code(Z) = p0, then code(backup(Z)) = p10* with the most trailing 0's.
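The zone-code construction just described can be sketched in a few lines of Python. This is an illustrative reading of the procedure, not the paper's implementation; the function names and the default unit-square bounding rectangle are our own assumptions:

```python
def zone_code(x, y, level, rect=(0.0, 0.0, 1.0, 1.0)):
    """Zone code of the level-`level` zone containing point (x, y).

    Bits alternate between vertical splits (left half of R -> '0')
    and horizontal splits (bottom half of R -> '0'), recursing into
    the chosen half of the rectangle at each step."""
    x0, y0, x1, y1 = rect
    bits = []
    for i in range(level):
        if i % 2 == 0:                       # division parallel to the y-axis
            mid = (x0 + x1) / 2.0
            bits.append('0' if x < mid else '1')
            x0, x1 = (x0, mid) if x < mid else (mid, x1)
        else:                                # division parallel to the x-axis
            mid = (y0 + y1) / 2.0
            bits.append('0' if y < mid else '1')
            y0, y1 = (y0, mid) if y < mid else (mid, y1)
    return ''.join(bits)

def zone_addr(code, rect=(0.0, 0.0, 1.0, 1.0)):
    """addr(Z): centroid of the rectangle identified by a zone code."""
    x0, y0, x1, y1 = rect
    for i, b in enumerate(code):
        if i % 2 == 0:
            mid = (x0 + x1) / 2.0
            x0, x1 = (x0, mid) if b == '0' else (mid, x1)
        else:
            mid = (y0 + y1) / 2.0
            y0, y1 = (y0, mid) if b == '0' else (mid, y1)
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def are_siblings(c1, c2):
    """Sibling zones: zone codes identical except for the last bit."""
    return len(c1) == len(c2) and c1[:-1] == c2[:-1] and c1 != c2

# The top-right level-4 zone of the unit square has code 1111,
# matching the example in the text.
print(zone_code(0.9, 0.9, 4))          # -> 1111
print(zone_addr('1111'))               # -> (0.875, 0.875)
print(are_siblings('01101', '01100'))  # -> True
```

As the text notes, the two representations interconvert when the level is known: `zone_addr` recovers the centroid from a code, and applying `zone_code` to that centroid at the same level recovers the code.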
3.2 Associating Zones with Nodes Our definition of a zone is independent of the actual distribution of nodes in the sensor field, and only depends upon the geographic extent (the bounding rectangle) of the sensor field.\nNow we describe how zones are mapped to nodes.\nConceptually, the sensor field is logically divided into zones and each zone is assigned to a single node.\nIf the sensor network were deployed in a grid-like (i.e., very regular) fashion, then it is easy to see that there exists a k such that each node maps into a distinct level-k zone.\nIn general, however, the node placements within a sensor field are likely to be less regular than the grid.\nFor some k, some zones may be empty and other zones might have more than one node situated within them.\nOne alternative would have been to choose a fixed k for the overall system, and then associate nodes with the zones they are in (and if a zone is empty, associate the nearest node with it, for some definition of nearest).\nBecause it makes our overall query routing system simpler, we allow nodes in a DIM to map to different-sized zones.\nTo precisely understand the associations between zones and nodes, we define the notion of zone ownership.\nFor any given placement of network nodes, consider a node A. 
Let ZA be the largest zone that includes only node A and no other node.\nThen, we say that A owns ZA.\nNotice that this definition of ownership may leave some sections of the sensor field un-associated with a node.\nFor example, in Figure 2, the zone 110 does not contain any nodes and would not have an owner.\nTo remedy this, for any empty zone Z, we define the owner to be the owner of backup(Z).\nIn our example, that empty zone's owner would also be the node that owns 1110, its backup zone.\nHaving defined the association between nodes and zones, the next problem we tackle is: given a node placement, does there exist a distributed algorithm that enables each node to determine which zones it owns, knowing only the overall boundary of the sensor network?\nIn principle, this should be relatively straightforward, since each node can simply determine the locations of its neighbors, and apply simple geometric methods to determine the largest zone around it such that no other node resides in that zone.\nIn practice, however, communication voids and obstacles make the algorithm much more challenging.\nIn particular, resolving the ownership of zones that do not contain any nodes is complicated.\nEqually complicated is the case where the zone of a node is larger than its communication radius and the node cannot determine the boundaries of its zone by local communication alone.\nOur distributed zone building algorithm defers the resolution of such zones until either a query is initiated or an event is inserted.\nThe basic idea behind our algorithm is that each node tentatively builds up an idea of the zone it resides in just by communicating with its neighbors (remembering which boundaries of the zone are undecided because there is no radio neighbor that can help resolve that boundary).\nThese undecided boundaries are later resolved by a GPSR perimeter traversal when data messages are actually routed.\nWe now describe the algorithm, and illustrate it using 
examples.\nIn our algorithm, each node uses an array bound[0..3] to maintain the four boundaries of the zone it owns (remember that in this algorithm, the node only tries to determine the zone it resides in, not the other zones it might own because those zones are devoid of nodes).\nFigure 1: A network, where circles represent sensor nodes and dashed lines mark the network boundary.\nFigure 2: The zone code and boundaries.\nFigure 3: The corresponding zone tree.\nWhen a node starts up, it initializes this array to the network boundary, i.e., initially each node assumes its zone contains the whole network.\nThe zone boundary algorithm now relies upon GPSR's beacon messages to learn the locations of neighbors within radio range.\nUpon hearing of such a neighbor, the node calls the algorithm in Figure 4 to update its zone boundaries and its code accordingly.\nIn this algorithm, we assume that A is the node at which the algorithm is executed, ZA is its zone, and a is a newly discovered neighbor of A. 
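The boundary-update loop of Figure 4 can also be written as a runnable sketch; this is our own Python rendering, under the assumption that coordinates are known and that A and its newly discovered neighbor are at distinct locations:

```python
def contains(bound, x, y):
    """Is point (x, y) inside the zone with boundaries bound = [x_lo, x_hi, y_lo, y_hi]?"""
    return bound[0] <= x < bound[1] and bound[2] <= y < bound[3]

def build_zone(bound, code, ax, ay, nx, ny):
    """Shrink node A's tentative zone until the neighbor at (nx, ny) falls
    outside it, halving along alternating axes and appending one code bit
    per division; (ax, ay) are A's own coordinates."""
    while contains(bound, nx, ny):
        if len(code) % 2 == 0:               # even depth: vertical division
            mid = (bound[0] + bound[1]) / 2
            if ax < mid:
                bound[1] = mid; code += '0'  # A keeps the left half
            else:
                bound[0] = mid; code += '1'  # A keeps the right half
        else:                                # odd depth: horizontal division
            mid = (bound[2] + bound[3]) / 2
            if ay < mid:
                bound[3] = mid; code += '0'  # A keeps the bottom half
            else:
                bound[2] = mid; code += '1'  # A keeps the top half
    return bound, code
```

Each discovered neighbor triggers one call; the loop may split several times when the neighbor lies in the same half as A.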
(Procedure Contain(ZA, a) is used to decide if node a is located within the current zone boundaries of node A.)\nBuild-Zone(a)\n 1 while Contain(ZA, a)\n 2   do if length(code(ZA)) mod 2 = 0\n 3        then new_bound \u2190 (bound[0] + bound[1])/2\n 4             if A.x < new_bound\n 5               then bound[1] \u2190 new_bound\n 6               else bound[0] \u2190 new_bound\n 7        else new_bound \u2190 (bound[2] + bound[3])/2\n 8             if A.y < new_bound\n 9               then bound[3] \u2190 new_bound\n10               else bound[2] \u2190 new_bound\n11      Update zone code code(ZA)\nFigure 4: Zone boundary determination, where A.x and A.y represent the geographic coordinates of node A.\nInsert-Event(e)\n 1 c \u2190 Encode(e)\n 2 if Contain(ZA, c) = true and is_Internal() = true\n 3   then store e and exit\n 4 Send-Message(c, e)\nSend-Message(c, m)\n 1 if \u2203 neighbor Y such that Closer(Y, owner(m), m) = true\n 2   then addr(m) \u2190 addr(Y)\n 3   else if length(c) > length(code(m))\n 4     then update code(m) and addr(m)\n 5 source(m) \u2190 caller\n 6 if is_Owner(m) = true\n 7   then owner(m) \u2190 caller's code\n 8 Send(m)\nFigure 5: Inserting an event in a DIM. Procedure Closer(A, B, m) returns true if code(A) is closer to code(m) than code(B) is; source(m) is used to set the source address of message m.\nUsing this algorithm, each node can independently and asynchronously decide its own tentative zone based on the locations of its neighbors.\nFigure 2 illustrates the results of applying this algorithm to the network in Figure 1.\nFigure 3 shows the corresponding zone tree.\nEach zone resides at a leaf of this tree, and the code of a zone is the path from the root to that leaf if we represent the branch to the left child by 0 and the branch to the right child by 1.\nThis binary tree forms the index that we will use in the following event and query processing procedures.\nWe see that the zone sizes differ according to the local node density, as do the lengths of the zone codes of different nodes.\nNotice that in Figure 2, there is an empty zone whose code should be 110.\nIn this case, if the node in zone 1111 can only hear the node in zone 1110, it sets its boundary with the empty zone to undecided, because it did not hear from any neighboring node in that direction.\nAs we have mentioned before, the undecided boundaries are resolved using GPSR's perimeter mode when an event is inserted or a query sent.\nWe describe event insertion in the next section.\nFinally, this description does not explain how a node's zone code is adjusted when neighboring nodes fail or new nodes come up.\nWe return to this in Section 3.5.\n3.3 Inserting an Event\nIn this section, we describe how events are inserted into a DIM.\nThere are two algorithms of interest: a consistent hashing technique for mapping an event to a zone, and a routing algorithm for storing the event at the appropriate zone.\nAs we shall see, these two algorithms are inter-related.\n3.3.1 Hashing an Event to a Zone\nIn Section 3.1, we described a recursive tessellation of the geographic extent of a sensor field.\nWe now describe a consistent hashing scheme for a DIM that supports range queries on m distinct attributes.\u00b2\nLet us denote these attributes A1 ... 
Am.\n\u00b2DIM does not assume that all nodes are homogeneous in terms of the sensors they have.\nThus, in an m-dimensional DIM, a node that does not possess all m sensors can use NULL values for the corresponding readings.\nDIM treats NULL as an extreme value for range comparisons.\nAs an aside, a network may have many DIM instances running concurrently.\nFor simplicity, assume for now that the depth of every zone in the network is k, that k is a multiple of m, and that this value of k is known to every node.\nWe will relax this assumption shortly.\nFurthermore, for ease of discussion, we assume that all attribute values have been normalized to be between 0 and 1.\nOur hashing scheme assigns a k-bit zone code to an event as follows.\nFor i between 1 and m, if Ai < 0.5, the i-th bit of the zone code is assigned 0, else 1.\nFor i between m + 1 and 2m, if Ai\u2212m < 0.25 or Ai\u2212m \u2208 [0.5, 0.75), the i-th bit of the zone code is assigned 0, else 1, because the next-level divisions are at 0.25 and 0.75, which divide the ranges into [0, 0.25), [0.25, 0.5), [0.5, 0.75), and [0.75, 1).\nWe repeat this procedure until all k bits have been assigned.\nAs an example, consider event E = \u27e80.3, 0.8\u27e9.\nFor this event, the 5-bit zone code is code(E) = 01110.\nEssentially, our hashing scheme uses the values of the attributes in round-robin fashion on the zone tree (such as the one in Figure 3) in order to map an m-attribute event to a zone code.\nThis is reminiscent of k-d trees [2], but is quite different from that data structure: zone trees are spatial embeddings and do not incorporate the re-balancing algorithms of k-d trees.\nIn our design of DIMs, we do not require nodes to have zone codes of the same length, nor do we expect a node to know the zone codes of other nodes.\nRather, suppose the encoding node is A and its own zone code is of length kA.\nThen, given an event E, node A only hashes E to a zone code of length kA.\nWe denote the zone code assigned to an event E by code(E).\nAs we describe below, code(E) is refined by intermediate nodes as the event is routed.\nThis lazy evaluation of zone codes allows different nodes to use different-length zone codes without any explicit coordination.\n3.3.2 Routing an Event to its Owner\nThe aim of hashing an event to a zone code is to store the event at the node within the network that owns that zone.\nWe call this node the owner of the event.\nConsider an event E that has just been generated at a node A.\nAfter encoding event E, node A compares code(E) with code(A).\nIf the two are identical, node A stores event E locally; otherwise, node A will attempt to route the event to its owner.\nTo do this, note that code(E) corresponds to some zone Z\u2032, which is A's current guess for the zone at which event E should be stored.\nA now invokes GPSR to send a message to addr(Z\u2032) (the centroid of Z\u2032, Section 3.1).\nThe message contains the event E, code(E), and the target geographic location for storing the event.\nIn the message, A also marks itself as the owner of event E.\nAs we will see later, the guessed zone Z\u2032, the address addr(Z\u2032), and the owner of E, all contained in the message, will be refined by intermediate forwarding nodes.\nGPSR now delivers this message to the next hop towards addr(Z\u2032) from A.\nThis next-hop node (call it B) does not immediately forward the message.\nRather, it attempts to compute a new zone code for E, obtaining codenew(E).\nB will update the code contained in the message (and also the geographic destination of the message) if codenew(E) is longer than the event code in the message.\nIn this manner, as the event wends its way to its owner, its zone code gets refined.\nNow, B compares its own code code(B) against the owner code owner(E) contained in the incoming message.\nIf code(B) has a longer match with code(E) than the current owner owner(E) does, then B sets itself to be the current owner of E, meaning that if nobody else is eligible to store E, then B will 
store the event (we shall see how this happens next).\nIf B's zone code does not exactly match code(E), B will invoke GPSR to deliver E to the next hop.\n3.3.3 Resolving undecided zone boundaries during insertion\nSuppose that some node, say C, finds itself to be the destination (or eventual owner) of an event E.\nIt does so by noticing that code(C) equals code(E) after locally recomputing a code for E.\nIn that case, C stores E locally, but only if all four of C's zone boundaries are decided.\nWhen this condition holds, C knows for sure that no other node's zone overlaps with its own.\nIn this case, we call C an internal node.\nRecall, though, that because the zone discovery algorithm (Section 3.2) only uses information from immediate neighbors, one or more of C's boundaries may be undecided.\nIf so, C assumes that some other node may have a zone that overlaps with its own, and sets out to resolve this overlap.\nTo do this, C now sets itself to be the owner of E and continues forwarding the message.\nHere we rely on GPSR's perimeter-mode routing to probe around the void that causes the undecided boundary.\nSince the message starts from C and is destined for a geographic location near C, GPSR guarantees that the message will be delivered back to C if no other node updates the information in the message.\nIf the message comes back to C with C still marked as the owner, C infers that it must be the true owner of the zone and stores E locally.\nIf this does not happen, there are two possibilities.\nThe first is that as the event traverses the perimeter, some intermediate node, say B, whose zone overlaps with C's, marks itself to be the owner of the event, but otherwise does not change the event's zone code.\nThis node also recognizes that its own zone overlaps with C's and initiates a message exchange which causes each of them to appropriately shrink its zone.\nFigures 6 through 8 show an example of this data-driven zone shrinking.\nInitially, both node A and 
node B have claimed the same zone 0 because they are out of radio range of each other.\nSuppose that A inserts an event E = \u27e80.4, 0.8, 0.9\u27e9.\nA encodes E to 0 and claims itself to be the owner of E.\nSince A is not an internal node, it sends out E, looking for other owner candidates for E.\nOnce E gets to node B, B will see in the message's owner field A's code, which is the same as its own.\nB then shrinks its zone from 0 to 01 according to A's location, which is also recorded in the message, and sends a shrink request to A.\nUpon receiving this request, A also shrinks its zone, from 0 to 00.\nA second possibility is that some intermediate node changes the destination code of E to a more specific value (i.e., a longer zone code).\nLet us label this node D.\nD now tries to initiate delivery to the centroid of the new zone.\nFigure 6: Nodes A and B have claimed the same zone.\nFigure 7: An event/query message (filled arrows) triggers zone shrinking (hollow arrows).\nFigure 8: The zone layout after shrinking; now nodes A and B have been mapped to different zones.\nThis might result in a new perimeter walk that returns to D (if, for example, D happens to be geographically closest to the centroid of the zone).\nHowever, D would not be the owner of the event, which would still be C.\nIn routing to the centroid of this zone, the message may traverse the perimeter and return to D.\nNow D notices that C was the original owner, so it encapsulates the event and directs it to C.\nIf there is indeed another node, say X, that owns a zone overlapping with C's, X will notice this fact by finding in the message the same prefix as the code of one of its zones, but with a geographic location different from its own.\nX will shrink its zone to resolve the overlap.\nIf X's zone is smaller than or equal to C's zone, X will also send a shrink request to C.\nOnce C receives a shrink request, it will reduce its 
zone appropriately and fix its undecided boundary.\nIn this manner, the zone formation process is resolved on demand in a data-driven way.\nThere are several interesting effects with respect to perimeter walking that arise in our algorithm.\nThe first is that there are some cases where an event insertion might cause the entire outer perimeter of the network to be traversed.\u00b3\nFigure 6 also works as an example where the outer perimeter is traversed.\nEvent E inserted by A will eventually be stored at node B.\nBefore node B stores event E, if B's nominal radio range does not intersect the network boundary, it needs to send out E again as A did, because B in this case is not an internal node.\nBut if B's nominal radio range intersects the network boundary, it then has two choices.\nIt can assume that there will not be any nodes outside the network boundary, and hence that B is an internal node.\nThis is an aggressive approach.\nOn the other hand, B can also make a conservative decision, assuming that there might be other nodes it has not heard of yet.\nB will then force the message to walk another perimeter before storing it.\nIn some situations, especially for large zones where the node that owns a zone is far away from the centroid of the owned zone, there might exist a small perimeter around the destination that does not include the owner of the zone.\nThe event would then end up being stored at a different node than the real owner.\nIn order to deal with this problem, we add an extra operation in event forwarding, called efficient neighbor discovery.\nBefore invoking GPSR, a node needs to check if there exists a neighbor who is eligible to be the real owner of the event.\nTo do this, a node C, say, needs to know the zone codes of its neighboring nodes.\nWe use GPSR's beaconing messages to piggyback the zone codes of nodes.\nSo by simply comparing the event's code and a neighbor's code, a node can decide whether there exists a neighbor Y which is more likely to be 
the owner of event E. C delivers E to Y, which simply follows the decision-making procedure discussed above.\n3.3.4 Summary and Pseudo-code\nIn summary, our event insertion procedure is designed to interact nicely with the zone discovery mechanism and the event hashing mechanism.\nThe latter two mechanisms are kept simple, while the event insertion mechanism uses lazy evaluation at each hop to refine the event's zone code, and leverages GPSR's perimeter-walking mechanism to fix undecided zone boundaries.\nIn Section 3.5, we address the robustness of event insertion to packet loss and node failures.\nFigure 5 shows the pseudo-code for inserting and forwarding an event e.\nIn this pseudo-code, we have omitted a description of the zone shrinking procedure.\nProcedure is_Internal() is used to determine if the caller is an internal node, and procedure is_Owner() is used to determine if the caller is more eligible to be the owner of the event than the currently claimed owner recorded in the message.\nProcedure Send-Message is used to send either an event message or a query message.\nIf the message destination address has been changed, the packet source address also needs to be changed in order to avoid the packet being dropped by GPSR, since GPSR does not allow a node to see the same packet in greedy mode twice.\n\u00b3This happens less frequently than for GHTs, where inserting an event to a location outside the actual (but inside the nominal) boundary of the network will always invoke an external perimeter walk.\n3.4 Resolving and Routing Queries\nDIMs support both point queries\u2074 and range queries.\nRouting a point query is identical to routing an event.\nThus, the rest of this section details how range queries are routed.\nThe key challenge in routing range queries is brought out by the following strawman design.\nIf the entire network were divided evenly into zones of depth k (for some pre-defined constant k), then the querier (the node issuing the query) could 
subdivide a given range query into the relevant subzones and route individual requests to each of the zones.\nThis can be inefficient for large range queries, and it is also hard to implement in our design, where zone sizes are not predefined.\nAccordingly, we use a slightly different technique in which a range query is initially routed to a zone corresponding to the entire range, and is then progressively split into smaller subqueries.\nWe describe this algorithm here.\nThe first step of the algorithm is to map a range query to a zone code prefix.\nConceptually, this is easy; in a zone tree (Figure 3), there exists some node which contains the entire range query in its sub-tree, and none of whose children in the tree do.\nThe initial zone code we choose for the query is the zone code corresponding to that tree node, and it is a prefix of the zone codes of all zones in the subtree (note that these zones may not be geographically contiguous).\nThe querier computes the zone code of Q, denoted by code(Q), and then starts routing the query to addr(code(Q)).\nUpon receiving a range query Q, a node A (where A is any node on the query propagation path) divides it into multiple smaller subqueries if there is an overlap between the zone of A, zone(A), and the zone code associated with Q, code(Q).\nOur approach to splitting a query Q into subqueries is as follows.\nIf the range of Q's first attribute contains the value 0.5, A divides Q into two sub-queries, one whose first attribute ranges from 0 to 0.5, and the other from 0.5 to 1.\nThen A determines which half overlaps with its own zone.\nLet us call it QA.\nIf QA does not exist, then A stops splitting; otherwise, it continues splitting (using the second attribute range) and recomputing QA until QA is small enough that it falls completely into zone(A), and hence A can resolve it.\nFor example, suppose that node A, whose code is 0110, is to split a range query Q = \u27e80.3\u20130.8, 0.6\u20130.9\u27e9.\nThe splitting steps are shown in 
Figure 9.\nAfter splitting, we obtain three smaller queries q0 = \u27e80.3\u20130.5, 0.6\u20130.75\u27e9, q1 = \u27e80.3\u20130.5, 0.75\u20130.9\u27e9, and q2 = \u27e80.5\u20130.8, 0.6\u20130.9\u27e9.\nFigure 9 also shows the codes of each subquery after splitting.\nA then replies to subquery q0 with data stored locally, and sends subqueries q1 and q2 using the procedure outlined above.\nMore generally, if node A finds itself to be inside the zone subtree that maximally covers Q, it will send the subqueries that resulted from the split.\nOtherwise, if there is no overlap between A and Q, then A forwards Q as is (in this case Q is either the original query or the product of an earlier split).\nFigure 10 describes the pseudo-code for the zone splitting algorithm.\nAs shown in that algorithm, once a subquery has been recognized as belonging to the caller's zone, procedure Resolve is invoked to resolve the subquery and send a reply to the querier.\nEvery query message contains the geographic location of its initiator, so the corresponding reply message can be delivered directly back to the initiator.\n\u2074By point queries, we mean queries with an equality condition on all indexed keys; DIM index attributes are not necessarily primary keys.\nFinally, in the process of query resolution, zones might shrink, similar to the shrinking during insertion.\nWe omit this from the pseudo-code.\n3.5 Robustness\nUntil now, we have not discussed the impact of node failures and packet losses, or of node arrivals and departures, on our algorithms.\nPacket losses can affect query and event insertion, node failures can result in lost data, and node arrivals and departures can impact the zone structure.\nWe now discuss how DIMs can be made robust to these kinds of dynamics.\n3.5.1 Maintaining Zones\nIn previous sections, we described how the zone discovery algorithm could leave zone boundaries undecided.\nThese undecided boundaries are resolved during insertion or querying, 
using the zone shrinking procedure described above.\nWhen a new node joins the network, the zone discovery mechanism (Section 3.2) will cause neighboring zones to adjust their zone boundaries appropriately.\nAt this time, those zones can also transfer to the new node any events they store that should belong to the new node.\nBefore a node turns itself off (if this is indeed possible), it knows that its backup node (Section 3.1) will take over its zone, and it simply sends all its events to its backup node.\nNode deletion may also cause zone expansion.\nIn order to preserve the mapping between the leaves of the binary zone tree and zones, we allow zone expansion to occur only among sibling zones (Section 3.1).\nThe rule is: if zone(A)'s sibling zone becomes empty, then A can expand its own zone to include its sibling zone.\nNow, we turn our attention to node failures.\nNode failures are just like node deletions, except that a failed node does not have a chance to move its events to another node.\nBut how does a node decide if its sibling has failed?\nIf the sibling is within radio range, the absence of GPSR beaconing messages can detect this.\nOnce it detects this, the node can expand its zone.\nA different approach is needed for detecting siblings that are not within radio range.\nThese are the cases where two nodes own their zones after exchanging a shrink message; they do not periodically exchange messages thereafter to maintain this zone relationship.\nIn this case, we detect the failure in a data-driven fashion, with obvious efficiency benefits compared to periodic keepalives.\nOnce a node B has failed, an event or query message that previously would have been owned by the failed node will now be delivered to the node A that owns the empty zone left by node B.\nA will see this message because A borders the empty area left by B and is guaranteed to be visited in a GPSR perimeter traversal.\nA will set itself to be the owner of the message, and any 
node which would have dropped this message due to a perimeter loop will redirect the message to A instead.\nIf A's zone happens to be the sibling of B's zone, A can safely expand its own zone and announce its expanded zone to its neighbors via GPSR beaconing messages.\n3.5.2 Preventing Data Loss from Node Failure\nThe algorithms described above are robust in terms of zone formation, but node failure can erase data.\nTo avoid this, DIMs can employ two kinds of replication: local replication, to be resilient to random node failures, and mirror replication, for resilience to the concurrent failure of geographically contiguous nodes.\nMirror replication is conceptually easy.\nSuppose an event E has a zone code code(E).\nThen the node that inserts E would store two copies of E: one at the zone denoted by code(E), and the other at the zone corresponding to the one's complement of code(E).\nThis technique essentially creates a mirror DIM.\nA querier would need to query both the original DIM and its mirror in parallel, since there is no way of knowing whether a collection of nodes has failed.\nClearly, the trade-off here is an approximate doubling of both insertion and query costs.\nThere exists a far cheaper technique to ensure resilience to random node failures.\nOur local replication technique rests on the observation that, for each node A, there exists a unique node which will take over its zone when A fails: the node responsible for the backup zone of A's zone (see Section 3.1).\nThe basic idea is that A replicates each data item it has at this node.\nWe call this node A's local replica.\nLet A's local replica be B. 
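As a concrete illustration (our sketch, not the paper's code), the round-robin hash of Section 3.3.1 and the mirror code used above can be written as:

```python
def encode_event(attrs, k):
    """Round-robin hash (Section 3.3.1): attribute values, normalized to
    [0, 1), are consumed cyclically; each bit halves the current interval
    of the attribute it tests."""
    m = len(attrs)
    lo, hi = [0.0] * m, [1.0] * m            # current interval per attribute
    bits = []
    for i in range(k):
        j = i % m                            # attribute used for bit i
        mid = (lo[j] + hi[j]) / 2
        if attrs[j] < mid:
            bits.append('0'); hi[j] = mid
        else:
            bits.append('1'); lo[j] = mid
    return ''.join(bits)

def mirror_code(code):
    """One's complement of a zone code, used for mirror replication."""
    return ''.join('1' if b == '0' else '0' for b in code)
```

For the paper's example event E = ⟨0.3, 0.8⟩ with k = 5, this yields the code 01110, so the mirror copy would be stored at the zone with code 10001.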
Often B will be a radio neighbor of A and can be detected from GPSR beacons.\nSometimes, however, this is not the case, and B has to be explicitly discovered.\nWe use an explicit message for discovering the local replica.\nDiscovering the local replica is data-driven, and uses a mechanism similar to that of event insertion.\nNode A sends a message whose geographic destination is a random nearby location chosen by A.\nThe location is close enough to A that GPSR will guarantee that the message is delivered back to A.\nIn addition, the message has three fields: one for the zone code of A, code(A); one for the owner owner(A) of zone(A), which is initially set to be empty; and one for the geographic location of owner(A).\nThe packet is then delivered in GPSR perimeter mode.\nEach node that receives this message compares its zone code against code(A) in the message, and if it is more eligible to be the owner of zone(A) than the current owner(A) recorded in the message, it updates the field owner(A) and the corresponding geographic location.\nOnce the packet comes back to A, A knows the location of its local replica and can start to send replicas.\nIn a dense sensor network, the local replica of a node is usually very near to the node, either a direct neighbor or 1-2 hops away, so the cost of local replication will not dominate the network traffic.\nHowever, a node's local replica may itself fail.\nThere are two ways to deal with this situation: periodic refreshes, or repeated data-driven discovery of local replicas.\nThe former has higher overhead, but discovers failed replicas more quickly.\n3.5.3 Robustness to Packet Loss\nFinally, the mechanisms for querying and event insertion can easily be made resilient to packet loss.\nFor event insertion, a simple ACK scheme suffices.\nOf course, queries and responses can be lost as well.\nIn this case, there exists an efficient approach for error recovery.\nThis rests on the observation that 
the querier knows which zones fall within its query and should have responded (we assume that a node that has no data matching a query, but whose zone falls within the query, responds with a negative acknowledgment).\nAfter a conservative timeout, the querier can selectively re-issue the queries to these zones.\nIf DIM cannot get any answers (positive or negative) from certain zones after repeated timeouts, it can at least return the partial query results to the application, together with information about the zones from which data is missing.\nFigure 9: An example of range query splitting.\nResolve-Range-Query(Q)\n 1 Qsub \u2190 nil\n 2 q0, Qsub \u2190 Split-Query(Q)\n 3 if q0 = nil\n 4   then c \u2190 Encode(Q)\n 5        if Contain(c, code(A)) = true\n 6          then go to step 12\n 7          else Send-Message(c, Q)\n 8   else Resolve(q0)\n 9        if is_Internal() = true\n10          then Absorb(q0)\n11          else append q0 to Qsub\n12 if Qsub \u2260 nil\n13   then for each subquery q \u2208 Qsub\n14     do c \u2190 Encode(q)\n15        Send-Message(c, q)\nFigure 10: Query resolving algorithm.\n4. DIMS: AN ANALYSIS\nIn this section, we present a simple analytic performance evaluation of DIMs, and compare their performance against other possible approaches for implementing multi-dimensional range queries in sensor networks.\nIn the next section, we validate these analyses using detailed packet-level simulations.\nOur primary metrics for the performance of a DIM are: Average Insertion Cost, which measures the average number of messages required to insert an event into the network; and Average Query Delivery Cost, which measures the average number of messages required to route a query message to all the relevant nodes in the network.\nThe latter does not measure the number of messages required to transmit responses to the querier; this number depends upon the precise data distribution and is the same for many of the schemes 
we compare DIMs against.\nIn DIMs, event insertion essentially uses geographic routing.\nIn a dense N-node network where the likelihood of traversing perimeters is small, the average event insertion cost is proportional to \u221aN [23].\nOn the other hand, the query delivery cost depends upon the size of the ranges specified in the query.\nRecall that our query delivery mechanism is careful about splitting a query into sub-queries, doing so only when the query nears the zone that covers the query range.\nThus, when the querier is far from the queried zone, there are two components to the query delivery cost.\nThe first, which is proportional to \u221aN, is the cost to deliver the query near the covering zone.\nIf there are M nodes within this covering zone, the message cost of splitting and delivering the query is proportional to M.\nThe average cost of query delivery therefore depends upon the distribution of query range sizes.\nSuppose that query sizes follow some density function f(x); then the average cost of resolving a query can be approximated by \u222b_1^N x f(x) dx.\nTo give some intuition for the performance of DIMs, we consider four different forms for f(x): the uniform distribution, where a query range encompassing the entire network is as likely as a point query; a bounded uniform distribution, where all sizes up to a bound B are equally likely; an algebraic distribution, in which most queries are small but large queries are somewhat likely; and an exponential distribution, where most queries are small and large queries are unlikely.\nIn all our analyses, we make the simplifying assumption that the size of a query is proportional to the number of nodes that can answer that query.\nFor the uniform distribution, f(x) \u221d c for some constant c.\nIf each query size from 1 ... 
N is equally likely, the average query delivery cost of uniformly distributed queries is O(N). Thus, for uniformly distributed queries, the performance of DIMs is comparable to that of flooding. However, for the applications we envision, where nodes within the network are trying to correlate events, the uniform distribution is highly unrealistic.
Somewhat more realistic is a situation where all query sizes are bounded by a constant B. In this case, the average cost of resolving such a query is approximately ∫₁^B x f(x) dx = O(B). Recall now that all queries have to pay an approximate cost of O(√N) to deliver the query near the covering zone. Thus, if DIM limited queries to a size proportional to √N, the average query cost would be O(√N).
The algebraic distribution, where f(x) ∝ x^(−k) for some constant k between 1 and 2, has an average query resolution cost given by ∫₁^N x f(x) dx = O(N^(2−k)). In this case, if k > 1.5, the average cost of query delivery is dominated by the cost to deliver the query to near the covering zone, given by O(√N).
Finally, for the exponential distribution, f(x) = c·e^(−cx) for some constant c, and the average cost is just the mean of the corresponding distribution, i.e., O(1) for large N.
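As a concrete check on these averages, the short script below (our own illustration, not code from the paper) numerically evaluates E[x] = ∫₁^N x f(x) dx by the midpoint rule for the algebraic density with k = 1.5 and for an exponential density; the particular constants (k = 1.5, c = 0.1, the step count) are our choices. The analysis predicts O(N^(2−k)) = O(√N) scaling in the first case, so quadrupling N should roughly double the average, and an N-independent O(1) average in the second.

```python
import math

# Numeric sanity check (illustrative, not the paper's code) of the average
# query-resolution cost E[x] = integral_1^N x f(x) dx for two query-size
# densities f(x) discussed above.

def mean_query_size(f, N, steps=100_000):
    # Midpoint-rule integration of x * f(x) on [1, N], normalizing f so that
    # it is a proper density on that interval.
    h = (N - 1) / steps
    xs = [1 + (i + 0.5) * h for i in range(steps)]
    Z = sum(f(x) for x in xs) * h                 # normalizing constant
    return sum(x * f(x) for x in xs) * h / Z

# Algebraic distribution, f(x) ~ x^(-k) with k = 1.5: E[x] = O(N^(0.5)),
# so going from N = 10,000 to N = 40,000 should roughly double the mean.
alg = lambda x: x ** -1.5
r_alg = mean_query_size(alg, 40_000) / mean_query_size(alg, 10_000)

# Exponential distribution, f(x) = c * exp(-c x) with c = 0.1: the mean
# approaches a constant (about 1 + 1/c), independent of N.
exp_f = lambda x: 0.1 * math.exp(-0.1 * x)
r_exp = mean_query_size(exp_f, 40_000) / mean_query_size(exp_f, 10_000)
```

Running this, `r_alg` comes out close to 2 and `r_exp` close to 1, matching the O(√N) and O(1) predictions respectively.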
Asymptotically, then, the cost of a query under the exponential distribution is dominated by the cost to deliver the query near the covering zone, O(√N). Thus, we see that if queries follow either the bounded uniform distribution, the algebraic distribution, or the exponential distribution, the query cost scales as the insertion cost (for appropriate choices of constants for the bounded uniform and algebraic distributions).
How well does the performance of DIMs compare against alternative choices for implementing multi-dimensional queries? A simple alternative, called external storage [23], stores all events centrally in a node outside the sensor network. This scheme incurs an insertion cost of O(√N), and a zero query cost. However, as [23] points out, such systems may be impractical in sensor networks since the access link to the external node becomes a hotspot.
A second alternative implementation would store events at the node where they are generated. Queries are flooded throughout the network, and nodes that have matching data respond. Examples of systems that can be used for this (although, to our knowledge, these systems do not implement multi-dimensional range queries) are Directed Diffusion [15] and TinyDB [17]. The flooding scheme incurs a zero insertion cost, but an O(N) query cost. It is easy to show that DIMs outperform flooding as long as the ratio of the number of insertions to the number of queries is less than √N.
A final alternative would be to use a geographic hash table (GHT [20]). In this approach, attribute values are assumed to be integers (this is actually quite a reasonable assumption, since attribute values are often quantized), and events are hashed on some (say, the first) attribute. A range query is sub-divided into several sub-queries, one for each integer in the range of the first attribute. Each sub-query is then hashed to the appropriate location. The nodes that receive a sub-query
only return events that match all other attribute ranges. In this approach, which we call GHT-R (GHTs for range queries), the insertion cost is O(√N). Suppose that the range of the first attribute contains r discrete values. Then the cost to deliver queries is O(r√N). Thus, asymptotically, GHT-Rs perform similarly to DIMs. In practice, however, the proportionality constants are significantly different, and DIMs outperform GHT-Rs, as we shall show using detailed simulations.

5. DIMS: SIMULATION RESULTS
Our analysis gives us some insight into the asymptotic behavior of various approaches for multi-dimensional range queries. In this section, we use simulation to compare DIMs against flooding and GHT-R; this comparison gives us a more detailed understanding of these approaches for moderately sized networks, and a more nuanced view of the mechanistic differences between some of these approaches.

5.1 Simulation Methodology
We use ns-2 for our simulations. Since DIMs are implemented on top of GPSR, we first ported an earlier GPSR implementation to the latest version of ns-2. We modified the GPSR module to call our DIM implementation when it receives any data message in transit, or when it is about to drop a message because that message traversed the entire perimeter. This allows a DIM to modify message zone codes in flight (Section 3), and to determine the actual owner of an event or query. In addition to this, we implemented in ns-2 most of the DIM mechanisms described in Section 3. Of those mechanisms, the only one we did not implement is mirror replication. We have implemented selective query retransmission for resiliency to packet loss, but have left the evaluation of this mechanism to future work. Our DIM implementation in ns-2 is 2800 lines of code. Finally, we implemented GHT-R, our GHT-based multi-dimensional range query mechanism, in ns-2. This implementation was relatively straightforward, given that we had ported GPSR, and
modified GPSR to detect the completion of perimeter-mode traversals. Using this implementation, we conducted a fairly extensive evaluation of DIM and two alternatives (flooding, and our GHT-R).
For all our experiments, we use uniformly placed sensor nodes, with network sizes ranging from 50 nodes to 300 nodes. Each node has a radio range of 40 m. For the results presented here, each node has on average 20 nodes within its nominal radio range. We have conducted experiments at other node densities; they are in agreement with the results presented here.
In all our experiments, each node first generates 3 events on average (more precisely, for a topology of size N, we have 3N events, and each node is equally likely to generate an event). We have conducted experiments for three different event value distributions. Our uniform event distribution generates 2-dimensional events and, for each dimension, every attribute value is equally likely. Our normal event distribution generates 2-dimensional events and, for each dimension, the attribute value is normally distributed with a mean corresponding to the mid-point of the attribute value range. The normal event distribution represents a skewed data set. Finally, our trace event distribution is a collection of 4-dimensional events obtained from a habitat monitoring network. As we shall see, this represents a fairly skewed data set.
Having generated events, for each simulation we generate queries such that, on average, each node generates 2 queries. The query sizes are determined using the four size distributions we discussed in Section 4: uniform, bounded-uniform, algebraic, and exponential. Once a query size has been determined, the location of the query (i.e., the actual boundaries of the zone) is uniformly distributed. For our GHT-R experiments, the dynamic range of the attributes had 100 discrete values, but we restricted the query range for any one attribute to 50 discrete values to allow those simulations
to complete in reasonable time. Finally, using one set of simulations, we evaluate the efficacy of local replication by turning off random fractions of nodes and measuring the fidelity of the returned results.
The primary metrics for our simulations are the average query and insertion costs, as defined in Section 4.

5.2 Results
Although we have examined almost all the combinations of factors described above, we discuss only the most salient ones here, for lack of space.
Figure 11 plots the average insertion costs for DIM and GHT-R (for flooding, of course, the insertion costs are zero). DIM incurs less per-event overhead in inserting events (regardless of the actual event distribution; Figure 11 shows the cost for uniformly distributed events). The reason for this is interesting. In GHT-R, storing almost every event incurs a perimeter traversal, and storing some events requires traversing the outer perimeter of the network [20]. By contrast, in DIM, storing an event incurs a perimeter traversal only when a node's boundaries are undecided. Furthermore, an insertion or a query in a DIM can traverse the outer perimeter (Section 3.3), but less frequently than in GHTs.
[Figure 11: Average insertion cost for DIM and GHT-R, for network sizes from 50 to 300 nodes.]
Figure 13 plots the average query cost for a bounded uniform query size distribution. For this graph (and the next) we use a uniform event distribution, since the event distribution does not affect the query delivery cost. For this simulation, our bound was 1/4th the size of the largest possible query (e.g., a query of the form ⟨0-0.5, 0-0.5⟩). Even for this generous query size, DIMs perform quite well (almost a third the cost of flooding). Notice, however, that GHT-Rs incur a high query cost, since almost any query requires as many sub-queries as the width of the first attribute's range.
[Figure 13: Average query cost with a bounded uniform query distribution (DIM, flooding, GHT-R).]
Figure 14 plots the average query cost for the exponential distribution (the average query size for this distribution was set to be 1/16th the largest possible query). The superior scaling of DIMs is evident in these graphs. Clearly, this is the regime in which one might expect DIMs to perform best: when most of the queries are small and large queries are relatively rare. This is also the regime in which one would expect to use multi-dimensional range queries, to perform relatively tight correlations. As with the bounded uniform distribution, GHT query cost is dominated by the cost of sending sub-queries; for DIMs, the query splitting strategy works quite well in keeping overall query delivery costs low.
[Figure 14: Average query cost with an exponential query distribution (DIM, flooding, GHT-R).]
Figure 12 describes the efficacy of local replication. To obtain this figure, we conducted the following experiment. On a 100-node network, we inserted a number of events uniformly distributed throughout the network, then issued a query covering the entire network and recorded the answers. Knowing the expected answers for this query, we then successively removed a fraction f of nodes randomly, and re-issued the same query. The figure plots the fraction of expected responses actually received, with and without replication. As the graph shows, local replication performs well for random failures, returning almost 90% of the responses when up to 30% of the nodes have failed simultaneously. In the absence of local replication, of course, when 30% of the nodes fail, the response rate is only 70%, as one would expect.
[Figure 12: Local replication performance: fraction of replies received, relative to the non-failure case, as the fraction of failed nodes grows to 30%, with and without local replication.]
We note that DIMs (as currently designed) are not perfect. When the data is highly skewed (as it was for our trace data set from the habitat monitoring application, where most of the event values fell within 10% of the attribute's range) a few DIM nodes will clearly become the bottleneck. This is depicted in Figure 15, which shows that for DIMs and GHT-Rs, the maximum number of transmissions at any network node (the hotspots) is rather high. (For less skewed data distributions, and reasonable query size distributions, the hotspot curves for all three schemes are comparable.) This is a standard problem that database indices have dealt with by tree re-balancing. In our case, simpler solutions might be possible (we discuss this in Section 7). However, our use of the trace data demonstrates that DIMs work for events which have more than two dimensions. Increasing the number of dimensions does not noticeably degrade DIM's query cost (omitted for lack of space).
[Figure 15: Hotspot usage: maximum number of transmissions at any node, on the trace data set, for DIM, flooding, and GHT-R.]
Also omitted are experiments examining the impact of several other factors, as they do not affect our conclusions in any way. As we expected, DIMs are comparable in performance to flooding when all sizes of queries are equally likely. For an algebraic distribution of query sizes, the relative performance is close to that for the exponential distribution. For normally distributed events, the insertion costs are comparable to those for the uniform distribution.
Finally, we note that in all our evaluations we have used only list queries (those that request all events matching the specified range). We expect that for summary queries (those that expect an aggregate over matching events), the overall cost of DIMs could be lower, because the matching data are likely to be found in one or a small number of zones. We leave an understanding of this to future work. Also left to future work is a detailed understanding of the impact of location error on DIM's mechanisms. Recent work [22] has examined the impact of imprecise location information on other data-centric storage mechanisms such as GHTs, and found that there exist relatively simple fixes to GPSR that ameliorate the effects of location error.
[Footnote 5: Our metrics are chosen so that the exact number of events and queries is unimportant for our discussion. Of course, the overall performance of the system will depend on the relative frequency of events and queries, as we discuss in Section 4. Since we don't have realistic ratios for these, we focus on the microscopic costs, rather than on the overall system costs.]
[Footnote 6: In practice, the performance of local replication is likely to be much better than this. Assuming a node and its replica don't simultaneously fail often, a node will almost always detect a replica failure and re-replicate, leading to near-100% response rates.]
[Figure 16: Software architecture of DIM over GPSR: the DIM subsystem (zone manager, event manager, event router, query router, query processor) sits above a GPSR module (greedy forwarding, perimeter forwarding, beaconing, neighbor list manager), over MoteNIC (Mica radio) or IP socket (802.11b/Ethernet) lower interfaces.]

6. IMPLEMENTATION
We have implemented DIMs on a Linux platform suitable for experimentation on PDAs and PC-104 class machines. To implement DIMs, we had to develop and test an independent implementation of GPSR. Our GPSR implementation is full-featured, while our DIM implementation has most of the algorithms discussed in Section 3; some of the robustness extensions have only simpler variants implemented. The software
architecture of the DIM/GPSR system is shown in Figure 16. The entire system (about 5000 lines of code) is event-driven and multi-threaded. The DIM subsystem consists of six logical components: zone management, event maintenance, event routing, query routing, query processing, and GPSR interactions. The GPSR system is implemented as a user-level daemon process. Applications are executed as clients. For the DIM subsystem, the GPSR module provides several extensions: it exports information about neighbors, and provides callbacks during packet forwarding and perimeter-mode termination.
We tested our implementation on a testbed consisting of 8 PC-104 class machines. Each of these boxes runs Linux and uses a Mica mote (attached through a serial cable) for communication. These boxes are laid out in an office building with a total spatial separation of over a hundred feet. We manually measured the locations of these nodes relative to some coordinate system and configured the nodes with their locations. The network topology is approximately a chain.
On this testbed, we inserted queries and events from a single designated node. Our events have two attributes which span all combinations of the four values [0, 0.25, 0.75, 1] (sixteen events in all). Our queries span four sizes, returning 1, 4, 9 and 16 events respectively. Figure 17 plots the number of events received for different-sized queries. It might appear that we received fewer events than expected, but this graph doesn't count the events that were already stored at the querier. With that adjustment, the number of responses matches our expectation.
[Figure 17: Number of events received per query, for query sizes 0.25×0.25, 0.50×0.50, 0.75×0.75, and 1.0×1.0.]
[Figure 18: Query distribution cost: total number of messages for sending the query alone, by query size.]
Finally, Figure 18 shows the total number of
messages required for different query sizes on our testbed. While these experiments do not reveal as much about the performance range of DIMs as our simulations do, they nevertheless serve as a proof-of-concept for DIMs. Our next step in the implementation is to port DIMs to the Mica motes, and to integrate them into the TinyDB [17] sensor database engine on motes.

7. CONCLUSIONS
In this paper, we have discussed the design and evaluation of a distributed data structure called DIM for efficiently resolving multi-dimensional range queries in sensor networks. Our design of DIMs relies upon a novel locality-preserving hash inspired by early work in database indexing, and is built upon GPSR. We have a working prototype of both GPSR and DIM, and plan to conduct larger-scale experiments in the future.
There are several interesting future directions that we intend to pursue. One is adaptation to skewed data distributions, since these can cause storage and transmission hotspots. Unlike traditional database indices that re-balance trees upon data insertion, in sensor networks it might be feasible to re-structure the zones on a much larger timescale, after obtaining a rough global estimate of the data distribution. Another direction is support for node heterogeneity in the zone construction process; nodes with larger storage space assert larger-sized zones for themselves. A third is support for efficient resolution of existential queries: whether there exists an event matching a multi-dimensional range.

Acknowledgments
This work benefited greatly from discussions with Fang Bian, Hui Zhang and other ENL lab members, as well as from comments provided by the reviewers and our shepherd, Feng Zhao.

8. REFERENCES
[1] J. Aspnes and G. Shah. Skip Graphs. In Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Baltimore, MD, January 2003.
[2] J. L.
Bentley. Multidimensional Binary Search Trees Used for Associative Searching. Communications of the ACM, 18(9):475-484, 1975.
[3] P. Bonnet, J. E. Gehrke, and P. Seshadri. Towards Sensor Database Systems. In Proceedings of the Second International Conference on Mobile Data Management, Hong Kong, January 2001.
[4] I. Clarke, O. Sandberg, B. Wiley, and T. W. Hong. Freenet: A Distributed Anonymous Information Storage and Retrieval System. In Designing Privacy Enhancing Technologies: International Workshop on Design Issues in Anonymity and Unobservability. Springer, New York, 2001.
[5] D. Comer. The Ubiquitous B-Tree. ACM Computing Surveys, 11(2):121-137, 1979.
[6] R. A. Finkel and J. L. Bentley. Quad Trees: A Data Structure for Retrieval on Composite Keys. Acta Informatica, 4:1-9, 1974.
[7] D. Ganesan, D. Estrin, and J. Heidemann. DIMENSIONS: Why do we need a new Data Handling architecture for Sensor Networks? In Proceedings of the First Workshop on Hot Topics in Networks (HotNets-I), Princeton, NJ, October 2002.
[8] A. Gionis, P. Indyk, and R. Motwani. Similarity Search in High Dimensions via Hashing. In Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, September 1999.
[9] R. Govindan, J. Hellerstein, W. Hong, S. Madden, M. Franklin, and S. Shenker. The Sensor Network as a Database. Technical Report 02-771, Computer Science Department, University of Southern California, September 2002.
[10] B. Greenstein, D. Estrin, R. Govindan, S. Ratnasamy, and S. Shenker. DIFS: A Distributed Index for Features in Sensor Networks. In Proceedings of the 1st IEEE International Workshop on Sensor Network Protocols and Applications, Anchorage, AK, May 2003.
[11] A. Guttman. R-Trees: A Dynamic Index Structure for Spatial Searching. In Proceedings of ACM SIGMOD, Boston, MA, June 1984.
[12] M. Harren, J. M. Hellerstein, R. Huebsch, B. T. Loo, S. Shenker, and I. Stoica. Complex Queries in DHT-based Peer-to-Peer Networks. In P. Druschel, F. Kaashoek, and A. Rowstron, editors, Proceedings of the 1st International Workshop on Peer-to-Peer Systems (IPTPS'02), volume 2429 of LNCS, page 242, Cambridge, MA, March 2002. Springer-Verlag.
[13] P. Indyk and R. Motwani. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing, Dallas, TX, May 1998.
[14] P. Indyk, R. Motwani, P. Raghavan, and S. Vempala. Locality-Preserving Hashing in Multidimensional Spaces. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pages 618-625, El Paso, TX, May 1997. ACM Press.
[15] C. Intanagonwiwat, R. Govindan, and D. Estrin. Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks. In Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 2000), Boston, MA, August 2000.
[16] B. Karp and H. T. Kung. GPSR: Greedy Perimeter Stateless Routing for Wireless Networks. In Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 2000), Boston, MA, August 2000.
[17] S. Madden, M. Franklin, J. Hellerstein, and W. Hong. The Design of an Acquisitional Query Processor for Sensor Networks. In Proceedings of ACM SIGMOD, San Diego, CA, June 2003.
[18] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. TAG: a Tiny AGgregation Service for Ad-Hoc Sensor Networks. In Proceedings of the 5th Annual Symposium on Operating Systems Design and Implementation (OSDI), Boston, MA, December 2002.
[19] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A Scalable Content-Addressable Network. In Proceedings of ACM SIGCOMM, San Diego, CA, August 2001.
[20] S. Ratnasamy, B. Karp, L. Yin, F. Yu, D. Estrin, R. Govindan, and S.
Shenker. GHT: A Geographic Hash Table for Data-Centric Storage. In Proceedings of the First ACM International Workshop on Wireless Sensor Networks and Applications, Atlanta, GA, September 2002.
[21] H. Samet. Spatial Data Structures. In W. Kim, editor, Modern Database Systems: The Object Model, Interoperability and Beyond, pages 361-385. Addison-Wesley/ACM, 1995.
[22] K. Seada, A. Helmy, and R. Govindan. On the Effect of Localization Errors on Geographic Face Routing in Sensor Networks. Under submission, 2003.
[23] S. Shenker, S. Ratnasamy, B. Karp, R. Govindan, and D. Estrin. Data-Centric Storage in Sensornets. In Proceedings of the ACM SIGCOMM Workshop on Hot Topics in Networks, Princeton, NJ, 2002.
[24] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan. Chord: A Scalable Peer-To-Peer Lookup Service for Internet Applications. In Proceedings of ACM SIGCOMM, San Diego, CA, August 2001.
[25] F. Ye, H. Luo, J. Cheng, S. Lu, and L. Zhang. A Two-Tier Data Dissemination Model for Large-scale Wireless Sensor Networks. In Proceedings of the Eighth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom'02), Atlanta, GA, September 2002.
","lvl-3":"Multi-dimensional Range Queries in Sensor Networks *
ABSTRACT
In many sensor networks, data or events are named by attributes. Many of these attributes have scalar values, so one natural way to query events of interest is to use a multi-dimensional range query. An example is: "List all events whose temperature lies between 50° and 60°, and whose light levels lie between 10 and 15." Such queries are useful for correlating events occurring within the network. In this paper, we describe the design of a distributed index that scalably supports multi-dimensional range queries. Our distributed index for multi-dimensional data (or DIM) uses a novel geographic embedding of a classical index data structure, and is built upon the GPSR geographic routing
algorithm. Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O(√N)). In detailed simulations, we show that in practice, the insertion and query costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized networks. Finally, experiments on a small-scale testbed validate the feasibility of DIMs.

1. INTRODUCTION
In wireless sensor networks, data or events will be named by attributes [15] or represented as virtual relations in a distributed database [18, 3]. Many of these attributes will have scalar values: e.g., temperature and light levels, soil moisture conditions, etc. In these systems, we argue, one natural way to query for events of interest will be to use multi-dimensional range queries on these attributes. For example, scientists analyzing the growth of marine micro-organisms might be interested in events that occurred within certain temperature and light conditions: "List all events that have temperatures between 50°F and 60°F, and light levels between 10 and 20".
Such range queries can be used in two distinct ways. They can help users efficiently drill down their search for events of interest. The query described above illustrates this, where the scientist is presumably interested in discovering, and perhaps mapping, the combined effect of temperature and light on the growth of marine micro-organisms. More importantly, they can be used by application software running within a sensor network for correlating events and triggering actions. For example, if in a habitat monitoring application a bird alighting on its nest is indicated by a certain range of thermopile sensor readings and a certain range of microphone readings, a multi-dimensional range query on those attributes enables higher-confidence detection of the arrival of a flock of birds, and can
trigger a system of cameras.
In traditional database systems, such range queries are supported using pre-computed indices. Indices trade off some initial pre-computation cost to achieve a significantly more efficient querying capability. For sensor networks, we assert that a centralized index for multi-dimensional range queries may not be feasible for energy-efficiency reasons (as well as the fact that the access bandwidth to this central index will be limited, particularly for queries emanating from within the network). Rather, we believe, there will be situations when it is more appropriate to build an in-network distributed data structure for efficiently answering multi-dimensional range queries. In this paper, we present just such a data structure, which we call a DIM. DIMs are inspired by classical database indices, and are essentially embeddings of such indices within the sensor network.
DIMs leverage two key ideas: in-network data-centric storage, and a novel locality-preserving geographic hash (Section 3). DIMs trace their lineage to data-centric storage systems [23]. The underlying mechanism in these systems allows nodes to consistently hash an event to some location within the network, which allows efficient retrieval of events. Building upon this, DIMs use a technique whereby events whose attribute values are "close" are likely to be stored at the same or nearby nodes. DIMs then use an underlying geographic routing algorithm (GPSR [16]) to route events and queries to their corresponding nodes in an entirely distributed fashion.
We discuss the design of a DIM, presenting algorithms for event insertion and querying, for maintaining a DIM in the event of node failure, and for making DIMs robust to data or packet loss (Section 3). We then extensively evaluate DIMs using analysis (Section 4), simulation (Section 5), and actual implementation (Section 6). Our analysis reveals that, under reasonable assumptions about query distributions, DIMs
scale quite well with network size (both insertion and query costs scale as O(√N)). In detailed simulations, we show that in practice, the event insertion and querying costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized networks. Experiments on a small-scale testbed validate the feasibility of DIMs (Section 6). Much work remains, including efficient support for skewed data distributions, existential queries, and node heterogeneity.
We believe that DIMs will be an essential, but perhaps not necessarily the only, distributed data structure supporting efficient queries in sensor networks. DIMs will be part of a suite of such systems that enable feature extraction [7], simple range querying [10], exact-match queries [23], or continuous queries [15, 18]. All such systems will likely be integrated into a sensor network database system such as TinyDB [17]. Application designers could then choose the appropriate method of information access. For instance, a fire tracking application would use DIM to detect the hotspots, and would then use mechanisms that enable continuous queries [15, 18] to track the spatio-temporal progress of the hotspots. Finally, we note that DIMs are applicable not just to sensor networks, but to other deeply distributed systems (embedded networks for home and factory automation) as well.

2. RELATED WORK
The basic problem that this paper addresses, multi-dimensional range queries, is typically solved in database systems using indexing techniques. The database community has focused mostly on centralized indices, but distributed indexing has received some attention in the literature.
Indexing techniques essentially trade off some data insertion cost to enable efficient querying. Indexing has long been a classical research problem in the database community [5, 2]. Our work draws its inspiration from the class of multi-key constant-branching index structures, exemplified by
k-d trees [2], where k represents the dimensionality of the data space. Our approach essentially represents a geographic embedding of such structures in a sensor field. There is one important difference. The classical indexing structures are data-dependent (as are some indexing schemes that use locality-preserving hashes, developed in the theory literature [14, 8, 13]). The index structure is decided not only by the data, but also by the order in which data is inserted. Our current design is not data-dependent. Finally, tangentially related to our work is the class of spatial indexing systems [21, 6, 11].
While there has been some work on distributed indexing, the problem has not been extensively explored. There exist distributed indices of a restricted kind: those that allow exact-match or partial prefix-match queries. Examples of such systems, of course, are the Internet Domain Name System, and the class of distributed hash table (DHT) systems exemplified by Freenet [4], Chord [24], and CAN [19]. Our work is superficially similar to CAN in that both construct a zone-based overlay atop the underlying physical network. The underlying details make the two systems very different: CAN's overlay is purely logical, while our overlay is consistent with the underlying physical topology. More recent work in the Internet context has addressed support for range queries in DHT systems [1, 12], but it is unclear if these results directly translate to the sensor network context.
Several research efforts have expressed the vision of a database interface to sensor networks [9, 3, 18], and there are examples of systems that contribute to this vision [18, 3, 17]. Our work is similar in spirit to this body of literature. In fact, DIMs could become an important component of a sensor network database system such as TinyDB [17]. Our work departs from prior work in this area in two significant respects. Unlike these approaches, in our work the data generated at a node are
hashed (in general) to different locations.\nThis hashing is the key to scaling multi-dimensional range searches.\nIn all the other systems described above, queries are flooded throughout the network, and can dominate the total cost of the system.\nOur work avoids query flooding by an appropriate choice of hashing.\nMadden et al. [17] also describe a distributed index, called Semantic Routing Trees (SRT).\nThis index is used to direct queries to nodes that have detected relevant data.\nOur work differs from SRT in three key aspects.\nFirst, SRT is built on single attributes while DIM supports multiple attributes.\nSecond, SRT constructs a routing tree based on historical sensor readings, and therefore works well only for slowly-changing sensor values.\nFinally, in SRT queries are issued from a fixed node while in DIM queries can be issued from any node.\nA similar differentiation applies with respect to work on data-centric routing in sensor networks [15, 25], where data generated at a node is assumed to be stored at the node, and queries are either flooded throughout the network [15], or each source sets up a network-wide overlay announcing its presence so that mobile sinks can rendezvous with sources at the nearest node on the overlay [25].\nThese approaches work well for relatively long-lived queries.\nFinally, our work is most closely related to data-centric storage [23] systems, which include geographic hash-tables (GHTs) [20], DIMENSIONS [7], and DIFS [10].\nIn a GHT, data is hashed by name to a location within the network, enabling highly efficient rendezvous.\nGHTs are built upon the GPSR [16] protocol and leverage some interesting properties of that protocol, such as the ability to route to a node nearest to a given location.\nWe also leverage properties in GPSR (as we describe later), but we use a locality-preserving hash to store data, enabling efficient multi-dimensional range queries.\nDIMENSIONS and DIFS can be thought of as using the same set of 
primitives as GHT (storage using consistent hashing), but for different ends: DIMENSIONS allows drill-down search for features within a sensor network, while DIFS allows range queries on a single key in addition to other operations.\n3.\nTHE DESIGN OF DIMS\n3.1 Zones\n3.2 Associating Zones with Nodes\n3.3 Inserting an Event\n3.3.1 Hashing an Event to a Zone\n3.3.2 Routing an Event to its Owner\n3.3.4 Summary and Pseudo-code\n3.4 Resolving and Routing Queries\n3.5 Robustness\n3.5.1 Maintaining Zones\n3.5.2 Preventing Data Loss from Node Failure\n3.5.3 Robustness to Packet Loss\n4.\nDIMS: AN ANALYSIS\n5.\nDIMS: SIMULATION RESULTS\n5.1 Simulation Methodology\n5.2 Results\n6.\nIMPLEMENTATION\n7.\nCONCLUSIONS\nIn this paper, we have discussed the design and evaluation of a distributed data structure called DIM for efficiently resolving multi-dimensional range queries in sensor networks.\nOur design of DIMs relies upon a novel locality-preserving hash inspired by early work in database indexing, and is built upon GPSR.\nWe have a working prototype, both of GPSR and DIM, and plan to conduct larger-scale experiments in the future.\nThere are several interesting future directions that we intend to pursue.\nOne is adaptation to skewed data distributions, since these can cause storage and transmission hotspots.\nUnlike traditional database indices that re-balance trees upon data insertion, in sensor networks it might be feasible to re-structure the zones on a much larger timescale after obtaining a rough global estimate of the data distribution.\nAnother direction is support for node heterogeneity in the zone construction process; nodes with larger storage space assert larger-sized zones for themselves.\nA third is support for efficient resolution of existential queries--whether there exists an event matching a multi-dimensional range.","lvl-4":"Multi-dimensional Range Queries in Sensor Networks *\nABSTRACT\nIn many sensor networks, data or events are named by 
attributes.\nMany of these attributes have scalar values, so one natural way to query events of interest is to use a multi-dimensional range query.\nAn example is: "List all events whose temperature lies between 50° and 60°, and whose light levels lie between 10 and 15."\nSuch queries are useful for correlating events occurring within the network.\nIn this paper, we describe the design of a distributed index that scalably supports multi-dimensional range queries.\nOur distributed index for multi-dimensional data (or DIM) uses a novel geographic embedding of a classical index data structure, and is built upon the GPSR geographic routing algorithm.\nOur analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O(√N)).\nIn detailed simulations, we show that in practice, the insertion and query costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized networks.\nFinally, experiments on a small-scale testbed validate the feasibility of DIMs.\n1.\nINTRODUCTION\nIn wireless sensor networks, data or events will be named by attributes [15] or represented as virtual relations in a distributed database [18, 3].\nIn these systems, we argue, one natural way to query for events of interest will be to use multi-dimensional range queries on these attributes.\nSuch range queries can be used in two distinct ways.\nThey can help users efficiently drill down their search for events of interest.\nThe query described above illustrates this, where the scientist is presumably interested in discovering, and perhaps mapping, the combined effect of temperature and light on the growth of marine micro-organisms.\nMore importantly, they can be used by application software running within a sensor network for correlating events and triggering actions.\nIn traditional database systems, such range queries are supported 
using pre-computed indices.\nIndices trade off some initial pre-computation cost to achieve a significantly more efficient querying capability.\nFor sensor networks, we assert that a centralized index for multi-dimensional range queries may not be feasible for energy-efficiency reasons (as well as the fact that the access bandwidth to this central index will be limited, particularly for queries emanating from within the network).\nRather, we believe, there will be situations when it is more appropriate to build an in-network distributed data structure for efficiently answering multi-dimensional range queries.\nIn this paper, we present just such a data structure, which we call a DIM.\nDIMs are inspired by classical database indices, and are essentially embeddings of such indices within the sensor network.\nDIMs leverage two key ideas: in-network data-centric storage, and a novel locality-preserving geographic hash (Section 3).\nDIMs trace their lineage to data-centric storage systems [23].\nThe underlying mechanism in these systems allows nodes to consistently hash an event to some location within the network, which allows efficient retrieval of events.\nBuilding upon this, DIMs use a technique whereby events whose attribute values are "close" are likely to be stored at the same or nearby nodes.\nDIMs then use an underlying geographic routing algorithm (GPSR [16]) to route events and queries to their corresponding nodes in an entirely distributed fashion.\nWe discuss the design of a DIM, presenting algorithms for event insertion and querying, for maintaining a DIM in the event of node failure, and for making DIMs robust to data or packet loss (Section 3).\nWe then extensively evaluate DIMs using analysis (Section 4), simulation (Section 5), and actual implementation (Section 6).\nOur analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O(√N)).\nIn 
detailed simulations, we show that in practice, the event insertion and querying costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized networks.\nExperiments on a small-scale testbed validate the feasibility of DIMs (Section 6).\nMuch work remains, including efficient support for skewed data distributions, existential queries, and node heterogeneity.\nWe believe that DIMs will be an essential, but perhaps not necessarily the only, distributed data structure supporting efficient queries in sensor networks.\nDIMs will be part of a suite of such systems that enable feature extraction [7], simple range querying [10], exact-match queries [23], or continuous queries [15, 18].\nAll such systems will likely be integrated into a sensor network database system such as TinyDB [17].\nFinally, we note that DIMs are applicable not just to sensor networks, but to other deeply distributed systems (embedded networks for home and factory automation) as well.\n2.\nRELATED WORK\nThe basic problem that this paper addresses--multi-dimensional range queries--is typically solved in database systems using indexing techniques.\nThe database community has focused mostly on centralized indices, but distributed indexing has received some attention in the literature.\nIndexing techniques essentially trade off some data insertion cost to enable efficient querying.\nIndexing has long been a classical research problem in the database community [5, 2].\nOur work draws its inspiration from the class of multi-key constant branching index structures, exemplified by k-d trees [2], where k represents the dimensionality of the data space.\nOur approach essentially represents a geographic embedding of such structures in a sensor field.\nThere is one important difference.\nClassical indexing structures are data-dependent: the index structure is decided not only by the data, but also by the order in which data is inserted.\nOur current design is not data-dependent.\nFinally, tangentially related to 
our work is the class of spatial indexing systems [21, 6, 11].\nWhile there has been some work on distributed indexing, the problem has not been extensively explored.\nThere exist distributed indices of a restricted kind--those that allow exact match or partial prefix match queries.\nOur work is superficially similar to the CAN distributed hash table system [19] in that both construct a zone-based overlay atop the underlying physical network.\nMore recent work in the Internet context has addressed support for range queries in DHT systems [1, 12], but it is unclear if these directly translate to the sensor network context.\nSeveral research efforts have expressed the vision of a database interface to sensor networks [9, 3, 18], and there are examples of systems that contribute to this vision [18, 3, 17].\nOur work is similar in spirit to this body of literature.\nIn fact, DIMs could become an important component of a sensor network database system such as TinyDB [17].\nOur work departs from prior work in this area in two significant respects.\nUnlike these approaches, in our work the data generated at a node are hashed (in general) to different locations.\nThis hashing is the key to scaling multi-dimensional range searches.\nIn all the other systems described above, queries are flooded throughout the network, and can dominate the total cost of the system.\nOur work avoids query flooding by an appropriate choice of hashing.\nMadden et al. 
[17] also describe a distributed index, called Semantic Routing Trees (SRT).\nThis index is used to direct queries to nodes that have detected relevant data.\nOur work differs from SRT in three key aspects.\nFirst, SRT is built on single attributes while DIM supports multiple attributes.\nSecond, SRT constructs a routing tree based on historical sensor readings, and therefore works well only for slowly-changing sensor values.\nFinally, in SRT queries are issued from a fixed node while in DIM queries can be issued from any node.\nThese approaches work well for relatively long-lived queries.\nFinally, our work is most closely related to data-centric storage [23] systems, which include geographic hash-tables (GHTs) [20], DIMENSIONS [7], and DIFS [10].\nIn a GHT, data is hashed by name to a location within the network, enabling highly efficient rendezvous.\nWe also leverage properties in GPSR (as we describe later), but we use a locality-preserving hash to store data, enabling efficient multi-dimensional range queries.\nDIMENSIONS allows drill-down search for features within a sensor network, while DIFS allows range queries on a single key in addition to other operations.\n7.\nCONCLUSIONS\nIn this paper, we have discussed the design and evaluation of a distributed data structure called DIM for efficiently resolving multi-dimensional range queries in sensor networks.\nOur design of DIMs relies upon a novel locality-preserving hash inspired by early work in database indexing, and is built upon GPSR.\nWe have a working prototype, both of GPSR and DIM, and plan to conduct larger-scale experiments in the future.\nOne future direction is adaptation to skewed data distributions, since these can cause storage and transmission hotspots.\nUnlike traditional database indices that re-balance trees upon data insertion, in sensor networks it might be feasible to re-structure the zones on a much larger timescale after obtaining a rough global estimate of the data distribution.\nAnother direction is support for node 
heterogeneity in the zone construction process; nodes with larger storage space assert larger-sized zones for themselves.\nA third is support for efficient resolution of existential queries--whether there exists an event matching a multi-dimensional range.","lvl-2":"Multi-dimensional Range Queries in Sensor Networks *\nABSTRACT\nIn many sensor networks, data or events are named by attributes.\nMany of these attributes have scalar values, so one natural way to query events of interest is to use a multi-dimensional range query.\nAn example is: "List all events whose temperature lies between 50° and 60°, and whose light levels lie between 10 and 15."\nSuch queries are useful for correlating events occurring within the network.\nIn this paper, we describe the design of a distributed index that scalably supports multi-dimensional range queries.\nOur distributed index for multi-dimensional data (or DIM) uses a novel geographic embedding of a classical index data structure, and is built upon the GPSR geographic routing algorithm.\nOur analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O(√N)).\nIn detailed simulations, we show that in practice, the insertion and query costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized networks.\nFinally, experiments on a small-scale testbed validate the feasibility of DIMs.\n1.\nINTRODUCTION\nIn wireless sensor networks, data or events will be named by attributes [15] or represented as virtual relations in a distributed database [18, 3].\nMany of these attributes will have scalar values: e.g., temperature and light levels, soil moisture conditions, etc. 
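The event and query model just introduced (attribute tuples matched against per-attribute ranges) can be sketched as follows; the function name and the sample values are illustrative, not from the paper:

```python
# Sketch of the query model: an event is a tuple of scalar attribute
# values, and a multi-dimensional range query is one (low, high)
# bound per attribute. All names and values here are illustrative.

def matches(event, query):
    """True iff every attribute falls within its [low, high] range."""
    return all(lo <= a <= hi for a, (lo, hi) in zip(event, query))

# Events as (temperature, light) tuples:
events = [(55.0, 12), (70.0, 14), (52.0, 18)]

# "Temperature between 50 and 60, light between 10 and 15":
query = [(50.0, 60.0), (10, 15)]
hits = [e for e in events if matches(e, query)]   # only (55.0, 12) matches

# A point query is the special case where low == high per attribute:
point_query = [(55.0, 55.0), (12, 12)]
```

Note that a DIM never evaluates such predicates by flooding every node; the point of the design is that the query ranges themselves are hashed to the few zones that could hold matching events.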
In these systems, we argue, one natural way to query for events of interest will be to use multi-dimensional range queries on these attributes.\nFor example, scientists analyzing the growth of marine microorganisms might be interested in events that occurred within certain temperature and light conditions: "List all events that have temperatures between 50°F and 60°F, and light levels between 10 and 20".\nSuch range queries can be used in two distinct ways.\nThey can help users efficiently drill down their search for events of interest.\nThe query described above illustrates this, where the scientist is presumably interested in discovering, and perhaps mapping, the combined effect of temperature and light on the growth of marine micro-organisms.\nMore importantly, they can be used by application software running within a sensor network for correlating events and triggering actions.\nFor example, if, in a habitat monitoring application, a bird alighting on its nest is indicated by a certain range of thermopile sensor readings, and a certain range of microphone readings, a multi-dimensional range query on those attributes enables higher-confidence detection of the arrival of a flock of birds, and can trigger a system of cameras.\nIn traditional database systems, such range queries are supported using pre-computed indices.\nIndices trade off some initial pre-computation cost to achieve a significantly more efficient querying capability.\nFor sensor networks, we assert that a centralized index for multi-dimensional range queries may not be feasible for energy-efficiency reasons (as well as the fact that the access bandwidth to this central index will be limited, particularly for queries emanating from within the network).\nRather, we believe, there will be situations when it is more appropriate to build an in-network distributed data structure for efficiently answering multi-dimensional range queries.\nIn this paper, we present just such a data 
structure, which we call a DIM.\nDIMs are inspired by classical database indices, and are essentially embeddings of such indices within the sensor network.\nDIMs leverage two key ideas: in-network data-centric storage, and a novel locality-preserving geographic hash (Section 3).\nDIMs trace their lineage to data-centric storage systems [23].\nThe underlying mechanism in these systems allows nodes to consistently hash an event to some location within the network, which allows efficient retrieval of events.\nBuilding upon this, DIMs use a technique whereby events whose attribute values are "close" are likely to be stored at the same or nearby nodes.\nDIMs then use an underlying geographic routing algorithm (GPSR [16]) to route events and queries to their corresponding nodes in an entirely distributed fashion.\nWe discuss the design of a DIM, presenting algorithms for event insertion and querying, for maintaining a DIM in the event of node failure, and for making DIMs robust to data or packet loss (Section 3).\nWe then extensively evaluate DIMs using analysis (Section 4), simulation (Section 5), and actual implementation (Section 6).\nOur analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O(√N)).\nIn detailed simulations, we show that in practice, the event insertion and querying costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized networks.\nExperiments on a small-scale testbed validate the feasibility of DIMs (Section 6).\nMuch work remains, including efficient support for skewed data distributions, existential queries, and node heterogeneity.\nWe believe that DIMs will be an essential, but perhaps not necessarily the only, distributed data structure supporting efficient queries in sensor networks.\nDIMs will be part of a suite of such systems that enable feature extraction [7], simple range 
querying [10], exact-match queries [23], or continuous queries [15, 18].\nAll such systems will likely be integrated into a sensor network database system such as TinyDB [17].\nApplication designers could then choose the appropriate method of information access.\nFor instance, a fire tracking application would use DIM to detect the hotspots, and would then use mechanisms that enable continuous queries [15, 18] to track the spatio-temporal progress of the hotspots.\nFinally, we note that DIMs are applicable not just to sensor networks, but to other deeply distributed systems (embedded networks for home and factory automation) as well.\n2.\nRELATED WORK\nThe basic problem that this paper addresses--multi-dimensional range queries--is typically solved in database systems using indexing techniques.\nThe database community has focused mostly on centralized indices, but distributed indexing has received some attention in the literature.\nIndexing techniques essentially trade off some data insertion cost to enable efficient querying.\nIndexing has long been a classical research problem in the database community [5, 2].\nOur work draws its inspiration from the class of multi-key constant branching index structures, exemplified by k-d trees [2], where k represents the dimensionality of the data space.\nOur approach essentially represents a geographic embedding of such structures in a sensor field.\nThere is one important difference.\nThe classical indexing structures are data-dependent (as are some indexing schemes that use locality-preserving hashes developed in the theory literature [14, 8, 13]).\nThe index structure is decided not only by the data, but also by the order in which data is inserted.\nOur current design is not data-dependent.\nFinally, tangentially related to our work is the class of spatial indexing systems [21, 6, 11].\nWhile there has been some work on distributed indexing, the problem has not been extensively explored.\nThere exist distributed 
indices of a restricted kind--those that allow exact match or partial prefix match queries.\nExamples of such systems, of course, are the Internet Domain Name System, and the class of distributed hash table (DHT) systems exemplified by Freenet [4], Chord [24], and CAN [19].\nOur work is superficially similar to CAN in that both construct a zone-based overlay atop the underlying physical network.\nThe underlying details make the two systems very different: CAN's overlay is purely logical while our overlay is consistent with the underlying physical topology.\nMore recent work in the Internet context has addressed support for range queries in DHT systems [1, 12], but it is unclear if these directly translate to the sensor network context.\nSeveral research efforts have expressed the vision of a database interface to sensor networks [9, 3, 18], and there are examples of systems that contribute to this vision [18, 3, 17].\nOur work is similar in spirit to this body of literature.\nIn fact, DIMs could become an important component of a sensor network database system such as TinyDB [17].\nOur work departs from prior work in this area in two significant respects.\nUnlike these approaches, in our work the data generated at a node are hashed (in general) to different locations.\nThis hashing is the key to scaling multi-dimensional range searches.\nIn all the other systems described above, queries are flooded throughout the network, and can dominate the total cost of the system.\nOur work avoids query flooding by an appropriate choice of hashing.\nMadden et al. 
[17] also describe a distributed index, called Semantic Routing Trees (SRT).\nThis index is used to direct queries to nodes that have detected relevant data.\nOur work differs from SRT in three key aspects.\nFirst, SRT is built on single attributes while DIM supports multiple attributes.\nSecond, SRT constructs a routing tree based on historical sensor readings, and therefore works well only for slowly-changing sensor values.\nFinally, in SRT queries are issued from a fixed node while in DIM queries can be issued from any node.\nA similar differentiation applies with respect to work on data-centric routing in sensor networks [15, 25], where data generated at a node is assumed to be stored at the node, and queries are either flooded throughout the network [15], or each source sets up a network-wide overlay announcing its presence so that mobile sinks can rendezvous with sources at the nearest node on the overlay [25].\nThese approaches work well for relatively long-lived queries.\nFinally, our work is most closely related to data-centric storage [23] systems, which include geographic hash-tables (GHTs) [20], DIMENSIONS [7], and DIFS [10].\nIn a GHT, data is hashed by name to a location within the network, enabling highly efficient rendezvous.\nGHTs are built upon the GPSR [16] protocol and leverage some interesting properties of that protocol, such as the ability to route to a node nearest to a given location.\nWe also leverage properties in GPSR (as we describe later), but we use a locality-preserving hash to store data, enabling efficient multi-dimensional range queries.\nDIMENSIONS and DIFS can be thought of as using the same set of primitives as GHT (storage using consistent hashing), but for different ends: DIMENSIONS allows drill-down search for features within a sensor network, while DIFS allows range queries on a single key in addition to other operations.\n3.\nTHE DESIGN OF DIMS\nMost sensor networks are deployed to collect data from the environment.\nIn 
these networks, nodes (either individually or collaboratively) will generate events.\nAn event can generally be described as a tuple of attribute values, (A1, A2, ..., Ak), where each attribute Ai represents a sensor reading, or some value corresponding to a detection (e.g., a confidence level).\nThe focus of this paper is the design of systems to efficiently answer multi-dimensional range queries of the form: (x1--y1, x2--y2, ..., xk--yk).\nSuch a query returns all events whose attribute values fall into the corresponding ranges.\nNotice that point queries, i.e., queries that ask for events with specified values for each attribute, are a special case of range queries.\nAs we have discussed in Section 1, range queries can enable efficient correlation and triggering within the network.\nIt is possible to implement range queries by flooding a query within the network.\nHowever, as we show in later sections, this alternative can be inefficient, particularly as the system scales, and if nodes within the network issue such queries relatively frequently.\nThe other alternative, sending all events to an external storage node, results in the access link being a bottleneck, especially if nodes within the network issue queries.\nShenker et al. 
[23] also make similar arguments with respect to data-centric storage schemes in general; DIMs are an instance of such schemes.\nThe system we present in this paper, the DIM, relies upon two foundations: a locality-preserving geographic hash, and an underlying geographic routing scheme.\nThe key to resolving range queries efficiently is data locality: i.e., events with comparable attribute values are stored nearby.\nThe basic insight underlying DIM is that data locality can be obtained by a locality-preserving geographic hash function.\nOur geographic hash function finds a locality-preserving mapping from the multi-dimensional space (described by the set of attributes) to a 2-d geographic space; this mapping is inspired by k-d trees [2] and is described later.\nMoreover, each node in the network self-organizes to claim part of the attribute space for itself (we say that each node owns a zone), so events falling into that space are routed to and stored at that node.\nHaving established the mapping, and the zone structure, DIMs use a geographic routing algorithm previously developed in the literature to route events to their corresponding nodes, or to resolve queries.\nThis algorithm, GPSR [16], essentially enables the delivery of a packet to a node at a specified location.\nThe routing mechanism is simple: when a node receives a packet destined for a node at location X, it forwards the packet to the neighbor closest to X.\nIn GPSR, this is called greedy-mode forwarding.\nWhen no such neighbor exists (as when there exists a void in the network), the node starts the packet on a perimeter-mode traversal, using the well-known right-hand rule to circumnavigate voids.\nGPSR includes efficient techniques for perimeter traversal that are based on graph planarization algorithms amenable to distributed implementation.\nFor all of this to work, DIMs make two assumptions that are consistent with the literature [23].\nFirst, all nodes know the approximate geographic boundaries of 
the network.\nThese boundaries may either be configured in nodes at the time of deployment, or may be discovered using a simple protocol.\nSecond, each node knows its geographic location.\nNode locations can be automatically determined by a localization system or by other means.\nAlthough the basic idea of DIMs may seem straightforward, it is challenging to design a completely distributed data structure that must be robust to packet losses and node failures, yet must support efficient query distribution and deal with communication voids and obstacles.\nWe now describe the complete design of DIMs.\n3.1 Zones\nThe key idea behind DIMs, as we have discussed, is a geographic locality-preserving hash that maps a multi-attribute event to a geographic zone.\nIntuitively, a zone is a subdivision of the geographic extent of a sensor field.\nA zone is defined by the following constructive procedure.\nConsider a rectangle R on the x-y plane.\nIntuitively, R is the bounding rectangle that contains all sensors within the network.\nWe call a sub-rectangle Z of R a zone, if Z is obtained by dividing R k times, k > 0, using a procedure that satisfies the following property: After the i-th division, 0 < i ≤ k, R is partitioned into 2^i equal-sized rectangles.\nIf i is an odd (even) number, the i-th division is parallel to the y-axis (x-axis).\nThat is, the bounding rectangle R is first sub-divided into two zones at level 1 by a vertical line that splits R into two equal pieces; each of these sub-zones can be split into two zones at level 2 by a horizontal line, and so on.\nWe call the non-negative integer k the level of zone Z, i.e. 
level(Z) = k.\nA zone can be identified either by a zone code code(Z) or by an address addr(Z).\nThe code code(Z) is a 0-1 bit string of length level(Z), and is defined as follows.\nIf Z lies in the left half of R, the first (from the left) bit of code(Z) is 0, else 1.\nIf Z lies in the bottom half of R, the second bit of code(Z) is 0, else 1.\nThe remaining bits of code(Z) are then recursively defined on each of the four quadrants of R.\nThis definition of the zone code matches the definition of zones given above, encoding divisions of the sensor field geography by bit strings.\nThus, in Figure 2, the zone in the top-right corner of the rectangle R has a zone code of 1111.\nNote that the zone codes collectively define a zone tree such that individual zones are at the leaves of this tree.\nThe address of a zone Z, addr(Z), is defined to be the centroid of the rectangle defined by Z.\nThe two representations of a zone (its code and its address) can each be computed from the other, assuming the level of the zone is known.\nTwo zones are called sibling zones if their zone codes are the same except for the last bit.\nFor example, if code(Z1) = 01101 and code(Z2) = 01100, then Z1 and Z2 are sibling zones.\nThe sibling subtree of a zone is the subtree rooted at the left or right sibling of the zone in the zone tree.\nWe uniquely define a backup zone for each zone as follows: if the sibling subtree of the zone is on the left, the backup zone is the right-most zone in the sibling subtree; otherwise, the backup zone is the left-most zone in the sibling subtree.\nFor a zone Z, let p be the first level(Z) - 1 digits of code(Z), and let backup(Z) be the backup zone of zone Z.\nIf code(Z) = p1, then code(backup(Z)) = p01* with the greatest number of trailing 1's (where * denotes zero or more occurrences).\nIf code(Z) = p0, then code(backup(Z)) = p10* with the greatest number of trailing 0's.\n3.2 Associating Zones with Nodes\nOur definition of a zone is independent of the actual 
distribution of nodes in the sensor field, and only depends upon the geographic extent (the bounding rectangle) of the sensor field.\nNow we describe how zones are mapped to nodes.\nConceptually, the sensor field is logically divided into zones and each zone is assigned to a single node.\nIf the sensor network were deployed in a grid-like (i.e., very regular) fashion, then it is easy to see that there exists a k such that each node maps into a distinct level-k zone.\nIn general, however, the node placements within a sensor field are likely to be less regular than the grid.\nFor some k, some zones may be empty and other zones might have more than one node situated within them.\nOne alternative would have been to choose a fixed k for the overall system, and then associate nodes with the zones they are in (and if a zone is empty, associate the \"nearest\" node with it, for some definition of \"nearest\").\nBecause it makes our overall query routing system simpler, we allow nodes in a DIM to map to different-sized zones.\nTo precisely understand the associations between zones and nodes, we define the notion of zone ownership.\nFor any given placement of network nodes, consider a node A. 
Let ZA be the largest zone that includes only node A and no other node.\nThen, we say that A owns ZA.\nNotice that this definition of ownership may leave some sections of the sensor field un-associated with a node.\nFor example, in Figure 2, the zone 110 does not contain any nodes and would not have an owner.\nTo remedy this, for any empty zone Z, we define the owner to be the owner of backup (Z).\nIn our example, that empty zone's owner would also be the node that owns 1110, its backup zone.\nHaving defined the association between nodes and zones, the next problem we tackle is: given a node placement, does there exist a distributed algorithm that enables each node to determine which zones it owns, knowing only the overall boundary of the sensor network?\nIn principle, this should be relatively straightforward, since each node can simply determine the location of its neighbors, and apply simple geometric methods to determine the largest zone around it such that no other node resides in that zone.\nIn practice, however, communication voids and obstacles make the algorithm much more challenging.\nIn particular, resolving the ownership of zones that do not contain any nodes is complicated.\nEqually complicated is the case where the zone of a node is larger than its communication radius, so that the node cannot determine the boundaries of its zone by local communication alone.\nOur distributed zone building algorithm defers the resolution of such zones until either a query is initiated or an event is inserted.\nThe basic idea behind our algorithm is that each node tentatively builds up an idea of the zone it resides in just by communicating with its neighbors (remembering which boundaries of the zone are \"undecided\" because there is no radio neighbor that can help resolve that boundary).\nThese undecided boundaries are later resolved by a GPSR perimeter traversal when data messages are actually routed.\nWe now describe the algorithm, and illustrate it using 
examples.\nFigure 1: A network, where circles represent sensor nodes and dashed lines mark the network boundary.\nFigure 2: The zone code and boundaries.\nFigure 3: The Corresponding Zone Tree.\nIn our algorithm, each node uses an array bound [0..3] to maintain the four boundaries of the zone it owns (remember that in this algorithm, the node only tries to determine the zone it resides in, not the other zones it might own, because those zones are devoid of nodes).\nWhen a node starts up, it initializes this array to the network boundary, i.e., initially each node assumes its zone contains the whole network.\nThe zone boundary algorithm then relies upon GPSR's beacon messages to learn the locations of neighbors within radio range.\nUpon hearing of such a neighbor, the node calls the algorithm in Figure 4 to update its zone boundaries and its code accordingly.\nIn this algorithm, we assume that A is the node at which the algorithm is executed, ZA is its zone, and a is a newly discovered neighbor of A. 
(Procedure CONTAIN (ZA, a) is used to decide if node a is located within the current zone boundaries of node A.)\nUsing this algorithm, each node can independently and asynchronously decide its own tentative zone based on the locations of its neighbors.\nFigure 2 illustrates the results of applying this algorithm to the network in Figure 1.\nFigure 3 shows the corresponding zone tree.\nEach zone resides at a leaf of this tree, and the code of a zone is the path from the root to that leaf if we represent the branch to the left child by 0 and the branch to the right child by 1.\nFigure 4: Zone Boundary Determination, where A.x and A.y represent the geographic coordinates of node A.\nFigure 5: Inserting an event in a DIM.\nProcedure CLOSER (A, B, m) returns true if code (A) is closer to code (m) than code (B) is.\nsource (m) is used to set the source address of message m.\nThis binary tree forms the index that we will use in the following event and query processing procedures.\nWe see that the zone sizes differ, depending on local node densities, and so do the lengths of the zone codes of different nodes.\nNotice that in Figure 2, there is an empty zone whose code should be 110.\nIn this case, if the node in zone 1111 can only hear the node in zone 1110, it sets its boundary with the empty zone to undecided, because it has not heard from any neighboring node in that direction.\nAs we have mentioned before, the undecided boundaries are resolved using GPSR's perimeter mode when an event is inserted or a query is sent.\nWe describe event insertion next.\nFinally, this description does not cover how a node's zone code is adjusted when neighboring nodes fail or new nodes arrive.\nWe return to this in Section 3.5.\n3.3 Inserting an Event\nIn this section, we describe how events are inserted into a DIM.\nThere are two algorithms of interest: a consistent hashing technique for mapping an event to a zone, and a routing algorithm for storing the event at the 
appropriate zone.\nAs we shall see, these two algorithms are inter-related.\n3.3.1 Hashing an Event to a Zone\nIn Section 3.1, we described a recursive tessellation of the geographic extent of a sensor field.\nWe now describe a consistent hashing scheme for a DIM that supports range queries on m distinct attributes.2 Let us denote these attributes A1...Am.\nFor simplicity, assume for now that the depth of every zone in the network is k, that k is a multiple of m, and that this value of k is known to every node.\nWe will relax this assumption shortly.\nFurthermore, for ease of discussion, we assume that all attribute values have been normalized to be between 0 and 1.\nOur hashing scheme assigns a k-bit zone code to an event as follows.\nFor i between 1 and m, if Ai < 0.5, the i-th bit of the zone code is assigned 0, else 1.\nFor i between m + 1 and 2m, if Ai−m < 0.25 or Ai−m ∈ [0.5, 0.75), the i-th bit of the zone code is assigned 0, else 1, because the next-level divisions are at 0.25 and 0.75, which divide the ranges into [0, 0.25), [0.25, 0.5), [0.5, 0.75), and [0.75, 1).\nWe repeat this procedure until all k bits have been assigned.\nAs an example, consider event E = (0.3, 0.8).\nFor this event, the 5-bit zone code is code (E) = 01110.\nEssentially, our hashing scheme uses the values of the attributes in round-robin fashion on the zone tree (such as the one in Figure 3), in order to map an m-attribute event to a zone code.\nThis is reminiscent of k-d trees [2], but is quite different from that data structure: zone trees are spatial embeddings and do not incorporate the re-balancing algorithms of k-d trees.\nIn our design of DIMs, we do not require nodes to have zone codes of the same length, nor do we expect a node to know the zone codes of other nodes.\nRather, suppose the encoding node is A and its own zone code is of length kA.\nThen, given an event E, node A only hashes E to a zone code of length kA.\nWe denote the zone code assigned to an event E by code (E).\nAs 
we describe below, as the event is routed, code (E) is refined by intermediate nodes.\nThis lazy evaluation of zone codes allows different nodes to use different-length zone codes without any explicit coordination.\n3.3.2 Routing an Event to its Owner\nThe aim of hashing an event to a zone code is to store the event at the network node that owns that zone.\nWe call this node the owner of the event.\nConsider an event E that has just been generated at a node A.\nAfter encoding event E, node A compares code (E) with code (A).\nIf the two are identical, node A stores event E locally; otherwise, node A will attempt to route the event to its owner.\nTo do this, note that code (E) corresponds to some zone Z', which is A's current guess for the zone at which event E should be stored.\nA now invokes GPSR to send a message to addr (Z') (the centroid of Z', Section 3.1).\nThe message contains the event E, code (E), and the target geographic location for storing the event.\nIn the message, A also marks itself as the owner of event E.\nAs we will see later, the guessed zone Z', the address addr (Z'), and the owner of E, all contained in the message, will be refined by intermediate forwarding nodes.\nGPSR now delivers this message to the next hop towards addr (Z') from A.\nThis next-hop node (call it B) does not immediately forward the message.\nRather, it attempts to compute a new zone code for E, codenew (E).\nB will update the code contained in the message (and also the geographic destination of the message) if codenew (E) is longer than the event code in the message.\n2DIM does not assume that all nodes are homogeneous in terms of the sensors they have.\nThus, in an m-dimensional DIM, a node that does not possess all m sensors can use NULL values for the corresponding readings.\nDIM treats NULL as an extreme value for range comparisons.\nAs an aside, a network may have many DIM instances running concurrently.\nIn this manner, as the 
event wends its way to its owner, its zone code gets refined.\nNow, B compares its own code code (B) against the owner code owner (E) contained in the incoming message.\nIf code (B) has a longer match with code (E) than the current owner owner (E) does, then B sets itself to be the current owner of E, meaning that if no other node turns out to be eligible to store E, then B will store the event (we shall see how this happens below).\nIf B's zone code does not exactly match code (E), B will invoke GPSR to deliver E to the next hop.\nFigure 6: Nodes A and B have claimed the same zone.\nSuppose that some node, say C, finds itself to be the destination (or eventual owner) of an event E.\nIt does so by noticing that code (C) equals code (E) after locally recomputing a code for E.\nIn that case, C stores E locally, but only if all four of C's zone boundaries are decided.\nWhen this condition holds, C knows for sure that no other node has a zone overlapping with its own.\nIn this case, we call C an internal node.\nRecall, though, that because the zone discovery algorithm (Section 3.2) only uses information from immediate neighbors, one or more of C's boundaries may be undecided.\nIf so, C assumes that some other node has a zone that overlaps with its own, and sets out to resolve this overlap.\nTo do this, C now sets itself to be the owner of E and continues forwarding the message.\nHere we rely on GPSR's perimeter-mode routing to probe around the void that causes the undecided boundary.\nSince the message starts from C and is destined for a geographic location near C, GPSR guarantees that the message will be delivered back to C if no other node updates the information in the message.\nIf the message comes back to C with C still marked as the owner, C infers that it must be the true owner of the zone and stores E locally.\nIf this does not happen, there are two possibilities.\nThe first is that as the event traverses the perimeter, some intermediate node, say B, whose zone overlaps with C's, marks 
itself to be the owner of the event, but otherwise does not change the event's zone code.\nThis node also recognizes that its own zone overlaps with C's, and initiates a message exchange that causes each of them to appropriately shrink its zone.\nFigures 6 through 8 show an example of this data-driven zone shrinking.\nInitially, both node A and node B have claimed the same zone 0 because they are out of radio range of each other.\nSuppose that A inserts an event E = (0.4, 0.8, 0.9).\nA encodes E to 0 and claims itself to be the owner of E.\nSince A is not an internal node, it sends out E, looking for other owner candidates for E.\nOnce E gets to node B, B will see in the message's owner field A's code, which is the same as its own.\nB then shrinks its zone from 0 to 01 according to A's location, which is also recorded in the message, and sends a shrink request to A. Upon receiving this request, A also shrinks its zone from 0 to 00.\nA second possibility is that some intermediate node changes the destination code of E to a more specific value (i.e., a longer zone code).\nLet us label this node D. 
D now tries to initiate delivery to the centroid of the new zone.\nFigure 7: An event\/query message (filled arrows) triggers zone shrinking (hollow arrows).\nFigure 8: The zone layout after shrinking.\nNow node A and B have been mapped to different zones.\nThis might result in a new perimeter walk that returns to D (if, for example, D happens to be geographically closest to the centroid of the zone).\nHowever, D would not be the owner of the event; that would still be C.\nIn routing to the centroid of this zone, the message may traverse the perimeter and return to D.\nNow D notices that C was the original owner, so it encapsulates the event and directs it to C.\nIn case there indeed is another node, say X, that owns a zone overlapping with C's, X will notice this fact by finding in the message the same prefix as the code of one of its zones, but with a geographic location different from its own.\nX will shrink its zone to resolve the overlap.\nIf X's zone is smaller than or equal to C's zone, X will also send a \"shrink\" request to C.\nOnce C receives a shrink request, it will reduce its zone appropriately and fix its \"undecided\" boundary.\nIn this manner, the zone formation process is resolved on demand in a data-driven way.\nThere are several interesting effects with respect to perimeter walking that arise in our algorithm.\nThe first is that there are some cases where an event insertion might cause the entire outer perimeter of the network to be traversed3.\nFigure 6 also serves as an example in which the outer perimeter is traversed.\nEvent E inserted by A will eventually be stored in node B. 
Before node B stores event E, if B's nominal radio range does not intersect the network boundary, it needs to send out E again as A did, because B in this case is not an internal node.\nBut if B's nominal radio range intersects the network boundary, it then has two choices.\nIt can assume that there will not be any nodes outside the network boundary, and hence that B is an internal node.\nThis is an aggressive approach.\nOn the other hand, B can also make the conservative decision that there might be some other nodes it has not heard of yet.\nB will then force the message to walk another perimeter before storing it.\nIn some situations, especially for large zones where the node that owns a zone is far away from the centroid of the owned zone, there might exist a small perimeter around the destination that does not include the owner of the zone.\nThe event would then end up being stored at a different node than the real owner.\nIn order to deal with this problem, we add an extra operation to event forwarding, called efficient neighbor discovery.\nBefore invoking GPSR, a node needs to check whether there exists a neighbor who is eligible to be the real owner of the event.\nTo do this, a node C, say, needs to know the zone codes of its neighboring nodes.\nWe use GPSR's beacon messages to piggyback nodes' zone codes.\nSo by simply comparing the event's code and a neighbor's code, a node can decide whether there exists a neighbor Y that is more likely to be the owner of event E. 
C delivers E to Y, which simply follows the decision-making procedure discussed above.\n3.3.4 Summary and Pseudo-code\nIn summary, our event insertion procedure is designed to interact nicely with the zone discovery mechanism and the event hashing mechanism.\nThe latter two mechanisms are kept simple, while the event insertion mechanism uses lazy evaluation at each hop to refine the event's zone code, and leverages GPSR's perimeter walking mechanism to fix undecided zone boundaries.\nIn Section 3.5, we address the robustness of event insertion to packet loss and to node failures.\nFigure 5 shows the pseudo-code for inserting and forwarding an event e.\nIn this pseudo-code, we have omitted a description of the zone shrinking procedure.\nIn the pseudo-code, procedure IS INTERNAL () is used to determine if the caller is an internal node, and procedure IS OWNER () is used to determine if the caller is more eligible to be the owner of the event than the currently claimed owner recorded in the message.\nProcedure SEND-MESSAGE is used to send either an event message or a query message.\nIf the message's destination address has been changed, the packet's source address must also be changed in order to avoid being dropped by GPSR, since GPSR does not allow a node to see the same packet in greedy mode twice.\n3This happens less frequently than for GHTs, where inserting an event to a location outside the actual (but inside the nominal) boundary of the network will always invoke an external perimeter walk.\n3.4 Resolving and Routing Queries\nDIMs support both point queries4 and range queries.\nRouting a point query is identical to routing an event.\nThus, the rest of this section details how range queries are routed.\nThe key challenge in routing range queries is brought out by the following strawman design.\nIf the entire network were divided evenly into zones of depth k (for some pre-defined constant k), then the querier (the node issuing the query) could subdivide a given 
range query into the relevant subzones and route individual requests to each of the zones.\nThis can be inefficient for large range queries, and is also hard to implement in our design, where zone sizes are not predefined.\nAccordingly, we use a slightly different technique, in which a range query is initially routed to a zone corresponding to the entire range and is then progressively split into smaller subqueries.\nWe describe this algorithm here.\nThe first step of the algorithm is to map a range query to a zone code prefix.\nConceptually, this is easy; in a zone tree (Figure 3), there exists some tree node which contains the entire range query in its sub-tree, while none of its children do.\nThe initial zone code we choose for the query is the zone code corresponding to that tree node, and it is a prefix of the zone codes of all zones (note that these zones may not be geographically contiguous) in the subtree.\nThe querier computes the zone code of Q, denoted by code (Q), and then starts routing the query to addr (code (Q)).\nUpon receiving a range query Q, a node A (where A is any node on the query propagation path) divides it into multiple smaller subqueries if there is an overlap between the zone of A, zone (A), and the zone code associated with Q, code (Q).\nOur approach to splitting a query Q into subqueries is as follows.\nIf the range of Q's first attribute contains the value 0.5, A divides Q into two sub-queries, one of whose first attribute ranges from 0 to 0.5, and the other from 0.5 to 1.\nThen A determines the half that overlaps with its own zone.\nLet us call it QA.\nIf QA does not exist, then A stops splitting; otherwise, it continues splitting (using the second attribute range) and recomputing QA until QA is small enough that it falls completely into zone (A), so that A can now resolve it.\nFor example, suppose that node A, whose code is 0110, is to split a range query Q = (0.3 - 0.8, 0.6 - 0.9).\nThe splitting steps are shown in Figure 2.\nAfter 
splitting, we obtain three smaller queries q0 = (0.3 - 0.5, 0.6 - 0.75), q1 = (0.3 - 0.5, 0.75 - 0.9), and q2 = (0.5 - 0.8, 0.6 - 0.9).\nThis splitting procedure is illustrated in Figure 9, which also shows the codes of each subquery after splitting.\nA then replies to subquery q0 with data stored locally, and sends subqueries q1 and q2 using the procedure outlined above.\nMore generally, if node A finds itself to be inside the zone subtree that maximally covers Q, it will send the subqueries that resulted from the split.\nOtherwise, if there is no overlap between A and Q, then A forwards Q as is (in this case Q is either the original query, or the product of an earlier split).\nFigure 10 describes the pseudo-code for the zone splitting algorithm.\nAs shown in the above algorithm, once a subquery has been recognized as belonging to the caller's zone, procedure RESOLVE is invoked to resolve the subquery and send a reply to the querier.\nEvery query message contains the geographic location of its initiator, so the corresponding reply message can be delivered directly back to the initiator.\nFinally, in the process of query resolution, zones might shrink, similar to the shrinking that occurs during event insertion.\nWe omit this from the pseudo-code.\n3.5 Robustness\nUntil now, we have not discussed the impact of node failures and packet losses, or of node arrivals and departures, on our algorithms.\nPacket losses can affect query and event insertion, and node failures can result in lost data, while node arrivals and departures can impact the zone structure.\nWe now discuss how DIMs can be made robust to these kinds of dynamics.\n3.5.1 Maintaining Zones\nIn previous sections, we described how the zone discovery algorithm could leave zone boundaries undecided.\nThese undecided boundaries are resolved during insertion or querying, using the zone shrinking procedure described above.\nWhen a new node joins the network, the zone discovery mechanism (Section 3.2) will cause neighboring zones to 
appropriately adjust their zone boundaries.\nAt this time, the owners of those zones can also transfer to the new node any events they store that should now belong to it.\nBefore a node turns itself off (if this is indeed possible), it knows that the node owning its backup zone (Section 3.1) will take over its zone, and it simply sends all its events to that node.\nNode deletion may also cause zone expansion.\nIn order to preserve the mapping between the leaves of the binary zone tree and zones, we allow zone expansion to occur only among sibling zones (Section 3.1).\nThe rule is: if zone (A)'s sibling zone becomes empty, then A can expand its own zone to include its sibling zone.\nNow, we turn our attention to node failures.\nNode failures are just like node deletions, except that a failed node does not have a chance to move its events to another node.\nBut how does a node decide if its sibling has failed?\nIf the sibling is within radio range, the absence of GPSR beacon messages reveals this.\nOnce it detects the failure, the node can expand its zone.\nA different approach is needed for detecting siblings that are not within radio range.\nThese are the cases where two nodes own their zones after exchanging a shrink message; they do not periodically exchange messages thereafter to maintain this zone relationship.\nIn this case, we detect the failure in a data-driven fashion, with obvious efficiency benefits compared to periodic keepalives.\nOnce a node B has failed, an event or query message that previously would have been owned by the failed node will now be delivered to the node A that owns the empty zone left by node B.\nA can see this message because A sits right around the empty area left by B and is guaranteed to be visited in a GPSR perimeter traversal.\nA will set itself to be the owner of the message, and any node which would have dropped this message due to a perimeter loop will redirect the message to A instead.\nIf A's zone happens to be the sibling of B's zone, A 
can safely expand its own zone and announce the expanded zone to its neighbors via GPSR beacon messages.\n3.5.2 Preventing Data Loss from Node Failure\nThe algorithms described above are robust in terms of zone formation, but node failure can erase data.\nTo avoid this, DIMs can employ two kinds of replication: local replication, to be resilient to random node failures, and mirror replication, for resilience to the concurrent failure of geographically contiguous nodes.\nMirror replication is conceptually easy.\nSuppose an event E has a zone code code (E).\nThen, the node that inserts E would store two copies of E: one at the zone denoted by code (E), and the other at the zone corresponding to the one's complement of code (E).\nThis technique essentially creates a mirror DIM.\nA querier would need to query both the original DIM and its mirror in parallel, since there is no way of knowing whether a collection of nodes has failed.\nClearly, the trade-off here is an approximate doubling of both insertion and query costs.\nThere exists a far cheaper technique to ensure resilience to random node failures.\nOur local replication technique rests on the observation that, for each node A, there exists a unique node which will take over its zone when A fails.\nThis is the node responsible for the backup zone of A's zone (see Section 3.1).\nThe basic idea is that A replicates each data item it has at this node.\nWe call this node A's local replica.\nLet A's local replica be B. 
Often B will be a radio neighbor of A and can be detected from GPSR beacons.\nSometimes, however, this is not the case, and B will have to be explicitly discovered.\nWe use an explicit message for discovering the local replica.\nDiscovering the local replica is data-driven, and uses a mechanism similar to that of event insertion.\nNode A sends a message whose geographic destination is a random nearby location chosen by A.\nThe location is close enough to A that GPSR will guarantee that the message is delivered back to A.\nIn addition, the message has three fields: one for the zone code of A, code (A); one for the owner owner (A) of zone (A), which is initially set to be empty; and one for the geographic location of owner (A).\nThe packet is then delivered in GPSR perimeter mode.\nEach node that receives this message will compare its zone code with code (A) in the message, and if it is more eligible to be the owner of zone (A) than the current owner (A) recorded in the message, it will update the field owner (A) and the corresponding geographic location.\nOnce the packet comes back to A, A will know the location of its local replica and can start to send replicas.\nIn a dense sensor network, the local replica of a node is usually very near to the node, either a direct neighbor or 1-2 hops away, so the cost of sending replicas to the local replica will not dominate the network traffic.\nHowever, a node's local replica may itself fail.\nThere are two ways to deal with this situation: periodic refreshes, or repeated data-driven discovery of local replicas.\nThe former has higher overhead, but discovers failed replicas more quickly.\n3.5.3 Robustness to Packet Loss\nFinally, the mechanisms for querying and event insertion can easily be made resilient to packet loss.\nFor event insertion, a simple ACK scheme suffices.\nOf course, queries and responses can be lost as well.\nIn this case, there exists an efficient approach for error recovery.\nThis rests on the 
observation that the querier knows which zones fall within its query and should have responded (we assume that a node that has no data matching a query, but whose zone falls within the query, responds with a negative acknowledgment).\nAfter a conservative timeout, the querier can re-issue the queries selectively to these zones.\nFigure 9: An example of range query splitting\nFigure 10: Query resolving algorithm\nIf DIM cannot get any answers (positive or negative) from certain zones after repeated timeouts, it can at least return the partial query results to the application, together with information about the zones from which data is missing.\n4.\nDIMS: AN ANALYSIS\nIn this section, we present a simple analytic performance evaluation of DIMs, and compare their performance against other possible approaches for implementing multi-dimensional range queries in sensor networks.\nIn the next section, we validate these analyses using detailed packet-level simulations.\nOur primary metrics for the performance of a DIM are: Average Insertion Cost, which measures the average number of messages required to insert an event into the network; and Average Query Delivery Cost, which measures the average number of messages required to route a query message to all the relevant nodes in the network.\nThe query delivery cost does not include the messages required to transmit responses to the querier; that number depends upon the precise data distribution and is the same for many of the schemes we compare DIMs against.\nIn DIMs, event insertion essentially uses geographic routing.\nIn a dense N-node network where the likelihood of traversing perimeters is small, the average event insertion cost is proportional to √N [23].\nOn the other hand, the query delivery cost depends upon the size of the ranges specified in the query.\nRecall that our query delivery mechanism is careful about splitting a query into sub-queries, doing so only when the query nears the zone that covers the query range.\nThus, when 
the querier is far from the queried zone, there are two components to the query delivery cost.\nThe first, which is proportional to √N, is the cost to deliver the query near the covering zone.\nIf, within this covering zone, there are M nodes, the message delivery cost of splitting the query is proportional to M.\nThe average cost of query delivery depends upon the distribution of query range sizes.\nNow, suppose that query sizes follow some density function f (x); then the average cost of resolving a query can be approximated by ∫_1^N x f (x) dx.\nTo give some intuition for the performance of DIMs, we consider four different forms for f (x): the uniform distribution, where a query range encompassing the entire network is as likely as a point query; a bounded uniform distribution, where all sizes up to a bound B are equally likely; an algebraic distribution, in which most queries are small, but large queries are somewhat likely; and an exponential distribution, where most queries are small and large queries are unlikely.\nIn all our analyses, we make the simplifying assumption that the size of a query is proportional to the number of nodes that can answer that query.\nFor the uniform distribution, f (x) ∝ c for some constant c.\nIf each query size from 1...N is equally likely, the average query delivery cost of uniformly distributed queries is O (N).\nThus, for uniformly distributed queries, the performance of DIMs is comparable to that of flooding.\nHowever, for the applications we envision, where nodes within the network are trying to correlate events, the uniform distribution is highly unrealistic.\nSomewhat more realistic is a situation where all query sizes are bounded by a constant B.\nIn this case, the average cost of resolving such a query is approximately ∫_1^B x f (x) dx = O (B).\nRecall now that all queries have to pay an approximate cost of O (√N) to deliver the query near the covering zone.\nThus, if DIM limited queries to a size proportional to √N, the average query cost would be O (√N).\nThe algebraic distribution, where f (x) ∝ x^(−k) for some constant k between 1 and 2, has an average query resolution cost given by ∫_1^N x f (x) dx = O (N^(2−k)).\nIn this case, if k > 1.5, the average cost of query delivery is dominated by the cost to deliver the query near the covering zone, O (√N).\nFinally, for the exponential distribution, f (x) = c e^(−cx) for some constant c, and the average cost is just the mean of the corresponding distribution, i.e., O (1) for large N. Asymptotically, then, the cost of the query for the exponential distribution is dominated by the cost to deliver the query near the covering zone, O (√N).\nThus, we see that if queries follow either the bounded uniform distribution, the algebraic distribution, or the exponential distribution, the query cost scales as the insertion cost (for appropriate choices of constants for the bounded uniform and algebraic distributions).\nHow well does the performance of DIMs compare against alternative choices for implementing multi-dimensional queries?\nA simple alternative is called external storage [23], where all events are stored centrally in a node outside the sensor network.\nThis scheme incurs an insertion cost of O (√N), and a zero query cost.\nHowever, as [23] points out, such systems may be impractical in sensor networks since the access link to the external node becomes a hotspot.\nA second alternative implementation would store events at the node where they are generated.\nQueries are flooded throughout the network, and nodes that have matching data respond.\nExamples of systems that can be used for this (although, to our knowledge, these systems do not implement multi-dimensional range queries) are Directed Diffusion [15] and TinyDB [17].\nThe flooding scheme incurs a zero insertion cost, but an O (N) query cost.\nIt is easy to show that DIMs outperform flooding as long as the ratio of the number 
\\ \/ of insertions to the number of queries is less than N.\nA final alternative would be to use a geographic hash table (GHT [20]).\nIn this approach, attribute values are assumed to be integers (this is actually quite a reasonable assumption since attribute values are often quantized), and events are hashed on some (say, the first) attribute.\nA range query is sub-divided into several sub-queries, one for each integer in the range of the first attribute.\nEach sub-query is then hashed to the appropriate location.\nThe nodes that receive a sub-query only return events that match all other attribute ranges.\nIn this approach, which we call GHT-R (GHT's for \\ \/ range queries) the insertion cost is O (N).\nSuppose that the range of the first attribute contains r discrete values.\n\\ \/ Then the cost to deliver queries is O (r N).\nThus, asymptotically, GHT-R's perform similarly to DIMs.\nIn practice, however, the proportionality constants are significantly different, and DIMs outperform GHT-Rs, as we shall show using detailed simulations.\n5.\nDIMS: SIMULATION RESULTS\nOur analysis gives us some insight into the asymptotic behavior of various approaches for multi-dimensional range queries.\nIn this section, we use simulation to compare DIMs against flooding and GHT-R; this comparison gives us a more detailed understanding of these approaches for moderate size networks, and gives us a nuanced view of the mechanistic differences between some of these approaches.\n5.1 Simulation Methodology\nWe use ns-2 for our simulations.\nSince DIMs are implemented on top of GPSR, we first ported an earlier GPSR implementation to the latest version of ns-2.\nWe modified the GPSR module to call our DIM implementation when it receives any data message in transit or when it is about to drop a message because that message traversed the entire perimeter.\nThis allows a DIM to modify message zone codes in flight (Section 3), and determine the actual owner of an event or query.\nIn 
addition to this, we implemented in ns-2 most of the DIM mechanisms described in Section 3. Of those mechanisms, the only one we did not implement is mirror replication. We have implemented selective query retransmission for resiliency to packet loss, but have left the evaluation of this mechanism to future work. Our DIM implementation in ns-2 is 2800 lines of code. Finally, we implemented GHT-R, our GHT-based multi-dimensional range query mechanism, in ns-2. This implementation was relatively straightforward, given that we had ported GPSR and modified it to detect the completion of perimeter-mode traversals. Using this implementation, we conducted a fairly extensive evaluation of DIM and two alternatives (flooding, and our GHT-R). For all our experiments, we use uniformly placed sensor nodes, with network sizes ranging from 50 nodes to 300 nodes. Each node has a radio range of 40m. For the results presented here, each node has on average 20 nodes within its nominal radio range. We have conducted experiments at other node densities; they are in agreement with the results presented here. In all our experiments, each node first generates 3 events⁵ on average (more precisely, for a topology of size N, we have 3N events, and each node is equally likely to generate an event). We have conducted experiments for three different event value distributions. Our uniform event distribution generates 2-dimensional events and, for each dimension, every attribute value is equally likely. Our normal event distribution generates 2-dimensional events and, for each dimension, the attribute value is normally distributed with a mean corresponding to the mid-point of the attribute value range. The normal event distribution represents a skewed data set. Finally, our trace event distribution is a collection of 4-dimensional events obtained from a habitat monitoring network. As we shall see, this represents a fairly skewed data set. Having generated events, for each
simulation we generate queries such that, on average, each node generates 2 queries. The query sizes are determined using the four size distributions we discussed in Section 4: uniform, bounded-uniform, algebraic, and exponential. Once a query size has been determined, the location of the query (i.e., the actual boundaries of the zone) is uniformly distributed. For our GHT-R experiments, the dynamic range of the attributes had 100 discrete values, but we restricted the query range for any one attribute to 50 discrete values to allow those simulations to complete in reasonable time. Finally, using one set of simulations, we evaluate the efficacy of local replication by turning off random fractions of nodes and measuring the fidelity of the returned results. The primary metrics for our simulations are the average query and insertion costs, as defined in Section 4.

5.2 Results

Although we have examined almost all the combinations of factors described above, we discuss only the most salient ones here, for lack of space. Figure 11 plots the average insertion costs for DIM and GHT-R (for flooding, of course, the insertion costs are zero). DIM incurs less per-event overhead in inserting events (regardless of the actual event distribution; Figure 11 shows the cost for uniformly distributed events). The reason for this is interesting. In GHT-R, storing almost every event incurs a perimeter traversal, and storing some events requires traversing the outer perimeter of the network [20]. By contrast, in DIM, storing an event incurs a perimeter traversal only when a node's boundaries are undecided. Furthermore, an insertion or a query in a DIM can traverse the outer perimeter (Section 3.3), but less frequently than in GHTs. Figure 13 plots the average query cost for a bounded uniform query size distribution. For this graph (and the next) we use a uniform event distribution, since the event distribution does not affect the query delivery cost. For this simulation,
our bound was 1/4th the size of the largest possible query (e.g., a query of the form (0−0.5, 0−0.5)).

[Footnote 5: Our metrics are chosen so that the exact number of events and queries is unimportant for our discussion. Of course, the overall performance of the system will depend on the relative frequency of events and queries, as we discuss in Section 4. Since we don't have realistic ratios for these, we focus on the microscopic costs, rather than on the overall system costs.]

Figure 11: Average insertion cost for DIM and GHT.
Figure 12: Local replication performance.

Even for this generous query size, DIMs perform quite well (almost a third the cost of flooding). Notice, however, that GHT-Rs incur high query cost, since almost any query requires as many sub-queries as the width of the first attribute's range. Figure 14 plots the average query cost for the exponential distribution (the average query size for this distribution was set to be 1/16th the largest possible query). The superior scaling of DIMs is evident in these graphs. Clearly, this is the regime in which one might expect DIMs to perform best, when most of the queries are small and large queries are relatively rare. This is also the regime in which one would expect to use multi-dimensional range queries: to perform relatively tight correlations. As with the bounded uniform distribution, GHT query cost is dominated by the cost of sending sub-queries; for DIMs, the query splitting strategy works quite well in keeping overall query delivery costs low. Figure 12 describes the efficacy of local replication. To obtain this figure, we conducted the following experiment. On a 100-node network, we inserted a number of events uniformly distributed throughout the network, then issued a query covering the entire network and recorded the answers. Knowing the expected answers for this query, we then successively removed a fraction f of nodes randomly, and re-issued the same query. The figure plots the
fraction of expected responses actually received, with and without replication. As the graph shows, local replication performs well for random failures, returning almost 90% of the responses when up to 30% of the nodes have failed simultaneously⁶. In the absence of local replication, of course, when 30% of the nodes fail, the response rate is only 70%, as one would expect.

Figure 13: Average query cost with a bounded uniform query distribution.
Figure 14: Average query cost with an exponential query distribution.

We note that DIMs (as currently designed) are not perfect. When the data is highly skewed (as it was for our trace data set from the habitat monitoring application, where most of the event values fell within 10% of the attribute's range), a few DIM nodes will clearly become the bottleneck. This is depicted in Figure 15, which shows that for DIMs and GHT-Rs, the maximum number of transmissions at any network node (the hotspots) is rather high. (For less skewed data distributions, and reasonable query size distributions, the hotspot curves for all three schemes are comparable.) This is a standard problem that database indices have dealt with by tree re-balancing. In our case, simpler solutions might be possible (and we discuss this in Section 7). However, our use of the trace data demonstrates that DIMs work for events which have more than two dimensions. Increasing the number of dimensions does not noticeably degrade DIM query cost (omitted for lack of space). Also omitted are experiments examining the impact of several other factors, as they do not affect our conclusions in any way. As we expected, DIMs are comparable in performance to flooding when all sizes of queries are equally likely. For an algebraic distribution of query sizes, the relative performance is close to that for the exponential distribution. For normally distributed events, the insertion costs are comparable to that for the uniform distribution.

[Footnote 6: ... be much better than this. Assuming a node and its replica don't simultaneously fail often, a node will almost always detect a replica failure and re-replicate, leading to near 100% response rates.]

Figure 15: Hotspot usage.
Figure 16: Software architecture of DIM over GPSR.
Figure 17: Number of events received for different query sizes.
Figure 18: Query distribution cost.

Finally, we note that in all our evaluations we have only used list queries (those that request all events matching the specified range). We expect that for summary queries (those that expect an aggregate over matching events), the overall cost of DIMs could be lower, because the matching data are likely to be found in one or a small number of zones. We leave an understanding of this to future work. Also left to future work is a detailed understanding of the impact of location error on DIM's mechanisms. Recent work [22] has examined the impact of imprecise location information on other data-centric storage mechanisms such as GHTs, and found that there exist relatively simple fixes to GPSR that ameliorate the effects of location error.

6. IMPLEMENTATION

We have implemented DIMs on a Linux platform suitable for experimentation on PDAs and PC-104 class machines. To implement DIMs, we had to develop and test an independent implementation of GPSR. Our GPSR implementation is full-featured, while our DIM implementation has most of the algorithms discussed in Section 3; some of the robustness extensions have only simpler variants implemented. The software architecture of the DIM/GPSR system is shown in Figure 16. The entire system (about 5000 lines of code) is event-driven and multi-threaded. The DIM subsystem consists of six logical components: zone management, event maintenance, event routing, query routing, query processing, and GPSR interactions. The GPSR system is implemented as a user-level daemon process. Applications are executed as clients. For the DIM subsystem, the GPSR module provides several
extensions: it exports information about neighbors, and provides callbacks during packet forwarding and perimeter-mode termination. We tested our implementation on a testbed consisting of 8 PC-104 class machines. Each of these boxes runs Linux and uses a Mica mote (attached through a serial cable) for communication. These boxes are laid out in an office building with a total spatial separation of over a hundred feet. We manually measured the locations of these nodes relative to some coordinate system and configured the nodes with their locations. The network topology is approximately a chain. On this testbed, we inserted queries and events from a single designated node. Our events have two attributes which span all combinations of the four values [0, 0.25, 0.75, 1] (sixteen events in all). Our queries span four sizes, returning 1, 4, 9, and 16 events respectively. Figure 17 plots the number of events received for different-sized queries. It might appear that we received fewer events than expected, but this graph doesn't count the events that were already stored at the querier. With that adjustment, the number of responses matches our expectation. Finally, Figure 18 shows the total number of messages required for different query sizes on our testbed. While these experiments do not reveal as much about the performance range of DIMs as our simulations do, they nevertheless serve as proof-of-concept for DIMs. Our next step in the implementation is to port DIMs to the Mica motes, and integrate them into the TinyDB [17] sensor database engine on motes.

7. CONCLUSIONS

In this paper, we have discussed the design and evaluation of a distributed data structure called DIM for efficiently resolving multi-dimensional range queries in sensor networks. Our design of DIMs relies upon a novel locality-preserving hash inspired by early work in database indexing, and is built upon GPSR. We have a working prototype, both of GPSR and DIM, and plan to conduct larger
scale experiments in the future. There are several interesting future directions that we intend to pursue. One is adaptation to skewed data distributions, since these can cause storage and transmission hotspots. Unlike traditional database indices that re-balance trees upon data insertion, in sensor networks it might be feasible to re-structure the zones on a much larger timescale, after obtaining a rough global estimate of the data distribution. Another direction is support for node heterogeneity in the zone construction process; nodes with larger storage space assert larger-sized zones for themselves. A third is support for efficient resolution of existential queries: whether there exists an event matching a multi-dimensional range.

Scouts, Promoters, and Connectors: The Roles of Ratings in Nearest Neighbor Collaborative Filtering

Bharath Kumar Mohan, Dept. of CSA, Indian Institute of Science, Bangalore 560 012, India (mbk@csa.iisc.ernet.in); Benjamin J. Keller, Dept. of Computer Science, Eastern Michigan University, Ypsilanti, MI 48917, USA (bkeller@emich.edu); Naren Ramakrishnan, Dept.
of Computer Science, Virginia Tech, Blacksburg, VA 24061, USA (naren@cs.vt.edu).

ABSTRACT

Recommender systems aggregate individual user ratings into predictions of products or services that might interest visitors. The quality of this aggregation process crucially affects the user experience and hence the effectiveness of recommenders in e-commerce. We present a novel study that disaggregates global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearest-neighbor collaborative filtering. In particular, we formulate three roles (scouts, promoters, and connectors) that capture how users receive recommendations, how items get recommended, and how ratings of these two types are themselves connected (resp.). These roles find direct uses in improving recommendations for users, in better targeting of items and, most importantly, in helping monitor the health of the system as a whole. For instance, they can be used to track the evolution of neighborhoods, to identify rating subspaces that do not contribute (or contribute negatively) to system performance, to enumerate users who are in danger of leaving, and to assess the susceptibility of the system to attacks such as shilling. We argue that the three rating roles presented here provide broad primitives to manage a recommender system and its community.

Categories and Subject Descriptors: H.4.2 [Information Systems Applications]: Types of Systems - Decision support; J.4 [Computer Applications]: Social and Behavioral Sciences

General Terms: Algorithms, Human Factors

1. INTRODUCTION

Recommender systems have become integral to e-commerce, providing technology that suggests products to a visitor based on previous purchases or rating history. Collaborative filtering, a common form of recommendation, predicts a user's rating for an item by combining (other) ratings of that user with other users' ratings. Significant research has
been conducted in implementing fast and accurate collaborative filtering algorithms [2, 7], designing interfaces for presenting recommendations to users [1], and studying the robustness of these algorithms [8]. However, with the exception of a few studies on the influence of users [10], little attention has been paid to unraveling the inner workings of a recommender in terms of the individual ratings and the roles they play in making (good) recommendations. Such an understanding will give an important handle to monitoring and managing a recommender system, to engineer mechanisms to sustain the recommender, and thereby ensure its continued success. Our motivation here is to disaggregate global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearest-neighbor collaborative filtering. We identify three possible roles: (scouts) to connect the user into the system to receive recommendations, (promoters) to connect an item into the system to be recommended, and (connectors) to connect ratings of these two kinds. Viewing ratings in this way, we can define the contribution of a rating in each role, both in terms of allowing recommendations to occur, and in terms of influence on the quality of recommendations. In turn, this capability helps support scenarios such as:

1. Situating users in better neighborhoods: A user's ratings may inadvertently connect the user to a neighborhood for which the user's tastes may not be a perfect match. Identifying ratings responsible for such bad recommendations and suggesting new items to rate can help situate the user in a better neighborhood.

2. Targeting items: Recommender systems suffer from lack of user participation, especially in cold-start scenarios [13] involving newly arrived items. Identifying users who can be encouraged to rate specific items helps ensure coverage of the recommender system.

3. Monitoring the evolution
of the recommender system and its stakeholders: A recommender system is constantly under change: growing with new users and items, shrinking with users leaving the system, items becoming irrelevant, and parts of the system under attack. Tracking the roles of a rating and its evolution over time provides many insights into the health of the system, and how it could be managed and improved. These include being able to identify rating subspaces that do not contribute (or contribute negatively) to system performance, and could be removed; to enumerate users who are in danger of leaving, or have left the system; and to assess the susceptibility of the system to attacks such as shilling [5]. As we show, the characterization of rating roles presented here provides broad primitives to manage a recommender system and its community.

The rest of the paper is organized as follows. Background on nearest-neighbor collaborative filtering and algorithm evaluation is discussed in Section 2. Section 3 defines and discusses the roles of a rating, and Section 4 defines measures of the contribution of a rating in each of these roles. In Section 5, we illustrate the use of these roles to address the goals outlined above.

2. BACKGROUND

2.1 Algorithms

Nearest-neighbor collaborative filtering algorithms either use neighborhoods of users or neighborhoods of items to compute a prediction. An algorithm of the first kind is called user-based, and one of the second kind is called item-based [12]. In both families of algorithms, neighborhoods are formed by first computing the similarity between all pairs of users (for user-based) or items (for item-based). Predictions are then computed by aggregating ratings, which in a user-based algorithm involves aggregating the ratings of the target item by the user's neighbors and, in an item-based algorithm, involves aggregating the user's ratings of items that are neighbors of the target item. Algorithms within these families differ in
the definition of similarity, formation of neighborhoods, and the computation of predictions. We consider a user-based algorithm based on that defined for GroupLens [11] with variations from Herlocker et al. [2], and an item-based algorithm similar to that of Sarwar et al. [12]. The algorithm used by Resnick et al. [11] defines the similarity of two users u and v as the Pearson correlation of their common ratings:

sim(u, v) = \frac{\sum_{i \in I_u \cap I_v} (r_{u,i} - \bar{r}_u)(r_{v,i} - \bar{r}_v)}{\sqrt{\sum_{i \in I_u} (r_{u,i} - \bar{r}_u)^2} \sqrt{\sum_{i \in I_v} (r_{v,i} - \bar{r}_v)^2}},

where Iu is the set of items rated by user u, ru,i is user u's rating for item i, and r̄u is the average rating of user u (similarly for v). Similarity computed in this manner is typically scaled by a factor proportional to the number of common ratings, to reduce the chance of making a recommendation based on weak connections:

sim'(u, v) = \frac{\max(|I_u \cap I_v|, \gamma)}{\gamma} \cdot sim(u, v),

where γ ≈ 5 is a constant used as a lower limit in scaling [2]. These new similarities are then used to define a static neighborhood Nu for each user u consisting of the top K users most similar to user u. A prediction for user u and item i is computed by a weighted average of the ratings by the neighbors

p_{u,i} = \bar{r}_u + \frac{\sum_{v \in V} sim'(u, v)(r_{v,i} - \bar{r}_v)}{\sum_{v \in V} sim'(u, v)}    (1)

where V = Nu ∩ Ui is the set of users most similar to u who have rated i. The item-based algorithm we use is the one defined by Sarwar et al.
[12]. In this algorithm, similarity is defined as the adjusted cosine measure

sim(i, j) = \frac{\sum_{u \in U_i \cap U_j} (r_{u,i} - \bar{r}_u)(r_{u,j} - \bar{r}_u)}{\sqrt{\sum_{u \in U_i} (r_{u,i} - \bar{r}_u)^2} \sqrt{\sum_{u \in U_j} (r_{u,j} - \bar{r}_u)^2}}    (2)

where Ui is the set of users who have rated item i. As for the user-based algorithm, the similarity weights are adjusted proportionally to the number of users that have rated the items in common:

sim'(i, j) = \frac{\max(|U_i \cap U_j|, \gamma)}{\gamma} \cdot sim(i, j).    (3)

Given the similarities, the neighborhood Ni of an item i is defined as the top K most similar items for that item. A prediction for user u and item i is computed as the weighted average

p_{u,i} = \bar{r}_i + \frac{\sum_{j \in J} sim'(i, j)(r_{u,j} - \bar{r}_j)}{\sum_{j \in J} sim'(i, j)}    (4)

where J = Ni ∩ Iu is the set of items rated by u that are most similar to i.

2.2 Evaluation

Recommender algorithms have typically been evaluated using measures of predictive accuracy and coverage [3]. Studies on recommender algorithms, notably Herlocker et al. [2] and Sarwar et al.
[12], typically compute predictive accuracy by dividing a set of ratings into training and test sets, and compute the prediction for an item in the test set using the ratings in the training set. A standard measure of predictive accuracy is mean absolute error (MAE), which for a test set T = {(u, i)} is defined as

MAE = \frac{\sum_{(u,i) \in T} |p_{u,i} - r_{u,i}|}{|T|}.    (5)

Coverage has a number of definitions, but generally refers to the proportion of items that can be predicted by the algorithm [3]. A practical issue with predictive accuracy is that users typically are presented with recommendation lists, and not individual numeric predictions. Recommendation lists are lists of items in decreasing order of prediction (sometimes stated in terms of star-ratings), and so predictive accuracy may not be reflective of the accuracy of the list. So, instead we can measure recommendation or rank accuracy, which indicates the extent to which the list is in the correct order. Herlocker et al. [3] discuss a number of rank accuracy measures, which range from Kendall's Tau to measures that consider the fact that users tend to only look at a prefix of the list [5]. Kendall's Tau measures the number of inversions when comparing ordered pairs in the true user ordering of items and the recommended order, and is defined as

\tau = \frac{C - D}{\sqrt{(C + D + TR)(C + D + TP)}}    (6)

where C is the number of pairs that the system predicts in the correct order, D the number of pairs the system predicts in the wrong order, TR the number of pairs in the true ordering that have the same ratings, and TP the number of pairs in the predicted ordering that have the same ratings [3].

Figure 1: Ratings in simple movie recommender.

A shortcoming of the Tau metric is that it is oblivious to the position in the ordered list where the inversion occurs [3]. For instance, an inversion toward the end of the list is given the same weight as
one in the beginning. One solution is to consider inversions only in the top few items in the recommended list, or to weight inversions based on their position in the list.

3. ROLES OF A RATING

Our basic observation is that each rating plays a different role in each prediction in which it is used. Consider a simplified movie recommender system with three users Jim, Jeff, and Tom and their ratings for a few movies, as shown in Fig. 1. (For this initial discussion we will not consider the rating values involved.) The recommender predicts whether Tom will like The Mask using the other already available ratings. How this is done depends on the algorithm:

1. An item-based collaborative filtering algorithm constructs a neighborhood of movies around The Mask by using the ratings of users who rated The Mask and other movies similarly (e.g., Jim's ratings of The Matrix and The Mask; and Jeff's ratings of Star Wars and The Mask). Tom's ratings of those movies are then used to make a prediction for The Mask.

2. A user-based collaborative filtering algorithm would construct a neighborhood around Tom by tracking other users whose rating behaviors are similar to Tom's (e.g., Tom and Jeff have rated Star Wars; Tom and Jim have rated The Matrix). The prediction of Tom's rating for The Mask is then based on the ratings of Jeff and Jim.

Although the nearest-neighbor algorithms aggregate the ratings to form neighborhoods used to compute predictions, we can disaggregate the similarities to view the computation of a prediction as simultaneously following parallel paths of ratings. So, irrespective of the collaborative filtering algorithm used, we can visualize the prediction of Tom's rating of The Mask as walking through a sequence of ratings.

Figure 2: Ratings used to predict The Mask for Tom.

Figure 3: Prediction of The Mask for Tom in which a rating is used more than once.

In this example, two paths were used for this prediction, as depicted in Fig. 2: (p1, p2, p3) and (q1, q2, q3). Note that these paths are undirected, and are all of length 3. Only the order in which the ratings are traversed is different between the item-based algorithm (e.g., (p3, p2, p1), (q3, q2, q1)) and the user-based algorithm (e.g., (p1, p2, p3), (q1, q2, q3)). A rating can be part of many paths for a single prediction, as shown in Fig. 3, where three paths are used for a prediction, two of which follow p1: (p1, p2, p3) and (p1, r2, r3). Predictions in a collaborative filtering algorithm may involve thousands of such walks in parallel, each playing a part in influencing the predicted value. Each prediction path consists of three ratings, playing roles that we call scouts, promoters, and connectors. To illustrate these roles, consider the path (p1, p2, p3) in Fig. 2 used to make a prediction of The Mask for Tom:

1. The rating p1 (Tom → Star Wars) makes a connection from Tom to other ratings that can be used to predict Tom's rating for The Mask. This rating serves as a scout in the bipartite graph of ratings to find a path that leads to The Mask.

2. The rating p2 (Jeff → Star Wars) helps the system recommend The Mask to Tom by connecting the scout to the promoter.

3. The rating p3 (Jeff → The Mask) allows connections to The Mask, and, therefore, promotes this movie to Tom.

Figure 4: Scouts, promoters, and connectors.

Formally, given a prediction pu,a of a target item a for user u: a scout for pu,a is a rating ru,i such that there exists a user v with ratings rv,a and rv,i for some item i; a promoter for pu,a is a rating rv,a for some user v, such that there exist ratings rv,i and ru,i for an item i; and a connector for pu,a is a rating rv,i by some user v and item i, such that there exist ratings ru,i
and rv,a.\nThe scouts, connectors, and promoters for the prediction of Tom``s rating of The Mask are p1 and q1, p2 and q2, and p3 and q3 (respectively).\nEach of these roles has a value in the recommender to the user, the user``s neighborhood, and the system in terms of allowing recommendations to be made.\n3.1 Roles in Detail Ratings that act as scouts tend to help the recommender system suggest more movies to the user, though the extent to which this is true depends on the rating behavior of other users.\nFor example, in Fig. 4 the rating Tom \u2192 Star Wars helps the system recommend only The Mask to him, while Tom \u2192 The Matrix helps recommend The Mask, Jurassic Park, and My Cousin Vinny.\nTom makes a connection to Jim who is a prolific user of the system, by rating The Matrix.\nHowever, this does not make The Matrix the best movie to rate for everyone.\nFor example, Jim is benefited equally by both The Mask and The Matrix, which allow the system to recommend Star Wars to him.\nHis rating of The Mask is the best scout for Jeff, and Jerry``s only scout is his rating of Star Wars.\nThis suggests that good scouts allow a user to build similarity with prolific users, and thereby ensure they get more from the system.\nWhile scouts represent beneficial ratings from the perspective of a user, promoters are their duals, and are of benefit to items.\nIn Fig. 
In Fig. 4, My Cousin Vinny benefits from Jim's rating, since it allows recommendations to Jeff and Tom. The Mask is not so dependent on just one rating, since the ratings by both Jim and Jeff help it. On the other hand, Jerry's rating of Star Wars does not help promote it to any other user. We conclude that a good promoter connects an item to a broader neighborhood of other items, and thereby ensures that it is recommended to more users.

Connectors serve a crucial role in a recommender system that is not as obvious. The movies My Cousin Vinny and Jurassic Park have the highest recommendation potential, since they can be recommended to Jeff, Jerry, and Tom based on the linkage structure illustrated in Fig. 4. Beyond the fact that Jim rated these movies, these recommendations are possible only because of the ratings Jim → The Matrix and Jim → The Mask, which are the best connectors. A connector improves the system's ability to make recommendations with no explicit gain for the user.

Note that every rating can be of varied benefit in each of these roles. The rating Jim → My Cousin Vinny is a poor scout and connector, but a very good promoter. The rating Jim → The Mask is a reasonably good scout, a very good connector, and a good promoter. Finally, the rating Jerry → Star Wars is a very good scout, but of no value as a connector or promoter. As illustrated here, a rating can have different value in each of the three roles in terms of whether a recommendation can be made or not. We could measure this value by simply counting the number of times a rating is used in each role, which alone would be helpful in characterizing the behavior of a system. But we can also measure the contribution of each rating to the quality of recommendations or the health of the system. Since every prediction is the combined effort of several recommendation paths, we are interested in discerning the influence of each rating (and, hence, each path) in the system
toward the system's overall error. We can understand the dynamics of the system at a finer granularity by tracking the influence of a rating according to the role it plays. The next section describes our approach to measuring the value of a rating in each role.

4. CONTRIBUTIONS OF RATINGS

As we have seen, a rating may play different roles in different predictions and, in each prediction, contribute to the quality of the prediction in different ways. Our approach can use any numeric measure of a property of system health, and assigns credit (or blame) to each rating proportional to its influence on the prediction. By tracking the role of each rating in a prediction, we can accumulate the credit for a rating in each of the three roles, and also track the evolution of the roles of ratings over time in the system. This section defines the methodology for computing the contribution of ratings by first defining the influence of a rating, and then instantiating the approach for predictive accuracy, and then for rank accuracy. We also demonstrate how these contributions can be aggregated to study the neighborhood of ratings involved in computing a user's recommendations. Note that although our general formulation of rating influence is algorithm independent, due to space considerations we present the approach only for item-based collaborative filtering. The definition for user-based algorithms is similar and will be presented in an expanded version of this paper.

4.1 Influence of Ratings

Recall that an item-based approach to collaborative filtering relies on building item neighborhoods using the similarity of ratings by the same user. As described earlier, similarity is defined by the adjusted cosine measure (Equations (2) and (3)). A set of the top K neighbors is maintained for all items for space and computational efficiency. A prediction of item i for a user u is computed as the weighted deviation from the item's mean rating, as shown in Equation (4).
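Equation (4) itself is not reproduced in this excerpt. A common form of such a weighted-deviation prediction, sketched here under that assumption (the exact formula in the paper may differ):

```python
# Sketch of an item-based prediction as a weighted deviation from the item's
# mean rating: p_{u,i} = mean_i + sum_j sim(i,j)*(r_{u,j} - mean_j) / sum_j |sim(i,j)|,
# with j ranging over neighbors of i that the user has rated.
# This is one standard reading of Equation (4), not the paper's verbatim code.

def predict(item_mean, sims, user_ratings, i):
    num = den = 0.0
    for j, s in sims.get(i, {}).items():  # top-K neighbors of item i
        if j in user_ratings:
            num += s * (user_ratings[j] - item_mean[j])
            den += abs(s)
    return item_mean[i] + (num / den if den else 0.0)

# Toy data: item "i" has neighbors "j" and "k" with the given similarities.
means = {"i": 3.0, "j": 4.0, "k": 2.0}
sims = {"i": {"j": 0.8, "k": 0.2}}
p = predict(means, sims, {"j": 5.0, "k": 2.0}, "i")
```

With these toy values the user rates "j" one point above its mean, so the prediction for "i" is pulled above its own mean.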
The list of recommendations for a user is then the list of items sorted in descending order of their predicted values.

We first define impact(a, i, j), the impact a user a has in determining the similarity between two items i and j. This is the change in the similarity between i and j when a's ratings are removed:

impact(a, i, j) = |sim(i, j) − sim_ā(i, j)| / Σ_{w ∈ C_ij} |sim(i, j) − sim_w̄(i, j)|

where C_ij = {u ∈ U | ∃ r_{u,i}, r_{u,j} ∈ R(u)} is the set of co-raters of items i and j (users who rate both i and j), R(u) is the set of ratings provided by user u, and sim_ā(i, j) is the similarity of i and j when the ratings of user a are removed:

sim_ā(i, j) = Σ_{v ∈ U∖{a}} (r_{v,i} − r̄_v)(r_{v,j} − r̄_v) / ( √(Σ_{v ∈ U∖{a}} (r_{v,i} − r̄_v)²) · √(Σ_{v ∈ U∖{a}} (r_{v,j} − r̄_v)²) ),

adjusted for the number of raters:

sim_ā(i, j) ← (max(|U_i ∩ U_j| − 1, γ) / γ) · sim_ā(i, j).

If all co-raters of i and j rate them identically, we define the impact as impact(a, i, j) = 1/|C_ij|, since Σ_{w ∈ C_ij} |sim(i, j) − sim_w̄(i, j)| = 0.

The influence of each path (u, j, v, i) = [r_{u,j}, r_{v,j}, r_{v,i}] in the prediction of p_{u,i} is given by

influence(u, j, v, i) = ( sim(i, j) / Σ_{l ∈ N_i ∩ I_u} sim(i, l) ) · impact(v, i, j).

It follows that the sum of influences over all such paths, for a given set of endpoints, is 1.

4.2 Role Values for Predictive Accuracy

The value of a rating in each role is computed from the influence, depending on the evaluation measure employed. Here we illustrate the approach using predictive accuracy as the evaluation metric. In general, the goodness of a prediction decides whether the ratings involved are credited or discredited for their roles. For predictive accuracy, the error in prediction e = |p_{u,i} − r_{u,i}| is mapped to a comfort level using a mapping function M(e).
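A minimal sketch of these two definitions, assuming the leave-one-out similarities sim_w̄(i, j) are precomputed (all names here are illustrative):

```python
# Sketch of impact(a, i, j) and path influence, following the definitions above.
# sim_without maps each co-rater w to the precomputed leave-one-out similarity
# sim_w̄(i, j); computing those from raw ratings is omitted for brevity.

def impact(a, sim_ij, sim_without, coraters):
    """Normalized change in sim(i, j) when user a's ratings are removed."""
    total = sum(abs(sim_ij - sim_without[w]) for w in coraters)
    if total == 0:                      # all co-raters rate i and j identically
        return 1.0 / len(coraters)
    return abs(sim_ij - sim_without[a]) / total

def influence(sim_ij, sims_to_rated_neighbors, impact_v):
    """influence(u,j,v,i) = (sim(i,j) / sum_l sim(i,l)) * impact(v,i,j)."""
    return sim_ij / sum(sims_to_rated_neighbors) * impact_v

# Toy values: two co-raters "a" and "b" of items i and j, sim(i, j) = 0.6.
coraters = {"a", "b"}
sim_without = {"a": 0.5, "b": 0.7}      # hypothetical leave-one-out similarities
imp = impact("a", 0.6, sim_without, coraters)
```

Here removing either co-rater shifts sim(i, j) by 0.1, so each carries half the impact; the influence then scales the path's share of the prediction weight by that impact.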
Anecdotal evidence suggests that users are unable to discern errors less than 1.0 (on a rating scale of 1 to 5) [4], so an error less than 1.0 is considered acceptable, but anything larger is not. We hence define M(e) as (1 − e), binned to an appropriate value in [−1, −0.5, 0.5, 1]. For each prediction p_{u,i}, M(e) is attributed to all the paths that assisted in the computation of p_{u,i}, proportional to their influences. This tribute, M(e) · influence(u, j, v, i), is in turn inherited by each of the ratings in the path [r_{u,j}, r_{v,j}, r_{v,i}], with the credit (or blame) accumulating to the respective roles of r_{u,j} as a scout, r_{v,j} as a connector, and r_{v,i} as a promoter. In other words, the scout value SF(r_{u,j}), the connector value CF(r_{v,j}), and the promoter value PF(r_{v,i}) are all incremented by the tribute amount. Over a large number of predictions, scouts that have repeatedly resulted in large errors accumulate a large negative scout value, and vice versa (and similarly for the other roles). Every rating is thus summarized by its triple [SF, CF, PF].

4.3 Role Values for Rank Accuracy

We now define the computation of the contribution of ratings to observed rank accuracy. For this computation, we must know the user's preference order for a set of items for which predictions can be computed. We assume that we have a test set of the users' ratings of the items presented in the recommendation list. For every pair of items rated by a user in the test data, we check whether the predicted order is concordant with his preference. We say a pair (i, j) is concordant (with error tolerance ε) whenever one of the following holds:

• if r_{u,i} < r_{u,j} then p_{u,i} − p_{u,j} < ε;
• if r_{u,i} > r_{u,j} then p_{u,i} − p_{u,j} > ε; or
• if r_{u,i} = r_{u,j} then |p_{u,i} − p_{u,j}| ≤ ε.

Similarly, a pair (i, j) is discordant (with error tolerance ε) if it is not concordant. Our experiments described below use an error tolerance of ε = 0.1.
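The concordance test follows directly from these three conditions; a small sketch:

```python
# Concordance of a predicted pair with the user's stated preference,
# per the three conditions above; eps is the error tolerance.

def concordant(r_i, r_j, p_i, p_j, eps=0.1):
    if r_i < r_j:
        return p_i - p_j < eps
    if r_i > r_j:
        return p_i - p_j > eps
    return abs(p_i - p_j) <= eps   # equal true ratings: predictions must be close
```

A pair is then discordant exactly when this function returns False.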
All paths involved in the prediction of the two items in a concordant pair are credited, and the paths involved in a discordant pair are discredited. The credit assigned to a pair of items (i, j) in the recommendation list for user u is computed as

c(i, j) = (t/T) · 1/(C + D)   if (i, j) are concordant,
c(i, j) = −(t/T) · 1/(C + D)  if (i, j) are discordant,    (7)

where t is the number of items in the user's test set whose ratings could be predicted, T is the number of items rated by user u in the test set, C is the number of concordances, and D is the number of discordances. The credit c is then divided among all paths responsible for predicting p_{u,i} and p_{u,j}, proportional to their influences. We again add the role values obtained from all the experiments to form a triple [SF, CF, PF] for each rating.

4.4 Aggregating Rating Roles

After calculating the role values for individual ratings, we can use these values to study neighborhoods and the system as a whole. Here we consider how the role values can be used to characterize the health of a neighborhood. Consider the list of top recommendations presented to a user at a specific point in time. The collaborative filtering algorithm traversed many paths in the user's neighborhood, through his scouts and other connectors and promoters, to make these recommendations. We call these ratings the recommender neighborhood of the user. The user implicitly chooses this neighborhood of ratings through the items he rates. Apart from the collaborative filtering algorithm itself, the health of this neighborhood completely determines a user's satisfaction with the system. We can characterize a user's recommender neighborhood by aggregating the individual role values of the ratings involved, weighted by the influence of each rating in determining his recommendation list. Different sections of the user's neighborhood wield varied influence on his recommendation list. For instance, ratings reachable through highly rated items have a bigger say in the recommended items. Our aim is to study the system and classify users with respect to their
positioning in a healthy or unhealthy neighborhood. A user can have a good set of scouts, but may be exposed to a neighborhood with bad connectors and promoters. He can have a good neighborhood, but bad scouts may render the neighborhood's potential useless. We expect that users with good scouts and good neighborhoods will be most satisfied with the system in the future.

A user's neighborhood is characterized by a triple that represents the weighted sum of the role values of the individual ratings involved in making recommendations. Consider a user u and his ordered list of recommendations L. An item i in the list is weighted inversely by K(i), depending on its position in the list; in our studies we use K(i) = √(position(i)). Several paths of ratings [r_{u,j}, r_{v,j}, r_{v,i}] are involved in predicting p_{u,i}, which ultimately decides its position in L, each with influence(u, j, v, i). The recommender neighborhood of a user u is characterized by the triple [SFN(u), CFN(u), PFN(u)], where

SFN(u) = Σ_{i ∈ L} ( Σ_{[r_{u,j}, r_{v,j}, r_{v,i}]} SF(r_{u,j}) · influence(u, j, v, i) / K(i) ),

and CFN(u) and PFN(u) are defined similarly. This triple estimates the quality of u's recommendations based on the past track record of the ratings involved, in their respective roles.

5. EXPERIMENTATION

As we have seen, we can assign role values to each rating when evaluating a collaborative filtering system. In this section, we demonstrate the use of this approach toward our overall goal of monitoring and managing the health of a recommender system, through experiments on the MovieLens million-rating dataset. In particular, we discuss results relating to identifying good scouts, promoters, and connectors; the evolution of rating roles; and the characterization of user neighborhoods.

5.1 Methodology

Our experiments use the MovieLens million-rating dataset, which consists of ratings by 6040 users of 3952 movies. The ratings are in the range 1 to 5, and are labeled
with the time the rating was given. As discussed before, we consider only the item-based algorithm here (with item neighborhoods of size 30) and, due to space considerations, present role value results only for rank accuracy. Since we are interested in the evolution of the rating role values over time, the model of the recommender system is built by processing ratings in their arrival order. The timestamping provided by MovieLens is hence crucial for the analyses presented here. We assess rating roles at intervals of 10,000 ratings and process the first 200,000 ratings in the dataset, giving rise to 20 snapshots. We incrementally update the role values as the time-ordered ratings are merged into the model.

To keep the experiment computationally manageable, we define a test dataset for each user. As the time-ordered ratings are merged into the model, we label a small randomly selected percentage (20%) as test data. At discrete epochs, i.e., after processing every 10,000 ratings, we compute the predictions for the ratings in the test data, and then compute the role values for the ratings used in those predictions. One potential criticism of this methodology is that the ratings in the test set are never evaluated for their roles. We overcome this concern by repeating the experiment with different random seeds. The probability that any given rating is considered for evaluation is then considerably high: 1 − 0.2^n, where n is the number of times the experiment is repeated with different random seeds. The results here are based on n = 4 repetitions.

The item-based collaborative filtering algorithm's performance was ordinary with respect to rank accuracy.
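The coverage claim can be checked directly, assuming the 20% holdout is drawn independently in each repetition:

```python
# Probability that a rating is evaluated for its roles at least once:
# a rating escapes evaluation only if it lands in the (20%) test holdout
# in every one of the n independently seeded repetitions.

def coverage(n, holdout=0.2):
    return 1 - holdout ** n

probability = coverage(4)   # the paper's setting of n = 4 repetitions
```

With n = 4, the probability that a rating is evaluated is 1 − 0.2⁴ = 0.9984.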
Fig. 5 shows a plot of the precision and recall as ratings were merged in time order into the model. The recall was always high, but the average precision was only about 53%.

Figure 5: Precision and recall for the item-based collaborative filtering algorithm.

5.2 Inducing Good Scouts

The ratings of a user that serve as scouts are those that allow the user to receive recommendations. We claim that users whose ratings have respectable scout values will be happier with the system than those whose ratings have low scout values. Note that the item-based algorithm discussed here produces recommendation lists with nearly half of the pairs in the list discordant with the user's preference. Whether all of these discordant pairs are observable by the user is unclear; nonetheless, this suggests a need to be able to direct users toward items whose ratings would improve the lists.

The distribution of scout values for most users' ratings is Gaussian with mean zero. Fig. 6 shows the frequency distribution of scout values for a sample user at a given snapshot. We observe that a large number of ratings never serve as scouts for their users. A relatable scenario is when Amazon's recommender makes suggestions of books or items based on other items that were purchased as gifts. With simple relevance feedback from the user, such ratings can be isolated as bad scouts and discounted from future predictions. Removing bad scouts was found to be worthwhile for individual users, but the overall performance improvement was only marginal.

An obvious question is whether good scouts can be formed by merely rating popular movies, as suggested by Rashid et al.
[9]. They show that a mix of popularity and rating entropy identifies the best items to suggest to new users, when evaluated using MAE. Following their intuition, we would expect to see a high correlation between popularity-entropy and good scouts. We measured the Pearson correlation coefficient between the aggregated scout value of a movie and its popularity (the number of times it is rated), and between the aggregated scout value and its popularity*variance measure, at different snapshots of the system. Note that the scout values were initially anti-correlated with popularity (Fig. 7), but became moderately correlated as the system evolved. Both popularity and popularity*variance performed similarly. A possible explanation is that there had been insufficient time for the popular movies to accumulate ratings.

Figure 6: Distribution of scout values for a sample user.

Figure 7: Correlation between aggregated scout value and item popularity (computed at different intervals).

Figure 8: Correlation between aggregated promoter value and user prolificity (computed at different intervals).

Table 1: Movies forming the best scouts.

Best Scouts | Conf. | Pop.
Being John Malkovich (1999) | 1.00 | 445
Star Wars: Episode IV - A New Hope (1977) | 0.92 | 623
Princess Bride, The (1987) | 0.85 | 477
Sixth Sense, The (1999) | 0.85 | 617
Matrix, The (1999) | 0.77 | 522
Ghostbusters (1984) | 0.77 | 441
Casablanca (1942) | 0.77 | 384
Insider, The (1999) | 0.77 | 235
American Beauty (1999) | 0.69 | 624
Terminator 2: Judgment Day (1991) | 0.69 | 503
Fight Club (1999) | 0.69 | 235
Shawshank Redemption, The (1994) | 0.69 | 445
Run Lola Run (Lola rennt) (1998) | 0.69 | 220
Terminator, The (1984) | 0.62 | 450
Usual Suspects, The (1995) | 0.62 | 326
Aliens (1986) | 0.62 | 385
North by Northwest (1959) | 0.62 | 245
Fugitive, The (1993) | 0.62 | 402
End of Days (1999) | 0.62 | 132
Raiders of the Lost Ark (1981) | 0.54 | 540
Schindler's List (1993) | 0.54 | 453
Back to the Future (1985) | 0.54 | 543
Toy Story (1995) | 0.54 | 419
Alien (1979) | 0.54 | 415
Abyss, The (1989) | 0.54 | 345
2001: A Space Odyssey (1968) | 0.54 | 358
Dogma (1999) | 0.54 | 228
Little Mermaid, The (1989) | 0.54 | 203

Table 2: Movies forming the worst scouts.

Worst Scouts | Conf. | Pop.
Harold and Maude (1971) | 0.46 | 141
Grifters, The (1990) | 0.46 | 180
Sting, The (1973) | 0.38 | 244
Godfather: Part III, The (1990) | 0.38 | 154
Lawrence of Arabia (1962) | 0.38 | 167
High Noon (1952) | 0.38 | 84
Women on the Verge of a... (1988) | 0.38 | 113
Grapes of Wrath, The (1940) | 0.38 | 115
Duck Soup (1933) | 0.38 | 131
Arsenic and Old Lace (1944) | 0.38 | 138
Midnight Cowboy (1969) | 0.38 | 137
To Kill a Mockingbird (1962) | 0.31 | 195
Four Weddings and a Funeral (1994) | 0.31 | 271
Good, The Bad and The Ugly, The (1966) | 0.31 | 156
It's a Wonderful Life (1946) | 0.31 | 146
Player, The (1992) | 0.31 | 220
Jackie Brown (1997) | 0.31 | 118
Boat, The (Das Boot) (1981) | 0.31 | 210
Manhattan (1979) | 0.31 | 158
Truth About Cats & Dogs, The (1996) | 0.31 | 143
Ghost (1990) | 0.31 | 227
Lone Star (1996) | 0.31 | 125
Big Chill, The (1983) | 0.31 | 184

By studying the evolution of scout values, we can identify movies that consistently feature in good scouts over time. We claim that these movies will make viable scouts for other users. We computed the aggregated scout values for all movies over intervals of 10,000 ratings each. A movie is said to induce a good scout if it appears in the top 100 of the list sorted by aggregated scout value, and to induce a bad scout if it appears in the bottom 100 of the same list. Movies appearing consistently high over time are expected to remain there in the future. The effective confidence in a movie m is

C_m = (T_m − B_m) / N    (8)

where T_m is the number of times it appeared in the top 100, B_m is the number of times it appeared in the bottom 100, and N is the number of intervals considered.
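Equation (8) translates directly into code; the counts below are toy values, not taken from the experiment:

```python
# Effective confidence in a movie (Equation (8)): net fraction of intervals
# in which it appeared in the top 100 rather than the bottom 100 by
# aggregated scout value. Counts here are illustrative only.

def confidence(top_appearances, bottom_appearances, n_intervals):
    return (top_appearances - bottom_appearances) / n_intervals

# A movie in the top 100 in every one of 13 intervals gets confidence 1.00.
c_best = confidence(13, 0, 13)
```

A movie that oscillates between the two extremes nets out near zero, so consistency over time, not a single strong interval, is what earns high confidence.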
Using this measure, the top few movies expected to induce the best scouts are shown in Table 1. Movies that would be bad scout choices are shown in Table 2, with their associated confidences. The popularities of the movies are also displayed. Although more popular movies appear in the list of good scouts, these tables show that a blind choice of scout based on popularity alone can be potentially dangerous. Interestingly, the best scout, 'Being John Malkovich', is about a puppeteer who discovers a portal into a movie star, a movie that has been described variously on amazon.com as 'makes you feel giddy', 'seriously weird', 'comedy with depth', 'silly', 'strange', and 'inventive'. Indicating whether someone likes this movie or not goes a long way toward situating the user in a suitable neighborhood with similar preferences. On the other hand, several factors may make a movie a bad scout, such as sharp variance in user preferences in the neighborhood of the movie. Two users may have the same opinion about Lawrence of Arabia, but they may differ sharply about the other movies they saw. Bad scouts ensue when there is deviation in behavior around a common synchronization point.

5.3 Inducing Good Promoters

Ratings that serve to promote items in a collaborative filtering system are critical to allowing a new item to be recommended to users, so inducing good promoters is important for cold-start recommendation. We note that the frequency distribution of promoter values for a sample movie's ratings is also Gaussian (similar to Fig. 6). This indicates that the promotion of a movie benefits most from the ratings of a few users, and is unaffected by the ratings of most users. We find a strong correlation between a user's number of ratings and his aggregated promoter value.
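The correlation tracked in Fig. 8 is a plain Pearson coefficient; a self-contained sketch with hypothetical per-user data:

```python
# Pearson correlation between user prolificity (number of ratings) and
# aggregated promoter value, as tracked in Fig. 8. Data below is invented
# purely to illustrate the computation.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

prolificity = [10, 50, 200, 400]        # ratings per user (hypothetical)
promoter_value = [0.1, 0.4, 1.9, 4.2]   # aggregated promoter values (hypothetical)
r = pearson(prolificity, promoter_value)
```

On data like this, where promoter value grows with the number of ratings, r lands close to 1, matching the strong correlation the text reports.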
Fig. 8 depicts the evolution of the Pearson correlation coefficient between the prolificity of a user (his number of ratings) and his aggregated promoter value. We expect that conspicuous shills, by promoting wrong movies to users, will be discredited with negative aggregate promoter values and should be easily identifiable. Given this observation, the obvious rule to follow when introducing a new movie is to have it rated directly by prolific users who possess high aggregated promoter values. A new movie is thus cast into the neighborhood of many other movies, improving its visibility. Note, though, that such a user may have long stopped using the system; tracking promoter values consistently allows only the most active recent users to be considered.

5.4 Inducing Good Connectors

Given the way scouts, connectors, and promoters are characterized, it follows that the movies that are part of the best scouts are also part of the best connectors. Similarly, the users who constitute the best promoters are also part of the best connectors. Good connectors are induced by ensuring that a user with a high promoter value rates a movie with a high scout value. In our experiments, we find that a rating's longest-standing role is often as a connector. A rating with a poor connector value is often explained by its user being a bad promoter, or its movie being a bad scout. Such ratings can be removed from the prediction process to bring marginal improvements to recommendations. In selected experiments, we observed that removing a set of badly behaving connectors improved the system's overall performance by 1.5%. The effect was even higher for a few select users, who saw an improvement of above 10% in precision without much loss in recall.

5.5 Monitoring the Evolution of Rating Roles

One of the more significant contributions of our work is the ability to model the evolution of recommender systems by studying the changing roles of ratings over time. The role and
value of a rating can change depending on many factors, such as user behavior, redundancy, shilling effects, or properties of the collaborative filtering algorithm used. Studying the dynamics of rating roles in terms of transitions between good, bad, and negligible values can provide insights into the functioning of the recommender system. We believe that a continuous visualization of these transitions will improve the ability to manage a recommender system.

We classify rating states as good, bad, or negligible. Consider a user who has rated 100 movies in a particular interval, of which 20 are part of the test set. If a scout has a value greater than 0.005, it is uniquely involved in at least 2 concordant predictions, which we will consider good. Thus, a threshold of 0.005 is chosen to bin a rating as good, bad, or negligible in terms of its scout, connector, and promoter values. For instance, a rating r at time t with role value triple [0.1, 0.001, −0.01] is classified as [scout +, connector 0, promoter −], where + indicates good, 0 indicates negligible, and − indicates bad.

The positive credit held by a rating is a measure of its contribution to the betterment of the system, and the discredit is a measure of its contribution to the detriment of the system. Even though the positive roles (and the negative roles) make up a very small percentage of all ratings, their contribution supersedes their size. For example, even though only 1.7% of all ratings were classified as good scouts, they hold 79% of all positive credit in the system. Similarly, the bad scouts were just 1.4% of all ratings, but hold 82% of all discredit. Note that good and bad scouts together comprise only 1.4% + 1.7% = 3.1% of the ratings, implying that the majority of ratings are negligible role players as scouts (more on this later). Likewise, good connectors were 1.2% of the system, and hold 30% of all positive credit. The bad connectors (0.8%
of the system) hold 36% of all discredit. Good promoters (3% of the system) hold 46% of all credit, while bad promoters (2%) hold 50% of all discredit. This reiterates that a few ratings influence most of the system's performance; hence it is important to track transitions between them, regardless of their small numbers.

Across different snapshots, a rating can remain in the same state or change. A good scout can become a bad scout, a good promoter can become a good connector, good and bad scouts can become vestigial, and so on. It is not practical to expect a recommender system to have no ratings in bad roles. However, it suffices to see ratings in bad roles convert to either good or vestigial roles. Similarly, observing a large number of good roles become bad ones is a sign of imminent failure of the system. We employ the principle of non-overlapping episodes [6] to count such transitions. A sequence such as

[+, 0, 0] → [+, 0, 0] → [0, +, 0] → [0, 0, 0]

is interpreted as the transitions

[+, 0, 0] ⇒ [0, +, 0] : 1
[+, 0, 0] ⇒ [0, 0, 0] : 1
[0, +, 0] ⇒ [0, 0, 0] : 1

instead of

[+, 0, 0] ⇒ [0, +, 0] : 2
[+, 0, 0] ⇒ [0, 0, 0] : 2
[0, +, 0] ⇒ [0, 0, 0] : 1.

See [6] for further details about this counting procedure. Thus, a rating can be in one of 27 possible states, and there are about 27² possible transitions. We make a further simplification and use only 9 states, indicating whether the rating is a scout, promoter, or connector, and whether it has a positive, negative, or negligible role. Ratings that serve multiple purposes are counted using multiple episode instantiations, but the states themselves are not duplicated beyond the 9 restricted states. In this model, a transition such as [+, 0, +] ⇒ [0, +, 0] : 1 is counted as

[scout +] ⇒ [scout 0] : 1
[scout +] ⇒ [connector +] : 1
[scout +] ⇒ [promoter 0] : 1
[connector 0] ⇒ [scout 0] : 1
[connector 0] ⇒ [connector +] : 1
[connector 0] ⇒ [promoter 0] : 1
[promoter +] ⇒ [scout 0] : 1
[promoter +] ⇒
[connector +] : 1
[promoter +] ⇒ [promoter 0] : 1.

Of these, transitions of the form [pX] ⇒ [q0] where p ≠ q, X ∈ {+, 0, −}, are considered uninteresting, and only the rest are counted. Fig. 9 depicts the major transitions counted while processing the first 200,000 ratings from the MovieLens dataset. Only transitions with frequency greater than or equal to 3% are shown. The percentage given for each state indicates the number of ratings that were found in that state. We consider transitions from any state to a good state as healthy, from any state to a bad state as unhealthy, and from any state to a vestigial state as decaying.

Figure 9: Transitions among rating roles (state occupancies: scout +: 2%, scout −: 1.5%, scout 0: 96.5%; connector +: 1.2%, connector −: 0.8%, connector 0: 98%; promoter +: 3%, promoter −: 2%, promoter 0: 95%).

From Fig. 9, we can observe:

• The bulk of the ratings have negligible values, irrespective of their role. The majority of the transitions involve good and bad ratings becoming negligible.

• The number of good ratings is comparable to that of bad ratings, and ratings are seen to switch states often, except in the case of scouts (see below).

• The negative and positive scout states are not reachable through any transition, indicating that these ratings must begin as such, and cannot be coerced into these roles.

• Good promoters and good connectors have a much longer survival period than scouts. Transitions that recur to these states have frequencies of 10% and 15%, compared to just 4% for scouts. Good connectors are the slowest to decay, whereas (good) scouts decay the fastest.

• Healthy percentages are seen on transitions between promoters and connectors. As indicated earlier, there are hardly any transitions from promoters/connectors to scouts. This indicates that, over the long run, a user's
rating is more useful to others (movies or other users) than to the user himself.

• The percentage of healthy transitions outweighs the unhealthy ones; this hints that the system is healthy, albeit only marginally.

Note that these results are conditioned by the static nature of the dataset, which is a set of ratings over a fixed window of time. Nevertheless, a diagram such as Fig. 9 is clearly useful for monitoring the health of a recommender system. For instance, acceptable limits can be imposed on different types of transitions and, if a transition fails to meet its threshold, the recommender system, or a part of it, can be brought under closer scrutiny. Furthermore, the role state transition diagram would also be the ideal place to study the effects of shilling, a topic we will consider in future research.

5.6 Characterizing Neighborhoods

Earlier we saw that we can characterize the neighborhood of ratings involved in creating a recommendation list L for a user. In our experiment, we consider lists of length 30, and sample the lists of about 5% of users through the evolution of the model (at intervals of 10,000 ratings each), computing their neighborhood characteristics. To simplify the presentation, we report the percentage of the sample that falls into each of the following categories:

1. Inactive user: SFN(u) = 0
2. Good scouts, good neighborhood: SFN(u) > 0 ∧ (CFN(u) > 0 ∧ PFN(u) > 0)
3. Good scouts, bad neighborhood: SFN(u) > 0 ∧ (CFN(u) < 0 ∨ PFN(u) < 0)
4. Bad scouts, good neighborhood: SFN(u) < 0 ∧ (CFN(u) > 0 ∧ PFN(u) > 0)
5. Bad scouts, bad neighborhood: SFN(u) < 0 ∧ (CFN(u) < 0 ∨ PFN(u) < 0)

From our sample set of 561 users, we found that 476 users were inactive. Of the remaining 85 users, 26 had good scouts and a good neighborhood, 6 had bad scouts and a good neighborhood, 29 had good scouts and a bad neighborhood, and 24 had bad scouts and a bad neighborhood.
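The five categories above can be expressed as a small classifier over [SFN, CFN, PFN] triples. A sketch; note that the paper's conditions leave zero-valued CFN or PFN unspecified, and we bin those as "bad" here for illustration:

```python
# Classify a user's recommender neighborhood from the aggregated role
# triple [SFN, CFN, PFN], following the five categories above.
# Zero-valued CFN/PFN fall into "bad neighborhood" by choice, since the
# paper's conditions do not cover them explicitly.

def classify(sfn, cfn, pfn):
    if sfn == 0:
        return "inactive"
    scouts = "good scouts" if sfn > 0 else "bad scouts"
    nbhd = "good neighborhood" if (cfn > 0 and pfn > 0) else "bad neighborhood"
    return scouts + ", " + nbhd
```

Running such a classifier over sampled users at each snapshot yields counts like those reported for the 561-user sample.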
Thus, we conjecture that 59 users (29 + 24 + 6) are in danger of leaving the system. As a remedy, users with bad scouts and a good neighborhood can be asked to reconsider their ratings of some movies, in the hope of improving the system's recommendations; the system can be expected to deliver more if they engineer some good scouts. Users with good scouts and a bad neighborhood are harder to address; this situation might entail selectively removing some of the connector-promoter pairs that are causing the damage. Handling users with bad scouts and bad neighborhoods is a more difficult challenge. Such a classification allows the use of different strategies to improve a user's experience with the system, depending on his context. In future work, we intend to conduct field studies and examine the improvement in performance of different strategies for different contexts.

6. CONCLUSIONS

To further recommender system acceptance and deployment, we require new tools and methodologies to manage an installed recommender and to develop insights into the roles played by ratings. A fine-grained characterization in terms of rating roles such as scouts, promoters, and connectors, as done here, helps such an endeavor. Although we have presented results only for the item-based algorithm with list rank accuracy as the metric, the same approach applies to user-based algorithms and other metrics. In future research, we plan to systematically study the many algorithmic parameters, tolerances, and cutoff thresholds employed here and reason about their effects on the downstream conclusions. We also aim to extend our formulation to other collaborative filtering algorithms, study the effect of shilling in altering rating roles, conduct field studies, and evaluate improvements in user experience from tweaking ratings based on their role values. Finally, we plan to develop the idea of mining the evolution of rating role patterns into a reporting and tracking system for all aspects of recommender system
health.

7. REFERENCES
[1] Cosley, D., Lam, S., Albert, I., Konstan, J., and Riedl, J. Is Seeing Believing? How Recommender System Interfaces Affect Users' Opinions. In Proc. CHI (2001), pp. 585-592.
[2] Herlocker, J. L., Konstan, J. A., Borchers, A., and Riedl, J. An Algorithmic Framework for Performing Collaborative Filtering. In Proc. SIGIR (1999), pp. 230-237.
[3] Herlocker, J. L., Konstan, J. A., Terveen, L. G., and Riedl, J. T. Evaluating Collaborative Filtering Recommender Systems. ACM Transactions on Information Systems Vol. 22, 1 (2004), pp. 5-53.
[4] Konstan, J. A. Personal communication. 2003.
[5] Lam, S. K., and Riedl, J. Shilling Recommender Systems for Fun and Profit. In Proceedings of the 13th International World Wide Web Conference (2004), ACM Press, pp. 393-402.
[6] Laxman, S., Sastry, P. S., and Unnikrishnan, K. P. Discovering Frequent Episodes and Learning Hidden Markov Models: A Formal Connection. IEEE Transactions on Knowledge and Data Engineering Vol. 17, 11 (2005), pp. 1505-1517.
[7] McLaughlin, M. R., and Herlocker, J. L. A Collaborative Filtering Algorithm and Evaluation Metric that Accurately Model the User Experience. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (2004), pp. 329-336.
[8] O'Mahony, M., Hurley, N. J., Kushmerick, N., and Silvestre, G. Collaborative Recommendation: A Robustness Analysis. ACM Transactions on Internet Technology Vol. 4, 4 (Nov 2004), pp. 344-377.
[9] Rashid, A. M., Albert, I., Cosley, D., Lam, S., McNee, S., Konstan, J. A., and Riedl, J. Getting to Know You: Learning New User Preferences in Recommender Systems. In Proceedings of the 2002 Conference on Intelligent User Interfaces (IUI 2002) (2002), pp. 127-134.
[10] Rashid, A. M., Karypis, G., and Riedl, J.
Influence in Ratings-Based Recommender Systems: An Algorithm-Independent Approach. In Proc. of the SIAM International Conference on Data Mining (2005).
[11] Resnick, P., Iacovou, N., Sushak, M., Bergstrom, P., and Riedl, J. GroupLens: An Open Architecture for Collaborative Filtering of Netnews. In Proceedings of the Conference on Computer Supported Collaborative Work (CSCW'94) (1994), ACM Press, pp. 175-186.
[12] Sarwar, B., Karypis, G., Konstan, J., and Riedl, J. Item-Based Collaborative Filtering Recommendation Algorithms. In Proceedings of the Tenth International World Wide Web Conference (WWW'10) (2001), pp. 285-295.
[13] Schein, A., Popescu, A., Ungar, L., and Pennock, D. Methods and Metrics for Cold-Start Recommendation. In Proc. SIGIR (2002), pp. 253-260.

Scouts, Promoters, and Connectors: The Roles of Ratings in Nearest Neighbor Collaborative Filtering

ABSTRACT
Recommender systems aggregate individual user ratings into predictions of products or services that might interest visitors. The quality of this aggregation process crucially affects the user experience and hence the effectiveness of recommenders in e-commerce. We present a novel study that disaggregates global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearest-neighbor collaborative filtering. In particular, we formulate three roles, scouts, promoters, and connectors, that capture (respectively) how users receive recommendations, how items get recommended, and how ratings of these two types are themselves connected. These roles find direct uses in improving recommendations for users, in better targeting of items and, most importantly, in helping monitor the health of the system as a whole. For instance, they can be used to track the evolution of neighborhoods, to identify rating subspaces that do not contribute (or contribute negatively) to system performance, to
enumerate users who are in danger of leaving, and to assess the susceptibility of the system to attacks such as shilling. We argue that the three rating roles presented here provide broad primitives to manage a recommender system and its community.

1. INTRODUCTION
Recommender systems have become integral to e-commerce, providing technology that suggests products to a visitor based on previous purchases or rating history. Collaborative filtering, a common form of recommendation, predicts a user's rating for an item by combining (other) ratings of that user with other users' ratings. Significant research has been conducted in implementing fast and accurate collaborative filtering algorithms [2, 7], designing interfaces for presenting recommendations to users [1], and studying the robustness of these algorithms [8]. However, with the exception of a few studies on the influence of users [10], little attention has been paid to unraveling the inner workings of a recommender in terms of the individual ratings and the roles they play in making (good) recommendations. Such an understanding will give an important handle for monitoring and managing a recommender system, for engineering mechanisms to sustain the recommender, and thereby for ensuring its continued success. Our motivation here is to disaggregate global recommender performance metrics into contributions made by each individual rating, allowing us to characterize the many roles played by ratings in nearest-neighbor collaborative filtering. We identify three possible roles: (scouts) to connect the user into the system to receive recommendations, (promoters) to connect an item into the system to be recommended, and (connectors) to connect ratings of these two kinds. Viewing ratings in this way, we can define the contribution of a rating in each role, both in terms of allowing recommendations to occur, and in terms of influence on the quality of recommendations. In turn, this capability helps support scenarios such
as:
1. Situating users in better neighborhoods: A user's ratings may inadvertently connect the user to a neighborhood for which the user's tastes may not be a perfect match. Identifying ratings responsible for such bad recommendations and suggesting new items to rate can help situate the user in a better neighborhood.
2. Targeting items: Recommender systems suffer from lack of user participation, especially in cold-start scenarios [13] involving newly arrived items. Identifying users who can be encouraged to rate specific items helps ensure coverage of the recommender system.
3. Monitoring the evolution of the recommender system and its stakeholders: A recommender system is constantly under change: growing with new users and items, shrinking with users leaving the system, items becoming irrelevant, and parts of the system under attack. Tracking the roles of a rating and its evolution over time provides many insights into the health of the system, and how it could be managed and improved. These include being able to identify rating subspaces that do not contribute (or contribute negatively) to system performance, and could be removed; to enumerate users who are in danger of leaving, or have left the system; and to assess the susceptibility of the system to attacks such as shilling [5].
As we show, the characterization of rating roles presented here provides broad primitives to manage a recommender system and its community. The rest of the paper is organized as follows. Background on nearest-neighbor collaborative filtering and algorithm evaluation is discussed in Section 2. Section 3 defines and discusses the roles of a rating, and Section 4 defines measures of the contribution of a rating in each of these roles. In Section 5, we illustrate the use of these roles to address the goals outlined above.

2. BACKGROUND
2.1 Algorithms
Nearest-neighbor collaborative filtering algorithms either use neighborhoods of users or neighborhoods of items to
compute a prediction. An algorithm of the first kind is called user-based, and one of the second kind is called item-based [12]. In both families of algorithms, neighborhoods are formed by first computing the similarity between all pairs of users (for user-based) or items (for item-based). Predictions are then computed by aggregating ratings, which in a user-based algorithm involves aggregating the ratings of the target item by the user's neighbors and, in an item-based algorithm, involves aggregating the user's ratings of items that are neighbors of the target item. Algorithms within these families differ in the definition of similarity, formation of neighborhoods, and the computation of predictions. We consider a user-based algorithm based on that defined for GroupLens [11] with variations from Herlocker et al. [2], and an item-based algorithm similar to that of Sarwar et al. [12].
The algorithm used by Resnick et al. [11] defines the similarity of two users u and v as the Pearson correlation of their common ratings:

  sim(u, v) = Σ_{i ∈ Iu ∩ Iv} (ru,i − r̄u)(rv,i − r̄v) / sqrt(Σ_{i ∈ Iu ∩ Iv} (ru,i − r̄u)² · Σ_{i ∈ Iu ∩ Iv} (rv,i − r̄v)²)

where Iu is the set of items rated by user u, ru,i is user u's rating for item i, and r̄u is the average rating of user u (similarly for v). Similarity computed in this manner is typically scaled by a factor proportional to the number of common ratings, to reduce the chance of making a recommendation based on weak connections:

  sim'(u, v) = (max(|Iu ∩ Iv|, γ) / γ) · sim(u, v)

where γ ≈ 5 is a constant used as a lower limit in scaling [2]. These new similarities are then used to define a static neighborhood Nu for each user u consisting of the top K users most similar to user u. A prediction for user u and item i is computed by a weighted average of the ratings by the neighbors:

  pu,i = r̄u + Σ_{v ∈ V} sim'(u, v) · (rv,i − r̄v) / Σ_{v ∈ V} |sim'(u, v)|

where V = Nu ∩ Ui is the set of users most similar to u who have rated i. The item-based algorithm we use is the one defined by Sarwar et al.
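The user-based similarity, scaling, and prediction steps above can be sketched as follows. This is a minimal illustration only, assuming ratings are stored as per-user dicts; the helper names `pearson`, `scaled_sim`, and `predict` are our own, not from the paper.

```python
from math import sqrt

def pearson(u_ratings, v_ratings):
    """Pearson correlation over the items both users rated (Iu ∩ Iv)."""
    common = set(u_ratings) & set(v_ratings)
    if not common:
        return 0.0
    u_bar = sum(u_ratings.values()) / len(u_ratings)
    v_bar = sum(v_ratings.values()) / len(v_ratings)
    num = sum((u_ratings[i] - u_bar) * (v_ratings[i] - v_bar) for i in common)
    den = sqrt(sum((u_ratings[i] - u_bar) ** 2 for i in common)
               * sum((v_ratings[i] - v_bar) ** 2 for i in common))
    return num / den if den else 0.0

def scaled_sim(u_ratings, v_ratings, gamma=5):
    """Significance-weighted similarity: max(|Iu ∩ Iv|, gamma)/gamma · sim(u, v)."""
    overlap = len(set(u_ratings) & set(v_ratings))
    return max(overlap, gamma) / gamma * pearson(u_ratings, v_ratings)

def predict(u, item, ratings, neighbors):
    """Resnick-style weighted-average prediction pu,i over V = Nu ∩ Ui."""
    u_bar = sum(ratings[u].values()) / len(ratings[u])
    num = den = 0.0
    for v in neighbors:
        if v == u or item not in ratings[v]:
            continue  # keep only neighbors who actually rated the item
        w = scaled_sim(ratings[u], ratings[v])
        v_bar = sum(ratings[v].values()) / len(ratings[v])
        num += w * (ratings[v][item] - v_bar)
        den += abs(w)
    return u_bar + (num / den if den else 0.0)
```

In practice the similarities and the top-K neighborhoods Nu would be precomputed once, rather than recomputed per prediction as in this sketch.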
[12]. In this algorithm, similarity is defined as the adjusted cosine measure

  sim(i, j) = Σ_{u ∈ Ui ∩ Uj} (ru,i − r̄u)(ru,j − r̄u) / sqrt(Σ_{u ∈ Ui ∩ Uj} (ru,i − r̄u)² · Σ_{u ∈ Ui ∩ Uj} (ru,j − r̄u)²)

where Ui is the set of users who have rated item i. As for the user-based algorithm, the similarity weights are adjusted proportionally to the number of users that have rated the items in common. Given the similarities, the neighborhood Ni of an item i is defined as the top K most similar items for that item. A prediction for user u and item i is computed as the weighted average

  pu,i = Σ_{j ∈ J} sim(i, j) · ru,j / Σ_{j ∈ J} |sim(i, j)|

where J = Ni ∩ Iu is the set of items rated by u that are most similar to i.

2.2 Evaluation
Recommender algorithms have typically been evaluated using measures of predictive accuracy and coverage [3]. Studies on recommender algorithms, notably Herlocker et al. [2] and Sarwar et al. [12], typically compute predictive accuracy by dividing a set of ratings into training and test sets, and computing the prediction for an item in the test set using the ratings in the training set. A standard measure of predictive accuracy is mean absolute error (MAE), which for a test set T of user-item pairs is

  MAE = (1/|T|) Σ_{(u,i) ∈ T} |pu,i − ru,i|

Coverage has a number of definitions, but generally refers to the proportion of items that can be predicted by the algorithm [3]. A practical issue with predictive accuracy is that users typically are presented with recommendation lists, and not individual numeric predictions. Recommendation lists are lists of items in decreasing order of prediction (sometimes stated in terms of star-ratings), and so predictive accuracy may not be reflective of the accuracy of the list. So, instead we can measure recommendation or rank accuracy, which indicates the extent to which the list is in the correct order. Herlocker et al.
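The item-based computation just described admits a similar sketch. Again this is illustrative only: `adjusted_cosine` and `predict_item_based` are hypothetical helper names, ratings are assumed stored as per-user dicts, and the neighborhood Ni is passed in precomputed.

```python
from math import sqrt

def adjusted_cosine(i, j, ratings):
    """Adjusted cosine similarity of items i and j over users who rated both."""
    common = [u for u in ratings if i in ratings[u] and j in ratings[u]]
    if not common:
        return 0.0
    num = den_i = den_j = 0.0
    for u in common:
        u_bar = sum(ratings[u].values()) / len(ratings[u])  # subtract the user mean
        num += (ratings[u][i] - u_bar) * (ratings[u][j] - u_bar)
        den_i += (ratings[u][i] - u_bar) ** 2
        den_j += (ratings[u][j] - u_bar) ** 2
    den = sqrt(den_i * den_j)
    return num / den if den else 0.0

def predict_item_based(u, i, ratings, neighborhood):
    """Weighted average of u's ratings of i's neighbors, over J = Ni ∩ Iu."""
    num = den = 0.0
    for j in neighborhood:
        if j in ratings[u]:
            w = adjusted_cosine(i, j, ratings)
            num += w * ratings[u][j]
            den += abs(w)
    return num / den if den else None  # None: no overlap, item not predictable
```

The `None` return in the last line corresponds to the coverage notion above: an item for which no rated neighbor exists cannot be predicted.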
[3] discuss a number of rank accuracy measures, which range from Kendall's Tau to measures that consider the fact that users tend to look at only a prefix of the list [5]. Kendall's Tau measures the number of inversions when comparing ordered pairs in the true user ordering of items and the recommended order, and is defined as

  Tau = (C − D) / sqrt((C + D + TR) · (C + D + TP))

where C is the number of pairs that the system predicts in the correct order, D the number of pairs the system predicts in the wrong order, TR the number of pairs in the true ordering that have the same ratings, and TP the number of pairs in the predicted ordering that have the same ratings [3]. A shortcoming of the Tau metric is that it is oblivious to the position in the ordered list where the inversion occurs [3]. For instance, an inversion toward the end of the list is given the same weight as one in the beginning. One solution is to consider inversions only in the top few items in the recommended list or to weight inversions based on their position in the list.

Figure 1: Ratings in simple movie recommender.

3. ROLES OF A RATING
Our basic observation is that each rating plays a different role in each prediction in which it is used. Consider a simplified movie recommender system with three users Jim, Jeff, and Tom and their ratings for a few movies, as shown in Fig.
1. (For this initial discussion we will not consider the rating values involved.) The recommender predicts whether Tom will like The Mask using the other already available ratings. How this is done depends on the algorithm:

1. An item-based collaborative filtering algorithm constructs a neighborhood of movies around The Mask by using the ratings of users who rated The Mask and other movies similarly (e.g., Jim's ratings of The Matrix and The Mask; and Jeff's ratings of Star Wars and The Mask). Tom's ratings of those movies are then used to make a prediction for The Mask.

2. A user-based collaborative filtering algorithm would construct a neighborhood around Tom by tracking other users whose rating behaviors are similar to Tom's (e.g., Tom and Jeff have rated Star Wars; Tom and Jim have rated The Matrix). The prediction of Tom's rating for The Mask is then based on the ratings of Jeff and Jim.

Although the nearest-neighbor algorithms aggregate the ratings to form neighborhoods used to compute predictions, we can disaggregate the similarities to view the computation of a prediction as simultaneously following parallel paths of ratings. So, irrespective of the collaborative filtering algorithm used, we can visualize the prediction of Tom's rating of The Mask as walking through a sequence of ratings.

Figure 2: Ratings used to predict The Mask for Tom.

Figure 3: Prediction of The Mask for Tom in which a rating is used more than once.

In this example, two paths were used for this prediction, as depicted in Fig. 2: (p1, p2, p3) and (q1, q2, q3). Note that these paths are undirected, and are all of length 3. Only the order in which the ratings are traversed differs between the item-based algorithm (e.g., (p3, p2, p1), (q3, q2, q1)) and the user-based algorithm (e.g., (p1, p2, p3), (q1, q2, q3)). A rating can be part of many paths for a single prediction, as shown in Fig.
3, where three paths are used for a prediction, two of which follow p1: (p1, p2, p3) and (p1, r2, r3). Predictions in a collaborative filtering algorithm may involve thousands of such walks in parallel, each playing a part in influencing the predicted value. Each prediction path consists of three ratings, playing roles that we call scouts, promoters, and connectors. To illustrate these roles, consider the path (p1, p2, p3) in Fig. 2 used to make a prediction of The Mask for Tom:

1. The rating p1 (Tom → Star Wars) makes a connection from Tom to other ratings that can be used to predict Tom's rating for The Mask. This rating serves as a scout in the bipartite graph of ratings to find a path that leads to The Mask.

2. The rating p2 (Jeff → Star Wars) helps the system recommend The Mask to Tom by connecting the scout to the promoter.

3. The rating p3 (Jeff → The Mask) allows connections to The Mask, and, therefore, promotes this movie to Tom.

Formally, given a prediction p_{u,a} of a target item a for user u: a scout for p_{u,a} is a rating r_{u,i} such that there exists a user v with ratings r_{v,a} and r_{v,i} for some item i; a promoter for p_{u,a} is a rating r_{v,a} for some user v, such that there exist ratings r_{v,i} and r_{u,i} for an item i; and a connector for p_{u,a} is a rating r_{v,i} by some user v of some item i, such that there exist ratings r_{u,i} and r_{v,a}.

Figure 4: Scouts, promoters, and connectors.

The scouts, connectors, and promoters for the prediction of Tom's rating of The Mask are p1 and q1, p2 and q2, and p3 and q3 (respectively). Each of these roles has a value in the recommender to the user, the user's neighborhood, and the system in terms of allowing recommendations to be made.

3.1 Roles in Detail

Ratings that act as scouts tend to help the recommender system suggest more movies to the user, though the extent to which this is true depends on the rating behavior of other users. For example, in Fig.
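The formal definitions above can be made concrete by enumerating, for a target prediction p_{u,a}, every length-3 path and the role each rating plays. This is a toy sketch over the Fig. 2 example; the function name and representation are ours.

```python
def prediction_paths(ratings, u, a):
    """Enumerate the (scout, connector, promoter) rating triples usable to
    predict item `a` for user `u`, given `ratings` as a set of (user, item)."""
    items_of, users_of = {}, {}
    for (user, item) in ratings:
        items_of.setdefault(user, set()).add(item)
        users_of.setdefault(item, set()).add(user)
    paths = []
    for v in users_of.get(a, set()) - {u}:            # v supplies promoter r_{v,a}
        shared = (items_of[v] & items_of.get(u, set())) - {a}
        for i in shared:                              # item i links u and v
            paths.append(((u, i), (v, i), (v, a)))    # scout, connector, promoter
    return paths
```

On the movie example, predicting The Mask for Tom yields exactly two paths, corresponding to (p1, p2, p3) through Jeff and (q1, q2, q3) through Jim.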
4 the rating Tom → Star Wars helps the system recommend only The Mask to him, while Tom → The Matrix helps recommend The Mask, Jurassic Park, and My Cousin Vinny. By rating The Matrix, Tom makes a connection to Jim, who is a prolific user of the system. However, this does not make The Matrix the best movie to rate for everyone. For example, Jim benefits equally from both The Mask and The Matrix, which allow the system to recommend Star Wars to him. Jeff's best scout is his rating of The Mask, and Jerry's only scout is his rating of Star Wars. This suggests that good scouts allow a user to build similarity with prolific users, and thereby ensure they get more from the system.

While scouts represent beneficial ratings from the perspective of a user, promoters are their duals, and are of benefit to items. In Fig. 4, My Cousin Vinny benefits from Jim's rating, since it allows recommendations to Jeff and Tom. The Mask is not so dependent on just one rating, since the ratings by Jim and Jeff help it. On the other hand, Jerry's rating of Star Wars does not help promote it to any other user. We conclude that a good promoter connects an item to a broader neighborhood of other items, and thereby ensures that it is recommended to more users.

Connectors serve a crucial role in a recommender system that is not as obvious. The movies My Cousin Vinny and Jurassic Park have the highest recommendation potential since they can be recommended to Jeff, Jerry and Tom based on the linkage structure illustrated in Fig.
4. Beside the fact that Jim rated these movies, these recommendations are possible only because of the ratings Jim → The Matrix and Jim → The Mask, which are the best connectors. A connector improves the system's ability to make recommendations with no explicit gain for the user. Note that every rating can be of varied benefit in each of these roles. The rating Jim → My Cousin Vinny is a poor scout and connector, but is a very good promoter. The rating Jim → The Mask is a reasonably good scout, a very good connector, and a good promoter. Finally, the rating Jerry → Star Wars is a very good scout, but is of no value as a connector or promoter.

As illustrated here, a rating can have different value in each of the three roles in terms of whether a recommendation can be made or not. We could measure this value by simply counting the number of times a rating is used in each role, which alone would be helpful in characterizing the behavior of a system. But we can also measure the contribution of each rating to the quality of recommendations or the health of the system. Since every prediction is a combined effort of several recommendation paths, we are interested in discerning the influence of each rating (and, hence, each path) on the system's overall error. We can understand the dynamics of the system at a finer granularity by tracking the influence of a rating according to the role played. The next section describes the approach to measuring the values of a rating in each role.

4. CONTRIBUTIONS OF RATINGS

As we've seen, a rating may play different roles in different predictions and, in each prediction, contribute to the quality of a prediction in different ways. Our approach can use any numeric measure of a property of system health, and assigns credit (or blame) to each rating proportional to its influence in the prediction. By tracking the role of each rating in a prediction, we can accumulate the credit for a
rating in each of the three roles, and also track the evolution of the roles of a rating over time in the system. This section defines the methodology for computing the contribution of ratings by first defining the influence of a rating, then instantiating the approach for predictive accuracy, and then for rank accuracy. We also demonstrate how these contributions can be aggregated to study the neighborhood of ratings involved in computing a user's recommendations. Note that although our general formulation for rating influence is algorithm independent, due to space considerations, we present the approach for only item-based collaborative filtering. The definition for user-based algorithms is similar and will be presented in an expanded version of this paper.

4.1 Influence of Ratings

Recall that an item-based approach to collaborative filtering relies on building item neighborhoods using the similarity of ratings by the same user. As described earlier, similarity is defined by the adjusted cosine measure (Equations (2) and (3)). A set of the top K neighbors is maintained for all items for space and computational efficiency. A prediction of item i for a user u is computed as the weighted deviation from the item's mean rating as shown in Equation (4). The list of recommendations for a user is then the list of items sorted in descending order of their predicted values.

We first define impact(a, i, j), the impact a user a has in determining the similarity between two items i and j. This is the change in the similarity between i and j when a's rating is removed, where U_{ij} is the set of common raters of items i and j (users who rate both i and j), R(u) is the set of ratings provided by user u, and sim_ā(i, j) is the similarity of i and j when the ratings of user a are removed, adjusted for the number of raters. The influence of each path (u, j, v, i) = [r_{u,j}, r_{v,j}, r_{v,i}] in the prediction of p_{u,i} is normalized so that the sum of influences over all such
paths, for a given set of endpoints, is 1.

4.2 Role Values for Predictive Accuracy

The value of a rating in each role is computed from the influence, depending on the evaluation measure employed. Here we illustrate the approach using predictive accuracy as the evaluation metric. In general, the goodness of a prediction decides whether the ratings involved must be credited or discredited for their role. For predictive accuracy, the error in prediction e = |p_{u,i} − r_{u,i}| is mapped to a comfort level using a mapping function M(e). Anecdotal evidence suggests that users are unable to discern errors less than 1.0 (for a rating scale of 1 to 5) [4], and so an error less than 1.0 is considered acceptable, but anything larger is not. We hence define M(e) as (1 − e) binned to an appropriate value in [−1, −0.5, 0.5, 1]. For each prediction p_{u,i}, M(e) is attributed to all the paths that assisted the computation of p_{u,i}, proportional to their influences. This tribute, M(e) · influence(u, j, v, i), is in turn inherited by each of the ratings in the path [r_{u,j}, r_{v,j}, r_{v,i}], with the credit/blame accruing to the respective roles of r_{u,j} as a scout, r_{v,j} as a connector, and r_{v,i} as a promoter. In other words, the scout value SF(r_{u,j}), the connector value CF(r_{v,j}), and the promoter value PF(r_{v,i}) are all incremented by the tribute amount. Over a large number of predictions, scouts that have repeatedly resulted in big errors accumulate a large negative scout value, and vice versa (similarly for the other roles). Every rating is thus summarized by its triple [SF, CF, PF].

4.3 Role Values for Rank Accuracy

We now define the computation of the contribution of ratings to observed rank accuracy. For this computation, we must know the user's preference order for a set of items for which predictions can be computed. We assume that we have a test set of the user's ratings of the items presented in the recommendation
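A sketch of this credit assignment follows. The exact bin boundaries for M(e) are our assumption; the paper states only the target bins [−1, −0.5, 0.5, 1] and that errors under 1.0 are acceptable.

```python
def comfort(e):
    """Map prediction error e to a comfort level in [-1, -0.5, 0.5, 1].
    Bin boundaries are assumed: errors under 1.0 earn credit, larger ones blame."""
    v = 1.0 - e
    if v >= 0.5:
        return 1.0
    if v >= 0.0:
        return 0.5
    if v >= -0.5:
        return -0.5
    return -1.0

def credit_roles(role_values, paths, influences, tribute):
    """Distribute tribute M(e), scaled by each path's influence, over the
    path's ratings: the scout's SF, the connector's CF, and the promoter's
    PF components each accrue the share."""
    for path in paths:
        share = tribute * influences[path]
        for rating, role in zip(path, (0, 1, 2)):   # indices: SF, CF, PF
            role_values.setdefault(rating, [0.0, 0.0, 0.0])[role] += share
```

Accumulating `credit_roles` over many predictions yields the [SF, CF, PF] summary triple for every rating.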
list. For every pair of items rated by a user in the test data, we check whether the predicted order is concordant with his preference. We say a pair (i, j) is concordant (with error tolerance e) whenever one of the following holds:

• if r_{u,i} < r_{u,j} then p_{u,i} − p_{u,j} < e;
• if r_{u,i} > r_{u,j} then p_{u,i} − p_{u,j} > −e; or
• if r_{u,i} = r_{u,j} then |p_{u,i} − p_{u,j}| ≤ e.

Similarly, a pair (i, j) is discordant (with error tolerance e) if it is not concordant. Our experiments described below use an error tolerance of e = 0.1. All paths involved in the prediction of the two items in a concordant pair are credited, and the paths involved in a discordant pair are discredited. The credit c assigned to a pair of items (i, j) in the recommendation list for user u is computed as a function of t, T, C, and D, where t is the number of items in the user's test set whose ratings could be predicted, T is the number of items rated by user u in the test set, C is the number of concordances, and D is the number of discordances. The credit c is then divided among all paths responsible for predicting p_{u,i} and p_{u,j}, proportional to their influences. We again add the role values obtained from all the experiments to form a triple [SF, CF, PF] for each rating.

4.4 Aggregating rating roles

After calculating the role values for individual ratings, we can also use these values to study neighborhoods and the system. Here we consider how we can use the role values to characterize the health of a neighborhood. Consider the list of top recommendations presented to a user at a specific point in time. The collaborative filtering algorithm traversed many paths in his neighborhood through his scouts and other connectors and promoters to make these recommendations. We call these ratings the recommender neighborhood of the user. The user implicitly chooses this neighborhood of ratings through the items he rates. Apart from the collaborative filtering algorithm, the health of this neighborhood
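The concordance test can be sketched as follows. We apply the tolerance e symmetrically to the two strict-order cases, which is our reading of the definition:

```python
def concordant(r_i, r_j, p_i, p_j, e=0.1):
    """True if the predicted order of items i and j agrees with the user's
    true ratings r_i, r_j, within tolerance e (applied symmetrically here)."""
    if r_i < r_j:
        return p_i - p_j < e      # predicted i should not exceed j by much
    if r_i > r_j:
        return p_j - p_i < e      # predicted j should not exceed i by much
    return abs(p_i - p_j) <= e    # equal true ratings: predictions near-equal
```

A pair is discordant exactly when this returns False, so counting concordances C and discordances D over a user's test pairs is a direct loop over item pairs.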
completely influences a user's satisfaction with the system. We can characterize a user's recommender neighborhood by aggregating the individual role values of the ratings involved, weighted by the influence of individual ratings in determining his recommended list. Different sections of the user's neighborhood wield varied influence on his recommendation list. For instance, ratings reachable through highly rated items have a bigger say in the recommended items. Our aim is to study the system and classify users with respect to their positioning in a healthy or unhealthy neighborhood. A user can have a good set of scouts, but may be exposed to a neighborhood with bad connectors and promoters. He can have a good neighborhood, but his bad scouts may ensure the neighborhood's potential is rendered useless. We expect that users with good scouts and good neighborhoods will be most satisfied with the system in the future.

A user's neighborhood is characterized by a triple that represents the weighted sum of the role values of the individual ratings involved in making recommendations. Consider a user u and his ordered list of recommendations L. An item i in the list is weighted inversely, as K(i), depending on its position in the list. In our studies we use K(i) = position(i). Several paths of ratings [r_{u,j}, r_{v,j}, r_{v,i}] are involved in predicting p_{u,i}, which ultimately decides its position in L, each with influence(u, j, v, i). The recommender neighborhood of a user u is characterized by the triple [SFN(u), CFN(u), PFN(u)], where

$$SFN(u) = \sum_{i \in L} \frac{1}{K(i)} \sum_{(u,j,v,i)} influence(u,j,v,i)\, SF(r_{u,j})$$

and CFN(u) and PFN(u) are defined similarly. This triple estimates the quality of u's recommendations based on the past track record of the ratings involved in their respective roles.

5. EXPERIMENTATION

As we have seen, we can assign role values to each rating when evaluating a collaborative filtering system. In this section, we demonstrate the use of this approach toward our overall goal of defining an approach to monitor
and manage the health of a recommender system, through experiments done on the MovieLens million rating dataset. In particular, we discuss results relating to identifying good scouts, promoters, and connectors; the evolution of rating roles; and the characterization of user neighborhoods.

5.1 Methodology

Our experiments use the MovieLens million rating dataset, which consists of ratings by 6040 users of 3952 movies. The ratings are in the range 1 to 5, and are labeled with the time the rating was given. As discussed before, we consider only the item-based algorithm here (with item neighborhoods of size 30) and, due to space considerations, only present role value results for rank accuracy. Since we are interested in the evolution of the rating role values over time, the model of the recommender system is built by processing ratings in their arrival order. The timestamping provided by MovieLens is hence crucial for the analyses presented here. We make assessments of rating roles at intervals of 10,000 ratings and processed the first 200,000 ratings in the dataset (giving rise to 20 snapshots). We incrementally update the role values as the time-ordered ratings are merged into the model.

To keep the experiment computationally manageable, we define a test dataset for each user. As the time-ordered ratings are merged into the model, we label a small randomly selected percentage (20%) as test data. At discrete epochs, i.e., after processing every 10,000 ratings, we compute the predictions for the ratings in the test data, and then compute the role values for the ratings used in the predictions. One potential criticism of this methodology is that the ratings in the test set are never evaluated for their roles. We overcome this concern by repeating the experiment using different random seeds. The probability that every rating is considered for evaluation is then considerably high: 1 − 0.2^n, where n is the number of times the experiment is repeated
with different random seeds. The results here are based on n = 4 repetitions.

The item-based collaborative filtering algorithm's performance was ordinary with respect to rank accuracy. Fig. 5 shows a plot of the precision and recall as ratings were merged in time order into the model. The recall was always high, but the average precision was only about 53%.

Figure 5: Precision and recall for the item-based collaborative filtering algorithm.

5.2 Inducing good scouts

The ratings of a user that serve as scouts are those that allow the user to receive recommendations. We claim that users whose ratings have respectable scout values will be happier with the system than those whose ratings have low scout values. Note that the item-based algorithm discussed here produces recommendation lists with nearly half of the pairs in the list discordant with the user's preference. Whether all of these discordant pairs are observable by the user is unclear; however, this certainly suggests that there is a need to be able to direct users to items whose ratings would improve the lists.

The distribution of scout values for most users' ratings is Gaussian with mean zero. Fig. 6 shows the frequency distribution of scout values for a sample user at a given snapshot. We observe that a large number of ratings never serve as scouts for their users. A relatable scenario is when Amazon's recommender makes suggestions of books or items based on other items that were purchased as gifts. With simple relevance feedback from the user, such ratings can be isolated as bad scouts and discounted from future predictions. Removing bad scouts was found to be worthwhile for individual users, but the overall performance improvement was only marginal.

An obvious question is whether good scouts can be formed by merely rating popular movies, as suggested by Rashid et al.
[9]. They show that a mix of popularity and rating entropy identifies the best items to suggest to new users when evaluated using MAE. Following their intuition, we would expect to see a higher correlation between popularity-entropy and good scouts. We measured the Pearson correlation coefficient between the aggregated scout values for a movie and the popularity of the movie (the number of times it is rated), and with its popularity × variance measure, at different snapshots of the system. Note that the scout values were initially anti-correlated with popularity (Fig. 7), but became moderately correlated as the system evolved. Both popularity and popularity × variance performed similarly. A possible explanation is that there has been insufficient time for the popular movies to accumulate ratings.

Figure 6: Distribution of scout values for a sample user.

Figure 7: Correlation between aggregated scout value and item popularity (computed at different intervals).

Figure 8: Correlation between aggregated promoter value and user prolificity (computed at different intervals).

Table 1: Movies forming the best scouts.

Table 2: Movies forming the worst scouts.

By studying the evolution of scout values, we can identify movies that consistently feature in good scouts over time. We claim these movies will make viable scouts for other users. We found the aggregated scout values for all movies in intervals of 10,000 ratings each. A movie is said to induce a good scout if the movie was in the top 100 of the sorted list, and to induce a bad scout if it was in the bottom 100 of the same list. Movies appearing consistently high over time are expected to remain up there in the future. The effective confidence in a movie m is computed from T_m, the number of times it appeared in the top 100, B_m, the number of times it appeared in the bottom 100, and N, the number of intervals considered. Using this measure, the top few movies expected to induce the best scouts are shown in Table
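One simple realization of this confidence measure is the net rate of top-100 over bottom-100 appearances; the exact combination of T_m, B_m, and N is our assumption, since the original formula is not reproduced here.

```python
def effective_confidence(t_m, b_m, n_intervals):
    """Effective confidence that movie m induces good scouts: net rate of
    top-100 appearances (t_m) over bottom-100 appearances (b_m) across
    n_intervals snapshots.  NOTE: (t_m - b_m) / n is an assumed form."""
    return (t_m - b_m) / n_intervals
```

Under this reading, a movie appearing in the top 100 in 8 of 10 intervals and in the bottom 100 in 2 of them scores 0.6.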
1. Movies that would be bad scout choices are shown in Table 2, with their associated confidences. The popularities of the movies are also displayed. Although more popular movies appear in the list of good scouts, these tables show that a blind choice of scout based on popularity alone can be potentially dangerous. Interestingly, the best scout, 'Being John Malkovich', is about a puppeteer who discovers a portal into a movie star, a movie that has been described variously on amazon.com as 'makes you feel giddy', 'seriously weird', 'comedy with depth', 'silly', 'strange', and 'inventive'. Indicating whether someone likes this movie or not goes a long way toward situating the user in a suitable neighborhood, with similar preferences. On the other hand, several factors may have made a movie a bad scout, like the sharp variance in user preferences in the neighborhood of a movie. Two users may have the same opinion about Lawrence of Arabia, but they may differ sharply about how they felt about the other movies they saw. Bad scouts ensue when there is deviation in behavior around a common synchronization point.

5.3 Inducing good promoters

Ratings that serve to promote items in a collaborative filtering system are critical to allowing a new item to be recommended to users. So, inducing good promoters is important for cold-start recommendation. We note that the frequency distribution of promoter values for a sample movie's ratings is also Gaussian (similar to Fig. 6). This indicates that the promotion of a movie benefits most from the ratings of a few users, and is unaffected by the ratings of most users. We find a strong correlation between a user's number of ratings and his aggregated promoter value. Fig.
8 depicts the evolution of the Pearson correlation coefficient between the prolificity of a user (number of ratings) and his aggregated promoter value. We expect that conspicuous shills, by recommending wrong movies to users, will be discredited with negative aggregate promoter values and should be identifiable easily. Given this observation, the obvious rule to follow when introducing a new movie is to have it rated directly by prolific users who possess high aggregated promoter values. A new movie is thus cast into the neighborhood of many other movies, improving its visibility. Note, though, that a user may have long stopped using the system. Tracking promoter values consistently allows only the most active recent users to be considered.

5.4 Inducing good connectors

Given the way scouts, connectors, and promoters are characterized, it follows that the movies that are part of the best scouts are also part of the best connectors. Similarly, the users that constitute the best promoters are also part of the best connectors. Good connectors are induced by ensuring a user with a high promoter value rates a movie with a high scout value. In our experiments, we find that a rating's longest standing role is often as a connector. A rating with a poor connector value is often seen due to its user being a bad promoter, or its movie being a bad scout. Such ratings can be removed from the prediction process to bring marginal improvements to recommendations. In some selected experiments, we observed that removing a set of badly behaving connectors helped improve the system's overall performance by 1.5%. The effect was even higher on a few select users, who observed an improvement of above 10% in precision without much loss in recall.

5.5 Monitoring the evolution of rating roles

One of the more significant contributions of our work is the ability to model the evolution of recommender systems by studying the changing roles of ratings over time. The role and
value of a rating can change depending on many factors, like user behavior, redundancy, shilling effects, or properties of the collaborative filtering algorithm used. Studying the dynamics of rating roles in terms of transitions between good, bad, and negligible values can provide insights into the functioning of the recommender system. We believe that a continuous visualization of these transitions will improve the ability to manage a recommender system.

We classify different rating states as good, bad, or negligible. Consider a user who has rated 100 movies in a particular interval, of which 20 are part of the test set. If a scout has a value greater than 0.005, it indicates that it is uniquely involved in at least 2 concordant predictions, which we will say is good. Thus, a threshold of 0.005 is chosen to bin a rating as good, bad, or negligible in terms of its scout, connector, and promoter values. For instance, a rating r at time t with role value triple [0.1, 0.001, −0.01] is classified as [scout+, connector 0, promoter−], where + indicates good, 0 indicates negligible, and − indicates bad.

The positive credit held by a rating is a measure of its contribution to the betterment of the system, and the discredit is a measure of its contribution to the detriment of the system. Even though the positive roles (and the negative roles) make up a very small percentage of all ratings, their contribution supersedes their size. For example, even though only 1.7% of all ratings were classified as good scouts, they hold 79% of all positive credit in the system! Similarly, the bad scouts were just 1.4% of all ratings but hold 82% of all discredit. Note that good and bad scouts, together, comprise only 1.4% + 1.7% = 3.1% of the ratings, implying that the majority of the ratings are negligible role players as scouts (more on this later). Likewise, good connectors were 1.2% of the system, and hold 30% of all positive credit. The bad connectors (0.8%
of the system) hold 36% of all discredit. Good promoters (3% of the system) hold 46% of all credit, while bad promoters (2%) hold 50% of all discredit. This reiterates that a few ratings influence most of the system's performance. Hence it is important to track transitions between them regardless of their small numbers. Across different snapshots, a rating can remain in the same state or change. A good scout can become a bad scout, a good promoter can become a good connector, good and bad scouts can become vestigial, and so on. It is not practical to expect a recommender system to have no ratings in bad roles. However, it suffices to see ratings in bad roles either convert to good or vestigial roles. Similarly, observing a large number of good roles become bad ones is a sign of imminent failure of the system.

We employ the principle of non-overlapping episodes [6] to count such transitions. A sequence such as [+, 0, 0] → [+, 0, 0] → [0, +, 0] → [0, 0, 0] is interpreted as the transitions

[+, 0, 0] → [0, +, 0]: 1, [+, 0, 0] → [0, 0, 0]: 1, [0, +, 0] → [0, 0, 0]: 1

instead of

[+, 0, 0] → [0, +, 0]: 2, [+, 0, 0] → [0, 0, 0]: 2, [0, +, 0] → [0, 0, 0]: 1.

See [6] for further details about this counting procedure. Thus, a rating can be in one of 27 possible states, and there are about 27² possible transitions. We make a further simplification and utilize only 9 states, indicating whether the rating is a scout, promoter, or connector, and whether it has a positive, negative, or negligible role. Ratings that serve multiple purposes are counted using multiple episode instantiations, but the states themselves are not duplicated beyond the 9 restricted states. In this model, a transition such as [+, 0, +] → [0, +, 0]: 1 is counted once per role involved, e.g., as [scout+] → [connector+]: 1 and [promoter+] → [connector+]: 1. Transitions like [pX] → [q0], where p ≠ q and X ∈ {+, 0, −}, are considered uninteresting, and only the rest are counted. Fig.
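The non-overlapping counting illustrated above can be sketched with a greedy matcher. This is a simplified reading of the procedure in [6], with state triples abbreviated to single labels:

```python
def count_transitions(sequence):
    """Count non-overlapping episodes s -> t for every ordered pair of
    distinct states: each episode consumes one occurrence of s and a later
    occurrence of t, and episodes of the same type may not overlap."""
    def find(x, start):
        for k in range(start, len(sequence)):
            if sequence[k] == x:
                return k
        return None

    counts = {}
    for s in set(sequence):
        for t in set(sequence) - {s}:
            n, pos = 0, 0
            while True:
                i = find(s, pos)                        # next source state
                j = find(t, i + 1) if i is not None else None  # later target
                if j is None:
                    break
                n, pos = n + 1, j + 1                   # consume both events
            if n:
                counts[(s, t)] = n
    return counts
```

On the example sequence above (with A = [+, 0, 0], B = [0, +, 0], C = [0, 0, 0]), this yields A → B: 1, A → C: 1, B → C: 1, matching the non-overlapping interpretation rather than the naive pair counts.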
9 depicts the major transitions counted while processing the first 200,000 ratings from the MovieLens dataset. Only transitions with frequency greater than or equal to 3% are shown. The percentage for each state indicates the number of ratings that were found to be in that state. We consider transitions from any state to a good state as healthy, from any state to a bad state as unhealthy, and from any state to a vestigial state as decaying.

Figure 9: Transitions among rating roles.

From Fig. 9, we can observe:

• The bulk of the ratings have negligible values, irrespective of their role. The majority of the transitions involve both good and bad ratings becoming negligible.

• The number of good ratings is comparable to the bad ratings, and ratings are seen to switch states often, except in the case of scouts (see below).

• The negative and positive scout states are not reachable through any transition, indicating that these ratings must begin as such, and cannot be coerced into these roles.

• Good promoters and good connectors have a much longer survival period than scouts. Transitions that recur to these states have frequencies of 10% and 15%, compared to just 4% for scouts. Good connectors are the slowest to decay, whereas (good) scouts decay the fastest.

• Healthy percentages are seen on transitions between promoters and connectors. As indicated earlier, there are hardly any transitions from promoters/connectors to scouts. This indicates that, over the long run, a user's rating is more useful to others (movies or other users) than to the user himself.

• The percentages of healthy transitions outweigh the unhealthy ones; this hints that the system is healthy, albeit only marginally.

Note that these results are conditioned by the static nature of the dataset, which is a set of ratings over a fixed window of time. However, a diagram such as Fig.
9 is clearly useful for monitoring the health of a recommender system. For instance, acceptable limits can be imposed on different types of transitions and, if a transition fails to meet the threshold, the recommender system or a part of it can be brought under closer scrutiny. Furthermore, the role state transition diagram would also be the ideal place to study the effects of shilling, a topic we will consider in future research.

5.6 Characterizing neighborhoods

Earlier we saw that we can characterize the neighborhood of ratings involved in creating a recommendation list L for a user. In our experiment, we consider lists of length 30, and sample the lists of about 5% of users through the evolution of the model (at intervals of 10,000 ratings each) and compute their neighborhood characteristics. To simplify our presentation, we consider the percentage of the sample that falls into one of the following categories:

1. Inactive user: SFN(u) = 0

2. Good scouts, good neighborhood

From our sample set of 561 users, we found that 476 users were inactive. Of the remaining 85 users, we found 26 users had good scouts and a good neighborhood, 6 had bad scouts and a good neighborhood, 29 had good scouts and a bad neighborhood, and 24 had bad scouts and a bad neighborhood. Thus, we conjecture that 59 users (29 + 24 + 6) are in danger of leaving the system. As a remedy, users with bad scouts and a good neighborhood can be asked to reconsider their ratings of some movies, hoping to improve the system's recommendations. The system can be expected to deliver more if they engineer some good scouts. Users with good scouts and a bad neighborhood are harder to address; this situation might entail selectively removing some connector-promoter pairs that are causing the damage. Handling users with bad scouts and bad neighborhoods is a more difficult challenge. Such a classification allows the use of different strategies to better a user's experience with the system depending on
his context.\nIn future work, we intend to conduct field studies and study the improvement in performance of different strategies for different contexts.\n6.\nCONCLUSIONS\nTo further recommender system acceptance and deployment, we require new tools and methodologies to manage an installed recommender and develop insights into the roles played by ratings.\nA fine-grained characterization in terms of rating roles such as scouts, promoters, and connectors, as done here, helps such an endeavor.\nAlthough we have presented results on only the item-based algorithm with list rank accuracy as the metric, the same approach outlined here applies to user-based algorithms and other metrics.\nIn future research, we plan to systematically study the many algorithmic parameters, tolerances, and cutoff thresholds employed here and reason about their effects on the downstream conclusions.\nWe also aim to extend our formulation to other collaborative filtering algorithms, study the effect of shilling in altering rating roles, conduct field studies, and evaluate improvements in user experience by tweaking ratings based on their role values.\nFinally, we plan to develop the idea of mining the evolution of rating role patterns into a reporting and tracking system for all aspects of recommender system health.","keyphrases":["scout","promot","connector","rate","nearest neighbor","collabor filter","recommend","recommend system","aggreg process","neighborhood","collabor filter algorithm","purchas","opinion","list rank accuraci","user-base and item-base algorithm"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","U","U","U","M"]} {"id":"C-74","title":"Adapting Asynchronous Messaging Middleware to ad-hoc Networking","abstract":"The characteristics of mobile environments, with the possibility of frequent disconnections and fluctuating bandwidth, have forced a rethink of traditional middleware. 
In particular, the synchronous communication paradigms often employed in standard middleware do not appear to be particularly suited to ad-hoc environments, in which not even the intermittent availability of a backbone network can be assumed. Instead, asynchronous communication seems to be a generally more suitable paradigm for such environments. Message oriented middleware for traditional systems has been developed and used to provide an asynchronous paradigm of communication for distributed systems, and, recently, also for some specific mobile computing systems. In this paper, we present our experience in designing, implementing and evaluating EMMA (Epidemic Messaging Middleware for ad-hoc networks), an adaptation of Java Message Service (JMS) for mobile ad-hoc environments. We discuss in detail the design challenges and some possible solutions, showing a concrete example of the feasibility and suitability of the application of the asynchronous paradigm in this setting and outlining a research roadmap for the coming years.","lvl-1":"Adapting Asynchronous Messaging Middleware to ad-hoc Networking Mirco Musolesi Dept. of Computer Science, University College London Gower Street, London WC1E 6BT, United Kingdom m.musolesi@cs.ucl.ac.uk Cecilia Mascolo Dept. of Computer Science, University College London Gower Street, London WC1E 6BT, United Kingdom c.mascolo@cs.ucl.ac.uk Stephen Hailes Dept. 
of Computer Science, University College London Gower Street, London WC1E 6BT, United Kingdom s.hailes@cs.ucl.ac.uk ABSTRACT The characteristics of mobile environments, with the possibility of frequent disconnections and fluctuating bandwidth, have forced a rethink of traditional middleware.\nIn particular, the synchronous communication paradigms often employed in standard middleware do not appear to be particularly suited to ad-hoc environments, in which not even the intermittent availability of a backbone network can be assumed.\nInstead, asynchronous communication seems to be a generally more suitable paradigm for such environments.\nMessage oriented middleware for traditional systems has been developed and used to provide an asynchronous paradigm of communication for distributed systems, and, recently, also for some specific mobile computing systems.\nIn this paper, we present our experience in designing, implementing and evaluating EMMA (Epidemic Messaging Middleware for ad-hoc networks), an adaptation of Java Message Service (JMS) for mobile ad-hoc environments.\nWe discuss in detail the design challenges and some possible solutions, showing a concrete example of the feasibility and suitability of the application of the asynchronous paradigm in this setting and outlining a research roadmap for the coming years.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems-Distributed Applications; C.2.1 [Network Architecture and Design]: Wireless Communication General Terms DESIGN, ALGORITHMS 1.\nINTRODUCTION With the increasing popularity of mobile devices and their widespread adoption, there is a clear need to allow the development of a broad spectrum of applications that operate effectively over such an environment.\nUnfortunately, this is far from simple: mobile devices are increasingly heterogeneous in terms of processing capabilities, memory size, battery capacity, and network interfaces.\nEach such configuration has 
substantially different characteristics that are both statically different - for example, there is a major difference in capability between a Berkeley mote and an 802.11g-equipped laptop - and that vary dynamically, as in situations of fluctuating bandwidth and intermittent connectivity.\nMobile ad hoc environments have an additional element of complexity in that they are entirely decentralised.\nIn order to craft applications for such complex environments, an appropriate form of middleware is essential if cost effective development is to be achieved.\nIn this paper, we examine one of the foundational aspects of middleware for mobile ad-hoc environments: that of the communication primitives.\nTraditionally, the most frequently used middleware primitives for communication assume the simultaneous presence of both end points on a network, since the stability and pervasiveness of the networking infrastructure is not an unreasonable assumption for most wired environments.\nIn other words, most communication paradigms are synchronous: object oriented middleware such as CORBA and Java RMI are typical examples of middleware based on synchronous communication.\nIn recent years, there has been growing interest in platforms based on asynchronous communication paradigms, such as publish-subscribe systems [6]: these have been exploited very successfully where there is application level asynchronicity.\nFrom a Gartner Market Report [7]: Given message-oriented middleware's (MOM) popularity, scalability, flexibility, and affinity with mobile and wireless architectures, by 2004, MOM will emerge as the dominant form of communication middleware for linking mobile and enterprise applications (0.7 probability)... 
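The decoupled, asynchronous interaction that MOM provides can be illustrated with a small sketch. Note that this is neither EMMA's code nor the real JMS API (JMS defines interfaces such as javax.jms.MessageProducer and javax.jms.MessageConsumer); it is a toy in-memory queue with invented names, showing only how a sender proceeds as soon as the middleware accepts a message, while the receiver collects it at some later time:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy sketch (not EMMA, not real JMS): a minimal in-memory message queue
// illustrating the asynchronous, decoupled style of MOM communication.
// All class and method names here are invented for illustration only.
class ToyMessageQueue {
    private final Queue<String> buffer = new ArrayDeque<>();

    // The sender returns as soon as the middleware accepts the message;
    // it does not wait for the receiver to be online.
    void send(String message) {
        buffer.add(message);
    }

    // The receiver retrieves a pending message at a convenient later time;
    // returns null when no message is waiting.
    String receive() {
        return buffer.poll();
    }
}

class ToyMessageQueueDemo {
    public static void main(String[] args) {
        ToyMessageQueue queue = new ToyMessageQueue();
        queue.send("hello");                 // producer continues immediately
        // ... arbitrary time may pass; endpoints need not be online together ...
        System.out.println(queue.receive()); // prints "hello"
    }
}
```

A real message-oriented middleware adds persistence, acknowledgments, and (in EMMA's case, as described below) epidemic propagation on top of this basic decoupling of sender and receiver.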
Moreover, in mobile ad-hoc systems, the likelihood of network fragmentation means that synchronous communication may in any case be impracticable, giving situations in which delay tolerant asynchronous traffic is the only form of traffic that could be supported.\n121 Middleware 2004 Companion\nMiddleware for mobile ad-hoc environments must therefore support semi-synchronous or completely asynchronous communication primitives if it is to avoid substantial limitations to its utility.\nAside from the intellectual challenge in supporting this model, this work is also interesting because there are a number of practical application domains, such as enabling inter-community communication in undeveloped areas of the globe.\nThus, for example, projects have been carried out to help populations living in remote places such as Lapland [3] or in poor areas that lack fixed connectivity infrastructure [9].\nThere have been attempts to provide mobile middleware with these properties, including STEAM, LIME, XMIDDLE, Bayou (see [11] for a more complete review of mobile middleware).\nThese models differ quite considerably from existing traditional middleware in terms of the primitives provided.\nFurthermore, some of them fail to provide a solution for true ad-hoc scenarios.\nIf the projected success of MOM becomes anything like a reality, there will be many programmers with experience of it.\nThe ideal solution to the problem of middleware for ad-hoc systems is, then, to allow programmers to utilise the same paradigms and models presented by common forms of MOM and to ensure that these paradigms are supportable within the mobile environment.\nThis approach has clear advantages in allowing applications developed on standard middleware platforms to be easily deployed on mobile devices.\nIndeed, some research has already led to the adaptation of traditional middleware platforms to mobile settings, mainly to provide integration between mobile devices and existing
fixed networks in a nomadic (i.e., mixed) environment [4].\nWith respect to message oriented middleware, the current implementations, however, either assume the existence of a backbone network to which the mobile hosts connect from time to time while roaming [10], or assume that nodes are always somehow reachable through a path [18].\nNo adaptation to heterogeneous or completely ad-hoc scenarios, with frequent disconnection and periodically isolated clouds of hosts, has been attempted.\nIn the remainder of this paper we describe an initial attempt to adapt message oriented middleware to suit mobile and, more specifically, mobile ad-hoc networks.\nIn our case, we elected to examine JMS, as one of the most widely known MOM systems.\nIn the latter part of this paper, we explore the limitations of our results and describe the plans we have to take the work further.\n2.\nMESSAGE ORIENTED MIDDLEWARE AND JAVA MESSAGE SERVICE (JMS) Message-oriented middleware systems support communication between distributed components via message-passing: the sender sends a message to identified queues, which usually reside on a server.\nA receiver retrieves the message from the queue at a different time and may acknowledge the reply using the same asynchronous mechanism.\nMessage-oriented middleware thus supports asynchronous communication in a very natural way, achieving de-coupling of senders and receivers.\nA sender is able to continue processing as soon as the middleware has accepted the message; eventually, the receiver will send an acknowledgment message and the sender will be able to collect it at a convenient time.\nHowever, given the way they are implemented, these middleware systems usually require resource-rich devices, especially in terms of memory and disk space, where persistent queues of messages that have been received but not yet processed, are stored.\nSun Java Message Service [5], IBM WebSphere MQ [6], Microsoft MSMQ [12] are examples of very successful 
message-oriented middleware for traditional distributed systems.\nThe Java Message Service (JMS) is a collection of interfaces for asynchronous communication between distributed components.\nIt provides a common way for Java programs to create, send and receive messages.\nJMS users are usually referred to as clients.\nThe JMS specification further defines providers as the components in charge of implementing the messaging system and providing the administrative and control functionality (i.e., persistence and reliability) required by the system.\nClients can send and receive messages, asynchronously, through the JMS provider, which is in charge of the delivery and, possibly, of the persistence of the messages.\nThere are two types of communication supported: point to point and publish-subscribe models.\nIn the point to point model, hosts send messages to queues.\nReceivers can be registered with some specific queues, and can asynchronously retrieve the messages and then acknowledge them.\nThe publish-subscribe model is based on the use of topics that can be subscribed to by clients.\nMessages are sent to topics by other clients and are then received in an asynchronous mode by all the subscribed clients.\nClients learn about the available topics and queues through the Java Naming and Directory Interface (JNDI) [14].\nQueues and topics are created by an administrator on the provider and are registered with the JNDI interface for look-up.\nIn the next section, we introduce the challenges of mobile networks, and show how JMS can be adapted to cope with these requirements.\n3.\nJMS FOR MOBILE COMPUTING Mobile networks vary very widely in their characteristics, from nomadic networks in which nodes relocate whilst offline through to ad-hoc networks in which nodes move freely and in which there is no infrastructure.\nMobile ad-hoc networks are most generally applicable in situations where survivability and instant deployability are key: most notably in military applications
and disaster relief.\nIn between these two types of 'mobile' networks, there are, however, a number of possible heterogeneous combinations, where nomadic and ad-hoc paradigms are used to interconnect totally unwired areas to more structured networks (such as a LAN or the Internet).\nWhilst the JMS specification has been extensively implemented and used in traditional distributed systems, adaptations for mobile environments have been proposed only recently.\nThe challenges of porting JMS to mobile settings are considerable; however, in view of its widespread acceptance and use, there are significant advantages in allowing the adaptation of existing applications to mobile environments and in allowing the interoperation of applications in the wired and wireless regions of a network.\nIn [10], JMS was adapted to a nomadic mobile setting, where mobile hosts can be JMS clients and communicate through the JMS provider that, however, sits on a backbone network, providing reliability and persistence.\nThe client prototype presented in [10] is very lightweight, due to the delegation of all the heavyweight functionality to the provider on the wired network.\nMiddleware for Pervasive and ad-hoc Computing 122\nHowever, this approach is somewhat limited in terms of widespread applicability and scalability as a consequence of the concentration of functionality in the wired portion of the network.\nIf JMS is to be adapted to completely ad-hoc environments, where no fixed infrastructure is available, and where nodes change location and status very dynamically, more issues must be taken into consideration.\nFirstly, discovery needs to use a resilient but distributed model: in this extremely dynamic environment, static solutions are unacceptable.\nAs discussed in Section 2, a JMS administrator defines queues and topics on the provider.\nClients can then learn about them using the Java Naming and Directory Interface (JNDI).\nHowever, due to the way JNDI is designed, a JNDI node (or
more than one) needs to be in reach in order to obtain a binding of a name to an address (i.e., knowing where a specific queue/topic is).\nIn mobile ad-hoc environments, the discovery process cannot assume the existence of a fixed set of discovery servers that are always reachable, as this would not match the dynamicity of ad-hoc networks.\nSecondly, a JMS provider, as suggested by the JMS specification, also needs to be reachable by each node in the network, in order to communicate.\nThis assumes a very centralised architecture, which again does not match the requirements of a mobile ad-hoc setting, in which nodes may be moving and sparse: a more distributed and dynamic solution is needed.\nPersistence is, however, essential functionality in asynchronous communication environments as hosts are, by definition, connected at different times.\nIn the following section, we will discuss our experience in designing and implementing JMS for mobile ad-hoc networks.\n4.\nJMS FOR MOBILE AD-HOC NETWORKS 4.1 Adaptation of JMS for Mobile ad-hoc Networks Developing applications for mobile networks is yet more challenging: in addition to the same considerations as for infrastructured wireless environments, such as the limited device capabilities and power constraints, there are issues of rate of change of network connectivity, and the lack of a static routing infrastructure.\nConsequently, we now describe an initial attempt to adapt the JMS specification to target the particular requirements related to ad-hoc scenarios.\nAs discussed in Section 3, a JMS application can use either the point to point or the publish-subscribe style of messaging.\nPoint to Point Model The point to point model is based on the concept of queues, which are used to enable asynchronous communication between the producer of a message and possibly different consumers.\nIn our solution, the location of queues is determined by a negotiation process that is application dependent.\nFor example, let us suppose
that it is possible to know a priori, or it is possible to determine dynamically, that a certain host is the receiver of most of the messages sent to a particular queue.\nIn this case, the optimum location of the queue may well be on this particular host.\nIn general, it is worth noting that, according to the JMS specification and suggested design patterns, it is common and preferable for a client to have all of its messages delivered to a single queue.\nQueues are advertised periodically to the hosts that are within transmission range or that are reachable by means of the underlying synchronous communication protocol, if provided.\nIt is important to note that, at the middleware level, it is logically irrelevant whether or not the network layer implements some form of ad-hoc routing (though considerably more efficient if it does); the middleware only considers information about which nodes are actively reachable at any point in time.\nThe hosts that receive advertisement messages add entries to their JNDI registry.\nEach entry is characterized by a lease (a mechanism similar to that present in Jini [15]).\nA lease represents the time of validity of a particular entry.\nIf a lease is not refreshed (i.e., its life is not extended), it can expire and, consequently, the entry is deleted from the registry.\nIn other words, the host assumes that the queue will be unreachable from that point in time.\nThis may happen, for example, if the host storing the queue becomes unreachable.\nA host that initiates a discovery process will find the topics and the queues present in its connected portion of the network in a straightforward manner.\nIn order to deliver a message to a host that is not currently in reach1, we use an asynchronous epidemic routing protocol that will be discussed in detail in Section 4.2.\nIf two hosts are in the same cloud (i.e., a connected path exists between them), but no synchronous protocol is available, the messages are sent using the epidemic
protocol.\nIn this case, the delivery latency will be low as a result of the rapidity of propagation of the infection in the connected cloud (see also the simulation results in Section 5).\nGiven the existence of an epidemic protocol, the discovery mechanism consists of advertising the queues to the hosts that are currently unreachable using analogous mechanisms.\nPublish-Subscribe Model In the publish-subscribe model, some of the hosts are similarly designated to hold topics and store subscriptions, as before.\nTopics are advertised through the registry in the same way as are queues, and a client wishing to subscribe to a topic must register with the client holding the topic.\nWhen a client wishes to send a message to the topic list, it sends it to the topic holder (in the same way as it would send a message to a queue).\nThe topic holder then forwards the message to all subscribers, using the synchronous protocol if possible, the epidemic protocol otherwise.\nIt is worth noting that we use a single message with multiple recipients, instead of multiple messages each with a single recipient.\nWhen a message is delivered to one of the subscribers, this recipient is deleted from the list.\nIn order to delete the other possible replicas, we employ acknowledgment messages (discussed in Section 4.4), returned in the same way as a normal message.\nWe have also adapted the concepts of durable and non-durable subscriptions for ad-hoc settings.\nIn fixed platforms, durable subscriptions are maintained during the disconnections of the clients, whether these are intentional or are the result of failures.\nIn traditional systems, while a durable subscriber is disconnected from the server, the server is responsible for storing messages.\nWhen the durable subscriber reconnects, the server sends it all unexpired messages.\nThe problem is that, in our scenario, disconnections are the norm1 rather than the exception.\n1 In theory, it is not possible to send a message to a peer that has never been reachable in the past, since there can be no entry present in the registry.\nHowever, to overcome this possible limitation, we provide a primitive through which information can be added to the registry without using the normal channels.\nIn other words, we cannot consider disconnections as failures.\nFor these reasons, we adopt slightly different semantics.\nWith respect to durable subscriptions, if a subscriber becomes disconnected, notifications are not stored but are sent using the epidemic protocol rather than the synchronous protocol.\nIn other words, durable notifications remain valid during the possible disconnections of the subscriber.\nOn the other hand, if a non-durable subscriber becomes disconnected, its subscription is deleted; in other words, during disconnections, notifications are not sent using the epidemic protocol but exploit only the synchronous protocol.\nIf the topic becomes accessible to this host again, it must make another subscription in order to receive the notifications.\nUnsubscription messages are delivered in the same way as are subscription messages.\nIt is important to note that durable subscribers have to unsubscribe explicitly from a topic in order to stop the notification process; however, all durable subscriptions have a predefined expiration time in order to cope with the cases of subscribers that do not meet again because of their movements or failures.\nThis feature is clearly provided to limit the number of unnecessary messages sent around the network.\n4.2 Message Delivery using Epidemic Routing In this section, we examine one possible mechanism that will allow the delivery of messages in a partially connected network.\nThe mechanism we discuss is intended for the purposes of demonstrating feasibility; more efficient communication mechanisms for this environment are themselves complex, and are the subject of another paper [13].\nThe asynchronous message delivery described above is
based on a typical pure epidemic-style routing protocol [16].\nA message that needs to be sent is replicated on each host in reach.\nIn this way, copies of the messages are quickly spread through connected networks, like an infection.\nIf a host becomes connected to another cloud of mobile nodes, during its movement, the message spreads through this collection of hosts.\nEpidemic-style replication of data and messages has been exploited in the past in many fields starting with the distributed database systems area [2].\nWithin epidemic routing, each host maintains a buffer containing the messages that it has created and the replicas of the messages generated by the other hosts.\nTo improve the performance, a hash-table indexes the content of the buffer.\nWhen two hosts connect, the host with the smaller identifier initiates a so-called anti-entropy session, sending a list containing the unique identifiers of the messages that it currently stores.\nThe other host evaluates this list and sends back a list containing the identifiers it is storing that are not present in the other host, together with the messages that the other does not have.\nThe host that has started the session receives the list and, in the same way, sends the messages that are not present in the other host.\nShould buffer overflow occur, messages are dropped.\nThe reliability offered by this protocol is typically best effort, since there is no guarantee that a message will eventually be delivered to its recipient.\nClearly, the delivery ratio of the protocol increases proportionally to the maximum allowed delay time and the buffer size in each host (interesting simulation results may be found in [16]).\n4.3 Adaptation of the JMS Message Model In this section, we will analyse the aspects of our adaptation of the specification related to the so-called JMS Message Model [5].\nAccording to this, JMS messages are characterised by some properties defined using the header field, which contains values that 
are used by both clients and providers for their delivery.\nThe aspects discussed in the remainder of this section are valid for both models (point to point and publish-subscribe).\nA JMS message can be persistent or non-persistent.\nAccording to the JMS specification, persistent messages must be delivered with a higher degree of reliability than the non-persistent ones.\nHowever, it is worth noting that it is not possible to ensure once-and-only-once reliability for persistent messages as defined in the specification, since, as we discussed in the previous subsection, the underlying epidemic protocol can guarantee only best-effort delivery.\nNevertheless, clients maintain a list of the identifiers of the recently received messages to avoid the delivery of message duplicates.\nIn other words, we provide the applications with at-most-once reliability for both types of messages.\nIn order to implement different levels of reliability, EMMA treats persistent and non-persistent messages differently during the execution of the anti-entropy epidemic protocol.\nSince the message buffer space is limited, persistent messages are preferentially replicated using the available free space.\nIf this is insufficient and non-persistent messages are present in the buffer, these are replaced.\nOnly the successful deliveries of the persistent messages are notified to the senders.\nAccording to the JMS specification, it is possible to assign a priority to each message.\nMessages with higher priorities are delivered in a preferential way.\nAs discussed above, persistent messages are prioritised above non-persistent ones; further selection is then based on message priorities.\nIn fact, if there is not enough space to replicate all the persistent messages, a mechanism based on priorities is used to delete and replicate non-persistent messages (and, if necessary, persistent messages).\nMessages are deleted from the buffers
using the expiration time value that can be set by senders.\nThis is a way to free space in the buffers (one preferentially deletes older messages in cases of conflict); to eliminate stale replicas in the system; and to limit the time for which destinations must hold message identifiers to dispose of duplicates.\n4.4 Reliability and Acknowledgment Mechanisms As already discussed, at-most-once message delivery is the best that can be achieved in terms of delivery semantics in partially connected ad-hoc settings.\nHowever, it is possible to improve the reliability of the system with efficient acknowledgment mechanisms.\nEMMA provides a mechanism for failure notification to applications if the acknowledgment is not received within a given timeout (which can be configured by application developers).\nThis mechanism is the one that distinguishes the delivery of persistent and non-persistent messages in our JMS implementation: the deliveries of the former are notified to the senders, whereas those of the latter are not.\nWe use acknowledgment messages not only to inform senders about the successful delivery of messages but also to delete the replicas of the delivered messages that are still present in the network.\nEach host maintains a list of the messages successfully delivered that is updated as part of the normal process of information exchange between the hosts.\nThe lists are exchanged during the first steps of the anti-entropy epidemic protocol with a certain predefined frequency.\nIn the case of messages with multiple recipients, a list of the actual recipients is also stored.\nWhen a host receives the list, it checks its message buffer and updates it according to the following rules: (1) if a message has a single recipient and it has been delivered, it is deleted from the buffer; (2) if a message has multiple recipients, the identifiers of the hosts to which it has been delivered are deleted from the associated list of recipients.\nIf the
resulting length of the list of recipients is zero, the message is deleted from the buffer.\nThese lists have, clearly, finite dimensions and are implemented as circular queues.\nThis simple mechanism, together with the use of expiration timestamps, guarantees that old acknowledgment notifications are deleted from the system after a limited period of time.\nIn order to improve the reliability of EMMA, a mechanism for intelligent replication of queues and topics based on context information could be designed.\nHowever, this is not yet part of the current architecture of EMMA.\n5.\nIMPLEMENTATION AND PRELIMINARY EVALUATION We implemented a prototype of our platform using the J2ME Personal Profile.\nThe size of the executable is about 250KB including the JMS 1.1 jar file; this is a perfectly acceptable figure given the available memory of current mobile devices on the market.\nWe tested our prototype on HP iPAQ PDAs running Linux, interconnected with WaveLAN, and on a number of laptops with the same network interface.\nWe also evaluated the middleware platform using the OMNET++ discrete event simulator [17] in order to explore a range of mobile scenarios that incorporated a more realistic number of hosts than was achievable experimentally.\nMore specifically, we assessed the performance of the system in terms of delivery ratio and average delay, varying the density of population and the buffer size, and using persistent and non-persistent messages with different priorities.\nThe simulation results show that EMMA's performance, in terms of delivery ratio and delay of persistent messages with higher priorities, is good.\nIn general, it is evident that the delivery ratio is strongly related to the correct dimensioning of the buffers to the maximum acceptable delay.\nMoreover, the epidemic algorithms are able to guarantee a high delivery ratio if one evaluates performance over a time interval sufficient for the dissemination of the replicas of
messages (i.e., the infection spreading) in a large portion of the ad-hoc network.\nOne consequence of the dimensioning problem is that scalability may be seriously impacted in peer-to-peer middleware for mobile computing due to the resource poverty of the devices (limited memory in which to temporarily store messages) and the number of possible interconnections in ad-hoc settings.\nWhat is worse is that common forms of commercial and social organisation (six degrees of separation) mean that even modest TTL values on messages will lead to widespread flooding of epidemic messages.\nThis problem arises because of the lack of intelligence in the epidemic protocol, and can be addressed by selecting carrier nodes for messages with greater care.\nThe details of this process are, however, outside the scope of this paper (but may be found in [13]) and do not affect the foundation on which the EMMA middleware is based: the ability to deliver messages asynchronously.\n6.\nCRITICAL VIEW OF THE STATE OF THE ART The design of middleware platforms for mobile computing requires researchers to answer new and fundamentally different questions; simply assuming the presence of wired portions of the network on which centralised functionality can reside is not generalisable.\nThus, it is necessary to investigate novel design principles and to devise architectural patterns that differ from those traditionally exploited in the design of middleware for fixed systems.\nAs an example, consider the recent cross-layering trend in ad-hoc networking [1].\nThis is a way of re-thinking software systems design, explicitly abandoning the classical forms of layering, since, although this separation of concerns affords portability, it does so at the expense of potential efficiency gains.\nWe believe that it is possible to view our approach as an instance of cross-layering.\nIn fact, we have added the epidemic network protocol at middleware level and, at the same time, we have used the existing synchronous
network protocol if present both in delivering messages (traditional layering) and in informing the middleware about when messages may be delivered by revealing details of the forwarding tables (layer violation).\nFor this reason, we prefer to consider them jointly as the communication layer of our platform together providing more efficient message delivery.\nAnother interesting aspect is the exploitation of context and system information to improve the performance of mobile middleware platforms.\nAgain, as a result of adopting a cross-layering methodology, we are able to build systems that gather information from the underlying operating system and communication components in order to allow for adaptation of behaviour.\nWe can summarise this conceptual design approach by saying that middleware platforms must be not only context-aware (i.e., they should be able to extract and analyse information from the surrounding context) but also system-aware (i.e., they should be able to gather information from the software and hardware components of the mobile system).\nA number of middleware systems have been developed to support ad-hoc networking with the use of asynchronous communication (such as LIME, XMIDDLE, STEAM [11]).\nIn particular, the STEAM platform is an interesting example of event-based middleware for ad-hoc networks, providing location-aware message delivery and an effective solution for event filtering.\nA discussion of JMS, and its mobile realisation, has already been conducted in Sections 4 and 2.\nThe Swiss company Softwired has developed the first JMS middleware for mobile computing, called iBus Mobile [10].\nThe main components of this typically infrastructure-based architecture are the JMS provider, the so-called mobile JMS gateway, which is deployed on a fixed host and a lightweight JMS client library.\nThe gateway is used for the communication between the application server and mobile hosts.\nThe gateway is seen by the JMS provider as a normal JMS 
client.\nThe JMS provider can be any JMS-enabled application server, such as BEA Weblogic.\nPronto [19] is an example of a middleware system based on messaging that is specifically designed for mobile environments.\nThe platform is composed of three classes of components: mobile clients implementing the JMS specification, gateways that control traffic, guaranteeing efficiency and possible user customizations using different plug-ins, and JMS servers.\nDifferent configurations of these components are possible; with respect to mobile ad hoc network applications, the most interesting is Serverless JMS.\nThe aim of this configuration is to adapt JMS to a decentralized model.\nThe publish-subscribe model exploits the efficiency and the scalability of the underlying IP multicast protocol.\nUnreliable and reliable message delivery services are provided: reliability is provided through a negative acknowledgment-based protocol.\nPronto represents a good solution for infrastructure-based mobile networks but it does not adequately target ad-hoc settings, since mobile nodes rely on fixed servers for the exchange of messages.\nOther MOM implemented for mobile environments exist; however, they are usually straightforward extensions of existing middleware [8].\nThe only implementation of MOM specifically designed for mobile ad-hoc networks was developed at the University of Newcastle [18].\nThis work is again a JMS adaptation; the focus of that implementation is on group communication and the use of application level routing algorithms for topic delivery of messages.\nHowever, there are a number of differences in the focus of our work.\nThe importance that we attribute to disconnections makes persistence a vital requirement for any middleware that needs to be used in mobile ad-hoc networks.\nThe authors of [18] signal persistence as possible future work, not considering the fact that routing a message to a non-connected host will result in delivery
failure.\nThis is a remarkable limitation in mobile settings where unpredictable disconnections are the norm rather than the exception.\n7.\nROADMAP AND CONCLUSIONS Asynchronous communication is a useful communication paradigm for mobile ad-hoc networks, as hosts are allowed to come, go and pick up messages when convenient, also taking account of their resource availability (e.g., power, connectivity levels).\nIn this paper we have described the state of the art in terms of MOM for mobile systems.\nWe have also shown a proof of concept adaptation of JMS to the extreme scenario of partially connected mobile ad-hoc networks.\nWe have described and discussed the characteristics and differences of our solution with respect to traditional JMS implementations and the existing adaptations for mobile settings.\nHowever, trade-offs between application-level routing and resource usage should also be investigated, as mobile devices are commonly power\/resource scarce.\nA key limitation of this work is the poorly performing epidemic algorithm and an important advance in the practicability of this work requires an algorithm that better balances the needs of efficiency and message delivery probability.\nWe are currently working on algorithms and protocols that, exploiting probabilistic and statistical techniques on the basis of small amounts of exchanged information, are able to improve considerably the efficiency in terms of resources (memory, bandwidth, etc) and the reliability of our middleware platform [13].\nOne futuristic research development, which may take these ideas of adaptation of messaging middleware for mobile environments further is the introduction of more mobility oriented communication extensions, for instance the support of geocast (i.e., the ability to send messages to specific geographical areas).\n8.\nREFERENCES [1] M. Conti, G. Maselli, G. Turi, and S. 
Giordano.\nCross-layering in Mobile ad-hoc Network Design.\nIEEE Computer, 37(2):48-51, February 2004.\n[2] A. Demers, D. Greene, C. Hauser, W. Irish, J. Larson, S. Shenker, H. Sturgis, D. Swinehart, and D. Terry.\nEpidemic Algorithms for Replicated Database Maintenance.\nIn Sixth Symposium on Principles of Distributed Computing, pages 1-12, August 1987.\n[3] A. Doria, M. Uden, and D. P. Pandey.\nProviding connectivity to the Saami nomadic community.\nIn Proceedings of the Second International Conference on Open Collaborative Design for Sustainable Innovation, December 2002.\n[4] M. Haahr, R. Cunningham, and V. Cahill.\nSupporting CORBA applications in a Mobile Environment.\nIn 5th International Conference on Mobile Computing and Networking (MOBICOM99), pages 36-47.\nACM, August 1999.\n[5] M. Hapner, R. Burridge, R. Sharma, J. Fialli, and K. Stout.\nJava Message Service Specification Version 1.1.\nSun Microsystems, Inc., April 2002.\nhttp:\/\/java.sun.com\/products\/jms\/.\n[6] J. Hart.\nWebSphere MQ: Connecting your applications without complex programming.\nIBM WebSphere Software White Papers, 2003.\n[7] S. Hayward and M. Pezzini.\nMarrying Middleware and Mobile Computing.\nGartner Group Research Report, September 2001.\n[8] IBM.\nWebSphere MQ EveryPlace Version 2.0, November 2002.\nhttp:\/\/www-3.ibm.com\/software\/integration\/wmqe\/.\n[9] ITU.\nConnecting remote communities.\nDocuments of the World Summit on Information Society, 2003.\nhttp:\/\/www.itu.int\/osg\/spu\/wsis-themes.\n[10] S. Maffeis.\nIntroducing Wireless JMS.\nSoftwired AG, www.sofwired-inc.com, 2002.\n[11] C. Mascolo, L. Capra, and W. Emmerich.\nMiddleware for Mobile Computing.\nIn E. Gregori, G. Anastasi, and S. Basagni, editors, Advanced Lectures on Networking, volume 2497 of Lecture Notes in Computer Science, pages 20-58.\nSpringer Verlag, 2002.\n[12] Microsoft.\nMicrosoft Message Queuing (MSMQ) Version 2.0 Documentation.\n[13] M. Musolesi, S. Hailes, and C. 
Mascolo.\nAdaptive routing for intermittently connected mobile ad-hoc networks.\nTechnical report, UCL-CS Research Note, July 2004.\nSubmitted for Publication.\n[14] Sun Microsystems.\nJava Naming and Directory Interface (JNDI) Documentation Version 1.2.\n2003.\nhttp:\/\/java.sun.com\/products\/jndi\/.\n[15] Sun Microsystems.\nJini Specification Version 2.0, 2003.\nhttp:\/\/java.sun.com\/products\/jini\/.\n[16] A. Vahdat and D. Becker.\nEpidemic routing for Partially Connected ad-hoc Networks.\nTechnical Report CS-2000-06, Department of Computer Science, Duke University, 2000.\n[17] A. Vargas.\nThe OMNeT++ discrete event simulation system.\nIn Proceedings of the European Simulation Multiconference (ESM'2001), Prague, June 2001.\n[18] E. Vollset, D. Ingham, and P. Ezhilchelvan.\nJMS on Mobile ad-hoc Networks.\nIn Personal Wireless Communications (PWC), pages 40-52, Venice, September 2003.\n[19] E. Yoneki and J. Bacon.\nPronto: Mobilegateway with publish-subscribe paradigm over wireless network.\nTechnical Report 559, University of Cambridge, Computer Laboratory, February 2003.\nMiddleware for Pervasive and ad-hoc Computing 126","lvl-3":"Adapting Asynchronous Messaging Middleware to Ad Hoc Networking\nABSTRACT\nThe characteristics of mobile environments, with the possibility of frequent disconnections and fluctuating bandwidth, have forced a rethink of traditional middleware.\nIn particular, the synchronous communication paradigms often employed in standard middleware do not appear to be particularly suited to ad hoc environments, in which not even the intermittent availability of a backbone network can be assumed.\nInstead, asynchronous communication seems to be a generally more suitable paradigm for such environments.\nMessage oriented middleware for traditional systems has been developed and used to provide an asynchronous paradigm of communication for distributed systems, and, recently, also for some specific mobile computing systems.\nIn this paper, we present
our experience in designing, implementing and evaluating EMMA (Epidemic Messaging Middleware for Ad hoc networks), an adaptation of Java Message Service (JMS) for mobile ad hoc environments.\nWe discuss in detail the design challenges and some possible solutions, showing a concrete example of the feasibility and suitability of the application of the asynchronous paradigm in this setting and outlining a research roadmap for the coming years.\n2.\nMESSAGE ORIENTED MIDDLEWARE AND JAVA MESSAGE SERVICE (JMS)\n4.\nJMS FOR MOBILE AD HOC NETWORKS\n4.1 Adaptation of JMS for Mobile Ad Hoc Networks\n4.2 Message Delivery using Epidemic Routing\n5.\nIMPLEMENTATION AND PRELIMINARY EVALUATION","lvl-4":"Adapting Asynchronous Messaging Middleware to Ad Hoc Networking\nABSTRACT\nThe characteristics of mobile environments, with the possibility of frequent disconnections and fluctuating bandwidth, have forced a rethink of traditional middleware.\nIn particular, the synchronous communication paradigms often employed in standard middleware do not appear to be particularly suited to ad hoc environments, in which not even the intermittent availability of a backbone network can be assumed.\nInstead, asynchronous communication seems to be a generally more suitable paradigm for such environments.\nMessage oriented middleware for traditional systems has been developed and used to provide an asynchronous paradigm of communication for distributed systems, and, recently, also for some specific mobile computing systems.\nIn this paper, we present our experience in designing, implementing and evaluating EMMA (Epidemic Messaging Middleware for Ad hoc networks), an adaptation of Java Message Service (JMS) for mobile ad hoc environments.\nWe discuss in detail the design challenges and some possible solutions, showing a concrete example of the feasibility and suitability of the application of the asynchronous paradigm in this setting and outlining a research roadmap for the coming 
years.","lvl-2":"Adapting Asynchronous Messaging Middleware to Ad Hoc Networking\nABSTRACT\nThe characteristics of mobile environments, with the possibility of frequent disconnections and fluctuating bandwidth, have forced a rethink of traditional middleware.\nIn particular, the synchronous communication paradigms often employed in standard middleware do not appear to be particularly suited to ad hoc environments, in which not even the intermittent availability of a backbone network can be assumed.\nInstead, asynchronous communication seems to be a generally more suitable paradigm for such environments.\nMessage oriented middleware for traditional systems has been developed and used to provide an asynchronous paradigm of communication for distributed systems, and, recently, also for some specific mobile computing systems.\nIn this paper, we present our experience in designing, implementing and evaluating EMMA (Epidemic Messaging Middleware for Ad hoc networks), an adaptation of Java Message Service (JMS) for mobile ad hoc environments.\nWe discuss in detail the design challenges and some possible solutions, showing a concrete example of the feasibility and suitability of the application of the asynchronous paradigm in this setting and outlining a research roadmap for the coming years.\n2.\nMESSAGE ORIENTED MIDDLEWARE AND JAVA MESSAGE SERVICE (JMS)\nMessage-oriented middleware systems support communication between distributed components via message-passing: the sender sends a message to identified provider on the wired network.\nHowever, this approach is somewhat limited in terms of widespread applicability and scalability as a consequence of the concentration of functionality in the wired portion of the network.\nIf JMS is to be adapted to completely ad hoc environments, where no fixed infrastructure is available, and where nodes change location and status very dynamically, more issues must be taken into consideration.\nFirstly, discovery needs to use a resilient 
but distributed model: in this extremely dynamic environment, static solutions are unacceptable.\nAs discussed in Section 2, a JMS administrator defines queues and topics on the provider.\nClients can then learn about them using the Java Naming and Directory Interface (JNDI).\nHowever, due to the way JNDI is designed, a JNDI node (or more than one) needs to be in reach in order to obtain a binding of a name to an address (i.e., knowing where a specific queue\/topic is).\nIn mobile ad hoc environments, the discovery process cannot assume the existence of a fixed set of discovery servers that are always reachable, as this would not match the dynamicity of ad hoc networks.\nSecondly, a JMS Provider, as suggested by the JMS specification, also needs to be reachable by each node in the network, in order to communicate.\nThis assumes a very centralised architecture, which again does not match the requirements of a mobile ad hoc setting, in which nodes may be moving and sparse: a more distributed and dynamic solution is needed.\nPersistence is, however, essential functionality in asynchronous communication environments as hosts are, by definition, connected at different times.\nIn the following section, we will discuss our experience in designing and implementing JMS for mobile ad hoc networks.\n4.\nJMS FOR MOBILE AD HOC NETWORKS\n4.1 Adaptation of JMS for Mobile Ad Hoc Networks\nDeveloping applications for mobile networks is yet more challenging: in addition to the same considerations as for infrastructured wireless environments, such as the limited device capabilities and power constraints, there are issues of rate of change of network connectivity, and the lack of a static routing infrastructure.\nConsequently, we now describe an initial attempt to adapt the JMS specification to target the particular requirements related to ad hoc scenarios.\nAs discussed in Section 3, a JMS application can use either the rather than the exception.\nIn other words, we cannot consider 
disconnections as failures.\nFor these reasons, we adopt a slightly different semantics.\nWith respect to durable subscriptions, if a subscriber becomes disconnected, notifications are not stored but are sent using the epidemic protocol rather than the synchronous protocol.\nIn other words, durable notifications remain valid during the possible disconnections of the subscriber.\nOn the other hand, if a non-durable subscriber becomes disconnected, its subscription is deleted; in other words, during disconnections, notifications are not sent using the epidemic protocol but exploit only the synchronous protocol.\nIf the topic becomes accessible to this host again, it must make another subscription in order to receive the notifications.\nUnsubscription messages are delivered in the same way as are subscription messages.\nIt is important to note that durable subscribers have explicitly to unsubscribe from a topic in order to stop the notification process; however, all durable subscriptions have a predefined expiration time in order to cope with the cases of subscribers that do not meet again because of their movements or failures.\nThis feature is clearly provided to limit the number of the unnecessary messages sent around the network.\n4.2 Message Delivery using Epidemic Routing\nIn this section, we examine one possible mechanism that will allow the delivery of messages in a partially connected network.\nThe mechanism we discuss is intended for the purposes of demonstrating feasibility; more efficient communication mechanisms for this environment are themselves complex, and are the subject of another paper [13].\nThe asynchronous message delivery described above is based on a typical pure epidemic-style routing protocol [16].\nA message that needs to be sent is replicated on each host in reach.\nIn this way, copies of the messages are quickly spread through connected networks, like an infection.\nIf a host becomes connected to another cloud of mobile nodes, during its 
movement, the message spreads through this collection of hosts.\nEpidemic-style replication of data and messages has been exploited in the past in many fields starting with the distributed database systems area [2].\nWithin epidemic routing, each host maintains a buffer containing the messages that it has created and the replicas of the messages generated by the other hosts.\nTo improve the performance, a hash-table indexes the content of the buffer.\nWhen two hosts connect, the host with the smaller identifier initiates a so-called successfully delivered that is updated as part of the normal process of information exchange between the hosts.\nThe lists are exchanged during the first steps of the anti-entropic epidemic protocol with a certain predefined frequency.\nIn the case of messages with multiple recipients, a list of the actual recipients is also stored.\nWhen a host receives the list, it checks its message buffer and updates it according to the following rules: (1) if a message has a single recipient and it has been delivered, it is deleted from the buffer; (2) if a message has multiple recipients, the identifiers of the delivered hosts are deleted from the associated list of recipients.\nIf the resulting length of the list of recipients is zero, the message is deleted from the buffer.\nThese lists have, clearly, finite dimensions and are implemented as circular queues.\nThis simple mechanism, together with the use of expiration timestamps, guarantees that the old acknowledgment notifications are deleted from the system after a limited period of time.\nIn order to improve the reliability of EMMA, a design mechanism for intelligent replication of queues and topics based on the context information could be developed.\nHowever this is not yet part of the current architecture of EMMA.\n5.\nIMPLEMENTATION AND PRELIMINARY EVALUATION\nWe implemented a prototype of our platform using the J2ME Personal Profile.\nThe size of the executable is about 250KB including 
the JMS 1.1 jar file; this is a perfectly acceptable figure given the available memory of the current mobile devices on the market.\nWe tested our prototype on HP IPaq PDAs running Linux, interconnected with WaveLan, and on a number of laptops with the same network interface.\nWe also evaluated the middleware platform using the OMNET++ discrete event simulator [17] in order to explore a range of mobile scenarios that incorporated a more realistic number of hosts than was achievable experimentally.\nMore specifically, we assessed the performance of the system in terms of delivery ratio and average delay, varying the density of population and the buffer size, and using persistent and non-persistent messages with different priorities.\nThe simulation results show that EMMA's performance, in terms of delivery ratio and delay of persistent messages with higher priorities, is good.\nIn general, it is evident that the delivery ratio is strongly related to the correct dimensioning of the buffers to the maximum acceptable delay.\nMoreover, the epidemic algorithms are able to guarantee a high delivery ratio if one evaluates performance over a time interval sufficient for the dissemination of the replicas of messages (i.e., the dleware system based on messaging that is specifically designed for mobile environments.\nThe platform is composed of three classes of components: mobile clients implementing the JMS specification, gateways that control traffic, guaranteeing efficiency and possible user customizations using different plug-ins and JMS servers.\nDifferent configurations of these components are possible; with respect to mobile ad hoc networks applications, the most interesting is","keyphrases":["asynchron messag middlewar","asynchron commun","messag orient middlewar","epidem messag middlewar","java messag servic","mobil ad-hoc environ","messag-orient middlewar","epidem protocol","cross-layer","applic level rout","group commun","middlewar for mobil comput","mobil
ad-hoc network","context awar"],"prmu":["P","P","P","P","P","P","M","M","U","M","M","R","R","U"]} {"id":"J-50","title":"Communication Complexity of Common Voting Rules","abstract":"We determine the communication complexity of the common voting rules. The rules (sorted by their communication complexity from low to high) are plurality, plurality with runoff, single transferable vote (STV), Condorcet, approval, Bucklin, cup, maximin, Borda, Copeland, and ranked pairs. For each rule, we first give a deterministic communication protocol and an upper bound on the number of bits communicated in it; then, we give a lower bound on (even the nondeterministic) communication requirements of the voting rule. The bounds match for all voting rules except STV and maximin.","lvl-1":"Communication Complexity of Common Voting Rules\u2217 Vincent Conitzer Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA conitzer@cs.cmu.edu Tuomas Sandholm Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA sandholm@cs.cmu.edu ABSTRACT We determine the communication complexity of the common voting rules.\nThe rules (sorted by their communication complexity from low to high) are plurality, plurality with runoff, single transferable vote (STV), Condorcet, approval, Bucklin, cup, maximin, Borda, Copeland, and ranked pairs.\nFor each rule, we first give a deterministic communication protocol and an upper bound on the number of bits communicated in it; then, we give a lower bound on (even the nondeterministic) communication requirements of the voting rule.\nThe bounds match for all voting rules except STV and maximin.\nCategories and Subject Descriptors F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Algorithms, Economics, Theory 1.\nINTRODUCTION One key factor in the practicality of any preference aggregation rule is its communication burden.\nTo 
successfully aggregate the agents' preferences, it is usually not necessary for all the agents to report all of their preference information.\nClever protocols that elicit the agents' preferences partially and sequentially have the potential to dramatically reduce the required communication.\nThis has at least the following advantages: \u2022 It can make preference aggregation feasible in settings where the total amount of preference information is too large to communicate.\n\u2022 Even when communicating all the preference information is feasible, reducing the communication requirements lessens the burden placed on the agents.\nThis is especially true when the agents, rather than knowing all their preferences in advance, need to invest effort (such as computation or information gathering) to determine their preferences [16].\n\u2022 It preserves (some of) the agents' privacy.\nMost of the work on reducing the communication burden in preference aggregation has focused on resource allocation settings such as combinatorial auctions, in which an auctioneer auctions off a number of (possibly distinct) items in a single event.\nBecause in a combinatorial auction, bidders can have separate valuations for each of an exponential number of possible bundles of items, this is a setting in which reducing the communication burden is especially crucial.\nThis can be accomplished by supplementing the auctioneer with an elicitor that incrementally elicits parts of the bidders' preferences on an as-needed basis, based on what the bidders have revealed about their preferences so far, as suggested by Conen and Sandholm [5].\nFor example, the elicitor can ask for a bidder's value for a specific bundle (value queries), which of two bundles the bidder prefers (order queries), which bundle he ranks kth or what the rank of a given bundle is (rank queries), which bundle he would purchase given a particular vector of prices (demand queries), etc.-until (at least) the final allocation
can be determined.\nExperimentally, this yields drastic savings in preference revelation [11].\nFurthermore, if the agents' valuation functions are drawn from certain natural subclasses, the elicitation problem can be solved using only polynomially many queries even in the worst case [23, 4, 13, 18, 14].\nFor a review of preference elicitation in combinatorial auctions, see [17].\nAscending combinatorial auctions are a well-known special form of preference elicitation, where the elicitor asks demand queries with increasing prices [15, 21, 1, 9].\nFinally, resource allocation problems have also been studied from a communication complexity viewpoint, thereby deriving lower bounds on the required communication.\nFor example, Nisan and Segal show that exponential communication is required even to obtain a surplus greater than that obtained by auctioning off all objects as a single bundle [14].\nSegal also studies social choice rules in general, and shows that for a large class of social choice rules, supporting budget sets must be revealed such that if every agent prefers the same outcome in her budget set, this proves the optimality of that outcome.\nSegal then uses this characterization to prove bounds on the communication required in resource allocation as well as matching settings [20].\nIn this paper, we will focus on the communication requirements of a generally applicable subclass of social choice rules, commonly known as voting rules.\nIn a voting setting, there is a set of candidate outcomes over which the voters express their preferences by submitting a vote (typically, a ranking of the candidates), and the winner (that is, the chosen outcome) is determined based on these votes.\nThe communication required by voting rules can be large either because the number of voters is large (such as, for example, in national elections), or because the number of candidates is large (for example, the agents can vote over allocations of a number of resources), or
both.\nPrior work [8] has studied elicitation in voting, studying how computationally hard it is to decide whether a winner can be determined with the information elicited so far, as well as how hard it is to find the optimal sequence of queries given perfect suspicions about the voters' preferences.\nIn addition, that paper discusses strategic (game-theoretic) issues introduced by elicitation.\nIn contrast, in this paper, we are concerned with the worst-case number of bits that must be communicated to execute a given voting rule, when nothing is known in advance about the voters' preferences.\nWe determine the communication complexity of the common voting rules.\nFor each rule, we first give an upper bound on the (deterministic) communication complexity by providing a communication protocol for it and analyzing how many bits need to be transmitted in this protocol.\n(Segal's results [20] do not apply to most voting rules because most voting rules are not intersection-monotonic (or even monotonic).1) For many of the voting rules under study, it turns out that one cannot do better than simply letting each voter immediately communicate all her (potentially relevant) information.\nHowever, for some rules (such as plurality with runoff, STV and cup) there is a straightforward multistage communication protocol that, with some analysis, can be shown to significantly outperform the immediate communication of all (potentially relevant) information.\nFinally, for some rules (such as the Condorcet and Bucklin rules), we need to introduce a more complex communication protocol to achieve the best possible upper bound.
1 For two of the rules that we study that are intersection-monotonic, namely the approval and Condorcet rules, Segal's results can in fact be used to give alternative proofs of our lower bounds.\nWe only give direct proofs for these rules here because 1) these direct proofs are among the easier ones in this paper, 2) the alternative proofs are nontrivial even given Segal's results, and 3) a space constraint applies.\nHowever, we hope to also include the alternative proofs in a later version.\nAfter obtaining the upper bounds, we show that they are tight by giving matching lower bounds on (even the nondeterministic) communication complexity of each voting rule.\nThere are two exceptions: STV, for which our upper and lower bounds are apart by a factor log m; and maximin, for which our best deterministic upper bound is also a factor log m above the (nondeterministic) lower bound, although we give a nondeterministic upper bound that matches the lower bound.\n2.\nREVIEW OF VOTING RULES In this section, we review the common voting rules that we study in this paper.\nA voting rule2 is a function mapping a vector of the n voters' votes (i.e. preferences over candidates) to one of the m candidates (the winner) in the candidate set C.\nIn some cases (such as the Condorcet rule), the rule may also declare that no winner exists.\nWe do not concern ourselves with what happens in case of a tie between candidates (our lower bounds hold regardless of how ties are broken, and the communication protocols used for our upper bounds do not attempt to break the ties).\nAll of the rules that we study are rank-based rules, which means that a vote is defined as an ordering of the candidates (with the exception of the plurality rule, for which a vote is a single candidate, and the approval rule, for which a vote is a subset of the candidates).\nWe will consider the following voting rules.\n(For rules that define a score, the candidate with the highest score wins.)\n\u2022 scoring rules.\nLet \u03b1 = \u03b11, ... , \u03b1m be a vector of integers such that \u03b11 \u2265 \u03b12 ...
\u2265 \u03b1m.\nFor each voter, a candidate receives \u03b11 points if it is ranked first by the voter, \u03b12 if it is ranked second, etc.\nThe score s\u03b1 of a candidate is the total number of points the candidate receives.\nThe Borda rule is the scoring rule with \u03b1 = m\u22121, m\u22122, ... , 0 .\nThe plurality rule is the scoring rule with \u03b1 = 1, 0, ... , 0 .\n\u2022 single transferable vote (STV).\nThe rule proceeds through a series of m \u2212 1 rounds.\nIn each round, the candidate with the lowest plurality score (that is, the least number of voters ranking it first among the remaining candidates) is eliminated (and each of the votes for that candidate transfers to the next remaining candidate in the order given in that vote).\nThe winner is the last remaining candidate.\n\u2022 plurality with run-off.\nIn this rule, a first round eliminates all candidates except the two with the highest plurality scores.\nVotes are transferred to these as in the STV rule, and a second round determines the winner from these two.\n\u2022 approval.\nEach voter labels each candidate as either approved or disapproved.\nThe candidate approved by the greatest number of voters wins.\n\u2022 Condorcet.\nFor any two candidates i and j, let N(i, j) be the number of voters who prefer i to j.\nIf there is a candidate i that is preferred to any other candidate by a majority of the voters (that is, N(i, j) > N(j, i) for all j \u2260 i-that is, i wins every pairwise election), then candidate i wins.\n2 The term voting protocol is often used to describe the same concept, but we seek to draw a sharp distinction between the rule mapping preferences to outcomes, and the communication\/elicitation protocol used to implement this rule.\n\u2022 maximin (aka.\nSimpson).\nThe maximin score of i is s(i) = minj\u2260i N(i, j)-that is, i's worst performance in a pairwise election.\nThe candidate with the highest maximin score wins.\n\u2022 Copeland.\nFor any two distinct candidates i and j, let
C(i, j) = 1 if N(i, j) > N(j, i), C(i, j) = 1/2 if N(i, j) = N(j, i), and C(i, j) = 0 if N(i, j) < N(j, i). The Copeland score of candidate i is s(i) = Σ_{j≠i} C(i, j).
• cup (sequential binary comparisons). The cup rule is defined by a balanced[3] binary tree T with one leaf per candidate, and an assignment of candidates to leaves (each leaf gets one candidate). Each non-leaf node is assigned the winner of the pairwise election of the node's children; the candidate assigned to the root wins.
• Bucklin. For any candidate i and integer l, let B(i, l) be the number of voters that rank candidate i among the top l candidates. The winner is argmin_i (min{l : B(i, l) > n/2}). That is, if we say that a voter approves her top l candidates, then we repeatedly increase l by 1 until some candidate is approved by more than half the voters, and this candidate is the winner.
• ranked pairs. This rule determines an order on all the candidates, and the winner is the candidate at the top of this order. Sort all ordered pairs of candidates (i, j) by N(i, j), the number of voters who prefer i to j.
Starting with the pair (i, j) with the highest N(i, j), we lock in the result of their pairwise election (i ≻ j). Then, we move to the next pair, and we lock in the result of their pairwise election. We continue to lock in every pairwise result that does not contradict the ordering established so far.
[3] Balanced means that the difference in depth between two leaves can be at most one.
We emphasize that these definitions of voting rules do not concern themselves with how the votes are elicited from the voters; all the voting rules, including those that are suggestively defined in terms of rounds, are in actuality just functions mapping the vector of all the voters' votes to a winner. Nevertheless, there are always many different ways of eliciting the votes (or the relevant parts thereof) from the voters. For example, in the plurality with runoff rule, one way of eliciting the votes is to ask every voter to declare her entire ordering of the candidates up front. Alternatively, we can first ask every voter to declare only her most preferred candidate; then, we will know the two candidates in the runoff, and we can ask every voter which of these two candidates she prefers. Thus, we distinguish between the voting rule (the mapping from vectors of votes to outcomes) and the communication protocol (which determines how the relevant parts of the votes are actually elicited from the voters). The goal of this paper is to give efficient communication protocols for the voting rules just defined, and to prove that there do not exist any more efficient communication protocols.
It is interesting to note that the choice of the communication protocol may affect the strategic behavior of the voters. Multistage communication protocols may reveal to the voters some information about how the other voters are voting (for example, in the two-stage communication protocol just given for plurality with runoff, in the second stage voters will know which two candidates have the highest plurality scores). In general, when the voters receive such information, it may give them incentives to vote differently than they would have in a single-stage communication protocol in which all voters declare their entire votes simultaneously. Of course, even the single-stage communication protocol is not strategy-proof[4] for any reasonable voting rule, by the Gibbard-Satterthwaite theorem [10, 19]. However, this does not mean that we should not be concerned about adding even more opportunities for strategic voting. In fact, many of the communication protocols introduced in this paper do introduce additional opportunities for strategic voting, but we do not have the space to discuss this here. (In prior work [8], we do give an example where an elicitation protocol for the approval voting rule introduces strategic voting, and give principles for designing elicitation protocols that do not introduce strategic problems.)
Now that we have reviewed voting rules, we move on to a brief review of communication complexity theory.
3. REVIEW OF SOME COMMUNICATION COMPLEXITY THEORY
In this section, we review the basic model of a communication problem and the lower-bounding technique of constructing a fooling set. (The basic model of a communication problem is due to Yao [22]; for an overview see Kushilevitz and Nisan [12].) Each player 1 ≤ i ≤ n knows (only) input x_i. Together, they seek to compute f(x_1, x_2, ..., x_n). In a deterministic protocol for computing f, in each stage, one of the players announces (to all other players) a bit of information based on her own input and the bits announced so far. Eventually, the communication terminates and all players know f(x_1, x_2, ..., x_n). The goal is to minimize the worst-case (over all input vectors) number of bits sent. The deterministic communication complexity of a problem is the worst-case number of bits sent in the best (correct) deterministic protocol for it. In a nondeterministic protocol, the next bit to be sent can be chosen nondeterministically. For the purposes of this paper, we will consider a nondeterministic protocol correct if for every input vector, there is some sequence of nondeterministic choices the players can make so that the players know the value of f when the protocol terminates. The nondeterministic communication complexity of a problem is the worst-case number of bits sent in the best (correct) nondeterministic protocol for it. We are now ready to give the definition of a fooling set.
Definition 1. A fooling set is a set of input vectors {(x^1_1, x^1_2, ..., x^1_n), (x^2_1, x^2_2, ..., x^2_n), ..., (x^k_1, x^k_2, ..., x^k_n)} such that for any i, f(x^i_1, x^i_2, ..., x^i_n) = f_0 for some constant f_0, but for any i ≠ j, there exists some vector (r_1, r_2, ..., r_n) ∈ {i, j}^n such that f(x^{r_1}_1, x^{r_2}_2, ..., x^{r_n}_n) ≠ f_0. (That is, we can mix the inputs from the two input vectors to obtain a vector with a different function value.)
It is known that if a fooling set of size k exists, then log k is a lower bound on the communication complexity (even the nondeterministic communication complexity) [12].
[4] A strategy-proof protocol is one in which it is in the players' best interest to report their preferences truthfully.
For the purposes of this paper, f is the voting rule that maps the votes to the winning candidate, and x_i is voter i's vote (the information that the voting rule would require from the voter if there were no possibility of multistage communication, i.e.
the most preferred candidate (plurality), the approved candidates (approval), or the ranking of all the candidates (all other protocols)). However, when we derive our lower bounds, f will only signify whether a distinguished candidate a wins. (That is, f is 1 if a wins, and 0 otherwise.) This will strengthen our lower bound results (because it implies that even finding out whether one given candidate wins is hard).[5] Thus, a fooling set in our context is a set of vectors of votes so that a wins (does not win) with each of them; but for any two different vote vectors in the set, there is a way of taking some voters' votes from the first vector and the others' votes from the second vector, so that a does not win (wins).
To simplify the proofs of our lower bounds, we make assumptions such as "the number of voters n is odd" in many of these proofs. Therefore, technically, we do not prove the lower bound for (number of candidates, number of voters) pairs (m, n) that do not satisfy these assumptions (for example, if we make the above assumption, then we technically do not prove the lower bound for any pair (m, n) in which n is even). Nevertheless, we always prove the lower bound for a representative set of (m, n) pairs. For example, for every one of our lower bounds it is the case that for infinitely many values of m, there are infinitely many values of n such that the lower bound is proved for the pair (m, n).
4. RESULTS
We are now ready to present our results. For each voting rule, we first give a deterministic communication protocol for determining the winner to establish an upper bound. Then, we give a lower bound on the nondeterministic communication complexity (even on the complexity of deciding whether a given candidate wins, which is an easier question). The lower bounds match the upper bounds in all but two cases: the STV rule (upper bound O(n(log m)^2); lower bound Ω(n log m)) and the maximin rule (upper bound O(nm log m), although we do give a nondeterministic protocol that is O(nm); lower bound Ω(nm)). When we discuss a voting rule in which the voters rank the candidates, we will represent a ranking in which candidate c_1 is ranked first, c_2 is ranked second, etc. as c_1 ≻ c_2 ≻ ... ≻ c_m.
[5] One possible concern is that in the case where ties are possible, it may require much communication to verify whether a specific candidate a is among the winners, but little communication to produce one of the winners. However, all the fooling sets we use in the proofs have the property that if a wins, then a is the unique winner. Therefore, in these fooling sets, if one knows any one of the winners, then one knows whether a is a winner. Thus, computing one of the winners requires at least as much communication as verifying whether a is among the winners. In general, when a communication problem allows multiple correct answers for a given vector of inputs, this is known as computing a relation rather than a function [12]. However, as per the above, we can restrict our attention to a subset of the domain where the voting rule truly is a (single-valued) function, and hence lower bounding techniques for functions rather than relations will suffice.
Sometimes for the purposes of a proof the internal ranking of a subset of the candidates does not matter, and in this case we will not specify it. For example, if S = {c_2, c_3}, then c_1 ≻ S ≻ c_4 indicates that either the ranking c_1 ≻ c_2 ≻ c_3 ≻ c_4 or the ranking c_1 ≻ c_3 ≻ c_2 ≻ c_4 can be used for the proof. We first give a universal upper bound.
Theorem 1. The deterministic communication complexity of any rank-based voting rule is O(nm log m).
Proof. This bound is achieved by simply having everyone communicate their entire ordering of the candidates (indicating the rank of an individual candidate requires only O(log m) bits, so each of the n voters can simply indicate the rank of each of the m candidates).
The next lemma will be useful in a few of our proofs.
Lemma 1. If m divides
n, then log(n!) − m·log((n/m)!) ≥ n(log m − 1)/2.
Proof. If n/m = 1 (that is, n = m), then this expression simplifies to log(n!). We have log(n!) = Σ_{i=1}^{n} log i ≥ ∫_{x=1}^{n} log x dx, which, using integration by parts, is equal to n log n − (n − 1) > n(log n − 1) = n(log m − 1) > n(log m − 1)/2. So, we can assume that n/m ≥ 2. We observe that log(n!) = Σ_{i=1}^{n} log i = Σ_{i=0}^{n/m−1} Σ_{j=1}^{m} log(im + j) ≥ Σ_{i=1}^{n/m−1} Σ_{j=1}^{m} log(im) = m·Σ_{i=1}^{n/m−1} log(im), and that m·log((n/m)!) = m·Σ_{i=1}^{n/m} log i. Therefore, log(n!) − m·log((n/m)!) ≥ m·Σ_{i=1}^{n/m−1} log(im) − m·Σ_{i=1}^{n/m} log i = m((Σ_{i=1}^{n/m−1} log(im/i)) − log(n/m)) = m((n/m − 1) log m − log n + log m) = n log m − m log n. Now, using the fact that n/m ≥ 2, we have m log n = n(m/n) log(m(n/m)) = n(m/n)(log m + log(n/m)) ≤ n(1/2)(log m + log 2). Thus, log(n!) − m·log((n/m)!) ≥ n log m − m log n ≥ n log m − n(1/2)(log m + log 2) = n(log m − 1)/2.
Theorem 2. The deterministic communication complexity of the plurality rule is O(n log m).
Proof. Indicating one of the candidates requires only O(log m) bits, so each voter can simply indicate her most preferred candidate.
Theorem 3. The nondeterministic communication complexity of the plurality rule is Ω(n log m) (even to decide whether a given candidate a wins).
Proof. We will exhibit a fooling set of size n'!/((n'/m)!)^m, where n' = (n − 1)/2. Taking the logarithm of this gives log(n'!) − m·log((n'/m)!), so the result follows from Lemma 1. The fooling set will consist of all vectors of votes satisfying the following constraints:
• For any 1 ≤ i ≤ n', voters 2i − 1 and 2i vote the same.
• Every candidate receives equally many votes from the first 2n' = n − 1 voters.
• The last voter (voter n) votes for a.
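As an aside, this construction is concrete enough to check mechanically for small parameters. The following Python sketch is our own illustration, not part of the paper; the instance (m = 3 candidates, n = 7 voters) and all helper names are assumptions. It enumerates the fooling set above and verifies both properties: a wins every vote vector, and mixing two distinct vectors can make a lose.

```python
from itertools import permutations
from collections import Counter

def plurality_winners(votes):
    """Return the set of candidates with the highest plurality score."""
    tally = Counter(votes)
    best = max(tally.values())
    return {c for c, score in tally.items() if score == best}

# Small instance: m candidates, n' voter pairs with n'/m pairs assigned to each
# candidate, plus one extra voter who always votes for the distinguished candidate a.
m, pairs_per_cand = 3, 1                  # n' = 3, so n = 2*3 + 1 = 7 voters
cands = [chr(ord('a') + i) for i in range(m)]
a = 'a'                                   # the distinguished candidate

# The fooling set: one vote vector per way of assigning the n' voter pairs
# evenly to the m candidates (n'!/((n'/m)!)^m assignments in total).
base = [c for c in cands for _ in range(pairs_per_cand)]
fooling_set = []
for assignment in set(permutations(base)):
    votes = [c for c in assignment for _ in range(2)] + [a]  # each pair votes twice
    fooling_set.append((assignment, votes))

# Property 1: a is the unique plurality winner for every vector in the fooling set.
assert all(plurality_winners(v) == {a} for _, v in fooling_set)

# Property 2 (mixing): pick two distinct vectors and a pair on which they disagree,
# such that the first vector's pair votes for some b != a; taking that pair's two
# votes from the first vector and all remaining votes from the second makes a lose.
(as1, v1), (as2, v2) = fooling_set[0], fooling_set[1]
i = next(i for i in range(len(as1)) if as1[i] != as2[i] and as1[i] != a)
mixed = v2[:2 * i] + v1[2 * i:2 * i + 2] + v2[2 * i + 2:]
assert plurality_winners(mixed) != {a}
```

In the mixed vector, the candidate b that the transplanted pair votes for receives 2n'/m + 2 votes while a receives at most 2n'/m + 1, matching the counting in the proof.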
Candidate a wins with each one of these vote vectors because of the extra vote for a from the last voter. Given that m divides n', let us see how many vote vectors there are in the fooling set. We need to distribute n' voter pairs evenly over m candidates, for a total of n'/m voter pairs per candidate; and there are precisely n'!/((n'/m)!)^m ways of doing this.[6] All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let i be a number such that the two vote vectors disagree on the candidate for which voters 2i − 1 and 2i vote. Without loss of generality, suppose that in the first vote vector, these voters do not vote for a (but for some other candidate, b, instead). Now, construct a new vote vector by taking votes 2i − 1 and 2i from the first vote vector, and the remaining votes from the second vote vector. Then, b receives 2n'/m + 2 votes in this newly constructed vote vector, whereas a receives at most 2n'/m + 1 votes. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.
Theorem 4. The deterministic communication complexity of the plurality with runoff rule is O(n log m).
Proof. First, let every voter indicate her most preferred candidate using log m bits. After this, the two candidates in the runoff are known, and each voter can indicate which one she prefers using a single additional bit.
Theorem 5. The nondeterministic communication complexity of the plurality with runoff rule is Ω(n log m) (even to decide whether a given candidate a wins).
Proof. We will exhibit a fooling set of size n'!/((n'/m')!)^{m'}, where m' = m/2 and n' = (n − 2)/4. Taking the logarithm of this gives log(n'!) − m'·log((n'/m')!), so the result follows from Lemma 1. Divide the candidates into m' pairs: (c_1, d_1), (c_2, d_2), ...
, (c_{m'}, d_{m'}), where c_1 = a and d_1 = b. The fooling set will consist of all vectors of votes satisfying the following constraints:
• For any 1 ≤ i ≤ n', voters 4i − 3 and 4i − 2 rank the candidates c_{k(i)} ≻ a ≻ C − {a, c_{k(i)}}, for some candidate c_{k(i)}. (If c_{k(i)} = a, then the vote is simply a ≻ C − {a}.)
• For any 1 ≤ i ≤ n', voters 4i − 1 and 4i rank the candidates d_{k(i)} ≻ a ≻ C − {a, d_{k(i)}} (that is, their most preferred candidate is the candidate that is paired with the candidate that the previous two voters vote for).
• Every candidate is ranked at the top of equally many of the first 4n' = n − 2 votes.
• Voter 4n' + 1 = n − 1 ranks the candidates a ≻ C − {a}.
• Voter 4n' + 2 = n ranks the candidates b ≻ C − {b}.
[6] An intuitive proof of this is the following. We can count the number of permutations of n' elements as follows. First, divide the elements into m buckets of size n'/m, so that if x is placed in a lower-indexed bucket than y, then x will be indexed lower in the eventual permutation. Then, decide on the permutation within each bucket (for which there are (n'/m)! choices per bucket). It follows that n'! equals the number of ways to divide n' elements into m buckets of size n'/m, times ((n'/m)!)^m.
Candidate a wins with each one of these vote vectors: because of the last two votes, candidates a and b are one vote ahead of all the other candidates and continue to the runoff, and at this point all the votes that had another candidate ranked at the top transfer to a, so that a wins the runoff. Given that m' divides n', let us see how many vote vectors there are in the fooling set. We need to distribute n' groups of four voters evenly over the m' pairs of candidates, and (as in the proof of Theorem 3) there are n'!/((n'/m')!)^{m'} ways of doing this. All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote
according to one of these two vectors in such a way that a loses. Let i be a number such that c_{k(i)} is not the same in both of these two vote vectors, that is, c^1_{k(i)} (c_{k(i)} in the first vote vector) is not equal to c^2_{k(i)} (c_{k(i)} in the second vote vector). Without loss of generality, suppose c^1_{k(i)} ≠ a. Now, construct a new vote vector by taking votes 4i − 3, 4i − 2, 4i − 1, 4i from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, c^1_{k(i)} and d^1_{k(i)} each receive 4n'/m + 2 votes in the first round, whereas a receives at most 4n'/m + 1 votes. So, a does not continue to the runoff in the newly constructed vote vector, and hence we have a correct fooling set.
Theorem 6. The nondeterministic communication complexity of the Borda rule is Ω(nm log m) (even to decide whether a given candidate a wins).
Proof. We will exhibit a fooling set of size (m'!)^{n'}, where m' = m − 2 and n' = (n − 2)/4. This will prove the theorem because log(m'!) is Ω(m log m), so that log((m'!)^{n'}) = n'·log(m'!) is Ω(nm log m). For every vector (π_1, π_2, ..., π_{n'}) consisting of n' orderings of all candidates other than a and another fixed candidate b (technically, the orderings take the form of a one-to-one function π_i : {1, 2, ..., m'} → C − {a, b}, with π_i(j) = c indicating that candidate c is the jth in the order represented by π_i), let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voters 4i − 3 and 4i − 2 rank the candidates a ≻ b ≻ π_i(1) ≻ π_i(2) ≻ ... ≻ π_i(m').
• For 1 ≤ i ≤ n', let voters 4i − 1 and 4i rank the candidates π_i(m') ≻ π_i(m' − 1) ≻ ... ≻ π_i(1) ≻ b ≻ a.
• Let voter 4n' + 1 = n − 1 rank the candidates a ≻ b ≻ π_0(1) ≻ π_0(2) ≻ ...
≻ π_0(m') (where π_0 is an arbitrary order of the candidates other than a and b, which is the same for every element of the fooling set).
• Let voter 4n' + 2 = n rank the candidates π_0(m') ≻ π_0(m' − 1) ≻ ... ≻ π_0(1) ≻ a ≻ b.
We observe that this fooling set has size (m'!)^{n'}, and that candidate a wins in each vector of votes in the fooling set (to see why, we observe that for any 1 ≤ i ≤ n', votes 4i − 3 and 4i − 2 rank the candidates in the exact opposite way from votes 4i − 1 and 4i, which under the Borda rule means they cancel out; and the last two votes give one more point to a than to any other candidate, besides b, who gets two fewer points than a). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (π^1_1, π^1_2, ..., π^1_{n'}), and let the second vote vector correspond to the vector (π^2_1, π^2_2, ...
, π^2_{n'}). For some i, we must have π^1_i ≠ π^2_i, so that for some candidate c ∉ {a, b}, (π^1_i)^{−1}(c) < (π^2_i)^{−1}(c) (that is, c is ranked higher in π^1_i than in π^2_i). Now, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector. a's Borda score remains unchanged. However, because c is ranked higher in π^1_i than in π^2_i, c receives at least 2 more points from votes 4i − 3 and 4i − 2 in the newly constructed vote vector than it did in the second vote vector. It follows that c has a higher Borda score than a in the newly constructed vote vector (since c's score was only one point behind a's). So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.
Theorem 7. The nondeterministic communication complexity of the Copeland rule is Ω(nm log m) (even to decide whether a given candidate a wins).
Proof. We will exhibit a fooling set of size (m'!)^{n'}, where m' = (m − 2)/2 and n' = (n − 2)/2. This will prove the theorem because log(m'!) is Ω(m log m), so that log((m'!)^{n'}) = n'·log(m'!) is Ω(nm log m). We write the set of candidates as the following disjoint union: C = {a, b} ∪ L ∪ R, where L = {l_1, l_2, ..., l_{m'}} and R = {r_1, r_2, ..., r_{m'}}. For every vector (π_1, π_2, ..., π_{n'}) consisting of n' permutations of the integers 1 through m' (π_i : {1, 2, ..., m'} → {1, 2, ..., m'}), let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates a ≻ b ≻ l_{π_i(1)} ≻ r_{π_i(1)} ≻ l_{π_i(2)} ≻ r_{π_i(2)} ≻ ... ≻ l_{π_i(m')} ≻ r_{π_i(m')}.
• For 1 ≤ i ≤ n', let voter 2i rank the candidates r_{π_i(m')} ≻ l_{π_i(m')} ≻ r_{π_i(m'−1)} ≻ l_{π_i(m'−1)} ≻ ... ≻ r_{π_i(1)} ≻ l_{π_i(1)} ≻ b ≻ a.
• Let voter n − 1 = 2n' + 1 rank the candidates a ≻ b ≻ l_1 ≻ r_1 ≻ l_2 ≻ r_2 ≻ ...
≻ l_{m'} ≻ r_{m'}.
• Let voter n = 2n' + 2 rank the candidates r_{m'} ≻ l_{m'} ≻ r_{m'−1} ≻ l_{m'−1} ≻ ... ≻ r_1 ≻ l_1 ≻ a ≻ b.
We observe that this fooling set has size (m'!)^{n'}, and that candidate a wins in each vector of votes in the fooling set (every pair of candidates is tied in their pairwise election, with the exception that a defeats b, so that a wins the election by half a point). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (π^1_1, π^1_2, ..., π^1_{n'}), and let the second vote vector correspond to the vector (π^2_1, π^2_2, ..., π^2_{n'}). For some i, we must have π^1_i ≠ π^2_i, so that for some j ∈ {1, 2, ..., m'}, we have (π^1_i)^{−1}(j) < (π^2_i)^{−1}(j). Now, construct a new vote vector by taking vote 2i − 1 from the first vote vector, and the remaining votes from the second vote vector. a's Copeland score remains unchanged. Let us consider the score of l_j. We first observe that the rank of l_j in vote 2i − 1 in the newly constructed vote vector is at least 2 higher than it was in the second vote vector, because (π^1_i)^{−1}(j) < (π^2_i)^{−1}(j). Let D^1(l_j) be the set of candidates in L ∪ R that voter 2i − 1 ranked lower than l_j in the first vote vector (D^1(l_j) = {c ∈ L ∪ R : l_j ≻^1_{2i−1} c}), and let D^2(l_j) be the set of candidates in L ∪ R that voter 2i − 1 ranked lower than l_j in the second vote vector (D^2(l_j) = {c ∈ L ∪ R : l_j ≻^2_{2i−1} c}). Then, it follows that in the newly constructed vote vector, l_j defeats all the candidates in D^1(l_j) − D^2(l_j) in their pairwise elections (because l_j receives an extra vote in each one of these pairwise elections relative to the second vote vector), and loses to all the candidates in D^2(l_j) − D^1
(l_j) (because l_j loses a vote in each one of these pairwise elections relative to the second vote vector), and ties with everyone else. But |D^1(l_j)| − |D^2(l_j)| ≥ 2, and hence |D^1(l_j) − D^2(l_j)| − |D^2(l_j) − D^1(l_j)| ≥ 2. Hence, in the newly constructed vote vector, l_j has at least two more pairwise wins than pairwise losses, and therefore has at least 1 more point than if l_j had tied all its pairwise elections. Thus, l_j has a higher Copeland score than a in the newly constructed vote vector. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.
Theorem 8. The nondeterministic communication complexity of the maximin rule is O(nm).
Proof. The nondeterministic protocol will guess which candidate w is the winner, and, for each other candidate c, which candidate o(c) is the candidate against whom c receives its lowest score in a pairwise election. Then, let every voter communicate the following:
• for each candidate c ≠ w, whether she prefers c to w;
• for each candidate c ≠ w, whether she prefers c to o(c).
We observe that this requires the communication of 2n(m − 1) bits. If the guesses were correct, then, letting N(d, e) be the number of voters preferring candidate d to candidate e, we should have N(c, o(c)) < N(w, c') for any c ≠ w, c' ≠ w, which will prove that w wins the election.
Theorem 9. The nondeterministic communication complexity of the maximin rule is Ω(nm) (even to decide whether a given candidate a wins).
Proof. We will exhibit a fooling set of size 2^{n'm'}, where m' = m − 2 and n' = (n − 1)/4. Let b be a candidate other than a. For every vector (S_1, S_2, ..., S_{n'}) consisting of n' subsets S_i ⊆ C − {a, b}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voters 4i − 3 and 4i − 2 rank the candidates S_i ≻ a ≻ C − (S_i ∪ {a, b}) ≻ b.
• For 1 ≤ i ≤ n', let voters 4i − 1 and 4i rank the candidates b ≻ C − (S_i ∪ {a, b}) ≻ a ≻ S_i.
• Let voter 4n' + 1 = n rank the candidates a ≻ b ≻ C − {a, b}.
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'}, and that candidate a wins in each vector of votes in the fooling set (in every one of a's pairwise elections, a is ranked higher than its opponent by 2n' + 1 = (n + 1)/2 > n/2 votes). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S^1_1, S^1_2, ..., S^1_{n'}), and let the second vote vector correspond to the vector (S^2_1, S^2_2, ..., S^2_{n'}). For some i, we must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without loss of generality, suppose S^1_i ⊈ S^2_i, and let c be some candidate in S^1_i − S^2_i. Now, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, a is ranked higher than c by only 2n' − 1 voters, for the following reason. Whereas voters 4i − 3 and 4i − 2 do not rank c higher than a in the second vote vector (because c ∉ S^2_i), voters 4i − 3 and 4i − 2 do rank c higher than a in the first vote vector (because c ∈ S^1_i). Moreover, in every one of b's pairwise elections, b is ranked higher than its opponent by at least 2n' voters. So, a has a lower maximin score than b; therefore, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.
Theorem 10. The deterministic communication complexity of the STV rule is O(n(log m)^2).
Proof. Consider the following communication protocol. Let each voter first announce her most preferred candidate (O(n log m) communication). In the remaining
rounds, we will keep track of each voter's most preferred candidate among the remaining candidates, which will be enough to implement the rule. When candidate c is eliminated, let each of the voters whose most preferred candidate among the remaining candidates was c announce their most preferred candidate among the candidates remaining after c's elimination. If candidate c was the ith candidate to be eliminated (that is, there were m − i + 1 candidates remaining before c's elimination), it follows that at most n/(m − i + 1) voters had candidate c as their most preferred candidate among the remaining candidates, and thus the number of bits to be communicated after the elimination of the ith candidate is O((n/(m − i + 1)) log m).[7] Thus, the total communication in this communication protocol is O(n log m + Σ_{i=1}^{m−1} (n/(m − i + 1)) log m). Of course, Σ_{i=1}^{m−1} 1/(m − i + 1) = Σ_{i=2}^{m} 1/i, which is O(log m). Substituting into the previous expression, we find that the communication complexity is O(n(log m)^2).
Theorem 11. The nondeterministic communication complexity of the STV rule is Ω(n log m) (even to decide whether a given candidate a wins).
Proof. We omit this proof because of space constraints.
[7] Actually, O((n/(m − i + 1)) log(m − i + 1)) is also correct, but it will not improve the bound.
Theorem 12. The deterministic communication complexity of the approval rule is O(nm).
Proof. Approving or disapproving of a candidate requires only one bit of information, so every voter can simply approve or disapprove of every candidate, for a total communication of nm bits.
Theorem 13. The nondeterministic communication complexity of the approval rule is Ω(nm) (even to decide whether a given candidate a wins).
Proof. We will exhibit a fooling set of size 2^{n'm'}, where m' = m − 1 and n' = (n − 1)/4. For every vector (S_1, S_2, ...
, S_{n'}) consisting of n' subsets S_i ⊆ C − {a}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voters 4i − 3 and 4i − 2 approve S_i ∪ {a}.
• For 1 ≤ i ≤ n', let voters 4i − 1 and 4i approve C − (S_i ∪ {a}).
• Let voter 4n' + 1 = n approve {a}.
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'}, and that candidate a wins in each vector of votes in the fooling set (a is approved by 2n' + 1 voters, whereas each other candidate is approved by only 2n' voters). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S^1_1, S^1_2, ..., S^1_{n'}), and let the second vote vector correspond to the vector (S^2_1, S^2_2, ..., S^2_{n'}). For some i, we must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without loss of generality, suppose S^1_i ⊈ S^2_i, and let b be some candidate in S^1_i − S^2_i. Now, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, a is still approved by 2n' + 1 votes. However, b is approved by 2n' + 2 votes, for the following reason. Whereas voters 4i − 3 and 4i − 2 do not approve b in the second vote vector (because b ∉ S^2_i), voters 4i − 3 and 4i − 2 do approve b in the first vote vector (because b ∈ S^1_i). It follows that b's score in the newly constructed vote vector is b's score in the second vote vector (2n'), plus two. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.
Interestingly, an Ω(m) lower bound can be obtained even for the problem of finding a candidate that is approved by more than one voter
[20].
Theorem 14. The deterministic communication complexity of the Condorcet rule is O(nm).
Proof. We maintain a set of active candidates S, which is initialized to C. At each stage, we choose two of the active candidates (say, the two candidates with the lowest indices), and we let each voter communicate which of the two candidates she prefers. (Such a stage requires the communication of n bits, one per voter.) The candidate preferred by fewer voters (the loser of the pairwise election) is removed from S. (If the pairwise election is tied, both candidates are removed.) After at most m − 1 iterations, only one candidate is left (or zero candidates are left, in which case there is no Condorcet winner). Let a be the remaining candidate. To find out whether candidate a is the Condorcet winner, let each voter communicate, for every candidate c ≠ a, whether she prefers a to c. (This requires the communication of at most n(m − 1) bits.) This is enough to establish whether a won each of its pairwise elections (and thus, whether a is the Condorcet winner).
Theorem 15. The nondeterministic communication complexity of the Condorcet rule is Ω(nm) (even to decide whether a given candidate a wins).
Proof. We will exhibit a fooling set of size 2^{n'm'}, where m' = m − 1 and n' = (n − 1)/2. For every vector (S_1, S_2, ...
Theorem 15. The nondeterministic communication complexity of the Condorcet rule is Ω(nm) (even to decide whether a given candidate a wins).
Proof. We will exhibit a fooling set of size 2^{n'm'} where m' = m − 1 and n' = (n − 1)/2. For every vector (S_1, S_2, ..., S_{n'}) consisting of n' subsets S_i ⊆ C − {a}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates S_i ≻ a ≻ C − S_i.
• For 1 ≤ i ≤ n', let voter 2i rank the candidates C − S_i ≻ a ≻ S_i.
• Let voter 2n' + 1 = n rank the candidates a ≻ C − {a}.
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'}, and that candidate a wins in each vector of votes in the fooling set (a wins each of its pairwise elections by a single vote). All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S^1_1, S^1_2, ..., S^1_{n'}), and let the second vote vector correspond to the vector (S^2_1, S^2_2, ..., S^2_{n'}). For some i, we must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without loss of generality, suppose S^1_i ⊈ S^2_i, and let b be some candidate in S^1_i − S^2_i. Now, construct a new vote vector by taking vote 2i − 1 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, b wins its pairwise election against a by one vote (vote 2i − 1 ranks b above a in the newly constructed vote vector because b ∈ S^1_i, whereas in the second vote vector vote 2i − 1 ranked a above b because b ∉ S^2_i). So, a is not the Condorcet winner in the newly constructed vote vector, and hence we have a correct fooling set.
Theorem 16. The deterministic communication complexity of the cup rule is O(nm).
Proof. Consider the following simple communication protocol. First, let all the voters communicate, for every one of the matchups in the first round, which of its two candidates they prefer. After this, the matchups for the second round are known, so let all the voters communicate which
candidate they prefer in each matchup in the second round, etc. Because communicating which of two candidates is preferred requires only one bit per voter, and because there are only m − 1 matchups in total, this communication protocol requires O(nm) communication.
Theorem 17. The nondeterministic communication complexity of the cup rule is Ω(nm) (even to decide whether a given candidate a wins).
Proof. We will exhibit a fooling set of size 2^{n'm'} where m' = (m − 1)/2 and n' = (n − 7)/2. Given that m + 1 is a power of 2, one candidate gets a bye (that is, does not face an opponent) in the first round; let a be the candidate with the bye. Of the m' first-round matchups, let l_j denote the one (left) candidate in the jth matchup, and let r_j be the other (right) candidate. Let L = {l_j : 1 ≤ j ≤ m'} and R = {r_j : 1 ≤ j ≤ m'}, so that C = L ∪ R ∪ {a}. (Figure 1: The schedule for the cup rule used in the proof of Theorem 17, pairing l_1 against r_1, l_2 against r_2, ..., l_{m'} against r_{m'}, with a receiving the bye.) For every vector (S_1, S_2, ..., S_{n'}) consisting of n' subsets S_i ⊆ R, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates S_i ≻ L ≻ a ≻ R − S_i.
• For 1 ≤ i ≤ n', let voter 2i rank the candidates R − S_i ≻ L ≻ a ≻ S_i.
• Let voters 2n' + 1 = n − 6, 2n' + 2 = n − 5, 2n' + 3 = n − 4 rank the candidates L ≻ a ≻ R.
• Let voters 2n' + 4 = n − 3, 2n' + 5 = n − 2 rank the candidates a ≻ r_1 ≻ l_1 ≻ r_2 ≻ l_2 ≻ ... ≻ r_{m'} ≻ l_{m'}.
• Let voters 2n' + 6 = n − 1, 2n' + 7 = n rank the candidates r_{m'} ≻ l_{m'} ≻ r_{m'−1} ≻ l_{m'−1} ≻ ...
≻ r_1 ≻ l_1 ≻ a.
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'}. Also, candidate a wins in each vector of votes in the fooling set, for the following reasons. Each candidate r_j defeats its opponent l_j in the first round. (For any 1 ≤ i ≤ n', the net effect of votes 2i − 1 and 2i on the pairwise election between r_j and l_j is zero; votes n − 6, n − 5, n − 4 prefer l_j to r_j, but votes n − 3, n − 2, n − 1, n all prefer r_j to l_j.) Moreover, a defeats every r_j in their pairwise election. (For any 1 ≤ i ≤ n', the net effect of votes 2i − 1 and 2i on the pairwise election between a and r_j is zero; votes n − 1, n prefer r_j to a, but votes n − 6, n − 5, n − 4, n − 3, n − 2 all prefer a to r_j.) It follows that a will defeat all the candidates that it faces. All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S^1_1, S^1_2, ..., S^1_{n'}), and let the second vote vector correspond to the vector (S^2_1, S^2_2, ...
, S^2_{n'}). For some i, we must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without loss of generality, suppose S^1_i ⊈ S^2_i, and let r_j be some candidate in S^1_i − S^2_i. Now, construct a new vote vector by taking vote 2i from the first vote vector, and the remaining votes from the second vote vector. We note that, whereas in the second vote vector vote 2i preferred r_j to l_j (because r_j ∈ R − S^2_i), in the newly constructed vote vector this is no longer the case (because r_j ∈ S^1_i). It follows that, whereas in the second vote vector r_j defeated l_j in the first round by one vote, in the newly constructed vote vector l_j defeats r_j in the first round. Thus, at least one l_j advances to the second round after defeating its opponent r_j. Now, we observe that in the newly constructed vote vector, any l_k wins its pairwise election against any r_q with q ≠ k. This is because among the first 2n' votes, at least n' − 1 prefer l_k to r_q; votes n − 6, n − 5, n − 4 prefer l_k to r_q; and, because q ≠ k, either votes n − 3, n − 2 prefer l_k to r_q (if k < q), or votes n − 1, n prefer l_k to r_q (if k > q). Thus, at least n' + 4 = (n + 1)/2 > n/2 votes prefer l_k to r_q. Moreover, any l_k wins its pairwise election against a. This is because only votes n − 3 and n − 2 prefer a to l_k. It follows that, after the first round, any surviving candidate l_k can only lose a matchup against another surviving l_{k'}, so that one of the l_k must win the election. So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.
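For contrast with this lower bound, the round-by-round protocol of Theorem 16 is easy to simulate. The Python sketch below (illustrative names; it assumes a full bracket with m a power of two and no bye, unlike the schedule in Figure 1, and an odd number of voters so pairwise elections cannot tie) runs the cup rule and counts exactly n(m − 1) bits.

```python
def cup_protocol(profile, seeding):
    # Round-by-round simulation of the protocol in the proof of Theorem 16.
    # seeding: the candidates in bracket order; adjacent pairs play a
    # pairwise majority election, and winners are re-paired each round.
    def prefers(r, x, y):
        return r.index(x) < r.index(y)

    n = len(profile)
    bits = 0
    rnd = list(seeding)
    while len(rnd) > 1:
        nxt = []
        for x, y in zip(rnd[::2], rnd[1::2]):
            votes_x = sum(prefers(r, x, y) for r in profile)
            bits += n                 # one bit per voter for this matchup
            nxt.append(x if votes_x > n - votes_x else y)
        rnd = nxt
    return rnd[0], bits

profile = [['a', 'b', 'c', 'd'], ['b', 'a', 'd', 'c'], ['a', 'd', 'b', 'c']]
winner, bits = cup_protocol(profile, ['a', 'b', 'c', 'd'])
# m - 1 = 3 matchups in total, so exactly n(m - 1) = 9 bits are sent.
```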
Theorem 18. The deterministic communication complexity of the Bucklin rule is O(nm).
Proof. Let l be the minimum integer for which there is a candidate who is ranked among the top l candidates by more than half the votes. We will do a binary search for l. At each point, we will have a lower bound l_L which is smaller than l (initialized to 0), and an upper bound l_H which is at least l (initialized to m). While l_H − l_L > 1, we continue by finding out whether (l_L + l_H)/2 is smaller than l, after which we can update the bounds. To find out whether a number k is smaller than l, we determine every voter's k most preferred candidates. Every voter can communicate which candidates are among her k most preferred candidates using m bits (for each candidate, indicate whether the candidate is among the top k or not), but because the binary search requires log m iterations, this gives us an upper bound of O((log m)nm), which is not strong enough. However, if l_L < k < l_H, and we already know a voter's l_L most preferred candidates, as well as her l_H most preferred candidates, then the voter no longer needs to communicate whether the l_L most preferred candidates are among her k most preferred candidates (because they must be), and she no longer needs to communicate whether the m − l_H least preferred candidates are among her k most preferred candidates (because they cannot be). Thus the voter needs to communicate only m − l_L − (m − l_H) = l_H − l_L bits in any given stage. Because in each stage l_H − l_L is (roughly) halved, each voter in total communicates only (roughly) m + m/2 + m/4 + ... ≤ 2m bits.
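The binary search with this bit-saving refinement can be sketched in Python as follows (names and the toy profile are mine; the bit count implements the per-stage l_H − l_L accounting from the proof).

```python
def bucklin_protocol(profile):
    # Binary search for the Bucklin depth l, as in the proof of Theorem 18,
    # counting communicated bits with the proof's refinement: in a stage with
    # bounds (lo, hi), each voter sends only hi - lo bits, so per voter the
    # total is roughly m + m/2 + m/4 + ... <= 2m bits.
    n, m = len(profile), len(profile[0])
    cands = set(profile[0])

    def majority_at_depth(k):
        # Is some candidate ranked in the top k by more than half the voters?
        return any(sum(c in r[:k] for r in profile) > n / 2 for c in cands)

    bits = 0
    lo, hi = 0, m                 # invariant: lo < l <= hi
    while hi - lo > 1:
        k = (lo + hi) // 2
        bits += n * (hi - lo)     # each voter sends hi - lo bits this stage
        if majority_at_depth(k):
            hi = k                # l <= k
        else:
            lo = k                # l > k
    return hi, bits               # hi = l, the winning Bucklin depth

profile = [['a', 'b', 'c', 'd'],
           ['b', 'a', 'c', 'd'],
           ['c', 'a', 'b', 'd']]
l, bits = bucklin_protocol(profile)
# a is in the top 2 of every vote, so l = 2; bits here: 3*4 + 3*2 = 18 < 2nm.
```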
Theorem 19. The nondeterministic communication complexity of the Bucklin rule is Ω(nm) (even to decide whether a given candidate a wins).
Proof. We will exhibit a fooling set of size 2^{n'm'} where m' = (m − 1)/2 and n' = n/2. We write the set of candidates as the following disjoint union: C = {a} ∪ L ∪ R, where L = {l_1, l_2, ..., l_{m'}} and R = {r_1, r_2, ..., r_{m'}}. For any subset S ⊆ {1, 2, ..., m'}, let L(S) = {l_i : i ∈ S} and let R(S) = {r_i : i ∈ S}. For every vector (S_1, S_2, ..., S_{n'}) consisting of n' sets S_i ⊆ {1, 2, ..., m'}, let the following vector of votes be an element of the fooling set:
• For 1 ≤ i ≤ n', let voter 2i − 1 rank the candidates L(S_i) ≻ R − R(S_i) ≻ a ≻ L − L(S_i) ≻ R(S_i).
• For 1 ≤ i ≤ n', let voter 2i rank the candidates L − L(S_i) ≻ R(S_i) ≻ a ≻ L(S_i) ≻ R − R(S_i).
We observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'}, and that candidate a wins in each vector of votes in the fooling set, for the following reason. Each candidate in C − {a} is ranked among the top m' candidates by exactly half the voters (which is not enough to win). Thus, we need to look at the voters' top m' + 1 candidates, and a is ranked (m' + 1)th by all voters. All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses. Let the first vote vector correspond to the vector (S^1_1, S^1_2, ..., S^1_{n'}), and let the second vote vector correspond to the vector (S^2_1, S^2_2, ...
, S^2_{n'}). For some i, we must have S^1_i ≠ S^2_i, so that either S^1_i ⊈ S^2_i or S^2_i ⊈ S^1_i. Without loss of generality, suppose S^1_i ⊈ S^2_i, and let j be some integer in S^1_i − S^2_i. Now, construct a new vote vector by taking vote 2i − 1 from the first vote vector, and the remaining votes from the second vote vector. In this newly constructed vote vector, a is still ranked (m' + 1)th by all votes. However, l_j is ranked among the top m' candidates by n' + 1 = n/2 + 1 votes. This is because whereas vote 2i − 1 does not rank l_j among the top m' candidates in the second vote vector (because j ∉ S^2_i, we have l_j ∉ L(S^2_i)), vote 2i − 1 does rank l_j among the top m' candidates in the first vote vector (because j ∈ S^1_i, we have l_j ∈ L(S^1_i)). So, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.
Theorem 20. The nondeterministic communication complexity of the ranked pairs rule is Ω(nm log m) (even to decide whether a given candidate a wins).
Proof. We omit this proof because of space constraints.
5. DISCUSSION
One key obstacle to using voting for preference aggregation is the communication burden that an election places on the voters. By lowering this burden, it may become feasible to conduct more elections over more issues. In the limit, this could lead to a shift from representational government to a system in which most issues are decided by referenda, a veritable e-democracy. In this paper, we analyzed the communication complexity of the common voting rules. Knowing which voting rules require little communication is especially important when the issue to be voted on is of low enough importance that the following is true: the parties involved are willing to accept a rule that tends to produce outcomes that are slightly less representative of the voters' preferences, if this rule reduces the communication burden on the voters significantly. The following
table summarizes the results we obtained.

Rule                  Lower bound       Upper bound
plurality             Ω(n log m)        O(n log m)
plurality w/ runoff   Ω(n log m)        O(n log m)
STV                   Ω(n log m)        O(n(log m)^2)
Condorcet             Ω(nm)             O(nm)
approval              Ω(nm)             O(nm)
Bucklin               Ω(nm)             O(nm)
cup                   Ω(nm)             O(nm)
maximin               Ω(nm)             O(nm)
Borda                 Ω(nm log m)       O(nm log m)
Copeland              Ω(nm log m)       O(nm log m)
ranked pairs          Ω(nm log m)       O(nm log m)

Communication complexity of voting rules, sorted from low to high. All of the upper bounds are deterministic (with the exception of maximin, for which the best deterministic upper bound we proved is O(nm log m)). All of the lower bounds hold even for nondeterministic communication, and even just for determining whether a given candidate a is the winner.

One area of future research is to study what happens when we restrict our attention to communication protocols that do not reveal any strategically useful information. This restriction may invalidate some of the upper bounds that we derived using multistage communication protocols. Also, all of our bounds are worst-case bounds. It may be possible to outperform these bounds when the distribution of votes has additional structure. When deciding which voting rule to use for an election, there are many considerations to take into account. The voting rules that we studied in this paper are the most common ones that have survived the test of time. One way to select among these rules is to consider recent results on complexity. The table above shows that from a communication complexity perspective, plurality, plurality with runoff, and STV are preferable. However, plurality has the undesirable property that it is computationally easy to manipulate by voting strategically [3, 7]. Plurality with runoff is NP-hard to manipulate by a coalition of weighted voters, or by an individual who faces correlated uncertainty about the others' votes [7, 6]. STV is NP-hard to manipulate in those settings
as well [7], but also by an individual with perfect knowledge of the others' votes (when the number of candidates is unbounded) [2]. Therefore, STV is more robust, although it may require slightly more worst-case communication as per the table above. Yet other selection criteria are the computational complexity of determining whether enough information has been elicited to declare a winner, and that of determining the optimal sequence of queries [8].
6. REFERENCES
[1] Lawrence Ausubel and Paul Milgrom. Ascending auctions with package bidding. Frontiers of Theoretical Economics, 1(1), Article 1, 2002.
[2] John Bartholdi, III and James Orlin. Single transferable vote resists strategic voting. Social Choice and Welfare, 8(4):341-354, 1991.
[3] John Bartholdi, III, Craig Tovey, and Michael Trick. The computational difficulty of manipulating an election. Social Choice and Welfare, 6(3):227-241, 1989.
[4] Avrim Blum, Jeffrey Jackson, Tuomas Sandholm, and Martin Zinkevich. Preference elicitation and query learning. Journal of Machine Learning Research, 5:649-667, 2004.
[5] Wolfram Conen and Tuomas Sandholm. Preference elicitation in combinatorial auctions: Extended abstract. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 256-259, 2001.
[6] Vincent Conitzer, Jerome Lang, and Tuomas Sandholm. How many candidates are needed to make elections hard to manipulate? In Theoretical Aspects of Rationality and Knowledge (TARK), pages 201-214, 2003.
[7] Vincent Conitzer and Tuomas Sandholm. Complexity of manipulating elections with few candidates. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 314-319, 2002.
[8] Vincent Conitzer and Tuomas Sandholm. Vote elicitation: Complexity and strategy-proofness. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 392-397, 2002.
[9] Sven de Vries, James Schummer, and Rakesh V.
Vohra. On ascending auctions for heterogeneous objects. Draft, 2003.
[10] Allan Gibbard. Manipulation of voting schemes. Econometrica, 41:587-602, 1973.
[11] Benoit Hudson and Tuomas Sandholm. Effectiveness of query types and policies for preference elicitation in combinatorial auctions. In International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 386-393, 2004.
[12] Eyal Kushilevitz and Noam Nisan. Communication Complexity. Cambridge University Press, 1997.
[13] Sébastien Lahaie and David Parkes. Applying learning algorithms to preference elicitation. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), 2004.
[14] Noam Nisan and Ilya Segal. The communication requirements of efficient allocations and supporting prices. Journal of Economic Theory, 2005. Forthcoming.
[15] David Parkes. iBundle: An efficient ascending price bundle auction. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 148-157, 1999.
[16] Tuomas Sandholm. An implementation of the contract net protocol based on marginal cost calculations. In Proceedings of the National Conference on Artificial Intelligence (AAAI), pages 256-262, 1993.
[17] Tuomas Sandholm and Craig Boutilier. Preference elicitation in combinatorial auctions. In Peter Cramton, Yoav Shoham, and Richard Steinberg, editors, Combinatorial Auctions, chapter 10. MIT Press, 2005.
[18] Paolo Santi, Vincent Conitzer, and Tuomas Sandholm. Towards a characterization of polynomial preference elicitation with value queries in combinatorial auctions. In Conference on Learning Theory (COLT), pages 1-16, 2004.
[19] Mark Satterthwaite. Strategy-proofness and Arrow's conditions: existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory, 10:187-217, 1975.
[20] Ilya Segal. The communication requirements of social choice rules and supporting budget sets. Draft, 2004. Presented at the DIMACS
Workshop on Computational Issues in Auction Design, Rutgers University, New Jersey, USA.
[21] Peter Wurman and Michael Wellman. AkBA: A progressive, anonymous-price combinatorial auction. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 21-29, 2000.
[22] A. C. Yao. Some complexity questions related to distributed computing. In Proceedings of the 11th ACM Symposium on Theory of Computing (STOC), pages 209-213, 1979.
[23] Martin Zinkevich, Avrim Blum, and Tuomas Sandholm. On polynomial-time preference elicitation with value queries. In Proceedings of the ACM Conference on Electronic Commerce (ACM-EC), pages 176-185, 2003.
make preference aggregation feasible in settings where the total amount of preference information is too large to communicate.\n9 Even when communicating all the preference information is feasible, reducing the communication requirements lessens the burden placed on the agents.\nThis is especially true when the agents, rather than knowing all their preferences in advance, need to invest effort (such as computation or information gathering) to determine their preferences [16].\n9 It preserves (some of) the agents' privacy.\nMost of the work on reducing the communication burden in preference aggregation has focused on resource allocation settings such as combinatorial auctions, in which an auctioneer auctions off a number of (possibly distinct) items in a single event.\nBecause in a combinatorial auction, bidders can have separate valuations for each of an exponential number of possible bundles of items, this is a setting in which reducing the communication burden is especially crucial.\nThis can be accomplished by supplementing the auctioneer with an elicitor that incrementally elicits parts of the bidders' preferences on an as-needed basis, based on what the bidders have revealed about their preferences so far, as suggested by Conen and Sandholm [5].\nFor example, the elicitor can ask for a bidder's value for a specific bundle (value queries), which of two bundles the bidder prefers (order queries), which bundle he ranks kth or what the rank of a given bundle is (rank queries), which bundle he would purchase given a particular vector of prices (demand queries), etc.--until (at least) the final allocation can be determined.\nExperimentally, this yields drastic savings in preference revelation [11].\nFurthermore, if the agents' valuation functions are drawn from certain natural subclasses, the elicitation problem can be solved using only polynomially many queries even in the worst case [23, 4, 13, 18, 14].\nFor a review of preference elicitation in combinatorial 
auctions, see [17].\nAscending combinatorial auctions are a well-known special form of preference elicitation, where the elicitor asks demand queries with increasing prices [15, 21, 1, 9].\nFinally, resource\nallocation problems have also been studied from a communication complexity viewpoint, thereby deriving lower bounds on the required communication.\nFor example, Nisan and Segal show that exponential communication is required even to obtain a surplus greater than that obtained by auctioning off all objects as a single bundle [14].\nSegal also studies social choice rules in general, and shows that for a large class of social choice rules, supporting budget sets must be revealed such that if every agent prefers the same outcome in her budget set, this proves the optimality of that outcome.\nSegal then uses this characterization to prove bounds on the communication required in resource allocation as well as matching settings [20].\nIn this paper, we will focus on the communication requirements of a generally applicable subclass of social choice rules, commonly known as voting rules.\nIn a voting setting, there is a set of candidate outcomes over which the voters express their preferences by submitting a vote (typically, a ranking of the candidates), and the winner (that is, the chosen outcome) is determined based on these votes.\nThe communication required by voting rules can be large either because the number of voters is large (such as, for example, in national elections), or because the number of candidates is large (for example, the agents can vote over allocations of a number of resources), or both.\nPrior work [8] has studied elicitation in voting, studying how computationally hard it is to decide whether a winner can be determined with the information elicited so far, as well as how hard it is to find the optimal sequence of queries given perfect suspicions about the voters' preferences.\nIn addition, that paper discusses strategic (game-theoretic) issues 
introduced by elicitation.\nIn contrast, in this paper, we are concerned with the worst-case number of bits that must be communicated to execute a given voting rule, when nothing is known in advance about the voters' preferences.\nWe determine the communication complexity of the common voting rules.\nFor each rule, we first give an upper bound on the (deterministic) communication complexity by providing a communication protocol for it and analyzing how many bits need to be transmitted in this protocol.\n(Segal's results [20] do not apply to most voting rules because most voting rules are not intersectionmonotonic (or even monotonic).1) For many of the voting rules under study, it turns out that one cannot do better than simply letting each voter immediately communicate all her (potentially relevant) information.\nHowever, for some rules (such as plurality with runoff, STV and cup) there is a straightforward multistage communication protocol that, with some analysis, can be shown to significantly outperform the immediate communication of all (potentially relevant) information.\nFinally, for some rules (such as the Condorcet and Bucklin rules), we need to introduce a more complex communication protocol to achieve the best possible upper 1For two of the rules that we study that are intersectionmonotonic, namely the approval and Condorcet rules, Segal's results can in fact be used to give alternative proofs of our lower bounds.\nWe only give direct proofs for these rules here because 1) these direct proofs are among the easier ones in this paper, 2) the alternative proofs are nontrivial even given Segal's results, and 3) a space constraint applies.\nHowever, we hope to also include the alternative proofs in a later version.\nbound.\nAfter obtaining the upper bounds, we show that they are tight by giving matching lower bounds on (even the nondeterministic) communication complexity of each voting rule.\nThere are two exceptions: STV, for which our upper and lower bounds 
are apart by a factor log m; and maximin, for which our best deterministic upper bound is also a factor log m above the (nondeterministic) lower bound, although we give a nondeterministic upper bound that matches the lower bound.\n2.\nREVIEW OF VOTING RULES\n3.\nREVIEW OF SOME COMMUNICATION COMPLEXITY THEORY\n4.\nRESULTS\n5.\nDISCUSSION\nOne key obstacle to using voting for preference aggregation is the communication burden that an election places on the voters.\nBy lowering this burden, it may become feasible to conduct more elections over more issues.\nIn the limit, this could lead to a shift from representational government to a system in which most issues are decided by referenda--a veritable e-democracy.\nIn this paper, we analyzed the communication complexity of the common voting rules.\nKnowing which voting rules require little communication is especially important when the issue to be voted on is of low enough importance that the following is true: the parties involved are willing to accept a rule that tends\nto produce outcomes that are slightly less representative of the voters' preferences, if this rule reduces the communication burden on the voters significantly.\nThe following table summarizes the results we obtained.\nCommunication complexity of voting rules, sorted from low to high.\nAll of the upper bounds are deterministic (with the exception of maximin, for which the best deterministic upper bound we proved is O (nm log m)).\nAll of the lower bounds hold even for nondeterministic communication and even just for determining whether a given candidate a is the winner.\nOne area of future research is to study what happens when we restrict our attention to communication protocols that do not reveal any strategically useful information.\nThis restriction may invalidate some of the upper bounds that we derived using multistage communication protocols.\nAlso, all of our bounds are worst-case bounds.\nIt may be possible to outperform these bounds when the 
distribution of votes has additional structure.\nWhen deciding which voting rule to use for an election, there are many considerations to take into account.\nThe voting rules that we studied in this paper are the most common ones that have survived the test of time.\nOne way to select among these rules is to consider recent results on complexity.\nThe table above shows that from a communication complexity perspective, plurality, plurality with runoff, and STV are preferable.\nHowever, plurality has the undesirable property that it is computationally easy to manipulate by voting strategically [3, 7].\nPlurality with runoff is NP-hard to manipulate by a coalition of weighted voters, or by an individual that faces correlated uncertainty about the others' votes [7, 6].\nSTV is NP-hard to manipulate in those settings as well [7], but also by an individual with perfect knowledge of the others' votes (when the number of candidates is unbounded) [2].\nTherefore, STV is more robust, although it may require slightly more worst-case communication as per the table above.\nYet other selection criteria are the computational complexity of determining whether enough information has been elicited to declare a winner, and that of determining the optimal sequence of queries [8].","lvl-4":"Communication Complexity of Common Voting Rules *\nABSTRACT\nWe determine the communication complexity of the common voting rules.\nThe rules (sorted by their communication complexity from low to high) are plurality, plurality with runoff, single transferable vote (STV), Condorcet, approval, Bucklin, cup, maximin, Borda, Copeland, and ranked pairs.\nFor each rule, we first give a deterministic communication protocol and an upper bound on the number of bits communicated in it; then, we give a lower bound on (even the nondeterministic) communication requirements of the voting rule.\nThe bounds match for all voting rules except STV and maximin.\n1.\nINTRODUCTION\nOne key factor in the practicality of 
any preference aggregation rule is its communication burden.\nThe authors thank Ilya Segal for helpful comments.\nfor all the agents to report all of their preference information.\nClever protocols that elicit the agents' preferences partially and sequentially have the potential to dramatically reduce the required communication.\nThis has at least the following advantages: 9 It can make preference aggregation feasible in settings where the total amount of preference information is too large to communicate.\n9 Even when communicating all the preference information is feasible, reducing the communication requirements lessens the burden placed on the agents.\nThis is especially true when the agents, rather than knowing all their preferences in advance, need to invest effort (such as computation or information gathering) to determine their preferences [16].\n9 It preserves (some of) the agents' privacy.\nMost of the work on reducing the communication burden in preference aggregation has focused on resource allocation settings such as combinatorial auctions, in which an auctioneer auctions off a number of (possibly distinct) items in a single event.\nBecause in a combinatorial auction, bidders can have separate valuations for each of an exponential number of possible bundles of items, this is a setting in which reducing the communication burden is especially crucial.\nExperimentally, this yields drastic savings in preference revelation [11].\nFor a review of preference elicitation in combinatorial auctions, see [17].\nAscending combinatorial auctions are a well-known special form of preference elicitation, where the elicitor asks demand queries with increasing prices [15, 21, 1, 9].\nFinally, resource\nallocation problems have also been studied from a communication complexity viewpoint, thereby deriving lower bounds on the required communication.\nFor example, Nisan and Segal show that exponential communication is required even to obtain a surplus greater than that 
obtained by auctioning off all objects as a single bundle [14].\nSegal then uses this characterization to prove bounds on the communication required in resource allocation as well as matching settings [20].\nIn this paper, we will focus on the communication requirements of a generally applicable subclass of social choice rules, commonly known as voting rules.\nIn a voting setting, there is a set of candidate outcomes over which the voters express their preferences by submitting a vote (typically, a ranking of the candidates), and the winner (that is, the chosen outcome) is determined based on these votes.\nThe communication required by voting rules can be large either because the number of voters is large (such as, for example, in national elections), or because the number of candidates is large (for example, the agents can vote over allocations of a number of resources), or both.\nPrior work [8] has studied elicitation in voting, studying how computationally hard it is to decide whether a winner can be determined with the information elicited so far, as well as how hard it is to find the optimal sequence of queries given perfect suspicions about the voters' preferences.\nIn addition, that paper discusses strategic (game-theoretic) issues introduced by elicitation.\nIn contrast, in this paper, we are concerned with the worst-case number of bits that must be communicated to execute a given voting rule, when nothing is known in advance about the voters' preferences.\nWe determine the communication complexity of the common voting rules.\nFor each rule, we first give an upper bound on the (deterministic) communication complexity by providing a communication protocol for it and analyzing how many bits need to be transmitted in this protocol.\nHowever, for some rules (such as plurality with runoff, STV and cup) there is a straightforward multistage communication protocol that, with some analysis, can be shown to significantly outperform the immediate communication of all 
(potentially relevant) information.\nWe only give direct proofs for these rules here because 1) these direct proofs are among the easier ones in this paper, 2) the alternative proofs are nontrivial even given Segal's results, and 3) a space constraint applies.\nHowever, we hope to also include the alternative proofs in a later version.\nAfter obtaining the upper bounds, we show that they are tight by giving matching lower bounds on (even the nondeterministic) communication complexity of each voting rule.\n5.\nDISCUSSION\nOne key obstacle to using voting for preference aggregation is the communication burden that an election places on the voters.\nBy lowering this burden, it may become feasible to conduct more elections over more issues.\nIn this paper, we analyzed the communication complexity of the common voting rules.\nKnowing which voting rules require little communication is especially important when the issue to be voted on is of low enough importance that the following is true: the parties involved are willing to accept a rule that tends to produce outcomes that are slightly less representative of the voters' preferences, if this rule reduces the communication burden on the voters significantly.\nThe following table summarizes the results we obtained.\nCommunication complexity of voting rules, sorted from low to high.\nAll of the upper bounds are deterministic (with the exception of maximin, for which the best deterministic upper bound we proved is O(nm log m)).\nAll of the lower bounds hold even for nondeterministic communication and even just for determining whether a given candidate a is the winner.\nOne area of future research is to study what happens when we restrict our attention to communication protocols that do not reveal any strategically useful information.\nThis restriction may invalidate some of the upper bounds that we derived using multistage communication protocols.\nAlso, all of our bounds are worst-case bounds.\nIt may be possible
to outperform these bounds when the distribution of votes has additional structure.\nWhen deciding which voting rule to use for an election, there are many considerations to take into account.\nThe voting rules that we studied in this paper are the most common ones that have survived the test of time.\nOne way to select among these rules is to consider recent results on complexity.\nThe table above shows that from a communication complexity perspective, plurality, plurality with runoff, and STV are preferable.\nHowever, plurality has the undesirable property that it is computationally easy to manipulate by voting strategically [3, 7].\nPlurality with runoff is NP-hard to manipulate by a coalition of weighted voters, or by an individual that faces correlated uncertainty about the others' votes [7, 6].\nSTV is NP-hard to manipulate in those settings as well [7], but also by an individual with perfect knowledge of the others' votes (when the number of candidates is unbounded) [2].\nTherefore, STV is more robust, although it may require slightly more worst-case communication as per the table above.\nYet other selection criteria are the computational complexity of determining whether enough information has been elicited to declare a winner, and that of determining the optimal sequence of queries [8].","lvl-2":"Communication Complexity of Common Voting Rules *\nABSTRACT\nWe determine the communication complexity of the common voting rules.\nThe rules (sorted by their communication complexity from low to high) are plurality, plurality with runoff, single transferable vote (STV), Condorcet, approval, Bucklin, cup, maximin, Borda, Copeland, and ranked pairs.\nFor each rule, we first give a deterministic communication protocol and an upper bound on the number of bits communicated in it; then, we give a lower bound on (even the nondeterministic) communication requirements of the voting rule.\nThe bounds match for all voting rules except STV and maximin.\n1.\nINTRODUCTION\nOne 
key factor in the practicality of any preference aggregation rule is its communication burden.\nTo successfully aggregate the agents' preferences, it is usually not necessary for all the agents to report all of their preference information.\n(* This material is based upon work supported by the National Science Foundation under ITR grants IIS-0121678 and IIS-0427858, and a Sloan Fellowship.\nThe authors thank Ilya Segal for helpful comments.)\nClever protocols that elicit the agents' preferences partially and sequentially have the potential to dramatically reduce the required communication.\nThis has at least the following advantages:\n• It can make preference aggregation feasible in settings where the total amount of preference information is too large to communicate.\n• Even when communicating all the preference information is feasible, reducing the communication requirements lessens the burden placed on the agents.\nThis is especially true when the agents, rather than knowing all their preferences in advance, need to invest effort (such as computation or information gathering) to determine their preferences [16].\n• It preserves (some of) the agents' privacy.\nMost of the work on reducing the communication burden in preference aggregation has focused on resource allocation settings such as combinatorial auctions, in which an auctioneer auctions off a number of (possibly distinct) items in a single event.\nBecause in a combinatorial auction, bidders can have separate valuations for each of an exponential number of possible bundles of items, this is a setting in which reducing the communication burden is especially crucial.\nThis can be accomplished by supplementing the auctioneer with an elicitor that incrementally elicits parts of the bidders' preferences on an as-needed basis, based on what the bidders have revealed about their preferences so far, as suggested by Conen and Sandholm [5].\nFor example, the elicitor can ask for a bidder's value for a specific bundle
(value queries), which of two bundles the bidder prefers (order queries), which bundle he ranks kth or what the rank of a given bundle is (rank queries), which bundle he would purchase given a particular vector of prices (demand queries), etc.--until (at least) the final allocation can be determined.\nExperimentally, this yields drastic savings in preference revelation [11].\nFurthermore, if the agents' valuation functions are drawn from certain natural subclasses, the elicitation problem can be solved using only polynomially many queries even in the worst case [23, 4, 13, 18, 14].\nFor a review of preference elicitation in combinatorial auctions, see [17].\nAscending combinatorial auctions are a well-known special form of preference elicitation, where the elicitor asks demand queries with increasing prices [15, 21, 1, 9].\nFinally, resource\nallocation problems have also been studied from a communication complexity viewpoint, thereby deriving lower bounds on the required communication.\nFor example, Nisan and Segal show that exponential communication is required even to obtain a surplus greater than that obtained by auctioning off all objects as a single bundle [14].\nSegal also studies social choice rules in general, and shows that for a large class of social choice rules, supporting budget sets must be revealed such that if every agent prefers the same outcome in her budget set, this proves the optimality of that outcome.\nSegal then uses this characterization to prove bounds on the communication required in resource allocation as well as matching settings [20].\nIn this paper, we will focus on the communication requirements of a generally applicable subclass of social choice rules, commonly known as voting rules.\nIn a voting setting, there is a set of candidate outcomes over which the voters express their preferences by submitting a vote (typically, a ranking of the candidates), and the winner (that is, the chosen outcome) is determined based on these 
votes.\nThe communication required by voting rules can be large either because the number of voters is large (such as, for example, in national elections), or because the number of candidates is large (for example, the agents can vote over allocations of a number of resources), or both.\nPrior work [8] has studied elicitation in voting, studying how computationally hard it is to decide whether a winner can be determined with the information elicited so far, as well as how hard it is to find the optimal sequence of queries given perfect suspicions about the voters' preferences.\nIn addition, that paper discusses strategic (game-theoretic) issues introduced by elicitation.\nIn contrast, in this paper, we are concerned with the worst-case number of bits that must be communicated to execute a given voting rule, when nothing is known in advance about the voters' preferences.\nWe determine the communication complexity of the common voting rules.\nFor each rule, we first give an upper bound on the (deterministic) communication complexity by providing a communication protocol for it and analyzing how many bits need to be transmitted in this protocol.\n(Segal's results [20] do not apply to most voting rules because most voting rules are not intersection-monotonic (or even monotonic).1) For many of the voting rules under study, it turns out that one cannot do better than simply letting each voter immediately communicate all her (potentially relevant) information.\nHowever, for some rules (such as plurality with runoff, STV and cup) there is a straightforward multistage communication protocol that, with some analysis, can be shown to significantly outperform the immediate communication of all (potentially relevant) information.\nFinally, for some rules (such as the Condorcet and Bucklin rules), we need to introduce a more complex communication protocol to achieve the best possible upper bound.\n(1: For two of the rules that we study that are intersection-monotonic, namely the approval and Condorcet rules, Segal's results can in fact be used to give alternative proofs of our lower bounds.\nWe only give direct proofs for these rules here because 1) these direct proofs are among the easier ones in this paper, 2) the alternative proofs are nontrivial even given Segal's results, and 3) a space constraint applies.\nHowever, we hope to also include the alternative proofs in a later version.)\nAfter obtaining the upper bounds, we show that they are tight by giving matching lower bounds on (even the nondeterministic) communication complexity of each voting rule.\nThere are two exceptions: STV, for which our upper and lower bounds are apart by a factor log m; and maximin, for which our best deterministic upper bound is also a factor log m above the (nondeterministic) lower bound, although we give a nondeterministic upper bound that matches the lower bound.\n2.\nREVIEW OF VOTING RULES\nIn this section, we review the common voting rules that we study in this paper.\nA voting rule2 is a function mapping a vector of the n voters' votes (i.e.
preferences over candidates) to one of the m candidates (the winner) in the candidate set C.\nIn some cases (such as the Condorcet rule), the rule may also declare that no winner exists.\nWe do not concern ourselves with what happens in case of a tie between candidates (our lower bounds hold regardless of how ties are broken, and the communication protocols used for our upper bounds do not attempt to break the ties).\nAll of the rules that we study are rank-based rules, which means that a vote is defined as an ordering of the candidates (with the exception of the plurality rule, for which a vote is a single candidate, and the approval rule, for which a vote is a subset of the candidates).\nWe will consider the following voting rules.\n(For rules that define a score, the candidate with the highest score wins.)\n• scoring rules.\nLet α = (α1, ..., αm) be a vector of integers such that α1 ≥ α2 ≥ ... ≥ αm.\nFor each voter, a candidate receives α1 points if it is ranked first by the voter, α2 if it is ranked second, etc.
The score sα of a candidate is the total number of points the candidate receives.\nThe Borda rule is the scoring rule with α = (m − 1, m − 2, ..., 0).\nThe plurality rule is the scoring rule with α = (1, 0, ..., 0).\n• single transferable vote (STV).\nThe rule proceeds through a series of m − 1 rounds.\nIn each round, the candidate with the lowest plurality score (that is, the least number of voters ranking it first among the remaining candidates) is eliminated (and each of the votes for that candidate \"transfers\" to the next remaining candidate in the order given in that vote).\nThe winner is the last remaining candidate.\n• plurality with runoff.\nIn this rule, a first round eliminates all candidates except the two with the highest plurality scores.\nVotes are transferred to these as in the STV rule, and a second round determines the winner from these two.\n• approval.\nEach voter labels each candidate as either approved or disapproved.\nThe candidate approved by the greatest number of voters wins.\n• Condorcet.\nFor any two candidates i and j, let N(i, j) be the number of voters who prefer i to j.\nIf there is a candidate i that is preferred to any other candidate by a majority of the voters (that is, N(i, j) > N(j, i) for all j ≠ i, so that i wins every pairwise election), then candidate i wins.\n(2: The term voting protocol is often used to describe the same concept, but we seek to draw a sharp distinction between the rule mapping preferences to outcomes, and the communication/elicitation protocol used to implement this rule.)\n• maximin (a.k.a. Simpson).\nThe maximin score of i is s(i) = min_{j≠i} N(i, j), that is, i's worst performance in a pairwise election.\nThe candidate with the highest maximin score wins.\n• Copeland.\nFor any two distinct candidates i and j, let C(i, j) = 1 if N(i, j) > N(j, i), C(i, j) = 1/2 if N(i, j) = N(j, i), and C(i, j) = 0 if N(i, j) < N(j, i).\nThe Copeland score of candidate i is s(i) = Σ_{j≠i} C(i, j).\n• cup (sequential binary comparisons).\nThe cup rule is defined by a balanced3 binary tree T with one leaf per candidate, and an assignment of candidates to leaves (each leaf gets one candidate).\nEach non-leaf node is assigned the winner of the pairwise election of the node's children; the candidate assigned to the root wins.\n• Bucklin.\nFor any candidate i and integer l, let B(i, l) be the number of voters that rank candidate i among the top l candidates.\nThe winner is argmin_i (min {l: B(i, l) > n/2}).\nThat is, if we say that a voter \"approves\" her top l candidates, then we repeatedly increase l by 1 until some candidate is approved by more than half the voters, and this candidate is the winner.\n• ranked pairs.\nThis rule determines an order ≻ on all the candidates, and the winner is the candidate at the top of this order.\nSort all ordered pairs of candidates (i, j) by N(i, j), the number of voters who prefer i to j. Starting with the pair (i, j) with the highest N(i, j), we \"lock in\" the result of their pairwise election (i ≻ j).\nThen, we move to the next pair, and we lock in the result of their pairwise election.\nWe continue to lock in every pairwise result that does not contradict the ordering ≻ established so far.\nWe emphasize that these definitions of voting rules do not concern themselves with how the votes are elicited from the voters; all the voting rules, including those that are suggestively defined in terms of \"rounds\", are in actuality just functions mapping the vector of all the voters' votes to a winner.\nNevertheless, there are always many different ways of eliciting the votes (or the relevant parts thereof) from the voters.\nFor example, in the plurality with runoff rule, one way of eliciting the votes is to ask every voter to declare her entire ordering of the candidates up front.\nAlternatively, we can first ask every voter to declare only her most preferred candidate; then, we will know the two candidates in
the runoff, and we can ask every voter which of these two candidates she prefers.\nThus, we distinguish between the voting rule (the mapping from vectors of votes to outcomes) and the communication protocol (which determines how the relevant parts of the votes are actually elicited from the voters).\nThe goal of this paper is to give efficient communication protocols for the voting rules just defined, and to prove that there do not exist any more efficient communication protocols.\nIt is interesting to note that the choice of the communication protocol may affect the strategic behavior of the voters.\nMultistage communication protocols may reveal to the voters some information about how the other voters are voting (for example, in the two-stage communication protocol just given for plurality with runoff, in the second stage voters will know which two candidates have the highest plurality scores).\n(3: Balanced means that the difference in depth between two leaves can be at most one.)\nIn general, when the voters receive such information, it may give them incentives to vote differently than they would have in a single-stage communication protocol in which all voters declare their entire votes simultaneously.\nOf course, even the single-stage communication protocol is not strategy-proof4 for any reasonable voting rule, by the Gibbard-Satterthwaite theorem [10, 19].\nHowever, this does not mean that we should not be concerned about adding even more opportunities for strategic voting.\nIn fact, many of the communication protocols introduced in this paper do introduce additional opportunities for strategic voting, but we do not have the space to discuss this here.\n(In prior work [8], we do give an example where an elicitation protocol for the approval voting rule introduces strategic voting, and give principles for designing elicitation protocols that do not introduce strategic problems.)\nNow that we have reviewed voting rules, we move on to a brief review of
communication complexity theory.\n3.\nREVIEW OF SOME COMMUNICATION COMPLEXITY THEORY\nIn this section, we review the basic model of a communication problem and the lower-bounding technique of constructing a fooling set.\n(The basic model of a communication problem is due to Yao [22]; for an overview see Kushilevitz and Nisan [12].)\nEach player 1 ≤ i ≤ n knows (only) input xi.\nTogether, they seek to compute f(x1, x2, ..., xn).\nIn a deterministic protocol for computing f, in each stage, one of the players announces (to all other players) a bit of information based on her own input and the bits announced so far.\nEventually, the communication terminates and all players know f(x1, x2, ..., xn).\nThe goal is to minimize the worst-case (over all input vectors) number of bits sent.\nThe deterministic communication complexity of a problem is the worst-case number of bits sent in the best (correct) deterministic protocol for it.\nIn a nondeterministic protocol, the next bit to be sent can be chosen nondeterministically.\nFor the purposes of this paper, we will consider a nondeterministic protocol correct if for every input vector, there is some sequence of nondeterministic choices the players can make so that the players know the value of f when the protocol terminates.\nThe nondeterministic communication complexity of a problem is the worst-case number of bits sent in the best (correct) nondeterministic protocol for it.\nWe are now ready to give the definition of a fooling set.\nDEFINITION 1.\nA fooling set is a set of input vectors (x^r_1, x^r_2, ..., x^r_n), 1 ≤ r ≤ k, on all of which f takes the same value f0, such that for any two distinct indices r and s, there exist t1, ..., tn ∈ {r, s} with f(x^{t1}_1, x^{t2}_2, ..., x^{tn}_n) ≠ f0.\n(That is, we can mix the inputs from the two input vectors to obtain a vector with a different function value.)\nIt is known that if a fooling set of size k exists, then log k is a lower bound on the communication complexity (even the nondeterministic communication complexity) [12].\n(4: A strategy-proof protocol is one in which it is in the players' best interest to report their preferences truthfully.)\nFor the purposes of this paper, f is the voting rule that maps the votes to
the winning candidate, and xi is voter i's vote (the information that the voting rule would require from the voter if there were no possibility of multistage communication--i.e. the most preferred candidate (plurality), the approved candidates (approval), or the ranking of all the candidates (all other protocols)).\nHowever, when we derive our lower bounds, f will only signify whether a distinguished candidate a wins.\n(That is, f is 1 if a wins, and 0 otherwise.)\nThis will strengthen our lower bound results (because it implies that even finding out whether one given candidate wins is \"hard\").5 Thus, a fooling set in our context is a set of vectors of votes so that a wins (does not win) with each of them; but for any two different vote vectors in the set, there is a way of taking some voters' votes from the first vector and the others' votes from the second vector, so that a does not win (wins).\nTo simplify the proofs of our lower bounds, we make assumptions such as \"the number of voters n is odd\" in many of these proofs.\nTherefore, technically, we do not prove the lower bound for (number of candidates, number of voters) pairs (m, n) that do not satisfy these assumptions (for example, if we make the above assumption, then we technically do not prove the lower bound for any pair (m, n) in which n is even).\nNevertheless, we always prove the lower bound for a representative set of (m, n) pairs.\nFor example, for every one of our lower bounds it is the case that for infinitely many values of m, there are infinitely many values of n such that the lower bound is proved for the pair (m, n).\n4.\nRESULTS\nWe are now ready to present our results.\nFor each voting rule, we first give a deterministic communication protocol for determining the winner to establish an upper bound.\nThen, we give a lower bound on the nondeterministic communication complexity (even on the complexity of deciding whether a given candidate wins, which is an easier question).\nThe lower bounds 
match the upper bounds in all but two cases: the STV rule (upper bound O(n (log m)^2); lower bound Ω(n log m)) and the maximin rule (upper bound O(nm log m), although we do give a nondeterministic protocol that is O(nm); lower bound Ω(nm)).\nWhen we discuss a voting rule in which the voters rank the candidates, we will represent a ranking in which candidate c1 is ranked first, c2 is ranked second, etc. as c1 ≻ c2 ≻ ... ≻ cm.\n(5: One possible concern is that in the case where ties are possible, it may require much communication to verify whether a specific candidate a is among the winners, but little communication to produce one of the winners.\nHowever, all the fooling sets we use in the proofs have the property that if a wins, then a is the unique winner.\nTherefore, in these fooling sets, if one knows any one of the winners, then one knows whether a is a winner.\nThus, computing one of the winners requires at least as much communication as verifying whether a is among the winners.\nIn general, when a communication problem allows multiple correct answers for a given vector of inputs, this is known as computing a relation rather than a function [12].\nHowever, as per the above, we can restrict our attention to a subset of the domain where the voting rule truly is a (single-valued) function, and hence lower bounding techniques for functions rather than relations will suffice.)\nSometimes for the purposes of a proof the internal ranking of a subset of the candidates does not matter, and in this case we will not specify it.\nFor example, if S = {c2, c3}, then c1 ≻ S ≻ c4 indicates that either the ranking c1 ≻ c2 ≻ c3 ≻ c4 or the ranking c1 ≻ c3 ≻ c2 ≻ c4 can be used for the proof.\nWe first give a universal upper bound.\nTHEOREM 1.\nThe deterministic communication complexity of any rank-based voting rule is O(nm log m).\nPROOF.\nThis bound is achieved by simply having everyone communicate their entire ordering of the candidates (indicating the rank of an individual candidate requires only O(log m) bits, so each of the n voters can simply indicate the rank of each of the m candidates).\nThe next lemma will be useful in a few of our proofs.\nLEMMA 1.\nIf m divides n, then log(n!) − m log((n/m)!) ≥ n(log m − 1)/2.\nPROOF.\nIf n/m = 1 (that is, n = m), then this expression simplifies to log(n!).\nWe have log(n!) = Σ_{i=1}^{n} log(i) ≥ ∫_{1}^{n} log(x) dx, which, using integration by parts, is equal to n log n − (n − 1) ≥ n(log n − 1) = n(log m − 1) ≥ n(log m − 1)/2.\nSo, we can assume that n/m ≥ 2.\nTHEOREM 2.\nThe deterministic communication complexity of the plurality rule is O(n log m).\nPROOF.\nIndicating one of the candidates requires only O(log m) bits, so each voter can simply indicate her most preferred candidate.\nTHEOREM 3.\nThe nondeterministic communication complexity of the plurality rule is Ω(n log m) (even to decide whether a given candidate a wins).\nPROOF.\nWe will exhibit a fooling set of size n'!/((n'/m)!)^m where n' = (n − 1)/2.\nTaking the logarithm of this gives log(n'!) − m log((n'/m)!), so the result follows from Lemma 1.\nThe fooling set will consist of all vectors of votes satisfying the following constraints:\n• Every candidate receives equally many votes from the first 2n' = n − 1 voters.\n• The last voter (voter n) votes for a.\nCandidate a wins with each one of these vote vectors because of the extra vote for a from the last voter.\nGiven that m divides n', let us see how many vote vectors there are in the fooling set.\nWe need to distribute n' voter pairs evenly over m candidates, for a total of n'/m voter pairs per candidate; and there are precisely n'!/((n'/m)!)^m ways of doing this.6 All that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses.\nLet i be a number such that the two vote vectors disagree on the
candidate for which voters 2i − 1 and 2i vote.\nWithout loss of generality, suppose that in the first vote vector, these voters do not vote for a (but for some other candidate, b, instead).\nNow, construct a new vote vector by taking votes 2i − 1 and 2i from the first vote vector, and the remaining votes from the second vote vector.\nThen, b receives 2n'/m + 2 votes in this newly constructed vote vector, whereas a receives at most 2n'/m + 1 votes.\nSo, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.\nTHEOREM 4.\nThe deterministic communication complexity of the plurality with runoff rule is O(n log m).\nPROOF.\nFirst, let every voter indicate her most preferred candidate using log m bits.\nAfter this, the two candidates in the runoff are known, and each voter can indicate which one she prefers using a single additional bit.\nTHEOREM 5.\nThe nondeterministic communication complexity of the plurality with runoff rule is Ω(n log m) (even to decide whether a given candidate a wins).\nPROOF.\nWe will exhibit a fooling set of size n'!/((n'/m')!)^{m'}, where m' = m/2 and n' = (n − 2)/4.\nTaking the logarithm of this gives log(n'!) − m' log((n'/m')!), so the result follows from Lemma 1.\nDivide the candidates into m' pairs: (c1, d1), (c2, d2), ..., (c_{m'}, d_{m'}) where c1 = a and d1 = b.\nThe fooling set will consist of all vectors of votes satisfying the following constraints:\n• For any 1 ≤ i ≤ n', voters 4i − 3 and 4i − 2 rank the candidates c_{k(i)} ≻ a ≻ C − {a, c_{k(i)}}, for some candidate c_{k(i)}.\n(If c_{k(i)} = a then the vote is simply a ≻ C − {a}.)\n• For any 1 ≤ i ≤ n', voters 4i − 1 and 4i rank the candidates d_{k(i)} ≻ a ≻ C − {a, d_{k(i)}} (that is, their most preferred candidate is the candidate that is paired with the candidate that the previous two voters vote for).\n• Every candidate is ranked at the top of equally many of the first 4n' = n − 2 votes.\n• Voter 4n' + 1 = n − 1 ranks the candidates a ≻ C − {a}.\n• Voter 4n' + 2 = n ranks the candidates b ≻ C − {b}.\nCandidate a wins with each one of these vote vectors: because of the last two votes, candidates a and b are one vote ahead of all the other candidates and continue to the runoff, and at this point all the votes that had another candidate ranked at the top transfer to a, so that a wins the runoff.\nGiven that m' divides n', let us see how many vote vectors there are in the fooling set.\nWe need to distribute n' groups of four voters evenly over the m' pairs of candidates, and (as in the proof of Theorem 3) there are n'!/((n'/m')!)^{m'} ways of doing this.\n(6: An intuitive proof of this is the following.\nWe can count the number of permutations of n' elements as follows.\nFirst, divide the elements into m buckets of size n'/m, so that if x is placed in a lower-indexed bucket than y, then x will be indexed lower in the eventual permutation.\nThen, decide on the permutation within each bucket (for which there are (n'/m)! choices per bucket).\nIt follows that n'! equals the number of ways to divide n' elements into m buckets of size n'/m, times ((n'/m)!)^m.)\nAll that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses.\nLet i be a number such that c_{k(i)} is not the same in both of these two vote vectors, that is, c¹_{k(i)} (c_{k(i)} in the first vote vector) is not equal to c²_{k(i)} (c_{k(i)} in the second vote vector).\nWithout loss of generality, suppose c¹_{k(i)} ≠ a.
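As a quick numerical sanity check (ours, not part of the paper), the bucket-counting identity of footnote 6 and the bound of Lemma 1 can be verified in a few lines for small parameter values:

```python
from math import factorial, log2

def bucketings(n, m):
    """Ways to divide n labeled elements into m ordered buckets of
    size n // m each, i.e. the multinomial coefficient n! / ((n/m)!)^m."""
    assert n % m == 0
    return factorial(n) // factorial(n // m) ** m

n, m = 12, 4  # arbitrary small example with m dividing n

# Footnote 6's identity: n! = (number of bucketings) * ((n/m)!)^m
assert bucketings(n, m) * factorial(n // m) ** m == factorial(n)

# Lemma 1's bound (logs base 2): log(n!) - m*log((n/m)!) >= n*(log m - 1)/2
assert log2(bucketings(n, m)) >= n * (log2(m) - 1) / 2
```

Since `log2(bucketings(n, m))` is exactly `log(n!) − m log((n/m)!)`, the final assertion is the statement of Lemma 1 for this instance.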
Now, construct a new vote vector by taking votes 4i − 3, 4i − 2, 4i − 1, 4i from the first vote vector, and the remaining votes from the second vote vector.\nIn this newly constructed vote vector, c¹_{k(i)} and d¹_{k(i)} each receive 4n'/m + 2 votes in the first round, whereas a receives at most 4n'/m + 1 votes.\nSo, a does not continue to the runoff in the newly constructed vote vector, and hence we have a correct fooling set.\nTHEOREM 6.\nThe nondeterministic communication complexity of the Borda rule is Ω(nm log m) (even to decide whether a given candidate a wins).\nPROOF.\nWe will exhibit a fooling set of size (m'!)^{n'}, where m' = m − 2 and n' = (n − 2)/4.\nThis will prove the theorem because log(m'!) is Ω(m log m), so that log((m'!)^{n'}) = n' log(m'!) is Ω(nm log m).\nFor every vector (π1, π2, ..., π_{n'}) consisting of n' orderings of all candidates other than a and another fixed candidate b (technically, the orderings take the form of a one-to-one function πi: {1, 2, ..., m'} → C − {a, b} with πi(j) = c indicating that candidate c is the jth in the order represented by πi), let the following vector of votes be an element of the fooling set:\n• For 1 ≤ i ≤ n', let voters 4i − 3 and 4i − 2 rank the candidates a ≻ b ≻ πi(1) ≻ πi(2) ≻ ... ≻ πi(m').\n• For 1 ≤ i ≤ n', let voters 4i − 1 and 4i rank the candidates πi(m') ≻ πi(m' − 1) ≻ ... ≻ πi(1) ≻ b ≻ a.\n• Let voter 4n' + 1 = n − 1 rank the candidates a ≻ b ≻ π0(1) ≻ π0(2) ≻ ... ≻ π0(m') (where π0 is an arbitrary order of the candidates other than a and b which is the same for every element of the fooling set).\n• Let voter 4n' + 2 = n rank the candidates π0(m') ≻ π0(m' − 1) ≻ ... ≻ π0(1) ≻ a ≻ b.\nWe observe that this fooling set has size (m'!)^{n'}, and that candidate a wins in each vector of votes in the fooling set (to see why, we observe that for any 1 ≤ i ≤ n', votes 4i − 3 and 4i − 2 rank the candidates in the exact opposite way from votes 4i − 1 and 4i, which under the Borda rule means they cancel out; and the last two votes give one more point to a than to any other candidate, besides b who gets two fewer points than a).\nAll that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses.\nLet the first vote vector correspond to the vector (π¹1, π¹2, ..., π¹_{n'}), and let the second vote vector correspond to the vector (π²1, π²2, ..., π²_{n'}).\nFor some i, we must have π¹i ≠ π²i, so that for some candidate c ∉ {a, b}, (π¹i)⁻¹(c) < (π²i)⁻¹(c) (that is, c is ranked higher in π¹i than in π²i).\nNow, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector.\na's Borda score remains unchanged.\nHowever, because c is ranked higher in π¹i than in π²i, c receives at least 2 more points from votes 4i − 3 and 4i − 2 in the newly constructed vote vector than it did in the second vote vector.\nIt follows that c has a higher Borda score than a in the newly constructed vote vector.\nSo, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.\nTHEOREM 7.\nThe
nondeterministic communication complexity of the Copeland rule is \u03a9 (nm log m) (even to decide whether a given candidate a wins).\nPROOF.\nWe will exhibit a fooling set of size (m'!)\nn, where m' = (m \u2212 2) \/ 2 and n' = (n \u2212 2) \/ 2.\nThis will prove the theorem because m'!\nis \u03a9 (m log m), so that log ((m'!)\nn,) = n' log (m'!)\nis \u03a9 (nm log m).\nWe write the set of candidates as the following disjoint union: C = {a, b} \u222a L \u222a R where L = {l1, l2,..., lm,} and R = {r1, r2,..., rm,}.\nFor every vector (\u03c01, \u03c02,..., \u03c0n,) consisting of n' permutations of the integers 1 through m' (\u03c0i: {1, 2,..., m'} \u2192 {1, 2,..., m'}), let the following vector of votes be an element of the fooling set:\n\u2022 For 1 \u2264 i \u2264 n', let voter 2i \u2212 1 rank the candidates a ~ b ~ l\u03c0i (1) ~ r\u03c0i (1) ~ l\u03c0i (2) ~ r\u03c0i (2) ~...~ l\u03c0i (m,) ~ r\u03c0i (m,).\n\u2022 For 1 \u2264 i \u2264 n', let voter 2i rank the candidates r\u03c0i (m,) ~ l\u03c0i (m,) ~ r\u03c0i (m, _ 1) ~ l\u03c0i (m, _ 1) ~...~ r\u03c0i (1) ~ l\u03c0i (1) ~ b ~ a. 
\u2022 Let voter n \u2212 1 = 2n' + 1 rank the candidates a ~ b ~ l1 ~ r1 ~ l2 ~ r2 ~...~ lm, ~ rm,.\n\u2022 Let voter n = 2n' +2 rank the candidates rm, ~ lm, ~ rm, _ 1 ~ lm, _ 1 ~...~ r1 ~ l1 ~ a ~ b.\nWe observe that this fooling set has size (m'!)\nn,, and that candidate a wins in each vector of votes in the fooling set (every pair of candidates is tied in their pairwise election, with the exception that a defeats b, so that a wins the election by half a point).\nAll that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses.\nLet the first vote vector correspond to the vector (\u03c011, \u03c012,..., \u03c01n,), and let the second vote vector correspond to the vector (\u03c021, \u03c022,..., \u03c02n,).\nFor some i, we must have \u03c01i = 6 \u03c02i, so that for some j \u2208 {1, 2,..., m'}, we have (\u03c01i) _ 1 (j) <(\u03c02i) _ 1 (j).\nNow, construct a new vote vector by taking vote 2i \u2212 1 from the first vote vector, and the remaining votes from the second vote vector.\na's Copeland score remains unchanged.\nLet us consider the score of lj.\nWe first observe that the rank of lj in vote 2i \u2212 1 in the newly constructed vote vector is at least 2 higher than it was in the second vote vector, because (\u03c01i) _ 1 (j) <(\u03c02i) _ 1 (j).\nLet D1 (lj) be the set of candidates in L \u222a R that voter 2i \u2212 1 ranked lower than lj in the first vote vector (D1 (lj) = {c \u2208 L \u222a R: lj ~ 12i_1 c}), and let D2 (lj) be the set of candidates in L \u222a R that voter 2i \u2212 1 ranked lower than lj in the second vote vector (D2 (lj) = {c \u2208 L \u222a R: lj ~ 22i_1 c}).\nThen, it follows that in the newly constructed vote vector, lj defeats all the candidates in D1 (lj) \u2212 D2 (lj) in their pairwise elections (because lj receives an \"extra\" vote in each one of these pairwise elections relative to the second vote 
vector), and loses to all the candidates in D2(lj) − D1(lj) (because lj loses a vote in each one of these pairwise elections relative to the second vote vector), and ties with everyone else.\nBut |D1(lj)| − |D2(lj)| ≥ 2, and hence |D1(lj) − D2(lj)| − |D2(lj) − D1(lj)| ≥ 2.\nHence, in the newly constructed vote vector, lj has at least two more pairwise wins than pairwise losses, and therefore has at least 1 more point than if lj had tied all its pairwise elections.\nThus, lj has a higher Copeland score than a in the newly constructed vote vector.\nSo, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.\nTHEOREM 8.\nThe nondeterministic communication complexity of the maximin rule is O(nm).\nPROOF.\nThe nondeterministic protocol will guess which candidate w is the winner, and, for each other candidate c, which candidate o(c) is the candidate against whom c receives its lowest score in a pairwise election.\nThen, let every voter communicate the following:\n• for each candidate c ≠ w, whether she prefers c to w;\n• for each candidate c ≠ w, whether she prefers c to o(c).\nWe observe that this requires the communication of 2n(m − 1) bits.\nIf the guesses were correct, then, letting N(d, e) be the number of voters preferring candidate d to candidate e, we should have N(c, o(c)) < N(w, c') for any c ≠ w, c' ≠ w, which will prove that w wins the election.\nTHEOREM 9.\nThe nondeterministic communication complexity of the maximin rule is Ω(nm) (even to decide whether a given candidate a wins).\nPROOF.\nWe will exhibit a fooling set of size 2^{n'm'}, where m' = m − 2 and n' = (n − 1)/4.\nLet b be a candidate other than a.
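As an aside, the quantities manipulated by the protocol of Theorem 8 can be made concrete. The sketch below (an illustration of our own, with a made-up profile; the candidate names and the helper functions are not from the paper) computes N(d, e) and each candidate's maximin score, i.e., its support in its worst pairwise election:

```python
def pairwise_support(profile):
    """N[(d, e)] = number of voters ranking d above e."""
    candidates = profile[0]
    return {(d, e): sum(1 for vote in profile if vote.index(d) < vote.index(e))
            for d in candidates for e in candidates if d != e}

def maximin_scores(profile):
    """A candidate's maximin score is its support in its worst pairwise election."""
    candidates = profile[0]
    N = pairwise_support(profile)
    return {c: min(N[(c, o)] for o in candidates if o != c) for c in candidates}

# Illustrative 3-voter, 3-candidate profile (not taken from the proofs).
profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
scores = maximin_scores(profile)
print(scores)                       # {'a': 2, 'b': 1, 'c': 1}
print(max(scores, key=scores.get))  # a
```

The point of the protocol is that these scores can be verified without each voter revealing her full ranking: given the guesses w and o(c), each voter answers only two yes/no questions per candidate, for 2n(m − 1) bits in total.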
For every vector (S1, S2, ..., Sn') consisting of n' subsets Si ⊆ C − {a, b}, let the following vector of votes be an element of the fooling set:\n• For 1 ≤ i ≤ n', let voters 4i − 3 and 4i − 2 rank the candidates Si ≻ a ≻ C − (Si ∪ {a, b}) ≻ b.\n• For 1 ≤ i ≤ n', let voters 4i − 1 and 4i rank the candidates b ≻ C − (Si ∪ {a, b}) ≻ a ≻ Si.\n• Let voter 4n' + 1 = n rank the candidates a ≻ b ≻ C − {a, b}.\nWe observe that this fooling set has size (2^{m'})^{n'} = 2^{n'm'}, and that candidate a wins in each vector of votes in the fooling set (in every one of a's pairwise elections, a is ranked higher than its opponent by 2n' + 1 = (n + 1)/2 > n/2 votes).\nAll that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses.\nLet the first vote vector correspond to the vector (S11, S12, ..., S1n'), and let the second vote vector correspond to the vector (S21, S22, ..., S2n').\nFor some i, we must have S1i ≠ S2i, so that either S1i ⊄ S2i or S2i ⊄ S1i.\nWithout loss of generality, suppose S1i ⊄ S2i, and let c be some candidate in S1i − S2i.\nNow, construct a new vote vector by taking votes 4i − 3 and 4i − 2 from the first vote vector, and the remaining votes from the second vote vector.\nIn this newly constructed vote vector, a is ranked higher than c by only 2n' − 1 voters, for the following reason.\nWhereas voters 4i − 3 and 4i − 2 do not rank c higher than a in the second vote vector (because c ∉ S2i), voters 4i − 3 and 4i − 2 do rank c higher than a in the first vote vector (because c ∈ S1i).\nMoreover, in every one of b's pairwise elections, b is ranked higher than its opponent by at least 2n' voters.\nSo, a has a lower maximin score than b, therefore a is not the winner in the newly constructed vote vector, and hence we have a
correct fooling set.\nTHEOREM 10.\nThe deterministic communication complexity of the STV rule is O(n (log m)^2).\nPROOF.\nConsider the following communication protocol.\nLet each voter first announce her most preferred candidate (O(n log m) communication).\nIn the remaining rounds, we will keep track of each voter's most preferred candidate among the remaining candidates, which will be enough to implement the rule.\nWhen candidate c is eliminated, let each of the voters whose most preferred candidate among the remaining candidates was c announce their most preferred candidate among the candidates remaining after c's elimination.\nIf candidate c was the ith candidate to be eliminated (that is, there were m − i + 1 candidates remaining before c's elimination), it follows that at most n/(m − i + 1) voters had candidate c as their most preferred candidate among the remaining candidates, and thus the number of bits to be communicated after the elimination of the ith candidate is O((n/(m − i + 1)) log m).7 Thus, the total communication in this protocol is O(n log m + (Σ_{i=1}^{m−2} n/(m − i + 1)) log m).\nOf course, Σ_{i=1}^{m−2} 1/(m − i + 1) is O(log m).\nSubstituting into the previous expression, we find that the communication complexity is O(n (log m)^2).\nTHEOREM 11.\nThe nondeterministic communication complexity of the STV rule is Ω(n log m) (even to decide whether a given candidate a wins).\nPROOF.\nWe omit this proof because of space constraints.\n7Actually, O((n/(m − i + 1)) log(m − i + 1)) is also correct, but it will not improve the bound.\nTHEOREM 12.\nThe deterministic communication complexity of the approval rule is O(nm).\nPROOF.\nApproving or disapproving of a candidate requires only one bit of information, so every voter can simply approve or disapprove of every candidate for a total communication of nm bits.\nTHEOREM 13.\nThe nondeterministic communication complexity of the approval rule is Ω(nm) (even to decide whether a given candidate a
wins).\nPROOF.\nWe will exhibit a fooling set of size 2n, m, where m' = m \u2212 1 and n' = (n \u2212 1) \/ 4.\nFor every vector (S1, S2,..., Sn,) consisting of n' subsets Si C _ C \u2212 {a}, let the following vector of votes be an element of the fooling set:\n\u2022 For 1 <i <n', let voters 4i \u2212 3 and 4i \u2212 2 approve Si U {a}.\n\u2022 For 1 <i <n', let voters 4i \u2212 1 and 4i approve C \u2212 (Si U {a}).\n\u2022 Let voter 4n' + 1 = n approve {a}.\nWe observe that this fooling set has size (2m,) n, = 2n, m,, and that candidate a wins in each vector of votes in the fooling set (a is approved by 2n' + 1 voters, whereas each other candidate is approved by only 2n' voters).\nAll that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses.\nLet the first vote vector correspond to the vector (S1 1, S12,..., S1n,), and let the second vote vector correspond to the vector (S21, S22,..., S2n,).\nFor some i, we must have S1i = 6 S2i, so that either S1i Z S2i or S2i Z S1i.\nWithout loss of generality, suppose S1i Z S2i, and let b be some candidate in S1i \u2212 S2i.\nNow, construct a new vote vector by taking votes 4i \u2212 3 and 4i \u2212 2 from the first vote vector, and the remaining votes from the second vote vector.\nIn this newly constructed vote vector, a is still approved by 2n' + 1 votes.\nHowever, b is approved by 2n' + 2 votes, for the following reason.\nWhereas voters 4i \u2212 3 and 4i \u2212 2 do not approve b in the second vote vector (because b E \/ S2i), voters 4i \u2212 3 and 4i \u2212 2 do approve b in the first vote vector (because b E S1i).\nIt follows that b's score in the newly constructed vote vector is b's score in the second vote vector (2n'), plus two.\nSo, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.\nInterestingly, an \u03a9 (m) lower bound can be obtained 
even for the problem of finding a candidate that is approved by more than one voter [20].\nTHEOREM 14.\nThe deterministic communication complexity of the Condorcet rule is O (nm).\nPROOF.\nWe maintain a set of active candidates S which is initialized to C.\nAt each stage, we choose two of the active candidates (say, the two candidates with the lowest indices), and we let each voter communicate which of the two candidates she prefers.\n(Such a stage requires the communication of n bits, one per voter.)\nThe candidate preferred by fewer\nvoters (the loser of the pairwise election) is removed from S. (If the pairwise election is tied, both candidates are removed.)\nAfter at most m \u2212 1 iterations, only one candidate is left (or zero candidates are left, in which case there is no Condorcet winner).\nLet a be the remaining candidate.\nTo find out whether candidate a is the Condorcet winner, let each voter communicate, for every candidate c = 6 a, whether she prefers a to c. (This requires the communication of at most n (m \u2212 1) bits.)\nThis is enough to establish whether a won each of its pairwise elections (and thus, whether a is the Condorcet winner).\nTHEOREM 15.\nThe nondeterministic communication complexity of the Condorcet rule is \u03a9 (nm) (even to decide whether a given candidate a wins).\nPROOF.\nWe will exhibit a fooling set of size 2n, m, where m' = m \u2212 1 and n' = (n \u2212 1) \/ 2.\nFor every vector (S1, S2,..., Sn,) consisting of n' subsets Si \u2286 C \u2212 {a}, let the following vector of votes be an element of the fooling set:\n\u2022 For 1 \u2264 i \u2264 n', let voter 2i \u2212 1 rank the candidates Si ~ a ~ C \u2212 Si.\n\u2022 For 1 \u2264 i \u2264 n', let voter 2i rank the candidates C \u2212 Si ~ a ~ Si.\n\u2022 Let voter 2n' +1 = n rank the candidates a ~ C \u2212 {a}.\nWe observe that this fooling set has size (2m,) n, = 2n, m,, and that candidate a wins in each vector of votes in the fooling set (a wins each of its pairwise 
elections by a single vote).\nAll that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses.\nLet the first vote vector correspond to the vector (S11, S12, ..., S1n'), and let the second vote vector correspond to the vector (S21, S22, ..., S2n').\nFor some i, we must have S1i ≠ S2i, so that either S1i ⊄ S2i or S2i ⊄ S1i.\nWithout loss of generality, suppose S1i ⊄ S2i, and let b be some candidate in S1i − S2i.\nNow, construct a new vote vector by taking vote 2i − 1 from the first vote vector, and the remaining votes from the second vote vector.\nIn this newly constructed vote vector, b wins its pairwise election against a by one vote (vote 2i − 1 ranks b above a in the newly constructed vote vector because b ∈ S1i, whereas in the second vote vector vote 2i − 1 ranked a above b because b ∉ S2i).\nSo, a is not the Condorcet winner in the newly constructed vote vector, and hence we have a correct fooling set.\nTHEOREM 16.\nThe deterministic communication complexity of the cup rule is O(nm).\nPROOF.\nConsider the following simple communication protocol.\nFirst, let all the voters communicate, for every one of the matchups in the first round, which of its two candidates they prefer.\nAfter this, the matchups for the second round are known, so let all the voters communicate which candidate they prefer in each matchup in the second round, etc
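The round-by-round protocol for the cup rule can be sketched directly. The snippet below is an illustration of our own (the bracket and profile are made up, and ties are resolved in favor of the second candidate, a simplifying assumption the rule itself leaves to the agenda):

```python
def pairwise_winner(profile, x, y):
    """Majority vote between x and y: one bit per voter per matchup.
    Ties go to y here (a simplifying assumption)."""
    x_votes = sum(1 for vote in profile if vote.index(x) < vote.index(y))
    return x if 2 * x_votes > len(profile) else y

def cup_winner(profile, bracket):
    """Run the cup rule round by round over a balanced bracket:
    adjacent candidates meet in round one, the winners are paired
    again, and so on, for m - 1 matchups in total."""
    survivors = list(bracket)
    while len(survivors) > 1:
        survivors = [pairwise_winner(profile, survivors[i], survivors[i + 1])
                     for i in range(0, len(survivors), 2)]
    return survivors[0]

# Illustrative 3-voter, 4-candidate example (not taken from the proofs).
profile = [["a", "b", "c", "d"], ["b", "a", "d", "c"], ["a", "d", "b", "c"]]
print(cup_winner(profile, ["a", "b", "c", "d"]))  # a
```

Since each of the m − 1 matchups costs one bit per voter, the simulation mirrors the O(nm) communication bound of the protocol above.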
.\nBecause communicating which of two candidates is preferred requires only one bit per voter, and because there are only m \u2212 1 matchups in total, this communication protocol requires O (nm) communication.\nTHEOREM 17.\nThe nondeterministic communication complexity of the cup rule is \u03a9 (nm) (even to decide whether a given candidate a wins).\nPROOF.\nWe will exhibit a fooling set of size 2n, m, where m' = (m \u2212 1) \/ 2 and n' = (n \u2212 7) \/ 2.\nGiven that m + 1 is a power of 2, so that one candidate gets a bye (that is, does not face an opponent) in the first round, let a be the candidate with the bye.\nOf the m' first-round matchups, let lj denote the one (\"left\") candidate in the jth matchup, and let rj be the other (\"right\") candidate.\nLet L = {lj: 1 \u2264 j \u2264 m'} and R = {rj: 1 \u2264 j \u2264 m'}, so that C = L \u222a R \u222a {a}.\nFigure 1: The schedule for the cup rule used in the proof of Theorem 17.\nFor every vector (S1, S2,..., Sn,) consisting of n' subsets Si \u2286 R, let the following vector of votes be an element of the fooling set:\n\u2022 For 1 \u2264 i \u2264 n', let voter 2i \u2212 1 rank the candidates Si ~ L ~ a ~ R \u2212 Si.\n\u2022 For 1 \u2264 i \u2264 n', let voter 2i rank the candidates R \u2212 Si ~ L ~ a ~ Si.\n\u2022 Let voters 2n' +1 = n \u2212 6, 2n' +2 = n \u2212 5, 2n' +3 = n \u2212 4 rank the candidates L ~ a ~ R. 
\u2022 Let voters 2n' + 4 = n \u2212 3, 2n' + 5 = n \u2212 2 rank the candidates a ~ r1 ~ l1 ~ r2 ~ l2 ~...~ rm, ~ lm,.\n\u2022 Let voters 2n' + 6 = n \u2212 1, 2n' + 7 = n rank the candidates rm, ~ lm, ~ rm,-1 ~ lm,-1 ~...~ r1 ~ l1 ~ a.\nWe observe that this fooling set has size (2m,) n, = 2n, m,.\nAlso, candidate a wins in each vector of votes in the fooling set, for the following reasons.\nEach candidate rj defeats its opponent lj in the first round.\n(For any 1 \u2264 i \u2264 n', the net effect of votes 2i \u2212 1 and 2i on the pairwise election between rj and lj is zero; votes n \u2212 6, n \u2212 5, n \u2212 4 prefer lj to rj, but votes n \u2212 3, n \u2212 2, n \u2212 1, n all prefer rj to lj.)\nMoreover, a defeats every rj in their pairwise election.\n(For any 1 \u2264 i \u2264 n', the net effect of votes 2i \u2212 1 and 2i on the pairwise election between a and rj is zero; votes n \u2212 1, n prefer rj to a, but votes n \u2212 6, n \u2212 5, n \u2212 4, n \u2212 3, n \u2212 2 all prefer a to rj.)\nIt follows that a will defeat all the candidates that it faces.\nAll that remains to show is that for any two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses.\nLet the first vote vector correspond to the vector\n(S11, S12,..., S1n,), and let the second vote vector correspond to the vector (S21, S22,..., S2n,).\nFor some i, we must have S1i = 6 S2i, so that either S1i S2i or S2i Z S1i.\nWithout loss of generality, suppose S1i * S2i, and let rj be some candidate in S1i \u2212 S2i.\nNow, construct a new vote vector by taking vote 2i from the first vote vector, and the remaining votes from the second vote vector.\nWe note that, whereas in the second vote vector vote 2i preferred rj to lj (because rj E R \u2212 S2i), in the newly constructed vote vector this is no longer the case (because rj E S1i).\nIt follows that, whereas in the second vote vector, rj defeated lj in 
the first round by one vote, in the newly constructed vote vector, lj defeats rj in the first round.\nThus, at least one lj advances to the second round after defeating its opponent rj.\nNow, we observe that in the newly constructed vote vector, any lk wins its pairwise election against any rq with q ≠ k.\nThis is because among the first 2n' votes, at least n' − 1 prefer lk to rq; votes n − 6, n − 5, n − 4 prefer lk to rq; and, because q ≠ k, either votes n − 3, n − 2 prefer lk to rq (if k < q), or votes n − 1, n prefer lk to rq (if k > q).\nThus, at least n' + 4 = (n + 1)/2 > n/2 votes prefer lk to rq.\nMoreover, any lk wins its pairwise election against a.\nThis is because only votes n − 3 and n − 2 prefer a to lk.\nIt follows that, after the first round, any surviving candidate lk can only lose a matchup against another surviving lk', so that one of the lk must win the election.\nSo, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.\nTHEOREM 18.\nThe deterministic communication complexity of the Bucklin rule is O(nm).\nPROOF.\nLet l be the minimum integer for which there is a candidate who is ranked among the top l candidates by more than half the votes.\nWe will do a binary search for l.\nAt each point, we will have a lower bound lL which is smaller than l (initialized to 0), and an upper bound lH which is at least l (initialized to m).\nWhile lH − lL > 1, we continue by finding out whether ⌊(lH + lL)/2⌋ is smaller than l, after which we can update the bounds.\nTo find out whether a number k is smaller than l, we determine every voter's k most preferred candidates.\nEvery voter can communicate which candidates are among her k most preferred candidates using m bits (for each candidate, indicate whether the candidate is among the top k or not), but because the binary search requires log m iterations, this gives us an upper bound of O((log m)nm), which
is not strong enough.\nHowever, if lL <k <lH, and we already know a voter's lL most preferred candidates, as well as her lH most preferred candidates, then the voter no longer needs to communicate whether the lL most preferred candidates are among her k most preferred candidates (because they must be), and she no longer needs to communicate whether the m \u2212 lH least preferred candidates are among her k most preferred candidates (because they cannot be).\nThus the voter needs to communicate only m \u2212 lL \u2212 (m \u2212 lH) = lH \u2212 lL bits in any given stage.\nBecause each stage, lH \u2212 lL is (roughly) halved, each voter in total communicates only (roughly) m + m\/2 + m\/4 +...<2m bits.\nTHEOREM 19.\nThe nondeterministic communication complexity of the Bucklin rule is \u03a9 (nm) (even to decide whether a given candidate a wins).\nPROOF.\nWe will exhibit a fooling set of size 2n, m, where m' = (m \u2212 1) \/ 2 and n' = n\/2.\nWe write the set of candidates as the following disjoint union: C = {a} U L U R where L = {l1, l2,..., lm,} and R = {r1, r2,..., rm,}.\nFor any subset S C _ {1, 2,..., m'}, let L (S) = {li: i E S} and let R (S) = {ri: i E S}.\nFor every vector (S1, S2,..., Sn,) consisting of n' sets Si C _ {1, 2,..., m'}, let the following vector of votes be an element of the fooling set:\n\u2022 For 1 <i <n', let voter 2i \u2212 1 rank the candidates L (Si)> - R \u2212 R (Si)> - a> - L \u2212 L (Si)> - R (Si).\n\u2022 For 1 <i <n', let voter 2i rank the candidates L \u2212 L (Si)> - R (Si)> - a> - L (Si)> - R \u2212 R (Si).\nWe observe that this fooling set has size (2m,) n, = 2n, m,, and that candidate a wins in each vector of votes in the fooling set, for the following reason.\nEach candidate in C \u2212 {a} is ranked among the top m' candidates by exactly half the voters (which is not enough to win).\nThus, we need to look at the voters' top m' +1 candidates, and a is ranked m' +1 th by all voters.\nAll that remains to show is that for any 
two distinct vectors of votes in the fooling set, we can let each of the voters vote according to one of these two vectors in such a way that a loses.\nLet the first vote vector correspond to the vector (S11, S12,..., S1n,), and let the second vote vector correspond to the vector (S21, S22,..., S2n,).\nFor some i, we must have S1i = 6 S2i, so that either S1i S2i or S2i Z S1i.\nWithout loss of generality, suppose S1i * S2i, and let j be some integer in S1i \u2212 S2i.\nNow, construct a new vote vector by taking vote 2i \u2212 1 from the first vote vector, and the remaining votes from the second vote vector.\nIn this newly constructed vote vector, a is still ranked m' + 1th by all votes.\nHowever, lj is ranked among the top m' candidates by n' + 1 = n\/2 + 1 votes.\nThis is because whereas vote 2i \u2212 1 does not rank lj among the top m' candidates in the second vote vector (because j E \/ S2i, we have lj E \/ L (S2i)), vote 2i \u2212 1 does rank lj among the top m' candidates in the first vote vector (because j E S1i, we have lj E L (S1i)).\nSo, a is not the winner in the newly constructed vote vector, and hence we have a correct fooling set.\nTHEOREM 20.\nThe nondeterministic communication complexity of the ranked pairs rule is \u03a9 (nm log m) (even to decide whether a given candidate a wins).\nPROOF.\nWe omit this proof because of space constraint.\n5.\nDISCUSSION\nOne key obstacle to using voting for preference aggregation is the communication burden that an election places on the voters.\nBy lowering this burden, it may become feasible to conduct more elections over more issues.\nIn the limit, this could lead to a shift from representational government to a system in which most issues are decided by referenda--a veritable e-democracy.\nIn this paper, we analyzed the communication complexity of the common voting rules.\nKnowing which voting rules require little communication is especially important when the issue to be voted on is of low enough importance 
that the following is true: the parties involved are willing to accept a rule that tends\nto produce outcomes that are slightly less representative of the voters' preferences, if this rule reduces the communication burden on the voters significantly.\nThe following table summarizes the results we obtained.\nCommunication complexity of voting rules, sorted from low to high.\nAll of the upper bounds are deterministic (with the exception of maximin, for which the best deterministic upper bound we proved is O (nm log m)).\nAll of the lower bounds hold even for nondeterministic communication and even just for determining whether a given candidate a is the winner.\nOne area of future research is to study what happens when we restrict our attention to communication protocols that do not reveal any strategically useful information.\nThis restriction may invalidate some of the upper bounds that we derived using multistage communication protocols.\nAlso, all of our bounds are worst-case bounds.\nIt may be possible to outperform these bounds when the distribution of votes has additional structure.\nWhen deciding which voting rule to use for an election, there are many considerations to take into account.\nThe voting rules that we studied in this paper are the most common ones that have survived the test of time.\nOne way to select among these rules is to consider recent results on complexity.\nThe table above shows that from a communication complexity perspective, plurality, plurality with runoff, and STV are preferable.\nHowever, plurality has the undesirable property that it is computationally easy to manipulate by voting strategically [3, 7].\nPlurality with runoff is NP-hard to manipulate by a coalition of weighted voters, or by an individual that faces correlated uncertainty about the others' votes [7, 6].\nSTV is NP-hard to manipulate in those settings as well [7], but also by an individual with perfect knowledge of the others' votes (when the number of candidates is 
unbounded) [2].\nTherefore, STV is more robust, although it may require slightly more worst-case communication as per the table above.\nYet other selection criteria are the computational complexity of determining whether enough information has been elicited to declare a winner, and that of determining the optimal sequence of queries [8].","keyphrases":["commun","commun complex","complex","vote rule","vote","stv","maximin","protocol","prefer","prefer aggreg","resourc alloc","elicit problem"],"prmu":["P","P","P","P","P","P","P","P","U","U","U","U"]} {"id":"C-75","title":"Composition of a DIDS by Integrating Heterogeneous IDSs on Grids","abstract":"This paper considers the composition of a DIDS (Distributed Intrusion Detection System) by integrating heterogeneous IDSs (Intrusion Detection Systems). A Grid middleware is used for this integration. In addition, an architecture for this integration is proposed and validated through simulation.","lvl-1":"Composition of a DIDS by Integrating Heterogeneous IDSs on Grids Paulo F. Silva and Carlos B. Westphall and Carla M. Westphall Network and Management Laboratory Department of Computer Science and Statistics Federal University of Santa Catarina, Florian\u00f3polis, Brazil Marcos D. 
Assun\u00e7\u00e3o Grid Computing and Distributed Systems Laboratory and NICTA Victoria Laboratory Department of Computer Science and Software Engineering The University of Melbourne, Victoria, 3053, Australia {paulo,westphal,assuncao,carla}@lrg.\nufsc.br ABSTRACT This paper considers the composition of a DIDS (Distributed Intrusion Detection System) by integrating heterogeneous IDSs (Intrusion Detection Systems).\nA Grid middleware is used for this integration.\nIn addition, an architecture for this integration is proposed and validated through simulation.\nCategories and Subject Descriptors C.2.4 [Distributed Systes]: Client\/Server, Distributed Applications.\n1.\nINTRODUCTION Solutions for integrating heterogeneous IDSs (Intrusion Detection Systems) have been proposed by several groups [6],[7],[11],[2].\nSome reasons for integrating IDSs are described by the IDWG (Intrusion Detection Working Group) from the IETF (Internet Engineering Task Force) [12] as follows: \u2022 Many IDSs available in the market have strong and weak points, which generally make necessary the deployment of more than one IDS to provided an adequate solution.\n\u2022 Attacks and intrusions generally originate from multiple networks spanning several administrative domains; these domains usually utilize different IDSs.\nThe integration of IDSs is then needed to correlate information from multiple networks to allow the identification of distributed attacks and or intrusions.\n\u2022 The interoperability\/integration of different IDS components would benefit the research on intrusion detection and speed up the deployment of IDSs as commercial products.\nDIDSs (Distributed Intrusion Detection Systems) therefore started to emerge in early 90s [9] to allow the correlation of intrusion information from multiple hosts, networks or domains to detect distributed attacks.\nResearch on DIDSs has then received much interest, mainly because centralised IDSs are not able to provide the information needed to 
prevent such attacks [13].\nHowever, the realization of a DIDS requires a high degree of coordination.\nComputational Grids are appealing as they enable the development of distributed application and coordination in a distributed environment.\nGrid computing aims to enable coordinate resource sharing in dynamic groups of individuals and\/or organizations.\nMoreover, Grid middleware provides means for secure access, management and allocation of remote resources; resource information services; and protocols and mechanisms for transfer of data [4].\nAccording to Foster et al. [4], Grids can be viewed as a set of aggregate services defined by the resources that they share.\nOGSA (Open Grid Service Architecture) provides the foundation for this service orientation in computational Grids.\nThe services in OGSA are specified through well-defined, open, extensible and platformindependent interfaces, which enable the development of interoperable applications.\nThis article proposes a model for integration of IDSs by using computational Grids.\nThe proposed model enables heterogeneous IDSs to work in a cooperative way; this integration is termed DIDSoG (Distributed Intrusion Detection System on Grid).\nEach of the integrated IDSs is viewed by others as a resource accessed through the services that it exposes.\nA Grid middleware provides several features for the realization of a DIDSoG, including [3]: decentralized coordination of resources; use of standard protocols and interfaces; and the delivery of optimized QoS (Quality of Service).\nThe service oriented architecture followed by Grids (OGSA) allows the definition of interfaces that are adaptable to different platforms.\nDifferent implementations can be encapsulated by a service interface; this virtualisation allows the consistent access to resources in heterogeneous environments [3].\nThe virtualisation of the environment through service interfaces allows the use of services without the knowledge of how they are actually 
implemented.\nThis characteristic is important for the integration of IDSs as the same service interfaces can be exposed by different IDSs.\nGrid middleware can thus be used to implement a great variety of services.\nSome functions provided by Grid middleware are [3]: (i) data management services, including access services, replication, and localisation; (ii) workflow services that implement coordinate execution of multiple applications on multiple resources; (iii) auditing services that perform the detection of frauds or intrusions; (iv) monitoring services which implement the discovery of sensors in a distributed environment and generate alerts under determined conditions; (v) services for identification of problems in a distributed environment, which implement the correlation of information from disparate and distributed logs.\nThese services are important for the implementation of a DIDSoG.\nA DIDS needs services for the location of and access to distributed data from different IDSs.\nAuditing and monitoring services take care of the proper needs of the DIDSs such as: secure storage, data analysis to detect intrusions, discovery of distributed sensors, and sending of alerts.\nThe correlation of distributed logs is also relevant because the detection of distributed attacks depends on the correlation of the alert information generated by the different IDSs that compose the DIDSoG.\nThe next sections of this article are organized as follows.\nSection 2 presents related work.\nThe proposed model is presented in Section 3.\nSection 4 describes the development and a case study.\nResults and discussion are presented in Section 5.\nConclusions and future work are discussed in Section 6.\n2.\nRELATED WORK DIDMA [5] is a flexible, scalable, reliable, and platformindependent DIDS.\nDIDMA architecture allows distributed analysis of events and can be easily extended by developing new agents.\nHowever, the integration with existing IDSs and the development of security 
components are presented as future work [5]. The extensibility of DIDMA and its integration with other IDSs are goals pursued by DIDSoG. The flexibility, scalability, platform independence, reliability and security components discussed in [5] are achieved in DIDSoG by using a Grid platform.

More efficient techniques for the analysis of large amounts of data in wide-scale networks, based on clustering and applicable to DIDSs, are presented in [13]. The integration of heterogeneous IDSs, to increase the variety of intrusion detection techniques in the environment, is mentioned as future work in [13]; DIDSoG aims precisely at such integration.

Ref. [10] presents a hierarchical architecture for a DIDS; information is collected, aggregated, correlated and analysed as it is sent up the hierarchy. The architecture comprises several components for monitoring, correlation, intrusion detection by statistics, detection by signatures, and response. Components at the same level of the hierarchy cooperate with one another. The integration proposed by DIDSoG also follows a hierarchical architecture. Each IDS integrated into the DIDSoG offers functionalities at a given level of the hierarchy and requests functionalities from IDSs at another level. The hierarchy presented in [10] integrates homogeneous IDSs, whereas the hierarchical architecture of DIDSoG integrates heterogeneous IDSs.

There are proposals on integrating computational Grids and IDSs [6], [7], [11], [2]. Refs. [6] and [7] propose the use of the Globus Toolkit for intrusion detection, especially for DoS (Denial of Service) and DDoS (Distributed Denial of Service) attacks; Globus is used due to the need to process large amounts of data to detect these kinds of attack. A two-phase processing architecture is presented: the first phase aims at the detection of momentary attacks, while the second phase is concerned with chronic or perennial attacks.

Traditional IDSs or DIDSs are generally coordinated by
a central point, a characteristic that leaves them prone to attacks. Leu et al. [6] point out that IDSs developed upon Grid platforms are less vulnerable to attacks because of the distribution provided by such platforms. Leu et al. [6], [7] have used tools to generate several types of attacks, including TCP, ICMP and UDP flooding, and have demonstrated through experimental results the advantages of applying computational Grids to IDSs. Their work proposes the development of a DIDS upon a Grid platform. However, the DIDSs upon Grids presented by Leu et al. [6], [7] do not consider the integration of heterogeneous IDSs, whereas the DIDS resulting from our model does. The processing in phases [6], [7] is also contemplated by DIDSoG, enabled by the several levels of processing allowed by the integration of heterogeneous IDSs.

The DIDS GIDA (Grid Intrusion Detection Architecture) targets the detection of intrusions in a Grid environment [11]. The GridSim Grid simulator was used for the validation of GIDA, and homogeneous resources were used to simplify the development [11]; the possibility of applying heterogeneous detection systems is left for future work. Another DIDS for Grids is presented by Choon and Samsudim [2], with scenarios demonstrating how a DIDS can execute on a Grid environment.

DIDSoG does not aim at detecting intrusions in a Grid environment. In contrast, DIDSoG uses the Grid to compose a DIDS by integrating specific IDSs; the resulting DIDS could, however, be used to identify attacks in a Grid environment. Local and distributed attacks can be detected through the integration of traditional IDSs, while attacks particular to Grids can be detected through the integration of Grid IDSs.

3. THE PROPOSED MODEL

DIDSoG presents a hierarchy of intrusion detection services; this hierarchy is organized through a two-dimensional vector defined by Scope:Complexity.
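The routing implied by the Scope:Complexity vector can be sketched as a small registry of resources indexed by their (scope, complexity) position. The class interface, resource names and levels below are illustrative assumptions for one plausible reading of the flow described next, not part of the model's specification.

```python
# Sketch of Scope:Complexity routing (hypothetical names and levels):
# each resource registers at a (scope, complexity) position; results flow
# to the next complexity level in the same scope (deeper analysis) and to
# the first complexity level of the next scope (wider aggregation).
from collections import defaultdict

class Registry:
    """Minimal stand-in for a Grid Information Service (GIS)."""
    def __init__(self):
        self.by_level = defaultdict(list)   # (scope, complexity) -> [names]

    def register(self, name, scope, complexity):
        self.by_level[(scope, complexity)].append(name)

    def destinations(self, scope, complexity):
        # Same scope, next complexity level ...
        nxt = list(self.by_level[(scope, complexity + 1)])
        # ... and next scope, first complexity level.
        nxt += self.by_level[(scope + 1, 1)]
        return nxt

gis = Registry()
gis.register("Analyser_1", scope=1, complexity=1)
gis.register("Analyser_1N", scope=1, complexity=2)
gis.register("Aggreg_Corr_1", scope=2, complexity=1)

# A level-1:1 analyser forwards both to deeper analysers and to the aggregator.
print(gis.destinations(1, 1))   # ['Analyser_1N', 'Aggreg_Corr_1']
```

In this reading, no resource needs global knowledge of the hierarchy: it only queries the registry for its own next levels, which is consistent with the phased processing discussed later.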
The IDSs composing DIDSoG can be organized in different levels of scope or complexity, depending on their functionalities, the topology of the target environment and the expected results. Figure 1 presents a DIDSoG composed of different intrusion detection services (i.e. data gathering, data aggregation, data correlation, analysis, intrusion response and management) provided by different IDSs. The information flow and the relationship between the levels of scope and complexity are presented in the figure.

Information about the environment (host, network or application) is collected by Sensors located on both user 1's and user 2's computers in domain 1. The information is sent both to simple Analysers that act on the information from a single host (level 1:1), and to aggregation and correlation services that act on information from multiple hosts of the same domain (level 2:1). Simple Analysers in the first scope level send the information to more complex Analysers in the next levels of complexity (level 1:N). When an Analyser detects an intrusion, it communicates with the Countermeasure and Monitoring services registered to its scope. An Analyser can invoke a Countermeasure service that replies to a detected attack, or inform a Monitoring service about the ongoing attack, so the administrator can act accordingly.

Aggregation and correlation resources in the second scope receive information from Sensors on different users' computers (user 1's and user 2's) in domain 1. These resources process the received information and send it to the analysis resources registered at the first level of complexity in the second scope (level 2:1). The information is also sent to the aggregation and correlation resources registered at the first level of complexity in the next scope (level 3:1).

Fig. 1. How DIDSoG works.

The analysis resources in the second scope act like those in the first scope, directing the information to a more complex analysis resource and putting the Countermeasure and Monitoring resources into action in case of detected attacks. Aggregation and correlation resources in the third scope receive information from domains 1 and 2. These resources then carry out the aggregation and correlation of the information from the different domains and send the result to the analysis resources at the first level of complexity in the third scope (level 3:1). The information could also be sent to the aggregation service in the next scope, if any resources were registered at such a level. The analysis resources in the third scope act similarly to those in the first and second scopes, except that they act on information from multiple domains.

The functionalities of the registered resources in each scope and complexity level can vary from one environment to another. The model allows the definition of N levels of scope and complexity.

Figure 2 presents the architecture of a resource participating in the DIDSoG. Initially, the resource registers itself with the GIS (Grid Information Service) so that other participating resources can query the services provided. After registering, the resource requests information about the other intrusion detection resources registered with the GIS. A given DIDSoG resource interacts with other resources by receiving data from its Origin Resources, processing it, and sending the results to its Destination Resources, therefore forming a grid of intrusion detection resources.

Fig.
2. Architecture of a resource participating in the DIDSoG.

A resource is made up of four components: Base, Connector, Descriptor and Native IDS. The Native IDS corresponds to the IDS being integrated into the DIDSoG. This component processes the data received from the Origin Resources and generates new data to be sent to the Destination Resources. A Native IDS component can be any tool that processes information related to intrusion detection, including analysis, data gathering, data aggregation, data correlation, intrusion response or management.

The Descriptor is responsible for the information that identifies a resource and its respective Destination Resources in the DIDSoG. Figure 3 presents the class diagram of the information stored by the Descriptor. The ResourceDescriptor class has members of types Feature, Level, DataType and TargetResources. The Feature class represents the functionalities that a resource has; its type, name and version attributes refer to the functions offered by the Native IDS component, its name and its version, respectively. The Level class identifies the levels of scope and complexity at which the resource acts. The DataType class represents the data format that the resource accepts as input; it is specialized by the classes Text, XML and Binary, and the XML class contains a DTDFile attribute specifying the DTD file that validates the received XML.

Fig. 3. Class diagram of the Descriptor component.

The TargetResources class represents the features of the Destination Resources of a given resource. This class aggregates Resource. The Resource class identifies the characteristics of a Destination Resource; this identification is made through the featureType attribute and the Level and DataType classes. A given resource analyses the information from the Descriptors of other resources, and compares it with the information specified in its own TargetResources, to determine to which resources it should send the results of its processing.

The Base component is responsible for the communication of a resource with the other resources of the DIDSoG and with the Grid Information Service. It is this component that registers the resource with, and queries other resources in, the GIS.

The Connector component is the link between the Base and the Native IDS. The information that the Base receives from the Origin Resources is passed to the Connector, which performs the changes needed for the data to be understood by the Native IDS and then sends the data to the Native IDS for processing. The Connector is also responsible for collecting the information processed by the Native IDS and making the changes needed for the information to pass through the DIDSoG again. After these changes, the Connector sends the information to the Base, which in turn sends it to the Destination Resources in accordance with the specifications of the Descriptor component.

4. IMPLEMENTATION

We have used GridSim Toolkit 3 [1] for the development and evaluation of the proposed model. We have used and extended GridSim features to model and simulate the resources and components of DIDSoG. Figure 4 presents the class diagram of the simulated DIDSoG. The Simulation_DIDSoG class starts the simulation components. The Simulation_User class represents a user of DIDSoG.
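The descriptor information of Figure 3 can be pictured as an XML file. The tag and attribute names below are illustrative assumptions (the paper does not publish its schema), as is the small routine that reads the TargetResources entries.

```python
# Hypothetical XML rendering of the Figure 3 descriptor classes; tag and
# attribute names are illustrative assumptions, not a published schema.
import xml.etree.ElementTree as ET

DESCRIPTOR = """
<resourceDescriptor ident="sensor-1" version="1.0">
  <feature featureType="sensor" name="TCPDumpSensor" version="3.8"/>
  <level escope="1" complex="1"/>
  <dataType type="Text" version="1.0"/>
  <targetResources>
    <resource featureType="analyser" escope="1" complex="1"/>
    <resource featureType="aggregation" escope="2" complex="1"/>
  </targetResources>
</resourceDescriptor>
"""

def targets(descriptor_xml):
    """Return the (featureType, scope, complexity) triples a resource
    sends its results to, as specified by TargetResources."""
    root = ET.fromstring(descriptor_xml)
    return [(r.get("featureType"), int(r.get("escope")), int(r.get("complex")))
            for r in root.find("targetResources")]

print(targets(DESCRIPTOR))
# [('analyser', 1, 1), ('aggregation', 2, 1)]
```

A Base component could compare such triples against the feature and level information advertised in other resources' descriptors to decide where to forward its results.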
The Simulation_User class's function is to initiate the processing of a Sensor resource, from where the gathered information is sent on to other resources. DIDSoG_GIS keeps a registry of the DIDSoG resources. The DIDSoG_BaseResource class implements the Base component (see Figure 2). DIDSoG_BaseResource interacts with the DIDSoG_Descriptor class, which represents the Descriptor component. The DIDSoG_Descriptor class is created from an XML file that specifies a resource descriptor (see Figure 3).

Fig. 4. Class diagram of the simulated DIDSoG.

A Connector component must be developed for each Native IDS integrated into DIDSoG. The Connector is implemented by creating a class derived from DIDSoG_BaseResource; the new class implements new functionalities in accordance with the needs of the corresponding Native IDS.

In the simulation environment, resources for data collection, analysis, aggregation/correlation and response generation were integrated. Classes were developed to simulate the processing of the Native IDS component associated with each resource. For each simulated Native IDS, a class derived from DIDSoG_BaseResource was developed; this class corresponds to the Connector component of the Native IDS and integrates the IDS into DIDSoG. An XML file describing each of the integrated resources is loaded through the Connector component. The resulting relationships between the resources integrated into the DIDSoG, in accordance with the specifications of their respective descriptors, are presented in Figure 5.

The Sensor_1 and Sensor_2 resources generate simulated data in the TCPDump [8] format. The generated data is directed to the Analyser_1 and Aggreg_Corr_1 resources in the case of Sensor_1, and to Aggreg_Corr_1 in the case of Sensor_2, according to the specifications of their descriptors.

Fig. 5. Flow of the execution of the simulation.

The Native IDS of Analyser_1 generates alerts for any connection attempt to port 23. The data received by Analyser_1 presented such features, generating an IDMEF (Intrusion Detection Message Exchange Format) alert [14]. The generated alert was sent to the Countermeasure_1 resource, where a warning was dispatched to the administrator.

The Aggreg_Corr_1 resource received the information generated by Sensors 1 and 2. Its processing consists of correlating the source IP addresses within the received data. The resulting information was directed to the Analyser_2 resource. The Native IDS component of Analyser_2 generates alerts when a source tries to connect to the same port number on multiple destinations. This situation was identified by Analyser_2 in the data received from Aggreg_Corr_1, and an alert in IDMEF format was then sent to the Countermeasure_2 resource. In addition to generating alerts in IDMEF format, Analyser_2 also directs the received data to Analyser_3, at complexity level 2. The Native IDS component of Analyser_3 generates alerts when the transmission of ICMP messages from a given source to multiple destinations is detected. This situation was detected in the data received from Analyser_2, and an IDMEF alert was then sent to the Countermeasure_2 resource. The Countermeasure_2 resource receives the alerts generated by Analysers 2 and 3, in accordance with the implementation of its Native IDS component, and warnings about the received alerts are dispatched to the administrator.

The simulation carried out demonstrates how DIDSoG works. Simulated data was generated as input for a grid of intrusion detection systems composed of several distinct resources.
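The Analyser_2 rule of the case study, raising an alert when one source tries the same port on multiple destinations, can be sketched as follows. The record fields, the alert dictionary and the threshold are illustrative assumptions; the real analyser emits alerts in IDMEF.

```python
# Sketch of the Analyser_2 rule: alert when a single source connects to
# the same port on multiple destinations. Record fields, threshold and
# alert format are illustrative assumptions (real alerts use IDMEF).
from collections import defaultdict

def analyse(records, min_destinations=2):
    seen = defaultdict(set)            # (src, port) -> {dst, ...}
    alerts = []
    for rec in records:
        key = (rec["src"], rec["dport"])
        seen[key].add(rec["dst"])
        if len(seen[key]) == min_destinations:
            alerts.append({"classification": "port sweep",
                           "source": rec["src"], "port": rec["dport"]})
    return alerts

aggregated = [                          # simulated output of Aggreg_Corr_1
    {"src": "10.0.0.5", "dst": "10.0.1.1", "dport": 23},
    {"src": "10.0.0.5", "dst": "10.0.1.2", "dport": 23},
]
print(analyse(aggregated))
# [{'classification': 'port sweep', 'source': '10.0.0.5', 'port': 23}]
```

Because Aggreg_Corr_1 has already correlated records by source IP, a per-source rule of this kind can run over data from several sensors at once.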
The resources carry out tasks such as data collection, aggregation and analysis, and the generation of alerts and warnings, in an integrated manner.

5. EXPERIMENT RESULTS

The hierarchic organization of scope and complexity gives the model a high degree of flexibility: the DIDSoG can be modelled in accordance with the needs of each environment, with the descriptors defining the data flow desired for the resulting DIDS.

Each Native IDS is integrated into the DIDSoG through a Connector component, which is a further point of flexibility. Adaptations, data-type conversions and auxiliary processes that Native IDSs need are provided by the Connector; filters and the generation of specific logs for each Native IDS or environment can also be incorporated into it. If the integration of a new IDS into an already configured environment is desired, it is enough to develop the Connector for that IDS and to specify its resource Descriptor; after that, the new IDS is integrated into the DIDSoG.

Through the definition of scopes, resources can act on data from different source groups. For example, scope 1 can be related to a given set of hosts, scope 2 to another set of hosts, and scope 3 to the hosts of scopes 1 and 2. Scopes can be defined according to the needs of each environment.

The complexity levels allow the processing to be distributed among several resources within the same scope. In an analysis task, for example, the search for simple attacks can be made by resources of complexity 1, whereas the search for more complex attacks, which demands more time, can be performed by resources of complexity 2; the analysis of the data is thus shared between two resources. The distinction between complexity levels can also be organized so as to integrate different intrusion detection techniques: complexity level 1 could be defined for analyses based on signatures, which are simpler; complexity level 2 for techniques based on behaviour, which require greater computational power; and complexity level 3 for intrusion detection in applications, where the techniques are more specific and depend on more data.

The division into scopes and complexity levels means that data processing is carried out in phases. No resource has full knowledge of the complete data processing flow; each resource only knows the results of its own processing and the destinations to which it sends them. Resources of higher complexity must be linked to resources of lower complexity. The hierarchic structure of the DIDSoG is therefore maintained, facilitating its extension and its integration with other intrusion detection domains.

By establishing a hierarchic relationship among the several analysers chosen for an environment, the sensor resource is not overloaded with the task of sending its data to all the analysers. There is an initial analyser (complexity level 1) to which the sensor sends its data, and this analyser then directs the data to the next step of the processing flow.

Another feature of the hierarchical organization is easy extension and integration with other domains. If it is necessary to add a new host (sensor) to the DIDSoG, it is enough to plug it into the first level of the resource hierarchy. If it is necessary to add a new analyser, even one whose scope spans several domains, it is enough to relate it to another resource of the same scope.

The DIDSoG also allows different levels to be managed by different entities. For example, the first scope can be managed by the local user of a host; the second scope, comprising several hosts of a domain, can be managed by the administrator of the domain; and a third entity can be responsible for managing the security of several domains jointly, acting in scope 3 independently of the others.
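The division of analysis between complexity levels can be illustrated with a minimal sketch: a cheap signature lookup at level 1, escalating to a costlier behavioural (rate-based) check at level 2. The signatures, field names and threshold are assumptions for illustration only.

```python
# Sketch of phased analysis across complexity levels: a cheap signature
# match at level 1, escalating to a costlier behavioural check at level 2.
# Signatures, field names and the rate threshold are illustrative.
from collections import Counter

SIGNATURES = {23: "telnet probe"}        # level 1: simple signature lookup

def level1(event):
    return SIGNATURES.get(event["dport"])

def level2(events, max_rate=100):
    # Behavioural check: flag sources emitting events above a rate threshold.
    counts = Counter(e["src"] for e in events)
    return [src for src, n in counts.items() if n > max_rate]

events = [{"src": "10.0.0.9", "dport": 23}]
hit = level1(events[0])                  # fast per-event check
suspects = level2(events * 150)          # escalate full window to level 2
print(hit, suspects)
# prints: telnet probe ['10.0.0.9']
```

Splitting the work this way keeps the per-event path cheap while the expensive aggregate check runs only at the higher complexity level, mirroring the signature/behaviour division suggested above.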
With the proposed model for the integration of IDSs on Grids, the different IDSs of an environment (or multiple integrated IDSs) act in a cooperative manner, improving the intrusion detection services in two main aspects. First, the information from multiple sources is analysed in an integrated way to search for distributed attacks; this integration can be made across several scopes. Second, a great diversity of techniques for data aggregation, data correlation, analysis and intrusion response can be applied to the same environment, organized across several levels of complexity.

6. CONCLUSION

The integration of heterogeneous IDSs is important, but the incompatibility and diversity of IDS solutions make such integration extremely difficult. This work has therefore proposed a model for the composition of a DIDS by integrating existing IDSs on a computational Grid platform (DIDSoG). IDSs in DIDSoG are encapsulated as Grid services for intrusion detection. A computational Grid platform is used for the integration by providing the basic requirements for communication, localization, resource sharing and security mechanisms.

The components of the DIDSoG architecture were developed and evaluated using the GridSim Grid simulator. Services for communication and localization were used to carry out the integration between components of different resources. Based on the components of the architecture, several resources were modelled, forming a grid of intrusion detection. The simulation demonstrated the usefulness of the proposed model: data from the sensor resources was read and used to feed the other resources of DIDSoG, and the integration of distinct IDSs could be observed through the simulated environment. Resources providing different intrusion detection services (e.g. analysis, correlation, aggregation and alerting) were integrated. During the simulation, the different IDSs cooperated with one another in a distributed yet coordinated way, with an integrated view of the events, and thus with the capability to detect distributed attacks. This capability demonstrates that the integrated IDSs result in a DIDS.

Related work presents cooperation between components of a specific DIDS. Some works focus on the development of DIDSs on computational Grids, others on the application of IDSs to computational Grids; however, none deals with the integration of heterogeneous IDSs. In contrast, the model developed and simulated in this work sheds some light on the question of integrating heterogeneous IDSs.

DIDSoG presents new research opportunities that we would like to pursue, including: deployment of the model in a more realistic environment such as a real Grid; incorporation of new security services; and parallel analysis of data by Native IDSs on multiple hosts. In addition to the integration of IDSs enabled by a Grid middleware, the cooperation of heterogeneous IDSs can be viewed as an economic problem: IDSs from different organizations or administrative domains need incentives for joining a grid of intrusion detection services and for collaborating with other IDSs. The development of distributed strategy-proof mechanisms for the integration of IDSs is a challenge that we would like to tackle.

7. REFERENCES

[1] Sulistio, A.; Poduvaly, G.; Buyya, R.; Tham, C. K. Constructing a Grid Simulation with Differentiated Network Service Using GridSim. Proc. of the 6th
International Conference on Internet Computing (ICOMP'05), June 27-30, 2005, Las Vegas, USA.

[2] Choon, O. T.; Samsudim, A. Grid-based Intrusion Detection System. Proc. of the 9th IEEE Asia-Pacific Conference on Communications, September 2003.

[3] Foster, I.; Kesselman, C.; Tuecke, S. The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration. Draft, June 2002. Available at http://www.globus.org/research/papers/ogsa.pdf. Accessed Feb. 2006.

[4] Foster, I.; Kesselman, C.; Tuecke, S. The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International Journal of Supercomputer Applications, 2001.

[5] Kannadiga, P.; Zulkernine, M. DIDMA: A Distributed Intrusion Detection System Using Mobile Agents. Proc. of the IEEE Sixth International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, May 2005.

[6] Leu, F.-Y., et al. Integrating Grid with Intrusion Detection. Proc. of the 19th IEEE AINA'05, March 2005.

[7] Leu, F.-Y., et al. A Performance-Based Grid Intrusion Detection System. Proc. of the 29th IEEE COMPSAC'05, July 2005.

[8] McCanne, S.; Leres, C.; Jacobson, V. TCPdump/Libpcap. http://www.tcpdump.org/, 1994.

[9] Snapp, S. R., et al. DIDS (Distributed Intrusion Detection System): Motivation, Architecture and an Early Prototype. Proc. of the Fifteenth National Computer Security Conference, Baltimore, MD, October 1992.

[10] Sterne, D., et al. A General Cooperative Intrusion Detection Architecture for MANETs. Proc. of the Third IEEE IWIA'05, March 2005.

[11] Tolba, M. F., et al. GIDA: Toward Enabling Grid Intrusion Detection Systems. Proc. of the 5th IEEE International Symposium on Cluster Computing and the Grid, May 2005.

[12] Wood, M.
Intrusion Detection message exchange requirements.\nDraft-ietf-idwg-requirements-10, October 2002.\nAvailable at http:\/\/www.ietf.org\/internet-drafts\/draftietf-idwg-requirements-10.txt.\nAccess March 2006.\n[13] Zhang, Yu-Fang; Xiong, Z.; Wang, X. Distributed Intrusion Detection Based on Clustering.\nProceedings of IEEE International Conference Machine Learning and Cybernetics, August 2005.\n[14] Curry, D.; Debar, H. Intrusion Detection Message exchange format data model and Extensible Markup Language (XML) Document Type Definition.\nDraft-ietf-idwg-idmef-xml-10, March 2006.\nAvailable at http:\/\/www.ietf.org\/internetdrafts\/draft-ietf-idwg-idmef-xml-16.txt.","lvl-3":"Composition of a DIDS by Integrating Heterogeneous IDSs on Grids\nABSTRACT\nThis paper considers the composition of a DIDS (Distributed Intrusion Detection System) by integrating heterogeneous IDSs (Intrusion Detection Systems).\nA Grid middleware is used for this integration.\nIn addition, an architecture for this integration is proposed and validated through simulation.\n1.\nINTRODUCTION\nSolutions for integrating heterogeneous IDSs (Intrusion Detection Systems) have been proposed by several groups [6], [7], [11], [2].\nSome reasons for integrating IDSs are described by the IDWG (Intrusion Detection Working Group) from the IETF (Internet Engineering Task Force) [12] as follows: \u2022 Many IDSs available in the market have strong and weak points, which generally make necessary the deployment of more than one IDS to provided an adequate solution.\n\u2022 Attacks and intrusions generally originate from multiple networks spanning several administrative domains; these domains usually utilize different IDSs.\nThe integration of IDSs is then needed to correlate information from multiple\nnetworks to allow the identification of distributed attacks and or intrusions.\n\u2022 The interoperability\/integration of different IDS components would benefit the research on intrusion detection and speed up the 
deployment of IDSs as commercial products.\nDIDSs (Distributed Intrusion Detection Systems) therefore started to emerge in early 90s [9] to allow the correlation of intrusion information from multiple hosts, networks or domains to detect distributed attacks.\nResearch on DIDSs has then received much interest, mainly because centralised IDSs are not able to provide the information needed to prevent such attacks [13].\nHowever, the realization of a DIDS requires a high degree of coordination.\nComputational Grids are appealing as they enable the development of distributed application and coordination in a distributed environment.\nGrid computing aims to enable coordinate resource sharing in dynamic groups of individuals and\/or organizations.\nMoreover, Grid middleware provides means for secure access, management and allocation of remote resources; resource information services; and protocols and mechanisms for transfer of data [4].\nAccording to Foster et al. [4], Grids can be viewed as a set of aggregate services defined by the resources that they share.\nOGSA (Open Grid Service Architecture) provides the foundation for this service orientation in computational Grids.\nThe services in OGSA are specified through well-defined, open, extensible and platformindependent interfaces, which enable the development of interoperable applications.\nThis article proposes a model for integration of IDSs by using computational Grids.\nThe proposed model enables heterogeneous IDSs to work in a cooperative way; this integration is termed DIDSoG (Distributed Intrusion Detection System on Grid).\nEach of the integrated IDSs is viewed by others as a resource accessed through the services that it exposes.\nA Grid middleware provides several features for the realization of a DIDSoG, including [3]: decentralized coordination of resources; use of standard protocols and interfaces; and the delivery of optimized QoS (Quality of Service).\nThe service oriented architecture followed by Grids 
(OGSA) allows the definition of interfaces that are adaptable to different platforms.\nDifferent implementations can be encapsulated by a service interface; this virtualisation allows the consistent access to resources in heterogeneous environments [3].\nThe virtualisation of the environment through service interfaces allows the use of services without the knowledge of how they are actually implemented.\nThis characteristic is important for the integration of IDSs as the same service interfaces can be exposed by different IDSs.\nGrid middleware can thus be used to implement a great variety of services.\nSome functions provided by Grid middleware are [3]: (i) data management services, including access services, replication, and localisation; (ii) workflow services that implement coordinate execution of multiple applications on multiple resources; (iii) auditing services that perform the detection of frauds or intrusions; (iv) monitoring services which implement the discovery of sensors in a distributed environment and generate alerts under determined conditions; (v) services for identification of problems in a distributed environment, which implement the correlation of information from disparate and distributed logs.\nThese services are important for the implementation of a DIDSoG.\nA DIDS needs services for the location of and access to distributed data from different IDSs.\nAuditing and monitoring services take care of the proper needs of the DIDSs such as: secure storage, data analysis to detect intrusions, discovery of distributed sensors, and sending of alerts.\nThe correlation of distributed logs is also relevant because the detection of distributed attacks depends on the correlation of the alert information generated by the different IDSs that compose the DIDSoG.\nThe next sections of this article are organized as follows.\nSection 2 presents related work.\nThe proposed model is presented in Section 3.\nSection 4 describes the development and a case 
study. Results and discussion are presented in Section 5. Conclusions and future work are discussed in Section 6.

2. RELATED WORK

DIDMA [5] is a flexible, scalable, reliable, and platform-independent DIDS. The DIDMA architecture allows distributed analysis of events and can easily be extended by developing new agents. However, the integration with existing IDSs and the development of security components are presented as future work [5]. The extensibility of DIDMA and the integration with other IDSs are goals pursued by DIDSoG; the flexibility, scalability, platform independence, reliability and security components discussed in [5] are achieved in DIDSoG by using a Grid platform.

More efficient techniques for the analysis of great amounts of data in wide-scale networks, based on clustering and applicable to DIDSs, are presented in [13]. The integration of heterogeneous IDSs, to increase the variety of intrusion detection techniques in the environment, is mentioned there as future work [13]; DIDSoG thus aims at integrating heterogeneous IDSs.

Ref. [10] presents a hierarchical architecture for a DIDS; information is collected, aggregated, correlated and analysed as it is sent up in the hierarchy. The architecture comprises several components for monitoring, correlation, intrusion detection by statistics, detection by signatures, and responses. Components in the same level of the hierarchy cooperate with one another. The integration proposed by DIDSoG also follows a hierarchical architecture: each IDS integrated into the DIDSoG offers functionalities at a given level of the hierarchy and requests functionalities from IDSs at another level. The hierarchy presented in [10] integrates homogeneous IDSs, whereas the hierarchical architecture of DIDSoG integrates heterogeneous IDSs.

There are proposals on integrating computational Grids and IDSs [6], [7], [11], [2]. Refs. [6] and [7] propose the use of the Globus Toolkit for intrusion detection, especially for DoS (Denial of Service) and DDoS (Distributed Denial of Service) attacks; Globus is used due to the need to process great amounts of data to detect these kinds of attack. A two-phase processing architecture is presented: the first phase aims at the detection of momentary attacks, while the second phase is concerned with chronic or perennial attacks. Traditional IDSs or DIDSs are generally coordinated by a central point, a characteristic that leaves them prone to attacks. Leu et al. [6] point out that IDSs developed upon Grid platforms are less vulnerable to attacks because of the distribution provided by such platforms. Leu et al. [6], [7] have used tools to generate several types of attacks, including TCP, ICMP and UDP flooding, and have demonstrated through experimental results the advantages of applying computational Grids to IDSs. This work proposes the development of a DIDS upon a Grid platform. However, the resulting DIDS integrates heterogeneous IDSs, whereas the DIDSs upon Grids presented by Leu et al.
[6], [7] do not consider the integration of heterogeneous IDSs. The processing in phases [6], [7] is also contemplated by DIDSoG, enabled by the specification of several levels of processing that the integration of heterogeneous IDSs allows.

The DIDS GIDA (Grid Intrusion Detection Architecture) targets the detection of intrusions in a Grid environment [11]. The GridSim Grid simulator was used for the validation of GIDA; homogeneous resources were used to simplify the development [11]. However, the possibility of applying heterogeneous detection systems is left for future work. Another DIDS for Grids is presented by Choon and Samsudim [2], with scenarios demonstrating how a DIDS can execute on a Grid environment.

DIDSoG does not aim at detecting intrusions in a Grid environment. In contrast, DIDSoG uses the Grid to compose a DIDS by integrating specific IDSs; the resulting DIDS could, however, be used to identify attacks in a Grid environment. Local and distributed attacks can be detected through the integration of traditional IDSs, while attacks particular to Grids can be detected through the integration of Grid IDSs.

3. THE PROPOSED MODEL

DIDSoG presents a hierarchy of intrusion detection services; this hierarchy is organized through a two-dimensional vector defined by "Scope:Complexity". The IDSs composing DIDSoG can be organized in different levels of scope or complexity, depending on their functionalities, the topology of the target environment and the expected results. Figure 1 presents a DIDSoG composed of different intrusion detection services (i.e.
data gathering, data aggregation, data correlation, analysis, intrusion response and management) provided by different IDSs. The information flow and the relationship between the levels of scope and complexity are presented in this figure.

Information about the environment (host, network or application) is collected by Sensors located in both user 1's and user 2's computers in domain 1. The information is sent both to simple Analysers that act on the information from a single host (level 1:1), and to aggregation and correlation services that act on information from multiple hosts of the same domain (level 2:1). Simple Analysers in the first scope level send the information to more complex Analysers in the next levels of complexity (level 1:N). When an Analyser detects an intrusion, it communicates with the Countermeasure and Monitoring services registered in its scope. An Analyser can invoke a Countermeasure service that responds to a detected attack, or inform a Monitoring service about the ongoing attack, so the administrator can act accordingly.

Aggregation and correlation resources in the second scope receive information from the Sensors of different users' computers (user 1's and user 2's) in domain 1. These resources process the received information and send it to the analysis resources registered at the first level of complexity in the second scope (level 2:1). The information is also sent to the aggregation and correlation resources registered at the first level of complexity in the next scope (level 3:1).

Fig. 1. How DIDSoG works.

The analysis resources in the second scope act like the analysis resources in the first scope, directing the information to a more complex analysis resource and putting the Countermeasure and Monitoring resources into action in case of detected attacks. Aggregation and correlation resources in the third scope receive information from domains 1 and 2. These resources then carry out the aggregation and correlation of the information from the different domains and send it to the analysis resources at the first level of complexity in the third scope (level 3:1). The information could also be sent to the aggregation service in the next scope, if any resources were registered at such a level. The analysis resources in the third scope act similarly to those in the first and second scopes, except that they act on information from multiple domains.

The functionalities of the registered resources in each scope and complexity level can vary from one environment to another. The model allows the development of "N" levels of scope and complexity.

Figure 2 presents the architecture of a resource participating in the DIDSoG. Initially, the resource registers itself with the GIS (Grid Information Service) so that other participating resources can query the services provided. After registering itself, the resource requests information about the other intrusion detection resources registered with the GIS. A given resource of DIDSoG interacts with other resources by receiving data from its Source Resources, processing it, and sending the results to its Destination Resources, therefore forming a grid of intrusion detection resources.

Fig.
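The registration and forwarding scheme just described can be sketched in a few lines. This is an illustrative sketch, not DIDSoG's actual code: resources register with a Grid Information Service under a (scope, complexity) level, and each resource fans its output out to the levels its descriptor names. All class and resource names here are hypothetical.

```python
class GIS:
    """Minimal Grid Information Service: a registry of resources by level."""
    def __init__(self):
        self.registry = {}  # (scope, complexity) -> [Resource, ...]

    def register(self, resource):
        self.registry.setdefault(resource.level, []).append(resource)

    def lookup(self, level):
        return self.registry.get(level, [])


class Resource:
    def __init__(self, name, level, target_levels):
        self.name = name                    # e.g. "Sensor_1"
        self.level = level                  # (scope, complexity)
        self.target_levels = target_levels  # levels this resource feeds
        self.inbox = []                     # data received from origin resources

    def forward(self, gis, data):
        """Send data to every resource registered at a target level."""
        for level in self.target_levels:
            for destination in gis.lookup(level):
                destination.inbox.append((self.name, data))


gis = GIS()
sensor = Resource("Sensor_1", (1, 0), [(1, 1), (2, 1)])
analyser = Resource("Analyser_1", (1, 1), [])
aggreg_corr = Resource("Aggreg_Corr_1", (2, 1), [])
for resource in (sensor, analyser, aggreg_corr):
    gis.register(resource)

# A sensor reading reaches both the single-host analyser (level 1:1)
# and the domain-wide aggregation/correlation service (level 2:1).
sensor.forward(gis, {"src": "10.0.0.1", "dst": "10.0.0.2", "dst_port": 23})
```

The point of the sketch is that routing is driven entirely by the level registry: adding a resource at a new scope or complexity level requires no change to the senders, only a new registration.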
2. Architecture of a resource participating in the DIDSoG.

A resource is made up of four components: Base, Connector, Descriptor and Native IDS. Native IDS corresponds to the IDS being integrated into the DIDSoG. This component processes the data received from the Origin Resources and generates new data to be sent to the Destination Resources. A Native IDS component can be any tool that processes information related to intrusion detection, including analysis, data gathering, data aggregation, data correlation, intrusion response or management.

The Descriptor is responsible for the information that identifies a resource and its respective Destination Resources in the DIDSoG. Figure 3 presents the class diagram of the information stored by the Descriptor. The ResourceDescriptor class has members of types Feature, Level, DataType and TargetResources. The Feature class represents the functionalities that a resource has; its type, name and version attributes refer to the functions offered by the Native IDS component, its name and its version, respectively. The Level class identifies the level of scope and complexity at which the resource acts. The DataType class represents the data format that the resource accepts, and is specialized by the classes Text, XML and Binary; the XML class contains the DTDFile attribute to specify the DTD file that validates the received XML.

Fig. 3. Class Diagram of the Descriptor component.

The TargetResources class represents the features of the Destination Resources of a given resource. This class aggregates Resource. The Resource class identifies the characteristics of a Destination Resource; this identification is made through the featureType attribute and the Level and DataType classes. A given resource analyses the information from the Descriptors of other resources, and compares it with the information specified in its TargetResources, to know to which resources to send the results of its processing.

The Base component is responsible for the communication of a resource with the other resources of the DIDSoG and with the Grid Information Service. It is this component that registers the resource with, and queries other resources in, the GIS. The Connector component is the link between the Base and the Native IDS. The information that the Base receives from the Origin Resources is passed to the Connector, which performs the necessary changes in the data so that it is understood by the Native IDS, and sends this data to the Native IDS for processing. The Connector also has the responsibility of collecting the information processed by the Native IDS, and making the necessary changes so that the information can pass through the DIDSoG again. After these changes, the Connector sends the information to the Base, which in turn sends it to the Destination Resources in accordance with the specifications of the Descriptor component.

4. IMPLEMENTATION

We have used GridSim Toolkit 3 [1] for the development and evaluation of the proposed model; GridSim features were used and extended to model and simulate the resources and components of DIDSoG. Figure 4 presents the class diagram of the simulated DIDSoG. The Simulation_DIDSoG class starts the simulation components. The Simulation_User class represents a user of DIDSoG; its function is to initiate the processing of a resource Sensor, from where the gathered information will be sent to other resources. DIDSoG_GIS keeps a registry of the DIDSoG resources. The DIDSoG_BaseResource class implements the Base component (see Figure 2). DIDSoG_BaseResource interacts with the DIDSoG_Descriptor class, which represents the Descriptor component; the DIDSoG_Descriptor class is created from an XML file that specifies a resource descriptor (see Figure 3).

Fig. 4. Class Diagram of the simulated DIDSoG.

A Connector component must be developed for each Native IDS integrated into DIDSoG. The Connector is implemented by creating a class derived from DIDSoG_BaseResource; the new class implements new functionalities in accordance with the needs of the corresponding Native IDS. In the simulation environment, resources for data collection, analysis, aggregation/correlation and generation of responses were integrated. Classes were developed to simulate the processing of the Native IDS components associated with the resources. For each simulated Native IDS, a class derived from DIDSoG_BaseResource was developed; this class corresponds to the Connector component of the Native IDS and aims at integrating the IDS into DIDSoG. An XML file describing each of the integrated resources is chosen through the Connector component. The resulting relationship between the resources integrated into the DIDSoG, in accordance with the specification of their respective descriptors, is presented in Figure 5. The Sensor_1 and Sensor_2 resources generate simulated data in the TCPDump [8] format. The generated data is directed to the Analyser_1 and Aggreg_Corr_1 resources in the case of Sensor_1, and to Aggreg_Corr_1 in the case of Sensor_2, according to the specification of their descriptors.

Fig.
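The Connector pattern described above, one adapter class per Native IDS derived from the base resource, can be sketched as follows. This is a rough illustration under our own naming, not the simulation's actual code: the class names echo the paper's diagram, while the wrapped detection rule is hypothetical.

```python
class DIDSoG_BaseResource:
    """Base component: receives data from origin resources and delegates
    format adaptation and processing to the Connector subclass."""

    def receive(self, data):
        return self.process(data)

    def process(self, data):
        raise NotImplementedError  # implemented by each Connector


class TelnetAnalyserConnector(DIDSoG_BaseResource):
    """Hypothetical Connector wrapping a Native IDS that flags telnet traffic."""

    def process(self, data):
        # adapt the grid data into the line format the Native IDS expects
        record = "%s -> %s:%d" % (data["src"], data["dst"], data["dst_port"])
        alert = self._native_ids(record)
        # adapt the Native IDS output back into a grid-friendly form
        return {"alert": alert, "src": data["src"]} if alert else None

    @staticmethod
    def _native_ids(record):
        # stand-in for the wrapped IDS: alert on connection attempts to port 23
        return "connection attempt on port 23" if record.endswith(":23") else None
```

Integrating a different Native IDS would mean writing only another such subclass (plus its XML descriptor); the Base component and the rest of the grid remain untouched, which is the flexibility the Connector is meant to provide.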
5.\nFlow of the execution of the simulation.\nThe Native IDS of Analyser_1 generates alerts for any connection attempt to port 23.\nThe data received by Analyser_1 presented such features, generating an IDMEF (Intrusion Detection Message Exchange Format) alert [14].\nThe generated alert was sent to the Countermeasure_1 resource, where a warning was dispatched to the administrator informing him of the alert received.\nThe Aggreg_Corr_1 resource received the information generated by sensors 1 and 2.\nIts processing activities consist of correlating the source IP addresses with the received data.\nThe information resulting from the processing of Aggreg_Corr_1 was directed to the Analyser_2 resource.\nThe Native IDS component of Analyser_2 generates alerts when a source tries to connect to the same port number on multiple destinations.\nThis situation is identified by Analyser_2 in the data received from Aggreg_Corr_1, and an alert in IDMEF format is then sent to the Countermeasure_2 resource.\nIn addition to generating alerts in IDMEF format, Analyser_2 also directs the received data to Analyser_3, at complexity level 2.\nThe Native IDS component of Analyser_3 generates alerts when the transmission of ICMP messages from a given source to multiple destinations is detected.\nThis situation is detected in the data received from Analyser_2, and an IDMEF alert is then sent to the Countermeasure_2 resource.\nThe Countermeasure_2 resource receives the alerts generated by analysers 3 and 2, in accordance with the implementation of its Native IDS component.\nWarnings on received alerts are dispatched to the administrator.\nThe simulation carried out demonstrates how DIDSoG works.\nSimulated data was generated as the input for a grid of intrusion detection systems composed of several distinct resources.\nThe resources carry out tasks such as data collection, aggregation and analysis, and generation of alerts and warnings in an integrated 
manner.\n5.\nEXPERIMENT RESULTS\nThe hierarchic organization of scope and complexity provides a high degree of flexibility to the model.\nThe DIDSoG can be modelled in accordance with the needs of each environment.\nThe descriptors define the data flow desired for the resulting DIDS.\nEach Native IDS is integrated into the DIDSoG through a Connector component.\nThe Connector component is also flexible in the DIDSoG.\nAdaptations, conversions of data types and auxiliary processes that Native IDSs need are provided by the Connector.\nFilters and generation of specific logs for each Native IDS or environment can also be incorporated into the Connector.\nIf the integration of a new IDS into an already configured environment is desired, it is enough to develop the Connector for the desired IDS and to specify the resource Descriptor.\nAfter the specification of the Connector and the Descriptor, the new IDS is integrated into the DIDSoG.\nThrough the definition of scopes, resources can act on data from different source groups.\nFor example, scope 1 can be related to a given set of hosts, scope 2 to another set of hosts, while scope 3 can be related to hosts from scopes 1 and 2.\nScopes can be defined according to the needs of each environment.\nThe complexity levels allow the distribution of the processing among several resources inside the same scope.\nIn an analysis task, for example, the search for simple attacks can be made by resources of complexity 1, whereas the search for more complex attacks, which demands more time, can be performed by resources of complexity 2.\nWith this, the analysis of the data is made by two resources.\nThe distinction between complexity levels can also be organized in order to integrate different techniques of intrusion detection.\nThe complexity level 1 could be defined for analyses based on signatures, which are simpler techniques; the complexity level 2 for techniques based on behaviour, which require greater computational power; and the complexity 
level 3 for intrusion detection in applications, where the techniques are more specific and depend on more data.\nThe division into scopes and complexity levels means that the processing of the data is carried out in phases.\nNo resource has full knowledge about the complete data processing flow.\nEach resource only knows the results of its processing and the destination to which it sends the results.\nResources of higher complexity must be linked to resources of lower complexity.\nTherefore, the hierarchic structure of the DIDSoG is maintained, facilitating its extension and integration with other domains of intrusion detection.\nBy establishing a hierarchic relationship between the several analysers chosen for an environment, the sensor resource is not overloaded with the task of sending the data to all the analysers.\nAn initial analyser will exist (complexity level 1) to which the sensor will send its data, and this analyser will then direct the data to the next step of the processing flow.\nAnother feature of the hierarchical organization is the easy extension and integration with other domains.\nIf it is necessary to add a new host (sensor) to the DIDSoG, it is enough to plug it into the first level of the hierarchy of resources.\nIf it is necessary to add a new analyser, even one in the scope of several domains, it is enough to relate it to another resource of the same scope.\nThe DIDSoG allows different levels to be managed by different entities.\nFor example, the first scope can be managed by the local user of a host.\nThe second scope, comprising several hosts of a domain, can be managed by the administrator of the domain.\nA third entity can be responsible for managing the security of several domains in a joint way.\nThis entity can act in scope 3 independently from the others.\nWith the proposed model for the integration of IDSs in Grids, the different IDSs of an environment (or multiple integrated IDSs) act in a cooperative manner, improving the intrusion detection services, 
mainly in two aspects.\nFirst, the information from multiple sources is analysed in an integrated way to search for distributed attacks.\nThis integration can be made under several scopes.\nSecond, there is a great diversity of techniques for data aggregation, data correlation and analysis, and intrusion response that can be applied to the same environment; these techniques can be organized under several levels of complexity.\n6.\nCONCLUSION\nThe integration of heterogeneous IDSs is important.\nHowever, the incompatibility and diversity of IDS solutions make such integration extremely difficult.\nThis work thus proposed a model for the composition of a DIDS by integrating existing IDSs on a computational Grid platform (DIDSoG).\nIDSs in DIDSoG are encapsulated as Grid services for intrusion detection.\nA computational Grid platform is used for the integration by providing the basic requirements for communication, localization, resource sharing and security mechanisms.\nThe components of the architecture of the DIDSoG were developed and evaluated using the GridSim Grid simulator.\nServices for communication and localization were used to carry out the integration between components of different resources.\nBased on the components of the architecture, several resources were modelled, forming a grid of intrusion detection.\nThe simulation demonstrated the usefulness of the proposed model.\nData from the sensor resources was read and used to feed other resources of DIDSoG.\nThe integration of distinct IDSs could be observed through the simulated environment.\nResources providing different intrusion detection services were integrated (e.g. 
analysis, correlation, aggregation and alert).\nThe communication and localization services provided by GridSim were used to integrate components of different resources.\nVarious resources were modelled following the architecture components, forming a grid of intrusion detection.\nThe components of the DIDSoG architecture served as the basis for the integration of the resources presented in the simulation.\nDuring the simulation, the different IDSs cooperated with one another in a distributed manner, yet in a coordinated way with an integrated view of the events, thus having the capability to detect distributed attacks.\nThis capability demonstrates that the integrated IDSs have resulted in a DIDS.\nRelated work presents cooperation between components of a specific DIDS.\nSome works focus on either the development of DIDSs on computational Grids or the application of IDSs to computational Grids.\nHowever, none deals with the integration of heterogeneous IDSs.\nIn contrast, the proposed model developed and simulated in this work can shed some light on the question of the integration of heterogeneous IDSs.\nDIDSoG presents new research opportunities that we would like to pursue, including: deployment of the model in a more realistic environment such as a Grid; incorporation of new security services; and parallel analysis of data by Native IDSs on multiple hosts.\nIn addition to the integration of IDSs enabled by a grid middleware, the cooperation of heterogeneous IDSs can be viewed as an economic problem.\nIDSs from different organizations or administrative domains need incentives for joining a grid of intrusion detection services and for collaborating with other IDSs.\nThe development of distributed strategy-proof mechanisms for the integration of IDSs is a challenge that we would like to tackle.","keyphrases":["grid","distribut intrus detect system","intrus detect system","grid middlewar","heterogen intrus detect system","open grid servic architectur","comput grid","intrus 
detect servic","grid intrus detect architectur","id integr","gridsim grid simul","grid servic for intrus detect","system integr"],"prmu":["P","P","P","P","R","M","M","M","R","M","M","M","R"]} {"id":"J-51","title":"Complexity of (Iterated) Dominance","abstract":"We study various computational aspects of solving games using dominance and iterated dominance. We first study both strict and weak dominance (not iterated), and show that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve. We then move on to iterated dominance. We show that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance. This allows us to also show that determining whether there is a path that leads to a unique solution is NP-complete. Both of these results hold both with and without dominance by mixed strategies. (A weaker version of the second result (only without dominance by mixed strategies) was already known [7].) Iterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time. We then study what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies. First, we show that finding the dominating strategy with minimum support size is NP-complete (both for strict and weak dominance). Then, we show that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size is 3). Finally, we study Bayesian games. 
We show that, unlike in normal form games, deciding whether a given pure strategy is dominated by another pure strategy in a Bayesian game is NP-complete (both with strict and weak dominance); however, deciding whether a strategy is dominated by some mixed strategy can still be done in polynomial time with a single linear program solve (both with strict and weak dominance). Finally, we show that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance).","lvl-1":"Complexity of (Iterated) Dominance\u2217 Vincent Conitzer Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA conitzer@cs.cmu.edu Tuomas Sandholm Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213, USA sandholm@cs.cmu.edu ABSTRACT We study various computational aspects of solving games using dominance and iterated dominance.\nWe first study both strict and weak dominance (not iterated), and show that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve.\nWe then move on to iterated dominance.\nWe show that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance.\nThis allows us to also show that determining whether there is a path that leads to a unique solution is NP-complete.\nBoth of these results hold both with and without dominance by mixed strategies.\n(A weaker version of the second result (only without dominance by mixed strategies) was already known [7].)\nIterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time.\nWe then study what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies.\nFirst, we show that finding the dominating strategy with minimum support size is 
NP-complete (both for strict and weak dominance).\nThen, we show that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size is 3).\nFinally, we study Bayesian games.\nWe show that, unlike in normal form games, deciding whether a given pure strategy is dominated by another pure strategy in a Bayesian game is NP-complete (both with strict and weak dominance); however, deciding whether a strategy is dominated by some mixed strategy can still be done in polynomial time with a single linear program solve (both with strict and weak \u2217 This material is based upon work supported by the National Science Foundation under ITR grants IIS-0121678 and IIS-0427858, and a Sloan Fellowship.\ndominance).\nFinally, we show that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance).\nCategories and Subject Descriptors F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics General Terms Algorithms, Economics, Theory 1.\nINTRODUCTION In multiagent systems with self-interested agents, the optimal action for one agent may depend on the actions taken by other agents.\nIn such settings, the agents require tools from game theory to rationally decide on an action.\nGame theory offers various formal models of strategic settings-the best-known of which is a game in normal (or matrix) form, specifying a utility (payoff) for each agent for each combination of strategies that the agents choose-as well as solution concepts, which, given a game, specify which outcomes are reasonable (under various assumptions of rationality and common knowledge).\nProbably the best-known (and 
certainly the most-studied) solution concept is that of Nash equilibrium.\nA Nash equilibrium specifies a strategy for each player, in such a way that no player has an incentive to (unilaterally) deviate from the prescribed strategy.\nRecently, numerous papers have studied computing Nash equilibria in various settings [9, 4, 12, 3, 13, 14], and the complexity of constructing a Nash equilibrium in normal form games has been labeled one of the two most important open problems on the boundary of P today [20].\nThe problem of computing solutions according to the perhaps more elementary solution concepts of dominance and iterated dominance has received much less attention.\n(After an early short paper on an easy case [11], the main computational study of these concepts has actually taken place in a paper in the game theory community [7].1 ) A strategy strictly dominates another strategy if it performs strictly better against all vectors of opponent strategies, and weakly dominates it if it performs at least as well against all vectors of opponent strategies, and strictly better against at least one.\n1 This is not to say that computer scientists have ignored\nThe idea is that dominated strategies can be eliminated from consideration.\nIn iterated dominance, the elimination proceeds in rounds, and becomes easier as more strategies are eliminated: in any given round, the dominating strategy no longer needs to perform better than or as well as the dominated strategy against opponent strategies that were eliminated in earlier rounds.\nComputing solutions according to (iterated) dominance is important for at least the following reasons: 1) it can be computationally easier than computing (for instance) a Nash equilibrium (and therefore it can be useful as a preprocessing step in computing a Nash equilibrium), and 2) (iterated) dominance requires a weaker rationality assumption on the players than (for instance) Nash equilibrium, and therefore solutions derived according to 
it are more likely to occur.\nIn this paper, we study some fundamental computational questions concerning dominance and iterated dominance, including how hard it is to check whether a given strategy can be eliminated by each of the variants of these notions.\nThe rest of the paper is organized as follows.\nIn Section 2, we briefly review definitions and basic properties of normal form games, strict and weak dominance, and iterated strict and weak dominance.\nIn the remaining sections, we study computational aspects of dominance and iterated dominance.\nIn Section 3, we study one-shot (not iterated) dominance.\nIn Section 4, we study iterated dominance.\nIn Section 5, we study dominance and iterated dominance when the dominating strategy can only place probability on a few pure strategies.\nFinally, in Section 6, we study dominance and iterated dominance in Bayesian games.\n2.\nDEFINITIONS AND BASIC PROPERTIES In this section, we briefly review normal form games, as well as dominance and iterated dominance (both strict and weak).\nAn n-player normal form game is defined as follows.\nDefinition 1.\nA normal form game is given by a set of players {1, 2, ... , n}; and, for each player i, a (finite) set of pure strategies \u03a3i and a utility function ui : \u03a31 \u00d7 \u03a32 \u00d7 ... \u00d7 \u03a3n \u2192 R (where ui(\u03c31, \u03c32, ... 
, \u03c3n) denotes player i``s utility when each player j plays action \u03c3j).\nThe two main notions of dominance are defined as follows.\nDefinition 2.\nPlayer i``s strategy \u03c3i is said to be strictly dominated by player i``s strategy \u03c3i' if for any vector of strategies \u03c3\u2212i for the other players, ui(\u03c3i', \u03c3\u2212i) > ui(\u03c3i, \u03c3\u2212i).\nPlayer i``s strategy \u03c3i is said to be weakly dominated by player i``s strategy \u03c3i' if for any vector of strategies \u03c3\u2212i for the other players, ui(\u03c3i', \u03c3\u2212i) \u2265 ui(\u03c3i, \u03c3\u2212i), and for at least one vector of strategies \u03c3\u2212i for the other players, ui(\u03c3i', \u03c3\u2212i) > ui(\u03c3i, \u03c3\u2212i).\nIn this definition, it is sometimes allowed for the dominating strategy \u03c3i' to be a mixed strategy, that is, a probability distribution over pure strategies.\nIn this case, the utilities in the definition are the expected utilities.2\ndominance altogether.\nFor example, simple dominance checks are sometimes used as a subroutine in searching for Nash equilibria [21].\nThere are other notions of dominance, such as very weak dominance (in which no strict inequality is required, so two strategies can dominate each other), but we will not study them here.\nWhen we are looking at the dominance relations for player i, the other players (\u2212i) can be thought of as a single player.3 Therefore, in the rest of the paper, when we study one-shot (not iterated) dominance, we will focus without loss of generality on two-player games.4 In two-player games, we will generally refer to the players as r (row) and c (column) rather than 1 and 2.\nIn iterated dominance, dominated strategies are removed from the game, and no longer have any effect on future dominance relations.\nIterated dominance can eliminate more strategies than dominance, as follows.\n\u03c3r' may originally not dominate \u03c3r because the latter performs better against \u03c3c; but then, 
once \u03c3c is removed because it is dominated by \u03c3c', \u03c3r' dominates \u03c3r, and the latter can be removed.\nFor example, in the following game, R can be removed first, after which D is dominated.\nL R U 1, 1 0, 0 D 0, 1 1, 0 Either strict or weak dominance can be used in the definition of iterated dominance.\nWe note that the process of iterated dominance is never helped by removing a dominated mixed strategy, for the following reason.\nIf \u03c3i' gives player i a higher utility than \u03c3i against mixed strategy \u03c3j for player j \u2260 i (and strategies \u03c3\u2212{i,j} for the other players), then for at least one pure strategy \u03c3j' that \u03c3j places positive probability on, \u03c3i' must perform better than \u03c3i against \u03c3j' (and strategies \u03c3\u2212{i,j} for the other players).\nThus, removing the mixed strategy \u03c3j does not introduce any new dominances.\nMore detailed discussions and examples can be found in standard texts on microeconomics or game theory [17, 5].\nWe are now ready to move on to the core of this paper.\n3.\nDOMINANCE (NOT ITERATED) In this section, we study the notion of one-shot (not iterated) dominance.\nAs a first observation, checking whether a given strategy is strictly (weakly) dominated by some pure strategy is straightforward, by checking, for every pure strategy for that player, whether the latter strategy performs strictly better against all the opponent``s strategies (at least as well against all the opponent``s strategies, and strictly better against at least one).5\n2 The dominated strategy \u03c3i is, of course, also allowed to be mixed, but this has no technical implications for the paper: when we study one-shot dominance, we ask whether a given strategy is dominated, and it does not matter whether the given strategy is pure or mixed; when we study iterated dominance, there is no use in eliminating mixed strategies, as we will see shortly.\n3 This player may have a very large strategy space (one pure strategy for every vector of pure strategies for the players that are being replaced).\nNevertheless, this will not result in an increase in our representation size, because the original representation already had to specify utilities for each of these vectors.\n4 We note that a restriction to two-player games would not be without loss of generality for iterated dominance.\nThis is because for iterated dominance, we need to look at the dominated strategies of each individual player, so we cannot merge any players.\nNext, we show that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time by solving a single linear program.\n(Similar linear programs have been given before [18]; we present the result here for completeness, and because we will build on the linear programs given below in Theorem 6.)\nProposition 1.\nGiven the row player``s utilities, a subset Dr of the row player``s pure strategies \u03a3r, and a distinguished strategy \u03c3\u2217 r for the row player, we can check in time polynomial in the size of the game (by solving a single linear program of polynomial size) whether there exists some mixed strategy \u03c3r, that places positive probability only on strategies in Dr and dominates \u03c3\u2217 r , both for strict and for weak dominance.\nProof.\nLet pdr be the probability that \u03c3r places on dr \u2208 Dr. 
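The strict-dominance linear program of Proposition 1 can be made concrete. The sketch below is our own illustration, not the authors' code: it assumes SciPy's linprog solver is available, and the function name and matrix encoding are hypothetical. It minimizes the total probability mass placed on the strategies in Dr, subject to the mixture doing at least as well as the distinguished strategy against every column strategy; the distinguished strategy is strictly dominated iff the optimum is below 1.

```python
import numpy as np
from scipy.optimize import linprog

def strictly_dominated_by_mixture(U, star, D):
    """Proposition 1 (strict case): is row strategy `star` strictly
    dominated by some mixed strategy over the rows in D?

    U is the row player's m x n payoff matrix; all entries must be
    positive (shift all utilities by a constant first if necessary).
    """
    U = np.asarray(U, dtype=float)
    if not (U > 0).all():
        raise ValueError("shift utilities so that all are positive")
    D = list(D)
    # minimize sum_d p_d  subject to, for every column strategy c:
    #   sum_d p_d * U[d, c] >= U[star, c],   with p_d >= 0
    c = np.ones(len(D))
    A_ub = -U[D, :].T          # one constraint row per column strategy
    b_ub = -U[star, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(D))
    # `star` is strictly dominated iff the optimal objective is < 1
    # (small tolerance guards against solver round-off)
    return res.status == 0 and res.fun < 1 - 1e-9
```

For example, with row payoffs (3, 0.5), (0.5, 3) and (1, 1), the third row is strictly dominated by the equal mixture of the first two (the LP optimum is 4/7 < 1), even though no pure strategy dominates it. The weak-dominance variant in the proof replaces the objective with the total slack and constrains the probabilities to sum to one.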
We will solve a single linear program in each of our algorithms; linear programs can be solved in polynomial time [10].\nFor strict dominance, the question is whether the pdr can be set so that for every pure strategy for the column player \u03c3c \u2208 \u03a3c, dr\u2208Dr pdr ur(dr, \u03c3c) > ur(\u03c3\u2217 r , \u03c3c).\nBecause the inequality must be strict, we cannot solve this directly by linear programming.\nWe proceed as follows.\nBecause the game is finite, we may assume without loss of generality that all utilities are positive (if not, simply add a constant to all utilities.)\nSolve the following linear program: minimize dr\u2208Dr pdr such that for any \u03c3c \u2208 \u03a3c, dr\u2208Dr pdr ur(dr, \u03c3c) \u2265 ur(\u03c3\u2217 r , \u03c3c).\nIf \u03c3\u2217 r is strictly dominated by some mixed strategy, this linear program has a solution with objective value < 1.\n(The dominating strategy is a feasible solution with objective value exactly 1.\nBecause no constraint is binding for this solution, we can reduce one of the probabilities slightly without affecting feasibility, thereby obtaining a solution with objective value < 1.)\nMoreover, if this linear program has a solution with objective value < 1, there is a mixed strategy strictly dominating \u03c3\u2217 r , which can be obtained by taking the LP solution and adding the remaining probability to any strategy (because all the utilities are positive, this will add to the left side of any inequality, so all inequalities will become strict).\nFor weak dominance, we can solve the following linear program: maximize \u03c3c\u2208\u03a3c (( dr\u2208Dr pdr ur(dr, \u03c3c)) \u2212 ur(\u03c3\u2217 r , \u03c3c)) such that for any \u03c3c \u2208 \u03a3c, dr\u2208Dr pdr ur(dr, \u03c3c) \u2265 ur(\u03c3\u2217 r , \u03c3c); dr\u2208Dr pdr = 1.\nIf \u03c3\u2217 r is weakly dominated by some mixed strategy, then that mixed strategy is a feasible solution to this program with objective value > 0, because for at 
least one strategy \u03c3c \u2208 \u03a3c we have ( dr\u2208Dr pdr ur(dr, \u03c3c)) \u2212 ur(\u03c3\u2217 r , \u03c3c) > 0.\nOn the other hand, if this program has a solution with objective value > 0, then for at least one strategy \u03c3c \u2208 \u03a3c we must have ( dr\u2208Dr pdr ur(dr, \u03c3c)) \u2212 ur(\u03c3\u2217 r , \u03c3c) > 0, and thus the linear program``s solution is a weakly dominating mixed strategy.\n5 Recall that the assumption of a single opponent (that is, the assumption of two players) is without loss of generality for one-shot dominance.\n4.\nITERATED DOMINANCE We now move on to iterated dominance.\nIt is well-known that iterated strict dominance is path-independent [6, 19]: that is, if we remove dominated strategies until no more dominated strategies remain, in the end the remaining strategies for each player will be the same, regardless of the order in which strategies are removed.\nBecause of this, to see whether a given strategy can be eliminated by iterated strict dominance, all that needs to be done is to repeatedly remove strategies that are strictly dominated, until no more dominated strategies remain.\nBecause we can check in polynomial time whether any given strategy is dominated (whether or not dominance by mixed strategies is allowed, as described in Section 3), this whole procedure takes only polynomial time.\nIn the case of iterated dominance by pure strategies with two players, Knuth et al. 
[11] slightly improve on (speed up) the straightforward implementation of this procedure by keeping track of, for each ordered pair of strategies for a player, the number of opponent strategies that prevent the first strategy from dominating the second.\nHereby the runtime for an m \u00d7 n game is reduced from O((m + n)4 ) to O((m + n)3 ).\n(Actually, they only study very weak dominance (for which no strict inequalities are required), but the approach is easily extended.)\nIn contrast, iterated weak dominance is known to be path-dependent.6 For example, in the following game, using iterated weak dominance we can eliminate M first, and then D, or R first, and then U. L M R U 1, 1 0, 0 1, 0 D 1, 1 1, 0 0, 0 Therefore, while the procedure of removing weakly dominated strategies until no more weakly dominated strategies remain can certainly be executed in polynomial time, which strategies survive in the end depends on the order in which we remove the dominated strategies.\nWe will investigate two questions for iterated weak dominance: whether a given strategy is eliminated in some path, and whether there is a path to a unique solution (one pure strategy left per player).\nWe will show that both of these problems are computationally hard.\nDefinition 3.\nGiven a game in normal form and a distinguished strategy \u03c3\u2217 , IWD-STRATEGY-ELIMINATION asks whether there is some path of iterated weak dominance that eliminates \u03c3\u2217 .\nGiven a game in normal form, IWD-UNIQUE-SOLUTION asks whether there is some path of iterated weak dominance that leads to a unique solution (one strategy left per player).\nThe following lemma shows a special case of normal form games in which allowing for weak dominance by mixed strategies (in addition to weak dominance by pure strategies) does not help.\n6 There is, however, a restriction of weak dominance called nice weak dominance which is path-independent [15, 16].\nFor an overview of path-independence results, see Apt [1].\nWe 
will prove the hardness results in this setting, so that they will hold whether or not dominance by mixed strategies is allowed.\nLemma 1.\nSuppose that all the utilities in a game are in {0, 1}.\nThen every pure strategy that is weakly dominated by a mixed strategy is also weakly dominated by a pure strategy.\nProof.\nSuppose pure strategy \u03c3 is weakly dominated by mixed strategy \u03c3\u2217 .\nIf \u03c3 gets a utility of 1 against some opponent strategy (or vector of opponent strategies if there are more than 2 players), then all the pure strategies that \u03c3\u2217 places positive probability on must also get a utility of 1 against that opponent strategy (or else the expected utility would be smaller than 1).\nMoreover, at least one of the pure strategies that \u03c3\u2217 places positive probability on must get a utility of 1 against an opponent strategy that \u03c3 gets 0 against (or else the inequality would never be strict).\nIt follows that this pure strategy weakly dominates \u03c3.\nWe are now ready to prove the main results of this section.\nTheorem 1.\nIWD-STRATEGY-ELIMINATION is NP-complete, even with 2 players, and with 0 and 1 being the only utilities occurring in the matrix, whether or not dominance by mixed strategies is allowed.\nProof.\nThe problem is in NP because given a sequence of strategies to be eliminated, we can easily check whether this is a valid sequence of eliminations (even when dominance by mixed strategies is allowed, using Proposition 1).\nTo show that the problem is NP-hard, we reduce an arbitrary satisfiability instance (given by a nonempty set of clauses C over a nonempty set of variables V , with corresponding literals L = {+v : v \u2208 V } \u222a {\u2212v : v \u2208 V }) to the following IWD-STRATEGY-ELIMINATION instance.\n(In this instance, we will specify that certain strategies are uneliminable.\nA strategy \u03c3r can be made uneliminable, even when 0 and 1 are the only allowed utilities, by adding another strategy 
\u03c3r and another opponent strategy \u03c3c, so that: 1.\n\u03c3r and \u03c3r are the only strategies that give the row player a utility of 1 against \u03c3c.\n2.\n\u03c3r and \u03c3r always give the row player the same utility.\n3.\n\u03c3c is the only strategy that gives the column player a utility of 1 against \u03c3r, but otherwise \u03c3c always gives the column player utility 0.\nThis makes it impossible to eliminate any of these three strategies.\nWe will not explicitly specify the additional strategies to make the proof more legible.)\nIn this proof, we will denote row player strategies by s, and column player strategies by t, to improve legibility.\nLet the row player``s pure strategy set be given as follows.\nFor every variable v \u2208 V , the row player has corresponding strategies s1 +v, s2 +v, s1 \u2212v, s2 \u2212v. Additionally, the row player has the following 2 strategies: s1 0 and s2 0, where s2 0 = \u03c3\u2217 r (that is, it is the strategy we seek to eliminate).\nFinally, for every clause c \u2208 C, the row player has corresponding strategies s1 c (uneliminable) and s2 c. Let the column player``s pure strategy set be given as follows.\nFor every variable v \u2208 V , the column player has a corresponding strategy tv.\nFor every clause c \u2208 C, the column player has a corresponding strategy tc, and additionally, for every literal l \u2208 L that occurs in c, a strategy tc,l. 
For every variable v ∈ V, the column player has corresponding strategies t+v, t−v (both uneliminable). Finally, the column player has three additional strategies: t1 0 (uneliminable), t2 0, and t1.

The utility function for the row player is given as follows:
• ur(s1 +v, tv) = 0 for all v ∈ V;
• ur(s2 +v, tv) = 1 for all v ∈ V;
• ur(s1 −v, tv) = 1 for all v ∈ V;
• ur(s2 −v, tv) = 0 for all v ∈ V;
• ur(s1 +v, t1) = 1 for all v ∈ V;
• ur(s2 +v, t1) = 0 for all v ∈ V;
• ur(s1 −v, t1) = 0 for all v ∈ V;
• ur(s2 −v, t1) = 1 for all v ∈ V;
• ur(sb +v, t+v) = 1 for all v ∈ V and b ∈ {1, 2};
• ur(sb −v, t−v) = 1 for all v ∈ V and b ∈ {1, 2};
• ur(sl, t) = 0 otherwise for all l ∈ L and t ∈ S2;
• ur(s1 0, tc) = 0 for all c ∈ C;
• ur(s2 0, tc) = 1 for all c ∈ C;
• ur(sb 0, t1 0) = 1 for all b ∈ {1, 2};
• ur(s1 0, t2 0) = 1;
• ur(s2 0, t2 0) = 0;
• ur(sb 0, t) = 0 otherwise for all b ∈ {1, 2} and t ∈ S2;
• ur(sb c, t) = 0 otherwise for all c ∈ C and b ∈ {1, 2};
and the row player's utility is 0 in every other case.

The utility function for the column player is given as follows:
• uc(s, tv) = 0 for all v ∈ V and s ∈ S1;
• uc(s, t1) = 0 for all s ∈ S1;
• uc(s2 l, tc) = 1 for all c ∈ C and l ∈ L where l ∈ c (literal l occurs in clause c);
• uc(s2 l2, tc,l1) = 1 for all c ∈ C and l1, l2 ∈ L, l1 ≠ l2, where l2 ∈ c;
• uc(s1 c, tc) = 1 for all c ∈ C;
• uc(s2 c, tc) = 0 for all c ∈ C;
• uc(sb c, tc,l) = 1 for all c ∈ C, l ∈ L, and b ∈ {1, 2};
• uc(s2, tc) = uc(s2, tc,l) = 0 otherwise for all c ∈ C and l ∈ L;
and the column player's utility is 0 in every other case.

We now show that the two instances are
equivalent. First, suppose there is a solution to the satisfiability instance: that is, a truth-value assignment to the variables in V such that all clauses are satisfied. Then, consider the following sequence of eliminations in our game:
1. For every variable v that is set to true in the assignment, eliminate tv (which gives the column player utility 0 everywhere).
2. Then, for every variable v that is set to true in the assignment, eliminate s2 +v using s1 +v (which is possible because tv has been eliminated, and because t1 has not been eliminated (yet)).
3. Now eliminate t1 (which gives the column player utility 0 everywhere).
4. Next, for every variable v that is set to false in the assignment, eliminate s2 −v using s1 −v (which is possible because t1 has been eliminated, and because tv has not been eliminated (yet)).
5. For every clause c which has the variable corresponding to one of its positive literals l = +v set to true in the assignment, eliminate tc using tc,l (which is possible because s2 l has been eliminated, and s2 c has not been eliminated (yet)).
6. For every clause c which has the variable corresponding to one of its negative literals l = −v set to false in the assignment, eliminate tc using tc,l (which is possible because s2 l has been eliminated, and s2 c has not been eliminated (yet)).
7. Because the assignment satisfied the formula, all the tc have now been eliminated. Thus, we can eliminate s2 0 = σ∗ r using s1 0.
It follows that there is a solution to the IWD-STRATEGY-ELIMINATION instance.

Now suppose there is a solution to the IWD-STRATEGY-ELIMINATION instance. By Lemma 1, we can assume that all the dominances are by pure strategies. We first observe that only s1 0 can eliminate s2 0 = σ∗ r, because it is the only other strategy that gets the row player a utility of 1 against t1 0, and t1 0 is uneliminable. However, because s2 0 performs better than s1 0 against the tc strategies,
it follows that all of the tc strategies must be eliminated. For each c ∈ C, the strategy tc can only be eliminated by one of the strategies tc,l (with the same c), because these are the only other strategies that get the column player a utility of 1 against s1 c, and s1 c is uneliminable. But, in order for some tc,l to eliminate tc, s2 l must be eliminated first. Only s1 l can eliminate s2 l, because it is the only other strategy that gets the row player a utility of 1 against tl, and tl is uneliminable. We next show that for every v ∈ V only one of s2 +v, s2 −v can be eliminated. This is because in order for s1 +v to eliminate s2 +v, tv needs to have been eliminated and t1, not (so tv must be eliminated before t1); but in order for s1 −v to eliminate s2 −v, t1 needs to have been eliminated and tv, not (so t1 must be eliminated before tv). So, set v to true if s2 +v is eliminated, and to false otherwise. Because, by the above, for every clause c, one of the s2 l with l ∈ c must be eliminated, it follows that this is a satisfying assignment to the satisfiability instance.

Using Theorem 1, it is now (relatively) easy to show that IWD-UNIQUE-SOLUTION is also NP-complete under the same restrictions.

Theorem 2. IWD-UNIQUE-SOLUTION is NP-complete, even with 2 players, and with 0 and 1 being the only utilities occurring in the matrix, whether or not dominance by mixed strategies is allowed.

Proof. Again, the problem is in NP because we can nondeterministically choose the sequence of eliminations and verify whether it is correct. To show NP-hardness, we reduce an arbitrary IWD-STRATEGY-ELIMINATION instance to the following IWD-UNIQUE-SOLUTION instance. Let all the strategies for each player from the original instance remain part of the new instance, and let the utilities resulting from the players playing a pair of these strategies be the same. We add three additional strategies σ1 r, σ2 r, σ3 r for the row
player, and three additional strategies \u03c31 c , \u03c32 c , \u03c33 c for the column player.\nLet the additional utilities be as follows: \u2022 ur(\u03c3r, \u03c3j c) = 1 for all \u03c3r \/\u2208 {\u03c31 r , \u03c32 r , \u03c33 r } and j \u2208 {2, 3}; \u2022 ur(\u03c3i r, \u03c3c) = 1 for all i \u2208 {1, 2, 3} and \u03c3c \/\u2208 {\u03c32 c , \u03c33 c }; \u2022 ur(\u03c3i r, \u03c32 c ) = 1 for all i \u2208 {2, 3}; \u2022 ur(\u03c31 r , \u03c33 c ) = 1; \u2022 and the row player``s utility is 0 in all other cases involving a new strategy.\n\u2022 uc(\u03c33 r , \u03c3c) = 1 for all \u03c3c \/\u2208 {\u03c31 c , \u03c32 c , \u03c33 c }; \u2022 uc(\u03c3\u2217 r , \u03c3j c) = 1 for all j \u2208 {2, 3} (\u03c3\u2217 r is the strategy to be eliminated in the original instance); \u2022 uc(\u03c3i r, \u03c31 c ) = 1 for all i \u2208 {1, 2}; \u2022 ur(\u03c31 r , \u03c32 c ) = 1; \u2022 ur(\u03c32 r , \u03c33 c ) = 1; \u2022 and the column player``s utility is 0 in all other cases involving a new strategy.\nWe proceed to show that the two instances are equivalent.\nFirst suppose there exists a solution to the original IWDSTRATEGY-ELIMINATION instance.\nThen, perform the same sequence of eliminations to eliminate \u03c3\u2217 r in the new IWD-UNIQUE-SOLUTION instance.\n(This is possible because at any stage, any weak dominance for the row player in the original instance is still a weak dominance in the new instance, because the two strategies'' utilities for the row player are the same when the column player plays one of the new strategies; and the same is true for the column player.)\nOnce \u03c3\u2217 r is eliminated, let \u03c31 c eliminate \u03c32 c .\n(It performs better against \u03c32 r .)\nThen, let \u03c31 r eliminate all the other remaining strategies for the row player.\n(It always performs better against either \u03c31 c or \u03c33 c .)\nFinally, \u03c31 c is the unique best response against \u03c31 r among the column player``s remaining strategies, 
so let it eliminate all the other remaining strategies for the column player.\nThus, there exists a solution to the IWD-UNIQUE-SOLUTION instance.\nNow suppose there exists a solution to the IWD-UNIQUESOLUTION instance.\nBy Lemma 1, we can assume that all the dominances are by pure strategies.\nWe will show that none of the new strategies (\u03c31 r , \u03c32 r , \u03c33 r , \u03c31 c , \u03c32 c , \u03c33 c ) can either eliminate another strategy, or be eliminated before \u03c3\u2217 r is eliminated.\nThus, there must be a sequence of eliminations ending in the elimination of \u03c3\u2217 r , which does not involve any of the new strategies, and is therefore a valid sequence of eliminations in the original game (because all original strategies perform the same against each new strategy).\nWe now show that this is true by exhausting all possibilities for the first elimination before \u03c3\u2217 r is eliminated that involves a new strategy.\nNone of the \u03c3i r can be eliminated by a \u03c3r \/\u2208 {\u03c31 r , \u03c32 r , \u03c33 r }, because the \u03c3i r perform better against \u03c31 c .\n\u03c31 r cannot eliminate any other strategy, because it always performs poorer against \u03c32 c .\n\u03c32 r and \u03c33 r are equivalent from the row player``s perspective (and thus cannot eliminate each other), and cannot eliminate any other strategy because they always perform poorer against \u03c33 c .\nNone of the \u03c3j c can be eliminated by a \u03c3c \/\u2208 {\u03c31 c , \u03c32 c , \u03c33 c }, because the \u03c3j c always perform better against either \u03c31 r or \u03c32 r .\n\u03c31 c cannot eliminate any other strategy, because it always performs poorer against either \u03c3\u2217 r or \u03c33 r .\n\u03c32 c cannot eliminate any other strategy, because it always performs poorer against \u03c32 r or \u03c33 r .\n\u03c33 c cannot eliminate any other strategy, because it always performs poorer against \u03c31 r or \u03c33 r .\nThus, there exists a solution to 
the IWDSTRATEGY-ELIMINATION instance.\nA slightly weaker version of the part of Theorem 2 concerning dominance by pure strategies only is the main result of Gilboa et al. [7].\n(Besides not proving the result for dominance by mixed strategies, the original result was weaker because it required utilities {0, 1, 2, 3, 4, 5, 6, 7, 8} rather than just {0, 1} (and because of this, our Lemma 1 cannot be applied to it to get the result for mixed strategies).)\n5.\n(ITERATED) DOMINANCE USING MIXED STRATEGIES WITH SMALL SUPPORTS When showing that a strategy is dominated by a mixed strategy, there are several reasons to prefer exhibiting a 92 dominating strategy that places positive probability on as few pure strategies as possible.\nFirst, this will reduce the number of bits required to specify the dominating strategy (and thus the proof of dominance can be communicated quicker): if the dominating mixed strategy places positive probability on only k strategies, then it can be specified using k real numbers for the probabilities, plus k log m (where m is the number of strategies for the player under consideration) bits to indicate which strategies are used.\nSecond, the proof of dominance will be cleaner: for a dominating mixed strategy, it is typically (always in the case of strict dominance) possible to spread some of the probability onto any unused pure strategy and still have a dominating strategy, but this obscures which pure strategies are the ones that are key in making the mixed strategy dominating.\nThird, because (by the previous) the argument for eliminating the dominated strategy is simpler and easier to understand, it is more likely to be accepted.\nFourth, the level of risk neutrality required for the argument to work is reduced, at least in the extreme case where dominance by a single pure strategy can be exhibited (no risk neutrality is required here).\nThis motivates the following problem.\nDefinition 4 (MINIMUM-DOMINATING-SET).\nWe are given the row 
player``s utilities of a game in normal form, a distinguished strategy \u03c3\u2217 for the row player, a specification of whether the dominance should be strict or weak, and a number k.\nWe are asked whether there exists a mixed strategy \u03c3 for the row player that places positive probability on at most k pure strategies, and dominates \u03c3\u2217 in the required sense.\nUnfortunately, this problem is NP-complete.\nTheorem 3.\nMINIMUM-DOMINATING-SET is NPcomplete, both for strict and for weak dominance.\nProof.\nThe problem is in NP because we can nondeterministically choose a set of at most k strategies to give positive probability, and decide whether we can dominate \u03c3\u2217 with these k strategies as described in Proposition 1.\nTo show NP-hardness, we reduce an arbitrary SET-COVER instance (given a set S, subsets S1, S2, ... , Sr, and a number t, can all of S be covered by at most t of the subsets?)\nto the following MINIMUM-DOMINATING-SET instance.\nFor every element s \u2208 S, there is a pure strategy \u03c3s for the column player.\nFor every subset Si, there is a pure strategy \u03c3Si for the row player.\nFinally, there is the distinguished pure strategy \u03c3\u2217 for the row player.\nThe row player``s utilities are as follows: ur(\u03c3Si , \u03c3s) = t + 1 if s \u2208 Si; ur(\u03c3Si , \u03c3s) = 0 if s \/\u2208 Si; ur(\u03c3\u2217 , \u03c3s) = 1 for all s \u2208 S. 
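This construction can be checked numerically. The following sketch is our own illustration (the helper names and the small example instance are hypothetical, not from the paper); it builds the row player's utilities of the reduction and verifies that a uniform mixture over a cover of size t strictly dominates σ∗, mirroring the equivalence argument below:

```python
# Sketch (not from the paper): numerically checking the SET-COVER reduction.
# A uniform mixture over a cover of size t gets expected utility
# n(s)/t * (t+1) >= (t+1)/t > 1 against every column strategy sigma_s,
# while sigma* gets exactly 1, so the mixture strictly dominates sigma*.

def row_utilities(universe, subsets, t):
    """One row per subset S_i (utility t+1 on covered elements, else 0),
    plus a last row for the distinguished strategy sigma* (utility 1)."""
    rows = [[t + 1 if s in S else 0 for s in universe] for S in subsets]
    rows.append([1] * len(universe))
    return rows

def uniform_mixture_dominates(universe, subsets, chosen, t):
    """True iff the uniform mixture over the chosen subsets strictly
    dominates sigma*, i.e. beats utility 1 in every column."""
    M = row_utilities(universe, subsets, t)
    k = len(chosen)
    return all(sum(M[i][j] for i in chosen) / k > 1
               for j in range(len(universe)))

universe = [1, 2, 3, 4]
subsets = [{1, 2}, {3, 4}, {2, 3}]
print(uniform_mixture_dominates(universe, subsets, [0, 1], t=2))  # a cover -> True
print(uniform_mixture_dominates(universe, subsets, [0, 2], t=2))  # misses element 4 -> False
```

The check fails exactly when some element is uncovered, since the mixture then earns 0 against that element's column while σ∗ earns 1.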
Finally, we let k = t.\nWe now proceed to show that the two instances are equivalent.\nFirst suppose there exists a solution to the SET-COVER instance.\nWithout loss of generality, we can assume that there are exactly k subsets in the cover.\nThen, for every Si that is in the cover, let the dominating strategy \u03c3 place exactly 1 k probability on the corresponding pure strategy \u03c3Si .\nNow, if we let n(s) be the number of subsets in the cover containing s (we observe that that n(s) \u2265 1), then for every strategy \u03c3s for the column player, the row player``s expected utility for playing \u03c3 when the column player is playing \u03c3s is u(\u03c3, \u03c3s) = n(s) k (k + 1) \u2265 k+1 k > 1 = u(\u03c3\u2217 , \u03c3s).\nSo \u03c3 strictly (and thus also weakly) dominates \u03c3\u2217 , and there exists a solution to the MINIMUM-DOMINATING-SET instance.\nNow suppose there exists a solution to the MINIMUMDOMINATING-SET instance.\nConsider the (at most k) pure strategies of the form \u03c3Si on which the dominating mixed strategy \u03c3 places positive probability, and let T be the collection of the corresponding subsets Si.\nWe claim that T is a cover.\nFor suppose there is some s \u2208 S that is not in any of the subsets in T .\nThen, if the column player plays \u03c3s, the row player (when playing \u03c3) will always receive utility 0-as opposed to the utility of 1 the row player would receive for playing \u03c3\u2217 , contradicting the fact that \u03c3 dominates \u03c3\u2217 (whether this dominance is weak or strict).\nIt follows that there exists a solution to the SET-COVER instance.\nOn the other hand, if we require that the dominating strategy only places positive probability on a very small number of pure strategies, then it once again becomes easy to check whether a strategy is dominated.\nSpecifically, to find out whether player i``s strategy \u03c3\u2217 is dominated by a strategy that places positive probability on only k pure strategies, we 
can simply check, for every subset of k of player i's pure strategies, whether there is a strategy that places positive probability only on these k strategies and dominates σ∗, using Proposition 1. This requires only O(|Σi|^k) such checks. Thus, if k is a constant, this constitutes a polynomial-time algorithm.

A natural question to ask next is whether iterated strict dominance remains computationally easy when dominating strategies are required to place positive probability on at most k pure strategies, where k is a small constant. (We have already shown in Section 4 that iterated weak dominance is hard even when k = 1, that is, when only dominance by pure strategies is allowed.) Of course, if iterated strict dominance were path-independent under this restriction, computational easiness would follow as it did in Section 4. However, it turns out that this is not the case.

Observation 1. If we restrict the dominating strategies to place positive probability on at most two pure strategies, iterated strict dominance becomes path-dependent.

Proof. Consider the following game (in each cell, the row player's utility is listed first):

    7, 1   0, 0   0, 0
    0, 0   7, 1   0, 0
    3, 0   3, 0   0, 0
    0, 0   0, 0   3, 1
    1, 0   1, 0   1, 0

Let (i, j) denote the outcome in which the row player plays the ith row and the column player plays the jth column. Because (1, 1), (2, 2), and (4, 3) are all Nash equilibria, none of the column player's pure strategies will ever be eliminated, and neither will rows 1, 2, and 4. We now observe that randomizing uniformly over rows 1 and 2 dominates row 3, and randomizing uniformly over rows 3 and 4 dominates row 5. However, if we eliminate row 3 first, it becomes impossible to dominate row 5 without randomizing over at least 3 pure strategies.

Indeed, iterated strict dominance turns out to be hard even when k = 3.

Theorem 4. If we restrict the dominating strategies to place positive probability on at most three pure strategies, it becomes NP-complete to decide whether a given strategy can be
eliminated using iterated strict dominance.

Proof. The problem is in NP because given a sequence of strategies to be eliminated, we can check in polynomial time whether this is a valid sequence of eliminations (for any strategy to be eliminated, we can check, for every subset of three other strategies, whether there is a strategy placing positive probability on only these three strategies that dominates the strategy to be eliminated, using Proposition 1). To show that the problem is NP-hard, we reduce an arbitrary satisfiability instance (given by a nonempty set of clauses C over a nonempty set of variables V, with corresponding literals L = {+v : v ∈ V} ∪ {−v : v ∈ V}) to the following two-player game. For every variable v ∈ V, the row player has strategies s+v, s−v, s1 v, s2 v, s3 v, s4 v, and the column player has strategies t1 v, t2 v, t3 v, t4 v. For every clause c ∈ C, the row player has a strategy sc, and the column player has a strategy tc, as well as, for every literal l occurring in c, an additional strategy tl c. The row player has two additional strategies s1 and s2. (s2 is the strategy that we are seeking to eliminate.) Finally, the column player has one additional strategy t1.

The utility function for the row player is given as follows (where ε is some sufficiently small number):
• ur(s+v, tj v) = 4 if j ∈ {1, 2}, for all v ∈ V;
• ur(s+v, tj v) = 1 if j ∈ {3, 4}, for all v ∈ V;
• ur(s−v, tj v) = 1 if j ∈ {1, 2}, for all v ∈ V;
• ur(s−v, tj v) = 4 if j ∈ {3, 4}, for all v ∈ V;
• ur(s+v, t) = ur(s−v, t) = 0 for all v ∈ V and t ∉ {t1 v, t2 v, t3 v, t4 v};
• ur(si v, ti v) = 13 for all v ∈ V and i ∈ {1, 2, 3, 4};
• ur(si v, t) = ε for all v ∈ V, i ∈ {1, 2, 3, 4}, and t ≠ ti v;
• ur(sc, tc) = 2 for all c ∈ C;
• ur(sc, t) = 0 for all c ∈ C and t ≠ tc;
• ur(s1, t1) = 1 + ε;
• ur(s1, t) = ε for all t ≠ t1;
• ur(s2, t1) = 1;
• ur(s2, tc) = 1 for all c ∈ C;
• ur(s2, t) = 0 for all t ∉ {t1} ∪ {tc : c ∈ C}.
The utility function for the column player is given as follows:
• uc(si v, ti v) = 1 for all v ∈ V and i ∈ {1, 2, 3, 4};
• uc(s, ti v) = 0 for all v ∈ V, i ∈ {1, 2, 3, 4}, and s ≠ si v;
• uc(sc, tc) = 1 for all c ∈ C;
• uc(sl, tc) = 1 for all c ∈ C and l ∈ L occurring in c;
• uc(s, tc) = 0 for all c ∈ C and s ∉ {sc} ∪ {sl : l ∈ c};
• uc(sc, tl c) = 1 + ε for all c ∈ C;
• uc(sl', tl c) = 1 + ε for all c ∈ C and l' ≠ l occurring in c;
• uc(s, tl c) = ε for all c ∈ C and s ∉ {sc} ∪ {sl' : l' ∈ c, l' ≠ l};
• uc(s2, t1) = 1;
• uc(s, t1) = 0 for all s ≠ s2.

We now show that the two instances are equivalent. First, suppose that there is a solution to the satisfiability instance. Then, consider the following sequence of eliminations in our game:
1. For every variable v that is set to true in the satisfying assignment, eliminate s+v with the mixed strategy σr that places probability 1/3 on s−v, probability 1/3 on s1 v, and probability 1/3 on s2 v. (The expected utility of playing σr against t1 v or t2 v is 14/3 > 4; against t3 v or t4 v, it is 4/3 > 1; and against anything else it is 2ε/3 > 0. Hence the dominance is valid.)
2. Similarly, for every variable v that is set to false in the satisfying assignment, eliminate s−v with the mixed strategy σr that places probability 1/3 on s+v, probability 1/3 on s3 v, and probability 1/3 on s4 v. (The expected utility of playing σr against t1 v or t2 v is 4/3 > 1; against t3 v or t4 v, it is 14/3 > 4; and against anything else it is 2ε/3 > 0. Hence the dominance is valid.)
3. For every c ∈ C, eliminate tc with any tl c for which l was set to true in the satisfying assignment. (This is a valid dominance because tl c performs better than tc against any strategy other than sl, and we eliminated sl in step 1 or in step 2.)
4. Finally, eliminate s2 with s1. (This is a valid dominance because s1 performs better than s2 against any strategy other than those in {tc : c ∈ C}, which we eliminated in step 3.)
Hence, there is an elimination path that eliminates s2.

Now, suppose that there is an elimination path that eliminates s2. The strategy that eventually dominates s2 must place most of its probability on s1, because s1 is the only other strategy that performs well against t1, which cannot be eliminated before s2. But, s1 performs significantly worse than s2 against any strategy tc with c ∈ C, so it follows that all these strategies must be eliminated first. Each strategy tc can only be eliminated by a strategy that places most of its weight on the corresponding strategies tl c with l ∈ c, because they are the only other strategies that perform well against sc, which cannot be eliminated before tc. But, each strategy tl c performs significantly worse than tc against sl, so it follows that for every clause c, for one of the literals l occurring in it, sl must be eliminated first. Now, strategies of the form tj v will never be eliminated because they are the unique best responses to the corresponding strategies sj v (which are, in turn, the best responses to the corresponding tj v). As a result, if strategy s+v (respectively, s−v) is eliminated, then its opposite strategy s−v (respectively, s+v) can no longer be eliminated, for the following reason. There is no other pure strategy remaining that gets a significant
utility against more than one of the strategies t1 v, t2 v, t3 v, t4 v, but s−v (respectively, s+v) gets significant utility against all 4, and therefore cannot be dominated by a mixed strategy placing positive probability on at most 3 strategies. It follows that for each v ∈ V, at most one of the strategies s+v, s−v is eliminated, in such a way that for every clause c, for one of the literals l occurring in it, sl must be eliminated. But then setting all the literals l such that sl is eliminated to true constitutes a solution to the satisfiability instance.

In the next section, we return to the setting where there is no restriction on the number of pure strategies on which a dominating mixed strategy can place positive probability.

6. (ITERATED) DOMINANCE IN BAYESIAN GAMES

So far, we have focused on normal form games that are flatly represented (that is, every matrix entry is given explicitly). However, for many games, the flat representation is too large to write down explicitly, and instead, some representation that exploits the structure of the game needs to be used. Bayesian games, besides being of interest in their own right, can be thought of as a useful structured representation of normal form games, and we will study them in this section. In a Bayesian game, each player first receives privately held preference information (the player's type) from a distribution, which determines the utility that that player receives for every outcome of (that is, vector of actions played in) the game. After receiving this type, the player plays an action based on it.7

Definition 5. A Bayesian game is given by a set of players {1, 2, ..., n}; and, for each player i, a (finite) set of actions Ai, a (finite) type space Θi with a probability distribution πi over it, and a utility function ui : Θi × A1 × A2 × ... × An → R (where ui(θi, a1, a2, ...
, an) denotes player i``s utility when i``s type is \u03b8i and each player j plays action aj).\nA pure strategy in a Bayesian game is a mapping from types to actions, \u03c3i : \u0398i \u2192 Ai, where \u03c3i(\u03b8i) denotes the action that player i plays for type \u03b8i.\nAny vector of pure strategies in a Bayesian game defines an (expected) utility for each player, and therefore we can translate a Bayesian game into a normal form game.\nIn this normal form game, the notions of dominance and iterated dominance are defined as before.\nHowever, the normal form representation of the game is exponentially larger than the Bayesian representation, because each player i has |Ai||\u0398i| distinct pure strategies.\nThus, any algorithm for Bayesian games that relies on expanding the game to its normal form will require exponential time.\nSpecifically, our easiness results for normal form games do not directly transfer to this setting.\nIn fact, it turns out that checking whether a strategy is dominated by a pure strategy is hard in Bayesian games.\nTheorem 5.\nIn a Bayesian game, it is NP-complete to decide whether a given pure strategy \u03c3r : \u0398r \u2192 Ar is dominated by some other pure strategy (both for strict and weak dominance), even when the row player``s distribution over types is uniform.\nProof.\nThe problem is in NP because it is easy to verify whether a candidate dominating strategy is indeed a dominating strategy.\nTo show that the problem is NP-hard, we reduce an arbitrary satisfiability instance (given by a set of clauses C using variables from V ) to the following Bayesian game.\nLet the row player``s action set be Ar = {t, f, 0} and let the column player``s action set be Ac = {ac : c \u2208 C}.\nLet the row player``s type set be \u0398r = {\u03b8v : v \u2208 V }, with a distribution \u03c0r that is uniform.\nLet the row player``s utility function be as follows: \u2022 ur(\u03b8v, 0, ac) = 0 for all v \u2208 V and c \u2208 C; \u2022 ur(\u03b8v, 
b, ac) = |V | for all v \u2208 V , c \u2208 C, and b \u2208 {t, f} such that setting v to b satisfies c; \u2022 ur(\u03b8v, b, ac) = \u22121 for all v \u2208 V , c \u2208 C, and b \u2208 {t, f} such that setting v to b does not satisfy c. 7 In general, a player can also receive a signal about the other players'' preferences, but we will not concern ourselves with that here.\nLet the pure strategy to be dominated be the one that plays 0 for every type.\nWe show that the strategy is dominated by a pure strategy if and only if there is a solution to the satisfiability instance.\nFirst, suppose there is a solution to the satisfiability instance.\nThen, let \u03c3d r be given by: \u03c3d r (\u03b8v) = t if v is set to true in the solution to the satisfiability instance, and \u03c3d r (\u03b8v) = f otherwise.\nThen, against any action ac by the column player, there is at least one type \u03b8v such that either +v \u2208 c and \u03c3d r (\u03b8v) = t, or \u2212v \u2208 c and \u03c3d r (\u03b8v) = f. 
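(As an aside, this expected-utility argument can be checked numerically. The sketch below is our own illustration, with a hypothetical formula and helper name, not part of the paper's construction: it maps a truth assignment to the corresponding pure strategy and confirms that it beats the all-0 strategy, which gets utility 0, against every column action ac.)

```python
# Sketch (not from the paper): under a uniform prior over the types theta_v,
# the assignment-strategy earns |V| on at least one type whose literal
# satisfies the clause c, and -1 on the others, so its expected utility
# against a_c is at least |V|/|V| - (|V|-1)/|V| = 1/|V| > 0.

def expected_utility(assignment, clause):
    """Expected utility of the assignment-strategy against the column
    action a_c for `clause`, with a uniform distribution over types."""
    n = len(assignment)
    # The literal each type plays: '+v' if v is set to true, '-v' otherwise.
    lits = {('+' if val else '-') + v for v, val in assignment.items()}
    return sum((n if lit in clause else -1) for lit in lits) / n

# Hypothetical formula (x or y) and (not x or z), with a satisfying assignment.
clauses = [{'+x', '+y'}, {'-x', '+z'}]
assignment = {'x': True, 'y': False, 'z': True}
print(all(expected_utility(assignment, c) > 0 for c in clauses))  # True
```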
Thus, the row player's expected utility against action ac is at least |V|/|V| − (|V|−1)/|V| = 1/|V| > 0. So, σd r is a dominating strategy. Now, suppose there is a dominating pure strategy σd r. This dominating strategy must play t or f for at least one type. Thus, against any ac by the column player, there must at least be some type θv for which ur(θv, σd r(θv), ac) > 0. That is, there must be at least one variable v such that setting v to σd r(θv) satisfies c. But then, setting each v to σd r(θv) must satisfy all the clauses. So a satisfying assignment exists.

However, it turns out that we can modify the linear programs from Proposition 1 to obtain a polynomial-time algorithm for checking whether a strategy is dominated by a mixed strategy in Bayesian games.

Theorem 6. In a Bayesian game, it can be decided in polynomial time whether a given (possibly mixed) strategy σr is dominated by some other mixed strategy, using linear programming (both for strict and weak dominance).

Proof. We can modify the linear programs presented in Proposition 1 as follows. For strict dominance, again assuming without loss of generality that all the utilities in the game are positive, use the following linear program (in which pσr r(θr, ar) is the probability that σr, the strategy to be dominated, places on ar for type θr):

minimize Σ_{θr∈Θr} Σ_{ar∈Ar} pr(θr, ar)
such that
for any ac ∈ Ac, Σ_{θr∈Θr} Σ_{ar∈Ar} π(θr) ur(θr, ar, ac) pr(θr, ar) ≥ Σ_{θr∈Θr} Σ_{ar∈Ar} π(θr) ur(θr, ar, ac) pσr r(θr, ar);
for any θr ∈ Θr, Σ_{ar∈Ar} pr(θr, ar) ≤ 1.

Assuming that π(θr) > 0 for all θr ∈ Θr, this program will return an objective value smaller than |Θr| if and only if σr is strictly dominated, by reasoning similar to that done in Proposition 1. For weak dominance, use the following linear program:

maximize Σ_{ac∈Ac} (Σ_{θr∈Θr} Σ_{ar∈Ar} π(θr) ur(θr, ar, ac) pr(θr, ar) − Σ_{θr∈Θr} Σ_{ar∈Ar} π(θr) ur(θr, ar, ac) pσr r(θr, ar))
such that
for any ac ∈ Ac, Σ_{θr∈Θr} Σ_{ar∈Ar} π(θr) ur(θr, ar, ac) pr(θr, ar) ≥ Σ_{θr∈Θr} Σ_{ar∈Ar} π(θr) ur(θr, ar, ac) pσr r(θr, ar);
for any θr ∈ Θr, Σ_{ar∈Ar} pr(θr, ar) = 1.

This program will return an objective value greater than 0 if and only if σr is weakly dominated, by reasoning similar to that done in Proposition 1.

We now turn to iterated dominance in Bayesian games. Naïvely, one might argue that iterated dominance in Bayesian games always requires an exponential number of steps when a significant fraction of the game's pure strategies can be eliminated, because there are exponentially many pure strategies. However, this is not a very strong argument, because oftentimes we can eliminate exponentially many pure strategies in one step. For example, if for some type θr ∈ Θr we have, for all ac ∈ Ac, that u(θr, a1 r, ac) > u(θr, a2 r, ac), then any pure strategy for the row player which plays action a2 r for type θr is dominated (by the strategy that plays action a1 r for type θr instead), and there are exponentially many (|Ar|^(|Θr|−1)) such strategies. It is therefore conceivable that we need only polynomially many eliminations of collections of a player's strategies. However, the following theorem shows that this is not the case, by giving an example where an exponential number of iterations (that is, alternations between the players in eliminating strategies) is required. (We emphasize that this is not a result about computational complexity.)

Theorem 7. Even in symmetric 3-player Bayesian games, iterated dominance by pure strategies
can require an exponential number of iterations (both for strict and weak dominance), even with only three actions per player.

Proof. Let each player i ∈ {1, 2, 3} have n + 1 types θ_i^1, θ_i^2, ..., θ_i^{n+1}. Let each player i have 3 actions a_i, b_i, c_i, and let the utility function of each player be defined as follows. (In the below, i + 1 and i + 2 are shorthand for i + 1 (mod 3) and i + 2 (mod 3) when used as player indices. Also, −∞ can be replaced by a sufficiently negative number. Finally, δ and ε should be chosen to be very small (even compared to 2^{−(n+1)}), and ε should be more than twice as large as δ.)

• u_i(θ_i^1; a_i, c_{i+1}, c_{i+2}) = −1;
• u_i(θ_i^1; a_i, s_{i+1}, s_{i+2}) = 0 for s_{i+1} ≠ c_{i+1} or s_{i+2} ≠ c_{i+2};
• u_i(θ_i^1; b_i, s_{i+1}, s_{i+2}) = −ε for s_{i+1} ≠ a_{i+1} and s_{i+2} ≠ a_{i+2};
• u_i(θ_i^1; b_i, s_{i+1}, s_{i+2}) = −∞ for s_{i+1} = a_{i+1} or s_{i+2} = a_{i+2};
• u_i(θ_i^1; c_i, s_{i+1}, s_{i+2}) = −∞ for all s_{i+1}, s_{i+2};
• u_i(θ_i^j; a_i, s_{i+1}, s_{i+2}) = −∞ for all s_{i+1}, s_{i+2} when j > 1;
• u_i(θ_i^j; b_i, s_{i+1}, s_{i+2}) = −ε for all s_{i+1}, s_{i+2} when j > 1;
• u_i(θ_i^j; c_i, s_{i+1}, c_{i+2}) = δ − ε − 1/2 for all s_{i+1} when j > 1;
• u_i(θ_i^j; c_i, s_{i+1}, s_{i+2}) = δ − ε for all s_{i+1} and s_{i+2} ≠ c_{i+2} when j > 1.

Let the distribution over each player's types be given by p(θ_i^j) = 2^{−j} (with the exception that p(θ_i^2) = 2^{−2} + 2^{−(n+1)}). We will be interested in eliminating strategies of the following form: play b_i for type θ_i^1, and play one of b_i or c_i otherwise. Because the utility function is the same for any type θ_i^j with j > 1, these strategies are effectively defined by the total probability that they place on c_i, which is t_i^2 (2^{−2} + 2^{−(n+1)}) + Σ_{j=3}^{n+1} t_i^j 2^{−j}, where t_i^j = 1 if player i plays c_i for type θ_i^j, and 0 otherwise. (Footnote 8: Note that the strategies are still pure strategies; the probability placed on an action by a strategy here is simply the sum of the probabilities of the types for which the strategy chooses that action.) This probability is different for any two different strategies of the given form, and we have exponentially many different strategies of the given form. For any probability q which can be expressed as t^2 (2^{−2} + 2^{−(n+1)}) + Σ_{j=3}^{n+1} t^j 2^{−j} (with all t^j ∈ {0, 1}), let σ_i(q) denote the (unique) strategy of the given form for player i which places a total probability of q on c_i. Any strategy that plays c_i for type θ_i^1, or a_i for some type θ_i^j with j > 1, can immediately be eliminated. We will show that, after that, we must eliminate the strategies σ_i(q) with high q first, slowly working down to those with lower q.

Claim 1: If σ_{i+1}(q') and σ_{i+2}(q') have not yet been eliminated, and q < q', then σ_i(q) cannot yet be eliminated.

Proof: First, we show that no strategy σ_i(q'') can eliminate σ_i(q). Against σ_{i+1}(q'''), σ_{i+2}(q'''), the utility of playing σ_i(p) is −ε + p·δ − p·q'''/2. Thus, when q''' = 0, it is best to set p as high as possible (and we note that σ_{i+1}(0) and σ_{i+2}(0) have not been eliminated), but when q''' > 0, it is best to set p as low as possible, because δ < q'''/2. Thus, whether q'' > q or q'' < q, σ_i(q) will always do strictly better than σ_i(q'') against some remaining opponent strategies. Hence, no strategy σ_i(q'') can eliminate σ_i(q). The only other pure strategies that could dominate σ_i(q) are strategies that play a_i for type θ_i^1, and b_i or c_i for all other types. Let us take such a strategy and suppose that it plays c_i with probability p.
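As a quick numeric sanity check on the construction so far, the following sketch (with hypothetical parameter choices n = 8, ε = 2^{−20}, δ = 2^{−22}, which satisfy the stated constraints) verifies that the 2^n strategies of the given form place distinct total probabilities on c_i, and that the utility −ε + p·δ − p·q/2 of σ_i(p) is increasing in p when the opponents never play c but decreasing in p against any opponents of the given form that do:

```python
from itertools import product

# Sanity check of the Theorem 7 construction; n, eps and delta are
# hypothetical choices satisfying the proof's constraints: both are tiny
# even compared to 2^-(n+1), and eps is more than twice delta.
n = 8
eps = 2.0 ** -20
delta = 2.0 ** -22
assert eps > 2 * delta and eps < 2.0 ** -(n + 1)

# Total probability a strategy of the given form places on c_i, where
# bits[j-2] = t_i^j says whether it plays c_i for type theta_i^j (j = 2..n+1).
def total_c_probability(bits):
    q = bits[0] * (2.0 ** -2 + 2.0 ** -(n + 1))
    q += sum(t * 2.0 ** -j for j, t in zip(range(3, n + 2), bits[1:]))
    return q

qs = sorted(total_c_probability(bits) for bits in product([0, 1], repeat=n))
assert len(set(qs)) == 2 ** n  # exponentially many distinct strategies

# Utility of sigma_i(p) against opponents of the given form that each place
# total probability q on c: -eps + p*delta - p*q/2.
def u(p, q):
    return -eps + p * delta - p * q / 2

# With q = 0, higher p is better; against any feasible q > 0, lower p is
# better, because delta < q/2 already for the smallest positive q.
q_min = min(q for q in qs if q > 0)
assert q_min == 2.0 ** -(n + 1)
assert delta < q_min / 2
assert u(1.0, 0.0) > u(0.0, 0.0)
assert u(0.0, q_min) > u(1.0, q_min)
```

All probabilities involved are small dyadic rationals, so the floating-point comparisons above are exact.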
Against σ_{i+1}(q'), σ_{i+2}(q') (which have not yet been eliminated), the utility of playing this strategy is −(q')^2/2 − ε/2 + p·δ − p·q'/2. On the other hand, playing σ_i(q) gives −ε + q·δ − q·q'/2. Because q' > q, we have −(q')^2/2 < −q·q'/2, and because δ and ε are small, it follows that σ_i(q) receives a higher utility. Therefore, no strategy dominates σ_i(q), proving the claim.

Claim 2: If, for all q' > q, σ_{i+1}(q') and σ_{i+2}(q') have been eliminated, then σ_i(q) can be eliminated.

Proof: Consider the strategy for player i that plays a_i for type θ_i^1, and b_i for all other types (call this strategy σ_i^*); we claim σ_i^* dominates σ_i(q). First, if either of the other players k plays a_k for θ_k^1, then σ_i^* performs better than σ_i(q) (which receives −∞ in some cases). Because the strategies for player k that play c_k for type θ_k^1, or a_k for some type θ_k^j with j > 1, have already been eliminated, all that remains to check is that σ_i^* performs better than σ_i(q) whenever both of the other two players play strategies of the following form: play b_k for type θ_k^1, and play one of b_k or c_k otherwise. We note that among these strategies, there are none left that place probability greater than q on c_k. Letting q_k denote the probability with which player k plays c_k, the expected utility of playing σ_i^* is −q_{i+1}·q_{i+2}/2 − ε/2. On the other hand, the utility of playing σ_i(q) is −ε + q·δ − q·q_{i+2}/2. Because q_{i+1} ≤ q, the difference between these two expressions is at least ε/2 − δ, which is positive. It follows that σ_i^* dominates σ_i(q). From Claim 2, it follows that all strategies of the form σ_i(q) will eventually be eliminated. However, Claim 1 shows that we cannot go ahead and eliminate multiple such
strategies for one player, unless at least one other player simultaneously keeps up in the eliminated strategies: every time a σ_i(q) is eliminated such that σ_{i+1}(q) and σ_{i+2}(q) have not yet been eliminated, we need to eliminate one of the latter two strategies before any σ_i(q') with q' < q can be eliminated; that is, we need to alternate between players. Because there are exponentially many strategies of the form σ_i(q), it follows that iterated elimination will require exponentially many iterations to complete.

It follows that an efficient algorithm for iterated dominance (strict or weak) by pure strategies in Bayesian games, if it exists, must somehow be able to perform (at least part of) many iterations in a single step of the algorithm (because if each step only performed a single iteration, we would need exponentially many steps). Interestingly, Knuth et al. [11] argue that iterated dominance appears to be an inherently sequential problem (in light of their result that iterated very weak dominance is P-complete, that is, apparently not efficiently parallelizable), suggesting that aggregating many iterations may be difficult.

7. CONCLUSIONS

While the Nash equilibrium solution concept is studied more and more intensely in our community, the perhaps more elementary concept of (iterated) dominance has received much less attention. In this paper we studied various computational aspects of this concept. We first studied both strict and weak dominance (not iterated), and showed that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve. We then moved on to iterated dominance. We showed that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance. This allowed us to also show that determining whether there is a path that leads to a unique solution is NP-complete. Both of these results hold
both with and without dominance by mixed strategies. (A weaker version of the second result (only without dominance by mixed strategies) was already known [7].) Iterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time. We then studied what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies. First, we showed that finding the dominating strategy with minimum support size is NP-complete (both for strict and weak dominance). Then, we showed that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size is 3). Finally, we studied dominance and iterated dominance in Bayesian games, as an example of a concise representation language for normal form games that is interesting in its own right. We showed that, unlike in normal form games, deciding whether a given pure strategy is dominated by another pure strategy in a Bayesian game is NP-complete (both with strict and weak dominance); however, deciding whether a strategy is dominated by some mixed strategy can still be done in polynomial time with a single linear program solve (both with strict and weak dominance). Finally, we showed that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance).

There are various avenues for future research. First, there is the open question of whether it is possible to complete iterated dominance in Bayesian games in polynomial time (even though we showed that an exponential number of alternations between the players in eliminating strategies is sometimes required). Second, we can study computational
aspects of (iterated) dominance in concise representations of normal form games other than Bayesian games--for example, in graphical games [9] or local-effect/action graph games [12, 2]. (How to efficiently perform iterated very weak dominance has already been studied for partially observable stochastic games [8].) Finally, we can ask whether some of the algorithms we described (such as the one for iterated strict dominance with mixed strategies) can be made faster.

8. REFERENCES

[1] Krzysztof R. Apt. Uniform proofs of order independence for various strategy elimination procedures. Contributions to Theoretical Economics, 4(1), 2004.
[2] Nivan A. R. Bhat and Kevin Leyton-Brown. Computing Nash equilibria of action-graph games. In UAI, 2004.
[3] Ben Blum, Christian R. Shelton, and Daphne Koller. A continuation method for Nash equilibria in structured games. In IJCAI, 2003.
[4] Vincent Conitzer and Tuomas Sandholm. Complexity results about Nash equilibria. In IJCAI, pages 765-771, 2003.
[5] Drew Fudenberg and Jean Tirole. Game Theory. MIT Press, 1991.
[6] Itzhak Gilboa, Ehud Kalai, and Eitan Zemel. On the order of eliminating dominated strategies. Operations Research Letters, 9:85-89, 1990.
[7] Itzhak Gilboa, Ehud Kalai, and Eitan Zemel. The complexity of eliminating dominated strategies. Mathematics of Operations Research, 18:553-565, 1993.
[8] Eric A. Hansen, Daniel S. Bernstein, and Shlomo Zilberstein. Dynamic programming for partially observable stochastic games. In AAAI, pages 709-715, 2004.
[9] Michael Kearns, Michael Littman, and Satinder Singh. Graphical models for game theory. In UAI, 2001.
[10] Leonid Khachiyan. A polynomial algorithm in linear programming. Soviet Math. Doklady, 20:191-194, 1979.
[11] Donald E. Knuth, Christos H.
Papadimitriou, and John N. Tsitsiklis. A note on strategy elimination in bimatrix games. Operations Research Letters, 7(3):103-107, 1988.
[12] Kevin Leyton-Brown and Moshe Tennenholtz. Local-effect games. In IJCAI, 2003.
[13] Richard Lipton, Evangelos Markakis, and Aranyak Mehta. Playing large games using simple strategies. In ACM-EC, pages 36-41, 2003.
[14] Michael Littman and Peter Stone. A polynomial-time Nash equilibrium algorithm for repeated games. In ACM-EC, pages 48-54, 2003.
[15] Leslie M. Marx and Jeroen M. Swinkels. Order independence for iterated weak dominance. Games and Economic Behavior, 18:219-245, 1997.
[16] Leslie M. Marx and Jeroen M. Swinkels. Corrigendum, order independence for iterated weak dominance. Games and Economic Behavior, 31:324-329, 2000.
[17] Andreu Mas-Colell, Michael Whinston, and Jerry R. Green. Microeconomic Theory. Oxford University Press, 1995.
[18] Roger Myerson. Game Theory: Analysis of Conflict. Harvard University Press, Cambridge, 1991.
[19] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press, 1994.
[20] Christos Papadimitriou. Algorithms, games and the Internet. In STOC, pages 749-753, 2001.
[21] Ryan Porter, Eugene Nudelman, and Yoav Shoham. Simple search methods for finding a Nash equilibrium. In AAAI, pages 664-669, 2004.

2. DEFINITIONS AND BASIC PROPERTIES

In this section, we briefly review normal form games, as well as dominance and iterated dominance (both strict and weak). An
n-player normal form game is defined as follows.\nEach player i has a (finite) set of pure strategies Σi and a utility function ui: Σ1 × Σ2 × ... × Σn → R (where ui(σ1, σ2, ..., σn) denotes player i's utility when each player j plays action σj).\nThe two main notions of dominance are defined as follows.\nPlayer i's strategy σ'i is said to be strictly dominated by player i's strategy σi if for any vector of strategies σ−i for the other players, ui(σi, σ−i) > ui(σ'i, σ−i).\nPlayer i's strategy σ'i is said to be weakly dominated by player i's strategy σi if for any vector of strategies σ−i for the other players, ui(σi, σ−i) ≥ ui(σ'i, σ−i), and for at least one vector of strategies σ−i for the other players, ui(σi, σ−i) > ui(σ'i, σ−i).\nIn this definition, it is sometimes allowed for the dominating strategy σi to be a mixed strategy, that is, a probability distribution over pure strategies.\nIn this case, the utilities in the definition are the expected utilities.2\nThere are other notions of dominance, such as very weak dominance (in which no strict inequality is required, so two strategies can dominate each other), but we will not study them here.\nWhen we are looking at the dominance relations for player i, the other players (−i) can be thought of as a single player.3\nTherefore, in the rest of the paper, when we study one-shot (not iterated) dominance, we will focus without loss of generality on two-player games.4\nIn two-player games, we will generally refer to the players as r (row) and c (column) rather than 1 and 2.\nIn iterated dominance, dominated strategies are removed from the game, and no longer have any effect on future dominance relations.\nIterated dominance can eliminate more strategies than dominance, as follows.\nσr may originally not dominate σ'r because the latter performs better against σ'c; but then, once σ'c is removed because it is dominated by σc, σr dominates σ'r, and the latter can be removed.\nFor example, in the following game, R can be removed first, after which D is dominated.\nEither strict or weak dominance can be used in the definition of iterated dominance.\nWe note that the process of iterated dominance is never helped by removing a dominated mixed strategy, for the following reason.\nIf σ'i gives player i a higher utility than σi against mixed strategy σ'j for player j ≠ i (and strategies σ−{i,j} for the other players), then for at least one pure strategy σj that σ'j places positive probability on, σ'i must perform better than σi against σj (and strategies σ−{i,j} for the other players).\nThus, removing the mixed strategy σ'j does not introduce any new dominances.\nMore detailed discussions and examples can be found in standard texts on microeconomics or game theory [17, 5].\nWe are now ready to move on to the core of this paper.\n3.\nDOMINANCE (NOT ITERATED)\nIn this section, we study the notion of one-shot (not iterated) dominance.\nAs a first observation, checking whether a given strategy is strictly (weakly) dominated by some pure strategy is straightforward, by checking, for every pure strategy for that player, whether the latter strategy performs strictly better against all the opponent's strategies (at least as well against all the opponent's strategies, and strictly better against at least one).5\n2 The dominated strategy σ'i is, of course, also allowed to be mixed, but this has no technical implications for the paper: when we study one-shot dominance, we ask whether a given strategy is dominated, and it does not matter whether the given strategy is pure or mixed; when we study iterated dominance, there is no use in eliminating mixed strategies, as we will see shortly.\n3 This player may have a very large strategy space (one pure strategy for every vector of pure strategies for the players that are being replaced). Nevertheless, this will not result in an increase in our representation size, because the original representation already had to specify utilities for each of these vectors.\n4 We note that a restriction to two-player games would not be without loss of generality for iterated dominance. This is because for iterated dominance, we need to look at the dominated strategies of each individual player, so we cannot merge any players.\nNext, we show that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time by solving a single linear program.\n(Similar linear programs have been given before [18]; we present the result here for completeness, and because we will build on the linear programs given below in Theorem 6.)\nPROPOSITION 1.\nWhether a given strategy is strictly (weakly) dominated by some mixed strategy can be checked in polynomial time by solving a single linear program.\nPROOF.\nLet p_dr be the probability that σr places on dr ∈ Dr (the set of pure strategies on which the dominating strategy is allowed to place positive probability).\nWe will solve a single linear program in each of our algorithms; linear programs can be solved in polynomial time [10].\nFor strict dominance, the question is whether the p_dr can be set so that for every pure strategy for the column player σc ∈ Σc, Σ_{dr ∈ Dr} p_dr ur(dr, σc) > ur(σ*r, σc).\nBecause the inequality must be strict, we cannot solve this directly by linear programming.\nWe proceed as follows.\nBecause the game is finite, we may assume without loss of generality that all utilities are positive (if not, simply add a constant to all utilities).\nSolve the following linear program: minimize Σ_{dr ∈ Dr} p_dr subject to, for any σc ∈ Σc, Σ_{dr ∈ Dr} p_dr ur(dr, σc) ≥ ur(σ*r, σc), and, for any dr ∈ Dr, p_dr ≥ 0.\nIf σ*r is strictly dominated by some mixed strategy, this linear program has a solution with objective value < 1.\n(The dominating strategy is a feasible solution with objective value exactly 1.\nBecause no constraint is binding for this solution, we can reduce one of the probabilities slightly without affecting feasibility, thereby obtaining a solution with objective value < 1.)\nMoreover, if this linear program has a solution with objective value < 1, there is a mixed strategy strictly dominating σ*r, which can be obtained by taking the LP solution and adding the remaining probability to any
strategy (because all the utilities are positive, this will add to the left side of any inequality, so all inequalities will become strict).\nFor weak dominance, we can solve the following linear program: maximize Σ_{σc ∈ Σc} ((Σ_{dr ∈ Dr} p_dr ur(dr, σc)) − ur(σ*r, σc)) subject to, for any σc ∈ Σc, Σ_{dr ∈ Dr} p_dr ur(dr, σc) ≥ ur(σ*r, σc); Σ_{dr ∈ Dr} p_dr = 1; and, for any dr ∈ Dr, p_dr ≥ 0.\nIf σ*r is weakly dominated by some mixed strategy, that mixed strategy is a feasible solution to this program with objective value > 0, because for at least one strategy σc ∈ Σc the corresponding constraint holds strictly.\nOn the other hand, if this program has a solution with objective value > 0, then for at least one strategy σc ∈ Σc we must have (Σ_{dr ∈ Dr} p_dr ur(dr, σc)) − ur(σ*r, σc) > 0, and thus the linear program's solution is a weakly dominating mixed strategy.\n5 Recall that the assumption of a single opponent (that is, the assumption of two players) is without loss of generality for one-shot dominance.\n4.\nITERATED DOMINANCE\nWe now move on to iterated dominance.\nIt is well-known that iterated strict dominance is path-independent [6, 19]--that is, if we remove dominated strategies until no more dominated strategies remain, in the end the remaining strategies for each player will be the same, regardless of the order in which strategies are removed.\nBecause of this, to see whether a given strategy can be eliminated by iterated strict dominance, all that needs to be done is to repeatedly remove strategies that are strictly dominated, until no more dominated strategies remain.\nBecause we can check in polynomial time whether any given strategy is dominated (whether or not dominance by mixed strategies is allowed, as described in Section 3), this whole procedure takes only polynomial time.\nIn the case of iterated dominance by pure strategies with two players, Knuth et al.
[11] slightly improve on (speed up) the straightforward implementation of this procedure by keeping track of, for each ordered pair of strategies for a player, the number of opponent strategies that prevent the first strategy from dominating the second.\nHereby the runtime for an m × n game is reduced from O((m + n)^4) to O((m + n)^3).\n(Actually, they only study very weak dominance (for which no strict inequalities are required), but the approach is easily extended.)\nIn contrast, iterated weak dominance is known to be path-dependent.6 For example, in the following game, using iterated weak dominance we can eliminate M first, and then D, or R first, and then U.\nTherefore, while the procedure of removing weakly dominated strategies until no more weakly dominated strategies remain can certainly be executed in polynomial time, which strategies survive in the end depends on the order in which we remove the dominated strategies.\nWe will investigate two questions for iterated weak dominance: whether a given strategy is eliminated in some path, and whether there is a path to a unique solution (one pure strategy left per player).\nWe will show that both of these problems are computationally hard.\nThe following lemma shows a special case of normal form games in which allowing for weak dominance by mixed strategies (in addition to weak dominance by pure strategies) does not help.\nWe will prove the hardness results in this setting, so that they will hold whether or not dominance by mixed strategies is allowed.\n6 There is, however, a restriction of weak dominance called nice weak dominance which is path-independent [15, 16]. For an overview of path-independence results, see Apt [1].\nLEMMA 1.\nIn a game where all utilities are 0 or 1, a pure strategy that is weakly dominated by a mixed strategy is also weakly dominated by a pure strategy.\nPROOF.\nSuppose pure strategy σ is weakly dominated by mixed strategy σ∗.\nIf σ gets a utility of 1 against some opponent strategy (or vector of opponent strategies if there are more than 2 players), then all the pure strategies that σ∗ places positive
probability on must also get a utility of 1 against that opponent strategy (or else the expected utility would be smaller than 1).\nMoreover, at least one of the pure strategies that σ∗ places positive probability on must get a utility of 1 against an opponent strategy that σ gets 0 against (or else the inequality would never be strict).\nIt follows that this pure strategy weakly dominates σ.\nWe are now ready to prove the main results of this section.\nTHEOREM 1.\nIWD-STRATEGY-ELIMINATION (deciding whether a given strategy is eliminated in some path of iterated weak dominance) is NP-complete, even with two players, and even when all utilities are in {0, 1}.\nPROOF.\nThe problem is in NP because given a sequence of strategies to be eliminated, we can easily check whether this is a valid sequence of eliminations (even when dominance by mixed strategies is allowed, using Proposition 1).\nTo show that the problem is NP-hard, we reduce an arbitrary satisfiability instance (given by a nonempty set of clauses C over a nonempty set of variables V, with corresponding literals L = {+v: v ∈ V} ∪ {−v: v ∈ V}) to the following IWD-STRATEGY-ELIMINATION instance.\n(In this instance, we will specify that certain strategies are uneliminable.\nA strategy σr can be made uneliminable, even when 0 and 1 are the only allowed utilities, by adding another strategy σ'r and another opponent strategy σ'c, so that: 1.\nσr and σ'r are the only strategies that give the row player a utility of 1 against σ'c.\n2.\nσr and σ'r always give the row player the same utility.\n3.\nσ'c is the only strategy that gives the column player a utility of 1 against σ'r, but otherwise σ'c always gives the column player utility 0.\nThis makes it impossible to eliminate any of these three strategies.\nWe will not explicitly specify the additional strategies to make the proof more legible.)\nIn this proof, we will denote row player strategies by s, and column player strategies by t, to improve legibility.\nLet the row player's pure strategy set be given as follows.\nFor every variable v ∈ V, the row player has corresponding strategies s1+v,
s2+v, s1−v, s2−v. Additionally, the row player has the following 2 strategies: s10 and s20, where s20 = σ*r (that is, it is the strategy we seek to eliminate).\nFinally, for every clause c ∈ C, the row player has corresponding strategies s1c (uneliminable) and s2c.\nLet the column player's pure strategy set be given as follows.\nFor every variable v ∈ V, the column player has a corresponding strategy tv.\nFor every clause c ∈ C, the column player has a corresponding strategy tc, and additionally, for every literal l ∈ L that occurs in c, a strategy tc,l. For every variable v ∈ V, the column player has corresponding strategies t+v, t−v (both uneliminable).\nFinally, the column player has three additional strategies: t10 (uneliminable), t20, and t1.\nThe utility function for the row player is given as follows:\n• ur(s1+v, tv) = 0 for all v ∈ V; • ur(s2+v, tv) = 1 for all v ∈ V; • ur(s1−v, tv) = 1 for all v ∈ V; • ur(s2−v, tv) = 0 for all v ∈ V; • ur(s1+v, t1) = 1 for all v ∈ V; • ur(s2+v, t1) = 0 for all v ∈ V; • ur(s1−v, t1) = 0 for all v ∈ V; • ur(s2−v, t1) = 1 for all v ∈ V; • ur(sb+v, t+v) = 1 for all v ∈ V and b ∈ {1, 2}; • ur(sb−v, t−v) = 1 for all v ∈ V and b ∈ {1, 2}; • ur(sl, t) = 0 otherwise for all l ∈ L and t ∈ S2; • ur(s10, tc) = 0 for all c ∈ C; • ur(s20, tc) = 1 for all c ∈ C; • ur(sb0, t10) = 1 for all b ∈ {1, 2}; • ur(s10, t20) = 1; • ur(s20, t20) = 0; • ur(sb0, t) = 0 otherwise for all b ∈ {1, 2} and t ∈ S2; • ur(sbc, t) = 0 otherwise for all c ∈ C and b ∈ {1, 2}; and the row player's utility is 0 in every other case.\nThe utility function for the column player is given as follows: • uc(s, tv) = 0 for all v ∈ V and s ∈ S1; • uc(s, t1) = 0 for all s ∈ S1; • uc(s2l, tc) = 1 for all c ∈ C and l ∈ L where l ∈ c (literal l occurs in clause
c); • uc(s2l2, tc,l1) = 1 for all c ∈ C and l1, l2 ∈ L, l1 ≠ l2, where l2 ∈ c; • uc(s1c, tc) = 1 for all c ∈ C; • uc(s2c, tc) = 0 for all c ∈ C; • uc(sbc, tc,l) = 1 for all c ∈ C, l ∈ L, and b ∈ {1, 2}; • uc(s2, tc) = uc(s2, tc,l) = 0 otherwise for all c ∈ C and l ∈ L;\nand the column player's utility is 0 in every other case.\nWe now show that the two instances are equivalent.\nFirst, suppose there is a solution to the satisfiability instance: that is, a truth-value assignment to the variables in V such that all clauses are satisfied.\nThen, consider the following sequence of eliminations in our game: 1.\nFor every variable v that is set to true in the assignment, eliminate tv (which gives the column player utility 0 everywhere).\n2.\nThen, for every variable v that is set to true in the assignment, eliminate s2+v using s1+v (which is possible because tv has been eliminated, and because t1 has not been eliminated (yet)).\n3.\nNow eliminate t1 (which gives the column player utility 0 everywhere).\n4.\nNext, for every variable v that is set to false in the assignment, eliminate s2−v using s1−v (which is possible because t1 has been eliminated, and because tv has not been eliminated (yet)).\n5.\nFor every clause c which has the variable corresponding to one of its positive literals l = +v set to true in the assignment, eliminate tc using tc,l (which is possible because s2l has been eliminated, and s2c has not been eliminated (yet)).\n6.\nFor every clause c which has the variable corresponding to one of its negative literals l = −v set to false in the assignment, eliminate tc using tc,l (which is possible because s2l has been eliminated, and s2c has not been eliminated (yet)).\n7.\nBecause the assignment satisfies the formula, all the tc have now been eliminated.\nThus, we can eliminate s20 = σ*r using s10.\nIt follows that there is a solution to the IWD-STRATEGY-ELIMINATION instance.\nNow suppose
there is a solution to the IWD-STRATEGY-ELIMINATION instance.\nBy Lemma 1, we can assume that all the dominances are by pure strategies.\nWe first observe that only s10 can eliminate s20 = σ*r, because it is the only other strategy that gets the row player a utility of 1 against t10, and t10 is uneliminable.\nHowever, because s20 performs better than s10 against the tc strategies, it follows that all of the tc strategies must be eliminated.\nFor each c ∈ C, the strategy tc can only be eliminated by one of the strategies tc,l (with the same c), because these are the only other strategies that get the column player a utility of 1 against s1c, and s1c is uneliminable.\nBut, in order for some tc,l to eliminate tc, s2l must be eliminated first.\nOnly s1l can eliminate s2l, because it is the only other strategy that gets the row player a utility of 1 against tl, and tl is uneliminable.\nWe next show that for every v ∈ V only one of s2+v, s2−v can be eliminated.\nThis is because in order for s1+v to eliminate s2+v, tv needs to have been eliminated and t1, not (so tv must be eliminated before t1); but in order for s1−v to eliminate s2−v, t1 needs to have been eliminated and tv, not (so t1 must be eliminated before tv).\nSo, set v to true if s2+v is eliminated, and to false otherwise.\nBecause, by the above, for every clause c, one of the s2l with l ∈ c must be eliminated, it follows that this is a satisfying assignment to the satisfiability instance.\nUsing Theorem 1, it is now (relatively) easy to show that IWD-UNIQUE-SOLUTION is also NP-complete under the same restrictions.\nTHEOREM 2.\nIWD-UNIQUE-SOLUTION is NP-complete, even with two players, and even when all utilities are in {0, 1}.\nPROOF.\nAgain, the problem is in NP because we can nondeterministically choose the sequence of eliminations and verify whether it is correct.\nTo show NP-hardness, we reduce an arbitrary IWD-STRATEGY-ELIMINATION instance to the following IWD-UNIQUE-SOLUTION instance.\nLet all the strategies for each player from the original instance remain part of the new
instance, and let the utilities resulting from the players playing a pair of these strategies be the same.\nWe add three additional strategies a1r, a2r, a3r for the row player, and three additional strategies a1c, a2c, a3c for the column player.\nLet the additional utilities be as follows:\n• ur(ar, ajc) = 1 for all ar ∉ {a1r, a2r, a3r} and j ∈ {2, 3}; • ur(air, ac) = 1 for all i ∈ {1, 2, 3} and ac ∉ {a2c, a3c}; • ur(air, a2c) = 1 for all i ∈ {2, 3}; • ur(a1r, a3c) = 1; • and the row player's utility is 0 in all other cases involving a new strategy.\n• uc(a3r, ac) = 1 for all ac ∉ {a1c, a2c, a3c}; • uc(σ*r, ajc) = 1 for all j ∈ {2, 3} (σ*r is the strategy to be eliminated in the original instance); • uc(air, a1c) = 1 for all i ∈ {1, 2}; • uc(a1r, a2c) = 1; • uc(a2r, a3c) = 1; • and the column player's utility is 0 in all other cases involving a new strategy.\nWe proceed to show that the two instances are equivalent.\nFirst suppose there exists a solution to the original IWD-STRATEGY-ELIMINATION instance.\nThen, perform the same sequence of eliminations to eliminate σ*r in the new IWD-UNIQUE-SOLUTION instance.\n(This is possible because at any stage, any weak dominance for the row player in the original instance is still a weak dominance in the new instance, because the two strategies' utilities for the row player are the same when the column player plays one of the new strategies; and the same is true for the column player.)\nOnce σ*r is eliminated, let a1c eliminate a2c.\n(It performs better against a2r.)\nThen, let a1r eliminate all the other remaining strategies for the row player.\n(It always performs better against either a1c or a3c.)\nFinally, a1c is the unique best response against a1r among the column player's remaining strategies, so let it eliminate all the other remaining strategies for the column player.\nThus, there exists a solution
to the IWD-UNIQUE-SOLUTION instance.\nNow suppose there exists a solution to the IWD-UNIQUE-SOLUTION instance.\nBy Lemma 1, we can assume that all the dominances are by pure strategies.\nWe will show that none of the new strategies (a1r, a2r, a3r, a1c, a2c, a3c) can either eliminate another strategy, or be eliminated before σ*r is eliminated.\nThus, there must be a sequence of eliminations ending in the elimination of σ*r, which does not involve any of the new strategies, and is therefore a valid sequence of eliminations in the original game (because all original strategies perform the same against each new strategy).\nWe now show that this is true by exhausting all possibilities for the first elimination before σ*r is eliminated that involves a new strategy.\nNone of the air can be eliminated by any ar ∉ {a1r, a2r, a3r}, because the air perform better against a1c.\na1r cannot eliminate any other strategy, because it always performs poorer against a2c.\na2r and a3r are equivalent from the row player's perspective (and thus cannot eliminate each other), and cannot eliminate any other strategy because they always perform poorer against a3c.\nNone of the ajc can be eliminated by any ac ∉ {a1c, a2c, a3c}, because the ajc always perform better against either a1r or a2r.\na1c cannot eliminate any other strategy, because it always performs poorer against either σ*r or a3r.\na2c cannot eliminate any other strategy, because it always performs poorer against a2r or a3r.\na3c cannot eliminate any other strategy, because it always performs poorer against a1r or a3r.\nThus, there exists a solution to the IWD-STRATEGY-ELIMINATION instance.\nA slightly weaker version of the part of Theorem 2 concerning dominance by pure strategies only is the main result of Gilboa et al.
[7].\n(Besides not proving the result for dominance by mixed strategies, the original result was weaker because it required utilities {0, 1, 2, 3, 4, 5, 6, 7, 8} rather than just {0, 1} (and because of this, our Lemma 1 cannot be applied to it to get the result for mixed strategies).)\n5.\n(ITERATED) DOMINANCE USING MIXED STRATEGIES WITH SMALL SUPPORTS\nWhen showing that a strategy is dominated by a mixed strategy, there are several reasons to prefer exhibiting a dominating strategy that places positive probability on as few pure strategies as possible.\nFirst, this will reduce the number of bits required to specify the dominating strategy (and thus the proof of dominance can be communicated quicker): if the dominating mixed strategy places positive probability on only k strategies, then it can be specified using k real numbers for the probabilities, plus k log m (where m is the number of strategies for the player under consideration) bits to indicate which strategies are used.\nSecond, the proof of dominance will be "cleaner": for a dominating mixed strategy, it is typically (always in the case of strict dominance) possible to spread some of the probability onto any unused pure strategy and still have a dominating strategy, but this obscures which pure strategies are the ones that are key in making the mixed strategy dominating.\nThird, because (by the previous) the argument for eliminating the dominated strategy is simpler and easier to understand, it is more likely to be accepted.\nFourth, the level of risk neutrality required for the argument to work is reduced, at least in the extreme case where dominance by a single pure strategy can be exhibited (no risk neutrality is required here).\nThis motivates the following problem.\nDEFINITION 4 (MINIMUM-DOMINATING-SET).\nWe are given the row player's utilities of a game in normal form, a distinguished strategy σ* for the row player, a specification of whether the dominance should be strict or weak, and a
number k.\nWe are asked whether there exists a mixed strategy σ for the row player that places positive probability on at most k pure strategies, and dominates σ* in the required sense.\nUnfortunately, this problem is NP-complete.\nTHEOREM 3.\nMINIMUM-DOMINATING-SET is NP-complete, both for strict and for weak dominance.\nPROOF.\nThe problem is in NP because we can nondeterministically choose a set of at most k strategies to give positive probability, and decide whether we can dominate σ* with these k strategies as described in Proposition 1.\nTo show NP-hardness, we reduce an arbitrary SET-COVER instance (given a set S, subsets S1, S2, ..., Sr, and a number t, can all of S be covered by at most t of the subsets?) to the following MINIMUM-DOMINATING-SET instance.\nFor every element s ∈ S, there is a pure strategy σs for the column player.\nFor every subset Si, there is a pure strategy σSi for the row player.\nFinally, there is the distinguished pure strategy σ* for the row player.\nThe row player's utilities are as follows: ur(σSi, σs) = t + 1 if s ∈ Si; ur(σSi, σs) = 0 if s ∉ Si; ur(σ*, σs) = 1 for all s ∈ S.
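As an aside, the linear-programming dominance check from Section 3 (Proposition 1), which the NP membership argument above relies on, can be sketched in a few lines. The following is a minimal Python sketch, not the paper's implementation; it assumes numpy and scipy are available, and the toy payoff matrix at the bottom is a hypothetical instance of the set-cover construction just described (S = {a, b, c}, S1 = {a, b}, S2 = {c}, S3 = {b, c}, t = 2).

```python
import numpy as np
from scipy.optimize import linprog

def strictly_dominated(U, star):
    """Is row `star` of payoff matrix U strictly dominated by some mixed
    strategy over the remaining rows?  Uses the LP of Section 3: after
    shifting all utilities to be positive, `star` is strictly dominated
    iff the minimum total probability needed to match it is below 1."""
    U = np.asarray(U, dtype=float)
    U = U + (1.0 - U.min())  # make all utilities positive (WLOG, as in the proof)
    rows = [i for i in range(U.shape[0]) if i != star]
    # minimize sum_d p_d  subject to, for every column c:
    #   sum_d p_d * U[d, c] >= U[star, c]   and   p_d >= 0
    # (written as -A p <= -b for linprog's A_ub / b_ub convention)
    res = linprog(c=np.ones(len(rows)), A_ub=-U[rows].T, b_ub=-U[star],
                  bounds=(0, None))
    return bool(res.success and res.fun < 1 - 1e-9)

# Rows: sigma_S1, sigma_S2, sigma_S3 (payoff t+1 = 3 on covered elements),
# last row: sigma* (payoff 1 everywhere); columns: sigma_a, sigma_b, sigma_c.
U = [[3, 3, 0],
     [0, 0, 3],
     [0, 3, 3],
     [1, 1, 1]]
print(strictly_dominated(U, star=3))  # prints True: the cover {S1, S2} works
```

Putting probability 1/2 on each of σS1 and σS2 already gives expected utility 3/2 > 1 in every column, so the LP finds an objective value below 1, matching the equivalence proved in this section.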
Finally, we let k = t.\nWe now proceed to show that the two instances are equivalent.\nFirst suppose there exists a solution to the SET-COVER instance.\nWithout loss of generality, we can assume that there are exactly k subsets in the cover.\nThen, for every Si that is in the cover, let the dominating strategy σ place probability exactly 1/k on the corresponding pure strategy σSi.\nNow, if we let n(s) be the number of subsets in the cover containing s (we observe that n(s) ≥ 1), then for every strategy σs for the column player, the row player's expected utility for playing σ when the column player is playing σs is n(s)(t + 1)/k ≥ (t + 1)/t > 1 = ur(σ*, σs).\nIt follows that σ strictly (and thus also weakly) dominates σ*, and there exists a solution to the MINIMUM-DOMINATING-SET instance.\nNow suppose there exists a solution to the MINIMUM-DOMINATING-SET instance.\nConsider the (at most k) pure strategies of the form σSi on which the dominating mixed strategy σ places positive probability, and let T be the collection of the corresponding subsets Si.\nWe claim that T is a cover.\nFor suppose there is some s ∈ S that is not in any of the subsets in T.\nThen, if the column player plays σs, the row player (when playing σ) will always receive utility 0--as opposed to the utility of 1 the row player would receive for playing σ*, contradicting the fact that σ dominates σ* (whether this dominance is weak or strict).\nIt follows that there exists a solution to the SET-COVER instance.\nOn the other hand, if we require that the dominating strategy only places positive probability on a very small number of pure strategies, then it once again becomes easy to check whether a strategy is dominated.\nSpecifically, to find out whether player i's strategy σ* is dominated by a strategy that places positive probability on only k pure strategies, we can simply check, for every subset of k of player i's pure strategies, whether there is a strategy that places positive
probability only on these k strategies and dominates σ*, using Proposition 1.\nThis requires only O(|Σi|^k) such checks.\nThus, if k is a constant, this constitutes a polynomial-time algorithm.\nA natural question to ask next is whether iterated strict dominance remains computationally easy when dominating strategies are required to place positive probability on at most k pure strategies, where k is a small constant.\n(We have already shown in Section 4 that iterated weak dominance is hard even when k = 1, that is, only dominance by pure strategies is allowed.)\nOf course, if iterated strict dominance were path-independent under this restriction, computational easiness would follow as it did in Section 4.\nHowever, it turns out that this is not the case.\nOBSERVATION 1.\nIf we restrict the dominating strategies to place positive probability on at most two pure strategies, iterated strict dominance becomes path-dependent.\nPROOF.\nConsider the following game (in each entry, the row player's utility is listed first):\n7, 1 0, 0 0, 0\n0, 0 7, 1 0, 0\n3, 0 3, 0 0, 0\n0, 0 0, 0 3, 1\n1, 0 1, 0 1, 0\nLet (i, j) denote the outcome in which the row player plays the ith row and the column player plays the jth column.\nBecause (1, 1), (2, 2), and (4, 3) are all Nash equilibria, none of the column player's pure strategies will ever be eliminated, and neither will rows 1, 2, and 4.\nWe now observe that randomizing uniformly over rows 1 and 2 dominates row 3, and randomizing uniformly over rows 3 and 4 dominates row 5.\nHowever, if we eliminate row 3 first, it becomes impossible to dominate row 5 without randomizing over at least 3 pure strategies.\nIndeed, iterated strict dominance turns out to be hard even when k = 3.\nTHEOREM 4.\nEven when the dominating strategies are restricted to place positive probability on at most three pure strategies, it is NP-complete to determine whether a given strategy can be eliminated by iterated strict dominance.\nPROOF.\nThe problem is in NP because given a sequence of strategies to be eliminated, we can check in polynomial time whether this is a valid sequence of eliminations (for any strategy to be eliminated, we can check, for every subset of three other strategies, whether there is a strategy placing positive
probability on only these three strategies that dominates the strategy to be eliminated, using Proposition 1).\nTo show that the problem is NP-hard, we reduce an arbitrary satisfiability instance (given by a nonempty set of clauses C over a nonempty set of variables V, with corresponding literals L = {+ v: v \u2208 V} \u222a {\u2212 v: v \u2208 V}) to the following two-player game.\nFor every variable v \u2208 V, the row player has strategies s + v, s_v, s1v, s2v, s3v, s4v, and the column player has strategies t1v, t2v, t3v, t4v.\nFor every clause c \u2208 C, the row player has a strategy sc, and the column player has a strategy tc, as well as, for every literal l occurring in c, an additional strategy tlc.\nThe row player has two additional strategies s1 and s2.\n(s2 is the strategy that we are seeking to eliminate.)\nFinally, the column player has one additional strategy t1.\nThe utility function for the row player is given as follows (where ~ is some sufficiently small number):\n\u2022 ur (s + v, tjv) = 4 if j \u2208 {1, 2}, for all v \u2208 V; \u2022 ur (s + v, tjv) = 1 if j \u2208 {3, 4}, for all v \u2208 V; \u2022 ur (s_v, tjv) = 1 if j \u2208 {1, 2}, for all v \u2208 V; \u2022 ur (s_v, tjv) = 4 if j \u2208 {3, 4}, for all v \u2208 V; \u2022 ur (s + v, t) = ur (s_v, t) = 0 for all v \u2208 V and t \u2208 \/ {t1 v, t2 v, t3 v, t4 v}; \u2022 ur (siv, tiv) = 13 for all v \u2208 V and i \u2208 {1, 2, 3, 4}; \u2022 ur (siv, t) = ~ for all v \u2208 V, i \u2208 {1, 2, 3, 4}, and t = 6 tiv; \u2022 ur (sc, tc) = 2 for all c \u2208 C; \u2022 ur (sc, t) = 0 for all c \u2208 C and t = 6 tc; \u2022 ur (s1, t1) = 1 + ~; \u2022 ur (s1, t) = ~ for all t = 6 t1; \u2022 ur (s2, t1) = 1; \u2022 ur (s2, tc) = 1 for all c \u2208 C; \u2022 ur (s2, t) = 0 for all t \u2208 \/ {t1} \u222a {tc: c \u2208 C}.\nThe utility function for the column player is given as follows:\n\u2022 uc (siv, tiv) = 1 for all v \u2208 V and i \u2208 {1, 2, 3, 4}; \u2022 uc (s, tiv) = 0 for all v \u2208 V, 
i ∈ {1, 2, 3, 4}, and s ≠ siv;
• uc(sc, tc) = 1 for all c ∈ C;
• uc(sl, tc) = 1 for all c ∈ C and l ∈ L occurring in c;
• uc(s, tc) = 0 for all c ∈ C and s ∉ {sc} ∪ {sl : l ∈ c};
• uc(sc, tlc) = 1 + ε for all c ∈ C;
• uc(sl′, tlc) = 1 + ε for all c ∈ C and l′ ≠ l occurring in c;
• uc(s, tlc) = ε for all c ∈ C and s ∉ {sc} ∪ {sl′ : l′ ∈ c, l′ ≠ l};
• uc(s2, t1) = 1;
• uc(s, t1) = 0 for all s ≠ s2.
We now show that the two instances are equivalent. First, suppose that there is a solution to the satisfiability instance. Then, consider the following sequence of eliminations in our game:
1. For every variable v that is set to true in the satisfying assignment, eliminate s+v with the mixed strategy σr that places probability 1/3 on s−v, probability 1/3 on s1v, and probability 1/3 on s2v. (The expected utility of playing σr against t1v or t2v is 14/3 > 4; against t3v or t4v, it is 4/3 > 1; and against anything else it is 2ε/3 > 0. Hence the dominance is valid.)
2. Similarly, for every variable v that is set to false in the satisfying assignment, eliminate s−v with the mixed strategy σr that places probability 1/3 on s+v, probability 1/3 on s3v, and probability 1/3 on s4v. (The expected utility of playing σr against t1v or t2v is 4/3 > 1; against t3v or t4v, it is 14/3 > 4; and against anything else it is 2ε/3 > 0. Hence the dominance is valid.)
3. For every c ∈ C, eliminate tc with any tlc for which l was set to true in the satisfying assignment. (This is a valid dominance because tlc performs better than tc against any strategy other than sl, and we eliminated sl in step 1 or in step 2.)
4. Finally, eliminate s2 with s1. (This is a valid dominance because s1 performs better than s2 against any strategy other than those in {tc : c ∈ C}, which we eliminated in step 3.)
Hence,
there is an elimination path that eliminates s2. Now, suppose that there is an elimination path that eliminates s2. The strategy that eventually dominates s2 must place most of its probability on s1, because s1 is the only other strategy that performs well against t1, which cannot be eliminated before s2. But s1 performs significantly worse than s2 against any strategy tc with c ∈ C, so it follows that all these strategies must be eliminated first. Each strategy tc can only be eliminated by a strategy that places most of its weight on the corresponding strategies tlc with l ∈ c, because they are the only other strategies that perform well against sc, which cannot be eliminated before tc. But each strategy tlc performs significantly worse than tc against sl, so it follows that for every clause c, for one of the literals l occurring in it, sl must be eliminated first. Now, strategies of the form tjv will never be eliminated, because they are the unique best responses to the corresponding strategies sjv (which are, in turn, the best responses to the corresponding tjv). As a result, if strategy s+v (respectively, s−v) is eliminated, then its opposite strategy s−v (respectively, s+v) can no longer be eliminated, for the following reason. There is no other pure strategy remaining that gets a significant utility against more than one of the strategies t1v, t2v, t3v, t4v, but s−v (respectively, s+v) gets significant utility against all four, and therefore cannot be dominated by a mixed strategy placing positive probability on at most three strategies. It follows that for each v ∈ V, at most one of the strategies s+v, s−v is eliminated, in such a way that for every clause c, for one of the literals l occurring in it, sl must be eliminated. But then setting all the literals l such that sl is eliminated to true constitutes a solution to the satisfiability instance. In the next section, we return to the setting where there is no restriction on the
number of pure strategies on which a dominating mixed strategy can place positive probability.
6. (ITERATED) DOMINANCE IN BAYESIAN GAMES
So far, we have focused on normal form games that are flatly represented (that is, every matrix entry is given explicitly). However, for many games, the flat representation is too large to write down explicitly, and instead, some representation that exploits the structure of the game needs to be used. Bayesian games, besides being of interest in their own right, can be thought of as a useful structured representation of normal form games, and we will study them in this section. In a Bayesian game, each player first receives privately held preference information (the player's type) from a distribution, which determines the utility that that player receives for every outcome of (that is, vector of actions played in) the game. After receiving this type, the player plays an action based on it.7 The utility function ui(θi; a1, ..., am) gives player i's utility when i's type is θi and each player j plays action aj. A pure strategy in a Bayesian game is a mapping from types to actions, σi : Θi → Ai, where σi(θi) denotes the action that player i plays for type θi. Any vector of pure strategies in a Bayesian game defines an (expected) utility for each player, and therefore we can translate a Bayesian game into a normal form game. In this normal form game, the notions of dominance and iterated dominance are defined as before. However, the normal form representation of the game is exponentially larger than the Bayesian representation, because each player i has |Ai|^|Θi| distinct pure strategies. Thus, any algorithm for Bayesian games that relies on expanding the game to its normal form will require exponential time. Specifically, our easiness results for normal form games do not directly transfer to this setting. In fact, it turns out that checking whether a strategy is dominated by a pure strategy is hard in Bayesian games. THEOREM
5. In a Bayesian game, it is NP-complete to decide whether a given pure strategy σr : Θr → Ar is dominated by some other pure strategy (both for strict and weak dominance), even when the row player's distribution over types is uniform.
PROOF. The problem is in NP because it is easy to verify whether a candidate dominating strategy is indeed a dominating strategy. To show that the problem is NP-hard, we reduce an arbitrary satisfiability instance (given by a set of clauses C using variables from V) to the following Bayesian game. Let the row player's action set be Ar = {t, f, 0} and let the column player's action set be Ac = {ac : c ∈ C}. Let the row player's type set be Θr = {θv : v ∈ V}, with a distribution πr that is uniform. Let the row player's utility function be as follows:
• ur(θv, 0, ac) = 0 for all v ∈ V and c ∈ C;
• ur(θv, b, ac) = |V| for all v ∈ V, c ∈ C, and b ∈ {t, f} such that setting v to b satisfies c;
• ur(θv, b, ac) = −1 for all v ∈ V, c ∈ C, and b ∈ {t, f} such that setting v to b does not satisfy c.
Let the pure strategy to be dominated be the one that plays 0 for every type. We show that this strategy is dominated by a pure strategy if and only if there is a solution to the satisfiability instance. First, suppose there is a solution to the satisfiability instance. Then, let σdr be given by: σdr(θv) = t if v is set to true in the solution to the satisfiability instance, and σdr(θv) = f otherwise. Then, against any action ac by the column player, there is at least one type θv such that either +v ∈ c and σdr(θv) = t, or −v ∈ c and σdr(θv) = f.
Thus, the row player's expected utility against action ac is at least (1/|V|)·|V| + ((|V|−1)/|V|)·(−1) = 1/|V| > 0, whereas the strategy that always plays 0 receives utility 0; hence σdr dominates it. Conversely, suppose some pure strategy σdr dominates the all-0 strategy; σdr must play t or f for at least one type. Consider the assignment that sets each v to σdr(θv) whenever σdr(θv) ∈ {t, f} (and arbitrarily otherwise). If some clause c were satisfied by none of these settings, then against ac every type playing t or f would receive −1 and every type playing 0 would receive 0, so σdr's expected utility against ac would be negative, contradicting dominance. Hence every clause is satisfied, and the assignment is a solution to the satisfiability instance. However, it turns out that we can modify the linear programs from Proposition 1 to obtain a polynomial-time algorithm for checking whether a strategy is dominated by a mixed strategy in Bayesian games.
THEOREM 6. In a Bayesian game, it can be decided in polynomial time whether a given (possibly mixed) strategy σr is dominated by some other mixed strategy, using linear programming (both for strict and weak dominance).
PROOF. We can modify the linear programs presented in Proposition 1 as follows. For strict dominance, again assuming without loss of generality that all the utilities in the game are positive, use the following linear program (in which pσr(θr, ar) denotes the probability that σr places on ar for type θr):
minimize Σ_{θr∈Θr} Σ_{ar∈Ar} pr(θr, ar)
such that for any ac ∈ Ac, Σ_{θr∈Θr} Σ_{ar∈Ar} π(θr) ur(θr, ar, ac) pr(θr, ar) ≥ Σ_{θr∈Θr} Σ_{ar∈Ar} π(θr) ur(θr, ar, ac) pσr(θr, ar);
for any θr ∈ Θr, Σ_{ar∈Ar} pr(θr, ar) ≤ 1.
Assuming that π(θr) > 0 for all θr ∈ Θr, this program will return an objective value smaller than |Θr| if and only if σr is strictly dominated, by reasoning similar to that done in Proposition 1. For weak dominance, use the following linear program:
maximize Σ_{ac∈Ac} (Σ_{θr∈Θr} Σ_{ar∈Ar} π(θr) ur(θr, ar, ac) pr(θr, ar) − Σ_{θr∈Θr} Σ_{ar∈Ar} π(θr) ur(θr, ar, ac) pσr(θr, ar))
such that for any ac ∈ Ac, Σ_{θr∈Θr} Σ_{ar∈Ar} π(θr) ur(θr, ar, ac) pr(θr, ar) ≥ Σ_{θr∈Θr} Σ_{ar∈Ar} π(θr) ur(θr, ar, ac) pσr(θr, ar);
for any θr ∈ Θr, Σ_{ar∈Ar} pr(θr, ar) = 1.
This program will return an objective value greater than 0 if and only if σr is weakly dominated, by reasoning similar to that done in Proposition 1. We now turn to iterated dominance in Bayesian games. Naïvely, one might argue that iterated dominance in Bayesian games always requires an exponential number of steps when a significant fraction of the game's pure strategies can be eliminated, because there are exponentially many pure strategies. However, this is not a very strong argument, because oftentimes we can eliminate exponentially many pure strategies in one step. For example, if for some type θr ∈ Θr we have, for
all ac ∈ Ac, that u(θr, a1r, ac) > u(θr, a2r, ac), then any pure strategy for the row player which plays action a2r for type θr is dominated (by the strategy that plays action a1r for type θr instead), and there are exponentially many (|Ar|^(|Θr|−1)) such strategies. It is therefore conceivable that we need only polynomially many eliminations of collections of a player's strategies. However, the following theorem shows that this is not the case, by giving an example where an exponential number of iterations (that is, alternations between the players in eliminating strategies) is required. (We emphasize that this is not a result about computational complexity.)
THEOREM 7. Even in symmetric 3-player Bayesian games, iterated dominance by pure strategies can require an exponential number of iterations (both for strict and weak dominance), even with only three actions per player.
PROOF. Let each player i ∈ {1, 2, 3} have n + 1 types θ1i, θ2i, ..., θ(n+1)i. Let each player i have 3 actions ai, bi, ci, and let the utility function of each player be defined as follows. (In the below, i + 1 and i + 2 are shorthand for i + 1 (mod 3) and i + 2 (mod 3) when used as player indices. Also, −∞ can be replaced by a sufficiently negative number. Finally, δ and ε should be chosen to be very small (even compared to 2^−(n+1)), and ε should be more than twice as large as δ.)
• ui(θ1i; ai, ci+1, ci+2) = −1;
• ui(θ1i; ai, si+1, si+2) = 0 for si+1 ≠ ci+1 or si+2 ≠ ci+2;
• ui(θ1i; bi, si+1, si+2) = −ε for si+1 ≠ ai+1 and si+2 ≠ ai+2;
• ui(θ1i; bi, si+1, si+2) = −∞ for si+1 = ai+1 or si+2 = ai+2;
• ui(θ1i; ci, si+1, si+2) = −∞ for all si+1, si+2;
• ui(θji; ai, si+1, si+2) = −∞ for all si+1, si+2 when j > 1;
• ui(θji; bi, si+1, si+2) =
−ε for all si+1, si+2 when j > 1;
• ui(θji; ci, si+1, ci+2) = δ − ε − 1/2 for all si+1 when j > 1;
• ui(θji; ci, si+1, si+2) = δ − ε for all si+1 and si+2 ≠ ci+2 when j > 1.
Let the distribution over each player's types be given by p(θji) = 2^−j (with the exception that p(θ2i) = 2^−2 + 2^−(n+1)). We will be interested in eliminating strategies of the following form: play bi for type θ1i, and play one of bi or ci otherwise. Because the utility function is the same for any type θji with j > 1, these strategies are effectively defined by the total probability that they place on ci, which is t2i(2^−2 + 2^−(n+1)) + Σ_{j=3}^{n+1} tji 2^−j, where tji = 1 if player i plays ci for type θji, and 0 otherwise. (Note that the strategies are still pure strategies; the probability placed on an action by a strategy here is simply the sum of the probabilities of the types for which the strategy chooses that action.) This probability is different for any two different strategies of the given form, and we have exponentially many different strategies of the given form. For any probability q that can be expressed in this form, let σi(q) denote the (unique) strategy of the given form for player i which places a total probability of q on ci. Any strategy that plays ci for type θ1i, or ai for some type θji with j > 1, can immediately be eliminated. We will show that, after that, we must eliminate the strategies σi(q) with high q first, slowly working down to those with lower q.
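To make the counting argument concrete, the following small script (our illustration, not part of the original proof) enumerates the strategies of the given form for one player at a small n, and checks that every choice of the bits tji yields a distinct total probability on ci, so there are exponentially many such strategies:

```python
from itertools import product
from fractions import Fraction

n = 5  # types 2..n+1 are the "free" types; small n for illustration

# Type distribution p(theta_j) = 2^-j for j = 1..n+1,
# except p(theta_2) = 2^-2 + 2^-(n+1), as in the construction.
p = {j: Fraction(1, 2**j) for j in range(1, n + 2)}
p[2] += Fraction(1, 2**(n + 1))
assert sum(p.values()) == 1  # the weights form a probability distribution

# A strategy of the given form plays b for type 1 and b or c for each
# type j > 1; it is identified by the bit vector (t_2, ..., t_{n+1}),
# where t_j = 1 iff it plays c for type j.
qs = set()
for bits in product([0, 1], repeat=n):
    q = sum(t * p[j] for t, j in zip(bits, range(2, n + 2)))
    qs.add(q)

# Every bit vector gives a distinct total probability on c,
# so there are 2^n strategies of the given form.
assert len(qs) == 2**n
```

Exact `Fraction` arithmetic avoids floating-point collisions; distinctness holds because the weights behave like binary digits (the extra 2^−(n+1) on type 2 only separates the strategies that play c for type 2 from those that do not).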
Claim 1: If σi+1(q′) and σi+2(q′) have not yet been eliminated, and q < q′, then σi(q) cannot yet be eliminated.
Proof: First, we show that no strategy σi(q′′) can eliminate σi(q). Against σi+1(q′′′), σi+2(q′′′), the utility of playing σi(p) is −ε + p·δ − p·q′′′/2. Thus, when q′′′ = 0, it is best to set p as high as possible (and we note that σi+1(0) and σi+2(0) have not been eliminated), but when q′′′ > 0, it is best to set p as low as possible because δ < q′′′/2. Thus, whether q > q′′ or q < q′′, σi(q) will always do strictly better than σi(q′′) against some remaining opponent strategies. Hence, no strategy σi(q′′) can eliminate σi(q). The only other pure strategies that could dominate σi(q) are strategies that play ai for type θ1i, and bi or ci for all other types. Let us take such a strategy and suppose that it plays ci with probability p. Against σi+1(q′), σi+2(q′) (which have not yet been eliminated), the utility of playing this strategy is −(q′)²/2 − ε/2 + p·δ − p·q′/2. On the other hand, playing σi(q) gives −ε + q·δ − q·q′/2. Because q′ > q, we have −(q′)²/2 < −q·q′/2, and because δ and ε are small, it follows that σi(q) receives a higher utility. Therefore, no strategy dominates σi(q), proving the claim.
Claim 2: If for all q′ > q, σi+1(q′) and σi+2(q′) have been eliminated, then σi(q) can be eliminated.
Proof: Consider the strategy for player i that plays ai for type θ1i, and bi for all other types (call this strategy σ0i); we claim σ0i dominates σi(q). First, if either of the other players k plays ak for θ1k, then σ0i performs better than σi(q) (which receives −∞ in some cases). Because the strategies for player k that
play ck for type θ1k, or ak for some type θjk with j > 1, have already been eliminated, all that remains to check is that σ0i performs better than σi(q) whenever both of the other two players play strategies of the following form: play bk for type θ1k, and play one of bk or ck otherwise. We note that among these strategies, there are none left that place probability greater than q on ck. Letting qk denote the probability with which player k plays ck, the expected utility of playing σ0i is −qi+1·qi+2/2 − ε/2. On the other hand, the utility of playing σi(q) is −ε + q·δ − q·qi+2/2. Because qi+1 ≤ q, the difference between these two expressions is at least ε/2 − δ, which is positive. It follows that σ0i dominates σi(q). From Claim 2, it follows that all strategies of the form σi(q) will eventually be eliminated. However, Claim 1 shows that we cannot go ahead and eliminate multiple such strategies for one player, unless at least one other player simultaneously "keeps up" in the eliminated strategies: every time a σi(q) is eliminated such that σi+1(q) and σi+2(q) have not yet been eliminated, we need to eliminate one of the latter two strategies before any σi(q′) with q′ > q can be eliminated; that is, we need to alternate between players. Because there are exponentially many strategies of the form σi(q), it follows that iterated elimination will require exponentially many iterations to complete. It follows that an efficient algorithm for iterated dominance (strict or weak) by pure strategies in Bayesian games, if it exists, must somehow be able to perform (at least part of) many iterations in a single step of the algorithm (because if each step only performed a single iteration, we would need exponentially many steps). Interestingly, Knuth et al.
[11] argue that iterated dominance appears to be an inherently sequential problem (in light of their result that iterated very weak dominance is P-complete, that is, apparently not efficiently parallelizable), suggesting that aggregating many iterations may be difficult.
7. CONCLUSIONS
While the Nash equilibrium solution concept is studied more and more intensely in our community, the perhaps more elementary concept of (iterated) dominance has received much less attention. In this paper we studied various computational aspects of this concept. We first studied both strict and weak dominance (not iterated), and showed that checking whether a given strategy is dominated by some mixed strategy can be done in polynomial time using a single linear program solve. We then moved on to iterated dominance. We showed that determining whether there is some path that eliminates a given strategy is NP-complete with iterated weak dominance. This allowed us to also show that determining whether there is a path that leads to a unique solution is NP-complete. Both of these results hold both with and without dominance by mixed strategies. (A weaker version of the second result (only without dominance by mixed strategies) was already known [7].) Iterated strict dominance, on the other hand, is path-independent (both with and without dominance by mixed strategies) and can therefore be done in polynomial time. We then studied what happens when the dominating strategy is allowed to place positive probability on only a few pure strategies. First, we showed that finding the dominating strategy with minimum support size is NP-complete (both for strict and weak dominance). Then, we showed that iterated strict dominance becomes path-dependent when there is a limit on the support size of the dominating strategies, and that deciding whether a given strategy can be eliminated by iterated strict dominance under this restriction is NP-complete (even when the limit on the support size
is 3). Finally, we studied dominance and iterated dominance in Bayesian games, as an example of a concise representation language for normal form games that is interesting in its own right. We showed that, unlike in normal form games, deciding whether a given pure strategy is dominated by another pure strategy in a Bayesian game is NP-complete (both with strict and weak dominance); however, deciding whether a strategy is dominated by some mixed strategy can still be done in polynomial time with a single linear program solve (both with strict and weak dominance). Finally, we showed that iterated dominance using pure strategies can require an exponential number of iterations in a Bayesian game (both with strict and weak dominance). There are various avenues for future research. First, there is the open question of whether it is possible to complete iterated dominance in Bayesian games in polynomial time (even though we showed that an exponential number of alternations between the players in eliminating strategies is sometimes required). Second, we can study computational aspects of (iterated) dominance in concise representations of normal form games other than Bayesian games, for example, in graphical games [9] or local-effect/action graph games [12, 2]. (How to efficiently perform iterated very weak dominance has already been studied for partially observable stochastic games [8].) Finally, we can ask whether some of the algorithms we described (such as the one for iterated strict dominance with mixed strategies) can be made faster.

Empirical Mechanism Design: Methods, with Application to a Supply-Chain Scenario
Yevgeniy Vorobeychik, Christopher Kiekintveld, and Michael P. Wellman
University of Michigan, Computer Science & Engineering, Ann Arbor, MI 48109-2121 USA
{yvorobey, ckiekint, wellman}@umich.edu
ABSTRACT Our proposed methods employ learning and search techniques to estimate outcome features of interest as a function of mechanism parameter settings. We illustrate our approach with a design task from a supply-chain trading competition. Designers adopted several rule changes in order to deter particular procurement behavior, but the measures proved insufficient. Our empirical mechanism analysis models the relation between a key design parameter and outcomes, confirming the observed behavior and indicating that no reasonable parameter settings would have been likely to achieve the desired effect. More generally, we show that under certain conditions, the estimator of the optimal mechanism parameter setting based on empirical data is consistent.
Categories and Subject Descriptors I.6 [Computing Methodologies]: Simulation and Modeling; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics
General Terms Algorithms, Economics, Design
1. MOTIVATION We illustrate our problem with an anecdote from a
supply-chain research exercise: the 2003 and 2004 Trading Agent Competition (TAC) Supply Chain Management (SCM) game. TAC/SCM [1] defines a scenario where agents compete to maximize their profits as manufacturers in a supply chain. The agents procure components from the various suppliers and assemble finished goods for sale to customers, repeatedly over a simulated year. As it happened, the specified negotiation behavior of suppliers provided a great incentive for agents to procure large quantities of components on day 0: the very beginning of the simulation. During the early rounds of the 2003 SCM competition, several agent developers discovered this, and the apparent success led to most agents performing the majority of their purchasing on day 0. Although jockeying for day-0 procurement turned out to be an interesting strategic issue in itself [19], the phenomenon detracted from other interesting problems, such as adapting production levels to varying demand (since component costs were already sunk), and dynamic management of production, sales, and inventory. Several participants noted that the predominance of day-0 procurement overshadowed other key research issues, such as factory scheduling [2] and optimizing bids for customer orders [13]. After the 2003 tournament, there was a general consensus in the TAC community that the rules should be changed to deter large day-0 procurement. The task facing game organizers can be viewed as a problem in mechanism design. The designers have certain game features under their control, and a set of objectives regarding game outcomes. Unlike most academic treatments of mechanism design, the objective is a behavioral feature (moderate day-0 procurement) rather than an allocation feature like economic efficiency, and the allowed mechanisms are restricted to those judged to require only an incremental modification of the current game. Replacing the supply-chain negotiation procedures with a one-shot direct mechanism,
for example, was not an option. We believe that such operational restrictions and idiosyncratic objectives are actually quite typical of practical mechanism design settings, where they are perhaps more commonly characterized as incentive engineering problems. In response to the problem, the TAC/SCM designers adopted several rule changes intended to penalize large day-0 orders. These included modifications to supplier pricing policies and introduction of storage costs assessed on inventories of components and finished goods. Despite the changes, day-0 procurement was very high in the early rounds of the 2004 competition. In a drastic measure, the GameMaster imposed a fivefold increase of storage costs midway through the tournament. Even this did not stem the tide, and day-0 procurement in the final rounds actually increased (by some measures) from 2003 [9]. The apparent difficulty in identifying rule modifications that effect moderation in day-0 procurement is quite striking. Although the designs were widely discussed, predictions for the effects of various proposals were supported primarily by intuitive arguments or at best by back-of-the-envelope calculations. Much of the difficulty, of course, is anticipating the agents' (and their developers') responses without essentially running a gaming exercise for this purpose. The episode caused us to consider whether new approaches or tools could enable more systematic analysis of design options. Standard game-theoretic and mechanism design methods are clearly relevant, although the lack of an analytic description of the game seems to be an impediment. Under the assumption that the simulator itself is the only reliable source of outcome computation, we refer to our task as empirical mechanism design. In the sequel, we develop some general methods for empirical mechanism design and apply them to the TAC/SCM redesign problem. Our analysis focuses on the setting of storage costs (taking other game
modifications as fixed), since this is the most direct deterrent to early procurement adopted. Our results confirm the basic intuition that incentives for day-0 purchasing decrease as storage costs rise. We also confirm that the high day-0 procurement observed in the 2004 tournament is a rational response to the setting of storage costs used. Finally, we conclude from our data that it is very unlikely that any reasonable setting of storage costs would result in acceptable levels of day-0 procurement, so a different design approach would have been required to eliminate this problem. Overall, we contribute a formal framework and a set of methods for tackling indirect mechanism design problems in settings where only a black-box description of players' utilities is available. Our methods incorporate estimation of sets of Nash equilibria and sample Nash equilibria, used in conjunction to support general claims about the structure of the mechanism designer's utility, as well as a restricted probabilistic analysis to assess the likelihood of conclusions. We believe that most realistic problems are too complex to be amenable to exact analysis. Consequently, we advocate the approach of gathering evidence to provide indirect support of specific hypotheses.
2. PRELIMINARIES
A normal form game is denoted by [I, {Ri}, {ui(r)}], where I refers to the set of players and m = |I| is the number of players. Ri is the set of strategies available to player i ∈ I, with R = R1 × ... × Rm representing the set of joint strategies of all players. We designate the set of pure strategies available to player i by Ai, and denote the joint set of pure strategies of all players by A = A1 × ... × Am. It is often convenient to refer to a strategy of player i separately from that of the remaining players. To accommodate this, we use a−i to denote the joint strategy of all players other than player i.
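To illustrate this notation concretely, here is a minimal sketch (the game and payoff numbers are invented for illustration, not taken from the paper) representing a two-player normal form game, with each payoff function ui stored as an array indexed by joint pure strategies:

```python
import numpy as np
from itertools import product

# A two-player normal form game [I, {Ai}, {ui}] with
# A1 = {0, 1} and A2 = {0, 1, 2}; payoff values are illustrative only.
# U[i][a1, a2] stores ui(a1, a2), player i's payoff at the
# joint pure strategy a = (a1, a2).
U = [
    np.array([[3, 0, 1],
              [1, 2, 0]]),   # u1
    np.array([[2, 1, 0],
              [0, 3, 1]]),   # u2
]

A = [range(2), range(3)]     # pure strategy sets A1, A2

# Enumerate the joint pure strategy set A = A1 x A2.
joint = list(product(*A))
assert len(joint) == 6

def u(i, a):
    """Payoff ui(ai, a_{-i}) at the joint pure strategy a = (a1, a2)."""
    return U[i][a]

assert u(0, (0, 0)) == 3 and u(1, (1, 1)) == 3
```

Mixed strategy profiles, introduced next, simply weight these array entries by the product of the players' mixing probabilities.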
Let Si be the set of all probability distributions (mixtures) over Ai and, similarly, S be the set of all distributions over A. An s ∈ S is called a mixed strategy profile. When the game is finite (i.e., A and I are both finite), the probability that a ∈ A is played under s is written s(a) = s(ai, a−i). When the distribution s is not correlated, we can simply say si(ai) when referring to the probability that player i plays ai under s. Next, we define the payoff (utility) function of each player i by ui : A1 × ... × Am → R, where ui(ai, a−i) indicates the payoff to player i for playing pure strategy ai when the remaining players play a−i. We can extend this definition to mixed strategies by assuming that the ui are von Neumann-Morgenstern (vNM) utilities as follows: ui(s) = Es[ui], where Es is the expectation taken with respect to the probability distribution of play induced by the players' mixed strategy s. (By employing the normal form, we model agents as playing a single action, with decisions taken simultaneously. This is appropriate for our current study, which treats strategies (agent programs) as atomic actions. We could capture finer-grained decisions about action over time in the extensive form. Although any extensive game can be recast in normal form, doing so may sacrifice compactness and blur relevant distinctions, e.g., subgame perfection.) Occasionally, we write ui(x, y) to mean that x ∈ Ai or Si and y ∈ A−i or S−i, depending on context. We also express the set of utility functions of all players as u(·) = {u1(·), ...
, um(·)}. We define a function ε : R → R, interpreted as the maximum benefit any player can obtain by deviating from its strategy in the specified profile:
ε(r) = max_{i∈I} max_{ai∈Ai} [ui(ai, r−i) − ui(r)], (1)
where r belongs to some strategy set, R, of either pure or mixed strategies. Faced with a game, an agent would ideally play its best strategy given those played by the other agents. A configuration where all agents play strategies that are best responses to the others constitutes a Nash equilibrium.
DEFINITION 1. A strategy profile r = (r1, ..., rm) constitutes a Nash equilibrium of game [I, {Ri}, {ui(r)}] if for every i ∈ I and every r′i ∈ Ri, ui(ri, r−i) ≥ ui(r′i, r−i).
When r ∈ A, the above defines a pure strategy Nash equilibrium; otherwise the definition describes a mixed strategy Nash equilibrium. We often appeal to the concept of an approximate, or ε-Nash, equilibrium, where ε is the maximum benefit to any agent for deviating from the prescribed strategy. Thus, with ε(r) as defined in (1), profile r is an ε-Nash equilibrium iff ε(r) ≤ ε. In this study we devote particular attention to games that exhibit symmetry with respect to payoffs, rendering agents strategically identical.
DEFINITION 2. A game [I, {Ri}, {ui(r)}] is symmetric if for all i, j ∈ I, (a) Ri = Rj and (b) ui(ri, r−i) = uj(rj, r−j) whenever ri = rj and r−i = r−j.
3. THE MODEL
We model the strategic interactions between the designer of the mechanism and its participants as a two-stage game. The designer moves first by selecting a value, θ, from a set of allowable mechanism settings, Θ. All the participant agents observe the mechanism parameter θ and move simultaneously thereafter. For example, the designer could be deciding between first-price and second-price sealed-bid auction mechanisms, with the presumption that after the choice has been made, the bidders will participate with
full awareness of the auction rules.\nSince the participants play with full knowledge of the mechanism parameter, we define a game between them in the second stage as \u0393\u03b8 = [I, {Ri}, {ui(r, \u03b8)}].\nWe refer to \u0393\u03b8 as a game induced by \u03b8.\nLet N(\u03b8) be the set of strategy profiles considered solutions of the game \u0393\u03b8.3 Suppose that the goal of the designer is to optimize the value of some welfare function, W (r, \u03b8), dependent on the mechanism parameter and resulting play, r.\nWe define a pessimistic measure, W ( \u02c6R, \u03b8) = inf{W (r, \u03b8) : r \u2208 \u02c6R}, representing the worst-case welfare of the game induced by \u03b8, assuming that agents play some joint strategy in \u02c6R.\nTypically we care about W (N(\u03b8), \u03b8), the worst-case outcome of playing some solution.4 On some problems we can gain considerable advantage by using an aggregation function to map the welfare outcome of a game 3 We generally adopt Nash equilibrium as the solution concept, and thus take N(\u03b8) to be the set of equilibria.\nHowever, much of the methodology developed here could be employed with alternative criteria for deriving agent behavior from a game definition.\n4 Again, alternatives are available.\nFor example, if one has a probability distribution over the solution set N(\u03b8), it would be natural to take the expectation of W (r, \u03b8) instead.\n307 specified in terms of agent strategies to an equivalent welfare outcome specified in terms of a lower-dimensional summary.\nDEFINITION 3.\nA function \u03c6 : R1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 Rm \u2192 Rq is an aggregation function if m \u2265 q and W (r, \u03b8) = V (\u03c6(r), \u03b8) for some function V .\nWe overload the function symbol to apply to sets of strategy profiles: \u03c6( \u02c6R) = {\u03c6(r) : r \u2208 \u02c6R}.\nFor convenience of exposition, we write \u03c6\u2217 (\u03b8) to mean \u03c6(N(\u03b8)).\nUsing an aggregation function yields a more 
compact representation of strategy profiles. For example, suppose, as in our application below, that an agent's strategy is defined by a numeric parameter. If all we care about is the total value played, we may take φ(a) = Σ_{i=1}^m a_i. If we have chosen our aggregator carefully, we may also capture structure that is not otherwise obvious. For example, φ∗(θ) could be decreasing in θ, whereas N(θ) might have a more complex structure.

Given a description of the solution correspondence N(θ) (equivalently, φ∗(θ)), the designer faces a standard optimization problem. Alternatively, given a simulator that could produce an unbiased sample from the distribution of W(N(θ), θ) for any θ, the designer would face another problem well studied in the literature: simulation optimization [12]. However, even for a game Γ_θ with known payoffs it may be computationally intractable to solve for Nash equilibria, particularly if the game has large or infinite strategy sets. Additionally, we wish to study games where the payoffs are not explicitly given, but must be determined from simulation or other experience with the game.5

Accordingly, we assume that we are given a (possibly noisy) data set of payoff realizations: D_o = {(θ^1, a^1, U^1), ..., (θ^k, a^k, U^k)}, where for every data point θ^i is the observed mechanism parameter setting, a^i is the observed pure strategy profile of the participants, and U^i is the corresponding realization of agent payoffs. We may also have additional data generated by a (possibly noisy) simulator: D_s = {(θ^{k+1}, a^{k+1}, U^{k+1}), ..., (θ^{k+l}, a^{k+l}, U^{k+l})}. Let D = {D_o, D_s} be the combined data set. (Either D_o or D_s may be null for a particular problem.)

In the remainder of this paper, we apply our modeling approach, together with several empirical game-theoretic methods, in order to answer questions regarding the design of the TAC/SCM scenario.

5 This is often the case for real games of interest, where natural language or algorithmic descriptions may substitute for a formal specification of strategy and payoff functions.

4. EMPIRICAL DESIGN ANALYSIS

Since our data comes in the form of payoff experience and not as the value of an objective function for given settings of the control variable, we can no longer rely on methods for optimizing functions using simulations. Indeed, a fundamental aspect of our design problem involves estimating the Nash equilibrium correspondence. Furthermore, we cannot rely directly on the convergence results that abound in the simulation optimization literature, and must establish probabilistic analysis methods tailored to our problem setting.

4.1 TAC/SCM Design Problem

We describe our empirical design analysis methods by presenting a detailed application to the TAC/SCM scenario introduced above. Recall that during the 2004 tournament, the designers of the supply-chain game chose to dramatically increase storage costs as a measure aimed at curbing day-0 procurement, to little avail. Here we systematically explore the relationship between storage costs and the aggregate quantity of components procured on day 0 in equilibrium. In doing so, we consider several questions raised during and after the tournament. First, does increasing storage costs actually reduce day-0 procurement? Second, was the excessive day-0 procurement that was observed during the 2004 tournament rational? And third, could increasing storage costs sufficiently have reduced day-0 procurement to an acceptable level, and if so, what should the setting of storage costs have
been? It is this third question that defines the mechanism design aspect of our analysis.6

To apply our methods, we must specify the agent strategy sets, the designer's welfare function, the mechanism parameter space, and the source of data. We restrict the agent strategies to be a multiplier on the quantity of the day-0 requests made by one of the finalists, Deep Maize, in the 2004 TAC/SCM tournament. We further restrict this multiplier to the set [0, 1.5], since any strategy below 0 is illegal and strategies above 1.5 are extremely aggressive (thus unlikely to provide refuting deviations beyond those available from included strategies, and certainly not part of any desirable equilibrium). All other behavior is based on the behavior of Deep Maize and is identical for all agents. This choice can provide only an estimate of the actual tournament behavior of a typical agent. However, we believe that the general form of the results should be robust to changes in the full agent behavior.

We model the designer's welfare function as a threshold on the sum of day-0 purchases. Let φ(a) = Σ_{i=1}^6 a_i be the aggregation function representing the sum of day-0 procurement of the six agents participating in a particular supply-chain game (for mixed strategy profiles s, we take the expectation of φ with respect to the mixture). The designer's welfare function W(N(θ), θ) is then given by I{sup{φ∗(θ)} ≤ α}, where α is the maximum acceptable level of day-0 procurement and I is the indicator function. The designer selects a value θ of storage costs, expressed as an annual percentage of the baseline value of the components in inventory (charged daily), from the set Θ = R+. Since the designer's decision depends only on φ∗(θ), we present all of our results in terms of the value of the aggregation function.

6 We do not address whether and how other measures (e.g., constraining procurement directly) could have achieved design objectives. Our approach takes as given some set of design options, in this case defined by the storage cost parameter. In principle our methods could be applied to a different or larger design space, though with corresponding growth in complexity.

4.2 Estimating Nash Equilibria

The objective of TAC/SCM agents is to maximize profits realized over a game instance. Thus, if we fix a strategy for each agent at the beginning of the simulation and record the corresponding profits at the end, we will have obtained a data point of the form (a, U(a)). If we have also fixed the parameter θ of the simulator, the resulting data point becomes part of our data set D. This data set, then, contains data only in the form of pure strategies of players and their corresponding payoffs; consequently, in order to formulate the designer's problem as optimization, we must first determine or approximate the set of Nash equilibria of each game Γ_θ. Thus, we need methods for approximating Nash equilibria of infinite games. Below, we describe the two methods we used in our study. The first has been explored empirically before, whereas the second is introduced here as a method specifically designed to approximate a set of Nash equilibria.

4.2.1 Payoff Function Approximation

The first method for estimating Nash equilibria based on data uses supervised learning to approximate the payoff functions of mechanism participants from a data set of game experience [17]. Once approximate payoff functions are available for all players, the Nash equilibria may be either found analytically or approximated using numerical techniques, depending on the learning model. In what follows, we estimate only a sample Nash equilibrium using this technique, although this restriction can be removed at the expense of additional computation time. One advantage of this method is that it can be applied to any data set and does not require the use of a simulator. Thus, we can
apply it when D_s = ∅. If a simulator is available, we can generate additional data to build confidence in our initial estimates.7

We tried the following methods for approximating payoff functions: quadratic regression (QR), locally weighted average (LWA), and locally weighted linear regression (LWLR). We also used control variates to reduce the variance of payoff estimates, as in our previous empirical game-theoretic analysis of TAC/SCM-03 [19]. The quadratic regression model makes it possible to compute equilibria of the learned game analytically. For the other methods we applied replicator dynamics [7] to a discrete approximation of the learned game. The expected total day-0 procurement in equilibrium was taken as the estimate of an outcome.

4.2.2 Search in Strategy Profile Space

When we have access to a simulator, we can also use directed search through profile space to estimate the set of Nash equilibria, which we describe here after presenting some additional notation.

DEFINITION 4. A strategic neighbor of a pure strategy profile a is a profile that is identical to a in all but one strategy. We define Snb(a, D) as the set of all strategic neighbors of a available in the data set D. Similarly, we define Snb(a, D̃) to be the set of all strategic neighbors of a not in D. Finally, for any a′ ∈ Snb(a, D) we define the deviating agent as i(a, a′).

DEFINITION 5. The ε-bound, ε̂, of a pure strategy profile a is defined as ε̂ = max_{a′∈Snb(a,D)} max{u_{i(a,a′)}(a′) − u_{i(a,a′)}(a), 0}. We say that a is a candidate δ-equilibrium for δ ≥ ε̂. When Snb(a, D̃) = ∅ (i.e., all strategic neighbors are represented in the data), a is confirmed as an ε̂-Nash equilibrium.

Our search method operates by exploring deviations from candidate equilibria. We refer to it as BestFirstSearch, as it selects with probability one a strategy profile a′ ∈ Snb(a, D̃) for the candidate a that has the smallest ε̂ in D.
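The ε-bound of Definition 5 can be computed directly from a table of observed pure-profile payoffs. The following sketch is our own illustration (the two-agent toy data are hypothetical, not taken from the paper): it scans the strategic neighbors of a profile that appear in the data and records the largest payoff gain available to the deviating agent.

```python
def epsilon_bound(profile, payoffs):
    """eps-hat of Definition 5: payoffs maps each observed pure profile
    (a tuple of strategies) to a tuple of per-agent payoffs."""
    bound = 0.0
    m = len(profile)
    for dev, dev_pay in payoffs.items():
        # strategic neighbors differ from `profile` in exactly one strategy
        diffs = [i for i in range(m) if dev[i] != profile[i]]
        if len(diffs) != 1:
            continue
        i = diffs[0]  # the deviating agent i(a, a')
        bound = max(bound, dev_pay[i] - payoffs[profile][i])
    return bound

# Toy two-agent data: strategies are day-0 multipliers, payoffs are profits.
data = {
    (0.0, 0.0): (5.0, 5.0),
    (0.3, 0.0): (6.0, 3.0),   # agent 0 gains 1.0 by deviating to 0.3
    (0.0, 0.3): (3.0, 5.5),   # agent 1 gains 0.5 by deviating to 0.3
    (0.3, 0.3): (4.0, 4.0),
}
print(epsilon_bound((0.0, 0.0), data))  # 1.0: candidate delta-equilibrium for delta >= 1.0
print(epsilon_bound((0.3, 0.3), data))  # 0.0: no profitable deviation in the data
```

Note that a profile with ε̂ = 0 on the available data is only confirmed as an ε̂-Nash equilibrium once all of its strategic neighbors are represented in D.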
Finally, we define an estimator for a set of Nash equilibria.

DEFINITION 6. For a set K, define Co(K) to be the convex hull of K. Let B_δ be the set of candidates at level δ. We define φ̂∗(θ) = Co({φ(a) : a ∈ B_δ}) for a fixed δ to be an estimator of φ∗(θ).

In words, the estimate of a set of equilibrium outcomes is the convex hull of all aggregated strategy profiles with ε-bound below some fixed δ. This definition allows us to exploit structure arising from the aggregation function. If two profiles are close in terms of aggregation values, they may be likely to have similar ε-bounds. In particular, if one is an equilibrium, the other may be as well. We present some theoretical support for this method of estimating the set of Nash equilibria below.

Since the game we are interested in is infinite, it is necessary to terminate BestFirstSearch before exploring the entire space of strategy profiles.7 We currently determine the termination time in a somewhat ad hoc manner, based on observations about the current set of candidate equilibria.8

7 For example, we can use active learning techniques [5] to improve the quality of payoff function approximation. In this work, we instead concentrate on search in strategy profile space.

8 Generally, search is terminated once the set of candidate equilibria is small enough to draw useful conclusions about the likely range of equilibrium strategies in the game.

4.3 Data Generation

Our data was collected by simulating TAC/SCM games on a local version of the 2004 TAC/SCM server, which has a configuration setting for the storage cost. Agent strategies in simulated games were selected from the set {0, 0.3, 0.6, ..., 1.5} in order to have positive probability of generating strategic neighbors.9 A baseline data set D_o was generated by sampling 10 randomly generated strategy profiles for each θ ∈ {0, 50, 100, 150, 200}. Between 5 and 10 games were run for each profile after discarding games that had various flaws.10 We used search to generate a simulated data set D_s, performing between 12 and 32 iterations of BestFirstSearch for each of the above settings of θ. Since the simulation cost is extremely high (a game takes nearly 1 hour to run), we were able to run a total of 2670 games over the span of more than six months. For comparison, obtaining the entire description of an empirical game defined by the restricted finite joint strategy space for each value of θ ∈ {0, 50, 100, 150, 200} would have required at least 23100 games (sampling each profile 10 times).

9 Of course, we do not restrict our Nash equilibrium estimates to stay in this discrete subset of [0, 1.5].

10 For example, if we detected that any agent failed during the game (failures included crashes, network connectivity problems, and other obvious anomalies), the game would be thrown out.

4.4 Results

4.4.1 Analysis of the Baseline Data Set

We applied the three learning methods described above to the baseline data set D_o. Additionally, we generated an estimate of the Nash equilibrium correspondence, φ̂∗(θ), by applying Definition 6 with δ = 2.5E6. The results are shown in Figure 1. As we can see, the correspondence φ̂∗(θ) has little predictive power based on D_o, and reveals no interesting structure about the game. In contrast, all three learning methods suggest that total day-0 procurement is a decreasing function of storage costs.

Figure 1 (plot of total day-0 procurement against storage cost for the LWA, LWLR, and QR estimates and the BaselineMin/BaselineMax bounds): Aggregate day-0 procurement estimates based on D_o. The correspondence φ̂∗(θ) is the interval between BaselineMin and BaselineMax.

4.4.2 Analysis of Search Data

To corroborate the initial evidence from the learning methods, we estimated φ̂∗(θ) (again, using δ = 2.5E6) on the data set D = {D_o, D_s}, where D_s is the data generated through the application of BestFirstSearch. The results of this estimate are plotted in Figure 2 against the results of the learning methods trained on D_o.11 First, we note that the addition of the search data narrows the range of potential equilibria substantially. Furthermore, the actual point predictions of the learning methods and those based on ε-bounds after search are reasonably close. Combining the evidence gathered from these two very different approaches to estimating the outcome correspondence yields a much more compelling picture of the relationship between storage costs and day-0 procurement than either method used in isolation.

Figure 2 (plot of total day-0 procurement against storage cost for the LWA, LWLR, and QR estimates and the SearchMin/SearchMax bounds): Aggregate day-0 procurement estimates based on search in strategy profile space compared to function approximation techniques trained on D_o. The correspondence φ̂∗(θ) for D = {D_o, D_s} is the interval between SearchMin and SearchMax.

This evidence supports the initial intuition that day-0 procurement should be decreasing with storage costs. It also confirms that high levels of day-0 procurement are a rational response to the 2004 tournament setting of average storage cost, which corresponds to θ = 100. The minimum prediction for aggregate procurement at this level of storage costs given by any of the experimental methods is approximately 3. This is quite high, as it corresponds to an expected
commitment of 1/3 of the total supplier capacity for the entire game. The maximum prediction is considerably higher, at 4.5. In the actual 2004 competition, aggregate day-0 procurement was equivalent to 5.71 on the scale used here [9]. Our predictions underestimate this outcome to some degree, but show that any rational outcome was likely to have high day-0 procurement.

4.4.3 Extrapolating the Solution Correspondence

We have reasonably strong evidence that the outcome correspondence is decreasing. However, the ultimate goal is to be able either to set the storage cost parameter to a value that would curb day-0 procurement in equilibrium or to conclude that this is not possible. To answer this question directly, suppose that we set a conservative threshold α = 2 on aggregate day-0 procurement.12 Linear extrapolation of the maximum of the outcome correspondence estimated from D yields θ = 320. The data for θ = 320 were collected in the same way as for the other storage cost settings, with 10 randomly generated profiles followed by 33 iterations of BestFirstSearch. Figure 3 shows the detailed ε-bounds for all profiles in terms of their corresponding values of φ.

11 It is unclear how meaningful the results of learning would be if D_s were added to the training data set. Indeed, the additional data may actually increase the learning variance.

12 Recall that the designer's objective is to incentivize aggregate day-0 procurement that is below the threshold α. Our threshold here still represents a commitment of over 20% of the suppliers' capacity for the entire game on average, so in practice we would probably want the threshold to be even lower.

Figure 3 (plot of ε̂ against total day-0 procurement): Values of ε̂ for profiles explored using search when θ = 320. Strategy profiles explored are presented in terms of the corresponding values of φ(a). The gray region corresponds to φ̂∗(320) with δ = 2.5M.

The estimated set of aggregate day-0 outcomes is very close to that for θ = 200, indicating that there is little additional benefit to raising storage costs above 200. Observe that even the lower bound of our estimated set of Nash equilibria is well above the target day-0 procurement of 2. Furthermore, payoffs to agents are almost always negative at θ = 320. Consequently, increasing the costs further would be undesirable even if day-0 procurement could eventually be curbed. Since we are reasonably confident that φ∗(θ) is decreasing in θ, we also do not expect that setting θ somewhere between 200 and 320 will achieve the desired result. We conclude that it is unlikely that day-0 procurement could ever be reduced to a desirable level using any reasonable setting of the storage cost parameter. That our predictions tend to underestimate tournament outcomes reinforces this conclusion. Achieving the desired reduction in day-0 procurement requires redesigning other aspects of the mechanism.

4.5 Probabilistic Analysis

Our empirical analysis has produced evidence in support of the conclusion that no reasonable setting of storage cost was likely to sufficiently curb excessive day-0 procurement in TAC/SCM '04. All of this evidence has been in the form of simple interpolation and extrapolation of estimates of the Nash equilibrium correspondence. These estimates are based on simulated game instances, and are subject to sampling noise contributed by the various stochastic elements of the game. In this section, we develop and apply methods for evaluating the sensitivity of our ε-bound calculations to such stochastic effects.

Suppose that all agents have finite (and small) pure strategy sets, A.
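The kind of sensitivity check described above can be illustrated with a small Monte Carlo sketch. This is our own hypothetical illustration, not the paper's procedure: assuming additive zero-mean normal noise with known variance, we draw the whole payoff table from normal posteriors and count how often a designated profile admits no profitable single-agent deviation. All profiles, means, and counts are invented.

```python
import random

def prob_no_profitable_deviation(profile, means, var, n, trials=5000, seed=0):
    """Estimate how often `profile` survives posterior resampling:
    each payoff cell is drawn from N(sample mean, var / n)."""
    rng = random.Random(seed)
    m = len(profile)
    survived = 0
    for _ in range(trials):
        # one posterior draw of the entire payoff table
        draw = {a: [rng.gauss(means[a][i], (var / n) ** 0.5) for i in range(m)]
                for a in means}
        ok = True
        for dev in means:
            diffs = [i for i in range(m) if dev[i] != profile[i]]
            if len(diffs) == 1 and draw[dev][diffs[0]] > draw[profile][diffs[0]]:
                ok = False  # the deviating agent gains in this draw
                break
        survived += ok
    return survived / trials

# Toy sample means for a two-agent game; variance 1.0, 10 samples per cell.
means = {("x", "x"): (5.0, 5.0), ("y", "x"): (3.0, 3.0), ("x", "y"): (3.0, 3.0)}
p = prob_no_profitable_deviation(("x", "x"), means, var=1.0, n=10)
print(0.9 < p <= 1.0)  # True: deviations lose by about 2 with posterior std ~0.45
```

A quantity of this kind plays the role of Pr{ε(a) = 0} in the probabilistic bounds developed in this section.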
With small strategy sets it is thus feasible to sample the entire payoff matrix of the game. Additionally, suppose that the noise is additive with zero mean and finite variance; that is, U_i(a) = u_i(a) + ξ̃_i(a), where U_i(a) is the observed payoff to i when a was played, u_i(a) is the actual corresponding payoff, and ξ̃_i(a) is a mean-zero normal random variable. We designate the known variance of ξ̃_i(a) by σ²_i(a). Thus, we assume that ξ̃_i(a) is normal with distribution N(0, σ²_i(a)). We take ū_i(a) to be the sample mean over all U_i(a) in D, and follow Chang and Huang [3] in assuming an improper prior over the actual payoffs u_i(a) and independent sampling for all i and a. We also rely on their result that u_i(a) | ū_i(a) = ū_i(a) − Z_i(a)σ_i(a)/√n_i(a) are independent with posterior distributions N(ū_i(a), σ²_i(a)/n_i(a)), where n_i(a) is the number of samples taken of payoffs to i for the pure profile a, and Z_i(a) ~ N(0, 1).

We now derive a generic probabilistic bound that a profile a ∈ A is an ε-Nash equilibrium. If u_i(·) | ū_i(·) are independent for all i ∈ I and a ∈ A, we have the following result (from this point on we omit conditioning on ū_i(·) for brevity):

PROPOSITION 1.

Pr( max_{i∈I} max_{b∈A_i} [u_i(b, a_{−i}) − u_i(a)] ≤ ε ) = Π_{i∈I} ∫_R Π_{b∈A_i\{a_i}} Pr(u_i(b, a_{−i}) ≤ u + ε) f_{u_i(a)}(u) du,   (2)

where f_{u_i(a)}(u) is the pdf of the posterior distribution of u_i(a). The proofs of this and all subsequent results are in the Appendix.

The posterior distribution of the optimum mean of n samples, derived by Chang and Huang [3], is

Pr(u_i(a) ≤ c) = 1 − Φ( √n_i(a) (ū_i(a) − c) / σ_i(a) ),   (3)

where a ∈ A and Φ(·) is the N(0, 1) distribution function. Combining the results (2) and (3), we obtain a probabilistic confidence bound that ε(a) ≤ γ for a given γ.

Now, we consider cases of incomplete data and use the results we have just obtained to construct an upper bound (restricted to profiles represented in the data) on the distributions of sup{φ∗(θ)} and inf{φ∗(θ)} (assuming that both are attainable):

Pr{sup{φ∗(θ)} ≤ x} ≤_D Pr{∃a ∈ D : φ(a) ≤ x ∧ a ∈ N(θ)} ≤ Σ_{a∈D : φ(a)≤x} Pr{a ∈ N(θ)} = Σ_{a∈D : φ(a)≤x} Pr{ε(a) = 0},

where x is a real number and ≤_D indicates that the upper bound accounts only for strategies that appear in the data set D. Since the events {∃a ∈ D : φ(a) ≤ x ∧ a ∈ N(θ)} and {inf{φ∗(θ)} ≤ x} are equivalent, this also defines an upper bound on the probability of {inf{φ∗(θ)} ≤ x}. The values thus derived comprise Tables 1 and 2.

φ∗(θ)    θ = 0       θ = 50    θ = 100
< 2.7    0.000098    0         0.146
< 3      0.158       0.0511    0.146
< 3.9    0.536       0.163     1
< 4.5    1           1         1

Table 1: Upper bounds on the distribution of inf{φ∗(θ)} restricted to D for θ ∈ {0, 50, 100} when N(θ) is a set of Nash equilibria.

φ∗(θ)    θ = 150    θ = 200    θ = 320
< 2.7    0          0          0.00132
< 3      0.0363     0.141      1
< 3.9    1          1          1
< 4.5    1          1          1

Table 2: Upper bounds on the distribution of inf{φ∗(θ)} restricted to D for θ ∈ {150, 200, 320} when N(θ) is a set of Nash equilibria.

Tables 1 and 2 suggest that the existence of any equilibrium with φ(a) < 2.7 is unlikely for any θ that we have data for, although this judgment, as we mentioned, is only with respect to the profiles we have actually sampled. We can then accept this as another piece of evidence that the designer could not find a suitable setting of θ to achieve his objectives. Indeed, the designer seems unlikely to
achieve his objective even if he could persuade participants to play a desirable equilibrium! Table 1 also provides additional evidence that the agents in the 2004 TAC/SCM tournament were indeed rational in procuring large numbers of components at the beginning of the game. If we look at the third column of this table, which corresponds to θ = 100, we can gather that no profile a in our data with φ(a) < 3 is very likely to be played in equilibrium.

The bounds above provide some general evidence, but ultimately we are interested in a concrete probabilistic assessment of our conclusion with respect to the data we have sampled. In particular, we would like to say something about what happens for the settings of θ for which we have no data. To derive an approximate probabilistic bound on the probability that no θ ∈ Θ could have achieved the designer's objective, let ∪_{j=1}^J Θ_j be a partition of Θ, and assume that the function sup{φ∗(θ)} satisfies the Lipschitz condition with Lipschitz constant A_j on each subset Θ_j.13 Since we have determined that raising the storage cost above 320 is undesirable due to secondary considerations, we restrict attention to Θ = [0, 320]. We now define each subset j to be the interval between two points for which we have produced data. Thus, Θ = [0, 50] ∪ (50, 100] ∪ (100, 150] ∪ (150, 200] ∪ (200, 320], with j running between 1 and 5, corresponding to the subintervals above. We will further denote each Θ_j by (a_j, b_j].14 Then, the following proposition gives us an approximate upper bound15 on the probability that sup{φ∗(θ)} ≤ α.

PROPOSITION 2.

Pr{ ∨_{θ∈Θ} sup{φ∗(θ)} ≤ α } ≤_D Σ_{j=1}^5 Σ_{y,z∈D : y+z≤c_j} ( Σ_{a : φ(a)=z} Pr{ε(a) = 0} ) × ( Σ_{a : φ(a)=y} Pr{ε(a) = 0} ),

where c_j = 2α + A_j(b_j − a_j) and ≤_D indicates that the upper bound only accounts for strategies that appear in the data set D.

13 A function that satisfies the Lipschitz condition is called Lipschitz continuous.

14 The treatment for the interval [0, 50] is identical.

15 It is approximate in the sense that we only take into account strategies that are present in the data.

Because our bounds are approximate, we cannot use them as a conclusive probabilistic assessment. Instead, we take this as another piece of evidence to complement our findings. Even if we can assume that a function that we approximate from data is Lipschitz continuous, we rarely actually know the Lipschitz constant for any subset of Θ. Thus, we are faced with the task of estimating it from data. Here, we tried three methods of doing so. The first simply takes the highest slope that the function attains within the available data and uses this constant value for every subinterval. This produces the most conservative bound, and in many situations it is unlikely to be informative. An alternative method is to take an upper bound on the slope obtained within each subinterval using the available data. This produces a much less conservative upper bound on probabilities. However, since the actual upper bound is generally greater for each subinterval, the resulting probabilistic bound may be deceiving. A final method that we tried is a compromise between the two above. Instead of taking the conservative upper bound based on data over the entire function domain Θ, we take the average of the upper bounds obtained at each Θ_j. The bound at an interval is then taken to be the maximum of the upper bound for this interval and the average upper bound over all intervals.

The results of evaluating the expression for Pr{∨_{θ∈Θ} sup{φ∗(θ)} ≤ α} when α = 2 are presented in Table 3.

max_j A_j    A_j        max{A_j, ave(A_j)}
1            0.00772    0.00791

Table 3: Approximate upper bound on the probability that some setting of θ ∈ [0, 320] will satisfy the designer objective with target α = 2. Different methods of approximating the upper bound on the slope in each subinterval j are used.

In terms of our claims in this work, the expression gives an upper bound on the probability that some setting of θ (i.e., storage cost) in the interval [0, 320] will result in total day-0 procurement that is no greater in any equilibrium than the target specified by α, taken here to be 2. As we had suspected, the most conservative approach to estimating the upper bound on the slope, presented in the first column of the table, provides us little information here. However, the other two estimation approaches, found in columns two and three of Table 3, suggest that we are indeed quite confident that no reasonable setting of θ ∈ [0, 320] would have done the job. Given the tremendous difficulty of the problem, this result is very strong.16 Still, we must be very cautious in drawing too heroic a conclusion based on this evidence. Certainly, we have not checked all the profiles but only a small proportion of them (infinitesimal, if we consider the entire continuous domain of θ and the strategy sets). Nor can we expect ever to obtain enough evidence to make completely objective conclusions. Instead, the approach we advocate here is to collect as much evidence as is feasible given resource constraints, and to make the most compelling judgment based on this evidence, if at all possible.

16 Since we did not have all the possible deviations for any profile available in the data, the true upper bounds may be even lower.

5. CONVERGENCE RESULTS

At this point, we explore abstractly whether a design parameter choice based on payoff data can be asymptotically reliable. As a matter of convenience, we will use the notation u_{n,i}(a) to refer to a payoff function of player i based on an average over n i.i.d.
samples from the distribution of payoffs. We also assume that u_{n,i}(a) are independent for all a ∈ A and i ∈ I. We will use the notation Γ_n to refer to the game [I, R, {u_{n,i}(·)}], whereas Γ will denote the underlying game, [I, R, {u_i(·)}]. Similarly, we define ε_n(r) to be ε(r) with respect to the game Γ_n. In this section, we show that ε_n(s) → ε(s) a.s. uniformly on the mixed strategy space for any finite game and, furthermore, that all mixed-strategy Nash equilibria in empirical games eventually become arbitrarily close to some Nash equilibrium strategies in the underlying game. We use these results to show that under certain conditions, the optimal choice of the design parameter based on empirical data converges almost surely to the actual optimum.

THEOREM 3. Suppose that |I| < ∞ and |A| < ∞. Then ε_n(s) → ε(s) a.s. uniformly on S.

Recall that N is the set of all Nash equilibria of Γ. If we define N_{n,γ} = {s ∈ S : ε_n(s) ≤ γ}, we have the following corollary to Theorem 3:

COROLLARY 4. For every γ > 0, there is an M such that for all n ≥ M, N ⊂ N_{n,γ} a.s.

PROOF. Since ε(s) = 0 for every s ∈ N, we can find M large enough such that Pr{sup_{n≥M} sup_{s∈N} ε_n(s) < γ} = 1.

By the corollary, for any game with a finite set of pure strategies and for any ε > 0, all Nash equilibria lie in the set of empirical ε-Nash equilibria if enough samples have been taken. As we now show, this provides some justification for our use of a set of profiles with a non-zero ε-bound as an estimate of the set of Nash equilibria. First, suppose we conclude that for a particular setting of θ, sup{φ̂∗(θ)} ≤ α. Then, since for any fixed ε > 0, N(θ) ⊂ N_{n,ε}(θ) when n is large enough,

sup{φ∗(θ)} = sup_{s∈N(θ)} φ(s) ≤ sup_{s∈N_{n,ε}(θ)} φ(s) = sup{φ̂∗(θ)} ≤ α

for any such n.
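The convergence claim of Theorem 3 can be checked numerically in a toy game. The following sketch is our own illustration (the 2x2 game and unit-variance noise are invented, not from the paper): it compares the true regret ε(a) of a pure profile with the empirical regret ε_n(a) computed from sample-mean payoffs.

```python
import random

# True payoffs of an invented 2x2 game: pure profile -> (u_1, u_2).
game = {
    ("A", "A"): (2.0, 2.0), ("A", "B"): (0.0, 3.0),
    ("B", "A"): (3.0, 0.0), ("B", "B"): (1.0, 1.0),
}

def regret(profile, payoffs):
    """eps(a): the largest gain any single agent obtains by deviating."""
    gains = []
    for i in (0, 1):
        for alt in ("A", "B"):
            dev = list(profile)
            dev[i] = alt
            gains.append(payoffs[tuple(dev)][i] - payoffs[profile][i])
    return max(0.0, max(gains))

def noisy_means(n, seed=1):
    """Sample means of n unit-variance observations per payoff cell."""
    rng = random.Random(seed)
    return {a: tuple(u + rng.gauss(0.0, 1.0) / n ** 0.5 for u in us)
            for a, us in game.items()}

print(regret(("A", "A"), game))  # 1.0: either agent gains 1 by deviating to "B"
# With many samples the empirical regret is close to the true regret.
print(abs(regret(("A", "A"), noisy_means(n=10_000)) - 1.0) < 0.1)  # True
```

As the number of samples n grows, the noise in each sample mean shrinks like 1/sqrt(n), so ε_n of every profile approaches its true value, in line with the uniform convergence stated above.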
Thus, since we defined the welfare function of the designer to be I{sup{φ*(θ)} ≤ α} in our domain of interest, the empirical choice of θ satisfies the designer's objective, thereby maximizing his welfare function. Alternatively, suppose we conclude that inf{φ̂*(θ)} > α for every θ in the domain. Then

$$\alpha < \inf\{\hat{\phi}^*(\theta)\} = \inf_{s \in N_{n,\epsilon}(\theta)} \phi(s) \le \inf_{s \in N(\theta)} \phi(s) \le \sup_{s \in N(\theta)} \phi(s) = \sup\{\phi^*(\theta)\}$$

for every θ, and we can conclude that no setting of θ will satisfy the designer's objective.

Now, we will show that when the number of samples is large enough, every Nash equilibrium of Γₙ is close to some Nash equilibrium of the underlying game. This result will lead us to consider convergence of optimizers based on empirical data to actual optimal mechanism parameter settings. We first note that the function ε(s) is continuous in a finite game.

LEMMA 5. Let S be a mixed strategy set defined on a finite game. Then ε : S → ℝ is continuous.

For the exposition that follows, we need a bit of additional notation. First, let (Z, d) be a metric space, with X, Y ⊂ Z, and define the directed Hausdorff distance from X to Y to be

$$h(X, Y) = \sup_{x \in X} \inf_{y \in Y} d(x, y).$$

Observe that U ⊂ X ⇒ h(U, Y) ≤ h(X, Y). Further, define B_S(x, δ) to be an open ball in S ⊂ Z with center x ∈ S and radius δ. Now, let Nₙ denote the set of all Nash equilibria of the game Γₙ, and let

$$N_\delta = \bigcup_{x \in N} B_S(x, \delta),$$

that is, the union of open balls of radius δ with centers at the Nash equilibria of Γ. Note that h(N_δ, N) = δ. We can then prove the following general result.

THEOREM 6. Suppose |I| < ∞ and |A| < ∞. Then h(Nₙ, N) converges to 0 almost surely.

We will now show that in the special case when Θ and A are finite and each
Γ_θ has a unique Nash equilibrium, the estimates θ̂ of the optimal designer parameter converge to an actual optimizer almost surely. Let θ̂ = arg max_{θ∈Θ} W(Nₙ(θ), θ), where n is the number of times each pure profile was sampled in Γ_θ for every θ, and let θ* = arg max_{θ∈Θ} W(N(θ), θ).

THEOREM 7. Suppose |N(θ)| = 1 for all θ ∈ Θ and suppose that Θ and A are finite. Let W(s, θ) be continuous at the unique s*(θ) ∈ N(θ) for each θ ∈ Θ. Then θ̂ is a consistent estimator of θ* if W(N(θ), θ) is defined as a supremum, infimum, or expectation over the set of Nash equilibria. In fact, θ̂ → θ* a.s. in each of these cases.

The shortcoming of the above result is that, within our framework, the designer has no way of knowing or ensuring that the games Γ_θ do, indeed, have unique equilibria. However, it does lend some theoretical justification for pursuing design in this manner and, perhaps, will serve as a guide toward more general results in the future.

6. RELATED WORK

The mechanism design literature in Economics has typically explored the existence of a mechanism that implements a social choice function in equilibrium [10]. Additionally, there is an extensive literature on optimal auction design [10], of which the work by Roger Myerson [11] is perhaps the most relevant. In much of this work, analytical results are presented with respect to specific utility functions, accounting for constraints such as incentive compatibility and individual rationality. Several related approaches to searching for the best mechanism exist in the Computer Science literature. Conitzer and Sandholm [6] developed a search algorithm for the case when all the relevant game parameters are common knowledge. When the payoff functions of players are unknown, a search using simulations has been
explored as an alternative. One approach in that direction, taken in [4] and [15], is to co-evolve the mechanism parameter and agent strategies, using some notion of social utility and agent payoffs as fitness criteria. An alternative to co-evolution, explored in [16], is to optimize a well-defined welfare function of the designer using genetic programming. In that work the authors used a common learning strategy for all agents and defined the outcome of a game induced by a mechanism parameter as the outcome of joint agent learning. Most recently, Phelps et al. [14] compared two mechanisms based on expected social utility, with the expectation taken over an empirical distribution of equilibria in games defined by heuristic strategies, as in [18].

7. CONCLUSION

In this work we spent considerable effort developing general tactics for empirical mechanism design. We defined a formal game-theoretic model of the interaction between the designer and the participants of the mechanism as a two-stage game. We also described in some generality the methods for estimating a sample Nash equilibrium function when data is extremely scarce, or a Nash equilibrium correspondence when more data is available. Our techniques are designed specifically to deal with problems in which both the mechanism parameter space and the agent strategy sets are infinite and only a relatively small data set can be acquired. A difficult design issue in the TAC/SCM game, which the TAC community has been eager to address, provides us with a setting in which to test our methods. In applying empirical game analysis to the problem at hand, we are fully aware that our results are inherently inexact. Thus, we concentrate on collecting evidence about the structure of the Nash equilibrium correspondence. In the end, we can try to provide enough evidence to either prescribe a parameter setting or suggest that no setting is possible that will satisfy the designer. In the case of TAC/SCM, our evidence suggests quite
strongly that storage cost could not have been effectively adjusted in the 2004 tournament to curb excessive day-0 procurement without detrimental effects on overall profitability. The success of our analysis in this extremely complex environment with high simulation costs makes us optimistic that our methods can provide guidance in making mechanism design decisions in other challenging domains. The theoretical results confirm some of the intuitions behind the empirical mechanism design methods we have introduced, and increase our confidence that our framework can be effective in estimating the best mechanism parameter choice in relatively general settings.

Acknowledgments

We thank Terence Kelly, Matthew Rudary, and Satinder Singh for helpful comments on earlier drafts of this work. This work was supported in part by NSF grant IIS-0205435 and the DARPA REAL strategic reasoning program.

8. REFERENCES

[1] R. Arunachalam and N. M. Sadeh. The supply chain trading agent competition. Electronic Commerce Research and Applications, 4:63-81, 2005.
[2] M. Benisch, A. Greenwald, V. Naroditskiy, and M. Tschantz. A stochastic programming approach to scheduling in TAC SCM. In Fifth ACM Conference on Electronic Commerce, pages 152-159, New York, 2004.
[3] Y.-P. Chang and W.-T. Huang. Generalized confidence intervals for the largest value of some functions of parameters under normality. Statistica Sinica, 10:1369-1383, 2000.
[4] D. Cliff. Evolution of market mechanism through a continuous space of auction-types. In Congress on Evolutionary Computation, 2002.
[5] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129-145, 1996.
[6] V. Conitzer and T. Sandholm. An algorithm for automatically designing deterministic mechanisms without payments. In Third International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 128-135, 2004.
[7] D.
Friedman. Evolutionary games in economics. Econometrica, 59(3):637-666, May 1991.
[8] R. Keener. Statistical Theory: A Medley of Core Topics. University of Michigan Department of Statistics, 2004.
[9] C. Kiekintveld, Y. Vorobeychik, and M. P. Wellman. An analysis of the 2004 supply chain management trading agent competition. In IJCAI-05 Workshop on Trading Agent Design and Analysis, Edinburgh, 2005.
[10] A. Mas-Colell, M. Whinston, and J. Green. Microeconomic Theory. Oxford University Press, 1995.
[11] R. B. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58-73, February 1981.
[12] S. Olafsson and J. Kim. Simulation optimization. In E. Yucesan, C.-H. Chen, J. Snowdon, and J. Charnes, editors, 2002 Winter Simulation Conference, 2002.
[13] D. Pardoe and P. Stone. TacTex-03: A supply chain management agent. SIGecom Exchanges, 4(3):19-28, 2004.
[14] S. Phelps, S. Parsons, and P. McBurney. Automated agents versus virtual humans: an evolutionary game theoretic comparison of two double-auction market designs. In Workshop on Agent Mediated Electronic Commerce VI, 2004.
[15] S. Phelps, S. Parsons, P. McBurney, and E. Sklar. Co-evolution of auction mechanisms and trading strategies: towards a novel approach to microeconomic design. In ECOMAS 2002 Workshop, 2002.
[16] S. Phelps, S. Parsons, E. Sklar, and P. McBurney. Using genetic programming to optimise pricing rules for a double-auction market. In Workshop on Agents for Electronic Commerce, 2003.
[17] Y. Vorobeychik, M. P. Wellman, and S. Singh. Learning payoff functions in infinite games. In Nineteenth International Joint Conference on Artificial Intelligence, pages 977-982, 2005.
[18] W. E. Walsh, R. Das, G. Tesauro, and J. O. Kephart. Analyzing complex strategic interactions in multi-agent systems. In AAAI-02 Workshop on Game Theoretic and Decision Theoretic Agents, 2002.
[19] M. P. Wellman, J. J. Estelle, S. Singh, Y. Vorobeychik, C. Kiekintveld, and V.
Soni. Strategic interactions in a supply chain game. Computational Intelligence, 21(1):1-26, February 2005.

APPENDIX

A. PROOFS

A.1 Proof of Proposition 1

$$\Pr\Big(\max_{i \in I} \max_{b \in A_i \setminus a_i} u_i(b, a_{-i}) - u_i(a) \le \epsilon\Big) = \prod_{i \in I} E_{u_i(a)}\Big[\Pr\Big(\max_{b \in A_i \setminus a_i} u_i(b, a_{-i}) - u_i(a) \le \epsilon \;\Big|\; u_i(a)\Big)\Big] = \prod_{i \in I} \int_{\mathbb{R}} \prod_{b \in A_i \setminus a_i} \Pr(u_i(b, a_{-i}) \le u + \epsilon)\, f_{u_i(a)}(u)\, du.$$

A.2 Proof of Proposition 2

First, suppose that some function f(x) defined on [a_i, b_i] satisfies the Lipschitz condition on (a_i, b_i] with Lipschitz constant A_i. Then the following claim holds.

Claim: inf_{x∈(a_i,b_i]} f(x) ≥ 0.5(f(a_i) + f(b_i) − A_i(b_i − a_i)).

To prove this claim, note that the intersection of the lines through f(a_i) and f(b_i) with slopes −A_i and A_i, respectively, determines a lower bound on the minimum of f(x) on [a_i, b_i] (which is in turn a lower bound on the infimum of f(x) on (a_i, b_i]). The line through f(a_i) is determined by f(a_i) = −A_i a_i + c_L and the line through f(b_i) by f(b_i) = A_i b_i + c_R. Thus, the intercepts are c_L = f(a_i) + A_i a_i and c_R = f(b_i) − A_i b_i, respectively. Let x* be the point at which these lines intersect. Then

$$x^* = -\frac{f(x^*) - c_L}{A_i} = \frac{f(x^*) - c_R}{A_i}.$$

By substituting the expressions for c_R and c_L and solving for the common value f(x*), we get the desired result.

Now, subadditivity gives us

$$\Pr\Big\{\bigvee_{\theta \in \Theta} \sup\{\phi^*(\theta)\} \le \alpha\Big\} \le \sum_{j=1}^{5} \Pr\Big\{\bigvee_{\theta \in \Theta_j} \sup\{\phi^*(\theta)\} \le \alpha\Big\},$$

and, by the claim,

$$\Pr\Big\{\bigvee_{\theta \in \Theta_j} \sup\{\phi^*(\theta)\} \le \alpha\Big\} = 1 - \Pr\Big\{\inf_{\theta \in \Theta_j} \sup\{\phi^*(\theta)\} > \alpha\Big\} \le \Pr\big\{\sup\{\phi^*(a_j)\} + \sup\{\phi^*(b_j)\} \le 2\alpha + A_j(b_j - a_j)\big\}.$$

Since we have a finite number of points in the data set for each θ, we can obtain the following expression:

$$\Pr\big\{\sup\{\phi^*(a_j)\} + \sup\{\phi^*(b_j)\} \le c_j\big\} = \sum_{y,z \in D:\ y+z \le c_j} \Pr\{\sup\{\phi^*(b_j)\} = y\}\, \Pr\{\sup\{\phi^*(a_j)\} = z\}.$$

We can now restrict attention to deriving an upper bound on Pr{sup{φ*(θ)} = y} for a fixed θ. To do this, observe that

$$\Pr\{\sup\{\phi^*(\theta)\} = y\} \le \Pr\Big\{\bigvee_{a \in D:\ \phi(a)=y} \epsilon(a) = 0\Big\} \le \sum_{a \in D:\ \phi(a)=y} \Pr\{\epsilon(a) = 0\}$$

by subadditivity and the fact that a profile a is a Nash equilibrium if and only if ε(a) = 0. Putting everything together yields the desired result.

A.3 Proof of Theorem 3

First, we will need the following fact.

Claim: Given functions f₁(x), f₂(x) and a set X, |max_{x∈X} f₁(x) − max_{x∈X} f₂(x)| ≤ max_{x∈X} |f₁(x) − f₂(x)|.

To prove this claim, observe that

$$\Big|\max_{x \in X} f_1(x) - \max_{x \in X} f_2(x)\Big| = \begin{cases} \max_x f_1(x) - \max_x f_2(x) & \text{if } \max_x f_1(x) \ge \max_x f_2(x),\\[2pt] \max_x f_2(x) - \max_x f_1(x) & \text{if } \max_x f_2(x) \ge \max_x f_1(x). \end{cases}$$

In the first case,

$$\max_{x \in X} f_1(x) - \max_{x \in X} f_2(x) \le \max_{x \in X} (f_1(x) - f_2(x)) \le \max_{x \in X} |f_1(x) - f_2(x)|.$$

Similarly, in the second case,

$$\max_{x \in X} f_2(x) - \max_{x \in X} f_1(x) \le \max_{x \in X} (f_2(x) - f_1(x)) \le \max_{x \in X} |f_2(x) - f_1(x)| = \max_{x \in X} |f_1(x) - f_2(x)|.$$

Thus, the claim holds. By the Strong Law of Large Numbers, u_{n,i}(a) → u_i(a) a.s.
for all i ∈ I, a ∈ A. That is, Pr{lim_{n→∞} u_{n,i}(a) = u_i(a)} = 1, or, equivalently [8], for any α > 0 and δ > 0 there is M(i, a) > 0 such that

$$\Pr\Big\{\sup_{n \ge M(i,a)} |u_{n,i}(a) - u_i(a)| < \frac{\delta}{2|A|}\Big\} \ge 1 - \alpha.$$

By taking M = max_{i∈I} max_{a∈A} M(i, a), we have

$$\Pr\Big\{\max_{i \in I} \max_{a \in A} \sup_{n \ge M} |u_{n,i}(a) - u_i(a)| < \frac{\delta}{2|A|}\Big\} \ge 1 - \alpha.$$

Thus, by the claim, for any n ≥ M,

$$\sup_{n \ge M} |\epsilon_n(s) - \epsilon(s)| \le \max_{i \in I} \max_{a_i \in A_i} \sup_{n \ge M} |u_{n,i}(a_i, s_{-i}) - u_i(a_i, s_{-i})| + \sup_{n \ge M} \max_{i \in I} |u_{n,i}(s) - u_i(s)|$$
$$\le \max_{i \in I} \max_{a_i \in A_i} \sum_{b \in A_{-i}} \sup_{n \ge M} |u_{n,i}(a_i, b) - u_i(a_i, b)|\, s_{-i}(b) + \max_{i \in I} \sum_{b \in A} \sup_{n \ge M} |u_{n,i}(b) - u_i(b)|\, s(b)$$
$$\le \max_{i \in I} \max_{a_i \in A_i} \sum_{b \in A_{-i}} \sup_{n \ge M} |u_{n,i}(a_i, b) - u_i(a_i, b)| + \max_{i \in I} \sum_{b \in A} \sup_{n \ge M} |u_{n,i}(b) - u_i(b)|$$
$$< \max_{i \in I} \max_{a_i \in A_i} \sum_{b \in A_{-i}} \frac{\delta}{2|A|} + \max_{i \in I} \sum_{b \in A} \frac{\delta}{2|A|} \le \delta$$

with probability at least 1 − α. Note that since s_{-i}(b) and s(b) are bounded between 0 and 1, we were able to drop them from the expressions above to obtain a bound that is valid independent of the particular choice of s. Furthermore, since the above result can be obtained for arbitrary α > 0 and δ > 0, we have Pr{lim_{n→∞} εₙ(s) = ε(s)} = 1 uniformly on S.

A.4 Proof of Lemma 5

We prove the result using uniform continuity of u_i(s) and the preservation of continuity under maximization.

Claim: A function f : ℝᵏ → ℝ defined by f(t) = Σ_{i=1}^{k} z_i t_i, where the z_i are constants in ℝ, is uniformly continuous in t.

The claim follows because |f(t) − f(t′)| = |Σ_{i=1}^{k} z_i(t_i − t′_i)| ≤ Σ_{i=1}^{k} |z_i||t_i − t′_i|. An immediate consequence for our purposes is that u_i(s) is uniformly continuous in s and u_i(a_i, s_{-i}) is uniformly continuous in s_{-i}.
Claim: Let f(a, b) be uniformly continuous in b ∈ B for every a ∈ A, with |A| < ∞. Then V(b) = max_{a∈A} f(a, b) is uniformly continuous in b.

To show this, take γ > 0 and let b, b′ ∈ B be such that ‖b − b′‖ < δ(a) ⇒ |f(a, b) − f(a, b′)| < γ. Now take δ = min_{a∈A} δ(a). Then, whenever ‖b − b′‖ < δ,

$$|V(b) - V(b')| = \Big|\max_{a \in A} f(a, b) - \max_{a \in A} f(a, b')\Big| \le \max_{a \in A} |f(a, b) - f(a, b')| < \gamma.$$

Now, recall that ε(s) = max_i [max_{a_i∈A_i} u_i(a_i, s_{-i}) − u_i(s)]. By the claims above, max_{a_i∈A_i} u_i(a_i, s_{-i}) is uniformly continuous in s_{-i} and u_i(s) is uniformly continuous in s. Since the difference of two uniformly continuous functions is uniformly continuous, and since this continuity is preserved under maximization by our second claim, we have the desired result.

A.5 Proof of Theorem 6

Choose δ > 0. First, we need to ascertain that the following claim holds.

Claim: ε̄ = min_{s∈S∖N_δ} ε(s) exists and ε̄ > 0.

Since N_δ is an open subset of the compact set S, it follows that S ∖ N_δ is compact. As we proved in Lemma 5 that ε(s) is continuous, existence follows from the Weierstrass theorem. That ε̄ > 0 is clear since ε(s) = 0 if and only if s is a Nash equilibrium of Γ.

Now, by Theorem 3, for any α > 0 there is M such that

$$\Pr\Big\{\sup_{n \ge M} \sup_{s \in S} |\epsilon_n(s) - \epsilon(s)| < \bar{\epsilon}\Big\} \ge 1 - \alpha.$$

Consequently, for any δ > 0,

$$\Pr\Big\{\sup_{n \ge M} h(N_n, N_\delta) < \delta\Big\} \ge \Pr\{\forall n \ge M:\ N_n \subset N_\delta\} \ge \Pr\Big\{\sup_{n \ge M} \sup_{s \in N_n} \epsilon(s) < \bar{\epsilon}\Big\} \ge \Pr\Big\{\sup_{n \ge M} \sup_{s \in S} |\epsilon_n(s) - \epsilon(s)| < \bar{\epsilon}\Big\} \ge 1 - \alpha.$$

Since this holds for arbitrary α > 0 and δ > 0, the desired result follows.

A.6 Proof of Theorem 7

Fix θ and choose δ > 0. Since W(s, θ) is continuous at s*(θ), given ε > 0 there is δ > 0 such that for
every s′ that is within δ of s*(θ), |W(s′, θ) − W(s*(θ), θ)| < ε. By Theorem 6, we can find M(θ) large enough such that all s′ ∈ Nₙ are within δ of s*(θ) for all n ≥ M(θ) with probability 1. Consequently, for any ε > 0 we can find M(θ) large enough such that with probability 1 we have sup_{n≥M(θ)} sup_{s′∈Nₙ} |W(s′, θ) − W(s*(θ), θ)| < ε. Let us assume without loss of generality that there is a unique optimal choice of θ. Now, since the set Θ is finite, there is also a second-best choice of θ (if there is only one θ ∈ Θ this discussion is moot anyway):

$$\theta^{**} = \arg\max_{\Theta \setminus \theta^*} W(s^*(\theta), \theta).$$

Suppose w.l.o.g. that θ** is also unique and let Δ = W(s*(θ*), θ*) − W(s*(θ**), θ**). Then if we let ε < Δ/2 and let M = max_{θ∈Θ} M(θ), where each M(θ) is large enough such that sup_{n≥M(θ)} sup_{s′∈Nₙ} |W(s′, θ) − W(s*(θ), θ)| < ε a.s., the optimal choice of θ based on any empirical equilibrium will be θ* with probability 1. Thus, in particular, given any probability distribution over empirical equilibria, the best choice of θ will be θ* with probability 1 (similarly if we take the supremum or infimum of W(Nₙ(θ), θ) over the set of empirical equilibria in constructing the objective function).

Empirical Mechanism Design: Methods, with Application to a Supply-Chain Scenario

ABSTRACT

Our proposed methods employ learning and search techniques to estimate outcome features of interest as a function of mechanism parameter settings. We illustrate our approach with a design task from a supply-chain trading competition. Designers adopted several rule changes in order to deter particular
procurement behavior, but the measures proved insufficient. Our empirical mechanism analysis models the relation between a key design parameter and outcomes, confirming the observed behavior and indicating that no reasonable parameter settings would have been likely to achieve the desired effect. More generally, we show that under certain conditions, the estimator of the optimal mechanism parameter setting based on empirical data is consistent.

1. MOTIVATION

We illustrate our problem with an anecdote from a supply chain research exercise: the 2003 and 2004 Trading Agent Competition (TAC) Supply Chain Management (SCM) game. TAC/SCM [1] defines a scenario where agents compete to maximize their profits as manufacturers in a supply chain. The agents procure components from the various suppliers and assemble finished goods for sale to customers, repeatedly over a simulated year.¹

¹ Information about TAC and the SCM game, including specifications, rules, and competition results, can be found at http://www.sics.se/tac.

As it happened, the specified negotiation behavior of suppliers provided a great incentive for agents to procure large quantities of components on day 0: the very beginning of the simulation. During the early rounds of the 2003 SCM competition, several agent developers discovered this, and the apparent success led to most agents performing the majority of their purchasing on day 0. Although jockeying for day-0 procurement turned out to be an interesting strategic issue in itself [19], the phenomenon detracted from other interesting problems, such as adapting production levels to varying demand (since component costs were already sunk), and dynamic management of production, sales, and inventory. Several participants noted that the predominance of day-0 procurement overshadowed other key research issues, such as factory scheduling [2] and optimizing bids for customer orders [13]. After the 2003 tournament, there was a general consensus in the TAC
community that the rules should be changed to deter large day-0 procurement. The task facing game organizers can be viewed as a problem in mechanism design. The designers have certain game features under their control, and a set of objectives regarding game outcomes. Unlike most academic treatments of mechanism design, the objective is a behavioral feature (moderate day-0 procurement) rather than an allocation feature like economic efficiency, and the allowed mechanisms are restricted to those judged to require only an incremental modification of the current game. Replacing the supply-chain negotiation procedures with a one-shot direct mechanism, for example, was not an option. We believe that such operational restrictions and idiosyncratic objectives are actually quite typical of practical mechanism design settings, where they are perhaps more commonly characterized as incentive engineering problems. In response to the problem, the TAC/SCM designers adopted several rule changes intended to penalize large day-0 orders. These included modifications to supplier pricing policies and the introduction of storage costs assessed on inventories of components and finished goods. Despite the changes, day-0 procurement was very high in the early rounds of the 2004 competition. In a drastic measure, the GameMaster imposed a fivefold increase of storage costs midway through the tournament. Even this did not stem the tide, and day-0 procurement in the final rounds actually increased (by some measures) from 2003 [9]. The apparent difficulty in identifying rule modifications that effect moderation in day-0 procurement is quite striking. Although the designs were widely discussed, predictions for the effects of various proposals were supported primarily by intuitive arguments or at best by back-of-the-envelope calculations. Much of the difficulty, of course, is anticipating the agents' (and their developers') responses without essentially running a gaming exercise for this
purpose. The episode caused us to consider whether new approaches or tools could enable more systematic analysis of design options. Standard game-theoretic and mechanism design methods are clearly relevant, although the lack of an analytic description of the game seems to be an impediment. Under the assumption that the simulator itself is the only reliable source of outcome computation, we refer to our task as empirical mechanism design. In the sequel, we develop some general methods for empirical mechanism design and apply them to the TAC/SCM redesign problem. Our analysis focuses on the setting of storage costs (taking other game modifications as fixed), since this is the most direct deterrent to early procurement adopted. Our results confirm the basic intuition that incentives for day-0 purchasing decrease as storage costs rise. We also confirm that the high day-0 procurement observed in the 2004 tournament is a rational response to the setting of storage costs used. Finally, we conclude from our data that it is very unlikely that any reasonable setting of storage costs would result in acceptable levels of day-0 procurement, so a different design approach would have been required to eliminate this problem. Overall, we contribute a formal framework and a set of methods for tackling indirect mechanism design problems in settings where only a black-box description of players' utilities is available. Our methods incorporate estimation of sets of Nash equilibria and sample Nash equilibria, used in conjunction to support general claims about the structure of the mechanism designer's utility, as well as a restricted probabilistic analysis to assess the likelihood of conclusions. We believe that most realistic problems are too complex to be amenable to exact analysis. Consequently, we advocate the approach of gathering evidence to provide indirect support of specific hypotheses.

2. PRELIMINARIES

A normal-form game² is denoted by [I, {R_i}, {u_i(r)}], where I refers to the set of players and m = |I| is the number of players. R_i is the set of strategies available to player i ∈ I, with R = R₁ × ... × R_m representing the set of joint strategies of all players. We designate the set of pure strategies available to player i by A_i, and denote the joint set of pure strategies of all players by A = A₁ × ... × A_m. It is often convenient to refer to a strategy of player i separately from that of the remaining players. To accommodate this, we use a₋ᵢ to denote the joint strategy of all players other than player i. Let S_i be the set of all probability distributions (mixtures) over A_i and, similarly, S be the set of all distributions over A. An s ∈ S is called a mixed strategy profile. When the game is finite (i.e., A and I are both finite), the probability that a ∈ A is played under s is written s(a) = s(a_i, a₋ᵢ). When the distribution s is not correlated, we can simply write s_i(a_i) for the probability that player i plays a_i under s. Next, we define the payoff (utility) function of each player i by u_i : A₁ × ··· × A_m → ℝ, where u_i(a_i, a₋ᵢ) indicates the payoff to player i for playing pure strategy a_i when the remaining players play a₋ᵢ. We can extend this definition to mixed strategies by assuming that the u_i are von Neumann-Morgenstern (vNM) utilities, as follows: u_i(s) = E_s[u_i], where E_s is the expectation taken with respect to the probability distribution of play induced by the players' mixed strategy s.
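The definitions above can be made concrete in a few lines. The following is a minimal sketch (not from the paper, with hypothetical payoff numbers) that computes the mixed-strategy payoff u_i(s) = E_s[u_i] and the deviation benefit ε(s) for a two-player finite game with independent mixed strategies.

```python
import numpy as np
from itertools import product

# Hypothetical 2-player finite game: A_i = {0, 1}, u[i][a1, a2] = payoff to player i.
u = [np.array([[2.0, 0.0], [3.0, 1.0]]),
     np.array([[2.0, 3.0], [0.0, 1.0]])]

def mixed_payoff(i, s):
    """u_i(s) = E_s[u_i] under an uncorrelated mixed profile s = (s1, s2)."""
    return sum(s[0][a1] * s[1][a2] * u[i][a1, a2]
               for a1, a2 in product(range(2), range(2)))

def epsilon(s):
    """epsilon(s): maximum benefit any player obtains by deviating from s."""
    gains = []
    for i in range(2):
        devs = []
        for ai in range(2):  # payoff from each pure deviation a_i against s_{-i}
            si = np.zeros(2)
            si[ai] = 1.0
            prof = [si, s[1]] if i == 0 else [s[0], si]
            devs.append(mixed_payoff(i, prof))
        gains.append(max(devs) - mixed_payoff(i, s))
    return max(gains)

# Profile (1, 1) is a pure Nash equilibrium of this game: epsilon == 0.
pure = [np.array([0.0, 1.0]), np.array([0.0, 1.0])]
print(epsilon(pure))
```

A profile s is an ε-Nash equilibrium exactly when `epsilon(s) <= eps`, matching definition (1); for the pure profile above the printed deviation benefit is zero.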
² By employing the normal form, we model agents as playing a single action, with decisions taken simultaneously. This is appropriate for our current study, which treats strategies (agent programs) as atomic actions. We could capture finer-grained decisions about action over time in the extensive form. Although any extensive game can be recast in normal form, doing so may sacrifice compactness and blur relevant distinctions (e.g., subgame perfection).

Occasionally, we write u_i(x, y) to mean that x ∈ A_i or S_i and y ∈ A₋ᵢ or S₋ᵢ, depending on context. We also express the set of utility functions of all players as u(·) = {u₁(·), ..., u_m(·)}. We define a function ε : R → ℝ, interpreted as the maximum benefit any player can obtain by deviating from its strategy in the specified profile:

$$\epsilon(r) = \max_{i \in I}\Big[\max_{a_i \in A_i} u_i(a_i, r_{-i}) - u_i(r)\Big], \qquad (1)$$

where r belongs to some strategy set, R, of either pure or mixed strategies. Faced with a game, an agent would ideally play its best strategy given those played by the other agents. A configuration where all agents play strategies that are best responses to the others constitutes a Nash equilibrium. When r ∈ A, the above defines a pure-strategy Nash equilibrium; otherwise the definition describes a mixed-strategy Nash equilibrium. We often appeal to the concept of an approximate, or ε-Nash, equilibrium, where ε is the maximum benefit to any agent for deviating from the prescribed strategy. Thus, ε(r) as defined above (1) is such that profile r is an ε-Nash equilibrium iff ε(r) ≤ ε. In this study we devote particular attention to games that exhibit symmetry with respect to payoffs, rendering agents strategically identical.

3. THE MODEL

4. EMPIRICAL DESIGN ANALYSIS
4.1 TAC/SCM Design Problem
4.2 Estimating Nash Equilibria
4.2.1 Payoff Function Approximation
4.2.2 Search in Strategy Profile Space
4.3 Data Generation
4.4 Results
4.4.1 Analysis of the Baseline Data Set
4.4.2 Analysis of Search Data
4.4.3 Extrapolating the Solution Correspondence
4.5 Probabilistic Analysis

5. CONVERGENCE RESULTS

6. RELATED WORK

7. CONCLUSION

Acknowledgments

8. REFERENCES

APPENDIX
A. PROOFS
A.1 Proof of Proposition 1
A.2 Proof of Proposition 2
A.3 Proof of Theorem 3
A.4 Proof of Lemma 5
A.5 Proof of Theorem 6
A.6 Proof of Theorem 7
Let Si be the set of all probability distributions (mixtures) over Ai and, similarly, S be the set of all distributions over A.\nAn s \u2208 S is called a mixed strategy profile.\nWhen the game is finite (i.e., A and I are both finite), the probability that a \u2208 A is played under s is written s (a) = s (ai, a_i).\nThis is appropriate for our current study, which treats strategies (agent programs) as atomic actions.\nWe could capture finer-grained decisions about action over time in the extensive form.\nAlthough any extensive game can be recast in normal form, doing so may sacrifice compactness and blur relevant distinctions (e.g., subgame perfection).\nWe also express the set of utility functions of all players as u (\u00b7) = {u1 (\u00b7),..., um (\u00b7)}.\nWe define a function, e: R--+ R, interpreted as the maximum benefit any player can obtain by deviating from its strategy in the specified profile.\nwhere r belongs to some strategy set, R, of either pure or mixed strategies.\nFaced with a game, an agent would ideally play its best strategy given those played by the other agents.\nA configuration where all agents play strategies that are best responses to the others constitutes a Nash equilibrium.\nWhen r \u2208 A, the above defines a pure strategy Nash equilibrium; otherwise the definition describes a mixed strategy Nash equilibrium.\nWe often appeal to the concept of an approximate, or e-Nash equilibrium, where e is the maximum benefit to any agent for deviating from the prescribed strategy.\nIn this study we devote particular attention to games that exhibit symmetry with respect to payoffs, rendering agents strategically identical.\n6.\nRELATED WORK\nThe mechanism design literature in Economics has typically explored existence of a mechanism that implements a social choice function in equilibrium [10].\nAdditionally, there is an extensive literature on optimal auction design [10], of which the work by Roger Myerson [11] is, perhaps, the most 
relevant.\nIn much of this work, analytical results are presented with respect to specific utility functions and accounting for constraints such as incentive compatibility and individual rationality.\nSeveral related approaches to search for the best mechanism exist in the Computer Science literature.\nConitzer and Sandholm [6] developed a search algorithm when all the relevant game parameters are common knowledge.\nWhen payoff functions of players are unknown, a search using simulations has been explored as an alternative.\nOne approach in that direction, taken in [4] and [15], is to co-evolve the mechanism parameter and agent strategies, using some notion of social utility and agent payoffs as fitness criteria.\nAn alternative to co-evolution explored in [16] was to optimize a well-defined welfare function of the designer using genetic programming.\nIn this work the authors used a common learning strategy for all agents and defined an outcome of a game induced by a mechanism parameter as the outcome of joint agent learning.\nMost recently, Phelps et al. 
[14] compared two mechanisms based on expected social utility with expectation taken over an empirical distribution of equilibria in games defined by heuristic strategies, as in [18].","lvl-2":"Empirical Mechanism Design: Methods, with Application to a Supply-Chain Scenario\nABSTRACT\nOur proposed methods employ learning and search techniques to estimate outcome features of interest as a function of mechanism parameter settings.\nWe illustrate our approach with a design task from a supply-chain trading competition.\nDesigners adopted several rule changes in order to deter particular procurement behavior, but the measures proved insufficient.\nOur empirical mechanism analysis models the relation between a key design parameter and outcomes, confirming the observed behavior and indicating that no reasonable parameter settings would have been likely to achieve the desired effect.\nMore generally, we show that under certain conditions, the estimator of optimal mechanism parameter setting based on empirical data is consistent.\n1.\nMOTIVATION\nWe illustrate our problem with an anecdote from a supply chain research exercise: the 2003 and 2004 Trading Agent Competition (TAC) Supply Chain Management (SCM) game.\nTAC\/SCM [1] defines a scenario where agents compete to maximize their profits as manufacturers in a supply chain.\nThe agents procure components from the various suppliers and assemble finished goods for sale to customers, repeatedly over a simulated year .1 1Information about TAC and the SCM game, including specifications, rules, and competition results, can be found at http:\/\/www.sics.se\/tac.\nAs it happened, the specified negotiation behavior of suppliers provided a great incentive for agents to procure large quantities of components on day 0: the very beginning of the simulation.\nDuring the early rounds of the 2003 SCM competition, several agent developers discovered this, and the apparent success led to most agents performing the majority of their 
purchasing on day 0. Although jockeying for day-0 procurement turned out to be an interesting strategic issue in itself [19], the phenomenon detracted from other interesting problems, such as adapting production levels to varying demand (since component costs were already sunk), and dynamic management of production, sales, and inventory. Several participants noted that the predominance of day-0 procurement overshadowed other key research issues, such as factory scheduling [2] and optimizing bids for customer orders [13]. After the 2003 tournament, there was a general consensus in the TAC community that the rules should be changed to deter large day-0 procurement.

The task facing game organizers can be viewed as a problem in mechanism design. The designers have certain game features under their control, and a set of objectives regarding game outcomes. Unlike most academic treatments of mechanism design, the objective is a behavioral feature (moderate day-0 procurement) rather than an allocation feature like economic efficiency, and the allowed mechanisms are restricted to those judged to require only an incremental modification of the current game. Replacing the supply-chain negotiation procedures with a one-shot direct mechanism, for example, was not an option. We believe that such operational restrictions and idiosyncratic objectives are actually quite typical of practical mechanism design settings, where they are perhaps more commonly characterized as incentive engineering problems.

In response to the problem, the TAC/SCM designers adopted several rule changes intended to penalize large day-0 orders. These included modifications to supplier pricing policies and the introduction of storage costs assessed on inventories of components and finished goods. Despite the changes, day-0 procurement was very high in the early rounds of the 2004 competition. In a drastic measure, the GameMaster imposed a fivefold increase of storage costs midway through the
tournament. Even this did not stem the tide, and day-0 procurement in the final rounds actually increased (by some measures) from 2003 [9]. The apparent difficulty in identifying rule modifications that effect moderation in day-0 procurement is quite striking. Although the designs were widely discussed, predictions for the effects of various proposals were supported primarily by intuitive arguments or at best by back-of-the-envelope calculations. Much of the difficulty, of course, is anticipating the agents' (and their developers') responses without essentially running a gaming exercise for this purpose.

The episode caused us to consider whether new approaches or tools could enable more systematic analysis of design options. Standard game-theoretic and mechanism design methods are clearly relevant, although the lack of an analytic description of the game seems to be an impediment. Under the assumption that the simulator itself is the only reliable source of outcome computation, we refer to our task as empirical mechanism design. In the sequel, we develop some general methods for empirical mechanism design and apply them to the TAC/SCM redesign problem. Our analysis focuses on the setting of storage costs (taking other game modifications as fixed), since this is the most direct deterrent to early procurement among the measures adopted. Our results confirm the basic intuition that incentives for day-0 purchasing decrease as storage costs rise. We also confirm that the high day-0 procurement observed in the 2004 tournament is a rational response to the setting of storage costs used. Finally, we conclude from our data that it is very unlikely that any reasonable setting of storage costs would result in acceptable levels of day-0 procurement, so a different design approach would have been required to eliminate this problem. Overall, we contribute a formal framework and a set of methods for tackling indirect mechanism design problems in settings where only a black-box
description of players' utilities is available. Our methods incorporate estimation of sets of Nash equilibria and sample Nash equilibria, used in conjunction to support general claims about the structure of the mechanism designer's utility, as well as a restricted probabilistic analysis to assess the likelihood of conclusions. We believe that most realistic problems are too complex to be amenable to exact analysis. Consequently, we advocate the approach of gathering evidence to provide indirect support of specific hypotheses.

2. PRELIMINARIES

A normal-form game is denoted by [I, {Ri}, {ui(r)}], where I refers to the set of players and m = |I| is the number of players. Ri is the set of strategies available to player i ∈ I, with R = R1 × ... × Rm representing the set of joint strategies of all players. We designate the set of pure strategies available to player i by Ai, and denote the joint set of pure strategies of all players by A = A1 × ... × Am. It is often convenient to refer to a strategy of player i separately from that of the remaining players. To accommodate this, we use a−i to denote the joint strategy of all players other than player i. Let Si be the set of all probability distributions (mixtures) over Ai and, similarly, S the set of all distributions over A. An s ∈ S is called a mixed strategy profile. When the game is finite (i.e., A and I are both finite), the probability that a ∈ A is played under s is written s(a) = s(ai, a−i). When the distribution s is not correlated, we can simply write si(ai) for the probability that player i plays ai under s.
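The notation above can be made concrete with a short Python sketch. The two-player game and the mixture probabilities below are hypothetical illustrations (not taken from the paper); the sketch enumerates the joint pure-strategy set A = A1 × A2 and evaluates s(a) as the product of the independent mixtures si(ai).

```python
import itertools

def joint_profiles(action_sets):
    """Enumerate A = A1 x ... x Am as tuples of pure strategies."""
    return list(itertools.product(*action_sets))

def profile_probability(s, a):
    """s(a) for an uncorrelated mixed profile: the product of si(ai)."""
    prob = 1.0
    for si, ai in zip(s, a):
        prob *= si[ai]
    return prob

# Hypothetical two-player game with Ai = {'L', 'H'} for both players.
A = joint_profiles([('L', 'H'), ('L', 'H')])

# An uncorrelated mixed profile s = (s1, s2), one distribution per player.
s = [{'L': 0.25, 'H': 0.75}, {'L': 0.5, 'H': 0.5}]

# The probabilities s(a) over all joint pure profiles sum to one.
total = sum(profile_probability(s, a) for a in A)
```

Here `joint_profiles` plays the role of A = A1 × ... × Am, and `profile_probability` computes s(a) = s1(a1) · s2(a2) for the finite, uncorrelated case discussed above.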
Next, we define the payoff (utility) function of each player i by ui : A1 × ... × Am → ℝ, where ui(ai, a−i) indicates the payoff to player i of playing pure strategy ai when the remaining players play a−i. We can extend this definition to mixed strategies by assuming that the ui are von Neumann-Morgenstern (vNM) utilities: ui(s) = Es[ui], where Es is the expectation taken with respect to the probability distribution of play induced by the players' mixed strategy s. (Footnote 2: By employing the normal form, we model agents as playing a single action, with decisions taken simultaneously. This is appropriate for our current study, which treats strategies (agent programs) as atomic actions. We could capture finer-grained decisions about action over time in the extensive form. Although any extensive game can be recast in normal form, doing so may sacrifice compactness and blur relevant distinctions (e.g., subgame perfection).) Occasionally, we write ui(x, y) to mean that x ∈ Ai or Si and y ∈ A−i or S−i, depending on context. We also express the set of utility functions of all players as u(·) = {u1(·), ..., um(·)}.

We define a function ε : R → ℝ, interpreted as the maximum benefit any player can obtain by deviating from its strategy in the specified profile:

ε(r) = max_{i ∈ I} sup_{x ∈ Ri} [ui(x, r−i) − ui(r)],   (1)

where r belongs to some strategy set, R, of either pure or mixed strategies. Faced with a game, an agent would ideally play its best strategy given those played by the other agents. A configuration where all agents play strategies that are best responses to the others constitutes a Nash equilibrium. When r ∈ A, the above defines a pure-strategy Nash equilibrium; otherwise the definition describes a mixed-strategy Nash equilibrium. We often appeal to the concept of an approximate, or ε-Nash, equilibrium, where ε is the maximum benefit to any agent for deviating from the prescribed strategy. Thus, ε(r) as defined in (1) is such that profile r is
an ε-Nash equilibrium iff ε(r) ≤ ε. In this study we devote particular attention to games that exhibit symmetry with respect to payoffs, rendering agents strategically identical.

3. THE MODEL

We model the strategic interactions between the designer of the mechanism and its participants as a two-stage game. The designer moves first by selecting a value θ from a set of allowable mechanism settings, Θ. All the participant agents observe the mechanism parameter θ and move simultaneously thereafter. For example, the designer could be deciding between first-price and second-price sealed-bid auction mechanisms, with the presumption that after the choice has been made, the bidders will participate with full awareness of the auction rules. Since the participants play with full knowledge of the mechanism parameter, we define a game between them in the second stage as Γθ = [I, {Ri}, {ui(r, θ)}]. We refer to Γθ as a game induced by θ. Let N(θ) be the set of strategy profiles considered solutions of the game Γθ. (Footnote 3: We generally adopt Nash equilibrium as the solution concept, and thus take N(θ) to be the set of equilibria. However, much of the methodology developed here could be employed with alternative criteria for deriving agent behavior from a game definition.)

Suppose that the goal of the designer is to optimize the value of some welfare function W(r, θ), dependent on the mechanism parameter and the resulting play r. We define a pessimistic measure, W(R̂, θ) = inf{W(r, θ) : r ∈ R̂}, representing the worst-case welfare of the game induced by θ, assuming that agents play some joint strategy in R̂. Typically we care about W(N(θ), θ), the worst-case outcome of playing some solution. (Footnote 4: Again, alternatives are available. For example, if one has a probability distribution over the solution set N(θ), it would be natural to take the expectation of W(r, θ) instead.)

On some problems we can gain considerable advantage by using an aggregation function to map the welfare outcome of a game specified in terms of agent strategies to an equivalent welfare outcome specified in terms of a lower-dimensional summary. We overload the function symbol to apply to sets of strategy profiles: φ(R̂) = {φ(r) : r ∈ R̂}. For convenience of exposition, we write φ∗(θ) to mean φ(N(θ)). Using an aggregation function yields a more compact representation of strategy profiles. For example, suppose, as in our application below, that an agent's strategy is defined by a numeric parameter. If all we care about is the total value played, we may take φ(a) = Σ_{i=1}^{m} ai. If we have chosen our aggregator carefully, we may also capture structure not obvious otherwise. For example, φ∗(θ) could be decreasing in θ, whereas N(θ) might have a more complex structure.

Given a description of the solution correspondence N(θ) (equivalently, φ∗(θ)), the designer faces a standard optimization problem. Alternatively, given a simulator that could produce an unbiased sample from the distribution of W(N(θ), θ) for any θ, the designer would face another well-studied problem in the literature: simulation optimization [12]. However, even for a game Γθ with known payoffs it may be computationally intractable to solve for Nash equilibria, particularly if the game has large or infinite strategy sets. Additionally, we wish to study games where the payoffs are not explicitly given, but must be determined from simulation or other experience with the game. Accordingly, we assume that we are given a (possibly noisy) data set of payoff realizations: Do = {(θ1, a1, U1), ..., (θk, ak, Uk)}, where for every data point θi is the observed mechanism parameter setting, ai is the observed pure strategy profile of the participants, and Ui is the
corresponding realization of agent payoffs. We may also have additional data generated by a (possibly noisy) simulator: Ds = {(θk+1, ak+1, Uk+1), ..., (θk+l, ak+l, Uk+l)}. Let D = {Do, Ds} be the combined data set. (Either Do or Ds may be null for a particular problem.) In the remainder of this paper, we apply our modeling approach, together with several empirical game-theoretic methods, in order to answer questions regarding the design of the TAC/SCM scenario.

4. EMPIRICAL DESIGN ANALYSIS

Since our data comes in the form of payoff experience and not as the value of an objective function for given settings of the control variable, we can no longer rely on methods for optimizing functions using simulations. Indeed, a fundamental aspect of our design problem involves estimating the Nash equilibrium correspondence. Furthermore, we cannot rely directly on the convergence results that abound in the simulation optimization literature, and must establish probabilistic analysis methods tailored for our problem setting.

4.1 TAC/SCM Design Problem

We describe our empirical design analysis methods by presenting a detailed application to the TAC/SCM scenario introduced above. Recall that during the 2004 tournament, the designers of the supply-chain game chose to dramatically increase storage costs as a measure aimed at curbing day-0 procurement, to little avail. Here we systematically explore the relationship between storage costs and the aggregate quantity of components procured on day 0 in equilibrium. (Footnote 5: This is often the case for real games of interest, where natural language or algorithmic descriptions may substitute for a formal specification of strategy and payoff functions.)

In doing so, we consider several questions raised during and after the tournament. First, does increasing storage costs actually reduce day-0 procurement? Second, was the excessive day-0 procurement observed during the 2004 tournament rational? And third, could increasing storage costs sufficiently have reduced day-0 procurement to an "acceptable" level, and if so, what should the setting of storage costs have been? It is this third question that defines the mechanism design aspect of our analysis. (Footnote 6: We do not address whether and how other measures (e.g., constraining procurement directly) could have achieved design objectives. Our approach takes as given some set of design options, in this case defined by the storage cost parameter. In principle our methods could be applied to a different or larger design space, though with corresponding complexity growth.)

To apply our methods, we must specify the agent strategy sets, the designer's welfare function, the mechanism parameter space, and the source of data. We restrict the agent strategies to be a multiplier on the quantity of the day-0 requests by one of the finalists, Deep Maize, in the 2004 TAC/SCM tournament. We further restrict this multiplier to the set [0, 1.5], since any strategy below 0 is illegal and strategies above 1.5 are extremely aggressive (thus unlikely to provide refuting deviations beyond those available from the included strategies, and certainly not part of any desirable equilibrium). All other behavior is based on the behavior of Deep Maize and is identical for all agents. This choice can provide only an estimate of the actual tournament behavior of a "typical" agent. However, we believe that the general form of the results should be robust to changes in the full agent behavior.

We model the designer's welfare function as a threshold on the sum of day-0 purchases. Let φ(a) = Σ_{i=1}^{6} ai be the aggregation function representing the sum of day-0 procurement of the six agents participating in a particular supply-chain game (for mixed strategy profiles s, we take the expectation of φ with respect to the mixture). The designer's welfare function W(N(θ), θ) is then given by I{sup φ∗(θ) ≤ α}, where α is the maximum acceptable level of day-0 procurement and I is the indicator function. The designer selects a value θ of storage costs, expressed as an annual percentage of the baseline value of components in the inventory (charged daily), from the set Θ = ℝ+. Since the designer's decision depends only on φ∗(θ), we present all of our results in terms of the value of the aggregation function.

4.2 Estimating Nash Equilibria

The objective of TAC/SCM agents is to maximize profits realized over a game instance. Thus, if we fix a strategy for each agent at the beginning of the simulation and record the corresponding profits at the end, we obtain a data point of the form (a, U(a)). If we have also fixed the parameter θ of the simulator, the resulting data point becomes part of our data set D. This data set contains data only in the form of pure strategies of players and their corresponding payoffs; consequently, in order to formulate the designer's problem as optimization, we must first determine or approximate the set of Nash equilibria of each game Γθ. Thus, we need methods for approximating Nash equilibria in infinite games. Below, we describe the two methods used in our study. The first has been explored empirically before, whereas the second is introduced here as a method specifically designed to approximate a set of Nash equilibria.

4.2.1 Payoff Function Approximation

The first method for estimating Nash equilibria from data uses supervised learning to approximate the payoff functions of mechanism participants from a data set of game experience [17]. Once approximate payoff functions are available for all players, the Nash equilibria may be either found analytically or approximated using numerical techniques, depending on the learning model. In what follows, we estimate only a sample Nash equilibrium using this technique, although this restriction can be removed at the
expense of additional computation time. One advantage of this method is that it can be applied to any data set and does not require the use of a simulator. Thus, we can apply it when Ds = ∅. If a simulator is available, we can generate additional data to build confidence in our initial estimates. We tried the following methods for approximating payoff functions: quadratic regression (QR), locally weighted average (LWA), and locally weighted linear regression (LWLR). We also used control variates to reduce the variance of payoff estimates, as in our previous empirical game-theoretic analysis of TAC/SCM-03 [19]. The quadratic regression model makes it possible to compute equilibria of the learned game analytically. For the other methods we applied replicator dynamics [7] to a discrete approximation of the learned game. The expected total day-0 procurement in equilibrium was taken as the estimate of an outcome.

4.2.2 Search in Strategy Profile Space

When we have access to a simulator, we can also use directed search through profile space to estimate the set of Nash equilibria, which we describe here after introducing some additional notation. The strategic neighbors Snb(a, D) of a pure profile a are the profiles in the data set D that differ from a in the strategy of exactly one player; for a neighbor a′, let i(a, a′) denote that deviating player, and let Snb(a, D̃) denote the strategic neighbors of a that are missing from the data. The ε-bound of a, written ε̂(a), is defined as max_{a′ ∈ Snb(a, D)} max{u_{i(a,a′)}(a′) − u_{i(a,a′)}(a), 0}. We say that a is a candidate δ-equilibrium for δ ≥ ε̂(a). When Snb(a, D̃) = ∅ (i.e., all strategic neighbors are represented in the data), a is confirmed as an ε̂(a)-Nash equilibrium. Our search method operates by exploring deviations from candidate equilibria. We refer to it as "BestFirstSearch", as it selects with probability one a strategy profile a′ ∈ Snb(a, D̃) that has the smallest ε̂ in D. Finally, we define an estimator for a set of Nash equilibria.

DEFINITION 6. For a set K, define Co(K) to be the convex hull of K.
Let Bδ be the set of candidates at level δ. We define φ̂∗(θ) = Co({φ(a) : a ∈ Bδ}) for a fixed δ to be an estimator of φ∗(θ).

In words, the estimate of a set of equilibrium outcomes is the convex hull of all aggregated strategy profiles with ε-bound below some fixed δ. This definition allows us to exploit structure arising from the aggregation function. If two profiles are close in terms of aggregation values, they may be likely to have similar ε-bounds. In particular, if one is an equilibrium, the other may be as well. We present some theoretical support for this method of estimating the set of Nash equilibria below. Since the game we are interested in is infinite, it is necessary to terminate BestFirstSearch before exploring the entire space of strategy profiles. (Footnote 7: For example, we can use active learning techniques [5] to improve the quality of payoff function approximation. In this work, we instead concentrate on search in strategy profile space.) We currently determine termination time in a somewhat ad hoc manner, based on observations about the current set of candidate equilibria.

4.3 Data Generation

Our data was collected by simulating TAC/SCM games on a local version of the 2004 TAC/SCM server, which has a configuration setting for the storage cost. Agent strategies in simulated games were selected from the set {0, 0.3, 0.6, ..., 1.5} in order to have positive probability of generating strategic neighbors. A baseline data set Do was generated by sampling 10 randomly generated strategy profiles for each θ ∈ {0, 50, 100, 150, 200}. Between 5 and 10 games were run for each profile after discarding games that had various flaws. We used search to generate a simulated data set Ds, performing between 12 and 32 iterations of BestFirstSearch for each of the above settings of θ. Since simulation cost is extremely high (a game takes nearly 1 hour to run),
we were able to run a total of 2670 games over the span of more than six months. For comparison, obtaining the entire description of an empirical game defined by the restricted finite joint strategy space for each value of θ ∈ {0, 50, 100, 150, 200} would have required at least 23100 games (sampling each profile 10 times).

4.4 Results

4.4.1 Analysis of the Baseline Data Set

We applied the three learning methods described above to the baseline data set Do. Additionally, we generated an estimate of the Nash equilibrium correspondence, φ̂∗(θ), by applying Definition 6 with δ = 2.5 × 10^6. The results are shown in Figure 1. As we can see, the correspondence φ̂∗(θ) has little predictive power based on Do, and reveals no interesting structure about the game. In contrast, all three learning methods suggest that total day-0 procurement is a decreasing function of storage costs.

Figure 1: Aggregate day-0 procurement estimates based on Do. The correspondence φ̂∗(θ) is the interval between "BaselineMin" and "BaselineMax".

(Footnote 8: Generally, search is terminated once the set of candidate equilibria is small enough to draw useful conclusions about the likely range of equilibrium strategies in the game.)
(Footnote 9: Of course, we do not restrict our Nash equilibrium estimates to stay in this discrete subset of [0, 1.5].)
(Footnote 10: For example, if we detected that any agent failed during the game (failures included crashes, network connectivity problems, and other obvious anomalies), the game would be thrown out.)

4.4.2 Analysis of Search Data

To corroborate the initial evidence from the learning methods, we estimated φ̂∗(θ) (again, using δ = 2.5 × 10^6) on the data set D = {Do, Ds}, where Ds is the data generated through the application of BestFirstSearch. The results of this estimate are plotted against the results of the learning methods trained on Do in Figure 2. First, we note that the addition of
the search data narrows the range of potential equilibria substantially. Furthermore, the actual point predictions of the learning methods and those based on ε-bounds after search are reasonably close. Combining the evidence gathered from these two very different approaches to estimating the outcome correspondence yields a much more compelling picture of the relationship between storage costs and day-0 procurement than either method used in isolation.

Figure 2: Aggregate day-0 procurement estimates based on search in strategy profile space compared to function approximation techniques trained on D_o. The correspondence φ̂*(θ) for D = {D_o, D_s} is the interval between "SearchMin" and "SearchMax".

This evidence supports the initial intuition that day-0 procurement should be decreasing with storage costs. It also confirms that high levels of day-0 procurement are a rational response to the 2004 tournament setting of average storage cost, which corresponds to θ = 100. The minimum prediction for aggregate procurement at this level of storage costs given by any of our experimental methods is approximately 3. This is quite high, as it corresponds to an expected commitment of 1/3 of the total supplier capacity for the entire game. The maximum prediction is considerably higher, at 4.5. In the actual 2004 competition, aggregate day-0 procurement was equivalent to 5.71 on the scale used here [9]. Our predictions underestimate this outcome to some degree, but show that any rational outcome was likely to have high day-0 procurement.

4.4.3 Extrapolating the Solution Correspondence

We have reasonably strong evidence that the outcome correspondence is decreasing. However, the ultimate goal is to be able either to set the storage cost parameter to a value that would curb day-0 procurement in equilibrium or to conclude that this is not possible. To answer this question directly, suppose that we set a conservative threshold α = 2 on aggregate day-0
procurement.12 Linear extrapolation of the maximum of the outcome correspondence estimated from D yields θ = 320. The data for θ = 320 were collected in the same way as for the other storage cost settings, with 10 randomly generated profiles followed by 33 iterations of BestFirstSearch. Figure 3 shows the detailed ε-bounds for all profiles explored, in terms of their corresponding values of φ(a).

Figure 3: Values of ε̂ for profiles explored using search when θ = 320. Strategy profiles explored are presented in terms of the corresponding values of φ(a). The gray region corresponds to φ̂*(320) with δ = 2.5 × 10^6.

The estimated set of aggregate day-0 outcomes is very close to that for θ = 200, indicating that there is little additional benefit to raising storage costs above 200. Observe that even the lower bound of our estimated set of Nash equilibria is well above the target day-0 procurement of 2. Furthermore, payoffs to agents are almost always negative at θ = 320. Consequently, increasing the costs further would be undesirable even if day-0 procurement could eventually be curbed. Since we are reasonably confident that φ*(θ) is decreasing in θ, we also do not expect that setting θ somewhere between 200 and 320 will achieve the desired result. We conclude that it is unlikely that day-0 procurement could ever be reduced to a desirable level using any reasonable setting of the storage cost parameter. That our predictions tend to underestimate tournament outcomes reinforces this conclusion. To achieve the desired reduction in day-0 procurement requires redesigning other aspects of the mechanism.

11 It is unclear how meaningful the results of learning would be if D_s were added to the training data set. Indeed, the additional data may actually increase the learning variance.
12 Recall that the designer's objective is to incentivize aggregate day-0 procurement below the threshold α. Our threshold here still represents a commitment of over 20% of the suppliers' capacity for the entire game on average, so in practice we would probably want the threshold to be even lower.

4.5 Probabilistic Analysis

Our empirical analysis has produced evidence in support of the conclusion that no reasonable setting of storage cost was likely to sufficiently curb excessive day-0 procurement in TAC/SCM'04. All of this evidence has been in the form of simple interpolation and extrapolation of estimates of the Nash equilibrium correspondence. These estimates are based on simulating game instances, and are subject to the sampling noise contributed by the various stochastic elements of the game. In this section, we develop and apply methods for evaluating the sensitivity of our ε-bound calculations to such stochastic effects.

Suppose that all agents have finite (and small) pure strategy sets, A. Thus, it is feasible to sample the entire payoff matrix of the game. Additionally, suppose that noise is additive with zero mean and finite variance, that is, U_i(a) = u_i(a) + ξ̃_i(a), where U_i(a) is the observed payoff to i when a was played, u_i(a) is the actual corresponding payoff, and ξ̃_i(a) is a mean-zero normal random variable. We designate the known variance of ξ̃_i(a) by σ_i²(a); thus, ξ̃_i(a) has distribution N(0, σ_i²(a)). We take ū_i(a) to be the sample mean over all U_i(a) in D, and follow Chang and Huang [3] in assuming that we have an improper prior over the actual payoffs u_i(a) and that sampling was independent for all i and a. We also rely on their result that u_i(a) | ū_i(a) = ū_i(a) − Z_i(a)·σ_i(a)/√(n_i(a)) are independent with posterior distributions N(ū_i(a), σ_i²(a)/n_i(a)), where n_i(a) is the number of samples taken of payoffs to i for pure profile a, and Z_i(a) ∼ N(0, 1). We now derive a generic probabilistic bound that a profile a ∈ A is an ε-Nash
equilibrium. If u_i(·) | ū_i(·) are independent for all i ∈ I and a ∈ A, we obtain the result (2), where f_{u_i(a)}(u) is the pdf of N(ū_i(a), σ_i²(a)/n_i(a)) (from this point on we omit conditioning on ū_i(·) for brevity). The proofs of this and all subsequent results are in the Appendix. The posterior distribution of the optimum mean of n samples, derived by Chang and Huang [3], is given by (3), where a ∈ A and Φ(·) is the N(0, 1) distribution function. Combining the results (2) and (3), we obtain a probabilistic confidence bound that ε(a) ≤ γ for a given γ.

Now, we consider cases of incomplete data and use the results we have just obtained to construct an upper bound (restricted to profiles represented in the data) on the distribution of sup{φ*(θ)} and inf{φ*(θ)} (assuming that both are attainable).

Table 2: Upper bounds on the distribution of inf{φ*(θ)} restricted to D for θ ∈ {150, 200, 320} when N(θ) is a set of Nash equilibria.

Tables 1 and 2 suggest that the existence of any equilibrium with φ(a) < 2.7 is unlikely for any θ that we have data for, although this judgment, as we mentioned, is only with respect to the profiles we have actually sampled. We can then accept this as another piece of evidence that the designer could not find a suitable setting of θ to achieve his objectives; indeed, the designer seems unlikely to achieve his objective even if he could persuade participants to play a desirable equilibrium! Table 1 also provides additional evidence that the agents in the 2004 TAC/SCM tournament were indeed rational in procuring large numbers of components at the beginning of the game. If we look at the third column of this table, which corresponds to θ = 100, we can gather that no profile a in our data with φ(a) < 3 is very likely to be played in equilibrium. The bounds above
provide some general evidence, but ultimately we are interested in a concrete probabilistic assessment of our conclusion with respect to the data we have sampled. In particular, we would like to say something about what happens for the settings of θ for which we have no data. To derive an approximate bound on the probability that no θ ∈ Θ could have achieved the designer's objective, let ∪_{j=1}^{J} Θ_j be a partition of Θ, and assume that the function sup{φ*(θ)} satisfies the Lipschitz condition with Lipschitz constant A_j on each subset Θ_j.13 Since we have determined that raising the storage cost above 320 is undesirable due to secondary considerations, we restrict attention to Θ = [0, 320]. We now define each subset j to be the interval between two points for which we have produced data, with j running between 1 and 5, corresponding to the subintervals above. We will further denote each Θ_j by (a_j, b_j].14 Then, the following Proposition gives an approximate upper bound15 on the probability that sup{φ*(θ)} ≤ α, where x is a real number and ≤_D indicates that the upper bound accounts only for strategies that appear in the data set D. Since the events {∃ a ∈ D : φ(a) ≤ x ∧ a ∈ N(θ)} and {inf{φ*(θ)} ≤ x} are equivalent, this also defines an upper bound on the probability of {inf{φ*(θ)} ≤ x}. The values thus derived comprise Tables 1 and 2.

Table 1: Upper bounds on the distribution of inf{φ*(θ)} restricted to D for θ ∈ {0, 50, 100} when N(θ) is a set of Nash equilibria.

Here c_j = 2α + A_j(b_j − a_j), and ≤_D again indicates that the upper bound only accounts for strategies that appear in the data set D. Because our bounds are approximate, we cannot use them as a conclusive probabilistic
assessment. Instead, we take this as another piece of evidence to complement our findings.

Even if we can assume that a function that we approximate from data is Lipschitz continuous, we rarely actually know the Lipschitz constant on any subset of Θ. Thus, we are faced with the task of estimating it from data. Here, we tried three methods of doing so. The first simply takes the highest slope that the function attains within the available data and uses this constant value for every subinterval. This produces the most conservative bound, and in many situations it is unlikely to be informative. An alternative method is to take an upper bound on the slope obtained within each subinterval using the available data. This produces a much less conservative upper bound on the probabilities. However, since the actual upper bound is generally greater for each subinterval, the resulting probabilistic bound may be deceiving. A final method that we tried is a compromise between the two above. Instead of taking the conservative upper bound based on data over the entire function domain Θ, we take the average of the upper bounds obtained at each Θ_j. The bound at an interval is then taken to be the maximum of the upper bound for this interval and the average upper bound over all intervals. The results of evaluating the expression are given in Table 3.

Table 3: Approximate upper bound on the probability that some setting of θ ∈ [0, 320] will satisfy the designer's objective with target α = 2. Different methods of approximating the upper bound on the slope in each subinterval j are used.

In this work, the expression gives an upper bound on the probability that some setting of θ (i.e., storage cost) in the interval [0, 320] will result in total day-0 procurement that is no greater in any equilibrium than the target specified by α, taken here to be 2. As we had suspected, the most conservative approach to estimating the upper bound on slope, presented in the first
column of the table, provides us little information here. However, the other two estimation approaches, found in columns two and three of Table 3, suggest that we can be quite confident that no reasonable setting of θ ∈ [0, 320] would have done the job. Given the tremendous difficulty of the problem, this result is very strong.16 Still, we must be cautious in drawing too heroic a conclusion based on this evidence. Certainly, we have not "checked" all the profiles, but only a small proportion of them (infinitesimal, if we consider the entire continuous domain of θ and the strategy sets). Nor can we expect ever to obtain enough evidence to make completely objective conclusions. Instead, the approach we advocate here is to collect as much evidence as is feasible given resource constraints, and to make the most compelling judgment based on this evidence, if at all possible.

16 Since we did not have all the possible deviations for any profile available in the data, the true upper bounds may be even lower.

5. CONVERGENCE RESULTS

At this point, we explore abstractly whether a design parameter choice based on payoff data can be asymptotically reliable. As a matter of convenience, we will use the notation u_{n,i}(a) to refer to a payoff function of player i based on an average over n i.i.d. samples from the distribution of payoffs. We also assume that the u_{n,i}(a) are independent for all a ∈ A and i ∈ I. We will use the notation Γ_n to refer to the game [I, R, {u_{n,i}(·)}], whereas Γ will denote the "underlying" game, [I, R, {u_i(·)}]. Similarly, we define ε_n(r) to be ε(r) with respect to the game Γ_n. In this section, we show that ε_n(s) → ε(s) a.s.
uniformly on the mixed strategy space for any finite game and, furthermore, that all mixed-strategy Nash equilibria in empirical games eventually become arbitrarily close to some Nash equilibrium strategies in the underlying game. We use these results to show that under certain conditions, the optimal choice of the design parameter based on empirical data converges almost surely to the actual optimum.

PROOF. Since ε(s) = 0 for every s ∈ N, we can find M large enough such that Pr{sup_{n≥M} sup_{s∈N} ε_n(s) < γ} = 1.

By the Corollary, for any game with a finite set of pure strategies and for any ε > 0, all Nash equilibria lie in the set of empirical ε-Nash equilibria if enough samples have been taken. As we now show, this provides some justification for our use of a set of profiles with a non-zero ε-bound as an estimate of the set of Nash equilibria. First, suppose we conclude that for a particular setting of θ, sup{φ̂*(θ)} ≤ α. Then, since for any fixed ε > 0 we have N(θ) ⊂ N_{n,ε}(θ) when n is large enough, sup{φ*(θ)} ≤ sup{φ̂*(θ)} ≤ α for any such n.
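The quantity ε_n(s), and its limit ε(s) = max_i [max_{a_i∈A_i} u_i(a_i, s_{−i}) − u_i(s)] recalled in the Appendix, is simply the maximum benefit any player can obtain by unilaterally deviating. For a small finite game it can be computed directly; the sketch below uses an illustrative two-player bimatrix game (matching pennies), not TAC/SCM payoffs:

```python
def regret(U1, U2, s1, s2):
    """epsilon(s) = max_i [ max_{a_i} u_i(a_i, s_{-i}) - u_i(s) ] for a
    two-player bimatrix game with payoff matrices U1, U2 (player 1 is the
    row player; s1, s2 are mixed strategies over rows and columns)."""
    m, k = len(s1), len(s2)
    # Expected payoff of each player under the mixed profile (s1, s2).
    u1 = sum(s1[i] * s2[j] * U1[i][j] for i in range(m) for j in range(k))
    u2 = sum(s1[i] * s2[j] * U2[i][j] for i in range(m) for j in range(k))
    # Best pure-strategy response payoffs against the opponent's mixture.
    best1 = max(sum(s2[j] * U1[i][j] for j in range(k)) for i in range(m))
    best2 = max(sum(s1[i] * U2[i][j] for i in range(m)) for j in range(k))
    return max(best1 - u1, best2 - u2)

# Matching pennies: the uniform mixed profile is the unique Nash equilibrium,
# so its regret is 0; a pure profile has positive regret.
U1 = [[1, -1], [-1, 1]]
U2 = [[-1, 1], [1, -1]]
print(regret(U1, U2, [0.5, 0.5], [0.5, 0.5]))  # -> 0.0
print(regret(U1, U2, [1.0, 0.0], [0.0, 1.0]))  # -> 2.0
```

Replacing the exact payoffs U1, U2 with sample means ū_{n,i} gives the empirical quantity ε_n(s); the convergence results of this section say this computed value converges to ε(s) uniformly over mixed profiles.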
Thus, since we defined the welfare function of the designer to be I{sup{φ*(θ)} ≤ α} in our domain of interest, the empirical choice of θ satisfies the designer's objective, thereby maximizing his welfare function. Alternatively, suppose we conclude that inf{φ̂*(θ)} > α for every θ. Then inf{φ*(θ)} ≥ inf{φ̂*(θ)} > α for every θ, and we can conclude that no setting of θ will satisfy the designer's objective.

Now, we will show that when the number of samples is large enough, every Nash equilibrium of Γ_n is close to some Nash equilibrium of the underlying game. This result will lead us to consider convergence of optimizers based on empirical data to actual optimal mechanism parameter settings. We first note that the function ε(s) is continuous in a finite game.

LEMMA 5. Let S be a mixed strategy set defined on a finite game. Then ε: S → R is continuous.

For the exposition that follows, we need a bit of additional notation. First, let (Z, d) be a metric space with X, Y ⊂ Z, and define the directed Hausdorff distance from X to Y to be h(X, Y) = sup_{x∈X} inf_{y∈Y} d(x, y). Observe that U ⊂ X ⇒ h(U, Y) ≤ h(X, Y). Further, define B_S(x, δ) to be an open ball in S ⊂ Z with center x ∈ S and radius δ. Now, let N_n denote all Nash equilibria of the game Γ_n, and let N_δ = ∪_{s*∈N} B_S(s*, δ), that is, the union of open balls of radius δ with centers at the Nash equilibria of Γ. Note that h(N_δ, N) = δ. We can then prove the following general result.

We will now show that in the special case when Θ and A are finite and each Γ_θ has a unique Nash equilibrium, the estimates θ̂ of the optimal designer parameter converge to an actual optimizer almost surely. Let θ̂ = arg max_{θ∈Θ} W(N_n(θ), θ), where n is the number of times each pure profile was sampled in Γ_θ for every θ, and let θ* = arg max_{θ∈Θ} W(N(θ), θ). The shortcoming of the above result is that, within our framework, the designer has
no way of knowing or ensuring that the Γ_θ do, indeed, have unique equilibria. However, it does lend some theoretical justification for pursuing design in this manner and, perhaps, will serve as a guide for more general results in the future.

6. RELATED WORK

The mechanism design literature in Economics has typically explored the existence of a mechanism that implements a social choice function in equilibrium [10]. Additionally, there is an extensive literature on optimal auction design [10], of which the work by Roger Myerson [11] is perhaps the most relevant. In much of this work, analytical results are presented with respect to specific utility functions, accounting for constraints such as incentive compatibility and individual rationality.

Several related approaches to searching for the best mechanism exist in the Computer Science literature. Conitzer and Sandholm [6] developed a search algorithm for the case when all the relevant game parameters are common knowledge. When the payoff functions of players are unknown, search using simulations has been explored as an alternative. One approach in that direction, taken in [4] and [15], is to co-evolve the mechanism parameter and agent strategies, using some notion of social utility and agent payoffs as fitness criteria. An alternative to co-evolution, explored in [16], was to optimize a well-defined welfare function of the designer using genetic programming. In that work the authors used a common learning strategy for all agents and defined the outcome of a game induced by a mechanism parameter as the outcome of joint agent learning. Most recently, Phelps et al.
[14] compared two mechanisms based on expected social utility, with the expectation taken over an empirical distribution of equilibria in games defined by heuristic strategies, as in [18].

7. CONCLUSION

In this work we spent considerable effort developing general tactics for empirical mechanism design. We defined a formal game-theoretic model of interaction between the designer and the participants of the mechanism as a two-stage game. We also described in some generality methods for estimating a sample Nash equilibrium function when data is extremely scarce, or a Nash equilibrium correspondence when more data is available. Our techniques are designed specifically to deal with problems in which both the mechanism parameter space and the agent strategy sets are infinite and only a relatively small data set can be acquired. A difficult design issue in the TAC/SCM game, which the TAC community has been eager to address, provides us with a setting in which to test our methods.

In applying empirical game analysis to the problem at hand, we are fully aware that our results are inherently inexact. Thus, we concentrate on collecting evidence about the structure of the Nash equilibrium correspondence. In the end, we can try to provide enough evidence either to prescribe a parameter setting, or to suggest that no setting is possible that will satisfy the designer. In the case of TAC/SCM, our evidence suggests quite strongly that storage cost could not have been effectively adjusted in the 2004 tournament to curb excessive day-0 procurement without detrimental effects on overall profitability. The success of our analysis in this extremely complex environment with high simulation costs makes us optimistic that our methods can provide guidance in making mechanism design decisions in other challenging domains. The theoretical results confirm some of the intuitions behind the empirical mechanism design methods we have introduced, and increase our confidence that our framework can be
effective in estimating the best mechanism parameter choice in relatively general settings.

Acknowledgments

We thank Terence Kelly, Matthew Rudary, and Satinder Singh for helpful comments on earlier drafts of this work. This work was supported in part by NSF grant IIS-0205435 and the DARPA REAL strategic reasoning program.

8. REFERENCES

APPENDIX

A. PROOFS

A.1 Proof of Proposition 1

A.2 Proof of Proposition 2

First, suppose that some function f(x) defined on [a_i, b_i] satisfies the Lipschitz condition on (a_i, b_i] with Lipschitz constant A_i. Then the following claim holds.

Claim: inf_{x∈(a_i,b_i]} f(x) > 0.5(f(a_i) + f(b_i) − A_i(b_i − a_i)).

To prove this claim, note that the intersection of the lines through f(a_i) and f(b_i) with slopes −A_i and A_i, respectively, determines a lower bound on the minimum of f(x) on [a_i, b_i] (which in turn is a lower bound on the infimum of f(x) on (a_i, b_i]). The line through f(a_i) is determined by f(a_i) = −A_i a_i + c_L and the line through f(b_i) by f(b_i) = A_i b_i + c_R. Thus, the intercepts are c_L = f(a_i) + A_i a_i and c_R = f(b_i) − A_i b_i, respectively. Let x* be the point at which these lines intersect. Then the common value of the two lines at x* is 0.5(c_L + c_R) = 0.5(f(a_i) + f(b_i) − A_i(b_i − a_i)).

Since we have a finite number of points in the data set for each θ, we can obtain the desired expression by subadditivity and the fact that a profile a is a Nash equilibrium if and only if ε(a) = 0. Putting everything together yields the desired result.

A.3 Proof of Theorem 3

First, we will need the following fact.

Claim: Given functions f_1(x) and f_2(x) and a set X, |max_{x∈X} f_1(x) − max_{x∈X} f_2(x)| ≤ max_{x∈X} |f_1(x) − f_2(x)|.

To prove this claim, observe that |max_{x∈X} f_1(x) − max_{x∈X} f_2(x)| equals max_x f_1(x) − max_x f_2(x) if max_x f_1(x) > max_x f_2(x), and max_x f_2(x) − max_x f_1(x) if max_x f_2(x) > max_x f_1(x). In the first case, letting x* maximize f_1, we have max_x f_1(x) − max_x f_2(x) ≤ f_1(x*) − f_2(x*) ≤ max_x |f_1(x) − f_2(x)|; the second case is symmetric. Thus, the claim holds.

By the Strong Law of Large Numbers, u_{n,i}(a) → u_i(a) a.s.
for all i ∈ I and a ∈ A or, equivalently [8], for any α > 0 and δ > 0 there is M(i, a) > 0 such that |u_{n,i}(a) − u_i(a)| < δ for all n ≥ M(i, a) with probability at least 1 − α. Thus, by the claim, for any n ≥ M = max_{i∈I, a∈A} M(i, a), the difference |ε_n(s) − ε(s)| can be made arbitrarily small. Note that since s_{−i}(a) and s(a) are bounded between 0 and 1, we were able to drop them from the expressions above to obtain a bound that is valid independent of the particular choice of s. Furthermore, since the above result can be obtained for an arbitrary α > 0 and δ > 0, we have Pr{lim_{n→∞} ε_n(s) = ε(s)} = 1 uniformly on S.

A.4 Proof of Lemma 5

We prove the result using uniform continuity of u_i(s) and preservation of continuity under the maximum.

Claim: A function f: R^k → R defined by f(t) = Σ_{i=1}^{k} z_i t_i, where the z_i are constants in R, is uniformly continuous in t. The claim follows because |f(t) − f(t′)| = |Σ_{i=1}^{k} z_i(t_i − t′_i)| ≤ Σ_{i=1}^{k} |z_i| |t_i − t′_i|. An immediate consequence for our purposes is that u_i(s) is uniformly continuous in s and u_i(a_i, s_{−i}) is uniformly continuous in s_{−i}.

Claim: Let f(a, b) be uniformly continuous in b ∈ B for every a ∈ A, with |A| < ∞. Then V(b) = max_{a∈A} f(a, b) is uniformly continuous in b. To show this, take γ > 0 and let b, b′ ∈ B be such that ‖b − b′‖ < δ(a) ⇒ |f(a, b) − f(a, b′)| < γ, and take δ = min_{a∈A} δ(a). Since this holds for an arbitrary γ > 0, the claim follows.

Now, recall that ε(s) = max_i [max_{a_i∈A_i} u_i(a_i, s_{−i}) − u_i(s)]. By the claims above, max_{a_i∈A_i} u_i(a_i, s_{−i}) is uniformly continuous in s_{−i} and u_i(s) is uniformly continuous in s. Since the difference of two uniformly continuous functions is uniformly continuous, and since this continuity is preserved under the maximum by our second claim, we have the desired result.

A.5 Proof of Theorem 6

Choose δ > 0. First, we need to ascertain that the following claim holds.

Claim: ε̄ = min_{s∈S\N_δ} ε(s) exists and ε̄ > 0. Since N_δ is an open subset of the compact set S, it follows that S \ N_δ is compact. As we also proved in Lemma 5 that ε(s) is continuous, existence follows from the Weierstrass theorem. That ε̄ > 0 is clear, since ε(s) = 0 if and only if s is a Nash equilibrium of Γ.

Now, by Theorem 3, for any α > 0 there is M such that, for all n ≥ M, ε_n(s) is within ε̄ of ε(s) uniformly on S with probability at least 1 − α.

A.6 Proof of Theorem 7

Fix θ and choose δ > 0. Since W(s, θ) is continuous at s*(θ), given ε > 0 there is δ > 0 such that for every s′ within δ of s*(θ), |W(s′, θ) − W(s*(θ), θ)| < ε. By Theorem 6, we can find M(θ) large enough such that all s′ ∈ N_n are within δ of s*(θ) for all n ≥ M(θ) with probability 1. Consequently, for any ε > 0 we can find M(θ) large enough such that with probability 1 we have sup_{n≥M(θ)} sup_{s′∈N_n} |W(s′, θ) − W(s*(θ), θ)| < ε. Let us assume without loss of generality that there is a unique optimal choice of θ. Now, since the set Θ is finite, there is also a "second-best" choice of θ, with welfare gap Δ > 0 to the best choice (if there is only one θ ∈ Θ this discussion is moot anyway). Then if we let ε < Δ/2 and let M = max_{θ∈Θ} M(θ), where each M(θ) is large enough such that
sup_{n≥M(θ)} sup_{s′∈N_n} |W(s′, θ) − W(s*(θ), θ)| < ε a.s., the optimal choice of θ based on any empirical equilibrium will be θ* with probability 1. Thus, in particular, given any probability distribution over empirical equilibria, the best choice of θ will be θ* with probability 1 (and similarly if we take the supremum or infimum of W(N_n(θ), θ) over the set of empirical equilibria in constructing the objective function).

Authority Assignment in Distributed Multi-Player Proxy-based Games

Sudhir Aggarwal, Justin Christofoli
Department of Computer Science, Florida State University, Tallahassee, FL
{sudhir, christof}@cs.fsu.edu

Sarit Mukherjee, Sampath Rangarajan
Center for Networking Research, Bell Laboratories, Holmdel, NJ
{sarit, sampath}@bell-labs.com

ABSTRACT

We present a proxy-based gaming architecture and authority assignment within this architecture that can lead to a better game playing experience in Massively Multi-player Online games. The proposed game architecture consists of distributed game clients that connect to game proxies (referred to as communication proxies) which forward game-related messages from the clients to one or more game servers. Unlike proxy-based architectures that have been proposed in the literature, where the proxies replicate all of the game state, the communication proxies in the proposed architecture support clients that are in proximity to them in the physical network and maintain information about selected portions of the game space that are relevant only to the clients that they support. Using this architecture, we propose an authority assignment mechanism that divides the authority for deciding the outcome of different actions/events that occur within the game between clients and servers on a per-action/event basis. We show that such division of authority leads to a smoother game playing experience by implementing this mechanism in a massively multi-player online game called RPGQuest. In addition, we argue that cheat detection techniques can be easily implemented at the communication proxies if they are made
aware of the game-play mechanics.

Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems - Distributed Applications

General Terms: Games, Performance

1. INTRODUCTION

In Massively Multi-player On-line Games (MMOGs), game clients positioned across the Internet connect to a game server to interact with other clients in order to be part of the game. In current architectures, these interactions are direct, in that the game clients and the servers exchange game messages with each other. In addition, current MMOGs delegate all authority to the game server to make decisions about the results pertaining to the actions that game clients take, and also to decide upon the results of other game-related events. Such centralized authority has been implemented with the claim that it improves the security and consistency required in a gaming environment.

A number of works have shown the effect of network latency on distributed multi-player games [1, 2, 3, 4]. It has been shown that network latency has a real impact on practical game playing experience [3, 5]. Some types of games can function quite well even in the presence of large delays. For example, [4] shows that in a modern RPG called Everquest 2, the breakpoint of the game when adding artificial latency was 1250 ms. This is attributed to the fact that the combat system used in Everquest 2 is queueing based and has very low interaction. For example, a player queues up 4 or 5 spells they wish to cast; each of these spells takes 1-2 seconds to actually perform, giving the server plenty of time to validate the actions. But there are other games, such as FPS games, that break even in the presence of moderate network latencies [3, 5]. Latency compensation techniques have been proposed to alleviate the effect of latency [1, 6, 7], but it is obvious that if MMOGs are to increase in interactivity and speed, more architectures will have to be developed that address responsiveness,
accuracy and consistency of the game state.

In this paper, we propose two important features that would make game playing within MMOGs more responsive for movement and more scalable. First, we propose that centralized server-based architectures be made hierarchical through the introduction of communication proxies, so that game updates made by clients that are time-sensitive, such as movement, can be more efficiently distributed to other players within their game space. Second, we propose that the assignment of authority, in terms of who makes the decision on client actions such as object pickups, hits, and collisions between players, be distributed between the clients and the servers in order to move the computing load away from the central server.

In order to move towards more complex real-time networked games, we believe that definitions of authority must be refined. Most currently implemented MMOGs have game servers with almost absolute authority. We argue that there is no single consistent view of the virtual game space that can be maintained on any one component within a network that has significant latency, such as one that many MMOG players would experience. We believe that in most cases, the client with the most accurate view of an entity is best suited to make decisions for that entity when the causality of that action will not immediately affect any other players. In this paper we define what it means to have authority within the context of events and objects in a virtual game space. We then show the benefits of delegating authority for different actions and game events between the clients and server.

In our model, the game space consists of game clients (representing the players) and the objects that they control. We divide the client actions and game events (we will collectively refer to these as events), such as collisions, hits, etc.,
into three different categories: a) events for which the game client has absolute authority, b) events for which the game server has absolute authority, and c) events for which the authority changes dynamically from the client to the server and vice-versa.\nDepending on who has the authority, that entity will make decisions on the events that happen within a game space.\nWe propose that authority for all decisions that pertain to a single player or object in the game that neither affect other players or objects nor are affected by the actions of other players be delegated to that player's game client.\nThese types of decisions would include collision detection with static objects within the virtual game space and hit detection with linear-path bullets (whose trajectory is fixed and does not change with time) fired by other players.\nAuthority for decisions that could be affected by two or more players should be delegated to the impartial central server in some cases, to ensure that no conflicts occur, and in other cases can be delegated to the clients responsible for those players.\nFor example, collision detection of two players that collide with each other and hit detection of non-linear bullets (whose trajectory changes with time) should be delegated to the server.\nDecisions on events such as item pickup (for example, picking up items in a game to accumulate points) should be delegated to the server if there are multiple players within close proximity of the item and any one of them could succeed in picking it up; when the client determines that no player other than its own is within a certain range of the item, the client can be delegated the responsibility to claim the item.\nThe client's decision can always be accurately verified by the server.\nIn summary, we argue that while current authority models, which delegate responsibility only to the server to make authoritative decisions on events, are more secure than 
allowing the clients to make the decisions, these types of models add undesirable delays to events that could very well be decided by the clients without any inconsistency being introduced into the game.\nAs networked games become more complex, our architecture will become more applicable.\nThis architecture is applicable to massively multiplayer games where the speed and accuracy of game-play are a major concern while consistency between player game-states is still desired.\nWe propose that a mixed authority assignment mechanism such as the one outlined above be implemented in high-interaction MMOGs.\nOur paper makes the following contributions.\nFirst, we propose an architecture that uses communication proxies to connect clients to the game server.\nA communication proxy in the proposed architecture maintains information only about the portions of the game space that are relevant to the clients connected to it and is able to process the movement information of objects and players within these portions.\nIn addition, it is capable of multicasting this information only to the relevant subset of other communication proxies.\nThese functionalities of a communication proxy lead to a decrease in the latency of event updates and, subsequently, a better game-playing experience.\nSecond, we propose a mixed authority assignment mechanism as described above that improves the game-playing experience.\nThird, we implement the proposed mixed authority assignment mechanism within an MMOG called RPGQuest [8] to validate its viability within MMOGs.\nIn Section 2, we describe the proxy-based game architecture in more detail and illustrate its advantages.\nIn Section 3, we provide a generic description of the mixed authority assignment mechanism and discuss how it improves the game-playing experience.\nIn Section 4, we show the feasibility of implementing the proposed mixed authority assignment mechanism within existing MMOGs by describing a proof-of-concept implementation within an existing MMOG 
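The three authority categories described above can be made concrete with a minimal sketch. This is our own illustration, not code from any game discussed here; the event names and the safe default are assumptions.

```python
from enum import Enum

class Authority(Enum):
    CLIENT = "client"
    SERVER = "server"

# Hypothetical event names mirroring the three categories in the text.
STATIC_COLLISION = "static_collision"   # player vs. wall or building
LINEAR_HIT = "linear_hit"               # fixed-trajectory bullet
PLAYER_COLLISION = "player_collision"   # player vs. player
NONLINEAR_HIT = "nonlinear_hit"         # bullet whose trajectory changes
ITEM_PICKUP = "item_pickup"             # contention-prone pickup

def assign_authority(event_type, others_in_range=False):
    """Decide who rules on an event.  `others_in_range` matters only for
    dynamic-authority events such as item pickup: with no other player
    close enough to contend, the owning client may decide."""
    if event_type in (STATIC_COLLISION, LINEAR_HIT):
        return Authority.CLIENT            # (a) client absolute authority
    if event_type in (PLAYER_COLLISION, NONLINEAR_HIT):
        return Authority.SERVER            # (b) server absolute authority
    if event_type == ITEM_PICKUP:          # (c) dynamically assigned
        return Authority.SERVER if others_in_range else Authority.CLIENT
    return Authority.SERVER                # unknown events: default to safety
```

Defaulting unknown events to the server errs on the side of consistency, matching the paper's argument that the server remains the ultimate arbiter.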
called RPGQuest.\nSection 5 discusses related work.\nIn Section 6, we present our conclusions and discuss future work.\n2.\nPROXY-BASED GAME ARCHITECTURE Massively Multi-player Online Games (MMOGs) usually consist of a large game space in which the players and different game objects reside, move around, and interact with each other.\nState information about the whole game space could be kept in a single central server, which we refer to as a Central-Server Architecture.\nBut to alleviate the heavy processing demand of handling the large player population and the objects in the game in real-time, an MMOG is normally implemented using a distributed server architecture where the game space is sub-divided into regions so that each region has a relatively small number of players and objects that can be handled by a single server.\nIn other words, the different game regions are hosted by different servers in a distributed fashion.\nWhen a player moves out of one game region into an adjacent one, the player must communicate with a different server (than the one it was currently communicating with) hosting the new region.\nThe servers communicate with one another to hand off a player or an object from one region to another.\nIn this model, the player on the client machine has to establish multiple gaming sessions with different servers so that it can roam in the entire game space.\nThe 5th Workshop on Network & System Support for Games 2006 - NETGAMES 2006\nWe propose a communication proxy based architecture where a player connects to a (geographically) nearby proxy instead of connecting to a central server in the case of a central-server architecture or to one of the servers in the case of a distributed server architecture.\nIn the proposed architecture, players who are geographically close by join a particular proxy.\nThe proxy then connects to one or more game servers, as needed by the set of players that connect to it, and maintains persistent transport 
sessions with these servers.\nThis alleviates the problem of each player having to connect directly to multiple game servers, which can add extra connection setup delay.\nThe introduction of communication proxies also mitigates the overhead of the large number of transport sessions that must otherwise be managed, and reduces the required network bandwidth [9] and processing at the game servers under both central-server and distributed-server architectures.\nWith central-server architectures, communication proxies reduce the overhead at the server by not requiring the server to terminate persistent transport sessions from every one of the clients.\nWith distributed-server architectures, communication proxies additionally eliminate the need for the clients to maintain persistent transport sessions to every one of the servers.\nFigure 1 shows the proposed architecture.\nFigure 1: Architecture of the gaming environment.\nNote that the communication proxies need not be cognizant of the game.\nThey host a number of players and inform the servers which players are hosted by the proxy in question.\nAlso note that the players hosted by a proxy may not be in the same game space.\nThat is, a proxy hosts players that are geographically close to it, but the players themselves can reside in different parts of the game space.\nThe proxy communicates with the servers responsible for maintaining the game spaces subscribed to by the different players.\nThe proxies communicate with one another in a peer-to-peer fashion.\nThe responsiveness of the game can be improved for updates that do not need to wait on processing at a central authority.\nIn this way, information about players can be disseminated quickly, even before the game server gets to know about it.\nThis improves the responsiveness of the game.\nHowever, by itself it ignores the consistency that is critical in MMORPGs.\nThe notion that an architecture such as this one can still maintain temporal consistency will be discussed in detail in Section 
3.\nFigure 2 shows an example of the working principle of the proposed architecture.\nAssume that the game space is divided into 9 regions and there are three servers responsible for managing the regions.\nServer S1 owns regions 1 and 2, S2 manages 4, 5, 7, and 8, and S3 is responsible for 3, 6 and 9.\nFigure 2: An example.\nThere are four communication proxies placed in geographically distant locations.\nPlayers a, b, c join proxy P1, proxy P2 hosts players d, e, f, players g, h are with proxy P3, whereas players i, j, k, l are with proxy P4.\nUnderneath each player, the figure shows which game region the player is currently located in.\nFor example, players a, b, c are in regions 1, 2, 6, respectively.\nTherefore, proxy P1 must communicate with servers S1 and S3.\nThe reader can verify the rest of the links between the proxies and the servers.\nPlayers can move within a region and between regions.\nPlayer movement within a region will be tracked by the proxy hosting the player, and this movement information (for example, the player's new coordinates) will be multicast to a subset of other relevant communication proxies directly.\nAt the same time, this information will be sent to the server responsible for that region with the indication that this movement has already been communicated to all the other relevant communication proxies (so that the server does not have to relay this information to all the proxies).\nFor example, if player a moves within region 1, this information will be communicated by proxy P1 to server S1 and multicast to proxies P3 and P4.\nNote that proxies that do not keep state information about this region at this point in time (because they do not have any clients within that region), such as P2, do not have to receive this movement information.\nIf a player is at the boundary of a region and moves into a new region, there are two possibilities.\nThe first possibility is that the proxy hosting the player can identify the region into which the 
player is moving (based on the trajectory information) because it is also maintaining state information about the new region at that point in time.\nIn this case, the proxy can update movement information directly at the other relevant communication proxies and also send information to the appropriate server informing it of the movement (this may require a handoff between servers, as we will describe).\nConsider the scenario where player a is at the boundary of region 1 and proxy P1 can identify that the player is moving into region 2.\nBecause proxy P1 is currently keeping state information about region 2, it can inform all the other relevant communication proxies (in this example, no other proxy maintains information about region 2 at this point, and so no update needs to be sent to any of the other proxies) about this movement and then inform the server independently.\nIn this particular case, server S1 is responsible for region 2 as well, and so no handoff between servers would be needed.\nNow consider another scenario where player j moves from region 9 to region 8 and proxy P4 is able to identify this movement.\nAgain, because proxy P4 maintains state information about region 8, it can inform any other relevant communication proxies (again, none in this example) about this movement.\nBut now, regions 9 and 8 are managed by different servers (servers S3 and S2, respectively) and thus a hand-off between these servers is needed.\nWe propose that in this particular scenario, the handoff be managed by the proxy P4 itself.\nWhen the proxy sends the movement update to server S3 (informing the server that the player is moving out of its region), it also sends a message to server S2 informing that server of the presence and location of the player in one of its regions.\nIn the intra-region and inter-region scenarios described above, the proxy is able to manage movement-related information, update 
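The update routing in the example above can be sketched as follows. The region-to-server map follows Figure 2; the per-proxy interest sets beyond players a, b, c are one plausible assignment consistent with the text, and all names are illustrative.

```python
# Region ownership from Figure 2 and a consistent per-proxy interest set
# (which proxies track which regions, derived from their hosted players).
REGION_OWNER = {1: "S1", 2: "S1", 3: "S3", 4: "S2", 5: "S2",
                6: "S3", 7: "S2", 8: "S2", 9: "S3"}
PROXY_REGIONS = {"P1": {1, 2, 6}, "P2": {4, 5}, "P3": {1, 6}, "P4": {1, 8, 9}}

def route_move(hosting_proxy, region):
    """Recipients of an intra-region movement update: the server that
    owns the region (flagged so it does not re-relay the update), plus
    every *other* proxy currently tracking that region."""
    server = REGION_OWNER[region]
    peers = sorted(p for p, regs in PROXY_REGIONS.items()
                   if region in regs and p != hosting_proxy)
    return server, peers

# Player a moves within region 1: P1 notifies S1 and multicasts to
# P3 and P4, while P2 (no client in region 1) is skipped.
```

Keeping the interest sets at the proxies is what makes the multicast selective: a proxy with no client in a region never sees that region's traffic.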
only the relevant communication proxies about the movement, update the servers with the movement, and enable the handoff of a player between servers if needed.\nIn this way, the proxy performs movement updates without involving the servers in any way in this time-critical function, thereby speeding up the game and improving the game-playing experience for the players.\nWe consider this the fast path for movement updates.\nWe envision the proxies to be just communication proxies in that they do not know about the workings of specific games.\nThey merely process movement information of players and objects and communicate this information to the other proxies and the servers.\nIf the proxies are made more intelligent, in that they understand more of the game logic, it is possible for them to quickly check claims made by the clients and mitigate cheating.\nThe servers could perform the same functionality, but with more delay.\nEven without being aware of game logic, the proxies can provide additional functionalities such as timestamping messages to make the game-playing experience more accurate [10] and fair [11].\nThe second possibility that should be considered is when a player moves from one region to another but the proxy hosting the player is not able to determine the region into which the player is moving, either because a) the proxy does not maintain state information about all the regions into which the player could potentially move, or b) the proxy is not able to determine which region the player may move into (even if it maintains state information about all these regions).\nIn this case, we propose that the proxy not be responsible for making the movement decision, but instead communicate the movement indication to the server responsible for the region within which the player is currently located.\nThe server will then make the movement decision and then a) inform all the proxies, including the proxy hosting the player, and 
b) initiate a handoff with another server if the player moves into a region managed by another server.\nWe consider this the slow path for movement updates, in that the servers need to be involved in determining the new position of the player.\nIn the example, assume that player a moves from region 1 to region 4.\nProxy P1 does not maintain state information about region 4 and thus would pass the movement information to server S1.\nThe server will identify that the player has moved into region 4 and will inform proxy P1 as well as proxy P2 (which is the only other proxy that maintains information about region 4 at this point in time).\nServer S1 will also initiate a handoff of player a with server S2.\nProxy P1 will now start maintaining state information about region 4 because one of its hosted players, player a, has moved into this region.\nIt will do so by requesting and receiving the current state information about region 4 from server S2, which is responsible for this region.\nThus, a proxy architecture allows us to make use of faster movement updates through the fast path through a proxy if and when possible, as opposed to conventional server-based architectures that always have to use the slow path through the server for movement updates.\nBy selectively maintaining relevant regional game state information at the proxies, we are able to achieve this capability in our architecture without the need for maintaining the complete game state at every proxy.\n3.\nASSIGNMENT OF AUTHORITY As an MMOG is played, the players and the game objects that are part of the game continually change their state.\nFor example, consider a player who owns a tank in a battlefield game.\nBased on the actions of the player, the tank changes its position in the game space, the amount of ammunition the tank contains changes as it fires at other tanks, the tank collects bonus firing power based on successful hits, etc.\nSimilarly, objects in the battlefield, such as flags and buildings, 
change their state when a flag is picked up by a player (i.e., a tank) or a building is destroyed by firing at it.\nThat is, some decision has to be made on the state of each player and object as the game progresses.\nNote that the state of a player and/or object can contain several parameters (e.g., position, amount of ammunition, fuel storage, points collected, etc.), and if any of the parameters changes, the state of the player/object changes.\nIn a client-server based game, the server controls all the players and the objects.\nWhen a player at a client machine makes a move, the move is transmitted to the server over the network.\nThe server then analyzes the move, and if the move is a valid one, changes the state of the player at the server and informs the client of the change.\nThe client subsequently updates the state of the player and renders the player at the new location.\nIn this case the authority to change the state of the player resides entirely with the server, and the client simply follows what the server instructs it to do.\nMost of the current first person shooter (FPS) games and role playing games (RPG) fall under this category.\nIn current FPS games, much like in RPG games, the client is not trusted.\nAll moves and actions that it makes are validated.\nIf a client detects that it has hit another player with a bullet, it proceeds assuming that it is a hit.\nMeanwhile, an update is sent to the server, and the server will send back a message either affirming or denying that the player was hit.\nIf the remote player was not hit, then the client will know that it did not actually make the shot.\nIf it did make the hit, an update will also be sent from the server to the other clients informing them that the other player was hit.\nA difference that occurs in some RPGs is that they use very dumb client programs.\nSome RPGs do not maintain state information at the client and 
therefore cannot predict anything, such as hits, at the client.\nState information is not maintained because the client is not trusted with it.\nIn RPGs, a cheating player with a hacked game client can use state information stored at the client to gain an advantage and find things such as hidden treasure or monsters lurking around the corner.\nThis is a reason why most MMORPGs do not send a lot of state information to the client, which causes the game to be less responsive and have lower-interaction game-play than FPS games.\nIn a peer-to-peer game, each peer controls the player and objects that it owns.\nWhen a player makes a move, the peer machine analyzes the move and, if it is a valid one, changes the state of the player and places the player in the new position.\nAfterwards, the owner peer informs all other peers about the new state of the player, and the rest of the peers update the state of the player.\nIn this scenario, the authority to change the state of the player is given to the owning peer, and all other peers simply follow the owner.\nFor example, Battle Zone Flag (BzFlag) [12] is a multiplayer client-server game where the client has all authority for making decisions.\nIt was built primarily with LAN play in mind and cheating as an afterthought.\nClients in BzFlag are completely authoritative, and when they detect that they were hit by a bullet, they send an update to the server, which simply forwards the message along to all other players.\nThe server does no validation whatsoever.\nEach of the above two traditional approaches has its own set of advantages and disadvantages.\nThe first approach, which we will refer to as server authoritative henceforth, uses a centralized method to assign authority.\nWhile a centralized approach can keep the state of the game (i.e., the state of all the players and objects) consistent across any number of client machines, it suffers from delayed response in game-play, as any move that a player at the client machine makes must go through 
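The responsiveness gap between the two authority models can be captured with a back-of-envelope sketch. This is our own simplification for illustration, not a measurement from any of the games discussed.

```python
def perceived_delay_ms(rtt_ms, server_queue_ms, authority):
    """Rough model of when a player's own action takes effect on that
    player's screen.  Server-authoritative: the client waits one round
    trip plus queueing/processing at the server.  Client-authoritative:
    the action is applied locally at once, and peers learn of it
    asynchronously."""
    if authority == "server":
        return rtt_ms + server_queue_ms
    return 0.0

# e.g. 100 ms RTT plus 20 ms of server queueing: a server-validated
# move appears 120 ms late, a client-validated move immediately.
```

The model ignores rendering and client-side processing, which are common to both paths; it isolates only the network and server components that the authority assignment controls.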
one round-trip delay to the server before it can take effect on the client's screen.\nIn addition to the round-trip delay, there is also a queuing delay in processing the state change request at the server.\nThis can result in additional processing delay, and can also bring in severe scalability problems if a large number of clients are playing the game.\nOne definite advantage of the server authoritative approach is that it can easily detect if a client is cheating and can take appropriate action to prevent cheating.\nThe peer-to-peer approach, henceforth referred to as client authoritative, can make games very responsive.\nHowever, it can make the game state inconsistent for a few players, and a tie break (or roll back) has to be performed to bring the game back to a consistent state.\nNeither tie breaks nor roll backs are desirable features of online gaming.\nFor example, assume that for a game, the goal of each player is to collect as many flags as possible from the game space (e.g., BzFlag).\nWhen two players in proximity try to collect the same flag at the same time, depending on the algorithm used at the client side, both clients may determine that their player is the winner, although in reality only one player can pick the flag up.\nBoth players will see on their screens that they have won.\nThis makes the state of the game inconsistent.\nWays to recover from this inconsistency are to give the flag to only one player (using some tie break rule) or to roll the game back so that the players can try again.\nNeither of these two approaches is a pleasing experience for online gaming.\nAnother problem with the client authoritative approach is that of cheating by clients, as there is no cross-checking of the validity of the state changes authorized by the owner client.\nWe propose to use a hybrid approach that assigns authority dynamically between the client and the server.\nThat is, we assign the authority to the client to make the game responsive, and use the server's 
authority only when the client's individual authoritative decisions could make the game state inconsistent.\nBy moving the authority for time-critical updates to the client, we avoid the added delay caused by requiring the server to validate these updates.\nFor example, in the flag pickup game, the clients will be given the authority to pick up flags only when other players are not within a range from which they could imminently pick up the flag.\nOnly when two or more players are close enough that more than one player may claim to have picked up a flag does the authority for movement and flag pickup go to the central server, so that the game state does not become inconsistent.\nWe believe that in a large game-space where a player is often in a very wide open and sparsely populated area, such as those often seen in the game Second Life [13], this hybrid architecture would be very beneficial because of the long periods during which the client would have authority to send movement updates for itself.\nThis has two advantages over the central-authority approach: it distributes the processing load down to the clients for the majority of events, and it allows for a more responsive game that does not need to wait on a server for validation.\nWe believe that our notion of authority can be used to develop a globally consistent state model of the evolution of a game.\nFundamentally, the consistent state of the system is the one that is defined by the server.\nHowever, if local authority is delegated to the client, the client's state is superimposed on the server's state to determine the correct global state.\nFor example, if the client is authoritative with respect to the movement of a player, then the trajectory of the player is the true trajectory and must replace the server's view of the player's trajectory.\nNote that this could be problematic and lead to temporal inconsistency only if, for example, two or more entities are moving in the same region and can interact with each 
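The dynamic rule for the flag-pickup example can be sketched as a simple range test. The contention radius and the planar distance check are our own assumptions for illustration.

```python
import math

PICKUP_RANGE = 5.0   # hypothetical contention radius, in game units

def pickup_authority(item_pos, own_pos, other_positions):
    """Hybrid rule for the flag-pickup example: the client may claim the
    flag itself only when no other player is close enough to contend for
    it; any imminent contention defers the decision to the server."""
    def in_range(p):
        return math.dist(p, item_pos) <= PICKUP_RANGE
    if in_range(own_pos) and not any(in_range(p) for p in other_positions):
        return "client"   # uncontended: low-latency local claim
    return "server"       # contention possible: impartial arbitration
```

Note that the client's claim in the uncontended case can still be verified after the fact by the server or a game-cognizant proxy, as discussed above.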
other.\nIn this situation, the client's authority must revert to the server, and the server would then make the decisions.\nThus, the client is only authoritative in situations where there is no potential to imminently interact with other players.\nWe believe that in complex MMOGs, even when allowing more rapid movement, it will still be the case that local authority is possible for significant spans of game time.\nNote that it might also be possible to minimize the occurrences of the Dead Man Shooting problem described in [14].\nThis could be done by allowing the client to be authoritative for more actions, such as its player's own death, and disallowing other players from making preemptive decisions about a remote player.\nOne reason why the client-server based architecture has gained popularity is the belief that the fastest route to the other clients is through the server.\nWhile this may be true, we aim to create a new architecture where decisions do not always have to be made at the game server and the fastest route to a client is actually through a communication proxy located close to the client.\nThat is, the shortest path in our architecture is not through the game server but through the communication proxy.\nAfter a client makes an action such as a movement, it will simultaneously distribute it directly to the clients and the game server by way of the communication proxy.\nWe note, however, that our architecture is not practical for a game where game players set up their own servers in an ad-hoc fashion and do not have access to proxies at the various ISPs.\nThis proxy and distributed authority architecture can be used to its full potential only when the proxies can be placed at strategic places within the main ISPs and evenly distributed geographically.\nOur game architecture does not assume that the client cannot be trusted.\nWe are designing our architecture on the assumption that there 
will be sufficient cheat-deterring and cheat-detection mechanisms present so that it will be both undesirable and very difficult to cheat [15].\nIn our proposed approach, we can make games cheat resilient by using the proxy-based architecture when client authoritative decisions take place.\nIn order to achieve this, the proxies have to be game cognizant so that decisions made by a client can be cross-checked by the proxy that the client connects to.\nFor example, assume that in a game a plane controlled by a client moves in the game space.\nIt is not possible for the plane to go through a building unharmed.\nIn a client authoritative mode, it is possible for the client to cheat by maneuvering the plane through a building and claiming the plane to be unharmed.\nHowever, when such a move is published by the client, the proxy, being aware of the game space that the plane is in, can quickly check that the client has misused its authority and can then block such a move.\nThis allows us to distribute to the clients the authority to make decisions.\nIn the following section we use a multiplayer game called RPGQuest to implement different authoritative schemes and discuss our experience with the implementation.\nOur implementation shows the viability of our proposed solution.\n4.\nIMPLEMENTATION EXPERIENCE We have experimented with the authority assignment mechanism described in the last section by implementing the mechanisms in a game called RPGQuest.\nA screen shot from this game is shown in Figure 3.\nThe purpose of the implementation is to test its feasibility in a real game.\nRPGQuest is a basic first person game where the player can move around a three-dimensional environment.\nObjects are placed within the game world, and players gain points for each object that is collected.\nThe game clients connect to a game server which allows many players to coexist in the same game world.\nThe basic functionality of this game is representative of current online first person shooter and 
role playing games.\nThe game uses the DirectX 8 graphics API and the DirectPlay networking API.\nIn this section we will discuss the three different versions of the game that we experimented with.\nFigure 3: The RPGQuest Game.\nThe first version of the game, which is the original implementation of RPGQuest, was created with a completely authoritative server and a non-authoritative client.\nAuthority given to the server includes decisions about when a player collides with static objects and other players and when a player picks up an object.\nThis version of the game performs well with up to 100ms of round-trip latency between the client and the server.\nThere is little lag between the time the player hits a wall and the time the server corrects the player's position.\nHowever, as more latency is induced between the client and server, the game becomes increasingly difficult to play.\nWith the increased latency, the messages coming from the server correcting the player when it runs into a wall are not received fast enough.\nThis causes the player to pass through the wall for the period that it is waiting for the server to resolve the collision.\nAs the source code of the original version of the RPGQuest game shows, there is a substantial delay that is unavoidable each time an action must be validated by the server.\nWhenever a movement update is sent to the server, the client must then wait the round-trip delay, plus some processing time at the server, in order to receive its validated or corrected position.\nThis is obviously unacceptable in any game where movement or any other rapidly changing state information must be validated and disseminated to the other clients rapidly.\nIn order to get around this problem, we developed a second version of the game, which gives all authority to the client.\nThe client was delegated the authority to validate its own movement and the authority to pick up objects without validation from the server.\nIn this version of the game, 
when a player moves around the game space, the client validates that the player's new position does not intersect with any walls or static objects.\nA position update is then sent to the server, which immediately forwards the update to the other clients within the region.\nThe update does not have to go through any extra processing or validation.\nThis game model of complete authority given to the client is beneficial with respect to movement.\nWhen latencies of 100ms and up are induced into the link between the client and server, the game is still playable, since time-critical aspects of the game like movement do not have to wait on a reply from the server.\nWhen a player hits a wall, the collision is processed locally and does not have to wait on the server to resolve the collision.\nAlthough the game-playing experience with respect to responsiveness is improved when the authority for movement is given to the client, there are still aspects of games that do not benefit from this approach.\nThe most important of these is consistency.\nAlthough actions such as movement are time critical, other actions are not as time critical, but instead require consistency among the player states.\nAn example of a game aspect that requires consistency is picking up objects that should only be possessed by a single player.\nIn our client authoritative version of RPGQuest, clients send their own updates to all other players whenever they pick up an object.\nFrom our tests we have realized this is a problem, because when there is a realistic amount of latency between the client and server, it is possible for two players to pick up the same object at the same time.\nWhen two players attempt to pick up an object at physical times which are close to each other, the update sent by the player who picked up the object first will not reach the second player in time for it to see that the object has already been 
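The race just described reduces to a one-line timing condition. This is a simplified model of our own, not measured RPGQuest behavior, and it assumes a single fixed one-way delay between the two clients.

```python
def double_claim(first_attempt, second_attempt, one_way_delay):
    """Client-authoritative pickup race: the second client still sees
    the object as free if the first client's claim has not yet arrived
    when it makes its own attempt.  Times are in seconds; True means
    both clients end up believing they own the object."""
    return (second_attempt - first_attempt) < one_way_delay
```

With 100 ms of one-way latency, any two attempts less than 100 ms apart collide, which is exactly why an arbiter is needed for contended pickups.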
claimed.\nThe two players will now both think that they own the object.\nThis is why a server is still needed to be authoritative in this situation and maintain consistency across the players.\nThese two versions of the RPGQuest game have shown us why it is necessary to mix the two absolute models of authority.\nIt is better to place authority on the client for quickly changing actions such as movement.\nIt is not desirable to have to wait for server validation on a movement that could change before the reply is even received.\nIt is also sometimes necessary to place consistency over efficiency in aspects of the game that cannot tolerate any inconsistencies, such as object ownership.\nWe believe that as the interactivity of games increases, our architecture of mixed authority that does not rely on server validation will be necessary.\nTo test the benefits and show the feasibility of our architecture of mixed authority, we developed a third version of the RPGQuest game that distributed authority for different actions between the client and server.\nIn this version, in the interest of consistency, the server remained authoritative for deciding who picked up an object.\nThe client was given full authority to send positional updates to other clients and verify its own position without the need to verify its updates with the server.\nWhen the player tries to move their avatar, the client verifies that the move will not cause it to move through a wall.\nA positional update is then sent to the server, which then simply forwards it to the other clients within the region.\nThis eliminates any extra processing delay that would occur at the server and is also a more accurate means of verification, since the client has a more accurate view of its own state than the server does.\nThis version of the RPGQuest game, where authority is distributed between the client and server, is an improvement over the server authoritative version.\nThe client has no delay in waiting for an update of 
its own position and other clients do not have to wait on the server to verify the update.\nThe inconsistencies where two clients can pick up the same object in the client authoritative architecture are not present in this version of the client.\nHowever, the benefits of mixed authority will not truly be seen until an implementation of our communication proxy is integrated into the game.\nWith the addition of the communication proxy, after the client verifies its own positional updates it will be able to send the update to all clients within its region through a low latency link instead of having to first go through the game server which could possibly be in a very remote location.\nThe coding of the different versions of the game was very simple.\nThe complexity of the client increased very slightly in the client authoritative and hybrid models.\nThe original dumb clients of RPGQuest know the position of other players; it is not just sent a screen snapshot from the server.\nThe server updates each client with the position of all nearby clients.\nThe dumb clients use client side prediction to fill in the gaps between the updates they receive.\nThe only extra processing the client has to do in the hybrid architecture is to compare its current position to the positions of all objects (walls, boxes, etc.) 
in its area.\nThis obviously means that each client will have to already have downloaded the locations of all static objects within its current region.\n5.\nRELATED WORK It has been noted that in addition to latency, bandwidth requirements also dictate the type of gaming architecture to be used.\nIn [16], different types of architectures are studied with respect to bandwidth efficiencies and latency.\nIt is pointed out that Central Server architectures are not scalable because of bandwidth requirements at the server but the overhead for consistency checks are limited as they are performed at the server.\nA Peer-to-Peer architecture, on the other hand, is scalable but there is a significant overhead for consistency checks as this is required at every player.\nThe paper proposes a hybrid architecture which is Peer-toPeer in terms of message exchange (and thereby is scalable) where a Central Server is used for off-line consistency checks (thereby mitigating consistency check overhead).\nThe paper provides an implementation example of BZFlag which is a peer-to-peer game which is modified to transfer all authority to a central server.\nIn essence, this paper advocates an authority architecture which is server based even for peerto-peer games, but does not consider division of authority between a client and a server to minimize latency which could affect game playing experience even with the type of latency found in server based games (where all authority is with the server).\nThere is also previous work that has suggested that proxy based architectures be used to alleviate the latency problem and in addition use proxies to provide congestion control and cheat-proof mechanisms in distributed multi-player games [17].\nIn [18], a proxy server-network architecture is presented that is aimed at improving scalability of multiplayer games and lowering latency in server-client data transmission.\nThe main goal of this work is to improve scalability of First-Person Shooter (FPS) 
and RPG games.\nThe further objective is to improve the responsiveness MMOGs by providing low latency communications between the client and The 5th Workshop on Network & System Support for Games 2006 - NETGAMES 2006 7 server.\nThe architecture uses interconnected proxy servers that each have a full view of the global game state.\nProxy servers are located at various different ISPs.\nIt is mentioned in this work that dividing the game space among multiple games servers such as the federated model presented in [19] is inefficient for a relatively fast game flow and that the proposed architecture alleviates this problem because users do not have to connect to a different server whenever they cross the server boundary.\nThis architecture still requires all proxies to be aware of the overall game state over the whole game space unlike our work where we require the proxies to maintain only partial state information about the game space.\nFidelity based agent architectures have been proposed in [20, 21].\nThese works propose a distributed client-server architecture for distributed interactive simulations where different servers are responsible for different portions of the game space.\nWhen an object moves from one portion to another, there is a handoff from one server to another.\nAlthough these works propose an architecture where different portions of the simulation space are managed by different servers, they do not address the issue of decreasing the bandwidth required through the use of communication proxies.\nOur work differs from the above discussed previous works by proposing a) a distributed proxy-based architecture to decrease bandwidth requirements at the clients and the servers without requiring the proxies to keep state information about the whole game space, b) a dynamic authority assignment technique to reduce latency (by performing consistency checks locally at the client whenever possible) by splitting the authority between the clients and servers on a 
per object basis, and c) proposing that cheat detection can be built into the proxies if they are provided more information about the specific game instead of using them purely as communication proxies (although this idea has not been implemented yet and is part of our future work).\n6.\nCONCLUSIONS AND FUTURE WORK In this paper, we first proposed a proxy-based architecture for MMOGs that enables MMOGs to scale to a large number of users by mitigating the need for a large number of transport sessions to be maintained and decreasing both bandwidth overhead and latency of event update.\nSecond, we proposed a mixed authority assignment mechanism that divides authority for making decisions on actions and events within the game between the clients and server and argued how such an authority assignment leads to better game playing experience without sacrificing the consistency of the game.\nThird, to validate the viability of the mixed authority assignment mechanism, we implemented it within a MMOG called RPGQuest and described our implementation experience.\nIn future work, we propose to implement the communications proxy architecture described in this paper and integrate the mixed authority mechanism within this architecture.\nWe propose to evaluate the benefits of the proxy-based architecture in terms of scalability, accuracy and responsiveness.\nWe also plan to implement a version of the RPGQuest game with dynamic assignment of authority to allow players the authority to pickup objects when no other players are near.\nAs discussed earlier, this will allow for a more efficient and responsive game in certain situations and alleviate some of the processing load from the server.\nAlso, since so much trust is put into the clients of our architecture, it will be necessary to integrate into the architecture many of the cheat detection schemes that have been proposed in the literature.\nSoftware such as Punkbuster [22] and a reputation system like those proposed by [23] and 
[15] would be integral to the operation of an architecture such as ours which has a lot of trust placed on the client.\nWe further propose to make the proxies in our architecture more game cognizant so that cheat detection mechanisms can be built into the proxies themselves.\n7.\nREFERENCES [1] Y. W. Bernier.\nLatency Compensation Methods in Client\/Server In-game Protocol Design and Optimization.\nIn Proc.\nof Game Developers Conference``01, 2001.\n[2] Lothar Pantel and Lars C. Wolf.\nOn the impact of delay on real-time multiplayer games.\nIn NOSSDAV ``02: Proceedings of the 12th international workshop on Network and operating systems support for digital audio and video, pages 23-29, New York, NY, USA, 2002.\nACM Press.\n[3] G. Armitage.\nSensitivity of Quake3 Players to Network Latency.\nIn Proc.\nof IMW2001, Workshop Poster Session, November 2001.\nhttp:\/\/www.geocities.com\/ gj armitage\/q3\/quake-results.\nhtml.\n[4] Tobias Fritsch, Hartmut Ritter, and Jochen Schiller.\nThe effect of latency and network limitations on mmorpgs: a field study of everquest2.\nIn NetGames ``05: Proceedings of 4th ACM SIGCOMM workshop on Network and system support for games, pages 1-9, New York, NY, USA, 2005.\nACM Press.\n[5] Tom Beigbeder, Rory Coughlan, Corey Lusher, John Plunkett, Emmanuel Agu, and Mark Claypool.\nThe effects of loss and latency on user performance in unreal tournament 2003.\nIn NetGames ``04: Proceedings of 3rd ACM SIGCOMM workshop on Network and system support for games, pages 144-151, New York, NY, USA, 2004.\nACM Press.\n[6] Y. Lin, K. Guo, and S. 
Paul.\nSync-MS: Synchronized Messaging Service for Real-Time Multi-Player Distributed Games.\nIn Proc.\nof 10th IEEE International Conference on Network Protocols (ICNP), Nov 2002.\n[7] Katherine Guo, Sarit Mukherjee, Sampath Rangarajan, and Sanjoy Paul.\nA fair message exchange framework for distributed multi-player games.\nIn NetGames ``03: Proceedings of the 2nd workshop on Network and system support for games, pages 29-41, New York, NY, USA, 2003.\nACM Press.\n[8] T. Barron.\nMultiplayer Game Programming, chapter 16-17, pages 672-731.\nPrima Tech``s Game Development Series.\nPrima Publishing, 2001.\n8 The 5th Workshop on Network & System Support for Games 2006 - NETGAMES 2006 [9] Carsten Griwodz and P\u02daal Halvorsen.\nThe fun of using tcp for an mmorpg.\nIn NOSSDAV ``06: Proceedings of the International Workshop on Network and Operating Systems Support for Digital Audio and VIdeo, New York, NY, USA, 2006.\nACM Press.\n[10] Sudhir Aggarwal, Hemant Banavar, Amit Khandelwal, Sarit Mukherjee, and Sampath Rangarajan.\nAccuracy in dead-reckoning based distributed multi-player games.\nIn NetGames ``04: Proceedings of 3rd ACM SIGCOMM workshop on Network and system support for games, pages 161-165, New York, NY, USA, 2004.\nACM Press.\n[11] Sudhir Aggarwal, Hemant Banavar, Sarit Mukherjee, and Sampath Rangarajan.\nFairness in dead-reckoning based distributed multi-player games.\nIn NetGames ``05: Proceedings of 4th ACM SIGCOMM workshop on Network and system support for games, pages 1-10, New York, NY, USA, 2005.\nACM Press.\n[12] Riker, T. 
et al..\nBzflag.\nhttp:\/\/www.bzflag.org, 2000-2006.\n[13] Linden Lab.\nSecond life.\nhttp:\/\/secondlife.com, 2003.\n[14] Martin Mauve.\nHow to keep a dead man from shooting.\nIn IDMS ``00: Proceedings of the 7th International Workshop on Interactive Distributed Multimedia Systems and Telecommunication Services, pages 199-204, London, UK, 2000.\nSpringer-Verlag.\n[15] Max Skibinsky.\nMassively Multiplayer Game Development 2, chapter The Quest for Holy ScalePart 2: P2P Continuum, pages 355-373.\nCharles River Media, 2005.\n[16] Joseph D. Pellegrino and Constantinos Dovrolis.\nBandwidth requirement and state consistency in three multiplayer game architectures.\nIn NetGames ``03: Proceedings of the 2nd workshop on Network and system support for games, pages 52-59, New York, NY, USA, 2003.\nACM Press.\n[17] M. Mauve J. Widmer and S. Fischer.\nA Generic Proxy Systems for Networked Computer Games.\nIn Proc.\nof the Workshop on Network Games, Netgames 2002, April 2002.\n[18] S. Gorlatch J. Muller, S. Fischer and M.Mauve.\nA Proxy Server Network Architecture for Real-Time Computer Games.\nIn Euor-Par 2004 Parallel Processing: 10th International EURO-PAR Conference, August-September 2004.\n[19] H. Hazeyama T. Limura and Y. Kadobayashi.\nZoned Federation of Game Servers: A Peer-to-Peer Approach to Scalable Multiplayer On-line Games.\nIn Proc.\nof ACM Workshop on Network Games, Netgames 2004, August-September 2004.\n[20] B. Kelly and S. Aggarwal.\nA Framework for a Fidelity Based Agent Architecture for Distributed Interactive Simulation.\nIn Proc.\n14th Workshop on Standards for Distributed Interactive Simulation, pages 541-546, March 1996.\n[21] S. Aggarwal and B. Kelly.\nHierarchical Structuring for Distributed Interactive Simulation.\nIn Proc.\n13th Workshop on Standards for Distributed Interactive Simulation, pages 125-132, Sept 1995.\n[22] Even Balance, Inc..\nPunkbuster.\nhttp:\/\/www.evenbalance.com\/, 2001-2006.\n[23] Y. Wang and J. 
Vassileva.\nTrust and Reputation Model in Peer-to-Peer Networks.\nIn Third International Conference on Peer-to-Peer Computing, 2003.\nThe 5th Workshop on Network & System Support for Games 2006 - NETGAMES 2006 9","lvl-3":"Authority Assignment in Distributed Multi-Player Proxy-based Games\nABSTRACT\nWe present a proxy-based gaming architecture and authority assignment within this architecture that can lead to better game playing experience in Massively Multi-player Online games.\nThe proposed game architecture consists of distributed game clients that connect to game proxies (referred to as \"communication proxies\") which forward game related messages from the clients to one or more game servers.\nUnlike proxy-based architectures that have been proposed in the literature where the proxies replicate all of the game state, the communication proxies in the proposed architecture support clients that are in proximity to it in the physical network and maintain information about selected portions of the game space that are relevant only to the clients that they support.\nUsing this architecture, we propose an authority assignment mechanism that divides the authority for deciding the outcome of different actions\/events that occur within the game between client and servers on a per action\/event basis.\nWe show that such division of authority leads to a smoother game playing experience by implementing this mechanism in a massively multi-player online game called RPGQuest.\nIn addition, we argue that cheat detection techniques can be easily implemented at the communication proxies if they are made aware of the game-play mechanics.\n1.\nINTRODUCTION\nIn Massively Multi-player On-line Games (MMOG), game clients who are positioned across the Internet connect to a game server to interact with other clients in order to be part of the game.\nIn current architectures, these interactions are direct in that the game clients and the servers exchange game messages with each other.\nIn 
addition, current MMOGs delegate all authority to the game server to make decisions about the results pertaining to the actions that game clients take and also to decide upon the result of other game related events.\nSuch centralized authority has been implemented with the claim that this improves the security and consistency required in a gaming environment.\nA number of works have shown the effect of network latency on distributed multi-player games [1, 2, 3, 4].\nIt has been shown that network latency has real impact on practical game playing experience [3, 5].\nSome types of games can function quite well even in the presence of large delays.\nFor example, [4] shows that in a modern RPG called Everquest 2, the \"breakpoint\" of the game when adding artificial latency was 1250ms.\nThis is accounted to the fact that the combat system used in Everquest 2 is queueing based and has very low interaction.\nFor example, a player queues up 4 or 5 spells they wish to cast, each of these spells take 1-2 seconds to actually perform, giving the server plenty of time to validate these actions.\nBut there are other games such as FPS games that break even in the presence of moderate network latencies [3, 5].\nLatency compensation techniques have been proposed to alleviate the effect of latency [1, 6, 7] but it is obvious that if MMOGs are to increase in interactivity and speed, more architectures will have to be developed that address responsiveness, accuracy and consistency of the gamestate.\nIn this paper, we propose two important features that would make game playing within MMOGs more responsive for movement and scalable.\nFirst, we propose that centralized server-based architectures be made hierarchical through the introduction of communication proxies so that game updates made by clients that are time sensitive, such as movement, can be more efficiently distributed to other players within their game-space.\nSecond, we propose that assignment of authority in terms of who 
makes the decision on client actions such as object pickups and hits, and collisions between players, be distributed between the clients and the servers in order to distribute the computing load away from the central server.\nIn order to move towards more complex real-time\nnetworked games, we believe that definitions of authority must be refined.\nMost currently implemented MMOGs have game servers that have almost absolute authority.\nWe argue that there is no single consistent view of the virtual game space that can be maintained on any one component within a network that has significant latency, such as the one that many MMOG players would experience.\nWe believe that in most cases, the client with the most accurate view of an entity is the best suited to make decisions for that entity when the causality of that action will not immediately affect any other players.\nIn this paper we define what it means to have authority within the context of events and objects in a virtual game space.\nWe then show the benefits of delegating authority for different actions and game events between the clients and server.\nIn our model, the game space consists of game clients (representing the players) and objects that they control.\nWe divide the client actions and game events (we will collectively refer to these as \"events\") such as collisions, hits etc. 
into three different categories, a) events for which the game client has absolute authority, b) events for which the game server has absolute authority, and c) events for which the authority changes dynamically from client to the server and vice-versa.\nDepending on who has the authority, that entity will make decisions on the events that happen within a game space.\nWe propose that authority for all decisions that pertain to a single player or object in the game that neither affects the other players or objects, nor are affected by the actions of other players be delegated to that player's game client.\nThese type of decisions would include collision detection with static objects within the virtual game space and hit detection with linear path bullets (whose trajectory is fixed and does not change with time) fired by other players.\nAuthority for decisions that could be affected by two or more players should be delegated to the impartial central server, in some cases, to ensure that no conflicts occur and in other cases can be delegated to the clients responsible for those players.\nFor example, collision detection of two players that collide with each other and hit detection of non-linear bullets (that changes trajectory with time) should be delegated to the server.\nDecision on events such as item pickup (for example, picking up items in a game to accumulate points) should be delegated to a server if there are multiple players within close proximity of an item and any one of the players could succeed in picking the item; for item pick-up contention where the client realizes that no other player, except its own player, is within a certain range of the item, the client could be delegated the responsibility to claim the item.\nThe client's decision can always be accurately verified by the server.\nIn summary, we argue that while current authority models that only delegate responsibility to the server to make authoritative decisions on events is more secure than 
allowing the clients to make the decisions, these types of models add undesirable delays to events that could very well be decided by the clients without any inconsistency being introduced into the game.\nAs networked games become more complex, our architecture will become more applicable.\nThis architecture is applicable for massively multiplayer games where the speed and accuracy of game-play are a major concern while consistency between player game-states is still desired.\nWe propose that a mixed authority assignment mechanism such as the one outlined above be implemented in high interaction MMOGs.\nOur paper has the following contributions.\nFirst we propose an architecture that uses communication proxies to enable clients to connect to the game server.\nA communication proxy in the proposed architecture maintains information only about portions of the game space that are relevant to clients connected to it and is able to process the movement information of objects and players within these portions.\nIn addition, it is capable of multicasting this information only to a relevant subset of other communication proxies.\nThese functionalities of a communication proxy leads to a decrease in latency of event update and subsequently, better game playing experience.\nSecond, we propose a mixed authority assignment mechanism as described above that improves game playing experience.\nThird, we implement the proposed mixed authority assignment mechanism within a MMOG called RPGQuest [8] to validate its viability within MMOGs.\nIn Section 2, we describe the proxy-based game architecture in more detail and illustrate its advantages.\nIn Section 3, we provide a generic description of the mixed authority assignment mechanism and discuss how it improves game playing experience.\nIn Section 4, we show the feasibility of implementing the proposed mixed authority assignment mechanism within existing MMOGs by describing a proof-of-concept implementation within an existing MMOG 
called RPGQuest.\nSection 5 discusses related work.\nIn Section 6, we present our conclusions and discuss future work.\n2.\nPROXY-BASED GAME ARCHITECTURE\n3.\nASSIGNMENT OF AUTHORITY\n4 The 5th Workshop on Network & System Supportfor Games 2006--NETGAMES 2006\n4.\nIMPLEMENTATION EXPERIENCE\n5.\nRELATED WORK\nIt has been noted that in addition to latency, bandwidth requirements also dictate the type of gaming architecture to be used.\nIn [16], different types of architectures are studied with respect to bandwidth efficiencies and latency.\nIt is pointed out that Central Server architectures are not scalable because of bandwidth requirements at the server but the overhead for consistency checks are limited as they are performed at the server.\nA Peer-to-Peer architecture, on the other hand, is scalable but there is a significant overhead for consistency checks as this is required at every player.\nThe paper proposes a hybrid architecture which is Peer-toPeer in terms of message exchange (and thereby is scalable) where a Central Server is used for off-line consistency checks (thereby mitigating consistency check overhead).\nThe paper provides an implementation example of BZFlag which is a peer-to-peer game which is modified to transfer all authority to a central server.\nIn essence, this paper advocates an authority architecture which is server based even for peerto-peer games, but does not consider division of authority between a client and a server to minimize latency which could affect game playing experience even with the type of latency found in server based games (where all authority is with the server).\nThere is also previous work that has suggested that proxy based architectures be used to alleviate the latency problem and in addition use proxies to provide congestion control and cheat-proof mechanisms in distributed multi-player games [17].\nIn [18], a proxy server-network architecture is presented that is aimed at improving scalability of multiplayer games 
and lowering latency in server-client data transmission.\nThe main goal of this work is to improve scalability of First-Person Shooter (FPS) and RPG games.\nThe further objective is to improve the responsiveness MMOGs by providing low latency communications between the client and The 5th Workshop on Network & System Supportfor Games 2006--NETGAMES 2006 7 server.\nThe architecture uses interconnected proxy servers that each have a full view of the global game state.\nProxy servers are located at various different ISPs.\nIt is mentioned in this work that dividing the game space among multiple games servers such as the federated model presented in [19] is inefficient for a relatively fast game flow and that the proposed architecture alleviates this problem because users do not have to connect to a different server whenever they cross the server boundary.\nThis architecture still requires all proxies to be aware of the overall game state over the whole game space unlike our work where we require the proxies to maintain only partial state information about the game space.\nFidelity based agent architectures have been proposed in [20, 21].\nThese works propose a distributed client-server architecture for distributed interactive simulations where different servers are responsible for different portions of the game space.\nWhen an object moves from one portion to another, there is a handoff from one server to another.\nAlthough these works propose an architecture where different portions of the simulation space are managed by different servers, they do not address the issue of decreasing the bandwidth required through the use of communication proxies.\nOur work differs from the above discussed previous works by proposing a) a distributed proxy-based architecture to decrease bandwidth requirements at the clients and the servers without requiring the proxies to keep state information about the whole game space, b) a dynamic authority assignment technique to reduce latency 
(by performing consistency checks locally at the client whenever possible) by splitting the authority between the clients and servers on a per object basis, and c) proposing that cheat detection can be built into the proxies if they are provided more information about the specific game instead of using them purely as communication proxies (although this idea has not been implemented yet and is part of our future work).\n6.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we first proposed a proxy-based architecture for MMOGs that enables MMOGs to scale to a large number of users by mitigating the need for a large number of transport sessions to be maintained and decreasing both bandwidth overhead and latency of event update.\nSecond, we proposed a mixed authority assignment mechanism that divides authority for making decisions on actions and events within the game between the clients and server and argued how such an authority assignment leads to better game playing experience without sacrificing the consistency of the game.\nThird, to validate the viability of the mixed authority assignment mechanism, we implemented it within a MMOG called RPGQuest and described our implementation experience.\nIn future work, we propose to implement the communications proxy architecture described in this paper and integrate the mixed authority mechanism within this architecture.\nWe propose to evaluate the benefits of the proxy-based architecture in terms of scalability, accuracy and responsiveness.\nWe also plan to implement a version of the RPGQuest game with dynamic assignment of authority to allow players the authority to pickup objects when no other players are near.\nAs discussed earlier, this will allow for a more efficient and responsive game in certain situations and alleviate some of the processing load from the server.\nAlso, since so much trust is put into the clients of our architecture, it will be necessary to integrate into the architecture many of the cheat detection 
schemes that have been proposed in the literature.\nSoftware such as Punkbuster [22] and a reputation system like those proposed by [23] and [15] would be integral to the operation of an architecture such as ours which has a lot of trust placed on the client.\nWe further propose to make the proxies in our architecture more game cognizant so that cheat detection mechanisms can be built into the proxies themselves.","lvl-4":"Authority Assignment in Distributed Multi-Player Proxy-based Games\nABSTRACT\nWe present a proxy-based gaming architecture and authority assignment within this architecture that can lead to better game playing experience in Massively Multi-player Online games.\nThe proposed game architecture consists of distributed game clients that connect to game proxies (referred to as \"communication proxies\") which forward game related messages from the clients to one or more game servers.\nUnlike proxy-based architectures that have been proposed in the literature where the proxies replicate all of the game state, the communication proxies in the proposed architecture support clients that are in proximity to it in the physical network and maintain information about selected portions of the game space that are relevant only to the clients that they support.\nUsing this architecture, we propose an authority assignment mechanism that divides the authority for deciding the outcome of different actions\/events that occur within the game between client and servers on a per action\/event basis.\nWe show that such division of authority leads to a smoother game playing experience by implementing this mechanism in a massively multi-player online game called RPGQuest.\nIn addition, we argue that cheat detection techniques can be easily implemented at the communication proxies if they are made aware of the game-play mechanics.\n1.\nINTRODUCTION\nIn Massively Multi-player On-line Games (MMOG), game clients who are positioned across the Internet connect to a game server 
5. RELATED WORK

It has been noted that, in addition to latency, bandwidth requirements also dictate the type of gaming architecture to be used. In [16], different types of architectures are studied with respect to bandwidth efficiency and latency. It is pointed out that Central Server architectures are not scalable because of the bandwidth requirements at the server, but the overhead for consistency checks is limited as they are performed at the server. A Peer-to-Peer architecture, on the other hand, is scalable
but there is a significant overhead for consistency checks as these are required at every player. The paper proposes a hybrid architecture which is Peer-to-Peer in terms of message exchange (and thereby scalable) where a Central Server is used for off-line consistency checks (thereby mitigating consistency check overhead). The paper provides an implementation example of BZFlag, a peer-to-peer game which is modified to transfer all authority to a central server. In essence, this paper advocates an authority architecture which is server based even for peer-to-peer games, but does not consider division of authority between a client and a server to minimize latency, which could affect game playing experience even with the type of latency found in server-based games (where all authority is with the server).

There is also previous work suggesting that proxy-based architectures be used to alleviate the latency problem and, in addition, that proxies be used to provide congestion control and cheat-proof mechanisms in distributed multi-player games [17]. In [18], a proxy server-network architecture is presented that is aimed at improving scalability of multiplayer games and lowering latency in server-client data transmission. The main goal of this work is to improve the scalability of First-Person Shooter (FPS) and RPG games. A further objective is to improve the responsiveness of MMOGs by providing low latency communications between the client and server. The architecture uses interconnected proxy servers that each have a full view of the global game state. Proxy servers are located at various different ISPs. This architecture still requires all proxies to be aware of the overall game state over the whole game space, unlike our work, where we require the proxies to maintain only partial state information about the game space.

The 5th Workshop on Network & System Support for Games 2006 -- NETGAMES 2006

Fidelity-based agent architectures have been proposed in [20,
21]. These works propose a distributed client-server architecture for distributed interactive simulations where different servers are responsible for different portions of the game space. When an object moves from one portion to another, there is a handoff from one server to another. Although these works propose an architecture where different portions of the simulation space are managed by different servers, they do not address the issue of decreasing the bandwidth required through the use of communication proxies.

6. CONCLUSIONS AND FUTURE WORK

First, we proposed a proxy-based game architecture in which communication proxies maintain state information only for the portions of the game space that are relevant to the clients connected to them. Second, we proposed a mixed authority assignment mechanism that divides authority for making decisions on actions and events within the game between the clients and server, and argued how such an authority assignment leads to better game playing experience without sacrificing the consistency of the game. Third, to validate the viability of the mixed authority assignment mechanism, we implemented it within a MMOG called RPGQuest and described our implementation experience.

In future work, we propose to implement the communication proxy architecture described in this paper and integrate the mixed authority mechanism within this architecture. We propose to evaluate the benefits of the proxy-based architecture in terms of scalability, accuracy and responsiveness. We also plan to implement a version of the RPGQuest game with dynamic assignment of authority, allowing players the authority to pick up objects when no other players are near. As discussed earlier, this will allow for a more efficient and responsive game in certain situations and alleviate some of the processing load from the server. Also, since so much trust is put into the clients of our architecture, it will be necessary to integrate into the architecture many of the cheat detection schemes that have been proposed in the literature. We further propose to make the proxies in our architecture more game cognizant so that cheat detection mechanisms can be
built into the proxies themselves.

Authority Assignment in Distributed Multi-Player Proxy-based Games

ABSTRACT

We present a proxy-based gaming architecture and an authority assignment within this architecture that can lead to better game playing experience in Massively Multi-player Online games. The proposed game architecture consists of distributed game clients that connect to game proxies (referred to as "communication proxies") which forward game related messages from the clients to one or more game servers. Unlike proxy-based architectures that have been proposed in the literature, where the proxies replicate all of the game state, the communication proxies in the proposed architecture support clients that are in proximity to them in the physical network and maintain information about selected portions of the game space that are relevant only to the clients that they support. Using this architecture, we propose an authority assignment mechanism that divides the authority for deciding the outcome of different actions/events that occur within the game between clients and servers on a per action/event basis. We show that such division of authority leads to a smoother game playing experience by implementing this mechanism in a massively multi-player online game called RPGQuest. In addition, we argue that cheat detection techniques can be easily implemented at the communication proxies if they are made aware of the game-play mechanics.

1. INTRODUCTION

In Massively Multi-player On-line Games (MMOGs), game clients who are positioned across the Internet connect to a game server to interact with other clients in order to be part of the game. In current architectures, these interactions are direct in that the game clients and the servers exchange game messages with each other. In addition, current MMOGs delegate all authority to the game server to make decisions about the results pertaining to the actions that game clients take and also to decide upon
the result of other game related events. Such centralized authority has been implemented with the claim that this improves the security and consistency required in a gaming environment.

A number of works have shown the effect of network latency on distributed multi-player games [1, 2, 3, 4]. It has been shown that network latency has a real impact on practical game playing experience [3, 5]. Some types of games can function quite well even in the presence of large delays. For example, [4] shows that in a modern RPG called Everquest 2, the "breakpoint" of the game when adding artificial latency was 1250 ms. This is attributed to the fact that the combat system used in Everquest 2 is queueing based and has very low interaction: a player queues up 4 or 5 spells they wish to cast, and each of these spells takes 1-2 seconds to actually perform, giving the server plenty of time to validate these actions. But there are other games, such as FPS games, that break even in the presence of moderate network latencies [3, 5]. Latency compensation techniques have been proposed to alleviate the effect of latency [1, 6, 7], but it is clear that if MMOGs are to increase in interactivity and speed, more architectures will have to be developed that address responsiveness, accuracy and consistency of the game state.

In this paper, we propose two important features that would make game playing within MMOGs more responsive for movement and more scalable. First, we propose that centralized server-based architectures be made hierarchical through the introduction of communication proxies so that game updates made by clients that are time sensitive, such as movement, can be more efficiently distributed to other players within their game-space. Second, we propose that the assignment of authority, in terms of who makes the decision on client actions such as object pickups, hits, and collisions between players, be distributed between the clients and the servers in order to distribute
the computing load away from the central server.

In order to move towards more complex real-time networked games, we believe that definitions of authority must be refined. Most currently implemented MMOGs have game servers that have almost absolute authority. We argue that there is no single consistent view of the virtual game space that can be maintained on any one component within a network that has significant latency, such as the one that many MMOG players would experience. We believe that in most cases, the client with the most accurate view of an entity is best suited to make decisions for that entity when the causality of that action will not immediately affect any other players.

In this paper we define what it means to have authority within the context of events and objects in a virtual game space. We then show the benefits of delegating authority for different actions and game events between the clients and server. In our model, the game space consists of game clients (representing the players) and objects that they control. We divide the client actions and game events (we will collectively refer to these as "events"), such as collisions and hits,
into three different categories: a) events for which the game client has absolute authority, b) events for which the game server has absolute authority, and c) events for which the authority changes dynamically from client to server and vice versa. Depending on who has the authority, that entity will make decisions on the events that happen within a game space.

We propose that authority for all decisions that pertain to a single player or object in the game that neither affect other players or objects, nor are affected by the actions of other players, be delegated to that player's game client. These types of decisions would include collision detection with static objects within the virtual game space and hit detection with linear path bullets (whose trajectory is fixed and does not change with time) fired by other players. Authority for decisions that could be affected by two or more players should in some cases be delegated to the impartial central server, to ensure that no conflicts occur, and in other cases can be delegated to the clients responsible for those players. For example, collision detection of two players that collide with each other and hit detection of non-linear bullets (whose trajectory changes with time) should be delegated to the server. Decisions on events such as item pickup (for example, picking up items in a game to accumulate points) should be delegated to the server if there are multiple players within close proximity of an item and any one of the players could succeed in picking up the item; when the client realizes that no player other than its own is within a certain range of the item, the client could be delegated the responsibility to claim the item. The client's decision can always be accurately verified by the server.

In summary, we argue that while current authority models that only delegate responsibility to the server to make authoritative decisions on events are more secure than allowing the clients to make the decisions, these types of models add undesirable delays to events that could very well be decided by the clients without any inconsistency being introduced into the game.
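The three-way categorization above can be sketched as a simple rule table. This is an illustrative sketch, not the paper's implementation: the event names, the Authority enum, and the contenders_nearby flag are our own assumptions.

```python
from enum import Enum

class Authority(Enum):
    CLIENT = "client"    # decided locally; the server may verify later
    SERVER = "server"    # decided only by the impartial central server
    DYNAMIC = "dynamic"  # client when uncontended, server otherwise

# Illustrative mapping of events to categories a), b) and c) above
# (the event names are our own, not from the paper).
EVENT_AUTHORITY = {
    "collision_with_static_object": Authority.CLIENT,
    "linear_bullet_hit": Authority.CLIENT,
    "player_player_collision": Authority.SERVER,
    "nonlinear_bullet_hit": Authority.SERVER,
    "item_pickup": Authority.DYNAMIC,
}

def decide_authority(event: str, contenders_nearby: bool) -> Authority:
    """Resolve category c) at runtime: the client keeps authority only
    while no other player could imminently contend for the outcome."""
    authority = EVENT_AUTHORITY[event]
    if authority is Authority.DYNAMIC:
        return Authority.SERVER if contenders_nearby else Authority.CLIENT
    return authority
```

Under this sketch, an uncontested item pickup resolves to client authority, while the same event with another player nearby resolves to the server.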
As networked games become more complex, our architecture will become more applicable. This architecture is applicable for massively multiplayer games where the speed and accuracy of game-play are a major concern while consistency between player game-states is still desired. We propose that a mixed authority assignment mechanism such as the one outlined above be implemented in high-interaction MMOGs.

Our paper has the following contributions. First, we propose an architecture that uses communication proxies to enable clients to connect to the game server. A communication proxy in the proposed architecture maintains information only about portions of the game space that are relevant to clients connected to it and is able to process the movement information of objects and players within these portions. In addition, it is capable of multicasting this information only to a relevant subset of other communication proxies. These functionalities of a communication proxy lead to a decrease in the latency of event updates and, subsequently, a better game playing experience. Second, we propose a mixed authority assignment mechanism as described above that improves game playing experience. Third, we implement the proposed mixed authority assignment mechanism within a MMOG called RPGQuest [8] to validate its viability within MMOGs.

In Section 2, we describe the proxy-based game architecture in more detail and illustrate its advantages. In Section 3, we provide a generic description of the mixed authority assignment mechanism and discuss how it improves game playing experience. In Section 4, we show the feasibility of implementing the proposed mixed authority assignment mechanism within existing MMOGs by describing a proof-of-concept implementation within an existing MMOG
called RPGQuest. Section 5 discusses related work. In Section 6, we present our conclusions and discuss future work.

2. PROXY-BASED GAME ARCHITECTURE

Massively Multi-player Online Games (MMOGs) usually consist of a large game space in which the players and different game objects reside, move around and interact with each other. State information about the whole game space could be kept in a single central server, which we refer to as a Central-Server Architecture. But to alleviate the heavy processing demand of handling the large player population and the objects in the game in real time, a MMOG is normally implemented using a distributed server architecture where the game space is sub-divided into regions, so that each region has a relatively small number of players and objects that can be handled by a single server. In other words, the different game regions are hosted by different servers in a distributed fashion. When a player moves out of one game region to another adjacent one, the player must communicate with a different server (than the one it was communicating with) hosting the new region. The servers communicate with one another to hand off a player or an object from one region to another. In this model, the player on the client machine has to establish multiple gaming sessions with different servers so that it can roam the entire game space.

We propose a communication proxy based architecture where a player connects to a (geographically) nearby proxy instead of connecting to a central server in the case of a central-server architecture, or to one of the servers in the case of a distributed server architecture. In the proposed architecture, players who are geographically close by join a particular proxy. The proxy then connects to one or more game servers, as needed by the set of players that connect to it, and maintains persistent transport sessions with these servers. This alleviates the problem of each player having
to connect directly to multiple game servers, which can add extra connection setup delay. The introduction of communication proxies also mitigates the overhead of the large number of transport sessions that must otherwise be managed, and reduces the required network bandwidth [9] and processing at the game servers, both with central server and with distributed server architectures. With central server architectures, communication proxies reduce the overhead at the server by not requiring the server to terminate persistent transport sessions from every one of the clients. With distributed-server architectures, communication proxies additionally eliminate the need for the clients to maintain persistent transport sessions to every one of the servers. Figure 1 shows the proposed architecture.

Figure 1: Architecture of the gaming environment.

Note that the communication proxies need not be cognizant of the game. They host a number of players and inform the servers which players are hosted by the proxy in question. Also note that the players hosted by a proxy may not be in the same game space. That is, a proxy hosts players that are geographically close to it, but the players themselves can reside in different parts of the game space. The proxy communicates with the servers responsible for maintaining the game spaces subscribed to by its different players. The proxies communicate with one another in a peer-to-peer fashion.

The responsiveness of the game can be improved for updates that do not need to wait on processing at a central authority. In this way, information about players can be disseminated even before the game server gets to know about it, which improves the responsiveness of the game. However, this ignores consistency, which is critical in MMORPGs. How an architecture such as this one can still maintain temporal consistency is discussed in detail in Section 3. Figure 2 shows an example of the working principle of the proposed
architecture. Assume that the game space is divided into 9 regions and that there are three servers responsible for managing the regions. Server S1 owns regions 1 and 2, S2 manages 4, 5, 7, and 8, and S3 is responsible for 3, 6 and 9.

Figure 2: An example.

There are four communication proxies placed in geographically distant locations. Players a, b, c join proxy P1, proxy P2 hosts players d, e, f, players g, h are with proxy P3, whereas players i, j, k, l are with proxy P4. Underneath each player, the figure shows in which game region the player is currently located. For example, players a, b, c are in regions 1, 2, 6, respectively. Therefore, proxy P1 must communicate with servers S1 and S3. The reader can verify the rest of the links between the proxies and the servers.

Players can move within a region and between regions. Player movement within a region will be tracked by the proxy hosting the player, and this movement information (for example, the player's new coordinates) will be multicast directly to a subset of other relevant communication proxies. At the same time, this information will be sent to the server responsible for that region with the indication that this movement has already been communicated to all the other relevant communication proxies (so that the server does not have to relay this information to all the proxies). For example, if player a moves within region 1, this information will be communicated by proxy P1 to server S1 and multicast to proxies P3 and P4. Note that proxies that do not keep state information about this region at this point in time (because they do not have any clients within that region), such as P2, do not have to receive this movement information.

If a player is at the boundary of a region and moves into a new region, there are two possibilities. The first possibility is that the proxy hosting the player can identify the region into which the player is moving (based on the trajectory information) because it is
also maintaining state information about the new region at that point in time. In this case, the proxy can update movement information directly at the other relevant communication proxies and also send information to the appropriate server informing it of the movement (this may require handoff between servers, as we will describe). Consider the scenario where player a is at the boundary of region 1 and proxy P1 can identify that the player is moving into region 2. Because proxy P1 is currently keeping state information about region 2, it can inform all the other relevant communication proxies (in this example, no other proxy maintains information about region 2 at this point, so no update needs to be sent to any of the other proxies) about this movement and then inform the server independently. In this particular case, server S1 is responsible for region 2 as well, so no handoff between servers would be needed.

Now consider another scenario where player j moves from region 9 to region 8 and proxy P4 is able to identify this movement. Again, because proxy P4 maintains state information about region 8, it can inform any other relevant communication proxies (again, none in this example) about this movement. But now, regions 9 and 8 are managed by different servers (servers S3 and S2, respectively) and thus a hand-off between these servers is needed. We propose that in this particular scenario, the handoff be managed by the proxy P4 itself. When the proxy sends a movement update to server S3 (informing the server that the player is moving out of its region), it also sends a message to server S2 informing that server of the presence and location of the player in one of its regions.

In the intra-region and inter-region scenarios described above, the proxy is able to manage movement related information, update only the relevant communication proxies about the movement, update the servers with the movement and enable handoff of a player between the servers if needed. In this way, the proxy performs movement updates without involving the servers in this time-critical function, thereby speeding up the game and improving the game playing experience for the players. We consider this the "fast path" for movement update.
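The Figure 2 example above can be checked with a few lines of code. The region/server/proxy assignments follow the example; however, the regions of players other than a, b and c are not stated in the text, so the assignment below is an assumption chosen to be consistent with the narrative, and the helper names are ours.

```python
# Figure 2 setup: which server owns each region, which proxy hosts
# which players, and where each player currently is.  Only players
# a, b, c have regions stated in the text (1, 2, 6); the rest are an
# assumed assignment consistent with the example's narrative.
REGION_SERVER = {1: "S1", 2: "S1", 3: "S3", 4: "S2", 5: "S2",
                 6: "S3", 7: "S2", 8: "S2", 9: "S3"}
PROXY_PLAYERS = {"P1": ["a", "b", "c"], "P2": ["d", "e", "f"],
                 "P3": ["g", "h"], "P4": ["i", "j", "k", "l"]}
PLAYER_REGION = {"a": 1, "b": 2, "c": 6, "d": 4, "e": 5, "f": 5,
                 "g": 1, "h": 9, "i": 1, "j": 9, "k": 8, "l": 8}

def servers_for(proxy):
    """Servers a proxy must keep sessions with: the owners of the
    regions its hosted players are currently in."""
    return {REGION_SERVER[PLAYER_REGION[p]] for p in PROXY_PLAYERS[proxy]}

def multicast_targets(mover):
    """Other proxies that currently track the mover's region, i.e. the
    only proxies that need to receive the movement update."""
    region = PLAYER_REGION[mover]
    home = next(px for px, ps in PROXY_PLAYERS.items() if mover in ps)
    return {px for px, ps in PROXY_PLAYERS.items()
            if px != home and any(PLAYER_REGION[p] == region for p in ps)}
```

Under this assignment, servers_for("P1") yields {"S1", "S3"} as stated in the text, and multicast_targets("a") yields {"P3", "P4"}, matching the region-1 example in which P2 is skipped.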
We envision the proxies to be just communication proxies in that they do not know about the workings of specific games. They merely process movement information of players and objects and communicate this information to the other proxies and the servers. If the proxies are made more intelligent, in that they understand more of the game logic, it is possible for them to quickly check on claims made by the clients and mitigate cheating. The servers could perform the same functionality, but with more delay. Even without being aware of game logic, the proxies can provide additional functionality such as timestamping messages to make the game playing experience more accurate [10] and fair [11].

The second possibility that should be considered is when a player moves from one region to another but the proxy that is hosting the player is not able to determine the region into which the player is moving, either because a) the proxy does not maintain state information about all the regions into which the player could potentially move, or b) the proxy cannot determine which region the player may move into (even if it maintains state information about all these regions). In this case, we propose that the proxy not be responsible for making the movement decision, but instead communicate the movement indication to the server responsible for the region within which the player is currently located. The server will then make the movement decision and then a) inform all the proxies, including the proxy hosting the player, and b) initiate a handoff with another server if the player moves into a region managed by another server. We consider this the "slow path" for movement update, in that the servers need to be involved in determining the new position of the player.
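A proxy's routing choice between these two paths reduces to a small decision procedure. The sketch below is our own illustration, assuming the proxy knows which regions it currently tracks and which server owns each region.

```python
def route_movement(proxy_regions, cur_region, new_region, region_server):
    """Pick the fast or slow path for a movement update.

    proxy_regions: the set of regions this proxy currently tracks.
    Returns (path, handoff), where handoff is a (from_server, to_server)
    pair when the move crosses a server boundary, else None.
    """
    src, dst = region_server[cur_region], region_server[new_region]
    handoff = (src, dst) if src != dst else None
    if new_region in proxy_regions:
        # Fast path: the proxy updates the relevant peer proxies
        # directly, informs the server(s), and manages the handoff.
        return "fast", handoff
    # Slow path: the proxy forwards the indication; the current
    # region's server decides, informs all proxies, and initiates
    # any handoff itself.
    return "slow", handoff
```

In the paper's scenarios, player j's move from region 9 to 8 (both tracked by P4) would take the fast path with an S3-to-S2 handoff, while player a's move from region 1 to 4 (untracked by P1) would take the slow path.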
In the example, assume that player a moves from region 1 to region 4. Proxy P1 does not maintain state information about region 4 and thus would pass the movement information to server S1. The server will identify that the player has moved into region 4 and will inform proxy P1 as well as proxy P2 (which is the only other proxy that maintains information about region 4 at this point in time). Server S1 will also initiate a handoff of player a with server S2. Proxy P1 will now start maintaining state information about region 4 because one of its hosted players, player a, has moved into this region. It will do so by requesting and receiving the current state information about region 4 from server S2, which is responsible for this region.

Thus, a proxy architecture allows us to make use of faster movement updates through the fast path through a proxy if and when possible, as opposed to conventional server-based architectures that always have to use the slow path through the server for movement updates. By selectively maintaining relevant regional game state information at the proxies, we are able to achieve this capability in our architecture without the need for maintaining the complete game state at every proxy.

3. ASSIGNMENT OF AUTHORITY

As a MMOG is played, the players and the game objects that are part of the game continually change their state. For example, consider a player who owns a tank in a battlefield game. Based on the actions of the player, the tank changes its position in the game space, the amount of ammunition the tank contains changes as it fires at other tanks, the tank collects bonus firing power based on successful hits, etc. Similarly, objects in the battlefield, such as flags and buildings, change their state when a flag is picked up by a player (i.e.
tank) or a building is destroyed by firing at it. That is, some decision has to be made on the state of each player and object as the game progresses. Note that the state of a player and/or object can contain several parameters (e.g., position, amount of ammunition, fuel storage, points collected, etc.), and if any of the parameters changes, the state of the player/object changes.

In a client-server based game, the server controls all the players and the objects. When a player at a client machine makes a move, the move is transmitted to the server over the network. The server then analyzes the move and, if the move is a valid one, changes the state of the player at the server and informs the client of the change. The client subsequently updates the state of the player and renders the player at the new location. In this case the authority to change the state of the player resides entirely with the server, and the client simply follows what the server instructs it to do. Most current first person shooter (FPS) games and role playing games (RPGs) fall under this category.

In current FPS games, much like in RPG games, the client is not trusted. All moves and actions that it makes are validated. If a client detects that it has hit another player with a bullet, it proceeds assuming that it is a hit. Meanwhile, an update is sent to the server, and the server will send back a message either affirming or denying that the player was hit. If the remote player was not hit, then the client will know that it did not actually make the shot. If it did make the hit, an update will also be sent from the server to the other clients informing them that the other player was hit. A difference that occurs in some RPGs is that they use very dumb client programs. Some RPGs do not maintain state information at the client and therefore cannot predict anything, such as hits, at the client. State
information is not maintained because the client is not trusted with it. In RPGs, a cheating player with a hacked game client can use state information stored at the client to gain an advantage and find things such as hidden treasure or monsters lurking around the corner. This is a reason why most MMORPGs do not send a lot of state information to the client, which causes the game to be less responsive and have lower-interaction game-play than FPS games.

In a peer-to-peer game, each peer controls the player and objects that it "owns". When a player makes a move, the peer machine analyzes the move and, if it is a valid one, changes the state of the player and places the player in the new position. Afterwards, the owner peer informs all other peers about the new state of the player, and the rest of the peers update the state of the player. In this scenario, the authority to change the state of the player is given to the owning peer, and all other peers simply follow the owner. For example, Battle Zone Flag (BzFlag) [12] is a multiplayer client-server game where the client has all authority for making decisions. It was built primarily with LAN play in mind and cheating as an afterthought. Clients in BzFlag are completely authoritative, and when they detect that they were hit by a bullet, they send an update to the server, which simply forwards the message along to all other players. The server does no validation at all.

Each of the above two traditional approaches has its own set of advantages and disadvantages. The first approach, which we will refer to as "server authoritative" henceforth, uses a centralized method to assign authority. While a centralized approach can keep the state of the game (i.e., the state of all the players and objects) consistent across any number of client machines, it suffers from delayed response in game-play, as any move that a player at the client machine makes must go through one round-trip delay to the server before it can take effect on the client's screen.
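The validate-then-apply round trip described above can be sketched as follows; the class and message shapes are our own simplification (real engines combine this with client-side prediction and rollback).

```python
class AuthoritativeServer:
    """Server-authoritative hit handling: a client's hit claim is only
    provisional until the server affirms or denies it against its own
    view of the world."""

    def __init__(self, positions, hit_radius=1.0):
        self.positions = dict(positions)  # player -> (x, y)
        self.hit_radius = hit_radius

    def validate_hit(self, target, claimed_point):
        """Affirm the hit only if the server's position for the target
        is within hit_radius of where the shooter claims the shot landed."""
        tx, ty = self.positions[target]
        cx, cy = claimed_point
        return (tx - cx) ** 2 + (ty - cy) ** 2 <= self.hit_radius ** 2

# A client predicts a hit on p2 and proceeds; the server checks the
# claim and either affirms it (and notifies the other clients) or
# denies it, at the cost of one round trip.
server = AuthoritativeServer({"p1": (0.0, 0.0), "p2": (5.0, 5.0)})
affirmed = server.validate_hit("p2", (5.2, 4.9))
denied = server.validate_hit("p2", (0.0, 0.0))
```

The round trip that `validate_hit` stands in for is exactly the source of the delayed response discussed next.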
In addition to the round-trip delay, there is also queuing delay in processing the state change request at the server. This can result in additional processing delay, and can also bring in severe scalability problems if there are a large number of clients playing the game. One definite advantage of the server authoritative approach is that it can easily detect if a client is cheating and can take appropriate action to prevent cheating.

The peer-to-peer approach, henceforth referred to as "client authoritative", can make games very responsive. However, it can make the game state inconsistent for a few players, and a tie break (or roll back) has to be performed to bring the game back to a consistent state. Neither tie break nor roll back is a desirable feature of online gaming. For example, assume that the goal of each player in a game is to collect as many flags as possible from the game space (e.g., BzFlag). When two players in proximity try to collect the same flag at the same time, depending on the algorithm used at the client side, both clients may determine that their player is the winner, although in reality only one player can pick the flag up. Both players will see on their screen that they have won. This makes the state of the game inconsistent. Ways to recover from this inconsistency are to give the flag to only one player (using some tie break rule) or to roll the game back so that the players can try again. Neither of these two approaches is a pleasing experience for online gaming. Another problem with the client authoritative approach is that of cheating by clients, as there is no cross-checking of the validity of the state changes authorized by the owner client.

We propose to use a hybrid approach that assigns the authority dynamically between the client and the server. That is, we assign the authority to the client to make the game responsive, and use the server's authority only when the client's individual authoritative decisions
can make the game state inconsistent. By moving the authority for time-critical updates to the client, we avoid the added delay caused by requiring the server to validate these updates. For example, in the flag-pickup game, the clients will be given the authority to pick up flags only when other players are not within a range from which they could imminently pick up a flag. Only when two or more players are close enough that more than one of them might claim to have picked up a flag does the authority for movement and flag pickup go to the central server, so that the game state does not become inconsistent. We believe that in a large game space where a player is often in a very wide-open and sparsely populated area, such as those often seen in the game Second Life [13], this hybrid architecture would be very beneficial because of the long periods during which the client would have authority to send movement updates for itself. This has two advantages over the central-authority approach: it distributes the processing load down to the clients for the majority of events, and it allows for a more responsive game that does not need to wait on a server for validation. We believe that our notion of authority can be used to develop a globally consistent state model of the evolution of a game. Fundamentally, the consistent state of the system is the one that is defined by the server. However, if local authority is delegated to the client, the client's state is superimposed on the server's state to determine the correct global state. For example, if the client is authoritative with respect to the movement of a player, then the trajectory of the player is the "true" trajectory and must replace the server's view of the player's trajectory. Note that this could be problematic and lead to temporal inconsistency only if, for example, two or more entities are moving in the same region and can interact with each other. In this situation, the client authority must revert to the
server, and the server would then make the decisions. Thus, the client is only authoritative in situations where there is no potential to imminently interact with other players. We believe that in complex MMOGs, even when allowing more rapid movement, it will still be the case that local authority is possible for significant spans of game time. Note that it might also be possible to minimize the occurrences of the "Dead Man Shooting" problem described in [14]. This could be done by allowing the client to be authoritative for more actions, such as its player's own death, and disallowing other players from making preemptive decisions based on a remote player.

The 5th Workshop on Network & System Support for Games 2006 (NETGAMES 2006)

One reason why the client-server architecture has gained popularity is the belief that the fastest route to the other clients is through the server. While this may be true, we aim to create a new architecture where decisions do not always have to be made at the game server and the fastest route to a client is actually through a communication proxy located close to that client. That is, the shortest path in our architecture is not through the game server but through the communication proxy. After a client performs an action such as a movement, it will distribute it simultaneously to the other clients and to the game server by way of the communication proxy. We note, however, that our architecture is not practical for games where players set up their own servers in an ad-hoc fashion and do not have access to proxies at the various ISPs. This proxy and distributed-authority architecture can be used to its full potential only when the proxies can be placed at strategic places within the main ISPs and evenly distributed geographically. Our game architecture does not assume that the client cannot be trusted. We are designing our architecture on the premise that there will be sufficient cheat-deterring and detection mechanisms
present so that it will be both undesirable and very difficult to cheat [15]. In our proposed approach, we can make games cheat-resilient by using the proxy-based architecture when client-authoritative decisions take place. In order to achieve this, the proxies have to be game-cognizant, so that decisions made by a client can be cross-checked by the proxy that the client connects to. For example, assume that in a game a plane controlled by a client moves through the game space. It is not possible for the plane to go through a building unharmed. In a client-authoritative mode, it is possible for the client to cheat by maneuvering the plane through a building and claiming the plane to be unharmed. However, when such a move is published by the client, the proxy, being aware of the game space that the plane is in, can quickly check that the client has misused its authority and can then block the move. This allows us to distribute the authority to make decisions about the clients. In the following section we use a multiplayer game called RPGQuest to implement different authoritative schemes and discuss our experience with the implementation. Our implementation shows the viability of our proposed solution.

4. IMPLEMENTATION EXPERIENCE

We have experimented with the authority-assignment mechanism described in the last section by implementing it in a game called RPGQuest. A screen shot from this game is shown in Figure 3. The purpose of the implementation is to test its feasibility in a real game. RPGQuest is a basic first-person game in which the player can move around a three-dimensional environment. Objects are placed within the game world, and players gain points for each object that is collected. The game clients connect to a game server, which allows many players to coexist in the same game world. The basic functionality of this game is representative of current online first-person shooter and role-playing games. The game uses the DirectX 8 graphics
API and the DirectPlay networking API. In this section we will discuss the three different versions of the game that we experimented with.

Figure 3: The RPGQuest Game.

The first version of the game, which is the original implementation of RPGQuest, was created with a completely authoritative server and a non-authoritative client. Authority given to the server includes deciding when a player collides with static objects and other players and when a player picks up an object. This version of the game performs well up to 100 ms round-trip latency between the client and the server. There is little lag between the time a player hits a wall and the time the server corrects the player's position. However, as more latency is induced between the client and server, the game becomes increasingly difficult to play. With the increased latency, the messages coming from the server correcting the player when it runs into a wall are not received fast enough. This causes the player to pass through the wall during the period in which it is waiting for the server to resolve the collision. A study of the source code of the original version of the RPGQuest game shows a substantial delay that is unavoidable whenever an action must be validated by the server. Whenever a movement update is sent to the server, the client must wait for the round-trip delay, plus some processing time at the server, in order to receive its validated or corrected position. This is obviously unacceptable in any game where movement or any other rapidly changing state information must be validated and disseminated to the other clients rapidly. In order to get around this problem, we developed a second version of the game, which gives all authority to the client. The client was delegated the authority to validate its own movement and the authority to pick up objects without validation from the server. In this version of the game, when a player moves around the game space, the client
validates that the player's new position does not intersect with any walls or static objects. A position update is then sent to the server, which immediately forwards the update to the other clients within the region. The update does not have to go through any extra processing or validation. This game model of complete authority given to the client is beneficial with respect to movement. When latencies of 100 ms and up are induced into the link between the client and server, the game is still playable, since time-critical aspects of the game like movement do not have to wait on a reply from the server. When a player hits a wall, the collision is processed locally and does not have to wait on the server to resolve the collision. Although the game-playing experience with respect to responsiveness is improved when the authority for movement is given to the client, there are still aspects of games that do not benefit from this approach. The most important of these is consistency. Although actions such as movement are time critical, other actions are not as time critical, but instead require consistency among the player states. An example of a game aspect that requires consistency is picking up objects that should only be possessed by a single player. In our client-authoritative version of RPGQuest, clients send their own updates to all other players whenever they pick up an object. From our tests we have realized that this is a problem: when there is a realistic amount of latency between the client and server, it is possible for two players to pick up the same object at the same time. When two players attempt to pick up an object at physical times close to each other, the update sent by the player who picked up the object first will not reach the second player in time for it to see that the object has already been claimed. The two players will now both think that they own the object. This is why a server is still needed to be authoritative in
this situation and maintain consistency among the players. These two versions of the RPGQuest game have shown us why it is necessary to mix the two absolute models of authority. It is better to place authority on the client for quickly changing actions such as movement. It is not desirable to have to wait for server validation of a movement that could change before the reply is even received. It is also sometimes necessary to place consistency over efficiency for aspects of the game that cannot tolerate any inconsistencies, such as object ownership. We believe that as the interactivity of games increases, our architecture of mixed authority that does not rely on server validation will be necessary. To test the benefits and show the feasibility of our architecture of mixed authority, we developed a third version of the RPGQuest game that distributed authority for different actions between the client and server. In this version, in the interest of consistency, the server remained authoritative for deciding who picked up an object. The client was given full authority to send positional updates to other clients and to verify its own position without the need to verify its updates with the server. When the player tries to move their avatar, the client verifies that the move will not cause it to move through a wall. A positional update is then sent to the server, which simply forwards it to the other clients within the region. This eliminates any extra processing delay that would occur at the server and is also a more accurate means of verification, since the client has a more accurate view of its own state than the server does. This version of the RPGQuest game, where authority is distributed between the client and server, is an improvement over the server-authoritative version. The client experiences no delay waiting for an update of its own position, and other clients do not have to wait on the server to verify the update. The inconsistencies whereby two clients
can pick up the same object under the client-authoritative architecture are not present in this version of the game. However, the benefits of mixed authority will not truly be seen until an implementation of our communication proxy is integrated into the game. With the addition of the communication proxy, after the client verifies its own positional update it will be able to send the update to all clients within its region through a low-latency link, instead of having to first go through the game server, which could be in a very remote location. The coding of the different versions of the game was very simple. The complexity of the client increased only slightly in the client-authoritative and hybrid models. The original "dumb" clients of RPGQuest do know the positions of other players; a client is not just sent a screen snapshot from the server. The server updates each client with the positions of all nearby clients, and the "dumb" clients use client-side prediction to fill in the gaps between the updates they receive. The only extra processing the client has to do in the hybrid architecture is to compare its current position to the positions of all objects (walls, boxes, etc.)
in its area. This obviously means that each client must already have downloaded the locations of all static objects within its current region.

5. RELATED WORK

It has been noted that, in addition to latency, bandwidth requirements also dictate the type of gaming architecture to be used. In [16], different types of architectures are studied with respect to bandwidth efficiency and latency. It is pointed out that central-server architectures are not scalable because of the bandwidth requirements at the server, but the overhead for consistency checks is limited, as they are performed at the server. A peer-to-peer architecture, on the other hand, is scalable, but there is a significant overhead for consistency checks, as these are required at every player. The paper proposes a hybrid architecture that is peer-to-peer in terms of message exchange (and thereby scalable) while a central server is used for off-line consistency checks (thereby mitigating the consistency-check overhead). The paper provides an implementation example of BZFlag, a peer-to-peer game which is modified to transfer all authority to a central server. In essence, this paper advocates an authority architecture which is server-based even for peer-to-peer games, but it does not consider dividing authority between a client and a server to minimize latency, which could affect the game-playing experience even with the type of latency found in server-based games (where all authority is with the server). There is also previous work that has suggested that proxy-based architectures be used to alleviate the latency problem, and in addition use proxies to provide congestion control and cheat-proof mechanisms in distributed multi-player games [17]. In [18], a proxy server-network architecture is presented that is aimed at improving the scalability of multiplayer games and lowering latency in server-client data transmission. The main goal of this work is to improve the scalability of First-Person Shooter
(FPS) and RPG games. A further objective is to improve the responsiveness of MMOGs by providing low-latency communication between the client and the server. The architecture uses interconnected proxy servers that each have a full view of the global game state. Proxy servers are located at various different ISPs. It is mentioned in this work that dividing the game space among multiple game servers, as in the federated model presented in [19], is inefficient for a relatively fast game flow, and that the proposed architecture alleviates this problem because users do not have to connect to a different server whenever they cross a server boundary. This architecture still requires all proxies to be aware of the overall game state over the whole game space, unlike our work, in which we require the proxies to maintain only partial state information about the game space. Fidelity-based agent architectures have been proposed in [20, 21]. These works propose a distributed client-server architecture for distributed interactive simulations in which different servers are responsible for different portions of the game space. When an object moves from one portion to another, there is a handoff from one server to another. Although these works propose an architecture in which different portions of the simulation space are managed by different servers, they do not address the issue of decreasing the required bandwidth through the use of communication proxies. Our work differs from the above-discussed previous works by proposing a) a distributed proxy-based architecture to decrease bandwidth requirements at the clients and the servers without requiring the proxies to keep state information about the whole game space, b) a dynamic authority-assignment technique to reduce latency (by performing consistency checks locally at the client whenever possible) by splitting the authority between the clients and servers on
a per-object basis, and c) the idea that cheat detection can be built into the proxies if they are provided more information about the specific game, instead of using them purely as communication proxies (although this idea has not been implemented yet and is part of our future work).

6. CONCLUSIONS AND FUTURE WORK

In this paper, we first proposed a proxy-based architecture for MMOGs that enables MMOGs to scale to a large number of users by mitigating the need for a large number of transport sessions to be maintained and by decreasing both the bandwidth overhead and the latency of event updates. Second, we proposed a mixed authority-assignment mechanism that divides the authority for making decisions on actions and events within the game between the clients and the server, and we argued how such an authority assignment leads to a better game-playing experience without sacrificing the consistency of the game. Third, to validate the viability of the mixed authority-assignment mechanism, we implemented it within a MMOG called RPGQuest and described our implementation experience. In future work, we propose to implement the communication proxy architecture described in this paper and to integrate the mixed authority mechanism within this architecture. We propose to evaluate the benefits of the proxy-based architecture in terms of scalability, accuracy and responsiveness. We also plan to implement a version of the RPGQuest game with dynamic assignment of authority, allowing players the authority to pick up objects when no other players are near. As discussed earlier, this will allow for a more efficient and responsive game in certain situations and alleviate some of the processing load on the server. Also, since so much trust is put into the clients of our architecture, it will be necessary to integrate into the architecture many of the cheat-detection schemes that have been proposed in the literature. Software such as PunkBuster [22] and a reputation system like those proposed by [23]
and [15] would be integral to the operation of an architecture such as ours, which places a great deal of trust in the client. We further propose to make the proxies in our architecture more game-cognizant, so that cheat-detection mechanisms can be built into the proxies themselves.

Evaluating Opportunistic Routing Protocols with Large Realistic Contact Traces

Libo Song and David F.
Kotz, Institute for Security Technology Studies (ISTS), Department of Computer Science, Dartmouth College, Hanover, NH, USA 03755

ABSTRACT

Traditional mobile ad-hoc network (MANET) routing protocols assume that contemporaneous end-to-end communication paths exist between data senders and receivers. In some mobile ad-hoc networks with a sparse node population, an end-to-end communication path may break frequently or may not exist at any time. Many routing protocols have been proposed in the literature to address the problem, but few were evaluated in a realistic opportunistic network setting. We use simulation and contact traces (derived from logs in a production network) to evaluate and compare five existing protocols: direct-delivery, epidemic, random, PRoPHET, and Link-State, as well as our own proposed routing protocol. We show that the direct-delivery and epidemic routing protocols suffer either low delivery ratio or high resource usage, and that the other protocols make tradeoffs between delivery ratio and resource usage.

Categories and Subject Descriptors: C.2.4 [Computer Systems Organization]: Computer Communication Networks-Distributed Systems

General Terms: Performance, Design

1. INTRODUCTION

Mobile opportunistic networks are one kind of delay-tolerant network (DTN) [6]. Delay-tolerant networks provide service despite long link delays or frequent link breaks. Long link delays happen in networks with communication between nodes at a great distance, such as interplanetary networks [2]. Link breaks are caused by nodes moving out of range, environmental changes, interference from other moving objects, radio power-offs, or failed nodes. For us, mobile opportunistic networks are those DTNs with a sparse node population and frequent link breaks caused by power-offs and the mobility of the nodes. Mobile opportunistic networks have received increasing interest from researchers. In the literature, these networks include mobile sensor networks [25], wild-animal tracking
networks [11], pocket-switched networks [8], and transportation networks [1, 14]. We expect to see more opportunistic networks when the One Laptop Per Child (OLPC) project [18] starts rolling out inexpensive laptops with wireless networking capability for children in developing countries, where often no infrastructure exists. Opportunistic networking is one promising approach for those children to exchange information. One fundamental problem in opportunistic networks is how to route messages from their source to their destination. Mobile opportunistic networks differ from the Internet in that disconnections are the norm instead of the exception. In mobile opportunistic networks, communication devices can be carried by people [4], vehicles [1] or animals [11]. Some devices can form a small mobile ad-hoc network when the nodes move close to each other. But a node may frequently be isolated from other nodes. Note that traditional Internet routing protocols and ad-hoc routing protocols, such as AODV [20] or DSDV [19], assume that a contemporaneous end-to-end path exists, and thus fail in mobile opportunistic networks. Indeed, there may never exist an end-to-end path between two given devices. In this paper, we study protocols for routing messages between wireless networking devices carried by people. We assume that people occasionally send messages to other people, using their devices; when no direct link exists between the source and the destination of the message, other nodes may relay the message to the destination. Each device represents a unique person (it is out of the scope of this paper to consider a device carried by multiple people). Each message is destined for a specific person and thus for a specific node carried by that person. Although one person may carry multiple devices, we assume that the sender knows which device is the best one to receive the message. We do not consider multicast or geocast in this paper. Many routing protocols have been
proposed in the literature. Few of them were evaluated in realistic network settings, or even in realistic simulations, due to the lack of any realistic model of people's mobility. Random-walk or random-waypoint mobility models are often used to evaluate the performance of those routing protocols. Although these synthetic mobility models have received extensive interest from mobile ad-hoc network researchers [3], they do not reflect people's mobility patterns [9]. Realizing the limitations of using random mobility models in simulations, a few researchers have studied routing protocols in mobile opportunistic networks with realistic mobility traces. Chaintreau et al. [5] theoretically analyzed the impact of routing algorithms over a model derived from a realistic mobility data set. Su et al. [22] simulated a set of routing protocols in a small experimental network. Those studies help researchers better understand the theoretical limits of opportunistic networks, and the routing-protocol performance in a small network (20-30 nodes). Because deploying and experimenting with large-scale mobile opportunistic networks is difficult, we too resort to simulation. Instead of using a complex mobility model to mimic people's mobility patterns, we used mobility traces collected in a production wireless network at Dartmouth College to drive our simulation. Our message-generation model, however, was synthetic. To the best of our knowledge, we are the first to simulate the effect of routing protocols in a large-scale mobile opportunistic network, using realistic contact traces derived from real traces of a production network with more than 5,000 users. Using realistic contact traces, we evaluate the performance of three naive routing protocols (direct-delivery, epidemic, and random) and two prediction-based routing protocols, PRoPHET [16] and Link-State [22]. We also propose a new prediction-based routing protocol, and compare it to the above in our evaluation.

2. ROUTING
PROTOCOL

A routing protocol is designed for forwarding messages from one node (source) to another node (destination). Any node may generate messages for any other node, and may carry messages destined for other nodes. In this paper, we consider only messages that are unicast (single destination). DTN routing protocols can be described in part by their transfer probability and replication probability; that is, when one node meets another node, what is the probability that a message should be transferred, and if so, whether the sender should retain its copy. Two extremes are the direct-delivery protocol and the epidemic protocol. The former transfers with probability 1 when the node meets the destination, 0 for others, and performs no replication. The latter uses transfer probability 1 for all nodes and unlimited replication. Both of these protocols have their advantages and disadvantages. All other protocols fall between the two extremes. First, we define the notion of contact between two nodes. Then we describe five existing protocols before presenting our own proposal. A contact is defined as a period of time during which two nodes have the opportunity to communicate. Although we are aware that wireless technologies differ, we assume that a node can reliably detect the beginning and end time of a contact with nearby nodes. A node may be in contact with several other nodes at the same time. The contact history of a node is a sequence of contacts with other nodes. Node i has a contact history Hi(j), for each other node j, which denotes the historical contacts between node i and node j. We record the start and end time of each contact; however, the last contact in a node's contact history may not have ended yet.

2.1 Direct Delivery Protocol

In this simple protocol, a message is transmitted only when the source node can directly communicate with the destination node of the message. In mobile opportunistic networks, however, the probability for the sender to meet
the destination may be low, or even zero.

2.2 Epidemic Routing Protocol

The epidemic routing protocol [23] floods messages into the network. The source node sends a copy of the message to every node that it meets. The nodes that receive a copy of the message also send a copy of the message to every node that they meet. Eventually, a copy of the message arrives at the destination of the message. This protocol is simple, but may use significant resources; excessive communication may drain each node's battery quickly. Moreover, since each node keeps a copy of each message, storage is not used efficiently, and the capacity of the network is limited. At a minimum, each node must expire messages after some amount of time or stop forwarding them after a certain number of hops. After a message expires, the message will not be transmitted and will be deleted from the storage of any node that holds it. An optimization to reduce the communication cost is to transfer index messages before transferring any data messages. The index messages contain the IDs of the messages that a node currently holds. Thus, by examining the index messages, a node transfers only the messages that are not yet held by the other node.

2.3 Random Routing

An obvious approach between the above two extremes is to select a transfer probability between 0 and 1 at which to forward messages at each contact. We use a simple replication strategy that allows only the source node to make replicas, and limits the replication to a specific number of copies. The message has some chance of being transferred to a highly mobile node, and thus may have a better chance of reaching its destination before the message expires.

2.4 PRoPHET Protocol

PRoPHET [16] is a Probabilistic Routing Protocol using History of past Encounters and Transitivity to estimate each node's delivery probability for each other node. When node i meets node j, the delivery probability of node i for j is updated by

p_ij = (1 - p_ij) p_0 + p_ij,   (1)

where p_0 is an initial probability, a design parameter for a given network. Lindgren et al. [16] chose 0.75, as did we in our evaluation. When node i does not meet j for some time, the delivery probability decreases by

p_ij = α^k p_ij,   (2)

where α is the aging factor (α < 1), and k is the number of time units since the last update. The PRoPHET protocol exchanges index messages as well as delivery probabilities. When node i receives node j's delivery probabilities, node i may compute the transitive delivery probability through j to z with

p_iz = p_iz + (1 - p_iz) p_ij p_jz β,   (3)

where β is a design parameter for the impact of transitivity; we used β = 0.25, as did Lindgren [16].

2.5 Link-State Protocol

Su et al. [22] use a link-state approach to estimate the weight of each path from the source of a message to its destination. They use the median inter-contact duration or the exponentially aged inter-contact duration as the weight on links. The exponentially aged inter-contact duration of nodes i and j is computed by

w_ij = α w_ij + (1 - α) I,   (4)

where I is the new inter-contact duration and α is the aging factor. Nodes share their link-state weights when they can communicate with each other, and messages are forwarded to the node that has the path with the lowest link-state weight.

3. TIMELY-CONTACT PROBABILITY

We also use historical contact information to estimate the probability of meeting other nodes in the future. But our method differs in that we estimate the contact probability within a period of time. For example, what is the contact probability in the next hour? Neither PRoPHET nor Link-State considers time in this way. One way to estimate the timely-contact probability is to use the ratio of the total contact duration to the total time. However, this approach does not capture the frequency of contacts. For example, one node may have a long contact with another node, followed by a long
non-contact period.\nA third node may have a short contact with the first node, followed by a short non-contact period.\nUsing the above estimation approach, both examples would have similar contact probability.\nIn the second example, however, the two nodes have more frequent contacts.\nWe design a method to capture the contact frequency of mobile nodes.\nFor this purpose, we assume that even short contacts are sufficient to exchange messages.1 The probability for node i to meet node j is computed by the following procedure.\nWe divide the contact history Hi(j) into a sequence of n periods of \u0394T starting from the start time (t0) of the first contact in history Hi(j) to the current time.\nWe number each of the n periods from 0 to n \u2212 1, then check each period.\nIf node i had any contact with node j during a given period m, which is [t0 + m\u0394T, t0 + (m + 1)\u0394T), we set the contact status Im to be 1; otherwise, the contact status Im is 0.\nThe probability p (0) ij that node i meets node j in the next \u0394T can be estimated as the average of the contact status in prior intervals: p (0) ij = 1 n n\u22121X m=0 Im.\n(5) To adapt to the change of contact patterns, and reduce the storage space for contact histories, a node may discard old history contacts; in this situation, the estimate would be based on only the retained history.\nThe above probability is the direct contact probability of two nodes.\nWe are also interested in the probability that we may be able to pass a message through a sequence of k nodes.\nWe define the k-order probability inductively, p (k) ij = p (0) ij + X \u03b1 p (0) i\u03b1 p (k\u22121) \u03b1j , (6) where \u03b1 is any node other than i or j. 
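As a concrete illustration of Equations (5) and (6), the sketch below (our own, not code from the paper; all names are illustrative) computes the direct timely-contact probability from a list of contact timestamps, and the k-order probability by literal recursion over Equation (6):

```python
def direct_probability(contact_times, t0, now, delta_t):
    """Equation (5): the fraction of past delta_t-long periods in which
    node i had at least one contact with node j.

    contact_times: timestamps of i-j contacts; t0: start time of the
    first contact in the history; now: the current time."""
    n = int((now - t0) // delta_t)  # number of complete periods so far
    if n == 0:
        return 0.0
    status = [0] * n
    for t in contact_times:
        m = int((t - t0) // delta_t)
        if 0 <= m < n:
            status[m] = 1  # any contact in period m sets I_m = 1
    return sum(status) / n

def k_order_probability(p0, i, j, k, nodes):
    """Equation (6): p(k)_ij = p(0)_ij + sum over relays a (a != i, j)
    of p(0)_ia * p(k-1)_aj, computed by direct recursion.

    p0 is a nested dict of direct probabilities, p0[i][j]."""
    if k == 0:
        return p0[i][j]
    total = p0[i][j]
    for a in nodes:
        if a != i and a != j:
            total += p0[i][a] * k_order_probability(p0, a, j, k - 1, nodes)
    return total
```

Note that, as defined, the sum in Equation (6) is not normalized, so the k-order value may exceed 1; the protocol only compares it against a threshold.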
3.1 Our Routing Protocol
We first consider the case of a two-hop path, that is, with only one relay node. We consider two approaches: either the receiving neighbor decides whether to act as a relay, or the sender decides which neighbors to use as relays.

3.1.1 Receiver Decision
Whenever a node meets other nodes, they exchange all their messages (or, as above, index messages). If the destination of a message is the receiver itself, the message is delivered. Otherwise, if the probability of delivering the message to its destination through this receiver node within ΔT is greater than or equal to a certain threshold, the message is stored in the receiver's storage for forwarding to the destination. (In our simulation, however, we accurately model the communication costs, and some short contacts will not succeed in transferring all messages.) If the probability is less than the threshold, the receiver discards the message. Notice that our protocol replicates the message whenever a promising relay comes along.

3.1.2 Sender Decision
To make decisions, a sender must have information about its neighbors' contact probabilities with a message's destination; therefore, meta-data exchange is necessary. When two nodes meet, they exchange a meta-message containing an unordered list of node IDs for which the sender of the meta-message has a contact probability greater than the threshold. After receiving a meta-message, a node checks whether it has any message destined for its neighbor, or for a node in the node list of the neighbor's meta-message. If it has, it sends a copy of the message. When a node receives a message, if the destination of the message is the receiver itself, the message is delivered; otherwise, the message is stored in the receiver's storage for forwarding to the destination.

3.1.3 Multi-node Relay
When we use more than two hops to relay a message, each node needs to know the contact probabilities along all possible paths to the message destination. Every node keeps a contact probability matrix, in which each cell p_ij is the contact probability between nodes i and j. Each node i computes its own contact probabilities (row i) with other nodes using Equation (5) whenever it ends a contact with another node. Each row of the contact probability matrix has a version number; the version number for row i is increased only when node i updates the matrix entries in row i. Other matrix entries are updated through exchanges with other nodes when they meet.

When two nodes i and j meet, they first exchange their contact probability matrices. Node i compares its own matrix with node j's matrix. If node j's matrix has a row l with a higher version number, then node i replaces its own row l with node j's row l. Node j updates its matrix likewise. After the exchange, the two nodes have identical contact probability matrices.

Next, if a node has a message to forward, the node estimates its neighboring node's order-k contact probability of contacting the destination of the message using Equation (6). If p(k)_ij is above a threshold, or if j is the destination of the message, node i sends a copy of the message to node j.
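The row-versioned matrix exchange described above can be sketched as follows (our illustration, not the paper's code; the dict-based data structures are an assumption): each node keeps a matrix as a dict of rows plus a dict of per-row version numbers, and on contact both nodes keep, row by row, whichever copy carries the higher version:

```python
def merge_matrices(matrix_i, versions_i, matrix_j, versions_j):
    """Row-version merge from Section 3.1.3: for each row, both nodes
    keep the copy with the higher version number, so after a contact
    the two contact-probability matrices are identical."""
    for row in set(versions_i) | set(versions_j):
        vi = versions_i.get(row, -1)
        vj = versions_j.get(row, -1)
        if vj > vi:                      # j has fresher data for this row
            matrix_i[row] = dict(matrix_j[row])
            versions_i[row] = vj
        elif vi > vj:                    # i has fresher data for this row
            matrix_j[row] = dict(matrix_i[row])
            versions_j[row] = vi
```

After the merge, node i forwards a copy of a message to neighbor j when j is the destination, or when the order-k probability estimated from the merged matrix exceeds the threshold.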
All the above effort serves to determine the transfer probability when two nodes meet. The replication decision is orthogonal to the transfer decision; in our implementation, we always replicate. Although PRoPHET [16] and Link-State [22] do no replication as described, we added replication to those protocols for a better comparison with our protocol.

4. EVALUATION RESULTS
We evaluate and compare the results of the direct-delivery, epidemic, random, PRoPHET, Link-State, and timely-contact routing protocols.

4.1 Mobility traces
We use real mobility data collected at Dartmouth College, which has collected association and disassociation messages from devices on its wireless network since spring 2001 [13]. Each message records the wireless card's MAC address, the time of association or disassociation, and the name of the access point. We treat each unique MAC address as a node. For more information about Dartmouth's network and the data collection, see previous studies [7, 12].

Our data are not contacts in a mobile ad hoc network. We can approximate contact traces by assuming that two users can communicate with each other whenever they are associated with the same access point. Chaintreau et al. [5] used the Dartmouth traces and made the same assumption to theoretically analyze the impact of human mobility on opportunistic forwarding algorithms. This assumption may not be accurate (two nodes may not have been able to communicate directly while at far sides of the same access point, or may have been able to communicate directly while between two adjacent access points), but it is a good first approximation. In our simulation, we imagine the same clients with the same mobility in a network with no access points. Since our campus has full Wi-Fi coverage, we assume that the location of access points had little impact on users' mobility.

We simulated one full month of trace data (November 2003) taken from CRAWDAD [13], with 5,142 users. Although prediction-based protocols require prior contact history to estimate each node's delivery probability, our preliminary results showed that the performance improvement from warming up over one month of trace was marginal. Therefore, for simplicity, we show the results of all protocols without warming up.

4.2 Simulator
We developed a custom simulator. (We tried to use a general network simulator, ns-2, but it was extremely slow when simulating a large number of mobile nodes, in our case more than 5,000, and provided unnecessary detail in modeling lower-level network protocols.) Since we used contact traces derived from real mobility data, we did not need a mobility model, and we omitted physical- and link-layer details of node discovery. We were aware that the time for neighbor discovery in different wireless technologies varies from less than one second to several seconds, and that connection establishment (such as DHCP) also takes time. In our simulation, we assumed that nodes could discover and connect to each other instantly when they were associated with the same AP. To accurately model communication costs, however, we simulated some MAC-layer behaviors, such as collisions.

The default network settings of our simulator are listed in Table 1, using the values recommended by other papers [22, 16]. The message probability is the probability of generating messages, as described in Section 4.3. The default transmission bandwidth was 11 Mb/s. When one node tried to transmit a message, it first checked whether any nearby node was transmitting; if so, it backed off a random number of slots. Each slot was 1 millisecond, and the maximum number of backoff slots was 30. The size of messages was uniformly distributed between 80 bytes and 1024 bytes. The hop count limit (HCL) is the maximum number of hops a message may take before it stops being forwarded. The time to live (TTL) is the maximum duration that a message may exist before expiring. The storage capacity is the maximum space that a node can use for storing messages. For our routing method, we used a default prediction window ΔT of 10 hours and a probability threshold of 0.01. The replication factor r was not limited by default, so the source of a message transferred the message to any node that had a contact probability with the message's destination higher than the probability threshold.

Table 1: Default Settings of the Simulation

    Parameter                 Default value
    message probability       0.001
    bandwidth                 11 Mb/s
    transmission slot         1 millisecond
    max backoff slots         30
    message size              80-1024 bytes
    hop count limit (HCL)     unlimited
    time to live (TTL)        unlimited
    storage capacity          unlimited
    prediction window ΔT      10 hours
    probability threshold     0.01
    contact history length    20
    replication               always
    aging factor α            0.9 (0.98 for PRoPHET)
    initial probability p_0   0.75 (PRoPHET)
    transitivity impact β     0.25 (PRoPHET)

4.3 Message generation
After each contact event in the contact trace, we generated a message with a given probability, choosing a source node and a destination node randomly with a uniform distribution across the nodes seen in the contact trace up to the current time. When there were more contacts during a certain period, there was a higher likelihood that a new message was generated in that period. This correlation is not unreasonable, since there were more movements during the day than during the night, and correspondingly more contacts. Figure 1 shows the numbers of movements and contacts during each hour of the day, summed across all users and all days. The plot shows a clear diurnal activity pattern: activity reached its lowest around 5am and peaked between 4pm and 5pm. We assume that in some applications network traffic exhibits similar patterns, that is, people also send more messages during the day.

Messages expire after a TTL. We did not use proactive methods to notify nodes of message delivery, which would allow delivered messages to be removed from storage.

Figure 1: Movements and contacts during each hour of the day.

4.4 Metrics
We define a set of metrics that we use in evaluating routing protocols in opportunistic networks:
• delivery ratio, the ratio of the number of messages delivered to the total number of messages generated.
• message transmissions, the total number of messages transmitted during the simulation, across all nodes.
• meta-data transmissions, the total number of meta-data units transmitted during the simulation, across all nodes.
• message duplications, the number of times a message copy occurred, due to replication.
• delay, the duration between a message's generation time and its delivery time.
• storage usage, the maximum and mean of the maximum storage (in bytes) used across all nodes.

4.5 Results
Here we compare the simulation results of the six routing protocols.

Figure 2: Delivery ratio (log scale). The direct and
random protocols for one-hour TTL had delivery ratios that were too low to be visible in the plot.

Figure 2 shows the delivery ratio of all the protocols at different TTLs. (In all the plots in the paper, prediction stands for our method, state stands for the Link-State protocol, and prophet represents PRoPHET.) Although we had 5,142 users in the network, the direct-delivery and random protocols had low delivery ratios (note the log scale). Even for messages with an unlimited lifetime, only 59 out of 2,077 messages were delivered during this one-month simulation. The delivery ratio of epidemic routing was the best. The three prediction-based approaches had low delivery ratios compared to epidemic routing. Although our method was slightly better than the other two, the advantage was marginal.

The high delivery ratio of epidemic routing came at a price: excessive transmissions. Figure 3 shows the number of message data transmissions. The number of message transmissions of epidemic routing was more than 10 times higher than for the prediction-based routing protocols. Obviously, the direct-delivery protocol had the lowest number of message transmissions, namely the number of messages delivered. Among the three prediction-based methods, PRoPHET transmitted fewer messages, but had a comparable delivery ratio, as seen in Figure 2.

Figure 3: Message transmissions (log scale).

Figure 4 shows that epidemic and all prediction-based methods had substantial meta-data transmissions, though epidemic routing had relatively more with shorter TTLs: because the epidemic protocol transmitted messages at every contact, more nodes held messages that required meta-data transmissions during contacts. The direct-delivery and random protocols had no meta-data transmissions.

Figure 4: Meta-data transmissions (log scale). Direct and random protocols had no meta-data transmissions.

In addition to its message transmissions and meta-data transmissions, the epidemic routing protocol also had excessive message duplications, spreading replicas of messages over the network. Figure 5 shows that epidemic routing had one to two orders of magnitude more duplication than the prediction-based protocols. Recall that the direct-delivery and random protocols did not replicate, and thus had no data duplications.

Figure 5: Message duplications (log scale). Direct and random protocols had no message duplications.

Figure 6 shows both the median and mean delivery delays. All protocols show similar delivery delays, in both mean and median measures, for medium TTLs, but differ for long and short TTLs. With a 100-hour or unlimited TTL, epidemic routing had the shortest delays. Direct delivery had the longest delay for unlimited TTL, but the shortest delay for the one-hour TTL. These results seem contrary to intuition: the epidemic routing protocol should be the fastest, since it spreads messages all over the network. Indeed, the figures show only the delay of delivered messages. For the direct-delivery, random, and probability-based routing protocols, relatively few messages were delivered for short TTLs; many messages expired before they could reach their destination, and those messages had infinite delivery delay and were not included in the median or mean measurements. For longer TTLs, more messages were delivered, even for the direct-delivery protocol. The statistics for longer TTLs are therefore more meaningful for comparison than those for short TTLs.

Figure 6: Median and mean delays (log scale).

Since our message generation rate was low, storage usage was also low in our simulation. Figure 7 shows the maximum and the mean of the maximum volume (in KBytes) of messages stored at each node. Epidemic routing had the highest storage usage. The message time-to-live parameter was the biggest factor affecting storage usage for the epidemic and prediction-based routing protocols.

Figure 7: Max and mean of maximum storage usage across all nodes (log scale).

We studied the impact of different parameters on our prediction-based routing protocol, which is sensitive to several parameters, such as the probability threshold and the prediction window ΔT. Figure 8 shows the delivery ratios when we used different probability thresholds. (The leftmost value, 0.01, is the value used for the other plots.) A higher probability threshold limited the transfer probability, so fewer messages were delivered. It also required fewer transmissions, as shown in Figure 9. With a larger prediction window, we got a higher contact probability; thus, for the same probability threshold, we had a slightly higher delivery ratio, as shown in Figure 10, and a few more transmissions, as shown in Figure 11.

Figure 8: Probability threshold impact on delivery ratio of timely-contact routing.

Figure 9: Probability threshold impact on message transmission of timely-contact routing.

Figure 10: Prediction window impact on delivery ratio of timely-contact routing (semi-log scale).

Figure 11: Prediction window impact on message transmission of timely-contact routing (semi-log scale).

5. RELATED WORK
In addition to the protocols that we evaluated in our simulation, several other opportunistic network routing protocols have been proposed in the literature. We did not implement and evaluate these routing protocols, because they either require domain-specific information (location information) [14, 15], assume certain mobility patterns [17], or take approaches [10, 24] that are orthogonal to other routing protocols.

LeBrun et al. [14] propose a location-based delay-tolerant network routing protocol. Their algorithm assumes that every node knows its own position, and that the destination is stationary at a known location. A node forwards data to a neighbor only if the neighbor is closer to the destination than its own position. Our protocol does not require knowledge of the nodes' locations, and learns their contact patterns.

Leguay et al. [15] use a high-dimensional space to represent a mobility pattern, then route messages to nodes that are closer to the destination node in the mobility-pattern space. Location information of nodes is required to construct mobility patterns.

Musolesi et al. [17] propose an adaptive routing protocol for intermittently connected mobile ad hoc networks. They use a Kalman filter to compute the probability that a node delivers messages. This protocol assumes group mobility and cloud connectivity, that is, nodes move as a group, and among this group of nodes a contemporaneous end-to-end connection exists for every pair of nodes. When two nodes are in the same connected cloud, DSDV [19] routing is used.

Network coding also draws much interest from DTN research. Erasure coding [10, 24] explores coding algorithms to reduce message replicas. The source node replicates a message m times, then uses a coding scheme to encode the replicas into one big message. After the replicas are encoded, the source divides the big message into k blocks of the same size, and transmits a block to each of the first k encountered nodes. If m of the blocks are received at the destination, the message can be restored, where m < k. In a uniformly distributed mobility scenario, the delivery probability increases, because the probability that the destination meets m relays is greater than the probability that it meets k relays, given m < k.

6. SUMMARY
We propose a prediction-based routing protocol for opportunistic networks. We evaluate the performance of our protocol using realistic contact traces, and compare it to five existing routing protocols.

Our simulation results show that direct delivery had the lowest delivery ratio, the fewest data transmissions, and no meta-data transmission or data duplication. Direct delivery is suitable for devices that require extremely low power consumption. The random protocol increased the chance of delivery for messages otherwise stuck at some low-mobility nodes. Epidemic routing delivered the most messages; its excessive transmissions and data duplications, however, consume more resources than portable devices may be able to provide. None of these protocols (direct-delivery, random, and epidemic routing) is practical for real deployment of opportunistic networks, because each has either an extremely low delivery ratio or an extremely high resource consumption.

The prediction-based routing protocols had a delivery ratio more than 10 times better than that of direct-delivery and random routing, and fewer transmissions and less storage usage than epidemic routing. They also had fewer data duplications than epidemic routing. All the prediction-based routing protocols that we evaluated had similar performance. Our method had a slightly higher delivery ratio, but more transmissions and higher storage usage. There are many parameters for prediction-based routing protocols, however, and different parameters may produce different results. Indeed, there is an opportunity for some adaptation; for example, high-priority messages may be given higher transfer and replication probabilities to increase the chance of delivery and reduce the delay, or a node with infrequent contacts may choose to raise its transfer probability.

We only studied the impact of predicting peer-to-peer contact probability for routing unicast messages. In some applications, context information (such as location) may be available for the peers. One may also consider other messaging models, for example, where messages are sent to a location, such that every node at that location receives a copy of the message. Location prediction [21] may be used to predict nodes' mobility, and to choose as relays those nodes moving toward the destined location.

Research on routing in opportunistic networks is still in its early stage. Many other issues of opportunistic networks, such as security and privacy, are mainly left open. We anticipate studying these issues in future work.

7. ACKNOWLEDGEMENT
This research is a project of the Center for Mobile Computing and the Institute for Security Technology Studies at Dartmouth College. It was supported by DoCoMo Labs USA, the CRAWDAD archive at Dartmouth College (funded by NSF CRI Award 0454062), NSF Infrastructure Award EIA-9802068, and by Grant number 2005-DD-BX-1091 awarded by the Bureau of Justice Assistance. Points of view or opinions in this document are those of the authors and do not represent the official position or policies of any sponsor.

8. REFERENCES
[1] John Burgess, Brian Gallagher, David Jensen, and Brian Neil Levine. MaxProp: routing for vehicle-based
disruption-tolerant networks. In Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM), April 2006.
[2] Scott Burleigh, Adrian Hooke, Leigh Torgerson, Kevin Fall, Vint Cerf, Bob Durst, Keith Scott, and Howard Weiss. Delay-tolerant networking: an approach to interplanetary Internet. IEEE Communications Magazine, 41(6):128-136, June 2003.
[3] Tracy Camp, Jeff Boleng, and Vanessa Davies. A survey of mobility models for ad hoc network research. Wireless Communication & Mobile Computing (WCMC): Special Issue on Mobile Ad Hoc Networking: Research, Trends and Applications, 2(5):483-502, 2002.
[4] Andrew Campbell, Shane Eisenman, Nicholas Lane, Emiliano Miluzzo, and Ronald Peterson. People-centric urban sensing. In IEEE Wireless Internet Conference, August 2006.
[5] Augustin Chaintreau, Pan Hui, Jon Crowcroft, Christophe Diot, Richard Gass, and James Scott. Impact of human mobility on the design of opportunistic forwarding algorithms. In Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM), April 2006.
[6] Kevin Fall. A delay-tolerant network architecture for challenged internets. In Proceedings of the 2003 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM), August 2003.
[7] Tristan Henderson, David Kotz, and Ilya Abyzov. The changing usage of a mature campus-wide wireless network. In Proceedings of the 10th Annual International Conference on Mobile Computing and Networking (MobiCom), pages 187-201, September 2004.
[8] Pan Hui, Augustin Chaintreau, James Scott, Richard Gass, Jon Crowcroft, and Christophe Diot. Pocket switched networks and human mobility in conference environments. In ACM SIGCOMM Workshop on Delay Tolerant Networking, pages 244-251, August 2005.
[9] Ravi Jain, Dan Lelescu, and Mahadevan Balakrishnan. Model T: an empirical model for user registration patterns in a campus wireless LAN. In Proceedings of the 11th Annual International Conference on Mobile Computing and Networking (MobiCom), pages 170-184, 2005.
[10] Sushant Jain, Mike Demmer, Rabin Patra, and Kevin Fall. Using redundancy to cope with failures in a delay tolerant network. In Proceedings of the 2005 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM), pages 109-120, August 2005.
[11] Philo Juang, Hidekazu Oki, Yong Wang, Margaret Martonosi, Li-Shiuan Peh, and Daniel Rubenstein. Energy-efficient computing for wildlife tracking: design tradeoffs and early experiences with ZebraNet. In the Tenth International Conference on Architectural Support for Programming Languages and Operating Systems, October 2002.
[12] David Kotz and Kobby Essien. Analysis of a campus-wide wireless network. Wireless Networks, 11:115-133, 2005.
[13] David Kotz, Tristan Henderson, and Ilya Abyzov. CRAWDAD data set dartmouth/campus. http://crawdad.cs.dartmouth.edu/dartmouth/campus, December 2004.
[14] Jason LeBrun, Chen-Nee Chuah, Dipak Ghosal, and Michael Zhang. Knowledge-based opportunistic forwarding in vehicular wireless ad-hoc networks. In IEEE Vehicular Technology Conference, pages 2289-2293, May 2005.
[15] Jeremie Leguay, Timur Friedman, and Vania Conan. Evaluating mobility pattern space routing for DTNs. In Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM), April 2006.
[16] Anders Lindgren, Avri Doria, and Olov Schelen. Probabilistic routing in intermittently connected networks. In Workshop on Service Assurance with Partial and Intermittent Resources (SAPIR), pages 239-254, 2004.
[17] Mirco Musolesi, Stephen Hailes, and Cecilia Mascolo. Adaptive routing for intermittently connected mobile ad-hoc networks. In IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks, pages 183-189, June 2005. Extended version.
[18] OLPC. One laptop per child
project.\nhttp:\/\/laptop.org.\n[19] C. E. Perkins and P. Bhagwat.\nHighly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers.\nComputer Communication Review, pages 234-244, October 1994.\n[20] C. E. Perkins and E. M. Royer.\nad-hoc on-demand distance vector routing.\nIn IEEE Workshop on Mobile Computing Systems and Applications, pages 90-100, February 1999.\n[21] Libo Song, David Kotz, Ravi Jain, and Xiaoning He.\nEvaluating next-cell predictors with extensive Wi-Fi mobility data.\nIEEE Transactions on Mobile Computing, 5(12):1633-1649, December 2006.\n[22] Jing Su, Ashvin Goel, and Eyal de Lara.\nAn empirical evaluation of the student-net delay tolerant network.\nIn International Conference on Mobile and Ubiquitous Systems (MobiQuitous), July 2006.\n[23] Amin Vahdat and David Becker.\nEpidemic routing for partially-connected ad-hoc networks.\nTechnical Report CS-2000-06, Duke University, July 2000.\n[24] Yong Wang, Sushant Jain, Margaret Martonosia, and Kevin Fall.\nErasure-coding based routing for opportunistic networks.\nIn ACM SIGCOMM Workshop on Delay Tolerant Networking, pages 229-236, August 2005.\n[25] Yu Wang and Hongyi Wu.\nDFT-MSN: the delay fault tolerant mobile sensor network for pervasive information gathering.\nIn Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM), April 2006.\n42","lvl-3":"Evaluating Opportunistic Routing Protocols with Large Realistic Contact Traces\nABSTRACT\nTraditional mobile ad hoc network (MANET) routing protocols assume that contemporaneous end-to-end communication paths exist between data senders and receivers.\nIn some mobile ad hoc networks with a sparse node population, an end-to-end communication path may break frequently or may not exist at any time.\nMany routing protocols have been proposed in the literature to address the problem, but few were evaluated in a realistic \"opportunistic\" network setting.\nWe use simulation and contact traces 
(derived from logs in a production network) to evaluate and compare five existing protocols: direct-delivery, epidemic, random, PRoPHET, and Link-State, as well as our own proposed routing protocol.\nWe show that the direct delivery and epidemic routing protocols suffer either low delivery ratio or high resource usage, and other protocols make tradeoffs between delivery ratio and resource usage.\n1.\nINTRODUCTION\nMobile opportunistic networks are one kind of delay-tolerant network (DTN) [6].\nDelay-tolerant networks provide service despite long link delays or frequent link breaks.\nLong link delays happen in networks with communication between nodes at a great distance, such as interplanetary networks [2].\nLink breaks are caused by nodes moving out of range, environmental changes, interference from other moving objects, radio power-offs, or failed nodes.\nFor us, mobile opportunistic networks are those DTNs with sparse node population and frequent link breaks caused by power-offs and the mobility of the nodes.\nMobile opportunistic networks have received increasing interest from researchers.\nIn the literature, these networks include mobile sensor networks [25], wild-animal tracking networks [11], \"pocketswitched\" networks [8], and transportation networks [1, 14].\nWe expect to see more opportunistic networks when the one-laptopper-child (OLPC) project [18] starts rolling out inexpensive laptops with wireless networking capability for children in developing countries, where often no infrastructure exits.\nOpportunistic networking is one promising approach for those children to exchange information.\nOne fundamental problem in opportunistic networks is how to route messages from their source to their destination.\nMobile opportunistic networks differ from the Internet in that disconnections are the norm instead of the exception.\nIn mobile opportunistic networks, communication devices can be carried by people [4], vehicles [1] or animals [11].\nSome devices can 
form a small mobile ad hoc network when the nodes move close to each other.\nBut a node may frequently be isolated from other nodes.\nNote that traditional Internet routing protocols and ad hoc routing protocols, such as AODV [20] or DSDV [19], assume that a contemporaneous end-to-end path exists, and thus fail in mobile opportunistic networks.\nIndeed, there may never exist an end-to-end path between two given devices.\nIn this paper, we study protocols for routing messages between wireless networking devices carried by people.\nWe assume that people send messages to other people occasionally, using their devices; when no direct link exists between the source and the destination of the message, other nodes may relay the message to the destination.\nEach device represents a unique person (the case of a device carried by multiple people is out of the scope of this paper).\nEach message is destined for a specific person and thus for a specific node carried by that person.\nAlthough one person may carry multiple devices, we assume that the sender knows which device is the best to receive the message.\nWe do not consider multicast or geocast in this paper.\nMany routing protocols have been proposed in the literature.\nFew of them were evaluated in realistic network settings, or even in realistic simulations, due to the lack of any realistic people mobility model.\nRandom-walk or random-waypoint mobility models are often used to evaluate the performance of those routing protocols.\nAlthough these synthetic mobility models have received extensive attention from mobile ad hoc network researchers [3], they do not reflect people's mobility patterns [9].\nRealizing the limitations of using random mobility models in simulations, a few researchers have studied routing protocols in mobile opportunistic networks with realistic mobility traces.\nChaintreau et al. 
[5] theoretically analyzed the impact of routing algorithms over a model derived from a realistic mobility data set.\nSu et al. [22] simulated a set of routing protocols in a small experimental network.\nThose studies help researchers better understand the theoretical limits of opportunistic networks, and the routing protocol performance in a small network (20--30 nodes).\nSince deploying and experimenting with large-scale mobile opportunistic networks is difficult, we too resort to simulation.\nInstead of using a complex mobility model to mimic people's mobility patterns, we used mobility traces collected in a production wireless network at Dartmouth College to drive our simulation.\nOur message-generation model, however, was synthetic.\nTo the best of our knowledge, we are the first to simulate the effect of routing protocols in a large-scale mobile opportunistic network, using realistic contact traces derived from real traces of a production network with more than 5,000 users.\nUsing realistic contact traces, we evaluate the performance of three \"naive\" routing protocols (direct-delivery, epidemic, and random) and two prediction-based routing protocols, PRoPHET [16] and Link-State [22].\nWe also propose a new prediction-based routing protocol, and compare it to the above in our evaluation.\n2.\nROUTING PROTOCOL\n2.1 Direct Delivery Protocol\n2.2 Epidemic Routing Protocol\n2.3 Random Routing\n2.4 PRoPHET Protocol\n2.5 Link-State Protocol\n3.\nTIMELY-CONTACT PROBABILITY\n3.1 Our Routing Protocol\n3.1.1 Receiver Decision\n3.1.2 Sender Decision\n3.1.3 Multi-node Relay\n4.\nEVALUATION RESULTS\n4.1 Mobility traces\n4.2 Simulator\n4.3 Message generation\n4.4 Metrics\n4.5 Results\n5.\nRELATED WORK\nIn addition to the protocols that we evaluated in our simulation, several other opportunistic network routing protocols have been proposed in the literature.\nWe did not implement and evaluate these routing protocols, because they either require domain-specific information (location 
information) [14, 15], assume certain mobility patterns [17], or present orthogonal approaches [10, 24] to other routing protocols.\nFigure 7: Max and mean of maximum storage usage across all nodes (log scale).\nFigure 8: Probability threshold impact on delivery ratio of timely-contact routing.\nLeBrun et al. [14] propose a location-based delay-tolerant network routing protocol.\nTheir algorithm assumes that every node knows its own position, and that the destination is stationary at a known location.\nA node forwards data to a neighbor only if the neighbor is closer to the destination than its own position.\nOur protocol does not require knowledge of the nodes' locations, and learns their contact patterns.\nLeguay et al. [15] use a high-dimensional space to represent a mobility pattern, then route messages to nodes that are closer to the destination node in the mobility-pattern space.\nLocation information of nodes is required to construct mobility patterns.\nMusolesi et al. [17] propose an adaptive routing protocol for intermittently connected mobile ad hoc networks.\nThey use a Kalman filter to compute the probability that a node delivers messages.\nThis protocol assumes group mobility and cloud connectivity; that is, nodes move as a group, and among this group of nodes a contemporaneous end-to-end connection exists for every pair of nodes.\nWhen two nodes are in the same connected cloud, DSDV [19] routing is used.\nNetwork coding also draws much interest from DTN research.\nErasure coding [10, 24] explores coding algorithms to reduce message replicas.\nFigure 9: Probability threshold impact on message transmission of timely-contact routing.\nFigure 10: Prediction window impact on delivery ratio of timely-contact routing (semi-log scale).\nThe source node replicates a message m times, then uses a coding scheme to encode them in one big message.\nAfter the replicas are encoded, the source divides the big message into k blocks of the same size, and transmits a block to each 
of the first k encountered nodes.\nIf m of the blocks are received at the destination, the message can be restored, where m < k.\nIn a uniformly distributed mobility scenario, the delivery probability increases because the probability that the destination node meets m relays is greater than the probability that it meets k relays, given m < k.\n6.\nSUMMARY\nWe propose a prediction-based routing protocol for opportunistic networks.\nWe evaluate the performance of our protocol using realistic contact traces, and compare it to five existing routing protocols.\nOur simulation results show that direct delivery had the lowest delivery ratio, the fewest data transmissions, and no meta-data transmission or data duplication.\nDirect delivery is suitable for devices that require extremely low power consumption.\nThe random protocol increased the chance of delivery for messages otherwise stuck at some low-mobility nodes.\nEpidemic routing delivered the most messages.\nThe excessive transmissions and data duplication, however, consume more resources than portable devices may be able to provide.\nFigure 11: Prediction window impact on message transmission of timely-contact routing (semi-log scale).\nNone of these protocols (direct-delivery, random, and epidemic routing) are practical for real deployment of opportunistic networks, because they either had an extremely low delivery ratio, or an extremely high resource consumption.\nThe prediction-based routing protocols had a delivery ratio more than 10 times better than that of direct-delivery and random routing, and fewer transmissions and less storage usage than epidemic routing.\nThey also had fewer data duplications than epidemic routing.\nAll the prediction-based routing protocols that we have evaluated had similar performance.\nOur method had a slightly higher delivery ratio, but more transmissions and higher storage usage.\nThere are many parameters for prediction-based routing protocols, however, and different parameters may produce 
different results.\nIndeed, there is an opportunity for some adaptation; for example, high priority messages may be given higher transfer and replication probabilities to increase the chance of delivery and reduce the delay, or a node with infrequent contact may choose to raise its transfer probability.\nWe only studied the impact of predicting peer-to-peer contact probability for routing in unicast messages.\nIn some applications, context information (such as location) may be available for the peers.\nOne may also consider other messaging models, for example, where messages are sent to a location, such that every node at that location will receive a copy of the message.\nLocation prediction [21] may be used to predict nodes' mobility, and to choose as relays those nodes moving toward the destined location.\nResearch on routing in opportunistic networks is still in its early stage.\nMany other issues of opportunistic networks, such as security and privacy, are mainly left open.\nWe anticipate studying these issues in future work.\n2.\nROUTING PROTOCOL\nA routing protocol is designed for forwarding messages from one node (source) to another node (destination).\nAny node may generate messages for any other 
node, and may carry messages destined for other nodes.\nIn this paper, we consider only messages that are unicast (single destination).\nDTN routing protocols can be described in part by their transfer probability and replication probability; that is, when one node meets another node, what is the probability that a message should be transferred, and if so, whether the sender should retain its copy.\nTwo extremes are the direct-delivery protocol and the epidemic protocol.\nThe former transfers with probability 1 when the node meets the destination, 0 for others, and does no replication.\nThe latter uses transfer probability 1 for all nodes and unlimited replication.\nBoth these protocols have their advantages and disadvantages.\nAll other protocols are between the two extremes.\nFirst, we define the notion of contact between two nodes.\nThen we describe five existing protocols before presenting our own proposal.\nA contact is defined as a period of time during which two nodes have the opportunity to communicate.\nAlthough we are aware that wireless technologies differ, we assume that a node can reliably detect the beginning and end time of a contact with nearby nodes.\nA node may be in contact with several other nodes at the same time.\nThe contact history of a node is a sequence of contacts with other nodes.\nNode i has a contact history H_i(j), for each other node j, which denotes the historical contacts between node i and node j.\nWe record the start and end time for each contact; however, the last contacts in the node's contact history may not have ended.\n2.1 Direct Delivery Protocol\nIn this simple protocol, a message is transmitted only when the source node can directly communicate with the destination node of the message.\nIn mobile opportunistic networks, however, the probability for the sender to meet the destination may be low, or even zero.\n2.2 Epidemic Routing Protocol\nThe epidemic routing protocol [23] floods messages into the network.\nThe source node 
sends a copy of the message to every node that it meets.\nThe nodes that receive a copy of the message also send a copy of the message to every node that they meet.\nEventually, a copy of the message arrives at the destination of the message.\nThis protocol is simple, but may use significant resources; excessive communication may drain each node's battery quickly.\nMoreover, since each node keeps a copy of each message, storage is not used efficiently, and the capacity of the network is limited.\nAt a minimum, each node must expire messages after some amount of time or stop forwarding them after a certain number of hops.\nAfter a message expires, the message will not be transmitted and will be deleted from the storage of any node that holds the message.\nAn optimization to reduce the communication cost is to transfer index messages before transferring any data message.\nThe index messages contain the IDs of the messages that a node currently holds.\nThus, by examining the index messages, a node transfers only those messages that are not yet held by the other node.\n2.3 Random Routing\nAn obvious approach between the above two extremes is to select a transfer probability between 0 and 1 to forward messages at each contact.\nWe use a simple replication strategy that allows only the source node to make replicas, and limits the replication to a specific number of copies.\nThe message has some chance of being transferred to a highly mobile node, and thus may have a better chance to reach its destination before the message expires.\n2.4 PRoPHET Protocol\nPRoPHET [16] is a Probabilistic Routing Protocol using History of past Encounters and Transitivity to estimate each node's delivery probability for each other node.\nWhen node i meets node j, the delivery probability of node i for j is updated by\np_ij = p_ij_old + (1 - p_ij_old) p0,\nwhere p0 is an initial probability, a design parameter for a given network.\nLindgren et al. [16] chose 0.75, as did we in our evaluation.\nWhen node i does not meet j for some time, the delivery probability decreases by\np_ij = p_ij_old * α^k,\nwhere α is the aging factor (α < 1), and k is the number of time units since the last update.\nThe PRoPHET protocol exchanges index messages as well as delivery probabilities.\nWhen node i receives node j's delivery probabilities, node i may compute the transitive delivery probability through j to z with\np_iz = p_iz_old + (1 - p_iz_old) p_ij p_jz β,\nwhere β is a design parameter for the impact of transitivity; we used β = 0.25 as did Lindgren [16].\n2.5 Link-State Protocol\nSu et al. [22] use a link-state approach to estimate the weight of each path from the source of a message to the destination.\nThey use the median inter-contact duration or the exponentially aged inter-contact duration as the weight on links.\nThe exponentially aged inter-contact duration of nodes i and j is computed by\nw_ij = α * w_ij_old + (1 - α) I,\nwhere I is the new inter-contact duration and α is the aging factor.\nNodes share their link-state weights when they can communicate with each other, and messages are forwarded to the node that has the path with the lowest link-state weight.\n3.\nTIMELY-CONTACT PROBABILITY\nWe also use historical contact information to estimate the probability of meeting other nodes in the future.\nBut our method differs in that we estimate the contact probability within a period of time.\nFor example, what is the contact probability in the next hour?\nNeither PRoPHET nor Link-State considers time in this way.\nOne way to estimate the \"timely-contact probability\" is to use the ratio of the total contact duration to the total time.\nHowever, this approach does not capture the frequency of contacts.\nFor example, one node may have a long contact with another node, followed by a long non-contact period.\nA third node may have a short contact with the first node, followed by a short non-contact period.\nUsing the above estimation approach, both examples would have similar contact probability.\nIn 
the second example, however, the two nodes have more frequent contacts.\nWe design a method to capture the contact frequency of mobile nodes.\nFor this purpose, we assume that even short contacts are sufficient to exchange messages (in our simulation, however, we accurately model the communication costs, and some short contacts will not succeed in transferring all messages).\nThe probability for node i to meet node j is computed by the following procedure.\nWe divide the contact history H_i(j) into a sequence of n periods of ΔT, starting from the start time (t0) of the first contact in history H_i(j) to the current time.\nWe number each of the n periods from 0 to n - 1, then check each period.\nIf node i had any contact with node j during a given period m, which is [t0 + mΔT, t0 + (m + 1)ΔT), we set the contact status I_m to be 1; otherwise, the contact status I_m is 0.\nThe probability p^(0)_ij that node i meets node j in the next ΔT can be estimated as the average of the contact status in prior intervals:\np^(0)_ij = (1/n) * sum_{m=0}^{n-1} I_m. (5)\nTo adapt to the change of contact patterns, and to reduce the storage space for contact histories, a node may discard old contacts from its history; in this situation, the estimate would be based on only the retained history.\nThe above probability is the direct contact probability of two nodes.\nWe are also interested in the probability that we may be able to pass a message through a sequence of k nodes.\nWe define the k-order probability p^(k)_ij inductively (Equation (6)), where α is any relay node other than i or j.\n3.1 Our Routing Protocol\nWe first consider the case of a two-hop path, that is, with only one relay node.\nWe consider two approaches: either the receiving neighbor decides whether to act as a relay, or the source decides which neighbors to use as relays.\n3.1.1 Receiver Decision\nWhenever a node meets other nodes, they exchange all their messages (or, as above, index messages).\nIf the destination of a message is the receiver itself, the message is delivered.\nOtherwise, if the probability of delivering the message to its destination through this receiver node within ΔT is greater than or equal to a certain threshold, the message is stored in the receiver's storage for forwarding to the destination.\nIf the probability is less than the threshold, the receiver discards the message.\nNotice that our protocol replicates the message whenever a good-looking relay comes along.\n3.1.2 Sender Decision\nTo make decisions, a sender must have information about its neighbors' contact probability with a message's destination.\nTherefore, meta-data exchange is necessary.\nWhen two nodes meet, they exchange a meta-message, containing an unordered list of node IDs for which the sender of the meta-message has a contact probability greater than the threshold.\nAfter receiving a meta-message, a node checks whether it has any message destined for its neighbor, or for a node in the node list of the neighbor's meta-message.\nIf it has, it sends a copy of the message.\nWhen a node receives a message, if the destination of the message is the receiver itself, the message is delivered.\nOtherwise, the message is stored in the receiver's storage for forwarding to the destination.\n3.1.3 Multi-node Relay\nWhen we use more than two hops to relay a message, each node needs to know the contact probabilities along all possible paths to the message destination.\nEvery node keeps a contact probability matrix, in which each cell p_ij is the contact probability between nodes i and j. 
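The period-based estimate described above (contact status I_m per ΔT period, averaged to obtain p^(0)_ij as in Equation (5)) can be sketched as follows; the function name and the (start, end) contact representation are illustrative assumptions, not from the paper:

```python
def timely_contact_probability(contacts, t0, now, delta_t):
    """Estimate p^(0)_ij, the probability that node i meets node j in the
    next delta_t, as the average contact status I_m over the n complete
    periods of length delta_t between t0 and now (Equation 5).

    contacts: list of (start, end) times of past i-j contacts.
    t0: start time of the first contact in the history H_i(j).
    """
    n = int((now - t0) // delta_t)   # number of complete periods so far
    if n == 0:
        return 0.0
    hits = 0
    for m in range(n):
        lo = t0 + m * delta_t        # period m is [lo, hi)
        hi = lo + delta_t
        # contact status I_m = 1 if any contact overlaps period m
        if any(start < hi and end > lo for start, end in contacts):
            hits += 1
    return hits / n                  # p^(0)_ij = (1/n) * sum of I_m
```

For example, with contacts at (0, 1) and (10, 11), t0 = 0, now = 20, and ΔT = 5, two of the four periods contain a contact, giving p^(0)_ij = 0.5. Discarding old contacts, as the text suggests, simply shrinks the list (and moves t0) before calling the function.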
Each node i computes its own contact probabilities (row i) with other nodes using Equation (5) whenever the node ends a contact with other nodes.\nEach row of the contact probability matrix has a version number; the version number for row i is increased only when node i updates the matrix entries in row i.\nOther matrix entries are updated through exchange with other nodes when they meet.\nWhen two nodes i and j meet, they first exchange their contact probability matrices.\nNode i compares its own contact matrix with node j's matrix.\nIf node j's matrix has a row l with a higher version number, then node i replaces its own row l with node j's row l. Likewise, node j updates its matrix.\nAfter the exchange, the two nodes have identical contact probability matrices.\nNext, if a node has a message to forward, the node estimates its neighboring node's order-k contact probability to contact the destination of the message using Equation (6).\nIf p^{(k)}_{ij} is above a threshold, or if j is the destination of the message, node i will send a copy of the message to node j.
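The version-based row exchange just described can be sketched as follows. This is an illustrative reconstruction under my own assumptions about the data layout (rows and version numbers stored in dictionaries keyed by node ID), not the authors' implementation.

```python
def merge_contact_matrices(matrix_a, ver_a, matrix_b, ver_b):
    """When two nodes meet, keep, for each row l, whichever copy of
    row l has the higher version number. Running this symmetrically
    on both nodes leaves them with identical matrices."""
    merged_matrix, merged_ver = {}, {}
    for l in set(ver_a) | set(ver_b):
        # Missing rows are treated as version -1 so any known row wins.
        if ver_a.get(l, -1) >= ver_b.get(l, -1):
            merged_matrix[l], merged_ver[l] = matrix_a[l], ver_a[l]
        else:
            merged_matrix[l], merged_ver[l] = matrix_b[l], ver_b[l]
    return merged_matrix, merged_ver
```

Because each row is owned by exactly one node (only node i bumps the version of row i), taking the higher-versioned copy per row is sufficient for both nodes to converge on the same matrix after the exchange.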
All the above effort serves to determine the transfer probability when two nodes meet.\nThe replication decision is orthogonal to the transfer decision.\nIn our implementation, we always replicate.\nAlthough PRoPHET [16] and Link-State [22], as originally described, do not replicate, we added replication to those protocols for a fairer comparison with our protocol.\n4.\nEVALUATION RESULTS\nWe evaluate and compare the results of the direct-delivery, epidemic, random, PRoPHET, Link-State, and timely-contact routing protocols.\n4.1 Mobility traces\nWe use real mobility data collected at Dartmouth College.\nDartmouth College has collected association and disassociation messages from the devices of wireless users on its network since spring 2001 [13].\nEach message records the wireless card MAC address, the time of association\/disassociation, and the name of the access point.\nWe treat each unique MAC address as a node.\nFor more information about Dartmouth's network and the data collection, see previous studies [7, 12].\nOur data are not contacts in a mobile ad hoc network.\nWe can approximate contact traces by assuming that two users can communicate with each other whenever they are associated with the same access point.\nChaintreau et al.
[5] used Dartmouth data traces and made the same assumption to theoretically analyze the impact of human mobility on opportunistic forwarding algorithms.\nThis assumption may not be accurate,2 but it is a good first approximation.\nIn our simulation, we imagine the same clients with the same mobility in a network with no access points.\nSince our campus has full WiFi coverage, we assume that the location of access points had little impact on users' mobility.\nWe simulated one full month of trace data (November 2003) taken from CRAWDAD [13], with 5,142 users.\nAlthough prediction-based protocols require prior contact history to estimate each node's delivery probability, our preliminary results show that the performance improvement from warming up over one month of trace was marginal.\nTherefore, for simplicity, we show the results of all protocols without warming up.\n4.2 Simulator\nWe developed a custom simulator.3\nSince we used contact traces derived from real mobility data, we did not need a mobility model and omitted physical and link-layer details for node discovery.\nWe were aware that the time for neighbor discovery in different wireless technologies varies from less than one second to several seconds.\nFurthermore, connection establishment also takes time (for example, DHCP negotiation).\nIn our simulation, we assumed the nodes could discover and connect to each other instantly when they were associated with the same AP.\nTo accurately model communication costs, however, we simulated some MAC-layer behaviors, such as collisions.\nThe default network settings of our simulator are listed in Table 1, using the values recommended by other papers [22, 16].\nThe message probability was the probability of generating messages, as described in Section 4.3.\nThe default transmission bandwidth was 11 Mb\/s.\nWhen one node tried to transmit a message, it first checked whether any nearby node was transmitting.\nIf one was, the node backed off a random number of slots.\nEach slot was 1
millisecond, and the maximum number of backoff slots was 30.\nThe size of messages was uniformly distributed between 80 bytes and 1024 bytes.\nThe hop count limit (HCL) was the maximum number of hops before a message should stop being forwarded.\nThe time to live (TTL) was the maximum duration that a message may exist before expiring.\nThe storage capacity was the maximum space that a node can use for storing messages.\nFor our routing method, we used a default prediction window \u0394T of 10 hours and a probability threshold of 0.01.\nThe replication factor r was not limited by default, so the source of a message transferred the message to any other node that had a contact probability with the message destination higher than the probability threshold.\n4.3 Message generation\nAfter each contact event in the contact trace, we generated a message with a given probability; we chose a source node and a destination node randomly, using a uniform distribution across nodes seen in the contact trace up to the current time.\n2Two nodes may not have been able to directly communicate while they were at two far sides of an access point, or two nodes may have been able to directly communicate if they were between two adjacent access points.\nTable 1: Default Settings of the Simulation\nFigure 1: Movements and contact durations during each hour\nWhen there were more contacts during a certain period, there was a higher likelihood that a new message was generated in that period.\nThis correlation is not unreasonable, since there were more movements during the day than during the night, and hence more contacts.\nFigure 1 shows the statistics of the numbers of movements and the numbers of contacts during each hour of the day, summed across all users and all days.\nThe plot shows a clear diurnal activity pattern.\nActivity was lowest around 5am and peaked between 4pm and 5pm.\nWe assume that in some applications, network traffic exhibits similar patterns, that is, people send
more messages during the day, too.\nMessages expire after a TTL.\nWe did not use proactive methods to notify nodes of message delivery, which would have allowed delivered messages to be removed from storage.\n4.4 Metrics\nWe define a set of metrics that we use in evaluating routing protocols in opportunistic networks:\n\u2022 delivery ratio, the ratio of the number of messages delivered to the total number of messages generated.\n\u2022 message transmissions, the total number of messages transmitted during the simulation across all nodes.\n\u2022 meta-data transmissions, the total number of meta-data units transmitted during the simulation across all nodes.\n\u2022 message duplications, the number of message copies created due to replication.\n\u2022 delay, the duration between a message's generation time and its delivery time.\n\u2022 storage usage, the max and mean of the maximum storage (bytes) used across all nodes.\n4.5 Results\nHere we compare the simulation results of the six routing protocols.\nFigure 2: Delivery ratio (log scale).\nThe direct and random protocols for one-hour TTL had delivery ratios that were too low to be visible in the plot.\nFigure 2 shows the delivery ratio of all the protocols, with different TTLs.\n(In all the plots in the paper, \"prediction\" stands for our method, \"state\" stands for the Link-State protocol, and \"prophet\" represents PRoPHET.)\nAlthough we had 5,142 users in the network, the direct-delivery and random protocols had low delivery ratios (note the log scale).\nEven for messages with an unlimited lifetime, only 59 out of 2077 messages were delivered during this one-month simulation.\nThe delivery ratio of epidemic routing was the best.\nThe three prediction-based approaches had low delivery ratios compared to epidemic routing.\nAlthough our method was slightly better than the other two, the advantage was marginal.\nThe high delivery ratio of epidemic routing came at a price: excessive transmissions.\nFigure 3 shows the
number of message data transmissions.\nThe number of message transmissions of epidemic routing was more than 10 times higher than for the prediction-based routing protocols.\nObviously, the direct-delivery protocol had the lowest number of message transmissions--the number of messages delivered.\nAmong the three prediction-based methods, PRoPHET transmitted fewer messages, but had a comparable delivery ratio, as seen in Figure 2.\nFigure 4 shows that epidemic and all prediction-based methods had substantial meta-data transmissions, though epidemic routing had relatively more with shorter TTLs.\nBecause the epidemic protocol transmitted messages at every contact, more nodes carried messages that required meta-data transmission during contacts.\nThe direct-delivery and random protocols had no meta-data transmissions.\nIn addition to its message transmissions and meta-data transmissions, the epidemic routing protocol also had excessive message duplications, spreading replicas of messages over the network.\nFigure 3: Message transmissions (log scale)\nFigure 4: Meta-data transmissions (log scale).\nDirect and random protocols had no meta-data transmissions.\nFigure 5 shows that epidemic routing had one to two orders of magnitude more duplication than the prediction-based protocols.\nRecall that the direct-delivery and random protocols did not replicate, and thus had no data duplications.\nFigure 6 shows both the median and mean delivery delays.\nAll protocols show similar delivery delays in both mean and median measures for medium TTLs, but differ for long and short TTLs.\nWith a 100-hour TTL, or unlimited TTL, epidemic routing had the shortest delays.\nDirect delivery had the longest delay for unlimited TTL, but the shortest delay for the one-hour TTL.\nThese results seem contrary to our intuition: the epidemic routing protocol should be the fastest since it spreads messages all over the network.\nIndeed, the figures show only the delay time for delivered
messages.\nFor direct delivery, random, and the probability-based routing protocols, relatively few messages were delivered for short TTLs, so many messages expired before they could reach their destination; those messages had infinite delivery delay and were not included in the median or mean measurements.\nFor longer TTLs, more messages were delivered, even for the direct-delivery protocol.\nFor comparison, the statistics for longer TTLs are thus more meaningful than those for short TTLs.\nSince our message generation rate was low, the storage usage was also low in our simulation.\nFigure 7 shows the maximum and average of the maximum volume (in KBytes) of messages stored in each node.\nFigure 5: Message duplications (log scale).\nDirect and random protocols had no message duplications.\nFigure 6: Median and mean delays (log scale).\nEpidemic routing had the highest storage usage.\nThe message time-to-live parameter was the dominant factor affecting storage usage for the epidemic and prediction-based routing protocols.\nWe studied the impact of different parameters of our prediction-based routing protocol.\nOur prediction-based protocol was sensitive to several parameters, such as the probability threshold and the prediction window \u0394T.\nFigure 8 shows the delivery ratios when we used different probability thresholds.\n(The leftmost value 0.01 is the value used for the other plots.)\nA higher probability threshold limited transfers, so fewer messages were delivered.\nIt also required fewer transmissions, as shown in Figure 9.\nWith a larger prediction window, we got a higher contact probability.\nThus, for the same probability threshold, we had a slightly higher delivery ratio, as shown in Figure 10, and a few more transmissions, as shown in Figure 11.\n5.\nRELATED WORK\nIn addition to the protocols that we evaluated in our simulation, several other opportunistic network routing protocols have been proposed in the literature.\nWe did not implement and evaluate
these routing protocols, because they either require domain-specific information (location information) [14, 15], assume certain mobility patterns [17], or present approaches orthogonal [10, 24] to other routing protocols.\nFigure 7: Max and mean of maximum storage usage across all nodes (log scale).\nFigure 8: Probability threshold impact on delivery ratio of timely-contact routing.\nLeBrun et al. [14] propose a location-based delay-tolerant network routing protocol.\nTheir algorithm assumes that every node knows its own position, and that the destination is stationary at a known location.\nA node forwards data to a neighbor only if the neighbor is closer to the destination than its own position.\nOur protocol does not require knowledge of the nodes' locations; instead, it learns their contact patterns.\nLeguay et al. [15] use a high-dimensional space to represent a mobility pattern, then route messages to nodes that are closer to the destination node in the mobility-pattern space.\nLocation information of nodes is required to construct mobility patterns.\nMusolesi et al.
[17] propose an adaptive routing protocol for intermittently connected mobile ad hoc networks.\nThey use a Kalman filter to compute the probability that a node delivers messages.\nThis protocol assumes group mobility and cloud connectivity, that is, nodes move as a group, and among this group of nodes a contemporaneous end-to-end connection exists for every pair of nodes.\nWhen two nodes are in the same connected cloud, DSDV [19] routing is used.\nNetwork coding has also drawn much interest in DTN research.\nErasure coding [10, 24] explores coding algorithms to reduce message replicas.\nThe source node replicates a message m times, then uses a coding scheme to encode the replicas into one big message.\nAfter the replicas are encoded, the source divides the big message into k blocks of the same size, and transmits a block to each of the first k encountered nodes.\nIf m of the blocks are received at the destination, the message can be restored, where m < k.\nIn a uniformly distributed mobility scenario, the delivery probability increases because the probability that the destination node meets m relays is greater than the probability that it meets k relays, given m < k.\nFigure 9: Probability threshold impact on message transmission of timely-contact routing.\nFigure 10: Prediction window impact on delivery ratio of timely-contact routing (semi-log scale).\n6.\nSUMMARY\nWe propose a prediction-based routing protocol for opportunistic networks.\nWe evaluate the performance of our protocol using realistic contact traces, and compare it to five existing routing protocols.\nOur simulation results show that direct delivery had the lowest delivery ratio, the fewest data transmissions, and no meta-data transmission or data duplication.\nDirect delivery is suitable for devices that require extremely low power consumption.\nThe random protocol increased the chance of delivery for messages otherwise stuck at some low-mobility nodes.\nEpidemic routing delivered the most messages.\nThe excessive transmissions and
data duplication, however, consume more resources than portable devices may be able to provide.\nNone of these protocols (direct delivery, random, and epidemic routing) is practical for real deployment of opportunistic networks, because they either had an extremely low delivery ratio or an extremely high resource consumption.\nFigure 11: Prediction window impact on message transmission of timely-contact routing (semi-log scale).\nThe prediction-based routing protocols had a delivery ratio more than 10 times better than that of direct-delivery and random routing, and fewer transmissions and less storage usage than epidemic routing.\nThey also had fewer data duplications than epidemic routing.\nAll the prediction-based routing protocols that we evaluated had similar performance.\nOur method had a slightly higher delivery ratio, but more transmissions and higher storage usage.\nThere are many parameters for prediction-based routing protocols, however, and different parameters may produce different results.\nIndeed, there is an opportunity for some adaptation; for example, high-priority messages may be given higher transfer and replication probabilities to increase the chance of delivery and reduce the delay, or a node with infrequent contacts may choose to raise its transfer probability.\nWe studied only the impact of predicting peer-to-peer contact probability for routing unicast messages.\nIn some applications, context information (such as location) may be available for the peers.\nOne may also consider other messaging models, for example, where messages are sent to a location, such that every node at that location will receive a copy of the message.\nLocation prediction [21] may be used to predict nodes' mobility, and to choose as relays those nodes moving toward the destined location.\nResearch on routing in opportunistic networks is still in its early stages.\nMany other issues of opportunistic networks, such as security and privacy, are mainly left
open.\nWe anticipate studying these issues in future work.","keyphrases":["rout protocol","rout","contact trace","opportunist network","simul","prophet","delai-toler network","mobil opportunist network","frequent link break","end-to-end path","random mobil model","realist mobil trace","unicast","transfer probabl","direct-deliveri protocol","epidem protocol","replic strategi","past encount and transit histori"],"prmu":["P","P","P","P","P","P","M","R","M","R","M","R","U","U","R","R","U","M"]} {"id":"H-97","title":"Feature Representation for Effective Action-Item Detection","abstract":"E-mail users face an ever-growing challenge in managing their inboxes due to the growing centrality of email in the workplace for task assignment, action requests, and other roles beyond information dissemination. Whereas Information Retrieval and Machine Learning techniques are gaining initial acceptance in spam filtering and automated folder assignment, this paper reports on a new task: automated action-item detection, in order to flag emails that require responses, and to highlight the specific passage(s) indicating the request(s) for action. Unlike standard topic-driven text classification, action-item detection requires inferring the sender's intent, and as such responds less well to pure bag-of-words classification. However, using enriched feature sets, such as n-grams (up to n=4) with chi-squared feature selection, and contextual cues for action-item location improve performance by up to 10% over unigrams, using in both cases state of the art classifiers such as SVMs with automated model selection via embedded cross-validation.","lvl-1":"Feature Representation for Effective Action-Item Detection Paul N. 
Bennett Computer Science Department Carnegie Mellon University Pittsburgh, PA 15213 pbennett+@cs.cmu.edu Jaime Carbonell Language Technologies Institute.\nCarnegie Mellon University Pittsburgh, PA 15213 jgc+@cs.cmu.edu ABSTRACT E-mail users face an ever-growing challenge in managing their inboxes due to the growing centrality of email in the workplace for task assignment, action requests, and other roles beyond information dissemination.\nWhereas Information Retrieval and Machine Learning techniques are gaining initial acceptance in spam filtering and automated folder assignment, this paper reports on a new task: automated action-item detection, in order to flag emails that require responses, and to highlight the specific passage(s) indicating the request(s) for action.\nUnlike standard topic-driven text classification, action-item detection requires inferring the sender``s intent, and as such responds less well to pure bag-of-words classification.\nHowever, using enriched feature sets, such as n-grams (up to n=4) with chi-squared feature selection, and contextual cues for action-item location improve performance by up to 10% over unigrams, using in both cases state of the art classifiers such as SVMs with automated model selection via embedded cross-validation.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; I.2.6 [Artificial Intelligence]: Learning; I.5.4 [Pattern Recognition]: Applications General Terms Experimentation 1.\nINTRODUCTION E-mail users are facing an increasingly difficult task of managing their inboxes in the face of mounting challenges that result from rising e-mail usage.\nThis includes prioritizing e-mails over a range of sources from business partners to family members, filtering and reducing junk e-mail, and quickly managing requests that demand From: Henry Hutchins <hhutchins@innovative.company.com> To: Sara Smith; Joe Johnson; William Woolings Subject: meeting with prospective 
customers Sent: Fri 12\/10\/2005 8:08 AM Hi All, I``d like to remind all of you that the group from GRTY will be visiting us next Friday at 4:30 p.m..\nThe current schedule looks like this: + 9:30 a.m. Informal Breakfast and Discussion in Cafeteria + 10:30 a.m. Company Overview + 11:00 a.m. Individual Meetings (Continue Over Lunch) + 2:00 p.m. Tour of Facilities + 3:00 p.m. Sales Pitch In order to have this go off smoothly, I would like to practice the presentation well in advance.\nAs a result, I will need each of your parts by Wednesday.\nKeep up the good work!\n-Henry Figure 1: An E-mail with emphasized Action-Item, an explicit request that requires the recipient``s attention or action.\nthe receiver``s attention or action.\nAutomated action-item detection targets the third of these problems by attempting to detect which e-mails require an action or response with information, and within those e-mails, attempting to highlight the sentence (or other passage length) that directly indicates the action request.\nSuch a detection system can be used as one part of an e-mail agent which would assist a user in processing important e-mails quicker than would have been possible without the agent.\nWe view action-item detection as one necessary component of a successful e-mail agent which would perform spam detection, action-item detection, topic classification and priority ranking, among other functions.\nThe utility of such a detector can manifest as a method of prioritizing e-mails according to task-oriented criteria other than the standard ones of topic and sender or as a means of ensuring that the email user hasn``t dropped the proverbial ball by forgetting to address an action request.\nAction-item detection differs from standard text classification in two important ways.\nFirst, the user is interested both in detecting whether an email contains action items and in locating exactly where these action item requests are contained within the email body.\nIn contrast, 
standard text categorization merely assigns a topic label to each text, whether that label corresponds to an e-mail folder or a controlled indexing vocabulary [12, 15, 22].\nSecond, action-item detection attempts to recover the email sender``s intent - whether she means to elicit response or action on the part of the receiver; note that for this task, classifiers using only unigrams as features do not perform optimally, as evidenced in our results below.\nInstead we find that we need more information-laden features such as higher-order n-grams.\nText categorization by topic, on the other hand, works very well using just individual words as features [2, 9, 13, 17].\nIn fact, genre-classification, which one would think may require more than a bag-of-words approach, also works quite well using just unigram features [14].\nTopic detection and tracking (TDT), also works well with unigram feature sets [1, 20].\nWe believe that action-item detection is one of the first clear instances of an IR-related task where we must move beyond bag-of-words to achieve high performance, albeit not too far, as bag-of-n-grams seem to suffice.\nWe first review related work for similar text classification problems such as e-mail priority ranking and speech act identification.\nThen we more formally define the action-item detection problem, discuss the aspects that distinguish it from more common problems like topic classification, and highlight the challenges in constructing systems that can perform well at the sentence and document level.\nFrom there, we move to a discussion of feature representation and selection techniques appropriate for this problem and how standard text classification approaches can be adapted to smoothly move from the sentence-level detection problem to the documentlevel classification problem.\nWe then conduct an empirical analysis that helps us determine the effectiveness of our feature extraction procedures as well as establish baselines for a number of 
classification algorithms on this task.\nFinally, we summarize this paper``s contributions and consider interesting directions for future work.\n2.\nRELATED WORK Several other researchers have considered very similar text classification tasks.\nCohen et al. [5] describe an ontology of speech acts, such as Propose a Meeting, and attempt to predict when an e-mail contains one of these speech acts.\nWe consider action-items to be an important specific type of speech act that falls within their more general classification.\nWhile they provide results for several classification methods, their methods only make use of human judgments at the document-level.\nIn contrast, we consider whether accuracy can be increased by using finer-grained human judgments that mark the specific sentences and phrases of interest.\nCorston-Oliver et al. [6] consider detecting items in e-mail to Put on a To-Do List.\nThis classification task is very similar to ours except they do not consider simple factual questions to belong to this category.\nWe include questions, but note that not all questions are action-items - some are rhetorical or simply social convention, How are you?\n.\nFrom a learning perspective, while they make use of judgments at the sentence-level, they do not explicitly compare what if any benefits finer-grained judgments offer.\nAdditionally, they do not study alternative choices or approaches to the classification task.\nInstead, they simply apply a standard SVM at the sentence-level and focus primarily on a linguistic analysis of how the sentence can be logically reformulated before adding it to the task list.\nIn this study, we examine several alternative classification methods, compare document-level and sentence-level approaches and analyze the machine learning issues implicit in these problems.\nInterest in a variety of learning tasks related to e-mail has been rapidly growing in the recent literature.\nFor example, in a forum dedicated to e-mail learning tasks, 
Culotta et al. [7] presented methods for learning social networks from e-mail.\nIn this work, we do not focus on peer relationships; however, such methods could complement those here since peer relationships often influence word choice when requesting an action.\n3.\nPROBLEM DEFINITION & APPROACH In contrast to previous work, we explicitly focus on the benefits that finer-grained, more costly, sentence-level human judgments offer over coarse-grained document-level judgments.\nAdditionally, we consider multiple standard text classification approaches and analyze both the quantitative and qualitative differences that arise from taking a document-level vs. a sentence-level approach to classification.\nFinally, we focus on the representation necessary to achieve the most competitive performance.\n3.1 Problem Definition In order to provide the most benefit to the user, a system would not only detect the document, but it would also indicate the specific sentences in the e-mail which contain the action-items.\nTherefore, there are three basic problems: 1.\nDocument detection: Classify a document as to whether or not it contains an action-item.\n2.\nDocument ranking: Rank the documents such that all documents containing action-items occur as high as possible in the ranking.\n3.\nSentence detection: Classify each sentence in a document as to whether or not it is an action-item.\nAs in most Information Retrieval tasks, the weight the evaluation metric should give to precision and recall depends on the nature of the application.\nIn situations where a user will eventually read all received messages, ranking (e.g., via precision at recall of 1) may be most important since this will help encourage shorter delays in communications between users.\nIn contrast, high-precision detection at low recall will be of increasing importance when the user is under severe time-pressure and therefore will likely not read all mail.\nThis can be the case for crisis managers during disaster 
management.\nFinally, sentence detection plays a role both in time-pressure situations and simply in reducing the time the user needs to gist the message.\n3.2 Approach\nAs mentioned above, the labeled data can come in one of two forms: a document-labeling provides a yes\/no label for each document as to whether it contains an action-item; a phrase-labeling provides only a yes label for the specific items of interest.\nWe term the human judgments a phrase-labeling since the user``s view of the action-item may not correspond with actual sentence boundaries or predicted sentence boundaries.\nObviously, it is straightforward to generate a document-labeling consistent with a phrase-labeling by labeling a document yes if and only if it contains at least one phrase labeled yes.\nTo train classifiers for this task, we can take several viewpoints related to both the basic problems we have enumerated and the form of the labeled data.\nThe document-level view treats each e-mail as a learning instance with an associated class label.\nThen, the document can be converted to a feature-value vector and learning progresses as usual.\nApplying a document-level classifier to document detection and ranking is straightforward.\nIn order to apply it to sentence detection, one must take additional steps.\nFor example, if the classifier predicts that a document contains an action-item, then areas of the document that contain a high concentration of words that the model weights heavily in favor of action-items can be indicated.\nThe obvious benefit of the document-level approach is that training-set collection costs are lower, since the user only has to specify whether or not an e-mail contains an action-item and not the specific sentences.\nIn the sentence-level view, each e-mail is automatically segmented into sentences, and each sentence is treated as a learning instance with an associated class label.\nSince the phrase-labeling provided by the user may not coincide with the automatic
segmentation, we must determine what label to assign a partially overlapping sentence when converting it to a learning instance.\nOnce trained, applying the resulting classifiers to sentence detection is straightforward, but in order to apply the classifiers to document detection and document ranking, the individual predictions over each sentence must be aggregated to make a document-level prediction.\nThis approach has the potential to benefit from more specific labels that enable the learner to focus attention on the key sentences, instead of having to learn from data in which the majority of the words in the e-mail provide little or no information about class membership.\n3.2.1 Features\nConsider some of the phrases that might constitute part of an action item: would like to know, let me know, as soon as possible, have you.\nEach of these phrases consists of common words that occur in many e-mails.\nHowever, when they occur in the same sentence, they are far more indicative of an action-item.\nAdditionally, order can be important: consider have you versus you have.\nBecause of this, we posit that n-grams play a larger role in this problem than is typical of problems like topic classification.\nTherefore, we consider all n-grams up to size 4.\nWhen using n-grams, if we find an n-gram of size 4 in a segment of text, we can represent the text as just one occurrence of the n-gram, or as one occurrence of the n-gram and an occurrence of each smaller n-gram contained by it.\nWe choose the second of these alternatives since this allows the algorithm itself to smoothly back off in terms of recall.\nMethods such as naive Bayes may be hurt by such a representation because of double-counting.\nSince sentence-ending punctuation can provide information, we retain the terminating punctuation token when it is identifiable.\nAdditionally, we add a beginning-of-sentence and an end-of-sentence token in order to capture patterns that are often indicators at the
beginning or end of a sentence.\nAssuming proper punctuation, these extra tokens are unnecessary, but e-mail often lacks proper punctuation.\nIn addition, for the sentence-level classifiers that use n-grams, we additionally include a binary encoding of the position of the sentence relative to the document.\nThis encoding has eight associated features that represent which octile (the first eighth, second eighth, etc.) contains the sentence.\n3.2.2 Implementation Details In order to compare the document-level to the sentence-level approach, we compare predictions at the document-level.\nWe do not address how to use a document-level classifier to make predictions at the sentence-level.\nIn order to automatically segment the text of the e-mail, we use the RASP statistical parser [4].\nSince the automatically segmented sentences may not correspond directly with the phrase-level boundaries, we treat any sentence that contains at least 30% of a marked action-item segment as an action-item.\nWhen evaluating sentence detection for the sentence-level system, we use these class labels as ground truth.\nSince we are not evaluating multiple segmentation approaches, this does not bias any of the methods.\nIf multiple segmentation systems were under evaluation, one would need to use a metric that matched predicted positive sentences to phrases labeled positive.\nThe metric would need to penalize overly long positive predictions as well as overly short ones.\nOur criterion for converting to labeled instances implicitly includes both.\nSince the segmentation is fixed, an overly long prediction would be predicting yes for many no instances, since presumably the extra length corresponds to additional segmented sentences, none of which contains 30% of an action-item.\nLikewise, an overly short prediction must correspond to a small sentence included in the action-item but not constituting all of it.\nTherefore, in order to consider the prediction to be too
short, there will be an additional preceding/following sentence that is an action-item where we incorrectly predicted no.\nOnce a sentence-level classifier has made a prediction for each sentence, we must combine these predictions to make both a document-level prediction and a document-level score.\nWe use the simple policy of predicting positive when any of the sentences is predicted positive.\nIn order to produce a document score for ranking, the confidence that the document contains an action-item is:\n\nψ(d) = (1/n(d)) Σ_{s ∈ d : π(s)=1} ψ(s)   if π(s) = 1 for some s ∈ d\nψ(d) = (1/n(d)) max_{s ∈ d} ψ(s)          otherwise\n\nwhere s is a sentence in document d, π is the classifier's 1/0 prediction, ψ(s) is the score the classifier assigns as its confidence that π(s) = 1, and n(d) is the greater of 1 and the number of (unigram) tokens in the document.\nIn other words, when any sentence is predicted positive, the document score is the length-normalized sum of the sentence scores above threshold.\nWhen no sentence is predicted positive, the document score is the maximum sentence score normalized by length.\nAs in other text problems, we are more likely to emit false positives for documents with more words or sentences.\nThus we include a length normalization factor.\n4.\nEXPERIMENTAL ANALYSIS 4.1 The Data Our corpus consists of e-mails obtained from volunteers at an educational institution and covers subjects such as: organizing a research workshop, arranging for job-candidate interviews, publishing proceedings, and talk announcements.\nThe messages were anonymized by replacing the names of each individual and institution with a pseudonym.1 After attempting to identify and eliminate duplicate e-mails, the corpus contains 744 e-mail messages.\nAfter identity anonymization, the corpus has three basic versions.\nQuoted
material can act as noise when learning since it may include action-items from previous messages that are no longer relevant.\nTo isolate the effects of quoted material, we have three versions of the corpus.\nThe raw form contains the basic messages.\nThe auto-stripped version contains the messages after quoted material has been automatically removed.\nThe hand-stripped version contains the messages after quoted material has been removed by a human.\nAdditionally, the hand-stripped version has had any xml content and e-mail signatures removed, leaving only the essential content of the message.\nThe studies reported here are performed with the hand-stripped version.\nThis allows us to balance the cognitive load in terms of the number of tokens that must be read in the user studies we report; including quoted material would complicate the user studies since some users might skip the material while others read it.\nAdditionally, ensuring all quoted material is removed prevents tainting the cross-validation, since otherwise a test item could occur as quoted material in a training document.\n(Footnote 1: We have an even more highly anonymized version of the corpus that can be made available for some outside experimentation.\nPlease contact the authors for more information on obtaining this data.)\n4.1.1 Data Labeling Two human annotators labeled each message as to whether or not it contained an action-item.\nIn addition, they identified each segment of the e-mail which contained an action-item.\nA segment is a contiguous section of text selected by the human annotators and may span several sentences or a complete phrase contained in a sentence.\nThey were instructed that an action item is an explicit request for information that requires the recipient's attention or a required action, and told to highlight the phrases or sentences that make up the request.\n\nTable 1: Agreement of Human Annotators at Document Level\n| | Annotator 1: No | Annotator 1: Yes |\n| Annotator 2: No | 391 | 26 |\n| Annotator 2: Yes | 29 | 298 |\n\nAnnotator One
labeled 324 messages as containing action items.\nAnnotator Two labeled 327 messages as containing action items.\nThe agreement of the human annotators is shown in Tables 1 and 2.\nThe annotators are said to agree at the document-level when both marked the same document as containing no action-items or both marked at least one action-item, regardless of whether the text segments were the same.\nAt the document-level, the annotators agreed 93% of the time.\nThe kappa statistic [3, 5] is often used to evaluate inter-annotator agreement:\n\nκ = (A − R) / (1 − R)\n\nwhere A is the empirical estimate of the probability of agreement and R is the empirical estimate of the probability of random agreement given the empirical class priors.\nA value close to −1 implies the annotators agree far less often than would be expected randomly, while a value close to 1 means they agree more often than randomly expected.\nAt the document-level, the kappa statistic for inter-annotator agreement is 0.85.\nThis value is both strong enough to expect the problem to be learnable and comparable with results for similar tasks [5, 6].\nIn order to determine the sentence-level agreement, we use each judgment to create a sentence corpus with labels as described in Section 3.2.2, then consider the agreement over these sentences.\nThis allows us to compare agreement over the no judgments as well.\nWe perform this comparison over the hand-stripped corpus since that eliminates spurious no judgments that would come from including quoted material, etc.\nBoth annotators were free to label the subject as an action-item, but since neither did, we omit the subject line of the message as well.\nThis only reduces the number of no agreements.\nThis leaves 6301 automatically segmented sentences.\nAt the sentence-level, the annotators agreed 98% of the time, and the kappa statistic for inter-annotator agreement is 0.82.\nIn order to produce one single set of judgments, the human annotators went through each annotation
where there was disagreement and came to a consensus opinion.\nThe annotators did not collect statistics during this process but anecdotally reported that the majority of disagreements were either cases of clear annotator oversight or different interpretations of conditional statements.\nFor example, If you would like to keep your job, come to tomorrow's meeting implies a required action, whereas If you would like to join the football betting pool, come to tomorrow's meeting does not.\nThe first would be an action-item in most contexts while the second would not.\nOf course, many conditional statements are not so clearly interpretable.\n\nTable 2: Agreement of Human Annotators at Sentence Level\n| | Annotator 1: No | Annotator 1: Yes |\n| Annotator 2: No | 5810 | 65 |\n| Annotator 2: Yes | 74 | 352 |\n\nAfter reconciling the judgments there are 416 e-mails with no action-items and 328 e-mails containing action-items.\nOf the 328 e-mails containing action-items, 259 messages have one action-item segment; 55 messages have two action-item segments; 11 messages have three action-item segments.\nTwo messages have four action-item segments, and one message has six action-item segments.\nComputing the sentence-level agreement using the reconciled gold standard judgments with each of the annotators' individual judgments gives a kappa of 0.89 for Annotator One and a kappa of 0.92 for Annotator Two.\nIn terms of message characteristics, there were on average 132 content tokens in the body after stripping.\nFor action-item messages, there were 115.\nHowever, by examining Figure 2 we see the length distributions are nearly identical.\nAs would be expected for e-mail, it is a long-tailed distribution, with about half the messages having more than 60 tokens in the body (this paragraph has 65 tokens).\n4.2 Classifiers For this experiment, we have selected a variety of standard text classification algorithms.\nIn selecting algorithms, we have chosen algorithms that are not only known to work well but which differ along such lines as
discriminative vs. generative and lazy vs. eager.\nWe have done this in order to provide both a competitive and thorough sampling of learning methods for the task at hand.\nThis is important since it is easy to improve upon a strawman classifier by introducing a new representation.\nBy thoroughly sampling alternative classifier choices, we demonstrate that representation improvements over bag-of-words are not due to using the information in the bag-of-words poorly.\n4.2.1 kNN We employ a standard variant of the k-nearest neighbor algorithm used in text classification, kNN with s-cut score thresholding [19].\nWe use a tfidf-weighting of the terms with a distance-weighted vote of the neighbors to compute the score before thresholding it.\nIn order to choose the value of s for thresholding, we perform leave-one-out cross-validation over the training set.\nThe value of k is set to 2(log2 N + 1), where N is the number of training points.\nThis rule for choosing k is theoretically motivated by results showing that such a rule converges to the optimal classifier as the number of training points increases [8].\nIn practice, we have also found it to be a computational convenience that frequently leads to results comparable with numerically optimizing k via a cross-validation procedure.\n4.2.2 Naïve Bayes We use a standard multinomial naïve Bayes classifier [16].\nIn using this classifier, we smoothed word and class probabilities using a Bayesian estimate (with the word prior) and a Laplace m-estimate, respectively.\n\n[Figure 2: The Histogram (left) and Distribution (right) of Message Length.\nA bin size of 20 words was used.\nOnly tokens in the body after hand-stripping were counted.\nAfter stripping, the majority of words left are usually actual message content.]\n\nTable 3: Average Document-Detection Performance during Cross-Validation for Each Method and the Sample Standard Deviation (Sn−1) in italics.\nThe best performance for each classifier is shown in bold.\n\nF1:\n| Classifier | Document Unigram | Document Ngram | Sentence Unigram | Sentence Ngram |\n| kNN | 0.6670 ± 0.0288 | 0.7108 ± 0.0699 | 0.7615 ± 0.0504 | 0.7790 ± 0.0460 |\n| naïve Bayes | 0.6572 ± 0.0749 | 0.6484 ± 0.0513 | 0.7715 ± 0.0597 | 0.7777 ± 0.0426 |\n| SVM | 0.6904 ± 0.0347 | 0.7428 ± 0.0422 | 0.7282 ± 0.0698 | 0.7682 ± 0.0451 |\n| Voted Perceptron | 0.6288 ± 0.0395 | 0.6774 ± 0.0422 | 0.6511 ± 0.0506 | 0.6798 ± 0.0913 |\n\nAccuracy:\n| Classifier | Document Unigram | Document Ngram | Sentence Unigram | Sentence Ngram |\n| kNN | 0.7029 ± 0.0659 | 0.7486 ± 0.0505 | 0.7972 ± 0.0435 | 0.8092 ± 0.0352 |\n| naïve Bayes | 0.6074 ± 0.0651 | 0.5816 ± 0.1075 | 0.7863 ± 0.0553 | 0.8145 ± 0.0268 |\n| SVM | 0.7595 ± 0.0309 | 0.7904 ± 0.0349 | 0.7958 ± 0.0551 | 0.8173 ± 0.0258 |\n| Voted Perceptron | 0.6531 ± 0.0390 | 0.7164 ± 0.0376 | 0.6413 ± 0.0833 | 0.7082 ± 0.1032 |\n\n4.2.3 SVM We have used a linear SVM with a tfidf feature representation and L2-norm as implemented in the SVMlight package v6.01 [11].\nAll default settings were used.\n4.2.4 Voted Perceptron Like the SVM, the Voted Perceptron is a kernel-based learning method.\nWe use the same feature representation and kernel as for the SVM: a linear kernel with tfidf-weighting and an L2-norm.\nThe voted perceptron is an online learning method that keeps a history of past perceptrons used, as well as a weight signifying how often each perceptron was correct.\nWith each new training example, a correct classification increases the weight on the current perceptron and an incorrect classification updates the perceptron.\nThe output of the classifier uses the weights on the perceptrons to make a final
voted classification.\nWhen used in an offline manner, multiple passes can be made through the training data.\nBoth the voted perceptron and the SVM give a solution from the same hypothesis space: in this case, a linear classifier.\nFurthermore, it is well-known that the Voted Perceptron increases the margin of the solution after each pass through the training data [10].\nSince Cohen et al. [5] obtain worse results using an SVM than a Voted Perceptron with one training iteration, they conclude that the best solution for detecting speech acts may not lie in an area with a large margin.\nBecause their tasks are highly similar to ours, we employ both classifiers to ensure we are not overlooking a competitive alternative classifier to the SVM for the basic bag-of-words representation.\n4.3 Performance Measures To compare the performance of the classification methods, we look at two standard performance measures, F1 and accuracy.\nThe F1 measure [18, 21] is the harmonic mean of precision and recall, where Precision = Correct Positives / Predicted Positives and Recall = Correct Positives / Actual Positives.\n4.4 Experimental Methodology We perform standard 10-fold cross-validation on the set of documents.\nFor the sentence-level approach, all sentences in a document are either entirely in the training set or entirely in the test set for each fold.\nFor significance tests, we use a two-tailed t-test [21] to compare the values obtained during each cross-validation fold, with a p-value of 0.05.\nFeature selection was performed using the chi-squared statistic.\nDifferent levels of feature selection were considered for each classifier.\nEach of the following numbers of features was tried: 10, 25, 50, 100, 250, 750, 1000, 2000, 4000.\nThere are approximately 4700 unigram tokens without feature selection.\nIn order to choose the number of features to use for each classifier, we perform nested cross-validation and choose the settings that yield the optimal document-level F1 for that
classifier.\nFor this study, only the body of each e-mail message was used.\nFeature selection is always applied to all candidate features.\nThat is, for the n-gram representation, the n-grams and position features are also subject to removal by the feature selection method.\n4.5 Results The results for document-level classification are given in Table 3.\nThe primary hypothesis we are concerned with is that n-grams are critical for this task; if this is true, we expect to see a significant gap in performance between the document-level classifiers that use n-grams (denoted Document Ngram) and those using only unigram features (denoted Document Unigram).\nExamining Table 3, we observe that this is indeed the case for every classifier except naïve Bayes.\nThis difference in performance produced by the n-gram representation is statistically significant for each classifier except for naïve Bayes and the accuracy metric for kNN (see Table 4).\nNaïve Bayes's poor performance with the n-gram representation is not surprising, since the bag-of-n-grams causes excessive double-counting as mentioned in Section 3.2.1; however, naïve Bayes is not hurt at the sentence-level because the sparse examples provide few chances for the agglomerative effects of double-counting.\nIn either case, when a language-modeling approach is desired, modeling the n-grams directly would be preferable to naïve Bayes.\nMore importantly for the n-gram hypothesis, the n-grams lead to the best document-level classifier performance as well.\nAs would be expected, the difference between the sentence-level n-gram representation and unigram representation is small.\nThis is because the window of text is so small that the unigram representation, when done at the sentence-level, implicitly picks up on the power of the n-grams.\nFurther improvement would signify that the order of the words matters even when only considering a small sentence-size window.\nTherefore,
the finer-grained sentence-level judgments allow a unigram representation to succeed, but only when performed in a small window, behaving as an n-gram representation for all practical purposes.\n\nTable 4: Significance results for n-grams versus unigrams for document detection using document-level and sentence-level classifiers.\nWhen the F1 result is statistically significant, it is shown in bold.\nWhen the accuracy result is significant, it is shown with a †.\n\n| Classifier | Document Winner | Sentence Winner |\n| kNN | Ngram | Ngram |\n| naïve Bayes | Unigram | Ngram |\n| SVM | Ngram† | Ngram |\n| Voted Perceptron | Ngram† | Ngram |\n\nTable 5: Significance results for sentence-level classifiers vs. document-level classifiers for the document detection problem.\nWhen the result is statistically significant, it is shown in bold.\n\n| Classifier | F1 Winner | Accuracy Winner |\n| kNN | Sentence | Sentence |\n| naïve Bayes | Sentence | Sentence |\n| SVM | Sentence | Sentence |\n| Voted Perceptron | Sentence | Document |\n\nFurther highlighting the improvement from finer-grained judgments and n-grams, Figure 3 graphically depicts the edge the SVM sentence-level classifier has over the standard bag-of-words approach with a precision-recall curve.\nIn the high-precision area of the graph, the consistent edge of the sentence-level classifier is rather impressive, continuing at precision 1 out to 0.1 recall.\nThis means that a tenth of the user's action-items would be placed at the top of their action-item-sorted inbox.\nAdditionally, the large separation at the top right of the curves corresponds to the area where the optimal F1 occurs for each classifier, agreeing with the large improvement from 0.6904 to 0.7682 in F1 score.\nConsidering the relatively unexplored nature of classification at the sentence-level, this gives great hope for further increases in performance.\n\nTable 6: Performance of the Sentence-Level Classifiers at Sentence Detection\n\n| Classifier | Accuracy Unigram | Accuracy Ngram | F1 Unigram | F1 Ngram |\n| kNN | 0.9519 | 0.9536 | 0.6540 | 0.6686 |\n| naïve Bayes | 0.9419 | 0.9550 | 0.6176 | 0.6676 |\n| SVM | 0.9559 | 0.9579 | 0.6271 | 0.6672 |\n| Voted Perceptron | 0.8895 | 0.9247 | 0.3744 | 0.5164 |\n\nAlthough Cohen et al. [5] observed that the Voted Perceptron with a single training iteration outperformed the SVM in a set of similar tasks, we see no such behavior here.\nThis further strengthens the evidence that an alternate classifier with the bag-of-words representation could not reach the same level of performance.\nThe Voted Perceptron classifier does improve when the number of training iterations is increased, but it remains lower than the SVM classifier.\nSentence detection results are presented in Table 6.\nWith regard to the sentence detection problem, we note that the F1 measure gives a better feel for the remaining room for improvement in this difficult problem.\nThat is, unlike document detection, where action-item documents are fairly common, action-item sentences are very rare.\nThus, as in other text problems, the accuracy numbers are deceptively high simply because of the default accuracy attainable by always predicting no.\nAlthough the results here are significantly above random, it is unclear what level of performance is necessary for sentence detection to be useful in and of itself and not simply as a means to document ranking and classification.\n\n[Figure 4: Users find action-items quicker when assisted by a classification system.]\n\nFinally, when considering a new type of classification task, one of the most basic questions is whether an accurate classifier built for the task can have an impact on the end-user.\nIn order to demonstrate the impact this task can have on e-mail users, we conducted a user study using an earlier, less-accurate version of the sentence classifier, in which a three-sentence windowed approach was used instead of a single sentence.\nThere were three distinct sets of e-mail in which users had to find action-items.\nThese sets were either presented in a random order (Unordered), ordered by the classifier
(Ordered), or ordered by the classifier and with the center sentence in the highest-confidence window highlighted (Order+help).\n\n[Figure 3: Action-Item Detection SVM Performance (Post Model Selection): precision-recall curves for Document Unigram and Sentence Ngram.\nBoth n-grams and a small prediction window lead to consistent improvements over the standard approach.]\n\nIn order to perform fair comparisons between conditions, the overall number of tokens in each message set should be approximately equal; that is, the cognitive reading load should be approximately the same before the classifier's reordering.\nAdditionally, users typically show practice effects by improving at the overall task and thus performing better at later message sets.\nThis is typically handled by varying the ordering of the sets across users so that the means are comparable.\nWhile omitting further detail, we note the sets were balanced for the total number of tokens and a Latin square design was used to balance practice effects.\nFigure 4 shows that at intervals of 5, 10, and 15 minutes, users consistently found significantly more action-items when assisted by the classifier, and were most critically aided in the first five minutes.\nAlthough the classifier consistently aids the users, we did not gain an additional end-user impact by highlighting.\nAs mentioned above, this might be a result of the large room for improvement that still exists for sentence detection, but anecdotal evidence suggests it might also be a result of how the information is presented to the user rather than the accuracy of sentence detection.\nFor example, highlighting the wrong sentence near an actual action-item hurts the user's trust, but if a vague indicator (e.g., an arrow) points to the approximate area, the user is not aware of the near-miss.\nSince the user studies used a three-sentence window, we believe this played a role as well as sentence detection
accuracy.\n4.6 Discussion In contrast to problems where n-grams have yielded little difference, we believe their power here stems from the fact that many of the meaningful n-grams for action-items consist of common words, e.g., let me know.\nTherefore, the document-level unigram approach cannot gain much leverage, even when modeling their joint probability correctly, since these words will often co-occur in the document but not necessarily in a phrase.\nAdditionally, action-item detection is distinct from many text classification tasks in that a single sentence can change the class label of the document.\nAs a result, good classifiers cannot rely on aggregating evidence from a large number of weak indicators across the entire document.\nEven though we discarded the header information, examining the top-ranked features at the document-level reveals that many of the features are names or parts of e-mail addresses that occurred in the body and are highly associated with e-mails that tend to contain many or no action-items.\nA few examples are terms such as org, bob, and gov. 
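A minimal sketch of the bag-of-n-grams representation from Section 3.2.1 makes this concrete (the helper name is ours, not the paper's): every n-gram up to size 4 is emitted, along with each smaller n-gram it contains, so a phrase of individually common words such as "let me know" becomes a single higher-order feature.

```python
def ngram_features(tokens, max_n=4):
    """Emit all n-grams up to max_n, including every smaller
    n-gram contained within a longer one (per Section 3.2.1)."""
    feats = []
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            feats.append(" ".join(tokens[i:i + n]))
    return feats

sent = "please let me know by friday".split()
feats = ngram_features(sent)
# The trigram "let me know" appears as one feature alongside
# its constituent unigrams and bigrams.
print("let me know" in feats)  # True
```

At the document level, "let", "me", and "know" each fire on many unrelated e-mails; only the n-gram within a small window ties them together into a single indicative feature.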
We note that these features will be sensitive to the particular distribution (senders/receivers), and thus the document-level approach may produce classifiers that transfer less readily to alternate contexts and users at different institutions.\nThis points out that part of the problem of going beyond bag-of-words may be the methodology, and investigating such properties as learning curves and how well a model transfers may highlight differences in models which appear to have similar performance when tested on the distributions they were trained on.\nWe are currently investigating whether the sentence-level classifiers do perform better over different test corpora without retraining.\n5.\nFUTURE WORK While applying text classifiers at the document-level is fairly well-understood, there exists the potential for significantly increasing the performance of the sentence-level classifiers.\nSuch methods include alternate ways of combining the predictions over each sentence; weightings other than tfidf, which may not be appropriate since sentences are small; better sentence segmentation; and other types of phrasal analysis.\nAdditionally, named entity tagging, time expressions, etc., seem likely candidates for features that can further improve this task.\nWe are currently pursuing some of these avenues to see what additional gains they offer.\nFinally, it would be interesting to investigate the best methods for combining the document-level and sentence-level classifiers.\nSince the simple bag-of-words representation at the document-level leads to a learned model that behaves somewhat like a context-specific prior dependent on the sender/receiver and general topic, a first choice would be to treat it as such when combining probability estimates with the sentence-level classifier.\nSuch a model might serve as a general example for other problems where bag-of-words can establish a baseline model but richer approaches are needed to achieve performance beyond that
baseline.\n6.\nSUMMARY AND CONCLUSIONS The effectiveness of sentence-level detection argues that labeling at the sentence-level provides significant value.\nFurther experiments are needed to see how this interacts with the amount of training data available.\nSentence detection that is then agglomerated into document-level detection works surprisingly well, given the low recall that would be expected for sentence-level items.\nThis, in turn, indicates that improved sentence segmentation methods could yield further improvements in classification.\nIn this work, we examined how action-items can be effectively detected in e-mails.\nOur empirical analysis has demonstrated that n-grams are of key importance to making the most of document-level judgments.\nWhen finer-grained judgments are available, a standard bag-of-words approach using a small (sentence) window size and automatic segmentation techniques can produce results almost as good as the n-gram based approaches.\nAcknowledgments This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. NBCHD030010.\nAny opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA) or the Department of Interior National Business Center (DOI-NBC).\nWe would like to extend our sincerest thanks to Jill Lehman, whose efforts in data collection were essential in constructing the corpus, and both Jill and Aaron Steinfeld for their direction of the HCI experiments.\nWe would also like to thank Django Wexler for constructing and supporting the corpus labeling tools and Curtis Huttenhower for his support of the text preprocessing package.\nFinally, we gratefully acknowledge Scott Fahlman for his encouragement and useful discussions on this topic.\n7.\nREFERENCES [1] J. Allan, J. Carbonell, G. Doddington, J. Yamron, and Y.
Yang.\nTopic detection and tracking pilot study: Final report.\nIn Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, Washington, D.C., 1998.\n[2] C. Apte, F. Damerau, and S. M. Weiss.\nAutomated learning of decision rules for text categorization.\nACM Transactions on Information Systems, 12(3):233-251, July 1994.\n[3] J. Carletta.\nAssessing agreement on classification tasks: The kappa statistic.\nComputational Linguistics, 22(2):249-254, 1996.\n[4] J. Carroll.\nHigh precision extraction of grammatical relations.\nIn Proceedings of the 19th International Conference on Computational Linguistics (COLING), pages 134-140, 2002.\n[5] W. W. Cohen, V. R. Carvalho, and T. M. Mitchell.\nLearning to classify email into speech acts.\nIn EMNLP-2004 (Conference on Empirical Methods in Natural Language Processing), pages 309-316, 2004.\n[6] S. Corston-Oliver, E. Ringger, M. Gamon, and R. Campbell.\nTask-focused summarization of email.\nIn Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 43-50, 2004.\n[7] A. Culotta, R. Bekkerman, and A. McCallum.\nExtracting social networks and contact information from email and the web.\nIn CEAS-2004 (Conference on Email and Anti-Spam), Mountain View, CA, July 2004.\n[8] L. Devroye, L. Györfi, and G. Lugosi.\nA Probabilistic Theory of Pattern Recognition.\nSpringer-Verlag, New York, NY, 1996.\n[9] S. T. Dumais, J. Platt, D. Heckerman, and M. Sahami.\nInductive learning algorithms and representations for text categorization.\nIn CIKM '98, Proceedings of the 7th ACM Conference on Information and Knowledge Management, pages 148-155, 1998.\n[10] Y. Freund and R. Schapire.\nLarge margin classification using the perceptron algorithm.\nMachine Learning, 37(3):277-296, 1999.\n[11] T. Joachims.\nMaking large-scale SVM learning practical.\nIn B. Schölkopf, C. J. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 41-56.\nMIT Press, 1999.\n[12] L. S. Larkey.\nA patent search and classification system.\nIn Proceedings of the Fourth ACM Conference on Digital Libraries, pages 179-187, 1999.\n[13] D. D. Lewis.\nAn evaluation of phrasal and clustered representations on a text categorization task.\nIn SIGIR '92, Proceedings of the 15th Annual International ACM Conference on Research and Development in Information Retrieval, pages 37-50, 1992.\n[14] Y. Liu, J. Carbonell, and R. Jin.\nA pairwise ensemble approach for accurate genre classification.\nIn Proceedings of the European Conference on Machine Learning (ECML), 2003.\n[15] Y. Liu, R. Yan, R. Jin, and J. Carbonell.\nA comparison study of kernels for multi-label text classification using category association.\nIn The Twenty-first International Conference on Machine Learning (ICML), 2004.\n[16] A. McCallum and K. Nigam.\nA comparison of event models for naive Bayes text classification.\nIn Working Notes of AAAI '98 (The 15th National Conference on Artificial Intelligence), Workshop on Learning for Text Categorization, pages 41-48, 1998.\nTR WS-98-05.\n[17] F. Sebastiani.\nMachine learning in automated text categorization.\nACM Computing Surveys, 34(1):1-47, March 2002.\n[18] C. J. van Rijsbergen.\nInformation Retrieval.\nButterworths, London, 1979.\n[19] Y. Yang.\nAn evaluation of statistical approaches to text categorization.\nInformation Retrieval, 1(1/2):67-88, 1999.\n[20] Y. Yang, J. Carbonell, R. Brown, T. Pierce, B. T. Archibald, and X. Liu.\nLearning approaches to topic detection and tracking.\nIEEE EXPERT, Special Issue on Applications of Intelligent Information Retrieval, 1999.\n[21] Y. Yang and X. Liu.\nA re-examination of text categorization methods.\nIn SIGIR '99, Proceedings of the 22nd Annual International ACM Conference on Research and Development in Information Retrieval, pages 42-49, 1999.\n[22] Y. Yang, J. Zhang, J.
Carbonell, and C. Jin. Topic-conditioned novelty detection. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, July 2002.

Feature Representation for Effective Action-Item Detection

ABSTRACT

E-mail users face an ever-growing challenge in managing their inboxes due to the growing centrality of email in the workplace for task assignment, action requests, and other roles beyond information dissemination. Whereas Information Retrieval and Machine Learning techniques are gaining initial acceptance in spam filtering and automated folder assignment, this paper reports on a new task: automated action-item detection, in order to flag emails that require responses, and to highlight the specific passage(s) indicating the request(s) for action. Unlike standard topic-driven text classification, action-item detection requires inferring the sender's intent, and as such responds less well to pure bag-of-words classification. However, enriched feature sets such as n-grams (up to n =
4) with chi-squared feature selection, and contextual cues for action-item location improve performance by up to 10% over unigrams, in both cases using state-of-the-art classifiers such as SVMs with automated model selection via embedded cross-validation.

1. INTRODUCTION

E-mail users are facing an increasingly difficult task of managing their inboxes in the face of mounting challenges that result from rising e-mail usage. This includes prioritizing e-mails over a range of sources from business partners to family members, filtering and reducing junk e-mail, and quickly managing requests that demand the receiver's attention or action.

Figure 1: An E-mail with emphasized Action-Item, an explicit request that requires the recipient's attention or action.

Automated action-item detection targets the third of these problems by attempting to detect which e-mails require an action or response with information and, within those e-mails, attempting to highlight the sentence (or other passage length) that directly indicates the action request. Such a detection system can be used as one part of an e-mail agent that would assist a user in processing important e-mails more quickly than would have been possible without the agent. We view action-item detection as one necessary component of a successful e-mail agent, which would perform spam detection, action-item detection, topic classification, and priority ranking, among other functions. The utility of such a detector can manifest as a method of prioritizing e-mails according to task-oriented criteria other than the standard ones of topic and sender, or as a means of ensuring that the e-mail user hasn't dropped the proverbial ball by forgetting to address an action request.

Action-item detection differs from standard text classification in two important ways. First, the user is interested both in detecting whether an email contains action items and in locating exactly where these action item requests are contained within the
email body. In contrast, standard text categorization merely assigns a topic label to each text, whether that label corresponds to an e-mail folder or a controlled indexing vocabulary [12, 15, 22]. Second, action-item detection attempts to recover the email sender's intent: whether she means to elicit response or action on the part of the receiver. Note that for this task, classifiers using only unigrams as features do not perform optimally, as evidenced in our results below. Instead we find that we need more information-laden features such as higher-order n-grams. Text categorization by topic, on the other hand, works very well using just individual words as features [2, 9, 13, 17]. In fact, genre classification, which one would think may require more than a bag-of-words approach, also works quite well using just unigram features [14]. Topic detection and tracking (TDT) also works well with unigram feature sets [1, 20]. We believe that action-item detection is one of the first clear instances of an IR-related task where we must move beyond bag-of-words to achieve high performance, albeit not too far, as a bag of n-grams seems to suffice.

We first review related work for similar text classification problems such as e-mail priority ranking and speech act identification. Then we more formally define the action-item detection problem, discuss the aspects that distinguish it from more common problems like topic classification, and highlight the challenges in constructing systems that can perform well at the sentence and document level. From there, we move to a discussion of feature representation and selection techniques appropriate for this problem and of how standard text classification approaches can be adapted to move smoothly from the sentence-level detection problem to the document-level classification problem. We then conduct an empirical analysis that helps us determine the effectiveness of our feature extraction procedures as well as establish baselines
for a number of classification algorithms on this task. Finally, we summarize this paper's contributions and consider interesting directions for future work.

2. RELATED WORK

Several other researchers have considered very similar text classification tasks. Cohen et al. [5] describe an ontology of "speech acts", such as "Propose a Meeting", and attempt to predict when an e-mail contains one of these speech acts. We consider action-items to be an important specific type of speech act that falls within their more general classification. While they provide results for several classification methods, their methods only make use of human judgments at the document-level. In contrast, we consider whether accuracy can be increased by using finer-grained human judgments that mark the specific sentences and phrases of interest.

Corston-Oliver et al. [6] consider detecting items in e-mail to "Put on a To-Do List". This classification task is very similar to ours except that they do not consider "simple factual questions" to belong to this category. We include questions, but note that not all questions are action-items; some are rhetorical or simply social convention ("How are you?"). From a learning perspective, while they make use of judgments at the sentence-level, they do not explicitly compare what benefits, if any, finer-grained judgments offer. Additionally, they do not study alternative choices or approaches to the classification task. Instead, they simply apply a standard SVM at the sentence-level and focus primarily on a linguistic analysis of how the sentence can be logically reformulated before adding it to the task list. In this study, we examine several alternative classification methods, compare document-level and sentence-level approaches, and analyze the machine learning issues implicit in these problems.

Interest in a variety of learning tasks related to e-mail has been rapidly growing in the recent literature. For example, in a forum
dedicated to e-mail learning tasks, Culotta et al. [7] presented methods for learning social networks from e-mail. In this work, we do not focus on peer relationships; however, such methods could complement those here since peer relationships often influence word choice when requesting an action.

3. PROBLEM DEFINITION & APPROACH

In contrast to previous work, we explicitly focus on the benefits that finer-grained, more costly, sentence-level human judgments offer over coarse-grained document-level judgments. Additionally, we consider multiple standard text classification approaches and analyze both the quantitative and qualitative differences that arise from taking a document-level vs. a sentence-level approach to classification. Finally, we focus on the representation necessary to achieve the most competitive performance.

3.1 Problem Definition

In order to provide the most benefit to the user, a system would not only detect the document but would also indicate the specific sentences in the e-mail which contain the action-items. Therefore, there are three basic problems:

1. Document detection: Classify a document as to whether or not it contains an action-item.
2. Document ranking: Rank the documents such that all documents containing action-items occur as high as possible in the ranking.
3. Sentence detection: Classify each sentence in a document as to whether or not it is an action-item.

As in most Information Retrieval tasks, the weight the evaluation metric should give to precision and recall depends on the nature of the application. In situations where a user will eventually read all received messages, ranking (e.g., via precision at recall of 1) may be most important since this will help encourage shorter delays in communications between users. In contrast, high-precision detection at low recall will be of increasing importance when the user is under severe time-pressure and therefore will likely not read all mail. This can be the case
for crisis managers during disaster management. Finally, sentence detection plays a role both in time-pressure situations and simply in reducing the time the user needs to gist the message.

3.2 Approach

As mentioned above, the labeled data can come in one of two forms: a document-labeling provides a yes/no label for each document as to whether it contains an action-item; a phrase-labeling provides only a yes label for the specific items of interest. We term the human judgments a phrase-labeling since the user's view of the action-item may not correspond with actual or predicted sentence boundaries. Obviously, it is straightforward to generate a document-labeling consistent with a phrase-labeling by labeling a document "yes" if and only if it contains at least one phrase labeled "yes".

To train classifiers for this task, we can take several viewpoints related to both the basic problems we have enumerated and the form of the labeled data. The document-level view treats each e-mail as a learning instance with an associated class-label. Then, the document can be converted to a feature-value vector and learning progresses as usual. Applying a document-level classifier to document detection and ranking is straightforward. In order to apply it to sentence detection, one must take additional steps. For example, if the classifier predicts that a document contains an action-item, then areas of the document that contain a high concentration of words which the model weights heavily in favor of action-items can be indicated. The obvious benefit of the document-level approach is that training-set collection costs are lower, since the user only has to specify whether or not an e-mail contains an action-item and not the specific sentences.

In the sentence-level view, each e-mail is automatically segmented into sentences, and each sentence is treated as a learning instance with an associated class-label. Since the phrase-labeling provided by the
user may not coincide with the automatic segmentation, we must determine what label to assign a partially overlapping sentence when converting it to a learning instance. Once trained, applying the resulting classifiers to sentence detection is straightforward, but in order to apply the classifiers to document detection and document ranking, the individual predictions over each sentence must be aggregated into a document-level prediction. This approach has the potential to benefit from more specific labels that enable the learner to focus attention on the key sentences, instead of having to learn from data in which the majority of the words in the e-mail provide little or no information about class membership.

3.2.1 Features

Consider some of the phrases that might constitute part of an action item: "would like to know", "let me know", "as soon as possible", "have you". Each of these phrases consists of common words that occur in many e-mails. However, when they occur in the same sentence, they are far more indicative of an action-item. Additionally, order can be important: consider "have you" versus "you have". Because of this, we posit that n-grams play a larger role in this problem than is typical of problems like topic classification. Therefore, we consider all n-grams up to size 4. When using n-grams, if we find an n-gram of size 4 in a segment of text, we can represent the text as just one occurrence of the n-gram, or as one occurrence of the n-gram plus an occurrence of each smaller n-gram contained by it. We choose the second of these alternatives since this allows the algorithm itself to smoothly back off in terms of recall. Methods such as naïve Bayes may be hurt by such a representation because of double-counting. Since sentence-ending punctuation can provide information, we retain the terminating punctuation token when it is identifiable. Additionally, we add a beginning-of-sentence and end-of-sentence
token in order to capture patterns that are often indicators at the beginning or end of a sentence. Assuming proper punctuation, these extra tokens are unnecessary, but e-mail often lacks proper punctuation. In addition, for the sentence-level classifiers that use n-grams, we code for each sentence a binary encoding of the position of the sentence relative to the document. This encoding has eight associated features that represent which octile (the first eighth, second eighth, etc.) contains the sentence.

3.2.2 Implementation Details

In order to compare the document-level to the sentence-level approach, we compare predictions at the document-level. We do not address how to use a document-level classifier to make predictions at the sentence-level. In order to automatically segment the text of the e-mail, we use the RASP statistical parser [4]. Since the automatically segmented sentences may not correspond directly with the phrase-level boundaries, we treat any sentence that contains at least 30% of a marked action-item segment as an action-item. When evaluating sentence detection for the sentence-level system, we use these class labels as ground truth. Since we are not evaluating multiple segmentation approaches, this does not bias any of the methods. If multiple segmentation systems were under evaluation, one would need to use a metric that matched predicted positive sentences to phrases labeled positive. The metric would need to punish overly long predictions as well as too-short predictions. Our criterion for converting to labeled instances implicitly includes both requirements. Since the segmentation is fixed, an overly long prediction would be predicting "yes" for many "no" instances, since presumably the extra length corresponds to additional segmented sentences, none of which contains 30% of an action-item. Likewise, a too-short prediction must correspond to a small sentence included in the action-item but not constituting all of
the action-item. Therefore, in order to consider the prediction to be too short, there will be an additional preceding/following sentence that is an action-item where we incorrectly predicted "no".

Once a sentence-level classifier has made a prediction for each sentence, we must combine these predictions to make both a document-level prediction and a document-level score. We use the simple policy of predicting positive when any of the sentences is predicted positive. In order to produce a document score for ranking, the confidence that document d contains an action-item is:

\[
\mathrm{conf}(d) =
\begin{cases}
\frac{1}{n(d)} \sum_{s \in d} \pi(s)\,\psi(s), & \text{if } \pi(s) = 1 \text{ for some } s \in d \\
\frac{1}{n(d)} \max_{s \in d} \psi(s), & \text{otherwise}
\end{cases}
\]

where s is a sentence in document d, π is the classifier's 1/0 prediction, ψ is the score the classifier assigns as its confidence that π(s) = 1, and n(d) is the greater of 1 and the number of (unigram) tokens in the document. In other words, when any sentence is predicted positive, the document score is the length-normalized sum of the sentence scores above threshold. When no sentence is predicted positive, the document score is the maximum sentence score normalized by length. As in other text problems, we are more likely to emit false positives for documents with more words or sentences; thus we include a length normalization factor.

4. EXPERIMENTAL ANALYSIS

4.1 The Data

Our corpus consists of e-mails obtained from volunteers at an educational institution and covers subjects such as organizing a research workshop, arranging for job-candidate interviews, publishing proceedings, and talk announcements. The messages were anonymized by replacing the names of each individual and institution with a pseudonym.¹ After attempting to identify and eliminate duplicate e-mails, the corpus contains 744 e-mail messages. After identity anonymization, the corpus has three basic versions. Quoted material refers to the text of a previous e-mail that an author often leaves in an e-mail message when responding to it. Quoted material can act as noise when learning
since it may include action-items from previous messages that are no longer relevant.\nTo isolate the effects of quoted material, we have three versions of the corpus.\nThe raw form contains the basic messages.\nThe auto-stripped version contains the messages after quoted material has been automatically removed.\nThe hand-stripped version contains the messages after quoted material has been removed by a human.\nAdditionally, the hand-stripped version has had any XML content and e-mail signatures removed--leaving only the essential content of the message.\nThe studies reported here are performed with the hand-stripped version.\nThis allows us to balance the cognitive load in terms of number of tokens that must be read in the user-studies we report--including quoted material would complicate the user studies since some users might skip the material while others read it.\nAdditionally, ensuring all quoted material is removed prevents tainting the cross-validation, since otherwise a test item could occur as quoted material in a training document.\n1 We have an even more highly anonymized version of the corpus that can be made available for some outside experimentation.\nPlease contact the authors for more information on obtaining this data.\n4.1.1 Data Labeling\nTwo human annotators labeled each message as to whether or not it contained an action-item.\nIn addition, they identified each segment of the e-mail which contained an action-item.\nA segment is a contiguous section of text selected by the human annotators and may span several sentences or a complete phrase contained in a sentence.\nThey were instructed that an action item is \"an explicit request for information that requires the recipient's attention or a required action\" and told to \"highlight the phrases or sentences that make up the request\".\nTable 1: Agreement of Human Annotators at Document Level\nAnnotator One labeled 324 messages as containing action items.\nAnnotator Two labeled 327 messages as
containing action items.\nThe agreement of the human annotators is shown in Tables 1 and 2.\nThe annotators are said to agree at the document-level when both marked the same document as containing no action-items or both marked at least one action-item, regardless of whether the text segments were the same.\nAt the document-level, the annotators agreed 93% of the time.\nThe kappa statistic [3, 5] is often used to evaluate inter-annotator agreement:\nκ = (A − R) \/ (1 − R),\nwhere A is the empirical estimate of the probability of agreement and R is the empirical estimate of the probability of random agreement given the empirical class priors.\nA value close to −1 implies the annotators agree far less often than would be expected randomly, while a value close to 1 means they agree more often than randomly expected.\nAt the document-level, the kappa statistic for inter-annotator agreement is 0.85.\nThis value is both strong enough to expect the problem to be learnable and is comparable with results for similar tasks [5, 6].\nIn order to determine the sentence-level agreement, we use each judgment to create a sentence-corpus with labels as described in Section 3.2.2, then consider the agreement over these sentences.\nThis allows us to compare agreement over \"no\" judgments.\nWe perform this comparison over the hand-stripped corpus since that eliminates spurious \"no\" judgments that would come from including quoted material, etc.
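The kappa computation just described can be made concrete. The following is our own minimal sketch (not code from the paper), implementing κ = (A − R) \/ (1 − R) from a 2×2 agreement table; the counts below are illustrative only and do not reproduce the paper's Table 1:

```python
def kappa(table):
    """Cohen's kappa from a 2x2 agreement table.

    table[i][j] = number of items annotator one labeled i and annotator
    two labeled j (0 = "no action-item", 1 = "action-item").
    """
    n = sum(sum(row) for row in table)
    # A: empirical probability of agreement (the diagonal mass).
    a = (table[0][0] + table[1][1]) / n
    # R: chance agreement given each annotator's empirical class priors.
    p1 = (table[1][0] + table[1][1]) / n   # annotator one's positive prior
    p2 = (table[0][1] + table[1][1]) / n   # annotator two's positive prior
    r = p1 * p2 + (1 - p1) * (1 - p2)
    return (a - r) / (1 - r)

# Illustrative counts: 90% raw agreement, but much of it expected by chance.
print(round(kappa([[70, 5], [5, 20]]), 4))  # → 0.7333
```

Note that raw agreement of 0.9 here yields a kappa of only about 0.73, which is why the paper reports kappa alongside percent agreement.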
Both annotators were free to label the subject as an action-item, but since neither did, we omit the subject line of the message as well.\nThis only reduces the number of \"no\" agreements.\nThis leaves 6301 automatically segmented sentences.\nAt the sentence-level, the annotators agreed 98% of the time, and the kappa statistic for inter-annotator agreement is 0.82.\nIn order to produce one single set of judgments, the human annotators went through each annotation where there was disagreement and came to a consensus opinion.\nThe annotators did not collect statistics during this process but anecdotally reported that the majority of disagreements were either cases of clear annotator oversight or different interpretations of conditional statements.\nTable 2: Agreement of Human Annotators at Sentence Level\nFor example, \"If you would like to keep your job, come to tomorrow's meeting\" implies a required action, whereas \"If you would like to join the football betting pool, come to tomorrow's meeting\" does not.\nThe first would be an action-item in most contexts while the second would not.\nOf course, many conditional statements are not so clearly interpretable.\nAfter reconciling the judgments there are 416 e-mails with no action-items and 328 e-mails containing action-items.\nOf the 328 e-mails containing action-items, 259 messages have one action-item segment; 55 messages have two action-item segments; 11 messages have three action-item segments.\nTwo messages have four action-item segments, and one message has six action-item segments.\nComputing the sentence-level agreement using the reconciled \"gold standard\" judgments with each of the annotators' individual judgments gives a kappa of 0.89 for Annotator One and a kappa of 0.92 for Annotator Two.\nIn terms of message characteristics, there were on average 132 content tokens in the body after stripping.\nFor action-item messages, the average was 115.\nHowever, by examining Figure 2 we see the length distributions are
nearly identical.\nAs would be expected for e-mail, it is a long-tailed distribution with about half the messages having more than 60 tokens in the body (this paragraph has 65 tokens).\n4.2 Classifiers\nFor this experiment, we have selected a variety of standard text classification algorithms.\nIn selecting algorithms, we have chosen algorithms that are not only known to work well but which differ along such lines as discriminative vs. generative and lazy vs. eager.\nWe have done this in order to provide both a competitive and thorough sampling of learning methods for the task at hand.\nThis is important since it is easy to improve on a strawman classifier by introducing a new representation.\nBy thoroughly sampling alternative classifier choices we demonstrate that representation improvements over bag-of-words are not due to using the information in the bag-of-words poorly.\n4.2.1 kNN\nWe employ a standard variant of the k-nearest neighbor algorithm used in text classification, kNN with s-cut score thresholding [19].\nWe use a tfidf-weighting of the terms with a distance-weighted vote of the neighbors to compute the score before thresholding it.\nIn order to choose the value of s for thresholding, we perform leave-one-out cross-validation over the training set.\nThe value of k is set to 2([log2 N] + 1) where N is the number of training points.\nThis rule for choosing k is theoretically motivated by results which show such a rule converges to the optimal classifier as the number of training points increases [8].\nIn practice, we have also found it to be a computational convenience that frequently leads to results comparable to numerically optimizing k via a cross-validation procedure.\n4.2.2 Naïve Bayes\nWe use a standard multinomial naïve Bayes classifier [16].\nIn using this classifier, we smoothed word and class probabilities using a Bayesian estimate (with the word prior) and a Laplace m-estimate, respectively.\nFigure 2: The
Histogram (left) and Distribution (right) of Message Length.\nA bin size of 20 words was used.\nOnly tokens in the body after hand-stripping were counted.\nAfter stripping, the majority of words left are usually actual message content.\nTable 3: Average Document-Detection Performance during Cross-Validation for Each Method and the Sample Standard Deviation (Sn−1) in italics.\nThe best performance for each classifier is shown in bold.\n4.2.3 SVM\nWe have used a linear SVM with a tfidf feature representation and L2-norm as implemented in the SVMlight package v6.01 [11].\nAll default settings were used.\n4.2.4 Voted Perceptron\nLike the SVM, the Voted Perceptron is a kernel-based learning method.\nWe use the same feature representation and kernel as we have for the SVM, a linear kernel with tfidf-weighting and an L2-norm.\nThe voted perceptron is an online-learning method that keeps a history of past perceptrons used, as well as a weight signifying how often that perceptron was correct.\nWith each new training example, a correct classification increases the weight on the current perceptron and an incorrect classification updates the perceptron.\nThe output of the classifier uses the weights on the perceptrons to make a final \"voted\" classification.\nWhen used in an offline manner, multiple passes can be made through the training data.\nBoth the voted perceptron and the SVM give a solution from the same hypothesis space--in this case, a linear classifier.\nFurthermore, it is well-known that the Voted Perceptron increases the margin of the solution after each pass through the training data [10].\nSince Cohen et al.
[5] obtain worse results using an SVM than a Voted Perceptron with one training iteration, they conclude that the best solution for detecting speech acts may not lie in an area with a large margin.\nBecause their tasks are highly similar to ours, we employ both classifiers to ensure we are not overlooking a competitive alternative classifier to the SVM for the basic bag-of-words representation.\n4.3 Performance Measures\nTo compare the performance of the classification methods, we look at two standard performance measures, F1 and accuracy.\nThe F1 measure [18, 21] is the harmonic mean of precision and recall, where Precision = Correct Positives \/ Predicted Positives and Recall = Correct Positives \/ Actual Positives.\n4.4 Experimental Methodology\nWe perform standard 10-fold cross-validation on the set of documents.\nFor the sentence-level approach, all sentences in a document are either entirely in the training set or entirely in the test set for each fold.\nFor significance tests, we use a two-tailed t-test [21] to compare the values obtained during each cross-validation fold with a p-value of 0.05.\nFeature selection was performed using the chi-squared statistic.\nDifferent levels of feature selection were considered for each classifier.\nEach of the following numbers of features was tried: 10, 25, 50, 100, 250, 750, 1000, 2000, 4000.\nThere are approximately 4700 unigram tokens without feature selection.\nIn order to choose the number of features to use for each classifier, we perform nested cross-validation and choose the settings that yield the optimal document-level F1 for that classifier.\nFor this study, only the body of each e-mail message was used.\nFeature selection is always applied to all candidate features.\nThat is, for the n-gram representation, the n-grams and position features are also subject to removal by the feature selection method.\n4.5 Results\nThe results for document-level classification are given in Table 3.\nThe primary hypothesis we are
concerned with is that n-grams are critical for this task; if this is true, we expect to see a significant gap in performance between the document-level classifiers that use n-grams (denoted Document Ngram) and those using only unigram features (denoted Document Unigram).\nExamining Table 3, we observe that this is indeed the case for every classifier except naïve Bayes.\nThis difference in performance produced by the n-gram representation is statistically significant for each classifier except for naïve Bayes and the accuracy metric for kNN (see Table 4).\nNaïve Bayes' poor performance with the n-gram representation is not surprising since the bag-of-n-grams causes excessive double-counting, as mentioned in Section 3.2.1; however, naïve Bayes is not hurt at the sentence-level because the sparse examples provide few chances for agglomerative effects of double counting.\nIn either case, when a language-modeling approach is desired, modeling the n-grams directly would be preferable to naïve Bayes.\nMore importantly for the n-gram hypothesis, the n-grams lead to the best document-level classifier performance as well.\nAs would be expected, the difference between the sentence-level n-gram representation and unigram representation is small.\nThis is because the window of text is so small that the unigram representation, when done at the sentence-level, implicitly picks up on the power of the n-grams.\nFurther improvement would signify that the order of the words matters even when only considering a small sentence-size window.\nTherefore, the finer-grained sentence-level judgments allow a unigram representation to succeed, but only when performed in a small window--behaving as an n-gram representation for all practical purposes.\nTable 4: Significance results for n-grams versus unigrams for document detection using document-level and sentence-level classifiers.\nWhen the F1 result is statistically significant,
it is shown in bold.\nWhen the accuracy result is significant, it is shown with a †.\nTable 5: Significance results for sentence-level classifiers vs. document-level classifiers for the document detection problem.\nWhen the result is statistically significant, it is shown in bold.\nFurther highlighting the improvement from finer-grained judgments and n-grams, Figure 3 graphically depicts the edge the SVM sentence-level classifier has over the standard bag-of-words approach with a precision-recall curve.\nIn the high-precision area of the graph, the consistent edge of the sentence-level classifier is rather impressive--continuing at precision 1 out to 0.1 recall.\nThis would mean that a tenth of the user's action-items would be placed at the top of their action-item-sorted inbox.\nAdditionally, the large separation at the top right of the curves corresponds to the area where the optimal F1 occurs for each classifier, agreeing with the large improvement from 0.6904 to 0.7682 in F1 score.\nConsidering the relatively unexplored nature of classification at the sentence-level, this gives great hope for further increases in performance.\nTable 6: Performance of the Sentence-Level Classifiers at Sentence Detection\nAlthough Cohen et al.
[5] observed that the Voted Perceptron with a single training iteration outperformed SVM in a set of similar tasks, we see no such behavior here.\nThis further strengthens the evidence that an alternate classifier with the bag-of-words representation could not reach the same level of performance.\nThe Voted Perceptron classifier does improve when the number of training iterations is increased, but it is still lower than the SVM classifier.\nSentence detection results are presented in Table 6.\nWith regard to the sentence detection problem, we note that the F1 measure gives a better feel for the remaining room for improvement in this difficult problem.\nThat is, unlike document detection, where action-item documents are fairly common, action-item sentences are very rare.\nThus, as in other text problems, the accuracy numbers are deceptively high purely because of the default accuracy attainable by always predicting \"no\".\nAlthough the results here are significantly above random, it is unclear what level of performance is necessary for sentence detection to be useful in and of itself and not simply as a means to document ranking and classification.\nFigure 4: Users find action-items more quickly when assisted by a classification system.\nFinally, when considering a new type of classification task, one of the most basic questions is whether an accurate classifier built for the task can have an impact on the end-user.\nIn order to demonstrate the impact this task can have on e-mail users, we conducted a user study using an earlier, less-accurate version of the sentence classifier--where instead of using just a single sentence, a three-sentence windowed approach was used.\nThere were three distinct sets of e-mail in which users had to find action-items.\nThese sets were either presented in a random order (Unordered), ordered by the classifier (Ordered), or ordered by the classifier and with the center sentence in the highest-confidence window highlighted (Order + help).\nFigure 3: Both n-grams and a small prediction window lead to consistent improvements over the standard approach.\nIn order to perform fair comparisons between conditions, the overall number of tokens in each message set should be approximately equal; that is, the cognitive reading load should be approximately the same before the classifier's reordering.\nAdditionally, users typically show \"practice effects\" by improving at the overall task and thus performing better at later message sets.\nThis is typically handled by varying the ordering of the sets across users so that the means are comparable.\nWhile omitting further detail, we note the sets were balanced for the total number of tokens and a Latin square design was used to balance practice effects.\nFigure 4 shows that at intervals of 5, 10, and 15 minutes, users consistently found significantly more action-items when assisted by the classifier, but were most critically aided in the first five minutes.\nAlthough the classifier consistently aids the users, we did not gain an additional end-user impact by highlighting.\nAs mentioned above, this might be a result of the large room for improvement that still exists for sentence detection, but anecdotal evidence suggests this might also be a result of how the information is presented to the user rather than the accuracy of sentence detection.\nFor example, highlighting the wrong sentence near an actual action-item hurts the user's trust, but if a vague indicator (e.g., an arrow) points to the approximate area, the user is not aware of the near-miss.\nSince the user studies used a three-sentence window, we believe this played a role as well as sentence detection accuracy.\n4.6 Discussion\nIn contrast to problems where n-grams have yielded little difference, we believe their power here stems from the fact that many of the meaningful n-grams for action-items consist of common words, e.g., \"let me know\".\nTherefore, the document-level unigram approach cannot
gain much leverage, even when modeling their joint probability correctly, since these words will often co-occur in the document but not necessarily in a phrase.\nAdditionally, action-item detection is distinct from many text classification tasks in that a single sentence can change the class label of the document.\nAs a result, good classifiers cannot rely on aggregating evidence from a large number of weak indicators across the entire document.\nEven though we discarded the header information, examining the top-ranked features at the document-level reveals that many of the features are names or parts of e-mail addresses that occurred in the body and are highly associated with e-mails that tend to contain many or no action-items.\nA few examples are terms such as \"org\", \"bob\", and \"gov\".\nWe note that these features will be sensitive to the particular distribution (senders\/receivers), and thus the document-level approach may produce classifiers that transfer less readily to alternate contexts and users at different institutions.\nThis points out that part of the problem of going beyond bag-of-words may be the methodology, and investigating such properties as learning curves and how well a model transfers may highlight differences in models which appear to have similar performance when tested on the distributions they were trained on.\nWe are currently investigating whether the sentence-level classifiers do perform better over different test corpora without retraining.\n6.\nSUMMARY AND CONCLUSIONS\nThe effectiveness of sentence-level detection argues that labeling at the sentence-level provides significant value.\nFurther experiments are needed to see how this interacts with the amount of training data available.\nSentence detection that is then agglomerated to document-level detection works surprisingly well, better than would be expected given the low recall of the sentence-level predictions.\nThis, in turn, indicates that improved sentence segmentation methods could yield
further improvements in classification.\nIn this work, we examined how action-items can be effectively detected in e-mails.\nOur empirical analysis has demonstrated that n-grams are of key importance to making the most of documentlevel judgments.\nWhen finer-grained judgments are available, then a standard bag-of-words approach using a small (sentence) window size and automatic segmentation techniques can produce results almost as good as the n-gram based approaches.","keyphrases":["action-item detect","e-mail","inform retriev","topic-driven text classif","text classif","n-gram","chi-squar featur select","featur select","svm","autom model select","embed cross-valid","text categor","genr-classif","e-mail prioriti rank","speech act identif","simpl factual question","document detect","document rank","sentenc detect","sentenc-level classifi","speech act"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M","U","M","U","U","M","U","M","M","U"]} {"id":"H-83","title":"Estimating the Global PageRank of Web Communities","abstract":"Localized search engines are small-scale systems that index a particular community on the web. They offer several benefits over their large-scale counterparts in that they are relatively inexpensive to build, and can provide more precise and complete search capability over their relevant domains. One disadvantage such systems have over large-scale search engines is the lack of global PageRank values. Such information is needed to assess the value of pages in the localized search domain within the context of the web as a whole. In this paper, we present well-motivated algorithms to estimate the global PageRank values of a local domain. The algorithms are all highly scalable in that, given a local domain of size n, they use O(n) resources that include computation time, bandwidth, and storage. We test our methods across a variety of localized domains, including site-specific domains and topic-specific domains. 
We demonstrate that by crawling as few as n or 2n additional pages, our methods can give excellent global PageRank estimates.","lvl-1":"Estimating the Global PageRank of Web Communities Jason V. Davis Dept. of Computer Sciences University of Texas at Austin Austin, TX 78712 jdavis@cs.utexas.edu Inderjit S. Dhillon Dept. of Computer Sciences University of Texas at Austin Austin, TX 78712 inderjit@cs.utexas.edu ABSTRACT Localized search engines are small-scale systems that index a particular community on the web.\nThey offer several benefits over their large-scale counterparts in that they are relatively inexpensive to build, and can provide more precise and complete search capability over their relevant domains.\nOne disadvantage such systems have over large-scale search engines is the lack of global PageRank values.\nSuch information is needed to assess the value of pages in the localized search domain within the context of the web as a whole.\nIn this paper, we present well-motivated algorithms to estimate the global PageRank values of a local domain.\nThe algorithms are all highly scalable in that, given a local domain of size n, they use O(n) resources that include computation time, bandwidth, and storage.\nWe test our methods across a variety of localized domains, including site-specific domains and topic-specific domains.\nWe demonstrate that by crawling as few as n or 2n additional pages, our methods can give excellent global PageRank estimates.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval; G.1.3 [Numerical Analysis]: Numerical Linear Algebra; G.3 [Probability and Statistics]: Markov Processes General Terms PageRank, Markov Chain, Stochastic Complementation 1.\nINTRODUCTION Localized search engines are small-scale search engines that index only a single community of the web.\nSuch communities can be site-specific domains, such as pages within the cs.utexas.edu domain, or topic-related 
communities, for example political websites.\nCompared to the web graph crawled and indexed by large-scale search engines, the size of such local communities is typically orders of magnitude smaller.\nConsequently, the computational resources needed to build such a search engine are correspondingly lighter.\nBy restricting themselves to smaller, more manageable sections of the web, localized search engines can also provide more precise and complete search capabilities over their respective domains.\nOne drawback of localized indexes is the lack of global information needed to compute link-based rankings.\nThe PageRank algorithm [3] has proven to be an effective such measure.\nIn general, the PageRank of a given page is dependent on pages throughout the entire web graph.\nIn the context of a localized search engine, if the PageRanks are computed using only the local subgraph, then we would expect the resulting PageRanks to reflect the perceived popularity within the local community and not of the web as a whole.\nFor example, consider a localized search engine that indexes political pages with conservative views.\nA person wishing to research the opinions on global warming within the conservative political community may encounter numerous such opinions across various websites.\nIf only local PageRank values are available, then the search results will reflect only strongly held beliefs within the community.\nHowever, if global PageRanks are also available, then the results can additionally reflect outsiders' views of the conservative community (those documents that liberals most often access within the conservative community).\nThus, for many localized search engines, incorporating global PageRanks can improve the quality of search results.\nHowever, the number of pages a local search engine indexes is typically orders of magnitude smaller than the number of pages indexed by their large-scale counterparts.\nLocalized search engines do not have the bandwidth, storage
capacity, or computational power to crawl, download, and compute the global PageRanks of the entire web.\nIn this work, we present a method of approximating the global PageRanks of a local domain while only using resources of the same order as those needed to compute the PageRanks of the local subgraph.\nOur proposed method looks for a supergraph of our local subgraph such that the local PageRanks within this supergraph are close to the true global PageRanks.\nWe construct this supergraph by iteratively crawling global pages on the current web frontier-i.e., global pages with inlinks from pages that have already been crawled.\nIn order to provide 116 Research Track Paper a good approximation to the global PageRanks, care must be taken when choosing which pages to crawl next; in this paper, we present a well-motivated page selection algorithm that also performs well empirically.\nThis algorithm is derived from a well-defined problem objective and has a running time linear in the number of local nodes.\nWe experiment across several types of local subgraphs, including four topic related communities and several sitespecific domains.\nTo evaluate performance, we measure the difference between the current global PageRank estimate and the global PageRank, as a function of the number of pages crawled.\nWe compare our algorithm against several heuristics and also against a baseline algorithm that chooses pages at random, and we show that our method outperforms these other methods.\nFinally, we empirically demonstrate that, given a local domain of size n, we can provide good approximations to the global PageRank values by crawling at most n or 2n additional pages.\nThe paper is organized as follows.\nSection 2 gives an overview of localized search engines and outlines their advantages over global search.\nSection 3 provides background on the PageRank algorithm.\nSection 4 formally defines our problem, and section 5 presents our page selection criteria and derives our 
algorithms.\nSection 6 provides experimental results, section 7 gives an overview of related work, and, finally, conclusions are given in section 8.\n2.\nLOCALIZED SEARCH ENGINES Localized search engines index a single community of the web, typically either a site-specific community, or a topicspecific community.\nLocalized search engines enjoy three major advantages over their large-scale counterparts: they are relatively inexpensive to build, they can offer more precise search capability over their local domain, and they can provide a more complete index.\nThe resources needed to build a global search engine are enormous.\nA 2003 study by Lyman et al. [13] found that the `surface web'' (publicly available static sites) consists of 8.9 billion pages, and that the average size of these pages is approximately 18.7 kilobytes.\nTo download a crawl of this size, approximately 167 terabytes of space is needed.\nFor a researcher who wishes to build a search engine with access to a couple of workstations or a small server, storage of this magnitude is simply not available.\nHowever, building a localized search engine over a web community of a hundred thousand pages would only require a few gigabytes of storage.\nThe computational burden required to support search queries over a database this size is more manageable as well.\nWe note that, for topic-specific search engines, the relevant community can be efficiently identified and downloaded by using a focused crawler [21, 4].\nFor site-specific domains, the local domain is readily available on their own web server.\nThis obviates the need for crawling or spidering, and a complete and up-to-date index of the domain can thus be guaranteed.\nThis is in contrast to their large-scale counterparts, which suffer from several shortcomings.\nFirst, crawling dynamically generated pages-pages in the `hidden web''-has been the subject of research [20] and is a non-trivial task for an external crawler.\nSecond, site-specific domains 
can enable the robots exclusion policy.\nThis prohibits external search engines'' crawlers from downloading content from the domain, and an external search engine must instead rely on outside links and anchor text to index these restricted pages.\nBy restricting itself to only a specific domain of the internet, a localized search engine can provide more precise search results.\nConsider the canonical ambiguous search query, `jaguar'', which can refer to either the car manufacturer or the animal.\nA scientist trying to research the habitat and evolutionary history of a jaguar may have better success using a finely tuned zoology-specific search engine than querying Google with multiple keyword searches and wading through irrelevant results.\nA method to learn better ranking functions for retrieval was recently proposed by Radlinski and Joachims [19] and has been applied to various local domains, including Cornell University``s website [8].\n3.\nPAGERANK OVERVIEW The PageRank algorithm defines the importance of web pages by analyzing the underlying hyperlink structure of a web graph.\nThe algorithm works by building a Markov chain from the link structure of the web graph and computing its stationary distribution.\nOne way to compute the stationary distribution of a Markov chain is to find the limiting distribution of a random walk over the chain.\nThus, the PageRank algorithm uses what is sometimes referred to as the `random surfer'' model.\nIn each step of the random walk, the `surfer'' either follows an outlink from the current page (i.e. 
the current node in the chain), or jumps to a random page on the web.\nWe now precisely define the PageRank problem.\nLet U be an m × m adjacency matrix for a given web graph such that Uji = 1 if page i links to page j and Uji = 0 otherwise.\nWe define the PageRank matrix PU to be:\nPU = αUDU⁻¹ + (1 − α)veᵀ, (1)\nwhere DU is the (unique) diagonal matrix such that UDU⁻¹ is column stochastic, α is a given scalar such that 0 ≤ α ≤ 1, e is the vector of all ones, and v is a non-negative, L1-normalized vector, sometimes called the 'random surfer' vector.\nNote that the matrix DU⁻¹ is well-defined only if each column of U has at least one non-zero entry, i.e., each page in the web graph has at least one outlink.\nIn the presence of such 'dangling nodes' that have no outlinks, one commonly used solution, proposed by Brin et al. [3], is to replace each zero column of U by a non-negative, L1-normalized vector.\nThe PageRank vector r is the dominant eigenvector of the PageRank matrix, r = PU r.\nWe will assume, without loss of generality, that r has an L1-norm of one.\nComputationally, r can be computed using the power method.\nThis method first chooses a random starting vector r^(0), and iteratively multiplies the current vector by the PageRank matrix PU; see Algorithm 1.\nIn general, each iteration of the power method can take O(m²) operations when PU is a dense matrix.\nHowever, in practice, the number of links in a web graph will be of the order of the number of pages.\nBy exploiting the sparsity of the PageRank matrix, the work per iteration can be reduced to O(km), where k is the average number of links per web page.\nIt has also been shown that the total number of iterations needed for convergence is proportional to α and does not depend on the size of the web graph [11, 7].\nFinally, the total space needed is also O(km), mainly to store the matrix U.
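Equation (1) can be rendered directly in code. The sketch below is our own illustrative implementation (not the authors' code): it builds the dense PageRank matrix for a toy three-page graph, replacing zero columns with the random-surfer vector v to handle dangling nodes as described above.

```python
def pagerank_matrix(U, alpha=0.85, v=None):
    """Build P_U = alpha * U * D_U^-1 + (1 - alpha) * v * e^T (equation 1).

    U[j][i] = 1 iff page i links to page j.  Zero columns (dangling nodes)
    are replaced by v, the fix attributed to Brin et al. [3] in the text.
    """
    m = len(U)
    v = v or [1.0 / m] * m                 # uniform random-surfer vector
    P = [[0.0] * m for _ in range(m)]
    for i in range(m):                     # process column i (source page i)
        col_sum = sum(U[j][i] for j in range(m))
        for j in range(m):
            # Column of U D_U^-1, with dangling columns replaced by v.
            stochastic = v[j] if col_sum == 0 else U[j][i] / col_sum
            P[j][i] = alpha * stochastic + (1 - alpha) * v[j]
    return P

# Toy graph: pages 1 and 2 link to page 0; page 0 links to pages 1 and 2.
U = [[0, 1, 1],
     [1, 0, 0],
     [1, 0, 0]]
P = pagerank_matrix(U)
# Every column of P sums to 1, i.e., P is column stochastic.
```

Since each column of U D_U⁻¹ sums to 1 and v is L1-normalized, every column of P sums to α + (1 − α) = 1, as required of a column-stochastic matrix.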
117 Research Track Paper

Algorithm 1: A linear time (per iteration) algorithm for computing PageRank.
ComputePR(U)
Input: U: adjacency matrix.
Output: r: PageRank vector.
Choose (randomly) an initial non-negative vector r^(0) such that ||r^(0)||_1 = 1.
i ← 0
repeat
    i ← i + 1
    ν ← α U D_U^(-1) r^(i−1)   {α is the random surfing probability}
    r^(i) ← ν + (1 − α) v   {v is the random surfer vector}
until ||r^(i) − r^(i−1)||_1 < δ   {δ is the convergence threshold}
r ← r^(i)

4. PROBLEM DEFINITION
Given a local domain L, let G be an N × N adjacency matrix for the entire connected component of the web that contains L, such that Gji = 1 if page i links to page j and Gji = 0 otherwise. Without loss of generality, we will partition G as:

G = [ L      Gout    ]
    [ Lout   Gwithin ],   (2)

where L is the n × n local subgraph corresponding to links inside the local domain, Lout is the subgraph that corresponds to links from the local domain pointing out to the global domain, Gout is the subgraph containing links from the global domain into the local domain, and Gwithin contains links within the global domain. We assume that when building a localized search engine, only pages inside the local domain are crawled, and the links between these pages are represented by the subgraph L. The links in Lout are also known, as these point from crawled pages in the local domain to uncrawled pages in the global domain.

As defined in equation (1), P_G is the PageRank matrix formed from the global graph G, and we define the global PageRank vector of this graph to be g. Let the n-length vector p* be the L1-normalized vector corresponding to the global PageRank of the pages in the local domain L:

p* = E_L g / ||E_L g||_1,

where E_L = [ I | 0 ] is the restriction matrix that selects the components from g corresponding to nodes in L.
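The restriction and renormalization that define p* amount to two array operations; a small sketch (the global vector g and the local index set here are invented for illustration):

```python
import numpy as np

def local_ground_truth(g, local_idx):
    """p* = E_L g / ||E_L g||_1: select the components of the global
    PageRank g belonging to the local domain L, then L1-normalize."""
    p = g[local_idx]   # E_L g, with E_L = [ I | 0 ] after reordering
    return p / p.sum()

# Hypothetical 5-page global PageRank; pages 0..2 form the local domain.
g = np.array([0.30, 0.20, 0.10, 0.25, 0.15])
p_star = local_ground_truth(g, np.arange(3))   # [1/2, 1/3, 1/6]
```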
Let p denote the PageRank vector constructed from the local domain subgraph L. In practice, the observed local PageRank p and the global PageRank p* will be quite different. One would expect that as the size of the local matrix L approaches the size of the global matrix G, the global PageRank and the observed local PageRank will become more similar. Thus, one approach to estimating the global PageRank is to crawl the entire global domain, compute its PageRank, and extract the PageRanks of the local domain. Typically, however, n ≪ N, i.e., the number of global pages is much larger than the number of local pages. Therefore, crawling all global pages will quickly exhaust all local resources (computational, storage, and bandwidth) available to create the local search engine.

We instead seek a supergraph F̂ of our local subgraph L with size O(n). Our goal is to find such a supergraph F̂ with PageRank f̂, so that f̂ when restricted to L is close to p*. Formally, we seek to minimize

GlobalDiff(f̂) = || E_L f̂/||E_L f̂||_1 − p* ||_1.   (3)

We choose the L1 norm for measuring the error as it does not place excessive weight on outliers (as the L2 norm does, for example), and also because it is the most commonly used distance measure in the literature for comparing PageRank vectors, as well as for detecting convergence of the algorithm [3].

We propose a greedy framework, given in Algorithm 2, for constructing F̂.

Algorithm 2: The FindGlobalPR algorithm.
FindGlobalPR(L, Lout, T, k)
Input: L: zero-one adjacency matrix for the local domain, Lout: zero-one outlink matrix from L to the global subgraph as in (2), T: number of iterations, k: number of pages to crawl per iteration.
Output: p̂: an improved estimate of the global PageRank of L.
F ← L
Fout ← Lout
f ← ComputePR(F)
for (i = 1 to T)
    {Determine which pages to crawl next}
    pages ← SelectNodes(F, Fout, f, k)
    Crawl pages, augment F, and modify Fout
    {Update PageRanks for the new local domain}
    f ← ComputePR(F)
end
{Extract the PageRanks of the original local domain and normalize}
p̂ ← E_L f/||E_L f||_1

Initially, F is set to the local subgraph L, and the PageRank f of this graph is computed. The algorithm then proceeds as follows. First, the SelectNodes algorithm (which we discuss in the next section) is called, and it returns a set of k nodes to crawl next from the set of nodes in the current crawl frontier, Fout. These selected nodes are then crawled to expand the local subgraph, F, and the PageRanks of this expanded graph are then recomputed. These steps are repeated for each of T iterations. Finally, the PageRank vector p̂, which is restricted to pages within the original local domain, is returned. Given our computation, bandwidth, and memory restrictions, we will assume that the algorithm will crawl at most O(n) pages. Since the PageRanks are computed in each iteration of the algorithm, which is an O(n) operation, we will also assume that the number of iterations T is a constant. Of course, the main challenge here is in selecting which set of k nodes to crawl next. In the next section, we formally define the problem and
give efficient algorithms.

5. NODE SELECTION
In this section, we present node selection algorithms that operate within the greedy framework presented in the previous section. We first give a well-defined criterion for the page selection problem and provide experimental evidence that this criterion can effectively identify pages that optimize our problem objective (3). We then present our main algorithmic contribution of the paper, a method with linear running time that is derived from this page selection criterion. Finally, we give an intuitive analysis of our algorithm in terms of 'leaks' and 'flows'. We show that if only the 'flow' is considered, then the resulting method is very similar to a widely used page selection heuristic [6].

5.1 Formulation
For a given page j in the global domain, we define the expanded local graph Fj:

Fj = [ F      s ]
     [ u_j^T  0 ],   (4)

where u_j is the zero-one vector containing the outlinks from F into page j, and s contains the inlinks from page j into the local domain. Note that we do not allow self-links in this framework. In practice, self-links are often removed, as they only serve to inflate a given page's PageRank.

Observe that the inlinks into F from node j are not known until after node j is crawled. Therefore, we estimate this inlink vector as the expectation over inlink counts among the set of already crawled pages,

s = F^T e / ||F^T e||_1.   (5)

In practice, for any given page, this estimate may not reflect the true inlinks from that page. Furthermore, this expectation is sampled from the set of links within the crawled domain, whereas a better estimate would also use links from the global domain. However, the latter distribution is not known to a localized search engine, and we contend that the above estimate will, on average, be a better estimate than the uniform distribution, for example.

Let the PageRank of F be f. We express the PageRank f_j^+ of the expanded local graph Fj as

f_j^+ = [ (1 − x_j) f_j ]
        [ x_j           ],   (6)

where x_j is the PageRank of the candidate global node j, and f_j is the L1-normalized PageRank vector restricted to the pages in F. Since directly optimizing our problem goal requires knowing the global PageRank p*, we instead propose to crawl those nodes that will have the greatest influence on the PageRanks of pages in the original local domain L:

influence(j) = Σ_{k∈L} |f_j[k] − f[k]|   (7)
             = ||E_L (f_j − f)||_1.

Experimentally, the influence score is a very good predictor of our problem objective (3). For each candidate global node j, figure 1(a) shows the objective function value GlobalDiff(f_j) as a function of the influence of page j. The local domain used here is a crawl of conservative political pages (we will provide more details about this dataset in section 6); we observed similar results in other domains. The correlation is quite strong, implying that the influence criterion can effectively identify pages that improve the global PageRank estimate. As a baseline, figure 1(b) compares our objective with an alternative criterion, outlink count. The outlink count is defined as the number of outlinks from the local domain to page j. The correlation here is much weaker.

Figure 1: (a) The correlation between our influence page selection criterion (7) and the actual objective function (3) value is quite strong. (b) This is in contrast to other criteria, such as outlink count, which exhibit a much weaker correlation.

5.2 Computation
As described, for each candidate global page j, the influence score (7) must be computed. If f_j is computed exactly for each global page j, then the PageRank algorithm would need to be run for each of the O(n) such global pages j we consider, resulting in an O(n^2) computational cost for the node selection method. Thus, computing the exact value of f_j will lead to
a quadratic algorithm, and we must instead turn to methods of approximating this vector. The algorithm we present works by performing one iteration of the power method used by the PageRank algorithm (Algorithm 1). The convergence rate for the PageRank algorithm has been shown to equal the random surfer probability α [7, 11]. Given a starting vector x^(0), if k PageRank iterations are performed, the current PageRank solution x^(k) satisfies:

||x^(k) − x*||_1 = O(α^k ||x^(0) − x*||_1),   (8)

where x* is the desired PageRank vector. Therefore, if only one iteration is performed, choosing a good starting vector is necessary to achieve an accurate approximation.

We partition the PageRank matrix P_Fj, corresponding to the (ℓ + 1) × (ℓ + 1) subgraph Fj, as:

P_Fj = [ F̃      s̃ ]
       [ ũ_j^T  w ],   (9)

where

F̃ = α F (D_F + diag(u_j))^(-1) + (1 − α) e e^T/(ℓ + 1),
s̃ = α s + (1 − α) e/(ℓ + 1),
ũ_j = α (D_F + diag(u_j))^(-1) u_j + (1 − α) e/(ℓ + 1),
w = (1 − α)/(ℓ + 1),

and diag(u_j) is the diagonal matrix with the (i, i)th entry equal to one if the ith element of u_j equals one, and zero otherwise. We have assumed here that the random surfer vector is the uniform vector, and that L has no 'dangling links'. These assumptions are not necessary and serve only to simplify discussion and analysis.

A simple approach for estimating f_j is the following. First, estimate the PageRank f_j^+ of Fj by computing one PageRank iteration over the matrix P_Fj, using the starting vector ν = [ f ; 0 ]. Then, estimate f_j by removing the last component from our estimate of f_j^+ (i.e., the component corresponding to the added node j), and renormalizing. The problem with this approach is in the starting vector. Recall from (6) that x_j is the PageRank of the added node j. The difference between the actual PageRank f_j^+ of P_Fj and the starting vector ν is

||ν − f_j^+||_1 = x_j + ||f − (1 − x_j) f_j||_1
               ≥ x_j + | ||f||_1 − (1 − x_j) ||f_j||_1 |
               = x_j + |x_j| = 2 x_j.

Thus, by (8), after one PageRank iteration, we expect our estimate of f_j^+ to still have an error of about 2αx_j. In particular, for candidate nodes j with relatively high PageRank x_j, this method will yield more inaccurate results. We will next present a method that eliminates this bias and runs in O(n) time.

5.2.1 Stochastic Complementation
Since f_j^+, as given in (6), is the PageRank of the matrix P_Fj, we have:

[ (1 − x_j) f_j ]   [ F̃      s̃ ] [ (1 − x_j) f_j ]   [ F̃ f_j (1 − x_j) + s̃ x_j     ]
[ x_j           ] = [ ũ_j^T  w ] [ x_j           ] = [ ũ_j^T f_j (1 − x_j) + w x_j ].

Solving the above system for f_j can be shown to yield

f_j = (F̃ + (1 − w)^(-1) s̃ ũ_j^T) f_j.   (10)

The matrix S = F̃ + (1 − w)^(-1) s̃ ũ_j^T is known as the stochastic complement of the column stochastic matrix P_Fj with respect to the sub-matrix F̃. The theory of stochastic complementation is well studied, and it can be shown that the stochastic complement of an irreducible matrix (such as the PageRank matrix) is unique. Furthermore, the stochastic complement is also irreducible and therefore has a unique stationary distribution as well. For an extensive study, see [15].

It can be easily shown that the sub-dominant eigenvalue of S is at most ((ℓ + 1)/ℓ) α, where ℓ is the size of F. For sufficiently large ℓ, this value will be very close to α. This is important, as other properties of the PageRank algorithm, notably the algorithm's sensitivity, are dependent on this value [11].

In this method, we estimate the ℓ-length vector f_j by computing one PageRank iteration over the ℓ × ℓ stochastic complement S, starting at the vector f:

f_j ≈ S f.   (11)

This is in contrast to the simple method outlined in the previous section, which first iterates over the (ℓ + 1) × (ℓ + 1) matrix P_Fj to estimate f_j^+, and then removes the last component from the estimate and renormalizes to approximate f_j. The
problem with the latter method is in the choice of the (ℓ + 1)-length starting vector, ν. Consequently, the PageRank estimate given by the simple method differs from the true PageRank by at least 2αx_j, where x_j is the PageRank of page j. By using the stochastic complement, we can establish a tight lower bound of zero for this difference. To see this, consider the case in which a node k is added to F to form the augmented local subgraph Fk, and that the PageRank of this new graph is [ (1 − x_k) f ; x_k ]. Specifically, the addition of page k does not change the PageRanks of the pages in F, and thus f_k = f. By construction of the stochastic complement, f_k = S f_k, so the approximation given in equation (11) will yield the exact solution.

Next, we present the computational details needed to efficiently compute the quantity ||f_j − f||_1 over all known global pages j. We begin by expanding the difference f_j − f, where the vector f_j is estimated as in (11):

f_j − f ≈ S f − f
        = α F (D_F + diag(u_j))^(-1) f + (1 − α) e e^T f/(ℓ + 1) + (1 − w)^(-1) (ũ_j^T f) s̃ − f.   (12)

Note that the matrix (D_F + diag(u_j))^(-1) is diagonal. Letting o[k] be the outlink count for page k in F, we can express the kth diagonal element as:

(D_F + diag(u_j))^(-1)[k, k] = 1/(o[k] + 1)   if u_j[k] = 1,
(D_F + diag(u_j))^(-1)[k, k] = 1/o[k]         if u_j[k] = 0.

Noting that (o[k] + 1)^(-1) = o[k]^(-1) − (o[k](o[k] + 1))^(-1), and rewriting this in matrix form, yields

(D_F + diag(u_j))^(-1) = D_F^(-1) − D_F^(-1) (D_F + diag(u_j))^(-1) diag(u_j).   (13)

We use the same identity to express

e/(ℓ + 1) = e/ℓ − e/(ℓ(ℓ + 1)).   (14)

Recall that, by definition, we have P_F = α F D_F^(-1) + (1 − α) e e^T/ℓ. Substituting (13) and (14) into (12) yields

f_j − f ≈ (P_F f − f) − α F D_F^(-1) (D_F + diag(u_j))^(-1) diag(u_j) f − (1 − α) e/(ℓ(ℓ + 1)) + (1 − w)^(-1) (ũ_j^T f) s̃
        = x + y + (ũ_j^T f) z,   (15)

noting that, by definition, f = P_F f, and defining the vectors x, y, and z to be

x = −α F D_F^(-1) (D_F + diag(u_j))^(-1) diag(u_j) f,   (16)
y = −(1 − α) e/(ℓ(ℓ + 1)),   (17)
z = (1 − w)^(-1) s̃.   (18)

The first term, x, is a sparse vector that takes non-zero values only for local pages k′ that are siblings of the global page j. We define (i, j) ∈ F if and only if F[j, i] = 1 (equivalently, page i links to page j) and express the value of the component x[k′] as:

x[k′] = −α Σ_{k : (k,k′) ∈ F, u_j[k] = 1} f[k]/(o[k](o[k] + 1)),   (19)

where o[k], as before, is the number of outlinks from page k in the local domain. Note that the last two terms, y and z, are not dependent on the current global node j.
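The decomposition above can be verified numerically on a toy graph: one iteration over the stochastic complement, S f − f, matches x + y + (ũ_j^T f) z exactly when f is the exact local PageRank. The sketch below is our own construction (ℓ = 3 local pages, no dangling nodes, and a uniform inlink estimate s used in place of eq. (5)):

```python
import numpy as np

alpha = 0.85

# Toy local graph, column convention F[j, i] = 1 iff page i links to page j:
# 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0 (no dangling nodes).
F = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [1., 1., 0.]])
l = F.shape[0]
o = F.sum(axis=0)   # outlink counts o[k]
e = np.ones(l)

# Exact local PageRank f: dominant eigenvector of
# P_F = alpha * F * D_F^{-1} + (1 - alpha) * e e^T / l.
P_F = alpha * F @ np.diag(1.0 / o) + (1 - alpha) * np.outer(e, e) / l
vals, vecs = np.linalg.eig(P_F)
f = np.real(vecs[:, np.argmax(np.real(vals))])
f = f / f.sum()   # L1-normalize (the Perron vector is one-signed)

# Candidate global page j: local page 1 links to it (u_j), and we assume a
# uniform inlink estimate s (any L1-normalized vector works for the identity).
u = np.array([0., 1., 0.])
s = e / l

Dj_inv = np.diag(1.0 / (o + u))   # (D_F + diag(u_j))^{-1}
F_t = alpha * F @ Dj_inv + (1 - alpha) * np.outer(e, e) / (l + 1)   # F~
s_t = alpha * s + (1 - alpha) * e / (l + 1)                          # s~
u_t = alpha * Dj_inv @ u + (1 - alpha) * e / (l + 1)                 # u~_j
w = (1 - alpha) / (l + 1)
S = F_t + np.outer(s_t, u_t) / (1 - w)   # stochastic complement, eq. (10)

# Leak/flow decomposition of eq. (15): S f - f = x + y + (u~_j^T f) z.
x = -alpha * F @ np.diag(1.0 / o) @ Dj_inv @ np.diag(u) @ f   # eq. (16)
y = -(1 - alpha) * e / (l * (l + 1))                           # eq. (17)
z = s_t / (1 - w)                                              # eq. (18)
lhs = S @ f - f
rhs = x + y + (u_t @ f) * z
```

Here x and y come out non-positive (leakage) while (ũ_j^T f) z is non-negative (flow), consistent with the analysis that follows.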
Given the function h_j(f) = ||y + (ũ_j^T f) z||_1, the quantity ||f_j − f||_1 can be expressed as

||f_j − f||_1 = Σ_k |x[k] + y[k] + (ũ_j^T f) z[k]|
             = Σ_{k : x[k] = 0} |y[k] + (ũ_j^T f) z[k]| + Σ_{k : x[k] ≠ 0} |x[k] + y[k] + (ũ_j^T f) z[k]|
             = h_j(f) − Σ_{k : x[k] ≠ 0} |y[k] + (ũ_j^T f) z[k]| + Σ_{k : x[k] ≠ 0} |x[k] + y[k] + (ũ_j^T f) z[k]|.   (20)

If we can compute the function h_j in linear time, then we can compute each value of ||f_j − f||_1 using an additional amount of time that is proportional to the number of non-zero components in x. These optimizations are carried out in Algorithm 3. Note that (20) computes the difference between all components of f and f_j, whereas our node selection criterion, given in (7), is restricted to the components corresponding to nodes in the original local domain L.

Let us examine Algorithm 3 in more detail. First, the algorithm computes the outlink counts for each page in the local domain. The algorithm then computes the quantity ũ_j^T f for each known global page j. This inner product can be written as

ũ_j^T f = (1 − α)/(ℓ + 1) + α Σ_{k : (k,j) ∈ Fout} f[k]/(o[k] + 1),

where the second term sums over the set of local pages that link to page j. Since the total number of edges in Fout was assumed to have size O(ℓ) (recall that ℓ is the number of pages in F), the running time of this step is also O(ℓ). The algorithm then computes the vectors y and z, as given in (17) and (18), respectively. The L1NormDiff method is called on the components of these vectors corresponding to the pages in L, and it estimates the value of ||E_L (y + (ũ_j^T f) z)||_1 for each page j. The estimation works as follows. First, the values of ũ_j^T f are discretized uniformly into c values {a_1, ..., a_c}. The quantity ||E_L (y + a_i z)||_1 is then computed for each discretized value a_i and stored in a table. To evaluate ||E_L (y + a z)||_1 for some a ∈ [a_1, a_c], the closest discretized value a_i is determined, and the corresponding entry in
the table is used. The total running time for this method is linear in ℓ and the discretization parameter c (which we take to be a constant). We note that if exact values are desired, we have also developed an algorithm that runs in O(ℓ log ℓ) time that is not described here.

In the main loop, we compute the vector x, as defined in equation (16). The nested loops iterate over the set of pages in F that are siblings of page j. Typically, the size of this set is bounded by a constant. Finally, for each page j, the scores vector is updated over the set of non-zero components k of the vector x with k ∈ L. This set has size equal to the number of local siblings of page j, and is a subset of the total number of siblings of page j. Thus, each iteration of the main loop takes constant time, and the total running time of the main loop is O(ℓ). Since we have assumed that the size of F will not grow larger than O(n), the total running time for the algorithm is O(n).

Algorithm 3: Node Selection via Stochastic Complementation.
SC-Select(F, Fout, f, k)
Input: F: zero-one adjacency matrix of size ℓ corresponding to the current local subgraph, Fout: zero-one outlink matrix from F to the global subgraph, f: PageRank of F, k: number of pages to return.
Output: pages: set of k pages to crawl next.
{Compute outlink sums for the local subgraph}
foreach (page j ∈ F)
    o[j] ← Σ_{k:(j,k)∈F} 1
end
{Compute the scalar ũ_j^T f for each global node j}
foreach (page j ∈ Fout)
    g[j] ← (1 − α)/(ℓ + 1)
    foreach (page k : (k, j) ∈ Fout)
        g[j] ← g[j] + α f[k]/(o[k] + 1)
    end
end
{Compute vectors y and z as in (17) and (18)}
y ← −(1 − α) e/(ℓ(ℓ + 1))
z ← (1 − w)^(-1) s̃
{Approximate ||y + g[j]·z||_1 for all values g[j]}
norm_diffs ← L1NormDiffs(g, E_L y, E_L z)
foreach (page j ∈ Fout)
    {Compute the sparse vector x as in (19)}
    x ← 0
    foreach (page k : (k, j) ∈ Fout)
        foreach (page k′ : (k, k′) ∈ F)
            x[k′] ← x[k′] − f[k]/(o[k](o[k] + 1))
        end
    end
    x ← α x
    scores[j] ← norm_diffs[j]
    foreach (k : x[k] ≠ 0 and page k ∈ L)
        scores[j] ← scores[j] − |y[k] + g[j]·z[k]| + |x[k] + y[k] + g[j]·z[k]|
    end
end
Return the k pages with the highest scores

5.2.2 PageRank Flows
We now present an intuitive analysis of the stochastic complementation method by decomposing the change in PageRank in terms of 'leaks' and 'flows'. This analysis is motivated by the decomposition given in (15). PageRank 'flow' is the increase in the local PageRanks originating from global page j. The flows are represented by the non-negative vector (ũ_j^T f) z (equations (15) and (18)). The scalar ũ_j^T f can be thought of as the total amount of PageRank flow that page j has available to distribute. The vector z dictates how the flow is allocated to the local domain; the flow that local page k receives is proportional to (within a constant factor due to the random surfer vector) the expected number of its inlinks.

The PageRank 'leaks' represent the decrease in PageRank resulting from the addition of page j. The leakage can be quantified in terms of the non-positive vectors x and y (equations (16) and (17)). For vector x, we can see from equation (19) that the amount of PageRank leaked by a local page is proportional to the weighted sum of the PageRanks of its siblings. Thus, pages that have siblings with higher PageRanks (and low outlink counts) will experience more leakage. The leakage caused by y is an artifact of the random surfer vector.

We will next show that if only the 'flow' term, (ũ_j^T f) z, is considered, then the resulting method is very similar to a heuristic proposed by Cho et al.
[6] that has been widely used for the Crawling Through URL Ordering problem. This heuristic is computationally cheaper, but as we will see later, not as effective as the stochastic complementation method.

Our node selection strategy chooses global nodes that have the largest influence (equation (7)). If this influence is approximated using only 'flows', the optimal node j* is:

j* = argmax_j ||E_L (ũ_j^T f) z||_1
   = argmax_j (ũ_j^T f) ||E_L z||_1
   = argmax_j ũ_j^T f
   = argmax_j ⟨ α (D_F + diag(u_j))^(-1) u_j + (1 − α) e/(ℓ + 1), f ⟩
   = argmax_j f^T (D_F + diag(u_j))^(-1) u_j.

The resulting page selection score can be expressed as a sum of the PageRanks of each local page k that links to j, where each PageRank value is normalized by o[k] + 1. Interestingly, the normalization that arises in our method differs from the heuristic given in [6], which normalizes by o[k]. The algorithm PF-Select, which is omitted due to lack of space, first computes the quantity f^T (D_F + diag(u_j))^(-1) u_j for each global page j, and then returns the pages with the k largest scores. To see that the running time for this algorithm is O(n), note that the computation involved in this method is a subset of that needed for the SC-Select method (Algorithm 3), which was shown to have a running time of O(n).

6. EXPERIMENTS
In this section, we provide experimental evidence to verify the effectiveness of our algorithms. We first outline our experimental methodology and then provide results across a variety of local domains.

6.1 Methodology
Given the limited resources available at an academic institution, crawling a section of the web that is of the same magnitude as that indexed by Google or Yahoo!
is clearly infeasible. Thus, for a given local domain, we approximate the global graph by crawling a local neighborhood around the domain that is several orders of magnitude larger than the local subgraph. Even though such a graph is still orders of magnitude smaller than the 'true' global graph, we contend that, even if there exist some highly influential pages that are very far away from our local domain, it is unrealistic for any local node selection algorithm to find them. Such pages also tend to be highly unrelated to pages within the local domain.

When explaining our node selection strategies in section 5, we made the simplifying assumption that our local graph contained no dangling nodes. This assumption was only made to ease our analysis. Our implementation efficiently handles dangling links by replacing each zero column of our adjacency matrix with the uniform vector.

We evaluate the algorithm using the two node selection strategies given in Section 5.2, and also against the following baseline methods:
• Random: Nodes are chosen uniformly at random among the known global nodes.
• OutlinkCount: Global nodes with the highest number of outlinks from the local domain are chosen.

At each iteration of the FindGlobalPR algorithm, we evaluate performance by computing the difference between the current PageRank estimate of the local domain, E_L f/||E_L f||_1, and the global PageRank of the local domain, E_L g/||E_L g||_1. All PageRank calculations were performed using the uniform random surfer vector. Across all experiments, we set the random surfer parameter α to .85, and used a convergence threshold of 10^(-6). We evaluate the difference between the local and global PageRank vectors using three different metrics: the L1 and L∞ norms, and Kendall's tau. The L1 norm measures the sum of the absolute values of the differences between the two vectors, and the L∞ norm measures the absolute value of the largest difference. Kendall's tau
metric is a popular rank correlation measure used to compare PageRanks [2, 11]. This metric can be computed by counting the number of pairs that agree in ranking, and subtracting from that the number of pairs that disagree in ranking. The final value is then normalized by the total number, n(n − 1)/2, of such pairs, resulting in a [−1, 1] range, where a negative score signifies anti-correlation among rankings, and values near one correspond to strong rank correlation.

6.2 Results
Our experiments are based on two large web crawls that were downloaded using the web crawler that is part of the Nutch open source search engine project [18]. All crawls were restricted to only 'http' pages, and to limit the number of dynamically generated pages that we crawl, we ignored all pages with urls containing any of the characters '?', '*', '@', or '='. The first crawl, which we will refer to as the 'edu' dataset, was seeded by the homepages of the top 100 graduate computer science departments in the USA, as rated by the US News and World Report [16], and also by the home pages of their respective institutions. A crawl of depth 5 was performed, restricted to pages within the '.edu' domain, resulting in a graph with approximately 4.7 million pages and 22.9 million links. The second crawl was seeded by the set of pages under the 'politics' hierarchy in the dmoz open directory project [17]. We crawled all pages up to four links away, which yielded a graph with 4.4 million pages and 17.3 million links.

Within the 'edu' crawl, we identified the five site-specific domains corresponding to the websites of the top five graduate computer science departments, as ranked by the US News and World Report. This yielded local domains of various sizes, from 10,626 (UIUC) to 59,895 (Berkeley) pages. For each of these site-specific domains with size n, we performed 50 iterations of the FindGlobalPR algorithm to crawl a total of 2n additional nodes. Figure 2(a) gives the
(L1) difference from the PageRank estimate at each iteration to the global PageRank, for the Berkeley local domain. The performance of this dataset was representative of the typical performance across the five computer science site-specific local domains. Initially, the L1 difference between the global and local PageRanks ranged from .0469 (Stanford) to .149 (MIT). For the first several iterations, the three link-based methods all outperform the random selection heuristic. After these initial iterations, the random heuristic tended to be more competitive with (or even outperform, as in the Berkeley local domain) the outlink count and PageRank flow heuristics. In all tests, the stochastic complementation method either outperformed, or was competitive with, the other methods. Table 1 gives the average difference between the final estimated global PageRanks and the true global PageRanks for various distance measures.

Figure 2: L1 difference between the estimated and true global PageRanks for (a) Berkeley's computer science website, (b) the site-specific domain www.enterstageright.com, and (c) the 'politics' topic-specific domain. The stochastic complement method outperforms all other methods across various domains. [Each panel plots the L1 difference for the Stochastic Complement, PageRank Flow, Outlink Count, and Random strategies against the number of iterations.]

Algorithm      L1      L∞       Kendall
Stoch. Comp.   .0384   .00154   .9257
PR Flow        .0470   .00272   .8946
Outlink        .0419   .00196   .9053
Random         .0407   .00204   .9086

Table 1: Average final performance of the node selection strategies for the five site-specific computer science local domains. Note that Kendall's tau measures similarity, while the other metrics are dissimilarity measures. Stochastic complementation clearly outperforms the other methods in all metrics.

Within the 'politics' dataset, we also performed two site-specific tests for the largest websites in the crawl: www.adamsmith.org, the website of the London-based Adam Smith Institute, and www.enterstageright.com, an online conservative journal. As with the 'edu' local domains, we ran our algorithm for 50 iterations, crawling a total of 2n nodes. Figure 2(b) plots the results for the www.enterstageright.com domain. In contrast to the 'edu' local domains, the Random and OutlinkCount methods were not competitive with either the SC-Select or the PF-Select methods. Among all datasets and all node selection methods, the stochastic complementation method was most impressive in this dataset, realizing a final estimate that differed only .0279 from the global PageRank, a ten-fold improvement over the initial local PageRank difference of .299. For the Adam Smith local domain, the initial difference between the local and global PageRanks was .148, and the final estimates given by the SC-Select, PF-Select, OutlinkCount, and Random methods were .0208, .0193, .0222, and .0356, respectively.

Within the 'politics' dataset, we constructed four topic-specific local domains. The first domain consisted of all pages in the dmoz politics category, and also all pages within each of these sites up to two links away. This yielded a local domain of 90,811 pages, and the results are given in figure 2(c). Because of the larger size of the topic-specific domains, we ran our algorithm for only 25 iterations to crawl a total of n nodes. We also created topic-specific domains from three political sub-topics: liberalism, conservatism, and socialism. The
pages in these domains were identified by their corresponding dmoz categories.\nFor each sub-topic, we set the local domain to be all pages within three links from the corresponding dmoz category pages.\nTable 2 summarizes the performance of these three topic-specific domains, and also the larger political domain.\nTo quantify a global page j``s effect on the global PageRank values of pages in the local domain, we define page j``s impact to be its PageRank value, g[j], normalized by the fraction of its outlinks pointing to the local domain: impact(j) = oL[j] o[j] \u00b7 g[j], where, oL[j] is the number of outlinks from page j to pages in the local domain L, and o[j] is the total number of j``s outlinks.\nIn terms of the random surfer model, the impact of page j is the probability that the random surfer (1) is currently at global page j in her random walk and (2) takes an outlink to a local page, given that she has already decided not to jump to a random page.\nFor the politics local domain, we found that many of the pages with high impact were in fact political pages that should have been included in the dmoz politics topic, but were not.\nFor example, the two most influential global pages were the political search engine www.askhenry.com, and the home page of the online political magazine, www.policyreview.com.\nAmong non-political pages, the home page of the journal Education Next was most influential.\nThe journal is freely available online and contains articles regarding various aspect of K-12 education in America.\nTo provide some anecdotal evidence for the effectiveness of our page selection methods, we note that the SC-Select method chose 11 pages within the www.educationnext.org domain, the PF-Select method discovered 7 such pages, while the OutlinkCount and Random methods found only 6 pages each.\nFor the conservative political local domain, the socialist website www.ornery.org had a very high impact score.\nThis 123 Research Track Paper All Politics: 
All Politics:
Algorithm      L1      L2       Kendall
Stoch. Comp.   .1253   .000700  .8671
PR Flow        .1446   .000710  .8518
Outlink        .1470   .00225   .8642
Random         .2055   .00203   .8271

Conservatism:
Algorithm      L1      L2       Kendall
Stoch. Comp.   .0496   .000990  .9158
PR Flow        .0554   .000939  .9028
Outlink        .0602   .00527   .9144
Random         .1197   .00102   .8843

Liberalism:
Algorithm      L1      L2       Kendall
Stoch. Comp.   .0622   .001360  .8848
PR Flow        .0799   .001378  .8669
Outlink        .0763   .001379  .8844
Random         .1127   .001899  .8372

Socialism:
Algorithm      L1      L2       Kendall
Stoch. Comp.   .04318  .00439   .9604
PR Flow        .0450   .004251  .9559
Outlink        .04282  .00344   .9591
Random         .0631   .005123  .9350

Table 2: Final performance among node selection strategies for the four political topic-specific crawls. Note that Kendall's Tau measures similarity, while the other metrics are dissimilarity measures.

This was largely due to a link from the front page of this site to an article regarding global warming published by the National Center for Public Policy Research, a conservative research group in Washington, DC. Not surprisingly, the global PageRank of this article (which happens to be on the home page of the NCPPR, www.nationalresearch.com) was approximately .002, whereas the local PageRank of this page was only .00158. The SC-Select method yielded a global PageRank estimate of approximately .00182, the PF-Select method estimated a value of .00167, and the Random and OutlinkCount methods yielded values of .01522 and .00171, respectively.

7. RELATED WORK

The node selection framework we have proposed is similar to the URL-ordering problem for crawling proposed by Cho et al. [6].
Whereas our framework seeks to minimize the difference between the global and local PageRanks, the objective used in [6] is to crawl the most highly (globally) ranked pages first. They propose several node selection algorithms, including the outlink count heuristic, as well as a variant of our PF-Select algorithm which they refer to as the 'PageRank ordering metric'. They found this method to be most effective in optimizing their objective, as did a recent survey of these methods by Baeza-Yates et al. [1]. Boldi et al. also experiment within a similar crawling framework in [2], but quantify their results by comparing Kendall's rank correlation between the PageRanks of the current set of crawled pages and those of the entire global graph. They found that node selection strategies that crawled pages with the highest global PageRank first actually performed worse (with respect to Kendall's Tau correlation between the local and global PageRanks) than basic depth-first or breadth-first strategies. However, their experiments differ from our work in that our node selection algorithms do not use (or have access to) global PageRank values.

Many algorithmic improvements for computing exact PageRank values have been proposed [9, 10, 14]. If such algorithms are used to compute the global PageRanks of our local domain, they would all require O(N) computation, storage, and bandwidth, where N is the size of the global domain. This is in contrast to our method, which approximates the global PageRank and scales linearly with the size of the local domain.

Wang and DeWitt [22] propose a system in which the set of web servers that comprise the global domain communicate with each other to compute their respective global PageRanks. For a given web server hosting n pages, the computational, bandwidth, and storage requirements are also linear in n. One drawback of this system is that the number of distinct web servers that comprise the global domain can be very large.
For example, our 'edu' dataset contains websites from over 3,200 different universities; coordinating such a system among a large number of sites can be very difficult. Gan, Chen, and Suel [5] propose a method for estimating the PageRank of a single page which uses only constant bandwidth, computation, and space. Their approach relies on the availability of a remote connectivity server that can supply the set of inlinks to a given page, an assumption not used in our framework. They experimentally show that a reasonable estimate of the node's PageRank can be obtained by visiting at most a few hundred nodes. Using their algorithm for our problem would require that either the entire global domain first be downloaded or a connectivity server be used, both of which would lead to very large web graphs.

8. CONCLUSIONS AND FUTURE WORK

The internet is growing exponentially, and in order to navigate such a large repository as the web, global search engines have established themselves as a necessity. Along with the ubiquity of these large-scale search engines comes an increase in search users' expectations. By providing complete and isolated coverage of a particular web domain, localized search engines are an effective outlet for quickly locating content that could otherwise be difficult to find. In this work, we contend that the use of global PageRank in a localized search engine can improve performance. To estimate the global PageRank, we have proposed an iterative node selection framework in which we select which pages from the global frontier to crawl next. Our primary contribution is our stochastic complementation page selection algorithm. This method crawls nodes that will most significantly impact the local domain and has running time linear in the number of nodes in the local domain. Experimentally, we validate these methods across a diverse set of local domains, including seven site-specific domains and four topic-specific domains.
We conclude that by crawling an additional n or 2n pages, our methods find an estimate of the global PageRanks that is up to ten times better than just using the local PageRanks. Furthermore, we demonstrate that our algorithm consistently outperforms other existing heuristics.

Oftentimes, topic-specific domains are discovered using a focused web crawler, which considers a page's content in conjunction with link anchor text to decide which pages to crawl next [4]. Although such crawlers have proven to be quite effective in discovering topic-related content, many irrelevant pages are also crawled in the process. Typically, these pages are deleted and not indexed by the localized search engine. These pages can, of course, provide valuable information regarding the global PageRank of the local domain. One way to integrate these pages into our framework is to start the FindGlobalPR algorithm with the current subgraph F equal to the set of pages that were crawled by the focused crawler.

The global PageRank estimation framework, along with the node selection algorithms presented, all require O(n) computation per iteration and bandwidth proportional to the number of pages crawled, Tk. If the number of iterations T is relatively small compared to the number of pages crawled per iteration, k, then the bottleneck of the algorithm will be the crawling phase. However, as the number of iterations increases (relative to k), the bottleneck will reside in the node selection computation. In this case, our algorithms would benefit from constant-factor optimizations. Recall that the FindGlobalPR algorithm (Algorithm 2) requires that the PageRanks of the current expanded local domain be recomputed in each iteration. Recent work by Langville and Meyer [12] gives an algorithm to quickly recompute the PageRanks of a given webgraph if a small number of nodes are added. This algorithm was shown to give a speedup of five to ten times on some datasets.
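The per-iteration costs discussed above follow the shape of the outer estimation loop. The sketch below is a schematic reconstruction under stated assumptions, not the paper's exact Algorithm 2: in particular, the frontier-scoring heuristic here (inlink count from the current subgraph) is a simple stand-in for the stochastic-complementation selection, and the toy `web` dictionary plays the role of the crawler:

```python
def compute_pagerank(graph, d=0.85, iters=50):
    """Power iteration on an adjacency-list graph {page: [outlinks]}.
    Outlinks to pages not yet in the graph are ignored; dangling mass
    is spread uniformly."""
    nodes = list(graph)
    pr = {u: 1.0 / len(nodes) for u in nodes}
    for _ in range(iters):
        nxt = {u: (1 - d) / len(nodes) for u in nodes}
        for u in nodes:
            links = [v for v in graph[u] if v in pr]
            if not links:
                for v in nodes:
                    nxt[v] += d * pr[u] / len(nodes)
            else:
                for v in links:
                    nxt[v] += d * pr[u] / len(links)
        pr = nxt
    return pr

def find_global_pr(local_graph, web, T, k):
    """Schematic outer loop: T iterations, k frontier pages crawled per
    iteration, PageRank recomputed on the expanded subgraph each time
    (hence O(n) node selection per iteration and T*k total crawl bandwidth)."""
    F = {u: list(vs) for u, vs in local_graph.items()}
    pr = compute_pagerank(F)
    for _ in range(T):
        # Frontier: pages linked from F but not yet crawled.
        frontier = {v for vs in F.values() for v in vs} - set(F)
        if not frontier:
            break
        # Stand-in heuristic; the paper's SC-Select instead scores pages
        # by their estimated effect on the local PageRanks.
        score = {v: sum(v in vs for vs in F.values()) for v in frontier}
        for v in sorted(frontier, key=score.get, reverse=True)[:k]:
            F[v] = list(web.get(v, []))   # "crawl" page v
        pr = compute_pagerank(F)          # recomputed every iteration
    return pr, F

# Toy example: two local pages, two global pages reachable from them.
web = {
    "L1": ["L2", "G1"], "L2": ["L1"],   # local pages
    "G1": ["L1", "G2"], "G2": ["G1"],   # global pages
}
local = {u: web[u] for u in ("L1", "L2")}
pr, F = find_global_pr(local, web, T=2, k=1)
# After two iterations, G1 and then G2 have been pulled into the subgraph.
```

The recomputation inside the loop is exactly the step the Langville-Meyer updating algorithm would accelerate.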
We plan to investigate this and other such optimizations as future work.

In this paper, we have objectively evaluated our methods by measuring how close our global PageRank estimates are to the actual global PageRanks. To determine the benefit of using global PageRanks in a localized search engine, we suggest a user study in which users are asked to rate the quality of search results for various search queries. For some queries, only the local PageRanks are used in ranking, and for the remaining queries, local PageRanks and the approximate global PageRanks, as computed by our algorithms, are used. The results of such a study can then be analyzed to determine the added benefit of using the global PageRanks computed by our methods over just using the local PageRanks.

Acknowledgements. This research was supported by NSF grant CCF-0431257, NSF Career Award ACI-0093404, and a grant from Sabre, Inc.

9. REFERENCES
[1] R. Baeza-Yates, M. Marin, C. Castillo, and A. Rodriguez. Crawling a country: better strategies than breadth-first for web page ordering. World-Wide Web Conference, 2005.
[2] P. Boldi, M. Santini, and S. Vigna. Do your worst to make the best: paradoxical effects in PageRank incremental computations. Workshop on Web Graphs, 3243:168-180, 2004.
[3] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 33(1-7):107-117, 1998.
[4] S. Chakrabarti, M. van den Berg, and B. Dom. Focused crawling: a new approach to topic-specific web resource discovery. World-Wide Web Conference, 1999.
[5] Y. Chen, Q. Gan, and T. Suel. Local methods for estimating PageRank values. Conference on Information and Knowledge Management, 2004.
[6] J. Cho, H. Garcia-Molina, and L. Page. Efficient crawling through URL ordering. World-Wide Web Conference, 1998.
[7] T. H. Haveliwala and S. D. Kamvar. The second eigenvalue of the Google matrix. Technical report, Stanford University, 2003.
[8] T. Joachims, F. Radlinski, L. Granka, A. Cheng, C. Tillekeratne, and A. Patel. Learning retrieval functions from implicit feedback. http://www.cs.cornell.edu/People/tj/career.
[9] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and G. H. Golub. Exploiting the block structure of the web for computing PageRank. World-Wide Web Conference, 2003.
[10] S. D. Kamvar, T. H. Haveliwala, C. D. Manning, and G. H. Golub. Extrapolation methods for accelerating PageRank computation. World-Wide Web Conference, 2003.
[11] A. N. Langville and C. D. Meyer. Deeper inside PageRank. Internet Mathematics, 2004.
[12] A. N. Langville and C. D. Meyer. Updating the stationary vector of an irreducible Markov chain with an eye on Google's PageRank. SIAM Journal on Matrix Analysis, 2005.
[13] P. Lyman, H. R. Varian, K. Swearingen, P. Charles, N. Good, L. L. Jordan, and J. Pal. How much information 2003? School of Information Management and Systems, University of California at Berkeley, 2003.
[14] F. McSherry. A uniform approach to accelerated PageRank computation. World-Wide Web Conference, 2005.
[15] C. D. Meyer. Stochastic complementation, uncoupling Markov chains, and the theory of nearly reducible systems. SIAM Review, 31:240-272, 1989.
[16] US News and World Report. http://www.usnews.com.
[17] Dmoz open directory project. http://www.dmoz.org.
[18] Nutch open source search engine. http://www.nutch.org.
[19] F. Radlinski and T. Joachims. Query chains: learning to rank from implicit feedback. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2005.
[20] S. Raghavan and H. Garcia-Molina. Crawling the hidden web. In Proceedings of the Twenty-seventh International Conference on Very Large Databases, 2001.
[21] T. Tin Tang, D. Hawking, N. Craswell, and K. Griffiths. Focused crawling for both topical relevance and quality of medical information. Conference on Information and Knowledge Management, 2005.
[22] Y. Wang and D. J. DeWitt. Computing PageRank in a distributed internet search system. Proceedings of the 30th VLDB Conference, 2004.
domains.\nOne drawback of localized indexes is the lack of global information needed to compute link-based rankings.\nThe PageRank algorithm [3], has proven to be an effective such measure.\nIn general, the PageRank of a given page is dependent on pages throughout the entire web graph.\nIn the context of a localized search engine, if the PageRanks are computed using only the local subgraph, then we would expect the resulting PageRanks to reflect the perceived popularity within the local community and not of the web as a whole.\nFor example, consider a localized search engine that indexes political pages with conservative views.\nA person wishing to research the opinions on global warming within the conservative political community may encounter numerous such opinions across various websites.\nIf only local PageRank values are available, then the search results will reflect only strongly held beliefs within the community.\nHowever, if global PageRanks are also available, then the results can additionally reflect outsiders' views of the conservative community (those documents that liberals most often access within the conservative community).\nThus, for many localized search engines, incorporating global PageRanks can improve the quality of search results.\nHowever, the number of pages a local search engine indexes is typically orders of magnitude smaller than the number of pages indexed by their large-scale counterparts.\nLocalized search engines do not have the bandwidth, storage capacity, or computational power to crawl, download, and compute the global PageRanks of the entire web.\nIn this work, we present a method of approximating the global PageRanks of a local domain while only using resources of the same order as those needed to compute the PageRanks of the local subgraph.\nOur proposed method looks for a supergraph of our local subgraph such that the local PageRanks within this supergraph are close to the true global PageRanks.\nWe construct this supergraph 
by iteratively crawling global pages on the current web frontier--i.e., global pages with inlinks from pages that have already been crawled.\nIn order to provide\na good approximation to the global PageRanks, care must be taken when choosing which pages to crawl next; in this paper, we present a well-motivated page selection algorithm that also performs well empirically.\nThis algorithm is derived from a well-defined problem objective and has a running time linear in the number of local nodes.\nWe experiment across several types of local subgraphs, including four topic related communities and several sitespecific domains.\nTo evaluate performance, we measure the difference between the current global PageRank estimate and the global PageRank, as a function of the number of pages crawled.\nWe compare our algorithm against several heuristics and also against a baseline algorithm that chooses pages at random, and we show that our method outperforms these other methods.\nFinally, we empirically demonstrate that, given a local domain of size n, we can provide good approximations to the global PageRank values by crawling at most n or 2n additional pages.\nThe paper is organized as follows.\nSection 2 gives an overview of localized search engines and outlines their advantages over global search.\nSection 3 provides background on the PageRank algorithm.\nSection 4 formally defines our problem, and section 5 presents our page selection criteria and derives our algorithms.\nSection 6 provides experimental results, section 7 gives an overview of related work, and, finally, conclusions are given in section 8.\n2.\nLOCALIZED SEARCH ENGINES\nResearch Track Paper\n3.\nPAGERANK OVERVIEW\nResearch Track Paper\n4.\nPROBLEM DEFINITION\nALGORITHM 2: The FINDGLOBALPR algorithm.\n5.\nNODE SELECTION\n5.1 Formulation\nResearch Track Paper\n5.2 Computation\nResearch Track Paper\n5.2.1 Stochastic Complementation\n5.2.2 PageRank Flows\nResearch Track Paper\n6.\nEXPERIMENTS\n6.1 
Methodology\n6.2 Results\n7.\nRELATED WORK\nThe node selection framework we have proposed is similar to the url ordering for crawling problem proposed by Cho et al. in [6].\nWhereas our framework seeks to minimize the difference between the global and local PageRank, the objective used in [6] is to crawl the most highly (globally) ranked pages first.\nThey propose several node selection algorithms, including the outlink count heuristic, as well as a variant of our PF-SELECT algorithm which they refer to as the ` PageRank ordering metric'.\nThey found this method to be most effective in optimizing their objective, as did a recent survey of these methods by Baeza-Yates et al. [1].\nBoldi et al. also experiment within a similar crawling framework in [2], but quantify their results by comparing Kendall's rank correlation between the PageRanks of the current set of crawled pages and those of the entire global graph.\nThey found that node selection strategies that crawled pages with the highest global PageRank first actually performed worse (with respect to Kendall's Tau correlation between the local and global PageRanks) than basic depth first or breadth first strategies.\nHowever, their experiments differ from our work in that our node selection algorithms do not use (or have access to) global PageRank values.\nMany algorithmic improvements for computing exact PageRank values have been proposed [9, 10, 14].\nIf such algorithms are used to compute the global PageRanks of our local domain, they would all require O (N) computation, storage, and bandwidth, where N is the size of the global domain.\nThis is in contrast to our method, which approximates the global PageRank and scales linearly with the size of the local domain.\nWang and Dewitt [22] propose a system where the set of web servers that comprise the global domain communicate with each other to compute their respective global PageRanks.\nFor a given web server hosting n pages, the computational, bandwidth, and 
storage requirements are also linear in n.\nOne drawback of this system is that the number of distinct web servers that comprise the global domain can be very large.\nFor example, our ` edu' dataset contains websites from over 3,200 different universities; coordinating such a system among a large number of sites can be very difficult.\nGan, Chen, and Suel propose a method for estimating the PageRank of a single page [5] which uses only constant bandwidth, computation, and space.\nTheir approach relies on the availability of a remote connectivity server that can supply the set of inlinks to a given page, an assumption not used in our framework.\nThey experimentally show that a reasonable estimate of the node's PageRank can be obtained by visiting at most a few hundred nodes.\nUsing their algorithm for our problem would require that either the entire global domain first be downloaded or a connectivity server be used, both of which would lead to very large web graphs.\n8.\nCONCLUSIONS AND FUTURE WORK\nThe internet is growing exponentially, and in order to navigate such a large repository as the web, global search engines have established themselves as a necessity.\nAlong with the ubiquity of these large-scale search engines comes an increase in search users' expectations.\nBy providing complete and isolated coverage of a particular web domain, localized search engines are an effective outlet to quickly locate content that could otherwise be difficult to find.\nIn this work, we contend that the use of global PageRank in a localized search engine can improve performance.\nTo estimate the global PageRank, we have proposed an iterative node selection framework where we select which pages from the global frontier to crawl next.\nOur primary contribution is our stochastic complementation page selection algorithm.\nThis method crawls nodes that will most significantly impact the local domain and has running time linear in the number of nodes in the local 
domain.\nExperimentally, we validate these methods across a diverse set of local domains, including seven site-specific domains and four topic-specific domains.\nWe conclude that by crawling an additional n or 2n pages, our methods find an estimate of the global PageRanks that is up to ten times better than just using the local PageRanks.\nFurthermore, we demonstrate that our algorithm consistently outperforms other existing heuristics.\nResearch Track Paper\nOften times, topic-specific domains are discovered using a focused web crawler which considers a page's content in conjunction with link anchor text to decide which pages to crawl next [4].\nAlthough such crawlers have proven to be quite effective in discovering topic-related content, many irrelevant pages are also crawled in the process.\nTypically, these pages are deleted and not indexed by the localized search engine.\nThese pages can of course provide valuable information regarding the global PageRank of the local domain.\nOne way to integrate these pages into our framework is to start the FINDGLOBALPR algorithm with the current subgraph F equal to the set of pages that were crawled by the focused crawler.\nThe global PageRank estimation framework, along with the node selection algorithms presented, all require O (n) computation per iteration and bandwidth proportional to the number of pages crawled, Tk.\nIf the number of iterations T is relatively small compared to the number of pages crawled per iteration, k, then the bottleneck of the algorithm will be the crawling phase.\nHowever, as the number of iterations increases (relative to k), the bottleneck will reside in the node selection computation.\nIn this case, our algorithms would benefit from constant factor optimizations.\nRecall that the FINDGLOBALPR algorithm (Algorithm 2) requires that the PageRanks of the current expanded local domain be recomputed in each iteration.\nRecent work by Langville and Meyer [12] gives an algorithm to quickly recompute 
PageRanks of a given webgraph if a small number of nodes are added.\nThis algorithm was shown to give speedup of five to ten times on some datasets.\nWe plan to investigate this and other such optimizations as future work.\nIn this paper, we have objectively evaluated our methods by measuring how close our global PageRank estimates are to the actual global PageRanks.\nTo determine the benefit of using global PageRanks in a localized search engine, we suggest a user study in which users are asked to rate the quality of search results for various search queries.\nFor some queries, only the local PageRanks are used in ranking, and for the remaining queries, local PageRanks and the approximate global PageRanks, as computed by our algorithms, are used.\nThe results of such a study can then be analyzed to determine the added benefit of using the global PageRanks computed by our methods, over just using the local PageRanks.\nAcknowledgements.\nThis research was supported by NSF grant CCF-0431257, NSF Career Award ACI-0093404, and a grant from Sabre, Inc. 
.","lvl-4":"Estimating the Global PageRank of Web Communities\nABSTRACT\nLocalized search engines are small-scale systems that index a particular community on the web.\nThey offer several benefits over their large-scale counterparts in that they are relatively inexpensive to build, and can provide more precise and complete search capability over their relevant domains.\nOne disadvantage such systems have over large-scale search engines is the lack of global PageRank values.\nSuch information is needed to assess the value of pages in the localized search domain within the context of the web as a whole.\nIn this paper, we present well-motivated algorithms to estimate the global PageRank values of a local domain.\nThe algorithms are all highly scalable in that, given a local domain of size n, they use O (n) resources that include computation time, bandwidth, and storage.\nWe test our methods across a variety of localized domains, including site-specific domains and topic-specific domains.\nWe demonstrate that by crawling as few as n or 2n additional pages, our methods can give excellent global PageRank estimates.\n1.\nINTRODUCTION\nLocalized search engines are small-scale search engines that index only a single community of the web.\nSuch communities can be site-specific domains, such as pages within\nthe cs.utexas.edu domain, or topic-related communities--for example, political websites.\nCompared to the web graph crawled and indexed by large-scale search engines, the size of such local communities is typically orders of magnitude smaller.\nConsequently, the computational resources needed to build such a search engine are also similarly lighter.\nBy restricting themselves to smaller, more manageable sections of the web, localized search engines can also provide more precise and complete search capabilities over their respective domains.\nOne drawback of localized indexes is the lack of global information needed to compute link-based rankings.\nThe PageRank algorithm 
[3], has proven to be an effective such measure.\nIn general, the PageRank of a given page is dependent on pages throughout the entire web graph.\nFor example, consider a localized search engine that indexes political pages with conservative views.\nA person wishing to research the opinions on global warming within the conservative political community may encounter numerous such opinions across various websites.\nIf only local PageRank values are available, then the search results will reflect only strongly held beliefs within the community.\nThus, for many localized search engines, incorporating global PageRanks can improve the quality of search results.\nHowever, the number of pages a local search engine indexes is typically orders of magnitude smaller than the number of pages indexed by their large-scale counterparts.\nLocalized search engines do not have the bandwidth, storage capacity, or computational power to crawl, download, and compute the global PageRanks of the entire web.\nIn this work, we present a method of approximating the global PageRanks of a local domain while only using resources of the same order as those needed to compute the PageRanks of the local subgraph.\nOur proposed method looks for a supergraph of our local subgraph such that the local PageRanks within this supergraph are close to the true global PageRanks.\nWe construct this supergraph by iteratively crawling global pages on the current web frontier--i.e., global pages with inlinks from pages that have already been crawled.\nIn order to provide\na good approximation to the global PageRanks, care must be taken when choosing which pages to crawl next; in this paper, we present a well-motivated page selection algorithm that also performs well empirically.\nThis algorithm is derived from a well-defined problem objective and has a running time linear in the number of local nodes.\nWe experiment across several types of local subgraphs, including four topic related communities and several 
sitespecific domains.\nTo evaluate performance, we measure the difference between the current global PageRank estimate and the global PageRank, as a function of the number of pages crawled.\nWe compare our algorithm against several heuristics and also against a baseline algorithm that chooses pages at random, and we show that our method outperforms these other methods.\nFinally, we empirically demonstrate that, given a local domain of size n, we can provide good approximations to the global PageRank values by crawling at most n or 2n additional pages.\nThe paper is organized as follows.\nSection 2 gives an overview of localized search engines and outlines their advantages over global search.\nSection 3 provides background on the PageRank algorithm.\nSection 4 formally defines our problem, and section 5 presents our page selection criteria and derives our algorithms.\n7.\nRELATED WORK\nThe node selection framework we have proposed is similar to the url ordering for crawling problem proposed by Cho et al. 
in [6].\nWhereas our framework seeks to minimize the difference between the global and local PageRank, the objective used in [6] is to crawl the most highly (globally) ranked pages first.\nThey found that node selection strategies that crawled pages with the highest global PageRank first actually performed worse (with respect to Kendall's Tau correlation between the local and global PageRanks) than basic depth first or breadth first strategies.\nHowever, their experiments differ from our work in that our node selection algorithms do not use (or have access to) global PageRank values.\nMany algorithmic improvements for computing exact PageRank values have been proposed [9, 10, 14].\nIf such algorithms are used to compute the global PageRanks of our local domain, they would all require O (N) computation, storage, and bandwidth, where N is the size of the global domain.\nThis is in contrast to our method, which approximates the global PageRank and scales linearly with the size of the local domain.\nWang and Dewitt [22] propose a system where the set of web servers that comprise the global domain communicate with each other to compute their respective global PageRanks.\nFor a given web server hosting n pages, the computational, bandwidth, and storage requirements are also linear in n.\nOne drawback of this system is that the number of distinct web servers that comprise the global domain can be very large.\nGan, Chen, and Suel propose a method for estimating the PageRank of a single page [5] which uses only constant bandwidth, computation, and space.\nUsing their algorithm for our problem would require that either the entire global domain first be downloaded or a connectivity server be used, both of which would lead to very large web graphs.\n8.\nCONCLUSIONS AND FUTURE WORK\nThe internet is growing exponentially, and in order to navigate such a large repository as the web, global search engines have established themselves as a necessity.\nAlong with the ubiquity of 
these large-scale search engines comes an increase in search users' expectations. By providing complete and isolated coverage of a particular web domain, localized search engines are an effective outlet to quickly locate content that could otherwise be difficult to find.

In this work, we contend that the use of global PageRank in a localized search engine can improve performance. To estimate the global PageRank, we have proposed an iterative node selection framework in which we select which pages from the global frontier to crawl next. Our primary contribution is our stochastic complementation page selection algorithm. This method crawls nodes that will most significantly impact the local domain and has running time linear in the number of nodes in the local domain. Experimentally, we validate these methods across a diverse set of local domains, including seven site-specific domains and four topic-specific domains. We conclude that by crawling an additional n or 2n pages, our methods find an estimate of the global PageRanks that is up to ten times better than just using the local PageRanks. Furthermore, we demonstrate that our algorithm consistently outperforms other existing heuristics.

Research Track Paper

Although such crawlers have proven to be quite effective in discovering topic-related content, many irrelevant pages are also crawled in the process. Typically, these pages are deleted and not indexed by the localized search engine. These pages can, of course, provide valuable information regarding the global PageRank of the local domain. One way to integrate these pages into our framework is to start the FINDGLOBALPR algorithm with the current subgraph F equal to the set of pages that were crawled by the focused crawler.

The global PageRank estimation framework, along with the node selection algorithms presented, all require O(n) computation per iteration and bandwidth proportional to the number of pages crawled, Tk. If the number of iterations T is relatively small compared to the number of pages crawled per iteration, k, then the bottleneck of the algorithm will be the crawling phase. In this case, our algorithms would benefit from constant-factor optimizations. Recall that the FINDGLOBALPR algorithm (Algorithm 2) requires that the PageRanks of the current expanded local domain be recomputed in each iteration. Recent work by Langville and Meyer [12] gives an algorithm to quickly recompute the PageRanks of a given webgraph if a small number of nodes are added. This algorithm was shown to give a speedup of five to ten times on some datasets. We plan to investigate this and other such optimizations as future work.

In this paper, we have objectively evaluated our methods by measuring how close our global PageRank estimates are to the actual global PageRanks. To determine the benefit of using global PageRanks in a localized search engine, we suggest a user study in which users are asked to rate the quality of search results for various search queries. For some queries, only the local PageRanks are used in ranking, and for the remaining queries, local PageRanks and the approximate global PageRanks, as computed by our algorithms, are used. The results of such a study can then be analyzed to determine the added benefit of using the global PageRanks computed by our methods, over just using the local PageRanks.

Acknowledgements.

Estimating the Global PageRank of Web Communities

ABSTRACT

Localized search engines are small-scale systems that index a particular community on the web. They offer several benefits over their large-scale counterparts in that they are relatively inexpensive to build, and can provide more precise and complete search capability over their relevant domains. One disadvantage such systems have over large-scale search engines is the lack of global PageRank values. Such information is needed to assess the value of pages in the localized search domain within the context of the web
as a whole. In this paper, we present well-motivated algorithms to estimate the global PageRank values of a local domain. The algorithms are all highly scalable in that, given a local domain of size n, they use O(n) resources, including computation time, bandwidth, and storage. We test our methods across a variety of localized domains, including site-specific domains and topic-specific domains. We demonstrate that by crawling as few as n or 2n additional pages, our methods can give excellent global PageRank estimates.

1. INTRODUCTION

Localized search engines are small-scale search engines that index only a single community of the web. Such communities can be site-specific domains, such as pages within the cs.utexas.edu domain, or topic-related communities (for example, political websites). Compared to the web graph crawled and indexed by large-scale search engines, the size of such local communities is typically orders of magnitude smaller. Consequently, the computational resources needed to build such a search engine are also similarly lighter. By restricting themselves to smaller, more manageable sections of the web, localized search engines can also provide more precise and complete search capabilities over their respective domains.

One drawback of localized indexes is the lack of global information needed to compute link-based rankings. The PageRank algorithm [3] has proven to be an effective such measure. In general, the PageRank of a given page is dependent on pages throughout the entire web graph. In the context of a localized search engine, if the PageRanks are computed using only the local subgraph, then we would expect the resulting PageRanks to reflect the perceived popularity within the local community and not of the web as a whole. For example, consider a localized search engine that indexes political pages with conservative views. A person wishing to research the opinions on global warming within the conservative political community may encounter numerous such opinions across various websites. If only local PageRank values are available, then the search results will reflect only strongly held beliefs within the community. However, if global PageRanks are also available, then the results can additionally reflect outsiders' views of the conservative community (those documents that liberals most often access within the conservative community). Thus, for many localized search engines, incorporating global PageRanks can improve the quality of search results.

However, the number of pages a local search engine indexes is typically orders of magnitude smaller than the number of pages indexed by its large-scale counterparts. Localized search engines do not have the bandwidth, storage capacity, or computational power to crawl, download, and compute the global PageRanks of the entire web. In this work, we present a method of approximating the global PageRanks of a local domain while only using resources of the same order as those needed to compute the PageRanks of the local subgraph.

Our proposed method looks for a supergraph of our local subgraph such that the local PageRanks within this supergraph are close to the true global PageRanks. We construct this supergraph by iteratively crawling global pages on the current web frontier, i.e., global pages with inlinks from pages that have already been crawled. In order to provide a good approximation to the global PageRanks, care must be taken when choosing which pages to crawl next; in this paper, we present a well-motivated page selection algorithm that also performs well empirically. This algorithm is derived from a well-defined problem objective and has a running time linear in the number of local nodes.

We experiment across several types of local subgraphs, including four topic-related communities and several site-specific domains. To evaluate performance, we measure the difference between the current global PageRank estimate and
the global PageRank, as a function of the number of pages crawled. We compare our algorithm against several heuristics and also against a baseline algorithm that chooses pages at random, and we show that our method outperforms these other methods. Finally, we empirically demonstrate that, given a local domain of size n, we can provide good approximations to the global PageRank values by crawling at most n or 2n additional pages.

The paper is organized as follows. Section 2 gives an overview of localized search engines and outlines their advantages over global search. Section 3 provides background on the PageRank algorithm. Section 4 formally defines our problem, and section 5 presents our page selection criteria and derives our algorithms. Section 6 provides experimental results, section 7 gives an overview of related work, and, finally, conclusions are given in section 8.

2. LOCALIZED SEARCH ENGINES

Localized search engines index a single community of the web, typically either a site-specific community or a topic-specific community. Localized search engines enjoy three major advantages over their large-scale counterparts: they are relatively inexpensive to build, they can offer more precise search capability over their local domain, and they can provide a more complete index.

The resources needed to build a global search engine are enormous. A 2003 study by Lyman et al. [13] found that the 'surface web' (publicly available static sites) consists of 8.9 billion pages, and that the average size of these pages is approximately 18.7 kilobytes. To download a crawl of this size, approximately 167 terabytes of space is needed. For a researcher who wishes to build a search engine with access to a couple of workstations or a small server, storage of this magnitude is simply not available. However, building a localized search engine over a web community of a hundred thousand pages would only require a few gigabytes of storage. The computational burden required to support search queries over a database of this size is more manageable as well.

We note that, for topic-specific search engines, the relevant community can be efficiently identified and downloaded by using a focused crawler [21, 4]. For site-specific domains, the local domain is readily available on its own web server. This obviates the need for crawling or spidering, and a complete and up-to-date index of the domain can thus be guaranteed. This is in contrast to their large-scale counterparts, which suffer from several shortcomings. First, crawling dynamically generated pages (pages in the 'hidden web') has been the subject of research [20] and is a non-trivial task for an external crawler. Second, site-specific domains can enable the robots exclusion policy. This prohibits external search engines' crawlers from downloading content from the domain, and an external search engine must instead rely on outside links and anchor text to index these restricted pages.

By restricting itself to only a specific domain of the internet, a localized search engine can provide more precise search results. Consider the canonical ambiguous search query, 'jaguar', which can refer to either the car manufacturer or the animal. A scientist trying to research the habitat and evolutionary history of a jaguar may have better success using a finely tuned
zoology-specific search engine than querying Google with multiple keyword searches and wading through irrelevant results. A method to learn better ranking functions for retrieval was recently proposed by Radlinski and Joachims [19] and has been applied to various local domains, including Cornell University's website [8].

3. PAGERANK OVERVIEW

The PageRank algorithm defines the importance of web pages by analyzing the underlying hyperlink structure of a web graph. The algorithm works by building a Markov chain from the link structure of the web graph and computing its stationary distribution. One way to compute the stationary distribution of a Markov chain is to find the limiting distribution of a random walk over the chain. Thus, the PageRank algorithm uses what is sometimes referred to as the 'random surfer' model. In each step of the random walk, the 'surfer' either follows an outlink from the current page (i.e., the current node in the chain), or jumps to a random page on the web.

We now precisely define the PageRank problem. Let U be an m × m adjacency matrix for a given web graph such that Uji = 1 if page i links to page j and Uji = 0 otherwise. We define the PageRank matrix PU to be:

    PU = αU D_U^{-1} + (1 − α) v e^T,                            (1)

where D_U is the diagonal matrix of outlink counts, so that U D_U^{-1} is column stochastic, α is a given scalar such that 0 < α < 1, e is the vector of all ones, and v is a non-negative, L1-normalized vector, sometimes called the 'random surfer' vector. Note that the matrix D_U^{-1} is well-defined only if each column of U has at least one non-zero entry, i.e., each page in the webgraph has at least one outlink. In the presence of such 'dangling nodes' that have no outlinks, one commonly used solution, proposed by Brin et al. [3], is to replace each zero column of U by a non-negative, L1-normalized vector.

The PageRank vector r is the dominant eigenvector of the PageRank matrix, r = PU r. We will assume, without loss of generality, that r has an L1-norm of one. Computationally, r can be computed using the power method. This method first chooses a random starting vector r(0), and iteratively multiplies the current vector by the PageRank matrix PU; see Algorithm 1. In general, each iteration of the power method can take O(m^2) operations when PU is a dense matrix. However, in practice, the number of links in a web graph will be of the order of the number of pages. By exploiting the sparsity of the PageRank matrix, the work per iteration can be reduced to O(km), where k is the average number of links per web page. It has also been shown that the total number of iterations needed for convergence is proportional to α and does not depend on the size of the web graph [11, 7]. Finally, the total space needed is also O(km), mainly to store the matrix U.

ALGORITHM 1: A linear time (per iteration) algorithm for computing PageRank.

4. PROBLEM DEFINITION

Given a local domain L, let G be an N × N adjacency matrix for the entire connected component of the web that contains L, such that Gji = 1 if page i links to page j and Gji = 0 otherwise. Without loss of generality, we will partition G as:

    G = [ L      Gout    ]
        [ Lout   Gwithin ],                                      (2)

where L is the n × n local subgraph corresponding to links inside the local domain, Lout is the subgraph that corresponds to links from the local domain pointing out to the global domain, Gout is the subgraph containing links from the global domain into the local domain, and Gwithin contains links within the global domain. We assume that when building a localized search engine, only pages inside the local domain are crawled, and the links between these pages are represented by the subgraph L. The links in Lout are also known, as these point from crawled pages in the
local domain to uncrawled pages in the global domain.

As defined in equation (1), PG is the PageRank matrix formed from the global graph G, and we define the global PageRank vector of this graph to be g. Let the n-length vector p* be the L1-normalized vector corresponding to the global PageRank of the pages in the local domain L:

    p* = E_L g / ‖E_L g‖₁,

where E_L = [I | 0] is the restriction matrix that selects the components from g corresponding to nodes in L. Let p denote the PageRank vector constructed from the local domain subgraph L. In practice, the observed local PageRank p and the global PageRank p* will be quite different. One would expect that as the size of the local matrix L approaches the size of the global matrix G, the global PageRank and the observed local PageRank will become more similar. Thus, one approach to estimating the global PageRank is to crawl the entire global domain, compute its PageRank, and extract the PageRanks of the local domain. Typically, however, n ≪ N, i.e., the number of global pages is much larger than the number of local pages. Therefore, crawling all global pages will quickly exhaust all local resources (computational, storage, and bandwidth) available to create the local search engine.

We instead seek a supergraph F̂ of our local subgraph L with size O(n). Our goal is to find such a supergraph F̂ with PageRank f̂, so that f̂, when restricted to L, is close to p*. Formally, we seek to minimize

    GlobalDiff(f̂) = ‖ E_L f̂ / ‖E_L f̂‖₁ − p* ‖₁.                (3)

We choose the L1 norm for measuring the error as it does not place excessive weight on outliers (as the L2 norm does, for example), and also because it is the most commonly used distance measure in the literature for comparing PageRank vectors, as well as for detecting convergence of the algorithm [3]. We propose a greedy framework, given in Algorithm 2, for constructing F̂.
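The greedy loop can be sketched in a few lines of Python. This is a minimal sketch, not the paper's implementation: `select_nodes` and `crawl` are hypothetical stand-ins for SELECTNODES and the crawling step, and `pagerank` is a dense power-method iteration in the spirit of Algorithm 1.

```python
import numpy as np

def pagerank(A, alpha=0.85, tol=1e-6):
    """Power method: A[j, i] = 1 iff page i links to page j.
    Dangling columns are replaced by the uniform vector."""
    m = A.shape[0]
    out = A.sum(axis=0)                                      # outlink counts
    P = np.where(out > 0, A / np.maximum(out, 1), 1.0 / m)   # column stochastic
    r = np.full(m, 1.0 / m)
    while True:
        r_next = alpha * (P @ r) + (1 - alpha) / m           # uniform surfer vector
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

def find_global_pr(L, frontier, select_nodes, crawl, T, k):
    """Greedy FINDGLOBALPR skeleton.  `select_nodes` and `crawl` are
    hypothetical callbacks; crawled pages are assumed to be appended
    after the n local pages, so the first n entries of f always
    correspond to the original local domain."""
    n = L.shape[0]
    F, f = L, pagerank(L)
    for _ in range(T):
        pages = select_nodes(F, frontier, f, k)   # k frontier pages to crawl
        F, frontier = crawl(F, frontier, pages)   # expand the subgraph
        f = pagerank(F)
    p_hat = f[:n]
    return p_hat / p_hat.sum()                    # restrict to L, renormalize
```

A production version would use a sparse matrix for F, but the control flow (select, crawl, recompute) is exactly the T-iteration loop described above.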
ALGORITHM 2: The FINDGLOBALPR algorithm.
FINDGLOBALPR(L, Lout, T, k)
Input: L: zero-one adjacency matrix for the local domain; Lout: zero-one outlink matrix from L to the global subgraph, as in (2); T: number of iterations; k: number of pages to crawl per iteration.
Output: p̂: an improved estimate of the global PageRank of the pages in L.

Initially, F is set to the local subgraph L, and the PageRank f of this graph is computed. The algorithm then proceeds as follows. First, the SELECTNODES algorithm (which we discuss in the next section) is called, and it returns a set of k nodes to crawl next from the set of nodes in the current crawl frontier, Fout. These selected nodes are then crawled to expand the local subgraph, F, and the PageRanks of this expanded graph are then recomputed. These steps are repeated for each of T iterations. Finally, the PageRank vector p̂, which is restricted to pages within the original local domain, is returned. Given our computation, bandwidth, and memory restrictions, we will assume that the algorithm will crawl at most O(n) pages. Since the PageRanks are computed in each iteration of the algorithm, which is an O(n) operation, we will also assume that the number of iterations T is a constant. Of course, the main challenge here is in selecting which set of k nodes to crawl next. In the next section, we formally define the problem and give efficient algorithms.

5. NODE SELECTION

In this section, we present node selection algorithms that operate within the greedy framework presented in the previous section. We first give a well-defined criterion for the page selection problem and provide experimental evidence that this criterion can effectively identify pages that optimize our problem objective (3). We then present our main algorithmic contribution of the paper, a method with linear running time that is derived from this page selection criterion. Finally, we give an intuitive analysis of our algorithm in terms of 'leaks' and 'flows'. We show that if only the 'flow' is
considered, then the resulting method is very similar to a widely used page selection heuristic [6].

5.1 Formulation

For a given page j in the global domain, we define the expanded local graph Fj:

    Fj = [ F      s ]
         [ uj^T   0 ],                                           (4)

where uj is the zero-one vector containing the outlinks from F into page j, and s contains the inlinks from page j into the local domain. Note that we do not allow self-links in this framework. In practice, self-links are often removed, as they only serve to inflate a given page's PageRank.

Observe that the inlinks into F from node j are not known until after node j is crawled. Therefore, we estimate this inlink vector as the expectation over inlink counts among the set of already crawled pages, so that the estimate of each component s[i] is proportional to the number of inlinks that page i receives within the crawled subgraph. In practice, for any given page, this estimate may not reflect the true inlinks from that page. Furthermore, this expectation is sampled from the set of links within the crawled domain, whereas a better estimate would also use links from the global domain. However, the latter distribution is not known to a localized search engine, and we contend that the above estimate will, on average, be a better estimate than the uniform distribution, for example.

Let the PageRank of F be f. We express the PageRank fj+ of the expanded local graph Fj as

    fj+ = [ fj(1 − xj) ]
          [ xj         ],                                        (6)

where xj is the PageRank of the candidate global node j, and fj is the L1-normalized PageRank vector restricted to the pages in F. Since directly optimizing our problem goal requires knowing the global PageRank p*, we instead propose to crawl those nodes that will have the greatest influence on the PageRanks of pages in the original local domain L:

    influence(j) = Σ_{k∈L} | fj[k] − f[k] |.                     (7)

Experimentally, the influence score is a very good predictor of our problem objective (3). For each candidate global node j, figure 1(a) shows the objective function value GlobalDiff(fj) as a function of the influence of page j. The local domain used here is a crawl of conservative political pages (we will provide more details about this dataset in section 6); we observed similar results in other domains. The correlation is quite strong, implying that the influence criterion can effectively identify pages that improve the global PageRank estimate. As a baseline, figure 1(b) compares our objective with an alternative criterion, outlink count. The outlink count is defined as the number of outlinks from the local domain to page j. The correlation here is much weaker.

Figure 1: (a) The correlation between our influence page selection criterion (7) and the actual objective function (3) value is quite strong. (b) This is in contrast to other criteria, such as outlink count, which exhibit a much weaker correlation.

5.2 Computation

As described, for each candidate global page j, the influence score (7) must be computed. If fj is computed exactly for each global page j, then the PageRank algorithm would need to be run for each of the O(n) such global pages j we consider, resulting in an O(n^2) computational cost for the node selection method. Thus, computing the exact value of fj will lead to a quadratic algorithm, and we must instead turn to methods of approximating this vector.

The algorithm we present works by performing one power method iteration used by the PageRank algorithm (Algorithm 1). The convergence rate for the PageRank algorithm has been shown to equal the random surfer probability α [7, 11]. Given a starting vector x(0), if k PageRank iterations are performed, the current PageRank solution x(k) satisfies:

    ‖x(k) − x*‖₁ ≤ α^k ‖x(0) − x*‖₁,                             (8)

where x* is the desired PageRank vector. Therefore, if only one iteration is performed, choosing a good starting vector is necessary to achieve an accurate approximation.

We partition the PageRank matrix PFj, corresponding to the (ℓ+1) × (ℓ+1) graph Fj, as:

    PFj = [ F̃      s̃ ]
          [ ũj^T   w ],                                          (9)

where

    F̃ = αF (D_F + diag(uj))^{-1} + ((1 − α)/(ℓ+1)) e e^T,

and diag(uj) is the diagonal matrix with the (i, i)th entry equal to one if the ith element of uj equals one, and zero otherwise. We have assumed here that the random surfer vector is the uniform vector, and that L has no 'dangling links'. These assumptions are not necessary and serve only to simplify discussion and analysis.

A simple approach for estimating fj is the following. First, estimate the PageRank fj+ of Fj by computing one PageRank iteration over the matrix PFj, using the starting vector ν = [f; 0]. Then, estimate fj by removing the last component from our estimate of fj+ (i.e., the component corresponding to the added node j), and renormalizing. The problem with this approach is in the starting vector. Recall from (6) that xj is the PageRank of the added node j. The difference between the actual PageRank fj+ of PFj and the starting vector ν is

    ‖fj+ − ν‖₁ = ‖fj(1 − xj) − f‖₁ + xj ≥ 2xj.

Thus, by (8), after one PageRank iteration, we expect our estimate of fj+ to still have an error of about 2αxj. In particular, for candidate nodes j with relatively high PageRank xj, this method will yield more inaccurate results. We will next present a method that eliminates this bias and runs in O(n) time.

5.2.1 Stochastic Complementation

Since fj+, as given in (6), is the PageRank of the matrix PFj, we have:

    [ fj(1 − xj) ]   [ F̃      s̃ ] [ fj(1 − xj) ]
    [ xj         ] = [ ũj^T   w ] [ xj         ].

Eliminating xj from this system yields fj = S fj, where

    S = F̃ + (1 − w)^{-1} s̃ ũj^T                                 (10)

is known as the stochastic complement of the column stochastic matrix PFj with respect to the sub-matrix F̃. The theory of stochastic complementation is well studied, and it can be shown that the stochastic complement of an irreducible matrix (such as the PageRank matrix) is unique. Furthermore, the stochastic complement is also irreducible and therefore has a unique stationary distribution as well. For an extensive study, see [15]. It can be easily shown that the sub-dominant eigenvalue of S is at most ((ℓ+1)/ℓ)α, where ℓ is the size of F. For sufficiently large ℓ, this value will be very close to α. This is important, as other properties of the PageRank algorithm, notably the algorithm's sensitivity, are dependent on this value [11].

In this method, we estimate the length-ℓ vector fj by computing one PageRank iteration over the ℓ × ℓ stochastic complement S, starting at the vector f:

    fj ≈ S f.                                                    (11)

This is in contrast to the simple method outlined in the previous section, which first iterates over the (ℓ+1) × (ℓ+1) matrix PFj to estimate fj+, and then removes the last component from the estimate and renormalizes to approximate fj. The problem with the latter method is in the choice of the (ℓ+1)-length starting vector, ν. Consequently, the PageRank estimate given by the simple method differs from the true PageRank by at least 2αxj, where xj is the PageRank of page j. By using the stochastic complement, we can establish a tight lower bound of zero for this difference. To see this, consider the case in which a node k is added to F to form the augmented local subgraph Fk, and suppose that the addition of page k does not change the PageRanks of the pages in F, so that fk = f. By construction of the stochastic complement, fk = S fk, so the approximation given in equation (11) will yield the exact solution.

Next, we present the computational details needed to efficiently compute the quantity ‖fj − f‖₁ over all known global pages j. We begin by expanding the difference fj − f, where the vector fj is estimated as in (11):

    fj − f = S f − f = F̃ f + (1 − w)^{-1} (ũj^T f) s̃ − f.       (12)

Recall that, by definition, we have

    PF = αF D_F^{-1} + ((1 − α)/ℓ) e e^T.                        (13)

Note that the matrix (D_F + diag(uj))^{-1} is diagonal. Letting o[k] be the outlink count for page k in F, we can express the kth diagonal element as 1/(o[k] + uj[k]). Noting that (o[k] + 1)^{-1} = o[k]^{-1} − (o[k](o[k] + 1))^{-1} and rewriting this in matrix form yields

    (D_F + diag(uj))^{-1} = D_F^{-1} − diag(uj)(D_F^2 + D_F)^{-1}.   (14)

Substituting (13) and (14) in (12), and noting that by definition f = PF f, yields

    fj − f = x + y + (ũj^T f) z,                                 (15)

defining the vectors x, y, and z to be

    x = −αF diag(uj)(D_F^2 + D_F)^{-1} f,                        (16)
    y = −((1 − α)/(ℓ(ℓ + 1))) e,                                 (17)
    z = (1 − w)^{-1} s̃.                                          (18)

The first term, x, is a sparse vector, and takes non-zero values only for local pages k' that are siblings of the global page j. We define (i, j) ∈ F if and only if F[j, i] = 1 (equivalently, page i links to page j) and express the value of the component x[k'] as:

    x[k'] = −α Σ_{k : (k,k')∈F, (k,j)∈Fout} f[k] / (o[k](o[k] + 1)),   (19)

where o[k], as before, is the number of outlinks from page k in the local domain. Note that the last two terms, y and z, are not dependent on the current global node j. Given the function hj(f) = ‖y + (ũj^T f) z‖₁, the quantity ‖fj − f‖₁ can be written as

    ‖fj − f‖₁ = hj(f) + Σ_{k : x[k]≠0} ( |x[k] + y[k] + (ũj^T f) z[k]| − |y[k] + (ũj^T f) z[k]| ).   (20)

If we can compute the function hj in linear time, then we can compute each value of ‖fj − f‖₁ using an additional amount of time that is proportional to the number of nonzero components in x. These optimizations are carried out in Algorithm 3. Note that (20) computes the difference between all components of f and fj, whereas our node selection criterion, given in (7), is restricted to the components corresponding to nodes in the original local domain L.
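The one-iteration estimate at the heart of this section can be illustrated numerically. The sketch below (numpy, with hypothetical helper names) forms the stochastic complement of the leading block of a generic column-stochastic matrix and applies a single iteration started at the current PageRank f, mirroring equations (10) and (11).

```python
import numpy as np

def stochastic_complement(P):
    """Stochastic complement S = F~ + (1 - w)^{-1} s~ u~^T of the leading
    l x l block of an (l+1) x (l+1) column-stochastic matrix
    P = [[F~, s~], [u~^T, w]], as in equation (10)."""
    F_t, s_t = P[:-1, :-1], P[:-1, -1]
    u_t, w = P[-1, :-1], P[-1, -1]
    return F_t + np.outer(s_t, u_t) / (1.0 - w)

def one_iteration_estimate(P, f):
    """Estimate f_j by a single power iteration over S, started at the
    current PageRank f (equation (11)): f_j ~= S f."""
    return stochastic_complement(P) @ f
```

Because S is itself column stochastic, the estimate S f remains L1-normalized, which is what makes a single iteration well behaved here, in contrast to the simple method and its biased starting vector.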
Let us examine Algorithm 3 in more detail. First, the algorithm computes the outlink counts for each page in the local domain. The algorithm then computes the quantity ũj^T f for each known global page j. This inner product can be written as

    ũj^T f = (1 − α)/(ℓ + 1) + α Σ_{k : (k,j)∈Fout} f[k] / (o[k] + 1),

where the second term sums over the set of local pages that link to page j. Since the total number of edges in Fout was assumed to have size O(ℓ) (recall that ℓ is the number of pages in F), the running time of this step is also O(ℓ). The algorithm then computes the vectors y and z, as given in (17) and (18), respectively. The L1NORMDIFF method is called on the components of these vectors which correspond to the pages in L, and it estimates the value of ‖E_L(y + (ũj^T f) z)‖₁ for each page j. The estimation works as follows. First, the values of ũj^T f are discretized uniformly into c values {a1, ..., ac}. The quantity ‖E_L(y + ai z)‖₁ is then computed for each discretized value ai and stored in a table. To evaluate ‖E_L(y + a z)‖₁ for some a ∈ [a1, ac], the closest discretized value ai is determined, and the corresponding entry in the table is used. The total running time for this method is linear in ℓ and the discretization parameter c (which we take to be a constant). We note that if exact values are desired, we have also developed an algorithm that runs in O(ℓ log ℓ) time that is not described here.

In the main loop, we compute the vector x, as defined in equation (16). The nested loops iterate over the set of pages in F that are siblings of page j. Typically, the size of this set is bounded by a constant. Finally, for each page j, the scores vector is updated over the set of non-zero components k of the vector x with k ∈ L. This set has size equal to the number of local siblings of page j, and is a subset of the total number of siblings of page j.
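The table-based estimate used by L1NORMDIFF can be sketched as follows. This is a simplified sketch, not the paper's code: `y_L` and `z_L` denote the components of y and z restricted to L, and both function names are hypothetical.

```python
import numpy as np

def build_l1_table(y_L, z_L, a_min, a_max, c=100):
    """Precompute ||y_L + a_i * z_L||_1 at c uniformly spaced values
    a_1, ..., a_c of the inner product u~_j^T f -- O(c * n) total work."""
    grid = np.linspace(a_min, a_max, c)
    table = np.array([np.abs(y_L + a * z_L).sum() for a in grid])
    return grid, table

def l1_norm_diff(grid, table, a):
    """Estimate ||y_L + a * z_L||_1 by the nearest precomputed grid value;
    with c held constant, this is a constant-time lookup per page."""
    i = np.abs(grid - a).argmin()
    return table[i]
```

The approximation error per lookup is bounded by the grid spacing times ‖z_L‖₁, which is why a constant c suffices in practice.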
Thus, each iteration of the main loop takes constant time, and the total running time of the main loop is O(ℓ). Since we have assumed that the size of F will not grow larger than O(n), the total running time for the algorithm is O(n).

ALGORITHM 3: Node Selection via Stochastic Complementation.
SC-SELECT(F, Fout, f, k)
Input: F: zero-one adjacency matrix of size ℓ corresponding to the current local subgraph; Fout: zero-one outlink matrix from F to the global subgraph; f: PageRank of F; k: number of pages to return.
Output: pages: set of k pages to crawl next.
{Compute outlink sums for the local subgraph}
Return the k pages with the highest scores.

5.2.2 PageRank Flows

We now present an intuitive analysis of the stochastic complementation method by decomposing the change in PageRank in terms of 'leaks' and 'flows'. This analysis is motivated by the decomposition given in (15). PageRank 'flow' is the increase in the local PageRanks originating from global page j. The flows are represented by the non-negative vector (ũj^T f) z (equations (15) and (18)). The scalar ũj^T f can be thought of as the total amount of PageRank flow that page j has available to distribute. The vector z dictates how the flow is allocated to the local domain; the flow that local page k receives is proportional to (within a constant factor due to the random surfer vector) the expected number of its inlinks.

The PageRank 'leaks' represent the decrease in PageRank resulting from the addition of page j. The leakage can be quantified in terms of the non-positive vectors x and y (equations (16) and (17)). For the vector x, we can see from equation (19) that the amount of PageRank leaked by a local page is proportional to the weighted sum of the PageRanks of its siblings. Thus, pages that have siblings with higher PageRanks (and low outlink counts) will experience more leakage. The leakage caused by y is an artifact of the random surfer vector.

We will next show that if only the 'flow' term, (ũj^T f) z, is considered, then the resulting method is very similar to a heuristic proposed by Cho et al. [6] that has been widely used for the "Crawling Through URL Ordering" problem. This heuristic is computationally cheaper, but, as we will see later, not as effective as the stochastic complementation method.

Our node selection strategy chooses global nodes that have the largest influence (equation (7)). If this influence is approximated using only 'flows', the optimal node j* is:

    j* = argmax_j  f^T (D_F + diag(uj))^{-1} uj = argmax_j  Σ_{k : (k,j)∈Fout} f[k] / (o[k] + 1).

The resulting page selection score can be expressed as a sum of the PageRanks of each local page k that links to j, where each PageRank value is normalized by o[k] + 1. Interestingly, the normalization that arises in our method differs from the heuristic given in [6], which normalizes by o[k]. The algorithm PF-SELECT, which is omitted due to lack of space, first computes the quantity f^T (D_F + diag(uj))^{-1} uj for each global page j, and then returns the pages with the k largest scores. To see that the running time for this algorithm is O(n), note that the computation involved in this method is a subset of that needed for the SC-SELECT method (Algorithm 3), which was shown to have a running time of O(n).

6. EXPERIMENTS

In this section, we provide experimental evidence to verify the effectiveness of our algorithms. We first outline our experimental methodology and then provide results across a variety of local domains.

6.1 Methodology

Given the limited resources available at an academic institution, crawling a section of the web that is of the same magnitude as that indexed by Google or Yahoo!
is clearly infeasible.\nThus, for a given local domain, we approximate the global graph by crawling a local neighborhood around the domain that is several orders of magnitude larger than the local subgraph.\nEven though such a graph is still orders of magnitude smaller than the ` true' global graph, we contend that, even if there exist some highly influential pages that are very far away from our local domain, it is unrealistic for any local node selection algorithm to find them.\nSuch pages also tend to be highly unrelated to pages within the local domain.\nWhen explaining our node selection strategies in section 5, we made the simplifying assumption that our local graph contained no dangling nodes.\nThis assumption was only made to ease our analysis.\nOur implementation efficiently handles dangling links by replacing each zero column of our adjacency matrix with the uniform vector.\nWe evaluate the algorithm using the two node selection strategies given in Section 5.2, and also against the following baseline methods:\n\u2022 RANDOM: Nodes are chosen uniformly at random among the known global nodes.\n\u2022 OUTLiNKCOUNT: Global nodes with the highest number of outlinks from the local domain are chosen.\nAt each iteration of the FiNDGLOBALPR algorithm, we evaluate performance by computing the difference between the current PageRank estimate of the local domain, ELf ~ ELf ~ 1, and the global PageRank of the local domain ELg ~ ELg ~ 1.\nAll PageRank calculations were performed using the uniform random surfer vector.\nAcross all experiments, we set the random surfer parameter \u03b1, to be .85, and used a convergence threshold of 10 \u2212 6.\nWe evaluate the difference between the local and global PageRank vectors using three different metrics: the L1 and L \u221e norms, and Kendall's tau.\nThe L1 norm measures the sum of the absolute value of the differences between the two vectors, and the L \u221e norm measures the absolute value of the largest 
difference.\nKendall's tau metric is a popular rank correlation measure used to compare PageRanks [2, 11].\nThis metric can be computed by counting the number of pairs of pages that agree in ranking, and subtracting from that the number of pairs of pages that disagree in ranking.\nThe final value is then normalized by the total number of such pairs, resulting in a [-1, 1] range, where a negative score signifies anti-correlation among rankings, and values near one correspond to strong rank correlation.\n6.2 Results\nOur experiments are based on two large web crawls, both downloaded using the web crawler that is part of the Nutch open source search engine project [18].\nAll crawls were restricted to only 'http' pages, and to limit the number of dynamically generated pages that we crawl, we ignored all pages with URLs containing any of the characters '?', '*', '@', or '='.\nThe first crawl, which we will refer to as the 'edu' dataset, was seeded by the homepages of the top 100 graduate computer science departments in the USA, as rated by the US News and World Report [16], and also by the home pages of their respective institutions.\nA crawl of depth 5 was performed, restricted to pages within the '.edu' domain, resulting in a graph with approximately 4.7 million pages and 22.9 million links.\nThe second crawl was seeded by the set of pages under the 'politics' hierarchy in the dmoz open directory project [17].\nWe crawled all pages up to four links away, which yielded a graph with 4.4 million pages and 17.3 million links.\nWithin the 'edu' crawl, we identified the five site-specific domains corresponding to the websites of the top five graduate computer science departments, as ranked by the US News and World Report.\nThis yielded local domains of various sizes, from 10,626 (UIUC) to 59,895 (Berkeley).\nFor each of these site-specific domains with size n, we performed 50 iterations of the FINDGLOBALPR algorithm to crawl a total of 2n additional
nodes.\nFigure 2 (a) gives the L1 difference between the PageRank estimate at each iteration and the global PageRank, for the Berkeley local domain.\nThe performance on this dataset was representative of the typical performance across the five computer science site-specific local domains.\nInitially, the L1 difference between the global and local PageRanks ranged from .0469 (Stanford) to .149 (MIT).\n[Figure 2: L1 difference between the estimated and true global PageRanks for (a) Berkeley's computer science website, (b) the site-specific domain, www.enterstageright.com, and (c) the 'politics' topic-specific domain.\nThe stochastic complement method outperforms all other methods across various domains.]\nFor the first several iterations, the three link-based methods all outperform the random selection heuristic.\nAfter these initial iterations, the random heuristic tended to be more competitive with (or even outperform, as in the Berkeley local domain) the outlink count and PageRank flow heuristics.\nIn all tests, the stochastic complementation method either outperformed, or was competitive with, the other methods.\nTable 1 gives the average difference between the final estimated global PageRanks and the true global PageRanks for various distance measures.\n[Table 1: Average final performance of various node selection strategies for the five site-specific computer science local domains.\nNote that Kendall's Tau measures similarity, while the other metrics are dissimilarity measures.\nStochastic Complementation clearly outperforms the other methods in all metrics.]\nWithin the 'politics' dataset, we also performed two site-specific tests for the largest websites in the crawl: www.adamsmith.org, the website for the London-based Adam Smith Institute, and www.enterstageright.com, an online conservative journal.\nAs with the 'edu' local domains, we ran our algorithm for 50 iterations, crawling a total of 2n nodes.\nFigure 2 (b) plots the results for the
www.enterstageright.com domain.\nIn contrast to the 'edu' local domains, the RANDOM and OUTLINKCOUNT methods were not competitive with either the SC-SELECT or the PF-SELECT methods.\nAmong all datasets and all node selection methods, the stochastic complementation method was most impressive in this dataset, realizing a final estimate that differed only .0279 from the global PageRank, a ten-fold improvement over the initial local PageRank difference of .299.\nFor the Adam Smith local domain, the initial difference between the local and global PageRanks was .148, and the final estimates given by the SC-SELECT, PF-SELECT, OUTLINKCOUNT, and RANDOM methods were .0208, .0193, .0222, and .0356, respectively.\nWithin the 'politics' dataset, we constructed four topic-specific local domains.\nThe first domain consisted of all pages in the dmoz politics category, and also all pages within each of these sites up to two links away.\nThis yielded a local domain of 90,811 pages, and the results are given in figure 2 (c).\nBecause of the larger size of the topic-specific domains, we ran our algorithm for only 25 iterations to crawl a total of n nodes.\nWe also created topic-specific domains from three political sub-topics: liberalism, conservatism, and socialism.\nThe pages in these domains were identified by their corresponding dmoz categories.\nFor each sub-topic, we set the local domain to be all pages within three links from the corresponding dmoz category pages.\nTable 2 summarizes the performance of these three topic-specific domains, and also the larger political domain.\nTo quantify a global page j's effect on the global PageRank values of pages in the local domain, we define page j's impact to be its PageRank value, g[j], normalized by the fraction of its outlinks pointing to the local domain:\nimpact(j) = (oL[j] / o[j]) · g[j],\nwhere oL[j] is the number of outlinks from page j to pages in the local domain L, and o[j] is the total number of j's outlinks.\nIn terms of the random surfer model, the
impact of page j is the probability that the random surfer (1) is currently at global page j in her random walk and (2) takes an outlink to a local page, given that she has already decided not to jump to a random page.\nFor the politics local domain, we found that many of the pages with high impact were in fact political pages that should have been included in the dmoz politics topic, but were not.\nFor example, the two most influential global pages were the political search engine www.askhenry.com, and the home page of the online political magazine, www.policyreview.com.\nAmong non-political pages, the home page of the journal ``Education Next'' was most influential.\nThe journal is freely available online and contains articles regarding various aspects of K-12 education in America.\nTo provide some anecdotal evidence for the effectiveness of our page selection methods, we note that the SC-SELECT method chose 11 pages within the www.educationnext.org domain, the PF-SELECT method discovered 7 such pages, while the OUTLINKCOUNT and RANDOM methods found only 6 pages each.\nFor the conservative political local domain, the socialist website www.ornery.org had a very high impact score.\n[Table 2: Final performance among node selection strategies for the four political topic-specific crawls.\nNote that Kendall's Tau measures similarity, while the other metrics are dissimilarity measures.]\nThis was largely due to a link from the front page of this site to an article regarding global warming published by the National Center for Public Policy Research, a conservative research group in Washington, DC.\nNot surprisingly, the global PageRank of this article (which happens to be on the home page of the NCPPR, www.nationalresearch.com) was approximately .002, whereas the local PageRank of this page was only .00158.\nThe SC-SELECT method yielded a global PageRank estimate of approximately .00182, the PF-SELECT method estimated a value of .00167, and the RANDOM and
OUTLINKCOUNT methods yielded values of .01522 and .00171, respectively.\n7.\nRELATED WORK\nThe node selection framework we have proposed is similar to the URL ordering for crawling problem proposed by Cho et al. in [6].\nWhereas our framework seeks to minimize the difference between the global and local PageRank, the objective used in [6] is to crawl the most highly (globally) ranked pages first.\nThey propose several node selection algorithms, including the outlink count heuristic, as well as a variant of our PF-SELECT algorithm which they refer to as the 'PageRank ordering metric'.\nThey found this method to be most effective in optimizing their objective, as did a recent survey of these methods by Baeza-Yates et al. [1].\nBoldi et al. also experiment within a similar crawling framework in [2], but quantify their results by comparing Kendall's rank correlation between the PageRanks of the current set of crawled pages and those of the entire global graph.\nThey found that node selection strategies that crawled pages with the highest global PageRank first actually performed worse (with respect to Kendall's Tau correlation between the local and global PageRanks) than basic depth-first or breadth-first strategies.\nHowever, their experiments differ from our work in that our node selection algorithms do not use (or have access to) global PageRank values.\nMany algorithmic improvements for computing exact PageRank values have been proposed [9, 10, 14].\nIf such algorithms are used to compute the global PageRanks of our local domain, they would all require O(N) computation, storage, and bandwidth, where N is the size of the global domain.\nThis is in contrast to our method, which approximates the global PageRank and scales linearly with the size of the local domain.\nWang and DeWitt [22] propose a system where the set of web servers that comprise the global domain communicate with each other to compute their respective global PageRanks.\nFor a given web server hosting
n pages, the computational, bandwidth, and storage requirements are also linear in n.\nOne drawback of this system is that the number of distinct web servers that comprise the global domain can be very large.\nFor example, our ` edu' dataset contains websites from over 3,200 different universities; coordinating such a system among a large number of sites can be very difficult.\nGan, Chen, and Suel propose a method for estimating the PageRank of a single page [5] which uses only constant bandwidth, computation, and space.\nTheir approach relies on the availability of a remote connectivity server that can supply the set of inlinks to a given page, an assumption not used in our framework.\nThey experimentally show that a reasonable estimate of the node's PageRank can be obtained by visiting at most a few hundred nodes.\nUsing their algorithm for our problem would require that either the entire global domain first be downloaded or a connectivity server be used, both of which would lead to very large web graphs.\n8.\nCONCLUSIONS AND FUTURE WORK\nThe internet is growing exponentially, and in order to navigate such a large repository as the web, global search engines have established themselves as a necessity.\nAlong with the ubiquity of these large-scale search engines comes an increase in search users' expectations.\nBy providing complete and isolated coverage of a particular web domain, localized search engines are an effective outlet to quickly locate content that could otherwise be difficult to find.\nIn this work, we contend that the use of global PageRank in a localized search engine can improve performance.\nTo estimate the global PageRank, we have proposed an iterative node selection framework where we select which pages from the global frontier to crawl next.\nOur primary contribution is our stochastic complementation page selection algorithm.\nThis method crawls nodes that will most significantly impact the local domain and has running time linear in the number 
of nodes in the local domain.\nExperimentally, we validate these methods across a diverse set of local domains, including seven site-specific domains and four topic-specific domains.\nWe conclude that by crawling an additional n or 2n pages, our methods find an estimate of the global PageRanks that is up to ten times better than just using the local PageRanks.\nFurthermore, we demonstrate that our algorithm consistently outperforms other existing heuristics.\nOftentimes, topic-specific domains are discovered using a focused web crawler which considers a page's content in conjunction with link anchor text to decide which pages to crawl next [4].\nAlthough such crawlers have proven to be quite effective in discovering topic-related content, many irrelevant pages are also crawled in the process.\nTypically, these pages are deleted and not indexed by the localized search engine.\nThese pages can of course provide valuable information regarding the global PageRank of the local domain.\nOne way to integrate these pages into our framework is to start the FINDGLOBALPR algorithm with the current subgraph F equal to the set of pages that were crawled by the focused crawler.\nThe global PageRank estimation framework, along with the node selection algorithms presented, all require O(n) computation per iteration and bandwidth proportional to the number of pages crawled, Tk.\nIf the number of iterations T is relatively small compared to the number of pages crawled per iteration, k, then the bottleneck of the algorithm will be the crawling phase.\nHowever, as the number of iterations increases (relative to k), the bottleneck will reside in the node selection computation.\nIn this case, our algorithms would benefit from constant-factor optimizations.\nRecall that the FINDGLOBALPR algorithm (Algorithm 2) requires that the PageRanks of the current expanded local domain be recomputed in each iteration.\nRecent work by Langville and Meyer [12] gives an
algorithm to quickly recompute PageRanks of a given webgraph if a small number of nodes are added.\nThis algorithm was shown to give speedup of five to ten times on some datasets.\nWe plan to investigate this and other such optimizations as future work.\nIn this paper, we have objectively evaluated our methods by measuring how close our global PageRank estimates are to the actual global PageRanks.\nTo determine the benefit of using global PageRanks in a localized search engine, we suggest a user study in which users are asked to rate the quality of search results for various search queries.\nFor some queries, only the local PageRanks are used in ranking, and for the remaining queries, local PageRanks and the approximate global PageRanks, as computed by our algorithms, are used.\nThe results of such a study can then be analyzed to determine the added benefit of using the global PageRanks computed by our methods, over just using the local PageRanks.\nAcknowledgements.\nThis research was supported by NSF grant CCF-0431257, NSF Career Award ACI-0093404, and a grant from Sabre, Inc. .","keyphrases":["global pagerank","web commun","local search engin","algorithm","local domain","topic-specif domain","larg-scale search engin","link-base rank","subgraph","global graph","crawl problem","experiment"],"prmu":["P","P","P","P","P","P","M","U","U","M","M","U"]} {"id":"H-54","title":"Knowledge-intensive Conceptual Retrieval and Passage Extraction of Biomedical Literature","abstract":"This paper presents a study of incorporating domain-specific knowledge (i.e., information about concepts and relationships between concepts in a certain domain) in an information retrieval (IR) system to improve its effectiveness in retrieving biomedical literature. The effects of different types of domain-specific knowledge in performance contribution are examined. 
Based on the TREC platform, we show that appropriate use of domain-specific knowledge in a proposed conceptual retrieval model yields about 23% improvement over the best reported result in passage retrieval in the Genomics Track of TREC 2006.","lvl-1":"Knowledge-intensive Conceptual Retrieval and Passage Extraction of Biomedical Literature Wei Zhou, Clement Yu Department of Computer Science University of Illinois at Chicago wzhou8@uic.edu, yu@cs.uic.edu Neil Smalheiser, Vetle Torvik Department of Psychiatry and Psychiatric Institute (MC912) University of Illinois at Chicago {neils, vtorvik}@uic.\nedu Jie Hong Division of Epidemiology and Biostatistics, School of Public health University of Illinois at Chicago jhong20@uic.edu ABSTRACT This paper presents a study of incorporating domain-specific knowledge (i.e., information about concepts and relationships between concepts in a certain domain) in an information retrieval (IR) system to improve its effectiveness in retrieving biomedical literature.\nThe effects of different types of domain-specific knowledge in performance contribution are examined.\nBased on the TREC platform, we show that appropriate use of domainspecific knowledge in a proposed conceptual retrieval model yields about 23% improvement over the best reported result in passage retrieval in the Genomics Track of TREC 2006.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - retrieval models, query formulation, information filtering.\nH.3.1 [Information Storage and Retrieval]: Content Analysis and Indexing - thesauruses.\nGeneral Terms Algorithms, Performance, Experimentation.\n1.\nINTRODUCTION Biologists search for literature on a daily basis.\nFor most biologists, PubMed, an online service of U.S. 
National Library of Medicine (NLM), is the most commonly used tool for searching the biomedical literature.\nPubMed allows for keyword search by using Boolean operators.\nFor example, if one desires documents on the use of the drug propanolol in the disease hypertension, a typical PubMed query might be propanolol AND hypertension, which will return all the documents having the two keywords.\nKeyword search in PubMed is effective if the query is well-crafted by the users using their expertise.\nHowever, information needs of biologists, in some cases, are expressed as complex questions [8][9], which PubMed is not designed to handle.\nWhile NLM does maintain an experimental tool for free-text queries [6], it is still based on PubMed keyword search.\nThe Genomics track of the 2006 Text REtrieval Conference (TREC) provides a common platform to assess the methods and techniques proposed by various groups for biomedical information retrieval.\nThe queries were collected from real biologists and they are expressed as complex questions, such as ``How do mutations in the Huntingtin gene affect Huntington's disease?''.\nThe document collection contains 162,259 Highwire full-text documents in HTML format.\nSystems from participating groups are expected to find relevant passages within the full-text documents.\nA passage is defined as any span of text that does not include the HTML paragraph tag (i.e., <P> or </P>).\nWe approached the problem by utilizing domain-specific knowledge in a conceptual retrieval model.\nDomain-specific knowledge, in this paper, refers to information about concepts and relationships between concepts in a certain domain.\nWe assume that appropriate use of domain-specific knowledge might improve the effectiveness of retrieval.\nFor example, given a query ``What is the role of gene PRNP in the Mad Cow Disease?'', by expanding the gene symbol PRNP with its synonyms Prp, PrPSc, and prion protein, more relevant documents might be retrieved.\nPubMed and many
other biomedical systems [8][9][10][13] also make use of domain-specific knowledge to improve retrieval effectiveness.\nIntuitively, retrieval on the level of concepts should outperform bag-of-words approaches, since the semantic relationships among words in a concept are utilized.\nIn some recent studies [13][15], positive results have been reported for this hypothesis.\nIn this paper, concepts are entry terms of the ontology Medical Subject Headings (MeSH), a controlled vocabulary maintained by NLM for indexing biomedical literature, or gene symbols in the Entrez gene database, also from NLM.\nA concept could be a word, such as the gene symbol PRNP, or a phrase, such as Mad cow disease.\nIn the conceptual retrieval model presented in this paper, the similarity between a query and a document is measured on both the concept and word levels.\nThis paper makes two contributions:\n1.\nWe propose a conceptual approach to utilizing domain-specific knowledge in an IR system to improve its effectiveness in retrieving biomedical literature.\nBased on this approach, our system achieved a significant improvement (23%) over the best reported result in passage retrieval in the Genomics track of TREC 2006.\n2.\nWe examine the effects of utilizing concepts and of different types of domain-specific knowledge in performance contribution.\nThis paper is organized as follows: the problem statement is given in the next section.\nThe techniques are introduced in section 3.\nIn section 4, we present the experimental results.\nRelated work is given in section 5 and, finally, we conclude the paper in section 6.\n2.\nPROBLEM STATEMENT\nWe describe the queries, document collection and the system output in this section.\nThe query set used in the Genomics track of TREC 2006 consists of 28 questions collected from real biologists.\nAs described in [8], these questions all have the following general format:\nBiological object (1..m) ←Relationship→ Biological process (1..n)  (1)\nwhere a biological object might be a gene, protein, or gene mutation and a biological process can be a physiological process or disease.\nA question might involve multiple biological objects (m) and multiple biological processes (n).\nThese questions were derived from four templates (Table 2).\nTable 2 Query templates and examples in the Genomics track of TREC 2006\nTemplate: What is the role of gene in disease? Example: What is the role of DRD4 in alcoholism?\nTemplate: What effect does gene have on biological process? Example: What effect does the insulin receptor gene have on tumorigenesis?\nTemplate: How do genes interact in organ function? Example: How do HMG and HMGB1 interact in hepatitis?\nTemplate: How does a mutation in gene influence biological process? Example: How does a mutation in Ret influence thyroid function?\nFeatures of the queries: 1) They are different from the typical Web queries and the PubMed queries, both of which usually consist of 1 to 3 keywords; 2) They are generated from structural templates which can be used by a system to identify the query components, the biological object or process.\nThe document collection contains 162,259 Highwire full-text documents in HTML format.\nThe output of the system is a list of passages ranked according to their similarities with the query.\nA passage is defined as any span of text that does not include the HTML paragraph tag (i.e., <P> or </P>).\nA passage could be a part of a sentence, a sentence, a set of consecutive sentences or a paragraph (i.e., the whole span of text that is inside of <P> and </P> HTML tags).\nThis is a passage-level information retrieval problem with the attempt to put biologists in contexts where relevant information is provided.\n3.\nTECHNIQUES AND METHODS\nWe approached the problem by first retrieving the top-k most relevant paragraphs, then extracting passages from these paragraphs, and finally ranking the passages.\nIn this process, we employed several techniques and methods, which will be introduced in this
section.\nFirst, we give two definitions:\nDefinition 3.1 A concept is 1) an entry term in the MeSH ontology, or 2) a gene symbol in the Entrez gene database.\nThis definition of concept can be generalized to include other biomedical dictionary terms.\nDefinition 3.2 A semantic type is a category defined in the Semantic Network of the Unified Medical Language System (UMLS) [14].\nThe current release of the UMLS Semantic Network contains 135 semantic types such as Disease or Syndrome.\nEach entry term in the MeSH ontology is assigned one or more semantic types.\nEach gene symbol in the Entrez gene database maps to the semantic type Gene or Genome.\nIn addition, these semantic types are linked by 54 relationships.\nFor example, Antibiotic prevents Disease or Syndrome.\nThese relationships among semantic types represent general biomedical knowledge.\nWe utilized these semantic types and their relationships to identify related concepts.\nThe rest of this section is organized as follows: in section 3.1, we explain how the concepts are identified within a query.\nIn section 3.2, we specify five different types of domain-specific knowledge and introduce how they are compiled.\nIn section 3.3, we present our conceptual IR model.\nFinally, our strategy for passage extraction is described in section 3.4.\n3.1 Identifying concepts within a query\nA concept, defined in Definition 3.1, is a gene symbol or a MeSH term.\nWe make use of the query templates to identify gene symbols.\nFor example, the query ``How do HMG and HMGB1 interact in hepatitis?'' is derived from the template ``How do genes interact in organ function?''.\nIn this case, HMG and HMGB1 will be identified as gene symbols.\nIn cases where the query templates are not provided, programs for recognition of gene symbols within texts are needed.\nWe use the query translation functionality of PubMed to extract MeSH terms in a query.\nThis is done by submitting the whole query to PubMed, which will then return a file in which
the MeSH terms in the query are labeled.\nIn Table 3.1, three MeSH terms within the query ``What is the role of gene PRNP in the Mad cow disease?'' are found in the PubMed translation: ``encephalopathy, bovine spongiform'' for Mad cow disease, genes for gene, and role for role.\nTable 3.1 The PubMed translation of the query ``What is the role of gene PRNP in the Mad cow disease?''.\nTerm: Mad cow disease; PubMed translation: ``bovine spongiform encephalopathy''[Text Word] OR ``encephalopathy, bovine spongiform''[MeSH Terms] OR Mad cow disease[Text Word]\nTerm: gene; PubMed translation: (``genes''[TIAB] NOT Medline[SB]) OR ``genes''[MeSH Terms] OR gene[Text Word]\nTerm: role; PubMed translation: ``role''[MeSH Terms] OR role[Text Word]\n3.2 Compiling domain-specific knowledge\nIn this paper, domain-specific knowledge refers to information about concepts and their relationships in a certain domain.\nWe used five types of domain-specific knowledge in the domain of genomics:\nType 1. Synonyms (terms listed in the thesauruses that refer to the same meaning)\nType 2. Hypernyms (more generic terms, one level only)\nType 3. Hyponyms (more specific terms, one level only)\nType 4. Lexical variants (different forms of the same concept, such as abbreviations.\nThey are commonly used in the literature, but might not be listed in the thesauruses)\nType 5. Implicitly related concepts (terms that are semantically related and also co-occur more frequently than being independent in the biomedical texts)\nKnowledge of types 1-3 is retrieved from the following two thesauruses: 1) MeSH, a controlled vocabulary maintained by NLM for indexing biomedical literature.\nThe 2007 version of MeSH contains information about 190,000 concepts.\nThese concepts are organized in a tree hierarchy; 2) Entrez Gene, one of the most widely used searchable databases of genes.\nThe current version of Entrez Gene contains information about 1.7 million genes.\nIt does not have a hierarchy.\nOnly synonyms are retrieved from Entrez Gene.\nThe compiling of type 4-5 knowledge
is introduced in section 3.2.1 and 3.2.2, respectively.\n3.2.1 Lexical variants Lexical variants of gene symbols New gene symbols and their lexical variants are regularly introduced into the biomedical literature [7].\nHowever, many reference databases, such as UMLS and Entrez Gene, may not be able to keep track of all this kind of variants.\nFor example, for the gene symbol ``NF-kappa B'', at least 5 different lexical variants can be found in the biomedical literature: ``NF-kappaB'', ``NFkappaB'', ``NFkappa B'', ``NF-kB'', and ``NFkB'', three of which are not in the current UMLS and two not in the Entrez Gene.\n[3][21] have shown that expanding gene symbols with their lexical variants improved the retrieval effectiveness of their biomedical IR systems.\nIn our system, we employed the following two strategies to retrieve lexical variants of gene symbols.\nStrategy I: This strategy is to automatically generate lexical variants according to a set of manually crafted heuristics [3][21].\nFor example, given a gene symbol PLA2, a variant PLAII is generated according to the heuristic that Roman numerals and Arabic numerals are convertible when naming gene symbols.\nAnother variant, PLA 2, is also generated since a hyphen or a space could be inserted at the transition between alphabetic and numerical characters in a gene symbol.\nStrategy II: This strategy is to retrieve lexical variants from an abbreviation database.\nADAM [22] is an abbreviation database which covers frequently used abbreviations and their definitions (or long-forms) within MEDLINE, the authoritative repository of citations from the biomedical literature maintained by the NLM.\nGiven a query How does nucleoside diphosphate kinase (NM23) contribute to tumor progression?\n, we first identify the abbreviation NM23 and its long-form nucleoside diphosphate kinase using the abbreviation identification program from [4].\nSearching the long-form nucleoside diphosphate kinase in ADAM, other abbreviations, such 
as NDPK or NDK, are retrieved.\nThese abbreviations are considered as the lexical variants of NM23.\nLexical variants of MeSH concepts ADAM is used to obtain the lexical variants of MeSH concepts as well.\nAll the abbreviations of a MeSH concept in ADAM are considered as lexical variants to each other.\nIn addition, those long-forms that share the same abbreviation with the MeSH concept and are different by an edit distance of 1 or 2 are also considered as its lexical variants.\nAs an example, ``human papilloma viruses'' and ``human papillomaviruses'' have the same abbreviation HPV in ADAM and their edit distance is 1.\nThus they are considered as lexical variants to each other.\nThe edit distance between two strings is measured by the minimum number of insertions, deletions, and substitutions of a single character required to transform one string into the other [12].\n3.2.2 Implicitly related concepts Motivation: In some cases, there are few documents in the literature that directly answer a given query.\nIn this situation, those documents that implicitly answer their questions or provide supporting information would be very helpful.\nFor example, there are few documents in PubMed that directly answer the query ``What is the role of the genes HNF4 and COUP-tf I in the suppression in the function of the liver?''\n.\nHowever, there exist some documents about the role of ``HNF4'' and ``COUP-tf I'' in regulating ``hepatitis B virus'' transcription.\nIt is very likely that the biologists would be interested in these documents because ``hepatitis B virus'' is known as a virus that could cause serious damage to the function of liver.\nIn the given example, ``hepatitis B virus'' is not a synonym, hypernym, hyponym, nor a lexical variant of any of the query concepts, but it is semantically related to the query concepts according to the UMLS Semantic Network.\nWe call this type of concepts implicitly related concepts of the query.\nThis notion is similar to the B-term used 
in [19] for relating two disjoint literatures for biomedical hypothesis generation.\nThe difference is that we utilize the semantic relationships among query concepts to exclusively focus on concepts of certain semantic types.\nA query q in format (1) of section 2 can be represented by q = (A, C), where A is the set of biological objects and C is the set of biological processes.\nThose concepts that are semantically related to both A and C according to the UMLS Semantic Network are considered as the implicitly related concepts of the query.\nIn the above example, A = {HNF4, COUP-tf I}, C = {function of liver}, and ``hepatitis B virus'' is one of the implicitly related concepts.\nWe make use of the MEDLINE database to extract the implicitly related concepts.\nThe 2006 version of the MEDLINE database contains citations (i.e., abstracts, titles, etc.) of over 15 million biomedical articles.\nEach document in MEDLINE is manually indexed by a list of MeSH terms to describe the topics covered by that document.\nImplicitly related concepts are extracted and ranked in the following steps:\nStep 1.\nLet list_A be the set of MeSH terms that are 1) used for indexing those MEDLINE citations having A, and 2) semantically related to A according to the UMLS Semantic Network.\nSimilarly, list_C is created for C. Concepts in B = list_A ∩ list_C are considered as implicitly related concepts of the query.\nStep 2.\nFor each concept b ∈ B, compute the association between b and A using the mutual information measure [5]:\nI(b, A) = log( P(b, A) / (P(b) P(A)) ),\nwhere P(x) = n/N, n is the number of MEDLINE citations having x and N is the size of MEDLINE.\nA large value for I(b, A) means that b and A co-occur much more often than being independent.\nI(b, C) is computed similarly.\nStep 3.\nLet r(b) = (I(b, A), I(b, C)), for b ∈ B.
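The Step 2 association measure can be sketched in a few lines of Python; the citation counts below are hypothetical, and the function name mutual_information is ours rather than the paper's:

```python
import math

def mutual_information(n_xy, n_x, n_y, N):
    """I(x, y) = log( P(x, y) / (P(x) * P(y)) ), with P(x) = n_x / N."""
    return math.log((n_xy / N) / ((n_x / N) * (n_y / N)))

# Hypothetical MEDLINE citation counts: b co-occurs with A in 40 citations
# and with C in 25; b appears in 200 citations, A in 1,000, C in 5,000.
# N approximates the size of MEDLINE (over 15 million citations).
N = 15_000_000
I_bA = mutual_information(n_xy=40, n_x=200, n_y=1_000, N=N)
I_bC = mutual_information(n_xy=25, n_x=200, n_y=5_000, N=N)
r_b = (I_bA, I_bC)  # the pair r(b) used in Step 3
```

A positive value indicates that the two terms co-occur more often than expected under independence; when the joint count exactly matches the product of the marginals, the measure is zero.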
Given b1, b2 ∈ B, we say r(b1) ≤ r(b2) if I(b1, A) ≤ I(b2, A) and I(b1, C) ≤ I(b2, C). The association between b and the query q is then measured by:

score(b, q) = |{x : x ∈ B and r(x) ≤ r(b)}| / |{x : x ∈ B and r(b) ≤ r(x)}|   (2)

The numerator in Formula 2 is the number of concepts in B that are associated with both A and C equally with or less than b. The denominator is the number of concepts in B that are associated with both A and C equally with or more than b. Figure 3.2.2 shows the top 4 implicitly related concepts for the sample query.

Figure 3.2.2 Top 4 implicitly related concepts for the query ``How do interactions between HNF4 and COUP-TF1 suppress liver function?''

In Figure 3.2.2, the top 4 implicitly related concepts are all highly associated with the liver: Hepatocytes are liver cells; Hepatoblastoma is a malignant liver neoplasm occurring in young children; the vast majority of Gluconeogenesis takes place in the liver; and Hepatitis B virus is a virus that can cause serious damage to the function of the liver. The top-k ranked concepts in B are used for query expansion: if I(b, A) ≥ I(b, C), then b is considered an implicitly related concept of A. A document having b but not A will receive a partial weight of A. The expansion is similar for C when I(b, A) < I(b, C).

3.3 Conceptual IR model
We now discuss our conceptual IR model. We first give the basic conceptual IR model in section 3.3.1. Then we explain how the domain-specific knowledge is incorporated into the model using query expansion in section 3.3.2. A pseudo-feedback strategy is introduced in section 3.3.3. In section 3.3.4, we give a strategy to improve the ranking by avoiding incorrect matches of abbreviations.

3.3.1 Basic model
Given a query q and a document d, our model measures two similarities, the concept similarity and the word similarity:

sim(q, d) = ( sim_concept(q, d), sim_word(q, d) )

Concept similarity
Two
vectors are derived from a query q:

q = (v1, v2)
v1 = (c11, c12, ..., c1m)
v2 = (c21, c22, ..., c2n)

where v1 is a vector of concepts describing the biological object(s) and v2 is a vector of concepts describing the biological process(es). Given a vector of concepts v, let s(v) be the set of concepts in v. The weight of vi is then measured by:

w(vi) = max{ log(N / n_v) : s(v) ⊆ s(vi) and n_v > 0 }

where v is a vector that contains a subset of the concepts in vi and n_v is the number of documents having all the concepts in v. The concept similarity between q and d is then computed by:

sim_concept(q, d) = Σ_{i=1..2} α_i × w(vi)

where α_i is a parameter indicating the completeness of vi that document d has covered. α_i is measured by:

α_i = ( Σ_{c ∈ vi and c ∈ d} idf_c ) / ( Σ_{c ∈ vi} idf_c )   (3)

where idf_c is the inverse document frequency of concept c.

An example: suppose we have the query ``How does Nurr-77 delete T cells before they migrate to the spleen or lymph nodes and how does this impact autoimmunity?''. After identifying the concepts in the query, we have:

v1 = (``Nurr-77'')
v2 = (``T cells'', ``spleen'', ``autoimmunity'', ``lymph nodes'')

Suppose that some document frequencies of different combinations of concepts are as follows:

df(``Nurr-77'') = 25
df(``T cells'', ``spleen'', ``autoimmunity'', ``lymph nodes'') = 0
df(``T cells'', ``spleen'', ``autoimmunity'') = 326
df(``spleen'', ``autoimmunity'', ``lymph nodes'') = 82
df(``T cells'', ``autoimmunity'', ``lymph nodes'') = 147
df(``T cells'', ``spleen'', ``lymph nodes'') = 2332

The weight of each vi is then computed by (note that there does not exist a document having all the concepts in v2):

w(v1) = log(N / 25)
w(v2) = log(N / 82)

Now suppose a document d contains the concepts ``Nurr-77'', ``T cells'', ``spleen'', and ``lymph nodes'', but not ``autoimmunity''. Then the values of the parameters α_i are computed as follows:

α_1 = 1
α_2 = ( idf(``T cells'') + idf(``spleen'') + idf(``lymph nodes'') ) / ( idf(``T cells'') + idf(``spleen'') + idf(``lymph nodes'') + idf(``autoimmunity'') )

Word similarity
The similarity between q and d on the word level is computed using Okapi [17]:

sim_word(q, d) = Σ_{w ∈ q} log( (N − n + 0.5) / (n + 0.5) ) × ( (k1 + 1) × tf ) / ( K + tf )   (4)

where N is the size of the document collection; n is the number of documents containing w; K = k1 × ((1 − b) + b × dl/avdl), with k1 = 1.2 and b = 0.75 constants; dl is the document length of d and avdl is the average document length; tf is the term frequency of w within d.

The model
Given two documents d1 and d2, we say sim(q, d1) > sim(q, d2), i.e., d1 will be ranked higher than d2 with respect to the same query q, if either

1) sim_concept(q, d1) > sim_concept(q, d2), or
2) sim_concept(q, d1) = sim_concept(q, d2) and sim_word(q, d1) > sim_word(q, d2).

This conceptual IR model emphasizes the similarity on the concept level. A similar model, but applied to a non-biomedical domain, has been given in [15].

3.3.2 Incorporating domain-specific knowledge
Given a concept c, a vector u is derived by incorporating its domain-specific knowledge:

u = (c, u1, u2, u3)

where u1 is a vector of its synonyms, hyponyms, and lexical variants; u2 is a vector of its hypernyms; and u3 is a vector of its implicitly related concepts. An occurrence of any term in u1 will be counted as an occurrence of c.
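The two-level ranking of the basic model (concept similarity first, word similarity only as a tie-breaker) can be sketched in Python. The toy collection, concept sets, and collection size below are hypothetical, and the word-level Okapi score is replaced by a constant stand-in, since computing it would require term statistics not modeled here:

```python
import math
from itertools import combinations

# Hypothetical toy collection: each document is the set of query concepts
# it contains (illustrative only, not the paper's Highwire collection).
DOCS = {
    "d1": {"Nurr-77", "T cells", "spleen", "lymph nodes"},
    "d2": {"T cells", "spleen", "autoimmunity", "lymph nodes"},
    "d3": {"Nurr-77"},
    "d4": {"T cells"},
}
N = len(DOCS)  # collection size

def n_v(concepts):
    """Number of documents containing all concepts in `concepts`."""
    return sum(1 for d in DOCS.values() if concepts <= d)

def weight(v):
    """w(v_i) = max{ log(N / n_v) : s(v) a subset of s(v_i), n_v > 0 }."""
    best = 0.0
    for k in range(1, len(v) + 1):
        for sub in combinations(v, k):
            n = n_v(set(sub))
            if n > 0:
                best = max(best, math.log(N / n))
    return best

def idf(c):
    return math.log(N / n_v({c}))

def alpha(v, doc):
    """Formula 3: idf-weighted fraction of v_i covered by the document."""
    denom = sum(idf(c) for c in v)
    num = sum(idf(c) for c in v if c in doc)
    return num / denom if denom else 0.0

def sim_concept(query_vectors, doc):
    return sum(alpha(v, doc) * weight(v) for v in query_vectors)

def sim_word(doc):
    return 0.0  # stand-in: Okapi needs term statistics not modeled here

# q = (v1, v2): the biological object(s) and the biological process(es)
q = [{"Nurr-77"}, {"T cells", "spleen", "autoimmunity", "lymph nodes"}]

# Lexicographic ranking: concept similarity first, word similarity breaks ties.
ranking = sorted(DOCS,
                 key=lambda d: (sim_concept(q, DOCS[d]), sim_word(DOCS[d])),
                 reverse=True)
```

With these toy statistics, d1 outranks d2 because it matches the rare object concept Nurr-77 in addition to most process concepts, even though d2 covers all four process concepts.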
idf_c in Formula 3 is updated as:

idf_c = log( N / |D_{c,u1}| )

where D_{c,u1} is the set of documents having c or any term in u1. The weight that a document d receives from u is given by:

max{ w_t : t ∈ u and t ∈ d }

where w_t = β × idf_c. The weighting factor β is an empirical tuning parameter determined as follows:
1. β = 1 if t is the original concept, its synonym, its hyponym, or its lexical variant;
2. β = 0.95 if t is a hypernym;
3. β = 0.90 × (k − i + 1)/k if t is an implicitly related concept, where k is the number of selected top-ranked implicitly related concepts (see section 3.2.2) and i is the position of t in the ranking of implicitly related concepts.

3.3.3 Pseudo-feedback
Pseudo-feedback is a technique commonly used to improve retrieval performance by adding new terms into the original query. We used a modified pseudo-feedback strategy described in [2].

Step 1. Let C be the set of concepts in the top 15 ranked documents. For each concept c in C, compute the similarity between c and the query q; the computation of sim(q, c) can be found in [2].

Step 2. The top-k concepts ranked by sim(q, c) are selected.

Step 3. Associate each selected concept c' with the concept c_q in q that 1) has the same semantic type as c', and 2) is most related to c' among all the concepts in q. The association between c' and c_q is computed by:

I(c', c_q) = log [ P(c', c_q) / ( P(c') P(c_q) ) ]

where P(x) = n/N, n is the number of documents having x, and N is the size of the document collection. A document having c' but not c_q receives a weight given by:

(0.5 × (k − i + 1)/k) × idf_{c_q}

where i is the position of c' in the ranking of step 2.

3.3.4 Avoid incorrect match of abbreviations
Some gene symbols are very short and thus ambiguous. For example, the gene symbol APC could be the abbreviation of many non-gene long-forms, such as air pollution control, aerobic plate count, or argon plasma coagulation. This step is to avoid incorrect matches of
abbreviations in the top ranked documents. Given an abbreviation X with long-form L in the query, we scan the top-k ranked (k = 1000) documents, and when a document is found with X, we compare L with all the long-forms of X in that document. If none of these long-forms is equal or close to L (i.e., within an edit distance of 1 or 2 from L), then the concept similarity of X is subtracted.

3.4 Passage extraction
The goal of passage extraction is to highlight the most relevant fragments of text in paragraphs. A passage is defined as any span of text that does not include the HTML paragraph tag (i.e., <P> or </P>). A passage could be a part of a sentence, a sentence, a set of consecutive sentences, or a paragraph (i.e., the whole span of text inside the <P> and </P> HTML tags). It is also possible to have more than one relevant passage in a single paragraph. Our strategy for passage extraction assumes that the optimal passage(s) in a paragraph should have all the query concepts that the whole paragraph has. They should also have a higher density of query concepts than other fragments of text in the paragraph. Suppose we have a query q and a paragraph p represented by a sequence of sentences p = s1 s2 ... sn. Let C be the set of concepts in q that occur in p, and let S = ∅.

Step 1. For each sequence of consecutive sentences s_i s_{i+1} ... s_j, 1 ≤ i ≤ j ≤ n, let S = S ∪ {s_i ... s_j} if s_i ... s_j satisfies: 1) every query concept in C occurs in s_i ... s_j, and 2) there does not exist k, with i < k < j, such that every query concept in C occurs in s_i ... s_k or in s_{k+1} s_{k+2} ... s_j. Condition 1 requires s_i ... s_j to have all the query concepts in p, and condition 2 requires s_i ... s_j to be minimal.

Step 2. Let L = min{ j − i + 1 : s_i ... s_j ∈ S }. For every s_i ... s_j in S, let S = S − {s_i ... s_j} if (j − i + 1) > L. This step removes those sequences of sentences in S that have a lower density of query concepts.

Step 3. For every two sequences of consecutive sentences s_{i1} ... s_{j1} ∈ S and s_{i2} ... s_{j2} ∈ S, if

i1 ≤ i2, j1 ≤ j2, and i2 ≤ j1 + 1   (5)

then do:

S = S ∪ {s_{i1} ... s_{j2}}
S = S − {s_{i1} ... s_{j1}}
S = S − {s_{i2} ... s_{j2}}

Repeat this step until condition (5) does not apply to any two sequences of consecutive sentences in S. This step merges those sequences of sentences in S that are adjacent or overlapping. Finally, the remaining sequences of sentences in S are returned as the optimal passages in the paragraph p with respect to the query.

4. EXPERIMENTAL RESULTS
The evaluation of our techniques and the experimental results are given in this section. We first describe the datasets and evaluation metrics used in our experiments and then present the results.

4.1 Data sets and evaluation metrics
Our experiments were performed on the platform of the Genomics track of TREC 2006. The document collection contains 162,259 full-text documents from 49 Highwire biomedical journals. The set of queries consists of 28 queries collected from real biologists. The performance is measured on three different levels (passage, aspect, and document) to provide better insight into how the question is answered from different perspectives.

Passage MAP: As described in [8], this is a character-based precision calculated as follows: at each relevant retrieved passage, precision is computed as the fraction of characters overlapping with the gold standard passages divided by the total number of characters included in all nominated passages from this system for the topic up until that point. Similar to regular MAP, relevant passages that were not retrieved are added into the calculation as well, with precision set to 0
for relevant passages not retrieved. Then the mean of these average precisions over all topics is calculated to compute the mean average passage precision.

Aspect MAP: A question could be addressed from different aspects. For example, the question ``What is the role of gene PRNP in the Mad cow disease?'' could be answered from aspects like Diagnosis, Neurologic manifestations, or Prions/Genetics. This measure indicates how comprehensively the question is answered.

Document MAP: This is the standard IR measure. The precision is measured at every point where a relevant document is obtained and then averaged over all relevant documents to obtain the average precision for a given query. For a set of queries, the mean of the average precisions for all queries is the MAP of that IR system.

The output of the system is a list of passages ranked according to their similarities with the query. The performances on the three levels are then calculated based on the ranking of the passages.

4.2 Results
The Wilcoxon signed-rank test was employed to determine the statistical significance of the results. In the tables of the following sections, statistically significant improvements (at the 5% level) are marked with an asterisk.

4.2.1 Conceptual IR model vs.
term-based model
The initial baseline was established using word similarity only, computed by Okapi (Formula 4). Another run, based on our basic conceptual IR model, was performed without using query expansion, pseudo-feedback, or abbreviation correction. The experimental result is shown in Table 4.2.1. Our basic conceptual IR model significantly outperforms Okapi on all three levels, which suggests that, although it requires additional effort to identify concepts, retrieval on the concept level can achieve substantial improvements over a purely term-based retrieval model.

4.2.2 Contribution of different types of knowledge
A series of experiments was performed to examine how each type of domain-specific knowledge contributes to the retrieval performance. A new baseline was established using the basic conceptual IR model without incorporating any type of domain-specific knowledge. Then five runs were conducted by adding each individual type of domain-specific knowledge. We also conducted a run adding all types of domain-specific knowledge. Results of these experiments are shown in Table 4.2.2. We found that every available type of domain-specific knowledge improved the performance in passage retrieval. The biggest improvement comes from the lexical variants, which is consistent with the result reported in [3]. This result also indicates that biologists are likely to use different variants of the same concept according to their own writing preferences, and these variants might not be collected in the existing biomedical thesauruses. It also suggests that biomedical IR systems can benefit from the domain-specific knowledge extracted from the literature by text mining systems. Synonyms provided the second biggest improvement. Hypernyms, hyponyms, and implicitly related concepts provided similar degrees of improvement. The overall performance is a cumulative result of adding the different types of domain-specific knowledge, and it is better than any
individual addition. It is clearly shown that the performance is significantly improved (107% on the passage level, 63.1% on the aspect level, and 49.6% on the document level) when the domain-specific knowledge is appropriately incorporated. Although it is not explicitly shown in Table 4.2.2, different types of domain-specific knowledge affect different subsets of queries. More specifically, each of these types (with the exception of the lexical variants, which affect a large number of queries) affects only a few queries. But for those affected queries, the improvement is significant. As a consequence, the cumulative improvement is very significant.

4.2.3 Pseudo-feedback and abbreviation correction
Using the Baseline+All in Table 4.2.2 as a new baseline, the contribution of abbreviation correction and pseudo-feedback is given in Table 4.2.3. There is little improvement from avoiding incorrect matches of abbreviations. The pseudo-feedback contributed about 4.6% improvement in passage retrieval.

4.2.4 Performance compared with best-reported results
We compared our result with the results reported in the Genomics track of TREC 2006 [8] under the conditions that 1) systems are automatic systems and 2) passages are extracted from paragraphs. The performance of our system relative to the best reported results is shown in Table 4.2.4. (In TREC 2006, some systems returned whole paragraphs as passages. As a consequence, excellent retrieval results were obtained on the document and aspect levels at the expense of performance on the passage level. We do not include the results of such systems here.)

Table 4.2.4 Performance compared with best-reported results.

Run                     Passage MAP   Aspect MAP   Document MAP
Best reported results   0.1486        0.3492       0.5320
Our results             0.1823        0.3811       0.5391
Improvement             22.68%        9.14%        1.33%

The best reported results in the first row of Table 4.2.4 on the three levels (passage, aspect, and document) are from different systems. Our result is from a single run that is better than the best reported result by 22.68% in passage retrieval and, at the same time, 9.14% better in aspect retrieval and 1.33% better in document retrieval. (Since the average precision of each individual query was not reported, we cannot apply the Wilcoxon signed-rank test to calculate the significance of the difference between our performance and the best reported result.)

Table 4.2.1 Basic conceptual IR model vs. term-based model

Run                        Passage MAP       Imprvd qs # (%)   Aspect MAP        Imprvd qs # (%)   Document MAP      Imprvd qs # (%)
Okapi                      0.064             N/A               0.175             N/A               0.285             N/A
Basic conceptual IR model  0.084* (+31.3%)   17 (65.4%)        0.233* (+33.1%)   12 (46.2%)        0.359* (+26.0%)   15 (57.7%)

Table 4.2.2 Contribution of different types of domain-specific knowledge

Run                  Passage MAP       Imprvd qs # (%)   Aspect MAP        Imprvd qs # (%)   Document MAP      Imprvd qs # (%)
Baseline (= Basic
conceptual IR model) 0.084             N/A               0.233             N/A               0.359             N/A
Baseline+Synonyms    0.105 (+25%)      11 (42.3%)        0.246 (+5.6%)     9 (34.6%)         0.420 (+17%)      13 (50%)
Baseline+Hypernyms   0.088 (+4.8%)     11 (42.3%)        0.225 (-3.4%)     9 (34.6%)         0.390 (+8.6%)     16 (61.5%)
Baseline+Hyponyms    0.087 (+3.6%)     10 (38.5%)        0.217 (-6.9%)     7 (26.9%)         0.389 (+8.4%)     10 (38.5%)
Baseline+Variants    0.150* (+78.6%)   16 (61.5%)        0.348* (+49.4%)   13 (50%)          0.495* (+37.9%)   10 (38.5%)
Baseline+Related     0.086 (+2.4%)     9 (34.6%)         0.220 (-5.6%)     9 (34.6%)         0.387 (+7.8%)     13 (50%)
Baseline+All         0.174* (+107%)    25 (96.2%)        0.380* (+63.1%)   19 (73.1%)        0.537* (+49.6%)   14 (53.8%)

Table 4.2.3 Contribution of abbreviation correction and pseudo-feedback

Run                   Passage MAP     Imprvd qs # (%)   Aspect MAP      Imprvd qs # (%)   Document MAP    Imprvd qs # (%)
Baseline+All          0.174           N/A               0.380           N/A               0.537           N/A
Baseline+All+Abbr     0.175 (+0.6%)   5 (19.2%)         0.375 (-1.3%)   4 (15.4%)         0.535 (-0.4%)   4 (15.4%)
Baseline+All+Abbr+PF  0.182 (+4.6%)   10 (38.5%)        0.381 (+0.3%)   6 (23.1%)         0.539 (+0.4%)   9 (34.6%)

A separate experiment has been done using a second testbed, the ad-hoc task of TREC Genomics 2005, to evaluate our
knowledge-intensive conceptual IR model for document retrieval of biomedical literature. The overall performance in terms of MAP is 35.50%, which is about 22.92% above the best reported result [9]. Notice that performance was only measured on the document level for the ad-hoc task of TREC Genomics 2005.

5. RELATED WORKS
Many studies have used manually-crafted thesauruses or knowledge databases created by text mining systems to improve retrieval effectiveness, based on either word-statistical retrieval systems or conceptual retrieval systems. [11][1] assessed query expansion using the UMLS Metathesaurus. Based on a word-statistical retrieval system, [11] used definitions and different types of thesaurus relationships for query expansion, and a deteriorated performance was reported. [1] expanded queries with phrases and UMLS concepts determined by MetaMap, a program which maps biomedical text to UMLS concepts, and no significant improvement was shown. We used MeSH, Entrez gene, and other non-thesaurus knowledge resources, such as an abbreviation database, for query expansion. A critical difference between our work and those in [11][1] is that our retrieval model is based on concepts, not on individual words. The Genomics track in TREC provides a common platform to evaluate methods and techniques proposed by various groups for biomedical information retrieval. As summarized in [8][9][10], many groups utilized domain-specific knowledge to improve retrieval effectiveness. Among these groups, [3] assessed both thesaurus-based knowledge, such as gene information, and non-thesaurus-based knowledge, such as lexical variants of gene symbols, for query expansion. They have shown that query expansion with acronyms and lexical variants of gene symbols produced the biggest improvement, whereas query expansion with gene information from gene databases deteriorated the performance. [21] used a similar approach for generating lexical variants of gene symbols and
reported significant improvements. Our system utilized more types of domain-specific knowledge, including hyponyms, hypernyms, and implicitly related concepts. In addition, under the conceptual retrieval framework, we examined more comprehensively the effects of different types of domain-specific knowledge in performance contribution. [20][15] utilized WordNet, a database of English words and their lexical relationships developed by Princeton University, for query expansion in the non-biomedical domain. In their studies, queries were expanded using lexical semantic relations such as synonyms, hypernyms, or hyponyms. Little benefit was shown in [20]. This was due to the ambiguity of the query terms, which have different meanings in different contexts. When synonyms having multiple meanings are added to the query, many irrelevant documents are retrieved. In the biomedical domain, this kind of query-term ambiguity is relatively less frequent because, although abbreviations are highly ambiguous, general biomedical concepts usually have only one meaning in a thesaurus such as UMLS, whereas a term in WordNet usually has multiple meanings (represented as synsets in WordNet). Besides, we have implemented a post-ranking step to reduce the number of incorrect matches of abbreviations, which will hopefully decrease the negative impact caused by the abbreviation ambiguity. The retrieval model in [15] emphasized the similarity between a query and a document on the phrase level, assuming that phrases are more important than individual words when retrieving documents. Although the assumption is similar, our conceptual model is based on biomedical concepts, not phrases. [13] presented a good study of the role of knowledge in the document
retrieval of clinical medicine. They have shown that appropriate use of semantic knowledge in a conceptual retrieval framework can yield substantial improvements. Although the retrieval model is similar, we made a study in the domain of genomics, in which the problem structure and task knowledge are not as well-defined as in the domain of clinical medicine [18]. Also, our similarity function is very different from that in [13]. In summary, our approach differs from previous works in four important ways: First, we present a case study of conceptual retrieval in the domain of genomics, where many knowledge resources can be used to improve the performance of biomedical IR systems. Second, we have studied more types of domain-specific knowledge than previous researchers and carried out more comprehensive experiments to look into the effects of different types of domain-specific knowledge in performance contribution. Third, although some of the techniques seem similar to previously published ones, they are actually quite different in their details. For example, in our pseudo-feedback process, we require that the unit of feedback is a concept and that the concept has the same semantic type as a query concept. This is to ensure that our conceptual model of retrieval can be applied. As another example, the way in which implicitly related concepts are extracted in this paper is significantly different from that given in [19]. Finally, our conceptual IR model is actually based on complex concepts, because some biomedical meanings, such as biological processes, are represented by multiple simple concepts.

6. CONCLUSION
This paper proposed a conceptual approach to utilizing domain-specific knowledge in an IR system to improve its effectiveness in retrieving biomedical literature. We specified five different types of domain-specific knowledge (i.e., synonyms, hyponyms, hypernyms, lexical variants, and implicitly related concepts) and examined their effects in performance
contribution. We also evaluated two other techniques, pseudo-feedback and abbreviation correction. Experimental results have shown that appropriate use of domain-specific knowledge in a conceptual IR model yields significant improvements (23%) in passage retrieval over the best known results. In our future work, we will explore the use of other existing knowledge resources, such as UMLS and Wikipedia, and evaluate techniques such as disambiguation of gene symbols for improving retrieval effectiveness. The application of our conceptual IR model in other domains, such as clinical medicine, will be investigated.

7. ACKNOWLEDGMENTS
insightful discussion.

8. REFERENCES
[1] Aronson A.R., Rindflesch T.C. Query expansion using the UMLS Metathesaurus. Proc AMIA Annu Fall Symp, 1997, 485-489.
[2] Baeza-Yates R., Ribeiro-Neto B. Modern Information Retrieval. Addison-Wesley, 1999, 129-131.
[3] Buttcher S., Clarke C.L.A., Cormack G.V. Domain-specific synonym expansion and validation for biomedical information retrieval (MultiText experiments for TREC 2004). TREC'04.
[4] Chang J.T., Schutze H., Altman R.B. Creating an online dictionary of abbreviations from MEDLINE. Journal of the American Medical Informatics Association, 2002, 9(6).
[5] Church K.W., Hanks P. Word association norms, mutual information and lexicography. Computational Linguistics, 1990, 16(1): 22-29.
[6] Fontelo P., Liu F., Ackerman M. askMEDLINE: a free-text, natural language query tool for MEDLINE/PubMed. BMC Med Inform Decis Mak, 2005, 5(1):5.
[7] Fukuda K., Tamura A., Tsunoda T., Takagi T. Toward information extraction: identifying protein names from biological papers. Pac Symp Biocomput, 1998, 707-718.
[8] Hersh W.R., et al. TREC 2006 Genomics Track Overview. TREC'06.
[9] Hersh W.R., et al. TREC 2005 Genomics Track Overview. TREC'05.
[10] Hersh W.R., et al. TREC 2004 Genomics Track Overview. TREC'04.
[11] Hersh W.R., Price S., Donohoe L.
Assessing thesaurus-based query expansion using the UMLS Metathesaurus. Proc AMIA Symp, 2000, 344-348.
[12] Levenshtein V. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics - Doklady, 1966, 10: 707-710.
[13] Lin J., Demner-Fushman D. The Role of Knowledge in Conceptual Retrieval: A Study in the Domain of Clinical Medicine. SIGIR'06, 99-106.
[14] Lindberg D., Humphreys B., McCray A. The Unified Medical Language System. Methods of Information in Medicine, 1993, 32(4): 281-291.
[15] Liu S., Liu F., Yu C., Meng W.Y. An Effective Approach to Document Retrieval via Utilizing WordNet and Recognizing Phrases. SIGIR'04, 266-272.
[16] Proux D., Rechenmann F., Julliard L., Pillet V.V., Jacq B. Detecting Gene Symbols and Names in Biological Texts: A First Step toward Pertinent Information Extraction. Genome Inform Ser Workshop Genome Inform, 1998, 9: 72-80.
[17] Robertson S.E., Walker S. Okapi/Keenbow at TREC-8. NIST Special Publication 500-246: TREC 8.
[18] Sackett D.L., et al. Evidence-Based Medicine: How to Practice and Teach EBM. Churchill Livingstone, second edition, 2000.
[19] Swanson D.R., Smalheiser N.R. An interactive system for finding complementary literatures: a stimulus to scientific discovery. Artificial Intelligence, 1997, 91: 183-203.
[20] Voorhees E. Query expansion using lexical-semantic relations. SIGIR 1994, 61-69.
[21] Zhong M., Huang X.J. Concept-based biomedical text retrieval. SIGIR'06, 723-724.
[22] Zhou W., Torvik V.I., Smalheiser N.R.
ADAM: Another Database of Abbreviations in MEDLINE. Bioinformatics, 2006, 22(22): 2813-2818.

Knowledge-intensive Conceptual Retrieval and Passage Extraction of Biomedical Literature

ABSTRACT
This paper presents a study of incorporating domain-specific knowledge (i.e., information about concepts and relationships between concepts in a certain domain) in an information retrieval (IR) system to improve its effectiveness in retrieving biomedical literature. The effects of different types of domain-specific knowledge in performance contribution are examined. Based on the TREC platform, we show that appropriate use of domain-specific knowledge in a proposed conceptual retrieval model yields about 23% improvement over the best reported result in passage retrieval in the Genomics Track of TREC 2006.

1. INTRODUCTION
Biologists search for literature on a daily basis. For most biologists, PubMed, an online service of the U.S. National Library of Medicine (NLM), is the most commonly used tool for searching the biomedical literature. PubMed allows for keyword search using Boolean operators. For example, if one desires documents on the use of the drug propanolol in the disease hypertension, a typical PubMed query might be "propanolol AND hypertension", which will return all the documents having the two keywords. Keyword search in PubMed is effective if the query is well-crafted by the users using their expertise. However, information needs of biologists, in some cases, are expressed as complex questions [8][9], which PubMed is not designed to handle. While NLM does maintain an experimental tool for free-text queries [6], it is still based on PubMed keyword search. The Genomics track of the 2006 Text REtrieval Conference (TREC) provides a common platform to assess the methods and techniques proposed by various groups for biomedical information retrieval. The queries were collected from real biologists and they are expressed as complex questions, such
as "How do mutations in the Huntingtin gene affect Huntington's disease?". The document collection contains 162,259 Highwire full-text documents in HTML format. Systems from participating groups are expected to find relevant passages within the full-text documents. A passage is defined as any span of text that does not include the HTML paragraph tag (i.e., <P> or </P>). We approached the problem by utilizing domain-specific knowledge in a conceptual retrieval model. Domain-specific knowledge, in this paper, refers to information about concepts and relationships between concepts in a certain domain. We assume that appropriate use of domain-specific knowledge might improve the effectiveness of retrieval. For example, given the query "What is the role of gene PRNP in the Mad Cow Disease?", by expanding the gene symbol "PRNP" with its synonyms "Prp", "PrPSc", and "prion protein", more relevant documents might be retrieved. PubMed and many other biomedical systems [8][9][10][13] also make use of domain-specific knowledge to improve retrieval effectiveness. Intuitively, retrieval on the level of concepts should outperform "bag-of-words" approaches, since the semantic relationships among words in a concept are utilized. In some recent studies [13][15], positive results have been reported for this hypothesis. In this paper, concepts are entry terms of the ontology Medical Subject Headings (MeSH), a controlled vocabulary maintained by NLM for indexing biomedical literature, or gene symbols in the Entrez gene database, also from NLM. A concept could be a word, such as the gene symbol "PRNP", or a phrase, such as "Mad cow disease". In the conceptual retrieval model presented in this paper, the similarity between a query and a document is measured on both the concept and word levels. This paper makes two contributions:

1. We propose a conceptual approach to utilize domain-specific knowledge in an IR system to improve its effectiveness in retrieving biomedical literature. Based on this approach, our system achieved significant improvement (23%) over the best
reported result in passage retrieval in the Genomics track of TREC 2006.\n2.\nWe examine the effects of utilizing concepts and of different types of domain-specific knowledge in performance contribution.\nThis paper is organized as follows: problem statement is given in the next section.\nThe techniques are introduced in section 3.\nIn section 4, we present the experimental results.\nRelated works are given in section 5 and finally, we conclude the paper in section 6.\n2.\nPROBLEM STATEMENT\n3.\nTECHNIQUES AND METHODS\n3.1 Identifying concepts within a query\n3.2 Compiling domain-specific knowledge\n3.2.1 Lexical variants\nLexical variants of gene symbols\nLexical variants of MeSH concepts\n3.2.2 Implicitly related concepts\n3.3 Conceptual IR model\n3.3.1 Basic model\nConcept similarity\nWord similarity\nThe model\n3.3.2 Incorporating domain-specific knowledge\n3.3.3 Pseudo-feedback\n3.3.4 Avoid incorrect match of abbreviations\n3.4 Passage extraction\n4.\nEXPERIMENTAL RESULTS\n4.1 Data sets and evaluation metrics\n4.2 Results\n4.2.1 Conceptual IR model vs. 
term-based model\n4.2.2 Contribution of different types of knowledge\n4.2.3 Pseudo-feedback and abbreviation correction\n4.2.4 Performance compared with best-reported results\n5.\nRELATED WORKS\nMany studies used manually-crafted thesauruses or knowledge databases created by text mining systems to improve retrieval effectiveness based on either word-statistical retrieval systems or conceptual retrieval systems.\n[11] [1] assessed query expansion using the UMLS Metathesaurus.\nBased on a word-statistical retrieval system, [11] used definitions and different types of thesaurus relationships for query expansion and a deteriorated performance was reported.\n[1] expanded queries with phrases and UMLS concepts determined by the MetaMap, a program which maps biomedical text to UMLS concepts, and no significant improvement was shown.\nWe used MeSH, Entrez gene, and other non-thesaurus knowledge resources such as an abbreviation database for query expansion.\nA critical difference between our work and those in [11] [1] is that our retrieval model is based on concepts, not on individual words.\nThe Genomics track in TREC provides a common platform to evaluate methods and techniques proposed by various groups for biomedical information retrieval.\nAs summarized in [8] [9] [10], many groups utilized domain-specific knowledge to improve retrieval effectiveness.\nAmong these groups, [3] assessed both thesaurus-based knowledge, such as gene information, and non thesaurus-based knowledge, such as lexical variants of gene symbols, for query expansion.\nThey have shown that query expansion with acronyms and lexical variants of gene symbols produced the biggest improvement, whereas, the query expansion with gene information from gene databases deteriorated the performance.\n[21] used a similar approach for generating lexical variants of gene symbols and reported significant improvements.\nOur system utilized more types of domain-specific knowledge, including hyponyms, hypernyms and 
implicitly related concepts.\nIn addition, under the conceptual retrieval framework, we examined more comprehensively the effects of different types of domain-specific knowledge in performance contribution.\n[20] [15] utilized WordNet, a database of English words and their lexical relationships developed by Princeton University, for query expansion in the non-biomedical domain.\nIn their studies, queries were expanded using lexical semantic relations such as synonyms, hypernyms, or hyponyms.\nLittle benefit was shown in [20], due to the ambiguity of query terms that have different meanings in different contexts.\nWhen synonyms having multiple meanings are added to the query, many irrelevant documents are retrieved.\nIn the biomedical domain, this kind of query-term ambiguity is relatively less frequent because, although abbreviations are highly ambiguous, general biomedical concepts usually have only one meaning in a thesaurus such as UMLS, whereas a term in WordNet usually has multiple meanings (represented as synsets in WordNet).\nBesides, we have implemented a post-ranking step to reduce the number of incorrect matches of abbreviations, which will hopefully decrease the negative impact caused by the abbreviation ambiguity.\nThe retrieval model in [15] emphasized the similarity between a query and a document on the phrase level, assuming that phrases are more important than individual words when retrieving documents.\nAlthough the assumption is similar, our conceptual model is based on biomedical concepts, not phrases.\n[13] presented a good study of the role of knowledge in document retrieval for clinical medicine.\nThey have shown that appropriate use of semantic knowledge in a conceptual retrieval framework can 
yield substantial improvements.\nAlthough the retrieval model is similar, we made a study in the domain of genomics, in which the problem structure and task knowledge is not as well-defined as in the domain of clinical medicine [18].\nAlso, our similarity function is very different from that in [13].\nIn summary, our approach differs from previous works in four important ways: First, we present a case study of conceptual retrieval in the domain of genomics, where many knowledge resources can be used to improve the performance of biomedical IR systems.\nSecond, we have studied more types of domainspecific knowledge than previous researchers and carried out more comprehensive experiments to look into the effects of different types of domain-specific knowledge in performance contribution.\nThird, although some of the techniques seem similar to previously published ones, they are actually quite different in details.\nFor example, in our pseudo-feedback process, we require that the unit of feedback is a concept and the concept has to be of the same semantic type as a query concept.\nThis is to ensure that our conceptual model of retrieval can be applied.\nAs another example, the way in which implicitly related concepts are extracted in this paper is significantly different from that given in [19].\nFinally, our conceptual IR model is actually based on complex concepts because some biomedical meanings, such as biological processes, are represented by multiple simple concepts.\n6.\nCONCLUSION\nThis paper proposed a conceptual approach to utilize domainspecific knowledge in an IR system to improve its effectiveness in retrieving biomedical literature.\nWe specified five different types of domain-specific knowledge (i.e., synonyms, hyponyms, hypernyms, lexical variants, and implicitly related concepts) and examined their effects in performance contribution.\nWe also evaluated other two techniques, pseudo-feedback and abbreviation correction.\nExperimental results have shown 
that appropriate use of domain-specific knowledge in a conceptual IR model yields significant improvements (23%) in passage retrieval over the best known results.\nIn our future work, we will explore the use of other existing knowledge resources, such as UMLS and the Wikipedia, and evaluate techniques such as disambiguation of gene symbols for improving retrieval effectiveness.\nThe application of our conceptual IR model in other domains such as clinical medicine will be investigated.","lvl-4":"Knowledge-intensive Conceptual Retrieval and Passage Extraction of Biomedical Literature\nABSTRACT\nThis paper presents a study of incorporating domain-specific knowledge (i.e., information about concepts and relationships between concepts in a certain domain) in an information retrieval (IR) system to improve its effectiveness in retrieving biomedical literature.\nThe effects of different types of domain-specific knowledge in performance contribution are examined.\nBased on the TREC platform, we show that appropriate use of domainspecific knowledge in a proposed conceptual retrieval model yields about 23% improvement over the best reported result in passage retrieval in the Genomics Track of TREC 2006.\n1.\nINTRODUCTION\nBiologists search for literature on a daily basis.\nFor most biologists, PubMed, an online service of U.S. 
National Library of Medicine (NLM), is the most commonly used tool for searching the biomedical literature.\nPubMed allows for keyword search by using Boolean operators.\nKeyword search in PubMed is effective if the query is well-crafted\nby the users using their expertise.\nWhile NLM does maintain an experimental tool for free-text queries [6], it is still based on PubMed keyword search.\nThe Genomics track of the 2006 Text REtrieval Conference (TREC) provides a common platform to assess the methods and techniques proposed by various groups for biomedical information retrieval.\nThe queries were collected from real biologists and they are expressed as complex questions, such as \"How do mutations in the Huntingtin gene affect Huntington's disease?\"\n.\nThe document collection contains 162,259 Highwire full-text documents in HTML format.\nSystems from participating groups are expected to find relevant passages within the full-text documents.\nWe approached the problem by utilizing domain-specific knowledge in a conceptual retrieval model.\nDomain-specific knowledge, in this paper, refers to information about concepts and relationships between concepts in a certain domain.\nWe assume that appropriate use of domain-specific knowledge might improve the effectiveness of retrieval.\nFor example, given a query \"What is the role of gene PRNP in the Mad Cow Disease?\"\nPubMed and many other biomedical systems [8] [9] [10] [13] also make use of domain-specific knowledge to improve retrieval effectiveness.\nIntuitively, retrieval on the level of concepts should outperform \"bag-of-words\" approaches, since the semantic relationships among words in a concept are utilized.\nA concept could be a word, such as the gene symbol \"PRNP\", or a phrase, such as \"Mad cow diseases\".\nIn the conceptual retrieval model presented in this paper, the similarity between a query and a document is measured on both concept and word levels.\nThis paper makes two contributions:\nretrieving 
biomedical literature.\nBased on this approach, our system achieved significant improvement (23%) over the best reported result in passage retrieval in the Genomics track of TREC 2006.\n2.\nWe examine the effects of utilizing concepts and of different types of domain-specific knowledge in performance contribution.\nThis paper is organized as follows: problem statement is given in the next section.\nThe techniques are introduced in section 3.\nIn section 4, we present the experimental results.\nRelated works are given in section 5 and finally, we conclude the paper in section 6.\n5.\nRELATED WORKS\nMany studies used manually-crafted thesauruses or knowledge databases created by text mining systems to improve retrieval effectiveness based on either word-statistical retrieval systems or conceptual retrieval systems.\n[11] [1] assessed query expansion using the UMLS Metathesaurus.\nBased on a word-statistical retrieval system, [11] used definitions and different types of thesaurus relationships for query expansion and a deteriorated performance was reported.\n[1] expanded queries with phrases and UMLS concepts determined by the MetaMap, a program which maps biomedical text to UMLS concepts, and no significant improvement was shown.\nWe used MeSH, Entrez gene, and other non-thesaurus knowledge resources such as an abbreviation database for query expansion.\nA critical difference between our work and those in [11] [1] is that our retrieval model is based on concepts, not on individual words.\nThe Genomics track in TREC provides a common platform to evaluate methods and techniques proposed by various groups for biomedical information retrieval.\nAs summarized in [8] [9] [10], many groups utilized domain-specific knowledge to improve retrieval effectiveness.\nAmong these groups, [3] assessed both thesaurus-based knowledge, such as gene information, and non thesaurus-based knowledge, such as lexical variants of gene symbols, for query expansion.\nThey have shown that query 
expansion with acronyms and lexical variants of gene symbols produced the biggest improvement, whereas, the query expansion with gene information from gene databases deteriorated the performance.\n[21] used a similar approach for generating lexical variants of gene symbols and reported significant improvements.\nOur system utilized more types of domain-specific knowledge, including hyponyms, hypernyms and implicitly related concepts.\nIn addition, under the conceptual retrieval framework, we examined more comprehensively the effects of different types of domain-specific knowledge in performance contribution.\n[20] [15] utilized WordNet, a database of English words and their lexical relationships developed by Princeton University, for query expansion in the non-biomedical domain.\nIn their studies, queries were expanded using the lexical semantic relations such as synonyms, hypernyms, or hyponyms.\nThis has been due to ambiguity of the query terms which have different meanings in different contexts.\nWhen these synonyms having multiple meanings are added to the query, substantial irrelevant documents are retrieved.\nThe retrieval model in [15] emphasized the similarity between a query and a document on the phrase level assuming that phrases are more important than individual words when retrieving documents.\nAlthough the assumption is similar, our conceptual model is based on the biomedical concepts, not phrases.\n[13] presented a good study of the role of knowledge in the document retrieval of clinical medicine.\nThey have shown that appropriate use of semantic knowledge in a conceptual retrieval framework can yield substantial improvements.\nAlthough the retrieval model is similar, we made a study in the domain of genomics, in which the problem structure and task knowledge is not as well-defined as in the domain of clinical medicine [18].\nAlso, our similarity function is very different from that in [13].\nIn summary, our approach differs from previous works in 
four important ways: First, we present a case study of conceptual retrieval in the domain of genomics, where many knowledge resources can be used to improve the performance of biomedical IR systems.\nSecond, we have studied more types of domainspecific knowledge than previous researchers and carried out more comprehensive experiments to look into the effects of different types of domain-specific knowledge in performance contribution.\nFor example, in our pseudo-feedback process, we require that the unit of feedback is a concept and the concept has to be of the same semantic type as a query concept.\nThis is to ensure that our conceptual model of retrieval can be applied.\nAs another example, the way in which implicitly related concepts are extracted in this paper is significantly different from that given in [19].\nFinally, our conceptual IR model is actually based on complex concepts because some biomedical meanings, such as biological processes, are represented by multiple simple concepts.\n6.\nCONCLUSION\nThis paper proposed a conceptual approach to utilize domainspecific knowledge in an IR system to improve its effectiveness in retrieving biomedical literature.\nWe specified five different types of domain-specific knowledge (i.e., synonyms, hyponyms, hypernyms, lexical variants, and implicitly related concepts) and examined their effects in performance contribution.\nWe also evaluated other two techniques, pseudo-feedback and abbreviation correction.\nExperimental results have shown that appropriate use of domain-specific knowledge in a conceptual IR model yields significant improvements (23%) in passage retrieval over the best known results.\nIn our future work, we will explore the use of other existing knowledge resources, such as UMLS and the Wikipedia, and evaluate techniques such as disambiguation of gene symbols for improving retrieval effectiveness.\nThe application of our conceptual IR model in other domains such as clinical medicine will be 
investigated.","lvl-2":"Knowledge-intensive Conceptual Retrieval and Passage Extraction of Biomedical Literature\nABSTRACT\nThis paper presents a study of incorporating domain-specific knowledge (i.e., information about concepts and relationships between concepts in a certain domain) in an information retrieval (IR) system to improve its effectiveness in retrieving biomedical literature.\nThe effects of different types of domain-specific knowledge in performance contribution are examined.\nBased on the TREC platform, we show that appropriate use of domainspecific knowledge in a proposed conceptual retrieval model yields about 23% improvement over the best reported result in passage retrieval in the Genomics Track of TREC 2006.\n1.\nINTRODUCTION\nBiologists search for literature on a daily basis.\nFor most biologists, PubMed, an online service of U.S. National Library of Medicine (NLM), is the most commonly used tool for searching the biomedical literature.\nPubMed allows for keyword search by using Boolean operators.\nFor example, if one desires documents on the use of the drug propanolol in the disease hypertension, a typical PubMed query might be \"propanolol AND hypertension\", which will return all the documents having the two keywords.\nKeyword search in PubMed is effective if the query is well-crafted\nby the users using their expertise.\nHowever, information needs of biologists, in some cases, are expressed as complex questions [8] [9], which PubMed is not designed to handle.\nWhile NLM does maintain an experimental tool for free-text queries [6], it is still based on PubMed keyword search.\nThe Genomics track of the 2006 Text REtrieval Conference (TREC) provides a common platform to assess the methods and techniques proposed by various groups for biomedical information retrieval.\nThe queries were collected from real biologists and they are expressed as complex questions, such as \"How do mutations in the Huntingtin gene affect Huntington's 
disease?\"\n.\nThe document collection contains 162,259 Highwire full-text documents in HTML format.\nSystems from participating groups are expected to find relevant passages within the full-text documents.\nA passage is defined as any span of text that does not include the HTML paragraph tag (i.e., <P> or <\/P>).\nWe approached the problem by utilizing domain-specific knowledge in a conceptual retrieval model.\nDomain-specific knowledge, in this paper, refers to information about concepts and relationships between concepts in a certain domain.\nWe assume that appropriate use of domain-specific knowledge might improve the effectiveness of retrieval.\nFor example, given a query \"What is the role of gene PRNP in the Mad Cow Disease?\"\n, expanding the gene symbol \"PRNP\" with its synonyms \"Prp\", \"PrPSc\", and \"prion protein\", more relevant documents might be retrieved.\nPubMed and many other biomedical systems [8] [9] [10] [13] also make use of domain-specific knowledge to improve retrieval effectiveness.\nIntuitively, retrieval on the level of concepts should outperform \"bag-of-words\" approaches, since the semantic relationships among words in a concept are utilized.\nIn some recent studies [13] [15], positive results have been reported for this hypothesis.\nIn this paper, concepts are entry terms of the ontology Medical Subject Headings (MeSH), a controlled vocabulary maintained by NLM for indexing biomedical literature, or gene symbols in the Entrez gene database also from NLM.\nA concept could be a word, such as the gene symbol \"PRNP\", or a phrase, such as \"Mad cow diseases\".\nIn the conceptual retrieval model presented in this paper, the similarity between a query and a document is measured on both concept and word levels.\nThis paper makes two contributions:\nretrieving biomedical literature.\nBased on this approach, our system achieved significant improvement (23%) over the best reported result in passage retrieval in the Genomics track of TREC 
2006.\n2.\nWe examine the effects of utilizing concepts and of different types of domain-specific knowledge in performance contribution.\nThis paper is organized as follows: the problem statement is given in the next section.\nThe techniques are introduced in section 3.\nIn section 4, we present the experimental results.\nRelated works are given in section 5 and, finally, we conclude the paper in section 6.\n2.\nPROBLEM STATEMENT\nWe describe the queries, the document collection, and the system output in this section.\nThe query set used in the Genomics track of TREC 2006 consists of 28 questions collected from real biologists.\nAs described in [8], these questions all have the following general format:\nwhere a biological object might be a gene, protein, or gene mutation and a biological process can be a physiological process or disease.\nA question might involve multiple biological objects (m) and multiple biological processes (n).\nThese questions were derived from four templates (Table 2).\nTable 2 Query templates and examples in the Genomics track of TREC 2006\nTemplate: What is the role of gene in disease?\nExample: What is the role of DRD4 in alcoholism?\nTemplate: What effect does gene have on biological process?\nExample: What effect does the insulin receptor gene have on tumorigenesis?\nTemplate: How do genes interact in organ function?\nExample: How do HMG and HMGB1 interact in hepatitis?\nTemplate: How does a mutation in gene influence biological process?\nExample: How does a mutation in Ret influence thyroid function?\nFeatures of the queries: 1) They are different from typical Web queries and PubMed queries, both of which usually consist of 1 to 3 keywords; 2) They are generated from structural templates which can be used by a system to identify the query components, the biological object or process.\nThe document collection contains 162,259 Highwire full-text documents in HTML format.\nThe output of the system is a list of passages ranked according to their similarities with the query.\nA passage is defined as 
any span of text that does not include the HTML paragraph tag (i.e., <P> or <\/P>).\nA passage could be a part of a sentence, a sentence, a set of consecutive sentences or a paragraph (i.e., the whole span of text that are inside of <P> and <\/P> HTML tags).\nThis is a passage-level information retrieval problem with the attempt to put biologists in contexts where relevant information is provided.\n3.\nTECHNIQUES AND METHODS\nWe approached the problem by first retrieving the top-k most relevant paragraphs, then extracting passages from these paragraphs, and finally ranking the passages.\nIn this process, we employed several techniques and methods, which will be introduced in this section.\nFirst, we give two definitions: Definition 3.1 A concept is 1) a entry term in the MeSH ontology, or 2) a gene symbol in the Entrez gene database.\nThis definition of concept can be generalized to include other biomedical dictionary terms.\nDefinition 3.2 A semantic type is a category defined in the Semantic Network of the Unified Medical Language System (UMLS) [14].\nThe current release of the UMLS Semantic Network contains 135 semantic types such as \"Disease or Syndrome\".\nEach entry term in the MeSH ontology is assigned one or more semantic types.\nEach gene symbol in the Entrez gene database maps to the semantic type \"Gene or Genome\".\nIn addition, these semantic types are linked by 54 relationships.\nFor example, \"Antibiotic\" prevents \"Disease or Syndrome\".\nThese relationships among semantic types represent general biomedical knowledge.\nWe utilized these semantic types and their relationships to identify related concepts.\nThe rest of this section is organized as follows: in section 3.1, we explain how the concepts are identified within a query.\nIn section 3.2, we specify five different types of domain-specific knowledge and introduce how they are compiled.\nIn section 3.3, we present our conceptual IR model.\nFinally, our strategy for passage extraction is 
described in section 3.4.\n3.1 Identifying concepts within a query\nA concept, defined in Definition 3.1, is a gene symbol or a MeSH term.\nWe make use of the query templates to identify gene symbols.\nFor example, the query \"How do HMG and HMGB1 interact in hepatitis?\"\nis derived from the template \"How do genes interact in organ function?\"\n.\nIn this case, \"HMG\" and \"HMGB1\" will be identified as gene symbols.\nIn cases where the query templates are not provided, programs for recognition of gene symbols within texts are needed.\nWe use the query translation functionality of PubMed to extract MeSH terms in a query.\nThis is done by submitting the whole query to PubMed, which will then return a file in which the MeSH terms in the query are labeled.\nIn Table 3.1, three MeSH terms within the query \"What is the role of gene PRNP in the Mad cow disease?\"\nare found in the PubMed translation: \"encephalopathy, bovine spongiform\" for \"Mad cow disease\", \"genes\" for \"gene\", and \"role\" for \"role\".\nTable 3.1 The PubMed translation of the query \"What is the role of gene PRNP in the Mad cow disease?\"\n.\n3.2 Compiling domain-specific knowledge\nIn this paper, domain-specific knowledge refers to information about concepts and their relationships in a certain domain.\nWe used five types of domain-specific knowledge in the domain of genomics: Type 1.\nSynonyms (terms listed in the thesauruses that refer to the same meaning) Type 2.\nHypernyms (more generic terms, one level only) Type 3.\nHyponyms (more specific terms, one level only) Type 4.\nLexical variants (different forms of the same concept, such as abbreviations.\nThey are commonly used in the literature, but might not be listed in the thesauruses) Type 5.\nImplicitly related concepts (terms that are semantically related and also co-occur more frequently than being independent in the biomedical texts) Knowledge of type 1-3 is retrieved from the following two thesauruses: 1) MeSH, a controlled 
vocabulary maintained by NLM for indexing biomedical literature.\nThe 2007 version of MeSH contains information about 190,000 concepts.\nThese concepts are organized in a tree hierarchy; 2) Entrez Gene, one of the most widely used searchable databases of genes.\nThe current version of Entrez Gene contains information about 1.7 million genes.\nIt does not have a hierarchy.\nOnly synonyms are retrieved from Entrez Gene.\nThe compiling of type 4-5 knowledge is introduced in section 3.2.1 and 3.2.2, respectively.\n3.2.1 Lexical variants\nLexical variants of gene symbols\nNew gene symbols and their lexical variants are regularly introduced into the biomedical literature [7].\nHowever, many reference databases, such as UMLS and Entrez Gene, may not be able to keep track of all this kind of variants.\nFor example, for the gene symbol \"NF-kappa B\", at least 5 different lexical variants can be found in the biomedical literature: \"NF-kappaB\", \"NFkappaB\", \"NFkappa B\", \"NF-kB\", and \"NFkB\", three of which are not in the current UMLS and two not in the Entrez Gene.\n[3] [21] have shown that expanding gene symbols with their lexical variants improved the retrieval effectiveness of their biomedical IR systems.\nIn our system, we employed the following two strategies to retrieve lexical variants of gene symbols.\nStrategy I: This strategy is to automatically generate lexical variants according to a set of manually crafted heuristics [3] [21].\nFor example, given a gene symbol \"PLA2\", a variant \"PLAII\" is generated according to the heuristic that Roman numerals and Arabic numerals are convertible when naming gene symbols.\nAnother variant, \"PLA 2\", is also generated since a hyphen or a space could be inserted at the transition between alphabetic and numerical characters in a gene symbol.\nStrategy II: This strategy is to retrieve lexical variants from an abbreviation database.\nADAM [22] is an abbreviation database which covers frequently used abbreviations and 
their definitions (or long-forms) within MEDLINE, the authoritative repository of citations from the biomedical literature maintained by the NLM.\nGiven a query \"How does nucleoside diphosphate kinase (NM23) contribute to tumor progression?\"\n, we first identify the abbreviation \"NM23\" and its long-form \"nucleoside diphosphate kinase\" using the abbreviation identification program from [4].\nSearching the long-form \"nucleoside diphosphate kinase\" in ADAM, other abbreviations, such as \"NDPK\" or \"NDK\", are retrieved.\nThese abbreviations are considered as the lexical variants of \"NM23\".\nLexical variants of MeSH concepts\nADAM is used to obtain the lexical variants of MeSH concepts as well.\nAll the abbreviations of a MeSH concept in ADAM are considered as lexical variants to each other.\nIn addition, those long-forms that share the same abbreviation with the MeSH concept and are different by an edit distance of 1 or 2 are also considered as its lexical variants.\nAs an example, \"human papilloma viruses\" and \"human papillomaviruses\" have the same abbreviation \"HPV\" in ADAM and their edit distance is 1.\nThus they are considered as lexical variants to each other.\nThe edit distance between two strings is measured by the minimum number of insertions, deletions, and substitutions of a single character required to transform one string into the other [12].\n3.2.2 Implicitly related concepts\nMotivation: In some cases, there are few documents in the literature that directly answer a given query.\nIn this situation, those documents that implicitly answer their questions or provide supporting information would be very helpful.\nFor example, there are few documents in PubMed that directly answer the query \"What is the role of the genes HNF4 and COUP-tf I in the suppression in the function of the liver?\"\n.\nHowever, there exist some documents about the role of \"HNF4\" and \"COUP-tf I\" in regulating \"hepatitis B virus\" transcription.\nIt is very likely 
that the biologists would be interested in these documents because \"hepatitis B virus\" is known as a virus that could cause serious damage to the function of liver.\nIn the given example, \"hepatitis B virus\" is not a synonym, hypernym, hyponym, nor a lexical variant of any of the query concepts, but it is semantically related to the query concepts according to the UMLS Semantic Network.\nWe call this type of concepts \"implicitly related concepts\" of the query.\nThis notion is similar to the \"B-term\" used in [19] for relating two disjoint literatures for biomedical hypothesis generation.\nThe difference is that we utilize the semantic relationships among query concepts to exclusively focus on concepts of certain semantic types.\nA query q in format (1) of section 2 can be represented by q = (A, C) where A is the set of biological objects and C is the set of biological processes.\nThose concepts that are semantically related to both A and C according to the UMLS Semantic Network are considered as the implicitly related concepts of the query.\nIn the above example, A = {\"HNF4\", \"COUP-tf I\"}, C = {\"function of liver\"}, and \"hepatitis B virus\" is one of the implicitly related concepts.\nWe make use of the MEDLINE database to extract the implicitly related concepts.\nThe 2006 version of MEDLINE database contains citations (i.e., abstracts, titles, and etc.) of over 15 million biomedical articles.\nEach document in MEDLINE is manually indexed by a list of MeSH terms to describe the topics covered by that document.\nImplicitly related concepts are extracted and ranked in the following steps: Step 1.\nLet list_A be the set of MeSH terms that are 1) used for indexing those MEDLINE citations having A, and 2) semantically related to A according to the UMLS Semantic Network.\nSimilarly, list_C is created for C. 
Concepts in B = list_A ∩ list_C are considered implicitly related concepts of the query.

Step 2. For each concept b ∈ B, compute the association between b and A using the mutual information measure [5]:

I(b, A) = log [ P(b, A) / (P(b) P(A)) ]

where P(x) = n/N, n is the number of MEDLINE citations having x, and N is the size of MEDLINE. A large value of I(b, A) means that b and A co-occur much more often than they would if they were independent. I(b, C) is computed similarly.

Step 3. Let r(b) = (I(b, A), I(b, C)) for b ∈ B. Given b1, b2 ∈ B, we say r(b1) < r(b2) if I(b1, A) < I(b2, A) and I(b1, C) < I(b2, C). Then the association between b and the query q is measured by:

r(b, q) = |{b' ∈ B : r(b') ≤ r(b)}| / |{b' ∈ B : r(b) ≤ r(b')}|    (2)

The numerator in Formula 2 is the number of concepts in B that are associated with both A and C equally with or less than b. The denominator is the number of concepts in B that are associated with both A and C equally with or more than b. Figure 3.2.2 shows the top 4 implicitly related concepts for the sample query.

Here v1 is a vector of concepts describing the biological object(s) and v2 is a vector of concepts describing the biological process(es). Given a vector of concepts v, let s(v) be the set of concepts in v. The weight of vi is then measured in terms of nv, where v is a vector that contains a subset of the concepts in vi and nv is the number of documents having all the concepts in v. The concept similarity between q and d is then computed from these weights, where αi is a parameter indicating the completeness of vi that document d has covered. αi is measured by:

αi = ( Σ idf_c over c ∈ d and c ∈ vi ) / ( Σ idf_c over c ∈ vi )

where idf_c is the inverse document frequency of concept c.

An example: suppose we have the query "How does Nurr-77 delete T cells before they migrate to the spleen or lymph nodes and how does this impact autoimmunity?". After identifying the concepts in the query, we have:

Figure 3.2.2 Top 4 implicitly related concepts for the query "How do interactions between HNF4 and COUP-TF1
suppress liver function?".

In Figure 3.2.2, the top 4 implicitly related concepts are all highly associated with "liver": "Hepatocytes" are liver cells; "Hepatoblastoma" is a malignant liver neoplasm occurring in young children; the vast majority of "Gluconeogenesis" takes place in the liver; and "Hepatitis B virus" is a virus that can cause serious damage to the function of the liver.

The top-k ranked concepts in B are used for query expansion: if I(b, A) ≥ I(b, C), then b is considered an implicitly related concept of A, and a document having b but not A will receive a partial weight of A. The expansion is similar for C when I(b, A) < I(b, C).

3.3 Conceptual IR model

We now discuss our conceptual IR model. We first give the basic conceptual IR model in section 3.3.1. Then we explain how the domain-specific knowledge is incorporated into the model using query expansion in section 3.3.2. A pseudo-feedback strategy is introduced in section 3.3.3. In section 3.3.4, we give a strategy to improve the ranking by avoiding incorrect matches of abbreviations.

3.3.1 Basic model

Given a query q and a document d, our model measures two similarities: the concept similarity sim_concept(q, d) and the word similarity sim_word(q, d).

Concept similarity

Two vectors, v1 and v2, are derived from a query q. Suppose that some document frequencies of different combinations of the concepts are given. The weight of vi is then computed accordingly (note that there does not exist a document having all the concepts in v2).

Now suppose a document d contains the concepts 'Nurr-77', 'T cells', 'spleen', and 'lymph nodes', but not 'autoimmunity'. Since d covers all of v1, α1 = 1, and the value of the parameter for the process vector is computed as:

α2 = ( idf('T cells') + idf('spleen') + idf('lymph nodes') ) / ( idf('T cells') + idf('spleen') + idf('lymph nodes') + idf('autoimmunity') )

Word similarity

The similarity between q and d on the word level is computed using Okapi [17]:

sim_word(q, d) = Σ over w ∈ q of ( (k1 + 1) × tf / (K + tf) ) × log( (N − n + 0.5) / (n + 0.5) )

where N is the size of the document collection; n is the number of
documents containing w; K = k1 × ((1 − b) + b × dl/avdl), with constants k1 = 1.2 and b = 0.75; dl is the document length of d and avdl is the average document length; tf is the term frequency of w within d.

The model

Given two documents d1 and d2 and the same query q, we say sim(q, d1) > sim(q, d2), i.e., d1 will be ranked higher than d2, if either

1) sim_concept(q, d1) > sim_concept(q, d2), or
2) sim_concept(q, d1) = sim_concept(q, d2) and sim_word(q, d1) > sim_word(q, d2).

This conceptual IR model emphasizes the similarity on the concept level. A similar model, applied to a non-biomedical domain, has been given in [15].

3.3.2 Incorporating domain-specific knowledge

Given a concept c, a vector u is derived by incorporating its domain-specific knowledge, where u1 is a vector of its synonyms, hyponyms, and lexical variants; u2 is a vector of its hypernyms; and u3 is a vector of its implicitly related concepts. An occurrence of any term in u1 is counted as an occurrence of c. idf_c in Formula 3 is updated with a term weight w_t, where w_t = λ × idf_c. The weighting factor λ is an empirical tuning parameter determined as follows:

1. λ = 1 if t is the original concept, its synonym, its hyponym, or its lexical variant;
2. λ = 0.95 if t is a hypernym;
3. λ = 0.90 × (k − i + 1) / k if t is an implicitly related concept, where k is the number of selected top-ranked implicitly related concepts (see section 3.2.2) and i is the position of t in the ranking of implicitly related concepts.

3.3.3 Pseudo-feedback

Pseudo-feedback is a technique commonly used to improve retrieval performance by adding new terms to the original query. We used a modified pseudo-feedback strategy described in [2].

Step 1. Let C be the set of concepts in the top 15 ranked documents. For each concept c in C, compute the similarity between c and the query q; the computation of sim(q, c) can be found in [2].

Step 2. Select the top-k concepts ranked by sim(q, c).

Step 3. Associate each selected concept c' with the concept cq in q that 1) has the same semantic type as c', and 2) is most related to c'
among all the concepts in q. The association between c' and cq is computed by:

I(c', cq) = log [ P(c', cq) / (P(c') P(cq)) ]

where P(x) = n/N, n is the number of documents having x, and N is the size of the document collection. A document having c' but not cq receives a weight of (0.5 × (k − i + 1) / k) × idf_cq, where i is the position of c' in the ranking of step 2.

3.3.4 Avoiding incorrect matches of abbreviations

Some gene symbols are very short and thus ambiguous. For example, the gene symbol "APC" could be the abbreviation of many non-gene long-forms, such as "air pollution control", "aerobic plate count", or "argon plasma coagulation". This step avoids incorrect matches of abbreviations in the top ranked documents. Given an abbreviation X with long-form L in the query, we scan the top-k ranked (k = 1000) documents; when a document containing X is found, we compare L with all the long-forms of X in that document. If none of these long-forms is equal or close to L (i.e., within an edit distance of 1 or 2), then the concept similarity contribution of X is subtracted.

3.4 Passage extraction

The goal of passage extraction is to highlight the most relevant fragments of text in paragraphs. A passage is defined as any span of text that does not include an HTML paragraph tag (i.e., <P> or </P>). A passage can be part of a sentence, a sentence, a set of consecutive sentences, or a whole paragraph (i.e., the entire span of text between the <P> and </P> tags). It is also possible to have more than one relevant passage in a single paragraph.

Our strategy for passage extraction assumes that the optimal passage(s) in a paragraph should contain all the query concepts that the whole paragraph contains, and should have a higher density of query concepts than other fragments of text in the paragraph. Suppose we have a query q and a paragraph p represented by a sequence of sentences p = s1 s2 ... sn. Let C be the set of concepts in q that occur in
p, and let S = ∅.

Step 1. For each sequence of consecutive sentences si si+1 ... sj, 1 ≤ i ≤ j ≤ n, let S = S ∪ {si si+1 ... sj} if si si+1 ... sj satisfies: 1) every query concept in C occurs in si si+1 ... sj, and 2) there does not exist k, i ≤ k < j, such that every query concept in C occurs in si si+1 ... sk or in sk+1 sk+2 ... sj. Condition 1 requires si ... sj to contain all the query concepts in p, and condition 2 requires si ... sj to be minimal.

Step 2. Let L = min {j − i + 1 : si si+1 ... sj ∈ S}. For every si si+1 ... sj in S, let S = S − {si si+1 ... sj} if (j − i + 1) > L. This step removes those sequences of sentences in S that have a lower density of query concepts.

Step 3. For every two sequences of consecutive sentences in S that are adjacent or overlapping (condition (5)), merge them into a single sequence. Repeat this step until condition (5) does not apply to any two sequences of consecutive sentences in S. This step merges those sequences of sentences in S that are adjacent or overlapped.

Finally, the remaining sequences of sentences in S are returned as the optimal passages in the paragraph p with respect to the query.

4. EXPERIMENTAL RESULTS

The evaluation of our techniques and the experimental results are given in this section. We first describe the datasets and evaluation metrics used in our experiments and then present the results.

4.1 Data sets and evaluation metrics

Our experiments were performed on the platform of the Genomics track of TREC 2006. The document collection contains 162,259 full-text documents from 49 Highwire biomedical journals. The set of queries consists of 28 queries collected from real biologists. Performance is measured on three different levels (passage, aspect, and document) to provide better insight into how the question is answered from different perspectives.

Passage MAP: As described in [8], this is a character-based precision calculated as follows: "At each relevant retrieved passage, precision will be computed as the fraction of characters overlapping with the gold
standard passages divided by the total number of characters included in all nominated passages from this system for the topic up until that point. Similar to regular MAP, relevant passages that were not retrieved will be added into the calculation as well, with precision set to 0 for relevant passages not retrieved. Then the mean of these average precisions over all topics will be calculated to compute the mean average passage precision."

Aspect MAP: A question can be addressed from different aspects. For example, the question "What is the role of gene PRNP in the Mad cow disease?" could be answered from aspects like "Diagnosis", "Neurologic manifestations", or "Prions/Genetics". This measure indicates how comprehensively the question is answered.

Document MAP: This is the standard IR measure. Precision is measured at every point where a relevant document is obtained and then averaged over all relevant documents to obtain the average precision for a given query. For a set of queries, the mean of the average precisions for all queries is the MAP of that IR system.

The output of the system is a list of passages ranked according to their similarities with the query. The performances on the three levels are then calculated based on the ranking of the passages.

4.2 Results

The Wilcoxon signed-rank test was employed to determine the statistical significance of the results. In the tables of the following sections, statistically significant improvements (at the 5% level) are marked with an asterisk.

4.2.1 Conceptual IR model vs.
term-based model

The initial baseline was established using the word similarity alone, computed by Okapi (Formula 4). Another run, based on our basic conceptual IR model, was performed without query expansion, pseudo-feedback, or abbreviation correction. The experimental result is shown in Table 4.2.1. Our basic conceptual IR model significantly outperforms Okapi on all three levels, which suggests that, although it requires additional effort to identify concepts, retrieval on the concept level can achieve substantial improvements over a purely term-based retrieval model.

4.2.2 Contribution of different types of knowledge

A series of experiments was performed to examine how each type of domain-specific knowledge contributes to the retrieval performance. A new baseline was established using the basic conceptual IR model without incorporating any type of domain-specific knowledge. Five runs were then conducted by adding each individual type of domain-specific knowledge, and a further run was conducted by adding all types of domain-specific knowledge. Results of these experiments are shown in Table 4.2.2.

We found that every available type of domain-specific knowledge improved the performance of passage retrieval. The biggest improvement comes from the lexical variants, which is consistent with the result reported in [3]. This result indicates that biologists are likely to use different variants of the same concept according to their own writing preferences, and these variants might not be collected in the existing biomedical thesauruses. It also suggests that biomedical IR systems can benefit from domain-specific knowledge extracted from the literature by text mining systems. Synonyms provided the second biggest improvement. Hypernyms, hyponyms, and implicitly related concepts provided similar degrees of improvement.

The overall performance is a cumulative result of adding the different types of domain-specific knowledge, and it is better than
any individual addition. The performance is significantly improved (107% on the passage level, 63.1% on the aspect level, and 49.6% on the document level) when the domain-specific knowledge is appropriately incorporated. Although it is not explicitly shown in Table 4.2.2, different types of domain-specific knowledge affect different subsets of queries. More specifically, each of these types (with the exception of the lexical variants, which affect a large number of queries) affects only a few queries, but for those affected queries the improvement is significant. As a consequence, the cumulative improvement is very significant.

4.2.3 Pseudo-feedback and abbreviation correction

Using "Baseline + All" from Table 4.2.2 as a new baseline, the contribution of abbreviation correction and pseudo-feedback is given in Table 4.2.3. There is little improvement from avoiding incorrect matches of abbreviations. Pseudo-feedback contributed about a 4.6% improvement in passage retrieval.

4.2.4 Performance compared with best-reported results

We compared our result with the results reported in the Genomics track of TREC 2006 [8], under the conditions that 1) the systems are automatic systems and 2) passages are extracted from paragraphs. The performance of our system relative to the best reported results is shown in Table 4.2.4. (In TREC 2006, some systems returned whole paragraphs as passages; as a consequence, excellent retrieval results were obtained on the document and aspect levels at the expense of performance on the passage level. We do not include the results of such systems here.)

Table 4.2.4 Performance compared with best-reported results.

The best reported results in the first row of Table 4.2.4 on the three levels (passage, aspect, and document) are from different systems. Our result is from a single run on passage retrieval, which is better than the best reported result by 22.68% in passage retrieval and, at the same time, 9.14% better
in aspect retrieval and 1.33% better in document retrieval. (Since the average precision of each individual query was not reported, we cannot apply the Wilcoxon signed-rank test to calculate the significance of the difference between our performance and the best reported result.)

Table 4.2.1 Basic conceptual IR model vs. term-based model
Table 4.2.2 Contribution of different types of domain-specific knowledge
Table 4.2.3 Contribution of abbreviation correction and pseudo-feedback

A separate experiment was done using a second testbed, the Ad Hoc Task of TREC Genomics 2005, to evaluate our knowledge-intensive conceptual IR model for document retrieval of biomedical literature. The overall performance in terms of MAP is 35.50%, which is about 22.92% above the best reported result [9]. Note that performance was only measured on the document level for the Ad Hoc Task of TREC Genomics 2005.

5. RELATED WORK

Many studies have used manually crafted thesauruses or knowledge databases created by text mining systems to improve retrieval effectiveness, based on either word-statistical retrieval systems or conceptual retrieval systems. [11] and [1] assessed query expansion using the UMLS Metathesaurus. Based on a word-statistical retrieval system, [11] used definitions and different types of thesaurus relationships for query expansion, and a deteriorated performance was reported. [1] expanded queries with phrases and UMLS concepts determined by MetaMap, a program which maps biomedical text to UMLS concepts, and no significant improvement was shown. We used MeSH, Entrez Gene, and other non-thesaurus knowledge resources, such as an abbreviation database, for query expansion. A critical difference between our work and that in [11] and [1] is that our retrieval model is based on concepts, not on individual words.

The Genomics track in TREC provides a common platform to evaluate methods and techniques proposed by various groups for biomedical information retrieval. As
summarized in [8] [9] [10], many groups utilized domain-specific knowledge to improve retrieval effectiveness. Among these groups, [3] assessed both thesaurus-based knowledge, such as gene information, and non-thesaurus-based knowledge, such as lexical variants of gene symbols, for query expansion. They showed that query expansion with acronyms and lexical variants of gene symbols produced the biggest improvement, whereas query expansion with gene information from gene databases deteriorated the performance. [21] used a similar approach for generating lexical variants of gene symbols and reported significant improvements. Our system utilizes more types of domain-specific knowledge, including hyponyms, hypernyms, and implicitly related concepts. In addition, under the conceptual retrieval framework, we examined more comprehensively the effects of the different types of domain-specific knowledge on performance.

[20] and [15] utilized WordNet, a database of English words and their lexical relationships developed by Princeton University, for query expansion in the non-biomedical domain. In their studies, queries were expanded using lexical semantic relations such as synonyms, hypernyms, or hyponyms. Little benefit was shown in [20], due to the ambiguity of query terms that have different meanings in different contexts: when synonyms with multiple meanings are added to the query, a substantial number of irrelevant documents are retrieved. In the biomedical domain, this kind of query-term ambiguity is relatively less frequent because, although abbreviations are highly ambiguous, general biomedical concepts usually have only one meaning in a thesaurus such as UMLS, whereas a term in WordNet usually has multiple meanings (represented as synsets in WordNet). Besides, we have implemented a post-ranking step to reduce the number of incorrect matches of abbreviations, which will hopefully decrease the negative impact
caused by the abbreviation ambiguity.

The retrieval model in [15] emphasizes the similarity between a query and a document on the phrase level, assuming that phrases are more important than individual words when retrieving documents. Although the assumption is similar, our conceptual model is based on biomedical concepts, not phrases. [13] presented a good study of the role of knowledge in the document retrieval of clinical medicine, showing that appropriate use of semantic knowledge in a conceptual retrieval framework can yield substantial improvements. Although the retrieval model is similar, we study the domain of genomics, in which the problem structure and task knowledge are not as well-defined as in the domain of clinical medicine [18]. Also, our similarity function is very different from that in [13].

In summary, our approach differs from previous work in four important ways. First, we present a case study of conceptual retrieval in the domain of genomics, where many knowledge resources can be used to improve the performance of biomedical IR systems. Second, we have studied more types of domain-specific knowledge than previous researchers and carried out more comprehensive experiments to examine the effects of the different types of domain-specific knowledge on performance. Third, although some of the techniques seem similar to previously published ones, they are actually quite different in their details. For example, in our pseudo-feedback process, we require that the unit of feedback is a concept and that the concept has the same semantic type as a query concept; this ensures that our conceptual model of retrieval can be applied. As another example, the way in which implicitly related concepts are extracted in this
paper is significantly different from that given in [19]. Finally, our conceptual IR model is actually based on complex concepts, because some biomedical meanings, such as biological processes, are represented by multiple simple concepts.

6. CONCLUSION

This paper proposed a conceptual approach to utilizing domain-specific knowledge in an IR system to improve its effectiveness in retrieving biomedical literature. We specified five different types of domain-specific knowledge (i.e., synonyms, hyponyms, hypernyms, lexical variants, and implicitly related concepts) and examined their effects on performance. We also evaluated two other techniques, pseudo-feedback and abbreviation correction. Experimental results have shown that appropriate use of domain-specific knowledge in a conceptual IR model yields significant improvements (23%) in passage retrieval over the best known results. In our future work, we will explore the use of other existing knowledge resources, such as UMLS and Wikipedia, and evaluate techniques such as disambiguation of gene symbols for improving retrieval effectiveness. The application of our conceptual IR model in other domains, such as clinical medicine, will also be investigated.

Cross-Lingual Query Suggestion Using Query Logs of Different Languages

ABSTRACT

Query suggestion aims to suggest relevant queries for a given query, helping users better specify their information needs. Previously, the suggested terms are mostly in the same language as the input query.
In this paper, we extend it to cross-lingual query suggestion (CLQS): for a query in one language, we suggest similar or relevant queries in other languages. This is very important to scenarios of cross-language information retrieval (CLIR) and cross-lingual keyword bidding for search engine advertisement. Instead of relying on existing query translation technologies for CLQS, we present an effective means to map the input query of one language to queries of the other language in the query log. Important monolingual and cross-lingual information, such as word translation relations and word co-occurrence statistics, is used to estimate the cross-lingual query similarity with a discriminative model. Benchmarks show that the resulting CLQS system significantly outperforms a baseline system based on dictionary-based query translation. Besides, the resulting CLQS is tested on French to English CLIR tasks on TREC collections. The results demonstrate higher effectiveness than the traditional query translation methods.

Wei Gao, Kam-Fai Wong (The Chinese University of Hong Kong, Hong Kong, China, {wgao, kfwong}@se.cuhk.edu.hk); Cheng Niu, Ming Zhou, Jian Hu, Hsiao-Wuen Hon (Microsoft Research Asia, Beijing, China, {chengniu, mingzhou, jianh, hon}@microsoft.com); Jian-Yun Nie (Université de Montréal, Montréal, QC, Canada, nie@iro.umontreal.ca)

Categories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - Query formulation

General Terms: Algorithms, Performance, Experimentation, Theory.

1. INTRODUCTION

Query suggestion is a functionality that helps users of a search engine to better specify their information need, by narrowing down or expanding the scope of the search with synonymous and relevant queries, or by suggesting related queries that have been frequently used by other users. Search engines such as Google, Yahoo!, MSN, and Ask Jeeves have all implemented query suggestion functionality as a valuable addition to their core search method. In addition, the same technology has been leveraged to recommend bidding terms to online advertisers in the pay-for-performance search market [12].

Query suggestion is closely related to query expansion, which extends the original query with new search terms to narrow the scope of the search. Different from query expansion, query suggestion aims to suggest full queries that have been formulated by users, so that the query integrity and coherence are preserved in the suggested queries. Typical methods for query suggestion exploit query logs and document collections, by
assuming that in the same period of time, many users share the same or similar interests, which can be expressed in different manners [12, 14, 26]. By suggesting the related and frequently used formulations, it is hoped that the new query can cover more relevant documents. However, all of the existing studies dealt with monolingual query suggestion; to our knowledge, there is no published study on cross-lingual query suggestion (CLQS). CLQS aims to suggest related queries, but in a different language. It has wide applications on the World Wide Web: for cross-language search or for suggesting relevant bidding terms in a different language.

CLQS can be approached as a query translation problem, i.e., suggesting queries that are translations of the original query. Dictionaries, large parallel corpora, and existing commercial machine translation systems can be used for translation. However, these approaches usually rely on static knowledge and data, and cannot effectively reflect the quickly shifting interests of Web users. Moreover, there are some problems with translated queries in the target language. For instance, the translated terms can be reasonable translations, yet not popularly used in the target language. For example, the French query "aliment biologique" is translated into "biologic food" by the Google translation tool (http://www.google.com/language_tools), but the correct formulation nowadays should be "organic food". Therefore, there exist many mismatches between the translated terms and the terms really used in the target language. This mismatch makes the suggested terms in the target language ineffective.

A natural way to solve this mismatch is to map the queries in the source language to the queries in the target language using the query log of a search engine. We exploit the fact that the users of search engines in the same period of time have similar interests, and they submit queries on similar topics in different languages. As a result, a
query written in a source language likely has an equivalent in a query log in the target language. In particular, if the user intends to perform CLIR, the original query is even more likely to have its correspondent included in the target-language query log. Therefore, if a candidate for CLQS appears often in the query log, it is more likely to be the appropriate one to suggest.

In this paper, we propose a method for calculating the similarity between a source-language query and a target-language query by exploiting, in addition to the translation information, a wide spectrum of bilingual and monolingual information, such as term co-occurrences and query logs with click-through data. A discriminative model is used to learn the cross-lingual query similarity based on a set of manually translated queries. The model is trained by optimizing the cross-lingual similarity to best fit the monolingual similarity between one query and the other query's translation. Besides being benchmarked as an independent module, the resulting CLQS system is tested as a new means of query translation in a CLIR task on TREC collections. The results show that this new translation method is more effective than the traditional query translation method.

The remainder of this paper is organized as follows: Section 2 introduces related work; Section 3 describes in detail the discriminative model for estimating cross-lingual query similarity; Section 4 presents a new CLIR approach using cross-lingual query suggestion as a bridge across language boundaries; Section 5 discusses the experiments and benchmarks; finally, the paper is concluded in Section 6.

2. RELATED WORK

Most approaches to CLIR perform a query translation followed by a monolingual IR. Typically, queries are translated either using a bilingual dictionary [22], machine translation software [9], or a parallel corpus [20]. Despite the various types of resources used, out-of-vocabulary (OOV) words and translation
disambiguation are the two major bottlenecks for CLIR [20]. In [7, 27], OOV term translations are mined from the Web using a search engine. In [17], bilingual knowledge is acquired based on anchor text analysis. In addition, word co-occurrence statistics in the target language have been leveraged for translation disambiguation [3, 10, 11, 19].

(2) http://www.google.com/language_tools

Nevertheless, it is arguable that accurate query translation may not be necessary for CLIR. Indeed, in many cases it is helpful to introduce words even if they are not direct translations of any query word, but are closely related to the meaning of the query. This observation has led to the development of cross-lingual query expansion (CLQE) techniques [2, 16, 18]. [2] reports the enhancement of CLIR by post-translation expansion. [16] develops a cross-lingual relevancy model by leveraging the cross-lingual co-occurrence statistics in parallel texts. [18] compares multiple CLQE techniques, including pre-translation expansion and post-translation expansion. However, there is a lack of a unified framework to combine the wide spectrum of resources and recent advances in mining techniques for CLQE.

CLQS is different from CLQE in that it aims to suggest full queries that have been formulated by users in another language. As CLQS exploits up-to-date query logs, it is expected that for most user queries, we can find common formulations on these topics in the query log in the target language. Therefore, CLQS also plays a role in adapting the original query formulation to the common formulations of similar topics in the target language.

Query logs have been successfully used for monolingual IR [8, 12, 15, 26], especially for monolingual query suggestion [12] and for relating semantically relevant terms for query expansion [8, 15]. In [1], the target-language query log has been exploited to help query translation in CLIR.

3. ESTIMATING CROSS-LINGUAL QUERY
SIMILARITY
A search engine has a query log containing user queries in different languages over a certain period of time. In addition to query terms, click-through information is also recorded, so we know which documents have been selected by users for each query. Given a query in the source language, our CLQS task is to determine one or several similar queries in the target language from the query log. The key problem with cross-lingual query suggestion is how to learn a similarity measure between two queries in different languages. Although various statistical similarity measures have been studied for monolingual terms [8, 26], most of them are based on term co-occurrence statistics and can hardly be applied directly in cross-lingual settings. In order to define a similarity measure across languages, one has to use at least one translation tool or resource, so the measure is based on both a translation relation and monolingual similarity. As our purpose is to provide an up-to-date query similarity measure, it may not be sufficient to use only a static translation resource. Therefore, we also integrate a method to mine possible translations on the Web; this method is particularly useful for dealing with OOV terms. Given a set of resources of different natures, the next question is how to integrate them in a principled manner. In this paper, we propose a discriminative model to learn the appropriate similarity measure. The principle is as follows: we assume that we have a reasonable monolingual query similarity measure. For any training query example for which a translation exists, its similarity measure (with any other query) is transposed to its translation; therefore, we have the desired cross-language similarity value for this example. We then use a discriminative model to learn the cross-language similarity function that best fits these examples. In the following sections, we first describe the details of the
discriminative model for cross-lingual query similarity estimation. Then we introduce all the features (monolingual and cross-lingual information) that we use in the discriminative model.
3.1 Discriminative Model for Estimating Cross-Lingual Query Similarity
In this section, we propose a discriminative model to learn cross-lingual query similarities in a principled manner. The principle is as follows: for a reasonable monolingual query similarity between two queries, a cross-lingual correspondent can be deduced between one query and the other query's translation. In other words, for a pair of queries in different languages, their cross-lingual similarity should fit the monolingual similarity between one query and the other query's translation. For example, the similarity between the French query "pages jaunes" (i.e., "yellow pages" in English) and the English query "telephone directory" should be equal to the monolingual similarity between the translation of the French query, "yellow pages", and "telephone directory". There are many ways to obtain a monolingual similarity measure between terms, e.g., term co-occurrence based mutual information and chi-square; any of them can be used as the target for the cross-lingual similarity function to fit. In this way, cross-lingual query similarity estimation is formulated as a regression task as follows. Given a source language query q_f, a target language query q_e, and a monolingual query similarity sim_ML, the corresponding cross-lingual query similarity sim_CL is defined as

    sim_CL(q_f, q_e) = sim_ML(T_{q_f}, q_e)    (1)

where T_{q_f} is the translation of q_f in the target language. Based on Equation (1), it is relatively easy to create a training corpus: all it requires is a list of query translations. An existing monolingual query suggestion system can then be used to automatically produce queries similar to each translation, creating the training corpus for cross-lingual similarity estimation. Another advantage is that it is
fairly easy to make use of arbitrary information sources within a discriminative modeling framework to achieve optimal performance. In this paper, the support vector machine (SVM) regression algorithm [25] is used to learn the cross-lingual query similarity function. Given a vector of feature functions f between q_f and q_e, sim_CL(q_f, q_e) is represented as an inner product between a weight vector and the feature vector in a kernel space:

    sim_CL(q_f, q_e) = w · φ(f(q_f, q_e))    (2)

where φ is the mapping from the input feature space onto the kernel space, and w is the weight vector in the kernel space, which is learned by SVM regression training. Once the weight vector is learned, Equation (2) can be used to estimate the similarity between queries of different languages. We want to point out that instead of regression, one could simplify the task as binary or ordinal classification, in which case CLQS candidates would be categorized according to discontinuous class labels, e.g., relevant and irrelevant, or a series of relevance levels, e.g., strongly relevant, weakly relevant, and irrelevant. In either case, one could resort to discriminative classification approaches, such as an SVM or a maximum entropy model, in a straightforward way. However, the regression formalism enables us to fully rank the suggested queries based on the similarity score given by Equation (1). Equations (1) and (2) constitute a regression model for cross-lingual query similarity estimation. In the following sections, the monolingual query similarity measure (Section 3.2) and the feature functions used for SVM regression (Section 3.3) are presented.
3.2 Monolingual Query Similarity Measure Based on Click-through Information
Any monolingual term similarity measure can be used as the regression target. In this paper, we select the monolingual query similarity measure presented in [26], which reports good performance by using search users'
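The regression principle of Equations (1) and (2) can be sketched as follows, using ordinary least squares with a linear kernel as a simple stand-in for the kernelized SVM regression that the paper actually uses; the feature vectors and target similarities in the test are invented for illustration.

```python
# Sketch of the regression behind Equations (1) and (2): learn a weight
# vector w so that w . f(q_f, q_e) fits the monolingual similarity
# target sim_ML(T_{q_f}, q_e). Ordinary least squares stands in for
# kernelized SVM regression here.

def fit_weights(features, targets, ridge=1e-6):
    """Solve (X^T X + ridge*I) w = X^T y by Gaussian elimination."""
    n = len(features[0])
    A = [[sum(x[i] * x[j] for x in features) + (ridge if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    b = [sum(x[i] * y for x, y in zip(features, targets)) for i in range(n)]
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

def sim_cl(w, feature_vector):
    """Equation (2) with a linear kernel: inner product of w and features."""
    return sum(wi * xi for wi, xi in zip(w, feature_vector))
```

Once trained on (feature vector, monolingual similarity) pairs, `sim_cl` scores an arbitrary cross-lingual query pair from its features, exactly as Equation (2) does in the kernel space.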
click-through information in query logs. The reason for choosing this monolingual similarity is that it is defined in a context similar to ours: a user log that reflects users' intentions and behavior. Therefore, we can expect that the cross-language query similarity learned from it will also reflect users' intentions and expectations. Following [26], our monolingual query similarity is defined by combining query content-based similarity and click-through commonality in the query log. First, the content similarity between two queries p and q is defined as

    similarity_content(p, q) = KN(p, q) / Max(kn(p), kn(q))    (3)

where kn(x) is the number of keywords in a query x, and KN(p, q) is the number of keywords common to the two queries. Second, the click-through-based similarity is defined as

    similarity_click-through(p, q) = RD(p, q) / Max(rd(p), rd(q))    (4)

where rd(x) is the number of clicked URLs for a query x, and RD(p, q) is the number of URLs clicked in common for the two queries. Finally, the similarity between two queries is a linear combination of the content-based and click-through-based similarities:

    similarity(p, q) = α * similarity_content(p, q) + β * similarity_click-through(p, q)    (5)

where α and β weight the relative importance of the two similarity measures. In this paper, we set α = 0.4 and β = 0.6, following the practice in [26]. Queries whose similarity with another query exceeds a threshold are regarded as relevant monolingual query suggestions (MLQS) for the latter. In this paper, the threshold is set to 0.9 empirically.
3.3 Features Used for Learning the Cross-Lingual Query Similarity Measure
This section presents the extraction of candidate relevant queries from the log with the assistance of various monolingual and bilingual resources. Meanwhile, feature functions over the source query and the cross-lingual relevant candidates are defined. Some of the
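Equations (3)-(5) can be transcribed directly; in this sketch queries are whitespace-separated keyword strings and `clicks` maps a query to the set of URLs clicked for it (the example queries and click sets in the test are invented).

```python
# Direct transcription of Equations (3)-(5): the monolingual query
# similarity of [26], combining keyword overlap and click-through
# overlap with weights alpha = 0.4 and beta = 0.6.

def monolingual_similarity(p, q, clicks, alpha=0.4, beta=0.6):
    p_words, q_words = set(p.split()), set(q.split())
    content = len(p_words & q_words) / max(len(p_words), len(q_words))  # Eq. (3)
    p_urls, q_urls = clicks.get(p, set()), clicks.get(q, set())
    if p_urls and q_urls:
        click = len(p_urls & q_urls) / max(len(p_urls), len(q_urls))    # Eq. (4)
    else:
        click = 0.0
    return alpha * content + beta * click                               # Eq. (5)
```

Two queries sharing no keywords can still score above zero through common clicks, which is what lets the measure relate formulations such as "yellow pages" and "telephone directory".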
resources being used here, such as the bilingual lexicon and parallel corpora, were used for query translation in previous work. Note, however, that we employ them here as a means of finding relevant candidates in the log rather than of acquiring accurate translations.
3.3.1 Bilingual Dictionary
In this subsection, a built-in-house bilingual dictionary containing 120,000 unique entries is used to retrieve candidate queries. Since multiple translations may be associated with each source word, co-occurrence based translation disambiguation is performed [3, 10]. The process is as follows. Given an input query q_f = {w_f1, w_f2, ..., w_fn} in the source language, for each query term w_fi a set of unique translations is provided by the bilingual dictionary D: D(w_fi) = {t_i1, t_i2, ..., t_im}. The cohesion between the translations of two query terms is measured using mutual information, computed as

    MI(t_ij, t_kl) = P(t_ij, t_kl) log( P(t_ij, t_kl) / (P(t_ij) P(t_kl)) )    (6)

where P(t_ij, t_kl) = C(t_ij, t_kl) / N and P(t_ij) = C(t_ij) / N. Here C(x, y) is the number of queries in the log containing both x and y, C(x) is the number of queries containing term x, and N is the total number of queries in the log. Based on the term-term cohesion defined in Equation (6), all possible query translations are ranked using the sum of the term-term cohesions:

    S_dict(T) = Σ_{i,k, i≠k} MI(t_ij, t_kl)

The set of the top-4 query translations is denoted TS(q_f). For each possible query translation T ∈ TS(q_f), we retrieve all the queries containing the same keywords as T from the target language log. The retrieved queries are candidate target queries, and each is assigned S_dict(T) as the value of the feature Dictionary-based Translation Score.
3.3.2 Parallel Corpora
Parallel corpora are precious resources for bilingual knowledge acquisition. Unlike a bilingual dictionary, the bilingual knowledge learned from parallel corpora assigns a probability to each translation candidate, which is useful for acquiring dominant query translations. In this paper, the Europarl corpus (a set of parallel French and English texts from the proceedings of the European Parliament) is used. The corpus is first sentence-aligned, and word alignments are then derived by training IBM translation model 1 [4] using GIZA++ [21]. The learned bilingual knowledge is used to extract candidate queries from the query log as follows. Given a pair of queries, q_f in the source language and q_e in the target language, the Bi-Directional Translation Score is defined as

    S_IBM1(q_f, q_e) = p_IBM1(q_e | q_f) p_IBM1(q_f | q_e)    (7)

where p_IBM1(y | x) is the word sequence translation probability given by IBM model 1, which has the form

    p_IBM1(y | x) = (1 / (|x| + 1)^{|y|}) Π_{j=1}^{|y|} Σ_{i=0}^{|x|} p(y_j | x_i)    (8)

where p(y_j | x_i) is the word-to-word translation probability derived from the word-aligned corpora. The reason to use bidirectional
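The co-occurrence based disambiguation of Section 3.3.1 can be sketched as follows: every combination of per-word dictionary translations is scored by the summed pairwise mutual information of Equation (6), estimated from a query log; the toy log and dictionary in the test are invented.

```python
# Sketch of co-occurrence based translation disambiguation (Section
# 3.3.1): rank candidate query translations T by
# S_dict(T) = sum of pairwise MI (Equation (6)) over the query log.
import math
from itertools import product, combinations

def mi(x, y, log_queries):
    """Equation (6): mutual information of terms x, y over a list of
    queries, each represented as a set of terms."""
    n = len(log_queries)
    c_xy = sum(1 for q in log_queries if x in q and y in q)
    c_x = sum(1 for q in log_queries if x in q)
    c_y = sum(1 for q in log_queries if y in q)
    if c_xy == 0 or c_x == 0 or c_y == 0:
        return 0.0
    p_xy, p_x, p_y = c_xy / n, c_x / n, c_y / n
    return p_xy * math.log(p_xy / (p_x * p_y))

def rank_translations(source_terms, dictionary, log_queries, top=4):
    """Return the top candidate translations with their S_dict scores."""
    candidates = product(*(dictionary[w] for w in source_terms))
    scored = [(sum(mi(a, b, log_queries) for a, b in combinations(t, 2)), t)
              for t in candidates]
    scored.sort(reverse=True)
    return scored[:top]
```

The highest-scoring combinations correspond to the top-4 set TS(q_f) from which candidate target queries are then retrieved.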
translation probability is to deal with the fact that common words can be considered as possible translations of many words. By using bidirectional translation, we test whether the translation words can be translated back into the source words; this helps focus the translation probability onto the most specific translation candidates. Given an input query q_f, the top 10 queries {q_e} with the highest bidirectional translation scores with q_f are retrieved from the query log, and S_IBM1(q_f, q_e) in Equation (7) is assigned as the value of the feature Bi-Directional Translation Score.
3.3.3 Online Mining for Related Queries
OOV word translation is a major knowledge bottleneck for query translation and CLIR. To overcome this bottleneck, web mining has been exploited in [7, 27] to acquire English-Chinese term translations, based on the observation that Chinese terms may co-occur with their English translations in the same web page. In this section, this web mining approach is adapted to acquire not only translations but also semantically related queries in the target language. It is assumed that if a query in the target language co-occurs with the source query in many web pages, the two are probably semantically related. Therefore, a simple method is to send the source query to a search engine (Google in our case) to retrieve web pages in the target language, in order to find related queries in that language. For instance, by sending the French query "pages jaunes" to search for English pages, English snippets containing the keywords "yellow pages" or "telephone directory" will be returned. However, this simple approach may induce a significant amount of noise due to non-relevant returns from the search engine. In order to improve the relevance of the bilingual snippets, we extend the simple approach with the following query modification: the original query is augmented with the dictionary-based translations of its keywords, which are unified by
the ∧ (AND) and ∨ (OR) operators into a single Boolean query. For example, for a given query q = abc, where the dictionary translation entries for a are {a1, a2, a3}, for b are {b1, b2}, and for c are {c1}, we issue q ∧ (a1 ∨ a2 ∨ a3) ∧ (b1 ∨ b2) ∧ c1 as one web query. From the top 700 returned snippets, the 10 most frequent target queries are identified and associated with the feature Frequency in the Snippets. Furthermore, we use the Co-Occurrence Double-Check (CODC) measure to weight the association between the source and target queries. The CODC measure was proposed in [6] as an association measure based on snippet analysis, named the Web Search with Double Checking (WSDC) model. In the WSDC model, two objects a and b are considered to have an association if b can be found by using a as a query (forward process) and a can be found by using b as a query (backward process) in web search. The forward process counts the frequency of b in the top N snippets of query a, denoted freq(b@a); similarly, the backward process counts the frequency of a in the top N snippets of query b, denoted freq(a@b). The CODC association score is then defined as

    S_CODC(q_e, q_f) = 0,  if freq(q_e@q_f) × freq(q_f@q_e) = 0
    S_CODC(q_e, q_f) = e^{ log[ (freq(q_e@q_f)/freq(q_f)) × (freq(q_f@q_e)/freq(q_e)) ]^α },  otherwise    (9)

CODC measures the association of two terms in the range between 0 and 1. In the two extreme cases, q_e and q_f have no association when freq(q_e@q_f) = 0 or freq(q_f@q_e) = 0, and the strongest association when freq(q_e@q_f) = freq(q_f) and freq(q_f@q_e) = freq(q_e). In our experiments, α is set to 0.15, following the practice in [6]. Any query q_e mined from the Web is associated with a feature CODC Measure whose value is S_CODC(q_e, q_f).
3.3.4 Monolingual Query Suggestion
For all the candidate queries Q_0 retrieved using the
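Since e^{log(z^α)} = z^α, the CODC score of Equation (9) reduces to the product of the two double-check ratios raised to α, which makes it a one-line computation; the frequency numbers in the test are invented.

```python
# Sketch of the CODC measure of Equation (9). Because
# e^{log(z^alpha)} = z^alpha, the score is the product of the forward
# and backward double-check ratios raised to alpha (0.15 in the paper).

def codc(freq_e_at_f, freq_f_at_e, freq_f, freq_e, alpha=0.15):
    if freq_e_at_f == 0 or freq_f_at_e == 0:
        return 0.0
    z = (freq_e_at_f / freq_f) * (freq_f_at_e / freq_e)
    return z ** alpha
```

The score is 0 when either double-check fails and 1 when each query fully accounts for the other's snippet occurrences, matching the two extreme cases described above.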
dictionary (Section 3.3.1), the parallel corpora (Section 3.3.2), and web mining (Section 3.3.3), the monolingual query suggestion system (described in Section 3.2) is called to produce more related queries in the target language. For each target query q_e, its monolingual source query SQ_ML(q_e) is defined as the query in Q_0 with the highest monolingual similarity with q_e, i.e.,

    SQ_ML(q_e) = argmax_{q' ∈ Q_0} sim_ML(q', q_e)    (10)

The monolingual similarity between q_e and SQ_ML(q_e) is then used as the value of q_e's Monolingual Query Suggestion feature. For any target query q ∈ Q_0, its Monolingual Query Suggestion feature is set to 1. For any query q_e ∉ Q_0, its values of the Dictionary-based Translation Score, Bi-Directional Translation Score, Frequency in the Snippets, and CODC Measure features are set equal to the feature values of SQ_ML(q_e).
3.4 Estimating Cross-Lingual Query Similarity
In summary, four categories of features are used to learn the cross-lingual query similarity. The SVM regression algorithm [25] is used to learn the weights in Equation (2); in this paper, the LibSVM toolkit [5] is used for the regression training. In the prediction stage, the candidate queries are ranked using the cross-lingual query similarity score computed by Equation (2), and queries with similarity scores lower than a threshold are regarded as non-relevant. The threshold is learned using a development data set by fitting MLQS's output.
4. CLIR BASED ON CROSS-LINGUAL QUERY SUGGESTION
In Section 3, we presented a discriminative model for cross-lingual query suggestion. However, objectively benchmarking a query suggestion system is not a trivial task. In this paper, we propose to use CLQS as an alternative to query translation and test its effectiveness in CLIR tasks; good CLIR performance then corresponds to high quality of the suggested queries. Given a source query q_f, a set of
relevant queries {q_e} in the target language is recommended by the cross-lingual query suggestion system. A monolingual IR system based on the BM25 model [23] is then called with each q ∈ {q_e} as a query to retrieve documents. The retrieved documents are re-ranked based on the sum of the BM25 scores from each monolingual retrieval.
5. PERFORMANCE EVALUATION
In this section, we benchmark the cross-lingual query suggestion system, comparing its performance with monolingual query suggestion, studying the contribution of the various information sources, and testing its effectiveness when used in CLIR tasks.
5.1 Data Resources
In our experiments, French and English are selected as the source and target languages respectively; this choice is due to the fact that large-scale query logs are readily available for these two languages. A one-month English query log of the MSN search engine (containing 7 million unique English queries with occurrence frequency greater than 5) is used as the target language log, and a monolingual query suggestion system is built on it. In addition, 5,000 French queries are selected randomly from a French query log (containing around 3 million queries) and manually translated into English by professional French-English translators. Among the 5,000 French queries, 4,171 have their translations in the English query log and are used for CLQS training and testing. Of these 4,171 French queries, 70% are used for cross-lingual query similarity training, 10% are used as development data to determine the relevance threshold, and 20% are used for testing. To retrieve the cross-lingual related queries, a built-in-house French-English bilingual lexicon (containing 120,000 unique entries) and the Europarl corpus are used. Besides benchmarking CLQS as an independent system, CLQS is also tested as a query translation system for CLIR tasks. Based on the observation that the
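The document re-ranking step of Section 4 (summing a document's BM25 scores across the retrievals for all suggested queries) can be sketched as follows; the per-query score dictionaries in the test stand in for real BM25 retrieval runs.

```python
# Sketch of the re-ranking step of Section 4: each suggested
# target-language query retrieves documents with BM25 scores, and the
# final ranking orders documents by their summed score across runs.
from collections import defaultdict

def rerank(per_query_scores):
    """per_query_scores: list of {doc_id: bm25_score} dicts, one per
    suggested query. Returns doc ids sorted by summed score."""
    totals = defaultdict(float)
    for scores in per_query_scores:
        for doc, s in scores.items():
            totals[doc] += s
    return sorted(totals, key=totals.get, reverse=True)
```

A document retrieved by several suggested queries thus accumulates evidence from each of them, which is what lets related (non-translation) suggestions contribute to the final ranking.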
CLIR performance heavily relies on the quality of the suggested queries, this benchmark measures the quality of CLQS in terms of its effectiveness in helping CLIR. To perform this benchmark, we use the documents of the TREC 6 CLIR data (AP88-90 newswire, 750MB) with the officially provided 25 short French-English query pairs (CL1-CL25). This data set was selected because the average query length is 3.3 words, which matches the web query logs we use to train CLQS.
5.2 Performance of Cross-Lingual Query Suggestion
Mean square error (MSE) is used to measure the regression error, and is defined as

    MSE = (1/l) Σ_i ( sim_CL(q_fi, q_ei) − sim_ML(T_{q_fi}, q_ei) )²

where l is the total number of cross-lingual query pairs in the test data. As described in Section 3.4, a relevance threshold is learned using the development data, and only a CLQS with similarity value above the threshold is regarded as truly relevant to the input query. In this way, CLQS can also be benchmarked as a classification task using precision (P) and recall (R), defined as

    P = |S_CLQS ∩ S_MLQS| / |S_CLQS|,    R = |S_CLQS ∩ S_MLQS| / |S_MLQS|

where S_CLQS is the set of relevant queries suggested by CLQS and S_MLQS is the set of relevant queries suggested by MLQS (see Section 3.2). The benchmarking results with various feature configurations are shown in Table 1.

Table 1. CLQS performance with different feature settings (DD: dictionary only; DD+PC: dictionary and parallel corpora; DD+PC+Web: dictionary, parallel corpora, and web mining; DD+PC+Web+MLQS: dictionary, parallel corpora, web mining, and monolingual query suggestion)

    Features          MSE (Regression)   P (Classification)   R (Classification)
    DD                0.274              0.723                0.098
    DD+PC             0.224              0.713                0.125
    DD+PC+Web         0.115              0.808                0.192
    DD+PC+Web+MLQS    0.174              0.796                0.421

Table 1 reports the performance comparison across feature settings. The baseline system (DD) uses a conventional query translation approach, i.e., a bilingual dictionary with
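The evaluation measures of Section 5.2 can be sketched directly from their definitions; the predicted similarities and suggestion sets in the test are invented sample values.

```python
# Sketch of the Section 5.2 measures: regression MSE between predicted
# cross-lingual similarities and their monolingual targets, and
# set-based precision/recall of CLQS suggestions against the MLQS
# reference suggestions.

def mse(predicted, target):
    return sum((p - t) ** 2 for p, t in zip(predicted, target)) / len(predicted)

def precision_recall(s_clqs, s_mlqs):
    overlap = len(s_clqs & s_mlqs)
    return overlap / len(s_clqs), overlap / len(s_mlqs)
```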
co-occurrence-based translation disambiguation. The baseline system covers less than 10% of the suggestions made by MLQS. Using additional features clearly enables CLQS to generate more relevant queries. The most significant improvement in recall is achieved by exploiting MLQS: the final CLQS system is able to generate 42% of the queries suggested by MLQS. Across all feature combinations, there is no significant change in precision. This indicates that our methods improve recall by effectively leveraging various information sources without losing the accuracy of the suggestions. Besides benchmarking CLQS by comparing its output with MLQS output, 200 French queries are randomly selected from the French query log. These queries are double-checked to ensure that they are not in the CLQS training corpus. The CLQS system is then used to suggest relevant English queries for them. On average, 8.7 relevant English queries are suggested for each French query. The resulting 1,740 suggested English queries are manually checked by two professional English-French translators with cross-validation. Among the 1,740 suggested queries, 1,407 are recognized as relevant to the original ones, giving an accuracy of 80.9%. Figure 1 shows an example of CLQS for the French query "terrorisme international" ("international terrorism" in English).
5.3 CLIR Performance
In this section, CLQS is tested on French to English CLIR tasks. We conduct CLIR experiments using the TREC 6 CLIR dataset described in Section 5.1. The CLIR is performed using a query translation system followed by a BM25-based [23] monolingual IR module. The following three systems have been used to perform query translation: (1) CLQS: our CLQS system; (2) MT: the Google French to English machine translation system; (3) DT: a dictionary-based query translation system using co-occurrence statistics for translation disambiguation. The translation disambiguation algorithm is
presented in Section 3.3.1. The monolingual IR performance is also reported as a reference. The average precision of the four IR systems is reported in Table 2, and the 11-point precision-recall curves are shown in Figure 2.

Table 2. Average precision of CLIR on the TREC 6 dataset (Monolingual: monolingual IR system; MT: CLIR based on machine translation; DT: CLIR based on dictionary translation; CLQS: CLQS-based CLIR)

    IR System     Average Precision   % of Monolingual IR
    Monolingual   0.266               100%
    MT            0.217               81.6%
    DT            0.186               69.9%
    CLQS          0.233               87.6%

Figure 1. An example of CLQS for the French query "terrorisme international": international terrorism (0.991); what is terrorism (0.943); counter terrorism (0.920); terrorist (0.911); terrorist attacks (0.898); international terrorist (0.853); world terrorism (0.845); global terrorism (0.833); transnational terrorism (0.821); human rights (0.811); terrorist groups (0.777); patterns of global terrorism (0.762); september 11 (0.734)

(Figure 2 appears here: 11-point precision-recall curves, recall on the x-axis and precision on the y-axis, for Monolingual, MT, DT, and CLQS.)

The benchmark shows that using CLQS as a query translation tool outperforms CLIR based on machine translation by 7.4%, outperforms CLIR based on dictionary translation by 25.2%, and achieves 87.6% of the monolingual IR performance. The effectiveness of CLQS lies in its ability to suggest closely related queries in addition to accurate translations. For example, for query CL14, "terrorisme international" (international terrorism), although the machine translation tool translates the query correctly, the CLQS system still achieves a higher score by recommending many additional related terms such as global terrorism, world terrorism, etc.
(as shown in Figure 1). Another example is query CL6, "La pollution causée par l'automobile" (air pollution due to automobiles). The MT tool provides the translation "the pollution caused by the car", while the CLQS system enumerates the possible synonyms of "car" and suggests the queries car pollution, auto pollution, and automobile pollution; other related queries, such as global warming, are also suggested. For query CL12, "La culture écologique" (organic farming), the MT tool fails to generate the correct translation. Although the correct translation is not in our French-English dictionary either, the CLQS system generates organic farm as a relevant query thanks to successful web mining. The above experiments demonstrate the effectiveness of using CLQS to suggest relevant queries for CLIR enhancement. A related line of research performs query expansion to enhance CLIR [2, 18], so it is interesting to compare the CLQS approach with conventional query expansion approaches. Following [18], post-translation expansion is performed based on pseudo-relevance feedback (PRF) techniques. We first perform CLIR as before; then we use the traditional PRF algorithm described in [24] to select expansion terms. In our experiments, the top 10 terms are selected to expand the original query, and the new query is used to search the collection a second time. The new CLIR performance in terms of average precision is shown in Table 3, and the 11-point P-R curves are drawn in Figure 3. Although enhanced by pseudo-relevance feedback, CLIR using either machine translation or dictionary-based query translation still does not perform as well as the CLQS-based approach. A statistical t-test [13] is conducted to indicate whether the CLQS-based CLIR performs significantly better; pairwise p-values are shown in Table 4. Clearly, CLQS significantly outperforms MT and DT without PRF, as well as DT+PRF, but its superiority over MT+PRF is
not significant. However, when combined with PRF, CLQS significantly outperforms all the other methods. This indicates the higher effectiveness of CLQS in related term identification through leveraging a wide spectrum of resources. Furthermore, post-translation expansion is capable of improving CLQS-based CLIR. This is because CLQS and pseudo-relevance feedback leverage different categories of resources, so the two approaches are complementary.

Table 3. Comparison of average precision (AP) on TREC 6 without and with post-translation expansion. Percentages are relative to the monolingual IR performance.

    IR System     AP without PRF   AP with PRF
    Monolingual   0.266 (100%)     0.288 (100%)
    MT            0.217 (81.6%)    0.222 (77.1%)
    DT            0.186 (69.9%)    0.220 (76.4%)
    CLQS          0.233 (87.6%)    0.259 (89.9%)

Table 4. Results of the pairwise significance t-test. A p-value < 0.05 is considered statistically significant.

    vs.        MT       DT         MT+PRF   DT+PRF
    CLQS       0.0298   3.84e-05   0.1472   0.0282
    CLQS+PRF   0.0026   2.63e-05   0.0094   0.0016

Figure 2. 11-point precision-recall curves on the TREC 6 CLIR data set (Monolingual, MT, DT, CLQS)
Figure 3. 11-point precision-recall curves on the TREC 6 CLIR data set with pseudo-relevance feedback (Monolingual, MT, DT, CLQS)

6. CONCLUSIONS
In this paper, we proposed a new approach to cross-lingual query suggestion by mining relevant queries in different languages from query logs. The key to this problem is to learn a cross-lingual query similarity measure with a discriminative model exploiting multiple monolingual and bilingual resources. The model is trained based on the principle that the cross-lingual similarity should best fit the monolingual similarity between one query and the other query's translation. The baseline CLQS system applies a typical query translation approach, using a bilingual dictionary with co-occurrence-based translation
disambiguation.\nThis approach only covers 10% of the relevant queries suggested by an MLQS system (when the exact translation of the original query is given).\nBy leveraging additional resources such as parallel corpora, web mining and logbased monolingual query expansion, the final system is able to cover 42% of the relevant queries suggested by an MLQS system with precision as high as 79.6%.\nTo further test the quality of the suggested queries, CLQS system is used as a query translation system in CLIR tasks.\nBenchmarked using TREC 6 French to English CLIR task, CLQS demonstrates higher effectiveness than the traditional query translation methods using either bilingual dictionary or commercial machine translation tools.\nThe improvement on TREC French to English CLIR task by using CLQS demonstrates the high quality of the suggested queries.\nThis also shows the strong correspondence between the input French queries and English queries in the log.\nIn the future, we will build CLQS system between languages which may be more loosely correlated, e.g., English and Chinese, and study the CLQS performance change due to the less strong correspondence among queries in such languages.\n7.\nREFERENCES [1] Ambati, V. and Rohini., U. Using Monolingual Clickthrough Data to Build Cross-lingual Search Systems.\nIn Proceedings of New Directions in Multilingual Information Access Workshop of SIGIR 2006.\n[2] Ballestors, L. A. and Croft, W. B. Phrasal Translation and Query Expansion Techniques for Cross-Language Information Retrieval.\nIn Proc.\nSIGIR 1997, pp. 84-91.\n[3] Ballestors, L. A. and Croft, W. B. Resolving Ambiguity for Cross-Language Retrieval.\nIn Proc.\nSIGIR 1998, pp. 64-71.\n[4] Brown, P. F., Pietra, D. S. A., Pietra, D. V. J., and Mercer, R. L.\nThe Mathematics of Statistical Machine Translation: Parameter Estimation.\nComputational Linguistics, 19(2):263311, 1993.\n[5] Chang, C. C. and Lin, C. 
LIBSVM: a Library for Support Vector Machines (Version 2.3). 2001. http://citeseer.ist.psu.edu/chang01libsvm.html
[6] Chen, H.-H., Lin, M.-S., and Wei, Y.-C. Novel Association Measures Using Web Search with Double Checking. In Proc. COLING/ACL 2006, pp. 1009-1016.
[7] Cheng, P.-J., Teng, J.-W., Chen, R.-C., Wang, J.-H., Lu, W.-H., and Chien, L.-F. Translating Unknown Queries with Web Corpora for Cross-Language Information Retrieval. In Proc. SIGIR 2004, pp. 146-153.
[8] Cui, H., Wen, J.-R., Nie, J.-Y., and Ma, W.-Y. Query Expansion by Mining User Logs. IEEE Trans. on Knowledge and Data Engineering, 15(4):829-839, 2003.
[9] Fujii, A. and Ishikawa, T. Applying Machine Translation to Two-Stage Cross-Language Information Retrieval. In Proceedings of the 4th Conference of the Association for Machine Translation in the Americas, pp. 13-24, 2000.
[10] Gao, J. F., Nie, J.-Y., Xun, E., Zhang, J., Zhou, M., and Huang, C. Improving Query Translation for CLIR Using Statistical Models. In Proc. SIGIR 2001, pp. 96-104.
[11] Gao, J. F., Nie, J.-Y., He, H., Chen, W., and Zhou, M. Resolving Query Translation Ambiguity Using a Decaying Co-occurrence Model and Syntactic Dependence Relations. In Proc. SIGIR 2002, pp. 183-190.
[12] Gleich, D. and Zhukov, L. SVD Subspace Projections for Term Suggestion Ranking and Clustering. Technical Report, Yahoo! Research Labs, 2004.
[13] Hull, D. Using Statistical Testing in the Evaluation of Retrieval Experiments. In Proc. SIGIR 1993, pp. 329-338.
[14] Jeon, J., Croft, W. B., and Lee, J. Finding Similar Questions in Large Question and Answer Archives. In Proc. CIKM 2005, pp. 84-90.
[15] Joachims, T. Optimizing Search Engines Using Clickthrough Data. In Proc. SIGKDD 2002, pp. 133-142.
[16] Lavrenko, V., Choquette, M., and Croft, W. B. Cross-Lingual Relevance Models. In Proc. SIGIR 2002, pp.
175-182.\n[17] Lu, W.-H., Chien, L.-F., and Lee, H.-J.\nAnchor Text Mining for Translation Extraction of Query Terms.\nIn Proc.\nSIGIR 2001, pp. 388-389.\n[18] McNamee, P. and Mayfield, J. Comparing Cross-Language Query Expansion Techniques by Degrading Translation Resources.\nIn Proc.\nSIGIR 2002, pp. 159-166.\n[19] Monz, C. and Dorr, B. J. Iterative Translation Disambiguation for Cross-Language Information Retrieval.\nIn Proc.\nSIGIR 2005, pp. 520-527.\n[20] Nie, J.-Y., Simard, M., Isabelle, P., and Durand, R. CrossLanguage Information Retrieval based on Parallel Text and Automatic Mining of Parallel Text from the Web.\nIn Proc.\nSIGIR 1999, pp. 74-81.\n[21] Och, F. J. and Ney, H.\nA Systematic Comparison of Various Statistical Alignment Models.\nComputational Linguistics, 29(1):19-51, 2003.\n[22] Pirkola, A., Hedlund, T., Keshusalo, H., and J\u00e4rvelin, K. Dictionary-Based Cross-Language Information Retrieval: Problems, Methods, and Research Findings.\nInformation Retrieval, 4(3\/4):209-230, 2001.\n[23] Robertson, S. E., Walker, S., Hancock-Beaulieu, M. M., and Gatford, M. OKAPI at TREC-3.\nIn Proc.TREC-3, pp.\n200225, 1995.\n[24] Robertson, S. E. and Jones, K. S. Relevance Weighting of Search Terms.\nJournal of the American Society of Information Science, 27(3):129-146, 1976.\n[25] Smola, A. J. and Sch\u00f6lkopf, B. A. Tutorial on Support Vector Regression.\nStatistics and Computing, 14(3):199-222, 2004.\n[26] Wen, J. R., Nie, J.-Y., and Zhang, H. J. Query Clustering Using User Logs.\nACM Trans.\nInformation Systems, 20(1):59-81, 2002.\n[27] Zhang, Y. and Vines, P. Using the Web for Automated Translation Extraction in Cross-Language Information Retrieval.\nIn Proc.\nSIGIR 2004, pp. 
162-169.

Cross-Lingual Query Suggestion Using Query Logs of Different Languages

ABSTRACT

Query suggestion aims to suggest relevant queries for a given query, which helps users better specify their information needs. Previously, the suggested terms were mostly in the same language as the input query. In this paper, we extend it to cross-lingual query suggestion (CLQS): for a query in one language, we suggest similar or relevant queries in other languages. This is very important for cross-language information retrieval (CLIR) and for cross-lingual keyword bidding in search engine advertisement. Instead of relying on existing query translation technologies for CLQS, we present an effective means to map an input query in one language to queries of the other language in the query log. Important monolingual and cross-lingual information such as word translation relations and word co-occurrence statistics, etc.
are used to estimate the cross-lingual query similarity with a discriminative model. Benchmarks show that the resulting CLQS system significantly outperforms a baseline system based on dictionary-based query translation. Moreover, the resulting CLQS system is tested on French to English CLIR tasks on TREC collections. The results demonstrate higher effectiveness than the traditional query translation methods.

1. INTRODUCTION

Query suggestion is a functionality that helps users of a search engine to better specify their information need, by narrowing down or expanding the scope of the search with synonymous and relevant queries, or by suggesting related queries that have been frequently used by other users. Search engines such as Google, Yahoo!, MSN and Ask Jeeves have all implemented query suggestion functionality as a valuable addition to their core search method. In addition, the same technology has been leveraged to recommend bidding terms to online advertisers in the pay-for-performance search market [12].

Query suggestion is closely related to query expansion, which extends the original query with new search terms to narrow the scope of the search. But different from query expansion, query suggestion aims to suggest full queries that have been formulated by users, so that the query integrity and coherence are preserved in the suggested queries. Typical methods for query suggestion exploit query logs and document collections, by assuming that in the same period of time, many users share the same or similar interests, which can be expressed in different manners [12, 14, 26]. By suggesting the related and frequently used formulations, it is hoped that the new query can cover more relevant documents. However, all of the existing studies dealt with monolingual query suggestion; to our knowledge, there is no published study on cross-lingual query suggestion (CLQS).

CLQS aims to suggest related queries, but in a different language. It has wide applications on the World Wide Web: for cross-language search, or for suggesting relevant bidding terms in a different language. CLQS can be approached as a query translation problem, i.e., to suggest the queries that are translations of the original query. Dictionaries, large parallel corpora and existing commercial machine translation systems can be used for translation. However, these kinds of approaches usually rely on static knowledge and data, and cannot effectively reflect the quickly shifting interests of Web users. Moreover, there are problems with translated queries in the target language. For instance, the translated terms can be reasonable translations, but may not be popularly used in the target language. For example, the French query "aliment biologique" is translated into "biologic food" by the Google translation tool, yet the common formulation nowadays should be "organic food". Therefore, there exist many mismatches between the translated terms and the terms really used in the target language. This mismatch makes the suggested terms in the target language ineffective.

A natural way to solve this mismatch is to map the queries in the source language to the queries in the target language using the query log of a search engine. We exploit the fact that search engine users in the same period of time have similar interests, and submit queries on similar topics in different languages. As a result, a query written in a source language likely has an equivalent in the query log in the target language. In particular, if the user intends to perform CLIR, the original query is even more likely to have its correspondent included in the target language query log. Therefore, if a candidate for CLQS appears often in the query log, it is more likely to be the appropriate one to suggest.

In this paper, we propose a method of calculating the similarity between a source language query and a target language query by exploiting, in addition to the translation information, a wide spectrum of bilingual and monolingual information, such as term co-occurrences and query logs with click-through data. A discriminative model is used to learn the cross-lingual query similarity based on a set of manually translated queries. The model is trained by optimizing the cross-lingual similarity to best fit the monolingual similarity between one query and the other query's translation. Besides being benchmarked as an independent module, the resulting CLQS system is tested as a new means of query "translation" in a CLIR task on TREC collections. The results show that this new "translation" method is more effective than the traditional query translation method.

The remainder of this paper is organized as follows: Section 2 introduces the related work; Section 3 describes in detail the discriminative model for estimating cross-lingual query similarity; Section 4 presents a new CLIR approach using cross-lingual query suggestion as a bridge across language boundaries; Section 5 discusses the experiments and benchmarks; finally, the paper is concluded in Section 6.

2. RELATED WORK

Most approaches to CLIR perform a query translation followed by monolingual IR. Typically, queries are translated either using a bilingual dictionary [22], machine translation software [9] or a parallel corpus [20]. Despite the various types of resources used, out-of-vocabulary (OOV) words and translation disambiguation are the two major bottlenecks for CLIR [20]. In [7, 27], OOV term translations are mined from the Web using a search engine. In [17], bilingual knowledge is acquired based on anchor text analysis. In addition, word co-occurrence statistics in the target language have been leveraged for translation disambiguation [3, 10, 11, 19].

Nevertheless, it is arguable that accurate query translation may not be necessary for CLIR. Indeed, in many cases, it is helpful to introduce words even if they are not
direct translations of any query word, but are closely related to the meaning of the query. This observation has led to the development of cross-lingual query expansion (CLQE) techniques [2, 16, 18]. [2] reports an enhancement of CLIR by post-translation expansion. [16] develops a cross-lingual relevance model by leveraging the cross-lingual co-occurrence statistics in parallel texts. [18] compares the performance of multiple CLQE techniques, including pre-translation and post-translation expansion. However, there is a lack of a unified framework to combine the wide spectrum of resources and recent advances in mining techniques for CLQE.

CLQS is different from CLQE in that it aims to suggest full queries that have been formulated by users in another language. As CLQS exploits up-to-date query logs, it is expected that for most user queries, we can find common formulations on these topics in the query log in the target language. Therefore, CLQS also plays the role of adapting the original query formulation to the common formulations of similar topics in the target language.

Query logs have been successfully used for monolingual IR [8, 12, 15, 26], especially in monolingual query suggestion [12] and in relating semantically relevant terms for query expansion [8, 15]. In [1], the target language query log has been exploited to help query translation in CLIR.

3. ESTIMATING CROSS-LINGUAL QUERY SIMILARITY

A search engine has a query log containing user queries in different languages within a certain period of time. In addition to query terms, click-through information is also recorded, so we know which documents have been selected by users for each query. Given a query in the source language, our CLQS task is to determine one or several similar queries in the target language from the query log.

The key problem with cross-lingual query suggestion is how to learn a similarity measure between two queries in different languages. Although various statistical similarity measures have been studied for monolingual terms [8, 26], most of them are based on term co-occurrence statistics and can hardly be applied directly in cross-lingual settings. In order to define a similarity measure across languages, one has to use at least one translation tool or resource, so the measure is based on both a translation relation and monolingual similarity. In this paper, as our purpose is to provide an up-to-date query similarity measure, it may not be sufficient to use only a static translation resource. Therefore, we also integrate a method to mine possible translations on the Web, which is particularly useful for dealing with OOV terms.

Given a set of resources of different natures, the next question is how to integrate them in a principled manner. In this paper, we propose a discriminative model to learn the appropriate similarity measure. The principle is as follows: we assume that we have a reasonable monolingual query similarity measure. For any training query example for which a translation exists, its similarity measure (with any other query) is transposed to its translation. Therefore, we have the desired cross-language similarity value for this example. We then use a discriminative model to learn the cross-language similarity function that best fits these examples.

In the following sections, we first describe the details of the discriminative model for cross-lingual query similarity estimation, and then introduce all the features (monolingual and cross-lingual information) that we use in the discriminative model.

3.1 Discriminative Model for Estimating Cross-Lingual Query Similarity

In this section, we propose a discriminative model to learn cross-lingual query similarities in a principled manner. The principle is as follows: for a reasonable monolingual query similarity between two queries, a cross-lingual correspondent can be deduced between one query and another query's
translation. In other words, for a pair of queries in different languages, their cross-lingual similarity should fit the monolingual similarity between one query and the other query's translation. For example, the similarity between the French query "pages jaunes" (i.e., "yellow page" in English) and the English query "telephone directory" should be equal to the monolingual similarity between the translation of the French query, "yellow page", and "telephone directory".

There are many ways to obtain a monolingual similarity measure between terms, e.g., term co-occurrence based mutual information and χ². Any of them can be used as the target for the cross-lingual similarity function to fit. In this way, cross-lingual query similarity estimation is formulated as a regression task as follows: given a source language query q_f, a target language query q_e, and a monolingual query similarity sim_ML, the corresponding cross-lingual query similarity sim_CL is defined as follows:

    sim_CL(q_f, q_e) = sim_ML(T_qf, q_e)    (1)

where T_qf is the translation of q_f in the target language.

Based on Equation (1), it is relatively easy to create a training corpus. All it requires is a list of query translations. An existing monolingual query suggestion system can then be used to automatically produce queries similar to each translation, creating the training corpus for cross-lingual similarity estimation. Another advantage is that it is fairly easy to make use of arbitrary information sources within a discriminative modeling framework to achieve optimal performance.

In this paper, the support vector machine (SVM) regression algorithm [25] is used to learn the cross-lingual query similarity function. Given a vector of feature functions f between q_f and q_e, sim_CL(q_f, q_e) is represented as an inner product between a weight vector and the feature vector in a kernel space as follows:

    sim_CL(q_f, q_e) = w · φ(f(q_f, q_e))    (2)

where φ is the mapping from the input feature space onto the kernel space, and w is the weight vector in the kernel space, which is learned by the SVM regression training. Once the weight vector is learned, Equation (2) can be used to estimate the similarity between queries of different languages.

We want to point out that, instead of regression, one can simplify the task into a binary or ordinal classification, in which case CLQS candidates would be categorized by discrete class labels, e.g., relevant and irrelevant, or a series of relevance levels, e.g., strongly relevant, weakly relevant, and irrelevant. In either case, one can resort to discriminative classification approaches, such as an SVM or a maximum entropy model, in a straightforward way. However, the regression formalism enables us to fully rank the suggested queries based on the similarity score given by Equation (1).

Equations (1) and (2) construct a regression model for cross-lingual query similarity estimation. In the following sections, the monolingual query similarity measure (Section 3.2) and the feature functions used for SVM regression (Section 3.3) are presented.

3.2 Monolingual Query Similarity Measure Based on Click-through Information

Any monolingual term similarity measure can be used as the regression target. In this paper, we select the monolingual query similarity measure presented in [26], which reports good performance by using search users' click-through information in query logs. The reason for choosing this monolingual similarity is that it is defined in a context similar to ours: a user log that reflects users' intention and behavior. Therefore, we can expect that the cross-language term similarity learned from it will also reflect users' intention and expectation.

Following [26], our monolingual query similarity is defined by combining both query content-based similarity and click-through commonality in the query log. First, the content similarity between
two queries p and q is defined as follows:\nsimilarity_content(p, q) = KN(p, q) / max(kn(p), kn(q)), (3)\nwhere kn(x) is the number of keywords in a query x, and KN(p, q) is the number of common keywords in the two queries.\nSecondly, the click-through based similarity is defined as follows:\nsimilarity_click-through(p, q) = RD(p, q) / max(rd(p), rd(q)), (4)\nwhere rd(x) is the number of clicked URLs for a query x, and RD(p, q) is the number of URLs clicked in common for the two queries.\nFinally, the similarity between two queries is a linear combination of the content-based and click-through-based similarities:\nsimilarity(p, q) = α · similarity_content(p, q) + β · similarity_click-through(p, q), (5)\nwhere α and β weight the relative importance of the two similarity measures.\nIn this paper, we set α = 0.4 and β = 0.6 following the practice in [26].\nQueries whose similarity with another query exceeds a threshold are regarded as relevant monolingual query suggestions (MLQS) for the latter.\nIn this paper, the threshold is set to 0.9 empirically.\n3.3 Features Used for Learning Cross-Lingual Query Similarity Measure\nThis section presents the extraction of candidate relevant queries from the log with the assistance of various monolingual and bilingual resources.\nMeanwhile, feature functions over the source query and the cross-lingual relevant candidates are defined.\nSome of the resources used here, such as the bilingual lexicon and parallel corpora, were used for query translation in previous work.\nNote, however, that we employ them here as a means of finding relevant candidates in the log rather than of acquiring accurate translations.\n3.3.1 Bilingual Dictionary\nIn this subsection, an in-house bilingual dictionary containing 120,000 unique entries is used to retrieve candidate queries.\nSince multiple translations may be associated with each source word, co-occurrence based translation disambiguation is performed [3, 10].\nThe process is as follows: given an input query q_f = {w_f1, w_f2, ..., w_fn} in the source language, for each query term w_fi, a set of unique
translations are provided by the bilingual dictionary D: D(w_fi) = {t_i1, t_i2, ..., t_im}.\nThen the cohesion between the translations of two query terms is measured using mutual information, which is computed as follows:\nMI(x, y) = (C(x, y) / N) · log( (C(x, y) · N) / (C(x) · C(y)) ). (6)\nHere C(x, y) is the number of queries in the log containing both x and y, C(x) is the number of queries containing term x, and N is the total number of queries in the log.\nBased on the term-term cohesion defined in Equation (6), all the possible query translations are ranked using the summation of the term-term cohesion scores, S_dict(T_qf) = Σ_{1 ≤ i < k ≤ n} MI(t_ij, t_kl).\nThe set of top-4 query translations is denoted as S(T_qf).\nFor each possible query translation T ∈ S(T_qf), we retrieve all the queries containing the same keywords as T from the target language log.\nThe retrieved queries are candidate target queries, and are assigned S_dict(T) as the value of the feature Dictionary-based Translation Score.\n3.3.2 Parallel Corpora\nParallel corpora are valuable resources for bilingual knowledge acquisition.\nUnlike the bilingual dictionary, the bilingual knowledge learned from parallel corpora assigns a probability to each translation candidate, which is useful for acquiring dominant query translations.\nIn this paper, the Europarl corpus (a set of parallel French and English texts from the proceedings of the European Parliament) is used.\nThe corpus is first sentence-aligned.\nThen word alignments are derived by training an IBM translation model 1 [4] using GIZA++ [21].\nThe learned bilingual knowledge is used to extract candidate queries from the query log.\nThe process is as follows: given a pair of queries, q_f in the source language and q_e in the target language, the Bi-Directional Translation Score is defined as follows:\nS_IBM1(q_f, q_e) = p_IBM1(q_e | q_f) · p_IBM1(q_f | q_e), (7)\nwhere p_IBM1(y | x) is the word sequence translation probability given by IBM model 1, which has the following form:\np_IBM1(y | x) = (1 / (l + 1)^m) · Π_{j=1..m} Σ_{i=0..l} p(y_j | x_i),\nwhere l and m are the lengths of x and y respectively, and p(y_j | x_i) is the word-to-word translation probability derived from the word-aligned corpora.\nThe reason to use bidirectional
translation probability is to deal with the fact that common words can be considered as possible translations of many words.\nBy using bidirectional translation, we test whether the translation words can be translated back to the source words.\nThis helps concentrate the translation probability on the most specific translation candidates.\nNow, given an input query q_f, the top 10 queries {q_e} with the highest bidirectional translation scores with q_f are retrieved from the query log, and S_IBM1(q_f, q_e) in Equation (7) is assigned as the value of the feature Bi-Directional Translation Score.\n3.3.3 Online Mining for Related Queries\nOOV word translation is a major knowledge bottleneck for query translation and CLIR.\nTo overcome this bottleneck, web mining has been exploited in [7, 27] to acquire English-Chinese term translations, based on the observation that Chinese terms may co-occur with their English translations in the same web page.\nIn this section, this web mining approach is adapted to acquire not only translations but also semantically related queries in the target language.\nIt is assumed that if a query in the target language co-occurs with the source query in many web pages, the two are probably semantically related.\nTherefore, a simple method is to send the source query to a search engine (Google in our case), restricted to Web pages in the target language, in order to find related queries in that language.\nFor instance, by sending the French query "pages jaunes" to search for English pages, English snippets containing the keywords "yellow pages" or "telephone directory" will be returned.\nHowever, this simple approach may induce a significant amount of noise due to non-relevant results returned by the search engine.\nIn order to improve the relevancy of the bilingual snippets, we extend the simple approach with the following query modification: the original query is used to search together with the dictionary-based translations of its query keywords, which are
unified by the ∧ (AND) and ∨ (OR) operators into a single Boolean query.\nFor example, a given query q = abc is combined with the dictionary-based translations of its keywords a, b, and c, and the resulting Boolean query is submitted to the search engine as one web query.\nFrom the returned top 700 snippets, the 10 most frequent target queries are identified, and are associated with the feature Frequency in the Snippets.\nFurthermore, we use the Co-Occurrence Double-Check (CODC) measure to weight the association between the source and target queries.\nThe CODC measure is proposed in [6] as an association measure based on snippet analysis, named the Web Search with Double Checking (WSDC) model.\nIn the WSDC model, two objects a and b are considered to have an association if b can be found by using a as a query (forward process), and a can be found by using b as a query (backward process) via web search.\nThe forward process counts the frequency of b in the top N snippets of query a, denoted as freq(b@a).\nSimilarly, the backward process counts the frequency of a in the top N snippets of query b, denoted as freq(a@b).\nThen the CODC association score is defined as follows:\nS_CODC(q_e, q_f) = 0 if freq(q_e@q_f) · freq(q_f@q_e) = 0, and S_CODC(q_e, q_f) = e^{[log( (freq(q_e@q_f)/freq(q_f)) · (freq(q_f@q_e)/freq(q_e)) )]^α} otherwise.\nCODC measures the association of two terms in the range between 0 and 1: in the two extreme cases, q_e and q_f have no association when freq(q_e@q_f) = 0 or freq(q_f@q_e) = 0, and have the strongest association when freq(q_e@q_f) = freq(q_f) and freq(q_f@q_e) = freq(q_e).\nIn our experiment, α is set to 0.15 following the practice in [6].\nAny query q_e mined from the Web will be associated with a feature CODC Measure with S_CODC(q_f, q_e) as its value.\n3.3.4 Monolingual Query Suggestion\nFor all the candidate queries Q_0 retrieved using the dictionary (see Section 3.3.1), parallel data (see Section 3.3.2), and web mining (see Section 3.3.3), the monolingual query suggestion system (described in Section 3.1) is called to produce more related queries in the target language.\nFor each target query q_e, its monolingual source query SQ_ML(q_e) is defined as the query in Q_0 with the highest
monolingual similarity with q_e, i.e.,\nSQ_ML(q_e) = argmax_{q ∈ Q_0} sim_ML(q, q_e).\nThen the monolingual similarity between q_e and SQ_ML(q_e) is used as the value of q_e's Monolingual Query Suggestion Feature.\nFor any target query q ∈ Q_0, its Monolingual Query Suggestion Feature is set to 1.\nFor any query q_e ∉ Q_0, its values of Dictionary-based Translation Score, Bi-Directional Translation Score, Frequency in the Snippets, and CODC Measure are set equal to the feature values of SQ_ML(q_e).\n3.4 Estimating Cross-lingual Query Similarity\nIn summary, four categories of features are used to learn the cross-lingual query similarity.\nThe SVM regression algorithm [25] is used to learn the weights in Equation (2).\nIn this paper, the LibSVM toolkit [5] is used for the regression training.\nIn the prediction stage, the candidate queries are ranked using the cross-lingual query similarity score computed as sim_CL(t_f, t_e) = w · φ(f(t_f, t_e)), and the queries with similarity scores lower than a threshold are regarded as non-relevant.\nThe threshold is learned using a development data set by fitting MLQS's output.\n4.\nCLIR BASED ON CROSS-LINGUAL QUERY SUGGESTION\nIn Section 3, we presented a discriminative model for cross-lingual query suggestion.\nHowever, objectively benchmarking a query suggestion system is not a trivial task.\nIn this paper, we propose to use CLQS as an alternative to query translation, and test its effectiveness in CLIR tasks.\nGood CLIR performance then reflects the high quality of the suggested queries.\nGiven a source query q_f, a set of relevant queries {q_e} in the target language is recommended using the cross-lingual query suggestion system.\nThen a monolingual IR system based on the BM25 model [23] is called using each q ∈ {q_e} as a query to retrieve documents.\nThe retrieved documents are then re-ranked based on the sum of the BM25 scores associated with each monolingual retrieval.\n5.\nPERFORMANCE EVALUATION\nIn this section, we
will benchmark the cross-lingual query suggestion system, comparing its performance with monolingual query suggestion, studying the contribution of various information sources, and testing its effectiveness when used in CLIR tasks.\n5.1 Data Resources\nIn our experiments, French and English are selected as the source and target language respectively.\nThese languages were selected because large-scale query logs are readily available for both.\nA one-month English query log (containing 7 million unique English queries, each occurring more than 5 times) of the MSN search engine is used as the target language log, and a monolingual query suggestion system is built on top of it.\nIn addition, 5,000 French queries are selected randomly from a French query log (containing around 3 million queries), and are manually translated into English by professional French-English translators.\nAmong the 5,000 French queries, 4,171 have their translations in the English query log, and are used for CLQS training and testing.\nOf these 4,171 French queries, 70% are used for cross-lingual query similarity training, 10% are used as the development data to determine the relevancy threshold, and 20% are used for testing.\nTo retrieve the cross-lingual related queries, an in-house French-English bilingual lexicon (containing 120,000 unique entries) and the Europarl corpus are used.\nBesides being benchmarked as an independent system, CLQS is also tested as a query "translation" system for CLIR tasks.\nBased on the observation that CLIR performance heavily relies on the quality of the suggested queries, this benchmark measures the quality of CLQS in terms of its effectiveness in helping CLIR.\nTo perform this benchmark, we use the documents of the TREC6 CLIR data (AP88-90 newswire, 750MB) with the 25 officially provided short French-English query pairs (CL1-CL25).\nThis data set was selected because the average length
of the queries is 3.3 words, which matches the web query logs we use to train CLQS.\n5.2 Performance of Cross-lingual Query Suggestion\nMean-square-error (MSE) is used to measure the regression error; it is defined as follows:\nMSE = (1/l) Σ_{i=1..l} (sim_CL(q_f^i, q_e^i) − sim_ML(T_{q_f^i}, q_e^i))^2,\nwhere l is the total number of cross-lingual query pairs in the testing data, sim_CL is the predicted cross-lingual similarity, and sim_ML(T_{q_f}, q_e) is the target monolingual similarity between q_e and the translation T_{q_f} of q_f.\nAs described in Section 3.4, a relevancy threshold is learned using the development data, and only suggestions with similarity values above the threshold are regarded as truly relevant to the input query.\nIn this way, CLQS can also be benchmarked as a classification task using precision (P) and recall (R), defined as follows:\nP = |S_CLQS ∩ S_MLQS| / |S_CLQS|, R = |S_CLQS ∩ S_MLQS| / |S_MLQS|,\nwhere S_CLQS is the set of relevant queries suggested by CLQS and S_MLQS is the set of relevant queries suggested by MLQS (see Section 3.2).\nThe benchmarking results with various feature configurations are shown in Table 1.\nTable 1.\nCLQS performance with different feature settings (DD: dictionary only; DD+PC: dictionary and parallel corpora; DD+PC+Web: dictionary, parallel corpora, and web mining; DD+PC+Web+MLQS: dictionary, parallel corpora, web mining, and monolingual query suggestion)\nTable 1 reports the performance comparison with various feature settings.\nThe baseline system (DD) uses a conventional query translation approach, i.e., a bilingual dictionary with co-occurrence-based translation disambiguation.\nThe baseline system covers less than 10% of the suggestions made by MLQS.\nUsing additional features clearly enables CLQS to generate more relevant queries.\nThe most significant improvement in recall is achieved by exploiting MLQS.\nThe final CLQS system is able to generate 42% of the queries suggested by MLQS.\nAmong all the feature combinations, there is no significant change in precision.\nThis indicates that our methods improve recall by effectively leveraging various information sources without losing the accuracy of the suggestions.\nBesides benchmarking CLQS by comparing its output with MLQS
output, 200 French queries are randomly selected from the French query log.\nThese queries are double-checked to ensure that they are not in the CLQS training corpus.\nThe CLQS system is then used to suggest relevant English queries for them.\nOn average, 8.7 relevant English queries are suggested for each French query.\nThe total of 1,740 suggested English queries are then manually checked by two professional English/French translators with cross-validation.\nAmong the 1,740 suggested queries, 1,407 are recognized as relevant to the original ones, hence the accuracy is 80.9%.\nFigure 1 shows an example of CLQS for the French query "terrorisme international" ("international terrorism" in English).\ninternational terrorism (0.991); what is terrorism (0.943); counter terrorism (0.920); terrorist (0.911); terrorist attacks (0.898); international terrorist (0.853); world terrorism (0.845); global terrorism (0.833); transnational terrorism (0.821); human rights (0.811); terrorist groups (0.777); patterns of global terrorism (0.762); september 11 (0.734)\nFigure 1.\nAn example of CLQS for the French query "terrorisme international"\n5.3 CLIR Performance\nIn this section, CLQS is tested on French to English CLIR tasks.\nWe conduct CLIR experiments using the TREC 6 CLIR dataset described in Section 5.1.\nThe CLIR is performed using a query translation system followed by a BM25-based [23] monolingual IR module.\nThe following three systems have been used to perform query translation: (1) CLQS: our CLQS system; (2) MT: the Google French to English machine translation system; (3) DT: a dictionary-based query translation system using co-occurrence statistics for translation disambiguation.\nThe translation disambiguation algorithm is presented in Section 3.3.1.\nIn addition, the monolingual IR performance is reported as a reference.\nThe average precision of the four IR systems is reported in Table 2, and the 11-point precision-recall curves are shown in
Figure 2.\nTable 2.\nAverage precision of CLIR on the TREC 6 dataset (Monolingual: monolingual IR system; MT: CLIR based on machine translation; DT: CLIR based on dictionary translation; CLQS: CLQS-based CLIR)\nFigure 2.\n11-point precision-recall on the TREC6 CLIR data set\nThe benchmark shows that using CLQS as a query translation tool outperforms CLIR based on machine translation by 7.4%, outperforms CLIR based on dictionary translation by 25.2%, and achieves 87.6% of the monolingual IR performance.\nThe effectiveness of CLQS lies in its ability to suggest closely related queries besides accurate translations.\nFor example, for the query CL14 "terrorisme international" ("international terrorism"), although the machine translation tool translates the query correctly, the CLQS system still achieves a higher score by recommending many additional related terms such as "global terrorism", "world terrorism", etc. (as shown in Figure 1).\nAnother example is the query CL6 "La pollution causée par l'automobile" ("air pollution due to automobile").\nThe MT tool provides the translation "the pollution caused by the car", while the CLQS system enumerates the possible synonyms of "car" and suggests the queries "car pollution", "auto pollution", and "automobile pollution".\nIn addition, other related queries such as "global warming" are also suggested.\nFor the query CL12 "La culture écologique" ("organic farming"), the MT tool fails to generate the correct translation.\nAlthough the correct translation is not in our French-English dictionary either, the CLQS system generates "organic farm" as a relevant query thanks to successful web mining.\nThe above experiments demonstrate the effectiveness of using CLQS to suggest relevant queries for CLIR enhancement.\nA related line of research is to perform query expansion to enhance CLIR [2, 18].\nIt is therefore interesting to compare the CLQS approach with conventional query expansion approaches.\nFollowing
[18], post-translation expansion is performed based on pseudo-relevance feedback (PRF) techniques.\nWe first perform CLIR in the same way as before.\nThen we use the traditional PRF algorithm described in [24] to select expansion terms.\nIn our experiments, the top 10 terms are selected to expand the original query, and the new query is used to search the collection a second time.\nThe new CLIR performance in terms of average precision is shown in Table 3.\nThe 11-point P-R curves are drawn in Figure 3.\nAlthough enhanced by pseudo-relevance feedback, CLIR using either machine translation or dictionary-based query translation still does not perform as well as the CLQS-based approach.\nA statistical t-test [13] is conducted to determine whether the CLQS-based CLIR performs significantly better.\nPair-wise p-values are shown in Table 4.\nClearly, CLQS significantly outperforms MT and DT without PRF as well as DT+PRF, but its superiority over MT+PRF is not significant.\nHowever, when combined with PRF, CLQS significantly outperforms all the other methods.\nThis indicates the higher effectiveness of CLQS in related-term identification by leveraging a wide spectrum of resources.\nFurthermore, post-translation expansion is capable of improving CLQS-based CLIR.\nThis is due to the fact that CLQS and pseudo-relevance feedback leverage different categories of resources, and the two approaches are complementary.\nTable 3.\nComparison of average precision (AP) on TREC 6 without and with post-translation expansion.\n(%) are the relative percentages over the monolingual IR performance\nFigure 3.\n11-point precision-recall on the TREC6 CLIR dataset with pseudo-relevance feedback\nTable 4.\nThe results of the pair-wise significance t-test.\nHere p-value < 0.05 is considered statistically significant\n6.\nCONCLUSIONS\nIn this paper, we proposed a new approach to cross-lingual query suggestion by mining relevant queries in different languages from query logs.\nThe key solution to
this problem is to learn a cross-lingual query similarity measure by a discriminative model exploiting multiple monolingual and bilingual resources.\nThe model is trained based on the principle that cross-lingual similarity should best fit the monolingual similarity between one query and the other query's translation.","keyphrases":["queri suggest","queri log","cross-languag inform retriev","keyword bid","search engin advertis","search engin","queri translat","map","benchmark","queri expans","bid term","monolingu queri suggest","target languag queri log"],"prmu":["P","P","P","P","P","P","P","P","P","M","R","R","M"]} {"id":"H-44","title":"A Time Machine for Text Search","abstract":"Text search over temporally versioned document collections such as web archives has received little attention as a research problem. As a consequence, there is no scalable and principled solution to search such a collection as of a specified time t. In this work, we address this shortcoming and propose an efficient solution for time-travel text search by extending the inverted file index to make it ready for temporal search. We introduce approximate temporal coalescing as a tunable method to reduce the index size without significantly affecting the quality of results. In order to further improve the performance of time-travel queries, we introduce two principled techniques to trade off index size for its performance. These techniques can be formulated as optimization problems that can be solved to near-optimality. Finally, our approach is evaluated in a comprehensive series of experiments on two large-scale real-world datasets.
Results unequivocally show that our methods make it possible to build an efficient time machine scalable to large versioned text collections.","lvl-1":"A Time Machine for Text Search Klaus Berberich Srikanta Bedathur Thomas Neumann Gerhard Weikum Max-Planck Institute for Informatics Saarbrücken, Germany {kberberi, bedathur, neumann, weikum}@mpi-inf.mpg.de ABSTRACT Text search over temporally versioned document collections such as web archives has received little attention as a research problem.\nAs a consequence, there is no scalable and principled solution to search such a collection as of a specified time t.\nIn this work, we address this shortcoming and propose an efficient solution for time-travel text search by extending the inverted file index to make it ready for temporal search.\nWe introduce approximate temporal coalescing as a tunable method to reduce the index size without significantly affecting the quality of results.\nIn order to further improve the performance of time-travel queries, we introduce two principled techniques to trade off index size for its performance.\nThese techniques can be formulated as optimization problems that can be solved to near-optimality.\nFinally, our approach is evaluated in a comprehensive series of experiments on two large-scale real-world datasets.\nResults unequivocally show that our methods make it possible to build an efficient time machine scalable to large versioned text collections.\nCategories and Subject Descriptors H.3.1 [Content Analysis and Indexing]: Indexing methods; H.3.3 [Information Search and Retrieval]: Retrieval models, Search process General Terms Algorithms, Experimentation, Performance 1.\nINTRODUCTION In this work we address time-travel text search over temporally versioned document collections.\nGiven a keyword query q and a time t, our goal is to identify and rank relevant documents as if the collection were in its state as of time t.\nAn increasing number of such versioned document
collections is available today including web archives, collaborative authoring environments like Wikis, or timestamped information feeds.\nText search on these collections, however, is mostly time-ignorant: while the searched collection changes over time, often only the most recent version of a document is indexed, or versions are indexed independently and treated as separate documents.\nEven worse, for some collections, in particular web archives like the Internet Archive [18], a comprehensive text-search functionality is often completely missing.\nTime-travel text search, as we develop it in this paper, is a crucial tool to explore these collections and to unfold their full potential, as the following example demonstrates.\nFor a documentary about a past political scandal, a journalist needs to research early opinions and statements made by the involved politicians.\nSending an appropriate query to a major web search-engine, the majority of returned results contains only recent coverage, since many of the early web pages have disappeared and are only preserved in web archives.\nIf the query could be enriched with a time point, say August 20th 2003 as the day after the scandal got revealed, and be issued against a web archive, only pages that existed specifically at that time could be retrieved, thus better satisfying the journalist's information need.\nDocument collections like the Web or Wikipedia [32], as we target them here, are already large if only a single snapshot is considered.\nLooking at their evolutionary history, we are faced with even larger data volumes.\nAs a consequence, naïve approaches to time-travel text search fail, and viable approaches must scale up well to such large data volumes.\nThis paper presents an efficient solution to time-travel text search by making the following key contributions: 1.\nThe popular well-studied inverted file index [35] is transparently extended to enable time-travel text search.\n2.\nTemporal coalescing
is introduced to avoid an index-size explosion while keeping results highly accurate.\n3.\nWe develop two sublist materialization techniques to improve index performance that allow trading off space vs. performance.\n4.\nIn a comprehensive experimental evaluation, our approach is tested on the English Wikipedia and parts of the Internet Archive, two large-scale real-world datasets with versioned documents.\nThe remainder of this paper is organized as follows.\nThe presented work is put in context with related work in Section 2.\nWe delineate our model of a temporally versioned document collection in Section 3.\nWe present our time-travel inverted index in Section 4.\nBuilding on it, temporal coalescing is described in Section 5.\nIn Section 6 we describe principled techniques to improve index performance, before presenting the results of our experimental evaluation in Section 7.\n2.\nRELATED WORK We can classify the related work mainly into the following two categories: (i) methods that deal explicitly with collections of versioned documents or temporal databases, and (ii) methods for reducing the index size by either exploiting the document-content overlap or pruning portions of the index.\nWe briefly review work under these categories here.\nTo the best of our knowledge, there is very little prior work dealing with historical search over temporally versioned documents.\nAnick and Flynn [3], while pioneering this research, describe a help-desk system that supports historical queries.\nAccess costs are optimized for accesses to the most recent versions and increase as one moves farther into the past.\nBurrows and Hisgen [10], in a patent description, delineate a method for indexing range-based values and mention its potential use for searching based on dates associated with documents.\nRecent work by Nørvåg and Nybø [25] and their earlier proposals concentrate on the relatively simpler problem of supporting text-containment queries only and
neglect the relevance scoring of results.\nStack [29] reports on practical experiences made when adapting the open source search-engine Nutch to search web archives.\nThis adaptation, however, does not provide the intended time-travel text search functionality.\nIn contrast, research in temporal databases has produced several index structures tailored for time-evolving databases; a comprehensive overview of the state of the art is available in [28].\nIn contrast to the inverted file index, their applicability to text search is not well understood.\nMoving on to the second category of related work, Broder et al. [8] describe a technique that exploits large content overlaps between documents to achieve a reduction in index size.\nTheir technique makes strong assumptions about the structure of document overlaps, rendering it inapplicable in our context.\nMore recent approaches by Hersovici et al. [17] and Zhang and Suel [34] exploit arbitrary content overlaps between documents to reduce index size.\nNone of these approaches, however, considers time explicitly or provides the desired time-travel text search functionality.\nStatic index-pruning techniques [11, 12] aim to reduce the effective index size by removing portions of the index that are expected to have low impact on the query result.\nThey also do not consider temporal aspects of documents, and thus are technically quite different from our proposal despite the shared goal of index-size reduction.\nIt should be noted that index-pruning techniques can be adapted to work along with the temporal text index we propose here.\n3.\nMODEL In the present work, we deal with a temporally versioned document collection D that is modeled as described in the following.\nEach document d ∈ D is a sequence of its versions d = d^t1, d^t2, ...
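To make the version model concrete, the following minimal sketch represents a document as a time-ordered sequence of timestamped versions, models deletions as tombstones, and looks up the version valid at a given time t (as formalized right below). All class and function names here are illustrative, not from the paper.

```python
from bisect import bisect_right
from typing import Dict, List, NamedTuple, Optional

class Version(NamedTuple):
    timestamp: int                   # discrete creation time t_i
    terms: Optional[Dict[str, int]]  # term -> frequency; None models the tombstone (deletion)

class Document:
    """A document d as a time-ordered sequence of versions d = d^t1, d^t2, ..."""

    def __init__(self, versions: List[Version]):
        self.versions = sorted(versions, key=lambda v: v.timestamp)
        self._stamps = [v.timestamp for v in self.versions]

    def state_at(self, t: int) -> Optional[Version]:
        """Return the version whose validity interval [t_i, t_{i+1}) contains t,
        or None if the document did not yet exist or was deleted at time t."""
        i = bisect_right(self._stamps, t) - 1
        if i < 0:
            return None
        v = self.versions[i]
        return None if v.terms is None else v

def collection_state(docs: List[Document], t: int) -> List[Version]:
    """D^t: all versions valid at time t that are not deletions."""
    return [v for d in docs if (v := d.state_at(t)) is not None]
```

Using binary search over the sorted timestamps keeps each per-document lookup logarithmic in the number of versions, which fits the validity-interval view of versions used throughout the paper.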
Each version d^ti has an associated timestamp t_i reflecting when the version was created.\nEach version is a vector of searchable terms or features.\nAny modification to a document version results in the insertion of a new version with a corresponding timestamp.\nWe employ a discrete definition of time, so that timestamps are non-negative integers.\nThe deletion of a document at time t_i, i.e., its disappearance from the current state of the collection, is modeled as the insertion of a special tombstone version ⊥.\nThe validity time-interval val(d^ti) of a version d^ti is [t_i, t_i+1) if a newer version with associated timestamp t_i+1 exists, and [t_i, now) otherwise, where now points to the greatest possible value of a timestamp (i.e., ∀t : t < now).\nPutting all this together, we define the state D^t of the collection at time t (i.e., the set of versions valid at t that are not deletions) as\nD^t = ∪_{d ∈ D} { d^ti ∈ d | t ∈ val(d^ti) ∧ d^ti ≠ ⊥ } .\nAs mentioned earlier, we want to enrich a keyword query q with a timestamp t, so that q is evaluated over D^t, i.e., the state of the collection at time t.\nThe enriched time-travel query is written as q^t for brevity.\nAs a retrieval model in this work we adopt Okapi BM25 [27], but note that the proposed techniques do not depend on this choice and are applicable to other retrieval models like tf-idf [4] or language models [26] as well.\nFor our considered setting, we slightly adapt Okapi BM25 as\nw(q^t, d^ti) = Σ_{v ∈ q} w_tf(v, d^ti) · w_idf(v, t) .\nThis formula defines the relevance w(q^t, d^ti) of a document version d^ti to the time-travel query q^t.\nWe reiterate that q^t is evaluated over D^t so that only the version d^ti valid at time t is considered.\nThe first factor w_tf(v, d^ti) in the summation, further referred to as the tf-score, is defined as\nw_tf(v, d^ti) = ( (k1 + 1) · tf(v, d^ti) ) / ( k1 · ((1 − b) + b · dl(d^ti)/avdl(t_i)) + tf(v, d^ti) ) .\nIt
considers the plain term frequency tf(v, d^ti) of term v in version d^ti, normalizing it by taking into account both the length dl(d^ti) of the version and the average document length avdl(t_i) in the collection at time t_i.\nThe tf-saturation parameter k1 and the length-normalization parameter b are inherited from the original Okapi BM25 and are commonly set to 1.2 and 0.75, respectively.\nThe second factor w_idf(v, t), which we refer to as the idf-score in the remainder, conveys the inverse document frequency of term v in the collection at time t and is defined as\nw_idf(v, t) = log( (N(t) − df(v, t) + 0.5) / (df(v, t) + 0.5) ),\nwhere N(t) = |D^t| is the collection size at time t and df(v, t) gives the number of documents in the collection that contain the term v at time t.\nWhile the idf-score depends on the whole corpus as of the query time t, the tf-score is specific to each version.\n4.\nTIME-TRAVEL INVERTED FILE INDEX\nThe inverted file index is a standard technique for text indexing, deployed in many systems.\nIn this section, we briefly review this technique and present our extensions to the inverted file index that make it ready for time-travel text search.\n4.1 Inverted File Index\nAn inverted file index consists of a vocabulary, commonly organized as a B+-Tree, that maps each term to its idf-score and inverted list.\nThe index list L_v belonging to term v contains postings of the form (d, p) where d is a document identifier and p is the so-called payload.\nThe payload p contains information about the term frequency of v in d, but may also include positional information about where the term appears in the document.\nThe sort order of index lists depends on which queries are to be supported efficiently.\nFor Boolean queries it is favorable to sort index lists in document order.\nFrequency-ordered and impact-ordered index lists are beneficial for ranked queries and enable optimized query processing that stops early after having identified the k most relevant
documents [1, 2, 9, 15, 31]. A variety of compression techniques, such as encoding document identifiers more compactly, have been proposed [33, 35] to reduce the size of index lists. For an excellent recent survey about inverted file indexes we refer to [35].

4.2 Time-Travel Inverted File Index

In order to prepare an inverted file index for time travel, we extend both inverted lists and the vocabulary structure by explicitly incorporating temporal information. The main idea for inverted lists is that we include a validity time-interval [tb, te) in postings to denote when the payload information was valid. The postings in our time-travel inverted file index are thus of the form (d, p, [tb, te)), where d and p are defined as in the standard inverted file index above and [tb, te) is the validity time-interval. As a concrete example, in our implementation, for a version dti having the Okapi BM25 tf-score wtf(v, dti) for term v, the index list Lv contains the posting (d, wtf(v, dti), [ti, ti+1)).

Similarly, the extended vocabulary structure maintains for each term a time series of idf-scores organized as a B+-Tree. Unlike the tf-score, the idf-score of every term could vary with every change in the corpus. Therefore, we take a simplified approach to idf-score maintenance, by computing idf-scores for all terms in the corpus at specific (possibly periodic) times.

4.3 Query Processing

During processing of a time-travel query q^t, for each query term the corresponding idf-score valid at time t is retrieved from the extended vocabulary. Then, index lists are sequentially read from disk, thereby accumulating the information contained in the postings. We transparently extend the sequential reading, which is - to the best of our knowledge - common to all query-processing techniques on inverted file indexes, thus making them suitable for time-travel query processing. To this end, sequential reading is extended by skipping all postings whose validity time-interval does not contain t (i.e., t ∉ [tb, te)).
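The extended posting format and the sequential scan with interval skipping can be sketched as follows (our own illustrative structure, not the paper's code):

```python
from typing import NamedTuple

class Posting(NamedTuple):
    doc: int        # document identifier d
    payload: float  # e.g., a precomputed tf-score
    t_begin: int    # validity time-interval [t_begin, t_end)
    t_end: int

def scan(index_list, t):
    """Sequentially accumulate payloads of postings valid at query time t,
    skipping postings whose validity interval does not contain t."""
    scores = {}
    for p in index_list:
        if not (p.t_begin <= t < p.t_end):
            continue  # posting not valid at t: skip it
        scores[p.doc] = scores.get(p.doc, 0.0) + p.payload
    return scores
```

As the surrounding text notes, a skipped posting is detected only after it has been read into memory, which is what motivates the sublist materialization of Section 6.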
Whether a posting can be skipped can only be decided after the posting has been transferred from disk into memory, so skipped postings still incur significant I/O cost. As a remedy, we propose index organization techniques in Section 6 that aim to reduce this I/O overhead significantly. We note that our proposed extension of the inverted file index makes no assumptions about the sort order of index lists. As a consequence, existing query-processing techniques and most optimizations (e.g., compression techniques) remain equally applicable.

5. TEMPORAL COALESCING

If we apply the time-travel inverted index, as described in the previous section, to a versioned document collection, we obtain one posting per term per document version. For frequent terms and large highly-dynamic collections, this leads to extremely long index lists with very poor query-processing performance.

Figure 1: Approximate Temporal Coalescing

The approximate temporal coalescing technique that we propose in this section counters this blowup in index-list size. It builds on the observation that most changes in a versioned document collection are minor, leaving large parts of the document untouched. As a consequence, the payloads of many postings belonging to temporally adjacent versions will differ only slightly or not at all. Approximate temporal coalescing reduces the number of postings in an index list by merging such a sequence of postings that have almost equal payloads, while keeping the maximal error bounded. This idea is illustrated in Figure 1, which plots non-coalesced and coalesced scores of postings belonging to a single document. Approximate temporal coalescing is greatly effective given such fluctuating payloads and reduces the number of postings from 9 to 3 in the example.

The notion of temporal coalescing was originally introduced in temporal database research by Böhlen et al.
[6], where the simpler problem of coalescing only equal information was considered. We next formally state the problem dealt with in approximate temporal coalescing, and discuss the computation of optimal and approximate solutions. Note that the technique is applied to each index list separately, so that the following explanations assume a fixed term v and index list Lv.

As an input we are given a sequence of temporally adjacent postings

  I = ⟨ (d, pi, [ti, ti+1)), ..., (d, pn−1, [tn−1, tn)) ⟩ .

Each sequence represents a contiguous time period during which the term was present in a single document d. If a term disappears from d but reappears later, we obtain multiple input sequences that are dealt with separately. We seek to generate the minimal-length output sequence of postings

  O = ⟨ (d, pj, [tj, tj+1)), ..., (d, pm−1, [tm−1, tm)) ⟩

that adheres to the following constraints: First, O and I must cover the same time range, i.e., ti = tj and tn = tm. Second, when coalescing a subsequence of postings of the input into a single posting of the output, we want the approximation error to be below a threshold ε. In other words, if (d, pi, [ti, ti+1)) and (d, pj, [tj, tj+1)) are postings of I and O respectively, then the following must hold for a chosen error function and a threshold ε:

  tj ≤ ti ∧ ti+1 ≤ tj+1 ⇒ error(pi, pj) ≤ ε .

In this paper, as an error function we employ the relative error between payloads (i.e., tf-scores) of a document in I and O, defined as

  errrel(pi, pj) = |pi − pj| / |pi| .

Finding an optimal output sequence of postings can be cast into finding a piecewise-constant representation for the points (ti, pi) that uses a minimal number of segments while retaining the above approximation guarantee. Similar problems occur in time-series segmentation [21, 30] and histogram construction [19, 20]. Typically, dynamic programming is applied to obtain an optimal solution in O(n² m*) [20, 30]
time, with m* being the number of segments in an optimal sequence. In our setting, as a key difference, only a guarantee on the local error is retained - in contrast to a guarantee on the global error in the aforementioned settings. Exploiting this fact, an optimal solution is computable by means of induction [24] in O(n²) time. Details of the optimal algorithm are omitted here but can be found in the accompanying technical report [5].

The quadratic complexity of the optimal algorithm makes it inappropriate for the large datasets encountered in this work. As an alternative, we introduce a linear-time approximate algorithm that is based on the sliding-window algorithm given in [21]. This algorithm produces nearly-optimal output sequences that retain the bound on the relative error, but may require a few segments more than an optimal solution.

Algorithm 1 Temporal Coalescing (Approximate)
 1: I = ⟨ (d, pi, [ti, ti+1)), ... ⟩   O = ⟨⟩
 2: pmin = pi   pmax = pi   p = pi   tb = ti   te = ti+1
 3: for (d, pj, [tj, tj+1)) ∈ I do
 4:   pmin' = min(pmin, pj)   pmax' = max(pmax, pj)
 5:   p' = optrep(pmin', pmax')
 6:   if errrel(pmin', p') ≤ ε ∧ errrel(pmax', p') ≤ ε then
 7:     pmin = pmin'   pmax = pmax'   p = p'   te = tj+1
 8:   else
 9:     O = O ∪ (d, p, [tb, te))
10:     pmin = pj   pmax = pj   p = pj   tb = tj   te = tj+1
11:   end if
12: end for
13: O = O ∪ (d, p, [tb, te))

Algorithm 1 makes one pass over the input sequence I. While doing so, it coalesces sequences of postings having maximal length. The optimal representative for a sequence of postings depends only on their minimal and maximal payload (pmin and pmax) and can be looked up using optrep in O(1) (see [16] for details). When reading the next posting, the algorithm tries to add it to the current sequence of postings. It computes the hypothetical new representative p' and checks whether it would retain the approximation guarantee. If this test fails, a coalesced posting bearing the old representative is added to the output sequence O and, following that, the bookkeeping is reinitialized.
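Algorithm 1 can be sketched in Python as follows. As an assumption on our part (hedged, since the paper defers optrep to [16]): for positive payloads and the relative-error function above, the representative 2·pmin·pmax/(pmin + pmax) equalizes, and thereby minimizes, the relative error at both endpoints.

```python
def optrep(pmin, pmax):
    # Assumed closed form: representative equalizing the relative error
    # at pmin and pmax (valid for positive payloads such as tf-scores).
    return 2.0 * pmin * pmax / (pmin + pmax)

def rel_err(p, rep):
    return abs(p - rep) / abs(p)

def coalesce(postings, eps):
    """One pass over temporally adjacent postings (doc, payload, tb, te)
    of a single document, merging runs whose payloads can share a
    representative within relative error eps."""
    doc, p0, tb, te = postings[0]
    pmin = pmax = rep = p0
    out = []
    for d, pj, tj, tj1 in postings[1:]:
        nmin, nmax = min(pmin, pj), max(pmax, pj)
        cand = optrep(nmin, nmax)
        if rel_err(nmin, cand) <= eps and rel_err(nmax, cand) <= eps:
            pmin, pmax, rep, te = nmin, nmax, cand, tj1  # extend current run
        else:
            out.append((doc, rep, tb, te))  # flush coalesced posting
            pmin = pmax = rep = pj
            tb, te = tj, tj1
    out.append((doc, rep, tb, te))
    return out
```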
The time complexity of the algorithm is in O(n). Note that, since we make no assumptions about the sort order of index lists, temporal-coalescing algorithms have an additional preprocessing cost in O(|Lv| log |Lv|) for sorting the index list and chopping it up into subsequences for each document.

6. SUBLIST MATERIALIZATION

Efficiency of processing a query q^t on our time-travel inverted index is influenced adversely by the wasted I/O due to read but skipped postings. Temporal coalescing implicitly addresses this problem by reducing the overall index-list size, but a significant overhead still remains. In this section, we tackle this problem by proposing the idea of materializing sublists, each of which corresponds to a contiguous subinterval of the time spanned by the full index. Each of these sublists contains all coalesced postings that overlap with the corresponding time interval of the sublist. Note that all those postings whose validity time-interval spans across the temporal boundaries of several sublists are replicated in each of the spanned sublists. Thus, in order to process the query q^t, it is sufficient to scan any materialized sublist whose time-interval contains t.

Figure 2: Sublist Materialization

We illustrate the idea of sublist materialization using the example shown in Figure 2. The index list Lv visualized in the figure contains a total of 10 postings from three documents d1, d2, and d3. For ease of description, we have numbered the boundaries of validity time-intervals, in increasing time order, as t1, ..., t10 and numbered the postings themselves as 1, ...
, 10. Now, consider the processing of a query q^t with t ∈ [t1, t2) using this inverted list. Although only three postings (postings 1, 5, and 8) are valid at time t, the whole inverted list has to be read in the worst case. Suppose that we split the time axis of the list at time t2, forming two sublists with postings {1, 5, 8} and {2, 3, 4, 5, 6, 7, 8, 9, 10} respectively. Then, we can process the above query with optimal cost by reading only those postings that are valid at this t.

At first glance, it may seem counterintuitive to reduce index size in the first step (using temporal coalescing), and then to increase it again using the sublist materialization techniques presented in this section. However, we reiterate that our main objective is to improve the efficiency of processing queries, not to reduce the index size alone. Temporal coalescing improves performance by reducing the index size, while sublist materialization improves performance by judiciously replicating entries. Further, the two techniques can be applied separately and are independent. If applied in conjunction, though, there is a synergetic effect - sublists that are materialized from a temporally coalesced index are generally smaller.

We employ the notation Lv : [ti, tj) to refer to the materialized sublist for the time interval [ti, tj), which is formally defined as

  Lv : [ti, tj) = { (d, p, [tb, te)) ∈ Lv | tb < tj ∧ te > ti } .

To aid the presentation in the rest of the paper, we first provide some definitions. Let T = t1 ...
tn be the sorted sequence of all unique time-interval boundaries of an inverted list Lv. Then we define

  E = { [ti, ti+1) | 1 ≤ i < n }

to be the set of elementary time intervals. We refer to the set of time intervals for which sublists are materialized as

  M ⊆ { [ti, tj) | 1 ≤ i < j ≤ n } ,

and demand

  ∀ t ∈ [t1, tn) ∃ m ∈ M : t ∈ m ,

i.e., the time intervals in M must completely cover the time interval [t1, tn), so that time-travel queries q^t for all t ∈ [t1, tn) can be processed. We also assume that the intervals in M are disjoint. We can make this assumption without ruling out any optimal solution with regard to the space or performance measures defined below.

The space required for the materialization of the sublists in a set M is defined as

  S(M) = Σ_{m ∈ M} |Lv : m| ,

i.e., the total length of all lists in M. Given a set M, we let

  π([ti, ti+1)) = [tj, tk) ∈ M : [ti, ti+1) ⊆ [tj, tk)

denote the time interval that is used to process queries q^t with t ∈ [ti, ti+1). The performance of processing queries q^t for t ∈ [ti, ti+1) inversely depends on the processing cost

  PC([ti, ti+1)) = |Lv : π([ti, ti+1))| ,

which is assumed to be proportional to the length of the list Lv : π([ti, ti+1)). Thus, in order to optimize the performance of processing queries, we minimize their processing costs.

6.1 Performance/Space-Optimal Approaches

One strategy to eliminate the problem of skipped postings is to eagerly materialize sublists for all elementary time intervals, i.e., to choose M = E. In doing so, for every query q^t only postings valid at time t are read and thus the best possible performance is achieved. Therefore, we will refer to this approach as Popt in the remainder. The initial approach described above that keeps only the full list Lv, and thus picks M = { [t1, tn) }, is referred to as Sopt in the remainder. This approach requires minimal space, since it keeps each posting exactly once.
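The cost model just defined can be sketched directly in code (a minimal illustration with our own names; postings are (doc, payload, tb, te) tuples):

```python
def sublist(L, ti, tj):
    """Materialized sublist Lv:[ti,tj): all postings overlapping [ti, tj)."""
    return [p for p in L if p[2] < tj and p[3] > ti]

def space(L, M):
    """S(M): total number of postings over all materialized sublists."""
    return sum(len(sublist(L, ti, tj)) for ti, tj in M)

def processing_cost(L, M, t):
    """PC: length of the materialized sublist used for a query at time t."""
    ti, tj = next(m for m in M if m[0] <= t < m[1])
    return len(sublist(L, ti, tj))
```

With three postings (1, ·, 1, 3), (2, ·, 1, 2), (3, ·, 2, 3), the Sopt-style choice M = {[t1, t3)} stores 3 postings but scans all 3 for any t, while the Popt-style choice M = {[t1, t2), [t2, t3)} stores 4 postings (one replicated) but scans only the 2 that can be valid.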
Popt and Sopt are extremes: the former provides the best possible performance but is not space-efficient, the latter requires minimal space but does not provide good performance. The two approaches presented in the rest of this section allow mutually trading off space and performance and can thus be thought of as means to explore the configuration spectrum between the Popt and the Sopt approach.

6.2 Performance-Guarantee Approach

The Popt approach clearly wastes a lot of space materializing many nearly-identical sublists. In the example illustrated in Figure 2, the materialized sublists for [t1, t2) and [t2, t3) differ only by one posting. If the sublist for [t1, t3) was materialized instead, one could save significant space while incurring only an overhead of one skipped posting for all t ∈ [t1, t3). The technique presented next is driven by the idea that significant space savings over Popt are achievable if an upper-bounded loss in performance can be tolerated, or, to put it differently, if a performance guarantee relative to the optimum is to be retained.

In detail, the technique, which we refer to as PG (Performance Guarantee) in the remainder, finds a set M that has minimal required space, but guarantees for any elementary time interval [ti, ti+1) (and thus for any query q^t with t ∈ [ti, ti+1)) that performance is worse than optimal by at most a factor of γ ≥ 1. Formally, this problem can be stated as

  argmin_M S(M)  s.t.  ∀ [ti, ti+1) ∈ E : PC([ti, ti+1)) ≤ γ · |Lv : [ti, ti+1)| .
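This minimization admits a direct dynamic-programming solution over prefixes of the boundary sequence. The sketch below is our own illustration (names and layout assumed; the authors' pseudocode is in their technical report [5]):

```python
import math

def pg_materialize(L, T, gamma):
    """L: postings (doc, payload, tb, te); T: sorted boundaries t_1..t_n.
    Returns intervals M of minimal total space such that every elementary
    interval is served by a sublist at most gamma times its own size."""
    n = len(T)
    # Precompute sublist sizes |Lv:[T[i], T[j])|.
    size = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            size[i][j] = sum(1 for (_, _, tb, te) in L
                             if tb < T[j] and te > T[i])
    cost = [math.inf] * n   # cost[k]: minimal space covering [T[0], T[k])
    back = [None] * n
    cost[0] = 0
    for k in range(1, n):
        for j in range(k):  # candidate last interval [T[j], T[k])
            # Performance guarantee for every elementary interval inside it.
            if all(size[j][k] <= gamma * size[i][i + 1] for i in range(j, k)):
                if cost[j] + size[j][k] < cost[k]:
                    cost[k] = cost[j] + size[j][k]
                    back[k] = j
    M, k = [], n - 1            # reconstruct the chosen intervals
    while k > 0:
        j = back[k]
        M.append((T[j], T[k]))
        k = j
    return list(reversed(M))
```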
An optimal solution to the problem can be computed by means of induction using the recurrence

  C([t1, tk+1)) = min { C([t1, tj)) + |Lv : [tj, tk+1)| | 1 ≤ j ≤ k ∧ condition } ,

where C([t1, tj)) is the optimal cost (i.e., the space required) for the prefix subproblem { [ti, ti+1) ∈ E | [ti, ti+1) ⊆ [t1, tj) } and condition stands for

  ∀ [ti, ti+1) ∈ E : [ti, ti+1) ⊆ [tj, tk+1) ⇒ |Lv : [tj, tk+1)| ≤ γ · |Lv : [ti, ti+1)| .

Intuitively, the recurrence states that an optimal solution for [t1, tk+1) is combined from an optimal solution to a prefix subproblem C([t1, tj)) and a time interval [tj, tk+1) that can be materialized without violating the performance guarantee. Pseudocode of the algorithm is omitted for space reasons, but can be found in the accompanying technical report [5]. The time complexity of the algorithm is in O(n²) - for each prefix subproblem the above recurrence must be evaluated, which is possible in linear time if the list sizes |Lv : [ti, tj)| are precomputed. The space complexity is in O(n²) - the cost of keeping the precomputed sublist lengths and memoizing optimal solutions to prefix subproblems.

6.3 Space-Bound Approach

So far we considered the problem of materializing sublists that give a guarantee on performance while requiring minimal space. In many situations, though, storage space is at a premium, and the aim would be to materialize a set of sublists that optimizes expected performance while not exceeding a given space limit. The technique presented next, which is named SB, tackles this very problem. The space restriction is modeled by means of a user-specified parameter κ ≥ 1 that limits the maximum allowed blowup in index size from the space-optimal solution provided by Sopt. The SB technique seeks to find a set M that adheres to this space limit but minimizes the
expected processing cost (and thus optimizes the expected performance). In the definition of the expected processing cost, P([ti, ti+1)) denotes the probability of a query time-point being in [ti, ti+1). Formally, this space-bound sublist-materialization problem can be stated as

  argmin_M Σ_{[ti, ti+1) ∈ E} P([ti, ti+1)) · PC([ti, ti+1))  s.t.  Σ_{m ∈ M} |Lv : m| ≤ κ · |Lv| .

The problem can be solved by using dynamic programming over an increasing number of time intervals: at each time interval in E the algorithm decides whether to start a new materialization time-interval, using the known best materialization decision from the previous time intervals, and keeping track of the space consumed by materialization. A detailed description of the algorithm is omitted here, but can be found in the accompanying technical report [5]. Unfortunately, the algorithm has time complexity in O(n³ |Lv|) and space complexity in O(n² |Lv|), which is not practical for large datasets.

We obtain an approximate solution to the problem using simulated annealing [22, 23]. Simulated annealing takes a fixed number R of rounds to explore the solution space. In each round a random successor of the current solution is considered. If the successor does not adhere to the space limit, it is always rejected (i.e., the current solution is kept). A successor adhering to the space limit is always accepted if it achieves a lower expected processing cost than the current solution. If it achieves a higher expected processing cost, it is randomly accepted with probability e^(−∆/r), where ∆ is the increase in expected processing cost and R ≥ r ≥ 1 denotes the number of remaining rounds. In addition, throughout all rounds, the method keeps track of the best solution seen so far.

The solution space for the problem at hand can be explored efficiently. As we argued above, we solely have to look at sets M that completely cover the time
interval [t1, tn) and do not contain overlapping time intervals. We represent such a set M as an array of n boolean variables b1 ... bn that convey the boundaries of the time intervals in the set. Note that b1 and bn are always set to true. Initially, all n − 2 intermediate variables assume false, which corresponds to the set M = { [t1, tn) }. A random successor can now be easily generated by switching the value of one of the n − 2 intermediate variables. The time complexity of the method is in O(n²) - the expected processing cost must be computed in each round. Its space complexity is in O(n) - for keeping the n boolean variables. As a side remark, note that for κ = 1.0 the SB method does not necessarily produce the solution that is obtained from Sopt, but may produce a solution that requires the same amount of space while achieving better expected performance.

7. EXPERIMENTAL EVALUATION

We conducted a comprehensive series of experiments on two real-world datasets to evaluate the techniques proposed in this paper.

7.1 Setup and Datasets

The techniques described in this paper were implemented in a prototype system using Java JDK 1.5. All experiments described below were run on a single SUN V40z machine having four AMD Opteron CPUs, 16 GB RAM, a large network-attached RAID-5 disk array, and running Microsoft Windows Server 2003. All data and indexes are kept in an Oracle 10g database that runs on the same machine.

For our experiments we used two different datasets. The English Wikipedia revision history (referred to as WIKI in the remainder) is available for free download as a single XML file. This large dataset, totaling 0.7 TBytes, contains the full editing history of the English Wikipedia from January 2001 to December 2005 (the time of our download). We indexed all encyclopedia articles, excluding versions that were marked as the result of a minor edit (e.g., the correction of spelling errors). This yielded a total of 892,255
documents with 13,976,915 versions, having a mean (µ) of 15.67 versions per document at a standard deviation (σ) of 59.18. We built a time-travel query workload using the query log temporarily made available recently by AOL Research as follows: we first extracted the 300 most frequent keyword queries that yielded a result click on a Wikipedia article (e.g., french revolution, hurricane season 2005, da vinci code). The thus extracted queries contained a total of 422 distinct terms. For each extracted query, we randomly picked a time point for each month covered by the dataset. This resulted in a total of 18,000 (= 300 × 60) time-travel queries.

The second dataset used in our experiments was based on a subset of the European Archive [13], containing weekly crawls of 11 .gov.uk websites throughout the years 2004 and 2005, amounting to close to 2 TBytes of raw data. We filtered out documents not belonging to the MIME types text/plain and text/html, to obtain a dataset that totals 0.4 TBytes and which we refer to as UKGOV in the rest of the paper. This included a total of 502,617 documents with 8,687,108 versions (µ = 17.28 and σ = 13.79). We built a corresponding query workload as described before, this time choosing keyword queries that led to a site in the .gov.uk domain (e.g., minimum wage, inheritance tax, citizenship ceremony dates), and randomly sampling a time point for every month within the two-year period spanned by the dataset. Thus, we obtained a total of 7,200 (= 300 × 24) time-travel queries for the UKGOV dataset. In total, 522 terms appear in the extracted queries. The collection statistics (i.e., N and avdl) and term statistics (i.e., df) were computed at monthly granularity for both datasets.

7.2 Impact of Temporal Coalescing

Our first set of experiments is aimed at evaluating the approximate temporal coalescing technique, described in Section 5, in terms of index-size reduction and its effect on the
result quality. For both the WIKI and UKGOV datasets, we compare temporally coalesced indexes for different values of the error threshold ε, computed using Algorithm 1, with the non-coalesced index as a baseline.

  ε      WIKI # Postings   Ratio     UKGOV # Postings   Ratio
  -      8,647,996,223     100.00%   7,888,560,482      100.00%
  0.00   7,769,776,831      89.84%   2,926,731,708       37.10%
  0.01   1,616,014,825      18.69%     744,438,831        9.44%
  0.05     556,204,068       6.43%     259,947,199        3.30%
  0.10     379,962,802       4.39%     187,387,342        2.38%
  0.25     252,581,230       2.92%     158,107,198        2.00%
  0.50     203,269,464       2.35%     155,434,617        1.97%

Table 1: Index sizes for the non-coalesced index (-) and coalesced indexes for different values of ε

Table 1 summarizes the index sizes measured as the total number of postings. As these results demonstrate, approximate temporal coalescing is highly effective in reducing index size. Even a small threshold value, e.g., ε = 0.01, has a considerable effect by reducing the index size almost by an order of magnitude. Note that on the UKGOV dataset, even accurate coalescing (ε = 0) manages to reduce the index size to less than 38% of the original size. Index size continues to shrink on both datasets as we increase the value of ε.

How does the reduction in index size affect the query results? In order to evaluate this aspect, we compared the top-k results computed using a coalesced index against the ground-truth result obtained from the original index, for different cutoff levels k.
Let Gk and Ck be the top-k documents from the ground-truth result and from the coalesced index, respectively. We used the following two measures for comparison: (i) Relative Recall at cutoff level k (RR@k), which measures the overlap between Gk and Ck, ranges in [0, 1], and is defined as RR@k = |Gk ∩ Ck| / k. (ii) Kendall's τ (see [7, 14] for a detailed definition) at cutoff level k (KT@k), measuring the agreement between the two results in the relative order of the items in Gk ∩ Ck, with value 1 (or -1) indicating total agreement (or disagreement).

Figure 3 plots, for cutoff levels 10 and 100, the mean of RR@k and KT@k along with 5% and 95% percentiles, for different values of the threshold ε starting from 0.01. Note that for ε = 0, results coincide with those obtained by the original index, and hence are omitted from the graph.

Figure 3: Relative recall and Kendall's τ observed on coalesced indexes for different values of ε

It is reassuring to see from these results that approximate temporal coalescing induces minimal disruption to the query results, since RR@k and KT@k are within reasonable limits. For ε = 0.01, the smallest value of ε in our experiments, RR@100 for WIKI is 0.98, indicating that the results are almost indistinguishable from those obtained through the original index. Even the relative order of these common results is quite high, as the mean KT@100 is close to 0.95. For the extreme value of ε = 0.5, which results in an index size of just 2.35% of the original, the RR@100 and KT@100 are about 0.8 and 0.6, respectively. On the relatively less dynamic UKGOV dataset (as can be seen from the σ values above), results were even better, with high values of RR and KT seen throughout the spectrum of ε values for both cutoff levels.
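The two comparison measures can be sketched as follows; this is our own minimal implementation, using a simple Kendall's-τ variant restricted to the items common to both top-k lists:

```python
from itertools import combinations

def rr_at_k(ground, coalesced, k):
    """Relative recall RR@k: overlap of the two top-k sets, in [0, 1]."""
    return len(set(ground[:k]) & set(coalesced[:k])) / k

def kt_at_k(ground, coalesced, k):
    """KT@k on the common items: fraction of concordant minus discordant
    ordered pairs, in [-1, 1] (1 = total agreement)."""
    common = set(ground[:k]) & set(coalesced[:k])
    g = {d: i for i, d in enumerate(ground[:k]) if d in common}
    c = {d: i for i, d in enumerate(coalesced[:k]) if d in common}
    pairs = list(combinations(common, 2))
    if not pairs:
        return 1.0
    conc = sum(1 if (g[a] - g[b]) * (c[a] - c[b]) > 0 else -1
               for a, b in pairs)
    return conc / len(pairs)
```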
7.3 Sublist Materialization

We now turn our attention to evaluating the sublist materialization techniques introduced in Section 6. For both datasets, we started with the coalesced index produced by a moderate threshold setting of ε = 0.10. In order to reduce the computational effort, the boundaries of elementary time intervals were rounded to day granularity before computing the sublist materializations. However, note that the postings in the materialized sublists still retain their original timestamps.

For a comparative evaluation of the four approaches - Popt, Sopt, PG, and SB - we measure space and performance as follows. The required space S(M), as defined earlier, is equal to the total number of postings in the materialized sublists. To assess performance we compute the expected processing cost (EPC) for all terms in the respective query workload, assuming a uniform probability distribution over query time-points. We report the mean EPC, as well as the 5%- and 95%-percentiles. In other words, the mean EPC reflects the expected length of the index list (in terms of index postings) that needs to be scanned for a random time point and a random term from the query workload.

The Sopt and Popt approaches are, by their definition, parameter-free. For the PG approach, we varied its parameter γ, which limits the maximal performance degradation, between 1.0 and 3.0. Analogously, for the SB approach the parameter κ, as an upper bound on the allowed space blowup, was varied between 1.0 and 3.0. Solutions for the SB approach were obtained by running simulated annealing for R = 50,000 rounds.

Table 2 lists the obtained space and performance figures. Note that EPC values are smaller on WIKI than on UKGOV, since terms in the query workload employed for WIKI are relatively rarer in the corpus. Based on the depicted
results, we make the following key observations. (i) As expected, Popt achieves optimal performance at the cost of an enormous space consumption. Sopt, on the contrary, while consuming an optimal amount of space, provides only poor expected processing cost. The PG and SB methods, for different values of their respective parameters, produce solutions whose space and performance lie in between the extremes that Popt and Sopt represent. (ii) For the PG method we see that for an acceptable performance degradation of only 10% (i.e., γ = 1.10), the required space drops by more than one order of magnitude in comparison to Popt on both datasets. (iii) The SB approach achieves close-to-optimal performance on both datasets if allowed to consume at most three times the optimal amount of space (i.e., κ = 3.0), which on our datasets still corresponds to a space reduction over Popt by more than one order of magnitude. We also measured wall-clock times on a sample of the queries, with results indicating improvements in execution time by up to a factor of 12.

8. CONCLUSIONS

In this work we have developed an efficient solution for time-travel text search over temporally versioned document collections. Experiments on two real-world datasets showed that a combination of the proposed techniques can reduce index size by up to an order of magnitude while achieving nearly optimal performance and highly accurate results. The present work opens up many interesting questions for future research, e.g.: How can we further improve performance by applying (and possibly extending) encoding, compression, and skipping techniques [35]? How can we extend the approach to queries q^[tb, te] specifying a time interval instead of a time point? How can the described time-travel text search functionality enable or speed up text mining along the time axis (e.g., tracking sentiment changes in customer opinions)?

9. ACKNOWLEDGMENTS

We are grateful to the anonymous reviewers for
their valuable comments - in particular to the reviewer who pointed out the opportunity for algorithmic improvements in Section 5 and Section 6.2.

              WIKI                                             UKGOV
              S(M)            EPC 5%   EPC Mean   EPC 95%      S(M)            EPC 5%  EPC Mean   EPC 95%
Popt          54,821,634,137  11.22     3,132.29   15,658.42   21,372,607,052  39.93   15,593.60   66,938.86
Sopt             379,962,802  114.05   30,186.52  149,820.1       187,387,342  63.15   22,852.67  102,923.85
PG γ = 1.10    3,814,444,654  11.30     3,306.71   16,512.88    1,155,833,516  40.66   16,105.61   71,134.99
PG γ = 1.25    1,827,163,576  12.37     3,629.05   18,120.86      649,884,260  43.62   17,059.47   75,749.00
PG γ = 1.50    1,121,661,751  13.96     4,128.03   20,558.60      436,578,665  46.68   18,379.69   78,115.89
PG γ = 1.75      878,959,582  15.48     4,560.99   22,476.77      345,422,898  51.26   19,150.06   82,028.48
PG γ = 2.00      744,381,287  16.79     4,992.53   24,637.62      306,944,062  51.48   19,499.78   87,136.31
PG γ = 2.50      614,258,576  18.28     5,801.66   28,849.02      269,178,107  53.36   20,279.62   87,897.95
PG γ = 3.00      552,796,130  21.04     6,485.44   32,361.93      247,666,812  55.95   20,800.35   89,591.94
SB κ = 1.10      412,383,387  38.97    12,723.68   60,350.60      194,287,671  63.09   22,574.54  102,208.58
SB κ = 1.25      467,537,173  26.87     9,011.81   45,119.08      204,454,800  57.42   22,036.39   95,337.33
SB κ = 1.50      557,341,140  19.84     6,699.36   32,810.85      246,323,383  53.24   20,566.68   91,691.38
SB κ = 1.75      647,187,522  16.59     5,769.40   28,272.89      296,345,976  49.56   19,065.99   84,377.44
SB κ = 2.00      737,819,354  15.86     5,358.99   27,112.01      336,445,773  47.58   18,569.08   81,386.02
SB κ = 2.50      916,308,766  13.99     4,639.77   23,037.59      427,122,038  44.89   17,153.94   74,449.28
SB κ = 3.00    1,094,973,140  13.01     4,343.72   22,708.37      511,470,192  42.15   16,772.65   72,307.43

Table 2: Required space and expected processing cost (in # postings) observed on coalesced indexes (ε = 0.10)

10. REFERENCES

[1] V. N. Anh and A. Moffat. Pruned Query Evaluation Using Pre-Computed Impacts. In SIGIR, 2006.
[2] V. N. Anh and A. Moffat. Pruning Strategies for Mixed-Mode Querying. In CIKM, 2006.
[3] P. G.
Anick and R. A. Flynn.\nVersioning a Full-Text Information Retrieval System.\nIn SIGIR, 1992.\n[4] R. A. Baeza-Yates and B. Ribeiro-Neto.\nModern Information Retrieval.\nAddison-Wesley, 1999.\n[5] K. Berberich, S. Bedathur, T. Neumann, and G. Weikum.\nA Time Machine for Text search.\nTechnical Report MPI-I-2007-5-002, Max-Planck Institute for Informatics, 2007.\n[6] M. H. B\u00a8ohlen, R. T. Snodgrass, and M. D. Soo.\nCoalescing in Temporal Databases.\nIn VLDB, 1996.\n[7] P. Boldi, M. Santini, and S. Vigna.\nDo Your Worst to Make the Best: Paradoxical Effects in PageRank Incremental Computations.\nIn WAW, 2004.\n[8] A. Z. Broder, N. Eiron, M. Fontoura, M. Herscovici, R. Lempel, J. McPherson, R. Qi, and E. J. Shekita.\nIndexing Shared Content in Information Retrieval Systems.\nIn EDBT, 2006.\n[9] C. Buckley and A. F. Lewit.\nOptimization of Inverted Vector Searches.\nIn SIGIR, 1985.\n[10] M. Burrows and A. L. Hisgen.\nMethod and Apparatus for Generating and Searching Range-Based Index of Word Locations.\nU.S. Patent 5,915,251, 1999.\n[11] S. B\u00a8uttcher and C. L. A. Clarke.\nA Document-Centric Approach to Static Index Pruning in Text Retrieval Systems.\nIn CIKM, 2006.\n[12] D. Carmel, D. Cohen, R. Fagin, E. Farchi, M. Herscovici, Y. S. Maarek, and A. Soffer.\nStatic Index Pruning for Information Retrieval Systems.\nIn SIGIR, 2001.\n[13] http:\/\/www.europarchive.org.\n[14] R. Fagin, R. Kumar, and D. Sivakumar.\nComparing Top k Lists.\nSIAM J. Discrete Math., 17(1):134-160, 2003.\n[15] R. Fagin, A. Lotem, and M. Naor.\nOptimal Aggregation Algorithms for Middleware.\nJ. Comput.\nSyst.\nSci., 66(4):614-656, 2003.\n[16] S. Guha, K. Shim, and J. Woo.\nREHIST: Relative Error Histogram Construction Algorithms.\nIn VLDB, 2004.\n[17] M. Hersovici, R. Lempel, and S. Yogev.\nEfficient Indexing of Versioned Document Sequences.\nIn ECIR, 2007.\n[18] http:\/\/www.archive.org.\n[19] Y. E. Ioannidis and V. 
Poosala.\nBalancing Histogram Optimality and Practicality for Query Result Size Estimation.\nIn SIGMOD, 1995.\n[20] H. V. Jagadish, N. Koudas, S. Muthukrishnan, V. Poosala, K. C. Sevcik, and T. Suel.\nOptimal Histograms with Quality Guarantees.\nIn VLDB, 1998.\n[21] E. J. Keogh, S. Chu, D. Hart, and M. J. Pazzani.\nAn Online Algorithm for Segmenting Time Series.\nIn ICDM, 2001.\n[22] S. Kirkpatrick, D. G. Jr., and M. P. Vecchi.\nOptimization by Simulated Annealing.\nScience, 220(4598):671-680, 1983.\n[23] J. Kleinberg and E. Tardos.\nAlgorithm Design.\nAddison-Wesley, 2005.\n[24] U. Manber.\nIntroduction to Algorithms: A Creative Approach.\nAddison-Wesley, 1989.\n[25] K. N\u00f8rv\u02daag and A. O. N. Nyb\u00f8.\nDyST: Dynamic and Scalable Temporal Text Indexing.\nIn TIME, 2006.\n[26] J. M. Ponte and W. B. Croft.\nA Language Modeling Approach to Information Retrieval.\nIn SIGIR, 1998.\n[27] S. E. Robertson and S. Walker.\nOkapi\/Keenbow at TREC-8.\nIn TREC, 1999.\n[28] B. Salzberg and V. J. Tsotras.\nComparison of Access Methods for Time-Evolving Data.\nACM Comput.\nSurv., 31(2):158-221, 1999.\n[29] M. Stack.\nFull Text Search of Web Archive Collections.\nIn IWAW, 2006.\n[30] E. Terzi and P. Tsaparas.\nEfficient Algorithms for Sequence Segmentation.\nIn SIAM-DM, 2006.\n[31] M. Theobald, G. Weikum, and R. Schenkel.\nTop-k Query Evaluation with Probabilistic Guarantees.\nIn VLDB, 2004.\n[32] http:\/\/www.wikipedia.org.\n[33] I. H. Witten, A. Moffat, and T. C. Bell.\nManaging Gigabytes: Compressing and Indexing Documents and Images.\nMorgan Kaufmann publishers Inc., 1999.\n[34] J. Zhang and T. Suel.\nEfficient Search in Large Textual Collections with Redundancy.\nIn WWW, 2007.\n[35] J. Zobel and A. 
Moffat. Inverted Files for Text Search Engines. ACM Comput. Surv., 38(2):6, 2006.

A Time Machine for Text Search

ABSTRACT
Text search over temporally versioned document collections such as web archives has received little attention as a research problem. As a consequence, there is no scalable and principled solution to search such a collection as of a specified
time t. In this work, we address this shortcoming and propose an efficient solution for time-travel text search by extending the inverted file index to make it ready for temporal search. We introduce approximate temporal coalescing as a tunable method to reduce the index size without significantly affecting the quality of results. In order to further improve the performance of time-travel queries, we introduce two principled techniques to trade off index size for its performance. These techniques can be formulated as optimization problems that can be solved to near-optimality. Finally, our approach is evaluated in a comprehensive series of experiments on two large-scale real-world datasets. Results unequivocally show that our methods make it possible to build an efficient "time machine" scalable to large versioned text collections.

1. INTRODUCTION
In this work we address time-travel text search over temporally versioned document collections. Given a keyword query q and a time t, our goal is to identify and rank relevant documents as if the collection was in its state as of time t. An increasing number of such versioned document collections is available today, including web archives, collaborative authoring environments like Wikis, or timestamped information feeds. Text search on these collections, however, is mostly time-ignorant: while the searched collection changes over time, often only the most recent version of a document is indexed, or versions are indexed independently and treated as separate documents. Even worse, for some collections, in particular web archives like the Internet Archive [18], a comprehensive text-search functionality is often completely missing. Time-travel text search, as we develop it in this paper, is a crucial tool to explore these collections and to unfold their full potential, as the following example demonstrates.
For a documentary about a past political scandal, a journalist needs to research early opinions and
statements made by the involved politicians. Sending an appropriate query to a major web search-engine, the majority of returned results contain only recent coverage, since many of the early web pages have disappeared and are only preserved in web archives. If the query could be enriched with a time point, say August 20th 2003 as the day after the scandal got revealed, and be issued against a web archive, only pages that existed specifically at that time could be retrieved, thus better satisfying the journalist's information need.
Document collections like the Web or Wikipedia [32], as we target them here, are already large if only a single snapshot is considered. Looking at their evolutionary history, we are faced with even larger data volumes. As a consequence, naïve approaches to time-travel text search fail, and viable approaches must scale up well to such large data volumes.
This paper presents an efficient solution to time-travel text search by making the following key contributions:
1. The popular, well-studied inverted file index [35] is transparently extended to enable time-travel text search.
2. Temporal coalescing is introduced to avoid an index-size explosion while keeping results highly accurate.
3. We develop two sublist materialization techniques to improve index performance that allow trading off space vs.
performance.
4. In a comprehensive experimental evaluation, our approach is evaluated on the English Wikipedia and parts of the Internet Archive as two large-scale real-world datasets with versioned documents.
The remainder of this paper is organized as follows. The presented work is put in context with related work in Section 2. We delineate our model of a temporally versioned document collection in Section 3. We present our time-travel inverted index in Section 4. Building on it, temporal coalescing is described in Section 5. In Section 6 we describe principled techniques to improve index performance, before presenting the results of our experimental evaluation in Section 7.

2. RELATED WORK
We can classify the related work mainly into the following two categories: (i) methods that deal explicitly with collections of versioned documents or temporal databases, and (ii) methods for reducing the index size by exploiting either the document-content overlap or by pruning portions of the index. We briefly review work under these categories here.
To the best of our knowledge, there is very little prior work dealing with historical search over temporally versioned documents. Anick and Flynn [3], while pioneering this research, describe a help-desk system that supports historical queries. Access costs are optimized for accesses to the most recent versions and increase as one moves farther into the past. Burrows and Hisgen [10], in a patent description, delineate a method for indexing range-based values and mention its potential use for searching based on dates associated with documents. Recent work by Nørvåg and Nybø [25] and their earlier proposals concentrate on the relatively simpler problem of supporting text-containment queries only and neglect the relevance scoring of results. Stack [29] reports practical experiences made when adapting the open-source search-engine Nutch to search web archives. This adaptation, however, does not
provide the intended time-travel text search functionality.
In contrast, research in temporal databases has produced several index structures tailored for time-evolving databases; a comprehensive overview of the state of the art is available in [28]. Unlike the inverted file index, their applicability to text search is not well understood. Moving on to the second category of related work, Broder et al. [8] describe a technique that exploits large content overlaps between documents to achieve a reduction in index size. Their technique makes strong assumptions about the structure of document overlaps, rendering it inapplicable to our context. More recent approaches by Hersovici et al. [17] and Zhang and Suel [34] exploit arbitrary content overlaps between documents to reduce index size. None of the approaches, however, considers time explicitly or provides the desired time-travel text search functionality. Static index-pruning techniques [11, 12] aim to reduce the effective index size by removing portions of the index that are expected to have low impact on the query result. They also do not consider temporal aspects of documents, and thus are technically quite different from our proposal despite having a shared goal of index-size reduction. It should be noted that index-pruning techniques can be adapted to work along with the temporal text index we propose here.

3. MODEL
In the present work, we deal with a temporally versioned document collection D that is modeled as described in the following. Each document d ∈ D is a sequence of its versions d = (d^t1, d^t2, ...). Each version d^ti has an associated timestamp ti reflecting when the version was created. Each version is a vector of searchable terms or features. Any modification to a document version results in the insertion of a new version with corresponding timestamp. We employ a discrete definition of time, so that timestamps are non-negative integers. The deletion of a document at time ti, i.e., its
disappearance from the current state of the collection, is modeled as the insertion of a special "tombstone" version ⊥. The validity time-interval val(d^ti) of a version d^ti is [ti, ti+1) if a newer version with associated timestamp ti+1 exists, and [ti, now) otherwise, where now points to the greatest possible value of a timestamp (i.e., ∀ d^t : t < now). Putting all this together, we define the state D^t of the collection at time t (i.e., the set of versions valid at t that are not deletions) as

D^t = { d^ti | d ∈ D ∧ d^ti ∈ d ∧ t ∈ val(d^ti) ∧ d^ti ≠ ⊥ } .

As mentioned earlier, we want to enrich a keyword query q with a timestamp t, so that q is evaluated over D^t, i.e., the state of the collection at time t. The enriched time-travel query is written as q^t for brevity.
As a retrieval model in this work we adopt Okapi BM25 [27], but note that the proposed techniques are not dependent on this choice and are applicable to other retrieval models like tf-idf [4] or language models [26] as well. For our considered setting, we slightly adapt Okapi BM25 as

w(q^t, d^ti) = Σ_{v ∈ q} wtf(v, d^ti) · widf(v, t) .

In the above formula, the relevance w(q^t, d^ti) of a document version d^ti to the time-travel query q^t is defined. We reiterate that q^t is evaluated over D^t, so that only the version d^ti valid at time t is considered. The first factor wtf(v, d^ti) in the summation, further referred to as the tf-score, is defined as

wtf(v, d^ti) = ((k1 + 1) · tf(v, d^ti)) / (k1 · ((1 − b) + b · dl(d^ti) / avdl(ti)) + tf(v, d^ti)) .

It considers the plain term frequency tf(v, d^ti) of term v in version d^ti, normalizing it while taking into account both the length dl(d^ti) of the version and the average document length avdl(ti) in the collection at time ti. The length-normalization parameter b and the tf-saturation parameter k1 are inherited from the original Okapi BM25 and are commonly set to 0.75 and 1.2, respectively. The second factor widf(v, t), which we refer to as the idf-score in the remainder, conveys the inverse document frequency of term v in the collection at time t and is defined as

widf(v, t) = log( (N(t) − df(v, t) + 0.5) / (df(v, t) + 0.5) ) ,

where N(t) = |D^t|
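The versioned-collection model can be made concrete with a small Python sketch (the names `validity_intervals`, `state_at`, and the `None` tombstone marker are our illustrative choices, not notation from the paper): it derives the validity time-interval [ti, ti+1) of each version and computes the state D^t of the collection at a time t.

```python
NOW = float("inf")  # stand-in for 'now', larger than any concrete timestamp
TOMBSTONE = None    # marks a deletion (the special tombstone version)

def validity_intervals(versions):
    """versions: list of (timestamp, terms) pairs for one document, sorted
    by timestamp. Returns (terms, tb, te) triples where [tb, te) is the
    validity time-interval of each version."""
    out = []
    for i, (t, terms) in enumerate(versions):
        t_end = versions[i + 1][0] if i + 1 < len(versions) else NOW
        out.append((terms, t, t_end))
    return out

def state_at(collection, t):
    """State D^t: the versions valid at time t that are not deletions."""
    state = {}
    for doc_id, versions in collection.items():
        for terms, tb, te in validity_intervals(versions):
            if tb <= t < te and terms is not TOMBSTONE:
                state[doc_id] = terms
    return state
```

For example, a document with versions at times 1 and 5 and a tombstone at time 9 is visible in D^t for all 1 <= t < 9 and absent afterwards.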
is the collection size at time t and df(v, t) gives the number of documents in the collection that contain the term v at time t. While the idf-score depends on the whole corpus as of the query time t, the tf-score is specific to each version.

4. TIME-TRAVEL INVERTED FILE INDEX
The inverted file index is a standard technique for text indexing, deployed in many systems. In this section, we briefly review this technique and present our extensions to the inverted file index that make it ready for time-travel text search.

4.1 Inverted File Index
An inverted file index consists of a vocabulary, commonly organized as a B+-tree, that maps each term to its idf-score and inverted list. The index list Lv belonging to term v contains postings of the form (d, p), where d is a document-identifier and p is the so-called payload. The payload p contains information about the term frequency of v in d, but may also include positional information about where the term appears in the document. The sort-order of index lists depends on which queries are to be supported efficiently. For Boolean queries it is favorable to sort index lists in document-order. Frequency-ordered and impact-ordered index lists are beneficial for ranked queries and enable optimized query processing that stops early after having identified the k most relevant documents [1, 2, 9, 15, 31]. A variety of compression techniques, such as encoding document identifiers more compactly, have been proposed [33, 35] to reduce the size of index lists. For an excellent recent survey about inverted file indexes we refer to [35].

4.2 Time-Travel Inverted File Index
In order to prepare an inverted file index for time travel, we extend both inverted lists and the vocabulary structure by explicitly incorporating temporal information. The main idea for inverted lists is that we include a validity time-interval [tb, te) in postings to denote when the payload information was valid. The postings in our time-travel inverted
file index are thus of the form (d, p, [tb, te)), where d and p are defined as in the standard inverted file index above and [tb, te) is the validity time-interval. As a concrete example, in our implementation, for a version d^ti having the Okapi BM25 tf-score wtf(v, d^ti) for term v, the index list Lv contains the posting (d, wtf(v, d^ti), [ti, ti+1)). Similarly, the extended vocabulary structure maintains for each term a time-series of idf-scores organized as a B+-tree. Unlike the tf-score, the idf-score of every term could vary with every change in the corpus. Therefore, we take a simplified approach to idf-score maintenance, by computing idf-scores for all terms in the corpus at specific (possibly periodic) times.

4.3 Query Processing
During processing of a time-travel query q^t, for each query term the corresponding idf-score valid at time t is retrieved from the extended vocabulary. Then, index lists are sequentially read from disk, thereby accumulating the information contained in the postings. We transparently extend the sequential reading, which is, to the best of our knowledge, common to all query processing techniques on inverted file indexes, thus making them suitable for time-travel query processing. To this end, sequential reading is extended by skipping all postings whose validity time-interval does not contain t (i.e., t ∉ [tb, te)). Whether a posting can be skipped can only be decided after the posting has been transferred from disk into memory and therefore still incurs significant I/O cost. As a remedy, we propose index organization techniques in Section 6 that aim to reduce the I/O overhead significantly. We note that our proposed extension of the inverted file index makes no assumptions about the sort-order of index lists. As a consequence, existing query-processing techniques and most optimizations (e.g., compression techniques) remain equally applicable.

5. TEMPORAL COALESCING
If we employ the time-travel inverted
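The query processing of Section 4.3 can be sketched in a few lines of Python (an in-memory simplification: `idf_at` stands in for the extended vocabulary lookup, postings are plain tuples, and disk I/O, compression, and sort-order optimizations are omitted):

```python
def process_timetravel_query(index, idf_at, query_terms, t):
    """Evaluate a time-travel query q^t.

    index maps a term to its time-travel index list of postings
    (doc, tf_score, tb, te); idf_at(term, t) returns the idf-score of
    the term valid at time t. Postings whose validity time-interval
    [tb, te) does not contain t are skipped during the sequential scan."""
    scores = {}
    for term in query_terms:
        idf = idf_at(term, t)
        for doc, tf_score, tb, te in index.get(term, []):
            if not (tb <= t < te):  # t not in [tb, te): skip this posting
                continue
            scores[doc] = scores.get(doc, 0.0) + tf_score * idf
    # rank documents by descending accumulated score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Note how the same document can contribute different postings for different query times, but at most one of them is valid at any given t.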
index, as described in the previous section, to a versioned document collection, we obtain one posting per term per document version.\nFor frequent terms and large highly-dynamic collections, this leads to extremely long index lists with very poor query-processing performance.\nFigure 1: Approximate Temporal Coalescing\nThe approximate temporal coalescing technique that we propose in this section counters this blowup in index-list size.\nIt builds on the observation that most changes in a versioned document collection are minor, leaving large parts of the document untouched.\nAs a consequence, the payload of many postings belonging to temporally adjacent versions will differ only slightly or not at all.\nApproximate temporal coalescing reduces the number of postings in an index list by merging sequences of postings that have almost equal payloads, while keeping the maximal error bounded.\nThis idea is illustrated in Figure 1, which plots non-coalesced and coalesced scores of postings belonging to a single document.\nApproximate temporal coalescing is greatly effective given such fluctuating payloads and reduces the number of postings from 9 to 3 in the example.\nThe notion of temporal coalescing was originally introduced in temporal database research by B\u00f6hlen et al. 
[6], where the simpler problem of coalescing only equal information was considered.\nWe next formally state the problem dealt with in approximate temporal coalescing, and discuss the computation of optimal and approximate solutions.\nNote that the technique is applied to each index list separately, so that the following explanations assume a fixed term v and index list Lv.\nAs an input we are given a sequence I of temporally adjacent postings.\nEach sequence represents a contiguous time period during which the term was present in a single document d.\nIf a term disappears from d but reappears later, we obtain multiple input sequences that are dealt with separately.\nWe seek to generate the minimal-length output sequence O of postings that adheres to the following constraints: First, O and I must cover the same time range, i.e., their first time-interval boundaries and their last time-interval boundaries coincide.\nSecond, when coalescing a subsequence of postings of the input into a single posting of the output, we want the approximation error to be below a threshold \u03b5.\nIn other words, if (d, pi, [ti, ti +1)) and (d, pj, [tj, tj +1)) are postings of I and O respectively, then the following must hold for a chosen error function and a threshold \u03b5: tj \u2264 ti \u2227 ti +1 \u2264 tj +1 \u21d2 error (pi, pj) \u2264 \u03b5.\nIn this paper, as an error function we employ the relative error between payloads (i.e., tf-scores) of a document in I and O, defined as: errrel (pi, pj) = | pi \u2212 pj | \/ | pi |.\nFinding an optimal output sequence of postings can be cast into finding a piecewise-constant representation for the points (ti, pi) that uses a minimal number of segments while retaining the above approximation guarantee.\nSimilar problems occur in time-series segmentation [21, 30] and histogram construction [19, 20].\nTypically, dynamic programming is applied to obtain an optimal solution in O (n\u00b2 m*) time [20, 30], with m* being the number of segments in an optimal sequence.\nIn our setting, as a key difference, only a guarantee on the local error is retained--in 
contrast to a guarantee on the global error in the aforementioned settings.\nExploiting this fact, an optimal solution is computable by means of induction [24] in O (n\u00b2) time.\nDetails of the optimal algorithm are omitted here but can be found in the accompanying technical report [5].\nThe quadratic complexity of the optimal algorithm makes it inappropriate for the large datasets encountered in this work.\nAs an alternative, we introduce a linear-time approximate algorithm that is based on the sliding-window algorithm given in [21].\nThis algorithm produces nearly-optimal output sequences that retain the bound on the relative error, but possibly require a few more segments than an optimal solution.\nAlgorithm 1 Temporal Coalescing (Approximate) 1: I = ((d, p1, [t1, t2)), ...) O = () 2: pmin = p1 pmax = p1 p = p1 tb = t1 te = t2 3: for (d, pj, [tj, tj +1)) \u2208 I do 4: p'min = min (pmin, pj) p'max = max (pmax, pj) 5: p' = optrep (p'min, p'max) 6: if errrel (p'min, p') \u2264 \u03b5 \u2227 errrel (p'max, p') \u2264 \u03b5 then 7: pmin = p'min pmax = p'max p = p' te = tj +1 8: else 9: O = O \u222a (d, p, [tb, te)) 10: pmin = pj pmax = pj p = pj tb = tj te = tj +1 11: end if 12: end for 13: O = O \u222a (d, p, [tb, te))\nAlgorithm 1 makes one pass over the input sequence I.\nWhile doing so, it coalesces sequences of postings having maximal length.\nThe optimal representative for a sequence of postings depends only on their minimal and maximal payload (pmin and pmax) and can be looked up using optrep in O (1) (see [16] for details).\nWhen reading the next posting, the algorithm tries to add it to the current sequence of postings.\nIt computes the hypothetical new representative p' and checks whether it would retain the approximation guarantee.\nIf this test fails, a coalesced posting bearing the old representative is added to the output sequence O and, following that, the bookkeeping is reinitialized.\nThe time complexity of the algorithm is in O (n).\nNote that, since we make no 
assumptions about the sort order of index lists, temporal-coalescing algorithms have an additional preprocessing cost in O (| Lv | log | Lv |) for sorting the index list and chopping it up into subsequences for each document.\n6.\nSUBLIST MATERIALIZATION\nEfficiency of processing a query q t on our time-travel inverted index is influenced adversely by the wasted I\/O due to read but skipped postings.\nTemporal coalescing implicitly addresses this problem by reducing the overall index-list size, but still a significant overhead remains.\nIn this section, we tackle this problem by proposing the idea of materializing sublists, each of which corresponds to a contiguous subinterval of the time spanned by the full index.\nEach of these sublists contains all coalesced postings that overlap with the corresponding time interval of the sublist.\nNote that all those postings whose validity time-interval spans across the temporal boundaries of several sublists are replicated in each of the spanned sublists.\nThus, in order to process the query q t, it is sufficient to scan any materialized sublist whose time-interval contains t.\nWe illustrate the idea of sublist materialization using the example shown in Figure 2.\nFigure 2: Sublist Materialization\nThe index list Lv visualized in the figure contains a total of 10 postings from three documents d1, d2, and d3.\nFor ease of description, we have numbered boundaries of validity time-intervals, in increasing time-order, as t1, ..., t10 and numbered the postings themselves as 1, ..., 10.\nNow, consider the processing of a query q t with t \u2208 [t1, t2) using this inverted list.\nAlthough only three postings (postings 1, 5 and 8) are valid at time t, the whole inverted list has to be read in the worst case.\nSuppose that we split the time axis of the list at time t2, forming two sublists with postings {1, 5, 8} and {2, 3, 4, 5, 6, 7, 8, 9, 10}, respectively.\nThen, we can process the above query with optimal cost by reading only those postings that existed at this t.\nAt first glance, it may seem counterintuitive to reduce index size in the first step (using temporal coalescing), and then to increase it again using the sublist-materialization techniques presented in this section.\nHowever, we reiterate that our main objective is to improve the efficiency of processing queries, not to reduce the index size alone.\nThe use of temporal coalescing improves performance by reducing the index size, while sublist materialization improves performance by judiciously replicating entries.\nFurther, the two techniques can be applied separately and are independent.\nIf applied in conjunction, though, there is a synergetic effect--sublists that are materialized from a temporally coalesced index are generally smaller.\nWe employ the notation Lv: [ti, tj) to refer to the materialized sublist for the time interval [ti, tj), which is formally defined as Lv: [ti, tj) = {(d, p, [ta, tb)) \u2208 Lv: [ta, tb) \u2229 [ti, tj) \u2260 \u2205}.\nTo aid the presentation in the rest of the paper, we first provide some definitions.\nLet T = (t1 ... tn) be the sorted sequence of all unique time-interval boundaries of an inverted list Lv.\nThen we define E = {[ti, ti +1): 1 \u2264 i < n} to be the set of elementary time intervals.\nWe refer to the set of time intervals for which sublists are materialized as M \u2286 {[ti, tj): 1 \u2264 i < j \u2264 n}, and demand \u222a [ti, tj) \u2208 M [ti, tj) = [t1, tn), i.e., the time intervals in M must completely cover the time interval [t1, tn), so that time-travel queries q t for all t \u2208 [t1, tn) can be processed.\nWe also assume that intervals in M are disjoint.\nWe can make this assumption without ruling out any optimal solution with regard to space or performance as defined below.\nThe space required for the materialization of sublists in a set M is defined as S (M) = \u03a3 [ti, tj) \u2208 M | Lv: [ti, tj) |, i.e., the total length of all lists in M. 
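To make the sublist bookkeeping above concrete, the following is a minimal sketch, not the paper's implementation: postings are modeled as (document, payload, tb, te) tuples over integer time-boundary indices, and all function and variable names are illustrative assumptions.

```python
def materialize(postings, intervals):
    """Replicate each posting (doc, payload, tb, te) into every sublist
    Lv:[lo, hi) whose time interval overlaps the posting's validity
    time-interval [tb, te)."""
    sublists = {iv: [] for iv in intervals}
    for (d, p, tb, te) in postings:
        for (lo, hi) in intervals:
            if tb < hi and te > lo:  # [tb, te) and [lo, hi) overlap
                sublists[(lo, hi)].append((d, p, tb, te))
    return sublists

def space(sublists):
    """S(M): total number of postings kept across all materialized sublists."""
    return sum(len(lst) for lst in sublists.values())

def query(sublists, t):
    """Scan the single sublist whose interval contains t, keeping only
    postings valid at t (the skipped postings are the wasted I/O)."""
    for (lo, hi), lst in sublists.items():
        if lo <= t < hi:
            return [(d, p) for (d, p, tb, te) in lst if tb <= t < te]
    return []
```

Choosing intervals = [(t1, tn)] reproduces the space-optimal extreme (every query scans the full list), while one interval per elementary time interval reproduces the performance-optimal extreme at the cost of replicating long-lived postings.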
Given a set M, we let \u03c0 ([ti, ti +1)) denote the unique [tj, tk) \u2208 M with [ti, ti +1) \u2286 [tj, tk), i.e., the time interval that is used to process queries q t with t \u2208 [ti, ti +1).\nThe performance of processing queries q t for t \u2208 [ti, ti +1) inversely depends on the processing cost c ([ti, ti +1)), which is assumed to be proportional to the length of the list Lv: \u03c0 ([ti, ti +1)).\nThus, in order to optimize the performance of processing queries we minimize their processing costs.\n6.1 Performance\/Space-Optimal Approaches\nOne strategy to eliminate the problem of skipped postings is to eagerly materialize sublists for all elementary time intervals, i.e., to choose M = E.\nIn doing so, for every query q t only postings valid at time t are read and thus the best possible performance is achieved.\nTherefore, we will refer to this approach as Popt in the remainder.\nThe initial approach described above that keeps only the full list Lv and thus picks M = {[t1, tn)} is referred to as Sopt in the remainder.\nThis approach requires minimal space, since it keeps each posting exactly once.\nPopt and Sopt are extremes: the former provides the best possible performance but is not space-efficient, the latter requires minimal space but does not provide good performance.\nThe two approaches presented in the rest of this section allow trading off space and performance and can thus be thought of as means to explore the configuration spectrum between the Popt and the Sopt approach.\n6.2 Performance-Guarantee Approach\nThe Popt approach clearly wastes a lot of space materializing many nearly-identical sublists.\nIn the example illustrated in Figure 2, the materialized sublists for [t1, t2) and [t2, t3) differ only by one posting.\nIf the sublist for [t1, t3) were materialized instead, one could save significant space while incurring only an overhead of one skipped posting for all t \u2208 [t1, t3).\nThe technique presented next is driven by the idea that significant space savings over 
Popt are achievable, if an upper-bounded loss on the performance can be tolerated, or to put it differently, if a performance guarantee relative to the optimum is to be retained.\nIn detail, the technique, which we refer to as PG (Performance Guarantee) in the remainder, finds a set M that has minimal required space, but guarantees for any elementary time interval [ti, ti +1) (and thus for any query q t with t \u2208 [ti, ti +1)) that performance is worse than optimal by at most a factor of \u03b3 \u2265 1.\nFormally, this problem can be stated as: minimize S (M) subject to | Lv: \u03c0 ([ti, ti +1)) | \u2264 \u03b3 \u00b7 | Lv: [ti, ti +1) | for all [ti, ti +1) \u2208 E.\nAn optimal solution to the problem can be computed by means of induction using the recurrence C ([t1, tk +1)) = min {C ([t1, tj)) + | Lv: [tj, tk +1) |: 1 \u2264 j \u2264 k and [tj, tk +1) can be materialized without violating the performance guarantee}, where C ([t1, tj)) is the optimal cost (i.e., the space required) for the prefix subproblem restricted to [t1, tj).\nIntuitively, the recurrence states that an optimal solution for [t1, tk +1) can be combined from an optimal solution to a prefix subproblem C ([t1, tj)) and a time interval [tj, tk +1) that can be materialized without violating the performance guarantee.\nPseudocode of the algorithm is omitted for space reasons, but can be found in the accompanying technical report [5].\nThe time complexity of the algorithm is in O (n\u00b2)--for each prefix subproblem the above recurrence must be evaluated, which is possible in linear time if list sizes | Lv: [ti, tj) | are precomputed.\nThe space complexity is in O (n\u00b2)--the cost of keeping the precomputed sublist lengths and memoizing optimal solutions to prefix subproblems.\n6.3 Space-Bound Approach\nSo far we considered the problem of materializing sublists that give a guarantee on performance while requiring minimal space.\nIn many situations, though, the storage space is at a premium and the aim would be to materialize a set of sublists that optimizes expected performance while not exceeding a given space limit.\nThe technique presented next, which is named SB, tackles this very problem.\nThe space restriction is modeled by means of a user-specified parameter \u03ba \u2265 1 that limits the maximum allowed 
blowup in index size from the space-optimal solution provided by Sopt.\nThe SB technique seeks to find a set M that adheres to this space limit but minimizes the expected processing cost (and thus optimizes the expected performance).\nIn the definition of the expected processing cost, P ([ti, ti +1)) denotes the probability of a query time-point being in [ti, ti +1).\nFormally, this space-bound sublist-materialization problem can be stated as: minimize \u03a3 [ti, ti +1) \u2208 E P ([ti, ti +1)) \u00b7 | Lv: \u03c0 ([ti, ti +1)) | subject to S (M) \u2264 \u03ba \u00b7 | Lv |.\nThe problem can be solved by using dynamic programming over an increasing number of time intervals: At each time interval in E the algorithm decides whether to start a new materialization time-interval, using the known best materialization decision from the previous time intervals, and keeping track of the required space consumption for materialization.\nA detailed description of the algorithm is omitted here, but can be found in the accompanying technical report [5].\nUnfortunately, the algorithm has time complexity in O (n\u00b3 | Lv |) and its space complexity is in O (n\u00b2 | Lv |), which is not practical for large data sets.\nWe obtain an approximate solution to the problem using simulated annealing [22, 23].\nSimulated annealing takes a fixed number R of rounds to explore the solution space.\nIn each round a random successor of the current solution is looked at.\nIf the successor does not adhere to the space limit, it is always rejected (i.e., the current solution is kept).\nA successor adhering to the space limit is always accepted if it achieves lower expected processing cost than the current solution.\nIf it achieves higher expected processing cost, it is randomly accepted with probability e^(\u2212\u0394\/r), where \u0394 is the increase in expected processing cost and R \u2265 r \u2265 1 denotes the number of remaining rounds.\nIn addition, throughout all rounds, the method keeps track of the best solution seen so far.\nThe solution space for the problem at hand can be efficiently explored.\nAs we argued above, we solely have 
to look at sets M that completely cover the time interval [t1, tn) and do not contain overlapping time intervals.\nWe represent such a set M as an array of n boolean variables b1...bn that convey the boundaries of time intervals in the set.\nNote that b1 and bn are always set to \"true\".\nInitially, all n \u2212 2 intermediate variables assume \"false\", which corresponds to the set M = {[t1, tn)}.\nA random successor can now be easily generated by switching the value of one of the n \u2212 2 intermediate variables.\nThe time complexity of the method is in O (n\u00b2)--the expected processing cost must be computed in each round.\nIts space complexity is in O (n)--for keeping the n boolean variables.\nAs a side remark, note that for \u03ba = 1.0 the SB method does not necessarily produce the solution that is obtained from Sopt, but may produce a solution that requires the same amount of space while achieving better expected performance.\n7.\nEXPERIMENTAL EVALUATION\nWe conducted a comprehensive series of experiments on two real-world datasets to evaluate the techniques proposed in this paper.\n7.1 Setup and Datasets\nThe techniques described in this paper were implemented in a prototype system using Java JDK 1.5.\nAll experiments described below were run on a single SUN V40z machine having four AMD Opteron CPUs, 16GB RAM, a large network-attached RAID-5 disk array, and running Microsoft Windows Server 2003.\nAll data and indexes are kept in an Oracle 10g database that runs on the same machine.\nFor our experiments we used two different datasets.\nThe English Wikipedia revision history (referred to as WIKI in the remainder) is available for free download as a single XML file.\nThis large dataset, totaling 0.7 TBytes, contains the full editing history of the English Wikipedia from January 2001 to December 2005 (the time of our download).\nWe indexed all encyclopedia articles excluding versions that were marked as the result of a minor edit (e.g., the correction of 
spelling errors etc.).\nThis yielded a total of 892,255 documents with 13,976,915 versions, having a mean (\u00b5) of 15.67 versions per document at a standard deviation (\u03c3) of 59.18.\nWe built a time-travel query workload using the query log temporarily made available recently by AOL Research as follows--we first extracted the 300 most frequent keyword queries that yielded a result click on a Wikipedia article (e.g., \"french revolution\", \"hurricane season 2005\", \"da vinci code\" etc.).\nThe queries thus extracted contained a total of 422 distinct terms.\nFor each extracted query, we randomly picked a time point for each month covered by the dataset.\nThis resulted in a total of 18,000 (= 300 x 60) time-travel queries.\nThe second dataset used in our experiments was based on a subset of the European Archive [13], containing weekly crawls of 11 .gov.uk websites throughout the years 2004 and 2005, amounting to close to 2 TBytes of raw data.\nWe filtered out documents not belonging to MIME-types text\/plain and text\/html, to obtain a dataset that totals 0.4 TBytes and which we refer to as UKGOV in the rest of the paper.\nThis included a total of 502,617 documents with 8,687,108 versions (\u00b5 = 17.28 and \u03c3 = 13.79).\nWe built a corresponding query workload as mentioned before, this time choosing keyword queries that led to a site in the .gov.uk domain (e.g., \"minimum wage\", \"inheritance tax\", \"citizenship ceremony dates\" etc.), and randomly sampling a time point for every month within the two-year period spanned by the dataset.\nThus, we obtained a total of 7,200 (= 300 x 24) time-travel queries for the UKGOV dataset.\nIn total, 522 terms appear in the extracted queries.\nThe collection statistics (i.e., N and avdl) and term statistics (i.e., DF) were computed at monthly granularity for both datasets.\n7.2 Impact of Temporal Coalescing\nOur first set of experiments is aimed at evaluating the approximate temporal coalescing technique, described in 
Section 5, in terms of index-size reduction and its effect on the result quality.\nFor both the WIKI and UKGOV datasets, we compare temporally coalesced indexes for different values of the error threshold \u03b5, computed using Algorithm 1, with the non-coalesced index as a baseline.\nTable 1: Index sizes for non-coalesced index (-) and coalesced indexes for different values of \u03b5\nTable 1 summarizes the index sizes measured as the total number of postings.\nAs these results demonstrate, approximate temporal coalescing is highly effective in reducing index size.\nEven a small threshold value, e.g., \u03b5 = 0.01, has a considerable effect by reducing the index size almost by an order of magnitude.\nNote that on the UKGOV dataset, even accurate coalescing (\u03b5 = 0) manages to reduce the index size to less than 38% of the original size.\nIndex size continues to decrease on both datasets, as we increase the value of \u03b5.\nHow does the reduction in index size affect the query results?\nIn order to evaluate this aspect, we compared the top-k results computed using a coalesced index against the ground-truth result obtained from the original index, for different cutoff levels k. 
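As an illustration of how such coalesced indexes can be produced, here is a minimal runnable sketch of the single-pass, sliding-window coalescing of Algorithm 1 for one (term, document) posting sequence. The harmonic-mean representative used for optrep (it equalizes the relative error at the two payload extremes) and the tuple layout are assumptions for illustration, not the paper's exact implementation.

```python
def optrep(pmin, pmax):
    # Representative for payloads in [pmin, pmax]; the harmonic mean
    # equalizes the relative error at both extremes (illustrative choice).
    return 2.0 * pmin * pmax / (pmin + pmax)

def errrel(p, rep):
    # Relative error between a true payload p and its representative rep.
    return abs(p - rep) / abs(p)

def coalesce(postings, eps):
    """One pass over temporally adjacent postings (payload, t_begin, t_end)
    of a single (term, document) pair; merges maximal runs whose payloads
    stay within relative error eps of a common representative."""
    out = []
    pmin = pmax = p = postings[0][0]
    tb, te = postings[0][1], postings[0][2]
    for pj, tj, tj1 in postings[1:]:
        nmin, nmax = min(pmin, pj), max(pmax, pj)
        cand = optrep(nmin, nmax)
        if errrel(nmin, cand) <= eps and errrel(nmax, cand) <= eps:
            pmin, pmax, p, te = nmin, nmax, cand, tj1  # extend current run
        else:
            out.append((p, tb, te))                    # emit coalesced posting
            pmin = pmax = p = pj
            tb, te = tj, tj1
    out.append((p, tb, te))                            # flush the final run
    return out
```

The pass is O(n) in the number of input postings, matching the sliding-window bound stated in Section 5.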
Let Gk and Ck be the top-k documents from the ground-truth result and from the coalesced index respectively.\nWe used the following two measures for comparison: (i) Relative Recall at cutoff level k (RR@k), which measures the overlap between Gk and Ck, ranges in [0, 1], and is defined as RR@k = | Gk \u2229 Ck | \/ k. (ii) Kendall's \u03c4 (see [7, 14] for a detailed definition) at cutoff level k (KT@k), measuring the agreement between two results in the relative order of items in Gk \u2229 Ck, with value 1 (or -1) indicating total agreement (or disagreement).\nFigure 3 plots, for cutoff levels 10 and 100, the mean of RR@k and KT@k along with 5% and 95% percentiles, for different values of the threshold \u03b5 starting from 0.01.\nFigure 3: Relative recall and Kendall's \u03c4 observed on coalesced indexes for different values of \u03b5\nNote that for \u03b5 = 0, results coincide with those obtained by the original index, and hence are omitted from the graph.\nIt is reassuring to see from these results that approximate temporal coalescing induces minimal disruption to the query results, since RR@k and KT@k are within reasonable limits.\nFor \u03b5 = 0.01, the smallest value of \u03b5 in our experiments, RR@100 for WIKI is 0.98, indicating that the results are almost indistinguishable from those obtained through the original index.\nEven the relative order of these common results is quite stable, as the mean KT@100 is close to 0.95.\nFor the extreme value of \u03b5 = 0.5, which results in an index size of just 2.35% of the original, the RR@100 and KT@100 are about 0.8 and 0.6 respectively.\nOn the relatively less dynamic UKGOV dataset (as can be seen from the \u03c3 values above), results were even better, with high values of RR and KT seen throughout the spectrum of \u03b5 values for both cutoff values.\n7.3 Sublist Materialization\nWe now turn our attention towards evaluating the sublist materialization techniques introduced in Section 6.\nFor both datasets, we started with the coalesced index produced by a moderate threshold 
setting of \u03b5 = 0.10.\nIn order to reduce the computational effort, boundaries of elementary time intervals were rounded to day granularity before computing the sublist materializations.\nHowever, note that the postings in the materialized sublists still retain their original timestamps.\nFor a comparative evaluation of the four approaches--Popt, Sopt, PG, and SB--we measure space and performance as follows.\nThe required space S (M), as defined earlier, is equal to the total number of postings in the materialized sublists.\nTo assess performance we compute the expected processing cost (EPC) for all terms in the respective query workload assuming a uniform probability distribution among query time-points.\nWe report the mean EPC, as well as the 5%- and 95%-percentiles.\nIn other words, the mean EPC reflects the expected length of the index list (in terms of index postings) that needs to be scanned for a random time point and a random term from the query workload.\nThe Sopt and Popt approaches are, by their definition, parameter-free.\nFor the PG approach, we varied its parameter \u03b3, which limits the maximal performance degradation, between 1.0 and 3.0.\nAnalogously, for the SB approach the parameter \u03ba, as an upper-bound on the allowed space blowup, was varied between 1.0 and 3.0.\nSolutions for the SB approach were obtained running simulated annealing for R = 50,000 rounds.\nTable 2 lists the obtained space and performance figures.\nNote that EPC values are smaller on WIKI than on UKGOV, since terms in the query workload employed for WIKI are relatively rarer in the corpus.\nBased on the depicted results, we make the following key observations.\ni) As expected, Popt achieves optimal performance at the cost of an enormous space consumption.\nSopt, to the contrary, while consuming an optimal amount of space, provides only poor expected processing cost.\nThe PG and SB methods, for different values of their respective parameter, produce solutions whose space and 
performance lie in between the extremes that Popt and Sopt represent.\nii) For the PG method we see that for an acceptable performance degradation of only 10% (i.e., \u03b3 = 1.10) the required space drops by more than one order of magnitude in comparison to Popt on both datasets.\niii) The SB approach achieves close-to-optimal performance on both datasets, if allowed to consume at most three times the optimal amount of space (i.e., \u03ba = 3.0), which on our datasets still corresponds to a space reduction over Popt by more than one order of magnitude.\nWe also measured wall-clock times on a sample of the queries, with results indicating improvements in execution time by up to a factor of 12.\n8.\nCONCLUSIONS\nIn this work we have developed an efficient solution for time-travel text search over temporally versioned document collections.\nExperiments on two real-world datasets showed that a combination of the proposed techniques can reduce index size by up to an order of magnitude while achieving nearly optimal performance and highly accurate results.\nThe present work opens up many interesting questions for future research, e.g.: How can we even further improve performance by applying (and possibly extending) encoding, compression, and skipping techniques [35]?\nHow can we extend the approach for queries q [tb, te] specifying a time interval instead of a time point?\nHow can the described time-travel text search functionality enable or speed up text mining along the time axis (e.g., tracking sentiment changes in customer opinions)?","keyphrases":["time machin","text search","version document collect","web archiv","time-travel text search","invert file index","tempor search","approxim tempor coalesc","collabor author environ","timestamp inform feed","document-content overlap","index rang-base valu","open sourc search-engin nutch","static indexprun techniqu","valid time-interv","sublist materi","tempor text 
index"],"prmu":["P","P","P","P","P","P","P","P","U","U","U","M","U","M","U","U","R"]} {"id":"H-50","title":"An Outranking Approach for Rank Aggregation in Information Retrieval","abstract":"Research in Information Retrieval usually shows performance improvement when many sources of evidence are combined to produce a ranking of documents (e.g., texts, pictures, sounds, etc.). In this paper, we focus on the rank aggregation problem, also called data fusion problem, where rankings of documents, searched into the same collection and provided by multiple methods, are combined in order to produce a new ranking. In this context, we propose a rank aggregation method within a multiple criteria framework using aggregation mechanisms based on decision rules identifying positive and negative reasons for judging whether a document should get a better rank than another. We show that the proposed method deals well with the Information Retrieval distinctive features. Experimental results are reported showing that the suggested method performs better than the well-known CombSUM and CombMNZ operators.","lvl-1":"An Outranking Approach for Rank Aggregation in Information Retrieval Mohamed Farah Lamsade, Paris Dauphine University Place du Mal de Lattre de Tassigny 75775 Paris Cedex 16, France farah@lamsade.dauphine.fr Daniel Vanderpooten Lamsade, Paris Dauphine University Place du Mal de Lattre de Tassigny 75775 Paris Cedex 16, France vdp@lamsade.dauphine.fr ABSTRACT Research in Information Retrieval usually shows performance improvement when many sources of evidence are combined to produce a ranking of documents (e.g., texts, pictures, sounds, etc.).\nIn this paper, we focus on the rank aggregation problem, also called data fusion problem, where rankings of documents, searched into the same collection and provided by multiple methods, are combined in order to produce a new ranking.\nIn this context, we propose a rank aggregation method within a multiple criteria framework using 
aggregation mechanisms based on decision rules identifying positive and negative reasons for judging whether a document should get a better rank than another.\nWe show that the proposed method deals well with the Information Retrieval distinctive features.\nExperimental results are reported showing that the suggested method performs better than the well-known CombSUM and CombMNZ operators.\nCategories and Subject Descriptors: H.3.3 [Information Systems]: Information Search and Retrieval - Retrieval models.\nGeneral Terms: Algorithms, Measurement, Experimentation, Performance, Theory.\n1.\nINTRODUCTION A wide range of current Information Retrieval (IR) approaches are based on various search models (Boolean, Vector Space, Probabilistic, Language, etc. [2]) in order to retrieve relevant documents in response to a user request.\nThe result lists produced by these approaches depend on the exact definition of the relevance concept.\nRank aggregation approaches, also called data fusion approaches, consist in combining these result lists in order to produce a new and hopefully better ranking.\nSuch approaches give rise to metasearch engines in the Web context.\nWe consider, in the following, cases where only ranks are available and no other additional information is provided such as the relevance scores.\nThis corresponds indeed to the reality, where only ordinal information is available.\nData fusion is also relevant in other contexts, such as when the user writes several queries of his\/her information need (e.g., a boolean query and a natural language query) [4], or when many document surrogates are available [16].\nSeveral studies argued that rank aggregation has the potential of combining effectively all the various sources of evidence considered in various input methods.\nFor instance, experiments carried out in [16], [30], [4] and [19] showed that documents which appear in the lists of the majority of the input methods are more likely to be relevant.\nMoreover, Lee 
[19] and Vogt and Cottrell [31] found that various retrieval approaches often return very different irrelevant documents, but many of the same relevant documents.\nBartell et al. [3] also found that rank aggregation methods improve the performances w.r.t. those of the input methods, even when some of them have weak individual performances.\nThese methods also tend to smooth out biases of the input methods according to Montague and Aslam [22].\nData fusion has recently been proved to improve performances for both the ad-hoc retrieval and categorization tasks within the TREC genomics track in 2005 [1].\nThe rank aggregation problem was addressed in various fields such as i) in social choice theory which studies voting algorithms which specify winners of elections or winners of competitions in tournaments [29], ii) in statistics when studying correlation between rankings, iii) in distributed databases when results from different databases must be combined [12], and iv) in collaborative filtering [23].\nMost current rank aggregation methods consider each input ranking as a permutation over the same set of items.\nThey also give rigid interpretation to the exact ranking of the items.\nBoth of these assumptions are rather not valid in the IR context, as will be shown in the following sections.\nThe remaining of the paper is organized as follows.\nWe first review current rank aggregation methods in Section 2.\nThen we outline the specificities of the data fusion problem in the IR context (Section 3).\nIn Section 4, we present a new aggregation method which is proven to best fit the IR context.\nExperimental results are presented in Section 5 and conclusions are provided in a final section.\n2.\nRELATED WORK As pointed out by Riker [25], we can distinguish two families of rank aggregation methods: positional methods which assign scores to items to be ranked according to the ranks they receive and majoritarian methods which are based on pairwise comparisons of items to be 
ranked.\nThese two families of methods find their roots in the pioneering works of Borda [5] and Condorcet [7], respectively, in the social choice literature.\n2.1 Preliminaries We first introduce some basic notation to present the rank aggregation methods in a uniform way.\nLet $D = \{d_1, d_2, \ldots, d_{n_d}\}$ be a set of $n_d$ documents.\nA list or a ranking $\succ_j$ is an ordering defined on $D_j \subseteq D$ ($j = 1, \ldots, n$).\nThus, $d_i \succ_j d_{i'}$ means $d_i$ 'is ranked better than' $d_{i'}$ in $\succ_j$.\nWhen $D_j = D$, $\succ_j$ is said to be a full list.\nOtherwise, it is a partial list.\nIf $d_i$ belongs to $D_j$, $r^j_i$ denotes the rank or position of $d_i$ in $\succ_j$.\nWe assume that the best answer (document) is assigned position 1 and the worst one is assigned position $|D_j|$.\nLet $\mathcal{D}$ be the set of all permutations on $D$ or on all subsets of $D$.\nA profile is an $n$-tuple of rankings $PR = (\succ_1, \succ_2, \ldots, \succ_n)$.\nRestricting $PR$ to the rankings containing document $d_i$ defines $PR_i$.\nWe also call the number of rankings which contain document $d_i$ the rank hits of $d_i$ [19].\nThe rank aggregation or data fusion problem consists of finding a ranking function or mechanism $\Psi$ (also called a social welfare function in the social choice terminology) defined by $\Psi : \mathcal{D}^n \to \mathcal{D}$, $PR = (\succ_1, \succ_2, \ldots, \succ_n) \mapsto \sigma = \Psi(PR)$, where $\sigma$ is called a consensus ranking.\n2.2 Positional Methods 2.2.1 Borda Count This method [5] first assigns the score $\sum_{j=1}^{n} r^j_i$ to each document $d_i$.\nDocuments are then ranked by increasing order of this score, breaking ties, if any, arbitrarily.\n2.2.2 Linear Combination Methods This family of methods basically combines scores of documents.\nWhen used for the rank aggregation problem, ranks are assumed to be scores or performances to be combined using aggregation operators such as the weighted sum or some variation of it [3, 31, 17, 28].\nFor instance, Callan et al. 
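For illustration, the Borda count can be sketched in a few lines; the dictionary representation of a ranking (document id mapped to position, 1 = best) is an assumption of this sketch, not notation from the paper:

```python
def borda_count(rankings):
    """Borda count over full lists: sum each document's positions
    across all rankings and sort by increasing total score."""
    scores = {}
    for ranking in rankings:
        for doc, pos in ranking.items():
            scores[doc] = scores.get(doc, 0) + pos
    # ties, if any, are broken arbitrarily by the sort
    return sorted(scores, key=lambda d: scores[d])

# Three full lists over the same three documents
lists = [
    {"d1": 1, "d2": 2, "d3": 3},
    {"d1": 2, "d2": 1, "d3": 3},
    {"d1": 1, "d2": 3, "d3": 2},
]
print(borda_count(lists))  # ['d1', 'd2', 'd3'] (scores 4, 6, 8)
```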
[6] used the inference networks model [30] to combine rankings.\nFox and Shaw [15] proposed several combination strategies: CombSUM, CombMIN, CombMAX, CombANZ and CombMNZ.\nThe first three operators correspond to the sum, min and max operators, respectively.\nCombANZ and CombMNZ divide and multiply the CombSUM score by the rank hits, respectively.\nIt is shown in [19] that the CombSUM and CombMNZ operators perform better than the others.\nMetasearch engines such as SavvySearch and MetaCrawler use the CombSUM strategy to fuse rankings.\n2.2.3 Footrule Optimal Aggregation In this method, a consensus ranking minimizes the Spearman footrule distance from the input rankings [21].\nFormally, given two full lists $\succ_j$ and $\succ_{j'}$, this distance is given by $F(\succ_j, \succ_{j'}) = \sum_{i=1}^{n_d} |r^j_i - r^{j'}_i|$.\nIt extends to several lists as follows.\nGiven a profile $PR$ and a consensus ranking $\sigma$, the Spearman footrule distance of $\sigma$ to $PR$ is given by $F(\sigma, PR) = \sum_{j=1}^{n} F(\sigma, \succ_j)$.\nCook and Kress [8] proposed a similar method which consists in optimizing the distance $D(\succ_j, \succ_{j'}) = \frac{1}{2} \sum_{i,i'=1}^{n_d} |r^j_{i,i'} - r^{j'}_{i,i'}|$, where $r^j_{i,i'} = r^j_i - r^j_{i'}$.\nThis formulation has the advantage that it considers the intensity of preferences.\n2.2.4 Probabilistic Methods These methods assume that the performance of the input methods on a number of training queries is indicative of their future performance.\nDuring the training process, probabilities of relevance are calculated.\nFor subsequent queries, documents are ranked based on these probabilities.\nFor instance, in [20], each input ranking $\succ_j$ is divided into a number of segments, and the conditional probability of relevance ($R$) of each document $d_i$, depending on the segment $k$ it occurs in, is computed, i.e. 
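A sketch of how CombSUM and CombMNZ behave when only ranks are available: the rank-to-score conversion nd - position + 1 used here is one common convention and an assumption of this sketch, not a prescription from the paper.

```python
def comb_sum_mnz(rankings, nd):
    """CombSUM sums a document's scores over the rankings containing it;
    CombMNZ multiplies that sum by the document's rank hits.
    Positions are mapped to scores as nd - position + 1 (higher = better)."""
    sums, hits = {}, {}
    for ranking in rankings:
        for doc, pos in ranking.items():
            sums[doc] = sums.get(doc, 0) + (nd - pos + 1)
            hits[doc] = hits.get(doc, 0) + 1
    comb_sum = sums
    comb_mnz = {doc: sums[doc] * hits[doc] for doc in sums}
    return comb_sum, comb_mnz

# Two partial lists over a collection of nd = 4 documents
r1 = {"d1": 1, "d2": 2}
r2 = {"d2": 1, "d3": 2}
s, m = comb_sum_mnz([r1, r2], nd=4)
print(s)  # {'d1': 4, 'd2': 7, 'd3': 3}
print(m)  # {'d1': 4, 'd2': 14, 'd3': 3}
```

Note how CombMNZ boosts d2, the only document present in both lists, which reflects the rank-hits intuition discussed above.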
$prob(R|d_i, k, \succ_j)$.\nFor subsequent queries, the score of each document $d_i$ is given by $\sum_{j=1}^{n} \frac{prob(R|d_i, k, \succ_j)}{k}$.\nLe Calve and Savoy [18] suggest using a logistic regression approach for combining scores.\nTraining data is needed to infer the model parameters.\n2.3 Majoritarian Methods 2.3.1 Condorcet Procedure The original Condorcet rule [7] specifies that a winner of the election is any item that beats or ties with every other item in a pairwise contest.\nFormally, let $C(d_i \sigma d_{i'}) = \{\succ_j \in PR : d_i \succ_j d_{i'}\}$ be the coalition of rankings that are concordant with establishing $d_i \sigma d_{i'}$, i.e. with the proposition $d_i$ 'should be ranked better than' $d_{i'}$ in the final ranking $\sigma$.\n$d_i$ beats or ties with $d_{i'}$ iff $|C(d_i \sigma d_{i'})| \ge |C(d_{i'} \sigma d_i)|$.\nThe repetitive application of the Condorcet algorithm can produce a ranking of items in a natural way: select the Condorcet winner, remove it from the lists, and repeat the previous two steps until there are no more documents to rank.\nSince a Condorcet winner does not always exist, variations of the Condorcet procedure have been developed within multiple criteria decision aid theory, with methods such as ELECTRE [26].\n2.3.2 Kemeny Optimal Aggregation As in section 2.2.3, a consensus ranking minimizes a geometric distance from the input rankings, where the Kendall tau distance is used instead of the Spearman footrule distance.\nFormally, given two full lists $\succ_j$ and $\succ_{j'}$, the Kendall tau distance is given by $K(\succ_j, \succ_{j'}) = |\{(d_i, d_{i'}) : i < i', r^j_i < r^j_{i'}, r^{j'}_i > r^{j'}_{i'}\}|$, i.e. the number of pairwise disagreements between the two lists.\nIt is easy to show that the consensus ranking corresponds to the geometric median of the input rankings and that the Kemeny optimal aggregation problem corresponds to the minimum feedback edge set problem.\n2.3.3 Markov Chain Methods Markov chains (MCs) have been used by Dwork et al. 
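The Kendall tau distance between two full lists, defined above as the number of pairwise disagreements, can be sketched as follows (the dictionary representation of rankings is an illustrative assumption):

```python
def kendall_tau(r1, r2):
    """Count pairs of documents ordered one way in r1 and the
    opposite way in r2 (r1, r2: dicts doc -> position, same key set)."""
    docs = sorted(r1)
    return sum(
        1
        for a in range(len(docs))
        for b in range(a + 1, len(docs))
        if (r1[docs[a]] - r1[docs[b]]) * (r2[docs[a]] - r2[docs[b]]) < 0
    )

identical = {"d1": 1, "d2": 2, "d3": 3}
reverse = {"d1": 3, "d2": 2, "d3": 1}
print(kendall_tau(identical, identical))  # 0
print(kendall_tau(identical, reverse))    # 3 (all pairs disagree)
```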
[11] as a 'natural' method to obtain a consensus ranking, where states correspond to the documents to be ranked and the transition probabilities vary depending on the interpretation of the transition event.\nIn the same reference, the authors proposed four specific MCs, and experimental testing showed that the following MC is the best performing one (see also [24]): • MC4: move from the current state $d_i$ to the next state $d_{i'}$ by first choosing a document $d_{i'}$ uniformly from $D$.\nIf, for the majority of the rankings, we have $r^j_{i'} \le r^j_i$, then move to $d_{i'}$, else stay in $d_i$.\nThe consensus ranking corresponds to the stationary distribution of MC4.\n3.\nSPECIFICITIES OF THE RANK AGGREGATION PROBLEM IN THE IR CONTEXT 3.1 Limited Significance of the Rankings The exact positions of documents in one input ranking have limited significance and should not be overemphasized.\nFor instance, when three relevant documents occupy the first three positions, any permutation of these three documents has the same value.\nIndeed, in the IR context, the complete order provided by an input method may hide ties.\nIn this case, we call such rankings semiorders.\nThis was outlined in [13] as the problem of aggregation with ties.\nIt is therefore important to build the consensus ranking on robust information: • Documents with nearby positions in $\succ_j$ are likely to have similar interest or relevance.\nThus a slight perturbation of the initial ranking is meaningless.\n• Assuming that document $d_i$ is better ranked than document $d_{i'}$ in a ranking $\succ_j$, $d_i$ is more likely to be definitively more relevant than $d_{i'}$ in $\succ_j$ as the number of intermediate positions between $d_i$ and $d_{i'}$ increases.\n3.2 Partial Lists In real world applications, such as metasearch engines, the rankings provided by the input methods are often partial lists.\nThis was outlined in [14] as the problem of having to merge top-k results from various input lists.\nFor instance, in the experiments carried out by Dwork et 
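A minimal sketch of MC4, assuming full lists: the transition matrix follows the rule above, the stationary distribution is approximated by power iteration, and documents are ordered by decreasing stationary probability (the convergence settings are illustrative assumptions).

```python
def mc4(rankings, docs, iters=500):
    """MC4: from state d, a candidate d2 is drawn uniformly; the chain
    moves to d2 iff a majority of rankings place d2 at least as well as d.
    Documents are returned by decreasing stationary probability."""
    n = len(docs)
    index = {d: i for i, d in enumerate(docs)}
    P = [[0.0] * n for _ in range(n)]
    for d in docs:
        for d2 in docs:
            if d == d2:
                continue
            better = sum(1 for r in rankings if r[d2] <= r[d])
            if better > len(rankings) / 2:
                P[index[d]][index[d2]] = 1.0 / n
        P[index[d]][index[d]] = 1.0 - sum(P[index[d]])
    # power iteration for the stationary distribution
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[a] * P[a][b] for a in range(n)) for b in range(n)]
    return sorted(docs, key=lambda d: -pi[index[d]])

rankings = [
    {"d1": 1, "d2": 2, "d3": 3},
    {"d1": 1, "d2": 3, "d3": 2},
    {"d2": 1, "d1": 2, "d3": 3},
]
print(mc4(rankings, ["d1", "d2", "d3"]))  # d1 comes first: a majority prefers it
```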
al. [11], the authors found that among the top 100 best documents of 7 input search engines, 67% of the documents were present in only one search engine, whereas less than two documents were present in all the search engines.\nRank aggregation of partial lists raises four major difficulties, which we state hereafter, proposing for each of them various working assumptions: 1.\nPartial lists can have various lengths, which can favour long lists.\nWe thus consider the following two working hypotheses: H1 k : We only consider the top k best documents from each input ranking.\nH1 all : We consider all the documents from each input ranking.\n2.\nSince there are different documents in the input rankings, we must decide which documents should be kept in the consensus ranking.\nTwo working hypotheses are therefore considered: H2 k : We only consider documents which are present in at least k input rankings (k > 1).\nH2 all : We consider all the documents which are ranked in at least one input ranking.\nHereafter, we call the documents which will be retained in the consensus ranking candidate documents, and the documents that will be excluded from the consensus ranking excluded documents.\nWe also call a candidate document which is missing in one or more rankings a missing document.\n3.\nSome candidate documents are missing documents in some input rankings.\nThe main reasons for a document to be missing are that it was not indexed, or that it was indexed but deemed irrelevant; usually this information is not available.\nWe consider the following two working hypotheses: H3 yes : Each missing document in each $\succ_j$ is assigned a position.\nH3 no : No assumption is made; that is, each missing document is considered neither better nor worse than any other document.\n4.\nWhen assumption H2 k holds, each input ranking may contain documents which will not be considered in the consensus ranking.\nRegarding the positions of the candidate documents, we can consider the following working hypotheses: H4 init : The initial 
positions of candidate documents are kept in each input ranking.\nH4 new : Candidate documents receive new positions in each input ranking, after discarding excluded ones.\nIn the IR context, rank aggregation methods need to decide, more or less explicitly, which assumptions to retain w.r.t. the above-mentioned difficulties.\n4.\nOUTRANKING APPROACH FOR RANK AGGREGATION 4.1 Presentation Positional methods implicitly consider that the positions of the documents in the input rankings are scores, thus giving a cardinal meaning to ordinal information.\nThis constitutes a strong assumption that is questionable, especially when the input rankings have different lengths.\nMoreover, for positional methods, assumptions H3 and H4, which are often arbitrary, have a strong impact on the results.\nFor instance, let us consider an input ranking of 500 documents out of 1000 candidate documents.\nWhether we assign to each of the missing documents the position 1, 501, 750 or 1000 (corresponding to variations of H3 yes) will give rise to very different results, especially regarding the top of the consensus ranking.\nMajoritarian methods do not suffer from the above-mentioned drawbacks of the positional methods, since they build consensus rankings exploiting only the ordinal information contained in the input rankings.\nNevertheless, they suppose that such rankings are complete orders, ignoring that they may hide ties.\nTherefore, majoritarian methods base consensus rankings on illusory discriminant information rather than on less discriminant but more robust information.\nTrying to overcome the limits of current rank aggregation methods, we found that outranking approaches, which were initially used for multiple criteria aggregation problems [26], can also be used for the rank aggregation purpose, where each ranking plays the role of a criterion.\nTherefore, in order to decide whether a document $d_i$ should be ranked better than $d_{i'}$ in the consensus ranking $\sigma$, the two following 
conditions should be met: • a concordance condition, which ensures that a majority of the input rankings are concordant with $d_i \sigma d_{i'}$ (majority principle).\n• a discordance condition, which ensures that none of the discordant input rankings strongly refutes $d_i \sigma d_{i'}$ (respect of minorities principle).\nFormally, the concordance coalition with $d_i \sigma d_{i'}$ is $C_{s_p}(d_i \sigma d_{i'}) = \{\succ_j \in PR : r^j_i \le r^j_{i'} - s_p\}$, where $s_p$ is a preference threshold: the variation of document positions (whether absolute or relative to the ranking length) which draws the boundary between an indifference and a preference situation between documents.\nThe discordance coalition with $d_i \sigma d_{i'}$ is $D_{s_v}(d_i \sigma d_{i'}) = \{\succ_j \in PR : r^j_i \ge r^j_{i'} + s_v\}$, where $s_v$ is a veto threshold: the variation of document positions (whether absolute or relative to the ranking length) which draws the boundary between a weak and a strong opposition to $d_i \sigma d_{i'}$.\nDepending on the exact definition of the preceding concordance and discordance coalitions, leading to the definition of some decision rules, several outranking relations can be defined.\nThey can be more or less demanding depending on i) the values of the thresholds $s_p$ and $s_v$, ii) the importance or minimal size $c_{min}$ required for the concordance coalition, and iii) the importance or maximal size $d_{max}$ allowed for the discordance coalition.\nA generic outranking relation can thus be defined as follows: $d_i S^{(s_p, s_v, c_{min}, d_{max})} d_{i'} \Leftrightarrow |C_{s_p}(d_i \sigma d_{i'})| \ge c_{min}$ AND $|D_{s_v}(d_i \sigma d_{i'})| \le d_{max}$.\nThis expression defines a family of nested outranking relations, since $S^{(s_p, s_v, c_{min}, d_{max})} \subseteq S^{(s'_p, s'_v, c'_{min}, d'_{max})}$ when $c_{min} \ge c'_{min}$ and/or $d_{max} \le d'_{max}$ and/or $s_p \ge s'_p$ and/or $s_v \le s'_v$.\nThis expression also generalizes the majority rule, which corresponds to the particular relation $S^{(0, \infty, \frac{n}{2}, n)}$.\nIt also satisfies important properties of rank aggregation methods, called neutrality, 
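The generic outranking relation can be sketched with absolute thresholds; restricting attention to the rankings that contain both documents is an assumption this sketch makes for partial lists:

```python
def outranks(d, d2, rankings, sp, sv, cmin, dmax):
    """d S(sp, sv, cmin, dmax) d2 iff at least cmin rankings place d
    better than d2 by at least sp positions (concordance), and at most
    dmax rankings place d worse than d2 by sv or more positions (discordance)."""
    both = [r for r in rankings if d in r and d2 in r]
    concordant = sum(1 for r in both if r[d] <= r[d2] - sp)
    discordant = sum(1 for r in both if r[d] >= r[d2] + sv)
    return concordant >= cmin and discordant <= dmax

rankings = [
    {"a": 1, "b": 3},
    {"a": 2, "b": 1},
    {"a": 1, "b": 4},
]
# sp=1: rankings 1 and 3 are concordant with a over b; no ranking opposes by sv=3
print(outranks("a", "b", rankings, sp=1, sv=3, cmin=2, dmax=0))  # True
print(outranks("b", "a", rankings, sp=1, sv=3, cmin=2, dmax=0))  # False
```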
Pareto-optimality, Condorcet property and Extended Condorcet property, in the social choice literature [29].\nOutranking relations are not necessarily transitive and do not necessarily correspond to rankings, since directed cycles may exist.\nTherefore, we need specific procedures in order to derive a consensus ranking.\nWe propose the following procedure, which finds its roots in [27].\nIt consists in partitioning the set of documents into r ranked classes.\nEach class $C_h$ contains documents with the same relevance and results from the application of all relations (if possible) to the set of documents remaining after the previous classes have been computed.\nDocuments within the same equivalence class are ranked arbitrarily.\nFormally, let • $R$ be the set of candidate documents for a query, • $S^1, S^2, \ldots$ be a family of nested outranking relations, • $F^k(d_i, E) = |\{d_{i'} \in E : d_i S^k d_{i'}\}|$ be the number of documents in $E$ ($E \subseteq R$) that could be considered 'worse' than $d_i$ according to relation $S^k$, • $f^k(d_i, E) = |\{d_{i'} \in E : d_{i'} S^k d_i\}|$ be the number of documents in $E$ that could be considered 'better' than $d_i$ according to $S^k$, • $s^k(d_i, E) = F^k(d_i, E) - f^k(d_i, E)$ be the qualification of $d_i$ in $E$ according to $S^k$.\nEach class $C_h$ results from a distillation process.\nIt corresponds to the last distillate of a series of sets $E_0 \supseteq E_1 \supseteq \ldots$ where $E_0 = R \setminus (C_1 \cup \ldots \cup C_{h-1})$ and $E_k$ is a reduced subset of $E_{k-1}$ resulting from the application of the following procedure: 1.\ncompute for each $d_i \in E_{k-1}$ its qualification according to $S^k$, i.e. 
$s^k(d_i, E_{k-1})$, 2.\ndefine $s_{max} = \max_{d_i \in E_{k-1}} \{s^k(d_i, E_{k-1})\}$, then 3.\n$E_k = \{d_i \in E_{k-1} : s^k(d_i, E_{k-1}) = s_{max}\}$ When one outranking relation is used, the distillation process stops after the first application of the previous procedure, i.e., $C_h$ corresponds to distillate $E_1$.\nWhen different outranking relations are used, the distillation process stops when all the pre-defined outranking relations have been used or when $|E_k| = 1$.\n4.2 Illustrative Example This section illustrates the concepts and procedures of section 4.1.\nLet us consider a set of candidate documents $R = \{d_1, d_2, d_3, d_4, d_5\}$.\nThe following table gives a profile $PR = (\succ_1, \succ_2, \succ_3, \succ_4)$ of different rankings of the documents of $R$.\n
Table 1: Rankings of documents

  $r^j_i$ | $\succ_1$ | $\succ_2$ | $\succ_3$ | $\succ_4$
  d1      |     1     |     3     |     1     |     5
  d2      |     2     |     1     |     3     |     3
  d3      |     3     |     2     |     2     |     1
  d4      |     4     |     4     |     5     |     2
  d5      |     5     |     5     |     4     |     4

Let us suppose that the preference and veto thresholds are set to 1 and 4 respectively, and that the concordance and discordance thresholds are set to 2 and 1 respectively.\nThe following tables give the concordance, discordance and outranking matrices.\nEach entry $c_{s_p}(d_i, d_{i'})$ ($d_{s_v}(d_i, d_{i'})$) in the concordance (discordance) matrix gives the number of rankings that are concordant (discordant) with $d_i \sigma d_{i'}$, i.e. 
$c_{s_p}(d_i, d_{i'}) = |C_{s_p}(d_i \sigma d_{i'})|$ and $d_{s_v}(d_i, d_{i'}) = |D_{s_v}(d_i \sigma d_{i'})|$.\n
Table 2: Computation of the outranking relation

Concordance matrix:
        d1  d2  d3  d4  d5
  d1     -   2   2   3   3
  d2     2   -   2   3   4
  d3     2   2   -   4   4
  d4     1   1   0   -   3
  d5     1   0   0   1   -

Discordance matrix:
        d1  d2  d3  d4  d5
  d1     -   0   1   0   0
  d2     0   -   0   0   0
  d3     0   0   -   0   0
  d4     1   0   0   -   0
  d5     1   1   0   0   -

Outranking matrix (S1):
        d1  d2  d3  d4  d5
  d1     -   1   1   1   1
  d2     1   -   1   1   1
  d3     1   1   -   1   1
  d4     0   0   0   -   1
  d5     0   0   0   0   -

For instance, the concordance coalition for the assertion $d_1 \sigma d_4$ is $C_1(d_1 \sigma d_4) = \{\succ_1, \succ_2, \succ_3\}$ and the discordance coalition for the same assertion is $D_4(d_1 \sigma d_4) = \emptyset$.\nTherefore, $c_1(d_1, d_4) = 3$, $d_4(d_1, d_4) = 0$ and $d_1 S^1 d_4$ holds.\nNotice that $F^k(d_i, R)$ ($f^k(d_i, R)$) is given by summing the values of the i-th row (column) of the outranking matrix.\nThe consensus ranking is obtained as follows: to get the first class $C_1$, we compute the qualifications of all the documents of $E_0 = R$ with respect to $S^1$.\nThey are respectively 2, 2, 2, -2 and -4.\nTherefore $s_{max}$ equals 2 and $C_1 = E_1 = \{d_1, d_2, d_3\}$.\nObserve that, if we had used a second outranking relation $S^2 (\supseteq S^1)$, these three documents could possibly have been discriminated.\nAt this stage, we remove the documents of $C_1$ from the outranking matrix and compute the next class $C_2$: we compute the new qualifications of the documents of $E_0 = R \setminus C_1 = \{d_4, d_5\}$.\nThey are respectively 1 and -1.\nSo $C_2 = E_1 = \{d_4\}$.\nThe last document $d_5$ is the only document of the last class $C_3$.\nThus, the consensus ranking is $\{d_1, d_2, d_3\} \to \{d_4\} \to \{d_5\}$.\n5.\nEXPERIMENTS AND RESULTS 5.1 Test Setting To facilitate empirical investigation of the proposed methodology, we developed a prototype metasearch engine that implements a version of our outranking approach for rank aggregation.\nIn this paper, we apply our approach to the Topic Distillation (TD) task of the TREC-2004 Web track [10].\nIn this task, there are 75 topics, for each of which only a short description is given.\nFor each query, we retained the rankings of the 
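The consensus ranking of the illustrative example can be reproduced end to end with a short script; this is a sketch using a single outranking relation S1, and the data structures (dicts mapping document to position) are illustrative assumptions:

```python
# Rankings of Table 1 (position 1 = best)
PR = [
    {"d1": 1, "d2": 2, "d3": 3, "d4": 4, "d5": 5},
    {"d1": 3, "d2": 1, "d3": 2, "d4": 4, "d5": 5},
    {"d1": 1, "d2": 3, "d3": 2, "d4": 5, "d5": 4},
    {"d1": 5, "d2": 3, "d3": 1, "d4": 2, "d5": 4},
]
SP, SV, CMIN, DMAX = 1, 4, 2, 1  # thresholds of section 4.2

def outranks(d, d2):
    conc = sum(1 for r in PR if r[d] <= r[d2] - SP)
    disc = sum(1 for r in PR if r[d] >= r[d2] + SV)
    return conc >= CMIN and disc <= DMAX

def distillate(docs):
    """Keep the documents of maximal qualification s = F - f within docs."""
    qual = {
        d: sum(outranks(d, e) for e in docs if e != d)
           - sum(outranks(e, d) for e in docs if e != d)
        for d in docs
    }
    smax = max(qual.values())
    return [d for d in docs if qual[d] == smax]

remaining, classes = ["d1", "d2", "d3", "d4", "d5"], []
while remaining:
    best = distillate(remaining)
    classes.append(best)
    remaining = [d for d in remaining if d not in best]
print(classes)  # [['d1', 'd2', 'd3'], ['d4'], ['d5']]
```

The script recovers the qualifications (2, 2, 2, -2, -4) and the consensus ranking stated in the example.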
10 best runs of the TD task, which are provided by TREC-2004 participating teams.\nThe performances of these runs are reported in Table 3.\n
Table 3: Performances of the 10 best runs of the TD task of TREC-2004

  Run Id        |  MAP   |  P@10  |  S@1   |  S@5   |  S@10
  uogWebCAU150  | 17.9%  | 24.9%  | 50.7%  | 77.3%  | 89.3%
  MSRAmixed1    | 17.8%  | 25.1%  | 38.7%  | 72.0%  | 88.0%
  MSRC04C12     | 16.5%  | 23.1%  | 38.7%  | 74.7%  | 80.0%
  humW04rdpl    | 16.3%  | 23.1%  | 37.3%  | 78.7%  | 90.7%
  THUIRmix042   | 14.7%  | 20.5%  | 21.3%  | 58.7%  | 74.7%
  UAmsT04MWScb  | 14.6%  | 20.9%  | 36.0%  | 66.7%  | 76.0%
  ICT04CIIS1AT  | 14.1%  | 20.8%  | 33.3%  | 64.0%  | 78.7%
  SJTUINCMIX5   | 12.9%  | 18.9%  | 29.3%  | 57.3%  | 72.0%
  MU04web1      | 11.5%  | 19.9%  | 33.3%  | 64.0%  | 76.0%
  MeijiHILw3    | 11.5%  | 15.3%  | 30.7%  | 54.7%  | 64.0%
  Average       | 14.7%  | 21.2%  | 34.9%  | 66.8%  | 78.94%

For each query, each run provides a ranking of about 1000 documents.\nThe number of documents retrieved by all these runs ranges from 543 to 5769.\nTheir average (median) number is 3340 (3386).\nIt is worth noting that we found distributions of the documents among the rankings similar to those reported in [11].\nFor evaluation, we used the standard 'trec_eval' tool, which is used by the TREC community to calculate the standard measures of system effectiveness: Mean Average Precision (MAP) and Success@n (S@n) for n = 1, 5 and 10.\nThe effectiveness of our approach is compared against some high performing official results from TREC-2004, as well as against some standard rank aggregation algorithms.\nIn the experiments, significance testing is mainly based on the Student's t statistic, computed on the basis of the MAP values of the compared runs.\nIn the tables of the following section, statistically significant differences are marked with an asterisk.\nValues between brackets in the first column of each table indicate the parameter value of the corresponding run.\n5.2 Results We carried out several series of runs in order to i) study performance variations of the outranking approach when tuning the parameters and working assumptions, ii) compare performances of the outranking approach vs standard 
rank aggregation strategies, and iii) check whether rank aggregation performs better than the best input rankings.\nWe set up our basic run mcm with the following parameters.\nWe considered that each input ranking is a complete order ($s_p = 0$) and that an input ranking strongly refutes $d_i \sigma d_{i'}$ when the difference between the two document positions is large enough ($s_v = 75\%$).\nPreference and veto thresholds are computed proportionally to the number of documents retained in each input ranking.\nThey may consequently vary from one ranking to another.\nIn addition, to accept the assertion $d_i \sigma d_{i'}$, we required that a majority of the rankings be concordant ($c_{min} = 50\%$) and that every input ranking can impose its veto ($d_{max} = 0$).\nConcordance and discordance thresholds are computed for each pair $(d_i, d_{i'})$ as a percentage of the input rankings of $PR_i \cap PR_{i'}$.\nThus, our choice of parameters leads to the definition of the outranking relation $S^{(0, 75\%, 50\%, 0)}$.\nTo test the run mcm, we chose the following assumptions.\nWe retained the top 100 best documents from each input ranking (H1 100), only considered documents which are present in at least half of the input rankings (H2 5) and assumed H3 no and H4 new.\nUnder these conditions, the number of successful documents was about 100 on average, and the computation time per query was less than one second.\nObviously, modifying the working assumptions should have a deeper impact on the performances than tuning our model parameters.\nThis was validated by preliminary experiments.\nThus, we hereafter begin by studying performance variation when different sets of assumptions are considered.\nAfterwards, we study the impact of tuning the parameters.\nFinally, we compare our model performances w.r.t. 
the input rankings as well as some standard data fusion algorithms.\n5.2.1 Impact of the Working Assumptions Table 4 summarizes the performance variation of the outranking approach under different working hypotheses.\n
Table 4: Impact of the working assumptions

  Run Id           |  MAP                |  S@1    |  S@5    |  S@10
  mcm              |  18.47%             |  41.33% |  81.33% |  86.67%
  mcm22 (H3 yes)   |  17.72% (-4.06%)    |  34.67% |  81.33% |  86.67%
  mcm23 (H4 init)  |  18.26% (-1.14%)    |  41.33% |  81.33% |  86.67%
  mcm24 (H1 all)   |  20.67% (+11.91%*)  |  38.66% |  80.00% |  86.66%
  mcm25 (H2 all)   |  21.68% (+17.38%*)  |  40.00% |  78.66% |  89.33%

In this table, we first show that run mcm22, in which the missing documents are all put at the same last position of each input ranking, leads to a performance drop w.r.t. run mcm.\nMoreover, S@1 moves from 41.33% to 34.67% (-16.11%).\nThis shows that several relevant documents which were initially put at the first position of the consensus ranking in mcm lose this first position but remain ranked in the top 5 documents, since S@5 did not change.\nWe also conclude that documents which have rather good positions in some input rankings are more likely to be relevant, even though they are missing in some other rankings.\nConsequently, when they are missing in some rankings, assigning worse ranks to these documents is harmful for performance.\nAlso, from Table 4, we found that the performances of runs mcm and mcm23 are similar.\nTherefore, the outranking approach is not sensitive to keeping the initial positions of candidate documents or recomputing them by discarding excluded ones.\nFrom the same Table 4, performance of the outranking approach increases significantly for runs mcm24 and mcm25.\nTherefore, considering all the documents present in half of the rankings (mcm24), or considering all the documents ranked in the first 100 positions of one or more rankings (mcm25), increases performance.\nThis result was predictable, since in both cases we have more detailed information on the relative importance of documents.\nTables 5 
and 6 confirm this evidence.\nTable 5, where the values between brackets in the first column give the number of documents retained from each input ranking, shows that selecting more documents from each input ranking leads to a performance increase.\nIt is worth mentioning that selecting more than 600 documents from each input ranking does not improve performance.\n
Table 5: Impact of the number of retained documents

  Run Id         |  MAP                |  S@1    |  S@5    |  S@10
  mcm (100)      |  18.47%             |  41.33% |  81.33% |  86.67%
  mcm24-1 (200)  |  19.32% (+4.60%)    |  42.67% |  78.67% |  88.00%
  mcm24-2 (400)  |  19.88% (+7.63%*)   |  37.33% |  80.00% |  88.00%
  mcm24-3 (600)  |  20.80% (+12.62%*)  |  40.00% |  80.00% |  88.00%
  mcm24-4 (800)  |  20.66% (+11.86%*)  |  40.00% |  78.67% |  86.67%
  mcm24 (1000)   |  20.67% (+11.91%*)  |  38.66% |  80.00% |  86.66%

Table 6 reports runs corresponding to variations of H2 k .\nValues between brackets are rank hits.\nFor instance, in the run mcm32, only documents which are present in 3 or more input rankings were considered successful.\n
Table 6: Performance considering different rank hits

  Run Id      |  MAP                |  S@1    |  S@5    |  S@10
  mcm25 (1)   |  21.68% (+17.38%*)  |  40.00% |  78.67% |  89.33%
  mcm32 (3)   |  18.98% (+2.76%)    |  38.67% |  80.00% |  85.33%
  mcm (5)     |  18.47%             |  41.33% |  81.33% |  86.67%
  mcm33 (7)   |  15.83% (-14.29%*)  |  37.33% |  78.67% |  85.33%
  mcm34 (9)   |  10.96% (-40.66%*)  |  36.11% |  66.67% |  70.83%
  mcm35 (10)  |  7.42% (-59.83%*)   |  39.22% |  62.75% |  64.70%

This table shows that performance is significantly better when rare documents are considered, whereas it decreases significantly when these documents are discarded.\nTherefore, we conclude that many of the relevant documents are retrieved by a rather small set of IR models.\nFor both runs mcm24 and mcm25, the number of successful documents was about 1000; therefore, the computation time per query increased and became around 5 seconds.\n5.2.2 Impact of the Variation of the Parameters Table 7 shows the performance variation of the outranking approach when different preference thresholds are considered.\nWe found performance improvement up to threshold values of about 
5%, after which performance decreases, significantly so for threshold values greater than 10%.\nMoreover, S@1 improves from 41.33% to 46.67% when the preference threshold changes from 0 to 5%.\nWe can thus conclude that the input rankings are semiorders rather than complete orders.\n
Table 7: Impact of the variation of the preference threshold from 0 to 12.5%

  Run Id         |  MAP               |  S@1    |  S@5    |  S@10
  mcm (0%)       |  18.47%            |  41.33% |  81.33% |  86.67%
  mcm1 (1%)      |  18.57% (+0.54%)   |  41.33% |  81.33% |  86.67%
  mcm2 (2.5%)    |  18.63% (+0.87%)   |  42.67% |  78.67% |  86.67%
  mcm3 (5%)      |  18.69% (+1.19%)   |  46.67% |  81.33% |  86.67%
  mcm4 (7.5%)    |  18.24% (-1.25%)   |  46.67% |  81.33% |  86.67%
  mcm5 (10%)     |  17.93% (-2.92%)   |  40.00% |  82.67% |  86.67%
  mcm5b (12.5%)  |  17.51% (-5.20%*)  |  41.33% |  80.00% |  86.67%

Table 8 shows the evolution of the performance measures w.r.t. the concordance threshold.\nWe can conclude that in order to put document $d_i$ before $d_{i'}$ in the consensus ranking, at least half of the input rankings of $PR_i \cap PR_{i'}$ should be concordant.\nPerformance drops significantly for very low and very high values of the concordance threshold.\nIn fact, for such values, the concordance condition is either almost always fulfilled, by too many document pairs, or not fulfilled at all, respectively.\nTherefore, the outranking relation becomes either too weak or too strong, respectively.\n
Table 8: Impact of the variation of cmin

  Run Id        |  MAP                |  S@1    |  S@5    |  S@10
  mcm11 (20%)   |  17.63% (-4.55%*)   |  41.33% |  76.00% |  85.33%
  mcm12 (40%)   |  18.37% (-0.54%)    |  42.67% |  76.00% |  86.67%
  mcm (50%)     |  18.47%             |  41.33% |  81.33% |  86.67%
  mcm13 (60%)   |  18.42% (-0.27%)    |  40.00% |  78.67% |  86.67%
  mcm14 (80%)   |  17.43% (-5.63%*)   |  40.00% |  78.67% |  86.67%
  mcm15 (100%)  |  16.12% (-12.72%*)  |  41.33% |  70.67% |  85.33%

In the experiments, varying the veto threshold as well as the discordance threshold within reasonable intervals does not have a significant impact on the performance measures.\nIn fact, runs with different veto thresholds ($s_v \in [50\%, 100\%]$) had similar performances, even though there is a slight advantage for runs with high 
threshold values, which means that it is better not to allow the input rankings to impose their veto easily.\nAlso, tuning the discordance threshold was carried out for values of 50% and 75% of the veto threshold.\nFor these runs we did not get any noticeable performance variation, although for low discordance thresholds ($d_{max} < 20\%$), performance slightly decreased.\n5.2.3 Impact of the Variation of the Number of Input Rankings To study performance evolution when different sets of input rankings are considered, we carried out three more runs where the 2, 4, and 6 best performing input rankings are considered.\nResults reported in Table 9 are seemingly counter-intuitive and also do not support previous findings regarding rank aggregation research [3].\nNevertheless, this result shows that low performing rankings bring more noise than information to the establishment of the consensus ranking.\nTherefore, when they are considered, performance decreases.\n
Table 9: Performance considering different best performing sets of input rankings

  Run Id     |  MAP              |  S@1    |  S@5    |  S@10
  mcm (10)   |  18.47%           |  41.33% |  81.33% |  86.67%
  mcm27 (6)  |  18.60% (+0.70%)  |  41.33% |  80.00% |  85.33%
  mcm28 (4)  |  19.02% (+2.98%)  |  40.00% |  86.67% |  88.00%
  mcm29 (2)  |  18.33% (-0.76%)  |  44.00% |  76.00% |  88.00%

5.2.4 Comparison of the Performance of Different Rank Aggregation Methods In this set of runs, we compare the outranking approach with some standard rank aggregation methods which were proven to have acceptable performance in previous studies: we considered two positional methods, which are the CombSUM and CombMNZ strategies.\nWe also examined the performance of one majoritarian method, which is the Markov chain method (MC4).\nFor the comparisons, we considered a specific outranking relation $S^* = S^{(5\%, 50\%, 50\%, 30\%)}$, which results in good overall performance when tuning all the parameters.\nThe first row of Table 10 gives performances of the rank aggregation methods w.r.t. 
a basic assumption set A1 = (H1 100, H2 5, H4 new): we only consider the first 100 documents from each ranking, then retain documents present in 5 or more rankings and update the ranks of the successful documents.\nFor positional methods, we place missing documents at the tail of the ranking (H3 yes), whereas for our method, as well as for MC4, we retained hypothesis H3 no.\nThe three following rows of Table 10 report performances when changing one element of the basic assumption set: the second row corresponds to the assumption set A2 = (H1 1000, H2 5, H4 new), i.e. changing the number of retained documents from 100 to 1000.\nThe third row corresponds to the assumption set A3 = (H1 100, H2 all, H4 new), i.e. considering the documents present in at least one ranking.\nThe fourth row corresponds to the assumption set A4 = (H1 100, H2 5, H4 init), i.e. keeping the original ranks of the successful documents.\nThe fifth row of Table 10, labeled A5, gives performance when all 225 queries of the Web track of TREC-2004 are considered.\nObviously, the performance level cannot be compared with the previous rows, since the additional queries are different from the TD queries and correspond to other tasks (Home Page and Named Page tasks [10]) of the TREC-2004 Web track.\nThis set of runs aims to show whether the relative performance of the various methods is task-dependent.\nThe last row of Table 10, labeled A6, reports performance of the various methods considering the TD task of TREC-2002 instead of TREC-2004: we fused the input rankings of the 10 best official runs for each of the 50 TD queries [9], considering the set of assumptions A1 of the first row.\nThis aims to show whether the relative performance of the various methods changes from year to year.\nValues between brackets in Table 10 are variations of the performance of each rank aggregation method w.r.t. 
performance of the outranking approach.\n
Table 10: Performance (MAP) of different rank aggregation methods under 3 different test collections

       |  mcm     |  combSUM            |  combMNZ            |  markov
  A1   |  18.79%  |  17.54% (-6.65%*)   |  17.08% (-9.10%*)   |  18.63% (-0.85%)
  A2   |  21.36%  |  19.18% (-10.21%*)  |  18.61% (-12.87%*)  |  21.33% (-0.14%)
  A3   |  21.92%  |  21.38% (-2.46%)    |  20.88% (-4.74%)    |  19.35% (-11.72%*)
  A4   |  18.64%  |  17.58% (-5.69%*)   |  17.18% (-7.83%*)   |  18.63% (-0.05%)
  A5   |  55.39%  |  52.16% (-5.83%*)   |  49.70% (-10.27%*)  |  53.30% (-3.77%)
  A6   |  16.95%  |  15.65% (-7.67%*)   |  14.57% (-14.04%*)  |  16.39% (-3.30%)

From the analysis of Table 10, the following can be established: • for all the runs, considering all the documents in each input ranking (A2) significantly improves performance (MAP increases by 11.62% on average).\nThis is predictable, since some initially unreported relevant documents would receive better positions in the consensus ranking.\n• for all the runs, considering documents even when they are present in only one input ranking (A3) significantly improves performance.\nFor mcm, combSUM and combMNZ, the performance improvement is more important (MAP increases by 20.27% on average) than for the markov run (MAP increases by 3.86%).\n• preserving the initial positions of documents (A4) or recomputing them (A1) does not have a noticeable influence on performance for either positional or majoritarian methods.\n• considering all the queries of the Web track of TREC-2004 (A5) as well as the TD queries of the Web track of TREC-2002 (A6) does not alter the relative performance of the different data fusion methods.\n• considering the TD queries of the Web track of TREC-2002, the performances of all the data fusion methods are lower than that of the best performing input ranking, for which the MAP value equals 18.58%.\nThis is because most of the fused input rankings have very low performance compared to the best one, which brings more noise to the consensus ranking.\n• performances of the data fusion methods mcm and markov are 
significantly better than that of the best input ranking, uogWebCAU150. This remains true for the combSUM and combMNZ runs only under assumptions H1 all or H2 all. This shows that majoritarian methods are less sensitive to the working assumptions than positional methods.
• the outranking approach always performs significantly better than the positional methods combSUM and combMNZ. It also performs better than the Markov chain method, especially under assumption H2 all, where the difference in performance becomes significant.

6. CONCLUSIONS

In this paper, we address the rank aggregation problem where different, but not disjoint, lists of documents are to be fused. We noticed that the input rankings can hide ties, so they should not be considered as complete orders. Only robust information should be used from each input ranking. Current rank aggregation methods, and especially positional methods (e.g. combSUM [15]), are not initially designed to work with such rankings. They should be adapted by considering specific working assumptions. We propose a new outranking method for rank aggregation which is well adapted to the IR context. Indeed, it ranks two documents w.r.t.
the intensity of their position difference in each input ranking, while also considering the number of input rankings that are concordant and discordant in favor of a specific document. There is also no need to make specific assumptions on the positions of the missing documents. This is an important feature, since the absence of a document from a ranking should not necessarily be interpreted negatively. Experimental results show that the outranking method significantly outperforms popular classical positional data fusion methods such as the combSUM and combMNZ strategies. It also outperforms a well performing majoritarian method, the Markov chain method. These results were tested against different test collections and queries. From the experiments, we can also conclude that in order to improve performance, we should fuse the result lists of well performing IR models, and that majoritarian data fusion methods perform better than positional methods. The proposed method can have a real impact on Web metasearch performance, since only ranks are available from most primary search engines, whereas most current approaches need scores to merge result lists into one single list. Further work involves investigating whether the outranking approach performs well in various other contexts, e.g. using the document scores or some combination of document ranks and scores.

Acknowledgments

The authors would like to thank Jacques Savoy for his valuable comments on a preliminary version of this paper.

7. REFERENCES
[1] A. Aronson, D. Demner-Fushman, S. Humphrey, J. Lin, H. Liu, P. Ruch, M. Ruiz, L. Smith, L. Tanabe, and W. Wilbur. Fusion of knowledge-intensive and statistical approaches for retrieving and annotating textual genomics documents. In Proceedings TREC'2005. NIST Publication, 2005.
[2] R. A. Baeza-Yates and B. A. Ribeiro-Neto. Modern Information Retrieval. ACM Press, 1999.
[3] B. T. Bartell, G. W.
Belew. Automatic combination of multiple ranked retrieval systems. In Proceedings ACM-SIGIR'94, pages 173-181. Springer-Verlag, 1994.
[4] N. J. Belkin, P. Kantor, E. A. Fox, and J. A. Shaw. Combining evidence of multiple query representations for information retrieval. IPM, 31(3):431-448, 1995.
[5] J. Borda. Mémoire sur les élections au scrutin. Histoire de l'Académie des Sciences, 1781.
[6] J. P. Callan, Z. Lu, and W. B. Croft. Searching distributed collections with inference networks. In Proceedings ACM-SIGIR'95, pages 21-28, 1995.
[7] M. Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. Imprimerie Royale, Paris, 1785.
[8] W. D. Cook and M. Kress. Ordinal ranking with intensity of preference. Management Science, 31(1):26-32, 1985.
[9] N. Craswell and D. Hawking. Overview of the TREC-2002 Web Track. In Proceedings TREC'2002. NIST Publication, 2002.
[10] N. Craswell and D. Hawking. Overview of the TREC-2004 Web Track. In Proceedings TREC'2004. NIST Publication, 2004.
[11] C. Dwork, S. R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation methods for the Web. In Proceedings WWW'2001, pages 613-622, 2001.
[12] R. Fagin. Combining fuzzy information from multiple systems. JCSS, 58(1):83-99, 1999.
[13] R. Fagin, R. Kumar, M. Mahdian, D. Sivakumar, and E. Vee. Comparing and aggregating rankings with ties. In PODS, pages 47-58, 2004.
[14] R. Fagin, R. Kumar, and D. Sivakumar. Comparing top k lists. SIAM J. on Discrete Mathematics, 17(1):134-160, 2003.
[15] E. A. Fox and J. A. Shaw. Combination of multiple searches. In Proceedings TREC'3. NIST Publication, 1994.
[16] J. Katzer, M. McGill, J. Tessier, W. Frakes, and P. DasGupta. A study of the overlap among document representations. Information Technology: Research and Development, 1(4):261-274, 1982.
[17] L. S. Larkey, M. E. Connell, and J.
Callan. Collection selection and results merging with topically organized U.S. patents and TREC data. In Proceedings ACM-CIKM'2000, pages 282-289. ACM Press, 2000.
[18] A. Le Calvé and J. Savoy. Database merging strategy based on logistic regression. IPM, 36(3):341-359, 2000.
[19] J. H. Lee. Analyses of multiple evidence combination. In Proceedings ACM-SIGIR'97, pages 267-276, 1997.
[20] D. Lillis, F. Toolan, R. Collier, and J. Dunnion. ProbFuse: a probabilistic approach to data fusion. In Proceedings ACM-SIGIR'2006, pages 139-146. ACM Press, 2006.
[21] J. I. Marden. Analyzing and Modeling Rank Data. Number 64 in Monographs on Statistics and Applied Probability. Chapman & Hall, 1995.
[22] M. Montague and J. A. Aslam. Metasearch consistency. In Proceedings ACM-SIGIR'2001, pages 386-387. ACM Press, 2001.
[23] D. M. Pennock and E. Horvitz. Analysis of the axiomatic foundations of collaborative filtering. In Workshop on AI for Electronic Commerce at the 16th National Conference on Artificial Intelligence, 1999.
[24] M. E. Renda and U. Straccia. Web metasearch: rank vs. score based rank aggregation methods. In Proceedings ACM-SAC'2003, pages 841-846. ACM Press, 2003.
[25] W. H. Riker. Liberalism against populism. Waveland Press, 1982.
[26] B. Roy. The outranking approach and the foundations of ELECTRE methods. Theory and Decision, 31:49-73, 1991.
[27] B. Roy and J. Hugonnard. Ranking of suburban line extension projects on the Paris metro system by a multicriteria method. Transportation Research, 16A(4):301-312, 1982.
[28] L. Si and J. Callan. Using sampled data and regression to merge search engine results. In Proceedings ACM-SIGIR'2002, pages 19-26. ACM Press, 2002.
[29] M. Truchon. An extension of the Condorcet criterion and Kemeny orders. Cahier 9813, Centre de Recherche en Economie et Finance Appliquées, Oct. 1998.
[30] H. Turtle and W. B.
Croft.\nInference networks for document retrieval.\nIn Proceedings of ACM-SIGIR``90, pages 1-24.\nACM Press, 1990.\n[31] C. C. Vogt and G. W. Cottrell.\nFusion via a linear combination of scores.\nInformation Retrieval, 1(3):151-173, 1999.","lvl-3":"An Outranking Approach for Rank Aggregation in Information Retrieval\nABSTRACT\nResearch in Information Retrieval usually shows performance improvement when many sources of evidence are combined to produce a ranking of documents (e.g., texts, pictures, sounds, etc.).\nIn this paper, we focus on the rank aggregation problem, also called data fusion problem, where rankings of documents, searched into the same collection and provided by multiple methods, are combined in order to produce a new ranking.\nIn this context, we propose a rank aggregation method within a multiple criteria framework using aggregation mechanisms based on decision rules identifying positive and negative reasons for judging whether a document should get a better rank than another.\nWe show that the proposed method deals well with the Information Retrieval distinctive features.\nExperimental results are reported showing that the suggested method performs better than the well-known CombSUM and CombMNZ operators.\n1.\nINTRODUCTION\nA wide range of current Information Retrieval (IR) approaches are based on various search models (Boolean, Vector Space, Probabilistic, Language, etc. 
[2]) in order to retrieve relevant documents in response to a user request.\nThe result lists produced by these approaches depend on the exact definition of the relevance concept.\nRank aggregation approaches, also called data fusion approaches, consist in combining these result lists in order to produce a new and hopefully better ranking.\nSuch approaches give rise to metasearch engines in the Web context.\nWe consider, in the following, cases where only ranks are\navailable and no other additional information is provided such as the relevance scores.\nThis corresponds indeed to the reality, where only ordinal information is available.\nData fusion is also relevant in other contexts, such as when the user writes several queries of his\/her information need (e.g., a boolean query and a natural language query) [4], or when many document surrogates are available [16].\nSeveral studies argued that rank aggregation has the potential of combining effectively all the various sources of evidence considered in various input methods.\nFor instance, experiments carried out in [16], [30], [4] and [19] showed that documents which appear in the lists of the majority of the input methods are more likely to be relevant.\nMoreover, Lee [19] and Vogt and Cottrell [31] found that various retrieval approaches often return very different irrelevant documents, but many of the same relevant documents.\nBartell et al. [3] also found that rank aggregation methods improve the performances w.r.t. 
those of the input methods, even when some of them have weak individual performances.\nThese methods also tend to smooth out biases of the input methods according to Montague and Aslam [22].\nData fusion has recently been proved to improve performances for both the ad hoc retrieval and categorization tasks within the TREC genomics track in 2005 [1].\nThe rank aggregation problem was addressed in various fields such as i) in social choice theory which studies voting algorithms which specify winners of elections or winners of competitions in tournaments [29], ii) in statistics when studying correlation between rankings, iii) in distributed databases when results from different databases must be combined [12], and iv) in collaborative filtering [23].\nMost current rank aggregation methods consider each input ranking as a permutation over the same set of items.\nThey also give rigid interpretation to the exact ranking of the items.\nBoth of these assumptions are rather not valid in the IR context, as will be shown in the following sections.\nThe remaining of the paper is organized as follows.\nWe first review current rank aggregation methods in Section 2.\nThen we outline the specificities of the data fusion problem in the IR context (Section 3).\nIn Section 4, we present a new aggregation method which is proven to best fit the IR context.\nExperimental results are presented in Section 5 and conclusions are provided in a final section.\n2.\nRELATED WORK\nAs pointed out by Riker [25], we can distinguish two families of rank aggregation methods: positional methods which assign scores to items to be ranked according to the ranks\nthey receive and majoritarian methods which are based on pairwise comparisons of items to be ranked.\nThese two families of methods find their roots in the pioneering works of Borda [5] and Condorcet [7], respectively, in the social choice literature.\n2.1 Preliminaries\nWe first introduce some basic notations to present the rank aggregation 
methods in a uniform way.\nLet D = {d1, d2,..., dnd} be a set of nd documents.\nA list or a ranking ~ j is an ordering defined on Dj \u2286 D (j = 1,..., n).\nThus, di ~ j di, means di ` is ranked better than' di, in ~ j.\nWhen Dj = D, ~ j is said to be a full list.\nOtherwise, it is a partial list.\nIf di belongs to Dj, rji denotes the rank or position of di in ~ j.\nWe assume that the best answer (document) is assigned the position 1 and the worst one is assigned the position | Dj |.\nLet ~ D be the set of all permutations on D or all subsets of D.\nA profile is a n-tuple of rankings PR = (~ 1, ~ 2,..., ~ n).\nRestricting PR to the rankings containing document di defines PRi.\nWe also call the number of rankings which contain document di the rank hits of di [19].\nThe rank aggregation or data fusion problem consists of finding a ranking function or mechanism \u03a8 (also called a social welfare function in the social choice theory terminology) defined by:\nwhere \u03c3 is called a consensus ranking.\n2.2 Positional Methods\n2.2.1 Borda Count\nThis method [5] first assigns a score Enj = 1 rji to each document di.\nDocuments are then ranked by increasing order of this score, breaking ties, if any, arbitrarily.\n2.2.2 Linear Combination Methods\nThis family of methods basically combine scores of documents.\nWhen used for the rank aggregation problem, ranks are assumed to be scores or performances to be combined using aggregation operators such as the weighted sum or some variation of it [3, 31, 17, 28].\nFor instance, Callan et al. 
[6] used the inference networks model [30] to combine rankings.\nFox and Shaw [15] proposed several combination strategies which are CombSUM, CombMIN, CombMAX, CombANZ and CombMNZ.\nThe first three operators correspond to the sum, min and max operators, respectively.\nCombANZ and CombMNZ respectively divides and multiplies the CombSUM score by the rank hits.\nIt is shown in [19] that the CombSUM and CombMNZ operators perform better than the others.\nMetasearch engines such as SavvySearch and MetaCrawler use the CombSUM strategy to fuse rankings.\n2.2.3 Footrule Optimal Aggregation\nIn this method, a consensus ranking minimizes the Spearman footrule distance from the input rankings [21].\nFormally, given two full lists ~ j and ~ j,, this distance is given by F (~ j, ~ j,) = nd\nas follows.\nGiven a profile PR and a consensus ranking \u03c3, the Spearman footrule distance of \u03c3 to PR is given by\ni, i, |, where rji, i, = rji, \u2212 rji.\nThis formulation has the advantage that it considers the intensity of preferences.\n2.2.4 Probabilistic Methods\nThis kind of methods assume that the performance of the input methods on a number of training queries is indicative of their future performance.\nDuring the training process, probabilities of relevance are calculated.\nFor subsequent queries, documents are ranked based on these probabilities.\nFor instance, in [20], each input ranking ~ j is divided into a number of segments, and the conditional probability of relevance (R) of each document di depending on the segment k it occurs in, is computed, i.e. 
prob (R | di, k, ~ j).\nFor subsequent queries, the score of each document di is given by En prob (R | di, k, Yj).\nLe Calve and Savoy [18] suggest using j = 1 k a logistic regression approach for combining scores.\nTraining data is needed to infer the model parameters.\n2.3 Majoritarian Methods\n2.3.1 Condorcet Procedure\nThe original Condorcet rule [7] specifies that a winner of the election is any item that beats or ties with every other item in a pairwise contest.\nFormally, let C (di\u03c3di,) = {~ j \u2208 PR: di ~ j di,} be the coalition of rankings that are concordant with establishing di\u03c3di,, i.e. with the proposition di ` should be ranked better than' di, in the final ranking \u03c3.\ndi beats or ties with di, iff | C (di\u03c3di,) | \u2265 | C (di, \u03c3di) |.\nThe repetitive application of the Condorcet algorithm can produce a ranking of items in a natural way: select the Condorcet winner, remove it from the lists, and repeat the previous two steps until there are no more documents to rank.\nSince there is not always Condorcet winners, variations of the Condorcet procedure have been developed within the multiple criteria decision aid theory, with methods such as ELECTRE [26].\n2.3.2 Kemeny Optimal Aggregation\nAs in section 2.2.3, a consensus ranking minimizes a geometric distance from the input rankings, where the Kendall tau distance is used instead of the Spearman footrule distance.\nFormally, given two full lists ~ j and ~ j,, the Kendall tau distance is given by K (~ j, ~ j,) = | {(di, di,): i <i ~, rji <rji,, rj,\ni,} |, i.e. the number of pairwise disagreements between the two lists.\nIt is easy to show that the consensus ranking corresponds to the geometric median of the input rankings and that the Kemeny optimal aggregation problem corresponds to the minimum feedback edge set problem.\n2.3.3 Markov Chain Methods\nMarkov chains (MCs) have been used by Dwork et al. 
[11] as a ` natural' method to obtain a consensus ranking where states correspond to the documents to be ranked and the transition probabilities vary depending on the interpretation of the transition event.\nIn the same reference, the authors proposed four specific MCs and experimental testing had shown that the following MC is the best performing one (see also [24]):\n\u2022 MC4: move from the current state di to the next state di, by first choosing a document di, uniformly from D.\nIf for the majority of the rankings, we have rji, \u2264 rji, then move to di,, else stay in di.\nThe consensus ranking corresponds to the stationary distribution of MC4.\n3.\nSPECIFICITIES OF THE RANK AGGREGATION PROBLEM IN THE IR CONTEXT\n3.1 Limited Significance of the Rankings\n3.2 Partial Lists\n4.\nOUTRANKING APPROACH FOR RANK AGGREGATION\n4.1 Presentation\n4.2 Illustrative Example\n5.\nEXPERIMENTS AND RESULTS\n5.1 Test Setting\n5.2 Results\n5.2.1 Impact of the Working Assumptions\n5.2.2 Impact of the Variation of the Parameters\n5.2.3 Impact of the Variation of the Number of Input Rankings\n5.2.4 Comparison of the Performance of Different Rank Aggregation Methods\n6.\nCONCLUSIONS\nIn this paper, we address the rank aggregation problem where different, but not disjoint, lists of documents are to be fused.\nWe noticed that the input rankings can hide ties, so they should not be considered as complete orders.\nOnly robust information should be used from each input ranking.\nCurrent rank aggregation methods, and especially positional methods (e.g. combSUM [15]), are not initially designed to work with such rankings.\nThey should be adapted by considering specific working assumptions.\nWe propose a new outranking method for rank aggregation which is well adapted to the IR context.\nIndeed, it ranks two documents w.r.t. 
the intensity of their positions difference in each input ranking and also considering the number of the input rankings that are concordant and discordant in favor of a specific document.\nThere is also no need to make specific assumptions on the positions of the missing documents.\nThis is an important feature since the absence of a document from a ranking should not be necessarily interpreted negatively.\nExperimental results show that the outranking method significantly out-performs popular classical positional data fusion methods like combSUM and combMNZ strategies.\nIt also out-performs a good performing majoritarian methods which is the Markov chain method.\nThese results are tested against different test collections and queries.\nFrom the experiments, we can also conclude that in order to improve the performances, we should fuse result lists of well performing\nIR models, and that majoritarian data fusion methods perform better than positional methods.\nThe proposed method can have a real impact on Web metasearch performances since only ranks are available from most primary search engines, whereas most of the current approaches need scores to merge result lists into one single list.\nFurther work involves investigating whether the outranking approach performs well in various other contexts, e.g. 
using the document scores or some combination of document ranks and scores.","lvl-4":"An Outranking Approach for Rank Aggregation in Information Retrieval\nABSTRACT\nResearch in Information Retrieval usually shows performance improvement when many sources of evidence are combined to produce a ranking of documents (e.g., texts, pictures, sounds, etc.).\nIn this paper, we focus on the rank aggregation problem, also called data fusion problem, where rankings of documents, searched into the same collection and provided by multiple methods, are combined in order to produce a new ranking.\nIn this context, we propose a rank aggregation method within a multiple criteria framework using aggregation mechanisms based on decision rules identifying positive and negative reasons for judging whether a document should get a better rank than another.\nWe show that the proposed method deals well with the Information Retrieval distinctive features.\nExperimental results are reported showing that the suggested method performs better than the well-known CombSUM and CombMNZ operators.\n1.\nINTRODUCTION\nThe result lists produced by these approaches depend on the exact definition of the relevance concept.\nRank aggregation approaches, also called data fusion approaches, consist in combining these result lists in order to produce a new and hopefully better ranking.\nSuch approaches give rise to metasearch engines in the Web context.\nWe consider, in the following, cases where only ranks are\navailable and no other additional information is provided such as the relevance scores.\nThis corresponds indeed to the reality, where only ordinal information is available.\nSeveral studies argued that rank aggregation has the potential of combining effectively all the various sources of evidence considered in various input methods.\nFor instance, experiments carried out in [16], [30], [4] and [19] showed that documents which appear in the lists of the majority of the input methods are more likely 
to be relevant.\nBartell et al. [3] also found that rank aggregation methods improve the performances w.r.t. those of the input methods, even when some of them have weak individual performances.\nThese methods also tend to smooth out biases of the input methods according to Montague and Aslam [22].\nMost current rank aggregation methods consider each input ranking as a permutation over the same set of items.\nThey also give rigid interpretation to the exact ranking of the items.\nBoth of these assumptions are rather not valid in the IR context, as will be shown in the following sections.\nThe remaining of the paper is organized as follows.\nWe first review current rank aggregation methods in Section 2.\nThen we outline the specificities of the data fusion problem in the IR context (Section 3).\nIn Section 4, we present a new aggregation method which is proven to best fit the IR context.\nExperimental results are presented in Section 5 and conclusions are provided in a final section.\n2.\nRELATED WORK\nAs pointed out by Riker [25], we can distinguish two families of rank aggregation methods: positional methods which assign scores to items to be ranked according to the ranks\nthey receive and majoritarian methods which are based on pairwise comparisons of items to be ranked.\nThese two families of methods find their roots in the pioneering works of Borda [5] and Condorcet [7], respectively, in the social choice literature.\n2.1 Preliminaries\nWe first introduce some basic notations to present the rank aggregation methods in a uniform way.\nLet D = {d1, d2,..., dnd} be a set of nd documents.\nA list or a ranking ~ j is an ordering defined on Dj \u2286 D (j = 1,..., n).\nThus, di ~ j di, means di ` is ranked better than' di, in ~ j.\nWhen Dj = D, ~ j is said to be a full list.\nOtherwise, it is a partial list.\nIf di belongs to Dj, rji denotes the rank or position of di in ~ j.\nWe assume that the best answer (document) is assigned the position 1 and the worst one is 
assigned the position | Dj |.\nA profile is a n-tuple of rankings PR = (~ 1, ~ 2,..., ~ n).\nRestricting PR to the rankings containing document di defines PRi.\nWe also call the number of rankings which contain document di the rank hits of di [19].\nThe rank aggregation or data fusion problem consists of finding a ranking function or mechanism \u03a8 (also called a social welfare function in the social choice theory terminology) defined by:\nwhere \u03c3 is called a consensus ranking.\n2.2 Positional Methods\n2.2.1 Borda Count\nThis method [5] first assigns a score Enj = 1 rji to each document di.\nDocuments are then ranked by increasing order of this score, breaking ties, if any, arbitrarily.\n2.2.2 Linear Combination Methods\nThis family of methods basically combine scores of documents.\nWhen used for the rank aggregation problem, ranks are assumed to be scores or performances to be combined using aggregation operators such as the weighted sum or some variation of it [3, 31, 17, 28].\nFor instance, Callan et al. 
[6] used the inference networks model [30] to combine rankings.\nThe first three operators correspond to the sum, min and max operators, respectively.\nCombANZ and CombMNZ respectively divides and multiplies the CombSUM score by the rank hits.\nIt is shown in [19] that the CombSUM and CombMNZ operators perform better than the others.\nMetasearch engines such as SavvySearch and MetaCrawler use the CombSUM strategy to fuse rankings.\n2.2.3 Footrule Optimal Aggregation\nIn this method, a consensus ranking minimizes the Spearman footrule distance from the input rankings [21].\nFormally, given two full lists ~ j and ~ j,, this distance is given by F (~ j, ~ j,) = nd\nas follows.\nGiven a profile PR and a consensus ranking \u03c3, the Spearman footrule distance of \u03c3 to PR is given by\n2.2.4 Probabilistic Methods\nThis kind of methods assume that the performance of the input methods on a number of training queries is indicative of their future performance.\nDuring the training process, probabilities of relevance are calculated.\nFor subsequent queries, documents are ranked based on these probabilities.\nFor subsequent queries, the score of each document di is given by En prob (R | di, k, Yj).\nLe Calve and Savoy [18] suggest using j = 1 k a logistic regression approach for combining scores.\nTraining data is needed to infer the model parameters.\n2.3 Majoritarian Methods\n2.3.1 Condorcet Procedure\nThe repetitive application of the Condorcet algorithm can produce a ranking of items in a natural way: select the Condorcet winner, remove it from the lists, and repeat the previous two steps until there are no more documents to rank.\n2.3.2 Kemeny Optimal Aggregation\nAs in section 2.2.3, a consensus ranking minimizes a geometric distance from the input rankings, where the Kendall tau distance is used instead of the Spearman footrule distance.\ni,} |, i.e. 
the number of pairwise disagreements between the two lists.\nIt is easy to show that the consensus ranking corresponds to the geometric median of the input rankings and that the Kemeny optimal aggregation problem corresponds to the minimum feedback edge set problem.\n2.3.3 Markov Chain Methods\nMarkov chains (MCs) have been used by Dwork et al. [11] as a ` natural' method to obtain a consensus ranking where states correspond to the documents to be ranked and the transition probabilities vary depending on the interpretation of the transition event.\nIn the same reference, the authors proposed four specific MCs and experimental testing had shown that the following MC is the best performing one (see also [24]):\n\u2022 MC4: move from the current state di to the next state di, by first choosing a document di, uniformly from D.\nIf for the majority of the rankings, we have rji, \u2264 rji, then move to di,, else stay in di.\nThe consensus ranking corresponds to the stationary distribution of MC4.\n6.\nCONCLUSIONS\nIn this paper, we address the rank aggregation problem where different, but not disjoint, lists of documents are to be fused.\nWe noticed that the input rankings can hide ties, so they should not be considered as complete orders.\nOnly robust information should be used from each input ranking.\nCurrent rank aggregation methods, and especially positional methods (e.g. combSUM [15]), are not initially designed to work with such rankings.\nThey should be adapted by considering specific working assumptions.\nWe propose a new outranking method for rank aggregation which is well adapted to the IR context.\nIndeed, it ranks two documents w.r.t. 
the intensity of their positions difference in each input ranking and also considering the number of the input rankings that are concordant and discordant in favor of a specific document.\nThere is also no need to make specific assumptions on the positions of the missing documents.\nThis is an important feature since the absence of a document from a ranking should not be necessarily interpreted negatively.\nExperimental results show that the outranking method significantly out-performs popular classical positional data fusion methods like combSUM and combMNZ strategies.\nIt also out-performs a good performing majoritarian methods which is the Markov chain method.\nThese results are tested against different test collections and queries.\nFrom the experiments, we can also conclude that in order to improve the performances, we should fuse result lists of well performing\nIR models, and that majoritarian data fusion methods perform better than positional methods.\nThe proposed method can have a real impact on Web metasearch performances since only ranks are available from most primary search engines, whereas most of the current approaches need scores to merge result lists into one single list.\nFurther work involves investigating whether the outranking approach performs well in various other contexts, e.g. 
using the document scores or some combination of document ranks and scores.","lvl-2":"An Outranking Approach for Rank Aggregation in Information Retrieval\nABSTRACT\nResearch in Information Retrieval usually shows performance improvement when many sources of evidence are combined to produce a ranking of documents (e.g., texts, pictures, sounds, etc.).\nIn this paper, we focus on the rank aggregation problem, also called data fusion problem, where rankings of documents, searched into the same collection and provided by multiple methods, are combined in order to produce a new ranking.\nIn this context, we propose a rank aggregation method within a multiple criteria framework using aggregation mechanisms based on decision rules identifying positive and negative reasons for judging whether a document should get a better rank than another.\nWe show that the proposed method deals well with the Information Retrieval distinctive features.\nExperimental results are reported showing that the suggested method performs better than the well-known CombSUM and CombMNZ operators.\n1.\nINTRODUCTION\nA wide range of current Information Retrieval (IR) approaches are based on various search models (Boolean, Vector Space, Probabilistic, Language, etc. 
[2]) in order to retrieve relevant documents in response to a user request.\nThe result lists produced by these approaches depend on the exact definition of the relevance concept.\nRank aggregation approaches, also called data fusion approaches, consist in combining these result lists in order to produce a new and hopefully better ranking.\nSuch approaches give rise to metasearch engines in the Web context.\nWe consider, in the following, cases where only ranks are\navailable and no other additional information is provided such as the relevance scores.\nThis corresponds indeed to the reality, where only ordinal information is available.\nData fusion is also relevant in other contexts, such as when the user writes several queries of his\/her information need (e.g., a boolean query and a natural language query) [4], or when many document surrogates are available [16].\nSeveral studies argued that rank aggregation has the potential of combining effectively all the various sources of evidence considered in various input methods.\nFor instance, experiments carried out in [16], [30], [4] and [19] showed that documents which appear in the lists of the majority of the input methods are more likely to be relevant.\nMoreover, Lee [19] and Vogt and Cottrell [31] found that various retrieval approaches often return very different irrelevant documents, but many of the same relevant documents.\nBartell et al. [3] also found that rank aggregation methods improve the performances w.r.t. 
those of the input methods, even when some of them have weak individual performances. These methods also tend to smooth out the biases of the input methods, according to Montague and Aslam [22]. Data fusion has recently been shown to improve performances for both the ad hoc retrieval and categorization tasks within the TREC genomics track in 2005 [1]. The rank aggregation problem has been addressed in various fields, such as i) social choice theory, which studies voting algorithms that specify winners of elections or winners of competitions in tournaments [29], ii) statistics, when studying correlation between rankings, iii) distributed databases, when results from different databases must be combined [12], and iv) collaborative filtering [23]. Most current rank aggregation methods consider each input ranking as a permutation over the same set of items. They also give a rigid interpretation to the exact ranking of the items. Neither assumption is really valid in the IR context, as will be shown in the following sections. The remainder of the paper is organized as follows. We first review current rank aggregation methods in Section 2. Then we outline the specificities of the data fusion problem in the IR context (Section 3). In Section 4, we present a new aggregation method which is well suited to the IR context. Experimental results are presented in Section 5 and conclusions are provided in a final section.

2. RELATED WORK

As pointed out by Riker [25], we can distinguish two families of rank aggregation methods: positional methods, which assign scores to items to be ranked according to the ranks they receive, and majoritarian methods, which are based on pairwise comparisons of the items to be ranked. These two families of methods find their roots in the pioneering works of Borda [5] and Condorcet [7], respectively, in the social choice literature.

2.1 Preliminaries

We first introduce some basic notations to present the rank aggregation
methods in a uniform way. Let D = {d_1, d_2, ..., d_nd} be a set of nd documents. A list or ranking ≻_j is an ordering defined on D_j ⊆ D (j = 1, ..., n). Thus, d_i ≻_j d_i' means d_i 'is ranked better than' d_i' in ≻_j. When D_j = D, ≻_j is said to be a full list. Otherwise, it is a partial list. If d_i belongs to D_j, r_i^j denotes the rank or position of d_i in ≻_j. We assume that the best answer (document) is assigned position 1 and the worst one is assigned position |D_j|. Let ≻_D be the set of all permutations on D or on subsets of D. A profile is an n-tuple of rankings PR = (≻_1, ≻_2, ..., ≻_n). Restricting PR to the rankings containing document d_i defines PR_i. We also call the number of rankings which contain document d_i the rank hits of d_i [19]. The rank aggregation or data fusion problem consists of finding a ranking function or mechanism Ψ (also called a social welfare function in the social choice theory terminology) defined by:

Ψ : PR = (≻_1, ≻_2, ..., ≻_n) → σ

where σ is called a consensus ranking.

2.2 Positional Methods

2.2.1 Borda Count

This method [5] first assigns to each document d_i the score Σ_{j=1}^{n} r_i^j, i.e., the sum of its positions over the input rankings. Documents are then ranked by increasing order of this score, breaking ties, if any, arbitrarily.

2.2.2 Linear Combination Methods

This family of methods basically combines scores of documents. When used for the rank aggregation problem, ranks are assumed to be scores or performances to be combined using aggregation operators such as the weighted sum or some variation of it [3, 31, 17, 28]. For instance, Callan et al.
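The Borda count of Section 2.2.1 can be sketched in a few lines. This is an illustrative implementation assuming full lists; the function name and the alphabetical tie-breaking are our own choices:

```python
def borda_count(rankings):
    """Fuse full lists by Borda count: each document's score is the sum of
    its positions across the input rankings; a lower total is better."""
    scores = {}
    for ranking in rankings:
        for position, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0) + position
    # Rank by increasing total position; ties broken arbitrarily (here: by name).
    return sorted(scores, key=lambda d: (scores[d], d))
```

For instance, with the rankings [["d1", "d2", "d3"], ["d2", "d1", "d3"], ["d1", "d3", "d2"]], d1 gets the score 1 + 2 + 1 = 4, d2 gets 6 and d3 gets 8, so the consensus ranking is d1, d2, d3.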
[6] used the inference networks model [30] to combine rankings. Fox and Shaw [15] proposed several combination strategies: CombSUM, CombMIN, CombMAX, CombANZ and CombMNZ. The first three operators correspond to the sum, min and max operators, respectively. CombANZ and CombMNZ respectively divide and multiply the CombSUM score by the rank hits. It is shown in [19] that the CombSUM and CombMNZ operators perform better than the others. Metasearch engines such as SavvySearch and MetaCrawler use the CombSUM strategy to fuse rankings.

2.2.3 Footrule Optimal Aggregation

In this method, a consensus ranking minimizes the Spearman footrule distance from the input rankings [21]. Formally, given two full lists ≻_j and ≻_j', this distance is given by F(≻_j, ≻_j') = Σ_{i=1}^{nd} |r_i^j − r_i^j'|. It extends to a profile as follows: given a profile PR and a consensus ranking σ, the Spearman footrule distance of σ to PR is F(σ, PR) = Σ_{j=1}^{n} F(σ, ≻_j). This formulation has the advantage that it considers the intensity of preferences.

2.2.4 Probabilistic Methods

This kind of method assumes that the performance of the input methods on a number of training queries is indicative of their future performance. During the training process, probabilities of relevance are calculated. For subsequent queries, documents are ranked based on these probabilities. For instance, in [20], each input ranking ≻_j is divided into a number of segments, and the conditional probability of relevance (R) of each document d_i, depending on the segment k in which it occurs, is computed, i.e.
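To make the CombSUM and CombMNZ operators of Section 2.2.2 concrete, here is a minimal sketch. Since only ranks are available, positions are turned into similarity-like scores via 1/position; that mapping, like the function name, is an illustrative choice of ours rather than part of the original definitions:

```python
from collections import defaultdict

def comb_fuse(rankings, mnz=False):
    """CombSUM sums a document's scores over the rankings containing it;
    CombMNZ multiplies that sum by the document's rank hits.
    Positions are mapped to scores by 1/position (illustrative choice)."""
    sums = defaultdict(float)
    hits = defaultdict(int)
    for ranking in rankings:
        for position, doc in enumerate(ranking, start=1):
            sums[doc] += 1.0 / position
            hits[doc] += 1
    fused = {d: s * (hits[d] if mnz else 1) for d, s in sums.items()}
    # A higher fused score is better; ties broken arbitrarily (here: by name).
    return sorted(fused, key=lambda d: (-fused[d], d))
```

For instance, comb_fuse([["d1", "d2"], ["d2", "d1"], ["d1"]]) ranks d1 first under both operators (CombSUM scores 2.5 vs 1.5; CombMNZ scores 7.5 vs 3.0).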
prob(R | d_i, k, ≻_j). For subsequent queries, the score of each document d_i is given by Σ_{j=1}^{n} prob(R | d_i, k, ≻_j). Le Calve and Savoy [18] suggest using a logistic regression approach for combining scores. Training data is needed to infer the model parameters.

2.3 Majoritarian Methods

2.3.1 Condorcet Procedure

The original Condorcet rule [7] specifies that a winner of the election is any item that beats or ties with every other item in a pairwise contest. Formally, let C(d_i σ d_i') = {≻_j ∈ PR : d_i ≻_j d_i'} be the coalition of rankings that are concordant with establishing d_i σ d_i', i.e. with the proposition d_i 'should be ranked better than' d_i' in the final ranking σ. Then d_i beats or ties with d_i' iff |C(d_i σ d_i')| ≥ |C(d_i' σ d_i)|. The repetitive application of the Condorcet algorithm can produce a ranking of items in a natural way: select the Condorcet winner, remove it from the lists, and repeat the previous two steps until there are no more documents to rank. Since a Condorcet winner does not always exist, variations of the Condorcet procedure have been developed within multiple criteria decision aid theory, with methods such as ELECTRE [26].

2.3.2 Kemeny Optimal Aggregation

As in Section 2.2.3, a consensus ranking minimizes a geometric distance from the input rankings, where the Kendall tau distance is used instead of the Spearman footrule distance. Formally, given two full lists ≻_j and ≻_j', the Kendall tau distance is given by K(≻_j, ≻_j') = |{(d_i, d_i') : i < i', r_i^j < r_i'^j, r_i^j' > r_i'^j'}|, i.e. the number of pairwise disagreements between the two lists. It can be shown that the consensus ranking corresponds to the geometric median of the input rankings and that the Kemeny optimal aggregation problem corresponds to the minimum feedback edge set problem.

2.3.3 Markov Chain Methods

Markov chains (MCs) have been used by Dwork et al.
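The two distances underlying footrule optimal aggregation (Section 2.2.3) and Kemeny optimal aggregation (Section 2.3.2) can be computed as follows for full lists over the same documents; the function names are ours:

```python
from itertools import combinations

def spearman_footrule(r1, r2):
    """F: sum over documents of the absolute difference of their positions."""
    pos1 = {d: i for i, d in enumerate(r1, start=1)}
    pos2 = {d: i for i, d in enumerate(r2, start=1)}
    return sum(abs(pos1[d] - pos2[d]) for d in pos1)

def kendall_tau(r1, r2):
    """K: number of document pairs ranked in opposite order by the two lists."""
    pos1 = {d: i for i, d in enumerate(r1, start=1)}
    pos2 = {d: i for i, d in enumerate(r2, start=1)}
    return sum(1 for a, b in combinations(pos1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)
```

On the reversed pair r1 = [d1, d2, d3], r2 = [d3, d2, d1], F = 2 + 0 + 2 = 4 and K = 3, the number of inverted pairs.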
[11] as a 'natural' method to obtain a consensus ranking, where states correspond to the documents to be ranked and the transition probabilities vary depending on the interpretation of the transition event. In the same reference, the authors proposed four specific MCs, and experimental testing showed that the following MC is the best performing one (see also [24]):

• MC4: from the current state d_i, first choose a document d_i' uniformly from D. If r_i'^j ≤ r_i^j for the majority of the rankings, then move to d_i', else stay in d_i.

The consensus ranking corresponds to the stationary distribution of MC4.

3. SPECIFICITIES OF THE RANK AGGREGATION PROBLEM IN THE IR CONTEXT

3.1 Limited Significance of the Rankings

The exact positions of documents in one input ranking have limited significance and should not be overemphasized. For instance, if the first three positions hold three relevant documents, any permutation of these three items has the same value. Indeed, in the IR context, the complete order provided by an input method may hide ties. In this case, we call such rankings semi-orders. This was outlined in [13] as the problem of aggregation with ties. It is therefore important to build the consensus ranking on robust information:

• Documents with nearby positions in ≻_j are likely to have similar interest or relevance, so a slight perturbation of the initial ranking is meaningless.

• Assuming that document d_i is ranked better than document d_i' in a ranking ≻_j, d_i is more likely to be definitively more relevant than d_i' in ≻_j as the number of intermediate positions between d_i and d_i' increases.

3.2 Partial Lists

In real world applications, such as metasearch engines, rankings provided by the input methods are often partial lists. This was outlined in [14] as the problem of having to merge top-k results from various input lists. For instance, in the experiments
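The MC4 chain of Section 2.3.3 can be simulated directly: build its transition matrix over the documents and iterate the distribution to approximate the stationary distribution. This sketch assumes full lists; using power iteration and the particular iteration count are our own choices:

```python
def mc4_ranking(rankings, iterations=200):
    """Rank documents by the (approximate) stationary distribution of MC4:
    from document di, a uniformly drawn dj is moved to whenever a majority
    of the rankings place dj at least as well as di."""
    docs = sorted({d for r in rankings for d in r})
    n = len(docs)
    pos = [{d: i for i, d in enumerate(r, start=1)} for r in rankings]
    P = [[0.0] * n for _ in range(n)]
    for i, di in enumerate(docs):
        for j, dj in enumerate(docs):
            if i != j and sum(p[dj] <= p[di] for p in pos) > len(rankings) / 2:
                P[i][j] = 1.0 / n  # move to the majority-preferred document
        P[i][i] = 1.0 - sum(P[i])  # otherwise stay in di
    dist = [1.0 / n] * n
    for _ in range(iterations):  # power iteration
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return [d for _, d in sorted(zip([-x for x in dist], docs))]
```

On [["d1", "d2", "d3"], ["d1", "d3", "d2"], ["d2", "d1", "d3"]], d1 beats both other documents in a majority of the lists, so the chain concentrates its mass on d1 and the consensus is d1, d2, d3.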
carried out by Dwork et al. [11], the authors found that among the top 100 best documents of 7 input search engines, 67% of the documents were present in only one search engine, whereas fewer than two documents were present in all the search engines. Rank aggregation of partial lists raises four major difficulties, which we state hereafter, proposing for each of them various working assumptions:

1. Partial lists can have various lengths, which can favour long lists. We thus consider the following two working hypotheses: H1_k: we only consider the top k best documents from each input ranking. H1_all: we consider all the documents from each input ranking.

2. Since there are different documents in the input rankings, we must decide which documents should be kept in the consensus ranking. Two working hypotheses are therefore considered: H2_k: we only consider documents which are present in at least k input rankings (k > 1). H2_all: we consider all the documents which are ranked in at least one input ranking. Hereafter, we call the documents which will be retained in the consensus ranking candidate documents, and the documents that will be excluded from the consensus ranking excluded documents. We also call a candidate document which is missing in one or more rankings a missing document.

3. Some candidate documents are missing documents in some input rankings. The main reasons for a missing document are that it was not indexed, or that it was indexed but deemed irrelevant; usually this information is not available. We consider the following two working hypotheses: H3_yes: each missing document in each ≻_j is assigned a position. H3_no: no assumption is made, that is, each missing document is considered neither better nor worse than any other document.

4. When assumption H2_k holds, each input ranking may contain documents which will not be considered in the consensus ranking. Regarding the positions of the candidate documents, we can consider the following working
hypotheses: H4_init: the initial positions of candidate documents are kept in each input ranking. H4_new: candidate documents receive new positions in each input ranking, after discarding excluded ones.

In the IR context, rank aggregation methods need to decide more or less explicitly which assumptions to retain w.r.t. the above-mentioned difficulties.

4. OUTRANKING APPROACH FOR RANK AGGREGATION

4.1 Presentation

Positional methods implicitly consider that the positions of the documents in the input rankings are scores, thus giving a cardinal meaning to ordinal information. This is a strong assumption that is questionable, especially when the input rankings have different lengths. Moreover, for positional methods, assumptions H3 and H4, which are often arbitrary, have a strong impact on the results. For instance, consider an input ranking of 500 documents out of 1000 candidate documents. Whether we assign to each of the missing documents the position 1, 501, 750 or 1000 (corresponding to variations of H3_yes) will give rise to very contrasting results, especially regarding the top of the consensus ranking. Majoritarian methods do not suffer from the above-mentioned drawbacks of the positional methods, since they build consensus rankings exploiting only the ordinal information contained in the input rankings. Nevertheless, they suppose that such rankings are complete orders, ignoring that they may hide ties. Therefore, majoritarian methods base consensus rankings on illusory discriminant information rather than on less discriminant but more robust information. Trying to overcome the limits of current rank aggregation methods, we found that outranking approaches, which were initially used for multiple criteria aggregation problems [26], can also be used for rank aggregation, where each ranking plays the role of a criterion. In order to decide whether a document d_i should be ranked better than d_i' in the consensus ranking
σ, the following two conditions should be met:

• a concordance condition, which ensures that a majority of the input rankings are concordant with d_i σ d_i' (majority principle).

• a discordance condition, which ensures that none of the discordant input rankings strongly refutes d_i σ d_i' (respect of minorities principle).

Formally, the concordance coalition with d_i σ d_i' is C_sp(d_i σ d_i') = {≻_j ∈ PR : r_i^j ≤ r_i'^j − sp}, where sp is a preference threshold: the variation of document positions (whether absolute or relative to the ranking length) which draws the boundary between an indifference and a preference situation between documents. The discordance coalition with d_i σ d_i' is D_sv(d_i σ d_i') = {≻_j ∈ PR : r_i^j ≥ r_i'^j + sv}, where sv is a veto threshold: the variation of document positions (whether absolute or relative to the ranking length) which draws the boundary between a weak and a strong opposition to d_i σ d_i'. Depending on the exact definition of the preceding concordance and discordance coalitions, leading to the definition of decision rules, several outranking relations can be defined. They can be more or less demanding depending on i) the values of the thresholds sp and sv, ii) the importance or minimal size cmin required for the concordance coalition, and iii) the importance or maximum size dmax allowed for the discordance coalition. A generic outranking relation can thus be defined as follows:

d_i S(sp, sv, cmin, dmax) d_i'  iff  |C_sp(d_i σ d_i')| ≥ cmin and |D_sv(d_i σ d_i')| ≤ dmax

This expression defines a family of nested outranking relations, since S(sp, sv, cmin, dmax) ⊆ S(sp', sv', cmin', dmax') when cmin ≥ cmin' and/or dmax ≤ dmax' and/or sp ≥ sp' and/or sv ≤ sv'. This expression also generalizes the majority rule, which corresponds to the particular relation S(0, ∞, n/2, n). It also satisfies important properties of rank aggregation methods, called neutrality, Pareto-optimality, the Condorcet property and the Extended Condorcet property, in
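A single outranking test d_i S(sp, sv, cmin, dmax) d_i' from Section 4.1 can be sketched as follows, given the positions of the two documents in the rankings that contain both. The sketch uses absolute thresholds (the approach also allows thresholds relative to ranking lengths), and the position values in the usage note are hypothetical:

```python
def outranks(pos_i, pos_j, sp, sv, cmin, dmax):
    """True iff d_i S(sp, sv, cmin, dmax) d_j.
    pos_i[k] and pos_j[k] are the positions of d_i and d_j in the k-th
    ranking containing both documents; all thresholds are absolute."""
    # Size of the concordance coalition: rankings placing d_i better by at least sp.
    concordant = sum(ri <= rj - sp for ri, rj in zip(pos_i, pos_j))
    # Size of the discordance coalition: rankings opposing d_i by at least sv.
    discordant = sum(ri >= rj + sv for ri, rj in zip(pos_i, pos_j))
    return concordant >= cmin and discordant <= dmax
```

For instance, with hypothetical positions (1, 2, 1) for d_i and (5, 4, 6) for d_j, and thresholds sp = 1, sv = 4, cmin = 2, dmax = 1, the concordance coalition has size 3 and the discordance coalition is empty, so d_i outranks d_j.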
the social choice literature [29]. Outranking relations are not necessarily transitive and do not necessarily correspond to rankings, since directed cycles may exist. Therefore, we need specific procedures in order to derive a consensus ranking. We propose the following procedure, which finds its roots in [27]. It consists in partitioning the set of documents into r ranked classes. Each class C_h contains documents with the same relevance and results from the application of all the relations (if possible) to the set of documents remaining after the previous classes have been computed. Documents within the same equivalence class are ranked arbitrarily. Formally, let

• R be the set of candidate documents for a query,

• S_1, S_2, ... be a family of nested outranking relations,

• F_k(d_i, E) = |{d_i' ∈ E : d_i S_k d_i'}| be the number of documents in E (E ⊆ R) that could be considered 'worse' than d_i according to relation S_k,

• f_k(d_i, E) = |{d_i' ∈ E : d_i' S_k d_i}| be the number of documents in E that could be considered 'better' than d_i according to S_k,

• s_k(d_i, E) = F_k(d_i, E) − f_k(d_i, E) be the qualification of d_i in E according to S_k.

Each class C_h results from a distillation process. It corresponds to the last distillate of a series of sets E_0 ⊇ E_1 ⊇ ... where E_0 = R \ (C_1 ∪ ... ∪ C_{h−1}) and E_k is a reduced subset of E_{k−1} resulting from the application of the following procedure:

1. compute for each d_i ∈ E_{k−1} its qualification according to S_k, i.e.
s_k(d_i, E_{k−1}),

2. define s_max = max_{d_i ∈ E_{k−1}} {s_k(d_i, E_{k−1})}, then

3. set E_k = {d_i ∈ E_{k−1} : s_k(d_i, E_{k−1}) = s_max}.

When one outranking relation is used, the distillation process stops after the first application of the previous procedure, i.e., C_h corresponds to distillate E_1. When different outranking relations are used, the distillation process stops when all the pre-defined outranking relations have been used or when |E_k| = 1.

4.2 Illustrative Example

This section illustrates the concepts and procedures of Section 4.1. Let us consider a set of candidate documents R = {d_1, ..., d_5} whose rankings are given in Table 1.

Table 1: Rankings of documents

Let us suppose that the preference and veto thresholds are set to the values 1 and 4 respectively, and that the concordance and discordance thresholds are set to the values 2 and 1 respectively. The following tables give the concordance, discordance and outranking matrices. Each entry c_sp(d_i, d_i') (resp. d_sv(d_i, d_i')) of the concordance (resp. discordance) matrix gives the number of rankings that are concordant (resp. discordant) with d_i σ d_i', i.e. c_sp(d_i, d_i') = |C_sp(d_i σ d_i')| and d_sv(d_i, d_i') = |D_sv(d_i σ d_i')|.

Table 2: Computation of the outranking relation

For instance, the concordance coalition for the assertion d_1 σ d_4 is C_1(d_1 σ d_4) = {≻_1, ≻_2, ≻_3} and the discordance coalition for the same assertion is D_4(d_1 σ d_4) = ∅. Therefore, c_1(d_1, d_4) = 3, d_4(d_1, d_4) = 0 and d_1 S_1 d_4 holds. Notice that F_k(d_i, R) (resp. f_k(d_i, R)) is given by summing the values of the i-th row (resp. column) of the outranking matrix. The consensus ranking is obtained as follows: to get the first class C_1, we compute the qualifications of all the documents of E_0 = R with respect to S_1. They are respectively 2, 2, 2, −2 and −4. Therefore s_max equals 2 and C_1 = E_1 = {d_1, d_2, d_3}. Observe that, if we had used a second outranking relation S_2 (⊇ S_1), these three documents could possibly have been discriminated. At this stage, we remove the documents of C_1 from the outranking matrix and compute the next class: the new qualifications of the documents of E_0 = R \ C_1 = {d_4, d_5} are respectively 1 and −1, so C_2 = E_1 = {d_4}. The last document d_5 is the only document of the last class C_3. Thus, the consensus ranking is {d_1, d_2, d_3} − {d_4} − {d_5}.

5. EXPERIMENTS AND RESULTS

5.1 Test Setting

To facilitate empirical investigation of the proposed methodology, we developed a prototype metasearch engine that implements a version of our outranking approach for rank aggregation. In this paper, we apply our approach to the Topic Distillation (TD) task of the TREC-2004 Web track [10]. In this task, there are 75 topics, where only a short description of each is given. For each query, we retained the rankings of the 10 best runs of the TD task, which are provided by the TREC-2004 participating teams. The performances of these runs are reported in Table 3.

Table 3: Performances of the 10 best runs of the TD task of TREC-2004

For each query, each run provides a ranking of about 1000 documents. The number of documents
retrieved by all these runs ranges from 543 to 5769. Their average (median) number is 3340 (3386). It is worth noting that we found distributions of the documents among the rankings similar to those in [11]. For evaluation, we used the standard trec_eval tool, which is used by the TREC community to calculate the standard measures of system effectiveness, namely Mean Average Precision (MAP) and Success@n (S@n) for n = 1, 5 and 10. The effectiveness of our approach is compared against some high performing official results from TREC-2004, as well as against some standard rank aggregation algorithms. In the experiments, significance testing is mainly based on the Student's t statistic, computed on the basis of the MAP values of the compared runs. In the tables of the following section, statistically significant differences are marked with an asterisk. Values between brackets in the first column of each table indicate the parameter value of the corresponding run.

5.2 Results

We carried out several series of runs in order to i) study performance variations of the outranking approach when tuning the parameters and working assumptions, ii) compare performances of the outranking approach vs standard rank aggregation strategies, and iii) check whether rank aggregation performs better than the best input rankings. We set up our basic run mcm with the following parameters. We considered that each input ranking is a complete order (sp = 0) and that an input ranking strongly refutes d_i σ d_i' when the difference between the two document positions is large enough (sv = 75%). Preference and veto thresholds are computed proportionally to the number of documents retained in each input ranking. They consequently may vary from one ranking to another. In addition, to accept the assertion d_i σ d_i', we supposed that a majority of the rankings must be concordant (cmin = 50%) and that every input ranking can impose its veto (dmax = 0). Concordance and discordance thresholds are computed for each pair (d_i, d_i') as a percentage of the input rankings of PR_i ∩ PR_i'. Thus, our choice of parameters leads to the definition of the outranking relation S(0, 75%, 50%, 0). To test the run mcm, we chose the following assumptions. We retained the top 100 best documents from each input ranking (H1_100), only considered documents which are present in at least half of the input rankings (H2_5), and assumed H3_no and H4_new. Under these conditions, the number of successful documents was about 100 on average, and the computation time per query was less than one second. Obviously, modifying the working assumptions should have a deeper impact on the performances than tuning our model parameters. This was validated by preliminary experiments. Thus, we hereafter begin by studying performance variation when different sets of assumptions are considered. Afterwards, we study the impact of tuning the parameters. Finally, we compare our model's performances w.r.t. the input rankings as well as some standard data fusion algorithms.

5.2.1 Impact of the Working Assumptions

Table 4 summarizes the performance variation of the outranking approach under different working hypotheses.

Table 4: Impact of the working assumptions

In this table, we first show that run mcm22, in which missing documents are all put at the same last position of each input ranking, leads to a performance drop w.r.t.
run mcm. Moreover, S@1 moves from 41.33% to 34.67% (−16.11%). This shows that several relevant documents which were initially put at the first position of the consensus ranking in mcm lose this first position but remain ranked in the top 5 documents, since S@5 did not change. We also conclude that documents which have rather good positions in some input rankings are more likely to be relevant, even though they are missing from some other rankings. Consequently, when they are missing from some rankings, assigning worse ranks to these documents is harmful for performance. Also, from Table 4, we found that the performances of runs mcm and mcm23 are similar. Therefore, the outranking approach is not sensitive to keeping the initial positions of candidate documents or recomputing them after discarding excluded ones. From the same Table 4, performance of the outranking approach increases significantly for runs mcm24 and mcm25. Therefore, whether we consider all the documents which are present in half of the rankings (mcm24) or all the documents which are ranked in the first 100 positions of one or more rankings (mcm25), performance increases. This result was predictable, since in both cases we have more detailed information on the relative importance of documents. Tables 5 and 6 confirm this evidence. Table 5, where the values between brackets in the first column give the number of documents retained from each input ranking, shows that selecting more documents from each input ranking leads to a performance increase. It is worth mentioning that selecting more than 600 documents from each input ranking does not improve performance.

Table 5: Impact of the number of retained documents

Table 6 reports runs corresponding to variations of H2_k. Values between brackets are rank hits. For instance, in the run mcm32, only documents which are present in 3 or more input rankings were considered successful. This table shows that performance is significantly
better when rare documents are considered, whereas it decreases significantly when these documents are discarded.\nTherefore, we conclude that many of the relevant documents are retrieved by a rather small set of IR models.\nTable 6: Performance considering different rank hits\nFor both runs mcm24 and mcm25, the number of successful documents was about 1000 and therefore, the computation time per query increased and became around 5 seconds.\n5.2.2 Impact of the Variation of the Parameters\nTable 7 shows performance variation of the outranking approach when different preference thresholds are considered.\nWe found performance improvement up to threshold values of about 5%, then there is a decrease in the performance which becomes significant for threshold values greater than 10%.\nMoreover, S@1 improves from 41.33% to 46.67% when preference threshold changes from 0 to 5%.\nWe can thus conclude that the input rankings are semi orders rather than complete orders.\nTable 8 shows the evolution of the performance measures w.r.t. 
the concordance threshold.

Table 7: Impact of the variation of the preference threshold

We can conclude that, in order to put document d_i before d_i' in the consensus ranking, at least half of the input rankings of PR_i ∩ PR_i' should be concordant. Performance drops significantly for very low and very high values of the concordance threshold. In fact, for such values, the concordance condition is either almost always fulfilled, by too many document pairs, or never fulfilled at all, respectively. Therefore, the outranking relation becomes either too weak or too strong, respectively.

Table 8: Impact of the variation of cmin

In the experiments, varying the veto threshold as well as the discordance threshold within reasonable intervals does not have a significant impact on the performance measures. In fact, runs with different veto thresholds (sv ∈ [50%; 100%]) had similar performances, even though there is a slight advantage for runs with high threshold values, which means that it is better not to allow the input rankings to impose their veto easily. Also, tuning of the discordance threshold was carried out for values of 50% and 75% of the veto threshold. For these runs we did not get any noticeable performance variation, although for low discordance thresholds (dmax < 20%) performance slightly decreased.

5.2.3 Impact of the Variation of the Number of Input Rankings

To study performance evolution when different sets of input rankings are considered, we carried out three more runs in which the 2, 4, and 6 best performing input rankings are considered. The results reported in Table 9 are seemingly counter-intuitive and do not support previous findings of rank aggregation research [3]. Nevertheless, this result shows that low performing rankings bring more noise than information to the establishment of the consensus ranking. Therefore, when they are considered, performance decreases.

Table 9: Performance considering different best performing sets of input
rankings

5.2.4 Comparison of the Performance of Different Rank Aggregation Methods

In this set of runs, we compare the outranking approach with some standard rank aggregation methods which were proven to have acceptable performance in previous studies: we considered two positional methods, the CombSUM and CombMNZ strategies, and one majoritarian method, the Markov chain method (MC4). For the comparisons, we considered a specific outranking relation S* = S(5%, 50%, 50%, 30%), which results in good overall performance when tuning all the parameters. The first row of Table 10 gives the performances of the rank aggregation methods w.r.t. a basic assumption set A1 = (H1_100, H2_5, H4_new): we only consider the first 100 documents from each ranking, then retain documents present in 5 or more rankings, and update the ranks of successful documents. For the positional methods, we place missing documents at the end of the ranking (H3_yes), whereas for our method, as well as for MC4, we retained hypothesis H3_no. The three following rows of Table 10 report performances when one element of the basic assumption set is changed: the second row corresponds to the assumption set A2 = (H1_1000, H2_5, H4_new), i.e. changing the number of retained documents from 100 to 1000. The third row corresponds to the assumption set A3 = (H1_100, H2_all, H4_new), i.e. considering the documents present in at least one ranking. The fourth row corresponds to the assumption set A4 = (H1_100, H2_5, H4_init), i.e.
keeping the original ranks of successful documents. The fifth row of Table 10, labeled A5, gives performance when all the 225 queries of the Web track of TREC-2004 are considered. Obviously, this performance level cannot be compared with the previous rows, since the additional queries are different from the TD queries and correspond to other tasks (the Home Page and Named Page tasks [10]) of the TREC-2004 Web track. This set of runs aims to show whether the relative performance of the various methods is task-dependent. The last row of Table 10, labeled A6, reports performance of the various methods on the TD task of TREC-2002 instead of TREC-2004: we fused the input rankings of the 10 best official runs for each of the 50 TD queries [9], considering the set of assumptions A1 of the first row. This aims to show whether the relative performance of the various methods changes from year to year. Values between brackets in Table 10 are variations of performance of each rank aggregation method w.r.t.
performance of the outranking approach.

Table 10: Performance (MAP) of different rank aggregation methods under 3 different test collections

From the analysis of Table 10, the following can be established:

• for all the runs, considering all the documents in each input ranking (A2) significantly improves performance (MAP increases by 11.62% on average). This is predictable, since some initially unreported relevant documents receive better positions in the consensus ranking.

• for all the runs, considering documents even when they are present in only one input ranking (A3) significantly improves performance. For mcm, combSUM and combMNZ, the performance improvement is more important (MAP increases by 20.27% on average) than for the markov run (MAP increases by 3.86%).

• preserving the initial positions of documents (A4) or recomputing them (A1) does not have a noticeable influence on performance, for both positional and majoritarian methods.

• considering all the queries of the Web track of TREC-2004 (A5), as well as the TD queries of the Web track of TREC-2002 (A6), does not alter the relative performance of the different data fusion methods.

• considering the TD queries of the Web track of TREC-2002, the performances of all the data fusion methods are lower than that of the best performing input ranking, for which the MAP value equals 18.58%. This is because most of the fused input rankings have very low performances compared to the best one, which brings more noise to the consensus ranking.

• the performances of the data fusion methods mcm and markov are significantly better than that of the best input ranking uogWebCAU150. This remains true for the runs combSUM and combMNZ only under assumptions H1_all or H2_all. This shows that majoritarian methods are less sensitive to the assumptions than positional methods.

• the outranking approach always performs significantly better than the positional methods combSUM and combMNZ. It also has better
performances than the Markov chain method, especially under assumption H2 all where difference of performances becomes significant.\n6.\nCONCLUSIONS\nIn this paper, we address the rank aggregation problem where different, but not disjoint, lists of documents are to be fused.\nWe noticed that the input rankings can hide ties, so they should not be considered as complete orders.\nOnly robust information should be used from each input ranking.\nCurrent rank aggregation methods, and especially positional methods (e.g. combSUM [15]), are not initially designed to work with such rankings.\nThey should be adapted by considering specific working assumptions.\nWe propose a new outranking method for rank aggregation which is well adapted to the IR context.\nIndeed, it ranks two documents w.r.t. the intensity of their positions difference in each input ranking and also considering the number of the input rankings that are concordant and discordant in favor of a specific document.\nThere is also no need to make specific assumptions on the positions of the missing documents.\nThis is an important feature since the absence of a document from a ranking should not be necessarily interpreted negatively.\nExperimental results show that the outranking method significantly out-performs popular classical positional data fusion methods like combSUM and combMNZ strategies.\nIt also out-performs a good performing majoritarian methods which is the Markov chain method.\nThese results are tested against different test collections and queries.\nFrom the experiments, we can also conclude that in order to improve the performances, we should fuse result lists of well performing\nIR models, and that majoritarian data fusion methods perform better than positional methods.\nThe proposed method can have a real impact on Web metasearch performances since only ranks are available from most primary search engines, whereas most of the current approaches need scores to merge result lists into one single 
list.\nFurther work involves investigating whether the outranking approach performs well in various other contexts, e.g. using the document scores or some combination of document ranks and scores.","keyphrases":["outrank approach","rank aggreg","inform retriev","data fusion problem","data fusion","decis rule","multipl criterion framework","metasearch engin","combsum and combmnz strategi","majoritarian method","ir model","multipl criterium approach","outrank method"],"prmu":["P","P","P","P","P","P","M","U","M","M","U","M","R"]} {"id":"H-87","title":"Robustness of Adaptive Filtering Methods In a Cross-benchmark Evaluation","abstract":"This paper reports a cross-benchmark evaluation of regularized logistic regression (LR) and incremental Rocchio for adaptive filtering. Using four corpora from the Topic Detection and Tracking (TDT) forum and the Text Retrieval Conferences (TREC) we evaluated these methods with non-stationary topics at various granularity levels, and measured performance with different utility settings. We found that LR performs strongly and robustly in optimizing T11SU (a TREC utility function) while Rocchio is better for optimizing Ctrk (the TDT tracking cost), a high-recall oriented objective function. Using systematic cross-corpus parameter optimization with both methods, we obtained the best results ever reported on TDT5, TREC10 and TREC11. 
Relevance feedback on a small portion (0.05~0.2%) of the TDT5 test documents yielded significant performance improvements, measuring up to a 54% reduction in Ctrk and a 20.9% increase in T11SU (with \u03b2=0.1), compared to the results of the top-performing system in TDT2004 without relevance feedback information.","lvl-1":"Robustness of Adaptive Filtering Methods In a Cross-benchmark Evaluation Yiming Yang, Shinjae Yoo, Jian Zhang, Bryan Kisiel School of Computer Science, Carnegie Mellon University 5000 Forbes Avenue, Pittsburgh, PA 15213, USA ABSTRACT This paper reports a cross-benchmark evaluation of regularized logistic regression (LR) and incremental Rocchio for adaptive filtering.\nUsing four corpora from the Topic Detection and Tracking (TDT) forum and the Text Retrieval Conferences (TREC) we evaluated these methods with non-stationary topics at various granularity levels, and measured performance with different utility settings.\nWe found that LR performs strongly and robustly in optimizing T11SU (a TREC utility function) while Rocchio is better for optimizing Ctrk (the TDT tracking cost), a high-recall oriented objective function.\nUsing systematic cross-corpus parameter optimization with both methods, we obtained the best results ever reported on TDT5, TREC10 and TREC11.\nRelevance feedback on a small portion (0.05~0.2%) of the TDT5 test documents yielded significant performance improvements, measuring up to a 54% reduction in Ctrk and a 20.9% increase in T11SU (with \u03b2=0.1), compared to the results of the top-performing system in TDT2004 without relevance feedback information.\nCategories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Information filtering, Relevance feedback, Retrieval models, Selection process; I.5.2 [Design Methodology]: Classifier design and evaluation General Terms Algorithms, Measurement, Performance, Experimentation 1.\nINTRODUCTION Adaptive filtering (AF) has been a challenging research topic in information 
retrieval.\nThe task is for the system to make an online topic membership decision (yes or no) for every document, as soon as it arrives, with respect to each pre-defined topic of interest.\nStarting from 1997 in the Topic Detection and Tracking (TDT) area and 1998 in the Text Retrieval Conferences (TREC), benchmark evaluations have been conducted by NIST under the following conditions[6][7][8][3][4]: \u2022 A very small number (1 to 4) of positive training examples was provided for each topic at the starting point.\n\u2022 Relevance feedback was available but only for the systemaccepted documents (with a yes decision) in the TREC evaluations for AF.\n\u2022 Relevance feedback (RF) was not allowed in the TDT evaluations for AF (or topic tracking in the TDT terminology) until 2004.\n\u2022 TDT2004 was the first time that TREC and TDT metrics were jointly used in evaluating AF methods on the same benchmark (the TDT5 corpus) where non-stationary topics dominate.\nThe above conditions attempt to mimic realistic situations where an AF system would be used.\nThat is, the user would be willing to provide a few positive examples for each topic of interest at the start, and might or might not be able to provide additional labeling on a small portion of incoming documents through relevance feedback.\nFurthermore, topics of interest might change over time, with new topics appearing and growing, and old topics shrinking and diminishing.\nThese conditions make adaptive filtering a difficult task in statistical learning (online classification), for the following reasons: 1) it is difficult to learn accurate models for prediction based on extremely sparse training data; 2) it is not obvious how to correct the sampling bias (i.e., relevance feedback on system-accepted documents only) during the adaptation process; 3) it is not well understood how to effectively tune parameters in AF methods using cross-corpus validation where the validation and evaluation topics do not overlap, 
and the documents may be from different sources or different epochs.\nNone of these problems is addressed in the literature of statistical learning for batch classification where all the training data are given at once.\nThe first two problems have been studied in the adaptive filtering literature, including topic profile adaptation using incremental Rocchio, Gaussian-Exponential density models, logistic regression in a Bayesian framework, etc., and threshold optimization strategies using probabilistic calibration or local fitting techniques [1][2][9][10][11][12][13].\nAlthough these works provide valuable insights for understanding the problems and possible solutions, it is difficult to draw conclusions regarding the effectiveness and robustness of current methods because the third problem has not been thoroughly investigated.\nAddressing the third issue is the main focus in this paper.\nWe argue that robustness is an important measure for evaluating and comparing AF methods.\nBy robust we mean consistent and strong performance across benchmark corpora with a systematic method for parameter tuning across multiple corpora.\nMost AF methods have pre-specified parameters that may influence the performance significantly and that must be determined before the test process starts.\nAvailable training examples, on the other hand, are often insufficient for tuning the parameters.\nIn TDT5, for example, there is only one labeled training example per topic at the start; parameter optimization on such training data is doomed to be ineffective.\nThis leaves only one option (assuming tuning on the test set is not an alternative), that is, choosing an external corpus as the validation set.\nNotice that the validation-set topics often do not overlap with the test-set topics, thus the parameter optimization is performed under the tough condition that the validation data and the test data may be quite different from each other.\nNow the important question is: which methods (if 
any) are robust under the condition of using cross-corpus validation to tune parameters?\nCurrent literature does not offer an answer because no thorough investigation on the robustness of AF methods has been reported.\nIn this paper we address the above question by conducting a cross-benchmark evaluation with two effective approaches in AF: incremental Rocchio and regularized logistic regression (LR).\nRocchio-style classifiers have been popular in AF, with good performance in benchmark evaluations (TREC and TDT) if appropriate parameters were used and if combined with an effective threshold calibration strategy [2][4][7][8][9][11][13].\nLogistic regression is a classical method in statistical learning, and one of the best in batch-mode text categorization [15][14].\nIt was recently evaluated in adaptive filtering and was found to have relatively strong performance (Section 5.1).\nFurthermore, a recent paper [13] reported that the joint use of Rocchio and LR in a Bayesian framework outperformed the results of using each method alone on the TREC11 corpus.\nStimulated by those findings, we decided to include Rocchio and LR in our crossbenchmark evaluation for robustness testing.\nSpecifically, we focus on how much the performance of these methods depends on parameter tuning, what the most influential parameters are in these methods, how difficult (or how easy) to optimize these influential parameters using cross-corpus validation, how strong these methods perform on multiple benchmarks with the systematic tuning of parameters on other corpora, and how efficient these methods are in running AF on large benchmark corpora.\nThe organization of the paper is as follows: Section 2 introduces the four benchmark corpora (TREC10 and TREC11, TDT3 and TDT5) used in this study.\nSection 3 analyzes the differences among the TREC and TDT metrics (utilities and tracking cost) and the potential implications of those differences.\nSection 4 outlines the Rocchio and LR approaches to 
AF, respectively.\nSection 5 reports the experiments and results.\nSection 6 concludes the main findings in this study.\n2.\nBENCHMARK CORPORA We used four benchmark corpora in our study.\nTable 1 shows the statistics about these data sets.\nTREC10 was the evaluation benchmark for adaptive filtering in TREC 2001, consisting of roughly 806,791 Reuters news stories from August 1996 to August 1997 with 84 topic labels (subject categories)[7].\nThe first two weeks (August 20th to 31st , 1996) of documents is the training set, and the remaining 11 & 1\/2 months (from September 1st , 1996 to August 19th , 1997) is the test set.\nTREC11 was the evaluation benchmark for adaptive filtering in TREC 2002, consisting of the same set of documents as those in TREC10 but with a slightly different splitting point for the training and test sets.\nThe TREC11 topics (50) are quite different from those in TREC10; they are queries for retrieval with relevance judgments by NIST assessors [8].\nTDT3 was the evaluation benchmark in the TDT2001 dry run1 .\nThe tracking part of the corpus consists of 71,388 news stories from multiple sources in English and Mandarin (AP, NYT, CNN, ABC, NBC, MSNBC, Xinhua, Zaobao, Voice of America and PRI the World) in the period of October to December 1998.\nMachine-translated versions of the non-English stories (Xinhua, Zaobao and VOA Mandarin) are provided as well.\nThe splitting point for training-test sets is different for each topic in TDT.\nTDT5 was the evaluation benchmark in TDT2004 [4].\nThe tracking part of the corpus consists of 407,459 news stories in the period of April to September, 2003 from 15 news agents or broadcast sources in English, Arabic and Mandarin, with machine-translated versions of the non-English stories.\nWe only used the English versions of those documents in our experiments for this paper.\nThe TDT topics differ from TREC topics both conceptually and statistically.\nInstead of generic, ever-lasting subject categories (as those 
in TREC), TDT topics are defined at a finer level of granularity, for events that happen at certain times and locations, and that are born and die, typically associated with a bursty distribution over chronologically ordered news stories.\nThe average size of TDT topics (events) is two orders of magnitude smaller than that of the TREC10 topics.\nFigure 1 compares the document densities of a TREC topic (Civil Wars) and two TDT topics (Gunshot and APEC Summit Meeting, respectively) over a 3-month time period, where the area under each curve is normalized to one.\nThe granularity differences among topics and the corresponding non-stationary distributions make the cross-benchmark evaluation interesting.\nFor example, algorithms favoring large and stable topics may not work well for short-lasting and non-stationary topics, and vice versa.\nCross-benchmark evaluations allow us to test this hypothesis and possibly identify the weaknesses of current approaches to adaptive filtering in tracking the drifting trends of topics.\n1 http:\/\/www.ldc.upenn.edu\/Projects\/TDT2001\/topics.html\nTable 1: Statistics of benchmark corpora for adaptive filtering evaluations.\nN(tr) is the number of the initial training documents; N(ts) is the number of the test documents; n+ is the number of positive examples of a predefined topic; * is an average over all the topics.\nFigure 1: The temporal nature of topics (P(topic|week) per week for Gunshot (TDT5), APEC Summit Meeting (TDT3) and Civil War (TREC10)).\n3.\nMETRICS\nTo make our results comparable to the literature, we decided to use both TREC-conventional and TDT-conventional metrics in our evaluation.\n3.1 TREC11 metrics\nLet A, B, C and D be, respectively, the numbers of true positives, false alarms, misses and true negatives for a specific topic, and let N = A + B + C + D be the total number of test documents.\nThe TREC-conventional metrics are defined as: Precision = A \/ (A + B), Recall = A \/ (A + C), F_\u03b2 = (1 + \u03b2^2)A \/ ((1 + \u03b2^2)A + B + \u03b2^2 C), and T11SU = (max((A - \u03b2B) \/ (A + C), \u03b7) - \u03b7) \/ (1 - \u03b7), where parameters \u03b2 and \u03b7 were set to 0.5 and -0.5 respectively in TREC10 (2001) and TREC11 (2002).\nFor evaluating the performance of a system, the performance scores are computed for individual topics first and then averaged over topics (macro-averaging).\n3.2 TDT metrics\nThe TDT-conventional metric for topic tracking is defined as: Ctrk(T) = w1 P(T) Pmiss + w2 (1 - P(T)) Pfa, where P(T) is the percentage of documents on topic T, Pmiss is the miss rate by the system on that topic, Pfa is the false alarm rate, and w1 and w2 are the costs (pre-specified constants) for a miss and a false alarm, respectively.\nThe TDT benchmark evaluations (since 1997) have used the settings w1 = 1, w2 = 0.1 and P(T) = 0.02 for all topics.\nFor evaluating the performance of a system, Ctrk is computed for each topic first and then the resulting scores are averaged for a single measure (the topic-weighted Ctrk).\nTo make the intuition behind this measure transparent, we substitute the terms in the definition of Ctrk as follows: P(T) = (A + C) \/ N, 1 - P(T) = (B + D) \/ N, Pmiss = C \/ (A + C), Pfa = B \/ (B + D), giving Ctrk(T) = w1 ((A + C) \/ N)(C \/ (A + C)) + w2 ((B + D) \/ N)(B \/ (B + D)) = (1 \/ N)(w1 C + w2 B).\nClearly, Ctrk is the average cost per test document on topic T, with w1 and w2 controlling the penalty ratio for misses vs. 
false alarms.\nIn addition to Ctrk, TDT2004 also employed T11SU with \u03b2 = 0.1 as a utility metric.\nTo distinguish it from the \u03b2 = 0.5 setting of TREC11, we call the former TDT5SU in the rest of this paper.\nTable 1 (columns: Corpus, #Topics, N(tr), N(ts), Avg n+(tr), Avg n+(ts), Max n+(ts), Min n+(ts), #Topics per doc (ts)):\nTREC10: 84, 20,307, 783,484, 2, 9795.3, 39,448, 38, 1.57\nTREC11: 50, 80,664, 726,419, 3, 378.0, 597, 198, 1.12\nTDT3: 53, 18,738*, 37,770*, 4, 79.3, 520, 1, 1.06\nTDT5: 111, 199,419*, 207,991*, 1, 71.3, 710, 1, 1.01\n3.3 The correlations and the differences\nFrom an optimization point of view, TDT5SU and T11SU are both utility functions while Ctrk is a cost function.\nOur objective is to maximize the former or to minimize the latter on test documents.\nThe differences and correlations among these objective functions can be analyzed through the shared counts of A, B, C and D in their definitions.\nFor example, both TDT5SU and T11SU are positively correlated to the values of A and D, and negatively correlated to the values of B and C; the only difference between them is in their penalty ratios for misses vs. false alarms, i.e., 10:1 in TDT5SU and 2:1 in T11SU.\nThe Ctrk function, on the other hand, is positively correlated to the values of C and B, and negatively correlated to the values of A and D; hence, it is negatively correlated to T11SU and TDT5SU.\nMore importantly, there is a subtle and major difference between Ctrk and the utility functions T11SU and TDT5SU: Ctrk has a very different penalty ratio for misses vs. false alarms, favoring recall-oriented systems to an extreme.\nAt first glance, one would think that the penalty ratio in Ctrk is 10:1 since w1 = 1 and w2 = 0.1.\nHowever, this is not true if P(T) = 0.02 is an inaccurate estimate of the average percentage of on-topic documents for the test corpus.\nUsing TDT3 as an example, the true percentage is P(T) = n+ \/ N = 79.3 \/ 37,770 \u2248 0.002, where N is the average size of the test sets in TDT3, and n+ is the average number of positive examples per topic in the test sets.\nUsing P^(T) = 0.02 as an (inaccurate) estimate of 0.002 enlarges the intended penalty ratio of 10:1 to 100:1, roughly speaking.\nTo wit: Ctrk(T) = w1 P^(T) Pmiss + w2 (1 - P^(T)) Pfa = 1 \u00d7 0.02 \u00d7 C \/ (A + C) + 0.1 \u00d7 (1 - 0.02) \u00d7 B \/ (B + D) = \u03c1 ((A + C) \/ N)(C \/ (A + C)) + 0.1 \u00d7 ((37,770 - 79.3) \/ 37,770) \u00d7 B \/ (B + D) \u2248 10 \u00d7 C \/ N + 0.1 \u00d7 B \/ N = (1 \/ N)(10 C + 0.1 B), where \u03c1 = P^(T) \/ P(T) = 0.02 \/ 0.002 = 10 is the factor of enlargement in the estimation of P(T) compared to the truth.\nComparing the above result to the substituted form of Ctrk in Section 3.2, we can see the actual penalty ratio for misses vs. false alarms was 100:1 in the evaluations on TDT3 using Ctrk.\nSimilarly, we can compute the enlargement factor for TDT5 using the statistics in Table 1: \u03c1 = P^(T) \/ P(T) = 0.02 \/ (71.3 \/ 207,991) = 58.3, which means the actual penalty ratio for misses vs. false alarms in the evaluation on TDT5 using Ctrk was approximately 583:1.\nThe implications of the above analysis are rather significant: \u2022 Ctrk defined by the same formula does not necessarily mean the same objective function in evaluation; instead, the optimization criterion depends on the test corpus.\n\u2022 Systems optimized for Ctrk would not optimize TDT5SU (and T11SU) because the former favors high recall to an extreme while the latter does not.\n\u2022 Parameters tuned on one corpus (e.g., TDT3) might not work for an evaluation on another corpus (say, TDT5) unless we account for the previously unknown subtle dependency of Ctrk on the data.\n\u2022 Results in Ctrk in the past years of TDT evaluations may not be directly comparable to each other because the evaluation collections changed most years and hence the penalty ratio in Ctrk varied.\nAlthough these problems with Ctrk were not originally anticipated, it offered an opportunity to examine the ability of systems to trade off precision for extreme recall.\nThis was a challenging part of the TDT2004 evaluation for AF.\nComparing the metrics in TDT and TREC from a utility or cost optimization point of view is important for understanding the evaluation results of adaptive filtering methods.\nThis is the first time this issue is explicitly analyzed, to our knowledge.\n4.\nMETHODS\n4.1 Incremental Rocchio for AF\nWe employed a common version of Rocchio-style classifiers which computes a prototype vector per topic T as: p(T) = \u03b1 q(T) + \u03b2 (1 \/ |D+(T)|) \u2211_{d \u2208 D+(T)} d - \u03b3 (1 \/ |D-(T)|) \u2211_{d' \u2208 D-(T)} d'.\nThe first term on the RHS is the weighted vector representation of the topic description, whose elements are term weights.\nThe second term is the weighted centroid of the set D+(T) of positive training examples, each of which is a vector of within-document term weights.\nThe third term is the weighted centroid of the set D-(T) of negative training 
examples which are the nearest neighbors of the positive centroid.\nThe three terms are given pre-specified weights \u03b1, \u03b2 and \u03b3, controlling the relative influence of these components in the prototype.\nThe prototype of a topic is updated each time the system makes a yes decision on a new document for that topic.\nIf relevance feedback is available (as is the case in TREC adaptive filtering), the new document is added to the pool of either D+(T) or D-(T), and the prototype is recomputed accordingly; if relevance feedback is not available (as is the case in TDT event tracking), the system's prediction (yes) is treated as the truth, and the new document is added to D+(T) for updating the prototype.\nBoth cases are part of our experiments in this paper (and part of the TDT2004 evaluations for AF).\nTo distinguish the two, we call the first case simply Rocchio and the second case PRF Rocchio, where PRF stands for pseudo-relevance feedback.\nPredictions on a new document are made by computing the cosine similarity between each topic prototype and the document vector, and then comparing the resulting score against a threshold: sign(cos(p(T), d_new) - \u03b8), where +1 means yes and -1 means no.\nThreshold calibration in incremental Rocchio is a challenging research topic.\nMultiple approaches have been developed.\nThe simplest is to use a universal threshold for all topics, tuned on a validation set and fixed during the testing phase.\nMore elaborate methods include probabilistic threshold calibration, which converts the non-probabilistic similarity scores to probabilities (i.e., P(T|d)) for utility optimization [9][13], and margin-based local regression for risk reduction [11].\nIt is beyond the scope of this paper to compare all the different ways to adapt Rocchio-style methods for AF.\nInstead, our focus here is to investigate the robustness of Rocchio-style methods in terms of how much their performance depends on elaborate system tuning, and how difficult (or how easy) it is to get good performance through cross-corpus parameter optimization.\nHence, we decided to use a relatively simple version of Rocchio as the baseline, i.e., with a universal threshold tuned on a validation corpus and fixed for all topics in the testing phase.\nThis simple version of Rocchio has been commonly used in past TDT benchmark evaluations for topic tracking, and had strong performance in the TDT2004 evaluations for adaptive filtering with and without relevance feedback (Section 5.1).\nResults of more complex variants of Rocchio are also discussed when relevant.\n4.2 Logistic Regression for AF\nLogistic regression (LR) estimates the posterior probability of a topic given a document using a sigmoid function: P(y = 1 | x, w) = 1 \/ (1 + e^{-w \u00b7 x}), where x is the document vector whose elements are term weights, w is the vector of regression coefficients, and y \u2208 {+1, -1} is the output variable corresponding to yes or no with respect to a particular topic.\nGiven a training set of labeled documents D = {(x_1, y_1), ..., (x_n, y_n)}, the standard regression problem is to find the maximum likelihood estimates of the regression coefficients (the model parameters): w_ml = argmax_w P(D | w) = argmin_w \u2211_{i=1}^{n} log(1 + exp(-y_i w \u00b7 x_i)).\nThis is a convex optimization problem which can be solved using a standard conjugate-gradient algorithm in O(INF) time for training per topic, where I is the average number of iterations needed for convergence, and N and F are the numbers of training documents and features, respectively [14].\nOnce the regression coefficients are optimized on the training data, the filtering prediction on each incoming document is made as: sign(P(y = 1 | x_new, w) - \u03b8_opt), where +1 means yes and -1 means no.\nNote that w is constantly updated whenever a new relevance judgment is available in the testing phase of AF, while the optimal threshold \u03b8_opt is constant, depending only on the predefined utility (or cost) function for evaluation.\nIf T11SU is the metric, for example, with the penalty ratio of 2:1 for misses and false alarms (Section 3.1), the optimal threshold for LR is 1 \/ (2 + 1) = 0.33 for all topics.\nWe modified the standard version of LR above to allow more flexible optimization criteria as follows: w_map = argmin_w { \u2211_{i=1}^{n} s(y_i) log(1 + exp(-y_i w \u00b7 x_i)) + \u03bb ||w - \u03bc||^2 }, where s(y_i) is taken to be \u03b1, \u03b2 and \u03b3 for query, positive and negative documents respectively, similar to the weights in Rocchio, giving different weights to the three kinds of training examples: topic descriptions (queries), on-topic documents and off-topic documents.\nThe second term in the objective function is for regularization, equivalent to adding a Gaussian prior to the regression coefficients with mean \u03bc and covariance matrix (1 \/ 2\u03bb) I, where I is the identity matrix.\nTuning \u03bb (\u2265 0) is theoretically justified for reducing model complexity (the effective degrees of freedom) and avoiding over-fitting on training data [5].\nHow to find an effective \u03bc is an open research issue, depending on the user's belief about the parameter space and the optimal range.\nThe solution of the modified objective function is called the Maximum A Posteriori (MAP) estimate, which reduces to the maximum likelihood solution of standard LR if \u03bb = 0.\n5.\nEVALUATIONS\nWe report our empirical findings in three parts: the TDT2004 official evaluation results, the cross-corpus parameter optimization results, and the results corresponding to the amounts of relevance feedback.\n5.1 TDT2004 benchmark results\nThe TDT2004 evaluations for adaptive filtering were conducted by NIST in November 2004.\nMultiple research teams participated and multiple runs 
from each team were allowed.\nCtrk and TDT5SU were used as the metrics.\nFigure 2 and Figure 3 show the results; the best run from each team was selected with respect to Ctrk or TDT5SU, respectively.\nOur Rocchio (with adaptive profiles but a fixed universal threshold for all topics) had the best result in Ctrk, and our logistic regression had the best result in TDT5SU.\nAll the parameters of our runs were tuned on the TDT3 corpus.\nResults for other sites are also listed anonymously for comparison.\nFigure 2: TDT2004 results in Ctrk (the lower the better) of systems using true relevance feedback; Ours is the Rocchio method: Ours 0.0324, Site2 0.0467, Site3 0.1366, Site4 0.2438.\nWe also put the 1st and 3rd quartiles as sticks for each site.2\nFigure 3: TDT2004 results in TDT5SU (the higher the better) of systems using true relevance feedback; Ours is LR with \u03bc = 0 and \u03bb = 0.005: Ours 0.7328, Site3 0.7281, Site2 0.6672, Site4 0.382.\nFigure 4: TDT2004 results in Ctrk (primary topic tracking results) of systems without using true relevance feedback; Ours is PRF Rocchio: Ours 0.0707, Site2 0.1545, Site5 0.5669, Site4 0.6507, Site6 0.8973.\nAdaptive filtering without using true relevance feedback was also a part of the evaluations.\nIn this case, systems had only one labeled training example per topic during the entire training and testing processes, although unlabeled test documents could be used as soon as predictions on them were made.\nSuch a setting was conventional for the Topic Tracking task in TDT until 2004.\nFigure 4 shows the summarized official submissions from each team.\nOur PRF Rocchio (with a fixed threshold for all the topics) had the best 
performance.\n2 We use quartiles rather than standard deviations since the former are more resistant to outliers.\n5.2 Cross-corpus parameter optimization\nHow much the strong performance of our systems depends on parameter tuning is an important question.\nBoth Rocchio and LR have parameters that must be pre-specified before the AF process.\nThe shared parameters include the sample weights \u03b1, \u03b2 and \u03b3, the sample size of the negative training documents (i.e., |D-(T)|), the term-weighting scheme, and the maximal number of non-zero elements in each document vector.\nThe method-specific parameters include the decision threshold in Rocchio, and \u03bc, \u03bb and MI (the maximum number of iterations in training) in LR.\nGiven that we only have one labeled example per topic in the TDT5 training sets, it is impossible to effectively optimize these parameters on the training data, and we had to choose an external corpus for validation.\nAmong the choices of TREC10, TREC11 and TDT3, we chose TDT3 (cf. 
Section 2) because it is most similar to TDT5 in terms of the nature of its topics.\nWe optimized the parameters of our systems on TDT3, and fixed those parameters in the runs on TDT5 for our submissions to TDT2004.\nWe also tested our methods on TREC10 and TREC11 for further analysis.\nSince exhaustive testing of all possible parameter settings is computationally intractable, we followed a step-wise forward-chaining procedure instead: we pre-specified an order of the parameters in a method (Rocchio or LR), and then tuned one parameter at a time while fixing the settings of the remaining parameters.\nWe repeated this procedure for several passes as time allowed.\nFigure 5: Performance curves of adaptive Rocchio (TDT5SU as a function of the decision threshold on TDT3, TDT5, TREC10 and TREC11).\nFigure 5 compares the performance curves in TDT5SU for Rocchio on TDT3, TDT5, TREC10 and TREC11 as the decision threshold varied.\nThese curves peak at different locations: the TDT3-optimal is closest to the TDT5-optimal, while the TREC10-optimal and TREC11-optimal are quite far away from the TDT5-optimal.\nIf we were using TREC10 or TREC11 instead of TDT3 as the validation corpus for TDT5, or if the TDT3 corpus were not available, we would have had difficulty obtaining strong performance for Rocchio in TDT2004.\nThe difficulty comes from the ad-hoc (non-probabilistic) scores generated by the Rocchio method: the distribution of the scores depends on the corpus, making cross-corpus threshold optimization a tricky problem.\nLogistic regression has less difficulty with respect to threshold tuning because it produces probabilistic scores Pr(y = 1 | x) from which the optimal threshold can be directly computed if the probability estimates are accurate.\nGiven the penalty ratios for misses vs. 
false alarms as 2:1 in T11SU, 10:1 in TDT5SU and 583:1 in Ctrk (Section 3.3), the corresponding optimal thresholds (t) are 0.33, 0.091 and 0.0017, respectively. Although the theoretical threshold could be inaccurate, it still suggests the range of near-optimal settings. With these threshold settings in our experiments for LR, we focused on the cross-corpus validation of the Bayesian prior parameters, that is, μ and λ. Table 2 summarizes the results.3 We measured the performance of the runs on TREC10 and TREC11 using T11SU, and the performance of the runs on TDT3 and TDT5 using TDT5SU. For comparison we also include the best results of Rocchio-based methods on these corpora, which are our own results of Rocchio on TDT3 and TDT5, and the best results reported by NIST for TREC10 and TREC11. From this set of results, we see that LR significantly outperformed Rocchio on all the corpora, even in the runs of standard LR without any tuning, i.e., λ = 0. This empirical finding is consistent with a previous report [13] for LR on TREC11, although our results for LR (0.585 to 0.608 in T11SU) are stronger than the results in that report (0.49 for standard LR and 0.54 for LR using a Rocchio prototype as the prior). More importantly, our cross-benchmark evaluation gives strong evidence for the robustness of LR. The robustness, we believe, comes from the probabilistic nature of the system-generated scores. That is, compared to the ad-hoc scores in Rocchio, the normalized posterior probabilities make threshold optimization in LR a much easier problem. Moreover, logistic regression is known to converge towards the Bayes classifier asymptotically, while Rocchio classifiers' parameters do not. Another interesting observation in these results is that the performance of LR did not improve when using a Rocchio prototype as the mean in the prior; instead, the performance decreased in some cases. This observation does not support the previous report by [13], but we
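The threshold computation referred to above follows directly from expected utility: with a miss-to-false-alarm penalty ratio r, accepting is worthwhile when p·r > (1 − p), i.e. when the probabilistic score exceeds t = 1/(1 + r). A quick sketch checking the three settings quoted in this section (0.33 ≈ 1/3, 0.091 ≈ 1/11, 0.0017 ≈ 1/584):

```python
def optimal_threshold(penalty_ratio):
    """Bayes-optimal accept threshold on Pr(y=1|x) for a miss:false-alarm
    penalty ratio r: accept iff p * r > (1 - p), i.e. p > 1 / (1 + r)."""
    return 1.0 / (1.0 + penalty_ratio)

thresholds = {name: round(optimal_threshold(r), 4)
              for name, r in [("T11SU", 2), ("TDT5SU", 10), ("Ctrk", 583)]}
```

This is why threshold tuning is easy for LR: the score scale is fixed by the probability calibration, independent of the corpus.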
are not surprised because we are not convinced that Rocchio prototypes are more accurate than LR models for topics in the early stage of the AF process, and we believe that using a Rocchio prototype as the mean in the Gaussian prior would introduce undesirable bias to LR. We also believe that variance reduction (in the testing phase) should be controlled by the choice of λ (but not μ), for which we conducted the experiments shown in Figure 6.

Table 2: Results of LR with different Bayesian priors
Corpus              TDT3    TDT5    TREC10  TREC11
LR(μ=0, λ=0)        0.7562  0.7737  0.585   0.5715
LR(μ=0, λ=0.01)     0.8384  0.7812  0.6077  0.5747
LR(μ=roc*, λ=0.01)  0.8138  0.7811  0.5803  0.5698
Best Rocchio        0.6628  0.6917  0.4964  0.475

3 The LR results (0.77 to 0.78) on TDT5 in this table are better than our TDT2004 official result (0.73) because parameter optimization has been improved afterwards.
4 The TREC10-best result (0.496 by Oracle) is only available in T10U, which is not directly comparable to the scores in T11SU, just indicative.
*: μ was set to the Rocchio prototype.

Figure 6: LR with varying lambda (curves: Ctrk on TDT3, TDT5SU on TDT3, TDT5SU on TDT5, T11SU on TREC11).
The performance of LR is summarized with respect to λ tuning on the corpora of TREC10, TREC11 and TDT3. The performance on each corpus was measured using the corresponding metric, that is, T11SU for the runs on TREC10 and TREC11, and TDT5SU and Ctrk for the runs on TDT3. In the case of maximizing the utilities, the safe interval for λ is between 0 and 0.01, meaning that the performance of regularized LR is stable, the same as or slightly improved over the performance of standard LR. In the case of minimizing Ctrk, the safe range for λ is between 0 and 0.1, and setting λ between 0.005 and 0.05 yielded relatively large improvements over the performance of standard LR because training a model for extremely
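The Gaussian-prior view discussed here corresponds to penalized (MAP) logistic regression: maximize the log-likelihood minus λ‖w − μ‖². Below is a minimal gradient-ascent sketch under that assumption, using toy dense vectors; the paper's actual optimizer, sparse representation and stopping rule are not specified here:

```python
import math

def train_map_lr(X, y, lam=0.01, mu=None, lr=0.1, iters=500):
    """MAP logistic regression with a Gaussian prior centered at mu:
    maximize sum_i log Pr(y_i | x_i; w) - lam * ||w - mu||^2 by gradient
    ascent. lam=0 recovers standard LR; mu defaults to the zero vector."""
    dim = len(X[0])
    mu = mu or [0.0] * dim
    w = mu[:]
    for _ in range(iters):
        grad = [-2.0 * lam * (w[j] - mu[j]) for j in range(dim)]
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
            for j in range(dim):
                grad[j] += (yi - p) * xi[j]
        w = [wj + lr * gj for wj, gj in zip(w, grad)]
    return w

def predict(w, x):
    """Probabilistic score Pr(y=1|x) under the learned weights."""
    return 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, x))))
```

Setting mu to a Rocchio prototype instead of the zero vector reproduces the prior-mean variant compared in Table 2.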
high recall is statistically more tricky, and hence more regularization is needed. In either case, tuning λ is relatively safe, and easy to do successfully through cross-corpus tuning. Another influential choice in our experimental settings is term weighting: we examined the binary, TF and TF-IDF (the ltc version) schemes. We found TF-IDF most effective for both Rocchio and LR, and used this setting in all our experiments.
5.3 Percentages of labeled data
How much relevance feedback (RF) would be needed during the AF process is a meaningful question in real-world applications. To answer it, we evaluated Rocchio and LR on TDT5 with the following settings:
• Basic Rocchio, no adaptation at all;
• PRF Rocchio, updating topic profiles without using true relevance feedback;
• Adaptive Rocchio, updating topic profiles using relevance feedback on system-accepted documents plus 10 documents randomly sampled from the pool of system-rejected documents;
• LR with μ = 0, λ = 0.01 and threshold = 0.004;
• All the parameters in Rocchio tuned on TDT3.
Table 3 summarizes the results in Ctrk: Adaptive Rocchio with relevance feedback on 0.6% of the test documents reduced the tracking cost by 54% over the result of PRF Rocchio, the best system in the TDT2004 evaluation for topic tracking without relevance feedback information. Incremental LR, on the other hand, was weaker but still impressive. Recall that Ctrk is an extremely high-recall oriented metric, causing frequent updating of profiles and hence an efficiency problem in LR. For this reason we set a higher threshold (0.004) instead of the theoretically optimal threshold (0.0017) in LR to avoid an intolerable computation cost. The computation time in machine-hours was 0.33 for the run of adaptive Rocchio and 14 for the run of LR on TDT5 when optimizing Ctrk. Table 4 summarizes the results in TDT5SU; adaptive LR was the winner in this case, with relevance feedback on 0.05% of
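The adaptive settings above (feedback only on system-accepted documents, optionally plus a small random sample of rejected ones) can be sketched as one online loop. The `Model` interface with `score`/`update` and the `oracle` feedback source are hypothetical stand-ins for the Rocchio or LR components:

```python
import random

def adaptive_filtering(model, stream, threshold, oracle, sample_rejected=0):
    """One pass of adaptive filtering: predict for each arriving document,
    request relevance feedback only on accepted documents (plus, optionally,
    a few randomly sampled rejected ones, as in the adaptive Rocchio runs),
    and update the model incrementally whenever feedback is obtained."""
    accepted, rejected = [], []
    for doc in stream:
        if model.score(doc) >= threshold:
            accepted.append(doc)
            model.update(doc, oracle(doc))  # feedback on accepted documents
        else:
            rejected.append(doc)
    for doc in random.sample(rejected, min(sample_rejected, len(rejected))):
        model.update(doc, oracle(doc))      # sampled system-rejected documents
    return accepted
```

Lowering the threshold trades computation for recall, which is exactly the Ctrk efficiency issue noted above.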
the test documents improving the utility by 20.9% over the results of PRF Rocchio.

Table 3: AF methods on TDT5 (performance in Ctrk)
          Base Roc  PRF Roc     Adp Roc  LR
% of RF   0%        0%          0.6%     0.2%
Ctrk      0.076     0.0707      0.0324   0.0382
±%        +7%       (baseline)  -54%     -46%

Table 4: AF methods on TDT5 (performance in TDT5SU)
          Base Roc  PRF Roc     Adp Roc  LR(λ=0.01)
% of RF   0%        0%          0.04%    0.05%
TDT5SU    0.57      0.6452      0.69     0.78
±%        -11.7%    (baseline)  +6.9%    +20.9%

Evidently, both Rocchio and LR are highly effective in adaptive filtering, in the sense of using a small amount of labeled data to significantly improve model accuracy in statistical learning, which is the main goal of AF.
5.4 Summary of Adaptation Process
After deciding the parameter settings through validation, we performed adaptive filtering in the following steps for each topic:
1) Train the LR/Rocchio model using the provided positive training examples and 30 randomly sampled negative examples;
2) For each document in the test corpus, first make a prediction about relevance, and then obtain relevance feedback for the (predicted) positive documents;
3) Incrementally update the model and the IDF statistics whenever true relevance feedback is obtained.
6. CONCLUDING REMARKS
We presented a cross-benchmark evaluation of incremental Rocchio and incremental LR in adaptive filtering, focusing on their robustness in terms of performance consistency with respect to cross-corpus parameter optimization. Our main conclusions from this study are the following:
• Parameter optimization in AF is an open challenge but has not been thoroughly studied in the past.
• Robustness in cross-corpus parameter tuning is important for evaluation and method comparison.
• We found LR more robust than Rocchio; it had the best results (in T11SU) ever reported on TDT5, TREC10 and TREC11 without extensive tuning.
• We found Rocchio performs strongly when a good validation corpus is available, and a preferred choice when optimizing
Ctrk is the objective, favoring recall over precision to an extreme.
For future research we want to study explicit modeling of the temporal trends in topic distributions and content drifting.
Acknowledgments
This material is based upon work supported in part by the National Science Foundation (NSF) under grant IIS-0434035, by the DoD under award 114008-N66001992891808, and by the Defense Advanced Research Projects Agency (DARPA) under Contract No. NBCHD030010. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors.
7. REFERENCES
[1] J. Allan. Incremental relevance feedback for information filtering. In SIGIR-96, 1996.
[2] J. Callan. Learning while filtering documents. In SIGIR-98, 224-231, 1998.
[3] J. Fiscus and G. Doddington. Topic detection and tracking overview. In Topic Detection and Tracking: Event-based Information Organization, 17-31, 2002.
[4] J. Fiscus and B. Wheatley. Overview of the TDT 2004 evaluation and results. In TDT-04, 2004.
[5] T. Hastie, R. Tibshirani and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
[6] S. Robertson and D. Hull. The TREC-9 filtering track final report. In TREC-9, 2000.
[7] S. Robertson and I. Soboroff. The TREC-10 filtering track final report. In TREC-10, 2001.
[8] S. Robertson and I. Soboroff. The TREC 2002 filtering track report. In TREC-11, 2002.
[9] S. Robertson and S. Walker. Microsoft Cambridge at TREC-9. In TREC-9, 2000.
[10] R. Schapire, Y. Singer and A. Singhal. Boosting and Rocchio applied to text filtering. In SIGIR-98, 215-223, 1998.
[11] Y. Yang and B. Kisiel. Margin-based local regression for adaptive filtering. In CIKM-03, 2003.
[12] Y. Zhang and J. Callan. Maximum likelihood estimation for filtering thresholds. In SIGIR-01, 2001.
[13] Y. Zhang. Using Bayesian priors to combine classifiers for adaptive filtering. In SIGIR-04, 2004.
[14] J.
Zhang and Y. Yang. Robustness of regularized linear classification methods in text categorization. In SIGIR-03, 190-197, 2003.
[15] T. Zhang and F. J. Oles. Text categorization based on regularized linear classification methods. Information Retrieval, 4(1): 5-31, 2001.","lvl-2":"Robustness of Adaptive Filtering Methods In a Cross-benchmark Evaluation
ABSTRACT
This paper reports a cross-benchmark evaluation of regularized logistic regression (LR) and incremental Rocchio for adaptive filtering. Using four corpora from the Topic Detection and Tracking (TDT) forum and the Text Retrieval Conferences (TREC) we evaluated these methods with non-stationary topics at various granularity levels, and measured performance with different utility settings. We found that LR performs strongly and robustly in optimizing T11SU (a TREC utility function) while Rocchio is
better for optimizing Ctrk (the TDT tracking cost), a high-recall oriented objective function. Using systematic cross-corpus parameter optimization with both methods, we obtained the best results ever reported on TDT5, TREC10 and TREC11. Relevance feedback on a small portion (0.05% to 0.2%) of the TDT5 test documents yielded significant performance improvements, measuring up to a 54% reduction in Ctrk and a 20.9% increase in T11SU (with β = 0.1), compared to the results of the top-performing system in TDT2004 without relevance feedback information.
1. INTRODUCTION
Adaptive filtering (AF) has been a challenging research topic in information retrieval. The task is for the system to make an online topic membership decision (yes or no) for every document, as soon as it arrives, with respect to each pre-defined topic of interest. Starting from 1997 in the Topic Detection and Tracking (TDT) area and 1998 in the Text Retrieval Conferences (TREC), benchmark evaluations have been conducted by NIST under the following conditions [6] [7] [8] [3] [4]:
• A very small number (1 to 4) of positive training examples was provided for each topic at the starting point.
• Relevance feedback was available but only for the system-accepted documents (with a "yes" decision) in the TREC evaluations for AF.
• Relevance feedback (RF) was not allowed in the TDT evaluations for AF (or topic tracking in the TDT terminology) until 2004.
• TDT2004 was the first time that TREC and TDT metrics were jointly used in evaluating AF methods on the same benchmark (the TDT5 corpus) where non-stationary topics dominate.
The above conditions attempt to mimic realistic situations where an AF system would be used. That is, the user would be willing to provide a few positive examples for each topic of interest at the start, and might or might not be able to provide additional labeling on a small portion of incoming documents through relevance feedback. Furthermore, topics
of interest might change over time, with new topics appearing and growing, and old topics shrinking and diminishing.\nThese conditions make adaptive filtering a difficult task in statistical learning (online classification), for the following reasons:\n1) it is difficult to learn accurate models for prediction based on extremely sparse training data; 2) it is not obvious how to correct the sampling bias (i.e., relevance feedback on system-accepted documents only) during the adaptation process; 3) it is not well understood how to effectively tune parameters in AF methods using cross-corpus validation where the validation and evaluation topics do not overlap, and the documents may be from different sources or different epochs.\nNone of these problems is addressed in the literature of statistical learning for batch classification where all the training data are given at once.\nThe first two problems have been studied in the adaptive filtering literature, including topic profile adaptation using incremental Rocchio, Gaussian-Exponential density models, logistic regression in a Bayesian framework, etc., and threshold optimization strategies using probabilistic\ncalibration or local fitting techniques [1] [2] [9] [10] [11] [12] [13].\nAlthough these works provide valuable insights for understanding the problems and possible solutions, it is difficult to draw conclusions regarding the effectiveness and robustness of current methods because the third problem has not been thoroughly investigated.\nAddressing the third issue is the main focus in this paper.\nWe argue that robustness is an important measure for evaluating and comparing AF methods.\nBy \"robust\" we mean consistent and strong performance across benchmark corpora with a systematic method for parameter tuning across multiple corpora.\nMost AF methods have pre-specified parameters that may influence the performance significantly and that must be determined before the test process starts.\nAvailable training 
examples, on the other hand, are often insufficient for tuning the parameters. In TDT5, for example, there is only one labeled training example per topic at the start; parameter optimization on such training data is doomed to be ineffective. This leaves only one option (assuming tuning on the test set is not an alternative): choosing an external corpus as the validation set. Notice that the validation-set topics often do not overlap with the test-set topics; thus the parameter optimization is performed under the tough condition that the validation data and the test data may be quite different from each other. Now the important question is: which methods (if any) are robust under the condition of using cross-corpus validation to tune parameters? The current literature does not offer an answer because no thorough investigation of the robustness of AF methods has been reported. In this paper we address the above question by conducting a cross-benchmark evaluation of two effective approaches to AF: incremental Rocchio and regularized logistic regression (LR). Rocchio-style classifiers have been popular in AF, with good performance in benchmark evaluations (TREC and TDT) if appropriate parameters were used and if combined with an effective threshold calibration strategy [2] [4] [7] [8] [9] [11] [13]. Logistic regression is a classical method in statistical learning, and one of the best in batch-mode text categorization [14] [15]. It was recently evaluated in adaptive filtering and was found to have relatively strong performance (Section 5.1). Furthermore, a recent paper [13] reported that the joint use of Rocchio and LR in a Bayesian framework outperformed each method alone on the TREC11 corpus. Stimulated by those findings, we decided to include Rocchio and LR in our cross-benchmark evaluation for robustness testing. Specifically, we focus on how much the performance of these methods depends on parameter tuning, what the most
influential parameters are in these methods, how difficult (or easy) it is to optimize these influential parameters using cross-corpus validation, how strongly these methods perform on multiple benchmarks with systematic tuning of parameters on other corpora, and how efficient these methods are in running AF on large benchmark corpora.
The organization of the paper is as follows: Section 2 introduces the four benchmark corpora (TREC10, TREC11, TDT3 and TDT5) used in this study. Section 3 analyzes the differences among the TREC and TDT metrics (utilities and tracking cost) and the potential implications of those differences. Section 4 outlines the Rocchio and LR approaches to AF, respectively. Section 5 reports the experiments and results. Section 6 concludes with the main findings of this study.
2. BENCHMARK CORPORA
We used four benchmark corpora in our study. Table 1 shows the statistics of these data sets. TREC10 was the evaluation benchmark for adaptive filtering in TREC 2001, consisting of 806,791 Reuters news stories from August 1996 to August 1997 with 84 topic labels (subject categories) [7]. The first two weeks of documents (August 20th to 31st, 1996) form the training set, and the remaining 11 and a half months (September 1st, 1996 to August 19th, 1997) form the test set. TREC11 was the evaluation benchmark for adaptive filtering in TREC 2002, consisting of the same set of documents as TREC10 but with a slightly different splitting point between the training and test sets. The 50 TREC11 topics are quite different from those in TREC10; they are queries for retrieval with relevance judgments by NIST assessors [8]. TDT3 was the evaluation benchmark in the TDT2001 dry run. The tracking part of the corpus consists of 71,388 news stories from multiple sources in English and Mandarin (AP, NYT, CNN, ABC, NBC, MSNBC, Xinhua, Zaobao, Voice of America and PRI the World) in the period of October to December 1998. Machine-translated
versions of the non-English stories (Xinhua, Zaobao and VOA Mandarin) are provided as well. The splitting point between training and test sets is different for each topic in TDT. TDT5 was the evaluation benchmark in TDT2004 [4]. The tracking part of the corpus consists of 407,459 news stories in the period of April to September 2003 from 15 news agencies or broadcast sources in English, Arabic and Mandarin, with machine-translated versions of the non-English stories. We only used the English versions of those documents in our experiments for this paper. The TDT "topics" differ from TREC topics both conceptually and statistically. Instead of generic, everlasting subject categories (like those in TREC), TDT topics are defined at a finer level of granularity, for events that happen at certain times and locations, and that are "born" and "die", typically associated with a bursty distribution over chronologically ordered news stories. The average size of TDT topics (events) is two orders of magnitude smaller than that of the TREC10 topics. Figure 1 compares the document densities of a TREC topic ("Civil Wars") and two TDT topics ("Gunshot" and "APEC Summit Meeting") over a 3-month time period, where the area under each curve is normalized to one. The granularity differences among topics and the corresponding non-stationary distributions make the cross-benchmark evaluation interesting. For example, algorithms favoring large and stable topics may not work well for short-lasting and non-stationary topics, and vice versa. Cross-benchmark evaluations allow us to test this hypothesis and possibly identify the weaknesses of current approaches to adaptive filtering in tracking the drifting trends of topics.

Table 1: Statistics of benchmark corpora for adaptive filtering evaluations

Figure 1: The temporal nature of topics

3. METRICS

To make our results comparable to the literature, we decided to use both TREC-conventional and TDT-conventional
metrics in our evaluation.

3.1 TREC11 metrics

Let A, B, C and D be, respectively, the numbers of true positives, false alarms, misses and true negatives for a specific topic, and let N = A + B + C + D be the total number of test documents. The TREC-conventional metrics are defined as:

T11NU = (A − βB) / (A + C),  T11SU = (max(T11NU, η) − η) / (1 − η)

where parameters β and η were set to 0.5 and −0.5 respectively in TREC10 (2001) and TREC11 (2002). For evaluating the performance of a system, the performance scores are computed for individual topics first and then averaged over topics (macro-averaging).

3.2 TDT metrics

The TDT-conventional metric for topic tracking is defined as:

Ctrk = w1 · Pmiss · P(T) + w2 · Pfa · (1 − P(T))

where P(T) is the percentage of documents on topic T, Pmiss is the miss rate of the system on that topic, Pfa is the false alarm rate, and w1 and w2 are the costs (pre-specified constants) for a miss and a false alarm, respectively. The TDT benchmark evaluations (since 1997) have used the settings of w1 = 1, w2 = 0.1 and P(T) = 0.02 for all topics. For evaluating the performance of a system, Ctrk is computed for each topic first and then the resulting scores are averaged into a single measure (the topic-weighted Ctrk). To make the intuition behind this measure transparent, we substitute the terms in the definition of Ctrk, using Pmiss = C/(A + C), Pfa = B/(B + D) and P(T) = (A + C)/N:

Ctrk = (w1 · C + w2 · B) / N

Clearly, Ctrk is the average cost per error on topic T, with w1 and w2 controlling the penalty ratio for misses vs. false alarms. In addition to Ctrk, TDT2004 also employed T11SU with β = 0.1 as a utility metric. To distinguish this from the T11SU with β = 0.5 in TREC11, we call the former TDT5SU in the rest of this paper.

3.3 The correlations and the differences

From an optimization point of view, TDT5SU and T11SU are both utility functions while Ctrk is a cost function. Our objective is to maximize the former or to minimize the latter on test documents. The differences and correlations among these objective functions can be analyzed through the shared counts of A, B, C and D in their definitions. For example, both TDT5SU and T11SU are positively correlated with the values of A and D, and negatively correlated with the values of B and C; the only difference between them is in their penalty ratios for misses vs. false alarms, i.e., 10:1 in TDT5SU and 2:1 in T11SU. The Ctrk function, on the other hand, is positively correlated with the values of C and B, and negatively correlated with the values of A and D; hence, it is negatively correlated with T11SU and TDT5SU. More importantly, there is a subtle but major difference between Ctrk and the utility functions T11SU and TDT5SU: Ctrk has a very different penalty ratio for misses vs. false alarms, favoring recall-oriented systems to an extreme. At first glance, one would think that the penalty ratio in Ctrk is 10:1, since w1 = 1 and w2 = 0.1. However, this is not true when P(T) = 0.02 is an inaccurate estimate of the average proportion of on-topic documents in the test corpus. Using TDT3 as an example, the true percentage is:

P(T) = n+ / N ≈ 0.002

where N is the average size of the test sets in TDT3, and n+ is the average number of positive examples per topic in the test sets. Using P̂(T) = 0.02 as an (inaccurate) estimate of 0.002 enlarges the intended penalty ratio of 10:1 to 100:1, roughly speaking. To wit, the effective penalty ratio is approximately (w1/w2) × (P̂(T)/P(T)) = 10 × 10 = 100, reflecting the ten-fold over-estimation of P(T) compared to the truth. Comparing the above result to the average-cost form of Ctrk derived earlier, we can see the actual penalty ratio for misses vs.
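To make the relationship among these objective functions concrete, the following small Python sketch computes them from the shared counts A, B, C, D. The function names and the normalized T11SU form are our paraphrase of the definitions in Sections 3.1-3.2, not code from the official evaluations.

```python
# Illustrative sketch of the TREC/TDT objective functions over shared counts.
# A = hits, B = false alarms, C = misses, D = correct rejections.

def t11su(A, B, C, D, beta=0.5, eta=-0.5):
    """Scaled TREC-11 utility; beta sets the miss/false-alarm penalty ratio."""
    t11nu = (A - beta * B) / (A + C)            # normalized utility
    return (max(t11nu, eta) - eta) / (1 - eta)  # rescaled into [0, 1]

def c_trk(A, B, C, D, w1=1.0, w2=0.1, p_topic=0.02):
    """TDT tracking cost with the fixed prior P(T) = 0.02."""
    p_miss = C / (A + C)
    p_fa = B / (B + D)
    return w1 * p_miss * p_topic + w2 * p_fa * (1 - p_topic)

def effective_penalty_ratio(w1=1.0, w2=0.1, p_hat=0.02, p_true=0.002):
    """Effective miss/false-alarm penalty ratio when the fixed P(T)
    overestimates the true topic density (10:1 intended -> 100:1 on TDT3)."""
    return (w1 / w2) * (p_hat / p_true)
```

A perfect classifier scores 1.0 in t11su and 0.0 in c_trk; the defaults of effective_penalty_ratio reproduce the TDT3 enlargement discussed above.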
false alarms was 100:1 in the evaluations on TDT3 using Ctrk. Similarly, we can compute the enlargement factor for TDT5 using the statistics in Table 1, which shows that the actual penalty ratio for misses vs. false alarms in the evaluation on TDT5 using Ctrk was approximately 583:1. The implications of the above analysis are rather significant:

• Ctrk defined by the same formula does not necessarily mean the same objective function in evaluation; instead, the optimization criterion depends on the test corpus.
• Systems optimized for Ctrk would not optimize TDT5SU (and T11SU), because the former favors high recall to an extreme while the latter do not.
• Parameters tuned on one corpus (e.g., TDT3) might not work for an evaluation on another corpus (say, TDT5) unless we account for the previously unknown subtle dependency of Ctrk on the data.
• Results in Ctrk from past years of TDT evaluations may not be directly comparable to each other, because the evaluation collections changed most years and hence the penalty ratio in Ctrk varied.

Although these problems with Ctrk were not originally anticipated, they offered an opportunity to examine the ability of systems to trade off precision for extreme recall. This was a challenging part of the TDT2004 evaluation for AF. Comparing the metrics in TDT and TREC from a utility or cost optimization point of view is important for understanding the evaluation results of adaptive filtering methods. To our knowledge, this is the first time this issue has been explicitly analyzed.

4. METHODS

4.1 Incremental Rocchio for AF

We employed a common version of Rocchio-style classifiers, which computes a prototype vector per topic T as follows:

p(T) = α · q(T) + β · (1/|D+(T)|) · Σ{d ∈ D+(T)} d − γ · (1/|D−(T)|) · Σ{d ∈ D−(T)} d

The first term on the RHS is the weighted vector representation of the topic description, whose elements are term weights. The second term is the weighted centroid of the set D+(T) of positive training examples, each of which is a vector of within-document
term weights. The third term is the weighted centroid of the set D−(T) of negative training examples, which are the nearest neighbors of the positive centroid. The three terms are given pre-specified weights of α, β and γ, controlling the relative influence of these components in the prototype. The prototype of a topic is updated each time the system makes a "yes" decision on a new document for that topic. If relevance feedback is available (as is the case in TREC adaptive filtering), the new document is added to the pool of either D+(T) or D−(T), and the prototype is recomputed accordingly; if relevance feedback is not available (as is the case in TDT event tracking), the system's prediction ("yes") is treated as the truth, and the new document is added to D+(T) for updating the prototype. Both cases are part of our experiments in this paper (and part of the TDT 2004 evaluations for AF). To distinguish the two, we call the first case simply "Rocchio" and the second case "PRF Rocchio", where PRF stands for pseudo-relevance feedback. Predictions on a new document are made by computing the cosine similarity between each topic prototype and the document vector, and then comparing the resulting score against a threshold:

ŷ(d, T) = "yes" if cos(p(T), d) ≥ θ, and "no" otherwise

Threshold calibration in incremental Rocchio is a challenging research topic, and multiple approaches have been developed. The simplest is to use a universal threshold for all topics, tuned on a validation set and fixed during the testing phase. More elaborate methods include probabilistic threshold calibration, which converts the non-probabilistic similarity scores into probabilities (i.e., P(T | d)) for utility optimization [9] [13], and margin-based local regression for risk reduction [11]. It is beyond the scope of this paper to compare all the different ways to adapt Rocchio-style methods for AF. Instead, our focus here is to investigate the robustness of Rocchio-style methods in terms of how
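The Rocchio prototype and thresholded cosine decision described above can be sketched in a few lines of Python. The plain-list vectors and the default weights are illustrative stand-ins for the paper's term-weight vectors and tuned α, β, γ settings.

```python
import math

def rocchio_prototype(query, pos_docs, neg_docs, alpha=1.0, beta=1.0, gamma=1.0):
    """prototype = alpha*query + beta*centroid(D+) - gamma*centroid(D-).
    Vectors are plain lists of term weights; the weights are illustrative."""
    dim = len(query)
    def centroid(docs):
        if not docs:
            return [0.0] * dim
        return [sum(d[i] for d in docs) / len(docs) for i in range(dim)]
    cp, cn = centroid(pos_docs), centroid(neg_docs)
    return [alpha * query[i] + beta * cp[i] - gamma * cn[i] for i in range(dim)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def filter_decision(prototype, doc, threshold):
    """'yes' (True) if the cosine score clears the universal threshold."""
    return cosine(prototype, doc) >= threshold
```

In the adaptive setting, a "yes" decision would trigger adding the document to D+(T) or D−(T) (depending on feedback) and recomputing the prototype.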
much their performance depends on elaborate system tuning, and how difficult (or easy) it is to get good performance through cross-corpus parameter optimization. Hence, we decided to use a relatively simple version of Rocchio as the baseline, i.e., with a universal threshold tuned on a validation corpus and fixed for all topics in the testing phase. This simple version of Rocchio has been commonly used in past TDT benchmark evaluations for topic tracking, and had strong performance in the TDT2004 evaluations for adaptive filtering with and without relevance feedback (Section 5.1). Results of more complex variants of Rocchio are also discussed when relevant.

4.2 Logistic Regression for AF

Logistic regression (LR) estimates the posterior probability of a topic given a document using a sigmoid function:

P(y = 1 | x) = 1 / (1 + e^(−w · x))

where x is the document vector whose elements are term weights, w is the vector of regression coefficients, and y ∈ {+1, −1} is the output variable corresponding to "yes" or "no" with respect to a particular topic. Given a training set of labeled documents D = {(x1, y1), ..., (xn, yn)}, the standard regression problem is to find the maximum likelihood estimates of the regression coefficients ("the model parameters"):

ŵ = arg max_w Σi log P(yi | xi, w)

This is a convex optimization problem which can be solved using a standard conjugate gradient algorithm in O(I · N · F) time for training per topic, where I is the average number of iterations needed for convergence, and N and F are the number of training documents and the number of features, respectively [14]. Once the regression coefficients are optimized on the training data, the filtering prediction on each incoming document is made as:

ŷ(x) = "yes" if P(y = 1 | x) ≥ θopt, and "no" otherwise

Note that w is constantly updated whenever a new relevance judgment is available in the testing phase of AF, while the optimal threshold θopt is constant, depending only on the predefined utility (or cost) function for evaluation. If T11SU is the metric, for example, with
the penalty ratio of 2:1 for misses and false alarms (Section 3.1), the optimal threshold for LR is 1/(2 + 1) = 0.33 for all topics. We modified the standard (above) version of LR to allow more flexible optimization criteria, as follows:

ŵ = arg max_w { Σi s(yi) · log P(yi | xi, w) − λ · ||w − μ||² }

where s(yi) is taken to be α, β or γ for query, positive and negative documents respectively, similar to the weights in Rocchio, giving different weights to the three kinds of training examples: topic descriptions ("queries"), on-topic documents and off-topic documents. The second term in the objective function is for regularization, equivalent to adding a Gaussian prior to the regression coefficients with mean μ and covariance matrix 1/(2λ) · I, where I is the identity matrix. Tuning λ (≥ 0) is theoretically justified for reducing model complexity ("the effective degrees of freedom") and avoiding over-fitting on the training data [5]. How to find an effective μ is an open research issue, depending on the user's belief about the parameter space and the optimal range. The solution of the modified objective function is called the Maximum A Posteriori (MAP) estimate, which reduces to the maximum likelihood solution of standard LR if λ = 0.

5. EVALUATIONS

We report our empirical findings in four parts: the TDT2004 official evaluation results, the cross-corpus parameter optimization results, the results with varying amounts of relevance feedback, and a summary of the adaptation process.

5.1 TDT2004 benchmark results

The TDT2004 evaluations for adaptive filtering were conducted by NIST in November 2004. Multiple research teams participated, and multiple runs from each team were allowed. Ctrk and TDT5SU were used as the metrics. Figures 2 and 3 show the results; the best run from each team was selected with respect to Ctrk or TDT5SU, respectively. Our Rocchio (with adaptive profiles but a fixed universal threshold for all topics) had the best result in Ctrk, and
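The utility-optimal threshold used for LR follows from expected-utility maximization: with a miss/false-alarm penalty ratio of r:1, accepting a document is optimal when its posterior probability satisfies p ≥ 1/(r + 1). A minimal sketch (the function names are ours):

```python
import math

def sigmoid_score(w, x):
    """Posterior P(y = 1 | x) under a logistic model with coefficients w."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def optimal_threshold(penalty_ratio):
    """Accept iff P(y = 1 | x) >= 1/(penalty_ratio + 1).
    Reproduces the ratios discussed in the text:
    2:1 -> 0.33, 10:1 -> 0.091, 583:1 -> 0.0017."""
    return 1.0 / (penalty_ratio + 1.0)
```

This is why LR needs no per-corpus threshold search: as long as the probability estimates are well calibrated, the threshold is fixed by the metric alone.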
our logistic regression had the best result in TDT5SU. All the parameters of our runs were tuned on the TDT3 corpus. Results for other sites are also listed anonymously for comparison.

Figure 2: TDT2004 results in Ctrk of systems using true relevance feedback. ("Ours" is the Rocchio method.) We also show the 1st and 3rd quartiles as sticks for each site.2

Figure 3: TDT2004 results in TDT5SU of systems using true relevance feedback. ("Ours" is LR with μ = 0 and λ = 0.005.)

Figure 4: TDT2004 results in Ctrk of systems without using true relevance feedback. ("Ours" is PRF Rocchio.)

Adaptive filtering without using true relevance feedback was also a part of the evaluations. In this case, systems had only one labeled training example per topic during the entire training and testing processes, although unlabeled test documents could be used as soon as predictions on them were made. Such a setting was conventional for the Topic Tracking task in TDT until 2004. Figure 4 summarizes the official submissions from each team. Our PRF Rocchio (with a fixed threshold for all topics) had the best performance.

2 We use quartiles rather than standard deviations since the former are more resistant to outliers.

5.2 Cross-corpus parameter optimization

How much the strong performance of our systems depends on parameter tuning is an important question. Both Rocchio and LR have parameters that must be pre-specified before the AF process. The shared parameters include the sample weights α, β and γ, the sample size of the negative training set D−(T), the term-weighting scheme, and the maximal number of non-zero elements in each document vector. The method-specific parameters include the decision threshold in Rocchio, and μ, λ and MI (the maximum number of iterations in training) in LR. Given that we only have one labeled example per topic in the TDT5 training sets, it is impossible to
effectively optimize these parameters on the training data, and we had to choose an external corpus for validation. Among the choices of TREC10, TREC11 and TDT3, we chose TDT3 (cf. Section 2) because it is most similar to TDT5 in terms of the nature of its topics. We optimized the parameters of our systems on TDT3, and fixed those parameters in the runs on TDT5 for our submissions to TDT2004. We also tested our methods on TREC10 and TREC11 for further analysis. Since exhaustive testing of all possible parameter settings is computationally intractable, we followed a step-wise forward-chaining procedure instead: we pre-specified an order of the parameters in a method (Rocchio or LR), and then tuned one parameter at a time while fixing the settings of the remaining parameters. We repeated this procedure for several passes as time allowed.

Figure 5: Performance curves of adaptive Rocchio

Figure 5 compares the performance curves in TDT5SU for Rocchio on TDT3, TDT5, TREC10 and TREC11 as the decision threshold varies. These curves peak at different locations: the TDT3-optimal is closest to the TDT5-optimal, while the TREC10-optimal and TREC11-optimal are quite far away from the TDT5-optimal. If we had used TREC10 or TREC11 instead of TDT3 as the validation corpus for TDT5, or if the TDT3 corpus had not been available, we would have had difficulty obtaining strong performance for Rocchio in TDT2004. The difficulty comes from the ad hoc (non-probabilistic) scores generated by the Rocchio method: the distribution of the scores depends on the corpus, making cross-corpus threshold optimization a tricky problem. Logistic regression has less difficulty with respect to threshold tuning because it produces probabilistic scores of Pr(y = 1 | x), upon which the optimal threshold can be computed directly if the probability estimates are accurate. Given the penalty ratio for misses vs.
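The step-wise forward-chaining search described above amounts to coordinate ascent over an ordered list of parameter grids. The sketch below is our own illustration; the grids and the evaluate function are placeholders, not the actual settings used in the experiments.

```python
def forward_chaining_tune(param_grids, evaluate, passes=2):
    """Tune one parameter at a time in a fixed order, holding the rest fixed.
    param_grids: ordered dict of name -> candidate values.
    evaluate: maps a full settings dict to a validation score (higher = better).
    Repeats for several passes, as in the step-wise procedure above."""
    settings = {name: grid[0] for name, grid in param_grids.items()}
    for _ in range(passes):
        for name, grid in param_grids.items():
            # Pick the best value for this parameter, others held fixed.
            settings[name] = max(grid, key=lambda v: evaluate({**settings, name: v}))
    return settings
```

For example, with hypothetical grids for the Rocchio weights and a toy objective, the search returns the per-coordinate maxima after one pass.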
false alarms of 2:1 in T11SU, 10:1 in TDT5SU and 583:1 in Ctrk (Section 3.3), the corresponding optimal thresholds (t) are 0.33, 0.091 and 0.0017, respectively. Although the theoretical threshold could be inaccurate, it still suggests the range of near-optimal settings. With these threshold settings in our experiments for LR, we focused on the cross-corpus validation of the Bayesian prior parameters, that is, μ and λ. Table 2 summarizes the results3. We measured the performance of the runs on TREC10 and TREC11 using T11SU, and the performance of the runs on TDT3 and TDT5 using TDT5SU. For comparison we also include the best results of Rocchio-based methods on these corpora, which are our own results of Rocchio on TDT3 and TDT5, and the best results reported by NIST for TREC10 and TREC11. From this set of results, we see that LR significantly outperformed Rocchio on all the corpora, even in the runs of standard LR without any tuning, i.e., λ = 0. This empirical finding is consistent with a previous report [13] for LR on TREC11, although our results for LR (0.585 ~ 0.608 in T11SU) are stronger than the results in that report (0.49 for standard LR and 0.54 for LR using a Rocchio prototype as the prior). More importantly, our cross-benchmark evaluation gives strong evidence for the robustness of LR. The robustness, we believe, comes from the probabilistic nature of the system-generated scores. That is, compared to the ad hoc scores in Rocchio, the normalized posterior probabilities make threshold optimization in LR a much easier problem. Moreover, logistic regression is known to converge towards the Bayes classifier asymptotically, while Rocchio classifiers do not. Another interesting observation in these results is that the performance of LR did not improve when using a Rocchio prototype as the mean in the prior; instead, the performance decreased in some cases. This observation does not support the previous report by [13], but we are not
surprised, because we are not convinced that Rocchio prototypes are more accurate than LR models for topics in the early stage of the AF process, and we believe that using a Rocchio prototype as the mean in the Gaussian prior would introduce an undesirable bias to LR. We also believe that variance reduction (in the testing phase) should be controlled by the choice of λ (but not μ), for which we conducted the experiments shown in Figure 6.

Table 2: Results of LR with different Bayesian priors

3 The LR results (0.77 ~ 0.78) on TDT5 in this table are better than our TDT2004 official result (0.73) because parameter optimization was improved afterwards.

4 The TREC10-best result (0.496 by Oracle) is only available in T10U, which is not directly comparable to the scores in T11SU, just indicative.

*: μ was set to the Rocchio prototype

Figure 6: LR with varying λ.

The performance of LR is summarized with respect to λ tuning on the corpora of TREC10, TREC11 and TDT3. The performance on each corpus was measured using the corresponding metrics, that is, T11SU for the runs on TREC10 and TREC11, and TDT5SU and Ctrk for the runs on TDT3. In the case of maximizing the utilities, the "safe" interval for λ is between 0 and 0.01, meaning that the performance of regularized LR is stable, the same as or slightly improved over the performance of standard LR. In the case of minimizing Ctrk, the safe range for λ is between 0 and 0.1, and setting λ between 0.005 and 0.05 yielded relatively large improvements over the performance of standard LR, because training a model for extremely high recall is statistically trickier, and hence more regularization is needed. In either case, tuning λ is relatively safe, and easy to do successfully by cross-corpus tuning. Another influential choice in our experimental settings is term weighting: we examined binary, TF and TF-IDF (the "ltc" version) schemes. We found TF-IDF most effective for both
Rocchio and LR, and used this setting in all our experiments.

5.3 Percentages of labeled data

How much relevance feedback (RF) is needed during the AF process is a meaningful question for real-world applications. To answer it, we evaluated Rocchio and LR on TDT5 with the following settings:

• Basic Rocchio, with no adaptation at all;
• PRF Rocchio, updating topic profiles without using true relevance feedback;
• Adaptive Rocchio, updating topic profiles using relevance feedback on system-accepted documents plus 10 documents randomly sampled from the pool of system-rejected documents;
• All the parameters in Rocchio tuned on TDT3.

Table 3 summarizes the results in Ctrk: Adaptive Rocchio with relevance feedback on 0.6% of the test documents reduced the tracking cost by 54% relative to PRF Rocchio, the best system in the TDT2004 evaluation for topic tracking without relevance feedback information. Incremental LR, on the other hand, was weaker but still impressive. Recall that Ctrk is an extremely high-recall oriented metric, causing frequent updating of profiles and hence an efficiency problem for LR. For this reason we set a higher threshold (0.004) instead of the theoretically optimal threshold (0.0017) in LR to avoid an intolerable computation cost. The computation time in machine-hours was 0.33 for the run of adaptive Rocchio and 14 for the run of LR on TDT5 when optimizing Ctrk. Table 4 summarizes the results in TDT5SU; adaptive LR was the winner in this case, with relevance feedback on 0.05% of the test documents improving the utility by 20.9% over the results of PRF Rocchio.

Table 3: AF methods on TDT5 (performance in Ctrk)

Table 4: AF methods on TDT5 (performance in TDT5SU)

Evidently, both Rocchio and LR are highly effective in adaptive filtering, in terms of using a small amount of labeled data to significantly improve model accuracy in statistical learning, which is the main goal of AF.

5.4 Summary of
Adaptation Process

After deciding the parameter settings via validation, we performed adaptive filtering in the following steps for each topic: 1) train the LR/Rocchio model using the provided positive training examples and 30 randomly sampled negative examples; 2) for each document in the test corpus, first make a prediction about its relevance, and then obtain relevance feedback for the (predicted) positive documents; 3) incrementally update the model and the IDF statistics whenever true relevance feedback is obtained.

6. CONCLUDING REMARKS

We presented a cross-benchmark evaluation of incremental Rocchio and incremental LR in adaptive filtering, focusing on their robustness in terms of performance consistency with respect to cross-corpus parameter optimization. Our main conclusions from this study are the following:

• Parameter optimization in AF is an open challenge that has not been thoroughly studied in the past.
• Robustness in cross-corpus parameter tuning is important for evaluation and method comparison.
• We found LR more robust than Rocchio; it had the best results (in T11SU) ever reported on TDT5, TREC10 and TREC11, without extensive tuning.
• We found that Rocchio performs strongly when a good validation corpus is available, and that it is the preferred choice when optimizing Ctrk is the objective, favoring recall over precision to an extreme.

For future research we want to study explicit modeling of the temporal trends in topic distributions and content drifting.

Shooter Localization and Weapon Classification with Soldier-Wearable Networked Sensors

Peter Volgyesi, Gyorgy Balogh, Andras Nadas, Christopher B.
Nash, Akos Ledeczi
Institute for Software Integrated Systems, Vanderbilt University, Nashville, TN, USA
akos.ledeczi@vanderbilt.edu

ABSTRACT

The paper presents a wireless sensor network-based mobile countersniper system. A sensor node consists of a helmet-mounted microphone array, a COTS MICAz mote for internode communication and a custom sensorboard that implements the acoustic detection and Time of Arrival (ToA) estimation algorithms on an FPGA. A 3-axis compass provides self-orientation, and Bluetooth is used for communication with the soldier's PDA running the data fusion and the user interface. The heterogeneous sensor fusion algorithm can work with data from a single sensor, or it can fuse ToA or Angle of Arrival (AoA) observations of muzzle blasts and ballistic shockwaves from multiple sensors. The system estimates the trajectory, the range, the caliber and the weapon type. The paper presents the system design and the results from an independent evaluation at the US Army Aberdeen Test Center. The system performance is characterized by 1-degree trajectory precision and over 95% caliber estimation accuracy for all shots, and close to 100% weapon estimation accuracy for 4 out of 6 guns tested.

Categories and Subject Descriptors: C.2.4 [Computer-Communications Networks]: Distributed Systems; J.7 [Computers in Other Systems]: Military

General Terms: Design, Measurement, Performance

1. INTRODUCTION

The importance of countersniper systems is underscored by the constant stream of news reports coming from the Middle East. In October 2006 CNN reported on a new tactic employed by insurgents: a mobile sniper team moves around busy city streets in a car, positions itself at a good standoff distance from dismounted US military personnel, takes a single well-aimed shot and immediately melts into the city traffic. By the time the soldiers can react, they are gone. A countersniper system that provides almost immediate shooter location to every soldier in the vicinity
would provide clear benefits to the warfighters. Our team introduced PinPtr, the first sensor network-based countersniper system [17, 8], in 2003. The system is based on potentially hundreds of inexpensive sensor nodes deployed in the area of interest, forming an ad-hoc multihop network. The acoustic sensors measure the Time of Arrival (ToA) of muzzle blasts and ballistic shockwaves (pressure waves induced by the supersonic projectile) and send the data to a base station, where a sensor fusion algorithm determines the origin of the shot. PinPtr is characterized by high precision: 1 m average 3D accuracy for shots originating within or near the sensor network, 1-degree bearing precision for both azimuth and elevation, and 10% accuracy in range estimation for longer-range shots. The truly unique characteristic of the system is that it works in such reverberant environments as cluttered urban terrain and that it can resolve multiple simultaneous shots. This capability is due to the widely distributed sensing and the unique sensor fusion approach [8]. The system has been tested several times in US Army MOUT (Military Operations in Urban Terrain) facilities. The obvious disadvantage of such a system is its static nature. Once the sensors are distributed, they cover a certain area. Depending on the operation, the deployment may be needed for an hour or a month, but eventually the area loses its importance. It is not practical to gather and reuse the sensors, especially under combat conditions. Even if the sensors are cheap, it is still a waste and a logistical problem to provide a continuous stream of sensors as the operations move from place to place. As it is primarily the soldiers that the system protects, a natural extension is to mount the sensors on the soldiers themselves. While there are vehicle-mounted countersniper systems [1] available commercially, we are not aware of a deployed system that protects dismounted soldiers. A
helmet-mounted system was developed in the mid-90s by BBN [3], but it was not continued beyond the DARPA program that funded it. Moving from a static sensor network-based solution to a highly mobile one presents significant challenges. The sensor positions and orientations need to be constantly monitored. As soldiers may work in groups of as few as four people, the number of sensors measuring the acoustic phenomena may be an order of magnitude smaller than before. Moreover, the system should be useful even to a single soldier. Finally, additional requirements called for caliber estimation and weapon classification in addition to source localization. The paper presents the design and evaluation of our soldier-wearable mobile countersniper system. It describes the hardware and software architecture, including the custom sensor board equipped with a small microphone array and connected to a COTS MICAz mote [12]. Special emphasis is paid to the sensor fusion technique that estimates the trajectory, range, caliber and weapon type simultaneously. The results and analysis of an independent evaluation of the system at the US Army Aberdeen Test Center are also presented.

2. APPROACH

The firing of a typical military rifle, such as the AK47 or M16, produces two distinct acoustic phenomena. The muzzle blast is generated at the muzzle of the gun and travels at the speed of sound. The supersonic projectile generates an acoustic shockwave, a kind of sonic boom. The wavefront has a conical shape, the angle of which depends on the Mach number, the speed of the bullet relative to the speed of sound. The shockwave has a characteristic shape resembling a capital N. The rise time at both the start and end of the signal is very fast, under 1 μsec. The length is determined by the caliber and the miss distance, the distance between the trajectory and the sensor; it is typically a few hundred μsec. Once a trajectory estimate is available, the shockwave length
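For intuition about the conical wavefront (our own illustration, not code from the system): the cone's half-angle follows from the Mach number M as arcsin(1/M); the 343 m/s speed of sound used as a default is an assumed nominal value at around 20 °C.

```python
import math

def mach_cone_half_angle_deg(bullet_speed, speed_of_sound=343.0):
    """Half-angle of the ballistic shockwave cone, arcsin(1/M), in degrees.
    bullet_speed and speed_of_sound are in m/s; the default speed of sound
    is an assumed nominal value, not a constant from the paper."""
    mach = bullet_speed / speed_of_sound
    if mach <= 1.0:
        raise ValueError("no shockwave: projectile is subsonic")
    return math.degrees(math.asin(1.0 / mach))
```

A Mach-2 projectile, for instance, produces a cone with a 30-degree half-angle; faster bullets produce narrower cones, which is what ties the measured wavefront geometry to the bullet speed.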
can be used for caliber estimation.

Our system is based on four microphones connected to a sensorboard. The board detects shockwaves and muzzle blasts and measures their ToA. If at least three acoustic channels detect the same event, its AoA is also computed. If both the shockwave and muzzle blast AoA are available, a simple analytical solution gives the shooter location, as shown in Section 6. As the microphones are close to each other, typically 2-4, we cannot expect very high precision. Also, this method does not estimate a trajectory; in fact, an infinite number of trajectory-bullet speed pairs satisfy the observations. However, the sensorboards are also connected to COTS MICAz motes, and they share their AoA and ToA measurements, as well as their own location and orientation, with each other using a multihop routing service [9]. A hybrid sensor fusion algorithm then estimates the trajectory, the range, the caliber and the weapon type based on all available observations.

The sensorboard is also Bluetooth capable for communication with the soldier's PDA or laptop computer. A wired USB connection is also available. The sensor fusion algorithm and the user interface get their data through one of these channels. The orientation of the microphone array at the time of detection is provided by a 3-axis digital compass. Currently the system assumes that the soldier's PDA is GPS-capable; it does not provide a self-localization service itself. However, the accuracy of GPS is a few meters, degrading the overall accuracy of the system. Refer to Section 7 for an analysis.

Figure 1: Acoustic sensorboard/mote assembly.

The latest generation sensorboard features a Texas Instruments CC-1000 radio enabling the high-precision radio interferometric self-localization approach we have developed separately [7]. However, we leave the integration of the two technologies for future work.

3. HARDWARE
Since the first static version of our system in 2003, the sensor
nodes have been built upon the UC Berkeley/Crossbow MICA product line [11]. Although rudimentary acoustic signal processing can be done on these microcontroller-based boards, they do not provide the required computational performance for shockwave detection and angle of arrival measurements, where multiple signals from different microphones need to be processed in parallel at a high sampling rate. Our 3rd generation sensorboard is designed to be used with MICAz motes; in fact, it has almost the same size as the mote itself (see Figure 1). The board utilizes a powerful Xilinx XC3S1000 FPGA chip with various standard peripheral IP cores, multiple soft processor cores and custom logic for the acoustic detectors (Figure 2). The onboard Flash (4 MB) and PSRAM (8 MB) modules allow storing raw samples of several acoustic events, which can be used to build libraries of various acoustic signatures and for refining the detection cores off-line. Also, the external memory blocks can store program code and data used by the soft processor cores on the FPGA.

The board supports four independent analog channels sampled at up to 1 MS/s (million samples per second). These channels, featuring an electret microphone (Panasonic WM64PNT), amplifiers with controllable gain (30-60 dB) and a 12-bit serial ADC (Analog Devices AD7476), reside on separate tiny boards which are connected to the main sensorboard with ribbon cables. This partitioning enables the use of truly different audio channels (e.g. slower sampling frequency, different gain or dynamic range) and also results in less noisy measurements by avoiding long analog signal paths.

The sensor platform offers a rich set of interfaces and can be integrated with existing systems in diverse ways. An RS232 port and a Bluetooth (BlueGiga WT12) wireless link with virtual UART emulation are directly available on the board and provide simple means to connect the sensor to PCs and PDAs. The mote interface consists of an I2C bus
along with an interrupt and a GPIO line (the latter is used for precise time synchronization between the board and the mote).

Figure 2: Block diagram of the sensorboard.

The motes are equipped with IEEE 802.15.4 compliant radio transceivers and support ad-hoc wireless networking among the nodes and to/from the base station. The sensorboard also supports full-speed USB transfers (with custom USB dongles) for uploading recorded audio samples to the PC. The on-board JTAG chain, directly accessible through a dedicated connector, contains the FPGA part and configuration memory and provides in-system programming and debugging facilities. The integrated Honeywell HMR3300 digital compass module provides heading, pitch and roll information with 1° accuracy, which is essential for calculating and combining directional estimates of the detected events.

Due to the complex voltage requirements of the FPGA, the power supply circuitry is implemented on the sensorboard and provides power both locally and to the mote. We used a quad pack of rechargeable AA batteries as the power source (although any other configuration that meets the voltage requirements is viable). The FPGA core (1.2 V) and I/O (3.3 V) voltages are generated by a highly efficient buck switching regulator. The FPGA configuration (2.5 V) and a separate 3.3 V power net are fed by low-current LDOs; the latter is used to provide independent power to the mote and to the Bluetooth radio. The regulators, except the last one, can be turned on/off from the mote or through the Bluetooth radio (via GPIO lines) to save power.

The first prototype of our system employed 10 sensor nodes. Some of these nodes were mounted on military kevlar helmets, with the microphones directly attached to the surface at about 20 cm separation, as shown in Figure 3(a). The rest of the nodes were mounted in plastic enclosures (Figure 3(b)), with the microphones placed near the corners of the boxes to form approximately 5 cm × 10 cm rectangles.

4. SOFTWARE ARCHITECTURE
The sensor application relies on three subsystems exploiting three different computing paradigms, as shown in Figure 4. Although each of these execution models suits its domain-specific tasks extremely well, this diversity presents a challenge for software development and system integration.

Figure 3: Sensor prototypes mounted on a kevlar helmet (a) and in a plastic box on a tripod (b).

The sensor fusion and user interface subsystem runs on PDAs and is implemented in Java. The sensing and signal processing tasks are executed by an FPGA, which also acts as a bridge between various wired and wireless communication channels. The ad-hoc internode communication, time synchronization and data sharing are the responsibilities of a microcontroller-based radio module. Similarly, the application employs a wide variety of communication protocols, such as Bluetooth and IEEE 802.15.4 wireless links, as well as optional UARTs, I2C and/or USB buses.

Figure 4: Software architecture diagram.

The sensor fusion module receives and unpacks raw measurements (time stamps and feature vectors) from the sensorboard through the Bluetooth link. It also fine-tunes the execution of the signal processing cores by setting parameters through the same link. Note that measurements from other nodes, along with their location and orientation information, also arrive from the
sensorboard, which acts as a gateway between the PDA and the sensor network. The handheld device obtains its own GPS location data and directly receives orientation information through the sensorboard. The results of the sensor fusion are displayed on the PDA screen with low latency. Since the application is implemented in pure Java, it is portable across different PDA platforms.

The border between software and hardware is considerably blurred on the sensor board. The IP cores, implemented in hardware description languages (HDL) on the reconfigurable FPGA fabric, closely resemble hardware building blocks. However, some of them, most notably the soft processor cores, execute true software programs. The primary tasks of the sensor board software are 1) acquiring data samples from the analog channels, 2) processing acoustic data (detection), and 3) providing access to the results and run-time parameters through different interfaces. As shown in Figure 4, a centralized virtual register file contains the address decoding logic, the registers for storing parameter values and results, and the point-to-point data buses to and from the peripherals. Thus, it effectively integrates the building blocks within the sensorboard and decouples the various communication interfaces. This architecture enabled us to deploy the same set of sensors in a centralized scenario, where the ad-hoc mote network (using the I2C interface) collected and forwarded the results to a base station, or to build a decentralized system where the local PDAs execute the sensor fusion on the data obtained through the Bluetooth interface (and optionally from other sensors through the mote interface). The same set of registers is also accessible through a UART link with a terminal emulation program. Also, because the low-level interfaces are hidden by the register file, one can easily add/replace these with new ones (e.g. the first generation of motes supported a standard μP interface bus on the sensor connector, which was dropped in later designs).

The most important results are the time stamps of the detected events. These time stamps and all other timing information (parameters, acoustic event features) are based on a 1 MHz clock and an internal timer on the FPGA. The time conversion and synchronization between the sensor network and the board is done by the mote, which periodically requests the capture of the current timer value through a dedicated GPIO line and reads the captured value from the register file through the I2C interface. Based on the current and previous readings and the corresponding mote-local time stamps, the mote can calculate and maintain the scaling factor and offset between the two time domains.

The mote interface is implemented by the I2C slave IP core and a thin adaptation layer which provides a data and address bus abstraction on top of it. The maximum effective bandwidth through this interface is 100 Kbps. The FPGA contains several UART cores as well: for communicating with the on-board Bluetooth module, for controlling the digital compass, and for providing a wired RS232 link through a dedicated connector. The control, status and data registers of the UART modules are available through the register file. The higher level protocols on these lines are implemented by Xilinx PicoBlaze microcontroller cores [13] and corresponding software programs. One of them provides a command line interface for test and debug purposes, while the other is responsible for parsing compass readings. By default, they are connected to the RS232 port and to the on-board digital compass line, respectively; however, they can be rewired to any communication interface by changing the register file base address in the programs (e.g.
the command line interface can be provided through the Bluetooth channel). Two of the external interfaces are not accessible through the register file: a high speed USB link and the SRAM interface are tied to the recorder block. The USB module implements a simple FIFO with parallel data lines connected to an external FT245R USB device controller. The RAM driver implements data read/write cycles with correct timing and is connected to the on-board pseudo SRAM. These interfaces provide 1 MB/s effective bandwidth for downloading recorded audio samples, for example.

The data acquisition and signal processing paths exhibit clear symmetry: the same set of IP cores is instantiated four times (i.e. the number of acoustic channels) and run independently. The signal paths meet only just before the register file. Each of the analog channels is driven by a serial A/D core, providing a 20 MHz serial clock and shifting in 8-bit data samples at 1 MS/s, and a digital potentiometer driver for setting the required gain. Each channel has its own shockwave and muzzle blast detector, which are described in Section 5. The detectors fetch run-time parameter values from the register file and store their results there as well. The coordinator core constantly monitors the detection results and generates a mote interrupt promptly upon full detection, or after a reasonable timeout after partial detection.

The recorder component is not used in the final deployment; however, it is essential for development purposes, for refining parameter values for new types of weapons or for other acoustic sources. This component receives the samples from all channels and stores them in circular buffers in the PSRAM device. If the signal amplitude on one of the channels crosses a predefined threshold, the recorder component suspends the sample collection with a predefined delay and dumps the contents of the buffers through the USB link. The length of these buffers and delays, the sampling rate, the threshold level and the set of recorded channels can be (re)configured at run time through the register file. Note that the core operates independently from the other signal processing modules; therefore, it can be used to validate the detection results off-line.

The FPGA cores are implemented in VHDL; the PicoBlaze programs are written in assembly. The complete configuration occupies 40% of the resources (slices) of the FPGA, and the maximum clock speed is 30 MHz, which is safely higher than the speed used with the actual device (20 MHz).

The MICAz motes are responsible for distributing measurement data across the network, which drastically improves the localization and classification results at each node. Besides a robust radio (MAC) layer, the motes require two essential middleware services to achieve this goal. The messages need to be propagated in the ad-hoc multihop network using a routing service. We successfully integrated the Directed Flood-Routing Framework (DFRF) [9] in our application. Apart from automatic message aggregation and efficient buffer management, the most unique feature of DFRF is its plug-in architecture, which accepts custom routing policies. Routing policies are state machines that govern how received messages are stored, resent or discarded. Example policies include spanning tree routing, broadcast, geographic routing, etc. Different policies can be used for different messages concurrently, and the application is able to change the underlying policies at run time (e.g. because of the changing RF environment or power budget). In fact, we switched several times between a simple but lavish broadcast policy and a more efficient gradient routing in the field.

Correlating ToA measurements requires a common time base and precise time synchronization in the sensor network. The Routing Integrated Time Synchronization (RITS) [15] protocol relies on very accurate MAC-layer time-stamping to embed the cumulative delay that a
data message accrued since the time of the detection in the message itself. That is, at every node it measures the time the message spent there and adds this to the number in the time delay slot of the message, right before the message leaves the current node. Every receiving node can subtract the delay from its current time to obtain the detection time in its local time reference. The service provides very accurate time conversion (a few μs of error per hop), which is more than adequate for this application. Note that the motes also need to convert the sensorboard time stamps to mote time, as described earlier. The mote application is implemented in nesC [5] and runs on top of TinyOS [6]. With its 3 KB RAM and 28 KB program space (ROM) requirement, it easily fits on the MICAz motes.

5. DETECTION ALGORITHM
There are several characteristics of acoustic shockwaves and muzzle blasts which distinguish their detection and signal processing algorithms from regular audio applications. Both events are transient by nature and present very intense stimuli to the microphones. This is especially problematic with low-cost electret microphones designed for picking up regular speech or music. Although mechanical damping of the microphone membranes can mitigate the problem, this approach is not without side effects. The detection algorithms have to be robust enough to handle severe nonlinear distortion and transitory oscillations. Since the muzzle blast signature closely follows the shockwave signal, and because of potential automatic weapon bursts, it is extremely important to settle the audio channels and the detection logic as soon as possible after an event. Also, precise angle of arrival estimation necessitates a high sampling frequency (in the MHz range) and accurate event detection. Moreover, the detection logic needs to process multiple channels in parallel (4 channels on our existing hardware). These requirements dictated simple and robust
algorithms for both muzzle blast and shockwave detection. Instead of using mundane energy detectors, which might not be able to distinguish the two different events, the applied detectors strive to find the most important characteristics of the two signals in the time domain using simple state machine logic. The detectors are implemented as independent IP cores within the FPGA, one pair for each channel. The cores are run-time configurable and provide detection event signals with high precision time stamps and event-specific feature vectors. Although the cores run independently and in parallel, a crude local fusion module integrates them by shutting down those cores which missed their events after a reasonable timeout and by generating a single detection message towards the mote. At this point, the mote can read and forward the detection times and features, and is responsible for restarting the cores afterwards.

The most conspicuous characteristics of an acoustic shockwave (see Figure 5(a)) are the steep rising edges at the beginning and end of the signal. Also, the length of the N-wave is fairly predictable, as described in Section 6.5, and is relatively short (200-300 μs).

Figure 5: Shockwave signal generated by a 5.56 × 45 mm NATO projectile (a) and the state machine of the detection algorithm (b).

The shockwave detection core continuously looks for two rising edges within a given interval. The state machine of the algorithm is shown in Figure 5(b). The input parameters are the minimum steepness of the edges (D, E), and the bounds on
the length of the wave (Lmin, Lmax). The only feature calculated by the core is the length of the observed shockwave signal.

In contrast to shockwaves, muzzle blast signatures are characterized by a long initial period (1-5 ms) in which the first half period is significantly shorter than the second half [4]. Due to the physical limitations of the analog circuitry described at the beginning of this section, irregular oscillations and glitches might show up within this longer time window, as can be clearly seen in Figure 6(a). Therefore, the real challenge for the matching detection core is to identify the first and second half periods properly. The state machine (Figure 6(b)) does not work on the raw samples directly, but is fed by a zero crossing (ZC) encoder. After the initial triggering, the detector attempts to collect those ZC segments which belong to the first period (positive amplitude), while discarding too-short (in our terminology: garbage) segments, effectively implementing a rudimentary low-pass filter in the ZC domain. After it encounters a sufficiently long negative segment, it runs the same collection logic for the second half period. If too much garbage is discarded in the collection phases, the core resets itself to prevent the (false) detection of halves from completely different periods separated by rapid oscillation or noise. Finally, if the constraints on the total length and on the length ratio hold, the core generates a detection event along with the actual length, amplitude and energy of the period, all calculated concurrently.

The initial triggering mechanism is based on two amplitude thresholds: one static (but configurable) amplitude level and a dynamically computed one. The latter is essential for adapting the sensor to different ambient noise environments and for temporarily suspending the muzzle blast detector after a shockwave event (oscillations in the analog section or reverberations in the sensor enclosure might otherwise
trigger false muzzle blast detections). The dynamic noise level is estimated by a single-pole recursive low-pass filter (cutoff at 0.5 kHz) on the FPGA.

Figure 6: Muzzle blast signature (a) produced by an M16 assault rifle and the corresponding detection logic (b).

The detection cores were originally implemented in Java and evaluated on pre-recorded signals because of the much faster test runs and more convenient debugging facilities. Later on, they were ported to VHDL and synthesized using the Xilinx ISE tool suite. The functional equivalence of the two implementations was tested by VHDL test benches and Python scripts, which provided an automated way to exercise the detection cores on the same set of pre-recorded signals and to compare the results.

6. SENSOR FUSION
The sensor fusion algorithm receives detection messages from the sensor network and estimates the bullet trajectory, the shooter position, the caliber of the projectile and the type of the weapon. The algorithm consists of well-separated computational tasks, outlined below:
1. Compute muzzle blast and shockwave directions of arrival for each individual sensor (see 6.1).
2. Compute range estimates. This algorithm can analytically fuse a pair of shockwave and muzzle blast AoA estimates (see 6.2).
3. Compute a single trajectory from all shockwave measurements (see 6.3).
4. If a trajectory is available, compute the range; otherwise compute the shooter position first and then the trajectory based on it (see 6.4).
5. If a trajectory is available, compute the caliber (see 6.5).
6. If caliber
available, compute the weapon type (see 6.6).

We describe each step in detail in the following sections.

6.1 Direction of arrival
The first step of the sensor fusion is to calculate the muzzle blast and shockwave AoAs for each sensorboard. Each sensorboard has four microphones that measure the ToAs. Since the microphone spacing is orders of magnitude smaller than the distance to the sound source, we can approximate the approaching sound wave front with a plane (far field assumption).

Let us formalize the problem for three microphones first. Let P1, P2 and P3 be the positions of the microphones, ordered by time of arrival t1 < t2 < t3. First we apply a simple geometry validation step: the measured time difference between two microphones cannot be larger than the sound propagation time between them:

|ti − tj| ≤ |Pi − Pj|/c + ε

where c is the speed of sound and ε is the maximum measurement error. If this condition does not hold, the corresponding detections are discarded.

Let v(x, y, z) be the normal vector of the unknown direction of arrival. We also use r1(x1, y1, z1), the vector from P1 to P2, and r2(x2, y2, z2), the vector from P1 to P3. Consider the projection of the direction of motion of the wave front (v) onto r1, divided by the speed of sound (c). This gives us how long it takes the wave front to propagate from P1 to P2:

v·r1 = c(t2 − t1)

The same relationship holds for r2 and v:

v·r2 = c(t3 − t1)

We also know that v is a normal vector:

v·v = 1

Moving from vectors to coordinates using the dot product definition leads to a quadratic system:

x1x + y1y + z1z = c(t2 − t1)
x2x + y2y + z2z = c(t3 − t1)
x² + y² + z² = 1

We omit the solution steps here, as they are straightforward but long. There are two solutions (if the source is on the P1P2P3 plane, the two solutions coincide). We use the fourth microphone's measurement, if there is one, to eliminate one of them. Otherwise, both solutions are
considered for further processing.

6.2 Muzzle-shock fusion

Figure 7: Section plane of a shot (at P) and two sensors (at P1 and P2). One sensor detects the muzzle blast's time and direction of arrival, the other the shockwave's.

Consider the situation in Figure 7. A shot was fired from P at time t. Both P and t are unknown. We have one muzzle blast detection and one shockwave detection by two different sensors, with AoA and hence ToA information available. The muzzle blast detection is at position P1 with time t1 and AoA u. The shockwave detection is at P2 with time t2 and AoA v. Both u and v are normal vectors. It is shown below that these measurements are sufficient to compute the position of the shooter (P).

Let P2′ be the point on the extended shockwave cone surface where PP2′ is perpendicular to the surface. Note that PP2′ is parallel with v. Since P2′ is on the cone surface which hits P2, a sensor at P2′ would detect the same shockwave time of arrival (t2). The cone surface travels at the speed of sound (c), so we can express P using P2′:

P = P2′ + cv(t2 − t)

P can also be expressed from P1:

P = P1 + cu(t1 − t)

yielding

P1 + cu(t1 − t) = P2′ + cv(t2 − t)

P2′P2 is perpendicular to v:

(P2′ − P2)·v = 0

yielding

(P1 + cu(t1 − t) − cv(t2 − t) − P2)·v = 0

which contains only one unknown, t. One obtains

t = ((P1 − P2)·v/c + (u·v)t1 − t2) / (u·v − 1)

From here we can calculate the shooter position P.
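The closed form above is straightforward to evaluate numerically. Below is a minimal sketch (the deployed sensor fusion is written in Java; the function name, the NumPy packaging and the synthetic test scenario are ours), using the convention that u and v are unit vectors chosen so that P = P1 + cu(t1 − t) holds:

```python
import numpy as np

def muzzle_shock_fusion(p1, t1, u, p2, t2, v, c=340.0):
    """Fuse one muzzle blast detection (position p1, ToA t1, AoA u) with one
    shockwave detection (position p2, ToA t2, AoA v) into the shot time t and
    shooter position P, following the closed form of Section 6.2:
        t = ((P1 - P2).v / c + (u.v) t1 - t2) / (u.v - 1)
        P = P1 + c u (t1 - t)
    """
    p1, p2, u, v = (np.asarray(a, dtype=float) for a in (p1, p2, u, v))
    uv = u @ v
    t = ((p1 - p2) @ v / c + uv * t1 - t2) / (uv - 1.0)
    return t, p1 + c * u * (t1 - t)

# Synthetic check: shooter at the origin, shot fired at t = 0, c = 340 m/s.
s = 2 ** -0.5                                 # sin/cos of 45 degrees
p2_foot = np.array([340.0, 340.0, 0.0])       # foot point P2' on the cone surface
p2 = p2_foot + 50.0 * np.array([s, -s, 0.0])  # shockwave sensor, offset along the cone
shot_time, shooter = muzzle_shock_fusion(
    p1=[340.0, 0.0, 0.0], t1=1.0, u=[-1.0, 0.0, 0.0],  # muzzle blast detection
    p2=p2, t2=2 ** 0.5, v=[-s, -s, 0.0])               # shockwave detection
# shot_time is ~0 and shooter is ~(0, 0, 0)
```

The test scenario is constructed so that the muzzle blast travels 340 m in 1 s and the shockwave foot point lies 340√2 m from the shooter; both estimates then recover the true origin.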
Consider the special single-sensor case where P1 = P2 (one sensor detects both the shockwave and muzzle blast AoA). In this case:

t = ((u·v)t1 − t2) / (u·v − 1)

Since u and v are not used separately, only their dot product u·v, the absolute orientation of the sensor can be arbitrary; we still obtain t, which gives us the range.

Here we assumed that the shockwave is a cone, which is only true for constant projectile speeds. In reality, the angle of the cone slowly grows; the surface resembles one half of an American football. The decelerating bullet results in a smaller time difference between the shockwave and the muzzle blast detections, because the shockwave generation slows down with the bullet. A smaller time difference results in a smaller range, so the above formula underestimates the true range. However, it can still be used with a proper deceleration correction function. We leave this for future work.

6.3 Trajectory estimation
Danicki showed that the bullet trajectory and speed can be computed analytically from two independent shockwave measurements where both ToA and AoA are measured [2]. The method becomes more sensitive to measurement errors as the two shockwave directions get closer to each other. In the special case when both directions are the same, the trajectory cannot be computed. In a real-world application, the sensors are typically deployed approximately in a plane. In this case, all sensors located on one side of the trajectory measure almost the same shockwave AoA. To avoid this error sensitivity problem, we consider shockwave measurement pairs only if their direction of arrival difference is larger than a certain threshold. We have multiple sensors, and one sensor can report two different directions (when only three microphones detect the shockwave). Hence, we typically have several trajectory candidates, i.e.
one for each AoA pair over the threshold. We applied an outlier filtering and averaging method to fuse together the shockwave direction and time information and come up with a single trajectory.

Assume that we have N individual shockwave AoA measurements. We take all possible unordered pairs where the direction difference is above the mentioned threshold and compute the trajectory for each. This gives us at most N(N − 1)/2 trajectories. A trajectory is represented by one point pi and the normal vector vi (where i is the trajectory index). We define the distance of two trajectories as the dot product of their normal vectors:

D(i, j) = vi·vj

For each trajectory, a neighbor set is defined:

N(i) := {j | D(i, j) < R}

where R is a radius parameter. The largest neighbor set is considered to be the core set C; all other trajectories are outliers. The core set can be found in O(N²) time. The trajectories in the core set are then averaged to get the final trajectory.

It can happen that we cannot form any sensor pairs because of the direction difference threshold; this means all sensors are on the same side of the trajectory. In this case, we first compute the shooter position (described in the next section), which fixes p, making v the only unknown. To find v in this case, we use a simple high-resolution grid search and minimize an error function based on the shockwave directions. We have experimented with utilizing the measured shockwave length in the trajectory estimation; there are some promising results, but this needs further research.

6.4 Shooter position estimation
The shooter position estimation algorithm aggregates the following heterogeneous information generated by the earlier computational steps:
1. the trajectory,
2. muzzle blast ToA at a sensor,
3. muzzle blast AoA at a sensor, which is effectively a bearing estimate to the shooter, and
4. a range estimate at a sensor (when both shockwave and muzzle blast AoA are available).
Some sensors report only ToA,
some also have bearing estimate(s) and some have range estimate(s) as well, depending on the number of successful muzzle blast and shockwave detections by the sensor. For an example, refer to Figure 8. Note that a sensor may have two different bearing and range estimates: three detections give two possible AoAs for the muzzle blast (i.e. bearing) and/or the shockwave. Furthermore, the combination of two different muzzle blast and shockwave AoAs may result in two different ranges.

Figure 8: Example of heterogeneous input data for the shooter position estimation algorithm. All sensors have ToA measurements (t1, t2, t3, t4, t5), one sensor has a single bearing estimate (v2), one sensor has two possible bearings (v3, v3′) and one sensor has two bearing and two range estimates (v1, v1′, r1, r1′).

In a multipath environment, these detections will contain not only Gaussian noise, but also possibly large errors due to echoes. It has been shown in our earlier work that a similar problem can be solved efficiently with an interval arithmetic based bisection search algorithm [8]. The basic idea is to define a discrete consistency function over the area of interest and subdivide the space into 3D boxes. For any given 3D box, this function gives the number of measurements supporting the hypothesis that the shooter was within that box. The search starts with a box large enough to contain the whole area of interest, then zooms in by dividing and evaluating boxes. The box with the maximum consistency is divided until the desired precision is reached. Backtracking is possible to avoid getting stuck in a local maximum. This approach has been shown to be fast enough for online processing. Note, however, that when the trajectory has already been calculated in previous steps, the search needs to be done only along the trajectory, making it orders of magnitude faster. Next let us describe
how the consistency function is calculated in detail. Consider B, a three-dimensional box whose consistency value we would like to compute. First we consider only the ToA information. If one sensor has multiple ToA detections, we use the average of those times, so one sensor supplies at most one ToA estimate. For each ToA, we can calculate the corresponding time of the shot, since the origin is assumed to be in box B. Since B is a box and not a single point, this gives us an interval for the shot time. The maximum number of overlapping time intervals gives the value of the consistency function for B. For a detailed description of the consistency function and search algorithm, refer to [8]. Here we extend the approach in the following way. We modify the consistency function based on the bearing and range data from individual sensors. A bearing estimate supports B if the line segment starting from the sensor in the measured direction intersects the box B. A range estimate supports B if the sphere centered at the sensor with a radius equal to the range intersects B. Compared to simply checking whether the position specified by the corresponding bearing-range pair falls within B, this eliminates the sensor's possible orientation error. The value of the consistency function is incremented by one for each bearing and range estimate that is consistent with B.

6.5 Caliber estimation

The shockwave signal characteristics have been studied before by Whitham [20]. He showed that the shockwave period T is related to the projectile diameter d, the length l, the perpendicular miss distance b from the bullet trajectory to the sensor, the Mach number M and the speed of sound c.
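In its simplified form, Whitham's relation reads T ≈ 1.82 (d / c)(M b / l)^(1/4). The following small numeric sketch illustrates the weak 1/4-power dependence of the period on miss distance; the projectile diameter, length and Mach number used below are assumed nominal values for a 5.56 mm round chosen for illustration, not calibration data from our tests.

```python
# Whitham's simplified shockwave-period relation:
#   T ~= (1.82 * d / c) * (M * b / l) ** 0.25
# d (diameter), l (length) and mach are assumed nominal values.
def shockwave_period(d, l, b, mach, c=343.0):
    """Approximate shockwave period T in seconds at miss distance b (meters)."""
    return (1.82 * d / c) * (mach * b / l) ** 0.25

# Doubling the miss distance changes T only by a factor of 2**0.25 ~= 1.19.
t10 = shockwave_period(d=5.56e-3, l=23e-3, b=10.0, mach=2.5)
t20 = shockwave_period(d=5.56e-3, l=23e-3, b=20.0, mach=2.5)
```

Note how insensitive T is to the miss distance: quadrupling b changes the period by only a factor of sqrt(2), which is why the best-fit exponents obtained from our measurements stay close to 1/4.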
T = 1.82 M b^(1/4) d / (c (M^2 - 1)^(3/8) l^(1/4)) ≈ 1.82 (d / c) (M b / l)^(1/4)

Figure 9: Shockwave length and miss distance relationship. Each data point represents one sensorboard after an aggregation of the individual measurements of the four acoustic channels. Three different caliber projectiles have been tested (196 shots, 10 sensors).

To illustrate the relationship between miss distance and shockwave length, here we use all 196 shots with three different caliber projectiles fired during the evaluation. (During the evaluation itself we used data obtained previously from a few practice shots per weapon.) 10 sensors (4 microphones per sensor) measured the shockwave length. For each sensor, we considered the shockwave length estimation valid if at least three out of four microphones agreed on a value with at most 5 microseconds of variance. This filtering leads to an 86% report rate per sensor and gets rid of large measurement errors. The experimental data is shown in Figure 9. Whitham's formula suggests that the shockwave length for a given caliber can be approximated with a power function of the miss distance (with a 1/4 exponent). The best fit functions on our data are:

.50 cal: T = 237.75 b^0.2059
7.62 mm: T = 178.11 b^0.1996
5.56 mm: T = 144.39 b^0.1757

To evaluate a shot, we take the caliber whose approximation function results in the smallest RMS error of the filtered sensor readings. This method has less than 1% caliber estimation error when an accurate trajectory estimate is available. In other words, caliber estimation only works if enough shockwave detections are made by the system to compute a trajectory.

6.6 Weapon estimation

We analyzed all measured signal characteristics to find weapon specific information. Unfortunately, we concluded that the observed muzzle blast signature is not characteristic enough of the weapon for classification purposes. The
reflections of the high energy muzzle blast from the environment have a much higher impact on the muzzle blast signal shape than the weapon itself. Shooting the same weapon from different places caused larger differences in the recorded signal than shooting different weapons from the same place.

Figure 10: AK47 and M240 bullet deceleration measurements. Both weapons have the same caliber. Data is approximated using simple linear regression.

Figure 11: M16, M249 and M4 bullet deceleration measurements. All weapons have the same caliber. Data is approximated using simple linear regression.

However, the measured speed of the projectile and its caliber showed good correlation with the weapon type. This is because for a given weapon type and ammunition pair, the muzzle velocity is nearly constant. In Figures 10 and 11 we can see the relationship between the range and the measured bullet speed for different calibers and weapons. In the supersonic speed range, the bullet deceleration can be approximated with a linear function. In the case of the 7.62 mm caliber, the two tested weapons (AK47, M240) can be clearly separated (Figure 10). Unfortunately, this is not necessarily true for the 5.56 mm caliber. The M16 with its higher muzzle speed can still be well classified, but the M4 and M249 weapons seem practically indistinguishable (Figure 11). However, this may be partially due to the limited number of practice shots we were able to take before the actual testing began. More training data may reveal better separation between the two weapons, since their published muzzle velocities do differ somewhat. The system carries out weapon classification in the following manner. Once the trajectory is known, the speed can be calculated for
each sensor based on the shockwave geometry. To evaluate a shot, we choose the weapon type whose deceleration function results in the smallest RMS error of the estimated range-speed pairs for the estimated caliber class.

7. RESULTS

An independent evaluation of the system was carried out by a team from NIST at the US Army Aberdeen Test Center in April 2006 [19]. The experiment was set up on a shooting range with mock-up wooden buildings and walls for supporting elevated shooter positions and generating multipath effects. Figure 12 shows the user interface with an aerial photograph of the site. 10 sensor nodes were deployed on surveyed points in an approximately 30×30 m area. There were five fixed targets behind the sensor network. Several firing positions were located at each of the firing lines at 50, 100, 200 and 300 meters. These positions were known to the evaluators, but not to the operators of the system. Six different weapons were utilized: AK47 and M240 firing 7.62 mm projectiles, M16, M4 and M249 with 5.56 mm ammunition, and the .50 caliber M107. Note that the sensors remained static during the test. The primary reason for this is that nobody is allowed downrange during live fire tests. Utilizing some kind of remote control platform would have been too involved for the limited time the range was available for the test. The experiment, therefore, did not test the mobility aspect of the system. During the one-day test, 196 shots were fired. The results are summarized in Table 1. The system detected all shots successfully. Since a ballistic shockwave is a unique acoustic phenomenon, it makes the detection very robust. There were no false positives for shockwaves, but there were a handful of false muzzle blast detections due to parallel tests of artillery at a nearby range.

Shooter   Locali-  Caliber   Trajectory   Trajectory  Distance  No.
Range     zation   Accu-     Azimuth      Distance    Error     of
(m)       Rate     racy      Error (deg)  Error (m)   (m)       Shots
50        93%      100%      0.86         0.91        2.2       54
100       100%     100%      0.66         1.34        8.7       54
200       96%      100%      0.74         2.71        32.8      54
300       97%      97%       1.49         6.29        70.6      34
All       96%      99.5%     0.88         2.47        23.0      196

Table 1: Summary of results fusing all available sensor observations. All shots were successfully detected, so the detection rate is omitted. Localization rate means the percentage of shots whose trajectory the sensor fusion was able to estimate. The caliber accuracy rate is relative to the shots localized and not to all shots, because caliber estimation requires the trajectory. The trajectory error is broken down into the azimuth in degrees and the actual distance of the shooter from the trajectory. The distance error shows the distance between the real shooter position and the estimated shooter position. As such, it includes the error caused by both the trajectory and the range estimation. Note that the traditional bearing and range measures are not good ones for a distributed system such as ours because of the lack of a single reference point.

Figure 12: The user interface of the system showing the experimental setup. The 10 sensor nodes are labeled by their ID and marked by dark circles. The targets are black squares marked T-1 through T-5. The long white arrows point to the shooter position estimated by each sensor. Where an arrow is missing, the corresponding sensor did not have enough detections to measure the AoA of the muzzle blast, the shockwave, or both. The thick black line and large circle indicate the estimated trajectory and the shooter position as estimated by fusing all available detections from the network.

This shot from the 100-meter line at target T-3 was localized almost perfectly by the sensor network. The caliber and weapon were also identified correctly. 6 out of 10 nodes were able to estimate the location alone. Their bearing accuracy is within a degree, while the range is off by less than 10% in the worst case. The localization rate characterizes the system's ability
to successfully estimate the trajectory of shots. Since caliber estimation and weapon classification rely on the trajectory, non-localized shots are not classified either. There were 7 shots out of 196 that were not localized. The reason for missed shots is the trajectory ambiguity problem that occurs when the projectile passes on one side of all the sensors. In this case, two significantly different trajectories can generate the same set of observations (see [8] and also Section 6.3). Instead of estimating which one is more likely or displaying both possibilities, we decided not to provide a trajectory at all. It is better to give no answer beyond a shot alarm than to mislead the soldier. Localization accuracy is broken down into trajectory accuracy and range estimation precision. The angle of the estimated trajectory was better than 1 degree except for the 300 m range. Since the range should not affect trajectory estimation as long as the projectile passes over the network, we suspect that the slightly worse angle precision for 300 m is due to the hurried shots we witnessed the soldiers take near the end of the day. This is also indicated by another data point: the estimated trajectory distance from the actual targets has an average error of 1.3 m for 300 m shots, 0.75 m for 200 m shots and 0.6 m for all but 300 m shots. As the distance between the targets and the sensor network was fixed, this number should not show a 2× improvement just because the shooter is closer. Since the angle of the trajectory itself does not characterize the overall error (there can be a translation as well), Table 1 also gives the distance of the shooter from the estimated trajectory. These indicate an error of about 1-2% of the range. To put this into perspective, a trajectory estimate for a 100 m shot will very likely go through or very near the window the shooter is located at. Again, we believe that the disproportionately larger errors at 300 m are due to
human errors in aiming. As the ground truth was obtained by knowing the precise location of the shooter and the target, any inaccuracy in the actual trajectory directly adds to the perceived error of the system. For lack of a better term, we call the estimation of the shooter's position along the calculated trajectory range estimation. The range estimates are better than 5% accurate from 50 m and 10% from 100 m. However, this grows to 20% or worse for longer distances. We did not have a facility to test the system for ranges beyond 100 m before the evaluation. During the evaluation, we ran into the problem of mistaking shockwave echoes for muzzle blasts. These echoes reached the sensors before the real muzzle blast for long range shots only, since the projectile travels 2-3× faster than the speed of sound, so the time between the shockwave (and its possible echo from nearby objects) and the muzzle blast increases with increasing range. This resulted in underestimating the range, since the system measured shorter times than the real ones. Since the evaluation, we have fine-tuned the muzzle blast detection algorithm to avoid this problem.

Distance  M16   AK47  M240  M107  M4   M249  M4-M249
50m       100%  100%  100%  100%  11%  25%   94%
100m      100%  100%  100%  100%  22%  33%   100%
200m      100%  100%  100%  100%  50%  22%   100%
300m      67%   100%  83%   100%  33%  0%    57%
All       96%   100%  97%   100%  23%  23%   93%

Table 2: Weapon classification results. The percentages are relative to the number of shots localized and not to all shots, as the classification algorithm needs to know the trajectory and the range. Note that the difference is small; 189 shots were localized out of the total 196.

The caliber and weapon estimation accuracy rates are based on the 189 shots that were successfully localized. Note that there was a single shot that was falsely classified by the caliber estimator. The 73% overall weapon classification accuracy does not seem impressive. But if we break it down to the six different weapons
tested, the picture changes dramatically as shown in Table 2. For four of the weapons (AK47, M16, M240 and M107), the classification rate is almost 100%. There were only two shots out of approximately 140 that were missed. The M4 and M249 proved to be too similar and were mistaken for each other most of the time. One possible explanation is that we had only a limited number of test shots taken with these weapons right before the evaluation and used the wrong deceleration approximation function. Either this or a similar mistake was made, since if we simply used the opposite of the system's answer where one of these weapons was indicated, the accuracy would have improved threefold. If we consider these two weapons a single weapon class, then the classification accuracy for it becomes 93%. Note that the AK47 and M240 have the same caliber (7.62 mm), just as the M16, M4 and M249 do (5.56 mm). That is, the system is able to differentiate between weapons of the same caliber. We are not aware of any other system that classifies weapons this accurately.

7.1 Single sensor performance

As was shown previously, a single sensor alone is able to localize the shooter if it can determine both the muzzle blast and the shockwave AoA, that is, it needs to measure the ToA of both on at least three acoustic channels. While shockwave detection is independent of the range (unless the projectile becomes subsonic), the likelihood of muzzle blast detection beyond 150 meters is not high enough to consistently get at least three detections per sensor node for AoA estimation. Hence, we only evaluate the single sensor performance for the 104 shots that were taken from 50 and 100 m.
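The AoA estimates underlying single-sensor localization come from the relative time differences of the same transient across the small microphone array. A minimal sketch of such an estimate under an idealized plane-wave model follows; the helper name and 2D simplification are illustrative assumptions, not the shipped sensor firmware, which performs this on the node itself.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s; the real system sets this per test

def aoa_from_toa(mics, toas, c=SPEED_OF_SOUND):
    """Least-squares azimuth of a plane wave from per-channel ToAs.

    mics: (x, y) microphone positions in meters, sensor-local frame
    toas: arrival times in seconds of the same transient on each channel
    Returns the propagation-direction azimuth in radians (the source
    lies in the opposite direction). Hypothetical helper for illustration.
    """
    # Plane-wave model: t_i = t_ref + (r_i . u) / c, so for each channel
    # paired with channel 0: (r_i - r_0) . u = c * (t_i - t_0).
    x0, y0 = mics[0]
    rows = [(x - x0, y - y0, c * (t - toas[0]))
            for (x, y), t in zip(mics[1:], toas[1:])]
    # Solve the 2x2 normal equations for the direction u = (ux, uy).
    sxx = sum(ax * ax for ax, _, _ in rows)
    sxy = sum(ax * ay for ax, ay, _ in rows)
    syy = sum(ay * ay for _, ay, _ in rows)
    rx = sum(ax * b for ax, _, b in rows)
    ry = sum(ay * b for _, ay, b in rows)
    det = sxx * syy - sxy * sxy
    ux = (syy * rx - sxy * ry) / det
    uy = (sxx * ry - sxy * rx) / det
    return math.atan2(uy, ux)
```

With channel spacing under 10 cm, the bearing accuracy of such a fit hinges on microsecond-level ToA precision, which is one reason at least three consistent channel detections per event are required.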
Note that we use the same test data as in the previous section, but we evaluate each sensor individually. Table 3 summarizes the results broken down by the ten sensors utilized. Since this is not a distributed system, the results are given relative to the position of the given sensor, that is, a bearing and range estimate is provided. Note that many of the common error sources of the networked system do not play a role here. Time synchronization is not applicable. The sensor's absolute location is irrelevant (just as the relative locations of multiple sensors are). The sensor's orientation is still important, though. There are several disadvantages of the single sensor case compared to the networked system: there is no redundancy to compensate for other errors and to perform outlier rejection, the localization rate is markedly lower, and a single sensor alone is not able to estimate the caliber or classify the weapon.

Sensor id      1     2     3     5     7     8     9     10    11    12
Loc. rate      44%   37%   53%   52%   19%   63%   51%   31%   23%   44%
Bearing (deg)  0.80  1.25  0.60  0.85  1.02  0.92  0.73  0.71  1.28  1.44
Range (m)      3.2   6.1   4.4   4.7   4.6   4.6   4.1   5.2   4.8   8.2

Table 3: Single sensor accuracy for 108 shots fired from 50 and 100 meters. Localization rate refers to the percentage of shots the given sensor alone was able to localize. The bearing and range values are average errors. They characterize the accuracy of localization from the given sensor's perspective.

The data indicates that the performance of the sensors varied significantly, especially considering the localization rate. One factor has to be the location of the given sensor, including how far it was from the firing lines and how obstructed its view was. Also, the sensors were hand-built prototypes utilizing nowhere near production quality packaging and mounting. In light of these factors, the overall average bearing error of 0.9 degrees and range error of 5 m with a microphone spacing of less than 10 cm are excellent. We believe that
professional manufacturing and better microphones could easily achieve better performance than the best sensor in our experiment (>60% localization rate and 3 m range error). Interestingly, the largest error in range was a huge 90 m, clearly due to some erroneous detection, yet the largest bearing error was less than 12 degrees, which is still a good indication to the soldier of where to look. The overall localization rate over all single sensors was 42%, while for 50 m shots only, this jumped to 61%. Note that the firing range was prepared to simulate an urban area to some extent: there were a few single- and two-storey wooden structures built both in and around the sensor deployment area and the firing lines. Hence, not all sensors had line-of-sight to all shooting positions. We estimate that 10% of the sensors had an obstructed view of the shooter on average. Hence, we can claim that a given sensor had about a 50% chance of localizing a shot within 130 m. (Since the sensor deployment area was 30 m deep, 100 m shots correspond to actual distances between 100 and 130 m.) Again, we emphasize that localization needs at least three muzzle blast and three shockwave detections out of a possible four each per sensor. The detection rate for single sensors, corresponding to at least one shockwave detection per sensor, was practically 100%.

Figure 13: Histogram showing what fraction of the 104 shots taken from 50 and 100 meters were localized by at most how many individual sensors alone. 13% of the shots were missed by every single sensor, i.e., none of them had both muzzle blast and shockwave AoA detections. Note that almost all of these shots were still accurately localized by the networked system, i.e.
the sensor fusion using all available observations in the sensor network.

It would be misleading to interpret these results as the system missing half the shots. As soldiers never operate alone and the sensor node is cheap enough that every soldier can be equipped with one, we also need to look at the overall detection rates for every shot. Figure 13 shows the histogram of the percentage of shots versus the number of individual sensors that localized them. 13% of the shots were not localized by any sensor alone, but 87% were localized by at least one sensor out of ten.

7.2 Error sources

In this section, we analyze the most significant sources of error that affect the performance of the networked shooter localization and weapon classification system. In order to correlate the distributed observations of the acoustic events, the nodes need to have a common time and space reference. Hence, errors in the time synchronization, node localization and node orientation all degrade the overall accuracy of the system. Our time synchronization approach yields errors significantly less than 100 microseconds. As sound travels about 3 cm in that time, time synchronization errors have a negligible effect on the system. On the other hand, node location and orientation can have a direct effect on the overall system performance. Notice that to analyze this, we do not have to resort to simulation; instead we can utilize the real test data gathered at Aberdeen. Instead of using the real sensor locations (known very accurately) and the measured and calibrated, almost perfect node orientations, we can add error terms to them and run the sensor fusion. This exactly replicates how the system would have performed during the test using imprecisely known locations and orientations. Another aspect of the system performance that can be evaluated this way is the effect of the number of available sensors. Instead of using all ten sensors in the data fusion, we can pick any
subset of the nodes to see how the accuracy degrades as we decrease the number of nodes. The following experiment was carried out. The number of sensors was varied from 2 to 10 in increments of 2. Each run picked the sensors randomly using a uniform distribution. In each run, each node was randomly moved to a new location within a circle around its true position with a radius determined by a zero-mean Gaussian distribution. Finally, the node orientations were perturbed using a zero-mean Gaussian distribution. Each combination of parameters was generated 100 times and utilized for all 196 shots. The results are summarized in Figure 14. There is one 3D bar chart for each of the experiment sets with the given fixed number of sensors. The x-axis shows the node location error, that is, the standard deviation of the corresponding Gaussian distribution, which was varied between 0 and 6 meters. The y-axis shows the standard deviation of the node orientation error, which was varied between 0 and 6 degrees. The z-axis is the resulting trajectory azimuth error. Note that the elevation angles showed somewhat larger errors than the azimuth. Since all the sensors were in approximately a horizontal plane and only a few shooter positions were out of that plane, and only by 2 m or so, the test was not sufficient to evaluate this aspect of the system. There are many interesting observations one can make by analyzing these charts. Node location errors in this range have a small effect on accuracy. Node orientation errors, on the other hand, noticeably degrade the performance. Still, the largest errors in this experiment, 3.5 degrees for 6 sensors and 5 degrees for 2 sensors, are very good. Note that as the location and orientation errors increase and the number of sensors decreases, the most significantly affected performance metric is the localization rate. See Table 4 for a summary. Successful localization goes down from almost 100% to 50% when we go from 10
sensors to 2, even without additional errors. This is primarily caused by geometry: for a successful localization, the bullet needs to pass over the sensor network, that is, at least one sensor should be on the side of the trajectory other than the rest of the nodes. (This is a simplification for illustrative purposes. If all the sensors and the trajectory are not coplanar, localization may be successful even if the projectile passes on one side of the network. See Section 6.3.) As the number of sensors decreased in the experiment by randomly selecting a subset, the probability of trajectories abiding by this rule decreased.

Figure 14: The effect of node localization and orientation errors on azimuth accuracy with 2, 4, 6 and 8 nodes. Note that the chart for 10 nodes is almost identical to the 8-node case; hence, it is omitted.

This also means that even if there are many sensors (i.e.
soldiers), but all of them are right next to each other, the localization rate will suffer. However, when the sensor fusion does provide a result, it is still accurate even with few available sensors and relatively large individual errors. A few consistent observations lead to good accuracy, as the inconsistent ones are discarded by the algorithm. This is also supported by the observation that for the cases with the higher number of sensors (8 or 10), the localization rate is hardly affected by even large errors.

Errors/Sensors  2    4    6    8    10
0 m, 0 deg      54%  87%  94%  95%  96%
2 m, 2 deg      53%  80%  91%  96%  96%
6 m, 0 deg      43%  79%  88%  94%  94%
0 m, 6 deg      44%  78%  90%  93%  94%
6 m, 6 deg      41%  73%  85%  89%  92%

Table 4: Localization rate as a function of the number of sensors used and the sensor node location and orientation errors.

One of the most significant observations on Figure 14 and Table 4 is that there is hardly any difference in the data for 6, 8 and 10 sensors. This means that there is little advantage in adding more nodes beyond 6 sensors as far as accuracy is concerned. The speed of sound depends on the ambient temperature. The current prototype treats it as a constant that is typically set before a test. It would be straightforward to employ a temperature sensor to update the value of the speed of sound periodically during operation. Note also that wind may adversely affect the accuracy of the system. The sensor fusion, however, could incorporate wind speed into its calculations. It would be more complicated than temperature compensation, but could be done. Other practical issues also need to be looked at before a real world deployment. Silencers reduce the muzzle blast energy and hence the effective range at which the system can detect it. However, silencers do not affect the shockwave, and the system would still detect the trajectory and caliber accurately. The range and weapon type could not be estimated without muzzle blast detections. Subsonic weapons
do not produce a shockwave. However, this is not of great significance, since they have shorter range, lower accuracy and much less lethality. Hence, their use is not widespread and they pose less danger in any case. Another issue is the type of ammunition used. Irregular armies may use substandard, even hand-manufactured bullets. This affects the muzzle velocity of the weapon. For weapon classification to work accurately, the system would need to be calibrated with the typical ammunition used by the given adversary.

8. RELATED WORK

Acoustic detection and recognition has been under research since the early fifties. The area has a close relevance to the topic of supersonic flow mechanics [20]. Fansler analyzed the complex near-field pressure waves that occur within a foot of the muzzle blast. Fansler's work gives a good idea of the ideal muzzle blast pressure wave without contamination from echoes or propagation effects [4]. Experiments at greater distances from the muzzle were conducted by Stoughton [18], who measured ballistic shockwaves using calibrated pressure transducers at known locations, measured bullet speeds, and miss distances of 355 meters for 5.56 mm and 7.62 mm projectiles. The results indicate that ground interaction becomes a problem for miss distances of 30 meters or larger. Another area of research is the signal processing of gunfire acoustics. The focus is on the robust detection and length estimation of small caliber acoustic shockwaves and muzzle blasts. Possible techniques for classifying signals as either shockwaves or muzzle blasts include the short-time Fourier Transform (STFT), the Smoothed Pseudo Wigner-Ville distribution (SPWVD), and the discrete wavelet transformation (DWT). Joint time-frequency (JTF) spectrograms are used to analyze the typical separation of the shockwave and muzzle blast transients in both time and frequency. Mays concludes that the DWT is the best method for classifying signals as
either shockwaves or muzzle blasts, because it works well and is less expensive to compute than the SPWVD [10]. The edges of the shockwave are typically well defined and the shockwave length is directly related to the bullet characteristics. A paper by Sadler [14] compares two shockwave edge detection methods: a simple gradient-based detector and a multi-scale wavelet detector. It also demonstrates how the length of the shockwave, as determined by the edge detectors, can be used along with Whitham's equations [20] to estimate the caliber of a projectile. Note that the available computational performance on the sensor nodes, the limited wireless bandwidth and the real-time requirements render these approaches infeasible on our platform. A related topic is the research and development of experimental and prototype shooter location systems. Researchers at BBN have developed the Bullet Ears system [3], which can be installed in a fixed position or worn by soldiers. The fixed system has tetrahedron-shaped microphone arrays with 1.5 meter spacing. The overall system consists of two to three of these arrays spaced 20 to 100 meters from each other. The soldier-worn system has 12 microphones as well as a GPS antenna and orientation sensors mounted on a helmet. There is a low speed RF connection from the helmet to the processing body. An extensive test has been conducted to measure the performance of both types of systems. The fixed system's performance was an order of magnitude better in the angle calculations, while their range performances were matched. The angle accuracy of the fixed system was predominantly less than one degree, while it was around five degrees for the helmet-mounted one. The range accuracy was around 5 percent for both systems. The problem with this and similar centralized systems is that the one or handful of microphone arrays needs to be in line-of-sight of the shooter. A sensor network based solution has the advantage
of widely distributed sensing for better coverage, multipath effect compensation and multiple simultaneous shot resolution [8]. This is especially important for operation in acoustically reverberant urban areas. Note that BBN's current vehicle-mounted system called BOOMERANG, a modified version of Bullet Ears, is currently used in Iraq [1]. The company ShotSpotter specializes in law enforcement systems that report the location of gunfire to police within seconds. The goal of the system is significantly different from that of military systems. ShotSpotter reports 25 m typical accuracy, which is more than enough for police to respond. They are also manufacturing experimental soldier-wearable and UAV-mounted systems for military use [16], but no specifications or evaluation results are publicly available.

9. CONCLUSIONS

The main contribution of this work is twofold. First, the performance of the overall distributed networked system is excellent. Most noteworthy are the trajectory accuracy of one degree, the correct caliber estimation rate of well over 90%, and the close to 100% weapon classification rate for 4 of the 6 weapons tested. The system proved to be very robust when increasing the node location and orientation errors and decreasing the number of available sensors all the way down to a couple. The key factor behind this is the sensor fusion algorithm's ability to reject erroneous measurements. It is also worth mentioning that the results presented here correspond to the first and only test of the system beyond 100 m and with six different weapons. We believe that with the lessons learned in the test, a subsequent field experiment could have shown significantly improved results, especially in range estimation beyond 100 m and in weapon classification for the remaining two weapons that were mistaken for each other the majority of the time during the test. Second, the performance of the system when used in standalone mode, that is, when single
sensors alone provided localization, was also very good. While the overall localization rate of 42% per sensor for shots up to 130 m could be improved, the bearing accuracy of less than a degree and the average 5% range error are remarkable for the handmade prototypes of the low-cost nodes. Note that 87% of the shots were successfully localized by at least one of the ten sensors utilized in standalone mode.

We believe that the technology is mature enough that the next revision of the system could be a commercial one. However, important aspects of the system would still need to be worked on. We have not addressed power management yet. A current node runs on 4 AA batteries for about 12 hours of continuous operation. A deployable version of the sensor node would need to be asleep during normal operation and only wake up when an interesting event occurs. An analog trigger circuit could solve this problem; however, the system would miss the first shot. Instead, the acoustic channels would need to be sampled and stored in a circular buffer while the rest of the board is turned off. When a trigger wakes up the board, the acoustic data would be immediately available. Experiments with a previous-generation sensor board indicated that this could provide a 10x increase in battery life. Other outstanding issues include weatherproof packaging and ruggedization, as well as integration with current military infrastructure.

10. REFERENCES

[1] BBN Technologies website. http://www.bbn.com.
[2] E. Danicki. Acoustic sniper localization. Archives of Acoustics, 30(2):233-245, 2005.
[3] G. L. Duckworth et al. Fixed and wearable acoustic counter-sniper systems for law enforcement. In E. M. Carapezza and D. B. Law, editors, Proc. SPIE Vol. 3577, pages 210-230, Jan. 1999.
[4] K. Fansler. Description of muzzle blast by modified scaling models. Shock and Vibration, 5(1):1-12, 1998.
[5] D. Gay, P. Levis, R. von Behren, M. Welsh, E. Brewer, and D.
Culler. The nesC language: a holistic approach to networked embedded systems. In Proceedings of Programming Language Design and Implementation (PLDI), June 2003.
[6] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister. System architecture directions for networked sensors. In Proc. of ASPLOS 2000, Nov. 2000.
[7] B. Kusý, G. Balogh, P. Völgyesi, J. Sallai, A. Nádas, A. Lédeczi, M. Maróti, and L. Meertens. Node-density independent localization. In Information Processing in Sensor Networks (IPSN 06) SPOTS Track, Apr. 2006.
[8] A. Lédeczi, A. Nádas, P. Völgyesi, G. Balogh, B. Kusý, J. Sallai, G. Pap, S. Dóra, K. Molnár, M. Maróti, and G. Simon. Countersniper system for urban warfare. ACM Transactions on Sensor Networks, 1(1):153-177, Nov. 2005.
[9] M. Maróti. Directed flood-routing framework for wireless sensor networks. In Proceedings of the 5th ACM/IFIP/USENIX International Conference on Middleware, pages 99-114, 2004.
[10] B. Mays. Shockwave and muzzle blast classification via joint time frequency and wavelet analysis. Technical report, Army Research Lab, Adelphi, MD, Sept. 2001.
[11] TinyOS Hardware Platforms. http://tinyos.net/scoop/special/hardware.
[12] Crossbow MICAz (MPR2400) Radio Module. http://www.xbow.com/Products/productsdetails.aspx?sid=101.
[13] PicoBlaze User Resources. http://www.xilinx.com/ipcenter/processor_central/picoblaze/picoblaze_user_resources.htm.
[14] B. M. Sadler, T. Pham, and L. C. Sadler. Optimal and wavelet-based shock wave detection and estimation. Acoustical Society of America Journal, 104:955-963, Aug. 1998.
[15] J. Sallai, B. Kusý, A. Lédeczi, and P. Dutta. On the scalability of routing-integrated time synchronization. In 3rd European Workshop on Wireless Sensor Networks (EWSN 2006), Feb.
2006.
[16] ShotSpotter website. http://www.shotspotter.com/products/military.html.
[17] G. Simon, M. Maróti, A. Lédeczi, G. Balogh, B. Kusý, A. Nádas, G. Pap, J. Sallai, and K. Frampton. Sensor network-based countersniper system. In SenSys '04: Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, pages 1-12, 2004. ACM Press.
[18] R. Stoughton. Measurements of small-caliber ballistic shock waves in air. Acoustical Society of America Journal, 102:781-787, Aug. 1997.
[19] B. A. Weiss, C. Schlenoff, M. Shneier, and A. Virts. Technology evaluations and performance metrics for soldier-worn sensors for ASSIST. In Performance Metrics for Intelligent Systems Workshop, Aug. 2006.
[20] G. Whitham. Flow pattern of a supersonic projectile. Communications on Pure and Applied Mathematics, 5(3):301, 1952.

Shooter Localization and Weapon Classification with Soldier-Wearable Networked Sensors

ABSTRACT

The paper presents a wireless sensor network-based mobile countersniper system. A sensor node consists of a helmet-mounted microphone array, a COTS MICAz mote for internode communication and a custom sensorboard that implements the acoustic detection and Time of Arrival (ToA) estimation algorithms on an FPGA. A 3-axis compass provides self-orientation and Bluetooth is used for communication with the soldier's PDA running the data fusion and the user interface. The heterogeneous sensor fusion algorithm can work with data from a single sensor or it can fuse ToA or Angle of Arrival (AoA) observations of muzzle blasts and ballistic shockwaves from multiple sensors. The system estimates the trajectory, the range, the caliber and the weapon type. The paper presents the system design and the results from an independent evaluation at the US Army Aberdeen Test Center. The system performance is characterized by 1-degree trajectory precision and over 95% caliber
estimation accuracy for all shots, and close to 100% weapon estimation accuracy for 4 out of 6 guns tested.

1. INTRODUCTION

The importance of countersniper systems is underscored by the constant stream of news reports coming from the Middle East. In October 2006 CNN reported on a new tactic employed by insurgents. A mobile sniper team moves around busy city streets in a car, positions itself at a good standoff distance from dismounted US military personnel, takes a single well-aimed shot and immediately melts into the city traffic. By the time the soldiers can react, they are gone. A countersniper system that provides almost immediate shooter location to every soldier in the vicinity would provide clear benefits to the warfighters.

Our team introduced PinPtr, the first sensor network-based countersniper system [17, 8], in 2003. The system is based on potentially hundreds of inexpensive sensor nodes deployed in the area of interest, forming an ad hoc multihop network. The acoustic sensors measure the Time of Arrival (ToA) of muzzle blasts and ballistic shockwaves (pressure waves induced by the supersonic projectile) and send the data to a base station, where a sensor fusion algorithm determines the origin of the shot. PinPtr is characterized by high precision: 1 m average 3D accuracy for shots originating within or near the sensor network, 1-degree bearing precision in both azimuth and elevation, and 10% range accuracy for longer-range shots. The truly unique characteristic of the system is that it works in such reverberant environments as cluttered urban terrain and that it can resolve multiple simultaneous shots. This capability is due to the widely distributed sensing and the unique sensor fusion approach [8]. The system has been tested several times in US Army MOUT (Military Operations in Urban Terrain) facilities.

The obvious disadvantage of such a system is its static nature. Once the sensors are distributed, they cover a
certain area. Depending on the operation, the deployment may be needed for an hour or a month, but eventually the area loses its importance. It is not practical to gather and reuse the sensors, especially under combat conditions. Even if the sensors are cheap, it is still a waste and a logistical problem to provide a continuous stream of sensors as the operations move from place to place. As it is primarily the soldiers that the system protects, a natural extension is to mount the sensors on the soldiers themselves. While there are vehicle-mounted countersniper systems [1] available commercially, we are not aware of a deployed system that protects dismounted soldiers. A helmet-mounted system was developed in the mid-90s by BBN [3], but it was not continued beyond the DARPA program that funded it.

Moving from a static sensor network-based solution to a highly mobile one presents significant challenges. The sensor positions and orientation need to be constantly monitored. As soldiers may work in groups of as few as four people, the number of sensors measuring the acoustic phenomena may be an order of magnitude smaller than before. Moreover, the system should be useful even to a single soldier. Finally, additional requirements called for caliber estimation and weapon classification in addition to source localization.

The paper presents the design and evaluation of our soldier-wearable mobile countersniper system. It describes the hardware and software architecture, including the custom sensor board equipped with a small microphone array and connected to a COTS MICAz mote [12]. Special emphasis is paid to the sensor fusion technique that estimates the trajectory, range, caliber and weapon type simultaneously. The results and analysis of an independent evaluation of the system at the US Army Aberdeen Test Center are also presented.

2. APPROACH

The firing of a typical military rifle, such as the AK47 or M16, produces two distinct acoustic
phenomena. The muzzle blast is generated at the muzzle of the gun and travels at the speed of sound. The supersonic projectile generates an acoustic shockwave, a kind of sonic boom. The wavefront has a conical shape whose angle depends on the Mach number, the speed of the bullet relative to the speed of sound (the cone half-angle is arcsin(1/M)). The shockwave has a characteristic shape resembling a capital N. The rise time at both the start and end of the signal is very fast, under 1 µsec. The length is determined by the caliber and the miss distance, the distance between the trajectory and the sensor; it is typically a few hundred µsec. Once a trajectory estimate is available, the shockwave length can be used for caliber estimation.

Our system is based on four microphones connected to a sensorboard. The board detects shockwaves and muzzle blasts and measures their ToA. If at least three acoustic channels detect the same event, its AoA is also computed. If both the shockwave and muzzle blast AoA are available, a simple analytical solution gives the shooter location, as shown in Section 6. As the microphones are close to each other, typically 2-4", we cannot expect very high precision. Also, this method does not estimate a trajectory; in fact, an infinite number of trajectory-bullet speed pairs satisfy the observations. However, the sensorboards are also connected to COTS MICAz motes, and they share their AoA and ToA measurements, as well as their own location and orientation, with each other using a multihop routing service [9]. A hybrid sensor fusion algorithm then estimates the trajectory, the range, the caliber and the weapon type based on all available observations.

The sensorboard is also Bluetooth capable for communication with the soldier's PDA or laptop computer. A wired USB connection is also available. The sensor fusion algorithm and the user interface get their data through one of these channels. The orientation of the microphone array at the time of
detection is provided by a 3-axis digital compass.\nCurrently the system assumes that the soldier's PDA is GPS-capable; the system does not provide a self-localization service itself.\nHowever, the accuracy of GPS is a few meters, degrading the overall accuracy of the system.\nRefer to Section 7 for an analysis.\nFigure 1: Acoustic sensorboard/mote assembly.\nThe latest generation sensorboard features a Texas Instruments CC-1000 radio enabling the high-precision radio interferometric self-localization approach we have developed separately [7].\nHowever, we leave the integration of the two technologies for future work.\n3.\nHARDWARE\nSince the first static version of our system in 2003, the sensor nodes have been built upon the UC Berkeley/Crossbow MICA product line [11].\nAlthough rudimentary acoustic signal processing can be done on these microcontroller-based boards, they do not provide the required computational performance for shockwave detection and angle of arrival measurements, where multiple signals from different microphones need to be processed in parallel at a high sampling rate.\nOur 3rd generation sensorboard is designed to be used with MICAz motes--in fact it has almost the same size as the mote itself (see Figure 1).\nThe board utilizes a powerful Xilinx XC3S1000 FPGA chip with various standard peripheral IP cores, multiple soft processor cores and custom logic for the acoustic detectors (Figure 2).\nThe onboard Flash (4 MB) and PSRAM (8 MB) modules allow storing raw samples of several acoustic events, which can be used to build libraries of various acoustic signatures and for refining the detection cores off-line.\nAlso, the external memory blocks can store program code and data used by the soft processor cores on the FPGA.\nThe board supports four independent analog channels sampled at up to 1 MS/s (million samples per second).\nThese channels, featuring an electret microphone (Panasonic WM64PNT), amplifiers with controllable gain (30-60 dB) and a
12-bit serial ADC (Analog Devices AD7476), reside on separate tiny boards which are connected to the main sensorboard with ribbon cables.\nThis partitioning enables the use of truly different audio channels (e.g., slower sampling frequency, different gain or dynamic range) and also results in less noisy measurements by avoiding long analog signal paths.\nThe sensor platform offers a rich set of interfaces and can be integrated with existing systems in diverse ways.\nAn RS232 port and a Bluetooth (BlueGiga WT12) wireless link with virtual UART emulation are directly available on the board and provide simple means to connect the sensor to PCs and PDAs.\nThe mote interface consists of an I2C bus along with an interrupt and GPIO line (the latter is used for precise time synchronization between the board and the mote).\nFigure 2: Block diagram of the sensorboard.\nThe motes are equipped with IEEE 802.15.4 compliant radio transceivers and support ad-hoc wireless networking among the nodes and to/from the base station.\nThe sensorboard also supports full-speed USB transfers (with custom USB dongles) for uploading recorded audio samples to the PC.\nThe on-board JTAG chain--directly accessible through a dedicated connector--contains the FPGA part and configuration memory and provides in-system programming and debugging facilities.\nThe integrated Honeywell HMR3300 digital compass module provides heading, pitch and roll information with 1° accuracy, which is essential for calculating and combining directional estimates of the detected events.\nDue to the complex voltage requirements of the FPGA, the power supply circuitry is implemented on the sensorboard and provides power both locally and to the mote.\nWe used a quad pack of rechargeable AA batteries as the power source (although any other configuration that meets the voltage requirements is viable).\nThe FPGA core (1.2 V) and I/O (3.3 V) voltages are generated by a highly efficient buck switching
regulator.\nThe FPGA configuration (2.5 V) and a separate 3.3 V power net are fed by low-current LDOs; the latter provides independent power to the mote and to the Bluetooth radio.\nThe regulators--except the last one--can be turned on/off from the mote or through the Bluetooth radio (via GPIO lines) to save power.\nThe first prototype of our system employed 10 sensor nodes.\nSome of these nodes were mounted on military kevlar helmets with the microphones directly attached to the surface at about 20 cm separation, as shown in Figure 3 (a).\nThe rest of the nodes were mounted in plastic enclosures (Figure 3 (b)) with the microphones placed near the corners of the boxes to form approximately 5 cm × 10 cm rectangles.\n4.\nSOFTWARE ARCHITECTURE\nThe sensor application relies on three subsystems exploiting three different computing paradigms, as shown in Figure 4.\nAlthough each of these execution models suits its domain-specific tasks extremely well, this diversity presents a challenge for software development and system integration.\nFigure 3: Sensor prototypes mounted on a kevlar helmet (a) and in a plastic box on a tripod (b).\nThe sensor fusion and user interface subsystem runs on PDAs and was implemented in Java.\nThe sensing and signal processing tasks are executed by an FPGA, which also acts as a bridge between various wired and wireless communication channels.\nThe ad-hoc internode communication, time synchronization and data sharing are the responsibilities of a microcontroller-based radio module.\nSimilarly, the application employs a wide variety of communication protocols such as Bluetooth® and IEEE 802.15.4 wireless links, as well as optional UARTs, I2C and/or USB buses.\nFigure 4: Software architecture diagram.\nThe sensor fusion module receives and unpacks raw measurements (time stamps and feature vectors) from the sensorboard through the Bluetooth® link.\nAlso, it fine-tunes the execution of the signal
processing cores by setting parameters through the same link.\nNote that measurements from other nodes along with their location and orientation information also arrive from the sensorboard, which acts as a gateway between the PDA and the sensor network.\nThe handheld device obtains its own GPS location data and directly receives orientation information through the sensorboard.\nThe results of the sensor fusion are displayed on the PDA screen with low latency.\nSince the application is implemented in pure Java, it is portable across different PDA platforms.\nThe border between software and hardware is considerably blurred on the sensor board.\nThe IP cores--implemented in hardware description languages (HDL) on the reconfigurable FPGA fabric--closely resemble hardware building blocks.\nHowever, some of them--most notably the soft processor cores--execute true software programs.\nThe primary tasks of the sensor board software are 1) acquiring data samples from the analog channels, 2) processing acoustic data (detection), and 3) providing access to the results and run-time parameters through different interfaces.\nAs shown in Figure 4, a centralized virtual register file contains the address decoding logic, the registers for storing parameter values and results, and the point-to-point data buses to and from the peripherals.\nThus, it effectively integrates the building blocks within the sensorboard and decouples the various communication interfaces.\nThis architecture enabled us to deploy the same set of sensors either in a centralized scenario, where the ad-hoc mote network (using the I2C interface) collected and forwarded the results to a base station, or in a decentralized system where the local PDAs execute the sensor fusion on the data obtained through the Bluetooth® interface (and optionally from other sensors through the mote interface).\nThe same set of registers is also accessible through a UART link with a terminal emulation program.\nAlso,
because the low-level interfaces are hidden by the register file, one can easily add/replace these with new ones (e.g., the first generation of motes supported a standard µP interface bus on the sensor connector, which was dropped in later designs).\nThe most important results are the time stamps of the detected events.\nThese time stamps and all other timing information (parameters, acoustic event features) are based on a 1 MHz clock and an internal timer on the FPGA.\nThe time conversion and synchronization between the sensor network and the board is done by the mote by periodically requesting the capture of the current timer value through a dedicated GPIO line and reading the captured value from the register file through the I2C interface.\nBased on the current and previous readings and the corresponding mote local time stamps, the mote can calculate and maintain the scaling factor and offset between the two time domains.\nThe mote interface is implemented by the I2C slave IP core and a thin adaptation layer which provides a data and address bus abstraction on top of it.\nThe maximum effective bandwidth is 100 Kbps through this interface.\nThe FPGA contains several UART cores as well: for communicating with the on-board Bluetooth® module, for controlling the digital compass and for providing a wired RS232 link through a dedicated connector.\nThe control, status and data registers of the UART modules are available through the register file.\nThe higher level protocols on these lines are implemented by Xilinx PicoBlaze microcontroller cores [13] and corresponding software programs.\nOne of them provides a command line interface for test and debug purposes, while the other is responsible for parsing compass readings.\nBy default, they are connected to the RS232 port and to the on-board digital compass line respectively; however, they can be rewired to any communication interface by changing the register file base address in the programs (e.g.
the command line interface can be provided through the Bluetooth® channel).\nTwo of the external interfaces are not accessible through the register file: a high speed USB link and the SRAM interface are tied to the recorder block.\nThe USB module implements a simple FIFO with parallel data lines connected to an external FT245R USB device controller.\nThe RAM driver implements data read/write cycles with correct timing and is connected to the on-board pseudo SRAM.\nThese interfaces provide 1 MB/s effective bandwidth for downloading recorded audio samples, for example.\nThe data acquisition and signal processing paths exhibit clear symmetry: the same set of IP cores are instantiated four times (i.e. the number of acoustic channels) and run independently.\nThe signal paths "meet" only just before the register file.\nEach of the analog channels is driven by a serial A/D core for providing a 20 MHz serial clock and shifting in 8-bit data samples at 1 MS/s and a digital potentiometer driver for setting the required gain.\nEach channel has its own shockwave and muzzle blast detector, which are described in Section 5.\nThe detectors fetch run-time parameter values from the register file and store their results there as well.\nThe coordinator core constantly monitors the detection results and generates a mote interrupt promptly upon "full detection" or after a reasonable timeout following "partial detection".\nThe recorder component is not used in the final deployment; however, it is essential for development purposes, for refining parameter values for new types of weapons or for other acoustic sources.\nThis component receives the samples from all channels and stores them in circular buffers in the PSRAM device.\nIf the signal amplitude on one of the channels crosses a predefined threshold, the recorder component suspends the sample collection with a predefined delay and dumps the contents of the buffers through the USB link.\nThe length of these buffers and
delays, the sampling rate, the threshold level and the set of recorded channels can be (re)configured at run-time through the register file.\nNote that the core operates independently from the other signal processing modules; therefore, it can be used to validate the detection results off-line.\nThe FPGA cores are implemented in VHDL, the PicoBlaze programs are written in assembly.\nThe complete configuration occupies 40% of the resources (slices) of the FPGA and the maximum clock speed is 30 MHz, which is safely higher than the speed used with the actual device (20 MHz).\nThe MICAz motes are responsible for distributing measurement data across the network, which drastically improves the localization and classification results at each node.\nBesides a robust radio (MAC) layer, the motes require two essential middleware services to achieve this goal.\nThe messages need to be propagated in the ad-hoc multihop network using a routing service.\nWe successfully integrated the Directed Flood-Routing Framework (DFRF) [9] in our application.\nApart from automatic message aggregation and efficient buffer management, the most unique feature of DFRF is its plug-in architecture, which accepts custom routing policies.\nRouting policies are state machines that govern how received messages are stored, resent or discarded.\nExample policies include spanning tree routing, broadcast, geographic routing, etc.
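The plug-in routing policy idea can be sketched in a few lines. The following Python fragment is only an illustration of the concept, not the actual nesC implementation; the names BroadcastPolicy and flood, and the simplified store/resend/discard interface, are ours:

```python
# Minimal sketch of a DFRF-style plug-in routing policy (illustrative
# only; the real framework is implemented in nesC on the motes).
class BroadcastPolicy:
    """State machine: resend every new message once, discard duplicates."""
    def __init__(self):
        self.seen = set()

    def on_receive(self, msg_id):
        # The policy decides the fate of each received message.
        if msg_id in self.seen:
            return "discard"
        self.seen.add(msg_id)
        return "resend"

def flood(links, source, msg_id, policies):
    """Propagate one message through an ad-hoc network (neighbor map
    `links`); each node consults its own policy state machine."""
    frontier = [source]
    delivered = set()
    while frontier:
        node = frontier.pop()
        if policies[node].on_receive(msg_id) == "resend":
            delivered.add(node)
            frontier.extend(links[node])
    return delivered
```

A gradient-routing policy would differ only in its on_receive logic (e.g., resend only toward lower hop counts), which is exactly the kind of swap the plug-in architecture permits at run-time.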
Different policies can be used for different messages concurrently, and the application is able to change the underlying policies at run-time (e.g., because of the changing RF environment or power budget).\nIn fact, we switched several times between a simple but lavish broadcast policy and a more efficient gradient routing in the field.\nCorrelating ToA measurements requires a common time base and precise time synchronization in the sensor network.\nThe Routing Integrated Time Synchronization (RITS) [15] protocol relies on very accurate MAC-layer time-stamping to embed in the message itself the cumulative delay that a data message accrued since the time of the detection.\nThat is, every node measures the time the message spent there and adds this to the number in the time delay slot of the message, right before the message leaves the current node.\nEvery receiving node can subtract the delay from its current time to obtain the detection time in its local time reference.\nThe service provides very accurate time conversion (a few µs of error per hop), which is more than adequate for this application.\nNote that the motes also need to convert the sensorboard time stamps to mote time, as described earlier.\nThe mote application is implemented in nesC [5] and runs on top of TinyOS [6].\nWith its 3 KB RAM and 28 KB program space (ROM) requirement, it easily fits on the MICAz motes.\n5.\nDETECTION ALGORITHM\nThere are several characteristics of acoustic shockwaves and muzzle blasts which distinguish their detection and signal processing algorithms from regular audio applications.\nBoth events are transient by nature and present very intense stimuli to the microphones.\nThis is especially problematic with low-cost electret microphones designed for picking up regular speech or music.\nAlthough mechanical damping of the microphone membranes can mitigate the problem, this approach is not without side effects.\nThe detection algorithms have to be robust
enough to handle severe nonlinear distortion and transitory oscillations.\nSince the muzzle blast signature closely follows the shockwave signal and because of potential automatic weapon bursts, it is extremely important to settle the audio channels and the detection logic as soon as possible after an event.\nAlso, precise angle of arrival estimation necessitates a high sampling frequency (in the MHz range) and accurate event detection.\nMoreover, the detection logic needs to process multiple channels in parallel (4 channels on our existing hardware).\nThese requirements dictated simple and robust algorithms both for muzzle blast and shockwave detection.\nInstead of using mundane energy detectors--which might not be able to distinguish the two different events--the applied detectors strive to find the most important characteristics of the two signals in the time domain using simple state machine logic.\nThe detectors are implemented as independent IP cores within the FPGA--one pair for each channel.\nThe cores are run-time configurable and provide detection event signals with high precision time stamps and event specific feature vectors.\nAlthough the cores run independently and in parallel, a crude local fusion module integrates them by shutting down those cores which missed their events after a reasonable timeout and by generating a single detection message towards the mote.\nAt this point, the mote can read and forward the detection times and features and is responsible for restarting the cores afterwards.\nThe most conspicuous characteristics of an acoustic shockwave (see Figure 5 (a)) are the steep rising edges at the beginning and end of the signal.\nFigure 5: Shockwave signal generated by a 5.56 × 45 mm NATO projectile (a) and the state machine of the detection algorithm (b).\nAlso, the length of the N-wave is fairly predictable--as described in Section 6.5--and is relatively short (200-300 µs).\nThe shockwave detection core is
continuously looking for two rising edges within a given interval.\nThe state machine of the algorithm is shown in Figure 5 (b).\nThe input parameters are the minimum steepness of the edges (D, E), and the bounds on the length of the wave (Lmin, Lmax).\nThe only feature calculated by the core is the length of the observed shockwave signal.\nIn contrast to shockwaves, the muzzle blast signatures are characterized by a long initial period (1-5 ms) where the first half period is significantly shorter than the second half [4].\nDue to the physical limitations of the analog circuitry described at the beginning of this section, irregular oscillations and glitches might show up within this longer time window, as can be clearly seen in Figure 6 (a).\nTherefore, the real challenge for the matching detection core is to identify the first and second half periods properly.\nThe state machine (Figure 6 (b)) does not work on the raw samples directly but is fed by a zero crossing (ZC) encoder.\nAfter the initial triggering, the detector attempts to collect those ZC segments which belong to the first period (positive amplitude) while discarding too short (in our terminology: garbage) segments--effectively implementing a rudimentary low-pass filter in the ZC domain.\nAfter it encounters a sufficiently long negative segment, it runs the same collection logic for the second half period.\nIf too much garbage is discarded in the collection phases, the core resets itself to prevent the (false) detection of halves from completely different periods separated by rapid oscillation or noise.\nFinally, if the constraints on the total length and on the length ratio hold, the core generates a detection event along with the actual length, amplitude and energy of the period, calculated concurrently.\nThe initial triggering mechanism is based on two amplitude thresholds: one static (but configurable) amplitude level and a dynamically computed one.\nThe latter one is essential to adapt the
sensor to different ambient noise environments and to temporarily suspend the muzzle blast detector after a shockwave event (oscillations in the analog section or reverberations in the sensor enclosure might otherwise trigger false muzzle blast detections).\nThe dynamic noise level is estimated by a single pole recursive low-pass filter (cutoff @ 0.5 kHz) on the FPGA.\nFigure 6: Muzzle blast signature (a) produced by an M16 assault rifle and the corresponding detection logic (b).\nThe detection cores were originally implemented in Java and evaluated on pre-recorded signals because of much faster test runs and more convenient debugging facilities.\nLater on, they were ported to VHDL and synthesized using the Xilinx ISE tool suite.\nThe functional equivalence between the two implementations was tested by VHDL test benches and Python scripts which provided an automated way to exercise the detection cores on the same set of pre-recorded signals and to compare the results.\n6.\nSENSOR FUSION\nThe sensor fusion algorithm receives detection messages from the sensor network and estimates the bullet trajectory, the shooter position, the caliber of the projectile and the type of the weapon.\nThe algorithm consists of well separated computational tasks outlined below:\n1.\nCompute muzzle blast and shockwave directions of arrival for each individual sensor (see 6.1).\n2.\nCompute range estimates; this algorithm can analytically fuse a pair of shockwave and muzzle blast AoA estimates (see 6.2).\n3.\nCompute a single trajectory from all shockwave measurements (see 6.3).\n4.\nIf a trajectory is available, compute the range (see 6.4); otherwise compute the shooter position first and then the trajectory based on it (see 6.4).\n5.\nIf the trajectory is available, compute the caliber (see 6.5).\n6.\nIf the caliber is available, compute the weapon type (see 6.6).\nWe describe each step in detail in the following sections.\n6.1 Direction of arrival\nThe first step of the sensor fusion is to calculate the muzzle blast and
shockwave AoA-s for each sensorboard.\nEach sensorboard has four microphones that measure the ToA-s.\nSince the microphone spacing is orders of magnitude smaller than the distance to the sound source, we can approximate the approaching sound wave front with a plane (far field assumption).\nLet us formalize the problem for 3 microphones first.\nLet P1, P2 and P3 be the positions of the microphones ordered by time of arrival t1 < t2 < t3.\nFirst we apply a simple geometry validation step.\nThe measured time difference between two microphones cannot be larger than the sound propagation time between the two microphones: |ti - tj| <= |PiPj|/c + ε, where c is the speed of sound and ε is the maximum measurement error.\nIf this condition does not hold, the corresponding detections are discarded.\nLet v (x, y, z) be the normal vector of the unknown direction of arrival.\nWe also use r1 (x1, y1, z1), the vector from P1 to P2, and r2 (x2, y2, z2), the vector from P1 to P3.\nLet's consider the projection of the direction of the motion of the wave front (v) to r1 divided by the speed of sound (c).\nThis gives us how long it takes the wave front to propagate from P1 to P2: (v · r1)/c = t2 - t1, and similarly (v · r2)/c = t3 - t1.\nMoving from vectors to coordinates using the dot product definition, together with the unit-length constraint x^2 + y^2 + z^2 = 1, leads to a quadratic system.\nWe omit the solution steps here, as they are straightforward but long.\nThere are two solutions (if the source is on the P1P2P3 plane, the two solutions coincide).\nWe use the fourth microphone's measurement--if there is one--to eliminate one of them.\nOtherwise, both solutions are considered for further processing.\n6.2 Muzzle-shock fusion\nFigure 7: Section plane of a shot (at P) and two sensors (at P1 and at P2).\nOne sensor detects the muzzle blast's, the other the shockwave's time and direction of arrival.\nConsider the situation in Figure 7.\nA shot was fired from P at time t.
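The direction-of-arrival step of Section 6.1 can be illustrated numerically. The following Python sketch is a special case chosen for clarity, not the deployed code: it assumes r1 and r2 are two perpendicular baselines of equal length d, which reduces the quadratic system to closed form and exposes the two mirror solutions:

```python
import math

# Far-field AoA sketch (Section 6.1), assuming the illustrative
# geometry r1 = (d, 0, 0) and r2 = (0, d, 0); the real microphone
# array is arbitrary and requires solving the full quadratic system.
def aoa(d, c, t1, t2, t3):
    """Return the candidate unit direction vectors v of the wave front
    given the times of arrival at the three microphones."""
    x = c * (t2 - t1) / d      # from (v . r1)/c = t2 - t1
    y = c * (t3 - t1) / d      # from (v . r2)/c = t3 - t1
    z2 = 1.0 - x * x - y * y   # unit-length constraint on v
    if z2 < 0:
        return []              # inconsistent measurements: discard
    z = math.sqrt(z2)
    # Two mirror solutions; they coincide when the source lies in the
    # microphone plane, and a fourth microphone would pick one.
    return [(x, y, z), (x, y, -z)] if z > 0 else [(x, y, 0.0)]
```

With microphone spacing of only a few centimeters, a few microseconds of ToA error translate into sizable angular error, which is why the fusion step combines AoA estimates from many sensorboards.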
Both P and t are unknown.\nWe have one muzzle blast and one shockwave detection by two different sensors with AoA and hence ToA information available.\nThe muzzle blast detection is at position P1 with time t1 and AoA u.\nThe shockwave detection is at P2 with time t2 and AoA v. u and v are normal vectors.\nIt is shown below that these measurements are sufficient to compute the position of the shooter (P).\nLet P2~ be the point on the extended shockwave cone surface where PP2~ is perpendicular to the surface.\nNote that PP2~ is parallel with v.\nSince P2~ is on the cone surface which hits P2, a sensor at P2~ would detect the same shockwave time of arrival (t2).\nThe cone surface travels at the speed of sound (c), so we can express P using P2~, containing only one unknown t.\nOne obtains:\nFrom here we can calculate the shooter position P. Let's consider the special single-sensor case where P1 = P2 (one sensor detects both shockwave and muzzle blast AoA).\nIn this case:\nSince u and v are not used separately, only their dot product uv, the absolute orientation of the sensor can be arbitrary; we still get t, which gives us the range.\nHere we assumed that the shockwave is a cone, which is only true for constant projectile speeds.\nIn reality, the angle of the cone slowly grows; the surface resembles one half of an American football.\nThe decelerating bullet results in a smaller time difference between the shockwave and the muzzle blast detections because the shockwave generation slows down with the bullet.\nA smaller time difference results in a smaller range, so the above formula underestimates the true range.\nHowever, it can still be used with a proper deceleration correction function.\nWe leave this for future work.\n6.3 Trajectory estimation\nDanicki showed that the bullet trajectory and speed can be computed analytically from two independent shockwave measurements where both ToA and AoA are measured [2].\nThe method gets more sensitive to measurement errors as the two
shockwave directions get closer to each other.\nIn the special case when both directions are the same, the trajectory cannot be computed.\nIn a real world application, the sensors are typically deployed approximately on a plane.\nIn this case, all sensors located on one "side" of the trajectory measure almost the same shockwave AoA.\nTo avoid this error sensitivity problem, we consider shockwave measurement pairs only if the direction of arrival difference is larger than a certain threshold.\nWe have multiple sensors and one sensor can report two different directions (when only three microphones detect the shockwave).\nHence, we typically have several trajectory candidates, i.e. one for each AoA pair over the threshold.\nWe applied an outlier filtering and averaging method to fuse together the shockwave direction and time information and come up with a single trajectory.\nAssume that we have N individual shockwave AoA measurements.\nLet's take all possible unordered pairs where the direction difference is above the mentioned threshold and compute the trajectory for each.\nThis gives us at most N(N-1)/2 trajectories.\nA trajectory is represented by one point pi and the normal vector vi (where i is the trajectory index).\nWe define the distance of two trajectories based on the dot product of their normal vectors, and call two trajectories neighbors if their distance is within a radius parameter R.\nThe largest neighbor set is considered to be the core set C; all other trajectories are outliers.\nThe core set can be found in O(N^2) time.\nThe trajectories in the core set are then averaged to get the final trajectory.\nIt can happen that we cannot form any sensor pairs because of the direction difference threshold.\nThis means all sensors are on the same side of the trajectory.\nIn this case, we first compute the shooter position (described in the next section), which fixes p, making v the only unknown.\nTo find v in this case, we use a simple high resolution grid search and minimize an error function based on the shockwave
directions.\nWe have made experiments to utilize the measured shockwave length in the trajectory estimation.\nThere are some promising results, but it needs further research.\n6.4 Shooter position estimation\nThe shooter position estimation algorithm aggregates the following heterogeneous information generated by earlier computational steps:\n1.\ntrajectory, 2.\nmuzzle blast ToA at a sensor, 3.\nmuzzle blast AoA at a sensor, which is effectively a bearing estimate to the shooter, and 4.\nrange estimate at a sensor (when both shockwave and muzzle blast AoA are available).\nSome sensors report only ToA, some have bearing estimate(s) also, and some have range estimate(s) as well, depending on the number of successful muzzle blast and shockwave detections by the sensor.\nFor an example, refer to Figure 8.\nNote that a sensor may have two different bearing and range estimates.\nThree detections give two possible AoA-s for the muzzle blast (i.e. bearing) and/or the shockwave.\nFurthermore, the combination of two different muzzle blast and shockwave AoA-s may result in two different ranges.\nFigure 8: Example of heterogeneous input data for the shooter position estimation algorithm.\nAll sensors have ToA measurements (t1, t2, t3, t4, t5), one sensor has a single bearing estimate (v2), one sensor has two possible bearings (v3, v3~) and one sensor has two bearing and two range estimates (v1, v1~, r1, r1~).\nIn a multipath environment, these detections will not only contain Gaussian noise, but also possibly large errors due to echoes.\nIt has been shown in our earlier work that a similar problem can be solved efficiently with an interval arithmetic based bisection search algorithm [8].\nThe basic idea is to define a discrete consistency function over the area of interest and subdivide the space into 3D boxes.\nFor any given 3D box, this function gives the number of measurements supporting the hypothesis that the shooter was within that box.\nThe search starts with a box large
enough to contain the whole area of interest, then zooms in by dividing and evaluating boxes.\nThe box with the maximum consistency is divided until the desired precision is reached.\nBacktracking is possible to avoid getting stuck in a local maximum.\nThis approach has been shown to be fast enough for online processing.\nNote, however, that when the trajectory has already been calculated in previous steps, the search needs to be done only along the trajectory, making it orders of magnitude faster.\nNext let us describe in detail how the consistency function is calculated.\nConsider a three-dimensional box B whose consistency value we would like to compute.\nFirst we consider only the ToA information.\nIf one sensor has multiple ToA detections, we use the average of those times, so one sensor supplies at most one ToA estimate.\nFor each ToA, we can calculate the corresponding time of the shot, since the origin is assumed to be in box B.\nSince it is a box and not a single point, this gives us an interval for the shot time.\nThe maximum number of overlapping time intervals gives us the value of the consistency function for B.\nFor a detailed description of the consistency function and search algorithm, refer to [8].\nHere we extend the approach in the following way.\nWe modify the consistency function based on the bearing and range data from individual sensors.\nA bearing estimate supports B if the line segment starting from the sensor with the measured direction intersects the box B.\nA range supports B if the sphere with the radius of the range and origin of the sensor intersects B.
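The ToA part of the consistency function can be sketched as follows. This is a minimal Python illustration under our own simplifying assumptions (axis-aligned boxes, point sensors, no bearing/range terms); the helper names are hypothetical:

```python
# Sketch of the ToA consistency function (Section 6.4): each ToA t at a
# sensor implies the shot time lies in [t - dmax/c, t - dmin/c], where
# dmin/dmax bound the sensor-to-box distance; the consistency value is
# the maximum number of overlapping shot-time intervals.
def dist_range(sensor, box):
    """Min and max distance from a point sensor to an axis-aligned 3D
    box given as ((xlo, xhi), (ylo, yhi), (zlo, zhi))."""
    lo2 = hi2 = 0.0
    for p, (a, b) in zip(sensor, box):
        near = min(max(p, a), b)                     # closest coordinate
        far = a if abs(p - a) > abs(p - b) else b    # farthest coordinate
        lo2 += (p - near) ** 2
        hi2 += (p - far) ** 2
    return lo2 ** 0.5, hi2 ** 0.5

def consistency(box, sensors, toas, c=340.0):
    """Count the maximum number of overlapping shot-time intervals via
    a sweep over interval endpoints."""
    events = []
    for s, t in zip(sensors, toas):
        dmin, dmax = dist_range(s, box)
        events.append((t - dmax / c, 1))    # interval opens
        events.append((t - dmin / c, -1))   # interval closes
    best = cur = 0
    for _, delta in sorted(events, key=lambda e: (e[0], -e[1])):
        cur += delta
        best = max(best, cur)
    return best
```

The bisection search then divides the box with the largest consistency value and recurses, exactly as described above; the bearing and range extensions simply increment the count for each estimate consistent with the box.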
Instead of simply checking whether the position specified by the corresponding bearing-range pairs falls within B, this eliminates the sensor's possible orientation error.\nThe value of the consistency function is incremented by one for each bearing and range estimate that is consistent with B.\n6.5 Caliber estimation\nThe shockwave signal characteristics have been studied before by Whitham [20].\nHe showed that the shockwave period T is related to the projectile diameter d, the length l, the perpendicular miss distance b from the bullet trajectory to the sensor, the Mach number M and the speed of sound c.\nFigure 9: Shockwave length and miss distance relationship.\nEach data point represents one sensorboard after an aggregation of the individual measurements of the four acoustic channels.\nThree different caliber projectiles have been tested (196 shots, 10 sensors).\nTo illustrate the relationship between miss distance and shockwave length, here we use all 196 shots with three different caliber projectiles fired during the evaluation.\n(During the evaluation we used data obtained previously using a few practice shots per weapon.)\n10 sensors (4 microphones per sensor) measured the shockwave length.\nFor each sensor, we considered the shockwave length estimation valid if at least three out of four microphones agreed on a value with at most 5 microsecond variance.\nThis filtering leads to an 86% report rate per sensor and eliminates large measurement errors.\nThe experimental data is shown in Figure 9.\nWhitham's formula suggests that the shockwave length for a given caliber can be approximated with a power function of the miss distance (with a 1/4 exponent).\nBest fit functions on our data are:\nTo evaluate a shot, we take the caliber whose approximation function results in the smallest RMS error of the filtered sensor readings.\nThis method has less than 1% caliber estimation error when an accurate trajectory estimate is available.\nIn other words, caliber
estimation only works if enough shockwave detections are made by the system to compute a trajectory.\n6.6 Weapon estimation\nWe analyzed all measured signal characteristics to find weapon-specific information.\nUnfortunately, we concluded that the observed muzzle blast signature is not characteristic enough of the weapon for classification purposes.\nThe reflections of the high energy muzzle blast from the environment have a much higher impact on the muzzle blast signal shape than the weapon itself.\nShooting the same weapon from different places caused larger differences in the recorded signal than shooting different weapons from the same place.\nFigure 10: AK47 and M240 bullet deceleration measurements.\nBoth weapons have the same caliber.\nData is approximated using simple linear regression.\nFigure 11: M16, M249 and M4 bullet deceleration measurements.\nAll weapons have the same caliber.\nData is approximated using simple linear regression.\nHowever, the measured speed of the projectile and its caliber showed good correlation with the weapon type.\nThis is because for a given weapon type and ammunition pair, the muzzle velocity is nearly constant.\nIn Figures 10 and 11 we can see the relationship between the range and the measured bullet speed for different calibers and weapons.\nIn the supersonic speed range, the bullet deceleration can be approximated with a linear function.\nIn case of the 7.62 mm caliber, the two tested weapons (AK47, M240) can be clearly separated (Figure 10).\nUnfortunately, this is not necessarily true for the 5.56 mm caliber.\nThe M16 with its higher muzzle speed can still be well classified, but the M4 and M249 weapons seem practically indistinguishable (Figure 11).\nHowever, this may be partially due to the limited number of practice shots we were able to take before the actual testing began.\nMore training data may reveal better separation between the two weapons since their published muzzle velocities do differ somewhat.\nThe system 
carries out weapon classification in the following manner.\nOnce the trajectory is known, the speed can be calculated for each sensor based on the shockwave geometry.\nTo evaluate a shot, we choose the weapon type whose deceleration function results in the smallest RMS error of the estimated range-speed pairs for the estimated caliber class.\n7.\nRESULTS\nAn independent evaluation of the system was carried out by a team from NIST at the US Army Aberdeen Test Center in April 2006 [19].\nThe experiment was set up on a shooting range with mock-up wooden buildings and walls for supporting elevated shooter positions and generating multipath effects.\nFigure 12 shows the user interface with an aerial photograph of the site.\n10 sensor nodes were deployed on surveyed points in an approximately 30 \u00d7 30 m area.\nThere were five fixed targets behind the sensor network.\nSeveral firing positions were located at each of the firing lines at 50, 100, 200 and 300 meters.\nThese positions were known to the evaluators, but not to the operators of the system.\nSix different weapons were utilized: AK47 and M240 firing 7.62 mm projectiles, M16, M4 and M249 with 5.56 mm ammunition and the .50 caliber M107.\nNote that the sensors remained static during the test.\nThe primary reason for this is that nobody is allowed downrange during live fire tests.\nUtilizing some kind of remote control platform would have been too involved for the limited time the range was available for the test.\nThe experiment, therefore, did not test the mobility aspect of the system.\nDuring the one-day test, there were 196 shots fired.\nThe results are summarized in Table 1.\nThe system detected all shots successfully.\nSince a ballistic shockwave is a unique acoustic phenomenon, it makes the detection very robust.\nThere were no false positives for shockwaves, but there were a handful of false muzzle blast detections due to parallel tests of artillery at a nearby range.\nTable 1: Summary of results fusing 
all available sensor observations.\nAll shots were successfully detected, so the detection rate is omitted.\nLocalization rate means the percentage of shots whose trajectory the sensor fusion was able to estimate.\nThe caliber accuracy rate is relative to the shots localized and not to all the shots, because caliber estimation requires the trajectory.\nThe trajectory error is broken down into azimuth in degrees and the actual distance of the shooter from the trajectory.\nThe distance error shows the distance between the real shooter position and the estimated shooter position.\nAs such, it includes the errors of both the trajectory and the range estimation.\nNote that the traditional bearing and range measures are not good ones for a distributed system such as ours because of the lack of a single reference point.\nFigure 12: The user interface of the system showing the experimental setup.\nThe 10 sensor nodes are labeled by their ID and marked by dark circles.\nThe targets are black squares marked T-1 through T-5.\nThe long white arrows point to the shooter position estimated by each sensor.\nWhere an arrow is missing, the corresponding sensor did not have enough detections to measure the AoA of the muzzle blast, the shockwave, or both.\nThe thick black line and large circle indicate the estimated trajectory and the shooter position as estimated by fusing all available detections from the network.\nThis shot from the 100-meter line at target T-3 was localized almost perfectly by the sensor network.\nThe caliber and weapon were also identified correctly.\n6 out of 10 nodes were able to estimate the location alone.\nTheir bearing accuracy is within a degree, while the range is off by less than 10% in the worst case.\nThe localization rate characterizes the system's ability to successfully estimate the trajectory of shots.\nSince caliber estimation and weapon classification rely on the trajectory, non-localized shots are not classified 
either.\nThere were 7 shots out of 196 that were not localized.\nThe reason for missed shots is the trajectory ambiguity problem that occurs when the projectile passes on one side of all the sensors.\nIn this case, two significantly different trajectories can generate the same set of observations (see [8] and also Section 6.3).\nInstead of estimating which one is more likely or displaying both possibilities, we decided not to provide a trajectory at all.\nIt is better to give no answer beyond a shot alarm than to mislead the soldier.\nLocalization accuracy is broken down into trajectory accuracy and range estimation precision.\nThe angle of the estimated trajectory was better than 1 degree except for the 300 m range.\nSince the range should not affect trajectory estimation as long as the projectile passes over the network, we suspect that the slightly worse angle precision for 300 m is due to the hurried shots we witnessed the soldiers taking near the end of the day.\nThis is also indicated by another data point: the estimated trajectory distance from the actual targets has an average error of 1.3 m for 300 m shots, 0.75 m for 200 m shots and 0.6 m for all but 300 m shots.\nAs the distance between the targets and the sensor network was fixed, this number should not show a 2 \u00d7 improvement just because the shooter is closer.\nSince the angle of the trajectory itself does not characterize the overall error--there can also be a translation--Table 1 also gives the distance of the shooter from the estimated trajectory.\nThese indicate an error which is about 1-2% of the range.\nTo put this into perspective, a trajectory estimate for a 100 m shot will very likely go through or very near the window the shooter is located at.\nAgain, we believe that the disproportionately larger errors at 300 m are due to human errors in aiming.\nAs the ground truth was obtained by knowing the precise location of the shooter and the target, any inaccuracy in the actual trajectory 
directly adds to the perceived error of the system.\nWe call the estimation of the shooter's position on the calculated trajectory range estimation, for lack of a better term.\nThe range estimates are accurate to better than 5% from 50 m and 10% from 100 m. However, this degrades to 20% or worse for longer distances.\nWe did not have a facility to test the system at ranges beyond 100 m before the evaluation.\nDuring the evaluation, we ran into the problem of mistaking shockwave echoes for muzzle blasts.\nThese echoes reached the sensors before the real muzzle blast for long range shots only, since the projectile travels 2-3 \u00d7 faster than the speed of sound, so the time between the shockwave (and its possible echo from nearby objects) and the muzzle blast increases with increasing range.\nThis resulted in underestimating the range, since the system measured shorter times than the real ones.\nSince the evaluation, we have fine-tuned the muzzle blast detection algorithm to avoid this problem.\nTable 2: Weapon classification results.\nThe percentages are relative to the number of shots localized and not to all shots, as the classification algorithm needs to know the trajectory and the range.\nNote that the difference is small; there were 189 shots localized out of the total 196.\nThe caliber and weapon estimation accuracy rates are based on the 189 shots that were successfully localized.\nNote that there was a single shot that was falsely classified by the caliber estimator.\nThe 73% overall weapon classification accuracy does not seem impressive.\nBut if we break it down to the six different weapons tested, the picture changes dramatically, as shown in Table 2.\nFor four of the weapons (AK47, M16, M240 and M107), the classification rate is almost 100%.\nThere were only two shots out of approximately 140 that were missed.\nThe M4 and M249 proved to be too similar and they were mistaken for each other most of the time.\nOne possible explanation is that we had only a limited 
number of test shots taken with these weapons right before the evaluation and used the wrong deceleration approximation function.\nEither this or a similar mistake was made, since if we had simply used the opposite of the system's answer wherever one of these weapons was indicated, the accuracy would have improved threefold.\nIf we consider these two weapons a single weapon class, then the classification accuracy for it becomes 93%.\nNote that the AK47 and M240 have the same caliber (7.62 mm), just as the M16, M4 and M249 do (5.56 mm).\nThat is, the system is able to differentiate between weapons of the same caliber.\nWe are not aware of any system that classifies weapons this accurately.\n7.1 Single sensor performance\nAs was shown previously, a single sensor alone is able to localize the shooter if it can determine both the muzzle blast and the shockwave AoA, that is, it needs to measure the ToA of both on at least three acoustic channels.\nWhile shockwave detection is independent of the range (unless the projectile becomes subsonic), the likelihood of muzzle blast detection beyond 150 meters is not high enough to consistently obtain at least three detections per sensor node for AoA estimation.\nHence, we only evaluate the single sensor performance for the 104 shots that were taken from 50 and 100 m. 
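The RMS-based matching used for weapon classification in Section 6.6 can be sketched as follows. This is a minimal sketch under stated assumptions: the table layout and the (v0, k) linear deceleration coefficients are illustrative placeholders of our own, not the values fitted from the practice shots.

```python
import math

# Hypothetical per-weapon models: caliber class plus a linear deceleration
# approximation speed(range) = v0 - k * range, valid in the supersonic regime.
# The (v0, k) coefficients below are illustrative placeholders only.
WEAPON_MODELS = {
    "AK47": ("7.62", 715.0, 0.35),
    "M240": ("7.62", 905.0, 0.35),
    "M16":  ("5.56", 975.0, 0.45),
    "M4":   ("5.56", 880.0, 0.45),
    "M249": ("5.56", 915.0, 0.45),
    "M107": (".50",  850.0, 0.20),
}

def classify_weapon(range_speed_pairs, caliber_class, models=WEAPON_MODELS):
    """Choose the weapon whose deceleration line yields the smallest RMS error
    over the (range, speed) pairs estimated along the trajectory, restricted
    to candidates of the already-estimated caliber class."""
    best_name, best_rms = None, float("inf")
    for name, (caliber, v0, k) in models.items():
        if caliber != caliber_class:
            continue
        sq = [(speed - (v0 - k * rng)) ** 2 for rng, speed in range_speed_pairs]
        rms = math.sqrt(sum(sq) / len(sq))
        if rms < best_rms:
            best_name, best_rms = name, rms
    return best_name
```

The same argmin-of-RMS-error criterion drives caliber estimation in Section 6.5, with a power function of the miss distance taking the place of the linear deceleration model.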
Note that we use the same test data as in the previous section, but we evaluate each sensor individually.\nTable 3 summarizes the results broken down by the ten sensors utilized.\nSince this is no longer a distributed system, the results are given relative to the position of the given sensor, that is, a bearing and range estimate is provided.\nNote that many of the common error sources of the networked system do not play a role here.\nTime synchronization is not applicable.\nThe sensor's absolute location is irrelevant (as is the relative location of multiple sensors).\nThe sensor's orientation is still important though.\nThere are several disadvantages of the single sensor case compared to the networked system: there is no redundancy to compensate for other errors and to perform outlier rejection, the localization rate is markedly lower, and a single sensor alone is not able to estimate the caliber or classify the weapon.\nTable 3: Single sensor accuracy for 108 shots fired from 50 and 100 meters.\nLocalization rate refers to the percentage of shots the given sensor alone was able to localize.\nThe bearing and range values are average errors.\nThey characterize the accuracy of localization from the given sensor's perspective.\nThe data indicates that the performance of the sensors varied significantly, especially in the localization rate.\nOne factor has to be the location of the given sensor, including how far it was from the firing lines and how obstructed its view was.\nAlso, the sensors were hand-built prototypes utilizing nowhere near production-quality packaging\/mounting.\nIn light of these factors, the overall average bearing error of 0.9 degrees and range error of 5 m with a microphone spacing of less than 10 cm are excellent.\nWe believe that professional manufacturing and better microphones could easily achieve better performance than the best sensor in our experiment (> 60% localization rate and 3 m range error).\nInterestingly, the largest 
error in range was a huge 90 m, clearly due to some erroneous detection, yet the largest bearing error was less than 12 degrees, which is still a good indication for the soldier of where to look.\nThe overall localization rate over all single sensors was 42%, while for 50 m shots only, this jumped to 61%.\nNote that the firing range was prepared to simulate an urban area to some extent: there were a few single- and two-storey wooden structures built both in and around the sensor deployment area and the firing lines.\nHence, not all sensors had line-of-sight to all shooting positions.\nWe estimate that 10% of the sensors had an obstructed view of the shooter on average.\nHence, we can claim that a given sensor had about a 50% chance of localizing a shot within 130 m. (Since the sensor deployment area was 30 m deep, 100 m shots correspond to actual distances between 100 and 130 m.) Again, we emphasize that localization needs at least three muzzle blast and three shockwave detections out of a possible four of each per sensor.\nThe detection rate for single sensors--corresponding to at least one shockwave detection per sensor--was practically 100%.\nFigure 13: Histogram showing what fraction of the 104 shots taken from 50 and 100 meters were localized by at most how many individual sensors alone.\n13% of the shots were missed by every single sensor, i.e., none of them had both muzzle blast and shockwave AoA detections.\nNote that almost all of these shots were still accurately localized by the networked system, i.e., the sensor fusion using all available observations in the sensor network.\nIt would be misleading to interpret these results as the system missing half the shots.\nAs soldiers never operate alone and the sensor node is cheap enough that every soldier could be equipped with one, we also need to look at the overall detection rates for every shot.\nFigure 13 shows the histogram of the percentage of shots vs. 
the number of individual sensors that localized it.\n13% of shots were not localized by any sensor alone, but 87% were localized by at least one sensor out of ten.\n7.2 Error sources\nIn this section, we analyze the most significant sources of error that affect the performance of the networked shooter localization and weapon classification system.\nIn order to correlate the distributed observations of the acoustic events, the nodes need to have a common time and space reference.\nHence, errors in the time synchronization, node localization and node orientation all degrade the overall accuracy of the system.\nOur time synchronization approach yields errors significantly less than 100 microseconds.\nAs sound travels about 3 cm in that time, time synchronization errors have a negligible effect on the system.\nOn the other hand, node location and orientation can have a direct effect on the overall system performance.\nNotice that to analyze this, we do not have to resort to simulation; instead, we can utilize the real test data gathered at Aberdeen.\nBut instead of using the real sensor locations (known very accurately) and the measured and calibrated, almost perfect node orientations, we can add error terms to them and run the sensor fusion.\nThis exactly replicates how the system would have performed during the test had the locations and orientations been known imprecisely.\nAnother aspect of the system performance that can be evaluated this way is the effect of the number of available sensors.\nInstead of using all ten sensors in the data fusion, we can pick any subset of the nodes to see how the accuracy degrades as we decrease the number of nodes.\nThe following experiment was carried out.\nThe number of sensors was varied from 2 to 10 in increments of 2.\nEach run picked the sensors randomly using a uniform distribution.\nIn each run, each node was randomly \"moved\" to a new location within a circle around its true position with a radius 
determined by a zero-mean Gaussian distribution.\nFinally, the node orientations were perturbed using a zero-mean Gaussian distribution.\nEach combination of parameters was generated 100 times and utilized for all 196 shots.\nThe results are summarized in Figure 14.\nThere is one 3D bar chart for each of the experiment sets with the given fixed number of sensors.\nThe x-axis shows the node location error, that is, the standard deviation of the corresponding Gaussian distribution, which was varied between 0 and 6 meters.\nThe y-axis shows the standard deviation of the node orientation error, which was varied between 0 and 6 degrees.\nThe z-axis is the resulting trajectory azimuth error.\nNote that the elevation angles showed somewhat larger errors than the azimuth.\nSince all the sensors were in approximately a horizontal plane and only a few shooter positions were out of the same plane, and only by 2 m or so, the test was not sufficient to evaluate this aspect of the system.\nThere are many interesting observations one can make by analyzing these charts.\nNode location errors in this range have a small effect on accuracy.\nNode orientation errors, on the other hand, noticeably degrade the performance.\nStill, the largest errors in this experiment, 3.5 degrees for 6 sensors and 5 degrees for 2 sensors, are very good.\nNote that as the location and orientation errors increase and the number of sensors decreases, the most significantly affected performance metric is the localization rate.\nSee Table 4 for a summary.\nSuccessful localization goes down from almost 100% to 50% when we go from 10 sensors to 2, even without additional errors.\nThis is primarily caused by geometry: for a successful localization, the bullet needs to pass over the sensor network, that is, at least one sensor should be on the other side of the trajectory from the rest of the nodes.\n(This is a simplification for illustrative purposes.\nIf all the sensors and the trajectory are not coplanar, 
localization may be successful even if the projectile passes on one side of the network.\nSee Section 6.3.)\nAs the number of sensors decreased in the experiment by randomly selecting a subset, the probability of trajectories abiding by this rule decreased.\nThis also means that even if there are many sensors (i.e., soldiers), but all of them are right next to each other, the localization rate will suffer.\nFigure 14: The effect of node localization and orientation errors on azimuth accuracy with 2, 4, 6 and 8 nodes.\nNote that the chart for 10 nodes is almost identical to the 8-node case; hence, it is omitted.\nHowever, when the sensor fusion does provide a result, it is still accurate even with few available sensors and relatively large individual errors.\nA few consistent observations lead to good accuracy, as the inconsistent ones are discarded by the algorithm.\nThis is also supported by the observation that for the cases with the higher number of sensors (8 or 10), the localization rate is hardly affected by even large errors.\nTable 4: Localization rate as a function of the number of sensors used and the sensor node location and orientation errors.\nOne of the most significant observations on Figure 14 and Table 4 is that there is hardly any difference in the data for 6, 8 and 10 sensors.\nThis means that there is little advantage in adding more nodes beyond 6 sensors as far as the accuracy is concerned.\nThe speed of sound depends on the ambient temperature.\nThe current prototype treats it as a constant that is typically set before a test.\nIt would be straightforward to employ a temperature sensor to update the value of the speed of sound periodically during operation.\nNote also that wind may adversely affect the accuracy of the system.\nThe sensor fusion, however, could incorporate wind speed into its calculations.\nIt would be more complicated than temperature compensation, but could be done.\nOther practical issues also need to be looked at before a 
real-world deployment.\nSilencers reduce the muzzle blast energy and hence the effective range at which the system can detect it.\nHowever, silencers do not affect the shockwave, and the system would still detect the trajectory and caliber accurately.\nThe range and weapon type could not be estimated without muzzle blast detections.\nSubsonic weapons do not produce a shockwave.\nHowever, this is not of great significance, since they have shorter range, lower accuracy and much less lethality.\nHence, their use is not widespread and they pose less danger in any case.\nAnother issue is the type of ammunition used.\nIrregular armies may use substandard, even hand-manufactured bullets.\nThis affects the muzzle velocity of the weapon.\nFor weapon classification to work accurately, the system would need to be calibrated with the typical ammunition used by the given adversary.\n8.\nRELATED WORK\nAcoustic detection and recognition has been under research since the early fifties.\nThe area has a close relevance to the topic of supersonic flow mechanics [20].\nFansler analyzed the complex near-field pressure waves that occur within a foot of the muzzle blast.\nFansler's work gives a good idea of the ideal muzzle blast pressure wave without contamination from echoes or propagation effects [4].\nExperiments at greater distances from the muzzle were conducted by Stoughton [18].\nMeasurements of ballistic shockwaves were made using calibrated pressure transducers at known locations, with measured bullet speeds and miss distances of 3-55 meters for 5.56 mm and 7.62 mm projectiles.\nResults indicate that ground interaction becomes a problem for miss distances of 30 meters or larger.\nAnother area of research is the signal processing of gunfire acoustics.\nThe focus is on the robust detection and length estimation of small caliber acoustic shockwaves and muzzle blasts.\nPossible techniques for classifying signals as either shockwaves or muzzle blasts include the short-time Fourier 
Transform (STFT), the Smoothed Pseudo Wigner-Ville distribution (SPWVD), and the discrete wavelet transformation (DWT).\nJoint time-frequency (JTF) spectrograms are used to analyze the typical separation of the shockwave and muzzle blast transients in both time and frequency.\nMays concludes that the DWT is the best method for classifying signals as either shockwaves or muzzle blasts because it works well and is less expensive to compute than the SPWVD [10].\nThe edges of the shockwave are typically well defined and the shockwave length is directly related to the bullet characteristics.\nA paper by Sadler [14] compares two shockwave edge detection methods: a simple gradient-based detector, and a multi-scale wavelet detector.\nIt also demonstrates how the length of the shockwave, as determined by the edge detectors, can be used along with Whitham's equations [20] to estimate the caliber of a projectile.\nNote that the available computational performance on the sensor nodes, the limited wireless bandwidth and the real-time requirements render these approaches infeasible on our platform.\nA related topic is the research and development of experimental and prototype shooter location systems.\nResearchers at BBN have developed the Bullet Ears system [3], which can be installed in a fixed position or worn by soldiers.\nThe fixed system has tetrahedron-shaped microphone arrays with 1.5 meter spacing.\nThe overall system consists of two to three of these arrays spaced 20 to 100 meters from each other.\nThe soldier-worn system has 12 microphones as well as a GPS antenna and orientation sensors mounted on a helmet.\nThere is a low-speed RF connection from the helmet to the processing body.\nAn extensive test has been conducted to measure the performance of both types of systems.\nThe fixed system's performance was an order of magnitude better in the angle calculations, while their range performances were matched.\nThe angle accuracy of the fixed system was dominantly 
less than one degree, while it was around five degrees for the helmet-mounted one.\nThe range accuracy was around 5 percent for both systems.\nThe problem with this and similar centralized systems is the need for the one or handful of microphone arrays to be in line of sight of the shooter.\nA sensor-network-based solution has the advantage of widely distributed sensing for better coverage, multipath effect compensation and multiple simultaneous shot resolution [8].\nThis is especially important for operation in acoustically reverberant urban areas.\nNote that BBN's current vehicle-mounted system called BOOMERANG, a modified version of Bullet Ears, is currently used in Iraq [1].\nThe company ShotSpotter specializes in law enforcement systems that report the location of gunfire to police within seconds.\nThe goal of the system is significantly different from that of military systems.\nShotSpotter reports 25 m typical accuracy, which is more than enough for police to respond.\nThey are also manufacturing experimental soldier-wearable and UAV-mounted systems for military use [16], but no specifications or evaluation results are publicly available.\n9.\nCONCLUSIONS\nThe main contribution of this work is twofold.\nFirst, the performance of the overall distributed networked system is excellent.\nMost noteworthy are the trajectory accuracy of one degree, the correct caliber estimation rate of well over 90% and the close to 100% weapon classification rate for 4 of the 6 weapons tested.\nThe system proved to be very robust when increasing the node location and orientation errors and decreasing the number of available sensors all the way down to a couple.\nThe key factor behind this is the sensor fusion algorithm's ability to reject erroneous measurements.\nIt is also worth mentioning that the results presented here correspond to the first and only test of the system beyond 100 m and with six different weapons.\nWe believe that with the lessons learned in the test, a 
subsequent field experiment could have shown significantly improved results, especially in range estimation beyond 100 m and in weapon classification for the remaining two weapons that were mistaken for each other most of the time during the test.\nSecond, the performance of the system when used in standalone mode, that is, when single sensors alone provided localization, was also very good.\nWhile the overall localization rate of 42% per sensor for shots up to 130 m could be improved, the bearing accuracy of less than a degree and the average 5% range error are remarkable for the handmade prototypes of the low-cost nodes.\nNote that 87% of the shots were successfully localized by at least one of the ten sensors utilized in standalone mode.\nWe believe that the technology is mature enough that the next revision of the system could be a commercial one.\nHowever, important aspects of the system would still need to be worked on.\nWe have not addressed power management yet.\nA current node runs on 4 AA batteries for about 12 hours of continuous operation.\nA deployable version of the sensor node would need to be asleep during normal operation and only wake up when an interesting event occurs.\nAn analog trigger circuit could solve this problem; however, the system would miss the first shot.\nInstead, the acoustic channels would need to be sampled and stored in a circular buffer.\nThe rest of the board could be turned off.\nWhen a trigger wakes up the board, the acoustic data would be immediately available.\nExperiments with a previous generation sensor board indicated that this could provide a 10x increase in battery life.\nOther outstanding issues include weatherproof packaging and ruggedization, as well as integration with current military infrastructure.","keyphrases":["weapon classif","internod commun","sensorboard","self orient","data fusion","trajectori","rang","calib","weapon type","calib estim accuraci","calib estim","wireless sensor network-base mobil 
countersnip system","helmetmount microphon arrai","1degre trajectori precis","sensor network","acoust sourc local"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M","M","M","R","M"]} {"id":"J-41","title":"An Analysis of Alternative Slot Auction Designs for Sponsored Search","abstract":"Billions of dollars are spent each year on sponsored search, a form of advertising where merchants pay for placement alongside web search results. Slots for ad listings are allocated via an auction-style mechanism where the higher a merchant bids, the more likely his ad is to appear above other ads on the page. In this paper we analyze the incentive, efficiency, and revenue properties of two slot auction designs: rank by bid (RBB) and rank by revenue (RBR), which correspond to stylized versions of the mechanisms currently used by Yahoo! and Google, respectively. We also consider first- and second-price payment rules together with each of these allocation rules, as both have been used historically. We consider both the short-run incomplete information setting and the long-run complete information setting. With incomplete information, neither RBB nor RBR are truthful with either first or second pricing. We find that the informational requirements of RBB are much weaker than those of RBR, but that RBR is efficient whereas RBB is not. We also show that no revenue ranking of RBB and RBR is possible given an arbitrary distribution over bidder values and relevance. With complete information, we find that no equilibrium exists with first pricing using either RBB or RBR. 
We show that there typically exists a multitude of equilibria with second pricing, and we bound the divergence of (economic) value in such equilibria from the value obtained assuming all merchants bid truthfully.","lvl-1":"An Analysis of Alternative Slot Auction Designs for Sponsored Search S\u00e9bastien Lahaie \u2217 Division of Engineering and Applied Sciences Harvard University, Cambridge, MA 02138 slahaie@eecs.harvard.edu ABSTRACT Billions of dollars are spent each year on sponsored search, a form of advertising where merchants pay for placement alongside web search results.\nSlots for ad listings are allocated via an auction-style mechanism where the higher a merchant bids, the more likely his ad is to appear above other ads on the page.\nIn this paper we analyze the incentive, efficiency, and revenue properties of two slot auction designs: rank by bid (RBB) and rank by revenue (RBR), which correspond to stylized versions of the mechanisms currently used by Yahoo! and Google, respectively.\nWe also consider first- and second-price payment rules together with each of these allocation rules, as both have been used historically.\nWe consider both the short-run incomplete information setting and the long-run complete information setting.\nWith incomplete information, neither RBB nor RBR are truthful with either first or second pricing.\nWe find that the informational requirements of RBB are much weaker than those of RBR, but that RBR is efficient whereas RBB is not.\nWe also show that no revenue ranking of RBB and RBR is possible given an arbitrary distribution over bidder values and relevance.\nWith complete information, we find that no equilibrium exists with first pricing using either RBB or RBR.\nWe show that there typically exists a multitude of equilibria with second pricing, and we bound the divergence of (economic) value in such equilibria from the value obtained assuming all merchants bid truthfully.\nCategories and Subject Descriptors J.4 [Computer 
Applications]: Social and Behavioral Sciences-Economics General Terms Economics, Theory 1.\nINTRODUCTION Today, Internet giants Google and Yahoo! boast a combined market capitalization of over $150 billion, largely on the strength of sponsored search, the fastest growing component of a resurgent online advertising industry.\nPricewaterhouseCoopers estimates that 2004 industry-wide sponsored search revenues were $3.9 billion, or 40% of total Internet advertising revenues.1 Industry watchers expect 2005 revenues to reach or exceed $7 billion.2 Roughly 80% of Google's estimated $4 billion in 2005 revenue and roughly 45% of Yahoo!'s estimated $3.7 billion in 2005 revenue will likely be attributable to sponsored search.3 A number of other companies-including LookSmart, FindWhat, InterActiveCorp (Ask Jeeves), and eBay (Shopping.com)-earn hundreds of millions of dollars of sponsored search revenue annually.\nSponsored search is a form of advertising where merchants pay to appear alongside web search results.\nFor example, when a user searches for used honda accord san diego in a web search engine, a variety of commercial entities (San Diego car dealers, Honda Corp, automobile information portals, classified ad aggregators, eBay, etc.)\nmay bid to have their listings featured alongside the standard algorithmic search listings.\nAdvertisers bid for placement on the page in an auction-style format where the higher they bid the more likely their listing will appear above other ads on the page.\nBy convention, sponsored search advertisers generally pay per click, meaning that they pay only when a user clicks on their ad, and do not pay if their ad is displayed but not clicked.\nThough many people claim to systematically ignore sponsored search ads, Majestic Research reports that 1 www.iab.net\/resources\/adrevenue\/pdf\/IAB PwC 2004full.pdf 2 battellemedia.com\/archives\/002032.php 3 These are rough back of the envelope estimates.\nGoogle and Yahoo! 
2005 revenue estimates were obtained from Yahoo! Finance.\nWe assumed $7 billion in 2005 industry-wide sponsored search revenues.\nWe used Nielsen\/NetRatings estimates of search engine market share in the US, the most monetized market: wired-vig.wired.com\/news\/technology\/0,1282,69291,00.html Using comScore's international search engine market share estimates would yield different estimates: www.comscore.com\/press\/release.asp?press=622 as many as 17% of Google searches result in a paid click, and that Google earns roughly nine cents on average for every search query they process.4 Usually, sponsored search results appear in a separate section of the page designated as sponsored above or to the right of the algorithmic results.\nSponsored search results are displayed in a format similar to algorithmic results: as a list of items each containing a title, a text description, and a hyperlink to a corresponding web page.\nWe call each position in the list a slot.\nGenerally, advertisements that appear in a higher ranked slot (higher on the page) garner more attention and more clicks from users.\nThus, all else being equal, merchants generally prefer higher ranked slots to lower ranked slots.\nMerchants bid for placement next to particular search queries; for example, Orbitz and Travelocity may bid for las vegas hotel while Dell and HP bid for laptop computer.\nAs mentioned, bids are expressed as a maximum willingness to pay per click.\nFor example, a forty-cent bid by HostRocket for web hosting means HostRocket is willing to pay up to forty cents every time a user clicks on their ad.5 The auctioneer (the search engine6 ) evaluates the bids and allocates slots to advertisers.\nIn principle, the allocation decision can be altered with each new incoming search query, so in effect new auctions clear continuously over time as search queries arrive.\nMany allocation rules are plausible.\nIn this paper, we investigate two allocation rules, roughly corresponding to 
the two allocation rules used by Yahoo! and Google.\nThe rank by bid (RBB) allocation assigns slots in order of bids, with higher ranked slots going to higher bidders.\nThe rank by revenue (RBR) allocation assigns slots in order of the product of bid times expected relevance, where relevance is the proportion of users that click on the merchant's ad after viewing it.\nIn our model, we assume that an ad's expected relevance is known to the auctioneer and the advertiser (but not necessarily to other advertisers), and that clickthrough rate decays monotonically with lower ranked slots.\nIn practice, the expected clickthrough rate depends on a number of factors, including the position on the page, the ad text (which in turn depends on the identity of the bidder), the nature and intent of the user, and the context of other ads and algorithmic results on the page, and must be learned over time by both the auctioneer and the bidder [13].\nAs of this writing, to a rough first-order approximation, Yahoo! employs a RBB allocation and Google employs a RBR allocation, though numerous caveats apply in both cases when it comes to the vagaries of real-world implementations.7 Even when examining a one-shot version of a slot auction, the mechanism differs from a standard multi-item auction in subtle ways.\nFirst, a single bid per merchant is used to allocate multiple non-identical slots.\nSecond, the bid is communicated not as a direct preference over slots, but as a preference for clicks that depend stochastically on slot allocation.\n4 battellemedia.com\/archives\/001102.php 5 Usually advertisers also set daily or monthly budget caps; in this paper we do not model budget constraints.\n6 In the sponsored search industry, the auctioneer and search engine are not always the same entity.\nFor example Google runs the sponsored search ads for AOL web search, with revenue being shared.\nSimilarly, Yahoo! currently runs the sponsored search ads for MSN web search, though Microsoft will begin independent operations soon.\n7 Here are two among many exceptions to the Yahoo! = RBB and Google = RBR assertion: (1) Yahoo! excludes ads deemed insufficiently relevant either by a human editor or due to poor historical click rate; (2) Google sets differing reserve prices depending on Google's estimate of ad quality.\nWe investigate a number of economic properties of RBB and RBR slot auctions.\nWe consider the short-run incomplete information case in Section 3, adapting and extending standard analyses of single-item auctions.\nIn Section 4 we turn to the long-run complete information case; our characterization results here draw on techniques from linear programming.\nThroughout, important observations are highlighted as claims supported by examples.\nOur contributions are as follows: • We show that with multiple slots, bidders do not reveal their true values with either RBB or RBR, and with either first- or second-pricing.\n• With incomplete information, we find that the informational requirements of playing the equilibrium bid are much weaker for RBB than for RBR, because bidders need not know any information about each others' relevance (or even their own) with RBB.\n• With incomplete information, we prove that RBR is efficient but that RBB is not.\n• We show via a simple example that no general revenue ranking of RBB and RBR is possible.\n• We prove that in a complete-information setting, first-price slot auctions have no pure strategy Nash equilibrium, but that there always exists a pure-strategy equilibrium with second pricing.\n• We provide a constant-factor bound on the deviation from efficiency that can occur in the equilibrium of a second-price slot auction.\nIn Section 2 we specify our model of bidders and the various slot auction formats.\nIn Section 3.1 we study the incentive properties of each 
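The two allocation rules can be sketched in a few lines of Python (an illustrative sketch only; the function names, bids, and relevance numbers are ours, not from the paper):

```python
# Illustrative sketch of the two allocation rules (RBB vs. RBR).
# rank_by_bid orders bidders by bid alone; rank_by_revenue by bid * relevance.

def rank_by_bid(bids):
    """Bidder indices in slot order (slot 1 first) under RBB."""
    return sorted(range(len(bids)), key=lambda i: -bids[i])

def rank_by_revenue(bids, relevance):
    """Bidder indices in slot order under RBR, ranking by bid * relevance."""
    return sorted(range(len(bids)), key=lambda i: -bids[i] * relevance[i])

bids      = [0.40, 0.50, 0.30]  # max willingness to pay per click
relevance = [0.10, 0.02, 0.08]  # clickthrough probability if the ad is viewed

print(rank_by_bid(bids))                 # [1, 0, 2]
print(rank_by_revenue(bids, relevance))  # revenues 0.040, 0.010, 0.024 -> [0, 2, 1]
```

Bidder 1's high bid wins the top slot under RBB, but its low relevance drops it to the bottom slot under RBR.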
format, asking in which cases agents would bid truthfully.\nThere is possible confusion here because the second-price design for slot auctions is reminiscent of the Vickrey auction for a single item; we note that for slot auctions the Vickrey mechanism is in fact very different from the second-price mechanism, and so they have different incentive properties.8 In Section 3.2 we derive the Bayes-Nash equilibrium bids for the various auction formats.\nThis is useful for the efficiency and revenue results in later sections.\nIt should become clear in this section that slot auctions in our model are a straightforward generalization of single-item auctions.\nSections 3.3 and 3.4 address questions of efficiency and revenue under incomplete information, respectively.\nIn Section 4.1 we determine whether pure-strategy equilibria exist for the various auction formats, under complete information.\nIn Section 4.2 we derive bounds on the deviation from efficiency in the pure-strategy equilibria of second-price slot auctions.\nOur approach is positive rather than normative.\nWe aim to clarify the incentive, efficiency, and revenue properties of two slot auction designs currently in use, under settings of incomplete and complete information.\n8 Other authors have also made this observation [5, 6].\nWe do not attempt to derive the optimal mechanism for a slot auction.\nRelated work.\nFeng et al. 
[7] compare the revenue performance of various ranking mechanisms for slot auctions in a model with incomplete information, much as we do in Section 3.4, but they obtain their results via simulations whereas we perform an equilibrium analysis.\nLiu and Chen [12] study properties of slot auctions under incomplete information.\nTheir setting is essentially the same as ours, except they restrict their attention to a model with a single slot and a binary type for bidder relevance (high or low).\nThey find that RBR is efficient, but that no general revenue ranking of RBB and RBR is possible, which agrees with our results.\nThey also take a design approach and show how the auctioneer should assign relevance scores to optimize its revenue.\nEdelman et al. [6] model the slot auction problem both as a static game of complete information and a dynamic game of incomplete information.\nThey study the locally envy-free equilibria of the static game of complete information; this is a solution concept motivated by certain bidding behaviors that arise due to the presence of budget constraints.\nThey do not view slot auctions as static games of incomplete information as we do, but do study them as dynamic games of incomplete information and derive results on the uniqueness and revenue properties of the resulting equilibria.\nThey also provide a nice description of the evolution of the market for sponsored search.\nVarian [18] also studies slot auctions under a setting of complete information.\nHe focuses on symmetric equilibria, which are a refinement of Nash equilibria appropriate for slot auctions.\nHe provides bounds on the revenue obtained in equilibrium.\nHe also gives bounds that can be used to infer bidder values given their bids, and performs some empirical analysis using these results.\nIn contrast, we focus instead on efficiency and provide bounds on the deviation from efficiency in complete-information equilibria.\n2.\nPRELIMINARIES We focus on a slot auction for a single 
keyword.\nIn a setting of incomplete information, a bidder knows only distributions over others' private information (value per click and relevance).\nWith complete information, a bidder knows others' private information, and so does not need to rely on distributions to strategize.\nWe first describe the model for the case with incomplete information, and drop the distributional information from the model when we come to the complete-information case in Section 4.\n2.1 The Model There is a fixed number K of slots to be allocated among N bidders.\nWe assume without loss of generality that K ≤ N, since superfluous slots can remain blank.\nBidder i assigns a value of Xi to each click received on its advertisement, regardless of this advertisement's rank.9 The probability that i's advertisement will be clicked if viewed is Ai ∈ [0, 1].\nWe refer to Ai as bidder i's relevance.\nWe refer to Ri = AiXi as bidder i's revenue.\nThe Xi, Ai, and Ri are random variables and we denote their realizations by xi, αi, and ri respectively.\n9 Indeed Kitts et al. [10] find that in their sample of actual click data, the correlation between rank and conversion rate is not statistically significant.\nHowever, for the purposes of our model it is also important that bidders believe that conversion rate does not vary with rank.\nThe probability that an advertisement will be viewed if placed in slot j is γj ∈ [0, 1].\nWe assume γ1 > γ2 > ... > γK.\nHence bidder i's advertisement will have a clickthrough rate of γjαi if placed in slot j. Of course, an advertisement does not receive any clicks if it is not allocated a slot.\nEach bidder's value and relevance pair (Xi, Ai) is independently and identically distributed on [0, x̄] × [0, 1] according to a continuous density function f that has full support on its domain.\nThe density f and slot probabilities γ1, ... 
, γK are common knowledge.\nOnly bidder i knows the realization xi of its value per click Xi.\nBoth bidder i and the seller know the realization αi of Ai, but this realization remains unobservable to the other bidders.\nWe assume that bidders have quasi-linear utility functions.\nThat is, the expected utility to bidder i of obtaining the slot of rank j at a price of b per click is u_i(j, b) = γ_j α_i (x_i − b).\nIf the advertising firms bidding in the slot auction are risk-neutral and have ample liquidity, quasi-linearity is a reasonable assumption.\nThe assumptions of independence, symmetry, and risk-neutrality made above are all quite standard in single-item auction theory [11, 19].\nThe assumption that clickthrough rate decays monotonically with lower slots-by the same factors for each agent-is unique to the slot auction problem.\nWe view it as a main contribution of our work to show that this assumption allows for tractable analysis of the slot auction problem using standard tools from single-item auction theory.\nIt also allows for interesting results in the complete information case.\nA common model of decaying clickthrough rate is the exponential decay model, where γ_k = 1\/δ^{k−1} with decay δ > 1.\nFeng et al. 
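Under the exponential decay model the slot view probabilities are immediate to compute; a small sketch (the variable names and the choice of K are ours; δ = 1.428 is the fitted value reported from Feng et al. [7]):

```python
# Slot view probabilities under exponential decay: gamma_k = 1 / delta**(k - 1).
delta = 1.428   # decay factor fitted to clickthrough data (Feng et al. [7])
K = 5           # number of slots (arbitrary choice for illustration)
gammas = [1 / delta ** (k - 1) for k in range(1, K + 1)]
print([round(g, 3) for g in gammas])
# gamma_1 = 1 and the sequence is strictly decreasing, as the model requires
```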
[7] state that their actual clickthrough data is fitted extremely well by an exponential decay model with δ = 1.428.\nOur model lacks budget constraints, which are an important feature of real slot auctions.\nWith budget constraints keyword auctions cannot be considered independently of one another, because the budget must be allocated across multiple keywords-a single advertiser typically bids on multiple keywords relevant to his business.\nIntroducing this element into the model is an important next step for future work.10\n10 Models with budget constraints have begun to appear in this research area.\nAbrams [1] and Borgs et al. [3] design multi-unit auctions for budget-constrained bidders, which can be interpreted as slot auctions, with a focus on revenue optimization and truthfulness.\nMehta et al. [14] address the problem of matching user queries to budget-constrained advertisers so as to maximize revenue.\n2.2 Auction Formats In a slot auction a bidder provides to the seller a declared value per click x̃i(xi, αi) which depends on his true value and relevance.\nWe often denote this declared value (bid) by x̃i for short.\nSince a bidder's relevance αi is observable to the seller, the bidder cannot misrepresent it.\nWe denote the kth highest of the N declared values by x̃(k), and the kth highest of the N declared revenues by r̃(k), where the declared revenue of bidder i is r̃i = αi x̃i.\nWe consider two types of allocation rules, rank by bid (RBB) and rank by revenue (RBR): RBB.\nSlot k goes to bidder i if and only if x̃i = x̃(k).\nRBR.\nSlot k goes to bidder i if and only if r̃i = r̃(k).\nWe will commonly represent an allocation by a one-to-one function σ : [K] → [N], where [n] is the set of integers {1, 2, ... 
, n}.\nHence slot k goes to bidder σ(k).\nWe also consider two different types of payment rules.\nNote that no matter what the payment rule, a bidder that is not allocated a slot will pay 0 since his listing cannot receive any clicks.\nFirst-price.\nThe bidder allocated slot k, namely σ(k), pays x̃σ(k) per click under both the RBB and RBR allocation rules.\nSecond-price.\nIf k < N, bidder σ(k) pays x̃σ(k+1) per click under the RBB rule, and pays r̃σ(k+1)\/ασ(k) per click under the RBR rule.\nIf k = N, bidder σ(k) pays 0 per click.11 Intuitively, a second-price payment rule sets a bidder's payment to the lowest bid it could have declared while maintaining the same ranking, given the allocation rule used.\nOverture introduced the first slot auction design in 1997, using a first-price RBB scheme.\nGoogle then followed in 2000 with a second-price RBR scheme.\nIn 2002, Overture (at this point acquired by Yahoo!) then switched to second pricing but still allocates using RBB.\nOne possible reason for the switch is given in Section 4.\nWe assume that ties are broken as follows in the event that two agents make the exact same bid or declare the same revenue.\nThere is a permutation of the agents κ : [N] → [N] that is fixed beforehand.\nIf the bids of agents i and j are tied, then agent i obtains a higher slot if and only if κ(i) < κ(j).\nThis is consistent with the practice in real slot auctions where ties are broken by the bidders' order of arrival.\n3.\nINCOMPLETE INFORMATION 3.1 Incentives It should be clear that with a first-price payment rule, truthful bidding is neither a dominant strategy nor an ex post Nash equilibrium using either RBB or RBR, because this guarantees a payoff of 0.\nThere is always an incentive to shade true values with first pricing.\nThe second-price payment rule is reminiscent of the second-price (Vickrey) auction used for selling a single item, and in a Vickrey 
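The second-price rule just defined can be made concrete with a short sketch (ours, not the paper's; it assumes K = N so the bottom bidder pays nothing, and ties break by index, matching a fixed permutation κ):

```python
# Sketch of second pricing under the RBB and RBR allocation rules (K = N).
# sigma[k] is the bidder occupying slot k (0-indexed slots here).

def second_price_payments(bids, relevance, rule):
    """Allocation and per-click payments under second pricing."""
    n = len(bids)
    score = [b * a for b, a in zip(bids, relevance)] if rule == "RBR" else list(bids)
    sigma = sorted(range(n), key=lambda i: -score[i])  # stable sort = index tie-break
    pay = []
    for k, i in enumerate(sigma):
        if k == n - 1:
            pay.append(0.0)                                 # last slot is free
        elif rule == "RBB":
            pay.append(bids[sigma[k + 1]])                  # next-highest declared value
        else:
            pay.append(score[sigma[k + 1]] / relevance[i])  # next revenue / own relevance
    return sigma, pay

sigma, pay = second_price_payments([6.0, 4.0], [0.5, 1.0], "RBR")
print(sigma, pay)  # declared revenues 3.0 and 4.0: bidder 1 first, pays 3.0 per click
```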
auction it is a dominant strategy for a bidder to reveal his true value for the item [19].\nHowever, using a second-price rule in a slot auction together with either allocation rule above does not yield an incentive-compatible mechanism, either in dominant strategies or ex post Nash equilibrium.12 With a second-price rule there is no incentive for a bidder to bid higher than his true value per click using either RBB or RBR: this either leads to no change in the outcome, or a situation in which he will have to pay more than his value per click for each click received, resulting in a negative payoff.13 However, with either allocation rule there may be an incentive to shade true values with second pricing.\n11 We are effectively assuming a reserve price of zero, but in practice search engines charge a non-zero reserve price per click.\n12 Unless of course there is only a single slot available, since this is the single-item case.\nWith a single slot both RBB and RBR with a second-price payment rule are dominant-strategy incentive-compatible.\nClaim 1.\nWith second pricing and K ≥ 2, truthful bidding is not a dominant strategy nor an ex post Nash equilibrium for either RBB or RBR.\nExample.\nThere are two agents and two slots.\nThe agents have relevance α1 = α2 = 1, whereas γ1 = 1 and γ2 = 1\/2.\nAgent 1 has a value of x1 = 6 per click, and agent 2 has a value of x2 = 4 per click.\nLet us first consider the RBB rule.\nSuppose agent 2 bids truthfully.\nIf agent 1 also bids truthfully, he wins the first slot and obtains a payoff of 2.\nHowever, if he shades his bid down below 4, he obtains the second slot at a cost of 0 per click yielding a payoff of 3.\nSince the agents have equal relevance, the exact same situation holds with the RBR rule.\nHence truthful bidding is not a dominant strategy in either format, and neither is it an ex post Nash equilibrium.\nTo find payments that make RBB and RBR dominant-strategy incentive-compatible, we can apply 
Holmström's lemma [9] (see also chapter 3 in Milgrom [15]).\nUnder the restriction that a bidder with value 0 per click does not pay anything (even if he obtains a slot, which can occur if there are as many slots as bidders), this lemma implies that there is a unique payment rule that achieves dominant-strategy incentive compatibility for either allocation rule.\nFor RBB, the bidder allocated slot k is charged per click $\sum_{i=k+1}^{K} (\gamma_{i-1} - \gamma_i)\,\tilde{x}^{(i)} + \gamma_K \tilde{x}^{(K+1)}$ (1).\nNote that if K = N, x̃(K+1) = 0 since there is no (K+1)th bidder.\nFor RBR, the bidder allocated slot k is charged per click $\frac{1}{\alpha_{\sigma(k)}} \left( \sum_{i=k+1}^{K} (\gamma_{i-1} - \gamma_i)\,\tilde{r}^{(i)} + \gamma_K \tilde{r}^{(K+1)} \right)$ (2).\nUsing payment rule (2) and RBR, the auctioneer is aware of the true revenues of the bidders (since they reveal their values truthfully), and hence ranks them according to their true revenues.\nWe show in Section 3.3 that this allocation is in fact efficient.\nSince the VCG mechanism is the unique mechanism that is efficient, truthful, and ensures bidders with value 0 pay nothing (by the Green-Laffont theorem [8]), the RBR rule and payment scheme (2) constitute exactly the VCG mechanism.\nIn the VCG mechanism an agent pays the externality he imposes on others.\nTo understand payment (2) in this sense, note that the first term is the added utility (due to an increased clickthrough rate) agents in slots k + 1 to K would receive if they were all to move up a slot; the last term is the utility that the agent with the (K+1)st revenue would receive by obtaining the last slot as opposed to nothing.\nThe leading coefficient simply reduces the agent's expected payment to a payment per click.\n13 In a dynamic setting with second pricing, there may be an incentive to bid higher than one's true value in order to exhaust competitors' budgets.\nThis phenomenon is commonly called bid jamming or antisocial bidding [4].\n3.2 Equilibrium Analysis To understand 
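Payment formulas (1) and (2) can be transcribed directly (a sketch under our naming; x_sorted and r_sorted are the declared values/revenues in decreasing order, padded with a trailing 0 for the (K+1)th entry when no such bidder exists):

```python
# Direct transcription of the truthful per-click payments (1) and (2).
# k is the 1-indexed slot; gammas has length K.

def rbb_truthful_payment(k, gammas, x_sorted):
    """Per-click charge for the slot-k bidder under RBB, eq. (1)."""
    K = len(gammas)
    ext = sum((gammas[i - 2] - gammas[i - 1]) * x_sorted[i - 1]
              for i in range(k + 1, K + 1))
    return ext + gammas[K - 1] * x_sorted[K]

def rbr_truthful_payment(k, gammas, r_sorted, alpha_k):
    """Per-click charge for the slot-k bidder under RBR, eq. (2)."""
    K = len(gammas)
    ext = sum((gammas[i - 2] - gammas[i - 1]) * r_sorted[i - 1]
              for i in range(k + 1, K + 1))
    return (ext + gammas[K - 1] * r_sorted[K]) / alpha_k

# Claim 1 example: gammas = (1, 1/2), declared values 6 and 4, no 3rd bidder.
print(rbb_truthful_payment(1, [1.0, 0.5], [6.0, 4.0, 0.0]))  # 2.0 per click
print(rbb_truthful_payment(2, [1.0, 0.5], [6.0, 4.0, 0.0]))  # 0.0
```

On the Claim 1 example this charges the slot-1 bidder (γ1 − γ2)·4 = 2 per click, so the truthful payoff 1·(6 − 2) = 4 exceeds the payoff 3 from shading down, consistent with dominant-strategy incentive compatibility.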
the efficiency and revenue properties of the various auction formats, we must first understand which rankings of the bidders occur in equilibrium with different allocation and payment rule combinations.\nThe following lemma essentially follows from the Monotonic Selection Theorem by Milgrom and Shannon [16].\nLemma 1.\nIn a RBB (RBR) auction with either a first- or second-price payment rule, the symmetric Bayes-Nash equilibrium bid is strictly increasing with value (revenue).\nAs a consequence of this lemma, we find that RBB and RBR auctions allocate the slots greedily by the true values and revenues of the agents, respectively (whether using first- or second-price payment rules).\nThis will be relevant in Section 3.3 below.\nFor a first-price payment rule, we can explicitly derive the symmetric Bayes-Nash equilibrium bid functions for RBB and RBR auctions.\nThe purpose of this exercise is to lend qualitative insights into the parameters that influence an agent's bidding, and to derive formulae for the expected revenue in RBB and RBR auctions in order to make a revenue ranking of these two allocation rules (in Section 3.4).\nLet G(y) be the expected resulting clickthrough rate, in a symmetric equilibrium of the RBB auction (with either payment rule), to a bidder with value y and relevance α = 1.\nLet H(y) be the analogous quantity for a bidder with revenue y and relevance 1 in a RBR auction.\nBy Lemma 1, a bidder with value y will obtain slot k in a RBB auction if y is the kth highest of the true realized values.\nThe same applies in a RBR auction when y is the kth highest of the true realized revenues.\nLet FX(y) be the distribution function for value, and let FR(y) be the distribution function for revenue.\nThe probability that y is the kth highest out of N values is $\binom{N-1}{k-1} (1 - F_X(y))^{k-1} F_X(y)^{N-k}$, whereas the probability that y is the kth highest out of N revenues is the same formula with FR replacing FX.\nHence we have $G(y) = \sum_{k=1}^{K} \gamma_k \binom{N-1}{k-1} (1 - F_X(y))^{k-1} F_X(y)^{N-k}$.\nThe H function is analogous to G with FR replacing FX.\nIn the two propositions that follow, g and h are the derivatives of G and H respectively.\nWe omit the proof of the next proposition, because it is almost identical to the derivation of the equilibrium bid in the single-item case (see Krishna [11], Proposition 2.2).\nProposition 1.\nThe symmetric Bayes-Nash equilibrium strategies in a first-price RBB auction are given by $\tilde{x}^B(x, \alpha) = \frac{1}{G(x)} \int_0^x y\,g(y)\,dy$.\nThe first-price equilibrium above closely parallels the first-price equilibrium in the single-item model.\nWith a single item g is the density of the second highest value among all N agent values, whereas in a slot auction it is a weighted combination of the densities for the second, third, etc. highest values.\nNote that the symmetric Bayes-Nash equilibrium bid in a first-price RBB auction does not depend on a bidder's relevance α.\nTo see clearly why, note that a bidder chooses a bid b so as to maximize the objective $\alpha G(\tilde{x}^{-1}(b))(x - b)$, and here α is just a leading constant factor.\nSo dropping it does not change the set of optimal solutions.\nHence the equilibrium bid depends only on the value x and function G, and G in turn depends only on the marginal cumulative distribution of value FX.\nSo really only the latter needs to be common knowledge to the bidders.\nOn the other hand, we will now see that information about relevance is needed for bidders to play the equilibrium in the first-price RBR auction.\nSo the informational requirements for a first-price RBB auction are much weaker than for a first-price RBR auction: in the RBB auction a bidder need not know his own relevance, and need not know any distributional information over others' relevance in order to play the equilibrium.\nAgain we omit the next proposition's proof since it is so similar to the one 
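The equilibrium bid of Proposition 1 can be evaluated numerically (our sketch; integrating by parts gives ∫₀ˣ y g(y) dy = x G(x) − ∫₀ˣ G(y) dy, so only G itself is needed, not its derivative):

```python
# Numeric evaluation of the first-price RBB equilibrium bid (Proposition 1).
from math import comb

def G(y, gammas, N, F):
    """Expected clickthrough rate of a value-y, relevance-1 bidder under RBB."""
    return sum(g * comb(N - 1, k) * (1 - F(y)) ** k * F(y) ** (N - 1 - k)
               for k, g in enumerate(gammas))

def equilibrium_bid(x, gammas, N, F, steps=20_000):
    """Symmetric Bayes-Nash bid of a value-x bidder, via midpoint integration."""
    if x == 0:
        return 0.0
    h = x / steps
    integral = sum(G((i + 0.5) * h, gammas, N, F) * h for i in range(steps))
    return x - integral / G(x, gammas, N, F)

# Single slot, two bidders, uniform values: G(y) = y, recovering the classic
# single-item first-price bid x/2.
print(equilibrium_bid(0.8, [1.0], 2, lambda y: y))  # ~0.4
```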
above.\nProposition 2.\nThe symmetric Bayes-Nash equilibrium strategies in a first-price RBR auction are given by $\tilde{x}^R(x, \alpha) = \frac{1}{\alpha H(\alpha x)} \int_0^{\alpha x} y\,h(y)\,dy$.\nHere it can be seen that the equilibrium bid is increasing with x, but not necessarily with α.\nThis should not be much of a concern to the auctioneer, however, because in any case the declared revenue in equilibrium is always increasing in the true revenue.\nIt would be interesting to obtain the equilibrium bids when using a second-price payment rule, but it appears that the resulting differential equations for this case do not have a neat analytical solution.\nNonetheless, the same conclusions about the informational requirements of the RBB and RBR rules still hold, as can be seen simply by inspecting the objective function associated with an agent's bidding problem for the second-price case.\n3.3 Efficiency A slot auction is efficient if in equilibrium the sum of the bidders' revenues from their allocated slots is maximized.\nUsing symmetry as our equilibrium selection criterion, we find that the RBB auction is not efficient with either payment rule.\nClaim 2.\nThe RBB auction is not efficient with either first or second pricing.\nExample.\nThere are two agents and one slot, with γ1 = 1.\nAgent 1 has a value of x1 = 6 per click and relevance α1 = 1\/2.\nAgent 2 has a value of x2 = 4 per click and relevance α2 = 1.\nBy Lemma 1, agents are ranked greedily by value.\nHence agent 1 obtains the lone slot, for a total revenue of 3 to the agents.\nHowever, it is most efficient to allocate the slot to agent 2, for a total revenue of 4.\nExamples with more agents or more slots are simple to construct along the same lines.\nOn the other hand, under our assumptions on how clickthrough rate decreases with lower rank, the RBR auction is efficient with either payment rule.\nTheorem 1.\nThe RBR auction is efficient with either first- or second-price payments 
rules.\nProof.\nSince by Lemma 1 the agents' equilibrium bids are increasing functions of their revenues in the RBR auction, slots are allocated greedily according to true revenues.\nLet σ be a non-greedy allocation.\nThen there are slots s, t with s < t and rσ(s) < rσ(t).\nWe can switch the agents in slots s and t to obtain a new allocation, and the difference between the total revenue in this new allocation and the original allocation's total revenue is $(\gamma_t r_{\sigma(s)} + \gamma_s r_{\sigma(t)}) - (\gamma_s r_{\sigma(s)} + \gamma_t r_{\sigma(t)}) = (\gamma_s - \gamma_t)(r_{\sigma(t)} - r_{\sigma(s)})$.\nBoth factors on the right-hand side are positive.\nHence the switch has increased the total revenue to the bidders.\nIf we continue to perform such switches, we will eventually reach a greedy allocation of greater revenue than the initial allocation.\nSince the initial allocation was arbitrary, it follows that a greedy allocation is always efficient, and hence the RBR auction's allocation is efficient.\nNote that the assumption that clickthrough rate decays monotonically by the same factors γ1, ... , γK for all agents is crucial to this result.\nA greedy allocation scheme does not necessarily find an efficient solution if the clickthrough rates are monotonically decreasing in an independent fashion for each agent.\n3.4 Revenue To obtain possible revenue rankings for the different auction formats, we first note that when the allocation rule is fixed to RBB, then using either a first-price, second-price, or truthful payment rule leads to the same expected revenue in a symmetric, increasing Bayes-Nash equilibrium.\nBecause a RBB auction ranks agents by their true values in equilibrium for any of these payment rules (by Lemma 1), it follows that expected revenue is the same for all these payment rules, following arguments that are virtually identical to those used to establish revenue equivalence in the single-item case (see e.g. 
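The exchange argument in the proof of Theorem 1 can be spot-checked by brute force (an illustrative sketch with our own parameter choices): over random instances, no assignment of bidders to slots attains a higher total revenue Σ_k γ_k r_σ(k) than the greedy, decreasing-revenue order.

```python
# Brute-force check that the greedy (decreasing-revenue) allocation maximizes
# sum_k gamma_k * r_sigma(k), as the exchange argument of Theorem 1 implies.
import itertools
import random

random.seed(0)
for _ in range(200):
    K = 4
    gammas = sorted((random.random() for _ in range(K)), reverse=True)
    revenues = [random.random() for _ in range(K)]
    greedy_value = sum(g * r for g, r in zip(gammas, sorted(revenues, reverse=True)))
    best = max(sum(g * r for g, r in zip(gammas, perm))
               for perm in itertools.permutations(revenues))
    assert abs(best - greedy_value) < 1e-12
print("greedy allocation matched the optimum on all 200 instances")
```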
Proposition 3.1 in Krishna [11]).\nThe same holds for RBR auctions; however, the revenue ranking of the RBB and RBR allocation rules is still unclear.\nBecause of this revenue equivalence principle, we can choose whichever payment rule is most convenient for the purpose of making revenue comparisons.\nUsing Propositions 1 and 2, it is a simple matter to derive formulae for the expected revenue under both allocation rules.\nThe payment of an agent in a RBB auction is $m^B(x, \alpha) = \alpha G(x)\,\tilde{x}^B(x, \alpha)$.\nThe expected revenue is then $N \cdot E[m^B(X, A)]$, where the expectation is taken with respect to the joint density of value and relevance.\nThe expected revenue formula for RBR auctions is entirely analogous using $\tilde{x}^R(x, \alpha)$ and the H function.\nWith these in hand we can obtain revenue rankings for specific numbers of bidders and slots, and specific distributions over values and relevance.\nClaim 3.\nFor fixed K, N, and fixed γ1, ... , γK, no revenue ranking of RBB and RBR is possible for an arbitrary density f. 
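One instance behind Claim 3 can be checked by simulation (our sketch). With 2 bidders, 2 slots, γ = (1, 1/2), and (X, A) uniform on [0, 1]², revenue equivalence lets us use the truthful payment rule, under which the slot-1 bidder's expected payment is (γ1 − γ2)·A_σ(1)·X_σ(2) for RBB and (γ1 − γ2)·R^(2) for RBR; the Monte Carlo estimates should approach 1/12 and 7/108 respectively.

```python
# Monte Carlo check of the uniform-distribution revenue comparison
# (2 bidders, 2 slots, gamma = (1, 1/2), truthful payments, bottom slot free).
import random

random.seed(1)
n = 500_000
rbb_total = rbr_total = 0.0
for _ in range(n):
    x1, a1 = random.random(), random.random()
    x2, a2 = random.random(), random.random()
    # RBB: slot 1 goes to the higher value; expected payment of that bidder is
    # (per-click charge 0.5 * second value) times (expected clicks = alpha_top).
    a_top, x_second = (a1, x2) if x1 >= x2 else (a2, x1)
    rbb_total += 0.5 * a_top * x_second
    # RBR: expected payment of the slot-1 bidder is 0.5 * (second revenue).
    rbr_total += 0.5 * min(a1 * x1, a2 * x2)
print(rbb_total / n, rbr_total / n)  # should be near 1/12 and 7/108
```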
Example.\nAssume there are 2 bidders, 2 slots, and that γ1 = 1, γ2 = 1\/2.\nAssume that value-relevance pairs are uniformly distributed over [0, 1] × [0, 1].\nFor such a distribution with a closed-form formula, it is most convenient to use the revenue formulae just derived.\nRBB dominates RBR in terms of revenue for these parameters.\nThe formula for the expected revenue in a RBB auction yields 1\/12, whereas for RBR auctions we have 7\/108.\nAssume instead that with probability 1\/2 an agent's value-relevance pair is (1, 1\/2), and that with probability 1\/2 it is (1\/2, 1).\nIn this scenario it is more convenient to appeal to formulae (1) and (2).\nIn a truthful auction the second agent will always pay 0.\nAccording to (1), in a truthful RBB auction the first agent makes an expected payment of $E[(\gamma_1 - \gamma_2) A_{\sigma(1)} X_{\sigma(2)}] = \frac{1}{2} E[A_{\sigma(1)}] E[X_{\sigma(2)}]$, where we have used the fact that value and relevance are independently distributed for different agents.\nThe expected relevance of the agent with the highest value is $E[A_{\sigma(1)}] = 5/8$.\nThe expected second highest value is also $E[X_{\sigma(2)}] = 5/8$.\nThe expected revenue for a RBB auction here is then 25\/128.\nAccording to (2), in a truthful RBR auction the first agent makes an expected payment of $E[(\gamma_1 - \gamma_2) R_{\sigma(2)}] = \frac{1}{2} E[R_{\sigma(2)}]$.\nIn expectation the second highest revenue is $E[R_{\sigma(2)}] = 1/2$, so the expected revenue for a RBR auction is 1\/4.\nHence in this case the RBR auction yields higher expected revenue.14,15 This example suggests the following conjecture: when value and relevance are either uncorrelated or positively correlated, RBB dominates RBR in terms of revenue.\nWhen value and relevance are negatively correlated, RBR dominates.\n4.\nCOMPLETE INFORMATION In typical slot auctions such as those run by Yahoo! 
and Google, bidders can adjust their bids up or down at any time. As Börgers et al. [2] and Edelman et al. [6] have noted, this can be viewed as a continuous-time process in which bidders learn each other's bids. If the process stabilizes, the result can then be modeled as a Nash equilibrium in pure strategies of the static one-shot game of complete information, since each bidder will be playing a best response to the others' bids.16 This argument seems especially appropriate for Yahoo!'s slot auction design, where all bids are made public. Google keeps bids private, but experimentation can allow one to discover other bids, especially since second pricing automatically reveals to an agent the bid of the agent ranked directly below him.

14 To be entirely rigorous and consistent with our initial assumptions, we should have constructed a continuous probability density with full support over an appropriate domain. Taking the domain to be e.g. [0, 1] × [0, 1] and a continuous density with full support that is sufficiently concentrated around (1, 1/2) and (1/2, 1), with roughly equal mass around both, would yield the same conclusion.

15 Claim 3 should serve as a word of caution, because Feng et al. [7] find through their simulations that with a bivariate normal distribution over value-relevance pairs, and with 5 slots, 15 bidders, and δ = 2, RBR dominates RBB in terms of revenue for any level of correlation between value and relevance. However, they assume that bidding behavior in a second-price slot auction can be well approximated by truthful bidding.

16 We do not claim that bidders will actually learn each other's private information (value and relevance), just that for a stable set of bids there is a corresponding equilibrium of the complete information game.

4.1 Equilibrium Analysis

In this section we ask whether a pure-strategy Nash equilibrium exists in a RBB or RBR slot auction, with either first or second pricing. Before dealing with the first-price case there is a technical issue involving ties. In our model we allow bids to be nonnegative real numbers for mathematical convenience, but this becomes problematic because there is then no bid that is just higher than another. We brush over such issues by assuming that an agent can bid infinitesimally higher than another. This is imprecise but allows us to focus on the intuition behind the result that follows; see Reny [17] for a full treatment of such issues. For the remainder of the paper, we assume that there are as many slots as bidders. The following result shows that there can be no pure-strategy Nash equilibrium with first pricing.17 Note that the argument holds for both RBB and RBR allocation rules: for RBB, bids should be interpreted as declared values, and for RBR as declared revenues.

Theorem 2. There exists no complete information Nash equilibrium in pure strategies in the first-price slot auction, for any possible values of the agents, whether using a RBB
or RBR allocation rule.

Proof. Let σ: [K] → [N] be the allocation of slots to the agents resulting from their bids. Let r_i and b_i be the revenue and bid of the agent ranked ith, respectively. Note that we cannot have b_i > b_{i+1}, or else the agent in slot i could make a profitable deviation by instead bidding b_i − ε > b_{i+1} for small enough ε > 0; this does not change his allocation, but increases his profit. Hence we must have b_i = b_{i+1} (i.e., with one bidder bidding infinitesimally higher than the other). Since this holds for any two consecutive bidders, it follows that in a Nash equilibrium all bidders must be bidding 0 (since the bidder ranked last matches the bid directly below him, which is 0 by default because there is no such bid). But this is impossible: consider the bidder ranked last. The identity of this bidder is always clear given the deterministic tie-breaking rule. This bidder can obtain the top spot and increase his revenue by (γ_1 − γ_K) r_K > 0 by bidding some ε > 0, and for small enough ε this is necessarily a profitable deviation. Hence there is no Nash equilibrium in pure strategies.

On the other hand, we find that in a second-price slot auction there can be a multitude of pure-strategy Nash equilibria. The next two lemmas give conditions that characterize the allocations that can occur as a result of an equilibrium profile of bids, given fixed agent values and revenues. If we can then exhibit an allocation that satisfies these conditions, there must exist at least one equilibrium. We first consider the RBR case.

17 Börgers et al. [2] have proven this result in a model with three bidders and three slots, and we generalize their argument. Edelman et al.
[6] also point out this non-existence phenomenon; they illustrate it only with an example because the result is quite immediate.

Lemma 2. Given an allocation σ, there exists a Nash equilibrium profile of bids b leading to σ in a second-price RBR slot auction if and only if

    (1 − γ_i/γ_{j+1}) r_σ(i) ≤ r_σ(j)   for 1 ≤ j ≤ N − 2 and i ≥ j + 2.

Proof. There exists a desired vector b which constitutes a Nash equilibrium if and only if the following set of inequalities can be satisfied (the variables are the π_i and b_j):

    π_i ≥ γ_j (r_σ(i) − b_j)        ∀i, ∀j < i        (3)
    π_i ≥ γ_j (r_σ(i) − b_{j+1})    ∀i, ∀j > i        (4)
    π_i = γ_i (r_σ(i) − b_{i+1})    ∀i                (5)
    b_i ≥ b_{i+1}                   1 ≤ i ≤ N − 1     (6)
    π_i ≥ 0, b_i ≥ 0                ∀i

Here r_σ(i) is the revenue of the agent allocated slot i, and π_i and b_i may be interpreted as this agent's surplus and declared revenue, respectively. We first argue that constraints (6) can be removed, because the inequalities above can be satisfied if and only if the inequalities without (6) can be satisfied. The necessary direction is immediate. Assume we have a vector (π, b) which satisfies all inequalities above except (6). Then there is some i for which b_i < b_{i+1}. Construct a new vector (π, b′) identical to the original except with b′_{i+1} = b_i, so that b_i = b′_{i+1}. An agent in slot k < i sees the price of slot i decrease from b_{i+1} to b′_{i+1} = b_i, but this does not make slot i more preferred than slot k to this agent, because we have

    π_k ≥ γ_{i−1}(r_σ(k) − b_i) ≥ γ_i(r_σ(k) − b_i) = γ_i(r_σ(k) − b′_{i+1})

(i.e., because the agent in slot k did not originally prefer slot i − 1 at price b_i, he will not prefer slot i at price b_i). A similar argument applies to agents in slots k > i + 1. The agent in slot i sees the price of this slot go down, which only makes it more preferred. Finally, the agent in slot i + 1 sees no change in the price of any slot, so his slot remains most preferred. Hence inequalities (3)-(5) remain valid at (π, b′). We first make this change at the smallest index i for which b_i < b_{i+1}, and then apply it recursively until we eventually obtain a vector that satisfies all inequalities. We may therefore safely ignore inequalities (6) from now on.

By the Farkas lemma, the remaining inequalities can be satisfied if and only if there is no vector z such that

    Σ_{i,j} (γ_j r_σ(i)) z_σ(i)j > 0
    Σ_{i>j} γ_j z_σ(i)j + Σ_{i<j} γ_{j−1} z_σ(i)j−1 ≤ 0   ∀j    (7)
    Σ_j z_σ(i)j ≤ 0                                       ∀i    (8)
    z_σ(i)j ≥ 0   ∀i, ∀j ≠ i;   z_σ(i)i free   ∀i

Note that a variable of the form z_σ(i)i appears at most once in a constraint of type (8), so such a variable can never be positive. Also, z_σ(i)1 = 0 for all i ≠ 1 by constraint (7), since such variables never appear with another of the form z_σ(i)i. Now if we wish to raise z_σ(i)j above 0 by one unit for j ≠ i, we must lower z_σ(i)i by one unit because of the constraint of type (8). Because γ_j r_σ(i) ≤ γ_i r_σ(i) for i < j, raising z_σ(i)j with i < j while adjusting other variables to maintain feasibility cannot make the objective Σ_{i,j} (γ_j r_σ(i)) z_σ(i)j positive. If this objective is positive, then this is due to some component z_σ(i)j with i > j being positive. Now in the constraints of type (7), if i > j then z_σ(i)j appears with z_σ(j−1)j−1 (for 1 < j < N). So to raise the former variable by γ_j^{−1} units and maintain feasibility, we must (I) lower z_σ(i)i by γ_j^{−1} units, and (II) lower z_σ(j−1)j−1 by γ_{j−1}^{−1} units. Hence if the following inequalities hold:

    r_σ(i) ≤ (γ_i/γ_j) r_σ(i) + r_σ(j−1)    (9)

for 2 ≤ j ≤ N − 1 and i > j, then raising some z_σ(i)j with i > j cannot make the objective positive, and there is no z that satisfies all of the inequalities above. Conversely, if some inequality (9) does not hold, the objective can be made positive by raising the corresponding z_σ(i)j and adjusting other variables so that feasibility is just maintained. By a slight reindexing, inequalities (9) yield the statement of the lemma.

The RBB case is entirely analogous.

Lemma 3. Given an allocation σ, there exists a Nash equilibrium profile of bids b leading to σ in a second-price RBB slot auction if and only if

    (1 − γ_i/γ_{j+1}) x_σ(i) ≤ x_σ(j)   for 1 ≤ j ≤ N − 2 and i ≥ j + 2.

Proof Sketch. The proof technique is the same as in the previous lemma. The desired Nash equilibrium exists if and only if a related set of inequalities can be satisfied; by the Farkas lemma, this occurs if and only if an alternate set of inequalities cannot be satisfied. The conditions that determine whether the latter holds are given in the statement of the lemma.

The two lemmas above immediately lead to the following result.

Theorem 3. There always exists a complete information Nash equilibrium in pure strategies in the second-price RBB slot auction. There always exists an efficient complete information Nash equilibrium in pure strategies in the second-price RBR slot auction.

Proof. First consider RBB. Suppose agents are ranked according to their true values. Since x_σ(i) ≤ x_σ(j) for i > j, the system of inequalities in Lemma 3 is satisfied, and the allocation is the result of some Nash equilibrium bid profile. By the same type of argument, but appealing to Lemma 2 for RBR,
there exists a Nash equilibrium bid profile such that bidders are ranked according to their true revenues. By Theorem 1, this latter allocation is efficient.

This theorem establishes existence but not uniqueness. Indeed, we expect that in many cases there will be multiple allocations (and hence equilibria) satisfying the conditions of Lemmas 2 and 3. In particular, not all equilibria of a second-price RBR auction will be efficient: for instance, according to Lemma 2, with two agents and two slots any allocation can arise in a RBR equilibrium, because no constraints apply.

Theorems 2 and 3 taken together provide a possible explanation for Yahoo!'s switch from first to second pricing. We saw in Section 3.1 that this does not induce truthfulness from bidders. With first pricing, there will always be some bidder who feels compelled to adjust his bid. Second pricing is more convenient because an equilibrium can be reached, and this reduces the cost of bid management.

4.2 Efficiency

For a given allocation rule, we call the allocation that would result if the bidders reported their values truthfully the standard allocation. Hence in the standard RBB allocation bidders are ranked by true values, and in the standard RBR allocation they are ranked by true revenues. According to Lemmas 2 and 3, a ranking that results from a Nash equilibrium profile can deviate from the standard allocation only by having agents with relatively similar values or revenues switch places. That is, if r_i > r_j, then with RBR agent j can be ranked higher than agent i only if the ratio r_j/r_i is sufficiently large; similarly for RBB. This suggests that the value of an equilibrium allocation cannot differ too much from the value obtained in the standard allocation, and the following theorems confirm this. For an allocation σ of slots to agents, we denote its total value by f(σ) = Σ_{i=1}^N γ_i r_σ(i). We denote by g(σ) = Σ_{i=1}^N γ_i x_σ(i) allocation σ's
value when assuming all agents have identical relevance, normalized to 1. Let

    L = min_{i=1,...,N−1} min{ γ_{i+1}/γ_i , 1 − γ_{i+2}/γ_{i+1} }

(where by default γ_{N+1} = 0). Let η_x and η_r be the standard allocations when using RBB and RBR, respectively.

Theorem 4. For an allocation σ that results from a pure-strategy Nash equilibrium of a second-price RBR slot auction, we have f(σ) ≥ L f(η_r).

Proof. We number the agents so that agent i has the ith highest revenue, so r_1 ≥ r_2 ≥ ... ≥ r_N. Hence the standard allocation has value f(η_r) = Σ_{i=1}^N γ_i r_i. To prove the theorem, we will make repeated use of the fact that (Σ_k a_k)/(Σ_k b_k) ≥ min_k a_k/b_k when the a_k and b_k are positive. Note that according to Lemma 2, if agent i lies at least two slots below slot j, then r_σ(j) ≥ r_i (1 − γ_{j+2}/γ_{j+1}).

It may be the case that for some slot i we have σ(i) > i, and for slots k ≥ i + 2 we have σ(k) > i. We then say that slot i is inverted. Let S be the set of agents with indices at least i + 1; there are N − i of these. If slot i is inverted, it is occupied by some agent from S. Also, all slots strictly lower than i + 1 must be occupied by the remaining agents from S, since σ(k) > i for k ≥ i + 2. The agent in slot i + 1 must then have an index σ(i + 1) ≤ i (note this means slot i + 1 cannot be inverted). Now there are two cases. In the first case we have σ(i) = i + 1. Then

    (γ_i r_σ(i) + γ_{i+1} r_σ(i+1)) / (γ_i r_i + γ_{i+1} r_{i+1})
        ≥ (γ_{i+1} r_i + γ_i r_{i+1}) / (γ_i r_i + γ_{i+1} r_{i+1})
        ≥ min{ γ_{i+1}/γ_i , γ_i/γ_{i+1} } = γ_{i+1}/γ_i.

In the second case we have σ(i) > i + 1. Then since all agents in S except the one in slot i lie strictly below slot i + 1, and the agent in slot i is not agent i + 1, it must be that agent i + 1 is in a slot strictly below slot i + 1. This means that agent i + 1 lies at least two slots below slot i, and by Lemma 2 we then have r_σ(i) ≥ r_{i+1} (1 − γ_{i+2}/γ_{i+1}). Thus

    (γ_i r_σ(i) + γ_{i+1} r_σ(i+1)) / (γ_i r_i + γ_{i+1} r_{i+1})
        ≥ (γ_{i+1} r_i + γ_i r_σ(i)) / (γ_i r_i + γ_{i+1} r_{i+1})
        ≥ min{ γ_{i+1}/γ_i , 1 − γ_{i+2}/γ_{i+1} }.

If slot i is not inverted, then on one hand we may have σ(i) ≤ i, in which case r_σ(i)/r_i ≥ 1. On the other hand we may have σ(i) > i, but then there is some agent with index j ≤ i that lies at least two slots below slot i, and by Lemma 2, r_σ(i) ≥ r_j (1 − γ_{i+2}/γ_{i+1}) ≥ r_i (1 − γ_{i+2}/γ_{i+1}).

We write i ∈ I if slot i is inverted, and i ∈ Ī if neither slot i nor slot i − 1 is inverted. By our arguments above, two consecutive slots cannot both be inverted, so we can write

    f(σ)/f(η_r) = [ Σ_{i∈I} (γ_i r_σ(i) + γ_{i+1} r_σ(i+1)) + Σ_{i∈Ī} γ_i r_σ(i) ]
                / [ Σ_{i∈I} (γ_i r_i + γ_{i+1} r_{i+1}) + Σ_{i∈Ī} γ_i r_i ]
        ≥ min{ min_{i∈I} (γ_i r_σ(i) + γ_{i+1} r_σ(i+1)) / (γ_i r_i + γ_{i+1} r_{i+1}) ,
               min_{i∈Ī} (γ_i r_σ(i)) / (γ_i r_i) }
        ≥ L,

and this completes the proof.

Note that for RBR, the standard value is also the efficient value, by Theorem 1. Also note that for an exponential decay model, L = min{1/δ, 1 − 1/δ}. With δ = 1.428 (see Section 2.1), the factor is L ≈ 1/3.34, so the total value in a pure-strategy Nash equilibrium of a second-price RBR slot auction is always within a factor of 3.34 of the efficient value with such a discount. Again, for RBB we have an analogous result.

Theorem 5. For an allocation σ that results from a pure-strategy Nash equilibrium of a second-price RBB slot auction, we have g(σ) ≥ L g(η_x).

Proof Sketch. Simply substitute bidder values for bidder revenues in the proof of Theorem 4, and appeal to Lemma 3.

5. CONCLUSIONS

This paper analyzed stylized versions of the slot auction designs currently used by Yahoo! and Google, namely rank by bid (RBB) and rank by revenue (RBR), respectively. We also considered first and second pricing rules together with each of these allocation rules, since both have been used historically. We first studied the short-run setting with incomplete information, corresponding to the case where agents have just approached the mechanism. Our equilibrium analysis revealed that RBB has much weaker informational requirements than RBR, because bidders need not know any information about relevance (even their own) to play the Bayes-Nash equilibrium. However, RBR leads to an efficient allocation in equilibrium, whereas RBB does not. We showed that for an arbitrary distribution over value and relevance, no revenue ranking of RBB and RBR is possible. We hope that the tools we used to establish these results (revenue equivalence, the form of first-price equilibria, the truthful payment rules) will help others wanting to pursue further analyses of slot auctions.

We also studied the long-run case where agents have experimented with their bids and each settled on one they
find optimal. We argued that a stable set of bids in this setting can be modeled as a pure-strategy Nash equilibrium of the static game of complete information. We showed that no pure-strategy equilibrium exists with either RBB or RBR under first pricing, but that with second pricing there always exists such an equilibrium (in the case of RBR, an efficient equilibrium). In general, second pricing allows for multiple pure-strategy equilibria, but we showed that the value of such equilibria diverges by only a constant factor from the value obtained if all agents bid truthfully (which in the case of RBR is the efficient value).

6. FUTURE WORK

Introducing budget constraints into the model is a natural next step for future work. The complication here lies in the fact that budgets are often set for entire campaigns rather than for single keywords. Assuming that the optimal choice of budget can be made independently of the choice of bid for a specific keyword, it can be shown that it is a dominant strategy to report this optimal budget together with one's bid. The problem is then to ascertain that bids and budgets can indeed be optimized separately, or to find a plausible model where deriving equilibrium bids and budgets together is tractable. Identifying a condition on the distribution over value and relevance that actually does yield a revenue ranking of RBB and RBR (such as correlation between value and relevance, perhaps) would yield a more satisfactory characterization of their relative revenue properties. Placing bounds on the revenue obtained in a complete information equilibrium is also a relevant question. Because the incomplete information case is such a close generalization of the most basic single-item auction model, it would be interesting to see which standard results from single-item auction theory (e.g.
results with risk-averse bidders, an endogenous number of bidders, asymmetries, etc.) automatically generalize and which do not, in order to fully understand the structural differences between single-item and slot auctions.

Acknowledgements

David Pennock provided valuable guidance throughout this project. I would also like to thank David Parkes for helpful comments.

7. REFERENCES

[1] Z. Abrams. Revenue maximization when bidders have budgets. In Proc. ACM-SIAM Symposium on Discrete Algorithms, 2006.
[2] T. Börgers, I. Cox, and M. Pesendorfer. Personal communication.
[3] C. Borgs, J. Chayes, N. Immorlica, M. Mahdian, and A. Saberi. Multi-unit auctions with budget-constrained bidders. In Proc. Sixth ACM Conference on Electronic Commerce, Vancouver, BC, 2005.
[4] F. Brandt and G. Weiß. Antisocial agents and Vickrey auctions. In J.-J. C. Meyer and M. Tambe, editors, Intelligent Agents VIII, volume 2333 of Lecture Notes in Artificial Intelligence. Springer Verlag, 2001.
[5] B. Edelman and M. Ostrovsky. Strategic bidder behavior in sponsored search auctions. In Workshop on Sponsored Search Auctions, ACM Electronic Commerce, 2005.
[6] B. Edelman, M. Ostrovsky, and M. Schwarz. Internet advertising and the generalized second price auction: Selling billions of dollars worth of keywords. NBER working paper 11765, November 2005.
[7] J. Feng, H. K. Bhargava, and D. M. Pennock. Implementing sponsored search in web search engines: Computational evaluation of alternative mechanisms. INFORMS Journal on Computing, 2005. Forthcoming.
[8] J. Green and J.-J. Laffont. Characterization of satisfactory mechanisms for the revelation of preferences for public goods. Econometrica, 45:427-438, 1977.
[9] B. Holmstrom. Groves schemes on restricted domains. Econometrica, 47(5):1137-1144, 1979.
[10] B. Kitts, P. Laxminarayan, B. LeBlanc, and R. Meech. A formal analysis of search auctions including predictions on click fraud and bidding tactics. In Workshop on Sponsored Search Auctions, ACM Electronic Commerce, 2005.
[11] V. Krishna. Auction Theory. Academic Press, 2002.
[12] D. Liu and J. Chen. Designing online auctions with past performance information. Decision Support Systems, 2005. Forthcoming.
[13] C. Meek, D. M. Chickering, and D. B. Wilson. Stochastic and contingent payment auctions. In Workshop on Sponsored Search Auctions, ACM Electronic Commerce, 2005.
[14] A. Mehta, A. Saberi, U. Vazirani, and V. Vazirani. AdWords and generalized on-line matching. In Proc. 46th IEEE Symposium on Foundations of Computer Science, 2005.
[15] P. Milgrom. Putting Auction Theory to Work. Cambridge University Press, 2004.
[16] P. Milgrom and C. Shannon. Monotone comparative statics. Econometrica, 62(1):157-180, 1994.
[17] P. J. Reny. On the existence of pure and mixed strategy Nash equilibria in discontinuous games. Econometrica, 67(5):1029-1056, 1999.
[18] H. R. Varian. Position auctions. Working paper, February 2006.
[19] W. Vickrey. Counterspeculation, auctions and competitive sealed tenders. Journal of Finance, 16:8-37, 1961.

An Analysis of Alternative Slot Auction Designs for Sponsored Search

ABSTRACT

Billions of dollars are spent each year on sponsored search, a form of advertising where merchants pay for placement alongside web search results. Slots for ad listings are allocated via an auction-style mechanism where the higher a merchant bids, the more likely his ad is to appear above other ads on the page. In this paper we analyze the incentive, efficiency, and revenue properties of two slot auction designs: "rank by bid" (RBB) and "rank by revenue" (RBR), which correspond to stylized versions of the mechanisms currently used by Yahoo!
and Google, respectively. We also consider first- and second-price payment rules together with each of these allocation rules, as both have been used historically. We consider both the "short-run" incomplete information setting and the "long-run" complete information setting. With incomplete information, neither RBB nor RBR is truthful with either first or second pricing. We find that the informational requirements of RBB are much weaker than those of RBR, but that RBR is efficient whereas RBB is not. We also show that no revenue ranking of RBB and RBR is possible given an arbitrary distribution over bidder values and relevance. With complete information, we find that no equilibrium exists with first pricing using either RBB or RBR. We show that there typically exists a multitude of equilibria with second pricing, and we bound the divergence of (economic) value in such equilibria from the value obtained assuming all merchants bid truthfully.

1. INTRODUCTION

Today, Internet giants Google and Yahoo! boast a combined market capitalization of over $150 billion, largely on the strength of sponsored search, the fastest growing component of a resurgent online advertising industry. PricewaterhouseCoopers estimates that 2004 industry-wide sponsored search revenues were $3.9 billion, or 40% of total Internet advertising revenues.1 Industry watchers expect 2005 revenues to reach or exceed $7 billion.2 Roughly 80% of Google's estimated $4 billion in 2005 revenue and roughly 45% of Yahoo!'s estimated $3.7 billion in 2005 revenue will likely be attributable to sponsored search.3 A number of other companies, including LookSmart, FindWhat, InterActiveCorp (Ask Jeeves), and eBay (Shopping.com), earn hundreds of millions of dollars of sponsored search revenue annually.

Sponsored search is a form of advertising where merchants pay to appear alongside web search results. For example, when a user searches for "used honda accord san diego" in a web search engine, a variety of commercial entities (San Diego car dealers, Honda Corp., automobile information portals, classified ad aggregators, eBay, etc.) may bid to have their listings featured alongside the standard "algorithmic" search listings. Advertisers bid for placement on the page in an auction-style format where the higher they bid, the more likely their listing will appear above other ads on the page. By convention, sponsored search advertisers generally pay per click, meaning that they pay only when a user clicks on their ad, and do not pay if their ad is displayed but not clicked. Though many people claim to systematically ignore sponsored search ads, Majestic Research reports that as many as 17% of Google searches result in a paid click, and that Google earns roughly nine cents on average for every search query they process.4

Usually, sponsored search results appear in a separate section of the page designated as "sponsored," above or to the right of the algorithmic results. Sponsored search results are displayed in a format similar to algorithmic results: as a list of items each containing a title, a text description, and a hyperlink to a corresponding web page. We call each position in the list a slot. Generally, advertisements that appear in a higher ranked slot (higher on the page) garner more attention and more
clicks from users. Thus, all else being equal, merchants generally prefer higher ranked slots to lower ranked slots. Merchants bid for placement next to particular search queries; for example, Orbitz and Travelocity may bid for "las vegas hotel," while Dell and HP bid for "laptop computer." As mentioned, bids are expressed as a maximum willingness to pay per click. For example, a forty-cent bid by HostRocket for "web hosting" means HostRocket is willing to pay up to forty cents every time a user clicks on their ad.5 The auctioneer (the search engine6) evaluates the bids and allocates slots to advertisers. In principle, the allocation decision can be altered with each new incoming search query, so in effect new auctions clear continuously over time as search queries arrive. Many allocation rules are plausible. In this paper, we investigate two allocation rules, roughly corresponding to the two allocation rules used by Yahoo! and Google. The "rank by bid" (RBB) allocation assigns slots in order of bids, with higher ranked slots going to higher bidders. The "rank by revenue" (RBR) allocation assigns slots in order of the product of bid times expected relevance, where relevance is the proportion of users that click on the merchant's ad after viewing it. In our model, we assume that an ad's expected relevance is known to the auctioneer and the advertiser (but not necessarily to other advertisers), and that clickthrough rate decays monotonically with lower ranked slots. In practice, the expected clickthrough rate depends on a number of factors, including the position on the page, the ad text (which in turn depends on the identity of the bidder), the nature and intent of the user, and the context of other ads and algorithmic results on the page, and must be learned over time by both the auctioneer and the bidder [13]. As of this writing, to a rough first-order approximation, Yahoo!
employs a RBB allocation and Google employs a RBR allocation, though numerous caveats apply in both cases when it comes to the vagaries of real-world implementations.7 Even when examining a one-shot version of a slot auction, the mechanism differs from a standard multi-item auction in subtle ways. First, a single bid per merchant is used to allocate multiple non-identical slots. Second, the bid is communicated not as a direct preference over slots, but as a preference for clicks that depend stochastically on the slot allocation.

6 The auctioneer and the search engine are not always the same entity. For example, Google runs the sponsored search ads for AOL web search, with revenue being shared. Similarly, Yahoo! currently runs the sponsored search ads for MSN web search, though Microsoft will begin independent operations soon.

7 Here are two among many exceptions to the Yahoo! = RBB and Google = RBR assertion: (1) Yahoo! excludes ads deemed insufficiently relevant, either by a human editor or due to poor historical click rate; (2) Google sets differing reserve prices depending on Google's estimate of ad quality.

We investigate a number of economic properties of RBB and RBR slot auctions. We consider the "short-run" incomplete information case in Section 3, adapting and extending standard analyses of single-item auctions. In Section 4 we turn to the "long-run" complete information case; our characterization results there draw on techniques from linear programming. Throughout, important observations are highlighted as claims supported by examples. Our contributions are as follows:

• We show that with multiple slots, bidders do not reveal their true values with either RBB or RBR, under either first or second pricing.

• With incomplete information, we find that the informational requirements of playing the equilibrium bid are much weaker for RBB than for RBR, because with RBB bidders need not know any information about each other's relevance (or even their own).

• With
incomplete information, we prove that RBR is efficient but that RBB is not.

• We show via a simple example that no general revenue ranking of RBB and RBR is possible.

• We prove that in a complete-information setting, first-price slot auctions have no pure-strategy Nash equilibrium, but that there always exists a pure-strategy equilibrium with second pricing.

• We provide a constant-factor bound on the deviation from efficiency that can occur in the equilibrium of a second-price slot auction.

In Section 2 we specify our model of bidders and the various slot auction formats. In Section 3.1 we study the incentive properties of each format, asking in which cases agents would bid truthfully. There is possible confusion here because the "second-price" design for slot auctions is reminiscent of the Vickrey auction for a single item; we note that for slot auctions the Vickrey mechanism is in fact very different from the second-price mechanism, and so they have different incentive properties.8 In Section 3.2 we derive the Bayes-Nash equilibrium bids for the various auction formats. This is useful for the efficiency and revenue results in later sections. It should become clear in this section that slot auctions in our model are a straightforward generalization of single-item auctions. Sections 3.3 and 3.4 address questions of efficiency and revenue under incomplete information, respectively. In Section 4.1 we determine whether pure-strategy equilibria exist for the various auction formats under complete information. In Section 4.2 we derive bounds on the deviation from efficiency in the pure-strategy equilibria of second-price slot auctions.

Our approach is positive rather than normative. We aim to clarify the incentive, efficiency, and revenue properties of two slot auction designs currently in use, under settings of incomplete and complete information. We do not attempt to derive the "optimal" mechanism for a slot auction.

Related
work. Feng et al. [7] compare the revenue performance of various ranking mechanisms for slot auctions in a model with incomplete information, much as we do in Section 3.4, but they obtain their results via simulations whereas we perform an equilibrium analysis. Liu and Chen [12] study properties of slot auctions under incomplete information. Their setting is essentially the same as ours, except they restrict their attention to a model with a single slot and a binary type for bidder relevance (high or low). They find that RBR is efficient, but that no general revenue ranking of RBB and RBR is possible, which agrees with our results. They also take a design approach and show how the auctioneer should assign relevance scores to optimize its revenue. Edelman et al. [6] model the slot auction problem both as a static game of complete information and a dynamic game of incomplete information. They study the "locally envy-free equilibria" of the static game of complete information; this is a solution concept motivated by certain bidding behaviors that arise due to the presence of budget constraints. They do not view slot auctions as static games of incomplete information as we do, but do study them as dynamic games of incomplete information and derive results on the uniqueness and revenue properties of the resulting equilibria. They also provide a nice description of the evolution of the market for sponsored search. Varian [18] also studies slot auctions under a setting of complete information. He focuses on "symmetric" equilibria, which are a refinement of Nash equilibria appropriate for slot auctions. He provides bounds on the revenue obtained in equilibrium. He also gives bounds that can be used to infer bidder values given their bids, and performs some empirical analysis using these results. In contrast, we focus instead on efficiency and provide bounds on the deviation from efficiency in complete-information equilibria.

2. PRELIMINARIES

We focus on
a slot auction for a single keyword. In a setting of incomplete information, a bidder knows only distributions over others' private information (value per click and relevance). With complete information, a bidder knows others' private information, and so does not need to rely on distributions to strategize. We first describe the model for the case with incomplete information, and drop the distributional information from the model when we come to the complete-information case in Section 4.

2.1 The Model

There is a fixed number K of slots to be allocated among N bidders. We assume without loss of generality that K ≤ N, since superfluous slots can remain blank. Bidder i assigns a value of Xi to each click received on its advertisement, regardless of this advertisement's rank.9 The probability that i's advertisement will be clicked if viewed is Ai ∈ [0, 1]. We refer to Ai as bidder i's relevance. We refer to Ri = AiXi as bidder i's revenue. The Xi, Ai, and Ri are random variables, and we denote their realizations by xi, αi, and ri respectively. The probability that an advertisement will be viewed if placed in slot j is γj ∈ [0, 1]. We assume γ1 > γ2 > ... > γK. Hence bidder i's advertisement will have a clickthrough rate of γjαi if placed in slot j.

9Indeed Kitts et al. [10] find that in their sample of actual click data, the correlation between rank and conversion rate is not statistically significant. However, for the purposes of our model it is also important that bidders believe that conversion rate does not vary with rank.
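To make the quantities of Section 2.1 concrete, the following minimal Python sketch computes the slot view probabilities under the exponential decay parameterization discussed in this section, the resulting clickthrough rate γjαi, and the quasi-linear utility ui(j, b) = γjαi(xi − b). The function names are illustrative, not from the paper.

```python
def slot_probabilities(K, delta=1.428):
    # Exponential decay model for view probabilities: gamma_k = delta**(1 - k),
    # so gamma_1 = 1 and gamma_1 > gamma_2 > ... > gamma_K whenever delta > 1.
    # delta = 1.428 is the decay factor Feng et al. report fitting to click data.
    return [delta ** (1 - k) for k in range(1, K + 1)]

def clickthrough_rate(gamma, j, alpha_i):
    # An ad in slot j is viewed with probability gamma_j and, once viewed,
    # clicked with probability alpha_i (bidder i's relevance).
    return gamma[j - 1] * alpha_i

def expected_utility(gamma, j, alpha_i, x_i, b):
    # Quasi-linear utility: u_i(j, b) = gamma_j * alpha_i * (x_i - b).
    return clickthrough_rate(gamma, j, alpha_i) * (x_i - b)
```

For instance, with K = 3 slots, a bidder with relevance αi = 0.2 and value xi = 0.40 per click who wins the top slot at a price of b = 0.25 per click earns expected utility 1 · 0.2 · 0.15 = 0.03 per query.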
Of course, an advertisement does not receive any clicks if it is not allocated a slot. Each bidder's value and relevance pair (Xi, Ai) is independently and identically distributed on [0, x̄] × [0, 1] according to a continuous density function f that has full support on its domain. The density f and slot probabilities γ1, ..., γK are common knowledge. Only bidder i knows the realization xi of its value per click Xi. Both bidder i and the seller know the realization αi of Ai, but this realization remains unobservable to the other bidders. We assume that bidders have quasi-linear utility functions. That is, the expected utility to bidder i of obtaining the slot of rank j at a price of b per click is

ui(j, b) = γj αi (xi − b).

If the advertising firms bidding in the slot auction are risk-neutral and have ample liquidity, quasi-linearity is a reasonable assumption. The assumptions of independence, symmetry, and risk-neutrality made above are all quite standard in single-item auction theory [11, 19]. The assumption that clickthrough rate decays monotonically with lower slots, by the same factors for each agent, is unique to the slot auction problem. We view it as a main contribution of our work to show that this assumption allows for tractable analysis of the slot auction problem using standard tools from single-item auction theory. It also allows for interesting results in the complete information case. A common model of decaying clickthrough rate is the exponential decay model, where γk = δ^(1−k) with decay δ > 1. Feng et al. [7] state that their actual clickthrough data is fitted extremely well by an exponential decay model with δ = 1.428. Our model lacks budget constraints, which are an important feature of real slot auctions. With budget constraints, keyword auctions cannot be considered independently of one another, because the budget must be allocated across multiple keywords: a single advertiser typically bids on multiple keywords relevant to his business. Introducing this element into the model is an important next step for future work.10

10Models with budget constraints have begun to appear in this research area. Abrams [1] and Borgs et al. [3] design multi-unit auctions for budget-constrained bidders, which can be interpreted as slot auctions, with a focus on revenue optimization and truthfulness. Mehta et al. [14] address the problem of matching user queries to budget-constrained advertisers so as to maximize revenue.

2.2 Auction Formats

In a slot auction a bidder provides to the seller a declared value per click x̃i(xi, αi), which depends on his true value and relevance. We often denote this declared value (bid) by x̃i for short. Since a bidder's relevance αi is observable to the seller, the bidder cannot misrepresent it. We denote the kth highest of the N declared values by x̃(k), and the kth highest of the N declared revenues by r̃(k), where the declared revenue of bidder i is r̃i = αix̃i. We consider two types of allocation rules, "rank by bid" (RBB) and "rank by revenue" (RBR):

RBB. Slot k goes to bidder i if and only if x̃i = x̃(k).

RBR. Slot k goes to bidder i if and only if r̃i = r̃(k).

We will commonly represent an allocation by a one-to-one function σ: [K] → [N], where [n] is the set of integers {1, 2, ..., n}. Hence slot k goes to bidder σ(k). We also consider two different types of payment rules. Note that no matter what the payment rule, a bidder that is not allocated a slot will pay 0 since his listing cannot receive any clicks.

First-price. The bidder allocated slot k, namely σ(k), pays x̃σ(k) per click under both the RBB and RBR allocation rules.

Second-price. If k < N, bidder σ(k) pays x̃σ(k+1) per click under the RBB rule, and pays r̃σ(k+1)/ασ(k) per click under the RBR rule. If k = N, bidder σ(k) pays 0 per click.11

Intuitively, a second-price payment rule sets a bidder's payment to the lowest bid it could have declared while maintaining the same ranking, given the allocation rule used. Overture introduced the first slot auction design in 1997, using a first-price RBB scheme. Google then followed in 2000 with a second-price RBR scheme. In 2002, Overture (at this point acquired by Yahoo!)
then switched to second pricing but still allocates using RBB. One possible reason for the switch is given in Section 4. We assume that ties are broken as follows in the event that two agents make the exact same bid or declare the same revenue. There is a permutation of the agents κ: [N] → [N] that is fixed beforehand. If the bids of agents i and j are tied, then agent i obtains a higher slot if and only if κ(i) < κ(j). This is consistent with the practice in real slot auctions, where ties are broken by the bidders' order of arrival.

3. INCOMPLETE INFORMATION

3.1 Incentives

3.2 Equilibrium Analysis

3.3 Efficiency

3.4 Revenue

4. COMPLETE INFORMATION

4.1 Equilibrium Analysis

4.2 Efficiency

5. CONCLUSIONS

6. FUTURE WORK

Introducing budget constraints into the model is a natural next step for future work. The complication here lies in the fact that budgets are often set for entire campaigns rather than single keywords. Assuming that the optimal choice of budget can be made independently of the choice of bid for a specific keyword, it can be shown that it is a dominant strategy to report this optimal budget with one's bid. The problem is then to ascertain that bids and budgets can indeed be optimized separately, or to find a plausible model where deriving equilibrium bids and budgets together is tractable. Identifying a condition on the distribution over value and relevance that actually does yield a revenue ranking of RBB and RBR (such as correlation between value and relevance, perhaps) would yield a more satisfactory characterization of their relative revenue properties. Placing bounds on the revenue obtained in a complete information equilibrium is also a relevant question. Because the incomplete information case is such a close generalization of the most basic single-item auction model, it would be interesting to see which standard results from single-item auction theory (e.g.
results with risk-averse bidders, an endogenous number of bidders, asymmetries, etc.) automatically generalize and which do not, to fully understand the structural differences between single-item and slot auctions.
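As a concrete companion to the auction formats of Section 2.2, the following minimal Python sketch implements the RBB and RBR allocation rules, the first- and second-price payment rules, and tie-breaking by the fixed priority permutation κ. All names are illustrative, not from the paper, and the sketch assumes every bidder's relevance is known to the seller, as in the model.

```python
def run_slot_auction(bids, alphas, kappa, K, rule="RBR", pricing="second"):
    # bids[i]: bidder i's declared value per click (x tilde); alphas[i]: i's
    # relevance; kappa[i]: i's fixed tie-breaking priority (lower wins ties);
    # K: number of slots, with K <= len(bids).
    n = len(bids)
    # RBB ranks by declared value; RBR ranks by declared revenue alpha_i * x_i.
    score = (lambda i: bids[i]) if rule == "RBB" else (lambda i: alphas[i] * bids[i])
    ranking = sorted(range(n), key=lambda i: (-score(i), kappa[i]))
    allocation, prices = {}, {}
    for k in range(1, K + 1):
        i = ranking[k - 1]          # sigma(k): the bidder allocated slot k
        allocation[k] = i
        if pricing == "first":
            prices[k] = bids[i]     # pay own declared value per click
        elif k < n:
            # Second pricing: the lowest declaration keeping the same rank.
            j = ranking[k]          # sigma(k + 1), the next-ranked bidder
            prices[k] = bids[j] if rule == "RBB" else alphas[j] * bids[j] / alphas[i]
        else:
            prices[k] = 0.0         # k = N: no bidder is ranked below
    return allocation, prices
```

For example, with three bidders declaring x̃ = (0.40, 0.30, 0.20) and relevances α = (0.5, 0.8, 0.9), RBR ranks by declared revenues (0.20, 0.24, 0.18), so the second bidder takes the top slot; under second pricing it pays r̃σ(2)/ασ(1) = 0.20/0.8 = 0.25 per click, while RBB would instead award the top slot to the first bidder at a price of 0.30 per click.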
. '\nSponsored search is a form of advertising where merchants pay to appear alongside web search results.\n.)\nmay bid to to have their listings featured alongside the standard \"algorithmic\" search listings.\nBy convention, sponsored search advertisers generally pay per click, meaning that they pay only when a user clicks on their ad, and do not pay if their ad is displayed but not clicked.\nThough many people claim to systematically ignore sponsored search ads, Majestic Research reports that\nWe call each position in the list a slot.\nGenerally, advertisements that appear in a higher ranked slot (higher on the page) garner more attention and more clicks from users.\nThus, all else being equal, merchants generally prefer higher ranked slots to lower ranked slots.\nAs mentioned, bids are expressed as a maximum willingness to pay per click.\nFor example, a forty-cent bid by HostRocket for \"web hosting\" means HostRocket is willing to pay up to forty cents every time a user clicks on their ad .5 The auctioneer (the search engine6) evaluates the bids and allocates slots to advertisers.\nIn principle, the allocation decision can be altered with each new incoming search query, so in effect new auctions clear continuously over time as search queries arrive.\nMany allocation rules are plausible.\nIn this paper, we investigate two allocation rules, roughly corresponding to the two allocation rules used by Yahoo! 
and Google.\nThe \"rank by bid\" (RBB) allocation assigns slots in order of bids, with higher ranked slots going to higher bidders.\nThe \"rank by revenue\" (RBR) allocation assigns slots in order of the product of bid times expected relevance, where relevance is the proportion of users that click on the merchant's ad after viewing it.\nIn our model, we assume that an ad's expected relevance is known to the auctioneer and the advertiser (but not necessarily to other advertisers), and that clickthrough rate decays monotonically with lower ranked slots.\nengine are not always the same entity.\nFor example Google runs the sponsored search ads for AOL web search, with revenue being shared.\nSimilarly, Yahoo! currently runs the sponsored search ads for MSN web search, though Microsoft will begin independent operations soon.\ntion in subtle ways.\nFirst, a single bid per merchant is used to allocate multiple non-identical slots.\nSecond, the bid is communicated not as a direct preference over slots, but as a preference for clicks that depend stochastically on slot allocation.\nWe investigate a number of economic properties of RBB and RBR slot auctions.\nWe consider the \"short-run\" incomplete information case in Section 3, adapting and extending standard analyses of single-item auctions.\nIn Section 4 we turn to the \"long-run\" complete information case; our characterization results here draw on techniques from linear programming.\nThroughout, important observations are highlighted as claims supported by examples.\nOur contributions are as follows:\n\u2022 We show that with multiple slots, bidders do not reveal their true values with either RBB or RBR, and with either first - or second-pricing.\n\u2022 With incomplete information, we find that the informational requirements of playing the equilibrium bid are much weaker for RBB than for RBR, because bidders need not know any information about each others' relevance (or even their own) with RBB.\n\u2022 With incomplete 
information, we prove that RBR is efficient but that RBB is not.\n\u2022 We show via a simple example that no general revenue ranking of RBB and RBR is possible.\n\u2022 We prove that in a complete-information setting, firstprice slot auctions have no pure strategy Nash equilibrium, but that there always exists a pure-strategy equilibrium with second pricing.\n\u2022 We provide a constant-factor bound on the deviation from efficiency that can occur in the equilibrium of a second-price slot auction.\nIn Section 2 we specify our model of bidders and the various slot auction formats.\nIn Section 3.1 we study the incentive properties of each format, asking in which cases agents would bid truthfully.\nThis is useful for the efficiency and revenue results in later sections.\nIt should become clear in this section that slot auctions in our model are a straightforward generalization of single-item auctions.\nSections 3.3 and 3.4 address questions of efficiency and revenue under incomplete information, respectively.\nIn Section 4.1 we determine whether pure-strategy equilibria exist for the various auction formats, under complete information.\nIn Section 4.2 we derive bounds on the deviation from efficiency in the pure-strategy equilibria of secondprice slot auctions.\nOur approach is positive rather than normative.\nWe aim to clarify the incentive, efficiency, and revenue properties of two slot auction designs currently in use, under settings of\nincomplete and complete information.\nWe do not attempt to derive the \"optimal\" mechanism for a slot auction.\nRelated work.\nFeng et al. 
[7] compare the revenue performance of various ranking mechanisms for slot auctions in a model with incomplete information, much as we do in Section 3.4, but they obtain their results via simulations whereas we perform an equilibrium analysis.\nLiu and Chen [12] study properties of slot auctions under incomplete information.\nTheir setting is essentially the same as ours, except they restrict their attention to a model with a single slot and a binary type for bidder relevance (high or low).\nThey find that RBR is efficient, but that no general revenue ranking of RBB and RBR is possible, which agrees with our results.\nThey also take a design approach and show how the auctioneer should assign relevance scores to optimize its revenue.\nEdelman et al. [6] model the slot auction problem both as a static game of complete information and a dynamic game of incomplete information.\nThey do not view slot auctions as static games of incomplete information as we do, but do study them as dynamic games of incomplete information and derive results on the uniqueness and revenue properties of the resulting equilibria.\nThey also provide a nice description of the evolution of the market for sponsored search.\nVarian [18] also studies slot auctions under a setting of complete information.\nHe focuses on \"symmetric\" equilibria, which are a refinement of Nash equilibria appropriate for slot auctions.\nHe provides bounds on the revenue obtained in equilibrium.\nHe also gives bounds that can be used to infer bidder values given their bids, and performs some empirical analysis using these results.\nIn contrast, we focus instead on efficiency and provide bounds on the deviation from efficiency in complete-information equilibria.\n2.\nPRELIMINARIES\nWe focus on a slot auction for a single keyword.\nIn a setting of incomplete information, a bidder knows only distributions over others' private information (value per click and relevance).\nWith complete information, a bidder knows others' 
private information, and so does not need to rely on distributions to strategize.\nWe first describe the model for the case with incomplete information, and drop the distributional information from the model when we come to the complete-information case in Section 4.\n2.1 The Model\nThere is a fixed number K of slots to be allocated among N bidders.\nWe assume without loss of generality that K \u2264 N, since superfluous slots can remain blank.\nBidder i assigns a value of Xi to each click received on its advertisement, regardless of this advertisement's rank .9 The probability that i's advertisement will be clicked if viewed is Ai \u2208 [0, 1].\nWe refer to Ai as bidder i's relevance.\nWe refer to Ri = AiXi as bidder i's revenue.\nHowever, for the purposes of our model it is also important that bidders believe that conversion rate does not vary with rank.\nThe probability that an advertisement will be viewed if placed in slot j is ryj \u2208 [0, 1].\nWe assume ry1> rye>...> ryK.\nHence bidder i's advertisement will have a clickthrough rate of ryj\u03b1i if placed in slot j. 
Of course, an advertisement does not receive any clicks if it is not allocated a slot.\nThe density f and slot probabilities ry1,..., ryK are common knowledge.\nOnly bidder i knows the realization xi of its value per click Xi.\nBoth bidder i and the seller know the realization \u03b1i of Ai, but this realization remains unobservable to the other bidders.\nWe assume that bidders have quasi-linear utility functions.\nThat is, the expected utility to bidder i of obtaining the slot of rank j at a price of b per click is ui (j, b) = ryj \u03b1i (xi \u2212 b) If the advertising firms bidding in the slot auction are riskneutral and have ample liquidity, quasi-linearity is a reasonable assumption.\nThe assumptions of independence, symmetry, and riskneutrality made above are all quite standard in single-item auction theory [11, 19].\nThe assumption that clickthrough rate decays monotonically with lower slots--by the same factors for each agent--is unique to the slot auction problem.\nWe view it as a main contribution of our work to show that this assumption allows for tractable analysis of the slot auction problem using standard tools from singleitem auction theory.\nIt also allows for interesting results in the complete information case.\nA common model of decaying clickthrough rate is the exponential decay model, where ryk = \u03b4k \u2212 1 with decay \u03b4> 1.\ntheir actual clickthrough data is fitted extremely well by an exponential decay model with \u03b4 = 1.428.\nOur model lacks budget constraints, which are an important feature of real slot auctions.\nWith budget constraints keyword auctions cannot be considered independently of one another, because the budget must be allocated across multiple keywords--a single advertiser typically bids on multiple keywords relevant to his business.\nIntroducing this element into the model is an important next step for future work .10\n2.2 Auction Formats\nIn a slot auction a bidder provides to the seller a declared value per 
click \u02dcxi (xi, \u03b1i) which depends on his true value and relevance.\nWe often denote this declared value (bid) by \u02dcxi for short.\nSince a bidder's relevance \u03b1i is observable to the seller, the bidder cannot misrepresent it.\nWe denote the kth highest of the N declared values by \u02dcx (k), and the kth highest of the N declared revenues by \u02dcr (k), where the declared revenue of bidder i is \u02dcri = \u03b1i\u02dcxi.\nWe consider two types of allocation rules, \"rank by bid\" (RBB) and \"rank by revenue\" (RBR): 10Models with budget constraints have begun to appear in this research area.\nAbrams [1] and Borgs et al. [3] design multi-unit auctions for budget-constrained bidders, which can be interpreted as slot auctions, with a focus on revenue optimization and truthfulness.\nMehta et al. [14] address the problem of matching user queries to budget-constrained advertisers so as to maximize revenue.\nRBB.\nSlot k goes to bidder i if and only if \u02dcxi = \u02dcx (k).\nRBR.\nSlot k goes to bidder i if and only if \u02dcri = \u02dcr (k).\nHence slot k goes to bidder \u03c3 (k).\nWe also consider two different types of payment rules.\nNote that no matter what the payment rule, a bidder that is not allocated a slot will pay 0 since his listing cannot receive any clicks.\nFirst-price.\nThe bidder allocated slot k, namely \u03c3 (k), pays \u02dcx\u03c3 (k) per click under both the RBB and RBR allocation rules.\nSecond-price.\nIf k <N, bidder \u03c3 (k) pays \u02dcx\u03c3 (k +1) per click under the RBB rule, and pays \u02dcr\u03c3 (k +1) \/ \u03b1\u03c3 (k) per click under the RBR rule.\nIf k = N, bidder \u03c3 (k) pays 0 per click .11 Intuitively, a second-price payment rule sets a bidder's payment to the lowest bid it could have declared while maintaining the same ranking, given the allocation rule used.\nOverture introduced the first slot auction design in 1997, using a first-price RBB scheme.\nGoogle then followed in 2000 with a second-price RBR 
scheme.\nIn 2002, Overture (at this point acquired by Yahoo!) then switched to second pricing but still allocates using RBB.\nOne possible reason for the switch is given in Section 4.\nWe assume that ties are broken as follows in the event that two agents make the exact same bid or declare the same revenue.\nIf the bids of agents i and j are tied, then agent i obtains a higher slot if and only if \u03ba (i) <\u03ba (j).\nThis is consistent with the practice in real slot auctions where ties are broken by the bidders' order of arrival.\n6.\nFUTURE WORK\nIntroducing budget constraints into the model is a natural next step for future work.\nThe complication here lies in the fact that budgets are often set for entire campaigns rather than single keywords.\nThe problem is then to ascertain that bids and budgets can indeed be optimized separately, or to find a plausible model where deriving equilibrium bids and budgets together is tractable.\nPlacing bounds on the revenue obtained in a complete information equilibrium is also a relevant question.\nBecause the incomplete information case is such a close generalization of the most basic single-item auction model, it would be interesting to see which standard results from single-item auction theory (e.g. results with risk-averse bidders, an endogenous number of bidders, asymmetries, etc. 
.\n.)\nautomatically generalize and which do not, to fully understand the structural differences between single-item and slot auctions.","lvl-2":"An Analysis of Alternative Slot Auction Designs for Sponsored Search\nABSTRACT\nBillions of dollars are spent each year on sponsored search, a form of advertising where merchants pay for placement alongside web search results.\nSlots for ad listings are allocated via an auction-style mechanism where the higher a merchant bids, the more likely his ad is to appear above other ads on the page.\nIn this paper we analyze the incentive, efficiency, and revenue properties of two slot auction designs: \"rank by bid\" (RBB) and \"rank by revenue\" (RBR), which correspond to stylized versions of the mechanisms currently used by Yahoo! and Google, respectively.\nWe also consider first - and second-price payment rules together with each of these allocation rules, as both have been used historically.\nWe consider both the \"short-run\" incomplete information setting and the \"long-run\" complete information setting.\nWith incomplete information, neither RBB nor RBR are truthful with either first or second pricing.\nWe find that the informational requirements of RBB are much weaker than those of RBR, but that RBR is efficient whereas RBB is not.\nWe also show that no revenue ranking of RBB and RBR is possible given an arbitrary distribution over bidder values and relevance.\nWith complete information, we find that no equilibrium exists with first pricing using either RBB or RBR.\nWe show that there typically exists a multitude of equilibria with second pricing, and we bound the divergence of (economic) value in such equilibria from the value obtained assuming all merchants bid truthfully.\n1.\nINTRODUCTION\nToday, Internet giants Google and Yahoo! 
boast a combined market capitalization of over $150 billion, largely on the strength of sponsored search, the fastest growing component of a resurgent online advertising industry.\nPricewaterhouseCoopers estimates that 2004 industry-wide sponsored search revenues were $3.9 billion, or 40% of total Internet advertising revenues . '\nIndustry watchers expect 2005 revenues to reach or exceed $7 billion .2 Roughly 80% of Google's estimated $4 billion in 2005 revenue and roughly 45% of Yahoo!'s estimated $3.7 billion in 2005 revenue will likely be attributable to sponsored search .3 A number of other companies--including LookSmart, FindWhat, InterActiveCorp (Ask Jeeves), and eBay (Shopping.com)--earn hundreds of millions of dollars of sponsored search revenue annually.\nSponsored search is a form of advertising where merchants pay to appear alongside web search results.\nFor example, when a user searches for \"used honda accord san diego\" in a web search engine, a variety of commercial entities (San Diego car dealers, Honda Corp, automobile information portals, classified ad aggregators, eBay, etc. 
.\n.)\nmay bid to to have their listings featured alongside the standard \"algorithmic\" search listings.\nAdvertisers bid for placement on the page in an auction-style format where the higher they bid the more likely their listing will appear above other ads on the page.\nBy convention, sponsored search advertisers generally pay per click, meaning that they pay only when a user clicks on their ad, and do not pay if their ad is displayed but not clicked.\nThough many people claim to systematically ignore sponsored search ads, Majestic Research reports that\nas many as 17% of Google searches result in a paid click, and that Google earns roughly nine cents on average for every search query they process .4 Usually, sponsored search results appear in a separate section of the page designated as \"sponsored\" above or to the right of the algorithmic results.\nSponsored search results are displayed in a format similar to algorithmic results: as a list of items each containing a title, a text description, and a hyperlink to a corresponding web page.\nWe call each position in the list a slot.\nGenerally, advertisements that appear in a higher ranked slot (higher on the page) garner more attention and more clicks from users.\nThus, all else being equal, merchants generally prefer higher ranked slots to lower ranked slots.\nMerchants bid for placement next to particular search queries; for example, Orbitz and Travelocity may bid for \"las vegas hotel\" while Dell and HP bid for \"laptop computer\".\nAs mentioned, bids are expressed as a maximum willingness to pay per click.\nFor example, a forty-cent bid by HostRocket for \"web hosting\" means HostRocket is willing to pay up to forty cents every time a user clicks on their ad .5 The auctioneer (the search engine6) evaluates the bids and allocates slots to advertisers.\nIn principle, the allocation decision can be altered with each new incoming search query, so in effect new auctions clear continuously over time as search 
queries arrive. Many allocation rules are plausible. In this paper, we investigate two allocation rules, roughly corresponding to the two allocation rules used by Yahoo! and Google. The "rank by bid" (RBB) allocation assigns slots in order of bids, with higher-ranked slots going to higher bidders. The "rank by revenue" (RBR) allocation assigns slots in order of the product of bid times expected relevance, where relevance is the proportion of users that click on the merchant's ad after viewing it. In our model, we assume that an ad's expected relevance is known to the auctioneer and the advertiser (but not necessarily to other advertisers), and that clickthrough rate decays monotonically with lower-ranked slots. In practice, the expected clickthrough rate depends on a number of factors, including the position on the page, the ad text (which in turn depends on the identity of the bidder), the nature and intent of the user, and the context of other ads and algorithmic results on the page, and must be learned over time by both the auctioneer and the bidder [13]. As of this writing, to a rough first-order approximation, Yahoo! employs a RBB allocation and Google employs a RBR allocation, though numerous caveats apply in both cases when it comes to the vagaries of real-world implementations.7

6The auctioneer and the search engine are not always the same entity. For example Google runs the sponsored search ads for AOL web search, with revenue being shared. Similarly, Yahoo! currently runs the sponsored search ads for MSN web search, though Microsoft will begin independent operations soon.
7Here are two among many exceptions to the Yahoo! = RBB and Google = RBR assertion: (1) Yahoo! excludes ads deemed insufficiently relevant either by a human editor or due to poor historical click rate; (2) Google sets differing reserve prices depending on Google's estimate of ad quality.

Even when examining a one-shot version of a slot auction, the mechanism differs from a standard multi-item auction in subtle ways. First, a single bid per merchant is used to allocate multiple non-identical slots. Second, the bid is communicated not as a direct preference over slots, but as a preference for clicks that depend stochastically on slot allocation. We investigate a number of economic properties of RBB and RBR slot auctions. We consider the "short-run" incomplete information case in Section 3, adapting and extending standard analyses of single-item auctions. In Section 4 we turn to the "long-run" complete information case; our characterization results here draw on techniques from linear programming. Throughout, important observations are highlighted as claims supported by examples. Our contributions are as follows:

• We show that with multiple slots, bidders do not reveal their true values with either RBB or RBR, and with either first- or second-pricing.
• With incomplete information, we find that the informational requirements of playing the equilibrium bid are much weaker for RBB than for RBR, because bidders need not know any information about each others' relevance (or even their own) with RBB.
• With incomplete information, we prove that RBR is efficient but that RBB is not.
• We show via a simple example that no general revenue ranking of RBB and RBR is possible.
• We prove that in a complete-information setting, first-price slot auctions have no pure-strategy Nash equilibrium, but that there always exists a pure-strategy equilibrium with second pricing.
• We provide a constant-factor bound on the deviation from efficiency that can occur in the equilibrium of a second-price slot auction.

In Section 2 we specify our model of bidders and the various slot auction formats. In Section 3.1 we study the incentive properties
of each format, asking in which cases agents would bid truthfully.\nThere is possible confusion here because the \"second-price\" design for slot auctions is reminiscent of the Vickrey auction for a single item; we note that for slot auctions the Vickrey mechanism is in fact very different from the second-price mechanism, and so they have different incentive properties .8 In Section 3.2 we derive the Bayes-Nash equilibrium bids for the various auction formats.\nThis is useful for the efficiency and revenue results in later sections.\nIt should become clear in this section that slot auctions in our model are a straightforward generalization of single-item auctions.\nSections 3.3 and 3.4 address questions of efficiency and revenue under incomplete information, respectively.\nIn Section 4.1 we determine whether pure-strategy equilibria exist for the various auction formats, under complete information.\nIn Section 4.2 we derive bounds on the deviation from efficiency in the pure-strategy equilibria of secondprice slot auctions.\nOur approach is positive rather than normative.\nWe aim to clarify the incentive, efficiency, and revenue properties of two slot auction designs currently in use, under settings of\nincomplete and complete information.\nWe do not attempt to derive the \"optimal\" mechanism for a slot auction.\nRelated work.\nFeng et al. 
[7] compare the revenue performance of various ranking mechanisms for slot auctions in a model with incomplete information, much as we do in Section 3.4, but they obtain their results via simulations whereas we perform an equilibrium analysis.\nLiu and Chen [12] study properties of slot auctions under incomplete information.\nTheir setting is essentially the same as ours, except they restrict their attention to a model with a single slot and a binary type for bidder relevance (high or low).\nThey find that RBR is efficient, but that no general revenue ranking of RBB and RBR is possible, which agrees with our results.\nThey also take a design approach and show how the auctioneer should assign relevance scores to optimize its revenue.\nEdelman et al. [6] model the slot auction problem both as a static game of complete information and a dynamic game of incomplete information.\nThey study the \"locally envyfree equilibria\" of the static game of complete information; this is a solution concept motivated by certain bidding behaviors that arise due to the presence of budget constraints.\nThey do not view slot auctions as static games of incomplete information as we do, but do study them as dynamic games of incomplete information and derive results on the uniqueness and revenue properties of the resulting equilibria.\nThey also provide a nice description of the evolution of the market for sponsored search.\nVarian [18] also studies slot auctions under a setting of complete information.\nHe focuses on \"symmetric\" equilibria, which are a refinement of Nash equilibria appropriate for slot auctions.\nHe provides bounds on the revenue obtained in equilibrium.\nHe also gives bounds that can be used to infer bidder values given their bids, and performs some empirical analysis using these results.\nIn contrast, we focus instead on efficiency and provide bounds on the deviation from efficiency in complete-information equilibria.\n2.\nPRELIMINARIES\nWe focus on a slot auction for 
a single keyword. In a setting of incomplete information, a bidder knows only distributions over others' private information (value per click and relevance). With complete information, a bidder knows others' private information, and so does not need to rely on distributions to strategize. We first describe the model for the case with incomplete information, and drop the distributional information from the model when we come to the complete-information case in Section 4.

2.1 The Model

There is a fixed number K of slots to be allocated among N bidders. We assume without loss of generality that K ≤ N, since superfluous slots can remain blank. Bidder i assigns a value of Xi to each click received on its advertisement, regardless of this advertisement's rank.9 The probability that i's advertisement will be clicked if viewed is Ai ∈ [0, 1]. We refer to Ai as bidder i's relevance, and to Ri = AiXi as bidder i's revenue. The Xi, Ai, and Ri are random variables and we denote their realizations by xi, αi, and ri respectively. The probability that an advertisement will be viewed if placed in slot j is γj ∈ [0, 1]. We assume γ1 > γ2 > ... > γK. Hence bidder i's advertisement will have a clickthrough rate of γjαi if placed in slot j.

9Indeed Kitts et al. [10] find that in their sample of actual click data, the correlation between rank and conversion rate is not statistically significant. However, for the purposes of our model it is also important that bidders believe that conversion rate does not vary with rank.
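To make the model concrete, the following small sketch (ours, not from the paper; all function and variable names are illustrative) ranks a set of bidders under both the RBB and RBR rules described earlier, and computes the resulting total value Σj γj ασ(j) xσ(j), the quantity the paper later uses as its efficiency measure.

```python
# Illustrative sketch, not from the paper: gamma[j] is the probability
# slot j is viewed, and each bidder is a (value per click x_i, relevance
# alpha_i) pair, so bidder i's revenue is r_i = alpha_i * x_i.

def allocate(bidders, key):
    """Assign slots greedily: sort bidders by decreasing key."""
    return sorted(bidders, key=key, reverse=True)

def total_value(gamma, allocation):
    """Total value sum_j gamma_j * alpha_j * x_j of an allocation."""
    return sum(g * a * x for g, (x, a) in zip(gamma, allocation))

gamma = [1.0, 0.5]                  # slot view probabilities, gamma_1 > gamma_2
bidders = [(6.0, 0.5), (4.0, 1.0)]  # (x_i, alpha_i) pairs

rbb = allocate(bidders, key=lambda b: b[0])         # rank by declared value
rbr = allocate(bidders, key=lambda b: b[0] * b[1])  # rank by declared revenue

print(total_value(gamma, rbb))  # 5.0
print(total_value(gamma, rbr))  # 5.5
```

With these particular numbers RBR places the (4.0, 1.0) bidder first, since its revenue 4 exceeds the other bidder's revenue 3, and it achieves the higher total value; this is in line with the efficiency result for RBR proved later in Section 3.3.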
Of course, an advertisement does not receive any clicks if it is not allocated a slot. Each bidder's value and relevance pair (Xi, Ai) is independently and identically distributed on [0, x̄] × [0, 1] according to a continuous density function f that has full support on its domain. The density f and slot probabilities γ1,..., γK are common knowledge. Only bidder i knows the realization xi of its value per click Xi. Both bidder i and the seller know the realization αi of Ai, but this realization remains unobservable to the other bidders. We assume that bidders have quasi-linear utility functions. That is, the expected utility to bidder i of obtaining the slot of rank j at a price of b per click is

ui(j, b) = γj αi (xi − b)

If the advertising firms bidding in the slot auction are risk-neutral and have ample liquidity, quasi-linearity is a reasonable assumption. The assumptions of independence, symmetry, and risk-neutrality made above are all quite standard in single-item auction theory [11, 19]. The assumption that clickthrough rate decays monotonically with lower slots--by the same factors for each agent--is unique to the slot auction problem. We view it as a main contribution of our work to show that this assumption allows for tractable analysis of the slot auction problem using standard tools from single-item auction theory. It also allows for interesting results in the complete information case. A common model of decaying clickthrough rate is the exponential decay model, where γk = 1/δ^(k−1) with decay δ > 1. Feng et al.
[7] state that\ntheir actual clickthrough data is fitted extremely well by an exponential decay model with \u03b4 = 1.428.\nOur model lacks budget constraints, which are an important feature of real slot auctions.\nWith budget constraints keyword auctions cannot be considered independently of one another, because the budget must be allocated across multiple keywords--a single advertiser typically bids on multiple keywords relevant to his business.\nIntroducing this element into the model is an important next step for future work .10\n2.2 Auction Formats\nIn a slot auction a bidder provides to the seller a declared value per click \u02dcxi (xi, \u03b1i) which depends on his true value and relevance.\nWe often denote this declared value (bid) by \u02dcxi for short.\nSince a bidder's relevance \u03b1i is observable to the seller, the bidder cannot misrepresent it.\nWe denote the kth highest of the N declared values by \u02dcx (k), and the kth highest of the N declared revenues by \u02dcr (k), where the declared revenue of bidder i is \u02dcri = \u03b1i\u02dcxi.\nWe consider two types of allocation rules, \"rank by bid\" (RBB) and \"rank by revenue\" (RBR): 10Models with budget constraints have begun to appear in this research area.\nAbrams [1] and Borgs et al. [3] design multi-unit auctions for budget-constrained bidders, which can be interpreted as slot auctions, with a focus on revenue optimization and truthfulness.\nMehta et al. 
[14] address the problem of matching user queries to budget-constrained advertisers so as to maximize revenue.

RBB. Slot k goes to bidder i if and only if x̃i = x̃(k).

RBR. Slot k goes to bidder i if and only if r̃i = r̃(k).

We will commonly represent an allocation by a one-to-one function σ: [K] → [N], where [n] is the set of integers {1, 2,..., n}. Hence slot k goes to bidder σ(k). We also consider two different types of payment rules. Note that no matter what the payment rule, a bidder that is not allocated a slot will pay 0 since his listing cannot receive any clicks.

First-price. The bidder allocated slot k, namely σ(k), pays x̃σ(k) per click under both the RBB and RBR allocation rules.

Second-price. If k < N, bidder σ(k) pays x̃σ(k+1) per click under the RBB rule, and pays r̃σ(k+1)/ασ(k) per click under the RBR rule. If k = N, bidder σ(k) pays 0 per click.11

Intuitively, a second-price payment rule sets a bidder's payment to the lowest bid it could have declared while maintaining the same ranking, given the allocation rule used. Overture introduced the first slot auction design in 1997, using a first-price RBB scheme. Google then followed in 2000 with a second-price RBR scheme. In 2002, Overture (at this point acquired by Yahoo!)
then switched to second pricing but still allocates using RBB.\nOne possible reason for the switch is given in Section 4.\nWe assume that ties are broken as follows in the event that two agents make the exact same bid or declare the same revenue.\nThere is a permutation of the agents \u03ba: [N]--+ [N] that is fixed beforehand.\nIf the bids of agents i and j are tied, then agent i obtains a higher slot if and only if \u03ba (i) <\u03ba (j).\nThis is consistent with the practice in real slot auctions where ties are broken by the bidders' order of arrival.\n3.\nINCOMPLETE INFORMATION\n3.1 Incentives\nIt should be clear that with a first-price payment rule, truthful bidding is neither a dominant strategy nor an ex post Nash equilibrium using either RBB or RBR, because this guarantees a payoff of 0.\nThere is always an incentive to shade true values with first pricing.\nThe second-price payment rule is reminiscent of the secondprice (Vickrey) auction used for selling a single item, and in a Vickrey auction it is a dominant strategy for a bidder to reveal his true value for the item [19].\nHowever, using a second-price rule in a slot auction together with either allocation rule above does not yield an incentive-compatible mechanism, either in dominant strategies or ex post Nash equilibrium .12 With a second-price rule there is no incentive for a bidder to bid higher than his true value per click using either RBB or RBR: this either leads to no change 11We are effectively assuming a reserve price of zero, but in practice search engines charge a non-zero reserve price per click.\n12Unless of course there is only a single slot available, since this is the single-item case.\nWith a single slot both RBB and RBR with a second-price payment rule are dominantstrategy incentive-compatible.\nin the outcome, or a situation in which he will have to pay more than his value per click for each click received, resulting in a negative payoff .13 However, with either allocation rule 
there may be an incentive to shade true values with second pricing.

EXAMPLE. There are two agents and two slots. The agents have relevance α1 = α2 = 1, whereas γ1 = 1 and γ2 = 1/2. Agent 1 has a value of x1 = 6 per click, and agent 2 has a value of x2 = 4 per click. Let us first consider the RBB rule. Suppose agent 2 bids truthfully. If agent 1 also bids truthfully, he wins the first slot at a price of 4 per click and obtains a payoff of γ1(6 − 4) = 2. However, if he shades his bid down below 4, he obtains the second slot at a cost of 0 per click, yielding a payoff of γ2 · 6 = 3. Since the agents have equal relevance, the exact same situation holds with the RBR rule. Hence truthful bidding is not a dominant strategy in either format, and neither is it an ex post Nash equilibrium.

To find payments that make RBB and RBR dominant-strategy incentive-compatible, we can apply Holmstrom's lemma [9] (see also chapter 3 in Milgrom [15]). Under the restriction that a bidder with value 0 per click does not pay anything (even if he obtains a slot, which can occur if there are as many slots as bidders), this lemma implies that there is a unique payment rule that achieves dominant-strategy incentive compatibility for either allocation rule. For RBB, the bidder allocated slot k is charged per click

(1/γk) [ Σ_{i=k+1}^{K} (γ_{i−1} − γi) x̃(i) + γK x̃(K+1) ]   (1)

Note that if K = N, x̃(K+1) = 0 since there is no (K+1)th bidder. For RBR, the bidder allocated slot k is charged per click

(1/(γk ασ(k))) [ Σ_{i=k+1}^{K} (γ_{i−1} − γi) r̃(i) + γK r̃(K+1) ]   (2)

Using payment rule (2) and RBR, the auctioneer is aware of the true revenues of the bidders (since they reveal their values truthfully), and hence ranks them according to their true revenues. We show in Section 3.3 that this allocation is in fact efficient. Since the VCG mechanism is the unique mechanism that is efficient, truthful, and ensures bidders with value 0 pay nothing (by the Green-Laffont theorem [8]), the RBR rule and payment scheme (2) constitute exactly the VCG mechanism. In the VCG mechanism an agent pays the externality he imposes on others. To
understand payment (2) in this sense, note that the first term is the added utility (due to an increased clickthrough rate) agents in slots k + 1 to K would receive if they were all to move up a slot; the last term is the utility that the agent with the K +1 st revenue would receive by obtaining the last slot as opposed to nothing.\nThe leading coefficient simply reduces the agent's expected payment to a payment per click.\n13In a dynamic setting with second pricing, there may be an incentive to bid higher than one's true value in order to exhaust competitors' budgets.\nThis phenomenon is commonly called \"bid jamming\" or \"antisocial bidding\" [4].\n3.2 Equilibrium Analysis\nTo understand the efficiency and revenue properties of the various auction formats, we must first understand which rankings of the bidders occur in equilibrium with different allocation and payment rule combinations.\nThe following lemma essentially follows from the Monotonic Selection Theorem by Milgrom and Shannon [16].\nAs a consequence of this lemma, we find that RBB and RBR auctions allocate the slots greedily by the true values and revenues of the agents, respectively (whether using firstor second-price payment rules).\nThis will be relevant in Section 3.3 below.\nFor a first-price payment rule, we can explicitly derive the symmetric Bayes-Nash equilibrium bid functions for RBB and RBR auctions.\nThe purpose of this exercise is to lend qualitative insights into the parameters that influence an agent's bidding, and to derive formulae for the expected revenue in RBB and RBR auctions in order to make a revenue ranking of these two allocation rules (in Section 3.4).\nLet G (y) be the expected resulting clickthrough rate, in a symmetric equilibrium of the RBB auction (with either payment rule), to a bidder with value y and relevance \u03b1 = 1.\nLet H (y) be the analogous quantity for a bidder with revenue y and relevance 1 in a RBR auction.\nBy Lemma 1, a bidder with value y will obtain 
slot k in a RBB auction if y is the kth highest of the true realized values. The same applies in a RBR auction when y is the kth highest of the true realized revenues. Let FX(y) be the distribution function for value, and let FR(y) be the distribution function for revenue. The probability that y is the kth highest out of N values is

C(N−1, k−1) (1 − FX(y))^(k−1) FX(y)^(N−k)

whereas the probability that y is the kth highest out of N revenues is the same formula with FR replacing FX. Hence we have

G(y) = Σ_{k=1}^{K} γk C(N−1, k−1) (1 − FX(y))^(k−1) FX(y)^(N−k)

The H function is analogous to G with FR replacing FX. In the two propositions that follow, g and h are the derivatives of G and H respectively. We omit the proof of the next proposition, because it is almost identical to the derivation of the equilibrium bid in the single-item case (see Krishna [11], Proposition 2.2).

PROPOSITION 1. The symmetric Bayes-Nash equilibrium strategies in a first-price RBB auction are given by

x̃V(x) = (1/G(x)) ∫_0^x y g(y) dy

The first-price equilibrium above closely parallels the first-price equilibrium in the single-item model. With a single item g is the density of the second highest value among all N agent values, whereas in a slot auction it is a weighted combination of the densities for the second, third, etc.
highest values.\nNote that the symmetric Bayes-Nash equilibrium bid in a first-price RBB auction does not depend on a bidder's relevance \u03b1.\nTo see clearly why, note that a bidder chooses a bid b so as to maximize the objective\nand here \u03b1 is just a leading constant factor.\nSo dropping it does not change the set of optimal solutions.\nHence the equilibrium bid depends only on the value x and function G, and G in turn depends only on the marginal cumulative distribution of value FX.\nSo really only the latter needs to be common knowledge to the bidders.\nOn the other hand, we will now see that information about relevance is needed for bidders to play the equilibrium in the first-price RBR auction.\nSo the informational requirements for a first-price RBB auction are much weaker than for a first-price RBR auction: in the RBB auction a bidder need not know his own relevance, and need not know any distributional information over others' relevance in order to play the equilibrium.\nAgain we omit the next proposition's proof since it is so similar to the one above.\nPROPOSITION 2.\nThe symmetric Bayes-Nash equilibrium strategies in a first-price RBR auction are given by\nHere it can be seen that the equilibrium bid is increasing with x, but not necessarily with \u03b1.\nThis should not be much of a concern to the auctioneer, however, because in any case the declared revenue in equilibrium is always increasing in the true revenue.\nIt would be interesting to obtain the equilibrium bids when using a second-price payment rule, but it appears that the resulting differential equations for this case do not have a neat analytical solution.\nNonetheless, the same conclusions about the informational requirements of the RBB and RBR rules still hold, as can be seen simply by inspecting the objective function associated with an agent's bidding problem for the second-price case.\n3.3 Efficiency\nA slot auction is efficient if in equilibrium the sum of the bidders' revenues 
from their allocated slots is maximized. Using symmetry as our equilibrium selection criterion, we find that the RBB auction is not efficient with either payment rule.

CLAIM 2. The RBB auction is not efficient with either first or second pricing.

EXAMPLE. There are two agents and one slot, with γ1 = 1. Agent 1 has a value of x1 = 6 per click and relevance α1 = 1/2. Agent 2 has a value of x2 = 4 per click and relevance α2 = 1. By Lemma 1, agents are ranked greedily by value. Hence agent 1 obtains the lone slot, for a total revenue of 3 to the agents. However, it is most efficient to allocate the slot to agent 2, for a total revenue of 4. Examples with more agents or more slots are simple to construct along the same lines.

On the other hand, under our assumptions on how clickthrough rate decreases with lower rank, the RBR auction is efficient with either payment rule.

PROOF. Since by Lemma 1 the agents' equilibrium bids are increasing functions of their revenues in the RBR auction, slots are allocated greedily according to true revenues. Let σ be a non-greedy allocation. Then there are slots s, t with s < t and rσ(s) < rσ(t). We can switch the agents in slots s and t to obtain a new allocation, and the difference between the total revenue in this new allocation and the original allocation's total revenue is

(γs − γt)(rσ(t) − rσ(s))

Both parenthesized terms above are positive. Hence the switch has increased the total revenue to the bidders. If we continue to perform such switches, we will eventually reach a greedy allocation of greater revenue than the initial allocation. Since the initial allocation was arbitrary, it follows that a greedy allocation is always efficient, and hence the RBR auction's allocation is efficient. Note that the assumption that clickthrough rate decays monotonically by the same factors γ1,..., γK for all agents is crucial to this result. A greedy allocation scheme does not necessarily find an efficient solution if
the clickthrough rates are monotonically decreasing in an independent fashion for each agent.

3.4 Revenue

To obtain possible revenue rankings for the different auction formats, we first note that when the allocation rule is fixed to RBB, then using either a first-price, second-price, or truthful payment rule leads to the same expected revenue in a symmetric, increasing Bayes-Nash equilibrium. Because a RBB auction ranks agents by their true values in equilibrium for any of these payment rules (by Lemma 1), it follows that expected revenue is the same for all these payment rules, following arguments that are virtually identical to those used to establish revenue equivalence in the single-item case (see e.g. Proposition 3.1 in Krishna [11]). The same holds for RBR auctions; however, the revenue ranking of the RBB and RBR allocation rules is still unclear. Because of this revenue equivalence principle, we can choose whichever payment rule is most convenient for the purpose of making revenue comparisons. Using Propositions 1 and 2, it is a simple matter to derive formulae for the expected revenue under both allocation rules. The payment of an agent in a RBB auction is

mV(x, α) = α G(x) x̃V(x)

The expected revenue is then N · E[mV(X, A)], where the expectation is taken with respect to the joint density of value and relevance. The expected revenue formula for RBR auctions is entirely analogous using x̃R(x, α) and the H function. With these in hand we can obtain revenue rankings for specific numbers of bidders and slots, and specific distributions over values and relevance.

EXAMPLE. Assume there are 2 bidders, 2 slots, and that γ1 = 1, γ2 = 1/2. Assume that value-relevance pairs are uniformly distributed over [0, 1] × [0, 1]. For such a distribution with a closed-form formula, it is most convenient to use the revenue formulae just derived. RBB dominates RBR in terms of revenue for these parameters. The formula for the expected revenue in a RBB auction
yields 1/12, whereas for RBR auctions we have 7/108. Assume instead that with probability 1/2 an agent's value-relevance pair is (1, 1/2), and that with probability 1/2 it is (1/2, 1). In this scenario it is more convenient to appeal to formulae (1) and (2). In a truthful auction the second agent will always pay 0. According to (1), in a truthful RBB auction the first agent makes an expected payment of

(γ1 − γ2) E[Aσ(1)] E[Xσ(2)]

where we have used the fact that value and relevance are independently distributed for different agents. The expected relevance of the agent with the highest value is E[Aσ(1)] = 5/8. The expected second highest value is also E[Xσ(2)] = 5/8. The expected revenue for a RBB auction here is then 25/128. According to (2), in a truthful RBR auction the first agent makes an expected payment of

(γ1 − γ2) E[Rσ(2)]

In expectation the second highest revenue is E[Rσ(2)] = 1/2, so the expected revenue for a RBR auction is 1/4. Hence in this case the RBR auction yields higher expected revenue.14,15 This example suggests the following conjecture: when value and relevance are either uncorrelated or positively correlated, RBB dominates RBR in terms of revenue. When value and relevance are negatively correlated, RBR dominates.

4. COMPLETE INFORMATION

In typical slot auctions such as those run by Yahoo! and Google, bidders can adjust their bids up or down at any time. As Börgers et al. [2] and Edelman et al.
[6] have noted, this can be viewed as a continuous-time process in which bidders learn each other's bids.\nIf the process stabilizes the result can then be modeled as a Nash equilibrium in pure strategies of the static one-shot game of complete information, since each bidder will be playing a best-response to the others' bids .16 This argument seems especially appropriate for Yahoo!'s slot auction design where all bids are 14To be entirely rigorous and consistent with our initial assumptions, we should have constructed a continuous probability density with full support over an appropriate domain.\nTaking the domain to be e.g. [0, 1] x [0, 1] and a continuous density with full support that is sufficiently concentrated around (1, 1\/2) and (1\/2, 1), with roughly equal mass around both, would yield the same conclusion.\n15Claim 3 should serve as a word of caution, because Feng et al. [7] find through their simulations that with a bivariate normal distribution over value-relevance pairs, and with 5 slots, 15 bidders, and \u03b4 = 2, RBR dominates RBB in terms of revenue for any level of correlation between value and relevance.\nHowever, they assume that bidding behavior in a second-price slot auction can be well approximated by truthful bidding.\n16We do not claim that bidders will actually learn each others' private information (value and relevance), just that for a stable set of bids there is a corresponding equilibrium of the complete information game.\nmade public.\nGoogle keeps bids private, but experimentation can allow one to discover other bids, especially since second pricing automatically reveals to an agent the bid of the agent ranked directly below him.\n4.1 Equilibrium Analysis\nIn this section we ask whether a pure-strategy Nash equilibrium exists in a RBB or RBR slot auction, with either first or second pricing.\nBefore dealing with the first-price case there is a technical issue involving ties.\nIn our model we allow bids to be nonnegative real numbers 
for mathematical convenience, but this can become problematic because there is then no bid that is "just higher" than another. We brush over such issues by assuming that an agent can bid "infinitesimally higher" than another. This is imprecise but allows us to focus on the intuition behind the result that follows. See Reny [17] for a full treatment of such issues. For the remainder of the paper, we assume that there are as many slots as bidders. The following result shows that there can be no pure-strategy Nash equilibrium with first pricing.17 Note that the argument holds for both RBB and RBR allocation rules. For RBB, bids should be interpreted as declared values, and for RBR as declared revenues.

PROOF. Let σ: [K] → [N] be the allocation of slots to the agents resulting from their bids. Let ri and bi be the revenue and bid of the agent ranked ith, respectively. Note that we cannot have bi > bi+1, or else the agent in slot i can make a profitable deviation by instead bidding bi − ε > bi+1 for small enough ε > 0. This does not change its allocation, but increases its profit. Hence we must have bi = bi+1 (i.e.
with one bidder bidding infinitesimally higher than the other). Since this holds for any two consecutive bidders, it follows that in a Nash equilibrium all bidders must be bidding 0 (since the bidder ranked last matches the bid directly below him, which is 0 by default because there is no such bid). But this is impossible: consider the bidder ranked last. The identity of this bidder is always clear given the deterministic tie-breaking rule. This bidder can obtain the top spot and increase his revenue by (γ1 − γK)rK > 0 by bidding some ε > 0, and for small enough ε this is necessarily a profitable deviation. Hence there is no Nash equilibrium in pure strategies.

On the other hand, we find that in a second-price slot auction there can be a multitude of pure strategy Nash equilibria. The next two lemmas give conditions that characterize the allocations that can occur as a result of an equilibrium profile of bids, given fixed agent values and revenues. Then if we can exhibit an allocation that satisfies these conditions, there must exist at least one equilibrium. We first consider the RBR case.

17Börgers et al. [2] have proven this result in a model with three bidders and three slots, and we generalize their argument. Edelman et al.
[6] also point out this non-existence phenomenon. They only illustrate the fact with an example because the result is quite immediate.

PROOF. There exists a desired vector b which constitutes a Nash equilibrium if and only if the following set of inequalities can be satisfied (the variables are the πi and bj):

Here rσ(i) is the revenue of the agent allocated slot i, and πi and bi may be interpreted as this agent's surplus and declared revenue, respectively. We first argue that constraints (6) can be removed, because the inequalities above can be satisfied if and only if the inequalities without (6) can be satisfied. The necessary direction is immediate. Assume we have a vector (π, b) which satisfies all inequalities above except (6). Then there is some i for which bi < bi+1. Construct a new vector (π, b′) identical to the original except with b′i+1 = bi. We now have b′i = b′i+1. An agent in slot k < i sees the price of slot i decrease from bi+1 to b′i+1 = bi, but this does not make i more preferred than k to this agent because we have πk ≥ γi−1(rσ(k) − bi) ≥ γi(rσ(k) − bi) = γi(rσ(k) − b′i+1) (i.e.
because the agent in slot k did not originally prefer slot i − 1 at price bi, he will not prefer slot i at price bi). A similar argument applies for agents in slots k > i + 1. The agent in slot i sees the price of this slot go down, which only makes it more preferred. Finally, the agent in slot i + 1 sees no change in the price of any slot, so his slot remains most preferred. Hence inequalities (3)--(5) remain valid at (π, b′). We first make this change to the bi+1 where bi < bi+1 and index i is smallest. We then recursively apply the change until we eventually obtain a vector that satisfies all inequalities. We safely ignore inequalities (6) from now on. By the Farkas lemma, the remaining inequalities can be satisfied if and only if there is no vector z such that

Note that a variable of the form zσ(i)i appears at most once in a constraint of type (8), so such a variable can never be positive. Also, zσ(i)1 = 0 for all i ≠ 1 by constraint (7), since such variables never appear with another of the form zσ(i)i. Now if we wish to raise zσ(i)j above 0 by one unit for j ≠ i, we must lower zσ(i)i by one unit because of the constraint of type (8). Because γj rσ(i) ≤ γi rσ(i) for i < j, raising zσ(i)j with i < j while adjusting other variables to maintain feasibility cannot make the objective Σ_{i,j} (γj rσ(i)) zσ(i)j positive. If this objective is positive, then this is due to some component zσ(i)j with i > j being positive. Now for the constraints of type (7), if i > j then zσ(i)j appears with zσ(j−1)j−1 (for 1 < j < N). So to raise the former variable γj^−1 units and maintain feasibility, we must (I) lower zσ(i)i by γj^−1 units, and (II) lower zσ(j−1)j−1 by

for 2 ≤ j ≤ N − 1 and i > j, raising some zσ(i)j with i > j cannot make the objective positive, and there is no z that
satisfies all the inequalities above.\nConversely, if some inequality (9) does not hold, the objective can be made positive by raising the corresponding z_{σ(i),j} and adjusting other variables so that feasibility is just maintained.\nBy a slight reindexing, inequalities (9) yield the statement of the theorem.\nThe RBB case is entirely analogous.\nPROOF SKETCH.\nThe proof technique is the same as in the previous lemma.\nThe desired Nash equilibrium exists if and only if a related set of inequalities can be satisfied; by the Farkas lemma, this occurs if and only if an alternate set of inequalities cannot be satisfied.\nThe conditions that determine whether the latter holds are given in the statement of the lemma.\nThe two lemmas above immediately lead to the following result.\nPROOF.\nFirst consider RBB.\nSuppose agents are ranked according to their true values.\nSince x_{σ(i)} ≤ x_{σ(j)} for i > j, the system of inequalities in Lemma 3 is satisfied, and the allocation is the result of some Nash equilibrium bid profile.\nBy the same type of argument, but appealing to Lemma 2 for RBR, there exists a Nash equilibrium bid profile such that bidders are ranked according to their true revenues.\nBy Theorem 1, this latter allocation is efficient.\nThis theorem establishes existence but not uniqueness.\nIndeed, we expect that in many cases there will be multiple allocations (and hence equilibria) which satisfy the conditions of Lemmas 2 and 3.\nIn particular, not all equilibria of a second-price RBR auction will be efficient.\nFor instance, according to Lemma 2, with two agents and two slots any allocation can arise in an RBR equilibrium because no constraints apply.\nTheorems 2 and 3 taken together provide a possible explanation for Yahoo!'s switch from first to second pricing.\nWe saw in Section 3.1 that this does not induce truthfulness from bidders.\nWith first pricing, there will always be some bidder that feels compelled to adjust his bid.\nSecond pricing is 
more convenient because an equilibrium can be reached, and this reduces the cost of bid management.\n4.2 Efficiency\nFor a given allocation rule, we call the allocation that would result if the bidders reported their values truthfully the standard allocation.\nHence in the standard RBB allocation bidders are ranked by true values, and in the standard RBR allocation they are ranked by true revenues.\nAccording to Lemmas 2 and 3, a ranking that results from a Nash equilibrium profile can only deviate from the standard allocation by having agents with relatively similar values or revenues switch places.\nThat is, if r_i > r_j then with RBR agent j can be ranked higher than i only if the ratio r_j/r_i is sufficiently large; similarly for RBB.\nThis suggests that the value of an equilibrium allocation cannot differ too much from the value obtained in the standard allocation, and the following theorem confirms this.\nFor an allocation σ of slots to agents, we denote its total value by f(σ) = Σ_{i=1}^N γ_i r_{σ(i)}.\nWe denote by g(σ) = Σ_{i=1}^N γ_i x_{σ(i)} allocation σ's value when assuming all agents have identical relevance, normalized to 1.\nLet L be defined by\n(where by default γ_{N+1} = 0).\nLet η_x and η_r be the standard allocations when using RBB and RBR, respectively.\nTHEOREM 4.\nFor an allocation σ that results from a pure-strategy Nash equilibrium of a second-price RBR slot auction, we have f(σ) ≥ L f(η_r).\nPROOF.\nWe number the agents so that agent i has the i-th highest revenue, so r_1 ≥ r_2 ≥ ... ≥ r_N.\nHence the standard allocation has value f(η_r) = Σ_{i=1}^N γ_i r_i.\nTo prove the theorem, we will make repeated use of the fact that (Σ_k a_k)/(Σ_k b_k) ≥ min_k (a_k/b_k) when the a_k and b_k are positive.\nNote that according to Lemma 2, if agent i lies at least two slots below slot j, then r_{σ(j)} ≥ r_i (1 − γ_{j+2}/γ_{j+1}).\nIt may be the case that for some slot i, we have σ(i) > i and for slots k > i + 1 we 
have σ(k) > i.\nWe then say that slot i is inverted.\nLet S be the set of agents with indices at least i + 1; there are N − i of these.\nIf slot i is inverted, it is occupied by some agent from S. Also, all slots strictly lower than i + 1 must be occupied by the remaining agents from S, since σ(k) > i for k ≥ i + 2.\nThe agent in slot i + 1 must then have an index σ(i + 1) ≤ i (note this means slot i + 1 cannot be inverted).\nNow there are two cases.\nIn the first case we have σ(i) = i + 1.\nThen\nIn the second case we have σ(i) > i + 1.\nThen, since all agents in S except the one in slot i lie strictly below slot i + 1, and the agent in slot i is not agent i + 1, it must be that agent i + 1 is in a slot strictly below slot i + 1.\nThis means that it is at least two slots below the agent that actually occupies slot i, and by Lemma 2 we then have r_{σ(i)} > r_{i+1} (1 − γ_{i+2}/γ_{i+1}).\nIf slot i is not inverted, then on one hand we may have σ(i) < i, in which case r_{σ(i)}/r_i > 1.\nOn the other hand we may have σ(i) > i but there is some agent with index j < i that lies at least two slots below slot i.\nThen by Lemma 2,\nWe write i ∈ I if slot i is inverted, and i ∈ Ī if neither i nor\nand this completes the proof.\nNote that for RBR, the standard value is also the efficient value by Theorem 1.\nAlso note that for an exponential decay model, L = min{δ − 1, 1 − 1/δ}.\nWith δ = 1.428 (see Section 2.1), the factor is L ≈ 1/3.34, so the total value in a pure-strategy Nash equilibrium of a second-price RBR slot auction is always within a factor of 3.34 of the efficient value with such a discount.\nAgain for RBB we have an analogous result.\nTHEOREM 5.\nFor an allocation σ that results from a pure-strategy Nash equilibrium of a second-price RBB slot auction, we have g(σ) ≥ L g(η_x).\nPROOF SKETCH.\nSimply substitute bidder values for bidder revenues in the proof of Theorem 4, and 
appeal to Lemma 3.\n5.\nCONCLUSIONS\nThis paper analyzed stylized versions of the slot auction designs currently used by Yahoo! and Google, namely \"rank by bid\" (RBB) and \"rank by revenue\" (RBR), respectively.\nWe also considered first and second pricing rules together with each of these allocation rules, since both have been used historically.\nWe first studied the \"short-run\" setting with incomplete information, corresponding to the case where agents have just approached the mechanism.\nOur equilibrium analysis revealed that RBB has much weaker informational requirements than RBR, because bidders need not know any information about relevance (even their own) to play the Bayes-Nash equilibrium.\nHowever, RBR leads to an efficient allocation in equilibrium, whereas RBB does not.\nWe showed that for an arbitrary distribution over value and relevance, no revenue ranking of RBB and RBR is possible.\nWe hope that the tools we used to establish these results (revenue equivalence, the form of first-price equilibria, the truthful payment rules) will help others wanting to pursue further analyses of slot auctions.\nWe also studied the \"long-run\" case where agents have experimented with their bids and each settled on one they find optimal.\nWe argued that a stable set of bids in this setting can be modeled as a pure-strategy Nash equilibrium of the static game of complete information.\nWe showed that no pure-strategy equilibrium exists with either RBB or RBR using first pricing, but that with second pricing there always exists such an equilibrium (in the case of RBR, an efficient equilibrium).\nIn general second pricing allows for multiple pure-strategy equilibria, but we showed that the value of such equilibria diverges by only a constant factor from the value obtained if all agents bid truthfully (which in the case of RBR is the efficient value).\n6.\nFUTURE WORK\nIntroducing budget constraints into the model is a natural next step for future work.\nThe 
complication here lies in the fact that budgets are often set for entire campaigns rather than single keywords.\nAssuming that the optimal choice of budget can be made independent of the choice of bid for a specific keyword, it can be shown that it is a dominant strategy to report this optimal budget with one's bid.\nThe problem is then to ascertain that bids and budgets can indeed be optimized separately, or to find a plausible model where deriving equilibrium bids and budgets together is tractable.\nIdentifying a condition on the distribution over value and relevance that actually does yield a revenue ranking of RBB and RBR (such as correlation between value and relevance, perhaps) would yield a more satisfactory characterization of their relative revenue properties.\nPlacing bounds on the revenue obtained in a complete information equilibrium is also a relevant question.\nBecause the incomplete information case is such a close generalization of the most basic single-item auction model, it would be interesting to see which standard results from single-item auction theory (e.g. results with risk-averse bidders, an endogenous number of bidders, asymmetries, etc. 
...)\nautomatically generalize and which do not, to fully understand the structural differences between single-item and slot auctions.","keyphrases":["altern slot auction design","sponsor search","sponsor search","ad list","rank by bid","rank by revenu","incomplet inform","second price","auction-style mechan","second-price payment rule","equilibrium multitud","diverg of econom valu","diverg of valu","combin market capit","resurg onlin advertis industri","web search engin","pai per click","search engin","slot alloc","auction theori"],"prmu":["P","P","P","P","P","P","P","P","M","M","R","R","R","U","M","M","U","M","R","M"]} {"id":"J-55","title":"From Optimal Limited To Unlimited Supply Auctions","abstract":"We investigate the class of single-round, sealed-bid auctions for a set of identical items to bidders who each desire one unit. We adopt the worst-case competitive framework defined by [9, 5] that compares the profit of an auction to that of an optimal single-price sale of at least two items. In this paper, we first derive an optimal auction for three items, answering an open question from [8]. Second, we show that the form of this auction is independent of the competitive framework used. Third, we propose a schema for converting a given limited-supply auction into an unlimited supply auction. Applying this technique to our optimal auction for three items, we achieve an auction with a competitive ratio of 3.25, which improves upon the previously best-known competitive ratio of 3.39 from [7]. Finally, we generalize a result from [8] and extend our understanding of the nature of the optimal competitive auction by showing that the optimal competitive auction occasionally offers prices that are higher than all bid values.","lvl-1":"From Optimal Limited To Unlimited Supply Auctions Jason D. 
Hartline Microsoft Research 1065 La Avenida, Mountain View, CA 94043 hartline@microsoft.com Robert McGrew ∗ Computer Science Department Stanford University Stanford, CA 94305 bmcgrew@stanford.edu ABSTRACT We investigate the class of single-round, sealed-bid auctions for a set of identical items to bidders who each desire one unit.\nWe adopt the worst-case competitive framework defined by [9, 5] that compares the profit of an auction to that of an optimal single-price sale of at least two items.\nIn this paper, we first derive an optimal auction for three items, answering an open question from [8].\nSecond, we show that the form of this auction is independent of the competitive framework used.\nThird, we propose a schema for converting a given limited-supply auction into an unlimited supply auction.\nApplying this technique to our optimal auction for three items, we achieve an auction with a competitive ratio of 3.25, which improves upon the previously best-known competitive ratio of 3.39 from [7].\nFinally, we generalize a result from [8] and extend our understanding of the nature of the optimal competitive auction by showing that the optimal competitive auction occasionally offers prices that are higher than all bid values.\nCategories and Subject Descriptors F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems; J.4 [Social and Behavioral Sciences]: Economics General Terms Algorithms, Economics, Theory 1.\nINTRODUCTION The research area of optimal mechanism design looks at designing a mechanism to produce the most desirable outcome for the entity running the mechanism.\nThis problem is well studied in auction design, where the optimal mechanism is the one that brings the seller the most profit.\nHere, the classical approach is to design such a mechanism given the prior distribution from which the bidders' preferences are drawn (see, e.g., [12, 4]).\nRecently Goldberg et al. 
[9] introduced the use of worst-case competitive analysis (see, e.g., [3]) to analyze the performance of auctions that have no knowledge of the prior distribution.\nThe goal of such work is to design an auction that achieves a large constant fraction of the profit attainable by an optimal mechanism that knows the prior distribution in advance.\nPositive results in this direction are fueled by the observation that in auctions for a number of identical units, much of the distribution from which the bidders are drawn can be deduced on the fly by the auction as it is being run [9, 14, 2].\nThe performance of an auction in such a worst-case competitive analysis is measured by its competitive ratio, the ratio between a benchmark performance and the auction's performance on the input distribution that maximizes this ratio.\nThe holy grail of the worst-case competitive analysis of auctions is the auction that achieves the optimal competitive ratio (as small as possible).\nSince [9] this search has led to improved understanding of the nature of the optimal auction, the techniques for on-the-fly pricing in these scenarios, and the competitive ratio of the optimal auction [5, 7, 8].\nIn this paper we continue this line of research by improving in all of these directions.\nFurthermore, we give evidence corroborating the conjecture that the form of the optimal auction is independent of the benchmark used in the auction's competitive analysis.\nThis result further validates the use of competitive analysis in gauging auction performance.\nWe consider the single-item, multi-unit, unit-demand auction problem.\nIn such an auction there are many units of a single item available for sale to bidders who each desire only one unit.\nEach bidder has a valuation representing how much the item is worth to him.\nThe auction is performed by soliciting a sealed bid from each of the bidders and deciding on the allocation of units to bidders and the prices to be paid by the bidders.\nThe bidders 
are assumed to bid so as to maximize their personal utility, the difference between their valuation and the price they pay.\nTo handle the problem of designing and analyzing auctions where bidders may falsely declare their valuations to get a better deal, we will adopt the solution concept of truthful mechanism design (see, e.g., [9, 15, 13]).\nIn a truthful auction, revealing one's true valuation as one's bid is an optimal strategy for each bidder regardless of the bids of the other bidders.\nIn this paper, we will restrict our attention to truthful (a.k.a. incentive compatible or strategyproof) auctions.\nA particularly interesting special case of the auction problem is the unlimited supply case.\nIn this case the number of units for sale is at least the number of bidders in the auction.\nThis is natural for the sale of digital goods where there is negligible cost for duplicating and distributing the good.\nPay-per-view television and downloadable audio files are examples of such goods.\nThe competitive framework introduced in [9] and further refined in [5] uses the profit of the optimal omniscient single-priced mechanism that sells at least two units as the benchmark for competitive analysis.\nThe assumption that two or more units are sold is necessary because in the worst case it is impossible to obtain a constant fraction of the profit of the optimal mechanism when it sells only one unit [9].\nIn this framework for competitive analysis, an auction is said to be β-competitive if it achieves a profit that is within a factor of β ≥ 1 of the benchmark profit on every input.\nThe optimal auction is the one which is β-competitive with the minimum value of β.\nPrevious to this work, the best known auction for the unlimited supply case had a competitive ratio of 3.39 [7] and the best lower bound known was 2.42 [8].\nFor the limited supply case, auctions can achieve substantially better competitive ratios.\nWhen there are only two units 
for sale, the optimal auction gives a competitive ratio of 2, which matches the lower bound for two units.\nWhen there are three units for sale, the best previously known auction had a competitive ratio of 2.3, compared with a lower bound of 13/6 ≈ 2.17 [8].\nThe results of this paper are as follows: • We give the auction for three units that is optimally competitive against the profit of the omniscient single-priced mechanism that sells at least two units.\nThis auction achieves a competitive ratio of 13/6, matching the lower bound from [8] (Section 3).\n• We show that the form of the optimal auction is independent of the benchmark used in competitive analysis.\nIn doing so, we give an optimal three-bidder auction for generalized benchmarks (Section 4).\n• We give a general technique for converting a limited supply auction into an unlimited supply auction where it is possible to use the competitive ratio of the limited supply auction to obtain a bound on the competitive ratio of the unlimited supply auction.\nWe refer to auctions derived from this framework as aggregation auctions (Section 5).\n• We improve on the best known competitive ratio by proving that the aggregation auction constructed from our optimal three-unit auction is 3.25-competitive (Section 5.1).\n• Assuming the conjecture that the optimal limited-supply auction has a competitive ratio matching the lower bound proved in [8], we show that for three or more units this optimal auction will occasionally offer prices that are higher than any bid in the input (Section 6).\nFor the three-unit case, where we have shown that the lower bound of [8] is tight, this observation led to our construction of the optimal three-unit auction.\n2.\nDEFINITIONS AND BACKGROUND We consider single-round, sealed-bid auctions for a set of identical units of an item to bidders who each desire one unit.\nAs mentioned in the introduction, we adopt the game-theoretic solution concept of 
truthful mechanism design.\nA useful simplification of the problem of designing truthful auctions is obtained through the following algorithmic characterization [9].\nRelated formulations to this one have appeared in numerous places in recent literature (e.g., [1, 14, 5, 10]).\nDEFINITION 1.\nGiven a bid vector of n bids, b = (b_1, ..., b_n), let b_{-i} denote the vector b with b_i replaced with a '?', i.e., b_{-i} = (b_1, ..., b_{i−1}, ?, b_{i+1}, ..., b_n).\nDEFINITION 2.\nLet f be a function from bid vectors (with a '?') to prices (non-negative real numbers).\nThe deterministic bid-independent auction defined by f, BI_f, works as follows.\nFor each bidder i: 1.\nSet t_i = f(b_{-i}).\n2.\nIf t_i < b_i, bidder i wins at price t_i.\n3.\nIf t_i > b_i, bidder i loses.\n4.\nOtherwise (t_i = b_i), the auction can either accept the bid at price t_i or reject it.\nA randomized bid-independent auction is a distribution over deterministic bid-independent auctions.\nThe proof of the following theorem can be found, for example, in [5].\nTHEOREM 1.\nAn auction is truthful if and only if it is equivalent to a bid-independent auction.\nGiven this equivalence, we will use the terminology bid-independent and truthful interchangeably.\nFor a randomized bid-independent auction, f(b_{-i}) is a random variable.\nWe denote the probability density of f(b_{-i}) at z by ρ_{b_{-i}}(z).\nWe denote the profit of a truthful auction A on input b as A(b).\nThe expected profit of the auction, E[A(b)], is the sum of the expected payments made by each bidder, which we denote by p_i(b) for bidder i. Clearly, the expected payment of each bidder satisfies p_i(b) = ∫_0^{b_i} x ρ_{b_{-i}}(x) dx.\n2.1 Competitive Framework We now review the competitive framework from [5].\nIn order to evaluate the performance of auctions with respect to the goal of profit maximization, we introduce the optimal single-price omniscient auction F and the related omniscient auction F(2).\nDEFINITION 3.\nGiven a vector b = (b_1, ... 
, b_n), let b_(i) represent the i-th largest value in b.\nThe optimal single-price omniscient auction, F, is defined as follows.\nAuction F on input b determines the value k such that k·b_(k) is maximized.\nAll bidders with b_i ≥ b_(k) win at price b_(k); all remaining bidders lose.\nThe profit of F on input b is thus F(b) = max_{1≤k≤n} k·b_(k).\nIn the competitive framework of [5] and subsequent papers, the performance of a truthful auction is gauged in comparison to F(2), the optimal single-priced auction that sells at least two units.\nThe profit of F(2) is max_{2≤k≤n} k·b_(k).\nThere are a number of reasons to choose this benchmark for comparison; interested readers should see [5] or [6] for a more detailed discussion.\nLet A be a truthful auction.\nWe say that A is β-competitive against F(2) (or just β-competitive) if for all bid vectors b, the expected profit of A on b satisfies E[A(b)] ≥ F(2)(b)/β.\nIn Section 4 we generalize this framework to other profit benchmarks.\n2.2 Scale Invariant and Symmetric Auctions A symmetric auction is one where the auction outcome is unchanged when the input bids arrive in a different permutation.\nGoldberg et al. 
[8] show that a symmetric auction achieves the optimal competitive ratio.\nThis is natural, as the profit benchmark we consider is symmetric, and it allows us to consider only symmetric auctions when looking for the one with the optimal competitive ratio.\nAn auction defined by bid-independent function f is scale invariant if, for all i, all c > 0, and all z, Pr[f(b_{-i}) ≥ z] = Pr[f(c·b_{-i}) ≥ cz].\nIt is conjectured that the assumption of scale invariance is without loss of generality.\nThus, we are motivated to consider symmetric scale-invariant auctions.\nWhen specifying a symmetric scale-invariant auction, we can assume that f is only a function of the relative magnitudes of the n − 1 bids in b_{-i} and that one of the bids, b_j = 1.\nIt will be convenient to specify such auctions via the density function of f(b_{-i}), ρ_{b_{-i}}(z).\nIt is enough to specify such a density function of the form ρ_{1,z_1,...,z_{n−1}}(z) with 1 ≤ z_i ≤ z_{i+1}.\n2.3 Limited Supply Versus Unlimited Supply Following [8], throughout the remainder of this paper we will be making the assumption that the number of bidders is equal to the number of units for sale.\nThis is without loss of generality, as (a) any lower bound that applies to this case also extends to the case where the number of bidders is at least the number of units [8], and (b) there is a reduction from the unlimited supply auction problem to the limited supply auction problem that takes an unlimited supply auction that is β-competitive with F(2) and constructs a limited supply auction, parameterized by the supply, that is β-competitive with the optimal omniscient auction that sells at least two units and at most the supply [6].\nHenceforth, we will assume that we are in the unlimited supply case, and we will examine lower bounds for limited supply problems by placing a restriction on the number of bidders in the auction.\n2.4 Lower Bounds and Optimal Auctions Frequently in this paper, we will refer to the best known lower bound on the competitive ratio of truthful 
auctions: THEOREM 2.\n[8] The competitive ratio of any auction on n bidders is at least 1 − Σ_{i=2}^{n} (−1/n)^{i−1} · (i/(i−1)) · (n−1 choose i−1).\nDEFINITION 4.\nLet Υn denote the n-bidder auction that achieves the optimal competitive ratio.\nThis bound is derived by analyzing the performance of any auction on the following distribution B.\nIn each random bid vector B, each bid B_i is drawn i.i.d. from the distribution such that Pr[B_i ≥ s] = 1/s for all s ≥ 1.\nIn the two-bidder case, this lower bound is 2.\nThis is achieved by Υ2, which is the 1-unit Vickrey auction.1 In the three-bidder case, this lower bound is 13/6.\nIn the next section, we define the auction Υ3 which matches this lower bound.\nIn the four-bidder case, this lower bound is 215/96.\nIn the limit as the number of bidders grows, this lower bound approaches a number which is approximately 2.42.\nIt is conjectured that this lower bound is tight for any number of bidders and that the optimal auction, Υn, matches it.\n1 The 1-unit Vickrey auction sells to the highest bidder at the second highest bid value.\n2.5 Profit Extraction In this section we review the truthful profit extraction mechanism ProfitExtractR.\nThis mechanism is a special case of a general cost-sharing schema due to Moulin and Shenker [11].\nThe goal of profit extraction is, given bids b, to extract a target value R of profit from some subset of the bidders.\nProfitExtractR: Given bids b, find the largest k such that the highest k bidders can equally share the cost R. 
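The closed form in Theorem 2 can be checked for the small cases quoted in the text; a minimal sketch (the function name is ours), assuming the summation reads 1 − Σ_{i=2}^{n} (−1/n)^{i−1} · (i/(i−1)) · C(n−1, i−1):

```python
from fractions import Fraction
from math import comb

def lower_bound(n):
    # Competitive-ratio lower bound of Theorem 2 for n bidders,
    # evaluated exactly with rational arithmetic.
    return 1 - sum(
        Fraction(-1, n) ** (i - 1) * Fraction(i, i - 1) * comb(n - 1, i - 1)
        for i in range(2, n + 1)
    )
```

For n = 2 and n = 3 this reproduces the bounds of 2 and 13/6 quoted above, and the sequence increases toward roughly 2.42 as n grows.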
Charge each of these bidders R/k.\nIf no subset of bidders can cover the cost, the mechanism has no winners.\nImportant properties of this auction are as follows: • ProfitExtractR is truthful.\n• If R ≤ F(b), ProfitExtractR(b) = R; otherwise it has no winners and no revenue.\nWe will use this profit extraction mechanism in Section 5 with the following intuition.\nSuch a profit extractor makes it possible to treat a subset S of bidders as a single bid with value F(S).\nNote that given a single bid, b, a truthful mechanism might offer it price t; if t ≤ b then the bidder wins and pays t, and otherwise the bidder pays nothing (and loses).\nLikewise, a mechanism can offer the set of bidders S a target revenue R.\nIf R ≤ F(2)(S), then ProfitExtractR raises R from S; otherwise, it raises no revenue from S. 3.\nAN OPTIMAL AUCTION FOR THREE BIDDERS In this section we define the optimal auction for three bidders, Υ3, and prove that it indeed matches the known lower bound of 13/6.\nWe follow the definition and proof with a discussion of how this auction was derived.\nDEFINITION 5.\nΥ3 is scale-invariant and symmetric and is given by the bid-independent function with density function\nρ_{1,x}(z) = for x ≤ 3/2: { 1 with probability 9/13; z with probability density g(z) for z > 3/2 }, and for x > 3/2: { 1 with probability 9/13 − ∫_{3/2}^{x} z g(z) dz; x with probability ∫_{3/2}^{x} (z + 1) g(z) dz; z with probability density g(z) for z > x }, where g(z) = (2/13)/(z − 1)^3.\nTHEOREM 3.\nThe Υ3 auction has a competitive ratio of 13/6 ≈ 2.17, which is optimal.\nFurthermore, the auction raises exactly (6/13)·F(2) on every input with non-identical bids.\nPROOF.\nConsider the bids 1, x, y, with 1 < x < y.\nThere are three cases.\nCASE 1 (x < y ≤ 3/2): F(2) = 3.\nThe auction must raise expected revenue of at least 18/13 on these bids.\nThe bidder with valuation x will pay 1 with probability 9/13, and the bidder with valuation y will pay 1 with probability 9/13.\nTherefore Υ3 raises 18/13 on these bids.\nCASE 2 (x ≤ 3/2 < y): F(2) = 3.\nThe auction 
must raise expected revenue of at least 18/13 on these bids.\nThe bidder with valuation x will pay 9/13 − ∫_{3/2}^{y} z g(z) dz in expectation.\nThe bidder with valuation y will pay 9/13 + ∫_{3/2}^{y} z g(z) dz in expectation.\nTherefore Υ3 raises 18/13 on these bids.\nCASE 3 (3/2 < x ≤ y): F(2) = 2x.\nThe auction must raise expected revenue of at least 12x/13 on these bids.\nConsider the revenue raised from all three bidders: E[Υ3(b)] = p(1, x, y) + p(x, 1, y) + p(y, 1, x) = 0 + (9/13 − ∫_{3/2}^{y} z g(z) dz) + (9/13 − ∫_{3/2}^{x} z g(z) dz + x ∫_{3/2}^{x} (z + 1) g(z) dz + ∫_{x}^{y} z g(z) dz) = 18/13 + (x − 2) ∫_{3/2}^{x} z g(z) dz + x ∫_{3/2}^{x} g(z) dz = 12x/13.\nThe final equation comes from substituting in g(z) = (2/13)/(z − 1)^3 and expanding the integrals.\nNote that the fraction of F(2) raised on every input is identical.\nIf any of the inequalities 1 ≤ x ≤ y is not strict, the same proof applies, giving a lower bound on the auction's profit; however, this bound may no longer be tight.\nMotivation for Υ3 In this section, we will conjecture that a particular input distribution is worst-case, and show, as a consequence, that all inputs are worst-case in the optimal auction.\nBy applying this consequence, we will derive an optimal auction for three bidders.\nA truthful, randomized auction on n bidders can be represented by a randomized function f : R^{n−1} → R that maps masked bid vectors to prices in R. 
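The Case 3 computation of Theorem 3 can be sanity-checked by numerical integration; a minimal sketch (helper names are ours), using the fact from the derivation that the y-dependent integrals cancel in the total:

```python
def g(z):
    # Sale-price density of the three-bidder auction above its discrete atoms.
    return (2 / 13) / (z - 1) ** 3

def integrate(f, a, b, steps=100_000):
    # Midpoint rule; accurate enough here for a sanity check.
    h = (b - a) / steps
    return h * sum(f(a + (k + 0.5) * h) for k in range(steps))

def case3_revenue(x):
    # Expected revenue on bids (1, x, y) with 3/2 < x <= y; after
    # cancellation of the y terms this is
    # 18/13 + (x - 2) * int_{3/2}^x z g(z) dz + x * int_{3/2}^x g(z) dz.
    return (18 / 13
            + (x - 2) * integrate(lambda z: z * g(z), 1.5, x)
            + x * integrate(g, 1.5, x))
```

For any x > 3/2 this agrees with 12x/13, i.e. a 6/13 fraction of F(2) = 2x.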
By normalization, we can assume that the lowest possible bid is 1.\nRecall that ρ_{b_{-i}}(z) is the probability density of f(b_{-i}) at z.\nThe optimal auction for the finite auction problem can be found by the following optimization problem, in which the variables are the ρ_{b_{-i}}(z):\nmaximize r subject to Σ_{i=1}^{n} ∫_{1}^{b_i} z ρ_{b_{-i}}(z) dz ≥ r·F(2)(b), ∫_{1}^{∞} ρ_{b_{-i}}(z) dz = 1, and ρ_{b_{-i}}(z) ≥ 0.\nThis set of integral inequalities is difficult to maximize over.\nHowever, by guessing which constraints are tight and which are slack at the optimum, we will be able to derive a set of differential equations for which any feasible solution is an optimal auction.\nAs we discuss in Section 2.4, in [8] the authors define a distribution and use it to find a lower bound on the competitive ratio of the optimal auction.\nFor two bidders, this bid distribution is the worst-case input distribution.\nWe guess (and later verify) that this distribution is the worst-case input distribution for three bidders as well.\nSince this distribution has full support over the set of all bid vectors, and a worst-case distribution puts positive probability only on worst-case inputs, we can therefore assume that all but a measure zero set of inputs are worst-case for the optimal auction.\nIn the optimal two-bidder auction, all inputs with non-identical bids are worst-case, so we will assume the same for three bidders.\nThe guess that these constraints are tight allows us to transform the optimization problem into a feasibility problem constrained by differential equations.\nIf the solution to these equations has value matching the lower bound obtained from the worst-case distribution, then this solution is the optimal auction and our conjectured choice of worst-case distribution is correct.\nIn Section 6 we show that the optimal auction must sometimes place probability mass on sale prices above the highest bid.\nThis motivates considering symmetric scale-invariant auctions for three bidders with probability 
density function ρ_{1,x}(z) of the following form:\nρ_{1,x}(z) = { 1 with discrete probability a(x); x with discrete probability b(x); z with probability density g(z) for z > x }.\nIn this auction, the sale price for the first bidder is either one of the latter two bids, or higher than either bid with a probability density which is independent of the input.\nThe feasibility problem which arises from the linear optimization problem by assuming the constraints are tight is as follows:\na(y) + a(x) + x·b(x) + ∫_{x}^{y} z g(z) dz = r·max(3, 2x) for all x < y, a(x) + b(x) + ∫_{x}^{∞} g(z) dz = 1, a(x) ≥ 0, b(x) ≥ 0, g(z) ≥ 0.\nSolving this feasibility problem gives the auction Υ3 proposed above.\nThe proof of its optimality validates its proposed form.\nFinding a simple restriction on the form of n-bidder auctions for n > 3 under which the optimal auction can be found analytically as above remains an open problem.\n4.\nGENERALIZED PROFIT BENCHMARKS In this section, we widen our focus beyond auctions that compete with F(2) to consider other benchmarks for an auction's profit.\nWe will show that, for three bidders, the form of the optimal auction is essentially independent of the benchmark profit used.\nThis result strongly corroborates the worst-case competitive analysis of auctions by showing that our techniques allow us to derive auctions which are competitive against a broad variety of reasonable benchmarks rather than simply against F(2).\nPrevious work in competitive analysis of auctions has focused on the question of designing the auction with the best competitive ratio against F(2), the profit of the optimal omniscient single-priced mechanism that sells at least two items.\nHowever, it is reasonable to consider other benchmarks.\nFor instance, one might wish to compete against V∗, the profit of the k-Vickrey auction with the optimal-in-hindsight choice of k.2 Alternatively, if an auction is being used as a subroutine in a larger 
mechanism, one might wish to choose the auction which is optimally competitive with a benchmark specific to that purpose.
Recall that F^{(2)}(b) = max_{2≤k≤n} k·b_{(k)}.
We can generalize this definition to G_s, parameterized by s = (s_2, …, s_n) and defined as:

  G_s(b) = max_{2≤k≤n} s_k·b_{(k)}.

When considering G_s we assume without loss of generality that s_i < s_{i+1}, as otherwise the constraint imposed by s_{i+1} is irrelevant.
Note that F^{(2)} is the special case of G_s with s_i = i, and that V* = G_s with s_i = i − 1.
[Footnote 2: Recall that the k-Vickrey auction sells a unit to each of the highest k bidders at a price equal to the (k+1)-st highest bid, b_{(k+1)}, achieving a profit of k·b_{(k+1)}.]
Competing with G_s
We will now design a three-bidder auction Υ_3^{s,t} that achieves the optimal competitive ratio against G_{s,t}.
As before, we will first find a lower bound on the competitive ratio and then design an auction to meet that bound.
We can lower bound the competitive ratio of Υ_3^{s,t} using the same worst-case distribution from [8] that we used against F^{(2)}.
Evaluating the performance of any auction competing against G_{s,t} on this distribution yields the following theorem.
We denote the optimal auction for three bidders against G_{s,t} by Υ_3^{s,t}.
THEOREM 4.
The optimal three-bidder auction, Υ_3^{s,t}, competing against G_{s,t}(b) = max(s·b_{(2)}, t·b_{(3)}), has a competitive ratio of at least (s² + t²)/(2t).
The proof can be found in the appendix.
Similarly, we can find the optimal auction against G_{s,t} using the same technique we used to solve for the three-bidder auction with the best competitive ratio against F^{(2)}.
DEFINITION 6.
Υ_3^{s,t} is scale-invariant and symmetric and is given by the bid-independent function with density function

  For x ≤ t/s:
    ρ_{1,x}(z) =
      1  with probability t²/(s² + t²)
      z  with probability density g(z), for z > t/s

  For x > t/s:
    ρ_{1,x}(z) =
      1  with probability t²/(s² + t²) − ∫_{t/s}^{x} z·g(z) dz
      x  with probability ∫_{t/s}^{x} (z + 1)·g(z) dz
      z  with probability density g(z), for z > x

  where g(x) = (2(t − s)²/(s² + t²)) / (x − 1)³.

THEOREM 5.
Υ_3^{s,t} is (s² + t²)/(2t)-competitive with G_{s,t}.
This auction, like Υ_3, can be derived by reducing the optimization problem to a feasibility problem, guessing that the optimal solution has the form of Υ_3^{s,t} given above, and solving.
The auction is optimal because it matches the lower bound found above.
Note that the form of Υ_3^{s,t} is essentially the same as that of Υ_3, but the probability of each price is scaled depending on the values of s and t.
That our auction for three bidders matches the lower bound computed from the input distribution used in [8] is strong evidence that this input distribution is the worst-case input distribution for any number of bidders and any generalized profit benchmark.
Furthermore, we strongly suspect that for any number of bidders, the form of the optimal auction will be independent of the benchmark used.
5. AGGREGATION AUCTIONS
We have seen that optimal auctions for small cases of the limited-supply model can be found analytically.
In this section, we will construct a schema for turning limited-supply auctions into unlimited-supply auctions with a good competitive ratio.
As discussed in Section 2.5, the existence of a profit extractor, ProfitExtract_R, allows an auction to treat a set of bids S as a single bid with value F(S).
Given n bidders and an auction, A_m, for m < n bidders, we can convert the m-bidder auction into an n-bidder auction by randomly partitioning the bidders into m subsets and then treating each subset as a single bidder (via ProfitExtract_R) and running the m-bidder auction.
DEFINITION 7.
Given a truthful m-bidder auction, A_m, the m-aggregation auction for A_m, Agg_{A_m}, works as follows:
1. Cast each bid
uniformly at random into one of m bins, resulting in bid vectors b^{(1)}, …, b^{(m)}.
2. For each bin j, compute the aggregate bid B_j = F(b^{(j)}). Let B be the vector of aggregate bids, and B_{−j} the aggregate bids for all bins other than j.
3. Compute the aggregate price T_j = f(B_{−j}), where f is the bid-independent function for A_m.
4. For each bin j, run ProfitExtract_{T_j} on b^{(j)}.
Since A_m and ProfitExtract_R are truthful, T_j is computed independently of any bid in bin j, and thus the price offered to any bidder in b^{(j)} is independent of his bid; therefore:
THEOREM 6.
If A_m is truthful, the m-aggregation auction for A_m, Agg_{A_m}, is truthful.
Note that this schema yields a new way of understanding the Random Sampling Profit Extraction (RSPE) auction [5] as the simplest case of an aggregation auction: it is the 2-aggregation auction for Υ_2, the 1-unit Vickrey auction.
To analyze Agg_{A_m}, consider throwing k balls into m labeled bins.
Let k represent a configuration of balls in bins, so that k_i is the number of balls in bin i, and k_{(i)} is the number of balls in the i-th largest bin.
Let K_{m,k} represent the set of all possible configurations of k balls in m bins.
We write the multinomial coefficient of k as (k choose k_1, …, k_m).
The probability that a particular configuration k arises by throwing balls into bins uniformly at random is (k choose k_1, …, k_m)·m^{−k}.
THEOREM 7.
Let A_m be an auction with competitive ratio β. Then the m-aggregation auction for A_m, Agg_{A_m}, raises the following fraction of the optimal revenue F^{(2)}(b):

  E[Agg_{A_m}(b)] / F^{(2)}(b) ≥ min_{k≥2} Σ_{k∈K_{m,k}} F^{(2)}(k)·(k choose k_1, …, k_m) / (β·k·m^k)

PROOF.
By definition, F^{(2)} sells to k ≥ 2 bidders at a single price p. Let k_j be the number of such bidders in b^{(j)}. Clearly, F(b^{(j)}) ≥ p·k_j. Therefore,

  F^{(2)}(F(b^{(1)}), …, F(b^{(m)})) / F^{(2)}(b) ≥ F^{(2)}(p·k_1, …, p·k_m) / (p·k) = F^{(2)}(k_1, …, k_m) / k.

The inequality follows from the monotonicity of F^{(2)}, and the equality from the homogeneity of F^{(2)}.
ProfitExtract_{T_j} will raise T_j if T_j ≤ B_j, and no profit otherwise.
Thus, E[Agg_{A_m}(b)] ≥ E[F^{(2)}(B)]/β.
The theorem follows by rewriting this expectation as a sum over all k in K_{m,k}.
5.1 A 3.25 Competitive Auction
We apply the aggregation auction schema to Υ_3, our optimal auction for three bidders, to achieve an auction with competitive ratio 3.25.
This improves on the previously best known auction, which is 3.39-competitive [7].
THEOREM 8.
The aggregation auction for Υ_3 has competitive ratio 3.25.
PROOF.
By Theorem 7,

  E[Agg_{Υ_3}(b)] / F^{(2)}(b) ≥ min_{k≥2} Σ_{i=1}^{k} Σ_{j=1}^{k−i} F^{(2)}(i, j, k−i−j)·(k choose i, j, k−i−j) / (β·k·3^k)

For k = 2 and k = 3, E[Agg_{Υ_3}(b)] = (2/3)·k/β.
As k increases, E[Agg_{Υ_3}(b)]/F^{(2)} increases as well.
Since we do not expect to find a closed-form formula for the revenue, we lower bound F^{(2)}(b) by 3·b_{(3)}.
Using large deviation bounds, one can show that this lower bound is greater than (2/3)·k/β for large enough k, and the remainder can be shown by explicit calculation.
Plugging in β = 13/6, the competitive ratio is 13/4.
As k increases, the competitive ratio approaches 13/6.
Note that the above bound on the competitive ratio of Agg_{Υ_3} is tight.
To see this, consider the bid vector with two very large and non-identical bids of h and h + ε, with the remaining bids 1.
Given that the competitive ratio of Υ_3 is tight on this example, the ratio of F^{(2)}(b) to the expected revenue of this auction on this input will be exactly 13/4.
5.2 A G_{s,t}-based Aggregation Auction
In this section we show that Υ_3 is not the optimal auction to use in an aggregation auction.
One can do better by choosing the auction that is optimally competitive against a specially tailored benchmark.
To see why this might be the case, notice (Table 1) that the fraction of F^{(2)}(b) raised
when there are k = 2 and k = 3 winning bidders in F^{(2)}(b) is substantially smaller than the fraction of F^{(2)}(b) raised when there are more winners.
This occurs because the expected ratio between F^{(2)}(B) and F^{(2)}(b) is lower in this case, while the competitive ratio of Υ_3 is constant.
If we chose a three-bidder auction that performed better when F^{(2)} has smaller numbers of winners, our aggregation auction would perform better in the worst case.
One approach is to compete against a different benchmark that puts more weight than F^{(2)} on solutions with a small number of winners.
Recall that F^{(2)} is the instance of G_{s,t} with s = 2 and t = 3.
By using the auction that competes optimally against G_{s,t} with s > 2, while holding t = 3, we will raise a higher fraction of revenue on smaller numbers of winning bidders and a lower fraction of revenue on large numbers of winning bidders.
We can numerically optimize the values of s and t in G_{s,t}(b) in order to achieve the best competitive ratio for the aggregation auction.
In fact, this will allow us to improve our competitive ratio slightly.
THEOREM 9.
For an optimal choice of s and t, the aggregation auction for Υ_3^{s,t} is 3.243-competitive.
The proof follows the outline of Theorems 7 and 8 with the optimal choice of s = 2.162 (while t is held constant at 3).
5.3 Further Reducing the Competitive Ratio
There are a number of ways we might attempt to use this aggregation auction schema to continue to push the competitive ratio down.
In this section, we give a brief discussion of several attempts.
5.3.1 Agg_{Υ_m} for m > 3
If the aggregation auction for Υ_2 has a competitive ratio of 4 and the aggregation auction for Υ_3 has a competitive ratio of 3.25, can we improve the competitive ratio by aggregating Υ_4, or Υ_m for larger m?
We conjecture in the negative: for m > 3, the aggregation auction for Υ_m has a larger competitive ratio than the aggregation auction for Υ_3.
The primary
difficulty in proving this conjecture lies in the difficulty of finding a closed-form solution for the formula of Theorem 7.
We can, however, evaluate this formula numerically for different values of m and k, assuming that the competitive ratio for Υ_m matches the lower bound for m given by Theorem 2.
Table 1 shows, for each value of m and k, the fraction of F^{(2)} raised by the aggregation auction Agg_{Υ_m} when there are k winning bidders, assuming the lower bound of Theorem 2 is tight.

   k   m=2      m=3      m=4      m=5      m=6      m=7
   2   0.25*    0.3077*  0.3349   0.3508   0.3612   0.3686
   3   0.25*    0.3077*  0.3349   0.3508   0.3612   0.3686
   4   0.3125   0.3248   0.3349   0.3438   0.3512   0.3573
   5   0.3125   0.3191   0.3244   0.3311   0.3378   0.3439
   6   0.3438   0.321    0.3057*  0.3056   0.311    0.318
   7   0.3438   0.333    0.3081   0.3009   0.3025   0.3074
   8   0.3633   0.3229   0.3109   0.3022   0.3002   0.3024
   9   0.3633   0.3233   0.3057*  0.2977   0.2927   0.292
  10   0.377    0.3328   0.308    0.2952*  0.2866   0.2837
  11   0.377    0.3319   0.3128   0.298    0.2865   0.2813
  12   0.3872   0.3358   0.3105   0.3001   0.2894   0.2827
  13   0.3872   0.3395   0.3092   0.2976   0.2905   0.2841
  14   0.3953   0.3391   0.312    0.2961   0.2888   0.2835
  15   0.3953   0.3427   0.3135   0.2973   0.2882   0.2825
  16   0.4018   0.3433   0.3128   0.298    0.2884   0.2823
  17   0.4018   0.3428   0.3129   0.2967   0.2878   0.282
  18   0.4073   0.3461   0.3133   0.2959   0.2859   0.2808
  19   0.4073   0.3477   0.3137   0.2962   0.2844   0.2789
  20   0.4119   0.3486   0.3148   0.2973   0.2843*  0.2777
  21   0.4119   0.3506   0.3171   0.298    0.2851   0.2775*
  22   0.4159   0.3519   0.3189   0.2986   0.2863   0.2781
  23   0.4159   0.3531   0.3202   0.2995   0.2872   0.2791
  24   0.4194   0.3539   0.3209   0.3003   0.2878   0.2797
  25   0.4194   0.3548   0.3218   0.3012   0.2886   0.2801

Table 1: E[A(b)]/F^{(2)}(b) for Agg_{Υ_m} as a function of k, the optimal number of winners in F^{(2)}(b).
The lowest value for each column is marked with an asterisk.
5.3.2 Convex combinations of Agg_{Υ_m}
As can be seen in Table 1, when m > 3 the worst-case value of k is no longer 2 and 3, but instead an increasing function of m.
An aggregation auction for Υ_m outperforms the aggregation auction for
Υ_3 when there are two or three winning bidders, while the aggregation auction for Υ_3 outperforms the other when there are at least six winning bidders.
Thus, for instance, an auction which randomizes between aggregation auctions for Υ_3 and Υ_4 will have a worst case which is better than that of either auction alone.
Larger combinations of auctions will allow more room to optimize the worst case.
However, we suspect that no convex combination of aggregation auctions will have a competitive ratio lower than 3.
Furthermore, note that we cannot yet claim the existence of a good auction via this technique, as the optimal auction Υ_n for n > 3 is not known and it is only conjectured that the bound given by Theorem 2 and represented in Table 1 is correct for Υ_n.
6. A LOWER BOUND FOR CONSERVATIVE AUCTIONS
In this section, we define a class of auctions that never offer a sale price which is higher than any bid in the input and prove a lower bound on the competitive ratio of these auctions.
As this lower bound is stronger than the lower bound of Theorem 2 for n ≥ 3, it shows that the optimal auction must occasionally charge a sale price higher than any bid in the input.
Specifically, this result partially explains the form of the optimal three-bidder auction.
DEFINITION 8.
We say an auction BI_f is conservative if its bid-independent function f satisfies f(b_{−i}) ≤ max(b_{−i}).
We can now state our lower bound for conservative auctions.
THEOREM 10.
Let A be a conservative auction for n bidders. Then the competitive ratio of A is at least (3n − 2)/n.
COROLLARY 1.
The competitive ratio of any conservative auction for an arbitrary number of bidders is at least three.
For a two-bidder auction, this restriction does not prevent optimality: Υ_2, the 1-unit Vickrey auction, is conservative.
For larger numbers of bidders, however, the restriction to conservative auctions does affect the competitive ratio.
For the
three-bidder case, Υ_3 has competitive ratio 2.17, while the best conservative auction is no better than 2.33-competitive.
The k-Vickrey auction and the Random Sampling Optimal Price auction [9] are conservative auctions.
The Random Sampling Profit Extraction auction [5] and the CORE auction [7], on the other hand, use the ProfitExtract_R mechanism as a subroutine and thus sometimes offer a sale price which is higher than the highest input bid value.
In [8], the authors define a restricted auction as one in which, for any input, the sale prices are drawn from the set of input bid values.
The class of conservative auctions can be viewed as a generalization of the class of restricted auctions, and therefore our result gives lower bounds on the performance of a broader class of auctions.
We will prove Theorem 10 with the aid of the following lemma:
LEMMA 1.
Let A be a conservative auction with competitive ratio 1/r for n bidders. Let h ≫ n, let h_0 = 1, and let h_k = kh for k ≥ 1. Then, for all k and H ≥ kh,

  Pr[f(1, 1, …, 1, H) ≤ h_k] ≥ (nr − 1)/(n − 1) + k·(3nr − 2r − n)/(n − 1).

PROOF.
The lemma is proved by strong induction on k.
First, some notation that will be convenient.
For particular k and H we consider the bid vector b = (1, …, 1, h_k, H) and place bounds on ρ_{b_{−i}}(z).
Since we can assume without loss of generality that the auction is symmetric, we write b_{−1} for b with any one of the 1-valued bids masked.
Similarly, we write b_{−h_k} (resp. b_{−H}) for b with the h_k-valued bid (resp. H-valued bid) masked.
We also let p_1(b), p_{h_k}(b), and p_H(b) denote the expected payment of a 1-valued, h_k-valued, and H-valued bidder in A on b, respectively (by symmetry, the expected payment is the same for all 1-valued bidders).
Base case (k = 0, h_0 = 1): A must raise revenue of at least rn on b = (1, ...
, 1, 1, H):

  rn ≤ p_H(b) + (n − 1)·p_1(b) ≤ 1 + (n − 1)·∫_0^1 x·ρ_{b_{−1}}(x) dx ≤ 1 + (n − 1)·∫_0^1 ρ_{b_{−1}}(x) dx.

The second inequality follows from the conservatism of the underlying auction, which gives p_H(b) ≤ 1.
The base case follows trivially from the final inequality.
Inductive case (k > 0, h_k = kh): Let b = (1, …, 1, h_k, H).
First, we will find an upper bound on p_H(b):

  p_H(b) = ∫_0^1 x·ρ_{b_{−H}}(x) dx + Σ_{i=1}^{k} ∫_{h_{i−1}}^{h_i} x·ρ_{b_{−H}}(x) dx   (1)
         ≤ 1 + Σ_{i=1}^{k} h_i·∫_{h_{i−1}}^{h_i} ρ_{b_{−H}}(x) dx
         ≤ 1 + (3nr − 2r − n)/(n − 1) · Σ_{i=1}^{k−1} ih + kh·( 1 − (nr − 1)/(n − 1) − (k − 1)·(3nr − 2r − n)/(n − 1) )   (2)
         = kh·( n(1 − r)/(n − 1) − ((k − 1)/2)·(3nr − 2r − n)/(n − 1) ) + 1.   (3)

Equation (1) follows from the conservatism of A, and (2) from invoking the strong inductive hypothesis with H = kh and the observation that the maximum possible revenue is found by placing exactly enough probability at each multiple of h to satisfy the constraints of the inductive hypothesis and placing the remaining probability at kh.
Next, we will find a lower bound on p_{h_k}(b) by considering the revenue raised by the bids b.
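The contradiction at the heart of Theorem 10 is easy to see numerically: Lemma 1 forces a conservative auction to reserve probability mass (3nr − 2r − n)/(n − 1) at each price level h, 2h, …, so if r exceeds n/(3n − 2) the cumulative mass passes 1 after roughly 1/ε levels. The following minimal sketch iterates the bound (the function names are ours, for illustration only; this is not part of the proof):

```python
# Illustration of the mass-counting argument behind Lemma 1 / Theorem 10.
# A conservative auction with profit at least r * F^(2)(b) on every input must
# satisfy Pr[f(1,...,1,H) <= h_k] >= (nr-1)/(n-1) + k*(3nr-2r-n)/(n-1).

def required_mass(n: int, r: float, k: int) -> float:
    """Lower bound of Lemma 1 on Pr[sale price <= h_k = k*h]."""
    return (n * r - 1) / (n - 1) + k * (3 * n * r - 2 * r - n) / (n - 1)

def first_contradiction_level(n: int, r: float, max_k: int = 10**6):
    """Smallest k at which the required probability mass exceeds 1,
    or None if no contradiction arises within max_k levels."""
    for k in range(max_k):
        if required_mass(n, r, k) > 1:
            return k
    return None

n = 3
r_bound = n / (3 * n - 2)  # Theorem 10: conservative auctions satisfy r <= n/(3n-2)
print(first_contradiction_level(n, r_bound))         # no contradiction at the bound
print(first_contradiction_level(n, r_bound + 0.01))  # slightly above it: contradiction
```

At r = n/(3n − 2) the per-level increment vanishes and the required mass stays bounded; for any larger r the increment is a positive constant ε, so level k ≈ 1/ε already demands total probability above 1, which is impossible.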
Recall that A must obtain a profit of at least r·F^{(2)}(b) = 2rkh.
Given the upper bounds on the profit from the H-valued bid, equation (3), and from the 1-valued bids, the profit from the h_k-valued bid must be at least:

  p_{h_k}(b) ≥ 2rkh − (n − 2)·p_1(b) − p_H(b) ≥ kh·( 2r − n(1 − r)/(n − 1) + ((k − 1)/2)·(3nr − 2r − n)/(n − 1) ) − O(n).   (4)

In order to lower bound Pr[f(b_{−h_k}) ≤ kh], consider the auction that minimizes it and is consistent with the lower bounds obtained by the strong inductive hypothesis on Pr[f(b_{−h_k}) ≤ ih].
To minimize the constraints implied by the strong inductive hypothesis, we place the minimal amount of probability mass required at each price level.
This gives ρ_{b_{−h_k}} with (nr − 1)/(n − 1) probability at 1 and exactly (3nr − 2r − n)/(n − 1) at each h_i for 1 ≤ i < k.
Thus, the profit from offering prices at most h_{k−1} is (nr − 1)/(n − 1) + (k(k − 1)/2)·h·(3nr − 2r − n)/(n − 1).
In order to satisfy our lower bound (4) on p_{h_k}(b), the auction must put probability mass at least (3nr − 2r − n)/(n − 1) on h_k.
Therefore, the probability that the sale price is no more than kh on the masked bid vector of b = (1, …, 1, kh, H) must be at least (nr − 1)/(n − 1) + k·(3nr − 2r − n)/(n − 1).
Given Lemma 1, Theorem 10 is simple to prove.
PROOF.
Let A be a conservative auction.
Suppose (3nr − 2r − n)/(n − 1) = ε > 0.
Let k = ⌈1/ε⌉, H ≥ kh, and h ≫ n. By Lemma 1, Pr[f(1, ...
, 1, kh, H) ≤ h_k] ≥ (nr − 1)/(n − 1) + kε > 1.
But this is a contradiction, so (3nr − 2r − n)/(n − 1) ≤ 0.
Thus, r ≤ n/(3n − 2).
The theorem follows.
7. CONCLUSIONS AND FUTURE WORK
We have found the optimal auction for the three-unit limited-supply case, and shown that its structure is essentially independent of the benchmark used in its competitive analysis.
We have then used this auction to derive the best known auction for the unlimited supply case.
Our work leaves many interesting open questions.
We found that the lower bound of [8] is matched by an auction for three bidders, even when competing against generalized benchmarks.
The most interesting open question from our work is whether the lower bound from Theorem 2 can be matched by an auction for more than three bidders.
We conjecture that it can.
Second, we consider whether our techniques can be extended to find optimal auctions for greater numbers of bidders.
The use of our analytic solution method requires knowledge of a restricted class of auctions which is large enough to contain an optimal auction but small enough that the optimal auction in this class can be found explicitly through analytic methods.
No class of auctions which meets these criteria is known even for the four-bidder case.
Also, when the number of bidders is greater than three, it might be the case that the optimal auction is not expressible in terms of elementary functions.
Another interesting set of open questions concerns aggregation auctions.
As we show, the aggregation auction for Υ_3 outperforms the aggregation auction for Υ_2, and it appears that the aggregation auction for Υ_3 is better than that for Υ_m for m > 3.
We leave verification of this conjecture for future work.
We also show that Υ_3 is not the best three-bidder auction for use in an aggregation auction, but the auction that beats it is able to reduce the competitive ratio of the overall auction only a little
bit.\nIt would be interesting to know whether for any m there is an m-aggregation auction that substantially improves on the competitive ratio of Agg\u03a5m .\nFinally, we remark that very little is known about the structure of the optimal competitive auction.\nIn our auction \u03a53, the sales price for a given bidder is restricted either to be one of the other bid values or to be higher than all other bid values.\nThe optimal auction for two bidders, the 1-unit Vickrey auction, also falls within this class of auctions, as its sales prices are restricted to bid values.\nWe conjecture that an optimal auction for any number of bidders lies within this class.\nOur paper provides partial evidence for this conjecture: the lower bound of Section 6 on conservative auctions shows that the optimal auction must offer sales prices higher than any bid value if the lower bound of Theorem 2 is tight, as is conjectured.\nIt remains to show that optimal auctions otherwise only offer sales prices at bid values.\n8.\nACKNOWLEDGEMENTS The authors wish to thank Yoav Shoham and Noga Alon for helpful discussions.\n9.\nREFERENCES [1] A. Archer and E. Tardos.\nTruthful mechanisms for one-parameter agents.\nIn Proc.\nof the 42nd IEEE Symposium on Foundations of Computer Science, 2001.\n[2] S. Baliga and R. Vohra.\nMarket research and market design.\nAdvances in Theoretical Economics, 3, 2003.\n[3] A. Borodin and R. El-Yaniv.\nOnline Computation and Competitive Analysis.\nCambridge University Press, 1998.\n[4] J. Bulow and J. Roberts.\nThe Simple Economics of Optimal Auctions.\nThe Journal of Political Economy, 97:1060-90, 1989.\n[5] A. Fiat, A. V. Goldberg, J. D. Hartline, and A. R. Karlin.\nCompetitive generalized auctions.\nIn Proc.\n34th ACM Symposium on the Theory of Computing, pages 72-81.\nACM, 2002.\n[6] A. Goldberg, J. Hartline, A. Karlin, M. Saks, and A. 
Wright.
Competitive auctions and digital goods.
Games and Economic Behavior, 2002.
Submitted for publication.
An earlier version available as InterTrust Technical Report at URL http://www.star-lab.com/tr/tr-99-01.html.
[7] A. V. Goldberg and J. D. Hartline.
Competitiveness via consensus.
In Proc. 14th Symposium on Discrete Algorithms, pages 215-222.
ACM/SIAM, 2003.
[8] A. V. Goldberg, J. D. Hartline, A. R. Karlin, and M. E. Saks.
A lower bound on the competitive ratio of truthful auctions.
In Proc. 21st Symposium on Theoretical Aspects of Computer Science, pages 644-655.
Springer, 2004.
[9] A. V. Goldberg, J. D. Hartline, and A. Wright.
Competitive auctions and digital goods.
In Proc. 12th Symposium on Discrete Algorithms, pages 735-744.
ACM/SIAM, 2001.
[10] D. Lehmann, L. I. O'Callaghan, and Y. Shoham.
Truth Revelation in Approximately Efficient Combinatorial Auctions.
In Proc. of 1st ACM Conf. on E-Commerce, pages 96-102.
ACM Press, New York, 1999.
[11] H. Moulin and S. Shenker.
Strategyproof Sharing of Submodular Costs: Budget Balance Versus Efficiency.
Economic Theory, 18:511-533, 2001.
[12] R. Myerson.
Optimal Auction Design.
Mathematics of Operations Research, 6:58-73, 1981.
[13] N. Nisan and A. Ronen.
Algorithmic Mechanism Design.
In Proc. of 31st Symp. on Theory of Computing, pages 129-140.
ACM Press, New York, 1999.
[14] I. Segal.
Optimal pricing mechanisms with unknown demand.
American Economic Review, 16:509-29, 2003.
[15] W. Vickrey.
Counterspeculation, Auctions, and Competitive Sealed Tenders.
J. of Finance, 16:8-37, 1961.
APPENDIX A.
PROOF OF THEOREM 4
We wish to prove that Υ_3^{s,t}, the optimal auction for three bidders against G_{s,t}, has competitive ratio at least (s² + t²)/(2t).
Our proof follows the outline of the proof of Lemma 5 and Theorem 1 from [8]; however, our case is simpler because we are looking only for a bound when n = 3.
Define the random bid vector B = (B_1, B_2, B_3) with Pr[B_i > z] = 1/z.
We compute E_B[G_{s,t}(B)] by integrating Pr[G_{s,t}(B) > z].
Then we use the fact that no auction can have expected profit greater than 3 on B to find a lower bound on the competitive ratio against G_{s,t} for any auction.
For the input distribution B defined above, let B_{(i)} be the i-th largest bid.
Define the disjoint events H_2 = {B_{(2)} ≥ z/s ∧ B_{(3)} < z/t} and H_3 = {B_{(3)} ≥ z/t}.
Intuitively, H_3 corresponds to the event that all three bidders win in G_{s,t}, while H_2 corresponds to the event that only the top two bidders win.
G_{s,t}(B) will be greater than z if either event occurs:

  Pr[G_{s,t}(B) > z] = Pr[H_2] + Pr[H_3]   (5)
                     = 3·(s/z)²·(1 − t/z) + (t/z)³   (6)

Using the identity for non-negative continuous random variables that E[X] = ∫_0^∞ Pr[X > x] dx, we have

  E_B[G_{s,t}(B)] = t + ∫_t^∞ ( 3·(s/z)²·(1 − t/z) + (t/z)³ ) dz   (7)
                  = 3·(s² + t²)/(2t)   (8)

Given that, for any auction A, E_B[E_A[A(B)]] ≤ 3 [8], it is clear that E_B[G_{s,t}(B)] / E_B[E_A[A(B)]] ≥ (s² + t²)/(2t).
Therefore, there exists some input b for each auction A such that G_{s,t}(b)/E_A[A(b)] ≥ (s² + t²)/(2t).
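The step from (7) to (8) is easy to check numerically. Under the substitution u = 1/z the tail integral becomes ∫_0^{1/t} (3s²(1 − tu) + t³u) du, whose integrand is linear in u, so even the trapezoid rule evaluates it essentially exactly. A short sketch (the function name expected_benchmark is ours):

```python
# Numerically check equation (8): E[G_{s,t}(B)] = 3(s^2 + t^2) / (2t),
# where Pr[G_{s,t}(B) > z] = 3(s/z)^2 (1 - t/z) + (t/z)^3 for z >= t.

def expected_benchmark(s: float, t: float, steps: int = 10_000) -> float:
    """E[G_{s,t}(B)] = t + integral_t^inf Pr[G_{s,t}(B) > z] dz, computed
    with the substitution u = 1/z, which turns the integrand into
    3 s^2 (1 - t u) + t^3 u  -- linear in u -- on [0, 1/t]."""
    def g(u: float) -> float:  # transformed integrand
        return 3 * s * s * (1 - t * u) + t ** 3 * u
    a, b = 0.0, 1.0 / t
    h = (b - a) / steps
    integral = h * (g(a) / 2 + g(b) / 2 + sum(g(a + i * h) for i in range(1, steps)))
    return t + integral

s, t = 2.0, 3.0
print(expected_benchmark(s, t))        # numeric evaluation of (7)
print(3 * (s * s + t * t) / (2 * t))   # closed form (8): 6.5
```

For s = 2, t = 3 (the F^{(2)} benchmark) both lines agree at 6.5 = 3·(13/6), consistent with the 13/6 lower bound on the competitive ratio for three bidders.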
the competitive framework used.\nThird, we propose a schema for converting a given limited-supply auction into an unlimited supply auction.\nApplying this technique to our optimal auction for three items, we achieve an auction with a competitive ratio of 3.25, which improves upon the previously best-known competitive ratio of 3.39 from [7].\nFinally, we generalize a result from [8] and extend our understanding of the nature of the optimal competitive auction by showing that the optimal competitive auction occasionally offers prices that are higher than all bid values.\n1.\nINTRODUCTION\nThe research area of optimal mechanism design looks at designing a mechanism to produce the most desirable outcome for the entity running the mechanism.\nThis problem is well studied for the auction design problem where the optimal mechanism is the one * This work was supported in part by an NSF fellowship and NSF ITR 0205633.\nthat brings the seller the most profit.\nHere, the classical approach is to design such a mechanism given the prior distribution from which the bidders' preferences are drawn (See e.g., [12, 4]).\nRecently Goldberg et al. 
[9] introduced the use of worst-case competitive analysis (See e.g., [3]) to analyze the performance of auctions that have no knowledge of the prior distribution.\nThe goal of such work is to design an auction that achieves a large constant fraction of the profit attainable by an optimal mechanism that knows the prior distribution in advance.\nPositive results in this direction are fueled by the observation that in auctions for a number of identical units, much of the distribution from which the bidders are drawn can be deduced on the fly by the auction as it is being run [9, 14, 2].\nThe performance of an auction in such a worst-case competitive analysis is measured by its competitive ratio, the ratio between a benchmark performance and the auction's performance on the input distribution that maximizes this ratio.\nThe holy grail of the worstcase competitive analysis of auctions is the auction that achieves the optimal competitive ratio (as small as possible).\nSince [9] this search has led to improved understanding of the nature of the optimal auction, the techniques for on-the-fly pricing in these scenarios, and the competitive ratio of the optimal auction [5, 7, 8].\nIn this paper we continue this line of research by improving in all of these directions.\nFurthermore, we give evidence corroborating the conjecture that the form of the optimal auction is independent of the benchmark used in the auction's competitive analysis.\nThis result further validates the use of competitive analysis in gauging auction performance.\nWe consider the single item, multi-unit, unit-demand auction problem.\nIn such an auction there are many units of a single item available for sale to bidders who each desire only one unit.\nEach bidder has a valuation representing how much the item is worth to him.\nThe auction is performed by soliciting a sealed bid from each of the bidders and deciding on the allocation of units to bidders and the prices to be paid by the bidders.\nThe bidders 
are assumed to bid so as to maximize their personal utility, the difference between their valuation and the price they pay.\nTo handle the problem of designing and analyzing auctions where bidders may falsely declare their valuations to get a better deal, we will adopt the solution concept of truthful mechanism design (see, e.g., [9, 15, 13]).\nIn a truthful auction, revealing one's true valuation as one's bid is an optimal strategy for each bidder regardless of the bids of the other bidders.\nIn this paper, we will restrict our attention to truthful (a.k.a., incentive compatible or strategyproof) auctions.\nA particularly interesting special case of the auction problem is the unlimited supply case.\nIn this case the number of units for sale is at least the number of bidders in the auction.\nThis is natural for the sale of digital goods where there is negligible cost for duplicating\nand distributing the good.\nPay-per-view television and downloadable audio files are examples of such goods.\nThe competitive framework introduced in [9] and further refined in [5] uses the profit of the optimal omniscient single priced mechanism that sells at least two units as the benchmark for competitive analysis.\nThe assumption that two or more units are sold is necessary because in the worst case it is impossible to obtain a constant fraction of the profit of the optimal mechanism when it sells only one unit [9].\nIn this framework for competitive analysis, an auction is said to be \u03b2-competitive if it achieves a profit that is within a factor of \u03b2 \u2265 1 of the benchmark profit on every input.\nThe optimal auction is the one which is \u03b2-competitive with the minimum value of \u03b2.\nPrevious to this work, the best known auction for the unlimited supply case had a competitive ratio of 3.39 [7] and the best lower bound known was 2.42 [8].\nFor the limited supply case, auctions can achieve substantially better competitive ratios.\nWhen there are only two units for 
sale, the optimal auction gives a competitive ratio of 2, which matches the lower bound for two units.\nWhen there are three units for sale, the best previously known auction had a competitive ratio of 2.3, compared with a lower bound of 13\/6 \u2248 2.17 [8].\nThe results of this paper are as follows:\n\u2022 We give the auction for three units that is optimally competitive against the profit of the omniscient single priced mechanism that sells at least two units.\nThis auction achieves a competitive ratio of 13\/6, matching the lower bound from [8] (Section 3).\n\u2022 We show that the form of the optimal auction is independent of the benchmark used in competitive analysis.\nIn doing so, we give an optimal three bidder auction for generalized benchmarks (Section 4).\n\u2022 We give a general technique for converting a limited supply auction into an unlimited supply auction where it is possible to use the competitive ratio of the limited supply auction to obtain a bound on the competitive ratio of the unlimited supply auction.\nWe refer to auctions derived from this framework as aggregation auctions (Section 5).\n\u2022 We improve on the best known competitive ratio by proving that the aggregation auction constructed from our optimal three-unit auction is 3.25-competitive (Section 5.1).\n\u2022 Assuming that the conjecture that the optimal ~ - unit auction\nhas a competitive ratio that matches the lower bound proved in [8], we show that this optimal auction for ~ \u2265 3 on some inputs will occasionally offer prices that are higher than any bid in that input (Section 6).\nFor the three-unit case where we have shown that the lower bound of [8] is tight, this observation led to our construction of the optimal three-unit auction.\n2.\nDEFINITIONS AND BACKGROUND\n2.1 Competitive Framework\n2.2 Scale Invariant and Symmetric Auctions\n2.3 Limited Supply Versus Unlimited Supply\n2.4 Lower Bounds and Optimal Auctions\n2.5 Profit Extraction\n3.\nAN OPTIMAL AUCTION FOR 
THREE BIDDERS\nMotivation for \u03a53\n4.\nGENERALIZED PROFIT BENCHMARKS\nCompeting with Gs\n5.\nAGGREGATION AUCTIONS\n5.1 A 3.25 Competitive Auction\n5.2 A Gs,t-based Aggregation Auction\nAgg\u03a5s, t 3 is 3.243-competitive.\n5.3 Further Reducing the Competitive Ratio\n5.3.1 Agg\u03a5m for m > 3\n5.3.2 Convex combinations of Agg\u03a5m\n6.\nA LOWER BOUND FOR CONSERVATIVE AUCTIONS\n7.\nCONCLUSIONS AND FUTURE WORK\nWe have found the optimal auction for the three-unit limited-supply case, and shown that its structure is essentially independent of the benchmark used in its competitive analysis.\nWe have then used this auction to derive the best known auction for the unlimited supply case.\nOur work leaves many interesting open questions.\nWe found that the lower bound of [8] is matched by an auction for three bidders, even when competing against generalized benchmarks.\nThe most interesting open question from our work is whether the lower bound from Theorem 2 can be matched by an auction for more than three bidders.\nWe conjecture that it can.\nSecond, we consider whether our techniques can be extended to find optimal auctions for greater numbers of bidders.\nThe use of our analytic solution method requires knowledge of a restricted class of auctions which is large enough to contain an optimal auction but small enough that the optimal auction in this class can be found explicitly through analytic methods.\nNo class of auctions which meets these criteria is known even for the four bidder case.\nAlso, when the number of bidders is greater than three, it might be the case that the optimal auction is not expressible in terms of elementary functions.\nAnother interesting set of open questions concerns aggregation auctions.\nAs we show, the aggregation auction for \u03a53 outperforms the aggregation auction for \u03a52 and it appears that the aggregation auction for \u03a53 is better than that for \u03a5m for m > 3.\nWe leave verification of this conjecture for future
work.\nWe also show that \u03a53 is not the best three-bidder auction for use in an aggregation auction, but the auction that beats it is able to reduce the competitive ratio of the overall auction only a little bit.\nIt would be interesting to know whether for any m there is an m-aggregation auction that substantially improves on the competitive ratio of Agg\u03a5m.\nFinally, we remark that very little is known about the structure of the optimal competitive auction.\nIn our auction \u03a53, the sales price for a given bidder is restricted either to be one of the other bid values or to be higher than all other bid values.\nThe optimal auction for two bidders, the 1-unit Vickrey auction, also falls within this class of auctions, as its sales prices are restricted to bid values.\nWe conjecture that an optimal auction for any number of bidders lies within this class.\nOur paper provides partial evidence for this conjecture: the lower bound of Section 6 on conservative auctions shows that the optimal auction must offer sales prices higher than any bid value if the lower bound of Theorem 2 is tight, as is conjectured.\nIt remains to show that optimal auctions otherwise only offer sales prices at bid values.","lvl-4":"From Optimal Limited To Unlimited Supply Auctions\nABSTRACT\nWe investigate the class of single-round, sealed-bid auctions for a set of identical items to bidders who each desire one unit.\nWe adopt the worst-case competitive framework defined by [9, 5] that compares the profit of an auction to that of an optimal single-price sale of at least two items.\nIn this paper, we first derive an optimal auction for three items, answering an open question from [8].\nSecond, we show that the form of this auction is independent of the competitive framework used.\nThird, we propose a schema for converting a given limited-supply auction into an unlimited supply auction.\nApplying this technique to our optimal auction for three items, we achieve an auction with a
competitive ratio of 3.25, which improves upon the previously best-known competitive ratio of 3.39 from [7].\nFinally, we generalize a result from [8] and extend our understanding of the nature of the optimal competitive auction by showing that the optimal competitive auction occasionally offers prices that are higher than all bid values.\n1.\nINTRODUCTION\nThis problem is well studied for the auction design problem where the optimal mechanism is the one that brings the seller the most profit.\n* This work was supported in part by an NSF fellowship and NSF ITR 0205633.\nRecently Goldberg et al. [9] introduced the use of worst-case competitive analysis (See e.g., [3]) to analyze the performance of auctions that have no knowledge of the prior distribution.\nThe goal of such work is to design an auction that achieves a large constant fraction of the profit attainable by an optimal mechanism that knows the prior distribution in advance.\nThe performance of an auction in such a worst-case competitive analysis is measured by its competitive ratio, the ratio between a benchmark performance and the auction's performance on the input distribution that maximizes this ratio.\nThe holy grail of the worst-case competitive analysis of auctions is the auction that achieves the optimal competitive ratio (as small as possible).\nSince [9], this search has led to improved understanding of the nature of the optimal auction, the techniques for on-the-fly pricing in these scenarios, and the competitive ratio of the optimal auction [5, 7, 8].\nFurthermore, we give evidence corroborating the conjecture that the form of the optimal auction is independent of the benchmark used in the auction's competitive analysis.\nThis result further validates the use of competitive analysis in gauging auction performance.\nWe consider the single item, multi-unit, unit-demand auction problem.\nIn such an auction there are many units of a single item available for sale to bidders who each desire only one
unit.\nEach bidder has a valuation representing how much the item is worth to him.\nThe auction is performed by soliciting a sealed bid from each of the bidders and deciding on the allocation of units to bidders and the prices to be paid by the bidders.\nIn a truthful auction, revealing one's true valuation as one's bid is an optimal strategy for each bidder regardless of the bids of the other bidders.\nIn this paper, we will restrict our attention to truthful (a.k.a., incentive compatible or strategyproof) auctions.\nA particularly interesting special case of the auction problem is the unlimited supply case.\nIn this case the number of units for sale is at least the number of bidders in the auction.\nThis is natural for the sale of digital goods where there is negligible cost for duplicating\nand distributing the good.\nThe competitive framework introduced in [9] and further refined in [5] uses the profit of the optimal omniscient single priced mechanism that sells at least two units as the benchmark for competitive analysis.\nIn this framework for competitive analysis, an auction is said to be \u03b2-competitive if it achieves a profit that is within a factor of \u03b2 \u2265 1 of the benchmark profit on every input.\nThe optimal auction is the one which is \u03b2-competitive with the minimum value of \u03b2.\nPrevious to this work, the best known auction for the unlimited supply case had a competitive ratio of 3.39 [7] and the best lower bound known was 2.42 [8].\nFor the limited supply case, auctions can achieve substantially better competitive ratios.\nWhen there are only two units for sale, the optimal auction gives a competitive ratio of 2, which matches the lower bound for two units.\nWhen there are three units for sale, the best previously known auction had a competitive ratio of 2.3, compared with a lower bound of 13\/6 \u2248 2.17 [8].\nThe results of this paper are as follows:\n\u2022 We give the auction for three units that is optimally competitive 
against the profit of the omniscient single priced mechanism that sells at least two units.\nThis auction achieves a competitive ratio of 13\/6, matching the lower bound from [8] (Section 3).\n\u2022 We show that the form of the optimal auction is independent of the benchmark used in competitive analysis.\nIn doing so, we give an optimal three bidder auction for generalized benchmarks (Section 4).\n\u2022 We give a general technique for converting a limited supply auction into an unlimited supply auction where it is possible to use the competitive ratio of the limited supply auction to obtain a bound on the competitive ratio of the unlimited supply auction.\nWe refer to auctions derived from this framework as aggregation auctions (Section 5).\n\u2022 We improve on the best known competitive ratio by proving that the aggregation auction constructed from our optimal three-unit auction is 3.25-competitive (Section 5.1).\n\u2022 Assuming the conjecture that the optimal \u2113-unit auction has a competitive ratio matching the lower bound proved in [8], we show that this optimal auction for \u2113 \u2265 3 will occasionally offer prices that are higher than any bid in the input (Section 6).\nFor the three-unit case where we have shown that the lower bound of [8] is tight, this observation led to our construction of the optimal three-unit auction.\n7.\nCONCLUSIONS AND FUTURE WORK\nWe have found the optimal auction for the three-unit limited-supply case, and shown that its structure is essentially independent of the benchmark used in its competitive analysis.\nWe have then used this auction to derive the best known auction for the unlimited supply case.\nOur work leaves many interesting open questions.\nWe found that the lower bound of [8] is matched by an auction for three bidders, even when competing against generalized benchmarks.\nThe most interesting open question from our work is whether the lower bound from Theorem 2 can be matched by an auction
for more than three bidders.\nWe conjecture that it can.\nSecond, we consider whether our techniques can be extended to find optimal auctions for greater numbers of bidders.\nNo class of auctions which meets these criteria is known even for the four bidder case.\nAlso, when the number of bidders is greater than three, it might be the case that the optimal auction is not expressible in terms of elementary functions.\nAnother interesting set of open questions concerns aggregation auctions.\nAs we show, the aggregation auction for \u03a53 outperforms the aggregation auction for \u03a52 and it appears that the aggregation auction for \u03a53 is better than that for \u03a5m for m > 3.\nWe leave verification of this conjecture for future work.\nWe also show that \u03a53 is not the best three-bidder auction for use in an aggregation auction, but the auction that beats it is able to reduce the competitive ratio of the overall auction only a little bit.\nIt would be interesting to know whether for any m there is an m-aggregation auction that substantially improves on the competitive ratio of Agg\u03a5m.\nFinally, we remark that very little is known about the structure of the optimal competitive auction.\nIn our auction \u03a53, the sales price for a given bidder is restricted either to be one of the other bid values or to be higher than all other bid values.\nThe optimal auction for two bidders, the 1-unit Vickrey auction, also falls within this class of auctions, as its sales prices are restricted to bid values.\nWe conjecture that an optimal auction for any number of bidders lies within this class.\nIt remains to show that optimal auctions otherwise only offer sales prices at bid values.","lvl-2":"From Optimal Limited To Unlimited Supply Auctions\nABSTRACT\nWe investigate the class of single-round, sealed-bid auctions for a set of identical items to bidders who each desire one unit.\nWe adopt the worst-case competitive framework defined by [9, 5] that compares the profit of an
auction to that of an optimal single-price sale of at least two items.\nIn this paper, we first derive an optimal auction for three items, answering an open question from [8].\nSecond, we show that the form of this auction is independent of the competitive framework used.\nThird, we propose a schema for converting a given limited-supply auction into an unlimited supply auction.\nApplying this technique to our optimal auction for three items, we achieve an auction with a competitive ratio of 3.25, which improves upon the previously best-known competitive ratio of 3.39 from [7].\nFinally, we generalize a result from [8] and extend our understanding of the nature of the optimal competitive auction by showing that the optimal competitive auction occasionally offers prices that are higher than all bid values.\n1.\nINTRODUCTION\nThe research area of optimal mechanism design looks at designing a mechanism to produce the most desirable outcome for the entity running the mechanism.\nThis problem is well studied for the auction design problem where the optimal mechanism is the one that brings the seller the most profit.\n* This work was supported in part by an NSF fellowship and NSF ITR 0205633.\nHere, the classical approach is to design such a mechanism given the prior distribution from which the bidders' preferences are drawn (See e.g., [12, 4]).\nRecently Goldberg et al.
[9] introduced the use of worst-case competitive analysis (See e.g., [3]) to analyze the performance of auctions that have no knowledge of the prior distribution.\nThe goal of such work is to design an auction that achieves a large constant fraction of the profit attainable by an optimal mechanism that knows the prior distribution in advance.\nPositive results in this direction are fueled by the observation that in auctions for a number of identical units, much of the distribution from which the bidders are drawn can be deduced on the fly by the auction as it is being run [9, 14, 2].\nThe performance of an auction in such a worst-case competitive analysis is measured by its competitive ratio, the ratio between a benchmark performance and the auction's performance on the input distribution that maximizes this ratio.\nThe holy grail of the worst-case competitive analysis of auctions is the auction that achieves the optimal competitive ratio (as small as possible).\nSince [9], this search has led to improved understanding of the nature of the optimal auction, the techniques for on-the-fly pricing in these scenarios, and the competitive ratio of the optimal auction [5, 7, 8].\nIn this paper we continue this line of research by improving in all of these directions.\nFurthermore, we give evidence corroborating the conjecture that the form of the optimal auction is independent of the benchmark used in the auction's competitive analysis.\nThis result further validates the use of competitive analysis in gauging auction performance.\nWe consider the single item, multi-unit, unit-demand auction problem.\nIn such an auction there are many units of a single item available for sale to bidders who each desire only one unit.\nEach bidder has a valuation representing how much the item is worth to him.\nThe auction is performed by soliciting a sealed bid from each of the bidders and deciding on the allocation of units to bidders and the prices to be paid by the bidders.\nThe bidders
are assumed to bid so as to maximize their personal utility, the difference between their valuation and the price they pay.\nTo handle the problem of designing and analyzing auctions where bidders may falsely declare their valuations to get a better deal, we will adopt the solution concept of truthful mechanism design (see, e.g., [9, 15, 13]).\nIn a truthful auction, revealing one's true valuation as one's bid is an optimal strategy for each bidder regardless of the bids of the other bidders.\nIn this paper, we will restrict our attention to truthful (a.k.a., incentive compatible or strategyproof) auctions.\nA particularly interesting special case of the auction problem is the unlimited supply case.\nIn this case the number of units for sale is at least the number of bidders in the auction.\nThis is natural for the sale of digital goods where there is negligible cost for duplicating\nand distributing the good.\nPay-per-view television and downloadable audio files are examples of such goods.\nThe competitive framework introduced in [9] and further refined in [5] uses the profit of the optimal omniscient single priced mechanism that sells at least two units as the benchmark for competitive analysis.\nThe assumption that two or more units are sold is necessary because in the worst case it is impossible to obtain a constant fraction of the profit of the optimal mechanism when it sells only one unit [9].\nIn this framework for competitive analysis, an auction is said to be \u03b2-competitive if it achieves a profit that is within a factor of \u03b2 \u2265 1 of the benchmark profit on every input.\nThe optimal auction is the one which is \u03b2-competitive with the minimum value of \u03b2.\nPrevious to this work, the best known auction for the unlimited supply case had a competitive ratio of 3.39 [7] and the best lower bound known was 2.42 [8].\nFor the limited supply case, auctions can achieve substantially better competitive ratios.\nWhen there are only two units for 
sale, the optimal auction gives a competitive ratio of 2, which matches the lower bound for two units.\nWhen there are three units for sale, the best previously known auction had a competitive ratio of 2.3, compared with a lower bound of 13\/6 \u2248 2.17 [8].\nThe results of this paper are as follows:\n\u2022 We give the auction for three units that is optimally competitive against the profit of the omniscient single priced mechanism that sells at least two units.\nThis auction achieves a competitive ratio of 13\/6, matching the lower bound from [8] (Section 3).\n\u2022 We show that the form of the optimal auction is independent of the benchmark used in competitive analysis.\nIn doing so, we give an optimal three bidder auction for generalized benchmarks (Section 4).\n\u2022 We give a general technique for converting a limited supply auction into an unlimited supply auction where it is possible to use the competitive ratio of the limited supply auction to obtain a bound on the competitive ratio of the unlimited supply auction.\nWe refer to auctions derived from this framework as aggregation auctions (Section 5).\n\u2022 We improve on the best known competitive ratio by proving that the aggregation auction constructed from our optimal three-unit auction is 3.25-competitive (Section 5.1).\n\u2022 Assuming the conjecture that the optimal \u2113-unit auction has a competitive ratio matching the lower bound proved in [8], we show that this optimal auction for \u2113 \u2265 3 will occasionally offer prices that are higher than any bid in the input (Section 6).\nFor the three-unit case where we have shown that the lower bound of [8] is tight, this observation led to our construction of the optimal three-unit auction.\n2.\nDEFINITIONS AND BACKGROUND\nWe consider single-round, sealed-bid auctions for a set of \u2113 identical units of an item to bidders who each desire one unit.\nAs mentioned in the introduction, we adopt the game-theoretic solution concept of
truthful mechanism design.\nA useful simplification of the problem of designing truthful auctions is obtained through the following algorithmic characterization [9].\nRelated formulations to this one have appeared in numerous places in recent literature (e.g., [1, 14, 5, 10]).\nDEFINITION 1.\nGiven a bid vector of n bids, b = (b1,..., bn), let b-i denote the vector b with bi replaced with a `? ', i.e., b-i = (b1,..., bi\u22121, ?, bi +1,..., bn).\nThe deterministic bid-independent auction defined by f works as follows for each bidder i:\n1.\nSet ti = f (b-i).\n2.\nIf ti < bi, bidder i wins at price ti.\n3.\nIf ti > bi, bidder i loses.\n4.\nOtherwise (ti = bi), the auction can either accept the bid at price ti or reject it.\nA randomized bid-independent auction is a distribution over deterministic bid-independent auctions.\nThe proof of the following theorem can be found, for example, in [5].\nTHEOREM 1.\nAn auction is truthful if and only if it is equivalent to a bid-independent auction.\nGiven this equivalence, we will use the terminology bid-independent and truthful interchangeably.\nFor a randomized bid-independent auction, f (b-i) is a random variable.\nWe denote the probability density of f (b-i) at z by \u03c1b-i (z).\nWe denote the profit of a truthful auction A on input b as A (b).\nThe expected profit of the auction, E [A (b)], is the sum of the expected payments made by each bidder, which we denote by pi (b) for bidder i.
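As a concrete illustration of Definition 1, the following minimal sketch (function names are ours, for illustration only) runs the deterministic bid-independent auction defined by a price function f, resolving the tie ti = bi as a loss, which the definition permits. Taking f to be the maximum of the masked bids recovers the 1-unit Vickrey auction mentioned later in the text.

```python
from typing import Callable, List, Optional

def bid_independent_auction(f: Callable[[List[float]], float],
                            bids: List[float]) -> List[Optional[float]]:
    """Run the deterministic bid-independent auction defined by f
    (Definition 1): bidder i is offered t_i = f(b_-i) and wins at
    price t_i iff t_i < b_i. Returns each bidder's payment (None for
    losers); the tie t_i == b_i is resolved here as a loss, which the
    definition permits."""
    payments: List[Optional[float]] = []
    for i, b_i in enumerate(bids):
        masked = bids[:i] + bids[i + 1:]  # b_-i: all bids except b_i
        t_i = f(masked)
        payments.append(t_i if t_i < b_i else None)
    return payments

# With f = max, this is the 1-unit Vickrey auction: the unique highest
# bidder wins at the second-highest bid.
print(bid_independent_auction(max, [1.0, 4.0, 3.0]))  # [None, 3.0, None]
```

Note that truthfulness is visible in the code: the price offered to bidder i never depends on bi itself.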
Clearly, the expected payment of each bid satisfies pi (b) = \u222b[0, bi] z \u03c1b-i (z) dz.\n2.1 Competitive Framework\nWe now review the competitive framework from [5].\nIn order to evaluate the performance of auctions with respect to the goal of profit maximization, we introduce the optimal single price omniscient auction F and the related omniscient auction F (2).\nDEFINITION 3.\nGiven a vector b = (b1,..., bn), let b (i) represent the i-th largest value in b.\nThe optimal single price omniscient auction, F, is defined as follows.\nAuction F on input b determines the value k such that kb (k) is maximized.\nAll bidders with bi \u2265 b (k) win at price b (k); all remaining bidders lose.\nThe profit of F on input b is thus F (b) = max1 \u2264 k \u2264 n kb (k).\nIn the competitive framework of [5] and subsequent papers, the performance of a truthful auction is gauged in comparison to F (2), the optimal single priced auction that sells at least two units.\nThe profit of F (2) is max2 \u2264 k \u2264 n kb (k).\nThere are a number of reasons to choose this benchmark for comparison; interested readers should see [5] or [6] for a more detailed discussion.\nLet A be a truthful auction.\nWe say that A is \u03b2-competitive against F (2) (or just \u03b2-competitive) if for all bid vectors b, the expected profit of A on b satisfies E [A (b)] \u2265 F (2) (b)\/\u03b2.\nIn Section 4 we generalize this framework to other profit benchmarks.\n2.2 Scale Invariant and Symmetric Auctions\nA symmetric auction is one where the auction outcome is unchanged when the input bids arrive in a different permutation.\nGoldberg et al.
[8] show that a symmetric auction achieves the optimal competitive ratio.\nThis is natural as the profit benchmark we consider is symmetric, and it allows us to consider only symmetric auctions when looking for the one with the optimal competitive ratio.\nAn auction defined by bid-independent function f is scale invariant if, for all i and all z, Pr [f (b-i) > z] = Pr [f (cb-i) > cz].\nIt is conjectured that the assumption of scale invariance is without loss of generality.\nThus, we are motivated to consider symmetric scale-invariant auctions.\nWhen specifying a symmetric scale-invariant auction we can assume that f is only a function of the relative magnitudes of the n \u2212 1 bids in b-i and that one of the bids, bj, is equal to 1.\nIt will be convenient to specify such auctions via the density function of f (b-i), \u03c1b-i (z).\nIt is enough to specify such a density function of the form \u03c11, z1,..., zn \u2212 1 (z) with 1 < zi < zi +1.\n2.3 Limited Supply Versus Unlimited Supply\nFollowing [8], throughout the remainder of this paper we will be making the assumption that n = \u2113, i.e., the number of bidders is equal to the number of units for sale.\nThis is without loss of generality as (a) any lower bound that applies to the n = \u2113 case also extends to the case where n > \u2113 [8], and (b) there is a reduction from the unlimited supply auction problem to the limited supply auction problem that takes an unlimited supply auction that is \u03b2-competitive with F (2) and constructs a limited supply auction parameterized by \u2113 that is \u03b2-competitive with F (2, \u2113), the optimal omniscient auction that sells between 2 and \u2113 units [6].\nHenceforth, we will assume that we are in the unlimited supply case, and we will examine lower bounds for limited supply problems by placing a restriction on the number of bidders in the auction.\n2.4 Lower Bounds and Optimal Auctions\nFrequently in this paper, we will refer to the best known lower bound on the competitive ratio of truthful auctions, stated as Theorem 2 below.\n2.5 
Profit Extraction\nIn this section we review the truthful profit extraction mechanism ProfitExtractR.\nThis mechanism is a special case of a general cost-sharing schema due to Moulin and Shenker [11].\nThe goal of profit extraction is, given bids b, to extract a target value R of profit from some subset of the bidders.\nProfitExtractR: Given bids b, find the largest k such that the highest k bidders can equally share the cost R. Charge each of these bidders R\/k.\nIf no subset of bidders can cover the cost, the mechanism has no winners.\nImportant properties of this auction are as follows:\n\u2022 ProfitExtractR is truthful.\n\u2022 If R \u2264 F (b), ProfitExtractR (b) = R; otherwise it has no winners and no revenue.\nWe will use this profit extraction mechanism in Section 5 with the following intuition.\nGiven a subset S of the bidders, such a profit extractor makes it possible to treat S as a single bid with value F (S).\nNote that given a single bid, b, a truthful mechanism might offer it price t and if t < b then the bidder wins and pays t; otherwise the bidder pays nothing (and loses).\nLikewise, a mechanism can offer the set of bidders S a target revenue R.\nIf R < F (2) (S), then ProfitExtractR raises R from S; otherwise, it raises no revenue from S.\n3.\nAN OPTIMAL AUCTION FOR THREE BIDDERS\nIn this section we define the optimal auction for three bidders, \u03a53, and prove that it indeed matches the known lower bound of 13\/6.\nWe follow the definition and proof with a discussion of how this auction was derived.\nWe first recall the lower bound from Section 2.4.\nTHEOREM 2.\n[8] The competitive ratio of any auction on n bidders is at least\nDEFINITION 4.\nLet \u03a5n denote the n-bidder auction that achieves the optimal competitive ratio.\nThis bound is derived by analyzing the performance of any auction on the following distribution B.\nIn each random bid vector B, each bid Bi is drawn i.i.d. from the distribution such that Pr [Bi > s] = 1\/s for all s \u2265 1.\nIn the two-bidder case, this lower bound is 2.\nThis is achieved by \u03a52, which is the 1-unit Vickrey auction.1\nIn the three-bidder case, this lower bound is 13\/6.\nBelow, we define the auction \u03a53 which matches this lower bound.\nIn the four-bidder case, this lower bound is 215\/96.\nIn the limit as the number of bidders grows, this lower bound approaches a number which is approximately 2.42.\nIt is conjectured that this lower bound is tight for any number of bidders and that the optimal auction, \u03a5n, matches it.\n1The 1-unit Vickrey auction sells to the highest bidder at the second highest bid value.\nDEFINITION 5.\n\u03a53 is scale-invariant and symmetric and given by the bid-independent function with density function \u03c11, x (z) that places discrete probability a (x) on price 1, discrete probability b (x) on price x, and probability density g (z) on prices z > x, where g (x) = 2\/(13 (x \u2212 1) ^ 3).\nWe now show that \u03a53 is 13\/6-competitive.\nPROOF.\nConsider the bids 1, x, y, with 1 < x < y.\nThere are three cases.\nCASE 1 (x < y < 3\/2): F (2) = 3.\nThe auction must raise expected revenue of at least 18\/13 on these bids.\nThe bidder with valuation x will pay 1 with probability 9\/13, and the bidder with valuation y will pay 1 with probability 9\/13.\nTherefore \u03a53 raises 18\/13 on these bids.\nCASE 2 (x < 3\/2 < y): F (2) = 3.\nThe auction must raise expected revenue of at least 18\/13 on these bids.\nThe bidder with valuation x will pay 9\/13 \u2212 \u222b[3\/2, y] z g (z) dz in expectation.\nThe bidder with valuation y will pay 9\/13 + \u222b[3\/2, y] z g (z) dz in expectation.\nTherefore \u03a53 raises 18\/13 on these bids.\nCASE 3 (3\/2 < x < y): F (2) = 2x.\nThe auction must raise expected revenue of at least 12x\/13 on these bids.\nConsider the revenue raised from all three bidders:\nThe final equation comes from substituting in g (x) = 2\/(13 (x \u2212 1) ^ 3) and expanding the integrals.\nNote that the fraction of F (2) raised on every input is identical.\nIf any of the inequalities 1 < x < y are not strict, the same proof applies giving a lower bound on the auction's profit; however, this bound may no longer be tight.\nMotivation for \u03a53\nIn this section, we will conjecture that a particular input
distribution is worst-case, and show, as a consequence, that all inputs are worst-case in the optimal auction.\nBy applying this consequence, we will derive an optimal auction for three bidders.\nA truthful, randomized auction on n bidders can be represented by a randomized function f: R^(n \u2212 1) \u2192 R that maps masked bid vectors to prices in R. By normalization, we can assume that the lowest possible bid is 1.\nRecall that \u03c1b-i (z) denotes the probability density of f (b-i) at z.\nThe optimal auction for the finite auction problem can be found by the following optimization problem in which the variables are the price density functions \u03c1b-i (z).\nThis set of integral inequalities is difficult to maximize over.\nHowever, by guessing which constraints are tight and which are slack at the optimum, we will be able to derive a set of differential equations for which any feasible solution is an optimal auction.\nAs we discuss in Section 2.4, in [8], the authors define a distribution and use it to find a lower bound on the competitive ratio of the optimal auction.\nFor two bidders, this bid distribution is the worst-case input distribution.\nWe guess (and later verify) that this distribution is the worst-case input distribution for three bidders as well.\nSince this distribution has full support over the set of all bid vectors and a worst-case distribution puts positive probability only on worst-case inputs, we can therefore assume that all but a measure zero set of inputs is worst-case for the optimal auction.\nIn the optimal two-bidder auction, all inputs with non-identical bids are worst-case, so we will assume the same for three bidders.\nThe guess that these constraints are tight allows us to transform the optimization problem into a feasibility problem constrained by differential equations.\nIf the solution to these equations has value matching the lower bound obtained from the worst-case distribution, then this solution is the optimal auction and our conjectured choice of worst-case distribution is correct.\nIn Section 6
we show that the optimal auction must sometimes place probability mass on sale prices above the highest bid.\nThis motivates considering symmetric scale-invariant auctions for three bidders with probability density function, \u03c11, x (z), of the following form: price 1 with discrete probability a (x); price x with discrete probability b (x); and price z with probability density g (z) for z > x.\nIn this auction, the sale price for the first bidder is either one of the latter two bids, or higher than either bid with a probability density which is independent of the input.\nThe feasibility problem which arises from the linear optimization problem by assuming the constraints are tight is as follows:\nSolving this feasibility problem gives the auction \u03a53 proposed above.\nThe proof of its optimality validates its proposed form.\nFinding a simple restriction on the form of n-bidder auctions for n > 3 under which the optimal auction can be found analytically as above remains an open problem.\n4.\nGENERALIZED PROFIT BENCHMARKS\nIn this section, we widen our focus beyond auctions that compete with F (2) to consider other benchmarks for an auction's profit.\nWe will show that, for three bidders, the form of the optimal auction is essentially independent of the benchmark profit used.\nThis result strongly corroborates the worst-case competitive analysis of auctions by showing that our techniques allow us to derive auctions which are competitive against a broad variety of reasonable benchmarks rather than simply against F (2).\nPrevious work in competitive analysis of auctions has focused on the question of designing the auction with the best competitive ratio against F (2), the profit of the optimal omniscient single-priced mechanism that sells at least two items.\nHowever, it is reasonable to consider other benchmarks.\nFor instance, one might wish to compete against V \u2217, the profit of the k-Vickrey auction with optimal-in-hindsight choice of k.
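Both of these benchmarks are simple to evaluate on a sorted bid vector; the following minimal sketch (function names are ours, for illustration) computes F (2) and V \u2217 directly from their definitions:

```python
def f2_profit(bids):
    """F^(2)(b) = max over k >= 2 of k * b_(k), where b_(k) is the k-th
    largest bid: the best single-price profit selling at least two units."""
    b = sorted(bids, reverse=True)
    return max(k * b[k - 1] for k in range(2, len(b) + 1))

def v_star(bids):
    """V* = profit of the k-Vickrey auction (k winners, each paying the
    (k+1)-st highest bid) under the hindsight-optimal choice of k."""
    b = sorted(bids, reverse=True)
    return max(k * b[k] for k in range(1, len(b)))

bids = [5.0, 4.0, 3.0, 1.0]
print(f2_profit(bids))  # 9.0 (sell three units at price 3)
print(v_star(bids))     # 6.0 (k = 2 winners each pay b_(3) = 3)
```

Note that V \u2217 is exactly the generalized benchmark with coefficients si = i \u2212 1, since k winners paying b (k+1) yield profit (j \u2212 1) b (j) with j = k + 1.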
Alternatively, if an auction is being used as a subroutine in a larger mechanism, one might wish to choose the auction which is optimally competitive with a benchmark specific to that purpose.

Recall that F^(2)(b) = max_{2≤k≤n} k·b_(k). We can generalize this definition to G^s, parameterized by s = (s_2, ..., s_n) and defined as G^s(b) = max_{2≤k≤n} s_k·b_(k). When considering G^s we assume without loss of generality that s_i < s_{i+1}, as otherwise the constraint imposed by s_{i+1} is irrelevant. Note that F^(2) is the special case of G^s with s_i = i, and that V* = G^s with s_i = i − 1.

²Recall that the k-Vickrey auction sells a unit to each of the highest k bidders at a price equal to the (k+1)-st highest bid, b_(k+1), achieving a profit of k·b_(k+1).

Competing with G^{s,t}

We will now design a three-bidder auction Υ_3^{s,t} that achieves the optimal competitive ratio against G^{s,t}. As before, we will first find a lower bound on the competitive ratio and then design an auction to meet that bound. We can lower bound the competitive ratio of Υ_3^{s,t} using the same worst-case distribution from [8] that we used against F^(2). Evaluating the performance of any auction competing against G^{s,t} on this distribution will yield the following theorem. We denote the optimal auction for three bidders against G^{s,t} as Υ_3^{s,t}. The proof can be found in the appendix.

Similarly, we can find the optimal auction against G^{s,t} using the same technique we used to solve for the three-bidder auction with the best competitive ratio against F^(2).

DEFINITION 6. Υ_3^{s,t} is scale-invariant and symmetric, and is given by the bid-independent function with density function g(x) = 2(t − s)²/(s² + t²).

This auction, like Υ3, can be derived by reducing the optimization problem to a feasibility problem, guessing that the optimal solution has the same form as Υ_3^{s,t}, and solving. The auction is optimal because it matches the lower bound found above. Note that the
form of Υ_3^{s,t} is essentially the same as that of Υ3, but the probability of each price is scaled depending on the values of s and t.

That our auction for three bidders matches the lower bound computed by the input distribution used in [8] is strong evidence that this input distribution is the worst-case input distribution for any number of bidders and any generalized profit benchmark. Furthermore, we strongly suspect that for any number of bidders, the form of the optimal auction will be independent of the benchmark used.

5. AGGREGATION AUCTIONS

We have seen that optimal auctions for small cases of the limited-supply model can be found analytically. In this section, we will construct a schema for turning limited-supply auctions into unlimited-supply auctions with a good competitive ratio. As discussed in Section 2.5, the existence of a profit extractor, ProfitExtract_R, allows an auction to treat a set of bids S as a single bid with value F(S). Given n bidders and an auction, A_m, for m < n bidders, we can convert the m-bidder auction into an n-bidder auction by randomly partitioning the bidders into m subsets, treating each subset as a single bidder (via ProfitExtract_R), and running the m-bidder auction.

DEFINITION 7. Given a truthful m-bidder auction, A_m, the m-aggregation auction for A_m, AggA_m, works as follows:

1. Cast each bid uniformly at random into one of m bins, resulting in bid vectors b^(1), ..., b^(m).

2. For each bin j, compute the aggregate bid B_j = F(b^(j)). Let B be the vector of aggregate bids, and B_{−j} be the aggregate bids for all bins other than j.
3. Compute the aggregate price T_j = f(B_{−j}), where f is the bid-independent function for A_m.

4. For each bin j, run ProfitExtract_{T_j} on b^(j).

Since A_m and ProfitExtract_R are truthful, T_j is computed independently of any bid in bin j, and thus the price offered to any bidder in b^(j) is independent of his bid; therefore:

THEOREM 6. If A_m is truthful, the m-aggregation auction for A_m, AggA_m, is truthful.

Note that this schema yields a new way of understanding the Random Sampling Profit Extraction (RSPE) auction [5] as the simplest case of an aggregation auction. It is the 2-aggregation auction for Υ2, the 1-unit Vickrey auction.

To analyze AggA_m, consider throwing k balls into m labeled bins. Let k represent a configuration of balls in bins, so that k_i is equal to the number of balls in bin i, and k_(i) is equal to the number of balls in the i-th largest bin. Let K_{m,k} represent the set of all possible configurations of k balls in m bins. We write the multinomial coefficient of k as (k choose k_1, ..., k_m); the probability that a particular configuration k occurs is then (k choose k_1, ..., k_m)/m^k.

THEOREM 7. The m-aggregation auction for A_m, AggA_m, raises the following fraction of the optimal revenue F^(2)(b):

PROOF. The inequality follows from the monotonicity of F^(2), and the equality from the homogeneity of F^(2). ProfitExtract_{T_j} will raise T_j if T_j ≤ B_j, and no profit otherwise. Thus, E[AggA_m(b)] ≥ E[F^(2)(B)]/β. The theorem follows by rewriting this expectation as a sum over all k in K_{m,k}.

5.1 A 3.25-Competitive Auction

We apply the aggregation auction schema to Υ3, our optimal auction for three bidders, to achieve an auction with competitive ratio 3.25. This improves on the previously best known auction, which is 3.39-competitive [7].

THEOREM 8. The aggregation auction for Υ3 has competitive ratio 3.25.

PROOF. By Theorem 7, for k = 2 and k = 3, E[AggΥ3(b)] = (2/3)·F^(2)(b)/β. As k increases, E[AggΥ3(b)]/F^(2)(b) increases as well. Since we do not expect to find a
closed-form formula for the revenue, we lower bound F^(2)(b) by 3·b_(3). Using large deviation bounds, one can show that the expected revenue obtained from this lower bound exceeds (2/3)·F^(2)(b)/β for large-enough k, and the remainder can be shown by explicit calculation. Plugging in β = 13/6, the competitive ratio is 13/4. As k increases, the competitive ratio approaches 13/6.

Note that the above bound on the competitive ratio of AggΥ3 is tight. To see this, consider the bid vector with two very large and non-identical bids of h and h + ε, with the remaining bids 1. Given that the competitive ratio of Υ3 is tight on this example, the expected revenue of this auction on this input will be exactly a 4/13 fraction of F^(2)(b).

5.2 A G^{s,t}-based Aggregation Auction

In this section we show that Υ3 is not the optimal auction to use in an aggregation auction. One can do better by choosing the auction that is optimally competitive against a specially tailored benchmark. To see why this might be the case, notice (Table 1) that the fraction of F^(2)(b) raised when there are k = 2 or k = 3 winning bidders in F^(2)(b) is substantially smaller than the fraction raised when there are more winners. This occurs because the expected ratio between F^(2)(B) and F^(2)(b) is lower in this case, while the competitive ratio of Υ3 is constant. If we chose a three-bidder auction that performed better when F^(2) has smaller numbers of winners, our aggregation auction would perform better in the worst case.

One approach is to compete against a different benchmark that puts more weight than F^(2) on solutions with a small number of winners. Recall that F^(2) is the instance of G^{s,t} with s = 2 and t = 3. By using the auction that competes optimally against G^{s,t} with s > 2, while holding t = 3, we will raise a higher fraction of revenue on smaller numbers of winning bidders and a lower fraction of revenue on larger numbers of winning bidders. We can numerically optimize the values of s and
t in G^{s,t}(b) in order to achieve the best competitive ratio for the aggregation auction. In fact, this will allow us to improve our competitive ratio slightly.

THEOREM 9. For an optimal choice of s and t, the aggregation auction for Υ_3^{s,t} is 3.243-competitive.

The proof follows the outline of Theorems 7 and 8, with the optimal choice of s = 2.162 (while t is held constant at 3).

5.3 Further Reducing the Competitive Ratio

There are a number of ways we might attempt to use this aggregation auction schema to continue to push the competitive ratio down. In this section, we give a brief discussion of several attempts.

5.3.1 AggΥm for m > 3

If the aggregation auction for Υ2 has a competitive ratio of 4 and the aggregation auction for Υ3 has a competitive ratio of 3.25, can we improve the competitive ratio by aggregating Υ4, or Υm for larger m? We conjecture in the negative: for m > 3, the aggregation auction for Υm has a larger competitive ratio than the aggregation auction for Υ3. The primary difficulty in proving this conjecture lies in the difficulty of finding a closed-form solution for the formula of Theorem 7.

Table 1: E[A(b)]/F^(2)(b) for AggΥm as a function of k, the optimal number of winners in F^(2)(b). The lowest value for each column is printed in bold.

We can, however, evaluate this formula numerically for different values of m and k, assuming that the competitive ratio for Υm matches the lower bound for m given by Theorem 2. Table 1 shows, for each value of m and k, the fraction of F^(2) raised by the aggregation auction AggΥm when there are k winning bidders, assuming the lower bound of Theorem 2 is tight.

5.3.2 Convex combinations of AggΥm

As can be seen in Table 1, when m > 3, the worst-case value of k is no longer 2 and 3, but instead an increasing function of m. An aggregation auction for Υm outperforms the aggregation auction for Υ3 when there are two or
three winning bidders, while the aggregation auction for Υ3 outperforms the other when there are at least six winning bidders. Thus, for instance, an auction which randomizes between the aggregation auctions for Υ3 and Υ4 will have a worst case which is better than that of either auction alone. Larger combinations of auctions will allow more room to optimize the worst case. However, we suspect that no convex combination of aggregation auctions will have a competitive ratio lower than 3. Furthermore, note that we cannot yet claim the existence of a good auction via this technique, as the optimal auction Υn for n > 3 is not known, and it is only conjectured that the bound given by Theorem 2 and represented in Table 1 is correct for Υn.

6. A LOWER BOUND FOR CONSERVATIVE AUCTIONS

In this section, we define a class of auctions that never offer a sale price higher than any bid in the input, and prove a lower bound on the competitive ratio of these auctions. As this lower bound is stronger than the lower bound of Theorem 2 for n ≥ 3, it shows that the optimal auction must occasionally charge a sale price higher than any bid in the input. Specifically, this result partially explains the form of the optimal three-bidder auction.

DEFINITION 8. We say an auction B_f is conservative if its bid-independent function f satisfies f(b_{−i}) ≤ max(b_{−i}).

We can now state our lower bound for conservative auctions.

THEOREM 10. Let A be a conservative auction for n bidders. Then the competitive ratio of A is at least (3n − 2)/n.

COROLLARY 1. The competitive ratio of any conservative auction for an arbitrary number of bidders is at least three.

For a two-bidder auction, this restriction does not prevent optimality: Υ2, the 1-unit Vickrey auction, is conservative. For larger numbers of bidders, however, the restriction to conservative auctions does affect the competitive ratio. For the three-bidder case, Υ3 has competitive ratio
2.17, while the best conservative auction is no better than 2.33-competitive. The k-Vickrey auction and the Random Sampling Optimal Price auction [9] are conservative auctions. The Random Sampling Profit Extraction auction [5] and the CORE auction [7], on the other hand, use the ProfitExtract_R mechanism as a subroutine and thus sometimes offer a sale price higher than the highest input bid value.

In [8], the authors define a restricted auction as one in which, for any input, the sale prices are drawn from the set of input bid values. The class of conservative auctions can be viewed as a generalization of the class of restricted auctions, and therefore our result below gives lower bounds on the performance of a broader class of auctions.

We will prove Theorem 10 with the aid of the following lemma.

PROOF. The lemma is proved by strong induction on k. First, some notation that will be convenient. For any particular k and H, we will be considering the bid vector b = (1, ..., 1, h_k, H) and placing bounds on ρ_{b_{−i}}(z). Since we can assume without loss of generality that the auction is symmetric, we will notate b_{−1} as b with any one of the 1-valued bids masked. Similarly, we notate b_{−h_k} (resp. b_{−H}) as b with the h_k-valued bid (resp. H-valued bid) masked. We will also let p_1(b), p_{h_k}(b), and p_H(b) represent the expected payment of a 1-valued, h_k-valued, and H-valued bidder in A on b, respectively (note that by symmetry the expected payment for all 1-valued bidders is the same).

Base case (k = 0, h_k = 1): A must raise revenue of at least rn on b = (1, ..., 1, 1, H). The second inequality follows from the conservatism of the underlying auction. The base case follows trivially from the final inequality.

Inductive case (k > 0, h_k = kh): Let b = (1, ..., 1, h_k, H). First, we will find an upper bound on p_H(b). Equation (1) follows from the conservatism of A, and (2) from invoking the strong inductive hypothesis with H = kh and the observation that the
maximum possible revenue will be found by placing exactly enough probability at each multiple of h to satisfy the constraints of the inductive hypothesis and placing the remaining probability at kh.

Next, we will find a lower bound on p_{h_k}(b) by considering the revenue raised by the bids b. Recall that A must obtain a profit of at least r·F^(2)(b) = 2rkh. Given the upper bounds on the profit from the H-valued bid, equation (3), and from the 1-valued bids, the profit from the h_k-valued bid must be at least the remainder.

In order to lower bound Pr[f(b_{−h_k}) ≤ kh], consider the auction that minimizes it while remaining consistent with the lower bounds obtained by the strong inductive hypothesis on Pr[f(b_{−h_k}) ≤ ih]. To minimize the constraints implied by the strong inductive hypothesis, we place the minimal amount of probability mass required at each price level. This gives ρ_{h_k}(b) with probability (nr − 1)/(n − 1) at 1 and exactly (3nr − 2r − n)/(n − 1) at each i·h for 1 ≤ i < k, with the remaining probability on h_k. Therefore, the probability that the sale price will be no more than kh on the masked bid vector of b = (1, ..., 1, kh, H) must be at least (nr − 1)/(n − 1) + k·(3nr − 2r − n)/(n − 1).

Given Lemma 1, Theorem 10 is simple to prove.

PROOF. Let A be a conservative auction. Since the probability bound of Lemma 1 can be at most 1 for every k, we must have (3nr − 2r − n)/(n − 1) ≤ 0. Thus, r ≤ n/(3n − 2). The theorem follows.

7. CONCLUSIONS AND FUTURE WORK

We have found the optimal auction for the three-unit limited-supply case, and shown that its structure is essentially independent of the benchmark used in its competitive analysis. We have then used this auction to derive the best known auction for the unlimited-supply case.

Our work leaves many interesting open questions. We found that the lower bound of [8] is matched by an auction for three bidders, even when competing against generalized benchmarks. The most interesting open question from our work is whether the lower bound from Theorem 2 can be matched by an auction
for more than three bidders. We conjecture that it can.

Second, we consider whether our techniques can be extended to find optimal auctions for greater numbers of bidders. The use of our analytic solution method requires knowledge of a restricted class of auctions which is large enough to contain an optimal auction but small enough that the optimal auction in this class can be found explicitly through analytic methods. No class of auctions that meets these criteria is known even for the four-bidder case. Also, when the number of bidders is greater than three, it might be the case that the optimal auction is not expressible in terms of elementary functions.

Another interesting set of open questions concerns aggregation auctions. As we show, the aggregation auction for Υ3 outperforms the aggregation auction for Υ2, and it appears that the aggregation auction for Υ3 is better than that for Υm for m > 3. We leave verification of this conjecture for future work. We also show that Υ3 is not the best three-bidder auction for use in an aggregation auction, but the auction that beats it reduces the competitive ratio of the overall auction only slightly. It would be interesting to know whether for any m there is an m-aggregation auction that substantially improves on the competitive ratio of AggΥm.

Finally, we remark that very little is known about the structure of the optimal competitive auction. In our auction Υ3, the sale price for a given bidder is restricted either to be one of the other bid values or to be higher than all other bid values. The optimal auction for two bidders, the 1-unit Vickrey auction, also falls within this class of auctions, as its sale prices are restricted to bid values. We conjecture that an optimal auction for any number of bidders lies within this class. Our paper provides partial evidence for this conjecture: the lower bound of Section 6 on conservative auctions shows that the optimal
auction must offer sale prices higher than any bid value if the lower bound of Theorem 2 is tight, as is conjectured. It remains to show that optimal auctions otherwise only offer sale prices at bid values.

A Point-Distribution Index and Its Application to Sensor-Grouping in Wireless Sensor Networks

Yangfan Zhou, Haixuan Yang, Michael R.
Lyu, Edith C.-H. Ngai
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
{yfzhou, hxyang, lyu, chngai}@cse.cuhk.edu.hk

ABSTRACT

We propose ι, a novel index for the evaluation of point distribution. ι is the minimum distance between each pair of points normalized by the average distance between each pair of points. We find that a set of points that achieves a maximum value of ι results in a honeycomb structure. We propose that ι can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs). To validate this idea, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model. We show that locally maximizing ι at sensor nodes is a good approach to solving this problem, with an algorithm called Maximizing-ι Node-Deduction (MIND). Simulation results verify that MIND outperforms a greedy algorithm we design that exploits sensor redundancy. This demonstrates a good application of employing ι in coverage-related problems for WSNs.

Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Network Architecture and Design; C.3 [Special-Purpose and Application-Based Systems]: Real-time and Embedded Systems

General Terms: Theory, Algorithms, Design, Verification, Performance

1. INTRODUCTION

A wireless sensor network (WSN) consists of a large number of in-situ battery-powered sensor nodes. A WSN can collect data about physical phenomena of interest [1]. There are many potential applications of WSNs, including environmental monitoring and surveillance, etc.
[1][11]. In many application scenarios, WSNs are employed to conduct surveillance tasks in adverse, or even hostile, working environments. One major problem this causes is that sensor nodes are subject to failures. Therefore, fault tolerance of a WSN is critical.

One way to achieve fault tolerance is for a WSN to contain a large number of redundant nodes in order to tolerate node failures. It is vital to provide a mechanism by which redundant nodes can work in sleeping mode (i.e., major power-consuming units such as the transceiver of a redundant sensor node can be shut off) to save energy, and thus to prolong the network lifetime. Redundancy should be exploited as much as possible for the set of sensors that are currently taking charge of the surveillance work of the network area [6].

We find that the minimum distance between each pair of points normalized by the average distance between each pair of points serves as a good index to evaluate the distribution of the points. We call this index, denoted by ι, the normalized minimum distance. If points are movable, we find that maximizing ι results in a honeycomb structure. The honeycomb structure implies that coverage efficiency is best when each point represents a sensor node that is providing surveillance work. Employing ι in coverage-related problems is thus deemed promising.

This suggests that maximizing ι is a good approach to selecting the set of sensors that are currently taking charge of the surveillance work of the network area. To explore the effectiveness of employing ι in coverage-related problems, we formulate a sensor-grouping problem for high-redundancy WSNs. An algorithm called Maximizing-ι Node-Deduction (MIND) is proposed, in which redundant sensor nodes are removed to obtain a large ι for each set of sensors that are currently taking charge of the surveillance work of the network area. We also introduce another greedy solution called
Incremental Coverage Quality Algorithm (ICQA) for this problem, which serves as a benchmark to evaluate MIND.

The main contribution of this paper is twofold. First, we introduce a novel index ι for the evaluation of point distribution. We show that maximizing ι of a WSN results in low redundancy of the network. Second, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model. With the MIND algorithm, we show that locally maximizing ι among each sensor node and its neighbors is a good approach to solving this problem. This demonstrates a good application of employing ι in coverage-related problems.

The rest of the paper is organized as follows. In Section 2, we introduce our point-distribution index ι. We survey related work and formulate a sensor-grouping problem, together with a general sensing model, in Section 3. Section 4 investigates the application of ι in this grouping problem: we propose MIND for this problem and introduce ICQA as a benchmark. In Section 5, we present our simulation results, in which MIND and ICQA are compared. Section 6 provides concluding remarks.

2. THE NORMALIZED MINIMUM DISTANCE ι: A POINT-DISTRIBUTION INDEX

Suppose there are n points in a Euclidean space Ω. The coordinates of these points are denoted by x_i (i = 1, ..., n). It may be necessary to evaluate how these points are distributed. There are many metrics that achieve this goal. For example, the Mean Square Error from these points to their mean value can be employed to calculate how these points deviate from their mean (i.e., their center). In resource-sharing evaluation, the Global Fairness Index (GFI) is often employed to measure how evenly a resource is distributed among these points [8], where x_i represents the amount of resource belonging to point i. In WSNs, GFI is usually used to calculate how even the remaining energy of the sensor nodes is.

When n is larger than 2 and the points do
not all overlap (that the points all overlap means x_i = x_j, ∀ i, j = 1, 2, ..., n), we propose a novel index called the normalized minimum distance, namely ι, to evaluate the distribution of the points. ι is the minimum distance between each pair of points normalized by the average distance between each pair of points. It is calculated by:

ι = min(||x_i − x_j||) / μ   (∀ i, j = 1, 2, ..., n; and i ≠ j)   (1)

where ||x_i − x_j|| denotes the Euclidean distance between point i and point j in Ω, the min(·) function calculates the minimum distance between each pair of points, and μ is the average distance between each pair of points:

μ = (Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} ||x_i − x_j||) / (n(n − 1))   (2)

ι measures how well the points separate from one another. Obviously, ι lies in the interval [0, 1]. ι is equal to 1 if and only if n is equal to 3 and the three points form an equilateral triangle. ι is equal to zero if any two points overlap.

ι is a very interesting property of a set of points. If we consider each x_i (∀ i = 1, ..., n) a variable in Ω, what would these n points look like if ι were maximized? An algorithm is implemented to generate a topology in which ι is locally maximized (the algorithm can be found in [19]). We consider a 2-dimensional space. We select n = 10, 20, 30, ..., 100 and perform this algorithm. In order to avoid having the algorithm converge to a local optimum, we select different random seeds to generate the initial points 1000 times and keep the result with the largest ι when the algorithm converges. Figure 1 demonstrates what the resulting topology looks like for n = 20 as an example.

Suppose each point represents a sensor node. If the sensor coverage model is the Boolean coverage model [15][17][18][14] and the coverage radius of each node is the same, it is exciting to see that this topology results in the lowest redundancy
because the Voronoi diagram [2] formed by these nodes (a Voronoi diagram formed by a set of nodes partitions a space into a set of convex polygons such that points inside a polygon are closest to only one particular node) is a honeycomb-like structure¹.

¹This is how the base stations of a wireless cellular network are deployed, and why such a network is called a cellular one.

[Figure 1: Node Number = 20, ι = 0.435376]

This suggests that ι may be employed to solve problems related to the sensor coverage of an area. In WSNs, it is desirable that the active sensor nodes performing the surveillance task separate from one another. Under the constraint that the sensing area should be covered, the more each node separates from the others, the less the redundancy of the coverage is. ι indicates the quality of such separation. It should be useful for approaches to sensor-coverage-related problems. In the following discussions, we will show the effectiveness of employing ι in the sensor-grouping problem.

3. THE SENSOR-GROUPING PROBLEM

In many application scenarios, to achieve fault tolerance, a WSN contains a large number of redundant nodes in order to tolerate node failures. A node sleeping-working schedule scheme is therefore highly desired to exploit the redundancy of working sensors and let as many nodes as possible sleep. Much work in the literature addresses this issue [6]. Yan et al. introduced a differentiated service in which a sensor node finds out its responsible working duration with the cooperation of its neighbors to ensure the coverage of sampled points [17]. Ye et al. developed PEAS, in which sensor nodes wake up randomly over time, probe their neighboring nodes, and decide whether they should begin to take charge of surveillance work [18]. Xing et al. exploited a probabilistic distributed detection model with a protocol called Coordinating Grid (Co-Grid)
[16]. Wang et al. designed an approach called Coverage Configuration Protocol (CCP), which introduced the notion that the coverage degree of the intersection points of neighboring nodes' sensing perimeters indicates the coverage of a convex region [15]. In our recent work [7], we also provided a sleeping configuration protocol, namely SSCP, in which the sleeping eligibility of a sensor node is determined by a local Voronoi diagram. SSCP can provide different levels of redundancy to maintain different requirements of fault tolerance.

The major feature of the aforementioned protocols is that they employ online distributed and localized algorithms in which a sensor node determines its sleeping eligibility and/or sleeping time based on the coverage requirement of its sensing area, with some information provided by its neighbors.

Another major approach to the sensor-node sleeping-working scheduling issue is to group sensor nodes. The sensor nodes in a network are divided into several disjoint sets, each of which is able to maintain the required area-surveillance work. The sensor nodes are scheduled according to which set they belong to. These sets work successively: only one set of sensor nodes works at any time. We call this issue the sensor-grouping problem.

The major advantage of this approach is that it avoids the overhead caused by the processes of coordination among sensor nodes to decide whether a sensor node is a candidate to sleep or work and how long it should sleep or work. Such processes must be performed from time to time during the lifetime of a network in many online distributed and localized algorithms, and the large overhead they cause is the main drawback of those algorithms. On the contrary, roughly speaking, this approach groups sensor nodes once and schedules when each set of sensor nodes should be on duty. It does not require frequent decision-making on working/sleeping
eligibility².

In [13] by Slijepcevic et al., the sensing area is divided into regions. Sensor nodes are grouped with the most-constrained least-constraining algorithm. It is a greedy algorithm in which the priority of selecting a given sensor is determined by how many uncovered regions this sensor covers and the redundancy caused by this sensor. In [5] by Cardei et al., disjoint sensor sets are modeled as disjoint dominating sets. Although maximum dominating-set computation is NP-complete, the authors proposed a graph-coloring-based algorithm. Cardei et al. also studied a similar problem in the domain of covering target points in [4]. The NP-completeness of the problem is proved, and a heuristic that computes the sets is proposed. These algorithms are centralized solutions to the sensor-grouping problem.

However, global information (e.g., the location of each in-network sensor node) of a large-scale WSN is very expensive to obtain online. It is also usually infeasible to obtain such information before the sensor nodes are deployed. For example, sensor nodes are usually deployed in a random manner, and the location of each in-network sensor node is determined only after a node is deployed. The solution to the sensor-grouping problem should therefore be based only on the locally obtainable information of a sensor node. That is to say, nodes should determine which group they should join in a fully distributed way. Here, locally obtainable information refers to a node's local information and the information that can be directly obtained from its adjacent nodes, i.e., nodes within its communication range.

In Subsection 3.1, we provide a general problem formulation of the sensor-grouping problem, in which the distributed-solution requirement is formulated. This is followed by a discussion in Subsection 3.2 of a general sensing model, which serves as a given condition of the sensor-grouping problem formulation.

To facilitate our discussions, the notations used in the following discussions
are described as follows.\n\u2022 n: The number of in-network sensor nodes.\n\u2022 S(j) (j = 1, 2, ..., m): The jth set of sensor nodes, where m is the number of sets.\n\u2022 L(i) (i = 1, 2, ..., n): The physical location of node i.\n\u2022 \u03c6: The area monitored by the network, i.e., the sensing area of the network.\n\u2022 R: The sensing radius of a sensor node.\nWe assume that a sensor node is responsible for monitoring only a circular area centered at the node with radius R.\nThis is a usual assumption in work that addresses sensor-coverage-related problems.\nWe call this circular area the sensing area of a node.\n3.1 Problem Formulation We assume that each sensor node knows its approximate physical location.\nApproximate location information is obtainable if each sensor node carries a GPS receiver or if a localization algorithm is employed (e.g., [3]).\n2 Note that if some nodes die, a re-grouping process might also be performed to exploit the remaining nodes in a set of sensor nodes.\nHow to provide this mechanism is beyond the scope of this paper and yet to be explored.\nProblem 1.\nGiven: \u2022 The set of each sensor node i's sensing neighbors N(i) and the location of each member of N(i); \u2022 A sensing model which quantitatively describes how a point P in area \u03c6 is covered by the sensor nodes that are responsible for monitoring this point.\nWe call this quantity the coverage quality of P.
\u2022 The coverage quality requirement in \u03c6, denoted by s.\nWhen the coverage quality of a point is larger than this threshold, we say this point is covered.\nFor each sensor node i, make a decision on which group S(j) it should join so that: \u2022 Area \u03c6 can be covered by the sensor nodes in each set S(j); \u2022 m, the number of sets S(j), is maximized.\nIn this formulation, we call the sensor nodes within a circular area centered at sensor node i with radius 2 \u00b7 R the sensing neighbors of node i.\nThis is because the sensor nodes in this area, together with node i, may cooperate to ensure the coverage of a point inside node i's sensing area.\nWe assume that the communication range of a sensor node is larger than 2 \u00b7 R, which is also a general assumption in work that addresses sensor-coverage-related problems.\nThat is to say, the first given condition in Problem 1 is information that can be obtained directly from a node's adjacent nodes.\nIt is therefore locally obtainable information.\nThe last two given conditions in this problem formulation can be programmed into a node before it is deployed, or by a node-programming protocol (e.g., [9]) during network runtime.\nTherefore, all the given conditions can easily be obtained by a sensor-grouping scheme with a fully distributed implementation.\nWe reify this problem with a realistic sensing model in the next subsection.\n3.2 A General Sensing Model As WSNs are usually employed to monitor possible events in a given area, it is a design requirement that an event occurring in the network area must (or at least may) be successfully detected by sensors.\nThis issue is usually formulated as how to ensure that an event signal emitted at an arbitrary point in the network area can be detected by sensor nodes.\nObviously, a sensing model is required so that how a point in the network area is covered can be modeled and quantified.\nThus the coverage quality of a WSN can be
evaluated.\nDifferent applications of WSNs employ different types of sensors, which surely have widely different theoretical and physical characteristics.\nTherefore, to fulfill different application requirements, different sensing models should be constructed based on the characteristics of the sensors employed.\nA simple theoretical sensing model is the Boolean sensing model [15][18][17][14], which assumes that a sensor node can always detect an event occurring in the sensing area it is responsible for.\nHowever, most sensors detect events according to the sensed signal strength.\nEvent signals usually fade in relation to the physical distance between the event and the sensor: the larger the distance, the weaker the event signal that can be sensed by the sensor, which reduces the probability that the event is successfully detected.\nAs event signals in WSNs are usually electromagnetic, acoustic, or photic, their strength decays polynomially with increasing transmit distance.\nSpecifically, the signal strength E(d) of an event that is received by a sensor node satisfies:\nE(d) = \u03b1 \/ d^\u03b2, (3)\nwhere d is the physical distance from the event to the sensor node; \u03b1 is related to the signal strength emitted by the event; and \u03b2 is the signal fading factor, typically a positive number larger than or equal to 2.\nUsually, \u03b1 and \u03b2 are considered constants.\nBased on this notion, researchers propose a more realistic collaborative sensing model to capture application requirements: area coverage can be maintained by a set of collaborating sensor nodes.\nFor a point with physical location L, the point is considered covered by the collaboration of i sensors (denoted by k1, ..., ki) if and only if the following two equations hold [7][10][12]:\n\u2200j = 1, ..., i: ||L(kj) \u2212 L|| < R; (4)\nC(L) = \u03a3_{j=1..i} E(||L(kj) \u2212 L||) > s. (5)\nC(L) is regarded as the coverage quality of location L in the network area [7][10][12].\nHowever, we notice that defining the sensibility as the sum of the signal strengths sensed by each collaborating sensor implies a very special class of applications: applications must employ the sum of the signal strengths for decision-making.\nTo capture more general realistic application requirements, we modify the definition given in Equation (5).\nThe model we adopt in this paper is described in detail as follows.\nWe consider that the probability P(L, kj) that an event located at L is detected by sensor kj is related to the signal strength sensed by kj.\nFormally,\nP(L, kj) = \u03b3E(d) = \u03b4 \/ (||L(kj) \u2212 L|| \/ \u03bb + 1)^\u03b2, (6)\nwhere \u03b3 is a constant and \u03b4 = \u03b3\u03b1 is a constant too; \u03bb normalizes the distance to a proper scale, and the +1 term avoids an infinite value of P(L, kj).\nThe probability that an event located at L is detected by any of the collaborating sensors satisfying Equation (4) is:\nP(L) = 1 \u2212 \u03a0_{j=1..i} (1 \u2212 P(L, kj)). (7)\nAs the detection probability P(L) reasonably determines how an event occurring at location L can be detected by the network, it is a good measure of the coverage quality of location L in a WSN.\nSpecifically, Equation (5) is modified to:\nC(L) = P(L) = 1 \u2212 \u03a0_{j=1..i} [1 \u2212 \u03b4 \/ (||L(kj) \u2212 L|| \/ \u03bb + 1)^\u03b2] > s. (8)\nTo sum up, we consider a point at location L covered if Equations (4) and (8) hold.\n4.\nMAXIMIZING-\u03b9 NODE-DEDUCTION ALGORITHM FOR SENSOR-GROUPING PROBLEM Before we proceed to introduce algorithms that solve the sensor-grouping problem, let us define the margin (denoted by \u03b8) of an area \u03c6 monitored by the network as the band-like marginal area of \u03c6 in which every point on the outer perimeter of \u03b8 is at distance \u03c1 from the inner perimeter of \u03b8.\n\u03c1 is called the margin length.\nIn a practical network, sensor nodes are usually evenly deployed in the network area.\nObviously, the number of sensor nodes that can sense an event occurring in the margin of the network is smaller than the number of sensor nodes that can sense an event occurring elsewhere in the network.\nBased on this consideration, in our algorithm design we ensure the coverage quality of the network area except the margin.\nThe information on \u03c6 and \u03c1 is network-wide.\nEach in-network sensor node can be pre-programmed or informed online about \u03c6 and \u03c1, and can thus calculate whether a point in its sensing area is in the margin or not.\n4.1 Maximizing-\u03b9 Node-Deduction Algorithm The node-deduction process of our Maximizing-\u03b9 Node-Deduction Algorithm (MIND) is simple.\nA node i greedily maximizes \u03b9 of the sub-network composed of itself, its ungrouped sensing neighbors, and the neighbors that are in the same group as itself.\nUnder the constraint that the coverage quality of its sensing area must be ensured, node i deletes nodes in this sub-network one by one.\nA candidate for pruning must satisfy the following: \u2022 It is an ungrouped node.\n\u2022 Its deletion will not result in uncovered points inside the sensing area of i.\nA candidate is deleted if its deletion results in the largest \u03b9 of the sub-network compared to the deletion of any other candidate.\nThis node-deduction process
continues until no candidate can be found.\nThen all the ungrouped sensing neighbors that were not deleted are grouped into the same group as node i.\nWe call the sensing neighbors that are in the same group as node i the group sensing neighbors of node i.\nWe then call node i a finished node, meaning that it has finished the above procedure and its sensing area is covered.\nNodes that have not yet finished this procedure are called unfinished nodes.\nThe above procedure is initiated at a randomly selected node that is not in the margin.\nThis node is grouped into the first group.\nIt calculates its resulting group sensing neighbors based on the above procedure and informs these group sensing neighbors that they are selected into the group.\nIt then hands the procedure over to the unfinished group sensing neighbor that is farthest from itself.\nThis group sensing neighbor continues the procedure until no unfinished neighbor can be found, at which point the first group is formed (an algorithmic description of this procedure can be found in [19]).\nAfter a group is formed, another randomly selected ungrouped node assigns itself to the second group and initiates the above procedure.\nIn this way, groups are formed one by one.\nWhen a node involved in this algorithm finds that the coverage quality of its sensing area, except the part that overlaps the network margin, cannot be ensured even if all its ungrouped sensing neighbors are grouped into the same group as itself, the algorithm stops.\nMIND is based on the locally obtainable information of sensor nodes.\nIt is a distributed algorithm that serves as an approximate solution to Problem 1.\n4.2 Incremental Coverage Quality Algorithm: A Benchmark for MIND To evaluate the effectiveness of introducing \u03b9 into the sensor-grouping problem, another algorithm for the sensor-grouping problem, called the Incremental Coverage Quality Algorithm (ICQA), is designed.\nOur aim is to evaluate how an idea, i.e., MIND, based on
locally maximizing \u03b9 performs.\nIn ICQA, the node-selecting process is as follows.\nA node i greedily selects ungrouped sensing neighbors into the same group as itself one by one, informing each neighbor that it is selected into the group.\nThe criteria are: \u2022 The selected neighbor is responsible for providing surveillance of some uncovered parts of node i's sensing area (i.e., the coverage quality requirement of those parts is not fulfilled if this neighbor is not selected).\n\u2022 The selected neighbor results in the highest improvement of the coverage quality of the neighbor's sensing area.\nMathematically, the improvement of the coverage quality should be the integral of the improvements over all points inside the neighbor's sensing area.\nA numerical approximation is employed to calculate this improvement; details are presented in our simulation study.\nThis node-selecting process continues until the sensing area of node i is entirely covered.\nIn this way, node i's group sensing neighbors are found.\nThe procedure is handed over in the same way as in MIND, and new groups are thus formed one by one.\nThe condition under which ICQA stops is the same as for MIND.\nICQA is also based on the locally obtainable information of sensor nodes, and is also a distributed algorithm that serves as an approximate solution to Problem 1.\n5.\nSIMULATION RESULTS To evaluate the effectiveness of employing \u03b9 in the sensor-grouping problem, we build simulated surveillance networks.\nWe employ MIND and ICQA to group the in-network sensor nodes, and compare the grouping results with respect to how many groups each algorithm finds and how the resulting groups perform.\nDetailed settings of the simulated networks are shown in Table 1.\nIn the simulated networks, sensor nodes are randomly deployed in a uniform manner in the network area.\nTable 1: The settings of the simulation networks\nArea of sensor field: 400m * 400m\n\u03c1: 20m\nR: 80m\n\u03b1, \u03b2, \u03b3, \u03bb: 1.0, 2.0, 1.0, 100.0\ns: 0.6\nFor evaluating the coverage quality of the sensing area of a node, we divide the sensing area of a node into several regions and regard the coverage quality of the central point of each region as representative of the coverage quality of the region.\nThis is a numerical approximation.\nA larger number of such regions results in a better approximation.\nAs sensor nodes have low computational capacity, there is a tradeoff between the number of such regions and the precision of the resulting coverage quality of the sensing area of a node.\nIn our simulation study, we set this number to 12.\nFor evaluating the improvement of coverage quality in ICQA, we sum the improvements at each region center as the total improvement.\n5.1 Number of Groups Formed by MIND and ICQA We set the total in-network node number n to different values and let the networks perform MIND and ICQA.\nFor each n, simulations are run with several random seeds to generate different networks, and the results are averaged.\nFigure 2 shows the group numbers found in networks with different n's.
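The region-center approximation above can be sketched in code. This is a minimal illustration rather than the authors' implementation: the constants follow Table 1 (with the distance-scale constant taken as 100.0), while the 12-sector layout and the mid-radius sample points are our own assumptions, since the paper does not specify how the regions are laid out.

```python
import math

# Constants from Table 1; delta = gamma * alpha.  The 12-sector layout and
# the choice of sample points at radius R/2 are illustrative assumptions.
ALPHA, BETA, GAMMA, SCALE = 1.0, 2.0, 1.0, 100.0
DELTA = GAMMA * ALPHA
R, S_REQ = 80.0, 0.6

def detect_prob(sensor, point):
    """P(L, kj) from Equation (6): delta / (d / scale + 1)^beta."""
    d = math.dist(sensor, point)
    return DELTA / (d / SCALE + 1.0) ** BETA

def coverage_quality(point, sensors):
    """C(L) from Equation (8), using only sensors within R of the point."""
    miss = 1.0
    for s in sensors:
        if math.dist(s, point) < R:          # Equation (4)
            miss *= 1.0 - detect_prob(s, point)
    return 1.0 - miss

def sensing_area_covered(node, sensors, n_regions=12):
    """Check a node's sensing disk by sampling one center per region."""
    cx, cy = node
    for k in range(n_regions):               # centers of 12 equal sectors
        ang = 2.0 * math.pi * (k + 0.5) / n_regions
        p = (cx + 0.5 * R * math.cos(ang), cy + 0.5 * R * math.sin(ang))
        if coverage_quality(p, sensors) <= S_REQ:
            return False
    return True
```

With these constants, a single sensor cannot cover its own sensing disk (the sample points at distance R/2 fall below s = 0.6), which is why collaboration between sensing neighbors is needed.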
Figure 2: The number of groups found by MIND and ICQA (x-axis: total in-network node number, 500-2000; y-axis: total number of groups found).\nWe can see that MIND always outperforms ICQA in terms of the number of groups formed.\nObviously, the larger the number of groups that can be formed, the more the redundancy of each group is exploited.\nThis output shows that an approach like MIND, which aims to maximize \u03b9 of the resulting topology, can exploit redundancy well.\nAs an example, for n = 1500, the results of five networks are listed in Table 2.\nTable 2: The grouping results of five networks with n = 1500\nNet | MIND group number | ICQA group number | MIND average \u03b9 | ICQA average \u03b9\n1 | 34 | 31 | 0.145514 | 0.031702\n2 | 33 | 30 | 0.145036 | 0.036649\n3 | 33 | 31 | 0.156483 | 0.033578\n4 | 32 | 31 | 0.152671 | 0.029030\n5 | 33 | 32 | 0.146560 | 0.033109\nThe difference between the average \u03b9 of the groups in each network shows that the groups formed by MIND result in topologies with larger \u03b9's.\nThis demonstrates that \u03b9 is a good indicator of redundancy across different networks.\n5.2 The Performance of the Resulting Groups Although MIND forms more groups than ICQA does, which implies a longer network lifetime, another important consideration is how the groups formed by MIND and ICQA perform.\nWe let 10000 events occur at random locations in the network area except the margin.\nWe count how many events happen at locations where the coverage quality is less than the requirement s = 0.6 while each resulting group is conducting surveillance work (we call the number of such events the failure number of the group).\nFigure 3 shows the average failure numbers of the resulting groups for different node numbers.\nWe can see that the groups formed by MIND outperform those formed by ICQA, as the groups formed by MIND result in lower failure numbers.\nThis further demonstrates that MIND is a good approach to the sensor-grouping problem.
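The evaluation above can be sketched as a Monte Carlo estimate of a group's failure number. This is an illustrative sketch rather than the paper's code: the field size, margin, and model constants follow Table 1 (with the distance-scale constant taken as 100.0), and events are drawn uniformly outside the margin.

```python
import math
import random

# Constants from Table 1; delta = gamma * alpha = 1.0, and 100.0 is the
# sensing model's distance-scale constant (an assumption in this sketch).
DELTA, BETA, R, S_REQ = 1.0, 2.0, 80.0, 0.6
FIELD, MARGIN = 400.0, 20.0

def coverage_quality(point, group):
    """C(L) from Equation (8) for the sensors of one on-duty group."""
    miss = 1.0
    for sensor in group:
        d = math.dist(sensor, point)
        if d < R:                                  # Equation (4)
            miss *= 1.0 - DELTA / (d / 100.0 + 1.0) ** BETA
    return 1.0 - miss

def failure_number(group, n_events=10000, seed=1):
    """Count random events (outside the margin) that are under-covered."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_events):
        x = rng.uniform(MARGIN, FIELD - MARGIN)    # events avoid the margin
        y = rng.uniform(MARGIN, FIELD - MARGIN)
        if coverage_quality((x, y), group) <= S_REQ:
            failures += 1
    return failures
```

A group laid out as a dense grid (spacing well under R) yields a failure number of zero, while a sparse group fails often; comparing the averages over the groups produced by MIND and ICQA reproduces the style of comparison shown in Figure 3.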
Figure 3: The failure numbers of MIND and ICQA (x-axis: total in-network node number, 500-2000; y-axis: average failure number).\n6.\nCONCLUSION This paper proposes \u03b9, a novel index for the evaluation of point-distribution.\n\u03b9 is the minimum distance between each pair of points normalized by the average distance between each pair of points.\nWe find that a set of points that achieves the maximum value of \u03b9 results in a honeycomb structure.\nWe propose that \u03b9 can serve as a good index to evaluate the distribution of points, which can be employed in coverage-related problems in wireless sensor networks (WSNs).\nWe set out to validate this idea by applying \u03b9 to a sensor-grouping problem.\nWe formulate a general sensor-grouping problem for WSNs and provide a general sensing model.\nWith an algorithm called Maximizing-\u03b9 Node-Deduction (MIND), we show that maximizing \u03b9 at sensor nodes is a good approach to solving this problem.\nSimulation results verify that MIND outperforms a greedy algorithm we design that exploits sensor redundancy, in terms of both the number and the performance of the groups formed.\nThis demonstrates a good application of employing \u03b9 in coverage-related problems.\n7.\nACKNOWLEDGEMENT The work described in this paper was substantially supported by two grants, RGC Project No. CUHK4205\/04E and UGC Project No. AoE\/E-01\/99, of the Hong Kong Special Administrative Region, China.\n8.\nREFERENCES [1] I. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci.\nA survey on wireless sensor networks.\nIEEE Communications Magazine, 40(8):102-114, 2002.\n[2] F. Aurenhammer.\nVoronoi diagrams: a survey of a fundamental geometric data structure.\nACM Computing Surveys, 23(3):345-405, September 1991.\n[3] N. Bulusu, J. Heidemann, and D. Estrin.\nGPS-less low-cost outdoor localization for very small devices.\nIEEE Personal Communications, October 2000.\n[4] M.
Cardei and D.-Z. Du.\nImproving wireless sensor network lifetime through power aware organization.\nACM Wireless Networks, 11(3), May 2005.\n[5] M. Cardei, D. MacCallum, X. Cheng, M. Min, X. Jia, D. Li, and D.-Z. Du.\nWireless sensor networks with energy efficient organization.\nJournal of Interconnection Networks, 3(3-4), December 2002.\n[6] M. Cardei and J. Wu.\nCoverage in wireless sensor networks.\nIn Handbook of Sensor Networks (eds. M. Ilyas and I. Mahgoub), CRC Press, 2004.\n[7] X. Chen and M. R. Lyu.\nA sensibility-based sleeping configuration protocol for dependable wireless sensor networks.\nCSE Technical Report, The Chinese University of Hong Kong, 2005.\n[8] R. Jain, W. Hawe, and D. Chiu.\nA quantitative measure of fairness and discrimination for resource allocation in shared computer systems.\nTechnical Report DEC-TR-301, September 1984.\n[9] S. S. Kulkarni and L. Wang.\nMNP: Multihop network reprogramming service for sensor networks.\nIn Proc. of the 25th International Conference on Distributed Computing Systems (ICDCS), June 2005.\n[10] B. Liu and D. Towsley.\nA study on the coverage of large-scale sensor networks.\nIn Proc. of the 1st IEEE International Conference on Mobile Ad-hoc and Sensor Systems, Fort Lauderdale, FL, October 2004.\n[11] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and J. Anderson.\nWireless sensor networks for habitat monitoring.\nIn Proc. of the ACM International Workshop on Wireless Sensor Networks and Applications, 2002.\n[12] S. Megerian, F. Koushanfar, G. Qu, G. Veltri, and M. Potkonjak.\nExposure in wireless sensor networks: Theory and practical solutions.\nWireless Networks, 8, 2002.\n[13] S. Slijepcevic and M. Potkonjak.\nPower efficient organization of wireless sensor networks.\nIn Proc. of the IEEE International Conference on Communications (ICC), volume 2, Helsinki, Finland, June 2001.\n[14] D. Tian and N. D. Georganas.\nA node scheduling scheme for energy conservation in large wireless sensor networks.\nWireless Communications and Mobile Computing, 3:272-290, May 2003.\n[15] X. Wang, G. Xing, Y. Zhang, C. Lu, R. Pless, and C. Gill.\nIntegrated coverage and connectivity configuration in wireless sensor networks.\nIn Proc. of the 1st ACM International Conference on Embedded Networked Sensor Systems (SenSys), Los Angeles, CA, November 2003.\n[16] G. Xing, C. Lu, R. Pless, and J. A. O'Sullivan.\nCo-Grid: an efficient coverage maintenance protocol for distributed sensor networks.\nIn Proc. of the 3rd International Symposium on Information Processing in Sensor Networks (IPSN), Berkeley, CA, April 2004.\n[17] T. Yan, T. He, and J. A. Stankovic.\nDifferentiated surveillance for sensor networks.\nIn Proc. of the 1st ACM International Conference on Embedded Networked Sensor Systems (SenSys), Los Angeles, CA, November 2003.\n[18] F. Ye, G. Zhong, J. Cheng, S. Lu, and L. Zhang.\nPEAS: A robust energy conserving protocol for long-lived sensor networks.\nIn Proc. of the 23rd International Conference on Distributed Computing Systems (ICDCS), Providence, Rhode Island, May 2003.\n[19] Y. Zhou, H. Yang, and M. R.
Lyu.\nA point-distribution index and its application in coverage-related problems.\nCSE Technical Report, The Chinese University of Hong Kong, 2006.","lvl-3":"A Point-Distribution Index and Its Application to Sensor-Grouping in Wireless Sensor Networks\nABSTRACT\nWe propose \u03b9, a novel index for evaluation of point-distribution.\n\u03b9 is the minimum distance between each pair of points normalized by the average distance between each pair of points.\nWe find that a set of points that achieve a maximum value of \u03b9 result in a honeycomb structure.\nWe propose that \u03b9 can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs).\nTo validate this idea, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model.\nWe show that locally maximizing \u03b9 at sensor nodes is a good approach to solve this problem with an algorithm called Maximizing-\u03b9 Node-Deduction (MIND).\nSimulation results verify that MIND outperforms a greedy algorithm that exploits sensor-redundancy we design.\nThis demonstrates a good application of employing \u03b9 in coverage-related problems for WSNs.\n1.\nINTRODUCTION\nA wireless sensor network (WSN) consists of a large number of in-situ battery-powered sensor nodes.\nA WSN can collect the data about physical phenomena of interest [1].\nThere are many potential applications of WSNs, including environmental monitoring and surveillance, etc.
[1] [11].\nIn many application scenarios, WSNs are employed to conduct surveillance tasks in adverse, or even worse, in hostile working environments.\nOne major problem caused is that sensor nodes are subjected to failures.\nTherefore, fault tolerance of a WSN is critical.\nOne way to achieve fault tolerance is that a WSN should contain a large number of redundant nodes in order to tolerate node failures.\nIt is vital to provide a mechanism that redundant nodes can be working in sleeping mode (i.e., major power-consuming units such as the transceiver of a redundant sensor node can be shut off) to save energy, and thus to prolong the network lifetime.\nRedundancy should be exploited as much as possible for the set of sensors that are currently taking charge in the surveillance work of the network area [6].\nWe find that the minimum distance between each pair of points normalized by the average distance between each pair of points serves as a good index to evaluate the distribution of the points.\nWe call this index, denoted by \u03b9, the normalized minimum distance.\nIf points are moveable, we find that maximizing \u03b9 results in a honeycomb structure.\nThe honeycomb structure poses that the coverage efficiency is the best if each point represents a sensor node that is providing surveillance work.\nEmploying \u03b9 in coverage-related problems is thus deemed promising.\nThis enlightens us that maximizing \u03b9 is a good approach to select a set of sensors that are currently taking charge in the surveillance work of the network area.\nTo explore the effectiveness of employing \u03b9 in coverage-related problems, we formulate a sensor-grouping problem for high-redundancy WSNs.\nAn algorithm called Maximizing-\u03b9 Node-Deduction (MIND) is proposed in which redundant sensor nodes are removed to obtain a large \u03b9 for each set of sensors that are currently taking charge in the surveillance work of the network area.\nWe also introduce another greedy solution called Incremental Coverage Quality
Algorithm (ICQA) for this problem, which serves as a benchmark to evaluate MIND.\nThe main contribution of this paper is twofold.\nFirst, we introduce a novel index \u03b9 for evaluation of point-distribution.\nWe show that maximizing \u03b9 of a WSN results in low redundancy of the network.\nSecond, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model.\nWith the MIND algorithm we show that locally maximizing \u03b9 among each sensor node and its neighbors is a good approach to solve this problem.\nThis demonstrates a good application of employing \u03b9 in coverage-related problems.\nThe rest of the paper is organized as follows.\nIn Section 2, we introduce our point-distribution index \u03b9.\nWe survey related work and formulate a sensor-grouping problem together with a general sensing model in Section 3.\nSection 4 investigates the application of \u03b9 in this grouping problem.\nWe propose MIND for this problem and introduce ICQA as a benchmark.\nIn Section 5, we present our simulation results in which MIND and ICQA are compared.\nSection 6 provides concluding remarks.\n2.\nTHE NORMALIZED MINIMUM DISTANCE \u03b9: A POINT-DISTRIBUTION INDEX\n3.\nTHE SENSOR-GROUPING PROBLEM\n3.1 Problem Formulation\n3.2 A General Sensing Model\n4.\nMAXIMIZING-\u03b9 NODE-DEDUCTION ALGORITHM FOR SENSOR-GROUPING PROBLEM\n4.1 Maximizing-\u03b9 Node-Deduction Algorithm\n4.2 Incremental Coverage Quality Algorithm: A Benchmark for MIND\n5.\nSIMULATION RESULTS\n5.1 Number of Groups Formed by MIND and ICQA\n5.2 The Performance of the Resulting Groups\n6.\nCONCLUSION\nThis paper proposes \u03b9, a novel index for evaluation of point-distribution.\n\u03b9 is the minimum distance between each pair of points normalized by the average distance between each pair of points.\nWe find that a set of points that achieve a maximum value of \u03b9 result in a honeycomb structure.\nWe propose that \u03b9 can serve as a good index to evaluate the distribution of the points, which can be employed in
coverage-related problems in wireless sensor networks (WSNs).\nWe set out to validate this idea by employing \u03b9 to a sensor-grouping problem.\nWe formulate a general sensor-grouping problem for WSNs and provide a general sensing model.\nWith an algorithm called Maximizing-\u03b9 Node-Deduction (MIND), we show that maximizing \u03b9 at sensor nodes is a good approach to solve this problem.\nSimulation results verify that MIND outperforms a greedy algorithm that exploits sensor-redundancy we design in terms of the number and the performance of the groups formed.\nThis demonstrates a good application of employing \u03b9 in coverage-related problems.","lvl-4":"A Point-Distribution Index and Its Application to Sensor-Grouping in Wireless Sensor Networks\nABSTRACT\nWe propose \u03b9, a novel index for evaluation of point-distribution.\n\u03b9 is the minimum distance between each pair of points normalized by the average distance between each pair of points.\nWe find that a set of points that achieve a maximum value of \u03b9 result in a honeycomb structure.\nWe propose that \u03b9 can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs).\nTo validate this idea, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model.\nWe show that locally maximizing \u03b9 at sensor nodes is a good approach to solve this problem with an algorithm called Maximizing-\u03b9 Node-Deduction (MIND).\nSimulation results verify that MIND outperforms a greedy algorithm that exploits sensor-redundancy we design.\nThis demonstrates a good application of employing \u03b9 in coverage-related problems for WSNs.\n1.\nINTRODUCTION\nA wireless sensor network (WSN) consists of a large number of in-situ battery-powered sensor nodes.\nIn many application scenarios, WSNs are employed to conduct surveillance tasks in adverse, or even worse, in hostile working environments.\nOne major problem caused is that sensor nodes are
subjected to failures.\nRedundancy should be exploited as much as possible for the set of sensors that are currently taking charge in the surveillance work of the network area [6].\nWe find that the minimum distance between each pair of points normalized by the average distance between each pair of points serves as a good index to evaluate the distribution of the points.\nWe call this index, denoted by \u03b9, the normalized minimum distance.\nIf points are moveable, we find that maximizing \u03b9 results in a honeycomb structure.\nThe honeycomb structure poses that the coverage efficiency is the best if each point represents a sensor node that is providing surveillance work.\nEmploying \u03b9 in coverage-related problems is thus deemed promising.\nThis enlightens us that maximizing \u03b9 is a good approach to select a set of sensors that are currently taking charge in the surveillance work of the network area.\nTo explore the effectiveness of employing \u03b9 in coverage-related problems, we formulate a sensor-grouping problem for high-redundancy WSNs.\nWe also introduce another greedy solution called Incremental Coverage Quality Algorithm (ICQA) for this problem, which serves as a benchmark to evaluate MIND.\nThe main contribution of this paper is twofold.\nFirst, we introduce a novel index \u03b9 for evaluation of point-distribution.\nWe show that maximizing \u03b9 of a WSN results in low redundancy of the network.\nSecond, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model.\nWith the MIND algorithm we show that locally maximizing \u03b9 among each sensor node and its neighbors is a good approach to solve this problem.\nThis demonstrates a good application of employing \u03b9 in coverage-related problems.\nThe rest of the paper is organized as follows.\nIn Section 2, we introduce our point-distribution index \u03b9.\nWe survey related work and formulate a sensor-grouping problem together with a general sensing model in Section 3.\nSection 4 investigates the application of
\u03b9 in this grouping problem.\nWe propose MIND for this problem and introduce ICQA as a benchmark.\nIn Section 5, we present our simulation results in which MIND and ICQA are compared.\nSection 6 provides concluding remarks.\n6.\nCONCLUSION\nThis paper proposes \u03b9, a novel index for evaluation of point-distribution.\n\u03b9 is the minimum distance between each pair of points normalized by the average distance between each pair of points.\nWe find that a set of points that achieve a maximum value of \u03b9 result in a honeycomb structure.\nWe propose that \u03b9 can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs).\nWe set out to validate this idea by employing \u03b9 to a sensor-grouping problem.\nWe formulate a general sensor-grouping problem for WSNs and provide a general sensing model.\nWith an algorithm called Maximizing-\u03b9 Node-Deduction (MIND), we show that maximizing \u03b9 at sensor nodes is a good approach to solve this problem.\nThis demonstrates a good application of employing \u03b9 in coverage-related problems.","lvl-2":"A Point-Distribution Index and Its Application to Sensor-Grouping in Wireless Sensor Networks\nABSTRACT\nWe propose \u03b9, a novel index for evaluation of point-distribution.\n\u03b9 is the minimum distance between each pair of points normalized by the average distance between each pair of points.\nWe find that a set of points that achieve a maximum value of \u03b9 result in a honeycomb structure.\nWe propose that \u03b9 can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs).\nTo validate this idea, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model.\nWe show that locally maximizing \u03b9 at sensor nodes is a good approach to solve this problem with an algorithm called Maximizing-\u03b9 Node-Deduction (MIND).\nSimulation results verify that MIND outperforms a
greedy algorithm that we design to exploit sensor redundancy.\nThis demonstrates a good application of employing \u03b9 in coverage-related problems for WSNs.\n1.\nINTRODUCTION\nA wireless sensor network (WSN) consists of a large number of in-situ battery-powered sensor nodes.\nA WSN can collect data about physical phenomena of interest [1].\nThere are many potential applications of WSNs, including environmental monitoring and surveillance [1] [11].\nIn many application scenarios, WSNs are employed to conduct surveillance tasks in adverse, or even hostile, working environments.\nOne major problem this causes is that sensor nodes are subject to failures.\nTherefore, fault tolerance of a WSN is critical.\nOne way to achieve fault tolerance is for a WSN to contain a large number of redundant nodes in order to tolerate node failures.\nIt is vital to provide a mechanism by which redundant nodes can work in sleeping mode (i.e., major power-consuming units such as the transceiver of a redundant sensor node can be shut off) to save energy, and thus to prolong the network lifetime.\nRedundancy should be exploited as much as possible for the set of sensors that are currently taking charge of the surveillance work of the network area [6].\nWe find that the minimum distance between each pair of points normalized by the average distance between each pair of points serves as a good index to evaluate the distribution of the points.\nWe call this index, denoted by \u03b9, the normalized minimum distance.\nIf points are movable, we find that maximizing \u03b9 results in a honeycomb structure.\nThe honeycomb structure implies that coverage efficiency is best if each point represents a sensor node that is providing surveillance work.\nEmploying \u03b9 in coverage-related problems is thus deemed promising.\nThis suggests that maximizing \u03b9 is a good approach to select the set of sensors that are currently taking charge of the surveillance work of the network area.\nTo 
explore the effectiveness of employing \u03b9 in coverage-related problems, we formulate a sensor-grouping problem for high-redundancy WSNs.\nAn algorithm called Maximizing-\u03b9 Node-Deduction (MIND) is proposed in which redundant sensor nodes are removed to obtain a large \u03b9 for each set of sensors that are currently taking charge of the surveillance work of the network area.\nWe also introduce another greedy solution called Incremental Coverage Quality Algorithm (ICQA) for this problem, which serves as a benchmark to evaluate MIND.\nThe main contribution of this paper is twofold.\nFirst, we introduce a novel index \u03b9 for the evaluation of point-distribution.\nWe show that maximizing \u03b9 of a WSN results in low redundancy of the network.\nSecond, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model.\nWith the MIND algorithm we show that locally maximizing \u03b9 among each sensor node and its neighbors is a good approach to solve this problem.\nThis demonstrates a good application of employing \u03b9 in coverage-related problems.\nThe rest of the paper is organized as follows.\nIn Section 2, we introduce our point-distribution index \u03b9.\nWe survey related work and formulate a sensor-grouping problem together with a general sensing model in Section 3.\nSection 4 investigates the application of \u03b9 in this grouping problem.\nWe propose MIND for this problem\nand introduce ICQA as a benchmark.\nIn Section 5, we present our simulation results in which MIND and ICQA are compared.\nSection 6 provides concluding remarks.\n2.\nTHE NORMALIZED MINIMUM DISTANCE \u03b9: A POINT-DISTRIBUTION INDEX\nSuppose there are n points in a Euclidean space \u2126.\nThe coordinates of these points are denoted by xi (i = 1,..., n).\nIt is often necessary to evaluate how these points are distributed.\nThere are many metrics to achieve this goal.\nFor example, the Mean Square Error from these points to their mean value can be employed to calculate how these points deviate from 
their mean (i.e., their centroid).\nIn resource-sharing evaluation, the Global Fairness Index (GFI) is often employed to measure how evenly the resource is distributed among these points [8], where xi represents the amount of resource that belongs to point i.\nIn WSNs, GFI is usually used to calculate how evenly the remaining energy is distributed among sensor nodes.\nWhen n is larger than 2 and the points do not all overlap (all points overlapping means xi = xj, \u2200 i, j = 1, 2,..., n), we propose a novel index called the normalized minimum distance, namely \u03b9, to evaluate the distribution of the points.\n\u03b9 is the minimum distance between each pair of points normalized by the average distance between each pair of points:\n\u03b9 = min (| | xi \u2212 xj | |) \/ \u00b5,\nwhere | | xi \u2212 xj | | denotes the Euclidean distance between point i and point j in \u2126, the min (\u00b7) function calculates the minimum distance between each pair of points, and \u00b5 is the average distance between each pair of points, which is:\n\u00b5 = (2 \/ (n (n \u2212 1))) \u00b7 \u03a3 i<j | | xi \u2212 xj | |.\n\u03b9 measures how well the points separate from one another.\nObviously, \u03b9 is in the interval [0, 1].\n\u03b9 is equal to 1 if and only if n is equal to 3 and these three points form an equilateral triangle.\n\u03b9 is equal to zero if any two points overlap.\n\u03b9 is an interesting property of a set of points.\nIf we regard each xi (i = 1,..., n) as a variable in \u2126, what would these n points look like if \u03b9 were maximized?\nAn algorithm is implemented to generate the topology in which \u03b9 is locally maximized (the algorithm can be found in [19]).\nWe consider a 2-dimensional space.\nWe select n = 10, 20, 30,..., 100 and perform this algorithm.\nTo avoid convergence of the algorithm to a local optimum, we select different random seeds to generate the initial points 1000 times and keep the result with the largest \u03b9 when the algorithm converges.\nFigure 1 demonstrates what the resulting topology looks like when n = 20 as an example.\nSuppose each point represents a sensor node.\nIf the sensor coverage model is the Boolean coverage model [15] 
[17] [18] [14] and the coverage radius of each node is the same, it is exciting to see that this topology results in the lowest redundancy, because the Voronoi diagram [2] formed by these nodes (a Voronoi diagram formed by a set of nodes partitions a space into a set of convex polygons such that points inside a polygon are closest to only one particular node) is a honeycomb-like structure1.\nThis suggests that \u03b9 may be employed to solve problems related to sensor-coverage of an area.\nFigure 1: Node Number = 20, \u03b9 = 0.435376\nIn WSNs, it is desirable that the active sensor nodes that are performing the surveillance task should separate from one another.\nUnder the constraint that the sensing area should be covered, the more each node separates from the others, the lower the redundancy of the coverage.\n\u03b9 indicates the quality of such separation.\nIt should be useful for approaches to sensor-coverage-related problems.\nIn our following discussions, we will show the effectiveness of employing \u03b9 in the sensor-grouping problem.\n3.\nTHE SENSOR-GROUPING PROBLEM\nIn many application scenarios, to achieve fault tolerance, a WSN contains a large number of redundant nodes in order to tolerate node failures.\nA node sleeping-working scheduling scheme is therefore highly desirable to exploit the redundancy of working sensors and let as many nodes as possible sleep.\nMuch work in the literature addresses this issue [6].\nYan et al. introduced a differentiated service in which a sensor node determines its working duration in cooperation with its neighbors to ensure the coverage of sampled points [17].\nYe et al. developed PEAS, in which sensor nodes wake up randomly over time, probe their neighboring nodes, and decide whether they should begin to take charge of surveillance work [18].\nXing et al. exploited a probabilistic distributed detection model with a protocol called Coordinating Grid (Co-Grid) [16].\nWang et al. designed an approach called Coverage 
Configuration Protocol (CCP), which introduced the notion that the coverage degree of intersection-points of the neighboring nodes' sensing-perimeters indicates the coverage of a convex region [15].\nIn our recent work [7], we also provided a sleeping configuration protocol, namely SSCP, in which the sleeping eligibility of a sensor node is determined by a local Voronoi diagram.\nSSCP can provide different levels of redundancy to maintain different requirements of fault tolerance.\nThe major feature of the aforementioned protocols is that they employ online distributed and localized algorithms in which a sensor node determines its sleeping eligibility and\/or sleeping time based on the coverage requirement of its sensing area with some information provided by its neighbors.\nAnother major approach to the sensor-node sleeping-working scheduling issue is to group sensor nodes.\nSensor nodes in a network are divided into several disjoint sets.\nEach set of sensor nodes is able to maintain the required area surveillance work.\nThe sensor nodes are scheduled according to which set they belong to.\nThese sets work successively.\nOnly one set of sensor nodes works at any time.\nWe call this issue the sensor-grouping problem.\nThe major advantage of this approach is that it avoids the overhead caused by the processes of coordination of sensor nodes to make decisions on whether a sensor node is a candidate to sleep or\nwork and how long it should sleep or work.\nSuch processes should be performed from time to time during the lifetime of a network in many online distributed and localized algorithms.\nThe large overhead caused by such processes is the main drawback of the online distributed and localized algorithms.\nOn the contrary, roughly speaking, this approach groups sensor nodes once and schedules when each set of sensor nodes should be on duty.\nIt does not require frequent decision-making on working\/sleeping eligibility 2.\nIn [13] by Slijepcevic et al., the sensing area is 
divided into regions.\nSensor nodes are grouped with the most-constrained least-constraining algorithm.\nIt is a greedy algorithm in which the priority of selecting a given sensor is determined by how many uncovered regions this sensor covers and the redundancy caused by this sensor.\nIn [5] by Cardei et al., disjoint sensor sets are modeled as disjoint dominating sets.\nAlthough maximum dominating sets computation is NP-complete, the authors proposed a graph-coloring-based algorithm.\nCardei et al. also studied a similar problem in the domain of covering target points in [4].\nThe NP-completeness of the problem is proved, and a heuristic that computes the sets is proposed.\nThese algorithms are centralized solutions of the sensor-grouping problem.\nHowever, global information (e.g., the location of each in-network sensor node) of a large-scale WSN is very expensive to obtain online.\nAlso, it is usually infeasible to obtain such information before sensor nodes are deployed.\nFor example, sensor nodes are usually deployed in a random manner, and the location of each in-network sensor node is determined only after a node is deployed.\nA solution of the sensor-grouping problem should be based only on locally obtainable information of a sensor node.\nThat is to say, nodes should determine which group they should join in a fully distributed way.\nHere locally obtainable information refers to a node's local information and the information that can be directly obtained from its adjacent nodes, i.e., nodes within its communication range.\nIn Subsection 3.1, we provide a general problem formulation of the sensor-grouping problem.\nThe distributed-solution requirement is formulated in this problem.\nIt is followed by discussion in Subsection 3.2 on a general sensing model, which serves as a given condition of the sensor-grouping problem formulation.\nTo facilitate our discussions, the notation used in the following discussions is described as follows.\n\u2022 n: The number of in-network sensor 
nodes.\n\u2022 S (j) (j = 1, 2,..., m): The jth set of sensor nodes, where m is the number of sets.\n\u2022 L (i) (i = 1, 2,..., n): The physical location of node i. \u2022 0: The area monitored by the network: i.e., the sensing area of the network.\n\u2022 R: The sensing radius of a sensor node.\nWe assume that a sensor node can only be responsible for monitoring a circular area centered at the node with a radius equal to R.\nThis is a usual assumption in work that addresses sensor-coverage-related problems.\nWe call this circular area the sensing area of a node.\n3.1 Problem Formulation\nWe assume that each sensor node knows its approximate physical location.\nThe approximate location information is obtainable if each sensor node carries a GPS receiver or if some localization algorithms are employed (e.g., [3]).\n2Note that if some nodes die, a re-grouping process might also be performed to exploit the remaining nodes in a set of sensor nodes.\nHow to provide this mechanism is beyond the scope of this paper and yet to be explored.\nProblem 1.\nGiven: \u2022 The set of each sensor node i's sensing neighbors N (i) and the location of each member in N (i); \u2022 A sensing model which quantitatively describes how a point P in area 0 is covered by sensor nodes that are responsible for monitoring this point.\nWe call this quantity the coverage quality of P. 
\u2022 The coverage quality requirement in 0, denoted by s.\nWhen the coverage of a point is larger than this threshold, we say this point is covered.\nFor each sensor node i, make a decision on which group S (j) it should join so that: \u2022 Area 0 can be covered by sensor nodes in each set S (j) \u2022 m, the number of sets S (j), is maximized.\n\u25a0\nIn this formulation, we call sensor nodes within a circular area centered at a sensor node i with a radius equal to 2 \u00b7 R the sensing neighbors of node i.\nThis is because sensor nodes in this area, together with node i, may cooperate to ensure the coverage of a point inside node i's sensing area.\nWe assume that the communication range of a sensor node is larger than 2 \u00b7 R, which is also a general assumption in work that addresses sensor-coverage-related problems.\nThat is to say, the first given condition in Problem 1 is information that can be obtained directly from a node's adjacent nodes.\nIt is therefore locally obtainable information.\nThe last two given conditions in this problem formulation can be programmed into a node before it is deployed or by a node-programming protocol (e.g., [9]) during network runtime.\nTherefore, the given conditions can all be easily obtained by a sensor-grouping scheme with a fully distributed implementation.\nWe reify this problem with a realistic sensing model in the next subsection.\n3.2 A General Sensing Model\nAs WSNs are usually employed to monitor possible events in a given area, it is a design requirement that an event occurring in the network area must\/may be successfully detected by sensors.\nThis issue is usually formulated as how to ensure that an event signal emitted at an arbitrary point in the network area can be detected by sensor nodes.\nObviously, a sensing model is required to address this problem so that how a point in the network area is covered can be modeled and quantified.\nThus the coverage quality of a WSN can be 
evaluated.\nDifferent applications of WSNs employ different types of sensors, which have widely different theoretical and physical characteristics.\nTherefore, to fulfill different application requirements, different sensing models should be constructed based on the characteristics of the sensors employed.\nA simple theoretical sensing model is the Boolean sensing model [15] [18] [17] [14].\nThe Boolean sensing model assumes that a sensor node can always detect an event occurring in its responsible sensing area.\nBut most sensors detect events according to the signal strength sensed.\nEvent signals usually fade in relation to the physical distance between an event and the sensor.\nThe larger the distance, the weaker the event signals that can be sensed by the sensor, which results in a reduction of the probability that the event can be successfully detected by the sensor.\nAs event signals in WSNs are usually electromagnetic, acoustic, or photic signals, they fade with the increase of\ntheir transmit distance.\nSpecifically, the signal strength E (d) of an event that is received by a sensor node satisfies:\nE (d) = \u03b1 \/ d^\u03b2,\nwhere d is the physical distance from the event to the sensor node; \u03b1 is related to the signal strength emitted by the event; and \u03b2 is the signal fading factor, which is typically a positive number larger than or equal to 2.\nUsually, \u03b1 and \u03b2 are considered constants.\nBased on this notion, researchers have proposed a more reasonable collaborative sensing model to capture application requirements: Area coverage can be maintained by a set of collaborative sensor nodes: For a point with physical location L, the point is considered covered by the collaboration of i sensors (denoted by k1,..., ki) if and only if the following two equations hold [7] [10] [12].\nC (L) is regarded as the coverage quality of location L in the network area [7] [10] [12].\nHowever, we notice that defining the sensibility as the sum of the sensed 
signal strength by each collaborative sensor implies a very special class of applications: applications must employ the sum of the signal strength to achieve decision-making.\nTo capture more general and realistic application requirements, we modify the definition described in Equation (5).\nThe model we adopt in this paper is described in detail as follows.\nWe consider that the probability P (L, kj) that an event located at L can be detected by sensor kj is related to the signal strength sensed by kj.\nFormally,\nwhere \u03b3 is a constant and \u03b4 = \u03b3\u03b1 is a constant too.\n~ normalizes the distance to a proper scale, and the \"+1\" term avoids an infinite value of P (L, kj).\nThe probability that an event located at L can be detected by any of the collaborative sensors that satisfy Equation (4) is:\nP (L) = 1 \u2212 (1 \u2212 P (L, k1)) \u00b7 \u00b7 \u00b7 (1 \u2212 P (L, ki)).\nAs the detection probability P (L) reasonably determines how an event occurring at location L can be detected by the network, it is a good measure of the coverage quality of location L in a WSN.\nSpecifically, Equation (5) is modified to:\nC (L) = P (L).\nTo sum up, we consider a point at location L covered if Equations (4) and (8) hold.\n4.\nMAXIMIZING-\u03b9 NODE-DEDUCTION ALGORITHM FOR SENSOR-GROUPING PROBLEM\nBefore we proceed to introduce algorithms to solve the sensor-grouping problem, let us define the margin (denoted by \u03b8) of an area \u03c6 monitored by the network as the band-like marginal area of \u03c6 in which all the points on the outer perimeter of \u03b8 are a distance \u03c1 away from the points on the inner perimeter of \u03b8.\n\u03c1 is called the margin length.\nIn a practical network, sensor nodes are usually evenly deployed in the network area.\nObviously, the number of sensor nodes that can sense an event occurring in the margin of the network is smaller than the number of sensor nodes that can sense an event occurring in other areas of the network.\nBased on this consideration, in our algorithm design, we ensure the coverage quality of the network area except the 
margin.\nThe information on \u03c6 and \u03c1 is network-based.\nEach in-network sensor node can be pre-programmed with, or informed online about, \u03c6 and \u03c1, and can thus calculate whether a point in its sensing area is in the margin or not.\n4.1 Maximizing-\u03b9 Node-Deduction Algorithm\nThe node-deduction process of our Maximizing-\u03b9 Node-Deduction Algorithm (MIND) is simple.\nA node i greedily maximizes \u03b9 of the sub-network composed of itself, its ungrouped sensing neighbors, and the neighbors that are in the same group as itself.\nUnder the constraint that the coverage quality of its sensing area should be ensured, node i deletes nodes in this sub-network one by one.\nA candidate for pruning satisfies the following:\n\u2022 It is an ungrouped node.\n\u2022 The deletion of the node will not result in uncovered points inside the sensing area of i.\nA candidate is deleted if its deletion results in the largest \u03b9 of the sub-network compared to the deletion of any other candidate.\nThis node-deduction process continues until no candidate can be found.\nThen all the ungrouped sensing neighbors that are not deleted are grouped into the same group as node i.\nWe call the sensing neighbors that are in the same group as node i the group sensing neighbors of node i.\nWe then call node i a finished node, meaning that it has finished the above procedure and the sensing area of the node is covered.\nThose nodes that have not yet finished this procedure are called unfinished nodes.\nThe above procedure initiates at a randomly selected node that is not in the margin.\nThe node is grouped into the first group.\nIt calculates its resulting group sensing neighbors based on the above procedure.\nIt informs these group sensing neighbors that they are selected into the group.\nThen it hands over the above procedure to the unfinished group sensing neighbor that is farthest from itself.\nThis group sensing neighbor continues this procedure until no unfinished 
neighbor can be found.\nThen the first group is formed (an algorithmic description of this procedure can be found in [19]).\nAfter a group is formed, another randomly selected ungrouped node begins to group itself into the second group and initiates the above procedure.\nIn this way, groups are formed one by one.\nWhen a node involved in this algorithm finds that the coverage quality of its sensing area, except the part that overlaps the network margin, cannot be ensured even if all its ungrouped sensing neighbors are grouped into the same group as itself, the algorithm stops.\nMIND is based on locally obtainable information of sensor nodes.\nIt is a distributed algorithm that serves as an approximate solution of Problem 1.\n4.2 Incremental Coverage Quality Algorithm: A Benchmark for MIND\nTo evaluate the effectiveness of introducing \u03b9 into the sensor-grouping problem, another algorithm for the sensor-grouping problem called Incremental Coverage Quality Algorithm (ICQA) is designed.\nOur aim\nis to evaluate how an idea based on locally maximizing \u03b9, i.e., MIND, performs.\nIn ICQA, the node-selecting process is as follows.\nA node i greedily selects ungrouped sensing neighbors into the same group as itself one by one, and informs each selected neighbor that it is in the group.\nThe criterion is: \u2022 The selected neighbor is responsible for providing surveillance work for some uncovered parts of node i's sensing area.\n(i.e., the coverage quality requirement of these parts is not fulfilled if this neighbor is not selected.)\n\u2022 The selected neighbor results in the highest improvement of the coverage quality of the neighbor's sensing area.\nThe improvement of the coverage quality, mathematically, should be the integral of the improvements of all points inside the neighbor's sensing area.\nA numerical approximation is employed to calculate this improvement.\nDetails are presented in our simulation study.\nThis node-selecting process continues until the sensing area of node i is entirely 
covered.\nIn this way, node i's group sensing neighbors are found.\nThe above procedure is handed over in the same way as in MIND, and new groups are thus formed one by one.\nThe condition under which ICQA stops is the same as that of MIND.\nICQA is also based on locally obtainable information of sensor nodes.\nICQA is also a distributed algorithm that serves as an approximate solution of Problem 1.\n5.\nSIMULATION RESULTS\nTo evaluate the effectiveness of employing \u03b9 in the sensor-grouping problem, we build simulation surveillance networks.\nWe employ MIND and ICQA to group the in-network sensor nodes.\nWe compare the grouping results with respect to how many groups both algorithms find and how the resulting groups perform.\nDetailed settings of the simulation networks are shown in Table 1.\nIn the simulation networks, sensor nodes are randomly deployed in a uniform manner in the network area.\nTable 1: The settings of the simulation networks\nFor evaluating the coverage quality of the sensing area of a node, we divide the sensing area of a node into several regions and regard the coverage quality of the central point in each region as a representative of the coverage quality of the region.\nThis is a numerical approximation.\nA larger number of such regions results in a better approximation.\nAs sensor nodes have low computational capacity, there is a tradeoff between the number of such regions and the precision of the resulting coverage quality of the sensing area of a node.\nIn our simulation study, we set this number to 12.\nFor evaluating the improvement of coverage quality in ICQA, we sum up all the improvements at each region-center as the total improvement.\n5.1 Number of Groups Formed by MIND and ICQA\nWe set the total in-network node number to different values and let the networks perform MIND and ICQA.\nFor each n, simulations run with several random seeds to generate different networks.\nResults are averaged.\nFigure 2 shows the group numbers found in networks with 
different n's.\nFigure 2: The number of groups found by MIND and ICQA\nWe can see that MIND always outperforms ICQA in terms of the number of groups formed.\nObviously, the larger the number of groups that can be formed, the more the redundancy of each group is exploited.\nThis output shows that an approach like MIND, which aims to maximize \u03b9 of the resulting topology, can exploit redundancy well.\nAs an example, in the case n = 1500, the results of five networks are listed in Table 2.\nTable 2: The grouping results of five networks with n = 1500\nThe difference between the average \u03b9 of the groups in each network shows that groups formed by MIND result in topologies with larger \u03b9's.\nThis demonstrates that \u03b9 is a good indicator of redundancy in different networks.\n5.2 The Performance of the Resulting Groups\nAlthough MIND forms more groups than ICQA does, which implies a longer lifetime of the networks, another important consideration is how these groups formed by MIND and ICQA perform.\nWe let 10000 events randomly occur in the network area except the margin.\nWe compare how many events happen at locations where the coverage quality is less than the requirement s = 0.6 when each resulting group is conducting surveillance work (we call the number of such events the failure number of a group).\nFigure 3 shows the average failure numbers of the resulting groups when different node numbers are set.\nWe can see that the groups formed by MIND outperform those formed by ICQA because the groups formed by MIND result in lower failure numbers.\nThis further demonstrates that MIND is a good approach for the sensor-grouping problem.\nFigure 3: The failure numbers of MIND and ICQA\n6.\nCONCLUSION\nThis paper proposes \u03b9, a novel index for the evaluation of point-distribution.\n\u03b9 is the minimum distance between each pair of points normalized by the average distance between each pair of points.\nWe find that a set of points that achieves a maximum value of \u03b9 results in a honeycomb structure.\nWe propose that 
\u03b9 can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs).\nWe set out to validate this idea by employing \u03b9 in a sensor-grouping problem.\nWe formulate a general sensor-grouping problem for WSNs and provide a general sensing model.\nWith an algorithm called Maximizing-\u03b9 Node-Deduction (MIND), we show that maximizing \u03b9 at sensor nodes is a good approach to solve this problem.\nSimulation results verify that MIND outperforms a greedy algorithm that we design to exploit sensor redundancy, in terms of both the number and the performance of the groups formed.\nThis demonstrates a good application of employing \u03b9 in coverage-related problems.","keyphrases":["point-distribut index","sensor-group","wireless sensor network","honeycomb structur","surveil","redund","fault toler","node-deduct process","increment coverag qualiti algorithm","sleep configur protocol","sensor coverag","sensor group"],"prmu":["P","P","P","P","U","U","U","M","M","U","M","M"]} {"id":"J-69","title":"Robust Incentive Techniques for Peer-to-Peer Networks","abstract":"Lack of cooperation (free riding) is one of the key problems that confronts today's P2P systems. What makes this problem particularly difficult is the unique set of challenges that P2P systems pose: large populations, high turnover, asymmetry of interest, collusion, zero-cost identities, and traitors. To tackle these challenges we model the P2P system using the Generalized Prisoner's Dilemma (GPD), and propose the Reciprocative decision function as the basis of a family of incentive techniques. These techniques are fully distributed and include: discriminating server selection, maxflow-based subjective reputation, and adaptive stranger policies. 
Through simulation, we show that these techniques can drive a system of strategic users to nearly optimal levels of cooperation.","lvl-1":"Robust Incentive Techniques for Peer-to-Peer Networks Michal Feldman1 mfeldman@sims.berkeley.edu Kevin Lai2 klai@hp.com Ion Stoica3 istoica@cs.berkeley.edu John Chuang1 chuang@sims.berkeley.edu 1 School of Information Management and Systems U.C. Berkeley 2 HP Labs 3 Computer Science Division U.C. Berkeley ABSTRACT Lack of cooperation (free riding) is one of the key problems that confronts today's P2P systems.\nWhat makes this problem particularly difficult is the unique set of challenges that P2P systems pose: large populations, high turnover, asymmetry of interest, collusion, zero-cost identities, and traitors.\nTo tackle these challenges we model the P2P system using the Generalized Prisoner's Dilemma (GPD), and propose the Reciprocative decision function as the basis of a family of incentive techniques.\nThese techniques are fully distributed and include: discriminating server selection, maxflow-based subjective reputation, and adaptive stranger policies.\nThrough simulation, we show that these techniques can drive a system of strategic users to nearly optimal levels of cooperation.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; J.4 [Social And Behavioral Sciences]: Economics General Terms Design, Economics 1.\nINTRODUCTION Many peer-to-peer (P2P) systems rely on cooperation among self-interested users.\nFor example, in a file-sharing system, overall download latency and failure rate increase when users do not share their resources [3].\nIn a wireless ad-hoc network, overall packet latency and loss rate increase when nodes refuse to forward packets on behalf of others [26].\nFurther examples are file preservation [25], discussion boards [17], online auctions [16], and overlay routing [6].\nIn many of these systems, users have natural disincentives to cooperate because 
cooperation consumes their own resources and may degrade their own performance.\nAs a result, each user's attempt to maximize her own utility effectively lowers the overall utility of the system.\nFigure 1: Example of asymmetry of interest.\nA wants service from B, B wants service from C, and C wants service from A.\nAvoiding this tragedy of the commons [18] requires incentives for cooperation.\nWe adopt a game-theoretic approach in addressing this problem.\nIn particular, we use a prisoners' dilemma model to capture the essential tension between individual and social utility, asymmetric payoff matrices to allow asymmetric transactions between peers, and a learning-based [14] population dynamic model to specify the behavior of individual peers, which can be changed continuously.\nWhile social dilemmas have been studied extensively, P2P applications impose a unique set of challenges, including: \u2022 Large populations and high turnover: File-sharing systems such as Gnutella and KaZaa can exceed 100,000 simultaneous users, and nodes can have an average lifetime on the order of minutes [33].\n\u2022 Asymmetry of interest: Asymmetric transactions of P2P systems create the possibility for asymmetry of interest.\nIn the example in Figure 1, A wants service from B, B wants service from C, and C wants service from A. 
• Zero-cost identity: Many P2P systems allow peers to continuously switch identities (i.e., whitewash).

Strategies that work well in traditional prisoner's dilemma games, such as Tit-for-Tat [4], will not fare well in the P2P context. Therefore, we propose a family of scalable and robust incentive techniques, based upon a novel Reciprocative decision function, to address these challenges and provide different tradeoffs:

• Discriminating Server Selection: Cooperation requires familiarity between entities, either directly or indirectly. However, the large populations and high turnover of P2P systems make it less likely that repeat interactions will occur with a familiar entity. We show that by having each peer keep a private history of the actions of other peers toward her, and using discriminating server selection, the Reciprocative decision function can scale to large populations and moderate levels of turnover.

• Shared History: Scaling to higher turnover and mitigating asymmetry of interest requires shared history. Consider the example in Figure 1. If everyone provides service, then the system operates optimally. However, if everyone keeps only private history, no one will provide service because B does not know that A has served C, etc. We show that with shared history, B knows that A served C and consequently will serve A. This results in a higher level of cooperation than with private history. The cost of shared history is a distributed infrastructure (e.g., distributed hash table-based storage) to store the history.

• Maxflow-based Subjective Reputation: Shared history creates the possibility for collusion. In the example in Figure 1, C can falsely claim that A served him, thus deceiving B into providing service. We show that a maxflow-based algorithm that computes reputation subjectively promotes cooperation despite collusion among 1/3 of the population. The basic idea is that B would only believe C if C had already
provided service to B. The cost of the maxflow algorithm is its O(V^3) running time, where V is the number of nodes in the system. To eliminate this cost, we have developed a variation with constant mean running time, which trades effectiveness for complexity of computation. We show that the maxflow-based algorithm scales better than private history in the presence of colluders, without the centralized trust required in previous work [9] [20].

• Adaptive Stranger Policy: Zero-cost identities allow non-cooperating peers to escape the consequences of not cooperating and, if not stopped, eventually destroy cooperation in the system. We show that if Reciprocative peers treat strangers (peers with no history) using a policy that adapts to the behavior of previous strangers, peers have little incentive to whitewash, and whitewashing can be nearly eliminated from the system. The adaptive stranger policy does this without requiring centralized allocation of identities, an entry fee for newcomers, or rate-limiting [13] [9] [25].

• Short-term History: History also creates the possibility that a previously well-behaved peer with a good reputation will turn traitor and use his good reputation to exploit other peers. The peer could be making a strategic decision, or someone may have hijacked her identity (e.g., by compromising her host). Long-term history exacerbates this problem by allowing peers with many previous transactions to exploit that history for many new transactions. We show that short-term history prevents traitors from disrupting cooperation.

The rest of the paper is organized as follows. We describe the model in Section 2 and the Reciprocative decision function in Section 3. We then proceed to the incentive techniques in Section 4. In Section 4.1, we describe the challenges of large populations and high turnover and show the effectiveness of discriminating server selection and shared history. In Section 4.2, we describe collusion and demonstrate
how subjective reputation mitigates it. In Section 4.3, we present the problem of zero-cost identities and show how an adaptive stranger policy promotes persistent identities. In Section 4.4, we show how traitors disrupt cooperation and how short-term history deals with them. We discuss related work in Section 5 and conclude in Section 6.

2. MODEL AND ASSUMPTIONS
In this section, we present our assumptions about P2P systems and their users, and introduce a model that aims to capture the behavior of users in a P2P system.

2.1 Assumptions
We assume a P2P system in which users are strategic, i.e., they act rationally to maximize their benefit. However, to capture some of the real-life unpredictability in the behavior of users, we allow users to randomly change their behavior with a low probability (see Section 2.4). For simplicity, we assume a homogeneous system in which all peers issue and satisfy requests at the same rate. A peer can satisfy any request, and, unless otherwise specified, peers request service uniformly at random from the population.¹ Finally, we assume that all transactions incur the same cost to all servers and provide the same benefit to all clients. We assume that users can pollute shared history with false recommendations (Section 4.2), switch identities at zero cost (Section 4.3), and spoof other users (Section 4.4). We do not assume any centralized trust or centralized infrastructure.

2.2 Model
To aid the development and study of the incentive schemes, in this section we present a model of the users' behavior. In particular, we model the benefits and costs of P2P interactions (the game) and population dynamics caused by mutation, learning, and turnover. Our model is designed to have the following properties, which characterize a large set of P2P systems:

• Social Dilemma: Universal cooperation should result in optimal overall utility, but individuals who exploit the cooperation of others while not cooperating themselves
(i.e., defecting) should benefit more than users who do cooperate.

• Asymmetric Transactions: A peer may want service from another peer while not currently being able to provide the service that the second peer wants. Transactions should be able to have asymmetric payoffs.

• Untraceable Defections: A peer should not be able to determine the identity of peers who have defected on her. This models the difficulty or expense of determining that a peer could have provided a service, but didn't. For example, in the Gnutella file-sharing system [21], a peer may simply ignore queries despite possessing the desired file, thus preventing the querying peer from identifying the defecting peer.

• Dynamic Population: Peers should be able to change their behavior and enter or leave the system independently and continuously.

¹The exception is discussed in Section 4.1.1.

Figure 2: Payoff matrix for the Generalized Prisoner's Dilemma, written as client payoff / server payoff (client cooperate, server cooperate: Rc / Rs; client cooperate, server defect: Sc / Ts; client defect, server cooperate: Tc / Ss; client defect, server defect: Pc / Ps). T, R, P, and S stand for temptation, reward, punishment, and sucker, respectively.

2.3 Generalized Prisoner's Dilemma
The Prisoner's Dilemma, developed by Flood, Dresher, and Tucker in 1950 [22], is a non-cooperative repeated game satisfying the social dilemma requirement. Each game consists of two players who can defect or cooperate. Depending on how each acts, the players receive a payoff. The players use a strategy to decide how to act. Unfortunately, existing work either uses a specific asymmetric payoff matrix or only gives the general form for a symmetric one [4]. Instead, we use the Generalized Prisoner's Dilemma (GPD), which specifies the general form for an asymmetric payoff matrix that preserves the social dilemma. In the GPD, one player is the client and one player is the server in each game, and it is only the decision of the server that is meaningful for determining the outcome of the transaction. A player can be a client in
one game and a server in another. The client and server receive their payoffs from a generalized payoff matrix (Figure 2). Rc, Sc, Tc, and Pc are the client's payoffs and Rs, Ss, Ts, and Ps are the server's payoffs. A GPD payoff matrix must have the following properties to create a social dilemma:

1. Mutual cooperation leads to higher payoffs than mutual defection (Rs + Rc > Ps + Pc).

2. Mutual cooperation leads to higher payoffs than one player suckering the other (Rs + Rc > Sc + Ts and Rs + Rc > Ss + Tc).

3. Defection dominates cooperation (at least weakly) at the individual level for the entity who decides whether to cooperate or defect: Ts ≥ Rs and Ps ≥ Ss and (Ts > Rs or Ps > Ss).

The last set of inequalities assumes that clients do not incur a cost regardless of whether they cooperate or defect, and therefore clients always cooperate. These properties correspond to similar properties of the classic Prisoner's Dilemma and allow any form of asymmetric transaction while still creating a social dilemma. Furthermore, one or more of the four possible actions (client cooperate and defect, and server cooperate and defect) can be untraceable. If one player makes an untraceable action, the other player does not know the identity of the first player. For example, to model a P2P application like file sharing or overlay routing, we use the specific payoff matrix values shown in Figure 3. This satisfies the inequalities specified above, where only the server can choose between cooperating and defecting. In addition, for this particular payoff matrix, clients are unable to trace server defections. This is the payoff matrix that we use in our simulation results.

Figure 3: The payoff matrix for an application like P2P file sharing or overlay routing, written as client payoff / server payoff: Request Service with Provide Service yields 7 / -1; all other combinations (Ignore Request, or Don't Request) yield 0 / 0.

2.4 Population Dynamics
A characteristic of P2P systems is that peers change their behavior and
enter or leave the system independently and continuously. Several studies [4] [28] of repeated Prisoner's Dilemma games use an evolutionary model [19] [34] of population dynamics. An evolutionary model is not suitable for P2P systems because it only specifies the global behavior, and all changes occur at discrete times. For example, it may specify that a population of five 100% Cooperate players and five 100% Defect players evolves into a population with 3 and 7 players, respectively. It does not specify which specific players switched. Furthermore, all the switching occurs at the end of a generation instead of continuously, as in a real P2P system. As a result, evolutionary population dynamics do not accurately model turnover, traitors, and strangers. In our model, entities take independent and continuous actions that change the composition of the population. Time consists of rounds. In each round, every player plays one game as a client and one game as a server. At the end of a round, a player may: (1) mutate, (2) learn, (3) turn over, or (4) stay the same. If a player mutates, she switches to a randomly picked strategy. If she learns, she switches to a strategy that she believes will produce a higher score (described in more detail below). If she maintains her identity after switching strategies, then she is referred to as a traitor. If a player suffers turnover, she leaves the system and is replaced with a newcomer who uses the same strategy as the exiting player. To learn, a player collects local information about the performance of different strategies. This information consists of both her personal observations of strategy performance and the observations of those players she interacts with. This models users communicating out-of-band about how strategies perform. Let s be the running average of the performance of a player's current strategy per round and age be the number of rounds she has been using the strategy. A strategy's rating is
RunningAverage(s ∗ age) / RunningAverage(age).

We use the age and compute the running average before taking the ratio to prevent young samples (which are more likely to be outliers) from skewing the rating. At the end of a round, a player switches to the highest rated strategy with a probability proportional to the difference in score between her current strategy and the highest rated strategy.

3. RECIPROCATIVE DECISION FUNCTION
In this section, we present the new decision function, Reciprocative, that is the basis for our incentive techniques. A decision function maps from a history of a player's actions to a decision whether to cooperate with or defect on that player. A strategy consists of a decision function, private or shared history, a server selection mechanism, and a stranger policy. Our approach to incentives is to design strategies which maximize both individual and social benefit. Strategic users will choose to use such strategies and thereby drive the system to high levels of cooperation. Two examples of simple decision functions are 100% Cooperate and 100% Defect. 100% Cooperate models a naive user who does not yet realize that she is being exploited. 100% Defect models a greedy user who is intent on exploiting the system. In the absence of incentive techniques, 100% Defect users will quickly dominate the 100% Cooperate users and destroy cooperation in the system. Our requirements for a decision function are that (1) it can use shared and subjective history, (2) it can deal with untraceable defections, and (3) it is robust against different patterns of defection. Previous decision functions such as Tit-for-Tat [4] and Image [28] (see Section 5) do not satisfy these criteria. For example, Tit-for-Tat and Image base their decisions on both cooperations and defections, and therefore cannot deal with untraceable defections. In this section and the remaining sections we demonstrate how the Reciprocative-based strategies satisfy all of the
requirements stated above. The probability that a Reciprocative player cooperates with a peer is a function of its normalized generosity. Generosity measures the benefit an entity has provided relative to the benefit it has consumed. This is important because entities which consume more services than they provide, even if they provide many services, will cause cooperation to collapse. For some entity i, let p_i and c_i be the services i has provided and consumed, respectively. Entity i's generosity is simply the ratio of the service it provides to the service it consumes:

g(i) = p_i / c_i.    (1)

One possibility is to cooperate with a probability equal to the generosity. Although this is effective in some cases, in other cases a Reciprocative player may consume more than she provides (e.g., when initially using the Stranger Defect policy in Section 4.3). This will cause Reciprocative players to defect on each other. To prevent this situation, a Reciprocative player uses her own generosity as a measuring stick to judge her peer's generosity. Normalized generosity measures entity i's generosity relative to entity j's generosity. More concretely, entity i's normalized generosity as perceived by entity j is

g_j(i) = g(i) / g(j).    (2)

In the remainder of this section, we describe our simulation framework, and use it to demonstrate the benefits of the baseline Reciprocative decision function.

Parameter                    Nominal value   Section
Population Size              100             2.4
Run Time                     1000 rounds     2.4
Payoff Matrix                File Sharing    2.3
Ratio using 100% Cooperate   1/3             3
Ratio using 100% Defect      1/3             3
Ratio using Reciprocative    1/3             3
Mutation Probability         0.0             2.4
Learning Probability         0.05            2.4
Turnover Probability         0.0001          2.4
Hit Rate                     1.0             4.1.1

Table 1: Default simulation parameters.

3.1 Simulation Framework
Our simulator implements the model described in Section 2. We use the asymmetric file sharing payoff matrix (Figure 3) with untraceable defections because it models transactions in many P2P systems like
file-sharing and packet forwarding in ad-hoc and overlay networks. Our simulation study is composed of different scenarios reflecting the challenges of various non-cooperative behaviors. Table 1 presents the nominal parameter values used in our simulation. The "Ratio using" rows refer to the initial ratio of the total population using a particular strategy. In each scenario we vary the value range of a specific parameter to reflect a particular situation or attack. We then vary the exact properties of the Reciprocative strategy to defend against that situation or attack.

3.2 Baseline Results

Figure 4: The evolution of strategy populations over time for (a) a total population of 60 and (b) a total population of 120. Time is the number of elapsed rounds. Population is the number of players using a strategy (Defector, Cooperator, Reciprocative with private history).

In this section, we present the dynamics of the game for the basic scenario presented in Table 1 to familiarize the reader and set a baseline for more complicated scenarios. Figures 4(a) (60 players) and (b) (120 players) show players switching to higher scoring strategies over time in two separate runs of the simulator. Each point in the graph represents the number of players using a particular strategy at one point in time. Figures 5(a) and (b) show the corresponding mean overall score per round. This measures the degree of cooperation in the system: 6 is the maximum possible (achieved when everybody cooperates) and 0 is the minimum (achieved when everybody defects). From the file sharing payoff matrix, a net of 6 means everyone is able to download a file and a 0 means that no one is able to do so.

Figure 5: The mean overall per round score over time for (a) a total population of 60 and (b) a total population of 120.

We use this
metric in all later results to evaluate our incentive techniques. Figure 5(a) shows that the Reciprocative strategy using private history causes a system of 60 players to converge to a cooperation level of 3.7, but this drops to 0.5 for 120 players. One would expect the 60-player system to reach the optimal level of cooperation (6) because all the defectors are eliminated from the system. It does not because of asymmetry of interest. For example, suppose player B is using Reciprocative with private history. Player A may happen to ask for service from player B twice in succession without providing service to player B in the interim. Player B does not know of the service player A has provided to others, so player B will reject service to player A, even though player A is cooperative. We discuss solutions to asymmetry of interest and the failure of Reciprocative in the 120-player system in Section 4.1.

4. RECIPROCATIVE-BASED INCENTIVE TECHNIQUES
In this section we present our incentive techniques and evaluate their behavior by simulation. To make the exposition clear we group our techniques by the challenges they address: large populations and high turnover (Section 4.1), collusion (Section 4.2), zero-cost identities (Section 4.3), and traitors (Section 4.4).

4.1 Large Populations and High Turnover
The large populations and high turnover of P2P systems make it less likely that repeat interactions will occur with a familiar entity. Under these conditions, basing decisions only on private history (records about interactions the peer has been directly involved in) is not effective. In addition, private history does not deal well with asymmetry of interest. For example, if player B has cooperated with others but not with player A himself in the past, player A has no indication of player B's generosity, and thus may unduly defect on him. We propose two mechanisms to alleviate the problem of few repeat transactions: server selection and shared history.

4.1.1
Server Selection
A natural way to increase the probability of interacting with familiar peers is discriminating server selection. However, the asymmetry of transactions challenges selection mechanisms. Unlike in the Prisoner's Dilemma payoff matrix, where players can benefit one another within a single transaction, transactions in the GPD are asymmetric. As a result, a player who selects her donor for the second time without contributing to her in the interim may face a defection. In addition, due to the untraceability of defections, it is impossible to maintain blacklists to avoid interactions with known defectors. In order to deal with asymmetric transactions, every player holds (fixed size) lists of both past donors and past recipients, and selects a server from one of these lists at random with equal probabilities. This way, users approach their past recipients and give them a chance to reciprocate. In scenarios with selective users we omit the complete availability assumption to prevent players from being clustered into many very small groups; thus, we assume that every player can perform the requested service with probability p (for the results presented in this section, p = 0.3). In addition, in order to avoid bias in favor of the selective players, all players (including the non-discriminative ones) select servers for games. Figure 6 demonstrates the effectiveness of the proposed selection mechanism in scenarios with large population sizes. We fix the initial ratio of Reciprocative in the population (33%) while varying the population size (between 24 and 1000). (Notice that while in Figures 4(a) and (b) the data points demonstrate the evolution of the system over time, each data point in this figure is the result of an entire simulation for a specific scenario.) The figure shows that the Reciprocative decision function using private history in conjunction with selective behavior can scale to large populations. In Figure 7 we fix the population
size and vary the turnover rate. It demonstrates that while selective behavior is effective for low turnover rates, as turnover gets higher, selective behavior does not scale. This occurs because selection is only effective as long as players from the past stay alive long enough that they can be selected for future games.

4.1.2 Shared History
In order to mitigate asymmetry of interest and scale to higher turnover rates, shared history is needed. Shared history means that every peer keeps records about all of the interactions that occur in the system, regardless of whether he was directly involved in them or not. It allows players to leverage the experiences of others in cases of few repeat transactions. It only requires that someone has interacted with a particular player for the entire population to observe it, and thus scales better to large populations and high turnover, and also tolerates asymmetry of interest. Some examples of shared history schemes are [20] [23] [28]. Figure 7 shows the effectiveness of shared history under high turnover rates. In this figure, we fix the population size and vary the turnover rate. While selective players with private history can only tolerate a moderate turnover, shared history scales to turnovers of up to approximately 0.1. This means that 10% of the players leave the system at the end of each round. In Figure 6 we fix the turnover and vary the population size. It shows that shared history causes the system to converge to optimal cooperation and performance, regardless of the size of the population. These results show that shared history addresses all three challenges of large populations, high turnover, and asymmetry of transactions. Nevertheless, shared history has two disadvantages.

Figure 6: Private vs. shared history as a function of population size.

Figure 7: Performance of the selection mechanism under turnover. The x-axis is the turnover rate. The y-axis is the mean overall per round score.

First, while a decentralized implementation of private history is straightforward, implementation of shared history requires communication overhead or centralization. A decentralized shared history can be implemented, for example, on top of a DHT, using a peer-to-peer storage system [36] or by disseminating information to other entities in a similar way to routing protocols. Second, and more fundamental, shared history is vulnerable to collusion. In the next section we propose a mechanism that addresses this problem.

4.2 Collusion and Other Shared History Attacks

4.2.1 Collusion
While shared history is scalable, it is vulnerable to collusion. Collusion can be either positive (e.g., defecting entities claim that other defecting entities cooperated with them) or negative (e.g.,
entities claim that other cooperative entities defected on them). Collusion subverts any strategy in which everyone in the system agrees on the reputation of a player (objective reputation). An example of objective reputation is to use the Reciprocative decision function with shared history to count the total number of cooperations a player has given to and received from all entities in the system; another example is the Image strategy [28]. The effect of collusion is magnified in systems with zero-cost identities, where users can create fake identities that report false statements. Instead, to deal with collusion, entities can compute reputation subjectively, where player A weighs player B's opinions based on how much player A trusts player B. Our subjective algorithm is based on maxflow [24] [32]. Maxflow is a graph-theoretic problem which, given a directed graph with weighted edges, asks what is the greatest rate at which material can be shipped from the source to the target without violating any capacity constraints. For example, in Figure 8 each edge is labeled with the amount of traffic that can travel on it. The maxflow algorithm computes the maximum amount of traffic that can go from the source (s) to the target (t) without violating the constraints. In this example, even though there is a loop of high capacity edges, the maxflow between the source and the target is only 2 (the numbers in brackets represent the actual flow on each edge in the solution).

Figure 8: Each edge in the graph is labeled with its capacity and, in brackets, the actual flow it carries. The maxflow between the source and the target in the graph is 2.

Figure 9: This graph illustrates the robustness of maxflow in the presence of colluders who report bogus high reputation values.

We apply the maxflow algorithm by constructing a graph whose vertices are entities and whose edges are
the services that entities have received from each other. This information can be stored using the same methods as the shared history. A maxflow is the greatest level of reputation the source can give to the sink without violating reputation capacity constraints. As a result, nodes who dishonestly report high reputation values will not be able to subvert the reputation system. Figure 9 illustrates a scenario in which all the colluders (labeled with C) report high reputation values for each other. When node A computes the subjective reputation of B using the maxflow algorithm, it will not be affected by the local false reputation values; rather, the maxflow in this case will be 0. This is because no service has been received from any of the colluders. In our algorithm, the benefit that entity i has received (indirectly) from entity j is the maxflow from j to i. Conversely, the benefit that entity i has provided indirectly to j is the maxflow from i to j. The subjective reputation of entity j as perceived by i is:

min( maxflow(j to i) / maxflow(i to j), 1 ).    (3)

Figure 10: Subjective shared history compared to objective shared history and private history in the presence of colluders.

Algorithm 1 CONSTANTTIMEMAXFLOW
Bound the mean running time of maxflow to a constant.
method CTMaxflow(self, src, dst)
1: self.surplus ← self.surplus + self.increment
   {Use the running mean as a prediction.}
2: if random() > (0.5 ∗ self.surplus / self.mean_iterations) then
3:    return None {Not enough surplus to run.}
4: end if
   {Get the flow and number of iterations used from the maxflow algorithm.}
5: flow, iterations ← Maxflow(self.G, src, dst)
6: self.surplus ← self.surplus − iterations
   {Keep a running mean of the number of iterations used.}
7: self.mean_iterations ← self.α ∗ self.mean_iterations + (1 − self.α) ∗ iterations
8: return flow

The cost of maxflow is its long running time. The standard preflow-push maxflow algorithm has a worst-case running time of O(V^3). Instead, we use Algorithm 1, which has a constant mean running time but sometimes returns no flow even though one exists. The essential idea is to bound the mean number of nodes examined during the maxflow computation. This bounds the overhead, but also bounds the effectiveness. Despite this, the results below show that a maxflow-based Reciprocative decision function scales to higher populations than one using private history. Figure 10 compares the effectiveness of subjective reputation to objective reputation in the presence of colluders. In these scenarios, defectors collude by claiming that other colluders that they encounter gave them 100 cooperations for that encounter. Also, the parameters for Algorithm 1 are set as follows: increment = 100, α = 0.9. As in previous sections, Reciprocative with private history results in cooperation up to a point, beyond which it fails. The difference here is that objective shared history fails for all population sizes. This is because the Reciprocative players cooperate with the colluders because of their high reputations. However, subjective history can reach high levels of cooperation regardless of colluders. This is because there are no high-weight paths in the cooperation graph from colluders to any non-colluders, so the maxflow from a colluder to any non-colluder is 0. Therefore, a subjective Reciprocative player will conclude that the colluder has not provided any service to her and will reject service to the colluder. Thus, the maxflow algorithm enables Reciprocative to maintain the scalability of shared history without being vulnerable to collusion or requiring centralized trust (e.g., trusted peers). Since we bound the running time of the maxflow algorithm, cooperation decreases as the population size increases, but the key point is that
the subjective Reciprocative decision function scales to higher populations than one using private history. This advantage only increases over time as CPU power increases and more cycles can be devoted to running the maxflow algorithm (by increasing the increment parameter). Despite the robustness of the maxflow algorithm to the simple form of collusion described previously, it still has vulnerabilities to more sophisticated attacks. One is for an entity (the mole) to provide service and then lie positively about other colluders. The other colluders can then exploit their reputation to receive service. However, the effectiveness of this attack relies on the amount of service that the mole provides. Since the mole is paying all of the cost of providing service and receiving none of the benefit, she has a strong incentive to stop colluding and try another strategy. This forces the colluders to use mechanisms to maintain cooperation within their group, which may drive the cost of collusion to exceed the benefit.

4.2.2 False Reports
Another attack is for a defector to lie about receiving or providing service to another entity. There are four possible actions that can be lied about: providing service, not providing service, receiving service, and not receiving service. Falsely claiming to receive service is the simple collusion attack described above. Falsely claiming not to have provided service provides no benefit to the attacker. Falsely claiming to have provided service or not to have received it allows an attacker to boost her own reputation and/or lower the reputation of another entity. An entity may want to lower another entity's reputation in order to discourage others from selecting it and exclusively use its service. These false claims are clearly identifiable in the shared history as inconsistencies, where one entity claims a transaction occurred and another claims it did not. To limit this attack, we modify the maxflow algorithm so that an
entity always believes the entity that is closer to him in the flow graph. If both entities are equally distant, the disputed edge is not critical to the evaluation and is ignored. This modification prevents the cases where the attacker makes false claims about an entity that is closer than her to the evaluating entity, which keeps her from boosting her own reputation. The remaining possibilities are for the attacker to falsely claim to have provided service to, or not to have received it from, a victim entity that is farther from the evaluator than she is. In these cases, an attacker can only lower the reputation of the victim. The effectiveness of doing so is limited by the number of services provided and received by the attacker, which makes executing this attack expensive.

4.3 Zero-Cost Identities

History assumes that entities maintain persistent identities. However, in most P2P systems, identities are zero-cost. This is desirable for network growth, as it encourages newcomers to join the system. However, it also allows misbehaving users to escape the consequences of their actions by switching to new identities (i.e., whitewashing). Whitewashers can cause the system to collapse if they are not punished appropriately. Unfortunately, a player cannot tell whether a stranger is a whitewasher or a legitimate newcomer. Always cooperating with strangers encourages newcomers to join, but at the same time encourages whitewashing behavior. Always defecting on strangers prevents whitewashing, but discourages newcomers from joining and may also initiate unfavorable cycles of defection. This tension suggests that any stranger policy with a fixed probability of cooperating with strangers will fail, by being either too stingy when most strangers are newcomers or too generous when most strangers are whitewashers. Our solution is the Stranger Adaptive stranger policy. The idea is to be generous to strangers when they are being generous
and stingy when they are stingy. Let ps and cs be the number of services that strangers have provided and consumed, respectively. The probability that a player using Stranger Adaptive helps a stranger is ps/cs. However, we do not wish to keep these counts permanently (for reasons described in Section 4.4). Also, players may not know when strangers defect, because defections are untraceable (as described in Section 2). Consequently, instead of keeping ps and cs, we assume that ps + cs = k, where k is a constant, and we keep only the running ratio r = ps/cs. When we need to increment ps or cs, we first reconstruct their current values from k and r:

cs = k / (1 + r)
ps = cs * r

We then compute the new r as follows:

r = (ps + 1) / cs, if the stranger provided service
r = ps / (cs + 1), if the stranger consumed service

This method allows us to keep a running ratio that reflects the recent generosity of strangers without knowing when strangers have defected.

Figure 11: Different stranger policies for Reciprocative with shared history. The x-axis is the turnover rate on a log scale. The y-axis is the mean overall per-round score.

Figure 12: Different stranger policies for Reciprocative with private history. The x-axis is the turnover rate on a log scale. The y-axis is the mean overall per-round score.

Figures 11 and 12 compare the effectiveness of the Reciprocative strategy under different policies toward strangers. Figure 11 compares stranger policies for Reciprocative with shared history, while Figure 12 does so with private history. In both figures, the players using the 100% Defect strategy change their identity (whitewash) after every transaction and are indistinguishable from legitimate newcomers.
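The running-ratio bookkeeping described above can be captured in a few lines. The following is a hypothetical sketch, not the authors' implementation: the class name, the neutral initial ratio r = 1, and the cap of ps/cs at 1 (so it is a valid probability) are our own assumptions.

```python
class StrangerAdaptive:
    """Stranger policy that tracks the recent generosity of strangers
    via a running ratio r ~= ps/cs under the constraint ps + cs = k,
    without storing ps and cs themselves."""

    def __init__(self, k=10):
        self.k = k
        self.r = 1.0  # assumed neutral start: strangers give as much as they take

    def p_cooperate(self):
        # The text uses ps/cs as the probability of helping a stranger;
        # we cap it at 1 since the ratio itself can exceed 1.
        return min(self.r, 1.0)

    def update(self, stranger_provided_service):
        # Reconstruct the implied counts from k and r ...
        cs = self.k / (1 + self.r)
        ps = cs * self.r
        # ... then fold in the new observation, as in the update rules above.
        if stranger_provided_service:
            self.r = (ps + 1) / cs
        else:
            self.r = ps / (cs + 1)
```

For example, with k = 10 and r = 1, a single generous stranger yields cs = 5, ps = 5, and the new r = (5 + 1)/5 = 1.2; a run of stingy strangers drives r (and hence the cooperation probability) toward 0.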
Reciprocative players using the Stranger Cooperate policy completely fail to achieve cooperation. This stranger policy allows whitewashers to maximize their payoff and consequently provides a high incentive for users to switch to whitewashing. In contrast, Figure 11 shows that the Stranger Defect policy is effective with shared history. This is because whitewashers always appear to be strangers, so the Reciprocative players will always defect on them. This is consistent with previous work [13] showing that punishing strangers deals with whitewashers. However, Figure 12 shows that Stranger Defect is not effective with private history, because Reciprocative requires some initial cooperation to bootstrap. In the shared history case, a Reciprocative player can observe that another player has already cooperated with others. With private history, the Reciprocative player knows only about the other players' actions toward her. Therefore, the initial defection dictated by the Stranger Defect policy leads to later defections, which prevent Reciprocative players from ever cooperating with each other. In other simulations not shown here, the Stranger Defect policy fails even with shared history when there are no initial 100% Cooperate players. Figure 11 shows that with shared history, the Stranger Adaptive policy performs as well as the Stranger Defect policy until the turnover rate becomes very high (10% of the population turning over after every transaction). In these scenarios, Stranger Adaptive uses k = 10 and each player keeps a private r. More importantly, it is significantly better than the Stranger Defect policy with private history because it can bootstrap cooperation. Although the Stranger Defect policy is marginally more effective than Stranger Adaptive at very high rates of turnover, P2P systems are unlikely to operate there because other services (e.g., routing) also cannot tolerate very high turnover. We conclude that
of the stranger policies that we have explored, Stranger Adaptive is the most effective. By using Stranger Adaptive, P2P systems with zero-cost identities and a sufficiently low turnover can sustain cooperation without a centralized allocation of identities.

4.4 Traitors

Traitors are players who acquire high reputation scores by cooperating for a while and then traitorously turn into defectors before leaving the system. They model both users who turn deliberately to gain a higher score and cooperators whose identities have been stolen and exploited by defectors. A strategy that maintains long-term history without discriminating between old and recent actions becomes highly vulnerable to exploitation by these traitors. The top two graphs in Figure 13 demonstrate the effect of traitors on cooperation in a system where players keep long-term history (never clear history). In these simulations, we run for 2000 rounds and allow cooperative players to keep their identities when switching to the 100% Defector strategy. We use the default values for the other parameters. Without traitors, the cooperative strategies thrive. With traitors, the cooperative strategies thrive until a cooperator turns traitor after 600 rounds. As this cooperator exploits her reputation to achieve a high score, other cooperative players notice and follow suit via learning, and cooperation eventually collapses. On the other hand, if we maintain short-term history and/or discount ancient history relative to recent history, traitors can be quickly detected and the overall cooperation level stays high, as shown in the bottom two graphs in Figure 13.

Figure 13: Keeping long-term vs. short-term history, both with and without traitors.
5. RELATED WORK

Previous work has examined the incentive problem as applied to societies in general and, more recently, to Internet applications and peer-to-peer systems in particular. A well-known phenomenon in this context is the tragedy of the commons [18], in which resources are under-provisioned because selfish users free-ride on the system's resources; it is especially common in large networks [29] [3]. The problem has been studied extensively using a game-theoretic approach. The prisoners' dilemma model provides a natural framework for studying the effectiveness of different strategies in establishing cooperation among players. In a simulation environment with many repeated games, persistent identities, and no collusion, Axelrod [4] shows that the Tit-for-Tat strategy dominates. Our model assumes growth follows local learning rather than evolutionary dynamics [14], and also allows for more kinds of attacks. Nowak and Sigmund [28] introduce the Image strategy and demonstrate its ability to establish cooperation among players, despite few repeat transactions, through the employment of shared history. Players using Image cooperate with players whose global count of cooperations minus defections exceeds some threshold. As a result, an Image player is either vulnerable to partial defectors (if the threshold is set too low) or does not cooperate with other Image players (if the threshold is set too high). In recent years, researchers have used economic mechanism design theory to tackle the cooperation problem in Internet applications. Mechanism design is the inverse of game theory: it asks how to design a game in which the behavior of strategic players results in the socially desired outcome. Distributed Algorithmic Mechanism Design (DAMD) seeks solutions within this framework that are both fully distributed and computationally tractable [12]. [10] and [11] are examples of applying DAMD to BGP routing and
multicast cost sharing. More recently, DAMD has also been studied in dynamic environments [38]. In this context, demonstrating the superiority of a cooperative strategy (as in our work) is consistent with the objective of incentivizing the desired behavior among selfish players. The unique challenges posed by peer-to-peer systems have inspired an additional body of work [5] [37], mainly in the context of packet forwarding in wireless ad-hoc routing [8] [27] [30] [35] and file sharing [15] [31]. Friedman and Resnick [13] consider the problem of zero-cost identities in online environments and find that in such systems punishing all newcomers is inevitable. Using a theoretical model, they demonstrate that such a system can converge to cooperation only for sufficiently low turnover rates, which our results confirm. [6] and [9] show that whitewashing and collusion can have dire consequences for peer-to-peer systems and are difficult to prevent in a fully decentralized system. Some commercial file-sharing clients [1] [2] provide incentive mechanisms that are enforced by making it difficult for the user to modify the source code. These mechanisms can be circumvented by a skilled user, or by a competing company releasing a compatible client without the incentive restrictions. They are also still vulnerable to zero-cost identities and collusion. BitTorrent [7] uses Tit-for-Tat as a method for resource allocation, where a user's upload rate dictates his download rate.

6. CONCLUSIONS

In this paper we take a game-theoretic approach to the problem of cooperation in peer-to-peer networks. Addressing the challenges imposed by P2P systems, including large populations, high turnover, asymmetry of interest, and zero-cost identities, we propose a family of scalable and robust incentive techniques, based upon the Reciprocative decision function, to support cooperative behavior and improve overall system performance. We find that the adoption of shared
history and discriminating server selection techniques can mitigate the challenge of few repeat transactions that arises from large population size, high turnover, and asymmetry of interest. Furthermore, cooperation can be established even in the presence of zero-cost identities through the use of an adaptive policy toward strangers. Finally, colluders and traitors can be kept in check via subjective reputations and short-term history, respectively.

7. ACKNOWLEDGMENTS

We thank Mary Baker, T.J. Giuli, Petros Maniatis, the anonymous reviewer, and our shepherd, Margo Seltzer, for their useful comments that helped improve the paper. This work is supported in part by the National Science Foundation under ITR awards ANI-0085879 and ANI-0331659, and Career award ANI-0133811. Views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of NSF or the U.S. government.

8. REFERENCES

[1] Kazaa. http://www.kazaa.com.
[2] Limewire. http://www.limewire.com.
[3] ADAR, E., AND HUBERMAN, B. A. Free Riding on Gnutella. First Monday 5, 10 (October 2000).
[4] AXELROD, R. The Evolution of Cooperation. Basic Books, 1984.
[5] BURAGOHAIN, C., AGRAWAL, D., AND SURI, S. A Game-Theoretic Framework for Incentives in P2P Systems. In International Conference on Peer-to-Peer Computing (Sep 2003).
[6] CASTRO, M., DRUSCHEL, P., GANESH, A., ROWSTRON, A., AND WALLACH, D. S. Security for Structured Peer-to-Peer Overlay Networks. In Proceedings of Multimedia Computing and Networking 2002 (MMCN '02) (2002).
[7] COHEN, B. Incentives Build Robustness in BitTorrent. In 1st Workshop on Economics of Peer-to-Peer Systems (2003).
[8] CROWCROFT, J., GIBBENS, R., KELLY, F., AND ÖSTRING, S. Modeling Incentives for Collaboration in Mobile Ad-Hoc Networks. In Modeling and Optimization in Mobile, Ad-Hoc and Wireless Networks (2003).
[9] DOUCEUR, J.
R. The Sybil Attack. In Electronic Proceedings of the International Workshop on Peer-to-Peer Systems (2002).
[10] FEIGENBAUM, J., PAPADIMITRIOU, C., SAMI, R., AND SHENKER, S. A BGP-based Mechanism for Lowest-Cost Routing. In Proceedings of the ACM Symposium on Principles of Distributed Computing (2002).
[11] FEIGENBAUM, J., PAPADIMITRIOU, C., AND SHENKER, S. Sharing the Cost of Multicast Transmissions. Journal of Computer and System Sciences 63 (2001), 21-41.
[12] FEIGENBAUM, J., AND SHENKER, S. Distributed Algorithmic Mechanism Design: Recent Results and Future Directions. In Proceedings of the International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications (2002).
[13] FRIEDMAN, E., AND RESNICK, P. The Social Cost of Cheap Pseudonyms. Journal of Economics and Management Strategy 10, 2 (1998), 173-199.
[14] FUDENBERG, D., AND LEVINE, D. K. The Theory of Learning in Games. The MIT Press, 1999.
[15] GOLLE, P., LEYTON-BROWN, K., MIRONOV, I., AND LILLIBRIDGE, M. Incentives for Sharing in Peer-to-Peer Networks. In Proceedings of the 3rd ACM Conference on Electronic Commerce (October 2001).
[16] GROSS, B., AND ACQUISTI, A. Balances of Power on eBay: Peers or Unequals? In Workshop on Economics of Peer-to-Peer Networks (2003).
[17] GU, B., AND JARVENPAA, S. Are Contributions to P2P Technical Forums Private or Public Goods? An Empirical Investigation. In 1st Workshop on Economics of Peer-to-Peer Systems (2003).
[18] HARDIN, G. The Tragedy of the Commons. Science 162 (1968), 1243-1248.
[19] HOFBAUER, J., AND SIGMUND, K. Evolutionary Games and Population Dynamics. Cambridge University Press, 1998.
[20] KAMVAR, S. D., SCHLOSSER, M. T., AND GARCIA-MOLINA, H. The EigenTrust Algorithm for Reputation Management in P2P Networks. In Proceedings of the Twelfth International World Wide Web Conference (May 2003).
[21] KAN, G.
Peer-to-Peer: Harnessing the Power of Disruptive Technologies, 1st ed. O'Reilly & Associates, Inc., March 2001, ch. Gnutella, pp. 94-122.
[22] KUHN, S. Prisoner's Dilemma. In The Stanford Encyclopedia of Philosophy, Edward N. Zalta, Ed., Summer ed., 2003.
[23] LEE, S., SHERWOOD, R., AND BHATTACHARJEE, B. Cooperative Peer Groups in NICE. In Proceedings of IEEE INFOCOM (2003).
[24] LEVIEN, R., AND AIKEN, A. Attack-Resistant Trust Metrics for Public Key Certification. In Proceedings of the USENIX Security Symposium (1998), pp. 229-242.
[25] MANIATIS, P., ROUSSOPOULOS, M., GIULI, T. J., ROSENTHAL, D. S. H., BAKER, M., AND MULIADI, Y. Preserving Peer Replicas by Rate-Limited Sampled Voting. In ACM Symposium on Operating Systems Principles (2003).
[26] MARTI, S., GIULI, T. J., LAI, K., AND BAKER, M. Mitigating Routing Misbehavior in Mobile Ad-Hoc Networks. In Proceedings of MobiCom (2000), pp. 255-265.
[27] MICHIARDI, P., AND MOLVA, R. A Game Theoretical Approach to Evaluate Cooperation Enforcement Mechanisms in Mobile Ad-Hoc Networks. In Modeling and Optimization in Mobile, Ad-Hoc and Wireless Networks (2003).
[28] NOWAK, M. A., AND SIGMUND, K. Evolution of Indirect Reciprocity by Image Scoring. Nature 393 (1998), 573-577.
[29] OLSON, M. The Logic of Collective Action: Public Goods and the Theory of Groups. Harvard University Press, 1971.
[30] RAGHAVAN, B., AND SNOEREN, A. Priority Forwarding in Ad-Hoc Networks with Self-Interested Parties. In Workshop on Economics of Peer-to-Peer Systems (June 2003).
[31] RANGANATHAN, K., RIPEANU, M., SARIN, A., AND FOSTER, I. To Share or Not to Share: An Analysis of Incentives to Contribute in Collaborative File Sharing Environments. In Workshop on Economics of Peer-to-Peer Systems (June 2003).
[32] REITER, M. K., AND STUBBLEBINE, S. G. Authentication Metric Analysis and Design. ACM Transactions on Information and System Security 2, 2 (1999), 138-158.
[33] SAROIU, S., GUMMADI, P. K., AND GRIBBLE, S.
D. A Measurement Study of Peer-to-Peer File Sharing Systems. In Proceedings of Multimedia Computing and Networking 2002 (MMCN '02) (2002).
[34] SMITH, J. M. Evolution and the Theory of Games. Cambridge University Press, 1982.
[35] URPI, A., BONUCCELLI, M., AND GIORDANO, S. Modeling Cooperation in Mobile Ad-Hoc Networks: A Formal Description of Selfishness. In Modeling and Optimization in Mobile, Ad-Hoc and Wireless Networks (2003).
[36] VISHNUMURTHY, V., CHANDRAKUMAR, S., AND SIRER, E. G. KARMA: A Secure Economic Framework for P2P Resource Sharing. In Workshop on Economics of Peer-to-Peer Networks (2003).
[37] WANG, W., AND LI, B. To Play or to Control: A Game-based Control-Theoretic Approach to Peer-to-Peer Incentive Engineering. In International Workshop on Quality of Service (June 2003).
[38] WOODARD, C. J., AND PARKES, D. C. Strategyproof Mechanisms for Ad-Hoc Network Formation. In Workshop on Economics of Peer-to-Peer Systems (June 2003).
5. RELATED WORK

Previous work has examined the incentive problem as applied to societies in general and, more recently, to Internet applications and peer-to-peer systems in particular. The problem has been studied extensively using a game-theoretic approach. The prisoners' dilemma model provides a natural framework to study the effectiveness of different strategies in establishing cooperation among players. In a simulation environment with many repeated games, persistent identities, and no collusion, Axelrod [4] shows that the Tit-for-Tat strategy dominates. Nowak and Sigmund [28] introduce the Image strategy and demonstrate its ability to establish cooperation among players, despite few repeat transactions, through the use of shared history. Players using Image cooperate with players whose global count of cooperations minus defections exceeds some threshold. In recent years, researchers have used economic "mechanism design" theory to tackle the cooperation problem in Internet applications. Mechanism design is the inverse of game theory: it asks how to design a game in which the behavior of strategic players results in the socially desired outcome. [10] and [11] are examples of applying DAMD to
BGP routing and multicast cost sharing. More recently, DAMD has also been studied in dynamic environments [38]. Friedman and Resnick [13] consider the problem of zero-cost identities in online environments and find that in such systems punishing all newcomers is inevitable. Using a theoretical model, they demonstrate that such a system can converge to cooperation only for sufficiently low turnover rates, which our results confirm. [6] and [9] show that whitewashing and collusion can have dire consequences for peer-to-peer systems and are difficult to prevent in a fully decentralized system. Some commercial file-sharing clients [1] [2] provide incentive mechanisms which are enforced by making it difficult for the user to modify the source code. These mechanisms can be circumvented by a skilled user, or by a competing company releasing a compatible client without the incentive restrictions. These mechanisms are also still vulnerable to zero-cost identities and collusion. BitTorrent [7] uses Tit-for-Tat as a method for resource allocation, where a user's upload rate dictates his download rate.

6. CONCLUSIONS

In this paper we take a game-theoretic approach to the problem of cooperation in peer-to-peer networks. Addressing the challenges imposed by P2P systems, including large populations, high turnover, asymmetry of interest, and zero-cost identities, we propose a family of scalable and robust incentive techniques, based upon the Reciprocative decision function, to support cooperative behavior and improve overall system performance. We find that the adoption of shared history and discriminating server selection techniques can mitigate the challenge of few repeat transactions that arises due to large population size, high turnover, and asymmetry of interest. Furthermore, cooperation can be established even in the presence of zero-cost identities through the use of an adaptive policy towards strangers. Finally, colluders and traitors can be kept in check via
subjective reputations and short-term history, respectively.

Robust Incentive Techniques for Peer-to-Peer Networks

Lack of cooperation (free riding) is one of the key problems that confronts today's P2P systems. What makes this problem particularly difficult is the unique set of challenges that P2P systems pose: large populations, high turnover, asymmetry of interest, collusion, zero-cost identities, and traitors. To tackle these challenges we model the P2P system using the Generalized Prisoner's Dilemma (GPD), and propose the Reciprocative decision function as the basis of a family of incentive techniques. These techniques are fully distributed and include: discriminating server selection, maxflow-based subjective reputation, and adaptive stranger policies. Through simulation, we show that these techniques can drive a system of strategic users to nearly optimal levels of cooperation.

1. INTRODUCTION

Many peer-to-peer (P2P) systems rely on cooperation among self-interested users. For example, in a file-sharing system, overall download latency and failure rate increase when users do not share their resources [3]. In a wireless ad-hoc network, overall packet latency and loss rate increase when nodes refuse to forward packets on behalf of others [26]. Further examples are file preservation [25], discussion boards [17], online auctions [16], and overlay routing [6]. In many of these systems, users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance. As a result, each user's attempt to maximize her own utility effectively lowers the overall utility of the system.

Figure 1: Example of asymmetry of interest. A wants service from B, B wants service from C, and C wants service from A.

Avoiding this "tragedy of the commons" [18] requires incentives for cooperation. We adopt a game-theoretic approach in addressing this problem. In particular, we use a prisoners' dilemma
model to capture the essential tension between individual and social utility, asymmetric payoff matrices to allow asymmetric transactions between peers, and a learning-based [14] population dynamic model to specify the behavior of individual peers, which can change continuously. While social dilemmas have been studied extensively, P2P applications impose a unique set of challenges, including:

• Large populations and high turnover: A file-sharing system such as Gnutella or KaZaa can exceed 100,000 simultaneous users, and nodes can have an average lifetime on the order of minutes [33].

• Asymmetry of interest: Asymmetric transactions of P2P systems create the possibility for asymmetry of interest. In the example in Figure 1, A wants service from B, B wants service from C, and C wants service from A.

• Zero-cost identity: Many P2P systems allow peers to continuously switch identities (i.e., whitewash).

Strategies that work well in traditional prisoners' dilemma games, such as Tit-for-Tat [4], will not fare well in the P2P context. Therefore, we propose a family of scalable and robust incentive techniques, based upon a novel Reciprocative decision function, to address these challenges and provide different tradeoffs:

• Discriminating Server Selection: Cooperation requires familiarity between entities, either directly or indirectly. However, the large populations and high turnover of P2P systems make it less likely that repeat interactions will occur with a familiar entity. We show that by having each peer keep a private history of the actions of other peers toward her, and using discriminating server selection, the Reciprocative decision function can scale to large populations and moderate levels of turnover.

• Shared History: Scaling to higher turnover and mitigating asymmetry of interest requires shared history. Consider the example in Figure 1. If everyone provides service, then the system operates optimally. However, if
everyone keeps only private history, no one will provide service, because B does not know that A has served C, and so on. We show that with shared history, B knows that A served C and consequently will serve A. This results in a higher level of cooperation than with private history. The cost of shared history is a distributed infrastructure (e.g., distributed hash table-based storage) to store the history.

• Maxflow-based Subjective Reputation: Shared history creates the possibility for collusion. In the example in Figure 1, C can falsely claim that A served him, thus deceiving B into providing service. We show that a maxflow-based algorithm that computes reputation subjectively promotes cooperation despite collusion among 1/3 of the population. The basic idea is that B would only believe C if C had already provided service to B. The cost of the maxflow algorithm is its O(V^3) running time, where V is the number of nodes in the system. To eliminate this cost, we have developed a variation with constant mean running time, which trades effectiveness for complexity of computation. We show that the maxflow-based algorithm scales better than private history in the presence of colluders, without the centralized trust required in previous work [9] [20].

• Adaptive Stranger Policy: Zero-cost identities allow non-cooperating peers to escape the consequences of not cooperating and, if not stopped, eventually destroy cooperation in the system. We show that if Reciprocative peers treat strangers (peers with no history) using a policy that adapts to the behavior of previous strangers, peers have little incentive to whitewash, and whitewashing can be nearly eliminated from the system. The adaptive stranger policy does this without requiring centralized allocation of identities, an entry fee for newcomers, or rate-limiting [13] [9] [25].

• Short-term History: History also creates the possibility that a previously well-behaved peer with a good reputation will turn traitor and
use his good reputation to exploit other peers. The peer could be making a strategic decision, or someone may have hijacked her identity (e.g., by compromising her host). Long-term history exacerbates this problem by allowing peers with many previous transactions to exploit that history for many new transactions. We show that short-term history prevents traitors from disrupting cooperation.

The rest of the paper is organized as follows. We describe the model in Section 2 and the Reciprocative decision function in Section 3. We then proceed to the incentive techniques in Section 4. In Section 4.1, we describe the challenges of large populations and high turnover and show the effectiveness of discriminating server selection and shared history. In Section 4.2, we describe collusion and demonstrate how subjective reputation mitigates it. In Section 4.3, we present the problem of zero-cost identities and show how an adaptive stranger policy promotes persistent identities. In Section 4.4, we show how traitors disrupt cooperation and how short-term history deals with them. We discuss related work in Section 5 and conclude in Section 6.

2. MODEL AND ASSUMPTIONS

In this section, we present our assumptions about P2P systems and their users, and introduce a model that aims to capture the behavior of users in a P2P system.

2.1 Assumptions

We assume a P2P system in which users are strategic, i.e., they act rationally to maximize their benefit. However, to capture some of the real-life unpredictability in the behavior of users, we allow users to randomly change their behavior with a low probability (see Section 2.4). For simplicity, we assume a homogeneous system in which all peers issue and satisfy requests at the same rate. A peer can satisfy any request, and, unless otherwise specified, peers request service uniformly at random from the population.
Finally, we assume that all transactions incur the same cost to all servers and provide the same benefit to all clients. We assume that users can pollute shared history with false recommendations (Section 4.2), switch identities at zero cost (Section 4.3), and spoof other users (Section 4.4). We do not assume any centralized trust or centralized infrastructure.

2.2 Model

To aid the development and study of the incentive schemes, in this section we present a model of the users' behaviors. In particular, we model the benefits and costs of P2P interactions (the game) and the population dynamics caused by mutation, learning, and turnover. Our model is designed to have the following properties, which characterize a large set of P2P systems:

• Social Dilemma: Universal cooperation should result in optimal overall utility, but individuals who exploit the cooperation of others while not cooperating themselves (i.e., defecting) should benefit more than users who do cooperate.

• Asymmetric Transactions: A peer may want service from another peer while not currently being able to provide the service that the second peer wants. Transactions should be able to have asymmetric payoffs.

• Untraceable Defections: A peer should not be able to determine the identity of peers who have defected on her. This models the difficulty or expense of determining that a peer could have provided a service, but didn't. For example, in the Gnutella file-sharing system [21], a peer may simply ignore queries despite possessing the desired file, thus preventing the querying peer from identifying the defecting peer.

• Dynamic Population: Peers should be able to change their behavior and enter or leave the system independently and continuously.

Figure 2: Payoff matrix for the Generalized Prisoner's Dilemma. T, R, P, and S stand for temptation, reward, punishment, and sucker, respectively.

2.3 Generalized Prisoner's Dilemma

The Prisoner's Dilemma, developed by Flood, Dresher, and Tucker in 1950 [22], is a non-cooperative repeated game satisfying the social dilemma requirement. Each game consists of two players who can defect or cooperate. Depending on how each acts, the players receive a payoff. The players use a strategy to decide how to act. Unfortunately, existing work either uses a specific asymmetric payoff matrix or only gives the general form for a symmetric one [4]. Instead, we use the Generalized Prisoner's Dilemma (GPD), which specifies the general form for an asymmetric payoff matrix that preserves the social dilemma. In the GPD, one player is the client and one player is the server in each game, and it is only the decision of the server that is meaningful for determining the outcome of the transaction. A player can be a client in one game and a server in another. The client and server receive their payoffs from a generalized payoff matrix (Figure 2): R_c, S_c, T_c, and P_c are the client's payoffs and R_s, S_s, T_s, and P_s are the server's payoffs. A GPD payoff matrix must have the following properties to create a social dilemma:

1. Mutual cooperation leads to higher payoffs than mutual defection: R_c + R_s > P_c + P_s.

2. Mutual cooperation leads to higher payoffs than one player suckering the other: R_c + R_s > S_c + T_s and R_c + R_s > T_c + S_s.

3. Defection dominates cooperation (at least weakly) at the individual level for the entity who decides whether to cooperate or defect: T_s >= R_s and P_s >= S_s, with at least one inequality strict (T_s > R_s or P_s > S_s).

The last set of inequalities assumes that clients do not incur a cost regardless of whether they cooperate or defect, and therefore clients always cooperate. These properties correspond to similar properties of the classic Prisoner's Dilemma and allow any form of asymmetric transaction while still creating a social dilemma. Furthermore, one or more of the four possible actions (client cooperate and defect, and server cooperate and defect) can be untraceable. If one player makes an untraceable action, the other player does not know the identity of the first player. For example, to model a P2P application like file sharing or overlay routing, we use the specific payoff matrix values shown in Figure 3. This satisfies the inequalities specified above, where only the server can choose between cooperating and defecting. In addition, for this particular payoff matrix, clients are unable to trace server defections. This is the payoff matrix that we use in our simulation results.

Figure 3: The payoff matrix for an application like P2P file sharing or overlay routing.

2.4 Population Dynamics

A characteristic of P2P systems is that peers change their behavior and enter or leave the system independently and continuously. Several studies [4] [28] of repeated Prisoner's Dilemma games use an evolutionary model [19] [34] of population dynamics. An evolutionary model is not suitable for P2P systems because it only specifies the global behavior and all changes occur at discrete times. For example, it may specify that a population of 5 "100% Cooperate" players and 5 "100% Defect" players evolves into a population with 3 and 7 players, respectively. It does not specify which specific players switched. Furthermore, all the switching occurs at the end of a generation instead of continuously, as in a real P2P system. As a result, evolutionary population dynamics do not accurately model turnover, traitors, and strangers.
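Before turning to our model's dynamics, the GPD conditions of Section 2.3 can be checked mechanically. The following is a minimal sketch with assumed payoff values (the text does not reproduce Figure 3's numbers); the values are chosen so that mutual cooperation totals 6 per round, matching the maximum per-round score reported in Section 3.2, and the client-temptation value T_c is an arbitrary placeholder:

```python
# Sanity check of the GPD social-dilemma conditions from Section 2.3.
# The concrete payoffs below are illustrative assumptions, not the
# paper's Figure 3 values.

def is_social_dilemma(Rc, Sc, Tc, Pc, Rs, Ss, Ts, Ps):
    mutual_coop_beats_defect = Rc + Rs > Pc + Ps                         # property 1
    coop_beats_suckering = (Rc + Rs > Sc + Ts) and (Rc + Rs > Tc + Ss)   # property 2
    server_defection_dominates = (Ts >= Rs and Ps >= Ss and              # property 3
                                  (Ts > Rs or Ps > Ss))
    return (mutual_coop_beats_defect and coop_beats_suckering
            and server_defection_dominates)

# Hypothetical file-sharing payoffs: the client gains 7 when served, the
# server pays a cost of 1 to serve; defection yields 0 for both sides.
payoffs = dict(Rc=7, Sc=0, Tc=5, Pc=0, Rs=-1, Ss=-1, Ts=0, Ps=0)
print(is_social_dilemma(**payoffs))  # True: the matrix preserves the dilemma
```

Any matrix violating one of the three properties, e.g. one where mutual defection pays more than mutual cooperation, is rejected by the same check.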
In our model, entities take independent and continuous actions that change the composition of the population. Time consists of rounds. In each round, every player plays one game as a client and one game as a server. At the end of a round, a player may: (1) mutate, (2) learn, (3) turn over, or (4) stay the same. If a player mutates, she switches to a randomly picked strategy. If she learns, she switches to a strategy that she believes will produce a higher score (described in more detail below). If she maintains her identity after switching strategies, then she is referred to as a traitor. If a player suffers turnover, she leaves the system and is replaced with a newcomer who uses the same strategy as the exiting player.

To learn, a player collects local information about the performance of different strategies. This information consists of both her personal observations of strategy performance and the observations of those players she interacts with. This models users communicating out-of-band about how strategies perform. Let s be the running average of the performance of a player's current strategy per round, and let age be the number of rounds she has been using the strategy. A strategy's rating is computed from s and age; we use the age and compute the running average before taking the ratio to prevent young samples (which are more likely to be outliers) from skewing the rating. At the end of a round, a player switches to the highest-rated strategy with a probability proportional to the difference in score between her current strategy and the highest-rated strategy.

3. RECIPROCATIVE DECISION FUNCTION

In this section, we present the new decision function, Reciprocative, that is the basis for our incentive techniques. A decision function maps from a history of a player's actions to a decision whether to cooperate with or defect on that player. A strategy consists of a decision function, private or shared history, a server selection mechanism, and a stranger policy.
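This four-part decomposition of a strategy can be expressed directly in code. A minimal sketch, in which the field names, the string-valued options, and the decision-function signature are all illustrative assumptions rather than the paper's implementation:

```python
from dataclasses import dataclass
from typing import Callable

# A strategy bundles a decision function with a history type, a
# server-selection mechanism, and a stranger policy, as described above.

@dataclass
class Strategy:
    decision: Callable[[dict], bool]   # peer's history record -> cooperate?
    history: str                       # 'private' or 'shared'
    selection: str                     # e.g., 'random' or 'discriminating'
    stranger_policy: str               # e.g., 'cooperate', 'defect', 'adaptive'

# Two simple decision functions ignore history entirely:
always_cooperate = Strategy(lambda h: True,  'private', 'random', 'cooperate')
always_defect    = Strategy(lambda h: False, 'private', 'random', 'defect')

print(always_cooperate.decision({}), always_defect.decision({}))  # True False
```

A Reciprocative strategy would replace the constant decision function with one that inspects the peer's history record, which is the subject of the rest of this section.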
Our approach to incentives is to design strategies which maximize both individual and social benefit. Strategic users will choose to use such strategies and thereby drive the system to high levels of cooperation. Two examples of simple decision functions are "100% Cooperate" and "100% Defect". "100% Cooperate" models a naive user who does not yet realize that she is being exploited. "100% Defect" models a greedy user who is intent on exploiting the system. In the absence of incentive techniques, "100% Defect" users will quickly dominate the "100% Cooperate" users and destroy cooperation in the system.

Our requirements for a decision function are that (1) it can use shared and subjective history, (2) it can deal with untraceable defections, and (3) it is robust against different patterns of defection. Previous decision functions such as Tit-for-Tat [4] and Image [28] (see Section 5) do not satisfy these criteria. For example, Tit-for-Tat and Image base their decisions on both cooperations and defections, and therefore cannot deal with untraceable defections. In this section and the remaining sections we demonstrate how Reciprocative-based strategies satisfy all of the requirements stated above.

The probability that a Reciprocative player cooperates with a peer is a function of its normalized generosity. Generosity measures the benefit an entity has provided relative to the benefit it has consumed. This is important because entities which consume more services than they provide, even if they provide many services, will cause cooperation to collapse. For some entity i, let p_i and c_i be the services i has provided and consumed, respectively. Entity i's generosity is simply the ratio of the service it provides to the service it consumes: g(i) = p_i / c_i. One possibility is to cooperate with a probability equal to the generosity. Although this is effective in some cases, in other cases a Reciprocative player may consume more than she provides (e.g., when initially using the "Stranger Defect" policy in Section 4.3).
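The generosity measure, and its normalization by the judging player's own generosity as motivated next, can be sketched as follows. The clipping to [0, 1] and the conventions for zero consumption or zero own-generosity are illustrative assumptions:

```python
# Sketch of the generosity measures behind the Reciprocative decision
# function. Edge-case conventions are assumptions, not the paper's.

def generosity(provided, consumed):
    """g(i) = p_i / c_i: benefit provided relative to benefit consumed."""
    if consumed == 0:
        return float('inf') if provided > 0 else 1.0   # assumed convention
    return provided / consumed

def cooperate_probability(peer_g, own_g):
    """Cooperate with probability equal to the peer's generosity,
    normalized by the judging player's own generosity and clipped to [0, 1]."""
    if own_g == 0:
        return 1.0                                     # assumed convention
    return max(0.0, min(1.0, peer_g / own_g))

# A peer that has provided 3 units and consumed 4, judged by a player
# whose own generosity is 1.0, is served with probability 0.75.
print(cooperate_probability(generosity(3, 4), 1.0))  # 0.75
```

Cooperating with probability equal to the raw generosity corresponds to calling `cooperate_probability` with `own_g = 1.0`.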
This will cause Reciprocative players to defect on each other. To prevent this situation, a Reciprocative player uses her own generosity as a measuring stick to judge her peer's generosity. Normalized generosity measures entity i's generosity relative to entity j's generosity; more concretely, entity i's normalized generosity as perceived by entity j is g(i) / g(j).

In the remainder of this section, we describe our simulation framework and use it to demonstrate the benefits of the baseline Reciprocative decision function.

Table 1: Default simulation parameters.

3.1 Simulation Framework

Our simulator implements the model described in Section 2. We use the asymmetric file-sharing payoff matrix (Figure 3) with untraceable defections because it models transactions in many P2P systems, such as file sharing and packet forwarding in ad-hoc and overlay networks. Our simulation study is composed of different scenarios reflecting the challenges of various non-cooperative behaviors. Table 1 presents the nominal parameter values used in our simulation. The "Ratio using" rows refer to the initial ratio of the total population using a particular strategy. In each scenario we vary the value range of a specific parameter to reflect a particular situation or attack. We then vary the exact properties of the Reciprocative strategy to defend against that situation or attack.

3.2 Baseline Results

Figure 4: The evolution of strategy populations over time. "Time" is the number of elapsed rounds. "Population" is the number of players using a strategy.

In this section, we present the dynamics of the game for the basic scenario presented in Table 1, to familiarize the reader and set a baseline for more complicated scenarios. Figures 4(a) (60 players) and (b) (120 players) show players switching to higher-scoring strategies over time in two separate runs of the simulator. Each point in the graph represents the number of players using a particular strategy at one point in
time. Figures 5(a) and (b) show the corresponding mean overall score per round. This measures the degree of cooperation in the system: 6 is the maximum possible (achieved when everybody cooperates) and 0 is the minimum (achieved when everybody defects). From the file-sharing payoff matrix, a net of 6 means everyone is able to download a file and a 0 means that no one is able to do so.

Figure 5: The mean overall per-round score over time.

We use this metric in all later results to evaluate our incentive techniques. Figure 5(a) shows that the Reciprocative strategy using private history causes a system of 60 players to converge to a cooperation level of 3.7, but this drops to 0.5 for 120 players. One would expect the 60-player system to reach the optimal level of cooperation (6) because all the defectors are eliminated from the system. It does not because of asymmetry of interest. For example, suppose player B is using Reciprocative with private history. Player A may happen to ask for service from player B twice in succession without providing service to player B in the interim. Player B does not know of the service player A has provided to others, so player B will reject service to player A, even though player A is cooperative. We discuss solutions to asymmetry of interest and the failure of Reciprocative in the 120-player system in Section 4.1.

4. RECIPROCATIVE-BASED INCENTIVE TECHNIQUES

In this section we present our incentive techniques and evaluate their behavior by simulation. To make the exposition clear, we group our techniques by the challenges they address: large populations and high turnover (Section 4.1), collusion (Section 4.2), zero-cost identities (Section 4.3), and traitors (Section 4.4).

4.1 Large Populations and High Turnover

The large populations and high turnover of P2P systems make it less likely that repeat interactions will occur with a familiar entity. Under these conditions, basing decisions only on private history (records
about interactions the peer has been directly involved in) is not effective. In addition, private history does not deal well with asymmetry of interest. For example, if player B has cooperated with others but not with player A himself in the past, player A has no indication of player B's generosity, and thus may unduly defect on him. We propose two mechanisms to alleviate the problem of few repeat transactions: server selection and shared history.

4.1.1 Server Selection

A natural way to increase the probability of interacting with familiar peers is discriminating server selection. However, the asymmetry of transactions challenges selection mechanisms. Unlike in the prisoner's dilemma payoff matrix, where players can benefit one another within a single transaction, transactions in the GPD are asymmetric. As a result, a player who selects her donor a second time without contributing to her in the interim may face a defection. In addition, due to the untraceability of defections, it is impossible to maintain blacklists to avoid interactions with known defectors. In order to deal with asymmetric transactions, every player holds fixed-size lists of both past donors and past recipients, and selects a server from one of these lists at random with equal probability. This way, users approach their past recipients and give them a chance to reciprocate. In scenarios with selective users, we omit the complete-availability assumption to prevent players from being clustered into many very small groups; instead, we assume that every player can perform the requested service with probability P (for the results presented in this section, P = 0.3). In addition, in order to avoid bias in favor of the selective players, all players (including the non-discriminating ones) select servers for games. Figure 6 demonstrates the effectiveness of the proposed selection mechanism in scenarios with large population sizes.
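The selection rule just described can be sketched as follows. The list size, the per-candidate availability check, and the fallback to a random peer when no familiar peer is available are illustrative assumptions:

```python
import random
from collections import deque

# Sketch of discriminating server selection: each player keeps fixed-size
# lists of past donors and past recipients and picks a server from one of
# the two lists with equal probability.

LIST_SIZE = 10        # assumed list capacity
AVAILABILITY_P = 0.3  # a candidate can perform the service with probability P = 0.3

class SelectivePlayer:
    def __init__(self, name):
        self.name = name
        self.past_donors = deque(maxlen=LIST_SIZE)      # peers who served me
        self.past_recipients = deque(maxlen=LIST_SIZE)  # peers I served

    def choose_server(self, population):
        """Prefer familiar peers; give past recipients a chance to reciprocate."""
        pool = self.past_donors if random.random() < 0.5 else self.past_recipients
        available = [p for p in pool if random.random() < AVAILABILITY_P]
        if available:
            return random.choice(available)
        # No familiar peer is available: fall back to a random other peer.
        return random.choice([p for p in population if p is not self])
```

Bounded `deque` lists ensure that stale acquaintances age out of the donor and recipient lists as the population turns over.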
We fix the initial ratio of Reciprocative players in the population (33%) while varying the population size (between 24 and 1,000). (Notice that while the data points in Figures 4(a) and (b) demonstrate the evolution of the system over time, each data point in this figure is the result of an entire simulation for a specific scenario.) The figure shows that the Reciprocative decision function using private history in conjunction with selective behavior can scale to large populations. In Figure 7 we fix the population size and vary the turnover rate. It demonstrates that while selective behavior is effective for low turnover rates, as turnover gets higher, selective behavior does not scale. This occurs because selection is only effective as long as players from the past stay alive long enough that they can be selected for future games.

4.1.2 Shared History

In order to mitigate asymmetry of interest and scale to higher turnover rates, shared history is needed. Shared history means that every peer keeps records about all of the interactions that occur in the system, regardless of whether he was directly involved in them or not. It allows players to leverage the experiences of others in cases of few repeat transactions. It only requires that someone has interacted with a particular player for the entire population to observe it; it thus scales better to large populations and high turnover, and also tolerates asymmetry of interest. Some examples of shared history schemes are [20] [23] [28]. Figure 7 shows the effectiveness of shared history under high turnover rates. In this figure, we fix the population size and vary the turnover rate. While selective players with private history can only tolerate moderate turnover, shared history scales to turnover of up to approximately 0.1. This means that 10% of the players leave the system at the end of each round. In Figure 6 we fix the turnover and vary the population size. It shows that shared history causes the system to converge to optimal cooperation and performance, regardless of the size of the population.
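The difference between the two history types can be illustrated on the Figure 1 example. In this sketch, flat dictionaries stand in for the distributed (e.g., DHT-based) storage a real system would use, and the record format is an illustrative assumption:

```python
from collections import defaultdict

# Private history: only what a peer has directly observed about another.
private = defaultdict(lambda: defaultdict(int))  # private[me][peer] = services peer gave me
# Shared history: everyone's observations, visible to all peers.
shared = defaultdict(int)                        # shared[peer] = services peer gave anyone

def serve(server, client):
    """Record that `server` provided service to `client`."""
    private[client][server] += 1
    shared[server] += 1

# Figure 1's asymmetry of interest: A serves C only.
serve('A', 'C')

# With private history alone, B has no evidence about A:
print(private['B']['A'])   # 0 -> B would refuse A
# With shared history, B sees that A served C and will serve A:
print(shared['A'])         # 1 -> B reciprocates
```

The global table is what makes one interaction anywhere in the system visible to the entire population, at the cost of the storage infrastructure, and, as the next section shows, of vulnerability to collusion.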
and performance, regardless of the size of the population.\nThese results show that shared history addresses all three challenges of large populations, high turnover, and asymmetry of transactions.\nFigure 7: Performance of selection mechanism under turnover.\nThe x-axis is the turnover rate.\nThe y-axis is the mean overall per round score.\nNevertheless, shared history has two disadvantages.\nFirst, while a decentralized implementation of private history is straightforward, an implementation of shared history requires communication overhead or centralization.\nA decentralized shared history can be implemented, for example, on top of a DHT, using a peer-to-peer storage system [36] or by disseminating information to other entities in a similar way to routing protocols.\nSecond, and more fundamental, shared history is vulnerable to collusion.\nIn the next section we propose a mechanism that addresses this problem.\n4.2 Collusion and Other Shared History Attacks\n4.2.1 Collusion\nWhile shared history is scalable, it is vulnerable to collusion.\nCollusion can be either positive (e.g. defecting entities claim that other defecting entities cooperated with them) or negative (e.g.
entities claim that other cooperative entities defected on them).\nCollusion subverts any strategy in which everyone in the system agrees on the reputation of a player (objective reputation).\nAn example of objective reputation is to use the Reciprocative decision function with shared history to count the total number of cooperations a player has given to and received from all entities in the system; another example is the Image strategy [28].\nThe effect of collusion is magnified in systems with zero-cost identities, where users can create fake identities that report false statements.\nInstead, to deal with collusion, entities can compute reputation subjectively, where player A weighs player B's opinions based on how much player A trusts player B.\nOur subjective algorithm is based on maxflow [24] [32].\nMaxflow is a graph-theoretic problem: given a directed graph with weighted edges, it asks for the greatest rate at which "material" can be shipped from the source to the target without violating any capacity constraints.\nFor example, in Figure 8 each edge is labeled with the amount of traffic that can travel on it.\nThe maxflow algorithm computes the maximum amount of traffic that can go from the source (s) to the target (t) without violating the constraints.\nIn this example, even though there is a loop of high capacity edges, the maxflow between the source and the target is only 2 (the numbers in brackets represent the actual flow on each edge in the solution).\nFigure 8: Each edge in the graph is labeled with its capacity and the actual flow it carries in brackets.\nThe maxflow between the source and the target in the graph is 2.\nFigure 9: This graph illustrates the robustness of maxflow in the presence of colluders who report bogus high reputation values.\nWe apply the maxflow algorithm by constructing a graph whose vertices are entities and whose edges are the services that entities have received from each other.\nThis information can be stored using
the same methods as the shared history.\nA maxflow is the greatest level of reputation the source can give to the sink without violating "reputation capacity" constraints.\nAs a result, nodes that dishonestly report high reputation values will not be able to subvert the reputation system.\nFigure 9 illustrates a scenario in which all the colluders (labeled with C) report high reputation values for each other.\nWhen node A computes the subjective reputation of B using the maxflow algorithm, it will not be affected by the local false reputation values; rather, the maxflow in this case will be 0.\nThis is because no service has been received from any of the colluders.\nFigure 6: Private vs. Shared History as a function of population size.\nIn our algorithm, the benefit that entity i has received (indirectly) from entity j is the maxflow from j to i. Conversely, the benefit that entity i has provided indirectly to j is the maxflow from i to j.\nThe subjective reputation of entity j as perceived by i is: min(maxflow(j to i) / maxflow(i to j), 1).\nFigure 10: Subjective shared history compared to objective shared history and private history in the presence of colluders.\nThe cost of maxflow is its long running time.\nThe standard preflow-push maxflow algorithm has a worst-case running time of O(V^3).\nInstead, we use Algorithm 1, which has a constant mean running time but sometimes returns no flow even though one exists.\nThe essential idea is to bound the mean number of nodes examined during the maxflow computation.\nThis bounds the overhead, but also bounds the effectiveness.\nDespite this, the results below show that a maxflow-based Reciprocative decision function scales to higher populations than one using private history.\nFigure 10 compares the effectiveness of subjective reputation to objective reputation in the presence of colluders.\nIn these scenarios, defectors collude by claiming that other colluders that they encounter gave them 100 cooperations for that encounter.\nAlso, the parameters for Algorithm
1 are set as follows: increment = 100, a = 0.9.\nAs in previous sections, Reciprocative with private history results in cooperation up to a point, beyond which it fails.\nThe difference here is that objective shared history fails for all population sizes.\nThis is because the Reciprocative players cooperate with the colluders because of their high reputations.\nHowever, subjective history can reach high levels of cooperation regardless of colluders.\nThis is because there are no high weight paths in the cooperation graph from colluders to any non-colluders, so the maxflow from a colluder to any non-colluder is 0.\nTherefore, a subjective Reciprocative player will conclude that that colluder has not provided any service to her and will reject service to the colluder.\nThus, the maxflow algorithm enables Reciprocative to maintain the scalability of shared history without being vulnerable to collusion or requiring centralized trust (e.g., trusted peers).\nSince we bound the running time of the maxflow algorithm, cooperation decreases as the population size increases, but the key point is that the subjective Reciprocative decision function scales to higher populations than one using private history.\nThis advantage only increases over time as CPU power increases and more cycles can be devoted to running the maxflow algorithm (by increasing the increment parameter).\nDespite the robustness of the maxflow algorithm to the simple form of collusion described previously, it still has vulnerabilities to more sophisticated attacks.\nOne is for an entity (the \"mole\") to provide service and then lie positively about other colluders.\nThe other colluders can then exploit their reputation to receive service.\nHowever, the effectiveness of this attack relies on the amount of service that the mole provides.\nSince the mole is paying all of the cost of providing service and receiving none of the benefit, she has a strong incentive to stop colluding and try another strategy.\nThis 
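The maxflow-based subjective reputation described above can be sketched in Python. This is a plain Edmonds-Karp maxflow for clarity, not the paper's bounded Algorithm 1, and the graph encoding (a dict of dicts where `coop[u][v]` is the service u has provided to v) and the capped-ratio normalization in `subjective_reputation` are our own illustrative choices:

```python
from collections import deque

def maxflow(cap, source, sink):
    """Edmonds-Karp maximum flow over a dict-of-dicts capacity graph."""
    # Build a residual graph, adding reverse edges of capacity 0.
    residual = {u: dict(nbrs) for u, nbrs in cap.items()}
    for u in list(residual):
        for v in list(residual[u]):
            residual.setdefault(v, {}).setdefault(u, 0)
    if source not in residual or sink not in residual:
        return 0
    flow = 0
    while True:
        # Breadth-first search for an augmenting path.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # Find the bottleneck capacity along the path and augment.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

def subjective_reputation(coop, i, j):
    """Reputation of j as perceived by i: the capped ratio of the
    indirect benefit i received from j (maxflow j -> i) to the
    benefit i provided to j (maxflow i -> j)."""
    received = maxflow(coop, j, i)
    provided = maxflow(coop, i, j)
    if provided == 0:
        # Convention assumed here: pure givers are fully reputable.
        return 1.0 if received > 0 else 0.0
    return min(received / provided, 1.0)
```

Because colluders have no service paths into the rest of the cooperation graph, the maxflow from a colluder to any non-colluder is 0, so their mutual claims of 100 cooperations never inflate their subjective reputation.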
forces the colluders to use mechanisms to maintain cooperation within their group, which may drive the cost of collusion to exceed the benefit.\n4.2.2 False reports\nAnother attack is for a defector to lie about receiving or providing service to another entity.\nThere are four possible actions that can be lied about: providing service, not providing service, receiving service, and not receiving service.\nFalsely claiming to receive service is the simple collusion attack described above.\nFalsely claiming not to have provided service provides no benefit to the attacker.\nFalsely claiming to have provided service or not to have received it allows an attacker to boost her own reputation and/or lower the reputation of another entity.\nAn entity may want to lower another entity's reputation in order to discourage others from selecting it and exclusively use its service.\nThese false claims are clearly identifiable in the shared history as inconsistencies where one entity claims a transaction occurred and another claims it did not.\nTo limit this attack, we modify the maxflow algorithm so that an entity always believes the entity that is closer to her in the flow graph.\nIf both entities are equally distant, then the disputed edge in the flow is not critical to the evaluation and is ignored.\nThis modification prevents those cases where the attacker is making false claims about an entity that is closer than her to the evaluating entity, which prevents her from boosting her own reputation.\nThe remaining possibilities are for the attacker to falsely claim to have provided service to or not to have received it from a victim entity that is farther from the evaluator than her.\nIn these cases, an attacker can only lower the reputation of the victim.\nThe effectiveness of doing this is limited by the number of services provided and received by the attacker, which makes executing this attack expensive.\n4.3 Zero-Cost Identities\nHistory assumes that entities maintain
persistent identities.\nHowever, in most P2P systems, identities are zero-cost.\nThis is desirable for network growth as it encourages newcomers to join the system.\nHowever, this also allows misbehaving users to escape the consequences of their actions by switching to new identities (i.e., whitewashing).\nWhitewashers can cause the system to collapse if they are not punished appropriately.\nUnfortunately, a player cannot tell if a stranger is a whitewasher or a legitimate newcomer.\nAlways cooperating with strangers encourages newcomers to join, but at the same time encourages whitewashing behavior.\nAlways defecting on strangers prevents whitewashing, but discourages newcomers from joining and may also initiate unfavorable cycles of defection.\nThis tension suggests that any stranger policy that has a fixed probability of cooperating with strangers will fail by either being too stingy when most strangers are newcomers or too generous when most strangers are whitewashers.\nOur solution is the "Stranger Adaptive" stranger policy.\nThe idea is to be generous to strangers when they are being generous and stingy when they are stingy.\nLet ps and cs be the number of services that strangers have provided and consumed, respectively.\nThe probability that a player using "Stranger Adaptive" helps a stranger is ps/cs.\nHowever, we do not wish to keep these counts permanently (for reasons described in Section 4.4).\nAlso, players may not know when strangers defect because defections are untraceable (as described in Section 2).\nConsequently, instead of keeping ps and cs, we assume that k = ps + cs, where k is a constant, and we keep the running ratio r = ps/cs.\nWhen we need to increment ps or cs, we generate the current values of ps and cs from k and r:\ncs = k/(1 + r), ps = k − cs.\nWe then compute the new r as follows: r = (ps + 1)/cs if the stranger provided service, or r = ps/(cs + 1) if the stranger consumed service.\nThis method allows us to keep a running ratio that reflects the recent generosity of strangers without knowing when strangers have defected.\nFigure 11: Different stranger
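The "Stranger Adaptive" bookkeeping can be sketched as a small Python class. It assumes the counts are recovered from the constants as cs = k/(1 + r) and ps = k − cs; the class and method names are illustrative, not from the paper:

```python
class StrangerAdaptive:
    """Track recent stranger generosity with a bounded running ratio.

    Instead of unbounded counts ps (services strangers provided) and
    cs (services strangers consumed), keep only r = ps/cs and assume
    ps + cs = k for a fixed constant k.
    """
    def __init__(self, k=10, r=1.0):
        self.k = k
        self.r = r

    def _counts(self):
        # Recover nominal counts from k and r: cs = k/(1+r), ps = k - cs.
        cs = self.k / (1.0 + self.r)
        ps = self.k - cs
        return ps, cs

    def stranger_provided(self):
        # A stranger provided service: nominally increment ps.
        ps, cs = self._counts()
        self.r = (ps + 1.0) / cs

    def stranger_consumed(self):
        # A stranger consumed service: nominally increment cs.
        ps, cs = self._counts()
        self.r = ps / (cs + 1.0)

    def prob_help_stranger(self):
        # Help a stranger with probability ps/cs, capped at 1.
        return min(self.r, 1.0)
```

Because k is fixed, each update shifts r toward the recent behavior of strangers, which is exactly the "generous when they are generous, stingy when they are stingy" behavior described above.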
policies for Reciprocative with shared history.\nThe x-axis is the turnover rate on a log scale.\nThe y-axis is the mean overall per round score.\nFigures 11 and 12 compare the effectiveness of the Reciprocative strategy using different policies toward strangers.\nFigure 12: Different stranger policies for Reciprocative with private history.\nThe x-axis is the turnover rate on a log scale.\nThe y-axis is the mean overall per round score.\nFigure 11 compares different stranger policies for Reciprocative with shared history, while Figure 12 is with private history.\nIn both figures, the players using the "100% Defect" strategy change their identity (whitewash) after every transaction and are indistinguishable from legitimate newcomers.\nThe Reciprocative players using the "Stranger Cooperate" policy completely fail to achieve cooperation.\nThis stranger policy allows whitewashers to maximize their payoff and consequently provides a high incentive for users to switch to whitewashing.\nIn contrast, Figure 11 shows that the "Stranger Defect" policy is effective with shared history.\nThis is because whitewashers always appear to be strangers and therefore the Reciprocative players will always defect on them.\nThis is consistent with previous work [13] showing that punishing strangers deals with whitewashers.\nHowever, Figure 12 shows that "Stranger Defect" is not effective with private history.\nThis is because Reciprocative requires some initial cooperation to bootstrap.\nIn the shared history case, a Reciprocative player can observe that another player has already cooperated with others.\nWith private history, the Reciprocative player only knows about the other players' actions toward her.\nTherefore, the initial defection dictated by the "Stranger Defect" policy will lead to later defections, which will prevent Reciprocative players from ever cooperating with each other.\nIn other simulations not shown here, the "Stranger Defect" stranger policy fails
even with shared history when there are no initial "100% Cooperate" players.\nFigure 11 shows that with shared history, the "Stranger Adaptive" policy performs as well as the "Stranger Defect" policy until the turnover rate is very high (10% of the population turning over after every transaction).\nIn these scenarios, "Stranger Adaptive" uses k = 10 and each player keeps a private r.\nMore importantly, it is significantly better than the "Stranger Defect" policy with private history because it can bootstrap cooperation.\nAlthough the "Stranger Defect" policy is marginally more effective than "Stranger Adaptive" at very high rates of turnover, P2P systems are unlikely to operate there because other services (e.g., routing) also cannot tolerate very high turnover.\nWe conclude that of the stranger policies that we have explored, "Stranger Adaptive" is the most effective.\nBy using "Stranger Adaptive", P2P systems with zero-cost identities and a sufficiently low turnover can sustain cooperation without a centralized allocation of identities.\n4.4 Traitors\nTraitors are players who acquire high reputation scores by cooperating for a while, and then traitorously turn into defectors before leaving the system.\nThey model both users who turn deliberately in order to gain a higher score and cooperators whose identities have been stolen and exploited by defectors.\nA strategy that maintains long-term history without discriminating between old and recent actions becomes highly vulnerable to exploitation by these traitors.\nThe top two graphs in Figure 13 demonstrate the effect of traitors on cooperation in a system where players keep long-term history (never clear history).\nIn these simulations, we run for 2000 rounds and allow cooperative players to keep their identities when switching to the 100% Defector strategy.\nWe use the default values for the other parameters.\nWithout traitors, the cooperative strategies thrive.\nWith traitors, the cooperative strategies
thrive until a cooperator turns traitor after 600 rounds.\nAs this "cooperator" exploits her reputation to achieve a high score, other cooperative players notice this and follow suit via learning.\nCooperation eventually collapses.\nOn the other hand, if we maintain short-term history and/or discount ancient history vis-à-vis recent history, traitors can be quickly detected, and the overall cooperation level stays high, as shown in the bottom two graphs in Figure 13.\nFigure 13: Keeping long-term vs. short-term history both with and without traitors.\n5.\nRELATED WORK\nPrevious work has examined the incentive problem as applied to societies in general and more recently to Internet applications and peer-to-peer systems in particular.\nA well-known phenomenon in this context is the "tragedy of the commons" [18], in which resources are under-provisioned due to selfish users who free-ride on the system's resources; it is especially common in large networks [29] [3].\nThe problem has been extensively studied using a game-theoretic approach.\nThe prisoners' dilemma model provides a natural framework to study the effectiveness of different strategies in establishing cooperation among players.\nIn a simulation environment with many repeated games, persistent identities, and no collusion, Axelrod [4] shows that the Tit-for-Tat strategy dominates.\nOur model assumes growth follows local learning rather than evolutionary dynamics [14], and also allows for more kinds of attacks.\nNowak and Sigmund [28] introduce the Image strategy and demonstrate its ability to establish cooperation among players despite few repeat transactions through the employment of shared history.\nPlayers using Image cooperate with players whose global count of cooperations minus defections exceeds some threshold.\nAs a result, an Image player is either vulnerable to partial defectors (if the threshold is set too low) or does not cooperate with other Image players (if the threshold is set too
high).\nIn recent years, researchers have used economic "mechanism design" theory to tackle the cooperation problem in Internet applications.\nMechanism design is the inverse of game theory.\nIt asks how to design a game in which the behavior of strategic players results in the socially desired outcome.\nDistributed Algorithmic Mechanism Design (DAMD) seeks solutions within this framework that are both fully distributed and computationally tractable [12].\n[10] and [11] are examples of applying DAMD to BGP routing and multicast cost sharing.\nMore recently, DAMD has also been studied in dynamic environments [38].\nIn this context, demonstrating the superiority of a cooperative strategy (as in the case of our work) is consistent with the objective of incentivizing the desired behavior among selfish players.\nThe unique challenges imposed by peer-to-peer systems have inspired an additional body of work [5] [37], mainly in the context of packet forwarding in wireless ad-hoc routing [8] [27] [30] [35], and file sharing [15] [31].\nFriedman and Resnick [13] consider the problem of zero-cost identities in online environments and find that in such systems punishing all newcomers is inevitable.\nUsing a theoretical model, they demonstrate that such a system can converge to cooperation only for sufficiently low turnover rates, which our results confirm.\n[6] and [9] show that whitewashing and collusion can have dire consequences for peer-to-peer systems and are difficult to prevent in a fully decentralized system.\nSome commercial file-sharing clients [1] [2] provide incentive mechanisms which are enforced by making it difficult for the user to modify the source code.\nThese mechanisms can be circumvented by a skilled user or by a competing company releasing a compatible client without the incentive restrictions.\nAlso, these mechanisms are still vulnerable to zero-cost identities and collusion.\nBitTorrent [7] uses Tit-for-Tat as a method for resource allocation, where a user's upload
rate dictates his download rate.\n6.\nCONCLUSIONS\nIn this paper we take a game theoretic approach to the problem of cooperation in peer-to-peer networks.\nAddressing the challenges imposed by P2P systems, including large populations, high turnover, asymmetry of interest and zero-cost identities, we propose a family of scalable and robust incentive techniques, based upon the Reciprocative decision function, to support cooperative behavior and improve overall system performance.\nWe find that the adoption of shared history and discriminating server selection techniques can mitigate the challenge of few repeat transactions that arises due to large population size, high turnover and asymmetry of interest.\nFurthermore, cooperation can be established even in the presence of zero-cost identities through the use of an adaptive policy towards strangers.\nFinally, colluders and traitors can be kept in check via subjective reputations and short-term history, respectively.\nBidding Algorithms for a Distributed Combinatorial Auction\nABSTRACT\nDistributed allocation and multiagent coordination problems can be solved through combinatorial auctions.\nHowever, most of the existing winner determination algorithms for combinatorial auctions are centralized.\nThe PAUSE auction is one of a few efforts to release the auctioneer from having to do all the work (it might even be possible to get rid of the auctioneer).
It is an increasing price combinatorial auction that naturally distributes the problem of winner determination amongst the bidders in such a way that they have an incentive to perform the calculation.\nIt can be used when we wish to distribute the computational load among the bidders or when the bidders do not wish to reveal their true valuations unless necessary.\nPAUSE establishes the rules the bidders must obey.\nHowever, it does not tell us how the bidders should calculate their bids.\nWe have developed a couple of bidding algorithms for the bidders in a PAUSE auction.\nOur algorithms always return the set of bids that maximizes the bidder's utility.\nSince the problem is NP-Hard, run time remains exponential in the number of items, but it is remarkably better than an exhaustive search.\nIn this paper we present our bidding algorithms, discuss their virtues and drawbacks, and compare the solutions obtained by them to the revenue-maximizing solution found by a centralized winner determination algorithm.\nBenito Mendoza* and José M.
Vidal\nComputer Science and Engineering, University of South Carolina, Columbia, SC 29208\nmendoza2@engr.sc.edu, vidal@sc.edu\nCategories and Subject Descriptors I.2.11 [Computing Methodologies]: Distributed Artificial Intelligence-Intelligent Agents, Multiagent Systems.\nGeneral Terms Algorithms, Performance.\n1.\nINTRODUCTION\nBoth the research and practice of combinatorial auctions have grown rapidly in the past ten years.\nIn a combinatorial auction bidders can place bids on combinations of items, called packages or bidsets, rather than just individual items.\nOnce the bidders place their
bids, it is necessary to find the allocation of items to bidders that maximizes the auctioneer's revenue.\nThis problem, known as the winner determination problem, is a combinatorial optimization problem and is NP-Hard [10].\nNevertheless, several algorithms that have a satisfactory performance for problem sizes and structures occurring in practice have been developed.\nThe practical applications of combinatorial auctions include: allocation of airport takeoff and landing time slots, procurement of freight transportation services, procurement of public transport services, and industrial procurement [2].\nBecause of their wide applicability, one cannot hope for a general-purpose winner determination algorithm that can efficiently solve every instance of the problem.\nThus, several approaches and algorithms have been proposed to address the winner determination problem.\nHowever, most of the existing winner determination algorithms for combinatorial auctions are centralized, meaning that they require all agents to send their bids to a centralized auctioneer who then determines the winners.\nExamples of these algorithms are CASS [3], Bidtree [11] and CABOB [12].\nWe believe that distributed solutions to the winner determination problem should be studied as they offer a better fit for some applications, as when, for example, agents do not want to reveal their valuations to the auctioneer.\nThe PAUSE (Progressive Adaptive User Selection Environment) auction [4, 5] is one of a few efforts to distribute the problem of winner determination amongst the bidders.\nPAUSE establishes the rules the participants have to adhere to so that the work is distributed amongst them.\nHowever, it is not concerned with how the bidders determine what they should bid.\nIn this paper we present two algorithms, pausebid and cachedpausebid, which enable agents in a PAUSE auction to find the bidset that maximizes their utility.\nOur algorithms implement a myopic utility-maximizing strategy and
are guaranteed to find the bidset that maximizes the agent's utility given the outstanding best bids at a given time.\npausebid performs a branch and bound search completely from scratch every time that it is called.\ncachedpausebid is a caching-based algorithm which explores fewer nodes, since it caches some solutions.\n694 978-81-904262-7-5 (RPS) © 2007 IFAAMAS\n2.\nTHE PAUSE AUCTION\nA PAUSE auction for m items has m stages.\nStage 1 consists of having simultaneous ascending-price open-cry auctions, and during this stage the bidders can only place bids on individual items.\nAt the end of this stage we will know what the highest bid for each individual item is and who placed that bid.\nEach successive stage k = 2, 3, ... , m consists of an ascending-price auction where the bidders must submit bidsets that cover all items, but each one of the bids must be for k items or less.\nThe bidders are allowed to use bids that other agents have placed in previous rounds when building their bidsets, thus allowing them to find better solutions.\nAlso, any new bidset has to have a sum of bid prices which is bigger than that of the currently winning bidset.\nAt the end of each stage k all agents know the best bid for every subset of size k or less.\nAlso, at any point in time after stage 1 has ended there is a standing bidset whose value increases monotonically as new bidsets are submitted.\nSince in the final round all agents consider all possible bidsets, we know that the final winning bidset will be one such that no agent can propose a better bidset.\nNote, however, that this bidset is not guaranteed to be the one that maximizes revenue, since we are using an ascending-price auction, so the winning bid for each set will be only slightly bigger than the second-highest bid for the particular set of items.\nThat is, the final prices will not be the same as the prices in a traditional combinatorial auction where all the bidders bid their true valuation.\nHowever, there remains the
open question of whether the final distribution of items to bidders found in a PAUSE auction is the same as the revenue-maximizing solution.\nOur test results provide an answer to this question.\nThe PAUSE auction makes the job of the auctioneer very easy.\nAll it has to do is to make sure that each new bidset has a revenue bigger than the current winning bidset, as well as make sure that every bid in an agent's bidset that is not his does indeed correspond to some other agent's previous bid.\nThe computational problem shifts from one of winner determination to one of bid generation.\nEach agent must search over the space of all bidsets which contain at least one of its bids.\nThe search is made easier by the fact that the agent needs to consider only the current best bids and only wants bidsets where its own utility is higher than in the current winning bidset.\nEach agent also has a clear incentive for performing this computation, namely, its utility only increases with each bidset it proposes (of course, it might decrease with the bidsets that others propose).\nFinally, the PAUSE auction has been shown to be envy-free in that at the conclusion of the auction no bidder would prefer to exchange his allocation with that of any other bidder [2].\nWe can even envision completely eliminating the auctioneer and, instead, have every agent perform the task of the auctioneer.\nThat is, all bids are broadcast, and when an agent receives a bid from another agent it updates the set of best bids and determines if the new bid is indeed better than the current winning bid.\nThe agents would have an incentive to perform their computation as it will increase their expected utility.\nAlso, any lies about other agents' bids are easily found out by keeping track of the bids sent out by every agent (the set of best bids).\nNamely, the only one that can increase an agent's bid value is the agent itself.\nAnyone claiming a higher value for some other agent is lying.\nThe only thing
missing is an algorithm that calculates the utility-maximizing bidset for each agent.\n3.\nPROBLEM FORMULATION\nA bid b is composed of three elements: bitems (the set of items the bid is over), bagent (the agent that placed the bid), and bvalue (the value or price of the bid).\nThe agents maintain a set B of the current best bids, one for each set of items of size ≤ k, where k is the current stage.\nAt any point in the auction, after the first round, there will also be a set W ⊆ B of currently winning bids.\nThis is the set of bids that covers all the items and currently maximizes the revenue, where the revenue of W is given by\nr(W) = Σ{b ∈ W} bvalue. (1)\nAgent i's value function is given by vi(S) ∈ ℝ, where S is a set of items.\nGiven an agent's value function and the current winning bidset W we can calculate the agent's utility from W as\nui(W) = Σ{b ∈ W | bagent = i} (vi(bitems) − bvalue). (2)\nThat is, the agent's utility for a bidset W is the value it receives for the items it wins in W minus the price it must pay for those items.\nIf the agent is not winning any items then its utility is zero.\nThe goal of the bidding agents in the PAUSE auction is to maximize their utility, subject to the constraint that their next set of bids must have a total revenue that is at least ε bigger than the current revenue, where ε is the smallest increment allowed in the auction.\nFormally, given that W is the current winning bidset, agent i must find a g*i such that r(g*i) ≥ r(W) + ε and\ng*i = arg max{g ∈ 2^B} ui(g), (3)\nwhere each g is a set of bids that covers all items and ∀b ∈ g: (b ∈ B) or (bagent = i and bvalue > B(bitems) and size(bitems) ≤ k), and where B(items) is the value of the bid in B for the set items (if there is no bid for those items it returns zero).\nThat is, each bid b in g must satisfy at least one of the two following conditions:\n1) b is already in B, 2) b is a bid of size ≤ k in which the
agent i bids higher than the price for the same items in B.
4. BIDDING ALGORITHMS
According to the rules of the PAUSE auction, during the first stage we have only a number of simultaneous English auctions, with the bidders submitting bids on individual items. In this case, an agent's dominant strategy is to bid higher than the current winning bid until it reaches its valuation for that particular item. Our algorithms focus on the subsequent stages: k > 1. When k > 1, agents have to find g*_i. This can be done by performing a complete search on B; however, this approach is computationally expensive since it produces a large search tree. Our algorithms represent alternative approaches to overcome this expensive search.
4.1 The PAUSEBID Algorithm
In the pausebid algorithm (shown in Figure 1) we implement some heuristics to prune the search tree. Given that bidders want to maximize their utility, and that at any given point there are likely only a few bids within B which
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07), page 695
pausebid(i, k)
1 my-bids ← ∅
2 their-bids ← ∅
3 for b ∈ B
4   do if b_agent = i or v_i(b_items) > b_value
5     then my-bids ← my-bids + new Bid(b_items, i, v_i(b_items))
6     else their-bids ← their-bids + b
7 for S ∈ subsets of k or fewer items such that v_i(S) > 0 and ¬∃ b ∈ B: b_items = S
8   do my-bids ← my-bids + new Bid(S, i, v_i(S))
9 bids ← my-bids + their-bids
10 g* ← ∅  # Global variable
11 u* ← u_i(W)  # Global variable
12 pbsearch(bids, ∅)
13 surplus ← Σ_{b ∈ g* | b_agent = i} b_value − B(b_items)
14 if surplus = 0
15   then return g*
16 my-payment ← v_i(g*) − u*
17 for b ∈ g* | b_agent = i
18   do if my-payment ≤ 0
19     then b_value ← B(b_items)
20     else b_value ← B(b_items) + my-payment · (b_value − B(b_items)) / surplus
21 return g*
Figure 1: The pausebid algorithm, which implements a branch
and bound search. i is the agent and k is the current stage of the auction, for k ≥ 2.
the agent can dominate, we start by defining my-bids to be the list of bids for which the agent's valuation is higher than the current best bid, as given in B. We set the value of these bids to the agent's true valuation (but the agent will not necessarily bid its true valuation, as we explain later). Similarly, we set their-bids to be the rest of the bids from B. Finally, the agent's search list is simply the concatenation of my-bids and their-bids. Note that the agent's own bids are placed first on the search list, as this will enable us to do more pruning (pausebid lines 3 to 9).
The agent can now perform a branch and bound search on the branch-on-bids tree produced by these bids. This branch and bound search is implemented by pbsearch (Figure 2). Our algorithm not only implements the standard bound but also other pruning techniques that further reduce the size of the search tree.
The bound we use is the maximum utility that the agent can expect to receive from a given set of bids; we call it u*. Initially, u* is set to u_i(W) (pausebid line 11), since that is the utility the agent currently receives and any solution it proposes should give it more utility. If pbsearch ever comes across a partial solution where the maximum utility the agent can expect to receive is less than u*, then that subtree is pruned (pbsearch line 21). Note that we can determine the maximum utility only after the algorithm has searched over all of the agent's own bids (which are first on the list), because after that point we know that the solution will not include any more bids where the agent is the winner, and thus the agent's utility will no longer increase. For example,
pbsearch(bids, g)
1 if bids = ∅ then return
2 b ← first(bids)
3 bids ← bids − b
4 g ← g + b
5 Ī_g ← items not in g
6 if g contains no bid from i and bids contains no bid from i
7
  then return
8 if g includes all items
9   then min-payment ← max(0, r(W) + ε − (r(g) − r_i(g)), Σ_{b ∈ g | b_agent = i} B(b_items))
10  max-utility ← v_i(g) − min-payment
11  if r(g) > r(W) and max-utility ≥ u*
12    then g* ← g
13    u* ← max-utility
14  pbsearch(bids, g − b)  # b is Out
15 else max-revenue ← r(g) + max(h(Ī_g), h_i(Ī_g))
16  if max-revenue ≤ r(W)
17    then pbsearch(bids, g − b)  # b is Out
18  elseif b_agent = i
19    then min-payment ← (r(W) + ε) − (r(g) − r_i(g)) − h(Ī_g)
20    max-utility ← v_i(g) − min-payment
21    if max-utility > u*
22      then pbsearch({x ∈ bids | x_items ∩ b_items = ∅}, g)  # b is In
23    pbsearch(bids, g − b)  # b is Out
24  else
25    pbsearch({x ∈ bids | x_items ∩ b_items = ∅}, g)  # b is In
26    pbsearch(bids, g − b)  # b is Out
27 return
Figure 2: The pbsearch recursive procedure, where bids is the set of available bids and g is the current partial solution.
if an agent has only one bid in my-bids, then the maximum utility it can expect is equal to its value for the items in that bid minus the minimum possible payment it can make for those items while still producing a set of bids whose revenue is greater than r(W). The calculation of the minimum payment is shown in line 19 for the partial-solution case and in line 9 for the case where we have a complete solution in pbsearch. Note that in order to calculate the min-payment for the partial-solution case we need an upper bound on the payments that we must make for each item. This upper bound is provided by
h(S) = Σ_{s ∈ S} max_{b ∈ B | s ∈ b_items} b_value / size(b_items). (4)
This function produces a bound identical to the one used by the Bidtree algorithm: it merely assigns to each individual item in S a value equal to the maximum bid in B divided by the number of items in that bid. To prune the branches that cannot lead to a solution with revenue greater than the
current W, the algorithm considers both the values of the bids in B and the valuations of the agent. Similarly to (4), we define
h_i(S, k) = Σ_{s ∈ S} max_{S′ | size(S′) ≤ k and s ∈ S′ and v_i(S′) > 0} v_i(S′) / size(S′), (5)
which assigns to each individual item s in S the maximum value produced by the valuation of S′ divided by the size of S′, where S′ is a set for which the agent has a valuation greater than zero, that contains s, and whose size is less than or equal to k. The algorithm uses the heuristics h and h_i (lines 15 and 19 of pbsearch) to prune these branches, in the same way an A* algorithm uses its heuristic. A final pruning technique implemented by the algorithm is to ignore any branch in which the agent has no bids in the current answer g and no more of the agent's bids remain in the list (pbsearch lines 6 and 7).
The resulting g* found by pbsearch is thus the set of bids that has revenue greater than r(W) and maximizes agent i's utility. However, agent i's bids in g* are still set to its own valuation and not to the lowest possible price. Lines 17 to 20 in pausebid are responsible for setting the agent's payments so that it achieves its maximum utility u*. If the agent has only one bid in g*, then it is simply a matter of reducing the payment of that bid by u* from the current maximum of the agent's true valuation. However, if the agent has more than one bid, then we face the problem of how to distribute the agent's payments among these bids. There are many ways of distributing the payments, and there does not appear to be a dominant strategy for performing this distribution. We have chosen to distribute the payments in proportion to the agent's true valuation for each set of items.
pausebid assumes that the set of best bids B and the current winning bidset W remain constant during its execution, and it returns the
agent's myopic utility-maximizing bidset (if there is one) using a branch and bound search. However, it repeats the whole search at every stage. We can mitigate this problem by caching the results of previous searches.
4.2 The CACHEDPAUSEBID Algorithm
The cachedpausebid algorithm (shown in Figure 3) is our second approach to solving the bidding problem in the PAUSE auction. It is based on a cache table, called C-Table, where we store some solutions in order to avoid doing a complete search every time. The problem is the same: agent i has to find g*_i. We note that g*_i is a bidset that contains at least one bid from agent i. Let S be a set of items for which agent i has a valuation such that v_i(S) ≥ B(S) > 0, and let g^S_i be a bidset over S such that r(g^S_i) ≥ r(W) + ε and
g^S_i = arg max_g u_i(g), (6)
where each g is a set of bids that covers all items and ∀ b ∈ g: (b ∈ B) or (b_agent = i and b_value > B(b_items)), and ∃ b ∈ g: b_items = S and b_agent = i. That is, g^S_i is i's best bidset over all the items that includes a bid from i for exactly the items in S. In the PAUSE auction we cannot bid for sets of items with size greater than k.
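As a concrete illustration of the quantities that drive both algorithms, the revenue r(W) of equation (1) and the utility u_i(W) of equation (2) can be sketched in a few lines of Python. This is only a sketch: the Bid record, the field names, and the example valuations below are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Dict, FrozenSet, List

@dataclass(frozen=True)
class Bid:
    items: FrozenSet[str]   # b_items: the set of items the bid covers
    agent: str              # b_agent: the agent that placed the bid
    value: float            # b_value: the price offered

def revenue(bidset: List[Bid]) -> float:
    """r(W): sum of b_value over all bids in the bidset (equation 1)."""
    return sum(b.value for b in bidset)

def utility(bidset: List[Bid], agent: str,
            valuation: Dict[FrozenSet[str], float]) -> float:
    """u_i(W): value of the items the agent wins minus the prices it pays
    for them (equation 2); zero if the agent wins nothing."""
    return sum(valuation[b.items] - b.value
               for b in bidset if b.agent == agent)

# Hypothetical example: agent "i" values {a} at 8 and {b} at 5.
v_i = {frozenset("a"): 8.0, frozenset("b"): 5.0}
W = [Bid(frozenset("a"), "i", 6.0), Bid(frozenset("b"), "j", 4.0)]
print(revenue(W))            # 10.0
print(utility(W, "i", v_i))  # 2.0  (= 8 - 6)
```

A bidset proposed by agent i is admissible only if its revenue exceeds r(W) by at least the increment ε; among admissible bidsets the agent picks the one maximizing utility, which is exactly the search that pausebid and cachedpausebid perform.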
So, if for each set of items S with v_i(S) > 0 and size(S) ≤ k we have its corresponding g^S_i, then g*_i is the g^S_i that maximizes the agent's utility. That is,
g*_i = arg max_{S | v_i(S) > 0 ∧ size(S) ≤ k} u_i(g^S_i). (7)
Each agent i implements a hash table C-Table such that C-Table[S] = g^S for all S with v_i(S) ≥ B(S) > 0. We can
cachedpausebid(i, k, k-changed)
1 for each S in C-Table
2   do if v_i(S) < B(S)
3     then remove S from C-Table
4   else if k-changed and size(S) = k
5     then B′ ← B′ + new Bid(S, i, v_i(S))
6 g* ← ∅
7 u* ← u_i(W)
8 for each S with size(S) ≤ k in C-Table
9   do S̄ ← Items − S
10  g^S ← C-Table[S]  # Global variable
11  min-payment ← max(r(W) + ε, Σ_{b ∈ g^S} B(b_items))
12  u^S ← r(g^S) − min-payment  # Global variable
13  if (k-changed and size(S) = k) or (∃ b ∈ B′: b_items ⊆ S̄ and b_agent ≠ i)
14    then B′′ ← {b ∈ B′ | b_items ⊆ S̄}
15    bids ← B′′ + {b ∈ B | b_items ⊆ S̄ and b ∉ B′′}
16    for b ∈ bids
17      do if v_i(b_items) > b_value
18        then b_agent ← i
19        b_value ← v_i(b_items)
20    if k-changed and size(S) = k
21      then n ← size(bids)
22      u^S ← 0
23      else n ← size(B′′)
24    g ← ∅ + new Bid(S, i, v_i(S))
25    cpbsearch(bids, g, n)
26    C-Table[S] ← g^S
27  if u^S > u* and r(g^S) ≥ r(W) + ε
28    then surplus ← Σ_{b ∈ g^S | b_agent = i} b_value − B(b_items)
29    if surplus > 0
30      then my-payment ← v_i(g^S) − u_i(g^S)
31      for b ∈ g^S | b_agent = i
32        do if my-payment ≤ 0
33          then b_value ← B(b_items)
34          else b_value ← B(b_items) + my-payment · (b_value − B(b_items)) / surplus
35    u* ← u_i(g^S)
36    g* ← g^S
37  else if u^S ≤ 0 and v_i(S) < B(S)
38    then remove S from C-Table
39 return g*
Figure 3: The cachedpausebid algorithm, which implements a caching-based search to find a bidset that maximizes the utility of agent i.
k is the current stage of the auction (for k ≥ 2), and k-changed is a boolean that is true right after the auction moves to the next stage.
cpbsearch(bids, g, n)
1 if bids = ∅ or n ≤ 0 then return
2 b ← first(bids)
3 bids ← bids − b
4 g ← g + b
5 Ī_g ← items not in g
6 if g includes all items
7   then min-payment ← max(0, r(W) + ε − (r(g) − r_i(g)), Σ_{b ∈ g | b_agent = i} B(b_items))
8   max-utility ← v_i(g) − min-payment
9   if r(g) > r(W) and max-utility ≥ u^S
10    then g^S ← g
11    u^S ← max-utility
12  cpbsearch(bids, g − b, n − 1)  # b is Out
13 else max-revenue ← r(g) + max(h(Ī_g), h_i(Ī_g))
14  if max-revenue ≤ r(W)
15    then cpbsearch(bids, g − b, n − 1)  # b is Out
16  elseif b_agent = i
17    then min-payment ← (r(W) + ε) − (r(g) − r_i(g)) − h(Ī_g)
18    max-utility ← v_i(g) − min-payment
19    if max-utility > u^S
20      then cpbsearch({x ∈ bids | x_items ∩ b_items = ∅}, g, n + 1)  # b is In
21    cpbsearch(bids, g − b, n − 1)  # b is Out
22  else
23    cpbsearch({x ∈ bids | x_items ∩ b_items = ∅}, g, n + 1)  # b is In
24    cpbsearch(bids, g − b, n − 1)  # b is Out
25 return
Figure 4: The cpbsearch recursive procedure, where bids is the set of available bids, g is the current partial solution, and n indicates how deep into the list bids the algorithm has to search.
then find g* by searching for the g^S, stored in C-Table[S], that maximizes the agent's utility, considering only the sets of items S with size(S) ≤ k. The problem remains of keeping the C-Table updated while avoiding a search over every g^S every time; cachedpausebid deals with this and other details. Let B′ be the set of bids that contains the new best bids; that is, B′ contains the bids recently added to B and the bids that have changed
price (always higher), bidder, or both, and were already in B. Let S̄ = Items − S be the complement of S (the set of items not included in S). cachedpausebid takes three parameters: i, the agent; k, the current stage of the auction; and k-changed, a boolean that is true right after the auction moves to the next stage.
Initially, C-Table has one row (entry) for each set S for which v_i(S) > 0. We start by eliminating from C-Table the entries corresponding to each set S for which v_i(S) < B(S) (line 3). Then, if k-changed is true, for each set S with size(S) = k we add to B′ a bid for that set with value equal to v_i(S) and bidder agent i (line 5); this is a bid that the agent is now allowed to consider. We then search for g* amongst the g^S stored in C-Table, for which we only need to consider the sets with size(S) ≤ k (line 8). But how do we know that the g^S in C-Table[S] is still the best solution for S? There are only two cases in which we cannot be sure of this and need to do a search to update C-Table[S]: i) when k-changed is true and size(S) = k, since no g^S was yet stored in C-Table for this S; ii) when there exists at least one bid in B′ for the set of items S̄, or a subset of it, submitted by an agent other than i, since it is possible that this new bid produces a solution better than the one stored in C-Table[S]. We handle these two cases in lines 13 to 26 of cachedpausebid. In both cases, since g^S must contain a bid for S, we need to find a bidset that covers the missing items, that is, S̄.
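The two staleness conditions above can be captured in a small predicate. The sketch below is only illustrative (the Bid record, the function name, and the field names are hypothetical); B′ is passed in as the list of new or changed best bids:

```python
from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass(frozen=True)
class Bid:
    items: FrozenSet[str]
    agent: str
    value: float

def must_research(S: FrozenSet[str], all_items: FrozenSet[str],
                  k: int, k_changed: bool,
                  b_prime: List[Bid], me: str) -> bool:
    """True when C-Table[S] may be stale and cpbsearch must be rerun:
    (i) the stage just advanced and size(S) = k (no g^S stored yet), or
    (ii) another agent placed a new or changed best bid on items inside
         the complement S-bar = Items - S."""
    s_bar = all_items - S
    case_i = k_changed and len(S) == k
    case_ii = any(b.items <= s_bar and b.agent != me for b in b_prime)
    return case_i or case_ii

# A new best bid by agent "j" on {c}, inside the complement of S = {a, b},
# forces a re-search of C-Table[S].
items = frozenset("abc")
S = frozenset("ab")
new_bids = [Bid(frozenset("c"), "j", 3.0)]
print(must_research(S, items, k=2, k_changed=False,
                    b_prime=new_bids, me="i"))  # True
```

When the predicate is false, the cached g^S is reused directly, which is exactly the saving that distinguishes cachedpausebid from pausebid.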
Thus, our search space consists of all the bids in B for the set of items S̄ or for subsets of it. We build the list bids containing only those bids; however, we put the bids from B′′ at the beginning of bids (line 14), since they are the ones that have changed. Then we replace each bid in bids whose price is lower than the valuation that agent i has for those same items with a bid from agent i for those items, with value equal to the agent's valuation (lines 16-19).
The recursive procedure cpbsearch, called in line 25 of cachedpausebid and shown in Figure 4, is the one that finds the new g^S. cpbsearch is a slightly modified version of the branch and bound search implemented in pbsearch. The first modification is that it has a third parameter, n, indicating how deep into the list bids we want to search: it stops searching when n is less than or equal to zero, not only when the list bids is empty (line 1). On each recursive call of cpbsearch, n is decreased by one when a bid from bids is discarded, i.e., left out (lines 12, 15, 21, and 24), and remains effectively the same otherwise (lines 20 and 23). We set the value of n before calling cpbsearch: in case i) we set it to the size of the list bids (cachedpausebid line 21), since we want cpbsearch to search over all bids; in case ii) we set n to the number of bids from B′′ included in bids (cachedpausebid line 23), since we know that only those first n bids in bids have changed and can affect the current g^S. Another difference with respect to pbsearch is the bound: in cpbsearch the bound is u^S, which we set to 0 in case i) (cachedpausebid line 22) and to r(g^S) − min-payment in case ii) (cachedpausebid line 12). We call cpbsearch with g already containing a bid for S. After cpbsearch has executed we are sure that we have the right g^S, so we store it in the corresponding entry C-Table[S] (cachedpausebid line 26).
When we reach line 27 in cachedpausebid we are sure that we have the right g^S. However, agent
i's bids in g^S are still set to its own valuation and not to the lowest possible price. If u^S is greater than the current u*, lines 31 to 34 of cachedpausebid are responsible for setting the agent's payments so that it achieves its maximum utility u^S. As in pausebid, we have chosen to distribute the payments in proportion to the agent's true valuation for each set of items. In the case that u^S is less than or equal to zero and the valuation that agent i has for the set of items S is lower than the current value of the bid in B for those same items, we remove the corresponding entry C-Table[S], since we know it is not worthwhile to keep it in the cache table (cachedpausebid line 38).
The cachedpausebid function is called when k > 1 and returns the agent's myopic utility-maximizing bidset, if there is one. It assumes that W and B remain constant during its execution.
generatevalues(i, items)
1 for x ∈ items
2   do v_i(x) = expd(.01)
3 for n ← 1 ...
(num-bids − items)
4   do s1, s2 ← two random sets of items that already have values
5   v_i(s1 ∪ s2) = v_i(s1) + v_i(s2) + expd(.01)
Figure 5: Algorithm for the generation of random value functions. expd(x) returns a random number taken from an exponential distribution with mean 1/x.
Figure 6: Average percentage of convergence (y-axis), that is, the percentage of times that our algorithms converge to the revenue-maximizing solution, as a function of the number of items in the auction. [plot comparing CachedPauseBid and PauseBid]
5. TEST AND COMPARISON
We have implemented both algorithms and performed a series of experiments in order to determine how their solutions compare to the revenue-maximizing solution and how their running times compare with each other. In order to run our tests we had to generate value functions for the agents¹. The algorithm we used is shown in Figure 5. The type of valuations it generates corresponds to domains where a set of agents must perform a set of tasks, but there are cost savings for particular agents if they can bundle together certain subsets of tasks. For example, imagine a set of robots that must pick up and deliver items to different locations. Since each robot is at a different location and has different abilities, each one will have different preferences over how to bundle. Their costs for the item bundles are subadditive, which means that their preferences are superadditive.
The first experiment we performed simply ensured the proper functioning of our algorithms.
¹Note that we could not use CATS [6] because it generates sets of bids for an indeterminate number of agents. It is as if you were told the set of bids placed in a combinatorial auction, but not who placed each bid or even how many people placed bids, and were then asked to determine the value function of every participant in the auction.
Figure 7: Average percentage of revenue from our algorithms relative to the maximum revenue (y-axis), as a function of the number of items in the auction. [plot comparing CachedPauseBid and PauseBid]
We then compared the solutions found by both of them to the revenue-maximizing solution found by CASS when given a set of bids corresponding to the agents' true valuations. That is, for each agent i and each set of items S for which v_i(S) > 0, we generated a bid. This set of bids was fed to CASS, which implements a centralized winner determination algorithm, to find the solution that maximizes revenue. Note, however, that the revenue from the PAUSE auction is always smaller than the revenue of the revenue-maximizing solution over the agents' true valuations: since PAUSE uses English auctions, the final prices (roughly) represent the second-highest valuation, plus ε, for each set of items.
We fixed the number of agents at 5 and experimented with different numbers of items, namely from 2 to 10, running both algorithms 100 times for each combination. When we compared the solutions of our algorithms to the revenue-maximizing solution, we realized that they do not always find the same distribution of items as the revenue-maximizing solution (as shown in Figure 6). The cases where our algorithms failed to arrive at the distribution of the revenue-maximizing solution are those where there was a large gap between the first and second valuations for a set (or sets) of items. If the revenue-maximizing solution contains the bid (or bids) using these higher valuations, then it is impossible for the PAUSE auction to find this solution, because that bid (or those bids) is never placed. For example, if agent i has v_i(1) = 1000 and the second-highest valuation for item 1 is only 10, then i only needs to place a bid of 11 in order to win that item. If the revenue-maximizing solution requires that item 1 be sold for 1000, then that solution will
never be found, because that bid will never be placed.
We also found that the average percentage of times that our algorithms converge to the revenue-maximizing solution decreases as the number of items increases. For 2 items it is almost 100%, and it decreases by a little less than 1 percent per additional item, so that the average percentage of convergence is around 90% for 10 items. In a few instances our two algorithms find different solutions; this is due to the different ordering of the bids in the bids list, which makes them search in a different order.
We know that the revenue generated by the PAUSE auction is generally lower than the revenue of the revenue-maximizing solution, but how much lower? To answer this question we calculated the percentage of the revenue given by our algorithms relative to the revenue given by CASS. We found that the percentage of revenue of our algorithms increases by 2.7% on average with each additional item, as shown in Figure 7. However, we found that cachedpausebid generates a higher revenue than pausebid (4.3% higher on average), except for auctions with 2 items, where both have about the same percentage. Again, this difference is produced by the order of the search. In the case of 2 items both algorithms produce on average a revenue proportion of 67.4%, while at the other extreme (10 items) cachedpausebid produces on average a revenue proportion of 91.5% and pausebid one of 87.7%.
Figure 8: Average number of expanded nodes (y-axis) as a function of the number of items in the auction. [log-scale plot comparing CachedPauseBid and PauseBid]
The scalability of our algorithms can be determined by counting the number of nodes expanded in the search tree. For this we count the number of times that pbsearch gets invoked for each time
that pausebid is called, and the number of times that cpbsearch gets invoked for each time that cachedpausebid is called, respectively. As expected for an NP-Hard problem, the number of expanded nodes does grow exponentially with the number of items (as shown in Figure 8). However, we found that cachedpausebid outperforms pausebid, since it expands on average less than half the number of nodes. For example, the average number of nodes expanded for 2 items is zero for cachedpausebid while for pausebid it is 2; at the other extreme (10 items), cachedpausebid expands on average only 633 nodes while pausebid expands on average 1672 nodes, a difference of more than 1000 nodes. Although the number of nodes expanded by our algorithms increases as a function of the number of items, the actual number of nodes is much smaller than the worst-case scenario of n^n, where n is the number of items. For example, for 10 items we expand slightly more than 10^3 nodes in the case of pausebid, and fewer than that in the case of cachedpausebid, which are much smaller numbers than 10^10.
Figure 9: Average time in seconds taken to finish an auction (y-axis) as a function of the number of items in the auction. [log-scale plot comparing CachedPauseBid and PauseBid]
Notice also that our value-generation algorithm (Figure 5) generates a number of bids that is exponential in the number of items, as might be expected in many situations. As such, these results do not support the conclusion that time grows exponentially with the number of items when the number of bids is independent of the number of items. We expect that both algorithms will grow exponentially as a function of the number of bids, but stay roughly constant as the number of items grows.
We wanted to make sure that fewer expanded nodes does indeed correspond to faster execution, especially since our algorithms execute different
operations. We thus ran the same experiment with all the agents on the same machine, an Intel Centrino 2.0 GHz laptop PC with 1 GB of RAM and a 7200 RPM 60 GB hard drive, and calculated the average time taken to finish an auction with each algorithm. As shown in Figure 9, cachedpausebid is faster than pausebid, and the difference in execution speed becomes even clearer as the number of items increases.
6. RELATED WORK
A lot of research has been done on various aspects of combinatorial auctions; we recommend [2] for a good review. However, the study of distributed winner determination algorithms for combinatorial auctions is still relatively new. One approach is given by the algorithms for distributing the winner determination problem in combinatorial auctions presented in [7], but these algorithms assume the computational entities are the items being sold and thus end up with a different type of distribution. The VSA algorithm [3] is another way of performing distributed winner determination in combinatorial auctions, but it assumes the bids themselves perform the computation; this algorithm also fails to converge to a solution in most cases. In [9] the authors present a distributed mechanism for calculating VCG payments in a mechanism design problem. Their mechanism roughly amounts to having each agent calculate the payments for two other agents and give these to a secure central server, which then checks to make sure the results from all pairs agree; otherwise a re-calculation is ordered. This general idea, which they call the redundancy principle, could also be applied to our problem, but it requires the existence of a secure center agent that everyone trusts. Another interesting approach is given in [8], where the bidding agents prioritize their bids, thus reducing the set of bids that the centralized winner determination algorithm must consider, making that problem
easier. Finally, in the computation-procuring clock auction [1] the agents are given an ever-increasing percentage of the surplus achieved by their proposed solution over the current best. As such, it assumes the agents are impartial computational entities, not the set of possible buyers, as assumed by the PAUSE auction.
7. CONCLUSIONS
We believe that distributed solutions to the winner determination problem should be studied, as they offer a better fit for some applications, for example when agents do not want to reveal their valuations to the auctioneer or when we wish to distribute the computational load among the bidders. The PAUSE auction is one of a few approaches to decentralizing the winner determination problem in combinatorial auctions. With this auction we can even envision completely eliminating the auctioneer and, instead, having every agent perform the task of the auctioneer. However, while PAUSE establishes the rules the bidders must obey, it does not tell us how the bidders should calculate their bids.
We have presented two algorithms, pausebid and cachedpausebid, that bidder agents can use to engage in a PAUSE auction. Both algorithms implement a myopic utility-maximizing strategy that is guaranteed to find the bidset maximizing the agent's utility given the set of outstanding best bids at any given time, without considering possible future bids. Both algorithms find, most of the time, the same distribution of items as the revenue-maximizing solution. The cases where our algorithms failed to arrive at that distribution are those where there was a large gap between the first and second valuations for a set (or sets) of items. Since this is an NP-Hard problem, the running time of our algorithms remains exponential, but it is significantly better than a full search. pausebid performs a branch and bound search completely from scratch each time it is invoked. cachedpausebid caches partial solutions and performs a branch and bound search only
on the few portions affected by the changes in the bids between consecutive invocations. cachedpausebid has better performance, since it explores fewer nodes (less than half) and is faster. As expected, the revenue generated by a PAUSE auction is lower than the revenue of a revenue-maximizing solution found by a centralized winner determination algorithm; however, we found that cachedpausebid generates on average 4.7% higher revenue than pausebid. We also found that the revenue generated by our algorithms increases as a function of the number of items in the auction.
Our algorithms have shown that it is feasible to implement the complex coordination constraints supported by combinatorial auctions without having to resort to a centralized winner determination algorithm. Moreover, because of the design of the PAUSE auction, the agents in the auction also have an incentive to perform the required computation. Our bidding algorithms can be used by any multiagent system that would use combinatorial auctions for coordination but would rather not implement a centralized auctioneer.
8. REFERENCES
[1] P. J. Brewer. Decentralized computation procurement and computational robustness in a smart market. Economic Theory, 13(1):41-92, January 1999.
[2] P. Cramton, Y. Shoham, and R. Steinberg, editors. Combinatorial Auctions. MIT Press, 2006.
[3] Y. Fujishima, K. Leyton-Brown, and Y. Shoham. Taming the computational complexity of combinatorial auctions: Optimal and approximate approaches. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, pages 548-553. Morgan Kaufmann Publishers Inc., 1999.
[4] F. Kelly and R. Steinberg. A combinatorial auction with multiple winners for universal service. Management Science, 46(4):586-596, 2000.
[5] A. Land, S. Powell, and R. Steinberg. PAUSE: A computationally tractable combinatorial auction. In Cramton et al. [2], chapter 6, pages 139-157.
[6] K. Leyton-Brown, M. Pearson, and Y.
Shoham. Towards a universal test suite for combinatorial auction algorithms. In Proceedings of the 2nd ACM Conference on Electronic Commerce, pages 66-76. ACM Press, 2000. http://cats.stanford.edu.
[7] M. V. Narumanchi and J. M. Vidal. Algorithms for distributed winner determination in combinatorial auctions. In LNAI volume of AMEC/TADA. Springer, 2006.
[8] S. Park and M. H. Rothkopf. Auctions with endogenously determined allowable combinations. Technical report, Rutgers Center for Operations Research, January 2001. RRR 3-2001.
[9] D. C. Parkes and J. Shneidman. Distributed implementations of Vickrey-Clarke-Groves auctions. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 261-268. ACM, 2004.
[10] M. H. Rothkopf, A. Pekec, and R. M. Harstad. Computationally manageable combinational auctions. Management Science, 44(8):1131-1147, 1998.
[11] T. Sandholm. Algorithm for optimal winner determination in combinatorial auctions. Artificial Intelligence, 135(1-2):1-54, February 2002.
[12] T. Sandholm, S. Suri, A. Gilpin, and D.
Levine.\nCABOB: A fast optimal algorithm for winner determination in combinatorial auctions.\nManagement Science, 51(3):374-391, 2005.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 701","lvl-3":"Bidding Algorithms for a Distributed Combinatorial Auction\nABSTRACT\nDistributed allocation and multiagent coordination problems can be solved through combinatorial auctions.\nHowever, most of the existing winner determination algorithms for combinatorial auctions are centralized.\nThe PAUSE auction is one of a few efforts to release the auctioneer from having to do all the work (it might even be possible to get rid of the auctioneer).\nIt is an increasing price combinatorial auction that naturally distributes the problem of winner determination amongst the bidders in such a way that they have an incentive to perform the calculation.\nIt can be used when we wish to distribute the computational load among the bidders or when the bidders do not wish to reveal their true valuations unless necessary.\nPAUSE establishes the rules the bidders must obey.\nHowever, it does not tell us how the bidders should calculate their bids.\nWe have developed a couple of bidding algorithms for the bidders in a PAUSE auction.\nOur algorithms always return the set of bids that maximizes the bidder's utility.\nSince the problem is NP-Hard, run time remains exponential in the number of items, but it is remarkably better than an exhaustive search.\nIn this paper we present our bidding algorithms, discuss their virtues and drawbacks, and compare the solutions obtained by them to the revenue-maximizing solution found by a centralized winner determination algorithm.\n1.\nINTRODUCTION\nBoth the research and practice of combinatorial auctions have grown rapidly in the past ten years.\nIn a combinatorial auction bidders can place bids on combinations of items, called packages or bidsets, rather than just individual items.\nOnce the bidders place their bids, it
is necessary to find the allocation of items to bidders that maximizes the auctioneer's revenue.\nThis problem, known as the winner determination problem, is a combinatorial optimization problem and is NP-Hard [10].\nNevertheless, several algorithms that have a satisfactory performance for problem sizes and structures occurring in practice have been developed.\nThe practical applications of combinatorial auctions include: allocation of airport takeoff and landing time slots, procurement of freight transportation services, procurement of public transport services, and industrial procurement [2].\nBecause of their wide applicability, one cannot hope for a general-purpose winner determination algorithm that can efficiently solve every instance of the problem.\nThus, several approaches and algorithms have been proposed to address the winner determination problem.\nHowever, most of the existing winner determination algorithms for combinatorial auctions are centralized, meaning that they require all agents to send their bids to a centralized auctioneer who then determines the winners.\nExamples of these algorithms are CASS [3], Bidtree [11] and CABOB [12].\nWe believe that distributed solutions to the winner determination problem should be studied as they offer a better fit for some applications as when, for example, agents do not want to reveal their valuations to the auctioneer.\nThe PAUSE (Progressive Adaptive User Selection Environment) auction [4, 5] is one of a few efforts to distribute the problem of winner determination amongst the bidders.\nPAUSE establishes the rules the participants have to adhere to so that the work is distributed amongst them.\nHowever, it is not concerned with how the bidders determine what they should bid.\nIn this paper we present two algorithms, PAUSEBID and CACHEDPAUSEBID, which enable agents in a PAUSE auction to find the bidset that maximizes their utility.\nOur algorithms implement a myopic utility maximizing strategy and are 
guaranteed to find the bidset that maximizes the agent's utility given the outstanding best bids at a given time.\nPAUSEBID performs a branch and bound search completely from scratch every time that it is called.\nCACHEDPAUSEBID is a caching-based algorithm which explores fewer nodes, since it caches some solutions.\n2.\nTHE PAUSE AUCTION\n3.\nPROBLEM FORMULATION\n4.\nBIDDING ALGORITHMS\n4.1 The PAUSEBID Algorithm\n4.2 The CACHEDPAUSEBID Algorithm\n5.\nTEST AND COMPARISON\n6.\nRELATED WORK\nA lot of research has been done on various aspects of combinatorial auctions.\nWe recommend [2] for a good review.\nHowever, the study of distributed winner determination algorithms for combinatorial auctions is still relatively new.\nOne approach is given by the algorithms for distributing the winner determination problem in combinatorial auctions presented in [7], but these algorithms assume the computational entities are the items being sold and thus end up with a different type of distribution.\nThe VSA algorithm [3] is another way of performing distributed winner determination in combinatorial auctions, but it assumes the bids themselves perform the computation.\nThis algorithm also fails to converge to a solution for most cases.\nIn [9] the authors present a distributed mechanism for calculating VCG payments in a mechanism design problem.\nTheir mechanism roughly amounts to having each agent calculate the payments for two other agents and give these to a secure central server which then checks to make sure results from all pairs agree; otherwise a re-calculation is ordered.\nThis general idea, which they call the redundancy principle, could also be applied to our problem but it requires the existence of a secure central agent that everyone trusts.\nAnother interesting approach is given in [8] where the bidding agents prioritize their bids, thus reducing the set of bids that the centralized winner determination algorithm must consider, making that problem easier.\nFinally, in the computation-procuring clock auction [1] the agents are given an ever-increasing percentage of the surplus achieved by their proposed solution over the current best.\nAs such, it assumes the agents are impartial computational entities, not the set of possible buyers as assumed by the PAUSE auction.\n7.\nCONCLUSIONS\nWe believe that distributed solutions to the winner determination problem should be studied as they offer a better fit for some applications, for example when agents do not want to reveal their valuations to the auctioneer or when we wish to distribute the computational load among the bidders.\nThe PAUSE auction is one of a few approaches to decentralize the winner determination problem in combinatorial auctions.\nWith this auction, we can even envision completely eliminating the auctioneer and, instead, have every agent perform the task of the auctioneer.\nHowever, while PAUSE establishes the rules the bidders must obey, it does not tell us how the bidders should calculate their bids.\nWe have presented two algorithms, PAUSEBID and CACHEDPAUSEBID, that bidder agents can use to engage in a PAUSE auction.\nBoth algorithms implement a myopic utility-maximizing strategy that is guaranteed to find the bidset that maximizes the agent's utility given the set of outstanding best bids at any given time, without considering possible future bids.\nBoth algorithms find,
most of the time, the same distribution of items as the revenue-maximizing solution.\nThe cases where our algorithms failed to arrive at that distribution are those where there was a large gap between the first and second valuation for a set (or sets) of items.\nAs it is an NP-Hard problem, the running time of our algorithms remains exponential but it is significantly better than a full search.\nPAUSEBID performs a branch and bound search completely from scratch each time it is invoked.\nCACHEDPAUSEBID caches partial solutions and performs a branch and bound search only on the few portions affected by the changes to the bids between consecutive calls.\nCACHEDPAUSEBID has better performance since it explores fewer nodes (less than half) and it is faster.\nAs expected, the revenue generated by a PAUSE auction is lower than the revenue of a revenue-maximizing solution found by a centralized winner determination algorithm; however, we found that CACHEDPAUSEBID generates on average 4.7% higher revenue than PAUSEBID.\nWe also found that the revenue generated by our algorithms increases as a function of the number of items in the auction.\nOur algorithms have shown that it is feasible to implement the complex coordination constraints supported by combinatorial auctions without having to resort to a centralized winner determination algorithm.\nMoreover, because of the design of the PAUSE auction, the agents in the auction also have an incentive to perform the required computation.\nOur bidding algorithms can be used by any multiagent system that would use combinatorial auctions for coordination but would rather not implement a centralized auctioneer.","lvl-4":"Bidding Algorithms for a Distributed Combinatorial Auction\nABSTRACT\nDistributed allocation and multiagent coordination problems can be solved through combinatorial auctions.\nHowever, most of the existing winner determination algorithms for combinatorial auctions are centralized.\nThe PAUSE auction is one of a few
efforts to release the auctioneer from having to do all the work (it might even be possible to get rid of the auctioneer).\nIt is an increasing price combinatorial auction that naturally distributes the problem of winner determination amongst the bidders in such a way that they have an incentive to perform the calculation.\nIt can be used when we wish to distribute the computational load among the bidders or when the bidders do not wish to reveal their true valuations unless necessary.\nPAUSE establishes the rules the bidders must obey.\nHowever, it does not tell us how the bidders should calculate their bids.\nWe have developed a couple of bidding algorithms for the bidders in a PAUSE auction.\nOur algorithms always return the set of bids that maximizes the bidder's utility.\nSince the problem is NP-Hard, run time remains exponential on the number of items, but it is remarkably better than an exhaustive search.\nIn this paper we present our bidding algorithms, discuss their virtues and drawbacks, and compare the solutions obtained by them to the revenue-maximizing solution found by a centralized winner determination algorithm.\n1.\nINTRODUCTION\nBoth the research and practice of combinatorial auctions have grown rapidly in the past ten years.\nIn a combinatorial auction bidders can place bids on combinations of items, called packages or bidsets, rather than just individual items.\nOnce the bidders place their bids, it is necessary to find the allocation of items to bidders that maximizes the auctioneer's revenue.\nThis problem, known as the winner determination problem, is a combinatorial optimization problem and is NP-Hard [10].\nNevertheless, several algorithms that have a satisfactory performance for problem sizes and structures occurring in practice have been developed.\nBecause of their wide applicability, one cannot hope for a general-purpose winner determination algorithm that can efficiently solve every instance of the problem.\nThus, several approaches 
and algorithms have been proposed to address the winner determination problem.\nHowever, most of the existing winner determination algorithms for combinatorial auctions are centralized, meaning that they require all agents to send their bids to a centralized auctioneer who then determines the winners.\nExamples of these algorithms are CASS [3], Bidtree [11] and CABOB [12].\nWe believe that distributed solutions to the winner determination problem should be studied as they offer a better fit for some applications as when, for example, agents do not want to reveal their valuations to the auctioneer.\nThe PAUSE (Progressive Adaptive User Selection Environment) auction [4, 5] is one of a few efforts to distribute the problem of winner determination amongst the bidders.\nPAUSE establishes the rules the participants have to adhere to so that the work is distributed amongst them.\nHowever, it is not concerned with how the bidders determine what they should bid.\nIn this paper we present two algorithms, PAUSEBID and CACHEDPAUSEBID, which enable agents in a PAUSE auction to find the bidset that maximizes their utility.\nOur algorithms implement a myopic utility maximizing strategy and are guaranteed to find the bidset that maximizes the agent's utility given the outstanding best bids at a given time.\nPAUSEBID performs a branch and bound search completely from scratch every time that it is called.\nCACHEDPAUSEBID is a caching-based algorithm which explores fewer nodes, since it caches some solutions.\n6.\nRELATED WORK\nA lot of research has been done on various aspects of combinatorial auctions.\nHowever, the study of distributed winner determination algorithms for combinatorial auctions is still relatively new.\nOne approach is given by the algorithms for distributing the winner determination problem in combinatorial auctions presented in [7], but these algorithms assume the computational entities are the items being sold and thus end up with a different type of 
distribution.\nThe VSA algorithm [3] is another way of performing distributed winner determination in combinatorial auctions, but it assumes the bids themselves perform the computation.\nThis algorithm also fails to converge to a solution for most cases.\nIn [9] the authors present a distributed mechanism for calculating VCG payments in a mechanism design problem.\nTheir mechanism roughly amounts to having each agent calculate the payments for two other agents and give these to a secure central server for verification.\nAnother interesting approach is given in [8] where the bidding agents prioritize their bids, thus reducing the set of bids that the centralized winner determination algorithm must consider, making that problem easier.\nFinally, in the computation-procuring clock auction [1] the agents are given an ever-increasing percentage of the surplus achieved by their proposed solution over the current best.\nAs such, it assumes the agents are impartial computational entities, not the set of possible buyers as assumed by the PAUSE auction.\n7.\nCONCLUSIONS\nThe PAUSE auction is one of a few approaches to decentralize the winner determination problem in combinatorial auctions.\nWith this auction, we can even envision completely eliminating the auctioneer and, instead, have every agent perform the task of the auctioneer.\nHowever, while PAUSE establishes the rules the bidders must obey, it does not tell us how the bidders should calculate their bids.\nWe have presented two algorithms, PAUSEBID and CACHEDPAUSEBID, that bidder agents can use to engage in a PAUSE auction.\nBoth algorithms implement a myopic utility-maximizing strategy that is guaranteed to find the bidset that maximizes the agent's utility given the set of outstanding best bids at any given time, without considering possible future bids.\nBoth algorithms find, most of the time, the same distribution of items as the revenue-maximizing solution.\nAs it is an NP-Hard problem, the running time of our
algorithms remains exponential but it is significantly better than a full search.\nPAUSEBID performs a branch and bound search completely from scratch each time it is invoked.\nCACHEDPAUSEBID caches partial solutions and performs a branch and bound search only on the few portions affected by the changes to the bids between consecutive calls.\nAs expected, the revenue generated by a PAUSE auction is lower than the revenue of a revenue-maximizing solution found by a centralized winner determination algorithm; however, we found that CACHEDPAUSEBID generates on average 4.7% higher revenue than PAUSEBID.\nWe also found that the revenue generated by our algorithms increases as a function of the number of items in the auction.\nOur algorithms have shown that it is feasible to implement the complex coordination constraints supported by combinatorial auctions without having to resort to a centralized winner determination algorithm.\nMoreover, because of the design of the PAUSE auction, the agents in the auction also have an incentive to perform the required computation.\nOur bidding algorithms can be used by any multiagent system that would use combinatorial auctions for coordination but would rather not implement a centralized auctioneer.","lvl-2":"Bidding Algorithms for a Distributed Combinatorial Auction\nABSTRACT\nDistributed allocation and multiagent coordination problems can be solved through combinatorial auctions.\nHowever, most of the existing winner determination algorithms for combinatorial auctions are centralized.\nThe PAUSE auction is one of a few efforts to release the auctioneer from having to do all the work (it might even be possible to get rid of the auctioneer).\nIt is an increasing price combinatorial auction that naturally distributes the problem of winner determination amongst the bidders in such a way that they have an incentive to perform the calculation.\nIt can be used when we wish to distribute the computational load among the bidders or when the
bidders do not wish to reveal their true valuations unless necessary.\nPAUSE establishes the rules the bidders must obey.\nHowever, it does not tell us how the bidders should calculate their bids.\nWe have developed a couple of bidding algorithms for the bidders in a PAUSE auction.\nOur algorithms always return the set of bids that maximizes the bidder's utility.\nSince the problem is NP-Hard, run time remains exponential on the number of items, but it is remarkably better than an exhaustive search.\nIn this paper we present our bidding algorithms, discuss their virtues and drawbacks, and compare the solutions obtained by them to the revenue-maximizing solution found by a centralized winner determination algorithm.\n1.\nINTRODUCTION\nBoth the research and practice of combinatorial auctions have grown rapidly in the past ten years.\nIn a combinatorial auction bidders can place bids on combinations of items, called packages or bidsets, rather than just individual items.\nOnce the bidders place their bids, it is necessary to find the allocation of items to bidders that maximizes the auctioneer's revenue.\nThis problem, known as the winner determination problem, is a combinatorial optimization problem and is NP-Hard [10].\nNevertheless, several algorithms that have a satisfactory performance for problem sizes and structures occurring in practice have been developed.\nThe practical applications of combinatorial auctions include: allocation of airport takeoff and landing time slots, procurement of freight transportation services, procurement of public transport services, and industrial procurement [2].\nBecause of their wide applicability, one cannot hope for a general-purpose winner determination algorithm that can efficiently solve every instance of the problem.\nThus, several approaches and algorithms have been proposed to address the winner determination problem.\nHowever, most of the existing winner determination algorithms for combinatorial auctions are 
centralized, meaning that they require all agents to send their bids to a centralized auctioneer who then determines the winners.\nExamples of these algorithms are CASS [3], Bidtree [11] and CABOB [12].\nWe believe that distributed solutions to the winner determination problem should be studied as they offer a better fit for some applications, for example when agents do not want to reveal their valuations to the auctioneer.\nThe PAUSE (Progressive Adaptive User Selection Environment) auction [4, 5] is one of a few efforts to distribute the problem of winner determination amongst the bidders.\nPAUSE establishes the rules the participants have to adhere to so that the work is distributed amongst them.\nHowever, it is not concerned with how the bidders determine what they should bid.\nIn this paper we present two algorithms, PAUSEBID and CACHEDPAUSEBID, which enable agents in a PAUSE auction to find the bidset that maximizes their utility.\nOur algorithms implement a myopic utility-maximizing strategy and are guaranteed to find the bidset that maximizes the agent's utility given the outstanding best bids at a given time.\nPAUSEBID performs a branch and bound search completely from scratch every time that it is called.\nCACHEDPAUSEBID is a caching-based algorithm which explores fewer nodes, since it caches some solutions.\n2.\nTHE PAUSE AUCTION\nA PAUSE auction for m items has m stages.\nStage 1 consists of simultaneous ascending-price open-cry auctions, and during this stage the bidders can only place bids on individual items.\nAt the end of this stage we will know what the highest bid for each individual item is and who placed that bid.\nEach successive stage k = 2, 3,..., m consists of an ascending-price auction where the bidders must submit bidsets that cover all items, but each one of the bids must be for k items or fewer.\nThe bidders are allowed to use bids that other agents have placed in previous rounds when building their bidsets, thus allowing them
to find better solutions.\nAlso, any new bidset has to have a sum of bid prices which is bigger than that of the currently winning bidset.\nAt the end of each stage k all agents know the best bid for every subset of size k or less.\nAlso, at any point in time after stage 1 has ended there is a standing bidset whose value increases monotonically as new bidsets are submitted.\nSince in the final round all agents consider all possible bidsets, we know that the final winning bidset will be one such that no agent can propose a better bidset.\nNote, however, that this bidset is not guaranteed to be the one that maximizes revenue since we are using an ascending price auction so the winning bid for each set will be only slightly bigger than the second highest bid for the particular set of items.\nThat is, the final prices will not be the same as the prices in a traditional combinatorial auction where all the bidders bid their true valuation.\nHowever, there remains the open question of whether the final distribution of items to bidders found in a PAUSE auction is the same as the revenue maximizing solution.\nOur test results provide an answer to this question.\nThe PAUSE auction makes the job of the auctioneer very easy.\nAll it has to do is to make sure that each new bidset has a revenue bigger than the current winning bidset, as well as make sure that every bid in an agent's bidset that is not his does indeed correspond to some other agents' previous bid.\nThe computational problem shifts from one of winner determination to one of bid generation.\nEach agent must search over the space of all bidsets which contain at least one of its bids.\nThe search is made easier by the fact that the agent needs to consider only the current best bids and only wants bidsets where its own utility is higher than in the current winning bidset.\nEach agent also has a clear incentive for performing this computation, namely, its utility only increases with each bidset it proposes (of course, 
it might decrease with the bidsets that others propose).\nFinally, the PAUSE auction has been shown to be envy-free in that at the conclusion of the auction no bidder would prefer to exchange his allocation with that of any other bidder [2].\nWe can even envision completely eliminating the auctioneer and, instead, have every agent perform the task of the auctioneer.\nThat is, all bids are broadcast and when an agent receives a bid from another agent it updates the set of best bids and determines if the new bid is indeed better than the current winning bid.\nThe agents would have an incentive to perform their computation as it will increase their expected utility.\nAlso, any lies about other agents' bids are easily found out by keeping track of the bids sent out by every agent (the set of best bids).\nNamely, the only one that can increase an agent's bid value is the agent itself.\nAnyone claiming a higher value for some other agent is lying.\nThe only thing missing is an algorithm that calculates the utility-maximizing bidset for each agent.\n3.\nPROBLEM FORMULATION\nA bid b is composed of three elements: b_items (the set of items the bid is over), b_agent (the agent that placed the bid), and b_value (the value or price of the bid).\nThe agents maintain a set B of the current best bids, one for each set of items of size ≤ k, where k is the current stage.\nAt any point in the auction, after the first round, there will also be a set W ⊆ B of currently winning bids.\nThis is the set of bids that covers all the items and currently maximizes the revenue, where the revenue of W is given by r(W) = Σ_{b ∈ W} b_value.\nAgent i's value function is given by v_i(S) ∈ ℝ where S is a set of items.\nGiven an agent's value function and the current winning bidset W we can calculate the agent's utility from W as u_i(W) = Σ_{b ∈ W : b_agent = i} (v_i(b_items) - b_value).\nThat is, the agent's utility for a bidset W is the value it receives for the items it wins in W minus the price it must pay for those items.\nIf the agent is not winning any items then its utility is
zero.\nThe goal of the bidding agents in the PAUSE auction is to maximize their utility, subject to the constraint that their next set of bids must have a total revenue that is at least ε bigger than the current revenue, where ε is the smallest increment allowed in the auction.\nFormally, given that W is the current winning bidset, agent i must find a g*_i such that r(g*_i) ≥ r(W) + ε and g*_i = argmax_g u_i(g), where each g is a set of bids that covers all items and ∀b ∈ g: (b ∈ B) or (b_agent = i and b_value > B(b_items) and size(b_items) ≤ k), and where B(items) is the value of the bid in B for the set items (if there is no bid for those items it returns zero).\nThat is, each bid b in g must satisfy at least one of the two following conditions: 1) b is already in B, or 2) b is a bid of size ≤ k in which the agent i bids higher than the price for the same items in B.\n4.\nBIDDING ALGORITHMS\nIn the PAUSE auction, the first stage consists simply of several English auctions, with the bidders submitting bids on individual items.\nIn this case, an agent's dominant strategy is to bid ε higher than the current winning bid until it reaches its valuation for that particular item.\nOur algorithms focus on the subsequent stages: k > 1.\nWhen k > 1, agents have to find g*_i.\nThis can be done by performing a complete search on B.
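To make the formulation concrete, here is a minimal Python sketch of r(g), u_i(g), and the constraints an improving bidset must satisfy; the Bid record, helper names, and toy numbers are our illustrative assumptions, not constructs from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, List


@dataclass(frozen=True)
class Bid:
    items: FrozenSet[str]  # b_items: the set of items the bid is over
    agent: str             # b_agent: the agent that placed the bid
    value: float           # b_value: the price of the bid


def revenue(g: List[Bid]) -> float:
    """r(g): the sum of the bid prices in a bidset."""
    return sum(b.value for b in g)


def utility(i: str, v: Callable[[FrozenSet[str]], float], g: List[Bid]) -> float:
    """u_i(g): value of the items agent i wins minus what it pays (zero if none)."""
    return sum(v(b.items) - b.value for b in g if b.agent == i)


def is_valid_improvement(g: List[Bid], i: str, B: Dict[FrozenSet[str], Bid],
                         W: List[Bid], all_items: FrozenSet[str],
                         k: int, eps: float) -> bool:
    """Check agent i's candidate bidset g at stage k: it must partition all
    items, raise revenue by at least eps over the current winning bidset W,
    and use only bids already in B or i's own higher bids of size <= k."""
    covers = (frozenset().union(*(b.items for b in g)) == all_items
              and sum(len(b.items) for b in g) == len(all_items))  # exact cover
    improves = revenue(g) >= revenue(W) + eps
    legal = all(B.get(b.items) == b
                or (b.agent == i
                    and b.value > (B[b.items].value if b.items in B else 0.0)
                    and len(b.items) <= k)
                for b in g)
    return covers and improves and legal


# Toy run: B holds agent j's best bids; agent i overbids on {a, b}.
all_items = frozenset({"a", "b", "c"})
B = {frozenset({"a", "b"}): Bid(frozenset({"a", "b"}), "j", 5.0),
     frozenset({"c"}): Bid(frozenset({"c"}), "j", 3.0)}
W = list(B.values())  # r(W) = 8.0
g = [Bid(frozenset({"a", "b"}), "i", 6.0), B[frozenset({"c"})]]
print(revenue(g))                                                   # 9.0
print(is_valid_improvement(g, "i", B, W, all_items, k=2, eps=0.5))  # True
```

A complete search would enumerate every such g containing a bid of agent i and keep the valid one maximizing u_i; the algorithms below prune that enumeration.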
However, this approach is computationally expensive since it produces a large search tree.\nOur algorithms represent alternative approaches to overcome this expensive search.\n4.1 The PAUSEBID Algorithm\nIn the PAUSEBID algorithm (shown in Figure 1) we implement some heuristics to prune the search tree.\nFigure 1: The PAUSEBID algorithm, which implements a branch and bound search; i is the agent and k is the current stage of the auction, for k ≥ 2.\nGiven that bidders want to maximize their utility and that at any given point there are likely only a few bids within B which the agent can dominate, we start by defining my-bids to be the list of bids for which the agent's valuation is higher than the current best bid, as given in B.\nWe set the value of these bids to be the agent's true valuation (but we won't necessarily be bidding true valuation, as we explain later).\nSimilarly, we set their-bids to be the rest of the bids from B. Finally, the agent's search list is simply the concatenation of my-bids and their-bids.\nNote that the agent's own bids are placed first on the search list as this will enable us to do more pruning (PAUSEBID lines 3 to 9).\nThe agent can now perform a branch and bound search on the branch-on-bids tree produced by these bids.\nThis branch and bound search is implemented by PBSEARCH (Figure 2).\nOur algorithm not only implements the standard bound but it also implements other pruning techniques in order to further reduce the size of the search tree.\nThe bound we use is the maximum utility that the agent can expect to receive from a given set of bids.\nWe call it u*.\nInitially, u* is set to u_i(W) (PAUSEBID line 11) since that is the utility the agent currently receives and any solution he proposes should give him more utility.\nIf PBSEARCH ever comes across a partial solution where the maximum utility the agent can expect to receive is less than u* then that subtree is pruned (PBSEARCH line 21).\nNote that we can
determine the maximum utility only after the algorithm has searched over all of the agent's own bids (which are first on the list) because after that we know that the solution will not include any more bids where the agent is the winner, thus the agent's utility will no longer increase.\nFigure 2: The PBSEARCH recursive procedure, where bids is the set of available bids and g is the current partial solution.\nFor example, if an agent has only one bid in my-bids then the maximum utility he can expect is equal to his value for the items in that bid minus the minimum possible payment we can make for those items and still come up with a set of bids that has revenue greater than r(W).\nThe calculation of the minimum payment is shown in line 19 for the partial solution case and line 9 for the case where we have a complete solution in PBSEARCH.\nNote that in order to calculate the min-payment for the partial solution case we need an upper bound on the payments that we must make for each item.\nThis upper bound is provided by h(S) = Σ_{s ∈ S} max_{b ∈ B : s ∈ b_items} b_value \/ size(b_items).\nThis function produces a bound identical to the one used by the Bidtree algorithm: it merely assigns to each individual item in S a value equal to the maximum bid in B divided by the number of items in that bid.\nTo prune the branches that cannot lead to a solution with revenue greater than the current W, the algorithm considers both the values of the bids in B and the valuations of the agent itself, via the heuristic h_i(S) = Σ_{s ∈ S} max_{S' : s ∈ S', v_i(S') > 0, size(S') ≤ k} v_i(S') \/ size(S'), which assigns to each individual item s in S the maximum value produced by the valuation of S' divided by the size of S', where S' is a set for which the agent has a valuation greater than zero, contains s, and whose size is less than or equal to k.\nThe algorithm uses the heuristics h and h_i (lines 15 and 19 of PBSEARCH) to prune the just-mentioned branches in the same way an A* algorithm uses its heuristic.\nA final pruning technique implemented by the algorithm is ignoring any branches where the agent has no bids in the current answer g and no more of the agent's bids are in the list (PBSEARCH lines 6 and 7).\nThe resulting g* found by PBSEARCH is thus the set of bids that has revenue bigger than r(W) and maximizes agent i's utility.\nHowever, agent i's bids in g* are still set to his own valuation and not to the lowest possible price.\nLines 17 to 20 in PAUSEBID are responsible for setting the agent's payments so that it can achieve its maximum utility u*.\nIf the agent has only one bid in g* then it is simply a matter of reducing the payment of that bid by u* from the current maximum of the agent's true valuation.\nHowever, if the agent has more than one bid then we face the problem of how to distribute the agent's payments among these bids.\nThere are many ways of distributing the payments and there does not appear to be a dominant strategy for performing this distribution.\nWe have chosen to distribute the payments in proportion to the agent's true valuation for each set of items.\nPAUSEBID assumes that the set of best bids B and the current best winning bidset W remain constant during its execution, and it returns the agent's myopic utility-maximizing bidset (if there is one) using a branch and bound search.\nHowever, it repeats the whole search at every stage.\nWe can minimize this problem by caching the results of previous searches.\n4.2 The
CACHEDPAUSEBID Algorithm\nThe CACHEDPAUSEBID algorithm (shown in Figure 3) is our second approach to solving the bidding problem in the PAUSE auction.\nIt is based on a cache table, called C-Table, where we store some solutions to avoid doing a complete search every time.\nThe problem is the same; the agent i has to find g*i.\nWe note that g*i is a bidset that contains at least one bid of the agent i. Let S be a set of items for which the agent i has a valuation such that vi(S) \u2265 B(S) > 0, and let gSi be the bidset that maximizes agent i's utility subject to r(gSi) \u2265 r(W) + \u03b5, where each candidate g is a set of bids that covers all items, \u2200 b \u2208 g: (b \u2208 B) or (bagent = i and bvalue > B(bitems)), and \u2203 b \u2208 g: bitems = S and bagent = i.\nThat is, gSi is i's best bidset over all items that includes a bid from i for the items in S.\nIn the PAUSE auction we cannot bid for sets of items with size greater than k. So, if we have, for each set of items S for which vi(S) > 0 and size(S) \u2264 k, its corresponding gSi, then g*i is the gSi that maximizes the agent's utility.\nFigure 3: The CACHEDPAUSEBID algorithm, which implements a caching-based search to find a bidset that maximizes the utility for the agent i.
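The selection of g*i from the cached per-set solutions can be sketched as follows. This is an illustrative Python fragment, not the paper's code: the names c_table and best_cached_bidset are ours, and the cache contents in the example are hypothetical.

```python
# Illustrative sketch (not the paper's code): picking the agent's
# utility-maximizing bidset g* from the cached per-set solutions gS.
# c_table maps a set of items S (with v_i(S) > 0 and size(S) <= k)
# to a tuple (gS, uS): the best bidset containing a bid for S, and
# the utility agent i obtains from it.

def best_cached_bidset(c_table):
    """Return (g_star, u_star): the cached bidset with maximum utility,
    or (None, 0) if no cached solution gives positive utility."""
    g_star, u_star = None, 0
    for items, (bidset, utility) in c_table.items():
        if utility > u_star:
            g_star, u_star = bidset, utility
    return g_star, u_star

# Hypothetical cache with entries for the item sets {1} and {1, 2}.
cache = {
    frozenset({1}): (["bid-on-{1}"], 5),
    frozenset({1, 2}): (["bid-on-{1,2}"], 8),
}
assert best_cached_bidset(cache) == (["bid-on-{1,2}"], 8)
assert best_cached_bidset({}) == (None, 0)
```

The point of the cache is that this maximization is a linear scan over stored solutions; the expensive branch-and-bound search is only rerun for the entries invalidated by new bids.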
k is the current stage of the auction (for k \u2265 2), and k-changed is a boolean that is true right after the auction moves to the next stage.\nFigure 4: The CPBSEARCH recursive procedure, where bids is the set of available bids, g is the current partial solution, and n is a value that indicates how deep in the list bids the algorithm has to search.\nWe can then find g' by searching for the gS, stored in C-Table[S], that maximizes the agent's utility, considering only the sets of items S with size(S) < k.\nThe problem that remains is keeping the C-Table updated while avoiding searching every gS every time.\nCACHEDPAUSEBID deals with this and other details.\nLet B' be the set of bids that contains the new best bids; that is, B' contains the bids recently added to B and the bids that were already in B but have changed price (always higher), bidder, or both.\nLet S\u00af = Items - S be the complement of S (the set of items not included in S).\nCACHEDPAUSEBID takes three parameters: i, the agent; k, the current stage of the auction; and k-changed, a boolean that is true right after the auction moves to the next stage.\nInitially C-Table has one row or entry for each set S for which vi(S) > 0.\nWe start by eliminating from C-Table the entries corresponding to each set S for which vi(S) < B(S) (line 3).\nThen, in the case that k-changed is true, for each set S with size(S) = k, we add to B' a bid for that set with value equal to vi(S) and bidder agent i (line 5); this is a bid that the agent is now allowed to consider.\nWe then search for g' amongst the gS stored in C-Table; for this we only need to consider the sets with size(S) < k (line 8).\nBut how do we know that the gS in C-Table[S] is still the best solution for S?\nThere are only two cases when we are not sure about that and we need to do a search to update C-Table[S].\nThese cases are: i) When k-changed is true and size(S) < k, since there was no gS stored in C-Table for this S.
ii) When there exists at least one bid in B' for the set of items submitted by an agent different from i, since it is probable that this new bid can produce a solution better than the one stored in C-Table[S].\nWe handle the two cases mentioned above in lines 13 to 26 of CACHEDPAUSEBID.\nIn both of these cases, since gS must contain a bid for S, we need to find a bidset that covers the missing items, that is, S\u00af.\nThus, our search space consists of all the bids in B for the set of items S\u00af or for a subset of it.\nWe build the list bids, which contains only those bids.\nHowever, we put the bids from B' at the beginning of bids (line 14) since they are the ones that have changed.\nThen, we replace the bids in bids that have a price lower than the valuation that agent i has for those same items with a bid from agent i for those items with value equal to the agent's valuation (lines 16-19).\nThe recursive procedure CPBSEARCH, called in line 25 of CACHEDPAUSEBID and shown in Figure 4, is the one that finds the new gS.\nCPBSEARCH is a slightly modified version of the branch and bound search implemented in PBSEARCH.\nThe first modification is that it has a third parameter n that indicates how deep in the list bids we want to search: it stops searching when n is less than or equal to zero, and not only when the list bids is empty (line 1).\nEach time there is a recursive call of CPBSEARCH, n is decreased by one when a bid from bids is discarded or taken out (lines 12, 15, 21, and 24), and n remains the same otherwise (lines 20 and 23).\nWe set the value of n before calling CPBSEARCH: it is the size of the list bids (CACHEDPAUSEBID line 21) in case i), since we want CPBSEARCH to search over all bids; and it is the number of bids from B' included in bids (CACHEDPAUSEBID line 23) in case ii), since we know that only those first n bids in bids have changed and can affect our current gS.\nAnother difference from PBSEARCH is that the bound in CPBSEARCH is uS, which we set
to be 0 (CACHEDPAUSEBID line 22) in case i), and r(gS) - min-payment (CACHEDPAUSEBID line 12) in case ii).\nWe call CPBSEARCH with g already containing a bid for S.\nAfter CPBSEARCH is executed we are sure that we have the right gS, so we store it in the corresponding C-Table[S] (CACHEDPAUSEBID line 26).\nWhen we reach line 27 in CACHEDPAUSEBID, we are sure that we have the right gS.\nHowever, agent i's bids in gS are still set to his own valuation and not to the lowest possible price.\nIf uS is greater than the current u', lines 31 to 34 in CACHEDPAUSEBID are responsible for setting the agent's payments so that it can achieve its maximum utility uS.\nAs in PAUSEBID, we have chosen to distribute the payments in proportion to the agent's true valuation for each set of items.\nIn the case that uS is less than or equal to zero and the valuation that agent i has for the set of items S is lower than the current value of the bid in B for the same set of items, we remove the corresponding C-Table[S] entry, since we know that it is not worthwhile to keep it in the cache table (CACHEDPAUSEBID line 38).\nThe CACHEDPAUSEBID function is called when k > 1 and returns the agent's myopic utility-maximizing bidset, if there is one.\nIt assumes that W and B remain constant during its execution.\nFigure 5: Algorithm for the generation of random value functions.\nEXPD(x) returns a random number taken from an exponential distribution with mean 1\/x.\nFigure 6: Average percentage of convergence (y-axis), which is the percentage of times that our algorithms converge to the revenue-maximizing solution, as a function of the number of items in the auction.\n5.\nTEST AND COMPARISON\nWe have implemented both algorithms and performed a series of experiments in order to determine how their solutions compare to the revenue-maximizing solution and how their running times compare with each other.\nIn order to do our tests we had to generate value functions for the agents (see footnote 1).\nThe algorithm we used is
shown in Figure 5.\nThe type of valuations it generates corresponds to domains where a set of agents must perform a set of tasks but there are cost savings for particular agents if they can bundle together certain subsets of tasks.\nFor example, imagine a set of robots which must pick up and deliver items to different locations.\nSince each robot is at a different location and has different abilities, each one will have different preferences over how to bundle.\nTheir costs for the item bundles are subadditive, which means that their preferences are superadditive.\nThe first experiment we performed simply ensured the proper functioning of our algorithms.\nFootnote 1: Note that we could not use CATS [6] because it generates sets of bids for an indeterminate number of agents.\nIt is as if you were told the set of bids placed in a combinatorial auction but not who placed each bid or even how many people placed bids, and were then asked to determine the value function of every participant in the auction.\nFigure 7: Average percentage of revenue from our algorithms relative to maximum revenue (y-axis) as a function of the number of items in the auction.\nWe then compared the solutions found by both of them to the revenue-maximizing solution as found by CASS when given a set of bids that corresponds to the agents' true valuations.\nThat is, for each agent i and each set of items S for which vi(S) > 0 we generated a bid.\nThis set of bids was fed to CASS, which implements a centralized winner determination algorithm to find the solution that maximizes revenue.\nNote, however, that the revenue from the PAUSE auction is, in all the auctions, always smaller than the revenue of the revenue-maximizing solution when the agents bid their true valuations.\nSince PAUSE uses English auctions, the final prices (roughly) represent the second-highest valuation, plus \u03b5, for that set of items.\nWe fixed the number of agents to be 5 and we experimented with different numbers of items, namely from 2 to 10.\nWe
ran both algorithms 100 times for each combination.\nWhen we compared the solutions of our algorithms to the revenue-maximizing solution, we realized that they do not always find the same distribution of items as the revenue-maximizing solution (as shown in Figure 6).\nThe cases where our algorithms failed to arrive at the distribution of the revenue-maximizing solution are those where there was a large gap between the first and second valuation for a set (or sets) of items.\nIf the revenue-maximizing solution contains the bid (or bids) using these higher valuations then it is impossible for the PAUSE auction to find this solution, because that bid (those bids) is never placed.\nFor example, if agent i has vi(1) = 1000 and the second-highest valuation for (1) is only 10, then i only needs to place a bid of 11 in order to win that item.\nIf the revenue-maximizing solution requires that 1 be sold for 1000 then that solution will never be found, because that bid will never be placed.\nWe also found that the average percentage of times that our algorithms converge to the revenue-maximizing solution decreases as the number of items increases.\nFor 2 items it is almost 100%, but it decreases by a little less than 1 percent per added item, so that this average percentage of convergence is around 90% for 10 items.\nFigure 8: Average number of expanded nodes (y-axis) as a function of the number of items in the auction.\nIn a few instances our algorithms find different solutions; this is due to the different ordering of the bids in the bids list, which makes them search in a different order.\nWe know that the revenue generated by the PAUSE auction is generally lower than the revenue of the revenue-maximizing solution, but how much lower?\nTo answer this question we calculated a percentage representing the proportion of the revenue given by our algorithms relative to the revenue given by CASS.\nWe found that the percentage of revenue of our algorithms increases on average by 2.7% as the number of items
increases, as shown in Figure 7.\nHowever, we found that CACHEDPAUSEBID generates a higher revenue than PAUSEBID (4.3% higher on average), except for auctions with 2 items, where both have about the same percentage.\nAgain, this difference is produced by the order of the search.\nIn the case of 2 items both algorithms produce on average a revenue proportion of 67.4%, while at the other extreme (10 items), CACHEDPAUSEBID produced on average a revenue proportion of 91.5% while PAUSEBID produced on average a revenue proportion of 87.7%.\nThe scalability of our algorithms can be determined by counting the number of nodes expanded in the search tree.\nFor this we count the number of times that PBSEARCH gets invoked for each time that PAUSEBID is called and, respectively, the number of times that CPBSEARCH gets invoked for each time that CACHEDPAUSEBID is called.\nAs expected, since this is an NP-Hard problem, the number of expanded nodes does grow exponentially with the number of items (as shown in Figure 8).\nHowever, we found that CACHEDPAUSEBID outperforms PAUSEBID, since it expands on average less than half the number of nodes.\nFor example, the average number of nodes expanded for 2 items is zero for CACHEDPAUSEBID while for PAUSEBID it is 2; and at the other extreme (10 items) CACHEDPAUSEBID expands on average only 633 nodes while PAUSEBID expands on average 1672 nodes, a difference of more than 1000 nodes.\nFigure 9: Average time in seconds that it takes to finish an auction (y-axis) as a function of the number of items in the auction.\nAlthough the number of nodes expanded by our algorithms increases as a function of the number of items, the actual number of nodes is much smaller than the worst-case scenario of n^n, where n is the number of items.\nFor example, for 10 items we expand slightly more than 10^3 nodes in the case of PAUSEBID and less than that in the case of CACHEDPAUSEBID, which are much smaller
numbers than 10^10.\nNotice also that our value generation algorithm (Figure 5) generates a number of bids that is exponential in the number of items, as might be expected in many situations.\nAs such, these results do not support the conclusion that time grows exponentially with the number of items when the number of bids is independent of the number of items.\nWe expect that both algorithms will grow exponentially as a function of the number of bids, but stay roughly constant as the number of items grows.\nWe wanted to make sure that fewer expanded nodes do indeed correspond to faster execution, especially since our algorithms execute different operations.\nWe thus ran the same experiment with all the agents on the same machine, an Intel Centrino 2.0 GHz laptop PC with 1 GB of RAM and a 7200 RPM 60 GB hard drive, and calculated the average time that it takes to finish an auction for each algorithm.\nAs shown in Figure 9, CACHEDPAUSEBID is faster than PAUSEBID; the difference in execution speed becomes even clearer as the number of items increases.\n6.\nRELATED WORK\nA lot of research has been done on various aspects of combinatorial auctions.\nWe recommend [2] for a good review.\nHowever, the study of distributed winner determination algorithms for combinatorial auctions is still relatively new.\nOne approach is given by the algorithms for distributing the winner determination problem in combinatorial auctions presented in [7], but these algorithms assume the computational entities are the items being sold and thus end up with a different type of distribution.\nThe VSA algorithm [3] is another way of performing distributed winner determination in combinatorial auctions, but it assumes the bids themselves perform the computation.\nThis algorithm also fails to converge to a solution in most cases.\nIn [9] the authors present a distributed mechanism for calculating VCG payments in a mechanism design problem.\nTheir mechanism roughly amounts to having each agent calculate the
payments for two other agents and give these to a secure central server, which then checks to make sure the results from all pairs agree; otherwise a re-calculation is ordered.\nThis general idea, which they call the redundancy principle, could also be applied to our problem, but it requires the existence of a secure center agent that everyone trusts.\nAnother interesting approach is given in [8], where the bidding agents prioritize their bids, thus reducing the set of bids that the centralized winner determination algorithm must consider, making that problem easier.\nFinally, in the computation-procuring clock auction [1] the agents are given an ever-increasing percentage of the surplus achieved by their proposed solution over the current best.\nAs such, it assumes the agents are impartial computational entities, not the set of possible buyers as assumed by the PAUSE auction.\n7.\nCONCLUSIONS\nWe believe that distributed solutions to the winner determination problem should be studied, as they offer a better fit for some applications, for example when agents do not want to reveal their valuations to the auctioneer or when we wish to distribute the computational load among the bidders.\nThe PAUSE auction is one of a few approaches to decentralizing the winner determination problem in combinatorial auctions.\nWith this auction, we can even envision completely eliminating the auctioneer and, instead, having every agent perform the task of the auctioneer.\nHowever, while PAUSE establishes the rules the bidders must obey, it does not tell us how the bidders should calculate their bids.\nWe have presented two algorithms, PAUSEBID and CACHEDPAUSEBID, that bidder agents can use to engage in a PAUSE auction.\nBoth algorithms implement a myopic utility-maximizing strategy that is guaranteed to find the bidset that maximizes the agent's utility given the set of outstanding best bids at any
given time, without considering possible future bids.\nBoth algorithms find, most of the time, the same distribution of items as the revenue-maximizing solution.\nThe cases where our algorithms failed to arrive at that distribution are those where there was a large gap between the first and second valuation for a set (or sets) of items.\nAs this is an NP-Hard problem, the running time of our algorithms remains exponential, but it is significantly better than a full search.\nPAUSEBID performs a branch and bound search completely from scratch each time it is invoked.\nCACHEDPAUSEBID caches partial solutions and performs a branch and bound search only on the few portions affected by the changes in the bids between consecutive invocations.\nCACHEDPAUSEBID has better performance since it explores fewer nodes (less than half) and it is faster.\nAs expected, the revenue generated by a PAUSE auction is lower than the revenue of a revenue-maximizing solution found by a centralized winner determination algorithm; however, we found that CACHEDPAUSEBID generates on average 4.7% higher revenue than PAUSEBID.\nWe also found that the revenue generated by our algorithms increases as a function of the number of items in the auction.\nOur algorithms have shown that it is feasible to implement the complex coordination constraints supported by combinatorial auctions without having to resort to a centralized winner determination algorithm.\nMoreover, because of the design of the PAUSE auction, the agents in the auction also have an incentive to perform the required computation.\nOur bidding algorithms can be used by any multiagent system that would use combinatorial auctions for coordination but would rather not implement a centralized auctioneer.","keyphrases":["bid algorithm","combinatori auction","distribut alloc","coordin","paus auction","task and resourc alloc","revenu-maxim solut","combinatori optim problem","agent","progress adapt user select environ","branch and bound search","search 
tree","branch-on-bid tree"],"prmu":["P","P","P","P","P","M","M","M","U","U","M","M","U"]} {"id":"C-58","title":"A Scalable Distributed Information Management System","abstract":"We present a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications by providing detailed views of nearby information and summary views of global information. To serve as a basic building block, a SDIMS should have four properties: scalability to many nodes and attributes, flexibility to accommodate a broad range of applications, administrative isolation for security and availability, and robustness to node and network failures. We design, implement and evaluate a SDIMS that (1) leverages Distributed Hash Tables (DHT) to create scalable aggregation trees, (2) provides flexibility through a simple API that lets applications control propagation of reads and writes, (3) provides administrative isolation through simple extensions to current DHT algorithms, and (4) achieves robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication. 
Through extensive simulations and micro-benchmark experiments, we observe that our system is an order of magnitude more scalable than existing approaches, achieves isolation properties at the cost of modestly increased read latency in comparison to flat DHTs, and gracefully handles failures.","lvl-1":"A Scalable Distributed Information Management System\u2217 Praveen Yalagandula ypraveen@cs.utexas.edu Mike Dahlin dahlin@cs.utexas.edu Department of Computer Sciences The University of Texas at Austin Austin, TX 78712 ABSTRACT We present a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications by providing detailed views of nearby information and summary views of global information.\nTo serve as a basic building block, a SDIMS should have four properties: scalability to many nodes and attributes, flexibility to accommodate a broad range of applications, administrative isolation for security and availability, and robustness to node and network failures.\nWe design, implement and evaluate a SDIMS that (1) leverages Distributed Hash Tables (DHT) to create scalable aggregation trees, (2) provides flexibility through a simple API that lets applications control propagation of reads and writes, (3) provides administrative isolation through simple extensions to current DHT algorithms, and (4) achieves robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication.\nThrough extensive simulations and micro-benchmark experiments, we observe that our system is an order of magnitude more scalable than existing approaches, achieves isolation properties at the cost of modestly increased read latency in comparison to flat DHTs, and gracefully handles failures.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed 
Systems-Network Operating Systems, Distributed Databases General Terms Management, Design, Experimentation 1.\nINTRODUCTION The goal of this research is to design and build a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications.\nMonitoring, querying, and reacting to changes in the state of a distributed system are core components of applications such as system management [15, 31, 37, 42], service placement [14, 43], data sharing and caching [18, 29, 32, 35, 46], sensor monitoring and control [20, 21], multicast tree formation [8, 9, 33, 36, 38], and naming and request routing [10, 11].\nWe therefore speculate that a SDIMS in a networked system would provide a distributed operating systems backbone and facilitate the development and deployment of new distributed services.\nFor a large scale information system, hierarchical aggregation is a fundamental abstraction for scalability.\nRather than expose all information to all nodes, hierarchical aggregation allows a node to access detailed views of nearby information and summary views of global information.\nIn a SDIMS based on hierarchical aggregation, different nodes can therefore receive different answers to the query find a [nearby] node with at least 1 GB of free memory or find a [nearby] copy of file foo.\nA hierarchical system that aggregates information through reduction trees [21, 38] allows nodes to access information they care about while maintaining system scalability.\nTo be used as a basic building block, a SDIMS should have four properties.\nFirst, the system should be scalable: it should accommodate large numbers of participating nodes, and it should allow applications to install and monitor large numbers of data attributes.\nEnterprise and global scale systems today might have tens of thousands to millions of nodes and these numbers 
will increase over time.\nSimilarly, we hope to support many applications, and each application may track several attributes (e.g., the load and free memory of a system's machines) or millions of attributes (e.g., which files are stored on which machines).\nSecond, the system should have flexibility to accommodate a broad range of applications and attributes.\nFor example, read-dominated attributes like numCPUs rarely change in value, while write-dominated attributes like numProcesses change quite often.\nAn approach tuned for read-dominated attributes will consume high bandwidth when applied to write-dominated attributes.\nConversely, an approach tuned for write-dominated attributes will suffer from unnecessary query latency or imprecision for read-dominated attributes.\nTherefore, a SDIMS should provide mechanisms to handle different types of attributes and leave the policy decision of tuning replication to the applications.\nThird, a SDIMS should provide administrative isolation.\nSession 10: Distributed Information Systems\nIn a large system, it is natural to arrange nodes in an organizational or an administrative hierarchy.\nA SDIMS should support administrative isolation in which queries about an administrative domain's information can be satisfied within the domain so that the system can operate during disconnections from other domains, so that an external observer cannot monitor or affect intra-domain queries, and to support domain-scoped queries efficiently.\nFourth, the system must be robust to node failures and disconnections.\nA SDIMS should adapt to reconfigurations in a timely fashion and should also provide mechanisms so that applications can trade off the cost of adaptation with the consistency level in the aggregated results when reconfigurations occur.\nWe draw inspiration from two previous works: Astrolabe [38] and Distributed Hash Tables (DHTs).\nAstrolabe [38] is a robust information management system.\nAstrolabe provides the abstraction of
a single logical aggregation tree that mirrors a system's administrative hierarchy.\nIt provides a general interface for installing new aggregation functions and provides eventual consistency on its data.\nAstrolabe is robust due to its use of an unstructured gossip protocol for disseminating information and its strategy of replicating all aggregated attribute values for a subtree to all nodes in the subtree.\nThis combination allows any communication pattern to yield eventual consistency and allows any node to answer any query using local information.\nThis high degree of replication, however, may limit the system's ability to accommodate large numbers of attributes.\nAlso, although the approach works well for read-dominated attributes, an update at one node can eventually affect the state at all nodes, which may limit the system's flexibility to support write-dominated attributes.\nRecent research in peer-to-peer structured networks resulted in Distributed Hash Tables (DHTs) [18, 28, 29, 32, 35, 46], a data structure that scales with the number of nodes and that distributes the read-write load for different queries among the participating nodes.\nIt is interesting to note that although these systems export a global hash table abstraction, many of them internally make use of what can be viewed as a scalable system of aggregation trees to, for example, route a request for a given key to the right DHT node.\nIndeed, rather than export a general DHT interface, Plaxton et al.'s [28] original application makes use of hierarchical aggregation to allow nodes to locate nearby copies of objects.\nIt seems appealing to develop a SDIMS abstraction that exposes this internal functionality in a general way so that scalable trees for aggregation can be a basic system building block alongside the DHTs.\nAt first glance, it might appear obvious that simply fusing DHTs with Astrolabe's aggregation abstraction will result in a SDIMS.\nHowever, meeting the SDIMS
requirements forces a design to address four questions: (1) How to scalably map different attributes to different aggregation trees in a DHT mesh?\n(2) How to provide flexibility in the aggregation to accommodate different application requirements?\n(3) How to adapt a global, flat DHT mesh to attain the administrative isolation property?\nand (4) How to provide robustness without unstructured gossip and total replication?\nThe key contributions of this paper that form the foundation of our SDIMS design are as follows.\n1.\nWe define a new aggregation abstraction that specifies both attribute type and attribute name and that associates an aggregation function with a particular attribute type.\nThis abstraction paves the way for utilizing the DHT system's internal trees for aggregation and for achieving scalability with both nodes and attributes.\n2.\nWe provide a flexible API that lets applications control the propagation of reads and writes and thus trade off update cost, read latency, replication, and staleness.\n3.\nWe augment an existing DHT algorithm to ensure path convergence and path locality properties in order to achieve administrative isolation.\n4.\nWe provide robustness to node and network reconfigurations by (a) providing temporal replication through lazy reaggregation that guarantees eventual consistency and (b) ensuring that our flexible API allows demanding applications to gain additional robustness by using tunable spatial replication of data aggregates, by performing fast on-demand reaggregation to augment the underlying lazy reaggregation, or by doing both.\nWe have built a prototype of SDIMS.\nThrough simulations and micro-benchmark experiments on a number of department machines and PlanetLab [27] nodes, we observe that the prototype achieves scalability with respect to both nodes and attributes through use of its flexible API, inflicts an order of magnitude lower maximum node stress than unstructured gossiping schemes, achieves isolation properties
at a cost of modestly increased read latency compared to flat DHTs, and gracefully handles node failures.\nThis initial study discusses key aspects of an ongoing system building effort, but it does not address all issues in building a SDIMS.\nFor example, we believe that our strategies for providing robustness will mesh well with techniques such as supernodes [22] and other ongoing efforts to improve DHTs [30] for further improving robustness.\nAlso, although splitting aggregation among many trees improves scalability for simple queries, this approach may make complex and multi-attribute queries more expensive compared to a single tree.\nAdditional work is needed to understand the significance of this limitation for real workloads and, if necessary, to adapt query planning techniques from DHT abstractions [16, 19] to scalable aggregation tree abstractions.\nIn Section 2, we explain the hierarchical aggregation abstraction that SDIMS provides to applications.\nIn Sections 3 and 4, we describe the design of our system for achieving the flexibility, scalability, and administrative isolation requirements of a SDIMS.\nIn Section 5, we detail the implementation of our prototype system.\nSection 6 addresses the issue of adaptation to the topological reconfigurations.\nIn Section 7, we present the evaluation of our system through large-scale simulations and microbenchmarks on real networks.\nSection 8 details the related work, and Section 9 summarizes our contribution.\n2.\nAGGREGATION ABSTRACTION Aggregation is a natural abstraction for a large-scale distributed information system because aggregation provides scalability by allowing a node to view detailed information about the state near it and progressively coarser-grained summaries about progressively larger subsets of a system``s data [38].\nOur aggregation abstraction is defined across a tree spanning all nodes in the system.\nEach physical node in the system is a leaf and each subtree represents a logical group of 
nodes. Note that logical groups can correspond to administrative domains (e.g., a department or university) or groups of nodes within a domain (e.g., 10 workstations on a LAN in a CS department). An internal non-leaf node, which we call a virtual node, is simulated by one or more physical nodes at the leaves of the subtree for which the virtual node is the root. We describe how to form such trees in a later section. Each physical node has local data stored as a set of (attributeType, attributeName, value) tuples such as (configuration, numCPUs, 16), (mcast membership, session foo, yes), or (file stored, foo, myIPaddress). The system associates an aggregation function f_type with each attribute type, and for each level-i subtree T_i in the system, the system defines an aggregate value V_{i,type,name} for each (attributeType, attributeName) pair as follows. For a (physical) leaf node T_0 at level 0, V_{0,type,name} is the locally stored value for the attribute type and name, or NULL if no matching tuple exists. Then the aggregate value for a level-i subtree T_i is the aggregation function for the type, f_type, computed across the aggregate values of each of T_i's k children: V_{i,type,name} = f_type(V^0_{i-1,type,name}, V^1_{i-1,type,name}, ..., V^{k-1}_{i-1,type,name}). Although SDIMS allows arbitrary aggregation functions, it is often desirable that these functions satisfy the hierarchical computation property [21]: f(v_1, ..., v_n) = f(f(v_1, ..., v_{s1}), f(v_{s1+1}, ..., v_{s2}), ..., f(v_{sk+1}, ..., v_n)), where v_i is the value of an attribute at node i.
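To make the property concrete, here is a minimal illustrative check using summation (my own snippet, not code from the paper): applying f to all leaf values at once must give the same result as applying f to per-subtree partial aggregates.

```python
# Illustrative check of the hierarchical computation property with SUM
# (assumed example, not from the paper): the aggregate may be computed
# over any grouping of the inputs into subtrees with the same result.

def f_sum(values):
    # Aggregation function f_type: plain summation.
    return sum(values)

leaves = [3, 1, 4, 1, 5, 9]

# Flat application: f(v1, ..., vn)
flat = f_sum(leaves)

# Hierarchical application: f(f(v1, v2), f(v3, v4), f(v5, v6))
grouped = f_sum([f_sum(leaves[0:2]), f_sum(leaves[2:4]), f_sum(leaves[4:6])])

assert flat == grouped == 23
```

An aggregation function that fails this check cannot be computed bottom-up over arbitrary subtrees, which motivates re-encoding such attributes into a representation that does satisfy it.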
For example, the average operation, defined as avg(v_1, ..., v_n) = (1/n) ∑_{i=1}^{n} v_i, does not satisfy the property. Instead, if an attribute stores values as tuples (sum, count), the attribute satisfies the hierarchical computation property while still allowing applications to compute the average from the aggregate sum and count values. Finally, note that for a large-scale system, it is difficult or impossible to insist that the aggregation value returned by a probe corresponds to the function computed over the current values at the leaves at the instant of the probe. Therefore our system provides only weak consistency guarantees, specifically eventual consistency as defined in [38]. 3. FLEXIBILITY A major innovation of our work is enabling flexible aggregate computation and propagation. The definition of the aggregation abstraction allows considerable flexibility in how, when, and where aggregate values are computed and propagated. While previous systems [15, 29, 38, 32, 35, 46] implement a single static strategy, we argue that a SDIMS should provide flexible computation and propagation to efficiently support a wide variety of applications with diverse requirements. In order to provide this flexibility, we develop a simple interface that decomposes the aggregation abstraction into three pieces of functionality: install, update, and probe. This definition of the aggregation abstraction allows our system to provide a continuous spectrum of strategies ranging from lazy aggregate computation and propagation on reads to aggressive immediate computation and propagation on writes. In Figure 1, we illustrate both extreme strategies and an intermediate strategy. Under the lazy Update-Local computation and propagation strategy, an update (or write) only affects local state. Then, a probe (or read) that reads a level-i aggregate value is sent up the tree to the issuing node's level-i ancestor and then down the tree to the leaves. The system then computes
the desired aggregate value at each layer up the tree until the level-i ancestor that holds the desired value. Finally, the level-i ancestor sends the result down the tree to the issuing node. In the other extreme case of the aggressive Update-All immediate computation and propagation on writes [38], when an update occurs, changes are aggregated up the tree, and each new aggregate value is flooded to all of a node's descendants. In this case, each level-i node not only maintains the aggregate values for the level-i subtree but also receives and locally stores copies of all of its ancestors' level-j (j > i) aggregation values. Also, a leaf satisfies a probe for a level-i aggregate using purely local data. In an intermediate Update-Up strategy, the root of each subtree maintains the subtree's current aggregate value, and when an update occurs, the leaf node updates its local state and passes the update to its parent, and then each successive enclosing subtree updates its aggregate value and passes the new value to its parent. This strategy satisfies a leaf's probe for a level-i aggregate value by sending the probe up to the level-i ancestor of the leaf and then sending the aggregate value down to the leaf. Finally, notice that other strategies exist. In general, an Update-Up_k-Down_j strategy aggregates up to the kth level and propagates the aggregate values of a node at level l (s.t. l <= k) downward for j levels.

Table 1: Arguments for the install operation
  attrType - Attribute Type
  aggrfunc - Aggregation Function
  up       - How far upward each update is sent (default: all) [optional]
  down     - How far downward each aggregate is sent (default: none) [optional]
  domain   - Domain restriction (default: none) [optional]
  expTime  - Expiry Time

A SDIMS must provide a wide range of flexible computation and propagation strategies to applications for it to be a general abstraction. An application should be able to choose a particular mechanism based on its read-to-write ratio that reduces the bandwidth consumption while attaining the required responsiveness and precision. Note that the read-to-write ratios of the attributes that applications install vary extensively. For example, a read-dominated attribute like numCPUs rarely changes in value, while a write-dominated attribute like numProcesses changes quite often. An aggregation strategy like Update-All works well for read-dominated attributes but suffers high bandwidth consumption when applied to write-dominated attributes. Conversely, an approach like Update-Local works well for write-dominated attributes but suffers from unnecessary query latency or imprecision for read-dominated attributes. SDIMS also allows non-uniform computation and propagation across the aggregation tree with different up and down parameters in different subtrees so that applications can adapt to the spatial and temporal heterogeneity of read and write operations. With respect to spatial heterogeneity, access patterns may differ for different parts of the tree, requiring different propagation strategies for different parts of the tree. Similarly, with respect to temporal heterogeneity, access patterns may change over time, requiring different strategies over time. 3.1 Aggregation API We provide the flexibility described above by splitting the aggregation API into three functions: Install() installs an aggregation function that defines an operation on an attribute type and specifies the update strategy that the function will use, Update() inserts or modifies a node's local value for an attribute, and Probe() obtains an aggregate value for a specified subtree. The install interface allows applications to specify the k and j parameters of
the Update-Up_k-Down_j strategy along with the aggregation function. The update interface invokes the aggregation of an attribute on the tree according to the corresponding aggregation function's aggregation strategy. The probe interface not only allows applications to obtain the aggregated value for a specified tree but also allows a probing node to continuously fetch the values for a specified time, thus enabling an application to adapt to spatial and temporal heterogeneity. The rest of the section describes these three interfaces in detail. 3.1.1 Install The Install operation installs an aggregation function in the system. The arguments for this operation are listed in Table 1. The attrType argument denotes the type of attributes on which this aggregation function is invoked. Installed functions are soft state that must be periodically renewed or they will be garbage collected at expTime. The arguments up and down specify the aggregate computation and propagation strategy Update-Up_k-Down_j.

Figure 1: Flexible API. The figure illustrates the messages sent on an update and on probes for the global and level-1 aggregate values under the Update-Local, Update-Up, and Update-All strategies.

Table 2: Arguments for the probe operation
  attrType - Attribute Type
  attrName - Attribute Name
  mode     - Continuous or One-shot (default: one-shot) [optional]
  level    - Level at which aggregate is sought (default: at all levels) [optional]
  up       - How far up to go and re-fetch the value (default: none) [optional]
  down     - How far down to go and re-aggregate (default: none) [optional]
  expTime  - Expiry Time

The domain argument, if present, indicates that the aggregation function should be installed on all nodes in the specified domain; otherwise the function is installed on all nodes in the system. 3.1.2 Update The Update operation takes three arguments, attrType, attrName, and value, and creates a new (attrType, attrName, value) tuple or updates the value of an old tuple with matching attrType and attrName at a leaf node. The update interface
meshes with the installed aggregate computation and propagation strategy to provide flexibility. In particular, as outlined above and described in detail in Section 5, after a leaf applies an update locally, the update may trigger re-computation of aggregate values up the tree and may also trigger propagation of changed aggregate values down the tree. Notice that our abstraction associates an aggregation function with only an attrType but lets updates specify an attrName along with the attrType. This technique helps achieve scalability with respect to nodes and attributes as described in Section 4. 3.1.3 Probe The Probe operation returns the value of an attribute to an application. The complete argument set for the probe operation is shown in Table 2. Along with the attrName and attrType arguments, a level argument specifies the level at which the answers are required for an attribute. In our implementation we choose to return results at all levels k < l for a level-l probe because (i) it is inexpensive, as the nodes traversed for a level-l probe also contain the level-k aggregates for k < l and we expect the network cost of transmitting the additional information to be small for the small aggregates on which we focus, and (ii) it is useful, as applications can efficiently get several aggregates with a single probe (e.g., for domain-scoped queries as explained in Section 4.2). Probes with mode set to continuous and with finite expTime enable applications to handle spatial and temporal heterogeneity. When node A issues a continuous probe at level l for an attribute, then regardless of the up and down parameters, updates for the attribute at any node in A's level-l ancestor's subtree are aggregated up to level l and the aggregated value is propagated down along the path from the ancestor to A.
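As a rough illustration of how the three calls fit together, the following hypothetical single-node stand-in mimics the Install/Update/Probe semantics at one leaf. The class and argument names here are my own and not the prototype's actual interface; a real deployment distributes this state over a per-attribute-key aggregation tree.

```python
# Hypothetical single-node sketch of the Install/Update/Probe interface
# of Section 3.1 (illustrative names, not the prototype's real API).

class MiniSDIMS:
    def __init__(self):
        self.functions = {}  # attrType -> aggregation function
        self.local = {}      # (attrType, attrName) -> local value

    def install(self, attr_type, aggr_func, up="all", down=0):
        # Table 1: associate an aggregation function with an attribute type.
        self.functions[attr_type] = aggr_func

    def update(self, attr_type, attr_name, value):
        # Create or modify the local (attrType, attrName, value) tuple.
        self.local[(attr_type, attr_name)] = value

    def probe(self, attr_type, attr_name):
        # With a single node, the "aggregate" is the function applied to
        # the one local value (NULL/None if no matching tuple exists).
        value = self.local.get((attr_type, attr_name))
        return self.functions[attr_type]([value]) if value is not None else None

node = MiniSDIMS()
node.install("configuration", aggr_func=min)  # e.g., a MIN aggregate
node.update("configuration", "numCPUs", 16)
assert node.probe("configuration", "numCPUs") == 16
```

Note how a single installed function for the type "configuration" serves every attribute name of that type, which is the type/name split the abstraction relies on for scalability.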
Note that continuous mode enables SDIMS to support a distributed sensor-actuator mechanism where a sensor monitors a level-i aggregate with a continuous mode probe and triggers an actuator upon receiving new values for the probe. The up and down arguments enable applications to perform on-demand fast re-aggregation during reconfigurations, where a forced re-aggregation is done for the corresponding levels even if the aggregated value is available, as we discuss in Section 6. When present, the up and down arguments are interpreted as described in the install operation. 3.1.4 Dynamic Adaptation At the API level, the up and down arguments in the install API can be regarded as hints, since they suggest a computation strategy but do not affect the semantics of an aggregation function. A SDIMS implementation can dynamically adjust its up/down strategies for an attribute based on its measured read/write frequency. But a virtual intermediate node needs to know the current up and down propagation values to decide if the local aggregate is fresh in order to answer a probe. This is the key reason why up and down need to be statically defined at install time and cannot be specified in the update operation. For dynamic adaptation, we implement a lease-based mechanism where a node issues a lease to a parent or a child denoting that it will keep propagating the updates to that parent or child. We are currently evaluating different policies to decide when to issue a lease and when to revoke a lease. 4. SCALABILITY Our design achieves scalability with respect to both nodes and attributes through two key ideas. First, it carefully defines the aggregation abstraction to mesh well with its underlying scalable DHT system. Second, it refines the basic DHT abstraction to form an Autonomous DHT (ADHT) to achieve the administrative isolation properties that are crucial to scaling for large real-world systems. In this section, we describe these two ideas in detail. 4.1
Leveraging DHTs In contrast to previous systems [4, 15, 38, 39, 45], SDIMS's aggregation abstraction specifies both an attribute type and attribute name and associates an aggregation function with a type rather than just specifying and associating a function with a name. Installing a single function that can operate on many different named attributes matching a type improves scalability for sparse attribute types with large, sparsely-filled name spaces. For example, to construct a file location service, our interface allows us to install a single function that computes an aggregate value for any named file. A subtree's aggregate value for (FILELOC, name) would be the ID of a node in the subtree that stores the named file. Conversely, Astrolabe copes with sparse attributes by having aggregation functions compute sets or lists and suggests that scalability can be improved by representing such sets with Bloom filters [6]. Supporting sparse names within a type provides at least two advantages. First, when the value associated with a name is updated, only the state associated with that name needs to be updated and propagated to other nodes. Second, splitting values associated with different names into different aggregation values allows our system to leverage Distributed Hash Tables (DHTs) to map different names to different trees and thereby spread the function's logical root node's load and state across multiple physical nodes.

Figure 2: The DHT tree corresponding to key 111 (DHTtree111) and the corresponding aggregation tree.

Given this abstraction, scalably mapping attributes to DHTs is straightforward. DHT systems assign a long, random ID to each node and define an algorithm to route a request for key k to a node root_k such that the union of paths from all nodes forms a tree DHTtree_k rooted at the node root_k. Now, as illustrated in Figure 2, by
aggregating an attribute along the aggregation tree corresponding to DHTtree_k for k = hash(attribute type, attribute name), different attributes will be aggregated along different trees. In comparison to a scheme where all attributes are aggregated along a single tree, aggregating along multiple trees incurs lower maximum node stress: whereas in a single aggregation tree approach, the root and the intermediate nodes pass around more messages than leaf nodes, in a DHT-based multi-tree, each node acts as an intermediate aggregation point for some attributes and as a leaf node for other attributes. Hence, this approach distributes the onus of aggregation across all nodes. 4.2 Administrative Isolation Aggregation trees should provide administrative isolation by ensuring that, for each domain, the virtual node at the root of the smallest aggregation subtree containing all nodes of that domain is hosted by a node in that domain. Administrative isolation is important for three reasons: (i) for security, so that updates and probes flowing in a domain are not accessible outside the domain; (ii) for availability, so that queries for values in a domain are not affected by failures of nodes in other domains; and (iii) for efficiency, so that domain-scoped queries can be simple and efficient. To provide administrative isolation to aggregation trees, a DHT should satisfy two properties: 1. Path Locality: Search paths should always be contained in the smallest possible domain. 2. Path Convergence: Search paths for a key from different nodes in a domain should converge at a node in that domain. Existing DHTs support path locality [18] or can easily support it by using domain nearness as the distance metric [7, 17], but they do not guarantee path convergence, as those systems try to optimize the search path to the root to reduce response latency. For example, Pastry [32] uses prefix routing in which each node's routing table contains one row per hexadecimal digit in
the nodeId space, where the ith row contains a list of nodes whose nodeIds differ from the current node's nodeId in the ith digit, with one entry for each possible digit value. Given a routing topology, to route a packet to an arbitrary destination key, a node in Pastry forwards a packet to the node with a nodeId prefix matching the key in at least one more digit than the current node. If such a node is not known, the current node uses an additional data structure, the leaf set containing L immediate higher and lower neighbors in the nodeId space, and forwards the packet to a node with an identical prefix but that is numerically closer to the destination key in the nodeId space. This process continues until the destination node appears in the leaf set, after which the message is routed directly. Pastry's expected number of routing steps is log n, where n is the number of nodes, but as Figure 3 illustrates, this algorithm does not guarantee path convergence: if two nodes in a domain have nodeIds that match a key in the same number of bits, both of them can route to a third node outside the domain when routing for that key.

Figure 3: Example showing how the isolation property is violated with original Pastry, together with the corresponding aggregation tree.

Figure 4: Autonomous DHT satisfying the isolation property, together with the corresponding aggregation tree.

Simple modifications to Pastry's route table construction and key-routing protocols yield an Autonomous DHT (ADHT) that satisfies the path locality and path convergence properties. As Figure 4 illustrates, whenever two nodes in a domain share the same prefix with respect to a key and no other node in the domain has a longer prefix, our algorithm introduces a virtual node at the boundary of the domain
corresponding to that prefix plus the next digit of the key; such a virtual node is simulated by the existing node whose id is numerically closest to the virtual node's id. Our ADHT's routing table differs from Pastry's in two ways. First, each node maintains a separate leaf set for each domain of which it is a part. Second, nodes use two proximity metrics when populating the routing tables: hierarchical domain proximity is the primary metric and network distance is secondary. Then, to route a packet to a global root for a key, the ADHT routing algorithm uses the routing table and the leaf set entries to route to each successive enclosing domain's root (the virtual or real node in the domain matching the key in the maximum number of digits). Additional details about the ADHT algorithm are available in an extended technical report [44]. Properties. Maintaining a different leaf set for each administrative hierarchy level increases the number of neighbors that each node tracks to (2^b) * lg_b n + c*l from (2^b) * lg_b n + c in unmodified Pastry, where b is the number of bits in a digit, n is the number of nodes, c is the leaf set size, and l is the number of domain levels. Routing requires O(lg_b n + l) steps compared to O(lg_b n) steps in Pastry; also, each routing hop may be longer than in Pastry because the modified algorithm's routing table prefers same-domain nodes over nearby nodes. We experimentally quantify the additional routing costs in Section 7.

Figure 5: Example for domain-scoped queries.

In a large system, the ADHT topology allows domains to improve security for sensitive attribute types by installing them only within a specified domain. Then, aggregation occurs entirely within the domain and a node external to the domain can neither observe nor affect the
updates and aggregation computations of the attribute type. Furthermore, though we have not implemented this feature in the prototype, the ADHT topology would also support domain-restricted probes that could ensure that no one outside of a domain can observe a probe for data stored within the domain. The ADHT topology also enhances availability by allowing the common case of probes for data within a domain to depend only on a domain's nodes. This, for example, allows a domain that becomes disconnected from the rest of the Internet to continue to answer queries for local data. Aggregation trees that provide administrative isolation also enable the definition of simple and efficient domain-scoped aggregation functions to support queries like "what is the average load on machines in domain X?" For example, consider an aggregation function to count the number of machines in an example system with three machines illustrated in Figure 5. Each leaf node l updates attribute NumMachines with a value v_l containing a set of tuples of the form (Domain, Count) for each domain of which the node is a part. In the example, the node A1 with name A1.A. performs an update with the value ((A1.A.,1), (A.,1), (.,1)). An aggregation function at an internal virtual node hosted on node N with child set C computes the aggregate as a set of tuples: for each domain D that N is part of, form a tuple (D, ∑_{c∈C} {count | (D, count) ∈ v_c}). This computation is illustrated in Figure 5. Now a query for NumMachines with level set to MAX will return the aggregate values at each intermediate virtual node on the path to the root as a set of tuples (tree level, aggregated value), from which it is easy to extract the count of machines at each enclosing domain. For example, A1 would receive ((2, ((B1.B.,1), (B.,1), (.,3))), (1, ((A1.A.,1), (A.,2), (.,2))), (0, ((A1.A.,1), (A.,1), (.,1)))). Note that supporting domain-scoped queries would be less convenient and less efficient if
aggregation trees did not conform to the system's administrative structure. It would be less efficient because each intermediate virtual node would have to maintain a list of all values at the leaves in its subtree along with their names, and it would be less convenient because applications that need an aggregate for a domain would have to pick out the values of nodes in that domain from the list returned by a probe and perform the computation themselves. 5. PROTOTYPE IMPLEMENTATION The internal design of our SDIMS prototype comprises two layers: the Autonomous DHT (ADHT) layer manages the overlay topology of the system, and the Aggregation Management Layer (AML) maintains attribute tuples, performs aggregations, and stores and propagates aggregate values. Given the ADHT construction described in Section 4.2, each node implements an Aggregation Management Layer (AML) to support the flexible API described in Section 3. In this section, we describe the internal state and operation of the AML layer of a node in the system.

Figure 6: Example illustrating the data structures and their organization at a node.

We refer to a store of (attribute type, attribute name, value) tuples as a Management Information Base, or MIB, following the terminology from Astrolabe [38] and SNMP [34]. We refer to an (attribute type, attribute name) tuple as an attribute key. As Figure 6 illustrates, each physical node in the system acts as several virtual nodes in the AML: a node acts as a leaf for all attribute keys, as a level-1 subtree root for keys whose hash matches the node's ID in b prefix bits (where b is the number of bits corrected in each step of the ADHT's routing scheme), as a level-i subtree root for attribute keys whose hash matches the node's ID in the initial i*b bits, and as the system's global root for attribute keys whose hash matches the node's ID in more prefix bits than any other node (in case of a tie, the first non-matching bit is ignored and the comparison is continued [46]). To support hierarchical aggregation, each virtual node at the root of a level-i subtree maintains several MIBs that store (1) child MIBs containing raw aggregate values gathered from children, (2) a reduction MIB containing locally aggregated values across this raw information, and (3) an ancestor MIB containing aggregate values scattered down from ancestors. This basic strategy of maintaining child, reduction, and ancestor MIBs is based on Astrolabe [38], but our structured propagation strategy channels information that flows up according to its attribute key, and our flexible propagation strategy only sends child updates up and ancestor aggregate results down as far as specified by the attribute key's aggregation function. Note that in the discussion below, for ease of explanation, we assume that the routing protocol corrects a single bit at a time (b = 1). Our system, built upon Pastry, handles multi-bit correction (b
= 4) and is a simple extension to the scheme described here. For a given virtual node n_i at level i, each child MIB contains the subset of a child's reduction MIB that contains tuples that match n_i's node ID in i bits and whose up aggregation function attribute is at least i. These local copies make it easy for a node to recompute a level-i aggregate value when one child's input changes. Nodes maintain their child MIBs in stable storage and use a simplified version of the Bayou log exchange protocol (sans conflict detection and resolution) for synchronization after disconnections [26]. Virtual node n_i at level i maintains a reduction MIB of tuples, with a tuple for each key present in any child MIB containing the attribute type, attribute name, and output of the attribute type's aggregate functions applied to the children's tuples. A virtual node n_i at level i also maintains an ancestor MIB to store the tuples containing an attribute key and a list of aggregate values at different levels scattered down from ancestors. Note that the list for a key might contain multiple aggregate values for the same level but aggregated at different nodes (see Figure 4). So, the aggregate values are tagged not only with level information, but also with the ID of the node that performed the aggregation. Level 0 differs slightly from other levels. Each level-0 leaf node maintains a local MIB rather than maintaining child MIBs and a reduction MIB. This local MIB stores information about the local node's state inserted by local applications via update() calls. We envision various sensor programs and applications inserting data into the local MIB. For example, one program might monitor local configuration and perform updates with information such as total memory, free memory, etc. A distributed file system might perform an update for each file stored on the local node. Along with these MIBs, a virtual node maintains two other tables: an aggregation function table and an
outstanding probes table. An aggregation function table contains the aggregation function and installation arguments (see Table 1) associated with an attribute type or an attribute type and name. Each aggregate function is installed on all nodes in a domain's subtree, so the aggregate function table can be thought of as a special case of the ancestor MIB, with domain functions always installed up to a root within a specified domain and down to all nodes within the domain. The outstanding probes table maintains temporary information regarding in-progress probes. Given these data structures, it is simple to support the three API functions described in Section 3.1. Install The Install operation (see Table 1) installs on a domain an aggregation function that acts on a specified attribute type. Execution of an install operation for function aggrFunc on attribute type attrType proceeds in two phases: first the install request is passed up the ADHT tree with the attribute key (attrType, null) until it reaches the root for that key within the specified domain. Then, the request is flooded down the tree and installed on all intermediate and leaf nodes. Update When a level-i virtual node receives an update for an attribute from a child below, it first recomputes the level-i aggregate value for the specified key, stores that value in its reduction MIB, and then, subject to the function's up and domain parameters, passes the updated value to the appropriate parent based on the attribute key. Also, the level-i (i >= 1) virtual node sends the updated level-i aggregate to all its children if the function's down parameter exceeds zero. Upon receipt of a level-i aggregate from a parent, a level-k virtual node stores the value in its ancestor MIB and, if k >= i - down, forwards this aggregate to its children. Probe A Probe collects and returns the aggregate value for a specified attribute key for a specified level of the tree. As Figure 1 illustrates, the
system satisfies a probe for a level-i aggregate value using a four-phase protocol that may be short-circuited when updates have previously propagated either results or partial results up or down the tree. In phase 1, the route probe phase, the system routes the probe up the attribute key's tree to either the root of the level-i subtree or to a node that stores the requested value in its ancestor MIB. In the former case, the system proceeds to phase 2, and in the latter it skips to phase 4. In phase 2, the probe scatter phase, each node that receives a probe request sends it to all of its children unless the node's reduction MIB already has a value that matches the probe's attribute key, in which case the node initiates phase 3 on behalf of its subtree. In phase 3, the probe aggregation phase, when a node receives values for the specified key from each of its children, it executes the aggregate function on these values and either (a) forwards the result to its parent (if its level is less than i) or (b) initiates phase 4 (if it is at level i). Finally, in phase 4, the aggregate routing phase, the aggregate value is routed down to the node that requested it. Note that in the extreme case of a function installed with up = down = 0, a level-i probe can touch all nodes in a level-i subtree, while in the opposite extreme case of a function installed with up = down = ALL, a probe is a completely local operation at a leaf. For probes that include phases 2 (probe scatter) and 3 (probe aggregation), an issue is how to decide when a node should stop waiting for its children to respond and send up its current aggregate value. A node stops waiting for its children when one of three conditions occurs: (1) all children have responded, (2) the ADHT layer signals one or more reconfiguration events that mark all children that have not yet responded as unreachable, or (3) a watchdog timer for the request fires. The last case accounts for nodes that participate in the ADHT
protocol but that fail at the AML level. At a virtual node, continuous probes are handled similarly to one-shot probes, except that such probes are stored in the outstanding probes table for the time period expTime specified in the probe. Thus each update for an attribute triggers re-evaluation of continuous probes for that attribute.

We implement a lease-based mechanism for dynamic adaptation. A level-l virtual node for an attribute can issue the lease for the level-l aggregate to a parent or a child only if up is greater than l or it has leases from all its children. A virtual node at level l can issue the lease for a level-k aggregate, for k > l, to a child only if down ≥ k − l or if it has the lease for that aggregate from its parent. A probe for a level-k aggregate can then be answered by a level-l virtual node if it has a valid lease, irrespective of the up and down values. We are currently designing different policies to decide when to issue a lease and when to revoke a lease, and are also evaluating them with the above mechanism.

Our current prototype does not implement access control on install, update, and probe operations, but we plan to implement Astrolabe's [38] certificate-based restrictions. Also, our current prototype does not restrict the resource consumption of executing the aggregation functions; but techniques from research on resource management in server systems and operating systems [2, 3] can be applied here.

6. ROBUSTNESS
In large-scale systems, reconfigurations are common. Our two main principles for robustness are to guarantee (i) read availability: probes complete in finite time, and (ii) eventual consistency: updates by a live node will be visible to probes by connected nodes in finite time. During reconfigurations, a probe might return a stale value for two reasons. First, reconfigurations lead to incorrectness in the previous aggregate values. Second, the nodes needed for aggregation to answer the probe become
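The two lease-issuing conditions described above are simple boolean predicates. A minimal sketch with names of our own choosing (the prototype's internal interfaces are not published in this section):

```java
// Sketch of the lease-issuing rules. Names are illustrative, not from
// the SDIMS prototype. A level-l node may vend its own level-l aggregate
// lease when `up` > l or it holds leases from all children; it may vend
// a level-k (k > l) aggregate lease to a child when `down` >= k - l or
// it holds that lease from its parent.
public class LeaseRules {
    /** May a level-l node issue a lease for its own level-l aggregate? */
    public static boolean canIssueOwnLevelLease(int l, int up, boolean leasesFromAllChildren) {
        return up > l || leasesFromAllChildren;
    }

    /** May a level-l node issue a lease for a level-k aggregate (k > l) to a child? */
    public static boolean canIssueHigherLevelLease(int l, int k, int down, boolean leaseFromParent) {
        return down >= k - l || leaseFromParent;
    }
}
```

A node holding a valid lease can answer the corresponding probe locally, which is what lets leases override the static up and down settings.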
unreachable. Our system also provides two hooks that applications can use for improved end-to-end robustness in the presence of reconfigurations: (1) on-demand re-aggregation and (2) application-controlled replication. Our system handles reconfigurations at two levels: adaptation at the ADHT layer to ensure connectivity, and adaptation at the AML layer to ensure access to the data in SDIMS.

6.1 ADHT Adaptation
Our ADHT layer adaptation algorithm is the same as Pastry's adaptation algorithm [32]: the leaf sets are repaired as soon as a reconfiguration is detected, and the routing table is repaired lazily. Note that maintaining extra leaf sets does not degrade the fault-tolerance property of the original Pastry; indeed, it enhances the resilience of ADHTs to failures by providing additional routing links. Due to redundancy in the leaf sets and the routing table, updates can be routed towards their root nodes successfully even during failures.

[Figure 7: Default lazy data re-aggregation time line.]

Also note that the administrative isolation property satisfied by our ADHT algorithm ensures that reconfigurations in a level-i domain do not affect probes for level i in a sibling domain.

6.2 AML Adaptation
Broadly, we use two types of strategies for AML adaptation in the face of reconfigurations: (1) replication in time as a fundamental baseline strategy, and (2) replication in space as an additional performance optimization that falls back on replication in time when the system runs out of replicas. We provide two mechanisms for replication in time. First, lazy re-aggregation propagates already received updates to new children or new parents in a lazy fashion over time. Second, applications can reduce the probability of probe response staleness during such repairs
through our flexible API with an appropriate setting of the down parameter.

Lazy Re-aggregation: The DHT layer informs the AML layer about reconfigurations in the network using three function calls: newParent, failedChild, and newChild. On newParent(parent, prefix), all probes in the outstanding probes table corresponding to prefix are re-evaluated. If parent is not null, then aggregation functions and already existing data are lazily transferred in the background. Any new updates, installs, and probes for this prefix are sent to the parent immediately. On failedChild(child, prefix), the AML layer marks the child as inactive, and any outstanding probes that are waiting for data from this child are re-evaluated. On newChild(child, prefix), the AML layer creates space in its data structures for this child.

Figure 7 shows the time line for the default lazy re-aggregation upon reconfiguration. Probes initiated between points 1 and 2 that are affected by reconfigurations are re-evaluated by the AML upon detecting the reconfiguration. Probes that complete or start between points 2 and 8 may return stale answers.

On-demand Re-aggregation: The default lazy re-aggregation scheme propagates old updates through the system lazily. Additionally, using the up and down knobs in the Probe API, applications can force fast on-demand re-aggregation of updates to avoid staleness in the face of reconfigurations. In particular, if an application detects or suspects that an answer is stale, it can re-issue the probe with increased up and down parameters to force the refreshing of the cached data. Note that this strategy is useful only after the DHT adaptation is completed (point 6 on the time line in Figure 7).

Replication in Space: Replication in space is more challenging in our system than in a DHT file location application, because in the latter replication in space can be achieved easily by just replicating the root node's contents. In our system, however, all
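The three reconfiguration callbacks just described can be sketched as follows. The data structures here (a probe table keyed by prefix and a per-child activity map) are our own stand-ins for the prototype's outstanding probes table and child state, and the lazy background transfer is elided:

```java
import java.util.*;

// Illustrative sketch of the DHT-to-AML reconfiguration callbacks
// (newParent, failedChild, newChild). Field and class names are ours,
// not the SDIMS prototype's; lazy background data transfer is omitted.
public class AmlAdapter {
    final Map<String, List<String>> outstandingProbes = new HashMap<>(); // prefix -> probe ids
    final Map<String, Boolean> childActive = new HashMap<>();
    final List<String> reevaluated = new ArrayList<>(); // probes re-evaluated so far

    void newParent(String parent, String prefix) {
        // Re-evaluate all outstanding probes for this prefix; aggregation
        // functions and existing data would then move lazily in the background.
        reevaluated.addAll(outstandingProbes.getOrDefault(prefix, List.of()));
    }

    void failedChild(String child, String prefix) {
        childActive.put(child, false); // mark the child inactive
        // Probes waiting on this child must be re-evaluated.
        reevaluated.addAll(outstandingProbes.getOrDefault(prefix, List.of()));
    }

    void newChild(String child, String prefix) {
        childActive.put(child, true); // make room for the new child's state
    }
}
```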
internal nodes have to be replicated along with the root. In our system, applications control replication in space using the up and down knobs in the Install API; with large up and down values, aggregates at the intermediate virtual nodes are propagated to more nodes in the system. By reducing the number of nodes that have to be accessed to answer a probe, applications can reduce the probability of incorrect results occurring due to the failure of nodes that do not contribute to the aggregate. For example, in a file location application, using a positive down parameter ensures that a file's global aggregate is replicated on nodes other than the root.

[Figure 8: Flexibility of our approach. Average number of messages per operation with different UP and DOWN values in a network of 4096 nodes for different read-to-write ratios.]

Probes for the file location can then be answered without accessing the root; hence they are not affected by the failure of the root. However, note that this technique is not appropriate in some cases. An aggregated value in a file location system is valid as long as the node hosting the file is active, irrespective of the status of other nodes in the system, whereas an application that counts the number of machines in a system may receive incorrect results irrespective of the replication. If reconfigurations are only transient (like a node temporarily not responding due to a burst of load), the replicated aggregate closely or correctly resembles the current state.

7. EVALUATION
We have implemented a prototype of SDIMS in Java using the FreePastry framework [32] and performed large-scale simulation experiments and micro-benchmark experiments on two real networks: 187 machines in our department and 69 machines on the PlanetLab [27] testbed. In all experiments, we use static up and down
values and turn off dynamic adaptation. Our evaluation supports four main conclusions. First, the flexible API provides different propagation strategies that minimize communication resources at different read-to-write ratios. For example, in our simulation we observe Update-Local to be efficient for read-to-write ratios below 0.0001, Update-Up around 1, and Update-All above 50000. Second, our system is scalable with respect to both nodes and attributes. In particular, we find that the maximum node stress in our system is an order of magnitude lower than observed with an Update-All, gossiping approach. Third, in contrast to unmodified Pastry, which violates the path convergence property in up to 14% of cases, our system conforms to the property. Fourth, the system is robust to reconfigurations and adapts to failures within a few seconds.

7.1 Simulation Experiments
Flexibility and Scalability: A major innovation of our system is its ability to provide flexible computation and propagation of aggregates. In Figure 8, we demonstrate the flexibility exposed by the aggregation API explained in Section 3. We simulate a system with 4096 nodes arranged in a domain hierarchy with a branching factor (bf) of 16 and install several attributes with different up and down parameters. We plot the average number of messages per operation incurred for a wide range of read-to-write ratios of the operations for different attributes. Simulations with other network sizes and different branching factors reveal similar results. This graph clearly demonstrates the benefit of supporting a wide range of computation and propagation strategies. Although having a small UP value is efficient for attributes with low read-to-write ratios (write-dominated applications), the probe latency, when reads do occur, may be high, since the probe needs to aggregate the data from all the nodes that did not send their aggregate up. Conversely, applications that wish to improve probe overheads or latencies can increase their UP and DOWN propagation at a potential cost of an increase in write overheads.

[Figure 9: Max node stress for a gossiping approach vs. the ADHT-based approach for different numbers of nodes with an increasing number of sparse attributes.]

Compared to an existing Update-All single aggregation tree approach [38], scalability in SDIMS comes from (1) leveraging DHTs to form multiple aggregation trees that split the load across nodes and (2) flexible propagation that avoids propagation of all updates to all nodes. Figure 9 demonstrates SDIMS's scalability with nodes and attributes. For this experiment, we build a simulator to simulate both Astrolabe [38] (a gossiping, Update-All approach) and our system for an increasing number of sparse attributes. Each attribute corresponds to membership in a multicast session with a small number of participants. For this experiment, the session size is set to 8, the branching factor is set to 16, the propagation mode for SDIMS is Update-Up, and the participant nodes perform continuous probes for the global aggregate value. We plot the maximum node stress (in terms of messages) observed in both schemes for different-sized networks with an increasing number of sessions when the participant of each session performs an update operation. Clearly, the DHT-based scheme is more scalable with respect to attributes than an Update-All gossiping scheme. Observe that at a constant number of attributes, as the number of nodes increases in the system, the maximum node stress increases in the gossiping approach, while it decreases in our approach, as the load of aggregation is spread across more nodes. Simulations with other session sizes (4 and 16) yield
similar results.

Administrative Hierarchy and Robustness: Although the routing protocol of ADHT might lead to an increased number of hops to reach the root for a key as compared to original Pastry, the algorithm conforms to the path convergence and locality properties and thus provides the administrative isolation property. In Figure 10, we quantify the increased path length by comparisons with unmodified Pastry for different-sized networks with different branching factors of the domain hierarchy tree. To quantify the path convergence property, we perform simulations with a large number of probe pairs, each pair probing for a random key starting from two randomly chosen nodes. In Figure 11, we plot the percentage of probe pairs for unmodified Pastry that do not conform to the path convergence property. When the branching factor is low, the domain hierarchy tree is deeper, resulting in a large difference between Pastry and ADHT in the average path length; but it is at these small domain sizes that path convergence fails more often with the original Pastry.

[Figure 10: Average path length to root in Pastry versus ADHT for different branching factors. Note that all lines corresponding to Pastry overlap.]

[Figure 11: Percentage of probe pairs whose paths to the root did not conform to the path convergence property with Pastry.]

[Figure 12: Latency of probes for the aggregate at the global root level with three different modes of aggregate propagation on (a) department machines, and (b) PlanetLab machines.]

7.2 Testbed experiments
We run our prototype
on 180 department machines (some machines ran multiple node instances, so this configuration has a total of 283 SDIMS nodes) and also on 69 machines of the PlanetLab [27] testbed. We measure the performance of our system with two micro-benchmarks. In the first micro-benchmark, we install three aggregation functions of types Update-Local, Update-Up, and Update-All, perform an update operation on all nodes for all three aggregation functions, and measure the latencies incurred by probes for the global aggregate from all nodes in the system. Figure 12 shows the observed latencies for both testbeds. Notice that the latency of Update-Local is high compared to the Update-Up policy. This is because the latency of Update-Local is affected by the presence of even a single slow machine or a single machine with a high-latency network connection.

[Figure 13: Micro-benchmark on the department network showing the behavior of the probes from a single node when failures are happening at some other nodes. All 283 nodes assign a value of 10 to the attribute.]

[Figure 14: Probe performance during failures on 69 machines of the PlanetLab testbed.]

In the second benchmark, we examine robustness. We install one aggregation function of type Update-Up that performs a sum operation on an integer-valued attribute. Each node updates the attribute with the value 10. Then we monitor the latencies and results returned on the probe operation for the global aggregate on one chosen node, while we kill some nodes after every few probes. Figure 13 shows the results on the departmental testbed. Due to the nature of the testbed (machines in a department), there is little change in
the latencies even in the face of reconfigurations. In Figure 14, we present the results of the experiment on the PlanetLab testbed. The root node of the aggregation tree is terminated after about 275 seconds. There is a 5X increase in the latencies after the death of the initial root node, as a more distant node becomes the root node after repairs. In both experiments, the values returned on probes start reflecting the correct situation within a short time after the failures.

From both the testbed benchmark experiments and the simulation experiments on flexibility and scalability, we conclude that (1) the flexibility provided by SDIMS allows applications to trade off read-write overheads (Figure 8), read latency, and sensitivity to slow machines (Figure 12); (2) a good default aggregation strategy is Update-Up, which has moderate overheads on both reads and writes (Figure 8), has moderate read latencies (Figure 12), and is scalable with respect to both nodes and attributes (Figure 9); and (3) small domain sizes are the cases where DHT algorithms most often fail to provide path convergence, and SDIMS ensures path convergence with only a moderate increase in path lengths (Figure 11).

7.3 Applications
SDIMS is designed as a general distributed monitoring and control infrastructure for a broad range of applications. Above, we discuss some simple micro-benchmarks, including a multicast membership service and a calculate-sum function. Van Renesse et al.
[38] provide detailed examples of how such a service can be used for a peer-to-peer caching directory, a data-diffusion service, a publish-subscribe system, barrier synchronization, and voting. Additionally, we have initial experience using SDIMS to construct two significant applications: the control plane for a large-scale distributed file system [12] and a network monitor for identifying heavy hitters that consume excess resources.

Distributed file system control: The PRACTI (Partial Replication, Arbitrary Consistency, Topology Independence) replication system provides a set of mechanisms for data replication over which arbitrary control policies can be layered. We use SDIMS to provide several key functions in order to create a file system over the low-level PRACTI mechanisms. First, nodes use SDIMS as a directory to handle read misses. When a node n receives an object o, it updates the (ReadDir, o) attribute with the value n; when n discards o from its local store, it resets (ReadDir, o) to NULL. At each virtual node, the ReadDir aggregation function simply selects a random non-null child value (if any), and we use the Update-Up policy for propagating updates. Finally, to locate a nearby copy of an object o, a node n1 issues a series of probe requests for the (ReadDir, o) attribute, starting with level = 1 and increasing the level value with each repeated probe request until a non-null node ID n2 is returned. n1 then sends a demand read request to n2, and n2 sends the data if it has it. Conversely, if n2 does not have a copy of o, it sends a nack to n1, and n1 issues a retry probe with the down parameter set to a value larger than used in the previous probe in order to force on-demand re-aggregation, which will yield a fresher value for the retry. Second, nodes subscribe to invalidations and updates to interest sets of files, and nodes use SDIMS to set up and maintain per-interest-set network-topology-sensitive spanning trees for propagating this
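The escalating-level directory lookup described for read misses can be sketched as a short loop. Here `probeFn` is a stand-in of our own for the SDIMS probe call, and the key encoding is illustrative (the real attribute key is the pair (ReadDir, o)):

```java
import java.util.function.BiFunction;

// Sketch of the read-miss directory lookup: probe the (ReadDir, o)
// attribute at level 1, 2, ... until a non-null node id comes back.
// `probeFn` and the key encoding are our own stand-ins, not the
// SDIMS prototype's API.
public class ReadDirLookup {
    public static String locate(String objectId, int maxLevel,
                                BiFunction<String, Integer, String> probeFn) {
        for (int level = 1; level <= maxLevel; level++) {
            String nodeId = probeFn.apply("ReadDir:" + objectId, level);
            if (nodeId != null) return nodeId; // a nearby replica holder was found
        }
        return null; // no copy is known anywhere in the tree
    }
}
```

Because lower levels are probed first, the first non-null answer tends to name a node in a nearby subtree; a nack from that node triggers a retry probe with a larger down value to force on-demand re-aggregation, as described above.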
information. To subscribe to invalidations for interest set i, a node n1 first updates the (Inval, i) attribute with its identity n1, and the aggregation function at each virtual node selects one non-null child value. Finally, n1 probes increasing levels of the (Inval, i) attribute until it finds the first node n2 ≠ n1; n1 then uses n2 as its parent in the spanning tree. n1 also issues a continuous probe for this attribute at this level so that it is notified of any change to its spanning tree parent. Spanning trees for streams of pushed updates are maintained in a similar manner.

In the future, we plan to use SDIMS for at least two additional services within this replication system. First, we plan to use SDIMS to track the read and write rates to different objects; prefetch algorithms will use this information to prioritize replication [40, 41]. Second, we plan to track the ranges of invalidation sequence numbers seen by each node for each interest set in order to augment the spanning trees described above with additional hole filling to allow nodes to locate specific invalidations they have missed. Overall, our initial experience with using SDIMS for the PRACTI replication system suggests that (1) the general aggregation interface provided by SDIMS simplifies the construction of distributed applications: given the low-level PRACTI mechanisms, we were able to construct a basic file system that uses SDIMS for several distinct control tasks in under two weeks; and (2) the weak consistency guarantees provided by SDIMS meet the requirements of this application: each node's controller effectively treats information from SDIMS as hints, and if a contacted node does not have the needed data, the controller retries, using SDIMS on-demand re-aggregation to obtain a fresher hint.

Distributed heavy hitter problem: The goal of the heavy hitter problem is to identify network sources, destinations, or protocols that account for significant or unusual amounts of
traffic. As noted by Estan et al. [13], this information is useful for a variety of applications such as intrusion detection (e.g., port scanning), denial of service detection, worm detection and tracking, fair network allocation, and network maintenance. Significant work has been done on developing high-performance stream-processing algorithms for identifying heavy hitters at one router, but this is just a first step; ideally these applications would like not just one router's view of the heavy hitters but an aggregate view. We use SDIMS to allow local information about heavy hitters to be pooled into a view of global heavy hitters. For each destination IP address IPx, a node updates the attribute (DestBW, IPx) with the number of bytes sent to IPx in the last time window. The aggregation function for attribute type DestBW is installed with the Update-Up strategy and simply adds the values from child nodes. Nodes perform continuous probes for the global aggregate of the attribute and raise an alarm when the global aggregate value goes above a specified limit. Note that only nodes sending data to a particular IP address perform probes for the corresponding attribute. Also note that techniques from [25] can be extended to the hierarchical case to trade off precision for communication bandwidth.

8. RELATED WORK
The aggregation abstraction we use in our work is heavily influenced by the Astrolabe [38] project. Astrolabe adopts a Propagate-All approach and unstructured gossiping techniques to attain robustness [5]. However, any gossiping scheme requires aggressive replication of the aggregates. While such aggressive replication is efficient for read-dominated attributes, it incurs a high message cost for attributes with a small read-to-write ratio. Our approach provides a flexible API for applications to set propagation rules according to their read-to-write ratios. Other closely related projects include Willow [39], Cone [4], DASIS [1], and SOMO [45]. Willow, DASIS, and SOMO
build a single tree for aggregation. Cone builds a tree per attribute and requires a total order on the attribute values.

Several academic [15, 21, 42] and commercial [37] distributed monitoring systems have been designed to monitor the status of large networked systems. Some of them are centralized, with all the monitoring data collected and analyzed at a central host. Ganglia [15, 23] uses a hierarchical system where the attributes are replicated within clusters using multicast and then cluster aggregates are further aggregated along a single tree. Sophia [42] is a distributed monitoring system designed with a declarative logic programming model where the location of query execution is both explicit in the language and can be calculated during evaluation. This research is complementary to our work. TAG [21] collects information from a large number of sensors along a single tree.

The observation that DHTs internally provide a scalable forest of reduction trees is not new. Plaxton et al.'s [28] original paper describes not a DHT, but a system for hierarchically aggregating and querying object location data in order to route requests to nearby copies of objects. Many systems, building upon both Plaxton's bit-correcting strategy [32, 46] and upon other strategies [24, 29, 35], have chosen to hide this power and export a simple and general distributed hash table abstraction as a useful building block for a broad range of distributed applications. Some of these systems internally make use of the reduction forest not only for routing but also for caching [32]; but for simplicity, these systems do not generally export this powerful functionality in their external interface. Our goal is to develop and expose the internal reduction forest of DHTs as a similarly general and useful abstraction. Although object location is a predominant target application for DHTs, several other applications like multicast [8, 9, 33, 36] and DNS [11] are also built using
DHTs. All these systems implicitly perform aggregation on some attribute, and each one of them must be designed to handle any reconfigurations in the underlying DHT. With the aggregation abstraction provided by our system, designing and building such applications becomes easier.

Internal DHT trees typically do not satisfy the domain locality properties required in our system. Castro et al. [7] and Gummadi et al. [17] point out the importance of path convergence from the perspective of achieving efficiency and investigate the performance of Pastry and other DHT algorithms, respectively. SkipNet [18] provides domain-restricted routing, where a key search is limited to the specified domain. This interface can be used to ensure path convergence by searching in the lowest domain and moving up to the next domain when the search reaches the root in the current domain. Although this strategy guarantees path convergence, it loses the aggregation tree abstraction property of DHTs, as the domain-constrained routing might touch a node more than once (as it searches forward and then backward to stay within a domain).

9. CONCLUSIONS
This paper presents a Scalable Distributed Information Management System (SDIMS) that aggregates information in large-scale networked systems and that can serve as a basic building block for a broad range of applications. For large-scale systems, hierarchical aggregation is a fundamental abstraction for scalability. We build our system by extending ideas from Astrolabe and DHTs to achieve (i) scalability with respect to both nodes and attributes through a new aggregation abstraction that helps leverage DHTs' internal trees for aggregation, (ii) flexibility through a simple API that lets applications control propagation of reads and writes, (iii) administrative isolation through simple augmentations of current DHT algorithms, and (iv) robustness to node and network reconfigurations through lazy re-aggregation, on-demand re-aggregation, and
tunable spatial replication.

Acknowledgements
We are grateful to J. C. Browne, Robert van Renesse, Amin Vahdat, Jay Lepreau, and the anonymous reviewers for their helpful comments on this work.

10. REFERENCES
[1] K. Albrecht, R. Arnold, M. Gahwiler, and R. Wattenhofer. Join and Leave in Peer-to-Peer Systems: The DASIS Approach. Technical report, CS, ETH Zurich, 2003.
[2] G. Back, W. H. Hsieh, and J. Lepreau. Processes in KaffeOS: Isolation, Resource Management, and Sharing in Java. In Proc. OSDI, Oct. 2000.
[3] G. Banga, P. Druschel, and J. Mogul. Resource Containers: A New Facility for Resource Management in Server Systems. In Proc. OSDI, Feb. 1999.
[4] R. Bhagwan, P. Mahadevan, G. Varghese, and G. M. Voelker. Cone: A Distributed Heap-Based Approach to Resource Selection. Technical Report CS2004-0784, UCSD, 2004.
[5] K. P. Birman. The Surprising Power of Epidemic Communication. In Proceedings of FuDiCo, 2003.
[6] B. Bloom. Space/time tradeoffs in hash coding with allowable errors. Comm. of the ACM, 13(7):422-425, 1970.
[7] M. Castro, P. Druschel, Y. C. Hu, and A. Rowstron. Exploiting Network Proximity in Peer-to-Peer Overlay Networks. Technical Report MSR-TR-2002-82, MSR.
[8] M. Castro, P. Druschel, A.-M. Kermarrec, A. Nandi, A. Rowstron, and A. Singh. SplitStream: High-bandwidth Multicast in a Cooperative Environment. In SOSP, 2003.
[9] M. Castro, P. Druschel, A.-M. Kermarrec, and A. Rowstron. SCRIBE: A Large-scale and Decentralised Application-level Multicast Infrastructure. IEEE JSAC (Special issue on Network Support for Multicast Communications), 2002.
[10] J. Challenger, P. Dantzig, and A. Iyengar. A scalable and highly available system for serving dynamic data at frequently accessed web sites. In Proceedings of ACM/IEEE Supercomputing '98 (SC98), Nov. 1998.
[11] R. Cox, A. Muthitacharoen, and R. T. Morris. Serving DNS using a Peer-to-Peer Lookup Service. In IPTPS, 2002.
[12] M. Dahlin, L. Gao, A. Nayate, A. Venkataramani, P. Yalagandula, and J. Zheng. PRACTI replication for large-scale systems. Technical Report TR-04-28, The University of Texas at Austin, 2004.
[13] C. Estan, G. Varghese, and M. Fisk. Bitmap algorithms for counting active flows on high speed links. In Internet Measurement Conference, 2003.
[14] Y. Fu, J. Chase, B. Chun, S. Schwab, and A. Vahdat. SHARP: An architecture for secure resource peering. In Proc. SOSP, Oct. 2003.
[15] Ganglia: Distributed Monitoring and Execution System. http://ganglia.sourceforge.net.
[16] S. Gribble, A. Halevy, Z. Ives, M. Rodrig, and D. Suciu. What Can Peer-to-Peer Do for Databases, and Vice Versa? In Proceedings of WebDB, 2001.
[17] K. Gummadi, R. Gummadi, S. D. Gribble, S. Ratnasamy, S. Shenker, and I. Stoica. The Impact of DHT Routing Geometry on Resilience and Proximity. In SIGCOMM, 2003.
[18] N. J. A. Harvey, M. B. Jones, S. Saroiu, M. Theimer, and A. Wolman. SkipNet: A Scalable Overlay Network with Practical Locality Properties. In USITS, Mar. 2003.
[19] R. Huebsch, J. M. Hellerstein, N. Lanham, B. T. Loo, S. Shenker, and I. Stoica. Querying the Internet with PIER. In Proceedings of the VLDB Conference, May 2003.
[20] C. Intanagonwiwat, R. Govindan, and D. Estrin. Directed diffusion: a scalable and robust communication paradigm for sensor networks. In MobiCom, 2000.
[21] S. R. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. TAG: a Tiny AGgregation Service for ad-hoc Sensor Networks. In OSDI, 2002.
[22] D. Malkhi. Dynamic Lookup Networks. In FuDiCo, 2002.
[23] M. L. Massie, B. N. Chun, and D. E. Culler. The ganglia distributed monitoring system: Design, implementation, and experience. In submission.
[24] P. Maymounkov and D. Mazieres. Kademlia: A Peer-to-peer Information System Based on the XOR Metric. In Proceedings of IPTPS, Mar. 2002.
[25] C. Olston and J. Widom. Offering a precision-performance tradeoff for aggregation queries over replicated data. In VLDB, pages 144-155, Sept. 2000.
[26] K. Petersen, M. Spreitzer, D. Terry, M. Theimer, and A. Demers. Flexible Update Propagation for Weakly Consistent Replication. In Proc. SOSP, Oct. 1997.
[27] PlanetLab. http://www.planet-lab.org.
[28] C. G. Plaxton, R. Rajaraman, and A. W. Richa. Accessing Nearby Copies of Replicated Objects in a Distributed Environment. In ACM SPAA, 1997.
[29] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A Scalable Content Addressable Network. In Proceedings of ACM SIGCOMM, 2001.
[30] S. Ratnasamy, S. Shenker, and I. Stoica. Routing Algorithms for DHTs: Some Open Questions. In IPTPS, Mar. 2002.
[31] T. Roscoe, R. Mortier, P. Jardetzky, and S. Hand. InfoSpect: Using a Logic Language for System Health Monitoring in Distributed Systems. In Proceedings of the SIGOPS European Workshop, 2002.
[32] A. Rowstron and P. Druschel. Pastry: Scalable, Distributed Object Location and Routing for Large-scale Peer-to-peer Systems. In Middleware, 2001.
[33] S. Ratnasamy, M. Handley, R. Karp, and S. Shenker. Application-level Multicast using Content-addressable Networks. In Proceedings of NGC, Nov. 2001.
[34] W. Stallings. SNMP, SNMPv2, and CMIP. Addison-Wesley, 1993.
[35] I. Stoica, R. Morris, D. Karger, F. Kaashoek, and H. Balakrishnan. Chord: A scalable Peer-To-Peer lookup service for internet applications. In ACM SIGCOMM, 2001.
[36] S. Zhuang, B. Zhao, A. Joseph, R. Katz, and J. Kubiatowicz. Bayeux: An Architecture for Scalable and Fault-tolerant Wide-Area Data Dissemination. In NOSSDAV, 2001.
[37] IBM Tivoli Monitoring. www.ibm.com/software/tivoli/products/monitor.
[38] R. van Renesse, K. P. Birman, and W. Vogels. Astrolabe: A Robust and Scalable Technology for Distributed System Monitoring, Management, and Data Mining. TOCS, 2003.
[39] R. van Renesse and A. Bozdog. Willow: DHT, Aggregation, and Publish/Subscribe in One Protocol. In IPTPS, 2004.
[40] A. Venkataramani, P. Weidmann, and M. Dahlin. Bandwidth constrained placement in a WAN. In PODC, Aug. 2001.
[41] A. Venkataramani, P. Yalagandula, R. Kokku, S. Sharif, and M. Dahlin. Potential costs and benefits of long-term prefetching for content-distribution. Elsevier Computer Communications, 25(4):367-375, Mar. 2002.
[42] M. Wawrzoniak, L. Peterson, and T. Roscoe. Sophia: An Information Plane for Networked Systems. In HotNets-II, 2003.
[43] R. Wolski, N. Spring, and J. Hayes. The network weather service: A distributed resource performance forecasting service for metacomputing. Journal of Future Generation Computing Systems, 15(5-6):757-768, Oct. 1999.
[44] P. Yalagandula and M. Dahlin. SDIMS: A scalable distributed information management system. Technical Report TR-03-47, Dept. of Computer Sciences, UT Austin, Sep. 2003.
[45] Z. Zhang, S.-M. Shi, and J. Zhu. SOMO: Self-Organized Metadata Overlay for Resource Management in P2P DHT. In IPTPS, 2003.
[46] B. Y. Zhao, J. D. Kubiatowicz, and A. D. Joseph. Tapestry: An Infrastructure for Fault-tolerant Wide-area Location and Routing. Technical Report UCB/CSD-01-1141, UC Berkeley, Apr.
2001.\n390","lvl-3":"A Scalable Distributed Information Management System *\nWe present a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications by providing detailed views of nearby information and summary views of global information.\nTo serve as a basic building block, a SDIMS should have four properties: scalability to many nodes and attributes, flexibility to accommodate a broad range of applications, administrative isolation for security and availability, and robustness to node and network failures.\nWe design, implement and evaluate a SDIMS that (1) leverages Distributed Hash Tables (DHT) to create scalable aggregation trees, (2) provides flexibility through a simple API that lets applications control propagation of reads and writes, (3) provides administrative isolation through simple extensions to current DHT algorithms, and (4) achieves robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication.\nThrough extensive simulations and micro-benchmark experiments, we observe that our system is an order of magnitude more scalable than existing approaches, achieves isolation properties at the cost of modestly increased read latency in comparison to flat DHTs, and gracefully handles failures.\n1.\nINTRODUCTION\nThe goal of this research is to design and build a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications.\nMonitoring, querying, and reacting to changes in the state of a distributed system are core components of applications such as system management [15, 31, 37, 42], service placement [14, 43], data sharing and caching [18, 29, 32, 35, 46], 
sensor monitoring and control [20, 21], multicast tree formation [8, 9, 33, 36, 38], and naming and request routing [10, 11]. We therefore speculate that a SDIMS in a networked system would provide a "distributed operating systems backbone" and facilitate the development and deployment of new distributed services.

For a large-scale information system, hierarchical aggregation is a fundamental abstraction for scalability. Rather than expose all information to all nodes, hierarchical aggregation allows a node to access detailed views of nearby information and summary views of global information. In a SDIMS based on hierarchical aggregation, different nodes can therefore receive different answers to the query "find a [nearby] node with at least 1 GB of free memory" or "find a [nearby] copy of file foo." A hierarchical system that aggregates information through reduction trees [21, 38] allows nodes to access information they care about while maintaining system scalability.

To be used as a basic building block, a SDIMS should have four properties. First, the system should be scalable: it should accommodate large numbers of participating nodes, and it should allow applications to install and monitor large numbers of data attributes. Enterprise and global scale systems today might have tens of thousands to millions of nodes, and these numbers will increase over time. Similarly, we hope to support many applications, and each application may track several attributes (e.g., the load and free memory of a system's machines) or millions of attributes (e.g., which files are stored on which machines).

Second, the system should have the flexibility to accommodate a broad range of applications and attributes. For example, read-dominated attributes like numCPUs rarely change in value, while write-dominated attributes like numProcesses change quite often. An approach tuned for read-dominated attributes will consume high bandwidth when applied to write-dominated attributes. Conversely, an approach tuned for write-dominated attributes will suffer from unnecessary query latency or imprecision for read-dominated attributes. Therefore, a SDIMS should provide mechanisms to handle different types of attributes and leave the policy decision of tuning replication to the applications.

Third, a SDIMS should provide administrative isolation. In a large system, it is natural to arrange nodes in an organizational or an administrative hierarchy. A SDIMS should support administrative isolation in which queries about an administrative domain's information can be satisfied within the domain, so that the system can operate during disconnections from other domains, so that an external observer cannot monitor or affect intra-domain queries, and so that domain-scoped queries are supported efficiently.

Fourth, the system must be robust to node failures and disconnections. A SDIMS should adapt to reconfigurations in a timely fashion and should also provide mechanisms so that applications can trade off the cost of adaptation against the consistency level of the aggregated results when reconfigurations occur.

We draw inspiration from two previous works: Astrolabe [38] and Distributed Hash Tables (DHTs). Astrolabe [38] is a robust information management system. Astrolabe provides the abstraction of a single logical aggregation tree that mirrors a system's administrative hierarchy. It provides a general interface for installing new aggregation functions and provides eventual consistency on its data. Astrolabe is robust due to its use of an unstructured gossip protocol for disseminating information and its strategy of replicating all aggregated attribute values for a subtree to all nodes in the subtree. This combination allows any communication pattern to yield eventual consistency and allows any node to answer any query using local information. This high degree of replication, however, may limit the system's ability to accommodate large numbers of attributes. Also, although the approach works well for read-dominated attributes, an update at one node can eventually affect the state at all nodes, which may limit the system's flexibility to support write-dominated attributes.

Recent research in peer-to-peer structured networks resulted in Distributed Hash Tables (DHTs) [18, 28, 29, 32, 35, 46]: a data structure that scales with the number of nodes and that distributes the read-write load for different queries among the participating nodes. It is interesting to note that although these systems export a global hash table abstraction, many of them internally make use of what can be viewed as a scalable system of aggregation trees to, for example, route a request for a given key to the right DHT node. Indeed, rather than export a general DHT interface, Plaxton et al.'s [28] original application makes use of hierarchical aggregation to allow nodes to locate nearby copies of objects. It seems appealing to develop a SDIMS abstraction that exposes this internal functionality in a general way so that scalable trees for aggregation can be a basic system building block alongside the DHTs.

At first glance, it might appear obvious that simply fusing DHTs with Astrolabe's aggregation abstraction will result in a SDIMS. However, meeting the SDIMS requirements forces a design to address four questions: (1) How to scalably map different attributes to different aggregation trees in a DHT mesh? (2) How to provide flexibility in the aggregation to accommodate different application requirements? (3) How to adapt a global, flat DHT mesh to attain the administrative isolation property? (4) How to provide robustness without unstructured gossip and total replication?

The key contributions of this paper that form the foundation of our SDIMS design are as follows.

1. We define a new aggregation abstraction that specifies both attribute type and attribute name and that associates an aggregation function with a particular attribute type. This abstraction paves the way for utilizing the DHT system's internal trees for aggregation and for achieving scalability with both nodes and attributes.

2. We provide a flexible API that lets applications control the propagation of reads and writes and thus trade off update cost, read latency, replication, and staleness.

3. We augment an existing DHT algorithm to ensure path convergence and path locality properties in order to achieve administrative isolation.

4. We provide robustness to node and network reconfigurations by (a) providing temporal replication through lazy reaggregation that guarantees eventual consistency and (b) ensuring that our flexible API lets demanding applications gain additional robustness by using tunable spatial replication of data aggregates, by performing fast on-demand reaggregation to augment the underlying lazy reaggregation, or by doing both.

We have built a prototype of SDIMS. Through simulations and micro-benchmark experiments on a number of department machines and PlanetLab [27] nodes, we observe that the prototype achieves scalability with respect to both nodes and attributes through use of its flexible API, inflicts an order of magnitude lower maximum node stress than unstructured gossiping schemes, achieves isolation properties at a cost of modestly increased read latency compared to flat DHTs, and gracefully handles node failures.

This initial study discusses key aspects of an ongoing system building effort, but it does not address all issues in building a SDIMS. For example, we believe that our strategies for providing robustness will mesh well with techniques such as supernodes [22] and other ongoing efforts to improve DHTs [30] for further improving robustness. Also, although splitting aggregation among many trees improves scalability for simple queries, this approach may make complex and multi-attribute queries more expensive compared to a single tree. Additional work is needed to understand the significance of this limitation for real workloads and, if necessary, to adapt query planning techniques from DHT abstractions [16, 19] to scalable aggregation tree abstractions.

In Section 2, we explain the hierarchical aggregation abstraction that SDIMS provides to applications. In Sections 3 and 4, we describe the design of our system for achieving the flexibility, scalability, and administrative isolation requirements of a SDIMS. In Section 5, we detail the implementation of our prototype system. Section 6 addresses the issue of adaptation to topological reconfigurations. In Section 7, we present the evaluation of our system through large-scale simulations and microbenchmarks on real networks. Section 8 details the related work, and Section 9 summarizes our contribution.

2. AGGREGATION ABSTRACTION
3. FLEXIBILITY
3.1 Aggregation API
3.1.1 Install
3.1.2 Update
3.1.3 Probe
3.1.4 Dynamic Adaptation
4. SCALABILITY
4.1 Leveraging DHTs
4.2 Administrative Isolation
5. PROTOTYPE IMPLEMENTATION
6. ROBUSTNESS
6.1 ADHT Adaptation
6.2 AML Adaptation
7. EVALUATION
7.1 Simulation Experiments
7.2 Testbed Experiments
7.3 Applications

8. RELATED WORK

The aggregation abstraction we use in our work is heavily influenced by the Astrolabe [38] project. Astrolabe adopts a PropagateAll strategy and unstructured gossiping techniques to attain robustness [5]. However, any gossiping scheme requires aggressive replication of the aggregates. While such aggressive replication is efficient for read-dominated attributes, it incurs high message cost for attributes with a small read-to-write ratio. Our approach provides a flexible API for applications to set propagation rules according to their read-to-write ratios. Other closely related projects include Willow [39], Cone [4], DASIS [1], and SOMO [45]. Willow, DASIS, and SOMO build a single tree for aggregation. Cone builds a tree per attribute and requires a total order on the attribute values.

Several academic [15, 21, 42] and commercial [37] distributed monitoring systems have been designed to monitor the status of large networked systems. Some of them are centralized, where all the monitoring data is collected and analyzed at a central host. Ganglia [15, 23] uses a hierarchical system where the attributes are replicated within clusters using multicast and then cluster aggregates are further aggregated along a single tree. Sophia [42] is a distributed monitoring system designed with a declarative logic programming model where the location of query execution is both explicit in the language and can be calculated during evaluation. This research is complementary to our work. TAG [21] collects information from a large number of sensors along a single tree.

The observation that DHTs internally provide a scalable forest of reduction trees is not new. Plaxton et al.'s [28] original paper describes not a DHT, but a system for hierarchically aggregating and querying object location data in order to route requests to nearby copies of objects. Many systems, building upon both Plaxton's bit-correcting strategy [32, 46] and upon other strategies [24, 29, 35], have chosen to hide this power and export a simple, general distributed hash table abstraction as a useful building block for a broad range of distributed applications. Some of these systems internally make use of the reduction forest not only for routing but also for caching [32], but for simplicity, these systems do not generally export this powerful functionality in their external interface. Our goal is to develop and expose the internal reduction forest of DHTs as a similarly general and useful abstraction.

Although object location is a predominant target application for DHTs, several other applications like multicast [8, 9, 33, 36] and DNS [11] are also built using DHTs. All these systems implicitly perform aggregation on some attribute, and each one of them must be
designed to handle any reconfigurations in the underlying DHT.\nWith the aggregation abstraction provided by our system, designing and building of such applications becomes easier.\nInternal DHT trees typically do not satisfy domain locality properties required in our system.\nCastro et al. [7] and Gummadi et al. [17] point out the importance of path convergence from the perspective of achieving efficiency and investigate the performance of Pastry and other DHT algorithms, respectively.\nSkipNet [18] provides domain restricted routing where a key search is limited to the specified domain.\nThis interface can be used to ensure path convergence by searching in the lowest domain and moving up to the next domain when the search reaches the root in the current domain.\nAlthough this strategy guarantees path convergence, it loses the aggregation tree abstraction property of DHTs as the domain constrained routing might touch a node more than once (as it searches forward and then backward to stay within a domain).\n9.\nCONCLUSIONS\nThis paper presents a Scalable Distributed Information Management System (SDIMS) that aggregates information in large-scale networked systems and that can serve as a basic building block for a broad range of applications.\nFor large scale systems, hierarchical aggregation is a fundamental abstraction for scalability.\nWe build our system by extending ideas from Astrolabe and DHTs to achieve (i) scalability with respect to both nodes and attributes through a new aggregation abstraction that helps leverage DHT's internal trees for aggregation, (ii) flexibility through a simple API that lets applications control propagation of reads and writes, (iii) administrative isolation through simple augmentations of current DHT algorithms, and (iv) robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication.","lvl-4":"A Scalable Distributed Information Management System *\nWe present a 
Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications by providing detailed views of nearby information and summary views of global information.\nTo serve as a basic building block, a SDIMS should have four properties: scalability to many nodes and attributes, flexibility to accommodate a broad range of applications, administrative isolation for security and availability, and robustness to node and network failures.\nWe design, implement and evaluate a SDIMS that (1) leverages Distributed Hash Tables (DHT) to create scalable aggregation trees, (2) provides flexibility through a simple API that lets applications control propagation of reads and writes, (3) provides administrative isolation through simple extensions to current DHT algorithms, and (4) achieves robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication.\nThrough extensive simulations and micro-benchmark experiments, we observe that our system is an order of magnitude more scalable than existing approaches, achieves isolation properties at the cost of modestly increased read latency in comparison to flat DHTs, and gracefully handles failures.\n1.\nINTRODUCTION\nThe goal of this research is to design and build a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications.\nWe therefore speculate that a SDIMS in a networked system would provide a \"distributed operating systems backbone\" and facilitate the development and deployment of new distributed services.\nFor a large scale information system, hierarchical aggregation is a fundamental abstraction for scalability.\nRather than expose all 
information to all nodes, hierarchical aggregation allows a node to access detailed views of nearby information and summary views of global information.\nIn a SDIMS based on hierarchical aggregation, different nodes can therefore receive different answers to the query \"find a [nearby] node with at least 1 GB of free memory\" or \"find a [nearby] copy of file foo.\"\nA hierarchical system that aggregates information through reduction trees [21, 38] allows nodes to access information they care about while maintaining system scalability.\nTo be used as a basic building block, a SDIMS should have four properties.\nFirst, the system should be scalable: it should accommodate large numbers of participating nodes, and it should allow applications to install and monitor large numbers of data attributes.\nEnterprise and global scale systems today might have tens of thousands to millions of nodes and these numbers will increase over time.\nSimilarly, we hope to support many applications, and each application may track several attributes (e.g., the load and free memory of a system's machines) or millions of attributes (e.g., which files are stored on which machines).\nSecond, the system should have flexibility to accommodate a broad range of applications and attributes.\nFor example, readdominated attributes like numCPUs rarely change in value, while write-dominated attributes like numProcesses change quite often.\nAn approach tuned for read-dominated attributes will consume high bandwidth when applied to write-dominated attributes.\nConversely, an approach tuned for write-dominated attributes will suffer from unnecessary query latency or imprecision for read-dominated attributes.\nTherefore, a SDIMS should provide mechanisms to handle different types of attributes and leave the policy decision of tuning replication to the applications.\nThird, a SDIMS should provide administrative isolation.\nIn a large system, it is natural to arrange nodes in an organizational or an 
administrative hierarchy.\nA SDIMS should support administra\nFourth, the system must be robust to node failures and disconnections.\nA SDIMS should adapt to reconfigurations in a timely fashion and should also provide mechanisms so that applications can tradeoff the cost of adaptation with the consistency level in the aggregated results when reconfigurations occur.\nWe draw inspiration from two previous works: Astrolabe [38] and Distributed Hash Tables (DHTs).\nAstrolabe [38] is a robust information management system.\nAstrolabe provides the abstraction of a single logical aggregation tree that mirrors a system's administrative hierarchy.\nIt provides a general interface for installing new aggregation functions and provides eventual consistency on its data.\nAstrolabe is robust due to its use of an unstructured gossip protocol for disseminating information and its strategy of replicating all aggregated attribute values for a subtree to all nodes in the subtree.\nThis combination allows any communication pattern to yield eventual consistency and allows any node to answer any query using local information.\nThis high degree of replication, however, may limit the system's ability to accommodate large numbers of attributes.\nAlso, although the approach works well for read-dominated attributes, an update at one node can eventually affect the state at all nodes, which may limit the system's flexibility to support write-dominated attributes.\nIt is interesting to note that although these systems export a global hash table abstraction, many of them internally make use of what can be viewed as a scalable system of aggregation trees to, for example, route a request for a given key to the right DHT node.\nIndeed, rather than export a general DHT interface, Plaxton et al.'s [28] original application makes use of hierarchical aggregation to allow nodes to locate nearby copies of objects.\nIt seems appealing to develop a SDIMS abstraction that exposes this internal 
functionality in a general way so that scalable trees for aggregation can be a basic system building block alongside the DHTs.\nAt a first glance, it might appear to be obvious that simply fusing DHTs with Astrolabe's aggregation abstraction will result in a SDIMS.\nHowever, meeting the SDIMS requirements forces a design to address four questions: (1) How to scalably map different attributes to different aggregation trees in a DHT mesh?\n(2) How to provide flexibility in the aggregation to accommodate different application requirements?\n(3) How to adapt a global, flat DHT mesh to attain administrative isolation property?\nand (4) How to provide robustness without unstructured gossip and total replication?\nThe key contributions of this paper that form the foundation of our SDIMS design are as follows.\n1.\nWe define a new aggregation abstraction that specifies both attribute type and attribute name and that associates an aggregation function with a particular attribute type.\nThis abstraction paves the way for utilizing the DHT system's internal trees for aggregation and for achieving scalability with both nodes and attributes.\n2.\nWe provide a flexible API that lets applications control the propagation of reads and writes and thus trade off update cost, read latency, replication, and staleness.\n3.\nWe augment an existing DHT algorithm to ensure path convergence and path locality properties in order to achieve administrative isolation.\n4.\nWe provide robustness to node and network reconfigurations by (a) providing temporal replication through lazy reaggre\nWe have built a prototype of SDIMS.\nThis initial study discusses key aspects of an ongoing system building effort, but it does not address all issues in building a SDIMS.\nAlso, although splitting aggregation among many trees improves scalability for simple queries, this approach may make complex and multi-attribute queries more expensive compared to a single tree.\nAdditional work is needed to understand 
the significance of this limitation for real workloads and, if necessary, to adapt query planning techniques from DHT abstractions [16, 19] to scalable aggregation tree abstractions.\nIn Section 2, we explain the hierarchical aggregation abstraction that SDIMS provides to applications.\nIn Sections 3 and 4, we describe the design of our system for achieving the flexibility, scalability, and administrative isolation requirements of a SDIMS.\nIn Section 5, we detail the implementation of our prototype system.\nSection 6 addresses the issue of adaptation to the topological reconfigurations.\nIn Section 7, we present the evaluation of our system through large-scale simulations and microbenchmarks on real networks.\nSection 8 details the related work, and Section 9 summarizes our contribution.\n8.\nRELATED WORK\nThe aggregation abstraction we use in our work is heavily influenced by the Astrolabe [38] project.\nAstrolabe adopts a PropagateAll and unstructured gossiping techniques to attain robustness [5].\nHowever, any gossiping scheme requires aggressive replication of the aggregates.\nWhile such aggressive replication is efficient for read-dominated attributes, it incurs high message cost for attributes with a small read-to-write ratio.\nOur approach provides a flexible API for applications to set propagation rules according to their read-to-write ratios.\nWillow, DASIS and SOMO build a single tree for aggregation.\nCone builds a tree per attribute and requires a total order on the attribute values.\nSeveral academic [15, 21, 42] and commercial [37] distributed monitoring systems have been designed to monitor the status of large networked systems.\nGanglia [15, 23] uses a hierarchical system where the attributes are replicated within clusters using multicast and then cluster aggregates are further aggregated along a single tree.\nSophia [42] is a distributed monitoring system designed with a declarative logic programming model where the location of query execution is 
both explicit in the language and can be calculated during evaluation.\nThis research is complementary to our work.\nTAG [21] collects information from a large number of sensors along a single tree.\nThe observation that DHTs internally provide a scalable forest of reduction trees is not new.\nPlaxton et al.'s [28] original paper describes not a DHT, but a system for hierarchically aggregating and querying object location data in order to route requests to nearby copies of objects.\nOur goal is to develop and expose the internal reduction forest of DHTs as a similarly general and useful abstraction.\nAll these systems implicitly perform aggregation on some attribute, and each one of them must be designed to handle any reconfigurations in the underlying DHT.\nWith the aggregation abstraction provided by our system, designing and building of such applications becomes easier.\nInternal DHT trees typically do not satisfy domain locality properties required in our system.\nSkipNet [18] provides domain restricted routing where a key search is limited to the specified domain.\nAlthough this strategy guarantees path convergence, it loses the aggregation tree abstraction property of DHTs as the domain constrained routing might touch a node more than once (as it searches forward and then backward to stay within a domain).\n9.\nCONCLUSIONS\nThis paper presents a Scalable Distributed Information Management System (SDIMS) that aggregates information in large-scale networked systems and that can serve as a basic building block for a broad range of applications.\nFor large scale systems, hierarchical aggregation is a fundamental abstraction for scalability.","lvl-2":"A Scalable Distributed Information Management System *\nWe present a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications by providing detailed 
views of nearby information and summary views of global information.\nTo serve as a basic building block, a SDIMS should have four properties: scalability to many nodes and attributes, flexibility to accommodate a broad range of applications, administrative isolation for security and availability, and robustness to node and network failures.\nWe design, implement and evaluate a SDIMS that (1) leverages Distributed Hash Tables (DHT) to create scalable aggregation trees, (2) provides flexibility through a simple API that lets applications control propagation of reads and writes, (3) provides administrative isolation through simple extensions to current DHT algorithms, and (4) achieves robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication.\nThrough extensive simulations and micro-benchmark experiments, we observe that our system is an order of magnitude more scalable than existing approaches, achieves isolation properties at the cost of modestly increased read latency in comparison to flat DHTs, and gracefully handles failures.\n1.\nINTRODUCTION\nThe goal of this research is to design and build a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications.\nMonitoring, querying, and reacting to changes in the state of a distributed system are core components of applications such as system management [15, 31, 37, 42], service placement [14, 43], data sharing and caching [18, 29, 32, 35, 46], sensor monitoring and control [20, 21], multicast tree formation [8, 9, 33, 36, 38], and naming and request routing [10, 11].\nWe therefore speculate that a SDIMS in a networked system would provide a \"distributed operating systems backbone\" and facilitate the development and deployment of new distributed services.\nFor a large scale 
information system, hierarchical aggregation is a fundamental abstraction for scalability.\nRather than expose all information to all nodes, hierarchical aggregation allows a node to access detailed views of nearby information and summary views of global information.\nIn a SDIMS based on hierarchical aggregation, different nodes can therefore receive different answers to the query \"find a [nearby] node with at least 1 GB of free memory\" or \"find a [nearby] copy of file foo.\"\nA hierarchical system that aggregates information through reduction trees [21, 38] allows nodes to access information they care about while maintaining system scalability.\nTo be used as a basic building block, a SDIMS should have four properties.\nFirst, the system should be scalable: it should accommodate large numbers of participating nodes, and it should allow applications to install and monitor large numbers of data attributes.\nEnterprise and global scale systems today might have tens of thousands to millions of nodes and these numbers will increase over time.\nSimilarly, we hope to support many applications, and each application may track several attributes (e.g., the load and free memory of a system's machines) or millions of attributes (e.g., which files are stored on which machines).\nSecond, the system should have flexibility to accommodate a broad range of applications and attributes.\nFor example, readdominated attributes like numCPUs rarely change in value, while write-dominated attributes like numProcesses change quite often.\nAn approach tuned for read-dominated attributes will consume high bandwidth when applied to write-dominated attributes.\nConversely, an approach tuned for write-dominated attributes will suffer from unnecessary query latency or imprecision for read-dominated attributes.\nTherefore, a SDIMS should provide mechanisms to handle different types of attributes and leave the policy decision of tuning replication to the applications.\nThird, a SDIMS should 
provide administrative isolation. In a large system, it is natural to arrange nodes in an organizational or an administrative hierarchy. A SDIMS should support administrative isolation in which queries about an administrative domain's information can be satisfied within the domain, so that the system can operate during disconnections from other domains, so that an external observer cannot monitor or affect intra-domain queries, and so that domain-scoped queries are supported efficiently.

Fourth, the system must be robust to node failures and disconnections. A SDIMS should adapt to reconfigurations in a timely fashion and should also provide mechanisms so that applications can trade off the cost of adaptation against the consistency level in the aggregated results when reconfigurations occur.

We draw inspiration from two previous works: Astrolabe [38] and Distributed Hash Tables (DHTs). Astrolabe [38] is a robust information management system. Astrolabe provides the abstraction of a single logical aggregation tree that mirrors a system's administrative hierarchy. It provides a general interface for installing new aggregation functions and provides eventual consistency on its data. Astrolabe is robust due to its use of an unstructured gossip protocol for disseminating information and its strategy of replicating all aggregated attribute values for a subtree to all nodes in the subtree. This combination allows any communication pattern to yield eventual consistency and allows any node to answer any query using local information. This high degree of replication, however, may limit the system's ability to accommodate large numbers of attributes. Also, although the approach works well for read-dominated attributes, an update at one node can eventually affect the state at all nodes, which may limit the system's flexibility to support write-dominated attributes.

Recent research in peer-to-peer structured networks resulted in Distributed Hash Tables (DHTs) [18, 28, 29, 32, 35, 46]: a data structure that scales with the number of nodes and that distributes the read-write load for different queries among the participating nodes. It is interesting to note that although these systems export a global hash table abstraction, many of them internally make use of what can be viewed as a scalable system of aggregation trees to, for example, route a request for a given key to the right DHT node. Indeed, rather than export a general DHT interface, Plaxton et al.'s [28] original application makes use of hierarchical aggregation to allow nodes to locate nearby copies of objects. It seems appealing to develop a SDIMS abstraction that exposes this internal functionality in a general way so that scalable trees for aggregation can be a basic system building block alongside the DHTs.

At first glance, it might appear obvious that simply fusing DHTs with Astrolabe's aggregation abstraction will result in a SDIMS. However, meeting the SDIMS requirements forces a design to address four questions: (1) How to scalably map different attributes to different aggregation trees in a DHT mesh? (2) How to provide flexibility in the aggregation to accommodate different application requirements? (3) How to adapt a global, flat DHT mesh to attain the administrative isolation property? And (4) how to provide robustness without unstructured gossip and total replication?

The key contributions of this paper that form the foundation of our SDIMS design are as follows.

1. We define a new aggregation abstraction that specifies both attribute type and attribute name and that associates an aggregation function with a particular attribute type. This abstraction paves the way for utilizing the DHT system's internal trees for aggregation and for achieving scalability with both nodes and attributes.

2. We provide a flexible API that lets applications control the propagation of reads and writes and thus trade off update cost, read latency, replication, and
staleness.

3. We augment an existing DHT algorithm to ensure path convergence and path locality properties in order to achieve administrative isolation.

4. We provide robustness to node and network reconfigurations by (a) providing temporal replication through lazy reaggregation that guarantees eventual consistency and (b) ensuring that our flexible API allows demanding applications to gain additional robustness by using tunable spatial replication of data aggregates, by performing fast on-demand reaggregation to augment the underlying lazy reaggregation, or by doing both.

We have built a prototype of SDIMS. Through simulations and micro-benchmark experiments on a number of department machines and PlanetLab [27] nodes, we observe that the prototype achieves scalability with respect to both nodes and attributes through use of its flexible API, inflicts an order of magnitude lower maximum node stress than unstructured gossiping schemes, achieves isolation properties at a cost of modestly increased read latency compared to flat DHTs, and gracefully handles node failures.

This initial study discusses key aspects of an ongoing system building effort, but it does not address all issues in building a SDIMS. For example, we believe that our strategies for providing robustness will mesh well with techniques such as supernodes [22] and other ongoing efforts to improve DHTs [30] for further improving robustness. Also, although splitting aggregation among many trees improves scalability for simple queries, this approach may make complex and multi-attribute queries more expensive compared to a single tree. Additional work is needed to understand the significance of this limitation for real workloads and, if necessary, to adapt query planning techniques from DHT abstractions [16, 19] to scalable aggregation tree abstractions.

In Section 2, we explain the hierarchical aggregation abstraction that SDIMS provides to applications. In Sections 3 and 4, we describe the design of our system for achieving the flexibility, scalability, and administrative isolation requirements of a SDIMS. In Section 5, we detail the implementation of our prototype system. Section 6 addresses the issue of adaptation to topological reconfigurations. In Section 7, we present the evaluation of our system through large-scale simulations and microbenchmarks on real networks. Section 8 details the related work, and Section 9 summarizes our contributions.

2. AGGREGATION ABSTRACTION

Aggregation is a natural abstraction for a large-scale distributed information system because aggregation provides scalability by allowing a node to view detailed information about the state near it and progressively coarser-grained summaries about progressively larger subsets of a system's data [38].

Our aggregation abstraction is defined across a tree spanning all nodes in the system. Each physical node in the system is a leaf, and each subtree represents a logical group of nodes. Note that logical groups can correspond to administrative domains (e.g., department or university) or groups of nodes within a domain (e.g., 10 workstations on a LAN in a CS department). An internal non-leaf node, which we call a virtual node, is simulated by one or more physical nodes at the leaves of the subtree for which the virtual node is the root. We describe how to form such trees in a later section.

Each physical node has local data stored as a set of (attributeType, attributeName, value) tuples such as (configuration, numCPUs, 16), (mcast membership, session foo, yes), or (file stored, foo, myIPaddress). The system associates an aggregation function ftype with each attribute type, and for each level-i subtree Ti in the system, the system defines an aggregate value Vi,type,name for each (attributeType, attributeName) pair as follows. For a (physical) leaf node T0 at level 0, V0,type,name is the locally stored value for the attribute type and name, or NULL if no matching tuple
exists. Then the aggregate value for a level-i subtree Ti is the aggregation function for the type, ftype, computed across the aggregate values of each of Ti's k children:

Vi,type,name = ftype(V(1)i-1,type,name, ..., V(k)i-1,type,name),

where V(j)i-1,type,name denotes the aggregate value for Ti's jth child.

Although SDIMS allows arbitrary aggregation functions, it is often desirable that these functions satisfy the hierarchical computation property [21]: f(v1, ..., vn) = f(f(v1, ..., vs1), f(vs1+1, ..., vs2), ..., f(vsk+1, ..., vn)), where vi is the value of an attribute at node i. For example, the average operation, defined as avg(v1, ..., vn) = (1/n) * (v1 + ... + vn), does not satisfy the property. Instead, if an attribute stores values as tuples (sum, count), the attribute satisfies the hierarchical computation property while still allowing applications to compute the average from the aggregate sum and count values.

Finally, note that for a large-scale system, it is difficult or impossible to insist that the aggregation value returned by a probe correspond to the function computed over the current values at the leaves at the instant of the probe. Therefore, our system provides only weak consistency guarantees: specifically, eventual consistency as defined in [38].

3. FLEXIBILITY

A major innovation of our work is enabling flexible aggregate computation and propagation. The definition of the aggregation abstraction allows considerable flexibility in how, when, and where aggregate values are computed and propagated. While previous systems [15, 29, 32, 35, 38, 46] implement a single static strategy, we argue that a SDIMS should provide flexible computation and propagation to efficiently support a wide variety of applications with diverse requirements. In order to provide this flexibility, we develop a simple interface that decomposes the aggregation abstraction into three pieces of functionality: install, update, and probe. This definition of the aggregation abstraction allows our system to provide a continuous spectrum of strategies ranging from lazy aggregate computation and
propagation on reads to aggressive immediate computation and propagation on writes. In Figure 1, we illustrate both extreme strategies and an intermediate strategy.

Under the lazy Update-Local computation and propagation strategy, an update (or write) only affects local state. Then, a probe (or read) that reads a level-i aggregate value is sent up the tree to the issuing node's level-i ancestor and then down the tree to the leaves. The system then computes the desired aggregate value at each layer up the tree until the level-i ancestor holds the desired value. Finally, the level-i ancestor sends the result down the tree to the issuing node. In the other extreme case of the aggressive Update-All immediate computation and propagation on writes [38], when an update occurs, changes are aggregated up the tree, and each new aggregate value is flooded to all of a node's descendants. In this case, each level-i node not only maintains the aggregate values for the level-i subtree but also receives and locally stores copies of all of its ancestors' level-j (j > i) aggregation values. Also, a leaf satisfies a probe for a level-i aggregate using purely local data. In an intermediate Update-Up strategy, the root of each subtree maintains the subtree's current aggregate value, and when an update occurs, the leaf node updates its local state and passes the update to its parent; then each successive enclosing subtree updates its aggregate value and passes the new value to its parent. This strategy satisfies a leaf's probe for a level-i aggregate value by sending the probe up to the level-i ancestor of the leaf and then sending the aggregate value down to the leaf. Finally, notice that other strategies exist. In general, an Update-Up_k-Down_j strategy aggregates up to the kth level and propagates the aggregate values of a node at level l (s.t. l <= k) downward for j levels.

Table 1: Arguments for the install operation

A SDIMS must provide a wide range of flexible computation and propagation strategies to applications for it to be a general abstraction. An application should be able to choose a particular mechanism based on its read-to-write ratio that reduces the bandwidth consumption while attaining the required responsiveness and precision. Note that the read-to-write ratios of the attributes that applications install vary extensively. For example, a read-dominated attribute like numCPUs rarely changes in value, while a write-dominated attribute like numProcesses changes quite often. An aggregation strategy like Update-All works well for read-dominated attributes but suffers high bandwidth consumption when applied to write-dominated attributes. Conversely, an approach like Update-Local works well for write-dominated attributes but suffers from unnecessary query latency or imprecision for read-dominated attributes.

SDIMS also allows non-uniform computation and propagation across the aggregation tree, with different up and down parameters in different subtrees, so that applications can adapt to the spatial and temporal heterogeneity of read and write operations. With respect to spatial heterogeneity, access patterns may differ for different parts of the tree, requiring different propagation strategies for different parts of the tree. Similarly, with respect to temporal heterogeneity, access patterns may change over time, requiring different strategies over time.

3.1 Aggregation API

We provide the flexibility described above by splitting the aggregation API into three functions: Install() installs an aggregation function that defines an operation on an attribute type and specifies the update strategy that the function will use, Update() inserts or modifies a node's local value for an attribute, and Probe() obtains an aggregate value for a specified subtree. The install interface allows applications to specify the k and j parameters of
the Update-Up_k-Down_j strategy along with the aggregation function. The update interface invokes the aggregation of an attribute on the tree according to the corresponding aggregation function's aggregation strategy. The probe interface not only allows applications to obtain the aggregated value for a specified tree but also allows a probing node to continuously fetch the values for a specified time, thus enabling an application to adapt to spatial and temporal heterogeneity. The rest of the section describes these three interfaces in detail.

3.1.1 Install

The Install operation installs an aggregation function in the system. The arguments for this operation are listed in Table 1. The attrType argument denotes the type of attributes on which this aggregation function is invoked. Installed functions are soft state that must be periodically renewed or they will be garbage collected at expTime. The arguments up and down specify the aggregate computation and propagation strategy Update-Up_k-Down_j. The domain argument, if present, indicates that the aggregation function should be installed on all nodes in the specified domain; otherwise the function is installed on all nodes in the system.

Figure 1: Flexible API

Table 2: Arguments for the probe operation

3.1.2 Update

The Update operation takes three arguments attrType, attrName, and value and creates a new (attrType, attrName, value) tuple or updates the value of an old tuple with matching attrType and attrName at a leaf node. The update interface meshes with the installed aggregate computation and propagation strategy to provide flexibility. In particular, as outlined above and described in detail in Section 5, after a leaf applies an update locally, the update may trigger re-computation of aggregate values up the tree and may also trigger propagation of changed aggregate values down the tree. Notice that our abstraction associates an aggregation function with only an attrType but lets updates specify an attrName along with the attrType. This technique helps achieve scalability with respect to nodes and attributes, as described in Section 4.

3.1.3 Probe

The Probe operation returns the value of an attribute to an application. The complete argument set for the probe operation is shown in Table 2. Along with the attrName and attrType arguments, a level argument specifies the level at which the answers are required for an attribute. In our implementation we choose to return results at all levels k < l for a level-l probe because (i) it is inexpensive, as the nodes traversed for a level-l probe also contain the level-k aggregates for k < l, and we expect the network cost of transmitting the additional information to be small for the small aggregates on which we focus, and (ii) it is useful, as applications can efficiently get several aggregates with a single probe (e.g., for domain-scoped queries as explained in Section 4.2). Probes with mode set to continuous and with finite expTime enable applications to handle spatial and temporal heterogeneity. When node A issues a continuous probe at level l for an attribute, then regardless of the up and down parameters, updates for the attribute at any node in A's level-l ancestor's subtree are aggregated up to level l and the aggregated value is propagated down along the path from the ancestor to A.
Note that continuous mode enables SDIMS to support a distributed sensor-actuator mechanism where a sensor monitors a level-i aggregate with a continuous mode probe and triggers an actuator upon receiving new values for the probe. The up and down arguments enable applications to perform on-demand fast re-aggregation during reconfigurations, where a forced re-aggregation is done for the corresponding levels even if the aggregated value is available, as we discuss in Section 6. When present, the up and down arguments are interpreted as described in the install operation.

3.1.4 Dynamic Adaptation

At the API level, the up and down arguments in the install API can be regarded as hints, since they suggest a computation strategy but do not affect the semantics of an aggregation function. A SDIMS implementation can dynamically adjust its up/down strategies for an attribute based on its measured read/write frequency. But a virtual intermediate node needs to know the current up and down propagation values to decide whether the local aggregate is fresh in order to answer a probe. This is the key reason why up and down need to be statically defined at install time and cannot be specified in the update operation. For dynamic adaptation, we implement a lease-based mechanism where a node issues a lease to a parent or a child denoting that it will keep propagating updates to that parent or child. We are currently evaluating different policies to decide when to issue a lease and when to revoke a lease.

4. SCALABILITY

Our design achieves scalability with respect to both nodes and attributes through two key ideas. First, it carefully defines the aggregation abstraction to mesh well with its underlying scalable DHT system. Second, it refines the basic DHT abstraction to form an Autonomous DHT (ADHT) to achieve the administrative isolation properties that are crucial to scaling for large real-world systems. In this section, we describe these two ideas in detail.

4.1 Leveraging DHTs

In contrast to previous systems [4, 15, 38, 39, 45], SDIMS's aggregation abstraction specifies both an attribute type and an attribute name and associates an aggregation function with a type rather than just specifying and associating a function with a name. Installing a single function that can operate on many different named attributes matching a type improves scalability for "sparse attribute types" with large, sparsely-filled name spaces. For example, to construct a file location service, our interface allows us to install a single function that computes an aggregate value for any named file. A subtree's aggregate value for (FILELOC, name) would be the ID of a node in the subtree that stores the named file. Conversely, Astrolabe copes with sparse attributes by having aggregation functions compute sets or lists and suggests that scalability can be improved by representing such sets with Bloom filters [6].

Supporting sparse names within a type provides at least two advantages. First, when the value associated with a name is updated, only the state associated with that name needs to be updated and propagated to other nodes. Second, splitting values associated with different names into different aggregation values allows our system to leverage Distributed Hash Tables (DHTs) to map different names to different trees and thereby spread the function's logical root node's load and state across multiple physical nodes.

Figure 2: The DHT tree corresponding to key 111 (DHTtree111) and the corresponding aggregation tree.

Given this abstraction, scalably mapping attributes to DHTs is straightforward. DHT systems assign a long, random ID to each node and define an algorithm to route a request for key k to a node rootk such that the union of paths from all nodes forms a tree DHTtreek rooted at the node rootk. Now, as illustrated in Figure 2, by aggregating an attribute along the aggregation tree corresponding to DHTtreek for k = hash(attribute type, attribute name), different attributes will be aggregated along different trees. In comparison to a scheme where all attributes are aggregated along a single tree, aggregating along multiple trees incurs lower maximum node stress: whereas in a single aggregation tree approach, the root and the intermediate nodes pass around more messages than leaf nodes, in a DHT-based multi-tree, each node acts as an intermediate aggregation point for some attributes and as a leaf node for other attributes. Hence, this approach distributes the onus of aggregation across all nodes.

4.2 Administrative Isolation

Aggregation trees should provide administrative isolation by ensuring that, for each domain, the virtual node at the root of the smallest aggregation subtree containing all nodes of that domain is hosted by a node in that domain. Administrative isolation is important for three reasons: (i) security, so that updates and probes flowing in a domain are not accessible outside the domain; (ii) availability, so that queries for values in a domain are not affected by failures of nodes in other domains; and (iii) efficiency, so that domain-scoped queries can be simple and efficient.

To provide administrative isolation to aggregation trees, a DHT should satisfy two properties:

1. Path Locality: Search paths should always be contained in the smallest possible domain.

2. Path Convergence: Search paths for a key from different nodes in a domain should converge at a node in that domain.

Existing DHTs support path locality [18] or can easily support it by using domain nearness as the distance metric [7, 17], but they do not guarantee path convergence, as those systems try to optimize the search path to the root to reduce response latency. For example, Pastry [32] uses prefix routing in which each node's routing table contains one row per hexadecimal digit in the nodeId space, where the ith row contains a list of nodes whose nodeIds differ from the current
node's nodeId in the ith digit, with one entry for each possible digit value. Given a routing topology, to route a packet to an arbitrary destination key, a node in Pastry forwards the packet to the node with a nodeId prefix matching the key in at least one more digit than the current node. If such a node is not known, the current node uses an additional data structure, the leaf set containing the L immediate higher and lower neighbors in the nodeId space, and forwards the packet to a node with an identical prefix but that is numerically closer to the destination key in the nodeId space. This process continues until the destination node appears in the leaf set, after which the message is routed directly. Pastry's expected number of routing steps is log n, where n is the number of nodes, but as Figure 3 illustrates, this algorithm does not guarantee path convergence: if two nodes in a domain have nodeIds that match a key in the same number of bits, both of them can route to a third node outside the domain when routing for that key.

Figure 3: Example shows how the isolation property is violated with original Pastry. We also show the corresponding aggregation tree.

Figure 4: Autonomous DHT satisfying the isolation property. The corresponding aggregation tree is also shown.

Simple modifications to Pastry's route table construction and key-routing protocols yield an Autonomous DHT (ADHT) that satisfies the path locality and path convergence properties. As Figure 4 illustrates, whenever two nodes in a domain share the same prefix with respect to a key and no other node in the domain has a longer prefix, our algorithm introduces a virtual node at the boundary of the domain corresponding to that prefix plus the next digit of the key; such a virtual node is simulated by the existing node whose id is numerically closest to the virtual node's id. Our ADHT's routing table differs from Pastry's in two ways. First, each node maintains a separate leaf set for each domain of which it is a part. Second, nodes use two proximity metrics when populating the routing tables: hierarchical domain proximity is the primary metric, and network distance is secondary. Then, to route a packet to a global root for a key, the ADHT routing algorithm uses the routing table and the leaf set entries to route to each successive enclosing domain's root (the virtual or real node in the domain matching the key in the maximum number of digits). Additional details about the ADHT algorithm are available in an extended technical report [44].

Properties. Maintaining a different leaf set for each administrative hierarchy level increases the number of neighbors that each node tracks to (2^b) * log_{2^b} n + c*l, from (2^b) * log_{2^b} n + c in unmodified Pastry, where b is the number of bits in a digit, n is the number of nodes, c is the leaf set size, and l is the number of domain levels. Routing requires O(log_{2^b} n + l) steps, compared to O(log_{2^b} n) steps in Pastry; also, each routing hop may be longer than in Pastry because the modified algorithm's routing table prefers same-domain nodes over nearby nodes. We experimentally quantify the additional routing costs in Section 7.

In a large system, the ADHT topology allows domains to improve security for sensitive attribute types by installing them only within a specified domain. Then, aggregation occurs entirely within the domain, and a node external to the domain can neither observe nor affect the updates and aggregation computations of the attribute type. Furthermore, though we have not implemented this feature in the prototype, the ADHT topology would also support domain-restricted probes that could ensure that no one outside of a domain can observe a probe for data stored within the domain. The ADHT topology also enhances availability by allowing the common case of probes for data within a domain to depend only on the domain's nodes. This, for example, allows a domain that becomes disconnected from the rest of the Internet to continue to answer queries for local data.

Figure 5: Example for domain-scoped queries

Aggregation trees that provide administrative isolation also enable the definition of simple and efficient domain-scoped aggregation functions to support queries like "what is the average load on machines in domain X?" For example, consider an aggregation function to count the number of machines in an example system with three machines, illustrated in Figure 5. Each leaf node l updates attribute NumMachines with a value vl containing a set of tuples of the form (Domain, Count) for each domain of which the node is a part. In the example, the node with name A1.A. performs an update with the value {(A1.A., 1), (A., 1), (., 1)}. An aggregation function at an internal virtual node hosted on node N with child set C computes the aggregate as a set of tuples: for each domain D that N is part of, form a tuple (D, sum over c in C of (count | (D, count) in vc)). This computation is illustrated in Figure 5. Now a query for NumMachines with level set to MAX will return the aggregate values at each intermediate virtual node on the path to the root as a set of tuples (tree level, aggregated value), from which it is easy to extract the count of machines at each enclosing domain. For example, A1 would receive {(2, {(B1.B., 1), (B., 1), (., 3)}), (1, {(A1.A., 1), (A., 2), (., 2)}), (0, {(A1.A., 1), (A., 1), (., 1)})}. Note that supporting domain-scoped queries would be less convenient and less efficient if aggregation trees did not conform to the system's administrative structure. It would be less efficient because each intermediate virtual node would have to maintain a list of all values at the leaves in its subtree along with their names, and it would be less convenient because applications that need an aggregate for a domain would have to pick out the values of nodes in that domain from the list returned by a probe and perform the computation themselves.

5. PROTOTYPE IMPLEMENTATION

The internal design of
our SDIMS prototype comprises of two layers: the Autonomous DHT (ADHT) layer manages the overlay topology of the system and the Aggregation Management Layer (AML) maintains attribute tuples, performs aggregations, stores and propagates aggregate values.\nGiven the ADHT construction described in Section 4.2, each node implements an Aggregation Management Layer (AML) to support the flexible API described in Section 3.\nIn this section, we describe the internal state and operation of the AML layer of a node in the system.\nFigure 6: Example illustrating the data structures and the organization of them at a node.\nWe refer to a store of (attribute type, attribute name, value) tuples as a Management Information Base or MIB, following the terminology from Astrolabe [38] and SNMP [34].\nWe refer an (attribute type, attribute name) tuple as an attribute key.\nAs Figure 6 illustrates, each physical node in the system acts as several virtual nodes in the AML: a node acts as leaf for all attribute keys, as a level-1 subtree root for keys whose hash matches the node's ID in b prefix bits (where b is the number of bits corrected in each step of the ADHT's routing scheme), as a level-i subtree root for attribute keys whose hash matches the node's ID in the initial i \u2217 b bits, and as the system's global root for attribute keys whose hash matches the node's ID in more prefix bits than any other node (in case of a tie, the first non-matching bit is ignored and the comparison is continued [46]).\nTo support hierarchical aggregation, each virtual node at the root of a level-i subtree maintains several MIBs that store (1) child MIBs containing raw aggregate values gathered from children, (2) a reduction MIB containing locally aggregated values across this raw information, and (3) an ancestor MIB containing aggregate values scattered down from ancestors.\nThis basic strategy of maintaining child, reduction, and ancestor MIBs is based on Astrolabe [38], but our structured 
propagation strategy channels information that flows up according to its attribute key and our flexible propagation strategy only sends child updates up and ancestor aggregate results down as far as specified by the attribute key's aggregation function.\nNote that in the discussion below, for ease of explanation, we assume that the routing protocol is correcting single bit at a time (b = 1).\nOur system, built upon Pastry, handles multi-bit correction (b = 4) and is a simple extension to the scheme described here.\nFor a given virtual node ni at level i, each child MIB contains the subset of a child's reduction MIB that contains tuples that match ni's node ID in i bits and whose up aggregation function attribute is at least i.\nThese local copies make it easy for a node to recompute a level-i aggregate value when one child's input changes.\nNodes maintain their child MIBs in stable storage and use a simplified version of the Bayou log exchange protocol (sans conflict detection and resolution) for synchronization after disconnections [26].\nVirtual node ni at level i maintains a reduction MIB of tuples with a tuple for each key present in any child MIB containing the attribute type, attribute name, and output of the attribute type's aggregate functions applied to the children's tuples.\nA virtual node ni at level i also maintains an ancestor MIB to store the tuples containing attribute key and a list of aggregate values at different levels scattered down from ancestors.\nNote that the\nlist for a key might contain multiple aggregate values for a same level but aggregated at different nodes (see Figure 4).\nSo, the aggregate values are tagged not only with level information, but are also tagged with ID of the node that performed the aggregation.\nLevel-0 differs slightly from other levels.\nEach level-0 leaf node maintains a local MIB rather than maintaining child MIBs and a reduction MIB.\nThis local MIB stores information about the local node's state inserted by 
local applications via update() calls.\nWe envision various \"sensor\" programs and applications inserting data into the local MIB.\nFor example, one program might monitor the local configuration and perform updates with information such as total memory, free memory, etc. A distributed file system might perform an update for each file stored on the local node.\nAlong with these MIBs, a virtual node maintains two other tables: an aggregation function table and an outstanding probes table.\nAn aggregation function table contains the aggregation function and installation arguments (see Table 1) associated with an attribute type or an attribute type and name.\nEach aggregate function is installed on all nodes in a domain's subtree, so the aggregate function table can be thought of as a special case of the ancestor MIB, with domain functions always installed up to a root within a specified domain and down to all nodes within the domain.\nThe outstanding probes table maintains temporary information regarding in-progress probes.\nGiven these data structures, it is simple to support the three API functions described in Section 3.1.\nInstall The Install operation (see Table 1) installs on a domain an aggregation function that acts on a specified attribute type.\nExecution of an install operation for function aggrFunc on attribute type attrType proceeds in two phases: first, the install request is passed up the ADHT tree with the attribute key (attrType, null) until it reaches the root for that key within the specified domain.\nThen, the request is flooded down the tree and installed on all intermediate and leaf nodes.\nUpdate When a level-i virtual node receives an update for an attribute from a child below, it first recomputes the level-i aggregate value for the specified key, stores that value in its reduction MIB, and then, subject to the function's up and domain parameters, passes the updated value to the appropriate parent based on the attribute key.\nAlso, the level-i (i > 1) virtual
node sends the updated level-i aggregate to all its children if the function's down parameter exceeds zero.\nUpon receipt of a level-i aggregate from a parent, a level-k virtual node stores the value in its ancestor MIB and, if k > i - down, forwards this aggregate to its children.\nProbe A Probe collects and returns the aggregate value for a specified attribute key for a specified level of the tree.\nAs Figure 1 illustrates, the system satisfies a probe for a level-i aggregate value using a four-phase protocol that may be short-circuited when updates have previously propagated either results or partial results up or down the tree.\nIn phase 1, the route probe phase, the system routes the probe up the attribute key's tree to either the root of the level-i subtree or to a node that stores the requested value in its ancestor MIB.\nIn the former case, the system proceeds to phase 2, and in the latter it skips to phase 4.\nIn phase 2, the probe scatter phase, each node that receives a probe request sends it to all of its children unless the node's reduction MIB already has a value that matches the probe's attribute key, in which case the node initiates phase 3 on behalf of its subtree.\nIn phase 3, the probe aggregation phase, when a node receives values for the specified key from each of its children, it executes the aggregate function on these values and either (a) forwards the result to its parent (if its level is less than i) or (b) initiates phase 4 (if it is at level i).\nFinally, in phase 4, the aggregate routing phase, the aggregate value is routed down to the node that requested it.\nNote that in the extreme case of a function installed with up = down = 0, a level-i probe can touch all nodes in a level-i subtree, while in the opposite extreme case of a function installed with up = down = ALL, a probe is a completely local operation at a leaf.\nFor probes that include phases 2 (probe scatter) and 3 (probe aggregation), an issue is how to decide when a node
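The scatter/aggregation phases (2 and 3) and their short-circuiting can be pictured with a toy in-memory tree. This is a minimal sketch under stated assumptions, not the SDIMS implementation: `VirtualNode` and its fields are illustrative names, and caching the probe result in the reduction MIB is a simplification of what updates would normally populate.

```python
class VirtualNode:
    """Toy virtual node: a reduction MIB plus child links."""
    def __init__(self, level, children=None):
        self.level = level
        self.children = children or []
        self.reduction_mib = {}   # attribute key -> locally aggregated value

    def probe(self, key, aggr):
        # Short-circuit: a cached reduction value answers the probe
        # without touching the subtree (phase 2 is skipped).
        if key in self.reduction_mib:
            return self.reduction_mib[key]
        if not self.children:          # leaf with no value for this key
            return None
        # Phase 2/3: scatter the probe to children, then aggregate answers.
        vals = [c.probe(key, aggr) for c in self.children]
        vals = [v for v in vals if v is not None]
        result = aggr(vals) if vals else None
        self.reduction_mib[key] = result   # cache for later probes
        return result

# Four leaves each holding a count of 1; the level-1 root aggregates them.
leaves = [VirtualNode(0) for _ in range(4)]
for leaf in leaves:
    leaf.reduction_mib[("count", "machines")] = 1
root = VirtualNode(1, leaves)
print(root.probe(("count", "machines"), sum))  # -> 4
```

A second probe for the same key would be answered from the root's reduction MIB alone, which is the short-circuit the four-phase protocol exploits.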
should stop waiting for its children to respond and send up its current aggregate value.\nA node stops waiting for its children when one of three conditions occurs: (1) all children have responded, (2) the ADHT layer signals one or more reconfiguration events that mark all children that have not yet responded as unreachable, or (3) a watchdog timer for the request fires.\nThe last case accounts for nodes that participate in the ADHT protocol but fail at the AML level.\nAt a virtual node, continuous probes are handled similarly to one-shot probes, except that such probes are stored in the outstanding probes table for a time period of expTime specified in the probe.\nThus each update for an attribute triggers re-evaluation of continuous probes for that attribute.\nWe implement a lease-based mechanism for dynamic adaptation.\nA level-l virtual node for an attribute can issue the lease for the level-l aggregate to a parent or a child only if up is greater than l or it has leases from all its children.\nA virtual node at level l can issue the lease for a level-k aggregate for k > l to a child only if down > k - l or if it has the lease for that aggregate from its parent.\nNow a probe for a level-k aggregate can be answered by a level-l virtual node if it has a valid lease, irrespective of the up and down values.\nWe are currently designing different policies to decide when to issue a lease and when to revoke a lease, and are also evaluating them with the above mechanism.\nOur current prototype does not implement access control on install, update, and probe operations, but we plan to implement Astrolabe's [38] certificate-based restrictions.\nAlso, our current prototype does not restrict the resource consumption in executing the aggregation functions; but techniques from research on resource management in server systems and operating systems [2, 3] can be applied here.\n6.\nROBUSTNESS\nIn large-scale systems, reconfigurations are common.\nOur two main principles for
robustness are to guarantee (i) read availability--probes complete in finite time, and (ii) eventual consistency--updates by a live node will be visible to probes by connected nodes in finite time.\nDuring reconfigurations, a probe might return a stale value for two reasons.\nFirst, reconfigurations render previously computed aggregate values incorrect.\nSecond, the nodes whose inputs are needed to answer the probe may become unreachable.\nOur system also provides two hooks that applications can use for improved end-to-end robustness in the presence of reconfigurations: (1) on-demand re-aggregation and (2) application-controlled replication.\nOur system handles reconfigurations at two levels--adaptation at the ADHT layer to ensure connectivity and adaptation at the AML layer to ensure access to the data in SDIMS.\n6.1 ADHT Adaptation\nOur ADHT layer adaptation algorithm is the same as Pastry's adaptation algorithm [32]--the leaf sets are repaired as soon as a reconfiguration is detected and the routing table is repaired lazily.\nNote that maintaining extra leaf sets does not degrade the fault-tolerance property of the original Pastry; indeed, it enhances the resilience of ADHTs to failures by providing additional routing links.\nDue to redundancy in the leaf sets and the routing table, updates can be routed towards their root nodes successfully even during failures.\nFigure 7: Default lazy data re-aggregation time line.\nAlso note that the administrative isolation property satisfied by our ADHT algorithm ensures that reconfigurations in a level-i domain do not affect the probes for level i in a sibling domain.\n6.2 AML Adaptation\nBroadly, we use two types of strategies for AML adaptation in the face of reconfigurations: (1) replication in time as a fundamental baseline strategy, and (2) replication in space as an additional performance optimization that falls back on replication in time when the system runs out of replicas.\nWe provide two mechanisms for replication in
time.\nFirst, lazy re-aggregation propagates already received updates to new children or new parents in a lazy fashion over time.\nSecond, applications can reduce the probability of probe response staleness during such repairs through our flexible API with an appropriate setting of the down parameter.\nLazy Re-aggregation: The DHT layer informs the AML layer about reconfigurations in the network using the following three function calls--newParent, failedChild, and newChild.\nOn newParent(parent, prefix), all probes in the outstanding-probes table corresponding to prefix are re-evaluated.\nIf parent is not null, then aggregation functions and already existing data are lazily transferred in the background.\nAny new updates, installs, and probes for this prefix are sent to the parent immediately.\nOn failedChild(child, prefix), the AML layer marks the child as inactive, and any outstanding probes that are waiting for data from this child are re-evaluated.\nOn newChild(child, prefix), the AML layer creates space in its data structures for this child.\nFigure 7 shows the time line for the default lazy re-aggregation upon reconfiguration.\nProbes that are initiated between points 1 and 2 and that are affected by reconfigurations are re-evaluated by the AML upon detecting the reconfiguration.\nProbes that complete or start between points 2 and 8 may return stale answers.\nOn-demand Re-aggregation: The default lazy aggregation scheme lazily propagates the old updates in the system.\nAdditionally, using the up and down knobs in the Probe API, applications can force on-demand fast re-aggregation of updates to avoid staleness in the face of reconfigurations.\nIn particular, if an application detects or suspects that an answer is stale, it can re-issue the probe with increased up and down parameters to force a refresh of the cached data.\nNote that this strategy will be useful only after the DHT adaptation is completed (point 6 on the time line in Figure 7).\nReplication in Space:
Replication in space is more challenging in our system than in a DHT file location application because replication in space can be achieved easily in the latter by just replicating the root node's contents.\nIn our system, however, all internal nodes have to be replicated along with the root.\nIn our system, applications control replication in space using the up and down knobs in the Install API; with large up and down values, aggregates at the intermediate virtual nodes are propagated to more nodes in the system.\nBy reducing the number of nodes that have to be accessed to answer a probe, applications can reduce the probability of incorrect results occurring due to the failure of nodes that do not contribute to the aggregate.\nFor example, in a file location application, using a positive down parameter ensures that a file's global aggregate is replicated on nodes other than the root.\nFigure 8: Flexibility of our approach with different UP and DOWN values in a network of 4096 nodes for different read-write ratios.\nProbes for the file location can then be answered without accessing the root; hence they are not affected by the failure of the root.\nHowever, note that this technique is not appropriate in some cases.\nAn aggregated value in a file location system is valid as long as the node hosting the file is active, irrespective of the status of other nodes in the system; whereas an application that counts the number of machines in a system may receive incorrect results irrespective of the replication.\nIf reconfigurations are only transient (like a node temporarily not responding due to a burst of load), the replicated aggregate closely or correctly resembles the current state.\n7.\nEVALUATION\nWe have implemented a prototype of SDIMS in Java using the FreePastry framework [32] and performed large-scale simulation experiments and micro-benchmark experiments on two real networks: 187 machines in the department and 69 machines on the PlanetLab [27] testbed.\nIn
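The Install-time knobs discussed above can be pictured as a simple record. Parameter names follow Table 1 of the paper, but the `InstalledFunction` class and the `ALL` sentinel are illustrative assumptions, not the actual API (and the prototype itself is in Java):

```python
from dataclasses import dataclass
from typing import Callable

ALL = float("inf")  # sentinel: propagate through every level (assumption)

@dataclass
class InstalledFunction:
    """One aggregation-function-table entry created by Install (sketch)."""
    attr_type: str
    aggr_func: Callable  # applied to the children's values at each level
    up: float            # levels an update travels toward the root
    down: float          # levels aggregates are sent back down

# Update-Up for the file-location attribute type: updates go all the way
# up, nothing is scattered down, so reads pay the probe cost.
read_dir = InstalledFunction("ReadDir", max, up=ALL, down=0)

# Replication in space for a read-heavy attribute: a positive `down`
# replicates aggregates below the root, so probes avoid the root.
machine_count = InstalledFunction("NumMachines", sum, up=ALL, down=2)
```

The trade-off is exactly the one evaluated next: larger `down` values cut read cost and root sensitivity at the price of extra messages on every write.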
all experiments, we use static up and down values and turn off dynamic adaptation.\nOur evaluation supports four main conclusions.\nFirst, the flexible API provides different propagation strategies that minimize communication resources at different read-to-write ratios.\nFor example, in our simulation we observe Update-Local to be efficient for read-to-write ratios below 0.0001, Update-Up around 1, and Update-All above 50000.\nSecond, our system is scalable with respect to both nodes and attributes.\nIn particular, we find that the maximum node stress in our system is an order of magnitude lower than observed with an Update-All, gossiping approach.\nThird, in contrast to unmodified Pastry, which violates the path convergence property in up to 14% of cases, our system conforms to the property.\nFourth, the system is robust to reconfigurations and adapts to failures within a few seconds.\n7.1 Simulation Experiments\nFlexibility and Scalability: A major innovation of our system is its ability to provide flexible computation and propagation of aggregates.\nIn Figure 8, we demonstrate the flexibility exposed by the aggregation API explained in Section 3.\nWe simulate a system with 4096 nodes arranged in a domain hierarchy with a branching factor (bf) of 16 and install several attributes with different up and down parameters.\nWe plot the average number of messages per operation incurred for a wide range of read-to-write ratios of the operations for different attributes.\nSimulations with other sizes of networks with different branching factors reveal similar results.\nThis graph clearly demonstrates the benefit of supporting a wide range of computation and propagation strategies.\nFigure 9: Max node stress for a gossiping approach vs. an ADHT-based approach for different numbers of nodes with an increasing number of sparse attributes.\nAlthough having a small UP value is efficient for attributes with low read-to-write ratios (write-dominated applications), the probe latency, when reads do occur, may be high, since the probe needs to aggregate the data from all the nodes that did not send their aggregate up.\nConversely, applications that wish to improve probe overheads or latencies can increase their UP and DOWN propagation at a potential cost of an increase in write overheads.\nCompared to an existing Update-All single aggregation tree approach [38], scalability in SDIMS comes from (1) leveraging DHTs to form multiple aggregation trees that split the load across nodes and (2) flexible propagation that avoids propagation of all updates to all nodes.\nFigure 9 demonstrates SDIMS's scalability with nodes and attributes.\nFor this experiment, we build a simulator to simulate both Astrolabe [38] (a gossiping, Update-All approach) and our system for an increasing number of sparse attributes.\nEach attribute corresponds to the membership in a multicast session with a small number of participants.\nFor this experiment, the session size is set to 8, the branching factor is set to 16, the propagation mode for SDIMS is Update-Up, and the participant nodes perform continuous probes for the global aggregate value.\nWe plot the maximum node stress (in terms of messages) observed in both schemes for different sized networks with an increasing number of sessions when the participant of each session performs an update operation.\nClearly, the DHT-based scheme is more scalable with respect to attributes than an Update-All gossiping scheme.\nObserve that at some constant number of attributes, as the number of nodes increases in the system, the maximum node stress increases in the gossiping approach, while it decreases in our approach as the load of aggregation is spread across more nodes.\nSimulations with other session sizes (4 and 16) yield
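As a rule of thumb, the crossover points observed in the 4096-node simulation suggest a simple strategy selector. The interior thresholds below are only the reported crossover regions (Update-Local efficient below 0.0001, Update-Up around 1, Update-All above 50000), not exact constants, and `default_strategy` is a hypothetical helper:

```python
def default_strategy(read_write_ratio):
    """Pick a propagation strategy from the observed crossover regions.

    Thresholds are illustrative, taken from the simulation numbers
    reported above for a 4096-node hierarchy with bf = 16.
    """
    if read_write_ratio < 1e-4:
        return "Update-Local"   # write-dominated: keep values at leaves
    if read_write_ratio < 5e4:
        return "Update-Up"      # balanced: aggregate along the tree
    return "Update-All"         # read-dominated: replicate everywhere

assert default_strategy(1e-6) == "Update-Local"
assert default_strategy(1.0) == "Update-Up"
assert default_strategy(1e6) == "Update-All"
```

In practice an application would pick per-attribute `up`/`down` values with this ratio in mind rather than call such a selector at runtime.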
similar results.\nAdministrative Hierarchy and Robustness: Although the routing protocol of ADHT might lead to an increased number of hops to reach the root for a key as compared to the original Pastry, the algorithm conforms to the path convergence and locality properties and thus provides the administrative isolation property.\nIn Figure 10, we quantify the increased path length by comparisons with unmodified Pastry for different sized networks with different branching factors of the domain hierarchy tree.\nTo quantify the path convergence property, we perform simulations with a large number of probe pairs--each pair probing for a random key starting from two randomly chosen nodes.\nIn Figure 11, we plot the percentage of probe pairs for unmodified Pastry that do not conform to the path convergence property.\nWhen the branching factor is low, the domain hierarchy tree is deeper, resulting in a large difference between Pastry and ADHT in the average path length; but it is at these small domain sizes that path convergence fails more often with the original Pastry.\nFigure 10: Average path length to root in Pastry versus ADHT for different branching factors.\nNote that all lines corresponding to Pastry overlap.\nFigure 11: Percentage of probe pairs whose paths to the root did not conform to the path convergence property with Pastry.\nFigure 12: Latency of probes for the aggregate at the global root level with three different modes of aggregate propagation on (a) department machines, and (b) PlanetLab machines.\n7.2 Testbed experiments\nWe run our prototype on 180 department machines (some machines ran multiple node instances, so this configuration has a total of 283 SDIMS nodes) and also on 69 machines of the PlanetLab [27] testbed.\nWe measure the performance of our system with two micro-benchmarks.\nIn the first micro-benchmark, we install three aggregation functions of types Update-Local, Update-Up, and Update-All, perform an update operation on all nodes for all three
aggregation functions, and measure the latencies incurred by probes for the global aggregate from all nodes in the system.\nFigure 12 shows the observed latencies for both testbeds.\nFigure 13: Micro-benchmark on the department network showing the behavior of the probes from a single node when failures are happening at some other nodes.\nAll 283 nodes assign a value of 10 to the attribute.\nFigure 14: Probe performance during failures on 69 machines of the PlanetLab testbed.\nNotice that the latency in Update-Local is high compared to the Update-Up policy.\nThis is because latency in Update-Local is affected by the presence of even a single slow machine or a single machine with a high-latency network connection.\nIn the second benchmark, we examine robustness.\nWe install one aggregation function of type Update-Up that performs a sum operation on an integer-valued attribute.\nEach node updates the attribute with the value 10.\nThen we monitor the latencies and results returned on the probe operation for the global aggregate on one chosen node, while we kill some nodes after every few probes.\nFigure 13 shows the results on the departmental testbed.\nDue to the nature of the testbed (machines in a department), there is little change in the latencies even in the face of reconfigurations.\nIn Figure 14, we present the results of the experiment on the PlanetLab testbed.\nThe root node of the aggregation tree is terminated after about 275 seconds.\nThere is a 5X increase in the latencies after the death of the initial root node as a more distant node becomes the root node after repairs.\nIn both experiments, the values returned on probes start reflecting the correct situation within a short time after the failures.\nFrom both the testbed benchmark experiments and the simulation experiments on flexibility and scalability, we conclude that (1) the flexibility provided by SDIMS allows applications to trade off read-write overheads (Figure 8), read latency, and sensitivity to slow machines
(Figure 12), and (2) a good default aggregation strategy is Update-Up, which has moderate overheads on both reads and writes.\n7.3 Applications\nSDIMS is designed as a general distributed monitoring and control infrastructure for a broad range of applications.\nAbove, we discuss some simple micro-benchmarks, including a multicast membership service and a calculate-sum function.\nVan Renesse et al. [38] provide detailed examples of how such a service can be used for a peer-to-peer caching directory, a data-diffusion service, a publish-subscribe system, barrier synchronization, and voting.\nAdditionally, we have initial experience using SDIMS to construct two significant applications: the control plane for a large-scale distributed file system [12] and a network monitor for identifying \"heavy hitters\" that consume excess resources.\nDistributed file system control: The PRACTI (Partial Replication, Arbitrary Consistency, Topology Independence) replication system provides a set of mechanisms for data replication over which arbitrary control policies can be layered.\nWe use SDIMS to provide several key functions in order to create a file system over the low-level PRACTI mechanisms.\nFirst, nodes use SDIMS as a directory to handle read misses.\nWhen a node n receives an object o, it updates the (ReadDir, o) attribute with the value n; when n discards o from its local store, it resets (ReadDir, o) to NULL.\nAt each virtual node, the ReadDir aggregation function simply selects a random non-null child value (if any), and we use the Update-Up policy for propagating updates.\nFinally, to locate a nearby copy of an object o, a node n1 issues a series of probe requests for the (ReadDir, o) attribute, starting with level = 1 and increasing the level value with each repeated probe request until a non-null node ID n2 is returned.\nn1 then sends a demand read request to n2, and n2 sends the data if it has it.\nConversely, if n2 does not have a copy of o, it sends a nack to n1, and n1 issues a
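The read-miss lookup loop just described can be sketched as follows; `probe` stands in for the SDIMS Probe API and `locate_replica` is a hypothetical helper, not code from the system:

```python
def locate_replica(probe, obj, max_level):
    """Probe (ReadDir, obj) at increasing levels until a holder is found."""
    for level in range(1, max_level + 1):
        node_id = probe(("ReadDir", obj), level=level)
        if node_id is not None:
            return node_id     # nearest known replica holder (a hint)
    return None                # no replica recorded anywhere

# Toy stand-in for probe: only the level-3 aggregate knows a holder of "o".
table = {(("ReadDir", "o"), 3): "n2"}
holder = locate_replica(lambda k, level: table.get((k, level)), "o", 5)
assert holder == "n2"
```

Because the returned ID is only a hint, the caller must be prepared for a nack and a retry with a larger `down`, as the text goes on to describe.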
retry probe with the down parameter set to a value larger than that used in the previous probe in order to force on-demand re-aggregation, which will yield a fresher value for the retry.\nSecond, nodes subscribe to invalidations and updates to interest sets of files, and nodes use SDIMS to set up and maintain per-interest-set network-topology-sensitive spanning trees for propagating this information.\nTo subscribe to invalidations for interest set i, a node n1 first updates the (Inval, i) attribute with its identity n1, and the aggregation function at each virtual node selects one non-null child value.\nFinally, n1 probes increasing levels of the (Inval, i) attribute until it finds the first node n2 ≠ n1; n1 then uses n2 as its parent in the spanning tree.\nn1 also issues a continuous probe for this attribute at this level so that it is notified of any change to its spanning tree parent.\nSpanning trees for streams of pushed updates are maintained in a similar manner.\nIn the future, we plan to use SDIMS for at least two additional services within this replication system.\nFirst, we plan to use SDIMS to track the read and write rates to different objects; prefetch algorithms will use this information to prioritize replication [40, 41].\nSecond, we plan to track the ranges of invalidation sequence numbers seen by each node for each interest set in order to augment the spanning trees described above with additional \"hole filling\" to allow nodes to locate specific invalidations they have missed.\nOverall, our initial experience with using SDIMS for the PRACTI replication system suggests that (1) the general aggregation interface provided by SDIMS simplifies the construction of distributed applications--given the low-level PRACTI mechanisms, we were able to construct a basic file system that uses SDIMS for several distinct control tasks in under two weeks, and (2) the weak consistency guarantees provided by SDIMS meet the requirements of this application--each
node's controller effectively treats information from SDIMS as hints, and if a contacted node does not have the needed data, the controller retries, using SDIMS on-demand re-aggregation to obtain a fresher hint.\nDistributed heavy hitter problem: The goal of the heavy hitter problem is to identify network sources, destinations, or protocols that account for significant or unusual amounts of traffic.\nAs noted by Estan et al. [13], this information is useful for a variety of applications such as intrusion detection (e.g., port scanning), denial-of-service detection, worm detection and tracking, fair network allocation, and network maintenance.\nSignificant work has been done on developing high-performance stream-processing algorithms for identifying heavy hitters at one router, but this is just a first step; ideally these applications would like not just one router's view of the heavy hitters but an aggregate view.\nWe use SDIMS to allow local information about heavy hitters to be pooled into a view of global heavy hitters.\nFor each destination IP address IPx, a node updates the attribute (DestBW, IPx) with the number of bytes sent to IPx in the last time window.\nThe aggregation function for attribute type DestBW is installed with the Update-Up strategy and simply adds the values from child nodes.\nNodes perform a continuous probe for the global aggregate of the attribute and raise an alarm when the global aggregate value goes above a specified limit.\nNote that only nodes sending data to a particular IP address perform probes for the corresponding attribute.\nAlso note that techniques from [25] can be extended to the hierarchical case to trade off precision for communication bandwidth.\n8.\nRELATED WORK\nThe aggregation abstraction we use in our work is heavily influenced by the Astrolabe [38] project.\nAstrolabe adopts a Propagate-All strategy and unstructured gossiping techniques to attain robustness [5].\nHowever, any gossiping scheme requires aggressive replication of the
aggregates.\nWhile such aggressive replication is efficient for read-dominated attributes, it incurs high message cost for attributes with a small read-to-write ratio.\nOur approach provides a flexible API for applications to set propagation rules according to their read-to-write ratios.\nOther closely related projects include Willow [39], Cone [4], DASIS [1], and SOMO [45].\nWillow, DASIS, and SOMO build a single tree for aggregation.\nCone builds a tree per attribute and requires a total order on the attribute values.\nSeveral academic [15, 21, 42] and commercial [37] distributed monitoring systems have been designed to monitor the status of large networked systems.\nSome of them are centralized, with all the monitoring data collected and analyzed at a central host.\nGanglia [15, 23] uses a hierarchical system where the attributes are replicated within clusters using multicast and then cluster aggregates are further aggregated along a single tree.\nSophia [42] is a distributed monitoring system designed with a declarative logic programming model where the location of query execution is both explicit in the language and can be calculated during evaluation.\nThis research is complementary to our work.\nTAG [21] collects information from a large number of sensors along a single tree.\nThe observation that DHTs internally provide a scalable forest of reduction trees is not new.\nPlaxton et al.'s [28] original paper describes not a DHT, but a system for hierarchically aggregating and querying object location data in order to route requests to nearby copies of objects.\nMany systems--building upon both Plaxton's bit-correcting strategy [32, 46] and upon other strategies [24, 29, 35]--have chosen to hide this power and export a simple and general distributed hash table abstraction as a useful building block for a broad range of distributed applications.\nSome of these systems internally make use of the reduction forest not only for routing but also for caching [32],
but for simplicity, these systems do not generally export this powerful functionality in their external interface.\nOur goal is to develop and expose the internal reduction forest of DHTs as a similarly general and useful abstraction.\nAlthough object location is a predominant target application for DHTs, several other applications like multicast [8, 9, 33, 36] and DNS [11] are also built using DHTs.\nAll these systems implicitly perform aggregation on some attribute, and each one of them must be designed to handle any reconfigurations in the underlying DHT.\nWith the aggregation abstraction provided by our system, designing and building such applications becomes easier.\nInternal DHT trees typically do not satisfy the domain locality properties required in our system.\nCastro et al. [7] and Gummadi et al. [17] point out the importance of path convergence from the perspective of achieving efficiency and investigate the performance of Pastry and other DHT algorithms, respectively.\nSkipNet [18] provides domain-restricted routing, where a key search is limited to the specified domain.\nThis interface can be used to ensure path convergence by searching in the lowest domain and moving up to the next domain when the search reaches the root in the current domain.\nAlthough this strategy guarantees path convergence, it loses the aggregation tree abstraction property of DHTs, as the domain-constrained routing might touch a node more than once (as it searches forward and then backward to stay within a domain).\n9.\nCONCLUSIONS\nThis paper presents a Scalable Distributed Information Management System (SDIMS) that aggregates information in large-scale networked systems and that can serve as a basic building block for a broad range of applications.\nFor large-scale systems, hierarchical aggregation is a fundamental abstraction for scalability.\nWe build our system by extending ideas from Astrolabe and DHTs to achieve (i) scalability with respect to both nodes and attributes
through a new aggregation abstraction that helps leverage DHT's internal trees for aggregation, (ii) flexibility through a simple API that lets applications control propagation of reads and writes, (iii) administrative isolation through simple augmentations of current DHT algorithms, and (iv) robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication.","keyphrases":["inform manag system","administr isol","avail","distribut hash tabl","distribut hash tabl","tunabl spatial replic","network system monitor","larg-scale network system","distribut oper system backbon","read-domin attribut","write-domin attribut","virtual node","updat-upk-downj strategi","tempor heterogen","autonom dht","aggreg manag layer","eventu consist","lazi re-aggreg","freepastri framework"],"prmu":["P","P","P","P","P","P","M","M","M","M","M","M","U","U","M","M","U","M","U"]} {"id":"J-40","title":"Networks Preserving Evolutionary Equilibria and the Power of Randomization","abstract":"We study a natural extension of classical evolutionary game theory to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network. We generalize the definition of an evolutionary stable strategy (ESS), and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly. 
We examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them.","lvl-1":"Networks Preserving Evolutionary Equilibria and the Power of Randomization Michael Kearns mkearns@cis.upenn.edu Siddharth Suri ssuri@cis.upenn.edu Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104 ABSTRACT We study a natural extension of classical evolutionary game theory to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network.\nWe generalize the definition of an evolutionary stable strategy (ESS), and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly.\nWe examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them.\nCategories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics General Terms Economics, Theory 1.\nINTRODUCTION In this paper, we introduce and examine a natural extension of classical evolutionary game theory (EGT) to a setting in which pairwise interactions are restricted to the edges of an undirected graph or network.\nThis extension generalizes the classical setting, in which all pairs of organisms in an infinite population are equally likely to interact.\nThe classical setting can be viewed as the special case in which the underlying network is a clique.\nThere are many obvious reasons why one would like to examine more general graphs, the primary one being that in many scenarios considered in evolutionary game theory, not all interactions are in fact possible.\nFor example, geographical restrictions may limit interactions to
physically proximate pairs of organisms.\nMore generally, as evolutionary game theory has become a plausible model not only for biological interaction, but also for economic and other kinds of interaction in which certain dynamics are more imitative than optimizing (see [2, 16] and chapter 4 of [19]), the network constraints may come from similarly more general sources.\nEvolutionary game theory on networks has been considered before, but not in the generality we consider here (see Section 4).\nWe generalize the definition of an evolutionary stable strategy (ESS) to networks, and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly.\nWe examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them.\nThe work described here is part of recent efforts examining the relationship between graph topology or structure and properties of equilibrium outcomes.\nPrevious works in this line include studies of the relationship of topology to properties of correlated equilibria in graphical games [11], and studies of price variation in graph-theoretic market exchange models [12].\nMore generally, this work contributes to the line of graph-theoretic models for game theory investigated in both computer science [13] and economics [10].\n2.\nCLASSICAL EGT The fundamental concept of evolutionary game theory is the evolutionarily stable strategy (ESS).\nIntuitively, an ESS is a strategy such that if all the members of a population adopt it, then no mutant strategy could invade the population [17].\nTo make this more precise, we describe the basic model of evolutionary game theory, in which the notion of an ESS resides.\nThe standard
model of evolutionary game theory considers an infinite population of organisms, each of which plays a strategy in a fixed, 2-player, symmetric game.\nThe game is defined by a fitness function F. All pairs of members of the infinite population are equally likely to interact with one another.\nIf two organisms interact, one playing strategy s and the other playing strategy t, the s-player earns a fitness of F(s|t) while the t-player earns a fitness of F(t|s).\nIn this infinite population of organisms, suppose there is a 1 \u2212 \u03b5 fraction who play strategy s, and call these organisms incumbents; and suppose there is an \u03b5 fraction who play t, and call these organisms mutants.\nAssume two organisms are chosen uniformly at random to play each other.\nThe strategy s is an ESS if the expected fitness of an organism playing s is higher than that of an organism playing t, for all t \u2260 s and all sufficiently small \u03b5.\nSince an incumbent will meet another incumbent with probability 1 \u2212 \u03b5 and it will meet a mutant with probability \u03b5, we can calculate the expected fitness of an incumbent, which is simply (1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t).\nSimilarly, the expected fitness of a mutant is (1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t).\nThus we come to the formal definition of an ESS [19].\nDefinition 2.1.\nA strategy s is an evolutionarily stable strategy (ESS) for the 2-player, symmetric game given by fitness function F, if for every strategy t \u2260 s, there exists an \u03b5t such that for all 0 < \u03b5 < \u03b5t, (1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t) > (1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t).\nA consequence of this definition is that for s to be an ESS, it must be the case that F(s|s) \u2265 F(t|s), for all strategies t.\nThis inequality means that s must be a best response to itself, and thus any ESS strategy s must also be a Nash equilibrium.\nIn general the notion of ESS is more restrictive than Nash equilibrium, and not all 2-player, symmetric games have an ESS.\nIn this paper our interest is to examine what kinds of network structure
preserve the ESS strategies for those games that do have a standard ESS.\nFirst we must of course generalize the definition of ESS to a network setting.\n3.\nEGT ON GRAPHS In our setting, we will no longer assume that two organisms are chosen uniformly at random to interact.\nInstead, we assume that organisms interact only with those in their local neighborhood, as defined by an undirected graph or network.\nAs in the classical setting (which can be viewed as the special case of the complete network or clique), we shall assume an infinite population, by which we mean we examine limiting behavior in a family of graphs of increasing size.\nBefore giving formal definitions, some comments are in order on what to expect in moving from the classical to the graph-theoretic setting.\nIn the classical (complete graph) setting, there exist many symmetries that may be broken in moving to the network setting, at both the group and individual level.\nIndeed, such asymmetries are the primary interest in examining a graph-theoretic generalization.\nFor example, at the group level, in the standard ESS definition, one need not discuss any particular set of mutants of population fraction \u03b5.\nSince all organisms are equally likely to interact, the survival or fate of any specific mutant set is identical to that of any other.\nIn the network setting, this may not be true: some mutant sets may be better able to survive than others due to the specific topologies of their interactions in the network.\nFor instance, foreshadowing some of our analysis, if s is an ESS but F(t|t) is much larger than F(s|s) and F(s|t), a mutant set with a great deal of internal interaction (that is, edges between mutants) may be able to survive, whereas one without this may suffer.\nAt the level of individuals, in the classical setting, the assertion that one mutant dies implies that all mutants die, again by symmetry.\nIn the network setting, individual fates may differ within a group all playing a common
strategy.\nThese observations imply that in examining ESS on networks we face definitional choices that were obscured in the classical model.\nIf G is a graph representing the allowed pairwise interactions between organisms (vertices), and u is a vertex of G playing strategy su, then the fitness of u is given by F(u) = (\u03a3v\u2208\u0393(u) F(su|sv)) \/ |\u0393(u)|.\nHere sv is the strategy being played by the neighbor v, and \u0393(u) = {v \u2208 V : (u, v) \u2208 E}.\nOne can view the fitness of u as the average fitness u would obtain if it played each of its neighbors, or the expected fitness u would obtain if it were assigned to play one of its neighbors chosen uniformly at random.\nClassical evolutionary game theory examines an infinite, symmetric population.\nGraphs or networks are inherently finite objects, and we are specifically interested in their asymmetries, as discussed above.\nThus all of our definitions shall revolve around an infinite family G = {Gn}\u221e n=0 of finite graphs Gn over n vertices, but we shall examine asymptotic (large n) properties of such families.\nWe first give a definition for a family of mutant vertex sets in such an infinite graph family to contract.\nDefinition 3.1.\nLet G = {Gn}\u221e n=0 be an infinite family of graphs, where Gn has n vertices.\nLet M = {Mn}\u221e n=0 be any family of subsets of vertices of the Gn such that |Mn| \u2265 \u03b5n for some constant \u03b5 > 0.\nSuppose all the vertices of Mn play a common (mutant) strategy t, and suppose the remaining vertices in Gn play a common (incumbent) strategy s.\nWe say that Mn contracts if for sufficiently large n, for all but o(n) of the j \u2208 Mn, j has an incumbent neighbor i such that F(j) < F(i).\nA reasonable alternative would be to ask that the condition above hold for all mutants rather than all but o(n).\nNote also that we only require that a mutant have one incumbent neighbor of higher fitness in order to die; one might consider requiring more.\nIn Sections 6.1 and 6.2 we
consider these stronger conditions and demonstrate that our results can no longer hold.\nIn order to properly define an ESS for an infinite family of finite graphs in a way that recovers the classical definition asymptotically in the case of the family of complete graphs, we first must give a definition that restricts attention to families of mutant vertices that are smaller than some invasion threshold \u03b5n, yet remain some constant fraction of the population.\nThis prevents invasions that survive merely by constituting a vanishing fraction of the population.\nDefinition 3.2.\nLet \u03b5 > 0, and let G = {Gn}\u221e n=0 be an infinite family of graphs, where Gn has n vertices.\nLet M = {Mn}\u221e n=0 be any family of (mutant) vertices in Gn.\nWe say that M is \u03b5-linear if there exists an \u03b5', \u03b5 > \u03b5' > 0, such that for all sufficiently large n, \u03b5n > |Mn| > \u03b5'n.\nWe can now give our definition for a strategy to be evolutionarily stable when employed by organisms interacting with their neighborhood in a graph.\nDefinition 3.3.\nLet G = {Gn}\u221e n=0 be an infinite family of graphs, where Gn has n vertices.\nLet F be any 2-player, symmetric game for which s is a strategy.\nWe say that s is an ESS with respect to F and G if for all mutant strategies t \u2260 s, there exists an \u03b5t > 0 such that for any \u03b5t-linear family of mutant vertices M = {Mn}\u221e n=0 all playing t, for n sufficiently large, Mn contracts.\nThus, to violate the ESS property for G, one must witness a family of mutations M in which each Mn is an arbitrarily small but nonzero constant fraction of the population of Gn, but does not contract (i.e.
every mutant set has a subset of linear size that survives all of its incumbent interactions).\nIn Section A.1 we show that the definition given coincides with the classical one in the case where G is the family of complete graphs, in the limit of large n.\nWe note that even in the classical model, small sets of mutants were allowed to have greater fitness than the incumbents, as long as the size of the set was o(n) [18].\nIn the definition above there are three parameters: the game F, the graph family G and the mutation family M.\nOur main results will hold for any 2-player, symmetric game F.\nWe will also study two rather general settings for G and M: that in which G is a family of random graphs and M is arbitrary, and that in which G is nearly arbitrary and M is randomly chosen.\nIn both cases, we will see that, subject to conditions on degree or edge density (essentially forcing connectivity of G but not much more), for any 2-player, symmetric game, the ESS of the classical settings, and only those strategies, are always preserved.\nThus a common theme of these results is the power of randomization: as long as either the network itself is chosen randomly, or the mutation set is chosen randomly, classical ESS are preserved.\n4.\nRELATED WORK There has been previous work that analyzes which strategies are resilient to mutant invasions with respect to various types of graphs.\nWhat sets our work apart is that the model we consider encompasses a significantly more general class of games and graph topologies.\nWe will briefly survey this literature and point out the differences in the previous models and ours.\nIn [8], [3], and [4], the authors consider specific families of graphs, such as cycles and lattices, where players play specific games, such as 2 \u00d7 2-games or k \u00d7 k-coordination games.\nIn these papers the authors specify a simple, local dynamic for players to improve their payoffs by changing strategies, and analyze what type of strategies will 
grow to dominate the population.\nThe model we propose is more general than both of these, as it encompasses a larger class of graphs as well as a richer set of games.\nAlso related to our work is that of [14], where the authors propose two models.\nThe first assumes organisms interact according to a weighted, undirected graph.\nHowever, the fitness of each organism is simply assigned and does not depend on the actions of each organism's neighborhood.\nThe second model has organisms arranged around a directed cycle, where neighbors play a 2 \u00d7 2-game.\nWith probability proportional to its fitness, an organism is chosen to reproduce by placing a replica of itself in its neighbor's position, thereby killing the neighbor.\nWe consider more general games than the first model and more general graphs than the second.\nFinally, the works most closely related to ours are [7], [15], and [6].\nThe authors consider 2-action, coordination games played by players in a general undirected graph.\nIn these three works, the authors specify a dynamic for a strategy to reproduce, and analyze properties of the graph that allow a strategy to overrun the population.\nHere again, one can see that our model is more general than these, as it allows for organisms to play any 2-player, symmetric game.\n5.\nNETWORKS PRESERVING ESS We now proceed to state and prove two complementary results in the network ESS model defined in Section 3.\nFirst, we consider a setting where the graphs are generated via the Gn,p model of Erd\u0151s and R\u00e9nyi [5].\nIn this model, every pair of vertices are joined by an edge independently and with probability p (where p may depend on n).\nThe mutant set, however, will be constructed adversarially (subject to the linear size constraint given by Definition 3.3).\nFor these settings, we show that for any 2-player, symmetric game, s is a classical ESS of that game, if and only if s is an ESS for {Gn,p}\u221e n=0, where p = \u2126(1\/n^c) and 0 \u2264 c < 1,
and any mutant family {Mn}\u221e n=0, where each Mn has linear size.\nWe note that under these settings, if we let c = 1 \u2212 \u03b3 for small \u03b3 > 0, the expected number of edges in Gn is n^(1+\u03b3) or larger - that is, just superlinear in the number of vertices and potentially far smaller than O(n\u00b2).\nIt is easy to convince oneself that once the graphs have only a linear number of edges, we are flirting with disconnectedness, and there may simply be large mutant sets that can survive in isolation due to the lack of any incumbent interactions in certain games.\nThus in some sense we examine the minimum plausible edge density.\nThe second result is a kind of dual to the first, considering a setting where the graphs are chosen arbitrarily (subject to conditions) but the mutant sets are chosen randomly.\nIt states that for any 2-player, symmetric game, s is a classical ESS for that game, if and only if s is an ESS for any {Gn = (Vn, En)}\u221e n=0 in which for all v \u2208 Vn, deg(v) = \u2126(n^\u03b3) (for any constant \u03b3 > 0), and a family of mutant sets {Mn}\u221e n=0, that is chosen randomly (that is, in which each organism is labeled a mutant with constant probability \u03b5 > 0).\nThus, in this setting we again find that classical ESS are preserved subject to edge density restrictions.\nSince the degree assumption is somewhat strong, we also prove another result which only assumes that |En| \u2265 n^(1+\u03b3), and shows that there must exist at least 1 mutant with an incumbent neighbor of higher fitness (as opposed to showing that all but o(n) mutants have an incumbent neighbor of higher fitness).\nAs will be discussed, this rules out stationary mutant invasions.\n5.1 Random Graphs, Adversarial Mutations Now we state and prove a theorem which shows that if s is a classical ESS, then s will be an ESS for random graphs, where a linear sized set of mutants is chosen by an adversary.\nTheorem 5.1.\nLet F be any 2-player, symmetric game, and suppose s is a
classical ESS of F. Let the infinite graph family {Gn}\u221e n=0 be drawn according to Gn,p, where p = \u2126(1\/n^c) and 0 \u2264 c < 1.\nThen with probability 1, s is an ESS.\nThe main idea of the proof is to divide mutants into 2 categories, those with normal fitness and those with abnormal fitness.\nFirst, we show all but o(n) of the population (incumbent or mutant) have an incumbent neighbor of normal fitness.\nThis will imply that all but o(n) of the mutants of normal fitness have an incumbent neighbor of higher fitness.\nThe vehicle for proving this is Theorem 2.15 of [5], which gives an upper bound on the number of vertices not connected to a sufficiently large set.\nThis theorem assumes that the size of this large set is known with equality, which necessitates the union bound argument below.\nSecondly, we show that there can be at most o(n) mutants with abnormal fitness.\nSince there are so few of them, even if none of them have an incumbent neighbor of higher fitness, s will still be an ESS with respect to F and G. Proof.\n(Sketch) Let t \u2260 s be the mutant strategy.\nSince s is a classical ESS, there exists an \u03b5t such that (1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t) > (1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t), for all 0 < \u03b5 < \u03b5t. Let M be any mutant family that is \u03b5t-linear.\nThus for any fixed value of n that is sufficiently large, there exists an \u03b5 such that |Mn| = \u03b5n and \u03b5t > \u03b5 > 0.\nAlso, let In = Vn \\ Mn and let I' \u2286 In be the set of incumbents that have fitness in the range (1 \u00b1 \u03c4)[(1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t)] for some constant \u03c4, 0 < \u03c4 < 1\/6.\nLemma 5.1 below shows (1 \u2212 \u03b5)n \u2265 |I'| \u2265 (1 \u2212 \u03b5)n \u2212 24 log n\/(\u03c4\u00b2p).\nFinally, let TI' = {x \u2208 V \\ I' : \u0393(x) \u2229 I' = \u2205}.\n(For the sake of clarity we suppress the subscript n on the sets I' and T.)
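As a numerical aside (our own illustration with an example game, not part of the paper's argument), the concentration of incumbent fitness around (1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t) that this proof brackets with its (1 \u00b1 \u03c4) range is easy to observe in a small Gn,p simulation; the game values and parameters below are arbitrary choices with s a classical ESS:

```python
import random

# Illustration only: in G(n, p), almost every incumbent's fitness lands inside
# the (1 +/- tau) window around (1 - eps)F(s|s) + eps*F(s|t).
# The game values are invented examples; s = "s" is a classical ESS here.

random.seed(1)
n, p, eps, tau = 500, 0.2, 0.1, 0.2
F = {"s": {"s": 3.0, "t": 1.0}, "t": {"s": 2.0, "t": 0.0}}

adj = {u: [] for u in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:      # each edge present independently
            adj[u].append(v)
            adj[v].append(u)

mutants = set(random.sample(range(n), int(eps * n)))
strat = {u: ("t" if u in mutants else "s") for u in range(n)}

def fitness(u):
    # average payoff against the neighborhood, per the definition in Section 3
    return sum(F[strat[u]][strat[v]] for v in adj[u]) / len(adj[u]) if adj[u] else 0.0

target = (1 - eps) * F["s"]["s"] + eps * F["s"]["t"]
inside = sum(1 for u in range(n)
             if u not in mutants
             and (1 - tau) * target <= fitness(u) <= (1 + tau) * target)
print(inside, "of", n - len(mutants), "incumbents fall in the (1 +/- tau) range")
```

With degrees around pn = 100, essentially every incumbent's mutant-neighbor fraction is close to \u03b5, so the count printed is (nearly) all incumbents.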
The union bound gives us Pr(|TI'| \u2265 \u03b4n) \u2264 \u03a3i Pr(|TI'| \u2265 \u03b4n and |I'| = i) (1), where the sum runs over the range (1 \u2212 \u03b5)n \u2212 24 log n\/(\u03c4\u00b2p) \u2264 i \u2264 (1 \u2212 \u03b5)n.\nLetting \u03b4 = n^(\u2212\u03b3) for some \u03b3 > 0 gives \u03b4n = o(n).\nWe will apply Theorem 2.15 of [5] to the summand on the right hand side of Equation 1.\nIf we let \u03b3 = (1 \u2212 c)\/2, and combine this with the fact that 0 \u2264 c < 1, all of the requirements of this theorem will be satisfied (details omitted).\nNow when we apply this theorem to Equation 1, we get Pr(|TI'| \u2265 \u03b4n) \u2264 \u03a3i exp(\u2212(1\/6)C\u03b4n) = o(1) (2), with the sum over the same range of i.\nThis is because Equation 2 has only 24 log n\/(\u03c4\u00b2p) terms, and Theorem 2.15 of [5] gives us that C \u2265 (1 \u2212 \u03b5)n^(1\u2212c) \u2212 24 log n\/\u03c4\u00b2.\nThus we have shown, with probability tending to 1 as n \u2192 \u221e, at most o(n) individuals are not attached to an incumbent which has fitness in the range (1 \u00b1 \u03c4)[(1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t)].\nThis implies that the number of mutants of approximately normal fitness, not attached to an incumbent of approximately normal fitness, is also o(n).\nNow those mutants of approximately normal fitness that are attached to an incumbent of approximately normal fitness have fitness in the range (1 \u00b1 \u03c4)[(1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t)].\nThe incumbents that they are attached to have fitness in the range (1 \u00b1 \u03c4)[(1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t)].\nSince s is an ESS of F, we know (1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t) > (1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t), thus if we choose \u03c4 small enough, we can ensure that all but o(n) mutants of normal fitness have a neighboring incumbent of higher fitness.\nFinally by Lemma 5.1, we know there are at most o(n) mutants of abnormal fitness.\nSo even if all of them are more fit than their respective incumbent neighbors, we have shown all but o(n) of the mutants have an incumbent neighbor of higher fitness.\nWe now state and prove the lemma used in the
proof above.\nLemma 5.1.\nFor almost every graph Gn,p with (1 \u2212 \u03b5)n incumbents, all but 24 log n\/(\u03b4\u00b2p) incumbents have fitness in the range (1 \u00b1 \u03b4)[(1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t)], where p = \u2126(1\/n^c) and \u03b5, \u03b4 and c are constants satisfying 0 < \u03b5 < 1, 0 < \u03b4 < 1\/6, 0 \u2264 c < 1.\nSimilarly, under the same assumptions, all but 24 log n\/(\u03b4\u00b2p) mutants have fitness in the range (1 \u00b1 \u03b4)[(1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t)].\nProof.\nWe define the mutant degree of a vertex to be the number of mutant neighbors of that vertex, and incumbent degree analogously.\nObserve that the only way for an incumbent to have fitness far from its expected value of (1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t) is if it has a fraction of mutant neighbors either much higher or much lower than \u03b5.\nTheorem 2.14 of [5] gives us a bound on the number of such incumbents.\nIt states that the number of incumbents with mutant degree outside the range (1 \u00b1 \u03b4)p|M| is at most 12 log n\/(\u03b4\u00b2p).\nBy the same theorem, the number of incumbents with incumbent degree outside the range (1 \u00b1 \u03b4)p|I| is at most 12 log n\/(\u03b4\u00b2p).\nFrom the linearity of fitness as a function of the fraction of mutant or incumbent neighbors, one can show that for those incumbents with mutant and incumbent degree in the expected range, their fitness is within a constant factor of (1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t), where that constant goes to 1 as n tends to infinity and \u03b4 tends to 0.\nThe proof for the mutant case is analogous.\nWe note that if in the statement of Theorem 5.1 we let c = 0, then p = 1.\nThis, in turn, makes G = {Kn}\u221e n=0, where Kn is a clique of n vertices.\nThen for any Kn all of the incumbents will have identical fitness and all of the mutants will have identical fitness.\nFurthermore, since s was an ESS for G, the incumbent fitness will be higher than the mutant fitness.\nFinally, one can show that as n \u2192 \u221e, the incumbent fitness converges to (1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t),
and the mutant fitness converges to (1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t).\nIn other words, s must be a classical ESS, providing a converse to Theorem 5.1.\nWe rigorously present this argument in Section A.1.\n5.2 Adversarial Graphs, Random Mutations We now move on to our second main result.\nHere we show that if the graph family, rather than being chosen randomly, is arbitrary subject to a minimum degree requirement, and the mutation sets are randomly chosen, classical ESS are again preserved.\nA modified notion of ESS allows us to considerably weaken the degree requirement to a minimum edge density requirement.\nTheorem 5.2.\nLet G = {Gn = (Vn, En)}\u221e n=0 be an infinite family of graphs in which for all v \u2208 Vn, deg(v) = \u2126(n^\u03b3) (for any constant \u03b3 > 0).\nLet F be any 2-player, symmetric game, and suppose s is a classical ESS of F. Let t be any mutant strategy, and let the mutant family M = {Mn}\u221e n=0 be chosen randomly by labeling each vertex a mutant with constant probability \u03b5, where \u03b5t > \u03b5 > 0.\nThen with probability 1, s is an ESS with respect to F, G and M.
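Before the proof, the setting of Theorem 5.2 is easy to simulate (our own illustration, not the authors' code): fix a structured graph meeting the degree condition, label mutants at random, and observe that every incumbent ends up fitter than every mutant. The ring-lattice graph and the game values below are invented examples with s a classical ESS:

```python
import random

# Illustration of the Theorem 5.2 setting: a fixed graph (a ring lattice, so
# every vertex has the same degree) with the mutant set chosen randomly.
# Game values are invented examples; s = "s" is a classical ESS here.

random.seed(2)
n, k, eps = 500, 50, 0.1          # each vertex joined to its k nearest neighbors
F = {"s": {"s": 3.0, "t": 1.0}, "t": {"s": 2.0, "t": 0.0}}

adj = {u: [(u + d) % n for d in range(1, k // 2 + 1)]
          + [(u - d) % n for d in range(1, k // 2 + 1)] for u in range(n)}

strat = {u: ("t" if random.random() < eps else "s") for u in range(n)}

def fitness(u):
    # average payoff against the neighborhood, per the definition in Section 3
    return sum(F[strat[u]][strat[v]] for v in adj[u]) / len(adj[u])

inc = [fitness(u) for u in range(n) if strat[u] == "s"]
mut = [fitness(u) for u in range(n) if strat[u] == "t"]
# Holds with overwhelming probability at these parameters: every incumbent
# beats every mutant, as the theorem asserts for large n.
assert mut and inc
assert min(inc) > max(mut)
```

With degree 50, each vertex's mutant-neighbor fraction concentrates near \u03b5, so incumbent fitness clusters near (1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t) = 2.8 and mutant fitness near (1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t) = 1.8.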
Proof.\nLet t \u2260 s be the mutant strategy and let X be the event that every incumbent has fitness within the range (1 \u00b1 \u03c4)[(1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t)], for some constant \u03c4 > 0 to be specified later.\nSimilarly, let Y be the event that every mutant has fitness within the range (1 \u00b1 \u03c4)[(1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t)].\nSince Pr(X \u2229 Y ) = 1 \u2212 Pr(\u00acX \u222a \u00acY ), we proceed by showing Pr(\u00acX \u222a \u00acY ) = o(1).\n\u00acX is the event that there exists an incumbent with fitness outside the range (1 \u00b1 \u03c4)[(1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t)].\nIf degM(v) denotes the number of mutant neighbors of v and, similarly, degI(v) denotes the number of incumbent neighbors of v, then an incumbent i has fitness (degI(i)\/deg(i))F(s|s) + (degM(i)\/deg(i))F(s|t).\nSince F(s|s) and F(s|t) are fixed quantities, the only variation in an incumbent's fitness can come from variation in the terms degI(i)\/deg(i) and degM(i)\/deg(i).\nOne can use the Chernoff bound followed by the union bound to show that for any incumbent i, Pr(F(i) \u2209 (1 \u00b1 \u03c4)[(1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t)]) < 4 exp(\u2212\u03b5 deg(i)\u03c4\u00b2\/3).\nNext one can use the union bound again to bound the probability of the event \u00acX, Pr(\u00acX) \u2264 4n exp(\u2212\u03b5 di\u03c4\u00b2\/3), where di = min{deg(i) : i \u2208 V \\ M}, 0 < \u03b5 \u2264 1\/2.\nAn analogous argument can be made to show Pr(\u00acY ) < 4n exp(\u2212\u03b5 dj\u03c4\u00b2\/3), where dj = min{deg(j) : j \u2208 M} and 0 < \u03b5 \u2264 1\/2.\nThus, by the union bound, Pr(\u00acX \u222a \u00acY ) < 8n exp(\u2212\u03b5 d\u03c4\u00b2\/3), where d = min{deg(v) : v \u2208 V }, 0 < \u03b5 \u2264 1\/2.\nSince deg(v) = \u2126(n^\u03b3), for all v \u2208 V , and \u03b5, \u03c4 and \u03b3 are all constants greater than 0, lim n\u2192\u221e 8n exp(\u2212\u03b5 d\u03c4\u00b2\/3) = 0, so Pr(\u00acX \u222a \u00acY ) = o(1).\nThus, we can choose \u03c4 small enough such that (1 + \u03c4)[(1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t)] < (1 \u2212 \u03c4)[(1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t)], and then choose n large
enough such that with probability 1 \u2212 o(1), every incumbent will have fitness in the range (1 \u00b1 \u03c4)[(1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t)], and every mutant will have fitness in the range (1 \u00b1 \u03c4)[(1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t)].\nSo with high probability, every incumbent will have a higher fitness than every mutant.\nBy arguments similar to those following the proof of Theorem 5.1, if we let G = {Kn}\u221e n=0, each incumbent will have the same fitness and each mutant will have the same fitness.\nFurthermore, since s is an ESS for G, the incumbent fitness must be higher than the mutant fitness.\nHere again, one has to show that as n \u2192 \u221e, the incumbent fitness converges to (1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t), and the mutant fitness converges to (1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t).\nObserve that the exact fraction of mutants in Vn is now a random variable.\nSo to prove this convergence we use an argument similar to one that is used to prove that a sequence of random variables that converges in probability also converges in distribution (details omitted).\nThis in turn establishes that s must be a classical ESS, and we thus obtain a converse to Theorem 5.2.\nThis argument is made rigorous in Section A.2.\nThe assumption on the degree of each vertex of Theorem 5.2 is rather strong.\nThe following theorem relaxes this requirement and only necessitates that every graph have n^(1+\u03b3) edges, for some constant \u03b3 > 0, in which case it shows there will always be at least 1 mutant with an incumbent neighbor of higher fitness.\nA strategy that is an ESS in this weakened sense will essentially rule out stable, static sets of mutant invasions, but not more complex invasions.\nAn example of a more complex invasion is a mutant set that survives, but only by perpetually migrating through the graph under some natural evolutionary dynamics, akin to gliders in the well-known Game of Life [1].\nTheorem 5.3.\nLet F be any game, and let s be a classical ESS of F, and let t \u2260 s be a mutant
strategy.\nFor any graph family G = {Gn = (Vn, En)}\u221e n=0 in which |En| \u2265 n^(1+\u03b3) (for any constant \u03b3 > 0), and any mutant family M = {Mn}\u221e n=0 which is determined by labeling each vertex a mutant with probability \u03b5, where \u03b5t > \u03b5 > 0, the probability that there exists a mutant with an incumbent neighbor of higher fitness approaches 1 as n \u2192 \u221e.\nProof.\n(Sketch) The main idea behind the proof is to show that with high probability, over only the choice of mutants, there will be an incumbent-mutant edge in which both vertices have high degree.\nIf their degree is high enough, we can show that close to an \u03b5 fraction of their neighbors are mutants, and thus their fitnesses are very close to what we expect them to be in the classical case.\nSince s is an ESS, the fitness of the incumbent will be higher than that of the mutant.\nWe call an edge (i, j) \u2208 En a g(n)-barbell if deg(i) \u2265 g(n) and deg(j) \u2265 g(n).\nSuppose Gn has at most h(n) edges that are g(n)-barbells.\nThis means there are at least |En| \u2212 h(n) edges in which at least one vertex has degree at most g(n).\nWe call these vertices light vertices.\nLet \u2113(n) be the number of light vertices in Gn.\nObserve that |En| \u2212 h(n) \u2264 \u2113(n)g(n).\nThis is because each light vertex is incident on at most g(n) edges.\nThis gives us that |En| \u2264 h(n) + \u2113(n)g(n) \u2264 h(n) + ng(n).\nSo if we choose h(n) and g(n) such that h(n) + ng(n) = o(n^(1+\u03b3)), then |En| = o(n^(1+\u03b3)).\nThis contradicts the assumption that |En| = \u2126(n^(1+\u03b3)).\nThus, subject to the above constraint on h(n) and g(n), Gn must contain at least h(n) edges that are g(n)-barbells.\nNow let Hn denote the subgraph induced by the barbell edges of Gn.\nNote that regardless of the structure of Gn, there is no reason that Hn should be connected.\nThus, let m be the number of connected components of Hn, and let c1, c2, ...
, cm be the number of vertices in each of these connected components.\nNote that since Hn is an edge-induced subgraph we have ck \u2265 2 for all components k. Let us choose the mutant set by first flipping the vertices in Hn only.\nWe now show that the probability, with respect to the random mutant set, that none of the components of Hn have an incumbent-mutant edge is exponentially small in n. Let An be the event that every component of Hn contains only mutants or only incumbents.\nThen algebraic manipulations can establish that Pr[An] = \u03a0k=1..m (\u03b5^ck + (1 \u2212 \u03b5)^ck) \u2264 (1 \u2212 \u03b5)^((1 \u2212 \u03b2\u00b2\/2) \u03a3k=1..m ck), where \u03b2 is a constant.\nThus for sufficiently small \u03b5 the bound decreases exponentially with \u03a3k=1..m ck.\nFurthermore, since \u03a3k=1..m (ck choose 2) \u2265 h(n) (with equality achieved by making each component a clique), one can show that \u03a3k=1..m ck \u2265 \u221ah(n).\nThus, as long as h(n) \u2192 \u221e with n, the probability that all components are uniformly labeled will go to 0.\nNow assuming that there exists a non-uniformly labeled component, by construction that component contains an edge (i, j) where i is an incumbent and j is a mutant, that is a g(n)-barbell.\nWe also assume that the h(n) vertices already labeled have been done so arbitrarily, but that the remaining g(n) \u2212 h(n) vertices neighboring i and j are labeled mutants independently with probability \u03b5.\nThen via a standard Chernoff bound argument, one can show that with high probability, the fraction of mutants neighboring i and the fraction of mutants neighboring j is in the range (1 \u00b1 \u03c4)\u03b5(g(n) \u2212 h(n))\/g(n).\nSimilarly, one can show that the fraction of incumbents neighboring i and the fraction of incumbents neighboring j is in the range 1 \u2212 (1 \u00b1 \u03c4)\u03b5(g(n) \u2212 h(n))\/g(n).\nSince s is an ESS, there exists a \u03b6 > 0 such that (1 \u2212 \u03b5)F(s|s) + \u03b5F(s|t) = (1 \u2212 \u03b5)F(t|s) + \u03b5F(t|t) + \u03b6.\nIf we choose g(n) = n^\u03b3, and h(n) = o(g(n)), we can choose n
large enough and \u03c4 small enough to force F(i) > F(j), as desired.\n6.\nLIMITATIONS OF STRONGER MODELS In this section we show that if one tried to strengthen the model described in Section 3 in two natural ways, one would not be able to prove results as strong as Theorems 5.1 and 5.2, which hold for every 2-player, symmetric game.\n6.1 Stronger Contraction for the Mutant Set In Section 3 we alluded to the fact that we made certain design decisions in arriving at Definitions 3.1, 3.2 and 3.3.\nOne such decision was to require that all but o(n) mutants have incumbent neighbors of higher fitness.\nInstead, we could have required that all mutants have an incumbent neighbor of higher fitness.\nThe two theorems in this subsection show that if one were to strengthen our notion of contraction for the mutant set, given by Definition 3.1, in this way, it would be impossible to prove theorems analogous to Theorems 5.1 and 5.3.\nRecall that Definition 3.1 gave the notion of contraction for a linear sized subset of mutants.\nIn what follows, we will say an edge (i, j) contracts if i is an incumbent, j is a mutant, and F(i) > F(j).\nAlso, recall that Theorem 5.1 stated that if s is a classical ESS, then it is an ESS for random graphs with adversarial mutations.\nNext, we prove that if we instead required every incumbent-mutant edge to contract, this need not be the case.\nTheorem 6.1.\nLet F be a 2-player, symmetric game that has a classical ESS s for which there exists a mutant strategy t \u2260 s with F(t|t) > F(s|s) and F(t|t) > F(s|t).\nLet G = {Gn}\u221e n=0 be an infinite family of random graphs drawn according to Gn,p, where p = \u2126(1\/n^c) for any constant 0 \u2264 c < 1.\nThen with probability approaching 1 as n \u2192 \u221e, there exists a mutant family M = {Mn}\u221e n=0, where \u03b5tn > |Mn| > \u03b5'n and \u03b5t, \u03b5' > 0, in which there is an edge that does not contract.\nProof.\n(Sketch) With probability approaching 1 as n \u2192 \u221e, there exists a vertex j where deg(j) is
arbitrarily close to n. So label j mutant, label one of its neighbors incumbent, denoted i, and label the rest of j's neighborhood mutant. Also, label all of i's neighbors incumbent, with the exception of j and j's neighbors (which were already labeled mutant). In this setting, one can show that F(j) will be arbitrarily close to F(t|t), and that F(i) will be a convex combination of F(s|s) and F(s|t), which are both strictly less than F(t|t).
Theorem 5.3 stated that if s is a classical ESS, then for graphs where |En| ≥ n^{1+γ} for some γ > 0, and where each organism is labeled a mutant with probability ε, one edge must contract. Below we show that, for certain graphs and certain games, there will always exist one edge that will not contract.
Theorem 6.2. Let F be a 2-player, symmetric game that has a classical ESS s such that there exists a mutant strategy t ≠ s where F(t|s) > F(s|t). There exists an infinite family of graphs {Gn = (Vn, En)}n≥0, where |En| = Θ(n²), such that for a mutant family M = {Mn}n≥0, determined by labeling each vertex a mutant with probability ε > 0, the probability that there exists an edge in En that does not contract approaches 1 as n → ∞.
Proof. (Sketch) Construct Gn as follows. Pick n/4 vertices u1, u2, ...
, un/4 and add edges such that they form a clique. Then, for each ui, i ∈ [n/4], add edges (ui, vi), (vi, wi), and (wi, xi). With probability approaching 1 as n → ∞, there exists an i such that ui and wi are mutants and vi and xi are incumbents. Observe that F(vi) = F(xi) = F(s|t) and F(wi) = F(t|s).

6.2 Stronger Contraction for Individuals
The model of Section 3 requires that for an edge (i, j) to contract, the fitness of i must be greater than the fitness of j. One way to strengthen this notion of contraction would be to require that the maximum-fitness incumbent in the neighborhood of j be more fit than the maximum-fitness mutant in the neighborhood of j. This models the idea that each organism is trying to take over each place in its neighborhood, but only the most fit organism in the neighborhood of a vertex gets the privilege of taking it. If we adopt this notion of contraction for individual mutants, and require that all incumbent-mutant edges contract, then, as we show next, Theorems 6.1 and 6.2 still hold, and thus it is still impossible to obtain results such as Theorems 5.1 and 5.3 that hold for every 2-player, symmetric game.
In the proof of Theorem 6.1 we proved that F(i) is strictly less than F(j). Observe that the maximum-fitness mutant in the neighborhood of j must have fitness at least F(j). Also observe that there is only one incumbent in the neighborhood of j, namely i. So under this stronger notion of contraction, the edge (i, j) will not contract. Similarly, in the proof of Theorem 6.2, observe that the only mutant in the neighborhood of wi is wi itself, which has fitness F(t|s). Furthermore, the only incumbents in the neighborhood of wi are vi and xi, both of which have fitness F(s|t). By assumption, F(t|s) > F(s|t); thus, under this stronger notion of contraction, neither of the incumbent-mutant edges (vi, wi) and (xi, wi) will contract.

7. REFERENCES
[1] Elwyn R. Berlekamp, John Horton Conway, and Richard K.
Guy. Winning Ways for Your Mathematical Plays, volume 4. AK Peters, Ltd, March 2004.
[2] Jonas Björnerstedt and Karl H. Schlag. On the evolution of imitative behavior. Discussion Paper B-378, University of Bonn, 1996.
[3] L. E. Blume. The statistical mechanics of strategic interaction. Games and Economic Behavior, 5:387-424, 1993.
[4] L. E. Blume. The statistical mechanics of best-response strategy revision. Games and Economic Behavior, 11(2):111-145, November 1995.
[5] B. Bollobás. Random Graphs. Cambridge University Press, 2001.
[6] Michael Suk-Young Chwe. Communication and coordination in social networks. Review of Economic Studies, 67:1-16, 2000.
[7] Glenn Ellison. Learning, local interaction, and coordination. Econometrica, 61(5):1047-1071, Sept. 1993.
[8] I. Eshel, L. Samuelson, and A. Shaked. Altruists, egoists, and hooligans in a local interaction model. The American Economic Review, 88(1), 1998.
[9] Geoffrey R. Grimmett and David R. Stirzaker. Probability and Random Processes. Oxford University Press, 3rd edition, 2001.
[10] M. Jackson. A survey of models of network formation: Stability and efficiency. In Group Formation in Economics: Networks, Clubs and Coalitions. Cambridge University Press, 2004.
[11] S. Kakade, M. Kearns, J. Langford, and L. Ortiz. Correlated equilibria in graphical games. ACM Conference on Electronic Commerce, 2003.
[12] S. Kakade, M. Kearns, L. Ortiz, R. Pemantle, and S. Suri. Economic properties of social networks. Neural Information Processing Systems, 2004.
[13] M. Kearns, M. Littman, and S. Singh. Graphical models for game theory. Conference on Uncertainty in Artificial Intelligence, pages 253-260, 2001.
[14] E. Lieberman, C. Hauert, and M. A. Nowak. Evolutionary dynamics on graphs. Nature, 433:312-316, 2005.
[15] S. Morris. Contagion. Review of Economic Studies, 67(1):57-78, 2000.
[16] Karl H.
Schlag. Why imitate, and if so, how? Journal of Economic Theory, 78:130-156, 1998.
[17] J. M. Smith. Evolution and the Theory of Games. Cambridge University Press, 1982.
[18] William L. Vickery. How to cheat against a simple mixed strategy ESS. Journal of Theoretical Biology, 127:133-139, 1987.
[19] Jörgen W. Weibull. Evolutionary Game Theory. The MIT Press, 1995.

APPENDIX
A. GRAPHICAL AND CLASSICAL ESS
In this section we explore the conditions under which a graphical ESS is also a classical ESS. To do so, we state and prove two theorems that provide converses to each of the major theorems in Section 5.
A.1 Random Graphs, Adversarial Mutations
Theorem 5.1 states that if s is a classical ESS and G = {Gn,p}, where p = Ω(1/n^c) and 0 ≤ c < 1, then with probability approaching 1 as n → ∞, s is an ESS with respect to G. Here we show that, conversely, if s is an ESS with respect to G, then s is a classical ESS. In order to prove this we do not need the full generality of s being an ESS for G when p = Ω(1/n^c) with 0 ≤ c < 1; all we need is that s is an ESS for G when p = 1. In this case there are no longer any probabilistic events in the theorem statement. Also, since p = 1, each graph in G is a clique, so if one incumbent has higher fitness than one mutant, then all incumbents have higher fitness than all mutants. This gives rise to the following theorem.
Theorem A.1. Let F be any 2-player, symmetric game, and suppose s is a strategy for F and t ≠ s is a mutant strategy. Let G = {Kn}n≥0. If, as n → ∞, for any εt-linear family of mutants M = {Mn}n≥0 there exists an incumbent i and a mutant j such that F(i) > F(j), then s is a classical ESS of F.
The proof of this theorem analyzes the limiting behavior of the mutant population as the size of the cliques in G tends to infinity. It also shows how the definition of ESS given in Section 3 recovers the classical definition of ESS.
Proof. Since each graph in G
is a clique, every incumbent will have the same number of incumbent and mutant neighbors, and every mutant will have the same number of incumbent and mutant neighbors. Thus, all incumbents will have identical fitness and all mutants will have identical fitness. Next, one can construct an εt-linear mutant family M in which the fraction of mutants converges to ε, for any ε where εt > ε > 0. So for n large enough, the number of mutants in Kn will be arbitrarily close to ε·n. Thus, any mutant subset of size ε·n will result in all incumbents having fitness (1 − ε·n/(n−1))F(s|s) + (ε·n/(n−1))F(s|t), and all mutants having fitness (1 − (ε·n−1)/(n−1))F(t|s) + ((ε·n−1)/(n−1))F(t|t). Furthermore, by assumption the incumbent fitness must be higher than the mutant fitness. This implies that

(1 − ε·n/(n−1))F(s|s) + (ε·n/(n−1))F(s|t) > (1 − (ε·n−1)/(n−1))F(t|s) + ((ε·n−1)/(n−1))F(t|t)

holds as n → ∞, which in turn implies (1 − ε)F(s|s) + εF(s|t) > (1 − ε)F(t|s) + εF(t|t), for all ε where εt > ε > 0.

A.2 Adversarial Graphs, Random Mutations
Theorem 5.2 states that if s is a classical ESS for a 2-player, symmetric game F, where G is chosen adversarially subject to the constraint that the degree of each vertex is Ω(n^γ) (for any constant γ > 0), and mutants are chosen with probability ε, then s is an ESS with respect to F, G, and M. Here we show that if s is an ESS with respect to F, G, and M, then s is a classical ESS. All we will need to prove this is that s is an ESS with respect to G = {Kn}n≥0, that is, when each vertex has degree n − 1. As in Theorem A.1, since the graphs are cliques, if one incumbent has higher fitness than one mutant, then all incumbents have higher fitness than all mutants. Thus, the theorem below is also a converse to Theorem 5.3. (Recall that Theorem 5.3 uses a weaker notion of contraction that requires only one incumbent to have higher fitness than one mutant.)
Theorem
A.2. Let F be any 2-player, symmetric game, and suppose s is an incumbent strategy for F and t ≠ s is a mutant strategy. Let G = {Kn}n≥0. If, with probability approaching 1 as n → ∞, s is an ESS for G and a mutant family M = {Mn}n≥0 determined by labeling each vertex a mutant with probability ε, where εt > ε > 0, then s is a classical ESS of F.
This proof also analyzes the limiting behavior of the mutant population as the size of the cliques in G tends to infinity. Since the mutants are chosen randomly, we will use an argument similar to the proof that a sequence of random variables that converges in probability also converges in distribution. In this case the sequence of random variables will be the actual fraction of mutants in each Kn.
Proof. Fix any value of ε, where εt > ε > 0, and construct each Mn by labeling each vertex a mutant with probability ε. By the same argument as in the proof of Theorem A.1, if the actual number of mutants in Kn is denoted εn·n (so εn is the realized fraction of mutants), any mutant subset of size εn·n will result in all incumbents having fitness (1 − εn·n/(n−1))F(s|s) + (εn·n/(n−1))F(s|t), and all mutants having fitness (1 − (εn·n−1)/(n−1))F(t|s) + ((εn·n−1)/(n−1))F(t|t). This implies

lim_{n→∞} Pr(s is an ESS for Gn w.r.t.
εn·n mutants) = 1
⇒ lim_{n→∞} Pr[ (1 − εn·n/(n−1))F(s|s) + (εn·n/(n−1))F(s|t) > (1 − (εn·n−1)/(n−1))F(t|s) + ((εn·n−1)/(n−1))F(t|t) ] = 1
⇔ lim_{n→∞} Pr[ εn > (F(t|s) − F(s|s))/(F(s|t) − F(s|s) − F(t|t) + F(t|s)) + (F(s|s) − F(t|t))/n ] = 1.   (3)

By two simple applications of the Chernoff bound and an application of the union bound, one can show that the sequence of random variables {εn}n≥0 converges to ε in probability. Next, if we let Xn = −εn, X = −ε, b = −F(s|s) + F(t|t), and a = −(F(t|s) − F(s|s))/(F(s|t) − F(s|s) − F(t|t) + F(t|s)), then by Theorem A.3 below we get that lim_{n→∞} Pr(Xn < a + b/n) = Pr(X < a). Combining this with equation (3) gives Pr(ε > −a) = 1.
The proof of the following theorem is very similar to the proof that a sequence of random variables that converges in probability also converges in distribution. A good explanation of this can be found in [9], which is the basis for the argument below.
Theorem A.3. If {Xn}n≥0 is a sequence of random variables that converges in probability to the random variable X, and a and b are constants, then lim_{n→∞} Pr(Xn < a + b/n) = Pr(X < a).
Proof. By Lemma A.1 (see below) we have the following two inequalities:
Pr(X < a + b/n − τ) ≤ Pr(Xn < a + b/n) + Pr(|X − Xn| > τ),
Pr(Xn < a + b/n) ≤ Pr(X < a + b/n + τ) + Pr(|X − Xn| > τ).
Combining these gives
Pr(X < a + b/n − τ) − Pr(|X − Xn| > τ) ≤ Pr(Xn < a + b/n) ≤ Pr(X < a + b/n + τ) + Pr(|X − Xn| > τ).
There exists an n0 such that for all n > n0, |b/n| < τ, so the following holds for all n > n0:
Pr(X < a − 2τ) − Pr(|X − Xn| > τ) ≤ Pr(Xn < a + b/n) ≤ Pr(X < a + 2τ) + Pr(|X − Xn| > τ).
Take the lim_{n→∞} of both sides of both inequalities; since Xn
converges in probability to X,

Pr(X < a − 2τ) ≤ lim_{n→∞} Pr(Xn < a + b/n)   (4)
≤ Pr(X < a + 2τ).   (5)

Recall that X is a continuous random variable representing the fraction of mutants in an infinite-sized graph. So if we let FX(a) = Pr(X < a), we see that FX(a) is a cumulative distribution function of a continuous random variable, and is therefore continuous from the right. So lim_{τ↓0} FX(a − τ) = lim_{τ↓0} FX(a + τ) = FX(a). Thus if we take the lim_{τ↓0} of inequalities (4) and (5) we get Pr(X < a) = lim_{n→∞} Pr(Xn < a + b/n).
The following lemma is quite useful, as it expresses the cumulative distribution of one random variable Y in terms of the cumulative distribution of another random variable X and the difference between X and Y.
Lemma A.1. If X and Y are random variables, c ∈ ℝ, and τ > 0, then Pr(Y < c) ≤ Pr(X < c + τ) + Pr(|Y − X| > τ).
Proof.
Pr(Y < c) = Pr(Y < c, X < c + τ) + Pr(Y < c, X ≥ c + τ)
≤ Pr(Y < c | X < c + τ) Pr(X < c + τ) + Pr(|Y − X| > τ)
≤ Pr(X < c + τ) + Pr(|Y − X| > τ).
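To illustrate the clique fitness expressions used in the proof of Theorem A.1, the following sketch evaluates both expressions on Kn for a hypothetical example game (the payoffs F(s|s) = 3, F(s|t) = 1, F(t|s) = 2, F(t|t) = 2 are our own illustrative choices, under which s is a classical ESS) and checks that, for a mutant fraction ε, they converge to (1 − ε)F(s|s) + εF(s|t) and (1 − ε)F(t|s) + εF(t|t), recovering the classical ESS inequality.

```python
# Numerical check of the clique fitness expressions in the proof of
# Theorem A.1, for a hypothetical example game (payoffs chosen so that
# s is a classical ESS): F(s|s)=3, F(s|t)=1, F(t|s)=2, F(t|t)=2.
F = {("s", "s"): 3.0, ("s", "t"): 1.0, ("t", "s"): 2.0, ("t", "t"): 2.0}
eps = 0.1  # fraction of mutants

def incumbent_fitness(n):
    # (1 - eps*n/(n-1)) F(s|s) + (eps*n/(n-1)) F(s|t)
    m = eps * n  # number of mutants
    return (1 - m / (n - 1)) * F[("s", "s")] + (m / (n - 1)) * F[("s", "t")]

def mutant_fitness(n):
    # (1 - (eps*n - 1)/(n-1)) F(t|s) + ((eps*n - 1)/(n-1)) F(t|t)
    m = eps * n
    return (1 - (m - 1) / (n - 1)) * F[("t", "s")] + ((m - 1) / (n - 1)) * F[("t", "t")]

# Incumbents are fitter than mutants at every finite size...
for n in [10, 100, 10_000]:
    assert incumbent_fitness(n) > mutant_fitness(n)

# ...and the limits recover the classical ESS inequality.
inc_limit = (1 - eps) * F[("s", "s")] + eps * F[("s", "t")]
mut_limit = (1 - eps) * F[("t", "s")] + eps * F[("t", "t")]
assert abs(incumbent_fitness(10**7) - inc_limit) < 1e-3
assert abs(mutant_fitness(10**7) - mut_limit) < 1e-3
assert inc_limit > mut_limit
```

The same two functions, evaluated along any sequence of growing cliques, trace out the limiting argument in the proof.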
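Lemma A.1 itself can be sanity-checked by simulation. The sketch below uses arbitrary example choices of X, Y, c, and τ (all hypothetical, not taken from the text) and verifies the inequality Pr(Y < c) ≤ Pr(X < c + τ) + Pr(|Y − X| > τ) empirically.

```python
import random

# Monte Carlo illustration of Lemma A.1:
#   Pr(Y < c) <= Pr(X < c + tau) + Pr(|Y - X| > tau)
# The distributions of X and Y and the constants c, tau are arbitrary
# example choices.
rng = random.Random(1)
c, tau = 0.5, 0.1
trials = 50_000

lhs = rhs1 = rhs2 = 0
for _ in range(trials):
    x = rng.random()                # X ~ Uniform(0, 1)
    y = x + rng.uniform(-0.2, 0.2)  # Y is a noisy version of X
    lhs += (y < c)
    rhs1 += (x < c + tau)
    rhs2 += (abs(y - x) > tau)

# Empirical frequencies satisfy the lemma's inequality (with slack here,
# since Pr(|Y - X| > tau) is large for this noise level).
assert lhs / trials <= rhs1 / trials + rhs2 / trials
```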
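The graph construction in the proof sketch of Theorem 6.2 is small enough to verify directly. The sketch below uses hypothetical example payoffs (F(s|s) = 5, F(s|t) = 1, F(t|s) = 3, F(t|t) = 0, chosen by us so that s is a classical ESS while F(t|s) > F(s|t)), builds the clique-plus-paths graph, labels ui and wi mutant and vi and xi incumbent, and checks that neither (vi, wi) nor (xi, wi) contracts, since F(wi) = F(t|s) exceeds F(vi) = F(xi) = F(s|t).

```python
# Verify the Theorem 6.2 counterexample on a small instance: a clique on
# n/4 vertices u_0..u_{q-1}, with a path u_i - v_i - w_i - x_i attached
# to each u_i.  Example payoffs (our illustration): s is a classical ESS
# (F(s|s)=5 > F(t|s)=3) but F(t|s)=3 > F(s|t)=1.
F = {("s", "s"): 5.0, ("s", "t"): 1.0, ("t", "s"): 3.0, ("t", "t"): 0.0}
n = 16
q = n // 4  # clique size

adj = {}  # adjacency lists
def add_edge(a, b):
    adj.setdefault(a, []).append(b)
    adj.setdefault(b, []).append(a)

for i in range(q):            # clique on the u vertices
    for j in range(i + 1, q):
        add_edge(("u", i), ("u", j))
for i in range(q):            # attached paths u_i - v_i - w_i - x_i
    add_edge(("u", i), ("v", i))
    add_edge(("v", i), ("w", i))
    add_edge(("w", i), ("x", i))

# Labeling from the proof: u_i and w_i mutants, v_i and x_i incumbents
# (for concreteness we label every u and w vertex mutant).
strategy = {v: ("t" if v[0] in ("u", "w") else "s") for v in adj}

def fitness(v):
    # average payoff of v against its neighbors
    return sum(F[(strategy[v], strategy[w])] for w in adj[v]) / len(adj[v])

i = 0
assert fitness(("v", i)) == F[("s", "t")]  # v_i's neighbors (u_i, w_i) are mutants
assert fitness(("x", i)) == F[("s", "t")]  # x_i's only neighbor w_i is a mutant
assert fitness(("w", i)) == F[("t", "s")]  # w_i's neighbors (v_i, x_i) are incumbents
# Neither incumbent-mutant edge (v_i, w_i) nor (x_i, w_i) contracts:
assert not (fitness(("v", i)) > fitness(("w", i)))
assert not (fitness(("x", i)) > fitness(("w", i)))
```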
to the edges of an undirected graph or network.\nThis extension generalizes the classical setting, in which all pairs of organisms in an infinite population are equally likely to interact.\nThe classical setting can be viewed as the special case in which the underlying network is a clique.\nThere are many obvious reasons why one would like to examine more general graphs, the primary one being in that\nmany scenarios considered in evolutionary game theory, all interactions are in fact not possible.\nFor example, geographical restrictions may limit interactions to physically proximate pairs of organisms.\nMore generally, as evolutionary game theory has become a plausible model not only for biological interaction, but also economic and other kinds of interaction in which certain dynamics are more imitative than optimizing (see [2, 16] and chapter 4 of [19]), the network constraints may come from similarly more general sources.\nEvolutionary game theory on networks has been considered before, but not in the generality we will do so here (see Section 4).\nWe generalize the definition of an evolutionary stable strategy (ESS) to networks, and show a pair of complementary results that exhibit the power of randomization in our setting: subject to degree or edge density conditions, the classical ESS of any game are preserved when the graph is chosen randomly and the mutation set is chosen adversarially, or when the graph is chosen adversarially and the mutation set is chosen randomly.\nWe examine natural strengthenings of our generalized ESS definition, and show that similarly strong results are not possible for them.\nThe work described here is part of recent efforts examining the relationship between graph topology or structure and properties of equilibrium outcomes.\nPrevious works in this line include studies of the relationship of topology to properties of correlated equilibria in graphical games [11], and studies of price variation in graph-theoretic market exchange 
models [12].\nMore generally, this work contributes to the line of graph-theoretic models for game theory investigated in both computer science [13] and economics [10].\n2.\nCLASSICAL EGT\nThe fundamental concept of evolutionary game theory is the evolutionarily stable strategy (ESS).\nIntuitively, an ESS is a strategy such that if all the members of a population adopt it, then no mutant strategy could invade the population [17].\nTo make this more precise, we describe the basic model of evolutionary game theory, in which the notion of an ESS resides.\nThe standard model of evolutionary game theory considers an infinite population of organisms, each of which plays a strategy in a fixed, 2-player, symmetric game.\nThe game is defined by a fitness function F. All pairs of members of the infinite population are equally likely to interact with one another.\nIf two organisms interact, one playing strategy s\nand the other playing strategy t, the s-player earns a fitness of F (s | t) while the t-player earns a fitness of F (t | s).\nIn this infinite population of organisms, suppose there is a 1 \u2212 e fraction who play strategy s, and call these organisms incumbents; and suppose there is an a fraction who play t, and call these organisms mutants.\nAssume two organisms are chosen uniformly at random to play each other.\nThe strategy s is an ESS if the expected fitness of an organism playing s is higher than that of an organism playing t, for all t = s and all sufficiently small E.\nSince an incumbent will meet another incumbent with probability 1 \u2212 e and it will meet a mutant with probability e, we can calculate the expected fitness of an incumbent, which is simply (1 \u2212 e) F (s | s) + eF (s | t).\nSimilarly, the expected fitness of a mutant is (1 \u2212 e) F (t | s) + eF (t | t).\nThus we come to the formal definition of an ESS [19].\nA consequence of this definition is that for s to be an ESS, it must be the case that F (s | s)> F (t | s), for all strategies 
t.\nThis inequality means that s must be a best response to itself, and thus any ESS strategy s must also be a Nash equilibrium.\nIn general the notion of ESS is more restrictive than Nash equilibrium, and not all 2-player, symmetric games have an ESS.\nIn this paper our interest is to examine what kinds of network structure preserve the ESS strategies for those games that do have a standard ESS.\nFirst we must of course generalize the definition of ESS to a network setting.\n3.\nEGT ON GRAPHS\nIn our setting, we will no longer assume that two organisms are chosen uniformly at random to interact.\nInstead, we assume that organisms interact only with those in their local neighborhood, as defined by an undirected graph or network.\nAs in the classical setting (which can be viewed as the special case of the complete network or clique), we shall assume an infinite population, by which we mean we examine limiting behavior in a family of graphs of increasing size.\nBefore giving formal definitions, some comments are in order on what to expect in moving from the classical to the graph-theoretic setting.\nIn the classical (complete graph) setting, there exist many symmetries that may be broken in moving to the the network setting, at both the group and individual level.\nIndeed, such asymmetries are the primary interest in examining a graph-theoretic generalization.\nFor example, at the group level, in the standard ESS definition, one need not discuss any particular set of mutants of population fraction E.\nSince all organisms are equally likely to interact, the survival or fate of any specific mutant set is identical to that of any other.\nIn the network setting, this may not be true: some mutant sets may be better able to survive than others due to the specific topologies of their interactions in the network.\nFor instance, foreshadowing some of our analysis, if s is an ESS but F (t | t) is much larger than F (s | s) and F (s | t), a mutant set with a great deal of 
\"internal\" interaction (that is, edges between mutants) may be able to survive, whereas one without this may suffer.\nAt the level of individuals, in the classical setting, the assertion that one mutant dies implies that all mutants die, again by symmetry.\nIn the network setting, individual fates may differ within a group all playing a common strategy.\nThese observations imply that in examining ESS on networks we face definitional choices that were obscured in the classical model.\nIf G is a graph representing the allowed pairwise interactions between organisms (vertices), and u is a vertex of G playing strategy su, then the fitness of u is given by\nHere sv is the strategy being played by the neighbor v, and r (u) = {v G V: (u, v) G E}.\nOne can view the fitness of u as the average fitness u would obtain if it played each if its neighbors, or the expected fitness u would obtain if it were assigned to play one of its neighbors chosen uniformly at random.\nClassical evolutionary game theory examines an infinite, symmetric population.\nGraphs or networks are inherently finite objects, and we are specifically interested in their asymmetries, as discussed above.\nThus all of our definitions shall revolve around an infinite family G = {Gn} n \u00b0 0 of finite graphs Gn over n vertices, but we shall examine asymptotic (large n) properties of such families.\nWe first give a definition for a family of mutant vertex sets in such an infinite graph family to contract.\nA reasonable alternative would be to ask that the condition above hold for all mutants rather than all but o (n).\nNote also that we only require that a mutant have one incumbent neighbor of higher fitness in order to die; one might considering requiring more.\nIn Sections 6.1 and 6.2 we consider these stronger conditions and demonstrate that our results can no longer hold.\nIn order to properly define an ESS for an infinite family of finite graphs in a way that recovers the classical definition 
asymptotically in the case of the family of complete graphs, we first must give a definition that restricts attention to families of mutant vertices that are smaller than some invasion threshold εn, yet remain some constant fraction of the population.\nThis prevents \"invasions\" that survive merely by constituting a vanishing fraction of the population.\nDEFINITION 3.2.\nLet ε > 0, and let G = {Gn}∞n=0 be\nan infinite family of graphs, where Gn has n vertices.\nLet M = {Mn}∞n=0 be any family of (mutant) vertices in Gn.\nWe say that M is ε-linear if there exists an ε′, ε > ε′ > 0, such that for all sufficiently large n, εn > |Mn| > ε′n.\nWe can now give our definition for a strategy to be evolutionarily stable when employed by organisms interacting with their neighborhood in a graph.\nDEFINITION 3.3.\nLet G = {Gn}∞n=0 be an infinite family of graphs, where Gn has n vertices.\nLet F be any 2-player, symmetric game for which s is a strategy.\nWe say that s is an ESS with respect to F and G if for all mutant strategies t ≠ s, there exists an εt > 0 such that for any εt-linear family of mutant vertices M = {Mn}∞n=0 all playing t, for n sufficiently large, Mn contracts.\nThus, to violate the ESS property for G, one must witness a family of mutations M in which each Mn is an arbitrarily small but nonzero constant fraction of the population of Gn, but does not contract (i.e. every mutant set has a subset of linear size that survives all of its incumbent interactions).\nIn Section A.
1 we show that the definition given coincides with the classical one in the case where G is the family of complete graphs, in the limit of large n.\nWe note that even in the classical model, small sets of mutants were allowed to have greater fitness than the incumbents, as long as the size of the set was o (n) [18].\nIn the definition above there are three parameters: the game F, the graph family G and the mutation family M.\nOur main results will hold for any 2-player, symmetric game F.\nWe will also study two rather general settings for G and M: that in which G is a family of random graphs and M is arbitrary, and that in which G is nearly arbitrary and M is randomly chosen.\nIn both cases, we will see that, subject to conditions on degree or edge density (essentially forcing connectivity of G but not much more), for any 2-player, symmetric game, the ESS of the classical settings, and only those strategies, are always preserved.\nThus a common theme of these results is the power of randomization: as long as either the network itself is chosen randomly, or the mutation set is chosen randomly, classical ESS are preserved.\n4.\nRELATED WORK\nThere has been previous work that analyzes which strategies are resilient to mutant invasions with respect to various types of graphs.\nWhat sets our work apart is that the model we consider encompasses a significantly more general class of games and graph topologies.\nWe will briefly survey this literature and point out the differences in the previous models and ours.\nIn [8], [3], and [4], the authors consider specific families of graphs, such as cycles and lattices, where players play specific games, such as 2 \u00d7 2-games or k \u00d7 k-coordination games.\nIn these papers the authors specify a simple, local dynamic for players to improve their payoffs by changing strategies, and analyze what type of strategies will grow to dominate the population.\nThe model we propose is more general than both of these, as it encompasses a 
larger class of graphs as well as a richer set of games.\nAlso related to our work is that of [14], where the authors propose two models.\nThe first assumes organisms interact according to a weighted, undirected graph.\nHowever, the fitness of each organism is simply assigned and does not depend on the actions of each organism's neighborhood.\nThe second model has organisms arranged around a directed cycle, where neighbors play a 2 × 2-game.\nWith probability proportional to its fitness, an organism is chosen to reproduce by placing a replica of itself in its neighbor's position, thereby \"killing\" the neighbor.\nWe consider more general games than the first model and more general graphs than the second.\nFinally, the works most closely related to ours are [7], [15], and [6].\nThe authors consider 2-action, coordination games played by players in a general undirected graph.\nIn these three works, the authors specify a dynamic for a strategy to reproduce, and analyze properties of the graph that allow a strategy to overrun the population.\nHere again, one can see that our model is more general than these, as it allows for organisms to play any 2-player, symmetric game.\n5.\nNETWORKS PRESERVING ESS\nWe now proceed to state and prove two complementary results in the network ESS model defined in Section 3.\nFirst, we consider a setting where the graphs are generated via the Gn,p model of Erdős and Rényi [5].\nIn this model, every pair of vertices is joined by an edge independently and with probability p (where p may depend on n).\nThe mutant set, however, will be constructed adversarially (subject to the linear size constraint given by Definition 3.3).\nFor these settings, we show that for any 2-player, symmetric game, s is a classical ESS of that game, if and only if s is an ESS for {Gn,p}∞n=0, where p = Ω (1/n^c) and 0 ≤ c < 1, and any mutant family {Mn}∞n=0, where each Mn has linear size.\nWe note that under these settings, if we let c = 1
− γ for small γ > 0, the expected number of edges in Gn is n^{1+γ} or larger; that is, just superlinear in the number of vertices and potentially far smaller than O (n^2).\nIt is easy to convince oneself that once the graphs have only a linear number of edges, we are flirting with disconnectedness, and there may simply be large mutant sets that can survive in isolation due to the lack of any incumbent interactions in certain games.\nThus in some sense we examine the minimum plausible edge density.\nThe second result is a kind of dual to the first, considering a setting where the graphs are chosen arbitrarily (subject to conditions) but the mutant sets are chosen randomly.\nIt states that for any 2-player, symmetric game, s is a classical ESS for that game, if and only if s is an ESS for any {Gn = (Vn, En)}∞n=0 in which for all v ∈ Vn, deg (v) = Ω (n^γ) (for any constant γ > 0), and a family of mutant sets {Mn}∞n=0 that is chosen randomly (that is, in which each organism is labeled a mutant with constant probability ε > 0).\nThus, in this setting we again find that classical ESS are preserved subject to edge density restrictions.\nSince the degree assumption is somewhat strong, we also prove another result which only assumes that |En| ≥ n^{1+γ}, and shows that there must exist at least 1 mutant with an incumbent neighbor of higher fitness (as opposed to showing that all but o (n) mutants have an incumbent neighbor of higher fitness).\nAs will be discussed, this rules out \"stationary\" mutant invasions.\n5.1 Random Graphs, Adversarial Mutations\nNow we state and prove a theorem which shows that if s is a classical ESS, then s will be an ESS for random graphs, where a linear sized set of mutants is chosen by an adversary.\nTHEOREM 5.1.\nLet F be any 2-player, symmetric game, and suppose s is a classical ESS of F.
Let the infinite graph family {Gn}∞n=0 be drawn according to Gn,p, where p = Ω (1/n^c) and 0 ≤ c < 1.\nThen with probability 1, s is an ESS.\nThe main idea of the proof is to divide mutants into 2 categories, those with \"normal\" fitness and those with \"abnormal\" fitness.\nFirst, we show all but o (n) of the population (incumbent or mutant) have an incumbent neighbor of normal fitness.\nThis will imply that all but o (n) of the mutants of normal fitness have an incumbent neighbor of higher fitness.\nThe vehicle for proving this is Theorem 2.15 of [5], which gives an upper bound on the number of vertices not connected to a sufficiently large set.\nThis theorem assumes that the size of this large set is known with equality, which necessitates the union bound argument below.\nSecondly, we show that there can be at most o (n) mutants with abnormal fitness.\nSince there are so few of them, even if none of them have an incumbent neighbor of higher fitness, s will still be an ESS with respect to F and G. PROOF.\n(Sketch) Let t ≠ s be the mutant strategy.\nSince s is a classical ESS, there exists an εt such that (1 − ε) F (s | s) + εF (s | t) > (1 − ε) F (t | s) + εF (t | t), for all 0 < ε < εt.\nLet M be any mutant family that is εt-linear.\nThus for any fixed value of n that is sufficiently large, there exists an ε such that |Mn| = εn and εt > ε > 0.\nAlso, let In = Vn \\ Mn and let I′ ⊆ In be the set of incumbents that have fitness in the range (1 ± τ) [(1 − ε) F (s | s) + εF (s | t)] for some constant τ, 0 < τ < 1/6.\nLemma 5.1 below shows that |I′| ≥ (1 − ε) n − 24 log n / (τ^2 p).\n(For the sake of clarity we suppress the subscript n on the sets I and I′.)
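Spelled out in display form, the two normal-fitness ranges used in this proof, and the separation between them that choosing τ small enough must guarantee, are as follows (this is a summary of quantities already defined in the proof, with ε and τ as above):

```latex
% Incumbents of normal fitness (the set I'):
F(i) \in (1 \pm \tau)\left[(1-\epsilon)F(s \mid s) + \epsilon F(s \mid t)\right]
% Mutants of normal fitness:
F(j) \in (1 \pm \tau)\left[(1-\epsilon)F(t \mid s) + \epsilon F(t \mid t)\right]
% Since s is a classical ESS, for sufficiently small \tau > 0:
(1+\tau)\left[(1-\epsilon)F(t \mid s) + \epsilon F(t \mid t)\right]
  \;<\; (1-\tau)\left[(1-\epsilon)F(s \mid s) + \epsilon F(s \mid t)\right]
```

The last inequality is exactly what makes every normal-fitness mutant attached to a normal-fitness incumbent less fit than that incumbent.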
The union bound gives us Equation 1, a sum over the possible sizes of I′.\nLetting δ = n^{−γ} for some γ > 0 gives δn = o (n).\nWe will apply Theorem 2.15 of [5] to the summand on the right hand side of Equation 1.\nIf we let γ = (1 − c)/2, and combine this with the fact that 0 ≤ c < 1, all of the requirements of this theorem will be satisfied (details omitted).\nNow when we apply this theorem to Equation 1, we get Equation 2.\nThis is because Equation 2 has only 24 log n / (τ^2 p) terms, and Theorem 2.15 of [5] gives a bound on each term (details omitted).\nThus we have shown, with probability tending to 1 as n → ∞, at most o (n) individuals are not attached to an incumbent which has fitness in the range (1 ± τ) [(1 − ε) F (s | s) + εF (s | t)].\nThis implies that the number of mutants of approximately normal fitness, not attached to an incumbent of approximately normal fitness, is also o (n).\nNow those mutants of approximately normal fitness that are attached to an incumbent of approximately normal fitness have fitness in the range (1 ± τ) [(1 − ε) F (t | s) + εF (t | t)].\nThe incumbents that they are attached to have fitness in the range (1 ± τ) [(1 − ε) F (s | s) + εF (s | t)].\nSince s is an ESS of F, we know (1 − ε) F (s | s) + εF (s | t) > (1 − ε) F (t | s) + εF (t | t), thus if we choose τ small enough, we can ensure that all but o (n) mutants of normal fitness have a neighboring incumbent of higher fitness.\nFinally by Lemma 5.1, we know there are at most o (n) mutants of abnormal fitness.\nSo even if all of them are more fit than their respective incumbent neighbors, we have shown all but o (n) of the mutants have an incumbent neighbor of higher fitness.\nWe now state and prove the lemma used in the proof above.\nLEMMA 5.1.\nFor almost every graph Gn,p with (1 − ε) n incumbents, all but 24 log n / (δ^2 p) incumbents have fitness in the range (1 ± δ) [(1 − ε) F (s | s) + εF (s | t)], where p = Ω (1/n^c) and ε, δ and c are constants satisfying 0 < ε < 1, 0
< δ < 1/6, 0 ≤ c < 1.\nSimilarly, under the same assumptions, all but 24 log n / (δ^2 p) mutants have fitness in the range (1 ± δ) [(1 − ε) F (t | s) + εF (t | t)].\nPROOF.\nWe define the mutant degree of a vertex to be the number of mutant neighbors of that vertex, and incumbent degree analogously.\nObserve that the only way for an incumbent to have fitness far from its expected value of (1 − ε) F (s | s) + εF (s | t) is if it has a fraction of mutant neighbors either much higher or much lower than ε. Theorem 2.14 of [5] gives us a bound on the number of such incumbents.\nIt states that the number of incumbents with mutant degree outside the range (1 ± δ) p |M| is at most 12 log n / (δ^2 p).\nBy the same theorem, the number of incumbents with incumbent degree outside the range (1 ± δ) p |I| is at most 12 log n / (δ^2 p).\nFrom the linearity of fitness as a function of the fraction of mutant or incumbent neighbors, one can show that for those incumbents with mutant and incumbent degree in the expected range, their fitness is within a constant factor of (1 − ε) F (s | s) + εF (s | t), where that constant goes to 1 as n tends to infinity and δ tends to 0.\nThe proof for the mutant case is analogous.\nWe note that if in the statement of Theorem 5.1 we let c = 0, then p = 1.\nThis, in turn, makes G = {Kn}∞n=0, where Kn is a clique of n vertices.\nThen for any Kn all of the incumbents will have identical fitness and all of the mutants will have identical fitness.\nFurthermore, since s was an ESS for G, the incumbent fitness will be higher than the mutant fitness.\nFinally, one can show that as n → ∞, the incumbent fitness converges to (1 − ε) F (s | s) + εF (s | t), and the mutant fitness converges to (1 − ε) F (t | s) + εF (t | t).\nIn other words, s must be a classical ESS, providing a converse to Theorem 5.1.\nWe rigorously present this argument in Section A.
1.\n5.2 Adversarial Graphs, Random Mutations\nWe now move on to our second main result.\nHere we show that if the graph family, rather than being chosen randomly, is arbitrary subject to a minimum degree requirement, and the mutation sets are randomly chosen, classical ESS are again preserved.\nA modified notion of ESS allows us to considerably weaken the degree requirement to a minimum edge density requirement.\nTHEOREM 5.2.\nLet G = {Gn = (Vn, En)}∞n=0 be an infinite family of graphs in which for all v ∈ Vn, deg (v) = Ω (n^γ) (for any constant γ > 0).\nLet F be any 2-player, symmetric game, and suppose s is a classical ESS of F. Let t be any mutant strategy, and let the mutant family M = {Mn}∞n=0 be chosen randomly by labeling each vertex a mutant with constant probability ε, where εt > ε > 0.\nThen with probability 1, s is an ESS with respect to F, G and M.\nPROOF.\nLet t ≠ s be the mutant strategy and let X be the event that every incumbent has fitness within the range (1 ± δ) [(1 − ε) F (s | s) + εF (s | t)], for some constant δ > 0 to be specified later.\nSimilarly, let Y be the event that every mutant has fitness within the range (1 ± δ) [(1 − ε) F (t | s) + εF (t | t)].\nSince Pr (X ∧ Y) = 1 − Pr (¬X ∨ ¬Y), we proceed by showing Pr (¬X ∨ ¬Y) = o (1).\n¬X is the event that there exists an incumbent with fitness outside the range (1 ± δ) [(1 − ε) F (s | s) + εF (s | t)].\nIf degM (v) denotes the number of mutant neighbors of v and degI (v) denotes the number of incumbent neighbors of v, then an incumbent i has fitness (degI (i) / deg (i)) F (s | s) + (degM (i) / deg (i)) F (s | t).\nSince F (s | s) and F (s | t) are fixed quantities, the only variation in an incumbent's fitness can come from variation in the terms degI (i) / deg (i) and degM (i) / deg (i).\nOne can use the Chernoff bound followed by the union bound to show that for any incumbent i, the probability that its fitness falls outside the stated range is exponentially small in dM, where dM = min_{j∈M} deg (j) and 0 < δ < 1/2.\nThus, by the union bound, Pr (¬X ∨ ¬Y) is at most n times a quantity exponentially small in d,\nwhere d
= min_{v∈V} deg (v) and 0 < δ < 1/2.\nSince deg (v) = Ω (n^γ) for all v ∈ Vn, and ε, δ and γ are all constants greater than 0, Pr (¬X ∨ ¬Y) = o (1).\nThus, we can choose δ small enough such that (1 + δ) [(1 − ε) F (t | s) + εF (t | t)] < (1 − δ) [(1 − ε) F (s | s) + εF (s | t)], and then choose n large enough such that with probability 1 − o (1), every incumbent will have fitness in the range (1 ± δ) [(1 − ε) F (s | s) + εF (s | t)], and every mutant will have fitness in the range (1 ± δ) [(1 − ε) F (t | s) + εF (t | t)].\nSo with high probability, every incumbent will have a higher fitness than every mutant.\nBy arguments similar to those following the proof of Theorem 5.1, if we let G = {Kn}∞n=0, each incumbent will have the same fitness and each mutant will have the same fitness.\nFurthermore, since s is an ESS for G, the incumbent fitness must be higher than the mutant fitness.\nHere again, one has to show that as n → ∞, the incumbent fitness converges to (1 − ε) F (s | s) + εF (s | t), and the mutant fitness converges to (1 − ε) F (t | s) + εF (t | t).\nObserve that the exact fraction of mutants in Vn is now a random variable.\nSo to prove this convergence we use an argument similar to one that is used to prove that a sequence of random variables that converges in probability also converges in distribution (details omitted).\nThis in turn establishes that s must be a classical ESS, and we thus obtain a converse to Theorem 5.2.\nThis argument is made rigorous in Section A.
2.\nThe assumption on the degree of each vertex of Theorem 5.2 is rather strong.\nThe following theorem relaxes this requirement and only necessitates that every graph have n^{1+γ} edges, for some constant γ > 0, in which case it shows there will always be at least 1 mutant with an incumbent neighbor of higher fitness.\nA strategy that is an ESS in this weakened sense will essentially rule out stable, static sets of mutant invasions, but not more complex invasions.\nAn example of more complex invasions are mutant sets that survive, but only by perpetually \"migrating\" through the graph under some natural evolutionary dynamics, akin to \"gliders\" in the well-known Game of Life [1].\nTHEOREM 5.3.\nLet F be any game, let s be a classical ESS of F, and let t ≠ s be a mutant strategy.\nFor any graph family G = {Gn = (Vn, En)}∞n=0 in which |En| ≥ n^{1+γ} (for any constant γ > 0), and any mutant family M = {Mn}∞n=0 which is determined by labeling each vertex a mutant with probability ε, where εt > ε > 0, the probability that there exists a mutant with an incumbent neighbor of higher fitness approaches 1 as n → ∞.\nPROOF.\n(Sketch) The main idea behind the proof is to show that with high probability, over only the choice of mutants, there will be an incumbent-mutant edge in which both vertices have high degree.\nIf their degree is high enough, we can show that close to an ε fraction of their neighbors are mutants, and thus their fitnesses are very close to what we expect them to be in the classical case.\nSince s is an ESS, the fitness of the incumbent will be higher than the mutant.\nWe call an edge (i, j) ∈ En a g (n)-barbell if deg (i) ≥ g (n) and deg (j) ≥ g (n).\nSuppose Gn has at most h (n) edges that are g (n)-barbells.\nThis means there are at least |En| − h (n) edges in which at least one vertex has degree at most g (n).\nWe call these vertices light vertices.\nLet ℓ (n) be the number of light vertices in Gn.\nObserve that |En| − h (n) ≤ ℓ (n) g (n).\nThis is because each
light vertex is incident on at most g (n) edges.\nThis gives us that |En| ≤ h (n) + ℓ (n) g (n) ≤ h (n) + ng (n).\nSo if we choose h (n) and g (n) such that h (n) + ng (n) = o (n^{1+γ}), then |En| = o (n^{1+γ}).\nThis contradicts the assumption that |En| = Ω (n^{1+γ}).\nThus, subject to the above constraint on h (n) and g (n), Gn must contain at least h (n) edges that are g (n)-barbells.\nNow let Hn denote the subgraph induced by the barbell edges of Gn.\nNote that regardless of the structure of Gn, there is no reason that Hn should be connected.\nThus, let m be the number of connected components of Hn, and let c1, c2,..., cm be the number of vertices in each of these connected components.\nNote that since Hn is an edge-induced subgraph we have ck ≥ 2 for all components k. Let us choose the mutant set by first flipping the vertices in Hn only.\nWe now show that the probability, with respect to the random mutant set, that none of the components of Hn have an incumbent-mutant edge is exponentially small in n. Let An be the event that every component of Hn contains only mutants or only incumbents.\nThen algebraic manipulations can establish an upper bound on Pr (An) that, for ε sufficiently small, decreases exponentially with Σ_{k=1}^m ck.\nFurthermore, since a component on ck vertices contains at most ck (ck − 1) / 2 edges (each component is at most a clique), and the components of Hn contain at least h (n) edges in total, one can show that Σ_{k=1}^m ck ≥ √(h (n)).\nThus, as long as h (n) → ∞ with n, the probability that all components are uniformly labeled will go to 0.\nNow assuming that there exists a non-uniformly labeled component, by construction that component contains an edge (i, j), where i is an incumbent and j is a mutant, that is a g (n)-barbell.\nWe also assume that the h (n) vertices already labeled have been done so arbitrarily, but that the remaining g (n) − h (n) vertices neighboring i and j are labeled mutants independently with probability ε.\nThen via a standard Chernoff bound argument, one can show that with high probability, the fraction of mutants neighboring i and the fraction of mutants neighboring j is in the range (1 ± τ) ε (g (n) − h (n)) / g
(n).\nSimilarly, one can show that the fraction of incumbents neighboring i and the fraction of incumbents neighboring j is in the range 1 − (1 ± τ) ε (g (n) − h (n)) / g (n).\nSince s is an ESS, there exists a ζ > 0 such that (1 − ε) F (s | s) + εF (s | t) ≥ (1 − ε) F (t | s) + εF (t | t) + ζ.\nIf we choose g (n) and h (n) = o (g (n)) appropriately (subject to the earlier constraint h (n) + ng (n) = o (n^{1+γ})), we can choose n large enough and τ small enough to force F (i) > F (j), as desired.\n6.\nLIMITATIONS OF STRONGER MODELS\nIn this section we show that if one tried to strengthen the model described in Section 3 in two natural ways, one would not be able to prove results as strong as Theorems 5.1 and 5.2, which hold for every 2-player, symmetric game.\n6.1 Stronger Contraction for the Mutant Set\nIn Section 3 we alluded to the fact that we made certain design decisions in arriving at Definitions 3.1, 3.2 and 3.3.\nOne such decision was to require that all but o (n) mutants have incumbent neighbors of higher fitness.\nInstead, we could have required that all mutants have an incumbent neighbor of higher fitness.\nThe two theorems in this subsection show that if one were to strengthen our notion of contraction for the mutant set, given by Definition 3.1, in this way, it would be impossible to prove theorems analogous to Theorems 5.1 and 5.3.\nRecall that Definition 3.1 gave the notion of contraction for a linear sized subset of mutants.\nIn what follows, we will say an edge (i, j) contracts if i is an incumbent, j is a mutant, and F (i) > F (j).\nAlso, recall that Theorem 5.1 stated that if s is a classical ESS, then it is an ESS for random graphs with adversarial mutations.\nNext, we prove that if we instead required every incumbent-mutant edge to contract, this need not be the case.\nPROOF.\n(Sketch) With probability approaching 1 as n → ∞, there exists a vertex j where deg (j) is arbitrarily close to εn.\nSo label j mutant, label one of its neighbors incumbent, denoted i, and label the rest of j's neighborhood mutant.\nAlso, label all of i's neighbors
incumbent, with the exception of j and j's neighbors (which were already labeled mutant).\nIn this setting, one can show that F (j) will be arbitrarily close to F (t | t) and F (i) will be a convex combination of F (s | s) and F (s | t), which are both strictly less than F (t | t).\nTheorem 5.3 stated that if s is a classical ESS, then for graphs where |En| ≥ n^{1+γ}, for some γ > 0, and where each organism is labeled a mutant with probability ε, one edge must contract.\nBelow we show that, for certain graphs and certain games, there will always exist one edge that will not contract.\nLet the vertices u1,..., u_{n/4} form a clique.\nThen, for each ui, i ∈ [n/4], add edges (ui, vi), (vi, wi) and (wi, xi).\nWith probability 1 as n → ∞, there exists an i such that ui and wi are mutants and vi and xi are incumbents.\nObserve that F (vi) = F (xi) = F (s | t) and F (wi) = F (t | s).\n6.2 Stronger Contraction for Individuals\nThe model of Section 3 requires that for an edge (i, j) to contract, the fitness of i must be greater than the fitness of j.\nOne way to strengthen this notion of contraction would be to require that the maximum fitness incumbent in the neighborhood of j be more fit than the maximum fitness mutant in the neighborhood of j.\nThis models the idea that each organism is trying to take over each place in its neighborhood, but only the most fit organism in the neighborhood of a vertex gets the privilege of taking it.\nIf we adopt this notion of contraction for individual mutants, and require that all incumbent-mutant edges contract, we will next show that Theorems 6.1 and 6.2 still hold, and thus it is still impossible to get results such as Theorems 5.1 and 5.3 which hold for every 2-player, symmetric game.\nIn the proof of Theorem 6.1 we proved that F (i) is strictly less than F (j).\nObserve that the maximum fitness mutant in the neighborhood of j must have fitness at least F (j).\nAlso observe that there is only 1 incumbent in the neighborhood of j, namely i.\nSo under this stronger notion
of contraction, the edge (i, j) will not contract.\nSimilarly, in the proof of Theorem 6.2, observe that the only mutant in the neighborhood of wi is wi itself, which has fitness F (t | s).\nFurthermore, the only incumbents in the neighborhood of wi are vi and xi, both of which have fitness F (s | t).\nBy assumption, F (t | s) > F (s | t), thus, under this stronger notion of contraction, neither of the incumbent-mutant edges, (vi, wi) and (xi, wi), will contract.\n7.\nREFERENCES\nAPPENDIX A. GRAPHICAL AND CLASSICAL ESS\nIn this section we explore the conditions under which a graphical ESS is also a classical ESS.\nTo do so, we state and prove two theorems which provide converses to each of the major theorems in Section 5.\nA.1 Random Graphs, Adversarial Mutations\nTheorem 5.1 states that if s is a classical ESS and G = {Gn,p}, where p = Ω (1/n^c) and 0 ≤ c < 1, then with probability 1 as n → ∞, s is an ESS with respect to G.\nHere we show that if s is an ESS with respect to G, then s is a classical ESS.\nIn order to prove this theorem, we do not need the full generality of s being an ESS for G when p = Ω (1/n^c) where 0 ≤ c < 1.\nAll we need is s to be an ESS for G when p = 1.\nIn this case there are no more probabilistic events in the theorem statement.\nAlso, since p = 1 each graph in G is a clique, so if one incumbent has a higher fitness than one mutant, then all incumbents have higher fitness than all mutants.\nThis gives rise to the following theorem.\nTHEOREM A.
1.\nLet F be any 2-player, symmetric game, and suppose s is a strategy for F and t ≠ s is a mutant strategy.\nLet G = {Kn}∞n=0.\nIf, as n → ∞, for any εt-linear family of mutants M = {Mn}∞n=0, there exists an incumbent i and a mutant j such that F (i) > F (j), then s is a classical ESS of F.\nThe proof of this theorem analyzes the limiting behavior of the mutant population as the size of the cliques in G tends to infinity.\nIt also shows how the definition of ESS given in Section 3 recovers the classical definition of ESS.\nPROOF.\nSince each graph in G is a clique, every incumbent will have the same number of incumbent and mutant neighbors, and every mutant will have the same number of incumbent and mutant neighbors.\nThus, all incumbents will have identical fitness and all mutants will have identical fitness.\nNext, one can construct an εt-linear mutant family M, where the fraction of mutants converges to ε for any ε, where εt > ε > 0.\nSo for n large enough, the number of mutants in Kn will be arbitrarily close to εn.\nThus, any mutant subset of size εn will result in all incumbents having fitness (1 − εn/(n − 1)) F (s | s) + (εn/(n − 1)) F (s | t), and all mutants having fitness (1 − (εn − 1)/(n − 1)) F (t | s) + ((εn − 1)/(n − 1)) F (t | t).\nFurthermore, by assumption the incumbent fitness must be higher than the mutant fitness.\nTaking the limit as n → ∞ of the resulting inequality implies (1 − ε) F (s | s) + εF (s | t) > (1 − ε) F (t | s) + εF (t | t), for all ε, where εt > ε > 0.\nA.
2 Adversarial Graphs, Random Mutations\nTheorem 5.2 states that if s is a classical ESS for a 2-player, symmetric game F, where G is chosen adversarially subject to the constraint that the degree of each vertex is Ω (n^γ) (for any constant γ > 0), and mutants are chosen with probability ε, then s is an ESS with respect to F, G, and M.\nHere we show that if s is an ESS with respect to F, G, and M, then s is a classical ESS.\nAll we will need to prove this is that s is an ESS with respect to G = {Kn}∞n=0, that is, when each vertex has degree n − 1.\nAs in Theorem A.1, since the graphs are cliques, if one incumbent has higher fitness than one mutant, then all incumbents have higher fitness than all mutants.\nThus, the theorem below is also a converse to Theorem 5.3.\n(Recall that Theorem 5.3 uses a weaker notion of contraction that requires only one incumbent to have higher fitness than one mutant.)\nTHEOREM A. 2.\nLet F be any 2-player symmetric game, and suppose s is an incumbent strategy for F and t ≠ s is a mutant strategy.\nLet G = {Kn}∞n=0.\nIf with probability 1 as n → ∞, s is an ESS for G and a mutant family M = {Mn}∞n=0, which is determined by labeling each vertex a mutant with probability ε, where εt > ε > 0, then s is a classical ESS of F.\nThis proof also analyzes the limiting behavior of the mutant population as the size of the cliques in G tends to infinity.\nSince the mutants are chosen randomly we will use an argument similar to the proof that a sequence of random variables that converges in probability also converges in distribution.\nIn this case the sequence of random variables will be the actual fraction of mutants in each Kn.\nPROOF.\nFix any value of ε, where εt > ε > 0, and construct each Mn by labeling a vertex a mutant with probability ε.\nBy the same argument as in the proof of Theorem A.
1, if the actual number of mutants in Kn is denoted by εn n (where εn is the realized fraction of mutants), any mutant subset of size εn n will result in all incumbents having fitness (1 − εn n/(n − 1)) F (s | s) + (εn n/(n − 1)) F (s | t), and in all mutants having fitness (1 − (εn n − 1)/(n − 1)) F (t | s) + ((εn n − 1)/(n − 1)) F (t | t).\nLet X be a continuous random variable representing the fraction of mutants in an infinite sized graph.\nIf we let FX (a) = Pr (X < a), we see that FX (a) is a cumulative distribution function of a continuous random variable, and is therefore continuous from the right.\nBy two simple applications of the Chernoff bound and an application of the union bound, one can show the sequence of random variables {εn}∞n=0 converges to ε in probability.\nNext, if we let Xn = −εn, X = −ε, b = −F (s | s) + F (t | t), and a = −(F (t | s) − F (s | s)) / (F (s | t) − F (s | s) − F (t | t) + F (t | s)), by Theorem A. 3 below, we get that lim_{n→∞} Pr (Xn < a + b/n) = Pr (X < a).\nCombining this with Equation 3, Pr (ε > −a) = 1.\nThe proof of the following theorem is very similar to the proof that a sequence of random variables that converges in probability also converges in distribution.\nA good explanation of this can be found in [9], which is the basis for the argument below.\nTHEOREM A. 3.\nIf {Xn}∞n=0 is a sequence of random variables that converge in probability to the random variable X, and a and b are constants, then lim_{n→∞} Pr (Xn < a + b/n) = Pr (X < a).\nThe following lemma is quite useful, as it expresses the cumulative distribution of one random variable Y in terms of the cumulative distribution of another random variable X and the difference between X and Y.\nLEMMA A.
1.\nIf X and Y are random variables, and c and ζ > 0 are constants, then Pr (Y < c) ≤ Pr (X < c + ζ) + Pr (|X − Y| > ζ).\nImproving Web Search Ranking by Incorporating User Behavior Information Eugene Agichtein Microsoft Research eugeneag@microsoft.com Eric Brill Microsoft Research brill@microsoft.com Susan Dumais Microsoft Research sdumais@microsoft.com ABSTRACT We show that incorporating user behavior data can significantly improve ordering of top results in a real web search setting.\nWe examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features.\nWe report results of a large scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine.\nWe show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking
algorithm by as much as 31% relative to the original performance.\nCategories and Subject Descriptors H.3.3 Information Search and Retrieval - Relevance feedback, search process; H.3.5 Online Information Services - Web-based services.\nGeneral Terms Algorithms, Measurement, Experimentation 1.\nINTRODUCTION Millions of users interact with search engines daily.\nThey issue queries, follow some of the links in the results, click on ads, spend time on pages, reformulate their queries, and perform other actions.\nThese interactions can serve as a valuable source of information for tuning and improving web search result ranking and can complement more costly explicit judgments.\nImplicit relevance feedback for ranking and personalization has become an active area of research.\nRecent work by Joachims and others exploring implicit feedback in controlled environments has shown the value of incorporating implicit feedback into the ranking process.\nOur motivation for this work is to understand how implicit feedback can be used in a large-scale operational environment to improve retrieval.\nHow does it compare to and complement evidence from page content, anchor text, or link-based features such as inlinks or PageRank?\nWhile it is intuitive that user interactions with the web search engine should reveal at least some information that could be used for ranking, estimating user preferences in real web search settings is a challenging problem, since real user interactions tend to be more noisy than commonly assumed in the controlled settings of previous studies.\nOur paper explores whether implicit feedback can be helpful in realistic environments, where user feedback can be noisy (or adversarial) and a web search engine already uses hundreds of features and is heavily tuned.\nTo this end, we explore different approaches for ranking web search results using real user behavior obtained as part of normal interactions with the web search engine.\nThe specific contributions of
this paper include: \u2022 Analysis of alternatives for incorporating user behavior into web search ranking (Section 3).\n\u2022 An application of a robust implicit feedback model derived from mining millions of user interactions with a major web search engine (Section 4).\n\u2022 A large scale evaluation over real user queries and search results, showing significant improvements derived from incorporating user feedback (Section 6).\nWe summarize our findings and discuss extensions to the current work in Section 7, which concludes the paper.\n2.\nBACKGROUND AND RELATED WORK Ranking search results is a fundamental problem in information retrieval.\nMost common approaches primarily focus on similarity of query and a page, as well as the overall page quality [3,4,24].\nHowever, with increasing popularity of search engines, implicit feedback (i.e., the actions users take when interacting with the search engine) can be used to improve the rankings.\nImplicit relevance measures have been studied by several research groups.\nAn overview of implicit measures is compiled in Kelly and Teevan [14].\nThis research, while developing valuable insights into implicit relevance measures, was not applied to improve the ranking of web search results in realistic settings.\nClosely related to our work, Joachims [11] collected implicit measures in place of explicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions.\nFox et al. 
[8] explored the relationship between implicit and explicit measures in Web search, and developed Bayesian models to correlate implicit measures and explicit relevance judgments for both individual queries and search sessions.\nThis work considered a wide range of user behaviors (e.g., dwell time, scroll time, reformulation patterns) in addition to the popular clickthrough behavior.\nHowever, the modeling effort was aimed at predicting explicit relevance judgments from implicit user actions and not specifically at learning ranking functions.\nOther studies of user behavior in web search include Pharo and J\u00e4rvelin [19], but were not directly applied to improve ranking.\nMore recently, Joachims et al. [12] presented an empirical evaluation of interpreting clickthrough evidence.\nBy performing eye tracking studies and correlating predictions of their strategies with explicit ratings, the authors showed that it is possible to accurately interpret clickthroughs in a controlled, laboratory setting.\nUnfortunately, the extent to which previous research applies to real-world web search is unclear.\nAt the same time, while recent work (e.g., [26]) on using clickthrough information for improving web search ranking is promising, it captures only one aspect of the user interactions with web search engines.\nWe build on existing research to develop robust user behavior interpretation techniques for the real web search setting.\nInstead of treating each user as a reliable expert, we aggregate information from multiple, unreliable, user search session traces, as we describe in the next two sections.\n3.\nINCORPORATING IMPLICIT FEEDBACK We consider two complementary approaches to ranking with implicit feedback: (1) treating implicit feedback as independent evidence for ranking results, and (2) integrating implicit feedback features directly into the ranking algorithm.\nWe describe the two general ranking approaches next.\nThe specific implicit feedback features are described 
in Section 4, and the algorithms for interpreting and incorporating implicit feedback are described in Section 5.\n3.1 Implicit Feedback as Independent Evidence The general approach is to re-rank the results obtained by a web search engine according to observed clickthrough and other user interactions for the query in previous search sessions.\nEach result is assigned a score according to expected relevance\/user satisfaction based on previous interactions, resulting in some preference ordering based on user interactions alone.\nWhile there has been significant work on merging multiple rankings, we adapt a simple and robust approach of ignoring the original rankers' scores, and instead simply merge the rank orders.\nThe main reason for ignoring the original scores is that since the feature spaces and learning algorithms are different, the scores are not directly comparable, and re-normalization tends to remove the benefit of incorporating classifier scores.\nWe experimented with a variety of merging functions on the development set of queries (and using a set of interactions from a different time period from final evaluation sets).\nWe found that a simple rank merging heuristic combination works well, and is robust to variations in score values from original rankers.\nFor a given query q, the implicit score ISd is computed for each result d from available user interaction features, resulting in the implicit rank Id for each result.\nWe compute a merged score SM(d) for d by combining the rank obtained from implicit feedback, Id, with the original rank of d, Od: SM (d, Id, Od, wI) = wI\/(Id + 1) + 1\/(Od + 1) if implicit feedback exists for d, and SM (d, Id, Od, wI) = 1\/(Od + 1) otherwise, where the weight wI is a heuristically tuned scaling factor representing the relative importance of the implicit feedback.\nThe query results are ordered by decreasing values of SM to produce the final ranking.\nOne special case of this model arises when setting wI to a very large value, effectively
forcing clicked results to be ranked higher than un-clicked results - an intuitive and effective heuristic that we will use as a baseline.\nApplying more sophisticated classifier and ranker combination algorithms may result in additional improvements, and is a promising direction for future work.\nThe approach above assumes that there are no interactions between the underlying features producing the original web search ranking and the implicit feedback features.\nWe now relax this assumption by integrating implicit feedback features directly into the ranking process.\n3.2 Ranking with Implicit Feedback Features Modern web search engines rank results based on a large number of features, including content-based features (i.e., how closely a query matches the text or title or anchor text of the document), and query-independent page quality features (e.g., PageRank of the document or the domain).\nIn most cases, automatic (or semiautomatic) methods are developed for tuning the specific ranking function that combines these feature values.\nHence, a natural approach is to incorporate implicit feedback features directly as features for the ranking algorithm.\nDuring training or tuning, the ranker can be tuned as before but with additional features.\nAt runtime, the search engine would fetch the implicit feedback features associated with each query-result URL pair.\nThis model requires a ranking algorithm to be robust to missing values: more than 50% of queries to web search engines are unique, with no previous implicit feedback available.\nWe now describe such a ranker that we used to learn over the combined feature sets including implicit feedback.\n3.3 Learning to Rank Web Search Results A key aspect of our approach is exploiting recent advances in machine learning, namely trainable ranking algorithms for web search and information retrieval (e.g., [5, 11] and classical results reviewed in [3]).\nIn our setting, explicit human relevance judgments (labels) are available 
for a set of web search queries and results.\nHence, an attractive choice is to use a supervised machine learning technique to learn a ranking function that best predicts relevance judgments.\nRankNet is one such algorithm.\nIt is a neural net tuning algorithm that optimizes feature weights to best match explicitly provided pairwise user preferences.\nWhile the specific training algorithms used by RankNet are beyond the scope of this paper, it is described in detail in [5] and includes extensive evaluation and comparison with other ranking methods.\nAn attractive feature of RankNet is both train- and run-time efficiency - runtime ranking can be quickly computed and can scale to the web, and training can be done over thousands of queries and associated judged results.\nWe use a 2-layer implementation of RankNet in order to model non-linear relationships between features.\nFurthermore, RankNet can learn with many (differentiable) cost functions, and hence can automatically learn a ranking function from human-provided labels, an attractive alternative to heuristic feature combination techniques.\nHence, we will also use RankNet as a generic ranker to explore the contribution of implicit feedback for different ranking alternatives.\n4.\nIMPLICIT USER FEEDBACK MODEL Our goal is to accurately interpret noisy user feedback obtained by tracing user interactions with the search engine.\nInterpreting implicit feedback in a real web search setting is not an easy task.\nWe characterize this problem in detail in [1], where we motivate and evaluate a wide variety of models of implicit user activities.\nThe general approach is to represent user actions for each search result as a vector of features, and then train a ranker on these features to discover feature values indicative of relevant (and nonrelevant) search results.\nWe first briefly summarize our features and model, and the learning approach (Section 4.2) in order to provide sufficient information to replicate our
ranking methods and the subsequent experiments.\n4.1 Representing User Actions as Features We model observed web search behaviors as a combination of a "background" component (i.e., query- and relevance-independent noise in user behavior, including positional biases with result interactions), and a "relevance" component (i.e., query-specific behavior indicative of relevance of a result to a query).\nWe design our features to take advantage of aggregated user behavior.\nThe feature set comprises directly observed features (computed directly from observations for each query), as well as query-specific derived features, computed as the deviation from the overall query-independent distribution of values for the corresponding directly observed feature values.\nThe features used to represent user interactions with web search results are summarized in Table 4.1.\nThis information was obtained via opt-in client-side instrumentation from users of a major web search engine.\nWe include the traditional implicit feedback features such as clickthrough counts for the results, as well as our novel derived features such as the deviation of the observed clickthrough number for a given query-URL pair from the expected number of clicks on a result in the given position.\nWe also model the browsing behavior after a result was clicked - e.g., the average page dwell time for a given query-URL pair, as well as its deviation from the expected (average) dwell time.\nFurthermore, the feature set was designed to provide essential information about the user experience to make feedback interpretation robust.\nFor example, web search users can often determine whether a result is relevant by looking at the result title, URL, and summary - in many cases, looking at the original document is not necessary.\nTo model this aspect of user experience we include features such as overlap in words in title and words in query (TitleOverlap) and the fraction of words shared by the query and the
result summary.\nClickthrough features:\nPosition: Position of the URL in current ranking\nClickFrequency: Number of clicks for this query, URL pair\nClickProbability: Probability of a click for this query and URL\nClickDeviation: Deviation from expected click probability\nIsNextClicked: 1 if clicked on next position, 0 otherwise\nIsPreviousClicked: 1 if clicked on previous position, 0 otherwise\nIsClickAbove: 1 if there is a click above, 0 otherwise\nIsClickBelow: 1 if there is a click below, 0 otherwise\nBrowsing features:\nTimeOnPage: Page dwell time\nCumulativeTimeOnPage: Cumulative time for all subsequent pages after search\nTimeOnDomain: Cumulative dwell time for this domain\nTimeOnShortUrl: Cumulative time on URL prefix, no parameters\nIsFollowedLink: 1 if followed link to result, 0 otherwise\nIsExactUrlMatch: 0 if aggressive normalization used, 1 otherwise\nIsRedirected: 1 if initial URL same as final URL, 0 otherwise\nIsPathFromSearch: 1 if only followed links after query, 0 otherwise\nClicksFromSearch: Number of hops to reach page from query\nAverageDwellTime: Average time on page for this query\nDwellTimeDeviation: Deviation from average dwell time on page\nCumulativeDeviation: Deviation from average cumulative dwell time\nDomainDeviation: Deviation from average dwell time on domain\nQuery-text features:\nTitleOverlap: Words shared between query and title\nSummaryOverlap: Words shared between query and snippet\nQueryURLOverlap: Words shared between query and URL\nQueryDomainOverlap: Words shared between query and URL domain\nQueryLength: Number of tokens in query\nQueryNextOverlap: Fraction of words shared with next query\nTable 4.1: Some features used to represent post-search navigation history for a given query and search result URL.\nHaving described our feature set, we briefly review our general method for deriving a user behavior model.\n4.2 Deriving a User Feedback Model To learn to interpret the observed user behavior, we correlate user actions (i.e., the features in Table 4.1 representing the actions) with
the explicit user judgments for a set of training queries.\nWe find all the instances in our session logs where these queries were submitted to the search engine, and aggregate the user behavior features for all search sessions involving these queries.\nEach observed query-URL pair is represented by the features in Table 4.1, with values averaged over all search sessions, and is assigned one of six possible relevance labels, ranging from Perfect to Bad, from the explicit relevance judgments.\nThese labeled feature vectors are used as input to the RankNet training algorithm (Section 3.3), which produces a trained user behavior model.\nThis approach is particularly attractive as it does not require heuristics beyond feature engineering.\nThe resulting user behavior model is used to help rank web search results, either directly or in combination with other features, as described below.\n5.\nEXPERIMENTAL SETUP The ultimate goal of incorporating implicit feedback into ranking is to improve the relevance of the returned web search results.\nHence, we compare the ranking methods over a large set of judged queries with explicit relevance labels provided by human judges.\nIn order for the evaluation to be realistic we obtained a random sample of queries from the web search logs of a major search engine, with associated results and traces for user actions.\nWe describe this dataset in detail next.\nThe metrics we use to evaluate the ranking alternatives are described in Section 5.2, and the alternatives themselves are listed in Section 5.3 and compared in the experiments of Section 6.\n5.1 Datasets We compared our ranking methods over a random sample of 3,000 queries from the search engine query logs.\nThe queries were drawn from the logs uniformly at random by token without replacement, resulting in a query sample representative of the overall query distribution.\nOn average, 30 results were explicitly labeled by human judges using a six point scale ranging from Perfect down to Bad.\nOverall, there were over 83,000
results with explicit relevance judgments.\nIn order to compute various statistics, documents with label Good or better will be considered relevant, and those with lower labels non-relevant.\nNote that the experiments were performed over results already highly ranked by a web search engine, which corresponds to a typical user experience, which is limited to the small number of highly ranked results for a typical web search query.\nThe user interactions were collected over a period of 8 weeks using voluntary opt-in information.\nIn total, over 1.2 million unique queries were instrumented, resulting in over 12 million individual interactions with the search engine.\nThe data consisted of user interactions with the web search engine (e.g., clicking on a result link, going back to search results, etc.) performed after a query was submitted.\nThese actions were aggregated across users and search sessions and converted to the features in Table 4.1.\nTo create the training, validation, and test query sets, we created three different random splits of 1,500 training, 500 validation, and 1,000 test queries.\nThe splits were done randomly by query, so that there was no overlap in training, validation, and test queries.\n5.2 Evaluation Metrics We evaluate the ranking algorithms over a range of accepted information retrieval metrics, namely Precision at K (P(K)), Normalized Discounted Cumulative Gain (NDCG), and Mean Average Precision (MAP).\nEach metric focuses on a different aspect of system performance, as we describe below.\n\u2022 Precision at K: As the most intuitive metric, P(K) reports the fraction of documents ranked in the top K results that are labeled as relevant.\nIn our setting, we require a relevant document to be labeled Good or higher.\nThe position of relevant documents within the top K is irrelevant, and hence this metric measures overall user satisfaction with the top K results.\n\u2022 NDCG at K: NDCG is a retrieval measure devised specifically for web
search evaluation [10].\nFor a given query q, the ranked results are examined from the top ranked down, and the NDCG is computed as: Nq = Mq \u00b7 \u2211 j=1..K (2^r(j) \u2212 1) \/ log (1 + j), where Mq is a normalization constant calculated so that a perfect ordering would obtain NDCG of 1, and each r(j) is an integer relevance label (0=Bad and 5=Perfect) of the result returned at position j. Note that unlabeled and Bad documents do not contribute to the sum, but will reduce NDCG for the query by pushing down the relevant labeled documents, reducing their contributions.\nNDCG is well suited to web search evaluation, as it rewards relevant documents in the top ranked results more heavily than those ranked lower.\n\u2022 MAP: Average precision for each query is defined as the mean of the precision at K values computed after each relevant document was retrieved.\nThe final MAP value is defined as the mean of average precisions of all queries in the test set.\nThis metric is the most commonly used single-value summary of a run over a set of queries.\n5.3 Ranking Methods Compared Recall that our goal is to quantify the effectiveness of implicit behavior for real web search.\nOne dimension is to compare the utility of implicit feedback with other information available to a web search engine.\nSpecifically, we compare the effectiveness of implicit user behaviors with content-based matching, static page quality features, and combinations of all features.\n\u2022 BM25F: As a strong web search baseline we used the BM25F scoring, which was used in one of the best performing systems in the TREC 2004 Web track [23,27].\nBM25F and its variants have been extensively described and evaluated in IR literature, and hence serve as a strong, reproducible baseline.\nThe BM25F variant we used for our experiments computes separate match scores for each field for a result document (e.g., body text, title, and anchor text), and incorporates query-independent link-based information (e.g., PageRank, ClickDistance, and URL
depth).\nThe scoring function and field-specific tuning are described in detail in [23].\nNote that BM25F does not directly consider explicit or implicit feedback for tuning.\n\u2022 RN: The ranking produced by a neural net ranker (RankNet, described in Section 3.3) that learns to rank web search results by incorporating BM25F and a large number of additional static and dynamic features describing each search result.\nThis system automatically learns weights for all features (including the BM25F score for a document) based on explicit human labels for a large set of queries.\nA system incorporating an implementation of RankNet is currently in use by a major search engine and can be considered representative of the state of the art in web search.\n\u2022 BM25F-RerankCT: The ranking produced by incorporating clickthrough statistics to reorder web search results ranked by BM25F above.\nClickthrough is a particularly important special case of implicit feedback, and has been shown to correlate with result relevance.\nThis is a special case of the ranking method in Section 3.1, with the weight wI set to 1000 and the implicit rank Id determined simply by the number of clicks on the result corresponding to d.\nIn effect, this ranking brings to the top all returned web search results with at least one click (and orders them in decreasing order by number of clicks).\nThe relative ranking of the remainder of results is unchanged and they are inserted below all clicked results.\nThis method serves as our baseline implicit feedback reranking method.\n\u2022 BM25F-RerankAll: The ranking produced by reordering the BM25F results using all user behavior features (Section 4).\nThis method learns a model of user preferences by correlating feature values with explicit relevance labels using the RankNet neural net algorithm (Section 4.2).\nAt runtime, for a given query the implicit score Ir is computed for each result r with available user interaction features, and the implicit ranking is produced.\nThe merged
ranking is computed as described in Section 3.1.\nBased on the experiments over the development set we fix the value of wI to 3 (the effect of the wI parameter for this ranker turned out to be negligible).\n\u2022 BM25F+All: Ranking derived by training the RankNet (Section 3.3) learner over the feature set of the BM25F score as well as all implicit feedback features (Section 3.2).\nWe used the 2-layer implementation of RankNet [5] trained on the queries and labels in the training and validation sets.\n\u2022 RN+All: Ranking derived by training the 2-layer RankNet ranking algorithm (Section 3.3) over the union of all content, dynamic, and implicit feedback features (i.e., all of the features described above as well as all of the new implicit feedback features we introduced).\nThe ranking methods above span the range of the information used for ranking, from not using the implicit or explicit feedback at all (i.e., BM25F) to a modern web search engine using hundreds of features and tuned on explicit judgments (RN).\nAs we will show next, incorporating user behavior into these ranking systems dramatically improves the relevance of the returned documents.\n6.\nEXPERIMENTAL RESULTS Implicit feedback for web search ranking can be exploited in a number of ways.\nWe compare alternative methods of exploiting implicit feedback, both by re-ranking the top results (i.e., the BM25F-RerankCT and BM25F-RerankAll methods that reorder BM25F results), as well as by integrating the implicit features directly into the ranking process (i.e., the RN+All and BM25F+All methods which learn to rank results over the implicit feedback and other features).\nWe compare our methods over strong baselines (BM25F and RN) over the NDCG, Precision at K, and MAP measures defined in Section 5.2.\nThe results were averaged over three random splits of the overall dataset.\nEach split contained 1,500 training, 500 validation, and 1,000 test queries, all query sets disjoint.\nWe first present the results
over all 1,000 test queries (i.e., including queries for which there are no implicit measures so we use the original web rankings).\nWe then drill down to examine the effects on reranking for the attempted queries in more detail, analyzing where implicit feedback proved most beneficial.\nWe first experimented with different methods of re-ranking the output of the BM25F search results.\nFigures 6.1 and 6.2 report NDCG and Precision for BM25F, as well as for the strategies reranking results with user feedback (Section 3.1).\nIncorporating all user feedback (either in the reranking framework or as features to the learner directly) results in significant improvements (using a two-tailed t-test with p=0.01) over both the original BM25F ranking as well as over reranking with clickthrough alone.\nThe improvement is consistent across the top 10 results and largest for the top result: NDCG at 1 for BM25F+All is 0.622 compared to 0.518 of the original results, and precision at 1 similarly increases from 0.5 to 0.63.\nBased on these results we will use the direct feature combination (i.e., BM25F+All) ranker for subsequent comparisons involving implicit feedback.\nFigure 6.1: NDCG at K for BM25F, BM25F-RerankCT, BM25F-Rerank-All, and BM25F+All for varying K.\nFigure 6.2: Precision at K for BM25F, BM25F-RerankCT, BM25F-Rerank-All, and BM25F+All for varying K.\nInterestingly, using clickthrough alone, while giving significant benefit over the original BM25F ranking, is not as effective as considering the full set of features in Table 4.1.\nWhile we analyze user behavior (and most effective component features) in a separate paper [1], it is worthwhile to give a concrete example of the kind of noise inherent in real user feedback in a web search setting.\nFigure 6.3: Relative clickthrough frequency for queries with varying Position of Top Relevant result (PTR).\nIf users considered only the relevance of a result to their query, they would click on the topmost relevant results.\nUnfortunately, as Joachims and others have shown, presentation also influences which results users click on quite dramatically.\nUsers often click on results above the relevant one presumably because the short summaries do not provide enough information to make accurate relevance assessments and they have learned that on average top-ranked items are relevant.\nFigure 6.3 shows relative clickthrough frequencies for queries with known relevant items at positions other than the first position; the position of the top relevant result (PTR) ranges from 2-10 in the figure.\nFor example, for queries with first relevant result at position 5 (PTR=5), there are more clicks on the non-relevant results in higher ranked positions than on the first relevant result at position 5.\nAs we will see, learning over a richer behavior feature set results in substantial accuracy improvement over clickthrough alone.\nWe now consider incorporating user behavior into a much richer feature set, RN (Section 5.3), used by a major web search engine.\nRN incorporates BM25F, link-based features, and hundreds of other features.\nFigure 6.4 reports NDCG at K and Figure 6.5 reports Precision at K.
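As a concrete illustration of the NDCG measure reported in these figures, the following minimal sketch (not code from the paper; the function name and the choice of natural logarithm are our assumptions) computes NDCG at K for a single query from its ranked integer relevance labels, normalizing by the DCG of a perfect ordering so that an ideal ranking scores exactly 1:

```python
import math

def ndcg_at_k(labels, k):
    """NDCG at K for one query.

    labels: integer relevance labels r(j) (0=Bad ... 5=Perfect), in ranked order.
    The normalizer Mq is the inverse DCG of a perfect ordering, so an ideal
    ranking scores exactly 1.0.
    """
    def dcg(ranked):
        # sum of (2^r(j) - 1) / log(1 + j) over the top K positions
        return sum((2 ** r - 1) / math.log(1 + j)
                   for j, r in enumerate(ranked[:k], start=1))

    ideal = dcg(sorted(labels, reverse=True))
    return dcg(labels) / ideal if ideal > 0 else 0.0
```

A ranking whose labels are already in descending order of relevance yields 1.0, while placing a Bad result above a Perfect one lowers the score, mirroring how the measure rewards relevant documents at the top ranks more heavily than those ranked lower.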
Interestingly, while the original RN rankings are significantly more accurate than BM25F alone, incorporating implicit feedback features (BM25F+All) results in a ranking that significantly outperforms the original RN rankings.\nIn other words, implicit feedback incorporates sufficient information to replace the hundreds of other features available to the RankNet learner trained on the RN feature set.\nFigure 6.4: NDCG at K for BM25F, BM25F+All, RN, and RN+All for varying K.\nFurthermore, enriching the RN features with the implicit feedback set exhibits significant gain on all measures, allowing RN+All to outperform all other methods.\nThis demonstrates the complementary nature of implicit feedback with other features available to a state of the art web search engine.\nFigure 6.5: Precision at K for BM25F, BM25F+All, RN, and RN+All for varying K.\nWe summarize the performance of the different ranking methods in Table 6.1.\nWe report the Mean Average Precision (MAP) score for each system.\nWhile not intuitive to interpret, MAP allows quantitative comparison on a single metric.\nThe gains marked with * are significant at the p=0.01 level using a two-tailed t-test.\nMethod | MAP | Gain | P(1) | Gain\nBM25F | 0.184 | - | 0.503 | -\nBM25F-Rerank-CT | 0.215 | 0.031* | 0.577 | 0.073*\nBM25F-RerankImplicit | 0.218 | 0.003 | 0.605 | 0.028*\nBM25F+Implicit | 0.222 | 0.004 | 0.620 | 0.015*\nRN | 0.215 | - | 0.597 | -\nRN+All | 0.248 | 0.033* | 0.629 | 0.032*\nTable 6.1: Mean Average Precision (MAP) for all strategies.\nSo far we reported results averaged across all queries in the test set.\nUnfortunately, less than half had sufficient interactions to attempt reranking.\nOut of the 1,000 queries in the test set, between 46% and 49%, depending on the train-test split, had sufficient interaction information to make predictions (i.e., there was at least 1 search session in which at least 1 result URL was
clicked on by the user).\nThis is not surprising: web search is heavy-tailed, and there are many unique queries.\nWe now consider the performance on the queries for which user interactions were available.\nFigure 6.6 reports NDCG for the subset of the test queries with the implicit feedback features.\nThe gains at top 1 are dramatic.\nThe NDCG at 1 of BM25F+All increases from 0.6 to 0.75 (a 31% relative gain), achieving performance comparable to RN+All operating over a much richer feature set.\nFigure 6.6: NDCG at K for BM25F, BM25F+All, RN, and RN+All on test queries with user interactions.\nSimilarly, gains on precision at top 1 are substantial (Figure 6.7), and are likely to be apparent to web search users.\nWhen implicit feedback is available, the BM25F+All system returns a relevant document at top 1 almost 70% of the time, compared to 53% of the time when implicit feedback is not considered by the original BM25F system.\nFigure 6.7: Precision at K for BM25F, BM25F+All, RN, and RN+All on test queries with user interactions.\nWe summarize the results on the MAP measure for attempted queries in Table 6.2.\nMAP improvements are both substantial and significant, with improvements over the BM25F ranker most pronounced.\nMethod | MAP | Gain | P(1) | Gain\nRN | 0.269 | - | 0.632 | -\nRN+All | 0.321 | 0.051 (19%) | 0.693 | 0.061 (10%)\nBM25F | 0.236 | - | 0.525 | -\nBM25F+All | 0.292 | 0.056 (24%) | 0.687 | 0.162 (31%)\nTable 6.2: Mean Average Precision (MAP) on attempted queries for best performing methods.\nWe now analyze the cases where implicit feedback was shown most helpful.\nFigure 6.8 reports the MAP improvements over the baseline BM25F run for each query with MAP under 0.6.\nNote that most of the improvement is for poorly performing queries (i.e., MAP < 0.1).\nInterestingly, incorporating user behavior information degrades accuracy for queries with high original MAP score.\nOne
possible explanation is that these easy queries tend to be navigational (i.e., having a single, highly-ranked most appropriate answer), and user interactions with lower-ranked results may indicate divergent information needs that are better served by the less popular results (with correspondingly poor overall relevance ratings).\n[Figure 6.8: Gain of BM25F+All over the original BM25F ranking, plotted as a histogram of query frequency against average gain]\nTo summarize our experimental results, incorporating implicit feedback in a real web search setting resulted in significant improvements over the original rankings, using both the BM25F and RN baselines.\nOur rich set of implicit features, such as time on page and deviations from the average behavior, provides advantages over using clickthrough alone as an indicator of interest.\nFurthermore, incorporating implicit feedback features directly into the learned ranking function is more effective than using implicit feedback for reranking.\nThe improvements observed over large test sets of queries (1,000 total, between 466 and 495 with implicit feedback available) are both substantial and statistically significant.\n7.\nCONCLUSIONS AND FUTURE WORK\nIn this paper we explored the utility of incorporating noisy implicit feedback obtained in a real web search setting to improve web search ranking.\nWe performed a large-scale evaluation over 3,000 queries and more than 12 million user interactions with a major search engine, establishing the utility of incorporating noisy implicit feedback to improve web search relevance.\nWe compared two alternatives for incorporating implicit feedback into the search process, namely reranking with implicit feedback and incorporating implicit feedback features directly into the trained ranking function.\nOur experiments showed significant improvement over methods that do not consider implicit feedback.\nThe gains are
particularly dramatic for the top K=1 result in the final ranking, with precision improvements as high as 31%, and the gains are substantial for all values of K.\nOur experiments showed that implicit user feedback can further improve web search performance when incorporated directly with popular content- and link-based features.\nInterestingly, implicit feedback is particularly valuable for queries with poor original ranking of results (e.g., MAP lower than 0.1).\nOne promising direction for future work is to apply recent research on automatically predicting query difficulty, and only attempt to incorporate implicit feedback for the difficult queries.\nAs another research direction we are exploring methods for extending our predictions to previously unseen queries (e.g., query clustering), which should further improve the web search experience of users.\nACKNOWLEDGMENTS\nWe thank Chris Burges and Matt Richardson for an implementation of RankNet for our experiments.\nWe also thank Robert Ragno for his valuable suggestions and many discussions.\n8.\nREFERENCES\n[1] E. Agichtein, E. Brill, S. Dumais, and R. Ragno. Learning User Interaction Models for Predicting Web Search Result Preferences. In Proceedings of the ACM Conference on Research and Development on Information Retrieval (SIGIR), 2006.\n[2] J. Allan. HARD Track Overview in TREC 2003: High Accuracy Retrieval from Documents, 2003.\n[3] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley, 1999.\n[4] S. Brin and L. Page. The Anatomy of a Large-scale Hypertextual Web Search Engine. In Proceedings of WWW, 1997.\n[5] C.J.C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to Rank using Gradient Descent. In Proceedings of the International Conference on Machine Learning, 2005.\n[6] D.M. Chickering. The WinMine Toolkit. Microsoft Technical Report MSR-TR-2002-103, 2002.\n[7] M. Claypool, D. Brown, P. Lee and M.
Waseda. Inferring user interest. IEEE Internet Computing, 2001.\n[8] S. Fox, K. Karnawat, M. Mydland, S. T. Dumais, and T. White. Evaluating implicit measures to improve the search experience. ACM Transactions on Information Systems, 2005.\n[9] J. Goecks and J. Shavlick. Learning users' interests by unobtrusively observing their normal behavior. In Proceedings of the IJCAI Workshop on Machine Learning for Information Filtering, 1999.\n[10] K. Järvelin and J. Kekäläinen. IR evaluation methods for retrieving highly relevant documents. In Proceedings of the ACM Conference on Research and Development on Information Retrieval (SIGIR), 2000.\n[11] T. Joachims. Optimizing Search Engines Using Clickthrough Data. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (SIGKDD), 2002.\n[12] T. Joachims, L. Granka, B. Pang, H. Hembrooke, and G. Gay. Accurately Interpreting Clickthrough Data as Implicit Feedback. In Proceedings of the ACM Conference on Research and Development on Information Retrieval (SIGIR), 2005.\n[13] T. Joachims. Making Large-Scale SVM Learning Practical. In Advances in Kernel Methods: Support Vector Learning, MIT Press, 1999.\n[14] D. Kelly and J. Teevan. Implicit feedback for inferring user preference: A bibliography. SIGIR Forum, 2003.\n[15] J. Konstan, B. Miller, D. Maltz, J. Herlocker, L. Gordon, and J. Riedl. GroupLens: Applying collaborative filtering to Usenet news. Communications of the ACM, 1997.\n[16] M. Morita and Y. Shinoda. Information filtering based on user behavior analysis and best match text retrieval. In Proceedings of the ACM Conference on Research and Development on Information Retrieval (SIGIR), 1994.\n[17] D. Oard and J. Kim. Implicit feedback for recommender systems. In Proceedings of the AAAI Workshop on Recommender Systems, 1998.\n[18] D. Oard and J.
Kim. Modeling information content using observable behavior. In Proceedings of the 64th Annual Meeting of the American Society for Information Science and Technology, 2001.\n[19] N. Pharo and K. Järvelin. The SST method: a tool for analyzing web information search processes. Information Processing & Management, 2004.\n[20] P. Pirolli. The Use of Proximal Information Scent to Forage for Distal Content on the World Wide Web. In Working with Technology in Mind: Brunswikian Resources for Cognitive Science and Engineering, Oxford University Press, 2004.\n[21] F. Radlinski and T. Joachims. Query Chains: Learning to Rank from Implicit Feedback. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (SIGKDD), 2005.\n[22] F. Radlinski and T. Joachims. Evaluating the Robustness of Learning from Implicit Feedback. In Proceedings of the ICML Workshop on Learning in Web Search, 2005.\n[23] S. E. Robertson, H. Zaragoza, and M. Taylor. Simple BM25 extension to multiple weighted fields. In Proceedings of the Conference on Information and Knowledge Management (CIKM), 2004.\n[24] G. Salton and M. McGill. Introduction to modern information retrieval. McGraw-Hill, 1983.\n[25] E.M. Voorhees and D. Harman. Overview of TREC, 2001.\n[26] G.R. Xue, H.J. Zeng, Z. Chen, Y. Yu, W.Y. Ma, W.S. Xi, and W.G. Fan. Optimizing web search using web clickthrough data. In Proceedings of the Conference on Information and Knowledge Management (CIKM), 2004.\n[27] H. Zaragoza, N. Craswell, M. Taylor, S. Saria, and S.
Robertson.\nMicrosoft Cambridge at TREC 13: Web and Hard Tracks.\nIn Proceedings of TREC 2004","lvl-3":"Improving Web Search Ranking by Incorporating User Behavior Information\nABSTRACT\nWe show that incorporating user behavior data can significantly improve ordering of top results in a real web search setting.\nWe examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features.\nWe report results of a large-scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine.\nWe show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithm by as much as 31% relative to the original performance.\n1.\nINTRODUCTION\nMillions of users interact with search engines daily.\nThey issue queries, follow some of the links in the results, click on ads, spend time on pages, reformulate their queries, and perform other actions.\nThese interactions can serve as a valuable source of information for tuning and improving web search result ranking and can complement more costly explicit judgments.\nImplicit relevance feedback for ranking and personalization has become an active area of research.\nRecent work by Joachims and others exploring implicit feedback in controlled environments has shown the value of incorporating implicit feedback into the ranking process.\nOur motivation for this work is to understand how implicit feedback can be used in a large-scale operational environment to improve retrieval.\nHow does it compare to and complement evidence from page content, anchor text, or link-based features such as inlinks or PageRank?\nWhile it is intuitive that user interactions with the web search engine should reveal at least some information that could be used for ranking, estimating user preferences in real web search settings is a challenging problem, since real user
interactions tend to be more \"noisy\" than commonly assumed in the controlled settings of previous studies.\nOur paper explores whether implicit feedback can be helpful in realistic environments, where user feedback can be noisy (or adversarial) and a web search engine already uses hundreds of features and is heavily tuned.\nTo this end, we explore different approaches for ranking web search results using real user behavior obtained as part of normal interactions with the web search engine.\nThe specific contributions of this paper include:\n\u2022 Analysis of alternatives for incorporating user behavior into web search ranking (Section 3).\n\u2022 An application of a robust implicit feedback model derived from mining millions of user interactions with a major web search engine (Section 4).\n\u2022 A large scale evaluation over real user queries and search results, showing significant improvements derived from incorporating user feedback (Section 6).\nWe summarize our findings and discuss extensions to the current work in Section 7, which concludes the paper.\n2.\nBACKGROUND AND RELATED WORK\nRanking search results is a fundamental problem in information retrieval.\nMost common approaches primarily focus on similarity of query and a page, as well as the overall page quality [3,4,24].\nHowever, with increasing popularity of search engines, implicit feedback (i.e., the actions users take when interacting with the search engine) can be used to improve the rankings.\nImplicit relevance measures have been studied by several research groups.\nAn overview of implicit measures is compiled in Kelly and Teevan [14].\nThis research, while developing valuable insights into implicit relevance measures, was not applied to improve the ranking of web search results in realistic settings.\nClosely related to our work, Joachims [11] collected implicit measures in place of explicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions.\nFox 
et al. [8] explored the relationship between implicit and explicit measures in Web search, and developed Bayesian models to\ncorrelate implicit measures and explicit relevance judgments for both individual queries and search sessions.\nThis work considered a wide range of user behaviors (e.g., dwell time, scroll time, reformulation patterns) in addition to the popular clickthrough behavior.\nHowever, the modeling effort was aimed at predicting explicit relevance judgments from implicit user actions and not specifically at learning ranking functions.\nOther studies of user behavior in web search include Pharo and J\u00e4rvelin [19], but were not directly applied to improve ranking.\nMore recently, Joachims et al. [12] presented an empirical evaluation of interpreting clickthrough evidence.\nBy performing eye tracking studies and correlating predictions of their strategies with explicit ratings, the authors showed that it is possible to accurately interpret clickthroughs in a controlled, laboratory setting.\nUnfortunately, the extent to which previous research applies to real-world web search is unclear.\nAt the same time, while recent work (e.g., [26]) on using clickthrough information for improving web search ranking is promising, it captures only one aspect of the user interactions with web search engines.\nWe build on existing research to develop robust user behavior interpretation techniques for the real web search setting.\nInstead of treating each user as a reliable \"expert\", we aggregate information from multiple, unreliable, user search session traces, as we describe in the next two sections.\n3.\nINCORPORATING IMPLICIT FEEDBACK\n3.1 Implicit Feedback as Independent Evidence\n3.2 Ranking with Implicit Feedback Features\n3.3 Learning to Rank Web Search Results\n4.\nIMPLICIT USER FEEDBACK MODEL\n4.1 Representing User Actions as Features\n4.2 Deriving a User Feedback Model\n5.\nEXPERIMENTAL SETUP\n5.1 Datasets\n5.2 Evaluation Metrics\n5.3 Ranking Methods 
Compared\n6.\nEXPERIMENTAL RESULTS\n7.\nCONCLUSIONS AND FUTURE WORK\nIn this paper we explored the utility of incorporating noisy implicit feedback obtained in a real web search setting to improve web search ranking.\nWe performed a large-scale evaluation over 3,000 queries and more than 12 million user interactions with a major search engine, establishing the utility of incorporating \"noisy\" implicit feedback to improve web search relevance.\nWe compared two alternatives for incorporating implicit feedback into the search process, namely reranking with implicit feedback and incorporating implicit feedback features directly into the trained ranking function.\nOur experiments showed significant improvement over methods that do not consider implicit feedback.\nThe gains are particularly dramatic for the top K = 1 result in the final ranking, with precision improvements as high as 31%, and the gains are substantial for all values of K.\nOur experiments showed that implicit user feedback can further improve web search performance when incorporated directly with popular content- and link-based features.\nInterestingly, implicit feedback is particularly valuable for queries with poor original ranking of results (e.g., MAP lower than 0.1).\nOne promising direction for future work is to apply recent research on automatically predicting query difficulty, and only attempt to incorporate implicit feedback for the \"difficult\" queries.\nAs another research direction we are exploring methods for extending our predictions to previously unseen queries (e.g., query clustering), which should further improve the web search experience of users.","lvl-4":"Improving Web Search Ranking by Incorporating User Behavior Information\nABSTRACT\nWe show that incorporating user behavior data can significantly improve ordering of top results in a real web search setting.\nWe examine alternatives for incorporating feedback into the ranking process and explore the contributions of user
feedback compared to other common web search features.\nWe report results of a large-scale evaluation over 3,000 queries and 12 million user interactions with a popular web search engine.\nWe show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithm by as much as 31% relative to the original performance.\n1.\nINTRODUCTION\nMillions of users interact with search engines daily.\nThese interactions can serve as a valuable source of information for tuning and improving web search result ranking and can complement more costly explicit judgments.\nImplicit relevance feedback for ranking and personalization has become an active area of research.\nRecent work by Joachims and others exploring implicit feedback in controlled environments has shown the value of incorporating implicit feedback into the ranking process.\nOur motivation for this work is to understand how implicit feedback can be used in a large-scale operational environment to improve retrieval.\nOur paper explores whether implicit feedback can be helpful in realistic environments, where user feedback can be noisy (or adversarial) and a web search engine already uses hundreds of features and is heavily tuned.\nTo this end, we explore different approaches for ranking web search results using real user behavior obtained as part of normal interactions with the web search engine.\nThe specific contributions of this paper include:\n\u2022 Analysis of alternatives for incorporating user behavior into web search ranking (Section 3).\n\u2022 An application of a robust implicit feedback model derived from mining millions of user interactions with a major web search engine (Section 4).\n\u2022 A large-scale evaluation over real user queries and search results, showing significant improvements derived from incorporating user feedback (Section 6).\nWe summarize our findings and discuss extensions to the current work in Section 7, which
concludes the paper.\n2.\nBACKGROUND AND RELATED WORK\nRanking search results is a fundamental problem in information retrieval.\nHowever, with increasing popularity of search engines, implicit feedback (i.e., the actions users take when interacting with the search engine) can be used to improve the rankings.\nImplicit relevance measures have been studied by several research groups.\nAn overview of implicit measures is compiled in Kelly and Teevan [14].\nThis research, while developing valuable insights into implicit relevance measures, was not applied to improve the ranking of web search results in realistic settings.\nClosely related to our work, Joachims [11] collected implicit measures in place of explicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions.\nFox et al. [8] explored the relationship between implicit and explicit measures in Web search, and developed Bayesian models to\ncorrelate implicit measures and explicit relevance judgments for both individual queries and search sessions.\nThis work considered a wide range of user behaviors (e.g., dwell time, scroll time, reformulation patterns) in addition to the popular clickthrough behavior.\nHowever, the modeling effort was aimed at predicting explicit relevance judgments from implicit user actions and not specifically at learning ranking functions.\nOther studies of user behavior in web search include Pharo and J\u00e4rvelin [19], but were not directly applied to improve ranking.\nUnfortunately, the extent to which previous research applies to real-world web search is unclear.\nAt the same time, while recent work (e.g., [26]) on using clickthrough information for improving web search ranking is promising, it captures only one aspect of the user interactions with web search engines.\nWe build on existing research to develop robust user behavior interpretation techniques for the real web search setting.\nInstead of treating each user as a reliable \"expert\", 
we aggregate information from multiple, unreliable user search session traces, as we describe in the next two sections.\n7.\nCONCLUSIONS AND FUTURE WORK\nIn this paper we explored the utility of incorporating noisy implicit feedback obtained in a real web search setting to improve web search ranking.\nWe performed a large-scale evaluation over 3,000 queries and more than 12 million user interactions with a major search engine, establishing the utility of incorporating \"noisy\" implicit feedback to improve web search relevance.\nWe compared two alternatives for incorporating implicit feedback into the search process, namely reranking with implicit feedback and incorporating implicit feedback features directly into the trained ranking function.\nOur experiments showed significant improvement over methods that do not consider implicit feedback.\nOur experiments showed that implicit user feedback can further improve web search performance when incorporated directly with popular content- and link-based features.\nInterestingly, implicit feedback is particularly valuable for queries with poor original ranking of results (e.g., MAP lower than 0.1).\nOne promising direction for future work is to apply recent research on automatically predicting query difficulty, and only attempt to incorporate implicit feedback for the \"difficult\" queries.\nAs another research direction we are exploring methods for extending our predictions to previously unseen queries (e.g., query clustering), which should further improve the web search experience of users.","lvl-2":"Improving Web Search Ranking by Incorporating User Behavior Information\nABSTRACT\nWe show that incorporating user behavior data can significantly improve ordering of top results in a real web search setting.\nWe examine alternatives for incorporating feedback into the ranking process and explore the contributions of user feedback compared to other common web search features.\nWe report results of a large-scale
evaluation over 3,000 queries and 12 million user interactions with a popular web search engine.\nWe show that incorporating implicit feedback can augment other features, improving the accuracy of a competitive web search ranking algorithm by as much as 31% relative to the original performance.\n1.\nINTRODUCTION\nMillions of users interact with search engines daily.\nThey issue queries, follow some of the links in the results, click on ads, spend time on pages, reformulate their queries, and perform other actions.\nThese interactions can serve as a valuable source of information for tuning and improving web search result ranking and can complement more costly explicit judgments.\nImplicit relevance feedback for ranking and personalization has become an active area of research.\nRecent work by Joachims and others exploring implicit feedback in controlled environments has shown the value of incorporating implicit feedback into the ranking process.\nOur motivation for this work is to understand how implicit feedback can be used in a large-scale operational environment to improve retrieval.\nHow does it compare to and complement evidence from page content, anchor text, or link-based features such as inlinks or PageRank?\nWhile it is intuitive that user interactions with the web search engine should reveal at least some information that could be used for ranking, estimating user preferences in real web search settings is a challenging problem, since real user interactions tend to be more \"noisy\" than commonly assumed in the controlled settings of previous studies.\nOur paper explores whether implicit feedback can be helpful in realistic environments, where user feedback can be noisy (or adversarial) and a web search engine already uses hundreds of features and is heavily tuned.\nTo this end, we explore different approaches for ranking web search results using real user behavior obtained as part of normal interactions with the web search engine.\nThe specific
contributions of this paper include:\n\u2022 Analysis of alternatives for incorporating user behavior into web search ranking (Section 3).\n\u2022 An application of a robust implicit feedback model derived from mining millions of user interactions with a major web search engine (Section 4).\n\u2022 A large scale evaluation over real user queries and search results, showing significant improvements derived from incorporating user feedback (Section 6).\nWe summarize our findings and discuss extensions to the current work in Section 7, which concludes the paper.\n2.\nBACKGROUND AND RELATED WORK\nRanking search results is a fundamental problem in information retrieval.\nMost common approaches primarily focus on similarity of query and a page, as well as the overall page quality [3,4,24].\nHowever, with increasing popularity of search engines, implicit feedback (i.e., the actions users take when interacting with the search engine) can be used to improve the rankings.\nImplicit relevance measures have been studied by several research groups.\nAn overview of implicit measures is compiled in Kelly and Teevan [14].\nThis research, while developing valuable insights into implicit relevance measures, was not applied to improve the ranking of web search results in realistic settings.\nClosely related to our work, Joachims [11] collected implicit measures in place of explicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions.\nFox et al. 
[8] explored the relationship between implicit and explicit measures in Web search, and developed Bayesian models to\ncorrelate implicit measures and explicit relevance judgments for both individual queries and search sessions.\nThis work considered a wide range of user behaviors (e.g., dwell time, scroll time, reformulation patterns) in addition to the popular clickthrough behavior.\nHowever, the modeling effort was aimed at predicting explicit relevance judgments from implicit user actions and not specifically at learning ranking functions.\nOther studies of user behavior in web search include Pharo and J\u00e4rvelin [19], but were not directly applied to improve ranking.\nMore recently, Joachims et al. [12] presented an empirical evaluation of interpreting clickthrough evidence.\nBy performing eye tracking studies and correlating predictions of their strategies with explicit ratings, the authors showed that it is possible to accurately interpret clickthroughs in a controlled, laboratory setting.\nUnfortunately, the extent to which previous research applies to real-world web search is unclear.\nAt the same time, while recent work (e.g., [26]) on using clickthrough information for improving web search ranking is promising, it captures only one aspect of the user interactions with web search engines.\nWe build on existing research to develop robust user behavior interpretation techniques for the real web search setting.\nInstead of treating each user as a reliable \"expert\", we aggregate information from multiple, unreliable, user search session traces, as we describe in the next two sections.\n3.\nINCORPORATING IMPLICIT FEEDBACK\nWe consider two complementary approaches to ranking with implicit feedback: (1) treating implicit feedback as independent evidence for ranking results, and (2) integrating implicit feedback features directly into the ranking algorithm.\nWe describe the two general ranking approaches next.\nThe specific implicit feedback features are 
described in Section 4, and the algorithms for interpreting and incorporating implicit feedback are described in Section 5.\n3.1 Implicit Feedback as Independent Evidence\nThe general approach is to re-rank the results obtained by a web search engine according to observed clickthrough and other user interactions for the query in previous search sessions.\nEach result is assigned a score according to expected relevance\/user satisfaction based on previous interactions, resulting in some preference ordering based on user interactions alone.\nWhile there has been significant work on merging multiple rankings, we adopt a simple and robust approach of ignoring the original rankers' scores, and instead simply merge the rank orders.\nThe main reason for ignoring the original scores is that since the feature spaces and learning algorithms are different, the scores are not directly comparable, and re-normalization tends to remove the benefit of incorporating classifier scores.\nWe experimented with a variety of merging functions on the development set of queries (using a set of interactions from a different time period than the final evaluation sets).\nWe found that a simple rank-merging heuristic combination works well, and is robust to variations in score values from the original rankers.\nFor a given query q, the implicit score IS_d is computed for each result d from the available user interaction features, resulting in the implicit rank I_d for each result.\nWe compute a merged score S_M(d) for d by combining the rank obtained from implicit feedback, I_d, with the original rank of d, O_d:\nS_M(d) = w_I * 1\/(I_d + 1) + 1\/(O_d + 1) if implicit feedback exists for d, and S_M(d) = 1\/(O_d + 1) otherwise,\nwhere the weight w_I is a heuristically tuned scaling factor representing the relative \"importance\" of the implicit feedback.\nThe query results are ordered by decreasing values of S_M to produce the final ranking.\nOne special case of this model arises when setting w_I to a very large value, effectively forcing clicked results to be ranked higher than un-clicked results--an intuitive and
effective heuristic that we will use as a baseline.\nApplying more sophisticated classifier and ranker combination algorithms may result in additional improvements, and is a promising direction for future work.\nThe approach above assumes that there are no interactions between the underlying features producing the original web search ranking and the implicit feedback features.\nWe now relax this assumption by integrating implicit feedback features directly into the ranking process.\n3.2 Ranking with Implicit Feedback Features\nModern web search engines rank results based on a large number of features, including content-based features (i.e., how closely a query matches the text, title, or anchor text of the document), and query-independent page quality features (e.g., the PageRank of the document or its domain).\nIn most cases, automatic (or semi-automatic) methods are developed for tuning the specific ranking function that combines these feature values.\nHence, a natural approach is to incorporate implicit feedback features directly as features for the ranking algorithm.\nDuring training or tuning, the ranker can be tuned as before but with the additional features.\nAt runtime, the search engine would fetch the implicit feedback features associated with each query-result URL pair.\nThis model requires the ranking algorithm to be robust to missing values: more than 50% of queries to web search engines are unique, with no previous implicit feedback available.\nWe now describe such a ranker, which we used to learn over the combined feature sets including implicit feedback.\n3.3 Learning to Rank Web Search Results\nA key aspect of our approach is exploiting recent advances in machine learning, namely trainable ranking algorithms for web search and information retrieval (e.g., [5, 11] and classical results reviewed in [3]).\nIn our setting, explicit human relevance judgments (labels) are available for a set of web search queries and results.\nHence, an attractive choice is to use a
supervised machine learning technique to learn a ranking function that best predicts relevance judgments.\nRankNet is one such algorithm.\nIt is a neural net tuning algorithm that optimizes feature weights to best match explicitly provided pairwise user preferences.\nWhile the specific training algorithms used by RankNet are beyond the scope of this paper, the algorithm is described in detail in [5], which includes an extensive evaluation and comparison with other ranking methods.\nAn attractive feature of RankNet is both train- and run-time efficiency--runtime ranking can be quickly computed and can scale to the web, and training can be done over thousands of queries and associated judged results.\nWe use a 2-layer implementation of RankNet in order to model non-linear relationships between features.\nFurthermore, RankNet can learn with many (differentiable) cost functions, and hence can automatically learn a ranking function from human-provided labels, an attractive alternative to heuristic feature combination techniques.\nHence, we will also use RankNet as a generic ranker to explore the contribution of implicit feedback for different ranking alternatives.\n4.\nIMPLICIT USER FEEDBACK MODEL\nOur goal is to accurately interpret noisy user feedback obtained by tracing user interactions with the search engine.\nInterpreting implicit feedback in a real web search setting is not an easy task.\nWe characterize this problem in detail in [1], where we motivate and evaluate a wide variety of models of implicit user activities.\nThe general approach is to represent user actions for each search result as a vector of features, and then train a ranker on these features to discover feature values indicative of relevant (and non-relevant) search results.\nWe first briefly summarize our features and model, and the learning approach (Section 4.2), in order to provide sufficient information to replicate our ranking methods and the subsequent experiments.\n4.1 Representing User Actions as
Features\nWe model observed web search behaviors as a combination of a \"background\" component (i.e., query- and relevance-independent noise in user behavior, including positional biases with result interactions), and a \"relevance\" component (i.e., query-specific behavior indicative of relevance of a result to a query).\nWe design our features to take advantage of aggregated user behavior.\nThe feature set comprises directly observed features (computed directly from observations for each query), as well as query-specific derived features, computed as the deviation from the overall query-independent distribution of values for the corresponding directly observed feature values.\nThe features used to represent user interactions with web search results are summarized in Table 4.1.\nThis information was obtained via opt-in client-side instrumentation from users of a major web search engine.\nWe include the traditional implicit feedback features such as clickthrough counts for the results, as well as our novel derived features such as the deviation of the observed clickthrough number for a given query-URL pair from the expected number of clicks on a result in the given position.\nWe also model the browsing behavior after a result was clicked--e.g., the average page dwell time for a given query-URL pair, as well as its deviation from the expected (average) dwell time.\nFurthermore, the feature set was designed to provide essential information about the user experience to make feedback interpretation robust.\nFor example, web search users can often determine whether a result is relevant by looking at the result title, URL, and summary--in many cases, looking at the original document is not necessary.\nTo model this aspect of user experience we include features such as the overlap in words in title and words in query (TitleOverlap) and the fraction of words shared by the query and the result summary.\nTable 4.1: Some features used to represent post-search 
navigation history for a given query and search result URL.\nHaving described our feature set, we briefly review our general method for deriving a user behavior model.\n4.2 Deriving a User Feedback Model\nTo learn to interpret the observed user behavior, we correlate user actions (i.e., the features in Table 4.1 representing the actions) with the explicit user judgments for a set of training queries.\nWe find all the instances in our session logs where these queries were submitted to the search engine, and aggregate the user behavior features for all search sessions involving these queries.\nEach observed query-URL pair is represented by the features in Table 4.1, with values averaged over all search sessions, and assigned one of six possible relevance labels, ranging from \"Perfect\" to \"Bad\", according to explicit relevance judgments.\nThese labeled feature vectors are used as input to the RankNet training algorithm (Section 3.3), which produces a trained user behavior model.\nThis approach is particularly attractive as it does not require heuristics beyond feature engineering.\nThe resulting user behavior model is used to help rank web search results--either directly or in combination with other features, as described below.\n5.\nEXPERIMENTAL SETUP\nThe ultimate goal of incorporating implicit feedback into ranking is to improve the relevance of the returned web search results.\nHence, we compare the ranking methods over a large set of judged queries with explicit relevance labels provided by human judges.\nIn order for the evaluation to be realistic, we obtained a random sample of queries from the web search logs of a major search engine, with associated results and traces of user actions.\nWe describe this dataset in detail next.\nSection 5.2 describes the metrics we use to evaluate the ranking alternatives, which are listed in Section 5.3 and compared in the experiments of Section 6.\n5.1 Datasets\nWe compared our ranking methods over a random sample of 3,000 queries 
from the search engine query logs.\nThe queries were drawn from the logs uniformly at random by token without replacement, resulting in a query sample representative of the overall query distribution.\nOn average, 30 results per query were explicitly labeled by human judges using a six-point scale ranging from \"Perfect\" down to \"Bad\".\nOverall, there were over 83,000 results with explicit relevance judgments.\nIn order to compute various statistics, documents with label \"Good\" or better are considered \"relevant\", and documents with lower labels \"non-relevant\".\nNote that the experiments were performed over results already highly ranked by a web search engine, which corresponds to the typical user experience of seeing only a small number of highly ranked results for a typical web search query.\nThe user interactions were collected over a period of 8 weeks using voluntary opt-in information.\nIn total, over 1.2 million unique queries were instrumented, resulting in over 12 million individual interactions with the search engine.\nThe data consisted of user interactions with the web search engine (e.g., clicking on a result link, going back to search results, etc.) 
performed after a query was submitted.\nThese actions were aggregated across users and search sessions and converted to the features in Table 4.1.\nTo create the training, validation, and test query sets, we created three different random splits of 1,500 training, 500 validation, and 1,000 test queries.\nThe splits were done randomly by query, so that there was no overlap in training, validation, and test queries.\n5.2 Evaluation Metrics\nWe evaluate the ranking algorithms over a range of accepted information retrieval metrics, namely Precision at K (P(K)), Normalized Discounted Cumulative Gain (NDCG), and Mean Average Precision (MAP).\nEach metric focuses on a different aspect of system performance, as we describe below.\n\u2022 Precision at K: As the most intuitive metric, P(K) reports the fraction of documents ranked in the top K results that are labeled as relevant.\nIn our setting, we require a relevant document to be labeled \"Good\" or higher.\nThe position of relevant documents within the top K is irrelevant, and hence this metric measures overall user satisfaction with the top K results.\n\u2022 NDCG at K: NDCG is a retrieval measure devised specifically for web search evaluation [10].\nFor a given query q, the ranked results are examined from the top ranked down, and the NDCG computed as:\nN(q, K) = Mq \u2211_{j=1..K} (2^{r(j)} \u2212 1) \/ log(1 + j)\nwhere Mq is a normalization constant calculated so that a perfect ordering would obtain an NDCG of 1, and each r(j) is an integer relevance label (0 = \"Bad\" and 5 = \"Perfect\") of the result returned at position j. 
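The two per-query metrics just defined can be sketched as follows. This is a minimal illustration, not the authors' implementation: the integer threshold standing in for the \"Good\" label (here, label >= 4) and the base-2 logarithm in the discount are assumptions.

```python
import math

def precision_at_k(labels, k, rel_threshold=4):
    """P(K): fraction of the top K results labeled relevant.

    `labels` are integer relevance labels (0 = "Bad" .. 5 = "Perfect")
    in ranked order; the numeric cutoff for "Good" is hypothetical.
    """
    return sum(1 for r in labels[:k] if r >= rel_threshold) / k

def ndcg_at_k(labels, k):
    """NDCG at K: Mq * sum_{j=1..K} (2^r(j) - 1) / log(1 + j).

    Mq normalizes so that a perfect ordering scores 1; log base 2 assumed.
    """
    def dcg(ranked):
        return sum((2 ** r - 1) / math.log2(1 + j)
                   for j, r in enumerate(ranked[:k], start=1))
    ideal = dcg(sorted(labels, reverse=True))  # DCG of the perfect ordering
    return dcg(labels) / ideal if ideal > 0 else 0.0
```

MAP then follows by averaging the precision values computed at the rank of each relevant document, and averaging those per-query values over the query set.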
Note that unlabeled and \"Bad\" documents do not contribute to the sum, but they push relevant labeled documents down the ranking and thereby reduce the NDCG for the query.\nNDCG is well suited to web search evaluation, as it rewards relevant documents in the top ranked results more heavily than those ranked lower.\n\u2022 MAP: Average precision for each query is defined as the mean of the precision at K values computed after each relevant document is retrieved.\nThe final MAP value is defined as the mean of the average precisions of all queries in the test set.\nThis metric is the most commonly used single-value summary of a run over a set of queries.\n5.3 Ranking Methods Compared\nRecall that our goal is to quantify the effectiveness of implicit behavior for real web search.\nOne dimension is to compare the utility of implicit feedback with other information available to a web search engine.\nSpecifically, we compare the effectiveness of implicit user behavior with content-based matching, static page quality features, and combinations of all features.\n\u2022 BM25F: As a strong web search baseline we used the BM25F scoring, which was used in one of the best-performing systems in the TREC 2004 Web track [23,27].\nBM25F and its variants have been extensively described and evaluated in the IR literature, and hence serve as a strong, reproducible baseline.\nThe BM25F variant we used for our experiments computes separate match scores for each \"field\" of a result document (e.g., body text, title, and anchor text), and incorporates query-independent link-based information (e.g., PageRank, ClickDistance, and URL depth).\nThe scoring function and field-specific tuning are described in detail in [23].\nNote that BM25F does not directly consider explicit or implicit feedback for tuning.\n\u2022 RN: The ranking produced by a neural net ranker (RankNet, described in Section 3.3) that learns to rank web search results by incorporating BM25F and a large number of additional 
static and dynamic features describing each search result.\nThis system automatically learns weights for all features (including the BM25F score for a document) based on explicit human labels for a large set of queries.\nA system incorporating an implementation of RankNet is currently in use by a major search engine and can be considered representative of the state of the art in web search.\n\u2022 BM25F-RerankCT: The ranking produced by incorporating clickthrough statistics to reorder web search results ranked by BM25F above.\nClickthrough is a particularly important special case of implicit feedback, and has been shown to correlate with result relevance.\nThis is a special case of the ranking method in Section 3.1, with the weight wI set to 1000, where the implicit ranking score Id is simply the number of clicks on the result corresponding to d.\nIn effect, this ranking brings to the top all returned web search results with at least one click (and orders them in decreasing order by number of clicks).\nThe relative ranking of the remainder of the results is unchanged, and they are inserted below all clicked results.\nThis method serves as our baseline implicit feedback reranking method.\n\u2022 BM25F-RerankAll: The ranking produced by reordering the BM25F results using all user behavior features (Section 4).\nThis method learns a model of user preferences by correlating feature values with explicit relevance labels using the RankNet neural net algorithm (Section 4.2).\nAt runtime, for a given query the implicit score Ir is computed for each result r with available user interaction features, and the implicit ranking is produced.\nThe merged ranking is computed as described in Section 3.1.\nBased on the experiments over the development set we fix the value of wI to 3 (the effect of the wI parameter for this ranker turned out to be negligible).\n\u2022 BM25F + All: Ranking derived by training the RankNet (Section 3.3) learner over the feature set consisting of the BM25F score as well as all implicit 
feedback features (Section 3.2).\nWe used the 2-layer implementation of RankNet [5] trained on the queries and labels in the training and validation sets.\n\u2022 RN+All: Ranking derived by training the 2-layer RankNet ranking algorithm (Section 3.3) over the union of all content, dynamic, and implicit feedback features (i.e., all of the features described above as well as all of the new implicit feedback features we introduced).\nThe ranking methods above span the range of the information used for ranking, from not using implicit or explicit feedback at all (i.e., BM25F) to a modern web search engine using hundreds of features and tuned on explicit judgments (RN).\nAs we will show next, incorporating user behavior into these ranking systems dramatically improves the relevance of the returned documents.\n6.\nEXPERIMENTAL RESULTS\nImplicit feedback for web search ranking can be exploited in a number of ways.\nWe compare alternative methods of exploiting implicit feedback, both by re-ranking the top results (i.e., the BM25F-RerankCT and BM25F-RerankAll methods that reorder BM25F results), as well as by integrating the implicit features directly into the ranking process (i.e., the RN+All and BM25F + All methods which learn to rank results over the implicit feedback and other features).\nWe compare our methods against strong baselines (BM25F and RN) over the NDCG, Precision at K, and MAP measures defined in Section 5.2.\nThe results were averaged over three random splits of the overall dataset.\nEach split contained 1,500 training, 500 validation, and 1,000 test queries, all query sets disjoint.\nWe first present the results over all 1,000 test queries (i.e., including queries for which there are no implicit measures, so we use the original web rankings).\nWe then drill down to examine the effects on reranking for the attempted queries in more detail, analyzing where implicit feedback proved most beneficial.\nWe first experimented with different methods of re-ranking 
the output of the BM25F search results.\nFigures 6.1 and 6.2 report NDCG and Precision for BM25F, as well as for the strategies reranking results with user feedback (Section 3.1).\nIncorporating all user feedback (either in the reranking framework or directly as features to the learner) results in significant improvements (using a two-tailed t-test with p = 0.01) over both the original BM25F ranking as well as over reranking with clickthrough alone.\nThe improvement is consistent across the top 10 results and largest for the top result: NDCG at 1 for BM25F + All is 0.622 compared to 0.518 for the original results, and precision at 1 similarly increases from 0.5 to 0.63.\nBased on these results we will use the direct feature combination (i.e., BM25F + All) ranker for subsequent comparisons involving implicit feedback.\nFigure 6.1: NDCG at K for BM25F, BM25F-RerankCT, BM25F-Rerank-All, and BM25F + All for varying K\nFigure 6.2: Precision at K for BM25F, BM25F-RerankCT, BM25F-Rerank-All, and BM25F + All for varying K\nInterestingly, using clickthrough alone, while giving significant benefit over the original BM25F ranking, is not as effective as considering the full set of features in Table 4.1.\nWhile we analyze user behavior (and the most effective component features) in a separate paper [1], it is worthwhile to give a concrete example of the kind of noise inherent in real user feedback in a web search setting.\nFigure 6.3: Relative clickthrough frequency for queries with varying Position of Top Relevant result (PTR).\nIf users considered only the relevance of a result to their query, they would click on the topmost relevant results.\nUnfortunately, as Joachims and others have shown, presentation also influences which results users click on quite dramatically.\nUsers often click on results above the relevant one, presumably because the short summaries do not provide enough information to make accurate relevance assessments and they have learned that on average top-ranked 
items are relevant.\nFigure 6.3 shows relative clickthrough frequencies for queries with known relevant items at positions other than the first position; the position of the top relevant result (PTR) ranges from 2 to 10 in the figure.\nFor example, for queries with the first relevant result at position 5 (PTR = 5), there are more clicks on the non-relevant results in higher ranked positions than on the first relevant result at position 5.\nAs we will see, learning over a richer behavior feature set results in substantial accuracy improvement over clickthrough alone.\nWe now consider incorporating user behavior into a much richer feature set, RN (Section 5.3), used by a major web search engine.\nRN incorporates BM25F, link-based features, and hundreds of other features.\nFigure 6.4 reports NDCG at K and Figure 6.5 reports Precision at K. Interestingly, while the original RN rankings are significantly more accurate than BM25F alone, incorporating implicit feedback features (BM25F + All) results in a ranking that significantly outperforms the original RN rankings.\nIn other words, implicit feedback incorporates sufficient information to replace the hundreds of other features available to the RankNet learner trained on the RN feature set.\nFigure 6.4: NDCG at K for BM25F, BM25F + All, RN, and RN+All for varying K\nFurthermore, enriching the RN feature set with implicit feedback features yields significant gains on all measures, allowing RN+All to outperform all other methods.\nThis demonstrates the complementary nature of implicit feedback with the other features available to a state-of-the-art web search engine.\nFigure 6.5: Precision at K for BM25F, BM25F + All, RN, and RN+All for varying K\nWe summarize the performance of the different ranking methods in Table 6.1.\nWe report the Mean Average Precision (MAP) score for each system.\nWhile not intuitive to interpret, MAP allows quantitative comparison on a single metric.\nThe gains marked with * are significant at the p = 0.01 level 
using a two-tailed t-test.\nTable 6.1: Mean Average Precision (MAP) for all strategies.\nSo far we reported results averaged across all queries in the test set.\nUnfortunately, less than half had sufficient interactions to attempt reranking.\nOut of the 1,000 queries in the test set, between 46% and 49%, depending on the train-test split, had sufficient interaction information to make predictions (i.e., there was at least 1 search session in which at least 1 result URL was clicked on by the user).\nThis is not surprising: web search is heavy-tailed, and there are many unique queries.\nWe now consider the performance on the queries for which user interactions were available.\nFigure 6.6 reports NDCG for the subset of the test queries with the implicit feedback features.\nThe gains at top 1 are dramatic.\nThe NDCG at 1 of BM25F + All increases from 0.6 to 0.75 (a 31% relative gain), achieving performance comparable to RN+All operating over a much richer feature set.\nFigure 6.6: NDCG at K for BM25F, BM25F + All, RN, and RN+All on test queries with user interactions\nSimilarly, gains on precision at top 1 are substantial (Figure 6.7), and are likely to be apparent to web search users.\nWhen implicit feedback is available, the BM25F + All system returns a relevant document at position 1 almost 70% of the time, compared to 53% of the time when implicit feedback is not considered by the original BM25F system.\nFigure 6.7: Precision at K for BM25F, BM25F + All, RN, and RN+All on test queries with user interactions\nWe summarize the results on the MAP measure for attempted queries in Table 6.2.\nMAP improvements are both substantial and significant, with improvements over the BM25F ranker most pronounced.\nTable 6.2: Mean Average Precision (MAP) on attempted queries for the best-performing methods\nWe now analyze the cases where implicit feedback was most helpful.\nFigure 6.8 reports the MAP improvements over the \"baseline\" BM25F run for each query with MAP under 0.6.\nNote 
that most of the improvement is for poorly performing queries (i.e., MAP < 0.1).\nInterestingly, incorporating user behavior information degrades accuracy for queries with a high original MAP score.\nOne possible explanation is that these \"easy\" queries tend to be navigational (i.e., having a single, highly-ranked, most appropriate answer), and user interactions with lower-ranked results may indicate divergent information needs that are better served by the less popular results (with correspondingly poor overall relevance ratings).\nFigure 6.8: Gain of BM25F + All over the original BM25F ranking\nTo summarize our experimental results, incorporating implicit feedback in a real web search setting resulted in significant improvements over the original rankings, using both BM25F and RN baselines.\nOur rich set of implicit features, such as time on page and deviations from the average behavior, provides advantages over using clickthrough alone as an indicator of interest.\nFurthermore, incorporating implicit feedback features directly into the learned ranking function is more effective than using implicit feedback for reranking.\nThe improvements observed over large test sets of queries (1,000 total, between 466 and 495 with implicit feedback available) are both substantial and statistically significant.\n7.\nCONCLUSIONS AND FUTURE WORK\nIn this paper we explored the utility of incorporating noisy implicit feedback obtained in a real web search setting to improve web search ranking.\nWe performed a large-scale evaluation over 3,000 queries and more than 12 million user interactions with a major search engine, establishing the utility of incorporating \"noisy\" implicit feedback to improve web search relevance.\nWe compared two alternatives for incorporating implicit feedback into the search process, namely reranking with implicit feedback and incorporating implicit feedback features directly into the trained ranking function.\nOur experiments showed significant improvement over 
methods that do not consider implicit feedback.\nThe gains are particularly dramatic for the top K = 1 result in the final ranking, with precision improvements as high as 31%, and the gains are substantial for all values of K.\nOur experiments showed that implicit user feedback can further improve web search performance when incorporated directly with popular content- and link-based features.\nInterestingly, implicit feedback is particularly valuable for queries with a poor original ranking of results (e.g., MAP lower than 0.1).\nOne promising direction for future work is to apply recent research on automatically predicting query difficulty, and only attempt to incorporate implicit feedback for the \"difficult\" queries.\nAs another research direction we are exploring methods for extending our predictions to previously unseen queries (e.g., query clustering), which should further improve the web search experience of users.","keyphrases":["web search","web search rank","rank","user behavior","inform","result","feedback","user interact","inform retriev","relev feedback","score","document","implicit relev feedback"],"prmu":["P","P","P","P","P","P","P","P","M","M","U","U","M"]} {"id":"H-79","title":"Beyond PageRank: Machine Learning for Static Ranking","abstract":"Since the publication of Brin and Page's paper on PageRank, many in the Web community have depended on PageRank for the static (query-independent) ordering of Web pages. We show that we can significantly outperform PageRank using features that are independent of the link structure of the Web. We gain a further boost in accuracy by using data on the frequency at which users visit Web pages. We use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics. The resulting model achieves a static ranking pairwise accuracy of 67.3% (vs. 
56.7% for PageRank or 50% for random).","lvl-1":"Beyond PageRank: Machine Learning for Static Ranking Matthew Richardson Microsoft Research One Microsoft Way Redmond, WA 98052 +1 (425) 722-3325 mattri@microsoft.com Amit Prakash MSN One Microsoft Way Redmond, WA 98052 +1 (425) 705-6015 amitp@microsoft.com Eric Brill Microsoft Research One Microsoft Way Redmond, WA 98052 +1 (425) 705-4992 brill@microsoft.com ABSTRACT Since the publication of Brin and Page's paper on PageRank, many in the Web community have depended on PageRank for the static (query-independent) ordering of Web pages.\nWe show that we can significantly outperform PageRank using features that are independent of the link structure of the Web.\nWe gain a further boost in accuracy by using data on the frequency at which users visit Web pages.\nWe use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics.\nThe resulting model achieves a static ranking pairwise accuracy of 67.3% (vs. 
56.7% for PageRank or 50% for random).\nCategories and Subject Descriptors I.2.6 [Artificial Intelligence]: Learning.\nH.3.3 [Information Storage and Retrieval]: Information Search and Retrieval.\nGeneral Terms Algorithms, Measurement, Performance, Experimentation.\n1.\nINTRODUCTION Over the past decade, the Web has grown exponentially in size.\nUnfortunately, this growth has not been isolated to good-quality pages.\nThe number of incorrect, spamming, and malicious (e.g., phishing) sites has also grown rapidly.\nThe sheer number of both good and bad pages on the Web has led to an increasing reliance on search engines for the discovery of useful information.\nUsers rely on search engines not only to return pages related to their search query, but also to separate the good from the bad, and order results so that the best pages are suggested first.\nTo date, most work on Web page ranking has focused on improving the ordering of the results returned to the user (querydependent ranking, or dynamic ranking).\nHowever, having a good query-independent ranking (static ranking) is also crucially important for a search engine.\nA good static ranking algorithm provides numerous benefits: \u2022 Relevance: The static rank of a page provides a general indicator to the overall quality of the page.\nThis is a useful input to the dynamic ranking algorithm.\n\u2022 Efficiency: Typically, the search engine``s index is ordered by static rank.\nBy traversing the index from highquality to low-quality pages, the dynamic ranker may abort the search when it determines that no later page will have as high of a dynamic rank as those already found.\nThe more accurate the static rank, the better this early-stopping ability, and hence the quicker the search engine may respond to queries.\n\u2022 Crawl Priority: The Web grows and changes as quickly as search engines can crawl it.\nSearch engines need a way to prioritize their crawl-to determine which pages to recrawl, how frequently, and how 
often to seek out new pages.\nAmong other factors, the static rank of a page is used to determine this prioritization.\nA better static rank thus provides the engine with a higher-quality, more up-to-date index.\nGoogle is often regarded as the first commercially successful search engine.\nIts ranking was originally based on the PageRank algorithm [5][27].\nDue to this (and possibly due to Google's promotion of PageRank to the public), PageRank is widely regarded as the best method for the static ranking of Web pages.\nThough PageRank has historically been thought to perform quite well, there has as yet been little academic evidence to support this claim.\nEven worse, there has recently been work showing that PageRank may not perform any better than other simple measures on certain tasks.\nUpstill et al. have found that for the task of finding home pages, the number of pages linking to a page and the type of URL were as, or more, effective than PageRank [32].\nThey found similar results for the task of finding high-quality companies [31].\nPageRank has also been used in systems for TREC's very large collection and Web track competitions, but with much less success than had been expected [17].\nFinally, Amento et al. 
[1] found that simple features, such as the number of pages on a site, performed as well as PageRank.\nDespite these findings, the general belief remains among many, both in academia and in the public, that PageRank is an essential factor for a good static rank.\nFailing this, it is still assumed that using the link structure is crucial, in the form of the number of inlinks or the amount of anchor text.\nIn this paper, we show there are a number of simple URL- or page-based features that significantly outperform PageRank (for the purposes of statically ranking Web pages) despite ignoring the structure of the Web.\nWe combine these and other static features using machine learning to achieve a ranking system that is significantly better than PageRank (in pairwise agreement with human labels).\nA machine learning approach to static ranking has other advantages besides the quality of the ranking.\nBecause the measure consists of many features, it is harder for malicious users to manipulate it (i.e., to raise their page's static rank to an undeserved level through questionable techniques, also known as Web spamming).\nThis is particularly true if the feature set is not known.\nIn contrast, a single measure like PageRank can be easier to manipulate because spammers need only concentrate on one goal: how to cause more pages to point to their page.\nWith an algorithm that learns, a feature that becomes unusable due to spammer manipulation will simply be reduced or removed from the final computation of rank.\nThis flexibility allows a ranking system to rapidly react to new spamming techniques.\nA machine learning approach to static ranking is also able to take advantage of any advances in the machine learning field.\nFor example, recent work on adversarial classification [12] suggests that it may be possible to explicitly model the Web page spammer's (the adversary's) actions, adjusting the ranking model in advance of the spammer's attempts to circumvent it.\nAnother example is the 
elimination of outliers in constructing the model, which helps reduce the effect that unique sites may have on the overall quality of the static rank.\nBy moving static ranking to a machine learning framework, we not only gain in accuracy, but also gain in the ability to react to spammers' actions, to rapidly add new features to the ranking algorithm, and to leverage advances in the rapidly growing field of machine learning.\nFinally, we believe there will be significant advantages to using this technique for other domains, such as searching a local hard drive or a corporation's intranet.\nThese are domains where the link structure is particularly weak (or non-existent), but there are other domain-specific features that could be just as powerful.\nFor example, the author of an intranet page and his\/her position in the organization (e.g., CEO, manager, or developer) could provide significant clues as to the importance of that page.\nA machine learning approach thus allows rapid development of a good static algorithm in new domains.\nThis paper's contribution is a systematic study of static features, including PageRank, for the purposes of (statically) ranking Web pages.\nPrevious studies on PageRank typically used subsets of the Web that are significantly smaller (e.g., the TREC VLC2 corpus, used by many, contains only 19 million pages).\nAlso, the performance of PageRank and other static features has typically been evaluated in the context of a complete system for dynamic ranking, or for other tasks such as question answering.\nIn contrast, we explore the use of PageRank and other features for the direct task of statically ranking Web pages.\nWe first briefly describe the PageRank algorithm.\nIn Section 3 we introduce RankNet, the machine learning technique used to combine static features into a final ranking.\nSection 4 describes the static features.\nThe heart of the paper is in Section 5, which presents our experiments and results.\nWe conclude with a 
discussion of related and future work.\n2.\nPAGERANK The basic idea behind PageRank is simple: a link from a Web page to another can be seen as an endorsement of that page.\nIn general, links are made by people.\nAs such, they are indicative of the quality of the pages to which they point - when creating a page, an author presumably chooses to link to pages deemed to be of good quality.\nWe can take advantage of this linkage information to order Web pages according to their perceived quality.\nImagine a Web surfer who jumps from Web page to Web page, choosing with uniform probability which link to follow at each step.\nIn order to reduce the effect of dead-ends or endless cycles, the surfer will occasionally jump to a random page with some small probability \u03b1, or when on a page with no out-links.\nIf averaged over a sufficient number of steps, the probability the surfer is on page j at some point in time is given by the formula:\nP(j) = \u03b1\/N + (1 \u2212 \u03b1) \u2211_{i \u2208 Bj} P(i)\/|Fi| (1)\nwhere N is the total number of pages, Fi is the set of pages that page i links to, and Bj is the set of pages that link to page j.\nThe PageRank score for node j is defined as this probability: PR(j)=P(j).\nBecause equation (1) is recursive, it must be iteratively evaluated until P(j) converges (typically, the initial distribution for P(j) is uniform).\nThe intuition is that, because a random surfer would end up at the page more frequently, it is likely a better page.\nAn alternative view of equation (1) is that each page is assigned a quality, P(j).\nA page gives an equal share of its quality to each page it points to.\nPageRank is computationally expensive.\nOur collection of 5 billion pages contains approximately 370 billion links.\nComputing PageRank requires iterating over these billions of links multiple times (until convergence).\nIt requires large amounts of memory (or very smart caching schemes that slow the computation down even further), and if spread across multiple machines, requires 
significant communication between them. Though much work has been done on optimizing the PageRank computation (see, e.g., [25] and [6]), it remains a relatively slow, computationally expensive property to compute.

3. RANKNET

Much work in machine learning has been done on the problems of classification and regression. Let X = {x_i} be a collection of feature vectors (typically, a feature is any real-valued number), and Y = {y_i} be a collection of associated classes, where y_i is the class of the object described by feature vector x_i. The classification problem is to learn a function f that maps y_i = f(x_i), for all i. When y_i is real-valued as well, this is called regression.

Static ranking can be seen as a regression problem. If we let x_i represent features of page i, and y_i be a value (say, the rank) for each page, we could learn a regression function that maps each page's features to its rank. However, this over-constrains the problem we wish to solve. All we really care about is the order of the pages, not the actual values assigned to them. Recent work on this ranking problem [7][13][18] directly attempts to optimize the ordering of the objects, rather than the values assigned to them. For these, let Z = {⟨i,j⟩} be a collection of pairs of items, where item i should be assigned a higher value than item j. The goal of the ranking problem, then, is to learn a function f such that

    ∀⟨i,j⟩ ∈ Z, f(x_i) > f(x_j)

Note that, as with learning a regression function, the result of this process is a function f that maps feature vectors to real values. This function can still be applied anywhere that a regression-learned function could be applied. The only difference is the technique used to learn the function. By directly optimizing the ordering of objects, these methods are able to learn a function that does a better job of ranking than do regression techniques.

We used RankNet [7], one of the aforementioned techniques for learning ranking functions, to
learn our static rank function. RankNet is a straightforward modification to the standard neural network back-prop algorithm. As with back-prop, RankNet attempts to minimize the value of a cost function by adjusting each weight in the network according to the gradient of the cost function with respect to that weight. The difference is that, while a typical neural network cost function is based on the difference between the network output and the desired output, the RankNet cost function is based on the difference between a pair of network outputs. That is, for each pair of feature vectors ⟨i,j⟩ in the training set, RankNet computes the network outputs o_i and o_j. Since vector i is supposed to be ranked higher than vector j, the larger o_j − o_i is, the larger the cost. RankNet also allows the pairs in Z to be weighted with a confidence (posed as the probability that the pair satisfies the ordering induced by the ranking function). In this paper, we used a probability of one for all pairs. In the next section, we discuss the features used in our feature vectors, x_i.

4. FEATURES

To apply RankNet (or other machine learning techniques) to the ranking problem, we needed to extract a set of features from each page. We divided our feature set into four mutually exclusive categories: page-level (Page), domain-level (Domain), anchor text and inlinks (Anchor), and popularity (Popularity). We also optionally used the PageRank of a page as a feature. Below, we describe each of these feature categories in more detail.

PageRank. We computed PageRank on a Web graph of 5 billion crawled pages (and 20 billion known URLs linked to by these pages). This represents a significant portion of the Web, and is approximately the same number of pages as are used by Google, Yahoo, and MSN for their search engines. Because PageRank is a graph-based algorithm, it is important that it be run on as large a subset of the Web as possible. Most previous studies on PageRank used
subsets of the Web that are significantly smaller (e.g., the TREC VLC2 corpus, used by many, contains only 19 million pages). We computed PageRank using the standard value of 0.85 for α.

Popularity. Another feature we used is the actual popularity of a Web page, measured as the number of times that it has been visited by users over some period of time. We have access to such data from users who have installed the MSN toolbar and have opted to provide it to MSN. The data is aggregated into a count, for each Web page, of the number of users who viewed that page. Though popularity data is generally unavailable, there are two other sources for it. The first is proxy logs. For example, a university that requires its students to use a proxy has a record of all the pages they have visited while on campus. Unfortunately, proxy data is quite biased and relatively small. Another source, internal to search engines, is the record of which results their users clicked on. Such data was used by the search engine Direct Hit, and has recently been explored for dynamic ranking purposes [20]. An advantage of the toolbar data over this is that it contains information about URL visits that are not just the result of a search. The raw popularity is processed into a number of features, such as the number of times a page was viewed and the number of times any page in the domain was viewed. More details are provided in Section 5.5.

Anchor text and inlinks. These features are based on the information associated with links to the page in question. They include features such as the total amount of text in links pointing to the page (anchor text), the number of unique words in that text, etc.

Page. This category consists of features which may be determined by looking at the page (and its URL) alone. We used only eight simple features, such as the number of words in the body, the frequency of the most common term, etc.

Domain. This category contains features that are computed
as averages across all pages in the domain. For example, the average number of outlinks on any page and the average PageRank.

Many of these features have been used by others for ranking Web pages, particularly the anchor and page features. As mentioned, the evaluation is typically for dynamic ranking, and we wish to evaluate their use for static ranking. Also, to our knowledge, this is the first study on the use of actual page visitation popularity for static ranking. The closest similar work is on using click-through behavior (that is, which search engine results the users click on) to affect dynamic ranking (see, e.g., [20]).

Because we use a wide variety of features to come up with a static ranking, we refer to this as fRank (for feature-based ranking). fRank uses RankNet and the set of features described in this section to learn a ranking function for Web pages. Unless otherwise specified, fRank was trained with all of the features.

5. EXPERIMENTS

In this section, we demonstrate that we can outperform PageRank by applying machine learning to a straightforward set of features. Before the results, we first discuss the data, the performance metric, and the training method.

5.1 Data

In order to evaluate the quality of a static ranking, we needed a gold standard defining the correct ordering for a set of pages. For this, we employed a dataset which contains human judgments for 28,000 queries. For each query, a number of results are manually assigned a rating, from 0 to 4, by human judges. The rating is meant to be a measure of how relevant the result is for the query, where 0 means poor and 4 means excellent. There are approximately 500k judgments in all, or an average of 18 ratings per query. The queries are selected by randomly choosing queries from among those issued to the MSN search engine. The probability that a query is selected is proportional to its frequency among all of the queries. As a result, common queries are more
likely to be judged than uncommon queries. As an example of how diverse the queries are, the first four queries in the training set are chef schools, chicagoland speedway, eagles fan club, and Turkish culture. The documents selected for judging are those that we expected would, on average, be reasonably relevant (for example, the top ten documents returned by MSN's search engine). This provides significantly more information than randomly selecting documents on the Web, the vast majority of which would be irrelevant to a given query.

Because of this process, the judged pages tend to be of higher quality than the average page on the Web, and tend to be pages that will be returned for common search queries. This bias is good when evaluating the quality of static ranking for the purposes of index ordering and returning relevant documents, because the most important portion of the index to be well-ordered and relevant is the portion that is frequently returned for search queries. Because of this bias, however, the results in this paper are not applicable to crawl prioritization. In order to obtain experimental results on crawl prioritization, we would need ratings on a random sample of Web pages.

To convert the data from query-dependent to query-independent, we simply removed the query, taking the maximum over judgments for a URL that appears in more than one query. The reasoning behind this is that a page that is relevant for some query and irrelevant for another is probably a decent page and should have a high static rank. Because we evaluated the pages on queries that occur frequently, our data indicates the correct index ordering, and assigns high value to pages that are likely to be relevant to a common query.

We randomly assigned queries to a training, validation, or test set, such that they contained 84%, 8%, and 8% of the queries, respectively. Each set contains all of the ratings for a given query, and no query appears in more than one
set. The training set was used to train fRank. The validation set was used to select the model that had the highest performance. The test set was used for the final results.

This data gives us a query-independent ordering of pages. The goal for a static ranking algorithm will be to reproduce this ordering as closely as possible. In the next section, we describe the measure we used to evaluate this.

5.2 Measure

We chose to use pairwise accuracy to evaluate the quality of a static ranking. The pairwise accuracy is the fraction of time that the ranking algorithm and the human judges agree on the ordering of a pair of Web pages. If S(x) is the static ranking assigned to page x, and H(x) is the human judgment of relevance for x, then consider the following sets:

    H_p = {⟨x,y⟩ : H(x) > H(y)}  and  S_p = {⟨x,y⟩ : S(x) > S(y)}

The pairwise accuracy is the portion of H_p that is also contained in S_p:

    pairwise accuracy = |H_p ∩ S_p| / |H_p|

This measure was chosen for two reasons. First, the discrete human judgments provide only a partial ordering over Web pages, making it difficult to apply a measure such as the Spearman rank order correlation coefficient (in the pairwise accuracy measure, a pair of documents with the same human judgment does not affect the score). Second, the pairwise accuracy has an intuitive meaning: it is the fraction of pairs of documents for which, when the humans claim one is better than the other, the static rank algorithm orders them correctly.

5.3 Method

We trained fRank (a RankNet-based neural network) using the following parameters. We used a fully connected two-layer network. The hidden layer had 10 hidden nodes. The input weights to this layer were all initialized to zero. The output layer (just a single node) weights were initialized using a uniform random distribution in the range [-0.1, 0.1]. We used tanh as the transfer function from the inputs to the hidden layer, and a linear function from the hidden layer to the output. The cost function is the
pairwise cross-entropy cost function, as discussed in Section 3.

The features in the training set were normalized to have zero mean and unit standard deviation. The same linear transformation was then applied to the features in the validation and test sets.

For training, we presented the network with 5 million pairings of pages, where one page had a higher rating than the other. The pairings were chosen uniformly at random (with replacement) from all possible pairings. When forming the pairs, we ignored the magnitude of the difference between the ratings (the rating spread) for the two URLs. Hence, the weight for each pair was constant (one), and the probability of a pair being selected was independent of its rating spread.

We trained the network for 30 epochs. On each epoch, the training pairs were randomly shuffled. The initial training rate was 0.001. At each epoch, we checked the error on the training set. If the error had increased, then we decreased the training rate, under the hypothesis that the network had probably overshot. The training rate at each epoch was thus set to:

    training rate = κ / (1 + ε)

where κ is the initial rate (0.001), and ε is the number of times the training set error has increased. After each epoch, we measured the performance of the neural network on the validation set, using 1 million pairs (chosen randomly with replacement). The network with the highest pairwise accuracy on the validation set was selected, and then tested on the test set. We report the pairwise accuracy on the test set, calculated using all possible pairs.

These parameters were determined and fixed before the static rank experiments in this paper. In particular, the choice of initial training rate, number of epochs, and training rate decay function were taken directly from Burges et al. [7].

Though we had the option of preprocessing any of the features before they were input to the neural network, we refrained from doing so on most of
them. The only exception was the popularity features. As with most Web phenomena, we found that the distribution of site popularity is Zipfian. To reduce the dynamic range, and hopefully make the feature more useful, we presented the network with both the unpreprocessed popularity features and their logarithms (as with the others, the logarithmic feature values were also normalized to have zero mean and unit standard deviation).

Applying fRank to a document is computationally efficient, taking time that is only linear in the number of input features; it is thus within a constant factor of other simple machine learning methods such as naïve Bayes. In our experiments, computing the fRank for all five billion Web pages was approximately 100 times faster than computing the PageRank for the same set.

5.4 Results

As Table 1 shows, fRank significantly outperforms PageRank for the purposes of static ranking. With a pairwise accuracy of 67.4%, fRank more than doubles the accuracy of PageRank (relative to the baseline of 50%, which is the accuracy that would be achieved by a random ordering of Web pages). Note that one of fRank's input features is the PageRank of the page, so we would expect it to perform no worse than PageRank. The significant increase in accuracy implies that the other features (anchor, popularity, etc.)
do in fact contain useful information regarding the overall quality of a page.

Table 1: Basic Results
Technique          Accuracy (%)
None (Baseline)    50.00
PageRank           56.70
fRank              67.43

There are a number of decisions that go into the computation of PageRank, such as how to deal with pages that have no outlinks, the choice of α, numeric precision, convergence threshold, etc. We were able to obtain a computation of PageRank from a completely independent implementation (provided by Marc Najork) that varied somewhat in these parameters. It achieved a pairwise accuracy of 56.52%, nearly identical to that obtained by our implementation. We thus concluded that the quality of the PageRank is not sensitive to these minor variations in algorithm, nor was PageRank's low accuracy due to problems with our implementation of it.

We also wanted to find out how well each feature set performed. To answer this, for each feature set, we trained and tested fRank using only that set of features. The results are shown in Table 2. As can be seen, every single feature set individually outperformed PageRank on this test. Perhaps the most interesting result is that the Page-level features had the highest performance out of all the feature sets. This is surprising because these are features that do not depend on the overall graph structure of the Web, nor even on what pages point to a given page. This is contrary to the common belief that the Web graph structure is the key to finding a good static ranking of Web pages.

Table 2: Results for individual feature sets.
Feature Set    Accuracy (%)
PageRank       56.70
Popularity     60.82
Anchor         59.09
Page           63.93
Domain         59.03
All Features   67.43

Because we are using a two-layer neural network, the features in the learned network can interact with each other in interesting, nonlinear ways. This means that a particular feature that appears to have little value in isolation could actually be very important when used in combination with other features. To
measure the final contribution of a feature set, in the context of all the other features, we performed an ablation study. That is, for each set of features, we trained a network that contained all of the features except that set. We then compared the performance of the resulting network to the performance of the network with all of the features. Table 3 shows the results of this experiment, where the decrease in accuracy is the difference in pairwise accuracy between the network trained with all of the features and the network missing the given feature set.

Table 3: Ablation study. Shown is the decrease in accuracy when we train a network that has all but the given set of features. The last line shows the effect of removing the anchor, PageRank, and domain features, hence a model containing no network or link-based information whatsoever.
Feature Set                  Decrease in Accuracy
PageRank                     0.18
Popularity                   0.78
Anchor                       0.47
Page                         5.42
Domain                       0.10
Anchor, PageRank & Domain    0.60

The results of the ablation study are consistent with the individual feature set study. Both show that the most important feature set is the Page-level feature set, and the second most important is the popularity feature set.

Finally, we wished to see how the performance of fRank improved as we added features; we wanted to find at what point adding more feature sets became relatively useless. Beginning with no features, we greedily added the feature set that improved performance the most. The results are shown in Table 4. For example, the fourth line of the table shows that fRank using the page, popularity, and anchor features outperformed any network that used the page, popularity, and some other feature set, and that the performance of this network was 67.25%.

Table 4: fRank performance as feature sets are added. At each row, the feature set that gave the greatest increase in accuracy was added to the list of features (i.e., we conducted a greedy search over feature sets).
Feature Set
Accuracy (%)
None           50.00
+Page          63.93
+Popularity    66.83
+Anchor        67.25
+PageRank      67.31
+Domain        67.43

Finally, we present a qualitative comparison of PageRank vs. fRank. In Table 5 are the top ten URLs returned for PageRank and for fRank. PageRank's results are heavily weighted towards technology sites. They contain two QuickTime URLs (Apple's video playback software), as well as Internet Explorer and Firefox URLs (both of which are Web browsers). fRank, on the other hand, contains more consumer-oriented sites such as American Express, Target, Dell, etc.

PageRank's bias toward technology can be explained through two processes. First, there are many pages with buttons at the bottom suggesting that the site is optimized for Internet Explorer, or that the visitor needs QuickTime. These generally link back to, in these examples, the Internet Explorer and QuickTime download sites. Consequently, PageRank ranks those pages highly. Though these pages are important, they are not as important as it may seem by looking at the link structure alone. One fix for this is to add information about the link to the PageRank computation, such as the size of the text, whether it was at the bottom of the page, etc. The other bias comes from the fact that the population of Web site authors is different from the population of Web users. Web authors tend to be technologically oriented, and thus their linking behavior reflects those interests. fRank, by knowing the actual visitation popularity of a site (the popularity feature set), is able to eliminate some of that bias. It has the ability to depend more on where actual Web users visit rather than on where the Web site authors have linked.

The results confirm that fRank outperforms PageRank in pairwise accuracy. The two most important feature sets are the page and popularity features. This is surprising, as the page features consisted of only a few (8) simple features. Further experiments found that, of the page
features, those based on the text of the page (as opposed to the URL) performed the best. In the next section, we explore the popularity feature in more detail.

5.5 Popularity Data

As mentioned in Section 4, our popularity data came from MSN toolbar users. For privacy reasons, we had access only to an aggregate count, for each URL, of how many times it was visited by any toolbar user. This limited the possible features we could derive from this data. For possible extensions, see Section 6.3 on future work.

For each URL in our train and test sets, we provided a feature to fRank which was the number of times it had been visited by a toolbar user. However, this feature was quite noisy and sparse, particularly for URLs with query parameters (e.g., http://search.msn.com/results.aspx?q=machine+learning&form=QBHP). One solution was to provide an additional feature which was the number of times any URL at the given domain was visited by a toolbar user. Adding this feature dramatically improved the performance of fRank.

We took this one step further and used the built-in hierarchical structure of URLs to construct many levels of backoff between the full URL and the domain. We did this by using the set of functions shown in Table 6.

Table 6: URL functions used to compute the Popularity feature set.
Function    Example
Exact URL   cnn.com/2005/tech/wikipedia.html?v=mobile
No Params   cnn.com/2005/tech/wikipedia.html
Page        wikipedia.html
URL-1       cnn.com/2005/tech
URL-2       cnn.com/2005
...
Domain      cnn.com
Domain+1    cnn.com/2005
...
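A minimal sketch of the backoff functions in Table 6, assuming a simple path-segment decomposition (the helper name and dictionary keys are ours, not the paper's):

```python
from urllib.parse import urlsplit

def backoff_keys(url):
    """Hypothetical reconstruction of the Table 6 backoff functions: map a
    URL to progressively coarser keys whose toolbar visit counts would
    each serve as one popularity feature."""
    # urlsplit needs a scheme or a leading // to recognize the host part.
    parts = urlsplit(url if "://" in url else "//" + url)
    domain = parts.netloc
    path = parts.path.rstrip("/")
    segments = [s for s in path.split("/") if s]

    keys = {
        "exact_url": url,                       # Exact URL
        "no_params": domain + path,             # query parameters stripped
        "page": segments[-1] if segments else "",
        "domain": domain,
    }
    # URL-1, URL-2, ...: drop trailing path segments one at a time;
    # Domain+1, Domain+2, ...: keep leading segments one at a time.
    for i in range(1, len(segments)):
        keys[f"url-{i}"] = domain + "/" + "/".join(segments[:-i])
        keys[f"domain+{i}"] = domain + "/" + "/".join(segments[:i])
    return keys
```

Note that intermediate keys such as URL-2 and Domain+1 can coincide (as in the cnn.com example from Table 6), which is harmless since each function still contributes its own feature column.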
Each URL was assigned one feature for each function shown in the table. The value of the feature was the count of the number of times a toolbar user visited a URL, where the function applied to that URL matches the function applied to the URL in question. For example, a user's visit to cnn.com/2005/sports.html would increment the Domain and Domain+1 features for the URL cnn.com/2005/tech/wikipedia.html.

As seen in Table 7, adding the domain counts significantly improved the quality of the popularity feature, and adding the numerous backoff functions listed in Table 6 improved the accuracy even further.

Table 7: Effect of adding backoff to the popularity feature set.
Features                          Accuracy (%)
URL count                         58.15
URL and Domain counts             59.31
All backoff functions (Table 6)   60.82

Table 5: Top ten URLs for PageRank vs. fRank
PageRank                        fRank
google.com                      google.com
apple.com/quicktime/download    yahoo.com
amazon.com                      americanexpress.com
yahoo.com                       hp.com
microsoft.com/windows/ie        target.com
apple.com/quicktime             bestbuy.com
mapquest.com                    dell.com
ebay.com                        autotrader.com
mozilla.org/products/firefox    dogpile.com
ftc.gov                         bankofamerica.com

Backing off to subsets of the URL is one technique for dealing with the sparsity of data. It is also informative to see how the performance of fRank depends on the amount of popularity data that we have collected. In Figure 1 we show the performance of fRank trained with only the popularity feature set vs.
the amount of data we have for the popularity feature set. Each day, we receive additional popularity data, and as can be seen in the plot, this increases the performance of fRank. The relation is logarithmic: doubling the amount of popularity data provides a constant improvement in pairwise accuracy.

In summary, we have found that the popularity features provide a useful boost to the overall fRank accuracy. Gathering more popularity data, as well as employing simple backoff strategies, improves this boost even further.

5.6 Summary of Results

The experiments provide a number of conclusions. First, fRank performs significantly better than PageRank, even without any information about the Web graph. Second, the page-level and popularity features were the most significant contributors to pairwise accuracy. Third, by collecting more popularity data, we can continue to improve fRank's performance.

The popularity data provides two benefits to fRank. First, we see that, qualitatively, fRank's ordering of Web pages has a more favorable bias than PageRank's.
fRank's ordering seems to correspond to what Web users, rather than Web page authors, prefer. Second, the popularity data is more timely than PageRank's link information. The toolbar provides information about which Web pages people find interesting right now, whereas links are added to pages more slowly, as authors find the time and interest.

6. RELATED AND FUTURE WORK

6.1 Improvements to PageRank

Since the original PageRank paper, there has been work on improving it. Much of that work centers on speeding up and parallelizing the computation [15][25].

One recognized problem with PageRank is that of topic drift: a page about dogs will have high PageRank if it is linked to by many pages that themselves have high rank, regardless of their topic. In contrast, a search engine user looking for good pages about dogs would likely prefer to find pages that are pointed to by many pages that are themselves about dogs. Hence, a link that is on topic should have higher weight than a link that is not. Richardson and Domingos's Query-Dependent PageRank [29] and Haveliwala's Topic-Sensitive PageRank [16] are two approaches that tackle this problem. Other variations to PageRank include weighting links differently for inter- vs.
intra-domain links, adding a backwards step to the random surfer to simulate the back button on most browsers [24], and modifying the jump probability (α) [3]. See Langville and Meyer [23] for a good survey of these and other modifications to PageRank.

6.2 Other related work

PageRank is not the only link analysis algorithm used for ranking Web pages. The most well-known other is HITS [22], which is used by the Teoma search engine [30]. HITS produces a list of hubs and authorities, where hubs are pages that point to many authority pages, and authorities are pages that are pointed to by many hubs. Previous work has shown HITS to perform comparably to PageRank [1].

One field of interest is that of static index pruning (see, e.g., Carmel et al. [8]). Static index pruning methods reduce the size of the search engine's index by removing documents that are unlikely to be returned by a search query. The pruning is typically done based on the frequency of query terms. Similarly, Pandey and Olston [28] suggest crawling pages frequently if they are likely to incorrectly appear (or not appear) as a result of a search. Similar methods could be incorporated into the static rank (e.g., how many frequent queries contain words found on this page).

Others have investigated the effect that PageRank has on the Web at large [9]. They argue that pages with high PageRank are more likely to be found by Web users, thus more likely to be linked to, and thus more likely to maintain a higher PageRank than other pages. The same may occur for the popularity data: if we increase the ranking for popular pages, they are more likely to be clicked on, thus further increasing their popularity. Cho et al.
[10] argue that a more appropriate measure of Web page quality would depend not only on the current link structure of the Web, but also on the change in that link structure. The same technique may be applicable to popularity data: the change in popularity of a page may be more informative than the absolute popularity.

One interesting related work is that of Ivory and Hearst [19]. Their goal was to build a model of Web sites that are considered high quality from the perspective of content, structure and navigation, visual design, functionality, interactivity, and overall experience. They used over 100 page-level features, as well as features encompassing the performance and structure of the site. This let them qualitatively describe the qualities of a page that make it appear attractive (e.g., rare use of italics, at least 9 point font, ...), and (in later work) to build a system that assists novice Web page authors in creating quality pages by evaluating them according to these features. The primary differences between that work and ours are the goal (discovering what constitutes a good Web page vs. ordering Web pages for the purposes of Web search), the size of the study (they used a dataset of less than 6,000 pages vs. our set of 468,000), and our comparison with PageRank.

[Figure 1: Relation between the amount of popularity data and the performance of the popularity feature set (pairwise accuracy vs. days of toolbar data; note the x-axis is a logarithmic scale). The fit is y = 0.577 ln(x) + 58.283, with R² = 0.9822.]

Nevertheless, their work provides insights into additional useful static features that we could incorporate into fRank in the future.

Recent work on incorporating novel features into dynamic ranking includes that by Joachims et al. [21], who investigate the use of implicit feedback from users, in the form of which search engine results are clicked on. Craswell et al.
[11] present a method for determining the best transformation to apply to query-independent features (such as those used in this paper) for the purposes of improving dynamic ranking. Other work, such as Boyan et al. [4] and Bartell et al. [2], applies machine learning for the purposes of improving the overall relevance of a search engine (i.e., the dynamic ranking). They do not apply their techniques to the problem of static ranking.

6.3 Future work

There are many ways in which we would like to extend this work. First, fRank uses only a small number of features. We believe we could achieve even more significant results with more features. In particular, the existence, or lack thereof, of certain words could prove very significant (for instance, "under construction" probably signifies a low-quality page). Other features could include the number of images on a page, the size of those images, the number of layout elements (tables, divs, and spans), use of style sheets, conformance to W3C standards (like XHTML 1.0 Strict), the background color of a page, etc.

Many pages are generated dynamically, the contents of which may depend on parameters in the URL, the time of day, the user visiting the site, or other variables. For such pages, it may be useful to apply the techniques found in [26] to form a static approximation for the purposes of extracting features. The resulting grammar describing the page could itself be a source of additional features describing the complexity of the page, such as how many non-terminal nodes it has, the depth of the grammar tree, etc.
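Page-level features of the kind Section 4 describes are cheap to compute from the page text alone. A minimal sketch, assuming hypothetical stand-in features, since the paper does not enumerate its exact eight:

```python
from collections import Counter

def page_level_features(body_text):
    # Illustrative only: these feature names and choices are ours, not
    # the paper's. Each is computable from the page body alone, with no
    # link-graph or visitation information.
    words = body_text.lower().split()
    counts = Counter(words)
    top = counts.most_common(1)[0][1] if words else 0
    return {
        "num_words": len(words),            # number of words in the body
        "num_unique_words": len(counts),    # vocabulary size
        # relative frequency of the most common term
        "top_term_frequency": top / len(words) if words else 0.0,
    }
```

In a full pipeline, one such vector per page (concatenated with the domain, anchor, and popularity features) would form the input x_i to RankNet.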
fRank allows one to specify a confidence in each pairing of documents. In the future, we will experiment with probabilities that depend on the difference in human judgments between the two items in the pair. For example, a pair of documents where one was rated 4 and the other 0 should have a higher confidence than a pair of documents rated 3 and 2.

The experiments in this paper are biased toward pages that have higher than average quality. Also, fRank with all of the features can only be applied to pages that have already been crawled. Thus, fRank is primarily useful for index ordering and improving relevance, not for directing the crawl. We would like to investigate a machine learning approach for crawl prioritization as well. It may be that a combination of methods is best: for example, using PageRank to select the best 5 billion of the 20 billion pages on the Web, then using fRank to order the index and affect search relevancy.

Another interesting direction for exploration is to incorporate fRank and page-level features directly into the PageRank computation itself. Work on biasing the PageRank jump vector [16] and transition matrix [29] has demonstrated the feasibility and advantages of such an approach. There is reason to believe that a direct application of [29], using the fRank of a page for its relevance, could lead to an improved overall static rank.

Finally, the popularity data can be used in other interesting ways. The general surfing and searching habits of Web users vary by time of day. Activity in the morning, daytime, and evening is often quite different (e.g., reading the news, solving problems, and accessing entertainment, respectively). We can gain insight into these differences by using the popularity data, divided into segments of the day. When a query is issued, we would then use the popularity data matching the time of the query in order to do the ranking of Web pages. We also plan to explore popularity features that use more
than just the counts of how often a page was visited.\nFor example, how long users tended to dwell on a page, did they leave the page by clicking a link or by hitting the back button, etc..\nFox et al. did a study that showed that features such as this can be valuable for the purposes of dynamic ranking [14].\nFinally, the popularity data could be used as the label rather than as a feature.\nUsing fRank in this way to predict the popularity of a page may useful for the tasks of relevance, efficiency, and crawl priority.\nThere is also significantly more popularity data than human labeled data, potentially enabling more complex machine learning methods, and significantly more features.\n7.\nCONCLUSIONS A good static ranking is an important component for today``s search engines and information retrieval systems.\nWe have demonstrated that PageRank does not provide a very good static ranking; there are many simple features that individually out perform PageRank.\nBy combining many static features, fRank achieves a ranking that has a significantly higher pairwise accuracy than PageRank alone.\nA qualitative evaluation of the top documents shows that fRank is less technology-biased than PageRank; by using popularity data, it is biased toward pages that Web users, rather than Web authors, visit.\nThe machine learning component of fRank gives it the additional benefit of being more robust against spammers, and allows it to leverage further developments in the machine learning community in areas such as adversarial classification.\nWe have only begun to explore the options, and believe that significant strides can be made in the area of static ranking by further experimentation with additional features, other machine learning techniques, and additional sources of data.\n8.\nACKNOWLEDGMENTS Thank you to Marc Najork for providing us with additional PageRank computations and to Timo Burkard for assistance with the popularity data.\nMany thanks to Chris Burges for providing 
code and significant support in using and training RankNets. Also, we thank Susan Dumais and Nick Craswell for their edits and suggestions.

9. REFERENCES

[1] B. Amento, L. Terveen, and W. Hill. Does authority mean quality? Predicting expert quality ratings of Web documents. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2000.
[2] B. Bartell, G. Cottrell, and R. Belew. Automatic combination of multiple ranked retrieval systems. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 1994.
[3] P. Boldi, M. Santini, and S. Vigna. PageRank as a function of the damping factor. In Proceedings of the International World Wide Web Conference, May 2005.
[4] J. Boyan, D. Freitag, and T. Joachims. A machine learning architecture for optimizing web search engines. In AAAI Workshop on Internet Based Information Systems, August 1996.
[5] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In Proceedings of the Seventh International World Wide Web Conference, Brisbane, Australia, 1998. Elsevier.
[6] A. Broder, R. Lempel, F. Maghoul, and J. Pedersen. Efficient PageRank approximation via graph aggregation. In Proceedings of the International World Wide Web Conference, May 2004.
[7] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In Proceedings of the 22nd International Conference on Machine Learning, Bonn, Germany, 2005.
[8] D. Carmel, D. Cohen, R. Fagin, E. Farchi, M. Herscovici, Y. S. Maarek, and A. Soffer. Static index pruning for information retrieval systems. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 43-50, New Orleans, Louisiana, USA, September 2001.
[9] J. Cho and S. Roy. Impact of search engines on page popularity. In Proceedings of the International World Wide Web Conference, May 2004.
[10] J. Cho, S. Roy, and R. Adams. Page quality: In search of an unbiased web ranking. In Proceedings of the ACM SIGMOD 2005 Conference, Baltimore, Maryland, June 2005.
[11] N. Craswell, S. Robertson, H. Zaragoza, and M. Taylor. Relevance weighting for query independent evidence. In Proceedings of the 28th Annual Conference on Research and Development in Information Retrieval (SIGIR), August 2005.
[12] N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma. Adversarial classification. In Proceedings of the Tenth International Conference on Knowledge Discovery and Data Mining, pages 99-108, Seattle, WA, 2004.
[13] O. Dekel, C. Manning, and Y. Singer. Log-linear models for label-ranking. In Advances in Neural Information Processing Systems 16. Cambridge, MA: MIT Press, 2003.
[14] S. Fox, K. Karnawat, M. Mydland, S. T. Dumais, and T. White. Evaluating implicit measures to improve web search. ACM Transactions on Information Systems, 23(2):147-168, April 2005.
[15] T. Haveliwala. Efficient computation of PageRank. Stanford University Technical Report, 1999.
[16] T. Haveliwala. Topic-sensitive PageRank. In Proceedings of the International World Wide Web Conference, May 2002.
[17] D. Hawking and N. Craswell. Very large scale retrieval and Web search. In D. Harman and E. Voorhees (eds.), The TREC Book. MIT Press.
[18] R. Herbrich, T. Graepel, and K. Obermayer. Support vector learning for ordinal regression. In Proceedings of the Ninth International Conference on Artificial Neural Networks, pages 97-102, 1999.
[19] M. Ivory and M. Hearst. Statistical profiles of highly-rated Web sites. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 2002.
[20] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), 2002.
[21] T. Joachims, L. Granka, B. Pang, H. Hembrooke, and G. Gay. Accurately interpreting clickthrough data as implicit feedback. In Proceedings of the Conference on Research and Development in Information Retrieval (SIGIR), 2005.
[22] J. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46(5):604-632, 1999.
[23] A. Langville and C. Meyer. Deeper inside PageRank. Internet Mathematics, 1(3):335-380, 2004.
[24] F. Matthieu and M. Bouklit. The effect of the back button in a random walk: application for PageRank. In Alternate track papers and posters of the Thirteenth International World Wide Web Conference, 2004.
[25] F. McSherry. A uniform approach to accelerated PageRank computation. In Proceedings of the International World Wide Web Conference, May 2005.
[26] Y. Minamide. Static approximation of dynamically generated Web pages. In Proceedings of the International World Wide Web Conference, May 2005.
[27] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford University, Stanford, CA, 1998.
[28] S. Pandey and C. Olston. User-centric Web crawling. In Proceedings of the International World Wide Web Conference, May 2005.
[29] M. Richardson and P. Domingos. The intelligent surfer: probabilistic combination of link and content information in PageRank. In Advances in Neural Information Processing Systems 14, pages 1441-1448. Cambridge, MA: MIT Press, 2002.
[30] C. Sherman. Teoma vs. Google, Round 2. Available from the World Wide Web (http://dc.internet.com/news/article.php/1002061), 2002.
[31] T. Upstill, N. Craswell, and D. Hawking. Predicting fame and fortune: PageRank or indegree? In the Eighth Australasian Document Computing Symposium, 2003.
[32] T. Upstill, N. Craswell, and D. Hawking. Query-independent evidence in home page finding. ACM Transactions on Information Systems, 2003.
index is ordered by static rank.\nBy traversing the index from highquality to low-quality pages, the dynamic ranker may abort the search when it determines that no later page will have as high of a dynamic rank as those already found.\nThe more accurate the static rank, the better this early-stopping ability, and hence the quicker the search engine may respond to queries.\n\u2022 Crawl Priority: The Web grows and changes as quickly\nas search engines can crawl it.\nSearch engines need a way to prioritize their crawl--to determine which pages to recrawl, how frequently, and how often to seek out new pages.\nAmong other factors, the static rank of a page is used to determine this prioritization.\nA better static rank thus provides the engine with a higher quality, more upto-date index.\nGoogle is often regarded as the first commercially successful search engine.\nTheir ranking was originally based on the PageRank algorithm [5] [27].\nDue to this (and possibly due to Google's promotion of PageRank to the public), PageRank is widely regarded as the best method for the static ranking of Web pages.\nThough PageRank has historically been thought to perform quite well, there has yet been little academic evidence to support this claim.\nEven worse, there has recently been work showing that PageRank may not perform any better than other simple measures on certain tasks.\nUpstill et al. have found that for the task of finding home pages, the number of pages linking to a page and the type of URL were as, or more, effective than PageRank [32].\nThey found similar results for the task of finding high quality companies [31].\nPageRank has also been used in systems for TREC's \"very large collection\" and \"Web track\" competitions, but with much less success than had been expected [17].\nFinally, Amento et al. 
[1] found that simple features, such as the number of pages on a site, performed as well as PageRank.\nDespite these, the general belief remains among many, both academic and in the public, that PageRank is an essential factor for a good static rank.\nFailing this, it is still assumed that using the link structure is crucial, in the form of the number of inlinks or the amount of anchor text.\nIn this paper, we show there are a number of simple url - or pagebased features that significantly outperform PageRank (for the purposes of statically ranking Web pages) despite ignoring the\nstructure of the Web.\nWe combine these and other static features using machine learning to achieve a ranking system that is significantly better than PageRank (in pairwise agreement with human labels).\nA machine learning approach for static ranking has other advantages besides the quality of the ranking.\nBecause the measure consists of many features, it is harder for malicious users to manipulate it (i.e., to raise their page's static rank to an undeserved level through questionable techniques, also known as Web spamming).\nThis is particularly true if the feature set is not known.\nIn contrast, a single measure like PageRank can be easier to manipulate because spammers need only concentrate on one goal: how to cause more pages to point to their page.\nWith an algorithm that learns, a feature that becomes unusable due to spammer manipulation will simply be reduced or removed from the final computation of rank.\nThis flexibility allows a ranking system to rapidly react to new spamming techniques.\nA machine learning approach to static ranking is also able to take advantage of any advances in the machine learning field.\nFor example, recent work on adversarial classification [12] suggests that it may be possible to explicitly model the Web page spammer's (the adversary) actions, adjusting the ranking model in advance of the spammer's attempts to circumvent it.\nAnother example is the 
elimination of outliers in constructing the model, which helps reduce the effect that unique sites may have on the overall quality of the static rank.\nBy moving static ranking to a machine learning framework, we not only gain in accuracy, but also gain in the ability to react to spammer's actions, to rapidly add new features to the ranking algorithm, and to leverage advances in the rapidly growing field of machine learning.\nFinally, we believe there will be significant advantages to using this technique for other domains, such as searching a local hard drive or a corporation's intranet.\nThese are domains where the link structure is particularly weak (or non-existent), but there are other domain-specific features that could be just as powerful.\nFor example, the author of an intranet page and his\/her position in the organization (e.g., CEO, manager, or developer) could provide significant clues as to the importance of that page.\nA machine learning approach thus allows rapid development of a good static algorithm in new domains.\nThis paper's contribution is a systematic study of static features, including PageRank, for the purposes of (statically) ranking Web pages.\nPrevious studies on PageRank typically used subsets of the Web that are significantly smaller (e.g., the TREC VLC2 corpus, used by many, contains only 19 million pages).\nAlso, the performance of PageRank and other static features has typically been evaluated in the context of a complete system for dynamic ranking, or for other tasks such as question answering.\nIn contrast, we explore the use of PageRank and other features for the direct task of statically ranking Web pages.\nWe first briefly describe the PageRank algorithm.\nIn Section 3 we introduce RankNet, the machine learning technique used to combine static features into a final ranking.\nSection 4 describes the static features.\nThe heart of the paper is in Section 5, which presents our experiments and results.\nWe conclude with a 
discussion of related and future work.\n2.\nPAGERANK\n3.\nRANKNET\n4.\nFEATURES\n5.\nEXPERIMENTS\n5.1 Data\n5.2 Measure\n5.3 Method\n5.4 Results\n5.5 Popularity Data\n5.6 Summary of Results\n6.\nRELATED AND FUTURE WORK\n6.1 Improvements to PageRank\nSince the original PageRank paper, there has been work on improving it.\nMuch of that work centers on speeding up and parallelizing the computation [15] [25].\nOne recognized problem with PageRank is that of topic drift: A page about \"dogs\" will have high PageRank if it is linked to by many pages that themselves have high rank, regardless of their topic.\nIn contrast, a search engine user looking for good pages about dogs would likely prefer to find pages that are pointed to by many pages that are themselves about dogs.\nHence, a link that is \"on topic\" should have higher weight than a link that is not.\nRichardson and Domingos's Query Dependent PageRank [29] and Haveliwala's Topic-Sensitive PageRank [16] are two approaches that tackle this problem.\nOther variations to PageRank include differently weighting links for inter - vs. 
intra-domain links, adding a backwards step to the random surfer to simulate the \"back\" button on most browsers [24] and modifying the jump probability (\u03b1) [3].\nSee Langville and Meyer [23] for a good survey of these, and other modifications to PageRank.\n6.2 Other related work\nPageRank is not the only link analysis algorithm used for ranking Web pages.\nThe most well-known other is HITS [22], which is used by the Teoma search engine [30].\nHITS produces a list of hubs and authorities, where hubs are pages that point to many\nFigure 1: Relation between the amount of popularity data and the performance of the popularity feature set.\nNote the x-axis is a logarithmic scale.\nauthority pages, and authorities are pages that are pointed to by many hubs.\nPrevious work has shown HITS to perform comparably to PageRank [1].\nOne field of interest is that of static index pruning (see e.g., Carmel et al. [8]).\nStatic index pruning methods reduce the size of the search engine's index by removing documents that are unlikely to be returned by a search query.\nThe pruning is typically done based on the frequency of query terms.\nSimilarly, Pandey and Olston [28] suggest crawling pages frequently if they are likely to incorrectly appear (or not appear) as a result of a search.\nSimilar methods could be incorporated into the static rank (e.g., how many frequent queries contain words found on this page).\nOthers have investigated the effect that PageRank has on the Web at large [9].\nThey argue that pages with high PageRank are more likely to be found by Web users, thus more likely to be linked to, and thus more likely to maintain a higher PageRank than other pages.\nThe same may occur for the popularity data.\nIf we increase the ranking for popular pages, they are more likely to be clicked on, thus further increasing their popularity.\nCho et al. 
[10] argue that a more appropriate measure of Web page quality would depend on not only the current link structure of the Web, but also on the change in that link structure.\nThe same technique may be applicable to popularity data: the change in popularity of a page may be more informative than the absolute popularity.\nOne interesting related work is that of Ivory and Hearst [19].\nTheir goal was to build a model of Web sites that are considered high quality from the perspective of \"content, structure and navigation, visual design, functionality, interactivity, and overall experience\".\nThey used over 100 page level features, as well as features encompassing the performance and structure of the site.\nThis let them qualitatively describe the qualities of a page that make it appear attractive (e.g., rare use of italics, at least 9 point font, ...), and (in later work) to build a system that assists novel Web page authors in creating quality pages by evaluating it according to these features.\nThe primary differences between this work and ours are the goal (discovering what constitutes a good Web page vs. ordering Web pages for the purposes of Web search), the size of the study (they used a dataset of less than 6000 pages vs. our set of 468,000), and our comparison with PageRank.\nNevertheless, their work provides insights to additional useful static features that we could incorporate into fRank in the future.\nRecent work on incorporating novel features into dynamic ranking includes that by Joachims et al. [21], who investigate the use of implicit feedback from users, in the form of which search engine results are clicked on.\nCraswell et al. [11] present a method for determining the best transformation to apply to query independent features (such as those used in this paper) for the purposes of improving dynamic ranking.\nOther work, such as Boyan et al. [4] and Bartell et al. 
[2] apply machine learning for the purposes of improving the overall relevance of a search engine (i.e., the dynamic ranking).\nThey do not apply their techniques to the problem of static ranking.\n6.3 Future work\nThere are many ways in which we would like to extend this work.\nFirst, fRank uses only a small number of features.\nWe believe we could achieve even more significant results with more features.\nIn particular the existence, or lack thereof, of certain words could prove very significant (for instance, \"under construction\" probably signifies a low quality page).\nOther features could include the number of images on a page, size of those images, number of layout elements (tables, divs, and spans), use of style sheets, conforming to W3C standards (like XHTML 1.0 Strict), background color of a page, etc. .\nMany pages are generated dynamically, the contents of which may depend on parameters in the URL, the time of day, the user visiting the site, or other variables.\nFor such pages, it may be useful to apply the techniques found in [26] to form a static approximation for the purposes of extracting features.\nThe resulting grammar describing the page could itself be a source of additional features describing the complexity of the page, such as how many non-terminal nodes it has, the depth of the grammar tree, etc. 
fRank allows one to specify a confidence in each pairing of documents.\nIn the future, we will experiment with probabilities that depend on the difference in human judgments between the two items in the pair.\nFor example, a pair of documents where one was rated 4 and the other 0 should have a higher confidence than a pair of documents rated 3 and 2.\nThe experiments in this paper are biased toward pages that have higher than average quality.\nAlso, fRank with all of the features can only be applied to pages that have already been crawled.\nThus, fRank is primarily useful for index ordering and improving relevance, not for directing the crawl.\nWe would like to investigate a machine learning approach for crawl prioritization as well.\nIt may be that a combination of methods is best: for example, using PageRank to select the best 5 billion of the 20 billion pages on the Web, then using fRank to order the index and affect search relevancy.\nAnother interesting direction for exploration is to incorporate fRank and page-level features directly into the PageRank computation itself.\nWork on biasing the PageRank jump vector [16], and transition matrix [29], have demonstrated the feasibility and advantages of such an approach.\nThere is reason to believe that a direct application of [29], using the fRank of a page for its \"relevance\", could lead to an improved overall static rank.\nFinally, the popularity data can be used in other interesting ways.\nThe general surfing and searching habits of Web users varies by time of day.\nActivity in the morning, daytime, and evening are often quite different (e.g., reading the news, solving problems, and accessing entertainment, respectively).\nWe can gain insight into these differences by using the popularity data, divided into segments of the day.\nWhen a query is issued, we would then use the popularity data matching the time of query in order to do the ranking of Web pages.\nWe also plan to explore popularity features that use 
more than just the counts of how often a page was visited.\nFor example, how long users tended to dwell on a page, did they leave the page by clicking a link or by hitting the back button, etc. .\nFox et al. did a study that showed that features such as this can be valuable for the purposes of dynamic ranking [14].\nFinally, the popularity data could be used as the label rather than as a feature.\nUsing fRank in this way to predict the popularity of a page may useful for the tasks of relevance, efficiency, and crawl priority.\nThere is also significantly more popularity data than human labeled data, potentially enabling more complex machine learning methods, and significantly more features.\n7.\nCONCLUSIONS\nA good static ranking is an important component for today's search engines and information retrieval systems.\nWe have demonstrated that PageRank does not provide a very good static ranking; there are many simple features that individually out perform PageRank.\nBy combining many static features, fRank achieves a ranking that has a significantly higher pairwise accuracy than PageRank alone.\nA qualitative evaluation of the top documents shows that fRank is less technology-biased than PageRank; by using popularity data, it is biased toward pages that Web users, rather than Web authors, visit.\nThe machine learning component of fRank gives it the additional benefit of being more robust against spammers, and allows it to leverage further developments in the machine learning community in areas such as adversarial classification.\nWe have only begun to explore the options, and believe that significant strides can be made in the area of static ranking by further experimentation with additional features, other machine learning techniques, and additional sources of data.","lvl-4":"Beyond PageRank: Machine Learning for Static Ranking\nABSTRACT\nSince the publication of Brin and Page's paper on PageRank, many in the Web community have depended on PageRank for the static 
(query-independent) ordering of Web pages.\nWe show that we can significantly outperform PageRank using features that are independent of the link structure of the Web.\nWe gain a further boost in accuracy by using data on the frequency at which users visit Web pages.\nWe use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics.\nThe resulting model achieves a static ranking pairwise accuracy of 67.3% (vs. 56.7% for PageRank or 50% for random).\n1.\nINTRODUCTION\nUnfortunately, this growth has not been isolated to good-quality pages.\nThe number of incorrect, spamming, and malicious (e.g., phishing) sites has also grown rapidly.\nThe sheer number of both good and bad pages on the Web has led to an increasing reliance on search engines for the discovery of useful information.\nUsers rely on search engines not only to return pages related to their search query, but also to separate the good from the bad, and order results so that the best pages are suggested first.\nTo date, most work on Web page ranking has focused on improving the ordering of the results returned to the user (querydependent ranking, or dynamic ranking).\nHowever, having a good query-independent ranking (static ranking) is also crucially important for a search engine.\nA good static ranking algorithm provides numerous benefits:\n\u2022 Relevance: The static rank of a page provides a general indicator to the overall quality of the page.\nThis is a useful input to the dynamic ranking algorithm.\n\u2022 Efficiency: Typically, the search engine's index is ordered by static rank.\nBy traversing the index from highquality to low-quality pages, the dynamic ranker may abort the search when it determines that no later page will have as high of a dynamic rank as those already found.\nThe more accurate the static rank, the better this early-stopping ability, and hence the quicker the search engine may respond to 
queries.\n\u2022 Crawl Priority: The Web grows and changes as quickly\nas search engines can crawl it.\nSearch engines need a way to prioritize their crawl--to determine which pages to recrawl, how frequently, and how often to seek out new pages.\nAmong other factors, the static rank of a page is used to determine this prioritization.\nA better static rank thus provides the engine with a higher quality, more upto-date index.\nGoogle is often regarded as the first commercially successful search engine.\nTheir ranking was originally based on the PageRank algorithm [5] [27].\nDue to this (and possibly due to Google's promotion of PageRank to the public), PageRank is widely regarded as the best method for the static ranking of Web pages.\nThough PageRank has historically been thought to perform quite well, there has yet been little academic evidence to support this claim.\nEven worse, there has recently been work showing that PageRank may not perform any better than other simple measures on certain tasks.\nUpstill et al. have found that for the task of finding home pages, the number of pages linking to a page and the type of URL were as, or more, effective than PageRank [32].\nThey found similar results for the task of finding high quality companies [31].\nPageRank has also been used in systems for TREC's \"very large collection\" and \"Web track\" competitions, but with much less success than had been expected [17].\nFinally, Amento et al. 
[1] found that simple features, such as the number of pages on a site, performed as well as PageRank.\nDespite these, the general belief remains among many, both academic and in the public, that PageRank is an essential factor for a good static rank.\nIn this paper, we show there are a number of simple url - or pagebased features that significantly outperform PageRank (for the purposes of statically ranking Web pages) despite ignoring the\nstructure of the Web.\nWe combine these and other static features using machine learning to achieve a ranking system that is significantly better than PageRank (in pairwise agreement with human labels).\nA machine learning approach for static ranking has other advantages besides the quality of the ranking.\nBecause the measure consists of many features, it is harder for malicious users to manipulate it (i.e., to raise their page's static rank to an undeserved level through questionable techniques, also known as Web spamming).\nThis is particularly true if the feature set is not known.\nIn contrast, a single measure like PageRank can be easier to manipulate because spammers need only concentrate on one goal: how to cause more pages to point to their page.\nWith an algorithm that learns, a feature that becomes unusable due to spammer manipulation will simply be reduced or removed from the final computation of rank.\nThis flexibility allows a ranking system to rapidly react to new spamming techniques.\nA machine learning approach to static ranking is also able to take advantage of any advances in the machine learning field.\nAnother example is the elimination of outliers in constructing the model, which helps reduce the effect that unique sites may have on the overall quality of the static rank.\nFinally, we believe there will be significant advantages to using this technique for other domains, such as searching a local hard drive or a corporation's intranet.\nThese are domains where the link structure is particularly weak (or 
non-existent), but there are other domain-specific features that could be just as powerful.\nA machine learning approach thus allows rapid development of a good static algorithm in new domains.\nThis paper's contribution is a systematic study of static features, including PageRank, for the purposes of (statically) ranking Web pages.\nPrevious studies on PageRank typically used subsets of the Web that are significantly smaller (e.g., the TREC VLC2 corpus, used by many, contains only 19 million pages).\nAlso, the performance of PageRank and other static features has typically been evaluated in the context of a complete system for dynamic ranking, or for other tasks such as question answering.\nIn contrast, we explore the use of PageRank and other features for the direct task of statically ranking Web pages.\nWe first briefly describe the PageRank algorithm.\nIn Section 3 we introduce RankNet, the machine learning technique used to combine static features into a final ranking.\nSection 4 describes the static features.\nWe conclude with a discussion of related and future work.\n6.\nRELATED AND FUTURE WORK\n6.1 Improvements to PageRank\nSince the original PageRank paper, there has been work on improving it.\nMuch of that work centers on speeding up and parallelizing the computation [15] [25].\nOne recognized problem with PageRank is that of topic drift: A page about \"dogs\" will have high PageRank if it is linked to by many pages that themselves have high rank, regardless of their topic.\nIn contrast, a search engine user looking for good pages about dogs would likely prefer to find pages that are pointed to by many pages that are themselves about dogs.\nRichardson and Domingos's Query Dependent PageRank [29] and Haveliwala's Topic-Sensitive PageRank [16] are two approaches that tackle this problem.\nSee Langville and Meyer [23] for a good survey of these, and other modifications to PageRank.\n6.2 Other related work\nPageRank is not the only link analysis algorithm 
used for ranking Web pages. The most well-known other is HITS [22], which is used by the Teoma search engine [30]. HITS produces a list of hubs and authorities, where hubs are pages that point to many authority pages, and authorities are pages that are pointed to by many hubs. Previous work has shown HITS to perform comparably to PageRank [1].

[Figure 1: Relation between the amount of popularity data and the performance of the popularity feature set.]

One field of interest is that of static index pruning (see e.g., Carmel et al. [8]). Static index pruning methods reduce the size of the search engine's index by removing documents that are unlikely to be returned by a search query. The pruning is typically done based on the frequency of query terms. Similarly, Pandey and Olston [28] suggest crawling pages frequently if they are likely to incorrectly appear (or not appear) as a result of a search. Similar methods could be incorporated into the static rank (e.g., how many frequent queries contain words found on this page). Others have investigated the effect that PageRank has on the Web at large [9]. They argue that pages with high PageRank are more likely to be found by Web users, thus more likely to be linked to, and thus more likely to maintain a higher PageRank than other pages. The same may occur for the popularity data. If we increase the ranking for popular pages, they are more likely to be clicked on, thus further increasing their popularity. Cho et al.
[10] argue that a more appropriate measure of Web page quality would depend not only on the current link structure of the Web, but also on the change in that link structure. The same technique may be applicable to popularity data: the change in popularity of a page may be more informative than the absolute popularity. One interesting related work is that of Ivory and Hearst [19]. They used over 100 page-level features, as well as features encompassing the performance and structure of the site. Their work provides insights into additional useful static features that we could incorporate into fRank in the future. Recent work on incorporating novel features into dynamic ranking includes that by Joachims et al. [21], who investigate the use of implicit feedback from users, in the form of which search engine results are clicked on. Craswell et al. [11] present a method for determining the best transformation to apply to query-independent features (such as those used in this paper) for the purposes of improving dynamic ranking. Other work, such as Boyan et al. [4] and Bartell et al.
[2] apply machine learning for the purposes of improving the overall relevance of a search engine (i.e., the dynamic ranking). They do not apply their techniques to the problem of static ranking.

6.3 Future work

There are many ways in which we would like to extend this work. First, fRank uses only a small number of features. We believe we could achieve even more significant results with more features. Many pages are generated dynamically, the contents of which may depend on parameters in the URL, the time of day, the user visiting the site, or other variables. For such pages, it may be useful to apply the techniques found in [26] to form a static approximation for the purposes of extracting features. The experiments in this paper are biased toward pages that have higher-than-average quality. Also, fRank with all of the features can only be applied to pages that have already been crawled. Thus, fRank is primarily useful for index ordering and improving relevance, not for directing the crawl. We would like to investigate a machine learning approach for crawl prioritization as well. It may be that a combination of methods is best: for example, using PageRank to select the best 5 billion of the 20 billion pages on the Web, then using fRank to order the index and affect search relevancy. Another interesting direction for exploration is to incorporate fRank and page-level features directly into the PageRank computation itself. Work on biasing the PageRank jump vector [16] and transition matrix [29] has demonstrated the feasibility and advantages of such an approach. There is reason to believe that a direct application of [29], using the fRank of a page for its "relevance", could lead to an improved overall static rank. Finally, the popularity data can be used in other interesting ways. The general surfing and searching habits of Web users vary by time of day. We can gain insight into these differences by using the popularity data, divided into
segments of the day. When a query is issued, we would then use the popularity data matching the time of the query in order to do the ranking of Web pages. We also plan to explore popularity features that use more than just the counts of how often a page was visited. For example: how long users tended to dwell on a page, whether they left the page by clicking a link or by hitting the back button, and so on. Fox et al. did a study that showed that features such as these can be valuable for the purposes of dynamic ranking [14]. Finally, the popularity data could be used as the label rather than as a feature. Using fRank in this way to predict the popularity of a page may be useful for the tasks of relevance, efficiency, and crawl priority. There is also significantly more popularity data than human-labeled data, potentially enabling more complex machine learning methods, and significantly more features.

7. CONCLUSIONS

A good static ranking is an important component for today's search engines and information retrieval systems. We have demonstrated that PageRank does not provide a very good static ranking; there are many simple features that individually outperform PageRank. By combining many static features, fRank achieves a ranking that has a significantly higher pairwise accuracy than PageRank alone. A qualitative evaluation of the top documents shows that fRank is less technology-biased than PageRank; by using popularity data, it is biased toward pages that Web users, rather than Web authors, visit. We have only begun to explore the options, and believe that significant strides can be made in the area of static ranking by further experimentation with additional features, other machine learning techniques, and additional sources of data.

Beyond PageRank: Machine Learning for Static Ranking

ABSTRACT

Since the publication of Brin and Page's paper on PageRank, many in the Web community have depended on PageRank for the static (query-independent) ordering of
Web pages. We show that we can significantly outperform PageRank using features that are independent of the link structure of the Web. We gain a further boost in accuracy by using data on the frequency at which users visit Web pages. We use RankNet, a ranking machine learning algorithm, to combine these and other static features based on anchor text and domain characteristics. The resulting model achieves a static ranking pairwise accuracy of 67.3% (vs. 56.7% for PageRank or 50% for random).

1. INTRODUCTION

Over the past decade, the Web has grown exponentially in size. Unfortunately, this growth has not been isolated to good-quality pages. The number of incorrect, spamming, and malicious (e.g., phishing) sites has also grown rapidly. The sheer number of both good and bad pages on the Web has led to an increasing reliance on search engines for the discovery of useful information. Users rely on search engines not only to return pages related to their search query, but also to separate the good from the bad, and order results so that the best pages are suggested first. To date, most work on Web page ranking has focused on improving the ordering of the results returned to the user (query-dependent ranking, or dynamic ranking). However, having a good query-independent ranking (static ranking) is also crucially important for a search engine. A good static ranking algorithm provides numerous benefits:

• Relevance: The static rank of a page provides a general indicator of the overall quality of the page. This is a useful input to the dynamic ranking algorithm.

• Efficiency: Typically, the search engine's index is ordered by static rank. By traversing the index from high-quality to low-quality pages, the dynamic ranker may abort the search when it determines that no later page will have as high a dynamic rank as those already found. The more accurate the static rank, the better this early-stopping ability, and hence the quicker the search
engine may respond to queries.

• Crawl Priority: The Web grows and changes as quickly as search engines can crawl it. Search engines need a way to prioritize their crawl: to determine which pages to recrawl, how frequently, and how often to seek out new pages. Among other factors, the static rank of a page is used to determine this prioritization. A better static rank thus provides the engine with a higher-quality, more up-to-date index.

Google is often regarded as the first commercially successful search engine. Their ranking was originally based on the PageRank algorithm [5] [27]. Due to this (and possibly due to Google's promotion of PageRank to the public), PageRank is widely regarded as the best method for the static ranking of Web pages. Though PageRank has historically been thought to perform quite well, there has so far been little academic evidence to support this claim. Even worse, there has recently been work showing that PageRank may not perform any better than other simple measures on certain tasks. Upstill et al. have found that for the task of finding home pages, the number of pages linking to a page and the type of URL were as, or more, effective than PageRank [32]. They found similar results for the task of finding high-quality companies [31]. PageRank has also been used in systems for TREC's "very large collection" and "Web track" competitions, but with much less success than had been expected [17]. Finally, Amento et al.
[1] found that simple features, such as the number of pages on a site, performed as well as PageRank. Despite these findings, the general belief remains among many, both academic and in the public, that PageRank is an essential factor for a good static rank. Failing this, it is still assumed that using the link structure is crucial, in the form of the number of inlinks or the amount of anchor text. In this paper, we show there are a number of simple URL- or page-based features that significantly outperform PageRank (for the purposes of statically ranking Web pages) despite ignoring the structure of the Web. We combine these and other static features using machine learning to achieve a ranking system that is significantly better than PageRank (in pairwise agreement with human labels). A machine learning approach for static ranking has other advantages besides the quality of the ranking. Because the measure consists of many features, it is harder for malicious users to manipulate it (i.e., to raise their page's static rank to an undeserved level through questionable techniques, also known as Web spamming). This is particularly true if the feature set is not known. In contrast, a single measure like PageRank can be easier to manipulate because spammers need only concentrate on one goal: how to cause more pages to point to their page. With an algorithm that learns, a feature that becomes unusable due to spammer manipulation will simply be reduced or removed from the final computation of rank. This flexibility allows a ranking system to rapidly react to new spamming techniques. A machine learning approach to static ranking is also able to take advantage of any advances in the machine learning field. For example, recent work on adversarial classification [12] suggests that it may be possible to explicitly model the actions of the Web page spammer (the adversary), adjusting the ranking model in advance of the spammer's attempts to circumvent it. Another example is the
elimination of outliers in constructing the model, which helps reduce the effect that unique sites may have on the overall quality of the static rank. By moving static ranking to a machine learning framework, we not only gain in accuracy, but also gain in the ability to react to spammers' actions, to rapidly add new features to the ranking algorithm, and to leverage advances in the rapidly growing field of machine learning. Finally, we believe there will be significant advantages to using this technique for other domains, such as searching a local hard drive or a corporation's intranet. These are domains where the link structure is particularly weak (or non-existent), but there are other domain-specific features that could be just as powerful. For example, the author of an intranet page and his or her position in the organization (e.g., CEO, manager, or developer) could provide significant clues as to the importance of that page. A machine learning approach thus allows rapid development of a good static algorithm in new domains. This paper's contribution is a systematic study of static features, including PageRank, for the purposes of (statically) ranking Web pages. Previous studies on PageRank typically used subsets of the Web that are significantly smaller (e.g., the TREC VLC2 corpus, used by many, contains only 19 million pages). Also, the performance of PageRank and other static features has typically been evaluated in the context of a complete system for dynamic ranking, or for other tasks such as question answering. In contrast, we explore the use of PageRank and other features for the direct task of statically ranking Web pages. We first briefly describe the PageRank algorithm. In Section 3 we introduce RankNet, the machine learning technique used to combine static features into a final ranking. Section 4 describes the static features. The heart of the paper is in Section 5, which presents our experiments and results. We conclude with a
discussion of related and future work.

2. PAGERANK

The basic idea behind PageRank is simple: a link from a Web page to another can be seen as an endorsement of that page. In general, links are made by people. As such, they are indicative of the quality of the pages to which they point: when creating a page, an author presumably chooses to link to pages deemed to be of good quality. We can take advantage of this linkage information to order Web pages according to their perceived quality. Imagine a Web surfer who jumps from Web page to Web page, choosing with uniform probability which link to follow at each step. In order to reduce the effect of dead-ends or endless cycles, the surfer will occasionally jump to a random page with some small probability α, or when on a page with no out-links. If averaged over a sufficient number of steps, the probability the surfer is on page j at some point in time is given by the formula:

P(j) = α/N + (1 − α) Σ_{i ∈ B_j} P(i)/|F_i|    (1)

where N is the total number of pages, F_i is the set of pages that page i links to, and B_j is the set of pages that link to page j. The PageRank score for node j is defined as this probability: PR(j) = P(j). Because equation (1) is recursive, it must be iteratively evaluated until P(j) converges (typically, the initial distribution for P(j) is uniform). The intuition is, because a random surfer would end up at the page more frequently, it is likely a better page. An alternative view for equation (1) is that each page is assigned a quality, P(j). A page "gives" an equal share of its quality to each page it points to. PageRank is computationally expensive. Our collection of 5 billion pages contains approximately 370 billion links. Computing PageRank requires iterating over these billions of links multiple times (until convergence). It requires large amounts of memory (or very smart caching schemes that slow the computation down even further), and if spread across multiple machines, requires significant communication between them. Though much work has
been done on optimizing the PageRank computation (see e.g., [25] and [6]), it remains a relatively slow, computationally expensive property to compute.

3. RANKNET

Much work in machine learning has been done on the problems of classification and regression. Let X = {x_i} be a collection of feature vectors (typically, a feature is any real-valued number), and Y = {y_i} be a collection of associated classes, where y_i is the class of the object described by feature vector x_i. The classification problem is to learn a function f that maps y_i = f(x_i), for all i. When y_i is real-valued as well, this is called regression. Static ranking can be seen as a regression problem. If we let x_i represent features of page i, and y_i be a value (say, the rank) for each page, we could learn a regression function that mapped each page's features to their rank. However, this over-constrains the problem we wish to solve. All we really care about is the order of the pages, not the actual value assigned to them. Recent work on this ranking problem [7] [13] [18] directly attempts to optimize the ordering of the objects, rather than the value assigned to them. For these, let Z = {<i, j>} be a collection of pairs of items, where item i should be assigned a higher value than item j. The goal of the ranking problem, then, is to learn a function f such that

f(x_i) > f(x_j), for all <i, j> ∈ Z    (2)

Note that, as with learning a regression function, the result of this process is a function (f) that maps feature vectors to real values. This function can still be applied anywhere that a regression-learned function could be applied. The only difference is the technique used to learn the function. By directly optimizing the ordering of objects, these methods are able to learn a function that does a better job of ranking than do regression techniques. We used RankNet [7], one of the aforementioned techniques for learning ranking functions, to learn our static rank function. RankNet is a straightforward modification to the
standard neural network back-prop algorithm. As with back-prop, RankNet attempts to minimize the value of a cost function by adjusting each weight in the network according to the gradient of the cost function with respect to that weight. The difference is that, while a typical neural network cost function is based on the difference between the network output and the desired output, the RankNet cost function is based on the difference between a pair of network outputs. That is, for each pair of feature vectors <i, j> in the training set, RankNet computes the network outputs o_i and o_j. Since vector i is supposed to be ranked higher than vector j, the larger o_j − o_i is, the larger the cost. RankNet also allows the pairs in Z to be weighted with a confidence (posed as the probability that the pair satisfies the ordering induced by the ranking function). In this paper, we used a probability of one for all pairs. In the next section, we will discuss the features used in our feature vectors, x_i.

4. FEATURES

To apply RankNet (or other machine learning techniques) to the ranking problem, we needed to extract a set of features from each page. We divided our feature set into four mutually exclusive categories: page-level (Page), domain-level (Domain), anchor text and inlinks (Anchor), and popularity (Popularity). We also optionally used the PageRank of a page as a feature. Below, we describe each of these feature categories in more detail.

PageRank
We computed PageRank on a Web graph of 5 billion crawled pages (and 20 billion known URLs linked to by these pages). This represents a significant portion of the Web, and is approximately the same number of pages as are used by Google, Yahoo, and MSN for their search engines. Because PageRank is a graph-based algorithm, it is important that it be run on as large a subset of the Web as possible. Most previous studies on PageRank used subsets of the Web that are significantly smaller (e.g.,
the TREC VLC2 corpus, used by many, contains only 19 million pages). We computed PageRank using the standard value of 0.85 for α.

Popularity
Another feature we used is the actual popularity of a Web page, measured as the number of times that it has been visited by users over some period of time. We have access to such data from users who have installed the MSN toolbar and have opted to provide it to MSN. The data is aggregated into a count, for each Web page, of the number of users who viewed that page. Though popularity data is generally unavailable, there are two other sources for it. The first is from proxy logs. For example, a university that requires its students to use a proxy has a record of all the pages they have visited while on campus. Unfortunately, proxy data is quite biased and relatively small. Another source, internal to search engines, is the record of which results users clicked on. Such data was used by the search engine "Direct Hit", and has recently been explored for dynamic ranking purposes [20]. An advantage of the toolbar data over this is that it contains information about URL visits that are not just the result of a search. The raw popularity is processed into a number of features such as the number of times a page was viewed and the number of times any page in the domain was viewed. More details are provided in section 5.5.

Anchor text and inlinks
These features are based on the information associated with links to the page in question. It includes features such as the total amount of text in links pointing to the page ("anchor text"), the number of unique words in that text, and so on.

Page
This category consists of features which may be determined by looking at the page (and its URL) alone. We used only eight simple features such as the number of words in the body, the frequency of the most common term, etc.
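As an illustration, page-level features of this kind can be computed with a few lines of code. This is a hypothetical sketch: the paper does not list its exact eight features, so the names and the specific features below are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def page_features(body_text: str, url: str) -> dict:
    """Illustrative page-level features (hypothetical; the paper's exact
    eight features are not specified here)."""
    words = body_text.lower().split()
    counts = Counter(words)
    most_common = counts.most_common(1)[0][1] if words else 0
    return {
        "num_words": len(words),                 # number of words in the body
        "num_unique_words": len(counts),         # vocabulary size
        # frequency of the most common term, normalized by body length
        "freq_most_common_term": most_common / max(len(words), 1),
        "url_length": len(url),                  # simple URL-based features
        "url_depth": url.count("/"),
    }
```

Features like these require no crawl of the link graph, which is what makes the Page category so cheap to compute relative to PageRank.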
Domain
This category contains features that are computed as averages across all pages in the domain, for example, the average number of outlinks on any page and the average PageRank.

Many of these features have been used by others for ranking Web pages, particularly the anchor and page features. As mentioned, the evaluation is typically for dynamic ranking, and we wish to evaluate the use of them for static ranking. Also, to our knowledge, this is the first study on the use of actual page visitation popularity for static ranking. The closest similar work is on using click-through behavior (that is, which search engine results the users click on) to affect dynamic ranking (see e.g., [20]). Because we use a wide variety of features to come up with a static ranking, we refer to this as fRank (for feature-based ranking). fRank uses RankNet and the set of features described in this section to learn a ranking function for Web pages. Unless otherwise specified, fRank was trained with all of the features.

5. EXPERIMENTS

In this section, we will demonstrate that we can outperform PageRank by applying machine learning to a straightforward set of features. Before the results, we first discuss the data, the performance metric, and the training method.

5.1 Data

In order to evaluate the quality of a static ranking, we needed a "gold standard" defining the correct ordering for a set of pages. For this, we employed a dataset which contains human judgments for 28,000 queries. For each query, a number of results are manually assigned a rating, from 0 to 4, by human judges. The rating is meant to be a measure of how relevant the result is for the query, where 0 means "poor" and 4 means "excellent". There are approximately 500k judgments in all, or an average of 18 ratings per query. The queries are selected by randomly choosing queries from among those issued to the MSN search engine. The probability that a query is selected is proportional to its
frequency among all of the queries. As a result, common queries are more likely to be judged than uncommon queries. As an example of how diverse the queries are, the first four queries in the training set are "chef schools", "chicagoland speedway", "eagles fan club", and "Turkish culture". The documents selected for judging are those that we expected would, on average, be reasonably relevant (for example, the top ten documents returned by MSN's search engine). This provides significantly more information than randomly selecting documents on the Web, the vast majority of which would be irrelevant to a given query. Because of this process, the judged pages tend to be of higher quality than the average page on the Web, and tend to be pages that will be returned for common search queries. This bias is good when evaluating the quality of static ranking for the purposes of index ordering and returning relevant documents. This is because the most important portion of the index to be well-ordered and relevant is the portion that is frequently returned for search queries. Because of this bias, however, the results in this paper are not applicable to crawl prioritization. In order to obtain experimental results on crawl prioritization, we would need ratings on a random sample of Web pages. To convert the data from query-dependent to query-independent, we simply removed the query, taking the maximum over judgments for a URL that appears in more than one query. The reasoning behind this is that a page that is relevant for some query and irrelevant for another is probably a decent page and should have a high static rank. Because we evaluated the pages on queries that occur frequently, our data indicates the correct index ordering, and assigns high value to pages that are likely to be relevant to a common query. We randomly assigned queries to a training, validation, or test set, such that they contained 84%, 8%, and 8% of the queries, respectively. Each
set contains all of the ratings for a given query, and no query appears in more than one set. The training set was used to train fRank. The validation set was used to select the model that had the highest performance. The test set was used for the final results. This data gives us a query-independent ordering of pages. The goal for a static ranking algorithm will be to reproduce this ordering as closely as possible. In the next section, we describe the measure we used to evaluate this.

5.2 Measure

We chose to use pairwise accuracy to evaluate the quality of a static ranking. The pairwise accuracy is the fraction of time that the ranking algorithm and human judges agree on the ordering of a pair of Web pages. If S(x) is the static ranking assigned to page x, and H(x) is the human judgment of relevance for x, then consider the following sets:

Hp = {(x, y) : H(x) > H(y)}  and  Sp = {(x, y) : S(x) > S(y)}

The pairwise accuracy is the portion of Hp that is also contained in Sp:

pairwise accuracy = |Hp ∩ Sp| / |Hp|

This measure was chosen for two reasons. First, the discrete human judgments provide only a partial ordering over Web pages, making it difficult to apply a measure such as the Spearman rank order correlation coefficient (in the pairwise accuracy measure, a pair of documents with the same human judgment does not affect the score). Second, the pairwise accuracy has an intuitive meaning: it is the fraction of pairs of documents that, when the humans claim one is better than the other, the static rank algorithm orders them correctly.

5.3 Method

We trained fRank (a RankNet-based neural network) using the following parameters. We used a fully connected two-layer network. The hidden layer had 10 hidden nodes. The input weights to this layer were all initialized to be zero. The output "layer" (just a single node) weights were initialized using a uniform random distribution in the range [-0.1, 0.1]. We used tanh as the transfer function from the inputs to the hidden layer, and a linear function from the hidden layer to the
output. The cost function is the pairwise cross entropy cost function as discussed in section 3. The features in the training set were normalized to have zero mean and unit standard deviation. The same linear transformation was then applied to the features in the validation and test sets. For training, we presented the network with 5 million pairings of pages, where one page had a higher rating than the other. The pairings were chosen uniformly at random (with replacement) from all possible pairings. When forming the pairs, we ignored the magnitude of the difference between the ratings (the rating spread) for the two URLs. Hence, the weight for each pair was constant (one), and the probability of a pair being selected was independent of its rating spread. We trained the network for 30 epochs. On each epoch, the training pairs were randomly shuffled. The initial training rate was 0.001. At each epoch, we checked the error on the training set. If the error had increased, then we decreased the training rate, under the hypothesis that the network had probably overshot. The training rate at each epoch was thus set to:

training rate = κ / (ε + 1)

where κ is the initial rate (0.001), and ε is the number of times the training set error has increased. After each epoch, we measured the performance of the neural network on the validation set, using 1 million pairs (chosen randomly with replacement). The network with the highest pairwise accuracy on the validation set was selected, and then tested on the test set. We report the pairwise accuracy on the test set, calculated using all possible pairs. These parameters were determined and fixed before the static rank experiments in this paper. In particular, the choice of initial training rate, number of epochs, and training rate decay function were taken directly from Burges et al. [7]. Though we had the option of preprocessing any of the features before they were input to the neural network, we
refrained from doing so on most of them. The only exception was the popularity features. As with most Web phenomena, we found that the distribution of site popularity is Zipfian. To reduce the dynamic range, and hopefully make the feature more useful, we presented the network with both the unpreprocessed values and the logarithm of the popularity features (as with the others, the logarithmic feature values were also normalized to have zero mean and unit standard deviation). Applying fRank to a document is computationally efficient, taking time that is only linear in the number of input features; it is thus within a constant factor of other simple machine learning methods such as naïve Bayes. In our experiments, computing the fRank for all five billion Web pages was approximately 100 times faster than computing the PageRank for the same set.

5.4 Results

As Table 1 shows, fRank significantly outperforms PageRank for the purposes of static ranking. With a pairwise accuracy of 67.4%, fRank more than doubles the accuracy of PageRank (relative to the baseline of 50%, which is the accuracy that would be achieved by a random ordering of Web pages). Note that one of fRank's input features is the PageRank of the page, so we would expect it to perform no worse than PageRank. The significant increase in accuracy implies that the other features (anchor, popularity, etc.) do in fact contain useful information regarding the overall quality of a page.

Table 1: Basic Results

There are a number of decisions that go into the computation of PageRank, such as how to deal with pages that have no outlinks, the choice of α, numeric precision, convergence threshold, etc.
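The pairwise accuracy figures reported in these experiments follow directly from the definition in Section 5.2. A minimal sketch (with hypothetical score and judgment lists as inputs):

```python
def pairwise_accuracy(static_scores, human_judgments):
    """Fraction of human-ordered pairs (H(x) > H(y)) that the static
    ranking orders the same way (S(x) > S(y)). Pairs with equal human
    judgments do not affect the score, as in the paper's definition."""
    agree, total = 0, 0
    n = len(static_scores)
    for x in range(n):
        for y in range(n):
            if human_judgments[x] > human_judgments[y]:   # pair in Hp
                total += 1
                if static_scores[x] > static_scores[y]:   # also in Sp
                    agree += 1
    return agree / total if total else 0.0
```

Under this measure a random ordering scores 0.5 in expectation, which is why 50% is the natural baseline in Table 1.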
We were able to obtain a computation of PageRank from a completely independent implementation (provided by Marc Najork) that varied somewhat in these parameters. It achieved a pairwise accuracy of 56.52%, nearly identical to that obtained by our implementation. We thus concluded that the quality of the PageRank is not sensitive to these minor variations in algorithm, nor was PageRank's low accuracy due to problems with our implementation of it. We also wanted to find how well each feature set performed. To answer this, for each feature set, we trained and tested fRank using only that set of features. The results are shown in Table 2. As can be seen, every single feature set individually outperformed PageRank on this test. Perhaps the most interesting result is that the Page-level features had the highest performance out of all the feature sets. This is surprising because these are features that do not depend on the overall graph structure of the Web, nor even on what pages point to a given page. This is contrary to the common belief that the Web graph structure is the key to finding a good static ranking of Web pages.

Table 2: Results for individual feature sets.

Because we are using a two-layer neural network, the features in the learned network can interact with each other in interesting, nonlinear ways. This means that a particular feature that appears to have little value in isolation could actually be very important when used in combination with other features. To measure the final contribution of a feature set, in the context of all the other features, we performed an ablation study. That is, for each set of features, we trained a network to contain all of the features except that set. We then compared the performance of the resulting network to the performance of the network with all of the features. Table 3 shows the results of this experiment, where the "decrease in accuracy" is the difference in pairwise accuracy between the network
trained with all of the features, and the network missing the given feature set.\nTable 3: Ablation study.\nShown is the decrease in accuracy when we train a network that has all but the given set of features.\nThe last line shows the effect of removing the anchor, PageRank, and domain features, hence a model containing no network or link-based information whatsoever.\nThe results of the ablation study are consistent with the individual feature set study.\nBoth show that the most important feature set is the Page-level feature set, and the second most important is the popularity feature set.\nFinally, we wished to see how the performance of fRank improved as we added features; we wanted to find at what point adding more feature sets became relatively useless.\nBeginning with no features, we greedily added the feature set that improved performance the most.\nThe results are shown in Table 4.\nFor example, the fourth line of the table shows that fRank using the page, popularity, and anchor features outperformed any network that used the page, popularity, and some other feature set, and that the performance of this network was 67.25%.\nTable 4: fRank performance as feature sets are added.\nAt each row, the feature set that gave the greatest increase in accuracy was added to the list of features (i.e., we conducted a greedy search over feature sets).\nTable 5: Top ten URLs for PageRank vs. fRank\nFinally, we present a qualitative comparison of PageRank vs. fRank.\nIn Table 5 are the top ten URLs returned for PageRank and for fRank.\nPageRank's results are heavily weighted towards technology sites.\nIt contains two QuickTime URLs (Apple's video playback software), as well as Internet Explorer and FireFox URLs (both of which are Web browsers).\nfRank, on the other hand, contains more consumer-oriented sites such as American Express, Target, Dell, etc.
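The greedy search over feature sets behind Table 4 can be sketched as follows. Here `evaluate` stands in for the expensive step of training fRank on a candidate list of feature sets and measuring its pairwise accuracy, and the per-set gains in the toy example are invented for illustration:

```python
def greedy_feature_selection(feature_sets, evaluate):
    """Repeatedly add the feature set whose addition most improves accuracy,
    as in Table 4 (a greedy forward search over feature sets).

    Returns the (feature set, accuracy) pairs in the order they were added.
    """
    chosen, remaining, history = [], list(feature_sets), []
    while remaining:
        best = max(remaining, key=lambda f: evaluate(chosen + [f]))
        remaining.remove(best)
        chosen.append(best)
        history.append((best, evaluate(chosen)))
    return history

# Toy evaluator: pairwise accuracy is 0.5 plus a made-up additive gain per set.
gains = {"page": 0.10, "popularity": 0.05, "anchor": 0.02, "pagerank": 0.01}
steps = greedy_feature_selection(gains, lambda sets: 0.5 + sum(gains[f] for f in sets))
```

With additive gains the greedy order is simply descending gain; with a real, interacting model each step requires retraining, which is why the paper reports this as a separate experiment.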
PageRank's bias toward technology can be explained through two processes.\nFirst, there are many pages with \"buttons\" at the bottom suggesting that the site is optimized for Internet Explorer, or that the visitor needs QuickTime.\nThese generally link back to, in these examples, the Internet Explorer and QuickTime download sites.\nConsequently, PageRank ranks those pages highly.\nThough these pages are important, they are not as important as it may seem by looking at the link structure alone.\nOne fix for this is to add information about the link to the PageRank computation, such as the size of the text, whether it was at the bottom of the page, etc.\nThe other bias comes from the fact that the population of Web site authors is different from the population of Web users.\nWeb authors tend to be technologically-oriented, and thus their linking behavior reflects those interests.\nfRank, by knowing the actual visitation popularity of a site (the popularity feature set), is able to eliminate some of that bias.\nIt has the ability to depend more on where actual Web users visit rather than where the Web site authors have linked.\nThe results confirm that fRank outperforms PageRank in pairwise accuracy.\nThe two most important feature sets are the page and popularity features.\nThis is surprising, as the page features consisted only of a few (8) simple features.\nFurther experiments found that, of the page features, those based on the text of the page (as opposed to the URL) performed the best.\nIn the next section, we explore the popularity feature in more detail.\n5.5 Popularity Data\nAs mentioned in section 4, our popularity data came from MSN toolbar users.\nFor privacy reasons, we had access only to an aggregate count of, for each URL, how many times it was visited by any toolbar user.\nThis limited the possible features we could derive from this data.\nFor possible extensions, see section 6.3, future work.\nFor each URL in our train and test sets, we provided
a feature to fRank which was how many times it had been visited by a toolbar user.\nHowever, this feature was quite noisy and sparse, particularly for URLs with query parameters (e.g., http:\/\/search.msn.com\/results.aspx?q=machine+learning&form=QBHP).\nOne solution was to provide an additional feature which was the number of times any URL at the given domain was visited by a toolbar user.\nAdding this feature dramatically improved the performance of fRank.\nWe took this one step further and used the built-in hierarchical structure of URLs to construct many levels of backoff between the full URL and the domain.\nWe did this by using the set of features shown in Table 6.\nTable 6: URL functions used to compute the Popularity feature set.\nEach URL was assigned one feature for each function shown in the table.\nThe value of the feature was the count of the number of times a toolbar user visited a URL, where the function applied to that URL matches the function applied to the URL in question.\nFor example, a user's visit to cnn.com\/2005\/sports.html would increment the Domain and Domain +1 features for the URL cnn.com\/2005\/tech\/wikipedia.html.\nAs seen in Table 7, adding the domain counts significantly improved the quality of the popularity feature, and adding the numerous backoff functions listed in Table 6 improved the accuracy even further.\nTable 7: Effect of adding backoff to the popularity feature set\nBacking off to subsets of the URL is one technique for dealing with the sparsity of data.\nIt is also informative to see how the performance of fRank depends on the amount of popularity data that we have collected.\nIn Figure 1 we show the performance of fRank trained with only the popularity feature set vs. 
the amount of data we have for the popularity feature set.\nEach day, we receive additional popularity data, and as can be seen in the plot, this increases the performance of fRank.\nThe relation is logarithmic: doubling the amount of popularity data provides a constant improvement in pairwise accuracy.\nIn summary, we have found that the popularity features provide a useful boost to the overall fRank accuracy.\nGathering more popularity data, as well as employing simple backoff strategies, improves this boost even further.\n5.6 Summary of Results\nThe experiments provide a number of conclusions.\nFirst, fRank performs significantly better than PageRank, even without any information about the Web graph.\nSecond, the page level and popularity features were the most significant contributors to pairwise accuracy.\nThird, by collecting more popularity data, we can continue to improve fRank's performance.\nThe popularity data provides two benefits to fRank.\nFirst, we see that qualitatively, fRank's ordering of Web pages has a more favorable bias than PageRank's.\nfRank's ordering seems to correspond to what Web users, rather than Web page authors, prefer.\nSecond, the popularity data is more timely than PageRank's link information.\nThe toolbar provides information about which Web pages people find interesting right now, whereas links are added to pages more slowly, as authors find the time and interest.\n6.\nRELATED AND FUTURE WORK\n6.1 Improvements to PageRank\nSince the original PageRank paper, there has been work on improving it.\nMuch of that work centers on speeding up and parallelizing the computation [15] [25].\nOne recognized problem with PageRank is that of topic drift: A page about \"dogs\" will have high PageRank if it is linked to by many pages that themselves have high rank, regardless of their topic.\nIn contrast, a search engine user looking for good pages about dogs would likely prefer to find pages that are pointed to by many pages that are themselves
about dogs.\nHence, a link that is \"on topic\" should have higher weight than a link that is not.\nRichardson and Domingos's Query Dependent PageRank [29] and Haveliwala's Topic-Sensitive PageRank [16] are two approaches that tackle this problem.\nOther variations to PageRank include weighting inter- vs. intra-domain links differently, adding a backwards step to the random surfer to simulate the \"back\" button on most browsers [24], and modifying the jump probability (\u03b1) [3].\nSee Langville and Meyer [23] for a good survey of these, and other modifications to PageRank.\n6.2 Other related work\nPageRank is not the only link analysis algorithm used for ranking Web pages.\nThe most well-known alternative is HITS [22], which is used by the Teoma search engine [30].\nHITS produces a list of hubs and authorities, where hubs are pages that point to many authority pages, and authorities are pages that are pointed to by many hubs.\nFigure 1: Relation between the amount of popularity data and the performance of the popularity feature set.\nNote the x-axis is a logarithmic scale.\nPrevious work has shown HITS to perform comparably to PageRank [1].\nOne field of interest is that of static index pruning (see e.g., Carmel et al.
[8]).\nStatic index pruning methods reduce the size of the search engine's index by removing documents that are unlikely to be returned by a search query.\nThe pruning is typically done based on the frequency of query terms.\nSimilarly, Pandey and Olston [28] suggest crawling pages frequently if they are likely to incorrectly appear (or not appear) as a result of a search.\nSimilar methods could be incorporated into the static rank (e.g., how many frequent queries contain words found on this page).\nOthers have investigated the effect that PageRank has on the Web at large [9].\nThey argue that pages with high PageRank are more likely to be found by Web users, thus more likely to be linked to, and thus more likely to maintain a higher PageRank than other pages.\nThe same may occur for the popularity data.\nIf we increase the ranking for popular pages, they are more likely to be clicked on, thus further increasing their popularity.\nCho et al. [10] argue that a more appropriate measure of Web page quality would depend on not only the current link structure of the Web, but also on the change in that link structure.\nThe same technique may be applicable to popularity data: the change in popularity of a page may be more informative than the absolute popularity.\nOne interesting related work is that of Ivory and Hearst [19].\nTheir goal was to build a model of Web sites that are considered high quality from the perspective of \"content, structure and navigation, visual design, functionality, interactivity, and overall experience\".\nThey used over 100 page level features, as well as features encompassing the performance and structure of the site.\nThis let them qualitatively describe the qualities of a page that make it appear attractive (e.g., rare use of italics, at least 9 point font, ...), and (in later work) to build a system that assists novice Web page authors in creating quality pages by evaluating them according to these features.\nThe primary differences between
this work and ours are the goal (discovering what constitutes a good Web page vs. ordering Web pages for the purposes of Web search), the size of the study (they used a dataset of less than 6000 pages vs. our set of 468,000), and our comparison with PageRank.\nNevertheless, their work provides insights to additional useful static features that we could incorporate into fRank in the future.\nRecent work on incorporating novel features into dynamic ranking includes that by Joachims et al. [21], who investigate the use of implicit feedback from users, in the form of which search engine results are clicked on.\nCraswell et al. [11] present a method for determining the best transformation to apply to query independent features (such as those used in this paper) for the purposes of improving dynamic ranking.\nOther work, such as Boyan et al. [4] and Bartell et al. [2] apply machine learning for the purposes of improving the overall relevance of a search engine (i.e., the dynamic ranking).\nThey do not apply their techniques to the problem of static ranking.\n6.3 Future work\nThere are many ways in which we would like to extend this work.\nFirst, fRank uses only a small number of features.\nWe believe we could achieve even more significant results with more features.\nIn particular the existence, or lack thereof, of certain words could prove very significant (for instance, \"under construction\" probably signifies a low quality page).\nOther features could include the number of images on a page, size of those images, number of layout elements (tables, divs, and spans), use of style sheets, conforming to W3C standards (like XHTML 1.0 Strict), background color of a page, etc. 
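Several of the candidate features above are straightforward to extract. The sketch below is our own illustration, not the paper's extractor; it counts images, layout elements, and style sheets, and checks for the phrase "under construction", using only the standard library:

```python
from html.parser import HTMLParser

class PageFeatureExtractor(HTMLParser):
    """Counts a few of the candidate static features suggested above:
    images, layout elements (table/div/span), style sheets, and whether
    the page text contains the phrase "under construction"."""
    def __init__(self):
        super().__init__()
        self.counts = {"img": 0, "layout": 0, "stylesheet": 0}
        self.text = []
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.counts["img"] += 1
        elif tag in ("table", "div", "span"):
            self.counts["layout"] += 1
        elif tag == "style" or (tag == "link" and ("rel", "stylesheet") in attrs):
            self.counts["stylesheet"] += 1
    def handle_data(self, data):
        self.text.append(data.lower())
    def features(self):
        f = dict(self.counts)
        f["under_construction"] = "under construction" in " ".join(self.text)
        return f

parser = PageFeatureExtractor()
parser.feed('<html><head><link rel="stylesheet" href="s.css"></head>'
            '<body><div><img src="a.png"><span>Page under construction</span>'
            '</div></body></html>')
feats = parser.features()
```

Each count would become one input column to the neural network, normalized like the other features.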
Many pages are generated dynamically, the contents of which may depend on parameters in the URL, the time of day, the user visiting the site, or other variables.\nFor such pages, it may be useful to apply the techniques found in [26] to form a static approximation for the purposes of extracting features.\nThe resulting grammar describing the page could itself be a source of additional features describing the complexity of the page, such as how many non-terminal nodes it has, the depth of the grammar tree, etc.\nfRank allows one to specify a confidence in each pairing of documents.\nIn the future, we will experiment with probabilities that depend on the difference in human judgments between the two items in the pair.\nFor example, a pair of documents where one was rated 4 and the other 0 should have a higher confidence than a pair of documents rated 3 and 2.\nThe experiments in this paper are biased toward pages that have higher than average quality.\nAlso, fRank with all of the features can only be applied to pages that have already been crawled.\nThus, fRank is primarily useful for index ordering and improving relevance, not for directing the crawl.\nWe would like to investigate a machine learning approach for crawl prioritization as well.\nIt may be that a combination of methods is best: for example, using PageRank to select the best 5 billion of the 20 billion pages on the Web, then using fRank to order the index and affect search relevancy.\nAnother interesting direction for exploration is to incorporate fRank and page-level features directly into the PageRank computation itself.\nWork on biasing the PageRank jump vector [16], and transition matrix [29], has demonstrated the feasibility and advantages of such an approach.\nThere is reason to believe that a direct application of [29], using the fRank of a page for its \"relevance\", could lead to an improved overall static rank.\nFinally, the popularity data can be used in other interesting ways.\nThe general
surfing and searching habits of Web users vary by time of day.\nActivity in the morning, daytime, and evening are often quite different (e.g., reading the news, solving problems, and accessing entertainment, respectively).\nWe can gain insight into these differences by using the popularity data, divided into segments of the day.\nWhen a query is issued, we would then use the popularity data matching the time of query in order to do the ranking of Web pages.\nWe also plan to explore popularity features that use more than just the counts of how often a page was visited.\nFor example, how long users tended to dwell on a page, did they leave the page by clicking a link or by hitting the back button, etc.\nFox et al. did a study that showed that features such as this can be valuable for the purposes of dynamic ranking [14].\nFinally, the popularity data could be used as the label rather than as a feature.\nUsing fRank in this way to predict the popularity of a page may be useful for the tasks of relevance, efficiency, and crawl priority.\nThere is also significantly more popularity data than human labeled data, potentially enabling more complex machine learning methods, and significantly more features.\n7.\nCONCLUSIONS\nA good static ranking is an important component for today's search engines and information retrieval systems.\nWe have demonstrated that PageRank does not provide a very good static ranking; there are many simple features that individually outperform PageRank.\nBy combining many static features, fRank achieves a ranking that has a significantly higher pairwise accuracy than PageRank alone.\nA qualitative evaluation of the top documents shows that fRank is less technology-biased than PageRank; by using popularity data, it is biased toward pages that Web users, rather than Web authors, visit.\nThe machine learning component of fRank gives it the additional benefit of being more robust against spammers, and allows it to leverage further developments in
the machine learning community in areas such as adversarial classification.\nWe have only begun to explore the options, and believe that significant strides can be made in the area of static ranking by further experimentation with additional features, other machine learning techniques, and additional sources of data.","keyphrases":["pagerank","machin learn","static rank","static rank","ranknet","inform retriev","featur-base rank","adversari classif","regress","relev","visit popular","search engin"],"prmu":["P","P","P","P","P","U","M","U","U","U","M","U"]} {"id":"H-45","title":"Query Performance Prediction in Web Search Environments","abstract":"Current prediction techniques, which are generally designed for content-based queries and are typically evaluated on relatively homogenous test collections of small sizes, face serious challenges in web search environments where collections are significantly more heterogeneous and different types of retrieval tasks exist. In this paper, we present three techniques to address these challenges. We focus on performance prediction for two types of queries in web search environments: content-based and Named-Page finding. Our evaluation is mainly performed on the GOV2 collection. In addition to evaluating our models for the two types of queries separately, we consider a more challenging and realistic situation that the two types of queries are mixed together without prior information on query types. To assist prediction under the mixed-query situation, a novel query classifier is adopted. Results show that our prediction of web query performance is substantially more accurate than the current state-of-the-art prediction techniques. Consequently, our paper provides a practical approach to performance prediction in real-world web settings.","lvl-1":"Query Performance Prediction in Web Search Environments Yun Zhou and W. 
Bruce Croft Department of Computer Science University of Massachusetts, Amherst {yzhou, croft}@cs.umass.edu ABSTRACT Current prediction techniques, which are generally designed for content-based queries and are typically evaluated on relatively homogenous test collections of small sizes, face serious challenges in web search environments where collections are significantly more heterogeneous and different types of retrieval tasks exist.\nIn this paper, we present three techniques to address these challenges.\nWe focus on performance prediction for two types of queries in web search environments: content-based and Named-Page finding.\nOur evaluation is mainly performed on the GOV2 collection.\nIn addition to evaluating our models for the two types of queries separately, we consider a more challenging and realistic situation that the two types of queries are mixed together without prior information on query types.\nTo assist prediction under the mixed-query situation, a novel query classifier is adopted.\nResults show that our prediction of web query performance is substantially more accurate than the current state-of-the-art prediction techniques.\nConsequently, our paper provides a practical approach to performance prediction in real-world web settings.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval -Query formulation General Terms Algorithms, Experimentation, Theory 1.\nINTRODUCTION Query performance prediction has many applications in a variety of information retrieval (IR) areas such as improving retrieval consistency, query refinement, and distributed IR.\nThe importance of this problem has been recognized by IR researchers and a number of new methods have been proposed for prediction recently [1, 2, 17].\nMost work on prediction has focused on the traditional ad-hoc retrieval task where query performance is measured according to topical relevance.\nThese prediction models are evaluated on TREC
document collections which typically consist of no more than one million relatively homogenous newswire articles.\nWith the popularity and influence of the Web, prediction techniques that will work well for web-style queries are highly preferable.\nHowever, web search environments pose significant challenges to current prediction models that are mainly designed for traditional TREC settings.\nHere we outline some of these challenges.\nFirst, web collections, which are much larger than conventional TREC collections, include a variety of documents that are different in many aspects such as quality and style.\nCurrent prediction techniques can be vulnerable to these characteristics of web collections.\nFor example, the reported prediction accuracy of the ranking robustness technique and the clarity technique on the GOV2 collection (a large web collection) is significantly worse compared to the other TREC collections [1].\nSimilar prediction accuracy on the GOV2 collection using another technique is reported in [2], confirming the difficulty of predicting query performance on a large web collection.\nFurthermore, web search goes beyond the scope of the ad-hoc retrieval task based on topical relevance.\nFor example, the Named-Page (NP) finding task, which is a navigational task, is also popular in web retrieval.\nQuery performance prediction for the NP task is still necessary since NP retrieval performance is far from perfect.\nIn fact, according to the report on the NP task of the 2005 Terabyte Track [3], about 40% of the test queries perform poorly (no correct answer in the first 10 search results) even in the best run from the top group.\nTo our knowledge, little research has explicitly addressed the problem of NP-query performance prediction.\nCurrent prediction models devised for content-based queries will be less effective for NP queries considering the fundamental differences between the two.\nThird, in real-world web search environments, user queries are usually
a mixture of different types and prior knowledge about the type of each query is generally unavailable.\nThe mixed-query situation raises new problems for query performance prediction.\nFor instance, we may need to incorporate a query classifier into prediction models.\nDespite these problems, the ability to handle this situation is a crucial step towards turning query performance prediction from an interesting research topic into a practical tool for web retrieval.\nIn this paper, we present three techniques to address the above challenges that current prediction models face in Web search environments.\nOur work focuses on query performance prediction for the content-based (ad-hoc) retrieval task and the named-page finding task in the context of web retrieval.\nOur first technique, called weighted information gain (WIG), makes use of both single term and term proximity features to estimate the quality of top retrieved documents for prediction.\nWe find that WIG offers consistent prediction accuracy across various test collections and query types.\nMoreover, we demonstrate that good prediction accuracy can be achieved for the mixed-query situation by using WIG with the help of a query type classifier.\nQuery feedback and first rank change, which are our second and third prediction techniques, perform well for content-based queries and NP queries respectively.\nOur main contributions include: (1) considerably improved prediction accuracy for web content-based queries over several state-of-the-art techniques.\n(2) new techniques for successfully predicting NP-query performance.\n(3) a practical and fully automatic solution to predicting mixed-query performance.\nIn addition, one minor contribution is that we find that the robustness score [1], which was originally proposed for performance prediction, is helpful for query classification.\n2.\nRELATED WORK As we mentioned in the introduction, a number of prediction techniques have been proposed recently that focus on
content-based queries in the topical relevance (ad-hoc) task.\nWe know of no published work that addresses other types of queries such as NP queries, let alone a mixture of query types.\nNext we review some representative models.\nThe major difficulty of performance prediction comes from the fact that many factors have an impact on retrieval performance.\nEach factor affects performance to a different degree and the overall effect is hard to predict accurately.\nTherefore, it is not surprising to notice that simple features, such as the frequency of query terms in the collection [4] and the average IDF of query terms [5], do not predict well.\nIn fact, most of the successful techniques are based on measuring some characteristics of the retrieved document set to estimate topic difficulty.\nFor example, the clarity score [6] measures the coherence of a list of documents by the KL-divergence between the query model and the collection model.\nThe robustness score [1] quantifies another property of a ranked list: the robustness of the ranking in the presence of uncertainty.\nCarmel et al.
[2] found that the distance measured by the Jensen-Shannon divergence between the retrieved document set and the collection is significantly correlated to average precision.\nVinay et al. [7] proposed four measures to capture the geometry of the top retrieved documents for prediction.\nThe most effective measure is the sensitivity to document perturbation, an idea somewhat similar to the robustness score.\nUnfortunately, their way of measuring the sensitivity does not perform equally well for short queries and prediction accuracy drops considerably when a state-of-the-art retrieval technique (like Okapi or a language modeling approach) is adopted for retrieval instead of the tf-idf weighting used in their paper [16].\nThe difficulties of applying these models in web search environments have already been mentioned.\nIn this paper, we mainly adopt the clarity score and the robustness score as our baselines.\nWe experimentally show that the baselines, even after being carefully tuned, are inadequate for the web environment.\nOne of our prediction models, WIG, is related to the Markov random field (MRF) model for information retrieval [8].\nThe MRF model directly models term dependence and is found to be highly effective across a variety of test collections (particularly web collections) and retrieval tasks.\nThis model is used to estimate the joint probability distribution over documents and queries, an important part of WIG.\nThe superiority of WIG over other prediction techniques based on unigram features, which will be demonstrated later in our paper, coincides with that of MRF for retrieval.\nIn other words, it is interesting to note that term dependence, when being modeled appropriately, can be helpful for both improving and predicting retrieval performance.\n3.\nPREDICTION MODELS 3.1 Weighted Information Gain (WIG) This section introduces a weighted information gain approach that incorporates both single term and proximity features for predicting performance for
both content-based and Named-Page (NP) finding queries.\nGiven a set of queries Q = {Qs} (s = 1, 2, ..., N) which includes all possible user queries and a set of documents D = {Dt} (t = 1, 2, ..., M), we assume that each query-document pair (Qs, Dt) is manually judged and will be put in a relevance list if Qs is found to be relevant to Dt.\nThe joint probability P(Qs, Dt) over queries Q and documents D denotes the probability that pair (Qs, Dt) will be in the relevance list.\nSuch assumptions are similar to those used in [8].\nAssuming that the user issues query Qi \u2208 Q and the retrieval in response to Qi results in a ranked list L of documents, we calculate the amount of information contained in P(Qs, Dt) with respect to Qi and L by Eq. 1, which is a variant of entropy called the weighted entropy [13].\nThe weights in Eq. 1 are solely determined by Qi and L:\nH_{Q_i,L}(Q_s, D_t) = -\sum_{s,t} weight(Q_s, D_t) \log P(Q_s, D_t)   (1)\nIn this paper, we choose the weights as follows:\nweight(Q_s, D_t) = 1/K if s = i and D_t \in T_K(L), and 0 otherwise, where T_K(L) contains the top K documents in L   (2)\nThe cutoff rank K is a parameter in our model that will be discussed later.\nAccordingly, Eq. 1 can be simplified as follows:\nH_{Q_i,L}(Q_s, D_t) = -\frac{1}{K} \sum_{D_t \in T_K(L)} \log P(Q_i, D_t)   (3)\nUnfortunately, weighted entropy H_{Q_i,L}(Q_s, D_t) computed by Eq. 3, which represents the amount of information about how likely the top ranked documents in L would be relevant to query Qi on average, cannot be compared across different queries, making it inappropriate for directly predicting query performance.\nTo mitigate this problem, we come up with a background distribution P(Qs, C) over Q and D by imagining that every document in D is replaced by the same special document C which represents average language usage.\nIn this paper, C is created by concatenating every document in D.
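To make Eqs. 1-4 concrete, here is a toy numeric instance (our own illustration; all probabilities are invented) of the top-K weighted entropy and the background comparison that turns it into WIG:

```python
import math

def weighted_entropy(joint_probs, k):
    """Eq. 3: H = -(1/K) * sum of log P(Q_i, D_t) over the top K documents.

    joint_probs lists P(Q_i, D_t) for the ranked list, best document first.
    """
    return -sum(math.log(p) for p in joint_probs[:k]) / k

top = [0.008, 0.004, 0.002]   # toy P(Q_i, D_t) for the top 3 retrieved documents
background = 0.001            # toy P(Q_i, C) for the average document C
k = 3

# Eq. 4: WIG is the background entropy minus the top-K entropy, i.e. the
# average log ratio P(Q_i, D_t) / P(Q_i, C) over the top K documents.
wig = weighted_entropy([background] * k, k) - weighted_entropy(top, k)
```

A positive value means the top documents look more relevant to the query than an average document does, which is the property the predictor correlates with retrieval effectiveness.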
Roughly speaking, C is the collection (the document set) {Dt} without document boundaries.\nSimilarly, weighted entropy H_{Q_i,L}(Q_s, C) calculated by Eq. 3 represents the amount of information about how likely an average document (represented by the whole collection) would be relevant to query Qi.\nNow we introduce our performance predictor WIG, which is the weighted information gain [13] computed as the difference between H_{Q_i,L}(Q_s, D_t) and H_{Q_i,L}(Q_s, C).\nSpecifically, given query Qi, collection C and ranked list L of documents, WIG is calculated as follows:\nWIG(Q_i, C, L) = H_{Q_i,L}(Q_s, C) - H_{Q_i,L}(Q_s, D_t) = \sum_{s,t} weight(Q_s, D_t) \log \frac{P(Q_s, D_t)}{P(Q_s, C)} = \frac{1}{K} \sum_{D_t \in T_K(L)} \log \frac{P(Q_i, D_t)}{P(Q_i, C)}   (4)\nWIG computed by Eq. 4 measures the change in information about the quality of retrieval (in response to query Qi) from an imaginary state that only an average document is retrieved to a posterior state that the actual search results are observed.\nWe hypothesize that WIG is positively correlated with retrieval effectiveness because high quality retrieval should be much more effective than just returning the average document.\nThe heart of this technique is how to estimate the joint distribution P(Qs, Dt).\nIn the language modeling approach to IR, a variety of models can be applied readily to estimate this distribution.\nAlthough most of these models are based on the bag-of-words assumption, recent work on modeling term dependence under the language modeling framework has shown consistent and significant improvements in retrieval effectiveness over bag-of-words models.\nInspired by the success of incorporating term proximity features into language models, we decide to adopt a good dependence model to estimate the probability P(Qs, Dt).\nThe model we chose for this paper is Metzler and Croft's Markov Random Field (MRF) model, which has already demonstrated superiority over a number of collections and different retrieval tasks [8, 9].\nAccording to the MRF model,
log P(Qi, Dt) can be written as )5()|(loglog),(log )( 1 \u2211\u2208 +\u2212= iQF tti DPZDQP \u03be \u03be \u03be\u03bb where Z1 is a constant that ensures that P(Qi, Dt) sums up to 1.\nF(Qi) consists of a set of features expanded from the original query Qi .\nFor example, assuming that query Qi is talented student program, F(Qi) includes features like program and talented student.\nWe consider two kinds of features: single term features T and proximity features P. Proximity features include exact phrase (#1) and unordered window (#uwN) features as described in [8].\nNote that F(Qi) is the union of T(Qi) and P(Qi).\nFor more details on F(Qi) such as how to expand the original query Qi to F(Qi), we refer the reader to [8] and [9].\nP(\u03be|Dt) denotes the probability that feature \u03be will occur in Dt.\nMore details on P(\u03be|Dt) will be provided later in this section.\nThe choice of \u03bb\u03be is somewhat different from that used in [8] since \u03bb\u03be plays a dual role in our model.\nThe first role, which is the same as in [8], is to weight between single term and proximity features.\nThe other role, which is specific to our prediction task, is to normalize the size of F(Qi).\nWe found that the following weight strategy for \u03bb\u03be satisfies the above two roles and generalizes well on a variety of collections and query types. 
\lambda_\xi =
\begin{cases}
\dfrac{\lambda_T}{\sqrt{|T(Q_i)|}} & \xi \in T(Q_i) \\[1ex]
\dfrac{1 - \lambda_T}{\sqrt{|P(Q_i)|}} & \xi \in P(Q_i)
\end{cases}   (6)

where |T(Q_i)| and |P(Q_i)| denote the number of single term and proximity features in F(Q_i) respectively. The reason for choosing the square root function in the denominator of λ_ξ is to penalize a feature set of large size appropriately, making WIG more comparable across queries of various lengths. λ_T is a fixed parameter and is set to 0.8 according to [8] throughout this paper. Similarly, log P(Q_i, C) can be written as:

\log P(Q_i, C) = -\log Z_2 + \sum_{\xi \in F(Q_i)} \lambda_\xi \log P(\xi \mid C)   (7)

When the constants Z_1 and Z_2 are dropped, the WIG computed in Eq. 4 can be rewritten as follows by plugging in Eq. 5 and Eq. 7:

WIG(Q_i, C, L) = \frac{1}{K} \sum_{D_t \in T_K(L)} \sum_{\xi \in F(Q_i)} \lambda_\xi \log \frac{P(\xi \mid D_t)}{P(\xi \mid C)}   (8)

One of the advantages of WIG over other techniques is that it handles both content-based and NP queries well. Based on the type (or the predicted type) of Q_i, the calculation of WIG in Eq. 8 differs in two aspects: (1) how to estimate P(ξ|D_t) and P(ξ|C), and (2) how to choose K.
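The computation behind Eq. 6 and Eq. 8 can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the helper names and the toy probability values standing in for the smoothed estimates P(ξ|D_t) and P(ξ|C) are hypothetical; only λ_T = 0.8 and the square-root normalization come from the text.

```python
import math

LAMBDA_T = 0.8  # weight on single-term features, following [8]

def feature_weights(term_feats, prox_feats):
    """lambda_xi from Eq. 6: split lambda_T between single-term and proximity
    features, normalizing each set by the square root of its size."""
    w = {}
    for f in term_feats:
        w[f] = LAMBDA_T / math.sqrt(len(term_feats))
    for f in prox_feats:
        w[f] = (1 - LAMBDA_T) / math.sqrt(len(prox_feats))
    return w

def wig(term_feats, prox_feats, p_doc, p_coll, top_k_docs):
    """Eq. 8: average over the top-K documents of the lambda-weighted
    log-likelihood ratio between document and collection feature models."""
    lam = feature_weights(term_feats, prox_feats)
    score = 0.0
    for d in top_k_docs:
        for f, w in lam.items():
            score += w * math.log(p_doc[d][f] / p_coll[f])
    return score / len(top_k_docs)

# Toy example: query "talented student program" with one proximity feature.
# All probability values below are made up for illustration.
terms = ["talented", "student", "program"]
prox = ["#1(talented student)"]
p_coll = {"talented": 1e-4, "student": 1e-3, "program": 1e-3,
          "#1(talented student)": 1e-6}
p_doc = {"d1": {"talented": 1e-2, "student": 5e-2, "program": 2e-2,
                "#1(talented student)": 1e-3}}
print(round(wig(terms, prox, p_doc=p_doc, p_coll=p_coll, top_k_docs=["d1"]), 3))
```

A larger WIG score indicates that the top-ranked documents match the query features much better than the collection as a whole, which is the intuition behind the predictor.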
For content-based queries, P(ξ|C) is estimated by the relative frequency of feature ξ in collection C as a whole. The estimation of P(ξ|D_t) is the same as in [8]. Namely, we estimate P(ξ|D_t) by the relative frequency of feature ξ in D_t linearly smoothed with the collection frequency P(ξ|C). K in Eq. 8 is treated as a free parameter. Note that K is the only free parameter in the computation of WIG for content-based queries because all parameters involved in P(ξ|D_t) are assumed to be fixed by taking the suggested values in [8]. Regarding NP queries, we make use of document structure to estimate P(ξ|D_t) and P(ξ|C) by the so-called mixture of language models proposed in [10] and incorporated into the MRF model for Named-Page finding retrieval in [9]. The basic idea is that a document (collection) is divided into several fields such as the title field, the main-body field and the heading field. P(ξ|D_t) and P(ξ|C) are estimated by a linear combination of the language models from each field. Due to space constraints, we refer the reader to [9] for details. We adopt the exact same set of parameters as used in [9] for estimation. With regard to K in Eq. 8, we set K to 1 because the Named-Page finding task heavily focuses on the first ranked document. Consequently, there are no free parameters in the computation of WIG for NP queries.

3.2 Query Feedback

In this section, we introduce another technique called query feedback (QF) for prediction. Suppose that a user issues query Q to a retrieval system and a ranked list L of documents is returned. We view the retrieval system as a noisy channel. Specifically, we assume that the output of the channel is L and the input is Q. After going through the channel, Q becomes corrupted and is transformed to ranked list L.
By thinking about the retrieval process this way, the problem of predicting retrieval effectiveness turns into the task of evaluating the quality of the channel. In other words, prediction becomes finding a way to measure the degree of corruption that arises when Q is transformed to L. As directly computing the degree of corruption is difficult, we tackle this problem by approximation. Our main idea is to measure to what extent information on Q can be recovered from L, on the assumption that only L is observed. Specifically, we design a decoder that can accurately translate L back into a new query Q', and the similarity S between the original query Q and the new query Q' is adopted as a performance predictor. This is a sketch of how the QF technique predicts query performance. Before filling in more details, we briefly discuss why this method should work. There is a relation between the similarity S defined above and retrieval performance. On the one hand, if the retrieval has strayed from the original sense of the query Q, the new query Q' extracted from ranked list L in response to Q will be very different from the original query Q.
On the other hand, a query distilled from a ranked list containing many relevant documents is likely to be similar to the original query. Further examples in support of this relation will be provided later. Next we detail how to build the decoder and how to measure the similarity S. In essence, the goal of the decoder is to compress ranked list L into a few informative terms that represent the content of the top ranked documents in L. Our approach is to represent ranked list L by a language model (a distribution over terms). Terms are then ranked by their contribution to the language model's KL (Kullback-Leibler) divergence from the background collection model, and the top ranked terms are chosen to form the new query Q'. This approach is similar to that used in Section 4.1 of [11]. Specifically, we take three steps to compress ranked list L into query Q' without referring to the original query.

1. We adopt the ranked list language model [14] to estimate a language model based on ranked list L. The model can be written as:

P(w \mid L) = \sum_{D \in L} P(w \mid D) P(D \mid L)   (9)

where w is any term and D is a document. P(D|L) is estimated by a linearly decreasing function of the rank of document D.

2. Each term in P(w|L) is ranked by the following KL-divergence contribution:

P(w \mid L) \log \frac{P(w \mid L)}{P(w \mid C)}   (10)

where P(w|C) is the collection model estimated by the relative frequency of term w in collection C as a whole.

3. The top N ranked terms by Eq. 10 form a weighted query Q' = {(w_i, t_i)}, i = 1, ..., N,
where w_i denotes the i-th ranked term and weight t_i is the KL-divergence contribution of w_i in Eq. 10.

Term:            cruise   ship    vessel   sea     passenger
KL contribution: 0.050    0.040   0.012    0.010   0.009

Table 1: Top 5 terms compressed from the ranked list in response to the query "Cruise ship damage sea life"

Two representative examples, one for a poorly performing query, "Cruise ship damage sea life" (TREC topic 719; average precision: 0.08), and the other for a high performing query, "prostate cancer treatments" (TREC topic 710; average precision: 0.49), are shown in Tables 1 and 2 respectively. These examples indicate how the similarity between the original and the new query correlates with retrieval performance. The parameter N in step 3 is set to 20 empirically; choosing a larger value of N is unnecessary since the weights after the top 20 are usually too small to make any difference.

Term:            prostate  cancer  treatment  men    therapy
KL contribution: 0.177     0.140   0.028      0.025  0.020

Table 2: Top 5 terms compressed from the ranked list in response to the query "prostate cancer treatments"

To measure the similarity between original query Q and new query Q', we first use Q' to do retrieval on the same collection. A variant of the query likelihood model [15] is adopted for retrieval. Namely, documents are ranked by:

P(Q' \mid D) = \sum_{(w_i, t_i) \in Q'} P(w_i \mid D)^{t_i}   (11)

where w_i is a term in Q', t_i is the associated weight, and D is a document. Let L' denote the new ranked list returned from the above retrieval. The similarity is measured by the overlap of documents in L and L': specifically, the percentage of documents in the top K documents of L that are also present in the top K documents of L'. The cutoff K is treated as a free parameter. We summarize here how the QF technique predicts performance given a query Q and the associated ranked list L. We first obtain a weighted query Q' compressed from L by the above three steps. Then we use Q' to perform retrieval and the new ranked list
is L'. The overlap of documents in L and L' is used for prediction.

3.3 First Rank Change (FRC)

In this section, we propose a method called first rank change (FRC) for performance prediction for NP queries. This method is derived from the ranking robustness technique [1], which is mainly designed for content-based queries. When directly applied to NP queries, the robustness technique is less effective because it takes the top ranked documents as a whole into account, while NP queries usually have only one single relevant document. Instead, our technique focuses on the first ranked document while the main idea of the robustness method remains. The pseudo-code for computing FRC is shown in Figure 1.

Input: (1) ranked list L = {D_i}, i = 1, ..., 100, where D_i denotes the i-th ranked document; (2) query Q
1  initialize: set the number of trials J = 100000 and counter c = 0
2  for i = 1 to J
3      perturb every document in L; let the outcome be a set F = {D_i'},
       where D_i' denotes the perturbed version of D_i
4      do retrieval with query Q on set F
5      c = c + 1 if and only if D_1' is ranked first in step 4
6  end for
7  return the ratio c/J

Figure 1: Pseudo-code for computing FRC

FRC approximates the probability that the first ranked document in the original list L will remain ranked first even after the documents are perturbed. The higher the probability is, the more confidence we have in the first ranked document. On the other hand, in the extreme case of a random ranking, the probability would be as low as 0.5. We expect that FRC has a positive association with NP query performance. We adopt [1] to implement the document perturbation step (step 3 in Figure 1) using Poisson distributions. For more details, we refer the reader to [1].

4. EVALUATION

We now present the results of predicting query performance by our models. Three state-of-the-art techniques are adopted as our baselines. We evaluate our techniques across a variety of Web retrieval settings. As
mentioned before, we consider two types of queries, that is, content-based (CB) queries and Named-Page (NP) finding queries. First, suppose that the query types are known. We investigate the correlation between the predicted retrieval performance and the actual performance for both types of queries separately. Results show that our methods yield considerable improvements over the baselines. We then consider a more challenging scenario where no prior information on query types is available. Two sub-cases are considered. In the first, there exists only one type of query but the actual type is unknown. We assume a mixture of the two query types in the second case. We demonstrate that our models achieve good accuracy under this demanding scenario, making prediction practical in a real-world Web search environment.

4.1 Experimental Setup

Our evaluation focuses on the GOV2 collection, which contains about 25 million documents crawled from web sites in the .gov domain during 2004 [3]. We create two kinds of data sets, for CB queries and NP queries respectively. For the CB type, we use the ad-hoc topics of the Terabyte Tracks of 2004, 2005 and 2006 and name them TB04-adhoc, TB05-adhoc and TB06-adhoc respectively. In addition, we also use the ad-hoc topics of the 2004 Robust Track (RT04) to test the adaptability of our techniques to a non-Web environment. For NP queries, we use the Named-Page finding topics of the Terabyte Tracks of 2005 and 2006, which we name TB05-NP and TB06-NP respectively. All queries used in our experiments are titles of TREC topics, as we center on web retrieval. Table 3 summarizes the above data sets.

Name        Collection           Topic Number       Query Type
TB04-adhoc  GOV2                 701-750            CB
TB05-adhoc  GOV2                 751-800            CB
TB06-adhoc  GOV2                 801-850            CB
RT04        Disk 4+5 (minus CR)  301-450; 601-700   CB
TB05-NP     GOV2                 NP601-NP872        NP
TB06-NP     GOV2                 NP901-NP1081       NP

Table 3: Summary of test collections and topics

Retrieval performance of individual content-based and NP queries
is measured by average precision and by the reciprocal rank of the first correct answer, respectively. We make use of the Markov Random Field model for both ad-hoc and Named-Page finding retrieval. We adopt the same setting of retrieval parameters used in [8, 9]. The Indri search engine [12] is used for all of our experiments. Though not reported here, we also tried the query likelihood model for ad-hoc retrieval and found that the results change little because of the very high correlation between the query performances obtained by the two retrieval models (0.96 measured by Pearson's coefficient).

4.2 Known Query Types

Suppose that query types are known. We treat each type of query separately and measure the correlation with average precision (or with reciprocal rank in the case of NP queries). We adopt Pearson's correlation test, which reflects the degree of linear relationship between the predicted and the actual retrieval performance.

4.2.1 Content-based Queries

Methods        Clarity  Robust  JSD    WIG    QF     WIG+QF
TB04+05 adhoc  0.333    0.317   0.362  0.574  0.480  0.637
TB06 adhoc     0.076    0.294   N/A    0.464  0.422  0.511

Table 4: Pearson's correlation coefficients for correlation with average precision on the Terabyte Tracks (ad-hoc) for the clarity score, the robustness score, the JSD-based method (we directly cite the score reported in [2]), WIG, query feedback (QF) and a linear combination of WIG and QF. Bold cases mean the results are statistically significant at the 0.01 level.

Table 4 shows the correlation with average precision on two data sets: one is a combination of TB04-adhoc and TB05-adhoc (100 topics in total) and the other is TB06-adhoc (50 topics). The reason that we put TB04-adhoc and TB05-adhoc together is to make our results comparable to [2]. Our baselines are the clarity score (Clarity) [6], the robustness score (Robust) [1] and the JSD-based method (JSD) [2]. For the clarity and robustness scores, we have tried different parameter settings and report the highest
correlation coefficients we have found. We directly cite the result of the JSD-based method reported in [2]. The table also shows the results for the Weighted Information Gain (WIG) method and the Query Feedback (QF) method for predicting content-based queries. As described in the previous section, both WIG and QF have one free parameter to set, that is, the cutoff rank K. We train the parameter on one dataset and test on the other. When combining WIG and QF, a simple linear combination is used and the combination weight is learned from the training data set. From these results, we can see that our methods are considerably more accurate than the baselines. We also observe that further improvements are obtained from the combination of WIG and QF, suggesting that they measure different properties of the retrieval process that relate to performance. We find that our methods generalize well on TB06-adhoc, while the correlation of the clarity score with retrieval performance on this data set is considerably worse. Further investigation shows that the mean average precision of TB06-adhoc is 0.342, about 10% better than that of the first data set. While the other three methods typically consider the top 100 or fewer documents of a ranked list, the clarity method usually needs the top 500 or more documents to adequately measure the coherence of a ranked list. Higher mean average precision makes the ranked lists retrieved by different queries more similar in terms of coherence at the level of the top 500 documents. We believe that this is the main reason for the low accuracy of the clarity score on the second data set. Though this paper focuses on a Web search environment, it is desirable that our techniques work consistently well in other situations. To this end, we examine the effectiveness of our techniques on the 2004 Robust Track. For our methods, we evenly divide all of the test queries into five groups and perform five-fold cross
validation. Each time we use one group for training and the remaining four groups for testing. We make use of all of the queries for our two baselines, that is, the clarity score and the robustness score. The parameters for our baselines are the same as those used in [1]. The results shown in Table 5 demonstrate that the prediction accuracy of our methods is on a par with that of the two strong baselines.

Clarity  Robust  WIG    QF
0.464    0.539   0.468  0.464

Table 5: Comparison of Pearson's correlation coefficients on the 2004 Robust Track for the clarity score, the robustness score, WIG and query feedback (QF). Bold cases mean the results are statistically significant at the 0.01 level.

Furthermore, we examine the sensitivity of our methods to the cutoff rank K. WIG is quite robust to K on the Terabyte Tracks (2004-2006), while it prefers a small value of K, such as 5, on the 2004 Robust Track. In other words, a small value of K is a nearly-optimal choice for both kinds of tracks. Considering the fact that all other parameters involved in WIG are fixed and consequently the same for the two cases, this means WIG can achieve nearly-optimal prediction accuracy in two considerably different situations with exactly the same parameter setting. QF, on the other hand, prefers a larger value of K, such as 100, on the Terabyte Tracks and a smaller value, such as 25, on the 2004 Robust Track.

4.2.2 NP Queries

We adopt WIG and first rank change (FRC) for predicting NP-query performance. We also try a linear combination of the two, as in the previous section. The combination weight is obtained from the other data set. We use the correlation with reciprocal ranks, measured by Pearson's correlation test, to evaluate prediction quality. The results are presented in Table 6. Again, our baselines are the clarity score and the robustness score. To make a fair comparison, we tune the clarity score in different ways. We found that using the first ranked
document to build the query model yields the best prediction accuracy. We also attempted to utilize document structure by using the mixture of language models mentioned in Section 3.1; little improvement was obtained. The correlation coefficients for the clarity score reported in Table 6 are the best we have found. As we can see, our methods considerably outperform the clarity score technique on both of the runs. This confirms our intuition that a coherence-based measure like the clarity score is inappropriate for NP queries.

Methods  Clarity  Robust  WIG    FRC    WIG+FRC
TB05-NP  0.150    -0.370  0.458  0.440  0.525
TB06-NP  0.112    -0.160  0.478  0.386  0.515

Table 6: Pearson's correlation coefficients for correlation with reciprocal ranks on the Terabyte Tracks (NP) for the clarity score, the robustness score, WIG, the first rank change (FRC) and a linear combination of WIG and FRC. Bold cases mean the results are statistically significant at the 0.01 level.

Regarding the robustness score, we also tune the parameters and report the best results we have found. We observe an interesting and surprising negative correlation with reciprocal ranks, which we explain briefly. A high robustness score means that a number of top ranked documents in the original ranked list are still highly ranked after perturbing the documents. The existence of such documents is a good sign of high performance for content-based queries, as these queries usually have a number of relevant documents [1]. With regard to NP queries, however, one fundamental difference is that there is only one relevant document for each query. The existence of several such stable top-ranked documents can confuse the ranking function and lead to low retrieval performance. Although the negative correlation with retrieval performance exists, the strength of the correlation is weaker and less consistent compared to our methods, as shown in Table 6. Based on the above analysis, we can see that current prediction techniques like the clarity
score and the robustness score, which are mainly designed for content-based queries, face significant challenges and are inadequate to deal with NP queries. Our two techniques proposed for NP queries consistently demonstrate good prediction accuracy, displaying initial success in solving the problem of predicting performance for NP queries. Another point we want to stress is that the WIG method works well for both types of queries, a desirable property that most prediction techniques lack.

4.3 Unknown Query Types

In this section, we run two kinds of experiments without access to query type labels. First, we assume that only one type of query exists but the type is unknown. Second, we experiment on a mixture of content-based and NP queries. The following two subsections report results for the two conditions respectively.

4.3.1 Only One Type Exists

We assume that all queries are of the same type, that is, they are either NP queries or content-based queries. We choose WIG to deal with this case because it shows good prediction accuracy for both types of queries in the previous section. We consider two cases: (1) CB: all 150 title queries from the ad-hoc task of the Terabyte Tracks 2004-2006; (2) NP: all 433 NP queries from the Named-Page finding task of the Terabyte Tracks 2005 and 2006. We take a simple strategy of labeling all of the queries in each case as the same type (either NP or CB) regardless of their actual type. The computation of WIG is then based on the labeled query type instead of the actual type. There are four possibilities with respect to the relation between the actual type and the labeled type. The correlation with retrieval performance under the four possibilities is presented in Table 7. For example, the value 0.445 at the intersection of the second row and the third column shows the Pearson's correlation coefficient for correlation with average precision when the content-based queries are incorrectly labeled as the NP
type. Based on these results, we recommend treating all queries as the NP type when only one query type exists and accurate query classification is not feasible, considering the risk that a large loss of accuracy will occur if NP queries are incorrectly labeled as content-based queries. These results also demonstrate the strong adaptability of WIG to different query types.

             CB (labeled)  NP (labeled)
CB (actual)  0.536         0.445
NP (actual)  0.174         0.467

Table 7: Comparison of Pearson's correlation coefficients for correlation with retrieval performance under four possibilities on the Terabyte Tracks. Bold cases mean the results are statistically significant at the 0.01 level.

4.3.2 A Mixture of Content-based and NP Queries

A mixture of the two types of queries is a more realistic situation that a Web search engine will meet. We evaluate prediction accuracy by how accurately poorly-performing queries can be identified by the prediction method, assuming that actual query types are unknown (but we can predict query types). This is a challenging task because both the predicted and the actual performance for one type of query can be incomparable to that for the other type. Next we discuss how to implement our evaluation. We create a query pool which consists of all of the 150 ad-hoc title queries from the Terabyte Tracks 2004-2006 and all of the 433 NP queries from the Terabyte Tracks 2005 and 2006. We divide the queries in the pool into two classes: good (better than 50% of the queries of the same type in terms of retrieval performance) and bad (otherwise). According to these standards, an NP query with a reciprocal rank above 0.2 or a content-based query with an average precision above 0.315 is considered good. Then, each time we randomly select one query Q from the pool with probability p that Q is content-based. The remaining queries are used as training data. We first decide the type of query Q according to a query classifier. Namely, the query classifier tells
us whether query Q is NP or content-based. Based on the predicted query type and the score computed for query Q by a prediction technique, a binary decision is made about whether query Q is good or bad by comparing the score to the threshold of the predicted query type obtained from the training data. Prediction accuracy is measured by the accuracy of the binary decision. In our implementation, we repeatedly take a test query from the query pool, and prediction accuracy is computed as the percentage of correct decisions, that is, a good (bad) query is predicted to be good (bad). It is obvious that random guessing will lead to 50% accuracy. Let us take the WIG method as an example to illustrate the process. Two WIG thresholds (one for NP queries and the other for content-based queries) are trained by maximizing the prediction accuracy on the training data. When a test query is labeled as the NP (CB) type by the query type classifier, it is predicted to be good if and only if the WIG score for this query is above the NP (CB) threshold. Similar procedures are taken for other prediction techniques. Now we briefly introduce the automatic query type classifier used in this paper. We find that the robustness score, though originally proposed for performance prediction, is a good indicator of query type: on average, content-based queries have a much higher robustness score than NP queries. For example, Figure 2 shows the distributions of robustness scores for NP and content-based queries. According to this finding, the robustness score classifier attaches an NP (CB) label to a query if the robustness score for the query is below (above) a threshold trained from the training data.

[Figure 2 omitted: probability density (y-axis) of robustness scores (x-axis, roughly -1 to 1) for the NP and content-based query sets.]

Figure 2: Distribution of robustness scores for NP and CB queries. The NP queries are the 252 NP topics from the 2005 Terabyte Track. The content-based queries are the 150 ad-hoc title
queries from the Terabyte Tracks 2004-2006. The probability distributions are estimated by the kernel density estimation method.

Strategies  Robust  WIG-1  WIG-2  WIG-3  Optimal
p=0.6       0.565   0.624  0.665  0.684  0.701
p=0.4       0.567   0.633  0.654  0.673  0.696

Table 8: Comparison of prediction accuracy for five strategies in the mixed-query situation. Two ways to sample a query from the pool: (1) the sampled query is content-based with probability p=0.6 (that is, the query is NP with probability 0.4); (2) set the probability p=0.4.

We consider five strategies in our experiments. In the first strategy (denoted by Robust), we use the robustness score for query performance prediction with the help of a perfect query classifier that always correctly maps a query into one of the two categories (that is, NP or CB). This strategy represents the level of prediction accuracy that current prediction techniques can achieve in the ideal condition that query types are known. In the next three strategies, the WIG method is adopted for performance prediction. The difference among the three is that a different query classifier is used for each strategy: (1) the classifier always classifies a query into the NP type; (2) the classifier is the robustness score classifier mentioned above; (3) the classifier is a perfect one. These three strategies are denoted by WIG-1, WIG-2 and WIG-3 respectively. Our interest in WIG-1 is based on the results from Section 4.3.1. In the last strategy (denoted by Optimal), which serves as an upper bound on how well we can do so far, we fully make use of our prediction techniques for each query type, assuming a perfect query classifier is available. Specifically, we linearly combine WIG and QF for content-based queries, and WIG and FRC for NP queries. The results for the five strategies are shown in Table 8. For each strategy, we try two ways to sample a query from the pool: (1) the sampled query
is CB with probability p=0.6 (the query is NP with probability 0.4); (2) set the probability p=0.4. From Table 8 we can see that, in terms of prediction accuracy, WIG-2 (the WIG method with the automatic query classifier) is not only better than the first two strategies, but also close to WIG-3, where a perfect classifier is assumed. Some further improvements over WIG-3 are observed when WIG is combined with the other prediction techniques. The merit of WIG-2 is that it provides a practical solution to automatically identifying poorly performing queries in a Web search environment with mixed query types, which poses considerable obstacles to traditional prediction techniques.

5. CONCLUSIONS AND FUTURE WORK

To our knowledge, our paper is the first to thoroughly explore prediction of query performance in web search environments. We demonstrated that our models result in higher prediction accuracy than previously published techniques that were not specially devised for web search scenarios. In this paper, we focus on two types of queries in web search: content-based and Named-Page (NP) finding queries, corresponding to the ad-hoc retrieval task and the Named-Page finding task respectively. For both types of web queries, our prediction models were shown to be substantially more accurate than the current state-of-the-art techniques. Furthermore, we considered a more realistic case where no prior information on query types is available. We demonstrated that the WIG method is particularly suitable for this situation. Considering the adaptability of WIG to a range of collections and query types, one of our future plans is to apply this method to predict user preference of search results on realistic data collected from a commercial search engine. Other than accuracy, another major issue that prediction techniques have to deal with in a Web environment is efficiency. Fortunately, since the WIG score is computed just over the terms and the phrases that appear in the query, this
calculation can be made very efficient with the support of an index. On the other hand, the computation of QF and FRC is relatively less efficient, since QF needs to retrieve from the whole collection twice and FRC needs to repeatedly rank the perturbed documents. How to improve the efficiency of QF and FRC is future work. In addition, the prediction techniques proposed in this paper have the potential to improve retrieval performance in combination with other IR techniques. For example, our techniques can be incorporated into popular query modification techniques such as query expansion and query relaxation. Guided by performance prediction, we can make a better decision on when or how to modify queries to enhance retrieval effectiveness. We would like to carry out research in this direction in the future.

6. ACKNOWLEDGMENTS

This work was supported in part by the Center for Intelligent Information Retrieval, in part by the Defense Advanced Research Projects Agency (DARPA) under contract number HR0011-06-C-0023, and in part by an award from Google. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect those of the sponsor. In addition, we thank Donald Metzler for his valuable comments on this work.

7. REFERENCES

[1] Y. Zhou and W. B. Croft. Ranking Robustness: A Novel Framework to Predict Query Performance. In Proceedings of CIKM 2006.
[2] D. Carmel, E. Yom-Tov, A. Darlow and D. Pelleg. What Makes a Query Difficult? In Proceedings of SIGIR 2006.
[3] C. L. A. Clarke, F. Scholer and I. Soboroff. The TREC 2005 Terabyte Track. In the Online Proceedings of 2005 TREC.
[4] B. He and I. Ounis. Inferring query performance using pre-retrieval predictors. In Proceedings of SPIRE 2004.
[5] S. Tomlinson. Robust, Web and Terabyte Retrieval with Hummingbird SearchServer at TREC 2004. In the Online Proceedings of 2004 TREC.
[6] S. Cronen-Townsend, Y. Zhou and W. B.
Croft. Predicting Query Performance. In Proceedings of SIGIR 2002.
[7] V. Vinay, I. J. Cox, N. Milic-Frayling and K. Wood. On Ranking the Effectiveness of Searches. In Proceedings of SIGIR 2006.
[8] D. Metzler and W. B. Croft. A Markov Random Field Model for Term Dependencies. In Proceedings of SIGIR 2005.
[9] D. Metzler, T. Strohman, Y. Zhou and W. B. Croft. Indri at TREC 2005: Terabyte Track. In the Online Proceedings of 2005 TREC.
[10] P. Ogilvie and J. Callan. Combining document representations for known-item search. In Proceedings of SIGIR 2003.
[11] A. Berger and J. Lafferty. Information retrieval as statistical translation. In Proceedings of SIGIR 1999.
[12] Indri search engine: http://www.lemurproject.org/indri/
[13] I. J. Taneja. On Generalized Information Measures and Their Applications. Advances in Electronics and Electron Physics, Academic Press (USA), 76, 1989, 327-413.
[14] S. Cronen-Townsend, Y. Zhou and W. B. Croft. A Framework for Selective Query Expansion. In Proceedings of CIKM 2004.
[15] F. Song and W. B. Croft. A general language model for information retrieval. In Proceedings of SIGIR 1999.
[16] Personal email contact with Vishwa Vinay and our own experiments.
[17] E. Yom-Tov, S. Fine, D. Carmel and A. Darlow. Learning to Estimate Query Difficulty: Including Applications to Missing Content Detection and Distributed Information Retrieval. In Proceedings of SIGIR 2005.

Query Performance Prediction in Web Search Environments

ABSTRACT

Current prediction techniques, which are generally designed for content-based queries and are typically evaluated on relatively homogenous test collections of small sizes, face serious challenges in web search environments, where collections are significantly more heterogeneous and different types of retrieval tasks exist. In this paper, we present three techniques to address these challenges. We focus on performance prediction for two types of queries in web search environments: content-based and Named-Page finding. Our evaluation
is mainly performed on the GOV2 collection. In addition to evaluating our models for the two types of queries separately, we consider a more challenging and realistic situation in which the two types of queries are mixed together without prior information on query types. To assist prediction in the mixed-query situation, a novel query classifier is adopted. Results show that our prediction of web query performance is substantially more accurate than the current state-of-the-art prediction techniques. Consequently, our paper provides a practical approach to performance prediction in real-world web settings.

1. INTRODUCTION

Query performance prediction has many applications in a variety of information retrieval (IR) areas such as improving retrieval consistency, query refinement, and distributed IR. The importance of this problem has been recognized by IR researchers, and a number of new methods have been proposed for prediction recently [1, 2, 17]. Most work on prediction has focused on the traditional "ad-hoc" retrieval task, where query performance is measured according to topical relevance. These prediction models are evaluated on TREC document collections, which typically consist of no more than one million relatively homogeneous newswire articles. With the popularity and influence of the Web, prediction techniques that work well for web-style queries are highly desirable. However, web search environments pose significant challenges to current prediction models, which are mainly designed for traditional TREC settings. Here we outline some of these challenges.

First, web collections, which are much larger than conventional TREC collections, include a variety of documents that differ in many aspects, such as quality and style. Current prediction techniques can be vulnerable to these characteristics of web collections. For example, the reported prediction accuracy of the ranking robustness technique and the clarity technique on the GOV2
collection (a large web collection) is significantly worse than on the other TREC collections [1]. Similar prediction accuracy on the GOV2 collection using another technique is reported in [2], confirming the difficulty of predicting query performance on a large web collection.

Second, web search goes beyond the scope of the ad-hoc retrieval task based on topical relevance. For example, the Named-Page (NP) finding task, which is a navigational task, is also popular in web retrieval. Query performance prediction for the NP task is still necessary, since NP retrieval performance is far from perfect. In fact, according to the report on the NP task of the 2005 Terabyte Track [3], about 40% of the test queries perform poorly (no correct answer in the first 10 search results) even in the best run from the top group. To our knowledge, little research has explicitly addressed the problem of NP-query performance prediction. Current prediction models devised for content-based queries will be less effective for NP queries, considering the fundamental differences between the two.

Third, in real-world web search environments, user queries are usually a mixture of different types, and prior knowledge about the type of each query is generally unavailable. The mixed-query situation raises new problems for query performance prediction. For instance, we may need to incorporate a query classifier into prediction models. Despite these problems, the ability to handle this situation is a crucial step towards turning query performance prediction from an interesting research topic into a practical tool for web retrieval.

In this paper, we present three techniques to address the above challenges that current prediction models face in web search environments. Our work focuses on query performance prediction for the content-based (ad-hoc) retrieval task and the named-page finding task in the context of web retrieval. Our first technique, called weighted information gain
(WIG), makes use of both single-term and term-proximity features to estimate the quality of the top retrieved documents for prediction. We find that WIG offers consistent prediction accuracy across various test collections and query types. Moreover, we demonstrate that good prediction accuracy can be achieved in the mixed-query situation by using WIG with the help of a query type classifier. Query feedback and first rank change, our second and third prediction techniques, perform well for content-based queries and NP queries, respectively.

Our main contributions include: (1) considerably improved prediction accuracy for web content-based queries over several state-of-the-art techniques; (2) new techniques for successfully predicting NP-query performance; (3) a practical and fully automatic solution to predicting mixed-query performance. In addition, one minor contribution is the finding that the robustness score [1], which was originally proposed for performance prediction, is helpful for query classification.

2. RELATED WORK

As mentioned in the introduction, a number of prediction techniques have been proposed recently that focus on content-based queries in the topical relevance (ad-hoc) task. We know of no published work that addresses other types of queries, such as NP queries, let alone a mixture of query types. Next we review some representative models.

The major difficulty of performance prediction comes from the fact that many factors have an impact on retrieval performance. Each factor affects performance to a different degree, and the overall effect is hard to predict accurately. Therefore, it is not surprising that simple features, such as the frequency of query terms in the collection [4] and the average IDF of query terms [5], do not predict well. In fact, most of the successful techniques are based on measuring some characteristics of the retrieved document set to estimate topic difficulty. For example, the clarity
score [6] measures the coherence of a list of documents by the KL-divergence between the query model and the collection model. The robustness score [1] quantifies another property of a ranked list: the robustness of the ranking in the presence of uncertainty. Carmel et al. [2] found that the distance, measured by the Jensen-Shannon divergence, between the retrieved document set and the collection is significantly correlated with average precision. Vinay et al. [7] proposed four measures that capture the geometry of the top retrieved documents for prediction. The most effective measure is the sensitivity to document perturbation, an idea somewhat similar to the robustness score. Unfortunately, their way of measuring the sensitivity does not perform equally well for short queries, and prediction accuracy drops considerably when a state-of-the-art retrieval technique (like Okapi or a language modeling approach) is adopted for retrieval instead of the tf-idf weighting used in their paper [16].

The difficulties of applying these models in web search environments have already been mentioned. In this paper, we mainly adopt the clarity score and the robustness score as our baselines. We experimentally show that the baselines, even after being carefully tuned, are inadequate for the web environment.

One of our prediction models, WIG, is related to the Markov random field (MRF) model for information retrieval [8]. The MRF model directly models term dependence and is found to be highly effective across a variety of test collections (particularly web collections) and retrieval tasks. This model is used to estimate the joint probability distribution over documents and queries, an important part of WIG. The superiority of WIG over other prediction techniques based on unigram features, which will be demonstrated later in our paper, coincides with that of MRF for retrieval. In other words, it is interesting to note that term dependence, when modeled appropriately, can
be helpful for both improving and predicting retrieval performance.

3. PREDICTION MODELS
3.1 Weighted Information Gain (WIG)
3.2 Query Feedback
3.3 First Rank Change (FRC)
4. EVALUATION
4.1 Experimental Setup
4.2 Known Query Types
4.2.1 Content-based Queries
4.2.2 NP Queries
4.3 Unknown Query Types
4.3.1 Only One Type Exists
4.3.2 A Mixture of Content-based and NP Queries

5. CONCLUSIONS AND FUTURE WORK

To our knowledge, our paper is the first to thoroughly explore prediction of query performance in web search environments. We demonstrated that our models result in higher prediction accuracy than previously published techniques, which were not specially devised for web search scenarios. In this paper, we focus on two types of queries in web search: content-based and Named-Page (NP) finding queries, corresponding to the ad-hoc retrieval task and the Named-Page finding task, respectively. For both types of web queries, our prediction models were shown to be substantially more accurate than the current state-of-the-art techniques. Furthermore, we considered the more realistic case in which no prior information on query types is available. We demonstrated that the WIG method is particularly suitable for this situation. Considering the adaptability of WIG to a range of collections and query types, one of our future plans is to apply this method to predict user preference of search results on realistic data collected from a commercial search engine.

Other than accuracy, another major issue that prediction techniques have to deal with in a web environment is efficiency. Fortunately, since the WIG score is computed just over the terms and phrases that appear in the query, this calculation can be made very efficient with the support of an index. On the other hand, the computation of QF and FRC is relatively less efficient, since QF needs to run retrieval over the whole collection twice and FRC needs to repeatedly rank the perturbed documents. How to improve the efficiency
of QF and FRC is our future work. In addition, the prediction techniques proposed in this paper have the potential to improve retrieval performance when combined with other IR techniques. For example, our techniques can be incorporated into popular query modification techniques such as query expansion and query relaxation. Guided by performance prediction, we can make a better decision on when or how to modify queries to enhance retrieval effectiveness. We would like to carry out research in this direction in the future.

3. PREDICTION MODELS

3.1 Weighted Information Gain (WIG)

This section introduces a weighted information gain approach that incorporates both single-term and proximity features for predicting performance for both content-based and Named-Page (NP) finding queries. Given a set of queries Q = {Qs} (s = 1, 2, ..., N), which includes all possible user queries, and a set of documents D = {Dt} (t = 1, 2, ..., M), we assume that each query-document pair (Qs, Dt) is manually judged and will be put in a relevance list if Qs is found to be relevant to Dt. The joint probability P(Qs, Dt) over queries Q and documents D denotes the probability that pair (Qs, Dt) will be in the relevance list. Such assumptions are similar to those used in [8].

Assuming that the user issues query Qi ∈ Q and the retrieval in response to Qi yields a ranked list L of documents, we calculate the amount of information contained in P(Qs, Dt) with respect to Qi and L by Eq. 1, which is a variant of entropy called the weighted entropy [13]. The weights in Eq. 1 are solely determined by Qi and L. Unfortunately, the weighted entropy H_{Qi,L}(Qs, Dt) computed by Eq. 3, which represents the amount of information about how likely the top-ranked documents in L would be relevant to query Qi on average, cannot be compared across different queries, making it inappropriate for directly predicting query performance. To mitigate this problem, we introduce a background distribution P(Qs, C) over Q and D by imagining that every document in D is replaced by the same special document C, which represents average language usage. In this paper, C is created by concatenating every document in D.
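To make the background distribution concrete: since C is just the concatenation of all documents, a feature's probability under C reduces to its corpus-wide relative frequency. The following minimal sketch illustrates that reading; the function and variable names are ours, not the paper's.

```python
from collections import Counter

def collection_model(documents):
    """Treat the whole corpus as one concatenated document C and
    return term -> relative frequency, i.e. an estimate of P(term | C)."""
    counts = Counter()
    for doc in documents:
        counts.update(doc.split())
    total = sum(counts.values())
    return {term: n / total for term, n in counts.items()}

docs = ["query performance prediction", "web search performance"]
p_c = collection_model(docs)  # p_c["performance"] == 2/6
```

Proximity features (exact phrases, unordered windows) would be counted the same way over the concatenated text, rather than single terms only.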
Roughly speaking, C is the collection (the document set) {Dt} without document boundaries. Similarly, the weighted entropy H_{Qi,L}(Qs, C) calculated by Eq. 3 represents the amount of information about how likely an average document (represented by the whole collection) would be relevant to query Qi. Now we introduce our performance predictor WIG, which is the weighted information gain [13] computed as the difference between H_{Qi,L}(Qs, Dt) and H_{Qi,L}(Qs, C). Specifically, given query Qi, collection C and ranked list L of documents, WIG is calculated as follows:

WIG computed by Eq. 4 measures the change in information about the quality of retrieval (in response to query Qi) from an imaginary state in which only an average document is retrieved to a posterior state in which the actual search results are observed. We hypothesize that WIG is positively correlated with retrieval effectiveness, because high-quality retrieval should be much more effective than just returning the average document.

The heart of this technique is how to estimate the joint distribution P(Qs, Dt). In the language modeling approach to IR, a variety of models can be applied readily to estimate this distribution. Although most of these models are based on the bag-of-words assumption, recent work on modeling term dependence under the language modeling framework has shown consistent and significant improvements in retrieval effectiveness over bag-of-words models. Inspired by the success of incorporating term proximity features into language models, we decided to adopt a good dependence model to estimate the probability P(Qs, Dt). The model we chose for this paper is Metzler and Croft's Markov Random Field (MRF) model, which has already demonstrated superiority over a number of collections and different retrieval tasks [8, 9]. According to the MRF model, log P(Qi, Dt) can be written as in Eq. 5, where Z1 is a constant that ensures that P(Qi, Dt) sums to 1. F(Qi) consists of a set of features expanded
from the original query Qi. For example, assuming that query Qi is "talented student program", F(Qi) includes features like "program" and "talented student". We consider two kinds of features: single-term features T and proximity features P. Proximity features include exact phrase (#1) and unordered window (#uwN) features, as described in [8]. Note that F(Qi) is the union of T(Qi) and P(Qi). For more details on F(Qi), such as how to expand the original query Qi to F(Qi), we refer the reader to [8] and [9]. P(ξ|Dt) denotes the probability that feature ξ will occur in Dt. More details on P(ξ|Dt) will be provided later in this section.

The choice of λ_ξ is somewhat different from that used in [8], since λ_ξ plays a dual role in our model. The first role, which is the same as in [8], is to weight between single-term and proximity features. The other role, which is specific to our prediction task, is to normalize for the size of F(Qi). We found that the following weight strategy for λ_ξ satisfies the above two roles and generalizes well on a variety of collections and query types: λ_ξ = λ_T / sqrt(|T(Qi)|) for single-term features and λ_ξ = (1 - λ_T) / sqrt(|P(Qi)|) for proximity features, where |T(Qi)| and |P(Qi)| denote the number of single-term and proximity features in F(Qi), respectively. The reason for choosing the square root function in the denominator of λ_ξ is to penalize a feature set of large size appropriately, making WIG more comparable across queries of various lengths. λ_T is a fixed parameter, set to 0.8 according to [8] throughout this paper.

Similarly, log P(Qi, C) can be written as in Eq. 7, where Z2 is the corresponding normalizing constant. When the constants Z1 and Z2 are dropped, WIG computed in Eq. 4 can be rewritten as Eq. 8 by plugging in Eq. 5 and Eq. 7. One of the advantages of WIG over other techniques is that it can handle both content-based and NP queries well. Based on the type (or the predicted type) of Qi, the calculation of WIG in Eq. 8 differs in two aspects: (1) how to estimate P(ξ|Dt) and P(ξ|C), and (2) how to choose K.
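Putting these pieces together, the Eq. 8 form of WIG, as described above, averages λ-weighted log ratios between document and collection feature probabilities over the top K documents of L. The sketch below is our reading of that description, not the paper's implementation; `p_doc` and `p_coll` stand for smoothed estimates of P(ξ|Dt) and P(ξ|C).

```python
import math

def wig(term_feats, prox_feats, p_doc, p_coll, top_k_docs, lambda_t=0.8):
    """Sketch of the WIG score: for each of the top-K documents, sum
    lambda-weighted log(P(feat|D) / P(feat|C)) over all query features,
    then average over the K documents.  Probabilities must be smoothed
    (non-zero).  The lambda weights penalize large feature sets via a
    square root, as described in the text."""
    w_term = lambda_t / math.sqrt(len(term_feats)) if term_feats else 0.0
    w_prox = (1 - lambda_t) / math.sqrt(len(prox_feats)) if prox_feats else 0.0
    score = 0.0
    for d in top_k_docs:
        for f in term_feats:
            score += w_term * math.log(p_doc(f, d) / p_coll(f))
        for f in prox_feats:
            score += w_prox * math.log(p_doc(f, d) / p_coll(f))
    return score / len(top_k_docs)
```

If the top-ranked documents give the query's features higher probability than the collection does, every log ratio is positive and WIG is positive, matching the hypothesis that WIG tracks retrieval effectiveness; K would be tuned for content-based queries and fixed to 1 for NP queries.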
For content-based queries, P(ξ | C) is estimated by the relative frequency of feature ξ in collection C as a whole.\nThe estimation of P(ξ | Dt) is the same as in [8].\nNamely, we estimate P(ξ | Dt) by the relative frequency of feature ξ in Dt linearly smoothed with the collection frequency P(ξ | C).\nK in Eq. 8 is treated as a free parameter.\nNote that K is the only free parameter in the computation of WIG for content-based queries because all parameters involved in P(ξ | Dt) are assumed to be fixed by taking the suggested values in [8].\nRegarding NP queries, we make use of document structure to estimate P(ξ | Dt) and P(ξ | C) by the so-called mixture of language models proposed in [10] and incorporated into the MRF model for Named-Page finding retrieval in [9].\nThe basic idea is that a document (collection) is divided into several fields such as the title field, the main-body field and the heading field.\nP(ξ | Dt) and P(ξ | C) are then estimated by a linear combination of the language models from each field.\nDue to space constraints, we refer the reader to [9] for details.\nWe adopt the exact same set of parameters as used in [9] for estimation.\nWith regard to K in Eq. 8, we set K to 1 because the Named-Page finding task heavily focuses on the first ranked document.\nConsequently, there are no free parameters in the computation of WIG for NP queries.\n3.2 Query Feedback\nIn this section, we introduce another technique called query feedback (QF) for prediction.\nSuppose that a user issues query Q to a retrieval system and a ranked list L of documents is returned.\nWe view the retrieval system as a noisy channel.\nSpecifically, we assume that the output of the channel is L and the input is Q.\nAfter going through the channel, Q becomes corrupted and is transformed to ranked list L.
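The QF predictor that this section goes on to develop — compress L into a weighted query Q', retrieve with it, and score by the top-K overlap between the two ranked lists — can be sketched end to end as follows. The ranked-list model, KL-contribution ranking, and overlap measure follow the description in this section; representing documents as term-frequency dictionaries and the `collection_prob` interface are assumptions for illustration.

```python
import math
from collections import defaultdict

def compress_ranked_list(ranked_docs, collection_prob, n_terms=20):
    """Decode ranked list L into a weighted query Q'.

    ranked_docs: list of term-frequency dicts, best document first.
    collection_prob(term): assumed callable giving P(term | C).
    Returns the n_terms (term, weight) pairs with the largest contribution
    to the KL divergence from the collection model."""
    # Ranked-list language model: P(w|L) = sum_D P(w|D) P(D|L), with
    # P(D|L) a linearly decreasing function of the rank of D.
    k = len(ranked_docs)
    rank_weights = [k - i for i in range(k)]
    z = float(sum(rank_weights))
    p_w_l = defaultdict(float)
    for w_rank, doc in zip(rank_weights, ranked_docs):
        total = float(sum(doc.values()))
        for term, freq in doc.items():
            p_w_l[term] += (w_rank / z) * (freq / total)
    # Rank terms by their KL-divergence contribution P(w|L) log(P(w|L)/P(w|C)).
    contrib = {t: p * math.log(p / collection_prob(t)) for t, p in p_w_l.items()}
    return sorted(contrib.items(), key=lambda kv: -kv[1])[:n_terms]

def qf_overlap(list_l, list_l_prime, k=50):
    """QF score: fraction of the top-k of L also present in the top-k of
    L', the list retrieved with the compressed query Q'."""
    return len(set(list_l[:k]) & set(list_l_prime[:k])) / float(k)
```

A high overlap suggests the channel preserved the query's sense; a low overlap flags a likely poorly performing query.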
By thinking about the retrieval process this way, the problem of predicting retrieval effectiveness turns into the task of evaluating the quality of the channel.\nIn other words, prediction becomes finding a way to measure the degree of corruption that arises when Q is transformed to L.\nAs directly computing the degree of the corruption is difficult, we tackle this problem by approximation.\nOur main idea is to measure to what extent information on Q can be recovered from L, on the assumption that only L is observed.\nSpecifically, we design a decoder that can accurately translate L back into a new query Q', and the similarity S between the original query Q and the new query Q' is adopted as a performance predictor.\nThis is a sketch of how the QF technique predicts query performance.\nBefore filling in more details, we briefly discuss why this method should work.\nThere is a relation between the similarity S defined above and retrieval performance.\nOn the one hand, if the retrieval has strayed from the original sense of the query Q, the new query Q' extracted from ranked list L in response to Q will be very different from the original query Q.
On the other hand, a query distilled from a ranked list containing many relevant documents is likely to be similar to the original query.\nFurther examples in support of this relation will be provided later.\nNext we detail how to build the decoder and how to measure the similarity S.\nIn essence, the goal of the decoder is to compress ranked list L into a few informative terms that represent the content of the top ranked documents in L.\nOur approach is to represent ranked list L by a language model (a distribution over terms).\nTerms are then ranked by their contribution to the language model's KL (Kullback-Leibler) divergence from the background collection model.\nThe top ranked terms are chosen to form the new query Q'.\nThis approach is similar to that used in Section 4.1 of [11].\nSpecifically, we take three steps to compress ranked list L into query Q' without referring to the original query.\n1.\nWe adopt the ranked list language model [14] to estimate a language model based on ranked list L.\nThe model can be written as:\nP(w | L) = Σ_{D ∈ L} P(w | D) P(D | L) (9)\nwhere w is any term and D is a document.\nP(D | L) is estimated by a linearly decreasing function of the rank of document D.\n2.\nEach term in P(w | L) is ranked by the following KL-divergence contribution:\nP(w | L) log ( P(w | L) / P(w | C) ) (10)\nwhere P(w | C) is the collection model estimated by the relative frequency of term w in collection C as a whole.\n3.\nThe top N ranked terms by Eq. 10 form a weighted query Q' = {(wi, ti)}, i = 1, ..., N.
where wi denotes the i-th ranked term and weight ti is the KL-divergence contribution of wi in Eq. 10.\nTable 1: top 5 terms compressed from the ranked list in response to query "Cruise ship damage sea life"\nTwo representative examples, one for a poorly performing query "Cruise ship damage sea life" (TREC topic 719; average precision: 0.08) and the other for a high performing query "prostate cancer treatments" (TREC topic 710; average precision: 0.49), are shown in Tables 1 and 2 respectively.\nThese examples indicate how the similarity between the original and the new query correlates with retrieval performance.\nThe parameter N in step 3 is set to 20 empirically; choosing a larger value of N is unnecessary since the weights after the top 20 are usually too small to make any difference.\nTable 2: top 5 terms compressed from the ranked list in response to query "prostate cancer treatments"\nTo measure the similarity between original query Q and new query Q', we first use Q' to do retrieval on the same collection.\nA variant of the query likelihood model [15] is adopted for retrieval.\nNamely, documents are ranked by\nΣ_{i=1..N} ti log P(wi | D) (11)\nwhere wi is a term in Q', ti is the associated weight, and D is a document.\nLet L' denote the new ranked list returned from the above retrieval.\nThe similarity is measured by the overlap of documents in L and L': specifically, the percentage of documents in the top K documents of L that are also present in the top K documents of L'.\nThe cutoff K is treated as a free parameter.\nWe summarize here how the QF technique predicts performance given a query Q and the associated ranked list L.\nWe first obtain a weighted query Q' compressed from L by the above three steps.\nThen we use Q' to perform retrieval, and the new ranked list is L'.\nThe overlap of documents in L and L' is used for prediction.\n3.3 First Rank Change (FRC)\nIn this section, we propose a method called the first rank change (FRC) for performance prediction for NP
queries.\nThis method is derived from the ranking robustness technique [1], which is mainly designed for content-based queries.\nWhen directly applied to NP queries, the robustness technique is less effective because it takes the top ranked documents as a whole into account, while NP queries usually have only a single relevant document.\nInstead, our technique focuses on the first-ranked document while the main idea of the robustness method remains.\nSpecifically, the pseudo-code for computing FRC is shown in Figure 1.\nInput: (1) ranked list L = {Di}, i = 1, ..., 100, where Di denotes the i-th ranked document; (2) query Q.\nFigure 1: pseudo-code for computing FRC\nFRC approximates the probability that the first ranked document in the original list L will remain ranked first even after the documents are perturbed.\nThe higher the probability is, the more confidence we have in the first ranked document.\nOn the other hand, in the extreme case of a random ranking, the probability would be as low as 0.5.\nWe expect FRC to have a positive association with NP query performance.\nWe adopt [1] to implement the document perturbation step (step 4 in Fig.
1) using Poisson distributions.\nFor more details, we refer the reader to [1].\n4.\nEVALUATION\nWe now present the results of predicting query performance with our models.\nThree state-of-the-art techniques are adopted as our baselines.\nWe evaluate our techniques across a variety of Web retrieval settings.\nAs mentioned before, we consider two types of queries, that is, content-based (CB) queries and Named-Page (NP) finding queries.\nFirst, suppose that the query types are known.\nWe investigate the correlation between the predicted retrieval performance and the actual performance for both types of queries separately.\nResults show that our methods yield considerable improvements over the baselines.\nWe then consider a more challenging scenario where no prior information on query types is available.\nTwo sub-cases are considered.\nIn the first, there exists only one type of query but the actual type is unknown.\nIn the second, we assume a mixture of the two query types.\nWe demonstrate that our models achieve good accuracy under this demanding scenario, making prediction practical in a real-world Web search environment.\n4.1 Experimental Setup\nOur evaluation focuses on the GOV2 collection, which contains about 25 million documents crawled from web sites in the .gov domain during 2004 [3].\nWe create two kinds of data sets for CB queries and NP queries respectively.\nFor the CB type, we use the ad-hoc topics of the Terabyte Tracks of 2004, 2005 and 2006 and name them TB04-adhoc, TB05-adhoc and TB06-adhoc respectively.\nIn addition, we also use the ad-hoc topics of the 2004 Robust Track (RT04) to test the adaptability of our techniques to a non-Web environment.\nFor NP queries, we use the Named-Page finding topics of the Terabyte Tracks of 2005 and 2006 and name them TB05-NP and TB06-NP respectively.\nAll queries used in our experiments are titles of TREC topics, as we center on web retrieval.\nTable 3 summarizes the above data sets.\nTable 3: Summary of
test collections and topics\nRetrieval performance of individual content-based and NP queries is measured by average precision and by the reciprocal rank of the first correct answer respectively.\nWe make use of the Markov Random Field model for both ad-hoc and Named-Page finding retrieval.\nWe adopt the same setting of retrieval parameters used in [8,9].\nThe Indri search engine [12] is used for all of our experiments.\nThough not reported here, we also tried the query likelihood model for ad-hoc retrieval and found that the results change little because of the very high correlation between the query performances obtained by the two retrieval models (0.96 measured by Pearson's coefficient).\n4.2 Known Query Types\nSuppose that query types are known.\nWe treat each type of query separately and measure the correlation with average precision (or with the reciprocal rank in the case of NP queries).\nWe adopt Pearson's correlation test, which reflects the degree of linear relationship between the predicted and the actual retrieval performance.\n4.2.1 Content-based Queries\nTable 4: Pearson's correlation coefficients for correlation with\naverage precision on the Terabyte Tracks (ad-hoc) for clarity score, robustness score, the JSD-based method (we directly cite the score reported in [2]), WIG, query feedback (QF) and a linear combination of WIG and QF.\nBold indicates that the results are statistically significant at the 0.01 level.\nTable 4 shows the correlation with average precision on two data sets: one is a combination of TB04-adhoc and TB05-adhoc (100 topics in total) and the other is TB06-adhoc (50 topics).\nThe reason that we put TB04-adhoc and TB05-adhoc together is to make our results comparable to [2].\nOur baselines are the clarity score (clarity) [6], the robustness score (robust) [1] and the JSD-based method (JSD) [2].\nFor the clarity and robustness scores, we have tried different parameter settings and report the highest correlation coefficients we have
found.\nWe directly cite the result of the JSD-based method reported in [2].\nThe table also shows the results of the Weighted Information Gain (WIG) method and the Query Feedback (QF) method for predicting performance of content-based queries.\nAs described in the previous section, both WIG and QF have one free parameter to set, that is, the cutoff rank K.\nWe train the parameter on one dataset and test on the other.\nWhen combining WIG and QF, a simple linear combination is used and the combination weight is learned from the training data set.\nFrom these results, we can see that our methods are considerably more accurate than the baselines.\nWe also observe that further improvements are obtained from the combination of WIG and QF, suggesting that they measure different properties of the retrieval process that relate to performance.\nWe find that our methods generalize well on TB06-adhoc, while the correlation of the clarity score with retrieval performance on this data set is considerably worse.\nFurther investigation shows that the mean average precision of TB06-adhoc is 0.342, about 10% better than that of the first data set.\nWhile the other three methods typically consider the top 100 or fewer documents of a ranked list, the clarity method usually needs the top 500 or more documents to adequately measure the coherence of a ranked list.\nHigher mean average precision makes ranked lists retrieved by different queries more similar in terms of coherence at the level of the top 500 documents.\nWe believe that this is the main reason for the low accuracy of the clarity score on the second data set.\nThough this paper focuses on a Web search environment, it is desirable that our techniques work consistently well in other situations.\nTo this end, we examine the effectiveness of our techniques on the 2004 Robust Track.\nFor our methods, we evenly divide all of the test queries into five groups and perform five-fold cross validation.\nEach time we use one
group for training and the remaining four groups for testing.\nWe make use of all of the queries for our two baselines, that is, the clarity score and the robustness score.\nThe parameters for our baselines are the same as those used in [1].\nThe results shown in Table 5 demonstrate that the prediction accuracy of our methods is on a par with that of the two strong baselines.\nTable 5: Comparison of Pearson's correlation coefficients on\nthe 2004 Robust Track for clarity score, robustness score, WIG and query feedback (QF).\nBold indicates that the results are statistically significant at the 0.01 level.\nFurthermore, we examine the sensitivity of our methods to the cutoff rank K. WIG is quite robust to K on the Terabyte Tracks (2004-2006), while it prefers a small value of K such as 5 on the 2004 Robust Track.\nIn other words, a small value of K is a nearly-optimal choice for both kinds of tracks.\nConsidering the fact that all other parameters involved in WIG are fixed and consequently the same for the two cases, this means WIG can achieve nearly-optimal prediction accuracy in two considerably different situations with exactly the same parameter setting.\nRegarding QF, it prefers a larger value of K such as 100 on the Terabyte Tracks and a smaller value of K such as 25 on the 2004 Robust Track.\n4.2.2 NP Queries\nWe adopt WIG and first rank change (FRC) for predicting NP query performance.\nWe also try a linear combination of the two as in the previous section.\nThe combination weight is obtained from the other data set.\nWe use the correlation with the reciprocal ranks measured by Pearson's correlation test to evaluate prediction quality.\nThe results are presented in Table 6.\nAgain, our baselines are the clarity score and the robustness score.\nTo make a fair comparison, we tune the clarity score in different ways.\nWe found that using the first ranked document to build the query model yields the best prediction accuracy.\nWe also
attempted to utilize document structure by using the mixture of language models mentioned in Section 3.1.\nLittle improvement was obtained.\nThe correlation coefficients for the clarity score reported in Table 6 are the best we have found.\nAs we can see, our methods considerably outperform the clarity score technique on both of the runs.\nThis confirms our intuition that a coherence-based measure like the clarity score is inappropriate for NP queries.\nTable 6: Pearson's correlation coefficients for correlation with\nreciprocal ranks on the Terabyte Tracks (NP) for clarity score, robustness score, WIG, the first rank change (FRC) and a linear combination of WIG and FRC.\nBold indicates that the results are statistically significant at the 0.01 level.\nRegarding the robustness score, we also tune the parameters and report the best results we have found.\nWe observe an interesting and surprising negative correlation with reciprocal ranks, which we briefly explain.\nA high robustness score means that a number of top ranked documents in the original ranked list are still highly ranked after perturbing the documents.\nThe existence of such documents is a good sign of high performance for content-based queries, as these queries usually have a number of relevant documents [1].\nWith regard to NP queries, however, one fundamental difference is that there is only one relevant document for each query.\nA number of such stably top-ranked documents can therefore only confuse the ranking function and lead to low retrieval performance.\nAlthough the negative correlation with retrieval performance exists, the strength of the correlation is weaker and less consistent than that of our methods, as shown in Table 6.\nBased on the above analysis, we can see that current prediction techniques like the clarity score and the robustness score, which are mainly designed for content-based queries, face significant challenges and are inadequate to deal with NP queries.\nOur two techniques proposed for NP queries
consistently demonstrate good prediction accuracy, displaying initial success in solving the problem of predicting performance for NP queries.\nAnother point we want to stress is that the WIG method works well for both types of queries, a desirable property that most prediction techniques lack.\n4.3 Unknown Query Types\nIn this section, we run two kinds of experiments without access to query type labels.\nFirst, we assume that only one type of query exists but the type is unknown.\nSecond, we experiment on a mixture of content-based and NP queries.\nThe following two subsections report results for the two conditions respectively.\n4.3.1 Only One Type Exists\nWe assume that all queries are of the same type, that is, they are either NP queries or content-based queries.\nWe choose WIG to deal with this case because it shows good prediction accuracy for both types of queries in the previous section.\nWe consider two cases: (1) CB: all 150 title queries from the ad-hoc task of the Terabyte Tracks 2004-2006; (2) NP: all 433 NP queries from the Named-Page finding task of the Terabyte Tracks 2005 and 2006.\nWe take a simple strategy: we label all of the queries in each case as the same type (either NP or CB) regardless of their actual type.\nThe computation of WIG is then based on the labeled query type instead of the actual type.\nThere are four possibilities with respect to the relation between the actual type and the labeled type.\nThe correlation with retrieval performance under the four possibilities is presented in Table 7.\nFor example, the value 0.445 at the intersection of the second row and the third column shows the Pearson's correlation coefficient for correlation with average precision when the content-based queries are incorrectly labeled as the NP type.\nBased on these results, we recommend treating all queries as the NP type when only one query type exists and accurate query classification is not feasible, considering the risk that a large loss
of accuracy will occur if NP queries are incorrectly labeled as content-based queries.\nThese results also demonstrate the strong adaptability of WIG to different query types.\nTable 7: Comparison of Pearson's correlation coefficients for\ncorrelation with retrieval performance under four possibilities on the Terabyte Tracks (NP).\nBold indicates that the results are statistically significant at the 0.01 level.\n4.3.2 A Mixture of Content-based and NP Queries\nA mixture of the two types of queries is a more realistic situation that a Web search engine will meet.\nWe evaluate prediction accuracy by how accurately poorly-performing queries can be identified by the prediction method, assuming that actual query types are unknown (but we can predict query types).\nThis is a challenging task because both the predicted and actual performance for one type of query can be incomparable to that for the other type.\nNext we discuss how to implement our evaluation.\nWe create a query pool which consists of all of the 150 ad-hoc title queries from the Terabyte Tracks 2004-2006 and all of the 433 NP queries from the Terabyte Tracks 2005 and 2006.\nWe divide the queries in the pool into two classes: "good" (better than 50% of the queries of the same type in terms of retrieval performance) and "bad" (otherwise).\nAccording to these standards, an NP query with a reciprocal rank above 0.2 or a content-based query with an average precision above 0.315 is considered good.\nThen, each time we randomly select one query Q from the pool with probability p that Q is content-based.\nThe remaining queries are used as training data.\nWe first decide the type of query Q according to a query classifier.\nNamely, the query classifier tells us whether query Q is NP or content-based.\nBased on the predicted query type and the score computed for query Q by a prediction technique, a binary decision is made about whether query Q is good or bad by comparing to the score threshold of the predicted query
type obtained from the training data.\nPrediction accuracy is measured by the accuracy of the binary decision.\nIn our implementation, we repeatedly take a test query from the query pool, and prediction accuracy is computed as the percentage of correct decisions, that is, decisions in which a good (bad) query is predicted to be good (bad).\nObviously, random guessing leads to 50% accuracy.\nLet us take the WIG method as an example to illustrate the process.\nTwo WIG thresholds (one for NP queries and the other for content-based queries) are trained by maximizing the prediction accuracy on the training data.\nWhen a test query is labeled as the NP (CB) type by the query type classifier, it will be predicted to be good if and only if the WIG score for this query is above the NP (CB) threshold.\nSimilar procedures are taken for the other prediction techniques.\nNow we briefly introduce the automatic query type classifier used in this paper.\nWe find that the robustness score, though originally proposed for performance prediction, is a good indicator of query type.\nOn average, content-based queries have a much higher robustness score than NP queries.\nFor example, Figure 2 shows the distributions of robustness scores for NP and content-based queries.\nBased on this finding, the robustness score classifier attaches an NP (CB) label to a query if the robustness score for the query is below (above) a threshold trained from the training data.\nFigure 2: Distribution of robustness scores for NP and CB queries.\nThe NP queries are the 252 NP topics from the 2005 Terabyte Track.\nThe content-based queries are the 150 ad-hoc title queries from the Terabyte Tracks 2004-2006.\nThe probability distributions are estimated by the kernel density estimation method.\nTable 8: Comparison of prediction accuracy for five strategies\nin the mixed-query situation.\nTwo ways to sample a query from the pool: (1) the sampled query is content-based with probability p = 0.6\n(that is,
the query is NP with probability 0.4); (2) the probability p = 0.4.\nWe consider five strategies in our experiments.\nIn the first strategy (denoted by "robust"), we use the robustness score for query performance prediction with the help of a perfect query classifier that always correctly maps a query into one of the two categories (that is, NP or CB).\nThis strategy represents the level of prediction accuracy that current prediction techniques can achieve in the ideal condition that query types are known.\nIn the following three strategies, the WIG method is adopted for performance prediction.\nThe difference among the three is the query classifier used in each strategy: (1) the classifier always classifies a query into the NP type; (2) the classifier is the robustness score classifier mentioned above; (3) the classifier is a perfect one.\nThese three strategies are denoted by WIG-1, WIG-2 and WIG-3 respectively.\nThe reason we are interested in WIG-1 is based on the results from Section 4.3.1.\nIn the last strategy (denoted by "Optimal"), which serves as an upper bound on how well we can do so far, we make full use of our prediction techniques for each query type, assuming a perfect query classifier is available.\nSpecifically, we linearly combine WIG and QF for content-based queries and WIG and FRC for NP queries.\nThe results for the five strategies are shown in Table 8.\nFor each strategy, we try two ways to sample a query from the pool: (1) the sampled query is CB with probability p = 0.6 (the query is NP with probability 0.4); (2) the probability p = 0.4.\nFrom Table 8 we can see that in terms of prediction accuracy WIG-2 (the WIG method with the automatic query classifier) is not only better than the first two strategies, but also close to WIG-3, where a perfect classifier is assumed.\nSome further improvements over WIG-3 are observed under the "Optimal" strategy, where WIG is combined with the other prediction techniques.\nThe merit of WIG-2 is that it provides a
practical solution to automatically identifying poorly performing queries in a Web search environment with mixed query types, which poses considerable obstacles to traditional prediction techniques.\n5.\nCONCLUSIONS AND FUTURE WORK\nTo our knowledge, this paper is the first to thoroughly explore the prediction of query performance in web search environments.\nWe demonstrated that our models achieve higher prediction accuracy than previously published techniques, which were not specially devised for web search scenarios.\nIn this paper, we focused on two types of queries in web search: content-based and Named-Page (NP) finding queries, corresponding to the ad-hoc retrieval task and the Named-Page finding task respectively.\nFor both types of web queries, our prediction models were shown to be substantially more accurate than the current state-of-the-art techniques.\nFurthermore, we considered the more realistic case in which no prior information on query types is available.\nWe demonstrated that the WIG method is particularly suitable for this situation.\nConsidering the adaptability of WIG to a range of collections and query types, one of our future plans is to apply this method to predict user preference of search results on realistic data collected from a commercial search engine.\nOther than accuracy, another major issue that prediction techniques have to deal with in a Web environment is efficiency.\nFortunately, since the WIG score is computed just over the terms and the phrases that appear in the query, this calculation can be made very efficient with the support of an index.\nOn the other hand, the computation of QF and FRC is relatively less efficient, since QF needs to run retrieval over the whole collection twice and FRC needs to repeatedly rank the perturbed documents.\nImproving the efficiency of QF and FRC is left for future work.\nIn addition, the prediction techniques proposed in this paper have the potential to improve retrieval performance by being combined with other IR
techniques.\nFor example, our techniques can be incorporated into popular query modification techniques such as query expansion and query relaxation.\nGuided by performance prediction, we can make better decisions on when and how to modify queries to enhance retrieval effectiveness.\nWe would like to carry out research in this direction in the future.\n\nContext Sensitive Stemming for Web Search\nABSTRACT\nTraditionally, stemming has been applied to Information Retrieval tasks by transforming words in documents to their root form before indexing, and applying a similar transformation to query terms.\nAlthough it increases recall, this naive strategy does not work well for Web Search since it lowers precision and requires a significant amount of additional computation.\nIn this paper, we propose a context sensitive stemming method that addresses these two issues.\nTwo unique properties make our approach feasible for Web Search.\nFirst, based on statistical language modeling, we perform context sensitive analysis on the query side.\nWe accurately predict which of its morphological variants is useful to expand a query term with before submitting the query to the search engine.\nThis dramatically reduces the number of bad expansions, which in turn reduces the cost of additional computation and improves the precision at the same time.\nSecond, our approach performs a context sensitive document matching for those expanded variants.
This conservative strategy serves as a safeguard against spurious stemming, and it turns out to be very important for improving precision.\nUsing word pluralization handling as an example of our stemming approach, our experiments on a major Web search engine show that by stemming only 29% of the query traffic, we can improve relevance as measured by average Discounted Cumulative Gain (DCG5) by 6.1% on these queries and 1.8% over all query traffic.\nFuchun Peng, Nawaaz Ahmed, Xin Li, Yumao Lu\nYahoo! Inc., 701 First Avenue, Sunnyvale, California 94089\n{fuchun, nawaaz, xinli, yumaol}@yahoo-inc.com\nCategories and Subject Descriptors H.3.3 [Information Systems]: Information Storage and Retrieval-Query formulation General Terms Algorithms, Experimentation 1.\nINTRODUCTION Web search has now become a major tool in our daily lives for information seeking.\nOne of the important issues in Web search is that user queries are often not best formulated to get optimal results.\nFor example, running shoe is a query that occurs frequently in query logs.\nHowever, the query running shoes is much more likely to give better search results than the original query, because documents matching the intent of this query usually contain the words running shoes.\nCorrectly formulating a query requires the user to accurately predict which word form is used in the documents that best satisfy his or her information needs.\nThis is difficult even for experienced users, and especially difficult for non-native speakers.\nOne traditional solution is to use stemming [16, 18], the process of transforming inflected or derived words to their root form so that a search term will match and retrieve documents containing all forms of the term.\nThus, the word run will match running, ran, runs, and shoe will match shoes and shoeing.\nStemming can be done either on the terms in a document during indexing (applying the same transformation to the query terms during query processing) or by expanding the query with the variants during query processing.\nStemming during indexing allows very little flexibility during query processing, while stemming by query expansion allows handling each query differently, and hence is preferred.\nAlthough traditional stemming increases recall by matching word variants [13], it can reduce precision by retrieving too many documents that have been incorrectly matched.\nWhen examining the results of applying stemming to a
large number of queries, one usually finds that nearly equal numbers of queries are helped and hurt by the technique [6]. In addition, it reduces system performance because the search engine has to match all the word variants. As we will show in the experiments, this is true even if we simplify stemming to pluralization handling, which is the process of converting a word from its plural to singular form, or vice versa. Thus, one needs to be very cautious when using stemming in Web search engines. One problem of traditional stemming is its blind transformation of all query terms; that is, it always performs the same transformation for the same query word without considering the context of the word. For example, the word book has four forms: book, books, booking, booked; and store has four forms: store, stores, storing, stored. For the query book store, expanding both words to all of their variants significantly increases computation cost and hurts precision, since not all of the variants are useful for this query. Transforming book store to match book stores is fine, but matching book storing or booking store is not. A weighting method that gives variant words smaller weights alleviates the problems to a certain extent, if the weights accurately reflect the importance of the variant in this particular query. However, uniform weighting does not work, and query-dependent weighting is still a challenging unsolved problem [20]. A second problem of traditional stemming is its blind matching of all occurrences in documents. For the query book store, a transformation that allows the variant stores to be matched will cause every occurrence of stores in a document to be treated as equivalent to the query term store. Thus, a document containing the fragment reading a book in coffee stores will be matched, causing many wrong documents to be selected. Although we hope the ranking function can correctly handle these, with many more candidates to rank, the risk of
making mistakes increases. To alleviate these two problems, we propose a context sensitive stemming approach for Web search. Our solution consists of two context sensitive analyses, one on the query side and the other on the document side. On the query side, we propose a statistical language modeling based approach to predict which word variants are better forms than the original word for search purposes, and we expand the query with only those forms. On the document side, we propose conservative context sensitive matching for the transformed word variants, only matching document occurrences that appear in the context of other terms in the query. Our model is simple yet effective and efficient, making it feasible to use in real commercial Web search engines. We use pluralization handling as a running example for our stemming approach. The motivation for using pluralization handling as an example is to show that even such simple stemming, if handled correctly, can give significant benefits to search relevance. As far as we know, no previous research has systematically investigated the usage of pluralization in Web search. We emphasize that the method we propose is not limited to pluralization handling; it is a general stemming technique, and can also be applied to general query expansion. Experiments on general stemming yield additional significant improvements over pluralization handling for long queries, although details will not be reported in this paper. In the rest of the paper, we first present the related work and distinguish our method from previous work in Section 2. We describe the details of the context sensitive stemming approach in Section 3. We then perform extensive experiments on a major Web search engine to support our claims in Section 4, followed by discussions in Section 5. Finally, we conclude the paper in Section 6.

2. RELATED WORK
Stemming is a long studied technology. Many stemmers have been developed, such as the Lovins
stemmer [16] and the Porter stemmer [18]. The Porter stemmer is widely used due to its simplicity and effectiveness in many applications. However, the Porter stemmer makes many mistakes because its simple rules cannot fully describe English morphology. Corpus analysis has been used to improve the Porter stemmer [26] by creating equivalence classes for words that are morphologically similar and occur in similar contexts, as measured by expected mutual information [23]. We use a similar corpus based approach for stemming, computing the similarity between two words based on their distributional context features, which can be more than just adjacent words [15], and then keeping only the morphologically similar words as candidates. Using stemming in information retrieval is also a well known technique [8, 10]. However, the effectiveness of stemming for English query systems was previously reported to be rather limited. Lennon et al. [17] compared the Lovins and Porter algorithms and found little improvement in retrieval performance. Later, Harman [9] compared three general stemming techniques in text retrieval experiments, including pluralization handling (called the S stemmer in that paper). That work also proposed selective stemming based on query length and term importance, but no positive results were reported. On the other hand, Krovetz [14] performed comparisons over small numbers of documents (from 400 to 12k) and showed dramatic precision improvement (up to 45%). However, due to the limited number of tested queries (less than 100) and the small size of the collection, the results are hard to generalize to Web search. These mixed results, mostly failures, led early IR researchers to deem stemming irrelevant in general for English [4], although recent research has shown stemming has greater benefits for retrieval in other languages [2]. We suspect the previous failures were mainly due to the two problems we mentioned in the introduction. Blind stemming, or a simple query
length based selective stemming as used in [9], is not enough. Stemming has to be decided on a case by case basis, not only at the query level but also at the document level. As we will show, if handled correctly, significant improvement can be achieved. A more general problem related to stemming is query reformulation [3, 12] and query expansion, which expands words not only with word variants [7, 22, 24, 25]. To decide which expanded words to use, people often use pseudo-relevance feedback techniques that send the original query to a search engine, retrieve the top documents, extract relevant words from these top documents as additional query words, and resubmit the expanded query [21]. This normally requires sending a query multiple times to the search engine, and it is not cost effective for processing the huge number of queries involved in Web search. In addition, query expansion, including query reformulation [3, 12], has a high risk of changing the user intent (called query drift). Since the expanded words may have different meanings, adding them to the query could potentially change the intent of the original query. Thus query expansion based on pseudo-relevance feedback and query reformulation can provide suggestions to users for interactive refinement, but can hardly be used directly for Web search. On the other hand, stemming is much more conservative, since most of the time stemming preserves the original search intent. While most work on query expansion focuses on recall enhancement, our work focuses on increasing both recall and precision. The increase in recall is obvious: matching word variants retrieves relevant documents the original query would miss. Precision also improves because, with quality stemming, good documents that were not selected before stemming will be pushed up, while low quality documents will be degraded. On selective query expansion, Cronen-Townsend et al.
[6] proposed a method for selective query expansion based on comparing the Kullback-Leibler divergence of the results from the unexpanded query and the results from the expanded query. This is similar to relevance feedback in the sense that it requires multiple retrieval passes. If a word can be expanded into several words, this process has to be run multiple times to decide which expanded word is useful. It is expensive to deploy this in production Web search engines. Our method predicts the quality of expansion based on offline information, without sending the query to a search engine. In summary, we propose a novel approach to attack an old, yet still important and challenging problem for Web search: stemming. Our approach is unique in that it performs predictive stemming on a per query basis without relevance feedback from the Web, using the context of the variants in documents to preserve precision. It is simple, yet very efficient and effective, making real time stemming feasible for Web search. Our results affirm that stemming is indeed very important to large scale information retrieval.

3. CONTEXT SENSITIVE STEMMING
3.1 Overview
Our system has four components, as illustrated in Figure 1: candidate generation, query segmentation and head word detection, context sensitive query stemming, and context sensitive document matching. Candidate generation (component 1) is performed offline and the generated candidates are stored in a dictionary. For an input query, we first segment the query into concepts and detect the head word for each concept (component 2). We then use statistical language modeling to decide whether a particular variant is useful (component 3), and finally, for the expanded variants, we perform context sensitive document matching (component 4). Below we discuss each of the components in more detail.
[Figure 1: System components: (1) candidate generation (e.g. comparisons -> comparison, hotel -> hotels), (2) query segmentation and head word detection (e.g. hotel price comparisons segmented with head words hotel and comparisons), (3) selective word expansion (e.g. the decision comparisons -> comparison), (4) context sensitive document matching.]

3.2 Expansion candidate generation
One way to generate candidates is using the Porter stemmer [18]. The Porter stemmer simply uses morphological rules to convert a word to its base form. It has no knowledge of the semantic meaning of the words and sometimes makes serious mistakes, such as executive to execution, news to new, and paste to past. A more conservative way is based on using corpus analysis to improve the Porter stemmer results [26]. The corpus analysis we do is based on word distributional similarity [15]. The rationale of using distributional word similarity is that true variants tend to be used in similar contexts. In the distributional word similarity calculation, each word is represented with a vector of features derived from the context of the word. We use the bigrams to the left and right of the word as its context features, mined from a huge Web corpus. The similarity between two words is the cosine similarity between the two corresponding feature vectors. The top 20 most similar words to develop are shown in Table 1.

rank  candidate     score  |  rank  candidate      score
0     develop       1      |  10    berts          0.119
1     developing    0.339  |  11    wads           0.116
2     developed     0.176  |  12    developer      0.107
3     incubator     0.160  |  13    promoting      0.100
4     develops      0.150  |  14    developmental  0.091
5     development   0.148  |  15    reengineering  0.090
6     tutoring      0.138  |  16    build          0.083
7     analyzing     0.128  |  17    construct      0.081
8     developement  0.128  |  18    educational    0.081
9     automation    0.126  |  19    institute      0.077

Table 1: Top 20 most similar candidates to the word develop. The score column is the similarity score.

To determine the stemming candidates, we apply a few Porter stemmer [18] morphological rules to the similarity list. After applying these rules, for the word
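The distributional similarity computation described above can be sketched as follows. This is a minimal illustration, not the actual Web-scale pipeline: the toy corpus and the way context features are extracted are assumptions for the example.

```python
import math
from collections import Counter

def context_features(word, corpus_tokens):
    """Feature vector for a word: counts of the bigrams immediately to its
    left and right (a simplified stand-in for Web-scale context mining)."""
    feats = Counter()
    for i, w in enumerate(corpus_tokens):
        if w == word:
            feats[("L", tuple(corpus_tokens[max(0, i - 2):i]))] += 1
            feats[("R", tuple(corpus_tokens[i + 1:i + 3]))] += 1
    return feats

def cosine_similarity(u, v):
    """Cosine similarity between two sparse feature vectors."""
    dot = sum(c * v[f] for f, c in u.items() if f in v)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

True variants of a word end up with overlapping context features and hence high cosine similarity, which is what ranks developing and develops near develop in Table 1.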
develop, the stemming candidates are developing, developed, develops, development, developement, developer, developmental. For the pluralization handling purpose, only the candidate develops is retained. One thing we note from observing the distributionally similar words is that they are closely related semantically. These words might serve as candidates for general query expansion, a topic we will investigate in the future.

3.3 Segmentation and head word identification
For long queries, it is quite important to detect the concepts in the query and the most important words for those concepts. We first break a query into segments, each segment representing a concept, which normally is a noun phrase. For each of the noun phrases, we then detect the most important word, which we call the head word. Segmentation is also used in document sensitive matching (Section 3.5) to enforce proximity. To break a query into segments, we have to define a criterion to measure the strength of the relation between words. One effective method is to use mutual information as an indicator of whether or not to split two words [19]. We use a log of 25M queries and collect the bigram and unigram frequencies from it. For every incoming query, we compute the mutual information of each pair of adjacent words; if it passes a predefined threshold, we do not split the query between those two words and move on to the next word. We continue this process until the mutual information between two words is below the threshold, and then create a concept boundary there. Table 2 shows some examples of query segmentation.

[running shoe]
[best] [new york] [medical schools]
[pictures] [of] [white house]
[cookies] [in] [san francisco]
[hotel] [price comparison]

Table 2: Query segmentation: a segment is bracketed.

The ideal way of finding the head word of a concept is to do syntactic parsing to determine the dependency structure of the query. Query parsing is more difficult than sentence parsing, since many queries
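The segmentation procedure just described can be sketched as follows. The counts and threshold are toy values; the paper uses frequencies collected from a 25M-query log and a tuned threshold.

```python
import math

def segment_query(words, unigram, bigram, total, threshold=1.0):
    """Greedy segmentation: keep adjacent words in the same segment while
    their pointwise mutual information log(P(ab) / (P(a)P(b))) exceeds the
    threshold; otherwise place a concept boundary between them."""
    segments, current = [], [words[0]]
    for prev, word in zip(words, words[1:]):
        p_ab = bigram.get((prev, word), 0) / total
        p_a = unigram.get(prev, 0) / total
        p_b = unigram.get(word, 0) / total
        mi = math.log(p_ab / (p_a * p_b)) if p_ab > 0 and p_a > 0 and p_b > 0 else float("-inf")
        if mi > threshold:
            current.append(word)
        else:
            segments.append(current)
            current = [word]
    segments.append(current)
    return segments
```

With toy counts where price comparison co-occurs often, the query hotel price comparison segments into [hotel] [price comparison], matching Table 2.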
are not grammatical and are very short. Applying a parser trained on sentences from documents to queries will have poor performance. In our solution, we just use simple heuristic rules, and this works very well in practice for English. For an English noun phrase, the head word is typically the last non-stop word, unless the phrase is of a particular pattern, like XYZ of/in/at/from UVW. In such cases, the head word is typically the last non-stop word of XYZ.

3.4 Context sensitive word expansion
After detecting which words are the most important words to expand, we have to decide whether the expansions will be useful. Our statistics show that about half of the queries can be transformed by pluralization via naive stemming. Among this half, about 25% of the queries improve relevance when transformed, the majority (about 50%) do not change their top 5 results, and the remaining 25% perform worse. Thus, it is extremely important to identify which queries should not be stemmed, for the purpose of maximizing relevance improvement and minimizing stemming cost. In addition, for a query with multiple words that can be transformed, or a word with multiple variants, not all of the expansions are useful. Taking the query hotel price comparison as an example, we decide that hotel and price comparison are two concepts. The head words hotel and comparison can be expanded to hotels and comparisons. Are both transformations useful? To test whether an expansion is useful, we have to know whether the expanded query is likely to retrieve more relevant documents from the Web, which can be quantified by the probability of the query occurring as a string on the Web. The more likely a query is to occur on the Web, the more relevant documents this query is able to return. Now the whole problem becomes how to calculate the probability of a query occurring on the Web. Calculating the probability of a string occurring in a corpus is a well known language modeling problem. The goal of language
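The head-word heuristic above can be sketched as follows. The stop-word list is an illustrative assumption, not the production list.

```python
STOP_WORDS = {"of", "in", "at", "from", "the", "a", "an", "for"}  # illustrative

def head_word(segment):
    """Heuristic head word of an English noun phrase: the last non-stop word,
    unless the phrase matches the pattern 'XYZ of/in/at/from UVW', in which
    case the head is the last non-stop word of XYZ."""
    for prep in ("of", "in", "at", "from"):
        if prep in segment:
            segment = segment[:segment.index(prep)]
            break
    non_stop = [w for w in segment if w not in STOP_WORDS]
    return non_stop[-1] if non_stop else None
```

For example, the head of price comparison is comparison, while the head of pictures of white house is pictures because of the of-pattern.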
modeling is to predict the probability of naturally occurring word sequences, s = w_1 w_2 ... w_N; or more simply, to put high probability on word sequences that actually occur (and low probability on word sequences that never occur). The simplest and most successful approach to language modeling is still based on the n-gram model. By the chain rule of probability, one can write the probability of any word sequence as

Pr(w_1 w_2 ... w_N) = \prod_{i=1}^{N} Pr(w_i \mid w_1 ... w_{i-1})    (1)

An n-gram model approximates this probability by assuming that the only words relevant to predicting Pr(w_i | w_1 ... w_{i-1}) are the previous n-1 words; i.e.

Pr(w_i \mid w_1 ... w_{i-1}) = Pr(w_i \mid w_{i-n+1} ... w_{i-1})

A straightforward maximum likelihood estimate of n-gram probabilities from a corpus is given by the observed frequency of each of the patterns

Pr(w_i \mid w_{i-n+1} ... w_{i-1}) = \frac{\#(w_{i-n+1} ... w_i)}{\#(w_{i-n+1} ... w_{i-1})}    (2)

where \#(\cdot) denotes the number of occurrences of a specified gram in the training corpus. Although one could attempt to use simple n-gram models to capture long range dependencies in language, attempting to do so directly immediately creates sparse data problems: using grams of length up to n entails estimating the probability of W^n events, where W is the size of the word vocabulary. This quickly overwhelms modern computational and data resources for even modest choices of n (beyond 3 to 6). Also, because of the heavy tailed nature of language (i.e.
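The maximum likelihood estimate of Equation (2), chained over a sequence, can be sketched as below. For simplicity the sketch only scores positions where a full (n-1)-word history is available, ignoring sentence-boundary handling.

```python
from collections import Counter

def mle_sequence_prob(tokens, n, corpus):
    """Chain the MLE n-gram estimates of Equation (2) over a token sequence:
    Pr(w_i | w_{i-n+1}..w_{i-1}) = #(w_{i-n+1}..w_i) / #(w_{i-n+1}..w_{i-1})."""
    grams = Counter(tuple(corpus[i:i + n]) for i in range(len(corpus) - n + 1))
    hists = Counter(tuple(corpus[i:i + n - 1]) for i in range(len(corpus) - n + 2))
    prob = 1.0
    for i in range(n - 1, len(tokens)):
        gram = tuple(tokens[i - n + 1:i + 1])
        hist = gram[:-1]
        if hists[hist] == 0:
            return 0.0  # unseen history: the raw MLE assigns no probability
        prob *= grams[gram] / hists[hist]
    return prob
```

The hard zero returned for unseen n-grams is exactly the sparse-data problem that motivates the smoothing and back-off machinery discussed next.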
Zipf's law), one is likely to encounter novel n-grams in any test corpus that were never witnessed during training, and therefore some mechanism for assigning non-zero probability to novel n-grams is a central and unavoidable issue in statistical language modeling. One standard approach to smoothing probability estimates to cope with sparse data problems (and to cope with potentially missing n-grams) is to use some sort of back-off estimator:

Pr(w_i \mid w_{i-n+1} ... w_{i-1}) =
  \begin{cases}
    \hat{Pr}(w_i \mid w_{i-n+1} ... w_{i-1}), & \text{if } \#(w_{i-n+1} ... w_i) > 0 \\
    \beta(w_{i-n+1} ... w_{i-1}) \times Pr(w_i \mid w_{i-n+2} ... w_{i-1}), & \text{otherwise}
  \end{cases}    (3)

where

\hat{Pr}(w_i \mid w_{i-n+1} ... w_{i-1}) = \text{discount} \cdot \frac{\#(w_{i-n+1} ... w_i)}{\#(w_{i-n+1} ... w_{i-1})}    (4)

is the discounted probability and \beta(w_{i-n+1} ... w_{i-1}) is a normalization constant

\beta(w_{i-n+1} ... w_{i-1}) = \frac{1 - \sum_{x \in (w_{i-n+1} ... w_{i-1} x)} \hat{Pr}(x \mid w_{i-n+1} ... w_{i-1})}{1 - \sum_{x \in (w_{i-n+1} ... w_{i-1} x)} \hat{Pr}(x \mid w_{i-n+2} ... w_{i-1})}    (5)

The discounted probability (4) can be computed with different smoothing techniques, including absolute smoothing, Good-Turing smoothing, linear smoothing, and Witten-Bell smoothing [5]. We used absolute smoothing in our experiments. Since the likelihood of a string, Pr(w_1 w_2 ... w_N), is a very small number and hard to interpret, we use the entropy defined below to score the string:

Entropy = -\frac{1}{N} \log_2 Pr(w_1 w_2 ... w_N)    (6)

Now, returning to the example query hotel price comparison, there are four variants of this query, and the entropies of these four candidates are shown in Table 3. We can see that all alternatives are less likely than the input query. It is therefore not useful to make an expansion for this query. On the other hand, if the input query is hotel price comparisons, which is the second alternative in the table, then there is a better alternative than the input query, and it should therefore be expanded. To tolerate
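A simplified bigram instance of the back-off scheme in Equations (3)-(6), using absolute discounting, can be sketched as follows. The discount value and the tiny floor on zero probabilities are illustrative assumptions.

```python
import math
from collections import Counter

def bigram_backoff(corpus, discount=0.5):
    """Return p(w | prev): the absolute-discounted bigram probability when the
    bigram was seen, otherwise back off to the unigram distribution scaled by
    the left-over probability mass (in the spirit of Equations (3)-(5))."""
    uni = Counter(corpus)
    bi = Counter(zip(corpus, corpus[1:]))
    total = len(corpus)

    def p_uni(w):
        return uni[w] / total

    def p(w, prev):
        if bi[(prev, w)] > 0:
            return (bi[(prev, w)] - discount) / uni[prev]
        seen = {x for (a, x) in bi if a == prev}
        beta = discount * len(seen) / uni[prev] if uni[prev] else 1.0
        denom = 1.0 - sum(p_uni(x) for x in seen)
        return beta * p_uni(w) / denom if denom > 0 else 0.0

    return p

def entropy(tokens, p):
    """Equation (6): per-word negative log2 likelihood of the sequence,
    chaining bigram probabilities (boundary terms omitted for simplicity)."""
    logp = sum(math.log2(max(p(w, prev), 1e-12)) for prev, w in zip(tokens, tokens[1:]))
    return -logp / len(tokens)
```

Lower entropy means the string is more likely in the corpus, which is the quantity Table 3 reports for the four query variants.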
the variations in probability estimation, we relax the selection criterion to accept those query alternatives whose scores are within a certain distance (10% in our experiments) of the best score.

Query variation           Entropy
hotel price comparison    6.177
hotel price comparisons   6.597
hotels price comparison   6.937
hotels price comparisons  7.360

Table 3: Variations of the query hotel price comparison ranked by entropy score, with the original query in bold face.

3.5 Context sensitive document matching
Even after we know which word variants are likely to be useful, we have to be conservative in document matching for the expanded variants. For the query hotel price comparisons, we decided that the word comparisons is expanded to include comparison. However, not every occurrence of comparison in a document is of interest. A page about comparing customer service can contain all of the words hotel, price, comparisons, comparison. Such a page is not a good page for the query. If we accept matches of every occurrence of comparison, it will hurt retrieval precision, and this is one of the main reasons why most stemming approaches do not work well for information retrieval. To address this problem, we impose a proximity constraint that considers the context around the expanded variant in the document. A variant match is considered valid only if the variant occurs in the same context as the original word does. The context is the left or the right non-stop segment (Footnote 1) of the original word. Taking the same query as an example, the context of comparisons is price. The expanded word comparison is valid only if it is in the same context as comparisons, which is after the word price. Thus, we should only match those occurrences of comparison in the document that occur after the word price. Considering the fact that queries and documents may not represent the intent in exactly the same way, we relax this proximity constraint to allow variant occurrences within a window of some
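Our reading of the selection rule of Section 3.4 can be put in code form as follows. Here `entropy_of` is a hypothetical scoring function (for instance the entropy of Equation (6)), and the way the "better than the original" test combines with the 10% slack is our interpretation of the text, not a literal transcription.

```python
def select_expansions(original, alternatives, entropy_of, slack=0.10):
    """Expand only when some alternative is more likely (lower entropy) than
    the original query; among the alternatives, keep those whose entropy is
    within `slack` of the best score."""
    scores = {q: entropy_of(q) for q in [original] + alternatives}
    best = min(scores.values())
    if scores[original] <= best:  # original is already the most likely form
        return []
    return [q for q in alternatives if scores[q] <= best * (1 + slack)]
```

With the Table 3 scores, hotel price comparison yields no expansion (it is already the most likely form), while hotel price comparisons is expanded with hotel price comparison.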
fixed size. If the expanded word comparison occurs within the context of price within the window, it is considered valid. The smaller the window size is, the more restrictive the matching. We use a window size of 4, which typically captures contexts that include the containing and adjacent noun phrases.

4. EXPERIMENTAL EVALUATION
4.1 Evaluation metrics
We measure both the relevance improvement and the stemming cost required to achieve that relevance.

Footnote 1: a context segment cannot be a single stop word.

4.1.1 Relevance measurement
We use a variant of the average Discounted Cumulative Gain (DCG), a recently popularized scheme to measure search engine relevance [1, 11]. Given a query and a ranked list of K documents (K is set to 5 in our experiments), the DCG(K) score for this query is calculated as follows:

DCG(K) = \sum_{k=1}^{K} \frac{g_k}{\log_2(1 + k)}    (7)

where g_k is the weight for the document at rank k. A higher degree of relevance corresponds to a higher weight. A page is graded into one of five scales: Perfect, Excellent, Good, Fair, Bad, with corresponding weights. We use dcg to represent the average DCG(5) over a set of test queries.

4.1.2 Stemming cost
Another metric measures the additional cost incurred by stemming. Given the same level of relevance improvement, we prefer a stemming method that has less additional cost. We measure this by the percentage of queries that are actually stemmed, over all the queries that could possibly be stemmed.

4.2 Data preparation
We randomly sample 870 queries from a three month query log, with 290 from each month. Among these 870 queries, we remove all misspelled queries, since misspelled queries are not of interest to stemming. We also remove all one word queries, since stemming one word queries without context has a high risk of changing the query intent, especially for short words. In the end, we have 529 correctly spelled queries with at least 2 words.

4.3 Naive stemming for Web search
Before explaining the
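The window-constrained matching of Section 3.5 can be sketched as follows; representing the document as a flat token list is a simplifying assumption.

```python
def valid_variant_match(doc_tokens, pos, variant, context_word, window=4):
    """Context sensitive document matching: an occurrence of an expanded
    variant at position `pos` is valid only if the original query term's
    context word appears within `window` tokens of it."""
    if doc_tokens[pos] != variant:
        return False
    lo = max(0, pos - window)
    hi = min(len(doc_tokens), pos + window + 1)
    return context_word in doc_tokens[lo:hi]
```

An occurrence of comparison near price is accepted, while a stray comparison in an unrelated passage is rejected, which is how spurious matches are filtered out.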
experiments and results in detail, we would like to describe the traditional way of using stemming for Web search, referred to as the naive model. This is to treat every word variant as equivalent for all possible words in the query. The query book store will be transformed into (book OR books)(store OR stores) when limiting stemming to pluralization handling only, where OR is an operator that denotes the equivalence of the left and right arguments.

4.4 Experimental setup
The baseline model is the model without stemming. We first run the naive model to see how well it performs over the baseline. Then we improve the naive stemming model with document sensitive matching, referred to as the document sensitive matching model. This model makes the same stemming decisions as the naive model on the query side, but performs conservative matching on the document side using the strategy described in Section 3.5. The naive model and the document sensitive matching model stem the most queries. Out of the 529 queries, there are 408 queries that they stem, corresponding to 46.7% of query traffic (out of a total of 870). We then further improve the document sensitive matching model on the query side with selective word stemming based on statistical language modeling (Section 3.4), referred to as the selective stemming model. Based on language modeling prediction, this model stems only a subset of the 408 queries stemmed by the document sensitive matching model. We experiment with a unigram language model and a bigram language model. Since we only care how much we can improve over the naive model, we use only these 408 queries (all the queries that are affected by the naive stemming model) in the experiments. To get a sense of how these models perform, we also include an oracle model that gives the upper-bound performance a stemmer can achieve on this data. The oracle model only expands a word if the stemming gives better results. To analyze the influence of pluralization handling on different query categories,
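The naive model's query rewrite can be sketched as follows; `variant_of` is a hypothetical lookup that returns a word's pluralization variant, or None when there is none.

```python
def naive_rewrite(query_words, variant_of):
    """Naive pluralization stemming: replace every word that has a variant
    with an OR group, e.g. book store -> (book OR books)(store OR stores)."""
    parts = []
    for w in query_words:
        v = variant_of(w)
        parts.append(f"({w} OR {v})" if v else w)
    return "".join(parts)
```

Because every transformable word is expanded unconditionally, this rewrite places the entire burden of filtering bad variants on the ranking function, which is what the selective models below avoid.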
we divide queries into short queries and long queries. Among the 408 queries stemmed by the naive model, there are 272 short queries with 2 or 3 words, and 136 long queries with at least 4 words.

4.5 Results
We summarize the overall results in Table 4, and present the results on short queries and long queries separately in Table 5. Each row in Table 4 is a stemming strategy described in Section 4.4. The first column is the name of the strategy. The second column is the number of queries affected by this strategy; this column measures the stemming cost, and lower numbers are better at the same level of dcg. The third column is the average dcg score over all tested queries in this category (including the ones that were not stemmed by the strategy). The fourth column is the relative improvement over the baseline, and the last column is the p-value of the Wilcoxon significance test. There are several observations about the results. We can see that naive stemming only obtains a statistically insignificant improvement of 1.5%. Looking at Table 5, it gives an improvement of 2.7% on short queries. However, it also hurts long queries by -2.4%. Overall, the improvement is canceled out. The reason that it improves short queries is that most short queries have only one word that can be stemmed. Thus, blindly pluralizing short queries is relatively safe. However, for long queries, most queries can have multiple words that can be pluralized. Expanding all of them without selection significantly hurts precision. Document context sensitive stemming gives a significant lift to the performance, from 2.7% to 4.2% for short queries and from -2.4% to -1.6% for long queries, with an overall lift from 1.5% to 2.8%. The improvement comes from the conservative context sensitive document matching. An expanded word is valid only if it occurs within the context of the original query in the document. This reduces many spurious matches. However, we still notice that for long
queries, context sensitive stemming is not able to improve performance, because it still selects too many documents and gives the ranking function a hard problem. While the chosen window size of 4 works the best amongst all the choices, it still allows spurious matches. It is possible that the window size needs to be chosen on a per query basis to ensure tighter proximity constraints for different types of noun phrases. Selective word pluralization further helps resolve the problem faced by document context sensitive stemming. It does not stem every word, which would place all the burden on the ranking algorithm, but tries to eliminate unnecessary stemming in the first place. By predicting which word variants are going to be useful, we can dramatically reduce the number of stemmed words, thus improving both recall and precision. With the unigram language model, we can reduce the stemming cost by 26.7% (from 408/408 to 300/408) and lift the overall dcg improvement from 2.8% to 3.4%. In particular, it gives significant improvements on long queries. The dcg gain turns from negative to positive, from -1.6% to 1.1%. This confirms our hypothesis that reducing unnecessary word expansion leads to precision improvement. For short queries too, we observe both dcg improvement and stemming cost reduction with the unigram language model. The advantage of predictive word expansion with a language model is further boosted with a better bigram language model. The overall dcg gain is lifted from 3.4% to 3.9%, and the stemming cost is dramatically reduced from 408/408 to 250/408, corresponding to only 29% of query traffic (250 out of 870) and an overall 1.8% dcg improvement over all query traffic. For short queries, the bigram language model improves the dcg gain from 4.4% to 4.7%, and reduces the stemming cost from 272/272 to 150/272. For long queries, the bigram language model improves the dcg gain from 1.1% to 2.5%, and reduces the stemming cost from 136/136 to
100/136. We observe that the bigram language model gives a larger lift for long queries. This is because the uncertainty in long queries is larger, and a more powerful language model is needed. We hypothesize that a trigram language model would give a further lift for long queries, and leave this for future investigation. Considering the tight upper-bound (Footnote 2) on the improvement to be gained from pluralization handling (via the oracle model), the current performance on short queries is very satisfying. For short queries, the dcg gain upper-bound is 6.3% for perfect pluralization handling; our current gain is 4.7% with a bigram language model. For long queries, the dcg gain upper-bound is 4.6% for perfect pluralization handling; our current gain is 2.5% with a bigram language model. We may gain additional benefit with a more powerful language model for long queries. However, the difficulties of long queries come from many other aspects, including the proximity and segmentation problems. These problems have to be addressed separately. Looking at the upper-bound of overhead reduction for oracle stemming, 75% (308/408) of the naive stemmings are wasteful. We currently capture about half of them. Further reduction of the overhead requires sacrificing the dcg gain. Now we can compare the stemming strategies from a different aspect. Instead of looking at the influence over all queries as we described above, Table 6 summarizes the dcg improvements over the affected queries only. We can see that the number of affected queries decreases as the stemming strategy becomes more accurate (higher dcg improvement). For the bigram language model, over the 250/408 stemmed queries, the dcg improvement is 6.1%. An interesting observation is that the average dcg decreases with a better model, which indicates that a better stemming strategy stems more difficult queries (low dcg queries).

5. DISCUSSIONS
5.1 Language models from query vs.
from Web
As we mentioned in Section 1, we are trying to predict the probability of a string occurring on the Web. The language model should describe the occurrence of the string on the Web. However, the query log is also a good resource.

Footnote 2: Note that this upper-bound is for pluralization handling only, not for general stemming. General stemming gives an 8% upper-bound, which is quite substantial in terms of our metrics.

Strategy                           Affected Queries  dcg    dcg Improvement  p-value
baseline                           0/408             7.102  N/A              N/A
naive model                        408/408           7.206  1.5%             0.22
document context sensitive model   408/408           7.302  2.8%             0.014
selective model: unigram LM        300/408           7.321  3.4%             0.001
selective model: bigram LM         250/408           7.381  3.9%             0.001
oracle model                       100/408           7.519  5.9%             0.001

Table 4: Results comparison of different stemming strategies over all queries affected by naive stemming.

Short Query Results
Strategy                           Affected Queries  dcg Improvement  p-value
baseline                           0/272             N/A              N/A
naive model                        272/272           2.7%             0.48
document context sensitive model   272/272           4.2%             0.002
selective model: unigram LM        185/272           4.4%             0.001
selective model: bigram LM         150/272           4.7%             0.001
oracle model                       71/272            6.3%             0.001

Long Query Results
Strategy                           Affected Queries  dcg Improvement  p-value
baseline                           0/136             N/A              N/A
naive model                        136/136           -2.4%            0.25
document context sensitive model   136/136           -1.6%            0.27
selective model: unigram LM        115/136           1.1%             0.001
selective model: bigram LM         100/136           2.5%             0.001
oracle model                       29/136            4.6%             0.001

Table 5: Results comparison of different stemming strategies over short queries and long queries.

Users reformulate a query using many different variants to get good results. To test the hypothesis that we can learn reliable transformation probabilities from the query log, we trained a language model from the same 25M queries used to learn segmentation, and used it for prediction. We observed a slight performance decrease compared to the model trained on Web frequencies. In particular, the performance for the unigram LM was not
5.2 How linguistics helps

Some linguistic knowledge is useful in stemming. In the pluralization handling case, pluralization and de-pluralization are not symmetric. A plural word used in a query indicates a special intent. For example, the query new york hotels is looking for a list of hotels in new york, not the specific new york hotel, which might be a hotel located in California. A simple equivalence of hotel to hotels might boost a particular page about a new york hotel to top rank. To capture this intent, we have to make sure the document is a general page about hotels in new york. We do this by requiring that the plural word hotels appear in the document. On the other hand, converting a singular word to plural is safer, since a general-purpose page normally contains specific information. We observed a slight overall dcg decrease, although not statistically significant, for document context sensitive stemming when we do not take this asymmetric property into account.

5.3 Error analysis

One type of mistake we noticed, rare but seriously hurting relevance, is a change of search intent after stemming. Generally speaking, pluralization or de-pluralization keeps the original intent, but the intent can change in a few cases. For one example of such a query, job at apple, we pluralize job to jobs. This stemming makes the original query ambiguous: the query job OR jobs at apple has two intents. One is the employment opportunities at Apple; the other is a person working at Apple, Steve Jobs, the CEO and co-founder of the company. Thus, after query stemming the results return Steve Jobs among the top 5. One solution is performing result-set based analysis to check whether the intent has changed. This is similar to relevance feedback and requires a second-phase ranking.
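The form job OR jobs at apple above is the expanded query the engine actually evaluates. A sketch of that rewrite step (the function name and the parenthesized OR syntax are illustrative, not the engine's actual query language):

```python
def expand_query(tokens, expansions):
    """Rewrite each token that has a chosen variant as a disjunction,
    e.g. ['job', 'at', 'apple'] with {'job': 'jobs'}
    becomes '(job OR jobs) at apple'."""
    parts = []
    for t in tokens:
        if t in expansions:
            parts.append("({} OR {})".format(t, expansions[t]))
        else:
            parts.append(t)
    return " ".join(parts)
```

Only the tokens the selective model chose to expand are rewritten; everything else passes through unchanged, which is what keeps the expansion cost low.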
A second type of mistake is the entity/concept recognition problem, which comes in two kinds. One is that the stemmed word variant now matches part of an entity or concept. For example, the query cookies in san francisco is expanded to cookies OR cookie in san francisco, and the results will then match cookie jar in san francisco. Although cookie still means the same thing as cookies, cookie jar is a different concept. The other kind is that an unstemmed word matches an entity or concept because of the stemming of the other words. For example, quote ICE is pluralized to quote OR quotes ICE. The original intent of this query is searching for the stock quote for the ticker ICE. However, we noticed that among the top results, one is Food quotes: Ice cream. This is matched because of the pluralized word quotes; the unchanged word ICE matches part of the noun phrase ice cream here. To solve this kind of problem, we have to analyze the documents and recognize cookie jar and ice cream as concepts instead of two independent words.

Table 6: Results comparison over the stemmed queries only; the old/new dcg columns are the dcg scores over the affected queries before/after applying stemming

  Strategy                           Affected Queries   old dcg   new dcg   dcg Improvement
  naive model                        408/408            7.102     7.206     1.5%
  document context sensitive model   408/408            7.102     7.302     2.8%
  selective model: unigram LM        300/408            5.904     6.187     4.8%
  selective model: bigram LM         250/408            5.551     5.891     6.1%

A third type of mistake occurs in long queries. For the query bar code reader software, two words are pluralized: code to codes and reader to readers. In fact, bar code reader in the original query is a strong concept, and its internal words should not be changed. This is the segmentation and entity/noun phrase detection problem in queries, which we are actively attacking. For long queries, we should correctly identify the concepts in the query and boost the proximity for the words within a concept.
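Context sensitive document matching guards against exactly these mistakes: an occurrence of an expanded variant is accepted only when it appears near the other query terms. The sketch below is under our own assumptions (a fixed token window; the production matching lives inside the engine's indexing and ranking, not in a standalone function like this):

```python
def variant_matches_in_context(doc_tokens, variant, context_terms, window=2):
    """Accept an occurrence of an expanded variant only if some other
    query term occurs within `window` tokens of it in the document.
    The window size is a hypothetical parameter, not from the paper."""
    for i, tok in enumerate(doc_tokens):
        if tok != variant:
            continue
        lo, hi = max(0, i - window), i + window + 1
        # Check the neighborhood of this occurrence for any context term.
        if any(t in context_terms for t in doc_tokens[lo:hi]):
            return True
    return False
```

With the query book store expanded to include stores, a page containing "the best book stores in town" matches, while "reading a book in coffee stores" does not, since stores there is not in the context of book.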
6. CONCLUSIONS AND FUTURE WORK

We have presented a simple yet elegant way of stemming for Web search. It improves on naive stemming in two aspects: selective word expansion on the query side and conservative word occurrence matching on the document side. Using pluralization handling as an example, experiments on data from a major Web search engine show that it significantly improves Web relevance and reduces the stemming cost. It also significantly improves the Web click-through rate (details not reported in this paper). For future work, we are investigating the problems we identified in the error analysis section. These include entity and noun phrase matching mistakes, and improved segmentation.

7. REFERENCES

[1] E. Agichtein, E. Brill, and S. T. Dumais. Improving Web Search Ranking by Incorporating User Behavior Information. In SIGIR, 2006.
[2] E. Airio. Word Normalization and Decompounding in Mono- and Bilingual IR. Information Retrieval, 9:249-271, 2006.
[3] P. Anick. Using Terminological Feedback for Web Search Refinement: a Log-based Study. In SIGIR, 2003.
[4] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. ACM Press/Addison Wesley, 1999.
[5] S. Chen and J. Goodman. An Empirical Study of Smoothing Techniques for Language Modeling. Technical Report TR-10-98, Harvard University, 1998.
[6] S. Cronen-Townsend, Y. Zhou, and B. Croft. A Framework for Selective Query Expansion. In CIKM, 2004.
[7] H. Fang and C. Zhai. Semantic Term Matching in Axiomatic Approaches to Information Retrieval. In SIGIR, 2006.
[8] W. B. Frakes. Term Conflation for Information Retrieval. In C. J. van Rijsbergen, editor, Research and Development in Information Retrieval, pages 383-389. Cambridge University Press, 1984.
[9] D. Harman. How Effective is Suffixing? JASIS, 42(1):7-15, 1991.
[10] D. Hull. Stemming Algorithms: A Case Study for Detailed Evaluation. JASIS, 47(1):70-84, 1996.
[11] K. Jarvelin and J.
Kekalainen. Cumulated Gain-Based Evaluation of IR Techniques. ACM TOIS, 20:422-446, 2002.
[12] R. Jones, B. Rey, O. Madani, and W. Greiner. Generating Query Substitutions. In WWW, 2006.
[13] W. Kraaij and R. Pohlmann. Viewing Stemming as Recall Enhancement. In SIGIR, 1996.
[14] R. Krovetz. Viewing Morphology as an Inference Process. In SIGIR, 1993.
[15] D. Lin. Automatic Retrieval and Clustering of Similar Words. In COLING-ACL, 1998.
[16] J. B. Lovins. Development of a Stemming Algorithm. Mechanical Translation and Computational Linguistics, 11:22-31, 1968.
[17] M. Lennon, D. Peirce, B. Tarry, and P. Willett. An Evaluation of Some Conflation Algorithms for Information Retrieval. Journal of Information Science, 3:177-188, 1981.
[18] M. Porter. An Algorithm for Suffix Stripping. Program, 14(3):130-137, 1980.
[19] K. M. Risvik, T. Mikolajewski, and P. Boros. Query Segmentation for Web Search. In WWW, 2003.
[20] S. E. Robertson. On Term Selection for Query Expansion. Journal of Documentation, 46(4):359-364, 1990.
[21] G. Salton and C. Buckley. Improving Retrieval Performance by Relevance Feedback. JASIS, 41(4):288-297, 1990.
[22] R. Sun, C.-H. Ong, and T.-S. Chua. Mining Dependency Relations for Query Expansion in Passage Retrieval. In SIGIR, 2006.
[23] C. J. van Rijsbergen. Information Retrieval. Butterworths, second edition, 1979.
[24] B. Vélez, R. Weiss, M. A. Sheldon, and D. K. Gifford. Fast and Effective Query Refinement. In SIGIR, 1997.
[25] J. Xu and B. Croft. Query Expansion using Local and Global Document Analysis. In SIGIR, 1996.
[26] J. Xu and B.
Croft. Corpus-based Stemming using Cooccurrence of Word Variants. ACM TOIS, 16(1):61-81, 1998.
much more likely to give better search results than the original query because documents matching the intent of this query usually contain the words \"running shoes\".\nCorrectly formulating a query requires the user to accurately predict which word form is used in the documents that best satisfy his or her information needs.\nThis is difficult even for experienced users, and especially difficult for non-native speakers.\nOne traditional solution is to use stemming [16, 18], the process of transforming inflected or derived words to their root form so that a search term will match and retrieve documents containing all forms of the term.\nThus, the word \"run\" will match \"running\", \"ran\", \"runs\", and \"shoe\" will match \"shoes\" and \"shoeing\".\nStemming can be done either on the terms in a document during indexing (and applying the same transformation to the query terms during query processing) or by expanding the query with the variants during query processing.\nStemming during indexing allows very little flexibility during query processing, while stemming by query expansion allows handling each query differently, and hence is preferred.\nAlthough traditional stemming increases recall by matching word variants [13], it can reduce precision by retrieving too many documents that have been incorrectly matched.\nWhen examining the results of applying stemming to a large number of queries, one usually finds that nearly equal numbers of queries are helped and hurt by the technique [6].\nIn addition, it reduces system performance because the search engine has to match all the word variants.\nAs we will show in the experiments, this is true even if we simplify stemming to pluralization handling, which is the process of converting a word from its plural to singular form, or vice versa.\nThus, one needs to be very cautious when using stemming in Web search engines.\nOne problem of traditional stemming is its blind transformation of all query terms, that is, it 
always performs the same transformation for the same query word without considering the context of the word.\nFor example, the word \"book\" has four forms \"book, books, booking, booked\", and \"store\" has four forms \"store, stores, storing, stored\".\nFor the query \"book store\", expanding both words to all of their variants significantly increases computation cost and hurts\nprecision, since not all of the variants are useful for this query.\nTransforming \"book store\" to match \"book stores\" is fine, but matching \"book storing\" or \"booking store\" is not.\nA weighting method that gives variant words smaller weights alleviates the problems to a certain extent if the weights accurately reflect the importance of the variant in this particular query.\nHowever uniform weighting is not going to work and a query dependent weighting is still a challenging unsolved problem [20].\nA second problem of traditional stemming is its blind matching of all occurrences in documents.\nFor the query \"book store\", a transformation that allows the variant \"stores\" to be matched will cause every occurrence of \"stores\" in the document to be treated equivalent to the query term \"store\".\nThus, a document containing the fragment \"reading a book in coffee stores\" will be matched, causing many wrong documents to be selected.\nAlthough we hope the ranking function can correctly handle these, with many more candidates to rank, the risk of making mistakes increases.\nTo alleviate these two problems, we propose a context sensitive stemming approach for Web search.\nOur solution consists of two context sensitive analysis, one on the query side and the other on the document side.\nOn the query side, we propose a statistical language modeling based approach to predict which word variants are better forms than the original word for search purpose and expanding the query with only those forms.\nOn the document side, we propose a conservative context sensitive matching for the 
transformed word variants, only matching document occurrences in the context of other terms in the query.\nOur model is simple yet effective and efficient, making it feasible to be used in real commercial Web search engines.\nWe use pluralization handling as a running example for our stemming approach.\nThe motivation for using pluralization handling as an example is to show that even such simple stemming, if handled correctly, can give significant benefits to search relevance.\nAs far as we know, no previous research has systematically investigated the usage of pluralization in Web search.\nAs we have to point out, the method we propose is not limited to pluralization handling, it is a general stemming technique, and can also be applied to general query expansion.\nExperiments on general stemming yield additional significant improvements over pluralization handling for long queries, although details will not be reported in this paper.\nIn the rest of the paper, we first present the related work and distinguish our method from previous work in Section 2.\nWe describe the details of the context sensitive stemming approach in Section 3.\nWe then perform extensive experiments on a major Web search engine to support our claims in Section 4, followed by discussions in Section 5.\nFinally, we conclude the paper in Section 6.\n2.\nRELATED WORK\nStemming is a long studied technology.\nMany stemmers have been developed, such as the Lovins stemmer [16] and the Porter stemmer [18].\nThe Porter stemmer is widely used due to its simplicity and effectiveness in many applications.\nHowever, the Porter stemming makes many mistakes because its simple rules cannot fully describe English morphology.\nCorpus analysis is used to improve Porter stemmer [26] by creating equivalence classes for words that are morphologically similar and occur in similar context as measured by expected mutual information [23].\nWe use a similar corpus based approach for stemming by computing the similarity 
between two words based on their distributional context features which can be more than just adjacent words [15], and then only keep the morphologically similar words as candidates.\nUsing stemming in information retrieval is also a well known technique [8, 10].\nHowever, the effectiveness of stemming for English query systems was previously reported to be rather limited.\nLennon et al. [17] compared the Lovins and Porter algorithms and found little improvement in retrieval performance.\nLater, Harman [9] compares three general stemming techniques in text retrieval experiments including pluralization handing (called S stemmer in the paper).\nThey also proposed selective stemming based on query length and term importance, but no positive results were reported.\nOn the other hand, Krovetz [14] performed comparisons over small numbers of documents (from 400 to 12k) and showed dramatic precision improvement (up to 45%).\nHowever, due to the limited number of tested queries (less than 100) and the small size of the collection, the results are hard to generalize to Web search.\nThese mixed results, mostly failures, led early IR researchers to deem stemming irrelevant in general for English [4], although recent research has shown stemming has greater benefits for retrieval in other languages [2].\nWe suspect the previous failures were mainly due to the two problems we mentioned in the introduction.\nBlind stemming, or a simple query length based selective stemming as used in [9] is not enough.\nStemming has to be decided on case by case basis, not only at the query level but also at the document level.\nAs we will show, if handled correctly, significant improvement can be achieved.\nA more general problem related to stemming is query reformulation [3, 12] and query expansion which expands words not only with word variants [7, 22, 24, 25].\nTo decide which expanded words to use, people often use pseudorelevance feedback techniquesthat send the original query to a search 
engine and retrieve the top documents, extract relevant words from these top documents as additional query words, and resubmit the expanded query again [21].\nThis normally requires sending a query multiple times to search engine and it is not cost effective for processing the huge amount of queries involved in Web search.\nIn addition, query expansion, including query reformulation [3, 12], has a high risk of changing the user intent (called query drift).\nSince the expanded words may have different meanings, adding them to the query could potentially change the intent of the original query.\nThus query expansion based on pseudorelevance and query reformulation can provide suggestions to users for interactive refinement but can hardly be directly used for Web search.\nOn the other hand, stemming is much more conservative since most of the time, stemming preserves the original search intent.\nWhile most work on query expansion focuses on recall enhancement, our work focuses on increasing both recall and precision.\nThe increase on recall is obvious.\nWith quality stemming, good documents which were not selected before stemming will be pushed up and those low quality documents will be degraded.\nOn selective query expansion, Cronen-Townsend et al. 
[6] proposed a method for selective query expansion based on comparing the Kullback-Leibler divergence of the results from the unexpanded query and the results from the expanded query.\nThis is similar to the relevance feedback in\nthe sense that it requires multiple passes retrieval.\nIf a word can be expanded into several words, it requires running this process multiple times to decide which expanded word is useful.\nIt is expensive to deploy this in production Web search engines.\nOur method predicts the quality of expansion based on offline information without sending the query to a search engine.\nIn summary, we propose a novel approach to attack an old, yet still important and challenging problem for Web search--stemming.\nOur approach is unique in that it performs predictive stemming on a per query basis without relevance feedback from the Web, using the context of the variants in documents to preserve precision.\nIt's simple, yet very efficient and effective, making real time stemming feasible for Web search.\nOur results will affirm researchers that stemming is indeed very important to large scale information retrieval.\n3.\nCONTEXT SENSITIVE STEMMING\n3.1 Overview\n3.2 Expansion candidate generation\n3.3 Segmentation and headword identification\n3.4 Context sensitive word expansion\n3.5 Context sensitive document matching\n4.\nEXPERIMENTAL EVALUATION\n4.1 Evaluation metrics\n4.1.1 Relevance measurement\n4.1.2 Stemming cost\n4.2 Data preparation\n4.3 Naive stemming for Web search\n4.4 Experimental setup\n4.5 Results\n5.\nDISCUSSIONS\n5.1 Language models from query vs. 
from Web\n5.2 How linguistics helps\n5.3 Error analysis\n6.\nCONCLUSIONS AND FUTURE WORK\nWe have presented a simple yet elegant way of stemming for Web search.\nIt improves naive stemming in two aspects: selective word expansion on the query side and conservative word occurrence matching on the document side.\nUsing pluralization handling as an example, experiments on a major Web search engine data show it significantly improves the Web relevance and reduces the stemming cost.\nIt also significantly improves Web click through rate (details not reported in the paper).\nFor the future work, we are investigating the problems we identified in the error analysis section.\nThese include: entity and noun phrase matching mistakes, and improved segmentation.","lvl-4":"Context Sensitive Stemming for Web Search\nABSTRACT\nTraditionally, stemming has been applied to Information Retrieval tasks by transforming words in documents to the their root form before indexing, and applying a similar transformation to query terms.\nAlthough it increases recall, this naive strategy does not work well for Web Search since it lowers precision and requires a significant amount of additional computation.\nIn this paper, we propose a context sensitive stemming method that addresses these two issues.\nTwo unique properties make our approach feasible for Web Search.\nFirst, based on statistical language modeling, we perform context sensitive analysis on the query side.\nWe accurately predict which of its morphological variants is useful to expand a query term with before submitting the query to the search engine.\nThis dramatically reduces the number of bad expansions, which in turn reduces the cost of additional computation and improves the precision at the same time.\nSecond, our approach performs a context sensitive document matching for those expanded variants.\nThis conservative strategy serves as a safeguard against spurious stemming, and it turns out to be very important for improving 
precision.\nUsing word pluralization handling as an example of our stemming approach, our experiments on a major Web search engine show that stemming only 29% of the query traffic, we can improve relevance as measured by average Discounted Cumulative Gain (DCG5) by 6.1% on these queries and 1.8% over all query traffic.\n1.\nINTRODUCTION\nWeb search has now become a major tool in our daily lives for information seeking.\nOne of the important issues in Web search is that user queries are often not best formulated to get optimal results.\nFor example, \"running shoe\" is a query that occurs frequently in query logs.\nHowever, the query \"running shoes\" is much more likely to give better search results than the original query because documents matching the intent of this query usually contain the words \"running shoes\".\nCorrectly formulating a query requires the user to accurately predict which word form is used in the documents that best satisfy his or her information needs.\nOne traditional solution is to use stemming [16, 18], the process of transforming inflected or derived words to their root form so that a search term will match and retrieve documents containing all forms of the term.\nStemming can be done either on the terms in a document during indexing (and applying the same transformation to the query terms during query processing) or by expanding the query with the variants during query processing.\nStemming during indexing allows very little flexibility during query processing, while stemming by query expansion allows handling each query differently, and hence is preferred.\nAlthough traditional stemming increases recall by matching word variants [13], it can reduce precision by retrieving too many documents that have been incorrectly matched.\nWhen examining the results of applying stemming to a large number of queries, one usually finds that nearly equal numbers of queries are helped and hurt by the technique [6].\nIn addition, it reduces system 
performance because the search engine has to match all the word variants.\nAs we will show in the experiments, this is true even if we simplify stemming to pluralization handling, which is the process of converting a word from its plural to singular form, or vice versa.\nThus, one needs to be very cautious when using stemming in Web search engines.\nOne problem of traditional stemming is its blind transformation of all query terms, that is, it always performs the same transformation for the same query word without considering the context of the word.\nFor the query \"book store\", expanding both words to all of their variants significantly increases computation cost and hurts\nprecision, since not all of the variants are useful for this query.\nA weighting method that gives variant words smaller weights alleviates the problems to a certain extent if the weights accurately reflect the importance of the variant in this particular query.\nHowever uniform weighting is not going to work and a query dependent weighting is still a challenging unsolved problem [20].\nA second problem of traditional stemming is its blind matching of all occurrences in documents.\nFor the query \"book store\", a transformation that allows the variant \"stores\" to be matched will cause every occurrence of \"stores\" in the document to be treated equivalent to the query term \"store\".\nThus, a document containing the fragment \"reading a book in coffee stores\" will be matched, causing many wrong documents to be selected.\nTo alleviate these two problems, we propose a context sensitive stemming approach for Web search.\nOur solution consists of two context sensitive analysis, one on the query side and the other on the document side.\nOn the query side, we propose a statistical language modeling based approach to predict which word variants are better forms than the original word for search purpose and expanding the query with only those forms.\nOn the document side, we propose a conservative 
context sensitive matching for the transformed word variants, only matching document occurrences in the context of other terms in the query.\nOur model is simple yet effective and efficient, making it feasible to be used in real commercial Web search engines.\nWe use pluralization handling as a running example for our stemming approach.\nThe motivation for using pluralization handling as an example is to show that even such simple stemming, if handled correctly, can give significant benefits to search relevance.\nAs far as we know, no previous research has systematically investigated the usage of pluralization in Web search.\nAs we have to point out, the method we propose is not limited to pluralization handling, it is a general stemming technique, and can also be applied to general query expansion.\nExperiments on general stemming yield additional significant improvements over pluralization handling for long queries, although details will not be reported in this paper.\nIn the rest of the paper, we first present the related work and distinguish our method from previous work in Section 2.\nWe describe the details of the context sensitive stemming approach in Section 3.\nWe then perform extensive experiments on a major Web search engine to support our claims in Section 4, followed by discussions in Section 5.\nFinally, we conclude the paper in Section 6.\n2.\nRELATED WORK\nStemming is a long studied technology.\nThe Porter stemmer is widely used due to its simplicity and effectiveness in many applications.\nHowever, the Porter stemming makes many mistakes because its simple rules cannot fully describe English morphology.\nUsing stemming in information retrieval is also a well known technique [8, 10].\nHowever, the effectiveness of stemming for English query systems was previously reported to be rather limited.\nLater, Harman [9] compares three general stemming techniques in text retrieval experiments including pluralization handing (called S stemmer in the 
paper).\nThey also proposed selective stemming based on query length and term importance, but no positive results were reported.\nHowever, due to the limited number of tested queries (less than 100) and the small size of the collection, the results are hard to generalize to Web search.\nWe suspect the previous failures were mainly due to the two problems we mentioned in the introduction.\nBlind stemming, or a simple query length based selective stemming as used in [9] is not enough.\nStemming has to be decided on case by case basis, not only at the query level but also at the document level.\nA more general problem related to stemming is query reformulation [3, 12] and query expansion which expands words not only with word variants [7, 22, 24, 25].\nThis normally requires sending a query multiple times to search engine and it is not cost effective for processing the huge amount of queries involved in Web search.\nIn addition, query expansion, including query reformulation [3, 12], has a high risk of changing the user intent (called query drift).\nSince the expanded words may have different meanings, adding them to the query could potentially change the intent of the original query.\nThus query expansion based on pseudorelevance and query reformulation can provide suggestions to users for interactive refinement but can hardly be directly used for Web search.\nOn the other hand, stemming is much more conservative since most of the time, stemming preserves the original search intent.\nWhile most work on query expansion focuses on recall enhancement, our work focuses on increasing both recall and precision.\nThe increase on recall is obvious.\nWith quality stemming, good documents which were not selected before stemming will be pushed up and those low quality documents will be degraded.\nOn selective query expansion, Cronen-Townsend et al. 
[6] proposed a method for selective query expansion based on comparing the Kullback-Leibler divergence of the results from the unexpanded query and the results from the expanded query.\nThis is similar to the relevance feedback in\nthe sense that it requires multiple passes retrieval.\nIf a word can be expanded into several words, it requires running this process multiple times to decide which expanded word is useful.\nIt is expensive to deploy this in production Web search engines.\nOur method predicts the quality of expansion based on offline information without sending the query to a search engine.\nIn summary, we propose a novel approach to attack an old, yet still important and challenging problem for Web search--stemming.\nOur approach is unique in that it performs predictive stemming on a per query basis without relevance feedback from the Web, using the context of the variants in documents to preserve precision.\nIt's simple, yet very efficient and effective, making real time stemming feasible for Web search.\nOur results will affirm researchers that stemming is indeed very important to large scale information retrieval.\n6.\nCONCLUSIONS AND FUTURE WORK\nWe have presented a simple yet elegant way of stemming for Web search.\nIt improves naive stemming in two aspects: selective word expansion on the query side and conservative word occurrence matching on the document side.\nUsing pluralization handling as an example, experiments on a major Web search engine data show it significantly improves the Web relevance and reduces the stemming cost.\nIt also significantly improves Web click through rate (details not reported in the paper).\nFor the future work, we are investigating the problems we identified in the error analysis section.\nThese include: entity and noun phrase matching mistakes, and improved segmentation.","lvl-2":"Context Sensitive Stemming for Web Search\nABSTRACT\nTraditionally, stemming has been applied to Information Retrieval tasks by 
transforming words in documents to their root form before indexing, and applying a similar transformation to query terms.\nAlthough it increases recall, this naive strategy does not work well for Web Search since it lowers precision and requires a significant amount of additional computation.\nIn this paper, we propose a context sensitive stemming method that addresses these two issues.\nTwo unique properties make our approach feasible for Web Search.\nFirst, based on statistical language modeling, we perform context sensitive analysis on the query side.\nWe accurately predict which of its morphological variants is useful to expand a query term with before submitting the query to the search engine.\nThis dramatically reduces the number of bad expansions, which in turn reduces the cost of additional computation and improves the precision at the same time.\nSecond, our approach performs context sensitive document matching for those expanded variants.\nThis conservative strategy serves as a safeguard against spurious stemming, and it turns out to be very important for improving precision.\nUsing word pluralization handling as an example of our stemming approach, our experiments on a major Web search engine show that by stemming only 29% of the query traffic, we can improve relevance as measured by average Discounted Cumulative Gain (DCG5) by 6.1% on these queries and 1.8% over all query traffic.\n1.\nINTRODUCTION\nWeb search has now become a major tool in our daily lives for information seeking.\nOne of the important issues in Web search is that user queries are often not best formulated to get optimal results.\nFor example, "running shoe" is a query that occurs frequently in query logs.\nHowever, the query "running shoes" is much more likely to give better search results than the original query because documents matching the intent of this query usually contain the words "running shoes".\nCorrectly formulating a query requires the user to accurately predict
which word form is used in the documents that best satisfy his or her information needs.\nThis is difficult even for experienced users, and especially difficult for non-native speakers.\nOne traditional solution is to use stemming [16, 18], the process of transforming inflected or derived words to their root form so that a search term will match and retrieve documents containing all forms of the term.\nThus, the word "run" will match "running", "ran", "runs", and "shoe" will match "shoes" and "shoeing".\nStemming can be done either by transforming the terms in a document during indexing (and applying the same transformation to the query terms during query processing) or by expanding the query with the variants during query processing.\nStemming during indexing allows very little flexibility during query processing, while stemming by query expansion allows handling each query differently, and hence is preferred.\nAlthough traditional stemming increases recall by matching word variants [13], it can reduce precision by retrieving too many documents that have been incorrectly matched.\nWhen examining the results of applying stemming to a large number of queries, one usually finds that nearly equal numbers of queries are helped and hurt by the technique [6].\nIn addition, it reduces system performance because the search engine has to match all the word variants.\nAs we will show in the experiments, this is true even if we simplify stemming to pluralization handling, which is the process of converting a word from its plural to singular form, or vice versa.\nThus, one needs to be very cautious when using stemming in Web search engines.\nOne problem of traditional stemming is its blind transformation of all query terms, that is, it always performs the same transformation for the same query word without considering the context of the word.\nFor example, the word "book" has four forms "book, books, booking, booked", and "store" has four forms "store, stores, storing,
stored\".\nFor the query \"book store\", expanding both words to all of their variants significantly increases computation cost and hurts\nprecision, since not all of the variants are useful for this query.\nTransforming \"book store\" to match \"book stores\" is fine, but matching \"book storing\" or \"booking store\" is not.\nA weighting method that gives variant words smaller weights alleviates the problems to a certain extent if the weights accurately reflect the importance of the variant in this particular query.\nHowever uniform weighting is not going to work and a query dependent weighting is still a challenging unsolved problem [20].\nA second problem of traditional stemming is its blind matching of all occurrences in documents.\nFor the query \"book store\", a transformation that allows the variant \"stores\" to be matched will cause every occurrence of \"stores\" in the document to be treated equivalent to the query term \"store\".\nThus, a document containing the fragment \"reading a book in coffee stores\" will be matched, causing many wrong documents to be selected.\nAlthough we hope the ranking function can correctly handle these, with many more candidates to rank, the risk of making mistakes increases.\nTo alleviate these two problems, we propose a context sensitive stemming approach for Web search.\nOur solution consists of two context sensitive analysis, one on the query side and the other on the document side.\nOn the query side, we propose a statistical language modeling based approach to predict which word variants are better forms than the original word for search purpose and expanding the query with only those forms.\nOn the document side, we propose a conservative context sensitive matching for the transformed word variants, only matching document occurrences in the context of other terms in the query.\nOur model is simple yet effective and efficient, making it feasible to be used in real commercial Web search engines.\nWe use pluralization 
handling as a running example for our stemming approach.\nThe motivation for using pluralization handling as an example is to show that even such simple stemming, if handled correctly, can give significant benefits to search relevance.\nAs far as we know, no previous research has systematically investigated the usage of pluralization in Web search.\nWe emphasize that the method we propose is not limited to pluralization handling; it is a general stemming technique that can also be applied to general query expansion.\nExperiments on general stemming yield additional significant improvements over pluralization handling for long queries, although details will not be reported in this paper.\nIn the rest of the paper, we first present the related work and distinguish our method from previous work in Section 2.\nWe describe the details of the context sensitive stemming approach in Section 3.\nWe then perform extensive experiments on a major Web search engine to support our claims in Section 4, followed by discussions in Section 5.\nFinally, we conclude the paper in Section 6.\n2.\nRELATED WORK\nStemming is a long studied technology.\nMany stemmers have been developed, such as the Lovins stemmer [16] and the Porter stemmer [18].\nThe Porter stemmer is widely used due to its simplicity and effectiveness in many applications.\nHowever, the Porter stemmer makes many mistakes because its simple rules cannot fully describe English morphology.\nCorpus analysis has been used to improve the Porter stemmer [26] by creating equivalence classes for words that are morphologically similar and occur in similar context as measured by expected mutual information [23].\nWe use a similar corpus based approach for stemming by computing the similarity between two words based on their distributional context features, which can be more than just adjacent words [15], and then only keep the morphologically similar words as candidates.\nUsing stemming in information retrieval is also a well known
technique [8, 10].\nHowever, the effectiveness of stemming for English query systems was previously reported to be rather limited.\nLennon et al. [17] compared the Lovins and Porter algorithms and found little improvement in retrieval performance.\nLater, Harman [9] compared three general stemming techniques in text retrieval experiments, including pluralization handling (called the S stemmer in that paper).\nThat work also proposed selective stemming based on query length and term importance, but no positive results were reported.\nOn the other hand, Krovetz [14] performed comparisons over small numbers of documents (from 400 to 12k) and showed dramatic precision improvement (up to 45%).\nHowever, due to the limited number of tested queries (less than 100) and the small size of the collection, the results are hard to generalize to Web search.\nThese mixed results, mostly failures, led early IR researchers to deem stemming irrelevant in general for English [4], although recent research has shown stemming has greater benefits for retrieval in other languages [2].\nWe suspect the previous failures were mainly due to the two problems we mentioned in the introduction.\nBlind stemming, or simple query length based selective stemming as used in [9], is not enough.\nStemming has to be decided on a case by case basis, not only at the query level but also at the document level.\nAs we will show, if handled correctly, significant improvement can be achieved.\nA more general problem related to stemming is query reformulation [3, 12] and query expansion, which expands words not only with word variants [7, 22, 24, 25].\nTo decide which expanded words to use, people often use pseudo-relevance feedback techniques that send the original query to a search engine, retrieve the top documents, extract relevant words from these top documents as additional query words, and resubmit the expanded query [21].\nThis normally requires sending a query multiple times to the search engine and it is not
cost effective for processing the huge number of queries involved in Web search.\nIn addition, query expansion, including query reformulation [3, 12], has a high risk of changing the user intent (called query drift).\nSince the expanded words may have different meanings, adding them to the query could potentially change the intent of the original query.\nThus, query expansion based on pseudo-relevance feedback and query reformulation can provide suggestions to users for interactive refinement but can hardly be directly used for Web search.\nOn the other hand, stemming is much more conservative, since most of the time stemming preserves the original search intent.\nWhile most work on query expansion focuses on recall enhancement, our work focuses on increasing both recall and precision.\nThe increase in recall is obvious.\nWith quality stemming, good documents which were not selected before stemming will be pushed up, while low quality documents will be demoted.\nOn selective query expansion, Cronen-Townsend et al.
[6] proposed a method for selective query expansion based on comparing the Kullback-Leibler divergence of the results from the unexpanded query and the results from the expanded query.\nThis is similar to relevance feedback in the sense that it requires multiple retrieval passes.\nIf a word can be expanded into several words, it requires running this process multiple times to decide which expanded word is useful.\nIt is expensive to deploy this in production Web search engines.\nOur method predicts the quality of expansion based on offline information, without sending the query to a search engine.\nIn summary, we propose a novel approach to attack an old, yet still important and challenging problem for Web search--stemming.\nOur approach is unique in that it performs predictive stemming on a per query basis without relevance feedback from the Web, using the context of the variants in documents to preserve precision.\nIt is simple, yet very efficient and effective, making real time stemming feasible for Web search.\nOur results should reaffirm for researchers that stemming is indeed very important to large scale information retrieval.\n3.\nCONTEXT SENSITIVE STEMMING\n3.1 Overview\nOur system has four components, as illustrated in Figure 1: candidate generation, query segmentation and head word detection, context sensitive query stemming, and context sensitive document matching.\nCandidate generation (component 1) is performed offline and the generated candidates are stored in a dictionary.\nFor an input query, we first segment the query into concepts and detect the head word for each concept (component 2).\nWe then use statistical language modeling to decide whether a particular variant is useful (component 3), and finally, for the expanded variants, we perform context sensitive document matching (component 4).\nBelow we discuss each of the components in more detail.\nFigure 1: System Components\n3.2 Expansion candidate generation\nOne of the ways to generate candidates is using
the Porter stemmer [18].\nThe Porter stemmer simply uses morphological rules to convert a word to its base form.\nIt has no knowledge of the semantic meaning of the words and sometimes makes serious mistakes, such as "executive" to "execution", "news" to "new", and "paste" to "past".\nA more conservative way is to use corpus analysis to improve the Porter stemmer results [26].\nThe corpus analysis we do is based on word distributional similarity [15].\nThe rationale for using distributional word similarity is that true variants tend to be used in similar contexts.\nIn the distributional word similarity calculation, each word is represented with a vector of features derived from the context of the word.\nWe use the bigrams to the left and right of the word as its context features, mined from a huge Web corpus.\nThe similarity between two words is the cosine similarity between the two corresponding feature vectors.\nThe top 20 most similar words to "develop" are shown in the following table.\nTable 1: Top 20 most similar candidates to word "develop".\nColumn score is the similarity score.\nTo determine the stemming candidates, we apply a few Porter stemmer [18] morphological rules to the similarity list.\nAfter applying these rules, for the word "develop", the stemming candidates are "developing, developed, develops, development, developement, developer, developmental".\nFor pluralization handling, only the candidate "develops" is retained.\nOne thing we note from observing the distributionally similar words is that they are closely related semantically.\nThese words might serve as candidates for general query expansion, a topic we will investigate in the future.\n3.3 Segmentation and headword identification\nFor long queries, it is quite important to detect the concepts in the query and the most important words for those concepts.\nWe first break a query into segments, each segment representing a concept which normally is a noun
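The candidate generation step above can be illustrated with a small Python sketch: it collects left/right bigram context features for a word and scores word pairs by cosine similarity. The function names and the toy token list are our own illustrations, not the authors' code; a real system would mine these features from a huge Web corpus rather than a single token list.

```python
from collections import Counter
from math import sqrt

def context_features(word, corpus_tokens):
    """Collect left and right bigram context features for each occurrence
    of `word`, as in the distributional-similarity step (feature naming
    is illustrative)."""
    feats = Counter()
    for i, tok in enumerate(corpus_tokens):
        if tok == word:
            left = corpus_tokens[max(0, i - 2):i]
            right = corpus_tokens[i + 1:i + 3]
            if len(left) == 2:
                feats["L:" + " ".join(left)] += 1
            if len(right) == 2:
                feats["R:" + " ".join(right)] += 1
    return feats

def cosine(f1, f2):
    """Cosine similarity between two sparse feature-count vectors."""
    dot = sum(f1[k] * f2[k] for k in f1 if k in f2)
    n1 = sqrt(sum(v * v for v in f1.values()))
    n2 = sqrt(sum(v * v for v in f2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

True variants such as "develop"/"develops" end up sharing most context features and therefore score near the top of each other's similarity lists, after which the morphological rules prune the non-variants.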
phrase.\nFor each of the noun phrases, we then detect the most important word, which we call the head word.\nSegmentation is also used in document sensitive matching (section 3.5) to enforce proximity.\nTo break a query into segments, we have to define a criterion to measure the strength of the relation between words.\nOne effective method is to use mutual information as an indicator of whether or not to split two words [19].\nWe use a log of 25M queries and collect the bigram and unigram frequencies from it.\nFor every incoming query, we compute the mutual information of two adjacent words; if it passes a predefined threshold, we do not split the query between those two words and move on to the next word.\nWe continue this process until the mutual information between two words is below the threshold, then create a concept boundary there.\nTable 2 shows some examples of query segmentation.\nTable 2: Query segmentation: a segment is bracketed.\nThe ideal way of finding the head word of a concept is to do syntactic parsing to determine the dependency structure of the query.\nQuery parsing is more difficult than sentence parsing since many queries are not grammatical and are very short.\nApplying a parser trained on sentences from documents to queries will have poor performance.\nIn our solution, we just use simple heuristic rules, and this works very well in practice for English.\nFor an English noun phrase, the head word is typically the last nonstop word, unless the phrase is of a particular pattern, like "XYZ of\/in\/at\/from UVW".\nIn such cases, the head word is typically the last nonstop word of XYZ.\n3.4 Context sensitive word expansion\nAfter detecting which words are the most important words to expand, we have to decide whether the expansions will be useful.\nOur statistics show that about half of the queries can be transformed by pluralization via
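The segmentation and head word heuristics described in this section can be sketched as follows. The threshold, the toy unigram/bigram counts (standing in for the 25M-query log), and the stopword list are illustrative assumptions, not values from the paper.

```python
from math import log

# Toy counts standing in for the 25M-query log (all numbers illustrative).
UNIGRAM = {"hotel": 500, "price": 800, "comparison": 300, "new": 900, "york": 400}
BIGRAM = {("price", "comparison"): 120, ("new", "york"): 350, ("hotel", "price"): 5}
TOTAL = 10000

def mutual_information(w1, w2):
    """Pointwise mutual information of an adjacent word pair."""
    p12 = BIGRAM.get((w1, w2), 0) / TOTAL
    if p12 == 0:
        return float("-inf")
    p1 = UNIGRAM.get(w1, 1) / TOTAL
    p2 = UNIGRAM.get(w2, 1) / TOTAL
    return log(p12 / (p1 * p2))

def segment(query_words, threshold=1.0):
    """Greedy left-to-right segmentation: keep adjacent words in one
    segment while their PMI passes the threshold, else start a new one."""
    segments = [[query_words[0]]]
    for prev, cur in zip(query_words, query_words[1:]):
        if mutual_information(prev, cur) >= threshold:
            segments[-1].append(cur)
        else:
            segments.append([cur])
    return segments

STOPWORDS = {"of", "in", "at", "from", "the", "a"}

def head_word(segment_words):
    """Heuristic head word: last non-stop word, unless the segment matches
    the pattern 'XYZ of/in/at/from UVW', in which case take the last
    non-stop word of XYZ."""
    for i, w in enumerate(segment_words):
        if w in {"of", "in", "at", "from"}:
            segment_words = segment_words[:i]
            break
    non_stop = [w for w in segment_words if w not in STOPWORDS]
    return non_stop[-1] if non_stop else None
```

With these counts, "hotel price comparison" splits into the segments [hotel] and [price comparison], matching the running example in the text.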
naive stemming.\nAmong this half, about 25% of the queries improve relevance when transformed, the majority (about 50%) do not change their top 5 results, and the remaining 25% perform worse.\nThus, it is extremely important to identify which queries should not be stemmed, for the purpose of maximizing relevance improvement and minimizing stemming cost.\nIn addition, for a query with multiple words that can be transformed, or a word with multiple variants, not all of the expansions are useful.\nTaking the query "hotel price comparison" as an example, we decide that "hotel" and "price comparison" are two concepts.\nHead words "hotel" and "comparison" can be expanded to "hotels" and "comparisons".\nAre both transformations useful?\nTo test whether an expansion is useful, we have to know whether the expanded query is likely to get more relevant documents from the Web, which can be quantified by the probability of the query occurring as a string on the Web.\nThe more likely a query is to occur on the Web, the more relevant documents it is able to return.\nNow the whole problem becomes how to calculate the probability of a query occurring on the Web.\nCalculating the probability of a string occurring in a corpus is a well known language modeling problem.\nThe goal of language modeling is to predict the probability of naturally occurring word sequences, s = w_1 w_2 \ldots w_N; or more simply, to put high probability on word sequences that actually occur (and low probability on word sequences that never occur).\nThe simplest and most successful approach to language modeling is still based on the n-gram model.\nBy the chain rule of probability, one can write the probability of any word sequence as\n\Pr(w_1 w_2 \ldots w_N) = \prod_{i=1}^{N} \Pr(w_i \mid w_1 \ldots w_{i-1}) \quad (1)\nAn n-gram model approximates this probability by assuming that the only words relevant to predicting \Pr(w_i \mid w_1 \ldots w_{i-1}) are the previous n-1 words; i.e.\n\Pr(w_i \mid w_1 \ldots w_{i-1}) \approx \Pr(w_i \mid w_{i-n+1} \ldots w_{i-1})\nA straightforward maximum likelihood estimate of n-gram probabilities from a corpus is given by the observed frequency of each of the patterns\n\Pr(w_i \mid w_{i-n+1} \ldots w_{i-1}) = \frac{\#(w_{i-n+1} \ldots w_i)}{\#(w_{i-n+1} \ldots w_{i-1})} \quad (2)\nwhere \#(.)\ndenotes the number of occurrences of a specified gram in the training corpus.\nAlthough one could attempt to use simple n-gram models to capture long range dependencies in language, attempting to do so directly immediately creates sparse data problems: using grams of length up to n entails estimating the probability of W^n events, where W is the size of the word vocabulary.\nThis quickly overwhelms modern computational and data resources for even modest choices of n (beyond 3 to 6).\nAlso, because of the heavy tailed nature of language (i.e. Zipf's law) one is likely to encounter novel n-grams that were never witnessed during training in any test corpus, and therefore some mechanism for assigning non-zero probability to novel n-grams is a central and unavoidable issue in statistical language modeling.\nOne standard approach to smoothing probability estimates to cope with sparse data problems (and to cope with potentially missing n-grams) is to use some sort of back-off estimator:\n\Pr(w_i \mid w_{i-n+1} \ldots w_{i-1}) = \hat{\Pr}(w_i \mid w_{i-n+1} \ldots w_{i-1}) if \#(w_{i-n+1} \ldots w_i) > 0, and \beta(w_{i-n+1} \ldots w_{i-1}) \Pr(w_i \mid w_{i-n+2} \ldots w_{i-1}) otherwise \quad (3)\nwhere\n\hat{\Pr}(w_i \mid w_{i-n+1} \ldots w_{i-1}) \quad (4)\nis the discounted probability and \beta(w_{i-n+1} \ldots w_{i-1}) is a normalization constant.\nThe discounted probability (4) can be computed with different smoothing techniques, including absolute smoothing, Good-Turing smoothing, linear smoothing, and Witten-Bell smoothing [5].\nWe used absolute smoothing in our experiments.\nSince the likelihood of a string, \Pr(w_1 w_2 \ldots w_N), is a very small number and hard to interpret, we use entropy as defined below to score the string:\nH(s) = -\frac{1}{N} \log_2 \Pr(w_1 w_2 \ldots w_N) \quad (5)\nNow getting back to the example of the query "hotel price comparison", there are four variants of this query, and the entropy of each of these four candidates is shown in Table 3.\nWe can see that all alternatives are less likely than the input query.\nIt is therefore not useful to make an expansion for this query.\nOn the other hand, if the input query is "hotel price comparisons", which is the second alternative in the table, then there is a better alternative than the input query, and it should
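A minimal sketch of the variant-scoring machinery described above, assuming toy Web n-gram counts: a bigram model with absolute discounting, a simplified back-off weight in place of the exact normalization constant β, and the length-normalized entropy score used to rank query variants. The class name and all counts are our own illustrations, not the authors' implementation.

```python
from math import log2

class BigramLM:
    """Bigram language model with absolute discounting and back-off to
    unigrams (sketch; the back-off normalization is simplified)."""

    def __init__(self, unigrams, bigrams, discount=0.5):
        self.uni = unigrams   # word -> count
        self.bi = bigrams     # (prev, word) -> count
        self.d = discount     # absolute-discount constant
        self.total = sum(unigrams.values())

    def prob(self, word, prev):
        c_prev = self.uni.get(prev, 0)
        c_bi = self.bi.get((prev, word), 0)
        p_uni = self.uni.get(word, 0.5) / self.total  # floor for unseen words
        if c_prev == 0:
            return p_uni
        if c_bi > 0:
            # discounted bigram probability (absolute smoothing)
            return (c_bi - self.d) / c_prev
        # back-off: reuse the discount mass freed from seen bigrams
        # (a simplified stand-in for the exact constant beta)
        seen = sum(1 for key in self.bi if key[0] == prev)
        beta = self.d * seen / c_prev
        return beta * p_uni

    def entropy(self, words):
        """Length-normalized negative log-likelihood; lower means the
        string is more likely to occur in the corpus."""
        logp = log2(self.uni.get(words[0], 0.5) / self.total)
        for prev, word in zip(words, words[1:]):
            logp += log2(self.prob(word, prev))
        return -logp / len(words)
```

Scoring every pluralization variant of a query with `entropy` and keeping only those within 10% of the best score reproduces the selection rule discussed in the text.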
therefore be expanded.\nTo tolerate variations in probability estimation, we relax the selection criterion to accept those query alternatives whose scores are within a certain distance (10% in our experiments) of the best score.\nTable 3: Variations of query "hotel price comparison" ranked by entropy score, with the original query in bold face.\n3.5 Context sensitive document matching\nEven after we know which word variants are likely to be useful, we have to be conservative in document matching for the expanded variants.\nFor the query "hotel price comparisons", we decided that the word "comparisons" is expanded to include "comparison".\nHowever, not every occurrence of "comparison" in the document is of interest.\nA page about comparing customer service can contain all of the words "hotel", "price", "comparisons", and "comparison".\nThis page is not a good page for the query.\nIf we accept matches of every occurrence of "comparison", it will hurt retrieval precision, and this is one of the main reasons why most stemming approaches do not work well for information retrieval.\nTo address this problem, we have a proximity constraint that considers the context around the expanded variant in the document.\nA variant match is considered valid only if the variant occurs in the same context as the original word does.\nThe context is the left or the right non-stop segments of the original word.\nTaking the same query as an example, the context of "comparisons" is "price".\nThe expanded word "comparison" is only valid if it is in the same context as "comparisons", which is after the word "price".\nThus, we should only match those occurrences of "comparison" in the document if they occur after the word "price".\nConsidering the fact that queries and documents may not represent the intent in exactly the same way, we relax this proximity constraint to allow variant occurrences within a window of some fixed size.\nIf the expanded word "comparison" occurs
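The conservative matching rule described in this section can be sketched as a simple window check. The function name, documents, and the position-list interface are illustrative; the paper's actual matcher operates inside the engine's index, and the window size of 4 is the value reported in the text.

```python
def valid_variant_positions(doc_tokens, variant, context_word, window=4):
    """Accept an occurrence of the expanded variant only if the query-side
    context word appears within `window` tokens of it; other occurrences
    are rejected as spurious matches."""
    valid = []
    for i, tok in enumerate(doc_tokens):
        if tok == variant:
            lo, hi = max(0, i - window), i + window + 1
            if context_word in doc_tokens[lo:hi]:
                valid.append(i)
    return valid
```

An occurrence of "comparison" next to "price" is kept, while an occurrence with no nearby "price" is rejected, which is exactly the safeguard that protects precision.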
within the context of \"price\" within a window, it is considered valid.\nThe smaller the window size is, the more restrictive the matching.\nWe use a window size of 4, which typically captures contexts that include the containing and adjacent noun phrases.\n4.\nEXPERIMENTAL EVALUATION\n4.1 Evaluation metrics\nWe will measure both relevance improvement and the stemming cost required to achieve the relevance.\n4.1.1 Relevance measurement\nWe use a variant of the average Discounted Cumulative Gain (DCG), a recently popularized scheme to measure search engine relevance [1, 11].\nGiven a query and a ranked list of K documents (K is set to 5 in our experiments), the DCG (K) score for this query is calculated as follows:\nwhere gk is the weight for the document at rank k. Higher degree of relevance corresponds to a higher weight.\nA page is graded into one of the five scales: Perfect, Excellent, Good, Fair, Bad, with corresponding weights.\nWe use dcg to represent the average DCG (5) over a set of test queries.\n4.1.2 Stemming cost\nAnother metric is to measure the additional cost incurred by stemming.\nGiven the same level of relevance improvement, we prefer a stemming method that has less additional cost.\nWe measure this by the percentage of queries that are actually stemmed, over all the queries that could possibly be stemmed.\n4.2 Data preparation\nWe randomly sample 870 queries from a three month query log, with 290 from each month.\nAmong all these 870 queries, we remove all misspelled queries since misspelled queries are not of interest to stemming.\nWe also remove all one word queries since stemming one word queries without context has a high risk of changing query intent, especially for short words.\nIn the end, we have 529 correctly spelled queries with at least 2 words.\n4.3 Naive stemming for Web search\nBefore explaining the experiments and results in detail, we'd like to describe the traditional way of using stemming for Web search, referred as the naive 
model.\nThis is to treat every word variant as equivalent for all possible words in the query.\nThe query "book store" will be transformed into "(book OR books) (store OR stores)" when limiting stemming to pluralization handling only, where OR is an operator that denotes the equivalence of the left and right arguments.\n4.4 Experimental setup\nThe baseline model is the model without stemming.\nWe first run the naive model to see how well it performs over the baseline.\nThen we improve the naive stemming model by document sensitive matching, referred to as the document sensitive matching model.\nThis model performs the same stemming as the naive model on the query side, but performs conservative matching on the document side using the strategy described in section 3.5.\nThe naive model and the document sensitive matching model stem the most queries.\nOut of the 529 queries, there are 408 queries that they stem, corresponding to 46.7% of query traffic (out of a total of 870).\nWe then further improve the document sensitive matching model from the query side with selective word stemming based on statistical language modeling (section 3.4), referred to as the selective stemming model.\nBased on language modeling prediction, this model stems only a subset of the 408 queries stemmed by the document sensitive matching model.\nWe experiment with a unigram language model and a bigram language model.\nSince we only care how much we can improve the naive model, we will only use these 408 queries (all the queries that are affected by the naive stemming model) in the experiments.\nTo get a sense of how these models perform, we also have an oracle model that gives the upper-bound performance a stemmer can achieve on this data.\nThe oracle model only expands a word if the stemming will give better results.\nTo analyze the influence of pluralization handling on different query categories, we divide queries into short queries and long queries.\nAmong the 408 queries stemmed by the naive model, there are 272
short queries with 2 or 3 words, and 136 long queries with at least 4 words.\n4.5 Results\nWe summarize the overall results in Table 4, and present the results on short queries and long queries separately in Table 5.\nEach row in Table 4 is a stemming strategy described in section 4.4.\nThe first column is the name of the strategy.\nThe second column is the number of queries affected by this strategy; this column measures the stemming cost, and the numbers should be low for the same level of dcg.\nThe third column is the average dcg score over all tested queries in this category (including the ones that were not stemmed by the strategy).\nThe fourth column is the relative improvement over the baseline, and the last column is the p-value of the Wilcoxon significance test.\nThere are several observations about the results.\nWe can see that naive stemming obtains only a statistically insignificant improvement of 1.5%.\nLooking at Table 5, it gives an improvement of 2.7% on short queries.\nHowever, it also hurts long queries by -2.4%.\nOverall, the improvement is canceled out.\nThe reason that it improves short queries is that most short queries have only one word that can be stemmed.\nThus, blindly pluralizing short queries is relatively safe.\nHowever, for long queries, most queries can have multiple words that can be pluralized.\nExpanding all of them without selection will significantly hurt precision.\nDocument context sensitive stemming gives a significant lift to the performance, from 2.7% to 4.2% for short queries and from -2.4% to -1.6% for long queries, with an overall lift from 1.5% to 2.8%.\nThe improvement comes from the conservative context sensitive document matching.\nAn expanded word is valid only if it occurs within the context of the original query in the document.\nThis reduces many spurious matches.\nHowever, we still notice that for long queries, context sensitive stemming is not able to improve performance, because it still selects too many documents and
gives the ranking function a hard problem.\nWhile the chosen window size of 4 works the best amongst all the choices, it still allows spurious matches.\nIt is possible that the window size needs to be chosen on a per query basis to ensure tighter proximity constraints for different types of noun phrases.\nSelective word pluralization further helps resolve the problem faced by document context sensitive stemming.\nRather than stemming every word and placing all the burden on the ranking algorithm, it tries to eliminate unnecessary stemming in the first place.\nBy predicting which word variants are going to be useful, we can dramatically reduce the number of stemmed words, thus improving both the recall and the precision.\nWith the unigram language model, we can reduce the stemming cost by 26.7% (from 408\/408 to 300\/408) and lift the overall dcg improvement from 2.8% to 3.4%.\nIn particular, it gives significant improvements on long queries.\nThe dcg gain is turned from negative to positive, from -1.6% to 1.1%.\nThis confirms our hypothesis that reducing unnecessary word expansion leads to precision improvement.\nFor short queries too, we observe both dcg improvement and stemming cost reduction with the unigram language model.\nThe advantages of predictive word expansion with a language model are further boosted with a better bigram language model.\nThe overall dcg gain is lifted from 3.4% to 3.9%, and the stemming cost is dramatically reduced from 408\/408 to 250\/408, corresponding to only 29% of query traffic (250 out of 870) and an overall 1.8% dcg improvement over all query traffic.\nFor short queries, the bigram language model improves the dcg gain from 4.4% to 4.7%, and reduces the stemming cost from 272\/272 to 150\/272.\nFor long queries, the bigram language model improves the dcg gain from 1.1% to 2.5%, and reduces the stemming cost from 136\/136 to 100\/136.\nWe observe that the bigram language model gives a larger lift for long queries.\nThis is because the uncertainty
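The cost figures quoted for the bigram model can be checked with a couple of lines of arithmetic; the counts are taken directly from the text above.

```python
# Stemming-cost arithmetic for the bigram selective model.
stemmed = 250            # queries stemmed by the bigram model
naively_stemmable = 408  # queries stemmed by the naive model
total_traffic = 870      # all sampled queries

traffic_share = stemmed / total_traffic       # fraction of query traffic stemmed
cost_saved = 1 - stemmed / naively_stemmable  # fraction of naive stemmings avoided
```

The share of traffic works out to about 29%, matching the figure reported in the abstract and the results, with roughly 39% of the naive stemmings eliminated.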
in long queries is larger and a more powerful language model is needed.\nWe hypothesize that a trigram language model would give a further lift for long queries and leave this for future investigation.\nConsidering the tight upper-bound on the improvement to be gained from pluralization handling (via the oracle model), the current performance on short queries is very satisfying.\nFor short queries, the dcg gain upper-bound is 6.3% for perfect pluralization handling, while our current gain is 4.7% with a bigram language model.\nFor long queries, the dcg gain upper-bound is 4.6% for perfect pluralization handling, while our current gain is 2.5% with a bigram language model.\nWe may gain additional benefit with a more powerful language model for long queries.\nHowever, the difficulties of long queries come from many other aspects, including the proximity and segmentation problems.\nThese problems have to be addressed separately.\nLooking at the upper-bound of overhead reduction for oracle stemming, 75% (308\/408) of the naive stemmings are wasteful.\nWe currently capture about half of them.\nFurther reduction of the overhead requires sacrificing the dcg gain.\nNow we can compare the stemming strategies from a different aspect.\nInstead of looking at the influence over all queries as we described above, Table 6 summarizes the dcg improvements over the affected queries only.\nWe can see that the number of affected queries decreases as the stemming strategy becomes more accurate (higher dcg improvement).\nFor the bigram language model, over the 250\/408 stemmed queries, the dcg improvement is 6.1%.\nAn interesting observation is that the average dcg decreases with a better model, which indicates that a better stemming strategy stems more difficult queries (low dcg queries).\n5.\nDISCUSSIONS\n5.1 Language models from query vs.
from Web\nAs we mentioned in Section 1, we are trying to predict the probability of a string occurring on the Web.\nThe language model should describe the occurrence of the string on the Web.\nHowever, the query log is also a good resource.\n2Note that this upper-bound is for pluralization handling only, not for general stemming.\nGeneral stemming gives an 8% upper-bound, which is quite substantial in terms of our metrics.\nTable 4: Results comparison of different stemming strategies over all queries affected by naive stemming\nTable 5: Results comparison of different stemming strategies over short queries and long queries\nUsers reformulate a query using many different variants to get good results.\nTo test the hypothesis that we can learn reliable transformation probabilities from the query log, we trained a language model from the same top 25M queries as used to learn segmentation, and used it for prediction.\nWe observed a slight performance decrease compared to the model trained on Web frequencies.\nIn particular, the performance for the unigram LM was not affected, but the dcg gain for the bigram LM changed from 4.7% to 4.5% for short queries.\nThus, the query log can serve as a good approximation of the Web frequencies.\n5.2 How linguistics helps\nSome linguistic knowledge is useful in stemming.\nFor the pluralization handling case, pluralization and de-pluralization are not symmetric.\nA plural word used in a query indicates a special intent.\nFor example, the query \"new york hotels\" is looking for a list of hotels in new york, not the specific \"new york hotel\", which might be a hotel located in California.\nA simple equivalence of \"hotel\" to \"hotels\" might boost a particular page about \"new york hotel\" to top rank.\nTo capture this intent, we have to make sure the document is a general page about hotels in new york.\nWe do this by requiring that the plural word \"hotels\" appears in the document.\nOn the other hand, converting a singular word to
plural is safer since a general purpose page normally contains specific information.\nWe observed a slight overall dcg decrease, although not statistically significant, for document context sensitive stemming if we do not consider this asymmetric property.\n5.3 Error analysis\nOne type of mistake we noticed, rare but seriously hurting relevance, is the search intent change after stemming.\nGenerally speaking, pluralization or depluralization keeps the original intent.\nHowever, the intent could change in a few cases.\nFor one example of such a query, \"job at apple\", we pluralize \"job\" to \"jobs\".\nThis stemming makes the original query ambiguous.\nThe query \"job OR jobs at apple\" has two intents.\nOne is the employment opportunities at apple, and another is a person working at Apple, Steve Jobs, who is the CEO and co-founder of the company.\nThus, the results after query stemming return \"Steve Jobs\" as one of the results in the top 5.\nOne solution is performing result-set-based analysis to check if the intent is changed.\nThis is similar to relevance feedback and requires a second ranking phase.\nA second type of mistake is the entity\/concept recognition problem, which comes in two kinds.\nOne is that the stemmed word variant now matches part of an entity or concept.\nFor example, the query \"cookies in san francisco\" is pluralized to \"cookies OR cookie in san francisco\".\nThe results will match \"cookie jar in san francisco\".\nAlthough \"cookie\" still means the same thing as \"cookies\", \"cookie jar\" is a different concept.\nAnother kind is when the unstemmed word matches an entity or concept because of the stemming of the other words.\nFor example, \"quote ICE\" is pluralized to \"quote OR quotes ICE\".\nThe original intent for this query is searching for the stock quote for ticker ICE.\nHowever, we noticed that among the top results, one of the results is \"Food quotes: Ice cream\".\nThis is matched because of the pluralized word \"quotes\".\nTable 6: Results comparison over the stemmed queries only: column old\/new dcg is the dcg score over the affected queries before\/after applying stemming\nThe unchanged word \"ICE\" matches part of the noun phrase \"ice cream\" here.\nTo solve this kind of problem, we have to analyze the documents and recognize \"cookie jar\" and \"ice cream\" as concepts instead of two independent words.\nA third type of mistake occurs in long queries.\nFor the query \"bar code reader software\", two words are pluralized: \"code\" to \"codes\" and \"reader\" to \"readers\".\nIn fact, \"bar code reader\" in the original query is a strong concept and the internal words should not be changed.\nThis is the segmentation and entity and noun phrase detection problem in queries, which we are actively attacking.\nFor long queries, we should correctly identify the concepts in the query, and boost the proximity for the words within a concept.\n6.\nCONCLUSIONS AND FUTURE WORK\nWe have presented a simple yet elegant way of stemming for Web search.\nIt improves naive stemming in two aspects: selective word expansion on the query side and conservative word occurrence matching on the document side.\nUsing pluralization handling as an example, experiments on data from a major Web search engine show it significantly improves Web relevance and reduces the stemming cost.\nIt also significantly improves Web click-through rate (details not reported in the paper).\nFor future work, we are investigating the problems we identified in the error analysis section.\nThese include entity and noun phrase matching mistakes, and improved segmentation.","keyphrases":["stem","stem","web search","languag model","context sensit document match","lovin stemmer","porter stemmer","candid gener","queri segment","head word detect","context sensit queri stem","unigram languag model","bigram languag model"],"prmu":["P","P","P","P","P","U","U","U","M","M","R","M","M"]} {"id":"H-47","title":"A Semantic Approach to Contextual
Advertising","abstract":"Contextual advertising or Context Match (CM) refers to the placement of commercial textual advertisements within the content of a generic web page, while Sponsored Search (SS) advertising consists in placing ads on result pages from a web search engine, with ads driven by the originating query. In CM there is usually an intermediary commercial ad-network entity in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between the publisher and the ad-network) and improving the user experience. With these goals in mind it is preferable to have ads relevant to the page content, rather than generic ads. The SS market developed quicker than the CM market, and most textual ads are still characterized by bid phrases representing those queries where the advertisers would like to have their ad displayed. Hence, the first technologies for CM have relied on previous solutions for SS, by simply extracting one or more phrases from the given page content, and displaying ads corresponding to searches on these phrases, in a purely syntactic approach. However, due to the vagaries of phrase extraction, and the lack of context, this approach leads to many irrelevant ads. To overcome this problem, we propose a system for contextual ad matching based on a combination of semantic and syntactic features.","lvl-1":"A Semantic Approach to Contextual Advertising Andrei Broder Marcus Fontoura Vanja Josifovski Lance Riedel Yahoo! 
Research, 2821 Mission College Blvd, Santa Clara, CA 95054 {broder, marcusf, vanjaj, riedell}@yahoo-inc.com ABSTRACT Contextual advertising or Context Match (CM) refers to the placement of commercial textual advertisements within the content of a generic web page, while Sponsored Search (SS) advertising consists in placing ads on result pages from a web search engine, with ads driven by the originating query.\nIn CM there is usually an intermediary commercial ad-network entity in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between the publisher and the ad-network) and improving the user experience.\nWith these goals in mind it is preferable to have ads relevant to the page content, rather than generic ads.\nThe SS market developed quicker than the CM market, and most textual ads are still characterized by bid phrases representing those queries where the advertisers would like to have their ad displayed.\nHence, the first technologies for CM have relied on previous solutions for SS, by simply extracting one or more phrases from the given page content, and displaying ads corresponding to searches on these phrases, in a purely syntactic approach.\nHowever, due to the vagaries of phrase extraction, and the lack of context, this approach leads to many irrelevant ads.\nTo overcome this problem, we propose a system for contextual ad matching based on a combination of semantic and syntactic features.\nCategories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Selection process General Terms: Algorithms, Measurement, Performance, Experimentation 1.\nINTRODUCTION Web advertising supports a large swath of today's Internet ecosystem.\nThe total internet advertiser spend in the US alone in 2006 is estimated at over 17 billion dollars, with a growth rate of almost 20% year over year.\nA large part of this market consists of textual ads, that is, short text messages usually marked as sponsored links or similar.\nThe main
advertising channels used to distribute textual ads are: 1.\nSponsored Search or Paid Search advertising which consists in placing ads on the result pages from a web search engine, with ads driven by the originating query.\nAll major current web search engines (Google, Yahoo!, and Microsoft) support such ads and act simultaneously as a search engine and an ad agency.\n2.\nContextual advertising or Context Match which refers to the placement of commercial ads within the content of a generic web page.\nIn contextual advertising usually there is a commercial intermediary, called an ad-network, in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between publisher and ad-network) and improving user experience.\nAgain, all major current web search engines (Google, Yahoo!, and Microsoft) provide such ad-networking services but there are also many smaller players.\nThe SS market developed quicker than the CM market, and most textual ads are still characterized by bid phrases representing those queries where the advertisers would like to have their ad displayed.\n(See [5] for a brief history).\nHowever, today, almost all of the for-profit non-transactional web sites (that is, sites that do not sell anything directly) rely at least in part on revenue from context match.\nCM supports sites that range from individual bloggers and small niche communities to large publishers such as major newspapers.\nWithout this model, the web would be a lot smaller!\nThe prevalent pricing model for textual ads is that the advertisers pay a certain amount for every click on the advertisement (pay-per-click or PPC).\nThere are also other models used: pay-per-impression, where the advertisers pay for the number of exposures of an ad and pay-per-action where the advertiser pays only if the ad leads to a sale or similar transaction.\nFor simplicity, we only deal with the PPC model in this paper.\nGiven a page, rather than placing generic ads, it seems 
preferable to have ads related to the content to provide a better user experience and thus to increase the probability of clicks.\nThis intuition is supported by the analogy to conventional publishing, where there are very successful magazines (e.g. Vogue) in which a majority of the content is topical advertising (fashion in the case of Vogue), and by user studies that have confirmed that increased relevance increases the number of ad-clicks [4, 13].\nPreviously published approaches estimated the ad relevance based on co-occurrence of the same words or phrases within the ad and within the page (see [7, 8] and Section 3 for more details).\nHowever, targeting mechanisms based solely on phrases found within the text of the page can lead to problems: for example, a page about a famous golfer named John Maytag might trigger an ad for Maytag dishwashers since Maytag is a popular brand.\nAnother example could be a page describing the Chevy Tahoe truck (a popular vehicle in the US) triggering an ad about Lake Tahoe vacations.\nPolysemy is not the only culprit: there is a (maybe apocryphal) story about a lurid news item about a headless body found in a suitcase triggering an ad for Samsonite luggage!\nIn all these examples the mismatch arises from the fact that the ads are not appropriate for the context.\nIn order to solve this problem we propose a matching mechanism that combines a semantic phase with the traditional keyword matching, that is, a syntactic phase.\nThe semantic phase classifies the page and the ads into a taxonomy of topics and uses the proximity of the ad and page classes as a factor in the ad ranking formula.\nHence we favor ads that are topically related to the page and thus avoid the pitfalls of the purely syntactic approach.\nFurthermore, by using a hierarchical taxonomy we allow for the gradual generalization of the ad search space in the case when there are no ads matching the precise topic of the page.\nFor example if the page is about an event in curling, a
rare winter sport, and contains the words Alpine Meadows, the system would still rank highly ads for skiing in Alpine Meadows, as these ads belong to the class skiing, which is a sibling of the class curling, and both of these classes share the parent winter sports.\nIn some sense, the taxonomy classes are used to select the set of applicable ads and the keywords are used to narrow down the search to concepts that are of too small granularity to be in the taxonomy.\nThe taxonomy contains nodes for topics that do not change fast, for example, brands of digital cameras, say Canon.\nThe keywords capture the specificity to a level that is more dynamic and granular.\nIn the digital camera example this would correspond to the level of a particular model, say the Canon SD450, whose advertising life might be just a few months.\nUpdating the taxonomy with new nodes or even new vocabulary each time a new model comes to the market is prohibitively expensive when we are dealing with millions of manufacturers.\nIn addition to increased click-through rate (CTR) due to increased relevance, a significant but harder to quantify benefit of the semantic-syntactic matching is that the resulting page has a unified feel and improves the user experience.\nIn the Chevy Tahoe example above, the classifier would establish that the page is about cars\/automotive and only those ads will be considered.\nEven if there are no ads for this particular Chevy model, the chosen ads will still be within the automotive domain.\nTo implement our approach we need to solve a challenging problem: classify both pages and ads within a large taxonomy (so that the topic granularity would be small enough) with high precision (to reduce the probability of mis-match).\nWe evaluated several classifiers and taxonomies and in this paper we present results using a taxonomy with close to 6000 nodes and a variation of Rocchio's classifier [9].\nThis classifier gave the best results in both page and ad classification,
and ultimately in ad relevance.\nThe paper proceeds as follows.\nIn the next section we review the basic principles of contextual advertising.\nSection 3 overviews the related work.\nSection 4 describes the taxonomy and document classifier that were used for page and ad classification.\nSection 5 describes the semantic-syntactic method.\nIn Section 6 we briefly discuss how to efficiently search the ad space in order to return the top-k ranked ads.\nExperimental evaluation is presented in Section 7.\nFinally, Section 8 presents the concluding remarks.\n2.\nOVERVIEW OF CONTEXTUAL ADVERTISING Contextual advertising is an interplay of four players: \u2022 The publisher is the owner of the web pages on which the advertising is displayed.\nThe publisher typically aims to maximize advertising revenue while providing a good user experience.\n\u2022 The advertiser provides the supply of ads.\nUsually the activity of the advertisers is organized around campaigns, which are defined by a set of ads with a particular temporal and thematic goal (e.g.
sale of digital cameras during the holiday season).\nAs in traditional advertising, the goal of the advertisers can be broadly defined as the promotion of products or services.\n\u2022 The ad network is a mediator between the advertiser and the publisher and selects the ads that are put on the pages.\nThe ad-network shares the advertisement revenue with the publisher.\n\u2022 Users visit the web pages of the publisher and interact with the ads.\nContextual advertising usually falls into the category of direct marketing (as opposed to brand advertising), that is, advertising whose aim is a direct response, where the effect of a campaign is measured by the user reaction.\nOne of the advantages of online advertising in general and contextual advertising in particular is that, compared to the traditional media, it is relatively easy to measure the user response.\nUsually the desired immediate reaction is for the user to follow the link in the ad and visit the advertiser's web site and, as noted, the prevalent financial model is that the advertiser pays a certain amount for every click on the advertisement (PPC).\nThe revenue is shared between the publisher and the network.\nContext match advertising has grown from Sponsored Search advertising, which consists in placing ads on the result pages from a web search engine, with ads driven by the originating query.\nIn most networks, the amount paid by the advertiser for each SS click is determined by an auction process where the advertisers place bids on a search phrase, and their position in the tower of ads displayed in conjunction with the result is determined by their bid.\nThus each ad is annotated with one or more bid phrases.\nThe bid phrase has no direct bearing on the ad placement in CM.\nHowever, it is a concise description of the target ad audience as determined by the advertiser, and it has been shown to be an important feature for successful CM ad placement [8].\nIn addition to the bid phrase, an ad is also
characterized by a title usually displayed in a bold font, and an abstract or creative, which is a few lines of text, usually less than 120 characters, displayed on the page.\nThe ad-network model aligns the interests of the publishers, advertisers and the network.\nIn general, clicks bring benefits to both the publisher and the ad network by providing revenue, and to the advertiser by bringing traffic to the target web site.\nThe revenue of the network, given a page p, can be estimated as: R = \sum_{i=1}^{k} P(click|p, a_i) \cdot price(a_i, i), where k is the number of ads displayed on page p and price(a_i, i) is the click-price of the current ad a_i at position i.\nThe price in this model depends on the set of ads presented on the page.\nSeveral models have been proposed to determine the price, most of them based on generalizations of second-price auctions.\nHowever, for simplicity we ignore the pricing model and concentrate on finding ads that will maximize the first factor of each term, that is, we search for \arg\max_{a_i} P(click|p, a_i).\nFurthermore, we assume that the probability of a click for a given ad and page is determined by its relevance score with respect to the page, thus ignoring the positional effect of the ad placement on the page.\nWe assume that this is an orthogonal factor to the relevance component and could be easily incorporated in the model.\n3.\nRELATED WORK Online advertising in general and contextual advertising in particular are emerging areas of research.\nThe published literature is very sparse.\nA study presented in [13] confirms the intuition that ads need to be relevant to the user's interest to avoid degrading the user's experience and increase the probability of reaction.\nA recent work by Ribeiro-Neto et al.
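The revenue estimate and its simplification to an arg max can be sketched as follows. The click probabilities, prices, and relevance scores below are made-up numbers for illustration; the click model itself is outside the scope of the text.

```python
# Sketch of the network revenue estimate from the text:
# R = sum_{i=1..k} P(click | p, a_i) * price(a_i, i).
def expected_revenue(click_probs, prices):
    # One term per displayed ad slot i; inputs are parallel lists.
    return sum(p * c for p, c in zip(click_probs, prices))

def best_ad(relevance):
    # The text drops pricing and positional effects and keeps only
    # arg max_a P(click | p, a), proxied here by a relevance score per ad.
    return max(relevance, key=relevance.get)

# Three slots with assumed click probabilities and click-prices.
r = expected_revenue([0.03, 0.02, 0.01], [0.50, 0.40, 0.30])  # 0.026
```

Dropping the price term reduces ad selection to ranking by the relevance score alone, which is what the rest of the section develops.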
[8] examines a number of strategies to match pages to ads based on extracted keywords.\nThe ads and pages are represented as vectors in a vector space.\nThe first five strategies proposed in that work match the pages and the ads based on the cosine of the angle between the ad vector and the page vector.\nTo find out the important part of the ad, the authors explore using different ad sections (bid phrase, title, body) as a basis for the ad vector.\nThe winning strategy out of the first five requires the bid phrase to appear on the page and then ranks all such ads by the cosine of the union of all the ad sections and the page vectors.\nWhile both pages and ads are mapped to the same space, there is a discrepancy (impedance mismatch) between the vocabulary used in the ads and in the pages.\nFurthermore, since in the vector model the dimensions are determined by the number of unique words, plain cosine similarity will not take into account synonyms.\nTo solve this problem, Ribeiro-Neto et al. expand the page vocabulary with terms from other similar pages, weighted based on the overall similarity of the origin page to the matched page, and show improved matching precision.\nIn a follow-up work [7] the authors propose a method to learn the impact of individual features using genetic programming to produce a matching function.\nThe function is represented as a tree composed of arithmetic operators and the log function as internal nodes, and different numerical features of the query and ad terms as leaves.\nThe results show that genetic programming finds matching functions that significantly improve the matching compared to the best method (without page side expansion) reported in [8].\nAnother approach to contextual advertising is to reduce it to the problem of sponsored search advertising by extracting phrases from the page and matching them with the bid phrases of the ads.\nIn [14] a system for phrase extraction is described that uses a variety of features to determine the
importance of page phrases for advertising purposes.\nThe system is trained with pages that have been hand-annotated with important phrases.\nThe learning algorithm takes into account features based on tf-idf, html meta data and query logs to detect the most important phrases.\nDuring evaluation, each page phrase up to length 5 is considered as a potential result and evaluated against a trained classifier.\nIn our work we also experimented with a phrase extractor based on the work reported in [12].\nWhile increasing the precision slightly, it did not change the relative performance of the explored algorithms.\n4.\nPAGE AND AD CLASSIFICATION 4.1 Taxonomy Choice The semantic match of the pages and the ads is performed by classifying both into a common taxonomy.\nThe matching process requires that the taxonomy provides sufficient differentiation between the common commercial topics.\nFor example, classifying all medical-related pages into one node will not result in a good classification, since both sore foot and flu pages will end up in the same node.\nThe ads suitable for these two concepts are, however, very different.\nTo obtain sufficient resolution, we used a taxonomy of around 6000 nodes primarily built for classifying commercial interest queries, rather than pages or ads.\nThis taxonomy has been commercially built by Yahoo!
US.\nWe will explain below how we can use the same taxonomy to classify pages and ads as well.\nEach node in our source taxonomy is represented as a collection of exemplary bid phrases or queries that correspond to that node concept.\nEach node has on average around 100 queries.\nThe queries placed in the taxonomy are high-volume queries and queries of high interest to advertisers, as indicated by an unusually high cost-per-click (CPC) price.\nThe taxonomy has been populated by human editors using keyword suggestion tools similar to the ones used by ad networks to suggest keywords to advertisers.\nAfter initial seeding with a few queries, using the provided tools a human editor can add several hundred queries to a given node.\nNevertheless, it has been a significant effort to develop this 6000-node taxonomy and it has required several person-years of work.\nA similar-in-spirit process for building enterprise taxonomies via queries has been presented in [6].\nHowever, the details and tools are completely different.\nFigure 1 provides some statistics about the taxonomy used in this work.\n4.2 Classification Method As explained, the semantic phase of the matching relies on ads and pages being topically close.\nThus we need to classify pages into the same taxonomy used to classify ads.\nIn this section we overview the methods we used to build a page and an ad classifier pair.\nThe detailed description and evaluation of this process is outside the scope of this paper.\nGiven the taxonomy of queries (or bid-phrases - we use these terms interchangeably) described in the previous section, we tried three methods to build corresponding page and ad classifiers.\nFor the first two methods we tried to find exemplary pages and ads for each concept as follows: Figure 1: Taxonomy statistics: categories per level; fan-out for non-leaf nodes; and queries per node\nWe generated a page training set by running the queries in the taxonomy over a Web search index and using the top 10 results after some filtering as documents labeled with the query's label.\nOn the ad side we generated a training set for each class by selecting the ads that have a bid phrase assigned to this class.\nUsing these training sets we then trained a hierarchical SVM [2] (one-against-all between every group of siblings) and a log-regression [11] classifier.\n(The second method differs from the first in the type of secondary filtering used.\nThis filtering eliminates low-content pages, pages deemed unsuitable for advertising, pages that lead to excessive class confusion, etc.) However, we obtained the best performance by using the third document classifier, based on the information in the source taxonomy queries only.\nFor each taxonomy node we concatenated all the exemplary queries into a single meta-document.\nWe then used the meta-document as a centroid for a nearest-neighbor classifier based on Rocchio's framework [9] with only positive examples and no relevance feedback.\nEach centroid is defined as the sum of the unit-normalized tf-idf query vectors, divided by the number of queries in the class: c_j = \frac{1}{|C_j|} \sum_{q \in C_j} \frac{q}{\|q\|}, where c_j is the centroid for class C_j and q iterates over the queries in a particular class.\nThe classification is based on the cosine of the angle between the document d and the centroid meta-documents: C_{max} = \arg\max_{C_j \in C} \frac{c_j}{\|c_j\|} \cdot \frac{d}{\|d\|} = \arg\max_{C_j \in C} \frac{\sum_{i \in F} c_j^i \cdot d^i}{\sqrt{\sum_{i \in F} (c_j^i)^2} \sqrt{\sum_{i \in F} (d^i)^2}}, where F is the set of features.\nThe score is normalized by the document and class length to produce comparable scores.\nThe terms
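The nearest-centroid step can be sketched as below. This is a minimal illustration, not the production classifier: the tiny term vectors stand in for the tf-idf meta-documents, and the class names and counts are invented for the example.

```python
import math

# Rocchio-style nearest-centroid sketch. Vectors are dicts of term -> weight;
# the paper uses tf-idf weights, plain counts are used here for brevity.
def norm(v):
    return math.sqrt(sum(w * w for w in v.values())) or 1.0

def centroid(queries):
    # c_j = (1/|C_j|) * sum_q q/||q||: average of unit-normalized query vectors.
    c = {}
    for q in queries:
        nq = norm(q)
        for t, w in q.items():
            c[t] = c.get(t, 0.0) + w / nq
    return {t: w / len(queries) for t, w in c.items()}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    return dot / (norm(a) * norm(b))

def classify(doc, centroids):
    # C_max = arg max_j cos(c_j, d)
    return max(centroids, key=lambda name: cosine(centroids[name], doc))
```

For instance, a document with terms {"ski": 2, "atomic": 1} lands closer to a centroid built from skiing queries than to one built from knitting queries, since cosine similarity rewards shared terms.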
c_j^i and d^i represent the weight of the i-th feature in the class centroid and the document, respectively.\nThese weights are based on the standard tf-idf formula.\nAs the score of the max class is normalized with regard to document length, the scores for different documents are comparable.\nWe conducted tests using professional editors to judge the quality of page and ad class assignments.\nThe tests showed that for both ads and pages the Rocchio classifier returned the best results, especially on the page side.\nThis is probably a result of the noise induced by using a search engine to machine-generate training pages for the SVM and log-regression classifiers.\nIt is an area of current investigation how to improve the classification using a noisy training set.\nBased on the test results we decided to use Rocchio's classifier on both the ad and the page side.\n5.\nSEMANTIC-SYNTACTIC MATCHING Contextual advertising systems process the content of the page, extract features, and then search the ad space to find the best matching ads.\nGiven a page p and a set of ads A = {a_1 ... a_s}, we estimate the relative probability of a click P(click|p, a) with a score that captures the quality of the match between the page and the ad.\nTo find the best ads for a page we rank the ads in A and select the top few for display.\nThe problem can be formally defined as matching every page in the set of all pages P = {p_1, ... p_{pc}} to one or more ads in the set of ads.\nEach page is represented as a set of page sections p_i = {p_{i,1}, p_{i,2} ... p_{i,m}}.\nThe sections of the page represent different structural parts, such as title, metadata, body, headings, etc.\nIn turn, each section is an unordered bag of terms (keywords).\nA page is represented by the union of the terms in each section: p_i = \{pw_1^{s_1}, pw_2^{s_1}, \ldots, pw_m^{s_i}\}, where pw stands for a page word and the superscript indicates the section of each term.\nA term can be a unigram or a phrase extracted by a phrase extractor [12].\nSimilarly, we represent each ad as a set of sections a = {a_1, a_2, ... a_l}, each section in turn being an unordered set of terms: a_i = \{aw_1^{s_1}, aw_2^{s_1}, \ldots, aw_l^{s_j}\}, where aw is an ad word.\nThe ads in our experiments have 3 sections: title, body, and bid phrase.\nIn this work, to produce the match score we use only the ad\/page textual information, leaving user information and other data for future work.\nNext, each page and ad term is associated with a weight based on the tf-idf values.\nThe tf value is determined based on the individual ad sections.\nThere are several choices for the value of idf, based on different scopes.\nOn the ad side, it has been shown in previous work that the set of ads of one campaign provides a good scope for the estimation of idf that leads to improved matching results [8].\nHowever, in this work for simplicity we do not take into account campaigns.\nTo combine the impact of the term's section and its tf-idf score, the ad\/page term weight is defined as: tWeight(kw^{s_i}) = weightSection(S_i) \cdot tfidf(kw), where tWeight stands for term weight and weightSection(S_i) is the weight assigned to a page or ad section.\nThis weight is a fixed parameter determined by experimentation.\nEach ad and page is classified into the topical taxonomy.\nWe define these two mappings: Tax(p_i) = \{pc_{i1}, \ldots, pc_{iu}\} and Tax(a_j) = \{ac_{j1}, \ldots, ac_{jv}\}, where pc and ac are page and ad classes correspondingly.\nEach assignment is associated with a weight given by the classifier.\nThe weights are normalized to sum to 1: \sum_{c \in Tax(x_i)} cWeight(c) = 1, where x_i is either a page or an ad, and cWeight(c) is the class weight - the normalized confidence assigned by the classifier.\nThe number of classes can vary between different pages and ads.\nThis corresponds to the number of topics a page\/ad can be associated with and is almost always in the range 1-4.\nNow we define the relevance score of an ad a_i and page p_i as a convex combination of the keyword (syntactic) and classification (semantic) scores: Score(p_i, a_i) = \alpha \cdot TaxScore(Tax(p_i), Tax(a_i)) + (1 - \alpha) \cdot KeywordScore(p_i, a_i).\nThe parameter \alpha determines the relative weight of the taxonomy score and the keyword score.\nTo calculate the keyword score we use the vector space model [1], where both the pages and ads are represented in an n-dimensional space - one dimension for each distinct term.\nThe magnitude of each dimension is determined by the tWeight() formula.\nThe keyword score is then defined as the cosine of the angle between the page and the ad vectors: KeywordScore(p_i, a_i) = \frac{\sum_{i \in K} tWeight(pw_i) \cdot tWeight(aw_i)}{\sqrt{\sum_{i \in K} (tWeight(pw_i))^2} \sqrt{\sum_{i \in K} (tWeight(aw_i))^2}}, where K is the set of all the keywords.\nThe formula assumes independence between the words in the pages and ads.\nFurthermore, it ignores the order and the proximity of the terms in the scoring.\nWe experimented with the impact of phrases and proximity on the keyword score and did not see a substantial impact of these two factors.\nWe now turn to the definition of the TaxScore.\nThis function indicates the topical match between a given ad and a page.\nAs opposed to the keywords that are treated as independent dimensions, here the classes (topics) are organized into a hierarchy.\nOne of the goals in the design of the TaxScore function is to be able to
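The section-weighted term vectors and the convex score combination can be sketched as follows. The section weights, the default tf-idf value, and \alpha are placeholders chosen for the example, not values from the paper.

```python
import math

# Sketch of Score(p, a) = alpha * TaxScore + (1 - alpha) * KeywordScore.
SECTION_WEIGHT = {"title": 3.0, "body": 1.0, "bid_phrase": 2.0}  # assumed values

def term_vector(sections, tfidf):
    # tWeight(kw^s) = weightSection(s) * tfidf(kw), accumulated per term.
    # Terms missing from the tfidf table default to 1.0 for illustration.
    v = {}
    for sec, terms in sections.items():
        for t in terms:
            v[t] = v.get(t, 0.0) + SECTION_WEIGHT[sec] * tfidf.get(t, 1.0)
    return v

def keyword_score(page_vec, ad_vec):
    # Cosine of the angle between the page and ad term vectors.
    dot = sum(w * ad_vec.get(t, 0.0) for t, w in page_vec.items())
    na = math.sqrt(sum(w * w for w in page_vec.values()))
    nb = math.sqrt(sum(w * w for w in ad_vec.values()))
    return dot / (na * nb) if na and nb else 0.0

def score(page_vec, ad_vec, tax_score, alpha=0.8):
    # Convex combination of the semantic and syntactic components.
    return alpha * tax_score + (1 - alpha) * keyword_score(page_vec, ad_vec)
```

Raising \alpha shifts weight toward the taxonomy match, which is the knob the experiments in the paper vary.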
generalize within the taxonomy, that is, to accept topically related ads. Generalization can help to place ads in cases when there is no ad that matches both the category and the keywords of the page. The example in Figure 2 (two generalization paths) illustrates this situation: in the absence of ski ads, a page about skiing containing the word Atomic could be matched to the available snowboarding ad for the same brand. In general we would like the match to be stronger when both the ad and the page are classified into the same node, and weaker as the distance between the nodes in the taxonomy grows.

There are multiple ways to specify the distance between two taxonomy nodes. Besides the above requirement, this function should lend itself to an efficient search of the ad space. Given a page, we have to find the ad within a few milliseconds, as this impacts the presentation to a waiting user. This is discussed further in the next section. To capture both the generalization and efficiency requirements we define the TaxScore function as follows:

  TaxScore(PC, AC) = Σ_{pc ∈ PC} Σ_{ac ∈ AC} idist(LCA(pc, ac), ac) · cWeight(pc) · cWeight(ac)

In this function we consider every combination of page class and ad class. For each combination we multiply the product of the class weights by the inverse distance function between the least common ancestor (LCA) of the two classes and the ad class. The inverse distance function idist(c_1, c_2) takes two nodes on the same path in the class taxonomy and returns a number in the interval [0, 1] depending on the distance between the two nodes. It returns 1 if the two nodes are the same, and declines toward 0 as LCA(pc, ac) or ac approaches the root of the taxonomy. The rate of decline determines the weight of generalization relative to the other terms in the scoring formula.

To determine the rate of decline we consider the impact on the specificity of the match when we substitute a class with one of its ancestors. In general the impact is topic dependent. For example, the node Hobby in our taxonomy has tens of children, each representing a different hobby, two examples being Sailing and Knitting. Placing an ad about Knitting on a page about Sailing makes little sense. However, in the Winter Sports example above, in the absence of a better alternative, skiing ads could be placed on snowboarding pages, as they might promote the same venues, equipment vendors, etc. Such detailed analysis on a case-by-case basis is prohibitively expensive due to the size of the taxonomy. One option is to use the confidences of the ancestor classes as given by the classifier. However, we found these numbers unsuitable for this purpose, as the magnitude of the confidences does not necessarily decrease when going up the tree. Another option is to use explore-exploit methods based on machine learning from click feedback, as described in [10]. For simplicity, in this work we chose a simple heuristic to determine the cost of generalizing from a child to a parent. In this heuristic we look at the broadening of the scope when moving from a child to a parent, and estimate the broadening by the density of ads classified into the parent node versus the child node. The density is obtained by classifying a large set of ads in the taxonomy using the document classifier described above. Based on this, let n_c be the number of documents classified into the subtree rooted at c. Then we define:

  idist(c, p) = n_c / n_p

where c represents the child node and p is the parent node. This fraction can be viewed as the probability that an ad belonging to the parent topic is suitable for the child topic.

6. SEARCHING THE AD SPACE

In the previous section we discussed the choice of scoring function to estimate the match between an ad and a page. The top-k ads with the highest score are offered by the system for placement on the publisher's page. The process of score calculation
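(As an aside, TaxScore and idist can be illustrated with a small Python sketch. The toy taxonomy and ad counts below are hypothetical, and extending idist to multi-step paths as a product of child/parent count ratios is our assumption for illustration; the text defines the ratio for a single child-parent step.)

```python
# Hypothetical toy taxonomy (child -> parent; the root's parent is None)
# and hypothetical counts n_c of ads classified into each subtree.
PARENT = {"Skiing": "Winter Sports", "Snowboarding": "Winter Sports",
          "Winter Sports": "Sports", "Sports": None}
N = {"Skiing": 40, "Snowboarding": 30, "Winter Sports": 100, "Sports": 1000}

def ancestors(c):
    """Path from class c up to the root, inclusive."""
    path = []
    while c is not None:
        path.append(c)
        c = PARENT[c]
    return path

def lca(c1, c2):
    """Least common ancestor of two classes."""
    a1 = set(ancestors(c1))
    for a in ancestors(c2):
        if a in a1:
            return a

def idist(anc, c):
    """1 when anc == c; otherwise the product of n_child / n_parent ratios
    along the path from c up to anc (an assumed multi-step extension)."""
    d = 1.0
    while c != anc:
        d *= N[c] / N[PARENT[c]]
        c = PARENT[c]
    return d

def tax_score(page_classes, ad_classes):
    """Sum over (pc, ac) pairs of idist(LCA(pc, ac), ac) * cWeight(pc) * cWeight(ac)."""
    return sum(idist(lca(pc, ac), ac) * wp * wa
               for pc, wp in page_classes.items()
               for ac, wa in ad_classes.items())
```

With these numbers, a Skiing page matches a Skiing ad with score 1, while a Snowboarding ad generalizes through Winter Sports and is discounted by 30/100.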
and ad selection must be done in real time and therefore must be very efficient. As the ad collections are in the range of hundreds of millions of entries, there is a need for indexed access to the ads. Inverted indexes provide scalable and low-latency solutions for searching documents; however, they have traditionally been used to search by keywords. To be able to search the ads on a combination of keywords and classes, we mapped the classification match to a term match and adapted the scoring function to be suitable for fast evaluation over inverted indexes. In this section we overview the ad indexing and the ranking function of our prototype ad search system for matching ads and pages.

We used a standard inverted index framework with one posting list for each distinct term. The ads are parsed into terms, and each term is associated with a weight based on the section in which it appears. Weights from distinct occurrences of a term in an ad are added together, so that the posting lists contain one entry per term/ad combination. The next challenge is how to index the ads so that the class information is preserved in the index. A simple method is to create unique meta-terms for the classes and annotate each ad with one meta-term for each assigned class. However, this method does not allow for generalization, since only the ads matching an exact label of the page would be selected. Therefore we annotated each ad with one meta-term for each ancestor of the assigned class. The weights of the meta-terms are assigned according to the value of the idist() function defined in the previous section. On the query side, given the keywords and the class of a page, we compose a keyword-only query by inserting one class term for each ancestor of the classes assigned to the page. The scoring function is split into two parts: one for the class meta-terms and another for the text terms. During the class score calculation, for each class path we
use only the lowest class meta-term, ignoring the others. For example, if an ad belongs to the Skiing class, it is annotated with the class meta-terms for both Skiing and its parent Winter Sports (and all their ancestors), with weights given by the product of the classifier confidence and the idist function. When matching against a page classified into Skiing, the query will contain terms for the class Skiing and for each of its ancestors. However, when scoring an ad classified into Skiing we will use the weight of the Skiing class meta-term, while ads classified into Snowboarding will be scored using the weight of the Winter Sports meta-term. To make this check efficient we keep a sorted list of all the class paths and, at scoring time, search each path bottom-up for a meta-term appearing in the ad. The first meta-term found is used for scoring; the rest are ignored.

At runtime, we evaluate the query using a variant of the WAND algorithm [3]. This is a document-at-a-time algorithm [1] that uses a branch-and-bound approach to derive efficient moves for the cursors associated with the posting lists. It finds the next cursor to be moved based on an upper bound of the score for the documents at which the cursors are currently positioned. The algorithm keeps a heap of current best candidates. Documents with an upper bound smaller than the current minimum score among the candidate documents can be eliminated from further consideration, and thus the cursors can skip over them. To find the upper bound for a document, the algorithm assumes that all cursors that are before it will hit this document (i.e.
the document contains all the terms represented by cursors before or at that document). It has been shown that WAND can be used with any scoring function that is monotonic with respect to the number of matching terms in the document. Our scoring function is monotonic, since the score can never decrease when more terms are found in the document; in the special case when we add a cursor representing an ancestor of a class term already factored into the score, the score simply does not change (we add 0). Given these properties, we use an adaptation of the WAND algorithm in which the scoring function and the upper-bound score calculation are changed to reflect our scoring function. The rest of the algorithm remains unchanged.

7. EXPERIMENTAL EVALUATION

7.1 Data and Methodology

We quantify the effect of the semantic-syntactic matching using a set of 105 pages. This set was selected by random sampling from a larger set of around 20 million pages with contextual advertising. Ads for each of these pages had been selected from a larger pool of ads (tens of millions) by previous experiments conducted by Yahoo!
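(Returning briefly to the indexing scheme above: annotating each ad with a meta-term per ancestor class, and scoring a page against only the lowest matching meta-term on each class path, can be sketched as follows. The hierarchy and the uniform per-level discount, which stands in for the idist() weights, are hypothetical.)

```python
# Hypothetical class hierarchy: child -> parent; the root's parent is None.
PARENT = {"Skiing": "Winter Sports", "Snowboarding": "Winter Sports",
          "Winter Sports": None}

def ancestors(c):
    """Class path from c up to the root, inclusive."""
    path = []
    while c is not None:
        path.append(c)
        c = PARENT[c]
    return path

def index_ad(ad_class, confidence, per_level_discount=0.4):
    """One meta-term per ancestor of the ad's class; an ancestor's weight is
    the classifier confidence discounted once per level (a stand-in for
    confidence * idist(ancestor, ad_class))."""
    meta, w = {}, confidence
    for c in ancestors(ad_class):
        meta[c] = w
        w *= per_level_discount
    return meta

def class_score(page_class, ad_meta):
    """Walk the page's class path bottom-up and use the first (lowest)
    meta-term present in the ad, ignoring the rest."""
    for c in ancestors(page_class):
        if c in ad_meta:
            return ad_meta[c]
    return 0.0
```

With these toy weights, a Skiing page scores a Skiing ad at its exact-class weight, while a Snowboarding ad is scored through the discounted Winter Sports meta-term.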
US for other purposes. Each page-ad pair has been judged by three or more human judges on a 1 to 3 scale:

1. Relevant: the ad is semantically directly related to the main subject of the page. For example, if the page is about the National Football League and the ad is about tickets for NFL games, it would be scored as 1.
2. Somewhat relevant: the ad is related to a secondary subject of the page, or is related to the main topic of the page in a general way. In the NFL page example, an ad about NFL-branded products would be judged as 2.
3. Irrelevant: the ad is unrelated to the page. For example, a mention of the NFL player John Maytag triggering washing machine ads on an NFL page.

Table 1: Dataset statistics
  pages: 105
  words per page: 868
  judgments: 2946
  judgment inter-editor agreement: 84%
  unique ads: 2680
  unique ads per page: 25.5
  page classification precision: 70%
  ad classification precision: 86%

To obtain a score for a page-ad pair we average all the scores and then round to the closest integer. We then used these judgments to evaluate how well our methods distinguish the positive and the negative ad assignments for each page. The statistics of the page dataset are given in Table 1. The original experiments that paired the pages and the ads are loosely related to the syntactic keyword-based matching and the classification-based assignment, but used different taxonomies and keyword extraction techniques. Therefore we could not use standard pooling as an evaluation method, since we did not control the way the pairs were selected and could not precisely establish the set of ads from which the placed ads were chosen. Instead, in our evaluation, for each page we consider only those ads for which we have judgments. Each method was applied to this set and the ads were ranked by score. The relative effectiveness of the algorithms was judged by comparing how well the methods separated the ads with positive judgments from the ads with negative judgments. We
present precision at various levels of recall within this set. As the set of ads per page is relatively small, the evaluation reports precision that is higher than it would be with a larger set of negative ads. However, these numbers still establish the relative performance of the algorithms, and we can use them to evaluate performance at different score thresholds. In addition to the precision-recall over the judged ads, we also report Kendall's τ rank correlation coefficient to establish how far the orderings produced by our ranking algorithms are from the perfect ordering. For this test we ranked the judged ads by the scores assigned by the judges and then compared this order to the order assigned by our algorithms. Finally, we also examined the precision at positions 1, 3 and 5.

7.2 Results

Figure 3 shows the precision-recall curves for the syntactic matching (keywords only) versus the syntactic-semantic matching with the optimal value of α = 0.8 (judged by the 11-point score [1]). In this figure, we assume that the ad-page pairs judged with 1 or 2 are positive examples and the 3s are negative examples. We also examined counting only the pairs judged with 1 as positive examples, and did not find a significant change in the relative performance of the tested methods. Results using phrases in the keyword match are also overlaid; we see that these additional features do not change the relative performance of the algorithms. The graphs show a significant impact of the class information, especially in the mid range of recall values. In the low-recall part of the chart the curves meet. This indicates that when the keyword match is very strong (i.e., when the ad is almost contained within the page), the precision of the syntactic keyword match is comparable with that of the semantic-syntactic match.

[Figure 3: Data Set 2: Precision vs. Recall of syntactic match (α = 0) vs. syntactic-semantic match (α = 0.8), with and without phrases in the keyword match.]

Note, however, that the two methods might produce different ads and could be used to complement each other at this level of recall. The semantic component provides the largest lift in precision in the mid range of recall, where a 25% improvement is achieved by using the class information for ad placement. This means that when there is only a partial match between the ad and the page, the restriction to the right classes provides a better scope for selecting the ads.

Table 2 shows the Kendall's τ values for different values of α:

Table 2: Kendall's τ for different α values
  α = 0:    0.086
  α = 0.25: 0.155
  α = 0.50: 0.166
  α = 0.75: 0.158
  α = 1:    0.136

We calculated the values by ranking all the judged ads for each page and averaging the values over all the pages. Ads with tied judgments were given the rank of the middle position. The results show that when we take into account all the ad-page pairs, the optimal value of α is around 0.5. Note that the purely syntactic match (α = 0) is by far the weakest method.

Figure 4 shows the effect of the parameter α in the scoring. We see that in most cases precision grows or is flat when we increase α, except at low levels of recall where, due to the small number of data points, there is a bit of jitter in the results. Table 3 shows the precision at positions 1, 3 and 5. Again, the purely syntactic method clearly has the lowest score, both at individual positions and in the total number of correctly placed ads. The numbers are close due to the small number of ads considered, but there are still some noticeable trends. For position 1 the optimal α is in the range of 0.25 to 0.75. For positions 3 and 5 the optimum is at α = 1. This also indicates that for those ads that have a very high keyword score, the semantic information is
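(As an aside, the Kendall's τ computation used for Table 2, with tied judgments given the rank of the middle position of their tie group, can be sketched as below; a simple O(n²) version is sufficient for the handful of judged ads per page.)

```python
def middle_ranks(scores):
    """Rank items by score (1-based); all items in a tie group receive the
    rank of the middle position of the group."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and scores[order[j]] == scores[order[i]]:
            j += 1
        mid = (i + j - 1) / 2 + 1  # middle position of the tie group
        for p in range(i, j):
            ranks[order[p]] = mid
        i = j
    return ranks

def kendall_tau(r1, r2):
    """Kendall's tau over two equal-length rank lists:
    (concordant - discordant) / (n * (n - 1) / 2)."""
    n = len(r1)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (r1[i] - r1[j]) * (r2[i] - r2[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) / 2)
```

Identically ordered rank lists without ties give τ = 1, fully reversed lists give τ = −1, and tie groups contribute neither concordant nor discordant pairs.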
somewhat less important. If almost all the words in an ad appear in the page, the ad is likely to be relevant for the page. However, when there is no such clear affinity, the class information becomes a dominant factor.

[Figure 4: Impact of α on precision for different levels of recall (Data Set 2; curves for 20%, 40%, 60% and 80% recall).]

Table 3: Precision at positions 1, 3 and 5
  α          #1   #3   #5   sum
  α = 0      80   70   68   218
  α = 0.25   89   76   73   238
  α = 0.5    89   74   73   236
  α = 0.75   89   78   73   240
  α = 1      86   79   74   239

8. CONCLUSION

Contextual advertising is the economic engine behind a large number of non-transactional sites on the Web. Studies have shown that one of the main success factors for contextual ads is their relevance to the surrounding content. All existing commercial contextual match solutions known to us evolved from search advertising solutions, whereby a search query is matched to the bid phrase of the ads. A natural extension of search advertising is to extract phrases from the page and match them to the bid phrases of the ads. However, individual phrases and words might have multiple meanings and/or be unrelated to the overall topic of the page, leading to mismatched ads. In this paper we proposed a novel way of matching advertisements to web pages that relies on a topical (semantic) match as a major component of the relevance score. The semantic match relies on the classification of pages and ads into a 6000-node commercial advertising taxonomy to determine their topical distance. As the classification relies on the full content of the page, it is more robust than individual page phrases. The semantic match is complemented with a syntactic match, and the final score is a convex combination of the two sub-scores, with the relative weight of each determined by a parameter α. We evaluated the
semantic-syntactic approach against a syntactic approach over a set of pages with different contextual advertising. As shown in our experimental evaluation, the optimal value of the parameter α depends on the precise objective of optimization (precision at a particular position, precision at a given recall). However, in all cases the optimal value of α is between 0.25 and 0.9, indicating a significant effect of the semantic score component. The effectiveness of the syntactic match depends on the quality of the pages used. In lower-quality pages we are more likely to make classification errors, which then negatively impact the matching. We demonstrated that it is feasible to build a large-scale classifier with sufficiently good precision for this application. We are currently examining how to employ machine learning algorithms to learn the optimal value of α based on a collection of features of the input pages.

9. REFERENCES

[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. ACM, 1999.
[2] B. E. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Computational Learning Theory, pages 144-152, 1992.
[3] A. Z. Broder, D. Carmel, M. Herscovici, A. Soffer, and J. Zien. Efficient query evaluation using a two-level retrieval process. In CIKM '03: Proc. of the twelfth intl. conf. on Information and knowledge management, pages 426-434, New York, NY, 2003. ACM.
[4] P. Chatterjee, D. L. Hoffman, and T. P. Novak. Modeling the clickstream: Implications for web-based advertising efforts. Marketing Science, 22(4):520-541, 2003.
[5] D. Fain and J. Pedersen. Sponsored search: A brief history. In Proc. of the Second Workshop on Sponsored Search Auctions, 2006. Web publication.
[6] S. C. Gates, W. Teiken, and K.-S. F. Cheng. Taxonomies by the numbers: building high-performance taxonomies. In CIKM '05: Proc. of the 14th ACM intl. conf. on Information and knowledge management, pages 568-577, New York, NY, 2005. ACM.
[7] A. Lacerda, M. Cristo, M. A. Gonçalves, W. Fan, N. Ziviani, and B. Ribeiro-Neto. Learning to advertise. In SIGIR '06: Proc. of the 29th annual intl. ACM SIGIR conf., pages 549-556, New York, NY, 2006. ACM.
[8] B. Ribeiro-Neto, M. Cristo, P. B. Golgher, and E. S. de Moura. Impedance coupling in content-targeted advertising. In SIGIR '05: Proc. of the 28th annual intl. ACM SIGIR conf., pages 496-503, New York, NY, 2005. ACM.
[9] J. Rocchio. Relevance feedback in information retrieval. In The SMART Retrieval System: Experiments in Automatic Document Processing, pages 313-323. Prentice-Hall, 1971.
[10] S. Pandey, D. Agarwal, D. Chakrabarti, and V. Josifovski. Bandits for taxonomies: A model-based approach. In Proc. of the SIAM intl. conf. on Data Mining, 2007.
[11] T. Santner and D. Duffy. The Statistical Analysis of Discrete Data. Springer-Verlag, 1989.
[12] R. Stata, K. Bharat, and F. Maghoul. The term vector database: fast access to indexing terms for web pages. Computer Networks, 33(1-6):247-255, 2000.
[13] C. Wang, P. Zhang, R. Choi, and M. D. Eredita. Understanding consumers attitude toward advertising. In Eighth Americas conf. on Information Systems, pages 1143-1148, 2002.
[14] W. Yih, J. Goodman, and V. R. Carvalho. Finding advertising keywords on web pages. In WWW '06: Proc. of the 15th intl. conf. on World Wide Web, pages 213-222, New York, NY, 2006. ACM.

A Semantic Approach to Contextual Advertising

ABSTRACT

Contextual advertising or Context Match (CM) refers to the placement of commercial textual advertisements within the content of a generic web page, while Sponsored Search (SS) advertising consists of placing ads on result pages from a web search engine, with ads driven by the originating query. In CM there is usually an intermediary commercial ad-network entity in charge of optimizing the ad selection with the twin goals of increasing revenue (shared between the publisher and the ad-network) and improving the user experience. With these goals in mind it is preferable to have ads relevant to the page content rather than generic ads. The SS market developed quicker than the CM market, and most textual ads are still characterized by "bid phrases" representing those queries where the advertisers would like to have their ad displayed. Hence, the first technologies for CM relied on previous solutions for SS, by simply extracting one or more phrases from the given page content and displaying ads corresponding to searches on these phrases, in a purely syntactic approach. However, due to the vagaries of phrase extraction and the lack of context, this approach leads to many irrelevant ads. To overcome this problem, we propose a system for contextual ad matching based on a combination of semantic and syntactic features.

1. INTRODUCTION

Web advertising supports a large swath of today's Internet ecosystem. The total internet advertiser spend in the US alone in 2006 is estimated at over 17 billion dollars, with a growth rate of almost 20% year over year. A large part of this market consists of textual ads, that is, short text messages usually marked as "sponsored links" or similar. The main advertising channels used to distribute textual ads are:

1. Sponsored Search or Paid Search advertising, which consists of placing ads on
the result pages from a web search engine, with ads driven by the originating query. All major current web search engines (Google, Yahoo!, and Microsoft) support such ads and act simultaneously as a search engine and an ad agency.

2. Contextual advertising or Context Match, which refers to the placement of commercial ads within the content of a generic web page. In contextual advertising there is usually a commercial intermediary, called an ad-network, in charge of optimizing the ad selection with the twin goals of increasing revenue (shared between publisher and ad-network) and improving the user experience. Again, all major current web search engines (Google, Yahoo!, and Microsoft) provide such ad-networking services, but there are also many smaller players.

The SS market developed quicker than the CM market, and most textual ads are still characterized by "bid phrases" representing those queries where the advertisers would like to have their ad displayed (see [5] for a brief history). However, today, almost all of the for-profit non-transactional web sites (that is, sites that do not sell anything directly) rely at least in part on revenue from context match. CM supports sites that range from individual bloggers and small niche communities to large publishers such as major newspapers. Without this model, the web would be a lot smaller!

The prevalent pricing model for textual ads is that the advertisers pay a certain amount for every click on the advertisement (pay-per-click or PPC). There are also other models in use: pay-per-impression, where the advertisers pay for the number of exposures of an ad, and pay-per-action, where the advertiser pays only if the ad leads to a sale or similar transaction. For simplicity, we only deal with the PPC model in this paper.

Given a page, rather than placing generic ads, it seems preferable to have ads related to the content, to provide a better user experience and thus to increase the probability of clicks. This
intuition is supported by the analogy to conventional publishing, where there are very successful magazines (e.g. Vogue) in which a majority of the content is topical advertising (fashion in the case of Vogue), and by user studies that have confirmed that increased relevance increases the number of ad-clicks [4, 13]. Previous published approaches estimated the ad relevance based on co-occurrence of the same words or phrases within the ad and within the page (see [7, 8] and Section 3 for more details). However, targeting mechanisms based solely on phrases found within the text of the page can lead to problems. For example, a page about a famous golfer named "John Maytag" might trigger an ad for "Maytag dishwashers", since Maytag is a popular brand. Another example could be a page describing the Chevy Tahoe truck (a popular vehicle in the US) triggering an ad about "Lake Tahoe vacations". Polysemy is not the only culprit: there is a (maybe apocryphal) story about a lurid news item about a headless body found in a suitcase triggering an ad for Samsonite luggage! In all these examples the mismatch arises from the fact that the ads are not appropriate for the context.

In order to solve this problem we propose a matching mechanism that combines a semantic phase with the traditional keyword matching, that is, a syntactic phase. The semantic phase classifies the page and the ads into a taxonomy of topics and uses the proximity of the ad and page classes as a factor in the ad ranking formula. Hence we favor ads that are topically related to the page and thus avoid the pitfalls of the purely syntactic approach. Furthermore, by using a hierarchical taxonomy we allow for the gradual generalization of the ad search space when there are no ads matching the precise topic of the page. For example, if the page is about an event in curling, a rare winter sport, and contains the words "Alpine Meadows", the system would still rank ads for skiing in Alpine Meadows highly, as these ads belong to the class "skiing", which is a sibling of the class "curling", and both of these classes share the parent "winter sports".

In some sense, the taxonomy classes are used to select the set of applicable ads, and the keywords are used to narrow down the search to concepts whose granularity is too small to be in the taxonomy. The taxonomy contains nodes for topics that do not change fast, for example, brands of digital cameras, say "Canon". The keywords capture specificity at a level that is more dynamic and granular. In the digital camera example this would correspond to the level of a particular model, say "Canon SD450", whose advertising life might be just a few months. Updating the taxonomy with new nodes or even new vocabulary each time a new model comes to the market is prohibitively expensive when we are dealing with millions of manufacturers.

In addition to the increased click-through rate (CTR) due to increased relevance, a significant but harder to quantify benefit of the semantic-syntactic matching is that the resulting page has a unified feel and improves the user experience. In the Chevy Tahoe example above, the classifier would establish that the page is about cars/automotive, and only those ads would be considered. Even if there are no ads for this particular Chevy model, the chosen ads will still be within the automotive domain.

To implement our approach we need to solve a challenging problem: classify both pages and ads within a large taxonomy (so that the topic granularity is small enough) with high precision (to reduce the probability of mismatch). We evaluated several classifiers and taxonomies, and in this paper we present results using a taxonomy with close to 6000 nodes and a variation of Rocchio's classifier [9]. This classifier gave the best results in both page and ad classification, and ultimately in ad relevance.

The paper proceeds as follows. In the next section we review the
basic principles of contextual advertising. Section 3 overviews the related work. Section 4 describes the taxonomy and document classifier that were used for page and ad classification. Section 5 describes the semantic-syntactic method. In Section 6 we briefly discuss how to efficiently search the ad space in order to return the top-k ranked ads. Experimental evaluation is presented in Section 7. Finally, Section 8 presents the concluding remarks.

2. OVERVIEW OF CONTEXTUAL ADVERTISING

3. RELATED WORK

Online advertising in general, and contextual advertising in particular, are emerging areas of research, and the published literature is sparse. A study presented in [13] confirms the intuition that ads need to be relevant to the user's interest to avoid degrading the user's experience and to increase the probability of reaction. A recent work by Ribeiro-Neto et al. [8] examines a number of strategies to match pages to ads based on extracted keywords. The ads and pages are represented as vectors in a vector space. The first five strategies proposed in that work match the pages and the ads based on the cosine of the angle between the ad vector and the page vector. To find the important parts of the ad, the authors explore using different ad sections (bid phrase, title, body) as a basis for the ad vector. The winning strategy among the first five requires the bid phrase to appear on the page, and then ranks all such ads by the cosine of the union of all the ad sections and the page vector. While both pages and ads are mapped to the same space, there is a discrepancy (impedance mismatch) between the vocabulary used in the ads and in the pages. Furthermore, since in the vector model the dimensions are determined by the number of unique words, plain cosine similarity does not take synonyms into account. To solve this problem, Ribeiro-Neto et al. expand the page vocabulary with terms from other similar pages, weighted based on the overall similarity
of the origin page to the matched page, and show improved matching precision.\nIn a follow-up work [7] the authors propose a method to learn impact of individual features using genetic programming to produce a matching function.\nThe function is represented as a tree composed of arithmetic operators and the log function as internal nodes, and different numerical features of the query and ad terms as leafs.\nThe results show that genetic programming finds matching functions that significantly improve the matching compared to the best method (without page side expansion) reported in [8].\nAnother approach to contextual advertising is to reduce it to the problem of sponsored search advertising by extracting phrases from the page and matching them with the bid phrase of the ads.\nIn [14] a system for phrase extraction is described that used a variety of features to determine the importance of page phrases for advertising purposes.\nThe system is trained with pages that have been hand annotated with important phrases.\nThe learning algorithm takes into account features based on tf-idf, html meta data and query logs to detect the most important phrases.\nDuring evaluation, each page phrase up to length 5 is considered as potential result and evaluated against a trained classifier.\nIn our work we also experimented with a phrase extractor based on the work reported in [12].\nWhile increasing slightly the precision, it did not change the relative performance of the explored algorithms.\n4.\nPAGE AND AD CLASSIFICATION\n4.1 Taxonomy Choice\n4.2 Classification Method\n5.\nSEMANTIC-SYNTACTIC MATCHING\n6.\nSEARCHING THE AD SPACE\n7.\nEXPERIMENTAL EVALUATION\n7.1 Data and Methodology\n7.2 Results\n8.\nCONCLUSION\nContextual advertising is the economic engine behind a large number of non-transactional sites on the Web.\nStudies have shown that one of the main success factors for contextual ads is their relevance to the surrounding content.\nAll existing commercial contextual 
match solutions known to us evolved from search advertising solutions whereby a search query is matched to the bid phrase of the ads.\nA natural extension of search advertising is to extract phrases from the page and match them to the bid phrase of the ads.\nHowever, individual phrases and words might have multiple meanings and\/or be unrelated to the overall topic of the page, leading to mismatched ads.\nIn this paper we proposed a novel way of matching advertisements to web pages that relies on a topical (semantic) match as a major component of the relevance score.\nThe semantic match relies on the classification of pages and ads into a 6000-node commercial advertising taxonomy to determine their topical distance.\nAs the classification relies on the full content of the page, it is more robust than individual page phrases.\nThe semantic match is complemented with a syntactic match, and the final score is a convex combination of the two sub-scores with the relative weight of each determined by a parameter \u03b1.\nWe evaluated the semantic-syntactic approach against a syntactic approach over a set of pages with different contextual advertising.\nAs shown in our experimental evaluation, the optimal value of the parameter \u03b1 depends on the precise objective of optimization (precision at a particular position, precision at a given recall).\nHowever, in all cases the optimal value of \u03b1 is between 0.25 and 0.9, indicating a significant effect of the semantic score component.\nThe effectiveness of the syntactic match depends on the quality of the pages used.\nIn lower quality pages we are more likely to make classification errors that will then negatively impact the matching.\nWe demonstrated that it is feasible to build a large scale classifier that has sufficiently good precision for this application.\nWe are currently examining how to employ machine learning algorithms to learn the optimal value of \u03b1 based on a collection of features of the input
pages.","lvl-4":"A Semantic Approach to Contextual Advertising\nABSTRACT\nContextual advertising or Context Match (CM) refers to the placement of commercial textual advertisements within the content of a generic web page, while Sponsored Search (SS) advertising consists in placing ads on result pages from a web search engine, with ads driven by the originating query.\nIn CM there is usually an intermediary commercial ad-network entity in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between the publisher and the ad-network) and improving the user experience.\nWith these goals in mind, it is preferable to have ads relevant to the page content, rather than generic ads.\nThe SS market developed quicker than the CM market, and most textual ads are still characterized by \"bid phrases\" representing those queries where the advertisers would like to have their ad displayed.\nHence, the first technologies for CM have relied on previous solutions for SS, by simply extracting one or more phrases from the given page content, and displaying ads corresponding to searches on these phrases, in a purely syntactic approach.\nHowever, due to the vagaries of phrase extraction, and the lack of context, this approach leads to many irrelevant ads.\nTo overcome this problem, we propose a system for contextual ad matching based on a combination of semantic and syntactic features.\n1.\nINTRODUCTION\nWeb advertising supports a large swath of today's Internet ecosystem.\nA large part of this market consists of textual ads, that is, short text messages usually marked as \"sponsored links\" or similar.\nThe main advertising channels used to distribute textual ads are:\n1.\nSponsored Search or Paid Search advertising, which consists in placing ads on the result pages from a web search engine, with ads driven by the originating query.\nAll major current web search engines (Google, Yahoo!, and Microsoft) support such ads and act simultaneously as a search
engine and an ad agency.\n2.\nContextual advertising or Context Match, which refers to the placement of commercial ads within the content of a generic web page.\nIn contextual advertising there is usually a commercial intermediary, called an ad-network, in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between publisher and ad-network) and improving user experience.\nAgain, all major current web search engines (Google, Yahoo!, and Microsoft) provide such ad-networking services but there are also many smaller players.\nThe SS market developed quicker than the CM market, and most textual ads are still characterized by \"bid phrases\" representing those queries where the advertisers would like to have their ad displayed.\nCM supports sites that range from individual bloggers and small niche communities to large publishers such as major newspapers.\nWithout this model, the web would be a lot smaller!\nThe prevalent pricing model for textual ads is that the advertisers pay a certain amount for every click on the advertisement (pay-per-click or PPC).\nThere are also other models used: pay-per-impression, where the advertisers pay for the number of exposures of an ad, and pay-per-action, where the advertiser pays only if the ad leads to a sale or similar transaction.\nFor simplicity, we only deal with the PPC model in this paper.\nGiven a page, rather than placing generic ads, it seems preferable to have ads related to the content to provide a better user experience and thus to increase the probability of clicks.\nPreviously published approaches estimated the ad relevance based on co-occurrence of the same words or phrases within the ad and within the page (see [7, 8] and Section 3 for more details).\nHowever, targeting mechanisms based solely on phrases found within the text of the page can lead to problems: For example, a page about a famous golfer named \"John Maytag\" might trigger an ad for \"Maytag dishwashers\" since Maytag is a
popular brand.\nAnother example could be a page describing the Chevy Tahoe truck (a popular vehicle in the US) triggering an ad about \"Lake Tahoe vacations\".\nIn order to solve this problem we propose a matching mechanism that combines a semantic phase with the traditional keyword matching, that is, a syntactic phase.\nThe semantic phase classifies the page and the ads into a taxonomy of topics and uses the proximity of the ad and page classes as a factor in the ad ranking formula.\nHence we favor ads that are topically related to the page and thus avoid the pitfalls of the purely syntactic approach.\nFurthermore, by using a hierarchical taxonomy we allow for the gradual generalization of the ad search space in the case when there are no ads matching the precise topic of the page.\nIn some sense, the taxonomy classes are used to select the set of applicable ads and the keywords are used to narrow down the search to concepts that are of too small granularity to be in the taxonomy.\nThe taxonomy contains nodes for topics that do not change fast, for example, brands of digital cameras, say \"Canon\".\nIn the digital camera example this would correspond to the level of a particular model, say \"Canon SD450\", whose advertising life might be just a few months.\nIn addition to increased click-through rate (CTR) due to increased relevance, a significant but harder-to-quantify benefit of the semantic-syntactic matching is that the resulting page has a unified feel and improves the user experience.\nIn the Chevy Tahoe example above, the classifier would establish that the page is about cars\/automotive and only those ads will be considered.\nEven if there are no ads for this particular Chevy model, the chosen ads will still be within the automotive domain.\nTo implement our approach we need to solve a challenging problem: classify both pages and ads within a large taxonomy (so that the topic granularity would be small enough) with high precision (to reduce the probability of
mismatch).\nWe evaluated several classifiers and taxonomies and in this paper we present results using a taxonomy with close to 6000 nodes using a variation of Rocchio's classifier [9].\nThis classifier gave the best results in both page and ad classification, and ultimately in ad relevance.\nThe paper proceeds as follows.\nIn the next section we review the basic principles of contextual advertising.\nSection 3 overviews the related work.\nSection 4 describes the taxonomy and document classifier that were used for page and ad classification.\nSection 5 describes the semantic-syntactic method.\nIn Section 6 we briefly discuss how to efficiently search the ad space in order to return the top-k ranked ads.\nExperimental evaluation is presented in Section 7.\nFinally, Section 8 presents the concluding remarks.\n3.\nRELATED WORK\nOnline advertising in general and contextual advertising in particular are emerging areas of research.\nThe published literature is very sparse.\nA recent work by Ribeiro-Neto et al.
[8] examines a number of strategies to match pages to ads based on extracted keywords.\nThe ads and pages are represented as vectors in a vector space.\nThe first five strategies proposed in that work match the pages and the ads based on the cosine of the angle between the ad vector and the page vector.\nTo find the important part of the ad, the authors explore using different ad sections (bid phrase, title, body) as a basis for the ad vector.\nThe winning strategy out of the first five requires the bid phrase to appear on the page and then ranks all such ads by the cosine of the union of all the ad sections and the page vectors.\nWhile both pages and ads are mapped to the same space, there is a discrepancy (impedance mismatch) between the vocabulary used in the ads and in the pages.\nFurthermore, since in the vector model the dimensions are determined by the number of unique words, plain cosine similarity will not take synonyms into account.\nTo solve this problem, Ribeiro-Neto et al. expand the page vocabulary with terms from other similar pages weighted based on the overall similarity of the origin page to the matched page, and show improved matching precision.\nIn a follow-up work [7] the authors propose a method to learn the impact of individual features using genetic programming to produce a matching function.\nThe results show that genetic programming finds matching functions that significantly improve the matching compared to the best method (without page side expansion) reported in [8].\nAnother approach to contextual advertising is to reduce it to the problem of sponsored search advertising by extracting phrases from the page and matching them with the bid phrase of the ads.\nIn [14] a system for phrase extraction is described that uses a variety of features to determine the importance of page phrases for advertising purposes.\nThe system is trained with pages that have been hand-annotated with important phrases.\nThe learning algorithm takes into account
features based on tf-idf, HTML metadata and query logs to detect the most important phrases.\nDuring evaluation, each page phrase up to length 5 is considered as a potential result and evaluated against a trained classifier.\nIn our work we also experimented with a phrase extractor based on the work reported in [12].\nWhile slightly increasing the precision, it did not change the relative performance of the explored algorithms.\n8.\nCONCLUSION\nContextual advertising is the economic engine behind a large number of non-transactional sites on the Web.\nStudies have shown that one of the main success factors for contextual ads is their relevance to the surrounding content.\nAll existing commercial contextual match solutions known to us evolved from search advertising solutions whereby a search query is matched to the bid phrase of the ads.\nA natural extension of search advertising is to extract phrases from the page and match them to the bid phrase of the ads.\nHowever, individual phrases and words might have multiple meanings and\/or be unrelated to the overall topic of the page, leading to mismatched ads.\nIn this paper we proposed a novel way of matching advertisements to web pages that relies on a topical (semantic) match as a major component of the relevance score.\nThe semantic match relies on the classification of pages and ads into a 6000-node commercial advertising taxonomy to determine their topical distance.\nAs the classification relies on the full content of the page, it is more robust than individual page phrases.\nThe semantic match is complemented with a syntactic match, and the final score is a convex combination of the two sub-scores with the relative weight of each determined by a parameter \u03b1.\nWe evaluated the semantic-syntactic approach against a syntactic approach over a set of pages with different contextual advertising.\nHowever, in all cases the optimal value of \u03b1 is between 0.25 and 0.9, indicating a significant effect of the semantic
score component.\nThe effectiveness of the syntactic match depends on the quality of the pages used.\nIn lower quality pages we are more likely to make classification errors that will then negatively impact the matching.\nWe are currently examining how to employ machine learning algorithms to learn the optimal value of \u03b1 based on a collection of features of the input pages.","lvl-2":"A Semantic Approach to Contextual Advertising\nABSTRACT\nContextual advertising or Context Match (CM) refers to the placement of commercial textual advertisements within the content of a generic web page, while Sponsored Search (SS) advertising consists in placing ads on result pages from a web search engine, with ads driven by the originating query.\nIn CM there is usually an intermediary commercial ad-network entity in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between the publisher and the ad-network) and improving the user experience.\nWith these goals in mind, it is preferable to have ads relevant to the page content, rather than generic ads.\nThe SS market developed quicker than the CM market, and most textual ads are still characterized by \"bid phrases\" representing those queries where the advertisers would like to have their ad displayed.\nHence, the first technologies for CM have relied on previous solutions for SS, by simply extracting one or more phrases from the given page content, and displaying ads corresponding to searches on these phrases, in a purely syntactic approach.\nHowever, due to the vagaries of phrase extraction, and the lack of context, this approach leads to many irrelevant ads.\nTo overcome this problem, we propose a system for contextual ad matching based on a combination of semantic and syntactic features.\n1.\nINTRODUCTION\nWeb advertising supports a large swath of today's Internet ecosystem.\nThe total internet advertiser spend in the US alone in 2006 is estimated at over 17 billion dollars with a growth rate
of almost 20% year over year.\nA large part of this market consists of textual ads, that is, short text messages usually marked as \"sponsored links\" or similar.\nThe main advertising channels used to distribute textual ads are:\n1.\nSponsored Search or Paid Search advertising, which consists in placing ads on the result pages from a web search engine, with ads driven by the originating query.\nAll major current web search engines (Google, Yahoo!, and Microsoft) support such ads and act simultaneously as a search engine and an ad agency.\n2.\nContextual advertising or Context Match, which refers to the placement of commercial ads within the content of a generic web page.\nIn contextual advertising there is usually a commercial intermediary, called an ad-network, in charge of optimizing the ad selection with the twin goal of increasing revenue (shared between publisher and ad-network) and improving user experience.\nAgain, all major current web search engines (Google, Yahoo!, and Microsoft) provide such ad-networking services but there are also many smaller players.\nThe SS market developed quicker than the CM market, and most textual ads are still characterized by \"bid phrases\" representing those queries where the advertisers would like to have their ad displayed.\n(See [5] for a \"brief history\").\nHowever, today, almost all of the for-profit non-transactional web sites (that is, sites that do not sell anything directly) rely at least in part on revenue from context match.\nCM supports sites that range from individual bloggers and small niche communities to large publishers such as major newspapers.\nWithout this model, the web would be a lot smaller!\nThe prevalent pricing model for textual ads is that the advertisers pay a certain amount for every click on the advertisement (pay-per-click or PPC).\nThere are also other models used: pay-per-impression, where the advertisers pay for the number of exposures of an ad, and pay-per-action, where the advertiser pays
only if the ad leads to a sale or similar transaction.\nFor simplicity, we only deal with the PPC model in this paper.\nGiven a page, rather than placing generic ads, it seems preferable to have ads related to the content to provide a better user experience and thus to increase the probability of clicks.\nThis intuition is supported by the analogy to conventional publishing where there are very successful magazines (e.g. Vogue) where a majority of the content is topical advertising (fashion in the case of Vogue) and by user studies that have confirmed that increased relevance increases the number of ad-clicks [4, 13].\nPreviously published approaches estimated the ad relevance based on co-occurrence of the same words or phrases within the ad and within the page (see [7, 8] and Section 3 for more details).\nHowever, targeting mechanisms based solely on phrases found within the text of the page can lead to problems: For example, a page about a famous golfer named \"John Maytag\" might trigger an ad for \"Maytag dishwashers\" since Maytag is a popular brand.\nAnother example could be a page describing the Chevy Tahoe truck (a popular vehicle in the US) triggering an ad about \"Lake Tahoe vacations\".\nPolysemy is not the only culprit: there is a (maybe apocryphal) story about a lurid news item about a headless body found in a suitcase triggering an ad for Samsonite luggage!\nIn all these examples the mismatch arises from the fact that the ads are not appropriate for the context.\nIn order to solve this problem we propose a matching mechanism that combines a semantic phase with the traditional keyword matching, that is, a syntactic phase.\nThe semantic phase classifies the page and the ads into a taxonomy of topics and uses the proximity of the ad and page classes as a factor in the ad ranking formula.\nHence we favor ads that are topically related to the page and thus avoid the pitfalls of the purely syntactic approach.\nFurthermore, by using a hierarchical taxonomy we
allow for the gradual generalization of the ad search space in the case when there are no ads matching the precise topic of the page.\nFor example, if the page is about an event in curling, a rare winter sport, and contains the words \"Alpine Meadows\", the system would still rank highly ads for skiing in Alpine Meadows, as these ads belong to the class \"skiing\" which is a sibling of the class \"curling\" and both of these classes share the parent \"winter sports\".\nIn some sense, the taxonomy classes are used to select the set of applicable ads and the keywords are used to narrow down the search to concepts that are of too small granularity to be in the taxonomy.\nThe taxonomy contains nodes for topics that do not change fast, for example, brands of digital cameras, say \"Canon\".\nThe keywords capture the specificity to a level that is more dynamic and granular.\nIn the digital camera example this would correspond to the level of a particular model, say \"Canon SD450\", whose advertising life might be just a few months.\nUpdating the taxonomy with new nodes or even new vocabulary each time a new model comes to the market is prohibitively expensive when we are dealing with millions of manufacturers.\nIn addition to increased click-through rate (CTR) due to increased relevance, a significant but harder-to-quantify benefit of the semantic-syntactic matching is that the resulting page has a unified feel and improves the user experience.\nIn the Chevy Tahoe example above, the classifier would establish that the page is about cars\/automotive and only those ads will be considered.\nEven if there are no ads for this particular Chevy model, the chosen ads will still be within the automotive domain.\nTo implement our approach we need to solve a challenging problem: classify both pages and ads within a large taxonomy (so that the topic granularity would be small enough) with high precision (to reduce the probability of mismatch).\nWe evaluated several classifiers and
taxonomies and in this paper we present results using a taxonomy with close to 6000 nodes using a variation of Rocchio's classifier [9].\nThis classifier gave the best results in both page and ad classification, and ultimately in ad relevance.\nThe paper proceeds as follows.\nIn the next section we review the basic principles of contextual advertising.\nSection 3 overviews the related work.\nSection 4 describes the taxonomy and document classifier that were used for page and ad classification.\nSection 5 describes the semantic-syntactic method.\nIn Section 6 we briefly discuss how to efficiently search the ad space in order to return the top-k ranked ads.\nExperimental evaluation is presented in Section 7.\nFinally, Section 8 presents the concluding remarks.\n2.\nOVERVIEW OF CONTEXTUAL ADVERTISING\nContextual advertising is an interplay of four players:\n\u2022 The publisher is the owner of the web pages on which the advertising is displayed.\nThe publisher typically aims to maximize advertising revenue while providing a good user experience.\n\u2022 The advertiser provides the supply of ads.\nUsually the activity of the advertisers is organized around campaigns, which are defined by a set of ads with a particular temporal and thematic goal (e.g.
sale of digital cameras during the holiday season).\nAs in traditional advertising, the goal of the advertisers can be broadly defined as the promotion of products or services.\n\u2022 The ad network is a mediator between the advertiser and the publisher and selects the ads that are put on the pages.\nThe ad-network shares the advertisement revenue with the publisher.\n\u2022 Users visit the web pages of the publisher and interact with the ads.\nContextual advertising usually falls into the category of direct marketing (as opposed to brand advertising), that is, advertising whose aim is a \"direct response\", where the effect of a campaign is measured by the user reaction.\nOne of the advantages of online advertising in general and contextual advertising in particular is that, compared to the traditional media, it is relatively easy to measure the user response.\nUsually the desired immediate reaction is for the user to follow the link in the ad and visit the advertiser's web site and, as noted, the prevalent financial model is that the advertiser pays a certain amount for every click on the advertisement (PPC).\nThe revenue is shared between the publisher and the network.\nContext match advertising has grown from Sponsored Search advertising, which consists in placing ads on the result pages from a web search engine, with ads driven by the originating query.\nIn most networks, the amount paid by the advertiser for each SS click is determined by an auction process where the advertisers place bids on a search phrase, and their position in the tower of ads displayed in conjunction with the result is determined by their bid.\nThus each ad is annotated with one or more bid phrases.\nThe bid phrase has no direct bearing on the ad placement in CM.\nHowever, it is a concise description of the target ad audience as determined by the advertiser and it has been shown to be an important feature for successful CM ad placement [8].\nIn addition to the bid phrase, an ad is also characterized
by a title usually displayed in a bold font, and an abstract or creative, which is the few lines of text, usually less than 120 characters, displayed on the page.\nThe ad-network model aligns the interests of the publishers, advertisers and the network.\nIn general, clicks bring benefits to both the publisher and the ad network by providing revenue, and to the advertiser by bringing traffic to the target web site.\nThe revenue of the network, given a page p, can be estimated as:\nR(p) = \u03a3_{i=1..k} P(click | p, a_i) \u00b7 price(a_i, i)\nwhere k is the number of ads displayed on page p and price(a_i, i) is the click-price of the current ad a_i at position i.\nThe price in this model depends on the set of ads presented on the page.\nSeveral models have been proposed to determine the price, most of them based on generalizations of second price auctions.\nHowever, for simplicity we ignore the pricing model and concentrate on finding ads that will maximize the first term of the product, that is, we search for\narg max_{a_i} P(click | p, a_i)\nFurthermore, we assume that the probability of click for a given ad and page is determined by its relevance score with respect to the page, thus ignoring the positional effect of the ad placement on the page.\nWe assume that this is an orthogonal factor to the relevance component and could be easily incorporated in the model.\n3.\nRELATED WORK\nOnline advertising in general and contextual advertising in particular are emerging areas of research.\nThe published literature is very sparse.\nA study presented in [13] confirms the intuition that ads need to be relevant to the user's interest to avoid degrading the user's experience and to increase the probability of reaction.\nA recent work by Ribeiro-Neto et al.
[8] examines a number of strategies to match pages to ads based on extracted keywords.\nThe ads and pages are represented as vectors in a vector space.\nThe first five strategies proposed in that work match the pages and the ads based on the cosine of the angle between the ad vector and the page vector.\nTo find the important part of the ad, the authors explore using different ad sections (bid phrase, title, body) as a basis for the ad vector.\nThe winning strategy out of the first five requires the bid phrase to appear on the page and then ranks all such ads by the cosine of the union of all the ad sections and the page vectors.\nWhile both pages and ads are mapped to the same space, there is a discrepancy (impedance mismatch) between the vocabulary used in the ads and in the pages.\nFurthermore, since in the vector model the dimensions are determined by the number of unique words, plain cosine similarity will not take synonyms into account.\nTo solve this problem, Ribeiro-Neto et al. expand the page vocabulary with terms from other similar pages weighted based on the overall similarity of the origin page to the matched page, and show improved matching precision.\nIn a follow-up work [7] the authors propose a method to learn the impact of individual features using genetic programming to produce a matching function.\nThe function is represented as a tree composed of arithmetic operators and the log function as internal nodes, and different numerical features of the query and ad terms as leaves.\nThe results show that genetic programming finds matching functions that significantly improve the matching compared to the best method (without page side expansion) reported in [8].\nAnother approach to contextual advertising is to reduce it to the problem of sponsored search advertising by extracting phrases from the page and matching them with the bid phrase of the ads.\nIn [14] a system for phrase extraction is described that uses a variety of features to determine the
importance of page phrases for advertising purposes.\nThe system is trained with pages that have been hand-annotated with important phrases.\nThe learning algorithm takes into account features based on tf-idf, HTML metadata and query logs to detect the most important phrases.\nDuring evaluation, each page phrase up to length 5 is considered as a potential result and evaluated against a trained classifier.\nIn our work we also experimented with a phrase extractor based on the work reported in [12].\nWhile slightly increasing the precision, it did not change the relative performance of the explored algorithms.\n4.\nPAGE AND AD CLASSIFICATION\n4.1 Taxonomy Choice\nThe semantic match of the pages and the ads is performed by classifying both into a common taxonomy.\nThe matching process requires that the taxonomy provides sufficient differentiation between the common commercial topics.\nFor example, classifying all medical-related pages into one node will not result in a good classification since both \"sore foot\" and \"flu\" pages will end up in the same node.\nThe ads suitable for these two concepts are, however, very different.\nTo obtain sufficient resolution, we used a taxonomy of around 6000 nodes primarily built for classifying commercial interest queries, rather than pages or ads.\nThis taxonomy has been commercially built by Yahoo!
US.\nWe will explain below how we can use the same taxonomy to classify pages and ads as well.\nEach node in our source taxonomy is represented as a collection of exemplary bid phrases or queries that correspond to that node concept.\nEach node has on average around 100 queries.\nThe queries placed in the taxonomy are high volume queries and queries of high interest to advertisers, as indicated by an unusually high cost-per-click (CPC) price.\nThe taxonomy has been populated by human editors using keyword suggestion tools similar to the ones used by ad networks to suggest keywords to advertisers.\nAfter initial seeding with a few queries, using the provided tools a human editor can add several hundred queries to a given node.\nNevertheless, it has been a significant effort to develop this 6000-node taxonomy and it has required several person-years of work.\nA similar-in-spirit process for building enterprise taxonomies via queries has been presented in [6].\nHowever, the details and tools are completely different.\nFigure 1 provides some statistics about the taxonomy used in this work.\n4.2 Classification Method\nAs explained, the semantic phase of the matching relies on ads and pages being topically close.\nThus we need to classify pages into the same taxonomy used to classify ads.\nIn this section we overview the methods we used to build a page and an ad classifier pair.\nThe detailed description and evaluation of this process is outside the scope of this paper.\nGiven the taxonomy of queries (or bid-phrases--we use these terms interchangeably) described in the previous section, we tried three methods to build corresponding page and ad classifiers.\nFor the first two methods we tried to find exemplary pages and ads for each concept as follows:\nFigure 1: Taxonomy statistics: categories per level; fanout for non-leaf nodes; and queries per node\nWe generated a page training set by running the queries in the taxonomy over a Web search index and using the top 10
results after some filtering as documents labeled with the query's label.\nOn the ad side we generated a training set for each class by selecting the ads that have a bid phrase assigned to this class.\nUsing these training sets, we then trained a hierarchical SVM [2] (one against all between every group of siblings) and a log-regression [11] classifier.\n(The second method differs from the first in the type of secondary filtering used.\nThis filtering eliminates low content pages, pages deemed unsuitable for advertising, pages that lead to excessive class confusion, etc.) However, we obtained the best performance by using the third document classifier, based on the information in the source taxonomy queries only.\nFor each taxonomy node we concatenated all the exemplary queries into a single meta-document.\nWe then used the meta-document as a centroid for a nearest-neighbor classifier based on Rocchio's framework [9] with only positive examples and no relevance feedback.\nEach centroid is defined as a sum of the tf-idf values of each term, normalized by the number of queries in the class:\nc_j = (\u03a3_{q \u2208 C_j} q) \/ |C_j|\nwhere c_j is the centroid for class C_j and q iterates over the tf-idf vectors of the queries in a particular class.\nThe classification is based on the cosine of the angle between the document d and the centroid meta-documents:\nScore(d, C_j) = \u03a3_{i \u2208 F} c_i \u00b7 d_i \/ (||c_j|| \u00b7 ||d||)\nwhere F is the set of features.\nThe score is normalized by the document and class length to produce comparable scores.\nThe terms c_i and d_i represent the weight of the ith feature in the class centroid and the document respectively.\nThese weights are based on the standard tf-idf formula.\nAs the score of the max class is normalized with regard to document length, the scores for different documents are comparable.\nWe conducted tests using professional editors to judge the quality of page and ad class assignments.\nThe tests showed that for both ads and pages the Rocchio classifier returned the best results, especially on the page side.\nThis is probably a result of the noise
induced by using a search engine to machine-generate training pages for the SVM and log-regression classifiers.\nIt is an area of current investigation how to improve the classification using a noisy training set.\nBased on the test results we decided to use the Rocchio classifier on both the ad and the page side.\n5.\nSEMANTIC-SYNTACTIC MATCHING\nContextual advertising systems process the content of the page, extract features, and then search the ad space to find the best matching ads.\nGiven a page p and a set of ads A = {a1, ..., as} we estimate the relative probability of click P(click | p, a) with a score that captures the quality of the match between the page and the ad.\nTo find the best ads for a page we rank the ads in A and select the top few for display.\nThe problem can be formally defined as matching every page in the set of all pages P = {p1, ..., pn} to one or more ads in the set of ads.\nEach page is represented as a set of page sections pi = {pi,1, pi,2, ..., pi,m}.\nThe sections of the page represent different structural parts, such as title, metadata, body, headings, etc.
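The Rocchio-style centroid classification described in Section 4.2 can be sketched as follows. This is a minimal illustration, not the production classifier: the tf-idf vectors, class names, and weights are invented toy values rather than entries from the 6000-node taxonomy.

```python
import math
from collections import defaultdict

def centroid(query_vectors):
    """Sum the tf-idf vectors of a class's exemplary queries and
    normalize by the number of queries in the class."""
    c = defaultdict(float)
    for q in query_vectors:
        for term, w in q.items():
            c[term] += w
    return {t: w / len(query_vectors) for t, w in c.items()}

def cosine(doc, cen):
    """Cosine between a document vector and a class centroid; normalizing
    by both lengths keeps scores comparable across documents."""
    dot = sum(w * cen.get(t, 0.0) for t, w in doc.items())
    nd = math.sqrt(sum(w * w for w in doc.values()))
    nc = math.sqrt(sum(w * w for w in cen.values()))
    return dot / (nd * nc) if nd and nc else 0.0

def classify(doc, centroids):
    """Nearest-centroid (Rocchio) assignment with positive examples only."""
    return max(centroids, key=lambda cls: cosine(doc, centroids[cls]))

# Toy taxonomy nodes, each represented by exemplary query vectors.
centroids = {
    "Skiing": centroid([{"ski": 1.0, "snow": 0.5}, {"slalom": 1.0, "ski": 0.5}]),
    "Knitting": centroid([{"yarn": 1.0, "needle": 0.8}]),
}
```

Under these toy weights, a page vector dominated by "ski" and "snow" is assigned to the "Skiing" node.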
.\nIn turn, each section is an unordered bag of terms (keywords).\nA page is represented by the union of the terms in each section: pi = {pw1^s1, pw2^s1, ..., pwk^sm}, where pw stands for a page word and the superscript indicates the section of each term.\nA term can be a unigram or a phrase extracted by a phrase extractor [12].\nSimilarly, we represent each ad as a set of sections a = {a1, a2, ..., al}, each section in turn being an unordered set of terms:\nai = {aw1^s1, aw2^s1, ..., awk^sl}\nwhere aw is an ad word.\nThe ads in our experiments have 3 sections: title, body, and bid phrase.\nIn this work, to produce the match score we use only the ad\/page textual information, leaving user information and other data for future work.\nNext, each page and ad term is associated with a weight based on the tf-idf values.\nThe tf value is determined based on the individual ad sections.\nThere are several choices for the value of idf, based on different scopes.\nOn the ad side, it has been shown in previous work that the set of ads of one campaign provide good scope for the estimation of idf that leads to improved matching results [8].\nHowever, in this work for simplicity we do not take into account campaigns.\nTo combine the impact of the term's section and its tf-idf score, the ad\/page term weight is defined as:\ntWeight(w^si) = weightSection(si) * tfidf(w)\nwhere tWeight stands for term weight and weightSection(si) is the weight assigned to a page or ad section.\nThis weight is a fixed parameter determined by experimentation.\nEach ad and page is classified into the topical taxonomy.\nWe define these two mappings:\nTax(pi) = {pc1, ..., pcu} and Tax(ai) = {ac1, ..., acv}\nwhere pc and ac are page and ad classes correspondingly.\nEach assignment is associated with a weight given by the classifier.\nThe weights are normalized to sum to 1:\nsum_{c in Tax(xi)} cWeights(c) = 1\nwhere xi is either a page or an ad, and cWeights(c) is the class weight - normalized confidence assigned by the classifier.\nThe number of classes can vary between different pages and ads.\nThis corresponds to the number of topics a page\/ad can be associated with and is almost
always in the range 1-4.\nNow we define the relevance score of an ad ai and page pi as a convex combination of the keyword (syntactic) and classification (semantic) score:\nScore(pi, ai) = \u03b1 * TaxScore(Tax(pi), Tax(ai)) + (1 - \u03b1) * KeywordScore(pi, ai)\nThe parameter \u03b1 determines the relative weight of the taxonomy score and the keyword score.\nTo calculate the keyword score we use the vector space model [1] where both the pages and ads are represented in n-dimensional space - one dimension for each distinct term.\nThe magnitude of each dimension is determined by the tWeight () formula.\nThe keyword score is then defined as the cosine of the angle between the page and the ad vectors:\nKeywordScore(pi, ai) = (sum_{k in K} tWeight(pwk) * tWeight(awk)) \/ (|pi| * |ai|)\nwhere K is the set of all the keywords.\nThe formula assumes independence between the words in the pages and ads.\nFurthermore, it ignores the order and the proximity of the terms in the scoring.\nWe experimented with the impact of phrases and proximity on the keyword score and did not see a substantial impact of these two factors.\nWe now turn to the definition of the TaxScore.\nThis function indicates the topical match between a given ad and a page.\nAs opposed to the keywords that are treated as independent dimensions, here the classes (topics) are organized into a hierarchy.\nOne of the goals in the design of the TaxScore function is to be able to generalize within the taxonomy, that is, to accept topically related ads.\nGeneralization can help to place ads in cases when there is no ad that matches both the category and the keywords of the page.\nThe example in Figure 2 illustrates this situation.\nIn this example, in the absence of ski ads, a page about skiing containing the word \"Atomic\" could be matched to the available snowboarding ad for the same brand.\nIn general we would like the match to be stronger when both the ad and the page are classified into the same node\nFigure 2: Two generalization paths\nand weaker when the distance between the nodes in the taxonomy gets larger.\nThere are multiple ways to specify the distance between two taxonomy
nodes.\nBesides the above requirement, this function should lend itself to an efficient search of the ad space.\nGiven a page we have to find the ad in a few milliseconds, as this impacts the presentation to a waiting user.\nThis will be further discussed in the next section.\nTo capture both the generalization and efficiency requirements we define the TaxScore function as follows:\nTaxScore(PC, AC) = sum_{pc in PC} sum_{ac in AC} cWeights(pc) * cWeights(ac) * idist(LCA(pc, ac), ac)\nwhere PC = Tax(p) and AC = Tax(a) are the sets of page and ad classes.\nIn this function we consider every combination of page class and ad class.\nFor each combination we multiply the product of the class weights with the inverse distance function between the least common ancestor of the two classes (LCA) and the ad class.\nThe inverse distance function idist (c1, c2) takes two nodes on the same path in the class taxonomy and returns a number in the interval [0, 1] depending on the distance of the two class nodes.\nIt returns 1 if the two nodes are the same, and declines toward 0 when LCA (pc, ac) or ac is the root of the taxonomy.\nThe rate of decline determines the weight of the generalization versus the other terms in the scoring formula.\nTo determine the rate of decline we consider the impact on the specificity of the match when we substitute a class with one of its ancestors.\nIn general the impact is topic dependent.\nFor example the node \"Hobby\" in our taxonomy has tens of children, each representing a different hobby, two examples being \"Sailing\" and \"Knitting\".\nPlacing an ad about \"Knitting\" on a page about \"Sailing\" does not make much sense.\nHowever, in the \"Winter Sports\" example above, in the absence of a better alternative, skiing ads could be put on snowboarding pages as they might promote the same venues, equipment vendors etc.
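To make the TaxScore mechanics concrete, here is a minimal sketch under stated assumptions: the score sums over every (page class, ad class) pair, and the generalization cost chains the density ratio idist(c, p) = n_c/n_p described in the text along the path from the ad class up to the LCA. The taxonomy, ad counts, and class weights below are invented toy values, not the paper's data.

```python
# Toy taxonomy: parent pointers and per-subtree ad counts (invented numbers).
PARENT = {"Skiing": "Winter Sports", "Snowboarding": "Winter Sports",
          "Winter Sports": "Sports", "Sports": None}
N_ADS = {"Skiing": 40, "Snowboarding": 30, "Winter Sports": 100, "Sports": 500}

def path_to_root(c):
    out = []
    while c is not None:
        out.append(c)
        c = PARENT[c]
    return out

def lca(c1, c2):
    """Least common ancestor of two taxonomy nodes."""
    p1 = path_to_root(c1)
    return next(a for a in path_to_root(c2) if a in p1)

def idist(child, ancestor):
    """1 when the nodes coincide; otherwise the product of n_c / n_p
    density ratios along the path from child up to ancestor."""
    d, c = 1.0, child
    while c != ancestor:
        d *= N_ADS[c] / N_ADS[PARENT[c]]
        c = PARENT[c]
    return d

def tax_score(page_classes, ad_classes):
    """Sum over every page-class/ad-class pair of the class-weight
    product times idist(LCA, ad class)."""
    return sum(wp * wa * idist(ac, lca(pc, ac))
               for pc, wp in page_classes.items()
               for ac, wa in ad_classes.items())
```

With these counts, a "Skiing" page scores a "Snowboarding" ad at 30/100 = 0.3 of a same-class match, capturing the intended penalty for generalizing through "Winter Sports".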
.\nSuch detailed analysis on a case-by-case basis is prohibitively expensive due to the size of the taxonomy.\nOne option is to use the confidences of the ancestor classes as given by the classifier.\nHowever we found these numbers not suitable for this purpose as the magnitude of the confidences does not necessarily decrease when going up the tree.\nAnother option is to use explore-exploit methods based on machine-learning from the click feedback as described in [10].\nHowever for simplicity, in this work we chose a simple heuristic to determine the cost of generalization from a child to a parent.\nIn this heuristic we look at the broadening of the scope when moving from a child to a parent.\nWe estimate the broadening by the density of ads classified in the parent nodes vs the child node.\nThe density is obtained by classifying a large set of ads in the taxonomy using the document classifier described above.\nBased on this, let nc be the number of ads classified into the subtree rooted at c.\nThen we define:\nidist(c, p) = nc \/ np\nwhere c represents the child node and p is the parent node.\nThis fraction can be viewed as a probability of an ad belonging to the parent topic being suitable for the child topic.\n6.\nSEARCHING THE AD SPACE\nIn the previous section we discussed the choice of scoring function to estimate the match between an ad and a page.\nThe top-k ads with the highest score are offered by the system for placement on the publisher's page.\nThe process of score calculation and ad selection is to be done in real time and therefore must be very efficient.\nAs the ad collections are in the range of hundreds of millions of entries, there is a need for indexed access to the ads.\nInverted indexes provide scalable and low-latency solutions for searching documents.\nHowever, these have been traditionally used to search based on keywords.\nTo be able to search the ads on a combination of keywords and classes we have mapped the classification match to term match and adapted the scoring function to be suitable for fast evaluation over inverted indexes.\nIn this section we overview the ad indexing and the ranking function of our prototype ad search system for matching ads and pages.\nWe used a standard inverted index framework where there is one posting list for each distinct term.\nThe ads are parsed into terms and each term is associated with a weight based on the section in which it appears.\nWeights from distinct occurrences of a term in an ad are added together, so that the posting lists contain one entry per term\/ad combination.\nThe next challenge is how to index the ads so that the class information is preserved in the index.\nA simple method is to create unique meta-terms for the classes and annotate each ad with one meta-term for each assigned class.\nHowever this method does not allow for generalization, since only the ads matching an exact label of the page would be selected.\nTherefore we annotated each ad with one meta-term for each ancestor of the assigned class.\nThe
weights of meta-terms are assigned according to the value of the idist () function defined in the previous section.\nOn the query side, given the keywords and the class of a page, we compose a keyword-only query by inserting one class term for each ancestor of the classes assigned to the page.\nThe scoring function is adapted to produce a two-part score: one for the class meta-terms and another for the text terms.\nDuring the class score calculation, for each class path we use only the lowest class meta-term, ignoring the others.\nFor example, if an ad belongs to the \"Skiing\" class and is annotated with both \"Skiing\" and its parent \"Winter Sports\", the index will contain the special class meta-terms for both \"Skiing\" and \"Winter Sports\" (and all their ancestors) with the weights according to the product of the classifier confidence and the idist function.\nWhen matching with a page classified into \"Skiing\", the query will contain terms for class \"Skiing\" and for each of its ancestors.\nHowever when scoring an ad classified into \"Skiing\" we will use the weight for the \"Skiing\" class meta-term.\nAds classified into \"Snowboarding\" will be scored using the weight of the \"Winter Sports\" meta-term.\nTo make this check efficient we keep a sorted list of all the class paths and, at scoring time, we search the paths bottom-up for a meta-term appearing in the ad.\nThe first meta-term is used for scoring, the rest are ignored.\nAt runtime, we evaluate the query using a variant of the WAND algorithm [3].\nThis is a document-at-a-time algorithm [1] that uses a branch-and-bound approach to derive efficient moves for the cursors associated with the posting lists.\nIt finds the next cursor to be moved based on an upper bound of the score for the documents at which the cursors are currently positioned.\nThe algorithm keeps a heap of current best candidates.\nDocuments with an upper bound smaller than the current minimum score among the candidate documents can be
eliminated from further consideration, and thus the cursors can skip over them.\nTo find the upper bound for a document, the algorithm assumes that all cursors that are before it will hit this document (i.e. the document contains all those terms represented by cursors before or at that document).\nIt has been shown that WAND can be used with any function that is monotonic with respect to the number of matching terms in the document.\nOur scoring function is monotonic since the score can never decrease when more terms are found in the document.\nIn the special case when we add a cursor representing an ancestor of a class term already factored in the score, the score simply does not change (we add 0).\nGiven these properties, we use an adaptation of the WAND algorithm where we change the calculation of the scoring function and the upper bound score calculation to reflect our scoring function.\nThe rest of the algorithm remains unchanged.\n7.\nEXPERIMENTAL EVALUATION\n7.1 Data and Methodology\nWe quantify the effect of the semantic-syntactic matching using a set of 105 pages.\nThis set of pages was selected as a random sample from a larger set of around 20 million pages with contextual advertising.\nAds for each of these pages have been selected from a larger pool of ads (tens of millions) by previous experiments conducted by Yahoo!
US for other purposes.\nEach page-ad pair has been judged by three or more human judges on a 1 to 3 scale:\n1.\nRelevant The ad is semantically directly related to the main subject of the page.\nFor example if the page is about the National Football League and the ad is about tickets for NFL games, it would be scored as 1.\n2.\nSomewhat relevant The ad is related to the secondary subject of the page, or is related to the main topic of the page in a general way.\nIn the NFL page example, an ad about NFL branded products would be judged as 2.\n3.\nIrrelevant The ad is unrelated to the page.\nFor example a mention of the NFL player John Maytag triggers washing machine ads on an NFL page.\nTable 1: Dataset statistics\nTo obtain a score for a page-ad pair we average all the scores and then round to the closest integer.\nWe then used these judgments to evaluate how well our methods distinguish the positive and the negative ad assignments for each page.\nThe statistics of the page dataset are given in Table 1.\nThe original experiments that paired the pages and the ads are loosely related to the syntactic keyword-based matching and classification-based assignment but used different taxonomies and keyword extraction techniques.\nTherefore we could not use standard pooling as an evaluation method since we did not control the way the pairs were selected and could not precisely establish the set of ads from which the placed ads were selected.\nInstead, in our evaluation for each page we consider only those ads for which we have judgments.\nEach different method was applied to this set and the ads were ranked by the score.\nThe relative effectiveness of the algorithms was judged by comparing how well the methods separated the ads with positive judgment from the ads with negative judgment.\nWe present precision at various levels of recall within this set.\nAs the set of ads per page is relatively small, the evaluation reports precision that is higher than it would be with a
larger set of negative ads.\nHowever, these numbers still establish the relative performance of the algorithms and we can use them to evaluate performance at different score thresholds.\nIn addition to the precision-recall over the judged ads, we also present Kendall's \u03c4 rank correlation coefficient to establish how far the orderings produced by our ranking algorithms are from the perfect ordering.\nFor this test we ranked the judged ads by the scores assigned by the judges and then compared this order to the order assigned by our algorithms.\nFinally we also examined the precision at positions 1, 3 and 5.\n7.2 Results\nFigure 3 shows the precision-recall curves for the syntactic matching (keywords only used) vs. a syntactic-semantic matching with the optimal value of \u03b1 = 0.8 (judged by the 11-point score [1]).\nIn this figure, we assume that the ad-page pairs judged with 1 or 2 are positive examples and the 3s are negative examples.\nWe also examined counting only the pairs judged with 1 as positive examples and did not find a significant change in the relative performance of the tested methods.\nOverlaid are also results using phrases in the keyword match.\nWe see that these additional features do not change the relative performance of the algorithm.\nThe graphs show a significant impact of the class information, especially in the mid range of recall values.\nIn the low recall part of the chart the curves meet.\nThis indicates that when the keyword match is really strong (i.e. when the ad is almost contained within the page) the precision\nFigure 3: Data Set 2: Precision vs. Recall of syntactic match (\u03b1 = 0) vs.
syntactic-semantic match (\u03b1 = 0.8)\nTable 2: Kendall's \u03c4 for different \u03b1 values\nof the syntactic keyword match is comparable with that of the semantic-syntactic match.\nNote however that the two methods might produce different ads and could be used to complement each other at this level of recall.\nThe semantic component provides the largest lift in precision in the mid range of recall, where a 25% improvement is achieved by using the class information for ad placement.\nThis means that when there is somewhat of a match between the ad and the page, the restriction to the right classes provides a better scope for selecting the ads.\nTable 2 shows the Kendall's \u03c4 values for different values of \u03b1.\nWe calculated the values by ranking all the judged ads for each page and averaging the values over all the pages.\nThe ads with tied judgment were given the rank of the middle position.\nThe results show that when we take into account all the ad-page pairs, the optimal value of \u03b1 is around 0.5.\nNote that purely syntactic match (\u03b1 = 0) is by far the weakest method.\nFigure 4 shows the effect of the parameter \u03b1 in the scoring.\nWe see that in most cases precision grows or is flat when we increase \u03b1, except at the low level of recall where due to the small number of data points there is a bit of jitter in the results.\nTable 3 shows the precision at positions 1, 3 and 5.\nAgain, the purely syntactic method has clearly the lowest score by individual positions and the total number of correctly placed ads.\nThe numbers are close due to the small number of the ads considered, but there are still some noticeable trends.\nFor position 1 the optimal \u03b1 is in the range of 0.25 to 0.75.\nFor positions 3 and 5 the optimum is at \u03b1 = 1.\nThis also indicates that for those ads that have a very high keyword score, the semantic information is somewhat less important.\nIf almost all the words in an ad appear in the page, this ad is\nFigure 4: Impact of \u03b1 on precision for different levels of recall\n\u03b1          #1   #3   #5   sum\n\u03b1 = 0      80   70   68   218\n\u03b1 = 0.25   89   76   73   238\n\u03b1 = 0.5    89   74   73   236\n\u03b1 = 0.75   89   78   73   240\n\u03b1 = 1      86   79   74   239\nTable 3: Precision at positions 1, 3 and 5\nlikely to be relevant for this page.\nHowever when there is no such clear affinity, the class information becomes a dominant factor.\n8.\nCONCLUSION\nContextual advertising is the economic engine behind a large number of non-transactional sites on the Web.\nStudies have shown that one of the main success factors for contextual ads is their relevance to the surrounding content.\nAll existing commercial contextual match solutions known to us evolved from search advertising solutions whereby a search query is matched to the bid phrase of the ads.\nA natural extension of search advertising is to extract phrases from the page and match them to the bid phrase of the ads.\nHowever, individual phrases and words might have multiple meanings and\/or be unrelated to the overall topic of the page, leading to mismatched ads.\nIn this paper we proposed a novel way of matching advertisements to web pages that relies on a topical (semantic) match as a major component of the relevance score.\nThe semantic match relies on the classification of pages and ads into a 6000-node commercial advertising taxonomy to determine their topical distance.\nAs the classification relies on the full content of the page, it is more robust than individual page phrases.\nThe semantic match is complemented with a syntactic match and the final score is a convex combination of the two sub-scores with the relative weight of each determined by a parameter \u03b1.\nWe evaluated the semantic-syntactic approach against a syntactic approach over a set of pages with different contextual advertising.\nAs shown in our experimental evaluation, the optimal value of the parameter \u03b1 depends on the precise objective of optimization (precision at a particular position, precision at a given recall).\nHowever in all cases the optimal value of
\u03b1 is between 0.25 and 0.9, indicating a significant effect of the semantic score component.\nThe effectiveness of the syntactic match depends on the quality of the pages used.\nIn lower-quality pages we are more likely to make classification errors that will then negatively impact the matching.\nWe demonstrated that it is feasible to build a large-scale classifier that has sufficiently good precision for this application.\nWe are currently examining how to employ machine learning algorithms to learn the optimal value of \u03b1 based on a collection of features of the input pages.","keyphrases":["semant","contextu advertis","contextu advertis","match","ad relev","pai-per-click","match mechan","semant-syntact match","keyword match","hierarch taxonomi class","document classifi","top-k ad","topic distanc"],"prmu":["P","P","P","P","P","U","M","M","M","U","U","M","U"]} {"id":"H-90","title":"Context-Sensitive Information Retrieval Using Implicit Feedback","abstract":"A major limitation of most existing retrieval models and systems is that the retrieval decision is made based solely on the query and document collection; information about the actual user and search context is largely ignored. In this paper, we study how to exploit implicit feedback information, including previous queries and clickthrough information, to improve retrieval accuracy in an interactive information retrieval setting. We propose several context-sensitive retrieval algorithms based on statistical language models to combine the preceding queries and clicked document summaries with the current query for better ranking of documents. We use the TREC AP data to create a test collection with search context information, and quantitatively evaluate our models using this test set.
Experiment results show that using implicit feedback, especially the clicked document summaries, can improve retrieval performance substantially.","lvl-1":"Context-Sensitive Information Retrieval Using Implicit Feedback Xuehua Shen Department of Computer Science University of Illinois at Urbana-Champaign Bin Tan Department of Computer Science University of Illinois at Urbana-Champaign ChengXiang Zhai Department of Computer Science University of Illinois at Urbana-Champaign ABSTRACT A major limitation of most existing retrieval models and systems is that the retrieval decision is made based solely on the query and document collection; information about the actual user and search context is largely ignored.\nIn this paper, we study how to exploit implicit feedback information, including previous queries and clickthrough information, to improve retrieval accuracy in an interactive information retrieval setting.\nWe propose several context-sensitive retrieval algorithms based on statistical language models to combine the preceding queries and clicked document summaries with the current query for better ranking of documents.\nWe use the TREC AP data to create a test collection with search context information, and quantitatively evaluate our models using this test set.\nExperiment results show that using implicit feedback, especially the clicked document summaries, can improve retrieval performance substantially.\nCategories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Retrieval models General Terms Algorithms 1.\nINTRODUCTION In most existing information retrieval models, the retrieval problem is treated as involving one single query and a set of documents.\nFrom a single query, however, the retrieval system can only have a very limited clue about the user's information need.\nAn optimal retrieval system thus should try to exploit as much additional context information as possible to improve retrieval accuracy, whenever it is available.\nIndeed,
context-sensitive retrieval has been identified as a major challenge in information retrieval research [2].\nThere are many kinds of context that we can exploit.\nRelevance feedback [14] can be considered as a way for a user to provide more search context and is known to be effective for improving retrieval accuracy.\nHowever, relevance feedback requires that a user explicitly provides feedback information, such as specifying the category of the information need or marking a subset of retrieved documents as relevant documents.\nSince it forces the user to engage in additional activities while the benefits are not always obvious to the user, a user is often reluctant to provide such feedback information.\nThus the effectiveness of relevance feedback may be limited in real applications.\nFor this reason, implicit feedback has attracted much attention recently [11, 13, 18, 17, 12].\nIn general, the retrieval results using the user's initial query may not be satisfactory; often, the user would need to revise the query to improve the retrieval\/ranking accuracy [8].\nFor a complex or difficult information need, the user may need to modify his\/her query and view ranked documents with many iterations before the information need is completely satisfied.\nIn such an interactive retrieval scenario, the information naturally available to the retrieval system is more than just the current user query and the document collection - in general, all the interaction history can be available to the retrieval system, including past queries, information about which documents the user has chosen to view, and even how a user has read a document (e.g., which part of a document the user spends a lot of time in reading).\nWe define implicit feedback broadly as exploiting all such naturally available interaction history to improve retrieval results.\nA major advantage of implicit feedback is that we can improve the retrieval accuracy without requiring any user effort.\nFor example, if the
current query is java, without knowing any extra information, it would be impossible to know whether it is intended to mean the Java programming language or the Java island in Indonesia.\nAs a result, the retrieved documents will likely have both kinds of documents - some may be about the programming language and some may be about the island.\nHowever, any particular user is unlikely to be searching for both types of documents.\nSuch an ambiguity can be resolved by exploiting history information.\nFor example, if we know that the previous query from the user is cgi programming, it would strongly suggest that it is the programming language that the user is searching for.\nImplicit feedback was studied in several previous works.\nIn [11], Joachims explored how to capture and exploit the clickthrough information and demonstrated that such implicit feedback information can indeed improve the search accuracy for a group of people.\nIn [18], a simulation study of the effectiveness of different implicit feedback algorithms was conducted, and several retrieval models designed for exploiting clickthrough information were proposed and evaluated.\nIn [17], some existing retrieval algorithms are adapted to improve search results based on the browsing history of a user.\nOther related work on using context includes personalized search [1, 3, 4, 7, 10], query log analysis [5], context factors [12], and implicit queries [6].\nWhile the previous work has mostly focused on using clickthrough information, in this paper, we use both clickthrough information and preceding queries, and focus on developing new context-sensitive language models for retrieval.\nSpecifically, we develop models for using implicit feedback information such as query and clickthrough history of the current search session to improve retrieval accuracy.\nWe use the KL-divergence retrieval model [19] as the basis and propose to treat context-sensitive retrieval as estimating a query language model based on the current
query and any search context information.\nWe propose several statistical language models to incorporate query and clickthrough history into the KL-divergence model.\nOne challenge in studying implicit feedback models is that there does not exist any suitable test collection for evaluation.\nWe thus use the TREC AP data to create a test collection with implicit feedback information, which can be used to quantitatively evaluate implicit feedback models.\nTo the best of our knowledge, this is the first test set for implicit feedback.\nWe evaluate the proposed models using this data set.\nThe experimental results show that using implicit feedback information, especially the clickthrough data, can substantially improve retrieval performance without requiring additional effort from the user.\nThe remaining sections are organized as follows.\nIn Section 2, we attempt to define the problem of implicit feedback and introduce some terms that we will use later.\nIn Section 3, we propose several implicit feedback models based on statistical language models.\nIn Section 4, we describe how we create the data set for implicit feedback experiments.\nIn Section 5, we evaluate different implicit feedback models on the created data set.\nSection 6 presents our conclusions and future work.\n2.\nPROBLEM DEFINITION There are two kinds of context information we can use for implicit feedback.\nOne is short-term context, which is the immediate surrounding information that sheds light on a user's current information need in a single session.\nA session can be considered as a period consisting of all interactions for the same information need.\nThe category of a user's information need (e.g., kids or sports), previous queries, and recently viewed documents are all examples of short-term context.\nSuch information is most directly related to the current information need of the user and thus can be expected to be most useful for improving the current search.\nIn general, short-term context is
most useful for improving search in the current session, but may not be so helpful for search activities in a different session.\nThe other kind of context is long-term context, which refers to information such as a user's education level and general interest, accumulated user query history and past user clickthrough information; such information is generally stable for a long time and is often accumulated over time.\nLong-term context can be applicable to all sessions, but may not be as effective as the short-term context in improving search accuracy for a particular session.\nIn this paper, we focus on the short-term context, though some of our methods can also be used to naturally incorporate some long-term context.\nIn a single search session, a user may interact with the search system several times.\nDuring interactions, the user would continuously modify the query.\nTherefore for the current query Qk (except for the first query of a search session), there is a query history, HQ = (Q1, ..., Qk\u22121) associated with it, which consists of the preceding queries given by the same user in the current session.\nNote that we assume that the session boundaries are known in this paper.\nIn practice, we need techniques to automatically discover session boundaries, which have been studied in [9, 16].\nTraditionally, the retrieval system only uses the current query Qk to do retrieval.\nBut the short-term query history clearly may provide useful clues about the user's current information need as seen in the java example given in the previous section.\nIndeed, our previous work [15] has shown that the short-term query history is useful for improving retrieval accuracy.\nIn addition to the query history, there may be other short-term context information available.\nFor example, a user would presumably frequently click some documents to view.\nWe refer to data associated with these actions as clickthrough history.\nThe clickthrough data may include the title, summary,
and perhaps also the content and location (e.g., the URL) of the clicked document.\nAlthough it is not clear whether a viewed document is actually relevant to the user``s information need, we may safely assume that the displayed summary\/title information about the document is attractive to the user, thus conveys information about the user``s information need.\nSuppose we concatenate all the displayed text information about a document (usually title and summary) together, we will also have a clicked summary Ci in each round of retrieval.\nIn general, we may have a history of clicked summaries C1, ..., Ck\u22121.\nWe will also exploit such clickthrough history HC = (C1, ..., Ck\u22121) to improve our search accuracy for the current query Qk.\nPrevious work has also shown positive results using similar clickthrough information [11, 17].\nBoth query history and clickthrough history are implicit feedback information, which naturally exists in interactive information retrieval, thus no additional user effort is needed to collect them.\nIn this paper, we study how to exploit such information (HQ and HC ), develop models to incorporate the query history and clickthrough history into a retrieval ranking function, and quantitatively evaluate these models.\n3.\nLANGUAGE MODELS FOR CONTEXTSENSITIVEINFORMATIONRETRIEVAL Intuitively, the query history HQ and clickthrough history HC are both useful for improving search accuracy for the current query Qk.\nAn important research question is how we can exploit such information effectively.\nWe propose to use statistical language models to model a user``s information need and develop four specific context-sensitive language models to incorporate context information into a basic retrieval model.\n3.1 Basic retrieval model We use the Kullback-Leibler (KL) divergence method [19] as our basic retrieval method.\nAccording to this model, the retrieval task involves computing a query language model \u03b8Q for a given query and a document 
language model \u03b8D for a document and then computing their KL divergence D(\u03b8Q||\u03b8D), which serves as the score of the document.\nOne advantage of this approach is that we can naturally incorporate the search context as additional evidence to improve our estimate of the query language model.\nFormally, let HQ = (Q1, ..., Qk\u22121) be the query history and the current query be Qk.\nLet HC = (C1, ..., Ck\u22121) be the clickthrough history.\nNote that Ci is the concatenation of all clicked documents'' summaries in the i-th round of retrieval since we may reasonably treat all these summaries equally.\nOur task is to estimate a context query model, which we denote by p(w|\u03b8k), based on the current query Qk, as well as the query history HQ and clickthrough history HC .\nWe now describe several different language models for exploiting HQ and HC to estimate p(w|\u03b8k).\nWe will use c(w, X) to denote the count of word w in text X, which could be either a query or a clicked document``s summary or any other text.\nWe will use |X| to denote the length of text X or the total number of words in X. 
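As a concrete illustration of the basic retrieval model, the following sketch scores a document by the negative KL divergence between a query model and a Dirichlet-smoothed document model. This is only a minimal sketch: the helper names, the smoothing constant `mu`, and the small floor probability for unseen words are our own illustrative assumptions, not details fixed by the paper.

```python
import math
from collections import Counter

def document_lm(doc_tokens, collection_lm, mu=2000.0):
    """Dirichlet-smoothed document language model p(w|theta_D).
    `collection_lm` maps each word to its collection probability
    (assumed precomputed over the whole corpus)."""
    counts = Counter(doc_tokens)
    n = len(doc_tokens)
    return {w: (counts.get(w, 0) + mu * collection_lm.get(w, 0.0)) / (n + mu)
            for w in set(counts) | set(collection_lm)}

def kl_score(query_lm, doc_lm, floor=1e-10):
    """Rank score: -D(theta_Q || theta_D) = sum_w p(w|theta_Q) * log(p(w|theta_D) / p(w|theta_Q)).
    Higher (closer to zero) means the document model better matches the query model."""
    score = 0.0
    for w, pq in query_lm.items():
        if pq > 0.0:
            score += pq * math.log(max(doc_lm.get(w, 0.0), floor) / pq)
    return score
```

Documents are then ranked by `kl_score`; the four context-sensitive models of this section differ only in how the query model p(w|θk) passed in as `query_lm` is estimated.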
3.2 Fixed Coefficient Interpolation (FixInt)

Our first idea is to summarize the query history HQ with a unigram language model p(w|HQ) and the clickthrough history HC with another unigram language model p(w|HC). Then we linearly interpolate these two history models to obtain the history model p(w|H). Finally, we interpolate the history model p(w|H) with the current query model p(w|Qk). These models are defined as follows:

p(w|Qi) = c(w, Qi) / |Qi|
p(w|HQ) = (1/(k−1)) Σ_{i=1}^{k−1} p(w|Qi)
p(w|Ci) = c(w, Ci) / |Ci|
p(w|HC) = (1/(k−1)) Σ_{i=1}^{k−1} p(w|Ci)
p(w|H) = β p(w|HC) + (1 − β) p(w|HQ)
p(w|θk) = α p(w|Qk) + (1 − α) p(w|H)

where β ∈ [0, 1] is a parameter to control the weight on each history model, and α ∈ [0, 1] is a parameter to control the weight on the current query versus the history information. If we combine these equations, we see that

p(w|θk) = α p(w|Qk) + (1 − α) [β p(w|HC) + (1 − β) p(w|HQ)]

That is, the estimated context query model is simply a fixed-coefficient interpolation of the three models p(w|Qk), p(w|HQ), and p(w|HC).

3.3 Bayesian Interpolation (BayesInt)

One possible problem with the FixInt approach is that the coefficients, especially α, are fixed across all queries. Intuitively, however, if the current query Qk is very long, we should trust the current query more, whereas if Qk has just one word, it may be beneficial to put more weight on the history. To capture this intuition, we treat p(w|HQ) and p(w|HC) as Dirichlet priors and Qk as the observed data, and estimate a context query model using a Bayesian estimator. The estimated model is given by

p(w|θk) = (c(w, Qk) + µ p(w|HQ) + ν p(w|HC)) / (|Qk| + µ + ν)
        = (|Qk| / (|Qk| + µ + ν)) p(w|Qk) + ((µ + ν) / (|Qk| + µ + ν)) [(µ/(µ+ν)) p(w|HQ) + (ν/(µ+ν)) p(w|HC)]

where µ is the prior sample size for p(w|HQ) and ν is the prior sample size for p(w|HC). We see that the only difference between BayesInt and FixInt is that the interpolation coefficients are now adaptive to the query length. Indeed, viewing BayesInt as FixInt, we have α = |Qk| / (|Qk| + µ + ν) and β = ν / (ν + µ); thus with fixed µ and ν, we obtain a query-dependent α. Later we will show that such an adaptive α empirically performs better than a fixed α.

3.4 Online Bayesian Updating (OnlineUp)

Both FixInt and BayesInt summarize the history information by averaging the unigram language models estimated from previous queries or clicked summaries. This means that all previous queries are treated equally, and so are all clicked summaries. However, as the user interacts with the system and acquires more knowledge about the information in the collection, the reformulated queries presumably become better and better. Thus it appears reasonable to assign decaying weights to the previous queries so as to trust a recent query more than an earlier one. Interestingly, if we incrementally update our belief about the user's information need after seeing each query, we naturally obtain decaying weights on the previous queries. Since such an incremental online updating strategy can be used to exploit any evidence in an interactive retrieval system, we present it in a more general way.

In a typical retrieval system, the system responds to every new query entered by the user by presenting a ranked list of documents. In order to rank documents, the system must have some model of the user's information need. In the KL-divergence retrieval model, this means that the system must compute a query model whenever a user enters a (new) query. A principled way of updating the query model is to use Bayesian estimation, which we discuss below.

3.4.1 Bayesian updating

We first discuss how we apply Bayesian estimation to update a query model in general. Let
p(w|φ) be our current query model and T be a new piece of text evidence observed (e.g., T can be a query or a clicked summary). To update the query model based on T, we use φ to define a Dirichlet prior parameterized as

Dir(µT p(w1|φ), ..., µT p(wN|φ))

where µT is the equivalent sample size of the prior. We use a Dirichlet prior because it is a conjugate prior for multinomial distributions. With such a conjugate prior, the predictive distribution of the updated model φ′ (or equivalently, the mean of the posterior distribution) is given by

p(w|φ′) = (c(w, T) + µT p(w|φ)) / (|T| + µT)    (1)

where c(w, T) is the count of w in T and |T| is the length of T. The parameter µT indicates our confidence in the prior, expressed in terms of an equivalent text sample comparable with T. For example, µT = 1 indicates that the influence of the prior is equivalent to adding one extra word to T.

3.4.2 Sequential query model updating

We now discuss how we can update our query model over time during an interactive retrieval process using Bayesian estimation. In general, we assume that the retrieval system maintains a current query model φi at any moment. As soon as we obtain some implicit feedback evidence in the form of a piece of text Ti, we update the query model. Initially, before we see any user query, we may already have some information about the user; for example, we may know what documents the user has viewed in the past. We use such information to define a prior on the query model, denoted by φ0. After we observe the first query Q1, we can update the query model based on the newly observed data Q1. The updated query model φ1 can then be used for ranking documents in response to Q1. As the user views some documents, the displayed summary text of these documents, C1 (i.e., the clicked summaries), serves as new data for us to further update the query model, yielding φ′1. As we obtain the second query Q2 from the user, we update φ′1 to obtain a new model φ2. In general, we may repeat such an updating process to iteratively update the query model.

Clearly, there are two types of updating: (1) updating based on a new query Qi, and (2) updating based on a new clicked summary Ci. In both cases, we treat the current model as the prior of the context query model and treat the newly observed query or clicked summary as observed data. Thus we have the following updating equations:

p(w|φi) = (c(w, Qi) + µi p(w|φ′i−1)) / (|Qi| + µi)
p(w|φ′i) = (c(w, Ci) + νi p(w|φi)) / (|Ci| + νi)

where µi is the equivalent sample size for the prior when updating the model based on a query, and νi is the equivalent sample size for the prior when updating the model based on a clicked summary. If we set µi = 0 (or νi = 0), we essentially ignore the prior model and start a completely new query model based on the query Qi (or the clicked summary Ci). On the other hand, if we set µi = +∞ (or νi = +∞), we essentially ignore the observed query (or clicked summary) and do not update our model; the model then remains the same as if we had not observed any new text evidence. In general, the parameters µi and νi may have different values for different i.
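The updating step just described can be sketched as a single function applied alternately to queries and clicked summaries. This is a minimal sketch under our own assumptions: whitespace tokenization, dict-based language models, and a hypothetical example session; none of these details are prescribed by the paper.

```python
from collections import Counter

def bayes_update(prior_lm, text, sample_size):
    """One online Bayesian updating step (Eq. 1): posterior mean of a
    multinomial with a Dirichlet prior centered on `prior_lm`, whose
    strength `sample_size` plays the role of mu_i for a query or nu_i
    for a clicked summary."""
    tokens = text.split()
    counts = Counter(tokens)
    vocab = set(counts) | set(prior_lm)
    return {w: (counts.get(w, 0) + sample_size * prior_lm.get(w, 0.0))
               / (len(tokens) + sample_size)
            for w in vocab}

# Hypothetical session: phi_0 -> phi_1 (after query Q1) -> phi'_1 (after clicked summary C1).
phi0 = {"java": 0.5, "coffee": 0.5}                 # prior, e.g. from past user behavior
phi1 = bayes_update(phi0, "java applet", sample_size=2.0)                    # update on Q1
phi1p = bayes_update(phi1, "java programming tutorial", sample_size=15.0)    # update on C1
```

Setting `sample_size` to 0 starts a fresh model from the new text alone, while a very large value effectively ignores the new text, matching the two limiting cases discussed above.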
For example, at the very beginning we may have a very sparse query history, so we could use a smaller µi; later, as the query history becomes richer, we can consider using a larger µi. In our experiments, however, unless otherwise stated, we set them to the same constants, i.e., ∀i, j: µi = µj and νi = νj.

Note that we can take either p(w|φi) or p(w|φ′i) as our context query model for ranking documents. This means that we do not have to wait until the user enters a new query to initiate a new round of retrieval; instead, as soon as we collect a clicked summary Ci, we can update the query model and use p(w|φ′i) to immediately rerank any documents that the user has not yet seen. To score documents after seeing query Qk, we use p(w|φk), i.e., p(w|θk) = p(w|φk).

3.5 Batch Bayesian updating (BatchUp)

If we set the equivalent sample size parameters to fixed constants, the OnlineUp algorithm introduces a decaying factor: repeated interpolation causes the early data to have a low weight. This may be appropriate for the query history, as it is reasonable to believe that the user becomes better and better at query formulation over time, but it is not necessarily appropriate for the clickthrough information, especially because we use the displayed summary rather than the actual content of a clicked document. One way to avoid applying a decaying interpolation to the clickthrough data is to run OnlineUp only on the query history Q = (Q1, ..., Qi−1), but not on the clickthrough data C. We buffer all the clickthrough data together and use the whole chunk of clickthrough data to update the model generated by running OnlineUp on the previous queries. The updating equations are as follows:

p(w|φi) = (c(w, Qi) + µi p(w|φi−1)) / (|Qi| + µi)
p(w|ψi) = (Σ_{j=1}^{i−1} c(w, Cj) + νi p(w|φi)) / (Σ_{j=1}^{i−1} |Cj| + νi)

where µi has the same interpretation as in OnlineUp, but νi now indicates to what extent we want to trust the clicked summaries. As in OnlineUp, we set all µi's and νi's to the same value. To rank documents after seeing the current query Qk, we use p(w|θk) = p(w|ψk).

4. DATA COLLECTION

In order to quantitatively evaluate our models, we need a data set that includes not only a text database and test topics, but also query history and clickthrough history for each topic. Since no such data set was available to us, we had to create one. There are two choices. One is to extract topics and any associated query history and clickthrough history for each topic from the log of a retrieval system (e.g., a search engine); the problem is that we would have no relevance judgments on such data. The other choice is to use a TREC data set, which has a text database, topic descriptions, and a relevance judgment file; unfortunately, it has no query history or clickthrough history data. We decided to augment a TREC data set by collecting query history and clickthrough history data.

We select the TREC AP88, AP89 and AP90 data as our text database, because the AP data has been used in several TREC tasks and has relatively complete judgments. There are altogether 242,918 news articles, and the average document length is 416 words. Most articles have titles; if not, we select the first sentence of the text as the title. For preprocessing, we only do case folding; we do not do stopword removal or stemming. We select 30 relatively difficult topics from TREC topics 1-150, namely the 30 topics with the worst average precision performance among TREC topics 1-150 according to some baseline experiments using the KL-divergence model with Bayesian prior smoothing [20]. The reason why we select difficult topics is that the user then has to have several interactions with the retrieval system in order to get satisfactory results, so that we can expect to collect a relatively richer query
history and clickthrough history data from the user.\nIn real applications, we may also expect our models to be most useful for such difficult topics, so our data collection strategy reflects the real world applications well.\nWe index the TREC AP data set and set up a search engine and web interface for TREC AP news articles.\nWe use 3 subjects to do experiments to collect query history and clickthrough history data.\nEach subject is assigned 10 topics and given the topic descriptions provided by TREC.\nFor each topic, the first query is the title of the topic given in the original TREC topic description.\nAfter the subject submits the query, the search engine will do retrieval and return a ranked list of search results to the subject.\nThe subject will browse the results and maybe click one or more results to browse the full text of article(s).\nThe subject may also modify the query to do another search.\nFor each topic, the subject composes at least 4 queries.\nIn our experiment, only the first 4 queries for each topic are used.\nThe user needs to select the topic number from a selection menu before submitting the query to the search engine so that we can easily detect the session boundary, which is not the focus of our study.\nWe use a relational database to store user interactions, including the submitted queries and clicked documents.\nFor each query, we store the query terms and the associated result pages.\nAnd for each clicked document, we store the summary as shown on the search result page.\nThe summary of the article is query dependent and is computed online using fixed-length passage retrieval (KL divergence model with Bayesian prior smoothing).\nAmong 120 (4 for each of 30 topics) queries which we study in the experiment, the average query length is 3.71 words.\nAltogether there are 91 documents clicked to view.\nSo on average, there are around 3 clicks per topic.\nThe average length of clicked summary FixInt BayesInt OnlineUp BatchUp Query (\u03b1 = 
0.1, \u03b2 = 1.0) (\u00b5 = 0.2, \u03bd = 5.0) (\u00b5 = 5.0, \u03bd = 15.0) (\u00b5 = 2.0, \u03bd = 15.0) MAP pr@20docs MAP pr@20docs MAP pr@20docs MAP pr@20docs q1 0.0095 0.0317 0.0095 0.0317 0.0095 0.0317 0.0095 0.0317 q2 0.0312 0.1150 0.0312 0.1150 0.0312 0.1150 0.0312 0.1150 q2 + HQ + HC 0.0324 0.1117 0.0345 0.1117 0.0215 0.0733 0.0342 0.1100 Improve.\n3.8% -2.9% 10.6% -2.9% -31.1% -36.3% 9.6% -4.3% q3 0.0421 0.1483 0.0421 0.1483 0.0421 0.1483 0.0421 0.1483 q3 + HQ + HC 0.0726 0.1967 0.0816 0.2067 0.0706 0.1783 0.0810 0.2067 Improve 72.4% 32.6% 93.8% 39.4% 67.7% 20.2% 92.4% 39.4% q4 0.0536 0.1933 0.0536 0.1933 0.0536 0.1933 0.0536 0.1933 q4 + HQ + HC 0.0891 0.2233 0.0955 0.2317 0.0792 0.2067 0.0950 0.2250 Improve 66.2% 15.5% 78.2% 19.9% 47.8% 6.9% 77.2% 16.4% Table 1: Effect of using query history and clickthrough data for document ranking.\nis 34.4 words.\nAmong 91 clicked documents, 29 documents are judged relevant according to TREC judgment file.\nThis data set is publicly available 1 .\n5.\nEXPERIMENTS 5.1 Experiment design Our major hypothesis is that using search context (i.e., query history and clickthrough information) can help improve search accuracy.\nIn particular, the search context can provide extra information to help us estimate a better query model than using just the current query.\nSo most of our experiments involve comparing the retrieval performance using the current query only (thus ignoring any context) with that using the current query as well as the search context.\nSince we collected four versions of queries for each topic, we make such comparisons for each version of queries.\nWe use two performance measures: (1) Mean Average Precision (MAP): This is the standard non-interpolated average precision and serves as a good measure of the overall ranking accuracy.\n(2) Precision at 20 documents (pr@20docs): This measure does not average well, but it is more meaningful than MAP and reflects the utility for users who only read the top 20 
documents. In all cases, the reported figure is the average over all 30 topics.

We evaluate the four models for exploiting search context (i.e., FixInt, BayesInt, OnlineUp, and BatchUp). Each model has precisely two parameters (α and β for FixInt; µ and ν for the others). Note that µ and ν may need to be interpreted differently for different methods. We vary these parameters and identify the optimal performance for each method. We also vary the parameters to study the sensitivity of our algorithms to their settings.

5.2 Result analysis

5.2.1 Overall effect of search context

We compare the optimal performances of the four models with those using the current query only in Table 1. A row labeled qi is the baseline performance, and a row labeled qi + HQ + HC is the performance of using the search context. We can make several observations from this table:

1. Comparing the baseline performances indicates that on average the reformulated queries are better than the previous queries, with the performance of q4 being the best. Users generally formulate better and better queries.

2. Using search context generally has a positive effect, especially when the context is rich. This can be seen from the fact that the improvement for q4 and q3 is generally more substantial than for q2. Actually, in many cases with q2, using the context may hurt the performance, probably because the history at that point is sparse. When the search context is rich, the performance improvement can be quite substantial. For example, BatchUp achieves 92.4% improvement in mean average precision over q3 and 77.2% improvement over q4. (The generally low precisions also make the relative improvement deceptively high, though.)

3. Among the four models using search context, the performances of FixInt and OnlineUp are clearly worse than those of BayesInt and BatchUp. Since BayesInt performs better than FixInt, and the main difference between them is that the former uses an adaptive interpolation coefficient, the results suggest that using an adaptive coefficient is quite beneficial and that a Bayesian-style interpolation makes sense. The main difference between OnlineUp and BatchUp is that OnlineUp uses decaying coefficients to combine the multiple clicked summaries, while BatchUp simply concatenates all clicked summaries. The fact that BatchUp is consistently better than OnlineUp therefore indicates that the weights for combining the clicked summaries should not be decaying. While OnlineUp is theoretically appealing, its performance is inferior to BayesInt and BatchUp, likely because of the decaying coefficient. Overall, BatchUp appears to be the best method as we vary the parameter settings.

¹ http://sifaka.cs.uiuc.edu/ir/ucair/QCHistory.zip

We have two different kinds of search context: query history and clickthrough data. We now look into the contribution of each kind of context.

5.2.2 Using query history only

In each of the four models, we can turn off the clickthrough history data by setting the parameters appropriately. This allows us to evaluate the effect of using query history alone. We use the same parameter settings for the query history as in Table 1. The results are shown in Table 2. Here we see that in general, the benefit of using query history is very limited, with mixed results. This differs from what is reported in a previous study [15], where using query history is consistently helpful. Another observation is that the context runs perform poorly at q2, but generally perform (slightly) better than the baselines for q3 and q4. This is again likely because at the beginning the initial query, which is the title of the original TREC topic description, may not be a good query; indeed, on average, the performances of these first-generation queries are clearly poorer than those of all the user-formulated queries in later generations. Yet another
observation is that when using query history only, the BayesInt model appears to be better than the other models. Since the clickthrough data is ignored, OnlineUp and BatchUp are essentially the same algorithm; the displayed results thus reflect the variation caused by the parameter µ. A smaller setting of 2.0 is seen to be better than a larger value of 5.0.

                FixInt            BayesInt          OnlineUp           BatchUp
Query           (α = 0.1, β = 0)  (µ = 0.2, ν = 0)  (µ = 5.0, ν = +∞)  (µ = 2.0, ν = +∞)
                MAP     pr@20docs MAP     pr@20docs MAP     pr@20docs  MAP     pr@20docs
q2              0.0312  0.1150    0.0312  0.1150    0.0312  0.1150     0.0312  0.1150
q2 + HQ         0.0097  0.0317    0.0311  0.1200    0.0213  0.0783     0.0287  0.0967
Improve.        -68.9%  -72.4%    -0.3%   4.3%      -31.7%  -31.9%     -8.0%   -15.9%
q3              0.0421  0.1483    0.0421  0.1483    0.0421  0.1483     0.0421  0.1483
q3 + HQ         0.0261  0.0917    0.0451  0.1517    0.0444  0.1333     0.0455  0.1450
Improve.        -38.2%  -38.2%    7.1%    2.3%      5.5%    -10.1%     8.1%    -2.2%
q4              0.0536  0.1933    0.0536  0.1933    0.0536  0.1933     0.0536  0.1933
q4 + HQ         0.0428  0.1467    0.0537  0.1917    0.0550  0.1733     0.0552  0.1917
Improve.        -20.1%  -24.1%    0.2%    -0.8%     3.0%    -10.3%     3.0%    -0.8%

Table 2: Effect of using query history only for document ranking.

µ             0       0.5     1       2       3       4       5       6       7       8       9
q2 + HQ MAP   0.0312  0.0313  0.0308  0.0287  0.0257  0.0231  0.0213  0.0194  0.0183  0.0182  0.0164
q3 + HQ MAP   0.0421  0.0442  0.0441  0.0455  0.0457  0.0458  0.0444  0.0439  0.0430  0.0390  0.0335
q4 + HQ MAP   0.0536  0.0546  0.0547  0.0552  0.0544  0.0548  0.0550  0.0541  0.0534  0.0525  0.0513

Table 3: Average precision of BatchUp using query history only.

A more complete picture of the influence of the setting of µ can be seen from Table 3, where we show the performance figures for a wider range of values of µ. The value of µ can be interpreted as how many words the query history is regarded to be worth. A larger value thus puts more weight on the history, and is seen to hurt the performance more when the history information is not rich. Thus while for q4 the best performance tends to be
achieved for µ ∈ [2, 5], only when µ = 0.5 do we see some small benefit for q2. As we would expect, an excessively large µ hurts the performance in general, but q2 is hurt the most while q4 is barely hurt, indicating that as we accumulate more and more query history information, we can put more and more weight on the history. This also suggests that a better strategy would probably be to dynamically adjust the parameters according to how much history information we have. The mixed query history results suggest that the positive effect of using implicit feedback information may have largely come from the use of the clickthrough history, which is indeed true, as we discuss in the next subsection.

5.2.3 Using clickthrough history only

We now turn off the query history and only use the clicked summaries plus the current query. The results are shown in Table 4. We see that the benefit of using clickthrough information is much more significant than that of using query history. We see an overall positive effect, often with significant improvement over the baseline. It is also clear that the richer the context data is, the more improvement using clicked summaries can achieve. Other than some occasional degradation of precision at 20 documents, the improvement is fairly consistent and often quite substantial. These results show that the clicked summary text is in general quite useful for inferring a user's information need. Intuitively, using the summary text rather than the actual content of the document makes more sense, as it is quite possible that the document behind a seemingly relevant summary is actually non-relevant.

29 of the 91 clicked documents are relevant. Updating the query model based on such summaries brings up the ranks of these relevant documents, causing performance improvement. However, such improvement is not really beneficial for the user, as the user has already seen these relevant documents. To see how much
improvement we have achieved on the ranks of the unseen relevant documents, we exclude these 29 relevant documents from our judgment file and recompute the performance of BayesInt and the baseline using the new judgment file. The results are shown in Table 5. Note that the performance of the baseline method is lower due to the removal of the 29 relevant documents, which would generally have been ranked high in the results. From Table 5, we see clearly that using clicked summaries also helps significantly improve the ranks of unseen relevant documents.

Query      BayesInt (µ = 0, ν = 5.0)
           MAP     pr@20docs
q2         0.0263  0.100
q2 + HC    0.0314  0.100
Improve.   19.4%   0%
q3         0.0331  0.125
q3 + HC    0.0661  0.178
Improve.   99.7%   42.4%
q4         0.0442  0.165
q4 + HC    0.0739  0.188
Improve.   67.2%   13.9%

Table 5: BayesInt evaluated on unseen relevant documents.

One remaining question is whether the clickthrough data is still helpful if none of the clicked documents is relevant. To answer this question, we took out the 29 relevant summaries from our clickthrough history data HC to obtain a smaller set of clicked summaries HC′, and re-evaluated the performance of the BayesInt method using HC′ with the same parameter settings as in Table 4. The results are shown in Table 6. We see that although the improvement is not as substantial as in Table 4, the average precision is improved across all generations of queries. These results should be interpreted as very encouraging, as they are based on only 62 non-relevant clickthroughs. In reality, a user would more likely click some relevant summaries, which would help bring up more relevant documents, as we have seen in Table 4 and Table 5.

                FixInt            BayesInt          OnlineUp                             BatchUp
Query           (α = 0.1, β = 1)  (µ = 0, ν = 5.0)  (µk = 5.0, ν = 15, ∀i < k: µi = +∞)  (µ = 0, ν = 15)
                MAP     pr@20docs MAP     pr@20docs MAP     pr@20docs                    MAP     pr@20docs
q2              0.0312  0.1150    0.0312  0.1150    0.0312  0.1150                       0.0312  0.1150
q2 + HC         0.0324  0.1117    0.0338  0.1133    0.0358  0.1300                       0.0344  0.1167
Improve.        3.8%    -2.9%     8.3%    -1.5%     14.7%   13.0%                        10.3%   1.5%
q3              0.0421  0.1483    0.0421  0.1483    0.0421  0.1483                       0.0420  0.1483
q3 + HC         0.0726  0.1967    0.0766  0.2033    0.0622  0.1767                       0.0513  0.1650
Improve.        72.4%   32.6%     81.9%   37.1%     47.7%   19.2%                        21.9%   11.3%
q4              0.0536  0.1930    0.0536  0.1930    0.0536  0.1930                       0.0536  0.1930
q4 + HC         0.0891  0.2233    0.0925  0.2283    0.0772  0.2217                       0.0623  0.2050
Improve.        66.2%   15.5%     72.6%   18.1%     44.0%   14.7%                        16.2%   6.1%

Table 4: Effect of using clickthrough data only for document ranking.

Query      BayesInt (µ = 0, ν = 5.0)
           MAP     pr@20docs
q2         0.0312  0.1150
q2 + HC′   0.0313  0.0950
Improve.   0.3%    -17.4%
q3         0.0421  0.1483
q3 + HC′   0.0521  0.1820
Improve.   23.8%   23.0%
q4         0.0536  0.1930
q4 + HC′   0.0620  0.1850
Improve.   15.7%   -4.1%

Table 6: Effect of using only non-relevant clickthrough data.

5.2.4 Additive effect of context information

By comparing the results across Table 1, Table 2 and Table 4, we can see that the benefits of the query history information and the clickthrough information are mostly additive, i.e., combining them achieves better performance than using either alone, but most of the improvement clearly comes from the clickthrough information. In Table 7, we show this effect for the BatchUp method.

5.2.5 Parameter sensitivity

All four models have two parameters to control the relative weights of HQ, HC and Qk, though the parameterization differs from model to model. In this subsection, we study the parameter sensitivity of BatchUp, which appears to perform better than the others. BatchUp has two parameters, µ and ν. We first look at µ. When µ is set to 0, the query history is not used at all, and we essentially just use the clickthrough data combined with the current query. If we increase µ, we gradually incorporate more information from the previous queries. In Table 8, we show how the average precision of BatchUp changes as we vary µ with ν fixed to 15.0,
where the best performance of BatchUp is achieved. We see that the performance is mostly insensitive to changes of µ for q3 and q4, but decreases as µ increases for q2. The pattern is similar when we set ν to other values. In addition to the fact that q1 is generally worse than q2, q3, and q4, another possible reason why the sensitivity is lower for q3 and q4 may be that we generally have more clickthrough data available for q3 and q4 than for q2, and the dominating influence of the clickthrough data makes the small differences caused by µ less visible for q3 and q4. The best performance is generally achieved when µ is around 2.0, which means that the past query information is as useful as about 2 words in the current query. Except for q2, there is clearly some tradeoff between the current query and the previous queries, and using a balanced combination of them achieves better performance than using either of them alone.

Query          MAP     pr@20docs
q2             0.0312  0.1150
q2 + HQ        0.0287  0.0967
Improve.       -8.0%   -15.9%
q2 + HC        0.0344  0.1167
Improve.       10.3%   1.5%
q2 + HQ + HC   0.0342  0.1100
Improve.       9.6%    -4.3%
q3             0.0421  0.1483
q3 + HQ        0.0455  0.1450
Improve.       8.1%    -2.2%
q3 + HC        0.0513  0.1650
Improve.       21.9%   11.3%
q3 + HQ + HC   0.0810  0.2067
Improve.       92.4%   39.4%
q4             0.0536  0.1930
q4 + HQ        0.0552  0.1917
Improve.       3.0%    -0.8%
q4 + HC        0.0623  0.2050
Improve.       16.2%   6.1%
q4 + HQ + HC   0.0950  0.2250
Improve.       77.2%   16.4%

Table 7: Additive benefit of context information.

We now turn to the other parameter, ν. When ν is set to 0, we only use the clickthrough data; when ν is set to +∞, we only use the query history and the current query. With µ set to 2.0, where the best performance of BatchUp is achieved, we vary ν and show the results in Table 9. We see that the performance is also not very sensitive when ν ≤ 30, with the best performance often achieved at ν = 15. This means that the combined information of the query history and the current query is as useful as about 15 words in the clickthrough data, indicating that the clickthrough information is highly valuable. Overall, these sensitivity results show that BatchUp not only performs better than the other methods, but is also quite robust.

µ                      0       1       2       3       4       5       6       7       8       9       10
q2 + HQ + HC  MAP      0.0386  0.0366  0.0342  0.0315  0.0290  0.0267  0.0250  0.0236  0.0229  0.0223  0.0219
              pr@20    0.1333  0.1233  0.1100  0.1033  0.1017  0.0933  0.0833  0.0767  0.0783  0.0767  0.0750
q3 + HQ + HC  MAP      0.0805  0.0807  0.0811  0.0814  0.0813  0.0808  0.0804  0.0799  0.0795  0.0790  0.0788
              pr@20    0.2100  0.2150  0.2067  0.2050  0.2067  0.2050  0.2067  0.2067  0.2050  0.2017  0.2000
q4 + HQ + HC  MAP      0.0929  0.0947  0.0950  0.0940  0.0941  0.0940  0.0942  0.0937  0.0936  0.0932  0.0929
              pr@20    0.2183  0.2217  0.2250  0.2217  0.2233  0.2267  0.2283  0.2333  0.2333  0.2350  0.2333

Table 8: Sensitivity of µ in BatchUp.

ν                      0       1       2       5       10      15      30      100     300     500
q2 + HQ + HC  MAP      0.0278  0.0287  0.0296  0.0315  0.0334  0.0342  0.0328  0.0311  0.0296  0.0290
              pr@20    0.0933  0.0950  0.0950  0.1000  0.1050  0.1100  0.1150  0.0983  0.0967  0.0967
q3 + HQ + HC  MAP      0.0728  0.0739  0.0751  0.0786  0.0809  0.0811  0.0770  0.0634  0.0511  0.0491
              pr@20    0.1917  0.1933  0.1950  0.2100  0.2000  0.2067  0.2017  0.1783  0.1600  0.1550
q4 + HQ + HC  MAP      0.0895  0.0903  0.0914  0.0932  0.0944  0.0950  0.0919  0.0761  0.0664  0.0625
              pr@20    0.2267  0.2233  0.2283  0.2317  0.2233  0.2250  0.2283  0.2200  0.2067  0.2033

Table 9: Sensitivity of ν in BatchUp.

6. CONCLUSIONS AND FUTURE WORK

In this paper, we have explored how to exploit implicit feedback information, including query history and clickthrough history within the same search session, to improve information retrieval performance. Using the KL-divergence retrieval model as the basis, we proposed and studied four statistical language models for context-sensitive information retrieval: FixInt, BayesInt, OnlineUp and BatchUp. We use the TREC AP data to create a test set for evaluating implicit feedback models. Experiment results show
that using implicit feedback, especially clickthrough history, can substantially improve retrieval performance without requiring any additional user effort.

The current work can be extended in several ways. First, we have only explored some very simple language models for incorporating implicit feedback information. It would be interesting to develop more sophisticated models to better exploit query history and clickthrough history. For example, we may treat a clicked summary differently depending on whether the current query is a generalization or refinement of the previous query. Second, the proposed models can be implemented in any practical system. We are currently developing a client-side personalized search agent, which will incorporate some of the proposed algorithms. We will also conduct a user study to evaluate the effectiveness of these models in real web search. Finally, we should further study a general retrieval framework for sequential decision making in interactive information retrieval and study how to optimize some of the parameters in the context-sensitive retrieval models.

7. ACKNOWLEDGMENTS

This material is based in part upon work supported by the National Science Foundation under award numbers IIS-0347933 and IIS-0428472. We thank the anonymous reviewers for their useful comments.

8. REFERENCES

[1] E. Adar and D. Karger. Haystack: Per-user information environments. In Proceedings of CIKM 1999, 1999.
[2] J. Allan et al. Challenges in information retrieval and language modeling. Report of a workshop at the University of Massachusetts Amherst, 2002.
[3] K. Bharat. SearchPad: Explicit capture of search context to support web search. In Proceedings of WWW 2000, 2000.
[4] W. B. Croft, S. Cronen-Townsend, and V. Lavrenko. Relevance feedback and personalization: A language modeling perspective. In Proceedings of the Second DELOS Workshop: Personalisation and Recommender Systems in Digital Libraries, 2001.
[5] H. Cui, J.-R. Wen, J.-Y. Nie, and W.-Y. Ma. Probabilistic query expansion using query logs. In Proceedings of WWW 2002, 2002.
[6] S. T. Dumais, E. Cutrell, R. Sarin, and E. Horvitz. Implicit queries (IQ) for contextualized search (demo description). In Proceedings of SIGIR 2004, page 594, 2004.
[7] L. Finkelstein, E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin. Placing search in context: The concept revisited. In Proceedings of WWW 2001, 2001.
[8] C. Huang, L. Chien, and Y. Oyang. Query session based term suggestion for interactive web search. In Proceedings of WWW 2001, 2001.
[9] X. Huang, F. Peng, A. An, and D. Schuurmans. Dynamic web log session identification with statistical language models. Journal of the American Society for Information Science and Technology, 55(14):1290-1303, 2004.
[10] G. Jeh and J. Widom. Scaling personalized web search. In Proceedings of WWW 2003, 2003.
[11] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of SIGKDD 2002, 2002.
[12] D. Kelly and N. J. Belkin. Display time as implicit feedback: Understanding task effects. In Proceedings of SIGIR 2004, 2004.
[13] D. Kelly and J. Teevan. Implicit feedback for inferring user preference. SIGIR Forum, 32(2), 2003.
[14] J. Rocchio. Relevance feedback in information retrieval. In The SMART Retrieval System: Experiments in Automatic Document Processing, pages 313-323, Kansas City, MO, 1971. Prentice-Hall.
[15] X. Shen and C. Zhai. Exploiting query history for document ranking in interactive information retrieval (poster). In Proceedings of SIGIR 2003, 2003.
[16] S. Sriram, X. Shen, and C. Zhai. A session-based search engine (poster). In Proceedings of SIGIR 2004, 2004.
[17] K. Sugiyama, K. Hatano, and M. Yoshikawa. Adaptive web search based on user profile constructed without any effort from users. In Proceedings of WWW 2004, 2004.
[18] R. W. White, J. M. Jose, C. J. van Rijsbergen, and I.
Ruthven. A simulated study of implicit feedback models. In Proceedings of ECIR 2004, pages 311-326, 2004.
[19] C. Zhai and J. Lafferty. Model-based feedback in the KL-divergence retrieval model. In Proceedings of CIKM 2001, 2001.
[20] C. Zhai and J. Lafferty. A study of smoothing methods for language models applied to ad-hoc information retrieval. In Proceedings of SIGIR 2001, 2001.","lvl-2":"Context-Sensitive Information Retrieval Using Implicit Feedback

ABSTRACT

A major limitation of most existing retrieval models and systems is that the retrieval decision is made based solely on the query and document collection; information about the actual user and search context is largely ignored. In this paper, we study how to exploit implicit feedback information, including previous queries and clickthrough information, to improve retrieval accuracy in an interactive information retrieval setting. We propose several context-sensitive retrieval algorithms based on statistical language models to combine the preceding queries and clicked document summaries with the current query for better ranking of documents. We use the TREC AP data to create a test collection with search context information, and quantitatively evaluate our models using this test set. Experiment results show that using implicit feedback, especially the clicked document summaries, can improve retrieval performance substantially.

1. INTRODUCTION

In most existing information retrieval models, the retrieval problem is treated as involving one single query and a set of documents. From a single query, however, the retrieval system can only have a very limited clue about the user's information need. An optimal retrieval system thus should try to exploit as much additional context information as possible to improve retrieval accuracy, whenever it is available. Indeed, context-sensitive retrieval has been identified as a major challenge in information
retrieval research [2]. There are many kinds of context that we can exploit. Relevance feedback [14] can be considered as a way for a user to provide more context for search and is known to be effective for improving retrieval accuracy. However, relevance feedback requires that a user explicitly provide feedback information, such as specifying the category of the information need or marking a subset of retrieved documents as relevant. Since it forces the user to engage in additional activities while the benefits are not always obvious, a user is often reluctant to provide such feedback information. Thus the effectiveness of relevance feedback may be limited in real applications. For this reason, implicit feedback has attracted much attention recently [11, 13, 18, 17, 12]. In general, the retrieval results using the user's initial query may not be satisfactory; often, the user would need to revise the query to improve the retrieval/ranking accuracy [8]. For a complex or difficult information need, the user may need to modify his/her query and view ranked documents over many iterations before the information need is completely satisfied. In such an interactive retrieval scenario, the information naturally available to the retrieval system is more than just the current user query and the document collection: in general, all the interaction history can be available to the retrieval system, including past queries, information about which documents the user has chosen to view, and even how a user has read a document (e.g., which part of a document the user spends a lot of time reading). We define implicit feedback broadly as exploiting all such naturally available interaction history to improve retrieval results. A major advantage of implicit feedback is that we can improve the retrieval accuracy without requiring any user effort. For example, if the current query is "java", without knowing any extra information, it would be
impossible to know whether it is intended to mean the Java programming language or the Java island in Indonesia. As a result, the retrieved documents will likely include both kinds of documents: some may be about the programming language and some may be about the island. However, any particular user is unlikely to be searching for both types of documents. Such an ambiguity can be resolved by exploiting history information. For example, if we know that the previous query from the user is "cgi programming", it would strongly suggest that it is the programming language that the user is searching for. Implicit feedback was studied in several previous works. In [11], Joachims explored how to capture and exploit clickthrough information and demonstrated that such implicit feedback information can indeed improve the search accuracy for a group of people. In [18], a simulation study of the effectiveness of different implicit feedback algorithms was conducted, and several retrieval models designed for exploiting clickthrough information were proposed and evaluated. In [17], some existing retrieval algorithms are adapted to improve search results based on the browsing history of a user. Other related work on using context includes personalized search [1, 3, 4, 7, 10], query log analysis [5], context factors [12], and implicit queries [6]. While the previous work has mostly focused on using clickthrough information, in this paper, we use both clickthrough information and preceding queries, and focus on developing new context-sensitive language models for retrieval. Specifically, we develop models for using implicit feedback information, such as the query and clickthrough history of the current search session, to improve retrieval accuracy. We use the KL-divergence retrieval model [19] as the basis and propose to treat context-sensitive retrieval as estimating a query language model based on the current query and any search context information. We propose several
statistical language models to incorporate query and clickthrough history into the KL-divergence model. One challenge in studying implicit feedback models is that there does not exist any suitable test collection for evaluation. We thus use the TREC AP data to create a test collection with implicit feedback information, which can be used to quantitatively evaluate implicit feedback models. To the best of our knowledge, this is the first test set for implicit feedback. We evaluate the proposed models using this data set. The experimental results show that using implicit feedback information, especially the clickthrough data, can substantially improve retrieval performance without requiring additional effort from the user. The remaining sections are organized as follows. In Section 2, we attempt to define the problem of implicit feedback and introduce some terms that we will use later. In Section 3, we propose several implicit feedback models based on statistical language models. In Section 4, we describe how we create the data set for implicit feedback experiments. In Section 5, we evaluate different implicit feedback models on the created data set. Section 6 presents our conclusions and future work.

2. PROBLEM DEFINITION

There are two kinds of context information we can use for implicit feedback. One is short-term context, the immediate surrounding information that sheds light on a user's current information need in a single session. A session can be considered as a period consisting of all interactions for the same information need. The category of a user's information need (e.g., kids or sports), previous queries, and recently viewed documents are all examples of short-term context. Such information is most directly related to the current information need of the user and thus can be expected to be most useful for improving the current search. In general, short-term context is most useful for improving search in the current session, but may
not be so helpful for search activities in a different session. The other kind of context is long-term context, which refers to information such as a user's education level and general interests, accumulated query history, and past clickthrough information; such information is generally stable and is accumulated over a long time. Long-term context can be applicable to all sessions, but may not be as effective as short-term context in improving search accuracy for a particular session. In this paper, we focus on short-term context, though some of our methods can also be used to naturally incorporate some long-term context. In a single search session, a user may interact with the search system several times. During these interactions, the user would continuously modify the query. Therefore, for the current query Qk (except for the first query of a search session), there is a query history HQ = (Q1, ..., Qk−1) associated with it, which consists of the preceding queries given by the same user in the current session. Note that we assume the session boundaries are known in this paper. In practice, we need techniques to automatically discover session boundaries, which have been studied in [9, 16]. Traditionally, the retrieval system only uses the current query Qk to do retrieval. But the short-term query history clearly may provide useful clues about the user's current information need, as seen in the "java" example given in the previous section. Indeed, our previous work [15] has shown that the short-term query history is useful for improving retrieval accuracy. In addition to the query history, there may be other short-term context information available. For example, a user may frequently click on some documents to view them. We refer to the data associated with these actions as clickthrough history. The clickthrough data may include the title, summary, and perhaps also the content and location (e.g., the URL) of the
clicked document. Although it is not clear whether a viewed document is actually relevant to the user's information need, we may "safely" assume that the displayed summary/title information about the document is attractive to the user, and thus conveys information about the user's information need. If we concatenate all the displayed text information about a document (usually title and summary), we obtain a clicked summary Ci in each round of retrieval. In general, we may have a history of clicked summaries C1, ..., Ck−1. We will also exploit such clickthrough history HC = (C1, ..., Ck−1) to improve our search accuracy for the current query Qk. Previous work has also shown positive results using similar clickthrough information [11, 17]. Both query history and clickthrough history are implicit feedback information, which naturally exists in interactive information retrieval; thus no additional user effort is needed to collect them. In this paper, we study how to exploit such information (HQ and HC), develop models to incorporate the query history and clickthrough history into a retrieval ranking function, and quantitatively evaluate these models.

3. LANGUAGE MODELS FOR CONTEXT-SENSITIVE INFORMATION RETRIEVAL

Intuitively, the query history HQ and clickthrough history HC are both useful for improving search accuracy for the current query Qk. An important research question is how we can exploit such information effectively. We propose to use statistical language models to model a user's information need and develop four specific context-sensitive language models to incorporate context information into a basic retrieval model.

3.1 Basic retrieval model

We use the Kullback-Leibler (KL) divergence method [19] as our basic retrieval method. According to this model, the retrieval task involves computing a query language model θQ for a given query and a document language model θD for a document, and then computing their KL
divergence D(θQ || θD), which serves as the score of the document.\nOne advantage of this approach is that we can naturally incorporate the search context as additional evidence to improve our estimate of the query language model.\nFormally, let HQ = (Q1,..., Qk − 1) be the query history and the current query be Qk.\nLet HC = (C1,..., Ck − 1) be the clickthrough history.\nNote that Ci is the concatenation of all clicked documents' summaries in the i-th round of retrieval, since we may reasonably treat all these summaries equally.\nOur task is to estimate a context query model, which we denote by p(w|θk), based on the current query Qk, as well as the query history HQ and clickthrough history HC.\nWe now describe several different language models for exploiting HQ and HC to estimate p(w|θk).\nWe will use c(w, X) to denote the count of word w in text X, which could be either a query or a clicked document's summary or any other text.\nWe will use |X| to denote the length of text X, i.e., the total number of words in X.\n3.2 Fixed Coefficient Interpolation (FixInt)\nOur first idea is to summarize the query history HQ with a unigram language model p(w|HQ) and the clickthrough history HC with another unigram language model p(w|HC).\nThen we linearly interpolate these two history models to obtain the history model p(w|H).\nFinally, we interpolate the history model p(w|H) with the current query model p(w|Qk).\nThese models are defined as follows:\np(w|HQ) = (1/(k−1)) Σi=1..k−1 p(w|Qi), with p(w|Qi) = c(w,Qi)/|Qi|\np(w|HC) = (1/(k−1)) Σi=1..k−1 p(w|Ci), with p(w|Ci) = c(w,Ci)/|Ci|\np(w|H) = β p(w|HC) + (1−β) p(w|HQ)\np(w|θk) = α p(w|Qk) + (1−α) p(w|H)\nwhere β ∈ [0, 1] is a parameter to control the weight on each history model, and α ∈ [0, 1] is a parameter to control the weight on the current query versus the history information.\nIf we combine these equations, we see that\np(w|θk) = α p(w|Qk) + (1−α)β p(w|HC) + (1−α)(1−β) p(w|HQ)\nThat is, the estimated context query model is just a fixed coefficient interpolation of the three models p(w|Qk), p(w|HQ), and p(w|HC).\n3.3 Bayesian Interpolation (BayesInt)\nOne possible problem with the FixInt approach is that the coefficients, especially α, are fixed across
all the queries.\nBut intuitively, if our current query Qk is very long, we should trust the current query more, whereas if Qk has just one word, it may be beneficial to put more weight on the history.\nTo capture this intuition, we treat p(w|HQ) and p(w|HC) as Dirichlet priors and Qk as the observed data, and estimate a context query model using the Bayesian estimator.\nThe estimated model is given by\np(w|θk) = (c(w,Qk) + µ p(w|HQ) + ν p(w|HC)) / (|Qk| + µ + ν)\nwhere µ is the prior sample size for p(w|HQ) and ν is the prior sample size for p(w|HC).\nWe see that the only difference between BayesInt and FixInt is that the interpolation coefficients are now adaptive to the query length.\nIndeed, when viewing BayesInt as FixInt, we see that α = |Qk| / (|Qk| + µ + ν) and β = ν / (µ + ν); thus with fixed µ and ν, we will have a query-dependent α.\nLater we will show that such an adaptive α empirically performs better than a fixed α.\n3.4 Online Bayesian Updating (OnlineUp)\nBoth FixInt and BayesInt summarize the history information by averaging the unigram language models estimated from previous queries or clicked summaries.\nThis means that all previous queries are treated equally, and so are all clicked summaries.\nHowever, as the user interacts with the system and acquires more knowledge about the information in the collection, the reformulated queries will presumably become better and better.\nThus it appears reasonable to assign decaying weights to the previous queries so as to trust a recent query more than an earlier one.\nInterestingly, if we incrementally update our belief about the user's information need after seeing each query, we naturally obtain decaying weights on the previous queries.\nSince such an incremental online updating strategy can be used to exploit any evidence in an interactive retrieval system, we present it in a more general way.\nIn a typical retrieval system, the system responds to every new query entered by the user by presenting a ranked list of documents.\nIn order to rank
documents, the system must have some model of the user's information need.\nIn the KL-divergence retrieval model, this means that the system must compute a query model whenever a user enters a (new) query.\nA principled way of updating the query model is to use Bayesian estimation, which we discuss below.\n3.4.1 Bayesian updating\nWe first discuss how we apply Bayesian estimation to update a query model in general.\nLet p(w|θ) be our current query model and T be a new piece of text evidence observed (e.g., T can be a query or a clicked summary).\nTo update the query model based on T, we use θ to define a Dirichlet prior parameterized as\nDir(µT p(w1|θ),..., µT p(wN|θ))\nwhere µT is the equivalent sample size of the prior.\nWe use a Dirichlet prior because it is a conjugate prior for multinomial distributions.\nWith such a conjugate prior, the predictive distribution of the updated model θ' (or equivalently, the mean of the posterior distribution) is given by\np(w|θ') = (c(w,T) + µT p(w|θ)) / (|T| + µT)\nwhere c(w,T) is the count of w in T and |T| is the length of T. The parameter µT indicates our confidence in the prior, expressed in terms of an equivalent text sample comparable with T.
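This conjugate posterior-mean update can be sketched in a few lines of Python; this is only an illustration of the formula above, assuming simple dict-based unigram models (the helper name `bayesian_update` and the toy distributions are ours, not from the paper):

```python
from collections import Counter

def bayesian_update(prior_model, text_tokens, mu_t):
    """Update a unigram query model p(w|theta) with new text evidence T.

    prior_model: dict mapping word -> p(w|theta), the Dirichlet prior mean
    text_tokens: list of words in the new evidence T (a query or clicked summary)
    mu_t:        equivalent sample size of the prior (confidence in the old model)

    Returns the posterior-mean model p(w|theta'):
        p(w|theta') = (c(w,T) + mu_t * p(w|theta)) / (|T| + mu_t)
    """
    counts = Counter(text_tokens)
    length = len(text_tokens)
    vocab = set(prior_model) | set(counts)
    return {w: (counts[w] + mu_t * prior_model.get(w, 0.0)) / (length + mu_t)
            for w in vocab}

# With mu_t = 1, the prior counts as one extra word of evidence (toy example).
prior = {"java": 0.5, "applet": 0.5}
updated = bayesian_update(prior, ["java", "travel"], mu_t=1.0)
```

Note that the updated model remains a proper distribution (it sums to 1) whenever the prior does, since the numerators sum to |T| + µT.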
For example, µT = 1 indicates that the influence of the prior is equivalent to adding one extra word to T.\n3.4.2 Sequential query model updating\nWe now discuss how we can update our query model over time during an interactive retrieval process using Bayesian estimation.\nIn general, we assume that the retrieval system maintains a current query model θi at any moment.\nAs soon as we obtain some implicit feedback evidence in the form of a piece of text Ti, we update the query model.\nInitially, before we see any user query, we may already have some information about the user.\nFor example, we may have some information about what documents the user has viewed in the past.\nWe use such information to define a prior on the query model, which is denoted by θ0.\nAfter we observe the first query Q1, we can update the query model based on the new observed data Q1.\nThe updated query model θ1 can then be used for ranking documents in response to Q1.\nAs the user views some documents, the displayed summary text for such documents C1 (i.e., clicked summaries) can serve as new data for us to further update the query model to obtain θ'1.\nAs we obtain the second query Q2 from the user, we can update θ'1 to obtain a new model θ2.\nIn general, we may repeat such an updating process to iteratively update the query model.\nClearly, we see two types of updating: (1) updating based on a new query Qi; (2) updating based on a new clicked summary Ci.\nIn both cases, we can treat the current model as a prior of the context query model and treat the newly observed query or clicked summary as observed data.\nThus we have the following updating equations:\np(w|θi) = (c(w,Qi) + µi p(w|θ'i−1)) / (|Qi| + µi)\np(w|θ'i) = (c(w,Ci) + νi p(w|θi)) / (|Ci| + νi)\nwhere µi is the equivalent sample size for the prior when updating the model based on a query, while νi is the equivalent sample size for the prior when updating the model based on a clicked summary.\nIf we set µi = 0 (or νi = 0) we essentially ignore the prior model, and would thus start a completely new query model based on
the query Qi (or the clicked summary Ci).\nOn the other hand, if we set µi = +∞ (or νi = +∞) we essentially ignore the observed query (or the clicked summary) and do not update our model.\nThus the model remains the same as if we had not observed any new text evidence.\nIn general, the parameters µi and νi may have different values for different i. For example, at the very beginning, we may have a very sparse query history, and thus could use a smaller µi; later, as the query history becomes richer, we can consider using a larger µi.\nBut in our experiments, unless otherwise stated, we set them to the same constants, i.e., ∀i, j: µi = µj, νi = νj.\nNote that we can take either p(w|θi) or p(w|θ'i) as our context query model for ranking documents.\nThis suggests that we do not have to wait until a user enters a new query to initiate a new round of retrieval; instead, as soon as we collect a clicked summary Ci, we can update the query model and use p(w|θ'i) to immediately rerank any documents that the user has not yet seen.\nTo score documents after seeing query Qk, we use p(w|θk), i.e., we score a document by D(θk || θD).\n3.5 Batch Bayesian updating (BatchUp)\nIf we set the equivalent sample size parameters to fixed constants, the OnlineUp algorithm introduces a decaying factor: repeated interpolation causes the early data to have a low weight.\nThis may be appropriate for the query history, as it is reasonable to believe that the user becomes better and better at query formulation as time goes on, but it is not necessarily appropriate for the clickthrough information, especially because we use the displayed summary rather than the actual content of a clicked document.\nOne way to avoid applying a decaying interpolation to the clickthrough data is to do OnlineUp only for the query history Q = (Q1,..., Qi−1), but not for the clickthrough data C.\nWe first buffer all the clickthrough data together and use the whole chunk of clickthrough data to update the model generated
through running OnlineUp on previous queries.\nThe updating equations are as follows:\np(w|θi) = (c(w,Qi) + µi p(w|θi−1)) / (|Qi| + µi), for i = 1,..., k\np(w|θ'k) = (Σj=1..k−1 c(w,Cj) + νk p(w|θk)) / (Σj=1..k−1 |Cj| + νk)\nwhere µi has the same interpretation as in OnlineUp, but νk now indicates to what extent we want to trust the clicked summaries.\nAs in OnlineUp, we set all µi's and νi's to the same value.\nTo rank documents after seeing the current query Qk, we use p(w|θ'k).\n4.\nDATA COLLECTION\nIn order to quantitatively evaluate our models, we need a data set which includes not only a text database and testing topics, but also a query history and clickthrough history for each topic.\nSince no such data set was available to us, we had to create one.\nThere are two choices.\nOne is to extract topics and any associated query history and clickthrough history for each topic from the log of a retrieval system (e.g., a search engine).\nBut the problem is that we would have no relevance judgments on such data.\nThe other choice is to use a TREC data set, which has a text database, topic descriptions and a relevance judgment file.\nUnfortunately, TREC data sets have no query history and clickthrough history data.\nWe decided to augment a TREC data set by collecting query history and clickthrough history data.\nWe select the TREC AP88, AP89 and AP90 data as our text database, because the AP data has been used in several TREC tasks and has relatively complete judgments.\nThere are altogether 242918 news articles and the average document length is 416 words.\nMost articles have titles; if not, we select the first sentence of the text as the title.\nFor preprocessing, we only do case folding and do not do stopword removal or stemming.\nWe select 30 relatively difficult topics from TREC topics 1-150.\nThese 30 topics have the worst average precision performance among TREC topics 1-150 according to some baseline experiments using the KL-divergence model with Bayesian prior smoothing [20].\nThe reason why we select difficult topics is that the user then would have to have several interactions with the retrieval system in
order to get satisfactory results so that we can expect to collect a relatively richer query history and clickthrough history data from the user.\nIn real applications, we may also expect our models to be most useful for such difficult topics, so our data collection strategy reflects the real world applications well.\nWe index the TREC AP data set and set up a search engine and web interface for TREC AP news articles.\nWe use 3 subjects to do experiments to collect query history and clickthrough history data.\nEach subject is assigned 10 topics and given the topic descriptions provided by TREC.\nFor each topic, the first query is the title of the topic given in the original TREC topic description.\nAfter the subject submits the query, the search engine will do retrieval and return a ranked list of search results to the subject.\nThe subject will browse the results and maybe click one or more results to browse the full text of article (s).\nThe subject may also modify the query to do another search.\nFor each topic, the subject composes at least 4 queries.\nIn our experiment, only the first 4 queries for each topic are used.\nThe user needs to select the topic number from a selection menu before submitting the query to the search engine so that we can easily detect the session boundary, which is not the focus of our study.\nWe use a relational database to store user interactions, including the submitted queries and clicked documents.\nFor each query, we store the query terms and the associated result pages.\nAnd for each clicked document, we store the summary as shown on the search result page.\nThe summary of the article is query dependent and is computed online using fixed-length passage retrieval (KL divergence model with Bayesian prior smoothing).\nAmong 120 (4 for each of 30 topics) queries which we study in the experiment, the average query length is 3.71 words.\nAltogether there are 91 documents clicked to view.\nSo on average, there are around 3 clicks per 
topic.\nThe average length of a clicked summary is 34.4 words.\nTable 1: Effect of using query history and clickthrough data for document ranking.\nAmong the 91 clicked documents, 29 documents are judged relevant according to the TREC judgment file.\nThis data set is publicly available 1.\n5.\nEXPERIMENTS\n5.1 Experiment design\nOur major hypothesis is that using search context (i.e., query history and clickthrough information) can help improve search accuracy.\nIn particular, the search context can provide extra information to help us estimate a better query model than using just the current query.\nSo most of our experiments involve comparing the retrieval performance using the current query only (thus ignoring any context) with that using the current query as well as the search context.\nSince we collected four versions of queries for each topic, we make such comparisons for each version of queries.\nWe use two performance measures: (1) Mean Average Precision (MAP): this is the standard non-interpolated average precision and serves as a good measure of the overall ranking accuracy.\n(2) Precision at 20 documents (pr@20docs): this measure does not average well, but it is more meaningful than MAP and reflects the utility for users who only read the top 20 documents.\nIn all cases, the reported figure is the average over all of the 30 topics.\nWe evaluate the four models for exploiting search context (i.e., FixInt, BayesInt, OnlineUp, and BatchUp).\nEach model has precisely two parameters (α and β for FixInt; µ and ν for the others).\nNote that µ and ν may need to be interpreted differently for different methods.\nWe vary these parameters and identify the optimal performance for each method.\nWe also vary the parameters to study the sensitivity of our algorithms to the setting of the parameters.\n5.2 Result analysis\n5.2.1 Overall effect of search context\nWe compare the optimal performances of the four models with those using the current query only in Table 1.\nA row
labeled with qi is the baseline performance and a row labeled with qi + HQ + HC is the performance of using search context.\nWe can make several observations from this table:\n1.\nComparing the baseline performances indicates that on average reformulated queries are better than the previous queries with the performance of q4 being the best.\nUsers generally formulate better and better queries.\n2.\nUsing search context generally has positive effect, especially when the context is rich.\nThis can be seen from the fact that the\nimprovement for q4 and q3 is generally more substantial compared with q2.\nActually, in many cases with q2, using the context may hurt the performance, probably because the history at that point is sparse.\nWhen the search context is rich, the performance improvement can be quite substantial.\nFor example, BatchUp achieves 92.4% improvement in the mean average precision over q3 and 77.2% improvement over q4.\n(The generally low precisions also make the relative improvement deceptively high, though.)\n3.\nAmong the four models using search context, the performances of FixInt and OnlineUp are clearly worse than those of BayesInt and BatchUp.\nSince BayesInt performs better than FixInt and the\nmain difference between BayesInt and FixInt is that the former uses an adaptive coefficient for interpolation, the results suggest that using adaptive coefficient is quite beneficial and a Bayesian style interpolation makes sense.\nThe main difference between OnlineUp and BatchUp is that OnlineUp uses decaying coefficients to combine the multiple clicked summaries, while BatchUp simply concatenates all clicked summaries.\nTherefore the fact that BatchUp is consistently better than OnlineUp indicates that the weights for combining the clicked summaries indeed should not be decaying.\nWhile OnlineUp is theoretically appealing, its performance is inferior to BayesInt and BatchUp, likely because of the decaying coefficient.\nOverall, BatchUp appears to be the 
best method when we vary the parameter settings.\nWe have two different kinds of search context: query history and clickthrough data.\nWe now look into the contribution of each kind of context.\n5.2.2 Using query history only\nIn each of the four models, we can \"turn off\" the clickthrough history data by setting parameters appropriately.\nThis allows us to evaluate the effect of using query history alone.\nWe use the same parameter setting for query history as in Table 1.\nThe results are shown in Table 2.\nHere we see that in general, the benefit of using query history is very limited, with mixed results.\nThis is different from what is reported in a previous study [15], where using query history is consistently helpful.\nAnother observation is that the context runs perform poorly at q2, but generally perform (slightly) better than the baselines for q3 and q4.\nThis is again likely because the initial query, which is the title in the original TREC topic description, may not be a good query; indeed, on average, performances of these \"first-generation\" queries are clearly poorer than those of all other user-formulated queries in the later generations.\nYet another observation is that when using query history only, the BayesInt model appears to be better than the other models.\nSince the clickthrough data is ignored, OnlineUp and BatchUp are essentially the same algorithm.\nTable 2: Effect of using query history only for document ranking.\nTable 3: Average Precision of BatchUp using query history only\nThe displayed results thus reflect the variation caused by parameter µ.\nA smaller setting of µ = 2.0 performs better than a larger value of 5.0.\nA more complete picture of the influence of the setting of µ can be seen from Table 3, where we show the performance figures for a wider range of values of µ.\nThe value of µ can be interpreted as how many words the query history is worth.\nA larger value thus puts more weight on the
history, and thus hurts the performance more when the history information is not rich.\nThus while for q4 the best performance tends to be achieved for µ ∈ [2, 5], only when µ = 0.5 do we see some small benefit for q2.\nAs we would expect, an excessively large µ hurts the performance in general, but q2 is hurt most and q4 is barely hurt, indicating that as we accumulate more and more query history information, we can put more and more weight on the history information.\nThis also suggests that a better strategy should probably dynamically adjust parameters according to how much history information we have.\nThe mixed query history results suggest that the positive effect of using implicit feedback information may have largely come from the use of clickthrough history, which is indeed true as we discuss in the next subsection.\n5.2.3 Using clickthrough history only\nWe now turn off the query history and only use the clicked summaries plus the current query.\nThe results are shown in Table 4.\nWe see that the benefit of using clickthrough information is much more significant than that of using query history.\nWe see an overall positive effect, often with significant improvement over the baseline.\nIt is also clear that the richer the context data is, the more improvement using clicked summaries can achieve.\nOther than some occasional degradation of precision at 20 documents, the improvement is fairly consistent and often quite substantial.\nThese results show that the clicked summary text is in general quite useful for inferring a user's information need.\nIntuitively, using the summary text, rather than the actual content of the document, makes more sense, as it is quite possible that the document behind a seemingly relevant summary is actually non-relevant.\n29 out of the 91 clicked documents are relevant.\nUpdating the query model based on such summaries would bring up the ranks of these relevant documents, causing performance
improvement.\nHowever, such improvement is not really beneficial for the user, as the user has already seen these relevant documents.\nTo see how much improvement we have achieved on the ranks of the unseen relevant documents, we exclude these 29 relevant documents from our judgment file and recompute the performance of BayesInt and the baseline using the new judgment file.\nThe results are shown in Table 5.\nNote that the performance of the baseline method is lower due to the removal of the 29 relevant documents, which would generally have been ranked high in the results.\nFrom Table 5, we see clearly that using clicked summaries also helps significantly improve the ranks of unseen relevant documents.\nTable 5: BayesInt evaluated on unseen relevant documents\nOne remaining question is whether the clickthrough data is still helpful if none of the clicked documents is relevant.\nTo answer this question, we took out the 29 relevant summaries from our clickthrough history data HC to obtain a smaller set of clicked summaries H'C, and re-evaluated the performance of the BayesInt method using H'C with the same setting of parameters as in Table 4.\nThe results are shown in Table 6.\nWe see that although the improvement is not as substantial as in Table 4, the average precision is improved across all generations of queries.\nThese results should be interpreted as very encouraging, as they are based on only 62 non-relevant clickthroughs.\nIn reality, a user would more likely click some relevant summaries, which would help bring up more relevant documents, as we have seen in Table 4 and Table 5.\nTable 4: Effect of using clickthrough data only for document ranking.\nTable 6: Effect of using only non-relevant clickthrough data\n5.2.4 Additive effect of context information\nBy comparing the results across Table 1, Table 2 and Table 4, we can see that the benefits of the query history information and the clickthrough information are mostly \"additive\", i.e.,
combining them can achieve better performance than using each alone, but most of the improvement has clearly come from the clickthrough information.\nIn Table 7, we show this effect for the BatchUp method.\n5.2.5 Parameter sensitivity\nAll four models have two parameters to control the relative weights of HQ, HC, and Qk, though the parameterization differs from model to model.\nIn this subsection, we study the parameter sensitivity for BatchUp, which appears to perform better than the other methods.\nBatchUp has two parameters µ and ν.\nWe first look at µ.\nWhen µ is set to 0, the query history is not used at all, and we essentially just use the clickthrough data combined with the current query.\nIf we increase µ, we gradually incorporate more information from the previous queries.\nIn Table 8, we show how the average precision of BatchUp changes as we vary µ with ν fixed to 15.0, where the best performance of BatchUp is achieved.\nWe see that the performance is mostly insensitive to the change of µ for q3 and q4, but is decreasing as µ increases for q2.\nThe pattern is also similar when we set ν to other values.\nIn addition to the fact that q1 is generally worse than q2, q3, and q4, another possible reason why the sensitivity is lower for q3 and q4 may be that we generally have more clickthrough data available for q3 and q4 than for q2, and the dominating influence of the clickthrough data makes the small differences caused by µ less visible for q3 and q4.\nThe best performance is generally achieved when µ is around 2.0, which means that the past query information is worth about as much as 2 words in the current query.\nExcept for q2, there is clearly some tradeoff between the current query and the previous queries, and using a balanced combination of them achieves better performance than using either alone.\nTable 7: Additive benefit of context information\nWe now turn to the other parameter ν.\nWhen ν is set
to 0, we only use the clickthrough data; when ν is set to +∞, we only use the query history and the current query.\nWith µ set to 2.0, where the best performance of BatchUp is achieved, we vary ν and show the results in Table 9.\nWe see that the performance is also not very sensitive when ν < 30, with the best performance often achieved at ν = 15.\nThis means that the combined information of the query history and the current query is worth about as much as 15 words in the clickthrough data, indicating that the clickthrough information is highly valuable.\nOverall, these sensitivity results show that BatchUp not only performs better than the other methods, but is also quite robust.\n6.\nCONCLUSIONS AND FUTURE WORK\nIn this paper, we have explored how to exploit implicit feedback information, including query history and clickthrough history within the same search session, to improve information retrieval performance.\nUsing the KL-divergence retrieval model as the basis, we proposed and studied four statistical language models for context-sensitive information retrieval, i.e., FixInt, BayesInt, OnlineUp and BatchUp.\nWe use TREC AP data to create a test set for evaluating implicit feedback models.\nTable 8: Sensitivity of µ in BatchUp\nTable 9: Sensitivity of ν in BatchUp\nExperiment results show that using implicit feedback, especially clickthrough history, can substantially improve retrieval performance without requiring any additional user effort.\nThe current work can be extended in several ways: First, we have only explored some very simple language models for incorporating implicit feedback information.\nIt would be interesting to develop more sophisticated models to better exploit query history and clickthrough history.\nFor example, we may treat a clicked summary differently depending on whether the current query is a generalization or refinement of the previous query.\nSecond, the proposed models can be implemented in any practical system.\nWe are currently
developing a client-side personalized search agent, which will incorporate some of the proposed algorithms.\nWe will also do a user study to evaluate effectiveness of these models in the real web search.\nFinally, we should further study a general retrieval framework for sequential decision making in interactive information retrieval and study how to optimize some of the parameters in the context-sensitive retrieval models.","keyphrases":["context","implicit feedback inform","clickthrough inform","retriev accuraci","current queri","relev feedback","interact retriev","kl-diverg retriev model","context-sensit languag","long-term context","short-term context","fix coeffici interpol","bayesian estim","trec data set","mean averag precis","queri histori inform","queri histori","queri expans"],"prmu":["P","P","P","P","P","M","R","M","R","M","M","U","U","R","U","M","M","M"]} {"id":"H-84","title":"Event Threading within News Topics","abstract":"With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies event threading. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories. We formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem. 
Besides the standard word based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies. Our experiments on a manually labeled data sets show that our models effectively identify the events and capture dependencies among them.","lvl-1":"Event Threading within News Topics Ramesh Nallapati, Ao Feng, Fuchun Peng, James Allan Center for Intelligent Information Retrieval Department of Computer Science University of Massachusetts Amherst, MA 01003 nmramesh,aofeng,fuchun,allan @cs.\numass.edu ABSTRACT With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner.\nPrevious research focused only on organizing news stories by their topics into a flat hierarchy.\nWe believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly.\nIn this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models.\nWe call the process of recognizing events and their dependencies event threading.\nWe believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories.\nWe formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem.\nBesides the standard word based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for capturing dependencies.\nOur experiments on a manually labeled data sets show that our models effectively identify the events and capture dependencies among them.\nCategories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Clustering General Terms Algorithms, Experimentation, 
Measurement 1.\nINTRODUCTION News forms a major portion of information disseminated in the world everyday.\nCommon people and news analysts alike are very interested in keeping abreast of new things that happen in the news, but it is becoming very difficult to cope with the huge volumes of information that arrives each day.\nHence there is an increasing need for automatic techniques to organize news stories in a way that helps users interpret and analyze them quickly.\nThis problem is addressed by a research program called Topic Detection and Tracking (TDT) [3] that runs an open annual competition on standardized tasks of news organization.\nOne of the shortcomings of current TDT evaluation is its view of news topics as flat collection of stories.\nFor example, the detection task of TDT is to arrange a collection of news stories into clusters of topics.\nHowever, a topic in news is more than a mere collection of stories: it is characterized by a definite structure of inter-related events.\nThis is indeed recognized by TDT which defines a topic as `a set of news stories that are strongly related by some seminal realworld event'' where an event is defined as `something that happens at a specific time and location'' [3].\nFor example, when a bomb explodes in a building, that is the seminal event that triggers the topic.\nOther events in the topic may include the rescue attempts, the search for perpetrators, arrests and trials and so on.\nWe see that there is a pattern of dependencies between pairs of events in the topic.\nIn the above example, the event of rescue attempts is `influenced'' by the event of bombing and so is the event of search for perpetrators.\nIn this work we investigate methods for modeling the structure of a topic in terms of its events.\nBy structure, we mean not only identifying the events that make up a topic, but also establishing dependencies-generally causal-among them.\nWe call the process of recognizing events and identifying dependencies 
among them event threading, an analogy to email threading that shows connections between related email messages. We refer to the resulting interconnected structure of events as the event model of the topic. Although this paper focuses on threading events within an existing news topic, we expect that such an event-based dependency structure more accurately reflects the structure of news than strictly bounded topics do. From a user's perspective, we believe that our view of a news topic as a set of interconnected events helps him/her get a quick overview of the topic, and also allows him/her to navigate through the topic faster.

The rest of the paper is organized as follows. In section 2, we discuss related work. In section 3, we define the problem and use an example to illustrate threading of events within a news topic. In section 4, we describe how we built the corpus for our problem. Section 5 presents our evaluation techniques, while section 6 describes the techniques we use for modeling event structure. In section 7 we present our experiments and results. Section 8 concludes the paper with a few observations on our results and comments on future work.

2. RELATED WORK
The process of threading events together is related to threading of electronic mail only by name, for the most part. Email usually incorporates a strong structure of referenced messages and consistently formatted subject headings, though information retrieval techniques are useful when the structure breaks down [7]. Email threading captures reference dependencies between messages and does not attempt to reflect any underlying real-world structure of the matter under discussion. Another area of research that looks at the structure within a topic is hierarchical text classification of topics [9, 6]. The hierarchy within a topic does impose a structure on the topic, but we do not know of an effort to explore the extent to which that structure reflects the underlying event
relationships. Barzilay and Lee [5] proposed a content structure modeling technique where topics within text are learnt using unsupervised methods, and a linear order of these topics is modeled using hidden Markov models. Our work differs from theirs in that we do not constrain the dependencies to be linear. Also, their algorithms are tuned to work on specific genres of topics such as earthquakes, accidents, etc., while we expect our algorithms to generalize over any topic. In TDT, researchers have traditionally considered topics as flat clusters [1]. However, in TDT-2003, a hierarchical structure of topic detection was proposed, and [2] made useful attempts to adopt the new structure. However, this structure still did not explicitly model any dependencies between events. In a work closest to ours, Makkonen [8] suggested modeling news topics in terms of their evolving events. However, the paper stopped short of proposing any models for the problem. Other related work that dealt with analysis within a news topic includes temporal summarization of news topics [4].

3. PROBLEM DEFINITION AND NOTATION
In this work, we have adhered to the definitions of event and topic as defined in TDT. We present some definitions (in italics) and our interpretations (regular-faced) below for clarity.

1. Story: A story is a news article delivering some information to users. In TDT, a story is assumed to refer to only a single topic. In this work, we also assume that each story discusses a single event. In other words, a story is the smallest atomic unit in the hierarchy (topic → event → story). Clearly, both assumptions are not necessarily true in reality, but we accept them for simplicity in modeling.

2. Event: An event is something that happens at some specific time and place [10]. In our work, we represent an event by the set of stories that discuss it. Following the assumption of atomicity of a story, this means that any set of distinct events can be represented by a
set of non-overlapping clusters of news stories.

3. Topic: A set of news stories strongly connected by a seminal event. We expand on this definition and interpret a topic as a series of related events. Thus a topic can be represented by clusters of stories, each representing an event, and a set of (directed or undirected) edges between pairs of these clusters representing the dependencies between these events. We will describe this representation of a topic in more detail in the next section.

4. Topic detection and tracking (TDT): Topic detection detects clusters of stories that discuss the same topic; topic tracking detects stories that discuss a previously known topic [3]. Thus TDT concerns itself mainly with clustering stories into the topics that discuss them.

5. Event threading: Event threading detects events within a topic, and also captures the dependencies among the events. Thus the main difference between event threading and TDT is that we focus our modeling effort on microscopic events rather than larger topics. Additionally, event threading models the relatedness or dependencies between pairs of events in a topic, while TDT models topics as unrelated clusters of stories.

We first define our problem and the representation of our model formally, and then illustrate with the help of an example. We are given a set of $n$ news stories $S = \{s_1, \ldots, s_n\}$ on a given topic $T$, and their times of publication. We define a set of events $E = \{e_1, \ldots, e_m\}$ with the following constraints:

  $e_i \in 2^S$  (1)
  $e_i \cap e_j = \emptyset, \quad i \neq j$  (2)
  $\bigcup_{i=1}^{m} e_i = S$  (3)

While the first constraint says that each event is an element of the power set of $S$, the second constraint ensures that each story can belong to at most one event. The last constraint tells us that every story belongs to one of the events in $E$. In fact this allows us to define a mapping function $f$ from stories to events as follows:
  $f(s) = e_i \quad \text{iff} \quad s \in e_i$  (4)

Further, we also define a set of directed edges $D = \{(e_i, e_j)\}$ which denote dependencies between events. It is important to explain what we mean by this directional dependency: while the existence of an edge itself represents the relatedness of two events, the direction could imply causality or temporal ordering. By causal dependency we mean that the occurrence of event B is related to and is a consequence of the occurrence of event A. By temporal ordering, we mean that event B happened after event A and is related to A, but is not necessarily a consequence of A. For example, consider the following two events: "plane crash" (event A) and "subsequent investigations" (event B) in a topic on a plane crash incident. Clearly, the investigations are a result of the crash. Hence an arrow from A to B falls under the category of causal dependency. Now consider the pair of events "Pope arrives in Cuba" (event A) and "Pope meets Castro" (event B) in a topic that discusses the Pope's visit to Cuba. Events A and B are closely related through their association with the Pope and Cuba, but event B is not necessarily a consequence of the occurrence of event A. An arrow in such a scenario captures what we call time ordering. In this work, we do not attempt to distinguish between these two kinds of dependencies, and our models treat them as identical. A simpler (and hence less controversial) choice would be to ignore direction in the dependencies altogether and consider only undirected edges. This choice definitely makes sense as a first step, but we chose the former since we believe directional edges make more sense to the user, as they provide a more illustrative flow-chart perspective of the topic. To make the idea of event threading more concrete, consider the example of TDT3 topic 30005, titled "Osama bin Laden's Indictment" (in the 1998 news). This topic has 23 stories which form 5 events. An event model of this topic can be represented
as in figure 1. Each box in the figure indicates an event in the topic of Osama's indictment. The occurrence of event 2, namely "Trial and Indictment of Osama", is dependent on the event of "evidence gathered by CIA", i.e., event 1. Similarly, event 2 influences the occurrences of events 3, 4 and 5, namely "Threats from Militants", "Reactions from Muslim World" and "announcement of reward". Thus all the dependencies in the example are causal. Extending our notation further, we call an event A a parent of B, and B the child of A, if $(A, B) \in D$. We define an event model $M = (E, D)$ to be a tuple of the set of events and the set of dependencies.

[Figure 1: An event model of TDT topic "Osama bin Laden's indictment": event (1) "Evidence gathered by CIA" leads to event (2) "Trial and Indictment of Osama", which in turn leads to events (3) "Threats from Islamic militants", (4) "Reactions from Muslim world" and (5) "CIA announces reward".]

Event threading is strongly related to topic detection and tracking, but also differs from it significantly. It goes beyond topics and models the relationships between events. Thus, event threading can be considered a further extension of topic detection and tracking, and is more challenging due to at least the following difficulties:

1. The number of events is unknown.
2. The granularity of events is hard to define.
3. The dependencies among events are hard to model.
4. Since it is a brand new research area, no standard evaluation metrics and benchmark data are available.

In the next few sections, we will describe our attempts to tackle these problems.

4. LABELED DATA
We picked 28 topics from the TDT2 corpus and 25 topics from the TDT3 corpus. The criterion we used for selecting a topic is that it should contain at least 15 on-topic stories from CNN headline news. If a topic contained more than 30 CNN stories, we picked only the first 30 stories to keep the topic short enough for annotators. The reason for choosing only CNN as the source is that the
stories from this source tend to be short and precise, and do not tend to digress or drift too far away from the central theme. We believe modeling such stories would be a useful first step before dealing with more complex data sets.

We hired an annotator to create truth data. Annotation includes defining the event membership for each story and also the dependencies. We supervised the annotator on a set of three topics that we did our own annotations on, and then asked her to annotate the 28 topics from TDT2 and 25 topics from TDT3. In identifying events in a topic, the annotator was asked to broadly follow the TDT definition of an event, i.e., "something that happens at a specific time and location". The annotator was encouraged to merge two events A and B into a single event C if any of the stories discusses both A and B. This is to satisfy our assumption that each story corresponds to a unique event. The annotator was also encouraged to avoid singleton events, events that contain a single news story, if possible. We realized from our own experience that people differ in their perception of an event, especially when the number of stories in that event is small. As part of the guidelines, we instructed the annotator to assign titles to all the events in each topic. We believe that this helps make her understanding of the events more concrete. We do not, however, use or model these titles in our algorithms.

In defining dependencies between events, we imposed no restrictions on the graph structure. Each event could have a single parent, multiple parents or no parents. Further, the graph could have cycles or orphan nodes. The annotator was, however, instructed to assign a dependency from event A to event B if and only if the occurrence of B is either causally influenced by A or is closely related to A and follows A in time.

From the annotated topics, we created a training set of 26 topics and a test set of 27 topics by merging the 28 topics from TDT2 and 25 from
TDT3 and splitting them randomly. Table 1 shows that the training and test sets have fairly similar statistics.

  Feature                        Training set   Test set
  Num. topics                    26             27
  Avg. Num. Stories/Topic        28.69          26.74
  Avg. Doc. Len.                 64.60          64.04
  Avg. Num. Stories/Event        5.65           6.22
  Avg. Num. Events/Topic         5.07           4.29
  Avg. Num. Dependencies/Topic   3.07           2.92
  Avg. Num. Dependencies/Event   0.61           0.68
  Avg. Num. Days/Topic           30.65          34.48

  Table 1: Statistics of annotated data

5. EVALUATION
A system can generate some event model $M' = (E', D')$ using certain algorithms, which is usually different from the true model $M = (E, D)$ (we assume the annotator did not make any mistakes). Comparing a system event model $M'$ with the true model $M$ requires comparing the entire event models, including their dependency structure, and different event granularities may bring a huge discrepancy between $M'$ and $M$. This is certainly non-trivial, as even testing whether two graphs are isomorphic has no known polynomial-time solution. Hence, instead of comparing the actual structures, we examine one pair of stories at a time and verify whether the system and true labels agree on their event memberships and dependencies. Specifically, we compare two kinds of story pairs:

- Cluster pairs $C(M)$: the complete set of unordered pairs $(s_i, s_j)$ of stories $s_i$ and $s_j$ that fall within the same event given a model $M$. Formally,

  $C(M) = \{(s_i, s_j) \mid s_i, s_j \in S, \; f(s_i) = f(s_j)\}$  (5)

  where $f$ is the function in $M$ that maps stories to events, as defined in equation 4.

- Dependency pairs $D_p(M)$: the set of all ordered pairs of stories $(s_i, s_j)$ such that there is a dependency from the event of $s_i$ to the event of $s_j$ in the model $M$:
  $D_p(M) = \{(s_i, s_j) \mid (f(s_i), f(s_j)) \in D\}$  (6)

Note that the story pair is ordered here, so $(s_i, s_j)$ is not equivalent to $(s_j, s_i)$. In our evaluation, a correct pair with the wrong direction will be considered a mistake. As we mentioned earlier in section 3, ignoring the direction might make the problem simpler, but we would lose the expressiveness of our representation.

[Figure 2: Evaluation measures. In the example, the true event model has events {A, B}, {C} and {D, E}, while the system event model has events {A, C}, {B} and {D, E}; cluster precision and cluster recall are both 1/2, dependency precision is 2/4 and dependency recall is 2/6.]

Given these two sets of story pairs corresponding to the true event model $M$ and the system event model $M'$, we define recall and precision for each category as follows.

- Cluster Precision (CP): the probability that two randomly selected stories $s_i$ and $s_j$ are in the same true event given that they are in the same system event:

  $CP = P\big(f(s_i) = f(s_j) \mid f'(s_i) = f'(s_j)\big) = \frac{|C(M) \cap C(M')|}{|C(M')|}$  (7)

  where $f'$ is the story-event mapping function corresponding to the model $M'$.

- Cluster Recall (CR): the probability that two randomly selected stories $s_i$ and $s_j$ are in the same system event given that they are in the same true event:

  $CR = P\big(f'(s_i) = f'(s_j) \mid f(s_i) = f(s_j)\big) = \frac{|C(M) \cap C(M')|}{|C(M)|}$  (8)

- Dependency Precision (DP): the probability that there is a dependency between the events of two randomly selected stories $s_i$ and $s_j$ in the true model $M$ given that they have a dependency in the system model $M'$. Note that the direction of
dependency is important in comparison:

  $DP = P\big((f(s_i), f(s_j)) \in D \mid (f'(s_i), f'(s_j)) \in D'\big) = \frac{|D_p(M) \cap D_p(M')|}{|D_p(M')|}$  (9)

- Dependency Recall (DR): the probability that there is a dependency between the events of two randomly selected stories $s_i$ and $s_j$ in the system model $M'$ given that they have a dependency in the true model $M$. Again, the direction of the dependency is taken into consideration:

  $DR = P\big((f'(s_i), f'(s_j)) \in D' \mid (f(s_i), f(s_j)) \in D\big) = \frac{|D_p(M) \cap D_p(M')|}{|D_p(M)|}$  (10)

The measures are illustrated by an example in figure 2. We also combine these measures using the well-known F1-measure commonly used in text classification and other research areas, as shown below:

  $CF = \frac{2 \cdot CP \cdot CR}{CP + CR}, \quad DF = \frac{2 \cdot DP \cdot DR}{DP + DR}, \quad JF = \frac{2 \cdot CF \cdot DF}{CF + DF}$  (11)

where CF and DF are the cluster and dependency F1-measures respectively, and JF is the joint F1-measure that we use to measure the overall performance.

6. TECHNIQUES
The task of event modeling can be split into two parts: clustering the stories into unique events in the topic, and constructing dependencies among them. In the following subsections, we describe the techniques we developed for each of these sub-tasks.

6.1 Clustering
Each topic is composed of multiple events, so stories must be clustered into events before we can model the dependencies among them. For simplicity, all stories in the same topic are assumed to be available at one time, rather than arriving in a text stream. This task is similar to traditional clustering, but features other than word distributions may also be critical in our application. In many text clustering systems, the similarity between two stories is the inner product of their tf-idf vectors, hence we use it as one of our features.
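To make the pairwise measures of section 5 concrete, they can be sketched in Python as follows. This is a hypothetical helper, not the authors' code: story-to-event mappings are assumed to be plain dicts, and event dependencies sets of event-id pairs.

```python
from itertools import combinations

def evaluate(true_f, true_D, sys_f, sys_D):
    """Pairwise evaluation of an event model (eqs. 5-11).

    true_f/sys_f: dict mapping story -> event id.
    true_D/sys_D: set of (parent_event, child_event) dependency edges."""
    stories = list(true_f)

    # Cluster pairs (eq. 5): unordered story pairs within the same event.
    def cluster_pairs(f):
        return {frozenset((a, b)) for a, b in combinations(stories, 2)
                if f[a] == f[b]}

    # Dependency pairs (eq. 6): ordered story pairs whose events are linked.
    def dep_pairs(f, D):
        return {(a, b) for a in stories for b in stories if (f[a], f[b]) in D}

    # Precision, recall and F1 over two sets of pairs (eqs. 7-11).
    def prf(truth, system):
        inter = len(truth & system)
        p = inter / len(system) if system else 0.0
        r = inter / len(truth) if truth else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f1

    CP, CR, CF = prf(cluster_pairs(true_f), cluster_pairs(sys_f))
    DP, DR, DF = prf(dep_pairs(true_f, true_D), dep_pairs(sys_f, sys_D))
    JF = 2 * CF * DF / (CF + DF) if CF + DF else 0.0
    return {"CP": CP, "CR": CR, "CF": CF, "DP": DP, "DR": DR, "DF": DF, "JF": JF}
```

On the example of figure 2 this yields CP = CR = 1/2, DP = 2/4 and DR = 2/6, matching the values shown there.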
Stories in the same event tend to follow temporal locality, so the time stamp of each story can be a useful feature. Additionally, named entities such as person and location names are another obvious feature when forming events: stories in the same event tend to be related to the same person(s) and location(s). In this subsection, we present an agglomerative clustering algorithm that combines all these features. In our experiments, however, we study the effect of each feature on the performance separately, using modified versions of this algorithm.

6.1.1 Agglomerative clustering with time decay (ACDT)
We initialize our events to singleton events (clusters), i.e., each cluster contains exactly one story. So the similarity between two events, to start with, is exactly the similarity between the corresponding stories. The similarity $wsim(s_1, s_2)$ between two stories $s_1$ and $s_2$ is given by the following formula:

  $wsim(s_1, s_2) = \alpha_1 \cos(s_1, s_2) + \alpha_2 Loc(s_1, s_2) + \alpha_3 Per(s_1, s_2)$  (12)

Here $\alpha_1$, $\alpha_2$, $\alpha_3$ are the weights on the different features. In this work, we determined them empirically, but in the future, one could consider more sophisticated learning techniques to determine them. $\cos(s_1, s_2)$ is the cosine similarity of the term vectors. $Loc(s_1, s_2)$ is 1 if there is some location that appears in both stories, otherwise it is 0. $Per(s_1, s_2)$ is similarly defined for person names. We use time decay when calculating the similarity of story pairs, i.e., the larger the time difference between two stories, the smaller their similarities.
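As an illustration, the weighted story-pair similarity of equation 12, combined with the time decay defined in equation 13 below, might be computed as follows. This is a minimal sketch with hypothetical names; the dict-based story representation and the default weight values are assumptions, not the paper's implementation.

```python
import math

def story_similarity(s1, s2, w1=1.0, w2=0.0, w3=0.0, decay=0.5, T=1.0):
    """Decayed story-pair similarity (eqs. 12-13).

    Each story is a dict with a tf-idf vector 'vec' (term -> weight),
    sets 'locs' and 'pers' of named entities, and a time stamp 't'.
    T is the duration of the topic; decay is the time decay factor."""
    # Cosine similarity of the tf-idf term vectors.
    dot = sum(w * s2["vec"].get(term, 0.0) for term, w in s1["vec"].items())
    norm = math.sqrt(sum(w * w for w in s1["vec"].values())) * \
           math.sqrt(sum(w * w for w in s2["vec"].values()))
    cos = dot / norm if norm else 0.0
    # Binary overlap features for locations and person names.
    loc = 1.0 if s1["locs"] & s2["locs"] else 0.0
    per = 1.0 if s1["pers"] & s2["pers"] else 0.0
    wsim = w1 * cos + w2 * loc + w3 * per                  # eq. 12
    return wsim * decay ** (abs(s1["t"] - s2["t"]) / T)    # eq. 13
```

With the default weights, two stories with identical term vectors at opposite ends of the topic get half the similarity of two simultaneous ones, which is the decay behaviour described in section 7.1.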
The time period of each topic differs a lot, from a few days to a few months, so we normalize the time difference using the whole duration of that topic. The time-decay-adjusted similarity $sim(s_1, s_2)$ is given by

  $sim(s_1, s_2) = wsim(s_1, s_2) \cdot \alpha^{|t_1 - t_2| / T}$  (13)

where $t_1$ and $t_2$ are the time stamps of stories $s_1$ and $s_2$ respectively, $T$ is the time difference between the earliest and the latest story in the given topic, and $\alpha$ is the time decay factor.

In each iteration, we find the most similar event pair and merge them. We have three different ways to compute the similarity between two events $e_u$ and $e_v$:

- Average link: the similarity is the average of the similarities of all pairs of stories between $e_u$ and $e_v$:

  $sim(e_u, e_v) = \frac{\sum_{s_u \in e_u} \sum_{s_v \in e_v} sim(s_u, s_v)}{|e_u| \cdot |e_v|}$  (14)

- Complete link: the similarity between two events is given by the smallest of the pair-wise similarities:

  $sim(e_u, e_v) = \min_{s_u \in e_u, \, s_v \in e_v} sim(s_u, s_v)$  (15)

- Single link: the similarity is given by the best similarity among all pairs of stories:

  $sim(e_u, e_v) = \max_{s_u \in e_u, \, s_v \in e_v} sim(s_u, s_v)$  (16)

This process continues until the maximum similarity falls below a threshold or the number of clusters is smaller than a given number.

6.2 Dependency modeling
Capturing dependencies is an extremely hard problem because it may require a "deeper understanding" of the events in question. A human annotator decides on dependencies not just based on the
information in the events, but also based on his/her vast repertoire of domain knowledge and general understanding of how things operate in the world. For example, in Figure 1 a human knows "Trial and Indictment of Osama" is influenced by "Evidence gathered by CIA" because he/she understands the process of law in general. We believe a robust model should incorporate such domain knowledge in capturing dependencies, but in this work, as a first step, we rely on surface features such as the time-ordering of news stories and word distributions to model them. Our experiments in later sections demonstrate that such features are indeed useful in capturing dependencies to a large extent.

In this subsection, we describe the models we considered for capturing dependencies. In the rest of the discussion in this subsection, we assume that we are already given the mapping $f': S \to E'$, and we focus only on modeling the edges $D'$. First we define a couple of features that the following models will employ. We define a 1-1 time-ordering function $t: S \to \{1, \ldots, n\}$ that sorts stories in ascending order by their time of publication. Now, the event-time-ordering function $t_E: E' \to \{1, \ldots, m\}$ is defined as follows:

  $t_E(e_u) < t_E(e_v) \iff \min_{s_u \in e_u} t(s_u) < \min_{s_v \in e_v} t(s_v)$  (17)

In other words, $t_E$ time-orders events based on the time-ordering of their respective first stories. We will also use the average cosine similarity between two events as a feature; it is defined as follows:

  $AvgSim(e_u, e_v) = \frac{\sum_{s_u \in e_u} \sum_{s_v \in e_v} \cos(s_u, s_v)}{|e_u| \cdot |e_v|}$  (18)

6.2.1 Complete-Link model
In this model, we assume that there are dependencies
between all pairs of events. The direction of a dependency is determined by the time-ordering of the first stories in the respective events. Formally, the system edges are defined as follows:

  $D' = \{(e_u, e_v) \mid t_E(e_u) < t_E(e_v)\}$  (19)

where $t_E$ is the event-time-ordering function. In other words, a dependency edge is directed from event $e_u$ to event $e_v$ if the first story in event $e_u$ is earlier than the first story in event $e_v$. We point out that this is not to be confused with the complete-link algorithm in clustering. Although we use the same name, it will be clear from the context which one we refer to.

6.2.2 Simple Thresholding
This model is an extension of the complete-link model, with the additional constraint that there is a dependency between two events $e_u$ and $e_v$ only if the average cosine similarity between them is greater than a threshold $T$. Formally,

  $D' = \{(e_u, e_v) \mid AvgSim(e_u, e_v) > T, \; t_E(e_u) < t_E(e_v)\}$  (20)

6.2.3 Nearest Parent Model
In this model, we assume that each event can have at most one parent. We define the set of dependencies as follows:

  $D' = \{(e_u, e_v) \mid AvgSim(e_u, e_v) > T, \; t_E(e_v) = t_E(e_u) + 1\}$  (21)

Thus, for each event $e_v$, the nearest parent model considers only the event immediately preceding it as defined by $t_E$ as a potential candidate. The candidate is assigned as the parent only if the average similarity exceeds a pre-defined threshold $T$.

6.2.4 Best Similarity Model
This model also assumes that each event can have at most one parent. An event $e_v$ is assigned a parent $e_u$ if and only if $e_u$ is the most similar earlier event to $e_v$ and the similarity exceeds a threshold $T$. Mathematically, this can be expressed as:

  $D' = \{(e_u, e_v) \mid AvgSim(e_u, e_v) > T, \; e_u = \arg\max_{e_w : \, t_E(e_w) < t_E(e_v)} AvgSim(e_w, e_v)\}$  (22)
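For illustration, the two single-parent rules above (equations 21 and 22) can be sketched as follows. This is not the authors' code: it assumes events are given as a list already sorted by the time-ordering $t_E$, with a hypothetical `avg_sim` callable playing the role of AvgSim from equation 18.

```python
def nearest_parent(events, avg_sim, thresh):
    """Nearest Parent model (eq. 21): each event's only candidate parent
    is the event immediately preceding it in time order."""
    edges = set()
    for v in range(1, len(events)):
        if avg_sim(events[v - 1], events[v]) > thresh:
            edges.add((events[v - 1], events[v]))
    return edges

def best_similarity(events, avg_sim, thresh):
    """Best Similarity model (eq. 22): each event's parent is its most
    similar earlier event, if that similarity exceeds the threshold."""
    edges = set()
    for v in range(1, len(events)):
        # Most similar earlier event (candidate parent) for events[v].
        u = max(range(v), key=lambda w: avg_sim(events[w], events[v]))
        if avg_sim(events[u], events[v]) > thresh:
            edges.add((events[u], events[v]))
    return edges
```

The two rules differ only in which earlier event is considered: the immediate predecessor versus the most similar one.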
6.2.5 Maximum Spanning Tree model
In this model, we first build a maximum spanning tree (MST) using a greedy algorithm on the fully connected, weighted, undirected graph $G$ whose vertices are the events and whose edge weights are given by

  $w(e_u, e_v) = AvgSim(e_u, e_v)$  (23)

Let $MST(G)$ be the set of edges in the maximum spanning tree of $G$. Now our directed dependency edges are defined as follows:

  $D' = \{(e_u, e_v) \mid (e_u, e_v) \in MST(G), \; t_E(e_u) < t_E(e_v), \; AvgSim(e_u, e_v) > T\}$  (24)

Thus in this model, we assign dependencies between the most similar events in the topic.

7. EXPERIMENTS
Our experiments consist of three parts. First we modeled only the event clustering part (defining the mapping function $f'$) using the clustering algorithms described in section 6.1. Then we modeled only the dependencies by providing the true clusters to the system and running only the dependency algorithms of section 6.2. Finally, we experimented with combinations of clustering and dependency algorithms to produce the complete event model. This way of experimentation allows us to compare the performance of our algorithms in isolation and in association with other components. The following subsections present the three parts of our experimentation.

7.1 Clustering
We have tried several variations of the ACDT algorithm to study the effects of various features on the clustering performance. All the parameters are learned by tuning on the training set. We also tested the algorithms on the test set with parameters fixed at their optimal values learned from training.
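The MST model of section 6.2.5 (equations 23 and 24) can be sketched with a greedy Kruskal-style procedure. Again, this is a hypothetical helper rather than the paper's implementation, assuming events are listed in $t_E$ order and `avg_sim` stands in for AvgSim.

```python
def mst_dependencies(events, avg_sim, thresh):
    """Maximum Spanning Tree model (eqs. 23-24): greedily build a maximum
    spanning tree over events weighted by average cosine similarity, then
    direct the kept edges earlier -> later and prune by the threshold.

    `events` is assumed sorted by the event-time-ordering t_E."""
    # Union-find structure for cycle detection in Kruskal's algorithm.
    parent = {e: e for e in events}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # All event pairs, already (earlier, later) since events is time-sorted;
    # consider heaviest (most similar) pairs first.
    candidates = [(u, v) for i, u in enumerate(events) for v in events[i + 1:]]
    candidates.sort(key=lambda uv: avg_sim(*uv), reverse=True)

    tree = []
    for u, v in candidates:
        ru, rv = find(u), find(v)
        if ru != rv:            # adding this edge creates no cycle
            parent[ru] = rv
            tree.append((u, v))

    # Keep only tree edges whose similarity exceeds the threshold (eq. 24).
    return {(u, v) for u, v in tree if avg_sim(u, v) > thresh}
```

Because the candidate pairs are generated in time order, the surviving tree edges are already directed from the earlier event to the later one, as equation 24 requires.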
We used agglomerative clustering based on only cosine similarity as our clustering baseline. The results on the training and test sets are in Tables 2 and 3 respectively. We use the Cluster F1-measure (CF) averaged over all topics as our evaluation criterion.

  Model                    best T   CP     CR     CF     P-value
  cos+1-lnk                0.15     0.41   0.56   0.43   -
  cos+all-lnk              0.00     0.40   0.62   0.45   -
  cos+Loc+avg-lnk          0.07     0.37   0.74   0.45   -
  cos+Per+avg-lnk          0.07     0.39   0.70   0.46   -
  cos+TD+avg-lnk           0.04     0.45   0.70   0.53   2.9e-4*
  cos+N(T)+avg-lnk         -        0.41   0.62   0.48   7.5e-2
  cos+N(T)+T+avg-lnk       0.03     0.42   0.62   0.49   2.4e-2*
  cos+TD+N(T)+avg-lnk      -        0.44   0.66   0.52   7.0e-3*
  cos+TD+N(T)+T+avg-lnk    0.03     0.47   0.64   0.53   1.1e-3*
  Baseline (cos+avg-lnk)   0.05     0.39   0.67   0.46   -

  Table 2: Comparison of agglomerative clustering algorithms (training set)

  Model                    CP     CR     CF     P-value
  cos+1-lnk                0.43   0.49   0.39   -
  cos+all-lnk              0.43   0.62   0.47   -
  cos+Loc+avg-lnk          0.37   0.73   0.45   -
  cos+Per+avg-lnk          0.44   0.62   0.45   -
  cos+TD+avg-lnk           0.48   0.70   0.54   0.014*
  cos+N(T)+avg-lnk         0.41   0.71   0.51   0.31
  cos+N(T)+T+avg-lnk       0.43   0.69   0.52   0.14
  cos+TD+N(T)+avg-lnk      0.43   0.76   0.54   0.025*
  cos+TD+N(T)+T+avg-lnk    0.47   0.69   0.54   0.0095*
  Baseline (cos+avg-lnk)   0.44   0.67   0.50   -

  Table 3: Comparison of agglomerative clustering algorithms (test set)

A P-value marked with a * indicates a statistically significant improvement over the baseline (95% confidence level, one-tailed T-test). The methods shown in Tables 2 and 3 are:

- Baseline: tf-idf vector weight, cosine similarity, average link in clustering. In equation 12 all the weight is on the cosine term, and there is no time decay in equation 13. The F-value is the maximum obtained by tuning the threshold.
- cos+1-lnk: single-link comparison (see equation 16) is used, where the similarity of two clusters is the maximum over all story pairs; other configurations are the same as the baseline run.
- cos+all-lnk: the complete-link algorithm of equation 15 is used. Similar to single link, but it takes the minimum similarity over all story
pairs.

- cos+Loc+avg-lnk: location names are used when calculating similarity, i.e., a nonzero weight is placed on the location feature in equation 12. This algorithm and all those below use average link (equation 14), since single link and complete link do not show any improvement in performance.
- cos+Per+avg-lnk: a nonzero weight is placed on person names in equation 12.
- cos+TD+avg-lnk: the time decay factor in equation 13 is set so that the similarity between two stories is decayed to 1/2 if they are at different ends of the topic.
- cos+N(T)+avg-lnk: use the number of true events to control the agglomerative clustering algorithm; when the number of clusters reaches the number of true events, stop merging clusters.
- cos+N(T)+T+avg-lnk: similar to N(T), but agglomeration also stops if the maximal similarity is below the threshold T.
- cos+TD+N(T)+avg-lnk: similar to N(T), but the similarities are decayed as in equation 13.
- cos+TD+N(T)+T+avg-lnk: similar to TD+N(T), but the calculation halts when the maximal similarity is smaller than the threshold T.

Our experiments demonstrate that single-link and complete-link similarities perform worse than average link, which is reasonable since average link is less sensitive to one or two story pairs. We had expected locations and person names to improve the results, but that is not the case. Analysis of the topics shows that many on-topic stories share the same locations or persons irrespective of the event they belong to, so these features may be more useful in identifying topics rather than events. Time decay is successful because events are temporally localized, i.e., stories discussing the same event tend to be adjacent to each other in terms of time. Also, we noticed that providing the number of true events improves the performance, since it guides the clustering algorithm to the correct granularity. However, for most applications, it is not
available. We used it only as a "cheat" experiment for comparison with the other algorithms. On the whole, time decay proved to be the most powerful feature besides cosine similarity on both the training and test sets.

7.2 Dependencies
In this subsection, our goal is to model only the dependencies. We use the true mapping function $f$ and, by implication, the true events $E$. We build our dependency structure $D'$ using all five models described in section 6.2. We first train our models on the 26 training topics. Training involves learning the best threshold $T$ for each of the models. We then test the performance of all the trained models on the 27 test topics. We evaluate our performance using the average values of Dependency Precision (DP), Dependency Recall (DR) and Dependency F-measure (DF). We consider the complete-link model to be our baseline, since for each event it trivially considers all earlier events to be parents. Table 4 lists the results on the training set. We see that while all the algorithms except MST outperform the baseline complete-link algorithm, only the Nearest Parent algorithm is statistically significantly better than the baseline in terms of its DF-value, using a one-tailed paired T-test at the 95% confidence level.

  Model             best T   DP     DR     DF     P-value
  Nearest Parent    0.025    0.55   0.62   0.56   0.04*
  Best Similarity   0.02     0.51   0.62   0.53   0.24
  MST               0.0      0.46   0.58   0.48   -
  Simple Thresh.    0.045    0.45   0.76   0.52   0.14
  Complete-link     -        0.36   0.93   0.48   -

  Table 4: Results on the training set. Best T is the optimal value of the threshold T; * indicates the corresponding model is statistically significant compared to the baseline using a one-tailed, paired T-test at the 95% confidence level.

In Table 5 we present the comparison of the models on the test set. Here, we do not do any tuning, but set the threshold to the corresponding optimal values learned from the training set. The results throw some surprises: the nearest parent model, which was significantly better than
The results throw some surprises: the nearest-parent model, which was significantly better than the baseline on the training set, turns out to be worse than the baseline on the test set. However, all the other models are better than the baseline, including best similarity, which is statistically significant. Notice that all the models that perform better than the baseline in terms of DF actually sacrifice recall compared to the baseline, but improve substantially on precision, thereby improving their performance on the DF-measure. We notice that both simple thresholding and best similarity are better than the baseline on both the training and test sets, although not always significantly so. On the whole, we observe that the surface-level features we used capture the dependencies to a reasonable level, achieving a best value of 0.72 DF on the test set. Although there is a lot of room for improvement, we believe this is a good first step.

Model                     DP    DR    DF    P-value
Nearest Parent            0.61  0.60  0.60  -
Best Similarity           0.71  0.74  0.72  0.04*
MST                       0.70  0.68  0.69  0.22
Simple Thresh.            0.57  0.75  0.64  0.24
Baseline (Complete-link)  0.50  0.94  0.63  -

Table 5: Results on the test set.

7.3 Combining Clustering and Dependencies

Now that we have studied the clustering and dependency algorithms in isolation, we combine the best performing algorithms and build the entire event model. Since none of the dependency algorithms has been shown to be consistently and significantly better than the others, we use all of them in our experimentation. From the clustering techniques, we choose the best performing cos+TD. As a baseline, we use a combination of the baselines of the two components, i.e., cos for clustering and complete-link for dependencies. Note that we need to retrain all the algorithms on the training set, because our objective function to optimize is now JF, the joint F-measure. For each algorithm, we need to optimize both the clustering threshold and the dependency threshold. We did this empirically on the training set, and the optimal values are listed in table 6.
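The measures being optimized here, pairwise precision and recall over cluster pairs and dependency pairs, combined into F-measures, can be computed as in the following sketch. Function names are our own, and the F-score is assumed to be the usual harmonic mean; the paper's exact JF definition is not reproduced here.

```python
def cluster_pairs(story_to_event):
    """Unordered story pairs that share an event: C(M) from section 5."""
    pairs = set()
    stories = sorted(story_to_event)
    for a in range(len(stories)):
        for b in range(a + 1, len(stories)):
            si, sj = stories[a], stories[b]
            if story_to_event[si] == story_to_event[sj]:
                pairs.add((si, sj))
    return pairs

def dependency_pairs(story_to_event, edges):
    """Ordered story pairs whose events are linked by a directed edge: D(M)."""
    return {(si, sj)
            for si, ei in story_to_event.items()
            for sj, ej in story_to_event.items()
            if (ei, ej) in edges}

def prf(true_pairs, sys_pairs):
    """Pairwise precision, recall and F1 of a system pair set vs. truth."""
    hit = len(true_pairs & sys_pairs)
    p = hit / len(sys_pairs) if sys_pairs else 0.0
    r = hit / len(true_pairs) if true_pairs else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```

Applying `prf` to cluster pairs yields CP/CR/CF, and to dependency pairs DP/DR/DF; a direction-flipped dependency edge produces pairs absent from the true set, so it counts as a mistake, as section 5 specifies.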
The results on the training set, also presented in table 6, indicate that cos+TD+Simple-Thresholding is significantly better than the baseline in terms of the joint F-value JF, using a one-tailed paired T-test at the 95% confidence level. On the whole, we notice that while the clustering performance is comparable to the experiments in section 7.1, the overall performance is undermined by the low dependency performance. Unlike our experiments in section 7.2, where we had provided the true clusters to the system, in this case the system has to deal with deterioration in cluster quality. Hence the performance of the dependency algorithms has suffered substantially, thereby lowering the overall performance. The results on the test set tell a very similar story, as shown in table 7. We also notice a fair amount of consistency in the performance of the combination algorithms: cos+TD+Simple-Thresholding outperforms the baseline significantly. The test set results also point to the fact that the clustering component remains a bottleneck in achieving good overall performance.

8. DISCUSSION AND CONCLUSIONS

In this paper, we have presented a new perspective on modeling news topics. Contrary to the TDT view of topics as a flat collection of news stories, we view a news topic as a relational structure of events interconnected by dependencies. We also proposed a few approaches for both clustering stories into events and constructing dependencies among them. We developed a time-decay based clustering approach that takes advantage of the temporal localization of news stories on the same event, and showed that it performs significantly better than the baseline approach based on cosine similarity. Our experiments also show that we can do fairly well on dependencies using only surface features such as cosine similarity and time-stamps of news stories, as long as the true events are provided to the system. However, the performance deteriorates rapidly if the
system has to discover the events by itself. Despite that discouraging result, we have shown that our combined algorithms perform significantly better than the baselines. Our results indicate that modeling dependencies can be a very hard problem, especially when the clustering performance is below the ideal level. Errors in clustering have a magnifying effect on errors in dependencies, as we have seen in our experiments. Hence, we should focus not only on improving dependencies but also on clustering at the same time. As part of our future work, we plan to investigate the data further and discover new features that influence clustering as well as dependencies. For modeling dependencies, a probabilistic framework should be a better choice, since there is no definite yes/no answer for the causal relations among some events. We also hope to devise an iterative algorithm which can improve clustering and dependency performance alternately, as suggested by one of the reviewers. Finally, we hope to expand our labeled corpus further to include more diverse news sources and larger, more complex event structures.

Model                         Cluster T  Dep. T  CP    CR    CF    DP    DR    DF    JF    P-value
cos+TD+Nearest-Parent         0.055      0.02    0.51  0.53  0.49  0.21  0.19  0.19  0.27  -
cos+TD+Best-Similarity        0.04       0.02    0.45  0.70  0.53  0.21  0.33  0.23  0.32  -
cos+TD+MST                    0.04       0.00    0.45  0.70  0.53  0.22  0.35  0.25  0.33  -
cos+TD+Simple-Thresholding    0.065      0.02    0.56  0.47  0.48  0.23  0.61  0.32  0.38  0.0004*
Baseline (cos+Complete-link)  0.10       -       0.58  0.31  0.38  0.20  0.67  0.30  0.33  -

Table 6: Combined results on the training set.

Model                         CP    CR    CF    DP    DR    DF    JF    P-value
cos+TD+Nearest Parent         0.57  0.50  0.50  0.27  0.19  0.21  0.30  -
cos+TD+Best Similarity        0.48  0.70  0.54  0.31  0.27  0.26  0.35  -
cos+TD+MST                    0.48  0.70  0.54  0.31  0.30  0.28  0.37  -
cos+TD+Simple Thresholding    0.60  0.39  0.44  0.32  0.66  0.42  0.43  0.0081*
Baseline (cos+Complete-link)  0.66  0.27  0.36  0.30  0.72  0.43  0.39  -

Table 7: Combined results on the test set.

Acknowledgments

We would like to thank the three anonymous reviewers for their valuable comments. This work was supported in part by the Center for Intelligent Information Retrieval and in part by SPAWARSYSCEN-SD grant number N66001-02-1-8903. Any opinions, findings and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsor.

9. REFERENCES

[1] J. Allan, J. Carbonell, G. Doddington, J. Yamron, and Y. Yang. Topic detection and tracking pilot study: Final report. In Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, pages 194-218, 1998.
[2] J. Allan, A. Feng, and A. Bolivar. Flexible intrinsic evaluation of hierarchical clustering for TDT. In Proceedings of the ACM Twelfth International Conference on Information and Knowledge Management, pages 263-270, Nov 2003.
[3] James Allan, editor. Topic Detection and Tracking: Event-based Information Organization. Kluwer Academic Publishers, 2000.
[4] James Allan, Rahul Gupta, and Vikas Khandelwal. Temporal summaries of new topics. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 10-18. ACM Press, 2001.
[5] Regina Barzilay and Lillian Lee. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of HLT-NAACL, pages 113-120, 2004.
[6] D. Lawrie and W. B. Croft. Discovering and comparing topic hierarchies. In Proceedings of RIAO 2000 Conference, pages 314-330, 1999.
[7] David D. Lewis and Kimberly A.
Knowles. Threading electronic mail: A preliminary study. Information Processing and Management, 33(2):209-217, 1997.
[8] Juha Makkonen. Investigations on event evolution in TDT. In Proceedings of the HLT-NAACL 2003 Student Workshop, pages 43-48, 2004.
[9] Aixin Sun and Ee-Peng Lim. Hierarchical text classification and evaluation. In Proceedings of the 2001 IEEE International Conference on Data Mining, pages 521-528. IEEE Computer Society, 2001.
[10] Yiming Yang, Jaime Carbonell, Ralf Brown, Thomas Pierce, Brian T. Archibald, and Xin Liu. Learning approaches for detecting and tracking news events. IEEE Intelligent Systems, 14(4):32-43, 1999.
Event Threading within News Topics

ABSTRACT

With the overwhelming volume of online news available today, there is an increasing need for automatic techniques to analyze and present news to the user in a meaningful and efficient manner. Previous research focused only on organizing news stories by their topics into a flat hierarchy. We believe viewing a news topic as a flat collection of stories is too restrictive and inefficient for a user to understand the topic quickly. In this work, we attempt to capture the rich structure of events and their dependencies in a news topic through our event models. We call the process of recognizing events and their dependencies event threading. We believe our perspective of modeling the structure of a topic is more effective in capturing its semantics than a flat list of on-topic stories. We formally define the novel problem, suggest evaluation metrics and present a few techniques for solving the problem. Besides the standard word based features, our approaches take into account novel features such as temporal locality of stories for event recognition and time-ordering for
capturing dependencies. Our experiments on manually labeled data sets show that our models effectively identify the events and capture the dependencies among them.

1. INTRODUCTION

News forms a major portion of the information disseminated in the world every day. Common people and news analysts alike are very interested in keeping abreast of new things that happen in the news, but it is becoming very difficult to cope with the huge volume of information that arrives each day. Hence there is an increasing need for automatic techniques to organize news stories in a way that helps users interpret and analyze them quickly. This problem is addressed by a research program called Topic Detection and Tracking (TDT) [3] that runs an open annual competition on standardized tasks of news organization.

One of the shortcomings of the current TDT evaluation is its view of news topics as flat collections of stories. For example, the detection task of TDT is to arrange a collection of news stories into clusters of topics. However, a topic in news is more than a mere collection of stories: it is characterized by a definite structure of inter-related events. This is indeed recognized by TDT, which defines a topic as 'a set of news stories that are strongly related by some seminal real-world event', where an event is defined as 'something that happens at a specific time and location' [3]. For example, when a bomb explodes in a building, that is the seminal event that triggers the topic. Other events in the topic may include the rescue attempts, the search for perpetrators, arrests and trials, and so on. We see that there is a pattern of dependencies between pairs of events in the topic. In the above example, the event of rescue attempts is 'influenced' by the event of bombing, and so is the event of search for perpetrators.

In this work we investigate methods for modeling the structure of a topic in terms of its events. By structure, we mean not only identifying the events that
make up a topic, but also establishing dependencies, generally causal, among them. We call the process of recognizing events and identifying dependencies among them event threading, an analogy to email threading that shows connections between related email messages. We refer to the resulting interconnected structure of events as the event model of the topic. Although this paper focuses on threading events within an existing news topic, we expect that such an event-based dependency structure more accurately reflects the structure of news than strictly bounded topics do. From a user's perspective, we believe that our view of a news topic as a set of interconnected events helps him/her get a quick overview of the topic and also allows him/her to navigate through the topic faster.

The rest of the paper is organized as follows. In section 2, we discuss related work. In section 3, we define the problem and use an example to illustrate threading of events within a news topic. In section 4, we describe how we built the corpus for our problem. Section 5 presents our evaluation techniques, while section 6 describes the techniques we use for modeling event structure. In section 7 we present our experiments and results. Section 8 concludes the paper with a few observations on our results and comments on future work.

2. RELATED WORK

The process of threading events together is related to threading of electronic mail only by name for the most part. Email usually incorporates a strong structure of referenced messages and consistently formatted subject headings, though information retrieval techniques are useful when the structure breaks down [7]. Email threading captures reference dependencies between messages and does not attempt to reflect any underlying real-world structure of the matter under discussion. Another area of research that looks at the structure within a topic is hierarchical text classification of topics [9, 6]. The hierarchy within a topic does impose
a structure on the topic, but we do not know of an effort to explore the extent to which that structure reflects the underlying event relationships. Barzilay and Lee [5] proposed a content structure modeling technique where topics within text are learnt using unsupervised methods, and a linear order of these topics is modeled using hidden Markov models. Our work differs from theirs in that we do not constrain the dependencies to be linear. Also, their algorithms are tuned to work on specific genres of topics such as earthquakes, accidents, etc., while we expect our algorithms to generalize over any topic. In TDT, researchers have traditionally considered topics as flat clusters [1]. However, in TDT-2003, a hierarchical structure of topic detection was proposed, and [2] made useful attempts to adopt the new structure. However, this structure still did not explicitly model any dependencies between events. In the work closest to ours, Makkonen [8] suggested modeling news topics in terms of their evolving events; however, that paper stopped short of proposing any models for the problem. Other related work that dealt with analysis within a news topic includes temporal summarization of news topics [4].

3. PROBLEM DEFINITION AND NOTATION

In this work, we have adhered to the definitions of event and topic used in TDT. We present some definitions (in italics) and our interpretations (regular-faced) below for clarity.

1. Story: A story is a news article delivering some information to users. In TDT, a story is assumed to refer to only a single topic. In this work, we also assume that each story discusses a single event. In other words, a story is the smallest atomic unit in the hierarchy (topic → event → story). Clearly, both assumptions are not necessarily true in reality, but we accept them for simplicity in modeling.

2. Event: An event is something that happens at some specific time and place [10]. In our work, we represent an event by a set of
stories that discuss it. Following the assumption of atomicity of a story, this means that any set of distinct events can be represented by a set of non-overlapping clusters of news stories.

3. Topic: A set of news stories strongly connected by a seminal event. We expand on this definition and interpret a topic as a series of related events. Thus a topic can be represented by clusters of stories, each representing an event, and a set of (directed or undirected) edges between pairs of these clusters representing the dependencies between these events. We will describe this representation of a topic in more detail in the next section.

4. Topic detection and tracking (TDT): Topic detection detects clusters of stories that discuss the same topic; topic tracking detects stories that discuss a previously known topic [3]. Thus TDT concerns itself mainly with clustering stories into the topics that discuss them.

5. Event threading: Event threading detects events within a topic, and also captures the dependencies among the events. Thus the main difference between event threading and TDT is that we focus our modeling effort on microscopic events rather than larger topics. Additionally, event threading models the relatedness or dependencies between pairs of events in a topic, while TDT models topics as unrelated clusters of stories.

We first define our problem and representation of our model formally and then illustrate with the help of an example. We are given a set of n news stories S = {s1, . . . , sn} on a given topic T and their times of publication. We define a set of events E = {E1, . . . , Em} subject to the following constraints:

    Ei ∈ 2^S                          (1)
    Ei ∩ Ej = ∅ for all i ≠ j         (2)
    ∪i Ei = S                         (3)

While the first constraint says that each event is an element of the power set of S, the second constraint ensures that each story can belong to at most one event. The last constraint tells us that every story belongs to one of the events in E. In fact this allows us to define a mapping function f from stories to events as follows:

    f(s) = Ei if and only if s ∈ Ei   (4)

Further, we also define a set of directed edges {(Ei, Ej)} which denote dependencies between events. It is important to explain what we mean by this directional dependency: while the existence of an edge itself represents the relatedness of two events, the direction could imply causality or temporal ordering. By causal dependency we mean that the occurrence of event B is related to and is a consequence of the occurrence of event A. By temporal ordering, we mean that event B happened after event A and is related to A, but is not necessarily a consequence of A. For example, consider the following two events: 'plane crash' (event A) and 'subsequent investigations' (event B) in a topic on a plane crash incident. Clearly, the investigations are a result of the crash; hence an arrow from A to B falls under the category of causal dependency. Now consider the pair of events 'Pope arrives in Cuba' (event A) and 'Pope meets Castro' (event B) in a topic that discusses the Pope's visit to Cuba. Events A and B are closely related through their association with the Pope and Cuba, but event B is not necessarily a consequence of the occurrence of event A. An arrow in such a scenario captures what we call time ordering. In this work, we do not attempt to distinguish between these two kinds of dependencies, and our models treat them as identical. A simpler (and hence less controversial) choice would be to ignore direction in the dependencies altogether and consider only undirected edges. That choice definitely makes sense as a first step, but we chose the former since we believe directional edges make more sense to the user, as they provide a more illustrative flow-chart perspective on the topic.

To make the idea of event threading more concrete, consider the example of TDT3 topic 30005, titled 'Osama bin Laden's Indictment' (in the 1998 news). This topic has 23 stories which form 5 events.
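Concretely, a topic's event model, clusters of stories plus directed dependency edges, can be held in a small structure like the following sketch. Story IDs and all names here are invented for illustration; the three constraints from the definition above are checked explicitly.

```python
# A minimal event-model container: events partition the stories, and
# directed edges between events record dependencies.
from dataclasses import dataclass, field

@dataclass
class EventModel:
    stories: set
    events: dict = field(default_factory=dict)   # event name -> set of story IDs
    edges: set = field(default_factory=set)      # (parent event, child event)

    def check(self):
        """Verify the three constraints: subsets of S, disjoint, covering."""
        seen = set()
        for members in self.events.values():
            assert members <= self.stories       # each event is a subset of S
            assert not (members & seen)          # events do not overlap
            seen |= members
        assert seen == self.stories              # every story is in some event
        return True

    def f(self, story):
        """The story-to-event mapping: f(s) = E iff s is in E."""
        for name, members in self.events.items():
            if story in members:
                return name
        raise KeyError(story)

    def parents(self, event):
        """All events A with a directed edge (A, event)."""
        return {a for a, b in self.edges if b == event}
```

A topic like the bin Laden example would then be one `EventModel` with five event clusters and causal edges such as (evidence, indictment).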
An event model of this topic can be represented as in figure 1. Each box in the figure indicates an event in the topic of Osama's indictment. The occurrence of event 2, namely 'Trial and Indictment of Osama', is dependent on the event of 'evidence gathered by CIA', i.e., event 1. Similarly, event 2 influences the occurrences of events 3, 4 and 5, namely 'Threats from Militants', 'Reactions from Muslim World' and 'announcement of reward'. Thus all the dependencies in the example are causal. Extending our notation further, we call an event A a parent of B, and B the child of A, if (A, B) is an edge in the model. We define an event model M = (S, E) to be a tuple of the set of events and the set of dependencies.

Figure 1: An event model of TDT topic 'Osama bin Laden's indictment'.

Event threading is strongly related to topic detection and tracking, but also differs from it significantly. It goes beyond topics and models the relationships between events. Thus, event threading can be considered a further extension of topic detection and tracking, and is more challenging due to at least the following difficulties:

1. The number of events is unknown.
2. The granularity of events is hard to define.
3. The dependencies among events are hard to model.
4. Since it is a brand new research area, no standard evaluation metrics or benchmark data are available.

In the next few sections, we will describe our attempts to tackle these problems.

4. LABELED DATA

We picked 28 topics from the TDT2 corpus and 25 topics from the TDT3 corpus. The criterion we used for selecting a topic is that it should contain at least 15 on-topic stories from CNN headline news. If a topic contained more than 30 CNN stories, we picked only the first 30 stories to keep the topic short enough for the annotators. The reason for choosing only CNN as the source is that stories from this source tend to be short and precise and do not tend to digress or drift too far away from the central theme. We believe modeling such stories would be a useful first step before dealing with more complex data sets. We hired an
annotator to create truth data.\nAnnotation includes defining the event membership for each story and also the dependencies.\nWe supervised the annotator on a set of three topics that we annotated ourselves, and then asked her to annotate the 28 topics from TDT2 and the 25 topics from TDT3.\nIn identifying events in a topic, the annotator was asked to broadly follow the TDT definition of an event, i.e., 'something that happens at a specific time and location'.\nThe annotator was encouraged to merge two events A and B into a single event C if any of the stories discusses both A and B.\nThis is to satisfy our assumption that each story corresponds to a unique event.\nThe annotator was also encouraged to avoid singleton events, i.e., events that contain a single news story, if possible.\nWe realized from our own experience that people differ in their perception of an event, especially when the number of stories in that event is small.\nAs part of the guidelines, we instructed the annotator to assign titles to all the events in each topic.\nWe believe that this helps make her understanding of the events more concrete.\nWe do not, however, use or model these titles in our algorithms.\nIn defining dependencies between events, we imposed no restrictions on the graph structure.\nEach event could have a single parent, multiple parents or no parents.\nFurther, the graph could have cycles or orphan nodes.\nThe annotator was, however, instructed to assign a dependency from event A to event B if and only if the occurrence of B is 'either causally influenced by A or is closely related to A and follows A in time'.\nFrom the annotated topics, we created a training set of 26 topics and a test set of 27 topics by merging the 28 topics from TDT2 and the 25 from TDT3 and splitting them randomly.\nTable 1 shows that the training and test sets have fairly similar statistics.\nTable 1: Statistics of annotated data\n5.\nEVALUATION\nA system can generate some event model M' = (S', E') using certain algorithms,
which is usually different from the truth model M = (S, E) (we assume the annotator did not make any mistakes).\nComparing a system event model M' with the true model M requires comparing the entire event models, including their dependency structure.\nMoreover, different event granularities may create a large discrepancy between M' and M.\nThis is certainly non-trivial, as even testing whether two graphs are isomorphic has no known polynomial time solution.\nHence, instead of comparing the actual structure, we examine a pair of stories at a time and verify whether the system and true labels agree on their event memberships and dependencies.\nSpecifically, we compare two kinds of story pairs:\n\u2022 Cluster pairs (C (M)): These are the complete set of unordered pairs (si, sj) of stories si and sj that fall within the same event given a model M. Formally,\nwhere f is the function in M that maps stories to events as defined in equation 4.\n\u2022 Dependency pairs (D (M)): These are the set of all ordered pairs of stories (si, sj) such that there is a dependency from the event of si to the event of sj in the model M.\nNote the story pair is ordered here, so (si, sj) is not equivalent to (sj, si).\nIn our evaluation, a correct pair with the wrong direction will be considered a mistake.\nFigure 2: Evaluation measures\nAs we mentioned earlier in section 3, ignoring the direction may make the problem simpler, but we would lose the expressiveness of our representation.\nGiven these two sets of story pairs corresponding to the true event model M and the system event model M', we define recall and precision for each category as follows.\n\u2022 Cluster Precision (CP): the probability that two randomly selected stories si and sj are in the same true event given that they are in the same system event,\nwhere f' is the story-event mapping function corresponding to the model M'.\n\u2022 Cluster Recall (CR): the probability that two randomly selected stories si and sj are in the same
system event given that they are in the same true event.\n\u2022 Dependency Precision (DP): the probability that there is a dependency between the events of two randomly selected stories si and sj in the true model M given that they have a dependency in the system model M'.\nNote that the direction of dependency is important in the comparison.\n\u2022 Dependency Recall (DR): the probability that there is a dependency between the events of two randomly selected stories si and sj in the system model M' given that they have a dependency in the true model M. Again, the direction of dependency is taken into consideration.\nThe measures are illustrated by an example in figure 2.\nWe also combine these measures using the well-known F1-measure commonly used in text classification and other research areas, as shown below,\nwhere CF and DF are the cluster and dependency F1-measures respectively, and JF is the joint F1-measure that we use to measure the overall performance.\n6.\nTECHNIQUES\nThe task of event modeling can be split into two parts: clustering the stories into unique events in the topic and constructing dependencies among them.\nIn the following subsections, we describe the techniques we developed for each of these sub-tasks.\n6.1 Clustering\nEach topic is composed of multiple events, so stories must be clustered into events before we can model the dependencies among them.\nFor simplicity, all stories in the same topic are assumed to be available at one time, rather than arriving in a text stream.\nThis task is similar to traditional clustering, but features other than word distributions may also be critical in our application.\nIn many text clustering systems, the similarity between two stories is the inner product of their tf-idf vectors, hence we use it as one of our features.\nStories in the same event tend to follow temporal locality, so the time stamp of each story can be a useful feature.\nAdditionally, named entities such as person and location names
are another obvious feature when forming events.\nStories in the same event tend to be related to the same person(s) and location(s).\nIn this subsection, we present an agglomerative clustering algorithm that combines all these features.\nIn our experiments, however, we study the effect of each feature on the performance separately, using modified versions of this algorithm.\n6.1.1 Agglomerative clustering with time decay (ACDT)\nWe initialize our events to singleton events (clusters), i.e., each cluster contains exactly one story.\nSo the similarity between two events, to start with, is exactly the similarity between the corresponding stories.\nThe similarity wsum(s1, s2) between two stories s1 and s2 is given by the following formula:\nwsum(s1, s2) = \u03c91 cos(s1, s2) + \u03c92 Loc(s1, s2) + \u03c93 Per(s1, s2) (equation 12)\nHere \u03c91, \u03c92, \u03c93 are the weights on the different features.\nIn this work, we determined them empirically, but in the future, one could consider more sophisticated learning techniques to determine them.\ncos(s1, s2) is the cosine similarity of the term vectors.\nLoc(s1, s2) is 1 if there is some location that appears in both stories, otherwise it is 0.\nPer(s1, s2) is similarly defined for person names.\nWe use time decay when calculating the similarity of story pairs, i.e., the larger the time difference between two stories, the smaller their similarity.\nThe time period of each topic differs a lot, from a few days to a few months, so we normalize the time difference using the whole duration of that topic.\nThe time-decay-adjusted similarity is\nsim(s1, s2) = wsum(s1, s2) e^(-\u03b1 |t1 - t2| \/ T) (equation 13)\nwhere t1 and t2 are the time stamps of stories 1 and 2 respectively, T is the time difference between the earliest and the latest story in the given topic, and \u03b1 is the time decay factor.\nIn each iteration, we find the most similar event pair and merge them.\nWe have three different ways to compute the similarity between two events Eu and Ev:\n\u2022 Average link: In this case the similarity is the average of the similarities of all pairs of stories between Eu and Ev (equation 14).\n\u2022 Complete link: The
similarity between two events is given by the smallest of the pair-wise similarities (equation 15).\n\u2022 Single link: Here the similarity is given by the best similarity among all pairs of stories (equation 16).\nThis process continues until the maximum similarity falls below a threshold or the number of clusters is smaller than a given number.\n6.2 Dependency modeling\nCapturing dependencies is an extremely hard problem because it may require a 'deeper understanding' of the events in question.\nA human annotator decides on dependencies not just based on the information in the events, but also based on his\/her vast repertoire of domain knowledge and general understanding of how things operate in the world.\nFor example, in Figure 1 a human knows 'Trial and indictment of Osama' is influenced by 'Evidence gathered by CIA' because he\/she understands the process of law in general.\nWe believe a robust model should incorporate such domain knowledge in capturing dependencies, but in this work, as a first step, we rely on surface features such as the time ordering of news stories and word distributions to model them.\nOur experiments in later sections demonstrate that such features are indeed useful in capturing dependencies to a large extent.\nIn this subsection, we describe the models we considered for capturing dependencies.\nIn the rest of the discussion in this subsection, we assume that we are already given the mapping f' from stories to events, and we focus only on modeling the edges E'.\nFirst we define a couple of features that the following models will employ.\nWe define a 1-1 time-ordering function t that maps stories to {1, ..., n}, sorting stories in ascending order by their time of publication.\nThe event-time-ordering function te is then defined so that te time-orders events based on the time ordering of their respective first stories.\nWe will also use the average cosine similarity between two events as a feature.\n6.2.1 Complete-Link model\nIn this
model, we assume that there are dependencies between all pairs of events.\nThe direction of dependency is determined by the time-ordering of the first stories in the respective events.\nFormally, the system edges are defined as E' = {(Eu, Ev) | te(Eu) < te(Ev)}, where te is the event-time-ordering function.\nIn other words, the dependency edge is directed from event Eu to event Ev if the first story in event Eu is earlier than the first story in event Ev.\nWe point out that this is not to be confused with the complete-link algorithm in clustering.\nAlthough we use the same name, it will be clear from the context which one we refer to.\n6.2.2 Simple Thresholding\nThis model is an extension of the complete-link model with the additional constraint that there is a dependency between two events Eu and Ev only if the average cosine similarity sim(Eu, Ev) between them is greater than a threshold T. Formally, E' = {(Eu, Ev) | te(Eu) < te(Ev) and sim(Eu, Ev) > T}.\n6.2.3 Nearest Parent Model\nIn this model, we assume that each event can have at most one parent.\nWe define the set of dependencies as E' = {(Eu, Ev) | te(Ev) = te(Eu) + 1 and sim(Eu, Ev) > T}.\nThus, for each event Ev, the nearest parent model considers only the event immediately preceding it, as defined by te, as a potential candidate.\nThe candidate is assigned as the parent only if the average similarity exceeds a pre-defined threshold T.\n6.2.4 Best Similarity Model\nThis model also assumes that each event can have at most one parent.\nAn event Ev is assigned a parent Eu if and only if Eu is the most similar earlier event to Ev and the similarity exceeds a threshold T.
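The nearest-parent and best-similarity rules can be written in a few lines of Python. This is an illustrative sketch only, not the authors' implementation: the function names, the dict-based term vectors, and the integer event indices are our own assumptions; events are assumed to be given as lists of story term-vectors already sorted by te (the time of each event's first story).

```python
# Sketch of two of the dependency models: Nearest Parent and Best Similarity.
# An event is a list of story term-vectors (dicts mapping term -> tf-idf weight).
import math

def cosine(u, v):
    # Cosine similarity of two sparse term vectors.
    dot = sum(u[w] * v[w] for w in u if w in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def avg_similarity(Eu, Ev):
    # Average cosine similarity over all story pairs of two events.
    return sum(cosine(s, t) for s in Eu for t in Ev) / (len(Eu) * len(Ev))

def nearest_parent_edges(events, threshold):
    # Each event may take only the immediately preceding event (by te)
    # as its parent, and only if the average similarity exceeds the threshold.
    edges = []
    for v in range(1, len(events)):
        if avg_similarity(events[v - 1], events[v]) > threshold:
            edges.append((v - 1, v))
    return edges

def best_similarity_edges(events, threshold):
    # Each event takes its most similar earlier event as its parent,
    # again subject to the threshold.
    edges = []
    for v in range(1, len(events)):
        u = max(range(v), key=lambda u: avg_similarity(events[u], events[v]))
        if avg_similarity(events[u], events[v]) > threshold:
            edges.append((u, v))
    return edges
```

The complete-link and simple-thresholding models follow the same pattern, except that every earlier event (or every earlier event above the threshold) becomes a parent rather than at most one.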
Mathematically, this can be expressed as E' = {(Eu, Ev) | Eu = argmax over all Eu' with te(Eu') < te(Ev) of sim(Eu', Ev), and sim(Eu, Ev) > T}, where sim denotes the average cosine similarity between two events.\n6.2.5 Maximum Spanning Tree model\nIn this model, we first build a maximum spanning tree (MST) using a greedy algorithm on a fully connected, weighted, undirected graph whose vertices are the events and whose edges \u00ca are weighted by the average cosine similarity between the events they connect.\nLet MST(\u00ca) be the set of edges in the maximum spanning tree of this graph.\nOur directed dependency edges E' are then obtained by directing each MST edge according to the event-time-ordering te.\nThus in this model, we assign dependencies between the most similar events in the topic.\n7.\nEXPERIMENTS\nOur experiments consist of three parts.\nFirst we modeled only the event clustering part (defining the mapping function f') using the clustering algorithms described in section 6.1.\nThen we modeled only the dependencies by providing the system with the true clusters and running only the dependency algorithms of section 6.2.\nFinally, we experimented with combinations of clustering and dependency algorithms to produce the complete event model.\nThis way of experimentation allows us to compare the performance of our algorithms in isolation and in association with other components.\nThe following subsections present the three parts of our experimentation.\n7.1 Clustering\nWe tried several variations of the ACDT algorithm to study the effects of the various features on clustering performance.\nAll the parameters are learned by tuning on the training set.\nWe also tested the algorithms on the test set with parameters fixed at their optimal values learned from training.\nWe used agglomerative clustering based on only cosine similarity as our clustering baseline.\nTable 2: Comparison of agglomerative clustering algorithms (training set)\nThe results on the training and test sets are in Tables 2 and 3 respectively.\nWe use the Cluster F1-measure (CF) averaged over all topics as our evaluation criterion.\nTable 3: Comparison of agglomerative clustering algorithms (test set)\nA P-value marked with a * means that it is a statistically significant
improvement over the baseline (95% confidence level, one-tailed T-test).\nThe methods shown in Tables 2 and 3 are:\n\u2022 Baseline: tf-idf vector weights, cosine similarity, average link in clustering.\nIn equation 12, \u03c91 = 1 and \u03c92 = \u03c93 = 0, and \u03b1 = 0 in equation 13.\nThis F-value is the maximum obtained by tuning the threshold.\n\u2022 cos + 1-lnk: Single-link comparison (see equation 16) is used, where the similarity of two clusters is the maximum over all story pairs; other configurations are the same as the baseline run.\n\u2022 cos + all-lnk: The complete-link algorithm of equation 15 is used.\nSimilar to single link, but it takes the minimum similarity over all story pairs.\n\u2022 cos + Loc + avg-lnk: Location names are used when calculating similarity, with \u03c92 = 0.05 in equation 12.\nAll algorithms starting from this one use average link (equation 14), since single link and complete link do not show any improvement in performance.\n\u2022 cos + Per + avg-lnk: \u03c93 = 0.05 in equation 12, i.e., we put some weight on person names in the similarity.\n\u2022 cos + TD + avg-lnk: Time decay coefficient \u03b1 = 1 in equation 13, which means the similarity between two stories decays to 1\/e of its value if they are at opposite ends of the topic.\n\u2022 cos + N(T) + avg-lnk: Use the number of true events to control the agglomerative clustering algorithm.\nWhen the number of clusters drops below the number of true events, stop merging clusters.\n\u2022 cos + N(T) + T + avg-lnk: similar to N(T), but also stop agglomeration if the maximal similarity is below the threshold T.
\u2022 cos + TD + N(T) + avg-lnk: similar to N(T), but the similarities are decayed, with \u03b1 = 1 in equation 13.\n\u2022 cos + TD + N(T) + T + avg-lnk: similar to TD + N(T), but calculation halts when the maximal similarity is smaller than the threshold T.\nOur experiments demonstrate that single-link and complete-link similarities perform worse than average link, which is reasonable since average link is less sensitive to one or two story pairs.\nWe had expected locations and person names to improve the result, but this is not the case.\nAnalysis of the topics shows that many on-topic stories share the same locations or persons irrespective of the event they belong to, so these features may be more useful in identifying topics than events.\nTime decay is successful because events are temporally localized, i.e., stories discussing the same event tend to be adjacent to each other in terms of time.\nWe also noticed that providing the number of true events improves the performance, since it guides the clustering algorithm to the correct granularity.\nHowever, for most applications, it is not available.\nWe used it only as a \"cheat\" experiment for comparison with other algorithms.\nOn the whole, time decay proved to be the most powerful feature besides cosine similarity on both the training and test sets.\n7.2 Dependencies\nIn this subsection, our goal is to model only the dependencies.\nWe use the true mapping function f and, by implication, the true events.\nWe build our dependency structure E' using all five models described in section 6.2.\nWe first train our models on the 26 training topics.\nTraining involves learning the best threshold T for each of the models.\nWe then test the performance of all the trained models on the 27 test topics.\nWe evaluate our performance using the average values of Dependency Precision (DP), Dependency Recall (DR) and Dependency F-measure (DF).\nWe consider the complete-link model to be our baseline since, for each event, it trivially
considers all earlier events to be parents.\nTable 4 lists the results on the training set.\nWe see that while all the algorithms except MST outperform the baseline complete-link algorithm, the nearest parent algorithm is statistically significantly better than the baseline in terms of its DF-value, using a one-tailed paired T-test at the 95% confidence level.\nTable 4: Results on the training set: Best T is the optimal value of the threshold T. * indicates the corresponding model is statistically significant compared to the baseline using a one-tailed, paired T-test at 95% confidence level.\nIn Table 5 we present the comparison of the models on the test set.\nHere, we do not use any tuning but set the threshold to the corresponding optimal values learned from the training set.\nThe results hold some surprises: the nearest parent model, which was significantly better than the baseline on the training set, turns out to be worse than the baseline on the test set.\nHowever, all the other models are better than the baseline, including the best similarity model, which is statistically significantly so.\nNotice that all the models that perform better than the baseline in terms of DF actually sacrifice recall performance compared to the baseline, but improve on precision substantially, thereby improving their performance on the DF-measure.\nWe notice that both simple thresholding and best similarity are better than the baseline on both the training and test sets, although the improvement is not significant.\nOn the whole, we observe that the surface-level features we used capture the dependencies to a reasonable degree, achieving a best value of 0.72 DF on the test set.\nAlthough there is a lot of room for improvement, we believe this is a good first step.\nTable 5: Results on the test set\n7.3 Combining Clustering and Dependencies\nNow that we have studied the clustering and dependency algorithms in isolation, we combine the best-performing algorithms and build the entire event model.\nSince
none of the dependency algorithms has been shown to be consistently and significantly better than the others, we use all of them in our experimentation.\nFrom the clustering techniques, we choose the best-performing cos + TD.\nAs a baseline, we use a combination of the baselines of each component, i.e., cos for clustering and complete-link for dependencies.\nNote that we need to retrain all the algorithms on the training set, because our objective function to optimize is now JF, the joint F-measure.\nFor each algorithm, we need to optimize both the clustering threshold and the dependency threshold.\nWe did this empirically on the training set, and the optimal values are listed in Table 6.\nThe results on the training set, also presented in Table 6, indicate that cos + TD + Simple-Thresholding is significantly better than the baseline in terms of the joint F-value JF, using a one-tailed paired T-test at the 95% confidence level.\nOn the whole, we notice that while the clustering performance is comparable to the experiments in section 7.1, the overall performance is undermined by the low dependency performance.\nUnlike our experiments in section 7.2, where we had provided the true clusters to the system, in this case the system has to deal with deterioration in cluster quality.\nHence the performance of the dependency algorithms suffers substantially, thereby lowering the overall performance.\nThe results on the test set tell a very similar story, as shown in Table 7.\nWe also notice a fair amount of consistency in the performance of the combination algorithms.\ncos + TD + Simple-Thresholding outperforms the baseline significantly.\nThe test set results also point to the fact that the clustering component remains a bottleneck in achieving good overall performance.","keyphrases":["event thread","event","thread","automat techniqu","flat hierarchi","depend","event model","novel featur","tempor local","event recognit","time-order","new organ","topic detect","topic 
cluster","inter-relat event","semin event","quick overview","hidden markov model","flatclust","atom","microscop event","map function","direct edg","time order","agglom cluster","cosin similar","term vector","simpl threshold","maximum span tree","correct granular","depend precis","depend recal","depend f-measur","temporalloc","timedecai","cluster"],"prmu":["P","P","P","P","P","P","P","P","P","P","U","M","M","M","M","M","U","M","U","U","M","U","U","U","U","U","U","U","U","U","M","M","M","U","U","U"]} {"id":"C-72","title":"GUESS: Gossiping Updates for Efficient Spectrum Sensing","abstract":"Wireless radios of the future will likely be frequency-agile, that is, supporting opportunistic and adaptive use of the RF spectrum. Such radios must coordinate with each other to build an accurate and consistent map of spectral utilization in their surroundings. We focus on the problem of sharing RF spectrum data among a collection of wireless devices. The inherent requirements of such data and the time-granularity at which it must be collected makes this problem both interesting and technically challenging. We propose GUESS, a novel incremental gossiping approach to coordinated spectral sensing. It (1) reduces protocol overhead by limiting the amount of information exchanged between participating nodes, (2) is resilient to network alterations, due to node movement or node failures, and (3) allows exponentially-fast information convergence. We outline an initial solution incorporating these ideas and also show how our approach reduces network overhead by up to a factor of 2.4 and results in up to 2.7 times faster information convergence than alternative approaches.","lvl-1":"GUESS: Gossiping Updates for Efficient Spectrum Sensing Nabeel Ahmed University of Waterloo David R. Cheriton School of Computer Science n3ahmed@uwaterloo.ca David Hadaller University of Waterloo David R. 
Cheriton School of Computer Science dthadaller@uwaterloo.ca Srinivasan Keshav University of Waterloo David R. Cheriton School of Computer Science keshav@uwaterloo.ca ABSTRACT Wireless radios of the future will likely be frequency-agile, that is, supporting opportunistic and adaptive use of the RF spectrum.\nSuch radios must coordinate with each other to build an accurate and consistent map of spectral utilization in their surroundings.\nWe focus on the problem of sharing RF spectrum data among a collection of wireless devices.\nThe inherent requirements of such data and the time-granularity at which it must be collected makes this problem both interesting and technically challenging.\nWe propose GUESS, a novel incremental gossiping approach to coordinated spectral sensing.\nIt (1) reduces protocol overhead by limiting the amount of information exchanged between participating nodes, (2) is resilient to network alterations, due to node movement or node failures, and (3) allows exponentially-fast information convergence.\nWe outline an initial solution incorporating these ideas and also show how our approach reduces network overhead by up to a factor of 2.4 and results in up to 2.7 times faster information convergence than alternative approaches.\nCategories and Subject Descriptors C.2.4 [Distributed Systems]: Distributed applications General Terms Algorithms, Performance, Experimentation 1.\nINTRODUCTION There has recently been a huge surge in the growth of wireless technology, driven primarily by the availability of unlicensed spectrum.\nHowever, this has come at the cost of increased RF interference, which has caused the Federal Communications Commission (FCC) in the United States to re-evaluate its strategy on spectrum allocation.\nCurrently, the FCC has licensed RF spectrum to a variety of public and private institutions, termed primary users.\nNew spectrum allocation regimes implemented by the FCC use dynamic spectrum access schemes to either negotiate or 
opportunistically allocate RF spectrum to unlicensed secondary users that can use it when the primary user is absent.\nThe second type of allocation scheme is termed opportunistic spectrum sharing.\nThe FCC has already legislated this access method for the 5 GHz band and is also considering the same for TV broadcast bands [1].\nAs a result, a new wave of intelligent radios, termed cognitive radios (or software defined radios), is emerging that can dynamically re-tune their radio parameters based on interactions with their surrounding environment.\nUnder the new opportunistic allocation strategy, secondary users are obligated not to interfere with primary users (senders or receivers).\nThis can be done by sensing the environment to detect the presence of primary users.\nHowever, local sensing is not always adequate, especially in cases where a secondary user is shadowed from a primary user, as illustrated in Figure 1.\nFigure 1: Without cooperation, shadowed users are not able to detect the presence of the primary user.\nHere, coordination between secondary users is the only way for shadowed users to detect the primary.\nIn general, cooperation improves sensing accuracy by an order of magnitude when compared to not cooperating at all [5].\nTo realize this vision of dynamic spectrum access, two fundamental problems must be solved: (1) efficient and coordinated spectrum sensing and (2) distributed spectrum allocation.\nIn this paper, we propose strategies for coordinated
spectrum sensing that are low cost, operate on timescales comparable to the agility of the RF environment, and are resilient to network failures and alterations.\nWe defer the problem of spectrum allocation to future work.\nSpectrum sensing techniques for cognitive radio networks [4, 17] are broadly classified into three regimes: (1) centralized coordinated techniques, (2) decentralized coordinated techniques, and (3) decentralized uncoordinated techniques.\nWe advocate a decentralized coordinated approach, similar in spirit to OSPF link-state routing used in the Internet.\nThis is more effective than uncoordinated approaches because making decisions based only on local information is fallible (as shown in Figure 1).\nMoreover, compared to centralized approaches, decentralized techniques are more scalable, robust, and resistant to network failures and security attacks (e.g. jamming).\nCoordinating sensory data between cognitive radio devices is technically challenging because accurately assessing spectrum usage requires exchanging potentially large amounts of data with many radios at very short time scales.\nData size grows rapidly due to the large number (i.e. thousands) of spectrum bands that must be scanned.\nThis data must also be exchanged between potentially hundreds of neighboring secondary users at short time scales, to account for rapid changes in the RF environment.\nThis paper presents GUESS, a novel approach to coordinated spectrum sensing for cognitive radio networks.\nOur approach is motivated by the following key observations: 1.\nLow-cost sensors collect approximate data: Most devices have limited sensing resolution because they are low-cost and low duty-cycle devices and thus cannot perform complex RF signal processing (e.g.
matched filtering).\nMany are typically equipped with simple energy detectors that gather only approximate information.\n2.\nApproximate summaries are sufficient for coordination: Approximate statistical summaries of sensed data are sufficient for correlating sensed information between radios, as relative usage information is more important than absolute usage data.\nThus, exchanging exact RF information may not be necessary, and more importantly, too costly for the purposes of spectrum sensing.\n3.\nRF spectrum changes incrementally: On most bands, RF spectrum utilization changes infrequently.\nMoreover, utilization of a specific RF band affects only that band and not the entire spectrum.\nTherefore, if the usage pattern of a particular band changes substantially, nodes detecting that change can initiate an update protocol to update the information for that band alone, leaving in place information already collected for other bands.\nThis allows rapid detection of change while saving the overhead of exchanging unnecessary information.\nBased on these observations, GUESS makes the following contributions: 1.\nA novel approach that applies randomized gossiping algorithms to the problem of coordinated spectrum sensing.\nThese algorithms are well suited to coordinated spectrum sensing due to the unique characteristics of the problem: i.e. 
radios are power-limited, mobile and have limited bandwidth to support spectrum sensing capabilities.\n2.\nAn application of in-network aggregation for dissemination of spectrum summaries.\nWe argue that approximate summaries are adequate for performing accurate radio parameter tuning.\n3.\nAn extension of in-network aggregation and randomized gossiping to support incremental maintenance of spectrum summaries.\nCompared to standard gossiping approaches, incremental techniques can further reduce overhead and protocol execution time by requiring fewer radio resources.\nThe rest of the paper is organized as follows.\nSection 2 motivates the need for a low cost and efficient approach to coordinated spectrum sensing.\nSection 3 discusses related work in the area, while Section 4 provides a background on in-network aggregation and randomized gossiping.\nSections 5 and 6 discuss extensions and protocol details of these techniques for coordinated spectrum sensing.\nSection 7 presents simulation results showcasing the benefits of GUESS, and Section 8 presents a discussion and some directions for future work.\n2.\nMOTIVATION To estimate the scale of the problem, In-stat predicts that the number of WiFi-enabled devices sold annually alone will grow to 430 million by 2009 [2].\nTherefore, it would be reasonable to assume that a typical dense urban environment will contain several thousand cognitive radio devices in range of each other.\nAs a result, distributed spectrum sensing and allocation would become both important and fundamental.\nCoordinated sensing among secondary radios is essential due to limited device sensing resolution and physical RF effects such as shadowing.\nCabric et al. 
[5] illustrate the gains from cooperation and show an order of magnitude reduction in the probability of interference with the primary user when only a small fraction of secondary users cooperate.\nHowever, such coordination is non-trivial due to: (1) the limited bandwidth available for coordination, (2) the need to communicate this information on short timescales, and (3) the large amount of sensory data that needs to be exchanged.\nLimited Bandwidth: Due to restrictions of cost and power, most devices will likely not have dedicated hardware for supporting coordination.\nThis implies that both data and sensory traffic will need to be time-multiplexed onto a single radio interface.\nTherefore, any time spent communicating sensory information takes away from the device's ability to perform its intended function.\nThus, any such coordination must incur minimal network overhead.\nShort Timescales: Further compounding the problem is the need to immediately propagate updated RF sensory data, in order to allow devices to react to it in a timely fashion.\nThis is especially true due to mobility, as rapid changes of the RF environment can occur due to device and obstacle movements.\nHere, fading and multi-path interference heavily impact sensing abilities.\nSignal level can drop to a deep null with just a \u03bb\/4 movement in receiver position (3.7 cm at 2 GHz), where \u03bb is the wavelength [14].\nCoordination which does not support rapid dissemination of information will not be able to account for such RF variations.\nLarge Sensory Data: Because cognitive radios can potentially use any part of the RF spectrum, there will be numerous channels that they need to scan.\nSuppose we wish to compute the average signal energy in each of 100 discretized frequency bands, and each signal can have up to 128 discrete energy levels.\nExchanging complete sensory information between nodes would require 700 bits per transmission (for 100 channels, each requiring seven bits of
information). Exchanging this information among even a small group of 50 devices each second would require (50 time-steps × 50 devices × 700 bits per transmission) = 1.67 Mbps of aggregate network bandwidth. Contrast this with the use of a randomized gossip protocol to disseminate such information, together with FM bit vectors to perform in-network aggregation. By applying gossip and FM aggregation, aggregate bandwidth requirements drop to (c·log N time-steps × 50 devices × 700 bits per transmission) = 0.40 Mbps, since only 12 time-steps are needed to propagate the data (with c = 2, for illustrative purposes; convergence time is correlated with the connectivity topology of the devices, which in turn depends on the environment). This is explained further in Section 4.

Based on these insights, we propose GUESS, a low-overhead approach which uses incremental extensions to FM aggregation and randomized gossiping for efficient coordination within a cognitive radio network. As we show in Section 7, these incremental extensions can further reduce bandwidth requirements by up to a factor of 2.4 over the standard approaches discussed above.

Figure 2: Using FM aggregation to compute average signal level measured by a group of devices.

3. RELATED WORK
Research in cognitive radio has increased rapidly [4, 17] over the years, and it is projected as one of the leading enabling technologies for wireless networks of the future [9]. As mentioned earlier, the FCC has already identified new regimes for spectrum sharing between primary users and secondary users, and a variety of systems have been proposed in the literature to support such sharing [4, 17]. Detecting the presence of a primary user is non-trivial, especially a legacy primary user that is not cognitive-radio aware. Secondary users must be able to detect the primary even if they cannot properly decode its signals. This has been shown by Sahai et al.
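As a sanity check, the two bandwidth figures above can be reproduced in a few lines. This is a sketch of our own; the one assumption is that the paper rounds using binary megabits (1 Mb = 2^20 bits), which matches both the 1.67 and the 0.40 figures.

```python
import math

CHANNELS, BITS_PER_CHANNEL = 100, 7
DEVICES = 50
bits_per_tx = CHANNELS * BITS_PER_CHANNEL          # 700 bits per transmission

# Naive exchange: 50 time-steps x 50 devices x 700 bits each second.
naive_bits = 50 * DEVICES * bits_per_tx
print(round(naive_bits / 2**20, 2))                # -> 1.67 (Mbps)

# Gossip + FM aggregation: c * log2(N) time-steps suffice (c = 2).
steps = math.ceil(2 * math.log2(DEVICES))          # -> 12 time-steps
gossip_bits = steps * DEVICES * bits_per_tx
print(round(gossip_bits / 2**20, 2))               # -> 0.4 (Mbps)
```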
[16] to be extremely difficult, even if the modulation scheme is known. Sophisticated and costly hardware, beyond a simple energy detector, is required to improve signal detection accuracy [16]. Moreover, a shadowed secondary user may not even be able to detect signals from the primary. As a result, simple local sensing approaches have not gained much momentum. This has motivated the need for cooperation among cognitive radios [16].

More recently, some researchers have proposed approaches for radio coordination. Liu et al. [11] consider a centralized access point (or base station) architecture in which sensing information is forwarded to APs for spectrum allocation purposes. APs direct mobile clients to collect such sensing information on their behalf. However, due to its need for a fixed AP infrastructure, such a centralized approach is clearly not scalable. In other work, Zhao et al. [17] propose a distributed coordination approach for spectrum sensing and allocation, in which cognitive radios organize into clusters and coordination occurs within clusters. The CORVUS [4] architecture proposes a similar clustering method that can use either a centralized or decentralized approach to manage clusters. Although an improvement over purely centralized approaches, these techniques still require a setup phase to generate the clusters, which not only adds delay, but also requires many of the secondary users to be static or quasi-static. In contrast, GUESS does not place such restrictions on secondary users, and can even function in highly mobile environments.

4. BACKGROUND
This section provides the background for our approach. We present the FM aggregation scheme that we use to generate spectrum summaries and perform in-network aggregation. We also discuss randomized gossiping techniques for disseminating aggregates in a cognitive radio network.

4.1 FM Aggregation
Aggregation is the process by which nodes in a distributed network combine data received from
neighboring nodes with their local value to generate a combined aggregate. This aggregate is then communicated to other nodes in the network, and the process repeats until the aggregate at all nodes has converged to the same value, i.e. the global aggregate. Double-counting is a well-known problem in this process: nodes may contribute more than once to the aggregate, causing inaccuracy in the final result. Intuitively, nodes could tag the aggregate value they transmit with information about which nodes have contributed to it, but this approach is not scalable. Order and Duplicate Insensitive (ODI) techniques have been proposed in the literature [10, 15]. We adopt the ODI approach pioneered by Flajolet and Martin (FM) for the purposes of aggregation. Next we outline the FM approach; for full details, see [7].

Suppose we want to compute the number of nodes in the network, i.e. the COUNT query. To do so, each node performs a coin-toss experiment as follows: toss an unbiased coin, stopping after the first head is seen. The node then sets the i-th bit in a bit vector (initially filled with zeros), where i is the number of coin tosses it performed. The intuition is that as the number of nodes doing coin-toss experiments increases, the probability of a more significant bit being set in one of the nodes' bit vectors increases. These bit vectors are then exchanged among nodes. When a node receives a bit vector, it updates its local bit vector by bitwise OR-ing it with the received vector (as shown in Figure 2, which computes AVERAGE). At the end of the aggregation process, every node, with high probability, has the same bit vector. The actual value of the count aggregate is then computed using the formula AGG_FM = 2^(j−1)/0.77351, where j represents the bit position of the least significant zero in the aggregate bit vector [7]. Although such aggregates are very compact, requiring only O(log N) state space (where N is the
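The FM counting scheme described above can be sketched in a few lines of Python (the function names are ours, and a plain int serves as the bit vector):

```python
import random

def fm_experiment(rng):
    """One coin-toss experiment: toss an unbiased coin until the first
    head; return the (1-indexed) number of tosses performed."""
    i = 1
    while rng.random() < 0.5:   # tail: keep tossing
        i += 1
    return i

def fm_estimate(vector):
    """AGG_FM = 2^(j-1) / 0.77351, where j is the bit position of the
    least significant zero in the aggregate bit vector."""
    j = 1
    while vector & (1 << (j - 1)):
        j += 1
    return 2 ** (j - 1) / 0.77351

rng = random.Random(42)
# Each of 1,000 nodes sets bit i in its local vector ...
vectors = [1 << (fm_experiment(rng) - 1) for _ in range(1000)]
# ... and gossiping bitwise-ORs the vectors together; OR is order and
# duplicate insensitive, so double-counting does no harm.
merged = 0
for v in vectors:
    merged |= v
print(fm_estimate(merged))      # power-of-2-granular estimate of 1000
```

Re-merging the same vectors leaves the aggregate unchanged, which is exactly the ODI property that makes gossip-based dissemination safe.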
number of nodes), they may not be very accurate, as they can only approximate values to the closest power of 2, potentially causing errors of up to 50%. More accurate aggregates can be computed by maintaining multiple bit vectors at each node, as explained in [7]. This decreases the error to within O(1/√m), where m is the number of such bit vectors. Queries other than COUNT can also be computed using variants of this basic counting algorithm, as discussed in [3] (and shown in Figure 2). Transmitting FM bit vectors between nodes is done using randomized gossiping, discussed next.

4.2 Gossip Protocols
Gossip-based protocols operate in discrete time-steps; a time-step is the amount of time required for all transmissions in that time-step to complete. At every time-step, each node having something to send randomly selects one or more neighboring nodes and transmits its data to them. The randomized propagation of information provides fault-tolerance and resilience to network failures and outages. We emphasize that this characteristic also allows the protocol to operate without relying on any underlying network structure. Gossip protocols have been shown to provide exponentially fast convergence (convergence being the state in which all nodes have the most up-to-date view of the network), on the order of O(log N) time-steps [10], where N is the number of nodes (or radios). These protocols can therefore easily scale to very dense environments. Two types of gossip protocols are:

• Uniform Gossip: At each time-step, each node chooses a random neighbor and sends its data to it. This process repeats for O(log N) steps (where N is the number of nodes in the network). Uniform gossip provides exponentially fast convergence with low network overhead [10].

• Random Walk: Only a subset of the nodes (termed designated nodes) communicate in a particular time-step. At startup, k nodes are randomly elected as
designated nodes. In each time-step, each designated node sends its data to a random neighbor, which becomes designated for the subsequent time-step (much like passing a token). This process repeats until the aggregate has converged in the network. Random walk has been shown to provide convergence bounds similar to uniform gossip in problems of similar context [8, 12].

5. INCREMENTAL PROTOCOLS
5.1 Incremental FM Aggregates
One limitation of FM aggregation is that it does not support updates. Due to the probabilistic nature of FM, once bit vectors have been ORed together, information cannot simply be removed from them, as each node's contribution has not been recorded. We propose the use of delete vectors, an extension of FM to support updates. We maintain a separate aggregate delete vector whose value is subtracted from the original aggregate vector's value to obtain the resulting value:

AGG_INC = (2^(a−1)/0.77351) − (2^(b−1)/0.77351)    (1)

Here, a and b represent the bit positions of the least significant zero in the original and delete bit vectors, respectively. Suppose we wish to compute the average signal level detected in a particular frequency. To compute this, we compute the SUM of all signal level measurements and divide it by the COUNT of the number of measurements. A SUM aggregate is computed similarly to COUNT (explained in Section 4.1), except that each node performs s coin-toss experiments, where s is the locally measured signal level. Figure 2 illustrates the sequence by which the average signal energy in a particular band is computed using FM aggregation. Now suppose that the measured signal at a node changes from s to s′. The vectors are updated as follows.

• s′ > s: We simply perform (s′ − s) more coin-toss experiments and bitwise OR the result with the original bit vector.

• s′ < s: We increase the value of the delete vector by performing (s − s′) coin-toss experiments and bitwise OR the result
with the current delete vector. Using delete vectors, we can now support updates to the measured signal level. With the original implementation of FM, the aggregate would need to be discarded and a new one recomputed every time an update occurred. Thus, delete vectors provide a low-overhead alternative for applications whose data changes incrementally, such as signal level measurements in a coordinated spectrum sensing environment. Next we discuss how these aggregates can be communicated between devices using incremental routing protocols.

5.2 Incremental Routing Protocol
We use the following incremental variants of the routing protocols presented in Section 4.2 to support incremental updates to previously computed aggregates.

Figure 3: State diagram each device passes through as updates proceed in the system.

• Incremental Gossip Protocol (IGP): When an update occurs, the updated node initiates the gossiping procedure. Other nodes only begin gossiping once they receive the update. Therefore, nodes receiving the update become active and continue communicating with their neighbors until the update protocol terminates, after O(log N) time-steps.

• Incremental Random Walk Protocol (IRWP): When an update (or updates) occur in the system, instead of starting random walks at k random nodes in the network, all k random walks are initiated from the updated node(s). The rest of the protocol proceeds in the same fashion as the standard random walk protocol. The allocation of walks to updates is discussed in more detail in [3], where the authors show that the number of walks has an almost negligible impact on network overhead.

6. PROTOCOL DETAILS
Using incremental routing protocols to disseminate incremental FM aggregates is a natural fit for the problem of coordinated spectrum sensing. Here we outline the
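The delete-vector bookkeeping of Section 5.1 can be sketched as follows. This is a toy, single-node Python sketch under our own naming; in the real protocol these vectors would be OR-ed across devices via gossip rather than updated locally.

```python
import random

rng = random.Random(7)

def toss_bit(rng):
    """One FM coin-toss experiment, returned as a one-bit mask."""
    i = 1
    while rng.random() < 0.5:
        i += 1
    return 1 << (i - 1)

def lsz(vector):
    """1-indexed position of the least significant zero bit."""
    j = 1
    while vector & (1 << (j - 1)):
        j += 1
    return j

def agg_inc(original, delete):
    """Eq. (1): AGG_INC = 2^(a-1)/0.77351 - 2^(b-1)/0.77351."""
    return (2 ** (lsz(original) - 1) - 2 ** (lsz(delete) - 1)) / 0.77351

original = delete = 0
s = 60                            # locally measured signal level
for _ in range(s):                # SUM: s coin-toss experiments
    original |= toss_bit(rng)

s_new = 20                        # signal drops: (s - s_new) experiments
for _ in range(s - s_new):        # are OR-ed into the delete vector
    delete |= toss_bit(rng)

# Estimate of the updated sum; single vectors are coarse (Section 4.1),
# but no recomputation of the original aggregate was needed.
print(agg_inc(original, delete))
```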
implementation of such techniques for a cognitive radio network. We continue with the example from Section 5.1, where we wish to coordinate a group of wireless devices to compute the average signal level in a particular frequency band. Using either incremental random walk or incremental gossip, each device proceeds through three phases in order to determine the global average signal level for a particular frequency band. Figure 3 shows a state diagram of these phases.

Susceptible: Each device starts in the susceptible state and becomes infectious only when its locally measured signal level changes, or when it receives an update message from a neighboring device. If a local change is observed, the device updates either the original or the delete bit vector, as described in Section 5.1, and moves into the infectious state. If it receives an update message, it ORs the received original and delete bit vectors with its local bit vectors and moves into the infectious state. Note that because signal level measurements may change sporadically over time, a smoothing function, such as an exponentially weighted moving average, should be applied to these measurements.

Infectious: Once a device is infectious, it continues to send its up-to-date bit vectors, using either incremental random walk or incremental gossip, to neighboring nodes. Due to FM's order and duplicate insensitive (ODI) properties, simultaneously occurring updates are handled seamlessly by the protocol. Update messages contain a time stamp indicating when the update was generated, and each device maintains a local time stamp of when it received the most recent update. Using this information, a device moves into the recovered state once enough time has passed for the most recent update to have converged. As discussed in Section 4.2, this happens after O(log N) time-steps.

Figure 4: Execution times of Incremental Protocols. (a) Incremental Gossip and Uniform Gossip on Clique; (b) Incremental Random Walk and Random Walk on Clique; (c) Incremental Random Walk and Random Walk on Power-Law Random Graph.

Figure 5: Network overhead of Incremental Protocols. (a) Incremental Gossip and Uniform Gossip on Clique; (b) Incremental Random Walk and Random Walk on Clique; (c) Incremental Random Walk and Random Walk on Power-Law Random Graph.

Recovered: A recovered device ceases to propagate any update information. At this point, it performs clean-up and prepares for the next infection by entering the susceptible state. Once all devices have entered the recovered state, the system will have converged and, with high probability, all devices will have the up-to-date average signal level. Due to the cumulative nature of FM, even if all devices have not converged, the next update will include all previous updates. Nevertheless, the probability that gossip fails to converge is small, and has been shown to be O(1/N) [10].

For coordinated spectrum sensing, non-incremental routing protocols can be implemented in a similar fashion. Random walk would operate by having devices periodically drop the aggregate and re-run the protocol. Each device would perform a coin
toss (biased on the number of walks) to determine whether or not it is a designated node. This is different from the protocol discussed above, where only updated nodes initiate random walks. Similar techniques can be used to implement standard gossip.

7. EVALUATION
We now provide a preliminary evaluation of GUESS in simulation. A more detailed evaluation of this approach can be found in [3]. Here we focus on how incremental extensions to gossip protocols can lead to further improvements over standard gossiping techniques for the problem of coordinated spectrum sensing.

Simulation Setup: We implemented a custom simulator in C++. We study the improvements of our incremental gossip protocols over standard gossiping in two dimensions: execution time and network overhead. We use two topologies to represent device connectivity: a clique, to eliminate the effects of the underlying topology on protocol performance, and a BRITE-generated [13] power-law random graph (PLRG), to illustrate how our results extend to more realistic scenarios. We simulate a large deployment of 1,000 devices to analyze protocol scalability. In our simulations, we compute the average signal level in a particular band by disseminating FM bit vectors. In each run of the simulation, we induce a change in the measured signal at one or more devices. A run ends when the new average signal level has converged in the network. For each data point, we ran 100 simulations, and 95% confidence intervals (error bars) are shown.

Simulation Parameters: Each transmission involves sending 70 bits of information to a neighboring node. To compute the AVERAGE aggregate, four bit vectors need to be transmitted: the original SUM vector, the SUM delete vector, the original COUNT vector, and the COUNT delete vector. Non-incremental protocols do not transmit the delete vectors. Each transmission also includes a time stamp of when the update was generated. We assume nodes communicate on a common control
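The O(log N) convergence behavior underlying these protocols can be illustrated with a small, self-contained simulation. This is our own sketch, not the paper's C++ simulator: push-style uniform gossip on a clique, counting time-steps until every node holds an update.

```python
import math
import random

def uniform_gossip_rounds(n, seed=0):
    """Push-style uniform gossip on a clique of n nodes: in each
    time-step, every node that already holds the update forwards it to
    one uniformly random node. Returns the number of time-steps until
    all n nodes hold the update."""
    rng = random.Random(seed)
    has_update = [False] * n
    has_update[0] = True              # the node that observed the change
    rounds = 0
    while not all(has_update):
        rounds += 1
        # Snapshot the senders first so all sends in a time-step
        # happen "in parallel".
        senders = [i for i, h in enumerate(has_update) if h]
        for _ in senders:
            has_update[rng.randrange(n)] = True
    return rounds

n = 1000
print(uniform_gossip_rounds(n), "rounds; log2(n) =", round(math.log2(n), 1))
```

Since the informed set can at most double per time-step, the round count is never below log2(n), and in practice it stays within a small constant factor of it.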
channel at 2 Mbps. Therefore, one time-step of protocol execution corresponds to the time required for 1,000 nodes to sequentially send 70 bits at 2 Mbps. Sequential use of the control channel is a worst case for our protocols; in practice, multiple control channels could be used in parallel to reduce execution time. We also assume nodes are loosely time-synchronized, the implications of which are discussed further in [3]. Finally, in order to isolate the effect of protocol operation on performance, we do not model the complexities of the wireless channel in our simulations.

Incremental Protocols Reduce Execution Time: Figure 4(a) compares the performance of incremental gossip (IGP) with uniform gossip on a clique topology. We observe that both protocols have almost identical execution times. This is expected, as IGP operates in a similar fashion to uniform gossip, taking O(log N) time-steps to converge. Figure 4(b) compares the execution times of incremental random walk (IRWP) and standard random walk on a clique. IRWP reduces execution time by a factor of 2.7 for a small number of measured signal changes. Although random walk and IRWP both use k random walks (in our simulations, k = number of nodes), IRWP initiates walks only from updated nodes (as explained in Section 5.2), resulting in faster information convergence. These improvements carry over to a PLRG topology as well (as shown in Figure 4(c)), where IRWP is 1.33 times faster than random walk.

Incremental Protocols Reduce Network Overhead: Figure 5(a) shows the ratio of data transmitted using uniform gossip relative to incremental gossip on a clique. For a small number of signal changes, incremental gossip incurs 2.4 times less overhead than uniform gossip. This is because in the early steps of protocol execution, only devices which detect signal changes communicate. As more signal changes are introduced into the system, gossip and incremental gossip incur approximately the same
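For concreteness, the duration of one time-step implied by these parameters is easy to work out; the fully sequential channel use is the worst case stated above.

```python
# One worst-case time-step: 1,000 nodes each sequentially sending
# 70 bits on a shared 2 Mbps control channel.
NODES, BITS_PER_TX, RATE_BPS = 1000, 70, 2_000_000
step_ms = NODES * BITS_PER_TX * 1000 / RATE_BPS
print(step_ms)   # -> 35.0 ms per time-step
```

At roughly 35 ms per time-step, an O(log N) protocol run of a dozen time-steps lands in the hundreds of milliseconds, consistent with the execution times reported in Figure 4.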
overhead. Similarly, incremental random walk (IRWP) incurs much less overhead than standard random walk. Figure 5(b) shows a 2.7-fold reduction in overhead for small numbers of signal changes on a clique. Although each protocol uses the same number of random walks, IRWP uses fewer network resources than random walk because it takes less time to converge. This improvement also holds on more complex PLRG topologies (as shown in Figure 5(c)), where we observe a 33% reduction in network overhead. From these results it is clear that incremental techniques yield significant improvements over standard approaches to gossip, even on complex topologies. Because spectrum utilization is characterized by incremental changes to usage, incremental protocols are ideally suited to solving this problem in an efficient and cost-effective manner.

8. DISCUSSION AND FUTURE WORK
We have only just scratched the surface in addressing the problem of coordinated spectrum sensing using incremental gossiping. Below, we outline some open areas of research.

Spatial Decay: Devices performing coordinated sensing are primarily interested in the spectrum usage of their local neighborhood. Therefore, we recommend the use of spatially decaying aggregates [6], which limit the impact of an update on more distant nodes. Spatially decaying aggregates work by successively reducing (by means of a decay function) the value of the update as it propagates further from its origin. One challenge with this approach is that the propagation distance cannot be determined ahead of time and, more importantly, exhibits spatio-temporal variations. Therefore, finding the optimal decay function is non-trivial, and an interesting subject of future work.

Significance Threshold: RF spectrum bands continually experience small-scale changes which may not necessarily be significant. Deciding whether a change is significant can be done using a significance threshold β, below which any observed change is not
propagated by the node. Choosing an appropriate operating value for β is application-dependent, and is explored further in [3].

Weighted Readings: Although we argued that most devices will likely be equipped with low-cost sensing equipment, there may be situations where some special infrastructure nodes have better sensing abilities than others. Weighting their measurements more heavily could be used to maintain a higher degree of accuracy. Determining how to assign such weights is an open area of research.

Implementation Specifics: Finally, implementing gossip for coordinated spectrum sensing is also an open problem. If implemented at the MAC layer, it may be feasible to piggy-back gossip messages onto existing management frames (e.g. networking advertisement messages). We also require the use of a control channel to disseminate sensing information. There are a variety of alternatives for implementing such a channel, some of which are outlined in [4]. The trade-offs of the different approaches to implementing GUESS are a subject of future work.

9. CONCLUSION
Spectrum sensing is a key requirement for dynamic spectrum allocation in cognitive radio networks. The nature of the RF environment necessitates coordination between cognitive radio devices. We propose GUESS, an approximate yet low-overhead approach to perform efficient coordination between cognitive radios. The fundamental contributions of GUESS are: (1) an FM aggregation scheme for efficient in-network aggregation, (2) a randomized gossiping approach which provides exponentially fast convergence and robustness to network alterations, and (3) incremental variations of FM and gossip which we show can reduce communication time by up to a factor of 2.7 and network overhead by up to a factor of 2.4. Our preliminary simulation results showcase the benefits of this approach, and we also outline a set of open problems that make this a new and exciting area of
research.\n10.\nREFERENCES [1] Unlicensed Operation in the TV Broadcast Bands and Additional Spectrum for Unlicensed Devices Below 900 MHz in the 3 GHz band, May 2004.\nNotice of Proposed Rule-Making 04-186, Federal Communications Commission.\n[2] In-Stat: Covering the Full Spectrum of Digital Communications Market Research, from Vendor to End-user, December 2005.\nhttp:\/\/www.in-stat.com\/catalog\/scatalogue.asp?id=28.\n[3] N. Ahmed, D. Hadaller, and S. Keshav.\nIncremental Maintenance of Global Aggregates.\nUW.\nTechnical Report CS-2006-19, University of Waterloo, ON, Canada, 2006.\n[4] R. W. Brodersen, A. Wolisz, D. Cabric, S. M. Mishra, and D. Willkomm.\nCORVUS: A Cognitive Radio Approach for Usage of Virtual Unlicensed Spectrum.\nTechnical report, July 2004.\n[5] D. Cabric, S. M. Mishra, and R. W. Brodersen.\nImplementation Issues in Spectrum Sensing for Cognitive Radios.\nIn Asilomar Conference, 2004.\n[6] E. Cohen and H. Kaplan.\nSpatially-Decaying Aggregation Over a Network: Model and Algorithms.\nIn Proceedings of SIGMOD 2004, pages 707-718, New York, NY, USA, 2004.\nACM Press.\n[7] P. Flajolet and G. N. Martin.\nProbabilistic Counting Algorithms for Data Base Applications.\nJ. Comput.\nSyst.\nSci., 31(2):182-209, 1985.\n[8] C. Gkantsidis, M. Mihail, and A. Saberi.\nRandom Walks in Peer-to-Peer Networks.\nIn Proceedings of INFOCOM 2004, pages 1229-1240, 2004.\n[9] E. Griffith.\nPreviewing Intel``s Cognitive Radio Chip, June 2005.\nhttp:\/\/www.internetnews.com\/wireless\/article.php\/3513721.\n[10] D. Kempe, A. Dobra, and J. Gehrke.\nGossip-Based Computation of Aggregate Information.\nIn FOCS 2003, page 482, Washington, DC, USA, 2003.\nIEEE Computer Society.\n[11] X. Liu and S. Shankar.\nSensing-based Opportunistic Channel Access.\nIn ACM Mobile Networks and Applications (MONET) Journal, March 2005.\n[12] Q. Lv, P. Cao, E. Cohen, K. Li, and S. Shenker.\nSearch and Replication in Unstructured Peer-to-Peer Networks.\nIn Proceedings of ICS, 2002.\n[13] A. 
Medina, A. Lakhina, I. Matta, and J. Byers.\nBRITE: an Approach to Universal Topology Generation.\nIn Proceedings of MASCOTS conference, Aug. 2001.\n[14] S. M. Mishra, A. Sahai, and R. W. Brodersen.\nCooperative Sensing among Cognitive Radios.\nIn ICC 2006, June 2006.\n[15] S. Nath, P. B. Gibbons, S. Seshan, and Z. R. Anderson.\nSynopsis Diffusion for Robust Aggregation in Sensor Networks.\nIn Proceedings of SenSys 2004, pages 250-262, 2004.\n[16] A. Sahai, N. Hoven, S. M. Mishra, and R. Tandra.\nFundamental Tradeoffs in Robust Spectrum Sensing for Opportunistic Frequency Reuse.\nTechnical Report UC Berkeley, 2006.\n[17] J. Zhao, H. Zheng, and G.-H.\nYang.\nDistributed Coordination in Dynamic Spectrum Allocation Networks.\nIn Proceedings of DySPAN 2005, Baltimore (MD), Nov. 2005.\n17","lvl-3":"GUESS: Gossiping Updates for Efficient Spectrum Sensing\nABSTRACT\nWireless radios of the future will likely be frequency-agile, that is, supporting opportunistic and adaptive use of the RF spectrum.\nSuch radios must coordinate with each other to build an accurate and consistent map of spectral utilization in their surroundings.\nWe focus on the problem of sharing RF spectrum data among a collection of wireless devices.\nThe inherent requirements of such data and the time-granularity at which it must be collected makes this problem both interesting and technically challenging.\nWe propose GUESS, a novel incremental gossiping approach to coordinated spectral sensing.\nIt (1) reduces protocol overhead by limiting the amount of information exchanged between participating nodes, (2) is resilient to network alterations, due to node movement or node failures, and (3) allows exponentially-fast information convergence.\nWe outline an initial solution incorporating these ideas and also show how our approach reduces network overhead by up to a factor of 2.4 and results in up to 2.7 times faster information convergence than alternative approaches.\n1.\nINTRODUCTION\nThere has recently 
been a huge surge in the growth of wireless technology, driven primarily by the availability of unlicensed spectrum.\nHowever, this has come at the cost of increased RF interference, which has caused the Federal Communications Commission (FCC) in the United States to re-evaluate its strategy on spectrum allocation.\nCurrently, the FCC has licensed RF spectrum to a variety of public and private institutions, termed primary users.\nNew spectrum allocation regimes implemented by the FCC use dynamic spectrum access schemes to either negotiate or opportunistically allocate RF spectrum to unlicensed secondary users\nFigure 1: Without cooperation, shadowed users are not able to detect the presence of the primary user.\nthat can use it when the primary user is absent.\nThe second type of allocation scheme is termed opportunistic spectrum sharing.\nThe FCC has already legislated this access method for the 5 GHz band and is also considering the same for TV broadcast bands [1].\nAs a result, a new wave of intelligent radios, termed cognitive radios (or software defined radios), is emerging that can dynamically re-tune their radio parameters based on interactions with their surrounding environment.\nUnder the new opportunistic allocation strategy, secondary users are obligated not to interfere with primary users (senders or receivers).\nThis can be done by sensing the environment to detect the presence of primary users.\nHowever, local sensing is not always adequate, especially in cases where a secondary user is shadowed from a primary user, as illustrated in Figure 1.\nHere, coordination between secondary users is the only way for shadowed users to detect the primary.\nIn general, cooperation improves sensing accuracy by an order of magnitude when compared to not cooperating at all [5].\nTo realize this vision of dynamic spectrum access, two fundamental problems must be solved: (1) Efficient and coordinated spectrum sensing and (2) Distributed spectrum allocation.\nIn this 
paper, we propose strategies for coordinated spectrum sensing that are low cost, operate on timescales comparable to the agility of the RF environment, and are resilient to network failures and alterations. We defer the problem of spectrum allocation to future work.

Spectrum sensing techniques for cognitive radio networks [4, 17] are broadly classified into three regimes: (1) centralized coordinated techniques, (2) decentralized coordinated techniques, and (3) decentralized uncoordinated techniques. We advocate a decentralized coordinated approach, similar in spirit to OSPF link-state routing used in the Internet. This is more effective than uncoordinated approaches because making decisions based only on local information is fallible (as shown in Figure 1). Moreover, compared to centralized approaches, decentralized techniques are more scalable, robust, and resistant to network failures and security attacks (e.g. jamming).

Coordinating sensory data between cognitive radio devices is technically challenging because accurately assessing spectrum usage requires exchanging potentially large amounts of data with many radios at very short time scales. Data size grows rapidly due to the large number (i.e. thousands) of spectrum bands that must be scanned. This data must also be exchanged between potentially hundreds of neighboring secondary users at short time scales, to account for rapid changes in the RF environment.

This paper presents GUESS, a novel approach to coordinated spectrum sensing for cognitive radio networks. Our approach is motivated by the following key observations:

1. Low-cost sensors collect approximate data: Most devices have limited sensing resolution because they are low-cost, low duty-cycle devices and thus cannot perform complex RF signal processing (e.g. matched filtering). Many are typically equipped with simple energy detectors that gather only approximate information.

2. Approximate summaries are sufficient for coordination: Approximate statistical summaries of sensed data are sufficient for correlating sensed information between radios, as relative usage information is more important than absolute usage data. Thus, exchanging exact RF information may not be necessary and, more importantly, may be too costly for the purposes of spectrum sensing.

3. RF spectrum changes incrementally: On most bands, RF spectrum utilization changes infrequently. Moreover, utilization of a specific RF band affects only that band and not the entire spectrum. Therefore, if the usage pattern of a particular band changes substantially, nodes detecting that change can initiate an update protocol for that band alone, leaving in place information already collected for other bands. This allows rapid detection of change while saving the overhead of exchanging unnecessary information.

Based on these observations, GUESS makes the following contributions:

1. A novel approach that applies randomized gossiping algorithms to the problem of coordinated spectrum sensing. These algorithms are well suited to coordinated spectrum sensing due to the unique characteristics of the problem: radios are power-limited, mobile, and have limited bandwidth to support spectrum sensing capabilities.

2. An application of in-network aggregation for dissemination of spectrum summaries. We argue that approximate summaries are adequate for performing accurate radio parameter tuning.

3. An extension of in-network aggregation and randomized gossiping to support incremental maintenance of spectrum summaries. Compared to standard gossiping approaches, incremental techniques can further reduce overhead and protocol execution time by requiring fewer radio resources.

The rest of the paper is organized as follows. Section 2 motivates the need for a low-cost and efficient approach to coordinated spectrum sensing. Section 3 discusses related work in the area, while Section 4 provides a background on in-network aggregation and randomized gossiping. Sections 5 and 6 discuss extensions and protocol details of these techniques for coordinated spectrum sensing. Section 7 presents simulation results showcasing the benefits of GUESS, and Section 8 presents a discussion and some directions for future work.

2. MOTIVATION

To estimate the scale of the problem, In-Stat predicts that the number of WiFi-enabled devices sold annually will grow to 430 million by 2009 [2]. It is therefore reasonable to assume that a typical dense urban environment will contain several thousand cognitive radio devices in range of each other. As a result, distributed spectrum sensing and allocation become both important and fundamental. Coordinated sensing among secondary radios is essential due to limited device sensing resolution and physical RF effects such as shadowing. Cabric et al.
[5] illustrate the gains from cooperation and show an order of magnitude reduction in the probability of interference with the primary user when only a small fraction of secondary users cooperate. However, such coordination is non-trivial due to: (1) the limited bandwidth available for coordination, (2) the need to communicate this information on short timescales, and (3) the large amount of sensory data that needs to be exchanged.

Limited Bandwidth: Due to restrictions of cost and power, most devices will likely not have dedicated hardware for supporting coordination. This implies that both data and sensory traffic will need to be time-multiplexed onto a single radio interface. Therefore, any time spent communicating sensory information takes away from the device's ability to perform its intended function. Thus, any such coordination must incur minimal network overhead.

Short Timescales: Further compounding the problem is the need to immediately propagate updated RF sensory data, in order to allow devices to react to it in a timely fashion. This is especially true under mobility, as rapid changes of the RF environment can occur due to device and obstacle movements. Here, fading and multi-path interference heavily impact sensing abilities. Signal level can drop to a deep null with just a λ/4 movement in receiver position (3.7 cm at 2 GHz), where λ is the wavelength [14]. Coordination which does not support rapid dissemination of information will not be able to account for such RF variations.

Large Sensory Data: Because cognitive radios can potentially use any part of the RF spectrum, there will be numerous channels that they need to scan. Suppose we wish to compute the average signal energy in each of 100 discretized frequency bands, where each signal can have up to 128 discrete energy levels. Exchanging complete sensory information between nodes would then require 700 bits per transmission (100 channels, each requiring seven bits of information). Exchanging this information among even a small group of 50 devices each second would require (50 time-steps × 50 devices × 700 bits per transmission) = 1.67 Mbps of aggregate network bandwidth. Contrast this with the use of a randomized gossip protocol to disseminate such information, together with FM bit vectors to perform in-network aggregation. By applying gossip and FM aggregation, the aggregate bandwidth requirement drops to (c · log N time-steps × 50 devices × 700 bits per transmission) = 0.40 Mbps, since only 12 time-steps are needed to propagate the data (with c = 2, for illustrative purposes; convergence time is correlated with the connectivity topology of the devices, which in turn depends on the environment). This is explained further in Section 4.

Based on these insights, we propose GUESS, a low-overhead approach which uses incremental extensions to FM aggregation and randomized gossiping for efficient coordination within a cognitive radio network. As we show in Section 7, these incremental extensions can further reduce bandwidth requirements by up to a factor of 2.4 over the standard approaches discussed above.

Figure 2: Using FM aggregation to compute the average signal level measured by a group of devices.

3. RELATED WORK

Research in cognitive radio has increased rapidly [4, 17] over the years, and it is projected as one of the leading enabling technologies for future wireless networks [9]. As mentioned earlier, the FCC has already identified new regimes for spectrum sharing between primary and secondary users, and a variety of systems have been proposed in the literature to support such sharing [4, 17]. Detecting the presence of a primary user is non-trivial, especially a legacy primary user that is not cognitive-radio aware. Secondary users must be able to detect the primary even if they cannot properly decode its signals. This has been shown by Sahai et al.
[16] to be extremely difficult even if the modulation scheme is known. Sophisticated and costly hardware, beyond a simple energy detector, is required to improve signal detection accuracy [16]. Moreover, a shadowed secondary user may not even be able to detect signals from the primary. As a result, simple local sensing approaches have not gained much momentum. This has motivated the need for cooperation among cognitive radios [16].

More recently, some researchers have proposed approaches for radio coordination. Liu et al. [11] consider a centralized access point (or base station) architecture in which sensing information is forwarded to APs for spectrum allocation purposes. APs direct mobile clients to collect such sensing information on their behalf. However, due to the need for a fixed AP infrastructure, such a centralized approach does not scale. In other work, Zhao et al. [17] propose a distributed coordination approach for spectrum sensing and allocation, in which cognitive radios organize into clusters and coordination occurs within clusters. The CORVUS [4] architecture proposes a similar clustering method that can use either a centralized or decentralized approach to manage clusters. Although an improvement over purely centralized approaches, these techniques still require a setup phase to generate the clusters, which not only adds delay but also requires many of the secondary users to be static or quasi-static. In contrast, GUESS does not place such restrictions on secondary users, and can even function in highly mobile environments.

4. BACKGROUND

This section provides the background for our approach. We present the FM aggregation scheme that we use to generate spectrum summaries and perform in-network aggregation. We also discuss randomized gossiping techniques for disseminating aggregates in a cognitive radio network.

4.1 FM Aggregation

Aggregation is the process by which nodes in a distributed network combine data received from neighboring nodes with their local value to generate a combined aggregate. This aggregate is then communicated to other nodes in the network, and the process repeats until the aggregate at all nodes has converged to the same value, i.e. the global aggregate. Double-counting is a well-known problem in this process: nodes may contribute more than once to the aggregate, causing inaccuracy in the final result. Intuitively, nodes could tag the aggregate value they transmit with information about which nodes have contributed to it, but this approach is not scalable. Order- and Duplicate-Insensitive (ODI) techniques have been proposed in the literature [10, 15]. We adopt the ODI approach pioneered by Flajolet and Martin (FM) for the purposes of aggregation. Next we outline the FM approach; for full details, see [7].

Suppose we want to compute the number of nodes in the network, i.e. the COUNT query. To do so, each node performs a coin-toss experiment: toss an unbiased coin, stopping after the first "head" is seen. The node then sets the ith bit in a bit vector (initially filled with zeros), where i is the number of coin tosses it performed. The intuition is that as the number of nodes performing coin-toss experiments increases, the probability of a more significant bit being set in one of the nodes' bit vectors increases. These bit vectors are then exchanged among nodes. When a node receives a bit vector, it updates its local bit vector by bitwise OR-ing it with the received vector (as shown in Figure 2, which computes AVERAGE). At the end of the aggregation process, every node, with high probability, has the same bit vector. The actual value of the count aggregate is then computed using the formula AGG_FM = 2^(j−1)/0.77351, where j is the bit position of the least significant zero in the aggregate bit vector [7].

Although such aggregates are very compact, requiring only O(log N) state space (where N is the number of nodes), they may not be very accurate, as they can only approximate values to the closest power of 2, potentially causing errors of up to 50%. More accurate aggregates can be computed by maintaining multiple bit vectors at each node, as explained in [7]. This decreases the error to within O(1/√m), where m is the number of such bit vectors. Queries other than COUNT can also be computed using variants of this basic counting algorithm, as discussed in [3] (and shown in Figure 2). Transmitting FM bit vectors between nodes is done using randomized gossiping, discussed next.

4.2 Gossip Protocols

Gossip-based protocols operate in discrete time-steps; a time-step is the amount of time required for all transmissions in that time-step to complete. At every time-step, each node having something to send randomly selects one or more neighboring nodes and transmits its data to them. The randomized propagation of information provides fault-tolerance and resilience to network failures and outages. We emphasize that this characteristic also allows the protocol to operate without relying on any underlying network structure. Gossip protocols have been shown to provide exponentially fast convergence, on the order of O(log N) [10], where N is the number of nodes (or radios). These protocols can therefore easily scale to very dense environments. Two types of gossip protocols are:

• Uniform Gossip: In uniform gossip, at each time-step, each node chooses a random neighbor and sends its data to it. This process repeats for O(log N) steps (where N is the number of nodes in the network). Uniform gossip provides exponentially fast convergence with low network overhead [10].

• Random Walk: In random walk, only a subset of the nodes (termed designated nodes) communicate in a particular time-step. At startup, k nodes are randomly elected as designated nodes. In each time-step, each designated node sends its data to a random
neighbor, which becomes designated for the subsequent time-step (much like passing a token). This process repeats until the aggregate has converged in the network. Random walk has been shown to provide convergence bounds similar to uniform gossip in problems of similar context [8, 12].

5. INCREMENTAL PROTOCOLS

5.1 Incremental FM Aggregates

5.2 Incremental Routing Protocol

6. PROTOCOL DETAILS

7. EVALUATION

9. CONCLUSION

Spectrum sensing is a key requirement for dynamic spectrum allocation in cognitive radio networks. The nature of the RF environment necessitates coordination between cognitive radio devices. We propose GUESS, an approximate yet low-overhead approach to perform efficient coordination between cognitive radios. The fundamental contributions of GUESS are: (1) an FM aggregation scheme for efficient in-network aggregation, (2) a randomized gossiping approach which provides exponentially fast convergence and robustness to network alterations, and (3) incremental variations of FM and gossip which we show can reduce communication time by up to a factor of 2.7 and network overhead by up to a factor of 2.4. Our preliminary simulation results showcase the benefits of this approach, and we also outline a set of open problems that make this a new and exciting area of research.
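To make the FM COUNT procedure of Section 4.1 concrete, the following is a minimal simulation sketch. The function names, the choice of m = 16 vectors, and the 32-bit vector width are illustrative assumptions, not taken from the paper; the estimate applies AGG_FM = 2^(j−1)/0.77351 with j averaged over the m bit vectors.

```python
import random

def fm_sketches(n_nodes, m=16, width=32, seed=1):
    """Simulate FM COUNT: every node runs m coin-toss experiments; vectors
    from all nodes are combined by bitwise OR, which is order- and
    duplicate-insensitive (ODI), so double-counting cannot occur."""
    rng = random.Random(seed)
    vectors = [0] * m
    for _ in range(n_nodes):
        for v in range(m):
            i = 1                          # number of tosses until first "head"
            while i < width and rng.random() < 0.5:
                i += 1
            vectors[v] |= 1 << (i - 1)     # set the i-th bit (1-indexed)
    return vectors

def fm_count(vectors):
    """Estimate AGG_FM = 2^(j-1) / 0.77351, where j is the position of the
    least significant zero (1-indexed), averaged over the m bit vectors."""
    mean_j = 0.0
    for bits in vectors:
        j = 1
        while bits & (1 << (j - 1)):
            j += 1
        mean_j += j / len(vectors)
    return 2 ** (mean_j - 1) / 0.77351
```

For 1,000 simulated nodes this yields an estimate of the right order of magnitude; as discussed above, a single vector is only accurate to the nearest power of 2, and the error shrinks as O(1/√m) when more vectors are maintained.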
spectral sensing.\nIt (1) reduces protocol overhead by limiting the amount of information exchanged between participating nodes, (2) is resilient to network alterations, due to node movement or node failures, and (3) allows exponentially-fast information convergence.\nWe outline an initial solution incorporating these ideas and also show how our approach reduces network overhead by up to a factor of 2.4 and results in up to 2.7 times faster information convergence than alternative approaches.\n1.\nINTRODUCTION\nThere has recently been a huge surge in the growth of wireless technology, driven primarily by the availability of unlicensed spectrum.\nCurrently, the FCC has licensed RF spectrum to a variety of public and private institutions, termed primary users.\nNew spectrum allocation regimes implemented by the FCC use dynamic spectrum access schemes to either negotiate or opportunistically allocate RF spectrum to unlicensed secondary users\nFigure 1: Without cooperation, shadowed users are not able to detect the presence of the primary user.\nthat can use it when the primary user is absent.\nThe second type of allocation scheme is termed opportunistic spectrum sharing.\nUnder the new opportunistic allocation strategy, secondary users are obligated not to interfere with primary users (senders or receivers).\nThis can be done by sensing the environment to detect the presence of primary users.\nHowever, local sensing is not always adequate, especially in cases where a secondary user is shadowed from a primary user, as illustrated in Figure 1.\nHere, coordination between secondary users is the only way for shadowed users to detect the primary.\nIn general, cooperation improves sensing accuracy by an order of magnitude when compared to not cooperating at all [5].\nTo realize this vision of dynamic spectrum access, two fundamental problems must be solved: (1) Efficient and coordinated spectrum sensing and (2) Distributed spectrum allocation.\nIn this paper, we propose 
strategies for coordinated spectrum sensing that are low cost, operate on timescales comparable to the agility of the RF environment, and are resilient to network failures and alterations.\nWe defer the problem of spectrum allocation to future work.\nSpectrum sensing techniques for cognitive radio networks [4, 17] are broadly classified into three regimes; (1) centralized coordinated techniques, (2) decentralized coordinated techniques, and (3) decentralized uncoordinated techniques.\nWe advocate a decentralized coordinated approach, similar in spirit to OSPF link-state routing used in the Internet.\nThis is more effective than uncoordinated approaches because making decisions based only on local information is fallible (as shown in Figure 1).\nMoreover, compared to cen\ntralized approaches, decentralized techniques are more scalable, robust, and resistant to network failures and security attacks (e.g. jamming).\nCoordinating sensory data between cognitive radio devices is technically challenging because accurately assessing spectrum usage requires exchanging potentially large amounts of data with many radios at very short time scales.\nData size grows rapidly due to the large number (i.e. 
thousands) of spectrum bands that must be scanned.\nThis data must also be exchanged between potentially hundreds of neighboring secondary users at short time scales, to account for rapid changes in the RF environment.\nThis paper presents GUESS, a novel approach to coordinated spectrum sensing for cognitive radio networks.\nOur approach is motivated by the following key observations:\n1.\nMany are typically equipped with simple energy detectors that gather only approximate information.\n2.\nApproximate summaries are sufficient for coordination: Approximate statistical summaries of sensed data are sufficient for correlating sensed information between radios, as relative usage information is more important than absolute usage data.\nThus, exchanging exact RF information may not be necessary, and more importantly, too costly for the purposes of spectrum sensing.\n3.\nRF spectrum changes incrementally: On most bands, RF spectrum utilization changes infrequently.\nMore\nover, utilization of a specific RF band affects only that band and not the entire spectrum.\nThis allows rapid detection of change while saving the overhead of exchanging unnecessary information.\nBased on these observations, GUESS makes the following contributions:\n1.\nA novel approach that applies randomized gossiping algorithms to the problem of coordinated spectrum sensing.\nThese algorithms are well suited to coordinated spectrum sensing due to the unique characteristics of the problem: i.e. 
radios are power-limited, mobile and have limited bandwidth to support spectrum sensing capabilities.\n2.\nAn application of in-network aggregation for dissemination of spectrum summaries.\nWe argue that approximate summaries are adequate for performing accurate radio parameter tuning.\n3.\nAn extension of in-network aggregation and randomized gossiping to support incremental maintenance of spectrum summaries.\nCompared to standard gossiping approaches, incremental techniques can further reduce overhead and protocol execution time by requiring fewer radio resources.\nThe rest of the paper is organized as follows.\nSection 2 motivates the need for a low cost and efficient approach to coordinated spectrum sensing.\nSection 3 discusses related work in the area, while Section 4 provides a background on in-network aggregation and randomized gossiping.\nSections 5 and 6 discuss extensions and protocol details of these techniques for coordinated spectrum sensing.\nSection 7 presents simulation results showcasing the benefits of GUESS, and Section 8 presents a discussion and some directions for future work.\n2.\nMOTIVATION\nTherefore, it would be reasonable to assume that a typical dense urban environment will contain several thousand cognitive radio devices in range of each other.\nAs a result, distributed spectrum sensing and allocation would become both important and fundamental.\nCoordinated sensing among secondary radios is essential due to limited device sensing resolution and physical RF effects such as shadowing.\nLimited Bandwidth: Due to restrictions of cost and power, most devices will likely not have dedicated hardware for supporting coordination.\nThis implies that both data and sensory traffic will need to be time-multiplexed onto a single radio interface.\nTherefore, any time spent communicating sensory information takes away from the device's ability to perform its intended function.\nThus, any such coordination must incur minimal network overhead.\nShort 
Timescales: Further compounding the problem is the need to immediately propagate updated RF sensory data, in order to allow devices to react to it in a timely fashion.\nThis is especially true due to mobility, as rapid changes of the RF environment can occur due to device and obstacle movements.\nHere, fading and multi-path interference heavily impact sensing abilities.\nCoordination which does not support rapid dissemination of information will not be able to account for such RF variations.\nLarge Sensory Data: Because cognitive radios can potentially use any part of the RF spectrum, there will be numerous channels that they need to scan.\nExchanging complete sensory information between nodes would require 700 bits per transmission (for 100 channels, each requiring seven bits of information).\nExchanging this information among even a small group of 50 devices each second would require (50 time-steps \u00d7 50 devices \u00d7 700 bits per transmission) = 1.67 Mbps of aggregate network bandwidth.\nContrast this to the use of a randomized gossip protocol to disseminate such information, and the use of FM bit vectors to perform in-network aggregation.\nThis is explained further in Section 4.\nBased on these insights, we propose GUESS, a low-overhead approach which uses incremental extensions to FM aggregation and randomized gossiping for efficient coordination within a cognitive radio network.\nAs we show in Section 7,' Convergence time is correlated with the connectivity topology of the devices, which in turn depends on the environment.\nFigure 2: Using FM aggregation to compute average signal level measured by a group of devices.\nthese incremental extensions can further reduce bandwidth requirements by up to a factor of 2.4 over the standard approaches discussed above.\n3.\nRELATED WORK\nAs mentioned earlier, the FCC has already identified new regimes for spectrum sharing between primary users and secondary users and a variety of systems have been proposed in the 
literature to support such sharing [4, 17].\nDetecting the presence of a primary user is non-trivial, especially a legacy primary user that is not cognitive radio aware.\nSecondary users must be able to detect the primary even if they cannot properly decode its signals.\nMoreover, a shadowed secondary user may not even be able to detect signals from the primary.\nAs a result, simple local sensing approaches have not gained much momentum.\nThis has motivated the need for cooperation among cognitive radios [16].\nMore recently, some researchers have proposed approaches for radio coordination.\nLiu et al. [11] consider a centralized access point (or base station) architecture in which sensing information is forwarded to APs for spectrum allocation purposes.\nAPs direct mobile clients to collect such sensing information on their behalf.\nHowever, due to the need of a fixed AP infrastructure, such a centralized approach is clearly not scalable.\nIn other work, Zhao et al. [17] propose a distributed coordination approach for spectrum sensing and allocation.\nCognitive radios organize into clusters and coordination occurs within clusters.\nThe CORVUS [4] architecture proposes a similar clustering method that can use either a centralized or decentralized approach to manage clusters.\nIn contrast, GUESS does not place such restrictions on secondary users, and can even function in highly mobile environments.\n4.\nBACKGROUND\nThis section provides the background for our approach.\nWe present the FM aggregation scheme that we use to generate spectrum summaries and perform in-network aggregation.\nWe also discuss randomized gossiping techniques for disseminating aggregates in a cognitive radio network.\n4.1 FM Aggregation\nAggregation is the process where nodes in a distributed network combine data received from neighboring nodes with their local value to generate a combined aggregate.\nThis aggregate is then communicated to other nodes in the network and this process repeats 
until the aggregate at all nodes has converged to the same value, i.e. the global aggregate.\nDouble-counting is a well known problem in this process, where nodes may contribute more than once to the aggregate, causing inaccuracy in the final result.\nIntuitively, nodes can tag the aggregate value they transmit with information about which nodes have contributed to it.\nHowever, this approach is not scalable.\nWe adopt the ODI approach pioneered by Flajolet and Martin (FM) for the purposes of aggregation.\nNext we outline the FM approach; for full details, see [7].\nSuppose we want to compute the number of nodes in the network, i.e. the COUNT query.\nTo do so, each node performs a coin toss experiment as follows: toss an unbiased coin, stopping after the first \"head\" is seen.\nThe node then sets the ith bit in a bit vector (initially filled with zeros), where i is the number of coin tosses it performed.\nThe intuition is that as the number of nodes doing coin toss experiments increases, the probability of a more significant bit being set in one of the nodes' bit vectors increases.\nThese bit vectors are then exchanged among nodes.\nWhen a node receives a bit vector, it updates its local bit vector by bitwise OR-ing it with the received vector (as shown in Figure 2 which computes AVERAGE).\nAt the end of the aggregation process, every node, with high probability, has the same bit vector.\nMore accurate aggregates can be computed by maintaining multiple bit vectors at each node, as explained in [7].\nTransmitting FM bit vectors between nodes is done using randomized gossiping, discussed next.\n4.2 Gossip Protocols\nGossip-based protocols operate in discrete time-steps; a time-step is the required amount of time for all transmissions in that time-step to complete.\nAt every time-step, each node having something to send randomly selects one or more neighboring nodes and transmits its data to them.\nThe randomized propagation of information provides fault-tolerance 
and resilience to network failures and outages.\nWe emphasize that this characteristic of the protocol also allows it to operate without relying on any underlying network structure.\nGossip protocols have been shown to provide exponentially fast convergence2, on the order of O (log N) [10], where N is the number of nodes (or radios).\nThese protocols can therefore easily scale to very dense environments.\nTwo types of gossip protocols are:\n\u2022 Uniform Gossip: In uniform gossip, at each time\nstep, each node chooses a random neighbor and sends its data to it.\nThis process repeats for O (log (N)) steps (where N is the number of nodes in the network).\nUniform gossip provides exponentially fast convergence, with low network overhead [10].\n\u2022 Random Walk: In random walk, only a subset of the nodes (termed designated nodes) communicate in a particular time-step.\nAt startup, k nodes are randomly elected as designated nodes.\nIn each time-step, each designated node sends its data to a random neighbor, which becomes designated for the subsequent timestep (much like passing a token).\nThis process repeats until the aggregate has converged in the network.\nRandom walk has been shown to provide similar convergence bounds as uniform gossip in problems of similar context [8, 12].\n9.\nCONCLUSION\nSpectrum sensing is a key requirement for dynamic spectrum allocation in cognitive radio networks.\nThe nature of the RF environment necessitates coordination between cognitive radio devices.\nWe propose GUESS, an approximate yet low overhead approach to perform efficient coordination between cognitive radios.\nOur preliminary simulation results showcase the benefits of this approach and we also outline a set of open problems that make this a new and exciting area of research.","lvl-2":"GUESS: Gossiping Updates for Efficient Spectrum Sensing\nABSTRACT\nWireless radios of the future will likely be frequency-agile, that is, supporting opportunistic and adaptive use of the RF 
spectrum.\nSuch radios must coordinate with each other to build an accurate and consistent map of spectral utilization in their surroundings.\nWe focus on the problem of sharing RF spectrum data among a collection of wireless devices.\nThe inherent requirements of such data and the time-granularity at which it must be collected makes this problem both interesting and technically challenging.\nWe propose GUESS, a novel incremental gossiping approach to coordinated spectral sensing.\nIt (1) reduces protocol overhead by limiting the amount of information exchanged between participating nodes, (2) is resilient to network alterations, due to node movement or node failures, and (3) allows exponentially-fast information convergence.\nWe outline an initial solution incorporating these ideas and also show how our approach reduces network overhead by up to a factor of 2.4 and results in up to 2.7 times faster information convergence than alternative approaches.\n1.\nINTRODUCTION\nThere has recently been a huge surge in the growth of wireless technology, driven primarily by the availability of unlicensed spectrum.\nHowever, this has come at the cost of increased RF interference, which has caused the Federal Communications Commission (FCC) in the United States to re-evaluate its strategy on spectrum allocation.\nCurrently, the FCC has licensed RF spectrum to a variety of public and private institutions, termed primary users.\nNew spectrum allocation regimes implemented by the FCC use dynamic spectrum access schemes to either negotiate or opportunistically allocate RF spectrum to unlicensed secondary users\nFigure 1: Without cooperation, shadowed users are not able to detect the presence of the primary user.\nthat can use it when the primary user is absent.\nThe second type of allocation scheme is termed opportunistic spectrum sharing.\nThe FCC has already legislated this access method for the 5 GHz band and is also considering the same for TV broadcast bands [1].\nAs a result, 
a new wave of intelligent radios, termed cognitive radios (or software defined radios), is emerging that can dynamically re-tune their radio parameters based on interactions with their surrounding environment.\nUnder the new opportunistic allocation strategy, secondary users are obligated not to interfere with primary users (senders or receivers).\nThis can be done by sensing the environment to detect the presence of primary users.\nHowever, local sensing is not always adequate, especially in cases where a secondary user is shadowed from a primary user, as illustrated in Figure 1.\nHere, coordination between secondary users is the only way for shadowed users to detect the primary.\nIn general, cooperation improves sensing accuracy by an order of magnitude when compared to not cooperating at all [5].\nTo realize this vision of dynamic spectrum access, two fundamental problems must be solved: (1) Efficient and coordinated spectrum sensing and (2) Distributed spectrum allocation.\nIn this paper, we propose strategies for coordinated spectrum sensing that are low cost, operate on timescales comparable to the agility of the RF environment, and are resilient to network failures and alterations.\nWe defer the problem of spectrum allocation to future work.\nSpectrum sensing techniques for cognitive radio networks [4, 17] are broadly classified into three regimes; (1) centralized coordinated techniques, (2) decentralized coordinated techniques, and (3) decentralized uncoordinated techniques.\nWe advocate a decentralized coordinated approach, similar in spirit to OSPF link-state routing used in the Internet.\nThis is more effective than uncoordinated approaches because making decisions based only on local information is fallible (as shown in Figure 1).\nMoreover, compared to cen\ntralized approaches, decentralized techniques are more scalable, robust, and resistant to network failures and security attacks (e.g. 
jamming).\nCoordinating sensory data between cognitive radio devices is technically challenging because accurately assessing spectrum usage requires exchanging potentially large amounts of data with many radios at very short time scales.\nData size grows rapidly due to the large number (i.e. thousands) of spectrum bands that must be scanned.\nThis data must also be exchanged between potentially hundreds of neighboring secondary users at short time scales, to account for rapid changes in the RF environment.\nThis paper presents GUESS, a novel approach to coordinated spectrum sensing for cognitive radio networks.\nOur approach is motivated by the following key observations:\n1.\nLow-cost sensors collect approximate data: Most devices have limited sensing resolution because they are low-cost and low duty-cycle devices and thus cannot perform complex RF signal processing (e.g. matched filtering).\nMany are typically equipped with simple energy detectors that gather only approximate information.\n2.\nApproximate summaries are sufficient for coordination: Approximate statistical summaries of sensed data are sufficient for correlating sensed information between radios, as relative usage information is more important than absolute usage data.\nThus, exchanging exact RF information may not be necessary, and more importantly, too costly for the purposes of spectrum sensing.\n3.\nRF spectrum changes incrementally: On most bands, RF spectrum utilization changes infrequently.\nMoreover, utilization of a specific RF band affects only that band and not the entire spectrum.\nTherefore, if the usage pattern of a particular band changes substantially, nodes detecting that change can initiate an update protocol to update the information for that band alone, leaving in place information already collected for other bands.\nThis allows rapid detection of change while saving the overhead of exchanging unnecessary information.\nBased on these observations, GUESS makes the following 
contributions:\n1.\nA novel approach that applies randomized gossiping algorithms to the problem of coordinated spectrum sensing.\nThese algorithms are well suited to coordinated spectrum sensing due to the unique characteristics of the problem: i.e. radios are power-limited, mobile and have limited bandwidth to support spectrum sensing capabilities.\n2.\nAn application of in-network aggregation for dissemination of spectrum summaries.\nWe argue that approximate summaries are adequate for performing accurate radio parameter tuning.\n3.\nAn extension of in-network aggregation and randomized gossiping to support incremental maintenance of spectrum summaries.\nCompared to standard gossiping approaches, incremental techniques can further reduce overhead and protocol execution time by requiring fewer radio resources.\nThe rest of the paper is organized as follows.\nSection 2 motivates the need for a low cost and efficient approach to coordinated spectrum sensing.\nSection 3 discusses related work in the area, while Section 4 provides a background on in-network aggregation and randomized gossiping.\nSections 5 and 6 discuss extensions and protocol details of these techniques for coordinated spectrum sensing.\nSection 7 presents simulation results showcasing the benefits of GUESS, and Section 8 presents a discussion and some directions for future work.\n2.\nMOTIVATION\nTo estimate the scale of the problem, In-stat predicts that the number of WiFi-enabled devices sold annually alone will grow to 430 million by 2009 [2].\nTherefore, it would be reasonable to assume that a typical dense urban environment will contain several thousand cognitive radio devices in range of each other.\nAs a result, distributed spectrum sensing and allocation would become both important and fundamental.\nCoordinated sensing among secondary radios is essential due to limited device sensing resolution and physical RF effects such as shadowing.\nCabric et al. 
[5] illustrate the gains from cooperation and show an order of magnitude reduction in the probability of interference with the primary user when only a small fraction of secondary users cooperate.\nHowever, such coordination is non-trivial due to: (1) the limited bandwidth available for coordination, (2) the need to communicate this information on short timescales, and (3) the large amount of sensory data that needs to be exchanged.\nLimited Bandwidth: Due to restrictions of cost and power, most devices will likely not have dedicated hardware for supporting coordination.\nThis implies that both data and sensory traffic will need to be time-multiplexed onto a single radio interface.\nTherefore, any time spent communicating sensory information takes away from the device's ability to perform its intended function.\nThus, any such coordination must incur minimal network overhead.\nShort Timescales: Further compounding the problem is the need to immediately propagate updated RF sensory data, in order to allow devices to react to it in a timely fashion.\nThis is especially true due to mobility, as rapid changes of the RF environment can occur due to device and obstacle movements.\nHere, fading and multi-path interference heavily impact sensing abilities.\nSignal level can drop to a deep null with just a \u03bb \/ 4 movement in receiver position (3.7 cm at 2 GHz), where \u03bb is the wavelength [14].\nCoordination which does not support rapid dissemination of information will not be able to account for such RF variations.\nLarge Sensory Data: Because cognitive radios can potentially use any part of the RF spectrum, there will be numerous channels that they need to scan.\nSuppose we wish to compute the average signal energy in each of 100 discretized frequency bands, and each signal can have up to 128 discrete energy levels.\nExchanging complete sensory information between nodes would require 700 bits per transmission (for 100 channels, each requiring seven bits of 
information).\nExchanging this information among even a small group of 50 devices each second would require (50 time-steps \u00d7 50 devices \u00d7 700 bits per transmission) = 1.67 Mbps of aggregate network bandwidth.\nContrast this to the use of a randomized gossip protocol to disseminate such information, and the use of FM bit vectors to perform in-network aggregation.\nBy applying gossip and FM aggregation, aggregate bandwidth requirements drop to (c \u00b7 log N time-steps \u00d7 50 devices \u00d7 700 bits per transmission) = 0.40 Mbps, since 12 time-steps are needed to propagate the data (with c = 2, for illustrative purposes; convergence time is correlated with the connectivity topology of the devices, which in turn depends on the environment).\nThis is explained further in Section 4.\nBased on these insights, we propose GUESS, a low-overhead approach which uses incremental extensions to FM aggregation and randomized gossiping for efficient coordination within a cognitive radio network.\nAs we show in Section 7, these incremental extensions can further reduce bandwidth requirements by up to a factor of 2.4 over the standard approaches discussed above.\nFigure 2: Using FM aggregation to compute average signal level measured by a group of devices.\n3.\nRELATED WORK\nResearch in cognitive radio has increased rapidly [4, 17] over the years, and it is being projected as one of the leading enabling technologies for wireless networks of the future [9].\nAs mentioned earlier, the FCC has already identified new regimes for spectrum sharing between primary users and secondary users and a variety of systems have been proposed in the literature to support such sharing [4, 17].\nDetecting the presence of a primary user is non-trivial, especially a legacy primary user that is not cognitive radio aware.\nSecondary users must be able to detect the primary even if they cannot properly decode its signals.\nThis has been shown by Sahai et al. 
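The bandwidth comparison above is easy to reproduce as a back-of-envelope check. This is our sketch, not part of the original text; the one assumption we add is that the quoted "Mbps" figures count 2^20 bits per megabit, which is what yields 1.67 and 0.40:

```python
# Reproduces the Section 2 bandwidth figures.
# Assumption (ours): 1 Mbit = 2**20 bits, which matches the paper's totals.

BITS_PER_TX = 700   # 100 frequency bands x 7 bits each
DEVICES = 50
MBIT = 2 ** 20

# Naive exchange: every device transmits once per time-step, 50 steps per second.
naive = 50 * DEVICES * BITS_PER_TX / MBIT

# Gossip + FM aggregation: c * log2(N) ~= 12 time-steps, with c = 2.
gossip = 12 * DEVICES * BITS_PER_TX / MBIT

print(f"naive:  {naive:.2f} Mbps")   # 1.67
print(f"gossip: {gossip:.2f} Mbps")  # 0.40
```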
[16] to be extremely difficult even if the modulation scheme is known.\nSophisticated and costly hardware, beyond a simple energy detector, is required to improve signal detection accuracy [16].\nMoreover, a shadowed secondary user may not even be able to detect signals from the primary.\nAs a result, simple local sensing approaches have not gained much momentum.\nThis has motivated the need for cooperation among cognitive radios [16].\nMore recently, some researchers have proposed approaches for radio coordination.\nLiu et al. [11] consider a centralized access point (or base station) architecture in which sensing information is forwarded to APs for spectrum allocation purposes.\nAPs direct mobile clients to collect such sensing information on their behalf.\nHowever, due to the need of a fixed AP infrastructure, such a centralized approach is clearly not scalable.\nIn other work, Zhao et al. [17] propose a distributed coordination approach for spectrum sensing and allocation.\nCognitive radios organize into clusters and coordination occurs within clusters.\nThe CORVUS [4] architecture proposes a similar clustering method that can use either a centralized or decentralized approach to manage clusters.\nAlthough an improvement over purely centralized approaches, these techniques still require a setup phase to generate the clusters, which not only adds additional delay, but also requires many of the secondary users to be static or quasi-static.\nIn contrast, GUESS does not place such restrictions on secondary users, and can even function in highly mobile environments.\n4.\nBACKGROUND\nThis section provides the background for our approach.\nWe present the FM aggregation scheme that we use to generate spectrum summaries and perform in-network aggregation.\nWe also discuss randomized gossiping techniques for disseminating aggregates in a cognitive radio network.\n4.1 FM Aggregation\nAggregation is the process where nodes in a distributed network combine data received 
from neighboring nodes with their local value to generate a combined aggregate.\nThis aggregate is then communicated to other nodes in the network and this process repeats until the aggregate at all nodes has converged to the same value, i.e. the global aggregate.\nDouble-counting is a well-known problem in this process, where nodes may contribute more than once to the aggregate, causing inaccuracy in the final result.\nIntuitively, nodes can tag the aggregate value they transmit with information about which nodes have contributed to it.\nHowever, this approach is not scalable.\nOrder and Duplicate Insensitive (ODI) techniques have been proposed in the literature [10, 15].\nWe adopt the ODI approach pioneered by Flajolet and Martin (FM) for the purposes of aggregation.\nNext we outline the FM approach; for full details, see [7].\nSuppose we want to compute the number of nodes in the network, i.e. the COUNT query.\nTo do so, each node performs a coin toss experiment as follows: toss an unbiased coin, stopping after the first \"head\" is seen.\nThe node then sets the ith bit in a bit vector (initially filled with zeros), where i is the number of coin tosses it performed.\nThe intuition is that as the number of nodes doing coin toss experiments increases, the probability of a more significant bit being set in one of the nodes' bit vectors increases.\nThese bit vectors are then exchanged among nodes.\nWhen a node receives a bit vector, it updates its local bit vector by bitwise OR-ing it with the received vector (as shown in Figure 2 which computes AVERAGE).\nAt the end of the aggregation process, every node, with high probability, has the same bit vector.\nThe actual value of the count aggregate is then computed using the following formula, AGG_FM = 2^(j\u22121) \/ 0.77351, where j represents the bit position of the least significant zero in the aggregate bit vector [7].\nAlthough such aggregates are very compact in nature, requiring only O(log N) state space (where N 
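The coin-toss construction and the OR-merge described above can be sketched in a few lines. This is our illustration, following our reading of the count formula (2 to the power j−1, divided by 0.77351); the vector width, node count, and seed are arbitrary:

```python
import random

PHI = 0.77351   # Flajolet-Martin correction constant
WIDTH = 32      # bit-vector width; arbitrary for this sketch

def coin_toss_index():
    """Toss a fair coin until the first head; return the toss count (>= 1)."""
    i = 1
    while random.random() < 0.5:
        i += 1
    return i

def local_sketch():
    """The bit vector a single node contributes to a COUNT query."""
    bits = [0] * WIDTH
    bits[min(coin_toss_index(), WIDTH) - 1] = 1  # ith toss sets the ith bit
    return bits

def merge(a, b):
    """ODI merge: bitwise OR, so duplicates and ordering cannot skew the result."""
    return [x | y for x, y in zip(a, b)]

def estimate(bits):
    """COUNT estimate: 2^(j-1) / PHI, j = 1-indexed least significant zero."""
    j = next(i for i, b in enumerate(bits, start=1) if b == 0)
    return 2 ** (j - 1) / PHI

# 1000 nodes' sketches, merged pairwise in any order:
random.seed(42)
agg = [0] * WIDTH
for _ in range(1000):
    agg = merge(agg, local_sketch())
print(estimate(agg))  # coarse estimate of 1000; a single vector can err by ~50%
```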
is the number of nodes), they may not be very accurate as they can only approximate values to the closest power of 2, potentially causing errors of up to 50%.\nMore accurate aggregates can be computed by maintaining multiple bit vectors at each node, as explained in [7].\nThis decreases the error to within O(1 \/ \u221a m), where m is the number of such bit vectors.\nQueries other than count can also be computed using variants of this basic counting algorithm, as discussed in [3] (and shown in Figure 2).\nTransmitting FM bit vectors between nodes is done using randomized gossiping, discussed next.\n4.2 Gossip Protocols\nGossip-based protocols operate in discrete time-steps; a time-step is the required amount of time for all transmissions in that time-step to complete.\nAt every time-step, each node having something to send randomly selects one or more neighboring nodes and transmits its data to them.\nThe randomized propagation of information provides fault-tolerance and resilience to network failures and outages.\nWe emphasize that this characteristic of the protocol also allows it to operate without relying on any underlying network structure.\nGossip protocols have been shown to provide exponentially fast convergence, on the order of O(log N) [10], where N is the number of nodes (or radios).\nThese protocols can therefore easily scale to very dense environments.\nTwo types of gossip protocols are:\n\u2022 Uniform Gossip: In uniform gossip, at each time-step, each node chooses a random neighbor and sends its data to it.\nThis process repeats for O(log N) steps (where N is the number of nodes in the network).\nUniform gossip provides exponentially fast convergence, with low network overhead [10].\n\u2022 Random Walk: In random walk, only a subset of the nodes (termed designated nodes) communicate in a particular time-step.\nAt startup, k nodes are randomly elected as designated nodes.\nIn each time-step, each designated node sends its data to a random 
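A toy simulation makes the exponentially fast spreading concrete. In this sketch of ours, a clique of N nodes pushes a single update flag (rather than full FM vectors), and the resulting step count tracks log2(N):

```python
import math
import random

def uniform_gossip_steps(n, seed=1):
    """Time-steps until an update known to one node reaches all n nodes,
    with every informed node pushing to one random neighbor per step
    (clique topology; a simplification of the protocol in Section 4.2)."""
    random.seed(seed)
    informed = [False] * n
    informed[0] = True
    steps = 0
    while not all(informed):
        steps += 1
        senders = [i for i in range(n) if informed[i]]  # synchronous round
        for _ in senders:
            informed[random.randrange(n)] = True
    return steps

# Step counts grow logarithmically, not linearly, in n:
for n in (100, 1000, 10000):
    print(n, uniform_gossip_steps(n), round(math.log2(n), 1))
```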
neighbor, which becomes designated for the subsequent time-step (much like passing a token).\nThis process repeats until the aggregate has converged in the network.\nRandom walk has been shown to provide similar convergence bounds as uniform gossip in problems of similar context [8, 12].\n5.\nINCREMENTAL PROTOCOLS\n5.1 Incremental FM Aggregates\nOne limitation of FM aggregation is that it does not support updates.\nDue to the probabilistic nature of FM, once bit vectors have been ORed together, information cannot simply be removed from them as each node's contribution has not been recorded.\nWe propose the use of delete vectors, an extension of FM to support updates.\nWe maintain a separate aggregate delete vector whose value is subtracted from the original aggregate vector's value to obtain the resulting value, AGG = (2^(a\u22121) \u2212 2^(b\u22121)) \/ 0.77351.\nHere, a and b represent the bit positions of the least significant zero in the original and delete bit vectors respectively.\nSuppose we wish to compute the average signal level detected in a particular frequency.\nTo compute this, we compute the SUM of all signal level measurements and divide that by the COUNT of the number of measurements.\nA SUM aggregate is computed similar to COUNT (explained in Section 4.1), except that each node performs s coin toss experiments, where s is the locally measured signal level.\nFigure 2 illustrates the sequence by which the average signal energy is computed in a particular band using FM aggregation.\nNow suppose that the measured signal at a node changes from s to s'.\nThe vectors are updated as follows.\n\u2022 s' > s: We simply perform (s' \u2212 s) more coin toss experiments and bitwise OR the result with the original bit vector.\n\u2022 s' < s: We increase the value of the delete vector by performing (s \u2212 s') coin toss experiments and bitwise OR the result with the current delete vector.\nUsing delete vectors, we can now support updates to the measured signal level.\nWith the original implementation of 
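The two update rules can be sketched directly. This is our illustration: it reads the delete vector as a second FM estimate subtracted from the first, treats an all-zero delete vector as contributing 0 (our convention), and uses arbitrary example signal levels:

```python
import random

PHI = 0.77351   # Flajolet-Martin correction constant
WIDTH = 32      # bit-vector width; arbitrary for this sketch

def toss():
    """Fair-coin tosses until the first head (1-indexed bit position)."""
    i = 1
    while random.random() < 0.5:
        i += 1
    return i

def add_experiments(bits, count):
    """OR `count` coin-toss experiments into an FM bit vector, in place."""
    for _ in range(count):
        bits[min(toss(), WIDTH) - 1] = 1

def fm_value(bits):
    """Plain FM estimate: 2^(a-1) / PHI, a = 1-indexed least significant zero.
    An all-zero vector (e.g. an untouched delete vector) is read as 0."""
    if not any(bits):
        return 0.0
    a = next(i for i, b in enumerate(bits, start=1) if b == 0)
    return 2 ** (a - 1) / PHI

def incremental_value(original, delete):
    """Incremental FM aggregate: original estimate minus delete estimate."""
    return fm_value(original) - fm_value(delete)

# A node's measured SUM contribution changes from s = 40 to s' = 25:
random.seed(5)
orig, dele = [0] * WIDTH, [0] * WIDTH
add_experiments(orig, 40)        # initial contribution, s = 40
add_experiments(dele, 40 - 25)   # s' < s: push (s - s') into the delete vector
print(incremental_value(orig, dele))
```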
FM, the aggregate would need to be discarded and a new one recomputed every time an update occurred.\nThus, delete vectors provide a low overhead alternative for applications whose data changes incrementally, such as signal level measurements in a coordinated spectrum sensing environment.\nNext we discuss how these aggregates can be communicated between devices using incremental routing protocols.\n5.2 Incremental Routing Protocol\nWe use the following incremental variants of the routing protocols presented in Section 4.2 to support incremental updates to previously computed aggregates.\nFigure 3: State diagram each device passes through as updates proceed in the system\n\u2022 Incremental Gossip Protocol (IGP): When an update occurs, the updated node initiates the gossiping procedure.\nOther nodes only begin gossiping once they receive the update.\nTherefore, nodes receiving the update become active and continue communicating with their neighbors until the update protocol terminates, after O(log N) time steps.\n\u2022 Incremental Random Walk Protocol (IRWP): When an update (or updates) occurs in the system, instead of starting random walks at k random nodes in the network, all k random walks are initiated from the updated node(s).\nThe rest of the protocol proceeds in the same fashion as the standard random walk protocol.\nThe allocation of walks to updates is discussed in more detail in [3], where the authors show that the number of walks has an almost negligible impact on network overhead.\n6.\nPROTOCOL DETAILS\nUsing incremental routing protocols to disseminate incremental FM aggregates is a natural fit for the problem of coordinated spectrum sensing.\nHere we outline the implementation of such techniques for a cognitive radio network.\nWe continue with the example from Section 5.1, where we wish to perform coordination between a group of wireless devices to compute the average signal level in a particular frequency band.\nUsing either incremental random 
walk or incremental gossip, each device proceeds through three phases, in order to determine the global average signal level for a particular frequency band.\nFigure 3 shows a state diagram of these phases.\nSusceptible: Each device starts in the susceptible state and becomes infectious only when its locally measured signal level changes, or if it receives an update message from a neighboring device.\nIf a local change is observed, the device updates either the original or delete bit vector, as described in Section 5.1, and moves into the infectious state.\nIf it receives an update message, it ORs the received original and delete bit vectors with its local bit vectors and moves into the infectious state.\nNote, because signal level measurements may change sporadically over time, a smoothing function, such as an exponentially weighted moving average, should be applied to these measurements.\nInfectious: Once a device is infectious it continues to send its up-to-date bit vectors, using either incremental random walk or incremental gossip, to neighboring nodes.\nDue to FM's order and duplicate insensitive (ODI) properties, simultaneously occurring updates are handled seamlessly by the protocol.\nUpdate messages contain a time stamp indicating when the update was generated, and each device maintains a local time stamp of when it received the most recent update.\nFigure 4: Execution times of Incremental Protocols\nFigure 5: Network overhead of Incremental Protocols\nUsing this information, a device moves into the recovered state once enough time has passed for the most recent update to have converged.\nAs discussed in Section 4.2, this happens after O(log N) time steps.\nRecovered: A recovered device ceases to propagate any update information.\nAt this point, it performs clean-up and prepares for the next infection by entering the susceptible state.\nOnce all devices have entered the recovered state, the system will have converged, and with high probability, all 
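The three phases map naturally onto a small per-device state machine. The sketch below is ours and simplifies two details: convergence is detected by counting time-steps rather than by comparing update time stamps, and a recovered device re-enters the susceptible state on its next tick:

```python
from enum import Enum, auto

class State(Enum):
    SUSCEPTIBLE = auto()
    INFECTIOUS = auto()
    RECOVERED = auto()

class Device:
    """Per-device state machine for the update protocol (cf. Figure 3)."""

    def __init__(self, converge_steps):
        self.state = State.SUSCEPTIBLE
        self.converge_steps = converge_steps  # O(log N) time-steps
        self.steps_infectious = 0

    def on_local_change(self):
        # Update the original or delete bit vector (Section 5.1), then:
        if self.state is State.SUSCEPTIBLE:
            self.state = State.INFECTIOUS

    def on_update_message(self):
        # OR the received original and delete vectors into the local ones, then:
        if self.state is State.SUSCEPTIBLE:
            self.state = State.INFECTIOUS

    def tick(self):
        """Advance one gossip time-step."""
        if self.state is State.INFECTIOUS:
            # Send up-to-date bit vectors to neighbors here.
            self.steps_infectious += 1
            if self.steps_infectious >= self.converge_steps:
                self.state = State.RECOVERED
        elif self.state is State.RECOVERED:
            # Clean up and await the next infection.
            self.state = State.SUSCEPTIBLE
            self.steps_infectious = 0

d = Device(converge_steps=4)  # ~log2(N) for a 16-node network, say
d.on_update_message()
print(d.state)  # State.INFECTIOUS
```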
devices will have the up-to-date average signal level.\nDue to the cumulative nature of FM, even if all devices have not converged, the next update will include all previous updates.\nNevertheless, the probability that gossip fails to converge is small, and has been shown to be O (1\/N) [10].\nFor coordinated spectrum sensing, non-incremental routing protocols can be implemented in a similar fashion.\nRandom walk would operate by having devices periodically drop the aggregate and re-run the protocol.\nEach device would perform a coin toss (biased on the number of walks) to determine whether or not it is a designated node.\nThis is different from the protocol discussed above where only updated nodes initiate random walks.\nSimilar techniques can be used to implement standard gossip.\n7.\nEVALUATION\nWe now provide a preliminary evaluation of GUESS in simulation.\nA more detailed evaluation of this approach can be found in [3].\nHere we focus on how incremental extensions to gossip protocols can lead to further improvements over standard gossiping techniques, for the problem of coordinated spectrum sensing.\nSimulation Setup: We implemented a custom simulator in C++.\nWe study the improvements of our incremental gossip protocols over standard gossiping in two dimensions: execution time and network overhead.\nWe use two topologies to represent device connectivity: a clique, to eliminate the effects of the underlying topology on protocol performance, and a BRITE-generated [13] power-law random graph (PLRG), to illustrate how our results extend to more realistic scenarios.\nWe simulate a large deployment of 1,000 devices to analyze protocol scalability.\nIn our simulations, we compute the average signal level in a particular band by disseminating FM bit vectors.\nIn each run of the simulation, we induce a change in the measured signal at one or more devices.\nA run ends when the new average signal level has converged in the network.\nFor each data point, we ran 100 
simulations and 95% confidence intervals (error bars) are shown.\nSimulation Parameters: Each transmission involves sending 70 bits of information to a neighboring node.\nTo compute the AVERAGE aggregate, four bit vectors need to be transmitted: the original SUM vector, the SUM delete vector, the original COUNT vector, and the COUNT delete vector.\nNon-incremental protocols do not transmit the delete vectors.\nEach transmission also includes a time stamp of when the update was generated.\nWe assume nodes communicate on a common control channel at 2 Mbps.\nTherefore, one time-step of protocol execution corresponds to the time required for 1,000 nodes to sequentially send 70 bits at 2 Mbps.\nSequential use of the control channel is a worst case for our protocols; in practice, multiple control channels could be used in parallel to reduce execution time.\nWe also assume nodes are loosely time synchronized, the implications of which are discussed further in [3].\nFinally, in order to isolate the effect of protocol operation on performance, we do not model the complexities of the wireless channel in our simulations.\nIncremental Protocols Reduce Execution Time: Figure 4 (a) compares the performance of incremental gossip (IGP) with uniform gossip on a clique topology.\nWe observe that both protocols have almost identical execution times.\nThis is expected as IGP operates in a similar fashion to uniform gossip, taking O(log N) time-steps to converge.\nFigure 4 (b) compares the execution times of incremental random walk (IRWP) and standard random walk on a clique.\nIRWP reduces execution time by a factor of 2.7 for a small number of measured signal changes.\nAlthough random walk and IRWP both use k random walks (in our simulations k = number of nodes), IRWP initiates walks only from updated nodes (as explained in Section 5.2), resulting in faster information convergence.\nThese improvements carry over to a PLRG topology as well (as shown in Figure 4 (c)), where IRWP is 
1.33 times faster than random walk.\nIncremental Protocols Reduce Network Overhead: Figure 5 (a) shows the ratio of data transmitted using uniform gossip relative to incremental gossip on a clique.\nFor a small number of signal changes, incremental gossip incurs 2.4 times less overhead than uniform gossip.\nThis is because in the early steps of protocol execution, only devices which detect signal changes communicate.\nAs more signal changes are introduced into the system, gossip and incremental gossip incur approximately the same overhead.\nSimilarly, incremental random walk (IRWP) incurs much less overhead than standard random walk.\nFigure 5 (b) shows a 2.7 fold reduction in overhead for small numbers of signal changes on a clique.\nAlthough each protocol uses the same number of random walks, IRWP uses fewer network resources than random walk because it takes less time to converge.\nThis improvement also holds true on more complex PLRG topologies (as shown in Figure 5 (c)), where we observe a 33% reduction in network overhead.\nFrom these results it is clear that incremental techniques yield significant improvements over standard approaches to gossip, even on complex topologies.\nBecause spectrum utilization is characterized by incremental changes to usage, incremental protocols are ideally suited to solve this problem in an efficient and cost effective manner.\n8.\nDISCUSSION AND FUTURE WORK We have only just scratched the surface in addressing the problem of coordinated spectrum sensing using incremental gossiping.\nNext, we outline some open areas of research.\nSpatial Decay: Devices performing coordinated sensing are primarily interested in the spectrum usage of their local neighborhood.\nTherefore, we recommend the use of spatially decaying aggregates [6], which limits the impact of an update on more distant nodes.\nSpatially decaying aggregates work by successively reducing (by means of a decay function) the value of the update as it propagates further from 
its origin.\nOne challenge with this approach is that propagation distance cannot be determined ahead of time and more importantly, exhibits spatio-temporal variations.\nTherefore, finding the optimal decay function is non-trivial, and an interesting subject of future work.\nSignificance Threshold: RF spectrum bands continually experience small-scale changes which may not necessarily be significant.\nDeciding if a change is significant can be done using a significance threshold \u03b2, below which any observed change is not propagated by the node.\nChoosing an appropriate operating value for \u03b2 is application dependent, and explored further in [3].\nWeighted Readings: Although we argued that most devices will likely be equipped with low-cost sensing equipment, there may be situations where there are some special infrastructure nodes that have better sensing abilities than others.\nWeighting their measurements more heavily could be used to maintain a higher degree of accuracy.\nDetermining how to assign such weights is an open area of research.\nImplementation Specifics: Finally, implementing gossip for coordinated spectrum sensing is also open.\nIf implemented at the MAC layer, it may be feasible to piggy-back gossip messages over existing management frames (e.g. 
networking advertisement messages).\nWe also require the use of a control channel to disseminate sensing information.\nThere are a variety of alternatives for implementing such a channel, some of which are outlined in [4].\nThe trade-offs of different approaches to implementing GUESS are a subject of future work.\n9.\nCONCLUSION\nSpectrum sensing is a key requirement for dynamic spectrum allocation in cognitive radio networks.\nThe nature of the RF environment necessitates coordination between cognitive radio devices.\nWe propose GUESS, an approximate yet low overhead approach to perform efficient coordination between cognitive radios.\nThe fundamental contributions of GUESS are: (1) an FM aggregation scheme for efficient in-network aggregation, (2) a randomized gossiping approach which provides exponentially fast convergence and robustness to network alterations, and (3) incremental variations of FM and gossip which we show can reduce the communication time by up to a factor of 2.7 and reduce network overhead by up to a factor of 2.4.\nOur preliminary simulation results showcase the benefits of this approach and we also outline a set of open problems that make this a new and exciting area of research.","keyphrases":["spectrum sens","rf spectrum","rf interfer","cognit radio","spectrum alloc","coordin sens","fm aggreg","increment gossip protocol","opportunist spectrum share","spatial decai aggreg","innetwork aggreg","coordin spectrum sens","gossip protocol","increment algorithm"],"prmu":["P","P","M","M","M","R","U","R","R","U","U","R","R","M"]} {"id":"J-56","title":"Robust Solutions for Combinatorial Auctions","abstract":"Bids submitted in auctions are usually treated as enforceable commitments in most bidding and auction theory literature. In reality bidders often withdraw winning bids before the transaction when it is in their best interests to do so. 
Given a bid withdrawal in a combinatorial auction, finding an alternative repair solution of adequate revenue without causing undue disturbance to the remaining winning bids in the original solution may be difficult or even impossible. We have called this the Bid-taker's Exposure Problem. When faced with such unreliable bidders, it is preferable for the bid-taker to preempt such uncertainty by having a solution that is robust to bid withdrawal and provides a guarantee that possible withdrawals may be repaired easily with a bounded loss in revenue. In this paper, we propose an approach to addressing the Bidtaker's Exposure Problem. Firstly, we use the Weighted Super Solutions framework [13], from the field of constraint programming, to solve the problem of finding a robust solution. A weighted super solution guarantees that any subset of bids likely to be withdrawn can be repaired to form a new solution of at least a given revenue by making limited changes. Secondly, we introduce an auction model that uses a form of leveled commitment contract [26, 27], which we have called mutual bid bonds, to improve solution reparability by facilitating backtracking on winning bids by the bid-taker. We then examine the trade-off between robustness and revenue in different economically motivated auction scenarios for different constraints on the revenue of repair solutions. We also demonstrate experimentally that fewer winning bids partake in robust solutions, thereby reducing any associated overhead in dealing with extra bidders. 
Robust solutions can also provide a means of selectively discriminating against distrusted bidders in a measured manner.","lvl-1":"Robust Solutions for Combinatorial Auctions \u2217 Alan Holland Cork Constraint Computation Centre Department of Computer Science University College Cork, Ireland a.holland@4c.ucc.ie Barry O``Sullivan Cork Constraint Computation Centre Department of Computer Science University College Cork, Ireland b.osullivan@4c.ucc.ie ABSTRACT Bids submitted in auctions are usually treated as enforceable commitments in most bidding and auction theory literature.\nIn reality bidders often withdraw winning bids before the transaction when it is in their best interests to do so.\nGiven a bid withdrawal in a combinatorial auction, finding an alternative repair solution of adequate revenue without causing undue disturbance to the remaining winning bids in the original solution may be difficult or even impossible.\nWe have called this the Bid-taker``s Exposure Problem.\nWhen faced with such unreliable bidders, it is preferable for the bid-taker to preempt such uncertainty by having a solution that is robust to bid withdrawal and provides a guarantee that possible withdrawals may be repaired easily with a bounded loss in revenue.\nIn this paper, we propose an approach to addressing the Bidtaker``s Exposure Problem.\nFirstly, we use the Weighted Super Solutions framework [13], from the field of constraint programming, to solve the problem of finding a robust solution.\nA weighted super solution guarantees that any subset of bids likely to be withdrawn can be repaired to form a new solution of at least a given revenue by making limited changes.\nSecondly, we introduce an auction model that uses a form of leveled commitment contract [26, 27], which we have called mutual bid bonds, to improve solution reparability by facilitating backtracking on winning bids by the bid-taker.\nWe then examine the trade-off between robustness and revenue in different economically 
motivated auction scenarios for different constraints on the revenue of repair solutions. We also demonstrate experimentally that fewer winning bids partake in robust solutions, thereby reducing any associated overhead in dealing with extra bidders. Robust solutions can also provide a means of selectively discriminating against distrusted bidders in a measured manner.

Categories and Subject Descriptors: J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Computers and Society]: Electronic Commerce; I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search.

General Terms: Algorithms, Economics, Reliability.

*This work has received support from Science Foundation Ireland under grant number 00/PI.1/C075. The authors wish to thank Brahim Hnich and the anonymous reviewers for their helpful comments.

1. INTRODUCTION

A combinatorial auction (CA) [5] provides an efficient means of allocating multiple distinguishable items amongst bidders whose perceived valuations for combinations of items differ. Such auctions are gaining in popularity, and there is a proliferation in their usage across industries such as telecoms, B2B procurement and transportation [11, 19]. Revenue is the most obvious optimization criterion for such auctions, but another desirable attribute is solution robustness. In terms of combinatorial auctions, a robust solution is one that can withstand bid withdrawal (a break) by making changes easily to form a repair solution of adequate revenue. A brittle solution to a CA is one in which an unacceptable loss in revenue is unavoidable if a winning bid is withdrawn. In such situations the bid-taker may be left with a set of items deemed to be of low value by all other bidders. These bidders may associate a higher value with these items if they were combined with items already awarded to others, hence the bid-taker is left in an undesirable local optimum in which a form of backtracking is required to reallocate the
items in a manner that results in sufficient revenue. We have called this the Bid-taker's Exposure Problem; it bears similarities to the Exposure Problem faced by bidders seeking multiple items in separate single-unit auctions but holding little or no value for a subset of those items. However, reallocating items may be regarded as disruptive to a solution in many real-life scenarios. Consider a scenario where procurement for a business is conducted using a CA. It would be highly undesirable to retract contracts from a group of suppliers because of the failure of a third party. A robust solution that is tolerant of such breaks is preferable.

Robustness may be regarded as a preventative measure protecting against future uncertainty by sacrificing revenue in place of solution stability and reparability. We assume a probabilistic approach whereby the bid-taker has knowledge of the reliability of bidders, from which the likelihood of an incomplete transaction may be inferred. Repair solutions are required for bids that are seen as brittle (i.e.
likely to break). Repairs may also be required for sets of bids deemed brittle. We propose the use of the Weighted Super Solutions (WSS) framework [13] for constraint programming, which is ideal for establishing such robust solutions. As we shall see, this framework can enforce constraints on solutions so that possible breakages are reparable.

This paper is organized as follows. Section 2 presents the Winner Determination Problem (WDP) for combinatorial auctions, outlines some possible reasons for bid withdrawal, and shows how simply maximizing expected revenue can lead to intolerable revenue losses for risk-averse bid-takers. This motivates the use of robust solutions, and Section 3 introduces a constraint programming (CP) framework, Weighted Super Solutions [13], that finds such solutions. We then propose an auction model in Section 4 that enhances reparability by introducing mandatory mutual bid bonds, which may be seen as a form of leveled commitment contract [26, 27]. Section 5 presents an extensive empirical evaluation of the approach presented in this paper, in the context of a number of well-known combinatorial auction distributions, with very encouraging results. Section 6 discusses possible extensions and questions raised by our research that deserve future work. Finally, in Section 7 a number of concluding remarks are made.

2. COMBINATORIAL AUCTIONS

Before presenting the technical details of our solution to the Bid-taker's Exposure Problem, we shall present a brief survey of combinatorial auctions and existing techniques for handling bid withdrawal. Combinatorial auctions involve a single bid-taker allocating multiple distinguishable items amongst a group of bidders. The bid-taker has a set of m items for sale, M = {1, 2, ..., m}, and bidders submit a set of bids, B = {B1, B2, ...
, Bn}. A bid is a tuple Bj = ⟨Sj, pj⟩, where Sj ⊆ M is a subset of the items for sale and pj ≥ 0 is a price. The WDP for a CA is to label all bids as either winning or losing so as to maximize the revenue from winning bids without allocating any item to more than one bid. The following is the integer programming formulation for the WDP:

  max Σ_{j=1}^{n} pj·xj  subject to  Σ_{j | i ∈ Sj} xj ≤ 1, ∀i ∈ {1 ... m},  xj ∈ {0, 1}.

This problem is NP-complete [23] and inapproximable [25], and is otherwise known as the Set Packing Problem. The above formulation assumes the notion of free disposal: the optimal solution need not necessarily sell all of the items. If the auction rules stipulate that all items must be sold, the problem becomes a Set Partition Problem [5]. The WDP has been extensively studied in recent years. The fastest search algorithms that find optimal solutions (e.g. CABOB [25]) can, in practice, solve very large problems involving thousands of bids very quickly.

2.1 The Problem of Bid Withdrawal

We assume an auction protocol with a three-stage process involving the submission of bids, winner determination, and finally a transaction phase. We are interested in bid withdrawals that occur between the announcement of winning bids and the end of the transaction phase. All bids are valid until the transaction is complete, so we anticipate an expedient transaction process.¹ An example of a winning bid withdrawal occurred in an FCC spectrum auction [32]. Withdrawals, or breaks, may occur for various reasons.

¹ In some instances the transaction period may be so lengthy that consideration of non-winning bids as still being valid may not be fair. Breaks that occur during a lengthy transaction phase are more difficult to remedy and may require a subsequent auction. For example, if the item is a service contract for a given period of time and the break occurs after partial fulfilment of this contract, the other bidders' valuations for the item may have decreased in a non-linear fashion.
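For small instances the WDP above can be checked by exhaustive search over the 2^n win/lose labelings. The following stdlib-only sketch is illustrative only (the helper name `solve_wdp` is ours, and this is nothing like the CABOB-style branch-and-bound cited above; it is exponential in the number of bids):

```python
from itertools import product

def solve_wdp(items, bids):
    """Brute-force winner determination: label each bid win/lose to
    maximize revenue without allocating any item to more than one bid.
    bids: list of (item_set, price) tuples."""
    best_rev, best_label = 0, None
    for label in product((0, 1), repeat=len(bids)):
        won = [b for x, b in zip(label, bids) if x]
        used = [i for s, _ in won for i in s]
        if len(used) == len(set(used)):          # no item sold twice
            rev = sum(p for _, p in won)
            if rev > best_rev:
                best_rev, best_label = rev, label
    return best_rev, best_label

# The running example of Table 1: bids on items A, B and the pair AB.
bids = [({'A'}, 100), ({'B'}, 100), ({'A', 'B'}, 190)]
print(solve_wdp({'A', 'B'}, bids))   # (200, (1, 1, 0)): bids 1 and 2 win
```

Note that free disposal is implicit here: a labeling that leaves items unsold is still feasible.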
Bid withdrawal may also be instigated by the bid-taker, when Quality of Service agreements are broken or payment deadlines are not met. We refer to bid withdrawal by the bid-taker as item withdrawal in this paper, to distinguish between the actions of a bidder and the bid-taker. Harstad and Rothkopf [8] outlined several possibilities for breaks in single-item auctions, including:

1. an erroneous initial valuation/bid;
2. unexpected events outside the winning bidder's control;
3. a desire to have the second-best bid honored;
4. information obtained, or events that occurred, after the auction but before the transaction that reduce the value of an item;
5. the revelation of competing bidders' valuations implying reduced profitability, a problem known as the Winner's Curse.

Kastner et al. [15] examined how to handle perturbations given a solution, whilst minimizing necessary changes to that solution. These perturbations may include bid withdrawals, changes to the valuation/items of a bid, or the submission of a new bid. They looked at the problem of finding incremental solutions to restructure a supply chain whose formation is determined using combinatorial auctions [30]. Following a perturbation in the optimal solution, they proceed to impose involuntary item withdrawals from winning bidders. They formulated an incremental integer linear program (ILP) that sought to maximize the valuation of the repair solution whilst preserving the previous solution as much as possible.

2.2 Being Proactive against Bid Withdrawal

When a bid is withdrawn there may be constraints on how the solution can be repaired. If the bid-taker were freely able to revoke the awarding of items to other bidders, then the solution could be repaired easily by reassigning all the items to the optimal solution without the withdrawn bid. Alternatively, the bidder who reneged upon a bid may have all his other bids disqualified, and the items could be reassigned based on the optimal solution without that
bidder present. However, the bid-taker is often unable to freely reassign the items already awarded to other bidders. When items cannot be withdrawn from winning bidders following the failure of another bidder to honor his bid, repair solutions are restricted to the set of bids whose items only include those in the bid(s) that were reneged upon. We are free to award items to any of the previously unsuccessful bids when finding a repair solution.

When faced with uncertainty over the reliability of bidders, a possible approach is to maximize expected revenue. This approach does not make allowances for risk-averse bid-takers who may view a small possibility of very low revenue as unacceptable. Consider the example in Table 1, and the optimal expected revenue in the situation where a single bid may be withdrawn. There are three submitted bids for items A and B, the third being a combination bid for the pair of items at a value of 190. The optimal solution has a value of 200, with the first and second bids as winners. When we consider the probabilities of failure, in the fourth column, the problem of which solution to choose becomes more difficult.

Table 1: Example Combinatorial Auction.

                Items
  Bids    A     B     AB    Withdrawal prob.
  x1      100   0     0     0.1
  x2      0     100   0     0.1
  x3      0     0     190   0.1

Computing the expected revenue for the solution with the first and second bids winning the items, denoted ⟨1, 1, 0⟩, gives:

  (200 × 0.9 × 0.9) + (2 × 100 × 0.9 × 0.1) + (190 × 0.1 × 0.1) = 181.90.

If a single bid is withdrawn there is a probability of 0.18 of a revenue of 100, given the fact that we cannot withdraw an item from the other winning bidder. The expected revenue for ⟨0, 0, 1⟩ is:

  (190 × 0.9) + (200 × 0.1) = 191.00.

We can therefore surmise that the second solution is preferable to the first based on expected revenue. Determining the maximum expected revenue in the presence of such
uncertainty becomes computationally infeasible, however, as the number of brittle bids grows: a WDP needs to be solved for all possible combinations of bids that may fail. The possible loss in revenue for breaks is also not tightly bounded using this approach, so a large loss may be possible for a small number of breaks. Consider the previous example where the bid amount for x3 becomes 175. The expected revenue of ⟨1, 1, 0⟩ (181.75) becomes greater than that of ⟨0, 0, 1⟩ (177.50). Some bid-takers may prefer the latter solution, because its revenue is never less than 175, whereas the former solution returns a revenue of only 100 with probability 0.18. A risk-averse bid-taker may not tolerate such a possibility, preferring to sacrifice revenue for reduced risk. If we modify our repair search so that a solution of at least a given revenue is guaranteed, the search for a repair solution becomes a satisfiability test rather than an optimization problem.

The approaches described above are in contrast to that which we propose in the next section. Our approach can be seen as preventative, in that we find an initial allocation of items to bidders that is robust to bid withdrawal. Possible losses in revenue are bounded by a fixed percentage of the true optimal allocation. Perturbations to the original solution are also limited, so as to minimize disruption. We regard this as the ideal approach for real-world combinatorial auctions.

DEFINITION 1 (ROBUST SOLUTION FOR A CA). A robust solution for a combinatorial auction is one where any subset of successful bids whose probability of withdrawal is greater than or equal to α can be repaired, by reassigning items at a cost of at most β to other previously losing bids, to form a repair solution. Constraints on acceptable revenue, e.g. being a minimum percentage of the optimum, are defined in the problem model and are thus satisfied by all solutions.
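The expected-revenue comparisons of Section 2.2 are easy to reproduce numerically. A minimal sketch (function names are ours; the solution vectors, withdrawal probability p = 0.1 and repair revenues are those of the Table 1 discussion):

```python
def exp_rev_110(pair_bid, p=0.1):
    """Expected revenue of <1,1,0> (bids 1 and 2 win): both survive,
    exactly one withdraws (repair revenue 100, since the other winner
    keeps his item), or both withdraw (repair sells the pair)."""
    q = 1 - p
    return 200 * q * q + 2 * 100 * q * p + pair_bid * p * p

def exp_rev_001(pair_bid, p=0.1):
    """Expected revenue of <0,0,1> (pair bid wins): it survives, or it
    is withdrawn and the repair sells A and B separately for 200."""
    return pair_bid * (1 - p) + 200 * p

print(round(exp_rev_110(190), 2), round(exp_rev_001(190), 2))  # 181.9 191.0
print(round(exp_rev_110(175), 2), round(exp_rev_001(175), 2))  # 181.75 177.5
```

With a pair bid of 190, ⟨0, 0, 1⟩ wins on expectation; dropping the pair bid to 175 reverses the ordering, which is exactly the risk-aversion dilemma described above.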
The maximum cost of repair, β, may be a fixed value that can be thought of as a fund for compensating winning bidders whose items are withdrawn from them when creating a repair solution. Alternatively, β may be a function of the bids that were withdrawn; Section 4 will give an example of such a mechanism. In the following section we describe an ideal constraint-based framework for the establishment of such robust solutions.

3. FINDING ROBUST SOLUTIONS

In constraint programming (CP) [4], a constraint satisfaction problem (CSP) is modeled as a set of n variables X = {x1, ..., xn}; a set of domains D = {D(x1), ..., D(xn)}, where D(xi) is the set of finite possible values for variable xi; and a set C = {C1, ..., Cm} of constraints, each restricting the assignments of some subset of the variables in X. Constraint satisfaction involves finding values for each of the problem variables such that all constraints are satisfied. Its main advantages are its declarative nature and its flexibility in tackling problems with arbitrary side constraints. Constraint optimization seeks to find a solution to a CSP that optimizes some objective function. A common technique for solving constraint optimization problems is branch and bound, which avoids exploring subtrees that are known not to contain a better solution than the best found so far. An initial bound can be determined by finding a solution that satisfies all constraints in C, or by using heuristic methods.

A classical super solution (SS) is a solution to a CSP in which, if a small number of variables lose their values, repair solutions are guaranteed with only a few changes, thus providing solution robustness [9, 10]. It is a generalization of both fault tolerance in CP [31] and supermodels in propositional satisfiability (SAT) [7]. An (a,b)-super solution is one in which,
if at most a variables lose their values, a repair solution can be found by changing at most b other variables [10]. Super solutions for combinatorial auctions minimize the number of bids whose status needs to be changed when forming a repair solution [12]. Only a particular set of variables in the solution may be subject to change, and these are said to be members of the break-set. For each combination of brittle assignments in the break-set, a repair-set is required that comprises the set of variables whose values must change to provide another solution. The cardinality of the repair-set is used to measure the cost of repair. In reality, changing some variable assignments in a repair solution incurs a lower cost than others, thereby motivating the use of a different metric for determining the legality of repair-sets.

The Weighted Super Solution (WSS) framework [13] considers the cost of the repair required, rather than simply the number of assignments modified, to form an alternative solution. For CAs this may be a measure of the compensation penalties paid to winning bidders to break existing agreements. Robust solutions are particularly desirable for applications where unreliability is a problem and potential breakages may incur severe penalties. Weighted super solutions offer a means of expressing which variables are easily re-assigned and which incur a heavy cost [13]. Hebrard et al.
[9] describe how some variables may fail (such as machines in a job-shop problem) and others may not. A WSS generalizes this approach so that there is a probability of failure associated with each assignment, and sets of variables whose assignments have probabilities of failure greater than or equal to a threshold value, α, require repair solutions. A WSS measures the cost of repairing, or reassigning, other variables using inertia as a metric. Inertia is a measure of a variable's aversion to change, and it depends on the variable's current assignment, its future assignment and the breakage variable(s). It may be desirable to reassign items to different bidders in order to find a repair solution of satisfactory revenue. Compensation may have to be paid to bidders who lose items during the formation of a repair solution. The inertia of a bid reflects the cost of changing its state. For winning bids this may reflect the compensation penalty the bid-taker must pay to break the agreement (if such breaches are permitted), whereas for previously losing bids this is a free operation. The total amount of compensation payable to bidders may depend upon other factors, such as the cause of the break. There is a limit to how much these overall repair costs should be, and this is given by the value β. This value may not be known in advance and may depend upon the break. Therefore, β may be viewed as the fund used to compensate winning bidders for the unilateral withdrawal of their bids by the bid-taker.

Algorithm 1: WSS(int level, double α, double β): Boolean
begin
    if level > number of variables then return true
    choose unassigned variable x
    foreach value v in the domain of x do
        assign x : v
        if problem is consistent then
            foreach combination of brittle assignments, A do
                if ¬reparable(A, β) then return false
            if WSS(level + 1) then return true
        unassign x
    return false
end

In summary, an (α,β)-WSS allows any set of variables whose
probability of breaking is greater than or equal to α to be repaired, with changes to the original robust solution at a cost of at most β. The depth-first search for a WSS (see the pseudo-code description in Algorithm 1) maintains arc-consistency [24] at each node of the tree. As search progresses, the reparability of each previous assignment is verified at each node by extending a partial repair solution to the same depth as the current partial solution. This may be thought of as maintaining concurrent search trees for repairs. A repair solution is provided for every possible set of break variables, A. The WSS algorithm attempts to extend the current partial assignment by choosing a variable and assigning it a value. Backtracking may then occur for one of two reasons: we cannot extend the assignment to satisfy the given constraints, or the current partial assignment cannot be associated with a repair solution whose cost of repair is less than β should a break occur. The procedure reparable searches for partial repair solutions using backtracking and attempts to extend the last repair found, just as in (1,b)-super solutions [9]; the differences are that a repair is provided for a set of breakage variables rather than a single variable, and that the cost of repair is considered. A summation operator is used to determine the overall cost of repair. If a fixed bound upon the size of any potential break-set can be formed, the WSS algorithm is NP-complete. For a more detailed description of the WSS search algorithm, the reader is referred to [13], since a complete description of the algorithm is beyond the scope of this paper.

EXAMPLE 1. We shall step through the example given in Table 1 when searching for a WSS. Each bid is represented by a single variable with domain values 0 and 1, the former representing bid-failure and the latter bid-success. The probability of failure of a variable is 0.1 when it is assigned 1, and 0.0 otherwise. The
problem is initially solved using an ILP solver such as lp_solve [3] or CPLEX, and the optimal revenue is found to be 200. A fixed percentage of this revenue can be used as a threshold value for a robust solution and its repairs. The bid-taker wishes to have a robust solution so that if a single winning bid is withdrawn, a repair solution can be formed without withdrawing items from any other winning bidder. This example may be seen as searching for a (0.1, 0)-weighted super solution: β is 0 because no funds are available to compensate the withdrawal of items from winning bidders. The bid-taker is willing to compromise on revenue, but only by 5%, say, of the optimal value. Bids 1 and 3 cannot both succeed, since they both require item A, so a constraint is added precluding the assignment in which both variables take the value 1. Similarly, bids 2 and 3 cannot both win, so another constraint is added between these two variables. Therefore, in this example the set of CSP variables is V = {x1, x2, x3}, whose domains are all {0, 1}. The constraints are x1 + x3 ≤ 1, x2 + x3 ≤ 1 and Σ_{xi∈V} ai·xi ≥ 190, where ai reflects the relevant bid amount for the respective bid variable. In order to find a robust solution of optimal revenue, we seek to maximize the sum of these amounts: max Σ_{xi∈V} ai·xi.

When all variables are set to 0 (see Figure 1(a), branch 3), this is not a solution because the minimum revenue of 190 has not been met, so we try assigning bid3 to 1 (branch 4). This is a valid solution, but this variable is brittle because there is a 10% chance that this bid may be withdrawn (see Table 1). Therefore we need to determine whether a repair can be formed should it break. The search for a repair begins at the first node; see Figure 1(b). Notice that value 1 has been removed from bid3, because this search tree is simulating the withdrawal of this bid. When bid1 is set to 0 (branch 4.1), the maximum revenue solution in the remaining subtree
has revenue of only 100, therefore search is discontinued at that node of the tree. Bid1 and bid2 are both assigned to 1 (branches 4.2 and 4.4), and the total cost of both these changes is still 0, because no compensation needs to be paid for bids that change from losing to winning. With bid3 now losing (branch 4.5), this gives a repair solution of 200. Hence ⟨0, 0, 1⟩ is reparable and therefore a WSS. We continue our search in Figure 1(a), however, because we are seeking a robust solution of optimal revenue. When bid1 is assigned to 1 (branch 6) we seek a partial repair for this variable breaking (branch 5 is not considered since it offers insufficient revenue). The repair search sets bid1 to 0 in a separate search tree (not shown), and control is returned to the search for a WSS. Bid2 is set to 0 (branch 7), but this solution would not produce sufficient revenue, so bid2 is then set to 1 (branch 8). We then attempt to extend the repair for bid1 (not shown). This fails, because the repair for bid1 cannot assign bid2 to 0: the cost of repairing such an assignment would be ∞, given that the auction rules do not permit the withdrawal of items from winning bids. A repair for bid1 breaking is therefore not possible, because items have already been awarded to bid2. A repair solution with bid2 assigned to 1 does not produce sufficient revenue when bid1 is assigned to 0. The inability to withdraw items from winning bids implies that ⟨1, 1, 0⟩ is an irreparable solution when the minimum tolerable revenue is greater than 100. The italicized comments and dashed line in Figure 1(a) illustrate the search path for a WSS if both of these bids were deemed reparable. Section 4 introduces an alternative auction model that will allow the bid-taker to receive compensation for breakages and, in turn, use this payment to compensate other bidders for the withdrawal of items from winning bids. This will enable the reallocation of items and permit the establishment of ⟨1, 1, 0⟩ as a second WSS for this example.
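For an instance this small, the WSS condition of Example 1 can be verified by brute force. The sketch below is a plain enumeration, not the arc-consistency-based search of Algorithm 1, and all helper names and constants are ours; it checks, for every labeling that meets the 190 revenue threshold, that each brittle winning bid has a β = 0 repair which never withdraws an item from another winner:

```python
from itertools import product

# (item set, price, probability of withdrawal) for bids x1, x2, x3 of Table 1
BIDS = [({'A'}, 100, 0.1), ({'B'}, 100, 0.1), ({'A', 'B'}, 190, 0.1)]
MIN_REV = 190    # revenue threshold: within 5% of the optimal 200
ALPHA = 0.1      # assignments at least this likely to fail need a repair

def feasible(lab):
    used = [i for x, (s, _, _) in zip(lab, BIDS) if x for i in s]
    return len(used) == len(set(used))

def revenue(lab):
    return sum(p for x, (_, p, _) in zip(lab, BIDS) if x)

def has_repair(lab, broken):
    """beta = 0: the broken bid loses, no other winner may lose an item,
    and only previously losing bids may be switched to winning (free)."""
    for rep in product((0, 1), repeat=len(BIDS)):
        if rep[broken]:
            continue
        if any(x and not r
               for j, (x, r) in enumerate(zip(lab, rep)) if j != broken):
            continue                # would withdraw an item from a winner
        if feasible(rep) and revenue(rep) >= MIN_REV:
            return True
    return False

def is_wss(lab):
    """A (0.1, 0)-WSS: feasible, meets the threshold, every brittle winner reparable."""
    if not feasible(lab) or revenue(lab) < MIN_REV:
        return False
    return all(has_repair(lab, j)
               for j, (x, b) in enumerate(zip(lab, BIDS)) if x and b[2] >= ALPHA)

print([lab for lab in product((0, 1), repeat=len(BIDS)) if is_wss(lab)])
# only <0,0,1> survives: <1,1,0> is irreparable without item withdrawal
```

This reproduces the conclusion of Example 1: ⟨0, 0, 1⟩ is the only WSS when items cannot be withdrawn from winning bidders.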
4. MUTUAL BID BONDS: A BACKTRACKING MECHANISM

Some auction solutions are inherently brittle, and it may be impossible to find a robust solution. If we can alter the rules of an auction so that the bid-taker can retract items from winning bidders, then the reparability of solutions to such auctions may be improved. In this section we propose an auction model that permits bid and item withdrawal by the bidders and bid-taker, respectively. The model incorporates mutual bid bonds to enable solution reparability for the bid-taker, a form of insurance against the winner's curse for the bidder whilst also compensating bidders in the case of item withdrawal from winning bids.

[Figure 1: Search Tree for a WSS without item withdrawal. (a) Search for WSS. (b) Search for a repair for bid 3 breakage.]

We propose that such Winner's Curse & Bid-taker's Exposure insurance comprise a fixed percentage, κ, of the bid amount for all bids. Such mutual bid bonds are mandatory for each bid in our model.² The conditions attached to the bid bonds are that the bid-taker be allowed to annul winning bids (item withdrawal) when repairing breaks elsewhere in the solution. In the interests of fairness, compensation is paid to bidders from whom items are withdrawn, and it is equivalent to the penalty that would have been imposed on the bidder had he withdrawn the bid. Combinatorial auctions impose a heavy computational burden
on the bidder, so it is important that the hedging of risk be a simple and transparent operation for the bidder, so as not to increase this burden further unnecessarily. We also contend that it is imperative that the bidder know the potential penalty for withdrawal in advance of bid submission. This information is essential for bidders when determining how aggressive they should be in their bidding strategy.

Bid bonds are commonplace in procurement for construction projects. Usually they are mandatory for all bids, are a fixed percentage, κ, of the bid amount, and are unidirectional in that item withdrawal by the bid-taker is not permitted. Mutual bid bonds may be seen as a form of leveled commitment contract in which both parties may break the contract for the same fixed penalty. Such contracts permit unilateral decommitment for prespecified penalties. Sandholm et al. showed that this can increase the expected payoffs of all parties and enables deals that would be impossible under full commitment [26, 28, 29]. In practice a bid bond typically ranges between 5 and 20% of the bid amount [14, 18].

² Making the insurance optional may be beneficial in some instances. If a bidder does not agree to the insurance, it may be inferred that he has accurately determined the valuation for the items and is therefore less likely to fall victim to the winner's curse. The probability of such a bid being withdrawn may be lower, so a repair solution may be deemed unnecessary for this bid. On the other hand, it decreases the reparability of solutions.

If the decommitment penalties are the same for both parties in all bids, κ does not influence the reparability of a given set of bids. It merely influences the levels of penalties and compensation transacted by agents. Low values of κ incur low bid withdrawal penalties and simulate a dictatorial bid-taker who does not adequately compensate bidders for item withdrawal. Andersson and Sandholm [1] found
that myopic agents reach a higher social welfare more quickly if they act selfishly rather than cooperatively when penalties in leveled commitment contracts are low. Increased levels of bid withdrawal are also likely when the penalties are low. High values of κ tend towards full commitment and reduce the advantages of such Winner's Curse & Bid-taker's Exposure insurance. The penalties paid are used to fund a reassignment of items, forming a repair solution of sufficient revenue by compensating previously successful bidders for the withdrawal of items from them.

EXAMPLE 2. Consider the example given in Table 1 once more, where the bids also comprise a mutual bid bond of 5% of the bid amount. If a bid is withdrawn, the bidder forfeits this amount, and the bid-taker can then compensate winning bidders whose items are withdrawn when trying to form a repair solution later. The searches for repair solutions for breaks to bid1 and bid2 appear in Figures 2(a) and 2(b), respectively.³ When bid1 breaks, a compensation penalty equal to 5 is paid to the bid-taker, which can be used to fund a reassignment of the items. We therefore set β to 5, and this becomes the maximum expenditure allowed to withdraw items from winning bidders. β may also be viewed as the size of the fund available to facilitate backtracking by the bid-taker. When we extend the partial repair for bid1 so that bid2 loses an item (branch 8.1), the overall cost of repair increases to 5, due to this item withdrawal by the bid-taker, and is just within the limit given by β.

³ The actual implementation of WSS search checks previous solutions to see if they can repair breaks before searching for a new repair solution. ⟨0, 0, 1⟩ is a solution that has already been found, so the search for a repair in this example is not strictly necessary, but it is described for pedagogical reasons.
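The penalty arithmetic in Example 2 is simply κ times the bid amount, applied symmetrically to both sides of the contract. A minimal sketch (constant and function names are ours):

```python
KAPPA = 0.05                        # mutual bid bond: 5% of the bid amount
AMOUNTS = {1: 100, 2: 100, 3: 190}  # bid amounts from Table 1

def bond(bid):
    """Decommitment penalty / compensation level for a bid."""
    return KAPPA * AMOUNTS[bid]

# Bidder 1 withdraws from <1,1,0>: his forfeited bond becomes the repair fund.
beta = bond(1)      # compensation fund now available to the bid-taker
# Repair <0,0,1>: the bid-taker withdraws item B from winning bidder 2 and
# must compensate him at the same bond level (the inertia of bid 2).
cost = bond(2)
print(beta, cost, cost <= beta)     # the repair is affordable, as in the text
```

Because bids 1 and 2 have equal amounts, the forfeited bond exactly covers the compensation owed, which is why branch 8.1 lands precisely on the β limit.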
[Figure 2: Repair Search Trees for breaks 1 and 2, κ = 0.05. (a) Search for a repair for bid 1 breakage. (b) Search for a repair for bid 2 breakage.]

In Figure 1(a) the search path follows the dashed line and sets bid3 to 0 (branch 9). The repair solutions for bids 1 and 2 can be extended further by assigning bid3 to 1 (branches 9.2 and 9.4). Therefore, ⟨1, 1, 0⟩ may be considered a robust solution. Recall that previously this was not the case. Using mutual bid bonds thus increases reparability and allows a robust solution of revenue 200, as opposed to the 190 that was previously achievable.

5. EXPERIMENTS

We have used the Combinatorial Auction Test Suite (CATS) [16] to generate sample auction data. We generated 100 instances of problems in which there are 20 items for sale and 100-2000 bids that may be dominated in some instances.⁴ Such dominated bids can participate in repair solutions, although they do not feature in optimal solutions. CATS uses economically motivated bidding patterns to generate auction data in various scenarios. To motivate the research presented in this paper, we use sensitivity analysis to examine the brittleness of optimal solutions and hence determine the types of auctions most likely to benefit from a robust solution. We then establish robust solutions for CAs using the WSS framework.

5.1 Sensitivity Analysis for the WDP

We have performed sensitivity analysis of the following four distributions: airport take-off/landing slots (matching), electronic components (arbitrary), property/spectrum rights (regions) and transportation (paths). These distributions were chosen because they describe a broad array of bidding patterns in different application domains. The method used is as follows. We first determined the optimal solution using lp_solve, a mixed integer linear program solver [3]. We then simulated a
single bid withdrawal and re-solved the problem with the other winning bids remaining fixed, i.e. there were no involuntary dropouts. The optimal repair solution was then determined. This process was repeated for all winning bids in the overall optimal solution, thus assuming that all bids are brittle. Figure 3 shows the average revenue of such repair solutions as a percentage of the optimum. Also shown is the average worst-case scenario over 100 auctions. We also implemented an auction rule that disallows bids from the reneging bidder from participating in a repair.⁵

Figure 3(a) illustrates how the paths distribution is inherently the most robust distribution, since when any winning bid is withdrawn the solution can be repaired to achieve over 98.5% of the optimal revenue on average for auctions with more than 250 bids. There are some cases, however, when such withdrawals result in solutions whose revenue is significantly lower than the optimum. Even in auctions with as many as 2000 bids there are occasions when a single bid withdrawal can result in a drop in revenue of over 5%, although the average worst-case drop in revenue is only 1%. Figure 3(b) shows how the matching distribution is more brittle on average than paths and also has an inferior worst-case revenue on average. This trend continues as the regions-npv (Figure 3(c)) and arbitrary-npv (Figure 3(d)) distributions are more brittle still. These distributions are clearly sensitive to bid withdrawal when no other winning bids in the solution may be involuntarily withdrawn by the bid-taker.

⁴ The CATS flags included int prices with the bid alpha parameter set to 1000.
⁵ We assumed that all bids in a given XOR bid with the same dummy item were from the same bidder.

5.2 Robust Solutions using WSS

In this section we focus upon both the arbitrary-npv and regions-npv distributions because the sensitivity analysis indicated that these types of auctions produce optimal solutions that tend to be most brittle, and therefore stand to benefit most from solution robustness.
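In outline, this single-withdrawal sensitivity test can be sketched as follows. This is a brute-force toy on a handful of hypothetical bids, not the lp_solve-based setup used in the experiments; each bid is an (item set, amount) pair, repairs may use only losing bids, and the remaining winners stay fixed:

```python
from itertools import combinations

def feasible(sol):
    """A solution is feasible if no item is sold twice."""
    items = [i for b in sol for i in b[0]]
    return len(items) == len(set(items))

def best_solution(bids, fixed=()):
    """Brute-force winner determination: maximize revenue over all
    feasible subsets of `bids` that also include every bid in `fixed`."""
    best, best_rev = None, -1
    for r in range(len(bids) + 1):
        for combo in combinations(bids, r):
            sol = tuple(fixed) + combo
            rev = sum(b[1] for b in sol)
            if feasible(sol) and rev > best_rev:
                best, best_rev = sol, rev
    return best, best_rev

# Hypothetical auction over items A-D: bids are (item set, amount).
bids = [({'A', 'B'}, 100), ({'C', 'D'}, 90), ({'B', 'C'}, 120),
        ({'A'}, 40), ({'D'}, 35)]

optimal, opt_rev = best_solution(bids)

# Single-bid-withdrawal sensitivity: withdraw each winning bid in turn,
# keep the other winners fixed (no involuntary dropouts), and repair
# using only the losing bids.
for withdrawn in optimal:
    fixed = tuple(b for b in optimal if b is not withdrawn)
    losers = [b for b in bids if b not in optimal]
    _, repair_rev = best_solution(losers, fixed=fixed)
    print(withdrawn, f"repair achieves {100 * repair_rev / opt_rev:.1f}% of optimum")
```

In this toy instance, withdrawing the large {B, C} bid leaves a repair well below 90% of the optimum, which is exactly the kind of brittleness the analysis measures.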
We ignore the auctions with 2000 bids because the sensitivity analysis has indicated that these auctions are inherently robust, with a very low average drop in revenue following a bid withdrawal. They would also be very computationally expensive, given the extra complexity of finding robust solutions. A pure CP approach needs to be augmented with global constraints that incorporate operations research techniques to increase pruning sufficiently so that thousands of bids may be examined. Global constraints exploit special-purpose filtering algorithms to improve performance [21].

There are a number of ways to speed up the search for a weighted super solution in a CA, although this is not the main focus of our current work. Polynomial matching algorithms may be used in auctions whose bid length is short, such as those for airport landing/take-off slots, for example. The integer programming formulation of the WDP stipulates that a bid either wins or loses. If we relax this constraint so that bids can partially win, this corresponds to the linear relaxation of the problem and is solvable in polynomial time. At each node of the search tree we can quickly solve the linear relaxation of the remaining problem in the subtree below the current node to establish an upper bound on remaining revenue. If this upper bound plus the revenue in the parent tree is less than the current lower bound on revenue, search at that node can cease. The (continuous) LP relaxation thus provides a vital speed-up in the search for weighted super solutions, which we have exploited in our implementation. The LP formulation is as follows:

    max Σ_{x_j ∈ V} a_j x_j
    s.t. Σ_{j | i ∈ S_j} x_j ≤ 1, ∀i ∈ {1 ... m},
         x_j ≥ 0, x_j ∈ ℝ,

where V is the set of bid variables, a_j is the amount of bid j and S_j is the set of items sought by bid j.

[Figure 3: Sensitivity of bid distributions to single bid withdrawal. Four panels, (a) paths, (b) matching, (c) regions-npv and (d) arbitrary-npv, plot average and worst-case repair solution revenue (% of optimum) against the number of bids (250-2000).]

Additional techniques, outlined in [25], can aid the scalability of a CP approach, but our main aim in these experiments is to examine the robustness of various auction distributions and consider the tradeoff between robustness and revenue. The WSS solver we have developed is an extension of the super solution solver presented in [9, 10]. This solver is, in turn, based upon the EFC constraint solver [2]. Combinatorial auctions are easily modeled as constraint optimization problems. We have chosen the branch-on-bids formulation because in tests it worked faster than a branch-on-items formulation for the arbitrary-npv and regions-npv distributions. All variables are binary and our search mechanism uses a reverse lexicographic value ordering heuristic. This complements our dynamic variable ordering heuristic, which selects the most promising unassigned variable as the next one in the search tree. We use the product of the LP relaxation solution value and the degree of a variable to determine the likelihood of its participation in a robust solution. High values in the LP solution are a strong indication of variables most likely to form a high-revenue solution, whilst a variable's degree reflects the number of other bids that overlap in terms of desired items. Bids for large numbers of items tend to be more robust, which is why we weight our robust solution search in this manner.
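The bound-based pruning described above can be sketched as follows. For brevity this toy uses a cruder admissible upper bound (the sum of all remaining non-conflicting bid amounts) in place of the LP relaxation, which would require an LP solver such as lp_solve; the bids are hypothetical:

```python
def solve_wdp(bids):
    """Branch-on-bids search for the WDP with upper-bound pruning.
    Each bid is (itemset, value). Returns (best_revenue, chosen_bids)."""
    best = {'rev': 0, 'sol': []}

    def search(remaining, taken_items, revenue, chosen):
        if revenue > best['rev']:
            best['rev'], best['sol'] = revenue, list(chosen)
        if not remaining:
            return
        # Upper bound on what the subtree can still add: the sum of all
        # remaining bids that do not clash with items already sold.
        # (The LP relaxation would give a tighter bound here.)
        bound = sum(v for s, v in remaining if not (s & taken_items))
        if revenue + bound <= best['rev']:
            return  # prune: this subtree cannot beat the incumbent
        (s, v), rest = remaining[0], remaining[1:]
        if not (s & taken_items):          # branch 1: accept the bid
            chosen.append((s, v))
            search(rest, taken_items | s, revenue + v, chosen)
            chosen.pop()
        search(rest, taken_items, revenue, chosen)  # branch 0: reject it

    search(bids, frozenset(), 0, [])
    return best['rev'], best['sol']

# Hypothetical instance; pruning fires once a good incumbent is found.
bids = [({'A', 'B'}, 100), ({'B', 'C'}, 120), ({'A'}, 40),
        ({'C', 'D'}, 90), ({'D'}, 35)]
rev, sol = solve_wdp(bids)
print(rev, sol)   # optimal revenue and the winning bids
```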
We found this heuristic to be slightly more effective than the LP solution alone. As the number of bids in the auction increases, however, there is an increase in the inherent robustness of solutions, so the degree of a variable loses significance as the auction size increases.

5.3 Results

Our experiments simulate three different constraints on repair solutions. The first is that no winning bids are withdrawn by the bid-taker and a repair solution must return a revenue of at least 90% of the optimal overall solution. Secondly, we relaxed the revenue constraint to 85% of the optimum. Thirdly, we allowed backtracking by the bid-taker on winning bids using mutual bid bonds, while maintaining the revenue constraint at 90% of the optimum. Prior to finding a robust solution we solved the WDP optimally using lp_solve [3]. We then set the minimum tolerable revenue for a repair solution to be 90% (then 85%) of the revenue of this optimal solution. We assumed that all bids were brittle, so a repair solution is required for every bid in the solution. Initially we assumed that no backtracking was permitted on assignments of items to other winning bids following a bid withdrawal elsewhere in the solution.

Table 2 shows the percentage of optimal solutions that are robust for minimum revenue constraints on repair solutions of 90% and 85% of optimal revenue. Relaxing the revenue constraint on repair solutions to 85% of the optimum revenue greatly increases the number of optimal solutions that are robust. We also conducted experiments on the same auctions in which backtracking by the bid-taker is permitted using mutual bid bonds. This significantly improves the reparability of optimal solutions whilst still maintaining repair solutions of 90% of the optimum. An interesting feature of the arbitrary-npv distribution is that optimal solutions can become more brittle as the number of bids increases. The reason for this is that optimal solutions for larger auctions have more winning bids.
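The robustness test behind Table 2 can be sketched as follows, again by brute force on hypothetical bids: a solution is deemed robust if every single-bid break can be repaired, from losing bids only and with the other winners fixed, to at least the revenue threshold:

```python
from itertools import combinations

def feasible(sol):
    """A solution is feasible if no item is sold twice."""
    items = [i for b in sol for i in b[0]]
    return len(items) == len(set(items))

def max_revenue(bids, fixed=()):
    """Best revenue over feasible extensions of `fixed` using bids from `bids`."""
    best = -1
    for r in range(len(bids) + 1):
        for combo in combinations(bids, r):
            sol = tuple(fixed) + combo
            if feasible(sol):
                best = max(best, sum(b[1] for b in sol))
    return best

def is_robust(bids, solution, opt_rev, threshold=0.90):
    """True iff every single-bid break is reparable to >= threshold * opt_rev
    without involuntarily withdrawing the other winning bids."""
    for withdrawn in solution:
        fixed = tuple(b for b in solution if b is not withdrawn)
        losers = [b for b in bids if b not in solution]
        if max_revenue(losers, fixed=fixed) < threshold * opt_rev:
            return False
    return True

# Hypothetical bids over items A-D; the optimal solution has revenue 190.
bids = [({'A', 'B'}, 100), ({'C', 'D'}, 90), ({'A'}, 45),
        ({'B'}, 40), ({'C'}, 42), ({'D'}, 38)]
optimal = (({'A', 'B'}, 100), ({'C', 'D'}, 90))
print(is_robust(bids, optimal, 190, threshold=0.90))  # prints True
```

Tightening the threshold makes the same solution fail the test, mirroring how the 90% constraint admits far fewer robust solutions than the 85% one.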
Some of the optimal solutions for the smallest auctions with 100 bids have only one winning bidder. If this bid is withdrawn it is usually easy to find a new repair solution within 90% of the previous optimal revenue. Also, repair solutions for bids that contain a small number of items may be made difficult by the fact that a reduced number of bids cover only a subset of those items. A mitigating factor is that such bids form a smaller percentage of the revenue of the optimal solution on average. We also implemented a rule stipulating that any losing bids from a withdrawing bidder cannot participate in a repair solution. This acts as a disincentive for strategic withdrawal and was also used previously in the sensitivity analysis.

Table 2: Optimal Solutions that are Inherently Robust (%).

                                        #Bids
  Min Revenue             100    250    500    1000    2000
  arbitrary-npv
    repair ≥ 90%           21      5      3      37      93
    repair ≥ 85%           26     15     40      87     100
    MBB & repair ≥ 90%     41     35     60      94    ≥ 93
  regions-npv
    repair ≥ 90%           30     33     61      91      98
    repair ≥ 85%           50     71     95     100     100
    MBB & repair ≥ 90%     60     78     96      99    ≥ 98

Table 3: Occurrence of Robust Solutions (%).

                                        #Bids
  Min Revenue             100    250    500    1000
  arbitrary-npv
    repair ≥ 90%           58     39     51      98
    repair ≥ 85%           86     88     94      99
    MBB & repair ≥ 90%     78     86     98     100
  regions-npv
    repair ≥ 90%           61     70     97     100
    repair ≥ 85%           89     99     99     100
    MBB & repair ≥ 90%     83     96    100     100

In some auctions, a robust solution may not exist. Table 3 shows the percentage of auctions that support robust solutions for the arbitrary-npv and regions-npv distributions. It is clear that finding robust solutions for the former distribution is particularly difficult for auctions with 250 and 500 bids when revenue constraints are 90% of optimum. This difficulty was previously alluded to by the low percentage of optimal solutions that were robust for these
auctions. Relaxing the revenue constraint helps increase the percentage of auctions in which robust solutions are achievable to 88% and 94%, respectively. This improves the reparability of all solutions, thereby increasing the average revenue of the optimal robust solution. It is somewhat counterintuitive to expect a reduction in the reparability of auction solutions as the number of bids increases, because there tends to be an increased number of solutions above a revenue threshold in larger auctions. The MBB auction model performs very well, however, and ensures that robust solutions are achievable for such inherently brittle auctions without sacrificing over 10% of optimal revenue to achieve repair solutions.

Figure 4 shows the average revenue of the optimal robust solution as a percentage of the overall optimum. Repair solutions found for a WSS provide a lower bound on possible revenue following a bid withdrawal. Note that in some instances it is possible for a repair solution to have higher revenue than the original solution. When backtracking on winning bids by the bid-taker is disallowed, this can only happen when the repair solution includes two or more bids that were not in the original. Otherwise the repair bids would participate in the optimal robust solution in place of the bid that was withdrawn. A WSS guarantees minimum levels of revenue for repair solutions, but this is not to say that repair solutions cannot be improved upon. It is possible to use an incremental algorithm to determine an optimal repair solution following a break, whilst safe in the knowledge that, in advance of any possible bid withdrawal, we can establish a lower bound on the revenue of a repair. Kastner et al. have provided such an incremental ILP formulation [15].

[Figure 4: Revenue of optimal robust solutions. Two panels, (a) regions-npv and (b) arbitrary-npv, plot repair revenue (% of optimum) against the number of bids (250-2000) for minimum repair revenues of 90% and 85% of optimum, and for MBB with a 90% minimum.]

Mutual bid bonds facilitate backtracking by the bid-taker on already assigned items. This improves the reparability of all possible solutions, thus increasing the revenue of the optimal robust solution on average. Figure 4 shows the increase in revenue of robust solutions in such instances. The revenues of repair solutions are bounded by at least 90% of the optimum in our experiments, thereby allowing a direct comparison with robust solutions already found using the same revenue constraint but not providing for backtracking. It is immediately obvious that such a mechanism can significantly increase revenue whilst still maintaining solution robustness.

Table 4 shows the number of winning bids participating in optimal and optimal robust solutions given the three different constraints on repairing solutions listed at the beginning of this section. As the number of bids increases, more of the optimal overall solutions are robust. This leads to a convergence in the number of winning bids. The numbers in brackets are derived from the sensitivity analysis of optimal solutions, which reveals that almost all optimal solutions for auctions of 2000 bids are robust. We can therefore infer that the average number of winning bids in revenue-maximizing robust solutions converges towards that of the optimal overall solutions. A notable side-effect of robust solutions is that fewer bids participate in the solutions. It can be clearly seen from Table 4 that when revenue constraints on repair solutions are tight, there are fewer winning bids in the optimal robust solution on average. This is particularly pronounced for smaller auctions in both distributions. This can yield benefits for the
bid-taker, such as reduced overheads in dealing with fewer suppliers. Although MBBs aid solution reparability, the number of bids in the solutions increases on average. This is to be expected because a greater fraction of these solutions are in fact optimal, as we saw in Table 2.

Table 4: Number of winning bids.

                                   #Bids
  Solution            100     250     500     1000       2000
  arbitrary-npv
    Optimal          3.31    5.60    7.17     9.31      10.63
    Repair ≥ 90%     1.40    2.18    6.10     9.03   (≈ 10.63)
    Repair ≥ 85%     1.65    3.81    6.78     9.31    (10.63)
    MBB (≥ 90%)      2.33    5.49    7.33     9.34   (≈ 10.63)
  regions-npv
    Optimal          4.34    7.05    9.10    10.67      12.76
    Repair ≥ 90%     3.03    5.76    8.67    10.63   (≈ 12.76)
    Repair ≥ 85%     3.45    6.75    9.07   (10.67)   (12.76)
    MBB (≥ 90%)      3.90    6.86    9.10    10.68   (≈ 12.76)

6. DISCUSSION AND FUTURE WORK

Bidding strategies can become complex in non-incentive-compatible mechanisms where winner determination is no longer necessarily optimal. The perceived reparability of a bid may influence the bid amount, with reparable bids reaching a lower equilibrium point and bids perceived as irreparable being more aggressive. Penalty payments for bid withdrawal also create an incentive for more aggressive bidding by providing a form of insurance against the winner's curse [8]. If a winning bidder's revised valuation for a set of items drops by more than the penalty for withdrawal of the bid, then it is in his best interests to forfeit the item(s) and pay the penalty. Should the auction rules state that the bid-taker will refuse to sell the items to any of the remaining bidders in the event of a withdrawal, then insurance against potential losses will stimulate more aggressive bidding. However, in our case we are seeking to repair the solution with the given bids. A side-effect of such a policy is to offset the increased aggressiveness by incentivizing reduced valuations in the expectation that another bidder's successful bid is withdrawn. Harstad and Rothkopf [8] examined the conditions required to ensure an equilibrium position in which bidding was at least as aggressive as if no bid withdrawal was permitted, given this countervailing incentive to under-estimate a valuation.
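One way to formalize the forfeit rule above is as a single comparison; a minimal sketch, where the 100-unit bid and the 5% bond are illustrative numbers only:

```python
def should_withdraw(price, revised_valuation, kappa):
    """Withdraw when the loss from honouring the bid (price paid minus
    the revised valuation) exceeds the forfeited penalty kappa * price."""
    return (price - revised_valuation) > kappa * price

# A winning 100-unit bid with a 5% mutual bid bond (penalty = 5):
print(should_withdraw(100, 92, kappa=0.05))  # loss 8 > penalty 5 -> True
print(should_withdraw(100, 97, kappa=0.05))  # loss 3 < penalty 5 -> False
```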
Three major results arose from their study of bid withdrawal in a single-item auction:

1. Equilibrium bidding is more aggressive with withdrawal for sufficiently small probabilities of an award to the second-highest bidder in the event of a bid withdrawal;
2. Equilibrium bidding is more aggressive with withdrawal if the number of bidders is large enough;
3. For many distributions of costs and estimates, equilibrium bidding is more aggressive with withdrawal if the variability of the estimating distribution is sufficiently large.

It is important that mutual bid bonds do not result in depressed bidding in equilibrium. An analysis of the resultant behavior of bidders must incorporate the possibility of a bidder winning an item and having it withdrawn in order for the bid-taker to formulate a repair solution after a break elsewhere. Harstad and Rothkopf have analyzed bidder aggressiveness [8] using a strictly game-theoretic model in which the only reason for bid withdrawal is the winner's curse. They assumed all bidders were risk-neutral, but surmised that it is entirely possible for the bid-taker to collect a risk premium from risk-averse bidders with the offer of such insurance. Combinatorial auctions with mutual bid bonds add an extra incentive to bid aggressively because of the possibility of being compensated for having a winning bid withdrawn by a bid-taker. This is militated against by the increased probability of not having items withdrawn in a repair solution. We leave an in-depth analysis of the sufficient conditions for more aggressive bidding for future work.

Whilst the WSS framework provides ample flexibility and expressiveness, scalability becomes a problem for larger auctions. Although solutions to larger auctions tend to be naturally more
robust, some bid-takers in such auctions may require robustness. A possible extension of our work in this paper may be to examine the feasibility of reformulating integer linear programs so that the solutions are robust. Hebrard et al. [10] examined reformulation of CSPs for finding super solutions. Alternatively, it may be possible to use a top-down approach by looking at the k-best solutions sequentially, in terms of revenue, and performing sensitivity analysis upon each solution until a robust one is found.

In procurement settings the principle of free disposal is often discounted and all items must be sold. This reduces the number of potential solutions and thereby reduces the reparability of each solution. The impact of such a constraint on the revenue of robust solutions is also left for future work.

There is another interesting direction this work may take, namely robust mechanism design. Porter et al. introduced the notion of fault-tolerant mechanism design, in which agents have private information regarding costs for task completion, but also their probabilities of failure [20]. When the bid-taker has combinatorial valuations for task completions it may be desirable to assign the same task to multiple agents to ensure solution robustness. It is desirable to minimize such potentially redundant task assignments, but not to the detriment of completed task valuations. This problem could be modeled using the WSS framework in a similar manner to that of combinatorial auctions. In the case where no robust solutions are found, it is possible to optimize robustness, instead of revenue, by finding a solution of at least a given revenue that minimizes the probability of an irreparable break. In this manner the least brittle solution of adequate revenue may be chosen.

7. CONCLUSION

Fairness is often cited as a reason for choosing the optimal solution in terms of revenue only [22]. Robust solutions militate against bids deemed brittle; bidders must therefore
earn a reputation for being reliable to relax the reparability constraint attached to their bids. This may be seen as being fair to long-standing business partners whose reliability is unquestioned. Internet-based auctions are often seen as unwelcome price-gouging exercises by suppliers in many sectors [6, 17]. Traditional business partnerships are being severed by increased competition amongst suppliers. Quality of service can suffer because of the increased focus on short-term profitability, to the detriment of the bid-taker in the long term. Robust solutions can provide a means of selectively discriminating against distrusted bidders in a measured manner. As combinatorial auction deployment moves from large-value auctions with a small pool of trusted bidders (e.g. spectrum-rights sales) towards lower-value auctions with potentially unknown bidders (e.g. Supply Chain Management [30]), solution robustness becomes more relevant. As well as being used to ensure that the bid-taker is not left vulnerable to bid withdrawal, it may also be used to cement relationships with preferred, possibly incumbent, suppliers.

We have shown that it is possible to attain robust solutions for CAs with only a small loss in revenue. We have also illustrated how such solutions tend to have fewer winning bids than overall optimal solutions, thereby reducing any overheads associated with dealing with more bidders. We have also demonstrated that introducing mutual bid bonds, a form of leveled commitment contract, can significantly increase the revenue of optimal robust solutions by improving reparability. We contend that robust solutions using such a mechanism can allow a bid-taker to offer the possibility of bid withdrawal to bidders whilst remaining confident about post-repair revenue, while also facilitating increased bidder aggressiveness.

8. REFERENCES

[1] Martin Andersson and Tuomas Sandholm. Leveled commitment contracts with myopic and strategic agents. Journal of
Economic Dynamics and Control, 25:615-640, 2001. Special issue on Agent-Based Computational Economics.
[2] Fahiem Bacchus and George Katsirelos. EFC solver. www.cs.toronto.edu/~gkatsi/efc/efc.html.
[3] Michael Berkelaar, Kjell Eikland, and Peter Notebaert. lp_solve version 5.0.10.0. http://groups.yahoo.com/group/lp_solve/.
[4] Rina Dechter. Constraint Processing. Morgan Kaufmann, 2003.
[5] Sven DeVries and Rakesh Vohra. Combinatorial auctions: A survey. INFORMS Journal on Computing, pages 284-309, 2003.
[6] Jim Ericson. Reverse auctions: Bad idea. Line 56, September 2001.
[7] Matthew L. Ginsberg, Andrew J. Parkes, and Amitabha Roy. Supermodels and robustness. In Proceedings of AAAI-98, pages 334-339, Madison, WI, 1998.
[8] Ronald M. Harstad and Michael H. Rothkopf. Withdrawable bids as winner's curse insurance. Operations Research, 43(6):982-994, November-December 1995.
[9] Emmanuel Hebrard, Brahim Hnich, and Toby Walsh. Robust solutions for constraint satisfaction and optimization. In Proceedings of the European Conference on Artificial Intelligence, pages 186-190, 2004.
[10] Emmanuel Hebrard, Brahim Hnich, and Toby Walsh. Super solutions in constraint programming. In Proceedings of CP-AI-OR 2004, pages 157-172, 2004.
[11] Gail Hohner, John Rich, Ed Ng, Grant Reid, Andrew J. Davenport, Jayant R. Kalagnanam, Ho Soo Lee, and Chae An. Combinatorial and quantity-discount procurement auctions benefit Mars Incorporated and its suppliers. Interfaces, 33(1):23-35, 2003.
[12] Alan Holland and Barry O'Sullivan. Super solutions for combinatorial auctions. In Ercim-Colognet Constraints Workshop (CSCLP 04). Springer LNAI, Lausanne, Switzerland, 2004.
[13] Alan Holland and Barry O'Sullivan. Weighted super solutions for constraint programs. Technical Report UCC-CS-2004-12-02, December 2004.
[14] Selective Insurance. Business insurance. http://www.selectiveinsurance.com/psApps/Business/Ins/bonds.asp?bc=13.16.127.
[15] Ryan Kastner, Christina Hsieh, Miodrag Potkonjak, and Majid Sarrafzadeh. On the sensitivity of incremental algorithms for combinatorial auctions. In WECWIS, pages 81-88, June 2002.
[16] Kevin Leyton-Brown, Mark Pearson, and Yoav Shoham. Towards a universal test suite for combinatorial auction algorithms. In ACM Conference on Electronic Commerce, pages 66-76, 2000.
[17] Associated General Contractors of America. White paper on reverse auctions for procurement of construction. http://www.agc.org/content/public/pdf/Member_Resources/ReverseAuctionWhitePaper.pdf, 2003.
[18] National Society of Professional Engineers. A basic guide to surety bonds. http://www.nspe.org/pracdiv/76-02surebond.asp.
[19] Martin Pesendorfer and Estelle Cantillon. Combination bidding in multi-unit auctions. Harvard Business School Working Draft, 2003.
[20] Ryan Porter, Amir Ronen, Yoav Shoham, and Moshe Tennenholtz. Mechanism design with execution uncertainty. In Proceedings of UAI-02, pages 414-421, 2002.
[21] Jean-Charles Régin. Global constraints and filtering algorithms. In Constraint and Integer Programming: Towards a Unified Methodology, chapter 4, pages 89-129. Kluwer Academic Publishers, 2004.
[22] Michael H. Rothkopf and Aleksandar Pekeč. Combinatorial auction design. Management Science, 49(11):1485-1503, November 2003.
[23] Michael H. Rothkopf, Aleksandar Pekeč, and Ronald M. Harstad. Computationally manageable combinatorial auctions. Management Science, 44(8):1131-1147, 1998.
[24] Daniel Sabin and Eugene C. Freuder. Contradicting conventional wisdom in constraint satisfaction. In A. Cohn, editor, Proceedings of ECAI-94, pages 125-129, 1994.
[25] Tuomas Sandholm. Algorithm for optimal winner determination in combinatorial auctions. Artificial Intelligence, 135(1-2):1-54, 2002.
[26] Tuomas Sandholm and Victor Lesser. Leveled commitment contracts and strategic breach. Games and Economic Behavior, 35:212-270, January 2001.
[27] Tuomas Sandholm and Victor Lesser. Leveled commitment contracting: A backtracking instrument for multiagent systems. AI Magazine, 23(3):89-100, 2002.
[28] Tuomas Sandholm, Sandeep Sikka, and Samphel Norden. Algorithms for optimizing leveled commitment contracts. In Proceedings of IJCAI-99, pages 535-541. Morgan Kaufmann Publishers Inc., 1999.
[29] Tuomas Sandholm and Yunhong Zhou. Surplus equivalence of leveled commitment contracts. Artificial Intelligence, 142:239-264, 2002.
[30] William E. Walsh, Michael P. Wellman, and Fredrik Ygge. Combinatorial auctions for supply chain formation. In ACM Conference on Electronic Commerce, pages 260-269, 2000.
[31] Rainier Weigel and Christian Bliek. On reformulation of constraint satisfaction problems. In Proceedings of ECAI-98, pages 254-258, 1998.
[32] Margaret W.
Wiener.\nAccess spectrum bid withdrawal.\nhttp:\/\/wireless.fcc.gov\/auctions\/33 \/releases\/da011719.\npdf, July 2001.\n192","lvl-3":"Robust Solutions for Combinatorial Auctions *\nABSTRACT\nBids submitted in auctions are usually treated as enforceable commitments in most bidding and auction theory literature.\nIn reality bidders often withdraw winning bids before the transaction when it is in their best interests to do so.\nGiven a bid withdrawal in a combinatorial auction, finding an alternative repair solution of adequate revenue without causing undue disturbance to the remaining winning bids in the original solution may be difficult or even impossible.\nWe have called this the \"Bid-taker's Exposure Problem\".\nWhen faced with such unreliable bidders, it is preferable for the bid-taker to preempt such uncertainty by having a solution that is robust to bid withdrawal and provides a guarantee that possible withdrawals may be repaired easily with a bounded loss in revenue.\nIn this paper, we propose an approach to addressing the Bidtaker's Exposure Problem.\nFirstly, we use the Weighted Super Solutions framework [13], from the field of constraint programming, to solve the problem of finding a robust solution.\nA weighted super solution guarantees that any subset of bids likely to be withdrawn can be repaired to form a new solution of at least a given revenue by making limited changes.\nSecondly, we introduce an auction model that uses a form of leveled commitment contract [26, 27], which we have called mutual bid bonds, to improve solution reparability by facilitating backtracking on winning bids by the bid-taker.\nWe then examine the trade-off between robustness and revenue in different economically motivated auction scenarios for different constraints on the revenue of repair solutions.\nWe also demonstrate experimentally that fewer winning bids partake in robust solutions, thereby reducing any associated overhead in dealing with extra bidders.\nRobust 
solutions can also provide a means of selectively discriminating against distrusted bidders in a measured manner.\n1.\nINTRODUCTION\nA combinatorial auction (CA) [5] provides an efficient means of allocating multiple distinguishable items amongst bidders whose perceived valuations for combinations of items differ.\nSuch auctions are gaining in popularity and there is a proliferation in their usage across various industries such as telecoms, B2B procurement and transportation [11, 19].\nRevenue is the most obvious optimization criterion for such auctions, but another desirable attribute is solution robustness.\nIn terms of combinatorial auctions, a robust solution is one that can withstand bid withdrawal (a break) by making changes easily to form a repair solution of adequate revenue.\nA brittle solution to a CA is one in which an unacceptable loss in revenue is unavoidable if a winning bid is withdrawn.\nIn such situations the bid-taker may be left with a set of items deemed to be of low value by all other bidders.\nThese bidders may associate a higher value for these items if they were combined with items already awarded to others, hence the bid-taker is left in an undesirable local optimum in which a form of backtracking is required to reallocate the items in a manner that results in sufficient revenue.\nWe have called this the \"Bid-taker's Exposure Problem\" that bears similarities to the \"Exposure Problem\" faced by bidders seeking multiple items in separate single-unit auctions but holding little or no value for a subset of those items.\nHowever, reallocating items may be regarded as disruptive to a solution in many real-life scenarios.\nConsider a scenario where procurement for a business is conducted using a CA.\nIt would be highly undesirable to retract contracts from a group of suppliers because of the failure of a third party.\nA robust solution that is tolerant of such breaks is preferable.\nRobustness may be regarded as a preventative measure 
protecting against future uncertainty by sacrificing revenue in place of solution stability and reparability.\nWe assume a probabilistic approach whereby the bid-taker has knowledge of the reliability of bidders from which the likelihood of an incomplete transaction may be inferred.\nRepair solutions are required for bids that are seen as brittle (i.e. likely to break).\nRepairs may also be required for sets of bids deemed brittle.\nWe propose the use of the Weighted Super\nSolutions (WSS) framework [13] for constraint programming, that is ideal for establishing such robust solutions.\nAs we shall see, this framework can enforce constraints on solutions so that possible breakages are reparable.\nThis paper is organized as follows.\nSection 2 presents the Winner Determination Problem (WDP) for combinatorial auctions, outlines some possible reasons for bid withdrawal and shows how simply maximizing expected revenue can lead to intolerable revenue losses for risk-averse bid-takers.\nThis motivates the use of robust solutions and Section 3 introduces a constraint programming (CP) framework, Weighted Super Solutions [13], that finds such solutions.\nWe then propose an auction model in Section 4 that enhances reparability by introducing mandatory mutual bid bonds, that may be seen as a form of leveled commitment contract [26, 27].\nSection 5 presents an extensive empirical evaluation of the approach presented in this paper, in the context of a number of well-known combinatorial auction distributions, with very encouraging results.\nSection 6 discusses possible extensions and questions raised by our research that deserve future work.\nFinally, in Section 7 a number of concluding remarks are made.\n2.\nCOMBINATORIAL AUCTIONS\n2.1 The Problem of Bid Withdrawal\n2.2 Being Proactive against Bid Withdrawal\n3.\nFINDING ROBUST SOLUTIONS\n4.\nMUTUAL BID BONDS: A BACKTRACKING MECHANISM\n5.\nEXPERIMENTS\n5.1 Sensitivity Analysis for the WDP\n5.2 Robust Solutions using WSS\n5.3 
Results\n7.\nCONCLUSION\nFairness is often cited as a reason for choosing the optimal solution in terms of revenue only [22].\nRobust solutions militate against bids deemed brittle, therefore bidders must earn a reputation for being reliable to relax the reparability constraint attached to their bids.\nThis may be seen as being fair to long-standing business partners whose reliability is unquestioned.\nInternet-based auctions are often seen as unwelcome price-gouging exercises by suppliers in many sectors [6, 17].\nTraditional business partnerships are being severed by increased competition amongst suppliers.\nQuality of Service can suffer because of the increased focus on short-term profitability to the detriment of the bid-taker in the long-term.\nRobust solutions can provide a means of selectively discriminating against distrusted bidders in a measured manner.\nAs combinatorial auction deployment moves from large value auctions with a small pool of trusted bidders (e.g. spectrum-rights sales) towards lower value auctions with potentially unknown bidders (e.g. 
Supply Chain Management [30]), solution robustness becomes more relevant.\nAs well as being used to ensure that the bid-taker is not left vulnerable to bid withdrawal, it may also be used to cement relationships with preferred, possibly incumbent, suppliers.\nWe have shown that it is possible to attain robust solutions for CAs with only a small loss in revenue.\nWe have also illustrated how such solutions tend to have fewer winning bids than overall optimal solutions, thereby reducing any overheads associated with dealing with more bidders.\nWe have also demonstrated that introducing mutual bid bonds, a form of leveled commitment contract, can significantly increase the revenue of optimal robust solutions by improving reparability.\nWe contend that robust solutions using such a mechanism can allow a bid-taker to offer the possibility of bid withdrawal to bidders whilst remaining confident about post-repair revenue and also facilitating increased bidder aggressiveness.","lvl-4":"Robust Solutions for Combinatorial Auctions *\nABSTRACT\nBids submitted in auctions are usually treated as enforceable commitments in most bidding and auction theory literature.\nIn reality bidders often withdraw winning bids before the transaction when it is in their best interests to do so.\nGiven a bid withdrawal in a combinatorial auction, finding an alternative repair solution of adequate revenue without causing undue disturbance to the remaining winning bids in the original solution may be difficult or even impossible.\nWe have called this the \"Bid-taker's Exposure Problem\".\nWhen faced with such unreliable bidders, it is preferable for the bid-taker to preempt such uncertainty by having a solution that is robust to bid withdrawal and provides a guarantee that possible withdrawals may be repaired easily with a bounded loss in revenue.\nIn this paper, we propose an approach to addressing the Bid-taker's Exposure Problem.\nFirstly, we use the Weighted Super Solutions framework [13], from 
the field of constraint programming, to solve the problem of finding a robust solution.\nA weighted super solution guarantees that any subset of bids likely to be withdrawn can be repaired to form a new solution of at least a given revenue by making limited changes.\nSecondly, we introduce an auction model that uses a form of leveled commitment contract [26, 27], which we have called mutual bid bonds, to improve solution reparability by facilitating backtracking on winning bids by the bid-taker.\nWe then examine the trade-off between robustness and revenue in different economically motivated auction scenarios for different constraints on the revenue of repair solutions.\nWe also demonstrate experimentally that fewer winning bids partake in robust solutions, thereby reducing any associated overhead in dealing with extra bidders.\nRobust solutions can also provide a means of selectively discriminating against distrusted bidders in a measured manner.\n1.\nINTRODUCTION\nA combinatorial auction (CA) [5] provides an efficient means of allocating multiple distinguishable items amongst bidders whose perceived valuations for combinations of items differ.\nRevenue is the most obvious optimization criterion for such auctions, but another desirable attribute is solution robustness.\nIn terms of combinatorial auctions, a robust solution is one that can withstand bid withdrawal (a break) by making changes easily to form a repair solution of adequate revenue.\nA brittle solution to a CA is one in which an unacceptable loss in revenue is unavoidable if a winning bid is withdrawn.\nIn such situations the bid-taker may be left with a set of items deemed to be of low value by all other bidders.\nWe have called this the \"Bid-taker's Exposure Problem\" that bears similarities to the \"Exposure Problem\" faced by bidders seeking multiple items in separate single-unit auctions but holding little or no value for a subset of those items.\nHowever, reallocating items may be regarded as 
disruptive to a solution in many real-life scenarios.\nConsider a scenario where procurement for a business is conducted using a CA.\nA robust solution that is tolerant of such breaks is preferable.\nRobustness may be regarded as a preventative measure protecting against future uncertainty by sacrificing revenue in place of solution stability and reparability.\nWe assume a probabilistic approach whereby the bid-taker has knowledge of the reliability of bidders from which the likelihood of an incomplete transaction may be inferred.\nRepair solutions are required for bids that are seen as brittle (i.e. likely to break).\nRepairs may also be required for sets of bids deemed brittle.\nWe propose the use of the Weighted Super\nSolutions (WSS) framework [13] for constraint programming, that is ideal for establishing such robust solutions.\nAs we shall see, this framework can enforce constraints on solutions so that possible breakages are reparable.\nThis paper is organized as follows.\nSection 2 presents the Winner Determination Problem (WDP) for combinatorial auctions, outlines some possible reasons for bid withdrawal and shows how simply maximizing expected revenue can lead to intolerable revenue losses for risk-averse bid-takers.\nThis motivates the use of robust solutions and Section 3 introduces a constraint programming (CP) framework, Weighted Super Solutions [13], that finds such solutions.\nWe then propose an auction model in Section 4 that enhances reparability by introducing mandatory mutual bid bonds, that may be seen as a form of leveled commitment contract [26, 27].\nSection 5 presents an extensive empirical evaluation of the approach presented in this paper, in the context of a number of well-known combinatorial auction distributions, with very encouraging results.\nSection 6 discusses possible extensions and questions raised by our research that deserve future work.\nFinally, in Section 7 a number of concluding remarks are made.\n7.\nCONCLUSION\nFairness 
is often cited as a reason for choosing the optimal solution in terms of revenue only [22].\nRobust solutions militate against bids deemed brittle, therefore bidders must earn a reputation for being reliable to relax the reparability constraint attached to their bids.\nThis may be seen as being fair to long-standing business partners whose reliability is unquestioned.\nInternet-based auctions are often seen as unwelcome price-gouging exercises by suppliers in many sectors [6, 17].\nTraditional business partnerships are being severed by increased competition amongst suppliers.\nRobust solutions can provide a means of selectively discriminating against distrusted bidders in a measured manner.\nWe have shown that it is possible to attain robust solutions for CAs with only a small loss in revenue.\nWe have also illustrated how such solutions tend to have fewer winning bids than overall optimal solutions, thereby reducing any overheads associated with dealing with more bidders.\nWe have also demonstrated that introducing mutual bid bonds, a form of leveled commitment contract, can significantly increase the revenue of optimal robust solutions by improving reparability.\nWe contend that robust solutions using such a mechanism can allow a bid-taker to offer the possibility of bid withdrawal to bidders whilst remaining confident about post-repair revenue and also facilitating increased bidder aggressiveness.","lvl-2":"Robust Solutions for Combinatorial Auctions *\nABSTRACT\nBids submitted in auctions are usually treated as enforceable commitments in most bidding and auction theory literature.\nIn reality bidders often withdraw winning bids before the transaction when it is in their best interests to do so.\nGiven a bid withdrawal in a combinatorial auction, finding an alternative repair solution of adequate revenue without causing undue disturbance to the remaining winning bids in the original solution may be difficult or even impossible.\nWe have called this the 
\"Bid-taker's Exposure Problem\".\nWhen faced with such unreliable bidders, it is preferable for the bid-taker to preempt such uncertainty by having a solution that is robust to bid withdrawal and provides a guarantee that possible withdrawals may be repaired easily with a bounded loss in revenue.\nIn this paper, we propose an approach to addressing the Bidtaker's Exposure Problem.\nFirstly, we use the Weighted Super Solutions framework [13], from the field of constraint programming, to solve the problem of finding a robust solution.\nA weighted super solution guarantees that any subset of bids likely to be withdrawn can be repaired to form a new solution of at least a given revenue by making limited changes.\nSecondly, we introduce an auction model that uses a form of leveled commitment contract [26, 27], which we have called mutual bid bonds, to improve solution reparability by facilitating backtracking on winning bids by the bid-taker.\nWe then examine the trade-off between robustness and revenue in different economically motivated auction scenarios for different constraints on the revenue of repair solutions.\nWe also demonstrate experimentally that fewer winning bids partake in robust solutions, thereby reducing any associated overhead in dealing with extra bidders.\nRobust solutions can also provide a means of selectively discriminating against distrusted bidders in a measured manner.\n1.\nINTRODUCTION\nA combinatorial auction (CA) [5] provides an efficient means of allocating multiple distinguishable items amongst bidders whose perceived valuations for combinations of items differ.\nSuch auctions are gaining in popularity and there is a proliferation in their usage across various industries such as telecoms, B2B procurement and transportation [11, 19].\nRevenue is the most obvious optimization criterion for such auctions, but another desirable attribute is solution robustness.\nIn terms of combinatorial auctions, a robust solution is one that can withstand 
bid withdrawal (a break) by making changes easily to form a repair solution of adequate revenue.\nA brittle solution to a CA is one in which an unacceptable loss in revenue is unavoidable if a winning bid is withdrawn.\nIn such situations the bid-taker may be left with a set of items deemed to be of low value by all other bidders.\nThese bidders may associate a higher value for these items if they were combined with items already awarded to others, hence the bid-taker is left in an undesirable local optimum in which a form of backtracking is required to reallocate the items in a manner that results in sufficient revenue.\nWe have called this the \"Bid-taker's Exposure Problem\" that bears similarities to the \"Exposure Problem\" faced by bidders seeking multiple items in separate single-unit auctions but holding little or no value for a subset of those items.\nHowever, reallocating items may be regarded as disruptive to a solution in many real-life scenarios.\nConsider a scenario where procurement for a business is conducted using a CA.\nIt would be highly undesirable to retract contracts from a group of suppliers because of the failure of a third party.\nA robust solution that is tolerant of such breaks is preferable.\nRobustness may be regarded as a preventative measure protecting against future uncertainty by sacrificing revenue in place of solution stability and reparability.\nWe assume a probabilistic approach whereby the bid-taker has knowledge of the reliability of bidders from which the likelihood of an incomplete transaction may be inferred.\nRepair solutions are required for bids that are seen as brittle (i.e. 
likely to break).\nRepairs may also be required for sets of bids deemed brittle.\nWe propose the use of the Weighted Super Solutions (WSS) framework [13] from constraint programming, which is ideal for establishing such robust solutions.\nAs we shall see, this framework can enforce constraints on solutions so that possible breakages are reparable.\nThis paper is organized as follows.\nSection 2 presents the Winner Determination Problem (WDP) for combinatorial auctions, outlines some possible reasons for bid withdrawal and shows how simply maximizing expected revenue can lead to intolerable revenue losses for risk-averse bid-takers.\nThis motivates the use of robust solutions and Section 3 introduces a constraint programming (CP) framework, Weighted Super Solutions [13], that finds such solutions.\nWe then propose an auction model in Section 4 that enhances reparability by introducing mandatory mutual bid bonds, which may be seen as a form of leveled commitment contract [26, 27].\nSection 5 presents an extensive empirical evaluation of the approach presented in this paper, in the context of a number of well-known combinatorial auction distributions, with very encouraging results.\nSection 6 discusses possible extensions and questions raised by our research that deserve future work.\nFinally, in Section 7 a number of concluding remarks are made.\n2.\nCOMBINATORIAL AUCTIONS\nBefore presenting the technical details of our solution to the \"Bid-taker's Exposure Problem\", we shall present a brief survey of combinatorial auctions and existing techniques for handling bid withdrawal.\nCombinatorial auctions involve a single bid-taker allocating multiple distinguishable items amongst a group of bidders.\nThe bid-taker has a set of m items for sale, M = {1, 2,..., m}, and bidders submit a set of bids, B = {B1, B2,..., Bn}.\nA bid is a tuple Bj = (Sj, pj) where Sj \u2286 M is a subset of the items for sale and pj \u2265 0 is a price.\nThe WDP for a CA is to label all bids as 
either winning or losing so as to maximize the revenue from winning bids without allocating any item to more than one bid.\nThe following is the integer programming formulation for the WDP: max \u03a3j pjxj, subject to \u03a3j:i\u2208Sj xj \u2264 1, \u2200 i \u2208 {1...m}, xj \u2208 {0, 1}.\nThis problem is NP-complete [23] and inapproximable [25], and is otherwise known as the Set Packing Problem.\nThe above problem formulation assumes the notion of free disposal.\nThis means that the optimal solution need not necessarily sell all of the items.\nIf the auction rules stipulate that all items must be sold, the problem becomes a Set Partition Problem [5].\nThe WDP has been extensively studied in recent years.\nThe fastest search algorithms that find optimal solutions (e.g. CABOB [25]) can, in practice, solve very large problems involving thousands of bids very quickly.\n2.1 The Problem of Bid Withdrawal\nWe assume an auction protocol with a three stage process involving the submission of bids, winner determination, and finally a transaction phase.\nWe are interested in bid withdrawals that occur between the announcement of winning bids and the end of the transaction phase.\nAll bids are valid until the transaction is complete, so we anticipate an expedient transaction process\u00b9.\n\u00b9In some instances the transaction period may be so lengthy that consideration of non-winning bids as still being valid may not be fair.\nBreaks that occur during a lengthy transaction phase are more difficult to remedy and may require a subsequent auction.\nFor example, if the item is a service contract for a given period of time and the break occurs after partial fulfilment of this contract, the other ...\nAn example of a winning bid withdrawal occurred in an FCC spectrum auction [32].\nWithdrawals, or breaks, may occur for various reasons.\nBid withdrawal may be instigated by the bid-taker when Quality of Service agreements are broken or payment deadlines are not met.\nWe refer to bid withdrawal by the bid-taker as item withdrawal in this 
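The set-packing objective in the WDP formulation above can be made concrete with a small brute-force sketch (illustrative Python, not from the paper; the dedicated solvers cited above scale far better):

```python
from itertools import combinations

def solve_wdp(bids):
    """Brute-force winner determination for a combinatorial auction.

    bids is a list of (items, price) pairs, mirroring Bj = (Sj, pj).
    Returns (best_revenue, winning_indices) maximizing total price such
    that no item is allocated to more than one winning bid.  Free
    disposal is assumed: unsold items are allowed.
    """
    n = len(bids)
    best_rev, best_set = 0, ()
    for r in range(1, n + 1):
        for subset in combinations(range(n), r):
            items = [i for j in subset for i in bids[j][0]]
            if len(items) == len(set(items)):  # no item sold twice
                rev = sum(bids[j][1] for j in subset)
                if rev > best_rev:
                    best_rev, best_set = rev, subset
    return best_rev, best_set

# The three bids of Table 1: x1 = ({A}, 100), x2 = ({B}, 100), x3 = ({A,B}, 190)
bids = [({"A"}, 100), ({"B"}, 100), ({"A", "B"}, 190)]
print(solve_wdp(bids))  # (200, (0, 1)) -- bids 1 and 2 win
```

Exhaustive enumeration is exponential in the number of bids, which is why branch-and-bound algorithms such as CABOB [25] are used in practice.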
paper to distinguish between the actions of a bidder and the bid-taker.\nHarstad and Rothkopf [8] outlined several possibilities for breaks in single item auctions that include:\n1.\nan erroneous initial valuation\/bid; 2.\nunexpected events outside the winning bidder's control; 3.\na desire to have the second-best bid honored; 4.\ninformation obtained or events that occurred after the auction but before the transaction that reduces the value of an item; 5.\nthe revelation of competing bidders' valuations infers reduced profitability, a problem known as the \"Winner's Curse\".\nKastner et al. [15] examined how to handle perturbations given a solution whilst minimizing necessary changes to that solution.\nThese perturbations may include bid withdrawals, change of valuation\/items of a bid or the submission of a new bid.\nThey looked at the problem of finding incremental solutions to restructure a supply chain whose formation is determined using combinatorial auctions [30].\nFollowing a perturbation in the optimal solution they proceed to impose involuntary item withdrawals from winning bidders.\nThey formulated an incremental integer linear program (ILP) that sought to maximize the valuation of the repair solution whilst preserving the previous solution as much as possible.\n2.2 Being Proactive against Bid Withdrawal\nWhen a bid is withdrawn there may be constraints on how the solution can be repaired.\nIf the bid-taker was freely able to revoke the awarding of items to other bidders then the solution could be repaired easily by reassigning all the items to the optimal solution without the withdrawn bid.\nAlternatively, the bidder who reneged upon a bid may have all his other bids disqualified and the items could be reassigned based on the optimum solution without that bidder present.\nHowever, the bid-taker is often unable to freely reassign the items already awarded to other bidders.\nWhen items cannot be withdrawn from winning bidders, following the failure of 
another bidder to honor his bid, repair solutions are restricted to the set of bids whose items only include those in the bid(s) that were reneged upon.\nWe are free to award items to any of the previously unsuccessful bids when finding a repair solution.\nWhen faced with uncertainty over the reliability of bidders a possible approach is to maximize expected revenue.\nThis approach does not make allowances for risk-averse bid-takers who may view a small possibility of very low revenue as unacceptable.\nConsider the example in Table 1, and the optimal expected revenue in the situation where a single bid may be withdrawn.\nTable 1: Example Combinatorial Auction.\nBid | Items | Amount | Probability of Failure\nx1 | {A} | 100 | 0.1\nx2 | {B} | 100 | 0.1\nx3 | {A, B} | 190 | 0.1\nThere are three submitted bids for items A and B, the third being a combination bid for the pair of items at a value of 190.\nThe optimal solution has a value of 200, with the first and second bids as winners.\nWhen we consider the probabilities of failure, in the fourth column, the problem of which solution to choose becomes more difficult.\nComputing the expected revenue for the solution with the first and second bids winning the items, denoted (1, 1, 0), gives:\n0.81 \u00d7 200 + 0.18 \u00d7 100 + 0.01 \u00d7 190 = 181.90.\nIf a single bid is withdrawn there is a probability of 0.18 of a revenue of 100, given the fact that we cannot withdraw an item from the other winning bidder.\nThe expected revenue for (0, 0, 1) is:\n0.9 \u00d7 190 + 0.1 \u00d7 200 = 191.00.\nWe can therefore surmise that the second solution is preferable to the first based on expected revenue.\nDetermining the maximum expected revenue in the presence of such uncertainty becomes computationally infeasible however, as the number of brittle bids grows.\nA WDP needs to be solved for all possible combinations of bids that may fail.\nThe possible loss in revenue for breaks is also not tightly bounded using this approach, therefore a large loss may be possible for a small number of breaks.\nConsider the previous example where the bid amount for x3 becomes 175.\nThe expected revenue of (1, 1, 0) (181.75) becomes greater than that of (0, 0, 1) 
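These expected-revenue figures can be reproduced with a short Python sketch (an illustrative model, not the paper's code): each winning bid is withdrawn independently with its failure probability, and a repair may only reassign the freed items to previously losing bids, since items cannot be withdrawn from remaining winners.

```python
from itertools import chain, combinations

def powerset(idx):
    return chain.from_iterable(combinations(idx, r) for r in range(len(idx) + 1))

def best_repair(bids, losers, freed):
    """Best revenue obtainable from losing bids using only freed items."""
    best = 0
    for subset in powerset(losers):
        items = [i for j in subset for i in bids[j][0]]
        if len(items) == len(set(items)) and set(items) <= freed:
            best = max(best, sum(bids[j][1] for j in subset))
    return best

def expected_revenue(bids, winners, p_fail):
    """Expected revenue when each winning bid fails independently."""
    losers = [j for j in range(len(bids)) if j not in winners]
    exp = 0.0
    for broken in powerset(winners):
        prob = 1.0
        for j in winners:
            prob *= p_fail[j] if j in broken else 1 - p_fail[j]
        survivors = [j for j in winners if j not in broken]
        freed = set().union(*(bids[j][0] for j in broken)) if broken else set()
        exp += prob * (sum(bids[j][1] for j in survivors)
                       + best_repair(bids, losers, freed))
    return exp

bids = [({"A"}, 100), ({"B"}, 100), ({"A", "B"}, 190)]
p = [0.1, 0.1, 0.1]
print(round(expected_revenue(bids, [0, 1], p), 2))  # 181.9  -- solution (1, 1, 0)
print(round(expected_revenue(bids, [2], p), 2))     # 191.0  -- solution (0, 0, 1)

# With the bid amount for x3 lowered to 175, the ranking flips:
bids[2] = ({"A", "B"}, 175)
print(round(expected_revenue(bids, [0, 1], p), 2))  # 181.75
print(round(expected_revenue(bids, [2], p), 2))     # 177.5
```

As the text notes, this enumeration solves a repair WDP for every combination of failing bids, which is why maximizing expected revenue becomes infeasible as the number of brittle bids grows.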
(177.50).\nThere are some bid-takers who may prefer the latter solution because the revenue is never less than 175, but the former solution returns revenue of only 100 with probability 0.18.\nA risk-averse bid-taker may not tolerate such a possibility, preferring to sacrifice revenue for reduced risk.\nIf we modify our repair search so that a solution of at least a given revenue is guaranteed, the search for a repair solution becomes a satisfiability test rather than an optimization problem.\nThe approaches described above are in contrast to that which we propose in the next section.\nOur approach can be seen as preventative in that we find an initial allocation of items to bidders which is robust to bid withdrawal.\nPossible losses in revenue are bounded by a fixed percentage of the true optimal allocation.\nPerturbations to the original solution are also limited so as to minimize disruption.\nWe regard this as the ideal approach for real-world combinatorial auctions.\nDEFINITION 1 (ROBUST SOLUTION FOR A CA).\nA robust solution for a combinatorial auction is one where any subset of successful bids whose probability of withdrawal is greater than or equal to \u03b1 can be repaired by reassigning items at a cost of at most \u03b2 to other previously losing bids to form a repair solution.\nConstraints on acceptable revenue, e.g. 
being a minimum percentage of the optimum, are defined in the problem model and are thus satisfied by all solutions.\nThe maximum cost of repair, \u03b2, may be a fixed value that may be thought of as a fund for compensating winning bidders whose items are withdrawn from them when creating a repair solution.\nAlternatively, \u03b2 may be a function of the bids that were withdrawn.\nSection 4 will give an example of such a mechanism.\nIn the following section we describe an ideal constraint-based framework for the establishment of such robust solutions.\n3.\nFINDING ROBUST SOLUTIONS\nIn constraint programming [4] (CP), a constraint satisfaction problem (CSP) is modeled as a set of n variables X = {x1,..., xn}, a set of domains D = {D(x1),..., D(xn)}, where D(xi) is the set of finite possible values for variable xi, and a set C = {C1,..., Cm} of constraints, each restricting the assignments of some subset of the variables in X. Constraint satisfaction involves finding values for each of the problem variables such that all constraints are satisfied.\nIts main advantages are its declarative nature and flexibility in tackling problems with arbitrary side constraints.\nConstraint optimization seeks to find a solution to a CSP that optimizes some objective function.\nA common technique for solving constraint optimization problems is to use branch-and-bound techniques that avoid exploring sub-trees that are known not to contain a better solution than the best found so far.\nAn initial bound can be determined by finding a solution that satisfies all constraints in C or by using some heuristic methods.\nA classical super solution (SS) is a solution to a CSP in which, if a small number of variables lose their values, repair solutions are guaranteed with only a few changes, thus providing solution robustness [9, 10].\nIt is a generalization of both fault tolerance in CP [31] and supermodels in propositional satisfiability (SAT) [7].\nAn (a, b)-super solution is one in which if at 
most a variables lose their values, a repair solution can be found by changing at most b other variables [10].\nSuper solutions for combinatorial auctions minimize the number of bids whose status needs to be changed when forming a repair solution [12].\nOnly a particular set of variables in the solution may be subject to change and these are said to be members of the breakset.\nFor each combination of brittle assignments in the break-set, a repair-set is required that comprises the set of variables whose values must change to provide another solution.\nThe cardinality of the repair set is used to measure the cost of repair.\nIn reality, changing some variable assignments in a repair solution incurs a lower cost than others thereby motivating the use of a different metric for determining the legality of repair sets.\nThe Weighted Super Solution (WSS) framework [13] considers the cost of repair required, rather than simply the number of assignments modified, to form an alternative solution.\nFor CAs this may be a measure of the compensation penalties paid to winning bidders to break existing agreements.\nRobust solutions are particularly desirable for applications where unreliability is a problem and potential breakages may incur severe penalties.\nWeighted super solutions offer a means of expressing which variables are easily re-assigned and those that incur a heavy cost [13].\nHebrard et al. 
[9] describe how some variables may fail (such as machines in a job-shop problem) and others may not.\nA WSS generalizes this approach so that there is a probability of failure associated with each assignment and sets of variables whose assignments have probabilities of failure greater than or equal to a threshold value, \u03b1, require repair solutions.\nA WSS measures the cost of repairing, or reassigning, other variables using inertia as a metric.\nInertia is a measure of a variable's aversion to change and depends on its current assignment, future assignment and the breakage variable(s).\nIt may be desirable to reassign items to different bidders in order to find a repair solution of satisfactory revenue.\nCompensation may have to be paid to bidders who lose items during the formation of a repair solution.\nThe inertia of a bid reflects the cost of changing its state.\nFor winning bids this may reflect the necessary compensation penalty for the bid-taker to break the agreement (if such breaches are permitted), whereas for previously losing bids this is a free operation.\nThe total amount of compensation payable to bidders may depend upon other factors, such as the cause of the break.\nThere is a limit to how much these overall repair costs should be, and this is given by the value \u03b2.\nThis value may not be known in advance and may depend upon the break.\nTherefore, \u03b2 may be viewed as the fund used to compensate winning bidders for the unilateral withdrawal of their bids by the bid-taker.\nIn summary, an (\u03b1, \u03b2)-WSS allows any set of variables whose probability of breaking is greater than or equal to \u03b1 to be repaired with changes to the original robust solution with a cost of at most \u03b2.\nThe depth-first search for a WSS (see pseudo-code description in Algorithm 1) maintains arc-consistency [24] at each node of the tree.\nAs search progresses, the reparability of each previous assignment is verified at each node by extending 
a partial repair solution to the same depth as the current partial solution.\nThis may be thought of as maintaining concurrent search trees for repairs.\nA repair solution is provided for every possible set of break variables, A.\nThe WSS algorithm attempts to extend the current partial assignment by choosing a variable and assigning it a value.\nBacktracking may then occur for one of two reasons: we cannot extend the assignment to satisfy the given constraints, or the current partial assignment cannot be associated with a repair solution whose cost of repair is less than \u03b2 should a break occur.\nThe procedure reparable searches for partial repair solutions using backtracking and attempts to extend the last repair found, just as in (1, b)-super solutions [9]; the differences being that a repair is provided for a set of breakage variables rather than a single variable and the cost of repair is considered.\nA summation operator is used to determine the overall cost of repair.\nIf a fixed bound upon the size of any potential break-set can be formed, the WSS algorithm is NP-complete.\nFor a more detailed description of the WSS search algorithm, the reader is referred to [13], since a complete description of the algorithm is beyond the scope of this paper.\nEXAMPLE 1.\nWe shall step through the example given in Table 1 when searching for a WSS.\nEach bid is represented by a single variable with domain values of 0 and 1, the former representing bid-failure and the latter bid-success.\nThe probability of failure of each variable is 0.1 when it is assigned to 1 and 0.0 otherwise.\nThe problem is initially solved using an ILP solver such as lp_solve [3] or CPLEX, and the optimal revenue is found to be 200.\nA fixed percentage of this revenue can be used as a threshold value for a robust solution and its repairs.\nThe bid-taker wishes to have a robust solution so that if a single winning bid is withdrawn, a repair solution can be formed without withdrawing items from any 
other winning bidder.\nThis example may be seen as searching for a (0.1, 0)-weighted super solution; \u03b2 is 0 because no funds are available to compensate the withdrawal of items from winning bidders.\nThe bid-taker is willing to compromise on revenue, but only by 5%, say, of the optimal value.\nBids 1 and 3 cannot both succeed, since they both require item A, so a constraint is added precluding the assignment in which both variables take the value 1.\nSimilarly, bids 2 and 3 cannot both win so another constraint is added between these two variables.\nTherefore, in this example the set of CSP variables is V = {x1, x2, x3}, whose domains are all {0, 1}.\nThe constraints are x1 + x3 \u2264 1, x2 + x3 \u2264 1 and \u03a3xi\u2208V aixi \u2265 190, where ai reflects the relevant bid-amounts for the respective bid variables.\nIn order to find a robust solution of optimal revenue we seek to maximize the sum of these amounts, max \u03a3xi\u2208V aixi.\nWhen all variables are set to 0 (see Figure 1(a) branch 3), this is not a solution because the minimum revenue of 190 has not been met, so we try assigning bid3 to 1 (branch 4).\nThis is a valid solution but this variable is brittle because there is a 10% chance that this bid may be withdrawn (see Table 1).\nTherefore we need to determine if a repair can be formed should it break.\nThe search for a repair begins at the first node, see Figure 1(b).\nNotice that value 1 has been removed from bid3 because this search tree is simulating the withdrawal of this bid.\nWhen bid1 is set to 0 (branch 4.1), the maximum revenue solution in the remaining subtree has revenue of only 100, therefore search is discontinued at that node of the tree.\nBid1 and bid2 are both assigned to 1 (branches 4.2 and 4.4) and the total cost of both these changes is still 0 because no compensation needs to be paid for bids that change from losing to winning.\nWith bid3 now losing (branch 4.5), this gives a repair solution of 200.\nHence (0, 0, 1) is reparable and 
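The robustness check stepped through in this example can be mimicked by a brute-force sketch (illustrative Python only; the paper's Algorithm 1 instead performs a depth-first WSS search maintaining arc-consistency). Here a winner set is accepted if every winning bid with failure probability of at least 0.1 admits a repair that keeps all other winners, uses only previously losing bids, and still reaches the revenue threshold of 190:

```python
from itertools import chain, combinations

def powerset(idx):
    return chain.from_iterable(combinations(idx, r) for r in range(len(idx) + 1))

def feasible(bids, subset):
    items = [i for j in subset for i in bids[j][0]]
    return len(items) == len(set(items))

def revenue(bids, subset):
    return sum(bids[j][1] for j in subset)

def is_robust(bids, winners, p_fail, alpha, min_rev):
    """Check an (alpha, 0)-robust solution: no item withdrawal allowed,
    so every repair must keep all remaining winners in place."""
    if not feasible(bids, winners) or revenue(bids, winners) < min_rev:
        return False
    for b in winners:  # each brittle winning bid must be reparable
        if p_fail[b] < alpha:
            continue
        kept = [j for j in winners if j != b]
        candidates = [j for j in range(len(bids)) if j not in winners]
        if not any(feasible(bids, kept + list(extra)) and
                   revenue(bids, kept + list(extra)) >= min_rev
                   for extra in powerset(candidates)):
            return False
    return True

bids = [({"A"}, 100), ({"B"}, 100), ({"A", "B"}, 190)]
p = [0.1, 0.1, 0.1]
print(is_robust(bids, [2], p, alpha=0.1, min_rev=190))     # True:  (0, 0, 1)
print(is_robust(bids, [0, 1], p, alpha=0.1, min_rev=190))  # False: (1, 1, 0)
```

This reproduces the example's outcome: (0, 0, 1) can be repaired by awarding items A and B to bids 1 and 2, whereas (1, 1, 0) is irreparable when items cannot be withdrawn from winning bids.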
therefore a WSS.\nWe continue our search in Figure 1(a) however, because we are seeking a robust solution of optimal revenue.\nWhen bid1 is assigned to 1 (branch 6) we seek a partial repair for this variable breaking (branch 5 is not considered since it offers insufficient revenue).\nThe repair search sets bid1 to 0 in a separate search tree (not shown), and control is returned to the search for a WSS.\nBid2 is set to 0 (branch 7), but this solution would not produce sufficient revenue so bid2 is then set to 1 (branch 8).\nWe then attempt to extend the repair for bid1 (not shown).\nThis fails because the repair for bid1 cannot assign bid2 to 0 because the cost of repairing such an assignment would be \u221e, given that the auction rules do not permit the withdrawal of items from winning bids.\nA repair for bid1 breaking is therefore not possible because items have already been awarded to bid2.\nA repair solution with bid2 assigned to 1 does not produce sufficient revenue when bid1 is assigned to 0.\nThe inability to withdraw items from winning bids implies that (1, 1, 0) is an irreparable solution when the minimum tolerable revenue is greater than 100.\nThe italicized comments and dashed line in Figure 1(a) illustrate the search path for a WSS if both of these bids were deemed reparable.\nSection 4 introduces an alternative auction model that will allow the bid-taker to receive compensation for breakages and in turn use this payment to compensate other bidders for withdrawal of items from winning bids.\nThis will enable the reallocation of items and permit the establishment of (1, 1, 0) as a second WSS for this example.\n4.\nMUTUAL BID BONDS: A BACKTRACKING MECHANISM\nSome auction solutions are inherently brittle and it may be impossible to find a robust solution.\nIf we can alter the rules of an auction so that the bid-taker can retract items from winning bidders, then the reparability of solutions to such auctions may be improved.\nIn this section we propose 
an auction model that permits bid and item withdrawal by the bidders and bid-taker, respectively.\nWe propose a model that incorporates mutual bid bonds to enable solution reparability for the bid-taker, a form of insurance against the winner's curse for the bidder whilst also compensating bidders in the case of item withdrawal from winning bids.\nFigure 1: Search Tree for a WSS without item withdrawal.\nWe propose that such \"Winner's Curse & Bid-taker's Exposure\" insurance comprise a fixed percentage, \u03ba, of the bid amount for all bids.\nSuch mutual bid bonds are mandatory for each bid in our model2.\nThe conditions attached to the bid bonds are that the bid-taker be allowed to annul winning bids (item withdrawal) when repairing breaks elsewhere in the solution.\nIn the interests of fairness, compensation is paid to bidders from whom items are withdrawn and is equivalent to the penalty that would have been imposed on the bidder should he have withdrawn the bid.\nCombinatorial auctions impose a heavy computational burden on the bidder so it is important that the hedging of risk should be a simple and transparent operation for the bidder so as not to further increase this burden unnecessarily.\nWe also contend that it is imperative that the bidder knows the potential penalty for withdrawal in advance of bid submission.\nThis information is essential for bidders when determining how aggressive they should be in their bidding strategy.\nBid bonds are commonplace in procurement for construction projects.\nUsually they are mandatory for all bids, are a fixed percentage, \u03ba, of the bid amount and are unidirectional in that item withdrawal by the bid-taker is not permitted.\nMutual bid bonds may be seen as a form of leveled commitment contract in which both parties may break the contract for the same fixed penalty.\nSuch contracts permit unilateral decommitment for prespecified penalties.\nSandholm et al. 
showed that this can increase the expected payoffs of all parties and enables deals that would be impossible under full commitment [26, 28, 29].\nIn practice a bid bond typically ranges between 5 and 20% of the bid amount [14, 18].\n2 Making the insurance optional may be beneficial in some instances.\nIf a bidder does not agree to the insurance, it may be inferred that he may have accurately determined the valuation for the items and is therefore less likely to fall victim to the winner's curse.\nThe probability of such a bid being withdrawn may be less, so a repair solution may be deemed unnecessary for this bid.\nOn the other hand it decreases the reparability of solutions.\nIf the decommitment penalties are the same for both parties in all bids, \u03ba does not influence the reparability of a given set of bids.\nIt merely influences the levels of penalties and compensation transacted by agents.\nLow values of \u03ba incur low bid withdrawal penalties and simulate a dictatorial bid-taker who does not adequately compensate bidders for item withdrawal.\nAndersson and Sandholm [1] found that myopic agents reach a higher social welfare quicker if they act selfishly rather than cooperatively when penalties in leveled commitment contracts are low.\nIncreased levels of bid withdrawal are likely when the penalties are low also.\nHigh values of \u03ba tend towards full-commitment and reduce the advantages of such \"Winner's Curse & Bid-taker's Exposure\" insurance.\nThe penalties paid are used to fund a reassignment of items to form a repair solution of sufficient revenue by compensating previously successful bidders for withdrawal of the items from them.\nEXAMPLE 2.\nConsider the example given in Table 1 once more, where the bids also comprise a mutual bid bond of 5% of the bid amount.\nIf a bid is withdrawn, the bidder forfeits this amount and the bid-taker can then compensate winning bidders whose items are withdrawn when trying to form a repair solution later.\nThe search for repair 
solutions for breaks to bid1 and bid2 appear in Figures 2 (a) and 2 (b), respectively3.\nWhen bid1 breaks, there is a compensation penalty paid to the bid-taker equal to 5 that can be used to fund a reassignment of the items.\nWe therefore set \u03b2 to 5 and this becomes the maximum expenditure allowed to withdraw items from winning bidders.\n\u03b2 may also be viewed as the size of the fund available to facilitate backtracking by the bid-taker.\nWhen we extend the partial repair for bid1 so that bid2 loses an item (branch 8.1), the overall cost of repair increases to 5, due to this item withdrawal by the bid-taker, and is just within the limit given by \u03b2.\n3 The actual implementation of WSS search checks previous solutions to see if they can repair breaks before searching for a new repair solution.\n(0, 0, 1) is a solution that has already been found so the search for a repair in this example is not strictly necessary but is described for pedagogical reasons.\nFigure 2: Repair Search Tree for breaks 1 and 2, \u03ba = 0.05.\nIn Figure 1 (a) the search path follows the dashed line and sets bid3 to be 0 (branch 9).\nThe repair solutions for bids 1 and 2 can be extended further by assigning bid3 to 1 (branches 9.2 and 9.4).\nTherefore, (1, 1, 0) may be considered a robust solution.\nRecall that previously this was not the case.\n\u25a1\nUsing mutual bid bonds thus increases reparability and allows a robust solution of revenue 200 as opposed to 190, as was previously the case.\n5.\nEXPERIMENTS\nWe have used the Combinatorial Auction Test Suite (CATS) [16] to generate sample auction data.\nWe generated 100 instances of problems in which there are 20 items for sale and 100-2000 bids that may be dominated in some instances4.\nSuch dominated bids can participate in repair solutions although they do not feature in optimal solutions.\nCATS uses economically motivated bidding patterns to generate auction data in various scenarios.\nTo motivate the research presented in this paper we 
use sensitivity analysis to examine the brittleness of optimal solutions and hence determine the types of auctions most likely to benefit from a robust solution.\nWe then establish robust solutions for CAs using the WSS framework.\n5.1 Sensitivity Analysis for the WDP\nWe have performed sensitivity analysis of the following four distributions: airport take-off\/landing slots (matching), electronic components (arbitrary), property\/spectrum-rights (regions) and transportation (paths).\nThese distributions were chosen because they describe a broad array of bidding patterns in different application domains.\nThe method used is as follows.\nWe first of all determined the optimal solution using lp_solve, a mixed integer linear program solver [3].\nWe then simulated a single bid withdrawal and re-solved the problem with the other winning bids remaining fixed, i.e. there were no involuntary dropouts.\nThe optimal repair solution was then determined.\nThis process is repeated for all winning bids in the overall optimal solution, thus assuming that all bids are brittle.\nFigure 3 shows the average revenue of such repair solutions as a percentage of the optimum.\nAlso shown is the average worst-case scenario over 100 auctions.\nWe also implemented an auction rule that disallows bids from the reneging bidder from participating in a repair5.\nFigure 3 (a) illustrates how the paths distribution is inherently the most robust distribution since when any winning bid is withdrawn the solution can be repaired to achieve over 98.5% of the optimal revenue on average for auctions with more than 250 bids.\nThere are some cases however when such withdrawals result in solutions whose revenue is significantly lower than optimum.\nEven in auctions with as many as 2000 bids there are occasions when a single bid withdrawal can result in a drop in revenue of over 5%, although the average worst-case drop in revenue is only 1%.\nFigure 3 (b) shows how the matching distribution is more brittle on 
average than paths and also has an inferior worst-case revenue on average.\nThis trend continues as the regions-npv (Figure 3 (c)) and arbitrary-npv (Figure 3 (d)) distributions are more brittle still.\nThese distributions are clearly sensitive to bid withdrawal when no other winning bids in the solution may be involuntarily withdrawn by the bid-taker.\n5.2 Robust Solutions using WSS\nIn this section we focus upon both the arbitrary-npv and regions-npv distributions because the sensitivity analysis indicated that these types of auctions produce optimal solutions that tend to be most brittle, and therefore stand to benefit most from solution robustness.\nWe ignore the auctions with 2000 bids because the sensitivity analysis has indicated that these auctions are inherently robust with a very low average drop in revenue following a bid withdrawal.\nThey would also be very computationally expensive, given the extra complexity of finding robust solutions.\nA pure CP approach needs to be augmented with global constraints that incorporate operations research techniques to increase pruning sufficiently so that thousands of bids may be examined.\nGlobal constraints exploit special-purpose filtering algorithms to improve performance [21].\nThere are a number of ways to speed up the search for a weighted super solution in a CA, although this is not the main focus of our current work.\nPolynomial matching algorithms may be used in auctions whose bid length is short, such as those for airport landing\/take-off slots for example.\nThe integer programming formulation of the WDP stipulates that a bid either loses or wins.\nIf we relax this constraint so that bids can partially win, this corresponds to the linear relaxation of the problem and is solvable in polynomial time.\nAt each node of the search tree we can quickly solve the linear relaxation of the remaining problem in the subtree below the current node to establish an upper bound on remaining revenue.\nIf this upper bound 
plus revenue in the parent tree is less than the current lower bound on revenue, search at that node can cease.\nThe (continuous) LP relaxation thus provides a vital speed-up in the search for weighted super solutions, which we have exploited in our implementation.\nThe LP formulation is as follows: maximize \u03a3j vj xj, subject to \u03a3j:i\u2208Sj xj \u2264 1 for each item i and 0 \u2264 xj \u2264 1 for each bid j, where vj is the amount of bid j and Sj is the set of items requested by bid j.\nFigure 3: Sensitivity of bid distributions to single bid withdrawal.\nAdditional techniques, outlined in [25], can aid the scalability of a CP approach but our main aim in these experiments is to examine the robustness of various auction distributions and consider the tradeoff between robustness and revenue.\nThe WSS solver we have developed is an extension of the super solution solver presented in [9, 10].\nThis solver is, in turn, based upon the EFC constraint solver [2].\nCombinatorial auctions are easily modeled as constraint optimization problems.\nWe have chosen the branch-on-bids formulation because in tests it worked faster than a branch-on-items formulation for the arbitrary-npv and regions-npv distributions.\nAll variables are binary and our search mechanism uses a reverse lexicographic value ordering heuristic.\nThis complements our dynamic variable ordering heuristic that selects the most promising unassigned variable as the next one in the search tree.\nWe use the product of the solution of the LP relaxation and the degree of a variable to determine the likelihood of its participation in a robust solution.\nHigh values in the LP solution are a strong indication of variables most likely to form a high revenue solution whilst a variable's degree reflects the number of other bids that overlap in terms of desired items.\nBids for large numbers of items tend to be more robust, which is why we weight our robust solution search in this manner.\nWe found this heuristic to be slightly more effective than the LP solution alone.\nAs the number of bids in the auction increases however, there is an increase in the inherent robustness of solutions so the 
degree of a variable loses significance as the auction size increases.\n5.3 Results\nOur experiments simulate three different constraints on repair solutions.\nThe first is that no winning bids are withdrawn by the bid-taker and a repair solution must return a revenue of at least 90% of the optimal overall solution.\nSecondly, we relaxed the revenue constraint to 85% of optimum.\nThirdly, we allowed backtracking by the bid-taker on winning bids using mutual bid bonds while maintaining the revenue constraint at 90% of optimum.\nPrior to finding a robust solution we solved the WDP optimally using lp_solve [3].\nWe then set the minimum tolerable revenue for a solution to be 90% (then 85%) of the revenue of this optimal solution.\nWe assumed that all bids were brittle, thus a repair solution is required for every bid in the solution.\nInitially we assumed that no backtracking was permitted on assignments of items to other winning bids given a bid withdrawal elsewhere in the solution.\nTable 2 shows the percentage of optimal solutions that are robust for minimum revenue constraints for repair solutions of 90% and 85% of optimal revenue.\nRelaxing the revenue constraint on repair solutions to 85% of the optimum revenue greatly increases the number of optimal solutions that are robust.\nWe also conducted experiments on the same auctions in which backtracking by the bid-taker is permitted using mutual bid bonds.\nThis significantly improves the reparability of optimal solutions whilst still maintaining repair solutions of 90% of optimum.\nAn interesting feature of the arbitrary-npv distribution is that optimal solutions can become more brittle as the number of bids increases.\nThe reason for this is that optimal solutions for larger auctions have more winning bids.\nSome of the optimal solutions for the smallest auctions with 100 bids have only one winning bidder.\nIf this bid is withdrawn it is usually easy to find a new repair solution within 90% of the previous optimal 
revenue.\nAlso, repair solutions for bids that contain a small number of items may be made difficult by the fact that a reduced number of bids cover only a subset of those items.\nA mitigating factor is that such bids form a smaller percentage of the revenue of the optimal solution on average.\nWe also implemented a rule stipulating that any losing bids from a withdrawing bidder cannot participate in a repair solution.\nThis acts as a disincentive for strategic withdrawal and was also used previously in the sensitivity analysis.\nTable 2: Optimal Solutions that are Inherently Robust (%).\nTable 3: Occurrence of Robust Solutions (%).\nIn some auctions, a robust solution may not exist.\nTable 3 shows the percentage of auctions that support robust solutions for the arbitrary-npv and regions-npv distributions.\nIt is clear that finding robust solutions for the former distribution is particularly difficult for auctions with 250 and 500 bids when revenue constraints are 90% of optimum.\nThis difficulty was previously alluded to by the low percentage of optimal solutions that were robust for these auctions.\nRelaxing the revenue constraint helps increase the percentage of auctions in which robust solutions are achievable to 88% and 94%, respectively.\nThis improves the reparability of all solutions thereby increasing the average revenue of the optimal robust solution.\nIt is somewhat counterintuitive to expect a reduction in reparability of auction solutions as the number of bids increases because there tends to be an increased number of solutions above a revenue threshold in larger auctions.\nThe MBB auction model performs very well however, and ensures that robust solutions are achievable for such inherently brittle auctions without sacrificing over 10% of optimal revenue to achieve repair solutions.\nFigure 4 shows the average revenue of the optimal robust solution as a percentage of the overall optimum.\nRepair solutions found for a WSS provide a lower bound on 
possible revenue following a bid withdrawal.\nNote that in some instances it is possible for a repair solution to have higher revenue than the original solution.\nWhen backtracking on winning bids by the bid-taker is disallowed, this can only happen when the repair solution includes two or more bids that were not in the original.\nOtherwise the repair bids would participate in the optimal robust solution in place of the bid that was withdrawn.\nA WSS guarantees minimum levels of revenue for repair solutions but this is not to say that repair solutions cannot be improved upon.\nIt is possible to use an incremental algorithm to determine an optimal repair solution following a break, whilst safe in the knowledge that in advance of any possible bid withdrawal we can establish a lower bound on the revenue of a repair.\nFigure 4: Revenue of optimal robust solutions.\nKastner et al. have provided such an incremental ILP formulation [15].\nMutual bid bonds facilitate backtracking by the bid-taker on already assigned items.\nThis improves the reparability of all possible solutions thus increasing the revenue of the optimal robust solution on average.\nFigure 4 shows the increase in revenue of robust solutions in such instances.\nThe revenues of repair solutions are bounded by at least 90% of the optimum in our experiments thereby allowing a direct comparison with robust solutions already found using the same revenue constraint but not providing for backtracking.\nIt is immediately obvious that such a mechanism can significantly increase revenue whilst still maintaining solution robustness.\nTable 4 shows the number of winning bids participating in optimal and optimal robust solutions given the three different constraints on repairing solutions listed at the beginning of this section.\nAs the number of bids increases, more of the optimal overall solutions are robust.\nThis leads to a convergence in the number of winning bids.\nThe numbers in brackets are derived from the 
sensitivity analysis of optimal solutions that reveals the fact that almost all optimal solutions for auctions of 2000 bids are robust.\nWe can therefore infer that the average number of winning bids in revenue-maximizing robust solutions converges towards that of the optimal overall solutions.\nA notable side-effect of robust solutions is that fewer bids participate in the solutions.\nIt can be clearly seen from Table 4 that when revenue constraints on repair solutions are tight, there are fewer winning bids in the optimal robust solution on average.\nThis is particularly pronounced for smaller auctions in both distributions.\nThis can yield benefits for the bid-taker such as reduced overheads in dealing with fewer suppliers.\nTable 4: Number of winning bids.\nAlthough MBBs aid solution reparability, the number of bids in the solutions increases on average.\nThis is to be expected because a greater fraction of these solutions are in fact optimal, as we saw in Table 2.\n7.\nCONCLUSION\nFairness is often cited as a reason for choosing the optimal solution in terms of revenue only [22].\nRobust solutions militate against bids deemed brittle, therefore bidders must earn a reputation for being reliable to relax the reparability constraint attached to their bids.\nThis may be seen as being fair to long-standing business partners whose reliability is unquestioned.\nInternet-based auctions are often seen as unwelcome price-gouging exercises by suppliers in many sectors [6, 17].\nTraditional business partnerships are being severed by increased competition amongst suppliers.\nQuality of Service can suffer because of the increased focus on short-term profitability to the detriment of the bid-taker in the long-term.\nRobust solutions can provide a means of selectively discriminating against distrusted bidders in a measured manner.\nAs combinatorial auction deployment moves from large value auctions with a small pool of trusted bidders (e.g. 
spectrum-rights sales) towards lower value auctions with potentially unknown bidders (e.g. Supply Chain Management [30]), solution robustness becomes more relevant.\nAs well as being used to ensure that the bid-taker is not left vulnerable to bid withdrawal, it may also be used to cement relationships with preferred, possibly incumbent, suppliers.\nWe have shown that it is possible to attain robust solutions for CAs with only a small loss in revenue.\nWe have also illustrated how such solutions tend to have fewer winning bids than overall optimal solutions, thereby reducing any overheads associated with dealing with more bidders.\nWe have also demonstrated that introducing mutual bid bonds, a form of leveled commitment contract, can significantly increase the revenue of optimal robust solutions by improving reparability.\nWe contend that robust solutions using such a mechanism can allow a bid-taker to offer the possibility of bid withdrawal to bidders whilst remaining confident about post-repair revenue and also facilitating increased bidder aggressiveness.","keyphrases":["robust","combinatori auction","bid","enforc commit","bid withdraw","exposur problem","weight super solut","weight super solut","constraint program","constraint program","mutual bid bond","bid-taker's exposur problem","set partit problem","winner determin problem","mandatori mutual bid bond"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M","M","M","M"]} {"id":"J-42","title":"The Dynamics of Viral Marketing","abstract":"We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. 
While on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective.","lvl-1":"The Dynamics of Viral Marketing \u2217 Jure Leskovec \u2020 Carnegie Mellon University Pittsburgh, PA 15213 jure@cs.cmu.edu Lada A. Adamic \u2021 University of Michigan Ann Arbor, MI 48109 ladamic@umich.edu Bernardo A. Huberman HP Labs Palo Alto, CA 94304 bernardo.huberman@hp.com ABSTRACT We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products.\nWe observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model.\nWe then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations.\nWhile on average recommendations are not very effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective.\nCategories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics General Terms Economics 1.\nINTRODUCTION With consumers showing increasing resistance to traditional forms of advertising such as TV or newspaper ads, marketers have turned to alternate strategies, including viral marketing.\nViral marketing exploits existing social networks by encouraging customers to share product information with their friends.\nPreviously, a few in depth studies have shown that social networks affect the adoption of individual innovations and products (for a review see [15] or [16]).\nBut until recently it has been difficult to measure how influential person-to-person recommendations actually are over a wide range of products.\nWe were able to directly measure and 
model the effectiveness of recommendations by studying one online retailer's incentivised viral marketing program.\nThe website gave discounts to customers recommending any of its products to others, and then tracked the resulting purchases and additional recommendations.\nAlthough word of mouth can be a powerful factor influencing purchasing decisions, it can be tricky for advertisers to tap into.\nSome services used by individuals to communicate are natural candidates for viral marketing, because the product can be observed or advertised as part of the communication.\nEmail services such as Hotmail and Yahoo had very fast adoption curves because every email sent through them contained an advertisement for the service and because they were free.\nHotmail spent a mere $50,000 on traditional marketing and still grew from zero to 12 million users in 18 months [7].\nGoogle's Gmail captured a significant part of market share in spite of the fact that the only way to sign up for the service was through a referral.\nMost products cannot be advertised in such a direct way.\nAt the same time the choice of products available to consumers has increased manyfold thanks to online retailers who can supply a much wider variety of products than traditional brick-and-mortar stores.\nNot only is the variety of products larger, but one observes a 'fat tail' phenomenon, where a large fraction of purchases are of relatively obscure items.\nOn Amazon.com, somewhere between 20 to 40 percent of unit sales fall outside of its top 100,000 ranked products [2].\nRhapsody, a streaming-music service, streams more tracks outside than inside its top 10,000 tunes [1].\nEffectively advertising these niche products using traditional advertising approaches is impractical.\nTherefore using more targeted marketing approaches is advantageous both to the merchant and the consumer, who would benefit from learning about new products.\nThe problem is partly addressed by the advent of online product and 
merchant reviews, both at retail sites such as EBay and Amazon, and specialized product comparison sites such as Epinions and CNET.\nQuantitative marketing techniques have been proposed [12], and the rating of products and merchants has been shown to affect the likelihood of an item being bought [13, 4].\nOf further help to the consumer are collaborative filtering recommendations of the form 'people who bought x also bought y' [11].\nThese refinements help consumers discover new products and receive more accurate evaluations, but they cannot completely substitute personalized recommendations that one receives from a friend or relative.\nIt is human nature to be more interested in what a friend buys than what an anonymous person buys, to be more likely to trust their opinion, and to be more influenced by their actions.\nOur friends are also acquainted with our needs and tastes, and can make appropriate recommendations.\nA Lucid Marketing survey found that 68% of individuals consulted friends and relatives before purchasing home electronics - more than the half who used search engines to find product information [3].\nSeveral studies have attempted to model just this kind of network influence.\nRichardson and Domingos [14] used Epinions' trusted reviewer network to construct an algorithm to maximize viral marketing efficiency assuming that individuals' probability of purchasing a product depends on the opinions of the trusted peers in their network.\nKempe, Kleinberg and Tardos [8] evaluate the efficiency of several algorithms for maximizing the size of the influence set given various models of adoption.\nWhile these models address the question of maximizing the spread of influence in a network, they are based on assumed rather than measured influence effects.\nIn contrast, in our study we are able to directly observe the effectiveness of person-to-person word of mouth advertising for hundreds of thousands of products for the first time.\nWe find that most 
recommendation chains do not grow very large, often terminating with the initial purchase of a product.\nHowever, occasionally a product will propagate through a very active recommendation network.\nWe propose a simple stochastic model that seems to explain the propagation of recommendations.\nMoreover, the characteristics of recommendation networks influence the purchase patterns of their members.\nFor example, individuals' likelihood of purchasing a product initially increases as they receive additional recommendations for it, but a saturation point is quickly reached.\nInterestingly, as more recommendations are sent between the same two individuals, the likelihood that they will be heeded decreases.\nWe also propose models to identify products for which viral marketing is effective: we find that the category and price of a product play a role, with recommendations of expensive products of interest to small, well-connected communities resulting in a purchase more often.\nWe also observe patterns in the timing of recommendations and purchases corresponding to times of day when people are likely to be shopping online or reading email.\nWe report on these and other findings in the following sections.\n2.\nTHE RECOMMENDATION NETWORK\n2.1 Dataset description\nOur analysis focuses on the recommendation referral program run by a large retailer.\nThe program rules were as follows.\nEach time a person purchases a book, music, or a movie he or she is given the option of sending emails recommending the item to friends.\nThe first person to purchase the same item through a referral link in the email gets a 10% discount.\nWhen this happens the sender of the recommendation receives a 10% credit on their purchase.\nThe recommendation dataset consists of 15,646,121 recommendations made among 3,943,084 distinct users.\nThe data was collected from June 5 2001 to May 16 2003.\nIn total, 548,523 products were recommended, 99% of them belonging to 4 main product groups: Books, DVDs, 
Music and Videos. In addition to recommendation data, we also crawled the retailer's website to obtain product categories, reviews and ratings for all products. Of the products in our data set, 5813 (1%) were discontinued (the retailer no longer provided any information about them).

Although the data give us a detailed and accurate view of recommendation dynamics, they do have limitations. The only indication of the success of a recommendation is the observation of the recipient purchasing the product through the same vendor. We have no way of knowing whether the person instead decided to purchase elsewhere, borrow, or otherwise obtain the product. The delivery of the recommendation is also somewhat different from one person simply telling another about a product they enjoy, possibly in the context of a broader discussion of similar products. The recommendation is received as a form email that includes information about the discount program. Someone reading the email might consider it spam, or at least deem it less important than a recommendation given in the context of a conversation. The recipient may also doubt whether the friend is recommending the product because they think the recipient might enjoy it, or is simply trying to get a discount. Finally, because the recommendation takes place before the recommender receives the product, it might not be based on a direct observation of the product. Nevertheless, we believe that these recommendation networks are reflective of the nature of word-of-mouth advertising, and give us key insights into the influence of social networks on purchasing decisions.

2.2 Recommendation network statistics
For each recommendation, the dataset includes the product and product price, sender ID, receiver ID, the sent date, and a buy-bit indicating whether the recommendation resulted in a purchase and discount. The sender and receiver IDs were shadowed. We represent this data set as a directed multigraph. The nodes represent customers, and a directed edge contains all the information about the recommendation. The edge (i, j, p, t) indicates that i recommended product p to customer j at time t. The typical process generating edges in the recommendation network is as follows: a node i first buys a product p at time t and then recommends it to nodes j1, ..., jn. The j nodes can then buy the product and further recommend it. The only way for a node to recommend a product is to first buy it. Note that even if all nodes j buy the product, only the edge to the node jk that first made the purchase (within a week after the recommendation) will be marked by a buy-bit. Because the buy-bit is set only for the first person who acts on a recommendation, we identify additional purchases by the presence of outgoing recommendations for a person, since all recommendations must be preceded by a purchase. We call this type of evidence of purchase a buy-edge. Note that buy-edges provide only a lower bound on the total number of purchases without discounts. It is possible for a customer to not be the first to act on a recommendation and also to not recommend the product to others; unfortunately, this was not recorded in the data set. We consider, however, the buy-bits and buy-edges as proxies for the total number of purchases through recommendations. For each product group we took recommendations on all products from the group and created a network.

Figure 1: (a) The size of the largest connected component of customers over time, plotted against the number of customers (with a quadratic fit); the inset shows the linear growth in the number of customers n over time. (b) The number of recommendations sent by a user, with each curve representing a different depth of the user in the recommendation chain; a power-law exponent γ is fitted to all but the tail (γ = 2.6, 2.0, 1.5, 1.2 and 1.2 for levels 0 through 4).

Table 1 (first 7 columns) shows the sizes of various product group recommendation networks, with p being the total number of products in the product group, n the total number of nodes spanned by the group recommendation network and e the number of edges (recommendations). The column eu shows the number of unique edges, disregarding multiple recommendations between the same source and recipient. In terms of the number of different items, there are by far the most music CDs, followed by books and videos. There is a surprisingly small number of DVD titles. On the other hand, DVDs account for more than half of all recommendations in the dataset. The DVD network is also the most dense, having about 10 recommendations per node, while books and music have about 2 recommendations per node and videos have only a bit more than 1 recommendation per node. Music recommendations reached about the same number of people as DVDs but used more than 5 times fewer recommendations to achieve the same coverage of the nodes. Book recommendations reached by far the most people - 2.8 million. Notice that all networks have a very small number of unique edges. For books, videos and music the number of unique edges is smaller than the number of nodes; this suggests that the networks are highly disconnected [5]. Figure 1(a) shows the fraction of nodes in the largest weakly connected component over time. Notice the component is very small. Even if we compose a network using all the recommendations in the dataset, the largest connected component contains less than 2.5% (100,420) of the nodes, and the second largest component has only 600 nodes. Still, some smaller communities, numbering in the tens of thousands of purchasers of DVDs in categories
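The per-group counts reported in Table 1 (n, e, eu) and the buy-edge inference described above can be computed directly from the edge list. A minimal sketch on toy data (the edge tuples and product names are invented for illustration):

```python
# Toy edge list in the (i, j, p, t) form used above: sender i recommended
# product p to receiver j at time t.  All values here are hypothetical.
edges = [
    ("a", "b", "dvd1", 1),
    ("a", "c", "dvd1", 1),
    ("b", "d", "dvd1", 2),   # b sent a recommendation, so b must own dvd1
    ("a", "b", "book1", 3),  # repeat pair (a, b): not a new unique edge
]

n = len({x for i, j, _, _ in edges for x in (i, j)})   # nodes
e = len(edges)                                         # recommendations
eu = len({(i, j) for i, j, _, _ in edges})             # unique edges

# Buy-edge inference: any outgoing recommendation for product p implies a
# prior purchase of p by the sender, since only buyers may recommend.
inferred_purchases = {(i, p) for i, _, p, _ in edges}

print(n, e, eu, sorted(inferred_purchases))
```

On the full dataset the same pass over roughly 15.6 million edges yields the per-group columns of Table 1.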
such as westerns, classics and Japanese animated films (anime), had connected components spanning about 20% of their members. The inset in figure 1(a) shows the growth of the customer base over time. Surprisingly, it was linear, adding on average 165,000 new users each month, which is an indication that the service itself was not spreading epidemically. Further evidence of non-viral spread is provided by the relatively high percentage (94%) of users who made their first recommendation without having previously received one. Returning to table 1: given the total number of recommendations e and purchases (bb + be) influenced by recommendations, we can estimate how many recommendations need to be independently sent over the network to induce a new purchase. By this metric books have the most influential recommendations, followed by DVDs and music. For books, one out of 69 recommendations resulted in a purchase. For DVDs this increases to 108 recommendations per purchase, and it increases further to 136 for music and 203 for video. Even with these simple counts we can make a few first observations. It seems that some people got quite heavily involved in the recommendation program, and that they tended to recommend a large number of products to the same set of friends (since the number of unique edges is so small). This shows that people tend to buy more DVDs and also like to recommend them to their friends, while they seem to be more conservative with books. One possible reason is that a book is a bigger time investment than a DVD: one usually needs several days to read a book, while a DVD can be viewed in a single evening. One external factor which may be affecting the recommendation patterns for DVDs is the existence of referral websites (www.dvdtalk.com). On these websites, people who want to buy a DVD and get a discount ask for recommendations. This way recommendations are made between people who don't really know each other but rather have an
economic incentive to cooperate. We were not able to find similar referral sharing sites for books or CDs.

2.3 Forward recommendations
Not all people who make a purchase also decide to give recommendations, so we estimate what fraction of purchasers decide to recommend forward. To obtain this information we can only use the nodes whose purchases resulted in a discount. The last 3 columns of table 1 show that only about a third of the people that purchase also recommend the product forward. The ratio of forward recommendations is much higher for DVDs than for other kinds of products. Videos also have a higher ratio of forward recommendations, while books have the lowest. This shows that people are most keen on recommending movies, and more conservative when recommending books and music. Figure 1(b) shows the cumulative out-degree distribution, that is, the number of people who sent out at least kp recommendations for a product. It shows that the deeper an individual is in the cascade, if they choose to make recommendations, they tend to recommend to a greater number of people on average (the distribution has a higher variance). This effect is probably due to only very heavily recommended products producing large enough cascades to reach a certain depth. We also observe that the probability of an individual making a recommendation at all (which can only occur if they make a purchase) declines after an initial increase as one gets deeper into the cascade.

2.4 Identifying cascades
As customers continue forwarding recommendations, they contribute to the formation of cascades. In order to identify cascades, i.e.
the causal propagation of recommendations, we track successful recommendations as they influence purchases and further recommendations. We define a recommendation to be successful if it reached a node before its first purchase. We consider only the first purchase of an item, because there are many cases when a person made multiple purchases of the same product, and in between those purchases she may have received new recommendations. In this case one cannot conclude that recommendations following the first purchase influenced the later purchases.

Table 1: Product group recommendation statistics. p: number of products, n: number of nodes, e: number of edges (recommendations), eu: number of unique edges, bb: number of buy bits, be: number of buy edges. Last 3 columns: fraction of people that purchase and also recommend forward (Purchases: number of nodes that purchased; Forward: nodes that purchased and then also recommended the product).

Group   p        n          e           eu         bb      be      Purchases  Forward  Percent
Book    103,161  2,863,977  5,741,611   2,097,809  65,344  17,769  65,391     15,769   24.2
DVD     19,829   805,285    8,180,393   962,341    17,232  58,189  16,459     7,336    44.6
Music   393,598  794,148    1,443,847   585,738    7,837   2,739   7,843      1,824    23.3
Video   26,131   239,583    280,270     160,683    909     467     909        250      27.6
Total   542,719  3,943,084  15,646,121  3,153,676  91,322  79,164  90,602     25,179   27.8

Figure 2: Examples of two product recommendation networks: (a) a first aid study guide, First Aid for the USMLE Step, (b) a Japanese graphic novel (manga), Oh My Goddess!: Mara Strikes Back.

Figure 3: Distribution of the number of recommendations (power-law fit proportional to x^-2.30, R2 = 0.96) and number of purchases (fit proportional to x^-2.49, R2 = 0.99) made by a node.

Each cascade is a network consisting of
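The pruning step that turns raw recommendations into causal cascades can be sketched in a few lines. This is a toy illustration of the rule just described (all IDs and timestamps are hypothetical):

```python
# Keep a recommendation only if it is "successful" in the sense defined
# above: it reached the receiver before the receiver's first purchase of
# the product.  Receivers with no recorded purchase keep their incoming
# edges, since nothing shows the edge arrived too late.
first_buy = {("b", "p1"): 5, ("c", "p1"): 2}   # (customer, product) -> time

recs = [
    ("a", "b", "p1", 3),   # arrived before b's purchase at t=5 -> kept
    ("a", "b", "p1", 7),   # late recommendation -> deleted
    ("a", "c", "p1", 4),   # c already bought at t=2 -> deleted
    ("a", "d", "p1", 6),   # d never bought -> kept
]

causal = [
    (i, j, p, t)
    for (i, j, p, t) in recs
    if (j, p) not in first_buy or t < first_buy[(j, p)]
]
print(causal)
```

After this filter every node's incoming edges precede its outgoing ones, so each connected component is a time-obeying propagation of recommendations.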
customers (nodes) who purchased the same product as a result of each other's recommendations (edges). We delete late recommendations - all incoming recommendations that happened after the first purchase of the product. This way we make the network time-increasing, or causal: for each node, all incoming edges (recommendations) occurred before all outgoing edges. Now each connected component represents a time-obeying propagation of recommendations. Figure 2 shows two typical product recommendation networks: (a) a medical study guide and (b) a Japanese graphic novel. Throughout the dataset we observe very similar patterns. Most product recommendation networks consist of a large number of small disconnected components where we do not observe cascades. Then there is usually a small number of relatively small components with recommendations successfully propagating. This observation is reflected in the heavy-tailed distribution of cascade sizes (see figure 4), which has a power-law exponent close to 1 for DVDs in particular. We also notice bursts of recommendations (figure 2(b)): some nodes recommend to many friends, forming a star-like pattern. Figure 3 shows the distribution of the recommendations and purchases made by a single node in the recommendation network. Notice the power-law distributions and long flat tails. The most active person made 83,729 recommendations and purchased 4,416 different items. Finally, we also sometimes observe 'collisions', where nodes receive recommendations from two or more sources. A detailed enumeration and analysis of observed topological cascade patterns for this dataset is given in [10].

2.5 The recommendation propagation model
A simple model can help explain how the wide variance we observe in the number of recommendations made by individuals can lead to power-laws in cascade sizes (figure 4). The model assumes that each recipient of a recommendation will forward it to others if its value exceeds an arbitrary threshold
that the individual sets for herself. Since exceeding this value is a probabilistic event, let pt be the probability that at time step t the recommendation exceeds the threshold. In that case the number of recommendations Nt+1 at time (t + 1) is given in terms of the number of recommendations at an earlier time by

N_{t+1} = p_t N_t    (1)

where the probability pt is defined over the unit interval.

Figure 4: Size distribution of cascades (size of cascade vs. count), with power-law fits (bold lines): (a) Book, x^-4.98 (R2 = 0.99); (b) DVD, x^-1.56 (R2 = 0.83); (c) Music, x^-6.27 (R2 = 0.97); (d) Video, x^-5.87 (R2 = 0.97).

Notice that, because of the probabilistic nature of the threshold being exceeded, one can only compute the final distribution of recommendation chain lengths, which we now proceed to do. Subtracting the term Nt from both sides of this equation and dividing by it we obtain

(N_{t+1} - N_t) / N_t = p_t - 1    (2)

Summing both sides from the initial time to some very large time T, and assuming that for long times the numerator is smaller than the denominator (a reasonable assumption), we get

∫ dN/N = Σ_t p_t    (3)

The left-hand side is just ln(N), and the right-hand side is a sum of random variables, which in the limit of a very large uncorrelated number of recommendations is normally distributed (central limit theorem). This means that the logarithm of the number of messages is normally distributed, or equivalently, that the number of messages passed is log-normally distributed. In other words, the probability density for N is given by

P(N) = 1 / (N √(2πσ²)) exp(−(ln N − μ)² / (2σ²))    (4)

which, for large variances, describes a behavior whereby the typical number of recommendations is small (the mode of the distribution) but there are unlikely events of large chains of recommendations which are also observable. Furthermore, for large variances, the lognormal distribution can behave like a power law for a range of values. In order to see this, take logarithms on both sides of the equation (equivalent to a log-log plot) and one obtains

ln P(N) = −ln N − ln √(2πσ²) − (ln N − μ)² / (2σ²)    (5)

So, for large σ, the last term of the right-hand side goes to zero, and since the second term is a constant, one obtains power-law behavior with exponent value of minus one. There are other models which produce power-law distributions of cascade sizes, but we present ours for its simplicity, since it does not depend on network topology [6] or critical thresholds in the probability of a recommendation being accepted [18].

3. SUCCESS OF RECOMMENDATIONS
So far we only looked into the aggregate statistics of the recommendation network. Next, we ask questions about the effectiveness of recommendations in the recommendation network itself. First, we analyze the probability of purchasing as one gets more and more recommendations. Next, we measure recommendation effectiveness as two people exchange more and more recommendations. Lastly, we observe the recommendation network from the perspective of the sender of the recommendation. Does a node that makes more recommendations also influence more purchases?

3.1 Probability of buying versus number of incoming recommendations
First, we examine how the probability of purchasing changes as one gets more and more recommendations. One would expect that a person is more likely to buy a product if she gets more recommendations. On the other hand, one would also think that there is a saturation point - if a person hasn't bought a product after a number of recommendations, they are not likely to change their mind after receiving even more of them. So, how many recommendations are too many? Figure 5 shows the probability of purchasing a
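The central-limit argument behind the model is easy to check numerically: iterating N_{t+1} = p_t N_t makes ln N a sum of ln p_t terms, so the final sizes should be approximately log-normal. A small Monte-Carlo sketch (the starting size, number of steps, and the range assumed for p_t are arbitrary illustrative choices, not taken from the data):

```python
import math
import random

random.seed(0)

def final_size(n0=1e6, steps=50):
    # One realization of N_{t+1} = p_t * N_t, with p_t drawn uniformly
    # from an assumed sub-interval of the unit interval.
    n = n0
    for _ in range(steps):
        n *= random.uniform(0.5, 1.0)
    return n

logs = [math.log(final_size()) for _ in range(20000)]
mean = sum(logs) / len(logs)
var = sum((x - mean) ** 2 for x in logs) / len(logs)
# ln N concentrates around ln(n0) + steps * E[ln p_t] with variance
# steps * Var[ln p_t], i.e. N itself is approximately log-normal.
print(mean, var)
```

For the assumed p_t ~ U(0.5, 1), E[ln p_t] ≈ -0.307 and Var[ln p_t] ≈ 0.039, so the sample mean and variance of ln N should land near ln(10^6) - 15.3 ≈ -1.53 and 50 × 0.039 ≈ 1.95.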
product as a function of the number of incoming recommendations on the product. As we move to higher numbers of incoming recommendations, the number of observations drops rapidly. For example, there were 5 million cases with 1 incoming recommendation on a book, and only 58 cases where a person got 20 incoming recommendations on a particular book. The maximum was 30 incoming recommendations. For these reasons we cut off the plot when the number of observations becomes too small and the error bars too large. Figure 5(a) shows that, overall, book recommendations are rarely followed. Even more surprisingly, as more and more recommendations are received, their success decreases. We observe a peak in probability of buying at 2 incoming recommendations and then a slow drop. For DVDs (figure 5(b)) we observe saturation around 10 incoming recommendations. This means that after a person gets 10 recommendations on a particular DVD, they become immune to them - their probability of buying does not increase anymore. The number of observations is 2.5 million at 1 incoming recommendation and 100 at 60 incoming recommendations. The maximal number of received recommendations is 172 (and that person did not buy).

Figure 5: Probability of buying a book (DVD) given a number of incoming recommendations.

Figure 6: The effectiveness of recommendations with the total number of exchanged recommendations.

3.2 Success of subsequent recommendations
Next, we analyze how the effectiveness of recommendations changes as two persons exchange more and more recommendations. A large
number of exchanged recommendations can be a sign of trust and influence, but a sender of too many recommendations can be perceived as a spammer. A person who recommends only a few products will have her friends' attention, but one who floods her friends with all sorts of recommendations will start to lose her influence. We measure the effectiveness of recommendations as a function of the total number of previously exchanged recommendations between the two nodes. We construct the experiment in the following way. For every recommendation r on some product p between nodes u and v, we first determine how many recommendations were exchanged between u and v before recommendation r. Then we check whether v, the recipient of the recommendation, purchased p after recommendation r arrived. For the experiment we consider only node pairs (u, v) where there were at least a total of 10 recommendations sent from u to v. We perform the experiment using only recommendations from the same product group. Figure 6 shows the probability of buying as a function of the total number of exchanged recommendations between two persons up to that point. For books we observe that the effectiveness of recommendation remains about constant up to 3 exchanged recommendations. As the number of exchanged recommendations increases, the probability of buying starts to decrease to about half of the original value and then levels off. For DVDs we observe an immediate and consistent drop. This experiment shows that recommendations start to lose effect after more than two or three are passed between two people. We performed the experiment also for video and music, but the number of observations was too low and the measurements were noisy.

3.3 Success of outgoing recommendations
In previous sections we examined the data from the viewpoint of the receiver of the recommendation. Now we look from the viewpoint of the sender. The two interesting questions are: how does the probability of getting
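The pair-wise experiment described above amounts to a single pass over time-ordered recommendations, bucketing each one by how many the pair had already exchanged. A sketch on toy data (the real experiment additionally restricts to pairs with at least 10 exchanges and to a single product group):

```python
from collections import defaultdict

# (sender, receiver, time, receiver_bought_after_this_recommendation)
# All tuples are hypothetical illustrations.
recs = [
    ("u", "v", 1, False),
    ("u", "v", 2, True),
    ("u", "v", 3, False),
    ("u", "w", 1, True),
]

prior = defaultdict(int)               # (u, v) -> recommendations sent so far
stats = defaultdict(lambda: [0, 0])    # rank -> [attempts, purchases]
for u, v, t, bought in sorted(recs, key=lambda r: r[2]):
    rank = prior[(u, v)]               # exchanges between u and v before r
    stats[rank][0] += 1
    stats[rank][1] += int(bought)
    prior[(u, v)] += 1

prob_buy = {rank: buys / tries for rank, (tries, buys) in stats.items()}
print(prob_buy)
```

Plotting prob_buy against rank on the full dataset yields the curves of Figure 6.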
a 10% credit change with the number of outgoing recommendations; and given a number of outgoing recommendations, how many purchases will they influence? One would expect that recommendations are most effective when sent to the right subset of friends. If one is very selective and recommends to too few friends, then the chances of success are slim. On the other hand, recommending to everyone and spamming them with recommendations may have limited returns as well. The top row of figure 7 shows how the average number of purchases changes with the number of outgoing recommendations. For books, music, and videos the number of purchases soon saturates: it grows fast up to around 10 outgoing recommendations and then the trend either slows or starts to drop. DVDs exhibit different behavior, with the expected number of purchases increasing throughout. But if we plot the probability of getting a 10% credit as a function of the number of outgoing recommendations, as in the bottom row of figure 7, we see that the success of DVD recommendations saturates as well, while books, videos and music have qualitatively similar trends. The difference in the curves for DVD recommendations points to the presence of collisions in the dense DVD network, which has 10 recommendations per node and around 400 per product - an order of magnitude more than the other product groups. This means that many different individuals are recommending to the same person, and after that person makes a purchase, even though all of them made a 'successful recommendation' by our definition, only one of them receives a credit.

Figure 7: (a) Books, (b) DVD, (c) Music, (d) Video. Top row: number of resulting purchases given a number of outgoing recommendations. Bottom row: probability of getting a credit given a number of outgoing recommendations.

Figure 8: The time between the recommendation and the actual purchase. We use all purchases.

4. TIMING OF RECOMMENDATIONS AND PURCHASES
The recommendation referral program encourages people to purchase as soon as possible after they get a recommendation, since this maximizes the probability of getting a discount. We study the time lag between the recommendation and the purchase for different product groups - effectively, how long it takes a person to receive a recommendation, consider it, and act on it. We present histograms of the thinking time, i.e.
the difference between the time of purchase and the time the last recommendation was received for the product prior to the purchase (figure 8). We use a bin size of 1 day. Around 35%-40% of book and DVD purchases occurred within a day after the last recommendation was received. For DVDs, 16% of purchases occur more than a week after the last recommendation, while this drops to 10% for books. In contrast, if we consider the lag between the purchase and the first recommendation, only 23% of DVD purchases are made within a day, while the proportion stays the same for books. This reflects a greater likelihood for a person to receive multiple recommendations for a DVD than for a book. At the same time, DVD recommenders tend to send out many more recommendations, only one of which can result in a discount. Individuals then often miss their chance of a discount, which is reflected in the high ratio (78%) of recommended DVD purchases that did not get a discount (see table 1, columns bb and be). In contrast, for books, only 21% of purchases through recommendations did not receive a discount. We also measure the variation in intensity by time of day for three different activities in the recommendation system: recommendations (figure 9(a)), all purchases (figure 9(b)), and finally just the purchases which resulted in a discount (figure 9(c)). Each is given as a total count by hour of day. The recommendations and purchases follow the same pattern. The only small difference is that purchases reach a sharper peak in the afternoon (after 3pm Pacific Time, 6pm Eastern Time). The purchases that resulted in a discount look like a negative image of the first two figures. This means that most discounted purchases happened in the morning, when the traffic (number of purchases/recommendations) on the retailer's website was low. This makes sense: most of the recommendations happened during the day, and if a person wanted to get the discount by being the first
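The thinking-time histogram can be sketched as follows (hypothetical timestamps, measured in days; bin 1 collects purchases made within 24 hours of the last prior recommendation, matching the 1-day bin size used above):

```python
from collections import Counter

# (day of last prior recommendation, day of purchase) - invented values.
events = [
    (10.0, 10.4),
    (10.0, 11.7),
    (3.0, 12.0),
]

def lag_bin(rec_day, buy_day, max_bin=7):
    # 1-day bins: bin 1 = within the first day, bin 2 = the next day, etc.;
    # everything beyond a week is lumped into a single "> 7" bin.
    days = int(buy_day - rec_day) + 1
    return days if days <= max_bin else "> 7"

hist = Counter(lag_bin(r, b) for r, b in events)
print(dict(hist))
```

Normalizing the counts by the total number of events gives the proportions plotted in Figure 8.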
one to purchase, she had the highest chances when the traffic on the website was lowest.

Figure 9: Time of day for purchases and recommendations. (a) shows the distribution of recommendations over the day, (b) shows all purchases and (c) shows only purchases that resulted in getting a discount.

5. RECOMMENDATION EFFECTIVENESS BY BOOK CATEGORY
Social networks are a product of the contexts that bring people together. Some contexts result in social ties that are more effective at conducting an action. For example, in small-world experiments, where participants attempt to reach a target individual through their chain of acquaintances, profession trumped geography, which in turn was more useful in locating a target than attributes such as religion or hobbies [9, 17]. In the context of product recommendations, we can ask whether a recommendation for a work of fiction, which may be made by any friend or neighbor, is more or less influential than a recommendation for a technical book, which may be made by a colleague at work or school. Table 2 shows recommendation trends for all top-level book categories by subject. An analysis of other product types can be found in the extended version of the paper. For clarity, we group the results by 4 different category types: fiction, personal/leisure, professional/technical, and nonfiction/other. Fiction encompasses categories such as Sci-Fi and Romance, as well as children's and young adult books. Personal/Leisure encompasses everything from gardening, photography and cooking to health and religion. First, we compare the relative number of recommendations to reviews posted on the site (column cav/rp1 of table 2). Surprisingly,
we find that the number of people making personal recommendations was only a few times greater than the number of people posting a public review on the website. We observe that fiction books have relatively few recommendations compared to the number of reviews, while professional and technical books have more recommendations than reviews. This could reflect several factors. One is that people feel more confident reviewing fiction than technical books. Another is that they hesitate to recommend a work of fiction before reading it themselves, since the recommendation must be made at the point of purchase. Yet another explanation is that the median price of a work of fiction is lower than that of a technical book. This means that the discount received for successfully recommending a mystery novel or thriller is lower, and hence people have less incentive to send recommendations. Next, we measure the per-category efficacy of recommendations by observing the ratio of the number of purchases occurring within a week following a recommendation to the number of recommenders for each book subject category (column b of table 2). On average, only 2% of the recommenders of a book received a discount because their recommendation was accepted, and another 1% made a recommendation that resulted in a purchase but not a discount. We observe marked differences in the response to recommendation for different categories of books. Fiction in general is not very effectively recommended, with only around 2% of recommenders succeeding. The efficacy is a bit higher (around 3%) for non-fiction books dealing with personal and leisure pursuits, but is significantly higher in the professional and technical category. Medical books have nearly double the average rate of recommendation acceptance. This could be in part attributed to the higher median price of medical books and technical books in general. As we will see in Section 6, a higher product price increases the chance that a
recommendation will be accepted. Recommendations are also more likely to be accepted for certain religious categories: 4.3% for Christian living and theology and 4.8% for Bibles. In contrast, books not tied to organized religions, such as ones on the subject of new age (2.5%) and occult (2.2%) spirituality, have lower recommendation effectiveness. These results raise the interesting possibility that individuals have greater influence over one another in an organized context, for example through a professional contact or a religious one. There are exceptions of course. For example, Japanese anime DVDs have a strong following in the US, and this is reflected in their frequency and success in recommendations. Another example is that of gardening. In general, recommendations for books relating to gardening have only a modest chance of being accepted, which agrees with the individual prerogative that accompanies this hobby. At the same time, orchid cultivation can be a highly organized and social activity, with frequent 'shows' and online communities devoted entirely to orchids. Perhaps because of this, the rate of acceptance of orchid book recommendations is twice as high as those for books on vegetable or tomato growing.

6. MODELING THE RECOMMENDATION SUCCESS
We have examined the properties of the recommendation network in relation to viral marketing, but one question still remains: what determines a product's viral marketing success? We present a model which characterizes product categories for which recommendations are more likely to be accepted. We use a regression of the following product attributes to correlate them with recommendation success:
• r: number of recommendations
• ns: number of senders of recommendations
• nr: number of recipients of recommendations
• p: price of the product
• v: number of reviews of the product
• t: average product rating

Table 2: Statistics by book category. np: number of products in category, n: number of customers, cc: percentage of customers in the largest connected component, rp1: av. # reviews in 2001-2003, vav: average star rating, cav/rp1: ratio of recommenders to reviewers, pm: median price, b: ratio of the number of purchases resulting from a recommendation to the number of recommenders. The symbol ** denotes statistical significance at the 0.01 level, * at the 0.05 level.

category                    np       n          cc     rp1    vav   cav/rp1  pm     b*100
Books general               370,230  2,860,714  1.87   5.28   4.32  1.41     14.95  3.12
Fiction
Children's Books            46,451   390,283    2.82   6.44   4.52  1.12     8.76   2.06**
Literature & Fiction        41,682   502,179    3.06   13.09  4.30  0.57     11.87  2.82*
Mystery and Thrillers       10,734   123,392    6.03   20.14  4.08  0.36     9.60   2.40**
Science Fiction & Fantasy   10,008   175,168    6.17   19.90  4.15  0.64     10.39  2.34**
Romance                     6,317    60,902     5.65   12.81  4.17  0.52     6.99   1.78**
Teens                       5,857    81,260     5.72   20.52  4.36  0.41     9.56   1.94**
Comics & Graphic Novels     3,565    46,564     11.70  4.76   4.36  2.03     10.47  2.30*
Horror                      2,773    48,321     9.35   21.26  4.16  0.44     9.60   1.81**
Personal/Leisure
Religion and Spirituality   43,423   441,263    1.89   3.87   4.45  1.73     9.99   3.13
Health Mind and Body        33,751   572,704    1.54   4.34   4.41  2.39     13.96  3.04
History                     28,458   283,406    2.74   4.34   4.30  1.27     18.00  2.84
Home and Garden             19,024   180,009    2.91   1.78   4.31  3.48     15.37  2.26**
Entertainment               18,724   258,142    3.65   3.48   4.29  2.26     13.97  2.66*
Arts and Photography        17,153   179,074    3.49   1.56   4.42  3.85     20.95  2.87
Travel                      12,670   113,939    3.91   2.74   4.26  1.87     13.27  2.39**
Sports                      10,183   120,103    1.74   3.36   4.34  1.99     13.97  2.26**
Parenting and Families      8,324    182,792    0.73   4.71   4.42  2.57     11.87  2.81
Cooking Food and Wine       7,655    146,522    3.02   3.14   4.45  3.49     13.97  2.38*
Outdoors & Nature           6,413    59,764     2.23   1.93   4.42  2.50     15.00  3.05
Professional/Technical
Professional & Technical    41,794   459,889    1.72   1.91   4.30  3.22     32.50  4.54**
Business and Investing      29,002   476,542    1.55   3.61   4.22  2.94     20.99  3.62**
Science                     25,697   271,391    2.64   2.41   4.30  2.42     28.00  3.90**
Computers and Internet      18,941   375,712    2.22   4.51   3.98  3.10     34.95  3.61**
Medicine                    16,047   175,520    1.08   1.41   4.40  4.19     39.95  5.68**
Engineering                 10,312   107,255    1.30   1.43   4.14  3.85     59.95  4.10**
Law                         5,176    53,182     2.64   1.89   4.25  2.67     24.95  3.66*
Nonfiction-other
Nonfiction                  55,868   560,552    2.03   3.13   4.29  1.89     18.95  3.28**
Reference                   26,834   371,959    1.94   2.49   4.19  3.04     17.47  3.21
Biographies and Memoirs     18,233   277,356    2.80   7.65   4.34  0.90     14.00  2.96

From the original set of half a million products, we compute a success rate s for the 48,218 products that had at least one purchase made through a recommendation and for which a price was given. In section 5 we defined the recommendation success rate s as the ratio of the total number of purchases made through recommendations to the number of senders of the recommendations. We decided to use this kind of normalization, rather than normalizing by the total number of recommendations sent, in order not to penalize communities where a few individuals send out many recommendations (figure 2(b)). Since the variables follow a heavy-tailed distribution, we use the following model:

s = exp(Σ_i β_i log(x_i) + ε)

where the x_i are the product attributes (as described above), and ε is a random error term. We fit the model using least squares and obtain the coefficients β_i shown in table 3. With the exception of the average rating, they are all significant. The only two attributes with a positive coefficient are the number of recommendations and the price. This shows that more expensive and more recommended products have a higher success rate. The number of senders and receivers have large negative coefficients, showing that successfully recommended products are more likely to be not so widely popular. They have relatively many recommendations with a small number of senders and receivers, which suggests a very dense recommendation network in which lots of recommendations were exchanged
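The fit just described is an ordinary least-squares regression in log space. A sketch on synthetic attributes (the real product data are not reproduced here; the coefficient values, noise level, and attribute distributions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic heavy-tailed attributes standing in for e.g. r, ns and p.
n_products = 500
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n_products, 3))
beta_true = np.array([0.4, -0.8, 0.1])   # invented "true" coefficients

# log(s) = beta_0 + sum_i beta_i * log(x_i) + noise
log_s = 0.3 + np.log(X) @ beta_true + rng.normal(0.0, 0.05, n_products)

# Least-squares fit of the log-linear model, with an intercept column.
A = np.column_stack([np.ones(n_products), np.log(X)])
coef, *_ = np.linalg.lstsq(A, log_s, rcond=None)
print(coef)   # recovers approximately [0.3, 0.4, -0.8, 0.1]
```

Taking logs of the multiplicative model turns it into a linear regression, which is why a plain least-squares fit recovers the β_i.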
between a small community of people. These insights could be of use to marketers: personal recommendations are most effective in small, densely connected communities enjoying expensive products.

Variable | Coefficient β_i
const | -0.940 (0.025)**
r | 0.426 (0.013)**
ns | -0.782 (0.004)**
nr | -1.307 (0.015)**
p | 0.128 (0.004)**
v | -0.011 (0.002)**
t | -0.027 (0.014)*
R2 | 0.74

Table 3: Regression using the log of the recommendation success rate, ln(s), as the dependent variable. For each coefficient we provide the standard error and the statistical significance level (**: 0.01, *: 0.1).

7. DISCUSSION AND CONCLUSION

Although the retailer may have hoped to boost its revenues through viral marketing, the additional purchases that resulted from recommendations are just a drop in the bucket of sales that occur through the website. Nevertheless, we were able to obtain a number of interesting insights into how viral marketing works that challenge common assumptions made in epidemic and rumor propagation modeling. Firstly, it is frequently assumed in epidemic models that individuals have an equal probability of being infected every time they interact. Contrary to this, we observe that the probability of infection decreases with repeated interaction. Marketers should take heed that providing excessive incentives for customers to recommend products could backfire by weakening the credibility of the very same links they are trying to take advantage of. Traditional epidemic and innovation diffusion models also often assume that individuals either have a constant probability of 'converting' every time they interact with an infected individual, or that they convert once the fraction of their contacts who are infected exceeds a threshold. In both cases, an increasing number of infected contacts results in an increased likelihood of infection. Instead, we find that the probability of purchasing a product increases with the number of recommendations received, but quickly saturates to a
constant and relatively low probability. This means individuals are often impervious to the recommendations of their friends, and resist buying items that they do not want. In network-based epidemic models, extremely highly connected individuals play a very important role. For example, in needle-sharing and sexual contact networks these nodes become the super-spreaders by infecting a large number of people. But these models assume that a high-degree node has as high a probability of infecting each of its neighbors as a low-degree node does. In contrast, we find that there are limits to how influential high-degree nodes are in the recommendation network. As a person sends out more and more recommendations past a certain number for a product, the success per recommendation declines. This would seem to indicate that individuals have influence over a few of their friends, but not everybody they know. We also presented a simple stochastic model that allows for the presence of relatively large cascades for a few products, but reflects well the general tendency of recommendation chains to terminate after just a small number of steps. We saw that the characteristics of product reviews and the effectiveness of recommendations vary by category and price, with more successful recommendations being made on technical or religious books, which presumably are placed in the social context of a school, workplace or place of worship. Finally, we presented a model which shows that smaller and more tightly knit groups tend to be more conducive to viral marketing. So despite the relative ineffectiveness of the viral marketing program in general, we found a number of new insights which we hope will have general applicability to marketing strategies and to future models of viral information spread.

8. REFERENCES

[1] Anonymous. Profiting from obscurity: What the long tail means for the economics of e-commerce. Economist, 2005.
[2] E. Brynjolfsson, Y. Hu, and M. D.
Smith. Consumer surplus in the digital economy: Estimating the value of increased product variety at online booksellers. Management Science, 49(11), 2003.
[3] K. Burke. As consumer attitudes shift, so must marketing strategies. 2003.
[4] J. Chevalier and D. Mayzlin. The effect of word of mouth on sales: Online book reviews. 2004.
[5] P. Erdős and A. Rényi. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci., 1960.
[6] D. Gruhl, R. Guha, D. Liben-Nowell, and A. Tomkins. Information diffusion through blogspace. In WWW '04, 2004.
[7] S. Jurvetson. What exactly is viral marketing? Red Herring, 78:110-112, 2000.
[8] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence in a social network. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2003.
[9] P. Killworth and H. Bernard. Reverse small world experiment. Social Networks, 1:159-192, 1978.
[10] J. Leskovec, A. Singh, and J. Kleinberg. Patterns of influence in a recommendation network. In Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), 2006.
[11] G. Linden, B. Smith, and J. York. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1):76-80, 2003.
[12] A. L. Montgomery. Applying quantitative marketing techniques to the internet. Interfaces, 30:90-108, 2001.
[13] P. Resnick and R. Zeckhauser. Trust among strangers in internet transactions: Empirical analysis of eBay's reputation system. In The Economics of the Internet and E-Commerce. Elsevier Science, 2002.
[14] M. Richardson and P. Domingos. Mining knowledge-sharing sites for viral marketing. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2002.
[15] E. M. Rogers. Diffusion of Innovations. Free Press, New York, fourth edition, 1995.
[16] D. Strang and S. A.
Soule. Diffusion in organizations and social movements: From hybrid corn to poison pills. Annual Review of Sociology, 24:265-290, 1998.
[17] J. Travers and S. Milgram. An experimental study of the small world problem. Sociometry, 1969.
[18] D. Watts. A simple model of global cascades on random networks. PNAS, 99(9):5766-5771, Apr 2002.

The Dynamics of Viral Marketing *

ABSTRACT

We present an analysis of a person-to-person recommendation network, consisting of 4 million people who made 16 million recommendations on half a million products. We observe the propagation of recommendations and the cascade sizes, which we explain by a simple stochastic model. We then establish how the recommendation network grows over time and how effective it is from the viewpoint of the sender and receiver of the recommendations. While on average recommendations are not very
effective at inducing purchases and do not spread very far, we present a model that successfully identifies product and pricing categories for which viral marketing seems to be very effective.

1. INTRODUCTION

With consumers showing increasing resistance to traditional forms of advertising such as TV or newspaper ads, marketers have turned to alternate strategies, including viral marketing. Viral marketing exploits existing social networks by encouraging customers to share product information with their friends. Previously, a few in-depth studies have shown that social networks affect the adoption of individual innovations and products (for a review see [15] or [16]). But until recently it has been difficult to measure how influential person-to-person recommendations actually are over a wide range of products. We were able to directly measure and model the effectiveness of recommendations by studying one online retailer's incentivised viral marketing program. The website gave discounts to customers recommending any of its products to others, and then tracked the resulting purchases and additional recommendations.

* A longer version of this paper can be found at http://arxiv.org/abs/physics/0509039. † This research was done while at HP Labs. ‡ Research likewise done while at HP Labs.

Although word of mouth can be a powerful factor influencing purchasing decisions, it can be tricky for advertisers to tap into. Some services used by individuals to communicate are natural candidates for viral marketing, because the product can be observed or advertised as part of the communication. Email services such as Hotmail and Yahoo had very fast adoption curves because every email sent through them contained an advertisement for the service and because they were free. Hotmail spent a mere $50,000 on traditional marketing and still grew from zero to 12 million users in 18 months [7]. Google's Gmail captured a significant part of market share in spite of the fact
that the only way to sign up for the service was through a referral. Most products cannot be advertised in such a direct way. At the same time, the choice of products available to consumers has increased manyfold thanks to online retailers who can supply a much wider variety of products than traditional brick-and-mortar stores. Not only is the variety of products larger, but one observes a 'fat tail' phenomenon, where a large fraction of purchases are of relatively obscure items. On Amazon.com, somewhere between 20 and 40 percent of unit sales fall outside of its top 100,000 ranked products [2]. Rhapsody, a streaming-music service, streams more tracks outside than inside its top 10,000 tunes [1]. Effectively advertising these niche products using traditional advertising approaches is impractical. Therefore using more targeted marketing approaches is advantageous both to the merchant and the consumer, who would benefit from learning about new products. The problem is partly addressed by the advent of online product and merchant reviews, both at retail sites such as eBay and Amazon, and specialized product comparison sites such as Epinions and CNET. Quantitative marketing techniques have been proposed [12], and the rating of products and merchants has been shown to affect the likelihood of an item being bought [13, 4]. Of further help to the consumer are collaborative filtering recommendations of the form "people who bought x also bought y" [11]. These refinements help consumers discover new products and receive more accurate evaluations, but they cannot completely substitute for the personalized recommendations that one receives from a friend or relative. It is human nature to be more interested in what a friend buys than what an anonymous person buys, to be more likely to trust their opinion, and to be more influenced by their actions. Our friends are also acquainted with our needs and tastes, and can make appropriate recommendations. A Lucid
Marketing survey found that 68% of individuals consulted friends and relatives before purchasing home electronics, more than the half who used search engines to find product information [3]. Several studies have attempted to model just this kind of network influence. Richardson and Domingos [14] used Epinions' trusted reviewer network to construct an algorithm to maximize viral marketing efficiency, assuming that individuals' probability of purchasing a product depends on the opinions of the trusted peers in their network. Kempe, Kleinberg and Tardos [8] evaluate the efficiency of several algorithms for maximizing the size of the influence set given various models of adoption. While these models address the question of maximizing the spread of influence in a network, they are based on assumed rather than measured influence effects. In contrast, in our study we are able to directly observe the effectiveness of person-to-person word-of-mouth advertising for hundreds of thousands of products for the first time. We find that most recommendation chains do not grow very large, often terminating with the initial purchase of a product. However, occasionally a product will propagate through a very active recommendation network. We propose a simple stochastic model that seems to explain the propagation of recommendations. Moreover, the characteristics of recommendation networks influence the purchase patterns of their members. For example, individuals' likelihood of purchasing a product initially increases as they receive additional recommendations for it, but a saturation point is quickly reached. Interestingly, as more recommendations are sent between the same two individuals, the likelihood that they will be heeded decreases. We also propose models to identify products for which viral marketing is effective: we find that the category and price of the product play a role, with recommendations of expensive products of interest to small, well-connected communities
resulting in a purchase more often. We also observe patterns in the timing of recommendations and purchases corresponding to times of day when people are likely to be shopping online or reading email. We report on these and other findings in the following sections.

2. THE RECOMMENDATION NETWORK

2.1 Dataset description

Our analysis focuses on the recommendation referral program run by a large retailer. The program rules were as follows. Each time a person purchases a book, music, or a movie, he or she is given the option of sending emails recommending the item to friends. The first person to purchase the same item through a referral link in the email gets a 10% discount. When this happens, the sender of the recommendation receives a 10% credit on their purchase. The recommendation dataset consists of 15,646,121 recommendations made among 3,943,084 distinct users. The data was collected from June 5, 2001 to May 16, 2003. In total, 548,523 products were recommended, 99% of them belonging to 4 main product groups: Books, DVDs, Music and Videos. In addition to recommendation data, we also crawled the retailer's website to obtain product categories, reviews and ratings for all products. Of the products in our data set, 5,813 (1%) were discontinued (the retailer no longer provided any information about them). Although the data gives us a detailed and accurate view of recommendation dynamics, it does have its limitations. The only indication of the success of a recommendation is the observation of the recipient purchasing the product through the same vendor. We have no way of knowing if the person had decided instead to purchase elsewhere, borrow, or otherwise obtain the product. The delivery of the recommendation is also somewhat different from one person simply telling another about a product they enjoy, possibly in the context of a broader discussion of similar products. The recommendation is received as a form email including information about the
discount program.\nSomeone reading the email might consider it spam, or at least deem it less important than a recommendation given in the context of a conversation.\nThe recipient may also doubt whether the friend is recommending the product because they think the recipient might enjoy it, or are simply trying to get a discount for themselves.\nFinally, because the recommendation takes place before the recommender receives the product, it might not be based on a direct observation of the product.\nNevertheless, we believe that these recommendation networks are reflective of the nature of word of mouth advertising, and give us key insights into the influence of social networks on purchasing decisions.\n2.2 Recommendation network statistics\nFor each recommendation, the dataset included the product and product price, sender ID, receiver ID, the sent date, and a buy-bit, indicating whether the recommendation resulted in a purchase and discount.\nThe sender and receiver IDs were shadowed (anonymized).\nWe represent this data set as a directed multigraph.\nThe nodes represent customers, and a directed edge contains all the information about the recommendation.\nThe edge (i, j, p, t) indicates that i recommended product p to customer j at time t.\nThe typical process generating edges in the recommendation network is as follows: a node i first buys a product p at time t and then recommends it to nodes j1,..., jn.\nThe j nodes can then buy the product and further recommend it.\nThe only way for a node to recommend a product is to first buy it.\nNote that even if all nodes j buy a product, only the edge to the node jk that first made the purchase (within a week after the recommendation) will be marked by a buy-bit.\nBecause the buy-bit is set only for the first person who acts on a recommendation, we identify additional purchases by the presence of outgoing recommendations for a person, since all recommendations must be preceded by a purchase.\nWe call this type of evidence of
purchase a buy-edge.\nNote that buy-edges provide only a lower bound on the total number of purchases without discounts.\nIt is possible for a customer to not be the first to act on a recommendation and also to not recommend the product to others.\nUnfortunately, this was not recorded in the data set.\nWe consider, however, the buy-bits and buy-edges as proxies for the total number of purchases through recommendations.\nFor each product group we took recommendations on all products from the group and created a network.\nFigure 1: (a) The size of the largest connected component of customers over time.\nThe inset shows the linear growth in the number of customers n over time.\n(b) The number of recommendations sent by a user with each curve representing a different depth of the user in the recommendation chain.\nA power law exponent \u03b3 is fitted to all but the tail.\nTable 1 (first 7 columns) shows the sizes of various product group recommendation networks, with p being the total number of products in the product group, n the total number of nodes spanned by the group recommendation network and e the number of edges (recommendations).\nThe column eu shows the number of unique edges--disregarding multiple recommendations between the same source and recipient.\nIn terms of the number of different items, there are by far the most music CDs, followed by books and videos.\nThere is a surprisingly small number of DVD titles.\nOn the other hand, DVDs account for more than half of all recommendations in the dataset.\nThe DVD network is also the most dense, having about 10 recommendations per node, while books and music have about 2 recommendations per node and videos have only a bit more than 1 recommendation per node.\nMusic recommendations reached about the same number of people as DVDs but used over 5 times fewer recommendations to achieve the same coverage of the nodes.\nBook recommendations reached by far the most people--2.8 million.\nNotice that all networks have a
very small number of unique edges.\nFor books, videos and music the number of unique edges is smaller than the number of nodes--this suggests that the networks are highly disconnected [5].\nFigure 1 (a) shows the fraction of nodes in the largest weakly connected component over time.\nNotice the component is very small.\nEven if we compose a network using all the recommendations in the dataset, the largest connected component contains less than 2.5% (100,420) of the nodes, and the second largest component has only 600 nodes.\nStill, some smaller communities, numbering in the tens of thousands of purchasers of DVDs in categories such as westerns, classics and Japanese animated films (anime), had connected components spanning about 20% of their members.\nThe inset in figure 1 (a) shows the growth of the customer base over time.\nSurprisingly, it was linear, adding on average 165,000 new users each month, which is an indication that the service itself was not spreading epidemically.\nFurther evidence of non-viral spread is provided by the relatively high percentage (94%) of users who made their first recommendation without having previously received one.\nReturning to table 1: given the total number of recommendations e and purchases (bb + be) influenced by recommendations, we can estimate how many recommendations need to be independently sent over the network to induce a new purchase.\nUsing this metric, books have the most influential recommendations, followed by DVDs and music.\nFor books, one out of 69 recommendations resulted in a purchase.\nFor DVDs this increases to 108 recommendations per purchase, and it increases further to 136 for music and 203 for video.\nEven with these simple counts we can make the first few observations.\nIt seems that some people got quite heavily involved in the recommendation program, and that they tended to recommend a large number of products to the same set of friends (since the number of unique edges is so small).\nThis shows that people tend to buy
more DVDs and also like to recommend them to their friends, while they seem to be more conservative with books.\nOne possible reason is that a book is a bigger time investment than a DVD: one usually needs several days to read a book, while a DVD can be viewed in a single evening.\nOne external factor which may be affecting the recommendation patterns for DVDs is the existence of referral websites (www.dvdtalk.com).\nOn these websites, people who want to buy a DVD and get a discount would ask for recommendations.\nThis way recommendations would be made between people who don't really know each other but rather have an economic incentive to cooperate.\nWe were not able to find similar referral sharing sites for books or CDs.\n2.3 Forward recommendations\nNot all people who make a purchase also decide to give recommendations.\nWe therefore estimate what fraction of people who purchase also decide to recommend the product forward.\nTo obtain this information we can only use the nodes with purchases that resulted in a discount.\nThe last 3 columns of table 1 show that only about a third of the people who purchase also recommend the product forward.\nThe ratio of forward recommendations is much higher for DVDs than for other kinds of products.\nVideos also have a higher ratio of forward recommendations, while books have the lowest.\nThis shows that people are most keen on recommending movies, while being more conservative when recommending books and music.\nFigure 1 (b) shows the cumulative out-degree distribution, that is, the number of people who sent out at least kp recommendations for a product.\nIt shows that the deeper an individual is in the cascade, if they choose to make recommendations, they tend to recommend to a greater number of people on average (the distribution has a higher variance).\nThis effect is probably due to only very heavily recommended products producing large enough cascades to reach a certain depth.\nWe also observe that the probability of an individual making
a recommendation at all (which can only occur if they make a purchase) declines after an initial increase as one gets deeper into the cascade.\n2.4 Identifying cascades\nAs customers continue forwarding recommendations, they contribute to the formation of cascades.\nIn order to identify cascades, i.e. the \"causal\" propagation of recommendations, we track successful recommendations as they influence purchases and further recommendations.\nWe define a recommendation to be successful if it reached a node before its first purchase.\nWe consider only the first purchase of an item, because there are many cases when a person made multiple purchases of the same product, and in between those purchases she may have received new recommendations.\nIn this case one cannot conclude that recommendations following the first purchase influenced the later purchases.\nFigure 1 (b) legend: level 0 \u03b3 = 2.6, level 1 \u03b3 = 2.0, level 2 \u03b3 = 1.5, level 3 \u03b3 = 1.2, level 4 \u03b3 = 1.2.\nTable 1: Product group recommendation statistics.\np: number of products, n: number of nodes, e: number of edges (recommendations), eu: number of unique edges, bb: number of buy bits, be: number of buy edges.\nLast 3 columns of the table: fraction of people that purchase and also recommend forward.\nPurchases: number of nodes that purchased.\nForward: nodes that purchased and then also recommended the product.\nFigure 2: Examples of two product recommendation networks: (a) First aid study guide First Aid for the\nFigure 3: Distribution of the number of recommendations and number of purchases made by a node.\nEach cascade is a network consisting of customers (nodes) who purchased the same product as a result of each other's recommendations (edges).\nWe delete late recommendations--all incoming recommendations that happened after the first purchase of the product.\nThis way we make the network time increasing or causal--for each node all incoming edges (recommendations) occurred before all outgoing edges.\nNow
each connected component represents a time-obeying propagation of recommendations.\nFigure 2 shows two typical product recommendation networks: (a) a medical study guide and (b) a Japanese graphic novel.\nThroughout the dataset we observe very similar patterns.\nMost product recommendation networks consist of a large number of small disconnected components where we do not observe cascades.\nThen there is usually a small number of relatively small components with recommendations successfully propagating.\nThis observation is reflected in the heavy tailed distribution of cascade sizes (see figure 4), having a power-law exponent close to 1 for DVDs in particular.\nWe also notice bursts of recommendations (figure 2 (b)).\nSome nodes recommend to many friends, forming a star-like pattern.\nFigure 3 shows the distribution of the recommendations and purchases made by a single node in the recommendation network.\nNotice the power-law distributions and long flat tails.\nThe most active person made 83,729 recommendations and purchased 4,416 different items.\nFinally, we also sometimes observe \"collisions\", where nodes receive recommendations from two or more sources.\nA detailed enumeration and analysis of observed topological cascade patterns for this dataset is made in [10].\n2.5 The recommendation propagation model\nA simple model can help explain how the wide variance we observe in the number of recommendations made by individuals can lead to power-laws in cascade sizes (figure 4).\nThe model assumes that each recipient of a recommendation will forward it to others if its value exceeds an arbitrary threshold that the individual sets for herself.\nSince exceeding this value is a probabilistic event, let's call pt the probability that at time step t the recommendation exceeds the threshold.\nFigure 4: Size distribution of cascades (size of cascade vs. count).\nBold line presents a power-law fit.\nIn that case the number of recommendations Nt+1 at time (t + 1) is given in terms of the number of recommendations at an earlier time by Nt+1 = Nt (1 + pt), where the probability pt is defined over the unit interval.\nNotice that, because of the probabilistic nature of the threshold being exceeded, one can only compute the final distribution of recommendation chain lengths, which we now proceed to do.\nSubtracting from both sides of this equation the term Nt and dividing by it we obtain (Nt+1 - Nt) \/ Nt = pt.\nSumming both sides from the initial time to some very large time T and assuming that for long times the numerator is smaller than the denominator (a reasonable assumption) we get \u222b dN \/ N = \u03a3t pt.\nThe left hand integral is just ln (N), and the right hand side is a sum of random variables, which in the limit of a very large uncorrelated number of recommendations is normally distributed (central limit theorem).\nThis means that the logarithm of the number of messages is normally distributed, or equivalently, that the number of messages passed is log-normally distributed.\nIn other words the probability density for N is given by P(N) = exp(-(ln N - \u03bc)\u00b2 \/ (2\u03c3\u00b2)) \/ (N \u221a(2\u03c0\u03c3\u00b2)), which, for large variances, describes a behavior whereby the typical number of recommendations is small (the mode of the distribution) but there are unlikely events of large chains of recommendations which are also observable.\nFurthermore, for large variances, the lognormal distribution can behave like a power law for a range of values.\nIn order to see this, take the logarithms on both sides of the equation (equivalent to a log-log plot) and one obtains ln P(N) = -ln N - (1\/2) ln (2\u03c0\u03c3\u00b2) - (ln N - \u03bc)\u00b2 \/ (2\u03c3\u00b2).\nSo, for large \u03c3, the last term of the right hand side goes to zero, and since the second term is a constant one obtains a power-law behavior with exponent value of minus one.\nThere are other models which produce power-law distributions of cascade sizes, but we present ours for its simplicity, since it does not depend on network topology [6] or critical thresholds in the
probability of a recommendation being accepted [18].\n3.\nSUCCESS OF RECOMMENDATIONS\nSo far we have only looked at the aggregate statistics of the recommendation network.\nNext, we ask questions about the effectiveness of recommendations in the recommendation network itself.\nFirst, we analyze the probability of purchasing as one gets more and more recommendations.\nNext, we measure recommendation effectiveness as two people exchange more and more recommendations.\nLastly, we observe the recommendation network from the perspective of the sender of the recommendation.\nDoes a node that makes more recommendations also influence more purchases?\n3.1 Probability of buying versus number of incoming recommendations\nFirst, we examine how the probability of purchasing changes as one gets more and more recommendations.\nOne would expect that a person is more likely to buy a product if she gets more recommendations.\nOn the other hand, one would also think that there is a saturation point--if a person hasn't bought a product after a number of recommendations, they are not likely to change their minds after receiving even more of them.\nSo, how many recommendations are too many?\nFigure 5 shows the probability of purchasing a product as a function of the number of incoming recommendations on the product.\nAs we move to higher numbers of incoming recommendations, the number of observations drops rapidly.\nFor example, there were 5 million cases with 1 incoming recommendation on a book, and only 58 cases where a person got 20 incoming recommendations on a particular book.\nThe maximum was 30 incoming recommendations.\nFor these reasons we cut off the plot when the number of observations becomes too small and the error bars too large.\nFigure 5 (a) shows that, overall, book recommendations are rarely followed.\nEven more surprisingly, as more and more recommendations are received, their success decreases.\nWe observe a peak in the probability of buying at 2 incoming recommendations and
then a slow drop.\nFor DVDs (figure 5 (b)) we observe a saturation around 10 incoming recommendations.\nThis means that after a person gets 10 recommendations on a particular DVD, they become immune to them--their probability of buying does not increase anymore.\nThe number of observations is 2.5 million at 1 incoming recommendation and 100 at 60 incoming recommendations.\nThe maximal number of received recommendations is 172 (and that person did not buy).\nFigure 5: Probability of buying a book (DVD) given a number of incoming recommendations.\n3.2 Success of subsequent recommendations\nNext, we analyze how the effectiveness of recommendations changes as two persons exchange more and more recommendations.\nA large number of exchanged recommendations can be a sign of trust and influence, but a sender of too many recommendations can be perceived as a spammer.\nA person who recommends only a few products will have her friends' attention, but one who floods her friends with all sorts of recommendations will start to lose her influence.\nWe measure the effectiveness of recommendations as a function of the total number of previously exchanged recommendations between the two nodes.\nWe construct the experiment in the following way.\nFor every recommendation r on some product p between nodes u and v, we first determine how many recommendations were exchanged between u and v before recommendation r.\nThen we check whether v, the recipient of the recommendation, purchased p after recommendation r arrived.\nFor the experiment we consider only node pairs (u, v) where there were at least a total of 10 recommendations sent from u to v.\nWe perform the experiment using only recommendations from the same product group.\nFigure 6 shows the probability of buying as a function of the total number of exchanged recommendations between two persons up to that point.\nFor books we observe that the effectiveness of recommendations remains about constant up to 3 exchanged recommendations.\nAs the number of exchanged recommendations increases, the probability of buying starts to decrease to about half of the original value and then levels off.\nFor DVDs we observe an immediate and consistent drop.\nThis experiment shows that recommendations start to lose their effect after more than two or three are passed between two people.\nWe performed the experiment also for video and music, but the number of observations was too low and the measurements were noisy.\nFigure 6: The effectiveness of recommendations with the total number of exchanged recommendations.\n3.3 Success of outgoing recommendations\nIn previous sections we examined the data from the viewpoint of the receiver of the recommendation.\nNow we look from the viewpoint of the sender.\nThe two interesting questions are: how does the probability of getting a 10% credit change with the number of outgoing recommendations; and given a number of outgoing recommendations, how many purchases will they influence?\nOne would expect that recommendations would be most effective when sent to the right subset of friends.\nIf one is very selective and recommends to too few friends, then the chances of success are slim.\nOn the other hand, recommending to everyone and spamming them with recommendations may have limited returns as well.\nThe top row of figure 7 shows how the average number of purchases changes with the number of outgoing recommendations.\nFor books, music, and videos the number of purchases soon saturates: it grows fast up to around 10 outgoing recommendations and then the trend either slows or starts to drop.\nDVDs exhibit different behavior, with the expected number of purchases increasing throughout.\nBut if we plot the probability of getting a 10% credit as a function of the number of outgoing recommendations, as in the bottom row of figure 7, we see that the success of DVD recommendations saturates as well, while books, videos and music have qualitatively similar trends.\nThe
difference in the curves for DVD recommendations points to the presence of collisions in the dense DVD network, which has 10 recommendations per node and around 400 per product--an order of magnitude more than in other product groups.\nThis means that many different individuals are recommending to the same person, and after that person makes a purchase, even though all of them made a \"successful recommendation\" by our definition, only one of them receives a credit.\nFigure 7: Top row: Number of resulting purchases given a number of outgoing recommendations.\nBottom row: Probability of getting a credit given a number of outgoing recommendations.\nFigure 8: The time between the recommendation and the actual purchase.\nWe use all purchases.\n4.\nTIMING OF RECOMMENDATIONS AND PURCHASES\nThe recommendation referral program encourages people to purchase as soon as possible after they get a recommendation, since this maximizes the probability of getting a discount.\nWe study the time lag between the recommendation and the purchase of different product groups, effectively how long it takes a person to receive a recommendation, consider it, and act on it.\nWe present the histograms of the \"thinking time\", i.e.
the difference between the time of purchase and the time the last recommendation was received for the product prior to the purchase (figure 8).\nWe use a bin size of 1 day.\nAround 35%-40% of book and DVD purchases occurred within a day after the last recommendation was received.\nFor DVDs, 16% of purchases occur more than a week after the last recommendation, while this drops to 10% for books.\nIn contrast, if we consider the lag between the purchase and the first recommendation, only 23% of DVD purchases are made within a day, while the proportion stays the same for books.\nThis reflects a greater likelihood for a person to receive multiple recommendations for a DVD than for a book.\nAt the same time, DVD recommenders tend to send out many more recommendations, only one of which can result in a discount.\nIndividuals then often miss their chance of a discount, which is reflected in the high ratio (78%) of recommended DVD purchases that did not get a discount (see table 1, columns bb and be).\nIn contrast, for books, only 21% of purchases through recommendations did not receive a discount.\nWe also measure the variation in intensity by time of day for three different activities in the recommendation system: recommendations (figure 9 (a)), all purchases (figure 9 (b)), and finally just the purchases which resulted in a discount (figure 9 (c)).\nEach is given as a total count by hour of day.\nThe recommendations and purchases follow the same pattern.\nThe only small difference is that purchases reach a sharper peak in the afternoon (after 3pm Pacific Time, 6pm Eastern Time).\nThe purchases that resulted in a discount look like a negative image of the first two figures.\nThis means that most of the discounted purchases happened in the morning when the traffic (number of purchases\/recommendations) on the retailer's website was low.\nThis makes a lot of sense since most of the recommendations happened during the day, and if the person wanted to get the discount by being the first
one to purchase, she had the highest chances when the traffic on the website was the lowest.\n5.\nRECOMMENDATION EFFECTIVENESS BY BOOK CATEGORY\nSocial networks are a product of the contexts that bring people together.\nSome contexts result in social ties that are more effective at conducting an action.\nFor example, in small world experiments, where participants attempt to reach a target individual through their chain of acquaintances, profession trumped geography, which in turn was more useful in locating a target than attributes such as religion or hobbies [9, 17].\nIn the context of product recommendations, we can ask whether a recommendation for a work of fiction, which may be made by any friend or neighbor, is more or less influential than a recommendation for a technical book, which may be made by a colleague at work or school.\nFigure 9: Time of day for purchases and recommendations.\n(a) shows the distribution of recommendations over the day, (b) shows all purchases and (c) shows only purchases that resulted in getting a discount.\nTable 2 shows recommendation trends for all top level book categories by subject.\nAn analysis of other product types can be found in the extended version of the paper.\nFor clarity, we group the results by 4 different category types: fiction, personal\/leisure, professional\/technical, and nonfiction\/other.\nFiction encompasses categories such as Sci-Fi and Romance, as well as children's and young adult books.\nPersonal\/Leisure encompasses everything from gardening, photography and cooking to health and religion.\nFirst, we compare the relative number of recommendations to reviews posted on the site (column cav\/rp1 of table 2).\nSurprisingly, we find that the number of people making personal recommendations was only a few times greater than the number of people posting a public review on the website.\nWe observe that fiction books have relatively few recommendations compared to the number of reviews, while professional and
technical books have more recommendations than reviews.\nThis could reflect several factors.\nOne is that people feel more confident reviewing fiction than technical books.\nAnother is that they hesitate to recommend a work of fiction before reading it themselves, since the recommendation must be made at the point of purchase.\nYet another explanation is that the median price of a work of fiction is lower than that of a technical book.\nThis means that the discount received for successfully recommending a mystery novel or thriller is lower, and hence people have less incentive to send recommendations.\nNext, we measure the per category efficacy of recommendations by observing the ratio of the number of purchases occurring within a week following a recommendation to the number of recommenders for each book subject category (column b of table 2).\nOn average, only 2% of the recommenders of a book received a discount because their recommendation was accepted, and another 1% made a recommendation that resulted in a purchase, but not a discount.\nWe observe marked differences in the response to recommendations for different categories of books.\nFiction in general is not very effectively recommended, with only around 2% of recommenders succeeding.\nThe efficacy was a bit higher (around 3%) for non-fiction books dealing with personal and leisure pursuits, but was significantly higher in the professional and technical category.\nMedical books have nearly double the average rate of recommendation acceptance.\nThis could in part be attributed to the higher median price of medical books and technical books in general.\nAs we will see in Section 6, a higher product price increases the chance that a recommendation will be accepted.\nRecommendations are also more likely to be accepted for certain religious categories: 4.3% for Christian living and theology and 4.8% for Bibles.\nIn contrast, books not tied to organized religions, such as ones on the subject of new age (2.5%) and
occult (2.2%) spirituality, have lower recommendation effectiveness.\nThese results raise the interesting possibility that individuals have greater influence over one another in an organized context, for example through a professional contact or a religious one.\nThere are exceptions of course.\nFor example, Japanese anime DVDs have a strong following in the US, and this is reflected in their frequency and success in recommendations.\nAnother example is that of gardening.\nIn general, recommendations for books relating to gardening have only a modest chance of being accepted, which agrees with the individual prerogative that accompanies this hobby.\nAt the same time, orchid cultivation can be a highly organized and social activity, with frequent \"shows\" and online communities devoted entirely to orchids.\nPerhaps because of this, the rate of acceptance of orchid book recommendations is twice as high as those for books on vegetable or tomato growing.\n6.\nMODELING THE RECOMMENDATION SUCCESS\nWe have examined the properties of the recommendation network in relation to viral marketing, but one question still remains: what determines a product's viral marketing success?\nWe present a model which characterizes product categories for which recommendations are more likely to be accepted.\nWe use a regression of the following product attributes to correlate them with recommendation success:\n\u2022 r: number of recommendations \u2022 ns: number of senders of recommendations \u2022 nr: number of recipients of recommendations \u2022 p: price of the product \u2022 v: number of reviews of the product \u2022 t: average product rating\nTable 2: Statistics by book category: np: number of products in category, n: number of customers, cc: percentage of customers in the largest connected component, rp1: av. #reviews in 2001--2003, rp2: av. #reviews, cav\/rp1: ratio of recommenders to reviewers, pm: median price, b: ratio of the number of purchases resulting from a recommendation to the number of
recommenders.\nThe symbol ** denotes statistical significance at the 0.01 level, * at the 0.05 level.\nFrom the original set of half a million products, we compute a success rate s for the 48,218 products that had at least one purchase made through a recommendation and for which a price was given.\nIn section 5 we defined the recommendation success rate s as the ratio of the total number of purchases made through recommendations to the number of senders of the recommendations.\nWe decided to use this kind of normalization, rather than normalizing by the total number of recommendations sent, in order not to penalize communities where a few individuals send out many recommendations (figure 2 (b)).\nSince the variables follow a heavy tailed distribution, we use the following model: ln (s) = \u03a3i \u03b2i ln (xi) + \u03b5i, where the xi are the product attributes (as described on the previous page), and \u03b5i is random error.\nWe fit the model using least squares and obtain the coefficients \u03b2i shown on table 3.\nWith the exception of the average rating, they are all significant.\nThe only two attributes with a positive coefficient are the number of recommendations and the price.\nThis shows that more expensive and more recommended products have a higher success rate.\nThe number of senders and receivers have large negative coefficients, showing that successfully recommended products are more likely to be not so widely popular.\nThey have relatively many recommendations with a small number of senders and receivers, which suggests a very dense recommendation network where lots of recommendations were exchanged between a small community of people.\nThese insights could be of use to marketers--personal recommendations are most effective in small, densely connected communities enjoying expensive products.\nTable 3: Regression using the log of the recommendation success rate, ln (s), as the dependent variable.\nFor each coefficient we provide the standard error and the statistical significance level (**:0.01, *:0.1).\n7.\nDISCUSSION
AND CONCLUSION\nAlthough the retailer may have hoped to boost its revenues through viral marketing, the additional purchases that resulted from recommendations are just a drop in the bucket of sales that occur through the website.\nNevertheless, we were able to obtain a number of interesting insights into how viral marketing works that challenge common assumptions made in epidemic and rumor propagation modeling.\nFirstly, it is frequently assumed in epidemic models that individuals have an equal probability of being infected every time they interact.\nContrary to this, we observe that the probability of infection decreases with repeated interaction.\nMarketers should take heed that providing excessive incentives for customers to recommend products could backfire by weakening the credibility of the very same links they are trying to take advantage of.\nTraditional epidemic and innovation diffusion models also often assume that individuals either have a constant probability of \"converting\" every time they interact with an infected individual, or that they convert once the fraction of their contacts who are infected exceeds a threshold.\nIn both cases, an increasing number of infected contacts results in an increased likelihood of infection.\nInstead, we find that the probability of purchasing a product increases with the number of recommendations received, but quickly saturates to a constant and relatively low probability.\nThis means individuals are often impervious to the recommendations of their friends, and resist buying items that they do not want.\nIn network-based epidemic models, extremely highly connected individuals play a very important role.\nFor example, in needle sharing and sexual contact networks these nodes become the \"super-spreaders\" by infecting a large number of people.\nBut these models assume that a high degree node has as much of a probability of infecting each of its neighbors as a low degree node does.\nIn contrast, we find that there are
limits to how influential high-degree nodes are in the recommendation network.\nAs a person sends out more and more recommendations past a certain number for a product, the success per recommendation declines.\nThis would seem to indicate that individuals have influence over a few of their friends, but not everybody they know.\nWe also presented a simple stochastic model that allows for the presence of relatively large cascades for a few products, but reflects well the general tendency of recommendation chains to terminate after just a small number of steps.\nWe saw that the characteristics of product reviews and effectiveness of recommendations vary by category and price, with more successful recommendations being made on technical or religious books, which presumably are placed in the social context of a school, workplace or place of worship.\nFinally, we presented a model which shows that smaller and more tightly knit groups tend to be more conducive to viral marketing.\nSo despite the relative ineffectiveness of the viral marketing program in general, we found a number of new insights which we hope will have general applicability to marketing strategies and to future models of viral information spread.","keyphrases":["viral market","viral market","recommend network","product","stochast model","purchas","price categori","advertis","consum","direct multi graph","probabl","connect individu","e-commerc","recommend system"],"prmu":["P","P","P","P","P","P","P","U","U","U","U","U","U","M"]} {"id":"C-66","title":"Heuristics-Based Scheduling of Composite Web Service Workloads","abstract":"Web services can be aggregated to create composite workflows that provide streamlined functionality for human users or other systems.
Although industry standards and recent research have sought to define best practices and to improve end-to-end workflow composition, one area that has not fully been explored is the scheduling of a workflow's web service requests to actual service provisioning in a multi-tiered, multi-organisation environment. This issue is relevant to modern business scenarios where business processes within a workflow must complete within QoS-defined limits. Because these business processes are web service consumers, service requests must be mapped and scheduled across multiple web service providers, each with its own negotiated service level agreement. In this paper we provide heuristics for scheduling service requests from multiple business process workflows to web service providers such that a business value metric across all workflows is maximised. We show that a genetic search algorithm is appropriate to perform this scheduling, and through experimentation we show that our algorithm scales well up to a thousand workflows and produces better mappings than traditional approaches.","lvl-1":"Heuristics-Based Scheduling of Composite Web Service Workloads Thomas Phan Wen-Syan Li IBM Almaden Research Center 650 Harry Rd.\nSan Jose, CA 95120 {phantom,wsl}@us.\nibm.com ABSTRACT Web services can be aggregated to create composite workflows that provide streamlined functionality for human users or other systems.\nAlthough industry standards and recent research have sought to define best practices and to improve end-to-end workflow composition, one area that has not fully been explored is the scheduling of a workflow's web service requests to actual service provisioning in a multi-tiered, multi-organisation environment.\nThis issue is relevant to modern business scenarios where business processes within a workflow must complete within QoS-defined limits.\nBecause these business processes are web service consumers, service requests must be mapped and scheduled across multiple web
service providers, each with its own negotiated service level agreement.\nIn this paper we provide heuristics for scheduling service requests from multiple business process workflows to web service providers such that a business value metric across all workflows is maximised.\nWe show that a genetic search algorithm is appropriate to perform this scheduling, and through experimentation we show that our algorithm scales well up to a thousand workflows and produces better mappings than traditional approaches.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems-distributed applications; D.2.8 [Software Engineering]: Metrics-complexity measures, performance measures 1.\nINTRODUCTION Web services can be composed into workflows to provide streamlined end-to-end functionality for human users or other systems.\nAlthough previous research efforts have looked at ways to intelligently automate the composition of web services into workflows (e.g. [1, 9]), an important remaining problem is the assignment of web service requests to the underlying web service providers in a multi-tiered runtime scenario within constraints.\nIn this paper we address this scheduling problem and examine means to manage a large number of business process workflows in a scalable manner.\nThe problem of scheduling web service requests to providers is relevant to modern business domains that depend on multi-tiered service provisioning.\nConsider the example shown in Figure 1 that illustrates our problem space.\nWorkflows comprise multiple related business processes that are web service consumers; here we assume that the workflows represent requested service from customers or automated systems and that the workflow has already been composed with an existing choreography toolkit.\nThese workflows are then submitted to a portal (not shown) that acts as a scheduling agent between the web service consumers and the web service providers.\nIn this example, a 
workflow could represent the actions needed to instantiate a vacation itinerary, where one business process requests booking an airline ticket, another business process requests a hotel room, and so forth.\nEach of these requests targets a particular service type (e.g. airline reservations, hotel reservations, car reservations, etc.), and for each service type, there are multiple instances of service providers that publish a web service interface.\nAn important challenge is that the workflows must meet some quality-of-service (QoS) metric, such as end-to-end completion time of all its business processes, and that meeting or failing this goal results in the assignment of a quantitative business value metric for the workflow; intuitively, it is desired that all workflows meet their respective QoS goals.\nWe further leverage the notion that QoS service agreements are generally agreed-upon between the web service providers and the scheduling agent such that the providers advertise some level of guaranteed QoS to the scheduler based upon runtime conditions such as turnaround time and maximum available concurrency.\nThe resulting problem is then to schedule and assign the business processes' requests for service types to one of the service providers for that type.\nThe scheduling must be done such that the aggregate business value across all the workflows is maximised.\nIn Section 3 we state the scenario as a combinatorial problem and utilise a genetic search algorithm [5] to find the best assignment of web service requests to providers.\nThis approach converges towards an assignment that maximises the overall business value for all the workflows.\nIn Section 4 we show through experimentation that this search heuristic finds better assignments than other algorithms (greedy, round-robin, and proportional).\nFurther, this approach allows us to scale the number of simultaneous workflows (up to one thousand workflows in our experiments) and yet still find effective
schedules.\n2.\nRELATED WORK In the context of service assignment and scheduling, [11] maps web service calls to potential servers using linear programming, but their work is concerned with mapping only single workflows; our principal focus is on scalably scheduling multiple workflows (up to one thousand as we show later) using different business metrics and a search heuristic.\nFigure 1: An example scenario demonstrating the interaction between business processes in workflows and web service providers.\nEach business process accesses a service type and is then mapped to a service provider for that type.\n[10] presents a dynamic provisioning approach that uses both predictive and reactive techniques for multi-tiered Internet application delivery.\nHowever, the provisioning techniques do not consider the challenges faced when there are alternative query execution plans and replicated data sources.\n[8] presents a feedback-based scheduling mechanism for multi-tiered systems with back-end databases, but unlike our work, it assumes a tighter coupling between the various components of the system.\nOur work also builds upon prior scheduling research.\nThe classic job-shop scheduling problem, shown to be NP-complete [4] [3], is similar to ours in that tasks within a job must be scheduled onto machinery (cf.
our scenario is that business processes within a workflow must be scheduled onto web service providers).\nThe salient differences are that the machines can process only one job at a time (we assume servers can multi-task but with degraded performance and a maximum concurrency level), tasks within a job cannot simultaneously run on different machines (we assume business processes can be assigned to any available server), and the principal metric of performance is the makespan, which is the time for the last task among all the jobs to complete (and as we show later, optimising on the makespan is insufficient for scheduling the business processes, necessitating different metrics).\n3.\nDESIGN In this section we describe our model and discuss how we can find scheduling assignments using a genetic search algorithm.\n3.1 Model We base our model on the simplified scenario shown in Figure 1.\nSpecifically, we assume that users or automated systems request the execution of a workflow.\nThe workflows comprise business processes, each of which makes one web service invocation to a service type.\nFurther, business processes have an ordering in the workflow.\nThe arrangement and execution of the business processes and the data flow between them are all managed by a composition or choreography tool (e.g. [1, 9]).\nAlthough composition languages can use sophisticated flow-control mechanisms such as conditional branches, for simplicity we assume the processes execute sequentially in a given order.\nThis scenario can be naturally extended to more complex relationships that can be expressed in BPEL [7], which defines how business processes interact, messages are exchanged, activities are ordered, and exceptions are handled.\nDue to space constraints, we focus on the problem space presented here and will extend our model to more advanced deployment scenarios in the future.\nEach workflow has a QoS requirement to complete within a specified number of time units (e.g. 
on the order of seconds, as detailed in the Experiments section).\nUpon completion (or failure), the workflow is assigned a business value.\nWe extended this approach further and considered different types of workflow completion in order to model differentiated QoS levels that can be applied by businesses (for example, to provide tiered customer service).\nWe say that a workflow is successful if it completes within its QoS requirement, acceptable if it completes within a constant factor \u03ba of its QoS bound (in our experiments we chose \u03ba=3), or failing if it finishes beyond \u03ba times its QoS bound.\nFor each category, a business value score is assigned to the workflow, with the successful category assigned the highest positive score, followed by acceptable and then failing.\nThe business value point distribution is non-uniform across workflows, further modelling cases where some workflows are of higher priority than others.\nEach service type is implemented by a number of different service providers.\nWe assume that the providers make service level agreements (SLAs) to guarantee a level of performance defined by the completion time for a web service invocation.\nAlthough SLAs can be complex, in this paper we assume for simplicity that the guarantees can take the form of a linear performance degradation under load.\nThis guarantee is defined by several parameters: \u03b1 is the expected completion time (for example, on the order of seconds) if the assigned workload of web service requests is less than or equal to \u03b2, the maximum concurrency, and if the workload is higher than \u03b2, the expected completion for a workload of size \u03c9 is \u03b1 + \u03b3(\u03c9 \u2212 \u03b2) where \u03b3 is a fractional coefficient.\nIn our experiments we vary \u03b1, \u03b2, and \u03b3 with different distributions.\nIdeally, all workflows would be able to finish within their QoS limits and thus maximise the aggregate business value across all
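The linear-degradation SLA and the three-tier scoring just described can be captured in a short model. This is a minimal sketch with function and parameter names of our own choosing; the default point values are illustrative picks from the ranges later listed in Table 1, not values fixed by the paper:

```python
# Minimal sketch of the SLA and business-value model; names and defaults are ours.

def completion_time(alpha, beta, gamma, load):
    """Advertised completion time under the linear-degradation SLA:
    alpha when the workload is within the maximum concurrency beta,
    otherwise alpha + gamma * (load - beta)."""
    if load <= beta:
        return alpha
    return alpha + gamma * (load - beta)

def business_value(elapsed, qos, kappa=3.0, success=50, acceptable=10, failed=-10):
    """Three-tier scoring: 'successful' within the QoS bound, 'acceptable'
    within kappa times the bound, 'failing' otherwise."""
    if elapsed <= qos:
        return success
    if elapsed <= kappa * qos:
        return acceptable
    return failed

# A provider with alpha=2s, beta=5, gamma=0.5 handling 9 concurrent requests:
t = completion_time(2.0, 5, 0.5, 9)   # 2 + 0.5 * (9 - 5) = 4.0 seconds
print(t, business_value(t, qos=3.0))  # prints: 4.0 10  (the 'acceptable' tier)
```

Note how the score is a step function of elapsed time: a scheduler can therefore trade a small overrun on one workflow (still "acceptable") for keeping another "successful".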
workflows.\nHowever, because we model service providers with degrading performance under load, not all workflows will achieve their QoS limit: it may easily be the case that business processes are assigned to providers who are overloaded and cannot complete within the respective workflow's QoS limit.\nThe key research problem, then, is to assign the business processes to the web service providers with the goal of optimising on the aggregate business value of all workflows.\nGiven that the scope of the optimisation is the entire set of workflows, the best scheduling assignments may result in some workflows having to fail in order for more workflows to succeed.\nThis intuitive observation suggests that traditional scheduling approaches such as round-robin or proportional assignments will not fare well, which is what we observe and discuss in Section 4.\nOn the other hand, an exhaustive search of all the possible assignments will find the best schedule, but the computational complexity is prohibitively high.\nSuppose there are W workflows with an average of B business processes per workflow.\nFurther, in the worst case each business process requests one service type, for which there are P providers.\nThere are thus W \u00b7 P^B combinations to explore to find the optimal assignments of business processes to providers.\nEven for small configurations (e.g.
W=10, B=5, P=10), the computational time for exhaustive search is significant, and in our work we look to scale these parameters.\nIn the next subsection, we discuss how a genetic search algorithm can be used to converge toward the optimum scheduling assignments.\n3.2 Genetic algorithm Given an exponential search space of business process assignments to web service providers, the problem is to find the optimal assignment that produces the overall highest aggregate business value across all workflows.\nTo explore the solution space, we use a genetic algorithm (GA) search heuristic that simulates Darwinian natural selection by having members of a population compete to survive in order to pass their genetic chromosomes onto the next generation; after successive generations, there is a tendency for the chromosomes to converge toward the best combination [5] [6].\nAlthough other search heuristics exist that can solve optimisation problems (e.g. simulated annealing or steepest-ascent hill climbing), the business process scheduling problem fits well with a GA because potential solutions can be represented in matrix form, allowing us to use prior research in effective GA chromosome recombination to form new members of the population (e.g. [2]).\nFigure 2 (chromosome matrix, columns indexed by service type 0-4): row 0 is [1 2 0 2 1], row 1 is [1 0 1 0 1], row 2 is [1 2 0 0 1].\nFigure 2: An example chromosome representing a scheduling assignment of (workflow, service type) \u2192 service provider.\nEach row represents a workflow, and each column represents a service type.\nFor example, here there are 3 workflows (0 to 2) and 5 service types (0 to 4).\nIn workflow 0, any request for service type 3 goes to provider 2.\nNote that the service provider identifier is within a range limited to its service type (i.e.
its column), so the 2 listed for service type 3 is a different server from server 2 in other columns.\nChromosome representation of a solution.\nIn Figure 2 we show an example chromosome that encodes one scheduling assignment.\nThe representation is a 2-dimensional matrix that maps {workflow, service type} to a service provider.\nFor a business process in workflow i and utilising service type j, the (i, j)th entry in the table is the identifier for the service provider to which the business process is assigned.\nNote that the service provider identifier is within a range limited to its service type.\nGA execution.\nA GA proceeds as follows.\nInitially a random set of chromosomes is created for the population.\nThe chromosomes are evaluated (hashed) to some metric, and the best ones are chosen to be parents.\nIn our problem, the evaluation produces the net business value across all workflows after executing all business processes once they are assigned to their respective service providers according to the mapping in the chromosome.\nThe parents recombine to produce children, simulating sexual crossover, and occasionally a mutation may arise which produces new characteristics that were not available in either parent.\nThe principal idea is that we would like the children to be different from the parents (in order to explore more of the solution space) yet not too different (in order to contain the portions of the chromosome that result in good scheduling assignments).\nNote that finding the global optimum is not guaranteed because the recombination and mutation are stochastic.\nGA recombination and mutation.\nAs mentioned, the chromosomes are 2-dimensional matrices that represent scheduling assignments.\nTo simulate sexual recombination of two chromosomes to produce a new child chromosome, we applied a one-point crossover scheme twice (once along each dimension).\nThe crossover is best explained by analogy to Cartesian space as follows.\nA random point is chosen in 
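The chromosome encoding, the quadrant-style crossover, the uni-chromosome mutation, and the generational loop described in this section can be sketched compactly. This is our own reconstruction, not the authors' code: the matrix dimensions follow Figure 2, the parent/child counts follow Table 1 (with fewer generations for brevity), and the evaluation function is a toy stand-in that merely rewards spreading load, not the paper's business-value computation:

```python
import random

random.seed(1)

W, S = 3, 5   # workflows x service types, as in Figure 2
P = 3         # providers per service type (uniform here for simplicity)

def random_chromosome():
    """A chromosome maps (workflow, service type) -> provider id."""
    return [[random.randrange(P) for _ in range(S)] for _ in range(W)]

def evaluate(chrom):
    """Toy stand-in for the business-value evaluation: penalise squared
    per-provider load, so spreading requests scores higher."""
    loads = {}
    for i in range(W):
        for j in range(S):
            key = (j, chrom[i][j])          # provider ids are scoped per column
            loads[key] = loads.get(key, 0) + 1
    return -sum(v * v for v in loads.values())

def crossover(p1, p2):
    """One-point crossover applied along each dimension: a random pivot (r, c)
    splits the matrix into quadrants; opposite quadrant pairs come from each
    parent, keeping contiguous segments together."""
    r, c = random.randrange(W), random.randrange(S)
    return [[p1[i][j] if (i < r) == (j < c) else p2[i][j] for j in range(S)]
            for i in range(W)]

def mutate(chrom):
    """Uni-chromosome mutation: reassign one random entry to another provider."""
    i, j = random.randrange(W), random.randrange(S)
    chrom[i][j] = random.randrange(P)

def ga(parents=20, children=80, generations=100):
    pop = [random_chromosome() for _ in range(parents + children)]
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        elite = pop[:parents]               # best chromosomes become parents
        offspring = []
        for _ in range(children):
            child = crossover(*random.sample(elite, 2))
            if random.random() < 0.1:       # occasional mutation
                mutate(child)
            offspring.append(child)
        pop = elite + offspring
    return max(pop, key=evaluate)

best = ga()   # under this toy metric, the optimum spreads each column's
              # three workflows across the three providers
```

Because the elite survive each generation unchanged, the best score is monotonically non-decreasing, which is why convergence (though not global optimality) is guaranteed.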
the matrix to be coordinate (0, 0).\nMatrix elements from quadrants II and IV from the first parent and elements from quadrants I and III from the second parent are used to create the new child.\nThis approach follows GA best practices by keeping contiguous chromosome segments together as they are transmitted from parent to child.\nThe uni-chromosome mutation scheme randomly changes one of the service provider assignments to another provider within the available range.\nOther recombination and mutation schemes are an area of research in the GA community, and we look to explore new operators in future work.\nGA evaluation function.\nAn important GA component is the evaluation function.\nGiven a particular chromosome representing one scheduling mapping, the function deterministically calculates the net business value across all workflows.\nThe business processes in each workflow are assigned to service providers, and each provider's completion time is calculated based on the service agreement guarantee using the parameters mentioned in Section 3.1, namely the unloaded completion time \u03b1, the maximum concurrency \u03b2, and a coefficient \u03b3 that controls the linear performance degradation under heavy load.\nNote that the evaluation function can be easily replaced if desired; for example, other evaluation functions can model different service provider guarantees or parallel workflows.\n4.\nEXPERIMENTS AND RESULTS In this section we show the benefit of using our GA-based scheduler.\nBecause we wanted to scale the scenarios up to a large number of workflows (up to 1000 in our experiments), we implemented a simulation program that allowed us to vary parameters and to measure the results with different metrics.\nThe simulator was written in standard C++ and was run on a Linux (Fedora Core) desktop computer running at 2.8 GHz with 1GB of RAM.\nWe compared our algorithm against alternative candidates: \u2022 A well-known round-robin algorithm that assigns each
business process in circular fashion to the service providers for a particular service type.\nThis approach provides the simplest scheme for load-balancing.\n\u2022 A random-proportional algorithm that proportionally assigns business processes to the service providers; that is, for a given service type, the service providers are ranked by their guaranteed completion time, and business processes are assigned proportionally to the providers based on their completion time.\n(We also tried a proportionality scheme based on both the completion times and maximum concurrency but attained the same results, so only the former scheme's results are shown here.)\n\u2022 A strawman greedy algorithm that always assigns business processes to the service provider that has the fastest guaranteed completion time.\nThis algorithm represents a naive approach based on greedy, local observations of each workflow without taking into consideration all workflows.\nIn the experiments that follow, all results were averaged across 20 trials, and to help normalise the effects of randomisation used during the GA, each trial started by reading in pre-initialised data from disk.\nIn Table 1 we list our experimental parameters.\nIn Figure 3 we show the results of running our GA against the three candidate alternatives.\nThe x-axis shows the number of workflows scaled up to 1000, and the y-axis shows the aggregate business value for all workflows.\nAs can be seen, the GA consistently produces the highest business value even as the number of workflows grows; at 1000 workflows, the GA produces a 115% improvement over the next-best alternative.\n(Note that although we are optimising against the business value metric we defined earlier, genetic algorithms are able to converge towards the optimal value of any metric, as long as the evaluation function can consistently measure a chromosome's value with that metric.)\nAs expected, the greedy algorithm performs very poorly because it does the worst job
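The three baseline schedulers can be sketched as follows. This is our own illustrative Python (the paper's simulator was standard C++), and weighting random-proportional assignment by inverse advertised completion time is our reading of the ranking described above:

```python
import random

# Sketch of the three baseline schedulers (names are ours).  Each function
# assigns a list of requests for one service type to provider indices, given
# each provider's advertised completion time alpha.

def round_robin(requests, n_providers):
    """Circular assignment: the simplest load-balancing scheme."""
    return [i % n_providers for i in range(len(requests))]

def random_proportional(requests, alphas):
    """Assign proportionally to advertised speed: a provider advertising half
    the completion time receives, in expectation, twice the requests."""
    weights = [1.0 / a for a in alphas]
    return [random.choices(range(len(alphas)), weights=weights)[0]
            for _ in requests]

def greedy(requests, alphas):
    """Always pick the provider advertising the fastest completion time."""
    best = min(range(len(alphas)), key=alphas.__getitem__)
    return [best] * len(requests)

reqs = list(range(6))
print(round_robin(reqs, 3))           # prints: [0, 1, 2, 0, 1, 2]
print(greedy(reqs, [4.0, 1.0, 2.0]))  # prints: [1, 1, 1, 1, 1, 1]
```

The greedy sketch makes the failure mode discussed below visible: every request for a service type lands on the single "fastest" provider, which then degrades linearly under the accumulated load.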
at balancing load: all business processes for a given service type are assigned to only one server (the one advertised to have the fastest completion time), and as more business processes arrive, the provider's performance degrades linearly.\nThe round-robin scheme is initially outperformed by the random-proportional scheme up to around 120 workflows (as shown in the magnified graph of Figure 4), but as the number of workflows increases, the round-robin scheme consistently wins over random-proportional.\nThe reason is that although the random-proportional scheme assigns business processes to providers proportionally according to the advertised completion times (which is a measure of the power of the service provider), even the best providers will eventually reach a real-world maximum concurrency for the large number of workflows that we are considering.\nFigure 3: Net business value scores of different scheduling algorithms.\nFigure 4: Magnification of the left-most region in Figure 3.\nFor a very large number of workflows, the round-robin scheme is able to better balance the load across all service providers.\nTo better understand the behaviour resulting from the scheduling assignments, we show the workflow completion results in Figures 5, 6, and 7 for 100, 500, and 900 workflows, respectively.\nThese figures show the percentage of workflows that are successful (can complete within their QoS limit), acceptable (can complete within \u03ba=3 times their QoS limit), and
failed (cannot complete within \u03ba=3 times their QoS limit).\nThe GA consistently produces the highest percentage of successful workflows (resulting in higher business values for the aggregate set of workflows).\nFurther, the round-robin scheme produces better results than the random-proportional for a large number of workflows but does not perform as well as the GA.\nIn Figure 8 we graph the makespan resulting from the same experiments above.\nMakespan is a traditional metric from the job scheduling community measuring elapsed time for the last job to complete.\nWhile useful, it does not capture the high-level business value metric that we are optimising against.\nIndeed, the makespan is oblivious to the fact that we provide multiple levels of completion (successful, acceptable, and failed) and assign business value scores accordingly.\nFor completeness, we note that the GA provides the fastest makespan, but it is matched by the round-robin algorithm.\nThe GA produces better business values (as shown in Figure 3) because it is able to search the solution space to find better mappings that produce more successful workflows (as shown in Figures 5 to 7).\nWe also looked at the effect of the scheduling algorithms on balancing the load.\nFigure 9 shows the percentage of service providers that were accessed while the workflows ran.\nAs expected, the greedy algorithm always hits one service provider; on the other hand, the round-robin algorithm is the fastest to spread the business processes.
Table 1: Experimental parameters.\nWorkflows: 5 to 1000.\nBusiness processes per workflow: uniform random, 1 - 10.\nService types: 10.\nService providers per service type: uniform random, 1 - 10.\nWorkflow QoS goal: uniform random, 10 - 30 seconds.\nService provider completion time (\u03b1): uniform random, 1 - 12 seconds.\nService provider maximum concurrency (\u03b2): uniform random, 1 - 12.\nService provider degradation coefficient (\u03b3): uniform random, 0.1 - 0.9.\nBusiness value for successful workflows: uniform random, 10 - 50 points.\nBusiness value for acceptable workflows: uniform random, 0 - 10 points.\nBusiness value for failed workflows: uniform random, -10 - 0 points.\nGA: number of parents: 20.\nGA: number of children: 80.\nGA: number of generations: 1000.\nFigure 5: Workflow behaviour for 100 workflows.\nFigure 6: Workflow behaviour for 500 workflows.\nFigure 7: Workflow behaviour for 900 workflows.\n(Figures 5-7 each show the percentage of workflows that failed, completed but not within QoS (acceptable), and completed within QoS (successful), per scheduling algorithm.)\nFigure 8: Maximum completion time for all workflows.\nThis value is the makespan metric used in traditional scheduling research.\nAlthough useful, the makespan does not take into consideration the business value scoring in our problem domain.\nFigure 10 is the percentage of accessed service providers (that is, the percentage of service providers represented in Figure 9) that had more assigned business processes than their advertised maximum concurrency.\nFor example, in the greedy algorithm only one service provider is utilised, and this one provider quickly becomes saturated.\nOn the other hand, the random-proportional algorithm uses many service providers, but because business processes are proportionally assigned with
more assignments going to the better providers, there is a tendency for a smaller percentage of providers to become saturated.\nFor completeness, we show the performance of the genetic algorithm itself in Figure 11.\nThe algorithm scales linearly with an increasing number of workflows.\nWe note that the round-robin, random-proportional, and greedy algorithms all finished within 1 second even for the largest workflow configuration.\nHowever, we feel that the benefit of finding much higher business value scores justifies the running time of the GA; further, we would expect the running time to improve with both software tuning and a computer faster than our off-the-shelf PC.\n5.\nCONCLUSION Business processes within workflows can be orchestrated to access web services.\nIn this paper we looked at multi-tiered service provisioning where web service requests to service types can be mapped to different service providers.\nThe resulting problem is that in order to support a very large number of workflows, the assignment of business processes to web service providers must be intelligent.\nWe used a business value metric to measure the behaviour of workflows meeting or failing QoS values, and we optimised our scheduling to maximise the aggregate business value across all workflows.\nFigure 9: The percentage of service providers utilised during workload executions.\nThe Greedy algorithm always hits the one service provider, while the Round Robin algorithm spreads requests evenly across the providers.\nFigure 10: The percentage of service providers that are saturated among those providers who were utilised (that is, percentage of the service providers represented in Figure 9).\nA saturated service provider is one whose
workload is greater than its advertised maximum concurrency.\nFigure 11: Running time of the genetic algorithm.\nSince the solution space of scheduler mappings is exponential, we used a genetic search algorithm to search the space and converge toward the best schedule.\nWith a default configuration for all parameters and using our business value scoring, the GA produced up to 115% business value improvement over the next best algorithm.\nFinally, because a genetic algorithm will converge towards the optimal value using any metric (even other than the business value metric we used), we believe our approach has strong potential for continuing work.\nIn future work, we look to acquire real-world traces of web service instances in order to get better estimates of service agreement guarantees, although we expect that such guarantees between the providers and their consumers are not generally available to the public.\nWe will also look at other QoS metrics such as CPU and I\/O usage.\nFor example, we can analyse transfer costs with varying bandwidth, latency, data size, and data distribution.\nFurther, we hope to improve our genetic algorithm and compare it to more scheduler alternatives.\nFinally, since our work is complementary to existing work in web services choreography (because we rely on pre-configured workflows), we look to integrate our approach with available web service workflow systems expressed in BPEL.\n6.\nREFERENCES [1] A. Ankolekar, et al.\nDAML-S: Semantic Markup For Web Services, In Proc.\nof the Int'l Semantic Web Working Symposium, 2001.\n[2] L.
Davis.\nJob Shop Scheduling with Genetic Algorithms, In Proc.\nof the Int``l Conference on Genetic Algorithms, 1985.\n[3] H.-L.\nFang, P. Ross, and D. Corne.\nA Promising Genetic Algorithm Approach to Job-Shop Scheduling, Rescheduling, and Open-Shop Scheduling Problems , In Proc.\non the 5th Int``l Conference on Genetic Algorithms, 1993.\n[4] M. Gary and D. Johnson.\nComputers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, 1979.\n[5] J. Holland.\nAdaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, MIT Press, 1992.\n[6] D. Goldberg.\nGenetic Algorithms in Search, Optimization and Machine Learning, Kluwer Academic Publishers, 1989.\n[7] Business Processes in a Web Services World, www-128.\nibm.com\/developerworks\/ webservices\/library\/ws-bpelwp\/.\n[8] G. Soundararajan, K. Manassiev, J. Chen, A. Goel, and C. Amza.\nBack-end Databases in Shared Dynamic Content Server Clusters, In Proc.\nof the IEEE Int``l Conference on Autonomic Computing, 2005.\n[9] B. Srivastava and J. Koehler.\nWeb Service Composition Current Solutions and Open Problems, ICAP, 2003.\n[10] B. Urgaonkar, P. Shenoy, A. Chandra, and P. Goyal.\nDynamic Provisioning of Multi-Tier Internet Applications, In Proc.\nof the IEEE Int``l Conference on Autonomic Computing, 2005.\n[11] L. Zeng, B. Benatallah, M. Dumas, J. Kalagnanam, and Q. 
Sheng.\nQuality Driven Web Services Composition, In Proc.\nof the WWW Conference, 2003.\n35","lvl-3":"Heuristics-Based Scheduling of Composite Web Service Workloads\nABSTRACT\nWeb services can be aggregated to create composite workflows that provide streamlined functionality for human users or other systems.\nAlthough industry standards and recent research have sought to define best practices and to improve end-to-end workflow composition, one area that has not fully been explored is the scheduling of a workflow's web service requests to actual service provisioning in a multi-tiered, multi-organisation environment.\nThis issue is relevant to modern business scenarios where business processes within a workflow must complete within QoS-defined limits.\nBecause these business processes are web service consumers, service requests must be mapped and scheduled across multiple web service providers, each with its own negotiated service level agreement.\nIn this paper we provide heuristics for scheduling service requests from multiple business process workflows to web service providers such that a business value metric across all workflows is maximised.\nWe show that a genetic search algorithm is appropriate to perform this scheduling, and through experimentation we show that our algorithm scales well up to a thousand workflows and produces better mappings than traditional approaches.\n1.\nINTRODUCTION\nWeb services can be composed into workflows to provide streamlined end-to-end functionality for human users or other systems.\nAlthough previous research efforts have looked at ways to intelligently automate the composition of web services into workflows (e.g. 
Heuristics-Based Scheduling of Composite Web Service Workloads

ABSTRACT

Web services can be aggregated to create composite workflows that provide streamlined functionality for human users or other systems. Although industry standards and recent research have sought to define best practices and to improve end-to-end workflow composition, one area that has not been fully explored is the scheduling of a workflow's web service requests to actual service provisioning in a multi-tiered, multi-organisation environment. This issue is relevant to modern business scenarios where business processes within a workflow must complete within QoS-defined limits. Because these business processes are web service consumers, service requests must be mapped and scheduled across multiple web service providers, each with its own negotiated service level agreement. In this paper we provide heuristics for scheduling service requests from multiple business process workflows to web service providers such that a business value metric across all workflows is maximised. We show that a genetic search algorithm is appropriate for performing this scheduling, and through experimentation we show that our algorithm scales well up to a thousand workflows and produces better mappings than traditional approaches.

1. INTRODUCTION

Web services can be composed into workflows to provide streamlined end-to-end functionality for human users or other systems. Although previous research efforts have looked at ways to intelligently automate the composition of web services into workflows (e.g. [1, 9]), an important remaining problem is the assignment of web service requests to the underlying web service providers in a multi-tiered runtime scenario within constraints. In this paper we address this scheduling problem and examine means to manage a large number of business process workflows in a scalable manner.

The problem of scheduling web service requests to providers is relevant to modern business domains that depend on multi-tiered service provisioning. Consider the example shown in Figure 1, which illustrates our problem space. Workflows comprise multiple related business processes that are web service consumers; here we assume that the workflows represent requested service from customers or automated systems, and that each workflow has already been composed with an existing choreography toolkit. These workflows are then submitted to a portal (not shown) that acts as a scheduling agent between the web service consumers and the web service providers. In this example, a workflow could represent the actions needed to instantiate a vacation itinerary, where one business process requests booking an airline ticket, another business process requests a hotel room, and so forth. Each of these requests targets a particular service type (e.g. airline reservations, hotel reservations, car reservations, etc.), and for each service type there are multiple instances of service providers that publish a web service interface.

An important challenge is that the workflows must meet some quality-of-service (QoS) metric, such as the end-to-end completion time of all their business processes, and that meeting or failing this goal results in the assignment of a quantitative business value metric for the workflow; intuitively, it is desired that all workflows meet their respective QoS goals. We further leverage the notion that QoS service agreements are generally agreed upon between the web service providers and the scheduling agent, such that the providers advertise some level of guaranteed QoS to the scheduler based upon runtime conditions such as turnaround time and maximum available concurrency. The resulting problem is then to schedule and assign the business processes' requests for service types to one of the service providers for that type. The scheduling must be done such that the aggregate business value across all the workflows is maximised.

In Section 3 we state the scenario as a combinatorial problem and utilise a genetic search algorithm [5] to find the best assignment of web service requests to providers. This approach converges towards an assignment that maximises the overall business value for all the workflows. In Section 4 we show through experimentation that this search heuristic finds better assignments than other algorithms (greedy, round-robin, and proportional). Further, this approach allows us to scale the number of simultaneous workflows (up to one thousand workflows in our experiments) and yet still find effective schedules.

2. RELATED WORK

In the context of service assignment and scheduling, [11] maps web service calls to potential servers using linear programming, but that work is concerned with mapping only single workflows; our principal focus is on scalably scheduling multiple workflows (up to one thousand, as we show later) using different business metrics and a search heuristic.

Figure 1: An example scenario demonstrating the interaction between business processes in workflows and web service providers. Each business process accesses a service type and is then mapped to a service provider for that type.

[10] presents a dynamic provisioning approach that uses both predictive and reactive techniques for multi-tiered Internet application delivery. However, the provisioning techniques do not consider the challenges faced when there are alternative query execution plans and replicated data sources. [8] presents a feedback-based scheduling mechanism for multi-tiered systems with back-end databases, but unlike our work, it assumes a tighter coupling between the various components of the system.

Our work also builds upon prior scheduling research. The classic job-shop scheduling problem, shown to be NP-complete [4] [3], is similar to ours in that tasks within a job must be scheduled onto machinery (cf.
our scenario, in which business processes within a workflow must be scheduled onto web service providers). The salient differences are that the machines can process only one job at a time (we assume servers can multi-task, but with degraded performance and a maximum concurrency level), tasks within a job cannot simultaneously run on different machines (we assume business processes can be assigned to any available server), and the principal metric of performance is the makespan, the time for the last task among all the jobs to complete (as we show later, optimising on the makespan is insufficient for scheduling the business processes, necessitating different metrics).

3. DESIGN

In this section we describe our model and discuss how we can find scheduling assignments using a genetic search algorithm.

3.1 Model

We base our model on the simplified scenario shown in Figure 1. Specifically, we assume that users or automated systems request the execution of a workflow. The workflows comprise business processes, each of which makes one web service invocation to a service type. Further, business processes have an ordering in the workflow. The arrangement and execution of the business processes and the data flow between them are all managed by a composition or choreography tool (e.g. [1, 9]). Although composition languages can use sophisticated flow-control mechanisms such as conditional branches, for simplicity we assume the processes execute sequentially in a given order. This scenario can be naturally extended to more complex relationships that can be expressed in BPEL [7], which defines how business processes interact, messages are exchanged, activities are ordered, and exceptions are handled. Due to space constraints, we focus on the problem space presented here and will extend our model to more advanced deployment scenarios in the future.

Each workflow has a QoS requirement to complete within a specified number of time units (e.g. on the order of seconds, as detailed in the Experiments section). Upon completion (or failure), the workflow is assigned a business value. We extended this approach further and considered different types of workflow completion in order to model differentiated QoS levels that can be applied by businesses (for example, to provide tiered customer service). We say that a workflow is successful if it completes within its QoS requirement, acceptable if it completes within a constant factor κ of its QoS bound (in our experiments we chose κ = 3), or failing if it finishes beyond κ times its QoS bound. For each category, a business value score is assigned to the workflow, with the "successful" category assigned the highest positive score, followed by "acceptable" and then "failing." The business value point distribution is non-uniform across workflows, further modelling cases where some workflows are of higher priority than others.

Each service type is implemented by a number of different service providers. We assume that the providers make service level agreements (SLAs) to guarantee a level of performance, defined by the completion time for a web service invocation. Although SLAs can be complex, in this paper we assume for simplicity that the guarantees take the form of a linear performance degradation under load. This guarantee is defined by several parameters: α is the expected completion time (for example, on the order of seconds) if the assigned workload of web service requests is less than or equal to β, the maximum concurrency; if the workload ω is higher than β, the expected completion time is α + γ(ω − β), where γ is a fractional coefficient. In our experiments we vary α, β, and γ with different distributions.

Ideally, all workflows would be able to finish within their QoS limits and thus maximise the aggregate business value across all workflows.
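The SLA guarantee and the tiered scoring above can be sketched in a few lines of code. This is a minimal illustration: the score values and the parameters in the usage example are assumptions for exposition, not the distributions used in our experiments.

```python
def expected_completion(alpha, beta, gamma, load):
    """SLA guarantee: completion time is alpha up to the advertised
    maximum concurrency beta, then degrades linearly with slope gamma."""
    if load <= beta:
        return alpha
    return alpha + gamma * (load - beta)

def business_value(completion, qos, scores=(10, 3, -5), kappa=3):
    """Tiered scoring (scores are illustrative): 'successful' within the
    QoS bound, 'acceptable' within kappa times the bound, else 'failing'."""
    if completion <= qos:
        return scores[0]          # successful
    if completion <= kappa * qos:
        return scores[1]          # acceptable
    return scores[2]              # failing
```

For example, a provider with α = 2, β = 10, and γ = 0.5 that is assigned 14 concurrent requests has an expected completion time of 2 + 0.5 · (14 − 10) = 4 time units, which would be "successful" against a QoS bound of 5.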
However, because we model service providers with degrading performance under load, not all workflows will achieve their QoS limit: it may easily be the case that business processes are assigned to providers that are overloaded and cannot complete within the respective workflow's QoS limit. The key research problem, then, is to assign the business processes to the web service providers with the goal of optimising the aggregate business value of all workflows. Given that the scope of the optimisation is the entire set of workflows, the best scheduling assignments may require some workflows to fail in order for more workflows to succeed. This intuitive observation suggests that traditional scheduling approaches such as round-robin or proportional assignment will not fare well, which is what we observe and discuss in Section 4. On the other hand, an exhaustive search of all the possible assignments will find the best schedule, but its computational complexity is prohibitively high. Suppose there are W workflows with an average of B business processes per workflow. Further, in the worst case each business process requests one service type, for which there are P providers. There are thus W · P^B combinations to explore to find the optimal assignment of business processes to providers. Even for small configurations (e.g. W = 10, B = 5, P = 10), the computational time for exhaustive search is significant, and in our work we look to scale these parameters. In the next subsection, we discuss how a genetic search algorithm can be used to converge toward the optimum scheduling assignments.

3.2 Genetic algorithm

Given an exponential search space of business process assignments to web service providers, the problem is to find the assignment that produces the overall highest aggregate business value across all workflows. To explore the solution space, we use a genetic algorithm (GA) search heuristic that simulates Darwinian natural selection by having members of a population compete to survive in order to pass their genetic chromosomes on to the next generation; after successive generations, there is a tendency for the chromosomes to converge toward the best combination [5] [6]. Although other search heuristics exist that can solve optimisation problems (e.g. simulated annealing or steepest-ascent hill-climbing), the business process scheduling problem fits well with a GA because potential solutions can be represented in matrix form, which allows us to use prior research in effective GA chromosome recombination to form new members of the population (e.g. [2]).

Figure 2: An example chromosome representing a scheduling assignment of (workflow, service type) → service provider. Each row represents a workflow, and each column represents a service type. For example, here there are 3 workflows (0 to 2) and 5 service types (0 to 4). In workflow 0, any request for service type 3 goes to provider 2. Note that the service provider identifier is within a range limited to its service type (i.e. its column), so the "2" listed for service type 3 is a different server from server "2" in other columns.

Chromosome representation of a solution. In Figure 2 we show an example chromosome that encodes one scheduling assignment. The representation is a 2-dimensional matrix that maps {workflow, service type} to a service provider. For a business process in workflow i utilising service type j, the (i, j)th entry in the table is the identifier of the service provider to which the business process is assigned. Note that the service provider identifier is within a range limited to its service type.

GA execution. A GA proceeds as follows. Initially, a random set of chromosomes is created for the population. The chromosomes are evaluated (hashed) to some metric, and the best ones are chosen to be parents. In our problem, the evaluation produces the net business value across all workflows after executing all business processes once they are assigned to their respective service providers according to the mapping in the chromosome. The parents recombine to produce children, simulating sexual crossover, and occasionally a mutation may arise which produces new characteristics that were not available in either parent. The principal idea is that we would like the children to be different from the parents (in order to explore more of the solution space) yet not too different (in order to retain the portions of the chromosome that result in good scheduling assignments). Note that finding the global optimum is not guaranteed because the recombination and mutation are stochastic.

GA recombination and mutation. As mentioned, the chromosomes are 2-dimensional matrices that represent scheduling assignments. To simulate sexual recombination of two chromosomes and produce a new child chromosome, we applied a one-point crossover scheme twice (once along each dimension). The crossover is best explained by analogy to Cartesian space as follows. A random point is chosen in the matrix and treated as coordinate (0, 0).
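Before detailing the crossover geometry, the chromosome encoding and the overall GA loop just described can be summarised in a minimal, self-contained sketch. The population size, generation count, and mutation rate below are illustrative assumptions, and the `evaluate` callback stands in for the net business value computation; this is not our simulator, only the shape of the search.

```python
import random

def random_chromosome(num_workflows, providers_per_type):
    """A chromosome is a 2-D matrix: entry (i, j) is the provider
    (indexed within service type j) serving workflow i's requests
    for service type j."""
    return [[random.randrange(p) for p in providers_per_type]
            for _ in range(num_workflows)]

def ga_search(evaluate, num_workflows, providers_per_type,
              pop_size=20, generations=50, mutation_rate=0.1):
    """GA skeleton: random initial population, the best-scoring half
    kept as parents, quadrant-style crossover of two parents, and
    occasional point mutation of a single assignment."""
    population = [random_chromosome(num_workflows, providers_per_type)
                  for _ in range(pop_size)]
    num_types = len(providers_per_type)
    for _ in range(generations):
        population.sort(key=evaluate, reverse=True)
        parents = population[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            # One-point crossover applied along each dimension: pick a
            # random pivot cell, then assemble the child from opposite
            # quadrants of the two parents.
            r = random.randrange(num_workflows)
            c = random.randrange(num_types)
            child = [[a[i][j] if (i < r) == (j < c) else b[i][j]
                      for j in range(num_types)]
                     for i in range(num_workflows)]
            # Mutation: reassign one (workflow, type) cell to another
            # provider within that service type's range.
            if random.random() < mutation_rate:
                i = random.randrange(num_workflows)
                j = random.randrange(num_types)
                child[i][j] = random.randrange(providers_per_type[j])
            children.append(child)
        population = parents + children
    return max(population, key=evaluate)
```

Because selection, crossover, and mutation are all stochastic, repeated runs may return different schedules; only the tendency toward higher-scoring assignments is guaranteed.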
chosen in the matrix to be coordinate (0, 0).\nMatrix elements from quadrants II and IV from the first parent and elements from quadrants I and III from the second parent are used to create the new child.\nThis approach follows GA best practices by keeping contiguous chromosome segments together as they are transmitted from parent to child.\nThe uni-chromosome mutation scheme randomly changes one of the service provider assignments to another provider within the available range.\nOther recombination and mutation schemes are an area of research in the GA community, and we look to explore new operators in future work.\nGA evaluation function.\nAn important GA component is the evaluation function.\nGiven a particular chromosome representing one scheduling mapping, the function deterministically calculates the net business value across all workflows.\nThe business processes in each workflow are assigned to service providers, and each provider's completion time is calculated based on the service agreement guarantee using the parameters mentioned in Section 3.1, namely the unloaded completion time \u03b1, the maximum concurrency \u03b2, and a coefficient \u03b3 that controls the linear performance degradation under heavy load.\nNote that the evaluation function can be easily replaced if desired; for example, other evaluation functions can model different service provider guarantees or parallel workflows.\n4.\nEXPERIMENTS AND RESULTS\nIn this section we show the benefit of using our GA-based scheduler.\nBecause we wanted to scale the scenarios up to a large number of workflows (up to 1000 in our experiments), we implemented a simulation program that allowed us to vary parameters and to measure the results with different metrics.\nThe simulator was written in standard C++ and was run on a Linux (Fedora Core) desktop computer running at 2.8 GHz with 1 GB of RAM.\nWe compared our algorithm against alternative candidates:\n\u2022 A well-known round-robin algorithm that 
assigns each business process in circular fashion to the service providers for a particular service type.\nThis approach provides the simplest scheme for load-balancing.\n\u2022 A random-proportional algorithm that proportionally assigns business processes to the service providers; that is, for a given service type, the service providers are ranked by their guaranteed completion time, and business processes are assigned proportionally to the providers based on their completion time.\n(We also tried a proportionality scheme based on both the completion times and maximum concurrency but attained the same results, so only the former scheme's results are shown here.)\n\u2022 A strawman greedy algorithm that always assigns business processes to the service provider that has the fastest guaranteed completion time.\nThis algorithm represents a naive approach based on greedy, local observations of each workflow without taking into consideration all workflows.\nIn the experiments that follow, all results were averaged across 20 trials, and to help normalise the effects of randomisation used during the GA, each trial started by reading in pre-initialised data from disk.\nIn Table 1 we list our experimental parameters.\nIn Figure 3 we show the results of running our GA against the three candidate alternatives.\nThe x-axis shows the number of workflows scaled up to 1000, and the y-axis shows the aggregate business value for all workflows.\nAs can be seen, the GA consistently produces the highest business value even as the number of workflows grows; at 1000 workflows, the GA produces a 115% improvement over the next-best alternative.\n(Note that although we are optimising against the business value metric we defined earlier, genetic algorithms are able to converge towards the optimal value of any metric, as long as the evaluation function can consistently measure a chromosome's value with that metric.)\nAs expected, the greedy algorithm performs very poorly because it does the 
worst job at balancing load: all business processes for a given service type are assigned to only one server (the one advertised to have the fastest completion time), and as more business processes arrive, the provider's performance degrades linearly.\nThe round-robin scheme is initially outperformed by the random-proportional scheme up to around 120 workflows (as shown in the magnified graph of Figure 4), but as the number of workflows increases, the round-robin scheme consistently wins over random-proportional.\nThe reason is that although the random-proportional scheme assigns business processes to providers proportionally according to the advertised completion times (which is a measure of the \"power\" of the service provider), even the best providers will eventually reach a real-world maximum concurrency for the large number of workflows that we are considering.\nFigure 3: Net business value scores of different scheduling algorithms.\nFigure 4: Magnification of the left-most region in Figure 3.\nFor a very large number of workflows, the round-robin scheme is able to better balance the load across all service providers.\nTo better understand the behaviour resulting from the scheduling assignments, we show the workflow completion results in Figures 5, 6, and 7 for 100, 500, and 900 workflows, respectively.\nThese figures show the percentage of workflows that are successful (can complete within their QoS limit), acceptable (can complete within r = 3 times their QoS limit), and failed (cannot complete within r = 3 times their QoS limit).\nThe GA consistently produces the highest percentage of successful workflows (resulting in higher business values for the aggregate set of workflows).\nFurther, the round-robin scheme produces better results than the random-proportional for a large number of workflows but does not perform as well as the GA. 
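The quadrant-based crossover and uni-chromosome mutation described in Section 3.2 can be sketched compactly. The following is a minimal Python illustration under our own naming (the paper's simulator was written in C++; the helper names and the `providers_per_type` argument, giving the per-column provider ranges, are our assumptions):

```python
import random

def crossover(p1, p2, rng=random):
    """One-point crossover applied along each dimension: a random pivot
    (r, c) splits the matrix into four quadrants; the child takes
    quadrants II and IV from p1 and quadrants I and III from p2."""
    rows, cols = len(p1), len(p1[0])
    r, c = rng.randrange(rows), rng.randrange(cols)
    # (i < r) == (j < c) holds exactly in quadrants II (i<r, j<c)
    # and IV (i>=r, j>=c), which are copied from the first parent.
    return [[p1[i][j] if (i < r) == (j < c) else p2[i][j]
             for j in range(cols)] for i in range(rows)]

def mutate(chrom, providers_per_type, rng=random):
    """Reassign one random (workflow, service type) cell to another
    provider drawn from that service type's own provider range."""
    i = rng.randrange(len(chrom))
    j = rng.randrange(len(chrom[0]))
    chrom[i][j] = rng.randrange(providers_per_type[j])
    return chrom
```

Copying whole quadrants, rather than independently resampling cells, keeps contiguous chromosome segments intact as they pass from parent to child, which is the property the paper cites as a GA best practice.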
In Figure 8 we graph the \"makespan\" resulting from the same experiments above.\nMakespan is a traditional metric from the job scheduling community measuring elapsed time for the last job to complete.\nWhile useful, it does not capture the high-level business value metric that we are optimising against.\nIndeed, the makespan is oblivious to the fact that we provide multiple levels of completion (successful, acceptable, and failed) and assign business value scores accordingly.\nFor completeness, we note that the GA provides the fastest makespan, but it is matched by the round-robin algorithm.\nThe GA produces better business values (as shown in Figure 3) because it is able to search the solution space to find better mappings that produce more successful workflows (as shown in Figures 5 to 7).\nWe also looked at the effect of the scheduling algorithms on balancing the load.\nFigure 9 shows the percentage of service providers that were accessed while the workflows ran.\nAs expected, the greedy algorithm always hits one service provider; on the other hand, the round-robin algorithm is the fastest to spread the business processes.\nTable 1: Experimental parameters\nFigure 8: Maximum completion time for all workflows.\nThis value is the \"makespan\" metric used in traditional scheduling research.\nAlthough useful, the makespan does not take into consideration the business value scoring in our problem domain.\nFigure 10 shows the percentage of accessed service providers (that is, the percentage of service providers represented in Figure 9) that had more assigned business processes than their advertised maximum concurrency.\nFor example, in the greedy algorithm only one service provider is utilised, and this one provider quickly becomes saturated.\nOn the other hand, the random-proportional algorithm uses many service providers, but because business processes are proportionally assigned with more assignments going to the better providers, there is a tendency for a 
smaller percentage of providers to become saturated.\nFor completeness, we show the performance of the genetic algorithm itself in Figure 11.\nThe algorithm scales linearly with an increasing number of workflows.\nWe note that the round-robin, random-proportional, and greedy algorithms all finished within 1 second even for the largest workflow configuration.\nHowever, we feel that the benefit of finding much higher business value scores justifies the running time of the GA; further, we would expect that the running time will improve with both software tuning as well as with a computer faster than our off-the-shelf PC.\n5.\nCONCLUSION\nBusiness processes within workflows can be orchestrated to access web services.\nIn this paper we looked at multi-tiered service provisioning where web service requests to service types can be mapped to different service providers.\nThe resulting problem is that in order to support a very large number of workflows, the assignment of business process to web service provider must be intelligent.\nWe used a business value metric to measure the behaviour of workflows meeting or failing QoS values, and we optimised our scheduling to maximise the aggregate business value across all workflows.\nFigure 5: Workflow behaviour for 100 workflows.\nFigure 6: Workflow behaviour for 500 workflows.\nFigure 7: Workflow behaviour for 900 workflows.\nFigure 9: The percentage of service providers utilized during workload executions.\nThe Greedy algorithm always hits the one service provider, while the Round Robin algorithm spreads requests evenly across the providers.\nFigure 10: The percentage of service providers that are saturated among those providers who were utilized (that is, percentage of the service providers represented in Figure 9).\nA saturated service provider is one whose workload is greater than its advertised maximum concurrency.\nFigure 11: Running time of the genetic algorithm.\nSince the solution space of scheduler mappings is 
exponential, we used a genetic search algorithm to search the space and converge toward the best schedule.\nWith a default configuration for all parameters and using our business value scoring, the GA produced up to 115% business value improvement over the next-best algorithm.\nFinally, because a genetic algorithm will converge towards the optimal value using any metric (even other than the business value metric we used), we believe our approach has strong potential for continuing work.\nIn future work, we look to acquire real-world traces of web service instances in order to get better estimates of service agreement guarantees, although we expect that such guarantees between the providers and their consumers are not generally available to the public.\nWe will also look at other QoS metrics such as CPU and I\/O usage.\nFor example, we can analyse transfer costs with varying bandwidth, latency, data size, and data distribution.\nFurther, we hope to improve our genetic algorithm and compare it to more scheduler alternatives.\nFinally, since our work is complementary to existing work in web services choreography (because we rely on pre-configured workflows), we look to integrate our approach with available web service workflow systems expressed in BPEL.","keyphrases":["heurist","schedul","web servic","streamlin function","end-to-end workflow composit","servic request","multi-organis environ","schedul servic","busi process workflow","busi valu metric","schedul agent","multi-tier system","qo-defin limit","qo","workflow"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","M","M","U","U"]} {"id":"C-67","title":"A Holistic Approach to High-Performance Computing: Xgrid Experience","abstract":"The Ringling School of Art and Design is a fully accredited four-year college of visual arts and design. 
With a student to computer ratio of better than 2-to-1, the Ringling School has achieved national recognition for its large-scale integration of technology into collegiate visual art and design education. We have found that Mac OS X is the best operating system to train future artists and designers. Moreover, we can now buy Macs to run high-end graphics, nonlinear video editing, animation, multimedia, web production, and digital video applications rather than expensive UNIX workstations. As visual artists cross from paint on canvas to creating in the digital realm, the demand for a high-performance computing environment grows. In our public computer laboratories, students use the computers most often during the workday; at night and on weekends the computers see only light use. In order to harness the lost processing time for tasks such as video rendering, we are testing Xgrid, a suite of Mac OS X applications recently developed by Apple for parallel and distributed high-performance computing. As with any new technology deployment, IT managers need to consider a number of factors as they assess, plan, and implement Xgrid. Therefore, we would like to share valuable information we learned from our implementation of an Xgrid environment with our colleagues. In our report, we will address issues such as assessing the needs for grid computing, potential applications, management tools, security, authentication, integration into existing infrastructure, application support, user training, and user support. 
Furthermore, we will discuss the issues that arose and the lessons learned during and after the implementation process.","lvl-1":"A Holistic Approach to High-Performance Computing: Xgrid Experience David Przybyla Ringling School of Art and Design 2700 North Tamiami Trail Sarasota, Florida 34234 941-309-4720 dprzybyl@ringling.edu Karissa Miller Ringling School of Art and Design 2700 North Tamiami Trail Sarasota, Florida 34234 941-359-7670 kmiller@ringling.edu Mahmoud Pegah Ringling School of Art and Design 2700 North Tamiami Trail Sarasota, Florida 34234 941-359-7625 mpegah@ringling.edu ABSTRACT The Ringling School of Art and Design is a fully accredited four-year college of visual arts and design.\nWith a student to computer ratio of better than 2-to-1, the Ringling School has achieved national recognition for its large-scale integration of technology into collegiate visual art and design education.\nWe have found that Mac OS X is the best operating system to train future artists and designers.\nMoreover, we can now buy Macs to run high-end graphics, nonlinear video editing, animation, multimedia, web production, and digital video applications rather than expensive UNIX workstations.\nAs visual artists cross from paint on canvas to creating in the digital realm, the demand for a high-performance computing environment grows.\nIn our public computer laboratories, students use the computers most often during the workday; at night and on weekends the computers see only light use.\nIn order to harness the lost processing time for tasks such as video rendering, we are testing Xgrid, a suite of Mac OS X applications recently developed by Apple for parallel and distributed high-performance computing.\nAs with any new technology deployment, IT managers need to consider a number of factors as they assess, plan, and implement Xgrid.\nTherefore, we would like to share valuable information we learned from our implementation of an Xgrid environment with our colleagues.\nIn our 
report, we will address issues such as assessing the needs for grid computing, potential applications, management tools, security, authentication, integration into existing infrastructure, application support, user training, and user support.\nFurthermore, we will discuss the issues that arose and the lessons learned during and after the implementation process.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems - distributed applications.\nGeneral Terms Management, Documentation, Performance, Design, Economics, Reliability, Experimentation.\n1.\nINTRODUCTION Grid computing does not have a single, universally accepted definition.\nThe technology behind the grid computing model is not new.\nIts roots lie in early distributed computing models that date back to the early 1980s, where scientists harnessed the computing power of idle workstations to let compute-intensive applications run on multiple workstations, dramatically shortening processing times.\nAlthough numerous distributed computing models were available for discipline-specific scientific applications, only recently have the tools become available to use general-purpose applications on a grid.\nConsequently, the grid computing model is gaining popularity and has become a showpiece of ``utility computing''.\nSince, in the IT industry, various computing models are used interchangeably with grid computing, we first sort out the similarities and differences between these computing models so that grid computing can be placed in perspective.\n1.1 Clustering A cluster is a group of machines in a fixed configuration united to operate and be managed as a single entity to increase robustness and performance.\nThe cluster appears as a single high-speed system or a single highly available system.\nIn this model, resources cannot enter and leave the group as necessary.\nThere are at least two types of clusters: parallel clusters and high-availability clusters.\nClustered machines 
are generally in spatial proximity, such as in the same server room, and dedicated solely to their task.\nIn a high-availability cluster, each machine provides the same service.\nIf one machine fails, another seamlessly takes over its workload.\nFor example, each computer could be a web server for a web site.\nShould one web server ``die,'' another provides the service, so that the web site rarely, if ever, goes down.\nA parallel cluster is a type of supercomputer.\nProblems are split into many parts, and individual cluster members are given part of the problem to solve.\nAn example of a parallel cluster is composed of Apple Power Mac G5 computers at Virginia Tech University [1].\n1.2 Distributed Computing Distributed computing spatially expands network services so that the components providing the services are separated.\nThe major objective of this computing model is to consolidate processing power over a network.\nA simple example is spreading services such as file and print serving, web serving, and data storage across multiple machines rather than a single machine handling all the tasks.\nDistributed computing can also be more fine-grained, where even a single application is broken into parts and each part located on different machines: a word processor on one server, a spell checker on a second server, etc. 
1.3 Utility Computing Literally, utility computing resembles common utilities such as telephone or electric service.\nA service provider makes computing resources and infrastructure management available to a customer as needed, and charges for usage rather than a flat rate.\nThe important thing to note is that resources are only used as needed, and not dedicated to a single customer.\n1.4 Grid Computing Grid computing contains aspects of clusters, distributed computing, and utility computing.\nIn the most basic sense, a grid turns a group of heterogeneous systems into a centrally managed but flexible computing environment that can work on tasks too time-intensive for the individual systems.\nThe grid members are not necessarily in proximity, but must merely be accessible over a network; the grid can access computers on a LAN, WAN, or anywhere in the world via the Internet.\nIn addition, the computers comprising the grid need not be dedicated to the grid; rather, they can function as normal workstations, and then advertise their availability to the grid when not in use.\nThe last characteristic is the most fundamental to the grid described in this paper.\nA well-known example of such an ad hoc grid is the SETI@home project [2] of the University of California at Berkeley, which allows any person in the world with a computer and an Internet connection to donate unused processor time for analyzing radio telescope data.\n1.5 Comparing the Grid and Cluster A computer grid expands the capabilities of the cluster by loosening its spatial bounds, so that any computer accessible through the network gains the potential to augment the grid.\nA fundamental grid feature is that it scales well.\nThe processing power of any machine added to the grid is immediately available for solving problems.\nIn addition, the machines on the grid can be general-purpose workstations, which keeps down the cost of expanding the grid.\n2.\nASSESSING THE NEED FOR GRID COMPUTING Effective use of a grid 
requires a computation that can be divided into independent (i.e., parallel) tasks.\nThe results of each task cannot depend on the results of any other task, and so the members of the grid can solve the tasks in parallel.\nOnce the tasks have been completed, the results can be assembled into the solution.\nExamples of parallelizable computations are the Mandelbrot set of fractals, the Monte Carlo calculations used in disciplines such as Solid State Physics, and the individual frames of a rendered animation.\nThis paper is concerned with the last example.\n2.1 Applications Appropriate for Grid Computing The applications used in grid computing must either be specifically designed for grid use, or scriptable in such a way that they can receive data from the grid, process the data, and then return results.\nIn other words, the best candidates for grid computing are applications that run the same or very similar computations on a large number of pieces of data without any dependencies on the previously calculated results.\nApplications heavily dependent on data handling rather than processing power are generally more suitable to run in a traditional environment than on a grid platform.\nOf course, the applications must also run on the computing platform that hosts the grid.\nOur interest is in using the Alias Maya application [3] with Apple``s Xgrid [4] on Mac OS X. 
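Frame rendering parallelizes cleanly because each frame is independent of every other: a frame range can be cut into fixed-size chunks and each chunk handed to a different grid member as its own task. A minimal Python sketch of that split (the function name and the (start, stop) tuple format are our own illustration, not part of any Xgrid API):

```python
def frame_jobs(begin, end, cluster_size):
    """Split an animation render into independent (start, stop) frame
    ranges, one per grid job; the last job absorbs any remainder."""
    jobs = []
    start = begin
    while start <= end:
        stop = min(start + cluster_size - 1, end)  # clamp final job at `end`
        jobs.append((start, stop))
        start = stop + 1
    return jobs

# Frames 201-225 in chunks of 5 yield five jobs:
# [(201, 205), (206, 210), (211, 215), (216, 220), (221, 225)]
```

Because no job reads another job's output, the chunks can run on any mix of agents and the rendered frames can simply be collected afterwards.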
Commercial applications usually have strict license requirements.\nThis is an important concern if we install a commercial application such as Maya on all members of our grid.\nBy its nature, the size of the grid may change as the number of idle computers changes.\nHow many licenses will be required?\nOur resolution of this issue will be discussed in a later section.\n2.2 Integration into the Existing Infrastructure The grid requires a controller that recognizes when grid members are available, and parses out jobs to available members.\nThe controller must be able to see members on the network.\nThis does not require that members be on the same subnet as the controller, but if they are not, any intervening firewalls and routers must be configured to allow grid traffic.\n3.\nXGRID Xgrid is Apple``s grid implementation.\nIt was inspired by Zilla, a desktop clustering application developed by NeXT and acquired by Apple.\nIn this report we describe the Xgrid Technology Preview 2, a free download that requires Mac OS X 10.2.8 or later and a minimum 128 MB RAM [5].\nXgrid leverages Apple``s traditional ease of use and configuration.\nIf the grid members are on the same subnet, by default Xgrid automatically discovers available resources through Rendezvous [6].\nTasks are submitted to the grid through a GUI interface or by the command line.\nA System Preference Pane controls when each computer is available to the grid.\nIt may be best to view Xgrid as a facilitator.\nThe Xgrid architecture handles software and data distribution, job execution, and result aggregation.\nHowever, Xgrid does not perform the actual calculations.\n3.1 Xgrid Components Xgrid has three major components: the client, controller, and the agent.\nEach component is included in the default installation, and any computer can easily be configured to assume any role.\nIn fact, for testing purposes, a computer can simultaneously assume all roles in local mode.\nThe more typical production use is called 
cluster mode.\nThe client submits jobs to the controller through the Xgrid GUI or command line.\nThe client defines how the job will be broken into tasks for the grid.\nIf any files or executables must be sent as part of a job, they must reside on the client or at a location accessible to the client.\nWhen a job is complete, the client can retrieve the results from the controller.\nA client can only connect to a single controller at a time.\nThe controller runs the GridServer process.\nIt queues tasks received from clients, distributes those tasks to the agents, and handles failover if an agent cannot complete a task.\nIn Xgrid Technology Preview 2, a controller can handle a maximum of 10,000 agent connections.\nOnly one controller can exist per logical grid.\nThe agents run the GridAgent process.\nWhen the GridAgent process starts, it registers with a controller; an agent can only be connected to one controller at a time.\nAgents receive tasks from their controller, perform the specified computations, and then send the results back to the controller.\nAn agent can be configured to always accept tasks, or to just accept them when the computer is not otherwise busy.\n3.2 Security and Authentication By default, Xgrid requires two passwords.\nFirst, a client needs a password to access a controller.\nSecond, the controller needs a password to access an agent.\nEither password requirement can be disabled.\nXgrid uses two-way-random mutual authentication protocol with MD5 hashes.\nAt this time, data encryption is only used for passwords.\nAs mentioned earlier, an agent registers with a controller when the GridAgent process starts.\nThere is no native method for the controller to reject agents, and so it must accept any agent that registers.\nThis means that any agent could submit a job that consumes excessive processor and disk space on the agents.\nOf course, since Mac OS X is a BSD-based operating system, the controller could employ Unix methods of restricting network 
connections from agents.\nThe Xgrid daemons run as the user nobody, which means the daemons can read, write, or execute any file according to world permissions.\nThus, Xgrid jobs can execute many commands and write to \/tmp and \/Volumes.\nIn general, this is not a major security risk, but it does require a level of trust between all members of the grid.\n3.3 Using Xgrid 3.3.1 Installation Basic Xgrid installation and configuration is described both in Apple documentation [5] and online at the University of Utah web site [8].\nThe installation is straightforward and offers no options for customization.\nThis means that every computer on which Xgrid is installed has the potential to be a client, controller, or agent.\n3.3.2 Agent and Controller Configuration The agents and controllers can be configured through the Xgrid Preference Pane in the System Preferences or XML files in \/Library\/Preferences.\nHere the GridServer and GridAgent processes are started, passwords set, and the controller discovery method used by agents is selected.\nBy default, agents use Rendezvous to find a controller, although the agents can also be configured to look for a specific host.\nThe Xgrid Preference Pane also sets whether the Agents will always accept jobs, or only accept jobs when idle.\nIn Xgrid terms, idle either means that the Xgrid screen saver has activated, or the mouse and keyboard have not been used for more than 15 minutes.\nEven if the agent is configured to always accept tasks, if the computer is being used these tasks will run in the background at a low priority.\nHowever, if an agent only accepts jobs when idle, any unfinished tasks being performed when the computer ceases being idle are immediately stopped and any intermediary results lost.\nThen the controller assigns the task to another available member of the grid.\nAdvertising the controller via Rendezvous can be disabled by editing \/Library\/Preferences\/com.apple.xgrid.controller.plist.\nThis, however, will 
not prevent an agent from connecting to the controller by hostname.\n3.3.3 Sending Jobs from an Xgrid Client The client sends jobs to the controller either through the Xgrid GUI or the command line.\nThe Xgrid GUI submits jobs via small applications called plug-ins.\nSample plug-ins are provided by Apple, but they are only useful for simple testing or as examples of how to create a custom plug-in.\nIf we are to employ Xgrid for useful work, we will require a custom plug-in.\nJames Reynolds details the creation of custom plug-ins on the University of Utah Mac OS web site [8].\nXgrid stores plug-ins in \/Library\/Xgrid\/Plug-ins or ~\/Library\/Xgrid\/Plug-ins, depending on whether the plug-in was installed with Xgrid or created by a user.\nThe core plug-in parameter is the command, which includes the executable the agents will run.\nAnother important parameter is the working directory.\nThis directory contains necessary files that are not installed on the agents or available to them over a network.\nThe working directory will always be copied to each agent, so it is best to keep this directory small.\nIf the files are installed on the agents or available over a network, the working directory parameter is not needed.\nThe command line allows the options available with the GUI plug-in, but it can be slightly more cumbersome.\nHowever, the command line probably will be the method of choice for serious work.\nThe command arguments must be included in a script unless they are very basic.\nThis can be a shell, Perl, or Python script, as long as the agent can interpret it.\n3.3.4 Running the Xgrid Job When the Xgrid job is started, the command tells the controller how to break the job into tasks for the agents.\nThen the command is tarred and gzipped and sent to each agent; if there is a working directory, this is also tarred and gzipped and sent to the agents.\nThe agents extract these files into \/tmp and run the task.\nRecall that since the GridAgent process runs as 
the user nobody, everything associated with the command must be available to nobody.\nExecutables called by the command should be installed on the agents unless they are very simple.\nIf the executable depends on libraries or other files, it may not function properly if transferred, even if the dependent files are referenced in the working directory.\nWhen the task is complete, the results are available to the client.\nIn principle, the results are sent to the client, but whether this actually happens depends on the command.\nIf the results are not sent to the client, they will be in \/tmp on each agent.\nWhen available, a better solution is to direct the results to a network volume accessible to the client.\n3.4 Limitations and Idiosyncrasies Since Xgrid is only in its second preview release, there are some rough edges and limitations.\nApple acknowledges some limitations [7].\nFor example, the controller cannot determine whether an agent is trustworthy and the controller always copies the command and working directory to the agent without checking to see if these exist on the agent.\nOther limitations are likely just a by-product of an unfinished work.\nNeither the client nor controller can specify which agents will receive the tasks, which is particularly important if the agents contain a variety of processor types and speeds and the user wants to optimize the calculations.\nAt this time, the best solution to this problem may be to divide the computers into multiple logical grids.\nThere is also no standard way to monitor the progress of a running job on each agent.\nThe Xgrid GUI and command line indicate which agents are working on tasks, but gives no indication of progress.\nFinally, at this time only Mac OS X clients can submit jobs to the grid.\nThe framework exists to allow third parties to write plug-ins for other Unix flavors, but Apple has not created them.\n4.\nXGRID IMPLEMENTATION Our goal is an Xgrid render farm for Alias Maya.\nThe Ringling School 
has about 400 Apple Power Mac G4``s and G5``s in 13 computer labs.\nThe computers range from 733 MHz single-processor G4``s and 500 MHz and 1 GHz dual-processor G4``s to 1.8 GHz dual-processor G5``s. All of these computers are lightly used in the evening and on weekends and represent an enormous processing resource for our student rendering projects.\n4.1 Software Installation During our Xgrid testing, we loaded software on each computer multiple times, including the operating systems.\nWe saved time by facilitating our installations with the remote administration daemon (radmind) software developed at the University of Michigan [9], [10].\nEverything we installed for testing was first created as a radmind base load or overload.\nThus, Mac OS X, Mac OS X Developer Tools, Xgrid, POV-Ray [11], and Alias Maya were stored on a radmind server and then installed on our test computers when needed.\n4.2 Initial Testing We used six 1.8 GHz dual-processor Apple Power Mac G5``s for our Xgrid tests.\nEach computer ran Mac OS X 10.3.3 and contained 1 GB RAM.\nAs shown in Figure 1, one computer served as both client and controller, while the other five acted as agents.\nBefore attempting Maya rendering with Xgrid, we performed basic calculations to cement our understanding of Xgrid.\nApple``s Xgrid documentation is sparse, so finding helpful web sites facilitated our learning.\nWe first ran the Mandelbrot set plug-in provided by Apple, which allowed us to test the basic functionality of our grid.\nThen we performed benchmark rendering with the open-source application POV-Ray, as described by Daniel C\u00f4t\u00e9 [12] and James Reynolds [8].\nOur results showed that one dual-processor G5 rendering the benchmark POV-Ray image took 104 minutes.\nBreaking the image into three equal parts and using Xgrid to send the parts to three agents required 47 minutes.\nHowever, two agents finished their rendering in 30 minutes, while the third agent used 47 minutes; the entire render was only 
as fast as the slowest agent.

These results gave us two important pieces of information. First, the much longer rendering time for one of the tasks indicated that we should be careful how we split jobs into tasks for the agents. All portions of the rendering will not take equal amounts of time, even if the pixel size is the same. Second, since POV-Ray cannot take advantage of both processors in a G5, neither can an Xgrid task running POV-Ray. Alias Maya does not have this limitation.

Figure 1. Xgrid test grid: a client/controller, five agents, and a network volume.

4.3 Rendering with Alias Maya 6
We first installed Alias Maya 6 for Mac OS X on the client/controller and each agent. Maya 6 requires licenses for use as a workstation application. However, if it is just used for rendering from the command line or a script, no license is needed. We thus created a minimal installation of Maya as a radmind overload. The application was installed in a hidden directory inside /Applications. This was done so that normal users of the workstations would not find and attempt to run Maya, which would fail because these installations are not licensed for such use.

In addition, Maya requires the existence of a directory ending in the path /maya. The directory must be readable and writable by the Maya user. For a user running Maya on a Mac OS X workstation, the path would usually be ~/Documents/maya. Unless otherwise specified, this directory will be the default location for Maya data and output files. If the directory does not exist, Maya will try to create it, even if the user specifies that the data and output files exist in other locations. However, Xgrid runs as the user nobody, which does not have a home directory. Maya is unable to create the needed directory, and looks instead for /Alias/maya. This directory also does not exist, and the user nobody has insufficient rights to create it. Our solution was to
manually create /Alias/maya and give the user nobody read and write permissions.

We also created a network volume for storage of both the rendering data and the resulting rendered frames. This avoided sending the Maya files and associated textures to each agent as part of a working directory. Such a solution worked well for us because our computers are geographically close on a LAN; if greater distance had separated the agents from the client/controller, specifying a working directory may have been a better solution.

Finally, we created a custom GUI plug-in for Xgrid. The plug-in command calls a Perl script with three arguments. Two arguments specify the beginning and end frames of the render, and the third argument gives the number of frames in each job (which we call the cluster size). The script then calculates the total number of jobs and parses them out to the agents. For example, if we begin at frame 201 and end at frame 225, with 5 frames for each job, the plug-in will create 5 jobs and send them out to the agents. Once the jobs are sent to the agents, the script executes the /usr/sbin/Render command on each agent with the parameters appropriate for the particular job. The results are sent to the network volume. With the setup described, we were able to render with Alias Maya 6 on our test grid. Rendering speed was not important at this time; our first goal was to implement the grid, and in that we succeeded.

4.3.1 Pseudo Code for Perl Script in Custom Xgrid Plug-in
In this section we summarize, in simplified pseudo code format, the Perl script used in our Xgrid plug-in.

agent_jobs {
• Read beginning frame, end frame, and cluster size of render.
• Check whether the render can be divided into an integer number of jobs based on the cluster size.
• If there is not an integer number of jobs, reduce the cluster size of the last job and set its last frame to the end frame of the render.
• Determine the start frame and end frame
for each job.
• Execute the Render command.
}

4.4 Lessons Learned
Rendering with Maya from the Xgrid GUI was not trivial. The lack of Xgrid documentation and the requirements of Maya combined into a confusing picture, where it was difficult to decide the true cause of the problems we encountered. Trial and error was required to determine the best way to set up our grid. The first hurdle was creating the directory /Alias/maya with read and write permissions for the user nobody. The second hurdle was learning that we got the best performance by storing the rendering data on a network volume. The last major hurdle was retrieving our results from the agents. Unlike the POV-Ray rendering tests, our initial Maya results were never returned to the client; instead, Maya stored the results in /tmp on each agent. Specifying in the plug-in where to send the results would not change this behavior. We decided this was likely a Maya issue rather than an Xgrid issue, and the solution was to send the results to the network volume via the Perl script.

5. FUTURE PLANS
Maya on Xgrid is not yet ready to be used by the students of Ringling School. In order to do this, we must address at least the following concerns.
• Continue our rendering tests through the command line rather than the GUI plug-in. This will be essential for the following step.
• Develop an appropriate interface for users to send jobs to the Xgrid controller. This will probably be an extension to the web interface of our existing render farm, where the student specifies parameters that are placed in a script that issues the Render command.
• Perform timed Maya rendering tests with Xgrid. Part of this should compare the rendering times for Power Mac G4's and G5's.
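The job-splitting logic summarized in the Section 4.3.1 pseudo code can be sketched as follows. This is a minimal Python illustration written by us, not the actual Perl plug-in script; the function name and tuple representation are our own:

```python
def split_jobs(begin_frame, end_frame, cluster_size):
    """Divide a frame range into jobs of cluster_size frames each.

    Mirrors the pseudo code: if the range does not divide evenly,
    the last job is shortened so that its final frame equals the
    end frame of the render.
    """
    jobs = []
    start = begin_frame
    while start <= end_frame:
        end = min(start + cluster_size - 1, end_frame)
        jobs.append((start, end))  # one (start, end) pair per agent job
        start = end + 1
    return jobs

# The paper's example: frames 201-225 with 5 frames per job yields 5 jobs.
print(split_jobs(201, 225, 5))
# -> [(201, 205), (206, 210), (211, 215), (216, 220), (221, 225)]
```

Each (start, end) pair would then become one job whose task invokes /usr/sbin/Render on an agent with the corresponding frame range.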
6. CONCLUSION
Grid computing continues to advance. Recently, the IT industry has witnessed the emergence of numerous types of contemporary grid applications in addition to the traditional grid framework for compute-intensive applications. For instance, peer-to-peer applications such as Kazaa are based on storage grids that do not share processing power but instead use an elegant protocol to swap files between systems. Although on our campuses we discourage students from utilizing peer-to-peer applications for music sharing, the same protocol can be utilized in applications such as decision support and data mining. The National Virtual Collaboratory grid project [13] will link earthquake researchers across the U.S. with computing resources, allowing them to share extremely large data sets and research equipment, and to work together as virtual teams over the Internet.

There is an assortment of new grid players in the IT world expanding the grid computing model and advancing grid technology to the next level. SAP [14] is piloting a project to grid-enable SAP ERP applications, Dell [15] has partnered with Platform Computing to consolidate computing resources and provide grid-enabled systems for compute-intensive applications, Oracle has integrated support for grid computing in their 10g release [16], United Devices [17] offers a hosting service for grid-on-demand, and Sun Microsystems continues research and development of Sun's N1 Grid Engine [18], which combines grid and clustering platforms. Simply put, grid computing is up and coming.

The potential benefits of grid computing in higher education are colossal, while the implementation costs are low. Today, it would be difficult to identify an application with as high a return on investment as grid computing in the information technology divisions of higher education institutions. It is a mistake to overlook a technology with such a high payback.

7. ACKNOWLEDGMENTS
The authors would like to
thank Scott Hanselman of the IT team at the Ringling School of Art and Design for providing valuable input in the planning of our Xgrid testing. We would also like to thank the posters of the Xgrid Users Mailing List [19] for providing insight into many areas of Xgrid.

8. REFERENCES
[1] Apple Academic Research, http://www.apple.com/education/science/profiles/vatech/.
[2] SETI@home: Search for Extraterrestrial Intelligence at home, http://setiathome.ssl.berkeley.edu/.
[3] Alias, http://www.alias.com/.
[4] Apple Computer, Xgrid, http://www.apple.com/acg/xgrid/.
[5] Xgrid Guide, http://www.apple.com/acg/xgrid/, 2004.
[6] Apple Mac OS X Features, http://www.apple.com/macosx/features/rendezvous/.
[7] Xgrid Manual Page, 2004.
[8] James Reynolds, Xgrid Presentation, University of Utah, http://www.macos.utah.edu:16080/xgrid/, 2004.
[9] Research Systems Unix Group, Radmind, University of Michigan, http://rsug.itd.umich.edu/software/radmind.
[10] Using the Radmind Command Line Tools to Maintain Multiple Mac OS X Machines, http://rsug.itd.umich.edu/software/radmind/files/radmindtutorial-0.8.1.pdf.
[11] POV-Ray, http://www.povray.org/.
[12] Daniel Côté, Xgrid example: Parallel graphics rendering in POV-Ray, http://unu.novajo.ca/simple/, 2004.
[13] NEESgrid, http://www.neesgrid.org/.
[14] SAP, http://www.sap.com/.
[15] Platform Computing, http://platform.com/.
[16] Oracle Grid, http://www.oracle.com/technologies/grid/.
[17] United Devices, Inc., http://ud.com/.
[18] N1 Grid Engine 6, http://www.sun.com/software/gridware/index.html.
[19] Xgrid Users Mailing List, http://www.lists.apple.com/mailman/listinfo/xgridusers/.

A Holistic Approach to High-Performance Computing: Xgrid Experience

ABSTRACT
The Ringling School of Art and Design is a fully accredited four-year college of visual arts and design. With a student-to-computer ratio of better than 2-to-1, the Ringling School
has achieved national recognition for its large-scale integration of technology into collegiate visual art and design education. We have found that Mac OS X is the best operating system to train future artists and designers. Moreover, we can now buy Macs to run high-end graphics, nonlinear video editing, animation, multimedia, web production, and digital video applications rather than expensive UNIX workstations. As visual artists cross from paint on canvas to creating in the digital realm, the demand for a high-performance computing environment grows. In our public computer laboratories, students use the computers most often during the workday; at night and on weekends the computers see only light use. In order to harness the lost processing time for tasks such as video rendering, we are testing Xgrid, a suite of Mac OS X applications recently developed by Apple for parallel and distributed high-performance computing. As with any new technology deployment, IT managers need to consider a number of factors as they assess, plan, and implement Xgrid. Therefore, we would like to share with our colleagues valuable information we learned from our implementation of an Xgrid environment. In our report, we will address issues such as assessing the needs for grid computing, potential applications, management tools, security, authentication, integration into existing infrastructure, application support, user training, and user support. Furthermore, we will discuss the issues that arose and the lessons learned during and after the implementation process.

1. INTRODUCTION
Grid computing does not have a single, universally accepted definition. The technology behind the grid computing model is not new. Its roots lie in early distributed computing models that date back to the early 1980s, where scientists harnessed the computing power of idle workstations to let compute-intensive applications run on multiple workstations, dramatically shortening processing times. Although numerous distributed computing models were available for discipline-specific scientific applications, only recently have the tools become available to use general-purpose applications on a grid. Consequently, the grid computing model is gaining popularity and has become a showpiece of "utility computing". Since various computing models are used interchangeably with grid computing in the IT industry, we first sort out the similarities and differences between these computing models so that grid computing can be placed in perspective.

1.1 Clustering
A cluster is a group of machines in a fixed configuration united to operate and be managed as a single entity to increase robustness and performance. The cluster appears as a single high-speed system or a single highly available system. In this model, resources cannot enter and leave the group as necessary. There are at least two types of clusters: parallel clusters and high-availability clusters. Clustered machines are generally in spatial proximity, such as in the same server room, and dedicated solely to their task.

In a high-availability cluster, each machine provides the same service. If one machine fails, another seamlessly takes over its workload. For example, each computer could be a web server for a web site. Should one web server "die," another provides the service, so that the web site rarely, if ever, goes down.

A parallel cluster is a type of supercomputer. Problems are split into many parts, and individual cluster members are given part of the problem to solve. An example of a parallel cluster is one composed of Apple Power Mac G5 computers at Virginia Tech University [1].

1.2 Distributed Computing
Distributed computing spatially expands network services so that the components providing the services are separated. The major objective of this computing model is to consolidate processing power over a network. A simple example is spreading services such as file and print serving, web serving, and data storage across multiple machines rather than having a single machine handle all the tasks. Distributed computing can also be more fine-grained, where even a single application is broken into parts and each part located on different machines: a word processor on one server, a spell checker on a second server, etc.

1.3 Utility Computing
Literally, utility computing resembles common utilities such as telephone or electric service. A service provider makes computing resources and infrastructure management available to a customer as needed, and charges for usage rather than a flat rate. The important thing to note is that resources are only used as needed, and not dedicated to a single customer.

1.4 Grid Computing
Grid computing contains aspects of clusters, distributed computing, and utility computing. In the most basic sense, a grid turns a group of heterogeneous systems into a centrally managed but flexible computing environment that can work on tasks too time-intensive for the individual systems. The grid members are not necessarily in proximity, but must merely be accessible over a network; the grid can access computers on a LAN, a WAN, or anywhere in the world via the Internet. In addition, the computers comprising the grid need not be dedicated to the grid; rather, they can function as normal workstations, and then advertise their availability to the grid when not in use. This last characteristic is the most fundamental to the grid described in this paper. A well-known example of such an "ad hoc" grid is the SETI@home project [2] of the University of California at Berkeley, which allows any person in the world with a computer and an Internet connection to donate unused processor time for analyzing radio telescope data.

1.5 Comparing the Grid and Cluster
A computer grid expands the capabilities of the cluster by loosening its spatial bounds, so that any computer accessible through the network gains the potential to augment the
grid. A fundamental grid feature is that it scales well. The processing power of any machine added to the grid is immediately available for solving problems. In addition, the machines on the grid can be general-purpose workstations, which keeps down the cost of expanding the grid.

2. ASSESSING THE NEED FOR GRID COMPUTING
Effective use of a grid requires a computation that can be divided into independent (i.e., parallel) tasks. The results of each task cannot depend on the results of any other task, and so the members of the grid can solve the tasks in parallel. Once the tasks have been completed, the results can be assembled into the solution. Examples of parallelizable computations are the Mandelbrot set of fractals, the Monte Carlo calculations used in disciplines such as solid state physics, and the individual frames of a rendered animation. This paper is concerned with the last example.

2.1 Applications Appropriate for Grid Computing
The applications used in grid computing must either be specifically designed for grid use, or scriptable in such a way that they can receive data from the grid, process the data, and then return results. In other words, the best candidates for grid computing are applications that run the same or very similar computations on a large number of pieces of data without any dependencies on previously calculated results. Applications heavily dependent on data handling rather than processing power are generally more suitable to run in a traditional environment than on a grid platform. Of course, the applications must also run on the computing platform that hosts the grid. Our interest is in using the Alias Maya application [3] with Apple's Xgrid [4] on Mac OS X.
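The independence property described above is what lets a grid farm tasks out in any order and assemble the results afterwards. As a toy illustration (our own Python sketch, not part of the paper; a local thread pool stands in for the grid's agents), such a workload is simply a map over the input pieces:

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame):
    # Stand-in for one independent task (e.g., rendering a single frame);
    # its result depends only on its own input, never on another task's.
    return frame * frame

def run_grid(frames, workers=4):
    # Because the tasks share no state, they can be handed out to any
    # available worker and the ordered results assembled at the end --
    # the property that makes a computation suitable for a grid.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_frame, frames))

print(run_grid(range(5)))  # -> [0, 1, 4, 9, 16]
```

A computation whose steps depend on earlier results (a long Markov chain, say) cannot be decomposed this way and is a poor grid candidate.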
Commercial applications usually have strict license requirements.\nThis is an important concern if we install a commercial application such as Maya on all members of our grid.\nBy its nature, the size of the grid may change as the number of idle computers changes.\nHow many licenses will be required?\nOur resolution of this issue will be discussed in a later section.\n2.2 Integration into the Existing Infrastructure\nThe grid requires a controller that recognizes when grid members are available, and parcels out jobs to available members.\nThe controller must be able to see members on the network.\nThis does not require that members be on the same subnet as the controller, but if they are not, any intervening firewalls and routers must be configured to allow grid traffic.\n3.\nXGRID\nXgrid is Apple's grid implementation.\nIt was inspired by Zilla, a desktop clustering application developed by NeXT and acquired by Apple.\nIn this report we describe the Xgrid Technology Preview 2, a free download that requires Mac OS X 10.2.8 or later and a minimum of 128 MB RAM [5].\nXgrid leverages Apple's traditional ease of use and configuration.\nIf the grid members are on the same subnet, by default Xgrid automatically discovers available resources through Rendezvous [6].\nTasks are submitted to the grid through a GUI interface or by the command line.\nA System Preference Pane controls when each computer is available to the grid.\nIt may be best to view Xgrid as a facilitator.\nThe Xgrid architecture handles software and data distribution, job execution, and result aggregation.\nHowever, Xgrid does not perform the actual calculations.\n3.1 Xgrid Components\nXgrid has three major components: the client, the controller, and the agent.\nEach component is included in the default installation, and any computer can easily be configured to assume any role.\nIn fact, for testing purposes, a computer can simultaneously assume all roles in \"local mode.\"\nThe more typical production use is 
called \"cluster mode.\"\nThe client submits jobs to the controller through the Xgrid GUI or command line.\nThe client defines how the job will be broken into tasks for the grid.\nIf any files or executables must be sent as part of a job, they must reside on the client or at a location accessible to the client.\nWhen a job is complete, the client can retrieve the results from the controller.\nA client can only connect to a single controller at a time.\nThe controller runs the GridServer process.\nIt queues tasks received from clients, distributes those tasks to the agents, and handles failover if an agent cannot complete a task.\nIn Xgrid Technology Preview 2, a controller can handle a maximum of 10,000 agent connections.\nOnly one controller can exist per logical grid.\nThe agents run the GridAgent process.\nWhen the GridAgent process starts, it registers with a controller; an agent can only be connected to one controller at a time.\nAgents receive tasks from their controller, perform the specified computations, and then send the results back to the controller.\nAn agent can be configured to always accept tasks, or to just accept them when the computer is not otherwise busy.\n3.2 Security and Authentication\nBy default, Xgrid requires two passwords.\nFirst, a client needs a password to access a controller.\nSecond, the controller needs a password to access an agent.\nEither password requirement can be disabled.\nXgrid uses two-way-random mutual authentication protocol with MD5 hashes.\nAt this time, data encryption is only used for passwords.\nAs mentioned earlier, an agent registers with a controller when the GridAgent process starts.\nThere is no native method for the controller to reject agents, and so it must accept any agent that registers.\nThis means that any agent could submit a job that consumes excessive processor and disk space on the agents.\nOf course, since Mac OS X is a BSD-based operating system, the controller could employ Unix methods of 
restricting network connections from agents.\nThe Xgrid daemons run as the user \"nobody,\" which means the daemons can read, write, or execute any file according to world permissions.\nThus, Xgrid jobs can execute many commands and write to \/tmp and \/Volumes.\nIn general, this is not a major security risk, but it does require a level of trust between all members of the grid.\n3.3 Using Xgrid\n3.3.1 Installation\nBasic Xgrid installation and configuration are described both in Apple documentation [5] and online at the University of Utah web site [8].\nThe installation is straightforward and offers no options for customization.\nThis means that every computer on which Xgrid is installed has the potential to be a client, controller, or agent.\n3.3.2 Agent and Controller Configuration\nThe agents and controllers can be configured through the Xgrid Preference Pane in the System Preferences or XML files in \/Library\/Preferences.\nHere the GridServer and GridAgent processes are started, passwords set, and the controller discovery method used by agents is selected.\nBy default, agents use Rendezvous to find a controller, although the agents can also be configured to look for a specific host.\nThe Xgrid Preference Pane also sets whether the agents will always accept jobs, or only accept jobs when idle.\nIn Xgrid terms, idle either means that the Xgrid screen saver has activated, or the mouse and keyboard have not been used for more than 15 minutes.\nEven if the agent is configured to always accept tasks, if the computer is being used these tasks will run in the background at a low priority.\nHowever, if an agent only accepts jobs when idle, any unfinished task being performed when the computer ceases being idle is immediately stopped and any intermediate results are lost.\nThe controller then assigns the task to another available member of the grid.\nAdvertising the controller via Rendezvous can be disabled by editing \/Library\/Preferences\/com.apple.xgrid.controller.plist.\nThis, however, will not prevent an agent from connecting to the controller by hostname.\n3.3.3 Sending Jobs from an Xgrid Client\nThe client sends jobs to the controller either through the Xgrid GUI or the command line.\nThe Xgrid GUI submits jobs via small applications called plug-ins.\nSample plug-ins are provided by Apple, but they are only useful for simple testing or as examples of how to create a custom plug-in.\nIf we are to employ Xgrid for useful work, we will require a custom plug-in.\nJames Reynolds details the creation of custom plug-ins on the University of Utah Mac OS web site [8].\nXgrid stores plug-ins in \/Library\/Xgrid\/Plug-ins or ~\/Library\/Xgrid\/Plug-ins, depending on whether the plug-in was installed with Xgrid or created by a user.\nThe core plug-in parameter is the \"command,\" which includes the executable the agents will run.\nAnother important parameter is the \"working directory.\"\nThis directory contains necessary files that are not installed on the agents or available to them over a network.\nThe working directory will always be copied to each agent, so it is best to keep this directory small.\nIf the files are installed on the agents or available over a network, the working directory parameter is not needed.\nThe command line allows the options available with the GUI plug-in, but it can be slightly more cumbersome.\nHowever, the command line will probably be the method of choice for serious work.\nThe command arguments must be included in a script unless they are very basic.\nThis can be a shell, Perl, or Python script, as long as the agent can interpret it.\n3.3.4 Running the Xgrid Job\nWhen the Xgrid job is started, the command tells the controller how to break the job into tasks for the agents.\nThen the command is tarred and gzipped and sent to each agent; if there is a working directory, this is also tarred and gzipped and sent to the agents.\nThe agents extract 
these files into \/tmp and run the task.\nRecall that since the GridAgent process runs as the user nobody, everything associated with the command must be available to nobody.\nExecutables called by the command should be installed on the agents unless they are very simple.\nIf the executable depends on libraries or other files, it may not function properly if transferred, even if the dependent files are referenced in the working directory.\nWhen the task is complete, the results are available to the client.\nIn principle, the results are sent to the client, but whether this actually happens depends on the command.\nIf the results are not sent to the client, they will be in \/tmp on each agent.\nWhen available, a better solution is to direct the results to a network volume accessible to the client.\n3.4 Limitations and Idiosyncrasies\nSince Xgrid is only in its second preview release, there are some rough edges and limitations.\nApple acknowledges some limitations [7].\nFor example, the controller cannot determine whether an agent is trustworthy, and the controller always copies the command and working directory to the agent without checking to see if these exist on the agent.\nOther limitations are likely just a by-product of unfinished work.\nNeither the client nor controller can specify which agents will receive the tasks, which is particularly important if the agents contain a variety of processor types and speeds and the user wants to optimize the calculations.\nAt this time, the best solution to this problem may be to divide the computers into multiple logical grids.\nThere is also no standard way to monitor the progress of a running job on each agent.\nThe Xgrid GUI and command line indicate which agents are working on tasks, but give no indication of progress.\nFinally, at this time only Mac OS X clients can submit jobs to the grid.\nThe framework exists to allow third parties to write plug-ins for other Unix flavors, but Apple has not created 
them.\n4.\nXGRID IMPLEMENTATION\nOur goal is an Xgrid render farm for Alias Maya.\nThe Ringling School has about 400 Apple Power Mac G4's and G5's in 13 computer labs.\nThe computers range from 733 MHz single-processor G4's and 500 MHz and 1 GHz dual-processor G4's to 1.8 GHz dual-processor G5's.\nAll of these computers are lightly used in the evening and on weekends and represent an enormous processing resource for our student rendering projects.\n4.1 Software Installation\nDuring our Xgrid testing, we loaded software on each computer multiple times, including the operating systems.\nWe saved time by streamlining our installations with the remote administration daemon (radmind) software developed at the University of Michigan [9], [10].\nEverything we installed for testing was first created as a radmind base load or overload.\nThus, Mac OS X, Mac OS X Developer Tools, Xgrid, POV-Ray [11], and Alias Maya were stored on a radmind server and then installed on our test computers when needed.\n4.2 Initial Testing\nWe used six 1.8 GHz dual-processor Apple Power Mac G5's for our Xgrid tests.\nEach computer ran Mac OS X 10.3.3 and contained 1 GB RAM.\nAs shown in Figure 1, one computer served as both client and controller, while the other five acted as agents.\nBefore attempting Maya rendering with Xgrid, we performed basic calculations to cement our understanding of Xgrid.\nApple's Xgrid documentation is sparse, so finding helpful web sites facilitated our learning.\nFigure 1.\nXgrid test grid.\nWe first ran the Mandelbrot set plug-in provided by Apple, which allowed us to test the basic functionality of our grid.\nThen we performed benchmark rendering with the open-source application POV-Ray, as described by Daniel C\u00f4t\u00e9 [12] and James Reynolds [8].\nOur results showed that one dual-processor G5 rendering the benchmark POV-Ray image took 104 minutes.\nBreaking the image into three equal parts and using Xgrid to send the parts to three agents required 47 
minutes.\nHowever, two agents finished their rendering in 30 minutes, while the third agent used 47 minutes; the entire render was only as fast as the slowest agent.\nThese results gave us two important pieces of information.\nFirst, the much longer rendering time for one of the tasks indicated that we should be careful how we split jobs into tasks for the agents.\nNot all portions of the rendering will take equal amounts of time, even if the pixel size is the same.\nSecond, since POV-Ray cannot take advantage of both processors in a G5, neither can an Xgrid task running POV-Ray.\nAlias Maya does not have this limitation.\n4.3 Rendering with Alias Maya 6\nWe first installed Alias Maya 6 for Mac OS X on the client\/controller and each agent.\nMaya 6 requires licenses for use as a workstation application.\nHowever, if it is just used for rendering from the command line or a script, no license is needed.\nWe thus created a minimal installation of Maya as a radmind overload.\nThe application was installed in a \"hidden\" directory inside \/Applications.\nThis was done so that normal users of the workstations would not find and attempt to run Maya, which would fail because these installations are not licensed for such use.\nIn addition, Maya requires the existence of a directory ending in the path \/maya.\nThe directory must be readable and writable by the Maya user.\nFor a user running Maya on a Mac OS X workstation, the path would usually be ~\/Documents\/maya.\nUnless otherwise specified, this directory will be the default location for Maya data and output files.\nIf the directory does not exist, Maya will try to create it, even if the user specifies that the data and output files exist in other locations.\nHowever, Xgrid runs as the user nobody, which does not have a home directory.\nMaya is unable to create the needed directory, and looks instead for \/Alias\/maya.\nThis directory also does not exist, and the user nobody has insufficient rights to create 
it.\nOur solution was to manually create \/Alias\/maya and give the user nobody read and write permissions.\nWe also created a network volume for storage of both the rendering data and the resulting rendered frames.\nThis avoided sending the Maya files and associated textures to each agent as part of a working directory.\nSuch a solution worked well for us because our computers are geographically close on a LAN; if greater distance had separated the agents from the client\/controller, specifying a working directory may have been a better solution.\nFinally, we created a custom GUI plug-in for Xgrid.\nThe plug-in command calls a Perl script with three arguments.\nTwo arguments specify the beginning and end frames of the render and the third argument the number of frames in each job (which we call the \"cluster size\").\nThe script then calculates the total number of jobs and parcels them out to the agents.\nFor example, if we begin at frame 201 and end at frame 225, with 5 frames for each job, the plug-in will create 5 jobs and send them out to the agents.\nOnce the jobs are sent to the agents, the script executes the \/usr\/sbin\/Render command on each agent with the parameters appropriate for the particular job.\nThe results are sent to the network volume.\nWith the setup described, we were able to render with Alias Maya 6 on our test grid.\nRendering speed was not important at this time; our first goal was to implement the grid, and in that we succeeded.\n4.3.1 Pseudo Code for Perl Script in Custom Xgrid Plug-in\nIn this section we summarize, in simplified pseudocode, the Perl script used in our Xgrid plug-in.\nagent_jobs {\n\u2022 Read beginning frame, end frame, and cluster size of render.\n\u2022 Check whether the render can be divided into an integer number of jobs based on the cluster size.\n\u2022 If there is not an integer number of jobs, reduce the cluster size of the last job and set its last frame to the end frame of the render.\n\u2022 
Determine the start frame and end frame for each job.\n\u2022 Execute the Render command.}\n4.4 Lessons Learned\nRendering with Maya from the Xgrid GUI was not trivial.\nThe lack of Xgrid documentation and the requirements of Maya combined into a confusing picture, where it was difficult to identify the true cause of the problems we encountered.\nTrial and error was required to determine the best way to set up our grid.\nThe first hurdle was creating the directory \/Alias\/maya with read and write permissions for the user nobody.\nThe second hurdle was learning that we got the best performance by storing the rendering data on a network volume.\nThe last major hurdle was retrieving our results from the agents.\nUnlike the POV-Ray rendering tests, our initial Maya results were never returned to the client; instead, Maya stored the results in \/tmp on each agent.\nSpecifying in the plug-in where to send the results would not change this behavior.\nWe decided this was likely a Maya issue rather than an Xgrid issue, and the solution was to send the results to the network volume via the Perl script.\n5.\nFUTURE PLANS\nMaya on Xgrid is not yet ready to be used by the students of Ringling School.\nBefore it is, we must address at least the following concerns.\n\u2022 Continue our rendering tests through the command line rather than the GUI plug-in.\nThis will be essential for the following step.\n\u2022 Develop an appropriate interface for users to send jobs to the Xgrid controller.\nThis will probably be an extension to the web interface of our existing render farm, where the student specifies parameters that are placed in a script that issues the Render command.\n\u2022 Perform timed Maya rendering tests with Xgrid.\nPart of this should compare the rendering times for Power Mac G4's and G5's.\n6.\nCONCLUSION\nGrid computing continues to advance.\nRecently, the IT industry has witnessed the emergence of numerous types of contemporary grid applications in addition 
to the traditional grid framework for compute-intensive applications.\nFor instance, peer-to-peer applications such as Kazaa are based on storage grids that do not share processing power but instead use an elegant protocol to swap files between systems.\nAlthough on our campuses we discourage students from using peer-to-peer applications for music sharing, the same protocol can be used in applications such as decision support and data mining.\nThe National Virtual Collaboratory grid project [13] will link earthquake researchers across the U.S. with computing resources, allowing them to share extremely large data sets, research equipment, and work together as virtual teams over the Internet.\nThere is an assortment of new grid players in the IT world expanding the grid computing model and advancing the grid technology to the next level.\nSAP [14] is piloting a project to grid-enable SAP ERP applications, Dell [15] has partnered with Platform Computing to consolidate computing resources and provide grid-enabled systems for compute-intensive applications, Oracle has integrated support for grid computing in their 10g release [16], United Devices [17] offers a hosting service for grid-on-demand, and Sun Microsystems continues its research and development of Sun's N1 Grid engine [18], which combines grid and clustering platforms.\nSimply put, grid computing is up and coming.\nThe potential benefits of grid computing for higher education are colossal, while the implementation costs are low.\nToday, it would be difficult to identify an application with as high a return on investment as grid computing in information technology divisions in higher education institutions.\nIt is a mistake to overlook this technology with such a high payback.","keyphrases":["xgrid","design","visual art","design educ","mac os x","oper system","high-end graphic","nonlinear video edit","anim","multimedia","web product","digit video applic","render","xgrid environ","grid 
comput","larg-scale integr of technolog","macintosh os x","cluster","highperform comput","rendezv"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","M","U","M","U"]} {"id":"J-57","title":"Marginal Contribution Nets: A Compact Representation Scheme for Coalitional Games","abstract":"We present a new approach to representing coalitional games based on rules that describe the marginal contributions of the agents. This representation scheme captures characteristics of the interactions among the agents in a natural and concise manner. We also develop efficient algorithms for two of the most important solution concepts, the Shapley value and the core, under this representation. The Shapley value can be computed in time linear in the size of the input. The emptiness of the core can be determined in time exponential only in the treewidth of a graphical interpretation of our representation.","lvl-1":"Marginal Contribution Nets: A Compact Representation Scheme for Coalitional Games \u2217 Samuel Ieong \u2020 Computer Science Department Stanford University Stanford, CA 94305 sieong@stanford.edu Yoav Shoham Computer Science Department Stanford University Stanford, CA 94305 shoham@stanford.edu ABSTRACT We present a new approach to representing coalitional games based on rules that describe the marginal contributions of the agents.\nThis representation scheme captures characteristics of the interactions among the agents in a natural and concise manner.\nWe also develop efficient algorithms for two of the most important solution concepts, the Shapley value and the core, under this representation.\nThe Shapley value can be computed in time linear in the size of the input.\nThe emptiness of the core can be determined in time exponential only in the treewidth of a graphical interpretation of our representation.\nCategories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent systems; J.4 [Social and Behavioral Sciences]: Economics; 
F.2 [Analysis of Algorithms and Problem Complexity] General Terms Algorithms, Economics 1.\nINTRODUCTION Agents can often benefit by coordinating their actions.\nCoalitional games capture these opportunities for coordination by explicitly modeling the ability of the agents to take joint actions as primitives.\nAs an abstraction, coalitional games assign a payoff to each group of agents in the game.\nThis payoff is intended to reflect the payoff the group of agents can secure for themselves regardless of the actions of the agents not in the group.\nThese choices of primitives are in contrast to those of non-cooperative games, in which agents are modeled independently and their payoffs depend critically on the actions chosen by the other agents.\n1.1 Coalitional Games and E-Commerce Coalitional games have appeared in the context of e-commerce.\nIn [7], Kleinberg et al. use coalitional games to study recommendation systems.\nIn their model, each individual knows about a certain set of items, is interested in learning about all items, and benefits from finding out about them.\nThe payoff to a group of agents is the total number of distinct items known by its members.\nGiven this coalitional game setting, Kleinberg et al. 
compute what the private information of each agent is worth to the system using the solution concept of the Shapley value (the definition can be found in section 2).\nThese values can then be used to determine how much each agent should receive for participating in the system.\nAs another example, consider the economics behind supply chain formation.\nThe increased use of the Internet as a medium for conducting business has decreased the costs for companies to coordinate their actions, and coalitional games are therefore a good model for studying the supply chain problem.\nSuppose that each manufacturer purchases his raw materials from some set of suppliers, and that the suppliers offer higher discounts for larger purchases.\nThe decrease in communication costs will let manufacturers find others interested in the same set of suppliers more cheaply, and facilitates the formation of coalitions to bargain with the suppliers.\nDepending on the set of suppliers and how much from each supplier each coalition purchases, we can assign payoffs to the coalitions depending on the discount each receives.\nThe resulting game can be analyzed using coalitional game theory, and we can answer questions such as the stability of coalitions, and how to fairly divide the benefits among the participating manufacturers.\nA similar problem, combinatorial coalition formation, has previously been studied in [8].\n1.2 Evaluation Criteria for Coalitional Game Representation To capture the coalitional games described above and perform computations on them, we must first find a representation for these games.\nThe na\u00efve solution is to enumerate the payoffs to each set of agents, therefore requiring space exponential in the number of agents in the game.\nFor the two applications described, the number of agents in the system can easily exceed a hundred; this na\u00efve approach will not be scalable to such problems.\nTherefore, it is critical to find good representation schemes for 
coalitional games.\nWe believe that the quality of a representation scheme should be evaluated by four criteria.\nExpressivity: the breadth of the class of coalitional games covered by the representation.\nConciseness: the space requirement of the representation.\nEfficiency: the efficiency of the algorithms we can develop for the representation.\nSimplicity: the ease of use of the representation by users of the system.\nThe ideal representation should be fully expressive, i.e., it should be able to represent any coalitional game, use as little space as possible, have efficient algorithms for computation, and be easy to use.\nThe goal of this paper is to develop a representation scheme that has properties close to the ideal representation.\nUnfortunately, given that the number of degrees of freedom of coalitional games is O(2^n), not all games can be represented concisely using a single scheme due to information-theoretic constraints.\nFor any given class of games, one may be able to develop a representation scheme that is tailored and more compact than a general scheme.\nFor example, for the recommendation system game, a highly compact representation would be one that simply states which agents know of which products, and lets the algorithms that operate on the representation compute the values of coalitions appropriately.\nFor some problems, however, there may not be efficient algorithms for customized representations.\nBy having a general representation and efficient algorithms that go with it, the representation will be useful as a prototyping tool for studying new economic situations.\n1.3 Previous Work The question of coalitional game representation has only been sparsely explored in the past [2, 3, 4].\nIn [4], Deng and Papadimitriou focused on the complexity of different solution concepts on coalitional games defined on graphs.\nWhile the representation is compact, it is not fully expressive.\nIn [2], Conitzer and Sandholm looked into the problem of 
determining the emptiness of the core in superadditive games.\nThey developed a compact representation scheme for such games, but again the representation is not fully expressive.\nIn [3], Conitzer and Sandholm developed a fully expressive representation scheme based on decomposition.\nOur work extends and generalizes the representation schemes in [3, 4] by decomposing the game into a set of rules that assign marginal contributions to groups of agents.\nWe will give a more detailed review of these papers in section 2.2 after covering the technical background.\n1.4 Summary of Our Contributions \u2022 We develop the marginal contribution networks representation, a fully expressive representation scheme whose size scales according to the complexity of the interactions among the agents.\nWe believe that the representation is also simple and intuitive.\n\u2022 We develop an algorithm for computing the Shapley value of coalitional games under this representation that runs in time linear in the size of the input.\n\u2022 Under the graphical interpretation of the representation, we develop an algorithm for determining whether a payoff vector is in the core, and for determining the emptiness of the core, in time exponential only in the treewidth of the graph.\n2.\nPRELIMINARIES In this section, we will briefly review the basics of coalitional game theory and its two primary solution concepts, the Shapley value and the core.1 We will also review previous work on coalitional game representation in more detail.\nThroughout this paper, we will assume that the payoff to a group of agents can be freely distributed among its members.\nThis assumption is often known as the transferable utility assumption.\n2.1 Technical Background We can represent a coalitional game with transferable utility by the pair (N, v), where \u2022 N is the set of agents; and \u2022 v : 2^N \u2192 R is a function that maps each group of agents S \u2286 N to a real-valued payoff.\nThis representation is known as 
the characteristic form.\nAs there are exponentially many subsets, it will take space exponential in the number of agents to describe a coalitional game.\nAn outcome in a coalitional game specifies the utilities the agents receive.\nA solution concept assigns to each coalitional game a set of reasonable outcomes.\nDifferent solution concepts attempt to capture in some way outcomes that are stable and\/or fair.\nTwo of the best-known solution concepts are the Shapley value and the core.\nThe Shapley value is a normative solution concept.\nIt prescribes a fair way to divide the gains from cooperation when the grand coalition (i.e., N) is formed.\nThe division of payoff to agent i is the average marginal contribution of agent i over all possible permutations of the agents.\nFormally, let $\phi_i(v)$ denote the Shapley value of $i$ under characteristic function $v$; then2\n$$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{s!\,(n-s-1)!}{n!} \bigl( v(S \cup \{i\}) - v(S) \bigr) \quad (1)$$\nThe Shapley value is a solution concept that satisfies many nice properties, and has been studied extensively in the economic and game-theoretic literature.\nIt has a very useful axiomatic characterization.\nEfficiency (EFF) A total of $v(N)$ is distributed to the agents, i.e., $\sum_{i \in N} \phi_i(v) = v(N)$.\nSymmetry (SYM) If agents i and j are interchangeable, then $\phi_i(v) = \phi_j(v)$.\n1 The materials and terminology are based on the textbooks by Mas-Colell et al. 
[9] and Osborne and Rubinstein [11].\n2 As a notational convenience, we will use the lower-case letter to represent the cardinality of a set denoted by the corresponding upper-case letter.\nDummy (DUM) If agent i is a dummy player, i.e., his marginal contribution to all groups S is the same, $\phi_i(v) = v(\{i\})$.\nAdditivity (ADD) For any two coalitional games v and w defined over the same set of agents N, $\phi_i(v + w) = \phi_i(v) + \phi_i(w)$ for all $i \in N$, where the game $v + w$ is defined as $(v + w)(S) = v(S) + w(S)$ for all $S \subseteq N$.\nWe will refer to these axioms later in our proof of correctness of the algorithm for computing the Shapley value under our representation in section 4.\nThe core is another major solution concept for coalitional games.\nIt is a descriptive solution concept that focuses on outcomes that are stable.\nStability under the core means that no set of players can jointly deviate to improve their payoffs.\nFormally, let $x(S)$ denote $\sum_{i \in S} x_i$.\nAn outcome $x \in \mathbb{R}^n$ is in the core if\n$$\forall S \subseteq N \quad x(S) \geq v(S) \quad (2)$$\nThe core was one of the first proposed solution concepts for coalitional games, and has been studied in detail.\nAn important question for a given coalitional game is whether the core is empty.\nIn other words, whether there is any outcome that is stable relative to group deviation.\nFor a game to have a non-empty core, it must satisfy the property of balancedness, defined as follows.\nLet $1_S \in \mathbb{R}^n$ denote the characteristic vector of S, given by $(1_S)_i = 1$ if $i \in S$ and $(1_S)_i = 0$ otherwise.\nLet $(\lambda_S)_{S \subseteq N}$ be a set of weights such that each $\lambda_S$ is in the range between 0 and 1.\nThis set of weights, $(\lambda_S)_{S \subseteq N}$, is a balanced collection if for all $i \in N$, $\sum_{S \subseteq N} \lambda_S (1_S)_i = 1$.\nA game is balanced if, for all balanced collections of weights,\n$$\sum_{S \subseteq N} \lambda_S v(S) \leq v(N) \quad (3)$$\nBy the Bondareva-Shapley theorem, the core of a coalitional game is non-empty if and only if the game is 
balanced. Therefore, we can use linear programming to determine whether the core of a game is empty:

  maximize   Σ_{S⊆N} λ_S v(S)   over λ ∈ R^(2ⁿ)
  subject to Σ_{S⊆N} λ_S (1_S)i = 1  for all i ∈ N
             λ_S ≥ 0  for all S ⊆ N    (4)

If the optimal value of (4) is greater than the value of the grand coalition, then the core is empty. Unfortunately, this program has a number of variables exponential in the number of players in the game, and hence an algorithm that operates directly on this program would be infeasible in practice. In section 5.4, we will describe an algorithm that answers the question of emptiness of the core by working on the dual of this program instead.

2.2 Previous Work Revisited

Deng and Papadimitriou looked into the complexity of various solution concepts on coalitional games played on weighted graphs in [4]. In their representation, the agents are the nodes of the graph, and the value of a set of agents S is the sum of the weights of the edges spanned by them. Notice that this representation is concise, since the space required to specify such a game is O(n²). However, this representation is not general; it is unable to represent interactions among three or more agents. For example, it cannot represent the majority game, in which a group of agents S has value 1 if and only if s > n/2. On the other hand, there is an efficient algorithm for computing the Shapley value of the game, and for determining whether the core is empty under the restriction of positive edge weights. However, in the unrestricted case, determining whether the core is non-empty is coNP-complete.

Conitzer and Sandholm in [2] considered coalitional games that are superadditive. They described a concise representation scheme that only states the value of a coalition if the value is strictly superadditive. More precisely, the semantics of the representation is that for a group of agents S,

  v(S) = max_{{T1, T2, ..., Tk} ∈ Π} Σ_i
v(Ti), where Π is the set of all possible partitions of S. The value v(S) is only explicitly specified for S if v(S) is greater than that of every partitioning of S other than the trivial partition ({S}). While this representation can represent all games that are superadditive, there are coalitional games that it cannot represent. For example, it is unable to represent any game with substitutability among the agents. An example of a game that cannot be represented is the unit game, where v(S) = 1 as long as S ≠ ∅. Under this representation, the authors showed that determining whether the core is non-empty is coNP-complete. In fact, even determining the value of a group of agents is NP-complete.

In a more recent paper, Conitzer and Sandholm described a representation that decomposes a coalitional game into a number of subgames whose sum adds up to the original game [3]. The payoffs in these subgames are then represented by their respective characteristic functions. This scheme is fully general, as the characteristic form is a special case of this representation. For any given game, there may be multiple ways to decompose the game, and the decomposition may influence the computational complexity. For computing the Shapley value, the authors showed that the complexity is linear in the input description; in particular, if the largest subgame (as measured by number of agents) is of size n and the number of subgames is m, then their algorithm runs in O(m2ⁿ) time, where the input size will also be O(m2ⁿ). On the other hand, the problem of determining whether a certain outcome is in the core is coNP-complete.

3. MARGINAL CONTRIBUTION NETS

In this section, we will describe the Marginal Contribution Networks representation scheme. We will show that the idea is flexible, and that we can easily extend it to increase its conciseness. We will also show how we can use this scheme to represent the recommendation game from the introduction. Finally, we will show
that this scheme is fully expressive, and generalizes the representation schemes in [3, 4].

3.1 Rules and Marginal Contribution Networks

The basic idea behind marginal contribution networks (MC-nets) is to represent coalitional games using sets of rules. The rules in MC-nets have the following syntactic form:

  Pattern → value

A rule is said to apply to a group of agents S if S meets the requirement of the Pattern. In the basic scheme, these patterns are conjunctions of agents, and S meets the requirement of a given pattern if S is a superset of it. The value of a group of agents is defined to be the sum over the values of all rules that apply to the group. For example, if the set of rules is

  {a ∧ b} → 5
  {b} → 2

then v({a}) = 0, v({b}) = 2, and v({a, b}) = 5 + 2 = 7.

MC-nets is a very flexible representation scheme, and can be extended in different ways. One simple way to extend it and increase its conciseness is to allow a wider class of patterns in the rules. A pattern that we will use throughout the remainder of the paper is one that applies only in the absence of certain agents. This is useful for expressing concepts such as substitutability or default values. Formally, we express such patterns by

  {p1 ∧ p2 ∧ ... ∧ pm ∧ ¬n1 ∧ ¬n2 ∧ ...
∧ ¬nn}

which has the semantics that such a rule applies to a group S only if {p1, ..., pm} ⊆ S and {n1, ..., nn} ∩ S = ∅. We will call the pi in the above pattern the positive literals, and the nj the negative literals. Note that if the pattern of a rule consists solely of negative literals, we consider the empty set of agents to satisfy such a pattern as well, and hence v(∅) may be non-zero in the presence of negative literals.

To demonstrate the increase in conciseness of representation, consider the unit game described in section 2.2. To represent such a game without using negative literals, we need 2ⁿ − 1 rules for n players, one for each non-empty subset: a rule of value 1 for each individual agent, a rule of value −1 for each pair of agents to counter the double-counting, a rule of value 1 for each triplet of agents, etc., in the style of the inclusion-exclusion principle. On the other hand, using negative literals, we only need n rules: value 1 for the first agent, value 1 for the second agent in the absence of the first agent, value 1 for the third agent in the absence of the first two agents, etc. The representational savings can be exponential in the number of agents.

Given a game represented as an MC-net, we can interpret the set of rules that make up the game as a graph, which we call the agent graph. The nodes of the graph represent the agents in the game, and for each rule in the MC-net, we connect all the agents in the rule together and assign a value to the clique formed by that set of agents. Notice that to accommodate negative literals, we need to annotate the clique appropriately. This alternative view of MC-nets will be useful in our algorithm for Core-Membership in section 5.

We would like to end our discussion of the representation scheme by mentioning a trade-off between the expressiveness of patterns and the space required to represent them. To represent a coalitional game in characteristic form, one
would need to specify all 2ⁿ − 1 values. There is no overhead on top of that, since there is a natural ordering of the groups. For MC-nets, however, specification of the rules requires specifying both the patterns and the values. The patterns, if not represented compactly, may end up overwhelming the savings from having fewer values to specify. The space required for the patterns also leads to a trade-off between the expressiveness of the allowed patterns and the simplicity of representing them. However, we believe that for most naturally arising games, there should be sufficient structure in the problem that our representation achieves a net saving over the characteristic form.

3.2 Example: Recommendation Game

As an example, we will use an MC-net to represent the recommendation game discussed in the introduction. For each product, as the benefit of knowing about the product counts only once for each group, we need to capture substitutability among the agents. This can be captured by a scaled unit game. Suppose the value of the knowledge about product i is v_i, and there are n_i agents, denoted by {x_i^j}, who know about the product; the game for product i can then be represented by the following rules:

  {x_i^1} → v_i
  {x_i^2 ∧ ¬x_i^1} → v_i
  ...
  {x_i^{n_i} ∧ ¬x_i^{n_i−1} ∧ ··· ∧ ¬x_i^1} → v_i

The entire game can then be built up from the sets of rules for each product. The space requirement is O(mn*), where m is the number of products in the system, and n* is the maximum number of agents who know of the same product.

3.3 Representation Power

We will discuss the expressiveness and conciseness of our representation scheme and compare it with the previous works in this subsection.

Proposition 1. Marginal contribution networks constitute a fully expressive representation scheme.

Proof. Consider an arbitrary coalitional game N, v in characteristic form representation. We can construct a set of rules to describe this game by starting from the singleton sets and building up the set of rules. For any singleton set {i}, we create a rule {i} → v({i}). For any pair of agents {i, j}, we create a rule {i ∧ j} → v({i, j}) − v({i}) − v({j}). We can continue to build up rules in a manner similar to the inclusion-exclusion principle. Since the game is arbitrary, MC-nets are fully expressive.

Using the construction outlined in the proof, we can show that our representation scheme can simulate the multi-issue representation scheme of [3] in almost the same amount of space.

Proposition 2. Marginal contribution networks use at most a linear factor (in the number of agents) more space than multi-issue representation for any game.

Proof. Given a game in multi-issue representation, we start by describing each of the subgames, which are represented in characteristic form in [3], with a set of rules. We then build up the grand game by including all the rules from the subgames. Note that our representation may require a space larger by a linear factor due to the need to describe the patterns for each rule. On the other hand, our approach may have fewer than an exponential number of rules for each subgame, depending on the structure
of these subgames, and therefore may be more concise than multi-issue representation.

On the other hand, there are games that require exponentially more space to represent under the multi-issue scheme than under our scheme.

Proposition 3. Marginal contribution networks are exponentially more concise than multi-issue representation for certain games.

Proof. Consider a unit game over all the agents N. As explained in section 3.1, this game can be represented in linear space using MC-nets with negative literals. However, as there is no decomposition of this game into smaller subgames, it requires space O(2ⁿ) to represent this game under the multi-issue representation.

Under the agent graph interpretation of MC-nets, we can see that MC-nets are a generalization of the graphical representation in [4], namely from weighted graphs to weighted hypergraphs.

Proposition 4. Marginal contribution networks can represent any game in graphical form (under [4]) in the same amount of space.

Proof. Given a game in graphical form, G, for each edge (i, j) with weight wij in the graph, we create a rule {i ∧ j} → wij. Clearly this takes exactly the same space as the size of G, and by the additive semantics of the rules, it represents the same game as G.

4. COMPUTING THE SHAPLEY VALUE

Given an MC-net, we have a simple algorithm to compute the Shapley value of the game. Considering each rule as a separate game, we start by computing the Shapley value of the agents for each rule. For each agent, we then sum up the Shapley values of that agent over all the rules. We first show that this final summing process correctly computes the Shapley value of the agents.

Proposition 5. The Shapley value of an agent in a marginal contribution network is equal to the sum of the Shapley values of that agent over each rule.

Proof. For any group S, under the MC-nets representation, v(S) is defined to be the sum over the values of all the rules that apply to S.
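Proposition 5 can be checked numerically on the two-rule example of section 3.1 by comparing the brute-force Shapley value (the permutation average from section 2) of the summed game against the sum of the per-rule Shapley values. The sketch below is our own illustration; the function and variable names are not from the paper.

```python
from itertools import permutations
from math import factorial

def shapley(agents, v):
    # Brute-force Shapley value: average marginal contribution of each
    # agent over all orderings of the agents (exponential, for checking only).
    phi = {a: 0.0 for a in agents}
    for order in permutations(agents):
        coalition = set()
        for a in order:
            phi[a] += v(coalition | {a}) - v(coalition)
            coalition.add(a)
    n_fact = factorial(len(agents))
    return {a: phi[a] / n_fact for a in agents}

# The two-rule MC-net from section 3.1: {a ∧ b} -> 5 and {b} -> 2.
rule1 = lambda S: 5 if {"a", "b"} <= S else 0
rule2 = lambda S: 2 if "b" in S else 0
game = lambda S: rule1(S) + rule2(S)

# The Shapley value of the summed game equals the sum of per-rule values (ADD).
whole = shapley(["a", "b"], game)
parts = {a: shapley(["a", "b"], rule1)[a] + shapley(["a", "b"], rule2)[a]
         for a in ["a", "b"]}
assert whole == parts == {"a": 2.5, "b": 4.5}
```

Here rule1 alone splits its value 5 equally (2.5 each, by SYM), and rule2 gives 2 to b alone, so the sums are 2.5 for a and 4.5 for b.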
Therefore, considering each rule as a game, by the (ADD) axiom discussed in section 2, the Shapley value of the game created from aggregating all the rules is equal to the sum of the Shapley values over the rules.

The remaining question is how to compute the Shapley values of the rules. We can separate the analysis into two cases, one for rules with only positive literals and one for rules with mixed literals. For rules that have only positive literals, the Shapley value of each agent is v/m, where v is the value of the rule and m is the number of agents in the rule. This is a direct consequence of the (SYM) axiom of the Shapley value, as the agents in such a rule are indistinguishable from one another.

For rules that have both positive and negative literals, we can consider the positive and the negative literals separately. A given positive literal i contributes only if i occurs in a given permutation after the rest of the positive literals but before any of the negative literals. Formally, let φi denote the Shapley value of i, p denote the cardinality of the positive set, and n denote the cardinality of the negative set; then

  φi = [(p − 1)! n! / (p + n)!] v = v / (p · C(p+n, n))

where C(a, b) denotes the binomial coefficient. A given negative literal j is responsible for cancelling the application of the rule when all positive literals come before the negative literals in the ordering, and j is the first among the negative literals. Therefore,

  φj = [p! (n − 1)! / (p + n)!] (−v) = −v / (n · C(p+n, p))

By the (SYM) axiom, all positive literals have the value φi and all negative literals have the value φj. Note that the sum over all agents in a rule with mixed literals is 0. This is to be expected, as these rules contribute 0 to the grand coalition. The fact that these rules have no effect on the grand coalition may appear odd at first, but this is because the purpose of such rules is to define the values of coalitions smaller than the grand
coalition.

In terms of computational complexity, given that the Shapley value of any agent in a given rule can be computed in time linear in the size of the pattern of the rule, the total running time of the algorithm for computing the Shapley value of the game is linear in the size of the input.

5. ANSWERING CORE-RELATED QUESTIONS

There are a few different but related computational problems associated with the solution concept of the core. We will focus on the following two problems:

Definition 1. (Core-Membership) Given a coalitional game and a payoff vector x, determine if x is in the core.

Definition 2. (Core-Non-Emptiness) Given a coalitional game, determine if the core is non-empty.

In the rest of the section, we will first show that these two problems are coNP-complete and coNP-hard respectively, and discuss some complexity considerations about these problems. We will then review the main ideas of tree decomposition, as it will be used extensively in our algorithm for Core-Membership. Next, we will present the algorithm for Core-Membership, and show that the algorithm runs in polynomial time for graphs of bounded treewidth. We end by extending this algorithm to answer the question of Core-Non-Emptiness in polynomial time for graphs of bounded treewidth.

5.1 Computational Complexity

The hardness of Core-Membership and Core-Non-Emptiness follows directly from the hardness results of games over weighted graphs in [4].

Proposition 6. Core-Membership for games represented as marginal contribution networks is coNP-complete.

Proof. Core-Membership in MC-nets is in the class coNP, since any set of agents S for which v(S) > x(S) serves as a certificate that x does not belong to the core. As for its hardness, given any instance of Core-Membership for a game in graphical form of [4], we can encode the game in exactly the same space as an MC-net due to Proposition 4. Since Core-Membership for games in graphical form is coNP-complete, Core-Membership
in MC-nets is coNP-hard.

Proposition 7. Core-Non-Emptiness for games represented as marginal contribution networks is coNP-hard.

Proof. The same hardness argument between games in graphical form and MC-nets holds for the problem of Core-Non-Emptiness.

We do not currently know of a certificate showing that Core-Non-Emptiness is in the class coNP. Note that the obvious certificate of a balanced set of weights based on the Bondareva-Shapley theorem is exponential in size. In [4], Deng and Papadimitriou showed the coNP-completeness of Core-Non-Emptiness via a combinatorial characterization, namely that the core is non-empty if and only if there is no negative cut in the graph. In MC-nets, however, there need not be a negative hypercut in the graph for the core to be empty, as demonstrated by the following game (N = {1, 2, 3, 4}):

  v(S) = 1 if S = {1, 2, 3, 4}
  v(S) = 3/4 if S = {1, 2}, {1, 3}, {1, 4}, or {2, 3, 4}
  v(S) = 0 otherwise    (5)

Applying the Bondareva-Shapley theorem, if we let λ12 = λ13 = λ14 = 1/3 and λ234 = 2/3, this set of weights demonstrates that the game is not balanced, and hence the core is empty. On the other hand, this game can be represented with MC-nets as follows (weights on hyperedges):

  w({1, 2}) = w({1, 3}) = w({1, 4}) = 3/4
  w({1, 2, 3}) = w({1, 2, 4}) = w({1, 3, 4}) = −6/4
  w({2, 3, 4}) = 3/4
  w({1, 2, 3, 4}) = 10/4

No matter how the set is partitioned, the sum over the weights of the hyperedges in the cut is always non-negative.

To overcome the computational hardness of these problems, we have developed algorithms that are based on tree decomposition techniques. For Core-Membership, our algorithm runs in time exponential only in the treewidth of the agent graph. Thus, for graphs of small treewidth, such as trees, we have a tractable solution to determine if a payoff vector is in the core. By using this procedure as a separation oracle, i.e., a procedure for returning the inequality violated by a candidate
solution, when solving a linear program related to Core-Non-Emptiness with the ellipsoid method, we can obtain a polynomial-time algorithm for Core-Non-Emptiness for graphs of bounded treewidth.

5.2 Review of Tree Decomposition

As our algorithm for Core-Membership relies heavily on tree decomposition, we will first briefly review the main ideas in tree decomposition and treewidth.³

Definition 3. A tree decomposition of a graph G = (V, E) is a pair (X, T), where T = (I, F) is a tree and X = {Xi | i ∈ I} is a family of subsets of V, one for each node of T, such that

  • ∪_{i∈I} Xi = V;
  • for all edges (v, w) ∈ E, there exists an i ∈ I with v ∈ Xi and w ∈ Xi; and
  • (Running Intersection Property) for all i, j, k ∈ I: if j is on the path from i to k in T, then Xi ∩ Xk ⊆ Xj.

The treewidth of a tree decomposition is the maximum cardinality over all sets in X, less one. The treewidth of a graph is the minimum treewidth over all tree decompositions of the graph. Given a tree decomposition, we can convert it into a nice tree decomposition of the same treewidth, and of size linear in that of T.

Definition 4. A tree decomposition T is nice if T is rooted and has four types of nodes:

  Leaf nodes i are leaves of T with |Xi| = 1.
  Introduce nodes i have one child j such that Xi = Xj ∪ {v} for some v ∈ V.
  Forget nodes i have one child j such that Xi = Xj \ {v} for some v ∈ Xj.
  Join nodes i have two children j and k with Xi = Xj = Xk.

An example of a (partial) nice tree decomposition, together with a classification of the different types of nodes, is in Figure 1. In the following section, we will refer to nodes in the tree decomposition as nodes, and nodes in the agent graph as agents.

5.3 Algorithm for Core Membership

Our algorithm for Core-Membership takes as input a nice tree decomposition T of the agent graph and a payoff vector x.
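The conditions of Definition 3 can be made concrete with a small sketch. The helper below is our own illustration (not from the paper), restricted to the easy special case where the tree T is a path, so the running intersection property reduces to requiring that the bags containing each agent form a contiguous run.

```python
def is_path_decomposition(bags, graph_edges, vertices):
    # Check Definition 3 when the tree T is a path, given as an ordered
    # list of bags (sets of agents). Illustrative helper, not optimized.
    # 1. Every vertex appears in some bag.
    if not all(any(v in bag for bag in bags) for v in vertices):
        return False
    # 2. Every edge of the graph is contained in some bag.
    if not all(any({u, w} <= bag for bag in bags) for (u, w) in graph_edges):
        return False
    # 3. Running intersection: the bags containing v are contiguous.
    for v in vertices:
        idx = [i for i, bag in enumerate(bags) if v in bag]
        if idx != list(range(idx[0], idx[-1] + 1)):
            return False
    return True

# A 4-cycle 1-2-3-4-1 has treewidth 2; two bags of size 3 give width 2.
bags = [{1, 2, 4}, {2, 3, 4}]
assert is_path_decomposition(bags, [(1, 2), (2, 3), (3, 4), (1, 4)],
                             {1, 2, 3, 4})
```

The width of this decomposition is the maximum bag size less one, i.e., 2, matching the treewidth of the cycle.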
By definition, if x belongs to the core, then for all groups S ⊆ N, x(S) ≥ v(S). Therefore, the difference x(S) − v(S) measures how close the group S is to violating the core condition. We call this difference the excess of group S.

Definition 5. The excess of a coalition S, e(S), is defined as x(S) − v(S).

A brute-force approach to determine if a payoff vector belongs to the core is to check that the excesses of all groups are non-negative. However, this approach ignores the structure in the agent graph that allows an algorithm to infer that certain groups have non-negative excesses due to the excesses computed elsewhere in the graph. Tree decomposition is the key to taking advantage of such inferences in a structured way.

³ This is based largely on the materials from a survey paper by Bodlaender [1].

[Figure 1: Example of a (partial) nice tree decomposition, showing leaf, introduce, forget, and join nodes with their sets: Xi = {1, 3, 4}, Xj = {1, 4}, Xk = {1, 4}, Xl = {1, 4}, Xm = {1, 2, 4}, Xn = {4}.]

For now, let us focus on rules with positive literals. Suppose we have already checked that the excesses of all sets R ⊆ U are non-negative, and we would like to check whether the addition of an agent i to the set U creates a group with negative excess. A naïve solution is to compute the excesses of all sets that include i. The excess of the group (R ∪ {i}) for any group R can be computed as

  e(R ∪ {i}) = e(R) + xi − v(c)    (6)

where c is the cut between R and i, and v(c) is the sum of the weights of the edges in the cut. However, suppose that from the tree decomposition we know that i is only connected to a subset of U, say S, which we will call the entry set to U.
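For reference, the brute-force check described above can be written directly from Definition 5. The sketch below (names are ours) applies it to the four-agent game (5) from section 5.1, whose core is empty; here it shows that the equal split of v(N) is already blocked by the coalition {1, 2}.

```python
from itertools import combinations

def min_excess(agents, v, x):
    # Minimum excess e(S) = x(S) - v(S) over all non-empty coalitions.
    # x is in the core iff this minimum is non-negative (2^n sets checked).
    return min(sum(x[a] for a in S) - v(frozenset(S))
               for r in range(1, len(agents) + 1)
               for S in combinations(agents, r))

# The four-agent game (5): 3/4 for {1,2}, {1,3}, {1,4}, {2,3,4}; 1 for N.
special = [frozenset(s) for s in ({1, 2}, {1, 3}, {1, 4}, {2, 3, 4})]
v = lambda S: 1 if len(S) == 4 else (0.75 if S in special else 0)
x = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}  # equal split of v(N) = 1

# {1, 2} has excess 0.5 - 0.75 = -0.25 < 0, so the equal split is not in
# the core (indeed the core of this game is empty, as shown in section 5.1).
assert min_excess([1, 2, 3, 4], v, x) == -0.25
```

The tree-decomposition algorithm developed next avoids exactly this exhaustive enumeration over all 2ⁿ coalitions.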
Ideally, because i does not share any edges with members of Ū = (U \ S), we would hope that an algorithm could take advantage of this structure by checking only sets that are subsets of (S ∪ {i}). This computational saving may be possible, since (xi − v(c)) in the update equation (6) does not depend on Ū. However, we cannot simply ignore Ū, as members of Ū may still influence the excesses of groups that include agent i through the group S. Specifically, if there exists a group T ⊃ S such that e(T) < e(S), then even when e(S ∪ {i}) is non-negative, e(T ∪ {i}) may be negative. In other words, the excess available at S may have been drained away by T. This motivates the definition of the reserve of a group.

Definition 6. The reserve of a coalition S relative to a coalition U is the minimum excess over all coalitions between S and U, i.e., over all T with S ⊆ T ⊆ U. We denote this value by r(S, U). We will refer to the group T that attains the minimum excess as arg r(S, U). We will also call U the limiting set of the reserve and S the base set of the reserve.

Our algorithm works by keeping track of the reserves of all non-empty subsets that can be formed by the agents of a node, at each of the nodes of the tree decomposition. Starting from the leaves of the tree and working towards the root, at each node i our algorithm computes the reserves of all groups S ⊆ Xi, limited by the set of agents in the subtree rooted at i, Ti, except those in (Xi \ S). The agents in (Xi \ S) are excluded to ensure that S is an entry set; specifically, S is the entry set to ((Ti \ Xi) ∪ S).

To accommodate negative literals, we need to make two adjustments. Firstly, the cut between an agent m and a set S at node i now refers to the cut among agent m, the set S, and the set ¬(Xi \ S), and its value must be computed accordingly. Also, when an agent m is introduced to a group at an introduce node, we
will also need to consider the change in the reserves of groups that do not include m, due to possible cuts involving ¬m and the group.

As an example of the reserve values we keep track of at a tree node, consider node i of the tree in Figure 1. At node i, we will keep track of the following:

  r({1}, {1, 2, ...})    r({3}, {2, 3, ...})    r({4}, {2, 4, ...})
  r({1, 3}, {1, 2, 3, ...})    r({1, 4}, {1, 2, 4, ...})    r({3, 4}, {2, 3, 4, ...})
  r({1, 3, 4}, {1, 2, 3, 4, ...})

where the dots (...) refer to the agents rooted under node m. For notational convenience, we will use ri(S) to denote r(S, U) at node i, where U is the set of agents in the subtree rooted at node i excluding the agents in (Xi \ S). We sometimes refer to these values as the r-values of a node. The details of the r-value computations are in Algorithm 1.

To determine whether the payoff vector x is in the core, during the r-value computation at each node we can check whether all of the r-values are non-negative. If this is so for all nodes in the tree, the payoff vector x is in the core. The correctness of the algorithm is due to the following proposition.

Proposition 8. The payoff vector x is not in the core if and only if the r-value at some node i for some group S is negative.

Proof. (⇐) If the reserve at some node i for some group S is negative, then there exists a coalition T for which e(T) = x(T) − v(T) < 0; hence x is not in the core.

(⇒) Suppose x is not in the core; then there exists some group R∗ such that e(R∗) < 0. Let Xroot be the set of agents at the root. Consider any set S ⊆ Xroot: rroot(S) has base set S and limiting set ((N \ Xroot) ∪ S). The union over all of these ranges includes every set U for which U ∩ Xroot ≠ ∅. Therefore, if R∗ is not disjoint from Xroot, the r-value for some group at the root is negative. If R∗ is disjoint from Xroot, consider the forest {Ti} resulting from the removal of all tree nodes that include agents in
Xroot.

Algorithm 1: Subprocedures for Core-Membership

  Leaf-Node(i):
    ri(Xi) ← e(Xi)

  Introduce-Node(i):
    j ← child of i
    m ← Xi \ Xj    {the introduced agent}
    for all S ⊆ Xj, S ≠ ∅ do
      C ← all hyperedges in the cut of m, S, and ¬(Xi \ S)
      ri(S ∪ {m}) ← rj(S) + xm − v(C)
      C ← all hyperedges in the cut of ¬m, S, and ¬(Xi \ S)
      ri(S) ← rj(S) − v(C)
    end for
    ri({m}) ← e({m})

  Forget-Node(i):
    j ← child of i
    m ← Xj \ Xi    {the forgotten agent}
    for all S ⊆ Xi, S ≠ ∅ do
      ri(S) ← min(rj(S), rj(S ∪ {m}))
    end for

  Join-Node(i):
    {j, k} ← {left, right} children of i
    for all S ⊆ Xi, S ≠ ∅ do
      ri(S) ← rj(S) + rk(S) − e(S)
    end for

By the running intersection property, the sets of agents in the trees Ti are disjoint. Thus, if the set R∗ = ∪i Si for some Si in Ti, then e(R∗) = Σi e(Si) < 0 implies that some group S∗i has negative excess as well. Therefore, we only need to check the r-values of the nodes on the individual trees in the forest. But for each tree in the forest, we can apply the same argument restricted to the agents in the tree. In the base case, we have the leaf nodes of the original tree decomposition, say for agent i: if R∗ = {i}, then r({i}) = e({i}) < 0. Therefore, by induction, if e(R∗) < 0, some reserve at some node is negative.

We will next explain the intuition behind the correctness of the computations of the r-values at the tree nodes. A detailed proof of correctness of these computations can be found in the appendix under Lemmas 1 and 2.

Proposition 9. The procedures in Algorithm 1 correctly compute the r-values at each of the tree nodes.

Proof. (Sketch) We perform a case analysis over the four types of tree nodes in a nice tree decomposition.

Leaf nodes (i): The only reserve value to be computed is ri(Xi), which equals r(Xi, Xi), and
therefore it is just the excess of the group Xi.

Forget nodes (i with child j): Let m be the forgotten agent. For any subset S ⊆ Xi, arg ri(S) must be chosen from among the groups between S and S ∪ {m}; hence we take the lower of the two r-values at node j.

Introduce nodes (i with child j): Let m be the introduced agent. For any subset T ⊆ Xi that includes m, let S denote (T \ {m}). By the running intersection property, there are no rules that involve m and agents of the subtree rooted at node i except those involving m and agents in Xi. As both the base set and the limiting set of the r-values of node j and node i differ by {m}, for any group V that lies between the base set and the limiting set of node i, the excess of group V differs by a constant amount from that of the corresponding group (V \ {m}) at node j. Therefore, the set arg ri(T) equals the set arg rj(S) ∪ {m}, and ri(T) = rj(S) + xm − v(cut), where v(cut) is the value of the rules in the cut between m and S. For any subset S ⊂ Xi that does not include m, we need to consider the values of the rules that include ¬m as a literal in the pattern. Also, when computing the reserve, the payoff xm will not contribute to group S.
Therefore, together with the running intersection property as argued above, we can show that ri(S) = rj(S) − v(cut).

Join nodes (i with left child j and right child k): For any given set S ⊆ Xi, consider the r-values of that set at j and k. If arg rj(S) or arg rk(S) includes agents not in S, then arg rj(S) and arg rk(S) are disjoint from each other outside of S, due to the running intersection property. Therefore, we can decompose arg ri(S) into three sets: (arg rj(S) \ S) on the left, S in the middle, and (arg rk(S) \ S) on the right. The reserve rj(S) covers the excesses on the left and in the middle, whereas the reserve rk(S) covers those on the right and in the middle, so the excess in the middle is double-counted. We adjust for the double-counting by subtracting the excess in the middle, e(S), from the sum of the two reserves rj(S) and rk(S).

Finally, note that each step in the computation of the r-values of each node i takes time at most exponential in the size of Xi; hence the algorithm runs in time exponential only in the treewidth of the graph.

5.4 Algorithm for Core Non-emptiness

We can extend the algorithm for Core-Membership into an algorithm for Core-Non-Emptiness. As described in section 2, whether the core is empty can be checked using the optimization program based on the balancedness condition (3). Unfortunately, that program has an exponential number of variables. On the other hand, the dual of the program has only n variables, and can be written as follows:

  minimize   Σ_{i=1}^{n} xi   over x ∈ Rⁿ
  subject to x(S) ≥ v(S)  for all S ⊆ N    (7)

By strong duality, the optimal value of (7) is equal to the optimal value of (4), the primal program described in section 2. Therefore, by the Bondareva-Shapley theorem, if the optimal value of (7) is greater than v(N), the core is empty. We can solve the dual program using the ellipsoid method, with Core-Membership as a separation oracle, i.e., a procedure for returning a constraint that is violated. Note
that a simple extension to the Core-Membership algorithm allows us to keep track of a set T for which e(T) < 0 during the r-value computation, and hence we can return the inequality for T as the violated constraint. Therefore, Core-Non-Emptiness can run in time polynomial in the running time of Core-Membership, which in turn runs in time exponential only in the treewidth of the graph. Note that when the core is not empty, this program will return an outcome in the core.

6. CONCLUDING REMARKS

We have developed a fully expressive representation scheme for coalitional games whose size depends on the complexity of the interactions among the agents. Our focus on general representation is in contrast to the approach taken in [3, 4]. We have also developed an efficient algorithm for the computation of the Shapley value under this representation. While Core-Membership for MC-nets is coNP-complete, we have developed an algorithm for Core-Membership that runs in time exponential only in the treewidth of the agent graph. We have also extended the algorithm to solve Core-Non-Emptiness. Other than the algorithm for Core-Non-Emptiness in [4] under the restriction of non-negative edge weights, and that in [2] for superadditive games when the value of the grand coalition is given, we are not aware of any explicit description of algorithms for core-related problems in the literature.

The work in this paper is related to a number of areas in computer science, especially in artificial intelligence. For example, the graphical interpretation of MC-nets is closely related to Markov random fields (MRFs) of the Bayes nets community. Both address the issue of conciseness of representation by using the combinatorial structure of weighted hypergraphs. In fact, Kearns et al.
first applied these ideas to game theory by introducing a representation scheme derived from Bayes nets to represent non-cooperative games [6]. The representational issues faced in coalitional games are closely related to the problem of expressing valuations in combinatorial auctions [5, 10]. The OR-bid language, for example, is strongly related to superadditivity. The question of the representation power of different patterns is also related to Boolean expression complexity [12]. We believe that with a better understanding of the relationships among these related areas, we may be able to develop more efficient representations and algorithms for coalitional games.

Finally, we would like to end with some ideas for extending the work in this paper. One direction to increase the conciseness of MC-nets is to allow the definition of equivalence classes of agents, similar to the idea of extending Bayes nets to probabilistic relational models. The concept of symmetry is prevalent in games, and the use of classes of agents will allow us to capture symmetry naturally and concisely. This will also address the problem of unappealingly asymmetric representations of symmetric games in our scheme. Along the line of exploiting symmetry, as the agents within the same class are symmetric with respect to each other, we can extend the idea above by allowing functional descriptions of marginal contributions. More concretely, we can specify the value of a rule as dependent on the number of agents of each relevant class. The use of functions will allow concise descriptions of marginal diminishing returns (MDRs). Without the use of functions, the space needed to describe MDRs among n agents in MC-nets is O(n); with the use of functions, the space required can be reduced to O(1).

Another idea to extend MC-nets is to augment the semantics to allow constructs specifying that certain rules cannot be applied simultaneously. This is useful in situations where a certain agent 
represents a type of exhaustible resource, and therefore rules that depend on the presence of the agent should not apply simultaneously. For example, if agent i in the system stands for coal, we can either use it as fuel for a power plant or as input to a steel mill for making steel, but not both at the same time. Currently, to represent such situations, we have to specify rules to cancel out the effects of applications of different rules. The augmented semantics can simplify the representation by specifying when rules cannot be applied together.

7. ACKNOWLEDGMENT

The authors would like to thank Chris Luhrs, Bob McGrew, Eugene Nudelman, and Qixiang Sun for fruitful discussions, and the anonymous reviewers for their helpful comments on the paper.

8. REFERENCES

[1] H. L. Bodlaender. Treewidth: Algorithmic techniques and results. In Proc. 22nd Symp. on Mathematical Foundations of Computer Science, pages 19-36. Springer-Verlag LNCS 1295, 1997.
[2] V. Conitzer and T. Sandholm. Complexity of determining nonemptiness of the core. In Proc. 18th Int. Joint Conf. on Artificial Intelligence, pages 613-618, 2003.
[3] V. Conitzer and T. Sandholm. Computing Shapley values, manipulating value division schemes, and checking core membership in multi-issue domains. In Proc. 19th Nat. Conf. on Artificial Intelligence, pages 219-225, 2004.
[4] X. Deng and C. H. Papadimitriou. On the complexity of cooperative solution concepts. Math. Oper. Res., 19:257-266, May 1994.
[5] Y. Fujishima, K. Leyton-Brown, and Y. Shoham. Taming the computational complexity of combinatorial auctions: Optimal and approximate approaches. In Proc. 16th Int. Joint Conf. on Artificial Intelligence, pages 548-553, 1999.
[6] M. Kearns, M. L. Littman, and S. Singh. Graphical models for game theory. In Proc. 17th Conf. on Uncertainty in Artificial Intelligence, pages 253-260, 2001.
[7] J. Kleinberg, C. H. Papadimitriou, and P. 
Raghavan. On the value of private information. In Proc. 8th Conf. on Theoretical Aspects of Rationality and Knowledge, pages 249-257, 2001.
[8] C. Li and K. Sycara. Algorithms for combinatorial coalition formation and payoff division in an electronic marketplace. Technical report, Robotics Institute, Carnegie Mellon University, November 2001.
[9] A. Mas-Colell, M. D. Whinston, and J. R. Green. Microeconomic Theory. Oxford University Press, New York, 1995.
[10] N. Nisan. Bidding and allocation in combinatorial auctions. In Proc. 2nd ACM Conf. on Electronic Commerce, pages 1-12, 2000.
[11] M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press, Cambridge, Massachusetts, 1994.
[12] I. Wegener. The Complexity of Boolean Functions. John Wiley & Sons, New York, October 1987.

APPENDIX

We will formally show the correctness of the r-value computation in Algorithm 1 for introduce nodes and join nodes.

Lemma 1. The procedure for computing the r-values of introduce nodes in Algorithm 1 is correct.

Proof. Let m be the agent newly introduced at node i. Let U denote the set of agents in the subtree rooted at i. By the running intersection property, all interactions (the hyperedges) between m and U must be in node i. For all S ⊆ X_i with m ∈ S, let R denote ((U \ X_i) ∪ S), and Q denote (R \ {m}). Then

  r_i(S) = r(S, R)
         = min_{T : S ⊆ T ⊆ R} e(T)
         = min_{T : S ⊆ T ⊆ R} x(T) − v(T)
         = min_{T : S ⊆ T ⊆ R} x(T \ {m}) + x_m − v(T \ {m}) − v(cut)
         = min_{T' : S \ {m} ⊆ T' ⊆ Q} e(T') + x_m − v(cut)
         = r_j(S) + x_m − v(cut)

The argument for sets S ⊆ X_i with m ∉ S is symmetric, except that x_m will not contribute to the reserve due to the absence of m. 
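The reserve r(S, R) = min_{T : S ⊆ T ⊆ R} e(T) that these lemmas reason about can be sanity-checked by brute force on tiny games. The sketch below is not the paper's Algorithm 1; it simply enumerates the sets T directly, for a hypothetical 3-agent majority game and a candidate payoff vector chosen for illustration:

```python
from itertools import combinations

def subsets_between(S, R):
    """Yield every T with S <= T <= R (as frozensets)."""
    extra = sorted(R - S)
    for k in range(len(extra) + 1):
        for combo in combinations(extra, k):
            yield frozenset(S) | frozenset(combo)

def excess(T, x, v):
    """e(T) = x(T) - v(T): surplus of the payoffs over the value of T."""
    return sum(x[i] for i in T) - v(T)

def reserve(S, R, x, v):
    """r(S, R) = min over all T with S <= T <= R of e(T)."""
    return min(excess(T, x, v) for T in subsets_between(S, R))

# Hypothetical toy game: a 3-agent majority game, v(S) = 1 iff |S| >= 2.
def v(S):
    return 1 if len(S) >= 2 else 0

x = {1: 0.5, 2: 0.5, 3: 0.0}   # candidate payoff vector (illustrative)
N = frozenset({1, 2, 3})

# x is in the core iff the minimum excess over all T is non-negative.
print(reserve(frozenset(), N, x, v))   # -> -0.5 ({1,3} and {2,3} block x)
```

Since the minimum excess is negative here, the candidate payoff vector is blocked, consistent with the 3-agent majority game having an empty core; the dynamic program over the tree decomposition computes the same minima without enumerating all subsets.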
Lemma 2. The procedure for computing the r-values of join nodes in Algorithm 1 is correct.

Proof. Consider any set S ⊆ X_i. Let U_j denote the subtree rooted at the left child, R_j denote ((U_j \ X_j) ∪ S), and Q_j denote (U_j \ X_j). Let U_k, R_k, and Q_k be defined analogously for the right child. Let R denote ((U \ X_i) ∪ S). Then

  r_i(S) = r(S, R)
         = min_{T : S ⊆ T ⊆ R} x(T) − v(T)
         = min_{T : S ⊆ T ⊆ R} x(S) + x(T ∩ Q_j) + x(T ∩ Q_k) − v(S) − v(cut(S, T ∩ Q_j)) − v(cut(S, T ∩ Q_k))
         = min_{T : S ⊆ T ⊆ R} [x(T ∩ Q_j) − v(cut(S, T ∩ Q_j))] + min_{T : S ⊆ T ⊆ R} [x(T ∩ Q_k) − v(cut(S, T ∩ Q_k))] + (x(S) − v(S))    (*)
         = min_{T : S ⊆ T ⊆ R} [x(T ∩ Q_j) + x(S) − v(cut(S, T ∩ Q_j)) − v(S)] + min_{T : S ⊆ T ⊆ R} [x(T ∩ Q_k) + x(S) − v(cut(S, T ∩ Q_k)) − v(S)] − (x(S) − v(S))
         = min_{T : S ⊆ T ⊆ R} e(T ∩ R_j) + min_{T : S ⊆ T ⊆ R} e(T ∩ R_k) − e(S)
         = min_{T' : S ⊆ T' ⊆ R_j} e(T') + min_{T' : S ⊆ T' ⊆ R_k} e(T') − e(S)
         = r_j(S) + r_k(S) − e(S)

where (*) holds because T ∩ Q_j and T ∩ Q_k are disjoint due to the running intersection property of the tree decomposition, and hence the minimum of the sum can be decomposed into the sum of the minima.

Marginal Contribution Nets: A Compact Representation Scheme for Coalitional Games*

ABSTRACT

We present a new approach to representing coalitional games based on rules that describe the marginal contributions of the agents. This representation scheme captures characteristics of the interactions among the agents in a natural and concise manner. We also develop efficient algorithms for two of the most important solution concepts, the Shapley value and the core, under this representation. The Shapley value can be computed in time linear in the size of the input. The emptiness of the core can be determined in time exponential only in the treewidth of a 
graphical interpretation of our representation.

1. INTRODUCTION

Agents can often benefit by coordinating their actions. Coalitional games capture these opportunities for coordination by explicitly modeling the ability of the agents to take joint actions as primitives. As an abstraction, coalitional games assign a payoff to each group of agents in the game. This payoff is intended to reflect the payoff the group of agents can secure for themselves regardless of the actions of the agents not in the group. These choices of primitives are in contrast to those of non-cooperative games, in which agents are modeled independently and their payoffs depend critically on the actions chosen by the other agents.

1.1 Coalitional Games and E-Commerce

Coalitional games have appeared in the context of e-commerce. In [7], Kleinberg et al. use coalitional games to study recommendation systems. In their model, each individual knows about a certain set of items, is interested in learning about all items, and benefits from finding out about them. The payoff to a group of agents is the total number of distinct items known by its members. Given this coalitional game setting, Kleinberg et al. 
compute how much the private information of the agents is worth to the system, using the solution concept of the Shapley value (the definition can be found in section 2). These values can then be used to determine how much each agent should receive for participating in the system.

As another example, consider the economics behind supply chain formation. The increased use of the Internet as a medium for conducting business has decreased the costs for companies to coordinate their actions, and therefore coalitional games are a good model for studying the supply chain problem. Suppose that each manufacturer purchases his raw materials from some set of suppliers, and that the suppliers offer higher discounts with more purchases. The decrease in communication costs lets manufacturers find others interested in the same set of suppliers more cheaply, and facilitates the formation of coalitions to bargain with the suppliers. Depending on the set of suppliers and how much each coalition purchases from each supplier, we can assign payoffs to the coalitions according to the discounts they receive. The resulting game can be analyzed using coalitional game theory, and we can answer questions such as the stability of coalitions and how to fairly divide the benefits among the participating manufacturers. A similar problem, combinatorial coalition formation, has previously been studied in [8].

1.2 Evaluation Criteria for Coalitional Game Representation

To capture the coalitional games described above and perform computations on them, we must first find a 
representation for these games. The naïve solution is to enumerate the payoffs to each set of agents, therefore requiring space exponential in the number of agents in the game. For the two applications described, the number of agents in the system can easily exceed a hundred; this naïve approach will not scale to such problems. Therefore, it is critical to find good representation schemes for coalitional games.

We believe that the quality of a representation scheme should be evaluated by four criteria.

Expressivity: the breadth of the class of coalitional games covered by the representation.
Conciseness: the space requirement of the representation.
Efficiency: the efficiency of the algorithms we can develop for the representation.
Simplicity: the ease of use of the representation by users of the system.

The ideal representation should be fully expressive, i.e., it should be able to represent any coalitional game, use as little space as possible, have efficient algorithms for computation, and be easy to use. The goal of this paper is to develop a representation scheme with properties close to this ideal. Unfortunately, given that the number of degrees of freedom of coalitional games is O(2^n), not all games can be represented concisely by a single scheme, due to information-theoretic constraints. For any given class of games, one may be able to develop a representation scheme that is tailored and more compact than a general scheme. For example, for the recommendation system game, a highly compact representation would be one that simply states which agents know of which products, and lets the algorithms that operate on the representation compute the values of coalitions appropriately. For some problems, however, there may not be efficient algorithms for customized representations. By having a general representation and efficient algorithms that go with it, the representation will be useful as a 
prototyping tool for studying new economic situations.

1.3 Previous Work

The question of coalitional game representation has only been sparsely explored in the past [2, 3, 4]. In [4], Deng and Papadimitriou focused on the complexity of different solution concepts for coalitional games defined on graphs. While the representation is compact, it is not fully expressive. In [2], Conitzer and Sandholm looked into the problem of determining the emptiness of the core in superadditive games. They developed a compact representation scheme for such games, but again the representation is not fully expressive. In [3], Conitzer and Sandholm developed a fully expressive representation scheme based on decomposition. Our work extends and generalizes the representation schemes in [3, 4] by decomposing the game into a set of rules that assign marginal contributions to groups of agents. We will give a more detailed review of these papers in section 2.2 after covering the technical background.

1.4 Summary of Our Contributions

• We develop the marginal contribution networks representation, a fully expressive representation scheme whose size scales according to the complexity of the interactions among the agents. We believe that the representation is also simple and intuitive.
• We develop an algorithm for computing the Shapley value of coalitional games under this representation that runs in time linear in the size of the input.
• Under the graphical interpretation of the representation, we develop algorithms for determining whether a payoff vector is in the core and for determining the emptiness of the core, each in time exponential only in the treewidth of the graph.

2. PRELIMINARIES

In this section, we will briefly review the basics of coalitional game theory and its two primary solution concepts, the Shapley value and the core.
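As a concrete companion to the definitions reviewed in this section, the Shapley value can be computed directly from its permutation-based definition on a tiny characteristic-form game. The game below (a standard glove-matching example) and all names in the sketch are illustrative, not taken from the paper:

```python
from itertools import permutations
from math import factorial

def shapley(N, v):
    """Brute-force Shapley values: average marginal contribution of each
    agent over all orderings of N. O(n!), so only for tiny games."""
    phi = {i: 0.0 for i in N}
    for order in permutations(N):
        coalition = frozenset()
        for i in order:
            phi[i] += v(coalition | {i}) - v(coalition)
            coalition = coalition | {i}
    n_fact = factorial(len(N))
    return {i: phi[i] / n_fact for i in N}

# Hypothetical 3-agent "glove" game: agent 1 holds a left glove, agents 2
# and 3 hold right gloves; each matched pair is worth 1.
def v(S):
    return min(len(S & {1}), len(S & {2, 3}))

values = shapley([1, 2, 3], v)
print(values)   # agent 1 gets 2/3; agents 2 and 3 get 1/6 each
```

The brute-force average over all n! orderings is only meant to ground the definition; the point of sections 4 and 5 is precisely to avoid this blow-up for games given as MC-nets.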
We will also review previous work on coalitional game representation in more detail. Throughout this paper, we will assume that the payoff to a group of agents can be freely distributed among its members. This assumption is often known as the transferable utility assumption.

2.1 Technical Background

We can represent a coalitional game with transferable utility by the pair (N, v), where

• N is the set of agents; and
• v : 2^N → R is a function that maps each group of agents S ⊆ N to a real-valued payoff.

This representation is known as the characteristic form. As there are exponentially many subsets, it will take space exponential in the number of agents to describe a coalitional game.

An outcome in a coalitional game specifies the utilities the agents receive. A solution concept assigns to each coalitional game a set of "reasonable" outcomes. Different solution concepts attempt to capture in some way outcomes that are stable and/or fair. Two of the best known solution concepts are the Shapley value and the core.

The Shapley value is a normative solution concept. It prescribes a "fair" way to divide the gains from cooperation when the grand coalition (i.e., N) is formed. The division of payoff to agent i is the average marginal contribution of agent i over all possible permutations of the agents. Formally, let φ_i(v) denote the Shapley value of i under characteristic function v; then

  φ_i(v) = Σ_{S ⊆ N \ {i}} [|S|! (|N| − |S| − 1)! / |N|!] (v(S ∪ {i}) − v(S))

The Shapley value is a solution concept that satisfies many nice properties, and has been studied extensively in the economic and game theoretic literature. It has a very useful axiomatic characterization.

Efficiency (EFF) A total of v(N) is distributed to the agents, i.e., Σ_{i ∈ N} φ_i(v) = v(N).
Symmetry (SYM) If agents i and j are interchangeable, then φ_i(v) = φ_j(v).
Dummy (DUM) If agent i is a dummy player, i.e., his marginal contribution to all groups S is the same, then φ_i(v) = v({i}).
Additivity (ADD) For 
any two coalitional games v and w defined over the same set of agents N, φ_i(v + w) = φ_i(v) + φ_i(w) for all i ∈ N, where the game v + w is defined by (v + w)(S) = v(S) + w(S) for all S ⊆ N.

We will refer to these axioms later in our proof of correctness of the algorithm for computing the Shapley value under our representation in section 4.

The core is another major solution concept for coalitional games. It is a descriptive solution concept that focuses on outcomes that are "stable." Stability under the core means that no set of players can jointly deviate to improve their payoffs. Formally, let x(S) denote Σ_{i ∈ S} x_i. An outcome x ∈ R^n is in the core if

  x(N) = v(N), and x(S) ≥ v(S) for all S ⊆ N.

The core was one of the first proposed solution concepts for coalitional games, and has been studied in detail. An important question for a given coalitional game is whether the core is empty; in other words, whether there is any outcome that is stable relative to group deviation. For a game to have a non-empty core, it must satisfy the property of balancedness, defined as follows. Let 1_S ∈ R^n denote the characteristic vector of S, given by

  (1_S)_i = 1 if i ∈ S, and 0 otherwise.

Let (λ_S)_{S ⊆ N} be a set of weights such that each λ_S is in the range between 0 and 1. This set of weights, (λ_S)_{S ⊆ N}, is a balanced collection if for all i ∈ N,

  Σ_{S ⊆ N} λ_S (1_S)_i = 1        (3)

By the Bondareva-Shapley theorem, the core of a coalitional game is non-empty if and only if the game is balanced. Therefore, we can use linear programming to determine whether the core of a game is empty:

  maximize_{λ ≥ 0}   Σ_{S ⊆ N} λ_S v(S)
  subject to   Σ_{S ⊆ N} λ_S 1_S = 1_N        (4)

If the optimal value of (4) is greater than the value of the grand coalition, then the core is empty. Unfortunately, this program has a number of variables exponential in the number of players in the game, and hence an algorithm that operates directly on it would be infeasible in practice. In section 5.4, we will describe an algorithm that answers the question of emptiness of the core by working on the dual of this program instead.

2.2 Previous 
Work Revisited\nDeng and Papadimitriou looked into the complexity of various solution concepts on coalitional games played on weighted graphs in [4].\nIn their representation, the set of agents are the nodes of the graph, and the value of a set of agents S is the sum of the weights of the edges spanned by them.\nNotice that this representation is concise since the space required to specify such a game is O(n^2).\nHowever, this representation is not general; it will not be able to represent interactions among three or more agents.\nFor example, it will not be able to represent the majority game, where a group of agents S will have value of 1 if and only if |S| > n/2.\nOn the other hand, there is an efficient algorithm for computing the Shapley value of the game, and for determining whether the core is empty under the restriction of positive edge weights.\nHowever, in the unrestricted case, determining whether the core is non-empty is coNP-complete.\nConitzer and Sandholm in [2] considered coalitional games that are superadditive.\nThey described a concise representation scheme that only states the value of a coalition if the value is strictly superadditive.\nMore precisely, the semantics of the representation is that for a group of agents S,\nv(S) = max_{{S1, ..., Sk} ∈ Π} Σ_j v(Sj)\nwhere Π is the set of all possible partitions of S.\nThe value v(S) is only explicitly specified for S if v(S) is greater than the value of every partition of S other than the trivial partition ({S}).\nWhile this representation can represent all games that are superadditive, there are coalitional games that it cannot represent.\nFor example, it will not be able to represent any games with substitutability among the agents.\nAn example of a game that cannot be represented is the unit game, where v(S) = 1 as long as S ≠ ∅.\nUnder this representation, the authors showed that determining whether the core is non-empty is coNP-complete.\nIn fact, even determining the value of a group of agents is NP-complete.\nIn a more recent paper,
Conitzer and Sandholm described a representation that decomposes a coalitional game into a number of subgames whose sum adds up to the original game [3].\nThe payoffs in these subgames are then represented by their respective characteristic functions.\nThis scheme is fully general as the characteristic form is a special case of this representation.\nFor any given game, there may be multiple ways to decompose the game, and the decomposition may influence the computational complexity.\nFor computing the Shapley value, the authors showed that the complexity is linear in the input description; in particular, if the largest subgame (as measured by number of agents) is of size n and the number of subgames is m, then their algorithm runs in O(m 2^n) time, where the input size will also be O(m 2^n).\nOn the other hand, the problem of determining whether a certain outcome is in the core is coNP-complete.\n3.\nMARGINAL CONTRIBUTION NETS\nIn this section, we will describe the Marginal Contribution Networks representation scheme.\nWe will show that the idea is flexible, and we can easily extend it to increase its conciseness.\nWe will also show how we can use this scheme to represent the recommendation game from the introduction.\nFinally, we will show that this scheme is fully expressive, and generalizes the representation schemes in [3, 4].\n3.1 Rules and Marginal Contribution Networks\nThe basic idea behind marginal contribution networks (MC-nets) is to represent coalitional games using sets of rules.\nThe rules in MC-nets have the following syntactic form:\nPattern → value\nA rule is said to apply to a group of agents S if S meets the requirement of the Pattern.\nIn the basic scheme, these patterns are conjunctions of agents, and S meets the requirement of the given pattern if S is a superset of it.\nThe value of a group of agents is defined to be the sum over the values of all rules that apply to the group.\nFor example, if the set of rules are\n{a ∧ b} → 5 and {b} → 2,\nthen v({a}) = 0, v({b}) = 2, and v({a, b}) = 5 +
2 = 7.\nMC-nets is a very flexible representation scheme, and can be extended in different ways.\nOne simple way to extend it and increase its conciseness is to allow a wider class of patterns in the rules.\nA pattern that we will use throughout the remainder of the paper is one that applies only in the absence of certain agents.\nThis is useful for expressing concepts such as substitutability or default values.\nFormally, we express such patterns by\np1 ∧ p2 ∧ ... ∧ pm ∧ ¬n1 ∧ ¬n2 ∧ ... ∧ ¬nn\nwhich has the semantics that such a rule will apply to a group S only if {p1, ..., pm} ⊆ S and {n1, ..., nn} ∩ S = ∅.\nWe will call the pi in the above pattern the positive literals, and the nj the negative literals.\nNote that if the pattern of a rule consists solely of negative literals, we will consider that the empty set of agents will also satisfy such a pattern, and hence v(∅) may be non-zero in the presence of negative literals.\nTo demonstrate the increase in conciseness of representation, consider the unit game described in section 2.2.\nTo represent such a game without using negative literals, we will need 2^n − 1 rules for n players: we need a rule of value 1 for each individual agent, a rule of value −1 for each pair of agents to counter the double-counting, a rule of value 1 for each triplet of agents, etc., similar to the inclusion-exclusion principle.\nOn the other hand, using negative literals, we only need n rules: value 1 for the first agent, value 1 for the second agent in the absence of the first agent, value 1 for the third agent in the absence of the first two agents, etc.
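The rule semantics with negative literals can be sketched in a few lines of Python. This is an illustrative reading of the definitions above, not code from the paper: rules are encoded as (positive literals, negative literals, value) triples, and all names are our own.

```python
def mc_net_value(rules, S):
    """Value of coalition S under an MC-net: sum the values of all rules
    whose positive literals are all present in S and whose negative
    literals are all absent from S."""
    S = frozenset(S)
    return sum(v for pos, neg, v in rules
               if set(pos) <= S and not (set(neg) & S))

# Unit game on agents 1..4 using negative literals: n rules instead of
# exponentially many. Rule for agent a: "a, in the absence of all b < a".
agents = [1, 2, 3, 4]
unit_rules = [([a], [b for b in agents if b < a], 1) for a in agents]

assert mc_net_value(unit_rules, {3}) == 1        # exactly one rule fires
assert mc_net_value(unit_rules, {2, 4}) == 1     # agent 2's rule fires; 4's is blocked
assert mc_net_value(unit_rules, set()) == 0      # every rule has a positive literal
```

Each non-empty coalition triggers exactly one rule (the one for its smallest member), so v(S) = 1 for all S ≠ ∅, as required for the unit game.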
.\nThe representational savings can be exponential in the number of agents.\nGiven a game represented as a MC-net, we can interpret the set of rules that make up the game as a graph.\nWe call this graph the agent graph.\nThe nodes in the graph will represent the agents in the game, and for each rule in the MCnet, we connect all the agents in the rule together and assign a value to the clique formed by the set of agents.\nNotice that to accommodate negative literals, we will need to annotate the clique appropriately.\nThis alternative view of MC-nets will be useful in our algorithm for CORE-MEMBERSHIP in section 5.\nWe would like to end our discussion of the representation scheme by mentioning a trade-off between the expressiveness of patterns and the space required to represent them.\nTo represent a coalitional game in characteristic form, one would need to specify all 2n \u2212 1 values.\nThere is no overhead on top of that since there is a natural ordering of the groups.\nFor MC-nets, however, specification of the rules requires specifying both the patterns and the values.\nThe patterns, if not represented compactly, may end up overwhelming the savings from having fewer values to specify.\nThe space required for the patterns also leads to a tradeoff between the expressiveness of the allowed patterns and the simplicity of representing them.\nHowever, we believe that for most naturally arising games, there should be sufficient structure in the problem such that our representation achieves a net saving over the characteristic form.\n3.2 Example: Recommendation Game\nAs an example, we will use MC-net to represent the recommendation game discussed in the introduction.\nFor each product, as the benefit of knowing about the product will count only once for each group, we need to capture substitutability among the agents.\nThis can be captured by a scaled unit game.\nSuppose the value of the knowledge about product i is vi, and there are ni agents, denoted by {xji}, who 
know about the product, the game for product i can then be represented as the following rules (a scaled unit game over the agents who know of product i):\n{x1i} → vi, {x2i ∧ ¬x1i} → vi, ..., {x_{ni,i} ∧ ¬x1i ∧ ... ∧ ¬x_{(ni−1),i}} → vi\nThe entire game can then be built up from the sets of rules of each product.\nThe space requirement will be O(m n*), where m is the number of products in the system, and n* is the maximum number of agents who know of the same product.\n3.3 Representation Power\nWe will discuss the expressiveness and conciseness of our representation scheme and compare it with the previous works in this subsection.\nPROPOSITION 1.\nMarginal contribution networks constitute a fully expressive representation scheme.\nPROOF.\nConsider an arbitrary coalitional game ⟨N, v⟩ in characteristic form representation.\nWe can construct a set of rules to describe this game by starting from the singleton sets and building up the set of rules.\nFor any singleton set {i}, we create a rule {i} → v({i}).\nFor any pair of agents {i, j}, we create a rule {i ∧ j} → v({i, j}) − v({i}) − v({j}).\nWe can continue to build up rules in a manner similar to the inclusion-exclusion principle.\nSince the game is arbitrary, MC-nets are fully expressive.\nUsing the construction outlined in the proof, we can show that our representation scheme can simulate the multi-issue representation scheme of [3] in almost the same amount of space.\nPROPOSITION 2.\nMarginal contribution networks can simulate the multi-issue representation of [3] with at most a linear factor increase in space.\nPROOF.\nGiven a game in multi-issue representation, we start by describing each of the subgames, which are represented in characteristic form in [3], with a set of rules.\nWe then build up the grand game by including all the rules from the subgames.\nNote that our representation may require a space larger by a linear factor due to the need to describe the patterns for each rule.\nOn the other hand, our approach may have fewer than exponential number of rules for each subgame, depending on the structure of these subgames, and therefore may be more concise than multi-issue representation.\nConversely, there are games that require exponentially more space
to represent under the multi-issue scheme compared to our scheme.\nPROPOSITION 3.\nMarginal contribution networks are exponentially more concise than multi-issue representation for certain games.\nPROOF.\nConsider a unit game over all the agents N.\nAs explained in 3.1, this game can be represented in linear space using MC-nets with negative literals.\nHowever, as there is no decomposition of this game into smaller subgames, it will require space O (2n) to represent this game under the multiissue representation.\nUnder the agent graph interpretation of MC-nets, we can see that MC-nets is a generalization of the graphical representation in [4], namely from weighted graphs to weighted hypergraphs.\nPROOF.\nGiven a game in graphical form, G, for each edge (i, j) with weight wij in the graph, we create a rule {i, j} \u2192 wij.\nClearly this takes exactly the same space as the size of G, and by the additive semantics of the rules, it represents the same game as G.\n4.\nCOMPUTING THE SHAPLEY VALUE\nGiven a MC-net, we have a simple algorithm to compute the Shapley value of the game.\nConsidering each rule as a separate game, we start by computing the Shapley value of the agents for each rule.\nFor each agent, we then sum up the Shapley values of that agent over all the rules.\nWe first show that this final summing process correctly computes the Shapley value of the agents.\nPROOF.\nFor any group S, under the MC-nets representation, v (S) is defined to be the sum over the values of all the rules that apply to S. 
Therefore, considering each rule as a game, by the (ADD) axiom discussed in section 2, the Shapley value of the game created from aggregating all the rules is equal to the sum of the Shapley values over the rules.\nThe remaining question is how to compute the Shapley values of the rules.\nWe can separate the analysis into two cases, one for rules with only positive literals and one for rules with mixed literals.\nFor rules that have only positive literals, the Shapley value of the agents is v/m, where v is the value of the rule and m is the number of agents in the rule.\nThis is a direct consequence of the (SYM) axiom of the Shapley value, as the agents in a rule are indistinguishable from each other.\nFor rules that have both positive and negative literals, we can consider the positive and the negative literals separately.\nFor a given positive literal i, the rule will apply only if i occurs in a given permutation after the rest of the positive literals but before any of the negative literals.\nFormally, let φi denote the Shapley value of i, p denote the cardinality of the positive set, and n denote the cardinality of the negative set; then\nφi = (p − 1)! n! v / (p + n)!\nFor a given negative literal j, j will be responsible for cancelling the application of the rule if all positive literals come before the negative literals in the ordering, and j is the first among the negative literals.\nTherefore,\nφj = − p! (n − 1)! v / (p + n)!\nBy the (SYM) axiom, all positive literals will have the value of φi and all negative literals will have the value of φj.\nNote that the sum over all agents in rules with mixed literals is 0.\nThis is to be expected as these rules contribute 0 to the grand coalition.\nThe fact that these rules have no effect on the grand coalition may appear odd at first.\nBut this is because the presence of such rules is to define the values of coalitions smaller than the grand coalition.\nIn terms of computational complexity, given that the Shapley value
of any agent in a given rule can be computed in time linear in the pattern of the rule, the total running time of the algorithm for computing the Shapley value of the game is linear in the size of the input.\n5.\nANSWERING CORE-RELATED QUESTIONS\nThere are a few different but related computational problems associated with the solution concept of the core.\nWe will focus on the following two problems: Definition 1.\n(CORE-MEMBERSHIP) Given a coalitional game and a payoff vector x, determine if x is in the core.\nDefinition 2.\n(CORE-NON-EMPTINESS) Given a coalitional game, determine if the core is non-empty.\nIn the rest of the section, we will first show that these two problems are coNP-complete and coNP-hard respectively, and discuss some complexity considerations about these problems.\nWe will then review the main ideas of tree decomposition as it will be used extensively in our algorithm for CORE-MEMBERSHIP.\nNext, we will present the algorithm for CORE-MEMBERSHIP, and show that the algorithm runs in polynomial time for graphs of bounded treewidth.\nWe end by extending this algorithm to answer the question of CORENON-EMPTINESS in polynomial time for graphs of bounded treewidth.\n5.1 Computational Complexity\nThe hardness of CORE-MEMBERSHIP and CORE-NONEMPTINESS follows directly from the hardness results of games over weighted graphs in [4].\nPROOF.\nCORE-MEMBERSHIP in MC-nets is in the class of coNP since any set of agents S of which v (S)> x (S) will serve as a certificate to show that x does not belong to the core.\nAs for its hardness, given any instance of COREMEMBERSHIP for a game in graphical form of [4], we can encode the game in exactly the same space using MC-net due to Proposition 4.\nSince CORE-MEMBERSHIP for games in graphical form is coNP-complete, CORE-MEMBERSHIP in MC-nets is coNP-hard.\nPROPOSITION 7.\nCORE-NON-EMPTINESS for games represented as marginal contribution networks is coNP-hard.\nPROOF.\nThe same argument for hardness between games in 
graphical form and MC-nets holds for the problem of CORE-NON-EMPTINESS.\nWe do not know of a certificate to show that CORE-NON-EMPTINESS is in the class of coNP as of now.\nNote that the \"obvious\" certificate of a balanced set of weights based on the Bondareva-Shapley theorem is exponential in size.\nIn [4], Deng and Papadimitriou showed the coNP-completeness of CORE-NON-EMPTINESS via a combinatorial characterization, namely that the core is non-empty if and only if there is no negative cut in the graph.\nIn MC-nets, however, there need not be a negative hypercut in the graph for the core to be empty, as demonstrated by the following game.\nApplying the Bondareva-Shapley theorem, if we let λ12 = λ13 = λ14 = 1/3, and λ234 = 2/3, this set of weights demonstrates that the game is not balanced, and hence the core is empty.\nOn the other hand, this game can be represented with MC-nets as follows (weights on hyperedges):\nNo matter how the set is partitioned, the sum over the weights of the hyperedges in the cut is always non-negative.\nTo overcome the computational hardness of these problems, we have developed algorithms that are based on tree decomposition techniques.\nFor CORE-MEMBERSHIP, our algorithm runs in time exponential only in the treewidth of the agent graph.\nThus, for graphs of small treewidth, such as trees, we have a tractable solution to determine if a payoff vector is in the core.\nBy using this procedure as a separation oracle, i.e., a procedure for returning the inequality violated by a candidate solution, to solve a linear program that is related to CORE-NON-EMPTINESS using the ellipsoid method, we can obtain a polynomial time algorithm for CORE-NON-EMPTINESS for graphs of bounded treewidth.\n5.2 Review of Tree Decomposition\nAs our algorithm for CORE-MEMBERSHIP relies heavily on tree decomposition, we will first briefly review the main ideas in tree decomposition and treewidth.\nDefinition 3.\nA tree decomposition of a graph
G = (V, E) is a pair (X, T), where T = (I, F) is a tree and X = {Xi | i \u2208 I} is a family of subsets of V, one for each node of T, such that \u2022 Ui \u2208 I Xi = V; \u2022 For all edges (v, w) \u2208 E, there exists an i \u2208 I with v \u2208 Xi and w \u2208 Xi; and \u2022 (Running Intersection Property) For all i, j, k \u2208 I: if j is on the path from i to k in T, then Xi \u2229 Xk \u2286 Xj.\nThe treewidth of a tree decomposition is defined as the maximum cardinality over all sets in X, less one.\nThe treewidth of a graph is defined as the minimum treewidth over all tree decompositions of the graph.\nGiven a tree decomposition, we can convert it into a nice tree decomposition of the same treewidth, and of size linear in that of T. Definition 4.\nA tree decomposition T is nice if T is rooted and has four types of nodes: Leaf nodes i are leaves of T with | Xi | = 1.\nIntroduce nodes i have one child j such that Xi = Xj \u222a {v} of some v \u2208 V.\nForget nodes i have one child j such that Xi = Xj \\ {v} for some v \u2208 Xj.\nJoin nodes i have two children j and k with Xi = Xj = Xk.\nAn example of a (partial) nice tree decomposition together with a classification of the different types of nodes is in Figure 1.\nIn the following section, we will refer to nodes in the tree decomposition as nodes, and nodes in the agent graph as agents.\n5.3 Algorithm for Core Membership\nOur algorithm for CORE-MEMBERSHIP takes as an input a nice tree decomposition T of the agent graph and a payoff vector x. By definition, if x belongs to the core, then for all groups S \u2286 N, x (S) \u2265 v (S).\nTherefore, the difference x (S) \u2212 v (S) measures how \"close\" the group S is to violating the core condition.\nWe call this difference the excess of group S. 
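As a concrete baseline for what the tree-decomposition machinery avoids, the core condition can be checked directly by enumerating every coalition and verifying that x(S) − v(S) is non-negative. The following Python sketch is our own illustration (function names and the toy two-agent game are not from the paper); it runs in time exponential in the number of agents, which is exactly the cost the algorithm below improves on for bounded-treewidth agent graphs.

```python
from itertools import chain, combinations

def excess(x, v, S):
    """e(S) = x(S) - v(S); x maps agent -> payoff, v maps frozenset -> value."""
    return sum(x[i] for i in S) - v(S)

def in_core_bruteforce(x, v, agents):
    """x is in the core iff x(N) = v(N) and every coalition has
    non-negative excess. Enumerates all subsets (exponential)."""
    N = frozenset(agents)
    if abs(excess(x, v, N)) > 1e-9:   # efficiency: x(N) must equal v(N)
        return False
    subsets = chain.from_iterable(combinations(agents, r)
                                  for r in range(1, len(agents) + 1))
    return all(excess(x, v, frozenset(S)) >= 0 for S in subsets)

# Toy 2-agent game: v({1}) = v({2}) = 0, v({1, 2}) = 2.
v = lambda S: 2 if S == frozenset({1, 2}) else 0
assert in_core_bruteforce({1: 1, 2: 1}, v, [1, 2])        # equal split is stable
assert not in_core_bruteforce({1: 2.5, 2: -0.5}, v, [1, 2])  # agent 2 deviates alone
```

The tree-based algorithm reaches the same verdict without touching all 2^n coalitions, by propagating minimum excesses (the reserves defined below) up the tree decomposition.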
Definition 5.\nThe excess of a coalition S, e(S), is defined as x(S) − v(S).\nA brute-force approach to determine if a payoff vector belongs to the core will have to check that the excesses of all groups are non-negative.\nHowever, this approach ignores the structure in the agent graph that will allow an algorithm to infer that certain groups have non-negative excesses due to the excesses computed elsewhere in the graph.\n(Figure 1: Example of a (partial) nice tree decomposition.)\nTree decomposition is the key to take advantage of such inferences in a structured way.\nFor now, let us focus on rules with positive literals.\nSuppose we have already checked that the excesses of all sets R ⊆ U are non-negative, and we would like to check if the addition of an agent i to the set U will create a group with negative excess.\nA naïve solution will be to compute the excesses of all sets that include i.\nThe excess of the group (R ∪ {i}) is given by\ne(R ∪ {i}) = e(R) + xi − v(c) (6)\nwhere c is the cut between R and i, and v(c) is the sum of the weights of the edges in the cut.\nHowever, suppose that from the tree decomposition, we know that i is only connected to a subset of U, say S, which we will call the entry set to U. Ideally, because i does not share any edges with members of Ū = (U \ S), we would hope that an algorithm can take advantage of this structure by checking only sets that are subsets of (S ∪ {i}).\nThis computational saving may be possible since (xi − v(c)) in the update equation of (6) does not depend on Ū. However, we cannot simply ignore Ū, as members of Ū may still influence the excesses of groups that include agent i through group S.
Specifically, if there exists a group T ⊃ S such that e(T) < e(S), then even when e(S ∪ {i}) is non-negative, e(T ∪ {i}) may be negative.\nIn other words, the excess available at S may have been \"drained\" away due to T.\nThis motivates the definition of the reserve of a group.\nDefinition 6.\nThe reserve of a coalition S relative to a coalition U is the minimum excess over all coalitions between S and U, i.e., all T: S ⊆ T ⊆ U.\nWe denote this value by r(S, U).\nWe will refer to the group T that has the minimum excess as arg r(S, U).\nWe will also call U the limiting set of the reserve and S the base set of the reserve.\nOur algorithm works by keeping track of the reserves of all non-empty subsets that can be formed by the agents of a node at each of the nodes of the tree decomposition.\nStarting from the leaves of the tree and working towards the root, at each node i, our algorithm computes the reserves of all groups S ⊆ Xi, limited by the set of agents in the subtree rooted at i, Ti, except those in (Xi \ S).\nThe agents in (Xi \ S) are excluded to ensure that S is an entry set.\nSpecifically, S is the entry set to ((Ti \ Xi) ∪ S).\nTo accommodate negative literals, we will need to make two adjustments.\nFirstly, the cut between an agent m and a set S at node i now refers to the cut among agent m, set S, and set ¬(Xi \ S), and its value must be computed accordingly.\nAlso, when an agent m is introduced to a group at an introduce node, we will also need to consider the change in the reserves of groups that do not include m due to a possible cut involving ¬m and the group.\nAs an example of the reserve values we keep track of at a tree node, consider node i of the tree in Figure 1.\nAt node i, we will keep track of the following:\nwhere the dots...refer to the agents rooted under node m.
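Definition 6 can be made concrete with a small brute-force sketch. This is illustrative only and uses our own names: the actual algorithm never enumerates the coalitions between S and U, but computes the same quantity incrementally over the tree decomposition.

```python
from itertools import chain, combinations

def reserve(x, v, S, U):
    """r(S, U): the minimum excess e(T) = x(T) - v(T) over all coalitions T
    with S <= T <= U (S is the base set, U the limiting set)."""
    S, U = frozenset(S), frozenset(U)
    extra = U - S
    candidates = (S | frozenset(E)
                  for E in chain.from_iterable(combinations(extra, r)
                                               for r in range(len(extra) + 1)))
    return min(sum(x[i] for i in T) - v(T) for T in candidates)

# Toy game: v({1, 2}) = 3, every other coalition worth 0; payoff 1 each.
v = lambda T: 3 if T == frozenset({1, 2}) else 0
x = {1: 1, 2: 1, 3: 1}
# The excess at {1} alone is 1, but it is "drained" by T = {1, 2}:
assert reserve(x, v, {1}, {1, 2, 3}) == -1
```

A negative reserve at some node is exactly the certificate, per Proposition 8 below, that the payoff vector lies outside the core.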
For notational use, we will use ri(S) to denote r(S, U) at node i where U is the set of agents in the subtree rooted at node i excluding agents in (Xi \ S).\nWe sometimes refer to these values as the r-values of a node.\nThe details of the r-value computations are in Algorithm 1.\nAlgorithm 1 (r-value computation at an introduce node i):\n2: j ← child of i\n3: m ← Xi \ Xj {the introduced agent}\n4: for all S ⊆ Xj, S ≠ ∅ do\n5: C ← all hyperedges in the cut of m, S, and ¬(Xi \ S)\n6: ri(S ∪ {m}) ← rj(S) + xm − v(C)\n7: C ← all hyperedges in the cut of ¬m, S, and ¬(Xi \ S)\n8: ri(S) ← rj(S) − v(C)\n9: end for\nTo determine if the payoff vector x is in the core, during the r-value computation at each node, we can check if all of the r-values are non-negative.\nIf this is so for all nodes in the tree, the payoff vector x is in the core.\nThe correctness of the algorithm is due to the following proposition.\nPROPOSITION 8.\nThe payoff vector x is not in the core if and only if the r-value at some node i for some group S is negative.\nPROOF.\n(⇐) If the reserve at some node i for some group S is negative, then there exists a coalition T for which e(T) = x(T) − v(T) < 0, hence x is not in the core.\n(⇒) Suppose x is not in the core; then there exists some group R* such that e(R*) < 0.\nLet Xroot be the set of agents at the root.\nConsider any set S ⊆ Xroot; rroot(S) will have the base set of S and the limiting set of ((N \ Xroot) ∪ S).\nThe union over all of these ranges includes all sets U for which U ∩ Xroot ≠ ∅.\nTherefore, if R* is not disjoint from Xroot, the r-value for some group in the root is negative.\nIf R* is disjoint from Xroot, consider the forest {Ti} resulting from removal of all tree nodes that include agents in Xroot.\nSince no rule spans agents in two distinct trees of this forest, the excess of R* decomposes over the trees, so its restriction S*i to some tree Ti has negative excess as well.\nTherefore, we only need to check the r-values of the nodes on the individual trees in the forest.\nBut for each tree in the forest, we can
apply the same argument restricted to the agents in the tree.\nIn the base case, we have the leaf nodes of the original tree decomposition, say, for agent i.\nIf R * = {i}, then r ({i}) = e ({i}) <0.\nTherefore, by induction, if e (R *) <0, some reserve at some node would be negative.\nWe will next explain the intuition behind the correctness of the computations for the r-values in the tree nodes.\nA detailed proof of correctness of these computations can be found in the appendix under Lemmas 1 and 2.\nPROPOSITION 9.\nThe procedure in Algorithm 1 correctly compute the r-values at each of the tree nodes.\nPROOF.\n(SKETCH) We can perform a case analysis over the four types of tree nodes in a nice tree decomposition.\nLeaf nodes (i) The only reserve value to be computed is ri (Xi), which equals r (Xi, Xi), and therefore it is just the excess of group Xi.\nForget nodes (i with child j) Let m be the forgotten node.\nFor any subset S \u2286 Xi, arg ri (S) must be chosen between the groups of S and S \u222a {m}, and hence we choose between the lower of the two from the r-values at node j. Introduce nodes (i with child j) Let m be the introduced node.\nFor any subset T \u2286 Xi that includes m, let S denote (T \\ {m}).\nBy the running intersection property, there are no rules that involve m and agents of the subtree rooted at node i except those involving m and agents in Xi.\nAs both the base set and the limiting set of the r-values of node j and node i differ by {m}, for any group V that lies between the base set and the limiting set of node i, the excess of group V will differ by a constant amount from the corresponding group (V \\ {m}) at node j. Therefore, the set arg ri (T) equals the set arg rj (S) \u222a {m}, and ri (T) = rj (S) + xm \u2212 v (cut), where v (cut) is the value of the rules in the cut between m and S. 
For any subset S ⊂ Xi that does not include m, we need to consider the values of rules that include ¬m as a literal in the pattern.\nAlso, when computing the reserve, the payoff xm will not contribute to group S. Therefore, together with the running intersection property as argued above, we can show that ri(S) = rj(S) − v(cut).\nJoin nodes (i with left child j and right child k) For any given set S ⊆ Xi, consider the r-values of that set at j and k.\nIf arg rj(S) or arg rk(S) includes agents not in S, then arg rj(S) and arg rk(S) will be disjoint from each other due to the running intersection property.\nTherefore, we can decompose arg ri(S) into three sets, (arg rj(S) \ S) on the left, S in the middle, and (arg rk(S) \ S) on the right.\nThe reserve rj(S) will cover the excesses on the left and in the middle, whereas the reserve rk(S) will cover those on the right and in the middle, and so the excesses in the middle are double-counted.\nWe adjust for the double-counting by subtracting the excesses in the middle from the sum of the two reserves rj(S) and rk(S).\nFinally, note that each step in the computation of the r-values of each node i takes time at most exponential in the size of Xi, hence the algorithm runs in time exponential only in the treewidth of the graph.\n5.4 Algorithm for Core Non-emptiness\nWe can extend the algorithm for CORE-MEMBERSHIP into an algorithm for CORE-NON-EMPTINESS.\nAs described in section 2, whether the core is empty can be checked using the optimization program based on the balancedness condition (3).\nUnfortunately, that program has an exponential number of variables.\nOn the other hand, the dual of the program has only n variables, and can be written as follows:\nminimize Σ_{i=1}^{n} xi over x ∈ R^n, subject to x(S) ≥ v(S) for all S ⊆ N (7)\nBy strong duality, the optimal value of (7) is equal to the optimal value of (4), the primal program described in section 2.\nTherefore, by the Bondareva-Shapley theorem, if the optimal value of (7) is greater than v(N), the core is empty.\nWe can solve the dual program using the ellipsoid method with CORE-MEMBERSHIP as a separation oracle, i.e., a procedure for returning a constraint that is violated.\nNote that a simple extension to the CORE-MEMBERSHIP algorithm will allow us to keep track of the set T for which e(T) < 0 during the r-value computation, and hence we can return the inequality about T as the constraint violated.\nTherefore, CORE-NON-EMPTINESS can run in time polynomial in the running time of CORE-MEMBERSHIP, which in turn runs in time exponential only in the treewidth of the graph.\nNote that when the core is not empty, this program will return an outcome in the core.\n6.\nCONCLUDING REMARKS\nWe have developed a fully expressive representation scheme for coalitional games of which the size depends on the complexity of the interactions among the agents.\nOur focus on general representation is in contrast to the approach taken in [3, 4].\nWe have also developed an efficient algorithm for the computation of the Shapley values for this representation.\nWhile CORE-MEMBERSHIP for MC-nets is coNP-complete, we have developed an algorithm for CORE-MEMBERSHIP that runs in time exponential only in the treewidth of the agent graph.\nWe have also extended the algorithm to solve CORE-NON-EMPTINESS.\nOther than the algorithm for CORE-NON-EMPTINESS in [4] under the restriction of non-negative edge weights, and that in [2] for superadditive games when the value of the grand coalition is given, we are not aware of any explicit description of algorithms for core-related problems in the literature.\nThe work in this paper is related to a number of areas in computer science, especially in artificial intelligence.\nFor example, the graphical interpretation of MC-nets is closely related to Markov random fields (MRFs) of the Bayes nets community.\nThey both address the issue of conciseness of representation by using the combinatorial structure of weighted hypergraphs.\nIn fact, Kearns et
al. first applied these ideas to game theory by introducing a representation scheme derived from Bayes nets to represent non-cooperative games [6].\nThe representational issues faced in coalitional games are closely related to the problem of expressing valuations in combinatorial auctions [5, 10].\nThe OR-bid language, for example, is strongly related to superadditivity.\nThe question of the representation power of different patterns is also related to Boolean expression complexity [12].\nWe believe that with a better understanding of the relationships among these related areas, we may be able to develop more efficient representations and algorithms for coalitional games.\nFinally, we would like to end with some ideas for extending the work in this paper.\nOne direction to increase the conciseness of MC-nets is to allow the definition of equivalence classes of agents, similar to the idea of extending Bayes nets to probabilistic relational models.\nThe concept of symmetry is prevalent in games, and the use of classes of agents will allow us to capture symmetry naturally and concisely.\nThis will also address the problem of unpleasingly asymmetric representations of symmetric games in our representation.\nAlong the line of exploiting symmetry, as the agents within the same class are symmetric with respect to each other, we can extend the idea above by allowing functional description of marginal contributions.\nMore concretely, we can specify the value of a rule as dependent on the number of agents of each relevant class.\nThe use of functions will allow concise description of marginal diminishing returns (MDRs).\nWithout the use of functions, the space needed to describe MDRs among n agents in MC-nets is O(n).\nWith the use of functions, the space required can be reduced to O(1).\nAnother idea to extend MC-nets is to augment the semantics to allow constructs that specify certain rules cannot be applied simultaneously.\nThis is useful in situations where a certain agent
represents a type of exhaustible resource, and therefore rules that depend on the presence of the agent should not apply simultaneously. For example, if agent i in the system stands for coal, we can either use it as fuel for a power plant or as input to a steel mill for making steel, but not for both at the same time. Currently, to represent such situations, we have to specify rules to cancel out the effects of applications of different rules. The augmented semantics can simplify the representation by specifying when rules cannot be applied together.

Learning User Interaction Models for Predicting Web Search Result Preferences

Abstract: Evaluating user preferences of web search results is crucial for search engine development, deployment, and maintenance. We present a real-world study of modeling the behavior of web search users to predict web search result preferences. Accurate modeling and interpretation of user behavior has important applications to ranking, click spam detection, web search personalization, and other tasks. Our key insight to improving robustness of interpreting implicit feedback is to model query-dependent deviations from the expected noisy user behavior. We show that our model of clickthrough interpretation improves prediction accuracy over state-of-the-art clickthrough methods. We generalize our approach to model user behavior beyond clickthrough, which results in higher preference prediction accuracy than models based on clickthrough information alone.
We report results of a large-scale experimental evaluation that show substantial improvements over published implicit feedback interpretation methods.

Eugene Agichtein (eugeneag@microsoft.com), Eric Brill (brill@microsoft.com), Susan Dumais (sdumais@microsoft.com), Robert Ragno (rragno@microsoft.com), Microsoft Research

Categories and Subject Descriptors: H.3.3 [Information Search and Retrieval]: Search process, relevance feedback.
General Terms: Algorithms, Measurement, Performance, Experimentation.

1. INTRODUCTION

Relevance measurement is crucial to web search and to information retrieval in general. Traditionally, search relevance is measured by using human assessors to judge the relevance of query-document pairs. However, explicit human ratings are expensive and difficult to obtain. At the same time, millions of people
interact daily with web search engines, providing valuable implicit feedback through their interactions with the search results.\nIf we could turn these interactions into relevance judgments, we could obtain large amounts of data for evaluating, maintaining, and improving information retrieval systems.\nRecently, automatic or implicit relevance feedback has developed into an active area of research in the information retrieval community, at least in part due to an increase in available resources and to the rising popularity of web search.\nHowever, most traditional IR work was performed over controlled test collections and carefully-selected query sets and tasks.\nTherefore, it is not clear whether these techniques will work for general real-world web search.\nA significant distinction is that web search is not controlled.\nIndividual users may behave irrationally or maliciously, or may not even be real users; all of this affects the data that can be gathered.\nBut the amount of the user interaction data is orders of magnitude larger than anything available in a non-web-search setting.\nBy using the aggregated behavior of large numbers of users (and not treating each user as an individual expert) we can correct for the noise inherent in individual interactions, and generate relevance judgments that are more accurate than techniques not specifically designed for the web search setting.\nFurthermore, observations and insights obtained in laboratory settings do not necessarily translate to real world usage.\nHence, it is preferable to automatically induce feedback interpretation strategies from large amounts of user interactions.\nAutomatically learning to interpret user behavior would allow systems to adapt to changing conditions, changing user behavior patterns, and different search settings.\nWe present techniques to automatically interpret the collective behavior of users interacting with a web search engine to predict user preferences for search results.\nOur 
contributions include:

- A distributional model of user behavior, robust to noise within individual user sessions, that can recover relevance preferences from user interactions (Section 3).
- Extensions of existing clickthrough strategies to include richer browsing and interaction features (Section 4).
- A thorough evaluation of our user behavior models, as well as of previously published state-of-the-art techniques, over a large set of web search sessions (Sections 5 and 6).

We discuss our results and outline future directions and various applications of this work in Section 7, which concludes the paper.

2. BACKGROUND AND RELATED WORK

Ranking search results is a fundamental problem in information retrieval. The most common approaches in the context of the web use both the similarity of the query to the page content, and the overall quality of a page [3, 20]. A state-of-the-art search engine may use hundreds of features to describe a candidate page, employing sophisticated algorithms to rank pages based on these features. Current search engines are commonly tuned on human relevance judgments. Human annotators rate a set of pages for a query according to perceived relevance, creating the gold standard against which different ranking algorithms can be evaluated. Reducing the dependence on explicit human judgments by using implicit relevance feedback has been an active topic of research. Several research groups have evaluated the relationship between implicit measures and user interest. In these studies, both reading time and explicit ratings of interest are collected. Morita and Shinoda [14] studied the amount of time that users spent reading Usenet news articles and found that reading time could predict a user's interest levels. Konstan et al.
[13] showed that reading time was a strong predictor of user interest in their GroupLens system. Oard and Kim [15] studied whether implicit feedback could substitute for explicit ratings in recommender systems. More recently, Oard and Kim [16] presented a framework for characterizing observable user behaviors using two dimensions: the underlying purpose of the observed behavior and the scope of the item being acted upon. Goecks and Shavlik [8] approximated human labels by collecting a set of page activity measures while users browsed the World Wide Web. The authors hypothesized correlations between a high degree of page activity and a user's interest. While the results were promising, the sample size was small and the implicit measures were not tested against explicit judgments of user interest. Claypool et al. [6] studied how several implicit measures related to the interests of the user. They developed a custom browser called the Curious Browser to gather data, in a computer lab, about implicit interest indicators and to probe for explicit judgments of Web pages visited. Claypool et al. found that the time spent on a page, the amount of scrolling on a page, and the combination of time and scrolling have a strong positive relationship with explicit interest, while individual scrolling methods and mouse-clicks were not correlated with explicit interest. Fox et al.
[7] explored the relationship between implicit and explicit measures in Web search. They built an instrumented browser to collect data and then developed Bayesian models to relate implicit measures and explicit relevance judgments for both individual queries and search sessions. They found that clickthrough was the most important individual variable but that predictive accuracy could be improved by using additional variables, notably dwell time on a page. Joachims [9] developed valuable insights into the collection of implicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions. More recently, Joachims et al. [10] presented an empirical evaluation of interpreting clickthrough evidence. By performing eye tracking studies and correlating predictions of their strategies with explicit ratings, the authors showed that it is possible to accurately interpret clickthrough events in a controlled, laboratory setting. A more comprehensive overview of studies of implicit measures is given in Kelly and Teevan [12]. Unfortunately, the extent to which existing research applies to real-world web search is unclear. In this paper, we build on previous research to develop robust user behavior interpretation models for the real web search setting.

3. LEARNING USER BEHAVIOR MODELS

As we noted earlier, real web search user behavior can be noisy in the sense that user behaviors are only probabilistically related to explicit relevance judgments and preferences. Hence, instead of treating each user as a reliable expert, we aggregate information from many unreliable user search session traces. Our main approach is to model user web search behavior as if it were generated by two components: a relevance component, query-specific behavior influenced by the apparent result relevance, and a background component, users clicking indiscriminately. Our general idea is to model the deviations from the expected user behavior. Hence, in
addition to basic features, which we will describe in detail in Section 3.2, we compute derived features that measure the deviation of the observed feature value for a given search result from the expected values for a result, with no query-dependent information. We motivate our intuitions with a particularly important behavior feature, result clickthrough, analyzed next, and then introduce our general model of user behavior that incorporates other user actions (Section 3.2).

3.1 A Case Study in Click Distributions

As we discussed, we aggregate statistics across many user sessions. A click on a result may mean that some user found the result summary promising; it could also be caused by people clicking indiscriminately. In general, individual user behavior, clickthrough and otherwise, is noisy, and cannot be relied upon for accurate relevance judgments. The data set is described in more detail in Section 5.2. For the present it suffices to note that we focus on a random sample of 3,500 queries that were randomly sampled from query logs. For these queries we aggregate click data over more than 120,000 searches performed over a three-week period. We also have explicit relevance judgments for the top 10 results for each query. Figure 3.1 shows the relative clickthrough frequency as a function of result position. The aggregated click frequency at result position p is calculated by first computing the frequency of a click at p for each query (i.e., approximating the probability that a randomly chosen click for that query would land on position p). These frequencies are then averaged across queries and normalized so that the relative frequency of a click at the top position is 1. The resulting distribution agrees with previous observations that users click more often on top-ranked results. This reflects the fact that search engines do a reasonable job of ranking results, as well as biases to click top results and noise; we attempt to separate these components in
the analysis that follows.

[Figure 3.1: Relative click frequency for top 30 result positions over 3,500 queries and 120,000 searches.]

First we consider the distribution of clicks for the relevant documents for these queries. Figure 3.2 reports the aggregated click distribution for queries with varying Position of Top Relevant document (PTR). While there are many clicks above the first relevant document for each distribution, there are clearly peaks in click frequency for the first relevant result. For example, for queries with the top relevant result in position 2, the relative click frequency at that position (second bar) is higher than the click frequency at other positions for these queries. Nevertheless, many users still click on the non-relevant results in position 1 for such queries. This shows a stronger property of the bias in the click distribution towards top results: users click more often on results that are ranked higher, even when they are not relevant.

[Figure 3.2: Relative click frequency for queries with varying PTR (Position of Top Relevant document).]

[Figure 3.3: Relative corrected click frequency for relevant documents with varying PTR (Position of Top Relevant).]

If we subtract the background distribution of Figure 3.1 from the mixed distribution of Figure 3.2, we obtain the distribution in Figure 3.3, where the remaining click frequency distribution can be interpreted as the relevance component of the results. Note that the corrected click distribution correlates closely with actual result relevance as explicitly rated by human judges.

3.2 Robust
User Behavior Model

Clicks on search results comprise only a small fraction of the post-search activities typically performed by users. We now introduce our techniques for going beyond the clickthrough statistics and explicitly modeling post-search user behavior. Although clickthrough distributions are heavily biased towards top results, we have just shown how the 'relevance-driven' click distribution can be recovered by correcting for the prior, background distribution. We conjecture that other aspects of user behavior (e.g., page dwell time) are similarly distorted. Our general model includes two feature types for describing user behavior: direct and deviational, where the former is the directly measured values, and the latter is the deviation from the expected values estimated from the overall (query-independent) distributions for the corresponding directly observed features. More formally, we postulate that the observed value o of a feature f for a query q and result r can be expressed as a mixture of two components:

o(q, r, f) = C(f) + rel(q, r, f)    (1)

where C(f) is the prior background distribution for values of f aggregated across all queries, and rel(q, r, f) is the component of the behavior influenced by the relevance of the result r. As illustrated above with the clickthrough feature, if we subtract the background distribution (i.e., the expected clickthrough for a result at a given position) from the observed clickthrough frequency at a given position, we can approximate the relevance component of the clickthrough value1. In order to reduce the effect of individual user variations in behavior, we average observed feature values across all users and search sessions for each query-URL pair. This aggregation gives the additional robustness of not relying on individual noisy user interactions. In summary, the user behavior for a query-URL pair is represented by a feature vector that includes both the directly observed features and the derived, corrected feature
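The decomposition in Equation 1 can be applied directly: given a query's observed feature values by position and the query-independent background distribution, the relevance component is the residual. A minimal sketch with made-up numbers; the dict-based helper is ours, not the authors' implementation.

```python
def relevance_component(observed, background):
    """Per Equation 1, o(q, r, f) = C(f) + rel(q, r, f): estimate the
    relevance-driven part of a behavior feature by subtracting the
    query-independent background value from the observed value.
    Both arguments map result position -> feature value."""
    return {p: observed[p] - background.get(p, 0.0) for p in observed}

# Hypothetical numbers (not from the paper): one query's relative click
# frequencies vs. the position-bias background aggregated over all queries.
background = {1: 1.0, 2: 0.55, 3: 0.35}
observed = {1: 0.60, 2: 0.90, 3: 0.30}

rel = relevance_component(observed, background)
# Position 2 is clicked far more than position bias alone predicts,
# suggesting the result there is relevant despite its lower rank.
```

The same subtraction yields the derived ClickDeviation, DwellTimeDeviation, and similar corrected features listed in Table 3.1.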
values. We now describe the actual features we use to represent user behavior.

3.3 Features for Representing User Behavior

Our goal is to devise a sufficiently rich set of features that allow us to characterize when a user will be satisfied with a web search result. Once the user has submitted a query, they perform many different actions (reading snippets, clicking results, navigating, refining their query) which we capture and summarize. This information was obtained via opt-in client-side instrumentation from users of a major web search engine. This rich representation of user behavior is similar in many respects to the recent work by Fox et al. [7]. An important difference is that many of our features are (by design) query-specific, whereas theirs was (by design) a general, query-independent model of user behavior. Furthermore, we include derived, distributional features computed as described above. The features we use to represent user search interactions are summarized in Table 3.1. For clarity, we organize the features into the groups Query-text, Clickthrough, and Browsing.

Query-text features: Users decide which results to examine in more detail by looking at the result title, URL, and summary; in some cases, looking at the original document is not even necessary. To model this aspect of user experience we defined features to characterize the nature of the query and its relation to the snippet text. These include features such as the overlap between the words in the title and in the query (TitleOverlap) and the fraction of words shared by the query and the result summary (SummaryOverlap).

Browsing features: Simple aspects of the user's web page interactions can be captured and quantified. These features are used to characterize interactions with pages beyond the results page. For example, we compute how long users dwell on a page (TimeOnPage) or domain (TimeOnDomain), and the deviation of dwell time from the expected page dwell time for a query. These
features allow us to model intra-query diversity of page browsing behavior (e.g., navigational queries, on average, are likely to have shorter page dwell time than transactional or informational queries). We include both the direct features and the derived features described above.

Clickthrough features: Clicks are a special case of user interaction with the search engine. We include all the features necessary to learn the clickthrough-based strategies described in Sections 4.1 and 4.4. For example, for a query-URL pair we provide the number of clicks for the result (ClickFrequency), as well as whether there was a click on a result below or above the current URL (IsClickBelow, IsClickAbove). The derived feature values such as ClickRelativeFrequency and ClickDeviation are computed as described in Equation 1.

1 Of course, this is just a rough estimate, as the observed background distribution also includes the relevance component.

Query-text features:
- TitleOverlap: Fraction of shared words between query and title
- SummaryOverlap: Fraction of shared words between query and summary
- QueryURLOverlap: Fraction of shared words between query and URL
- QueryDomainOverlap: Fraction of shared words between query and domain
- QueryLength: Number of tokens in query
- QueryNextOverlap: Average fraction of words shared with next query

Browsing features:
- TimeOnPage: Page dwell time
- CumulativeTimeOnPage: Cumulative time for all subsequent pages after search
- TimeOnDomain: Cumulative dwell time for this domain
- TimeOnShortUrl: Cumulative time on URL prefix, dropping parameters
- IsFollowedLink: 1 if followed link to result, 0 otherwise
- IsExactUrlMatch: 0 if aggressive normalization used, 1 otherwise
- IsRedirected: 1 if initial URL same as final URL, 0 otherwise
- IsPathFromSearch: 1 if only followed links after query, 0 otherwise
- ClicksFromSearch: Number of hops to reach page from query
- AverageDwellTime: Average time on page for this query
- DwellTimeDeviation: Deviation from overall average dwell time on page
- CumulativeDeviation: Deviation from average cumulative time on page
- DomainDeviation: Deviation from average time on domain
- ShortURLDeviation: Deviation from average time on short URL

Clickthrough features:
- Position: Position of the URL in Current ranking
- ClickFrequency: Number of clicks for this query, URL pair
- ClickRelativeFrequency: Relative frequency of a click for this query and URL
- ClickDeviation: Deviation from expected click frequency
- IsNextClicked: 1 if there is a click on next position, 0 otherwise
- IsPreviousClicked: 1 if there is a click on previous position, 0 otherwise
- IsClickAbove: 1 if there is a click above, 0 otherwise
- IsClickBelow: 1 if there is a click below, 0 otherwise

Table 3.1: Features used to represent post-search interactions for a given query and search result URL.

3.4 Learning a Predictive Behavior Model

Having described our features, we now turn to the actual method of mapping the features to user preferences. We attempt to learn a general implicit feedback interpretation strategy automatically instead of relying on heuristics or insights. We consider this approach to be preferable to heuristic strategies, because we can always mine more data instead of relying (only) on our intuition and limited laboratory evidence. Our general approach is to train a classifier to induce weights for the user behavior features, and consequently derive a predictive model of user preferences. The training is done by comparing a wide range of implicit behavior measures with explicit user judgments for a set of queries. For this, we use a large random sample of queries in the search query log of a popular web search engine, the sets of results (identified by URLs) returned for each of the queries, and any explicit relevance judgments available for each query/result pair. We can then analyze the user behavior for all the instances where these queries were submitted to the search engine. To learn the mapping from features to relevance preferences, we use a scalable
implementation of neural networks, RankNet [4], capable of learning to rank a set of given items. More specifically, for each judged query we check if a result link has been judged. If so, the label is assigned to the query/URL pair and to the corresponding feature vector for that search result. These vectors of feature values corresponding to URLs judged relevant or non-relevant by human annotators become our training set. RankNet has demonstrated excellent performance in learning to rank objects in a supervised setting, hence we use RankNet for our experiments.

4. PREDICTING USER PREFERENCES

In our experiments, we explore several models for predicting user preferences. These models range from using no implicit user feedback to using all available implicit user feedback. Ranking search results to predict user preferences is a fundamental problem in information retrieval. Most traditional IR and web search approaches use a combination of page and link features to rank search results, and a representative state-of-the-art ranking system will be used as our baseline ranker (Section 4.1). At the same time, user interactions with a search engine provide a wealth of information. A commonly considered type of interaction is user clicks on search results. Previous work [9], as described above, also examined which results were skipped (e.g., "skip above" and "skip next") and other related strategies to induce preference judgments from the users' skipping over results and not clicking on following results. We have also added refinements of these strategies to take into account the variability observed in realistic web scenarios. We describe these strategies in Section 4.2. As clickthroughs are just one aspect of user interaction, we extend the relevance estimation by introducing a machine learning model that incorporates clicks as well as other aspects of user behavior, such as follow-up queries and page dwell time (Section 4.3). We conclude this
section by briefly describing our baseline, a state-of-the-art ranking algorithm used by an operational web search engine.

4.1 Baseline Model

A key question is whether browsing behavior can provide information absent from existing explicit judgments used to train an existing ranker. For our baseline system we use a state-of-the-art page ranking system currently used by a major web search engine. Hence, we will call this system Current for the subsequent discussion. While the specific algorithms used by the search engine are beyond the scope of this paper, the algorithm ranks results based on hundreds of features such as query to document similarity, query to anchor text similarity, and intrinsic page quality. The Current web search engine rankings provide a strong system for the comparisons and experiments of the next two sections.

4.2 Clickthrough Model

If we assume that every user click was motivated by a rational process that selected the most promising result summary, we can then interpret each click as described in Joachims et al. [10]. By studying eye tracking and comparing clicks with explicit judgments, they identified a few basic strategies. We discuss the two strategies that performed best in their experiments, Skip Above and Skip Next.

Strategy SA (Skip Above): For a set of results for a query and a clicked result at position p, all unclicked results ranked above p are predicted to be less relevant than the result at p.

In addition to information about results above the clicked result, we also have information about the result immediately following the clicked one. An eye tracking study performed by Joachims et al.
[10] showed that users usually consider the result immediately following the clicked result in the current ranking. Their Skip Next strategy uses this observation to predict that a result following the clicked result at p is less relevant than the clicked result, with accuracy comparable to the SA strategy above. For better coverage, we combine the SA strategy with this extension to derive the Skip Above + Skip Next strategy:

Strategy SA+N (Skip Above + Skip Next): This strategy predicts all unclicked results immediately following a clicked result as less relevant than the clicked result, and combines these predictions with those of the SA strategy above.

We experimented with variations of these strategies, and found that SA+N outperformed both SA and the original Skip Next strategy, so we will consider the SA and SA+N strategies in the rest of the paper. These strategies are motivated and empirically tested for individual users in a laboratory setting. As we will show, these strategies do not work as well in the real web search setting due to the inherent inconsistency and noisiness of individual users' behavior. The general approach for using our clickthrough models directly is to filter clicks to those that reflect higher-than-chance click frequency. We then use the same SA and SA+N strategies, but only for clicks that have higher-than-expected frequency according to our model. For this, we estimate the relevance component rel(q, r, f) of the observed clickthrough feature f as the deviation from the expected (background) clickthrough distribution C(f).

Strategy CD (deviation d): For a given query, compute the observed click frequency distribution o(r, p) for all results r in positions p. The click deviation for a result r in position p, dev(r, p), is computed as:

dev(r, p) = o(r, p) - C(p)

where C(p) is the expected clickthrough at position p. If dev(r, p) > d, retain the click as input to the SA+N strategy above, and apply the SA+N strategy over the filtered set of
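The CD filtering step can be sketched in a few lines. This is our illustrative reconstruction, not the authors' code: the helper name, the positions, and the frequencies are hypothetical, and coverage details (e.g., ties) are ignored.

```python
def cd_preferences(clicks, background, d):
    """Sketch of the CD strategy: keep only clicks whose observed
    frequency exceeds the expected background click frequency at that
    position by more than d, then apply the SA+N interpretation.

    `clicks` maps position -> observed click frequency for this query;
    `background` maps position -> expected (background) click frequency.
    Returns pairs (preferred_position, less_relevant_position)."""
    trusted = {p for p, freq in clicks.items()
               if freq - background.get(p, 0.0) > d}
    unclicked = set(background) - {p for p, f in clicks.items() if f > 0}
    prefs = set()
    for p in trusted:
        # Skip Above: unclicked results ranked above the trusted click
        for q in unclicked:
            if q < p:
                prefs.add((p, q))
        # Skip Next: the unclicked result immediately below the click
        if p + 1 in unclicked:
            prefs.add((p, p + 1))
    return prefs

# Hypothetical query over positions 1-5: clicks at positions 2 and 4,
# but only the click at 2 beats the position bias by more than d.
background = {1: 1.0, 2: 0.55, 3: 0.35, 4: 0.25, 5: 0.2}
clicks = {2: 0.9, 4: 0.3}
prefs = cd_preferences(clicks, background, d=0.1)
```

Here the weak click at position 4 is filtered out, so only the trusted click at position 2 generates preferences (over the unclicked positions 1 and 3).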
click events. The choice of d selects the tradeoff between recall and precision. While the above strategy extends SA and SA+N, it still assumes that a (filtered) clicked result is preferred over all unclicked results presented to the user above the clicked position. However, for informational queries, multiple results may be clicked, with varying frequency. Hence, it is preferable to individually compare results for a query by considering the difference between the estimated relevance components of the click distribution of the corresponding query results. We now define a generalization of the previous clickthrough interpretation strategy:

Strategy CDiff (margin m): Compute the deviation dev(r, p) for each result r1...rn in its position p. For each pair of results ri and rj, predict a preference of ri over rj iff dev(ri, pi) - dev(rj, pj) > m.

As in CD, the choice of m selects the tradeoff between recall and precision. The pairs may be preferred in the original order or in reverse of it. Given the margin, two results might be effectively indistinguishable, but only one can possibly be preferred over the other. Intuitively, CDiff generalizes the skip idea above to include cases where the user skipped (i.e., clicked less than expected) rj and preferred (i.e., clicked more than expected) ri. Furthermore, this strategy allows for differentiation within the set of clicked results, making it more appropriate to noisy user behavior. CDiff and CD are complementary: CDiff is a generalization of the clickthrough frequency model of CD, but it ignores the positional information used in CD. Hence, combining the two strategies to improve coverage is a natural approach:

Strategy CD+CDiff (deviation d, margin m): Union of CD and CDiff predictions.

Other variations of the above strategies were considered, but these five methods cover the range of observed performance.

4.3 General User Behavior Model

The strategies described in the previous section generate orderings based solely
on observed clickthrough frequencies. As we discussed, clickthrough is just one, albeit important, aspect of user interactions with web search engine results. We now present our general strategy that relies on the automatically derived predictive user behavior models (Section 3).

The UserBehavior Strategy: For a given query, each result is represented with the features in Table 3.1. Relative user preferences are then estimated using the learned user behavior model described in Section 3.4. Recall that to learn a predictive behavior model we used the features from Table 3.1 along with explicit relevance judgments as input to RankNet, which learns an optimal weighting of features to predict preferences.

This strategy models user interaction with the search engine, allowing it to benefit from the wisdom of crowds interacting with the results and the pages beyond. As our experiments in the subsequent sections demonstrate, modeling a richer set of user interactions beyond clickthroughs results in more accurate predictions of user preferences.

5. EXPERIMENTAL SETUP

We now describe our experimental setup. We first describe the methodology used, including our evaluation metrics (Section 5.1). Then we describe the datasets (Section 5.2) and the methods we compared in this study (Section 5.3).

5.1 Evaluation Methodology and Metrics

Our evaluation focuses on the pairwise agreement between preferences for results. This allows us to compare to previous work [9, 10]. Furthermore, for many applications such as tuning ranking functions, pairwise preferences can be used directly for training [1, 4, 9]. The evaluation is based on comparing preferences predicted by various models to the correct preferences derived from the explicit user relevance judgments. We discuss other applications of our models beyond web search ranking in Section 7. To create our set of test pairs we take each query and compute the cross-product between all search results, returning preferences for
pairs according to the order of the associated relevance labels.\nTo avoid ambiguity in evaluation, we discard all ties (i.e., pairs with equal label).\nIn order to compute the accuracy of our preference predictions with respect to the correct preferences, we adapt the standard Recall and Precision measures [20].\nWhile our task of computing pairwise agreement is different from the absolute relevance ranking task, the metrics are used in the similar way.\nSpecifically, we report the average query recall and precision.\nFor our task, Query Precision and Query Recall for a query q are defined as: \u2022 Query Precision: Fraction of predicted preferences for results for q that agree with preferences obtained from explicit human judgment.\n\u2022 Query Recall: Fraction of preferences obtained from explicit human judgment for q that were correctly predicted.\nThe overall Recall and Precision are computed as the average of Query Recall and Query Precision, respectively.\nA drawback of this evaluation measure is that some preferences may be more valuable than others, which pairwise agreement does not capture.\nWe discuss this issue further when we consider extensions to the current work in Section 7.\n5.2 Datasets For evaluation we used 3,500 queries that were randomly sampled from query logs(for a major web search engine.\nFor each query the top 10 returned search results were manually rated on a 6-point scale by trained judges as part of ongoing relevance improvement effort.\nIn addition for these queries we also had user interaction data for more than 120,000 instances of these queries.\nThe user interactions were harvested from anonymous browsing traces that immediately followed a query submitted to the web search engine.\nThis data collection was part of voluntary opt-in feedback submitted by users from October 11 through October 31.\nThese three weeks (21 days) of user interaction data was filtered to include only the users in the English-U.S. 
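As a concrete illustration, the pair construction and per-query metrics defined above can be sketched as follows. This is our own minimal sketch, not code from the paper; the function and variable names are hypothetical, and the overall Recall and Precision would then be the averages of these per-query values:

```python
from itertools import combinations

def preference_pairs(labels):
    """Build gold preference pairs from per-result relevance labels.

    labels: dict mapping result identifier -> graded relevance label.
    Returns a set of (preferred, other) pairs over the cross-product
    of results; ties (equal labels) are discarded as ambiguous.
    """
    pairs = set()
    for a, b in combinations(labels, 2):
        if labels[a] > labels[b]:
            pairs.add((a, b))
        elif labels[b] > labels[a]:
            pairs.add((b, a))
        # equal labels: ambiguous pair, skip
    return pairs

def query_precision_recall(predicted, gold):
    """Query Precision: fraction of predicted pairs agreeing with gold.
    Query Recall: fraction of gold pairs that were predicted."""
    if not predicted or not gold:
        return 0.0, 0.0
    agree = len(predicted & gold)
    return agree / len(predicted), agree / len(gold)
```

For example, with labels {u1: 3, u2: 1, u3: 1}, the gold pairs are (u1, u2) and (u1, u3); a method predicting (u1, u2) and (u2, u3) gets Query Precision 0.5 and Query Recall 0.5.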
In order to better understand the effect of the amount of user interaction data available for a query on accuracy, we created subsets of our data (Q1, Q10, and Q20) that contain different amounts of interaction data:

• Q1: Human-rated queries with at least 1 click on results recorded (3,500 queries, 28,093 query-URL pairs).
• Q10: Queries in Q1 with at least 10 clicks (1,300 queries, 18,728 query-URL pairs).
• Q20: Queries in Q1 with at least 20 clicks (1,000 queries, 12,922 query-URL pairs).

These datasets were collected as part of normal user experience and hence have different characteristics than previously reported datasets collected in laboratory settings. Furthermore, the data size is an order of magnitude larger than that of any study reported in the literature.

5.3 Methods Compared

We considered a number of methods for comparison. We compared our UserBehavior model (Section 4.3) to previously published implicit feedback interpretation techniques and some variants of these approaches (Section 4.2), and to the current search engine ranking based on query and page features alone (Section 4.1). Specifically, we compare the following strategies:

• SA: The Skip Above clickthrough strategy (Section 4.2).
• SA+N: A more comprehensive extension of SA that takes better advantage of the current search engine ranking.
• CD: Our refinement of SA+N that takes advantage of our mixture model of the clickthrough distribution to select trusted clicks for interpretation (Section 4.2).
• CDiff: Our generalization of the CD strategy that explicitly uses the relevance component of clickthrough probabilities to induce preferences between search results (Section 4.2).
• CD+CDiff: The strategy combining CD and CDiff as the union of the preferences predicted by both (Section 4.2).
• UserBehavior: We order predictions by the decreasing highest score of any page. In our preliminary experiments we observed that higher ranker scores indicate higher confidence in the predictions. This heuristic allows us to make a graceful recall-precision tradeoff by using the score of the highest-ranked result to threshold the queries (Section 4.3).
• Current: The current search engine ranking (Section 4.1). Note that the Current ranker implementation was trained over a superset of the rated query/URL pairs in our datasets, but using the same truth labels as we do for our evaluation.

Training/Test Split: The only strategy that required splitting the datasets into training and test sets was the UserBehavior method. To evaluate UserBehavior we train and validate on 75% of the labeled queries, and test on the remaining 25%. The sampling was done per query (i.e., all results for a chosen query were included in the respective dataset, and there was no overlap in queries between the training and test sets). It is worth noting that both the ad hoc SA and SA+N strategies, as well as the distribution-based strategies (CD, CDiff, and CD+CDiff), do not require separate training and test sets, since they are based on heuristics for detecting anomalous click frequencies for results. Hence, all strategies except for UserBehavior were tested on the full set of queries and associated relevance preferences, while UserBehavior was tested on a randomly chosen hold-out subset of the queries as described above. To make sure we are not favoring UserBehavior, we also tested all other strategies on the same hold-out test sets, resulting in the same accuracy as testing over the complete datasets.

6. RESULTS

We now turn to the experimental evaluation of predicting relevance preferences of web search results. Figure 6.1 shows the recall-precision results over the Q1 query set (Section 5.2). The results indicate that the previous click interpretation strategies, SA and SA+N, perform suboptimally in this setting, exhibiting precision of 0.627 and 0.638, respectively. Furthermore, there is no mechanism to perform a recall-precision trade-off with SA and SA+N, as they do not provide prediction confidence.
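The confidence-thresholding heuristic used by UserBehavior to trade recall for precision can be sketched as follows. This is our own simplified, micro-averaged illustration (the paper reports averages of per-query metrics); all names are hypothetical:

```python
def curve_from_threshold_sweep(query_predictions, gold, thresholds):
    """Sweep a confidence threshold over per-query top ranker scores.

    At each threshold, preferences are predicted only for queries whose
    highest result score clears it; lowering the threshold trades
    precision for recall.

    query_predictions: dict query -> (top_score, predicted preference pairs)
    gold: dict query -> gold preference pairs
    Returns a list of (recall, precision) points, one per threshold.
    """
    points = []
    total_gold = sum(len(g) for g in gold.values())
    for t in thresholds:
        agree = predicted = 0
        for q, (top_score, pairs) in query_predictions.items():
            if top_score < t:
                continue  # below confidence threshold: make no prediction
            predicted += len(pairs)
            agree += len(pairs & gold.get(q, set()))
        if predicted:
            points.append((agree / total_gold, agree / predicted))
    return points
```

Sweeping the threshold from high to low traces out a recall-precision curve of the kind reported in Figures 6.1 through 6.3.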
In contrast, our clickthrough distribution-based techniques CD and CD+CDiff exhibit somewhat higher precision than SA and SA+N (0.648 and 0.717 at a recall of 0.08, the maximum achieved by SA or SA+N).

Figure 6.1: Precision vs. recall of the SA, SA+N, CD, CDiff, CD+CDiff, UserBehavior, and Current relevance prediction methods over the Q1 dataset.

Interestingly, CDiff alone exhibits precision equal to that of SA (0.627) at the same recall of 0.08. In contrast, by combining the CD and CDiff strategies (the CD+CDiff method) we achieve the best performance of all clickthrough-based strategies, exhibiting precision above 0.66 for recall values up to 0.14, and higher at lower recall levels. Clearly, aggregating and intelligently interpreting clickthroughs yields significant gains for realistic web search over the previously described strategies. However, even the CD+CDiff clickthrough interpretation strategy can be improved upon by automatically learning to interpret the aggregated clickthrough evidence.

But first, we consider the best performing strategy, UserBehavior. Incorporating post-search navigation history in addition to clickthroughs (Browsing features) results in the highest recall and precision among all methods compared. Browse exhibits precision above 0.7 at a recall of 0.16, significantly outperforming our Baseline and clickthrough-only strategies. Furthermore, Browse is able to achieve high recall (as high as 0.43) while maintaining precision (0.67) significantly higher than the baseline ranking.

To further analyze the value of the different dimensions of implicit feedback modeled by the UserBehavior strategy, we consider each group of features in isolation. Figure 6.2 reports precision vs. recall for each feature group. Interestingly, Query-text alone has low accuracy (only marginally better than Random). Furthermore, Browsing features alone have higher precision (with lower maximum recall achieved) than considering all of the features in our UserBehavior model. Applying different machine learning methods for combining classifier predictions may increase the performance of using all features at all recall values.

Figure 6.2: Precision vs. recall for predicting relevance with each group of features individually.

Figure 6.3: Precision vs. recall of CD+CDiff and UserBehavior for query sets Q1, Q10, and Q20 (queries with at least 1, at least 10, and at least 20 clicks, respectively).

Interestingly, the ranker trained over Clickthrough-only features achieves substantially higher recall and precision than the human-designed clickthrough interpretation strategies described earlier. For example, the clickthrough-trained classifier achieves 0.67 precision at 0.42 recall vs. the maximum recall of 0.14 achieved by the CD+CDiff strategy.

Our clickthrough and user behavior interpretation strategies rely on extensive user interaction data. We consider the effects of having sufficient interaction data available for a query before proposing a re-ranking of results for that query. Figure 6.3 reports recall-precision curves for the CD+CDiff and UserBehavior methods for different test query sets with at least 1 click (Q1), 10 clicks (Q10), and 20 clicks (Q20) available per query. Not surprisingly, CD+CDiff improves with more clicks. This indicates that accuracy will improve as more user interaction histories become available and more queries from the Q1 set acquire comprehensive interaction histories. Similarly, the UserBehavior strategy performs better for queries with 10 and 20 clicks, although the improvement is less dramatic than for CD+CDiff. For queries with sufficient clicks, CD+CDiff exhibits precision comparable with Browse at lower recall.

Figure 6.4: Recall of the CD+CDiff and UserBehavior strategies at a fixed minimum precision of 0.7 for varying amounts of user activity data (7, 12, 17, and 21 days).

Our techniques often do not make relevance predictions for search results (i.e., if no interaction data is available for the lower-ranked results), consequently maintaining higher precision at the expense of recall. In contrast, the current search engine always makes a prediction for every result for a given query. As a consequence, the recall of Current is high (0.627) at the expense of lower precision.

As another dimension of acquiring training data, we consider the learning curve with respect to the amount (days) of training data available. Figure 6.4 reports the recall of the CD+CDiff and UserBehavior strategies for varying amounts of training data collected over time. We fixed the minimum precision for both strategies at 0.7, a point substantially higher than the baseline (0.625).
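The "recall at a fixed minimum precision" measurement used for Figure 6.4 amounts to picking the highest-recall point on a recall-precision curve that still meets the precision floor. A minimal sketch with hypothetical names:

```python
def recall_at_min_precision(curve, min_precision=0.7):
    """Given (recall, precision) points from a confidence-threshold
    sweep, return the highest recall achieved while keeping precision
    at or above min_precision; 0.0 if no point qualifies."""
    feasible = [recall for recall, precision in curve if precision >= min_precision]
    return max(feasible, default=0.0)
```

Tracking this quantity as the interaction log grows (7, 12, 17, 21 days) gives the learning curves reported in Figure 6.4.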
As expected, the recall of both strategies improves quickly with more days of interaction data examined.

We now briefly summarize our experimental results. We showed that by intelligently aggregating user clickthroughs across queries and users, we can achieve higher accuracy in predicting user preferences. Because of the skewed distribution of user clicks, our clickthrough-only strategies have high precision but low recall (i.e., they do not attempt to predict the relevance of many search results). Nevertheless, our CD+CDiff clickthrough strategy outperforms the most recent state-of-the-art results by a large margin (0.72 precision for CD+CDiff vs. 0.64 for SA+N) at the highest recall level of SA+N. Furthermore, by considering the comprehensive UserBehavior features that model user interactions after the search and beyond the initial click, we can achieve substantially higher precision and recall than by considering clickthrough alone. Our UserBehavior strategy achieves recall of over 0.43 with precision of over 0.67 (with much higher precision at lower recall levels), substantially outperforming the current search engine preference ranking and all other implicit feedback interpretation methods.

7. CONCLUSIONS AND FUTURE WORK

Our paper is the first, to our knowledge, to interpret post-search user behavior to estimate user preferences in a real web search setting. We showed that our robust models result in higher prediction accuracy than previously published techniques. We introduced new, robust, probabilistic techniques for interpreting clickthrough evidence by aggregating across users and queries. Our methods result in clickthrough interpretation substantially more accurate than previously published results not specifically designed for web search scenarios. Our methods' predictions of relevance preferences are substantially more accurate than the current state-of-the-art search result ranking that does not consider user interactions.

We also presented a general model for interpreting post-search user behavior that incorporates clickthrough, browsing, and query features. By considering the complete search experience after the initial query and click, we demonstrated prediction accuracy far exceeding that of interpreting only the limited clickthrough information. Furthermore, we showed that automatically learning to interpret user behavior results in substantially better performance than the human-designed ad hoc clickthrough interpretation strategies. Another benefit of automatically learning to interpret user behavior is that such methods can adapt to changing conditions and changing user profiles. For example, the user behavior model for intranet search may differ from web search behavior. Our general UserBehavior method would be able to adapt to these changes by automatically learning to map new behavior patterns to explicit relevance ratings.

A natural application of our preference prediction models is to improve web search ranking [1]. In addition, our work has many potential applications, including click spam detection, search abuse detection, personalization, and domain-specific ranking. For example, our automatically derived behavior models could be trained on examples of search abuse or click spam behavior instead of relevance labels. Alternatively, our models could be used directly to detect anomalies in user behavior, whether due to abuse or to operational problems with the search engine.

While our techniques perform well on average, our assumptions about clickthrough distributions (and about learning the user behavior models) may not hold equally well for all queries. For example, queries with divergent access patterns (e.g., ambiguous queries with multiple meanings) may result in behavior inconsistent with the model learned over all queries. Hence, clustering queries and learning a different predictive model for each query type is a promising research direction. Query distributions also change over time, and it would be productive to investigate how that affects the predictive ability of these models. Furthermore, some predicted preferences may be more valuable than others, and we plan to investigate different metrics to capture the utility of the predicted preferences.

As we showed in this paper, using the wisdom of crowds can give us an accurate interpretation of user interactions even in the inherently noisy web search setting. Our techniques allow us to automatically predict relevance preferences for web search results with accuracy greater than that of previously published methods. The predicted relevance preferences can be used for automatic relevance evaluation and tuning, for deploying search in new settings, and ultimately for improving the overall web search experience.

8. REFERENCES

[1] E. Agichtein, E. Brill, and S. Dumais. Improving Web Search Ranking by Incorporating User Behavior. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 2006.
[2] J. Allan. HARD Track Overview in TREC 2003: High Accuracy Retrieval from Documents. In Proceedings of TREC 2003, 24-37, 2004.
[3] S. Brin and L. Page. The Anatomy of a Large-Scale Hypertextual Web Search Engine. In Proceedings of WWW7, 107-117, 1998.
[4] C.J.C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to Rank Using Gradient Descent. In Proceedings of the International Conference on Machine Learning (ICML), 2005.
[5] D.M. Chickering. The WinMine Toolkit. Microsoft Technical Report MSR-TR-2002-103, 2002.
[6] M. Claypool, D. Brown, P. Le, and M. Waseda. Inferring User Interest. In IEEE Internet Computing, 2001.
[7] S. Fox, K. Karnawat, M. Mydland, S. T. Dumais, and T. White. Evaluating Implicit Measures to Improve the Search Experience. In ACM Transactions on Information Systems, 2005.
[8] J. Goecks and J. Shavlik. Learning Users' Interests by Unobtrusively Observing Their Normal Behavior. In Proceedings of the IJCAI Workshop on Machine Learning for Information Filtering, 1999.
[9] T. Joachims. Optimizing Search Engines Using Clickthrough Data. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (SIGKDD), 2002.
[10] T. Joachims, L. Granka, B. Pang, H. Hembrooke, and G. Gay. Accurately Interpreting Clickthrough Data as Implicit Feedback. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 2005.
[11] T. Joachims. Making Large-Scale SVM Learning Practical. In Advances in Kernel Methods: Support Vector Learning, MIT Press, 1999.
[12] D. Kelly and J. Teevan. Implicit Feedback for Inferring User Preference: A Bibliography. In SIGIR Forum, 2003.
[13] J. Konstan, B. Miller, D. Maltz, J. Herlocker, L. Gordon, and J. Riedl. GroupLens: Applying Collaborative Filtering to Usenet News. In Communications of the ACM, 1997.
[14] M. Morita and Y. Shinoda. Information Filtering Based on User Behavior Analysis and Best Match Text Retrieval. In Proceedings of the ACM Conference on Research and Development in Information Retrieval (SIGIR), 1994.
[15] D. Oard and J. Kim. Implicit Feedback for Recommender Systems. In Proceedings of the AAAI Workshop on Recommender Systems, 1998.
[16] D. Oard and J. Kim. Modeling Information Content Using Observable Behavior. In Proceedings of the 64th Annual Meeting of the American Society for Information Science and Technology, 2001.
[17] P. Pirolli. The Use of Proximal Information Scent to Forage for Distal Content on the World Wide Web. In Working with Technology in Mind: Brunswikian Resources for Cognitive Science and Engineering, Oxford University Press, 2004.
[18] F. Radlinski and T. Joachims. Query Chains: Learning to Rank from Implicit Feedback. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), 2005.
[19] F. Radlinski and T. Joachims. Evaluating the Robustness of Learning from Implicit Feedback. In the ICML Workshop on Learning in Web Search, 2005.
[20] G. Salton and M. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1983.
[21] E.M. Voorhees and D. Harman. Overview of TREC, 2001.
information retrieval systems.\nRecently, automatic or implicit relevance feedback has developed into an active area of research in the information retrieval community, at least in part due to an increase in available resources and to the rising popularity of web search.\nHowever, most traditional\nIR work was performed over controlled test collections and carefully-selected query sets and tasks.\nTherefore, it is not clear whether these techniques will work for general real-world web search.\nA significant distinction is that web search is not controlled.\nIndividual users may behave irrationally or maliciously, or may not even be real users; all of this affects the data that can be gathered.\nBut the amount of the user interaction data is orders of magnitude larger than anything available in a non-web-search setting.\nBy using the aggregated behavior of large numbers of users (and not treating each user as an individual \"expert\") we can correct for the noise inherent in individual interactions, and generate relevance judgments that are more accurate than techniques not specifically designed for the web search setting.\nFurthermore, observations and insights obtained in laboratory settings do not necessarily translate to real world usage.\nHence, it is preferable to automatically induce feedback interpretation strategies from large amounts of user interactions.\nAutomatically learning to interpret user behavior would allow systems to adapt to changing conditions, changing user behavior patterns, and different search settings.\nWe present techniques to automatically interpret the collective behavior of users interacting with a web search engine to predict user preferences for search results.\nOur contributions include:\n\u2022 A distributional model of user behavior, robust to noise within individual user sessions, that can recover relevance preferences from user interactions (Section 3).\n\u2022 Extensions of existing clickthrough strategies to include richer 
browsing and interaction features (Section 4).\n\u2022 A thorough evaluation of our user behavior models, as well as of previously published state-of-the-art techniques, over a large set of web search sessions (Sections 5 and 6).\nWe discuss our results and outline future directions and various applications of this work in Section 7, which concludes the paper.\n2.\nBACKGROUND AND RELATED WORK\nRanking search results is a fundamental problem in information retrieval.\nThe most common approaches in the context of the web use both the similarity of the query to the page content, and the overall quality of a page [3, 201.\nA state-of-the-art search engine may use hundreds of features to describe a candidate page, employing sophisticated algorithms to rank pages based on these features.\nCurrent search engines are commonly tuned on human relevance judgments.\nHuman annotators rate a set of pages for a query according to perceived relevance, creating the \"gold standard\" against which different ranking algorithms can be evaluated.\nReducing the dependence on explicit human judgments by using implicit relevance feedback has been an active topic of research.\nSeveral research groups have evaluated the relationship between implicit measures and user interest.\nIn these studies, both reading\ntime and explicit ratings of interest are collected.\nMorita and Shinoda [14] studied the amount of time that users spent reading Usenet news articles and found that reading time could predict a user's interest levels.\nKonstan et al. 
[13] showed that reading time was a strong predictor of user interest in their GroupLens system.\nOard and Kim [15] studied whether implicit feedback could substitute for explicit ratings in recommender systems.\nMore recently, Oard and Kim [16] presented a framework for characterizing observable user behaviors using two dimensions--the underlying purpose of the observed behavior and the scope of the item being acted upon.\nGoecks and Shavlik [8] approximated human labels by collecting a set of page activity measures while users browsed the World Wide Web.\nThe authors hypothesized correlations between a high degree of page activity and a user's interest.\nWhile the results were promising, the sample size was small and the implicit measures were not tested against explicit judgments of user interest.\nClaypool et al. [6] studied how several implicit measures related to the interests of the user.\nThey developed a custom browser called the Curious Browser to gather data, in a computer lab, about implicit interest indicators and to probe for explicit judgments of Web pages visited.\nClaypool et al. found that the time spent on a page, the amount of scrolling on a page, and the combination of time and scrolling have a strong positive relationship with explicit interest, while individual scrolling methods and mouse-clicks were not correlated with explicit interest.\nFox et al. 
[7] explored the relationship between implicit and explicit measures in Web search.\nThey built an instrumented browser to collect data and then developed Bayesian models to relate implicit measures and explicit relevance judgments for both individual queries and search sessions.\nThey found that clickthrough was the most important individual variable but that predictive accuracy could be improved by using additional variables, notably dwell time on a page.\nJoachims [9] developed valuable insights into the collection of implicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions.\nMore recently, Joachims et al. [10] presented an empirical evaluation of interpreting clickthrough evidence.\nBy performing eye tracking studies and correlating predictions of their strategies with explicit ratings, the authors showed that it is possible to accurately interpret clickthrough events in a controlled, laboratory setting.\nA more comprehensive overview of studies of implicit measures is described in Kelly and Teevan [12].\nUnfortunately, the extent to which existing research applies to real-world web search is unclear.\nIn this paper, we build on previous research to develop robust user behavior interpretation models for the real web search setting.\n3.\nLEARNING USER BEHAVIOR MODELS\n3.1 A Case Study in Click Distributions\n3.2 Robust User Behavior Model\n3.3 Features for Representing User Behavior\n3.4 Learning a Predictive Behavior Model\n4.\nPREDICTING USER PREFERENCES\n4.1 Baseline Model\n4.2 Clickthrough Model\n4.3 General User Behavior Model\n5.\nEXPERIMENTAL SETUP\n5.1 Evaluation Methodology and Metrics\n5.2 Datasets\n5.3 Methods Compared\n6.\nRESULTS\n7.\nCONCLUSIONS AND FUTURE WORK\nOur paper is the first, to our knowledge, to interpret post-search user behavior to estimate user preferences in a real web search setting.\nWe showed that our robust models result in higher prediction accuracy than previously published 
techniques.\nWe introduced new, robust, probabilistic techniques for interpreting clickthrough evidence by aggregating across users and queries.\nOur methods result in clickthrough interpretation substantially more accurate than previously published results not specifically designed for web search scenarios.\nOur methods' predictions of relevance preferences are substantially more accurate than the current state-of-the-art search result ranking that does not\nconsider user interactions.\nWe also presented a general model for interpreting post-search user behavior that incorporates clickthrough, browsing, and query features.\nBy considering the complete search experience after the initial query and click, we demonstrated prediction accuracy far exceeding that of interpreting only the limited clickthrough information.\nFurthermore, we showed that automatically learning to interpret user behavior results in substantially better performance than the human-designed ad-hoc clickthrough interpretation strategies.\nAnother benefit of automatically learning to interpret user behavior is that such methods can adapt to changing conditions and changing user profiles.\nFor example, the user behavior model on intranet search may be different from the web search behavior.\nOur general UserBehavior method would be able to adapt to these changes by automatically learning to map new behavior patterns to explicit relevance ratings.\nA natural application of our preference prediction models is to improve web search ranking [1].\nIn addition, our work has many potential applications including click spam detection, search abuse detection, personalization, and domain-specific ranking.\nFor example, our automatically derived behavior models could be trained on examples of search abuse or click spam behavior instead of relevance labels.\nAlternatively, our models could be used directly to detect anomalies in user behavior--either due to abuse or to operational problems with the search 
engine.\nWhile our techniques perform well on average, our assumptions about clickthrough distributions (and learning the user behavior models) may not hold equally well for all queries.\nFor example, queries with divergent access patterns (e.g., for ambiguous queries with multiple meanings) may result in behavior inconsistent with the model learned for all queries.\nHence, clustering queries and learning different predictive models for each query type is a promising research direction.\nQuery distributions also change over time, and it would be productive to investigate how that affects the predictive ability of these models.\nFurthermore, some predicted preferences may be more valuable than others, and we plan to investigate different metrics to capture the utility of the predicted preferences.\nAs we showed in this paper, using the \"wisdom of crowds\" can give us an accurate interpretation of user interactions even in the inherently noisy web search setting.\nOur techniques allow us to automatically predict relevance preferences for web search results with accuracy greater than the previously published methods.\nThe predicted relevance preferences can be used for automatic relevance evaluation and tuning, for deploying search in new settings, and ultimately for improving the overall web search experience.","lvl-4":"Learning User Interaction Models for Predicting Web Search Result Preferences\nABSTRACT\nEvaluating user preferences of web search results is crucial for search engine development, deployment, and maintenance.\nWe present a real-world study of modeling the behavior of web search users to predict web search result preferences.\nAccurate modeling and interpretation of user behavior has important applications to ranking, click spam detection, web search personalization, and other tasks.\nOur key insight to improving robustness of interpreting implicit feedback is to model query-dependent deviations from the expected \"noisy\" user behavior.\nWe show 
that our model of clickthrough interpretation improves prediction accuracy over state-of-the-art clickthrough methods.\nWe generalize our approach to model user behavior beyond clickthrough, which results in higher preference prediction accuracy than models based on clickthrough information alone.\nWe report results of a large-scale experimental evaluation that show substantial improvements over published implicit feedback interpretation methods.\n1.\nINTRODUCTION\nRelevance measurement is crucial to web search and to information retrieval in general.\nTraditionally, search relevance is measured by using human assessors to judge the relevance of query-document pairs.\nHowever, explicit human ratings are expensive and difficult to obtain.\nAt the same time, millions of people interact daily with web search engines, providing valuable implicit feedback through their interactions with the search results.\nIf we could turn these interactions into relevance judgments, we could obtain large amounts of data for evaluating, maintaining, and improving information retrieval systems.\nHowever, most traditional IR work was performed over controlled test collections and carefully-selected query sets and tasks.\nTherefore, it is not clear whether these techniques will work for general real-world web search.\nA significant distinction is that web search is not controlled.\nIndividual users may behave irrationally or maliciously, or may not even be real users; all of this affects the data that can be gathered.\nBut the amount of the user interaction data is orders of magnitude larger than anything available in a non-web-search setting.\nHence, it is preferable to automatically induce feedback interpretation strategies from large amounts of user interactions.\nAutomatically learning to interpret user behavior would allow systems to adapt to changing conditions, changing user behavior patterns, and different search settings.\nWe present techniques to automatically interpret the
collective behavior of users interacting with a web search engine to predict user preferences for search results.\nOur contributions include:\n\u2022 A distributional model of user behavior, robust to noise within individual user sessions, that can recover relevance preferences from user interactions (Section 3).\n\u2022 Extensions of existing clickthrough strategies to include richer browsing and interaction features (Section 4).\n\u2022 A thorough evaluation of our user behavior models, as well as of previously published state-of-the-art techniques, over a large set of web search sessions (Sections 5 and 6).\nWe discuss our results and outline future directions and various applications of this work in Section 7, which concludes the paper.\n2.\nBACKGROUND AND RELATED WORK\nRanking search results is a fundamental problem in information retrieval.\nA state-of-the-art search engine may use hundreds of features to describe a candidate page, employing sophisticated algorithms to rank pages based on these features.\nCurrent search engines are commonly tuned on human relevance judgments.\nReducing the dependence on explicit human judgments by using implicit relevance feedback has been an active topic of research.\nSeveral research groups have evaluated the relationship between implicit measures and user interest.\nIn these studies, both reading time and explicit ratings of interest are collected.\nMorita and Shinoda [14] studied the amount of time that users spent reading Usenet news articles and found that reading time could predict a user's interest levels.\nKonstan et al.
[13] showed that reading time was a strong predictor of user interest in their GroupLens system.\nOard and Kim [15] studied whether implicit feedback could substitute for explicit ratings in recommender systems.\nGoecks and Shavlik [8] approximated human labels by collecting a set of page activity measures while users browsed the World Wide Web.\nThe authors hypothesized correlations between a high degree of page activity and a user's interest.\nWhile the results were promising, the sample size was small and the implicit measures were not tested against explicit judgments of user interest.\nClaypool et al. [6] studied how several implicit measures related to the interests of the user.\nFox et al. [7] explored the relationship between implicit and explicit measures in Web search.\nThey built an instrumented browser to collect data and then developed Bayesian models to relate implicit measures and explicit relevance judgments for both individual queries and search sessions.\nJoachims [9] developed valuable insights into the collection of implicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions.\nMore recently, Joachims et al. 
[10] presented an empirical evaluation of interpreting clickthrough evidence.\nA more comprehensive overview of studies of implicit measures is described in Kelly and Teevan [12].\nUnfortunately, the extent to which existing research applies to real-world web search is unclear.\nIn this paper, we build on previous research to develop robust user behavior interpretation models for the real web search setting.\n7.\nCONCLUSIONS AND FUTURE WORK\nOur paper is the first, to our knowledge, to interpret post-search user behavior to estimate user preferences in a real web search setting.\nWe showed that our robust models result in higher prediction accuracy than previously published techniques.\nWe introduced new, robust, probabilistic techniques for interpreting clickthrough evidence by aggregating across users and queries.\nOur methods result in clickthrough interpretation substantially more accurate than previously published results not specifically designed for web search scenarios.\nOur methods' predictions of relevance preferences are substantially more accurate than the current state-of-the-art search result ranking that does not consider user interactions.\nWe also presented a general model for interpreting post-search user behavior that incorporates clickthrough, browsing, and query features.\nBy considering the complete search experience after the initial query and click, we demonstrated prediction accuracy far exceeding that of interpreting only the limited clickthrough information.\nFurthermore, we showed that automatically learning to interpret user behavior results in substantially better performance than the human-designed ad-hoc clickthrough interpretation strategies.\nAnother benefit of automatically learning to interpret user behavior is that such methods can adapt to changing conditions and changing user profiles.\nFor example, the user behavior model on intranet search may be different from the web search behavior.\nOur general UserBehavior method would
be able to adapt to these changes by automatically learning to map new behavior patterns to explicit relevance ratings.\nA natural application of our preference prediction models is to improve web search ranking [1].\nIn addition, our work has many potential applications including click spam detection, search abuse detection, personalization, and domain-specific ranking.\nFor example, our automatically derived behavior models could be trained on examples of search abuse or click spam behavior instead of relevance labels.\nAlternatively, our models could be used directly to detect anomalies in user behavior--either due to abuse or to operational problems with the search engine.\nWhile our techniques perform well on average, our assumptions about clickthrough distributions (and learning the user behavior models) may not hold equally well for all queries.\nFor example, queries with divergent access patterns (e.g., for ambiguous queries with multiple meanings) may result in behavior inconsistent with the model learned for all queries.\nHence, clustering queries and learning different predictive models for each query type is a promising research direction.\nQuery distributions also change over time, and it would be productive to investigate how that affects the predictive ability of these models.\nAs we showed in this paper, using the \"wisdom of crowds\" can give us an accurate interpretation of user interactions even in the inherently noisy web search setting.\nOur techniques allow us to automatically predict relevance preferences for web search results with accuracy greater than the previously published methods.\nThe predicted relevance preferences can be used for automatic relevance evaluation and tuning, for deploying search in new settings, and ultimately for improving the overall web search experience.","lvl-2":"Learning User Interaction Models for Predicting Web Search Result Preferences\nABSTRACT\nEvaluating user preferences of web search results is crucial for 
search engine development, deployment, and maintenance.\nWe present a real-world study of modeling the behavior of web search users to predict web search result preferences.\nAccurate modeling and interpretation of user behavior has important applications to ranking, click spam detection, web search personalization, and other tasks.\nOur key insight to improving robustness of interpreting implicit feedback is to model query-dependent deviations from the expected \"noisy\" user behavior.\nWe show that our model of clickthrough interpretation improves prediction accuracy over state-of-the-art clickthrough methods.\nWe generalize our approach to model user behavior beyond clickthrough, which results in higher preference prediction accuracy than models based on clickthrough information alone.\nWe report results of a large-scale experimental evaluation that show substantial improvements over published implicit feedback interpretation methods.\n1.\nINTRODUCTION\nRelevance measurement is crucial to web search and to information retrieval in general.\nTraditionally, search relevance is measured by using human assessors to judge the relevance of query-document pairs.\nHowever, explicit human ratings are expensive and difficult to obtain.\nAt the same time, millions of people interact daily with web search engines, providing valuable implicit feedback through their interactions with the search results.\nIf we could turn these interactions into relevance judgments, we could obtain large amounts of data for evaluating, maintaining, and improving information retrieval systems.\nRecently, automatic or implicit relevance feedback has developed into an active area of research in the information retrieval community, at least in part due to an increase in available resources and to the rising popularity of web search.\nHowever, most traditional IR work was performed over controlled test collections and carefully-selected query sets and tasks.\nTherefore, it is not clear whether
these techniques will work for general real-world web search.\nA significant distinction is that web search is not controlled.\nIndividual users may behave irrationally or maliciously, or may not even be real users; all of this affects the data that can be gathered.\nBut the amount of the user interaction data is orders of magnitude larger than anything available in a non-web-search setting.\nBy using the aggregated behavior of large numbers of users (and not treating each user as an individual \"expert\") we can correct for the noise inherent in individual interactions, and generate relevance judgments that are more accurate than techniques not specifically designed for the web search setting.\nFurthermore, observations and insights obtained in laboratory settings do not necessarily translate to real world usage.\nHence, it is preferable to automatically induce feedback interpretation strategies from large amounts of user interactions.\nAutomatically learning to interpret user behavior would allow systems to adapt to changing conditions, changing user behavior patterns, and different search settings.\nWe present techniques to automatically interpret the collective behavior of users interacting with a web search engine to predict user preferences for search results.\nOur contributions include:\n\u2022 A distributional model of user behavior, robust to noise within individual user sessions, that can recover relevance preferences from user interactions (Section 3).\n\u2022 Extensions of existing clickthrough strategies to include richer browsing and interaction features (Section 4).\n\u2022 A thorough evaluation of our user behavior models, as well as of previously published state-of-the-art techniques, over a large set of web search sessions (Sections 5 and 6).\nWe discuss our results and outline future directions and various applications of this work in Section 7, which concludes the paper.\n2.\nBACKGROUND AND RELATED WORK\nRanking search results is a fundamental 
problem in information retrieval.\nThe most common approaches in the context of the web use both the similarity of the query to the page content, and the overall quality of a page [3, 20].\nA state-of-the-art search engine may use hundreds of features to describe a candidate page, employing sophisticated algorithms to rank pages based on these features.\nCurrent search engines are commonly tuned on human relevance judgments.\nHuman annotators rate a set of pages for a query according to perceived relevance, creating the \"gold standard\" against which different ranking algorithms can be evaluated.\nReducing the dependence on explicit human judgments by using implicit relevance feedback has been an active topic of research.\nSeveral research groups have evaluated the relationship between implicit measures and user interest.\nIn these studies, both reading time and explicit ratings of interest are collected.\nMorita and Shinoda [14] studied the amount of time that users spent reading Usenet news articles and found that reading time could predict a user's interest levels.\nKonstan et al. [13] showed that reading time was a strong predictor of user interest in their GroupLens system.\nOard and Kim [15] studied whether implicit feedback could substitute for explicit ratings in recommender systems.\nMore recently, Oard and Kim [16] presented a framework for characterizing observable user behaviors using two dimensions--the underlying purpose of the observed behavior and the scope of the item being acted upon.\nGoecks and Shavlik [8] approximated human labels by collecting a set of page activity measures while users browsed the World Wide Web.\nThe authors hypothesized correlations between a high degree of page activity and a user's interest.\nWhile the results were promising, the sample size was small and the implicit measures were not tested against explicit judgments of user interest.\nClaypool et al.
[6] studied how several implicit measures related to the interests of the user.\nThey developed a custom browser called the Curious Browser to gather data, in a computer lab, about implicit interest indicators and to probe for explicit judgments of Web pages visited.\nClaypool et al. found that the time spent on a page, the amount of scrolling on a page, and the combination of time and scrolling have a strong positive relationship with explicit interest, while individual scrolling methods and mouse-clicks were not correlated with explicit interest.\nFox et al. [7] explored the relationship between implicit and explicit measures in Web search.\nThey built an instrumented browser to collect data and then developed Bayesian models to relate implicit measures and explicit relevance judgments for both individual queries and search sessions.\nThey found that clickthrough was the most important individual variable but that predictive accuracy could be improved by using additional variables, notably dwell time on a page.\nJoachims [9] developed valuable insights into the collection of implicit measures, introducing a technique based entirely on clickthrough data to learn ranking functions.\nMore recently, Joachims et al. 
[10] presented an empirical evaluation of interpreting clickthrough evidence.\nBy performing eye tracking studies and correlating predictions of their strategies with explicit ratings, the authors showed that it is possible to accurately interpret clickthrough events in a controlled, laboratory setting.\nA more comprehensive overview of studies of implicit measures is described in Kelly and Teevan [12].\nUnfortunately, the extent to which existing research applies to real-world web search is unclear.\nIn this paper, we build on previous research to develop robust user behavior interpretation models for the real web search setting.\n3.\nLEARNING USER BEHAVIOR MODELS\nAs we noted earlier, real web search user behavior can be \"noisy\" in the sense that user behaviors are only probabilistically related to explicit relevance judgments and preferences.\nHence, instead of treating each user as a reliable \"expert\", we aggregate information from many unreliable user search session traces.\nOur main approach is to model user web search behavior as if it were generated by two components: a \"relevance\" component--query-specific behavior influenced by the apparent result relevance, and a \"background\" component--users clicking indiscriminately.\nOur general idea is to model the deviations from the expected user behavior.\nHence, in addition to basic features, which we will describe in detail in Section 3.2, we compute derived features that measure the deviation of the observed feature value for a given search result from the expected values for a result, with no query-dependent information.\nWe motivate our intuitions with a particularly important behavior feature, result clickthrough, analyzed next, and then introduce our general model of user behavior that incorporates other user actions (Section 3.2).\n3.1 A Case Study in Click Distributions\nAs we discussed, we aggregate statistics across many user sessions.\nA click on a result may mean that some user found the 
result summary promising; it could also be caused by people clicking indiscriminately.\nIn general, individual user behavior, clickthrough and otherwise, is noisy, and cannot be relied upon for accurate relevance judgments.\nThe data set is described in more detail in Section 5.2.\nFor the present it suffices to note that we focus on a sample of 3,500 queries drawn at random from query logs.\nFor these queries we aggregate click data over more than 120,000 searches performed over a three-week period.\nWe also have explicit relevance judgments for the top 10 results for each query.\nFigure 3.1 shows the relative clickthrough frequency as a function of result position.\nThe aggregated click frequency at result position p is calculated by first computing the frequency of a click at p for each query (i.e., approximating the probability that a randomly chosen click for that query would land on position p).\nThese frequencies are then averaged across queries and normalized so that the relative frequency of a click at the top position is 1.\nThe resulting distribution agrees with previous observations that users click more often on top-ranked results.\nThis reflects the fact that search engines do a reasonable job of ranking results, as well as a bias toward clicking top results, and noise--we attempt to separate these components in the analysis that follows.\nFigure 3.1: Relative click frequency for top 30 result positions over 3,500 queries and 120,000 searches.\nFirst we consider the distribution of clicks for the relevant documents for these queries.\nFigure 3.2 reports the aggregated click distribution for queries with varying Position of Top Relevant document (PTR).\nWhile there are many clicks above the first relevant document for each distribution, there are clearly \"peaks\" in click frequency for the first relevant result.\nFor example, for queries with top relevant result in position 2, the relative click frequency at that position (second bar) is higher
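The aggregation just described can be sketched in a few lines; this is an illustrative implementation only, and the click-log format (a list of query/clicked-position pairs) is an assumption, not taken from the paper:

```python
from collections import defaultdict

def relative_click_frequency(click_log, num_positions=30):
    """click_log: iterable of (query, clicked_position) pairs.
    Returns per-position click frequency, averaged across queries and
    normalized so the top position has relative frequency 1."""
    per_query = defaultdict(lambda: [0] * num_positions)
    for query, pos in click_log:
        if 0 <= pos < num_positions:
            per_query[query][pos] += 1
    # Per-query frequency of a click at each position (clicks at p / total clicks).
    freqs = []
    for counts in per_query.values():
        total = sum(counts)
        if total:
            freqs.append([c / total for c in counts])
    # Average across queries, then normalize by the top position.
    avg = [sum(col) / len(freqs) for col in zip(*freqs)]
    top = avg[0] or 1.0
    return [a / top for a in avg]

log = [("q1", 0), ("q1", 0), ("q1", 1), ("q2", 0), ("q2", 2)]
print(relative_click_frequency(log, num_positions=3))
```

Averaging per-query frequencies before normalizing keeps high-volume queries from dominating the distribution, matching the per-query averaging described above.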
than the click frequency at other positions for these queries.\nNevertheless, many users still click on the non-relevant results in position 1 for such queries.\nThis shows a stronger property of the bias in the click distribution towards top results--users click more often on results that are ranked higher, even when they are not relevant.\nFigure 3.2: Relative click frequency for queries with varying PTR (Position of Top Relevant document).\nFigure 3.3: Relative corrected click frequency for relevant documents with varying PTR (Position of Top Relevant).\nIf we subtract the background distribution of Figure 3.1 from the \"mixed\" distribution of Figure 3.2, we obtain the distribution in Figure 3.3, where the remaining click frequency distribution can be interpreted as the relevance component of the results.\nNote that the corrected click distribution correlates closely with actual result relevance as explicitly rated by human judges.\n3.2 Robust User Behavior Model\nClicks on search results comprise only a small fraction of the post-search activities typically performed by users.\nWe now introduce our techniques for going beyond the clickthrough statistics and explicitly modeling post-search user behavior.\nAlthough clickthrough distributions are heavily biased towards top results, we have just shown how the 'relevance-driven' click distribution can be recovered by correcting for the prior, background distribution.\nWe conjecture that other aspects of user behavior (e.g., page dwell time) are similarly distorted.\nOur general model includes two feature types for describing user behavior: direct and deviational, where the former are the directly measured values, and the latter is the deviation from the expected values estimated from the overall (query-independent) distributions for the corresponding directly observed features.\nMore formally, we postulate that the observed value o of a feature f for a query q and result r can be expressed as a mixture of two components: o (q, r, f) = C (f) + rel (q, r, f) (1), where C (f) is the prior \"background\" distribution for values of f aggregated across all queries, and rel (q, r, f) is the component of the behavior influenced by the relevance of the result r.\nAs illustrated above with the clickthrough feature, if we subtract the background distribution (i.e., the expected clickthrough for a result at a given position) from the observed clickthrough frequency at a given position, we can approximate the relevance component of the clickthrough value.\nIn order to reduce the effect of individual user variations in behavior, we average observed feature values across all users and search sessions for each query-URL pair.\nThis aggregation gives additional robustness by not relying on individual \"noisy\" user interactions.\nIn summary, the user behavior for a query-URL pair is represented by a feature vector that includes both the directly observed features and the derived, \"corrected\" feature values.\nWe now describe the actual features we use to represent user behavior.\n3.3 Features for Representing User Behavior\nOur goal is to devise a sufficiently rich set of features that allow us to characterize when a user will be satisfied with a web search result.\nOnce the user has submitted a query, they perform many different actions (reading snippets, clicking results, navigating, refining their query) which we capture and summarize.\nThis information was obtained via opt-in client-side instrumentation from users of a major web search engine.\nThis rich representation of user behavior is similar in many respects to the recent work by Fox et al.
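The correction of Equation 1 can be sketched as follows; this is a minimal illustration in which the background C(f) is assumed, for simplicity, to be estimated by averaging per-position feature values across queries:

```python
def background_distribution(observed_by_query):
    """Estimate C(f): average per-position values of a feature across
    all queries, giving a query-independent background distribution."""
    n = len(observed_by_query)
    return [sum(vals) / n for vals in zip(*observed_by_query.values())]

def relevance_component(observed, background):
    """rel(q, r, f) = o(q, r, f) - C(f): deviation of the observed
    per-position values from the expected background values."""
    return [o - c for o, c in zip(observed, background)]

# Hypothetical observed clickthrough frequencies by position for two queries.
obs = {"q1": [0.7, 0.2, 0.1], "q2": [0.3, 0.6, 0.1]}
C = background_distribution(obs)
print(relevance_component(obs["q2"], C))
```

For "q2" the corrected values are positive at position 2 and negative at position 1, recovering the relevance signal hidden under the position bias, in the spirit of Figure 3.3.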
[7].\nAn important difference is that many of our features are (by design) query-specific, whereas theirs was (by design) a general, query-independent model of user behavior.\nFurthermore, we include derived, distributional features computed as described above.\nThe features we use to represent user search interactions are summarized in Table 3.1.\nFor clarity, we organize the features into the groups Query-text, Clickthrough, and Browsing.\nQuery-text features: Users decide which results to examine in more detail by looking at the result title, URL, and summary--in some cases, looking at the original document is not even necessary.\nTo model this aspect of user experience we defined features to characterize the nature of the query and its relation to the snippet text.\nThese include features such as overlap between the words in title and in query (TitleOverlap), the fraction of words shared by the query and the result summary (SummaryOverlap), etc.\nBrowsing features: Simple aspects of the user web page interactions can be captured and quantified.\nThese features are used to characterize interactions with pages beyond the results page.\nFor example, we compute how long users dwell on a page (TimeOnPage) or domain (TimeOnDomain), and the deviation of dwell time from expected page dwell time for a query.\nThese features allow us to model intra-query diversity of page browsing behavior (e.g., navigational queries, on average, are likely to have shorter page dwell time than transactional or informational queries).\nWe include both the direct features and the derived features described above.\nClickthrough features: Clicks are a special case of user interaction with the search engine.\nWe include all the features necessary to \"learn\" the clickthrough-based strategies described in Sections 4.1 and 4.4.\nFor example, for a query-URL pair we provide the number of clicks for the result (ClickFrequency), as well as whether there was a click on result below or above
the current URL (IsClickBelow, IsClickAbove).\nThe derived feature values such as ClickRelativeFrequency and ClickDeviation are computed as described in Equation 1.\nTable 3.1: Features used to represent post-search interactions for a given query and search result URL\n3.4 Learning a Predictive Behavior Model\nHaving described our features, we now turn to the actual method of mapping the features to user preferences.\nWe attempt to learn a general implicit feedback interpretation strategy automatically instead of relying on heuristics or insights.\nWe consider this approach to be preferable to heuristic strategies, because we can always mine more data instead of relying (only) on our intuition and limited laboratory evidence.\nOur general approach is to train a classifier to induce weights for the user behavior features, and consequently derive a predictive model of user preferences.\nThe training is done by comparing a wide range of implicit behavior measures with explicit user judgments for a set of queries.\nFor this, we use a large random sample of queries in the search query log of a popular web search engine, the sets of results (identified by URLs) returned for each of the queries, and any explicit relevance judgments available for each query\/result pair.\nWe can then analyze the user behavior for all the instances where these queries were submitted to the search engine.\nTo learn the mapping from features to relevance preferences, we use a scalable implementation of neural networks, RankNet [4], capable of learning to rank a set of given items.\nMore specifically, for each judged query we check if a result link has been judged.\nIf so, the label is assigned to the query\/URL pair and to the corresponding feature vector for that search result.\nThese vectors of feature values corresponding to URLs judged relevant or non-relevant by human annotators become our training set.\nRankNet has demonstrated excellent performance in learning to rank objects in a 
supervised setting, hence we use RankNet for our experiments.\n4.\nPREDICTING USER PREFERENCES\nIn our experiments, we explore several models for predicting user preferences.\nThese models range from using no implicit user feedback to using all available implicit user feedback.\nRanking search results to predict user preferences is a fundamental problem in information retrieval.\nMost traditional IR and web search approaches use a combination of page and link features to rank search results, and a representative state-of-the-art ranking system will be used as our baseline ranker (Section 4.1).\nAt the same time, user interactions with a search engine provide a wealth of information.\nA commonly considered type of interaction is user clicks on search results.\nPrevious work [9], as described above, also examined which results were skipped (e.g., 'skip above' and 'skip next') and other related strategies to induce preference judgments from the users' skipping over results and not clicking on following results.\nWe have also added refinements of these strategies to take into account the variability observed in realistic web scenarios.\nWe describe these strategies in Section 4.2.\nAs clickthroughs are just one aspect of user interaction, we extend the relevance estimation by introducing a machine learning model that incorporates clicks as well as other aspects of user behavior, such as follow-up queries and page dwell time (Section 4.3).\nWe conclude this section by briefly describing our \"baseline\"--a state-of-the-art ranking algorithm used by an operational web search engine.\n4.1 Baseline Model\nA key question is whether browsing behavior can provide information absent from existing explicit judgments used to train an existing ranker.\nFor our baseline system we use a state-of-the-art page ranking system currently used by a major web search engine.\nHence, we will call this system Current for the subsequent discussion.\nWhile the specific algorithms used by
the search engine are beyond the scope of this paper, the algorithm ranks results based on hundreds of features such as query to document similarity, query to anchor text similarity, and intrinsic page quality.\nThe Current web search engine rankings provide a strong system for the comparisons and experiments of the next two sections.\n4.2 Clickthrough Model\nIf we assume that every user click was motivated by a rational process that selected the most promising result summary, we can then interpret each click as described in Joachims et al. [10].\nBy studying eye tracking and comparing clicks with explicit judgments, they identified a few basic strategies.\nWe discuss the two strategies that performed best in their experiments, Skip Above and Skip Next.\nStrategy SA (Skip Above): For a set of results for a query and a clicked result at position p, all unclicked results ranked above p are predicted to be less relevant than the result at p.\nIn addition to information about results above the clicked result, we also have information about the result immediately following the clicked one.\nAn eye tracking study performed by Joachims et al.
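The SA strategy, and the SA+N extension discussed next, can be sketched as simple pairwise-preference extraction; a minimal illustration, with 0-based result positions as an assumed input format:

```python
def skip_above(clicked_positions):
    """SA: each unclicked result ranked above a clicked position is
    predicted less relevant than the clicked result.
    Returns (preferred_pos, less_relevant_pos) pairs."""
    clicked = set(clicked_positions)
    prefs = []
    for p in clicked:
        for above in range(p):
            if above not in clicked:
                prefs.append((p, above))
    return prefs

def skip_above_next(clicked_positions, num_results):
    """SA+N: additionally predict the unclicked result immediately
    following a clicked result as less relevant than it."""
    clicked = set(clicked_positions)
    prefs = skip_above(clicked_positions)
    for p in clicked:
        nxt = p + 1
        if nxt < num_results and nxt not in clicked:
            prefs.append((p, nxt))
    return prefs

# A user clicked results at positions 2 and 4 out of 6.
print(sorted(skip_above_next([2, 4], 6)))
# -> [(2, 0), (2, 1), (2, 3), (4, 0), (4, 1), (4, 3), (4, 5)]
```

The CD refinement described below keeps the same pairwise logic but first filters the clicked positions to those whose click deviation dev(r, p) exceeds the threshold d.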
[10] showed that users usually consider the result immediately following the clicked result in the current ranking.\nTheir \"Skip Next\" strategy uses this observation to predict that a result following the clicked result at p is less relevant than the clicked result, with accuracy comparable to the SA strategy above.\nFor better coverage, we combine the SA strategy with this extension to derive the Skip Above + Skip Next strategy:\nStrategy SA+N (Skip Above + Skip Next): This strategy predicts all unclicked results immediately following a clicked result as less relevant than the clicked result, and combines these predictions with those of the SA strategy above.\nWe experimented with variations of these strategies, and found that SA+N outperformed both SA and the original Skip Next strategy, so we will consider the SA and SA+N strategies in the rest of the paper.\nThese strategies are motivated and empirically tested for individual users in a laboratory setting.\nAs we will show, these strategies do not work as well in the real web search setting due to inherent inconsistency and noisiness of individual users' behavior.\nThe general approach for using our clickthrough models directly is to filter clicks to those that reflect higher-than-chance click frequency.\nWe then use the same SA and SA+N strategies, but only for clicks that have higher-than-expected frequency according to our model.\nFor this, we estimate the relevance component rel (q, r, f) of the observed clickthrough feature f as the deviation from the expected (background) clickthrough distribution C (f).\nStrategy CD (deviation d): For a given query, compute the observed click frequency distribution o (r, p) for all results r in positions p.\nThe click deviation for a result r in position p, dev (r, p), is computed as: dev (r, p) = o (r, p) - C (p), where C (p) is the expected clickthrough at position p.\nIf dev (r, p) > d, retain the click as input to the SA+N strategy above, and apply the SA+N strategy over the
filtered set of click events.\nThe choice of d selects the tradeoff between recall and precision.\nWhile the above strategy extends SA and SA+N, it still assumes that a (filtered) clicked result is preferred over all unclicked results presented to the user above a clicked position.\nHowever, for informational queries, multiple results may be clicked, with varying frequency.\nHence, it is preferable to individually compare results for a query by considering the difference between the estimated relevance components of the click distribution of the corresponding query results.\nWe now define a generalization of the previous clickthrough interpretation strategy: Strategy CDiff (margin m): Compute the deviation dev(ri, pi) for each result r1, ..., rn in position pi. For each pair of results ri and rj, predict a preference of ri over rj iff dev(ri, pi) - dev(rj, pj) > m.\nAs in CD, the choice of m selects the tradeoff between recall and precision.\nThe pairs may be preferred in the original order or in reverse of it.\nGiven the margin, two results might be effectively indistinguishable, but only one can possibly be preferred over the other.\nIntuitively, CDiff generalizes the skip idea above to include cases where the user \"skipped\" (i.e., clicked less than expected) on rj and \"preferred\" (i.e., clicked more than expected) on ri.\nFurthermore, this strategy allows for differentiation within the set of clicked results, making it more appropriate for noisy user behavior.\nCDiff and CD are complementary.\nCDiff is a generalization of the clickthrough frequency model of CD, but it ignores the positional information used in CD.\nHence, combining the two strategies to improve coverage is a natural approach: Strategy CD+CDiff (deviation d, margin m): Union of CD and CDiff predictions.\nOther variations of the above strategies were considered, but these five methods cover the range of observed performance.\n4.3 General User Behavior Model\nThe strategies described in the previous 
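As an aside, the CD and CDiff strategies above can be sketched in code as follows (a minimal sketch; the data structures, function names, and example values are assumptions for illustration, not from the paper):

```python
from itertools import combinations

# Hypothetical inputs: `observed` maps (result, position) -> observed click
# frequency o(r, p); `background` maps position p -> expected clickthrough C(p).

def click_deviation(observed, background):
    """dev(r, p) = o(r, p) - C(p)."""
    return {(r, p): o - background[p] for (r, p), o in observed.items()}

def strategy_cd(observed, background, clicked, d):
    """CD: keep only clicked (result, position) pairs whose deviation
    exceeds the threshold d; the surviving clicks then feed SA+N."""
    dev = click_deviation(observed, background)
    return [rp for rp in clicked if dev[rp] > d]

def strategy_cdiff(observed, background, m):
    """CDiff: prefer r_i over r_j iff dev(r_i, p_i) - dev(r_j, p_j) > m."""
    dev = click_deviation(observed, background)
    prefs = set()
    for (ri, pi), (rj, pj) in combinations(sorted(dev), 2):
        if dev[(ri, pi)] - dev[(rj, pj)] > m:
            prefs.add((ri, rj))  # r_i predicted more relevant than r_j
        elif dev[(rj, pj)] - dev[(ri, pi)] > m:
            prefs.add((rj, ri))
    return prefs
```

CD+CDiff would then simply take the union of the SA+N predictions made over the CD-filtered clicks and the CDiff preference pairs.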
section generate orderings based solely on observed clickthrough frequencies.\nAs we discussed, clickthrough is just one, albeit important, aspect of user interactions with web search engine results.\nWe now present our general strategy that relies on the automatically derived predictive user behavior models (Section 3).\nThe UserBehavior Strategy: For a given query, each result is represented with the features in Table 3.1.\nRelative user preferences are then estimated using the learned user behavior model described in Section 3.4.\nRecall that to learn a predictive behavior model we used the features from Table 3.1 along with explicit relevance judgments as input to RankNet, which learns an optimal weighting of features to predict preferences.\nThis strategy models user interaction with the search engine, allowing it to benefit from the wisdom of crowds interacting with the results and the pages beyond.\nAs our experiments in the subsequent sections demonstrate, modeling a richer set of user interactions beyond clickthroughs results in more accurate predictions of user preferences.\n5.\nEXPERIMENTAL SETUP\nWe now describe our experimental setup.\nWe first describe the methodology used, including our evaluation metrics (Section 5.1).\nThen we describe the datasets (Section 5.2) and the methods we compared in this study (Section 5.3).\n5.1 Evaluation Methodology and Metrics\nOur evaluation focuses on the pairwise agreement between preferences for results.\nThis allows us to compare with previous work [9,10].\nFurthermore, for many applications such as tuning ranking functions, pairwise preference can be used directly for training [1,4,9].\nThe evaluation is based on comparing preferences predicted by various models to the \"correct\" preferences derived from the explicit user relevance judgments.\nWe discuss other applications of our models beyond web search ranking in Section 7.\nTo create our set of \"test\" pairs we take each query and compute the cross-product 
between all search results, returning preferences for pairs according to the order of the associated relevance labels.\nTo avoid ambiguity in evaluation, we discard all ties (i.e., pairs with equal label).\nIn order to compute the accuracy of our preference predictions with respect to the correct preferences, we adapt the standard Recall and Precision measures [20].\nWhile our task of computing pairwise agreement is different from the absolute relevance ranking task, the metrics are used in a similar way.\nSpecifically, we report the average query recall and precision.\nFor our task, Query Precision and Query Recall for a query q are defined as:\n\u2022 Query Precision: Fraction of predicted preferences for results for q that agree with preferences obtained from explicit human judgment.\n\u2022 Query Recall: Fraction of preferences obtained from explicit human judgment for q that were correctly predicted.\nThe overall Recall and Precision are computed as the average of Query Recall and Query Precision, respectively.\nA drawback of this evaluation measure is that some preferences may be more valuable than others, which pairwise agreement does not capture.\nWe discuss this issue further when we consider extensions to the current work in Section 7.\n5.2 Datasets\nFor evaluation we used 3,500 queries that were randomly sampled from the query logs of a major web search engine.\nFor each query the top 10 returned search results were manually rated on a 6-point scale by trained judges as part of an ongoing relevance improvement effort.\nIn addition, for these queries we also had user interaction data for more than 120,000 instances of these queries.\nThe user interactions were harvested from anonymous browsing traces that immediately followed a query submitted to the web search engine.\nThis data collection was part of voluntary opt-in feedback submitted by users from October 11 through October 31.\nThese three weeks (21 days) of user interaction data were filtered to include 
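The pairwise evaluation described above can be sketched as follows (a minimal sketch; function names and example labels are assumptions for illustration):

```python
from itertools import combinations

def judged_pairs(labels):
    """Cross-product of rated results for one query, ordered by relevance
    label; ties (pairs with equal label) are discarded."""
    pairs = set()
    for (r1, l1), (r2, l2) in combinations(labels.items(), 2):
        if l1 > l2:
            pairs.add((r1, r2))  # r1 judged more relevant than r2
        elif l2 > l1:
            pairs.add((r2, r1))
    return pairs

def query_precision_recall(predicted, correct):
    """Query Precision: fraction of predicted preferences that agree with
    the judgment-derived preferences.  Query Recall: fraction of the
    judgment-derived preferences that were correctly predicted."""
    agree = len(predicted & correct)
    precision = agree / len(predicted) if predicted else 0.0
    recall = agree / len(correct) if correct else 0.0
    return precision, recall
```

The overall Recall and Precision are then just the averages of these per-query values across all evaluated queries.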
only the users in the English-U.S. market.\nIn order to better understand the effect of the amount of user interaction data available for a query on accuracy, we created subsets of our data (Q1, Q10, and Q20) that contain different amounts of interaction data:\n\u2022 Q1: Human-rated queries with at least 1 click on results recorded (3,500 queries, 28,093 query-URL pairs).\n\u2022 Q10: Queries in Q1 with at least 10 clicks (1,300 queries, 18,728 query-URL pairs).\n\u2022 Q20: Queries in Q1 with at least 20 clicks (1,000 queries, 12,922 query-URL pairs).\nThese datasets were collected as part of normal user experience and hence have different characteristics than previously reported datasets collected in laboratory settings.\nFurthermore, the data size is an order of magnitude larger than that of any study reported in the literature.\n5.3 Methods Compared\nWe considered a number of methods for comparison.\nWe compared our UserBehavior model (Section 4.3) to previously published implicit feedback interpretation techniques and some variants of these approaches (Section 4.2), and to the current search engine ranking based on query and page features alone (Section 4.1).\nSpecifically, we compare the following strategies:\n\u2022 SA: The \"Skip Above\" clickthrough strategy (Section 4.2).\n\u2022 SA+N: A more comprehensive extension of SA that takes better advantage of the current search engine ranking.\n\u2022 CD: Our refinement of SA+N that takes advantage of our mixture model of the clickthrough distribution to select \"trusted\" clicks for interpretation (Section 4.2).\n\u2022 CDiff: Our generalization of the CD strategy that explicitly uses the relevance component of clickthrough probabilities to induce preferences between search results (Section 4.2).\n\u2022 CD+CDiff: The strategy combining CD and CDiff as the union of predicted preferences from both (Section 4.2).\n\u2022 UserBehavior: We order predictions based on the decreasing highest score of any page.\nIn our preliminary 
experiments we observed that higher ranker scores indicate higher \"confidence\" in the predictions.\nThis heuristic allows us to do a graceful recall-precision tradeoff using the score of the highest ranked result to threshold the queries (Section 4.3).\n\u2022 Current: Current search engine ranking (Section 4.1).\nNote that the Current ranker implementation was trained over a superset of the rated query\/URL pairs in our datasets, but using the same \"truth\" labels as we do for our evaluation.\nTraining\/Test Split: The only strategy for which splitting the datasets into training and test sets was required was the UserBehavior method.\nTo evaluate UserBehavior we train and validate on 75% of labeled queries, and test on the remaining 25%.\nThe sampling was done per query (i.e., all results for a chosen query were included in the respective dataset, and there was no overlap in queries between training and test sets).\nIt is worth noting that both the ad-hoc SA and SA+N, as well as the distribution-based strategies (CD, CDiff, and CD+CDiff), do not require a separate training and test set, since they are based on heuristics for detecting \"anomalous\" click frequencies for results.\nHence, all strategies except for UserBehavior were tested on the full set of queries and associated relevance preferences, while UserBehavior was tested on a randomly chosen hold-out subset of the queries as described above.\nTo make sure we are not favoring UserBehavior, we also tested all other strategies on the same hold-out test sets, resulting in the same accuracy results as testing over the complete datasets.\n6.\nRESULTS\nWe now turn to the experimental evaluation of predicting relevance preferences of web search results.\nFigure 6.1 shows the recall-precision results over the Q1 query set (Section 5.2).\nThe results indicate that the previous click interpretation strategies, SA and SA+N, perform suboptimally in this setting, exhibiting precisions of 0.627 and 0.638, respectively.\nFurthermore, there is 
no mechanism to do a recall-precision trade-off with SA and SA+N, as they do not provide prediction confidence.\nIn contrast, our clickthrough distribution-based techniques CD and CD+CDiff exhibit somewhat higher precision than SA and SA+N (0.648 and 0.717 at a recall of 0.08, the maximum achieved by SA or SA+N).\nFigure 6.1: Precision vs. Recall of SA, SA+N, CD, CDiff, CD+CDiff, UserBehavior, and Current relevance prediction methods over the Q1 dataset.\nInterestingly, CDiff alone exhibits precision equal to SA (0.627) at the same recall of 0.08.\nIn contrast, by combining the CD and CDiff strategies (the CD+CDiff method) we achieve the best performance of all clickthrough-based strategies, exhibiting precision above 0.66 for recall values up to 0.14, and higher at lower recall levels.\nClearly, aggregating and intelligently interpreting clickthroughs results in significant gains for realistic web search over the previously described strategies.\nHowever, even the CD+CDiff clickthrough interpretation strategy can be improved upon by automatically learning to interpret the aggregated clickthrough evidence.\nBut first, we consider the best performing strategy, UserBehavior.\nIncorporating post-search navigation history in addition to clickthroughs (Browsing features) results in the highest recall and precision among all methods compared.\nBrowse exhibits precision above 0.7 at a recall of 0.16, significantly outperforming our Baseline and clickthrough-only strategies.\nFurthermore, Browse is able to achieve high recall (as high as 0.43) while maintaining precision (0.67) significantly higher than the baseline ranking.\nTo further analyze the value of different dimensions of implicit feedback modeled by the UserBehavior strategy, we consider each group of features in isolation.\nFigure 6.2 reports Precision vs. 
Recall for each feature group.\nInterestingly, Query-text alone has low accuracy (only marginally better than Random).\nFurthermore, Browsing features alone have higher precision (with a lower maximum recall achieved) than considering all of the features in our UserBehavior model.\nApplying different machine learning methods for combining classifier predictions may increase the performance of using all features across all recall values.\nFigure 6.2: Precision vs. recall for predicting relevance with each group of features individually.\nFigure 6.3: Recall vs. Precision of CD+CDiff and UserBehavior for query sets Q1, Q10, and Q20 (queries with at least 1, at least 10, and at least 20 clicks, respectively).\nInterestingly, the ranker trained over Clickthrough-only features achieves substantially higher recall and precision than the human-designed clickthrough-interpretation strategies described earlier.\nFor example, the clickthrough-trained classifier achieves 0.67 precision at 0.42 recall vs. the maximum recall of 0.14 achieved by the CD+CDiff strategy.\nOur clickthrough and user behavior interpretation strategies rely on extensive user interaction data.\nWe consider the effects of having sufficient interaction data available for a query before proposing a reranking of results for that query.\nFigure 6.3 reports recall-precision curves for the CD+CDiff and UserBehavior methods for different test query sets with at least 1 click (Q1), 10 clicks (Q10), and 20 clicks (Q20) available per query.\nNot surprisingly, CD+CDiff improves with more clicks.\nThis indicates that accuracy will improve as more user interaction histories become available, and more queries from the Q1 set will have comprehensive interaction histories.\nSimilarly, the UserBehavior strategy performs better for queries with 10 and 20 clicks, although the improvement is less dramatic than for CD+CDiff.\nFor queries with sufficient clicks, CD+CDiff exhibits precision comparable to Browse at lower recall.\nFigure 
6.4: Recall of CD+CDiff and UserBehavior strategies at a fixed minimum precision of 0.7 for varying amounts of user activity data (7, 12, 17, 21 days).\nOur techniques often do not make relevance predictions for search results (i.e., if no interaction data is available for the lower-ranked results), consequently maintaining higher precision at the expense of recall.\nIn contrast, the current search engine always makes a prediction for every result for a given query.\nAs a consequence, the recall of Current is high (0.627) at the expense of lower precision.\nAs another dimension of acquiring training data we consider the learning curve with respect to the amount (in days) of training data available.\nFigure 6.4 reports the Recall of the CD+CDiff and UserBehavior strategies for varying amounts of training data collected over time.\nWe fixed the minimum precision for both strategies at 0.7 as a point substantially higher than the baseline (0.625).\nAs expected, the Recall of both strategies improves quickly with more days of interaction data examined.\nWe now briefly summarize our experimental results.\nWe showed that by intelligently aggregating user clickthroughs across queries and users, we can achieve higher accuracy in predicting user preferences than previous strategies.\nBecause of the skewed distribution of user clicks, our clickthrough-only strategies have high precision, but low recall (i.e., they do not attempt to predict the relevance of many search results).\nNevertheless, our CD+CDiff clickthrough strategy outperforms the most recent state-of-the-art results by a large margin (0.72 precision for CD+CDiff vs. 
0.64 for SA+N) at the highest recall level of SA+N.\nFurthermore, by considering the comprehensive UserBehavior features that model user interactions after the search and beyond the initial click, we can achieve substantially higher precision and recall than by considering clickthrough alone.\nOur UserBehavior strategy achieves recall of over 0.43 with precision of over 0.67 (with much higher precision at lower recall levels), substantially outperforming the current search engine preference ranking and all other implicit feedback interpretation methods.\n7.\nCONCLUSIONS AND FUTURE WORK\nOur paper is the first, to our knowledge, to interpret post-search user behavior to estimate user preferences in a real web search setting.\nWe showed that our robust models result in higher prediction accuracy than previously published techniques.\nWe introduced new, robust, probabilistic techniques for interpreting clickthrough evidence by aggregating across users and queries.\nOur methods result in clickthrough interpretation substantially more accurate than previously published results that were not specifically designed for web search scenarios.\nOur methods' predictions of relevance preferences are substantially more accurate than the current state-of-the-art search result ranking that does not consider user interactions.\nWe also presented a general model for interpreting post-search user behavior that incorporates clickthrough, browsing, and query features.\nBy considering the complete search experience after the initial query and click, we demonstrated prediction accuracy far exceeding that of interpreting only the limited clickthrough information.\nFurthermore, we showed that automatically learning to interpret user behavior results in substantially better performance than the human-designed ad-hoc clickthrough interpretation strategies.\nAnother benefit of automatically learning to interpret user behavior is that such methods can adapt to changing conditions and changing user 
profiles.\nFor example, the user behavior model for intranet search may be different from that for web search.\nOur general UserBehavior method would be able to adapt to these changes by automatically learning to map new behavior patterns to explicit relevance ratings.\nA natural application of our preference prediction models is to improve web search ranking [1].\nIn addition, our work has many potential applications, including click spam detection, search abuse detection, personalization, and domain-specific ranking.\nFor example, our automatically derived behavior models could be trained on examples of search abuse or click spam behavior instead of relevance labels.\nAlternatively, our models could be used directly to detect anomalies in user behavior, either due to abuse or to operational problems with the search engine.\nWhile our techniques perform well on average, our assumptions about clickthrough distributions (and learning the user behavior models) may not hold equally well for all queries.\nFor example, queries with divergent access patterns (e.g., for ambiguous queries with multiple meanings) may result in behavior inconsistent with the model learned for all queries.\nHence, clustering queries and learning different predictive models for each query type is a promising research direction.\nQuery distributions also change over time, and it would be productive to investigate how that affects the predictive ability of these models.\nFurthermore, some predicted preferences may be more valuable than others, and we plan to investigate different metrics to capture the utility of the predicted preferences.\nAs we showed in this paper, using the \"wisdom of crowds\" can give us an accurate interpretation of user interactions even in the inherently noisy web search setting.\nOur techniques allow us to automatically predict relevance preferences for web search results with accuracy greater than that of previously published methods.\nThe predicted relevance 
preferences can be used for automatic relevance evaluation and tuning, for deploying search in new settings, and ultimately for improving the overall web search experience.","keyphrases":["user prefer","click spam detect","person","implicit feedback","clickthrough","relev measur","inform retriev","top relev document posit","induc weight","predict model","page dwell time","follow-up queri","explicit relev judgment","predict behavior model","recal measur","precis measur","low recal","web search rank","search abus detect","domain-specif rank","interpret implicit relev feedback","user behavior model","predict relev prefer"],"prmu":["P","P","P","P","P","U","M","U","U","R","U","U","U","R","U","U","U","R","M","M","M","R","M"]} {"id":"H-46","title":"Broad Expertise Retrieval in Sparse Data Environments","abstract":"Expertise retrieval has been largely unexplored on data other than the W3C collection. At the same time, many intranets of universities and other knowledge-intensive organisations offer examples of relatively small but clean multilingual expertise data, covering broad ranges of expertise areas. We first present two main expertise retrieval tasks, along with a set of baseline approaches based on generative language modeling, aimed at finding expertise relations between topics and people. For our experimental evaluation, we introduce (and release) a new test set based on a crawl of a university site. Using this test set, we conduct two series of experiments. The first is aimed at determining the effectiveness of baseline expertise retrieval methods applied to the new test set. The second is aimed at assessing refined models that exploit characteristic features of the new test set, such as the organizational structure of the university, and the hierarchical structure of the topics in the test set. 
Expertise retrieval models are shown to be robust with respect to environments smaller than the W3C collection, and current techniques appear to be generalizable to other settings.","lvl-1":"Broad Expertise Retrieval in Sparse Data Environments Krisztian Balog ISLA, University of Amsterdam Kruislaan 403, 1098 SJ Amsterdam, The Netherlands kbalog@science.uva.nl Toine Bogers ILK, Tilburg University P.O. Box 90153, 5000 LE Tilburg, The Netherlands A.M.Bogers@uvt.nl Leif Azzopardi Dept. of Computing Science University of Glasgow, Glasgow, G12 8QQ leif@dcs.gla.ac.uk Maarten de Rijke ISLA, University of Amsterdam Kruislaan 403, 1098 SJ Amsterdam, The Netherlands mdr@science.uva.nl Antal van den Bosch ILK, Tilburg University P.O. Box 90153, 5000 LE Tilburg, The Netherlands Antal.vdnBosch@uvt.nl ABSTRACT Expertise retrieval has been largely unexplored on data other than the W3C collection.\nAt the same time, many intranets of universities and other knowledge-intensive organisations offer examples of relatively small but clean multilingual expertise data, covering broad ranges of expertise areas.\nWe first present two main expertise retrieval tasks, along with a set of baseline approaches based on generative language modeling, aimed at finding expertise relations between topics and people.\nFor our experimental evaluation, we introduce (and release) a new test set based on a crawl of a university site.\nUsing this test set, we conduct two series of experiments.\nThe first is aimed at determining the effectiveness of baseline expertise retrieval methods applied to the new test set.\nThe second is aimed at assessing refined models that exploit characteristic features of the new test set, such as the organizational structure of the university, and the hierarchical structure of the topics in the test set.\nExpertise retrieval models are shown to be robust with respect to environments smaller than the W3C collection, and current techniques appear to be generalizable to other 
settings.\nCategories and Subject Descriptors H.3 [Information Storage and Retrieval]: H.3.1 Content Analysis and Indexing; H.3.3 Information Search and Retrieval; H.3.4 Systems and Software; H.4 [Information Systems Applications]: H.4.2 Types of Systems; H.4.m Miscellaneous General Terms Algorithms, Measurement, Performance, Experimentation 1.\nINTRODUCTION An organization's intranet provides a means for exchanging information between employees and for facilitating employee collaborations.\nTo efficiently and effectively achieve this, it is necessary to provide search facilities that enable employees not only to access documents, but also to identify expert colleagues.\nAt the TREC Enterprise Track [22] the need to study and understand expertise retrieval has been recognized through the introduction of Expert Finding tasks.\nThe goal of expert finding is to identify a list of people who are knowledgeable about a given topic.\nThis task is usually addressed by uncovering associations between people and topics [10]; commonly, a co-occurrence of the name of a person with topics in the same context is assumed to be evidence of expertise.\nAn alternative task, which uses the same idea of people-topic associations, is expert profiling, where the task is to return a list of topics that a person is knowledgeable about [3].\nThe launch of the Expert Finding task at TREC has generated a lot of interest in expertise retrieval, with rapid progress being made in terms of modeling, algorithms, and evaluation aspects.\nHowever, nearly all of the expert finding or profiling work performed has been validated experimentally using the W3C collection [24] from the Enterprise Track.\nWhile this collection is currently the only publicly available test collection for expertise retrieval tasks, it only represents one type of intranet.\nWith only one test collection it is not possible to generalize conclusions to other realistic settings.\nIn this paper we focus on expertise retrieval 
in a realistic setting that differs from the W3C setting, one in which relatively small amounts of clean, multilingual data are available that cover a broad range of expertise areas, as can be found on the intranets of universities and other knowledge-intensive organizations.\nTypically, this setting features several additional types of structure: topical structure (e.g., topic hierarchies as employed by the organization), organizational structure (faculty, department, ...), as well as multiple types of documents (research and course descriptions, publications, and academic homepages).\nThis setting is quite different from the W3C setting in ways that might impact the performance of expertise retrieval tasks.\nWe focus on a number of research questions in this paper: Does the relatively small amount of data available on an intranet affect the quality of the topic-person associations that lie at the heart of expertise retrieval algorithms?\nHow do state-of-the-art algorithms developed on the W3C data set perform in the alternative scenario of the type described above?\nMore generally, do the lessons from the Expert Finding task at TREC carry over to this setting?\nHow does the inclusion or exclusion of different documents affect expertise retrieval tasks?\nIn addition, how can the topical and organizational structure be used for retrieval purposes?\nTo answer our research questions, we first present a set of baseline approaches, based on generative language modeling, aimed at finding associations between topics and people.\nThis allows us to formulate the expert finding and expert profiling tasks in a uniform way, and has the added benefit of allowing us to understand the relations between the two tasks.\nFor our experimental evaluation, we introduce a new data set (the UvT Expert Collection) which is representative of the type of intranet that we described above.\nOur collection is based on publicly available data, crawled from the website of Tilburg 
University (UvT).\nThis type of data is particularly interesting, since (1) it is clean, heterogeneous, structured, and focused, but comprises a limited number of documents; (2) it contains information on the organizational hierarchy; (3) it is bilingual (English and Dutch); and (4) the list of expertise areas of an individual is provided by the employees themselves.\nUsing the UvT Expert collection, we conduct two sets of experiments.\nThe first is aimed at determining the effectiveness of baseline expertise finding and profiling methods in this new setting.\nA second group of experiments is aimed at extensions of the baseline methods that exploit characteristic features of the UvT Expert Collection; specifically, we propose and evaluate refined expert finding and profiling methods that incorporate topicality and organizational structure.\nApart from the research questions and data set that we contribute, our main contributions are as follows.\nThe baseline models developed for expertise finding perform well on the new data set.\nWhile in the W3C setting the expert finding task appears to be more difficult than profiling, for the UvT data the opposite is the case.\nWe find that profiling on the UvT data set is considerably more difficult than on the W3C set, which we believe is due to the large (but realistic) number of topical areas that we used for profiling: about 1,500 for the UvT set, versus 50 in the W3C case.\nTaking the similarity between topics into account can significantly improve retrieval performance.\nThe best performing similarity measures are content-based; therefore, they can be applied in the W3C (and other) settings as well.\nFinally, we demonstrate that the organizational structure can be exploited in the form of a context model, improving MAP scores for certain models by up to 70%.\nThe remainder of this paper is organized as follows.\nIn the next section we review related work.\nThen, in Section 3 we provide detailed descriptions of the 
expertise retrieval tasks that we address in this paper: expert finding and expert profiling.\nIn Section 4 we present our baseline models, whose performance is then assessed in Section 6 using the UvT data set that we introduce in Section 5.\nAdvanced models exploiting specific features of our data are presented in Section 7 and evaluated in Section 8.\nWe formulate our conclusions in Section 9.\n2.\nRELATED WORK Initial approaches to expertise finding often employed databases containing information on the skills and knowledge of each individual in the organization [11].\nMost of these tools (usually called yellow pages or people-finding systems) rely on people to self-assess their skills against a predefined set of keywords.\nFor updating profiles in these systems in an automatic fashion there is a need for intelligent technologies [5].\nMore recent approaches use specific document sets (such as email [6] or software [18]) to find expertise.\nIn contrast with focusing on particular document types, there is also an increased interest in the development of systems that index and mine published intranet documents as sources of evidence for expertise.\nOne such published approach is the P@noptic system [9], which builds a representation of each person by concatenating all documents associated with that person; this is similar to Model 1 of Balog et al. 
[4], who formalize and compare two methods.\nBalog et al.'s Model 1 directly models the knowledge of an expert from associated documents, while their Model 2 first locates documents on the topic and then finds the associated experts.\nIn the reported experiments the second method performs significantly better when there are sufficiently many associated documents per candidate.\nMost systems that took part in the 2005 and 2006 editions of the Expert Finding task at TREC implemented (variations on) one of these two models; see [10, 20].\nMacdonald and Ounis [16] propose a different approach for ranking candidate expertise with respect to a topic based on data fusion techniques, without using collection-specific heuristics; they find that applying field-based weighting models improves the ranking of candidates.\nPetkova and Croft [19] propose yet another approach, based on a combination of the above Models 1 and 2, explicitly modeling topics.\nTurning to other expert retrieval tasks that can also be addressed using topic-people associations, Balog and de Rijke [3] addressed the task of determining topical expert profiles.\nWhile their methods proved to be efficient on the W3C corpus, they require an amount of data that may not be available in the typical knowledge-intensive organization.\nBalog and de Rijke [2] study the related task of finding experts that are similar to a small set of experts given as input.\nAs an aside, creating a textual summary of a person shows some similarities to biography finding, which has received a considerable amount of attention recently; see, e.g., [13].\nWe use generative language modeling to find associations between topics and people.\nIn our modeling of expert finding and profiling we collect evidence for expertise from multiple sources, in a heterogeneous collection, and integrate it with the co-occurrence of candidates' names and query terms; the language modeling setting allows us to do this in a transparent manner.\nOur modeling 
proceeds in two steps. In the first step, we consider three baseline models: two taken from [4] (the Models 1 and 2 mentioned above), and one a refined version of a model introduced in [3] (which we refer to as Model 3 below); this third model is also similar to the model described by Petkova and Croft [19]. The models we consider in our second round of experiments are mixture models, similar to contextual language models [1] and to the expanded documents of Tao et al. [21]; however, the features that we use for defining our expansions (including topical structure and organizational structure) have not been used in this way before.

3. TASKS
In the expertise retrieval scenario that we envisage, users seeking expertise within an organization have access to an interface that combines a search box (where they can search for experts or topics) with navigational structures (of experts and of topics) that allow them to click their way to an expert page (providing the profile of a person) or a topic page (providing a list of experts on the topic).

To feed this interface, we face two expertise retrieval tasks, expert finding and expert profiling, which we first define and then formalize using generative language models. In order to model either task, the probability of the query topic being associated with a candidate expert plays a key role in the final estimates for searching and profiling. By using language models, both the candidates and the query are characterized by distributions of terms in the vocabulary (used in the documents made available by the organization whose expertise retrieval needs we are addressing).

3.1 Expert finding
Expert finding involves the task of finding the right person with the appropriate skills and knowledge: Who are the experts on topic X? E.g., an employee wants to ascertain who worked on a particular project to find out why particular decisions were made, without having to trawl through documentation (if there is any). Or,
they may be in need of a trained specialist for consultancy on a specific problem. Within an organization there are usually many possible candidates who could be experts for a given topic. We can state this problem as follows: What is the probability of a candidate ca being an expert given the query topic q? That is, we determine p(ca|q), and rank candidates ca according to this probability. The candidates with the highest probability given the query are deemed the most likely experts for that topic. The challenge is how to estimate this probability accurately. Since the query is likely to consist of only a few terms describing the expertise required, we should be able to obtain a more accurate estimate by invoking Bayes' Theorem, and estimating:

    p(ca|q) = p(q|ca) p(ca) / p(q),   (1)

where p(ca) is the probability of a candidate and p(q) is the probability of a query. Since p(q) is a constant, it can be ignored for ranking purposes. Thus, the probability of a candidate ca being an expert given the query q is proportional to the probability of a query given the candidate, p(q|ca), weighted by the a priori belief p(ca) that candidate ca is an expert:

    p(ca|q) ∝ p(q|ca) p(ca).   (2)

In this paper our main focus is on estimating the probability of a query given the candidate, p(q|ca), because this probability captures the extent to which the candidate knows about the query topic. Whereas the candidate priors are generally assumed to be uniform (and thus will not influence the ranking), it has been demonstrated that a sensible choice of priors may improve performance [20].

3.2 Expert profiling
While the task of expert finding is concerned with finding experts given a particular topic, the task of expert profiling seeks to answer a related question: What topics does a candidate know about? Essentially, this turns the question of expert finding around. The profiling of an individual candidate involves the identification of areas of skills and knowledge that they
have expertise about and an evaluation of the level of proficiency in each of these areas. This is the candidate's topical profile. Generally, topical profiles within organizations consist of tabular structures which explicitly catalogue the skills and knowledge of each individual in the organization. However, such practice is limited by the resources available for defining, creating, maintaining, and updating these profiles over time. By focusing on automatic methods which draw upon the available evidence within the document repositories of an organization, our aim is to reduce the human effort associated with the maintenance of topical profiles.¹

A topical profile of a candidate, then, is defined as a vector in which each element i corresponds to candidate ca's expertise on a given topic ki, i.e., s(ca, ki). Each topic ki defines a particular knowledge area or skill that the organization uses to define the candidate's topical profile. Thus, it is assumed that a list of topics {k1, ..., kn}, where n is the number of pre-defined topics, is given:

    profile(ca) = ⟨s(ca, k1), s(ca, k2), ..., s(ca, kn)⟩.   (3)

¹ Context and evidence are needed to help users of expertise finding systems decide whom to contact when seeking expertise in a particular area. Examples of such context are: Who does she work with? What are her contact details? Is she well-connected, just in case she is not able to help us herself? What is her role in the organization? Who is her superior? Collaborators, affiliations, etc.
are all part of the candidate's social profile, and can serve as a background against which the system's recommendations should be interpreted. In this paper we only address the problem of determining topical profiles, and leave social profiling to future work.

We state the problem of quantifying the competence of a person on a certain knowledge area as follows: What is the probability of a knowledge area ki being part of the candidate's (expertise) profile? Here, s(ca, ki) is defined by p(ki|ca). Our task, then, is to estimate p(ki|ca), which is equivalent to the problem of obtaining p(q|ca), where the topic ki is represented as a query topic q, i.e., a sequence of keywords representing the expertise required.

Both the expert finding and profiling tasks rely on the accurate estimation of p(q|ca). The only difference derives from the prior probability that a person is an expert, p(ca), which can be incorporated into the expert finding task. This prior does not apply to the profiling task, since the candidate (individual) is fixed.

4. BASELINE MODELS
In this section we describe our baseline models for estimating p(q|ca), i.e., associations between topics and people. Both expert finding and expert profiling boil down to this estimation. We employ three models for calculating this probability.

4.1 From topics to candidates
Using Candidate Models: Model 1. Model 1 [4] defines the probability of a query given a candidate, p(q|ca), using standard language modeling techniques, based on a multinomial unigram language model. For each candidate ca, a candidate language model θca is inferred such that the probability of a term given θca is nonzero for all terms, i.e., p(t|θca) > 0. From the candidate model the query is generated with the following probability:

    p(q|θca) = ∏_{t∈q} p(t|θca)^{n(t,q)},

where each term t in the query q is sampled identically and independently, and n(t, q) is the number of times t occurs in q. The
candidate language model is inferred as follows: (1) an empirical model p(t|ca) is computed; (2) it is smoothed with background probabilities. Using the associations between a candidate and a document, the probability p(t|ca) can be approximated by:

    p(t|ca) = Σ_d p(t|d) p(d|ca),

where p(d|ca) is the probability that candidate ca generates a supporting document d, and p(t|d) is the probability of a term t occurring in the document d. We use the maximum-likelihood estimate of a term, that is, the normalised frequency of the term t in document d. The strength of the association between document d and candidate ca, expressed by p(d|ca), reflects the degree to which the candidate's expertise is described by this document. The estimation of this probability is presented later, in Section 4.2. The candidate model is then constructed as a linear interpolation of p(t|ca) and the background model p(t), to ensure there are no zero probabilities, which results in the final estimation:

    p(q|θca) = ∏_{t∈q} [ (1 − λ) Σ_d p(t|d) p(d|ca) + λ p(t) ]^{n(t,q)}.   (4)

Model 1 amasses all the term information from all the documents associated with the candidate, and uses this to represent the candidate. The model is used to predict how likely a candidate would produce a query q. This can intuitively be interpreted as the probability of this candidate talking about the query topic, where we assume that this is indicative of their expertise.

Using Document Models: Model 2. Model 2 [4] takes a different approach. Here, the process is broken into two parts. Given a candidate ca, (1) a document associated with the candidate is selected with probability p(d|ca), and (2) from this document a query q is generated with probability p(q|d). The sum over all documents is then taken to obtain p(q|ca):

    p(q|ca) = Σ_d p(q|d) p(d|ca).   (5)

The probability of a query given a document is estimated by inferring a document language model θd for each
document d, in a similar manner to how the candidate model was inferred:

    p(t|θd) = (1 − λ) p(t|d) + λ p(t),   (6)

where p(t|d) is the probability of the term in the document. The probability of a query given the document model is:

    p(q|θd) = ∏_{t∈q} p(t|θd)^{n(t,q)}.

The final estimate of p(q|ca) is obtained by substituting p(q|θd) for p(q|d) in Eq. 5 (see [4] for full details). Conceptually, Model 2 differs from Model 1 because the candidate is not directly modeled. Instead, the document acts as a hidden variable in the process, separating the query from the candidate. This process is akin to how a user may search for candidates with a standard search engine: initially by finding the documents which are relevant, and then seeing who is associated with those documents. By examining a number of documents, the user can obtain an idea of which candidates are more likely to discuss the topic q.

Using Topic Models: Model 3. We introduce a third model, Model 3. Instead of attempting to model the query generation process via candidate or document models, we represent the query as a topic language model and directly estimate the probability of the candidate, p(ca|q). This approach is similar to the model presented in [3, 19]. As with the previous models, a language model is inferred, but this time for the query. We adapt the work of Lavrenko and Croft [14] to estimate a topic model from the query. The procedure is as follows. Given a collection of documents and a query topic q, it is assumed that there exists an unknown topic model θk that assigns probabilities p(t|θk) to the term occurrences in the topic documents. Both the query and the documents are samples from θk (as opposed to the previous approaches, where a query is assumed to be sampled from a specific document or candidate model). The main task is to estimate p(t|θk), the probability of a term given the topic model. Since the query q is very sparse,
and as there are no examples of documents on the topic, this distribution needs to be approximated. Lavrenko and Croft [14] suggest a reasonable way of obtaining such an approximation, by assuming that p(t|θk) can be approximated by the probability of term t given the query q. We can then estimate p(t|q) using the joint probability of observing the term t together with the query terms q1, ..., qm, and dividing by the joint probability of the query terms:

    p(t|θk) ≈ p(t|q) = p(t, q1, ..., qm) / p(q1, ..., qm) = p(t, q1, ..., qm) / Σ_{t′∈T} p(t′, q1, ..., qm),

where p(q1, ..., qm) = Σ_{t′∈T} p(t′, q1, ..., qm), and T is the entire vocabulary of terms. In order to estimate the joint probability p(t, q1, ..., qm), we follow [14, 15] and assume that t and q1, ..., qm are mutually independent once we pick a source distribution from the set of underlying source distributions U. If we choose U to be a set of document models, then to construct this set, the query q is issued against the collection, and the top n returned documents are assumed to be relevant to the topic, and thus treated as samples from the topic model. (Note that candidate models could be used instead.) With the document models forming U, the joint probability of term and query becomes:

    p(t, q1, ..., qm) = Σ_{d∈U} p(d) [ p(t|θd) ∏_{i=1}^{m} p(qi|θd) ].   (7)

Here, p(d) denotes the prior distribution over the set U, which reflects the relevance of the document to the topic. We assume that p(d) is uniform across U.

In order to rank candidates according to the topic model defined, we use the Kullback-Leibler divergence metric (KL, [8]) to measure the difference between the candidate models and the topic model:

    KL(θk || θca) = Σ_t p(t|θk) log [ p(t|θk) / p(t|θca) ].   (8)

Candidates with a smaller divergence from the topic model are considered to be more likely experts on that topic. The candidate model θca is defined in Eq. 4. By using KL divergence instead of the probability of a candidate given the topic model, p(ca|θk), we avoid normalization problems.

4.2 Document-candidate associations
For our models we need to estimate the probability p(d|ca), which expresses the extent to which a document d characterizes the candidate ca. In [4], two methods are presented for estimating this probability, based on the number of person names recognized in a document. However, in our (intranet) setting it is reasonable to assume that authors of documents can be identified unambiguously (e.g., as the author of an article, the teacher assigned to a course, the owner of a web page, etc.).
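The baseline estimators of this section can be sketched in a few lines. The following is a minimal illustration of Models 1 and 2 (Eqs. 4-6) with Jelinek-Mercer smoothing and binary document-candidate associations; the toy corpus, the candidate names, and the smoothing weight are hypothetical, and this is not the implementation used in the experiments:

```python
import math
from collections import Counter

# Hypothetical toy corpus and authorship map, for illustration only.
DOCS = {
    "d1": "language models for expert finding in enterprise collections",
    "d2": "expert profiling with topic hierarchies and language models",
    "d3": "auction theory and mechanism design",
}
AUTHORS = {"d1": {"alice"}, "d2": {"alice", "bob"}, "d3": {"bob"}}
LAMBDA = 0.5  # weight of the background model p(t); an assumed value

tokens = {d: text.split() for d, text in DOCS.items()}
tf = {d: Counter(ts) for d, ts in tokens.items()}
background = Counter(t for ts in tokens.values() for t in ts)
bg_total = sum(background.values())

def p_t_d(t, d):
    # Maximum-likelihood estimate p(t|d).
    return tf[d][t] / len(tokens[d])

def p_t(t):
    # Background model p(t) over the whole collection.
    return background[t] / bg_total

def p_d_ca(d, ca):
    # Binary association (Section 4.2): 1 iff candidate ca authored document d.
    return 1.0 if ca in AUTHORS[d] else 0.0

def model1(query, ca):
    # log p(q|theta_ca): pool term statistics over authored docs, then smooth (Eq. 4).
    score = 0.0
    for t in query.split():
        p_t_ca = sum(p_t_d(t, d) * p_d_ca(d, ca) for d in DOCS)
        score += math.log((1 - LAMBDA) * p_t_ca + LAMBDA * p_t(t))
    return score

def model2(query, ca):
    # p(q|ca) = sum_d p(q|theta_d) p(d|ca): smooth each document model first (Eqs. 5-6).
    total = 0.0
    for d in DOCS:
        p_q_d = 1.0
        for t in query.split():
            p_q_d *= (1 - LAMBDA) * p_t_d(t, d) + LAMBDA * p_t(t)
        total += p_q_d * p_d_ca(d, ca)
    return total

query = "expert finding language models"
ranking = sorted({"alice", "bob"}, key=lambda ca: model2(query, ca), reverse=True)
print(ranking)  # alice authored the on-topic documents, so she ranks first
```

The sketch makes the conceptual difference concrete: Model 1 pools term statistics across a candidate's documents before smoothing, whereas Model 2 smooths each document model individually and only then sums over the candidate's documents, so the two can rank candidates differently on sparse data.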
Hence, we set p(d|ca) to be 1 if candidate ca is an author of document d, and 0 otherwise. In Section 6 we describe how authorship can be determined for the different types of documents within the collection.

5. THE UVT EXPERT COLLECTION
The UvT Expert collection used in the experiments in this paper fits the scenario outlined in Section 3. The collection is based on the Webwijs ("Webwise") system developed at Tilburg University (UvT) in the Netherlands. Webwijs (http://www.uvt.nl/webwijs/) is a publicly accessible database of UvT employees who are involved in research or teaching; currently, Webwijs contains information about 1168 experts, each of whom has a page with contact information and, if made available by the expert, a research description and publications list. In addition, each expert can select expertise areas from a list of 1491 topics, and is encouraged to suggest new topics, which need to be approved by the Webwijs editor. Each topic has a separate page that shows all experts associated with that topic and, if available, a list of related topics.

Webwijs is available in Dutch and English, and this bilinguality has been preserved in the collection. Every Dutch Webwijs page has an English translation. Not all Dutch topics have an English translation, but the reverse is true: the 981 English topics all have a Dutch equivalent.

About 42% of the experts teach courses at Tilburg University; these courses were also crawled and included in the profile. In addition, about 27% of the experts link to their academic homepage from their Webwijs page. These home pages were crawled and added to the collection. (This means that if experts put the full-text versions of their publications on their academic homepage, these were also available for indexing.) We also obtained 1880 full-text versions of publications from the UvT institutional repository and converted them to plain text. We ran the TextCat [23] language identifier to classify the language of the home pages and the full-text publications. We restricted ourselves to pages where the classifier was confident about the language used on the page.

This resulted in four document types: research descriptions (RD), course descriptions (CD), publications (PUB; full-text and citation-only versions), and academic homepages (HP). Everything was bundled into the UvT Expert collection, which is available at http://ilk.uvt.nl/uvt-expert-collection/.

                                               Dutch     English
  no. of experts                               1168      1168
  no. of experts with ≥ 1 topic                743       727
  no. of topics                                1491      981
  no. of expert-topic pairs                    4318      3251
  avg. no. of topics/expert                    5.8       5.9
  max. no. of topics/expert (no. of experts)   60 (1)    35 (1)
  min. no. of topics/expert (no. of experts)   1 (74)    1 (106)
  avg. no. of experts/topic                    2.9       3.3
  max. no. of experts/topic (no. of topics)    30 (1)    30 (1)
  min. no. of experts/topic (no. of topics)    1 (615)   1 (346)
  no. of experts with HP                       318       318
  no. of experts with CD                       318       318
  avg. no. of CDs per teaching expert          3.5       3.5
  no. of experts with RD                       329       313
  no. of experts with PUB                      734       734
  avg. no. of PUBs per expert                  27.0      27.0
  avg. no. of PUB citations per expert         25.2      25.2
  avg. no. of full-text PUBs per expert        1.8       1.8

Table 2: Descriptive statistics of the Dutch and English versions of the UvT Expert collection.

The UvT Expert collection was extracted from a different organizational setting than the W3C collection and differs from it in a number of ways. The UvT setting is one with relatively small amounts of multilingual data. Document-author associations are clear, and the data is structured and clean. The collection covers a broad range of expertise areas, as one can typically find on intranets of universities and other knowledge-intensive institutes. Additionally, our university setting features several types of structure (topical and organizational), as well as multiple document types. Another important difference between the two data sets is that
the expertise areas in the UvT Expert collection are self-selected, instead of being based on group membership or assignments by others. Size is another dimension along which the W3C and UvT Expert collections differ: the latter is the smaller of the two. Also realistic are the large differences in the amount of information available for each expert. Utilizing Webwijs is voluntary; 425 Dutch experts did not select any topics at all. This leaves us with 743 Dutch and 727 English usable expert profiles. Table 2 provides descriptive statistics for the UvT Expert collection.

Universities tend to have a hierarchical structure that goes from the faculty level, to departments and research groups, down to the individual researchers. In the UvT Expert collection we have information about the affiliations of researchers with faculties and institutes, providing us with a two-level organizational hierarchy. Tilburg University has 22 organizational units at the faculty level (including the university office and several research institutes) and 71 departments, which amounts to 3.2 departments per faculty. As to the topical hierarchy used by Webwijs, 131 of the 1491 topics are top nodes in the hierarchy. This hierarchy has an average topic chain length of 2.65 and a maximum length of 7 topics.

6. EVALUATION
Below, we evaluate Section 4's models for expert finding and profiling on the UvT Expert collection. We detail our research questions and experimental setup, and then present our results.

6.1 Research Questions
We address the following research questions. Both expert finding and profiling rely on the estimation of p(q|ca). The question is how the models compare on the different tasks, and in the setting of the UvT Expert collection. In [4], Model 2 outperformed Model 1 on the W3C collection. How do they compare on our data set? And how does Model 3 compare to Model 1? What about performance differences between the two languages in our test collection?

6.2
Experimental Setup
The output of our models was evaluated against the self-assigned topic labels, which were treated as relevance judgements. Results were evaluated separately for English and Dutch. For English we only used topics for which a Dutch translation was available; for Dutch, all topics were considered. The results were averaged over the queries in the intersection of relevance judgements and results; missing queries do not contribute a value of 0 to the scores. We use standard information retrieval measures, such as Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR). We also report the percentage of topics (%q) and candidates (%ca) covered, for the expert finding and profiling tasks, respectively.

6.3 Results
Table 1 shows the performance of Models 1, 2, and 3 on the expert finding and profiling tasks. The rows of the table correspond to the various document types (RD, CD, PUB, and HP) and to their combinations. RD+CD+PUB+HP is equivalent to the full collection, and will be referred to as the BASELINE of our experiments.

Looking at Table 1 we see that Model 2 performs best across the board. However, when the data is clean and very focused (RD), Model 3 outperforms it in a number of cases. Model 1 has the best coverage of candidates (%ca) and topics (%q). The various document types differ in their characteristics and in how they improve the finding and profiling tasks. Expert profiling benefits greatly from the clean data present in the RD and CD document types, while the publications contribute the most to the expert finding task. Adding the homepages does not prove to be particularly useful. When we compare the results across languages, we find that the coverage of English topics (%q) is higher than that of the Dutch ones for expert finding. Apart from that, the scores fall in the same range for both languages. For the profiling task the coverage of the candidates (%ca) is very similar for both languages. However, the performance is
substantially better for the English topics.

While it is hard to compare scores across collections, we conclude with a brief comparison of the absolute scores in Table 1 to those reported in [3, 4] on the W3C test set (2005 edition). For expert finding, the MAP scores for Model 2 reported here are about 50% higher than the corresponding figures in [4], while our MRR scores are slightly below those in [4]. For expert profiling, the differences are far more dramatic: the MAP scores for Model 2 reported here are around 50% below the scores in [3], while the (best) MRR scores are about the same as those in [3]. The cause for the latter differences seems to reside in the number of knowledge areas considered here: approximately 30 times more than in the W3C setting.

Expert finding:
  Document types   Model 1               Model 2               Model 3
                   %q    MAP    MRR      %q    MAP    MRR      %q    MAP    MRR
  English
  RD               97.8  0.126  0.269    83.5  0.144  0.311    83.3  0.129  0.271
  CD               97.8  0.118  0.227    91.7  0.123  0.248    91.7  0.118  0.226
  PUB              97.8  0.200  0.330    98.0  0.216  0.372    98.0  0.145  0.257
  HP               97.8  0.081  0.186    97.4  0.071  0.168    97.2  0.062  0.149
  RD+CD            97.8  0.188  0.352    92.9  0.193  0.360    92.9  0.150  0.273
  RD+CD+PUB        97.8  0.235  0.373    98.1  0.277  0.439    98.1  0.178  0.305
  RD+CD+PUB+HP     97.8  0.237  0.372    98.6  0.280  0.441    98.5  0.166  0.293
  Dutch
  RD               61.3  0.094  0.229    38.4  0.137  0.336    38.3  0.127  0.295
  CD               61.3  0.107  0.212    49.7  0.128  0.256    49.7  0.136  0.261
  PUB              61.3  0.193  0.319    59.5  0.218  0.368    59.4  0.173  0.291
  HP               61.3  0.063  0.169    56.6  0.064  0.175    56.4  0.062  0.163
  RD+CD            61.3  0.159  0.314    51.9  0.184  0.360    51.9  0.169  0.324
  RD+CD+PUB        61.3  0.244  0.398    61.5  0.260  0.424    61.4  0.210  0.350
  RD+CD+PUB+HP     61.3  0.249  0.401    62.6  0.265  0.436    62.6  0.195  0.344

Expert profiling:
  Document types   Model 1               Model 2               Model 3
                   %ca   MAP    MRR      %ca   MAP    MRR      %ca   MAP    MRR
  English
  RD               100   0.089  0.189    39.3  0.232  0.465    41.1  0.166  0.337
  CD               32.8  0.188  0.381    32.4  0.195  0.385    32.7  0.203  0.370
  PUB              78.9  0.167  0.364    74.5  0.212  0.442    78.9  0.135  0.299
  HP               31.2  0.150  0.299    28.8  0.185  0.335    30.1  0.136  0.287
  RD+CD            100   0.145  0.286    61.3  0.251  0.477    63.2  0.217  0.416
  RD+CD+PUB        100   0.196  0.380    87.2  0.280  0.533    89.5  0.170  0.344
  RD+CD+PUB+HP     100   0.199  0.387    88.7  0.281  0.525    90.9  0.169  0.329
  Dutch
  RD               38.0  0.127  0.386    34.1  0.138  0.420    38.0  0.105  0.327
  CD               32.5  0.151  0.389    31.8  0.158  0.396    32.5  0.170  0.380
  PUB              78.8  0.126  0.364    76.0  0.150  0.424    78.8  0.103  0.294
  HP               29.8  0.108  0.308    27.8  0.125  0.338    29.8  0.098  0.255
  RD+CD            60.5  0.151  0.410    57.2  0.166  0.431    60.4  0.159  0.384
  RD+CD+PUB        90.3  0.165  0.445    88.2  0.189  0.479    90.3  0.126  0.339
  RD+CD+PUB+HP     91.9  0.164  0.426    90.1  0.195  0.488    91.9  0.125  0.328

Table 1: Performance of the models on the expert finding and profiling tasks, using different document types and their combinations. %q is the percentage of topics covered (expert finding task); %ca is the percentage of candidates covered (expert profiling task). The top and bottom blocks correspond to English and Dutch, respectively. The best scores are in boldface in the original.

7. ADVANCED MODELS
Now that we have developed and assessed basic language modeling techniques for expertise retrieval, we turn to refined models that exploit special features of our test collection.

7.1 Exploiting knowledge area similarity
One way to improve the scoring of a query given a candidate is to consider what other requests the candidate would satisfy, and to use them as further evidence to support the original query, proportional to how related the other requests are to the original query. This can be modeled by interpolating between p(q|ca) and the further supporting evidence from all similar requests q′:

    p′(q|ca) = λ p(q|ca) + (1 − λ) Σ_{q′} p(q|q′) p(q′|ca),   (9)

where p(q|q′) represents the similarity between the two topics q and q′. To be able to work with similarity methods that are not necessarily probabilities, we set p(q|q′) = w(q, q′) / γ, where γ is a normalizing constant, such that γ = Σ_{q′′} w(q′′, q′).

We consider four methods for calculating the similarity score between two topics. Three approaches are strictly content-based, and establish similarity by examining
co-occurrence patterns of topics within the collection, while the last approach exploits the hierarchical structure of topical areas that may be present within an organization (see [7] for further examples of integrating word relationships into language models).

The Kullback-Leibler (KL) divergence metric defined in Eq. 8 provides a measure of how different or similar two probability distributions are. A topic model is inferred for q and q′ using the method presented in Section 4.1, to describe the query across the entire vocabulary. Since a lower KL score means the queries are more similar, we let w(q, q′) = max(KL(θq || ·)) − KL(θq || θq′).

Pointwise Mutual Information (PMI, [17]) is a measure of association used in information theory to determine the extent of independence between variables. The dependence between two queries is reflected by the SI(q, q′) score, where scores greater than zero indicate that there is likely to be a dependence, which we take to mean that the queries are likely to be similar:

    SI(q, q′) = log [ p(q, q′) / (p(q) p(q′)) ].   (10)

We estimate the probability of a topic, p(q), using the number of documents relevant to query q within the collection. The joint probability p(q, q′) is estimated similarly, by using the concatenation of q and q′ as a query. To obtain p(q|q′), we then set w(q, q′) = SI(q, q′) when SI(q, q′) > 0, and w(q, q′) = 0 otherwise, because we are only interested in including queries that are similar.

The log-likelihood statistic provides another measure of dependence, which is more reliable than the pointwise mutual information measure [17]. Let k1 be the number of co-occurrences of q and q′, k2 the number of occurrences of q not co-occurring with q′, n1 the total number of occurrences of q′, and n2 the total number of topic tokens minus the number of occurrences of q′. Then, let p1 = k1/n1, p2 = k2/n2, and p = (k1 + k2)/(n1 + n2), and

    LL(q, q′) = 2 ( ℓ(p1, k1, n1) + ℓ(p2, k2, n2) − ℓ(p, k1, n1) − ℓ(p, k2, n2) ),

where ℓ(p, k, n) = k log p + (n − k) log(1 − p). A higher score indicates that the queries are more likely to be similar; thus we set w(q, q′) = LL(q, q′).

Finally, we also estimate the similarity of two topics based on their distance within the topic hierarchy. The topic hierarchy is viewed as a directed graph, and for all topic pairs the shortest path SP(q, q′) is calculated. We set the similarity score to be the reciprocal of the shortest path: w(q, q′) = 1/SP(q, q′).

7.2 Contextual information
Given the hierarchy of an organization, the units to which a person belongs are regarded as a context, so as to compensate for data sparseness. We model this as follows:

    p′(q|ca) = (1 − Σ_{ou∈OU(ca)} λ_ou) · p(q|ca) + Σ_{ou∈OU(ca)} λ_ou · p(q|ou),

where OU(ca) is the set of organizational units of which candidate ca is a member, and p(q|ou) expresses the strength of the association between query q and the unit ou. The latter probability can be estimated using any of the three basic models, by simply replacing ca with ou in the corresponding equations. An organizational unit is associated with all the documents that its members have authored; that is, p(d|ou) = max_{ca∈ou} p(d|ca).

7.3 A simple multilingual model
For knowledge institutes in Europe, academic or otherwise, a multilingual (or at least bilingual) setting is typical. The following model builds on an independence assumption: there is no spill-over of expertise/profiles across language boundaries. While a simplification, this is a sensible first approach. That is:

    p′(q|ca) = Σ_{l∈L} λ_l · p(q_l|ca),

where L is the set of languages used in the collection, q_l is the translation of the query q into language l, and λ_l is a language-specific smoothing parameter, such that Σ_{l∈L} λ_l = 1.

8. ADVANCED MODELS: EVALUATION
In this section we present an experimental evaluation of our advanced models.
Expert finding:
  Language       Model 1               Model 2               Model 3
                 %q    MAP    MRR      %q    MAP    MRR      %q    MAP    MRR
  English only   97.8  0.237  0.372    98.6  0.280  0.441    98.5  0.166  0.293
  Dutch only     61.3  0.249  0.401    62.6  0.265  0.436    62.6  0.195  0.344
  Combination    99.4  0.297  0.444    99.7  0.324  0.491    99.7  0.223  0.388

Expert profiling:
  Language       Model 1               Model 2               Model 3
                 %ca   MAP    MRR      %ca   MAP    MRR      %ca   MAP    MRR
  English only   100   0.199  0.387    88.7  0.281  0.525    90.9  0.169  0.329
  Dutch only     91.9  0.164  0.426    90.1  0.195  0.488    91.9  0.125  0.328
  Combination    100   0.241  0.445    92.1  0.313  0.564    93.2  0.224  0.411

Table 3: Performance of the combination of languages on the expert finding and profiling tasks (on candidates). Best scores for each model are in italic in the original; absolute best scores for the expert finding and profiling tasks are in boldface in the original.

            Method     Model 1          Model 2          Model 3
                       MAP    MRR       MAP    MRR       MAP    MRR
  Expert finding
  English   BASELINE   0.296  0.454     0.339  0.509     0.221  0.333
            KLDIV      0.291  0.453     0.327  0.503     0.219  0.330
            PMI        0.291  0.453     0.337  0.509     0.219  0.331
            LL         0.319  0.490     0.360  0.524     0.233  0.368
            HDIST      0.299  0.465     0.346  0.537     0.219  0.332
  Dutch     BASELINE   0.240  0.350     0.271  0.403     0.227  0.389
            KLDIV      0.239  0.347     0.253  0.386     0.224  0.385
            PMI        0.239  0.350     0.260  0.392     0.227  0.389
            LL         0.255  0.372     0.281  0.425     0.231  0.389
            HDIST      0.253  0.365     0.271  0.407     0.236  0.402
  Expert profiling
  English   BASELINE   0.485  0.546     0.499  0.548     0.381  0.416
            KLDIV      0.510  0.564     0.513  0.558     0.381  0.416
            PMI        0.486  0.546     0.495  0.542     0.407  0.451
            LL         0.558  0.589     0.586  0.617     0.408  0.453
            HDIST      0.507  0.567     0.512  0.563     0.386  0.420
  Dutch     BASELINE   0.263  0.313     0.294  0.358     0.262  0.315
            KLDIV      0.284  0.336     0.271  0.321     0.261  0.314
            PMI        0.265  0.317     0.265  0.316     0.273  0.330
            LL         0.312  0.351     0.330  0.377     0.284  0.331
            HDIST      0.280  0.327     0.288  0.341     0.266  0.321

Table 4: Performance on the expert finding (top) and profiling (bottom) tasks, using knowledge area similarities. Runs were evaluated on the main topics set. Best scores are in boldface in the original.

8.1 Research Questions
Our questions follow the refinements presented in the preceding section: Does exploiting the knowledge area
similarity improve effectiveness? Which of the various methods for capturing word relationships is most effective? Furthermore, is our way of bringing in contextual information useful? For which tasks? And finally, is our simple way of combining the monolingual scores sufficient for obtaining significant improvements?
8.2 Experimental setup
Given that the self-assessments are also sparse in our collection, in order to be able to measure differences between the various models, we selected a subset of topics, and evaluated (some of the) runs only on this subset. This set is referred to as main topics, and consists of topics that are located at the top level of the topical hierarchy. (A main topic has subtopics, but is not a subtopic of any other topic.) This main set consists of 132 Dutch and 119 English topics. The relevance judgements were restricted to the main topic set, but were not expanded with subtopics.
8.3 Exploiting knowledge area similarity
Table 4 presents the results. The four methods used for estimating knowledge-area similarity are KL divergence (KLDIV), pointwise mutual information (PMI), log-likelihood (LL), and distance within the topic hierarchy (HDIST).

Lang.  Topics    Model 1        Model 2        Model 3
                 MAP    MRR     MAP    MRR     MAP    MRR
Expert finding
UK     ALL       0.423  0.545   0.654  0.799   0.494  0.629
UK     MAIN      0.500  0.621   0.704  0.834   0.587  0.699
NL     ALL       0.439  0.560   0.672  0.826   0.480  0.630
NL     MAIN      0.440  0.584   0.645  0.816   0.515  0.655
Expert profiling
UK     ALL       0.240  0.640   0.306  0.778   0.223  0.616
UK     MAIN      0.523  0.677   0.519  0.648   0.461  0.587
NL     ALL       0.203  0.716   0.254  0.770   0.183  0.627
NL     MAIN      0.332  0.576   0.380  0.624   0.332  0.549
Table 5: Evaluating the context models on organizational units.

We managed to improve upon the baseline in all cases, but the improvement is more noticeable for the profiling task. For both tasks, the LL method performed best. The content-based approaches performed consistently better than HDIST.
8.4 Contextual information
A two-level hierarchy of
organizational units (faculties and institutes) is available in the UvT Expert collection. The unit a person belongs to is used as a context for that person. First, we evaluated the models of the organizational units, using all topics (ALL) and only the main topics (MAIN). An organizational unit is considered to be relevant for a given topic (or vice versa) if at least one member of the unit selected the given topic as an expertise area. Table 5 reports on the results. As far as expert finding goes, given a topic, the corresponding organizational unit can be identified with high precision. However, the expert profiling task shows a different picture: the scores are low, and the task seems hard. The explanation may be that general concepts (i.e., our main topics) may belong to several organizational units.
Second, we performed another evaluation, where we combined the contextual models with the candidate models (to score candidates again). Table 6 reports on the results. We find a positive impact of the context models only for expert finding. Notably, for expert finding (and Model 1), the context model improves MAP by over 50% (for English) and over 70% (for Dutch). The poor performance on expert profiling may be due to the fact that the context models alone did not perform very well on the profiling task to begin with.

Lang.  Method    Model 1        Model 2        Model 3
                 MAP    MRR     MAP    MRR     MAP    MRR
Expert finding
UK     BL        0.296  0.454   0.339  0.509   0.221  0.333
UK     CT        0.330  0.491   0.342  0.500   0.228  0.342
NL     BL        0.240  0.350   0.271  0.403   0.227  0.389
NL     CT        0.251  0.382   0.267  0.410   0.246  0.404
Expert profiling
UK     BL        0.485  0.546   0.499  0.548   0.381  0.416
UK     CT        0.562  0.620   0.508  0.558   0.440  0.486
NL     BL        0.263  0.313   0.294  0.358   0.262  0.315
NL     CT        0.330  0.384   0.317  0.387   0.294  0.345
Table 6: Performance of the context models (CT) compared to the baseline (BL). Best scores are in boldface.

8.5 Multilingual models
In this subsection we evaluate the method for combining results across multiple languages that we described in Section 7.3. In our setting the set of languages consists of English and Dutch: L = {UK, NL}. The weights on these languages were set to be identical (λ_UK = λ_NL = 0.5). We performed experiments with various λ settings, but did not observe significant differences in performance. Table 3 reports on the multilingual results, where performance is evaluated on the full topic set. All three models significantly improved over all measures for both tasks. The coverage of topics and candidates for the expert finding and profiling tasks, respectively, is close to 100% in all cases. The relative improvement of the precision scores ranges from 10% to 80%. These scores demonstrate that despite its simplicity, our method for combining results over multiple languages achieves substantial improvements over the baseline.
9. CONCLUSIONS
In this paper we focused on expertise retrieval (expert finding and profiling) in a new setting: a typical knowledge-intensive organization in which the available data is of high quality, multilingual, and covers a broad range of expertise areas. Typically, the amount of available data in such an organization (e.g., a university, a research institute, or a research lab) is limited when compared to the W3C collection that has mostly been used for the experimental evaluation of expertise retrieval so far. To examine expertise retrieval in this setting, we introduced (and released) the UvT Expert collection as a representative case of such knowledge-intensive organizations. The new collection reflects the typical properties of knowledge-intensive institutes noted above and also includes several features which are potentially useful for expertise retrieval, such as topical and organizational structure. We evaluated how current state-of-the-art models for expert finding and profiling performed in this new setting and then refined these models in order to try and exploit the different characteristics within
the data environment (language, topicality, and organizational structure). We found that current models of expertise retrieval generalize well to this new environment; in addition, we found that refining the models to account for the differences results in significant improvements, thus making up for problems caused by data sparseness. Future work includes setting up manual assessments of automatically generated profiles by the employees themselves, especially in cases where the employees have not provided a profile themselves.
10. ACKNOWLEDGMENTS
Krisztian Balog was supported by the Netherlands Organisation for Scientific Research (NWO) under project number 220-80-001. Maarten de Rijke was also supported by NWO under project numbers 017.001.190, 220-80-001, 264-70-050, 354-20-005, 600.065.120, 612-13-001, 612.000.106, 612.066.302, 612.069.006, 640.001.501, 640.002.501, and by the E.U. IST programme of the 6th FP for RTD under project MultiMATCH contract IST-033104. The work of Toine Bogers and Antal van den Bosch was funded by the IOP-MMI-program of SenterNovem / The Dutch Ministry of Economic Affairs, as part of the À Propos project.
11. REFERENCES
[1] L. Azzopardi. Incorporating Context in the Language Modeling Framework for Ad-hoc Information Retrieval. PhD thesis, University of Paisley, 2005.
[2] K. Balog and M. de Rijke. Finding similar experts. In this volume, 2007.
[3] K. Balog and M. de Rijke. Determining expert profiles (with an application to expert finding). In IJCAI '07: Proc. 20th Intern. Joint Conf. on Artificial Intelligence, pages 2657-2662, 2007.
[4] K. Balog, L. Azzopardi, and M. de Rijke. Formal models for expert finding in enterprise corpora. In SIGIR '06: Proc. 29th Annual Intern. ACM SIGIR Conf. on Research and Development in Information Retrieval, pages 43-50, 2006.
[5] I.
Becerra-Fernandez. The role of artificial intelligence technologies in the implementation of people-finder knowledge management systems. In AAAI Workshop on Bringing Knowledge to Business Processes, March 2000.
[6] C. S. Campbell, P. P. Maglio, A. Cozzi, and B. Dom. Expertise identification using email communications. In CIKM '03: Proc. 12th Intern. Conf. on Information and Knowledge Management, pages 528-531, 2003.
[7] G. Cao, J.-Y. Nie, and J. Bai. Integrating word relationships into language models. In SIGIR '05: Proc. 28th Annual Intern. ACM SIGIR Conf. on Research and Development in Information Retrieval, pages 298-305, 2005.
[8] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, 1991.
[9] N. Craswell, D. Hawking, A. M. Vercoustre, and P. Wilkins. P@noptic expert: Searching for experts not just for documents. In Ausweb, 2001.
[10] N. Craswell, A. de Vries, and I. Soboroff. Overview of the TREC 2005 Enterprise Track. In The Fourteenth Text REtrieval Conf. Proc. (TREC 2005), 2006.
[11] T. H. Davenport and L. Prusak. Working Knowledge: How Organizations Manage What They Know. Harvard Business School Press, Boston, MA, 1998.
[12] T. Dunning. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74, 1993.
[13] E. Filatova and J. Prager. Tell me what you do and I'll tell you what you are: Learning occupation-related activities for biographies. In HLT/EMNLP, 2005.
[14] V. Lavrenko and W. B. Croft. Relevance based language models. In SIGIR '01: Proc. 24th Annual Intern. ACM SIGIR Conf. on Research and Development in Information Retrieval, pages 120-127, 2001.
[15] V. Lavrenko, M. Choquette, and W. B. Croft. Cross-lingual relevance models. In SIGIR '02: Proc. 25th Annual Intern. ACM SIGIR Conf. on Research and Development in Information Retrieval, pages 175-182, 2002.
[16] C. Macdonald and I.
Ounis. Voting for candidates: adapting data fusion techniques for an expert search task. In CIKM '06: Proc. 15th ACM Intern. Conf. on Information and Knowledge Management, pages 387-396, 2006.
[17] C. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. The MIT Press, 1999.
[18] A. Mockus and J. D. Herbsleb. Expertise browser: a quantitative approach to identifying expertise. In ICSE '02: Proc. 24th Intern. Conf. on Software Engineering, pages 503-512, 2002.
[19] D. Petkova and W. B. Croft. Hierarchical language models for expert finding in enterprise corpora. In Proc. ICTAI 2006, pages 599-608, 2006.
[20] I. Soboroff, A. de Vries, and N. Craswell. Overview of the TREC 2006 Enterprise Track. In TREC 2006 Working Notes, 2006.
[21] T. Tao, X. Wang, Q. Mei, and C. Zhai. Language model information retrieval with document expansion. In HLT-NAACL 2006, 2006.
[22] TREC. Enterprise track, 2005. URL: http://www.ins.cwi.nl/projects/trec-ent/wiki/.
[23] G.
van Noord. TextCat Language Guesser. URL: http://www.let.rug.nl/~vannoord/TextCat/.
[24] W3C. The W3C test collection, 2005. URL: http://research.microsoft.com/users/nickcr/w3c-summary.html","lvl-3":"Broad Expertise Retrieval in Sparse Data Environments
ABSTRACT
Expertise retrieval has been largely unexplored on data other than the W3C collection. At the same time, many intranets of universities and other knowledge-intensive organisations offer examples of relatively small but clean multilingual expertise data, covering broad ranges of expertise areas. We first present two main expertise retrieval tasks, along with a set of baseline approaches based on generative language modeling, aimed at finding expertise relations between topics and people. For our experimental evaluation, we introduce (and release) a new test set based on a crawl of a university site. Using this test set, we conduct two series of experiments. The first is aimed at determining the effectiveness of baseline expertise retrieval methods applied to the new test set. The second is aimed at assessing refined models that exploit characteristic features of the new test set, such as the organizational structure of the university, and the hierarchical structure of the topics in the test set. Expertise retrieval models are shown to be robust with respect to environments smaller than the W3C collection, and current techniques appear to be generalizable to other settings.
1. INTRODUCTION
An organization's intranet provides a means for exchanging information between employees and for facilitating employee collaborations. To efficiently and effectively achieve this, it is necessary to provide search facilities that enable employees not only to access documents, but also to identify expert colleagues. At the TREC Enterprise Track [22] the need to study and understand expertise retrieval has been recognized through the introduction of Expert Finding tasks. The goal of
expert finding is to identify a list of people who are knowledgeable about a given topic. This task is usually addressed by uncovering associations between people and topics [10]; commonly, a co-occurrence of the name of a person with topics in the same context is assumed to be evidence of expertise. An alternative task, which builds on the same idea of people-topic associations, is expert profiling, where the task is to return a list of topics that a person is knowledgeable about [3]. The launch of the Expert Finding task at TREC has generated a lot of interest in expertise retrieval, with rapid progress being made in terms of modeling, algorithms, and evaluation aspects. However, nearly all of the expert finding or profiling work performed has been validated experimentally using the W3C collection [24] from the Enterprise Track. While this collection is currently the only publicly available test collection for expertise retrieval tasks, it only represents one type of intranet. With only one test collection it is not possible to generalize conclusions to other realistic settings. In this paper we focus on expertise retrieval in a realistic setting that differs from the W3C setting: one in which relatively small amounts of clean, multilingual data are available that cover a broad range of expertise areas, as can be found on the intranets of universities and other knowledge-intensive organizations. Typically, this setting features several additional types of structure: topical structure (e.g., topic hierarchies as employed by the organization), organizational structure (faculty, department, ...), as well as multiple types of documents (research and course descriptions, publications, and academic homepages). This setting is quite different from the W3C setting in ways that might impact upon the performance of expertise retrieval tasks. We focus on a number of research questions in this paper: Does the relatively small amount of data available on an intranet
affect the quality of the topic-person associations that lie at the heart of expertise retrieval algorithms? How do state-of-the-art algorithms developed on the W3C data set perform in the alternative scenario of the type described above? More generally, do the lessons from the Expert Finding task at TREC carry over to this setting? How does the inclusion or exclusion of different documents affect expertise retrieval tasks? In addition, how can the topical and organizational structure be used for retrieval purposes? To answer our research questions, we first present a set of baseline approaches, based on generative language modeling, aimed at finding associations between topics and people. This allows us to formulate the expert finding and expert profiling tasks in a uniform way, and has the added benefit of allowing us to understand the relations between the two tasks. For our experimental evaluation, we introduce a new data set (the UvT Expert Collection) which is representative of the type of intranet that we described above. Our collection is based on publicly available data, crawled from the website of Tilburg University (UvT). This type of data is particularly interesting, since (1) it is clean, heterogeneous, structured, and focused, but comprises a limited number of documents; (2) it contains information on the organizational hierarchy; (3) it is bilingual (English and Dutch); and (4) the list of expertise areas of an individual is provided by the employees themselves. Using the UvT Expert collection, we conduct two sets of experiments. The first is aimed at determining the effectiveness of baseline expertise finding and profiling methods in this new setting. A second group of experiments is aimed at extensions of the baseline methods that exploit characteristic features of the UvT Expert Collection; specifically, we propose and evaluate refined expert finding and profiling methods that incorporate topicality and organizational
structure.
Apart from the research questions and data set that we contribute, our main contributions are as follows. The baseline models developed for expertise finding perform well on the new data set. While on the W3C setting the expert finding task appears to be more difficult than profiling, for the UvT data the opposite is the case. We find that profiling on the UvT data set is considerably more difficult than on the W3C set, which we believe is due to the large (but realistic) number of topical areas that we used for profiling: about 1,500 for the UvT set, versus 50 in the W3C case. Taking the similarity between topics into account can significantly improve retrieval performance. The best performing similarity measures are content-based, and can therefore be applied in the W3C (and other) settings as well. Finally, we demonstrate that the organizational structure can be exploited in the form of a context model, improving MAP scores for certain models by up to 70%. The remainder of this paper is organized as follows. In the next section we review related work. Then, in Section 3 we provide detailed descriptions of the expertise retrieval tasks that we address in this paper: expert finding and expert profiling. In Section 4 we present our baseline models, whose performance is then assessed in Section 6 using the UvT data set that we introduce in Section 5. Advanced models exploiting specific features of our data are presented in Section 7 and evaluated in Section 8. We formulate our conclusions in Section 9.
2. RELATED WORK
Initial approaches to expertise finding often employed databases containing information on the skills and knowledge of each individual in the organization [11]. Most of these tools (usually called yellow pages or people-finding systems) rely on people to self-assess their skills against a predefined set of keywords. For updating profiles in these systems in an automatic fashion there is a need for intelligent
technologies [5]. More recent approaches use specific document sets (such as email [6] or software [18]) to find expertise. In contrast to approaches that focus on particular document types, there is also an increased interest in the development of systems that index and mine published intranet documents as sources of evidence for expertise. One such published approach is the P@noptic system [9], which builds a representation of each person by concatenating all documents associated with that person; this is similar to Model 1 of Balog et al. [4], who formalize and compare two methods. Balog et al.'s Model 1 directly models the knowledge of an expert from associated documents, while their Model 2 first locates documents on the topic and then finds the associated experts. In the reported experiments the second method performs significantly better when there are sufficiently many associated documents per candidate. Most systems that took part in the 2005 and 2006 editions of the Expert Finding task at TREC implemented (variations on) one of these two models; see [10, 20]. Macdonald and Ounis [16] propose a different approach for ranking candidate expertise with respect to a topic, based on data fusion techniques and without using collection-specific heuristics; they find that applying field-based weighting models improves the ranking of candidates. Petkova and Croft [19] propose yet another approach, based on a combination of the above Models 1 and 2, explicitly modeling topics. Turning to other expert retrieval tasks that can also be addressed using topic-people associations, Balog and de Rijke [3] addressed the task of determining topical expert profiles. While their methods proved to be efficient on the W3C corpus, they require an amount of data that may not be available in the typical knowledge-intensive organization. Balog and de Rijke [2] study the related task of finding experts that are similar to a small set of experts given as input. As an aside, creating a
textual \"summary\" of a person shows some similarities to biography finding, which has received a considerable amount of attention recently; see e.g., [13].\nWe use generative language modeling to find associations between topics and people.\nIn our modeling of expert finding and profiling we collect evidence for expertise from multiple sources, in a heterogeneous collection, and integrate it with the co-occurrence of candidates' names and query terms--the language modeling setting allows us to do this in a transparent manner.\nOur modeling proceeds in two steps.\nIn the first step, we consider three baseline models, two taken from [4] (the Models 1 and 2 mentioned above), and one a refined version of a model introduced in [3] (which we refer to as Model 3 below); this third model is also similar to the model described by Petkova and Croft [19].\nThe models we consider in our second round of experiments are mixture models similar to contextual language models [1] and to the expanded documents of Tao et al. 
[21]; however, the features that we use for defining our expansions, including topical structure and organizational structure, have not been used in this way before.
3. TASKS
3.1 Expert finding
3.2 Expert profiling
4. BASELINE MODELS
4.1 From topics to candidates
4.2 Document-candidate associations
5. THE UVT EXPERT COLLECTION
6. EVALUATION
6.1 Research Questions
6.2 Experimental Setup
6.3 Results
7. ADVANCED MODELS
7.1 Exploiting knowledge area similarity
7.2 Contextual information
7.3 A simple multilingual model
8. ADVANCED MODELS: EVALUATION
8.1 Research Questions
8.2 Experimental setup
8.3 Exploiting knowledge area similarity
Expert finding
8.4 Contextual information
8.5 Multilingual models
9. CONCLUSIONS
In this paper we focused on expertise retrieval (expert finding and profiling) in a new setting: a typical knowledge-intensive organization in which the available data is of high quality, multilingual, and covers a broad range of expertise areas. Typically, the amount of available data in such an organization (e.g., a university, a research institute, or a research lab) is limited when compared to the W3C collection that has mostly been used for the experimental evaluation of expertise retrieval so far. To examine expertise retrieval in this setting, we introduced (and released) the UvT Expert collection as a representative case of such knowledge-intensive organizations. The new collection reflects the typical properties of knowledge-intensive institutes noted above and also includes several features which are potentially useful for expertise retrieval, such as topical and organizational structure. We evaluated how current state-of-the-art models for expert finding and profiling performed in this new setting and then refined these models in order to try and exploit the different characteristics within
retrieval generalize well to this new environment; in addition we found that refining the models to account for the differences results in significant improvements, thus making up for problems caused by data sparseness issues.\nFuture work includes setting up manual assessments of automatically generated profiles by the employees themselves, especially in cases where the employees have not provided a profile themselves.","lvl-4":"Broad Expertise Retrieval in Sparse Data Environments\nABSTRACT\nExpertise retrieval has been largely unexplored on data other than the W3C collection.\nAt the same time, many intranets of universities and other knowledge-intensive organisations offer examples of relatively small but clean multilingual expertise data, covering broad ranges of expertise areas.\nWe first present two main expertise retrieval tasks, along with a set of baseline approaches based on generative language modeling, aimed at finding expertise relations between topics and people.\nFor our experimental evaluation, we introduce (and release) a new test set based on a crawl of a university site.\nUsing this test set, we conduct two series of experiments.\nThe first is aimed at determining the effectiveness of baseline expertise retrieval methods applied to the new test set.\nThe second is aimed at assessing refined models that exploit characteristic features of the new test set, such as the organizational structure of the university, and the hierarchical structure of the topics in the test set.\nExpertise retrieval models are shown to be robust with respect to environments smaller than the W3C collection, and current techniques appear to be generalizable to other settings.\n1.\nINTRODUCTION\nAn organization's intranet provides a means for exchanging information between employees and for facilitating employee collaborations.\nto provide search facilities that enable employees not only to access documents, but also to identify expert colleagues.\nAt the TREC Enterprise 
Track [22] the need to study and understand expertise retrieval has been recognized through the introduction of Expert Finding tasks. The goal of expert finding is to identify a list of people who are knowledgeable about a given topic. This task is usually addressed by uncovering associations between people and topics [10]; commonly, a co-occurrence of the name of a person with topics in the same context is assumed to be evidence of expertise. An alternative task, which builds on the same idea of people-topic associations, is expert profiling, where the task is to return a list of topics that a person is knowledgeable about [3]. The launch of the Expert Finding task at TREC has generated a lot of interest in expertise retrieval, with rapid progress being made in terms of modeling, algorithms, and evaluation aspects. However, nearly all of the expert finding or profiling work performed has been validated experimentally using the W3C collection [24] from the Enterprise Track. While this collection is currently the only publicly available test collection for expertise retrieval tasks, it only represents one type of intranet. With only one test collection it is not possible to generalize conclusions to other realistic settings. In this paper we focus on expertise retrieval in a realistic setting that differs from the W3C setting: one in which relatively small amounts of clean, multilingual data are available that cover a broad range of expertise areas, as can be found on the intranets of universities and other knowledge-intensive organizations. This setting is quite different from the W3C setting in ways that might impact upon the performance of expertise retrieval tasks. We focus on a number of research questions in this paper: Does the relatively small amount of data available on an intranet affect the quality of the topic-person associations that lie at the heart of expertise retrieval algorithms? How do state-of-the-art algorithms developed on the W3C data
set perform in the alternative scenario of the type described above? More generally, do the lessons from the Expert Finding task at TREC carry over to this setting? How does the inclusion or exclusion of different documents affect expertise retrieval tasks? In addition, how can the topical and organizational structure be used for retrieval purposes? To answer our research questions, we first present a set of baseline approaches, based on generative language modeling, aimed at finding associations between topics and people. This allows us to formulate the expert finding and expert profiling tasks in a uniform way, and has the added benefit of allowing us to understand the relations between the two tasks. For our experimental evaluation, we introduce a new data set (the UvT Expert Collection) which is representative of the type of intranet that we described above. Our collection is based on publicly available data, crawled from the website of Tilburg University (UvT). Using the UvT Expert collection, we conduct two sets of experiments. The first is aimed at determining the effectiveness of baseline expertise finding and profiling methods in this new setting. A second group of experiments is aimed at extensions of the baseline methods that exploit characteristic features of the UvT Expert Collection; specifically, we propose and evaluate refined expert finding and profiling methods that incorporate topicality and organizational structure. Apart from the research questions and data set that we contribute, our main contributions are as follows. The baseline models developed for expertise finding perform well on the new data set. While on the W3C setting the expert finding task appears to be more difficult than profiling, for the UvT data the opposite is the case. Taking the similarity between topics into account can significantly improve retrieval performance. The best performing similarity measures are content-based and can therefore be applied on
the W3C (and other) settings as well. Finally, we demonstrate that the organizational structure can be exploited in the form of a context model, improving MAP scores for certain models by up to 70%. The remainder of this paper is organized as follows. In the next section we review related work. Then, in Section 3 we provide detailed descriptions of the expertise retrieval tasks that we address in this paper: expert finding and expert profiling. In Section 4 we present our baseline models, whose performance is then assessed in Section 6 using the UvT data set that we introduce in Section 5. Advanced models exploiting specific features of our data are presented in Section 7 and evaluated in Section 8. We formulate our conclusions in Section 9.
2. RELATED WORK
Initial approaches to expertise finding often employed databases containing information on the skills and knowledge of each individual in the organization [11]. More recent approaches use specific document sets (such as email [6] or software [18]) to find expertise. In contrast to approaches that focus on particular document types, there is also an increased interest in the development of systems that index and mine published intranet documents as sources of evidence for expertise. Balog et al.'s Model 1 directly models the knowledge of an expert from associated documents, while their Model 2 first locates documents on the topic and then finds the associated experts. In the reported experiments the second method performs significantly better when there are sufficiently many associated documents per candidate. Most systems that took part in the 2005 and 2006 editions of the Expert Finding task at TREC implemented (variations on) one of these two models; see [10, 20]. Macdonald and Ounis [16] propose a different approach for ranking candidate expertise with respect to a topic, based on data fusion techniques and without using collection-specific heuristics; they find that applying field-based weighting
models improves the ranking of candidates. Petkova and Croft [19] propose yet another approach, based on a combination of the above Models 1 and 2, explicitly modeling topics. Turning to other expert retrieval tasks that can also be addressed using topic-people associations, Balog and de Rijke [3] addressed the task of determining topical expert profiles. While their methods proved to be efficient on the W3C corpus, they require an amount of data that may not be available in the typical knowledge-intensive organization. Balog and de Rijke [2] study the related task of finding experts that are similar to a small set of experts given as input. We use generative language modeling to find associations between topics and people. In our modeling of expert finding and profiling we collect evidence for expertise from multiple sources, in a heterogeneous collection, and integrate it with the co-occurrence of candidates' names and query terms; the language modeling setting allows us to do this in a transparent manner. Our modeling proceeds in two steps.
9. CONCLUSIONS
In this paper we focused on expertise retrieval (expert finding and profiling) in a new setting: a typical knowledge-intensive organization in which the available data is of high quality, multilingual, and covers a broad range of expertise areas. Typically, the amount of available data in such an organization (e.g., a university, a research institute, or a research lab) is limited when compared to the W3C collection that has mostly been used for the experimental evaluation of expertise retrieval so far. To examine expertise retrieval in this setting, we introduced (and released) the UvT Expert collection as a representative case of such knowledge-intensive organizations. The new collection reflects the typical properties of knowledge-intensive institutes noted above and also includes several features which are potentially useful for expertise retrieval, such as topical and organizational
structure.\nWe evaluated how current state-of-the-art models for expert finding and profiling performed in this new setting and then refined these models in order to try and exploit the different characteristics within the data environment (language, topicality, and organizational structure).\nWe found that current models of expertise retrieval generalize well to this new environment; in addition we found that refining the models to account for the differences results in significant improvements, thus making up for problems caused by data sparseness issues.\nFuture work includes setting up manual assessments of automatically generated profiles by the employees themselves, especially in cases where the employees have not provided a profile themselves.","lvl-2":"Broad Expertise Retrieval in Sparse Data Environments\nABSTRACT\nExpertise retrieval has been largely unexplored on data other than the W3C collection.\nAt the same time, many intranets of universities and other knowledge-intensive organisations offer examples of relatively small but clean multilingual expertise data, covering broad ranges of expertise areas.\nWe first present two main expertise retrieval tasks, along with a set of baseline approaches based on generative language modeling, aimed at finding expertise relations between topics and people.\nFor our experimental evaluation, we introduce (and release) a new test set based on a crawl of a university site.\nUsing this test set, we conduct two series of experiments.\nThe first is aimed at determining the effectiveness of baseline expertise retrieval methods applied to the new test set.\nThe second is aimed at assessing refined models that exploit characteristic features of the new test set, such as the organizational structure of the university, and the hierarchical structure of the topics in the test set.\nExpertise retrieval models are shown to be robust with respect to environments smaller than the W3C collection, and current techniques appear to 
be generalizable to other settings.

1. INTRODUCTION

An organization's intranet provides a means for exchanging information between employees and for facilitating employee collaborations. To efficiently and effectively achieve this, it is necessary to provide search facilities that enable employees not only to access documents, but also to identify expert colleagues. At the TREC Enterprise Track [22] the need to study and understand expertise retrieval has been recognized through the introduction of Expert Finding tasks. The goal of expert finding is to identify a list of people who are knowledgeable about a given topic. This task is usually addressed by uncovering associations between people and topics [10]; commonly, a co-occurrence of the name of a person with topics in the same context is assumed to be evidence of expertise. An alternative task, which uses the same idea of people-topic associations, is expert profiling, where the task is to return a list of topics that a person is knowledgeable about [3].

The launch of the Expert Finding task at TREC has generated a lot of interest in expertise retrieval, with rapid progress being made in terms of modeling, algorithms, and evaluation aspects. However, nearly all of the expert finding or profiling work performed has been validated experimentally using the W3C collection [24] from the Enterprise Track. While this collection is currently the only publicly available test collection for expertise retrieval tasks, it only represents one type of intranet. With only one test collection it is not possible to generalize conclusions to other realistic settings.

In this paper we focus on expertise retrieval in a realistic setting that differs from the W3C setting--one in which relatively small amounts of clean, multilingual data are available, that cover a broad range of expertise areas, as can be found on the intranets of universities and other knowledge-intensive organizations. Typically, this setting features several additional types of structure: topical structure (e.g., topic hierarchies as employed by the organization), organizational structure (faculty, department, ...), as well as multiple types of documents (research and course descriptions, publications, and academic homepages). This setting is quite different from the W3C setting in ways that might impact upon the performance of expertise retrieval tasks.

We focus on a number of research questions in this paper: Does the relatively small amount of data available on an intranet affect the quality of the topic-person associations that lie at the heart of expertise retrieval algorithms? How do state-of-the-art algorithms developed on the W3C data set perform in the alternative scenario of the type described above? More generally, do the lessons from the Expert Finding task at TREC carry over to this setting? How does the inclusion or exclusion of different documents affect expertise retrieval tasks? In addition, how can the topical and organizational structure be used for retrieval purposes?

To answer our research questions, we first present a set of baseline approaches, based on generative language modeling, aimed at finding associations between topics and people. This allows us to formulate the expert finding and expert profiling tasks in a uniform way, and has the added benefit of allowing us to understand the relations between the two tasks. For our experimental evaluation, we introduce a new data set (the UvT Expert Collection) which is representative of the type of intranet that we described above. Our collection is based on publicly available data, crawled from the website of Tilburg University (UvT). This type of data is particularly interesting, since (1) it is clean, heterogeneous, structured, and focused, but comprises a limited number of documents; (2) it contains information on the organizational hierarchy; (3) it is bilingual (English and Dutch); and (4) the list of expertise
areas of an individual is provided by the employees themselves. Using the UvT Expert collection, we conduct two sets of experiments. The first is aimed at determining the effectiveness of baseline expertise finding and profiling methods in this new setting. A second group of experiments is aimed at extensions of the baseline methods that exploit characteristic features of the UvT Expert Collection; specifically, we propose and evaluate refined expert finding and profiling methods that incorporate topicality and organizational structure.

Apart from the research questions and data set that we contribute, our main contributions are as follows. The baseline models developed for expertise finding perform well on the new data set. While in the W3C setting the expert finding task appears to be more difficult than profiling, for the UvT data the opposite is the case. We find that profiling on the UvT data set is considerably more difficult than on the W3C set, which we believe is due to the large (but realistic) number of topical areas that we used for profiling: about 1,500 for the UvT set, versus 50 in the W3C case. Taking the similarity between topics into account can significantly improve retrieval performance. The best performing similarity measures are content-based, and can therefore be applied in the W3C (and other) settings as well. Finally, we demonstrate that the organizational structure can be exploited in the form of a context model, improving MAP scores for certain models by up to 70%.

The remainder of this paper is organized as follows. In the next section we review related work. Then, in Section 3 we provide detailed descriptions of the expertise retrieval tasks that we address in this paper: expert finding and expert profiling. In Section 4 we present our baseline models, whose performance is then assessed in Section 6 using the UvT data set that we introduce in Section 5. Advanced models exploiting specific features of our data are presented in Section 7 and evaluated in Section 8. We formulate our conclusions in Section 9.

2. RELATED WORK

Initial approaches to expertise finding often employed databases containing information on the skills and knowledge of each individual in the organization [11]. Most of these tools (usually called yellow pages or people-finding systems) rely on people to self-assess their skills against a predefined set of keywords. Updating the profiles in these systems automatically calls for intelligent technologies [5]. More recent approaches use specific document sets (such as email [6] or software [18]) to find expertise. In contrast with focusing on particular document types, there is also an increased interest in the development of systems that index and mine published intranet documents as sources of evidence for expertise. One such published approach is the P@noptic system [9], which builds a representation of each person by concatenating all documents associated with that person--this is similar to Model 1 of Balog et al.
[4], who formalize and compare two methods. Balog et al.'s Model 1 directly models the knowledge of an expert from associated documents, while their Model 2 first locates documents on the topic and then finds the associated experts. In the reported experiments the second method performs significantly better when there are sufficiently many associated documents per candidate. Most systems that took part in the 2005 and 2006 editions of the Expert Finding task at TREC implemented (variations on) one of these two models; see [10, 20]. Macdonald and Ounis [16] propose a different approach for ranking candidate expertise with respect to a topic based on data fusion techniques, without using collection-specific heuristics; they find that applying field-based weighting models improves the ranking of candidates. Petkova and Croft [19] propose yet another approach, based on a combination of the above Models 1 and 2, explicitly modeling topics.

Turning to other expert retrieval tasks that can also be addressed using topic--people associations, Balog and de Rijke [3] addressed the task of determining topical expert profiles. While their methods proved to be efficient on the W3C corpus, they require an amount of data that may not be available in the typical knowledge-intensive organization. Balog and de Rijke [2] study the related task of finding experts that are similar to a small set of experts given as input. As an aside, creating a textual "summary" of a person shows some similarities to biography finding, which has received a considerable amount of attention recently; see e.g., [13].

We use generative language modeling to find associations between topics and people. In our modeling of expert finding and profiling we collect evidence for expertise from multiple sources, in a heterogeneous collection, and integrate it with the co-occurrence of candidates' names and query terms--the language modeling setting allows us to do this in a transparent manner. Our modeling proceeds in two steps. In the first step, we consider three baseline models, two taken from [4] (the Models 1 and 2 mentioned above), and one a refined version of a model introduced in [3] (which we refer to as Model 3 below); this third model is also similar to the model described by Petkova and Croft [19]. The models we consider in our second round of experiments are mixture models similar to contextual language models [1] and to the expanded documents of Tao et al. [21]; however, the features that we use for defining our expansions--including topical structure and organizational structure--have not been used in this way before.

3. TASKS

In the expertise retrieval scenario that we envisage, users seeking expertise within an organization have access to an interface that combines a search box (where they can search for experts or topics) with navigational structures (of experts and of topics) that allow them to click their way to an expert page (providing the profile of a person) or a topic page (providing a list of experts on the topic). To "feed" the above interface, we face two expertise retrieval tasks, expert finding and expert profiling, that we first define and then formalize using generative language models. In order to model either task, the probability of the query topic being associated with a candidate expert plays a key role in the final estimates for searching and profiling. By using language models, both the candidates and the query are characterized by distributions of terms in the vocabulary (used in the documents made available by the organization whose expertise retrieval needs we are addressing).

3.1 Expert finding

Expert finding involves the task of finding the right person with the appropriate skills and knowledge: Who are the experts on topic X? E.g., an employee wants to ascertain who worked on a particular project to find out why particular decisions were made without having to trawl through documentation (if there
is any). Or, they may be in need of a trained specialist for consultancy on a specific problem. Within an organization there are usually many possible candidates who could be experts for a given topic. We can state this problem as follows: What is the probability of a candidate ca being an expert given the query topic q? That is, we determine p(ca|q), and rank candidates ca according to this probability. The candidates with the highest probability given the query are deemed the most likely experts for that topic. The challenge is how to estimate this probability accurately. Since the query is likely to consist of only a few terms to describe the expertise required, we should be able to obtain a more accurate estimate by invoking Bayes' Theorem, and estimating:

p(ca|q) = p(q|ca) p(ca) / p(q),

where p(ca) is the probability of a candidate and p(q) is the probability of a query. Since p(q) is a constant, it can be ignored for ranking purposes. Thus, the probability of a candidate ca being an expert given the query q is proportional to the probability of a query given the candidate p(q|ca), weighted by the a priori belief p(ca) that candidate ca is an expert. In this paper our main focus is on estimating the probability of a query given the candidate p(q|ca), because this probability captures the extent to which the candidate knows about the query topic. Whereas the candidate priors are generally assumed to be uniform--and thus will not influence the ranking--it has been demonstrated that a sensible choice of priors may improve the performance [20].

3.2 Expert profiling

While the task of expert searching was concerned with finding experts given a particular topic, the task of expert profiling seeks to answer a related question: What topics does a candidate know about? Essentially, this turns the question of expert finding around. The profiling of an individual candidate involves the identification of areas of skills and knowledge that they have expertise about and an evaluation
of the level of proficiency in each of these areas. This is the candidate's topical profile. Generally, topical profiles within organizations consist of tabular structures which explicitly catalogue the skills and knowledge of each individual in the organization. However, such practice is limited by the resources available for defining, creating, maintaining, and updating these profiles over time. By focusing on automatic methods which draw upon the available evidence within the document repositories of an organization, our aim is to reduce the human effort associated with the maintenance of topical profiles.1

A topical profile of a candidate, then, is defined as a vector where each element i of the vector corresponds to the candidate ca's expertise on a given topic ki (i.e., s(ca, ki)). Each topic ki defines a particular knowledge area or skill that the organization uses to define the candidate's topical profile. Thus, it is assumed that a list of topics, {k1, ..., kn}, where n is the number of pre-defined topics, is given:

profile(ca) = ⟨s(ca, k1), ..., s(ca, kn)⟩.

1 Context and evidence are needed to help users of expertise finding systems to decide whom to contact when seeking expertise in a particular area. Examples of such context are: Who does she work with? What are her contact details? Is she well-connected, just in case she is not able to help us herself? What is her role in the organization? Who is her superior? Collaborators, affiliations, etc. are all part of the candidate's social profile, and can serve as a background against which the system's recommendations should be interpreted. In this paper we only address the problem of determining topical profiles, and leave social profiling to further work.

We state the problem of quantifying the competence of a person on a certain knowledge area as follows: What is the probability of a knowledge area (ki) being part of the candidate's (expertise) profile? That is,

s(ca, ki) = p(ki|ca).

Our task, then, is to estimate p(ki|ca), which is equivalent to the problem of obtaining p(q|ca), where the topic ki is represented as a query topic q, i.e., a sequence of keywords representing the expertise required.

Both the expert finding and profiling tasks rely on the accurate estimation of p(q|ca). The only difference derives from the prior probability that a person is an expert (p(ca)), which can be incorporated into the expert finding task. This prior does not apply to the profiling task since the candidate (individual) is fixed.

4. BASELINE MODELS

In this section we describe our baseline models for estimating p(q|ca), i.e., associations between topics and people. Both expert finding and expert profiling boil down to this estimation. We employ three models for calculating this probability.

4.1 From topics to candidates

Using Candidate Models: Model 1

Model 1 [4] defines the probability of a query given a candidate (p(q|ca)) using standard language modeling techniques, based on a multinomial unigram language model. For each candidate ca, a candidate language model θca is inferred such that the probability of a term given θca is nonzero for all terms, i.e., p(t|θca) > 0. From the candidate model the query is generated with the following probability:

p(q|θca) = ∏_{t∈q} p(t|θca)^n(t,q),

where each term t in the query q is sampled identically and independently, and n(t, q) is the number of times t occurs in q. The candidate language model is inferred as
follows: (1) an empirical model p(t|ca) is computed; (2) it is smoothed with background probabilities. Using the associations between a candidate and a document, the probability p(t|ca) can be approximated by:

p(t|ca) = ∑_d p(t|d) p(d|ca),

where p(d|ca) is the probability that candidate ca generates a supporting document d, and p(t|d) is the probability of a term t occurring in the document d. We use the maximum-likelihood estimate of a term, that is, the normalised frequency of the term t in document d. The strength of the association between document d and candidate ca expressed by p(d|ca) reflects the degree to which the candidate's expertise is described using this document. The estimation of this probability is presented later, in Section 4.2. The candidate model is then constructed as a linear interpolation of p(t|ca) and the background model p(t) to ensure there are no zero probabilities, which results in the final estimation:

p(t|θca) = (1 − λ) ∑_d p(t|d) p(d|ca) + λ p(t).   (4)

Model 1 amasses all the term information from all the documents associated with the candidate, and uses this to represent that candidate. This model is used to predict how likely a candidate would produce a query q. This can be intuitively interpreted as the probability of this candidate talking about the query topic, where we assume that this is indicative of their expertise.

Using Document Models: Model 2

Model 2 [4] takes a different approach. Here, the process is broken into two parts. Given a candidate ca, (1) a document that is associated with a candidate is selected with probability p(d|ca), and (2) from this document a query q is generated with probability p(q|d). Then the sum over all documents is taken to obtain p(q|ca), such that:

p(q|ca) = ∑_d p(q|d) p(d|ca).   (5)

The probability of a query given a document is estimated by inferring a document language model θd for each document d in a similar manner as the candidate model was inferred:

p(t|θd) = (1 − λ) p(t|d) + λ p(t),

where p(t|d) is the probability of the term in the document. The probability of a query given the document model is:

p(q|θd) = ∏_{t∈q} p(t|θd)^n(t,q).

The final estimate of p(q|ca) is obtained by substituting p(q|θd) for p(q|d) in Eq. 5 (see [4] for full details). Conceptually, Model 2 differs from Model 1 because the candidate is not directly modeled. Instead, the document acts like a "hidden" variable in the process which separates the query from the candidate. This process is akin to how a user may search for candidates with a standard search engine: initially by finding the documents which are relevant, and then seeing who is associated with each document. By examining a number of documents the user can obtain an idea of which candidates are more likely to discuss the topic q.

Using Topic Models: Model 3

We introduce a third model, Model 3. Instead of attempting to model the query generation process via candidate or document models, we represent the query as a topic language model and directly estimate the probability of the candidate p(ca|q). This approach is similar to the model presented in [3, 19]. As with the previous models, a language model is inferred, but this time for the query. We adapt the work of Lavrenko and Croft [14] to estimate a topic model from the query. The procedure is as follows. Given a collection of documents and a query topic q, it is assumed that there exists an unknown topic model θk that assigns probabilities p(t|θk) to the term occurrences in the topic documents. Both the query and the documents are samples from θk (as opposed to the previous approaches, where a query is assumed to be sampled from a specific document or candidate model). The main task is to estimate p(t|θk), the probability of a term given the topic model. Since the query q is very sparse, and as there are no examples of documents on the topic, this distribution needs to be approximated. Lavrenko and Croft [14] suggest a reasonable way of obtaining such an approximation, by assuming that p(t|θk) can be approximated by the probability of term t given the
query q. We can then estimate p(t|q) using the joint probability of observing the term t together with the query terms, q1, ..., qm, and dividing by the joint probability of the query terms:

p(t|q) ≈ p(t, q1, ..., qm) / p(q1, ..., qm),

where p(q1, ..., qm) = ∑_{t'∈T} p(t', q1, ..., qm), and T is the entire vocabulary of terms. In order to estimate the joint probability p(t, q1, ..., qm), we follow [14, 15] and assume t and q1, ..., qm are mutually independent, once we pick a source distribution from the set of underlying source distributions U. If we choose U to be a set of document models, then to construct this set, the query q is issued against the collection, and the top n documents returned are assumed to be relevant to the topic, and thus treated as samples from the topic model. (Note that candidate models could be used instead.) With the document models forming U, the joint probability of term and query becomes:

p(t, q1, ..., qm) = ∑_{d∈U} p(d) p(t|θd) ∏_{i=1}^{m} p(qi|θd).

Here, p(d) denotes the prior distribution over the set U, which reflects the relevance of the document to the topic. We assume that p(d) is uniform across U. In order to rank candidates according to the topic model defined, we use the Kullback-Leibler divergence metric (KL, [8]) to measure the difference between the candidate models and the topic model:

KL(θk || θca) = ∑_t p(t|θk) log ( p(t|θk) / p(t|θca) ).

Candidates with a smaller divergence from the topic model are considered to be more likely experts on that topic. The candidate model θca is defined in Eq. 4. By using KL divergence instead of the probability of a candidate given the topic model p(ca|θk), we avoid normalization problems.

4.2 Document-candidate associations

For our models we need to be able to estimate the probability p(d|ca), which expresses the extent to which a document d characterizes the candidate ca. In [4], two methods are presented for estimating this probability, based on the number of person names recognized in a document. However, in our (intranet) setting it is reasonable to assume that authors of documents can unambiguously be identified.
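To make the estimation pipeline concrete, here is a minimal sketch of Models 1 and 2 with binary author-based document-candidate associations. This is our illustration, not the authors' implementation: the function names, the toy data, and the smoothing weight LAMBDA = 0.5 are all our own assumptions, and we normalize the binary associations to 1/n so the candidate model stays a proper distribution.

```python
from collections import Counter

LAMBDA = 0.5  # smoothing weight lambda; an arbitrary illustrative choice


def term_probs(text):
    """Maximum-likelihood term distribution p(t|d) for one document."""
    counts = Counter(text.split())
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}


def background_probs(docs):
    """Collection-wide background model p(t)."""
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}


def model1_score(query, cand_docs, p_bg):
    """Model 1: build one smoothed candidate model, then score the query."""
    n = len(cand_docs)  # binary associations, normalized: p(d|ca) = 1/n
    p_ca = Counter()
    for d in cand_docs:
        for t, p in term_probs(d).items():
            p_ca[t] += p / n
    score = 1.0
    for t in query.split():
        score *= (1 - LAMBDA) * p_ca.get(t, 0.0) + LAMBDA * p_bg.get(t, 0.0)
    return score


def model2_score(query, cand_docs, p_bg):
    """Model 2: score the query against each document model, then sum."""
    n = len(cand_docs)
    total = 0.0
    for d in cand_docs:
        p_d = term_probs(d)
        s = 1.0
        for t in query.split():
            s *= (1 - LAMBDA) * p_d.get(t, 0.0) + LAMBDA * p_bg.get(t, 0.0)
        total += s / n
    return total


# Toy corpus: two hypothetical candidates with their authored documents.
docs = {"ann": ["expert finding language models", "enterprise search"],
        "bob": ["course on databases"]}
p_bg = background_probs([d for ds in docs.values() for d in ds])
ranking = sorted(docs, key=lambda ca: -model2_score("expert finding", docs[ca], p_bg))
print(ranking)  # → ['ann', 'bob']
```

Note that the paper itself sets p(d|ca) to exactly 1 for authored documents; dividing by n here is only a convenience that keeps the mixture weights summing to one and does not change the ranking for candidates with equally many documents.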
For example, a candidate may be identifiable as the author of an article, the teacher assigned to a course, or the owner of a web page. Hence, we set p(d|ca) to 1 if candidate ca is the author of document d, and to 0 otherwise. In Section 6 we describe how authorship can be determined on different types of documents within the collection.

5. THE UVT EXPERT COLLECTION

The UvT Expert collection used in the experiments in this paper fits the scenario outlined in Section 3. The collection is based on the Webwijs ("Webwise") system developed at Tilburg University (UvT) in the Netherlands. Webwijs (http://www.uvt.nl/webwijs/) is a publicly accessible database of UvT employees who are involved in research or teaching; currently, Webwijs contains information about 1168 experts, each of whom has a page with contact information and, if made available by the expert, a research description and publications list. In addition, each expert can select expertise areas from a list of 1491 topics and is encouraged to suggest new topics that need to be approved by the Webwijs editor. Each topic has a separate page that shows all experts associated with that topic and, if available, a list of related topics.

Webwijs is available in Dutch and English, and this bilinguality has been preserved in the collection. Every Dutch Webwijs page has an English translation. Not all Dutch topics have an English translation, but the reverse is true: the 981 English topics all have a Dutch equivalent.

About 42% of the experts teach courses at Tilburg University; these courses were also crawled and included in the profile. In addition, about 27% of the experts link to their academic homepage from their Webwijs page. These home pages were crawled and added to the collection. (This means that if experts put the full-text versions of their publications on their academic homepage, these were also available for indexing.) We also obtained 1880 full-text versions of publications from the UvT institutional repository and converted them to plain text. We ran the TextCat [23] language identifier to classify the language of the home pages and the full-text publications. We restricted ourselves to pages where the classifier was confident about the language used on the page. This resulted in four document types: research descriptions (RD), course descriptions (CD), publications (PUB; full-text and citation-only versions), and academic homepages (HP). Everything was bundled into the UvT Expert collection, which is available at http://ilk.uvt.nl/uvt-expert-collection/.

                                               Dutch     English
no. of experts                                  1168        1168
no. of experts with ≥ 1 topic                    743         727
no. of topics                                   1491         981
no. of expert-topic pairs                       4318        3251
avg. no. of topics/expert                        5.8         5.9
max. no. of topics/expert (no. of experts)    60 (1)      35 (1)
min. no. of topics/expert (no. of experts)    1 (74)     1 (106)
avg. no. of experts/topic                        2.9         3.3
max. no. of experts/topic (no. of topics)     30 (1)      30 (1)
min. no. of experts/topic (no. of topics)    1 (615)     1 (346)
no. of experts with HP                           318         318
no. of experts with CD                           318         318
avg. no. of CDs per teaching expert              3.5         3.5
no. of experts with RD                           329         313
no. of experts with PUB                          734         734
avg. no. of PUBs per expert                     27.0        27.0
avg. no. of PUB citations per expert            25.2        25.2
avg. no. of full-text PUBs per expert            1.8         1.8

Table 2: Descriptive statistics of the Dutch and English versions of the UvT Expert collection.

The UvT Expert collection was extracted from a different organizational setting than the W3C collection and differs from it in a number of ways. The UvT setting is one with relatively small amounts of multilingual data. Document-author associations are clear and the data is structured and clean. The collection covers a broad range of expertise areas, as one can typically find on intranets of universities and other knowledge-intensive institutes. Additionally, our university setting features several types of structure (topical and
organizational), as well as multiple document types. Another important difference between the two data sets is that the expertise areas in the UvT Expert collection are self-selected instead of being based on group membership or assignments by others. Size is another dimension along which the W3C and UvT Expert collections differ: the latter is the smaller of the two. Also realistic are the large differences in the amount of information available for each expert. Utilizing Webwijs is voluntary; 425 Dutch experts did not select any topics at all. This leaves us with 743 Dutch and 727 English usable expert profiles. Table 2 provides descriptive statistics for the UvT Expert collection.

Universities tend to have a hierarchical structure that goes from the faculty level, to departments, to research groups, down to the individual researchers. In the UvT Expert collection we have information about the affiliations of researchers with faculties and institutes, providing us with a two-level organizational hierarchy. Tilburg University has 22 organizational units at the faculty level (including the university office and several research institutes) and 71 departments, which amounts to 3.2 departments per faculty. As to the topical hierarchy used by Webwijs, 131 of the 1491 topics are top nodes in the hierarchy. This hierarchy has an average topic chain length of 2.65 and a maximum length of 7 topics.

6. EVALUATION

Below, we evaluate Section 4's models for expert finding and profiling on the UvT Expert collection. We detail our research questions and experimental setup, and then present our results.

6.1 Research Questions

We address the following research questions. Both expert finding and profiling rely on the estimation of p(q|ca). The question is how the models compare on the different tasks, and in the setting of the UvT Expert collection. In [4], Model 2 outperformed Model 1 on the W3C collection. How do they compare on our data set? And how does Model 3 compare to Model 1? What about performance differences between the two languages in our test collection?

6.2 Experimental Setup

The output of our models was evaluated against the self-assigned topic labels, which were treated as relevance judgements. Results were evaluated separately for English and Dutch. For English we only used topics for which the Dutch translation was available; for Dutch all topics were considered. The results were averaged for the queries in the intersection of relevance judgements and results; missing queries do not contribute a value of 0 to the scores. We use standard information retrieval measures, such as Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR). We also report the percentage of topics (%q) and candidates (%ca) covered, for the expert finding and profiling tasks, respectively.

6.3 Results

Table 1 shows the performance of Models 1, 2, and 3 on the expert finding and profiling tasks. The rows of the table correspond to the various document types (RD, CD, PUB, and HP) and to their combinations. RD+CD+PUB+HP is equivalent to the full collection and will be referred to as the BASELINE of our experiments.

Looking at Table 1 we see that Model 2 performs the best across the board. However, when the data is clean and very focused (RD), Model 3 outperforms it in a number of cases. Model 1 has the best coverage of candidates (%ca) and topics (%q). The various document types differ in their characteristics and in how they improve the finding and profiling tasks. Expert profiling benefits greatly from the clean data present in the RD and CD document types, while the publications contribute the most to the expert finding task. Adding the homepages does not prove to be particularly useful. When we compare the results across languages, we find that the coverage of English topics (%q) is higher than that of the Dutch ones for expert finding. Apart from that, the scores fall in the same range for both languages. For
the profiling task the coverage of the candidates (% ca) is very similar for both languages.\nHowever, the performance is substantially better for the English topics.\nWhile it is hard to compare scores across collections, we conclude with a brief comparison of the absolute scores in Table 1 to those reported in [3, 4] on the W3C test set (2005 edition).\nFor expert finding the MAP scores for Model 2 reported here are about 50% higher than the corresponding figures in [4], while our MRR scores are slightly below those in [4].\nFor expert profiling, the differences are far more dramatic: the MAP scores for Model 2 reported here are around 50% below the scores in [3], while the (best) MRR scores are about the same as those in [3].\nThe cause for the latter differences seems to reside in the number of knowledge areas considered here--approx.\n30 times more than in the W3C setting.\n7.\nADVANCED MODELS\nNow that we have developed and assessed basic language modeling techniques for expertise retrieval, we turn to refined models that exploit special features of our test collection.\n7.1 Exploiting knowledge area similarity\nOne way to improve the scoring of a query given a candidate is to consider what other requests the candidate would satisfy and use them as further evidence to support the original query, proportional to how related the other requests are to the original query.\nTable 1: Performance of the models on the expert finding and profiling tasks, using different document types and their combinations.\n% q is the number of topics covered (applies to the expert finding task), % ca is the number of candidates covered (applies to the expert profiling task).\nThe top and bottom blocks correspond to English and Dutch respectively.\nThe best scores are in boldface.\nThis can be modeled by interpolating between p(q | ca) and the further supporting evidence from all similar requests q', as follows: p(q | ca) is replaced by λ · p(q | ca) + (1 − λ) · Σ_{q'} p(q | q') · p(q' | ca),\nwhere p(q | q') represents the similarity between the two topics 
q and q'.\nTo be able to work with similarity methods that are not necessarily probabilities, we set p(q | q') = w(q, q') \/ γ, where γ is a normalizing constant, such that γ = Σ_{q''} w(q'', q').\nWe consider four methods for calculating the similarity score between two topics.\nThree approaches are strictly content-based, and establish similarity by examining co-occurrence patterns of topics within the collection, while the last approach exploits the hierarchical structure of topical areas that may be present within an organization (see [7] for further examples of integrating word relationships into language models).\nThe Kullback-Leibler (KL) divergence metric defined in Eq.\n8 provides a measure of how different or similar two probability distributions are.\nA topic model is inferred for q and q' using the method presented in Section 4.1 to describe the query across the entire vocabulary.\nSince a lower KL score means the queries are more similar, we let w(q, q') = max_{q''} KL(θ_q || θ_{q''}) − KL(θ_q || θ_{q'}).\nPointwise Mutual Information (PMI, [17]) is a measure of association used in information theory to determine the extent of independence between variables.\nThe dependence between two queries is reflected by the SI(q, q') score, where scores greater than zero indicate that it is likely that there is a dependence, which we take to mean that the queries are likely to be similar: SI(q, q') = log (p(q, q') \/ (p(q) · p(q'))).\nWe estimate the probability of a topic p(q) using the number of documents relevant to query q within the collection.\nThe joint probability p(q, q') is estimated similarly, by using the concatenation of q and q' as a query.\nTo obtain p(q | q'), we then set w(q, q') = SI(q, q') when SI(q, q') > 0 and w(q, q') = 0 otherwise, because we are only interested in including queries that are similar.\nThe log-likelihood statistic provides another measure of dependence, which is more reliable than the pointwise mutual information measure [17].\nLet k1 be the number of 
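As a concrete illustration of the PMI-based weighting and its normalization into p(q | q'), the following sketch estimates the probabilities from toy document counts. This is a minimal illustration under our own assumptions; the function names and counts are ours, not from the paper.

```python
import math

def pmi(n_q, n_q2, n_joint, n_docs):
    # SI(q, q') = log( p(q, q') / (p(q) * p(q')) ), with probabilities
    # estimated from document counts, as described in Section 7.1.
    return math.log((n_joint / n_docs) / ((n_q / n_docs) * (n_q2 / n_docs)))

def similarity_distribution(q_prime, candidates, count, n_docs):
    # w(q, q') = SI(q, q') if SI(q, q') > 0 else 0; then
    # p(q | q') = w(q, q') / gamma, gamma = sum over q'' of w(q'', q').
    w = {}
    for q in candidates:
        si = pmi(count[q], count[q_prime], count[(q, q_prime)], n_docs)
        w[q] = si if si > 0 else 0.0
    gamma = sum(w.values()) or 1.0
    return {q: v / gamma for q, v in w.items()}

# Toy counts over 100 documents: "search" co-occurs often with "ir",
# "biology" co-occurs less often than chance would predict.
count = {"ir": 20, "search": 10, "biology": 10,
         ("search", "ir"): 8, ("biology", "ir"): 1}
dist = similarity_distribution("ir", ["search", "biology"], count, 100)
```

Note how the negative-PMI pair is clamped to zero before normalization, so only topics that are positively associated with q' contribute supporting evidence.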
co-occurrences of q and q', k2 the number of occurrences of q not co-occurring with q', n1 the total number of occurrences of q', and n2 the total number of topic tokens minus the number of occurrences of q'.\nThen, let LL(q, q') = 2 × (ℓ(k1\/n1, n1, k1) + ℓ(k2\/n2, n2, k2) − ℓ(p, n1, k1) − ℓ(p, n2, k2)), where p = (k1 + k2)\/(n1 + n2) and ℓ(p, n, k) = k log p + (n − k) log(1 − p).\nA higher LL score indicates that the queries are more likely to be similar; thus we set w(q, q') = LL(q, q').\nFinally, we also estimate the similarity of two topics based on their distance within the topic hierarchy.\nThe topic hierarchy is viewed as a directed graph, and for all topic-pairs the shortest path SP(q, q') is calculated.\nWe set the similarity score to be the reciprocal of the shortest path: w(q, q') = 1\/SP(q, q').\n7.2 Contextual information\nGiven the hierarchy of an organization, the units to which a person belongs are regarded as a context, so as to compensate for data sparseness.\nWe model it as follows:\nwhere OU(ca) is the set of organizational units of which candidate ca is a member, and p(q | ou) expresses the strength of the association between query q and the unit ou.\nThe latter probability can be estimated using any of the three basic models, by simply replacing ca with ou in the corresponding equations.\nAn organizational unit is associated with all the documents that its members have authored.\nThat is, p(d | ou) = max_{ca ∈ ou} p(d | ca).\n7.3 A simple multilingual model\nFor knowledge institutes in Europe, academic or otherwise, a multilingual (or at least bilingual) setting is typical.\nThe following model builds on a kind of independence assumption: there is no spill-over of expertise\/profiles across language boundaries.\nWhile a simplification, this assumption allows the monolingual scores to be combined as p(q | ca) = Σ_{l ∈ L} λ_l · p(q_l | ca), where L is the set of languages used in the collection, q_l is the translation of the query q to language l, and λ_l is a language-specific smoothing parameter, such that Σ_{l ∈ L} λ_l = 1.\n8.\nADVANCED MODELS: EVALUATION\nIn this section we present an experimental evaluation of our advanced 
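The log-likelihood statistic above (Dunning's test) can be sketched directly from the definitions; this is a minimal illustration, with function names of our own choosing.

```python
import math

def _l(p, n, k):
    # l(p, n, k) = k log p + (n - k) log(1 - p)
    return k * math.log(p) + (n - k) * math.log(1 - p)

def log_likelihood(k1, n1, k2, n2):
    # -2 log lambda for the dependence between q and q':
    # k1 = co-occurrences of q and q', n1 = occurrences of q',
    # k2 = occurrences of q without q', n2 = remaining topic tokens.
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)
    return 2 * (_l(p1, n1, k1) + _l(p2, n2, k2)
                - _l(p, n1, k1) - _l(p, n2, k2))
```

When q occurs at the same rate with and without q' (p1 = p2), the statistic is 0; the more the two rates diverge, the larger the score, and hence the larger the similarity weight w(q, q').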
models.\nTable 3: Performance of the combination of languages on the expert finding and profiling tasks (on candidates).\nBest scores for each model are in italic, absolute best scores for the expert finding and profiling tasks are in boldface.\nTable 4: Performance on the expert finding (top) and profiling (bottom) tasks, using knowledge area similarities.\nRuns were evaluated on the main topics set.\nBest scores are in boldface.\n8.1 Research Questions\nOur questions follow the refinements presented in the preceding section: Does exploiting the knowledge area similarity improve effectiveness?\nWhich of the various methods for capturing word relationships is most effective?\nFurthermore, is our way of bringing in contextual information useful?\nFor which tasks?\nAnd finally, is our simple way of combining the monolingual scores sufficient for obtaining significant improvements?\n8.2 Experimental setup\nGiven that the self-assessments are also sparse in our collection, in order to be able to measure differences between the various models, we selected a subset of topics, and evaluated (some of the) runs only on this subset.\nThis set is referred to as main topics, and consists of topics that are located at the top level of the topical hierarchy.\n(A main topic has subtopics, but is not a subtopic of any other topic.)\nThis main set consists of 132 Dutch and 119 English topics.\nThe relevance judgements were restricted to the main topic set, but were not expanded with subtopics.\n8.3 Exploiting knowledge area similarity\nTable 4 presents the results.\nThe four methods used for estimating knowledge-area similarity are KL divergence (KLDIV), Pointwise mutual information (PMI), log-likelihood (LL), and distance within topic hierarchy (HDIST).\nExpert finding\nTable 5: Evaluating the context models on organizational units.\nWe managed to improve upon the baseline in all cases, but the improvement is more noticeable for the profiling task.\nFor both tasks, the LL method 
performed best.\nThe content-based approaches performed consistently better than HDIST.\n8.4 Contextual information\nA two-level hierarchy of organizational units (faculties and institutes) is available in the UvT Expert collection.\nThe unit a person belongs to is used as a context for that person.\nFirst, we evaluated the models of the organizational units, using all topics (ALL) and only the main topics (MAIN).\nAn organizational unit is considered to be relevant for a given topic (or vice versa) if at least one member of the unit selected the given topic as an expertise area.\nTable 5 reports on the results.\nAs far as expert finding goes, given a topic, the corresponding organizational unit can be identified with high precision.\nHowever, the expert profiling task shows a different picture: the scores are low, and the task seems hard.\nThe explanation may be that general concepts (i.e., our main topics) may belong to several organizational units.\nSecond, we performed another evaluation, where we combined the contextual models with the candidate models (to score candidates again).\nTable 6 reports on the results.\nWe find a positive impact of the context models only for expert finding.\nNoticeably, for expert finding (and Model 1), it improves MAP by over 50% (for English) and over 70% (for Dutch).\nThe poor performance on expert profiling may be due to the fact that context models alone did not perform very well on the profiling task to begin with.\n8.5 Multilingual models\nIn this subsection we evaluate the method for combining results across multiple languages that we described in Section 7.3.\nIn our setting the set of languages consists of English and Dutch: L = {UK, NL}.\nThe weights on these languages were set to be identical (λUK = λNL = 0.5).\nWe performed experiments with various λ settings, but did not observe significant differences in performance.\nTable 3 reports on the multilingual results, where performance is evaluated on the 
full topic set.\nTable 6: Performance of the context models (CT) compared to the baseline (BL).\nBest scores are in boldface.\nAll three models significantly improved over all measures for both tasks.\nThe coverage of topics and candidates for the expert finding and profiling tasks, respectively, is close to 100% in all cases.\nThe relative improvement of the precision scores ranges from 10% to 80%.\nThese scores demonstrate that despite its simplicity, our method for combining results over multiple languages achieves substantial improvements over the baseline.\n9.\nCONCLUSIONS\nIn this paper we focused on expertise retrieval (expert finding and profiling) in a new setting of a typical knowledge-intensive organization in which the available data is of high quality, multilingual, and covering a broad range of expertise areas.\nTypically, the amount of available data in such an organization (e.g., a university, a research institute, or a research lab) is limited when compared to the W3C collection that has mostly been used for the experimental evaluation of expertise retrieval so far.\nTo examine expertise retrieval in this setting, we introduced (and released) the UvT Expert collection as a representative case of such knowledge-intensive organizations.\nThe new collection reflects the typical properties of knowledge-intensive institutes noted above and also includes several features which are potentially useful for expertise retrieval, such as topical and organizational structure.\nWe evaluated how current state-of-the-art models for expert finding and profiling performed in this new setting and then refined these models in order to exploit the different characteristics within the data environment (language, topicality, and organizational structure).\nWe found that current models of expertise retrieval generalize well to this new environment; in addition we found that refining the models to account for the differences results in significant 
improvements, thus making up for problems caused by data sparseness issues.\nFuture work includes setting up manual assessments of automatically generated profiles by the employees themselves, especially in cases where the employees have not provided a profile themselves.","keyphrases":["broad expertis retriev","spars data environ","gener languag model","languag model","baselin expertis retriev method","organiz structur","intranet search","expert colleagu","trec enterpris track","expert find task","co-occurr","topic and organiz structur","bay' theorem","baselin model","expertis search","expert find"],"prmu":["P","P","P","P","P","P","M","U","U","M","U","R","U","R","M","M"]} {"id":"H-52","title":"Vocabulary Independent Spoken Term Detection","abstract":"We are interested in retrieving information from speech data like broadcast news, telephone conversations and roundtable meetings. Today, most systems use large vocabulary continuous speech recognition tools to produce word transcripts; the transcripts are indexed and query terms are retrieved from the index. However, query terms that are not part of the recognizer's vocabulary cannot be retrieved, and the recall of the search is affected. In addition to the output word transcript, advanced systems provide also phonetic transcripts, against which query terms can be matched phonetically. Such phonetic transcripts suffer from lower accuracy and cannot be an alternative to word transcripts. We present a vocabulary independent system that can handle arbitrary queries, exploiting the information provided by having both word transcripts and phonetic transcripts. A speech recognizer generates word confusion networks and phonetic lattices. The transcripts are indexed for query processing and ranking purpose. 
The value of the proposed method is demonstrated by the relatively high performance of our system, which received the highest overall ranking for US English speech data in the recent NIST Spoken Term Detection evaluation [1].","lvl-1":"Vocabulary Independent Spoken Term Detection Jonathan Mamou IBM Haifa Research Labs Haifa 31905, Israel mamou@il.ibm.com Bhuvana Ramabhadran, Olivier Siohan IBM T. J. Watson Research Center Yorktown Heights, N.Y. 10598, USA {bhuvana,siohan}@us.ibm.com ABSTRACT We are interested in retrieving information from speech data like broadcast news, telephone conversations and roundtable meetings.\nToday, most systems use large vocabulary continuous speech recognition tools to produce word transcripts; the transcripts are indexed and query terms are retrieved from the index.\nHowever, query terms that are not part of the recognizer's vocabulary cannot be retrieved, and the recall of the search is affected.\nIn addition to the output word transcript, advanced systems also provide phonetic transcripts, against which query terms can be matched phonetically.\nSuch phonetic transcripts suffer from lower accuracy and cannot be an alternative to word transcripts.\nWe present a vocabulary independent system that can handle arbitrary queries, exploiting the information provided by having both word transcripts and phonetic transcripts.\nA speech recognizer generates word confusion networks and phonetic lattices.\nThe transcripts are indexed for query processing and ranking purposes.\nThe value of the proposed method is demonstrated by the relatively high performance of our system, which received the highest overall ranking for US English speech data in the recent NIST Spoken Term Detection evaluation [1].\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval General Terms Algorithms 1.\nINTRODUCTION The rapidly increasing amount of spoken data calls for solutions to index and search this 
data.\nThe classical approach consists of converting the speech to word transcripts using a large vocabulary continuous speech recognition (LVCSR) tool.\nIn the past decade, most of the research efforts on spoken data retrieval have focused on extending classical IR techniques to word transcripts.\nSome of these works have been done in the framework of the NIST TREC Spoken Document Retrieval tracks and are described by Garofolo et al. [12].\nThese tracks focused on retrieval from a corpus of broadcast news stories spoken by professionals.\nOne of the conclusions of those tracks was that the effectiveness of retrieval mostly depends on the accuracy of the transcripts.\nWhile the accuracy of automatic speech recognition (ASR) systems depends on the scenario and environment, state-of-the-art systems achieved better than 90% accuracy in transcription of such data.\nIn 2000, Garofolo et al. concluded that Spoken document retrieval is a solved problem [12].\nHowever, a significant drawback of such approaches is that search on queries containing out-of-vocabulary (OOV) terms will not return any results.\nOOV terms are missing words from the ASR system vocabulary and are replaced in the output transcript by alternatives that are probable, given the recognition acoustic model and the language model.\nIt has been experimentally observed that over 10% of user queries can contain OOV terms [16], as queries often relate to named entities that typically have a poor coverage in the ASR vocabulary.\nThe effects of OOV query terms in spoken data retrieval are discussed by Woodland et al. 
[28].\nIn many applications the OOV rate may get worse over time unless the recognizer's vocabulary is periodically updated.\nAnother approach consists of converting the speech to phonetic transcripts and representing the query as a sequence of phones.\nThe retrieval is based on searching the sequence of phones representing the query in the phonetic transcripts.\nThe main drawback of this approach is the inherent high error rate of the transcripts.\nTherefore, such an approach cannot be an alternative to word transcripts, especially for in-vocabulary (IV) query terms that are part of the vocabulary of the ASR system.\nA solution would be to combine the two different approaches presented above: we index both word transcripts and phonetic transcripts; during query processing, the information is retrieved from the word index for IV terms and from the phonetic index for OOV terms.\nWe would also like to be able to process hybrid queries, i.e., queries that include both IV and OOV terms.\nConsequently, we need to merge pieces of information retrieved from the word index and the phonetic index.\nProximity information on the occurrences of the query terms is required for phrase search and for proximity-based ranking.\nIn classical IR, the index stores for each occurrence of a term, its offset.\nTherefore, we cannot merge posting lists retrieved by the phonetic index with those retrieved by the word index, since the offsets of the occurrences retrieved from the two different indices are not comparable.\nThe only elements of comparison between phonetic and word transcripts are the timestamps.\nNo previous work combining the word and phonetic approaches has been done on phrase search.\nWe present a novel scheme for information retrieval that consists of storing, during the indexing process, for each unit of indexing (phone or word) its timestamp.\nWe search queries by merging the information retrieved from the two different indices, word index and phonetic index, according to the timestamps of the 
query terms.\nWe analyze the retrieval effectiveness of this approach on the NIST Spoken Term Detection 2006 evaluation data [1].\nThe paper is organized as follows.\nWe describe the audio processing in Section 2.\nThe indexing and retrieval methods are presented in section 3.\nExperimental setup and results are given in Section 4.\nIn Section 5, we give an overview of related work.\nFinally, we conclude in Section 6.\n2.\nAUTOMATIC SPEECH RECOGNITION SYSTEM We use an ASR system for transcribing speech data.\nIt works in speaker-independent mode.\nFor best recognition results, a speaker-independent acoustic model and a language model are trained in advance on data with similar characteristics.\nTypically, ASR generates lattices that can be considered as directed acyclic graphs.\nEach vertex in a lattice is associated with a timestamp and each edge (u, v) is labeled with a word or phone hypothesis and its prior probability, which is the probability of the signal delimited by the timestamps of the vertices u and v, given the hypothesis.\nThe 1-best path transcript is obtained from the lattice using dynamic programming techniques.\nMangu et al. [18] and Hakkani-Tur et al. 
[13] propose a compact representation of a word lattice called word confusion network (WCN).\nEach edge (u, v) is labeled with a word hypothesis and its posterior probability, i.e., the probability of the word given the signal.\nOne of the main advantages of WCN is that it also provides an alignment for all of the words in the lattice.\nAs explained in [13], the three main steps for building a WCN from a word lattice are as follows: 1.\nCompute the posterior probabilities for all edges in the word lattice.\n2.\nExtract a path from the word lattice (which can be the 1-best, the longest or any random path), and call it the pivot path of the alignment.\n3.\nTraverse the word lattice, and align all the transitions with the pivot, merging the transitions that correspond to the same word (or label) and occur in the same time interval by summing their posterior probabilities.\nThe 1-best path of a WCN is obtained from the path containing the best hypotheses.\nAs stated in [18], although WCNs are more compact than word lattices, in general the 1-best path obtained from WCN has a better word accuracy than the 1-best path obtained from the corresponding word lattice.\nTypical structures of a lattice and a WCN are given in Figure 1.\nFigure 1: Typical structures of a lattice and a WCN.\n3.\nRETRIEVAL MODEL The main problem with retrieving information from spoken data is the low accuracy of the transcription particularly on terms of interest such as named entities and content words.\nGenerally, the accuracy of a word transcript is characterized by its word error rate (WER).\nThere are three kinds of errors that can occur in a transcript: substitution of a term that is part of the speech by another term, deletion of a spoken term that is part of the speech and insertion of a term that is not part of the speech.\nSubstitutions and deletions reflect the fact that an occurrence of a term in the speech signal is not recognized.\nThese misses reduce the recall of the 
search.\nSubstitutions and insertions reflect the fact that a term which is not part of the speech signal appears in the transcript.\nThese false alarms reduce the precision of the search.\nSearch recall can be enhanced by expanding the transcript with extra words.\nThese words can be taken from the other alternatives provided by the WCN; these alternatives may have been spoken but were not the top choice of the ASR.\nSuch an expansion tends to correct the substitutions and the deletions and consequently, might improve recall but will probably reduce precision.\nUsing an appropriate ranking model, we can avoid the decrease in precision.\nMamou et al. [17] have presented the enhancement in recall and MAP obtained by searching on WCNs instead of considering only the 1-best path word transcript, in the context of spoken document retrieval.\nWe have adapted this model of IV search to term detection.\nIn word transcripts, OOV terms are deleted or substituted.\nTherefore, the usage of phonetic transcripts is more desirable.\nHowever, due to their low accuracy, we have preferred to use only the 1-best path extracted from the phonetic lattices.\nWe will show that the usage of phonetic transcripts tends to improve the recall without affecting the precision too much, using an appropriate ranking.\n3.1 Spoken document detection task As stated in the STD 2006 evaluation plan [2], the task consists of finding all the exact matches of a specific query in a given corpus of speech data.\nA query is a phrase containing several words.\nThe queries are text and not speech.\nNote that this task is different from the more classical task of spoken document retrieval.\nManual transcripts of the speech are not provided but are used by the evaluators to find true occurrences.\nBy definition, true occurrences of a query are found automatically by searching the manual transcripts using the following rule: the gap between adjacent words in a query must be less than 0.5 seconds in the corresponding 
speech.\nFor evaluating the results, each system output occurrence is judged as correct or not according to whether it is close in time to a true occurrence of the query retrieved from manual transcripts; it is judged as correct if the midpoint of the system output occurrence is less than or equal to 0.5 seconds from the time span of a true occurrence of the query.\n3.2 Indexing We have used the same indexing process for WCN and phonetic transcripts.\nEach occurrence of a unit of indexing (word or phone) u in a transcript D is indexed with the following information: \u2022 the begin time t of the occurrence of u, \u2022 the duration d of the occurrence of u.\nIn addition, for WCN indexing, we store \u2022 the confidence level of the occurrence of u at the time t that is evaluated by its posterior probability Pr(u|t, D), \u2022 the rank of the occurrence of u among the other hypotheses beginning at the same time t, rank(u|t, D).\nNote that since the task is to find exact matches of the phrase queries, we have not filtered stopwords and the corpus is not stemmed before indexing.\n3.3 Search In the following, we present our approach for accomplishing the STD task using the indices described above.\nThe terms are extracted from the query.\nThe vocabulary of the ASR system building word transcripts is given.\nTerms that are part of this vocabulary are IV terms; the other terms are OOV.\nFor an IV query term, the posting list is extracted from the word index.\nFor an OOV query term, the term is converted to a sequence of phones using a joint maximum entropy N-gram model [10].\nFor example, the term prosody is converted to the sequence of phones (p, r, aa, z, ih, d, iy).\nThe posting list of each phone is extracted from the phonetic index.\nThe next step consists of merging the different posting lists according to the timestamp of the occurrences in order to create results matching the query.\nFirst, we check that the words and phones appear in the right order according 
to their begin times.\nSecond, we check that the gap in time between adjacent words and phones is reasonable.\nConforming to the requirements of the STD evaluation, the distance in time between two adjacent query terms must be less than 0.5 seconds.\nFor OOV search, we check that the distance in time between two adjacent phones of a query term is less than 0.2 seconds; this value has been determined empirically.\nIn this way, we can reduce the effect of insertion errors, since we allow insertions between the adjacent words and phones.\nOur query processing does not allow substitutions and deletions.\nExample: Let us consider the phrase query prosody research.\nThe term prosody is OOV and the term research is IV.\nThe term prosody is converted to the sequence of phones (p, r, aa, z, ih, d, iy).\nThe posting list of each phone is extracted from the phonetic index.\nWe merge the posting lists of the phones such that the sequence of phones appears in the right order and the gap in time between the pairs of phones (p, r), (r, aa), (aa, z), (z, ih), (ih, d), (d, iy) is less than 0.2 seconds.\nWe obtain occurrences of the term prosody.\nThe posting list of research is extracted from the word index and we merge it with the occurrences found for prosody such that they appear in the right order and the distance in time between prosody and research is less than 0.5 seconds.\nNote that our indexing model allows searching for different types of queries: 1.\nqueries containing only IV terms using the word index.\n2.\nqueries containing only OOV terms using the phonetic index.\n3.\nkeyword queries containing both IV and OOV terms using the word index for IV terms and the phonetic index for OOV terms; for query processing, the different sets of matches are unified if the query terms have OR semantics and intersected if the query terms have AND semantics.\n4.\nphrase queries containing both IV and OOV terms; for query processing, the posting lists of the IV terms retrieved from 
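The timestamp-based merging of posting lists (units in phrase order, with bounded gaps between adjacent units) can be sketched as follows. This is a simplified stand-in for the real index: posting entries are plain (begin time, duration) pairs, and the data is illustrative.

```python
def merge_in_order(postings, max_gap):
    # postings: one list of (begin_time, duration) per query unit,
    # in phrase order.  Keep sequences in which each unit starts after
    # the previous one ends, with a gap below max_gap (0.5 s between
    # adjacent words, 0.2 s between adjacent phones of an OOV term).
    results = []

    def extend(prefix, idx):
        if idx == len(postings):
            results.append(prefix)
            return
        t_prev, d_prev = prefix[-1]
        for t, d in postings[idx]:
            gap = t - (t_prev + d_prev)
            if 0 <= gap < max_gap:
                extend(prefix + [(t, d)], idx + 1)

    for first in postings[0]:
        extend([first], 1)
    return results

# Phrase "prosody research": occurrences of prosody, then of research.
# Only the second-term occurrence within 0.5 s survives the merge.
matches = merge_in_order([[(10.0, 0.5)], [(10.6, 0.4), (13.0, 0.4)]],
                         max_gap=0.5)
```

Because both the word index and the phonetic index store timestamps, the same merge works whether a unit came from the word index (IV terms) or was assembled from phones (OOV terms).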
the word index are merged with the posting lists of the OOV terms retrieved from the phonetic index.\nThe merging is possible since we have stored the timestamps for each unit of indexing (word and phone) in both indices.\nThe STD evaluation has focused on the fourth query type.\nIt is the hardest task since we need to combine posting lists retrieved from phonetic and word indices.\n3.4 Ranking Since IV terms and OOV terms are retrieved from two different indices, we propose two different functions for scoring an occurrence of a term; afterward, an aggregate score is assigned to the query based on the scores of the query terms.\nBecause the task is term detection, we do not use a document frequency criterion for ranking the occurrences.\nLet us consider a query Q = (k0, ..., kn), associated with a boosting vector B = (B1, ..., Bj).\nThis vector associates a boosting factor to each rank of the different hypotheses; the boosting factors are normalized between 0 and 1.\nIf the rank r is larger than j, we assume Br = 0.\n3.4.1 In vocabulary term ranking For IV term ranking, we extend the work of Mamou et al. 
[17] on spoken document retrieval to term detection.\nWe use the information provided by the word index.\nWe define the score score(k, t, D) of a keyword k occurring at a time t in the transcript D, by the following formula: score(k, t, D) = B_rank(k|t,D) × Pr(k|t, D).\nNote that 0 ≤ score(k, t, D) ≤ 1.\n3.4.2 Out of vocabulary term ranking For OOV term ranking, we use the information provided by the phonetic index.\nWe give a higher rank to occurrences of OOV terms that contain phones close (in time) to each other.\nWe define a scoring function that is related to the average gap in time between the different phones.\nLet us consider a keyword k converted to the sequence of phones (p_0, ..., p_l).\nWe define the normalized score score(k, t_0, D) of a keyword k = (p_0, ..., p_l), where each p_i occurs at time t_i with a duration of d_i in the transcript D, by the following formula: score(k, t_0, D) = 1 − (Σ_{i=1}^{l} 5 × (t_i − (t_{i−1} + d_{i−1}))) \/ l.\nNote that according to what we have explained in Section 3.3, we have, for all 1 ≤ i ≤ l, 0 < t_i − (t_{i−1} + d_{i−1}) < 0.2 sec, hence 0 < 5 × (t_i − (t_{i−1} + d_{i−1})) < 1, and consequently, 0 < score(k, t_0, D) ≤ 1.\nThe duration of the keyword occurrence is t_l − t_0 + d_l.\nExample: let us consider the sequence (p, r, aa, z, ih, d, iy) and two different occurrences of the sequence.\nFor each phone, we give the begin time and the duration in seconds.\nOccurrence 1: (p, 0.25, 0.01), (r, 0.36, 0.01), (aa, 0.37, 0.01), (z, 0.38, 0.01), (ih, 0.39, 0.01), (d, 0.4, 0.01), (iy, 0.52, 0.01).\nOccurrence 2: (p, 0.45, 0.01), (r, 0.46, 0.01), (aa, 0.47, 0.01), (z, 0.48, 0.01), (ih, 0.49, 0.01), (d, 0.5, 0.01), (iy, 0.51, 0.01).\nAccording to our formula, the score of the first occurrence is 0.83 and the score of the second occurrence is 1.\nIn the first occurrence, there is probably some insertion or silence between the phones p 
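The gap-based OOV scoring and the worked example above can be checked directly. A minimal sketch; the function name is ours, and phones are represented as (begin time, duration) pairs.

```python
def oov_score(phones):
    # phones: (begin_time, duration) per phone of the keyword, in order.
    # score = 1 - (sum over i of 5 * (t_i - (t_{i-1} + d_{i-1}))) / l,
    # where l is the number of gaps between the l + 1 phones.
    l = len(phones) - 1
    penalty = sum(5 * (phones[i][0] - (phones[i - 1][0] + phones[i - 1][1]))
                  for i in range(1, len(phones)))
    return 1 - penalty / l

# The two occurrences of (p, r, aa, z, ih, d, iy) from the example:
occ1 = [(0.25, 0.01), (0.36, 0.01), (0.37, 0.01), (0.38, 0.01),
        (0.39, 0.01), (0.40, 0.01), (0.52, 0.01)]
occ2 = [(0.45, 0.01), (0.46, 0.01), (0.47, 0.01), (0.48, 0.01),
        (0.49, 0.01), (0.50, 0.01), (0.51, 0.01)]
```

Here oov_score(occ1) evaluates to 0.825 (0.83 after rounding, matching the text) and oov_score(occ2) to 1: the two large gaps in the first occurrence, after p and after d, are what pull its score down.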
and r, and between the phones d and iy.\nThe silence can be due to the fact that the phones belong to two different words and therefore, it is not an occurrence of the term prosody.\n3.4.3 Combination The score of an occurrence of a query Q at time t0 in the document D is determined by multiplying the scores of each keyword ki, where each ki occurs at time ti with a duration di in the transcript D: score(Q, t0, D) = (Π_{i=0}^{n} score(ki, ti, D))^γn.\nNote that according to what we have explained in Section 3.3, we have, for all 1 ≤ i ≤ n, 0 < ti − (ti−1 + di−1) < 0.5 sec.\nOur goal is to estimate, for each found occurrence, how likely it is that the query appears.\nThis differs from classical IR, which aims to rank the results and not to score them.\nSince the probability of a false alarm is inversely proportional to the length of the phrase query, we have boosted the score of queries by an exponent γn that is related to the number of keywords in the phrase.\nWe have determined empirically the value of γn = 1\/n.\nThe begin time of the query occurrence is determined by the begin time t0 of the first query term and the duration of the query occurrence by tn − t0 + dn.\n4.\nEXPERIMENTS 4.1 Experimental setup Our corpus consists of the evaluation set provided by NIST for the STD 2006 evaluation [1].\nIt includes three different source types in US English: three hours of broadcast news (BNEWS), three hours of conversational telephony speech (CTS) and two hours of conference room meetings (CONFMTG).\nAs shown in Section 4.2, these different collections have different accuracies.\nCTS and CONFMTG are spontaneous speech.\nFor the experiments, we have processed the query set provided by NIST that includes 1100 queries.\nEach query is a phrase containing between one and five terms, common and rare terms, terms that are in the manual transcripts and those that are not.\nTesting and determination of empirical values have been achieved on 
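The combination step amounts to a boosted geometric-style mean of the per-term scores. In this sketch we read γn = 1/n with n taken as the number of keywords in the phrase, a reading the text leaves slightly ambiguous; the function name is ours.

```python
def query_score(term_scores):
    # score(Q) = (product of per-term scores) ** gamma_n, with
    # gamma_n = 1/n; here n is taken as the number of keywords
    # (an assumption -- the paper indexes keywords k_0..k_n).
    product = 1.0
    for s in term_scores:
        product *= s
    return product ** (1.0 / len(term_scores))
```

The exponent keeps longer phrases from being penalized for having more factors: two terms scoring 0.5 each combine to 0.5 rather than 0.25, reflecting that false alarms are less likely for long phrase queries.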
We have used the IBM research prototype ASR system, described in [26], for transcribing speech data. We have produced WCNs for the three different source types. 1-best phonetic transcripts were generated only for BNEWS and CTS, since CONFMTG phonetic transcripts have too low an accuracy. We have adapted Juru [7], a full-text search library written in Java, to index the transcripts and to store the timestamps of the words and phones; search results have been retrieved as described in Section 3. For each found occurrence of the given query, our system outputs: the location of the term in the audio recording (begin time and duration), the score indicating how likely the occurrence of the query is (as defined in Section 3.4), and a hard (binary) decision as to whether the detection is correct.

We measure precision and recall by comparing the results obtained over the automatic transcripts (only the results having a true hard decision) to the results obtained over the reference manual transcripts. Our aim is to evaluate the ability of the suggested retrieval approach to handle transcribed speech data; thus, the closer the automatic results are to the manual results, the better the search effectiveness over the automatic transcripts. The results returned from the manual transcription for a given query are considered relevant and are expected to be retrieved with the highest scores. This approach of measuring search effectiveness using manual data as a reference is very common in speech retrieval research [25, 22, 8, 9, 17].

Besides recall and precision, we use the evaluation measures defined by NIST for the 2006 STD evaluation [2]: the Actual Term-Weighted Value (ATWV) and the Maximum Term-Weighted Value (MTWV). The term-weighted value (TWV) is computed by first computing the miss and false alarm probabilities for each query separately, then using these and an (arbitrarily chosen) prior probability to compute query-specific values, and finally averaging these query-specific values over all queries q to produce an overall system value:

TWV(θ) = 1 − average_q { P_miss(q, θ) + β × P_FA(q, θ) }

where β = (C/V) × (Pr_q^{−1} − 1) and θ is the detection threshold. For the evaluation, the cost/value ratio C/V has been set to 0.1 and the prior probability Pr_q of a query to 10^{−4}; therefore, β = 999.9. Miss and false alarm probabilities for a given query q are functions of θ:

P_miss(q, θ) = 1 − N_correct(q, θ) / N_true(q)
P_FA(q, θ) = N_spurious(q, θ) / N_NT(q)

where:
• N_correct(q, θ) is the number of correct detections (retrieved by the system) of the query q with a score greater than or equal to θ.
• N_spurious(q, θ) is the number of spurious detections of the query q with a score greater than or equal to θ.
• N_true(q) is the number of true occurrences of the query q in the corpus.
• N_NT(q) is the number of opportunities for incorrect detection of the query q in the corpus, i.e., the number of non-target query trials. It is defined by N_NT(q) = T_speech − N_true(q), where T_speech is the total amount of speech in the collection (in seconds).

corpus        WER(%)  SUBR(%)  DELR(%)  INSR(%)
BNEWS WCN     12.7    49       42       9
CTS WCN       19.6    51       38       11
CONFMTG WCN   47.4    47       49       3

Table 1: WER and distribution of the error types over the word 1-best path extracted from WCNs for the different source types.

ATWV is the actual term-weighted value: the detection value attained by the system, given the system output and the binary decision output for each putative occurrence. It ranges from −∞ to +1. MTWV is the maximum term-weighted value over the range of all possible values of θ; it ranges from 0 to +1. We have also provided the detection error tradeoff (DET) curve [19] of miss probability (P_miss) vs. false alarm probability (P_FA).
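The TWV computation can be sketched as follows. This is a minimal illustration under the formulas above, not the official STDEval implementation; the function and variable names are ours, and the per-query counts are assumed to come from scoring system output against the reference.

```python
# Sketch of the term-weighted value (TWV) at a fixed threshold theta.

def beta(cost_over_value=0.1, prior=1e-4):
    # beta = (C/V) * (1/Pr_q - 1); with the NIST STD 2006 settings
    # (C/V = 0.1, Pr_q = 10^-4) this evaluates to 999.9.
    return cost_over_value * (1.0 / prior - 1.0)

def twv(per_query_counts, t_speech, b):
    """per_query_counts: list of (n_correct, n_spurious, n_true) tuples,
    one per query, counted at the detection threshold theta.
    t_speech: total amount of speech in the collection, in seconds."""
    values = []
    for n_correct, n_spurious, n_true in per_query_counts:
        p_miss = 1.0 - n_correct / n_true
        p_fa = n_spurious / (t_speech - n_true)  # N_NT(q) = T_speech - N_true(q)
        values.append(p_miss + b * p_fa)
    return 1.0 - sum(values) / len(values)

# A perfect system (all true occurrences found, nothing spurious)
# attains the maximum TWV of +1.
print(twv([(5, 0, 5), (3, 0, 3)], t_speech=3600, b=beta()))  # 1.0
```

Because β is so large, even a handful of spurious detections can dominate the average, which is why the hard-decision threshold discussed below matters.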
We have used the STDEval tool to extract the relevant results from the manual transcripts and to compute ATWV, MTWV and the DET curve. We have determined empirically the following values for the boosting vector defined in Section 3.4: B_i = 1/i.

4.2 WER analysis

We use the word error rate (WER) to characterize the accuracy of the transcripts. WER is defined as follows:

WER = (S + D + I) / N × 100

where N is the total number of words in the corpus, and S, I, and D are the total numbers of substitution, insertion, and deletion errors, respectively. The substitution error rate (SUBR) is defined by

SUBR = S / (S + D + I) × 100

and the deletion error rate (DELR) and insertion error rate (INSR) are defined in a similar manner. Table 1 gives the WER and the distribution of the error types over 1-best path transcripts extracted from WCNs. The WER of the 1-best path phonetic transcripts is approximately two times worse than the WER of the word transcripts; that is the reason why we have not retrieved from phonetic transcripts on CONFMTG speech data.

4.3 Theta threshold

We have determined empirically a detection threshold θ per source type; the hard decision of the occurrences having a score less than θ is set to false. False occurrences returned by the system are not considered as retrieved and, therefore, are not used for computing ATWV, precision and recall. The value of the threshold θ per source type is reported in Table 2. It is correlated with the accuracy of the transcripts: setting a threshold aims to eliminate false alarms from the retrieved occurrences without adding misses, so the higher the WER, the higher the θ threshold should be.

    BNEWS  CTS   CONFMTG
θ   0.4    0.61  0.91

Table 2: Values of the θ threshold per source type.

4.4 Processing resource profile

We report in Table 3 the processing resource profile. Concerning the index size, note that our index is compressed using IR index compression techniques.
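The per-source hard decision of Section 4.3 can be sketched as follows. This is a minimal illustration with names of our own choosing; the thresholds are the values of Table 2.

```python
# Hard decision of Section 4.3: occurrences scoring below the
# empirically determined theta of their source type are marked false
# and excluded from ATWV, precision and recall.

THETA = {"BNEWS": 0.4, "CTS": 0.61, "CONFMTG": 0.91}  # Table 2

def hard_decision(occurrences):
    """occurrences: list of (source_type, score) pairs. Keeps only
    those whose score reaches the threshold of their source type."""
    return [(src, s) for src, s in occurrences if s >= THETA[src]]

hits = [("BNEWS", 0.5), ("CTS", 0.5), ("CONFMTG", 0.95)]
print(hard_decision(hits))  # [('BNEWS', 0.5), ('CONFMTG', 0.95)]
```

The same score of 0.5 survives for BNEWS but is filtered for CTS, reflecting the higher threshold demanded by the noisier CTS transcripts.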
The indexing time includes both audio processing (generation of word and phonetic transcripts) and the building of the searchable indices.

Index size            0.3267 MB/HS
Indexing time         7.5627 HP/HS
Index memory usage    1653.4297 MB
Search speed          0.0041 sec.P/HS
Search memory usage   269.1250 MB

Table 3: Processing resource profile. (HS: hours of speech. HP: processing hours. sec.P: processing seconds.)

4.5 Retrieval measures

We compare our approach (WCN phonetic), presented in Section 4.1, with another approach (1-best-WCN phonetic). The only difference between the two is that, in 1-best-WCN phonetic, we index only the 1-best path extracted from the WCN instead of indexing the entire WCN. WCN phonetic was our primary system for the evaluation and 1-best-WCN phonetic was one of our contrastive systems. Average precision and recall, MTWV and ATWV over the 1100 queries are given in Table 4. We also provide the DET curve for the WCN phonetic approach in Figure 2; the point that maximizes the TWV, the MTWV, is specified on each curve. Note that retrieval performance has been evaluated separately for each source type, since the accuracy of the speech differs per source type, as shown in Section 4.2.

measure                 BNEWS   CTS     CONFMTG
WCN phonetic
  ATWV                  0.8485  0.7392  0.2365
  MTWV                  0.8532  0.7408  0.2508
  precision             0.94    0.90    0.65
  recall                0.89    0.81    0.37
1-best-WCN phonetic
  ATWV                  0.8279  0.7102  0.2381
  MTWV                  0.8319  0.7117  0.2512
  precision             0.95    0.91    0.66
  recall                0.84    0.75    0.37

Table 4: ATWV, MTWV, precision and recall per source type.

Figure 2: DET curve for the WCN phonetic approach.

As expected, we can see that MTWV and ATWV decrease with higher WER. The retrieval performance is improved when using WCNs relative to the 1-best path, because the miss probability is improved by indexing all the hypotheses provided by the WCNs. This observation confirms the results shown by Mamou et al. [17] in the context of spoken document retrieval. The ATWV that we have obtained is close to the MTWV, since we have combined our ranking model with an appropriate threshold θ to eliminate results with lower scores; the effect of the false alarms added by WCNs is therefore reduced. The WCN phonetic approach was used in the recent NIST STD evaluation and received the highest overall ranking among eleven participants. For comparison, the system ranked in third place obtained an ATWV of 0.8238 for BNEWS, 0.6652 for CTS and 0.1103 for CONFMTG.

4.6 Influence of the duration of the query on the retrieval performance

We have analyzed the retrieval performance according to the average duration of the occurrences in the manual transcripts. The query set was divided into three quantiles according to the duration; we report in Table 5 the ATWV and MTWV per duration quantile. We can see that we performed better on longer queries; one of the reasons is the fact that the ASR system is more accurate on long words. Hence, it was justified to boost the score of the results with the exponent γ_n, as explained in Section 3.4.3, according to the length of the query.

quantile          0-33    33-66   66-100
BNEWS    ATWV     0.7655  0.8794  0.9088
         MTWV     0.7819  0.8914  0.9124
CTS      ATWV     0.6545  0.8308  0.8378
         MTWV     0.6551  0.8727  0.8479
CONFMTG  ATWV     0.1677  0.3493  0.3651
         MTWV     0.1955  0.4109  0.3880

Table 5: ATWV and MTWV according to the duration of the query occurrences, per source type.

4.7 OOV vs. IV query processing

We have randomly chosen three sets of queries from the query sets provided by NIST: 50 queries containing only IV terms, 50 queries containing only OOV terms, and 50 hybrid queries containing both IV and OOV terms. The following experiment has been carried out on the BNEWS collection, and IV and OOV terms have been determined according to the vocabulary of the BNEWS ASR system. We compare three different retrieval approaches: using only the word index, using only the phonetic index, and combining the word and phonetic indices. Table 6 summarizes the retrieval performance of each approach on each type of query. Using a word-based approach for dealing with OOV and hybrid queries drastically affects the performance of the retrieval: precision and recall are null. Using a phone-based approach for dealing with IV queries also affects the performance of the retrieval relative to the word-based approach. As expected, the approach combining word and phonetic indices presented in Section 3 leads to the same retrieval performance as the word approach for IV queries and to the same retrieval performance as the phonetic approach for OOV queries. This approach always outperforms the others, and it justifies the need to combine word and phonetic search.

5. RELATED WORK

In the past decade, the research efforts on spoken data retrieval have focused on extending classical IR techniques to spoken documents. Some of these works have been done in the context of the TREC Spoken Document Retrieval evaluations and are described by Garofolo et al. [12]. An LVCSR system is used to transcribe the speech into 1-best path word transcripts. The transcripts are indexed as clean text: for each occurrence, its document, its word offset and additional information are stored in the index. A generic IR system over the text is used for word spotting and search, as described by Brown et al.
[6] and James [14]. This strategy works well for transcripts like broadcast news collections that have a low WER (in the range of 15%-30%) and are redundant by nature (the same piece of information is spoken several times in different manners). Moreover, the algorithms have mostly been tested over long queries stated in plain English, and retrieval for such queries is more robust against speech recognition errors.

index            word               phonetic           word and phonetic
                 precision  recall  precision  recall  precision  recall
IV queries       0.8        0.96    0.11       0.77    0.8        0.96
OOV queries      0          0       0.13       0.79    0.13       0.79
hybrid queries   0          0       0.15       0.71    0.89       0.83

Table 6: Comparison of the word and phonetic approaches on IV and OOV queries.

An alternative approach consists of using word lattices in order to improve the effectiveness of SDR. Singhal et al. [24, 25] propose to add some terms to the transcript in order to alleviate the retrieval failures due to ASR errors. From an IR perspective, a classical way to bring in new terms is document expansion using a similar corpus. Their approach consists in using word lattices to determine which words returned by a document expansion algorithm should be added to the original transcript. The necessity of using a document expansion algorithm was justified by the fact that the word lattices they worked with lack information about word probabilities. Chelba and Acero in [8, 9] propose a more compact word lattice, the position specific posterior lattice (PSPL). This data structure is similar to WCN and leads to a more compact index. The offset of the terms in the speech documents is also stored in the index. However, their evaluation framework is carried out on lectures that are relatively planned, in contrast to conversational speech. Their ranking model is based on the term confidence level but does not take into consideration the rank of the term among the other hypotheses. Mamou et al. [17] propose a model for spoken document retrieval using WCNs in order to improve the recall and the MAP of the search. However, in the above works, the problem of queries containing OOV terms is not addressed.

Popular approaches to deal with OOV queries are based on sub-word transcripts, where the sub-words are typically phones, syllables or word fragments (sequences of phones) [11, 20, 23]. The classical approach consists of using phonetic transcripts. The transcripts are indexed in the same manner as words, using classical text retrieval techniques; during query processing, the query is represented as a sequence of phones, and the retrieval is based on searching for that string of phones in the phonetic transcript. To account for the high recognition error rates, some other systems use richer transcripts like phonetic lattices. They are attractive as they accommodate high error rate conditions as well as allow OOV queries to be used [15, 3, 20, 23, 21, 27]. However, phonetic lattices contain many edges that overlap in time with the same phonetic label, and are difficult to index. Moreover, besides the improvement in the recall of the search, the precision is affected, since phonetic lattices are often inaccurate. Consequently, phonetic approaches should be used only for OOV search; for searching queries that also contain IV terms, this technique hurts the performance of the retrieval in comparison to the word-based approach. Saraclar and Sproat in [22] show improvement in word spotting accuracy for both IV and OOV queries, using phonetic and word lattices from which a confidence measure of a word or a phone can be derived. They propose three different retrieval strategies: search both the word and the phonetic indices and unify the two sets of results; search the word index for IV queries and the phonetic index for OOV queries; or search the word index and, if no result is returned, search the phonetic index. However, no strategy is proposed to deal with phrase queries containing both IV and OOV terms. Amir et al. in [5, 4] propose to merge a word approach with a phonetic approach in the context of video retrieval. However, their phonetic transcript is obtained from a text-to-phonetic conversion of the 1-best path of the word transcript and is not based on a phonetic decoding of the speech data.

An important issue to be considered when looking at the state of the art in retrieval of spoken data is the lack of a common test set and appropriate query terms. This paper uses such a task, and the STD evaluation is a good summary of the performance of different approaches under the same test conditions.

6. CONCLUSIONS

This work studies how vocabulary independent spoken term detection can be performed efficiently over different data sources. Previously, phonetic-based and word-based approaches have been used for IR on speech data; the former suffers from low accuracy and the latter from the limited vocabulary of the recognition system. In this paper, we have presented a vocabulary independent model of indexing and search that combines both approaches. The system can deal with all kinds of queries, including phrases, by combining at retrieval time information extracted from two different indices: a word index and a phonetic index. The scoring of OOV terms is based on the proximity (in time) between the different phones; the scoring of IV terms is based on information provided by the WCNs. We have shown an improvement in the retrieval performance when using the entire WCN and not only the 1-best path, and when using the phonetic index for the search of OOV query terms. This approach always outperforms the approaches using only the word index or only the phonetic index. As future work, we will compare our model for OOV search on phonetic transcripts with a retrieval model based on the edit distance.

7. ACKNOWLEDGEMENTS

Jonathan Mamou is grateful to David Carmel and Ron Hoory for helpful
and interesting discussions.

8. REFERENCES

[1] NIST Spoken Term Detection 2006 Evaluation Website, http://www.nist.gov/speech/tests/std/.
[2] NIST Spoken Term Detection (STD) 2006 Evaluation Plan, http://www.nist.gov/speech/tests/std/docs/std06-evalplan-v10.pdf.
[3] C. Allauzen, M. Mohri, and M. Saraclar. General indexation of weighted automata - application to spoken utterance retrieval. In Proceedings of the HLT-NAACL 2004 Workshop on Interdisciplinary Approaches to Speech Indexing and Retrieval, Boston, MA, USA, 2004.
[4] A. Amir, M. Berg, and H. Permuter. Mutual relevance feedback for multimodal query formulation in video retrieval. In MIR '05: Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval, pages 17-24, New York, NY, USA, 2005. ACM Press.
[5] A. Amir, A. Efrat, and S. Srinivasan. Advances in phonetic word spotting. In CIKM '01: Proceedings of the tenth international conference on Information and knowledge management, pages 580-582, New York, NY, USA, 2001. ACM Press.
[6] M. Brown, J. Foote, G. Jones, K. Jones, and S. Young. Open-vocabulary speech indexing for voice and video mail retrieval. In Proceedings of ACM Multimedia 96, pages 307-316, Hong Kong, November 1996.
[7] D. Carmel, E. Amitay, M. Herscovici, Y. S. Maarek, Y. Petruschka, and A. Soffer. Juru at TREC 10: Experiments with index pruning. In Proceedings of the Tenth Text Retrieval Conference (TREC-10). National Institute of Standards and Technology (NIST), 2001.
[8] C. Chelba and A. Acero. Indexing uncertainty for spoken document search. In Interspeech 2005, pages 61-64, Lisbon, Portugal, 2005.
[9] C. Chelba and A. Acero. Position specific posterior lattices for indexing speech. In Proceedings of the 43rd Annual Conference of the Association for Computational Linguistics (ACL), Ann Arbor, MI, 2005.
[10] S. Chen. Conditional and joint models for grapheme-to-phoneme conversion. In Eurospeech 2003, Geneva, Switzerland, 2003.
[11] M. Clements, S. Robertson, and M. Miller. Phonetic searching applied to on-line distance learning modules. In Proceedings of the 2002 IEEE 10th Digital Signal Processing Workshop and the 2nd Signal Processing Education Workshop, pages 186-191, 2002.
[12] J. Garofolo, G. Auzanne, and E. Voorhees. The TREC spoken document retrieval track: A success story. In Proceedings of the Ninth Text Retrieval Conference (TREC-9). National Institute of Standards and Technology (NIST), 2000.
[13] D. Hakkani-Tur and G. Riccardi. A general algorithm for word graph matrix decomposition. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 596-599, Hong Kong, 2003.
[14] D. James. The application of classical information retrieval techniques to spoken documents. PhD thesis, University of Cambridge, Downing College, 1995.
[15] D. A. James. A system for unrestricted topic retrieval from radio news broadcasts. In Proc. ICASSP '96, pages 279-282, Atlanta, GA, 1996.
[16] B. Logan, P. Moreno, J. V. Thong, and E. Whittaker. An experimental study of an audio indexing system for the web. In Proceedings of ICSLP, 1996.
[17] J. Mamou, D. Carmel, and R. Hoory. Spoken document retrieval from call-center conversations. In SIGIR '06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 51-58, New York, NY, USA, 2006. ACM Press.
[18] L. Mangu, E. Brill, and A. Stolcke. Finding consensus in speech recognition: word error minimization and other applications of confusion networks. Computer Speech and Language, 14(4):373-400, 2000.
[19] A. Martin, G. Doddington, T. Kamm, M. Ordowski, and M. Przybocki. The DET curve in assessment of detection task performance. In Proc. Eurospeech '97, pages 1895-1898, Rhodes, Greece, 1997.
[20] K. Ng and V. W. Zue. Subword-based approaches for spoken document retrieval. Speech Communication, 32(3):157-186, 2000.
[21] Y. Peng and F. Seide. Fast two-stage vocabulary-independent search in spontaneous speech. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 1, pages 481-484, 2005.
[22] M. Saraclar and R. Sproat. Lattice-based search for spoken utterance retrieval. In HLT-NAACL 2004: Main Proceedings, pages 129-136, Boston, Massachusetts, USA, 2004.
[23] F. Seide, P. Yu, C. Ma, and E. Chang. Vocabulary-independent search in spontaneous speech. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2004.
[24] A. Singhal, J. Choi, D. Hindle, D. Lewis, and F. Pereira. AT&T at TREC-7. In Proceedings of the Seventh Text Retrieval Conference (TREC-7). National Institute of Standards and Technology (NIST), 1999.
[25] A. Singhal and F. Pereira. Document expansion for speech retrieval. In SIGIR '99: Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, pages 34-41, New York, NY, USA, 1999. ACM Press.
[26] H. Soltau, B. Kingsbury, L. Mangu, D. Povey, G. Saon, and G. Zweig. The IBM 2004 conversational telephony system for rich transcription. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2005.
[27] K. Thambiratnam and S. Sridharan. Dynamic match phone-lattice searches for very fast and accurate unrestricted vocabulary keyword spotting. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2005.
[28] P. C. Woodland, S. E. Johnson, P. Jourlin, and K. S.
Jones.\nEffects of out of vocabulary words in spoken document retrieval (poster session).\nIn SIGIR ``00: Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 372-374, New York, NY, USA, 2000.\nACM Press.","lvl-3":"Vocabulary Independent Spoken Term Detection\nABSTRACT\nWe are interested in retrieving information from speech data like broadcast news, telephone conversations and roundtable meetings.\nToday, most systems use large vocabulary continuous speech recognition tools to produce word transcripts; the transcripts are indexed and query terms are retrieved from the index.\nHowever, query terms that are not part of the recognizer's vocabulary cannot be retrieved, and the recall of the search is affected.\nIn addition to the output word transcript, advanced systems provide also phonetic transcripts, against which query terms can be matched phonetically.\nSuch phonetic transcripts suffer from lower accuracy and cannot be an alternative to word transcripts.\nWe present a vocabulary independent system that can handle arbitrary queries, exploiting the information provided by having both word transcripts and phonetic transcripts.\nA speech recognizer generates word confusion networks and phonetic lattices.\nThe transcripts are indexed for query processing and ranking purpose.\nThe value of the proposed method is demonstrated by the relative high performance of our system, which received the highest overall ranking for US English speech data in the recent NIST Spoken Term Detection evaluation [1].\n1.\nINTRODUCTION\nThe rapidly increasing amount of spoken data calls for solutions to index and search this data.\nThe classical approach consists of converting the speech to word transcripts using a large vocabulary continuous speech recognition (LVCSR) tool.\nIn the past decade, most of the research efforts on spoken data retrieval have focused on extending classical IR techniques to word 
transcripts.\nSome of these works have been done in the framework of the NIST TREC Spoken Document Retrieval tracks and are described by Garofolo et al. [12].\nThese tracks focused on retrieval from a corpus of broadcast news stories spoken by professionals.\nOne of the conclusions of those tracks was that the effectiveness of retrieval mostly depends on the accuracy of the transcripts.\nWhile the accuracy of automatic speech recognition (ASR) systems depends on the scenario and environment, state-of-the-art systems achieved better than 90% accuracy in transcription of such data.\nIn 2000, Garofolo et al. concluded that \"Spoken document retrieval is a solved problem\" [12].\nHowever, a significant drawback of such approaches is that search on queries containing out-of-vocabulary (OOV) terms will not return any results.\nOOV terms are missing words from the ASR system vocabulary and are replaced in the output transcript by alternatives that are probable, given the recognition acoustic model and the language model.\nIt has been experimentally observed that over 10% of user queries can contain OOV terms [16], as queries often relate to named entities that typically have a poor coverage in the ASR vocabulary.\nThe effects of OOV query terms in spoken data retrieval are discussed by Woodland et al. 
[28].\nIn many applications the OOV rate may get worse over time unless the recognizer's vocabulary is periodically updated.\nAnother approach consists of converting the speech to phonetic transcripts and representing the query as a sequence of phones.\nThe retrieval is based on searching the sequence of phones representing the query in the phonetic transcripts.\nThe main drawback of this approach is the inherent high error rate of the transcripts.\nTherefore, such approach cannot be an alternative to word transcripts, especially for in-vocabulary (IV) query terms that are part of the vocabulary of the ASR system.\nA solution would be to combine the two different approaches presented above: we index both word transcripts and phonetic transcripts; during query processing, the information is retrieved from the word index for IV terms and from the phonetic index for OOV terms.\nWe would like to be able to process also hybrid queries, i.e, queries that include both IV and OOV terms.\nConsequently, we need to merge pieces of information retrieved from word index and phonetic index.\nProximity information on the occurrences\nof the query terms is required for phrase search and for proximity-based ranking.\nIn classical IR, the index stores for each occurrence of a term, its offset.\nTherefore, we cannot merge posting lists retrieved by phonetic index with those retrieved by word index since the offset of the occurrences retrieved from the two different indices are not comparable.\nThe only element of comparison between phonetic and word transcripts are the timestamps.\nNo previous work combining word and phonetic approach has been done on phrase search.\nWe present a novel scheme for information retrieval that consists of storing, during the indexing process, for each unit of indexing (phone or word) its timestamp.\nWe search queries by merging the information retrieved from the two different indices, word index and phonetic index, according to the timestamps of the 
query terms.\nWe analyze the retrieval effectiveness of this approach on the NIST Spoken Term Detection 2006 evaluation data [1].\nThe paper is organized as follows.\nWe describe the audio processing in Section 2.\nThe indexing and retrieval methods are presented in section 3.\nExperimental setup and results are given in Section 4.\nIn Section 5, we give an overview of related work.\nFinally, we conclude in Section 6.\n2.\nAUTOMATIC SPEECH RECOGNITION SYSTEM\n3.\nRETRIEVAL MODEL\n3.1 Spoken document detection task\n3.2 Indexing\n3.3 Search\n3.4 Ranking\n3.4.1 In vocabulary term ranking\n3.4.2 Out of vocabulary term ranking\n3.4.3 Combination\n4.\nEXPERIMENTS 4.1 Experimental setup\n4.2 WER analysis\n4.3 Theta threshold\nBNEWS CTS CONFMTG\n4.4 Processing resource profile\n4.5 Retrieval measures\n4.6 Influence of the duration of the query on the retrieval performance\n4.7 OOV vs. IV query processing\n5.\nRELATED WORK\nIn the past decade, the research efforts on spoken data retrieval have focused on extending classical IR techniques to spoken documents.\nSome of these works have been done in the context of the TREC Spoken Document Retrieval evaluations and are described by Garofolo et al. [12].\nAn LVCSR system is used to transcribe the speech into 1-best path word transcripts.\nThe transcripts are indexed as clean text: for each occurrence, its document, its word offset and additional information are stored in the index.\nA generic IR system over the text is used for word spotting and search as described by Brown et al. 
[6] and James [14].\nThis strat\nTable 6: Comparison of word and phonetic approach on IV and OOV queries\negy works well for transcripts like broadcast news collections that have a low WER (in the range of 15% -30%) and are redundant by nature (the same piece of information is spoken several times in different manners).\nMoreover, the algorithms have been mostly tested over long queries stated in plain English and retrieval for such queries is more robust against speech recognition errors.\nAn alternative approach consists of using word lattices in order to improve the effectiveness of SDR.\nSinghal et al. [24, 25] propose to add some terms to the transcript in order to alleviate the retrieval failures due to ASR errors.\nFrom an IR perspective, a classical way to bring new terms is document expansion using a similar corpus.\nTheir approach consists in using word lattices in order to determine which words returned by a document expansion algorithm should be added to the original transcript.\nThe necessity to use a document expansion algorithm was justified by the fact that the word lattices they worked with, lack information about word probabilities.\nChelba and Acero in [8, 9] propose a more compact word lattice, the position specific posterior lattice (PSPL).\nThis data structure is similar to WCN and leads to a more compact index.\nThe offset of the terms in the speech documents is also stored in the index.\nHowever, the evaluation framework is carried out on lectures that are relatively planned, in contrast to conversational speech.\nTheir ranking model is based on the term confidence level but does not take into consideration the rank of the term among the other hypotheses.\nMamou et al. 
[17] propose a model for spoken document retrieval using WCNs in order to improve the recall and the MAP of the search.\nHowever, in the above works, the problem of queries containing OOV terms is not addressed.\nPopular approaches to deal with OOV queries are based on sub-words transcripts, where the sub-words are typically phones, syllables or word fragments (sequences of phones) [11, 20, 23].\nThe classical approach consists of using phonetic transcripts.\nThe transcripts are indexed in the same manner as words in using classical text retrieval techniques; during query processing, the query is represented as a sequence of phones.\nThe retrieval is based on searching the string of phones representing the query in the phonetic transcript.\nTo account for the high recognition error rates, some other systems use richer transcripts like phonetic lattices.\nThey are attractive as they accommodate high error rate conditions as well as allow for OOV queries to be used [15, 3, 20, 23, 21, 27].\nHowever, phonetic lattices contain many edges that overlap in time with the same phonetic label, and are difficult to index.\nMoreover, beside the improvement in the recall of the search, the precision is affected since phonetic lattices are often inaccurate.\nConsequently, phonetic approaches should be used only for OOV search; for searching queries containing also IV terms, this technique affects the performance of the retrieval in comparison to the word based approach.\nSaraclar and Sproat in [22] show improvement in word spotting accuracy for both IV and OOV queries, using phonetic and word lattices, where a confidence measure of a word or a phone can be derived.\nThey propose three different retrieval strategies: search both the word and the phonetic indices and unify the two different sets of results; search the word index for IV queries, search the phonetic index for OOV queries; search the word index and if no result is returned, search the phonetic index.\nHowever, no 
strategy is proposed to deal with phrase queries containing both IV and OOV terms. Amir et al. [5, 4] propose to merge a word approach with a phonetic approach in the context of video retrieval. However, their phonetic transcript is obtained from a text-to-phonetic conversion of the 1-best path of the word transcript and is not based on a phonetic decoding of the speech data. An important issue when looking at the state of the art in retrieval of spoken data is the lack of a common test set and appropriate query terms. This paper uses such a task, and the STD evaluation is a good summary of the performance of different approaches under the same test conditions.

6. CONCLUSIONS

This work studies how vocabulary independent spoken term detection can be performed efficiently over different data sources. Previously, phonetic-based and word-based approaches have been used for IR on speech data. The former suffers from low accuracy and the latter from the limited vocabulary of the recognition system. In this paper, we have presented a vocabulary independent model of indexing and search that combines both approaches. The system can deal with all kinds of queries, including phrases, which require combining, for retrieval, information extracted from two different indices: a word index and a phonetic index. The scoring of OOV terms is based on the proximity (in time) between the different phones. The scoring of IV terms is based on information provided by the WCNs. We have shown an improvement in retrieval performance when using the whole WCN rather than only the 1-best path, and when using the phonetic index for the search of OOV query terms. This approach always outperforms approaches using only the word index or only the phonetic index. As future work, we will compare our model for OOV search on phonetic transcripts with a retrieval model based on the edit distance.

Vocabulary Independent Spoken Term Detection

ABSTRACT

We are interested in retrieving information from speech data like broadcast news, telephone conversations and roundtable meetings. Today, most systems use large vocabulary continuous speech recognition tools to produce word transcripts; the transcripts are
indexed and query terms are retrieved from the index. However, query terms that are not part of the recognizer's vocabulary cannot be retrieved, and the recall of the search is affected. In addition to the output word transcript, advanced systems also provide phonetic transcripts, against which query terms can be matched phonetically. Such phonetic transcripts suffer from lower accuracy and cannot be an alternative to word transcripts. We present a vocabulary independent system that can handle arbitrary queries, exploiting the information provided by having both word transcripts and phonetic transcripts. A speech recognizer generates word confusion networks and phonetic lattices. The transcripts are indexed for query processing and ranking purposes. The value of the proposed method is demonstrated by the relatively high performance of our system, which received the highest overall ranking for US English speech data in the recent NIST Spoken Term Detection evaluation [1].

1. INTRODUCTION

The rapidly increasing amount of spoken data calls for solutions to index and search this data. The classical approach consists of converting the speech to word transcripts using a large vocabulary continuous speech recognition (LVCSR) tool. In the past decade, most of the research efforts on spoken data retrieval have focused on extending classical IR techniques to word transcripts. Some of these works have been done in the framework of the NIST TREC Spoken Document Retrieval tracks and are described by Garofolo et al. [12]. These tracks focused on retrieval from a corpus of broadcast news stories spoken by professionals. One of the conclusions of those tracks was that the effectiveness of retrieval mostly depends on the accuracy of the transcripts. While the accuracy of automatic speech recognition (ASR) systems depends on the scenario and environment, state-of-the-art systems have achieved better than 90% accuracy in transcribing such data. In 2000, Garofolo et al.
concluded that "Spoken document retrieval is a solved problem" [12]. However, a significant drawback of such approaches is that search on queries containing out-of-vocabulary (OOV) terms will not return any results. OOV terms are words missing from the ASR system vocabulary; they are replaced in the output transcript by alternatives that are probable given the recognition acoustic model and the language model. It has been experimentally observed that over 10% of user queries can contain OOV terms [16], as queries often relate to named entities that typically have poor coverage in the ASR vocabulary. The effects of OOV query terms in spoken data retrieval are discussed by Woodland et al. [28]. In many applications the OOV rate may get worse over time unless the recognizer's vocabulary is periodically updated.

Another approach consists of converting the speech to phonetic transcripts and representing the query as a sequence of phones. Retrieval is based on searching for the sequence of phones representing the query in the phonetic transcripts. The main drawback of this approach is the inherent high error rate of the transcripts. Therefore, such an approach cannot be an alternative to word transcripts, especially for in-vocabulary (IV) query terms that are part of the vocabulary of the ASR system.

A solution is to combine the two approaches presented above: we index both word transcripts and phonetic transcripts; during query processing, information is retrieved from the word index for IV terms and from the phonetic index for OOV terms. We would also like to be able to process hybrid queries, i.e., queries that include both IV and OOV terms. Consequently, we need to merge pieces of information retrieved from the word index and the phonetic index. Proximity information on the occurrences of the query terms is required for phrase search and for proximity-based ranking. In classical IR, the index stores, for each occurrence of a term, its offset. Therefore, we cannot merge posting lists retrieved from the phonetic index with those retrieved from the word index, since the offsets of the occurrences retrieved from the two different indices are not comparable. The only element of comparison between phonetic and word transcripts is the timestamps. To our knowledge, no previous work combining the word and phonetic approaches has addressed phrase search. We present a novel scheme for information retrieval that consists of storing, during the indexing process, the timestamp of each unit of indexing (phone or word). We search queries by merging the information retrieved from the two different indices, word index and phonetic index, according to the timestamps of the query terms. We analyze the retrieval effectiveness of this approach on the NIST Spoken Term Detection 2006 evaluation data [1].

The paper is organized as follows. We describe the audio processing in Section 2. The indexing and retrieval methods are presented in Section 3. Experimental setup and results are given in Section 4. In Section 5, we give an overview of related work. Finally, we conclude in Section 6.

2. AUTOMATIC SPEECH RECOGNITION SYSTEM

We use an ASR system for transcribing speech data. It works in speaker-independent mode. For best recognition results, a speaker-independent acoustic model and a language model are trained in advance on data with similar characteristics. Typically, ASR generates lattices that can be considered as directed acyclic graphs. Each vertex in a lattice is associated with a timestamp, and each edge (u, v) is labeled with a word or phone hypothesis and its prior probability, which is the probability of the signal delimited by the timestamps of the vertices u and v, given the hypothesis. The 1-best path transcript is obtained from the lattice using dynamic programming techniques.
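As a concrete illustration of this step, the following sketch extracts the 1-best path from a small lattice by dynamic programming over the DAG. This is a toy example, not the system described here: the lattice, the words and the edge probabilities are invented for illustration.

```python
import math

# Hypothetical toy lattice (not from the paper): vertices are integers in
# topological order, edges carry (word hypothesis, prior probability).
edges = [
    (0, 1, "a", 0.6), (0, 1, "the", 0.4),
    (1, 2, "cat", 0.7), (1, 2, "cap", 0.3),
    (2, 3, "sat", 0.9), (2, 3, "sap", 0.1),
]

def one_best(edges, start, end):
    """Extract the max-probability path from the DAG by dynamic
    programming, summing log-probabilities along each path."""
    best = {start: (0.0, [])}  # vertex -> (log-prob, word sequence so far)
    for u, v, word, p in sorted(edges):  # source ids are topologically ordered
        if u in best:
            lp, words = best[u]
            cand = (lp + math.log(p), words + [word])
            if v not in best or cand[0] > best[v][0]:
                best[v] = cand
    return best[end][1]

print(one_best(edges, 0, 3))  # -> ['a', 'cat', 'sat']
```

Log-probabilities are summed along each path, so the returned path maximizes the product of the edge probabilities.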
Mangu et al. [18] and Hakkani-Tur et al. [13] propose a compact representation of a word lattice called a word confusion network (WCN). Each edge (u, v) is labeled with a word hypothesis and its posterior probability, i.e., the probability of the word given the signal. One of the main advantages of a WCN is that it also provides an alignment for all of the words in the lattice. As explained in [13], the three main steps for building a WCN from a word lattice are as follows:

1. Compute the posterior probabilities for all edges in the word lattice.
2. Extract a path from the word lattice (which can be the 1-best, the longest or any random path), and call it the pivot path of the alignment.
3. Traverse the word lattice, and align all the transitions with the pivot, merging the transitions that correspond to the same word (or label) and occur in the same time interval by summing their posterior probabilities.

The 1-best path of a WCN is obtained from the path containing the best hypotheses. As stated in [18], although WCNs are more compact than word lattices, in general the 1-best path obtained from a WCN has a better word accuracy than the 1-best path obtained from the corresponding word lattice. Typical structures of a lattice and a WCN are given in Figure 1.

Figure 1: Typical structures of a lattice and a WCN.

3. RETRIEVAL MODEL

The main problem with retrieving information from spoken data is the low accuracy of the transcription, particularly on terms of interest such as named entities and content words. Generally, the accuracy of a word transcript is characterized by its word error rate (WER). Three kinds of errors can occur in a transcript: substitution of a term that is part of the speech by another term, deletion of a spoken term that is part of the speech, and insertion of a term that is not part of the speech. Substitutions and deletions reflect the fact that an occurrence of a term in the speech signal is not recognized. These misses reduce the recall of the
search. Substitutions and insertions reflect the fact that a term which is not part of the speech signal appears in the transcript. These errors reduce the precision of the search. Search recall can be enhanced by expanding the transcript with extra words. These words can be taken from the other alternatives provided by the WCN; these alternatives may have been spoken but were not the top choice of the ASR. Such an expansion tends to correct substitutions and deletions, and consequently might improve recall, but will probably reduce precision. Using an appropriate ranking model, we can avoid the decrease in precision. Mamou et al. [17] have presented the enhancement in recall and MAP obtained by searching on WCNs instead of considering only the 1-best path word transcript, in the context of spoken document retrieval. We have adapted this model of IV search to term detection. In word transcripts, OOV terms are deleted or substituted; therefore, the usage of phonetic transcripts is more desirable. However, due to their low accuracy, we have preferred to use only the 1-best path extracted from the phonetic lattices. We will show that the usage of phonetic transcripts tends to improve the recall without affecting the precision too much, given an appropriate ranking.

3.1 Spoken document detection task

As stated in the STD 2006 evaluation plan [2], the task consists of finding all the exact matches of a specific query in a given corpus of speech data. A query is a phrase containing several words; the queries are text, not speech. Note that this task is different from the more classical task of spoken document retrieval. Manual transcripts of the speech are not provided to the systems but are used by the evaluators to find true occurrences. By definition, true occurrences of a query are found automatically by searching the manual transcripts using the following rule: the gap between adjacent words in a query must be less than 0.5 seconds in the corresponding speech.
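This rule can be stated compactly in code. The sketch below is our reading of the rule; the function name and the example times are illustrative, not from the evaluation plan.

```python
def is_true_occurrence(words, max_gap=0.5):
    """A sequence of query-word occurrences in the manual transcript,
    each a (begin_time, duration) pair in phrase order, forms a true
    occurrence iff every gap between adjacent words is below max_gap
    seconds."""
    for (b0, d0), (b1, d1) in zip(words, words[1:]):
        if b1 - (b0 + d0) >= max_gap:
            return False
    return True

# Illustrative times: a 0.2 s gap qualifies, a 0.7 s gap does not.
print(is_true_occurrence([(1.0, 0.3), (1.5, 0.4)]))  # -> True
print(is_true_occurrence([(1.0, 0.3), (2.0, 0.4)]))  # -> False
```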
For evaluating the results, each system output occurrence is judged as correct or not according to whether it is "close" in time to a true occurrence of the query retrieved from the manual transcripts; it is judged as correct if the midpoint of the system output occurrence is less than or equal to 0.5 seconds from the time span of a true occurrence of the query.

3.2 Indexing

We have used the same indexing process for WCN and phonetic transcripts. Each occurrence of a unit of indexing (word or phone) u in a transcript D is indexed with the following information:

• the begin time t of the occurrence of u,
• the duration d of the occurrence of u.

In addition, for WCN indexing, we store:

• the confidence level of the occurrence of u at the time t, evaluated by its posterior probability Pr(u|t, D),
• the rank of the occurrence of u among the other hypotheses beginning at the same time t, rank(u|t, D).

Note that since the task is to find exact matches of the phrase queries, we have not filtered stopwords, and the corpus is not stemmed before indexing.

3.3 Search

In the following, we present our approach for accomplishing the STD task using the indices described above. The terms are extracted from the query. The vocabulary of the ASR system building the word transcripts is given. Terms that are part of this vocabulary are IV terms; the other terms are OOV. For an IV query term, the posting list is extracted from the word index. For an OOV query term, the term is converted to a sequence of phones using a joint maximum entropy N-gram model [10]. For example, the term prosody is converted to the sequence of phones (p, r, aa, z, ih, d, iy). The posting list of each phone is extracted from the phonetic index. The next step consists of merging the different posting lists according to the timestamps of the occurrences, in order to create results matching the query. First, we check that the words and phones appear in
the right order according to their begin times. Second, we check that the gap in time between adjacent words and phones is reasonable. Conforming to the requirements of the STD evaluation, the distance in time between two adjacent query terms must be less than 0.5 seconds. For OOV search, we check that the distance in time between two adjacent phones of a query term is less than 0.2 seconds; this value has been determined empirically. In this way, we can reduce the effect of insertion errors, since we allow insertions between the adjacent words and phones. Our query processing does not allow substitutions and deletions.

Example: Let us consider the phrase query prosody research. The term prosody is OOV and the term research is IV. The term prosody is converted to the sequence of phones (p, r, aa, z, ih, d, iy). The posting list of each phone is extracted from the phonetic index. We merge the posting lists of the phones such that the sequence of phones appears in the right order and the gap in time between the pairs of phones (p, r), (r, aa), (aa, z), (z, ih), (ih, d), (d, iy) is less than 0.2 seconds. We obtain occurrences of the term prosody. The posting list of research is extracted from the word index, and we merge it with the occurrences found for prosody such that they appear in the right order and the distance in time between prosody and research is less than 0.5 seconds.

Note that our indexing model allows searching for different types of queries:

1. queries containing only IV terms, using the word index;
2. queries containing only OOV terms, using the phonetic index;
3. keyword queries containing both IV and OOV terms, using the word index for IV terms and the phonetic index for OOV terms; for query processing, the different sets of matches are unified if the query terms have OR semantics and intersected if the query terms have AND semantics;
4. phrase queries containing both IV and OOV terms; for query processing, the posting lists of the IV terms retrieved from the word index are merged with the posting lists of the OOV terms retrieved from the phonetic index.
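The timestamp-based merging described above can be sketched as follows. This is a toy sketch under simplifying assumptions: each posting list is a time-sorted list of (begin_time, duration) pairs, and the times and the function name are invented for illustration.

```python
def merge_postings(postings, max_gap):
    """Merge posting lists (one per query unit, in phrase order) into
    phrase matches: units must occur in order, with the gap between the
    end of one unit and the start of the next below max_gap seconds.
    Insertions in between are tolerated; substitutions and deletions
    are not."""
    # Every occurrence of the first unit starts a partial match.
    matches = [[occ] for occ in postings[0]]
    for plist in postings[1:]:
        extended = []
        for match in matches:
            last_begin, last_dur = match[-1]
            for begin, dur in plist:
                gap = begin - (last_begin + last_dur)
                if 0 <= gap < max_gap:
                    extended.append(match + [(begin, dur)])
        matches = extended
    return matches

# Illustrative occurrences of the adjacent phones (p, r).
p_list = [(0.25, 0.01), (0.45, 0.01)]
r_list = [(0.36, 0.01), (0.52, 0.01)]
print(merge_postings([p_list, r_list], max_gap=0.2))
# -> [[(0.25, 0.01), (0.36, 0.01)], [(0.45, 0.01), (0.52, 0.01)]]
```

The same routine serves both levels: phone lists are merged with max_gap = 0.2 to form OOV term occurrences, and term-level lists with max_gap = 0.5 to form phrase matches.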
The merging is possible since we have stored the timestamps of each unit of indexing (word and phone) in both indices. The STD evaluation has focused on the fourth query type. It is the hardest task, since we need to combine posting lists retrieved from the phonetic and word indices.

3.4 Ranking

Since IV terms and OOV terms are retrieved from two different indices, we propose two different functions for scoring an occurrence of a term; afterward, an aggregate score is assigned to the query based on the scores of the query terms. Because the task is term detection, we do not use a document frequency criterion for ranking the occurrences. Let us consider a query Q = (k0,..., kn), associated with a boosting vector B = (B1,..., Bj). This vector associates a boosting factor with each rank of the different hypotheses; the boosting factors are normalized between 0 and 1. If the rank r is larger than j, we assume Br = 0.

3.4.1 In-vocabulary term ranking

For IV term ranking, we extend the work of Mamou et al.
[17] on spoken document retrieval to term detection. We use the information provided by the word index. We define the score score(k, t, D) of a keyword k occurring at a time t in the transcript D by the following formula:

score(k, t, D) = B_rank(k|t, D) × Pr(k|t, D)

Note that 0 ≤ score(k, t, D) ≤ 1.

3.4.2 Out-of-vocabulary term ranking

For OOV term ranking, we use the information provided by the phonetic index. We give a higher rank to occurrences of OOV terms whose phones are close (in time) to each other. We define a scoring function that is related to the average gap in time between the different phones. Let us consider a keyword k converted to the sequence of phones (pk0,..., pkl). We define the normalized score score(k, tk0, D) of a keyword k = (pk0,..., pkl), where each pki occurs at time tki with a duration of dki in the transcript D, by the following formula:

score(k, tk0, D) = 1 − (1/l) × Σ_{i=1..l} 5 × (tki − (tk(i−1) + dk(i−1)))

Note that, according to what we have explained in Section 3.3, we have, for all 1 ≤ i ≤ l, 0 ≤ tki − (tk(i−1) + dk(i−1)) < 0.2 sec, hence 0 ≤ 5 × (tki − (tk(i−1) + dk(i−1))) < 1, and consequently 0 < score(k, tk0, D) ≤ 1. The duration of the keyword occurrence is tkl − tk0 + dkl.

Example: let us consider the sequence (p, r, aa, z, ih, d, iy) and two different occurrences of the sequence. For each phone, we give the begin time and the duration in seconds.
Occurrence 1: (p, 0.25, 0.01), (r, 0.36, 0.01), (aa, 0.37, 0.01), (z, 0.38, 0.01), (ih, 0.39, 0.01), (d, 0.4, 0.01), (iy, 0.52, 0.01).
Occurrence 2: (p, 0.45, 0.01), (r, 0.46, 0.01), (aa, 0.47, 0.01), (z, 0.48, 0.01), (ih, 0.49, 0.01), (d, 0.5, 0.01), (iy, 0.51, 0.01).
According to our formula, the score of the first occurrence is 0.83 and the score of the second occurrence is 1. In the first occurrence, there is probably some insertion or silence between the phones p and r, and between the phones d and iy. The silence can be due to the fact that the phones belong to two different words, and therefore it is not an occurrence of the term prosody.
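The OOV scoring formula can be checked against the example above with a short script. This is a sketch; the function name `oov_score` is ours, not the paper's.

```python
def oov_score(phones):
    """Proximity score of an OOV term occurrence: 1 - (1/l) * sum of
    5 * gap over the l gaps between consecutive phones, where `phones`
    is a list of (begin_time, duration) pairs and gap is the time from
    the end of one phone to the start of the next."""
    l = len(phones) - 1
    gaps = [phones[i][0] - (phones[i - 1][0] + phones[i - 1][1])
            for i in range(1, len(phones))]
    return 1.0 - sum(5.0 * g for g in gaps) / l

# The two occurrences of (p, r, aa, z, ih, d, iy) from the example above.
occ1 = [(0.25, 0.01), (0.36, 0.01), (0.37, 0.01), (0.38, 0.01),
        (0.39, 0.01), (0.40, 0.01), (0.52, 0.01)]
occ2 = [(0.45, 0.01), (0.46, 0.01), (0.47, 0.01), (0.48, 0.01),
        (0.49, 0.01), (0.50, 0.01), (0.51, 0.01)]
print(oov_score(occ1), oov_score(occ2))  # approx. 0.825 and 1.0, i.e. the 0.83 and 1 of the example
```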
3.4.3 Combination

The score of an occurrence of a query Q at time t0 in the document D is determined from the product of the scores of the query keywords ki, where each ki occurs at time ti with a duration di in the transcript D:

score(Q, t0, D) = ( Π_{i=0..n} score(ki, ti, D) )^γn

Note that, according to what we have explained in Section 3.3, we have, for all 1 ≤ i ≤ n, 0 ≤ ti − (t(i−1) + d(i−1)) < 0.5 sec. Our goal is to estimate, for each found occurrence, how likely it is that the query appears there. This is different from classical IR, which aims to rank the results and not to score them. Since the probability of a false alarm is inversely proportional to the length of the phrase query, we have boosted the score of queries by an exponent γn that is related to the number of keywords in the phrase. We have determined empirically the value γn = 1/n. The begin time of the query occurrence is determined by the begin time t0 of the first query term, and the duration of the query occurrence is tn − t0 + dn.

4. EXPERIMENTS

4.1 Experimental setup

Our corpus consists of the evaluation set provided by NIST for the STD 2006 evaluation [1]. It includes three different source types in US English: three hours of broadcast news (BNEWS), three hours of conversational telephony speech (CTS) and two hours of conference room meetings (CONFMTG). As shown in Section 4.2, these different collections have different accuracies; CTS and CONFMTG are spontaneous speech. For the experiments, we have processed the query set provided by NIST, which includes 1100 queries. Each query is a phrase containing between one and five terms, common and rare terms, terms that are in the manual transcripts and terms that are not. Testing and determination of empirical values have been carried out on another set of speech data and queries, the development set, also provided by NIST. We have used the IBM research prototype ASR system, described in [26], for transcribing the speech data. We have produced WCNs for the three
different source types. 1-best phonetic transcripts were generated only for BNEWS and CTS, since the CONFMTG phonetic transcripts have too low an accuracy. We have adapted Juru [7], a full-text search library written in Java, to index the transcripts and to store the timestamps of the words and phones; search results have been retrieved as described in Section 3. For each found occurrence of a given query, our system outputs: the location of the term in the audio recording (begin time and duration), the score indicating how likely the occurrence of the query is (as defined in Section 3.4), and a hard (binary) decision as to whether the detection is correct.

We measure precision and recall by comparing the results obtained over the automatic transcripts (only the results having a true hard decision) to the results obtained over the reference manual transcripts. Our aim is to evaluate the ability of the suggested retrieval approach to handle transcribed speech data. Thus, the closer the automatic results are to the manual results, the better the search effectiveness over the automatic transcripts. The results returned from the manual transcription for a given query are considered relevant and are expected to be retrieved with the highest scores. This approach of measuring search effectiveness using manual data as a reference is very common in speech retrieval research [25, 22, 8, 9, 17].

Besides recall and precision, we use the evaluation measures defined by NIST for the 2006 STD evaluation [2]: the Actual Term-Weighted Value (ATWV) and the Maximum Term-Weighted Value (MTWV). The term-weighted value (TWV) is computed by first computing the miss and false alarm probabilities for each query separately, then using these and an (arbitrarily chosen) prior probability to compute query-specific values, and finally averaging these query-specific values over all queries q to produce an overall system value:

TWV(θ) = 1 − average_q { P_miss(q, θ) + β × P_FA(q, θ) }

where β = (C/V) × (Pr_q^(−1) − 1), and θ is the
detection threshold.\nFor the evaluation, the cost\/value ratio, C\/V, has been set to 0.1 and the prior probability of a query Prq to 10^{-4}.\nTherefore, \u03b2 = 999.9.\nMiss and false alarm probabilities for a given query q are functions of \u03b8:\nPmiss(q, \u03b8) = 1 \u2212 Ncorrect(q, \u03b8) \/ Ntrue(q),\nPFA(q, \u03b8) = Nspurious(q, \u03b8) \/ NNT(q),\nwhere:\n\u2022 Ncorrect(q, \u03b8) is the number of correct detections (retrieved by the system) of the query q with a score greater than or equal to \u03b8.\n\u2022 Nspurious(q, \u03b8) is the number of spurious detections of the query q with a score greater than or equal to \u03b8.\n\u2022 Ntrue(q) is the number of true occurrences of the query q in the corpus.\n\u2022 NNT(q) is the number of opportunities for incorrect detection of the query q in the corpus; it is the number of \"non-target\" query trials.\nIt has been defined by the following formula: NNT(q) = Tspeech \u2212 Ntrue(q).\nTspeech is the total amount of speech in the collection (in seconds).\nTable 1: WER and distribution of the error types over word 1-best path extracted from WCNs for the different source types.\nATWV is the \"actual term-weighted value\"; it is the detection value attained by the system as a result of the system output and the binary decision output for each putative occurrence.\nIt ranges from \u2212\u221e to +1.\nMTWV is the \"maximum term-weighted value\" over the range of all possible values of \u03b8.\nIt ranges from 0 to +1.\nWe have also provided the detection error tradeoff (DET) curve [19] of miss probability (Pmiss) vs.
false alarm probability (PFA).\nWe have used the STDEval tool to extract the relevant results from the manual transcripts and to compute ATWV, MTWV and the DET curve.\nWe have determined empirically the following values for the boosting vector defined in Section 3.4: Bi = 1i.\n4.2 WER analysis\nWe use the word error rate (WER) in order to characterize the accuracy of the transcripts.\nWER is defined as follows:\nWER = 100 \u00b7 (S + I + D) \/ N,\nwhere N is the total number of words in the corpus, and S, I, and D are the total number of substitution, insertion, and deletion errors, respectively.\nThe substitution error rate (SUBR) is defined by\nSUBR = 100 \u00b7 S \/ (S + I + D).\nDeletion error rate (DELR) and insertion error rate (INSR) are defined in a similar manner.\nTable 1 gives the WER and the distribution of the error types over 1-best path transcripts extracted from WCNs.\nThe WER of the 1-best path phonetic transcripts is approximately two times worse than the WER of the word transcripts.\nThat is the reason why we have not retrieved from phonetic transcripts on CONFMTG speech data.\n4.3 Theta threshold\nWe have determined empirically a detection threshold \u03b8 per source type; the hard decision of the occurrences having a score less than \u03b8 is set to false, and such occurrences returned by the system are not considered as retrieved and therefore are not used for computing ATWV, precision and recall.\nThe value of the threshold \u03b8 per source type is reported in Table 2.\nIt is correlated to the accuracy of the transcripts.\nBasically, setting a threshold aims to eliminate false alarms from the retrieved occurrences without adding misses.\nThe higher the WER is, the higher the \u03b8 threshold should be.\nTable 2: Values of the \u03b8 threshold per source type.\nBNEWS: 0.4 | CTS: 0.61 | CONFMTG: 0.91\n4.4 Processing resource profile\nWe report in Table 3 the processing resource profile.\nConcerning the index size, note that our index is compressed using IR index compression techniques.\nThe indexing time includes both audio processing (generation
of word and phonetic transcripts) and building of the searchable indices.\nTable 3: Processing resource profile.\n(HS: Hours of Speech.\nHP: Processing Hours.\nsec.P: Processing seconds)\n4.5 Retrieval measures\nWe compare our approach (WCN phonetic) presented in Section 4.1 with another approach (1-best-WCN phonetic).\nThe only difference between these two approaches is that, in 1-best-WCN phonetic, we index only the 1-best path extracted from the WCN instead of indexing the entire WCN.\nWCN phonetic was our primary system for the evaluation and 1-best-WCN phonetic was one of our contrastive systems.\nAverage precision and recall, MTWV and ATWV on the 1100 queries are given in Table 4.\nTable 4: ATWV, MTWV, precision and recall per source type.\nWe also provide the DET curve for the WCN phonetic approach in Figure 2.\nFigure 2: DET curve for WCN phonetic approach.\nThe point that maximizes the TWV, the MTWV, is specified on each curve.\nNote that retrieval performance has been evaluated separately for each source type since the accuracy of the speech differs per source type, as shown in Section 4.2.\nAs expected, we can see that MTWV and ATWV decrease as WER increases.\nThe retrieval performance is improved when using WCNs relative to the 1-best path.\nThis is due to the fact that the miss probability is improved by indexing all the hypotheses provided by the WCNs.\nThis observation confirms the results shown by Mamou et al.
[17] in the context of spoken document retrieval.\nThe ATWV that we have obtained is close to the MTWV; we have combined our ranking model with appropriate threshold \u03b8 to eliminate results with lower score.\nTherefore, the effect of false alarms added by WCNs is reduced.\nWCN phonetic approach was used in the recent NIST STD evaluation and received the highest overall ranking among eleven participants.\nFor comparison, the system that ranked at the third place, obtained an ATWV of 0.8238 for BNEWS, 0.6652 for CTS and 0.1103 for CONFMTG.\n4.6 Influence of the duration of the query on the retrieval performance\nWe have analysed the retrieval performance according to the average duration of the occurrences in the manual transcripts.\nThe query set was divided into three different quantiles according to the duration; we have reported in Table 5 ATWV and MTWV according to the duration.\nWe can see that we performed better on longer queries.\nOne of the reasons is the fact that the ASR system is more accurate on long words.\nHence, it was justified to boost the score of the results with the exponent \u03b3n, as explained in Section 3.4.3, according to the length of the query.\nTable 5: ATWV, MTWV according to the duration of the query occurrences per source type.\n4.7 OOV vs. 
IV query processing\nWe have randomly chosen three sets of queries from the query sets provided by NIST: 50 queries containing only IV terms; 50 queries containing only OOV terms; and 50 hybrid queries containing both IV and OOV terms.\nThe following experiment was carried out on the BNEWS collection, and IV and OOV terms were determined according to the vocabulary of the BNEWS ASR system.\nWe would like to compare three different retrieval approaches: using only the word index; using only the phonetic index; and combining the word and phonetic indices.\nTable 6 summarizes the retrieval performance for each approach and each type of query.\nUsing a word-based approach for OOV and hybrid queries drastically degrades retrieval performance; precision and recall are null.\nUsing a phone-based approach for IV queries also degrades retrieval performance relative to the word-based approach.\nAs expected, the approach combining word and phonetic indices presented in Section 3 leads to the same retrieval performance as the word approach for IV queries and to the same retrieval performance as the phonetic approach for OOV queries.\nThis approach always outperforms the others, which justifies the need to combine word and phonetic search.\n5.\nRELATED WORK\nIn the past decade, the research efforts on spoken data retrieval have focused on extending classical IR techniques to spoken documents.\nSome of these works have been done in the context of the TREC Spoken Document Retrieval evaluations and are described by Garofolo et al. [12].\nAn LVCSR system is used to transcribe the speech into 1-best path word transcripts.\nThe transcripts are indexed as clean text: for each occurrence, its document, its word offset and additional information are stored in the index.\nA generic IR system over the text is used for word spotting and search as described by Brown et al.
[6] and James [14].\nTable 6: Comparison of word and phonetic approaches on IV and OOV queries.\nThis strategy works well for transcripts like broadcast news collections that have a low WER (in the range of 15%-30%) and are redundant by nature (the same piece of information is spoken several times in different manners).\nMoreover, the algorithms have been mostly tested over long queries stated in plain English, and retrieval for such queries is more robust against speech recognition errors.\nAn alternative approach consists of using word lattices in order to improve the effectiveness of SDR.\nSinghal et al. [24, 25] propose to add some terms to the transcript in order to alleviate the retrieval failures due to ASR errors.\nFrom an IR perspective, a classical way to bring in new terms is document expansion using a similar corpus.\nTheir approach consists of using word lattices in order to determine which words returned by a document expansion algorithm should be added to the original transcript.\nThe necessity of using a document expansion algorithm was justified by the fact that the word lattices they worked with lack information about word probabilities.\nChelba and Acero in [8, 9] propose a more compact word lattice, the position specific posterior lattice (PSPL).\nThis data structure is similar to WCN and leads to a more compact index.\nThe offset of the terms in the speech documents is also stored in the index.\nHowever, the evaluation framework is carried out on lectures that are relatively planned, in contrast to conversational speech.\nTheir ranking model is based on the term confidence level but does not take into consideration the rank of the term among the other hypotheses.\nMamou et al.
[17] propose a model for spoken document retrieval using WCNs in order to improve the recall and the MAP of the search.\nHowever, in the above works, the problem of queries containing OOV terms is not addressed.\nPopular approaches to dealing with OOV queries are based on sub-word transcripts, where the sub-words are typically phones, syllables or word fragments (sequences of phones) [11, 20, 23].\nThe classical approach consists of using phonetic transcripts.\nThe transcripts are indexed in the same manner as words, using classical text retrieval techniques; during query processing, the query is represented as a sequence of phones.\nThe retrieval is based on searching for the string of phones representing the query in the phonetic transcript.\nTo account for the high recognition error rates, some other systems use richer transcripts like phonetic lattices.\nThey are attractive as they accommodate high error rate conditions as well as allow for OOV queries to be used [15, 3, 20, 23, 21, 27].\nHowever, phonetic lattices contain many edges that overlap in time with the same phonetic label, and are difficult to index.\nMoreover, besides the improvement in the recall of the search, the precision is affected since phonetic lattices are often inaccurate.\nConsequently, phonetic approaches should be used only for OOV search; for queries that also contain IV terms, this technique degrades retrieval performance in comparison to the word-based approach.\nSaraclar and Sproat in [22] show improvement in word spotting accuracy for both IV and OOV queries, using phonetic and word lattices, where a confidence measure of a word or a phone can be derived.\nThey propose three different retrieval strategies: search both the word and the phonetic indices and unify the two different sets of results; search the word index for IV queries and the phonetic index for OOV queries; or search the word index and, if no result is returned, search the phonetic index.\nHowever, no
strategy is proposed to deal with phrase queries containing both IV and OOV terms.\nAmir et al. in [5, 4] propose to merge a word approach with a phonetic approach in the context of video retrieval.\nHowever, the phonetic transcript is obtained from a text-to-phonetic conversion of the 1-best path of the word transcript and is not based on a phonetic decoding of the speech data.\nAn important issue to be considered when looking at the state of the art in retrieval of spoken data is the lack of a common test set and appropriate query terms.\nThis paper uses such a task, and the STD evaluation is a good summary of the performance of different approaches under the same test conditions.\n6.\nCONCLUSIONS\nThis work studies how vocabulary independent spoken term detection can be performed efficiently over different data sources.\nPreviously, phonetic-based and word-based approaches have been used for IR on speech data.\nThe former suffers from low accuracy and the latter from the limited vocabulary of the recognition system.\nIn this paper, we have presented a vocabulary independent model of indexing and search that combines both approaches.\nThe system can deal with all kinds of queries, although phrase queries need to combine, for the retrieval, information extracted from two different indices: a word index and a phonetic index.\nThe scoring of OOV terms is based on the proximity (in time) between the different phones.\nThe scoring of IV terms is based on information provided by the WCNs.\nWe have shown an improvement in the retrieval performance when using the entire WCN and not only the 1-best path, and when using the phonetic index for the search of OOV query terms.\nThis approach always outperforms the approaches using only the word index or only the phonetic index.\nAs future work, we will compare our model for OOV search on phonetic transcripts with a retrieval model based on the edit distance.","keyphrases":["vocabulari","spoken term detect","speech recogn","phonet
transcript","vocabulari independ system","speech data retriev","index timestamp","word index","phonet index","index merg","oov search","automat speech recognit","speech retriev","speak term detect","out-of-vocabulari"],"prmu":["P","P","P","P","P","R","M","R","R","M","M","M","R","M","U"]} {"id":"H-35","title":"AdaRank: A Boosting Algorithm for Information Retrieval","abstract":"In this paper we address the issue of learning to rank for document retrieval. In the task, a model is automatically created with some training data and then is utilized for ranking of documents. The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain). Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data. Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures. For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs. To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures. Our algorithm, referred to as AdaRank, repeatedly constructs \u2018weak rankers' on the basis of re-weighted training data and finally linearly combines the weak rankers for making ranking predictions. We prove that the training process of AdaRank is exactly that of enhancing the performance measure used. Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.","lvl-1":"AdaRank: A Boosting Algorithm for Information Retrieval Jun Xu Microsoft Research Asia No. 49 Zhichun Road, Haidian Distinct Beijing, China 100080 junxu@microsoft.com Hang Li Microsoft Research Asia No. 
49 Zhichun Road, Haidian Distinct Beijing, China 100080 hangli@microsoft.com ABSTRACT In this paper we address the issue of learning to rank for document retrieval.\nIn the task, a model is automatically created with some training data and then is utilized for ranking of documents.\nThe goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain).\nIdeally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data.\nExisting methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures.\nFor example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs.\nTo deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures.\nOur algorithm, referred to as AdaRank, repeatedly constructs `weak rankers'' on the basis of re-weighted training data and finally linearly combines the weak rankers for making ranking predictions.\nWe prove that the training process of AdaRank is exactly that of enhancing the performance measure used.\nExperimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.\nCategories and Subject Descriptors H.3.3 [Information Search and Retrieval]: Retrieval models General Terms Algorithms, Experimentation, Theory 1.\nINTRODUCTION Recently `learning to rank'' has gained increasing attention in both the fields of information retrieval and machine learning.\nWhen applied to document retrieval, learning to rank becomes a task as follows.\nIn training, a ranking model is constructed with data consisting of queries, their corresponding retrieved documents, and relevance levels given by 
humans.\nIn ranking, given a new query, the corresponding retrieved documents are sorted by using the trained ranking model.\nIn document retrieval, usually ranking results are evaluated in terms of performance measures such as MAP (Mean Average Precision) [1] and NDCG (Normalized Discounted Cumulative Gain) [15].\nIdeally, the ranking function is created so that the accuracy of ranking in terms of one of the measures with respect to the training data is maximized.\nSeveral methods for learning to rank have been developed and applied to document retrieval.\nFor example, Herbrich et al. [13] propose a learning algorithm for ranking on the basis of Support Vector Machines, called Ranking SVM.\nFreund et al. [8] take a similar approach and perform the learning by using boosting, referred to as RankBoost.\nAll the existing methods used for document retrieval [2, 3, 8, 13, 16, 20] are designed to optimize loss functions loosely related to the IR performance measures, not loss functions directly based on the measures.\nFor example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs.\nIn this paper, we aim to develop a new learning algorithm that can directly optimize any performance measure used in document retrieval.\nInspired by the work of AdaBoost for classification [9], we propose to develop a boosting algorithm for information retrieval, referred to as AdaRank.\nAdaRank utilizes a linear combination of `weak rankers'' as its model.\nIn learning, it repeats the process of re-weighting the training sample, creating a weak ranker, and calculating a weight for the ranker.\nWe show that AdaRank algorithm can iteratively optimize an exponential loss function based on any of IR performance measures.\nA lower bound of the performance on training data is given, which indicates that the ranking accuracy in terms of the performance measure can be continuously improved during the training process.\nAdaRank offers several 
advantages: ease in implementation, theoretical soundness, efficiency in training, and high accuracy in ranking.\nExperimental results indicate that AdaRank can outperform the baseline methods of BM25, Ranking SVM, and RankBoost, on four benchmark datasets including OHSUMED, WSJ, AP, and .Gov.\nTuning ranking models using certain training data and a performance measure is a common practice in IR [1].\nAs the number of features in the ranking model gets larger and the amount of training data gets larger, the tuning becomes harder.\nFrom the viewpoint of IR, AdaRank can be viewed as a machine learning method for ranking model tuning.\nRecently, direct optimization of performance measures in learning has become a hot research topic.\nSeveral methods for classification [17] and ranking [5, 19] have been proposed.\nAdaRank can be viewed as a machine learning method for direct optimization of performance measures, based on a different approach.\nThe rest of the paper is organized as follows.\nAfter a summary of related work in Section 2, we describe the proposed AdaRank algorithm in detail in Section 3.\nExperimental results and discussions are given in Section 4.\nSection 5 concludes this paper and gives future work.\n2.\nRELATED WORK\n2.1 Information Retrieval\nThe key problem for document retrieval is ranking, specifically, how to create the ranking model (function) that can sort documents based on their relevance to the given query.\nIt is a common practice in IR to tune the parameters of a ranking model using some labeled data and one performance measure [1].\nFor example, the state-of-the-art methods of BM25 [24] and LMIR (Language Models for Information Retrieval) [18, 22] all have parameters to tune.\nAs the ranking models become more sophisticated (more features are used) and more labeled data become available, how to tune or train ranking models turns out to be a challenging issue.\nRecently, methods of `learning to rank'' have been applied to ranking model
construction and some promising results have been obtained.\nFor example, Joachims [16] applies Ranking SVM to document retrieval.\nHe utilizes click-through data to deduce training data for the model creation.\nCao et al. [4] adapt Ranking SVM to document retrieval by modifying the Hinge Loss function to better meet the requirements of IR.\nSpecifically, they introduce a Hinge Loss function that heavily penalizes errors on the tops of ranking lists and errors from queries with fewer retrieved documents.\nBurges et al. [3] employ Relative Entropy as a loss function and Gradient Descent as an algorithm to train a Neural Network model for ranking in document retrieval.\nThe method is referred to as `RankNet''.\n2.2 Machine Learning There are three topics in machine learning which are related to our current work.\nThey are `learning to rank'', boosting, and direct optimization of performance measures.\nLearning to rank is to automatically create a ranking function that assigns scores to instances and then rank the instances by using the scores.\nSeveral approaches have been proposed to tackle the problem.\nOne major approach to learning to rank is that of transforming it into binary classification on instance pairs.\nThis `pair-wise'' approach fits well with information retrieval and thus is widely used in IR.\nTypical methods of the approach include Ranking SVM [13], RankBoost [8], and RankNet [3].\nFor other approaches to learning to rank, refer to [2, 11, 31].\nIn the pair-wise approach to ranking, the learning task is formalized as a problem of classifying instance pairs into two categories (correctly ranked and incorrectly ranked).\nActually, it is known that reducing classification errors on instance pairs is equivalent to maximizing a lower bound of MAP [16].\nIn that sense, the existing methods of Ranking SVM, RankBoost, and RankNet are only able to minimize loss functions that are loosely related to the IR performance measures.\nBoosting is a general 
technique for improving the accuracies of machine learning algorithms.\nThe basic idea of boosting is to repeatedly construct `weak learners'' by re-weighting training data and form an ensemble of weak learners such that the total performance of the ensemble is `boosted''.\nFreund and Schapire have proposed the first well-known boosting algorithm called AdaBoost (Adaptive Boosting) [9], which is designed for binary classification (0-1 prediction).\nLater, Schapire & Singer have introduced a generalized version of AdaBoost in which weak learners can give confidence scores in their predictions rather than make 0-1 decisions [26].\nExtensions have been made to deal with the problems of multi-class classification [10, 26], regression [7], and ranking [8].\nIn fact, AdaBoost is an algorithm that ingeniously constructs a linear model by minimizing the `exponential loss function'' with respect to the training data [26].\nOur work in this paper can be viewed as a boosting method developed for ranking, particularly for ranking in IR.\nRecently, a number of authors have proposed conducting direct optimization of multivariate performance measures in learning.\nFor instance, Joachims [17] presents an SVM method to directly optimize nonlinear multivariate performance measures like the F1 measure for classification.\nCossock & Zhang [5] find a way to approximately optimize the ranking performance measure DCG [15].\nMetzler et al. 
[19] also propose a method of directly maximizing rank-based metrics for ranking on the basis of manifold learning.\nAdaRank is also one that tries to directly optimize multivariate performance measures, but is based on a different approach.\nAdaRank is unique in that it employs an exponential loss function based on IR performance measures and a boosting technique.\n3.\nOUR METHOD: ADARANK\n3.1 General Framework\nWe first describe the general framework of learning to rank for document retrieval.\nIn retrieval (testing), given a query the system returns a ranking list of documents in descending order of the relevance scores.\nThe relevance scores are calculated with a ranking function (model).\nIn learning (training), a number of queries and their corresponding retrieved documents are given.\nFurthermore, the relevance levels of the documents with respect to the queries are also provided.\nThe relevance levels are represented as ranks (i.e., categories in a total order).\nThe objective of learning is to construct a ranking function which achieves the best results in ranking of the training data in the sense of minimization of a loss function.\nIdeally the loss function is defined on the basis of the performance measure used in testing.\nSuppose that Y = {r1, r2, \u00b7 \u00b7 \u00b7 , r\u2113} is a set of ranks, where \u2113 denotes the number of ranks.\nThere exists a total order between the ranks: r\u2113 \u227b r\u2113-1 \u227b \u00b7 \u00b7 \u00b7 \u227b r1, where \u227b denotes a preference relationship.\nIn training, a set of queries Q = {q1, q2, \u00b7 \u00b7 \u00b7 , qm} is given.\nEach query qi is associated with a list of retrieved documents di = {di1, di2, \u00b7 \u00b7 \u00b7 , di,n(qi)} and a list of labels yi = {yi1, yi2, \u00b7 \u00b7 \u00b7 , yi,n(qi)}, where n(qi) denotes the sizes of lists di and yi, dij denotes the jth document in di, and yij \u2208 Y denotes the rank of document dij.\nA feature vector xij = \u03a8(qi, dij) \u2208 X is created from each query-document pair (qi, dij), i =
1, 2, \u00b7 \u00b7 \u00b7 , m; j = 1, 2, \u00b7 \u00b7 \u00b7 , n(qi).\nThus, the training set can be represented as S = {(qi, di, yi)}, i = 1, \u00b7 \u00b7 \u00b7 , m.\nThe objective of learning is to create a ranking function f : X \u2192 \u211d, such that for each query the elements in its corresponding document list can be assigned relevance scores using the function and then be ranked according to the scores.\nSpecifically, we create a permutation of integers \u03c0(qi, di, f) for query qi, the corresponding list of documents di, and the ranking function f. Let di = {di1, di2, \u00b7 \u00b7 \u00b7 , di,n(qi)} be identified by the list of integers {1, 2, \u00b7 \u00b7 \u00b7 , n(qi)}; then permutation \u03c0(qi, di, f) is defined as a bijection from {1, 2, \u00b7 \u00b7 \u00b7 , n(qi)} to itself.\nWe use \u03c0(j) to denote the position of item j (i.e., dij).\nThe learning process turns out to be that of minimizing the loss function which represents the disagreement between the permutation \u03c0(qi, di, f) and the list of ranks yi, for all of the queries.\nTable 1: Notations and explanations.\nqi \u2208 Q: ith query.\ndi = {di1, di2, \u00b7 \u00b7 \u00b7 , di,n(qi)}: list of documents for qi.\nyij \u2208 {r1, r2, \u00b7 \u00b7 \u00b7 , r\u2113}: rank of dij w.r.t. qi.\nyi = {yi1, yi2, \u00b7 \u00b7 \u00b7 , yi,n(qi)}: list of ranks for qi.\nS = {(qi, di, yi)}, i = 1, \u00b7 \u00b7 \u00b7 , m: training set.\nxij = \u03a8(qi, dij) \u2208 X: feature vector for (qi, dij).\nf(xij) \u2208 \u211d: ranking model.\n\u03c0(qi, di, f): permutation for qi, di, and f.\nht(xij) \u2208 \u211d: tth weak ranker.\nE(\u03c0(qi, di, f), yi) \u2208 [\u22121, +1]: performance measure function.\nIn the paper, we define the ranking model as a linear combination of weak rankers: f(x) = \u03a3_{t=1}^{T} \u03b1t ht(x), where ht(x) is a weak ranker, \u03b1t is its weight, and T is the number of weak rankers.\nIn information retrieval, query-based performance measures are used to evaluate the `goodness'' of a ranking function.\nBy query-based measure, we mean a measure defined over a ranking list of documents with respect to a query.\nThese measures include MAP, NDCG, MRR (Mean Reciprocal Rank), WTA (Winners Take All), and Precision@n [1, 15].\nWe utilize a general function E(\u03c0(qi, di, f), yi) \u2208 [\u22121, +1] to represent the performance measures.\nThe first argument of E is the permutation \u03c0 created using the ranking function f on di.\nThe second argument is the list of ranks yi given by humans.\nE measures the agreement between \u03c0 and yi.\nTable 1 gives a summary of the notations described above.\nNext, as examples of performance measures, we present the definitions of MAP and NDCG.\nGiven a query qi, the corresponding list of ranks yi, and a permutation \u03c0i on di, average precision for qi is defined as:\nAvgPi = ( \u03a3_{j=1}^{n(qi)} Pi(j) \u00b7 yij ) \/ ( \u03a3_{j=1}^{n(qi)} yij ),   (1)\nwhere yij takes on 1 and 0 as values, representing relevant or irrelevant, and Pi(j) is defined as the precision at the position of dij:\nPi(j) = ( \u03a3_{k: \u03c0i(k) \u2264 \u03c0i(j)} yik ) \/ \u03c0i(j),   (2)\nwhere \u03c0i(j) denotes the position of dij.
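A quick sanity check of definitions (1) and (2) can be sketched in Python; this is an illustration only (the function and argument names are not from the paper), with binary labels and positions[j] playing the role of pi_i(j), 1-based:

```python
def average_precision(positions, labels):
    """Average precision for one query, following Eqs. (1) and (2).

    positions[j]: rank position pi_i(j) assigned to document j (1-based).
    labels[j]: y_ij in {0, 1}, i.e., relevant or irrelevant.
    """
    n = len(labels)

    def precision_at(j):
        # Eq. (2): number of relevant documents ranked at or above the
        # position of document j, divided by that position.
        return sum(labels[k] for k in range(n)
                   if positions[k] <= positions[j]) / positions[j]

    relevant = sum(labels)
    if relevant == 0:
        return 0.0
    # Eq. (1): average the precision values taken at the relevant documents.
    return sum(precision_at(j) * labels[j] for j in range(n)) / relevant
```

For example, three documents ranked at positions 1, 2, 3 with labels 1, 0, 1 give AvgP = (1/1 + 2/3) / 2 = 5/6.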
Given a query qi, the list of ranks yi, and a permutation \u03c0i on di, NDCG at position m for qi is defined as:\nNi = ni \u00b7 \u03a3_{j: \u03c0i(j) \u2264 m} (2^{yij} \u2212 1) \/ log(1 + \u03c0i(j)),   (3)\nwhere yij takes on ranks as values and ni is a normalization constant.\nni is chosen so that a perfect ranking \u03c0\u2217i's NDCG score at position m is 1.\n3.2 Algorithm\nInspired by the AdaBoost algorithm for classification, we have devised a novel algorithm which can optimize a loss function based on the IR performance measures.\nThe algorithm is referred to as `AdaRank'' and is shown in Figure 1.\nAdaRank takes a training set S = {(qi, di, yi)}, i = 1, \u00b7 \u00b7 \u00b7 , m, as input and takes the performance measure function E and the number of iterations T as parameters.\nAdaRank runs T rounds and at each round it creates a weak ranker ht (t = 1, \u00b7 \u00b7 \u00b7 , T).\nFinally, it outputs a ranking model f by linearly combining the weak rankers.\nFigure 1: The AdaRank algorithm.\nInput: S = {(qi, di, yi)}, i = 1, \u00b7 \u00b7 \u00b7 , m, and parameters E and T.\nInitialize P1(i) = 1\/m.\nFor t = 1, \u00b7 \u00b7 \u00b7 , T:\n\u2022 Create weak ranker ht with weighted distribution Pt on training data S.\n\u2022 Choose \u03b1t:\n\u03b1t = (1\/2) \u00b7 ln [ \u03a3_{i=1}^{m} Pt(i){1 + E(\u03c0(qi, di, ht), yi)} \/ \u03a3_{i=1}^{m} Pt(i){1 \u2212 E(\u03c0(qi, di, ht), yi)} ].\n\u2022 Create ft: ft(x) = \u03a3_{k=1}^{t} \u03b1k hk(x).\n\u2022 Update Pt+1:\nPt+1(i) = exp{\u2212E(\u03c0(qi, di, ft), yi)} \/ \u03a3_{j=1}^{m} exp{\u2212E(\u03c0(qj, dj, ft), yj)}.\nEnd For.\nOutput ranking model: f(x) = fT(x).\nAt each round, AdaRank maintains a distribution of weights over the queries in the training data.\nWe denote the distribution of weights at round t as Pt and the weight on the ith training query qi at round t as Pt(i).\nInitially, AdaRank sets equal weights to the queries.\nAt each round, it increases the weights of those queries that are not ranked well by ft, the model created so far.\nAs a result, the learning at the next round will be focused on the creation of a weak ranker
At each round, a weak ranker $h_t$ is constructed based on training data with weight distribution $P_t$.
The goodness of a weak ranker is measured by the performance measure $E$ weighted by $P_t$:
$$\sum_{i=1}^{m} P_t(i) E(\pi(q_i, d_i, h_t), y_i).$$
Several methods for weak ranker construction can be considered.
For example, a weak ranker can be created by using a subset of queries (together with their document lists and label lists) sampled according to the distribution $P_t$.
In this paper, we use single features as weak rankers, as will be explained in Section 3.6.
Once a weak ranker $h_t$ is built, AdaRank chooses a weight $\alpha_t > 0$ for it.
Intuitively, $\alpha_t$ measures the importance of $h_t$.
A ranking model $f_t$ is created at each round by linearly combining the weak rankers constructed so far, $h_1, \cdots, h_t$, with weights $\alpha_1, \cdots, \alpha_t$.
$f_t$ is then used for updating the distribution $P_{t+1}$.

3.3 Theoretical Analysis
The existing learning algorithms for ranking attempt to minimize a loss function based on instance pairs (document pairs).
In contrast, AdaRank tries to optimize a loss function based on queries.
Furthermore, the loss function in AdaRank is defined on the basis of general IR performance measures.
The measures can be MAP, NDCG, WTA, MRR, or any other measure whose range is within $[-1, +1]$.
We next explain why this is the case.
Ideally we want to maximize the ranking accuracy in terms of a performance measure on the training data:
$$\max_{f \in \mathcal{F}} \sum_{i=1}^{m} E(\pi(q_i, d_i, f), y_i), \qquad (4)$$
where $\mathcal{F}$ is the set of possible ranking functions.
This is equivalent to minimizing the loss on the training data:
$$\min_{f \in \mathcal{F}} \sum_{i=1}^{m} (1 - E(\pi(q_i, d_i, f), y_i)). \qquad (5)$$
It is difficult to directly optimize this loss, because $E$ is a non-continuous function and thus may be difficult to handle.
We instead attempt to minimize an upper bound of the loss in (5):
$$\min_{f \in \mathcal{F}} \sum_{i=1}^{m} \exp\{-E(\pi(q_i, d_i, f), y_i)\}, \qquad (6)$$
because $e^{-x} \ge 1 - x$ holds for any $x \in \mathbb{R}$.
We consider the use of a linear combination of weak rankers as our ranking model:
$$f(x) = \sum_{t=1}^{T} \alpha_t h_t(x). \qquad (7)$$
The minimization in (6) then becomes
$$\min_{h_t \in \mathcal{H},\, \alpha_t \in \mathbb{R}^{+}} L(h_t, \alpha_t) = \sum_{i=1}^{m} \exp\{-E(\pi(q_i, d_i, f_{t-1} + \alpha_t h_t), y_i)\}, \qquad (8)$$
where $\mathcal{H}$ is the set of possible weak rankers, $\alpha_t$ is a positive weight, and $(f_{t-1} + \alpha_t h_t)(x) = f_{t-1}(x) + \alpha_t h_t(x)$.
Several ways of computing the coefficients $\alpha_t$ and the weak rankers $h_t$ may be considered.
Following the idea of AdaBoost, in AdaRank we take the approach of forward stage-wise additive modeling [12] and obtain the algorithm in Figure 1.
It can be proved that there exists a lower bound on the ranking accuracy of AdaRank on training data, as presented in Theorem 1.

Theorem 1. The following bound holds on the ranking accuracy of the AdaRank algorithm on training data:
$$\frac{1}{m} \sum_{i=1}^{m} E(\pi(q_i, d_i, f_T), y_i) \ge 1 - \prod_{t=1}^{T} e^{-\delta_{\min}^{t}} \sqrt{1 - \varphi(t)^2},$$
where $\varphi(t) = \sum_{i=1}^{m} P_t(i) E(\pi(q_i, d_i, h_t), y_i)$, $\delta_{\min}^{t} = \min_{i=1,\cdots,m} \delta_i^{t}$, and
$$\delta_i^{t} = E(\pi(q_i, d_i, f_{t-1} + \alpha_t h_t), y_i) - E(\pi(q_i, d_i, f_{t-1}), y_i) - \alpha_t E(\pi(q_i, d_i, h_t), y_i),$$
for all $i = 1, 2, \cdots, m$ and $t = 1, 2, \cdots, T$.
A proof of the theorem can be found in the appendix.
The theorem implies that the ranking accuracy in terms of the performance measure can be continuously improved, as long as $e^{-\delta_{\min}^{t}} \sqrt{1 - \varphi(t)^2} < 1$ holds.

3.4 Advantages
AdaRank is a simple yet powerful method.
More importantly, it is a method that can be justified from the theoretical viewpoint, as discussed above.
In addition, AdaRank has several other advantages when compared with existing learning to rank methods such as Ranking SVM, RankBoost, and RankNet.
First, AdaRank can incorporate any
performance measure, provided that the measure is query-based and in the range of $[-1, +1]$.
Notice that the major IR measures meet this requirement.
In contrast, the existing methods only minimize loss functions that are loosely related to the IR measures [16].
Second, the learning process of AdaRank is more efficient than those of the existing learning algorithms.
The time complexity of AdaRank is of order $O((k+T) \cdot m \cdot n \log n)$, where $k$ denotes the number of features, $T$ the number of rounds, $m$ the number of queries in the training data, and $n$ the maximum number of documents per query in the training data.
The time complexity of RankBoost, for example, is of order $O(T \cdot m \cdot n^2)$ [8].
Third, AdaRank employs a more reasonable framework for performing the ranking task than the existing methods.
Specifically, in AdaRank the instances correspond to queries, while in the existing methods the instances correspond to document pairs.
As a result, AdaRank does not suffer from the following shortcomings that plague the existing methods.
(a) The existing methods have to make the strong assumption that document pairs from the same query are independently distributed; in reality this is clearly not the case, and AdaRank does not need this assumption.
(b) Ranking the most relevant documents at the top of document lists is crucial for document retrieval.
The existing methods cannot focus training on the top of the lists, as indicated in [4].
Several methods for rectifying the problem have been proposed (e.g., [4]); however, they do not seem to solve the problem fundamentally.
In contrast, AdaRank can naturally focus training on the top of document lists, because the performance measures used favor rankings that place relevant documents at the top.
(c) In the existing methods, the number of document pairs varies from query to query, resulting in models biased toward queries with more document pairs, as pointed out in [4].
AdaRank does not
have this drawback, because it treats queries rather than document pairs as the basic units in learning.

3.5 Differences from AdaBoost
AdaRank is a boosting algorithm.
In that sense, it is similar to AdaBoost, but it also has several striking differences from AdaBoost.
First, the types of instances are different.
AdaRank makes use of queries and their corresponding document lists as instances.
The labels in its training data are lists of ranks (relevance levels).
AdaBoost makes use of feature vectors as instances.
The labels in its training data are simply +1 and -1.
Second, the performance measures are different.
In AdaRank, the performance measure is a generic measure, defined on the document list and the rank list of a query.
In AdaBoost, the corresponding performance measure is a specific measure for binary classification, also referred to as margin [25].
Third, the ways of updating weights are also different.
In AdaBoost, the distribution of weights on the training instances is calculated according to the current distribution and the performance of the current weak learner.
In AdaRank, in contrast, it is calculated according to the performance of the ranking model created so far, as shown in Figure 1.
Note that AdaBoost could also adopt the weight-updating method used in AdaRank; for AdaBoost the two methods are equivalent (cf. [12], page 305).
However, this is not true for AdaRank.

3.6 Construction of Weak Ranker
We consider an efficient implementation of weak ranker construction, which is also used in our experiments.
In this implementation, as the weak ranker we choose the feature that has the optimal weighted performance among all of the features:
$$\max_{k} \sum_{i=1}^{m} P_t(i) E(\pi(q_i, d_i, x_k), y_i).$$
Creating weak rankers in this way, the learning process turns out to be that of repeatedly selecting features and linearly combining the selected features.
Note that features which are not selected in the training phase will have a weight of zero.
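Under this scheme, each round's weak ranker is simply the single feature with the best $P_t$-weighted performance. A minimal sketch (our own notation; `E_per_query[k][i]` is assumed to hold the measure, in [-1, +1], of the ranking that feature k alone induces on query i):

```python
def best_feature(P, E_per_query):
    """Return the index k maximizing sum_i P_t(i) * E(pi(q_i, d_i, x_k), y_i)."""
    def weighted_perf(k):
        return sum(p * e for p, e in zip(P, E_per_query[k]))
    return max(range(len(E_per_query)), key=weighted_perf)
```

Note how the choice depends on the current distribution $P_t$: the same feature pool can yield different weak rankers from round to round as the hard queries gain weight.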
4. EXPERIMENTAL RESULTS
We conducted experiments to test the performance of AdaRank using four benchmark datasets: OHSUMED, WSJ, AP, and .Gov.

Table 2: Features used in the experiments on the OHSUMED, WSJ, and AP datasets. $c(w, d)$ represents the frequency of word $w$ in document $d$; $C$ represents the entire collection; $n$ denotes the number of terms in the query; $|\cdot|$ denotes the size function; and $idf(\cdot)$ denotes inverse document frequency.
1. $\sum_{w_i \in q \cap d} \ln(c(w_i, d) + 1)$
2. $\sum_{w_i \in q \cap d} \ln\left(\frac{|C|}{c(w_i, C)} + 1\right)$
3. $\sum_{w_i \in q \cap d} \ln(idf(w_i))$
4. $\sum_{w_i \in q \cap d} \ln\left(\frac{c(w_i, d)}{|d|} + 1\right)$
5. $\sum_{w_i \in q \cap d} \ln\left(\frac{c(w_i, d)}{|d|} \cdot idf(w_i) + 1\right)$
6. $\sum_{w_i \in q \cap d} \ln\left(\frac{c(w_i, d) \cdot |C|}{|d| \cdot c(w_i, C)} + 1\right)$
7. $\ln(\text{BM25 score})$

Figure 2: Ranking accuracies on OHSUMED data.

4.1 Experiment Setting
Ranking SVM [13, 16] and RankBoost [8] were selected as baselines in the experiments, because they are state-of-the-art learning to rank methods.
Furthermore, BM25 [24] was used as a baseline representing the state-of-the-art IR method (we actually used the tool Lemur, available at http://www.lemurproject.com).
For AdaRank, the parameter $T$ was determined automatically during each experiment: when there is no improvement in ranking accuracy in terms of the performance measure, the iteration stops (and $T$ is determined).
As the measure $E$, MAP and NDCG@5 were utilized.
The results for AdaRank using MAP and NDCG@5 as measures in training are denoted as AdaRank.MAP and AdaRank.NDCG, respectively.

4.2 Experiment with OHSUMED Data
In this experiment, we made use of the OHSUMED dataset [14] to test the performance of AdaRank.
The OHSUMED dataset consists of 348,566 documents and 106 queries.
There are in total 16,140 query-document pairs upon which relevance judgments are made.
The relevance judgments are either 'd' (definitely relevant), 'p' (possibly relevant), or 'n' (not relevant).
The data have been used in many experiments in IR, for example [4, 29].
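To make the feature definitions concrete, the first six Table 2 features might be computed as below. This is a hedged sketch with our own function signature: the corpus statistics $c(w, C)$, $|C|$, and $idf$ are assumed precomputed, and the BM25 feature (feature 7) is omitted:

```python
import math

def table2_features(query_terms, doc_terms, coll_freq, coll_size, idf):
    """Features 1-6 of Table 2 for one query-document pair."""
    dl = len(doc_terms)                              # |d|
    f = [0.0] * 6
    for w in set(query_terms):
        tf = doc_terms.count(w)                      # c(w, d)
        if tf == 0:
            continue                                 # sums range over words in both q and d
        f[0] += math.log(tf + 1)
        f[1] += math.log(coll_size / coll_freq[w] + 1)
        f[2] += math.log(idf[w])
        f[3] += math.log(tf / dl + 1)
        f[4] += math.log(tf / dl * idf[w] + 1)
        f[5] += math.log(tf * coll_size / (dl * coll_freq[w]) + 1)
    return f
```

With single features as weak rankers (Section 3.6), each of these six values, plus the BM25 score, can directly serve as a candidate $h_t$.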
As features, we adopted those used in document retrieval [4].
Table 2 shows the features.
For example, tf (term frequency), idf (inverse document frequency), dl (document length), and combinations thereof are defined as features.
The BM25 score itself is also a feature.
Stop words were removed and stemming was conducted on the data.
We randomly divided the queries into four even subsets and conducted 4-fold cross-validation experiments.
We tuned the parameters for BM25 during one of the trials and applied them to the other trials.
The results reported in Figure 2 are those averaged over the four trials.
In the MAP calculation, we define the rank 'd' as relevant and the other two ranks as irrelevant.
From Figure 2, we see that both AdaRank.MAP and AdaRank.NDCG outperform BM25, Ranking SVM, and RankBoost in terms of all measures.
We conducted significance tests (t-tests) on the improvements of AdaRank.MAP over BM25, Ranking SVM, and RankBoost in terms of MAP.
The results indicate that all the improvements are statistically significant (p-value < 0.05).
We also conducted t-tests on the improvements of AdaRank.NDCG over BM25, Ranking SVM, and RankBoost in terms of NDCG@5.
The improvements are also statistically significant.

4.3 Experiment with WSJ and AP Data
In this experiment, we made use of the WSJ and AP datasets from the TREC ad-hoc retrieval track to test the performance of AdaRank.
WSJ contains 74,520 articles of the Wall Street Journal from 1990 to 1992, and AP contains 158,240 articles of the Associated Press in 1988 and 1990.
200 queries were selected from the TREC topics (No. 101 to No. 300).

Table 3: Statistics on the WSJ and AP datasets.
Dataset | # queries | # retrieved docs | # docs per query
AP | 116 | 24,727 | 213.16
WSJ | 126 | 40,230 | 319.29

Figure 3: Ranking accuracies on the WSJ dataset.

Each query has a number of documents
associated and they are labeled as 'relevant' or 'irrelevant' (to the query).
Following the practice in [28], the queries that have fewer than 10 relevant documents were discarded.
Table 3 shows the statistics on the two datasets.
In the same way as in Section 4.2, we adopted the features listed in Table 2 for ranking.
We also conducted 4-fold cross-validation experiments.
The results reported in Figures 3 and 4 are those averaged over the four trials on the WSJ and AP datasets, respectively.
From Figures 3 and 4, we can see that AdaRank.MAP and AdaRank.NDCG outperform BM25, Ranking SVM, and RankBoost in terms of all measures on both WSJ and AP.
We conducted t-tests on the improvements of AdaRank.MAP and AdaRank.NDCG over BM25, Ranking SVM, and RankBoost on WSJ and AP.
The results indicate that all the improvements in terms of MAP are statistically significant (p-value < 0.05).
However, only some of the improvements in terms of NDCG@5 are statistically significant, although overall the improvements in NDCG scores are quite high (1-2 points).

4.4 Experiment with .Gov Data
In this experiment, we further made use of the TREC .Gov data to test the performance of AdaRank on the task of web retrieval.
The corpus is a crawl of the .gov domain from early 2002 and has been used in the TREC Web Track since 2002.
There are a total of 1,053,110 web pages with 11,164,829 hyperlinks in the data.
The 50 queries of the topic distillation task in the Web Track of TREC 2003 [6] were used.

Figure 4: Ranking accuracies on the AP dataset.
Figure 5: Ranking accuracies on the .Gov dataset.

Table 4: Features used in the experiments on the .Gov dataset.
1. BM25 [24]
2. MSRA1000 [27]
3. PageRank [21]
4. HostRank [30]
5. Relevance Propagation [23] (10 features)

The ground truths
for the queries are provided by the TREC committee with binary judgments: relevant or irrelevant.
The number of relevant pages varies from query to query (from 1 to 86).
We extracted 14 features from each query-document pair.
Table 4 gives a list of the features.
They are the outputs of several well-known algorithms (systems).
These features are different from those in Table 2, because the task is different.
Again, we conducted 4-fold cross-validation experiments.
The results averaged over the four trials are reported in Figure 5.
From the results, we can see that AdaRank.MAP and AdaRank.NDCG outperform all the baselines in terms of all measures.
We conducted t-tests on the improvements of AdaRank.MAP and AdaRank.NDCG over BM25, Ranking SVM, and RankBoost.
Some of the improvements are not statistically significant.
This is because only 50 queries were used in the experiments; the number of queries is too small.

4.5 Discussions
We investigated the reasons that AdaRank outperforms the baseline methods, using the results on the OHSUMED dataset as examples.
First, we examined why AdaRank achieves higher performance than Ranking SVM and RankBoost.
Specifically, we compared the error rates on different types of rank pairs made by Ranking SVM, RankBoost, AdaRank.MAP, and AdaRank.NDCG on the test data.
The results averaged over the four trials of the 4-fold cross-validation are shown in Figure 6.

Figure 6: Accuracy on ranking document pairs with the OHSUMED dataset.
Figure 7: Distribution of queries with different numbers of document pairs in the training data of trial 1.

We use 'd-n' to stand for the pairs between 'definitely relevant' and 'not relevant', 'd-p' for the pairs between 'definitely relevant' and 'partially relevant', and 'p-n' for the pairs between 'partially relevant' and 'not relevant'.
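The Figure 6 analysis can be reproduced by bucketing document pairs by their label pair and scoring each bucket separately; a small sketch under our own conventions (per-document `scores` from a ranking model and `labels` drawn from {'d', 'p', 'n'}):

```python
from itertools import combinations

def pair_accuracy(scores, labels, order=('d', 'p', 'n')):
    """Fraction of correctly ordered pairs for each pair type ('d-n', 'd-p', 'p-n')."""
    rank = {g: r for r, g in enumerate(order)}       # 'd' should outrank 'p' and 'n'
    stats = {}
    for i, j in combinations(range(len(labels)), 2):
        if labels[i] == labels[j]:
            continue                                 # same-label pairs carry no order
        hi, lo = (i, j) if rank[labels[i]] < rank[labels[j]] else (j, i)
        key = labels[hi] + '-' + labels[lo]
        correct, total = stats.get(key, (0, 0))
        stats[key] = (correct + (scores[hi] > scores[lo]), total + 1)
    return {k: c / t for k, (c, t) in stats.items()}
```

A perfect model scores 1.0 in every bucket; the buckets involving 'd' ('d-n' and 'd-p') are the ones that matter most for the top of the ranking.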
From Figure 6, we can see that AdaRank.MAP and AdaRank.NDCG make fewer errors on the 'd-n' and 'd-p' pairs, which are related to the tops of rankings and are important.
This is because AdaRank.MAP and AdaRank.NDCG can naturally focus the training on the tops by optimizing MAP and NDCG@5, respectively.
We also computed statistics on the number of document pairs per query in the training data (for trial 1).
The queries are clustered into different groups based on the number of their associated document pairs.
Figure 7 shows the distribution of the query groups.
In the figure, for example, '0-1k' is the group of queries whose numbers of document pairs are between 0 and 999.
We can see that the numbers of document pairs really vary from query to query.
Next, we evaluated the accuracies of AdaRank.MAP and RankBoost in terms of MAP for each query group.
The results are reported in Figure 8.
We found that the average MAP of AdaRank.MAP over the groups is two points higher than that of RankBoost.
Furthermore, it is interesting to see that AdaRank.MAP performs particularly better than RankBoost for queries with small numbers of document pairs (e.g., '0-1k', '1k-2k', and '2k-3k').
The results indicate that AdaRank.MAP can effectively avoid creating a model biased towards queries with more document pairs.
For AdaRank.NDCG, similar results can be observed.

Figure 8: Differences in MAP for different query groups.
Figure 9: MAP on the training set when the model is trained with MAP or NDCG@5.

We further conducted an experiment to see whether AdaRank has the ability to improve the ranking accuracy in terms of a measure by using that measure in training.
Specifically, we trained ranking models using AdaRank.MAP and AdaRank.NDCG and evaluated their accuracies on the training dataset in terms of both MAP and NDCG@5.
The
experiment was conducted for each trial.
Figure 9 and Figure 10 show the results in terms of MAP and NDCG@5, respectively.
We can see that AdaRank.MAP trained with MAP performs better in terms of MAP, while AdaRank.NDCG trained with NDCG@5 performs better in terms of NDCG@5.
The results indicate that AdaRank can indeed enhance ranking performance in terms of a measure by using that measure in training.
Finally, we tried to verify the correctness of Theorem 1, namely, that the ranking accuracy in terms of the performance measure can be continuously improved as long as $e^{-\delta_{\min}^{t}} \sqrt{1 - \varphi(t)^2} < 1$ holds.
As an example, Figure 11 shows the learning curve of AdaRank.MAP in terms of MAP during the training phase in one trial of the cross-validation.
From the figure, we can see that the ranking accuracy of AdaRank.MAP steadily improves as the training goes on, until it reaches its peak.
The result agrees well with Theorem 1.

5. CONCLUSION AND FUTURE WORK
In this paper we have proposed a novel algorithm for learning ranking models in document retrieval, referred to as AdaRank.
In contrast to existing methods, AdaRank optimizes a loss function that is directly defined on the performance measures.
It employs a boosting technique in ranking model learning.
AdaRank offers several advantages: ease of implementation, theoretical soundness, efficiency in training, and high accuracy in ranking.
Experimental results based on four benchmark datasets show that AdaRank can significantly outperform the baseline methods of BM25, Ranking SVM, and RankBoost.

Figure 10: NDCG@5 on the training set when the model is trained with MAP or NDCG@5.
Figure 11: Learning curve of AdaRank.

Future work includes theoretical analysis of the generalization error and other properties of the AdaRank
algorithm, and further empirical evaluations of the algorithm, including comparisons with other algorithms that can directly optimize performance measures.

6. ACKNOWLEDGMENTS
We thank Harry Shum, Wei-Ying Ma, Tie-Yan Liu, Gu Xu, Bin Gao, Robert Schapire, and Andrew Arnold for their valuable comments and suggestions on this paper.

7. REFERENCES
[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison Wesley, May 1999.
[2] C. Burges, R. Ragno, and Q. Le. Learning to rank with nonsmooth cost functions. In Advances in Neural Information Processing Systems 18, pages 395-402. MIT Press, Cambridge, MA, 2006.
[3] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In ICML 22, pages 89-96, 2005.
[4] Y. Cao, J. Xu, T.-Y. Liu, H. Li, Y. Huang, and H.-W. Hon. Adapting ranking SVM to document retrieval. In SIGIR 29, pages 186-193, 2006.
[5] D. Cossock and T. Zhang. Subset ranking using regression. In COLT, pages 605-619, 2006.
[6] N. Craswell, D. Hawking, R. Wilkinson, and M. Wu. Overview of the TREC 2003 web track. In TREC, pages 78-92, 2003.
[7] N. Duffy and D. Helmbold. Boosting methods for regression. Mach. Learn., 47(2-3):153-200, 2002.
[8] Y. Freund, R. D. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933-969, 2003.
[9] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci., 55(1):119-139, 1997.
[10] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337-374, 2000.
[11] G. Fung, R. Rosales, and B. Krishnapuram. Learning rankings via convex hull separation. In Advances in Neural Information Processing Systems 18, pages 395-402. MIT Press, Cambridge, MA, 2006.
[12] T. Hastie, R.
Tibshirani, and J. H. Friedman. The Elements of Statistical Learning. Springer, August 2001.
[13] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. MIT Press, Cambridge, MA, 2000.
[14] W. Hersh, C. Buckley, T. J. Leone, and D. Hickam. OHSUMED: an interactive retrieval evaluation and new large test collection for research. In SIGIR, pages 192-201, 1994.
[15] K. Jarvelin and J. Kekalainen. IR evaluation methods for retrieving highly relevant documents. In SIGIR 23, pages 41-48, 2000.
[16] T. Joachims. Optimizing search engines using clickthrough data. In SIGKDD 8, pages 133-142, 2002.
[17] T. Joachims. A support vector method for multivariate performance measures. In ICML 22, pages 377-384, 2005.
[18] J. Lafferty and C. Zhai. Document language models, query models, and risk minimization for information retrieval. In SIGIR 24, pages 111-119, 2001.
[19] D. A. Metzler, W. B. Croft, and A. McCallum. Direct maximization of rank-based metrics for information retrieval. Technical report, CIIR, 2005.
[20] R. Nallapati. Discriminative models for information retrieval. In SIGIR 27, pages 64-71, 2004.
[21] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical report, Stanford Digital Library Technologies Project, 1998.
[22] J. M. Ponte and W. B. Croft. A language modeling approach to information retrieval. In SIGIR 21, pages 275-281, 1998.
[23] T. Qin, T.-Y. Liu, X.-D. Zhang, Z. Chen, and W.-Y. Ma. A study of relevance propagation for web search. In SIGIR 28, pages 408-415, 2005.
[24] S. E. Robertson and D. A. Hull. The TREC-9 filtering track final report. In TREC, pages 25-40, 2000.
[25] R. E. Schapire, Y. Freund, P. Barlett, and W. S. Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. In ICML 14, pages 322-330, 1997.
[26] R. E. Schapire and Y.
Singer. Improved boosting algorithms using confidence-rated predictions. Mach. Learn., 37(3):297-336, 1999.
[27] R. Song, J. Wen, S. Shi, G. Xin, T.-Y. Liu, T. Qin, X. Zheng, J. Zhang, G. Xue, and W.-Y. Ma. Microsoft Research Asia at web track and terabyte track of TREC 2004. In TREC, 2004.
[28] A. Trotman. Learning to rank. Inf. Retr., 8(3):359-381, 2005.
[29] J. Xu, Y. Cao, H. Li, and Y. Huang. Cost-sensitive learning of SVM for ranking. In ECML, pages 833-840, 2006.
[30] G.-R. Xue, Q. Yang, H.-J. Zeng, Y. Yu, and Z. Chen. Exploiting the hierarchical structure for link analysis. In SIGIR 28, pages 186-193, 2005.
[31] H. Yu. SVM selective sampling for ranking with application to data retrieval. In SIGKDD 11, pages 354-363, 2005.

APPENDIX
Here we give the proof of Theorem 1.

Proof. Set $Z_T = \sum_{i=1}^{m} \exp\{-E(\pi(q_i, d_i, f_T), y_i)\}$ and $\phi(t) = \frac{1}{2}(1 + \varphi(t))$.
According to the definition of $\alpha_t$, we know that $e^{\alpha_t} = \sqrt{\frac{\phi(t)}{1 - \phi(t)}}$.
$$Z_T = \sum_{i=1}^{m} \exp\{-E(\pi(q_i, d_i, f_{T-1} + \alpha_T h_T), y_i)\}$$
$$= \sum_{i=1}^{m} \exp\{-E(\pi(q_i, d_i, f_{T-1}), y_i) - \alpha_T E(\pi(q_i, d_i, h_T), y_i) - \delta_i^T\}$$
$$\le \sum_{i=1}^{m} \exp\{-E(\pi(q_i, d_i, f_{T-1}), y_i)\} \exp\{-\alpha_T E(\pi(q_i, d_i, h_T), y_i)\}\, e^{-\delta_{\min}^{T}}$$
$$= e^{-\delta_{\min}^{T}} Z_{T-1} \sum_{i=1}^{m} \frac{\exp\{-E(\pi(q_i, d_i, f_{T-1}), y_i)\}}{Z_{T-1}} \exp\{-\alpha_T E(\pi(q_i, d_i, h_T), y_i)\}$$
$$= e^{-\delta_{\min}^{T}} Z_{T-1} \sum_{i=1}^{m} P_T(i) \exp\{-\alpha_T E(\pi(q_i, d_i, h_T), y_i)\}.$$
Moreover, if $E(\pi(q_i, d_i, h_T), y_i) \in [-1, +1]$, then
$$Z_T \le e^{-\delta_{\min}^{T}} Z_{T-1} \sum_{i=1}^{m} P_T(i) \left[\frac{1 + E(\pi(q_i, d_i, h_T), y_i)}{2} e^{-\alpha_T} + \frac{1 - E(\pi(q_i, d_i, h_T), y_i)}{2} e^{\alpha_T}\right]$$
$$= e^{-\delta_{\min}^{T}} Z_{T-1} \left[\phi(T) \sqrt{\frac{1 - \phi(T)}{\phi(T)}} + (1 - \phi(T)) \sqrt{\frac{\phi(T)}{1 - \phi(T)}}\right]$$
$$= Z_{T-1}\, e^{-\delta_{\min}^{T}} \sqrt{4\phi(T)(1 - \phi(T))}$$
$$\le Z_{T-2} \prod_{t=T-1}^{T} e^{-\delta_{\min}^{t}} \sqrt{4\phi(t)(1 - \phi(t))}$$
$$\le Z_1 \prod_{t=2}^{T} e^{-\delta_{\min}^{t}} \sqrt{4\phi(t)(1 - \phi(t))}$$
$$= m \sum_{i=1}^{m} \frac{1}{m} \exp\{-E(\pi(q_i, d_i, \alpha_1 h_1), y_i)\} \prod_{t=2}^{T} e^{-\delta_{\min}^{t}} \sqrt{4\phi(t)(1 - \phi(t))}$$
$$= m \sum_{i=1}^{m} \frac{1}{m} \exp\{-\alpha_1 E(\pi(q_i, d_i, h_1), y_i) - \delta_i^1\} \prod_{t=2}^{T} e^{-\delta_{\min}^{t}} \sqrt{4\phi(t)(1 - \phi(t))}$$
$$\le m\, e^{-\delta_{\min}^{1}} \sum_{i=1}^{m} \frac{1}{m} \exp\{-\alpha_1 E(\pi(q_i, d_i, h_1), y_i)\} \prod_{t=2}^{T} e^{-\delta_{\min}^{t}} \sqrt{4\phi(t)(1 - \phi(t))}$$
$$\le m\, e^{-\delta_{\min}^{1}} \sqrt{4\phi(1)(1 - \phi(1))} \prod_{t=2}^{T} e^{-\delta_{\min}^{t}} \sqrt{4\phi(t)(1 - \phi(t))}$$
$$= m \prod_{t=1}^{T} e^{-\delta_{\min}^{t}} \sqrt{1 - \varphi(t)^2}.$$
Therefore,
$$\frac{1}{m} \sum_{i=1}^{m} E(\pi(q_i, d_i, f_T), y_i) \ge \frac{1}{m} \sum_{i=1}^{m} \{1 - \exp(-E(\pi(q_i, d_i, f_T), y_i))\} \ge 1 - \prod_{t=1}^{T} e^{-\delta_{\min}^{t}} \sqrt{1 - \varphi(t)^2}.$$

AdaRank: A Boosting Algorithm for Information Retrieval

ABSTRACT
In this paper we address the issue of learning to rank for document retrieval.
In the task, a model is automatically created with some training data and then is utilized for ranking of documents.
The goodness of a model is usually evaluated with performance measures such as MAP (Mean Average Precision) and NDCG (Normalized Discounted Cumulative Gain).
Ideally a learning algorithm would train a ranking model that could directly optimize the performance measures with respect to the training data.
Existing methods, however, are only able to train ranking models by minimizing loss functions loosely related to the performance measures.
For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs.
To deal with the problem, we propose a novel learning algorithm within the framework of boosting, which can minimize a loss function directly defined on the performance measures.
Our algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of re-weighted training data and finally linearly combines the weak rankers for making ranking
predictions.
We prove that the training process of AdaRank is exactly that of enhancing the performance measure used.
Experimental results on four benchmark datasets show that AdaRank significantly outperforms the baseline methods of BM25, Ranking SVM, and RankBoost.

1. INTRODUCTION
Recently 'learning to rank' has gained increasing attention in both the fields of information retrieval and machine learning.
When applied to document retrieval, learning to rank becomes the following task.
In training, a ranking model is constructed with data consisting of queries, their corresponding retrieved documents, and relevance levels given by humans.
In ranking, given a new query, the corresponding retrieved documents are sorted by using the trained ranking model.
In document retrieval, ranking results are usually evaluated in terms of performance measures such as MAP (Mean Average Precision) [1] and NDCG (Normalized Discounted Cumulative Gain) [15].
Ideally, the ranking function is created so that the accuracy of ranking, in terms of one of the measures with respect to the training data, is maximized.
Several methods for learning to rank have been developed and applied to document retrieval.
For example, Herbrich et al. [13] propose a learning algorithm for ranking on the basis of Support Vector Machines, called Ranking SVM.
Freund et al.
[8] take a similar approach and perform the learning by using boosting, referred to as RankBoost.
All the existing methods used for document retrieval [2, 3, 8, 13, 16, 20] are designed to optimize loss functions loosely related to the IR performance measures, not loss functions directly based on the measures.
For example, Ranking SVM and RankBoost train ranking models by minimizing classification errors on instance pairs.
In this paper, we aim to develop a new learning algorithm that can directly optimize any performance measure used in document retrieval.
Inspired by the work on AdaBoost for classification [9], we propose to develop a boosting algorithm for information retrieval, referred to as AdaRank.
AdaRank utilizes a linear combination of 'weak rankers' as its model.
In learning, it repeats the process of re-weighting the training sample, creating a weak ranker, and calculating a weight for the ranker.
We show that the AdaRank algorithm can iteratively optimize an exponential loss function based on any of the IR performance measures.
A lower bound on the performance on the training data is given, which indicates that the ranking accuracy in terms of the performance measure can be continuously improved during the training process.
AdaRank offers several advantages: ease of implementation, theoretical soundness, efficiency in training, and high accuracy in ranking.
Experimental results indicate that AdaRank can outperform the baseline methods of BM25, Ranking SVM, and RankBoost on four benchmark datasets: OHSUMED, WSJ, AP, and .Gov.
Tuning ranking models using certain training data and a performance measure is a common practice in IR [1].
As the number of features in the ranking model gets larger and the amount of training data grows, the tuning becomes harder.
From the viewpoint of IR, AdaRank can be viewed as a machine learning method for ranking model tuning.
Recently, direct optimization of performance measures in learning has become a hot research topic.
Several methods for classification [17] and ranking [5, 19] have been proposed.
AdaRank can be viewed as a machine learning method for direct optimization of performance measures, based on a different approach.
The rest of the paper is organized as follows.
After a summary of related work in Section 2, we describe the proposed AdaRank algorithm in detail in Section 3.
Experimental results and discussions are given in Section 4.
Section 5 concludes this paper and gives future work.

2. RELATED WORK
2.1 Information Retrieval
The key problem for document retrieval is ranking, specifically, how to create a ranking model (function) that can sort documents based on their relevance to a given query.
It is a common practice in IR to tune the parameters of a ranking model using some labeled data and one performance measure [1].
For example, the state-of-the-art methods of BM25 [24] and LMIR (Language Models for Information Retrieval) [18, 22] all have parameters to tune.
As ranking models become more sophisticated (more features are used) and more labeled data become available, how to tune or train ranking models turns out to be a challenging issue.
Recently, methods of 'learning to rank' have been applied to ranking model construction and some promising results have been obtained.
For example, Joachims [16] applies Ranking SVM to document retrieval.
He utilizes click-through data to deduce training data for the model creation.
Cao et al.
[4] adapt Ranking SVM to document retrieval by modifying the Hinge Loss function to better meet the requirements of IR. Specifically, they introduce a Hinge Loss function that heavily penalizes errors at the top of ranking lists and errors from queries with fewer retrieved documents. Burges et al. [3] employ Relative Entropy as the loss function and Gradient Descent as the algorithm to train a Neural Network model for ranking in document retrieval; the method is referred to as RankNet.

2.2 Machine Learning
Three topics in machine learning are related to our current work: "learning to rank", boosting, and direct optimization of performance measures.
Learning to rank aims to automatically create a ranking function that assigns scores to instances and then ranks the instances by those scores. Several approaches have been proposed to tackle the problem. One major approach transforms learning to rank into binary classification on instance pairs. This "pair-wise" approach fits well with information retrieval and thus is widely used in IR. Typical methods of this approach include Ranking SVM [13], RankBoost [8], and RankNet [3]. For other approaches to learning to rank, refer to [2, 11, 31].
In the pair-wise approach to ranking, the learning task is formalized as a problem of classifying instance pairs into two categories (correctly ranked and incorrectly ranked). In fact, it is known that reducing classification errors on instance pairs is equivalent to maximizing a lower bound of MAP [16]. In that sense, the existing methods of Ranking SVM, RankBoost, and RankNet are only able to minimize loss functions that are loosely related to the IR performance measures.
Boosting is a general technique for improving the accuracy of machine learning algorithms. The basic idea of boosting is to repeatedly construct "weak learners" by re-weighting training data and to form an ensemble of weak learners such that the total
performance of the ensemble is "boosted". Freund and Schapire proposed the first well-known boosting algorithm, AdaBoost (Adaptive Boosting) [9], which is designed for binary classification (0-1 prediction). Later, Schapire and Singer introduced a generalized version of AdaBoost in which weak learners can give confidence scores for their predictions rather than make 0-1 decisions [26]. Extensions have been made to deal with the problems of multi-class classification [10, 26], regression [7], and ranking [8]. In fact, AdaBoost is an algorithm that ingeniously constructs a linear model by minimizing the "exponential loss function" with respect to the training data [26]. Our work in this paper can be viewed as a boosting method developed for ranking, particularly for ranking in IR.
Recently, a number of authors have proposed directly optimizing multivariate performance measures in learning. For instance, Joachims [17] presents an SVM method to directly optimize nonlinear multivariate performance measures such as the F1 measure for classification. Cossock & Zhang [5] find a way to approximately optimize the ranking performance measure DCG [15]. Metzler et al.
[19] also propose a method of directly maximizing rank-based metrics for ranking, on the basis of manifold learning.
AdaRank also attempts to directly optimize multivariate performance measures, but it is based on a different approach. AdaRank is unique in that it employs an exponential loss function based on IR performance measures, together with a boosting technique.

5. CONCLUSION AND FUTURE WORK
In this paper we have proposed a novel algorithm for learning ranking models in document retrieval, referred to as AdaRank. In contrast to existing methods, AdaRank optimizes a loss function that is directly defined on the performance measures, and it employs a boosting technique in ranking model learning. AdaRank offers several advantages: ease of implementation, theoretical soundness, efficiency in training, and high accuracy in ranking. Experimental results based on four benchmark datasets show that AdaRank can significantly outperform the baseline methods of BM25, Ranking SVM, and RankBoost.
Figure 10: NDCG@5 on the training set when the model is trained with MAP or NDCG@5.
Figure 11: Learning curve of AdaRank.
Future work includes theoretical analysis of the generalization error and other properties of the AdaRank algorithm, and further empirical evaluation of the algorithm, including comparisons with other algorithms that can directly optimize performance measures.

3. OUR METHOD: ADARANK
3.1 General Framework
We first describe the general framework of learning to rank for document retrieval. In retrieval (testing), given a query, the system returns a ranked list of documents in descending order of their relevance scores. The relevance scores are calculated with a ranking function (model). In learning (training), a number of queries and their corresponding retrieved documents are given. Furthermore, the relevance levels of the documents with respect to the queries are also provided. The relevance levels are represented as ranks (i.e., categories in a total order). The objective of learning is to construct a ranking function that achieves the best result on the training data in the sense of minimizing a loss function. Ideally, the loss function is defined on the basis of the performance measure used in testing.
Suppose that $Y = \{r_1, r_2, \cdots, r_\ell\}$ is a set of ranks, where $\ell$ denotes the number of ranks. There exists a total order between the ranks, $r_\ell \succ r_{\ell-1} \succ \cdots \succ r_1$, where $\succ$ denotes a preference relationship. In training, a set of queries $Q = \{q_1, q_2, \cdots, q_m\}$ is given. Each query $q_i$ is associated with a list of retrieved documents $d_i = \{d_{i1}, d_{i2}, \cdots, d_{i,n(q_i)}\}$ and a list of labels $y_i = \{y_{i1}, y_{i2}, \cdots, y_{i,n(q_i)}\}$, where $n(q_i)$ denotes the sizes of the lists $d_i$ and $y_i$, $d_{ij}$ denotes the $j$th document in $d_i$, and $y_{ij} \in Y$ denotes the rank of document $d_{ij}$. A feature vector $\vec{x}_{ij} = \Psi(q_i, d_{ij}) \in X$ is created from each query-document pair $(q_i, d_{ij})$, $i = 1, 2,$
$\cdots, m$; $j = 1, 2, \cdots, n(q_i)$. Thus, the training set can be represented as $S = \{(q_i, d_i, y_i)\}_{i=1}^{m}$.
The objective of learning is to create a ranking function $f: X \mapsto \mathbb{R}$, such that for each query the elements in its corresponding document list can be assigned relevance scores using the function and then be ranked according to the scores. Specifically, we create a permutation of integers $\pi(q_i, d_i, f)$ for query $q_i$, the corresponding list of documents $d_i$, and the ranking function $f$. Let $d_i = \{d_{i1}, d_{i2}, \cdots, d_{i,n(q_i)}\}$ be identified by the list of integers $\{1, 2, \cdots, n(q_i)\}$; then the permutation $\pi(q_i, d_i, f)$ is defined as a bijection from $\{1, 2, \cdots, n(q_i)\}$ to itself. We use $\pi(j)$ to denote the position of item $j$ (i.e., of $d_{ij}$). The learning process turns out to be that of minimizing the loss function representing the disagreement between the permutation $\pi(q_i, d_i, f)$ and the list of ranks $y_i$, over all of the queries.
Table 1: Notations and explanations.
In this paper, we define the ranking model as a linear combination of weak rankers, $f(\vec{x}) = \sum_{t=1}^{T} \alpha_t h_t(\vec{x})$, where $h_t(\vec{x})$ is a weak ranker, $\alpha_t$ is its weight, and $T$ is the number of weak rankers.
In information retrieval, query-based performance measures are used to evaluate the "goodness" of a ranking function. By a query-based measure, we mean a measure defined over a ranking list of documents with respect to a query. These measures include MAP, NDCG, MRR (Mean Reciprocal Rank), WTA (Winners Take All), and Precision@n [1, 15]. We utilize a general function $E(\pi(q_i, d_i, f), y_i) \in [-1, +1]$ to represent the performance measures. The first argument of $E$ is the permutation $\pi$ created using the ranking function $f$ on $d_i$; the second argument is the list of ranks $y_i$ given by humans. $E$ measures the agreement between $\pi$ and $y_i$. Table 1 gives a summary of the notation described above. Next, as examples of
performance measures, we present the definitions of MAP and NDCG.
Given a query $q_i$, the corresponding list of ranks $y_i$, and a permutation $\pi_i$ on $d_i$, average precision for $q_i$ is defined as:
$$AvgP_i = \frac{\sum_{j=1}^{n(q_i)} P_i(j) \cdot y_{ij}}{\sum_{j=1}^{n(q_i)} y_{ij}},$$
where $y_{ij}$ takes on the value 1 or 0, representing relevant or irrelevant, and $P_i(j)$ is the precision at the position of $d_{ij}$:
$$P_i(j) = \frac{\sum_{k:\, \pi_i(k) \le \pi_i(j)} y_{ik}}{\pi_i(j)},$$
where $\pi_i(j)$ denotes the position of $d_{ij}$.
Given a query $q_i$, the list of ranks $y_i$, and a permutation $\pi_i$ on $d_i$, NDCG at position $m$ for $q_i$ is defined as:
$$N_i = n_i \sum_{j:\, \pi_i(j) \le m} \frac{2^{y_{ij}} - 1}{\log(1 + \pi_i(j))},$$
where $y_{ij}$ takes on ranks as values and $n_i$ is a normalization constant. $n_i$ is chosen so that a perfect ranking $\pi_i^*$ has NDCG score 1 at position $m$.

3.2 Algorithm
Inspired by the AdaBoost algorithm for classification, we have devised a novel algorithm that can optimize a loss function based on IR performance measures. The algorithm is referred to as AdaRank and is shown in Figure 1. AdaRank takes a training set $S = \{(q_i, d_i, y_i)\}_{i=1}^{m}$ as input, and takes the performance measure function $E$ and the number of iterations $T$ as parameters. AdaRank runs for $T$ rounds, and at each round it creates a weak ranker $h_t$ ($t = 1, \cdots, T$). Finally, it outputs a ranking model $f$ by linearly combining the weak rankers.
Figure 1: The AdaRank algorithm. At each round: create weak ranker $h_t$ with weighted distribution $P_t$ on training data $S$; choose $\alpha_t$; create $f_t$; update $P_{t+1}$.
At each round, AdaRank maintains a distribution of weights over the queries in the training data. We denote the distribution of weights at round $t$ as $P_t$, and the weight on the $i$th training query $q_i$ at round $t$ as $P_t(i)$. Initially, AdaRank sets equal weights on the queries. At each round, it increases the weights of those queries that are not ranked well by $f_t$, the model created so far. As a result, the learning at the next round will focus on creating a weak ranker that can improve the ranking of those "hard" queries.
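The two measures above can be computed directly from their definitions. A minimal sketch (not the evaluation code used in the experiments): average precision over binary labels, and NDCG over graded integer ranks, using the common base-2 logarithm convention; the paper's normalization constant $n_i$ corresponds to `1 / idcg` below.

```python
import math

def average_precision(pi, y):
    """pi[j] = 1-based position of document j under the ranking; y[j] in {0, 1}."""
    n = len(y)
    ap = 0.0
    for j in range(n):
        if y[j] == 1:
            # precision at the position of document j: relevant docs ranked
            # at or above it, divided by its position
            p_j = sum(y[k] for k in range(n) if pi[k] <= pi[j]) / pi[j]
            ap += p_j
    return ap / sum(y)

def ndcg_at(pi, y, m):
    """pi[j] = 1-based position of document j; y[j] a graded rank (integer)."""
    dcg = sum((2 ** y[j] - 1) / math.log2(1 + pi[j])
              for j in range(len(y)) if pi[j] <= m)
    # ideal DCG: documents placed in decreasing order of their grades
    ideal = sorted(y, reverse=True)
    idcg = sum((2 ** g - 1) / math.log2(1 + pos)
               for pos, g in enumerate(ideal[:m], start=1))
    return dcg / idcg if idcg > 0 else 0.0
```

For instance, a list with relevant documents at positions 1 and 3 (and an irrelevant one at position 2) has average precision (1 + 2/3) / 2 = 5/6.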
At each round, a weak ranker $h_t$ is constructed based on the training data with weight distribution $P_t$. The goodness of a weak ranker is measured by the performance measure $E$ weighted by $P_t$:
$$\sum_{i=1}^{m} P_t(i)\, E(\pi(q_i, d_i, h_t), y_i).$$
Several methods for weak ranker construction can be considered. For example, a weak ranker can be created by using a subset of queries (together with their document lists and label lists) sampled according to the distribution $P_t$. In this paper, we use single features as weak rankers, as will be explained in Section 3.6.
Once a weak ranker $h_t$ is built, AdaRank chooses a weight $\alpha_t > 0$ for it. Intuitively, $\alpha_t$ measures the importance of $h_t$. A ranking model $f_t$ is created at each round by linearly combining the weak rankers constructed so far, $h_1, \cdots, h_t$, with weights $\alpha_1, \cdots, \alpha_t$. $f_t$ is then used to update the distribution $P_{t+1}$.

3.3 Theoretical Analysis
The existing learning algorithms for ranking attempt to minimize a loss function based on instance pairs (document pairs). In contrast, AdaRank tries to optimize a loss function based on queries. Furthermore, the loss function in AdaRank is defined on the basis of general IR performance measures: the measure can be MAP, NDCG, WTA, MRR, or any other measure whose range is within $[-1, +1]$. We next explain why this is the case.
Ideally, we want to maximize the ranking accuracy in terms of a performance measure on the training data:
$$\max_{f \in \mathcal{F}} \sum_{i=1}^{m} E(\pi(q_i, d_i, f), y_i),$$
where $\mathcal{F}$ is the set of possible ranking functions. This is equivalent to minimizing the loss on the training data:
$$\min_{f \in \mathcal{F}} \sum_{i=1}^{m} \left(1 - E(\pi(q_i, d_i, f), y_i)\right).$$
It is difficult to optimize this loss directly, because $E$ is a non-continuous function and thus may be difficult to handle. We instead attempt to minimize an upper bound of the loss,
$$\sum_{i=1}^{m} \exp\{-E(\pi(q_i, d_i, f), y_i)\},$$
because $e^{-x} \ge 1 - x$ holds for any $x \in \mathbb{R}$. We consider the use of a linear combination of weak rankers as our ranking model, $f(\vec{x}) = \sum_{t=1}^{T} \alpha_t h_t(\vec{x})$, where $h_t \in \mathcal{H}$, the set of possible weak rankers, $\alpha_t$ is a positive weight, and $(f_{t-1} +$
$\alpha_t h_t)(\vec{x}) = f_{t-1}(\vec{x}) + \alpha_t h_t(\vec{x})$. Several ways of computing the coefficients $\alpha_t$ and the weak rankers $h_t$ may be considered. Following the idea of AdaBoost, in AdaRank we take the approach of "forward stage-wise additive modeling" [12] and obtain the algorithm in Figure 1.
It can be proved that there exists a lower bound on the ranking accuracy of AdaRank on the training data, as presented in Theorem 1:
$$\frac{1}{m} \sum_{i=1}^{m} E(\pi(q_i, d_i, f_T), y_i) \ge 1 - \prod_{t=1}^{T} e^{-\delta_{\min}^{t}} \sqrt{1 - \varphi(t)^2},$$
where $\varphi(t) = \sum_{i=1}^{m} P_t(i)\, E(\pi(q_i, d_i, h_t), y_i)$ and $\delta_{\min}^{t} = \min_{i=1,\cdots,m} \delta_i^{t}$. A proof of the theorem can be found in the appendix. The theorem implies that the ranking accuracy in terms of the performance measure can be continuously improved, as long as $e^{-\delta_{\min}^{t}} \sqrt{1 - \varphi(t)^2} < 1$ holds.

3.4 Advantages
AdaRank is a simple yet powerful method. More importantly, it is a method that can be justified from a theoretical viewpoint, as discussed above. AdaRank also has several other advantages over existing learning-to-rank methods such as Ranking SVM, RankBoost, and RankNet.
First, AdaRank can incorporate any performance measure, provided that the measure is query-based and lies in the range $[-1, +1]$. Notice that the major IR measures meet this requirement. In contrast, the existing methods only minimize loss functions that are loosely related to the IR measures [16].
Second, the learning process of AdaRank is more efficient than those of the existing learning algorithms. The time complexity of AdaRank is of order $O((k + T) \cdot m \cdot n \log n)$, where $k$ denotes the number of features, $T$ the number of rounds, $m$ the number of queries in the training data, and $n$ the maximum number of documents per query in the training data. The time complexity of RankBoost, for example, is of order $O(T \cdot m \cdot n^2)$ [8].
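As a numeric illustration of the improvement condition in Theorem 1 (with hypothetical round statistics, not figures from the paper): if at round $t$ the weighted performance of the weak ranker is $\varphi(t) = 0.3$ and $\delta_{\min}^{t} = 0.05$, the per-round factor in the product is below 1, so the lower bound on training accuracy strictly improves at that round.

```python
import math

# hypothetical round statistics (not taken from the paper)
phi = 0.3         # weighted performance of the weak ranker, in [-1, +1]
delta_min = 0.05  # minimal delta over queries at this round

factor = math.exp(-delta_min) * math.sqrt(1 - phi ** 2)
print(round(factor, 4))  # below 1, so the accuracy bound increases this round
```

With these values the factor is roughly 0.907; a factor of 1 or more (e.g., $\varphi(t) = 0$ and $\delta_{\min}^{t} = 0$) would yield no guaranteed improvement.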
correspond to queries, while in the existing methods the instances correspond to document pairs.\nAs a result, AdaRank does not have the following shortcomings that plague the existing methods.\n(a) The existing methods have to make a strong assumption that the document pairs from the same query are independently distributed.\nIn reality, this is clearly not the case; this problem does not exist for AdaRank.\n(b) Ranking the most relevant documents at the top of document lists is crucial for document retrieval.\nThe existing methods cannot focus training on the top of the lists, as indicated in [4].\nSeveral methods for rectifying the problem have been proposed (e.g., [4]); however, they do not seem to fundamentally solve the problem.\nIn contrast, AdaRank can naturally focus training on the top of document lists, because the performance measures used favor rankings in which relevant documents are at the top.\n(c) In the existing methods, the numbers of document pairs vary from query to query, resulting in models biased toward queries with more document pairs, as pointed out in [4].\nAdaRank does not have this drawback, because it treats queries rather than document pairs as basic units in learning.\n3.5 Differences from AdaBoost\nAdaRank is a boosting algorithm.\nIn that sense, it is similar to AdaBoost, but it also has several striking differences from AdaBoost.\nFirst, the types of instances are different.\nAdaRank makes use of queries and their corresponding document lists as instances.\nThe labels in training data are lists of ranks (relevance levels).\nAdaBoost makes use of feature vectors as instances.\nThe labels in training data are simply +1 and −1.\nSecond, the performance measures are different.\nIn AdaRank, the performance measure is a generic measure, defined on the document list and the rank list of a query.\nIn AdaBoost the corresponding performance measure is a specific measure for binary classification, also referred to as `
margin' [25].\nThird, the ways of updating weights are also different.\nIn AdaBoost, the distribution of weights on training instances is calculated according to the current distribution and the performance of the current weak learner.\nIn AdaRank, in contrast, it is calculated according to the performance of the ranking model created so far, as shown in Figure 1.\nNote that AdaBoost can also adopt the weight updating method used in AdaRank.\nFor AdaBoost they are equivalent (cf., [12] page 305).\nHowever, this is not true for AdaRank.\n3.6 Construction of Weak Ranker\nWe consider an efficient implementation for weak ranker construction, which is also used in our experiments.\nIn the implementation, as weak ranker we choose the feature that has the optimal weighted performance among all of the features: ht = arg max_{xk} Σ_{i=1}^m Pt(i) E(π(qi, di, xk), yi).\nCreating weak rankers in this way, the learning process turns out to be that of repeatedly selecting features and linearly combining the selected features.\nNote that features which are not selected in the training phase will have a weight of zero.\n4.\nEXPERIMENTAL RESULTS\nWe conducted experiments to test the performance of AdaRank using four benchmark datasets: OHSUMED, WSJ, AP, and .Gov.\nTable 2: Features used in the experiments on OHSUMED, WSJ, and AP datasets.\nC(w, d) represents the frequency of word w in document d; C represents the entire collection; n denotes the number of terms in the query; | · | denotes the size function; and idf(·) denotes inverse document frequency.\nFigure 2: Ranking accuracies on OHSUMED data.\n4.1 Experiment Setting\nRanking SVM [13, 16] and RankBoost [8] were selected as baselines in the experiments, because they are the state-of-the-art learning to rank methods.\nFurthermore, BM25 [24] was used as a baseline, representing the state-of-the-art IR method (we actually used the tool Lemur1).\nFor AdaRank, the parameter T was determined automatically during each experiment.\nSpecifically, when 
there is no improvement in ranking accuracy in terms of the performance measure, the iteration stops (and T is determined).\nAs the measure E, MAP and NDCG@5 were utilized.\nThe results for AdaRank using MAP and NDCG@5 as measures in training are represented as AdaRank.MAP and AdaRank.NDCG, respectively.\n4.2 Experiment with OHSUMED Data\nIn this experiment, we made use of the OHSUMED dataset [14] to test the performance of AdaRank.\nThe OHSUMED dataset consists of 348,566 documents and 106 queries.\nThere are in total 16,140 query-document pairs upon which relevance judgments are made.\nThe relevance judgments are either `d' (definitely relevant), `p' (possibly relevant), or `n' (not relevant).\nThe data have been used in many experiments in IR, for example [4, 29].\nAs features, we adopted those used in document retrieval [4].\nTable 2 shows the features.\nFor example, tf (term frequency), idf (inverse document frequency), dl (document length), and combinations of them are defined as features.\nThe BM25 score itself is also a feature.\nStop words were removed and stemming was conducted on the data.\nWe randomly divided the queries into four even subsets and conducted 4-fold cross-validation experiments.\nWe tuned the parameters for BM25 during one of the trials and applied them to the other trials.\nThe results reported in Figure 2 are those averaged over four trials.\nIn MAP calculation, we define the rank `d' as relevant and the other two ranks as irrelevant.\nTable 3: Statistics on WSJ and AP datasets.\nFigure 3: Ranking accuracies on WSJ dataset.\nFrom Figure 2, we see that both AdaRank.MAP and AdaRank.NDCG outperform BM25, Ranking SVM, and RankBoost in terms of all measures.\nWe conducted significance tests (t-test) on the improvements of AdaRank.MAP over BM25, Ranking SVM, and RankBoost in terms of MAP.\nThe results indicate that all the improvements are statistically significant (p-value <0.05).\nWe also conducted t-tests on the improvements of AdaRank.NDCG 
over BM25, Ranking SVM, and RankBoost in terms of NDCG@5.\nThe improvements are also statistically significant.\n4.3 Experiment with WSJ and AP Data\nIn this experiment, we made use of the WSJ and AP datasets from the TREC ad-hoc retrieval track, to test the performance of AdaRank.\nWSJ contains 74,520 articles from the Wall Street Journal from 1990 to 1992, and AP contains 158,240 articles from the Associated Press in 1988 and 1990.\n200 queries were selected from the TREC topics (No. 101 ∼ No. 300).\nEach query has a number of associated documents, labeled as `relevant' or `irrelevant' (to the query).\nFollowing the practice in [28], the queries that have fewer than 10 relevant documents were discarded.\nTable 3 shows the statistics on the two datasets.\nIn the same way as in Section 4.2, we adopted the features listed in Table 2 for ranking.\nWe also conducted 4-fold cross-validation experiments.\nThe results reported in Figures 3 and 4 are those averaged over four trials on the WSJ and AP datasets, respectively.\nFrom Figures 3 and 4, we can see that AdaRank.MAP and AdaRank.NDCG outperform BM25, Ranking SVM, and RankBoost in terms of all measures on both WSJ and AP.\nWe conducted t-tests on the improvements of AdaRank.MAP and AdaRank.NDCG over BM25, Ranking SVM, and RankBoost on WSJ and AP.\nThe results indicate that all the improvements in terms of MAP are statistically significant (p-value <0.05).\nHowever, only some of the improvements in terms of NDCG@5 are statistically significant, although overall the improvements on NDCG scores are quite high (1-2 points).\n4.4 Experiment with .Gov Data\nIn this experiment, we further made use of the TREC .Gov data to test the performance of AdaRank for the task of web retrieval.\nThe corpus is a crawl from the .gov domain in early 2002, and has been used at the TREC Web Track since 2002.\nThere are a total of 1,053,110 web pages with 11,164,829 hyperlinks in the data.\nFigure 4: Ranking accuracies on AP dataset.\nFigure 5: Ranking accuracies on .Gov dataset.\nTable 4: Features used in the experiments on .Gov dataset.\nThe 50 queries in the topic distillation task in the Web Track of TREC 2003 [6] were used.\nThe ground truths for the queries are provided by the TREC committee with binary judgments: relevant or irrelevant.\nThe number of relevant pages varies from query to query (from 1 to 86).\nWe extracted 14 features from each query-document pair.\nTable 4 gives a list of the features.\nThey are the outputs of some well-known algorithms (systems).\nThese features are different from those in Table 2, because the task is different.\nAgain, we conducted 4-fold cross-validation experiments.\nThe results averaged over four trials are reported in Figure 5.\nFrom the results, we can see that AdaRank.MAP and AdaRank.NDCG outperform all the baselines in terms of all measures.\nWe conducted t-tests on the improvements of AdaRank.MAP and AdaRank.NDCG over BM25, Ranking SVM, and RankBoost.\nSome of the improvements are not statistically significant.\nThis is likely because only 50 queries were used in the experiments, a number too small to establish significance.\n4.5 Discussions\nWe investigated the reasons that AdaRank outperforms the baseline methods, using the results on the OHSUMED dataset as examples.\nFirst, we examined the reason that AdaRank achieves higher performance than Ranking SVM and RankBoost.\nSpecifically, we compared the error rates between different rank pairs made by Ranking SVM, RankBoost, AdaRank.MAP, and AdaRank.NDCG on the test data.\nFigure 6: Accuracy on ranking document pairs with OHSUMED dataset.\nFigure 7: Distribution of queries with different number of document pairs in training data of trial 1.\nThe results averaged over four trials in the 4-fold cross validation are shown in Figure 6.\nWe use `d-n' to stand for the pairs between `definitely relevant' and `not relevant', `d-p' the pairs between `definitely relevant' and `partially relevant', and `p-n' the pairs 
between `partially relevant' and `not relevant'.\nFrom Figure 6, we can see that AdaRank.MAP and AdaRank.NDCG make fewer errors for `d-n' and `d-p', which are related to the top of the rankings and are important.\nThis is because AdaRank.MAP and AdaRank.NDCG can naturally focus training on the top of the rankings by optimizing MAP and NDCG@5, respectively.\nWe also made statistics on the number of document pairs per query in the training data (for trial 1).\nThe queries are clustered into different groups based on the number of their associated document pairs.\nFigure 7 shows the distribution of the query groups.\nIn the figure, for example, `0-1k' is the group of queries whose number of document pairs is between 0 and 999.\nWe can see that the numbers of document pairs really do vary from query to query.\nNext we evaluated the accuracies of AdaRank.MAP and RankBoost in terms of MAP for each of the query groups.\nThe results are reported in Figure 8.\nWe found that the average MAP of AdaRank.MAP over the groups is two points higher than that of RankBoost.\nFurthermore, it is interesting to see that AdaRank.MAP performs particularly well compared with RankBoost for queries with small numbers of document pairs (e.g., `0-1k', `1k-2k', and `2k-3k').\nThe results indicate that AdaRank.MAP can effectively avoid creating a model biased towards queries with more document pairs.\nFor AdaRank.NDCG, similar results can be observed.\nFigure 8: Differences in MAP for different query groups.\nFigure 9: MAP on training set when model is trained with MAP or NDCG@5.\nWe further conducted an experiment to see whether AdaRank has the ability to improve the ranking accuracy in terms of a measure by using the measure in training.\nSpecifically, we trained ranking models using AdaRank.MAP and AdaRank.NDCG and evaluated their accuracies on the training dataset in terms of both MAP and NDCG@5.\nThe experiment was conducted for each trial.\nFigures 9 and 10 show the results in terms of MAP and 
NDCG@5, respectively.\nWe can see that AdaRank.MAP trained with MAP performs better in terms of MAP, while AdaRank.NDCG trained with NDCG@5 performs better in terms of NDCG@5.\nThe results indicate that AdaRank can indeed enhance ranking performance in terms of a measure by using the measure in training.\nFinally, we tried to verify the correctness of Theorem 1.\nThat is, the ranking accuracy in terms of the performance measure can be continuously improved, as long as e^{−δt_min} √(1 − φ(t)²) < 1 holds.\nAs an example, Figure 11 shows the learning curve of AdaRank.MAP in terms of MAP during the training phase in one trial of the cross validation.\nFrom the figure, we can see that the ranking accuracy of AdaRank.MAP steadily improves as the training goes on, until it reaches its peak.\nThe result agrees well with Theorem 1.\n5.\nCONCLUSION AND FUTURE WORK\nIn this paper we have proposed a novel algorithm for learning ranking models in document retrieval, referred to as AdaRank.\nIn contrast to existing methods, AdaRank optimizes a loss function that is directly defined on the performance measures.\nIt employs a boosting technique in ranking model learning.\nAdaRank offers several advantages: ease of implementation, theoretical soundness, efficiency in training, and high accuracy in ranking.\nExperimental results based on four benchmark datasets show that AdaRank can significantly outperform the baseline methods of BM25, Ranking SVM, and RankBoost.\nFigure 10: NDCG@5 on training set when model is trained with MAP or NDCG@5.\nFigure 11: Learning curve of AdaRank.\nFuture work includes theoretical analysis on the generalization error and other properties of the AdaRank algorithm, and further empirical evaluations of the algorithm including comparisons with other algorithms that can directly optimize performance measures.","keyphrases":["boost","inform retriev","learn to rank","document retriev","rank model","train rank model","rankboost","novel 
learn algorithm","weak ranker","re-weight train data","train process","machin learn","support vector machin","new learn algorithm","rank model tune"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M","U","M","M"]} {"id":"I-61","title":"Distributed Agent-Based Air Traffic Flow Management","abstract":"Air traffic flow management is one of the fundamental challenges facing the Federal Aviation Administration (FAA) today. The FAA estimates that in 2005 alone, there were over 322,000 hours of delays at a cost to the industry in excess of three billion dollars. Finding reliable and adaptive solutions to the flow management problem is of paramount importance if the Next Generation Air Transportation Systems are to achieve the stated goal of accommodating three times the current traffic volume. This problem is particularly complex as it requires the integration and\/or coordination of many factors including: new data (e.g., changing weather info), potentially conflicting priorities (e.g., different airlines), limited resources (e.g., air traffic controllers) and very heavy traffic volume (e.g., over 40,000 flights over the US airspace). In this paper we use FACET -- an air traffic flow simulator developed at NASA and used extensively by the FAA and industry -- to test a multi-agent algorithm for traffic flow management. An agent is associated with a fix (a specific location in 2D space) and its action consists of setting the separation required among the airplanes going though that fix. Agents use reinforcement learning to set this separation and their actions speed up or slow down traffic to manage congestion. 
Our FACET based results show that agents receiving personalized rewards reduce congestion by up to 45% over agents receiving a global reward and by up to 67% over a current industry approach (Monte Carlo estimation).","lvl-1":"Distributed Agent-Based Air Traffic Flow Management Kagan Tumer Oregon State University 204 Rogers Hall Corvallis, OR 97331, USA kagan.tumer@oregonstate.edu Adrian Agogino UCSC, NASA Ames Research Center Mailstop 269-3 Moffett Field, CA 94035, USA adrian@email.arc.nasa.gov ABSTRACT Air traffic flow management is one of the fundamental challenges facing the Federal Aviation Administration (FAA) today.\nThe FAA estimates that in 2005 alone, there were over 322,000 hours of delays at a cost to the industry in excess of three billion dollars.\nFinding reliable and adaptive solutions to the flow management problem is of paramount importance if the Next Generation Air Transportation Systems are to achieve the stated goal of accommodating three times the current traffic volume.\nThis problem is particularly complex as it requires the integration and\/or coordination of many factors including: new data (e.g., changing weather info), potentially conflicting priorities (e.g., different airlines), limited resources (e.g., air traffic controllers) and very heavy traffic volume (e.g., over 40,000 flights over the US airspace).\nIn this paper we use FACET - an air traffic flow simulator developed at NASA and used extensively by the FAA and industry - to test a multi-agent algorithm for traffic flow management.\nAn agent is associated with a fix (a specific location in 2D space) and its action consists of setting the separation required among the airplanes going through that fix.\nAgents use reinforcement learning to set this separation and their actions speed up or slow down traffic to manage congestion.\nOur FACET based results show that agents receiving personalized rewards reduce congestion by up to 
67% over a current industry approach (Monte Carlo estimation).\nCategories and Subject Descriptors I.2.11 [Computing Methodologies]: Artificial Intelligence - Multiagent systems General Terms Algorithms, Performance 1.\nINTRODUCTION The efficient, safe and reliable management of our ever-increasing air traffic is one of the fundamental challenges facing the aerospace industry today.\nOn a typical day, more than 40,000 commercial flights operate within the US airspace [14].\nIn order to efficiently and safely route this air traffic, current traffic flow control relies on a centralized, hierarchical routing strategy that performs flow projections ranging from one to six hours.\nAs a consequence, the system is slow to respond to developing weather or airport conditions, allowing potentially minor local delays to cascade into large regional congestion.\nIn 2005, weather, routing decisions and airport conditions caused 437,667 delays, accounting for 322,272 hours of delays.\nThe total cost of these delays was estimated by industry to exceed three billion dollars [7].\nFurthermore, as the traffic flow increases, the current procedures increase the load on the system, the airports, and the air traffic controllers (more aircraft per region) without providing any of them with means to shape the traffic patterns beyond minor reroutes.\nThe Next Generation Air Transportation Systems (NGATS) initiative aims to address these issues and account not only for a threefold increase in traffic, but also for the increasing heterogeneity of aircraft and decreasing restrictions on flight paths.\nUnlike many other flow problems where the increasing traffic is to some extent absorbed by improved hardware (e.g., more servers with larger memories and faster CPUs for internet routing), the air traffic domain needs to find mainly algorithmic solutions, as the infrastructure (e.g., the number of airports) will not change significantly enough to impact the flow problem.\nThere is therefore a strong need to 
explore new, distributed and adaptive solutions to the air flow control problem.\nAn adaptive, multi-agent approach is an ideal fit to this naturally distributed problem, where the complex interaction among the aircraft, airports and traffic controllers renders a pre-determined centralized solution severely suboptimal at the first deviation from the expected plan.\nThough a truly distributed and adaptive solution (e.g., free flight where aircraft can choose almost any path) offers the most potential in terms of optimizing flow, it also provides the most radical departure from the current system.\nAs a consequence, a shift to such a system presents tremendous difficulties both in terms of implementation (e.g., scheduling and airport capacity) and political fallout (e.g., impact on air traffic controllers).\nIn this paper, we focus on an agent-based system that can be implemented readily.\n342 978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nIn this approach, we assign an agent to a fix, a specific location in 2D.\nBecause aircraft flight plans consist of a sequence of fixes, this representation allows localized fixes (or agents) to have direct impact on the flow of air traffic1.\nIn this approach, the agents' actions are to set the separation that approaching aircraft are required to keep.\nThis simple agent-action pair allows the agents to slow down or speed up local traffic and allows agents to have a significant impact on the overall air traffic flow.\nAgents learn the most appropriate separation for their location using a reinforcement learning (RL) algorithm [15].\nIn a reinforcement learning approach, the selection of the agent reward has a large impact on the performance of the system.\nIn this work, we explore four different agent reward functions, and compare them to simulating various changes to the system and selecting the best solution (e.g., equivalent to a Monte-Carlo search).\nThe first explored reward consisted of the system reward.\nThe second reward was a 
personalized agent reward based on collectives [3, 17, 18].\nThe last two rewards were personalized rewards based on estimations intended to lower the computational burden of the reward computation.\nAll three personalized rewards aim to align agent rewards with the system reward and ensure that the rewards remain sensitive to the agents' actions.\nPrevious work in this domain fell into one of two distinct categories: the first-principles modeling approaches used by domain experts [5, 8, 10, 13] and the algorithmic approaches explored by the learning and\/or agents community [6, 9, 12].\nThough our approach comes from the second category, we aim to bridge the gap by testing our algorithms with FACET, a simulator introduced and widely used (i.e., over 40 organizations and 5000 users) by work in the first category [4, 11].\nThe main contribution of this paper is to present a distributed adaptive air traffic flow management algorithm that can be readily implemented, and to test that algorithm using FACET.\nIn Section 2, we describe the air traffic flow problem and the simulation tool, FACET.\nIn Section 3, we present the agent-based approach, focusing on the selection of the agents and their action space along with the agents' learning algorithms and reward structures.\nIn Section 4 we present results in domains with one and two congestions, explore different trade-offs of the system objective function, discuss the scaling properties of the different agent rewards and discuss the computational cost of achieving certain levels of performance.\nFinally, in Section 5, we discuss the implications of these results and map the work required to enable the FAA to reach its stated goal of increasing the traffic volume by threefold.\n2.\nAIR TRAFFIC FLOW MANAGEMENT With over 40,000 flights operating within the United States airspace on an average day, the management of traffic flow is a complex and demanding problem.\nNot only are there concerns for the efficiency of 
the system, but also for fairness (e.g., different airlines), adaptability (e.g., developing weather patterns), reliability and safety (e.g., airport management).\nIn order to address such issues, the management of this traffic flow occurs over four hierarchical levels: 1.\nSeparation assurance (2-30 minute decisions); 2.\nRegional flow (20 minutes to 2 hours); 3.\nNational flow (1-8 hours); and 4.\nDynamic airspace configuration (6 hours to 1 year).\n(1 We discuss how flight plans with few fixes can be handled in more detail in Section 2.)\nBecause of the strict guidelines and safety concerns surrounding aircraft separation, we will not address that control level in this paper.\nSimilarly, because of the business and political impact of dynamic airspace configuration, we will not address the outermost flow control level either.\nInstead, we will focus on the regional and national flow management problems, restricting our impact to decisions with time horizons between twenty minutes and eight hours.\nThe proposed algorithm will fit between long-term planning by the FAA and the very short-term decisions by air traffic controllers.\nThe continental US airspace consists of 20 regional centers (handling 200-300 flights on a given day) and 830 sectors (handling 10-40 flights).\nThe flow control problem has to address the integration of policies across these sectors and centers, account for the complexity of the system (e.g., over 5200 public use airports and 16,000 air traffic controllers) and handle changes to the policies caused by weather patterns.\nTwo of the fundamental problems in addressing the flow problem are: (i) modeling and simulating such a large complex system, as the fidelity required to provide reliable results is difficult to achieve; and (ii) establishing the method by which the flow management is evaluated, as directly minimizing the total delay may lead to inequities towards particular regions or commercial entities.\nBelow, we discuss how we addressed 
both issues, namely, we present FACET, a widely used simulation tool, and discuss our system evaluation function.\nFigure 1: FACET screenshot displaying traffic routes and air flow statistics.\n2.1 FACET FACET (Future ATM Concepts Evaluation Tool), a physics-based model of the US airspace, was developed to accurately model the complex air traffic flow problem [4].\nIt is based on propagating the trajectories of proposed flights forward in time.\nFACET can be used to either simulate and display air traffic (a 24 hour slice with 60,000 flights takes 15 minutes to simulate on a 3 GHz, 1 GB RAM computer) or provide rapid statistics on recorded data (4D trajectories for 10,000 flights including sectors, airports, and fix statistics in 10 seconds on the same computer) [11].\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 343\nFACET is extensively used by the FAA, NASA and industry (over 40 organizations and 5000 users) [11].\nFACET simulates air traffic based on flight plans and, through a graphical user interface, allows the user to analyze congestion patterns of different sectors and centers (Figure 1).\nFACET also allows the user to change the flow patterns of the aircraft through a number of mechanisms, including metering aircraft through fixes.\nThe user can then observe the effects of these changes on congestion.\nIn this paper, agents use FACET directly through batch mode, where agents send scripts to FACET asking it to simulate air traffic based on metering orders imposed by the agents.\nThe agents then produce their rewards based on the feedback they receive from FACET about the impact of these meterings.\n2.2 System Evaluation The system performance evaluation function we select focuses on delay and congestion but does not account for the fairness impact on different commercial entities.\nInstead it focuses on the amount of congestion in a particular sector and on the amount of measured air traffic delay.\nThe linear combination of these 
two terms gives the full system evaluation function, G(z), as a function of the full system state z.\nMore precisely, we have: G(z) = −((1 − α)B(z) + αC(z)), (1) where B(z) is the total delay penalty for all aircraft in the system, and C(z) is the total congestion penalty.\nThe relative importance of these two penalties is determined by the value of α, and we explore various trade-offs based on α in Section 4.\nThe total delay, B, is a sum of delays over a set of sectors S and is given by: B(z) = Σ_{s∈S} Bs(z), (2) where Bs(z) = Σ_t Θ(t − τs) ks,t (t − τs), (3) where ks,t is the number of aircraft in sector s at a particular time, τs is a predetermined time, and Θ(·) is the step function that equals 1 when its argument is greater than or equal to zero, and has a value of zero otherwise.\nIntuitively, Bs(z) provides the total number of aircraft that remain in a sector s past a predetermined time τs, and scales their contribution to the count by the amount by which they are late.\nIn this manner Bs(z) provides a delay factor that not only accounts for all aircraft that are late, but also provides a scale to measure their lateness.\nThis definition is based on the assumption that most aircraft should have reached the sector by time τs and that aircraft arriving after this time are late.\nIn this paper the value of τs is determined by assessing aircraft counts in the sector in the absence of any intervention or any deviation from predicted paths.\nSimilarly, the total congestion penalty is a sum of the congestion penalties over the sectors of observation, S: C(z) = Σ_{s∈S} Cs(z), (4) where Cs(z) = a Σ_t Θ(ks,t − cs) e^{b(ks,t − cs)}, (5) where a and b are normalizing constants, and cs is the capacity of sector s as defined by the FAA.\nIntuitively, Cs(z) penalizes a system state where the number of aircraft in a sector exceeds the FAA's official sector capacity.\nEach sector 
capacity is computed using various metrics which include the number of air traffic controllers available.\nThe exponential penalty is intended to provide strong feedback to return the number of aircraft in a sector to below the FAA-mandated capacities.\n3.\nAGENT BASED AIR TRAFFIC FLOW The multi-agent approach to air traffic flow management we present is predicated on adaptive agents taking independent actions that maximize the system evaluation function discussed above.\nTo that end, there are four critical decisions that need to be made: agent selection, agent action set selection, agent learning algorithm selection and agent reward structure selection.\n3.1 Agent Selection Selecting the aircraft as agents is perhaps the most obvious choice for defining an agent.\nThat selection has the advantage that agent actions can be intuitive (e.g., change of flight plan, increase or decrease speed and altitude) and offer a high level of granularity, in that each agent can have its own policy.\nHowever, there are several problems with that approach.\nFirst, there are in excess of 40,000 aircraft in a given day, leading to a massively large multi-agent system.\nSecond, as the agents would not be able to sample their state space sufficiently, learning would be prohibitively slow.\nAs an alternative, we assign agents to individual ground locations throughout the airspace called fixes.\nEach agent is then responsible for any aircraft going through its fix.\nFixes offer many advantages as agents: 1.\nTheir number can vary depending on need.\nThe system can have as many agents as required for a given situation (e.g., agents coming online around an area with developing weather conditions).\n2.\nBecause fixes are stationary, collecting data and matching behavior to reward is easier.\n3.\nBecause aircraft flight plans consist of fixes, agents will have the ability to affect traffic flow patterns.\n4.\nThey can be deployed within the current air traffic routing procedures, and can be 
used as tools to help air traffic controllers rather than compete with or replace them.\nFigure 2 shows a schematic of this agent-based system.\nAgents surrounding a congestion or weather condition affect the flow of traffic to reduce the burden on particular regions.\n3.2 Agent Actions The second issue that needs to be addressed is determining the action set of the agents.\nAgain, an obvious choice may be for fixes to bid on aircraft, affecting their flight plans.\nThough appealing from a free flight perspective, that approach makes the flight plans too unreliable and significantly complicates the scheduling problem (e.g., arrival at airports and the subsequent gate assignment process).\nInstead, we set the actions of an agent to be determining the separation (distance between aircraft) that aircraft have to maintain when going through the agent's fix.\nThis is known as setting the Miles in Trail or MIT.\nWhen an agent sets the MIT value to d, aircraft going towards its fix are instructed to line up and keep d miles of separation (though aircraft will always keep a safe distance from each other regardless of the value of d).\nWhen there are many aircraft going through a fix, the effect of issuing higher MIT values is to slow down the rate of aircraft that go through the fix.\nBy increasing the value of d, an agent can limit the amount of air traffic downstream of its fix, reducing congestion at the expense of increasing the delays upstream.\nFigure 2: Schematic of agent architecture.\nThe agents corresponding to fixes surrounding a possible congestion become live and start setting new separation times.\n3.3 Agent Learning The objective of each agent is to learn the best values of d that will lead to the best system performance, G.\nIn this paper we assume that each agent will have a reward function and will aim to maximize its reward using its own reinforcement learner [15] 
(though alternatives such as evolving neuro-controllers are also effective [1]). For complex delayed-reward problems, relatively sophisticated reinforcement learning systems such as temporal difference methods may have to be used. However, because of our choice of agents and agent action set, the air traffic congestion domain modeled in this paper only needs immediate rewards. As a consequence, simple table-based immediate-reward reinforcement learning is used. Our reinforcement learner is equivalent to an ε-greedy Q-learner with a discount rate of 0 [15]. At every episode an agent takes an action and then receives a reward evaluating that action. After taking action a and receiving reward R, an agent updates its Q table (which contains its estimate of the value of taking that action [15]) as follows:

Q(a) = (1 − l)Q(a) + lR,   (6)

where l is the learning rate. At every time step the agent chooses the action with the highest table value with probability 1 − ε and chooses a random action with probability ε. In the experiments described in this paper, the learning rate l is equal to 0.5 and ε is equal to 0.25. The parameters were chosen experimentally, though system performance was not overly sensitive to them.

3.4 Agent Reward Structure
The final issue that needs to be addressed is selecting the reward structure for the learning agents. The first and most direct approach is to let each agent receive the system performance as its reward. However, in many domains such a reward structure leads to slow learning. We therefore also set up a second set of reward structures based on agent-specific rewards. Given that agents aim to maximize their own rewards, a critical task is to create good agent rewards: rewards that, when pursued by the agents, lead to good overall system performance. In this work we focus on difference rewards, which aim to provide a reward that is both sensitive to an agent's actions and aligned with the overall system reward
[2, 17, 18].

3.4.1 Difference Rewards
Consider difference rewards of the form [2, 17, 18]:

Di ≡ G(z) − G(z − zi + ci),   (7)

where zi is the action of agent i, and all the components of z that are affected by agent i are replaced with the fixed constant ci.² In many situations it is possible to use a ci that is equivalent to taking agent i out of the system. Intuitively, this causes the second term of the difference reward to evaluate the performance of the system without i, and therefore D evaluates the agent's contribution to the system performance. There are two advantages to using D. First, because the second term removes a significant portion of the impact of other agents in the system, it provides an agent with a cleaner signal than G; this benefit has been dubbed learnability (agents have an easier time learning) in previous work [2, 17]. Second, because the second term does not depend on the actions of agent i, any action by agent i that improves D also improves G; this property, which measures the amount of alignment between two rewards, has been dubbed factoredness in previous work [2, 17].

3.4.2 Estimates of Difference Rewards
Though it provides a good compromise between aiming for system performance and removing the impact of other agents from an agent's reward, one issue that may plague D is computational cost. Because it relies on the computation of the counterfactual term G(z − zi + ci) (i.e., the system performance without agent i), it may be difficult or impossible to compute, particularly when the exact mathematical form of G is not known. Let us focus on G functions of the following form:

G(z) = Gf(f(z)),   (8)

where Gf() is non-linear with a known functional form, and

f(z) = Σi fi(zi),   (9)

where each fi is an unknown non-linear function. We assume that we can sample values of f(z), enabling us to compute G, but that we cannot sample from each fi(zi).

² This notation uses zero padding and vector addition rather than
concatenation to form full state vectors from partial state vectors. The vector zi in our notation would be zi ei in standard vector notation, where ei is a vector with a value of 1 in the ith component and zero everywhere else.

In addition, we assume that Gf is much easier to compute than f(z), or that we may not even be able to compute f(z) directly and must sample it from a black-box computation. This form of G matches our system evaluation in the air traffic domain. When we arrange agents so that each aircraft is typically affected by only a single agent, each agent's impact on the counts of the number of aircraft in a sector, kt,s, will be mostly independent of the other agents. These values of kt,s are the f(z)'s in our formulation, and the penalty functions form Gf. Note that given aircraft counts, the penalty functions (Gf) can be computed in microseconds, while aircraft counts (f) can only be computed by running FACET, taking on the order of seconds. To compute our counterfactual G(z − zi + ci) we need to compute:

Gf(f(z − zi + ci)) = Gf(Σj≠i fj(zj) + fi(ci))   (10)
                  = Gf(f(z) − fi(zi) + fi(ci)).   (11)

Unfortunately, we cannot compute this directly, as the values of fi(zi) are unknown. However, if each agent takes its action independently (it does not observe how other agents act before taking its own action), we can take advantage of the linear form of f(z) in the fi's with the following equality:

E(f−i(z−i)|zi) = E(f−i(z−i)|ci),   (12)

where E(f−i(z−i)|zi) is the expected value of all of the f's other than fi given the value of zi, and E(f−i(z−i)|ci) is the expected value of all of the f's other than fi given that the value of zi is changed to ci. We can then estimate f(z − zi + ci):

f(z) − fi(zi) + fi(ci) = f(z) − fi(zi) + fi(ci) + E(f−i(z−i)|ci) − E(f−i(z−i)|zi)
=
f(z) − E(fi(zi)|zi) + E(fi(ci)|ci) + E(f−i(z−i)|ci) − E(f−i(z−i)|zi)
= f(z) − E(f(z)|zi) + E(f(z)|ci).

Therefore we can evaluate Di = G(z) − G(z − zi + ci) as:

Dest1_i = Gf(f(z)) − Gf(f(z) − E(f(z)|zi) + E(f(z)|ci)),   (13)

leaving us with the task of estimating the values of E(f(z)|zi) and E(f(z)|ci). These estimates can be computed by keeping a table of averages, where we average the observed values of f(z) for each value of zi that we have seen. This estimate should improve as the number of samples increases. To improve our estimates, we can set ci = E(z), and if we make the mean-squared approximation f(E(z)) ≈ E(f(z)), then we can estimate G(z) − G(z − zi + ci) as:

Dest2_i = Gf(f(z)) − Gf(f(z) − E(f(z)|zi) + E(f(z))).   (14)

This formulation has the advantage that we have more samples at our disposal to estimate E(f(z)) than we do to estimate E(f(z)|ci).

4. SIMULATION RESULTS
In this paper we test the performance of our agent-based air traffic optimization method on a series of simulations using the FACET air traffic simulator. In all experiments we test the performance of five different methods. The first method is Monte Carlo estimation, where random policies are created, with the best policy being chosen. The other four methods are agent-based methods where the agents maximize one of the following rewards:

1. The system reward, G(z), as defined in Equation 1.
2. The difference reward, Di(z), assuming that agents can calculate counterfactuals.
3. An estimate of the difference reward, Dest1_i(z), where agents estimate the counterfactual using E(f(z)|zi) and E(f(z)|ci).
4. An estimate of the difference reward, Dest2_i(z), where agents estimate the counterfactual using E(f(z)|zi) and E(f(z)).

These methods are first tested on an air traffic domain with 300 aircraft, where 200 of the aircraft are going through a single point of congestion over a four
hour simulation. Agents are responsible for reducing congestion at this single point, while trying to minimize delay. The methods are then tested on a more difficult problem, where a second point of congestion is added, with the 100 remaining aircraft going through this second point of congestion.

In all experiments the goal of the system is to maximize the system performance given by G(z) with the parameters a = 50, b = 0.3, τs1 equal to 200 minutes and τs2 equal to 175 minutes. These values of τ are obtained by examining the time at which most of the aircraft leave the sectors when no congestion control is being performed. Except where noted, the trade-off between congestion and lateness, α, is set to 0.5. In all experiments, to make the agent results comparable to the Monte Carlo estimation, the best policies chosen by the agents are used in the results. All results are an average of thirty independent trials, with the error in the mean (σ/√n) shown as error bars, though in most cases the error bars are too small to see.

Figure 3: Performance on single congestion problem, with 300 Aircraft, 20 Agents and α = .5.

4.1 Single Congestion
In the first experiment we test the performance of the five methods when there is a single point of congestion, with twenty agents. This point of congestion is created by setting up a series of flight plans that cause the number of aircraft in the sector of interest to be significantly more than the number allowed by the FAA. The results displayed in Figures 3 and 4 show the performance of all five algorithms on two different system evaluations. In both cases, the agent-based methods significantly outperform the Monte Carlo method. This result is not surprising, since the agent-based methods intelligently explore their space, whereas the Monte Carlo method explores the space
randomly.

Figure 4: Performance on single congestion problem, with 300 Aircraft, 20 Agents and α = .75.

Among the agent-based methods, agents using difference rewards perform better than agents using the system reward. Again this is not surprising, since with twenty agents, an agent directly trying to maximize the system reward has difficulty determining the effect of its actions on its own reward. Even if an agent takes an action that reduces congestion and lateness, other agents may simultaneously take actions that increase congestion and lateness, causing the agent to wrongly believe that its action was poor. In contrast, agents using the difference reward have more influence over the value of their own reward; therefore, when an agent takes a good action, the value of this action is more likely to be reflected in its reward.

This experiment also shows that estimating the difference reward is not only possible but also quite effective when the true value of the difference reward cannot be computed. While agents using the estimates do not achieve results as high as agents using the true difference reward, they still perform significantly better than agents using the system reward. Note, however, that the benefit of the estimated difference rewards is only present later in learning. Earlier in learning, the estimates are poor, and agents using the estimated difference rewards perform no better than agents using the system reward.

4.2 Two Congestions
In the second experiment we test the performance of the five methods on a more difficult problem with two points of congestion. On this problem the first region of congestion is the same as in the previous problem, and the second region of congestion is added in a different part of the country. The second congestion is less severe than the first one, so agents have to form different policies depending on which point of congestion they are influencing.

Figure 5: Performance on two congestion
problem, with 300 Aircraft, 20 Agents and α = .5.

Figure 6: Performance on two congestion problem, with 300 Aircraft, 50 Agents and α = .5.

The results displayed in Figure 5 show that the relative performance of the five methods is similar to the single congestion case. Again, agent-based methods perform better than the Monte Carlo method, and agents using difference rewards perform better than agents using the system reward. To verify that the performance improvement of our methods is maintained when there is a different number of agents, we perform additional experiments with 50 agents. The results displayed in Figure 6 show that indeed the relative performances of the methods are comparable when the number of agents is increased to 50. Figure 7 shows scaling results and demonstrates that the conclusions hold over a wide range of numbers of agents. Agents using Dest2 perform slightly better than agents using Dest1 in all cases except for 50 agents. This slight advantage stems from Dest2 providing the agents with a cleaner signal, since its estimate uses more data points.

4.3 Penalty Tradeoffs
The system evaluation function used in the experiments is G(z) = −((1 − α)D(z) + αC(z)), which comprises penalties for both congestion and lateness.

Figure 7: Impact of number of agents on system performance. Two congestion problem, with 300 Aircraft and α = .5.

This evaluation function forces the agents to trade off the relative penalties depending on the value of α. With high α the optimization focuses on reducing congestion, while with low α the system focuses on reducing lateness. To verify that the results obtained above are not specific to a particular value of α, we repeat the experiment with 20 agents for α = .75. Figure 8 shows that qualitatively the relative performance of the algorithms remains the
same. Next, we perform a series of experiments where α ranges from 0.0 to 1.0. Figure 9 shows the results, which lead to three interesting observations:

• First, there is a zero congestion penalty solution. This solution has agents enforce large MIT values to block all air traffic, which appears viable when the system evaluation does not account for delays. All algorithms find this solution, though it is of little interest in practice due to the large delays it would cause.

• Second, if the two penalties were independent, an optimal solution would be a line from the two end points. Therefore, unless D is far from being optimal, the two penalties are not independent. Note that for α = 0.5 the difference between D and this hypothetical line is as large as it is anywhere else, making α = 0.5 a reasonable choice for testing the algorithms in a difficult setting.

• Third, Monte Carlo and G are particularly poor at handling multiple objectives. For both algorithms, the performance degrades significantly for mid-ranges of α.

4.4 Computational Cost
The results in the previous section show the performance of the different algorithms after a specific number of episodes. Those results show that D is significantly superior to the other algorithms. One question that arises, though, is what computational overhead D puts on the system, and what results would be obtained if the additional computational expense of D were made available to the other algorithms.

Figure 8: Performance on two congestion problem, with 300 Aircraft, 20 Agents and α = .75.

Figure 9: Tradeoff between objectives on two congestion problem, with 300 Aircraft and 20 Agents. Note that Monte Carlo and G are particularly bad at handling multiple objectives.

The computational cost of the system evaluation G (Equation 1) is almost entirely dependent on the computation of the airplane counts for the sectors, kt,s, which need to be computed using FACET. Except when
D is used, the values of k are computed once per episode. However, to compute the counterfactual term in D, if FACET is treated as a black box, each agent would have to compute its own values of k for its counterfactual, resulting in n + 1 computations of k per episode. While it may be possible to streamline the computation of D with some knowledge of the internals of FACET, given the complexity of the FACET simulation it is not unreasonable in this case to treat it as a black box.

Table 1 shows the performance of the algorithms after 2100 G computations for each of the algorithms for the simulations presented in Figure 5, where there were 20 agents, 2 congestions and α = .5. All the algorithms except the fully computed D reach 2100 k computations at time step 2100. D, however, computes k once for the system and then once for each agent, leading to 21 computations per time step; it therefore reaches 2100 computations at time step 100. We also show the results of the full D computation at t = 2100, which needs 44100 computations of k, as D44K.

Table 1: System performance for 20 Agents, 2 congestions and α = .5, after 2100 G evaluations (except for D44K, which has 44100 G evaluations at t = 2100).

  Reward   G        σ/√n    time
  Dest2    -232.5   7.55    2100
  Dest1    -234.4   6.83    2100
  D        -277.0   7.8      100
  D44K     -219.9   4.48    2100
  G        -412.6   13.6    2100
  MC       -639.0   16.4    2100

Although D44K provides the best result by a slight margin, it is achieved at considerable computational cost. Indeed, the performance of the two D estimates is remarkable in this case, as they were obtained with about twenty times fewer computations of k.
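The cost accounting above, and the estimator of Section 3.4.2 that avoids the n + 1 FACET runs, can be sketched together. This is an illustrative sketch: `G`, `Gf` and the class below are stand-ins we introduce (in the paper, f is the FACET-computed aircraft counts and Gf the penalty functions), not the authors' code.

```python
from collections import defaultdict

def full_difference_reward(G, z, i, c_i):
    """Full D_i (Equation 7): costs one extra black-box G evaluation per agent."""
    z_cf = list(z)
    z_cf[i] = c_i                      # counterfactual: fix agent i's action to c_i
    return G(z) - G(z_cf)

class Dest2Estimator:
    """D^est2 (Equation 14) for one agent, built from running averages of f(z)."""

    def __init__(self):
        self.by_action = defaultdict(lambda: [0.0, 0])  # z_i -> [sum, count] of f(z)
        self.overall = [0.0, 0]                         # [sum, count] over all samples

    def observe(self, z_i, f_of_z):
        # One FACET sample of f(z) per episode updates both tables.
        s = self.by_action[z_i]
        s[0] += f_of_z
        s[1] += 1
        self.overall[0] += f_of_z
        self.overall[1] += 1

    def reward(self, Gf, z_i, f_of_z):
        # D^est2_i = Gf(f(z)) - Gf(f(z) - E(f(z)|z_i) + E(f(z)))
        e_action = self.by_action[z_i][0] / self.by_action[z_i][1]
        e_all = self.overall[0] / self.overall[1]
        return Gf(f_of_z) - Gf(f_of_z - e_action + e_all)

# Budget arithmetic from the text: full D re-runs FACET once for the system
# plus once per agent's counterfactual, i.e. n + 1 k-computations per episode.
n_agents = 20
evals_per_episode_full_d = n_agents + 1                  # 21
episodes_within_budget = 2100 // evals_per_episode_full_d  # 100 episodes for full D
full_run_cost = evals_per_episode_full_d * 2100          # 44100, the D44K row
```

With 20 agents this reproduces the accounting behind Table 1: 21 k-computations per episode for full D, 100 episodes within a 2100-evaluation budget, and 44100 evaluations for a full 2100-episode run.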
Furthermore, the two D estimates significantly outperform the full D computation for a given number of computations of k, validating the assumptions made in Section 3.4.2. This shows that for this domain, in practice, it is more fruitful to perform more learning steps with an approximate D than fewer learning steps with the full D computation when we treat FACET as a black box.

5. DISCUSSION
The efficient, safe and reliable management of air traffic flow is a complex problem, requiring solutions that integrate control policies with time horizons ranging from minutes up to a year. The main contribution of this paper is to present a distributed adaptive air traffic flow management algorithm that can be readily implemented, and to test that algorithm using FACET, a simulation tool widely used by the FAA, NASA and the industry. Our method is based on agents representing fixes, with each agent determining the separation between aircraft approaching its fix. It offers the significant benefit of not requiring radical changes to the current air flow management structure and is therefore readily deployable. The agents use reinforcement learning to learn control policies, and we explore different agent reward functions and different ways of estimating those functions.

We are currently extending this work in three directions. First, we are exploring new methods of estimating agent rewards, to further speed up the simulations. Second, we are investigating deployment strategies and looking for modifications that would have larger impact. One such modification is to extend the definition of agents from fixes to sectors, giving agents more opportunity to control the traffic flow and allowing them to be more efficient in eliminating congestion. Finally, in cooperation with domain experts, we are investigating different system evaluation functions, above and beyond the delay- and congestion-dependent G presented in this paper.

Acknowledgments: The authors thank Banavar Sridhar
for his invaluable help in describing both current air traffic flow management and NGATS, and Shon Grabbe for his detailed tutorials on FACET.

6. REFERENCES
[1] A. Agogino and K. Tumer. Efficient evaluation functions for multi-rover systems. In The Genetic and Evolutionary Computation Conference, pages 1-12, Seattle, WA, June 2004.
[2] A. Agogino and K. Tumer. Multi agent reward analysis for learning in noisy domains. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multi-Agent Systems, Utrecht, Netherlands, July 2005.
[3] A. K. Agogino and K. Tumer. Handling communication restrictions and team formation in congestion games. Journal of Autonomous Agents and Multi-Agent Systems, 13(1):97-115, 2006.
[4] K. D. Bilimoria, B. Sridhar, G. B. Chatterji, K. S. Sheth, and S. R. Grabbe. FACET: Future ATM concepts evaluation tool. Air Traffic Control Quarterly, 9(1), 2001.
[5] Karl D. Bilimoria. A geometric optimization approach to aircraft conflict resolution. In AIAA Guidance, Navigation, and Control Conference, Denver, CO, 2000.
[6] Martin S. Eby and Wallace E. Kelly III. Free flight separation assurance using distributed algorithms. In Proceedings of the Aerospace Conference, Aspen, CO, 1999.
[7] FAA OPSNET data Jan-Dec 2005. US Department of Transportation website.
[8] S. Grabbe and B. Sridhar. Central east pacific flight routing. In AIAA Guidance, Navigation, and Control Conference and Exhibit, Keystone, CO, 2006.
[9] Jared C. Hill, F. Ryan Johnson, James K. Archibald, Richard L. Frost, and Wynn C. Stirling. A cooperative multi-agent approach to free flight. In AAMAS '05: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 1083-1090, New York, NY, USA, 2005. ACM Press.
[10] P. K. Menon, G. D. Sweriduk, and B.
Sridhar. Optimal strategies for free flight air traffic conflict resolution. Journal of Guidance, Control, and Dynamics, 22(2):202-211, 1999.
[11] 2006 NASA Software of the Year Award Nomination. FACET: Future ATM concepts evaluation tool. Case no. ARC-14653-1, 2006.
[12] M. Pechoucek, D. Sislak, D. Pavlicek, and M. Uller. Autonomous agents for air-traffic deconfliction. In Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multi-Agent Systems, Hakodate, Japan, May 2006.
[13] B. Sridhar and S. Grabbe. Benefits of direct-to in national airspace system. In AIAA Guidance, Navigation, and Control Conference, Denver, CO, 2000.
[14] B. Sridhar, T. Soni, K. Sheth, and G. B. Chatterji. Aggregate flow model for air-traffic management. Journal of Guidance, Control, and Dynamics, 29(4):992-997, 2006.
[15] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[16] C. Tomlin, G. Pappas, and S. Sastry. Conflict resolution for air traffic management. IEEE Transactions on Automatic Control, 43(4):509-521, 1998.
[17] K. Tumer and D. Wolpert, editors. Collectives and the Design of Complex Systems. Springer, New York, 2004.
[18] D. H. Wolpert and K.
Tumer. Optimal payoff functions for members of collectives. Advances in Complex Systems, 4(2/3):265-279, 2001.
the implications of these results and provide and map the required work to enable the FAA to reach its stated goal of increasing the traffic volume by threefold.","lvl-2":"Distributed Agent-Based Air Traffic Flow Management\nABSTRACT\nAir traffic flow management is one of the fundamental challenges facing the Federal Aviation Administration (FAA) today.\nThe FAA estimates that in 2005 alone, there were over 322,000 hours of delays at a cost to the industry in excess of three billion dollars.\nFinding reliable and adaptive solutions to the flow management problem is of paramount importance if the Next Generation Air Transportation Systems are to achieve the stated goal of accommodating three times the current traffic volume.\nThis problem is particularly complex as it requires the integration and\/or coordination of many factors including: new data (e.g., changing weather info), potentially conflicting priorities (e.g., different airlines), limited resources (e.g., air traffic controllers) and very heavy traffic volume (e.g., over 40,000 flights over the US airspace).\nIn this paper we use FACET--an air traffic flow simulator developed at NASA and used extensively by the FAA and industry--to test a multi-agent algorithm for traffic flow management.\nAn agent is associated with a fix (a specific location in 2D space) and its action consists of setting the separation required among the airplanes going though that fix.\nAgents use reinforcement learning to set this separation and their actions speed up or slow down traffic to manage congestion.\nOur FACET based results show that agents receiving personalized rewards reduce congestion by up to 45% over agents receiving a global reward and by up to 67% over a current industry approach (Monte Carlo estimation).\n1.\nINTRODUCTION\nThe efficient, safe and reliable management of our ever increasing air traffic is one of the fundamental challenges facing the aerospace industry today.\nOn a typical day, more than 40,000 
commercial flights operate within the US airspace [14]. In order to efficiently and safely route this air traffic, current traffic flow control relies on a centralized, hierarchical routing strategy that performs flow projections ranging from one to six hours. As a consequence, the system is slow to respond to developing weather or airport conditions, allowing potentially minor local delays to cascade into large regional congestions. In 2005, weather, routing decisions and airport conditions caused 437,667 delays, accounting for 322,272 hours of delays. The total cost of these delays was estimated by industry to exceed three billion dollars [7]. Furthermore, as the traffic flow increases, the current procedures increase the load on the system, the airports, and the air traffic controllers (more aircraft per region) without providing any of them with the means to shape traffic patterns beyond minor reroutes. The Next Generation Air Transportation Systems (NGATS) initiative aims to address these issues and account not only for a threefold increase in traffic, but also for the increasing heterogeneity of aircraft and decreasing restrictions on flight paths. Unlike many other flow problems where increasing traffic is to some extent absorbed by improved hardware (e.g., more servers with larger memories and faster CPUs for internet routing), the air traffic domain needs mainly algorithmic solutions, as the infrastructure (e.g., the number of airports) will not change significantly enough to impact the flow problem. There is therefore a strong need to explore new, distributed and adaptive solutions to the air flow control problem. An adaptive, multi-agent approach is an ideal fit to this naturally distributed problem, where the complex interaction among the aircraft, airports and traffic controllers renders a pre-determined centralized solution severely suboptimal at the first deviation from the expected plan. Though a truly distributed and adaptive solution
(e.g., free flight, where aircraft can choose almost any path) offers the most potential in terms of optimizing flow, it also represents the most radical departure from the current system. As a consequence, a shift to such a system presents tremendous difficulties both in terms of implementation (e.g., scheduling and airport capacity) and political fallout (e.g., impact on air traffic controllers). In this paper, we focus on an agent-based system that can be implemented readily. In this approach, we assign an agent to a "fix," a specific location in 2D space. Because aircraft flight plans consist of a sequence of fixes, this representation allows localized fixes (or agents) to have a direct impact on the flow of air traffic1. In this approach, the agents' actions are to set the separation that approaching aircraft are required to keep. This simple agent-action pair allows the agents to slow down or speed up local traffic and to have a significant impact on the overall air traffic flow. Agents learn the most appropriate separation for their location using a reinforcement learning (RL) algorithm [15]. In a reinforcement learning approach, the selection of the agent reward has a large impact on the performance of the system. In this work, we explore four different agent reward functions, and compare them to simulating various changes to the system and selecting the best solution (e.g., equivalent to a Monte Carlo search). The first explored reward is the system reward. The second reward is a personalized agent reward based on collectives [3, 17, 18]. The last two rewards are personalized rewards based on estimations that lower the computational burden of the reward computation. All three personalized rewards aim to align agent rewards with the system reward and ensure that the rewards remain sensitive to the agents' actions. Previous work in this domain falls into one of two distinct categories: the first-principles-based modeling approaches
used by domain experts [5, 8, 10, 13], and the algorithmic approaches explored by the learning and/or agents community [6, 9, 12]. Though our approach comes from the second category, we aim to bridge the gap by using FACET to test our algorithms, a simulator introduced and widely used (i.e., over 40 organizations and 5000 users) by work in the first category [4, 11]. The main contribution of this paper is to present a distributed adaptive air traffic flow management algorithm that can be readily implemented, and to test that algorithm using FACET. In Section 2, we describe the air traffic flow problem and the simulation tool, FACET. In Section 3, we present the agent-based approach, focusing on the selection of the agents and their action space along with the agents' learning algorithms and reward structures. In Section 4 we present results in domains with one and two congestions, explore different trade-offs of the system objective function, discuss the scaling properties of the different agent rewards, and discuss the computational cost of achieving certain levels of performance. Finally, in Section 5, we discuss the implications of these results and map out the work required to enable the FAA to reach its stated goal of increasing the traffic volume threefold.

2. AIR TRAFFIC FLOW MANAGEMENT

With over 40,000 flights operating within the United States airspace on an average day, the management of traffic flow is a complex and demanding problem. Not only are there concerns for the efficiency of the system, but also for fairness (e.g., different airlines), adaptability (e.g., developing weather patterns), reliability and safety (e.g., airport management). In order to address such issues, the management of this traffic flow occurs over four hierarchical levels:

1. Separation assurance (2-30 minute decisions);
1 We discuss how flight plans with few fixes can be handled in more detail in Section 2.
2. Regional flow (20 minutes to 2 hours);
3. National flow (1-8 hours); and
4. Dynamic airspace configuration (6 hours to 1 year).

Because of the strict guidelines and safety concerns surrounding aircraft separation, we will not address that control level in this paper. Similarly, because of the business and political impact of dynamic airspace configuration, we will not address the outermost flow control level either. Instead, we will focus on the regional and national flow management problems, restricting our impact to decisions with time horizons between twenty minutes and eight hours. The proposed algorithm will fit between long-term planning by the FAA and the very short-term decisions by air traffic controllers. The continental US airspace consists of 20 regional centers (handling 200-300 flights on a given day) and 830 sectors (handling 10-40 flights). The flow control problem has to address the integration of policies across these sectors and centers, account for the complexity of the system (e.g., over 5200 public use airports and 16,000 air traffic controllers) and handle changes to the policies caused by weather patterns. Two of the fundamental problems in addressing the flow problem are: (i) modeling and simulating such a large, complex system, as the fidelity required to provide reliable results is difficult to achieve; and (ii) establishing the method by which the flow management is evaluated, as directly minimizing the total delay may lead to inequities towards particular regions or commercial entities. Below, we discuss how we addressed both issues; namely, we present FACET, a widely used simulation tool, and discuss our system evaluation function.

Figure 1: FACET screenshot displaying traffic routes and air flow statistics.

2.1 FACET

FACET (Future ATM Concepts Evaluation Tool), a physics-based model of the US airspace, was developed to accurately model the complex air traffic flow problem [4]. It is based on propagating the trajectories of proposed flights forward in time. FACET
can be used to either simulate and display air traffic (a 24 hour slice with 60,000 flights takes 15 minutes to simulate on a 3 GHz, 1 GB RAM computer) or provide rapid statistics on recorded data (4D trajectories for 10,000 flights, including sector, airport, and fix statistics, in 10 seconds on the same computer) [11]. FACET is extensively used by the FAA, NASA and industry (over 40 organizations and 5000 users) [11]. FACET simulates air traffic based on flight plans and, through a graphical user interface, allows the user to analyze congestion patterns of different sectors and centers (Figure 1). FACET also allows the user to change the flow patterns of the aircraft through a number of mechanisms, including metering aircraft through fixes. The user can then observe the effects of these changes on congestion. In this paper, agents use FACET directly through "batch mode", where agents send scripts to FACET asking it to simulate air traffic based on metering orders imposed by the agents. The agents then compute their rewards based on the feedback received from FACET about the impact of these meterings.

2.2 System Evaluation

The system performance evaluation function we select focuses on delay and congestion but does not account for fairness towards different commercial entities. Instead it focuses on the amount of congestion in a particular sector and on the amount of measured air traffic delay. The linear combination of these two terms gives the full system evaluation function, G(z), as a function of the full system state z. More precisely, we have:

G(z) = -((1 - α) B(z) + α C(z)),     (1)

where B(z) is the total delay penalty for all aircraft in the system, and C(z) is the total congestion penalty. The relative importance of these two penalties is determined by the value of α, and we explore various trade-offs based on α in Section 4. The total delay, B, is a sum of delays over a set of
sectors S and is given by:

B(z) = Σ_{s∈S} B_s(z),

where

B_s(z) = Σ_t Θ(t - τ_s) k_{s,t} (t - τ_s),

where k_{s,t} is the number of aircraft in sector s at time t, τ_s is a predetermined time, and Θ(·) is the step function that equals 1 when its argument is greater than or equal to zero, and zero otherwise. Intuitively, B_s(z) provides the total number of aircraft that remain in a sector s past a predetermined time τ_s, and scales their contribution to the count by the amount by which they are late. In this manner B_s(z) provides a delay factor that not only accounts for all aircraft that are late, but also provides a scale by which to measure their "lateness". This definition is based on the assumption that most aircraft should have reached the sector by time τ_s and that aircraft arriving after this time are late. In this paper the value of τ_s is determined by assessing aircraft counts in the sector in the absence of any intervention or any deviation from predicted paths. Similarly, the total congestion penalty is a sum of the congestion penalties over the sectors of observation, S:

C(z) = Σ_{s∈S} C_s(z),

where

C_s(z) = a Σ_t Θ(k_{s,t} - c_s) e^{b(k_{s,t} - c_s)},

where a and b are normalizing constants, and c_s is the capacity of sector s as defined by the FAA. Intuitively, C_s(z) penalizes a system state where the number of aircraft in a sector exceeds the FAA's official sector capacity. Each sector capacity is computed using various metrics, which include the number of air traffic controllers available. The exponential penalty is intended to provide strong feedback to return the number of aircraft in a sector to below the FAA mandated capacities.

3. AGENT BASED AIR TRAFFIC FLOW

The multi-agent approach to air traffic flow management we present is predicated on adaptive agents taking independent actions that maximize the system evaluation function discussed above. To that end, there are four critical decisions that need to be made: agent selection, agent action set selection, agent learning algorithm selection and agent reward structure selection.

3.1
Agent Selection

Selecting the aircraft as agents is perhaps the most obvious choice for defining an agent. That selection has the advantage that agent actions can be intuitive (e.g., change of flight plan, increase or decrease of speed and altitude) and offer a high level of granularity, in that each agent can have its own policy. However, there are several problems with that approach. First, there are in excess of 40,000 aircraft in a given day, leading to a massively large multi-agent system. Second, as the agents would not be able to sample their state space sufficiently, learning would be prohibitively slow. As an alternative, we assign agents to individual ground locations throughout the airspace called "fixes." Each agent is then responsible for any aircraft going through its fix. Fixes offer many advantages as agents:

1. Their number can vary depending on need. The system can have as many agents as required for a given situation (e.g., agents coming "live" around an area with developing weather conditions).
2. Because fixes are stationary, collecting data and matching behavior to reward is easier.
3. Because aircraft flight plans consist of fixes, agents will have the ability to affect traffic flow patterns.
4. They can be deployed within the current air traffic routing procedures, and can be used as tools to help air traffic controllers rather than compete with or replace them.

Figure 2 shows a schematic of this agent-based system. Agents surrounding a congestion or weather condition affect the flow of traffic to reduce the burden on particular regions.

3.2 Agent Actions

The second issue that needs to be addressed is determining the action set of the agents. Again, an obvious choice may be for fixes to "bid" on aircraft, affecting their flight plans. Though appealing from a free flight perspective, that approach makes the flight plans too unreliable and significantly complicates the scheduling problem (e.g., arrival at airports and
the subsequent gate assignment process). Instead, we set the actions of an agent to determining the separation (distance between aircraft) that aircraft have to maintain when going through the agent's fix. This is known as setting the "Miles in Trail" or MIT. When an agent sets the MIT value to d, aircraft going towards its fix are instructed to line up and keep d miles of separation (though aircraft will always keep a safe distance from each other regardless of the value of d). When there are many aircraft going through a fix, the effect of issuing higher MIT values is to slow down the rate at which aircraft go through the fix. By increasing the value of d, an agent can limit the amount of air traffic downstream of its fix, reducing congestion at the expense of increasing the delays upstream.

Figure 2: Schematic of agent architecture. The agents corresponding to fixes surrounding a possible congestion become "live" and start setting new separation times.

3.3 Agent Learning

The objective of each agent is to learn the best values of d that will lead to the best system performance, G. In this paper we assume that each agent has a reward function and aims to maximize its reward using its own reinforcement learner [15] (though alternatives such as evolving neuro-controllers are also effective [1]). For complex delayed-reward problems, relatively sophisticated reinforcement learning systems such as temporal difference may have to be used. However, due to our agent selection and agent action set, the air traffic congestion domain modeled in this paper only needs to utilize immediate rewards. As a consequence, simple table-based immediate-reward reinforcement learning is used. Our reinforcement learner is equivalent to an ε-greedy Q-learner with a discount rate of 0 [15]. At every episode an agent takes an action and then receives a reward evaluating
that action. After taking action a and receiving reward R, an agent updates its Q table (which contains its estimate of the value for taking that action [15]) as follows:

Q(a) ← (1 - l) Q(a) + l R,

where l is the learning rate. At every time step the agent chooses the action with the highest table value with probability 1 - ε and chooses a random action with probability ε. In the experiments described in this paper, l is equal to 0.5 and ε is equal to 0.25. The parameters were chosen experimentally, though system performance was not overly sensitive to these parameters.

3.4 Agent Reward Structure

The final issue that needs to be addressed is selecting the reward structure for the learning agents. The first and most direct approach is to let each agent receive the system performance as its reward. However, in many domains such a reward structure leads to slow learning. We will therefore also set up a second set of reward structures based on agent-specific rewards. Given that agents aim to maximize their own rewards, a critical task is to create "good" agent rewards, or rewards that when pursued by the agents lead to good overall system performance. In this work we focus on difference rewards, which aim to provide a reward that is both sensitive to that agent's actions and aligned with the overall system reward [2, 17, 18].

3.4.1 Difference Rewards

Consider difference rewards of the form [2, 17, 18]:

D_i(z) = G(z) - G(z - z_i + c_i),

where z_i is the action of agent i.
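As an illustration only, the interplay between the ε-greedy table learner of Section 3.3 and a difference reward can be sketched as follows. Here G is a toy stand-in for the system evaluation (not the paper's delay/congestion penalties), and the MIT action values and the constant c_i = 0 are hypothetical:

```python
import random

random.seed(0)

EPSILON = 0.25    # exploration rate (value reported in the text)
LEARN_RATE = 0.5  # learning rate l (value reported in the text)
ACTIONS = [0, 10, 20, 30]  # hypothetical MIT values d, in miles

def G(actions, target=20):
    # Toy stand-in for the system evaluation G(z): rewards keeping the
    # total separation near a desired level (higher is better).
    return -abs(sum(actions) - target * len(actions))

def difference_reward(actions, i, c_i=0):
    # D_i = G(z) - G(z - z_i + c_i): replace agent i's action with the
    # fixed constant c_i and measure agent i's contribution to G.
    counterfactual = list(actions)
    counterfactual[i] = c_i
    return G(actions) - G(counterfactual)

class FixAgent:
    """Table-based, epsilon-greedy, immediate-reward learner (discount 0)."""
    def __init__(self):
        self.q = {a: 0.0 for a in ACTIONS}

    def act(self):
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)

    def update(self, action, reward):
        # Q(a) <- (1 - l) Q(a) + l R
        self.q[action] = (1 - LEARN_RATE) * self.q[action] + LEARN_RATE * reward

agents = [FixAgent() for _ in range(5)]
for episode in range(300):
    actions = [agent.act() for agent in agents]
    for i, agent in enumerate(agents):
        agent.update(actions[i], difference_reward(actions, i))
```

Because the counterfactual term removes the other agents' contributions, each agent's reward here responds mainly to its own action, which is the "cleaner signal" property discussed next.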
All the components of z that are affected by agent i are replaced with the fixed constant c_i 2. In many situations it is possible to use a c_i that is equivalent to taking agent i out of the system. Intuitively this causes the second term of the difference reward to evaluate the performance of the system without i, and therefore D evaluates the agent's contribution to the system performance. There are two advantages to using D: First, because the second term removes a significant portion of the impact of other agents in the system, it provides an agent with a "cleaner" signal than G. This benefit has been dubbed "learnability" (agents have an easier time learning) in previous work [2, 17]. Second, because the second term does not depend on the actions of agent i, any action by agent i that improves D also improves G. This property, which measures the amount of alignment between two rewards, has been dubbed "factoredness" in previous work [2, 17].

3.4.2 Estimates of Difference Rewards

Though providing a good compromise between aiming for system performance and removing the impact of other agents from an agent's reward, one issue that may plague D is computational cost. Because it relies on the computation of the counterfactual term G(z - z_i + c_i) (i.e., the system performance without agent i), it may be difficult or impossible to compute, particularly when the exact mathematical form of G is not known. Let us focus on G functions of the following form:

G(z) = G_f(f(z)), with f(z) = Σ_i f_i(z_i),

where each f_i is an unknown non-linear function. We assume that we can sample values from f(z), enabling us to compute G, but that we cannot sample from each f_i(z_i).

2 This notation uses zero padding and vector addition rather than concatenation to form full state vectors from partial state vectors. The vector "z_i" in our notation would be z_i e_i in standard vector notation, where e_i is a vector with a value of 1 in the ith component and zero everywhere else.
In addition, we assume that G_f is much easier to compute than f(z), or that we may not be able to compute f(z) directly at all and must sample it from a "black box" computation. This form of G matches our system evaluation in the air traffic domain. When we arrange agents so that each aircraft is typically only affected by a single agent, each agent's impact on the counts of the number of aircraft in a sector, k_{s,t}, will be mostly independent of the other agents. These values of k_{s,t} are the "f(z)s" in our formulation, and the penalty functions form "G_f". Note that given the aircraft counts, the penalty functions (G_f) can be computed in microseconds, while the aircraft counts (f) can only be computed by running FACET, taking on the order of seconds. To compute our counterfactual G(z - z_i + c_i) we need to compute:

G_f(f(z - z_i + c_i)) = G_f(f(z) - f_i(z_i) + f_i(c_i)).

Unfortunately, we cannot compute this directly as the values of f_i(z_i) are unknown. However, if agents take actions independently (an agent does not observe how other agents act before taking its own action), we can take advantage of the linear form of f(z) in the f_i s with the following equality:

E(f_{-i}(z_{-i}) | z_i) = E(f_{-i}(z_{-i}) | c_i),

where E(f_{-i}(z_{-i}) | z_i) is the expected value of all of the f s other than f_i given the value of z_i, and E(f_{-i}(z_{-i}) | c_i) is the expected value of all of the f s other than f_i given that the value of z_i is changed to c_i. We can then estimate f(z - z_i + c_i):

f(z - z_i + c_i) = f(z) - f_i(z_i) + f_i(c_i)
                 = f(z) - E(f(z) | z_i) + E(f(z) | c_i).

Therefore we can evaluate D_i = G(z) - G(z - z_i + c_i) as:

D_i^{est1}(z) = G_f(f(z)) - G_f(f(z) - E(f(z) | z_i) + E(f(z) | c_i)),

leaving us with the task of estimating the values of E(f(z) | z_i) and E(f(z) | c_i). These estimates can be computed by keeping a table of averages where we average the values of the observed f(z) for each value of z_i that we have seen. This estimate should improve as the number of samples increases. To improve our estimates, we can set c_i = E(z_i) and, if we make the mean squared approximation f(E(z)) ≈ E(f(z)),
then we can estimate G(z) - G(z - z_i + c_i) as:

D_i^{est2}(z) = G_f(f(z)) - G_f(f(z) - E(f(z) | z_i) + E(f(z))).

This formulation has the advantage that we have more samples at our disposal to estimate E(f(z)) than we do to estimate E(f(z) | c_i).

4. SIMULATION RESULTS

In this paper we test the performance of our agent-based air traffic optimization method on a series of simulations using the FACET air traffic simulator. In all experiments we test the performance of five different methods. The first method is Monte Carlo estimation, where random policies are created, with the best policy being chosen. The other four methods are agent-based methods where the agents are maximizing one of the following rewards:

1. The system reward, G(z), as defined in Equation 1.
2. The difference reward, D_i(z), assuming that agents can calculate counterfactuals.
3. An estimate of the difference reward, D_i^{est1}(z), where agents estimate the counterfactual using E(f(z) | z_i) and E(f(z) | c_i).
4. An estimate of the difference reward, D_i^{est2}(z), where agents estimate the counterfactual using E(f(z) | z_i) and E(f(z)).

These methods are first tested on an air traffic domain with 300 aircraft, where 200 of the aircraft are going through a single point of congestion over a four hour simulation. Agents are responsible for reducing congestion at this single point, while trying to minimize delay. The methods are then tested on a more difficult problem, where a second point of congestion is added, with the 100 remaining aircraft going through this second point of congestion. In all experiments the goal of the system is to maximize the system performance given by G(z) with the parameters a = 50, b = 0.3, τ_{s1} equal to 200 minutes and τ_{s2} equal to 175 minutes. These values of τ are obtained by examining the time at which most of the aircraft leave the sectors when no congestion control is being performed. Except where noted, the trade-off between congestion and lateness, α, is set to 0.5. In all experiments,
to make the agent results comparable to the Monte Carlo estimation, the best policies chosen by the agents are used in the results. All results are an average of thirty independent trials, with the differences in the mean (σ/√n) shown as error bars, though in most cases the error bars are too small to see.

Figure 3: Performance on single congestion problem, with 300 Aircraft, 20 Agents and α = .5.

4.1 Single Congestion

In the first experiment we test the performance of the five methods when there is a single point of congestion, with twenty agents. This point of congestion is created by setting up a series of flight plans that cause the number of aircraft in the sector of interest to be significantly more than the number allowed by the FAA. The results displayed in Figures 3 and 4 show the performance of all five algorithms on two different system evaluations. In both cases, the agent-based methods significantly outperform the Monte Carlo method. This result is not surprising since the agent-based methods intelligently explore their space, whereas the Monte Carlo method explores the space randomly.

Figure 4: Performance on single congestion problem, with 300 Aircraft, 20 Agents and α = .75.

Among the agent-based methods, agents using difference rewards perform better than agents using the system reward. Again this is not surprising, since with twenty agents, an agent directly trying to maximize the system reward has difficulty determining the effect of its actions on its own reward. Even if an agent takes an action that reduces congestion and lateness, other agents may simultaneously take actions that increase congestion and lateness, causing the agent to wrongly believe that its action was poor. In contrast, agents using the difference reward have more influence over the value of their own reward; therefore when an agent takes a good
action, the value of this action is more likely to be reflected in its reward. This experiment also shows that estimating the difference reward is not only possible, but also quite effective when the true value of the difference reward cannot be computed. While agents using the estimates do not achieve results as high as agents using the true difference reward, they still perform significantly better than agents using the system reward. Note, however, that the benefit of the estimated difference rewards is only present later in learning. Earlier in learning, the estimates are poor, and agents using the estimated difference rewards perform no better than agents using the system reward.

4.2 Two Congestions

In the second experiment we test the performance of the five methods on a more difficult problem with two points of congestion. On this problem the first region of congestion is the same as in the previous problem, and the second region of congestion is added in a different part of the country. The second congestion is less severe than the first one, so agents have to form different policies depending on which point of congestion they are influencing.

Figure 6: Performance on two congestion problem, with 300 Aircraft, 50 Agents and α = .5.

The results displayed in Figure 5 show that the relative performance of the five methods is similar to the single congestion case. Again the agent-based methods perform better than the Monte Carlo method, and the agents using difference rewards perform better than agents using the system reward. To verify that the performance improvement of our methods is maintained when there is a different number of agents, we perform additional experiments with 50 agents. The results displayed in Figure 6 show that indeed the relative performances of the methods are comparable when the number of agents is increased to 50. Figure 7 shows scaling results and demonstrates that the conclusions hold over a wide range of numbers of
agents.\nAgents using Dest2 perform slightly better than agents using Dest1 in all cases but for 50 agents.\nThis slight advantage stems from Dest2 providing the agents with a cleaner signal, since its estimate uses more data points.\n4.3 Penalty Tradeoffs\nThe system evaluation function used in the experiments is G(z) = \u2212((1 \u2212 \u03b1)D(z) + \u03b1C(z)), which comprises penalties for both congestion and lateness.\nThis evaluation function forces the agents to trade off these relative penalties depending on the value of \u03b1.\nFigure 5: Performance on two congestion problem, with 300 Aircraft, 20 Agents and \u03b1 = .5.\nFigure 7: Impact of number of agents on system performance.\nTwo congestion problem, with 300 Aircraft and \u03b1 = .5.\nWith high \u03b1 the optimization focuses on reducing congestion, while with low \u03b1 the system focuses on reducing lateness.\nTo verify that the results obtained above are not specific to a particular value of \u03b1, we repeat the experiment with 20 agents for \u03b1 = .75.\nFigure 8 shows that qualitatively the relative performance of the algorithms remains the same.\nNext, we perform a series of experiments where \u03b1 ranges from 0.0 to 1.0.\nFigure 9 shows the results, which lead to three interesting observations: \u2022 First, there is a zero congestion penalty solution.\nThis solution has agents enforce large MIT values to block all air traffic, which appears viable when the system evaluation does not account for delays.\nAll algorithms find this solution, though it is of little interest in practice due to the large delays it would cause.\n\u2022 Second, if the two penalties were independent, an optimal solution would be a line from the two end points.\nTherefore, unless D is far from being optimal, the two penalties are not independent.\nNote that for \u03b1 = 0.5 the difference between D and this hypothetical line is as large as it is anywhere else, making \u03b1 = 0.5 a reasonable choice for 
testing the algorithms in a difficult setting.\n\u2022 Third, Monte Carlo and G are particularly poor at handling multiple objectives.\nFor both algorithms, the performance degrades significantly for mid-ranges of \u03b1.\n4.4 Computational Cost\nThe results in the previous section show the performance of the different algorithms after a specific number of episodes.\nThose results show that D is significantly superior to the other algorithms.\nOne question that arises, though, is what computational overhead D puts on the system, and what results would be obtained if the additional computational expense of D is made available to the other algorithms.\nThe computation cost of the system evaluation G (Equation 1) is almost entirely dependent on the computation of the airplane counts for the sectors, kt,s, which need to be computed using FACET.\nFigure 9: Tradeoff Between Objectives on two congestion problem, with 300 Aircraft and 20 Agents.\nNote that Monte Carlo and G are particularly bad at handling multiple objectives.\nExcept when D is used, the values of k are computed once per episode.\nHowever, to compute the counterfactual term in D, if FACET is treated as a \"black box\", each agent would have to compute its own values of k for its counterfactual, resulting in n + 1 computations of k per episode.\nWhile it may be possible to streamline the computation of D with some knowledge of the internals of FACET, given the complexity of the FACET simulation, it is not unreasonable in this case to treat it as a black box.\nTable 1 shows the performance of the algorithms after 2100 G computations for each of the algorithms for the simulations presented in Figure 5, where there were 20 agents, 2 congestions and \u03b1 = .5.\nAll the algorithms except the fully computed D reach 2100 k computations at time step 2100.\nD however computes k once for the system, and then once for each agent, leading to 21 computations per time step.\nIt therefore reaches 2100 computations 
at time step 100.\nWe also show the results of the full D computation at t = 2100, denoted D44K, which needs 44100 computations of k.\nFigure 8: Performance on two congestion problem, with 300 Aircraft, 20 Agents and \u03b1 = .75.\nTable 1: System Performance for 20 Agents, 2 congestions and \u03b1 = .5, after 2100 G evaluations (except D44K)\nAlthough D44K provides the best result by a slight margin, it is achieved at a considerable computational cost.\nIndeed, the performance of the two D estimates is remarkable in this case as they were obtained with about twenty times fewer computations of k. Furthermore, the two D estimates significantly outperform the full D computation for a given number of computations of k and validate the assumptions made in Section 3.4.2.\nThis shows that for this domain, in practice it is more fruitful to perform more learning steps and approximate D than to perform fewer learning steps with the full D computation when we treat FACET as a black box.","keyphrases":["traffic flow","air traffic control","reinforc learn","reinforc learn","congest","multiag system","optim","futur atm concept evalu tool","new method of estim agent reward","deploy strategi"],"prmu":["P","P","P","P","P","M","U","U","M","U"]} {"id":"I-75","title":"Hypotheses Refinement under Topological Communication Constraints","abstract":"We investigate the properties of a multiagent system where each (distributed) agent locally perceives its environment. Upon perception of an unexpected event, each agent locally computes its favoured hypothesis and tries to propagate it to other agents, by exchanging hypotheses and supporting arguments (observations). However, we further assume that communication opportunities are severely constrained and change dynamically. In this paper, we mostly investigate the convergence of such systems towards global consistency. 
We first show that (for a wide class of protocols that we shall define), the communication constraints induced by the topology will not prevent the convergence of the system, on the condition that the system dynamics guarantees that no agent will ever be isolated forever, and that agents have unlimited time for computation and argument exchange. As this assumption cannot be made in most situations though, we then set up an experimental framework aiming at comparing the relative efficiency and effectiveness of different interaction protocols for hypotheses exchange. We study a critical situation involving a number of agents trying to escape from a burning building. The results reported here provide some insights regarding the design of optimal protocols for hypotheses refinement in this context.","lvl-1":"Hypotheses Refinement under Topological Communication Constraints \u2217 Gauvain Bourgne, Gael Hette, Nicolas Maudet, and Suzanne Pinson LAMSADE, Univ. Paris-Dauphine, France {bourgne,hette,maudet,pinson}@lamsade.dauphine.fr ABSTRACT We investigate the properties of a multiagent system where each (distributed) agent locally perceives its environment.\nUpon perception of an unexpected event, each agent locally computes its favoured hypothesis and tries to propagate it to other agents, by exchanging hypotheses and supporting arguments (observations).\nHowever, we further assume that communication opportunities are severely constrained and change dynamically.\nIn this paper, we mostly investigate the convergence of such systems towards global consistency.\nWe first show that (for a wide class of protocols that we shall define), the communication constraints induced by the topology will not prevent the convergence of the system, on the condition that the system dynamics guarantees that no agent will ever be isolated forever, and that agents have unlimited time for computation and argument exchange.\nAs this assumption cannot be made in most situations though, we 
then set up an experimental framework aiming at comparing the relative efficiency and effectiveness of different interaction protocols for hypotheses exchange.\nWe study a critical situation involving a number of agents trying to escape from a burning building.\nThe results reported here provide some insights regarding the design of optimal protocols for hypotheses refinement in this context.\nCategories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent systems General Terms Theory, Experimentation 1.\nINTRODUCTION We consider a multiagent system where each (distributed) agent locally perceives its environment, and we assume that some unexpected event occurs in that system.\nIf each agent computes its favoured hypothesis only locally, it is only natural to assume that agents will seek to coordinate and refine their hypotheses by confronting their observations with other agents.\nIf, in addition, the communication opportunities are severely constrained (for instance, agents can only communicate when they are close enough to some other agent), and dynamically changing (for instance, agents may change their locations), it becomes crucial to carefully design protocols that will allow agents to converge to some desired state of global consistency.\nIn this paper we exhibit some sufficient conditions on the system dynamics and on the protocol\/strategy structures that allow us to guarantee that property, and we experimentally study some contexts where (some of) these assumptions are relaxed.\nWhile problems of diagnosis are among the venerable classics in the AI tradition, their multiagent counterparts have much more recently attracted some attention.\nRoos and colleagues [8, 9] in particular study a situation where a number of distributed entities try to come up with a satisfactory global diagnosis of the whole system.\nThey show in particular that the number of messages required to establish this global diagnosis is 
bound to be prohibitive, unless the communication is enhanced with some suitable protocol.\nHowever, they do not put any restrictions on agents' communication options, nor do they assume that the system is dynamic.\nThe benefits of enhancing communication with supporting information to make convergence to a desired global state of a system more efficient have often been put forward in the literature.\nThis is for instance one of the main ideas underlying the argumentation-based negotiation approach [7], where the desired state is a compromise between agents with conflicting preferences.\nMany of these works however make the assumption that this approach is beneficial to start with, and study the technical facets of the problem (or instead emphasize other advantages of using argumentation).\nNotable exceptions are the works of [3, 4, 2, 5], which studied the efficiency of argumentation in contexts different from ours.\nThe rest of the paper is organized as follows.\nSection 2 specifies the basic elements of our model, and Section 3 goes on to present the different protocols and strategies used by the agents to exchange hypotheses and observations.\nWe pay special attention to clearly emphasizing the conditions on the system dynamics and protocols\/strategies that will be exploited in the rest of the paper.\n998 978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nSection 4 details one of the main results of the paper, namely the fact that under the aforementioned conditions, the constraints that we put on the topology will not prevent the convergence of the system towards global consistency, on the condition that no agent ever gets completely lost forever in the system, and that unlimited time is allowed for computation and argument exchange.\nWhile the conditions on protocols and strategies are fairly mild, it is also clear that these system requirements look much more problematic, even frankly unrealistic, in critical situations where distributed approaches are precisely 
advocated.\nTo get a clearer picture of the situation induced when time is a critical factor, we have set up an experimental framework that we introduce and discuss in Section 5.\nThe critical situation involves a number of agents trying to escape from a burning building.\nThe results reported here show that the effectiveness of argument exchange crucially depends upon the nature of the building, and provide some insights regarding the design of optimal protocols for hypotheses refinement in this context.\n2.\nBASIC NOTIONS We start by defining the basic elements of our system.\nEnvironment Let O be the (potentially infinite) set of possible observations.\nWe assume the sensors of our agents to be perfect, hence the observations to be certain.\nLet H be the set of hypotheses, uncertain and revisable.\nLet Cons(h, O) be the consistency relation, a binary relation between a hypothesis h \u2208 H and a set of observations O \u2286 O.\nIn most cases, Cons will refer to the classical consistency relation; however, we may overload its meaning and add some additional properties to that relation (in which case we will mention it).\nThe environment may include some dynamics, and change over the course of time.\nWe define below sequences of time points to deal with it: Definition 1 (Sequence of time points).\nA sequence of time points t1, t2, ... , tn from t is an ordered set of time points t1, t2, ... , tn such that t1 \u2265 t and \u2200i \u2208 [1, n \u2212 1], ti+1 \u2265 ti.\nAgent We take a system populated by n agents a1, ... 
, an.\nEach agent is defined as a tuple (F, Oi, hi), where: \u2022 F, the set of facts, common knowledge to all agents.\n\u2022 Oi \u2208 2O , the set of observations made by the agent so far.\nWe assume a perfect memory, hence this set grows monotonically.\n\u2022 hi \u2208 H, the favourite hypothesis of the agent.\nA key notion governing the formation of hypotheses is that of consistency, defined below: Definition 2 (Consistency).\nWe say that: \u2022 An agent is consistent (Cons(ai)) iff Cons(hi, Oi) (that is, its hypothesis is consistent with its observation set).\n\u2022 An agent ai is consistent with a partner agent aj (Cons(ai, aj)) iff Cons(ai) and Cons(hi, Oj) (that is, this agent is consistent and its hypothesis can explain the observation set of the other agent).\n\u2022 Two agents ai and aj are mutually consistent (MCons(ai, aj)) iff Cons(ai, aj) and Cons(aj, ai).\n\u2022 A system is consistent iff \u2200(i, j)\u2208[1, n]2 it is the case that MCons(ai, aj).\nTo ensure its consistency, each agent is equipped with an abstract reasoning machinery that we shall call the explanation function Eh.\nThis (deterministic) function takes a set of observations and returns a single preferred hypothesis (2O \u2192 H).\nWe assume h = Eh(O) to be consistent with O by definition of Eh, so using this function on its observation set to determine its favourite hypothesis is a sure way for the agent to achieve consistency.\nNote however that a hypothesis does not need to be generated by Eh to be consistent with an observation set.\nAs a concrete example of such a function, and one of the main inspirations of this work, one can cite the Theorist reasoning system [6] - as long as it is coupled with a filter selecting a single preferred theory among the ones initially selected by Theorist.\nNote also that hi may only be modified as a consequence of the application of Eh.\nWe refer to this as the autonomy of the agent: no other agent can directly impose a given hypothesis on an agent.\nAs a 
consequence, only a new observation (be it a new perception, or an observation communicated by a fellow agent) can result in a modification of its preferred hypothesis hi (but not necessarily, of course).\nWe finally define a property of the system that we shall use in the rest of the paper: Definition 3 (Bounded Perceptions).\nA system involves bounded perceptions for agents iff \u2203n0 s.t. \u2200t, | \u222aN i=1 Oi| \u2264 n0.\n(That is, the number of observations to be made by the agents in the system is not infinite.)\nAgent Cycle Now we need to see how these agents will evolve and interact in their environment.\nIn our context, agents evolve in a dynamic environment, and we classically assume the following system cycle: 1.\nEnvironment dynamics: the environment evolves according to the defined rules of the system dynamics.\n2.\nPerception step: agents get perceptions from the environment.\nThese perceptions are typically partial (e.g. the agent can only see a portion of a map).\n3.\nReasoning step: agents compare perceptions with predictions, seek explanations for (potential) difference(s), refine their hypothesis, draw new conclusions.\n4.\nCommunication step: agents can communicate hypotheses and observations with other agents through a defined protocol.\nAny agent can only be involved in one communication with another agent per step.\n5.\nAction step: agents do some practical reasoning using the models obtained from the previous steps and select an action.\nThey can then modify the environment by executing it.\nThe communication of the agents will be further constrained by topological considerations.\nAt a given time, an agent will only be able to communicate with a number of neighbours.\nIts connections with these other agents may evolve with its situation in the environment.\nTypically, an agent can only communicate with agents that it can sense, but one could 
imagine evolving topological constraints on communication based on a network of communications between agents where the links are not always active.\nCommunication In our system, agents will be able to communicate with each other.\nHowever, due to the aforementioned topological constraints, they will not be able to communicate with any agent at any time.\nWho an agent can communicate with will be defined dynamically (for instance, this can be a consequence of the agents being close enough to get in touch).\nWe will abstractly denote by C(ai, aj, t) the communication property, in other words, the fact that agents ai and aj can communicate at time t (note that this relation is assumed to be symmetric, but of course not transitive).\nWe are now in a position to define two essential properties of our system.\nDefinition 4 (Temporal Path).\nThere exists a temporal communication path at horizon tf (noted Ltf (aI , aJ )) between aI and aJ iff there exists a sequence of time points t1, t2, ... , tn+1 from tf and a sequence of agents k1, k2, ... , kn s.t. 
(i) C(aI , ak1 , t1), (ii) C(akn , aJ , tn+1), (iii) \u2200i \u2208 [1, n \u2212 1], C(aki , aki+1 , ti+1).\nIntuitively, what this property says is that it is possible to find a temporal path in the future that would allow agents aI and aJ to be linked via a sequence of intermediary agents.\nNote that the time points are not necessarily successive, and that the sequence of agents may involve the same agents several times.\nDefinition 5 (Temporal Connexity).\nA system is temporally connex iff \u2200t, \u2200(i, j)\u2208[1, n]2, Lt(ai, aj).\nIn short, a temporally connex system guarantees that any agent will be able to communicate with any other agent, no matter how long it might take to do so, at any time.\nTo put it another way, it is never the case that an agent will be isolated forever from another agent of the system.\nWe will next discuss the details of how communication concretely takes place in our system.\nRemember that in this paper, we only consider the case of bilateral exchanges (an agent can only speak to a single other agent), and that we also assume that any agent can only engage in a single exchange in a given round.\n3.\nPROTOCOLS AND STRATEGIES In this section, we discuss the requirements of the interaction protocols that govern the exchange of messages between agents, and provide some example instantiations of such protocols.\nTo clarify the presentation, we distinguish two levels: the local level, which is concerned with the regulation of bilateral exchanges; and the global level, which essentially regulates the way agents can actually engage in a conversation.\nAt each level, we separate what is specified by the protocol, and what is left to agents' strategies.\nLocal Protocol and Strategies We start by inspecting local protocols and strategies that will regulate the communication between the agents of the system.\nAs we limit ourselves to bilateral communication, these protocols will simply involve two agents.\nSuch a protocol will have to meet one basic requirement 
to be satisfactory.\n\u2022 consistency (CONS) - a local protocol has to guarantee the mutual consistency of agents upon termination (which implies termination, of course).\nFigure 1: A Hypotheses Exchange Protocol [1]\nOne example of such a protocol is the one described in [1], pictured in Fig. 1.\nTo further illustrate how such a protocol can be used by agents, we give some details on a possible strategy: upon receiving a hypothesis h1 (propose(h1) or counterpropose(h1)) from a1, agent a2 is in state 2 and has the following possible replies: counterexample (if the agent knows an example contradicting the hypothesis, or not explained by this hypothesis), challenge (if the agent lacks evidence to accept this hypothesis), counterpropose (if the agent agrees with the hypothesis but prefers another one), or accept (if it is indeed as good as its favourite hypothesis).\nThis strategy guarantees, among other properties, the eventual mutual logical consistency of the involved agents [1].\nGlobal Protocol The global protocol regulates the way bilateral exchanges will be initiated between agents.\nAt each turn, agents will concurrently send one weighted request to communicate to other agents.\nThis weight is a value measuring the agent's willingness to converse with the targeted agent (in practice, this can be based on different heuristics, but we shall make some assumptions on agents' strategies, see below).\nSending such a request is a kind of conditional commitment for the agent.\nAn agent sending a weighted request commits to engage in conversation with the target if it does not itself receive and accept another request.\nOnce all requests have been received, each agent replies with either an accept or a reject.\nBy answering with an accept, an agent makes a full commitment to engage in conversation with the sender.\nTherefore, it can only send one accept in a given round, as an agent can only participate in one conversation per time step.\nWhen all responses have 
been received, each agent receiving an accept can either initiate a conversation using the local protocol or send a cancel if it has accepted another request.\nAt the end of all the bilateral exchanges, the agents engaged in conversation are discarded from the protocol.\nThen each of the remaining agents resends a request and the process iterates until no more requests are sent.\nGlobal Strategy We now define four requirements for the strategies used by agents, depending on their role in the protocol: two are concerned with the requestee role (how to decide whom the agent wishes to communicate with?), the other two with the responder role (how to decide which communication request to accept or not?).\n\u2022 Willingness to solve inconsistencies (SOLVE) - agents want to communicate with any other agents unless they know they are mutually consistent.\n\u2022 Focus on solving inconsistencies (FOCUS) - agents do not request communication with an agent with whom they know they are mutually consistent.\n\u2022 Willingness to communicate (COMM) - agents cannot refuse a weighted communication request, unless they have just received or sent a request with a greater weight.\n\u2022 Commitment to communication request (REQU) - agents cannot accept a weighted communication request if they have themselves sent a communication request with a greater weight.\nTherefore, they will not cancel their request unless they have received a communication request with a greater weight.\nNow the protocol structure, together with the properties COMM+REQU, ensures that a request can only be rejected if its target agent engages in communication with another agent.\nSuppose indeed that agent ai wants to communicate with aj by sending a request with weight w. 
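The round mechanics of the global protocol can be illustrated with a minimal Python sketch (our own illustration, not the authors' implementation; agent names and weights are invented). Resolving the pending requests in decreasing order of weight mirrors the COMM and REQU requirements, under which the maximal-weight request is always accepted and not cancelled:

```python
# Minimal sketch of one round of the global protocol.  Each agent has
# sent a single weighted request; processing requests by decreasing
# weight reflects the guarantee that an agent only rejects a request
# when it engages in a conversation of greater weight.

def match_round(requests):
    """requests: dict mapping (sender, target) -> weight.
    Returns the list of pairs that start a conversation this round."""
    engaged, pairs = set(), []
    for (sender, target), w in sorted(requests.items(),
                                      key=lambda kv: -kv[1]):
        # A pair can converse only if both agents are still free.
        if sender not in engaged and target not in engaged:
            engaged.update({sender, target})
            pairs.append((sender, target))
    return pairs

requests = {("a1", "a2"): 3, ("a2", "a3"): 5, ("a3", "a1"): 1}
print(match_round(requests))  # the weight-5 request is matched first
```

In this toy round the maximal-weight request (a2, a3) is satisfied, and under the full protocol the remaining agents would resend their requests in the next iteration until no requests remain.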
COMM guarantees that an agent receiving a weighted request will either accept this communication, accept a communication with a greater weight, or wait for the answer to a request with a greater weight.\nThis ensures that the request with maximal weight will be accepted and not cancelled (as REQU ensures that an agent sending a request can only cancel it if it accepts another request with a greater weight).\nTherefore at least two agents will engage in conversation per round of the global protocol.\nAs the protocol ensures that ai can resend its request while aj is not engaged in a conversation, there will be a turn in which aj must engage in a conversation, either with ai or another agent.\nThese requirements concern request sending and acceptance, but agents also need some strategy of weight attribution.\nWe describe below an altruist strategy, used in our experiments.\nBeing cooperative, an agent may want to know more about the communication wishes of other agents in order to improve the overall allocation of exchanges to agents.\nA context request step is then added to the global protocol.\nBefore sending their chosen weighted request, agents attribute a weight to all agents they are prepared to communicate with, according to some internal factors.\nIn the simplest case, this weight will be 1 for all agents with whom the agent is not sure of being mutually consistent (ensuring SOLVE), other agents not being considered for communication (ensuring FOCUS).\nThe agent then sends a context request to all agents with whom communication is considered.\nThis request also provides information about the sender (list of considered communications along with their weight).\nAfter reception of all the context requests, agents will either reply with a deny, if they are already engaged in a conversation (in which case, the requesting agent will not consider communication with them anymore in this turn), or an inform giving the requester information about the requests it has sent and 
received.\nWhen all replies have been received, each agent can calculate the weight of all requests concerning it.\nIt does so by subtracting from the weight of its request the weight of all requests concerning either it or its target (that is, the final weight of the request from ai to aj is Wi,j = wi,j + wj,i \u2212 (\u03a3k\u2208R(i)\u2212{j} wi,k + \u03a3k\u2208S(i)\u2212{j} wk,i + \u03a3k\u2208R(j)\u2212{i} wj,k + \u03a3k\u2208S(j)\u2212{i} wk,j), where wi,j is the weight of the request of ai to aj, R(i) is the set of indices of agents having received a request from ai, and S(i) is the set of indices of agents having sent a request to ai).\nIt then finally sends a weighted request to the agent maximising this weight (or waits for a request) as described in the global protocol.\n4.\n(CONDITIONAL) CONVERGENCE TO GLOBAL CONSISTENCY In this section we will show that the requirements regarding protocols and strategies just discussed will be sufficient to ensure that the system will eventually converge towards global consistency, under some conditions.\nWe first show that, if two agents are not mutually consistent at some time, then there will necessarily be a time in the future such that an agent will learn a new observation, be it because it is new for the system, or by learning it from another agent.\nLemma 1.\nLet S be a system populated by n agents a1, a2, ..., an, temporally connex, and involving bounded perceptions for these agents.\nLet n1 be the sum of cardinalities of the intersections of pairwise observation sets.\n(n1 = \u03a3(i,j)\u2208[1,n]2 |Oi \u2229 Oj|) Let n2 be the cardinality of the union of all agents' observation sets.\n(n2 = | \u222aN i=1 Oi|).\nIf \u00acMCons(ai, aj) at time t0, there is necessarily a time t > t0 s.t. either n1 or n2 will increase.\nProof.\nSuppose that there exist a time t0 and indices (i, j) s.t. 
\u00acMCons(ai, aj).\nWe will use mt0 = \u03a3(k,l)\u2208[1,n]2 \u03b5Comm(ak, al, t0) where \u03b5Comm(ak, al, t0) = 1 if ak and al have communicated at least once since t0, and 0 otherwise.\nTemporal connexity guarantees that there exist t1, ..., tm+1 and k1, ..., km s.t. C(ai, ak1 , t1), C(akm , aj, tm+1), and \u2200p \u2208 [1, m], C(akp , akp+1 , tp).\nClearly, if MCons(ai, ak1 ), MCons(akm , aj) and \u2200p, MCons(akp , akp+1 ), we have MCons(ai, aj), which contradicts our hypothesis (MCons being transitive, MCons(ai, ak1 ) \u2227 MCons(ak1 , ak2 ) implies MCons(ai, ak2 ), and so on till MCons(ai, akm ) \u2227 MCons(akm , aj), which implies MCons(ai, aj)).\nAt least two agents are then necessarily inconsistent (\u00acMCons(ai, ak1 ), or \u00acMCons(akm , aj), or \u2203p0 s.t. \u00acMCons(akp0 , akp0+1 )).\nLet ak and al be these two neighbours at a time t' > t0.1\nThe SOLVE property ensures that either ak or al will send a communication request to the other agent at time t'.\nAs shown before, this in turn ensures that at least one of these agents will be involved in a communication.\nThen there are two possibilities: (case i) ak and al communicate at time t'.\nIn this case, we know that \u00acMCons(ak, al).\nThis and the CONS property ensure that at least one of the agents must change its hypothesis, which in turn, since agents are autonomous, implies at least one exchange of observation.\nBut then |Ok \u2229 Ol| is bound to increase: n1(t') > n1(t0).\n1 Strictly speaking, the transitivity of MCons only ensures that ak and al are inconsistent at a time t \u2265 t0 that can be different from the time t' at which they can communicate.\nBut if they become consistent between t and t' (or inconsistent between t and t'), it means that at least one of them has changed its hypothesis between t and t', that is, after t0.\nWe can then apply the reasoning of case iib.\n(case ii) ak 
communicates with ap at time t'.\nWe then have again two possibilities: (case iia) ak and ap did not communicate since t0.\nBut then \u03b5Comm(ak, ap, t0) had value 0 and takes value 1.\nHence mt0 increases.\n(case iib) ak and ap did communicate at some time t0' > t0.\nThe CONS property of the protocol ensures that MCons(ak, ap) at that time.\nNow the fact that they communicate and FOCUS implies that at least one of them did change its hypothesis in the meantime.\nThe fact that agents are autonomous implies in turn that a new observation (perceived or received from another agent) necessarily provoked this change.\nThe latter case would ensure the existence of a time t'' > t0 and an agent aq s.t. either |Op \u2229 Oq| or |Ok \u2229 Oq| increases by 1 at that time (implying n1(t'') > n1(t0)).\nThe former case means that the agent gets a new perception o at time t''.\nIf that observation was unknown in the system before, then n2(t'') > n2(t0).\nIf some agent aq already knew this observation before, then either Op \u2229 Oq or Ok \u2229 Oq increases by 1 at time t'' (which implies that n1(t'') > n1(t0)).\nHence, \u00acMCons(ai, aj) at time t0 guarantees that either: \u2212 \u2203t' > t0 s.t. n1(t') > n1(t0); or \u2212 \u2203t' > t0 s.t. n2(t') > n2(t0); or \u2212 \u2203t' > t0 s.t. mt0 increases by 1 at time t'.\nBy iterating the reasoning with t' (but keeping t0 as the time reference for mt0 ), we can eliminate the third case (mt0 is an integer bounded by n^2, which means that after a maximum of n^2 iterations, we will necessarily be in one of the two other cases).\nAs a result, we have proven that if \u00acMCons(ai, aj) at time t0, there is necessarily a time t s.t. 
either n1 or n2 will increase.\nTheorem 1 (Global consistency).\nLet S be a system populated by n agents a1, a2, ..., an, temporally connex, and involving bounded perceptions for these agents.\nLet Cons(ai, aj) be a transitive consistency property.\nThen any protocol and strategies satisfying properties CONS, SOLVE, FOCUS, COMM and REQU guarantee that the system will converge towards global consistency.\nProof.\nFor the sake of contradiction, let us assume \u2203I, J \u2208 [1, N] s.t. \u2200t, \u2203t0 > t s.t. \u00acCons(aI , aJ , t0).\nUsing the lemma, this implies that \u2203t' > t0 s.t. either n1(t') > n1(t0) or n2(t') > n2(t0).\nBut we can apply the same reasoning taking t = t', which would give us t1 > t' > t0 s.t. \u00acCons(aI , aJ , t1), which gives us t'' > t1 s.t. either n1(t'') > n1(t1) or n2(t'') > n2(t1).\nBy successive iterations we can then construct a sequence t0, t1, ..., tn, which can be divided into two sub-sequences t0, t1, ..., tn and t0', t1', ..., tn' s.t. n1(t0) < n1(t1) < ... < n1(tn) and n2(t0') < n2(t1') < ... 
< n2(tn′). One of these sub-sequences has to be infinite. However, n1(ti) and n2(ti′) are strictly increasing, integer-valued, and bounded, which implies that both sub-sequences are finite. Contradiction.

What the previous result essentially shows is that, in a system where no agent will be isolated from the rest of the agents forever, only very mild assumptions on the protocols and strategies used by the agents suffice to guarantee convergence towards system consistency in a finite amount of time (although it might take very long). Unfortunately, in many critical situations, it will not be possible to assume this temporal connexity. As distributed approaches such as the one advocated in this paper are precisely often presented as a good way to tackle problems of reliability, or problems of dependence on a center, that are of utmost importance in these critical applications, it is certainly interesting to further explore how such a system behaves when we relax this assumption.

5. EXPERIMENTAL STUDY

This experiment involves agents trying to escape from a burning building. The environment is described as a spatial grid with a set of walls and (thankfully) some exits. Time and space are considered discrete. Time is divided into rounds. Agents are localised by their position on the spatial grid. These agents can move and communicate with other agents. In a round, an agent can move by one cell in any of the four cardinal directions, provided it is not blocked by a wall. In this application, agents communicate with any other agent (but, recall, a single one per round), given that this agent is in view, and that they have not yet exchanged their current favoured hypothesis. Suddenly, a fire erupts in these premises. From this moment, the fire propagates: each round, from each cell on fire, the fire propagates in the four directions. However, the fire cannot propagate through a wall. If the fire propagates into a cell where an agent is positioned, that agent burns and is considered
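The propagation rule above can be sketched as a simple grid update (a minimal sketch: the function name is ours, and walls are modelled here as blocked cells rather than as barriers between adjacent cells, which simplifies the paper's setting):

```python
def propagate_fire(burning, walls, width, height):
    """One round of fire propagation: fire spreads to the four
    cardinal neighbours of every burning cell, unless the target
    cell is a wall or lies outside the grid.  (Walls as blocked
    cells is a simplifying assumption.)"""
    new_burning = set(burning)
    for (x, y) in burning:
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and (nx, ny) not in walls:
                new_burning.add((nx, ny))
    return new_burning
```

Iterating this function once per round reproduces the discrete spread described above; an agent standing on a cell that enters the burning set is dead, one standing on an exit cell is saved.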
dead. It can of course no longer move nor communicate. If an agent gets to an exit, it is considered saved, and can no longer be burned. Agents know the environment and the rules governing its dynamics, that is, they know the map as well as the rules of fire propagation described previously. They also locally perceive this environment, but cannot see further than 3 cells away, in any direction. Walls also block the line of view, preventing agents from seeing behind them. Within their sight, agents can see other agents and whether or not the cells they see are on fire. All these perceptions are memorised. We now show how this instantiates the abstract framework presented in the paper.

• O = {Fire(x, y, t), NoFire(x, y, t), Agent(ai, x, y, t)}. Observations can then be positive (o ∈ P(O) iff ∃h ∈ H s.t. h |= o) or negative (o ∈ N(O) iff ∃h ∈ H s.t. h |= ¬o).

• H = {FireOrigin(x1, y1, t1) ∧ ... ∧ FireOrigin(xl, yl, tl)}. Hypotheses are conjunctions of FireOrigins.

• The consistency relation Cons(h, O) satisfies:
- coherence: ∀o ∈ N(O), h |= ¬o;
- completeness: ∀o ∈ P(O), h |= o;
- minimality: for all h′ ∈ H, if h′ is coherent and complete for O, then h is preferred to h′ according to the preference relation (h ≤p h′).²

² ≤p selects first the minimal number of origins, then the most recent ones (least preemptive strategy [6]), then uses some arbitrary fixed ranking to discriminate ex-aequo. The resulting relation is a total order, hence minimality implies that there is a single h s.t. Cons(O, h) for a given O. This in turn means that MCons(ai, aj) iff Cons(ai), Cons(aj), and hi = hj. This relation is then transitive and symmetric.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07), p. 1002

• Eh takes O as argument and returns the minimum, w.r.t. ≤p, of the coherent and complete hypotheses for O.

5.1 Experimental Evaluation

We will classically (see e.g. [3, 4]) assess the effectiveness and efficiency of the different interaction protocols.

Effectiveness of a protocol. The proportion of agents surviving the fire, over the initial number of agents involved in the experiment, determines the effectiveness of a given protocol. If this value is high, the protocol has been effective at propagating the information and/or at allowing the agents to refine their hypotheses and determine the best way to the exit.

Efficiency of a protocol. Typically, the use of supporting information involves a communication overhead. We assume here that the efficiency of a given protocol is characterised by the data flow it induces. In this paper we only discuss this aspect w.r.t. local protocols. The main measure that we use is the mean total size of the messages exchanged by agents per exchange (hence taking into account both the number of messages and their actual size, because messages may happen to be very big, containing e.g.
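The preference relation ≤p of footnote 2 can be sketched as a sort key (a minimal sketch: the names are ours, hypotheses are modelled as lists of (x, y, t) origin triples, and reading "most recent" as "later ignition times are preferred" is our assumption):

```python
from typing import List, Tuple

Origin = Tuple[int, int, int]  # a FireOrigin(x, y, t)

def preference_key(h: List[Origin]):
    """Key implementing <=p: fewest origins first, then the most
    recent origins (later t preferred, our reading of the least
    preemptive strategy), then an arbitrary fixed ranking (here,
    lexicographic order on the triples) to break ties."""
    times = sorted((t for (_, _, t) in h), reverse=True)
    return (len(h),               # minimal number of origins
            [-t for t in times],  # then most recent origins
            sorted(h))            # then fixed arbitrary ranking

def select_hypothesis(candidates: List[List[Origin]]) -> List[Origin]:
    # Eh: the minimum w.r.t. <=p among the coherent and complete
    # candidate hypotheses for the current observation set.
    return min(candidates, key=preference_key)
```

Because the key induces a total order, `select_hypothesis` always returns a unique favourite, which is what makes MCons reduce to equality of the agents' current hypotheses.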
a large number of observations, which could counter-balance a low number of messages).

5.2 Experimental Settings

The chosen experimental settings are the following:

• Environmental topology. The performance of information propagation is highly constrained by the topology of the environment. The perception skills of the agents depend on its openness: with a large number of walls, the perceptions of agents are limited, and so is the number of possible inter-agent communications, whereas an open environment provides optimal possibilities of perception and information propagation. Thus, we propose a topological index (see below) as a common basis to characterize the environments (maps) used during the experiments. The topological index (TI) is the ratio of the number of cells that can be perceived by agents, summed over all possible positions, divided by the number of cells that would be perceived from the same positions but without any walls (the closer to 1, the more open the environment). We also use two additional, more classical [10] measures: the characteristic path length³ (CPL) and the clustering coefficient⁴ (CC).

• Number of agents. The propagation of information also depends on the initial number of agents involved in an experiment. For instance, the more agents, the more potential communications there are. This means that there will be more potential for propagation, but also that the bilateral exchange restriction will be more crucial.

³ The CPL is the median of the means of the shortest path lengths connecting each node to all other nodes.
⁴ The CC characterises the isolation degree of a region of an environment in terms of accessibility (number of roads still usable to reach this region).

Map    T.I. (%)   C.P.L.   C.C.
69-1   69.23      4.5      0.69
69-2   68.88      4.38     0.65
69-3   69.80      4.25     0.67
53-1   53.19      5.6      0.59
53-2   53.53      6.38     0.54
53-3   53.92      6.08     0.61
38-1   38.56      8.19     0.50
38-2   38.56      7.3      0.50
38-3   38.23      8.13     0.50

Table 1: Topological characteristics of the maps

• Initial positions of the agents. The initial positions of the agents have a significant influence on the overall behaviour of an instance of our system: being close to an exit will (in general) ease the escape.

5.3 Experimental environments

We chose to run experiments on three very different topological indexes (69% for open environments, 53% for mixed environments, and 38% for labyrinth-like environments).

Figure 2: Two maps (left: TI = 69%, right: TI = 38%)

We designed three different maps for each index (Fig. 2 shows two of them), containing the same maximum number of agents (36 agents), with a maximum density of one agent per cell, the same number of exits, and a similar fire origin (e.g. starting time and position). The three different maps of a given index are designed as follows. The first map is a model of an existing building floor. The second map has the same enclosure, exits and fire origin as the first one, but the number and location of walls differ (wall locations are generated by a heuristic which randomly creates walls on the spatial grid such that no fully closed room is created and no exit is blocked). The third map is characterised by a geometrical enclosure in which wall locations are also generated with the aforementioned heuristic. Table 1 summarizes the different topological measures characterizing these maps. It is worth pointing out that the values confirm the relevance of TI (maps with a high TI have a low CPL and a high CC). However, the CPL and CC allow to further refine the difference between the maps, e.g.
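The topological index defined in Section 5.2 can be sketched as follows (a minimal sketch: the names are ours, and the line of view is approximated with a Bresenham ray between cells, which may differ from the paper's exact perception rule):

```python
def bresenham(a, b):
    """Integer points on the segment from cell a to cell b."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx + dy
    pts = []
    while True:
        pts.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy; x0 += sx
        if e2 <= dx:
            err += dx; y0 += sy
    return pts

def visible_cells(pos, walls, width, height, sight=3):
    """Cells perceivable from pos: within `sight` cells in any
    direction, with walls blocking the line of view (approximated
    by a Bresenham ray, an assumption on our part)."""
    seen = set()
    px, py = pos
    for x in range(max(0, px - sight), min(width, px + sight + 1)):
        for y in range(max(0, py - sight), min(height, py + sight + 1)):
            ray = bresenham(pos, (x, y))
            if all(c not in walls for c in ray[:-1]) and (x, y) not in walls:
                seen.add((x, y))
    return seen

def topological_index(walls, width, height, sight=3):
    """TI: perceivable cells summed over all free positions, divided
    by the same sum computed on the map without any walls."""
    free = [(x, y) for x in range(width) for y in range(height)
            if (x, y) not in walls]
    with_walls = sum(len(visible_cells(p, walls, width, height, sight))
                     for p in free)
    without = sum(len(visible_cells(p, set(), width, height, sight))
                  for p in free)
    return with_walls / without
```

On an empty map the index is 1 by construction, and every wall that blocks some line of view pushes it below 1, matching the reading "the closer to 1, the more open the environment".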
between 53-1 and 53-2).

5.4 Experimental Results

For each triple of maps defined as above we conduct the same experiments. In each experiment, the society differs in its initial proportion of involved agents, from 1% to 100%. This initial proportion is the percentage of involved agents with regard to the maximum possible number of agents. For each map and each initial proportion, we randomly select 100 different sets of initial agent locations. For each of these sets, we run the system once for each interaction protocol.

Effectiveness of Communication and Argumentation

The first experiment aims at testing how effective hypotheses exchange (HE) is, and in particular how topological aspects affect this effectiveness. To do so, we computed the ratio of improvement offered by that protocol over a situation where agents simply cannot communicate (no comm). To get further insight into the extent to which hypotheses exchange is really crucial, we also tested a much less elaborate protocol consisting of mere observation exchanges (OE). More precisely, this protocol requires that each agent store any unexpected observation that it perceives; agents then simply exchange their respective lists of observations when they discuss. In this case, the local protocol is different (note in particular that it does not guarantee mutual consistency), but the global protocol remains the same (with the only exception that the agents' motivation to communicate is to synchronise their lists of observations, not their hypotheses). If this protocol is at best as effective as HE, it has the advantage of being more efficient (this is obvious w.r.t. the number of messages, which is limited to 2; it is less straightforward as far as the size of messages is concerned, but the rough observation that the exchange of
observations can be viewed as a flat version of the challenge is helpful to see this). The results of these experiments are reported in Fig. 3.

Figure 3: Comparative effectiveness ratio gain of the protocols as the proportion of agents increases

The first observation to be made is that communication improves the effectiveness of the process, and that this ratio increases as the number of agents in the system grows. The second lesson is that closedness makes communication comparatively more effective over non-communication: maps exhibiting a T.I. of 38% are constantly above the two others, and 53% maps are still slightly but significantly better than 69% ones. However, these curves also suggest, perhaps surprisingly, that HE outperforms OE precisely in those situations where the ratio gain is less important (the only noticeable difference occurs for rather open maps, where the T.I. is 69%). This may be explained as follows: when a map is open, agents have many potential candidate explanations, and argumentation becomes useful to discriminate between them; when a map is labyrinth-like, there are fewer possible explanations for an unexpected event.

Importance of the Global Protocol

The second set of experiments seeks to evaluate the importance of the design of the global protocol. We tested our protocol against a local broadcast (LB) protocol. Local broadcast means that all the neighbour agents perceived by an agent are involved in a communication with that agent in a given round; that is, we lift the constraint of a single communication per agent. This gives us a rough upper bound on the possible ratio gain in the system (for a given local protocol). Again, we evaluated the ratio gain induced by LB over our classical HE, for the three different classes of maps. The results are reported in Fig.
4.

Figure 4: Ratio gain of local broadcast over hypotheses exchange

Note to begin with that the ratio gain is 0 when the proportion of agents is 5%, which is easily explained by the fact that this corresponds to situations involving only two agents. We first observe that all classes of maps witness a ratio gain that increases with the proportion of agents: the gain reaches 10 to 20%, depending on the class of maps considered. Compared with the improvement reported in the previous experiment, this gain is of the same magnitude, which illustrates that the design of the global protocol cannot be ignored, especially when the proportion of agents is high. However, we also note that the effectiveness ratio gain curves have very different shapes in the two cases: the gain induced by the accuracy of the local protocol increases very quickly with the proportion of agents, while the curve is really smooth for the global one. Now let us look more carefully at the results reported here: the curve corresponding to a TI of 53% is above that corresponding to 38%. This is because the more open a map, the more opportunities to communicate with more than one agent (and hence to benefit from broadcast). However, we also observe that the curve for 69% is below that for 53%. This is explained as follows: in the case of 69%, the potential gain in terms of surviving agents is much lower, because our protocols already give rather efficient outcomes anyway (quickly reaching 90%, see Fig.
3). A simple rule of thumb could be that when the number of agents is small, special attention should be put on the local protocol, whereas when that number is large, one should carefully design the global one (unless the map is so open that the protocol is already almost optimally efficient).

Efficiency of the Protocols

The final experiment reported here is concerned with the analysis of the efficiency of the protocols. We analyse the mean size of the totality of the messages exchanged by agents (mean size of exchanges, for short) using the following protocols: HE, OE, and two variant protocols. The first one is an intermediary restricted hypotheses exchange protocol (RHE). RHE involves neither challenge nor counter-propose, which means that agents cannot switch their roles during the protocol (this differs from RE in that respect). In short, RHE allows an agent to exhaust its partner's criticism, so that this partner eventually comes to adopt the agent's hypothesis. Note that this means that the autonomy of the agent is not preserved here (as an agent will essentially accept any hypothesis it cannot undermine), with the hope that the gain in efficiency will be significant enough to compensate for a loss in effectiveness. The second variant protocol is a complete observation exchange protocol (COE). COE uses the same principles as OE, but in addition includes all critical negative examples (nofire) in the exchange (thus giving all the examples used as arguments by the hypotheses exchange protocol), hence improving effectiveness. Results for map 69-1 are shown in Fig.
5.

Figure 5: Mean size of exchanges

First, we observe that the ordering of the protocols, from the least efficient to the most efficient, is COE, HE, RHE and then OE. HE being more efficient than COE shows that the argumentation process gains efficiency by selecting when it is needed to provide negative examples, which have less impact than positive ones in our specific testbed. However, by communicating hypotheses before eventually giving observations to support them (HE), instead of directly giving the most crucial observations (OE), the argumentation process doubles the size of the data exchanged. This is the cost of ensuring consistency at the end of the exchange (a property that OE does not support). Also significant is the fact that the mean size of exchanges is slightly higher when the number of agents is small. This is explained by the fact that in these cases only very few agents have relevant information in their possession, and they need to communicate a lot in order to come up with a common view of the situation. When the number of agents increases, this knowledge is distributed over more agents, which need shorter discussions to reach mutual consistency. As a consequence, the relative gain in efficiency of using RHE appears to be better when the number of agents is small: when it is high, agents will hardly argue anyway. Finally, it is worth noticing that the standard deviation in these experiments is rather high, which means that the conversations do not converge to any stereotypical pattern.

6. CONCLUSION

This paper has investigated the properties of a multiagent system where each (distributed) agent locally perceives its environment, and tries to reach consistency with other agents despite severe communication restrictions. In particular, we have exhibited conditions allowing convergence, and experimentally investigated a typical situation where those conditions cannot hold. There are many possible extensions to this work, the
first being to further investigate the properties of the different global protocols belonging to the class we identified, and their influence on the outcome. There are in particular many heuristics, highly dependent on the context of the study, that could intuitively yield interesting results (in our study, selecting the recipient on the basis of what can be inferred from its observed actions could be such a heuristic). One obvious candidate for longer-term work is the relaxation of the assumption of perfect sensing.

7. REFERENCES

[1] G. Bourgne, N. Maudet, and S. Pinson. When agents communicate hypotheses in critical situations. In Proceedings of DALT-2006, May 2006.
[2] P. Harvey, C. F. Chang, and A. Ghose. Support-based distributed search: a new approach for multiagent constraint processing. In Proceedings of AAMAS-06, 2006.
[3] H. Jung and M. Tambe. Argumentation as distributed constraint satisfaction: Applications and results. In Proceedings of AGENTS-01, 2001.
[4] N. C. Karunatillake and N. R. Jennings. Is it worth arguing? In Proceedings of ArgMAS 2004, 2004.
[5] S. Ontañón and E. Plaza. Arguments and counterexamples in case-based joint deliberation. In Proceedings of ArgMAS-2006, May 2006.
[6] D. Poole. Explanation and prediction: An architecture for default and abductive reasoning. Computational Intelligence, 5(2):97-110, 1989.
[7] I. Rahwan, S. D. Ramchurn, N. R. Jennings, P. McBurney, S. Parsons, and L. Sonenberg. Argumentation-based negotiation. The Knowledge Engineering Review, 18(4):345-375, 2003.
[8] N. Roos, A. ten Teije, and C. Witteveen. A protocol for multi-agent diagnosis with spatially distributed knowledge. In Proceedings of AAMAS-03, 2003.
[9] N. Roos, A. ten Teije, and C. Witteveen. Reaching diagnostic agreement in multiagent diagnosis. In Proceedings of AAMAS-04, 2004.
[10] T. Takahashi, Y. Kaneda, and N.
Ito.\nPreliminary study - using robocuprescue simulations for disasters prevention.\nIn Proceedings of SRMED2004, 2004.\nThe Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1005","lvl-3":"Hypotheses Refinement under Topological Communication Constraints *\nABSTRACT\nWe investigate the properties of a multiagent system where each (distributed) agent locally perceives its environment.\nUpon perception of an unexpected event, each agent locally computes its favoured hypothesis and tries to propagate it to other agents, by exchanging hypotheses and supporting arguments (observations).\nHowever, we further assume that communication opportunities are severely constrained and change dynamically.\nIn this paper, we mostly investigate the convergence of such systems towards global consistency.\nWe first show that (for a wide class of protocols that we shall define), the communication constraints induced by the topology will not prevent the convergence of the system, at the condition that the system dynamics guarantees that no agent will ever be isolated forever, and that agents have unlimited time for computation and arguments exchange.\nAs this assumption cannot be made in most situations though, we then set up an experimental framework aiming at comparing the relative efficiency and effectiveness of different interaction protocols for hypotheses exchange.\nWe study a critical situation involving a number of agents aiming at escaping from a burning building.\nThe results reported here provide some insights regarding the design of optimal protocol for hypotheses refinement in this context.\n1.\nINTRODUCTION\nWe consider a multiagent system where each (distributed) agent locally perceives its environment, and we assume that some unexpected event occurs in that system.\nIf each agent computes only locally its favoured hypothesis, it is only natural to assume that agents will seek to coordinate and refine their hypotheses by confronting their 
observations with other agents.\nIf, in addition, the communication opportunities are severely constrained (for instance, agents can only communicate when they are close enough to some other agent), and dynamically changing (for instance, agents may change their locations), it becomes crucial to carefully design protocols that will allow agents to converge to some desired state of global consistency.\nIn this paper we exhibit some sufficient conditions on the system dynamics and on the protocol\/strategy structures that allow to guarantee that property, and we experimentally study some contexts where (some of) these assumptions are relaxed.\nWhile problems of diagnosis are among the venerable classics in the AI tradition, their multiagent counterparts have much more recently attracted some attention.\nRoos and colleagues [8, 9] in particular study a situation where a number of distributed entities try to come up with a satisfying global diagnosis of the whole system.\nThey show in particular that the number of messages required to establish this global diagnosis is bound to be prohibitive, unless the communication is enhanced with some suitable protocol.\nHowever, they do not put any restrictions on agents' communication options, and do not assume either that the system is dynamic.\nThe benefits of enhancing communication with supporting information to make convergence to a desired global state of a system more efficient has often been put forward in the literature.\nThis is for instance one of the main idea underlying the argumentation-based negotiation approach [7], where the desired state is a compromise between agents with conflicting preferences.\nMany of these works however make the assumption that this approach is beneficial to start with, and study the technical facets of the problem (or instead emphasize other advantages of using argumentation).\nNotable exceptions are the works of [3, 4, 2, 5], which studied in contexts different from ours the efficiency 
of argumentation.\nThe rest of the paper is as follows.\nSection 2 specifies the basic elements of our model, and Section 3 goes on to presenting the different protocols and strategies used by the agents to exchange hypotheses and observations.\nWe put special attention at clearly emphasizing the conditions on the system dynamics and protocols\/strategies that will be exploited in the rest of the paper.\nSection 4 details one of\nthe main results of the paper, namely the fact that under the aforementioned conditions, the constraints that we put on the topology will not prevent the convergence of the system towards global consistency, at the condition that no agent ever gets completely \"lost\" forever in the system, and that unlimited time is allowed for computation and argument exchange.\nWhile the conditions on protocols and strategies are fairly mild, it is also clear that these system requirements look much more problematic, even frankly unrealistic in critical situations where distributed approaches are precisely advocated.\nTo get a clearer picture of the situation induced when time is a critical factor, we have set up an experimental framework that we introduce and discuss in Section 5.\nThe critical situation involves a number of agents aiming at escaping from a burning building.\nThe results reported here show that the effectiveness of argument exchange crucially depends upon the nature of the building, and provide some insights regarding the design of optimal protocol for hypotheses refinement in this context.\n2.\nBASIC NOTIONS\nDEFINITION 1 (SEQUENCE OF TIME POINTS).\nA se\nAgent\nAgent Cycle\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 999\nCommunication\n3.\nPROTOCOLS AND STRATEGIES\nLocal Protocol and Strategies\nGlobal Protocol\nGlobal Strategy\n1000 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n4.\n(CONDITIONAL) CONVERGENCE TO GLOBAL CONSISTENCY\nThe Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1001\n5.\nEXPERIMENTAL STUDY\n1002 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.1 Experimental Evaluation\nEffectiveness of a protocol\n5.2 Experimental Settings\n5.3 Experimental environments\n5.4 Experimental Results\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1003\nEffectiveness of Communication and Argumentation\n1004 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n6.\nCONCLUSION\nThis paper has investigated the properties of a multiagent system where each (distributed) agent locally perceives its environment, and tries to reach consistency with other agents despite severe communication restrictions.\nIn particular we have exhibited conditions allowing convergence, and experimentally investigated a typical situation where those conditions cannot hold.\nThere are many possible extensions to this work, the first being to further investigate the properties of different global protocols belonging to the class we identified, and their influence on the outcome.\nThere are in particular many heuristics, highly dependent on the context of the study, that could intuitively yield interesting results (in our study, selecting the recipient on the basis of what can be inferred from his observed actions could be such a heuristic).\nOne obvious candidate for longer term issues concern the relaxation of the assumption of perfect sensing.","lvl-4":"Hypotheses Refinement under Topological Communication Constraints *\nABSTRACT\nWe investigate the properties of a multiagent system where each (distributed) agent locally perceives its environment.\nUpon perception of an unexpected event, each agent locally computes its favoured hypothesis and tries to propagate it to other agents, by exchanging hypotheses and supporting arguments (observations).\nHowever, we further assume that 
communication opportunities are severely constrained and change dynamically.\nIn this paper, we mostly investigate the convergence of such systems towards global consistency.\nWe first show that (for a wide class of protocols that we shall define), the communication constraints induced by the topology will not prevent the convergence of the system, at the condition that the system dynamics guarantees that no agent will ever be isolated forever, and that agents have unlimited time for computation and arguments exchange.\nAs this assumption cannot be made in most situations though, we then set up an experimental framework aiming at comparing the relative efficiency and effectiveness of different interaction protocols for hypotheses exchange.\nWe study a critical situation involving a number of agents aiming at escaping from a burning building.\nThe results reported here provide some insights regarding the design of optimal protocol for hypotheses refinement in this context.\n1.\nINTRODUCTION\nWe consider a multiagent system where each (distributed) agent locally perceives its environment, and we assume that some unexpected event occurs in that system.\nIf each agent computes only locally its favoured hypothesis, it is only natural to assume that agents will seek to coordinate and refine their hypotheses by confronting their observations with other agents.\nIn this paper we exhibit some sufficient conditions on the system dynamics and on the protocol\/strategy structures that allow to guarantee that property, and we experimentally study some contexts where (some of) these assumptions are relaxed.\nRoos and colleagues [8, 9] in particular study a situation where a number of distributed entities try to come up with a satisfying global diagnosis of the whole system.\nThey show in particular that the number of messages required to establish this global diagnosis is bound to be prohibitive, unless the communication is enhanced with some suitable protocol.\nHowever, they do 
not put any restrictions on agents' communication options, and do not assume either that the system is dynamic.\nThe benefits of enhancing communication with supporting information to make convergence to a desired global state of a system more efficient has often been put forward in the literature.\nThis is for instance one of the main idea underlying the argumentation-based negotiation approach [7], where the desired state is a compromise between agents with conflicting preferences.\nMany of these works however make the assumption that this approach is beneficial to start with, and study the technical facets of the problem (or instead emphasize other advantages of using argumentation).\nNotable exceptions are the works of [3, 4, 2, 5], which studied in contexts different from ours the efficiency of argumentation.\nThe rest of the paper is as follows.\nSection 2 specifies the basic elements of our model, and Section 3 goes on to presenting the different protocols and strategies used by the agents to exchange hypotheses and observations.\nWe put special attention at clearly emphasizing the conditions on the system dynamics and protocols\/strategies that will be exploited in the rest of the paper.\nSection 4 details one of\nWhile the conditions on protocols and strategies are fairly mild, it is also clear that these system requirements look much more problematic, even frankly unrealistic in critical situations where distributed approaches are precisely advocated.\nTo get a clearer picture of the situation induced when time is a critical factor, we have set up an experimental framework that we introduce and discuss in Section 5.\nThe critical situation involves a number of agents aiming at escaping from a burning building.\nThe results reported here show that the effectiveness of argument exchange crucially depends upon the nature of the building, and provide some insights regarding the design of optimal protocol for hypotheses refinement in this 
context.\n6.\nCONCLUSION\nThis paper has investigated the properties of a multiagent system where each (distributed) agent locally perceives its environment, and tries to reach consistency with other agents despite severe communication restrictions.\nIn particular we have exhibited conditions allowing convergence, and experimentally investigated a typical situation where those conditions cannot hold.\nThere are many possible extensions to this work, the first being to further investigate the properties of different global protocols belonging to the class we identified, and their influence on the outcome.","lvl-2":"Hypotheses Refinement under Topological Communication Constraints *\nABSTRACT\nWe investigate the properties of a multiagent system where each (distributed) agent locally perceives its environment.\nUpon perception of an unexpected event, each agent locally computes its favoured hypothesis and tries to propagate it to other agents, by exchanging hypotheses and supporting arguments (observations).\nHowever, we further assume that communication opportunities are severely constrained and change dynamically.\nIn this paper, we mostly investigate the convergence of such systems towards global consistency.\nWe first show that (for a wide class of protocols that we shall define), the communication constraints induced by the topology will not prevent the convergence of the system, at the condition that the system dynamics guarantees that no agent will ever be isolated forever, and that agents have unlimited time for computation and arguments exchange.\nAs this assumption cannot be made in most situations though, we then set up an experimental framework aiming at comparing the relative efficiency and effectiveness of different interaction protocols for hypotheses exchange.\nWe study a critical situation involving a number of agents aiming at escaping from a burning building.\nThe results reported here provide some insights regarding the design of optimal protocol 
for hypotheses refinement in this context.

1. INTRODUCTION

We consider a multiagent system where each (distributed) agent locally perceives its environment, and we assume that some unexpected event occurs in that system. If each agent computes its favoured hypothesis only locally, it is natural to assume that agents will seek to coordinate and refine their hypotheses by confronting their observations with those of other agents. If, in addition, the communication opportunities are severely constrained (for instance, agents can only communicate when they are close enough to some other agent) and dynamically changing (for instance, agents may change their locations), it becomes crucial to carefully design protocols that will allow agents to converge to some desired state of global consistency. In this paper we exhibit some sufficient conditions on the system dynamics and on the protocol/strategy structures that guarantee this property, and we experimentally study some contexts where (some of) these assumptions are relaxed.

While problems of diagnosis are among the venerable classics of the AI tradition, their multiagent counterparts have attracted attention only much more recently. Roos and colleagues [8, 9] in particular study a situation where a number of distributed entities try to come up with a satisfying global diagnosis of the whole system. They show in particular that the number of messages required to establish this global diagnosis is bound to be prohibitive, unless the communication is enhanced with some suitable protocol. However, they do not put any restrictions on agents' communication options, nor do they assume that the system is dynamic.

The benefits of enhancing communication with supporting information, in order to make convergence to a desired global state more efficient, have often been put forward in the literature. This is for instance one of the main ideas underlying the argumentation-based negotiation approach [7],
where the desired state is a compromise between agents with conflicting preferences. Many of these works, however, make the assumption that this approach is beneficial to start with, and study the technical facets of the problem (or instead emphasize other advantages of using argumentation). Notable exceptions are the works of [3, 4, 2, 5], which studied the efficiency of argumentation in contexts different from ours.

The rest of the paper is organised as follows. Section 2 specifies the basic elements of our model, and Section 3 goes on to present the different protocols and strategies used by the agents to exchange hypotheses and observations. We pay special attention to clearly emphasizing the conditions on the system dynamics and protocols/strategies that will be exploited in the rest of the paper. Section 4 details one of the main results of the paper, namely the fact that under the aforementioned conditions, the constraints that we put on the topology will not prevent the convergence of the system towards global consistency, on the condition that no agent ever gets completely "lost" forever in the system, and that unlimited time is allowed for computation and argument exchange. While the conditions on protocols and strategies are fairly mild, it is also clear that these system requirements look much more problematic, even frankly unrealistic, in critical situations where distributed approaches are precisely advocated. To get a clearer picture of the situation induced when time is a critical factor, we have set up an experimental framework that we introduce and discuss in Section 5. The critical situation involves a number of agents aiming at escaping from a burning building. The results reported here show that the effectiveness of argument exchange crucially depends upon the nature of the building, and provide some insights regarding the design of optimal protocols for hypotheses refinement in this context.

2. BASIC NOTIONS

We start by defining the
basic elements of our system.

Environment

Let O be the (potentially infinite) set of possible observations. We assume the sensors of our agents to be perfect, hence the observations to be certain. Let H be the set of hypotheses, uncertain and revisable. Let Cons(h, O) be the consistency relation, a binary relation between a hypothesis h ∈ H and a set of observations O ⊆ O. In most cases, Cons will refer to the classical consistency relation; however, we may overload its meaning and add some additional properties to that relation (in which case we will mention it). The environment may include some dynamics, and change over the course of time. We define below sequences of time points to deal with this:

DEFINITION 1 (SEQUENCE OF TIME POINTS). A sequence of time points t1, t2, ..., tn from t is an ordered set of time points t1, t2, ..., tn such that t1 > t and ∀i ∈ [1, n − 1], ti+1 > ti.

Agent

We take a system populated by n agents a1, ..., an. Each agent is defined as a tuple (F, Oi, hi), where:
• F, the set of facts, common knowledge to all agents;
• Oi ∈ 2^O, the set of observations made by the agent so far (we assume a perfect memory, hence this set grows monotonically);
• hi ∈ H, the favourite hypothesis of the agent.

A key notion governing the formation of hypotheses is that of consistency, defined below:

DEFINITION 2 (CONSISTENCY). We say that:
• An agent ai is consistent (Cons(ai)) iff Cons(hi, Oi) (that is, its hypothesis is consistent with its observation set).
• An agent ai is consistent with a partner agent aj (Cons(ai, aj)) iff Cons(ai) and Cons(hi, Oj) (that is, this agent is consistent and its hypothesis can explain the observation set of the other agent).
• Two agents ai and aj are mutually consistent (MCons(ai, aj)) iff Cons(ai, aj) and Cons(aj, ai).
• A system is consistent iff ∀(i, j) ∈ [1, n]^2 it is the case that MCons(ai, aj).

To ensure its consistency, each agent is equipped
with an abstract reasoning machinery that we shall call the explanation function Eh. This (deterministic) function takes a set of observations and returns a single preferred hypothesis (Eh : 2^O → H). We assume h = Eh(O) to be consistent with O by definition of Eh, so using this function on its observation set to determine its favourite hypothesis is a sure way for the agent to achieve consistency. Note however that a hypothesis does not need to be generated by Eh to be consistent with an observation set. As a concrete example of such a function, and one of the main inspirations of this work, one can cite the Theorist reasoning system [6], as long as it is coupled with a filter selecting a single preferred theory among the ones initially selected by Theorist. Note also that hi may only be modified as a consequence of the application of Eh. We refer to this as the autonomy of the agent: no other agent can directly impose a given hypothesis on an agent. As a consequence, only a new observation (be it a new perception, or an observation communicated by a fellow agent) can result in a modification of its preferred hypothesis hi (but not necessarily, of course). We finally define a property of the system that we shall use in the rest of the paper.

Agent Cycle

Now we need to see how these agents will evolve and interact in their environment. In our context, agents evolve in a dynamic environment, and we classically assume the following system cycle:

1. Environment dynamics: the environment evolves according to the defined rules of the system dynamics.
2. Perception step: agents get perceptions from the environment. These perceptions are typically partial (e.g.
the agent can only see a portion of a map).
3. Reasoning step: agents compare perception with predictions, seek explanations for (potential) difference(s), refine their hypotheses, draw new conclusions.
4. Communication step: agents can communicate hypotheses and observations with other agents through a defined protocol. Any agent can only be involved in one communication with another agent per step.
5. Action step: agents do some practical reasoning using the models obtained from the previous steps and select an action. They can then modify the environment by executing it.

The communication of the agents will be further constrained by topological considerations. At a given time, an agent will only be able to communicate with a number of neighbours. Its connections with these other agents may evolve with its situation in the environment. Typically, an agent can only communicate with agents that it can sense, but one could imagine evolving topological constraints on communication based on a network of communications between agents where the links are not always active.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

Communication

In our system, agents will be able to communicate with each other. However, due to the aforementioned topological constraints, they will not be able to communicate with any agent at any time. Who an agent can communicate with will be defined dynamically (for instance, this can be a consequence of the agents being close enough to get in touch). We will abstractly denote by C(ai, aj, t) the communication property, in other words, the fact that agents ai and aj can communicate at time t (note that this relation is assumed to be symmetric, but of course not transitive). We are now in a position to define two essential properties of our system.

DEFINITION 4 (TEMPORAL PATH). There exists a temporal communication path at horizon tf (noted Ltf(ai, aj)) between ai and aj iff there
exists a sequence of time points t1, t2, ..., tn+1 from tf and a sequence of agents ak1, ak2, ..., akn s.t. (i) C(ai, ak1, t1), (ii) ∀p ∈ [1, n − 1], C(akp, akp+1, tp+1), and (iii) C(akn, aj, tn+1).

Intuitively, what this property says is that it is possible to find a "temporal path" in the future that would allow us to link agents ai and aj via a sequence of intermediary agents. Note that the time points are not necessarily successive, and that the sequence of agents may involve the same agents several times. In short, a temporally connex system guarantees that any agent will be able to communicate with any other agent, no matter how long it might take to do so, at any time. To put it another way, it is never the case that an agent will be isolated forever from another agent of the system.

We will next discuss the details of how communication concretely takes place in our system. Remember that in this paper, we only consider the case of bilateral exchanges (an agent can only speak to a single other agent), and that we also assume that any agent can only engage in a single exchange in a given round.

3. PROTOCOLS AND STRATEGIES

In this section, we discuss the requirements of the interaction protocols that govern the exchange of messages between agents, and provide some example instantiations of such protocols. To clarify the presentation, we distinguish two levels: the local level, which is concerned with the regulation of bilateral exchanges; and the global level, which essentially regulates the way agents can actually engage in a conversation. At each level, we separate what is specified by the protocol from what is left to agents' strategies.

Local Protocol and Strategies

We start by inspecting the local protocols and strategies that will regulate the communication between the agents of the system. As we limit ourselves to bilateral communication, these protocols will simply involve two agents. Such a protocol will have to meet one basic requirement to be
satisfying:

• consistency (CONS): a local protocol has to guarantee the mutual consistency of the agents upon termination (which implies termination, of course).

Figure 1: A Hypotheses Exchange Protocol [1]

One example of such a protocol is the protocol described in [1] that is pictured in Fig. 1. To further illustrate how such a protocol can be used by agents, we give some details on a possible strategy: upon receiving a hypothesis h1 (propose(h1) or counterpropose(h1)) from a1, agent a2 is in state 2 and has the following possible replies: counterexample (if the agent knows an example contradicting the hypothesis, or not explained by this hypothesis), challenge (if the agent lacks evidence to accept this hypothesis), counterpropose (if the agent agrees with the hypothesis but prefers another one), or accept (if it is indeed as good as its favourite hypothesis). This strategy guarantees, among other properties, the eventual mutual logical consistency of the involved agents [1].

Global Protocol

The global protocol regulates the way bilateral exchanges will be initiated between agents. At each turn, agents will concurrently send one weighted request to communicate to other agents. This weight is a value measuring the agent's willingness to converse with the targeted agent (in practice, this can be based on different heuristics, but we shall make some assumptions on agents' strategies, see below). Sending such a request is a kind of conditional commitment for the agent. An agent sending a weighted request commits to engage in conversation with the target if it does not itself receive and accept another request. Once all requests have been received, each agent replies with either an accept or a reject. By answering with an accept, an agent makes a full commitment to engage in conversation with the sender. Therefore, it can only send one accept in a given round, as an agent can only participate in one conversation per time step. When all responses have been
received, each agent receiving an accept can either initiate a conversation using the local protocol or send a cancel if it has accepted another request. At the end of all the bilateral exchanges, the agents engaged in conversation are discarded from the protocol. Then each of the remaining agents resends a request, and the process iterates until no more requests are sent.

Global Strategy

We now define four requirements for the strategies used by agents, depending on their role in the protocol: two are concerned with the requester role (how to decide whom the agent wishes to communicate with?), the other two with the responder role (how to decide which communication requests to accept or not?).

• Willingness to solve inconsistencies (SOLVE): agents want to communicate with any other agent unless they know they are mutually consistent.
• Focus on solving inconsistencies (FOCUS): agents do not request communication with an agent with whom they know they are mutually consistent.
• Willingness to communicate (COMM): agents cannot refuse a weighted communication request, unless they have just received or sent a request with a greater weight.
• Commitment to communication request (REQU): agents cannot accept a weighted communication request if they have themselves sent a communication request with a greater weight. Therefore, they will not cancel their own request unless they have received a communication request with a greater weight.

Now the protocol structure, together with the properties COMM+REQU, ensures that a request can only be rejected if its target agent engages in communication with another agent. Suppose indeed that agent ai wants to communicate with aj by sending a request with weight w.
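As an illustration only (this sketch is ours, not part of the paper), the following Python fragment simulates the outcome of one round of the request/accept phase, assuming distinct weights, a synchronous round, and a centralised view of all requests. Resolving requests in decreasing weight order mirrors the guarantee, derived from COMM and REQU, that the pending request of maximal weight is accepted and never cancelled.

```python
def run_matching_round(requests):
    """Simulate one round of the global protocol.

    requests: dict mapping (sender, target) pairs to the weight of the
    sender's communication request.  Returns the list of pairs that end
    up in conversation this round.
    """
    matched = []
    busy = set()  # agents already committed to a conversation this round
    # COMM + REQU imply that, among the pending requests, the one with
    # maximal weight is accepted and never cancelled; iterating in
    # decreasing weight order reproduces that guarantee.
    for (sender, target), weight in sorted(requests.items(),
                                           key=lambda kv: -kv[1]):
        if sender in busy or target in busy:
            # One conversation per agent per round: this request is
            # rejected because its sender or target is already engaged.
            continue
        matched.append((sender, target))
        busy.update((sender, target))
    return matched
```

For instance, with requests {a1→a2: 3, a2→a3: 5, a3→a1: 1}, the maximal-weight request a2→a3 is accepted, while a1 remains free to resend its request at the next iteration of the protocol.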
COMM guarantees that an agent receiving a weighted request will either accept this communication, accept a communication with a greater weight, or wait for the answer to a request with a greater weight. This ensures that the request with maximal weight will be accepted and not cancelled (as REQU ensures that an agent sending a request can only cancel it if it accepts another request with a greater weight). Therefore at least two agents will engage in conversation per round of the global protocol. As the protocol ensures that ai can resend its request while aj is not engaged in a conversation, there will be a turn in which aj must engage in a conversation, either with ai or with another agent.

These requirements concern the sending and acceptance of requests, but agents also need some strategy of weight attribution. We describe below an altruist strategy, used in our experiments. Being cooperative, an agent may want to know more about the communication wishes of other agents in order to improve the overall allocation of exchanges to agents. A context request step is then added to the global protocol. Before sending their chosen weighted request, agents attribute a weight to all agents they are prepared to communicate with, according to some internal factors. In the simplest case, this weight will be 1 for all agents with whom the agent is not sure of being mutually consistent (ensuring SOLVE), other agents not being considered for communication (ensuring FOCUS). The agent then sends a context request to all agents with whom communication is considered. This request also provides information about the sender (the list of considered communications along with their weights). After reception of all the context requests, agents will either reply with a deny, iff they are already engaged in a conversation (in which case, the requesting agent will not consider communication with them anymore in this turn), or with an inform giving the requester information about the requests it has sent and
received. When all replies have been received, each agent can calculate the weight of all requests concerning it. It does so by subtracting from the weight of its request the weight of all requests concerning either itself or its target; that is, the final weight of the request from ai to aj is

Wi,j = wi,j + wj,i − Σ_{k ∈ S(j) \ {i}} wk,j

where wi,j is the weight of the request of ai to aj, R(i) is the set of indices of agents having received a request from ai, and S(i) is the set of indices of agents having sent a request to ai. It then finally sends a weighted request to the agent who maximises this weight (or waits for a request), as described in the global protocol.

4. (CONDITIONAL) CONVERGENCE TO GLOBAL CONSISTENCY

In this section we will show that the requirements regarding protocols and strategies just discussed are sufficient to ensure that the system will eventually converge towards global consistency, under some conditions. We first show that, if two agents are not mutually consistent at some time, then there will necessarily be a time in the future at which an agent will learn a new observation, be it because it is new for the system, or by learning it from another agent.

LEMMA 1. Let S be a system populated by n agents a1, a2, ..., an, temporally connex, and involving bounded perceptions for these agents. Let n1 be the sum of the cardinalities of the intersections of pairwise observation sets (n1 = Σ_{(i,j) ∈ [1,n]^2} |Oi ∩ Oj|). Let n2 be the cardinality of the union of all agents' observation sets (n2 = |∪_{i=1..n} Oi|). If ¬MCons(ai, aj) at time t0, there is necessarily a time t' > t0 s.t. either n1 or n2 will increase.

PROOF. Suppose that there exist a time t0 and indices (i, j) s.t. ¬MCons(ai, aj). Let εComm(ak, al, t0) = 1 if ak and al have communicated at least once since t0, and 0 otherwise. Temporal connexity guarantees that there exist t1, ..., tm+1 and k1, ..., km s.t.
C(ai, ak1, t1), C(akm, aj, tm+1), and ∀p ∈ [1, m − 1], C(akp, akp+1, tp+1). Clearly, if MCons(ai, ak1), MCons(akm, aj) and ∀p, MCons(akp, akp+1), we have MCons(ai, aj), which contradicts our hypothesis (MCons being transitive, MCons(ai, ak1) ∧ MCons(ak1, ak2) implies MCons(ai, ak2), and so on until MCons(ai, akm) ∧ MCons(akm, aj), which implies MCons(ai, aj)). At least two agents are then necessarily inconsistent (¬MCons(ai, ak1), or ¬MCons(akm, aj), or ∃p0 s.t. ¬MCons(akp0, akp0+1)). Let ak and al be these two neighbours at a time t' > t0.¹

¹Strictly speaking, the transitivity of MCons only ensures that ak and al are inconsistent at a time t'' ≥ t0 that can be different from the time t' at which they can communicate. But if they become consistent between t'' and t' (or inconsistent between t' and t''), it means that at least one of them has changed its hypothesis between t'' and t', that is, after t0. We can then apply the reasoning of case iib.

The SOLVE property ensures that either ak or al will send a communication request to the other agent at time t'. As shown before, this in turn ensures that at least one of these agents will be involved in a communication. Then there are two possibilities:

(case i) ak and al communicate at time t'. In this case, we know that ¬MCons(ak, al). This and the CONS property ensure that at least one of the agents must change its
hypothesis, which in turn, since agents are autonomous, implies at least one exchange of observations. But then |Ok ∩ Ol| is bound to increase: n1(t') > n1(t0).

(case ii) ak communicates with ap at time t'. We then again have two possibilities:

(case iia) ak and ap have not communicated since t0. But then εComm(ak, ap, t0) had value 0 and takes value 1. Hence the sum Σ_{(k,l)} εComm(ak, al, t0) increases.

(case iib) ak and ap did communicate at some time t'0 > t0. The CONS property of the protocol ensures that MCons(ak, ap) at that time. Now the fact that they communicate again, together with FOCUS, implies that at least one of them changed its hypothesis in the meantime. The fact that agents are autonomous implies in turn that a new observation (perceived or received from another agent) necessarily provoked this change. The latter case ensures the existence of a time t'' > t0 and an agent aq s.t. either |Op ∩ Oq| or |Ok ∩ Oq| increases by 1 at that time (implying n1(t'') > n1(t0)). The former case means that the agent gets a new perception o at time t''. If that observation was unknown in the system before, then n2(t'') > n2(t0). If some agent aq already knew this observation before, then either |Op ∩ Oq| or |Ok ∩ Oq| increases by 1 at time t'' (which implies that n1(t'') > n1(t0)).

Hence, ¬MCons(ai, aj) at time t0 guarantees that either n1 increases, or n2 increases, or the sum Σ_{(k,l)} εComm(ak, al, t0) increases. By iterating the reasoning with t' (but keeping t0 as the time reference for Σ_{(k,l)} εComm(ak, al, t0)), we can eliminate the third case (this sum is an integer bounded by n^2, which means that after a maximum of n^2 iterations we will necessarily be in one of the two other cases). As a result, we have proven that if ¬MCons(ai, aj) at time t0, there is necessarily a time t' s.t.
either n1 or n2 will increase.

THEOREM 1 (GLOBAL CONSISTENCY). Let S be a system populated by n agents a1, a2, ..., an, temporally connex, and involving bounded perceptions for these agents. Let Cons(ai, aj) be a transitive consistency property. Then any protocol and strategies satisfying properties CONS, SOLVE, FOCUS, COMM and REQU guarantee that the system will converge towards global consistency.

PROOF. For the sake of contradiction, let us assume that ∃i, j ∈ [1, n] s.t. ∀t, ∃t0 > t s.t. ¬MCons(ai, aj) at t0. Using the lemma, this implies that ∃t' > t0 s.t. either n1(t') > n1(t0) or n2(t') > n2(t0). But we can apply the same reasoning taking t = t', which gives us t1 > t' > t0 s.t. ¬MCons(ai, aj) at t1, which in turn gives us t'' > t1 s.t. either n1(t'') > n1(t1) or n2(t'') > n2(t1). By successive iterations we can then construct a sequence t0, t1, ..., tn, which can be divided into two sub-sequences t'0, t'1, ..., t'n and t''0, t''1, ..., t''n s.t. n1(t'0) < n1(t'1) < ... < n1(t'n) and n2(t''0) < n2(t''1) < ...
< n2(t''n). One of these sub-sequences has to be infinite. However, n1(t'i) and n2(t''i) are strictly increasing, integer-valued, and bounded, which implies that both sub-sequences are finite. Contradiction.

What the previous result essentially shows is that, in a system where no agent will be isolated from the rest of the agents forever, only very mild assumptions on the protocols and strategies used by agents suffice to guarantee convergence towards system consistency in a finite amount of time (although it might take very long). Unfortunately, in many "critical" situations, it will not be possible to assume this temporal connexity. As distributed approaches such as the one advocated in this paper are often presented precisely as a good way to tackle problems of reliability, or problems of dependence on a centre, which are of utmost importance in these critical applications, it is certainly interesting to further explore how such a system would behave when we relax this assumption.

5. EXPERIMENTAL STUDY

This experiment involves agents trying to escape from a burning building. The environment is described as a spatial grid with a set of walls and (thankfully) some exits. Time and space are considered discrete. Time is divided into rounds. Agents are localised by their position on the spatial grid. These agents can move and communicate with other agents. In a round, an agent can move by one cell in any of the four cardinal directions, provided it is not blocked by a wall. In this application, agents communicate with any other agent (but, recall, a single one) given that this agent is in view, and that they have not yet exchanged their current favoured hypotheses. Suddenly, a fire erupts in these premises. From this moment on, the fire propagates. Each round, for each cell on fire, the fire propagates in the four directions. However, the fire cannot propagate through a wall. If the fire propagates into a cell where an agent is positioned, that agent burns and is
considered dead. It can of course no longer move nor communicate. If an agent gets to an exit, it is considered saved, and can no longer be burned. Agents know the environment and the rules governing its dynamics, that is, they know the map as well as the rules of fire propagation previously described. They also locally perceive this environment, but cannot see further than 3 cells away, in any direction. Walls also block the line of view, preventing agents from seeing behind them. Within their sight, they can see other agents and whether or not the cells they see are on fire. All these perceptions are memorised. We now show how this instantiates the abstract framework presented in the paper.

• O = {Fire(x, y, t), NoFire(x, y, t), Agent(ai, x, y, t)}. Observations can then be positive (o ∈ P(O) iff ∃h ∈ H s.t. h ⊨ o) or negative (o ∈ N(O) iff ∃h ∈ H s.t. h ⊨ ¬o).
• H = {FireOrigin(x1, y1, t1) ∧ ... ∧ FireOrigin(xl, yl, tl)}. Hypotheses are conjunctions of FireOrigins.
• Cons(h, O) is the consistency relation satisfying:
  - coherence: ∀o ∈ N(O), h ⊭ ¬o;
  - completeness: ∀o ∈ P(O), h ⊨ o;
  - minimality: for all h' ∈ H, if h' is coherent and complete for O, then h is preferred to h' according to the preference relation (h ≤p h').²

²Selects first the minimal number of origins, then the most recent (least preemptive strategy [6]), then uses some arbitrary fixed ranking to discriminate ex-aequo. The resulting relation is a total order, hence minimality implies that there will be a single h s.t. Cons(h, O) for a given O. This in turn means that MCons(ai, aj) iff Cons(ai), Cons(aj), and hi = hj. This relation is then transitive and symmetric.

• Eh takes O as argument and returns the minimum w.r.t. ≤p of the coherent and complete hypotheses for O.

5.1 Experimental Evaluation

We will classically (see e.g.
[3, 4]) assess the effectiveness and efficiency of different interaction protocols.

Effectiveness of a protocol

The proportion of agents surviving the fire over the initial number of agents involved in the experiment will determine the effectiveness of a given protocol. If this value is high, the protocol has been effective at propagating the information and/or at letting the agents refine their hypotheses and determine the best way to the exit.

Efficiency of a protocol

Typically, the use of supporting information will involve a communication overhead. We will assume here that the efficiency of a given protocol is characterised by the data flow induced by this protocol. In this paper we will only discuss this aspect w.r.t. local protocols. The main measure that we shall use here is the mean total size of the messages exchanged by agents per exchange (hence taking into account both the number of messages and the actual size of the messages, because it could be that messages happen to be very "big", containing e.g.
a large number of observations, which could counterbalance a low number of messages).

5.2 Experimental Settings

The chosen experimental settings are the following:

• Environmental topology. Performance of information propagation is highly constrained by the environment topology. The perception skills of the agents depend on the "openness" of the environment. With a large number of walls the perceptions of agents are limited, as is the number of possible inter-agent communications, whereas an "open" environment will provide optimal possibilities of perception and information propagation. Thus, we propose a topological index (see below) as a common basis to characterise the environments (maps) used during the experiments. The topological index (TI) is the ratio of the number of cells that can be perceived by agents, summed up over all possible positions, divided by the number of cells that would be perceived from the same positions but without any walls (the closer to 1, the more open the environment). We shall also use two additional, more classical [10] measures: the characteristic path length (CPL) and the clustering coefficient (CC).

• Number of agents. The propagation of information also depends on the initial number of agents involved in an experiment. For instance, the more agents, the more potential communications there are. This means that there will be more potential for propagation, but also that the bilateral exchange restriction will be more crucial.

Table 1: Topological Characteristics of the Maps

• Initial positions of the agents. The initial positions of the agents have a significant influence on the overall behaviour of an instance of our system: being close to an exit will (in general) ease the escape.

5.3 Experimental Environments

We chose to run experiments on three very different topological indexes (69% for "open" environments, 53% for mixed environments, and 38% for labyrinth-like
environments).

Figure 2: Two maps (left: TI = 69%, right: TI = 38%)

We designed three different maps for each index (Fig. 2 shows two of them), containing the same maximum number of agents (36 agents max.) with a maximum density of one agent per cell, the same number of exits, and a similar fire origin (e.g. starting time and position). The three different maps of a given index are designed as follows. The first map is a model of an existing building floor. The second map has the same "enclosure", exits and fire origin as the first one, but the number and location of walls are different (wall locations are designed by a heuristic which randomly creates walls on the spatial grid such that no fully closed rooms are created and no exit is closed). The third map is characterised by a geometrical "enclosure" in which wall locations are also designed with the aforementioned heuristic. Table 1 summarizes the different topological measures characterizing these maps. It is worth pointing out that the values confirm the relevance of TI (maps with a high TI have a low CPL and a high CC). However, the CPL and CC allow us to further refine the differences between the maps (e.g. between 53-1 and 53-2).

5.4 Experimental Results

For each triple of maps defined as above we conduct the same experiments. In each experiment, the society differs in terms of its initial proportion of involved agents, from 1% to 100%. This initial proportion represents the percentage of involved agents with regard to the possible maximum number of agents. For each map and each initial proportion,
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1003\nwe randomly select 100 different initial agent locations.\nFor each of those locations we execute the system once for each interaction protocol.\nEffectiveness of Communication and Argumentation\nThe first experiment that we set up aims at testing how effective hypotheses exchange (HE) is, and in particular how the topological aspects affect this effectiveness.\nIn order to do so, we computed the ratio of improvement offered by that protocol over a situation where agents simply could not communicate (no comm).\nTo get further insight into the extent to which hypotheses exchange is really crucial, we also tested a much less elaborate protocol consisting of mere observation exchanges (OE).\nMore precisely, this protocol requires that each agent stores any "unexpected" observation that it perceives, and agents simply exchange their respective lists of observations when they discuss.\nIn this case, the local protocol is different (note in particular that it does not guarantee mutual consistency), but the global protocol remains the same (with the only exception that the agents' motivation to communicate is to synchronise their lists of observations, not their hypotheses).\nWhile this protocol is at best as effective as HE, it has the advantage of being more efficient (this is obvious w.r.t. the number of messages, which is limited to 2; it is less straightforward as far as the size of messages is concerned, but the rough observation that the exchange of observations can be viewed as a "flat" version of the challenge helps to see this).\nThe results of these experiments are reported in Fig. 
3.\nFigure 3: Comparative effectiveness ratio gain of protocols when the proportion of agents increases\nThe first observation that needs to be made is that communication improves the effectiveness of the process, and this ratio increases as the number of agents in the system grows.\nThe second lesson that we learn here is that closedness makes communication comparatively more effective than no communication.\nMaps exhibiting a T.I. of 38% are consistently above the other two, and 53% is still slightly but significantly better than 69%.\nHowever, these curves also suggest, perhaps surprisingly, that HE outperforms OE precisely in those situations where the ratio gain is less important (the only noticeable difference occurs for rather open maps where the T.I. is 69%).\nThis may be explained as follows: when a map is open, agents have many potential explanation candidates, and argumentation becomes useful to discriminate between those.\nWhen a map is labyrinth-like, there are fewer possible explanations for an unexpected event.\nImportance of the Global Protocol\nThe second set of experiments seeks to evaluate the importance of the design of the global protocol.\nWe tested our protocol against a "local broadcast" (LB) protocol.\nLocal broadcast means that all the neighbour agents perceived by an agent will be involved in a communication with that agent in a given round--we lift the constraint of a single communication per agent.\nThis gives us a rough upper bound on the possible ratio gain in the system (for a given local protocol).\nAgain, we evaluated the ratio gain induced by LB over our classical HE, for the three different classes of maps.\nThe results are reported in Fig. 
4.\nFigure 4: Ratio gain of local broadcast over hypotheses exchange\nNote to begin with that the ratio gain is 0 when the proportion of agents is 5%, which is easily explained by the fact that it corresponds to situations involving only two agents.\nWe first observe that all classes of maps witness a ratio gain that increases as the proportion of agents grows: the gain reaches 10 to 20%, depending on the class of maps considered.\nIf one compares this with the improvement reported in the previous experiment, it appears to be of the same magnitude.\nThis illustrates that the design of the global protocol cannot be ignored, especially when the proportion of agents is high.\nHowever, we also note that the effectiveness ratio gain curves have very different shapes in the two cases: the gain induced by the accuracy of the local protocol increases very quickly with the proportion of agents, while the curve is really smooth for the global one.\nNow let us observe more carefully the results reported here: the curve corresponding to a TI of 53% is above that corresponding to 38%.\nThis is so because the more open a map, the more opportunities there are to communicate with more than one agent (and hence to benefit from broadcast).\nHowever, we also observe that the curve for 69% is below that for 53%.\nThis is explained as follows: in the case of 69%, the potential gain to be made in terms of surviving agents is much lower, because our protocols already give rather efficient outcomes anyway (quickly reaching 90%, see Fig. 3).\nA simple rule of thumb could be that when the number of agents is small, special attention should be put on the local protocol, whereas when that number is large, one should carefully design the global one (unless the map is so open that the protocol is already almost optimally efficient).\nEfficiency of the Protocols\nThe final experiment reported here is concerned with the analysis of the efficiency of the protocols.\nWe analyse here the mean size of the totality of the messages exchanged by agents (mean size of exchanges, for short) using the following protocols: HE, OE, and two variant protocols.\nThe first one is an intermediary restricted hypotheses exchange protocol (RHE).\nRHE is as follows: it does not involve any challenge or counter-proposal, which means that agents cannot switch their roles during the protocol (this differs from HE in that respect).\nIn short, RHE allows an agent to exhaust its partner's criticism, and eventually this partner will come to adopt the agent's hypothesis.\nNote that this means that the autonomy of the agent is not preserved here (as an agent will essentially accept any hypothesis it cannot undermine), with the hope that the gain in efficiency will be significant enough to compensate for a loss in effectiveness.\nThe second variant protocol is a complete observation exchange protocol (COE).\nCOE uses the same principles as OE, but in addition includes all critical negative examples (nofire) in the exchange (thus giving all examples used as arguments by the hypotheses exchange protocol), hence improving effectiveness.\nResults for map 69-1 are shown in Fig. 
5.\nFigure 5: Mean size of exchanges\nFirst, we can observe that the ordering of the protocols, from least efficient to most efficient, is COE, HE, RHE and then OE.\nHE being more efficient than COE shows that the argumentation process gains efficiency by selecting when it is necessary to provide negative examples, which have less impact than positive ones in our specific testbed.\nHowever, by communicating hypotheses before eventually giving observations to support them (HE) instead of directly giving the most crucial observations (OE), the argumentation process doubles the size of the data exchanged.\nThis is the cost of ensuring consistency at the end of the exchange (a property that OE does not support).\nAlso significant is the fact that the mean size of exchanges is slightly higher when the number of agents is small.\nThis is explained by the fact that in these cases only very few agents have relevant information in their possession, and they will need to communicate a lot in order to come up with a common view of the situation.\nWhen the number of agents increases, this knowledge is distributed over more agents, which need shorter discussions to reach mutual consistency.\nAs a consequence, the relative gain in efficiency of using RHE appears to be better when the number of agents is small: when it is high, they will hardly argue anyway.\nFinally, it is worth noticing that the standard deviation for these experiments is rather high, which means that the conversations do not converge to any "stereotypical" pattern.\n6.\nCONCLUSION\nThis paper has investigated the properties of a multiagent system where each (distributed) agent locally perceives its environment, and tries to reach consistency with other agents despite severe communication restrictions.\nIn particular we have exhibited conditions allowing convergence, and experimentally investigated a typical situation where those conditions cannot hold.\nThere are many possible extensions to this 
work, the first being to further investigate the properties of different global protocols belonging to the class we identified, and their influence on the outcome.\nThere are in particular many heuristics, highly dependent on the context of the study, that could intuitively yield interesting results (in our study, selecting the recipient on the basis of what can be inferred from its observed actions could be such a heuristic).\nOne obvious candidate for longer-term work concerns the relaxation of the assumption of perfect sensing.","keyphrases":["multiag system","favour hypothesi","global consist","consist","observ set","time point sequenc","bound percept","tempor path","topolog constraint","hypothesi exchang protocol","bilater exchang","mutual consist","context request step","inter-agent commun","negoti and argument","agent commun languag and protocol"],"prmu":["P","P","P","P","R","M","M","U","R","R","M","M","M","M","M","M"]} {"id":"I-49","title":"A Multilateral Multi-issue Negotiation Protocol","abstract":"In this paper, we present a new protocol to address multilateral multi-issue negotiation in a cooperative context. We consider complex dependencies between multiple issues by modelling the preferences of the agents with a multi-criteria decision aid tool, also enabling us to extract relevant information on a proposal assessment. This information is used in the protocol to help in accelerating the search for a consensus between the cooperative agents. 
In addition, the negotiation procedure is defined in a crisis management context where the common objective of our agents is also considered in the preferences of a mediator agent.","lvl-1":"A Multilateral Multi-issue Negotiation Protocol Miniar Hemaissia THALES Research & Technology France RD 128 F-91767 Palaiseau Cedex, France miniar.hemaissia@lip6.fr Amal El Fallah Seghrouchni LIP6, University of Paris 6 8 rue du Capitaine Scott F-75015 Paris, France amal.elfallah@lip6.fr Christophe Labreuche and Juliette Mattioli THALES Research & Technology France RD 128 F-91767 Palaiseau Cedex, France ABSTRACT In this paper, we present a new protocol to address multilateral multi-issue negotiation in a cooperative context.\nWe consider complex dependencies between multiple issues by modelling the preferences of the agents with a multi-criteria decision aid tool, also enabling us to extract relevant information on a proposal assessment.\nThis information is used in the protocol to help in accelerating the search for a consensus between the cooperative agents.\nIn addition, the negotiation procedure is defined in a crisis management context where the common objective of our agents is also considered in the preferences of a mediator agent.\nCategories and Subject Descriptors\nI.2.11 [Distributed Artificial Intelligence]: Intelligent agents, Multiagent systems\nGeneral Terms\nTheory, Design, Experimentation\n1.\nINTRODUCTION\nMulti-issue negotiation protocols represent an important field of study since negotiation problems in the real world are often complex ones involving multiple issues.\nTo date, most previous work in this area ([2, 3, 19, 13]) dealt almost exclusively with simple negotiations involving independent issues.\nHowever, real-world negotiation problems involve complex dependencies between multiple issues.\nWhen one wants to buy a car, for example, the value of a given car is highly dependent on its price, consumption, comfort and so on.\nThe addition of such 
interdependencies greatly complicates the agents' utility functions, and classical utility functions, such as the weighted sum, are not sufficient to model this kind of preference.\nIn [10, 9, 17, 14, 20], the authors consider inter-dependencies between issues, most often defined over boolean values, except for [9], while we can deal with continuous and discrete dependent issues thanks to the modelling power of the Choquet integral.\nIn [17], the authors deal with bilateral negotiation, while we are interested in a multilateral negotiation setting.\nKlein et al. [10] present an approach similar to ours, also using a mediator and information about the strength of the approval or rejection that an agent expresses during the negotiation.\nIn our protocol, we use more precise information to improve the proposals, thanks to the multi-criteria methodology and tools used to model the preferences of our agents.\nLin, in [14, 20], also presents a mediation service, but one using an evolutionary algorithm to reach optimal solutions; as explained in [4], players in evolutionary models need to repeatedly interact with each other until a stable state is reached.\nAs the population size increases, the time it takes for the population to stabilize also increases, resulting in excessive computation, communication, and time overheads that can become prohibitive; for one-to-many and many-to-many negotiations, the overheads become higher as the number of players increases.\nIn [9], the authors consider a non-linear utility function by using constraints on the domain of the issues and a mediation service to find a combination of bids maximizing the social welfare.\nOur preference model, also a nonlinear utility function, is more complex than that of [9], since the Choquet integral takes into account the interactions and the importance of each decision criterion\/issue, not only the dependencies between the values of the issues, to determine the utility.\nWe also use an iterative protocol 
enabling us to find a solution even when no bid combination is possible.\nIn this paper, we propose a negotiation protocol suited to multiple agents with complex preferences, taking into account, at the same time, multiple interdependent issues and recommendations made by the agents to improve a proposal.\nMoreover, the preferences of our agents are modelled using a multi-criteria methodology and tools enabling us to take into account information about the improvements that can be made to a proposal, in order to help in accelerating the search for a consensus between the agents.\nTherefore, we propose a negotiation protocol consisting of solving our decision problem using a MAS, with multi-criteria decision aiding modelling at the agent level and a cooperation-based multilateral multi-issue negotiation protocol.\nThis protocol is studied under a non-cooperative approach and it is shown that it has subgame perfect equilibria, provided that agents behave rationally in the sense of von Neumann and Morgenstern.\n943 978-81-904262-7-5 (RPS) \u00a9 2007 IFAAMAS\nThe approach proposed in this paper was first introduced and presented in [8].\nIn this paper, we present our first experiments, with some noteworthy results, and a more complex multi-agent system with representatives, giving us a more robust system.\nIn Section 2, we present our application, a crisis management problem.\nSection 3 deals with the general aspects of the proposed approach.\nThe preference modelling is described in Sect. 4, whereas the motivations of our protocol are considered in Sect. 5 and the agent\/multiagent modelling in Sect. 6.\nSection 7 presents the formal modelling and properties of our protocol before presenting our first experiments in Sect. 8.\nFinally, in Section 9, we conclude and present future work.\n2.\nCASE STUDY\nThis protocol is applied to a crisis management problem.\nCrisis management is a relatively new field of management and is composed of three types 
of activities: crisis prevention, operational preparedness and management of declared crises.\nCrisis prevention aims to bring the risk of crisis to an acceptable level and, when possible, to avoid the crisis actually happening.\nOperational preparedness includes strategic advance planning, training and simulation to ensure availability, rapid mobilisation and deployment of resources to deal with possible emergencies.\nThe management of a declared crisis is the response to the crisis - including evacuation, search and rescue - and the recovery from it, minimising the effects of the crisis, limiting the impact on the community and environment and, in the longer term, bringing the community's systems back to normal.\nIn this paper, we focus on the response part of the management of declared crises, and particularly on the evacuation of injured people in disaster situations.\nWhen a crisis is declared, the plans defined during the operational preparedness activity are executed.\nFor disasters, master plans are executed.\nThese plans are elaborated by the authorities with the collaboration of civil protection agencies, police, health services, non-governmental organizations, etc.\nWhen a victim is found, several actions follow.\nFirst, a rescue party is assigned to the victim, who is examined and given first aid on the spot.\nThen, the victims can be placed in an emergency centre on the ground called the advanced medical post.\nFor all victims, a triage physician - generally a hospital physician - examines the seriousness of their injuries and classifies the victims by pathology.\nEvacuation by emergency health transport, if necessary, can take place after these clinical examinations and classifications.\nNowadays, to evacuate the injured people, the physicians contact the emergency call centre to pass on the medical assessments of the most urgent cases.\nThe emergency call centre then searches for available and appropriate spaces in the 
hospitals to care for these victims.\nThe physicians are informed of the allocations, so they can proceed with the evacuations, choosing the emergency health transport according to the pathologies and the transport modes provided.\nIn this context, we can observe that the evacuation is based on three important elements: the examination and classification of the victims, the search for an allocation, and the transport.\nIn the case of the 11 March 2004 Madrid attacks, for instance, some injured people did not receive the appropriate health care because, during the search for space, the emergency call centre did not consider the transport constraints and, in particular, the traffic.\nTherefore, for a large-scale crisis management problem, there is a need to support the emergency call centre and the physicians in the dispatching, taking into account the hospital and transport constraints and availabilities.\n3.\nPROPOSED APPROACH\nTo accept a proposal, an agent has to consider several issues such as, in the case of the crisis management problem, the availabilities in terms of number of beds per unit, medical and surgical staff, theatres and so on.\nTherefore, each agent has its own preferences in correlation with its resource constraints and other decision criteria such as, in the case study, the level of congestion of a hospital.\nAll the agents also make decisions by taking into account the dependencies between these decision criteria.\nThe first hypothesis of our approach is that there are several parties involved in and impacted by the decision, and so they have to decide together according to their own constraints and decision criteria.\nNegotiation is the process by which a group facing a conflict communicates to try and come to a mutually acceptable agreement or decision, and so the agents have to negotiate.\nThe conflict we have to resolve is finding an acceptable solution for all the parties by using a particular protocol.\nIn our context, 
multilateral negotiation is the type of negotiation protocol best suited to this type of problem: it enables the hospitals and the physicians to negotiate together.\nThe negotiation also deals with multiple issues.\nMoreover, another hypothesis is that we are in a cooperative context where all the parties have a common objective, which is to provide the best possible solution for everyone.\nThis implies the use of a negotiation protocol encouraging the parties involved to cooperate while satisfying their preferences.\nTaking into account these aspects, a Multi-Agent System (MAS) seems to be a reliable method in the case of a distributed decision-making process.\nIndeed, a MAS is a suitable answer when the solution has to combine, at least, distribution features and reasoning capabilities.\nAnother motivation for using a MAS lies in the fact that MAS are well known for facilitating automated negotiation at the operational decision-making level in various applications.\nTherefore, our approach consists of solving a multiparty decision problem using a MAS with:\n\u2022 The preferences of the agents modelled using a multi-criteria decision aid tool, MYRIAD, also enabling us to consider multi-issue problems by evaluating proposals on several criteria.\n\u2022 A cooperation-based multilateral and multi-issue negotiation protocol.\n4.\nTHE PREFERENCE MODEL\nWe consider a problem where an agent has several decision criteria, a set Nk = {1, ... 
, nk} of criteria for each agent k involved in the negotiation protocol.\nThese decision criteria enable the agents to evaluate the set of issues that are negotiated.\nThe issues may or may not correspond directly to the decision criteria.\nHowever, for the example of the crisis management problem, the issues are the set of victims to dispatch between the hospitals.\nThese issues are translated into decision criteria enabling the hospital to evaluate its congestion, and so into an updated number of available beds, medical teams and so on.\nIn order to take into account the complexity that exists between the criteria\/issues, we use a multi-criteria decision aiding (MCDA) tool named MYRIAD [12], developed at Thales for MCDA applications, based on a two-additive Choquet integral, which is a good compromise between versatility and ease of understanding and modelling the interactions between decision criteria [6].\nThe set of attributes of Nk is denoted by $X^k_1, \dots, X^k_{n_k}$.\nAll the attributes are made commensurate thanks to the introduction of partial utility functions $u^k_i : X^k_i \rightarrow [0, 1]$.\nThe [0, 1] scale depicts the satisfaction of agent k regarding the values of the attributes.\nAn option x is identified with an element of $X^k = X^k_1 \times \cdots \times X^k_{n_k}$, with $x = (x_1, \dots, x_{n_k})$.\nThen the overall assessment of x is given by\n$$U_k(x) = H_k(u^k_1(x_1), \dots, u^k_{n_k}(x_{n_k})) \quad (1)$$\nwhere $H_k : [0, 1]^{n_k} \rightarrow [0, 1]$ is the aggregation function.\nThe overall preference relation over $X^k$ is then $x \succeq y \iff U_k(x) \geq U_k(y)$.\nThe two-additive Choquet integral is defined for $(z_1, \dots, z_{n_k}) \in [0, 1]^{n_k}$ by [7]\n$$H_k(z_1, \dots, z_{n_k}) = \sum_{i \in N_k} \Big( v^k_i - \frac{1}{2} \sum_{j \neq i} |I^k_{i,j}| \Big) z_i + \sum_{I^k_{i,j} > 0} I^k_{i,j} \, (z_i \wedge z_j) + \sum_{I^k_{i,j} < 0} |I^k_{i,j}| \, (z_i \vee z_j) \quad (2)$$\nwhere $v^k_i$ is the relative importance of criterion i for agent k, $I^k_{i,j}$ is the interaction between criteria i and j, and $\wedge$ and $\vee$ denote the min and max functions respectively.\nAssume that $z_i < z_j$.\nA positive interaction between criteria i and j depicts complementarity between these criteria (positive synergy) [7].\nHence, the lower score of z on criterion i conceals the positive effect of the better score on criterion j, to a larger extent on the overall evaluation than the impact of the relative importance of the criteria taken independently of one another.\nIn other words, the score of z on criterion j is penalized by the lower score on criterion i. Conversely, a negative interaction between criteria i and j depicts substitutability between these criteria (negative synergy) [7].\nThe score of z on criterion i is then saved by a better score on criterion j.\nIn MYRIAD, we can also obtain recommendations corresponding to an indicator $\omega_C(H_k, x)$ measuring the worth of improving option x w.r.t. $H_k$ on some criteria $C \subseteq N_k$, as follows:\n$$\omega_C(H_k, x) = \int_0^1 \frac{H_k\big((1-\tau)x_C + \tau, x_{N_k \setminus C}\big) - H_k(x)}{E_C(\tau, x)} \, d\tau$$\nwhere $((1-\tau)x_C + \tau, x_{N_k \setminus C})$ is the compound act that equals $(1-\tau)x_i + \tau$ if $i \in C$ and equals $x_i$ if $i \in N_k \setminus C$. 
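As an illustration of Eq. (2) and of the indicator \u03c9C, the following sketch implements the two-additive Choquet integral and a midpoint-rule discretisation of \u03c9C with the order-1 effort E_C. The weights v, the interaction matrix I and the profile x below are illustrative assumptions, not values taken from MYRIAD or from the paper.

```python
# A minimal sketch of the two-additive Choquet integral of Eq. (2) and a
# discretised version of the improvement indicator w_C(H_k, x).
# The weights v, interaction matrix I and profile x are illustrative
# assumptions, not values from MYRIAD or from the paper.
from itertools import combinations

def choquet_2add(z, v, I):
    """Two-additive Choquet integral H(z_1, ..., z_n) of Eq. (2).

    v[i]   : relative importance of criterion i (the v_i sum to 1).
    I[i][j]: interaction between criteria i and j (symmetric, I[i][i] = 0).
    """
    n = len(z)
    # Linear part: (v_i - 1/2 * sum_{j != i} |I_ij|) * z_i
    total = sum((v[i] - 0.5 * sum(abs(I[i][j]) for j in range(n) if j != i)) * z[i]
                for i in range(n))
    # Interaction part: positive synergies use min, negative ones use max
    for i, j in combinations(range(n), 2):
        if I[i][j] > 0:
            total += I[i][j] * min(z[i], z[j])
        elif I[i][j] < 0:
            total += abs(I[i][j]) * max(z[i], z[j])
    return total

def improvement_worth(x, C, v, I, steps=200):
    """Midpoint-rule approximation of w_C(H, x), assuming the order-1
    effort E_C(tau, x) = tau * sum_{i in C} (1 - x_i)."""
    effort_rate = sum(1 - x[i] for i in C)
    if effort_rate == 0:
        return 0.0  # nothing left to improve on C
    base = choquet_2add(x, v, I)
    acc = 0.0
    for k in range(steps):
        tau = (k + 0.5) / steps  # midpoint rule avoids the tau = 0 endpoint
        y = [(1 - tau) * x[i] + tau if i in C else x[i] for i in range(len(x))]
        acc += (choquet_2add(y, v, I) - base) / (tau * effort_rate)
    return acc / steps

# Illustrative 3-criteria model: positive synergy between criteria 0 and 1,
# negative synergy between criteria 1 and 2.
v = [0.5, 0.3, 0.2]
I = [[0.0, 0.2, 0.0],
     [0.2, 0.0, -0.1],
     [0.0, -0.1, 0.0]]
x = [0.3, 0.8, 0.5]

# Recommendation step: improve the coalition C maximising w_C(H, x).
coalitions = [set(c) for r in (1, 2, 3) for c in combinations(range(3), r)]
best = max(coalitions, key=lambda C: improvement_worth(x, C, v, I))
```

Selecting the coalition that maximises `improvement_worth` mirrors the recommendation MYRIAD derives from \u03c9C: it tells an agent which criteria are the most worthwhile to improve without revealing the full preference model.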
Moreover, $E_C(\tau, x)$ is the effort to go from the profile x to the profile $((1-\tau)x_C + \tau, x_{N_k \setminus C})$.\nThe function $\omega_C(H_k, x)$ depicts the average improvement of $H_k$ when the criteria of coalition C range from $x_C$ to $1_C$, divided by the average effort needed for this improvement.\nWe generally assume that $E_C$ is of order 1, that is, $E_C(\tau, x) = \tau \sum_{i \in C} (1 - x_i)$.\nThe expression of $\omega_C(H_k, x)$ when $H_k$ is a Choquet integral is given in [11].\nThe agent is then recommended to improve the criteria of the coalition C for which $\omega_C(H_k, x)$ is maximal.\nThis recommendation is very useful in a negotiation protocol since it helps the agents know what to do if they want an offer to be accepted, while not revealing their own preference model.\n5.\nPROTOCOL MOTIVATIONS\nFor multi-issue problems, there are two approaches: a complete package approach, where the issues are negotiated simultaneously, as opposed to the sequential approach, where the issues are negotiated one by one.\nWhen the issues are dependent, it is best to bargain simultaneously over all issues [5].\nThus, the complete package approach is adopted, so that an offer bears on the overall set of injured people while taking into account the other decision criteria.\nWe have to consider that all the parties in the negotiation process have to agree on the decision, since they are all involved in and impacted by this decision, and so a unanimous agreement is required in the protocol.\nIn addition, no party can leave the process until an agreement is reached, i.e. 
a consensus achieved.\nThis makes sense since a proposal concerns all the parties.\nMoreover, we have to guarantee the availability of the resources needed by the parties to ensure that a proposal is realistic.\nTo this end, the information about these availabilities is used to determine admissible proposals, such that an offer cannot be made if one of the parties does not have enough resources to execute\/achieve it.\nAt the beginning of the negotiation, each party provides its maximum availabilities, thus defining the constraints that have to be satisfied by each offer submitted.\nThe negotiation also has to converge quickly on a unanimous agreement.\nWe decided to introduce into the negotiation protocol an incentive to cooperate that takes into account the elapsed negotiation time.\nThis incentive is defined on the basis of a time-dependent penalty: a discounting factor, as in [18], or a time-dependent threshold.\nThis penalty is used in the accept\/reject stage of our consensus procedure.\nIn the case of a discounting factor, each party will accept or reject an offer by evaluating the proposal using its utility function reduced by the discounting factor.\nIn the case of a time-dependent threshold, if the evaluation is greater than or equal to this threshold, the offer is accepted; otherwise, in the next period, the threshold is reduced.\nThe use of a penalty alone is not enough, since it does not help in finding a solution.\nSome information about the assessments of the parties involved in the negotiation is needed.\nIn particular, it would be helpful to know why an offer has been rejected and\/or what can be done to make a proposal that would be accepted.\nMYRIAD provides an analysis that determines the flaws of an option, here a proposal.\nIn particular, it gives this type of information: which criteria of a proposal should be improved so as to reach the highest possible overall evaluation [11].\nAs we use this tool to model the parties involved in the 
negotiation, the information about the criteria to improve can be used by the mediator to elaborate the proposals.\nWe also consider that the dual function can be used to take into account another type of information: the criteria of a proposal on which no improvement is necessary for the overall evaluation of the proposal to remain acceptable, i.e. not decrease.\nThus, each piece of information is a constraint to be satisfied as much as possible by the parties to make a new proposal.\nFigure 1: An illustration of some system.\nWe are in a cooperative context, and revealing one's opinion on what can be improved is not prohibited; on the contrary, it is useful and recommended here, seeing that it helps in converging on an agreement.\nTherefore, when one of the parties refuses an offer, some information will be communicated.\nIn order to facilitate and speed up the negotiation, we introduce a mediator.\nThis specific entity is in charge of making the proposals to the other parties in the system, taking into account their public constraints (e.g. their availabilities) and the recommendations they make.\nThis mediator can also be considered as the representative of the general interest in some applications: in the crisis management problem, the physician will be the mediator and will also have some additional information to consider when making an offer (e.g. traffic state, transport mode and time).\nEach party in a negotiation N, a negotiator, can also be a mediator of another negotiation N', this party becoming the representative of N' in the negotiation N, as illustrated by Fig. 
1, which can also help in reducing the communication time.\n6.\nAGENTIFICATION\nHow the problem is transposed into a MAS problem is a very important aspect when designing such a system.\nThe agentification has an influence upon the system's efficiency in solving the problem.\nTherefore, in this section, we describe the elements and constraints taken into account during the modelling phase and for the model itself.\nHowever, for this negotiation application, the modelling is quite natural when one observes the negotiation protocol motivations and main properties.\nFirst of all, it seems obvious that there should be one agent for each player of our multilateral multi-issue negotiation protocol.\nThe agents have the involved parties' information and preferences.\nThese agents are:\n\u2022 Autonomous: they decide for themselves what, when and under what conditions actions should be performed;\n\u2022 Rational: they have a means-ends competence to fit their decisions to their knowledge, preferences and goals;\n\u2022 Self-interested: they have their own interests, which may conflict with the interests of other agents.\nMoreover, their preferences are modelled, and a proposal evaluated and analysed, using MYRIAD.\nEach agent has private information and can access public information as knowledge.\nIn fact, there are two types of agents: the mediator type for the agents corresponding to the mediator of our negotiation protocol, the delegated physician in our application, and the negotiator type for the agents corresponding to the other parties, the hospitals.\nThe main behaviours that an agent of type mediator needs in order to negotiate in our protocol are the following:\n\u2022 convert_improvements: converts the information given by the other agents involved in the negotiation about the improvements to be made into constraints on the next proposal;\n\u2022 convert_no_decrease: converts the information given by the other agents involved in the negotiation about the points 
that should not be changed, into constraints on the next proposal to be made;
• construct_proposal: constructs a new proposal according to the constraints obtained with convert_improvements and convert_no_decrease, and to the agent's preferences.

The main behaviours that an agent of type negotiator needs in order to negotiate in our protocol are the following:

• convert_proposal: converts a proposal into a MYRIAD option of the agent, according to its preference model and its private data;
• convert_improvements_wc: converts the agent's recommendations for the improvement of a MYRIAD option into general information on the proposal;
• convert_no_decrease_wc: converts the agent's recommendations about the criteria that should not be changed in the MYRIAD option into general information on the proposal.

In addition to these behaviours, there are, for the two types of agents, access behaviours to MYRIAD functionalities such as the evaluation and improvement functions:

• evaluate_option: evaluates the MYRIAD option obtained with the agent behaviour convert_proposal;
• improvements: gets the agent's recommendations to improve a proposal from the MYRIAD option;
• no_decrease: gets the agent's recommendations not to change some criteria from the MYRIAD option.

Of course, before running the system with such agents, we must have defined each party's preference model in MYRIAD. This model has to be part of the agent so that it can be used to make the assessments and to retrieve the improvements.

In addition to these behaviours, the communication acts between the agents are as follows.

Figure 2: The protocol diagram in AUML, where m is the number of negotiator agents and l is the number of agents refusing the current proposal.

1. mediator agent communication acts:
(a) propose: sends a message containing a proposal to all negotiator agents;
(b) inform: sends a message to all negotiator agents informing them that an agreement has been reached and containing the consensus outcome.

2. negotiator agent communication acts:
(a) accept-proposal: sends a message to the mediator agent containing the agent's recommendations about the criteria that should not be changed, obtained with convert_no_decrease_wc;
(b) reject-proposal: sends a message to the mediator agent containing the agent's recommendations to improve the proposal, obtained with convert_improvements_wc.

Such agents are interchangeable in case of failure, since they all have the same properties and represent a user through his preference model, which depends not on the agent but on the model defined in MYRIAD. When the issues and the decision criteria differ from each other, the information about the criteria improvements has to be pre-processed to give some instructions on the directions to take regarding the negotiated issues. It is the same for the evaluation of a proposal: each agent has to convert the information about the issues to update its private information and to obtain the values of each attribute of the decision criteria.

7. OUR PROTOCOL

Formally, we consider negotiations where a set of players A = {1, 2, ..., m} and a player a negotiate over a set Q of size q. The player a is the protocol mediator, the mediator agent of the agentification. The utility/preference function of a player k ∈ A ∪ {a} is U_k, defined using MYRIAD as presented in Section 4, with a set N_k of criteria, an option X_k, and so on. An offer is a vector P = (P_1, P_2, ..., P_m), a partition of Q, in which P_k is player k's share of Q. We have P ∈ 𝒫, where 𝒫 is the set of admissible proposals, a finite set. Note that 𝒫 is determined using all the players' general constraints on the proposals and Q.
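The negotiator-side behaviours and the accept/reject communication acts above can be sketched as follows. This is a minimal Python sketch, not the authors' JADE/MYRIAD implementation: the class names, the "two worst criteria" improvement heuristic and the threshold rule are hypothetical, and a Yes carries the criteria that should not decrease while a No carries the criteria to improve, as in the experiments of Section 8.

```python
# Minimal sketch of one propose/respond round of the mediated protocol.
# A proposal is a partition P = (P_1, ..., P_m); negotiator k evaluates its
# share P_k and answers accept (with no-decrease recommendations) or
# reject (with improvement recommendations).
from dataclasses import dataclass, field

@dataclass
class Response:
    accept: bool
    improvements: list = field(default_factory=list)  # criteria to improve (on reject)
    no_decrease: list = field(default_factory=list)   # criteria to keep as-is (on accept)

class Negotiator:
    def __init__(self, name, utility, threshold):
        # utility: share -> (overall score, per-criterion scores); stands in
        # for convert_proposal + evaluate_option on the agent's MYRIAD model.
        self.name, self.utility, self.threshold = name, utility, threshold

    def respond(self, share):
        score, per_criterion = self.utility(share)
        if score >= self.threshold:
            keep = [c for c, v in per_criterion.items() if v >= self.threshold]
            return Response(True, no_decrease=keep)
        worst_two = sorted(per_criterion, key=per_criterion.get)[:2]
        return Response(False, improvements=worst_two)

def mediation_round(proposal, negotiators):
    """Mediator side: collect all responses; an agreement is reached only if
    every negotiator accepts. Otherwise the returned recommendations become
    constraints on the next proposal (construct_proposal)."""
    responses = {k.name: k.respond(proposal[k.name]) for k in negotiators}
    agreed = all(r.accept for r in responses.values())
    return agreed, responses
```

With utilities mimicking the dispatch example (one hospital scoring 0.29 and rejecting, the others accepting), a round returns no agreement together with the rejecting agent's improvement recommendations, which the mediator converts into linear constraints for the next proposal.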
Moreover, let P̃ denote a particular proposal, defined as a's preferred proposal. We also use the following notation: δ_k is the threshold decrease factor of player k, Φ_k : P_k → X_k is player k's function converting a proposal into an option, and Ψ_k is the function indicating on which points P has to be improved, with Ψ̄_k its dual function, indicating on which points no improvement is necessary. Ψ̄_k is obtained using the dual function of ω_C(H_k, x):

$$\tilde{\omega}_C(H_k, x) = \int_0^1 \Big[ H_k(x) - H_k\big(\tau x_C, x_{N_k \setminus C}\big) \Big]\, \tilde{E}_C(\tau, x)\, d\tau$$

where $\tilde{E}_C(\tau, x)$ is the cost/effort to go from $(\tau x_C, x_{N_k \setminus C})$ to $x$.

In period t of our consensus procedure, player a proposes an agreement P. All players k ∈ A respond to a by accepting or rejecting P; the responses are made simultaneously. If all players k ∈ A accept the offer, the game ends. If any player k rejects P, then the next period t + 1 begins: player a makes another proposal P′, taking into account the information provided by the players, and the players that have rejected P incur a penalty. Our negotiation protocol can therefore be stated as follows.

Protocol P1.
• At the beginning, we set period t = 0.
• a makes a proposal P ∈ 𝒫 that has not been proposed before.
• Wait until all players of A give their opinion, Yes or No, to the player a. If all players agree on P, this latter is chosen. Otherwise t is incremented and we go back to the previous point.
• If there is no offer left in 𝒫, the default offer P̃ is chosen.
• The utility of the players regarding a given offer decreases over time. More precisely, the utility of player k ∈ A at period t regarding offer P is U_k(Φ_k(P_k), t) = f_t(U_k(Φ_k(P_k))), where one can take, for instance, f_t(x) = x·(δ_k)^t or f_t(x) = x − δ_k·t as the penalty function.

Lemma 1. Protocol P1 has at least one subgame perfect equilibrium.¹

Proof: Protocol P1 is first transformed into a game in extensive form. To this end, one shall specify the order in which the responders A react to the offer P of a. However, the order in which the players answer has no influence on the course of the game, and in particular on their personal utilities. Hence protocol P1 is strictly equivalent to a game in extensive form, considering any order of the players A. This game is clearly finite, since 𝒫 is finite and each offer can only be proposed once. Finally, P1 corresponds to a game with perfect information. We end the proof by using a classical result stating that any finite game in extensive form with perfect information has at least one subgame perfect equilibrium (see e.g. [16]).

¹ A subgame perfect equilibrium is an equilibrium such that the players' strategies constitute a Nash equilibrium in every subgame of the original game [18, 16]. A Nash equilibrium is a set of strategies, one for each player, such that no player has an incentive to unilaterally change his/her action [15].

Rational players (in the sense of von Neumann and Morgenstern) involved in protocol P1 will necessarily come up with a subgame perfect equilibrium.

Example 1. Consider an example with A = {1, 2} and 𝒫 = {P¹, P², P³}, where the default offer is P¹. Assume that f_t(x) = x − 0.1t.
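Lemma 1's proof turns protocol P1 into a finite extensive-form game, so the subgame perfect equilibrium of a concrete instance can be computed by backward induction. The sketch below (Python; not from the paper, writing P1, P2, P3 for the offers) solves Example 1 with the utilities of the table that follows: a proposal passes only if every responder weakly prefers it, at the current period, to the equilibrium continuation reached after a rejection.

```python
# Backward induction for protocol P1 on Example 1: utilities at t = 0,
# penalty f_t(x) = x - 0.1 t, default offer P1 once every offer is exhausted.
U = {"a": {"P1": 1.0, "P2": 0.8, "P3": 0.7},
     "1": {"P1": 0.1, "P2": 0.7, "P3": 0.5},
     "2": {"P1": 0.1, "P2": 0.3, "P3": 0.8}}
RESPONDERS, DEFAULT, DELTA = ("1", "2"), "P1", 0.1

def payoff(player, offer, t):
    return U[player][offer] - DELTA * t  # f_t(x) = x - 0.1 t

def solve(remaining, t):
    """SPE outcome (offer, period) of the subgame in which the offers in
    `remaining` have not been proposed yet and the current period is t."""
    if not remaining:
        return DEFAULT, t  # no offer left: the default offer is imposed
    best = None
    for p in sorted(remaining):
        cont = solve(remaining - {p}, t + 1)          # outcome after a rejection
        accepted = all(payoff(k, p, t) >= payoff(k, *cont) for k in RESPONDERS)
        outcome = (p, t) if accepted else cont
        if best is None or payoff("a", *outcome) > payoff("a", *best):
            best = outcome                            # a proposes what serves a best
    return best

print(solve({"P1", "P2", "P3"}, 0))  # -> ('P2', 1): P3 is proposed first and
                                     # rejected, then P2 is accepted in period 1
```

This reproduces the reasoning of the example: a opens with its less preferred P3, and the threat of falling back to the default P1 makes both responders accept P2 one period later.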
Consider the following table giving the utilities at t = 0.

       P¹    P²    P³
a      1     0.8   0.7
1      0.1   0.7   0.5
2      0.1   0.3   0.8

It is easy to see that there is one single subgame perfect equilibrium for protocol P1 corresponding to these values. This equilibrium consists of the following choices: first, a proposes P³; player 1 rejects this offer; a then proposes P², and both players 1 and 2 accept, since otherwise they are threatened with receiving the offer P¹, which is the worst one for them. Finally, offer P² is chosen. Option P¹ is the best one for a, but the two other players vetoed it. It is interesting to point out that, even though a prefers P² to P³, offer P³ is proposed first, and this is what makes P² accepted: if a proposed P² first, then the subgame perfect equilibrium in this situation would be P³. To sum up, the least preferred options have to be proposed first in order to finally obtain the best one; but this entails a waste of time.

Analysing the previous example, one sees that the game outcome at the equilibrium is P², which is not very attractive for player 2. Option P³ seems more balanced, since no player judges it badly; it could be seen as a better solution, as a consensus among the agents. In order to introduce this notion of balance into the protocol, we introduce a condition under which a player is obliged to accept the proposal, reducing the autonomy of the agents but increasing rationality and cooperation. More precisely, if the utility of a player is larger than a given threshold, then acceptance is required. The threshold decreases over time, so that players have to make more and more concessions. Therefore, the protocol becomes as follows.

Protocol P2.
• At the beginning, we set period t = 0.
• a makes a proposal P ∈ 𝒫 that has not been proposed before.
• Wait until all players of A give their opinion, Yes or No, to the player a. A player k must accept the offer if U_k(Φ_k(P_k)) ≥ ρ_k(t), where ρ_k(t) tends to zero as t grows; moreover, there exists T such that for all t ≥ T, ρ_k(t) = 0. If all players agree on P, this latter is chosen. Otherwise t is incremented and we go back to the previous point.
• If there is no offer left in 𝒫, the default offer P̃ is chosen.

One can show, exactly as in Lemma 1, that protocol P2 has at least one subgame perfect equilibrium. We expect protocol P2 to provide a solution not too far from P∗, so it favours fairness among the players. Therefore, our cooperation-based multilateral multi-issue protocol is the following.

Protocol P.
• At the beginning, we set period t = 0.
• a makes a proposal P ∈ 𝒫 that has not been proposed before, considering the recommendations Ψ_k and Ψ̄_k given on the previous proposal by all players k ∈ A.
• Wait until all players of A give their opinion, (Yes, Ψ̄_k(P)) or (No, Ψ_k(P)), to the player a. A player k must accept the offer if U_k(Φ_k(P_k)) ≥ ρ_k(t), where ρ_k(t) tends to zero as t grows; moreover, there exists T such that for all t ≥ T, ρ_k(t) = 0. If all players agree on P, this latter is chosen. Otherwise t is incremented and we go back to the previous point.
• If there is no offer left in 𝒫, the default offer P̃ is chosen.

8. EXPERIMENTS

We developed a MAS using the widely used JADE agent platform [1]. This MAS is designed to be as general as possible (e.g.
a general framework to specialise according to the application) and to enable us to make some preliminary experiments. The experiments aim at verifying that our approach gives solutions as close as possible to the maximin solution, in a small number of rounds and, hopefully, in a short time, since our context is highly cooperative.

We defined the two types of agents and their behaviours as introduced in Section 6. The agents and their behaviours correspond to the main classes of our prototype: NegotiatorAgent and NegotiatorBehaviour for the negotiator agents, and MediatorAgent and MediatorBehaviour for the mediator agent. These classes extend JADE classes and integrate MYRIAD into the agents, reducing the amount of communication in the system. Some functionalities have to be implemented by extending these classes according to the application: in particular, all the conversion parts of the agents have to be specified for the application at hand, since to convert a proposal into decision criteria we need to know this model and the correlations between the proposals and this model.

First, to illustrate our protocol, we present a simple example of our dispatch problem. In this example, we have three hospitals, H1, H2 and H3. Each hospital can receive victims having a particular pathology, in such a way that H1 can receive patients with the pathology burn, surgery or orthopaedic, H2 can receive patients with the pathology surgery, orthopaedic or cardiology, and H3 can receive patients with the pathology burn or cardiology. All the hospitals have similar decision criteria, reflecting their preferences on the level of congestion they can face for the overall hospital and for the different services available, as briefly explained for hospital H1 hereafter.

Figure 3: The H1 preference model in MYRIAD.

For hospital H1, the preference model (Fig. 3) is composed of five criteria. These criteria correspond to the preferences on the pathologies the hospital can treat. In the case of the pathology burn, the corresponding criterion, also named burn as shown in Fig. 3, represents the preferences of H1 according to the value of C_burn, which is the current capacity of burn. Therefore, the utility function of this criterion represents a preference such that the more patients of this pathology there are in the hospital, the less the hospital may satisfy them, starting from an initial capacity. In addition to reflecting this kind of viewpoint, the aggregation function as defined in MYRIAD introduces a veto on the criteria burn, surgery, orthopaedic and EReceipt, where EReceipt is the criterion for the preferences about the capacity to receive a number of patients at the same time.

In this simplified example, the physician has no particular preferences on the dispatch, and the mediator agent chooses a proposal randomly in a subset of the set of admissibility; this subset has to satisfy as much as possible the recommendations made by the hospitals. To solve this problem, for this example, we decided to solve a linear problem with the availability constraints and the recommendations as linear constraints on the dispatch values. The set of admissibility is then obtained by solving this linear problem using Prolog. Moreover, only the recommendations on how to improve a proposal are taken into account.

The problem to solve is then to dispatch to hospitals H1, H2 and H3 a set of victims composed of 5 victims with the pathology burn, 10 with surgery, 3 with orthopaedic and 7 with cardiology. The availabilities of the hospitals are as presented in the following table.

Available   Overall   burn   surg.   orthop.   cardio.
H1          11        4      8       10        -
H2          25        -      3       4         10
H3          7         10     -       -         3

We obtain a
multiagent system with the mediator agent and three agents of type negotiator for the three hospitals in the problem. The hospitals' thresholds are fixed approximately at the level where an evaluation is considered as good. To start, the negotiator agents send their availabilities. The mediator agent makes a proposal chosen randomly in the admissible set obtained with these availabilities as linear constraints. This proposal is the vector P0 = [[H1, burn, 3], [H1, surgery, 8], [H1, orthopaedic, 0], [H2, surgery, 2], [H2, orthopaedic, 3], [H2, cardiology, 6], [H3, burn, 2], [H3, cardiology, 1]], and the mediator sends propose(P0) to H1, H2 and H3 for approval. Each negotiator agent evaluates this proposal and answers back by accepting or rejecting P0:

• Agent H1 rejects this offer, since its evaluation is very far from the threshold (0.29, a bad score), and gives a recommendation to improve burn and surgery by sending the message reject_proposal([burn, surgery]);
• Agent H2 accepts this offer by sending the message accept_proposal(), the proposal evaluation being good;
• Agent H3 accepts P0 by sending the message accept_proposal(), the proposal evaluation being good.

Just with the recommendations provided by agent H1, the mediator is able to make a new proposal by restricting the values of burn and surgery. The new proposal obtained is then P1 = [[H1, burn, 0], [H1, surgery, 8], [H1, orthopaedic, 1], [H2, surgery, 2], [H2, orthopaedic, 2], [H2, cardiology, 6], [H3, burn, 5], [H3, cardiology, 1]]. The mediator sends propose(P1) to the negotiator agents. H1, H2 and H3 answer back by sending the message accept_proposal(), P1 being evaluated with a high enough score to be acceptable, and also considered as a good proposal when using the explanation function of MYRIAD. An agreement is reached with P1. Note that the evaluation of P1 by H3 has decreased in comparison with P0, but not enough to be rejected, and that this solution is the Pareto one, P∗.

Other examples have been tested with the same settings: issues in ℕ, three negotiator agents and the same mediator agent, with no preference model but selecting the proposal randomly. We obtained solutions either equal or close to the maximin solution, the distance, measured by the standard deviation, being less than 0.0829, the evaluations not far from the ones obtained with P∗, and fewer than seven proposals made. This shows that we are able to solve this multi-issue multilateral negotiation problem in a simple and efficient way, with solutions close to the Pareto solution.

9. CONCLUSION AND FUTURE WORK

This paper presents a new protocol to address multilateral multi-issue negotiation in a cooperative context. The first main contribution is that we take into account complex interdependencies between multiple issues through the use of a complex preference model; this contribution is reinforced by the use of multi-issue negotiation in a multilateral context. Our second contribution is the use of sharp recommendations in the protocol, which help to accelerate the search for a consensus between the cooperative agents and to find an optimal solution. We have also shown that the protocol has subgame perfect equilibria and that these equilibria converge to the usual maximin solution. Moreover, we tested this protocol in a crisis management context where the negotiation aim is to decide where to evacuate a whole set of injured people to predefined hospitals. We have developed a first MAS, in particular integrating MYRIAD, to test this protocol in order to learn more about its efficiency in terms of solution quality and quickness in finding a consensus. This prototype enabled us to solve some examples with our approach, and the results we obtained are encouraging, since we quickly obtained good agreements, close to the Pareto solution, in the light of the initial constraints of the problem: the availabilities. We still have to improve our MAS by taking into
account The Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 949 the two types of recommendations and by adding a preference model to the mediator of our system.\nMoreover, a comparative study has to be done in order to evaluate the performance of our framework against the existing ones and against some variations on the protocol.\n10.\nACKNOWLEDGEMENT This work is partly funded by the ICIS research project under the Dutch BSIK Program (BSIK 03024).\n11.\nREFERENCES [1] JADE.\nhttp:\/\/jade.tilab.com\/.\n[2] P. Faratin, C. Sierra, and N. R. Jennings.\nUsing similarity criteria to make issue trade-offs in automated negotiations.\nArtificial Intelligence, 142(2):205-237, 2003.\n[3] S. S. Fatima, M. Wooldridge, and N. R. Jennings.\nOptimal negotiation of multiple issues in incomplete information settings.\nIn 3rd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS``04), pages 1080-1087, New York, USA, 2004.\n[4] S. S. Fatima, M. Wooldridge, and N. R. Jennings.\nA comparative study of game theoretic and evolutionary models of bargaining for software agents.\nArtificial Intelligence Review, 23:185-203, 2005.\n[5] S. S. Fatima, M. Wooldridge, and N. R. Jennings.\nOn efficient procedures for multi-issue negotiation.\nIn 8th International Workshop on Agent-Mediated Electronic Commerce(AMEC``06), pages 71-84, Hakodate, Japan, 2006.\n[6] M. Grabisch.\nThe application of fuzzy integrals in multicriteria decision making.\nEuropean J. of Operational Research, 89:445-456, 1996.\n[7] M. Grabisch, T. Murofushi, and M. Sugeno.\nFuzzy Measures and Integrals.\nTheory and Applications (edited volume).\nStudies in Fuzziness.\nPhysica Verlag, 2000.\n[8] M. Hemaissia, A. El Fallah-Seghrouchni, C. Labreuche, and J. 
Mattioli.\nCooperation-based multilateral multi-issue negotiation for crisis management.\nIn 2th International Workshop on Rational, Robust and Secure Negotiation (RRS``06), pages 77-95, Hakodate, Japan, May 2006.\n[9] T. Ito, M. Klein, and H. Hattori.\nA negotiation protocol for agents with nonlinear utility functions.\nIn AAAI, 2006.\n[10] M. Klein, P. Faratin, H. Sayama, and Y. Bar-Yam.\nNegotiating complex contracts.\nGroup Decision and Negotiation, 12:111-125, March 2003.\n[11] C. Labreuche.\nDetermination of the criteria to be improved first in order to improve as much as possible the overall evaluation.\nIn IPMU 2004, pages 609-616, Perugia, Italy, 2004.\n[12] C. Labreuche and F. Le Hu\u00b4ed\u00b4e. MYRIAD: a tool suite for MCDA.\nIn EUSFLAT``05, pages 204-209, Barcelona, Spain, 2005.\n[13] R. Y. K. Lau.\nTowards genetically optimised multi-agent multi-issue negotiations.\nIn Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS``05), Big Island, Hawaii, 2005.\n[14] R. J. Lin.\nBilateral multi-issue contract negotiation for task redistribution using a mediation service.\nIn Agent Mediated Electronic Commerce VI (AMEC``04), New York, USA, 2004.\n[15] J. F. Nash.\nNon cooperative games.\nAnnals of Mathematics, 54:286-295, 1951.\n[16] G. Owen.\nGame Theory.\nAcademic Press, New York, 1995.\n[17] V. Robu, D. J. A. Somefun, and J. A. L. Poutr\u00b4e. Modeling complex multi-issue negotiations using utility graphs.\nIn 4th International Joint Conference on Autonomous agents and multiagent systems (AAMAS``05), pages 280-287, 2005.\n[18] A. Rubinstein.\nPerfect equilibrium in a bargaining model.\nEconometrica, 50:97-109, jan 1982.\n[19] L.-K.\nSoh and X. Li.\nAdaptive, confidence-based multiagent negotiation strategy.\nIn 3rd International Joint Conference on Autonomous agents and multiagent systems (AAMAS``04), pages 1048-1055, Los Alamitos, CA, USA, 2004.\n[20] H.-W.\nTung and R. J. 
Lin.\nAutomated contract negotiation using a mediation service.\nIn 7th IEEE International Conference on E-Commerce Technology (CEC``05), pages 374-377, Munich, Germany, 2005.\n950 The Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)","lvl-3":"A Multilateral Multi-issue Negotiation Protocol\nABSTRACT\nIn this paper, we present a new protocol to address multilateral multi-issue negotiation in a cooperative context.\nWe consider complex dependencies between multiple issues by modelling the preferences of the agents with a multi-criteria decision aid tool, also enabling us to extract relevant information on a proposal assessment.\nThis information is used in the protocol to help in accelerating the search for a consensus between the cooperative agents.\nIn addition, the negotiation procedure is defined in a crisis management context where the common objective of our agents is also considered in the preferences of a mediator agent.\n1.\nINTRODUCTION\nMulti-issue negotiation protocols represent an important field of study since negotiation problems in the real world are often complex ones involving multiple issues.\nTo date, most of previous work in this area ([2, 3, 19, 13]) dealt almost exclusively with simple negotiations involving independent issues.\nHowever, real-world negotiation problems involve complex dependencies between multiple issues.\nWhen one wants to buy a car, for example, the value of a given car is highly dependent on its price, consumption, comfort and so on.\nThe addition of such interdependencies greatly\ncomplicates the agents utility functions and classical utility functions, such as the weighted sum, are not sufficient to model this kind of preferences.\nIn [10, 9, 17, 14, 20], the authors consider inter-dependencies between issues, most often defined with boolean values, except for [9], while we can deal with continuous and discrete dependent issues thanks to the modelling power of the Choquet integral.\nIn 
[17], the authors deal with bilateral negotiation while we are interested in a multilateral negotiation setting.\nKlein et al. [10] present an approach similar to ours, using a mediator too and information about the strength of the approval or rejection that an agent makes during the negotiation.\nIn our protocol, we use more precise information to improve the proposals thanks to the multi-criteria methodology and tools used to model the preferences of our agents.\nLin, in [14, 20], also presents a mediation service but using an evolutionary algorithm to reach optimal solutions and as explained in [4], players in the evolutionary models need to repeatedly interact with each other until the stable state is reached.\nAs the population size increases, the time it takes for the population to stabilize also increases, resulting in excessive computation, communication, and time overheads that can become prohibitive, and for one-to-many and many-to-many negotiations, the overheads become higher as the number of players increases.\nIn [9], the authors consider a non-linear utility function by using constraints on the domain of the issues and a mediation service to find a combination of bids maximizing the social welfare.\nOur preference model, a nonlinear utility function too, is more complex than [9] one since the Choquet integral takes into account the interactions and the importance of each decision criteria\/issue, not only the dependencies between the values of the issues, to determine the utility.\nWe also use an iterative protocol enabling us to find a solution even when no bid combination is possible.\nIn this paper, we propose a negotiation protocol suited for multiple agents with complex preferences and taking into account, at the same time, multiple interdependent issues and recommendations made by the agents to improve a proposal.\nMoreover, the preferences of our agents are modelled using a multi-criteria methodology and tools enabling us to take into account 
information about the improvements that can be made to a proposal, in order to help in accelerating the search for a consensus between the agents.\nTherefore, we propose a negotiation protocol consisting of solving our decision problem using a MAS with a multi-criteria decision aiding modelling at the agent level and a cooperation-based multilateral multi-issue negotiation protocol.\nThis protocol is studied under a non-cooperative approach and it is shown\nthat it has subgame perfect equilibria, provided that agents behave rationally in the sense of von Neumann and Morgenstern.\nThe approach proposed in this paper has been first introduced and presented in [8].\nIn this paper, we present our first experiments, with some noteworthy results, and a more complex multi-agent system with representatives to enable us to have a more robust system.\nIn Section 2, we present our application, a crisis management problem.\nSection 3 deals with the general aspect of the proposed approach.\nThe preference modelling is described in sect.\n4, whereas the motivations of our protocol are considered in sect.\n5 and the agent\/multiagent modelling in sect.\n6.\nSection 7 presents the formal modelling and properties of our protocol before presenting our first experiments in sect.\n8.\nFinally, in Section 9, we conclude and present the future work.\n2.\nCASE STUDY\n3.\nPROPOSED APPROACH\n4.\nTHE PREFERENCE MODEL\n944 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.\nPROTOCOL MOTIVATIONS\n6.\nAGENTIFICATION\n946 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n7.\nOUR PROTOCOL\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 947\n8.\nEXPERIMENTS\n948 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nThe Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 949\n10.\nACKNOWLEDGEMENT\nThis work is partly funded by the ICIS research project under the Dutch BSIK Program (BSIK 03024).","lvl-4":"A Multilateral Multi-issue Negotiation Protocol\nABSTRACT\nIn this paper, we present a new protocol to address multilateral multi-issue negotiation in a cooperative context.\nWe consider complex dependencies between multiple issues by modelling the preferences of the agents with a multi-criteria decision aid tool, also enabling us to extract relevant information on a proposal assessment.\nThis information is used in the protocol to help in accelerating the search for a consensus between the cooperative agents.\nIn addition, the negotiation procedure is defined in a crisis management context where the common objective of our agents is also considered in the preferences of a mediator agent.\n1.\nINTRODUCTION\nMulti-issue negotiation protocols represent an important field of study since negotiation problems in the real world are often complex ones involving multiple issues.\nTo date, most of previous work in this area ([2, 3, 19, 13]) dealt almost exclusively with simple negotiations involving independent issues.\nHowever, real-world negotiation problems involve complex dependencies between multiple issues.\nThe addition of such interdependencies greatly\ncomplicates the agents utility functions and classical utility functions, such as the weighted sum, are not sufficient to model this kind of preferences.\nIn [17], the authors deal with bilateral negotiation while we are interested in a multilateral negotiation setting.\nKlein et al. 
[10] present an approach similar to ours, using a mediator too and information about the strength of the approval or rejection that an agent makes during the negotiation.\nIn our protocol, we use more precise information to improve the proposals thanks to the multi-criteria methodology and tools used to model the preferences of our agents.\nIn [9], the authors consider a non-linear utility function by using constraints on the domain of the issues and a mediation service to find a combination of bids maximizing the social welfare.\nOur preference model, a nonlinear utility function too, is more complex than [9] one since the Choquet integral takes into account the interactions and the importance of each decision criteria\/issue, not only the dependencies between the values of the issues, to determine the utility.\nWe also use an iterative protocol enabling us to find a solution even when no bid combination is possible.\nIn this paper, we propose a negotiation protocol suited for multiple agents with complex preferences and taking into account, at the same time, multiple interdependent issues and recommendations made by the agents to improve a proposal.\nMoreover, the preferences of our agents are modelled using a multi-criteria methodology and tools enabling us to take into account information about the improvements that can be made to a proposal, in order to help in accelerating the search for a consensus between the agents.\nTherefore, we propose a negotiation protocol consisting of solving our decision problem using a MAS with a multi-criteria decision aiding modelling at the agent level and a cooperation-based multilateral multi-issue negotiation protocol.\nThis protocol is studied under a non-cooperative approach and it is shown\nthat it has subgame perfect equilibria, provided that agents behave rationally in the sense of von Neumann and Morgenstern.\nThe approach proposed in this paper has been first introduced and presented in [8].\nIn this paper, we present 
our first experiments, with some noteworthy results, and a more complex multi-agent system with representatives to enable us to have a more robust system.\nIn Section 2, we present our application, a crisis management problem.\nSection 3 deals with the general aspect of the proposed approach.\nThe preference modelling is described in sect.\n4, whereas the motivations of our protocol are considered in sect.\n5 and the agent\/multiagent modelling in sect.\n6.\nSection 7 presents the formal modelling and properties of our protocol before presenting our first experiments in sect.\n8.\nFinally, in Section 9, we conclude and present the future work.\n10.\nACKNOWLEDGEMENT","lvl-2":"A Multilateral Multi-issue Negotiation Protocol\nABSTRACT\nIn this paper, we present a new protocol to address multilateral multi-issue negotiation in a cooperative context.\nWe consider complex dependencies between multiple issues by modelling the preferences of the agents with a multi-criteria decision aid tool, also enabling us to extract relevant information on a proposal assessment.\nThis information is used in the protocol to help in accelerating the search for a consensus between the cooperative agents.\nIn addition, the negotiation procedure is defined in a crisis management context where the common objective of our agents is also considered in the preferences of a mediator agent.\n1.\nINTRODUCTION\nMulti-issue negotiation protocols represent an important field of study since negotiation problems in the real world are often complex ones involving multiple issues.\nTo date, most of previous work in this area ([2, 3, 19, 13]) dealt almost exclusively with simple negotiations involving independent issues.\nHowever, real-world negotiation problems involve complex dependencies between multiple issues.\nWhen one wants to buy a car, for example, the value of a given car is highly dependent on its price, consumption, comfort and so on.\nThe addition of such interdependencies 
greatly complicates the agents' utility functions, and classical utility functions, such as the weighted sum, are not sufficient to model this kind of preference. In [10, 9, 17, 14, 20], the authors consider inter-dependencies between issues, most often defined with boolean values, except for [9], while we can deal with continuous and discrete dependent issues thanks to the modelling power of the Choquet integral. In [17], the authors deal with bilateral negotiation while we are interested in a multilateral negotiation setting. Klein et al. [10] present an approach similar to ours, also using a mediator and information about the strength of the approval or rejection that an agent expresses during the negotiation. In our protocol, we use more precise information to improve the proposals, thanks to the multi-criteria methodology and tools used to model the preferences of our agents. Lin, in [14, 20], also presents a mediation service, but one using an evolutionary algorithm to reach optimal solutions; as explained in [4], players in evolutionary models need to repeatedly interact with each other until a stable state is reached. As the population size increases, the time it takes for the population to stabilize also increases, resulting in excessive computation, communication and time overheads that can become prohibitive; for one-to-many and many-to-many negotiations, the overheads become higher as the number of players increases. In [9], the authors consider a non-linear utility function by using constraints on the domain of the issues and a mediation service to find a combination of bids maximizing the social welfare. Our preference model, a non-linear utility function too, is more complex than that of [9], since the Choquet integral takes into account the interactions and the importance of each decision criterion/issue, not only the dependencies between the values of the issues, to determine the utility. We also use an iterative protocol enabling us to find a solution even when no bid combination is possible.

In this paper, we propose a negotiation protocol suited for multiple agents with complex preferences, taking into account, at the same time, multiple interdependent issues and recommendations made by the agents to improve a proposal. Moreover, the preferences of our agents are modelled using a multi-criteria methodology and tools enabling us to take into account information about the improvements that can be made to a proposal, in order to help in accelerating the search for a consensus between the agents. Therefore, we propose a negotiation protocol consisting of solving our decision problem using a MAS with a multi-criteria decision aiding modelling at the agent level and a cooperation-based multilateral multi-issue negotiation protocol. This protocol is studied under a non-cooperative approach and it is shown that it has subgame perfect equilibria, provided that agents behave rationally in the sense of von Neumann and Morgenstern. The approach proposed in this paper was first introduced and presented in [8]. Here, we present our first experiments, with some noteworthy results, and a more complex multi-agent system with representatives that gives us a more robust system.

In Section 2, we present our application, a crisis management problem. Section 3 deals with the general aspect of the proposed approach. The preference modelling is described in sect. 4, whereas the motivations of our protocol are considered in sect. 5 and the agent/multiagent modelling in sect. 6. Section 7 presents the formal modelling and properties of our protocol before presenting our first experiments in sect. 8. Finally, in Section 9, we conclude and present the future work.

2. CASE STUDY

This protocol is applied to a crisis management problem. Crisis management is a relatively new field of management and is composed of three types of activities: crisis prevention, operational preparedness and
management of declared crisis. Crisis prevention aims to bring the risk of crisis to an acceptable level and, when possible, to avoid the crisis actually happening. Operational preparedness includes strategic advanced planning, training and simulation to ensure availability, rapid mobilisation and deployment of resources to deal with possible emergencies. The management of a declared crisis is the response to--including the evacuation, search and rescue--and the recovery from the crisis, by minimising the effects of the crisis, limiting the impact on the community and environment and, on a longer term, by bringing the community's systems back to normal. In this paper, we focus on the response part of the management of declared crisis activity, and particularly on the evacuation of the injured people in disaster situations.

When a crisis is declared, the plans defined during the operational preparedness activity are executed. For disasters, master plans are executed. These plans are elaborated by the authorities with the collaboration of civil protection agencies, police, health services, non-governmental organizations, etc. When a victim is found, several actions follow. First, a rescue party is assigned to the victim, who is examined and given first aid on the spot. Then, the victims can be placed in an emergency centre on the ground called the medical advanced post. For all victims, a sorter physician--generally a hospital physician--examines the seriousness of their injuries and classifies the victims by pathology. The evacuation by emergency health transport, if necessary, can take place after these clinical examinations and classifications.

Nowadays, to evacuate the injured people, the physicians contact the emergency call centre to pass on the medical assessments of the most urgent cases. The emergency call centre then searches for available and appropriate spaces in the hospitals to care for these victims. The physicians are informed of the allocations, so they can proceed to the evacuations, choosing the emergency health transports according to the pathologies and the transport modes provided. In this context, we can observe that the evacuation is based on three important elements: the examination and classification of the victims, the search for an allocation, and the transport. In the case of the 11 March 2004 Madrid attacks, for instance, some injured people did not receive the appropriate health care because, during the search for space, the emergency call centre did not consider the transport constraints and, in particular, the traffic. Therefore, for a large-scale crisis management problem, there is a need to support the emergency call centre and the physicians in the dispatching, to take into account the hospitals' and the transport constraints and availabilities.

3. PROPOSED APPROACH

To accept a proposal, an agent has to consider several issues such as, in the case of the crisis management problem, the availabilities in terms of number of beds by unit, medical and surgical staffs, theatres and so on. Therefore, each agent has its own preferences in
correlation with its resource constraints and other decision criteria such as, for the case study, the level of congestion of a hospital. All the agents also make decisions by taking into account the dependencies between these decision criteria.

The first hypothesis of our approach is that there are several parties involved in and impacted by the decision, and so they have to decide together according to their own constraints and decision criteria. Negotiation is the process by which a group facing a conflict communicates with one another to try and come to a mutually acceptable agreement or decision, and so the agents have to negotiate. The conflict we have to resolve is finding an acceptable solution for all the parties by using a particular protocol. In our context, multilateral negotiation is the protocol type best suited for this kind of problem: it enables the hospitals and the physicians to negotiate together. The negotiation also deals with multiple issues. Moreover, another hypothesis is that we are in a cooperative context where all the parties have a common objective, which is to provide the best possible solution for everyone. This implies the use of a negotiation protocol encouraging the parties involved to cooperate while satisfying their preferences. Taking into account these aspects, a Multi-Agent System (MAS) seems to be a reliable method in the case of a distributed decision making process. Indeed, a MAS is a suitable answer when the solution has to combine, at least, distribution features and reasoning capabilities. Another motivation for using a MAS lies in the fact that MAS are well known for facilitating automated negotiation at the operative decision making level in various applications. Therefore, our approach consists of solving a multiparty decision problem using a MAS with:

• The preferences of the agents modelled using a multi-criteria decision aid tool, MYRIAD, also enabling us to consider multi-issue problems by evaluating proposals on several criteria.
• A cooperation-based multilateral and multi-issue negotiation protocol.

4. THE PREFERENCE MODEL

We consider a problem where an agent has several decision criteria, a set Nk = {1,..., nk} of criteria for each agent k involved in the negotiation protocol. These decision criteria enable the agents to evaluate the set of issues that are negotiated. The issues correspond, directly or not, to the decision criteria. However, for the example of the crisis management problem, the issues are the set of victims to dispatch between the hospitals. These issues are translated to decision criteria enabling the hospital to evaluate its congestion, and so to an updated number of available beds, medical teams and so on.

944 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

In order to take into account the complexity that exists between the criteria/issues, we use a multi-criteria decision aiding (MCDA) tool named MYRIAD [12], developed at Thales for MCDA applications, based on a two-additive Choquet integral, which is a good compromise between versatility and ease to understand and model the interactions between decision criteria [6].

The set of the attributes of Nk is denoted by Xk1,..., Xknk. All the attributes are made commensurate thanks to the introduction of partial utility functions uki: Xki → [0, 1]. The [0, 1] scale depicts the satisfaction of agent k regarding the values of the attributes. An option x is identified with an element of Xk = Xk1 × · · · × Xknk, with x = (x1,..., xnk). Then the overall assessment of x is given by

  Uk(x) = Hk(uk1(x1),..., uknk(xnk))

where Hk: [0, 1]^nk → [0, 1] is the aggregation function. The overall preference relation ≿ over Xk is then x ≿ y iff Uk(x) ≥ Uk(y). For a two-additive Choquet integral, with z = (uk1(x1),..., uknk(xnk)),

  Hk(z) = Σ_{Iki,j>0} Iki,j (zi ∧ zj) + Σ_{Iki,j<0} |Iki,j| (zi ∨ zj) + Σ_{i∈Nk} (vki − (1/2) Σ_{j≠i} |Iki,j|) zi

where vki is the relative importance of criterion i for agent k and Iki,j is the interaction between criteria i and j, and ∧ and ∨ denote the min and max functions
respectively. Assume that zi < zj. A positive interaction between criteria i and j depicts complementarity between these criteria (positive synergy) [7]. Hence, the lower score of z on criterion i conceals the positive effect of the better score on criterion j, to a larger extent on the overall evaluation than the impact of the relative importance of the criteria taken independently of the other ones. In other words, the score of z on criterion j is penalized by the lower score on criterion i. Conversely, a negative interaction between criteria i and j depicts substitutability between these criteria (negative synergy) [7]. The score of z on criterion i is then saved by a better score on criterion j.

In MYRIAD, we can also obtain some recommendations, corresponding to an indicator ωC (Hk, x) measuring the worth of improving option x w.r.t. Hk on some criteria C ⊆ Nk, as follows:

  ωC (Hk, x) = (Hk(1C, xNk\C) − Hk(x)) / EC (x)

i.e. the average improvement of Hk when the criteria of coalition C range from xC to 1C, divided by the average effort needed for this improvement. We generally assume that EC is of order 1, that is EC (τ, x) = τ · Σ_{i∈C} (1 − xi). The expression of ωC (Hk, x) when Hk is a Choquet integral is given in [11]. The agent is then recommended to improve the criteria of the coalition C for which ωC (Hk, x) is maximum. This recommendation is very useful in a negotiation protocol since it helps the agents to know what to do if they want an offer to be accepted, while not revealing their own preference model.

5. PROTOCOL MOTIVATIONS

For multi-issue problems, there are two approaches: a complete package approach, where the issues are negotiated simultaneously, as opposed to the sequential approach, where the issues are negotiated one by one. When the issues are dependent, it is best to bargain simultaneously over all issues [5]. Thus, the complete package is the adopted approach, so that an offer will be on the overall set of injured people while taking into account the other decision criteria.
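To make the aggregation of Section 4 concrete, the two-additive Choquet integral and the worth-to-improve indicator ωC can be sketched in Python (a minimal sketch; the criteria, importance and interaction values are hypothetical examples, and this is not MYRIAD's implementation):

```python
# Minimal sketch of a two-additive Choquet aggregation H_k and the
# worth-to-improve indicator w_C, following the formulas of Section 4.
# The importance/interaction values used at the bottom are hypothetical.

def choquet_2add(z, v, I):
    """Two-additive Choquet integral.
    z: list of partial utilities in [0, 1]
    v: list of relative importances v_i
    I: dict {(i, j): I_ij} with i < j, interactions between criteria
    """
    total = 0.0
    for (i, j), inter in I.items():
        if inter > 0:                      # complementarity: min(z_i, z_j)
            total += inter * min(z[i], z[j])
        else:                              # substitutability: max(z_i, z_j)
            total += -inter * max(z[i], z[j])
    for i, zi in enumerate(z):
        penalty = 0.5 * sum(abs(inter) for (a, b), inter in I.items() if i in (a, b))
        total += (v[i] - penalty) * zi     # linear part
    return total

def worth_to_improve(z, v, I, C):
    """w_C(H, z): average improvement when the criteria of C are raised
    to 1, divided by the average effort sum_{i in C} (1 - z_i)."""
    effort = sum(1.0 - z[i] for i in C)
    if effort == 0:
        return 0.0                         # nothing left to improve on C
    z_improved = [1.0 if i in C else zi for i, zi in enumerate(z)]
    return (choquet_2add(z_improved, v, I) - choquet_2add(z, v, I)) / effort

# Hypothetical agent with two complementary criteria:
z = [0.5, 1.0]
v = [0.5, 0.5]
I = {(0, 1): 0.2}
print(round(choquet_2add(z, v, I), 6))     # 0.7
print(round(worth_to_improve(z, v, I, {0}), 6))   # 0.6: improving criterion 0 pays off
```

With all interactions at zero this reduces to the weighted sum, which matches the remark that the Choquet integral generalises classical utility functions.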
We have to consider that all the parties of the negotiation process have to agree on the decision, since they are all involved in and impacted by it, and so a unanimous agreement is required in the protocol. In addition, no party can leave the process until an agreement is reached, i.e. a consensus is achieved. This makes sense since a proposal concerns all the parties. Moreover, we have to guarantee the availability of the resources needed by the parties, to ensure that a proposal is realistic. To this end, the information about these availabilities is used to determine admissible proposals, such that an offer cannot be made if one of the parties does not have enough resources to execute/achieve it. At the beginning of the negotiation, each party provides its maximum availabilities, this defining the constraints that have to be satisfied by each offer submitted.

The negotiation also has to converge quickly on a unanimous agreement. We decided to introduce into the negotiation protocol an incentive to cooperate that takes into account the elapsed negotiation time. This incentive is defined on the basis of a time-dependent penalty: a discounting factor, as in [18], or a time-dependent threshold. This penalty has to be used in the accept/reject stage of our consensus procedure. In fact, in the case of a discounting factor, each party will accept or reject an offer by evaluating the proposal using its utility function reduced by the discounting factor. In the case of a time-dependent threshold, if the evaluation is greater than or equal to this threshold, the offer is accepted; otherwise, in the next period, the threshold is reduced.

The use of a penalty alone is not enough, since it does not help in finding a solution. Some information about the assessments of the parties involved in the negotiation is needed. In particular, it would be helpful to know why an offer has been rejected and/or what can be done to make a proposal that would be accepted. MYRIAD provides an analysis that determines the flaws of an option, here a proposal. In particular, it gives this type of information: which criteria of a proposal should be improved so as to reach the highest possible overall evaluation [11]. As we use this tool to model the parties involved in the negotiation, the information about the criteria to improve can be used by the mediator to elaborate the proposals. We also consider that the dual function can be used to take into account another type of information: on which criteria of a proposal no improvement is necessary for the overall evaluation to remain acceptable, i.e. not decrease. Thus, all this information forms constraints to be satisfied as much as possible by the parties to make a new proposal. We are in a cooperative context, and revealing one's opinion on what can be improved is not prohibited; on the contrary, it is useful and recommended here, seeing that it helps in converging on an agreement. Therefore, when one of the parties refuses an offer, some information will be communicated.

Figure 1: An illustration of some system.

In order to facilitate and speed up the negotiation, we introduce a mediator. This specific entity is in charge of making the proposals to the other parties in the system, by taking into account their public constraints (e.g. their availabilities) and the recommendations they make. This mediator can also be considered as the representative of the general interest we can have in some applications; in the crisis management problem, the physician will be the mediator and will also have some more information to consider when making an offer (e.g. traffic state, transport mode and time). Each party in a negotiation N, a negotiator, can also be a mediator of another negotiation N', this party becoming the representative of N' in the negotiation N, as illustrated by fig.
1, which can also help in reducing the communication time.

6. AGENTIFICATION

How the problem is transposed into a MAS problem is a very important aspect when designing such a system. The agentification has an influence upon the system's efficiency in solving the problem. Therefore, in this section, we describe the elements and constraints taken into account during the modelling phase and the model itself. However, for this negotiation application, the modelling is quite natural when one observes the negotiation protocol motivations and main properties.

First of all, it seems obvious that there should be one agent for each player of our multilateral multi-issue negotiation protocol. The agents have the involved parties' information and preferences. These agents are:

• Autonomous: they decide for themselves what, when and under what conditions actions should be performed;
• Rational: they have a means-ends competence to fit their decisions, according to their knowledge, preferences and goal;
• Self-interested: they have their own interests, which may conflict with the interests of other agents.

Moreover, their preferences are modelled, and a proposal evaluated and analysed, using MYRIAD. Each agent has private information and can access public information as knowledge. In fact, there are two types of agents: the mediator type for the agents corresponding to the mediator of our negotiation protocol, the delegated physician in our application, and the negotiator type for the agents corresponding to the other parties, the hospitals.

The main behaviours that an agent of type mediator needs to negotiate in our protocol are the following:

• convert-improvements: converts the information given by the other agents involved in the negotiation about the improvements to be made into constraints on the next proposal to be made;
• convert-no-decrease: converts the information given by the other agents involved in the negotiation about the points that should not be changed into constraints on the next proposal to be made;
• construct-proposal: constructs a new proposal according to the constraints obtained with convert-improvements, convert-no-decrease and the agent preferences.

The main behaviours that an agent of type negotiator needs to negotiate in our protocol are the following:

• convert-proposal: converts a proposal to a MYRIAD option of the agent according to its preference model and its private data;
• convert-improvements-wc: converts the agent recommendations for the improvements of a MYRIAD option into general information on the proposal;
• convert-no-decrease-wc: converts the agent recommendations about the criteria that should not be changed in the MYRIAD option into general information on the proposal.

In addition to these behaviours, there are, for the two types of agents, access behaviours to MYRIAD functionalities such as the evaluation and improvement functions:

• evaluate-option: evaluates the MYRIAD option obtained using the agent behaviour convert-proposal;
• improvements: gets the agent recommendations to improve a proposal from the MYRIAD option;
• no-decrease: gets the agent recommendations not to change some criteria from the MYRIAD option.

Of course, before running the system with such agents, we must have defined each party's preference model in MYRIAD. This model has to be part of the agent so that it can be used to make the assessments and to retrieve the improvements. In addition to these behaviours, the communication acts between the agents are as follows.

1. mediator agent communication acts:
Figure 2: The protocol diagram in AUML, where m is the number of negotiator agents and l is the number of agents refusing the current proposal.

(a) propose: sends a message containing a proposal to all negotiator agents;
(b) inform: sends a message to all negotiator agents to inform them that an agreement has been reached, containing the consensus outcome.

2. negotiator agent communication acts:

(a) accept-proposal: sends a message to the mediator agent containing the agent recommendations about the criteria that should not be changed, obtained with convert-no-decrease-wc;
(b) reject-proposal: sends a message to the mediator agent containing the agent recommendations to improve the proposal, obtained with convert-improvements-wc.

Such agents are interchangeable, in case of failure, since they all have the same properties and represent a user with his preference model, which depends not on the agent but on the model defined in MYRIAD. When the issues and the decision criteria are different from each other, the information about the criteria improvement has to be pre-processed to give some instructions on the directions to take and about the negotiated issues. It is the same for the evaluation of a proposal: each agent has to convert the information about the issues to update its private information and to obtain the values of each attribute of the decision criteria.

7. OUR PROTOCOL

Formally, we consider negotiations where a set of players A = {1, 2,..., m} and a player a are negotiating over a set Q of size q. The player a is the protocol mediator, the mediator agent of the agentification. The utility/preference function of a player k ∈ A ∪ {a} is Uk, defined using MYRIAD, as presented in Section 4, with a set Nk of criteria, Xk the option set, and so on. An offer is a vector P = (P1, P2, · · ·, Pm), a partition of Q, in which Pk is player k's share of Q. We have P ∈ P, where P is the set of admissible proposals, a finite set. Note that P is determined using all players' general constraints on the proposals and Q. Moreover, let P̃ denote a particular proposal defined as a's preferred proposal. We also have the following notation: δk is the threshold decrease factor of player k, Φk: Pk → Xk is player k's function to convert a proposal to an option, and Ψk is the function indicating on which points P has to be improved, with Ψ̄k its dual function--on which points no improvement is necessary. Ψ̄k is obtained using the dual function of ωC (Hk, x).

In period t of our consensus procedure, player a proposes an agreement P. All players k ∈ A respond to a by accepting or rejecting P. The responses are made simultaneously. If all players k ∈ A accept the offer, the game ends. If any player k rejects P, then the next period t + 1 begins: player a makes another proposal P' by taking into account the information provided by the players, and the ones that have rejected P apply a penalty. Therefore, our negotiation protocol can be stated as follows:

Protocol P1.
• At the beginning, we set period t = 0.
• a makes a proposal P ∈ P that has not been proposed before.
• Wait until all players of A give their opinion, Yes or No, to the player a. If all players agree on P, the latter is chosen. Otherwise t is incremented and we go back to the previous point.
• If there is no offer left from P, the default offer P̃ will be chosen.
• The utility of players regarding a given offer decreases over time. More precisely, the utility of player k ∈ A at period t regarding offer P is Uk (Φk (Pk), t) = ft (Uk (Φk (Pk))), where one can take for instance ft (x) = x · (δk)^t or ft (x) = x − δk · t as penalty function.

Lemma 1. Protocol P1 has at least one subgame perfect equilibrium.¹

¹ A subgame perfect equilibrium is an equilibrium such that players' strategies constitute a Nash equilibrium in every subgame of the original game [18, 16]. A Nash equilibrium is a set of strategies, one for each player, such that no player has an incentive to unilaterally change his/her action [15].

Proof: Protocol P1 is first transformed into a game in extensive form. To this end, one shall specify the order in which the responders A react to the offer P of a. However, the order in which the players answer has no influence on the course of the game, and in particular on their personal utility. Hence protocol P1 is strictly equivalent to a game in extensive form, considering any order of the players A. This game is clearly finite since P is finite and each offer can only be proposed once. Finally, P1 corresponds to a game with perfect information. We end the proof by using a classical result stating that any finite game in extensive form with perfect information has at least one subgame perfect equilibrium (see e.g.
[16]). Rational players (in the sense of von Neumann and Morgenstern) involved in protocol P1 will necessarily come up with a subgame perfect equilibrium.

Consider an example with three admissible offers P1, P2 and P3, with given assessment values for a and for players 1 and 2. It is easy to see that there is one single subgame perfect equilibrium for protocol P1 corresponding to these values. This equilibrium consists of the following choices: first a proposes P3; player 1 rejects this offer; a then proposes P2 and both players 1 and 2 accept, otherwise they are threatened with receiving the offer P1, the worst one for them. Finally, offer P2 is chosen. Option P1 is the best one for a, but the two other players vetoed it. It is interesting to point out that, even though a prefers P2 to P3, offer P3 is proposed first and this makes P2 accepted. If a proposes P2 first, then the subgame perfect equilibrium in this situation is P3. To sum up, the less preferred options have to be proposed first in order to finally get the best one. But this entails a waste of time.

Analysing the previous example, one sees that the game outcome at the equilibrium is P2, which is not very attractive for player 2. Option P3 seems more balanced since no player judges it badly. It could be seen as a better solution, as a consensus among the agents. In order to introduce this notion of balance into the protocol, we introduce a condition under which a player will be obliged to accept the proposal, reducing the autonomy of the agents but increasing rationality and cooperation. More precisely, if the utility of a player is larger than a given threshold, then acceptance is required. The threshold decreases over time, so that players have to make more and more concessions. Therefore, the protocol becomes as follows.

Protocol P2.
• At the beginning we set period t = 0.
• a makes a proposal P ∈ P that has not been proposed before.
• Wait until all players of A give their opinion, Yes or No, to the player a. A player k must accept the offer if Uk (Φk (Pk)) ≥ ρk (t), where ρk (t) tends to zero when t grows. Moreover, there exists T such that for all t ≥ T, ρk (t) = 0. If all players agree on P, the latter is chosen. Otherwise t is incremented and we go back to the previous point.
• If there is no offer left from P, the default offer P̃ will be chosen.

One can show exactly as in Lemma 1 that protocol P2 has at least one subgame perfect equilibrium. We expect that protocol P2 provides a solution not too far from the maximin solution, so it favours fairness among the players. Therefore, our cooperation-based multilateral multi-issue protocol is the following:

Protocol P.
• At the beginning we set period t = 0.
• a makes a proposal P ∈ P that has not been proposed before, considering Ψk (Pt) and Ψ̄k (Pt) for all players k ∈ A.
• Wait until all players of A give their opinion, (Yes, Ψ̄k (Pt)) or (No, Ψk (Pt)), to the player a. A player k must accept the offer if Uk (Φk (Pk)) ≥ ρk (t), where ρk (t) tends to zero when t grows. Moreover, there exists T such that for all t ≥ T, ρk (t) = 0. If all players agree on P, the latter is chosen. Otherwise t is incremented and we go back to the previous point.
• If there is no offer left from P, the default offer P̃ will be chosen.

8. EXPERIMENTS

We developed a MAS using the widely used JADE agent platform [1]. This MAS is designed to be as general as possible (e.g. a general framework to specialise according to the application) and enables us to make some preliminary experiments.
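The threshold-based accept/reject loop of Protocol P2, which these experiments exercise, can be sketched as a small simulation (a minimal sketch, not the JADE implementation; the utilities, the proposal order and the concrete threshold ρk(t) = max(0, ρ0 − δ·t) are all hypothetical choices):

```python
# Minimal sketch of the consensus loop of Protocol P2: the mediator
# proposes each admissible offer once; player k must accept offer P at
# period t if U_k(P) >= rho_k(t), with rho_k(t) -> 0 after finitely many
# periods. All numbers below are hypothetical.

def rho(rho0, delta, t):
    """Time-decreasing acceptance threshold (zero after rho0/delta periods)."""
    return max(0.0, rho0 - delta * t)

def run_protocol(admissible, default, utilities, rho0=0.7, delta=0.4):
    """Return (agreed offer, final period). Falls back to the mediator's
    default offer (the paper's P~) when no admissible proposal is left."""
    t = 0
    for P in admissible:                   # each offer proposed at most once
        threshold = rho(rho0, delta, t)
        if all(u[P] >= threshold for u in utilities):
            return P, t                    # unanimous agreement
        t += 1
    return default, t

# Two negotiators with opposed preferences over two admissible offers:
utilities = [{"A": 0.9, "B": 0.4},         # player 1
             {"A": 0.3, "B": 0.8}]         # player 2
offer, period = run_protocol(["A", "B"], "A", utilities)
print(offer, period)                       # B 1
```

At t = 0 the threshold 0.7 makes player 2 reject A; at t = 1 the threshold has dropped to 0.3, so both players must accept B, illustrating how the decreasing threshold forces concessions.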
The experiments aim at verifying that our approach gives solutions as close as possible to the maximin solution, in a small number of rounds and hopefully in a short time, since our context is highly cooperative.

We defined the two types of agents and their behaviours as introduced in Section 6. The agents and their behaviours correspond to the main classes of our prototype: NegotiatorAgent and NegotiatorBehaviour for the negotiator agents, and MediatorAgent and MediatorBehaviour for the mediator agent. These classes extend JADE classes and integrate MYRIAD into the agents, reducing the amount of communication in the system. Some application-dependent functionalities have to be implemented by extending these classes. In particular, all conversion parts of the agents have to be specified according to the application since, to convert a proposal into decision criteria, we need to know, first, this model and the correlations between the proposals and this model.

First, to illustrate our protocol, we present a simple example of our dispatch problem. In this example, we have three hospitals, H1, H2 and H3. Each hospital can receive victims having a particular pathology, in such a way that H1 can receive patients with the pathology burn, surgery or orthopedic, H2 can receive patients with the pathology surgery, orthopedic or cardiology, and H3 can receive patients with the pathology burn or cardiology. All the hospitals have similar decision criteria reflecting their preferences on the level of congestion they can face, for the overall hospital and the different services available, as briefly explained for hospital H1 hereafter.

For hospital H1, the preference model, fig. 3, is composed of five criteria. These criteria correspond to the preferences on the pathologies the hospital can treat.

Figure 3: The H1 preference model in MYRIAD.

In the case of the pathology burn, the corresponding criterion, also named burn as shown in fig. 3, represents the preferences of H1 according to the value of Cburn, which is the current capacity for burn. Therefore, the utility function of this criterion represents a preference such that the more patients of this pathology there are in the hospital, the less the hospital may satisfy them, and this with an initial capacity. In addition to reflecting this kind of viewpoint, the aggregation function as defined in MYRIAD introduces a veto on the criteria burn, surgery, orthopedic and EReceipt, where EReceipt is the criterion for the preferences about the capacity to receive a number of patients at the same time.

In this simplified example, the physician has no particular preferences on the dispatch and the mediator agent chooses a proposal randomly in a subset of the set of admissibility. This subset has to satisfy as much as possible the recommendations made by the hospitals. To solve this problem, for this example, we decided to solve a linear problem with the availability constraints and the recommendations as linear constraints on the dispatch values. The set of admissibility is then obtained by solving this linear problem with Prolog. Moreover, only the recommendations on how to improve a proposal are taken into account. The problem to solve is then to dispatch to hospitals H1, H2 and H3 the set of victims composed of 5 victims with the pathology burn, 10 with surgery, 3 with orthopedic and 7 with cardiology. The availabilities of the hospitals are as presented in the following table. We obtain a multiagent system with the mediator agent and three agents of type negotiator for the three
hospitals in the problem.\nThe hospitals' thresholds are fixed approximately at the level where an evaluation is considered good.\nTo start, the negotiator agents send their availabilities.\nThe mediator agent makes a proposal chosen randomly from the admissible set obtained with these availabilities as linear constraints.\nThis proposal is the vector P0 = [[H1, burn, 3], [H1, surgery, 8], [H1, orthopaedic, 0], [H2, surgery, 2], [H2, orthopaedic, 3], [H2, cardiology, 6], [H3, burn, 2], [H3, cardiology, 1]] and the mediator sends propose (P0) to H1, H2 and H3 for approval.\nEach negotiator agent evaluates this proposal and answers back by accepting or rejecting P0: \u2022 Agent H1 rejects this offer since its evaluation is very far from the threshold (0.29, a bad score) and gives a recommendation to improve burn and surgery by sending the message reject_proposal ([burn, surgery]);\n\u2022 Agent H2 accepts this offer by sending the message accept_proposal (), the proposal evaluation being good; \u2022 Agent H3 accepts P0 by sending the message accept_proposal (), the proposal evaluation being good.\nWith just the recommendations provided by agent H1, the mediator is able to make a new proposal by restricting the values of burn and surgery.\nThe new proposal obtained is then P1 = [[H1, burn, 0], [H1, surgery, 8], [H1, orthopaedic, 1], [H2, surgery, 2], [H2, orthopaedic, 2], [H2, cardiology, 6], [H3, burn, 5], [H3, cardiology, 1]].\nThe mediator sends propose (P1) to the negotiator agents.\nH1, H2 and H3 answer back by sending the message accept_proposal (), P1 being evaluated with a high enough score to be acceptable, and also considered a good proposal when using the explanation function of MYRIAD.\nAn agreement is reached with P1.\nNote that the evaluation of P1 by H3 has decreased in comparison with P0, but not enough to be rejected, and that this solution is the Pareto one, P \u2217.\nOther examples have been tested with the same settings: issues in IN, three
negotiator agents and the same mediator agent, with no preference model but selecting the proposal randomly.\nWe obtained solutions either equal or close to the Maximin solution, the standard deviation of the distance being less than 0.0829, the evaluations not far from the ones obtained with P \u2217, and with fewer than seven proposals made.\nThis shows us that we are able to solve this multi-issue multilateral negotiation problem in a simple and efficient way, with solutions close to the Pareto solution.\n9.\nCONCLUSION AND FUTURE WORK This paper presents a new protocol to address multilateral multi-issue negotiation in a cooperative context.\nThe first main contribution is that we take into account complex inter-dependencies between multiple issues through the use of a complex preference model.\nThis contribution is reinforced by the use of multi-issue negotiation in a multilateral context.\nOur second contribution is the use of sharp recommendations in the protocol, which help to accelerate the search for a consensus between the cooperative agents and to find an optimal solution.\nWe have also shown that the protocol has subgame perfect equilibria and that these equilibria converge to the usual maximum solution.\nMoreover, we tested this protocol in a crisis management context where the aim of the negotiation is to decide where to evacuate a whole set of injured people among predefined hospitals.\nWe have already developed a first MAS, in particular integrating MYRIAD, to test this protocol in order to learn more about its efficiency in terms of solution quality and speed in finding a consensus.\nThis prototype enabled us to solve some examples with our approach, and the results we obtained are encouraging since we quickly obtained good agreements, close to the Pareto solution, in the light of the initial constraints of the problem: the availabilities.\nWe still have to improve our MAS by taking into account the two types of recommendations and by adding a preference model to the mediator of our system.\nMoreover, a comparative study has to be done in order to evaluate the performance of our framework against existing ones and against some variations on the protocol.\n10.\nACKNOWLEDGEMENT\nThis work is partly funded by the ICIS research project under the Dutch BSIK Program (BSIK 03024).","keyphrases":["multi-issu negoti","negoti protocol","model","cooper agent","crisi manag","decis make","multilater negoti","multi-agent system","autom negoti","myriad","negoti strategi","multi-criterion decis make"],"prmu":["P","P","P","P","P","M","R","U","M","U","M","M"]} {"id":"I-48","title":"Normative System Games","abstract":"We develop a model of normative systems in which agents are assumed to have multiple goals of increasing priority, and investigate the computational complexity and game theoretic properties of this model. In the underlying model of normative systems, we use Kripke structures to represent the possible transitions of a multi-agent system. A normative system is then simply a subset of the Kripke structure, which contains the arcs that are forbidden by the normative system. We specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy. Using this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not.
We then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete.","lvl-1":"Normative System Games Thomas \u25e6 Agotnes Dept of Computer Engineering Bergen University College PB.\n2030, N-5020 Bergen Norway tag@hib.no Wiebe van der Hoek Dept of Computer Science University of Liverpool Liverpool L69 7ZF UK wiebe@csc.liv.ac.uk Michael Wooldridge Dept of Computer Science University of Liverpool Liverpool L69 7ZF UK mjw@csc.liv.ac.uk ABSTRACT We develop a model of normative systems in which agents are assumed to have multiple goals of increasing priority, and investigate the computational complexity and game theoretic properties of this model.\nIn the underlying model of normative systems, we use Kripke structures to represent the possible transitions of a multiagent system.\nA normative system is then simply a subset of the Kripke structure, which contains the arcs that are forbidden by the normative system.\nWe specify an agent``s goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy.\nUsing this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not.\nWe then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete.\nCategories and 
Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems; I.2.4 [Knowledge representation formalisms and methods] General Terms Theory 1.\nINTRODUCTION Normative systems, or social laws, have proved to be an attractive approach to coordination in multi-agent systems [13, 14, 10, 15, 1].\nAlthough the various approaches to normative systems proposed in the literature differ on technical details, they all share the same basic intuition that a normative system is a set of constraints on the behaviour of agents in the system; by imposing these constraints, it is hoped that some desirable objective will emerge.\nThe idea of using social laws to coordinate multi-agent systems was proposed by Shoham and Tennenholtz [13, 14]; their approach was extended by van der Hoek et al. to include the idea of specifying a desirable global objective for a social law as a logical formula, with the idea being that the normative system would be regarded as successful if, after implementing it (i.e., after eliminating all forbidden actions), the objective formula was guaranteed to be satisfied in the system [15].\nHowever, this model did not take into account the preferences of individual agents, and hence neglected to account for possible strategic behaviour by agents when deciding whether to comply with the normative system or not.\nThis model of normative systems was further extended by attributing to each agent a single goal in [16].\nHowever, this model was still too impoverished to capture the kinds of decision making that take place when an agent decides whether or not to comply with a social law.\nIn reality, strategic considerations come into play: an agent takes into account not just whether the normative system would be beneficial for itself, but also whether other agents will rationally choose to participate.\nIn this paper, we develop a model of normative systems in which agents are assumed to have multiple goals, of increasing priority.\nWe 
specify an agent``s goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures [8]: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy.\nUsing this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not.\nWe thus provide a very natural bridge between logical structures and languages and the techniques and concepts of game theory, which have proved to be very powerful for analysing social contract-style scenarios such as normative systems [3, 4].\nWe then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete.\n2.\nKRIPKE STRUCTURES AND CTL We use Kripke structures as our basic semantic model for multiagent systems [8].\nA Kripke structure is essentially a directed graph, with the vertex set S corresponding to possible states of the system being modelled, and the relation R \u2286 S \u00d7 S capturing the possible transitions of the system; intuitively, these transitions are caused by agents in the system performing actions, although we do not include such actions in our semantic model (see, e.g., [13, 2, 15] for related models which include actions as first class citizens).\nWe let S0 denote the set of possible initial states of the system.\nOur model is intended to correspond to the well-known interleaved concurrency model from the reactive systems literature: thus an arc corresponds to the execution of an atomic action by one of the processes in the system, which we call
agents.\nIt is important to note that, in contrast to such models as [2, 15], we are therefore here not modelling synchronous action.\nThis assumption is not in fact essential for our analysis, but it greatly simplifies the presentation.\nHowever, we find it convenient to include within our model the agents that cause transitions.\nWe therefore assume a set A of agents, and we label each transition in R with the agent that causes the transition via a function \u03b1 : R \u2192 A. Finally, we use a vocabulary \u03a6 = {p, q, ...} of Boolean variables to express the properties of individual states S: we use a function V : S \u2192 2\u03a6 to label each state with the Boolean variables true (or satisfied) in that state.\nCollecting these components together, an agent-labelled Kripke structure (over \u03a6) is a 6-tuple: K = S, S0 , R, A, \u03b1, V , where: \u2022 S is a finite, non-empty set of states, \u2022 S0 \u2286 S (S0 = \u2205) is the set of initial states; \u2022 R \u2286 S \u00d7 S is a total binary relation on S, which we refer to as the transition relation1 ; \u2022 A = {1, ... , n} is a set of agents; \u2022 \u03b1 : R \u2192 A labels each transition in R with an agent; and \u2022 V : S \u2192 2\u03a6 labels each state with the set of propositional variables true in that state.\nIn the interests of brevity, we shall hereafter refer to an agentlabelled Kripke structure simply as a Kripke structure.\nA path over a transition relation R is an infinite sequence of states \u03c0 = s0, s1, ... 
which must satisfy the property that \u2200u \u2208 N: (su , su+1) \u2208 R.\nIf u \u2208 N, then we denote by \u03c0[u] the component indexed by u in \u03c0 (thus \u03c0[0] denotes the first element, \u03c0[1] the second, and so on).\nA path \u03c0 such that \u03c0[0] = s is an s-path.\nLet \u03a0R(s) denote the set of s-paths over R; since it will usually be clear from context, we often omit reference to R, and simply write \u03a0(s).\nWe will sometimes refer to and think of an s-path as a possible computation, or system evolution, from s. EXAMPLE 1.\nOur running example is of a system with a single non-sharable resource, which is desired by two agents.\nConsider the Kripke structure depicted in Figure 1.\nWe have two states, s and t, and two corresponding Boolean variables p1 and p2, which are 1 In the branching time temporal logic literature, a relation R \u2286 S \u00d7 S is said to be total iff \u2200s \u2203s : (s, s ) \u2208 R. Note that the term total relation is sometimes used to refer to relations R \u2286 S \u00d7 S such that for every pair of elements s, s \u2208 S we have either (s, s ) \u2208 R or (s , s) \u2208 R; we are not using the term in this way here.\nIt is also worth noting that for some domains, other constraints may be more appropriate than simple totality.\nFor example, one might consider the agent totality requirement, that in every state, every agent has at least one possible transition available: \u2200s\u2200i \u2208 A\u2203s : (s, s ) \u2208 R and \u03b1(s, s ) = i. 
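The components just listed can be collected into a small data structure. The sketch below is my own Python rendering of an agent-labelled Kripke structure, under stated assumptions: the class name `Kripke` is mine, and the agent labels for the running example's transitions (agent 1 acting in state s, agent 2 in state t) are my reading of Figure 1 rather than given explicitly in the text.

```python
from dataclasses import dataclass

@dataclass
class Kripke:
    """Agent-labelled Kripke structure K = <S, S0, R, A, alpha, V> (sketch)."""
    S: set       # finite, non-empty set of states
    S0: set      # non-empty set of initial states, subset of S
    R: set       # transition relation: set of (s, s') pairs
    A: set       # agents
    alpha: dict  # maps each transition in R to the agent causing it
    V: dict      # maps each state to the set of Boolean variables true there

    def is_total(self) -> bool:
        # totality: every state has at least one outgoing transition
        return all(any((s, t) in self.R for t in self.S) for s in self.S)

    def check(self) -> None:
        assert self.S0 and self.S0 <= self.S, "S0 must be a non-empty subset of S"
        assert self.is_total(), "R must be total"
        assert set(self.alpha) == self.R, "alpha must label exactly the arcs of R"

# Running example: two states s, t; each agent may keep or give away the resource.
K = Kripke(
    S={"s", "t"},
    S0={"s", "t"},
    R={("s", "s"), ("s", "t"), ("t", "s"), ("t", "t")},
    A={1, 2},
    alpha={("s", "s"): 1, ("s", "t"): 1, ("t", "t"): 2, ("t", "s"): 2},
    V={"s": {"p1"}, "t": {"p2"}},
)
K.check()
```

The `check` method enforces the reasonableness conditions stated above; dropping either of state t's outgoing arcs would make `is_total` fail.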
Figure 1: The resource control running example.\nmutually exclusive.\nThink of pi as meaning agent i currently has control over the resource.\nEach agent has two possible actions, when in possession of the resource: either give it away, or keep it.\nObviously there are infinitely many different s-paths and t-paths.\nLet us say that our set of initial states S0 equals {s, t}, i.e., we don``t make any assumptions about who initially has control over the resource.\n2.1 CTL We now define Computation Tree Logic (CTL), a branching time temporal logic intended for representing the properties of Kripke structures [8].\nNote that since CTL is well known and widely documented in the literature, our presentation, though complete, will be somewhat terse.\nWe will use CTL to express agents'' goals.\nThe syntax of CTL is defined by the following grammar: \u03d5 ::= \u22a4 | p | \u00ac\u03d5 | \u03d5 \u2228 \u03d5 | E f\u03d5 | E(\u03d5 U \u03d5) | A f\u03d5 | A(\u03d5 U \u03d5) where p \u2208 \u03a6.\nWe denote the set of CTL formulae over \u03a6 by L\u03a6; since \u03a6 is understood, we usually omit reference to it.\nThe semantics of CTL are given with respect to the satisfaction relation |=, which holds between pairs of the form K, s, (where K is a Kripke structure and s is a state in K), and formulae of the language.\nThe satisfaction relation is defined as follows: K, s |= \u22a4; K, s |= p iff p \u2208 V (s) (where p \u2208 \u03a6); K, s |= \u00ac\u03d5 iff not K, s |= \u03d5; K, s |= \u03d5 \u2228 \u03c8 iff K, s |= \u03d5 or K, s |= \u03c8; K, s |= A f\u03d5 iff \u2200\u03c0 \u2208 \u03a0(s) : K, \u03c0[1] |= \u03d5; K, s |= E f\u03d5 iff \u2203\u03c0 \u2208 \u03a0(s) : K, \u03c0[1] |= \u03d5; K, s |= A(\u03d5 U \u03c8) iff \u2200\u03c0 \u2208 \u03a0(s), \u2203u \u2208 N, s.t. K, \u03c0[u] |= \u03c8 and \u2200v, (0 \u2264 v < u) : K, \u03c0[v] |= \u03d5; K, s |= E(\u03d5 U \u03c8) iff \u2203\u03c0 \u2208 \u03a0(s), \u2203u \u2208 N, s.t.
K, \u03c0[u] |= \u03c8 and \u2200v, (0 \u2264 v < u) : K, \u03c0[v] |= \u03d5 The remaining classical logic connectives (\u2227, \u2192, \u2194) are assumed to be defined as abbreviations in terms of \u00ac, \u2228, in the conventional manner.\nThe remaining CTL temporal operators are defined: A\u2666\u03d5 \u2261 A(\u22a4 U \u03d5) E\u2666\u03d5 \u2261 E(\u22a4 U \u03d5) A\u25a1\u03d5 \u2261 \u00acE\u2666\u00ac\u03d5 E\u25a1\u03d5 \u2261 \u00acA\u2666\u00ac\u03d5 We say \u03d5 is satisfiable if K, s |= \u03d5 for some Kripke structure K and state s in K; \u03d5 is valid if K, s |= \u03d5 for all Kripke structures K and states s in K.\nThe problem of checking whether K, s |= \u03d5 for given K, s, \u03d5 (model checking) can be done in deterministic polynomial time, while checking whether a given \u03d5 is satisfiable or whether \u03d5 is valid is EXPTIME-complete [8].\nWe write K |= \u03d5 if K, s0 |= \u03d5 for all s0 \u2208 S0 , and |= \u03d5 if K |= \u03d5 for all K.\n3.\nNORMATIVE SYSTEMS For our purposes, a normative system is simply a set of constraints on the behaviour of agents in a system [1].\nMore precisely, a normative system defines, for every possible system transition, whether or not that transition is considered legal.\nDifferent normative systems may differ on whether or not a transition is legal.\nFormally, a normative system \u03b7 (w.r.t.
a Kripke structure K = S, S0 , R, A, \u03b1, V ) is simply a subset of R, such that R \\ \u03b7 is a total relation.\nThe requirement that R\\\u03b7 is total is a reasonableness constraint: it prevents normative systems which lead to states with no successor.\nLet N (R) = {\u03b7 : (\u03b7 \u2286 R) & (R \\ \u03b7 is total)} be the set of normative systems over R.\nThe intended interpretation of a normative system \u03b7 is that (s, s ) \u2208 \u03b7 means transition (s, s ) is forbidden in the context of \u03b7; hence R \\ \u03b7 denotes the legal transitions of \u03b7.\nSince it is assumed \u03b7 is reasonable, we are guaranteed that a legal outward transition exists for every state.\nWe denote the empty normative system by \u03b7\u2205, so \u03b7\u2205 = \u2205.\nNote that the empty normative system \u03b7\u2205 is reasonable with respect to any transition relation R.\nThe effect of implementing a normative system on a Kripke structure is to eliminate from it all transitions that are forbidden according to this normative system (see [15, 1]).\nIf K is a Kripke structure, and \u03b7 is a normative system over K, then K \u2020 \u03b7 denotes the Kripke structure obtained from K by deleting transitions forbidden in \u03b7.\nFormally, if K = S, S0 , R, A, \u03b1, V , and \u03b7 \u2208 N (R), then let K\u2020\u03b7 = K be the Kripke structure K = S , S0 , R , A , \u03b1 , V where: \u2022 S = S , S0 = S0 , A = A , and V = V ; \u2022 R = R \\ \u03b7; and \u2022 \u03b1 is the restriction of \u03b1 to R : \u03b1 (s, s ) = j \u03b1(s, s ) if (s, s ) \u2208 R undefined otherwise.\nNotice that for all K, we have K \u2020 \u03b7\u2205 = K. 
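A minimal sketch of K † η under the definition above. The plain-dictionary representation of K and the function names are my own choices, not the paper's; the example norm mirrors η1 = {(s, s)} from the running example.

```python
# Implementing a normative system: K † eta deletes the forbidden arcs,
# and eta is "reasonable" iff the remaining relation R \ eta is still total.

def is_total(S, R):
    return all(any((s, t) in R for t in S) for s in S)

def is_reasonable(S, R, eta):
    return eta <= R and is_total(S, R - eta)

def implement(K, eta):
    """Return K † eta: forbidden transitions (and their alpha labels) removed."""
    assert is_reasonable(K["S"], K["R"], eta), "eta must leave R \\ eta total"
    legal = K["R"] - eta
    return {
        "S": K["S"], "S0": K["S0"], "A": K["A"], "V": K["V"],
        "R": legal,
        "alpha": {arc: ag for arc, ag in K["alpha"].items() if arc in legal},
    }

# Running example: eta1 = {(s, s)} is reasonable, but forbidding both of
# state t's outgoing arcs is not, since t would then have no successor.
K = {
    "S": {"s", "t"}, "S0": {"s", "t"},
    "R": {("s", "s"), ("s", "t"), ("t", "s"), ("t", "t")},
    "A": {1, 2},
    "alpha": {("s", "s"): 1, ("s", "t"): 1, ("t", "t"): 2, ("t", "s"): 2},
    "V": {"s": {"p1"}, "t": {"p2"}},
}
K1 = implement(K, {("s", "s")})
assert K1["R"] == {("s", "t"), ("t", "s"), ("t", "t")}
assert implement(K, set())["R"] == K["R"]   # K † eta_empty = K
assert not is_reasonable(K["S"], K["R"], {("t", "s"), ("t", "t")})
```

The final assertion illustrates the reasonableness constraint: a norm that strands a state with no legal outward transition is rejected.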
EXAMPLE 1.\n(continued) When thinking in terms of fairness, it seems natural to consider normative systems \u03b7 that contain (s, s) or (t, t).\nA normative system with (s, t) would not be fair, in the sense that A\u2666A\u25a1\u00acp1 \u2228 A\u2666A\u25a1\u00acp2 holds: in all paths, from some moment on, one agent will have control forever.\nLet us, for later reference, fix \u03b71 = {(s, s)}, \u03b72 = {(t, t)}, and \u03b73 = {(s, s), (t, t)}.\nLater, we will address the issue of whether or not agents should rationally choose to comply with a particular normative system.\nIn this context, it is useful to define operators on normative systems which correspond to groups of agents defecting from the normative system.\nFormally, let K = S, S0 ,R, A,\u03b1, V be a Kripke structure, let C \u2286 A be a set of agents over K, and let \u03b7 be a normative system over K. Then: \u2022 \u03b7 \u21be C denotes the normative system that is the same as \u03b7 except that it only contains the arcs of \u03b7 that correspond to the actions of agents in C.\nWe call \u03b7 \u21be C the restriction of \u03b7 to C, and it is defined as: \u03b7 \u21be C = {(s, s ) : (s, s ) \u2208 \u03b7 & \u03b1(s, s ) \u2208 C}.\nThus K \u2020 (\u03b7 \u21be C) is the Kripke structure that results if only the agents in C choose to comply with the normative system.\n\u2022 \u03b7 \u21c2 C denotes the normative system that is the same as \u03b7 except that it only contains the arcs of \u03b7 that do not correspond to actions of agents in C.\nWe call \u03b7 \u21c2 C the exclusion of C from \u03b7, and it is defined as: \u03b7 \u21c2 C = {(s, s ) : (s, s ) \u2208 \u03b7 & \u03b1(s, s ) \u2209 C}.\nThus K \u2020 (\u03b7 \u21c2 C) is the Kripke structure that results if only the agents in C choose not to comply with the normative system (i.e., the only ones who comply are those in A \\ C).\nNote that we have \u03b7 \u21be C = \u03b7 \u21c2 (A\\C) and \u03b7 \u21c2 C = \u03b7 \u21be (A\\C).\nEXAMPLE 1.\n(Continued) We have \u03b71 \u21be {1} = \u03b71 = {(s, s)}, while \u03b71 \u21c2 {1} = \u03b7\u2205 =
\u03b71 {2}.\nSimilarly, we have \u03b73 {1} = {(s, s)} and \u03b73 {1} = {(t, t)}.\n4.\nGOALS AND UTILITIES Next, we want to be able to capture the goals that agents have, as these will drive an agent``s strategic considerations - particularly, as we will see, considerations about whether or not to comply with a normative system.\nWe will model an agent``s goals as a prioritised list of CTL formulae, representing increasingly desired properties that the agent wishes to hold.\nThe intended interpretation of such a goal hierarchy \u03b3i for agent i \u2208 A is that the further up the hierarchy a goal is, the more it is desired by i. Note that we assume that if an agent can achieve a goal at a particular level in its goal hierarchy, then it is unconcerned about goals lower down the hierarchy.\nFormally, a goal hierarchy, \u03b3, (over a Kripke structure K) is a finite, non-empty sequence of CTL formulae \u03b3 = (\u03d50, \u03d51, ... , \u03d5k ) in which, by convention, \u03d50 = .\nWe use a natural number indexing notation to extract the elements of a goal hierarchy, so if \u03b3 = (\u03d50, \u03d51, ... , \u03d5k ) then \u03b3[0] = \u03d50, \u03b3[1] = \u03d51, and so on.\nWe denote the largest index of any element in \u03b3 by |\u03b3|.\nA particular Kripke structure K is said to satisfy a goal at index x in goal hierarchy \u03b3 if K |= \u03b3[x], i.e., if \u03b3[x] is satisfied in all initial states S0 of K.\nAn obvious potential property of goal hierarchies is monotonicity: where goals at higher levels in the hierarchy logically imply those at lower levels in the hierarchy.\nFormally, a goal hierarchy \u03b3 is monotonic if for all x \u2208 {1, ... 
, |\u03b3|} \u2286 N, we have |= \u03b3[x] \u2192 \u03b3[x \u2212 1].\nThe simplest type of monotonic goal hierarchy is where \u03b3[x + 1] = \u03b3[x] \u2227 \u03c8x+1 for some \u03c8x+1, so at each successive level of the hierarchy, we add new constraints to the goal of the previous level.\nAlthough this is a natural property of many goal hierarchies, it is not a property we demand of all goal hierarchies.\nEXAMPLE 1.\n(continued) Suppose the agents have similar, but opposing goals: each agent i wants to keep the resource as often and as long as possible for himself.\nDefine each agent``s goal hierarchy as: \u03b3i = ( \u03d5i 0 = \u22a4, \u03d5i 1 = E\u2666pi , \u03d5i 2 = E\u25a1E\u2666pi , \u03d5i 3 = E\u2666E\u25a1pi , \u03d5i 4 = A\u25a1E\u2666pi , \u03d5i 5 = E\u2666A\u25a1pi , \u03d5i 6 = A\u25a1A\u2666pi , \u03d5i 7 = A\u25a1(A\u2666pi \u2227 E\u25a1pi ), \u03d5i 8 = A\u25a1pi ) The most desired goal of agent i is to, in every computation, always have the resource, pi (this is expressed in \u03d5i 8).\nThanks to our reasonableness constraint, this goal implies \u03d5i 7 which says that, no matter how the computation paths evolve, it will always be that all continuations will hit a point in which pi holds, and, moreover, there is a continuation in which pi always holds.\nGoal \u03d5i 6 is a fairness constraint implied by it.\nNote that A\u2666pi says that every computation eventually reaches a pi state.\nThis may mean that after pi has happened, it will never happen again.\n\u03d5i 6 circumvents this: it says that, no matter where you are, there should be a future pi state.\nThe goal \u03d5i 5 is like the strong goal \u03d5i 8 but it accepts that this is only achieved in some computation, eventually.\n\u03d5i 4 requires that in every path, there is always a continuation that eventually gives pi .\nGoal \u03d5i 3 says that pi should be true on some branch, from some moment on.\nIt implies \u03d5i 2 which expresses that there
is a computation such that everywhere during it, it is possible to choose a continuation that eventually satisfies pi .\nThis implies \u03d5i 1, which says that pi should at least not be impossible.\nIf we even drop that demand, we have the trivial goal \u03d5i 0.\nWe remark that it may seem more natural to express a fairness constraint \u03d5i 6 as A \u2666pi .\nHowever, this is not a proper CTL formula.\nIt is in fact a formula in CTL \u2217 [9], and in this logic, the two expressions would be equivalent.\nHowever, our basic complexity results in the next sections would not hold for the richer language CTL \u22172 , and the price to pay for this is that we have to formulate our desired goals in a somewhat more cumbersome manner than we might ideally like.\nOf course, our basic framework does not demand that goals are expressed in CTL; they could equally well be expressed in CTL \u2217 or indeed ATL [2] (as in [15]).\nWe comment on the implications of alternative goal representations at the conclusion of the next section.\nA multi-agent system collects together a Kripke structure (representing the basic properties of a system under consideration: its state space, and the possible state transitions that may occur in it), together with a goal hierarchy, one for each agent, representing the aspirations of the agents in the system.\nFormally, a multi-agent system, M , is an (n + 1)-tuple: M = K, \u03b31, ... , \u03b3n where K is a Kripke structure, and for each agent i in K, \u03b3i is a goal hierarchy over K. 
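The ordinal utility defined in the next subsection, ui(K) = max{j : 0 ≤ j ≤ |γi| & K |= γi[j]}, can be sketched directly over such a goal hierarchy. In the sketch below the model-checking step is abstracted into a `models(K, phi)` callback; the stub used for the example simply hard-codes which of Example 1's goals hold (the known outcome u1(K) = 4), and is an assumption standing in for a real CTL model checker.

```python
def utility(K, goal_hierarchy, models):
    """Largest index j with K |= goal_hierarchy[j].

    Since gamma[0] is the trivially satisfied goal 'true', the max is
    always well defined and utility(...) >= 0.
    """
    return max(j for j, phi in enumerate(goal_hierarchy) if models(K, phi))

# Example 1 (sketch): goals phi_0..phi_8; goals up to index 4 hold in all
# initial states of K, while phi_5 and above do not, so u_i(K) = 4.
goals = [f"phi_{j}" for j in range(9)]
holds = {f"phi_{j}" for j in range(5)}  # stub in place of CTL model checking
u = utility("K", goals, lambda K, phi: phi in holds)
assert u == 4
```

Note that the function only reports an index, matching the ordinal reading in the text: it ranks Kripke structures for one agent, but the values are not comparable across agents.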
4.1 The Utility of Normative Systems We can now define the utility of a Kripke structure for an agent.\nThe idea is that the utility of a Kripke structure is the highest index of any goal that is guaranteed for that agent in the Kripke structure.\nWe make this precise in the function ui (\u00b7): ui (K) = max{j : 0 \u2264 j \u2264 |\u03b3i | & K |= \u03b3i [j ]} Note that using these definitions of goals and utility, it never makes sense to have a goal \u03d5 at index n if there is a logically weaker goal \u03c8 at index n + k in the hierarchy: by definition of utility, it could never be n for any structure K. EXAMPLE 1.\n(continued) Let M = K, \u03b31, \u03b32 be the multiagent system of Figure 1, with \u03b31 and \u03b32 as defined earlier in this example.\nRecall that we have defined S0 as {s, t}.\nThen, u1(K) = u2(K) = 4: goal \u03d54 is true in S0 , but \u03d55 is not.\nTo see that \u03d52 4 = A\u25a1E\u2666p2 is true in s for instance: note that on every path it is always the case that there is a transition to t, in which p2 is true.\nNotice that since for any goal hierarchy \u03b3i we have \u03b3[0] = \u22a4, then for all Kripke structures, ui (K) is well defined, with ui (K) \u2265 0.\n2 CTL \u2217 model checking is PSPACE-complete, and hence much worse (under standard complexity theoretic assumptions) than model checking CTL [8].\n\u03b7 \u03b41(K, \u03b7) \u03b42(K, \u03b7)\n\u03b7\u2205 0 0\n\u03b71 0 3\n\u03b72 3 0\n\u03b73 2 2\nC D\nC (2, 2) (0, 3)\nD (3, 0) (0, 0)\nFigure 2: Benefits of implementing a normative system \u03b7 (left) and pay-offs for the game \u03a3M .\nNote that this is an ordinal utility measure: it tells us, for any given agent, the relative utility of different Kripke structures, but utility values are not on some standard system-wide scale.\nThe fact that ui (K1) > ui (K2) certainly means that i strictly prefers K1 over K2, but the fact that ui (K) > uj (K) does not mean that i values K more highly than j .\nThus, it does not make sense to compare
utility values between agents, and so for example, some system wide measures of utility, (notably those measures that aggregate individual utilities, such as social welfare), do not make sense when applied in this setting.\nHowever, as we shall see shortly, other measures - such as Pareto efficiency - can be usefully applied.\nThere are other representations for goals, which would allow us to define cardinal utilities.\nThe simplest would be to specify goals \u03b3 for an agent as a finite, non-empty, one-to-one relation: \u03b3 \u2286 L\u00d7R.\nWe assume that the x values in pairs (\u03d5, x) \u2208 \u03b3 are specified so that x for agent i means the same as x for agent j , and so we have cardinal utility.\nWe then define the utility for i of a Kripke structure K asui (K) = max{x : (\u03d5, x) \u2208 \u03b3i & K |= \u03d5}.\nThe results of this paper in fact hold irrespective of which of these representations we actually choose; we fix upon the goal hierarchy approach in the interests of simplicity.\nOur next step is to show how, in much the same way, we can lift the utility function from Kripke structures to normative systems.\nSuppose we are given a multi-agent system M = K, \u03b31, ... , \u03b3n and an associated normative system \u03b7 over K. Let for agent i, \u03b4i (K, K ) be the difference in his utility when moving from K to K : \u03b4i (K, K ) = ui (K )\u2212 ui (K).\nThen the utility of \u03b7 to agent i wrt K is \u03b4i (K, K \u2020 \u03b7).\nWe will sometimes abuse notation and just write \u03b4i (K, \u03b7) for this, and refer to it as the benefit for agent i of implementing \u03b7 in K. 
Note that this benefit can be negative.\nSummarising, the utility of a normative system to an agent is the difference between the utility of the Kripke structure in which the normative system is implemented and that of the original Kripke structure.\nIf this value is greater than 0, then the agent would be better off if the normative system were imposed, while if it is less than 0 then the agent would be worse off if \u03b7 were imposed than in the original system.\nWe say \u03b7 is individually rational for i wrt K if \u03b4i (K, \u03b7) > 0, and individually rational simpliciter if \u03b7 is individually rational for every agent.\nA social system now is a pair \u03a3 = M , \u03b7 where M is a multi-agent system, and \u03b7 is a normative system over M .\nEXAMPLE 1.\nThe left-hand table in Figure 2 displays the utilities \u03b4i (K, \u03b7) of implementing \u03b7 in the Kripke structure of our running example, for the normative systems \u03b7 = \u03b7\u2205, \u03b71, \u03b72 and \u03b73, introduced before.\nRecall that u1(K) = u2(K) = 4.\n4.2 Universal and Existential Goals Keeping in mind that a norm \u03b7 restricts the possible transitions of the model under consideration, we make the following observation, borrowing from [15].\nSome classes of goals are monotonic or anti-monotonic with respect to adding additional constraints to a system.\nLet us therefore define two fragments of the language of CTL: the universal language Lu with typical element \u03bc, and the existential fragment Le with typical element \u03b5.\n\u03bc ::= \u22a4 | p | \u00acp | \u03bc \u2228 \u03bc | A f\u03bc | A\u25a1\u03bc | A(\u03bc U \u03bc) \u03b5 ::= \u22a4 | p | \u00acp | \u03b5 \u2228 \u03b5 | E f\u03b5 | E\u2666\u03b5 | E(\u03b5 U \u03b5) Let us say, for two Kripke structures K1 = S, S0 , R1, A, \u03b1, V and K2 = S, S0 , R2, A, \u03b1, V that K1 is a subsystem of K2 and K2 is a supersystem of K1, written K1 \u2291
K2, iff R1 ⊆ R2. Note that, typically, K † η ⊑ K. We then have the following (cf. [15]).

THEOREM 1. Suppose K1 ⊑ K2, and s ∈ S. Then:

∀ε ∈ Le : K1, s ⊨ ε ⇒ K2, s ⊨ ε
∀μ ∈ Lu : K2, s ⊨ μ ⇒ K1, s ⊨ μ

This has the following effect on imposing a new norm:

COROLLARY 1. Let K be a structure, η a normative system, and γi a goal hierarchy for agent i.

1. Suppose agent i's utility ui(K) is n, and γi[n] ∈ Lu (i.e., γi[n] is a universal formula). Then, for any normative system η, δi(K, η) ≥ 0.
2. Suppose agent i's utility ui(K † η) is n, and γi[n] is an existential formula ε. Then ui(K) ≥ ui(K † η).

Corollary 1's first item says that an agent whose current maximal goal in a system is a universal formula need never fear the imposition of a new norm η. The reason is that his current goal will at least remain true (in fact a goal higher up in the hierarchy may become true). It follows that an agent with only universal goals can only gain from the imposition of normative systems. The opposite is true for existential goals, according to the second item of the corollary: it can never be bad for an agent to undo a norm η. Hence, an agent with only existential goals might well fear any norm η. However, these observations implicitly assume that all agents in the system will comply with the norm. Whether they will in fact do so is, of course, a strategic decision: it partly depends on what the agent thinks that other agents will do. This motivates us to consider normative system games.

5. NORMATIVE SYSTEM GAMES

We now have a principled way of talking about the utility of normative systems for agents, and so we can start to apply the technical apparatus of game theory to analyse them. Suppose we have a multi-agent system M = ⟨K, γ1, ...
, γn⟩ and a normative system η over K. It is proposed to the agents in M that η should be imposed on K (typically to achieve some coordination objective). Our agent, say agent i, is then faced with a choice: should it comply with the strictures of the normative system, or not? Note that this reasoning takes place before the agent is in the system; it is a design-time consideration. We can understand the reasoning here as a game, as follows. A game in strategic normal form (cf. [11, p. 11]) is a structure:

G = ⟨AG, S1, ..., Sn, U1, ..., Un⟩

where:
• AG = {1, ..., n} is a set of agents, the players of the game;
• Si is the set of strategies for each agent i ∈ AG (a strategy for an agent i is nothing else than a choice between alternative actions); and
• Ui : (S1 × ··· × Sn) → R is the utility function for agent i ∈ AG, which assigns a utility to every combination of strategy choices for the agents.

Now, suppose we are given a social system Σ = ⟨M, η⟩ where M = ⟨K, γ1, ..., γn⟩. Then we can associate a game GΣ, the normative system game, with Σ, as follows. The agents AG in GΣ are as in Σ. Each agent i has just two strategies available to it:
• C: comply (cooperate) with the normative system; and
• D: do not comply with (defect from) the normative system.

If S is a tuple of strategies, one for each agent, and x ∈ {C, D}, then we denote by AG^x_S the subset of agents that play strategy x in S.
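The two-strategy compliance game can be made concrete with a small sketch. The following Python fragment (an illustration, not part of the paper's formal machinery) hardcodes the payoff matrix that the running example induces for the norm η3 = {(s, s), (t, t)}: mutual compliance gives each agent δi = ui(K † η3) − ui(K) = 6 − 4 = 2, while a unilateral defector gets 3 and the remaining complier 0, as computed in the continuation of Example 1 below.

```python
from itertools import product

# Payoff matrix U_i(S) for the running example with norm eta3 = {(s,s), (t,t)}.
# Values follow from Example 1 (continued) and Table 1:
# mutual compliance: delta_i = u_i(K † eta3) - u_i(K) = 6 - 4 = 2.
U = {
    ("C", "C"): (2, 2),
    ("C", "D"): (0, 3),
    ("D", "C"): (3, 0),
    ("D", "D"): (0, 0),
}

def is_nash(profile):
    # Nash equilibrium: no agent can strictly gain by unilaterally
    # switching between C and D.
    for i in range(2):
        flip = list(profile)
        flip[i] = "D" if profile[i] == "C" else "C"
        if U[tuple(flip)][i] > U[profile][i]:
            return False
    return True

def individually_rational():
    # Every agent strictly prefers all-comply (S_C) to all-defect (S_D).
    return all(U[("C", "C")][i] > U[("D", "D")][i] for i in range(2))

print(individually_rational())   # True: eta3 is individually rational
print(is_nash(("C", "C")))       # False: all-comply is not stable
print([p for p in product("CD", repeat=2) if is_nash(p)])
# [('C', 'D'), ('D', 'C'), ('D', 'D')]
```

Running it confirms that η3 is individually rational, yet mutual compliance is not a Nash equilibrium: in this matrix, defecting weakly dominates complying, which is exactly the instability discussed in Section 5.1.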
Hence, for a social system Σ = ⟨M, η⟩, the normative system η↾AG^C_S implements the restrictions only for those agents that choose to cooperate in GΣ. (Here η↾A' denotes the normative system obtained from η by keeping only the restrictions on arcs of the agents in A'; equivalently, η↾AG^C_S is the normative system that excludes all the restrictions of agents that play D in GΣ.) We then define the utility function Ui for each i ∈ AG as:

Ui(S) = δi(K, η↾AG^C_S).

So, for example, if SD is a collection of strategies in which every agent defects (i.e., does not comply with the norm), then AG^C_{SD} = ∅, and Ui(SD) = δi(K, η↾∅) = ui(K † η∅) − ui(K) = 0. In the same way, if SC is a collection of strategies in which every agent cooperates (i.e., complies with the norm), then AG^C_{SC} = AG, and Ui(SC) = δi(K, η↾AG) = ui(K † η) − ui(K). We can now start to investigate some properties of normative system games.

EXAMPLE 1. (continued) For our example system, we have displayed the different U values for our multi-agent system with the norm η3, i.e., {(s, s), (t, t)}, as the second table of Figure 2. For instance, the pair (0, 3) in the matrix under the entry S = ⟨C, D⟩ is obtained as follows. U1(⟨C, D⟩) = δ1(K, η3↾AG^C_⟨C,D⟩) = u1(K † η3↾AG^C_⟨C,D⟩) − u1(K). The first term is the utility of agent 1 in the system K in which we implement η3 only for the cooperating agent, i.e., agent 1. This means that the transitions are R \ {(s, s)}. In this system, ϕ1_4 = A□E♦p1 is still the highest goal achieved by agent 1. This is the same utility as agent 1 has in K, and hence δ1(K, η3↾AG^C_⟨C,D⟩) = 0. Agent 2, of course, benefits if agent 1 complies with η3 while agent 2 does not: his utility is 3, since η3↾AG^C_⟨C,D⟩ is in fact η1.

5.1 Individually Rational Normative Systems

A normative system is individually rational if every agent would fare better if the normative system were imposed than otherwise. This is a necessary, although
not sufficient, condition for expecting that everybody respects the norm. Note that η3 of our example is individually rational for both agents 1 and 2, although this is not a stable situation: given that the other agent plays C, agent i is better off playing D. We can easily characterise individual rationality with respect to the corresponding game in strategic form, as follows. Let Σ = ⟨M, η⟩ be a social system. Then the following are equivalent:

1. η is individually rational in M;
2. ∀i ∈ AG, Ui(SC) > Ui(SD) in the game GΣ.

[Figure 3: The Kripke structure produced in the reduction of Theorem 2; all transitions are associated with agent 1, and the only initial state is s0.]

The decision problem associated with individually rational normative systems is as follows:

INDIVIDUALLY RATIONAL NORMATIVE SYSTEM (IRNS):
Given: Multi-agent system M.
Question: Does there exist an individually rational normative system for M?

THEOREM 2. IRNS is NP-complete, even in one-agent systems.

PROOF. For membership of NP, guess a normative system η, and verify that it is individually rational. Since η ⊆ R, we can guess it in nondeterministic polynomial time. To verify that it is individually rational, we check that for all i, we have ui(K † η) > ui(K); computing K † η is just set subtraction, so can be done in polynomial time, while determining the value of ui(K) for any K can be done with a polynomial number of model checking calls, each of which requires time polynomial in the sizes of K and γ. Hence verifying that ui(K † η) > ui(K) requires only polynomial time. For NP-hardness, we reduce SAT [12, p. 77]. Given a SAT instance φ over Boolean variables x1, ...
, xk, we produce an instance of IRNS as follows. First, we define a single agent: A = {1}. For each Boolean variable xi in the SAT instance, we create two Boolean variables t(xi) and f(xi) in the IRNS instance. We then create a Kripke structure Kφ with 2k + 1 states, as shown in Figure 3: arcs in this graph correspond to transitions in Kφ. Let φ* be the result of systematically substituting for every Boolean variable xi in φ the CTL expression (E◯t(xi)). Next, consider the following formulae:

⋀_{i=1}^{k} E◯(t(xi) ∨ f(xi))   (1)
⋀_{i=1}^{k} ¬((E◯t(xi)) ∧ (E◯f(xi)))   (2)

We then define the goal hierarchy for agent 1 as follows: γ1[0] = ⊤ and γ1[1] = (1) ∧ (2) ∧ φ*. We claim there is an individually rational normative system for the instance so constructed iff φ is satisfiable. First, notice that any individually rational normative system must make γ1[1] true, since in the original system we do not have γ1[1]. For the ⇒ direction, if there is an individually rational normative system η, then we construct a satisfying assignment for φ by considering the arcs that are forbidden by η: formula (1) ensures that we must forbid an arc to either a t(xi) or an f(xi) state for every variable xi, but (2) ensures that we cannot forbid arcs to both. So, if we forbid an arc to a t(xi) state then in the corresponding valuation for φ we make xi false, while if we forbid an arc to an f(xi) state then we make xi true. The fact that φ* is part of the goal ensures that the resulting valuation indeed satisfies φ. For the ⇐ direction, note that from any satisfying valuation for φ we can construct an individually rational normative system η, as follows: if the valuation makes xi true, we forbid the arc to the f(xi) state, while if the valuation makes xi false, we forbid the arc to the t(xi) state. The resulting normative system ensures
γ1[1], and is thus individually rational. Notice that the Kripke structure constructed in the reduction contains just a single agent, and so the theorem is proved.

5.2 Pareto Efficient Normative Systems

Pareto efficiency is a basic measure of how good a particular outcome is for a group of agents [11, p. 7]. Intuitively, an outcome is Pareto efficient if there is no other outcome that makes every agent better off. In our framework, suppose we are given a social system Σ = ⟨M, η⟩ and asked whether η is Pareto efficient. This amounts to asking whether there is some other normative system η' such that every agent would be better off under η' than under η. If η' makes every agent better off than η, then we say η' Pareto dominates η. The decision problem is as follows:

PARETO EFFICIENT NORMATIVE SYSTEM (PENS):
Given: Multi-agent system M and normative system η over M.
Question: Is η Pareto efficient for M?

THEOREM 3. PENS is co-NP-complete, even for one-agent systems.

PROOF. Let M and η be as in the theorem. We show that the complement problem to PENS, which we refer to as PARETO DOMINATED, is NP-complete. In this problem, we are given M and η, and we are asked whether η is Pareto dominated, i.e., whether there exists some η' over M such that η' makes every agent better off than η. For membership of NP, simply guess a normative system η', and verify that for all i ∈ A, we have ui(K † η') > ui(K † η); verifying this requires a polynomial number of model checking calls, each of which takes polynomial time. Since η' ⊆ R, the normative system can be guessed in nondeterministic polynomial time. For NP-hardness, we reduce IRNS, which we know to be NP-complete from Theorem 2. Given an instance M of IRNS, we let M in the instance of PARETO DOMINATED be as in the IRNS instance, and define the normative system for PARETO
DOMINATED to be η∅, the empty normative system. Now, it is straightforward that there exists a normative system η which Pareto dominates η∅ in M iff there exists an individually rational normative system in M. Since the complement problem is NP-complete, it follows that PENS is co-NP-complete.

Table 1: Utilities for all possible norms in our example

            η0  η1  η2  η3  η4  η5  η6  η7  η8
u1(K † η)    4   4   7   6   5   0   0   8   0
u2(K † η)    4   7   4   6   0   5   8   0   0

What about Pareto efficient norms for our toy example? Settling this question amounts to finding the dominant normative systems among η0 = η∅, η1, η2, η3 defined before, and η4 = {(s, t)}, η5 = {(t, s)}, η6 = {(s, s), (t, s)}, η7 = {(t, t), (s, t)} and η8 = {(s, t), (t, s)}. The utilities for each system are given in Table 1. From this, we infer that the Pareto efficient norms are η1, η2, η3, η6 and η7. Note that η8 prohibits the resource from being passed from one agent to another, and this is not good for any agent: since we have chosen S0 = {s, t}, no agent can be sure to ever get the resource, i.e., goal ϕi_1 is not true in K † η8.

5.3 Nash Implementation Normative Systems

The most famous solution concept in game theory is of course Nash equilibrium [11, p. 14]. A collection of strategies, one for each agent, is said to form a Nash equilibrium if no agent can benefit by doing anything other than playing its strategy, under the assumption that the other agents play theirs. Nash equilibria are important because they provide stable solutions to the problem of what strategy an agent should play. Note that in our toy example, although η3 is individually rational for each agent, it is not a Nash equilibrium, since given this norm, it would be beneficial
for agent 1 to deviate (and likewise for agent 2). In our framework, we say a social system Σ = ⟨M, η⟩ (where η ≠ η∅) is a Nash implementation if SC (i.e., everyone complying with the normative system) forms a Nash equilibrium in the game GΣ. The intuition is that if Σ is a Nash implementation, then complying with the normative system is a reasonable solution for all concerned: there can be no benefit to deviating from it; indeed, there is a positive incentive for all to comply. If Σ is not a Nash implementation, then the normative system is unlikely to succeed, since compliance is not rational for some agents. (Our choice of terminology is deliberately chosen to reflect the way the term Nash implementation is used in implementation theory, or mechanism design [11, p. 185], where a game designer seeks to achieve some outcomes by designing the rules of the game such that these outcomes are equilibria.)

NASH IMPLEMENTATION (NI):
Given: Multi-agent system M.
Question: Does there exist a non-empty normative system η over M such that ⟨M, η⟩ forms a Nash implementation?

Verifying that a particular social system forms a Nash implementation can be done in polynomial time: it amounts to checking

∀i ∈ A : ui(K † η) ≥ ui(K † (η↾(A \ {i})))

(where η↾(A \ {i}) implements η only for the agents other than i). This clearly requires only a polynomial number of model checking calls, each of which requires only polynomial time.

THEOREM 4. The NI problem is NP-complete, even for two-agent systems.

[Figure 4: Reduction for Theorem 4.]

PROOF. For membership of NP, simply guess a normative system η and check that it forms a Nash implementation; since η ⊆ R, guessing can be done in non-deterministic polynomial time, and, as we argued above, verifying that it forms a Nash implementation can be done in polynomial
time. For NP-hardness, we reduce SAT. Suppose we are given a SAT instance φ over Boolean variables x1, ..., xk. Then we construct an instance of NI as follows. We create two agents, A = {1, 2}. For each Boolean variable xi we create two Boolean variables, t(xi) and f(xi), and we then define a Kripke structure as shown in Figure 4, with s0 being the only initial state; the arc labelling in Figure 4 gives the α function, and each state is labelled with the propositions that are true in that state. For each Boolean variable xi, we define the formulae x⊤_i and x⊥_i as follows:

x⊤_i = E◯(t(xi) ∧ E◯((E◯(t(xi))) ∧ A◯(¬f(xi))))
x⊥_i = E◯(f(xi) ∧ E◯((E◯(f(xi))) ∧ A◯(¬t(xi))))

Let φ* be the formula obtained from φ by systematically substituting x⊤_i for xi. Each agent has three goals: γi[0] = ⊤ for both i ∈ {1, 2}, while

γ1[1] = ⋀_{i=1}^{k} ((E◯(t(xi))) ∧ (E◯(f(xi))))
γ2[1] = E◯E◯ ⋀_{i=1}^{k} ((E◯(t(xi))) ∧ (E◯(f(xi))))

and finally, for both agents, γi[2] is the conjunction of the following formulae:

⋀_{i=1}^{k} (x⊤_i ∨ x⊥_i)   (3)
⋀_{i=1}^{k} ¬(x⊤_i ∧ x⊥_i)   (4)
⋀_{i=1}^{k} ¬((E◯(t(xi))) ∧ (E◯(f(xi))))   (5)
φ*   (6)

We denote the multi-agent system so constructed by Mφ. Now, we prove that the SAT instance φ is satisfiable iff Mφ has a Nash implementation normative system. For the ⇒ direction, suppose φ is satisfiable, and let X be a satisfying valuation, i.e., a set of Boolean variables making φ true. We can extract from X a Nash implementation normative system η as follows: if xi ∈ X, then η includes the arc from s0 to the state in which f(xi) is true, and also includes the arc from s(2k+1) to the state in which f(xi) is true; if xi ∉ X, then η includes the arc from s0
to the state in which t(xi) is true, and also includes the arc from s(2k+1) to the state in which t(xi) is true. No other arcs, apart from those so defined, are included in η. Notice that η is individually rational for both agents: if both comply with the normative system, then they will have their γi[2] goals achieved, which they do not in the basic system. To see that η forms a Nash implementation, observe that if either agent defects from η, then neither will have its γi[2] goal achieved: agent 1 strictly prefers (C, C) over (D, C), and agent 2 strictly prefers (C, C) over (C, D). For the ⇐ direction, suppose there exists a Nash implementation normative system η, in which case η ≠ ∅. Then φ is satisfiable; for suppose not. Then the goals γi[2] are not achievable by any normative system (by construction). Now, since η must forbid at least one transition, at least one agent would fail to have its γi[1] goal achieved if it complied, so at least one agent would do better by defecting, i.e., not complying with η. But this contradicts the assumption that η is a Nash implementation, i.e., that (C, C) forms a Nash equilibrium. This result is perhaps of some technical interest beyond the specific concerns of the present paper, since it is related to two problems of wider interest: the complexity of mechanism design [5], and the complexity of computing Nash equilibria [6, 7].

5.4 Richer Goal Languages

It is interesting to consider what happens to the complexity of the problems we consider above if we allow richer languages for goals: in particular, CTL* [9]. The main difference is that determining ui(K) in a given multi-agent system M when such a goal language is used involves solving a PSPACE-complete problem (since model checking for CTL* is PSPACE-complete [8]). In fact, it seems that for each of the three problems we consider above, the
corresponding problem under the assumption of a CTL* representation for goals is also PSPACE-complete. It cannot be any easier, since determining the utility of a particular Kripke structure involves solving a PSPACE-complete problem. To see membership in PSPACE, we can exploit the fact that PSPACE = NPSPACE [12, p. 150], and so we can guess the desired normative system, applying a PSPACE verification procedure to check that it has the desired properties.

6. CONCLUSIONS

Social norms are supposed to restrict our behaviour. Of course, such a restriction does not have to be bad: the fact that an agent's behaviour is restricted may seem a limitation, but there may be benefits if he can assume that others will also constrain their behaviour. The question for an agent, then, is how to be sure that others will comply with a norm; and, for a system designer, how to be sure that the system will behave socially, that is, according to its norm. Game theory is a very natural tool to analyse and answer these questions, which involve strategic considerations, and we have proposed a way to translate key questions concerning logic-based normative systems into game-theoretic questions. We have proposed a logical framework to reason about such scenarios, and we have given the computational costs of settling some of the main questions about them. Of course, our approach is in many senses open to extension or enrichment. An obvious issue to consider is the complexity of the questions we give for more practical representations of models (cf. [1]), and to consider other classes of allowable goals.

7. REFERENCES

[1] T. Agotnes, W. van der Hoek, J. A. Rodriguez-Aguilar, C. Sierra, and M. Wooldridge. On the logic of normative systems. In Proc. IJCAI-07, Hyderabad, India, 2007.
[2] R. Alur, T. A. Henzinger, and O. Kupferman. Alternating-time temporal logic. Journal of the ACM, 49(5):672-713, 2002.
[3] K.
Binmore. Game Theory and the Social Contract, Volume 1: Playing Fair. The MIT Press: Cambridge, MA, 1994.
[4] K. Binmore. Game Theory and the Social Contract, Volume 2: Just Playing. The MIT Press: Cambridge, MA, 1998.
[5] V. Conitzer and T. Sandholm. Complexity of mechanism design. In Proc. UAI, Edmonton, Canada, 2002.
[6] V. Conitzer and T. Sandholm. Complexity results about Nash equilibria. In Proc. IJCAI-03, pages 765-771, Acapulco, Mexico, 2003.
[7] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a Nash equilibrium. In Proc. STOC, Seattle, WA, 2006.
[8] E. A. Emerson. Temporal and modal logic. In Handbook of Theoretical Computer Science, Volume B, pages 996-1072. Elsevier, 1990.
[9] E. A. Emerson and J. Y. Halpern. 'Sometimes' and 'not never' revisited: on branching time versus linear time temporal logic. Journal of the ACM, 33(1):151-178, 1986.
[10] D. Fitoussi and M. Tennenholtz. Choosing social laws for multi-agent systems: Minimality and simplicity. Artificial Intelligence, 119(1-2):61-101, 2000.
[11] M. J. Osborne and A. Rubinstein. A Course in Game Theory. The MIT Press: Cambridge, MA, 1994.
[12] C. H. Papadimitriou. Computational Complexity. Addison-Wesley: Reading, MA, 1994.
[13] Y. Shoham and M. Tennenholtz. On the synthesis of useful social laws for artificial agent societies. In Proc. AAAI, San Diego, CA, 1992.
[14] Y. Shoham and M. Tennenholtz. On social laws for artificial agent societies: Off-line design. In Computational Theories of Interaction and Agency, pages 597-618. The MIT Press: Cambridge, MA, 1996.
[15] W. van der Hoek, M. Roberts, and M. Wooldridge. Social laws in alternating time: Effectiveness, feasibility, and synthesis. Synthese, 2007.
[16] M. Wooldridge and W.
van der Hoek.\nOn obligations and normative ability.\nJnl.\nof Appl.\nLogic, 3:396-420, 2005.\n888 The Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)","lvl-3":"Normative System Games\nABSTRACT\nWe develop a model of normative systems in which agents are assumed to have multiple goals of increasing priority, and investigate the computational complexity and game theoretic properties of this model.\nIn the underlying model of normative systems, we use Kripke structures to represent the possible transitions of a multiagent system.\nA normative system is then simply a subset of the Kripke structure, which contains the arcs that are forbidden by the normative system.\nWe specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy.\nUsing this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not.\nWe then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete.\n1.\nINTRODUCTION\nNormative systems, or social laws, have proved to be an attractive approach to coordination in multi-agent systems [13, 14, 10, 15, 1].\nAlthough the various approaches to normative systems proposed in\nthe literature differ on technical details, they all share the same basic intuition that a normative system is a set of constraints on the behaviour of agents in the system; by imposing these constraints, it is hoped that some desirable objective 
will emerge.\nThe idea of using social laws to coordinate multi-agent systems was proposed by Shoham and Tennenholtz [13, 14]; their approach was extended by van der Hoek et al. to include the idea of specifying a desirable global objective for a social law as a logical formula, with the idea being that the normative system would be regarded as successful if, after implementing it (i.e., after eliminating all forbidden actions), the objective formula was guaranteed to be satisfied in the system [15].\nHowever, this model did not take into account the preferences of individual agents, and hence neglected to account for possible strategic behaviour by agents when deciding whether to comply with the normative system or not.\nThis model of normative systems was further extended by attributing to each agent a single goal in [16].\nHowever, this model was still too impoverished to capture the kinds of decision making that take place when an agent decides whether or not to comply with a social law.\nIn reality, strategic considerations come into play: an agent takes into account not just whether the normative system would be beneficial for itself, but also whether other agents will rationally choose to participate.\nIn this paper, we develop a model of normative systems in which agents are assumed to have multiple goals, of increasing priority.\nWe specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures [8]: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy.\nUsing this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripkebased normative systems as games, in which agents must determine whether to comply with the normative system or not.\nWe thus provide a very natural bridge between logical structures and languages and the techniques and concepts of game 
theory, which have proved to be very powerful for analysing social contract-style scenarios such as normative systems [3, 4].\nWe then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete.\n2.\nKRIPKE STRUCTURES AND CTL\n978-81-904262-7-5 (RPS) c ~ 2007 IFAAMAS\n2.1 CTL\n3.\nNORMATIVE SYSTEMS\n4.\nGOALS AND UTILITIES\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 883\n4.1 The Utility of Normative Systems\n4.2 Universal and Existential Goals\n884 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.\nNORMATIVE SYSTEM GAMES\n5.1 Individually Rational Normative Systems\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 885\nINDIVIDUALLY RATIONAL NORMATIVE SYSTEM (IRNS):\n5.2 Pareto Efficient Normative Systems\nPARETO EFFICIENT NORMATIVE SYSTEM (PENS):\n5.3 Nash Implementation Normative Systems\nThe Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 887\n5.4 Richer Goal Languages\n6.\nCONCLUSIONS\nSocial norms are supposed to restrict our behaviour.\nOf course, such a restriction does not have to be bad: the fact that an agent's behaviour is restricted may seem a limitation, but there may be benefits if he can assume that others will also constrain their behaviour.\nThe question then, for an agent is, how to be sure that others will comply with a norm.\nAnd, for a system designer, how to be sure that the system will behave socially, that is, according to its norm.\nGame theory is a very natural tool to analyse and answer these questions, which involve strategic considerations, and we have proposed a way to translate key questions concerning logic-based normative systems to game theoretical questions.\nWe have proposed a logical framework to reason about such scenarios, and we have given some computational costs for settling some of the main questions about them.\nOf course, our approach is in many senses open for extension or enrichment.\nAn obvious issue is to consider is the complexity of the questions we give for more practical representations of models (cf. 
[1]), and to consider other classes of allowable goals.","lvl-4":"Normative System Games\nABSTRACT\nWe develop a model of normative systems in which agents are assumed to have multiple goals of increasing priority, and investigate the computational complexity and game theoretic properties of this model.\nIn the underlying model of normative systems, we use Kripke structures to represent the possible transitions of a multiagent system.\nA normative system is then simply a subset of the Kripke structure, which contains the arcs that are forbidden by the normative system.\nWe specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy.\nUsing this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not.\nWe then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete.\n1.\nINTRODUCTION\nNormative systems, or social laws, have proved to be an attractive approach to coordination in multi-agent systems [13, 14, 10, 15, 1].\nAlthough the various approaches to normative systems proposed in\nthe literature differ on technical details, they all share the same basic intuition that a normative system is a set of constraints on the behaviour of agents in the system; by imposing these constraints, it is hoped that some desirable objective will emerge.\nHowever, this model did not take into account the preferences of individual agents, and hence neglected to account 
for possible strategic behaviour by agents when deciding whether to comply with the normative system or not.

6. CONCLUSIONS

Social norms are supposed to restrict our behaviour. The question, then, for an agent is how to be sure that others will comply with a norm; and, for a system designer, how to be sure that the system will behave socially, that is, according to its norm. Game theory is a very natural tool to analyse and answer these questions, which involve strategic considerations, and we have proposed a way to translate key questions concerning logic-based normative systems into game-theoretical questions. Of course, our approach is in many senses open for extension or enrichment.

Normative System Games

ABSTRACT

We develop a model of normative systems in which agents are assumed to have multiple goals of increasing priority, and investigate the computational complexity and game-theoretic properties of this model. In the underlying model of normative systems, we use Kripke structures to represent the possible transitions of a multiagent system. A normative system is then simply a subset of the Kripke structure, which contains the arcs that are forbidden by the normative system. We specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy. Using this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not. We then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete.

1. INTRODUCTION

Normative systems, or social laws, have proved to be an attractive approach to coordination in multi-agent systems [13, 14, 10, 15, 1]. Although the various approaches to normative systems proposed in the literature differ on
technical details, they all share the same basic intuition that a normative system is a set of constraints on the behaviour of agents in the system; by imposing these constraints, it is hoped that some desirable objective will emerge. The idea of using social laws to coordinate multi-agent systems was proposed by Shoham and Tennenholtz [13, 14]; their approach was extended by van der Hoek et al. [15] to include the idea of specifying a desirable global objective for a social law as a logical formula, the idea being that the normative system would be regarded as successful if, after implementing it (i.e., after eliminating all forbidden actions), the objective formula was guaranteed to be satisfied in the system. However, this model did not take into account the preferences of individual agents, and hence neglected to account for possible strategic behaviour by agents when deciding whether to comply with the normative system or not. This model of normative systems was further extended by attributing to each agent a single goal in [16]. However, this model was still too impoverished to capture the kinds of decision making that take place when an agent decides whether or not to comply with a social law. In reality, strategic considerations come into play: an agent takes into account not just whether the normative system would be beneficial for itself, but also whether other agents will rationally choose to participate.

In this paper, we develop a model of normative systems in which agents are assumed to have multiple goals, of increasing priority. We specify an agent's goals as a hierarchy of formulae of Computation Tree Logic (CTL), a widely used logic for representing the properties of Kripke structures [8]: the intuition is that goals further up the hierarchy are preferred by the agent over those that appear further down the hierarchy. Using this scheme, we define a model of ordinal utility, which in turn allows us to interpret our Kripke-based normative systems as games, in which agents must determine whether to comply with the normative system or not. We thus provide a very natural bridge between logical structures and languages and the techniques and concepts of game theory, which have proved to be very powerful for analysing social contract-style scenarios such as normative systems [3, 4]. We then characterise the computational complexity of a number of decision problems associated with these Kripke-based normative system games; for example, we show that the complexity of checking whether there exists a normative system which has the property of being a Nash implementation is NP-complete.

2. KRIPKE STRUCTURES AND CTL

We use Kripke structures as our basic semantic model for multiagent systems [8]. A Kripke structure is essentially a directed graph, with the vertex set S corresponding to possible states of the system being modelled, and the relation R ⊆ S × S capturing the possible transitions of the system; intuitively, these transitions are caused by agents in the system performing actions, although we do not include such actions in our semantic model (see, e.g., [13, 2, 15] for related models which include actions as first-class citizens). We let S0 denote the set of possible initial states of the system.

978-81-904262-7-5 (RPS) © 2007 IFAAMAS

Our model is intended to correspond to the well-known interleaved concurrency model from the reactive systems literature: thus an arc corresponds to the execution of an atomic action by one of the processes in the system, which we call agents. It is important to note that, in contrast to such models as [2, 15], we are therefore not modelling synchronous action here. This assumption is not in fact essential for our analysis, but it greatly simplifies the presentation. However, we find it convenient to include within our model the agents that cause transitions. We therefore assume a set A of agents, and we label each transition in R with
the agent that causes the transition via a function α: R → A. Finally, we use a vocabulary Φ = {p, q, ...} of Boolean variables to express the properties of individual states S: we use a function V: S → 2^Φ to label each state with the Boolean variables true (or satisfied) in that state. Collecting these components together, an agent-labelled Kripke structure (over Φ) is a 6-tuple:

K = ⟨S, S0, R, A, α, V⟩, where:

• S is a finite, non-empty set of states;
• S0 ⊆ S (S0 ≠ ∅) is the set of initial states;
• R ⊆ S × S is a total binary relation on S, which we refer to as the transition relation¹;
• A = {1, ..., n} is a set of agents;
• α: R → A labels each transition in R with an agent; and
• V: S → 2^Φ labels each state with the set of propositional variables true in that state.

In the interests of brevity, we shall hereafter refer to an agent-labelled Kripke structure simply as a Kripke structure. A path over a transition relation R is an infinite sequence of states π = s0, s1, ... which must satisfy the property that ∀u ∈ N: (su, su+1) ∈ R. If u ∈ N, then we denote by π[u] the component indexed by u in π (thus π[0] denotes the first element, π[1] the second, and so on). A path π such that π[0] = s is an s-path. Let Π_R(s) denote the set of s-paths over R; since it will usually be clear from context, we often omit reference to R, and simply write Π(s). We will sometimes refer to and think of an s-path as a possible computation, or system evolution, from s.
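As an illustration, the 6-tuple can be rendered as a small data structure. This is a Python sketch of our own; the class and field names are illustrative, not the paper's, and the two-state instance below follows the resource-control structure of Figure 1.

```python
from dataclasses import dataclass

@dataclass
class Kripke:
    """Agent-labelled Kripke structure <S, S0, R, A, alpha, V>."""
    S: set       # finite, non-empty set of states
    S0: set      # non-empty set of initial states, a subset of S
    R: set       # transition relation: set of (s, s') pairs
    A: set       # agents
    alpha: dict  # maps each arc (s, s') in R to the agent causing it
    V: dict      # maps each state to the set of Boolean variables true there

    def is_total(self) -> bool:
        """R is total iff every state has at least one outgoing arc."""
        return all(any(u == s for (u, _) in self.R) for s in self.S)

# Two states s, t; arcs out of s belong to agent 1 (who holds the resource
# in s), arcs out of t to agent 2; p_i marks who controls the resource.
K = Kripke(
    S={"s", "t"}, S0={"s", "t"},
    R={("s", "s"), ("s", "t"), ("t", "s"), ("t", "t")},
    A={1, 2},
    alpha={("s", "s"): 1, ("s", "t"): 1, ("t", "s"): 2, ("t", "t"): 2},
    V={"s": {"p1"}, "t": {"p2"}},
)
assert K.is_total()
```

The totality check mirrors the footnoted definition: every state must have some successor, which is exactly the reasonableness requirement imposed on normative systems below.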
EXAMPLE 1. Our running example is of a system with a single non-sharable resource, which is desired by two agents. Consider the Kripke structure depicted in Figure 1. We have two states, s and t, and two corresponding Boolean variables p1 and p2, which are mutually exclusive. Think of pi as meaning "agent i currently has control over the resource". Each agent has two possible actions when in possession of the resource: either give it away, or keep it. Obviously there are infinitely many different s-paths and t-paths. Let us say that our set of initial states S0 equals {s, t}, i.e., we do not make any assumptions about who initially has control over the resource.

¹In the branching time temporal logic literature, a relation R ⊆ S × S is said to be total iff ∀s ∃s': (s, s') ∈ R. Note that the term "total relation" is sometimes used to refer to relations R ⊆ S × S such that for every pair of elements s, s' ∈ S we have either (s, s') ∈ R or (s', s) ∈ R; we are not using the term in this way here. It is also worth noting that for some domains, other constraints may be more appropriate than simple totality. For example, one might consider the agent totality requirement, that in every state, every agent has at least one possible transition available: ∀s ∀i ∈ A ∃s': (s, s') ∈ R and α(s, s') = i.

Figure 1: The resource control running example.

2.1 CTL

We now define Computation Tree Logic (CTL), a branching time temporal logic intended for representing the properties of Kripke structures [8]. Note that since CTL is well known and widely documented in the literature, our presentation, though complete, will be somewhat terse. We will use CTL to express agents' goals. The syntax of CTL is defined by the following grammar:

φ ::= ⊤ | p | ¬φ | φ ∨ φ | E○φ | A○φ | E(φ U ψ) | A(φ U ψ)

where p ∈ Φ. We denote the set of CTL formulae over Φ by L_Φ; since Φ is understood, we usually omit reference to it. The
semantics of CTL are given with respect to the satisfaction relation "⊨", which holds between pairs of the form K, s (where K is a Kripke structure and s is a state in K), and formulae of the language. The satisfaction relation is defined as follows:

K, s ⊨ ⊤;
K, s ⊨ p iff p ∈ V(s) (where p ∈ Φ);
K, s ⊨ ¬φ iff not K, s ⊨ φ;
K, s ⊨ φ ∨ ψ iff K, s ⊨ φ or K, s ⊨ ψ;
K, s ⊨ E○φ iff ∃π ∈ Π(s): K, π[1] ⊨ φ;
K, s ⊨ A○φ iff ∀π ∈ Π(s): K, π[1] ⊨ φ;
K, s ⊨ A(φ U ψ) iff ∀π ∈ Π(s), ∃u ∈ N, s.t. K, π[u] ⊨ ψ and ∀v, (0 ≤ v < u): K, π[v] ⊨ φ;
K, s ⊨ E(φ U ψ) iff ∃π ∈ Π(s), ∃u ∈ N, s.t. K, π[u] ⊨ ψ and ∀v, (0 ≤ v < u): K, π[v] ⊨ φ.

The remaining classical logic connectives ("∧", "→", "↔") are assumed to be defined as abbreviations in terms of ¬, ∨, in the conventional manner. The remaining CTL temporal operators are defined:

A♦φ ≡ A(⊤ U φ)    E♦φ ≡ E(⊤ U φ)    A□φ ≡ ¬E♦¬φ    E□φ ≡ ¬A♦¬φ

We say φ is satisfiable if K, s ⊨ φ for some Kripke structure K and state s in K; φ is valid if K, s ⊨ φ for all Kripke structures K and states s in K. The problem of checking whether K, s ⊨ φ for given K, s, φ (model checking) can be done in deterministic polynomial time, while checking whether a given φ is satisfiable or whether φ is valid is EXPTIME-complete [8]. We write K ⊨ φ if K, s0 ⊨ φ for all s0 ∈ S0, and ⊨ φ if K ⊨ φ for all K.

3. NORMATIVE SYSTEMS

For our purposes, a normative system is simply a set of constraints on the behaviour of agents in a system [1]. More precisely, a normative system defines, for every possible system transition, whether or not that transition is considered legal. Different normative systems may differ on whether or not a transition is legal. Formally, a normative system η (w.r.t.
a Kripke structure K = ⟨S, S0, R, A, α, V⟩) is simply a subset of R, such that R \ η is a total relation. The requirement that R \ η is total is a reasonableness constraint: it prevents normative systems which lead to states with no successor. Let

N(R) = {η : (η ⊆ R) & (R \ η is total)}

be the set of normative systems over R. The intended interpretation of a normative system η is that (s, s') ∈ η means transition (s, s') is forbidden in the context of η; hence R \ η denotes the legal transitions of η. Since it is assumed η is reasonable, we are guaranteed that a legal outward transition exists for every state. We denote the empty normative system by η0, so η0 = ∅. Note that the empty normative system η0 is reasonable with respect to any transition relation R.

The effect of implementing a normative system on a Kripke structure is to eliminate from it all transitions that are forbidden according to this normative system (see [15, 1]). If K is a Kripke structure, and η is a normative system over K, then K † η denotes the Kripke structure obtained from K by deleting transitions forbidden in η. Formally, if K = ⟨S, S0, R, A, α, V⟩ and η ∈ N(R), then let K † η = K' be the Kripke structure K' = ⟨S', S0', R', A', α', V'⟩ where:

• S = S', S0 = S0', A = A', and V = V';
• R' = R \ η; and
• α' is the restriction of α to R': α'(s, s') = α(s, s') if (s, s') ∈ R'.

Notice that for all K, we have K † η0 = K.
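Since a normative system is just a set of forbidden arcs, both the reasonableness constraint and the implementation operation K † η reduce to set operations. A minimal Python sketch of our own (arcs as pairs; names are illustrative):

```python
# States and arcs of a two-state structure, in the spirit of the running example.
S = {"s", "t"}
R = {("s", "s"), ("s", "t"), ("t", "s"), ("t", "t")}

def is_total(rel, states):
    """A relation is total iff every state has at least one outgoing arc."""
    return all(any(u == s for (u, _) in rel) for s in states)

def is_reasonable(eta, rel, states):
    """eta is a normative system over rel iff eta is a subset of rel
    and rel \\ eta is total (no state is left without a successor)."""
    return eta <= rel and is_total(rel - eta, states)

def implement(rel, eta):
    """K dagger eta: delete all arcs forbidden by eta."""
    return rel - eta

eta1 = {("s", "s")}
assert is_reasonable(eta1, R, S)
assert implement(R, eta1) == {("s", "t"), ("t", "s"), ("t", "t")}

# Forbidding every arc out of s is not reasonable: s would have no successor.
assert not is_reasonable({("s", "s"), ("s", "t")}, R, S)
```

The empty normative system is the empty set, and `implement(R, set())` returns R unchanged, matching K † η0 = K.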
EXAMPLE 1. (continued) When thinking in terms of fairness, it seems natural to consider normative systems η that contain (s, s) or (t, t). A normative system with (s, t) would not be fair, in the sense that A♦A□¬p1 ∨ A♦A□¬p2 holds: in all paths, from some moment on, one agent will have control forever. Let us, for later reference, fix η1 = {(s, s)}, η2 = {(t, t)}, and η3 = {(s, s), (t, t)}.

Later, we will address the issue of whether or not agents should rationally choose to comply with a particular normative system. In this context, it is useful to define operators on normative systems which correspond to groups of agents "defecting" from the normative system. Formally, let K = ⟨S, S0, R, A, α, V⟩ be a Kripke structure, let C ⊆ A be a set of agents over K, and let η be a normative system over K. Then:

• η↾C denotes the normative system that is the same as η except that it only contains the arcs of η that correspond to the actions of agents in C. We call η↾C the restriction of η to C, and it is defined as: η↾C = {(s, s') : (s, s') ∈ η & α(s, s') ∈ C}. Thus K † (η↾C) is the Kripke structure that results if only the agents in C choose to comply with the normative system.

• η⇂C denotes the normative system that is the same as η except that it only contains the arcs of η that do not correspond to actions of agents in C. We call η⇂C the exclusion of C from η, and it is defined as: η⇂C = {(s, s') : (s, s') ∈ η & α(s, s') ∉ C}. Thus K † (η⇂C) is the Kripke structure that results if only the agents in C choose not to comply with the normative system (i.e., the only ones who comply are those in A \ C).

Note that we have η⇂C = η↾(A \ C) and η↾C = η⇂(A \ C).

4. GOALS AND UTILITIES

Next, we want to be able to capture the goals that
agents have, as these will drive an agent's strategic considerations; particularly, as we will see, considerations about whether or not to comply with a normative system. We will model an agent's goals as a prioritised list of CTL formulae, representing increasingly desired properties that the agent wishes to hold. The intended interpretation of such a goal hierarchy γi for agent i ∈ A is that the "further up the hierarchy" a goal is, the more it is desired by i. Note that we assume that if an agent can achieve a goal at a particular level in its goal hierarchy, then it is unconcerned about goals lower down the hierarchy. Formally, a goal hierarchy γ (over a Kripke structure K) is a finite, non-empty sequence of CTL formulae

γ = (φ0, φ1, ..., φk)

in which, by convention, φ0 = ⊤. We use a natural number indexing notation to extract the elements of a goal hierarchy, so if γ = (φ0, φ1, ..., φk) then γ[0] = φ0, γ[1] = φ1, and so on. We denote the largest index of any element in γ by |γ|. A particular Kripke structure K is said to satisfy a goal at index x in goal hierarchy γ if K ⊨ γ[x], i.e., if γ[x] is satisfied in all initial states S0 of K.

An obvious potential property of goal hierarchies is monotonicity: where goals at higher levels in the hierarchy logically imply those at lower levels in the hierarchy. Formally, a goal hierarchy γ is monotonic if for all x ∈ {1, ..., |γ|}, we have ⊨ γ[x] → γ[x − 1]. The simplest type of monotonic goal hierarchy is where γ[x + 1] = γ[x] ∧ ψx+1 for some ψx+1, so at each successive level of the hierarchy, we add new constraints to the goal of the previous level. Although this is a natural property of many goal hierarchies, it is not a property we demand of all goal hierarchies.

EXAMPLE 1. (continued) Suppose the agents have similar, but opposing goals: each agent i wants
to keep the resource as often and as long as possible for himself. Define each agent's goal hierarchy as γi = (φi0, φi1, ..., φi8), where:

φi0 = ⊤
φi1 = E♦pi
φi2 = E□E♦pi
φi3 = E♦E□pi
φi4 = A□E♦pi
φi5 = E♦A□pi
φi6 = A□A♦pi
φi7 = A□(A♦pi ∧ E□pi)
φi8 = A□pi

The most desired goal of agent i is to, in every computation, always have the resource, pi (this is expressed in φi8). Thanks to our reasonableness constraint, this goal implies φi7, which says that, no matter how the computation paths evolve, it will always be that all continuations will hit a point in which pi, and, moreover, there is a continuation in which pi always holds. Goal φi6 is a fairness constraint implied by it. Note that A♦pi says that every computation eventually reaches a pi state. This may mean that after pi has happened, it will never happen again. φi6 circumvents this: it says that, no matter where you are, there should be a future pi state. The goal φi5 is like the strong goal φi8, but it accepts that this is only achieved in some computation, eventually. φi4 requires that in every path, there is always a continuation that eventually gives pi. Goal φi3 says that pi should be true on some branch, from some moment on. It implies φi2, which expresses that there is a computation such that everywhere during it, it is possible to choose a continuation that eventually satisfies pi. This implies φi1, which says that pi should at least not be impossible. If we even drop that demand, we have the trivial goal φi0.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 883

We remark that it may seem more natural to express the fairness constraint φi6 as A□♦pi. However, this is not a proper CTL formula. It is in fact a formula in CTL* [9], and in that logic the two expressions would be equivalent. However, our basic complexity results in the next sections would not hold for the richer language CTL*², and the price to pay for this is that we have to formulate our desired goals in a somewhat more cumbersome manner than we might ideally like. Of course, our basic framework does
not demand that goals are expressed in CTL; they could equally well be expressed in CTL* or indeed ATL [2] (as in [15]). We comment on the implications of alternative goal representations at the conclusion of the next section.

A multi-agent system collects together a Kripke structure (representing the basic properties of a system under consideration: its state space, and the possible state transitions that may occur in it), together with a goal hierarchy, one for each agent, representing the aspirations of the agents in the system. Formally, a multi-agent system M is an (n + 1)-tuple:

M = ⟨K, γ1, ..., γn⟩

where K is a Kripke structure, and for each agent i in K, γi is a goal hierarchy over K.

4.1 The Utility of Normative Systems

We can now define the utility of a Kripke structure for an agent. The idea is that the utility of a Kripke structure is the highest index of any goal that is guaranteed for that agent in the Kripke structure. We make this precise in the function ui(·):

ui(K) = max{j : 0 ≤ j ≤ |γi| & K ⊨ γi[j]}

Note that using these definitions of goals and utility, it never makes sense to have a goal φ at index n if there is a logically weaker goal ψ at index n + k in the hierarchy: by definition of utility, the utility could then never be n for any structure K.

EXAMPLE 1. (continued) Let M = ⟨K, γ1, γ2⟩ be the multiagent system of Figure 1, with γ1 and γ2 as defined earlier in this example. Recall that we have defined S0 as {s, t}. Then u1(K) = u2(K) = 4: goal φ4 is true in S0, but φ5 is not. To see that φ24 = A□E♦p2 is true in s, for instance: note that on every path it is always the case that there is a transition to t, in which p2 is true. Notice that since for any goal hierarchy γi we have γi[0] = ⊤, for all Kripke structures ui(K) is well defined, with ui(K) ≥ 0.

²CTL* model checking is PSPACE-complete, and hence much worse (under standard complexity-theoretic assumptions) than model checking CTL [8].

Figure 2: Benefits of implementing a normative system η (left) and pay-offs for the game G_Σ.

Note that this is an ordinal utility measure: it tells us, for any given agent, the relative utility of different Kripke structures, but utility values are not on some standard system-wide scale. The fact that ui(K1) > ui(K2) certainly means that i strictly prefers K1 over K2, but the fact that ui(K) > uj(K) does not mean that i values K more highly than j. Thus, it does not make sense to compare utility values between agents, and so, for example, some system-wide measures of utility (notably those measures that aggregate individual utilities, such as social welfare) do not make sense when applied in this setting. However, as we shall see shortly, other measures, such as Pareto efficiency, can be usefully applied.

There are other representations for goals, which would allow us to define cardinal utilities. The simplest would be to specify goals γ for an agent as a finite, non-empty, one-to-one relation γ ⊆ L × ℝ between formulae and real numbers. We assume that the x values in pairs (φ, x) ∈ γ are specified so that x for agent i means the same as x for agent j, and so we have cardinal utility. We then define the utility for i of a Kripke structure K as ui(K) = max{x : (φ, x) ∈ γi & K ⊨ φ}. The results of this paper in fact hold irrespective of which of these representations we actually choose; we fix upon the goal hierarchy approach in the interests of simplicity.

Our next step is to show how, in much the same way, we can lift the utility function from Kripke structures to normative systems. Suppose we are given a multi-agent system M = ⟨K, γ1, ..., γn⟩ and an associated normative system η over K.
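The ordinal utility of a Kripke structure, and the induced benefit of moving to a restricted structure, can be sketched in a few lines of Python. This is our own rendering: goals are given abstractly as predicates on a structure, standing in for CTL model checking, and the toy "structure" is simply its set of arcs.

```python
def utility(K, goals):
    """Ordinal utility: the highest index j such that K satisfies goals[j].
    goals[0] is the trivial goal (always true), so the max always exists."""
    return max(j for j, g in enumerate(goals) if g(K))

def benefit(K, K_eta, goals):
    """Benefit of a norm: utility after implementing it minus utility before.
    Can be negative if the norm destroys a previously guaranteed goal."""
    return utility(K_eta, goals) - utility(K, goals)

# Toy stand-in: a "structure" is just a set of arcs; goals are predicates.
K = {("s", "s"), ("s", "t"), ("t", "s"), ("t", "t")}
K_eta = K - {("t", "t")}           # implementing eta = {(t, t)}
goals = [
    lambda k: True,                # gamma[0]: the trivial goal
    lambda k: ("s", "t") in k,     # gamma[1]: an arbitrary illustrative goal
    lambda k: ("t", "t") not in k, # gamma[2]: satisfied only after the norm
]
assert utility(K, goals) == 1
assert utility(K_eta, goals) == 2
assert benefit(K, K_eta, goals) == 1   # this norm is individually rational
```

Swapping the predicate list for genuine CTL model-checking calls gives exactly the ui(·) of the text; the ordinal character is preserved because only the index of the highest satisfied goal matters.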
For agent i, let δi(K, K') be the difference in his utility when moving from K to K': δi(K, K') = ui(K') − ui(K). Then the utility of η to agent i w.r.t. K is δi(K, K † η). We will sometimes abuse notation and just write δi(K, η) for this, and refer to it as the benefit for agent i of implementing η in K. Note that this benefit can be negative. Summarising, the utility of a normative system to an agent is the difference between the utility of the Kripke structure in which the normative system was implemented and that of the original Kripke structure. If this value is greater than 0, then the agent would be better off if the normative system were imposed, while if it is less than 0 then the agent would be worse off if η were imposed than in the original system. We say η is individually rational for i w.r.t. K if δi(K, η) > 0, and individually rational simpliciter if η is individually rational for every agent. A social system now is a pair

Σ = (M, η)

where M is a multi-agent system and η is a normative system over M.

4.2 Universal and Existential Goals

Keeping in mind that a norm η restricts the possible transitions of the model under consideration, we make the following observation, borrowing from [15]. Some classes of goals are monotonic or anti-monotonic with respect to adding additional constraints to a system. Let us therefore define two fragments of the language of CTL: the universal language L^u with typical element μ, and the existential fragment L^e with typical element ε. Let us say, for two Kripke structures K1 = ⟨S, S0, R1, A, α, V⟩ and K2 = ⟨S, S0, R2, A, α, V⟩, that K1 is a subsystem of K2 and K2 is a supersystem of K1, written K1 ⊑ K2, iff R1 ⊆ R2. Note that typically K † η ⊑ K. Then we have (cf. [15]): if K1 ⊑ K2, then universal formulae true in K2 remain true in K1, and existential formulae true in K1 remain true in K2. This has the following effect on imposing a new norm:

COROLLARY 1.
1. Suppose agent i's utility ui(K) is n, and γi[n] ∈ L^u (i.e., γi[n] is a universal formula). Then, for any normative system η, δi(K, η) ≥ 0.
2. Suppose agent i's utility ui(K † η) is n, and γi[n] is an existential formula ε. Then δi(K † η, K) ≥ 0.

Corollary 1's first item says that an agent whose current maximal goal in a system is a universal formula need never fear the imposition of a new norm η. The reason is that his current goal will at least remain true (in fact a goal higher up in the hierarchy may become true). It follows from this that an agent with only universal goals can only gain from the imposition of normative systems η. The opposite is true for existential goals, according to the second item of the corollary: it can never be bad for an agent to "undo" a norm η. Hence, an agent with only existential goals might well fear any norm η. However, these observations implicitly assume that all agents in the system will comply with the norm. Whether they will in fact do so is, of course, a strategic decision: it partly depends on what the agent thinks that other agents will do. This motivates us to consider normative system games.

5. NORMATIVE SYSTEM GAMES

We now have a principled way of talking about the utility of normative systems for agents, and so we can start to apply the technical apparatus of game theory to analyse them. Suppose we have a multi-agent system M = ⟨K, γ1, ..., γn⟩ and a normative system η over K. It is proposed to the agents in M that η should be imposed on K (typically to achieve some coordination objective). Our agent, say agent i, is then faced with a choice: should it comply with the strictures of the normative system, or not? Note that this reasoning takes place before the agent is "in" the system; it is a design-time
consideration. We can understand the reasoning here as a game, as follows. A game in strategic normal form (cf. [11, p. 11]) is a structure

G = ⟨A, S1, ..., Sn, U1, ..., Un⟩, where:

• A = {1, ..., n} is a set of agents, the players of the game;
• Si is the set of strategies for each agent i ∈ A (a strategy for an agent i is nothing else than a choice between alternative actions); and
• Ui : S1 × ··· × Sn → ℝ is the utility function for agent i ∈ A, which assigns a utility to every combination of strategy choices for the agents.

Now, suppose we are given a social system Σ = (M, η) where M = ⟨K, γ1, ..., γn⟩. Then we can associate a game, the normative system game G_Σ, with Σ, as follows. The agents in G_Σ are as in Σ. Each agent i has just two strategies available to it:

• C: comply (cooperate) with the normative system; and
• D: do not comply with (defect from) the normative system.

If σ is a tuple of strategies, one for each agent, and x ∈ {C, D}, then we denote by A_x(σ) the subset of agents that play strategy x in σ.
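The compliance game induced by a normative system can be tabulated by enumerating strategy profiles. The following self-contained Python sketch is our own: a toy utility function stands in for CTL model checking, and the payoff of a profile is the benefit of the norm with the defectors' restrictions dropped.

```python
from itertools import product

def exclusion(eta, alpha, defectors):
    """Keep only the forbidden arcs caused by agents NOT among the defectors
    (the defectors' own restrictions are dropped)."""
    return {arc for arc in eta if alpha[arc] not in defectors}

def game_payoffs(R, alpha, eta, agents, u):
    """For each compliance profile, payoff of agent i is
    u_i(structure with compliers' restrictions applied) - u_i(original)."""
    payoffs = {}
    for sigma in product("CD", repeat=len(agents)):
        D = {i for i, x in zip(agents, sigma) if x == "D"}
        arcs = R - exclusion(eta, alpha, D)   # implement the remaining norm
        payoffs[sigma] = tuple(u(i, arcs) - u(i, R) for i in agents)
    return payoffs

# Two-state resource example; the norm forbids both "keep the resource" loops.
R = {("s", "s"), ("s", "t"), ("t", "s"), ("t", "t")}
alpha = {("s", "s"): 1, ("s", "t"): 1, ("t", "s"): 2, ("t", "t"): 2}
eta3 = {("s", "s"), ("t", "t")}

def u(i, arcs):
    """Toy ordinal utility: agent i gains if the other's 'keep' loop is gone."""
    other_loop = ("t", "t") if i == 1 else ("s", "s")
    return 1 if other_loop not in arcs else 0

payoffs = game_payoffs(R, alpha, eta3, [1, 2], u)
assert payoffs[("D", "D")] == (0, 0)   # nobody complies: the original structure
assert payoffs[("C", "C")] == (1, 1)   # full compliance: the norm fully applied
assert payoffs[("C", "D")] == (0, 1)   # agent 2 free-rides on 1's compliance
```

With real goal hierarchies in place of the toy `u`, the resulting table is exactly the pay-off matrix of the game discussed below.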
Hence, for a social system Σ = (M, η), the normative system η↾A_C(σ) only implements the restrictions for those agents that choose to cooperate in G_Σ. Note that this is the same as η⇂A_D(σ): the normative system that excludes all the restrictions of agents that play D in G_Σ. We then define the utility functions Ui for each agent i as:

Ui(σ) = δi(K, η⇂A_D(σ))

So, for example, if σD is a collection of strategies in which every agent defects (i.e., does not comply with the norm), then Ui(σD) = δi(K, η⇂A_D(σD)) = ui(K † η0) − ui(K) = 0. In the same way, if σC is a collection of strategies in which every agent cooperates (i.e., complies with the norm), then Ui(σC) = δi(K, η⇂A_D(σC)) = ui(K † (η⇂∅)) − ui(K) = ui(K † η) − ui(K). We can now start to investigate some properties of normative system games.

5.1 Individually Rational Normative Systems

A normative system is individually rational if every agent would fare better if the normative system were imposed than otherwise. This is a necessary, although not sufficient, condition on a norm for expecting that everybody respects it. Note that η3 of our example is individually rational for both 1 and 2, although this is not a stable situation: given that the other plays C, agent i is better off playing D. We can easily characterise individual rationality with respect to the corresponding game in strategic form, as follows. Let Σ = (M, η) be a social system. Then the following are equivalent:

1. η is individually rational in M;
2. ∀i ∈ A, Ui(σC) > Ui(σD) in the game G_Σ.

Figure 3: The Kripke structure produced in the reduction of Theorem 2; all transitions are associated with agent 1, the only initial state is s0.

The decision problem associated with individually rational normative systems is as follows:

INDIVIDUALLY RATIONAL NORMATIVE SYSTEM (IRNS):
Given: Multi-agent system M.
Question: Does there exist an individually rational normative system for M?

THEOREM 2. IRNS is NP-complete, even in one-agent systems.

PROOF. For membership of NP, guess a normative system η, and verify that it is individually rational. Since η ⊆ R, we will be able to guess it in nondeterministic polynomial time. To verify that it is individually rational, we check that for all i, we have ui(K † η) > ui(K); computing K † η is just set subtraction, so can be done in polynomial time, while determining the value of ui(K) for any K can be done with a polynomial number of model checking calls, each of which requires only time polynomial in the size of K and γi. Hence verifying that ui(K † η) > ui(K) requires only polynomial time. For NP-hardness, we reduce SAT [12, p.
77]. Given a SAT instance φ over Boolean variables x1, ..., xk, we produce an instance of IRNS as follows. First, we define a single agent A = {1}. For each Boolean variable xi in the SAT instance, we create two Boolean variables t(xi) and f(xi) in the IRNS instance. We then create a Kripke structure K_φ with 2k + 1 states, as shown in Figure 3: arcs in this graph correspond to transitions in K_φ. Let φ* be the result of systematically substituting for every Boolean variable xi in φ the CTL expression E♦t(xi). Next, consider the following formulae:

⋀_{1≤i≤k} ¬(E♦t(xi) ∧ E♦f(xi))   (1)

⋀_{1≤i≤k} (E♦t(xi) ∨ E♦f(xi))   (2)

We then define the goal hierarchy for agent 1 as follows: γ1[0] = ⊤, and γ1[1] is the conjunction of φ* with formulae (1) and (2).

We claim there is an individually rational normative system for the instance so constructed iff φ is satisfiable. First, notice that any individually rational normative system must force γ1[1] to be true, since in the original system we do not have γ1[1]. For the ⇒ direction, if there is an individually rational normative system η, then we construct a satisfying assignment for φ by considering the arcs that are forbidden by η: formula (1) ensures that we must forbid an arc to either a t(xi) or an f(xi) state for all variables xi, but (2) ensures that we cannot forbid arcs to both. So, if we forbid an arc to a t(xi) state then in the corresponding valuation for φ we make xi false, while if we forbid an arc to an f(xi) state then we make xi true. The fact that φ* is part of the goal ensures that this valuation indeed satisfies φ. For the ⇐ direction, note that for any satisfying valuation for φ we can construct an individually rational normative system η, as follows: if the valuation makes xi true, we forbid the arc to the f(xi) state, while if the valuation makes xi false, we forbid the arc to the t(xi) state. The resulting normative system ensures γ1[1], and is thus individually rational. Notice that the Kripke structure constructed in the
reduction contains just a single agent, and so the Theorem is proven.\n5.2 Pareto Efficient Normative Systems\nPareto efficiency is a basic measure of how good a particular outcome is for a group of agents [11, p. 7].\nIntuitively, an outcome is Pareto efficient if there is no other outcome that makes every agent better off.\nIn our framework, suppose we are given a social system Σ = (M, η), and asked whether η is Pareto efficient.\nThis amounts to asking whether or not there is some other normative system η' such that every agent would be better off under η' than with η.\nIf η' makes every agent better off than η, then we say η' Pareto dominates η.\nThe decision problem is as follows:\nPARETO EFFICIENT NORMATIVE SYSTEM (PENS):\nGiven: Multi-agent system M and normative system η over M.\nQuestion: Is η Pareto efficient for M?\nTHEOREM 3.\nPENS is co-NP-complete, even for one-agent systems.\nPROOF.\nLet M and η be as in the Theorem.\nWe show that the complement problem to PENS, which we refer to as PARETO DOMINATED, is NP-complete.\nIn this problem, we are given M and η, and we are asked whether η is Pareto dominated, i.e., whether or not there exists some η' over M such that η' makes every agent better off than η.\nFor membership of NP, simply guess a normative system η', and verify that for all i ∈ A, we have ui(K † η') > ui(K † η) -- verifying this requires a polynomial number of model checking problems, each of which takes polynomial time.\nSince η' ⊆ R, the normative system can be guessed in non-deterministic polynomial time.\nFor NP-hardness, we reduce IRNS, which we know to be NP-complete from Theorem 2.\nGiven an instance M of IRNS, we let M in the instance of PARETO DOMINATED be as in the IRNS instance, and define the normative system for PARETO DOMINATED to be η0, the empty normative system.\nNow, it is straightforward that there exists a normative
system η' which Pareto dominates η0 in M iff there exists an individually rational normative system in M.\nSince the complement problem is NP-complete, it follows that PENS is co-NP-complete.\nTable 1: Utilities for all possible norms in our example\nHow about Pareto efficient norms for our toy example?\nSettling this question amounts to finding the dominant normative systems among η0, η1, η2, η3 defined before, and η4 = {(s, t)}, η5 = {(t, s)}, η6 = {(s, s), (t, s)}, η7 = {(t, t), (s, t)} and η8 = {(s, t), (t, s)}.\nThe utilities for each system are given in Table 1.\nFrom this, we infer that the Pareto efficient norms are η1, η2, η3, η6 and η7.\nNote that η8 prohibits the resource to be passed from one agent to another, and this is not good for any agent (since we have chosen S0 = {s, t}, no agent can be sure to ever get the resource, i.e., goal γi[1] is not true in K † η8).\n5.3 Nash Implementation Normative Systems\nThe most famous solution concept in game theory is of course Nash equilibrium [11, p.
14].\nA collection of strategies, one for each agent, is said to form a Nash equilibrium if no agent can benefit by doing anything other than playing its strategy, under the assumption that the other agents play theirs.\nNash equilibria are important because they provide stable solutions to the problem of what strategy an agent should play.\nNote that in our toy example, although η3 is individually rational for each agent, it is not a Nash equilibrium, since given this norm, it would be beneficial for agent 1 to deviate (and likewise for 2).\nIn our framework, we say a social system Σ = (M, η) (where η ≠ η0) is a Nash implementation if SC (i.e., everyone complying with the normative system) forms a Nash equilibrium in the game GΣ.\nThe intuition is that if Σ is a Nash implementation, then complying with the normative system is a reasonable solution for all concerned: there can be no benefit to deviating from it, indeed, there is a positive incentive for all to comply.\nIf Σ is not a Nash implementation, then the normative system is unlikely to succeed, since compliance is not rational for some agents.\n(Our choice of terminology is deliberately chosen to reflect the way the term "Nash implementation" is used in implementation theory, or mechanism design [11, p.
185], where a game designer seeks to achieve some outcomes by designing the rules of the game such that these outcomes are equilibria.)\nNASH IMPLEMENTATION (NI): Given: Multi-agent system M.\nQuestion: Does there exist a non-empty normative system η over M such that (M, η) forms a Nash implementation?\nVerifying that a particular social system forms a Nash implementation can be done in polynomial time -- it amounts to checking:\nThis clearly requires only a polynomial number of model checking calls, each of which requires only polynomial time.\nTHEOREM 4.\nThe NI problem is NP-complete, even for two-agent systems.\nPROOF.\nFor membership of NP, simply guess a normative system η and check that it forms a Nash implementation; since η ⊆ R, guessing can be done in non-deterministic polynomial time, and as\nFigure 4: Reduction for Theorem 4.\nwe argued above, verifying that it forms a Nash implementation can be done in polynomial time.\nFor NP-hardness, we reduce SAT.\nSuppose we are given a SAT instance ϕ over Boolean variables x1,..., xk.\nThen we construct an instance of NI as follows.\nWe create two agents, A = {1, 2}.\nFor each Boolean variable xi we create two Boolean variables, t(xi) and f(xi), and we then define a Kripke structure as shown in Figure 4, with s0 being the only initial state; the arc labelling in Figure 4 gives the α function, and each state is labelled with the propositions that are true in that state.\nFor each Boolean variable xi, we define the formulae xi⊤ and xi⊥ as follows:\nLet ϕ* be the formula obtained from ϕ by systematically substituting xi⊤ for xi.\nEach agent has three goals: γi[0] = ⊤ for both i ∈ {1, 2}, while\nand finally, for both agents, γi[2] being the conjunction of the following formulae:
We denote the multi-agent system so constructed by Mϕ.\nNow, we prove that the SAT instance ϕ is satisfiable iff Mϕ has a Nash implementation normative system: For the ⇒ direction, suppose ϕ is satisfiable, and let X be a satisfying valuation, i.e., a set of Boolean variables making ϕ true.\nWe can extract from X a Nash implementation normative system η as follows: if xi ∈ X, then η includes the arc from s0 to the state in which f(xi) is true, and also includes the arc from s(2k + 1) to the state in which f(xi) is true; if xi ∉ X, then η includes the arc from s0 to the state in which t(xi) is true, and also includes the arc from s(2k + 1) to the state in which t(xi) is true.\nNo other arcs, apart from those so defined, are included in η.\nNotice that η is individually rational for both agents: if they both comply with the normative system, then they will have their γi[2] goals achieved, which they do not in the basic system.\nTo see that η forms a Nash implementation, observe that if either agent defects from η, then neither will have their γi[2] goals achieved: agent 1 strictly prefers (C, C) over (D, C), and agent 2 strictly prefers (C, C) over (C, D).\nFor the ⇐ direction, suppose there exists a Nash implementation normative system η, in which case η ≠ ∅.\nThen ϕ is satisfiable; for suppose not.\nThen the goals γi[2] are not achievable by any normative system (by construction).\nNow, since η must forbid at least one transition, at least one agent would fail to have its γi[1] goal achieved if it complied, so at least one would do better by defecting, i.e., not complying with η.\nBut this contradicts the assumption that η is a Nash implementation, i.e., that (C, C) forms a Nash equilibrium.\nThis result is perhaps of some technical interest beyond the specific concerns of the present paper, since it is related to two problems that are
of wider interest: the complexity of mechanism design [5], and the complexity of computing Nash equilibria [6, 7].\n5.4 Richer Goal Languages\nIt is interesting to consider what happens to the complexity of the problems we consider above if we allow richer languages for goals: in particular, CTL* [9].\nThe main difference is that determining ui(K) in a given multi-agent system M when such a goal language is used involves solving a PSPACE-complete problem (since model checking for CTL* is PSPACE-complete [8]).\nIn fact, it seems that for each of the three problems we consider above, the corresponding problem under the assumption of a CTL* representation for goals is also PSPACE-complete.\nIt cannot be any easier, since determining the utility of a particular Kripke structure involves solving a PSPACE-complete problem.\nTo see membership in PSPACE we can exploit the fact that PSPACE = NPSPACE [12, p. 150], and so we can "guess" the desired normative system, applying a PSPACE verification procedure to check that it has the desired properties.\n6.\nCONCLUSIONS\nSocial norms are supposed to restrict our behaviour.\nOf course, such a restriction does not have to be bad: the fact that an agent's behaviour is restricted may seem a limitation, but there may be benefits if he can assume that others will also constrain their behaviour.\nThe question, then, for an agent is how to be sure that others will comply with a norm; and, for a system designer, how to be sure that the system will behave socially, that is, according to its norm.\nGame theory is a very natural tool to analyse and answer these questions, which involve strategic considerations, and we have proposed a way to translate key questions concerning logic-based normative systems to game theoretical questions.\nWe have proposed a logical framework to reason about such scenarios, and we have given some computational costs for settling some of the main questions about them.\nOf course, our approach is in many
senses open for extension or enrichment.\nAn obvious issue to consider is the complexity of the questions we give for more practical representations of models (cf. [1]), and to consider other classes of allowable goals.","keyphrases":["norm system game","norm system","game","multipl goal of increas prioriti","goal","comput complex","complex","game theoret properti","kripk structur","comput tree logic","logic","ordin util","nash implement","social law","multi-agent system","desir object","constraint","decis make"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","U","M","U","U","M"]} {"id":"I-74","title":"On the relevance of utterances in formal inter-agent dialogues","abstract":"Work on argumentation-based dialogue has defined frameworks within which dialogues can be carried out, established protocols that govern dialogues, and studied different properties of dialogues. This work has established the space in which agents are permitted to interact through dialogues. Recently, there has been increasing interest in the mechanisms agents might use to choose how to act -- the rhetorical manoeuvring that they use to navigate through the space defined by the rules of the dialogue. Key in such considerations is the idea of relevance, since a usual requirement is that agents stay focussed on the subject of the dialogue and only make relevant remarks.
Here we study several notions of relevance, showing how they can be related to both the rules for carrying out dialogues and to rhetorical manoeuvring.","lvl-1":"On the relevance of utterances in formal inter-agent dialogues Simon Parsons1 Peter McBurney2 1 Department of Computer & Information Science Brooklyn College, City University of New York Brooklyn NY 11210 USA {parsons,sklar}@sci.brooklyn.cuny.edu Elizabeth Sklar1 Michael Wooldridge2 2 Department of Computer Science University of Liverpool Liverpool L69 7ZF UK {p.j.mcburney,m.j.wooldridge}@csc.liv.ac.uk ABSTRACT Work on argumentation-based dialogue has defined frameworks within which dialogues can be carried out, established protocols that govern dialogues, and studied different properties of dialogues.\nThis work has established the space in which agents are permitted to interact through dialogues.\nRecently, there has been increasing interest in the mechanisms agents might use to choose how to act - the rhetorical manoeuvring that they use to navigate through the space defined by the rules of the dialogue.\nKey in such considerations is the idea of relevance, since a usual requirement is that agents stay focussed on the subject of the dialogue and only make relevant remarks.\nHere we study several notions of relevance, showing how they can be related to both the rules for carrying out dialogues and to rhetorical manoeuvring.\nCategories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence: Coherence & co-ordination; languages & structures; multiagent systems.\nGeneral Terms Design, languages, theory.\n1.\nINTRODUCTION Finding ways for agents to reach agreements in multiagent systems is an area of active research.\nOne mechanism for achieving agreement is through the use of argumentation - where one agent tries to convince another agent of something during the course of some dialogue.\nEarly examples of argumentation-based approaches to multiagent agreement
include the work of Dignum et al. [7], Kraus [14], Parsons and Jennings [16], Reed [23], Schroeder et al. [25] and Sycara [26].\nThe work of Walton and Krabbe [27], popularised in the multiagent systems community by Reed [23], has been particularly influential in the field of argumentation-based dialogue.\nThis work influenced the field in a number of ways, perhaps most deeply in framing multi-agent interactions as dialogue games in the tradition of Hamblin [13].\nViewing dialogues in this way, as in [2, 21], provides a powerful framework for analysing the formal properties of dialogues, and for identifying suitable protocols under which dialogues can be conducted [18, 20].\nThe dialogue game view overlaps with work on conversation policies (see, for example, [6, 10]), but differs in considering the entire dialogue rather than dialogue segments.\nIn this paper, we extend the work of [18] by considering the role of relevance - the relationship between utterances in a dialogue.\nRelevance is a topic of increasing interest in argumentation-based dialogue because it relates to the scope that an agent has for applying strategic manoeuvring to obtain the outcomes that it requires [19, 22, 24].\nOur work identifies the limits on such rhetorical manoeuvring, showing when it can and cannot have an effect.\n2.\nBACKGROUND We begin by introducing the formal system of argumentation that underpins our approach, as well as the corresponding terminology and notation, all taken from [2, 8, 17].\nA dialogue is a sequence of messages passed between two or more members of a set of agents A.\nAn agent α maintains a knowledge base, Σα, containing formulas of a propositional language L and having no deductive closure.\nAgent α also maintains the set of its past utterances, called the commitment store, CSα.\nWe refer to this as an agent's public knowledge, since it contains information that is shared with other agents.\nIn contrast, the contents of Σα
are private to α.\nNote that in the description that follows, we assume that ⊢ is the classical inference relation, that ≡ stands for logical equivalence, and we use Δ to denote all the information available to an agent.\nThus in a dialogue between two agents α and β, Δα = Σα ∪ CSα ∪ CSβ, so the commitment store CSα can be loosely thought of as a subset of Δα consisting of the assertions that have been made public.\nIn some dialogue games, such as those in [18], anything in CSα is either in Σα or can be derived from it.\nIn other dialogue games, such as those in [2], CSα may contain things that cannot be derived from Σα.\n1006 978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nDefinition 2.1.\nAn argument A is a pair (S, p) where p is a formula of L and S a subset of Δ such that (i) S is consistent; (ii) S ⊢ p; and (iii) S is minimal, so no proper subset of S satisfying both (i) and (ii) exists.\nS is called the support of A, written S = Support(A), and p is the conclusion of A, written p = Conclusion(A).\nThus we talk of p being supported by the argument (S, p).\nIn general, since Δ may be inconsistent, arguments in A(Δ), the set of all arguments which can be made from Δ, may conflict, and we make this idea precise with the notion of undercutting: Definition 2.2.\nLet A1 and A2 be arguments in A(Δ).\nA1 undercuts A2 iff ∃¬p ∈ Support(A2) such that p ≡ Conclusion(A1).\nIn other words, an argument is undercut if and only if there is another argument which has as its conclusion the negation of an element of the support for the first argument.\nTo capture the fact that some beliefs are more strongly held than others, we assume that any set of beliefs has a preference order over it.\nWe consider all information available to an agent, Δ, to be stratified into non-overlapping subsets Δ1, ...
, Δn such that beliefs in Δi are all equally preferred and are preferred over elements in Δj where i > j.\nThe preference level of a nonempty subset S ⊂ Δ, where different elements s ∈ S may belong to different layers Δi, is valued at the highest numbered layer which has a member in S and is referred to as level(S).\nIn other words, S is only as strong as its weakest member.\nNote that the strength of a belief as used in this context is a separate concept from the notion of support discussed earlier.\nDefinition 2.3.\nLet A1 and A2 be arguments in A(Δ).\nA1 is preferred to A2 according to Pref, written A1 Pref A2, iff level(Support(A1)) > level(Support(A2)).\nIf A1 is preferred to A2, we say that A1 is stronger than A2.\nWe can now define the argumentation system we will use: Definition 2.4.\nAn argumentation system is a triple ⟨A(Δ), Undercut, Pref⟩ such that:\n• A(Δ) is a set of the arguments built from Δ,\n• Undercut is a binary relation representing the defeat relationship between arguments, Undercut ⊆ A(Δ) × A(Δ), and\n• Pref is a pre-ordering on A(Δ) × A(Δ).\nThe preference order makes it possible to distinguish different types of relations between arguments: Definition 2.5.\nLet A1, A2 be two arguments of A(Δ).\n• If A2 undercuts A1 then A1 defends itself against A2 iff A1 Pref A2.\nOtherwise, A1 does not defend itself.\n• A set of arguments A defends A1 iff for every A2 that undercuts A1, where A1 does not defend itself against A2, there is some A3 ∈ A such that A3 undercuts A2 and A2 does not defend itself against A3.\nWe write AUndercut,Pref to denote the set of all non-undercut arguments and arguments defending themselves against all their undercutting arguments.\nThe set A(Δ) of acceptable arguments of the argumentation system ⟨A(Δ), Undercut, Pref⟩ is [1] the least fixpoint of a function F where, for A ⊆ A(Δ), F(A) = {(S, p)
∈ A(Δ) | (S, p) is defended by A}.\nDefinition 2.6.\nThe set of acceptable arguments for an argumentation system ⟨A(Δ), Undercut, Pref⟩ is recursively defined as: A(Δ) = ⋃i≥0 Fi(∅) = AUndercut,Pref ∪ [⋃i≥1 Fi(AUndercut,Pref)].\nAn argument is acceptable if it is a member of the acceptable set, and a proposition is acceptable if it is the conclusion of an acceptable argument.\nAn acceptable argument is one which is, in some sense, proven since all the arguments which might undermine it are themselves undermined.\nDefinition 2.7.\nIf there is an acceptable argument for a proposition p, then the status of p is accepted, while if there is not an acceptable argument for p, the status of p is not accepted.\nArgument A is said to affect the status of another argument A′ if changing the status of A will change the status of A′.\n3.\nDIALOGUES Systems like those described in [2, 18] lay down sets of locutions that agents can make to put forward propositions and the arguments that support them, and protocols that define precisely which locutions can be made at which points in the dialogue.\nWe are not concerned with such a level of detail here.\nInstead we are interested in the interplay between arguments that agents put forth.\nAs a result, we will consider only that agents are allowed to put forward arguments.\nWe do not discuss the detail of the mechanism that is used to put these arguments forward - we just assume that arguments of the form (S, p) are inserted into an agent's commitment store where they are then visible to other agents.\nWe then have a typical definition of a dialogue: Definition 3.1.\nA dialogue D is a sequence of moves: m1, m2, ...
, mn.\nA given move mi is a pair ⟨α, Ai⟩ where Ai is an argument that α places into its commitment store CSα.\nMoves in an argumentation-based dialogue typically attack moves that have been made previously.\nWhile, in general, a dialogue can include moves that undercut several arguments, in the remainder of this paper, we will only consider dialogues that put forward moves that undercut at most one argument.\nFor now we place no additional constraints on the moves that make up a dialogue.\nLater we will see how different restrictions on moves lead to different kinds of dialogue.\nThe sequence of arguments put forward in the dialogue is determined by the agents who are taking part in the dialogue, but they are usually not completely free to choose what arguments they make.\nAs indicated earlier, their choice is typically limited by a protocol.\nIf we write the sequence of n moves m1, m2, ... , mn as mn, and denote the empty sequence as m0, then we can define a protocol in the following way: Definition 3.2.\nA protocol P is a function on a sequence of moves mi in a dialogue D that, for all i ≥ 0, identifies a set of possible moves Mi+1 from which the mi+1th move may be drawn: P : mi → Mi+1 In other words, for our purposes here, at every point in a dialogue, a protocol determines a set of possible moves that agents may make as part of the dialogue.\nIf a dialogue D always picks its moves m from the set M identified by protocol P, then D is said to conform to P.
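Definition 3.2's view of a protocol, as a function from the moves made so far to the set of permissible next moves, can be sketched directly. This is an illustrative encoding only: the move names and the toy no-repetition protocol below are invented for the example, not taken from the paper.

```python
def conforms(dialogue, protocol):
    """Definition 3.2, operationally: a dialogue conforms to a
    protocol iff every move was drawn from the set of moves the
    protocol permits given the sequence of moves before it."""
    return all(move in protocol(dialogue[:i])
               for i, move in enumerate(dialogue))

# A toy protocol: any move may be made, provided it has not
# already been made earlier in the dialogue (no repetition).
def no_repeat_protocol(history):
    universe = {'A1', 'A2', 'A3'}  # all moves agents could make
    return universe - set(history)
```

Under `no_repeat_protocol`, the dialogue `['A1', 'A2']` conforms, while `['A1', 'A1']` does not, since by the second move `A1` is no longer in the permitted set.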
Even if a dialogue conforms to a protocol, it is typically the case that the agent engaging in the dialogue has to make a choice of move - it has to choose which of the moves in M to make.\nThis exercise of choice is what we refer to as an agent's use of rhetoric (in its oratorical sense of influencing the thought and conduct of an audience).\nSome of our results will give a sense of how much scope an agent has to exercise rhetoric under different protocols.\nAs arguments are placed into commitment stores, and hence become public, agents can determine the relationships between them.\nIn general, after several moves in a dialogue, some arguments will undercut others.\nWe will denote the set of arguments {A1, A2, ... , Aj} asserted after moves m1, m2, ... , mj of a dialogue to be Aj - the relationship of the arguments in Aj can be described as an argumentation graph, similar to those described in, for example, [3, 4, 9]: Definition 3.3.\nAn argumentation graph AG over a set of arguments A is a directed graph (V, E) such that every vertex v, v ∈ V, denotes one argument A ∈ A, every argument A is denoted by one vertex v, and every directed edge e ∈ E from v to v′ denotes that v undercuts v′.\nWe will use the term argument graph as a synonym for argumentation graph.\nNote that we do not require that the argumentation graph is connected.\nIn other words the notion of an argumentation graph allows for the representation of arguments that do not relate, by undercutting or being undercut, to any other arguments (we will come back to this point very shortly).\nWe adapt some standard graph theoretic notions in order to describe various aspects of the argumentation graph.\nIf there is an edge e from vertex v to vertex v′, then v is said to be the parent of v′ and v′ is said to be the child of v.\nIn a reversal of the usual notion, we define a root of an argumentation graph1 as follows: Definition 3.4.\nA root of an argumentation graph AG = (V, E) is a node v
∈ V that has no children.\nThus a root of a graph is a node to which directed edges may be connected, but from which no directed edges connect to other nodes.\nThus a root is a node representing an argument that is undercut, but which itself does no undercutting.\nSimilarly: Definition 3.5.\nA leaf of an argumentation graph AG = (V, E) is a node v ∈ V that has no parents.\nThus a leaf in an argumentation graph represents an argument that undercuts another argument, but is not itself undercut.\nThus in Figure 1, v is a root, and v′ is a leaf.\n1 Note that we talk of a root rather than the root - as defined, an argumentation graph need not be a tree.\nFigure 1: An example argument graph\nThe reason for the reversal of the usual notions of root and leaf is that, as we shall see, we will consider dialogues to construct argumentation graphs from the roots (in our sense) to the leaves.\nThe reversal of the terminology means that it matches the natural process of tree construction.\nSince, as described above, argumentation graphs are allowed to be not connected (in the usual graph theory sense), it is helpful to distinguish nodes that are connected to other nodes, in particular to the root of the tree.\nWe say that node v is connected to node v′ if and only if there is a path from v to v′.\nSince edges represent undercut relations, the notion of connectedness between nodes captures the influence that one argument may have on another: Proposition 3.1.\nGiven an argumentation graph AG, if there is any argument A, denoted by node v, that affects the status of another argument A′, denoted by v′, then v is connected to v′.\nThe converse does not hold.\nProof.\nGiven Definitions 2.5 and 2.6, the only ways in which A can affect the status of A′ is if A either undercuts A′, or if A undercuts some argument A′′ that undercuts A′, or if A undercuts some A′′ that undercuts some A′′′ that undercuts A′, and so on.\nIn all such cases, a sequence of undercut relations relates the two
arguments, and if they are both in an argumentation graph, this means that they are connected.\nSince the notion of path ignores the direction of the directed arcs, nodes v and v′ are connected whether the edge between them runs from v to v′ or vice versa.\nSince A only undercuts A′ if the edge runs from v to v′, we cannot infer that A will affect the status of A′ from information about whether or not they are connected.\nThe reason that we need the concept of the argumentation graph is that the properties of the argumentation graph tell us something about the set of arguments A the graph represents.\nWhen that set of arguments is constructed through a dialogue, there is a relationship between the structure of the argumentation graph and the protocol that governs the dialogue.\nIt is the extent of the relationship between structure and protocol that is the main subject of this paper.\nTo study this relationship, we need to establish a correspondence between a dialogue and an argumentation graph.\nGiven the definitions we have so far, this is simple: Definition 3.6.\nA dialogue D, consisting of a sequence of moves mn, and an argument graph AG = (V, E) correspond to one another iff ∀mi ∈ mn, the argument Ai that is advanced at move mi is represented by exactly one node v ∈ V, and ∀v ∈ V, v represents exactly one argument Ai that has been advanced by a move mi ∈ mn.\nThus a dialogue corresponds to an argumentation graph if and only if every argument made in the dialogue corresponds to a node in the graph, and every node in the graph corresponds to an argument made in the dialogue.\nThis one-to-one correspondence allows us to consider each node v in the graph to have an index i which is the index of the move in the dialogue that put forward the argument which that node represents.\nThus we can, for example, refer to the third node in the argumentation
graph, meaning the node that represents the argument put forward in the third move of the dialogue.\n4.\nRELEVANCE Most work on dialogues is concerned with what we might call coherent dialogues, that is dialogues in which the participants are, as in the work of Walton and Krabbe [27], focused on resolving some question through the dialogue2.\nTo capture this coherence, it seems we need a notion of relevance to constrain the statements made by agents.\nHere we study three notions of relevance: Definition 4.1.\nConsider a dialogue D, consisting of a sequence of moves mi, with a corresponding argument graph AG.\nThe move mi+1, i > 1, is said to be relevant if one or more of the following hold: R1 Making mi+1 will change the status of the argument denoted by the first node of AG.\nR2 Making mi+1 will add a node vi+1 that is connected to the first node of AG.\nR3 Making mi+1 will add a node vi+1 that is connected to the last node to be added to AG.\nR2-relevance is the form of relevance defined by [3] in their study of strategic and tactical reasoning3.\nR1-relevance was suggested by the notion used in [15], and though it differs somewhat from that suggested there, we believe it captures the essence of its predecessor.\nNote that we only define relevance for the second move of the dialogue onwards because the first move is taken to identify the subject of the dialogue, that is, the central question that the dialogue is intended to answer, and hence it must be relevant to the dialogue, no matter what it is.\nIn assuming this, we focus our attention on the same kind of dialogues as [18].\nWe can think of relevance as enforcing a form of parsimony on a dialogue - it prevents agents from making statements that do not bear on the current state of the dialogue.\nThis promotes efficiency, in the sense of limiting the number of moves in the dialogue, and, as in [15], prevents agents revealing information that they might better keep hidden.\nAnother form of parsimony is to insist
that agents are not allowed to put forward arguments that will be undercut by arguments that have already been made during the dialogue.\nWe therefore distinguish such arguments.\n2 See [11, 12] for examples of dialogues where this is not the case.\n3 We consider such reasoning sub-types of rhetoric.\nDefinition 4.2.\nConsider a dialogue D, consisting of a sequence of moves mi, with a corresponding argument graph AG.\nThe move mi+1 and the argument it puts forward, Ai+1, are both said to be pre-empted if Ai+1 is undercut by some A ∈ Ai.\nWe use the term pre-empted because if such an argument is put forward, it can seem as though another agent anticipated the argument being made, and already made an argument that would render it useless.\nIn the rest of this paper, we will only deal with protocols that permit moves that are relevant, in any of the senses introduced above, and are not allowed to be pre-empted.\nWe call such protocols basic protocols, and dialogues carried out under such protocols basic dialogues.\nThe argument graph of a basic dialogue is somewhat restricted.\nProposition 4.1.\nConsider a basic dialogue D.\nThe argumentation graph AG that corresponds to D is a tree with a single root.\nProof.\nRecall that Definition 3.3 requires only that AG be a directed graph.\nTo show that it is a tree, we have to show that it is acyclic and connected.\nThat the graph is connected follows from the construction of the graph under a protocol that enforces relevance.\nIf the notion of relevance is R3, each move adds a node that is connected to the previous node.\nIf the notion of relevance is R2, then every move adds a node that is connected to the root, and thus is connected to some node in the graph.\nIf the notion of relevance is R1, then every move has to change the status of the argument denoted by the root.\nProposition 3.1 tells us that to affect the status of an argument A′, the node v representing the argument A that is effecting the change has to be
connected to v′, the node representing A′, and so it follows that every new node added as a result of an R1-relevant move will be connected to the argumentation graph. Thus AG is connected.

Since a basic dialogue does not allow moves that are pre-empted, every edge that is added during construction is directed from the node that is added to one already in the graph (thus denoting that the argument A denoted by the added node, v, undercuts the argument A′ denoted by the node to which the connection is made, v′, rather than the other way around). Since every edge that is added is directed from the new node to the rest of the graph, there can be no cycles. Thus AG is a tree.

To show that AG has a single root, consider its construction from the initial node. After m1 the graph has one node, v1, that is both a root and a leaf. After m2, the graph is two nodes connected by an edge, and v1 is now a root and not a leaf; v2 is a leaf and not a root. However the third node is added, the argument earlier in this proof demonstrates that there will be a directed edge from it to some other node, making it a leaf. Thus v1 will always be the only root. The ruling out of pre-empted moves means that v1 will never cease to be a root, and so the argumentation graph will always have one root.

Since every argumentation graph constructed by a basic dialogue is a tree with a single root, this means that the first node of every argumentation graph is the root. Although these results are straightforward to obtain, they allow us to show how the notions of relevance are related.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1009

Proposition 4.2. Consider a basic dialogue D, consisting of a sequence of moves mi, with a corresponding argument graph AG.
1. Every move mi+1 that is R1-relevant is R2-relevant. The converse does not hold.
2. Every move mi+1 that is R3-relevant is R2-relevant. The converse does not hold.
3. Not every move
mi+1 that is R1-relevant is R3-relevant, and not every move mi+1 that is R3-relevant is R1-relevant.

Proof. For 1, consider how move mi+1 can satisfy R1. Proposition 3.1 tells us that if Ai+1 can change the status of the argument denoted by the root v1 (which, as observed above, is the first node) of AG, then vi+1 must be connected to the root. This is precisely what is required to satisfy R2, and the relationship is proved to hold. To see that the converse does not hold, we have to consider what it takes to change the status of r (since Proposition 3.1 tells us that connectedness is not enough to ensure a change of status; if it did, R1- and R2-relevance would coincide). For mi+1 to change the status of the root, it will have to either (1) make the argument A represented by r unacceptable, if it were acceptable before the move, or (2) make it acceptable, if it were unacceptable before the move. Given the definition of acceptability, it can achieve (1) either by directly undercutting the argument represented by r, in which case vi+1 will be directly connected to r by some edge, or by undercutting some argument A′ that is part of the set of non-undercut arguments defending A. In the latter case, vi+1 will be directly connected to the node representing A′ and, by Proposition 4.1, to r. To achieve (2), vi+1 will have to undercut an argument A′ that is either currently undercutting A, or is undercutting an argument that would otherwise defend A.
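The interplay between the three relevance tests can be made concrete with a small prototype. The sketch below is our own illustration, not the paper's formalism: the class and function names are assumptions, status is computed by the usual recursion on a tree of undercuts (an argument is acceptable iff none of its undercutters is acceptable), and "defends itself" strength is ignored.

```python
class ArgumentGraph:
    """Tree of undercuts built move by move; the first node is the root."""

    def __init__(self):
        self.parent = {}    # node -> node it undercuts (None for the root)
        self.children = {}  # node -> nodes that undercut it
        self.order = []     # nodes in the order the moves were made

    def add(self, node, undercuts=None):
        self.parent[node] = undercuts
        self.children[node] = []
        if undercuts is not None:
            self.children[undercuts].append(node)
        self.order.append(node)

    def acceptable(self, node):
        # recursion on a tree: acceptable iff every undercutter is
        # itself unacceptable
        return not any(self.acceptable(c) for c in self.children[node])


def r1_relevant(graph, target):
    """R1: would undercutting `target` change the status of the root?"""
    root = graph.order[0]
    before = graph.acceptable(root)
    graph.add("_trial", undercuts=target)      # try the move...
    after = graph.acceptable(root)
    graph.children[target].remove("_trial")    # ...then undo it
    del graph.parent["_trial"], graph.children["_trial"]
    graph.order.pop()
    return before != after


def r2_relevant(graph, target):
    """R2: the new node attaches to an existing node (hence to the root)."""
    return target in graph.parent


def r3_relevant(graph, target):
    """R3: the new node attaches to the last node added."""
    return target == graph.order[-1]
```

On the chain where a2 undercuts a1 and a3 undercuts a2, undercutting a3 is R1-, R2- and R3-relevant, while undercutting a2 (already defeated by a3) is R2-relevant without being R1-relevant: one instance of the gap in part 1 of Proposition 4.2, though the paper's own example of that gap instead uses an argument that defends itself.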
Now, further consider that mi+1 puts forward an argument Ai+1 that undercuts the argument denoted by some node v′, but this latter argument defends itself against Ai+1. In such a case, the set of acceptable arguments will not change, and so the status of A will not change. Thus a move that is R2-relevant need not be R1-relevant.

For 2, consider that mi+1 can satisfy R3 simply by adding a node that is connected to vi, the last node to be added to AG. By Proposition 4.1, it is connected to r and so is R2-relevant. To see that the converse does not hold, consider that an R2-relevant move can connect to any node in AG. The first part of 3 follows by a similar argument to that we just used (an R1-relevant move does not have to connect to vi, just to some v that is part of the graph), and the second part follows since a move that is R3-relevant may introduce an argument Ai+1 that undercuts the argument Ai put forward by the previous move (and so vi+1 is connected to vi), but finds that Ai defends itself against Ai+1, preventing a change of status at the root.

What is most interesting is not so much the results but why they hold, since this reveals some aspects of the interplay between relevance and the structure of argument graphs. For example, to restate a case from the proof of Proposition 4.2, a move that is R3-relevant by definition has to add a node to the argument graph that is connected to the last node that was added. Since a move that is R2-relevant can add a node that connects anywhere on an argument graph, any move that is R3-relevant will be R2-relevant, but the converse does not hold. It turns out that we can exploit the interplay between structure and relevance that Propositions 4.1 and 4.2 have started to illuminate to establish relationships between the protocols that govern dialogues and the argument graphs constructed during such dialogues. To do this we need to define protocols in such a way that they refer to the structure of the
graph. We have:

Definition 4.3. A protocol is single-path if all dialogues that conform to it construct argument graphs that have only one branch.

Proposition 4.3. A basic protocol P is single-path if, for all i, the set of permitted moves Mi at move i are all R3-relevant. The converse does not hold.

Proof. R3-relevance requires that every node added to the argument graph be connected to the previous node. Starting from the first node this recursively constructs a tree with just one branch, and the relationship holds. The converse does not hold because even if one or more moves in the protocol are R1- or R2-relevant, it may be the case that, because of an agent's rhetorical choice or because of its knowledge, every argument that is chosen to be put forward will undercut the previous argument, and so the argument graph is a one-branch tree.

Looking for more complex kinds of protocol that construct more complex kinds of argument graph, it is an obvious move to turn to:

Definition 4.4. A basic protocol is multi-path if all dialogues that conform to it can construct argument graphs that are trees.

But, on reflection, since any graph with only one branch is also a tree:

Proposition 4.4. Any single-path protocol is an instance of a multi-path protocol.

and, furthermore:

Proposition 4.5. Any basic protocol P is multi-path.

Proof. Immediate from Proposition 4.1.

So the notion of a multi-path protocol does not have much traction. As a result we distinguish multi-path protocols that permit dialogues that can construct trees that have more than one branch as bushy protocols. We then have:

Proposition 4.6. A basic protocol P is bushy if, for some i, the set of permitted moves Mi at move i are all R1- or R2-relevant.

Proof. From Proposition 4.3 we know that if all moves are R3-relevant then we'll get a tree with one branch, and from Proposition 4.1 we know that all basic protocols will build an argument graph that is a tree, so providing we exclude
R3-relevant moves, we will get protocols that can build multi-branch trees.

Of course, since, by Proposition 4.2, any move that is R3-relevant is R2-relevant and can quite possibly be R1-relevant (all that Proposition 4.2 tells us is that there is no guarantee that it will be), all that Proposition 4.6 tells us is that dialogues that conform to bushy protocols may have more than one branch. All we can do is to identify a bound on the number of branches:

Proposition 4.7. Consider a basic dialogue D that includes m moves that are not R3-relevant, and has a corresponding argumentation graph AG. The number of branches in AG is less than or equal to m + 1.

Proof. Since it must connect a node to the last node added to AG, an R3-relevant move can only extend an existing branch. Since they do not have the same restriction, R1- and R2-relevant moves may create a new branch by connecting to a node that is not the last node added. Every such move could create a new branch, and if they do, we will have m branches. If there were R3-relevant moves before any of these new-branch-creating moves, then these m branches are in addition to the initial branch created by the R3-relevant moves, and we have a maximum of m + 1 possible branches.

We distinguish bushy protocols from multi-path protocols, and hence R1- and R2-relevance from R3-relevance, because of the kinds of dialogue that R3-relevance enforces. In a dialogue in which all moves must be R3-relevant, the argumentation graph has a single branch: the dialogue consists of a sequence of arguments each of which undercuts the previous one, and the last move to be made is the one that settles the dialogue. This, as we will see next, means that such a dialogue only allows a subset of all the moves that would otherwise be possible.

5. COMPLETENESS

The above discussion of the difference between dialogues carried out under single-path
and bushy protocols brings us to the consideration of what [18] called predeterminism, but we now prefer to describe using the term completeness. The idea of predeterminism, as described in [18], captures the notion that, under some circumstances, the result of a dialogue can be established without actually having the dialogue: the agents have sufficiently little room for rhetorical manoeuvre that were one able to see the contents of all the Σi of all the αi ∈ A, one would be able to identify the outcome of any dialogue on a given subject4. We develop this idea by considering how the argument graphs constructed by dialogues under different protocols compare to benchmark complete dialogues. We start by developing ideas of what complete might mean. One reasonable definition is that:

Definition 5.1. A basic dialogue D between the set of agents A with a corresponding argumentation graph AG is topic-complete if no agent can construct an argument A that undercuts any argument A′ represented by a node in AG. The argumentation graph constructed by a topic-complete dialogue is called a topic-complete argumentation graph and is denoted AG(D)T.

4 Assuming that the Σi do not change during the dialogue, which is the usual assumption in this kind of dialogue.

A dialogue is topic-complete when no agent can add anything that is directly connected to the subject of the dialogue. Some protocols will prevent agents from making moves even though the dialogue is not topic-complete. To distinguish such cases we have:

Definition 5.2. A basic dialogue D between the set of agents A with a corresponding argumentation graph AG is protocol-complete under a protocol P if no agent can make a move that adds a node to the argumentation graph that is permitted by P. The argumentation graph constructed by a protocol-complete dialogue is called a protocol-complete argumentation graph and is denoted AG(D)P.

Clearly:

Proposition 5.1. Any dialogue D under a basic
protocol P is protocol-complete if it is topic-complete. The converse does not hold in general.

Proof. If D is topic-complete, no agent can make a move that will extend the argumentation graph. This means that no agent can make a move that is permitted by a basic protocol, and so D is also protocol-complete. The converse does not hold since some basic dialogues (under a protocol that only permits R3-relevant moves, for example) will not permit certain moves (like the addition of a node that connects to the root of the argumentation graph after more than two moves) that would be allowed in a topic-complete dialogue.

Corollary 5.1. For a basic dialogue D, AG(D)P is a sub-graph of AG(D)T.

Obviously, from the definition of a sub-graph, the converse of Corollary 5.1 does not hold in general. The important distinction between topic- and protocol-completeness is that the former is determined purely by the state of the dialogue, as captured by the argumentation graph, and is thus independent of the protocol, while the latter is determined entirely by the protocol. Any time that a dialogue ends in a state of protocol-completeness rather than topic-completeness, it is ending when agents still have things to say but can't because the protocol won't allow them to.

With these definitions of completeness, our task is to relate topic-completeness (the property that ensures that agents can say everything that they have to say in a dialogue, which is, in some sense, important) to the notions of relevance we have developed (which determine what agents are allowed to say). When we need very specific conditions to make protocol-complete dialogues topic-complete, it means that agents have lots of room for rhetorical manoeuvre when those conditions are not in force. That is, there are many ways they can bring dialogues to a close before everything that can be said has been said. Where few conditions are required, or conditions are absent, then dialogues between agents
with the same knowledge will always play out the same way, and rhetoric has no place. We have:

Proposition 5.2. A protocol-complete basic dialogue D under a protocol which only allows R3-relevant moves will be topic-complete only when AG(D)T has a single branch in which the nodes are labelled in increasing order from the root.

Proof. Given what we know about R3-relevance, the condition on AG(D)P having a single branch is obvious. This is not a sufficient condition on its own because certain protocols may prevent (through additional restrictions, like strict turn-taking in a multi-party dialogue) all the nodes in AG(D)T, which is not subject to such restrictions, being added to the graph. Only when AG(D)T includes the nodes in the exact order that the corresponding arguments are put forward is it necessary that a topic-complete argumentation graph be constructed.

Given Proposition 5.1, these are the conditions under which dialogues conducted under the notion of R3-relevance will always be predetermined, and given how restrictive the conditions are, such dialogues seem to have plenty of room for rhetoric to play a part. To find similar conditions for dialogues composed of R1- and R2-relevant moves, we first need to distinguish between them. We can do this in terms of the structure of the argumentation graph:

Proposition 5.3. Consider a basic dialogue D, with argumentation graph AG which has root r denoting an argument A. If argument A′, denoted by node v, is an R2-relevant move m, then m is not R1-relevant if and only if:
1. there are two nodes v′ and v′′ on the path between v and r, and the argument denoted by v′ defends itself against the argument denoted by v′′; or
2. there is an argument A′′, denoted by node v′′, that affects the status of A, and the path from v′′ to r has one or more nodes in common with the path from v to r.
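The blocking effect of "defends itself" in the first condition can be checked mechanically. The few lines below are our own minimal encoding, offered as an assumption rather than the paper's notation: strength is recorded as a set of (target, attacker) pairs for which the target defends itself, and such an undercut simply cannot affect its target's status.

```python
def acceptable(children, defends, node):
    """Acceptability on a tree of undercuts with 'defends itself' links.

    children: dict mapping each node to the nodes undercutting it.
    defends:  set of (target, attacker) pairs where the target defends
              itself, so that attack cannot change the target's status.
    """
    # discard undercutters the node defends itself against
    live = [c for c in children[node] if (node, c) not in defends]
    # acceptable iff every remaining undercutter is itself unacceptable
    return not any(acceptable(children, defends, c) for c in live)
```

If a2 undercuts the root a1 but a1 defends itself against a2, the move adding a2 connects to the root (so it is R2-relevant) yet leaves the root's status unchanged (so it is not R1-relevant): the situation the first condition describes.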
Proof. For the first condition, consider that since AG is a tree, v is connected to r. Thus there is a series of undercut relations between A′ and A, and this corresponds to a path through AG. If this path is the only branch in the tree, then A′ will affect the status of A unless the chain of affect is broken by an undercut that can't change the status of the undercut argument because the latter defends itself. For the second condition, as for the first, the only way that A′ cannot affect the status of A is if something is blocking its influence. If this is not due to defending against, it must be because there is some node u on the path that represents an argument whose status is fixed somehow, and that must mean that there is another chain of undercut relations, another branch of the tree, that is incident at u. Since this second branch denotes another chain of arguments, and these affect the status of the argument denoted by u, they must also affect the status of A. Any of these are the A′′ in the condition.

So an R2-relevant move m is not R1-relevant if either its effect is blocked because an argument upstream is not strong enough, or because there is another line of argument that is currently determining the status of the argument at the root. This, in turn, means that if the effect is not due to defending against, then there is an alternative move that is R1-relevant: a move that undercuts A′′ in the second condition above5. We can now show

5 Though whether the agent in question can make such a move is another question.

Proposition 5.4. A protocol-complete basic dialogue D will always be topic-complete under a protocol which only includes R2-relevant moves and allows every R2-relevant move to be made.

The restriction on R2-relevant rules is exactly that for topic-completeness, so a dialogue that has only R2-relevant moves will continue until every argument that any agent can make has been put forward.

Given this, and what we revealed about
R1-relevance in Proposition 5.3, we can see that:

Proposition 5.5. A protocol-complete basic dialogue D under a protocol which only includes R1-relevant moves will be topic-complete if AG(D)T:
1. includes no path with adjacent nodes v, denoting A, and v′, denoting A′, such that A undercuts A′ and A′ is stronger than A; and
2. is such that the nodes in every branch have consecutive indices and no node with degree greater than two is an odd number of arcs from a leaf node.

Proof. The first condition rules out the first condition in Proposition 5.3, and the second deals with the situation that leads to the second condition in Proposition 5.3. The second condition ensures that each branch is constructed in full before any new branch is added, and when a new branch is added, the argument that is undercut as part of the addition will be acceptable, and so the addition will change the status of the argument denoted by that node, and hence the root. With these conditions, every move required to construct AG(D)T will be permitted, and so the dialogue will be topic-complete when every move has been completed.

The second part of this result only identifies one possible way to ensure that the second condition in Proposition 5.3 is met, so the converse of this result does not hold. However, what we have is sufficient to answer the question about predetermination that we started with. For dialogues to be predetermined, every move that is R2-relevant must be made. In such cases every dialogue is topic-complete. If we do not require that all R2-relevant moves are made, then there is some room for rhetoric: the way in which alternative lines of argument are presented becomes an issue. If moves are forced to be R3-relevant, then there is considerable room for rhetorical play.

6. SUMMARY

This paper has studied the different ideas of relevance in argumentation-based dialogue, identifying the relationship between these ideas, and showing how they can impact the extent to
which the way that agents choose moves in a dialogue: what some authors have called the strategy and tactics of a dialogue. This extends existing work on relevance, such as [3, 15], by showing how different notions of relevance can have an effect on the outcome of a dialogue, in particular when they render the outcome predetermined. This connection extends the work of [18], which considered dialogue outcome, but stopped short of identifying the conditions under which it is predetermined.

There are two ways we are currently trying to extend this work, both of which will generalise the results and extend its applicability. First, we want to relax the restrictions that we have imposed: the exclusion of moves that attack several arguments (without which the argument graph can be multiply-connected) and the exclusion of pre-empted moves (without which the argument graph can have cycles). Second, we want to extend the ideas of relevance to cope with moves that do not only add undercutting arguments, but also supporting arguments, thus taking account of bipolar argumentation frameworks [5].

Acknowledgments

The authors are grateful for financial support received from the EC, through project IST-FP6-002307, and from the NSF under grants REC-02-19347 and NSF IIS-0329037. They are also grateful to Peter Stone for a question, now several years old, which this paper has finally answered.

7. REFERENCES

[1] L. Amgoud and C. Cayrol. On the acceptability of arguments in preference-based argumentation framework. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, pages 1-7, 1998.
[2] L. Amgoud, S. Parsons, and N. Maudet. Arguments, dialogue, and negotiation. In W. Horn, editor, Proceedings of the Fourteenth European Conference on Artificial Intelligence, pages 338-342, Berlin, Germany, 2000. IOS Press.
[3] J. Bentahar, M. Mbarki, and B.
Moulin. Strategic and tactic reasoning for communicating agents. In N. Maudet, I. Rahwan, and S. Parsons, editors, Proceedings of the Third Workshop on Argumentation in Multiagent Systems, Hakodate, Japan, 2006.
[4] P. Besnard and A. Hunter. A logic-based theory of deductive arguments. Artificial Intelligence, 128:203-235, 2001.
[5] C. Cayrol, C. Devred, and M.-C. Lagasquie-Schiex. Handling controversial arguments in bipolar argumentation frameworks. In P. E. Dunne and T. J. M. Bench-Capon, editors, Computational Models of Argument: Proceedings of COMMA 2006, pages 261-272. IOS Press, 2006.
[6] B. Chaib-Draa and F. Dignum. Trends in agent communication language. Computational Intelligence, 18(2):89-101, 2002.
[7] F. Dignum, B. Dunin-Kęplicz, and R. Verbrugge. Agent theory for team formation by dialogue. In C. Castelfranchi and Y. Lespérance, editors, Seventh Workshop on Agent Theories, Architectures, and Languages, pages 141-156, Boston, USA, 2000.
[8] P. M. Dung. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77:321-357, 1995.
[9] P. M. Dung, R. A. Kowalski, and F. Toni. Dialectic proof procedures for assumption-based, admissible argumentation. Artificial Intelligence, 170(2):114-159, 2006.
[10] R. A. Flores and R. C. Kremer. To commit or not to commit. Computational Intelligence, 18(2):120-173, 2002.
[11] D. M. Gabbay and J. Woods. More on non-cooperation in Dialogue Logic. Logic Journal of the IGPL, 9(2):321-339, 2001.
[12] D. M. Gabbay and J. Woods. Non-cooperation in Dialogue Logic. Synthese, 127(1-2):161-186, 2001.
[13] C. L. Hamblin. Mathematical models of dialogue. Theoria, 37:130-155, 1971.
[14] S. Kraus, K. Sycara, and A. Evenchik. Reaching agreements through argumentation: a logical model and implementation. Artificial Intelligence, 104(1-2):1-69, 1998.
[15] N. Oren, T. J. Norman, and A.
Preece. Loose lips sink ships: A heuristic for argumentation. In N. Maudet, I. Rahwan, and S. Parsons, editors, Proceedings of the Third Workshop on Argumentation in Multiagent Systems, Hakodate, Japan, 2006.
[16] S. Parsons and N. R. Jennings. Negotiation through argumentation - a preliminary report. In Proceedings of the Second International Conference on Multi-Agent Systems, pages 267-274, 1996.
[17] S. Parsons, M. Wooldridge, and L. Amgoud. An analysis of formal inter-agent dialogues. In 1st International Conference on Autonomous Agents and Multi-Agent Systems. ACM Press, 2002.
[18] S. Parsons, M. Wooldridge, and L. Amgoud. On the outcomes of formal inter-agent dialogues. In 2nd International Conference on Autonomous Agents and Multi-Agent Systems. ACM Press, 2003.
[19] H. Prakken. On dialogue systems with speech acts, arguments, and counterarguments. In Proceedings of the Seventh European Workshop on Logic in Artificial Intelligence, Berlin, Germany, 2000. Springer Verlag.
[20] H. Prakken. Relating protocols for dynamic dispute with logics for defeasible argumentation. Synthese, 127:187-219, 2001.
[21] H. Prakken and G. Sartor. Modelling reasoning with precedents in a formal dialogue game. Artificial Intelligence and Law, 6:231-287, 1998.
[22] I. Rahwan, P. McBurney, and E. Sonenberg. Towards a theory of negotiation strategy. In I. Rahwan, P. Moraitis, and C. Reed, editors, Proceedings of the 1st International Workshop on Argumentation in Multiagent Systems, New York, NY, 2004.
[23] C. Reed. Dialogue frames in agent communications. In Y. Demazeau, editor, Proceedings of the Third International Conference on Multi-Agent Systems, pages 246-253. IEEE Press, 1998.
[24] M. Rovatsos, I. Rahwan, F. Fisher, and G. Weiss. Adaptive strategies for practical argument-based negotiation. In I. Rahwan, P. Moraitis, and C.
Reed, editors, Proceedings of the 1st International Workshop on Argumentation in Multiagent Systems, New York, NY, 2004.
[25] M. Schroeder, D. A. Plewe, and A. Raab. Ultima ratio: should Hamlet kill Claudius. In Proceedings of the 2nd International Conference on Autonomous Agents, pages 467-468, 1998.
[26] K. Sycara. Argumentation: Planning other agents' plans. In Proceedings of the Eleventh Joint Conference on Artificial Intelligence, pages 517-523, 1989.
[27] D. N. Walton and E. C. W. Krabbe. Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning. State University of New York Press, Albany, NY, USA, 1995.

On the relevance of utterances in formal inter-agent dialogues

ABSTRACT

Work on argumentation-based dialogue has defined frameworks within which dialogues can be carried out, established protocols that govern dialogues, and studied different properties of dialogues. This work has established the space in which agents are permitted to interact through dialogues. Recently, there has been increasing interest in the mechanisms agents might use to choose how to act: the rhetorical manoeuvring that they use to navigate through the space defined by the rules of the dialogue. Key in such considerations is the idea of relevance, since a usual requirement is that agents stay focussed on the subject of the dialogue and only make relevant remarks. Here we study several notions of relevance, showing how they can be related to both the rules for carrying out dialogues and to rhetorical manoeuvring.

1. INTRODUCTION

Finding ways for agents to reach agreements in multiagent systems is an area of active research. One mechanism for achieving agreement is through the use of argumentation, where one agent tries to convince another agent of something during the course of some dialogue. Early examples of argumentation-based approaches to multiagent agreement include the work of Dignum et al. [7], Kraus [14], Parsons and Jennings [16], Reed [23], Schroeder et al. [25] and Sycara [26]. The work of Walton and Krabbe [27], popularised in the multiagent systems community by Reed [23], has been particularly influential in the field of argumentation-based dialogue. This work influenced the field in a number of ways, perhaps most deeply in framing multi-agent interactions as dialogue games in the tradition of Hamblin [13]. Viewing dialogues in this way, as in [2, 21], provides a powerful framework for analysing the formal properties of dialogues, and for identifying suitable protocols under which dialogues can be conducted [18, 20]. The dialogue game view overlaps with work on conversation policies (see, for example, [6, 10]), but differs in considering the entire dialogue rather than dialogue segments.

In this paper, we extend the work of [18] by considering the role of relevance: the relationship between utterances in a dialogue. Relevance is a topic of increasing interest in argumentation-based dialogue because it relates to the scope that an agent has for applying strategic manoeuvring to obtain the outcomes that it requires [19, 22, 24]. Our work identifies the limits on such rhetorical manoeuvring, showing when it can and cannot have an effect.

2. BACKGROUND

We begin by introducing the formal system of argumentation that underpins our approach, as well as the corresponding terminology and notation, all taken from [2, 8, 17]. A dialogue is a sequence of messages passed between two or more members of a set of agents A. An agent α maintains a knowledge base, Σα, containing formulas of a propositional language L and having no deductive closure. Agent α also maintains the set of its past utterances, called the "commitment store", CSα. We refer to this as an agent's "public knowledge", since it contains information that is shared with other agents. In contrast, the contents of Σα are "private" to α. Note that in the description that follows, we assume that ⊢ is the classical inference relation, that ≡ stands for logical equivalence, and we use Δ to denote all the information available to an agent. Thus in a dialogue between two agents α and β, Δα = Σα ∪ CSα ∪ CSβ, so the commitment store CSα can be loosely thought of as a subset of Δα consisting of the assertions that have been made public. In some dialogue games, such as those in [18], anything in CSα is either in Σα or can be derived from it. In other dialogue games, such as

978-81-904262-7-5 (RPS) © 2007 IFAAMAS
.\nJoint Conf.\nSecond, we want to extend the ideas of relevance to cope with moves that do not only add undercutting arguments, but also supporting arguments, thus taking account of bipolar argumentation frameworks [5].","lvl-2":"On the relevance of utterances in formal inter-agent dialogues\nABSTRACT\nWork on argumentation-based dialogue has defined frameworks within which dialogues can be carried out, established protocols that govern dialogues, and studied different properties of dialogues.\nThis work has established the space in which agents are permitted to interact through dialogues.\nRecently, there has been increasing interest in the mechanisms agents might use to choose how to act--the rhetorical manoeuvring that they use to navigate through the space defined by the rules of the dialogue.\nKey in such considerations is the idea of relevance, since a usual requirement is that agents stay focussed on the subject of the dialogue and only make relevant remarks.\nHere we study several notions of relevance, showing how they can be related to both the rules for carrying out dialogues and to rhetorical manoeuvring.\n1.\nINTRODUCTION\nFinding ways for agents to reach agreements in multiagent systems is an area of active research.\nOne mechanism for achieving agreement is through the use of argumentation--where one agent tries to convince another agent of something during the course of some dialogue.\nEarly examples of argumentation-based approaches to multiagent agreement\ninclude the work of Dignum et al. [7], Kraus [14], Parsons and Jennings [16], Reed [23], Schroeder et al. 
[25] and Sycara [26]. The work of Walton and Krabbe [27], popularised in the multiagent systems community by Reed [23], has been particularly influential in the field of argumentation-based dialogue. This work influenced the field in a number of ways, perhaps most deeply in framing multi-agent interactions as dialogue games in the tradition of Hamblin [13]. Viewing dialogues in this way, as in [2, 21], provides a powerful framework for analysing the formal properties of dialogues, and for identifying suitable protocols under which dialogues can be conducted [18, 20]. The dialogue game view overlaps with work on conversation policies (see, for example, [6, 10]), but differs in considering the entire dialogue rather than dialogue segments.
In this paper, we extend the work of [18] by considering the role of relevance--the relationship between utterances in a dialogue. Relevance is a topic of increasing interest in argumentation-based dialogue because it relates to the scope that an agent has for applying strategic manoeuvring to obtain the outcomes that it requires [19, 22, 24]. Our work identifies the limits on such rhetorical manoeuvring, showing when it can and cannot have an effect.

2. BACKGROUND

We begin by introducing the formal system of argumentation that underpins our approach, as well as the corresponding terminology and notation, all taken from [2, 8, 17].
A dialogue is a sequence of messages passed between two or more members of a set of agents A. An agent α maintains a knowledge base, Σα, containing formulas of a propositional language L and having no deductive closure. Agent α also maintains the set of its past utterances, called the "commitment store", CSα. We refer to this as an agent's "public knowledge", since it contains information that is shared with other agents. In contrast, the contents of Σα are "private" to α.
Note that in the description that follows, we assume that ⊢ is the classical inference relation, that ≡ stands for logical equivalence, and we use Δ to denote all the information available to an agent. Thus in a dialogue between two agents α and β, Δα = Σα ∪ CSα ∪ CSβ, so the commitment store CSα can be loosely thought of as a subset of Δα consisting of the assertions that have been made public. In some dialogue games, such as those in [18], anything in CSα is either in Σα or can be derived from it. In other dialogue games, such as those in [2], CSα may contain things that cannot be derived from Σα.
An argument A is a pair (S, p): S is called the support of A, written S = Support(A), and p is the conclusion of A, written p = Conclusion(A). Thus we talk of p being supported by the argument (S, p). In general, since Δ may be inconsistent, arguments in A(Δ), the set of all arguments which can be made from Δ, may conflict, and we make this idea precise with the notion of undercutting:
DEFINITION 2.2. Let A1 and A2 be arguments in A(Δ). A1 undercuts A2 iff ∃ ¬p ∈ Support(A2) such that p ≡ Conclusion(A1).
In other words, an argument is undercut if and only if there is another argument which has as its conclusion the negation of an element of the support for the first argument.
To capture the fact that some beliefs are more strongly held than others, we assume that any set of beliefs has a preference order over it. We consider all information available to an agent, Δ, to be stratified into non-overlapping subsets Δ1, ..., Δn such that beliefs in Δi are all equally preferred and are preferred over elements in Δj where i > j. The preference level of a nonempty subset S ⊂ Δ, where different elements s ∈ S may belong to different layers Δi, is valued at the highest numbered layer which has a member in S and is referred to as level(S). In other words, S is only as strong as its weakest member. Note that the strength of a belief as used in this context is a separate concept from the notion of support discussed earlier.
DEFINITION 2.3. Let A1 and A2 be arguments in A(Δ). A1 is preferred to A2 according to Pref, written A1 >>Pref A2, iff level(Support(A1)) > level(Support(A2)).
If A1 is preferred to A2, we say that A1 is stronger than A2. We can now define the argumentation system we will use:
DEFINITION 2.4. An argumentation system is a triple (A(Δ), Undercut, Pref) such that:
• A(Δ) is a set of the arguments built from Δ,
• Undercut is a binary relation representing the defeat relationship between arguments, Undercut ⊆ A(Δ) × A(Δ), and
• Pref is a pre-ordering on A(Δ) × A(Δ).
The preference order makes it possible to distinguish different types of relations between arguments:
• If A2 undercuts A1, then A1 defends itself against A2 iff A1 >>Pref A2. Otherwise, A1 does not defend itself.
• A set of arguments A defends A1 iff for every A2 that undercuts A1, where A1 does not defend itself against A2, there is some A3 ∈ A such that A3 undercuts A2 and A2 does not defend itself against A3.
We write AUndercut,Pref to denote the set of all non-undercut arguments and arguments defending themselves against all their undercutting arguments. The set A(Δ) of acceptable arguments of the argumentation system (A(Δ), Undercut, Pref) is [1] the least fixpoint of a function F:

    F(S) = {A ∈ A(Δ) | A is defended by S}

An argument is acceptable if it is a member of the acceptable set, and a proposition is acceptable if it is the conclusion of an acceptable argument. An acceptable argument is one which is, in some sense, proven, since all the arguments which might undermine it are themselves undermined.

3. DIALOGUES

Systems like those described in [2, 18] lay down sets of locutions that agents can make to put forward propositions and the arguments that support them, and
protocols that define precisely which locutions can be made at which points in the dialogue. We are not concerned with such a level of detail here. Instead we are interested in the interplay between the arguments that agents put forth. As a result, we will consider only that agents are allowed to put forward arguments. We do not discuss the detail of the mechanism that is used to put these arguments forward--we just assume that arguments of the form (S, p) are inserted into an agent's commitment store, where they are then visible to other agents. We then have a typical definition of a dialogue:
DEFINITION 3.1. A dialogue D is a sequence of moves: m1, m2, ..., mn. A given move mi is a pair (α, Ai) where Ai is an argument that α places into its commitment store CSα.
Moves in an argumentation-based dialogue typically attack moves that have been made previously. While, in general, a dialogue can include moves that undercut several arguments, in the remainder of this paper we will only consider dialogues whose moves undercut at most one argument. For now we place no additional constraints on the moves that make up a dialogue. Later we will see how different restrictions on moves lead to different kinds of dialogue.
The sequence of arguments put forward in the dialogue is determined by the agents who are taking part in the dialogue, but they are usually not completely free to choose what arguments they make. As indicated earlier, their choice is typically limited by a protocol. If we write the sequence of n moves m1, m2, ..., mn as m̄n, and denote the empty sequence by m̄0, then we can define a protocol in the following way:
DEFINITION 3.2. A protocol P is a function on a sequence of moves m̄i in a dialogue D that, for all i > 0, identifies a set of possible moves Mi+1 from which the (i+1)th move may be drawn:

    P : m̄i ↦ Mi+1, where mi+1 ∈ Mi+1

In other words, for our purposes here, at every point in a dialogue, a protocol determines a set of possible moves that agents may make as part of the dialogue. If a dialogue D always picks its moves m from the set M identified by protocol P, then D is said to conform to P.
Even if a dialogue conforms to a protocol, it is typically the case that an agent engaging in the dialogue has to make a choice of move--it has to choose which of the moves in M to make. This exercise of choice is what we refer to as an agent's use of rhetoric (in its oratorical sense of "influencing the thought and conduct of an audience"). Some of our results will give a sense of how much scope an agent has to exercise rhetoric under different protocols.
As arguments are placed into commitment stores, and hence become public, agents can determine the relationships between them. In general, after several moves in a dialogue, some arguments will undercut others. We will denote the set of arguments {A1, A2, ..., Aj} asserted after moves m1, m2, ..., mj of a dialogue by Aj; the relationship of the arguments in Aj can be described as an argumentation graph, similar to those described in, for example, [3, 4, 9]:
DEFINITION 3.3. An argumentation graph AG over a set of arguments A is a directed graph (V, E) such that every node v ∈ V represents one argument A ∈ A, and every edge (v, v') ∈ E denotes that the argument represented by v undercuts the argument represented by v'.
We will use the term argument graph as a synonym for "argumentation graph". Note that we do not
require that the argumentation graph is connected. In other words, the notion of an argumentation graph allows for the representation of arguments that do not relate, by undercutting or being undercut, to any other arguments (we will come back to this point very shortly).
We adapt some standard graph-theoretic notions in order to describe various aspects of the argumentation graph. If there is an edge e from vertex v to vertex v', then v is said to be the parent of v' and v' is said to be the child of v. In a reversal of the usual notion, we define a root of an argumentation graph as follows (note that we talk of "a root" rather than "the root"--as defined, an argumentation graph need not be a tree):
DEFINITION 3.4. A root of an argumentation graph AG = (V, E) is a node v ∈ V that has no children.
Thus a root of a graph is a node to which directed edges may be connected, but from which no directed edges connect to other nodes. A root is therefore a node representing an argument that is undercut, but which itself does no undercutting. Similarly:
DEFINITION 3.5. A leaf of an argumentation graph AG = (V, E) is a node v ∈ V that has no parents.
Thus a leaf in an argumentation graph represents an argument that undercuts another argument, but is not itself undercut.
Figure 1: An example argument graph
Thus in Figure 1, v is a root, and v' is a leaf. The reason for the reversal of the usual notions of root and leaf is that, as we shall see, we will consider dialogues to construct argumentation graphs from the roots (in our sense) to the leaves. The reversal of the terminology means that it matches the natural process of tree construction.
Since, as described above, argumentation graphs are allowed to be unconnected (in the usual graph-theory sense), it is helpful to distinguish nodes that are connected to other nodes, in particular to the root of the tree. We say that node v is connected to node v' if and only if there is a path from v to v'. Since edges represent undercut relations, the notion of connectedness between nodes captures the influence that one argument may have on another:
PROPOSITION 3.1. Given an argumentation graph AG, if there is any argument A, denoted by node v, that affects the status of another argument A', denoted by v', then v is connected to v'. The converse does not hold.
PROOF. Given Definitions 2.5 and 2.6, the only ways in which A can affect the status of A' is if A either undercuts A', or if A undercuts some argument A'' that undercuts A', or if A undercuts some A''' that undercuts some A'' that undercuts A', and so on. In all such cases, a sequence of undercut relations relates the two arguments, and if they are both in an argumentation graph, this means that they are connected. Since the notion of path ignores the direction of the directed arcs, nodes v and v' are connected whether the edge between them runs from v to v' or vice versa. Since A only undercuts A' if the edge runs from v to v', we cannot infer that A will affect the status of A' from information about whether or not they are connected.
The reason that we need the concept of the argumentation graph is that the properties of the argumentation graph tell us something about the set of arguments A the graph represents. When that set of arguments is constructed through a dialogue, there is a relationship between the structure of the argumentation graph and the protocol that governs the dialogue. It is the extent of the relationship between structure and protocol that is the main subject of this paper. To study this relationship, we need to establish a correspondence between a dialogue and an argumentation graph. Given the definitions we have so far, this is simple: a dialogue D, consisting of a sequence of moves m̄n, and an argument graph AG = (V, E) correspond to one another iff ∀mi ∈ m̄n, the argument Ai that is advanced at move mi is represented by exactly one node v ∈ V, and ∀v ∈ V, v represents exactly one argument Ai that has been advanced by a move mi ∈ m̄n.
Thus a dialogue corresponds to an argumentation graph if and only if every argument made in the dialogue corresponds to a node in the graph, and every node in the graph corresponds to an argument made in the dialogue. This one-to-one correspondence allows us to consider each node v in the graph to have an index i, which is the index of the move in the dialogue that put forward the argument which that node represents. Thus we can, for example, refer to the "third node" in the argumentation graph, meaning the node that represents the argument put forward in the third move of the dialogue.

4. RELEVANCE

Most work on dialogues is concerned with what we might call coherent dialogues, that is, dialogues in which the participants are, as in the work of Walton and Krabbe [27], focused on resolving some question through the dialogue. To capture this coherence, it seems we need a notion of relevance to constrain the statements made by agents. Here we study three notions of relevance:
DEFINITION 4.1. Consider a dialogue D, consisting of a sequence of moves m̄i, with a corresponding argument graph AG. The move mi+1, i ≥ 1, is said to be relevant if one or more of the following hold:
R1 Making mi+1 will change the status of the argument denoted by the first node of AG.
R2 Making mi+1 will add a node vi+1 that is connected to the first node of AG.
R3 Making mi+1 will add a node vi+1 that is connected to the last node to be added to AG.
R2-relevance is the form of relevance defined by [3] in their study of strategic and tactical reasoning. R1-relevance was suggested by the notion used in [15], and though it differs somewhat from that suggested there, we believe it captures the essence of its predecessor. Note that we only define
relevance for the second move of the dialogue onwards, because the first move is taken to identify the subject of the dialogue, that is, the central question that the dialogue is intended to answer, and hence it must be relevant to the dialogue, no matter what it is. In assuming this, we focus our attention on the same kind of dialogues as [18].
We can think of relevance as enforcing a form of parsimony on a dialogue--it prevents agents from making statements that do not bear on the current state of the dialogue. This promotes efficiency, in the sense of limiting the number of moves in the dialogue, and, as in [15], prevents agents revealing information that they might better keep hidden.
Another form of parsimony is to insist that agents are not allowed to put forward arguments that will be undercut by arguments that have already been made during the dialogue. We therefore distinguish such arguments. We use the term "pre-empted" because if such an argument is put forward, it can seem as though another agent anticipated the argument being made, and already made an argument that would render it useless.
In the rest of this paper, we will only deal with protocols that permit moves that are relevant, in any of the senses introduced above, and that exclude pre-empted moves. We call such protocols basic protocols, and dialogues carried out under such protocols basic dialogues. The argument graph of a basic dialogue is somewhat restricted:
PROPOSITION 4.1. The argumentation graph AG constructed by a basic dialogue is a tree with a single root.
PROOF. Recall that Definition 3.3 requires only that AG be a directed graph. To show that it is a tree, we have to show that it is acyclic and connected.
That the graph is connected follows from the construction of the graph under a protocol that enforces relevance. If the notion of relevance is R3, each move adds a node that is connected to the previous node. If the notion of relevance is R2, then every move adds a node that is connected to the root, and thus is connected to some node in the graph. If the notion of relevance is R1, then every move has to change the status of the argument denoted by the root. Proposition 3.1 tells us that to affect the status of an argument A', the node v representing the argument A that is effecting the change has to be connected to v', the node representing A', and so it follows that every new node added as a result of an R1-relevant move will be connected to the argumentation graph. Thus AG is connected.
Since a basic dialogue does not allow moves that are pre-empted, every edge that is added during construction is directed from the node that is added to one already in the graph (thus denoting that the argument A denoted by the added node, v, undercuts the argument A' denoted by the node to which the connection is made, v', rather than the other way around). Since every edge that is added is directed from the new node to the rest of the graph, there can be no cycles. Thus AG is a tree.
To show that AG has a single root, consider its construction from the initial node. After m1 the graph has one node, v1, that is both a root and a leaf. After m2, the graph is two nodes connected by an edge; v1 is now a root and not a leaf, and v2 is a leaf and not a root. However the third node is added, the argument earlier in this proof demonstrates that there will be a directed edge from it to some other node, making it a leaf. Thus v1 will always be the only root. The ruling out of pre-empted moves means that v1 will never cease to be a root, and so the argumentation graph will always have one root.
Since every argumentation graph constructed by a basic dialogue is a tree with a single root, the first node of every argumentation graph is the root. Although these results are straightforward to obtain, they allow us to show how the notions of relevance are related:
PROPOSITION 4.2.
1. Every move mi+1 that is R1-relevant is R2-relevant. The converse does not hold.
2. Every move mi+1 that is R3-relevant is R2-relevant. The converse does not hold.
3. Not every move mi+1 that is R1-relevant is R3-relevant, and not every move mi+1 that is R3-relevant is R1-relevant.
PROOF. For 1, consider how move mi+1 can satisfy R1. Proposition 3.1 tells us that if Ai+1 can change the status of the argument denoted by the root v1 (which, as observed above, is the first node) of AG, then vi+1 must be connected to the root. This is precisely what is required to satisfy R2, and the relationship is proved to hold. To see that the converse does not hold, we have to consider what it takes to change the status of r (since Proposition 3.1 tells us that connectedness is not enough to ensure a change of status--if it did, R1- and R2-relevance would coincide). For mi+1 to change the status of the root, it will have to (1) make the argument A represented by r unacceptable, if it were acceptable before the move, or (2) make it acceptable, if it were unacceptable before the move. Given the definition of acceptability, it can achieve (1) either by directly undercutting the argument represented by r, in which case vi+1 will be directly connected to r by some edge, or by undercutting some argument A' that is part of the set of non-undercut arguments defending A. In the latter case, vi+1 will be directly connected to the node representing A' and, by Proposition 4.1, to r. To achieve (2), vi+1 will have to undercut an argument A'' that is either currently undercutting A, or is undercutting an argument that would otherwise defend A.
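The two cases just described can be checked computationally. The following sketch is our own illustration, not code from the paper: arguments are opaque labels, `attacks` plays the role of the Undercut relation, and `level` follows the convention of Definition 2.3 as stated above (a higher number means a stronger support).

```python
def acceptable(args, attacks, level):
    """Least fixpoint of F(S) = {A | S defends A} over a set of arguments.

    args:    set of argument labels
    attacks: set of (attacker, target) pairs (the Undercut relation)
    level:   preference level of each argument's support; an argument
             defends itself against an undercutter of strictly lower level.
    """
    def effective(b, a):
        # b's undercut of a succeeds unless a defends itself against b
        return (b, a) in attacks and not level[a] > level[b]

    def defended(s, a):
        # every effective undercutter of a is itself effectively undercut by s
        return all(any(effective(c, b) for c in s)
                   for b in args if effective(b, a))

    s = set()
    while True:
        t = {a for a in args if defended(s, a)}
        if t == s:
            return s
        s = t

# Case (1): the new argument B undercuts the root argument A, and A cannot
# defend itself, so the status of the root changes -- the move is R1-relevant.
assert acceptable({"A"}, set(), {"A": 1}) == {"A"}
assert acceptable({"A", "B"}, {("B", "A")}, {"A": 1, "B": 2}) == {"B"}

# An undercutter C against which A defends itself (lower level) leaves A in
# the acceptable set: the move is R2-relevant but not R1-relevant.
assert acceptable({"A", "C"}, {("C", "A")}, {"A": 1, "C": 0}) == {"A", "C"}
```

The sketch deliberately ignores argument structure (supports and conclusions), since only the Undercut and Pref relations matter for computing status.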
Now, further consider that mi+1 puts forward an argument Ai+1 that undercuts the argument denoted by some node v', but this latter argument defends itself against Ai+1. In such a case, the set of acceptable arguments will not change, and so the status of Ar will not change. Thus a move that is R2-relevant need not be R1-relevant.
For 2, consider that mi+1 can satisfy R3 simply by adding a node that is connected to vi, the last node to be added to AG. By Proposition 4.1, it is connected to r and so is R2-relevant. To see that the converse does not hold, consider that an R2-relevant move can connect to any node in AG.
The first part of 3 follows by a similar argument to the one we just used--an R1-relevant move does not have to connect to vi, just to some v that is part of the graph--and the second part follows since a move that is R3-relevant may introduce an argument Ai+1 that undercuts the argument Ai put forward by the previous move (so that vi+1 is connected to vi), but finds that Ai defends itself against Ai+1, preventing a change of status at the root.
What is most interesting is not so much the results as why they hold, since this reveals some aspects of the interplay between relevance and the structure of argument graphs. For example, to restate a case from the proof of Proposition 4.2, a move that is R3-relevant by definition has to add a node to the argument graph that is connected to the last node that was added. Since a move that is R2-relevant can add a node that connects anywhere on an argument graph, any move that is R3-relevant will be R2-relevant, but the converse does not hold.
It turns out that we can exploit the interplay between structure and relevance that Propositions 4.1 and 4.2 have started to illuminate to establish relationships between the protocols that govern dialogues and the argument graphs constructed during such dialogues. To do this we need to define protocols in such a way that they refer to the structure of the graph. We have:
PROOF. R3-relevance requires that every node added to the argument graph be connected to the previous node. Starting from the first node, this recursively constructs a tree with just one branch, and the relationship holds. The converse does not hold because, even if one or more moves in the protocol are R1- or R2-relevant, it may be the case that, because of an agent's rhetorical choice or because of its knowledge, every argument that is chosen to be put forward will undercut the previous argument, and so the argument graph is a one-branch tree.
Looking for more complex kinds of protocol that construct more complex kinds of argument graph, it is an obvious move to turn to multi-path protocols. The notion of a multi-path protocol, however, does not have much traction, so we distinguish multi-path protocols that permit dialogues that can construct trees with more than one branch as bushy protocols. We then have:
Of course, since, by Proposition 4.2, any move that is R3-relevant is R2-relevant and can quite possibly be R1-relevant (all that Proposition 4.2 tells us is that there is no guarantee that it will be), all that Proposition 4.6 tells us is that dialogues that conform to bushy protocols may have more than one branch. All we can do is to identify a bound on the number of branches.
We distinguish bushy protocols from multi-path protocols, and hence R1- and R2-relevance from R3-relevance, because of the kinds of dialogue that R3-relevance enforces. In a dialogue in which all moves must be R3-relevant, the argumentation graph has a single branch--the dialogue consists of a sequence of arguments, each of which undercuts the previous one, and the last move to be made is the one that settles the dialogue. This, as we will see next, means that such a dialogue only allows a subset of all the moves that would otherwise be possible.

5. COMPLETENESS

The above
discussion of the difference between dialogues carried out under single-path and bushy protocols brings us to the consideration of what [18] called "predeterminism", but which we now prefer to describe using the term "completeness". The idea of predeterminism, as described in [18], captures the notion that, under some circumstances, the result of a dialogue can be established without actually having the dialogue--the agents have sufficiently little room for rhetorical manoeuvre that, were one able to see the contents of all the Σi of all the αi ∈ A, one would be able to identify the outcome of any dialogue on a given subject (assuming that the Σi do not change during the dialogue, which is the usual assumption in this kind of dialogue). We develop this idea by considering how the argument graphs constructed by dialogues under different protocols compare to benchmark complete dialogues. We start by developing ideas of what "complete" might mean. One reasonable definition is that a dialogue is topic-complete when no agent can add anything that is directly connected to the subject of the dialogue. The argumentation graph constructed by a topic-complete dialogue is called a topic-complete argumentation graph and is denoted AG(D)T.
Some protocols will prevent agents from making moves even though the dialogue is not topic-complete. To distinguish such cases, we say that a dialogue is protocol-complete when the protocol permits no further moves. The argumentation graph constructed by a protocol-complete dialogue is called a protocol-complete argumentation graph and is denoted AG(D)P.
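The gap between the two notions can be seen in a toy simulation (entirely our own construction; the strict turn-taking rule stands in for the kind of protocol restriction the paper mentions). Each agent privately holds undercut moves (attacker, target); a move becomes playable once its target has been asserted; the protocol ends the dialogue as soon as the agent whose turn it is has no playable move.

```python
def run(agents, first_argument):
    """Run a toy turn-taking dialogue; return (asserted arguments, topic_complete).

    agents: dict mapping agent name -> list of (attacker, target) moves.
    The dialogue is protocol-complete when the loop stops; it is
    topic-complete only if no agent still holds a playable move.
    """
    graph = [first_argument]              # arguments asserted so far
    stock = {name: set(moves) for name, moves in agents.items()}
    order, turn = list(stock), 0
    while True:
        name = order[turn % len(order)]
        playable = {(a, t) for (a, t) in stock[name] if t in graph}
        if not playable:                  # protocol permits no move: stop
            break
        move = min(playable)              # deterministic choice for the demo
        graph.append(move[0])
        stock[name].remove(move)
        turn += 1
    topic_complete = all(t not in graph
                         for s in stock.values() for (a, t) in s)
    return graph, topic_complete

# beta still holds a playable undercut of "A" when alpha runs out of moves,
# so the dialogue ends protocol-complete but not topic-complete.
graph, tc = run({"alpha": [("B", "A")], "beta": [("C", "B"), ("D", "A")]}, "A")
assert graph == ["A", "B", "C"] and tc is False

# With beta's extra move removed, the same protocol ends topic-complete.
graph, tc = run({"alpha": [("B", "A")], "beta": [("C", "B")]}, "A")
assert tc is True
```

The point of the demo is only that protocol-completeness is a property of the rules, while topic-completeness is a property of the state of the argumentation graph.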
Clearly, every protocol-complete argumentation graph is a sub-graph of the corresponding topic-complete argumentation graph (Corollary 5.1). Obviously, from the definition of a sub-graph, the converse of Corollary 5.1 does not hold in general. The important distinction between topic- and protocol-completeness is that the former is determined purely by the state of the dialogue (as captured by the argumentation graph) and is thus independent of the protocol, while the latter is determined entirely by the protocol. Any time a dialogue ends in a state of protocol-completeness rather than topic-completeness, it ends while the agents still have things to say but cannot say them, because the protocol does not allow it. With these definitions of completeness, our task is to relate topic-completeness, the property which ensures that agents can say everything that they have to say in a dialogue and which is thus, in some sense, important, to the notions of relevance we have developed, which determine what agents are allowed to say. When we need very specific conditions to make protocol-complete dialogues topic-complete, it means that agents have considerable room for rhetorical manoeuvre when those conditions are not in force; that is, there are many ways in which they can bring a dialogue to a close before everything that can be said has been said. Where few conditions are required, or conditions are absent, dialogues between agents with the same knowledge will always play out the same way, and rhetoric has no place. We have the following (Proposition 5.1):

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

PROOF. Given what we know about R3-relevance, the condition on AG_P(D) having a single branch is obvious. This is not a sufficient condition on its own, because certain protocols may prevent (through additional restrictions, like strict turn-taking in a multi-party dialogue) all the nodes in AG_T(D), which is not subject to such restrictions, from being added to the graph. Only when AG_T(D) includes the nodes in the exact order in which the corresponding arguments are put forward is it necessarily the case that a topic-complete argumentation graph will be constructed. Given Proposition 5.1, these are the conditions under which dialogues conducted under the notion of R3-relevance will always be predetermined, and given how restrictive the conditions are, such dialogues seem to have plenty of room for rhetoric to play a part. To find similar conditions for dialogues composed of R1- and R2-relevant moves, we first need to distinguish between them. We can do this in terms of the structure of the argumentation graph (Proposition 5.3): an R2-relevant move, adding a node v that denotes an argument A, is not R1-relevant if either:
1. there are two nodes v' and v'' on the path between v and the root r, and the argument denoted by v' defends itself against the argument denoted by v''; or
2. there is an argument A'', denoted by node v'', that affects the status of A, and the path from v'' to r has one or more nodes in common with the path from v to r.
PROOF. For the first condition, consider that since AG is a tree, v is connected to r.
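Setting the proof aside for a moment, the role of "defends itself" in the first condition can be illustrated with a toy status computation over a tree of undercuts. The encoding below (integer strengths, with a strict comparison deciding whether an undercut is effective) is a hypothetical simplification, not the paper's formal semantics:

```python
# Toy illustration: an undercut only "affects" its target if the target does
# not defend itself (here: the attacker must be strictly stronger). This is a
# hypothetical encoding, not the paper's formal definition of argument status.

def status(node, children, strength):
    """True (IN) iff no effective undercutter of `node` is itself IN."""
    for c in children.get(node, []):
        effective = strength[c] > strength[node]   # else `node` defends itself
        if effective and status(c, children, strength):
            return False
    return True

children = {'root': ['a'], 'a': ['b']}   # b undercuts a, a undercuts root

weak_b = {'root': 1, 'a': 2, 'b': 1}     # a defends itself against b
print(status('root', children, weak_b))    # False: a stays IN, so root is OUT

strong_b = {'root': 1, 'a': 2, 'b': 3}   # b's undercut is effective
print(status('root', children, strong_b))  # True: b knocks a OUT, root is IN
```

In the first case, adding b is connected to the root but changes nothing at the root's status, which is exactly the situation the first condition describes.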
Thus, there is a series of undercut relations between A and A', and this corresponds to a path through AG. If this path is the only branch in the tree, then A will affect the status of A' unless the chain of "affect" is broken by an undercut that cannot change the status of the undercut argument, because the latter defends itself. For the second condition, as for the first, the only way that A' cannot affect the status of A is if something is blocking its influence. If this is not due to "defending against", it must be because there is some node u on the path that represents an argument whose status is fixed somehow, and that must mean that there is another chain of undercut relations, another branch of the tree, that is incident at u. Since this second branch denotes another chain of arguments, and these affect the status of the argument denoted by u, they must also affect the status of A. Any of these are the A'' in the condition. So an R2-relevant move m is not R1-relevant if either its effect is blocked because an argument upstream is not strong enough, or because there is another line of argument that is currently determining the status of the argument at the root. This, in turn, means that if the effect is not due to "defending against", then there is an alternative move that is R1-relevant: a move that undercuts A'' in the second condition above. We can now show that the restriction imposed by R2-relevance is exactly that required for topic-completeness, so a dialogue that contains only R2-relevant moves will continue until every argument that any agent can make has been put forward. Given this, and what we revealed about R1-relevance in Proposition 5.3, we can see that:

PROPOSITION 5.5. A protocol-complete basic dialogue D under a protocol which only includes R1-relevant moves will be topic-complete if AG_T(D):
1. includes no path with adjacent nodes v, denoting A, and v', denoting A', such that A undercuts A' and A' is stronger than A; and
2. is such that
the nodes in every branch have consecutive indices, and no node with degree greater than two is an odd number of arcs from a leaf node.
PROOF. The first condition rules out the first condition in Proposition 5.3, and the second deals with the situation that leads to the second condition in Proposition 5.3. The second condition ensures that each branch is constructed in full before any new branch is added; when a new branch is added, the argument that is undercut as part of the addition will be acceptable, and so the addition will change the status of the argument denoted by that node, and hence of the root. With these conditions, every move required to construct AG_T(D) will be permitted, and so the dialogue will be topic-complete when every move has been completed. The second part of this result identifies only one possible way to ensure that the second condition in Proposition 5.3 is met, so the converse of this result does not hold. However, what we have is sufficient to answer the question about "predetermination" with which we started. For dialogues to be predetermined, every move that is R2-relevant must be made; in such cases every dialogue is topic-complete. If we do not require that all R2-relevant moves are made, then there is some room for rhetoric: the way in which alternative lines of argument are presented becomes an issue. If moves are forced to be R3-relevant, then there is considerable room for rhetorical play.

6. SUMMARY

This paper has studied different ideas of relevance in argumentation-based dialogue, identifying the relationships between these ideas and showing how they can affect the extent to which agents can choose the moves they make in a dialogue (what some authors have called the strategy and tactics of a dialogue). This extends existing work on relevance, such as [3, 15], by showing how different notions of relevance can have an effect on the outcome of a dialogue, in particular when they render the outcome
predetermined. This connection extends the work of [18], which considered dialogue outcomes but stopped short of identifying the conditions under which they are predetermined. There are two ways in which we are currently trying to extend this work, both of which will generalise the results and extend their applicability. First, we want to relax the restrictions that we have imposed: the exclusion of moves that attack several arguments (without which the argument graph can be multiply-connected) and the exclusion of pre-empted moves (without which the argument graph can have cycles). Second, we want to extend the ideas of relevance to cope with moves that do not only add undercutting arguments, but also supporting arguments, thus taking account of bipolar argumentation frameworks [5].

On the Benefits of Cheating by Self-Interested Agents in Vehicular Networks

As more and more cars are equipped with GPS and Wi-Fi transmitters, it becomes easier to design systems that will allow cars to interact autonomously with each other, e.g., regarding traffic on the roads. Indeed, car manufacturers are already equipping their cars with such devices. Though these systems are currently proprietary, we envision a natural evolution where agent applications will be developed for vehicular systems, e.g., to improve car routing in dense urban areas. Nonetheless, this new technology and agent applications may lead to the emergence of self-interested car owners, who will care more about their own welfare than the social welfare of their peers. These car owners will try to manipulate their agents such that they transmit false data to their peers.
Using a simulation environment, which models a real transportation network in a large city, we demonstrate the benefits achieved by self-interested agents if no counter-measures are implemented.

Raz Lin and Sarit Kraus, Computer Science Department, Bar-Ilan University, Ramat-Gan, Israel {linraz,sarit}@cs.biu.ac.il
Yuval Shavitt, School of Electrical Engineering, Tel-Aviv University, Israel shavitt@eng.tau.ac.il

Categories and Subject Descriptors: I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - Intelligent agents
General Terms: Experimentation

1. INTRODUCTION

As technology advances, more and more cars are being equipped with devices which enable them to act as autonomous agents. An important advancement in this respect is the introduction of ad-hoc communication networks (such as Wi-Fi), which enable the exchange of information between cars, e.g., for locating road congestions [1]
and optimal routes [15], or improving traffic safety [2]. Vehicle-To-Vehicle (V2V) communication is already offered on board by some car manufacturers, enabling collaboration between different cars on the road. For example, GM's proprietary algorithm [6], called the threat assessment algorithm, constantly calculates, in real time, other vehicles' positions and speeds, and enables messaging other cars when a collision is imminent; Honda, too, has begun testing a system in which vehicles talk with each other and with the highway system itself [7]. In this paper, we investigate the attraction of being a selfish agent in vehicular networks. That is, we investigate the benefits achieved by car owners who tamper with on-board devices and incorporate in them their own self-interested agents, which act for their benefit. We build on the notion of Gossip Networks, introduced by Shavitt and Shay [15], in which the agents can obtain road congestion information by gossiping with peer agents using ad-hoc communication. We recognize two typical behaviors that the self-interested agents could embark upon in the context of vehicular networks. In the first behavior, described in Section 4, the objective of the self-interested agents is to maximize their own utility, expressed by their average journey duration on the road. This situation can be modeled in real life by car owners whose aim is to reach their destination as fast as possible, and who would like to have their way free of other cars. To this end they will let their agents cheat the other agents by injecting false information into the network. This is achieved by reporting heavy traffic values for the roads on their route to other agents in the network, in the hope of making the other agents believe that the route is jammed, causing them to choose a different route. The second type of behavior, described in Section 5, is modeled by the self-interested agents' objective to cause disorder in the network, more
than they are interested in maximizing their own utility. This kind of behavior could be generated, for example, by vandals or terrorists, who aim to cause as much mayhem in the network as possible. We note that the introduction of self-interested agents to the network would most probably motivate other agents to try to detect these agents in order to minimize their effect. This is similar, though in a different context, to the problem introduced by Lamport et al. [8] as the Byzantine Generals Problem. However, the introduction of mechanisms to deal with self-interested agents is costly and time consuming. In this paper we focus mainly on the attractiveness of selfish behavior by these agents, while we also provide some insights into the possibility of detecting self-interested agents and minimizing their effect.

978-81-904262-7-5 (RPS) © 2007 IFAAMAS

To demonstrate the benefits achieved by self-interested agents, we have used a simulation environment which models the transportation network in a central part of a large real city. The simulation environment is further described in Section 3. Our simulations provide insights into the benefits of cheating by self-interested agents. Our findings can motivate future research in this field aimed at minimizing the effect of selfish agents. The rest of this paper is organized as follows. In Section 2 we review related work in the field of self-interested agents and V2V communication. We then formally describe our environment and simulation settings in Section 3. Sections 4 and 5 describe the different behaviors of the self-interested agents and our findings. Finally, we conclude the paper with open questions and future research directions.

2. RELATED WORK

In their seminal paper, Lamport et al.
[8] describe the Byzantine Generals problem, in which processors need to handle malfunctioning components that give conflicting information to different parts of the system. They also present a model in which not all agents are connected, so that an agent cannot send a message to all the other agents. Dolev et al. [5] build on this problem and analyze the number of faulty agents that can be tolerated while still eventually reaching the right conclusion about true data. Similar work is presented by Minsky et al. [11], who discuss techniques for constructing gossip protocols that are resilient to up to t malicious host failures. As opposed to the above works, our work focuses on vehicular networks, in which the agents constantly roam the network and exchange data. Also, the domain of transportation networks introduces dynamic data, as the load on the roads is subject to change. In addition, the system in transportation networks has a feedback mechanism, since the load on the roads depends on the reports and the movements of the agents themselves. Malkhi et al.
[10] present a gossip algorithm for propagating information in a network of processors in the presence of malicious parties. Their algorithm prevents the spreading of spurious gossip and diffuses genuine data. This is done in time logarithmic in the number of processes and linear in the number of corrupt parties. Nevertheless, their work assumes that the network is static and that the agents are static as well (they discuss a network of processors). This is not true for transportation networks. For example, in our model, agents might gossip about the heavy traffic load of a specific road, which is currently jammed, yet this information might be false several minutes later, leaving the agents to speculate whether the spreading agents are indeed malicious or not. In addition, as the agents are constantly moving, an agent cannot choose with whom he interacts and exchanges data. In the context of analyzing the data and deciding whether the data is true or not, researchers have focused on distributed reputation systems and on decision mechanisms for deciding whether or not to share data. Yu and Singh [18] build a social network of agents' reputations. Every agent keeps a list of its neighbors, which can change over time, and computes the trustworthiness of other agents by updating the current values with testimonies obtained from reliable referral chains. After a bad experience with another agent, every agent decreases the rating of the 'bad' agent and propagates this bad experience throughout the network, so that other agents can update their ratings accordingly. This approach might be implemented in our domain to allow gossip agents to identify self-interested agents and thus minimize their effect. However, the implementation of such a mechanism is an expensive addition to the infrastructure of autonomous agents in transportation networks. This is mainly due to the dynamic nature of the list of neighbors in transportation networks. Thus, not only does
it require maintaining the neighbors' list, since the neighbors change frequently, but it is also harder to build a good reputation system. Leckie et al. [9] focus on the issue of when to share information between the agents in the network. Their domain involves monitoring distributed sensors. Each agent monitors a subset of the sensors and evaluates a hypothesis based on the local measurements of its sensors. If an agent believes that a hypothesis is sufficiently likely, it exchanges this information with the other agents. In their domain, the goal of all the agents is to reach a global consensus about the likelihood of the hypothesis. In our domain, however, as the agents constantly move, they have many samples, which they exchange with each other. The data might also vary (e.g., a road might be reported as jammed, but a few minutes later it could be free), making it harder to decide whether to trust the agent who sent the data. Moreover, an agent might lie about only a subset of its samples, making it even harder to detect its cheating. Some work has been done in the context of gossip networks and transportation networks regarding the spreading and dissemination of data. Datta et al.
[4] focus on information dissemination in mobile ad-hoc networks (MANETs). They propose an autonomous gossiping algorithm for an infrastructure-less mobile ad-hoc networking environment. Their autonomous gossiping algorithm uses a greedy mechanism to spread data items in the network: the data items are spread to immediate neighbors that are interested in the information, while avoiding those that are not. The decision as to which node is interested in the information is made by the data item itself, using heuristics. However, their work concentrates on the movement of the data itself, not on the agents who propagate the data. This differs from our scenario, in which each agent maintains the data it has gathered, while the agent itself roams the road and is responsible (and has the capabilities) for spreading the data to other agents in the network. Das et al. [3] propose a cooperative strategy for content delivery in vehicular networks. In their domain, peers download a file from a mesh and exchange pieces of the file among themselves. We, on the other hand, are interested in vehicular networks in which there is no rule forcing the agents to cooperate among themselves. Shibata et al. [16] propose a method by which cars cooperatively and autonomously collect traffic jam statistics in order to estimate the arrival time at the destination of each car. The communication is based on IEEE 802.11, without using a fixed infrastructure on the ground. While we use the same domain, we focus on a different problem: Shibata et al. [16] mainly focus on efficiently broadcasting the data between agents (e.g., avoiding duplicates and communication overhead), whereas we focus on the case where agents are not cooperative in nature, and on how selfish agents affect other agents and the network load. Wang et al.
[17] also assert, in the context of wireless networks, that individual agents are likely to do what is most beneficial for their owners, and will act selfishly. They design a protocol for communication in networks in which all agents are selfish. Their protocol motivates every agent to maximize its profit only when it behaves truthfully (a mechanism of incentive compatibility). However, the domain of wireless networks is quite different from that of transportation networks. In a wireless network, the wireless terminal is required to contribute its local resources to transmit data. Thus, Wang et al. [17] use a payment mechanism, which attaches costs to terminals when transmitting data, and thus enables them to maximize their utility when transmitting data instead of acting selfishly. In the context of transportation networks, by contrast, constructing such a mechanism is not a straightforward task, as self-interested agents and regular gossip agents might incur the same cost when transmitting data; the difference between the two types of agents lies only in the credibility of the data they exchange. In the next section, we describe our transportation network model and the gossiping between the agents. We also describe the different agents in our system.

3. MODEL AND SIMULATIONS

We first describe the formal transportation network model, and then the simulation design.

3.1 Formal Model

Following Shavitt and Shay [15] and Parshani [13], the transportation network is represented by a directed graph G(V, E), where V is the set of vertices representing junctions, and E is the set of edges representing roads. An edge e ∈ E is associated with a weight w > 0, which specifies the time it takes to traverse the road associated with that edge. The roads' weights vary in time according to the network (traffic) load. Each car, which is associated with an autonomous agent, is given a pair of origin and destination
points (vertices). A journey is defined as the (not necessarily simple) path taken by an agent between the origin vertex and the destination vertex. We assume that there is always a path between a source and a destination. A journey length is defined as the sum of the weights of all the edges constituting this path. Every agent has to travel between its origin and destination points and aims to minimize its journey length. Initially, agents are ignorant about the state of the roads. Regular agents are only capable of gathering information about the roads as they traverse them. However, we assume that some agents have means of inter-vehicle communication (e.g., IEEE 802.11) with a given communication range, which enables them to communicate with other agents that have the same device. These agents are referred to as gossip agents. Since the communication range is limited, the exchange of information using gossiping is done in one of two ways: (a) between gossip agents passing one another, or (b) between gossip agents located at the same junction. We assume that each agent stores the most recent information it has received or gathered about the edges in the network. A subset of the gossip agents are those agents who are self-interested and manipulate the devices for their own benefit. We refer to these agents as self-interested agents; a detailed description of their behavior is given in Sections 4 and 5.

3.2 Simulation Design

Building on [13], the network in our simulations replicates a central part of a large city, and consists of 50 junctions and 150 roads, which is approximately the number of main streets in the city. Each simulation consists of 6 iterations. The basic time unit of the iteration is a step, which is equivalent to about 30 seconds. Each iteration simulates six hours of movement. The average number of cars passing through the network during an iteration is about 70,000, and the average number of cars in the network at a specific time
unit is about 3,500 cars. In each iteration the same agents are used, with the same origin and destination points, and the data collected in earlier iterations is preserved for future iterations (referred to as the history of the agent). This allows us to simulate, to some extent, a daily routine in the transportation network (e.g., a working week). Each of the experiments described below is run with 5 different traffic scenarios. The traffic scenarios differ from one another in the initial loads of the roads and the designated routes of the agents (cars) in the network. For each scenario 5 simulations are run, creating a total of 25 simulations per experiment. It has been shown by Parshani et al. [13, 14] that information propagation in the network is very efficient when the percentage of gossiping agents is 10% or more. Yet, due to congestion caused by too many cars rushing to what is reported as the less congested part of the network, 20-30% gossiping agents leads to the most efficient routing results in their experiments. Thus, in our simulations we focus only on settings in which the percentage of gossip agents is 20%. The simulations were run with different percentages of self-interested agents. To gain statistical significance we ran each simulation with changes in the set of gossip agents and the set of self-interested agents. In order to obtain a common ordinal scale, the results were normalized: the normalized value was calculated by comparing each agent's result to its result when the same scenario was run with no self-interested agents. This was done for all of the iterations. Using the normalized values enabled us to see how much worse (or better) each agent performed compared to the basic setting. For example, if the average journey length of a certain agent in iteration 1 with no self-interested agents was 50, and the length was 60 in the same scenario and iteration in which self-interested agents
were involved, then the normalized value for that agent would be 60/50 = 1.2. More details regarding the simulations are given in Sections 4 and 5.

4. SPREADING LIES, MAXIMIZING UTILITY

In the first set of experiments we investigated the benefits achieved by self-interested agents whose aim was to minimize their own journey length. The self-interested agents adopted a cheating approach, in which they sent false data to their peers. In this section we first describe the simulations with the self-interested agents. Then, we model the scenario as a game with two types of agents, and prove that the equilibrium result can only be achieved when there is no efficient exchange of gossiping information in the network.

4.1 Modeling the Self-Interested Agents' Behavior

While the gossip agents gather data and send it to other agents, the self-interested agents' behavior is modeled as follows:
1. Calculate the shortest path from origin to destination.
2. Communicate the following data to other agents:
(a) If a road is not on the agent's route, send the true data about it (e.g., data about the road received from other agents).
(b) For all roads on the agent's route which the agent has not yet traversed, send a random high weight.
Basically, the self-interested agent acts the same as a gossip agent: it collects data regarding the weights of the roads (either by traversing a road or by receiving the data from other agents) and sends the data it has collected to other agents. However, the self-interested agent acts differently when a road is on its route. Since the agent's goal is to reach its destination as fast as possible, the agent falsely reports that all the roads on its route are heavily congested. This is done in order to free the path for itself, by making other agents recalculate their paths, this time without including roads on the
self-interested agent's route. To this end, for all the roads on its route which the agent has not yet passed, the agent generates a random weight that is above the average weight of the roads in the network. It then associates these new weights with the roads on its route and sends them to the other agents. While an agent could also divert cars from its route by falsely reporting congested roads parallel to its route as free, this behavior is not very likely, since other agents attempting to use those roads would discover the mistake within a short time and spread the true congestion values. On the other hand, if an agent manages to persuade other agents not to use a road, it will be harder for them to detect that the said road is not congested. In addition, to avoid being influenced by its own lies and by other lies spreading in the network, all self-interested agents ignore data received about roads with heavy traffic (note that data about roads that are not heavily trafficked is not ignored). (In other simulations we have run, in which there were several real congestions in the network, we indeed saw that even when the roads were jammed, the self-interested agents were less affected if they ignored all reported heavy traffic, since by doing so they also discarded all the lies roaming the network.) In the next subsection we describe the simulation results involving the self-interested agents.

4.2 Simulation Results

To test the benefits of cheating by the self-interested agents we ran several experiments. In the first set of experiments, we created a scenario in which a small group of self-interested agents spread lies about the same route, and tested its effect on the journey length of all the agents in the network.

Table 1: Normalized journey length values, self-interested agents with the same route

Iteration | Self-Interested Agents | Gossip - SR | Gossip - Others | Regular Agents
1 | 1.38 | 1.27 | 1.06 | 1.06
2 | 0.95 | 1.56 | 1.18 | 1.14
3 | 1.00 | 1.86 | 1.28 | 1.17
4 | 1.06 | 2.93 | 1.35 | 1.16
5 | 1.13 | 2.00 | 1.40 | 1.17
6 | 1.08 | 2.02 | 1.43 | 1.18

Thus, several cars which had the same origin and destination points were designated as self-interested agents. In this experiment we selected only 6 agents to be part of the group of self-interested agents, as we wanted to investigate the effect achieved by only a small number of agents. In each simulation in this experiment, 6 different agents were randomly chosen to form the group of self-interested agents, as described above. In addition, one road on the route of these agents was randomly selected to be partially blocked, letting only one car through at each time step. About 8,000 agents were randomly selected as regular gossip agents, and the other 32,000 agents were designated as regular agents. We analyzed the average journey length of the self-interested agents as opposed to the average journey length of other regular gossip agents traveling along the same route. Table 1 summarizes the normalized results for the self-interested agents, the gossip agents (those having the same origin and destination points as the self-interested agents, denoted Gossip - SR, and all other gossip agents, denoted Gossip - Others), and the regular agents, as a function of the iteration number. We can see from the results that the first time the self-interested agents traveled the route while spreading false data about the roads, the lies did not help them (using the paired t-test we show that those agents had significantly lower journey lengths in the scenario in which they did not spread any lies, with p < 0.01). This is mainly because the lies do not bypass the self-interested agent and reach other cars that are ahead of the self-interested car on the same route; thus, spreading the lies in the first iteration does not help the self-interested agent free the route it is about to travel. Only when the self-interested agents had repeated their
journey in the next iteration (iteration 2) did it help them significantly (p = 0.04). The reason is that other gossip agents had received this data and used it to recalculate their shortest paths, thus avoiding entering the roads about which the self-interested agents had spread false congestion information. It is also interesting to note the large value attained by the self-interested agents in the first iteration. This is mainly due to several self-interested agents who entered the jammed road. This situation occurred because the self-interested agents ignored all heavy traffic data, and thus ignored the fact that the road was jammed. As they started spreading lies about this road, more cars shifted away from this route, making the road free in future iterations.

Table 2: Normalized journey length values, spreading lies for a beneficiary agent

Iteration | Beneficiary Agent | Gossip - SR | Gossip - Others | Regular Agents
1 | 1.10 | 1.05 | 0.94 | 1.11
2 | 1.09 | 1.14 | 0.99 | 1.14
3 | 1.04 | 1.19 | 1.02 | 1.14
4 | 1.03 | 1.26 | 1.03 | 1.14
5 | 1.05 | 1.32 | 1.05 | 1.12
6 | 0.92 | 1.40 | 1.06 | 1.11

However, we also recall that the self-interested agents ignore all information about heavily congested roads. Thus, when the network becomes congested, more self-interested cars are affected, since they might enter jammed roads which they would otherwise not have entered. This can be seen, for example, in iterations 4-6, in which the normalized value of the self-interested agents increased above 1.00. Using the paired t-test to compare these values with the values achieved by these agents when no lies are used, we see that there is no significant difference between the two scenarios. As opposed to the gossip agents, we can see how little effect the self-interested agents have on the regular agents. Compared to the gossip agents on the same route, whose journeys were as much as 193% longer when
self-interested agents are introduced, the average journey length of the regular agents has increased by only about 15%. This result is even lower than the effect on other gossip agents in the entire network. Since we noticed that cheating by the self-interested agents does not benefit them in the first iteration, we devised another set of experiments. In the second set of experiments, the self-interested agents have the objective of helping another agent, who is supposed to enter the network some time after the self-interested agent entered. We refer to the latter agent as the beneficiary agent. Just like a self-interested agent, the beneficiary agent also ignores all data regarding heavy traffic. In real life this can be modeled, for example, by a husband who would like to help his wife find a faster route to her destination. Table 2 summarizes the normalized values for the different agents. As in the first set of experiments, 5 simulations were run for each scenario, with a total of 25 simulations. In each of these simulations, one agent was randomly selected as a self-interested agent, and then another agent, with the same origin as the self-interested agent, was randomly selected as the beneficiary agent. The other 8,000 and 32,000 agents were designated as regular gossip agents and regular agents, respectively. We can see that as the iterations advance, the normalized value for the beneficiary agent decreases. In this scenario, as in the previous one, in the first iterations the beneficiary agent not only fails to avoid the jammed roads, since he ignores all heavy traffic data, but he also does not benefit from the lies spread by the self-interested agent. This is due to the fact that the lies are not yet incorporated by other gossip agents. Thus, if we compare the average journey length in the first iteration when lies are spread and when there are no lies, the average is significantly lower when there are no lies (p < 0.03). On the other
hand, if we compare the average journey length over all of the iterations, there is no significant difference between the two settings. Still, in most of the iterations, the average journey length of the beneficiary agent is longer than in the case when no lies are spread. We can also see the impact on the other agents in the system. While the gossip agents that are not on the route of the beneficiary agent are virtually unaffected by the self-interested agent, those on the route, as well as the regular agents, are affected and have higher normalized values. That is, even with just one self-interested car, both the gossip agents that follow the same route as the lies spread by the self-interested agents and the other regular agents increase their journey length by more than 14%. In our third set of experiments we examined a setting in which there was an increasing number of agents, and the agents did not necessarily have the same origin and destination points. To model this we randomly selected self-interested agents, whose objective was to minimize their average journey length, assuming the cars were repeating their journeys (that is, more than one iteration was made). As opposed to the first set of experiments, in this set the self-interested agents were selected randomly, and we did not enforce the constraint that they all have the same origin and destination points. As in the previous sets of experiments we ran 5 different simulations per scenario. In each simulation, runs were made with different numbers of self-interested agents: 0 (no self-interested agents), 1, 2, 4, 8, and 16. Each self-interested agent adopted the behavior modeled in Section 4.1. Figure 1 shows the normalized value achieved by the self-interested agents as a function of their number. The figure shows these values for iterations 2-6. The first iteration is intentionally not shown, as we assume repeated journeys. Also, we have seen in the previous set of experiments
and we have provided explanations as to why the self-interested agents do not gain much from their behavior in the first iteration.

Figure 1: Self-interested agents' normalized values as a function of the number of self-interested agents (iterations 2-6).

Using these simulations we examined what the threshold could be for the number of randomly selected self-interested agents that still allows them to benefit from their selfish behavior. We can see that with up to 8 self-interested agents, the average normalized value is below 1; that is, they benefit from their malicious behavior. In the case of one self-interested agent there is a significant difference between the average journey length when the agent spreads lies and when no lies are spread (p < 0.001), while when there are 2, 4, 8 and 16 self-interested agents there is no significant difference. Yet, as the number of self-interested agents increases, the normalized value also increases. In such cases, the normalized value is larger than 1, and the self-interested agents' journey length becomes significantly higher than their journey length in cases where there are no self-interested agents in the system. In the next subsection we analyze the scenario as a game and show that in equilibrium the exchange of gossiping between the agents becomes inefficient.

4.3 When Gossiping is Inefficient

We continued and modeled our scenario as a game, in order to find the equilibrium. There are two possible types of agents: (a) regular gossip agents, and (b) self-interested agents. Each of these agents is a representative of its group, and thus all agents in the same group have similar behavior. We note that
the advantage of using gossiping in transportation networks is that it allows the agents to detect anomalies in the network (e.g., traffic jams) and to quickly adapt to them by recalculating their routes [14]. We also assume that the objective of the self-interested agents is to minimize their own journey length, and thus they spread lies along their routes, as described in Section 4.1. We further assume that sophisticated methods for identifying the self-interested agents or managing reputation are not used. This is mainly due to the complexity of incorporating and maintaining such mechanisms, as well as to the dynamics of the network, in which interactions between different agents are frequent, agents may leave the network, and data about a road might change as time progresses (e.g., a road might be reported by a regular gossip agent as free at a given time, yet it may currently be jammed due to heavy traffic).
Let Tavg be the average time it takes to traverse an edge in the transportation network (that is, the average load of an edge), and let Tmax be the maximum time it takes to traverse an edge. We investigate the game in which the self-interested and the regular gossip agents can choose the following actions. The self-interested agents can choose how much to lie; that is, they can choose how long (not necessarily the true duration) to report that it takes to traverse certain roads. Since the objective of the self-interested agents is to spread messages as though some roads are jammed, the traversal time they report is obviously larger than the average time. We denote the time the self-interested agents spread as Ts, such that Tavg ≤ Ts ≤ Tmax. The simulations described above showed that the agents are less affected if they discard the heavy traffic values. Thus, the regular gossip cars, attempting to mitigate the effect of the liars, can choose a strategy of ignoring abnormal congestion values above a
certain threshold, Tg. Obviously, Tavg ≤ Tg ≤ Tmax. In order to prevent the gossip agents from detecting the lies and simply discarding those values, the self-interested agents send lies in a given range, [Ts, Tmax], with an inverse geometric distribution, that is, the higher the T value, the higher its frequency.
We now construct the utility function for each type of agent, which is defined by the values of Ts and Tg. If the self-interested agents spread traversal times higher than or equal to the regular gossip cars' threshold, they will not benefit from those lies; thus, the utility value of the self-interested agents in this case is 0. On the other hand, if the self-interested agents spread a traversal time which is lower than the threshold, they will gain a positive utility value. From the regular gossip agents' point of view, if they accept messages from the self-interested agents, then they incorporate the lies in their calculation and thus lose utility points. On the other hand, if they discard the false values the self-interested agents send, that is, they do not incorporate the lies, they gain utility. Formally, we use us to denote the utility of the self-interested agents and ug to denote the utility of the regular gossip agents. We also denote the strategy profile in the game as {Ts, Tg}. The utility functions are defined as:

    us = 0               if Ts ≥ Tg
         Ts - Tavg + 1   if Ts < Tg          (1)

    ug = Tg - Tavg       if Ts ≥ Tg
         Ts - Tg         if Ts < Tg          (2)

We are interested in finding the Nash equilibrium. We recall from [12] that a Nash equilibrium is a strategy profile in which no player has anything to gain by deviating from his strategy, given that the other agent follows his strategy profile. Formally, let (S, u) denote the game, where S is the set of strategy profiles and u is the set of utility functions. When each agent i ∈ {regular gossip, self-interested} chooses a strategy Ti, resulting in a strategy profile T
= (Ts, Tg), agent i obtains a utility of ui(T). A strategy profile T* ∈ S is a Nash equilibrium if no deviation in strategy by any single agent is profitable, that is, if for all i, ui(T*) ≥ ui(Ti, T*-i). In other words, (Ts, Tg) is a Nash equilibrium if the self-interested agents have no other value Ts' such that us(Ts', Tg) > us(Ts, Tg), and similarly for the gossip agents. We now have the following theorem.

Theorem 4.1. (Tavg, Tavg) is the only Nash equilibrium.

Proof. First we show that (Tavg, Tavg) is a Nash equilibrium. Assume, by contradiction, that the gossip agents choose another value Tg' > Tavg. Then ug(Tavg, Tg') = Tavg - Tg' < 0, whereas ug(Tavg, Tavg) = 0. Thus, the regular gossip agents have no incentive to deviate from this strategy. The self-interested agents also have no incentive to deviate: again assume, by contradiction, that the self-interested agents choose another value Ts' > Tavg. Then us(Ts', Tavg) = 0, while us(Tavg, Tavg) = 0.
We now show that the above solution is unique, i.e., that any other tuple (Ts, Tg), such that Tavg < Tg ≤ Tmax and Tavg < Ts ≤ Tmax, is not a Nash equilibrium. We have three cases. In the first, Tavg < Tg < Ts ≤ Tmax. Thus, us(Ts, Tg) = 0 and ug(Ts, Tg) = Tg - Tavg. In this case, the regular gossip agents have an incentive to deviate and choose another strategy Tg + 1, since by doing so they increase their own utility: ug(Ts, Tg + 1) = Tg + 1 - Tavg. In the second case we have Tavg < Ts < Tg ≤ Tmax. Thus, ug(Ts, Tg) = Ts - Tg < 0. Here the regular gossip agents have an incentive to deviate and choose another strategy Tg - 1, in which their utility value is higher: ug(Ts, Tg - 1) = Ts - Tg + 1. In the last case we have Tavg < Ts = Tg ≤ Tmax. Thus, us(Ts, Tg) = 0. In this case, the self-interested agents have an incentive to deviate and choose another strategy Tg - 1, in which their utility value is higher: us(Tg - 1, Tg) = Tg - 1 - Tavg + 1 = Tg - Tavg > 0.
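The equilibrium claim can also be checked by brute force over a small integer strategy grid, using utility functions (1) and (2) directly; the values Tavg = 5 and Tmax = 10 below are illustrative choices, not parameters of our network:

```python
from itertools import product

T_AVG, T_MAX = 5, 10  # illustrative integer traversal times

def u_s(ts, tg):
    # Utility of the self-interested agents, Eq. (1).
    return 0 if ts >= tg else ts - T_AVG + 1

def u_g(ts, tg):
    # Utility of the regular gossip agents, Eq. (2).
    return tg - T_AVG if ts >= tg else ts - tg

strategies = range(T_AVG, T_MAX + 1)

def is_nash(ts, tg):
    # No unilateral deviation may strictly increase either agent's utility.
    no_s_gain = all(u_s(d, tg) <= u_s(ts, tg) for d in strategies)
    no_g_gain = all(u_g(ts, d) <= u_g(ts, tg) for d in strategies)
    return no_s_gain and no_g_gain

equilibria = [p for p in product(strategies, strategies) if is_nash(*p)]
print(equilibria)  # the only profile found is (T_AVG, T_AVG)
```

The enumeration finds exactly one equilibrium profile, (Tavg, Tavg), matching Theorem 4.1.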
Table 3: Normalized journey length values for the first iteration

Self-Interested   Self-Interested   Gossip   Regular
Agents Number     Agents            Agents   Agents
1                 0.98              1.01     1.05
2                 1.09              1.02     1.05
4                 1.07              1.02     1.05
8                 1.06              1.04     1.05
16                1.03              1.08     1.06
32                1.07              1.17     1.08
50                1.12              1.28     1.1
64                1.14              1.4      1.13
80                1.15              1.5      1.14
100               1.17              1.63     1.16

Table 4: Normalized journey length values for all iterations

Self-Interested   Self-Interested   Gossip   Regular
Agents Number     Agents            Agents   Agents
1                 0.98              1.02     1.06
2                 1.0               1.04     1.07
4                 1.0               1.08     1.07
8                 1.01              1.33     1.11
16                1.02              1.89     1.17
32                1.06              2.46     1.25
50                1.13              2.24     1.29
64                1.21              2.2      1.32
80                1.21              2.13     1.27
100               1.26              2.11     1.27

The above theorem proves that the equilibrium point is reached only when the self-interested agents report traversal times for certain edges equal to the average time, while the regular gossip agents discard all data regarding roads associated with the average time or higher. Thus, at this equilibrium point the exchange of gossiping information between agents is inefficient, as the gossip agents are unable to detect any anomalies in the network. In the next section we describe another scenario for the self-interested agents, in which they are not concerned with their own utility, but rather interested in maximizing the average journey length of the other gossip agents.

5. SPREADING LIES, CAUSING CHAOS

Another possible behavior that can be adopted by self-interested agents is characterized by the goal of causing disorder in the network. This can be achieved, for example, by maximizing the average journey length of all agents, even at the cost of maximizing their own journey length. To understand the vulnerability of the gossip-based transportation support system, we ran 5 different simulations for each scenario.
In each simulation, different agents were randomly chosen (using a uniform distribution) to act as gossip agents, and among them self-interested agents were chosen. Each self-interested agent behaved in the same manner as described in Section 4.1. Every simulation consisted of 11 runs, with each run comprising a different number of self-interested agents: 0 (no self-interested agents), 1, 2, 4, 8, 16, 32, 50, 64, 80 and 100. Also, in each run the number of self-interested agents was increased incrementally. For example, the run with 50 self-interested agents consisted of all the self-interested agents that were used in the run with 32 self-interested agents, plus an additional 18 self-interested agents. Tables 3 and 4 summarize the normalized journey length for the self-interested agents, the regular gossip agents and the regular (non-gossip) agents. Table 3 summarizes the data for the first iteration and Table 4 summarizes the data for the average of all iterations. Figure 2 shows the changes in the normalized values for the regular gossip agents and the regular agents, as a function of the iteration number. Similar to the results in our first set of experiments, described in Section 4.2, we can see that randomly selected self-interested agents who follow different randomly selected routes do not benefit from their malicious behavior (that is, their average journey length does not decrease). However, when only one self-interested agent is involved, it does benefit from the malicious behavior, even in the first iteration. The results also indicate that the regular gossip agents are more sensitive to malicious behavior than the regular agents: the average journey length for the gossip agents increases significantly (e.g., with 32 self-interested agents the average journey length for the gossip agents was 146% higher than in the setting with no self-interested agents at all, as opposed to an increase of only 25% for the regular agents).
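As we read them, the normalized values in Tables 1-5 are a group's average journey length in a scenario divided by its average in the corresponding run with no self-interested agents. A sketch of that computation, with made-up numbers:

```python
def normalized(scenario, baseline):
    # Normalized journey length: scenario average over baseline average.
    # Values above 1.0 mean the group travels longer than in the lie-free run.
    return (sum(scenario) / len(scenario)) / (sum(baseline) / len(baseline))

# Illustrative numbers only: a gossip group averaging 29.3 steps under lies
# vs. 10.0 steps in the baseline yields 2.93, i.e. "193% more".
value = normalized([29.0, 29.6], [10.0, 10.0])
print(round(value, 2))
```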
In contrast, these results also indicate that the self-interested agents do not succeed in causing a significant load on the network through their malicious behavior.

Figure 2: Gossip and regular agents' normalized values as a function of the iteration, for 32 and 100 self-interested agents.

Since the goal of the self-interested agents in this case is to cause disorder in the network rather than to use the lies for their own benefit, the question arises as to why the self-interested agents would send lies about their own routes only. We hypothesized that if they all sent lies about the same major roads, the damage they might inflict on the entire network would be larger than if each of them sent lies about its own route. To examine this hypothesis, we designed another set of experiments, in which all the self-interested agents spread lies about the same 13 main roads in the network. However, the results show a considerably smaller impact on the other gossip and regular agents in the network.

Table 5: Normalized journey length values for all iterations; network with congestions

Self-Interested   Self-Interested   Gossip   Regular
Agents Number     Agents            Agents   Agents
1                 1.04              1.02     1.22
2                 1.06              1.04     1.22
4                 1.04              1.06     1.23
8                 1.07              1.15     1.26
16                1.09              1.55     1.39
32                1.12              2.25     1.56
50                1.24              2.25     1.60
64                1.28              2.47     1.63
80                1.50              2.41     1.64
100               1.69              2.61     1.75

The average normalized value for the gossip agents in these simulations was only about 1.07, as opposed to 1.7 in the original scenario. When analyzing the results we
saw that although the false data was spread, it did not cause the other gossip cars to change their routes. The main reason was that the lies were spread about roads that were not on the routes of the self-interested agents. Thus, it took longer for the data to reach agents on the main roads, and by the time those agents reached the relevant roads the data was too old to be incorporated in their calculations. We also examined the impact of sending lies in order to cause chaos when there is already congestion in the network. To this end, we simulated a network in which 13 main roads are jammed. The behavior of the self-interested agents is as described in Section 4.1, and the self-interested agents spread lies about their own routes. The simulation results, detailed in Table 5, show that there is a greater incentive for the self-interested agents to cheat when the network is already congested, as their cheating causes more damage to the other agents in the network. For example, whereas the average journey length of the regular agents increased by only about 15% in the original scenario, in which the network was not congested, in this scenario the average journey length of the agents increased by about 60%.

6. CONCLUSIONS

In this paper we investigated the benefits achieved by self-interested agents in vehicular networks. Using simulations we investigated two behaviors that might be adopted by self-interested agents: (a) trying to minimize their own journey length, and (b) trying to cause chaos in the network. Our simulations indicate that with both behaviors the self-interested agents have only limited success in achieving their goal, even if no counter-measures are taken. This is in contrast to the greater impact inflicted by self-interested agents in other domains (e.g., e-commerce). Some reasons for this are the special characteristics of vehicular networks and their dynamic nature. While the self-interested agents spread lies, they cannot choose the agents with
whom they will interact. Also, by the time their lies reach other agents, they might have become irrelevant, as more recent data has reached the same agents. Motivated by the simulation results, future research in this field will focus on modeling different behaviors of the self-interested agents, which might cause more damage to the network. Another research direction would be to find ways of minimizing the effect of selfish agents by using distributed reputation or other measures.

7. REFERENCES

[1] A. Bejan and R. Lawrence. Peer-to-peer cooperative driving. In Proceedings of ISCIS, pages 259-264, Orlando, USA, October 2002.
[2] I. Chisalita and N. Shahmehri. A novel architecture for supporting vehicular communication. In Proceedings of VTC, pages 1002-1006, Canada, September 2002.
[3] S. Das, A. Nandan, and G. Pau. Spawn: A swarming protocol for vehicular ad-hoc wireless networks. In Proceedings of VANET, pages 93-94, 2004.
[4] A. Datta, S. Quarteroni, and K. Aberer. Autonomous gossiping: A self-organizing epidemic algorithm for selective information dissemination in mobile ad-hoc networks. In Proceedings of IC-SNW, pages 126-143, Maison des Polytechniciens, Paris, France, June 2004.
[5] D. Dolev, R. Reischuk, and H. R. Strong. Early stopping in Byzantine agreement. JACM, 37(4):720-741, 1990.
[6] GM. Threat assessment algorithm. http://www.nhtsa.dot.gov/people/injury/research/pub/acas/acas-fieldtest/, 2000.
[7] Honda. http://world.honda.com/news/2005/c050902.html.
[8] L. Lamport, R. Shostak, and M. Pease. The Byzantine generals problem. In Advances in Ultra-Dependable Distributed Systems, N. Suri, C. J. Walter, and M. M. Hugue (Eds.). IEEE Computer Society Press, 1982.
[9] C. Leckie and R. Kotagiri. Policies for sharing distributed probabilistic beliefs. In Proceedings of ACSC, pages 285-290, Adelaide, Australia, 2003.
[10] D. Malkhi, E. Pavlov, and Y. Sella. Gossip with malicious parties. Technical Report 2003-9, School of Computer Science and Engineering, The Hebrew University of Jerusalem, Israel, March 2003.
[11] Y. M. Minsky and F. B. Schneider. Tolerating malicious gossip. Distributed Computing, 16(1):49-68, February 2003.
[12] M. J. Osborne and A. Rubinstein. A Course in Game Theory. MIT Press, Cambridge, MA, 1994.
[13] R. Parshani. Routing in gossip networks. Master's thesis, Department of Computer Science, Bar-Ilan University, Ramat-Gan, Israel, October 2004.
[14] R. Parshani, S. Kraus, and Y. Shavitt. A study of gossiping in transportation networks. Submitted for publication, 2006.
[15] Y. Shavitt and A. Shay. Optimal routing in gossip networks. IEEE Transactions on Vehicular Technology, 54(4):1473-1487, July 2005.
[16] N. Shibata, T. Terauchi, T. Kitani, K. Yasumoto, M. Ito, and T. Higashino. A method for sharing traffic jam information using inter-vehicle communication. In Proceedings of V2VCOM, USA, 2006.
[17] W. Wang, X.-Y. Li, and Y. Wang. Truthful multicast routing in selfish wireless networks. In Proceedings of MobiCom, pages 245-259, USA, 2004.
[18] B. Yu and M. P.
Singh. A social mechanism of reputation management in electronic communities. In Proceedings of CIA, 2000.
algorithm\", constantly calculates, in real time, other vehicles' positions and speeds, and enables messaging other cars when a collision is imminent; Also, Honda has began testing its system in which vehicles talk with each other and with the highway system itself [7].\nIn this paper, we investigate the attraction of being a selfish agent in vehicular networks.\nThat is, we investigate the benefits achieved by car owners, who tamper with on-board devices and incorporate their own self-interested agents in them, which act for their benefit.\nWe build on the notion of Gossip Networks, introduced by Shavitt and Shay [15], in which the agents can obtain road congestion information by gossiping with peer agents using ad-hoc communication.\nWe recognize two typical behaviors that the self-interested agents could embark upon, in the context of vehicular networks.\nIn the first behavior, described in Section 4, the objective of the self-interested agents is to maximize their own utility, expressed by their average journey duration on the road.\nThis situation can be modeled in real life by car owners, whose aim is to reach their destination as fast as possible, and would like to have their way free of other cars.\nTo this end they will let their agents cheat the other agents, by injecting false information into the network.\nThis is achieved by reporting heavy traffic values for the roads on their route to other agents in the network in the hope of making the other agents believe that the route is jammed, and causing them to choose a different route.\nThe second type of behavior, described in Section 5, is modeled by the self-interested agents' objective to cause disorder in the network, more than they are interested in maximizing their own utility.\nThis kind of behavior could be generated, for example, by vandalism or terrorists, who aim to cause as much mayhem in the network as possible.\nWe note that the introduction of self-interested agents to the network, would 
most probably motivate other agents to try and detect these agents in order to minimize their effect.\nThis is similar, though in a different context, to the problem introduced by Lamport et al. [8] as the Byzantine Generals Problem.\nHowever, the introduction of mechanisms to deal with self-interested agents is costly and time consuming.\nIn this paper we focus mainly on the attractiveness of selfish behavior by these agents, while we also provide some insights\n978-81-904262-7-5 (RPS) c ~ 2007 IFAAMAS\n2.\nRELATED WORK\nIn their seminal paper, Lamport et al. [8] describe the Byzantine Generals problem, in which processors need to handle malfunctioning components that give conflicting information to different parts of the system.\nThey also present a model in which not all agents are connected, and thus an agent cannot send a message to all the other agents.\nDolev et al. [5] has built on this problem and has analyzed the number of faulty agents that can be tolerated in order to eventually reach the right conclusion about true data.\nSimilar work is presented by Minsky et al. [11], who discuss techniques for constructing gossip protocols that are resilient to up to t malicious host failures.\nAs opposed to the above works, our work focuses on vehicular networks, in which the agents are constantly roaming the network and exchanging data.\nAlso, the domain of transportation networks introduces dynamic data, as the load of the roads is subject to change.\nIn addition, the system in transportation networks has a feedback mechanism, since the load in the roads depends on the reports and the movement of the agents themselves.\nMalkhi et al. 
[10] present a gossip algorithm for propagating information in a network of processors, in the presence of malicious parties.\nTheir algorithm prevents the spreading of spurious gossip and diffuses genuine data.\nThis is done in time, which is logarithmic in the number of processes and linear in the number of corrupt parties.\nNevertheless, their work assumes that the network is static and also that the agents are static (they discuss a network of processors).\nThis is not true for transportation networks.\nFor example, in our model, agents might gossip about heavy traffic load of a specific road, which is currently jammed, yet this information might be false several minutes later, leaving the agents to speculate whether the spreading agents are indeed malicious or not.\nIn addition, as the agents are constantly moving, each agent cannot choose with whom he interacts and exchanges data.\nIn the context of analyzing the data and deciding whether the data is true or not, researchers have focused on distributed reputation systems or decision mechanisms to decide whether or not to share data.\nYu and Singh [18] build a social network of agents' reputations.\nEvery agent keeps a list of its neighbors, which can be changed over time, and computes the trustworthiness of other agents by updating the current values of testimonies obtained from reliable referral chains.\nAfter a bad experience with another agent every agent decreases the rating of the' bad' agent and propagates this bad experience throughout the network so that other agents can update their ratings accordingly.\nThis approach might be implemented in our domain to allow gossip agents to identify self-interested agents and thus minimize their effect.\nHowever, the implementation of such a mechanism is an expensive addition to the infrastructure of autonomous agents in transportation networks.\nThis is mainly due to the dynamic nature of the list of neighbors in transportation networks.\nThus, not only does it 
require maintaining the neighbors' list, since the neighbors change frequently, but it is also harder to build a good reputation system.\nLeckie et al. [9] focus on the issue of when to share information between the agents in the network.\nTheir domain involves monitoring distributed sensors.\nEach agent monitors a subset of the sensors and evaluates a hypothesis based on the local measurements of its sensors.\nIf the agent believes that a hypothesis is sufficient likely he exchanges this information with the other agents.\nIn their domain, the goal of all the agents is to reach a global consensus about the likelihood of the hypothesis.\nIn our domain, however, as the agents constantly move, they have many samples, which they exchange with each other.\nAlso, the data might also vary (e.g., a road might be reported as jammed, but a few minutes later it could be free), thus making it harder to decide whether to trust the agent, who sent the data.\nMoreover, the agent might lie only about a subset of its samples, thus making it even harder to detect his cheating.\nSome work has been done in the context of gossip networks or transportation networks regarding the spreading of data and its dissemination.\nDatta et al. 
[4] focus on information dissemination in mobile ad-hoc networks (MANET).\nThey propose an autonomous gossiping algorithm for an infrastructure-less mobile ad-hoc networking environment.\nTheir autonomous gossiping algorithm uses a greedy mechanism to spread data items in the network.\nThe data items are spread to immediate neighbors that are interested in the information, and avoid ones that are not interested.\nThe decision which node is interested in the information is made by the data item itself, using heuristics.\nHowever, their work concentrates on the movement of the data itself, and not on the agents who propagate the data.\nThis is different from our scenario in which each agent maintains the data it has gathered, while the agent itself roams the road and is responsible (and has the capabilities) for spreading the data to other agents in the network.\nDas et al. [3] propose a cooperative strategy for content delivery in vehicular networks.\nIn their domain, peers download a file from a mesh and exchange pieces of the file among themselves.\nWe, on the other hand, are interested in vehicular networks in which there is no rule forcing the agents to cooperate among themselves.\nShibata et al. [16] propose a method for cars to cooperatively and autonomously collect traffic jam statistics to estimate arrival time to destinations for each car.\nThe communication is based on IEEE 802.11, without using a fixed infrastructure on the ground.\nWhile we use the same domain, we focus on a different problem.\nShibata et al. [16] mainly focus on efficiently broadcasting the data between agents (e.g., avoid duplicates and communication overhead), as we focus on the case where agents are not cooperative in\n328 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nnature, and on how selfish agents affect other agents and the network load.\nWang et al. 
[17] also assert, in the context of wireless networks, that individual agents are likely to do what is most beneficial for their owners, and will act selfishly.\nThey design a protocol for communication in networks in which all agents are selfish.\nTheir protocol motivates every agent to maximize its profit only when it behaves truthfully (a mechanism of incentive compatibility).\nHowever, the domain of wireless networks is quite different from the domain of transportation networks.\nIn the wireless network, the wireless terminal is required to contribute its local resources to transmit data.\nThus, Wang et al. [17] use a payment mechanism, which attaches costs to terminals when transmitting data, and thus enables them to maximize their utility when transmitting data, instead of acting selfishly.\nUnlike this, in the context of transportation networks, constructing such a mechanism is not quite a straightforward task, as self-interested agents and regular gossip agents might incur the same cost when transmitting data.\nThe difference between the two types of agents exists only in the credibility of the data they exchange.\nIn the next section, we will describe our transportation network model and gossiping between the agents.\nWe will also describe the different agents in our system.\n3.\nMODEL AND SIMULATIONS\n3.1 Formal Model\n3.2 Simulation Design\n4.\nSPREADING LIES, MAXIMIZING UTILITY\n4.1 Modeling the Self-Interested Agents' Behavior\n4.2 Simulation Results\n4.3 When Gossiping is Inefficient\n5.\nSPREADING LIES, CAUSING CHAOS\n6.\nCONCLUSIONS\nIn this paper we investigated the benefits achieved by self-interested agents in vehicular networks.\nUsing simulations we investigated two behaviors that might be taken by self-interested agents: (a) trying to minimize their journey length, and (b) trying to cause chaos in the network.\nOur simulations indicate that in both behaviors the self-interested agents have only limited success in achieving their goal, even if no counter-measures are taken.\nThis is in contrast to the greater impact inflicted by self-interested agents in other domains (e.g., E-Commerce).\nSome reasons for this are the special characteristics of vehicular networks and their dynamic nature.\nWhile the self-interested agents spread lies, they cannot choose the agents with whom they interact.\nAlso, by the time their lies reach other agents, they might become irrelevant, as more recent data has reached the same agents.\nMotivated by the simulation results, future research in this field will focus on modeling different behaviors of the self-interested agents, which might cause more damage to the network.\nAnother research direction would be to find ways of minimizing the effect of selfish agents by using distributed reputation or other measures.","lvl-4":"On the Benefits of Cheating by Self-Interested Agents in Vehicular Networks \u2217\nABSTRACT\nAs more and more cars are equipped with GPS and Wi-Fi transmitters, it becomes easier to design systems that will allow cars to interact autonomously with each other, e.g., regarding traffic on the roads.\nIndeed, car manufacturers are already equipping their cars with such devices.\nThough these systems are currently proprietary, we envision a natural evolution where agent applications will be developed for vehicular systems, e.g., to improve car routing in dense urban
areas.\nNonetheless, this new technology and agent applications may lead to the emergence of self-interested car owners, who will care more about their own welfare than the social welfare of their peers.\nThese car owners will try to manipulate their agents such that they transmit false data to their peers.\nUsing a simulation environment, which models a real transportation network in a large city, we demonstrate the benefits achieved by self-interested agents if no counter-measures are implemented.\n1.\nINTRODUCTION\nAs technology advances, more and more cars are being equipped with devices which enable them to act as autonomous agents.\nAn important advancement in this respect is the introduction of ad-hoc communication networks (such as Wi-Fi), which enable the exchange of information.\n\u2217 This work was supported in part under ISF grant number 8008.\nVehicle-To-Vehicle (V2V) communication is already deployed by some car manufacturers, enabling the collaboration between different cars on the road.\nIn this paper, we investigate the attraction of being a selfish agent in vehicular networks.\nThat is, we investigate the benefits achieved by car owners who tamper with on-board devices and incorporate in them their own self-interested agents, which act for their benefit.\nWe build on the notion of Gossip Networks, introduced by Shavitt and Shay [15], in which the agents can obtain road congestion information by gossiping with peer agents using ad-hoc communication.\nWe recognize two typical behaviors that the self-interested agents could embark upon in the context of vehicular networks.\nIn the first behavior, described in Section 4, the objective of the self-interested agents is to maximize their own utility, expressed by their average journey duration on the road.\nTo this end they will let their agents cheat the other agents, by injecting false information into the network.\nThis is achieved by reporting heavy traffic values for the roads on their route to
other agents in the network in the hope of making the other agents believe that the route is jammed, and causing them to choose a different route.\nThe second type of behavior, described in Section 5, is modeled by the self-interested agents' objective to cause disorder in the network, more than by an interest in maximizing their own utility.\nThis kind of behavior could be generated, for example, by vandals or terrorists, who aim to cause as much mayhem in the network as possible.\nWe note that the introduction of self-interested agents to the network would most probably motivate other agents to try and detect these agents in order to minimize their effect.\nHowever, the introduction of mechanisms to deal with self-interested agents is costly and time-consuming.\nIn this paper we focus mainly on the attractiveness of selfish behavior by these agents, while we also provide some insights\n2.\nRELATED WORK\nThey also present a model in which not all agents are connected, and thus an agent cannot send a message to all the other agents.\nDolev et al. [5] have built on this problem and analyzed the number of faulty agents that can be tolerated in order to eventually reach the right conclusion about true data.\nAs opposed to the above works, our work focuses on vehicular networks, in which the agents are constantly roaming the network and exchanging data.\nAlso, the domain of transportation networks introduces dynamic data, as the load of the roads is subject to change.\nIn addition, the system in transportation networks has a feedback mechanism, since the load on the roads depends on the reports and the movement of the agents themselves.\nMalkhi et al.
[10] present a gossip algorithm for propagating information in a network of processors, in the presence of malicious parties.\nTheir algorithm prevents the spreading of spurious gossip and diffuses genuine data.\nNevertheless, their work assumes that the network is static and also that the agents are static (they discuss a network of processors).\nThis is not true for transportation networks.\nIn addition, as the agents are constantly moving, each agent cannot choose with whom he interacts and exchanges data.\nIn the context of analyzing the data and deciding whether the data is true or not, researchers have focused on distributed reputation systems or decision mechanisms to decide whether or not to share data.\nYu and Singh [18] build a social network of agents' reputations.\nEvery agent keeps a list of its neighbors, which can be changed over time, and computes the trustworthiness of other agents by updating the current values of testimonies obtained from reliable referral chains.\nAfter a bad experience with another agent, every agent decreases the rating of the 'bad' agent and propagates this bad experience throughout the network so that other agents can update their ratings accordingly.\nThis approach might be implemented in our domain to allow gossip agents to identify self-interested agents and thus minimize their effect.\nHowever, the implementation of such a mechanism is an expensive addition to the infrastructure of autonomous agents in transportation networks.\nThis is mainly due to the dynamic nature of the list of neighbors in transportation networks.\nLeckie et al.
[9] focus on the issue of when to share information between the agents in the network.\nTheir domain involves monitoring distributed sensors.\nEach agent monitors a subset of the sensors and evaluates a hypothesis based on the local measurements of its sensors.\nIf the agent believes that a hypothesis is sufficiently likely, he exchanges this information with the other agents.\nIn their domain, the goal of all the agents is to reach a global consensus about the likelihood of the hypothesis.\nIn our domain, however, as the agents constantly move, they have many samples, which they exchange with each other.\nAlso, the data itself might vary (e.g., a road might be reported as jammed, but a few minutes later it could be free), thus making it harder to decide whether to trust the agent who sent the data.\nMoreover, the agent might lie only about a subset of its samples, thus making it even harder to detect his cheating.\nSome work has been done in the context of gossip networks or transportation networks regarding the spreading of data and its dissemination.\nDatta et al. [4] focus on information dissemination in mobile ad-hoc networks (MANET).\nThey propose an autonomous gossiping algorithm for an infrastructure-less mobile ad-hoc networking environment.\nTheir autonomous gossiping algorithm uses a greedy mechanism to spread data items in the network.\nThe data items are spread to immediate neighbors that are interested in the information, and avoid ones that are not interested.\nThe decision as to which node is interested in the information is made by the data item itself, using heuristics.\nHowever, their work concentrates on the movement of the data itself, and not on the agents who propagate the data.\nThis is different from our scenario, in which each agent maintains the data it has gathered, while the agent itself roams the road and is responsible (and has the capabilities) for spreading the data to other agents in the network.\nDas et al.
[3] propose a cooperative strategy for content delivery in vehicular networks.\nIn their domain, peers download a file from a mesh and exchange pieces of the file among themselves.\nWe, on the other hand, are interested in vehicular networks in which there is no rule forcing the agents to cooperate among themselves.\nWhile we use the same domain, we focus on a different problem.\nShibata et al. [16] mainly focus on efficiently broadcasting the data between agents (e.g., avoiding duplicates and communication overhead), whereas we focus on the case where agents are not cooperative in nature, and on how selfish agents affect other agents and the network load.\nWang et al. [17] also assert, in the context of wireless networks, that individual agents are likely to do what is most beneficial for their owners, and will act selfishly.\nThey design a protocol for communication in networks in which all agents are selfish.\nTheir protocol motivates every agent to maximize its profit only when it behaves truthfully (a mechanism of incentive compatibility).\nHowever, the domain of wireless networks is quite different from the domain of transportation networks.\nIn the wireless network, the wireless terminal is required to contribute its local resources to transmit data.\nUnlike this, in the context of transportation networks, constructing such a mechanism is not quite a straightforward task, as self-interested agents and regular gossip agents might incur the same cost when transmitting data.\nThe difference between the two types of agents exists only in the credibility of the data they exchange.\nIn the next section, we will describe our transportation network model and gossiping between the agents.\nWe will also describe the different agents in our system.\n6.\nCONCLUSIONS\nIn this paper we investigated the benefits achieved by self-interested agents in vehicular networks.\nUsing
simulations we investigated two behaviors that might be taken by self-interested agents: (a) trying to minimize their journey length, and (b) trying to cause chaos in the network.\nOur simulations indicate that in both behaviors the self-interested agents have only limited success in achieving their goal, even if no counter-measures are taken.\nThis is in contrast to the greater impact inflicted by self-interested agents in other domains (e.g., E-Commerce).\nSome reasons for this are the special characteristics of vehicular networks and their dynamic nature.\nWhile the self-interested agents spread lies, they cannot choose the agents with whom they interact.\nAlso, by the time their lies reach other agents, they might become irrelevant, as more recent data has reached the same agents.\nMotivated by the simulation results, future research in this field will focus on modeling different behaviors of the self-interested agents, which might cause more damage to the network.","lvl-2":"On the Benefits of Cheating by Self-Interested Agents in Vehicular Networks \u2217\nABSTRACT\nAs more and more cars are equipped with GPS and Wi-Fi transmitters, it becomes easier to design systems that will allow cars to interact autonomously with each other, e.g., regarding traffic on the roads.\nIndeed, car manufacturers are already equipping their cars with such devices.\nThough these systems are currently proprietary, we envision a natural evolution where agent applications will be developed for vehicular systems, e.g., to improve car routing in dense urban areas.\nNonetheless, this new technology and agent applications may lead to the emergence of self-interested car owners, who will care more about their own welfare than the social welfare of their peers.\nThese car owners will try to manipulate their agents such that they transmit false data to their peers.\nUsing a simulation environment, which models a real transportation network in a large city, we demonstrate the benefits
achieved by self-interested agents if no counter-measures are implemented.\n1.\nINTRODUCTION\nAs technology advances, more and more cars are being equipped with devices which enable them to act as autonomous agents.\nAn important advancement in this respect is the introduction of ad-hoc communication networks (such as Wi-Fi), which enable the exchange of information between cars, e.g., for locating road congestions [1] and optimal routes [15] or improving traffic safety [2].\n\u2217 This work was supported in part under ISF grant number 8008.\nVehicle-To-Vehicle (V2V) communication is already deployed by some car manufacturers, enabling the collaboration between different cars on the road.\nFor example, GM's proprietary algorithm [6], called the \"threat assessment algorithm\", constantly calculates, in real time, other vehicles' positions and speeds, and enables messaging other cars when a collision is imminent.\nAlso, Honda has begun testing its system in which vehicles talk with each other and with the highway system itself [7].\nIn this paper, we investigate the attraction of being a selfish agent in vehicular networks.\nThat is, we investigate the benefits achieved by car owners who tamper with on-board devices and incorporate in them their own self-interested agents, which act for their benefit.\nWe build on the notion of Gossip Networks, introduced by Shavitt and Shay [15], in which the agents can obtain road congestion information by gossiping with peer agents using ad-hoc communication.\nWe recognize two typical behaviors that the self-interested agents could embark upon in the context of vehicular networks.\nIn the first behavior, described in Section 4, the objective of the self-interested agents is to maximize their own utility, expressed by their average journey duration on the road.\nThis situation can be modeled in real life by car owners whose aim is to reach their destination as fast as possible, and who would like to have their way free of other
cars.\nTo this end they will let their agents cheat the other agents, by injecting false information into the network.\nThis is achieved by reporting heavy traffic values for the roads on their route to other agents in the network in the hope of making the other agents believe that the route is jammed, and causing them to choose a different route.\nThe second type of behavior, described in Section 5, is modeled by the self-interested agents' objective to cause disorder in the network, more than by an interest in maximizing their own utility.\nThis kind of behavior could be generated, for example, by vandals or terrorists, who aim to cause as much mayhem in the network as possible.\nWe note that the introduction of self-interested agents to the network would most probably motivate other agents to try and detect these agents in order to minimize their effect.\nThis is similar, though in a different context, to the problem introduced by Lamport et al. [8] as the Byzantine Generals Problem.\nHowever, the introduction of mechanisms to deal with self-interested agents is costly and time-consuming.\nIn this paper we focus mainly on the attractiveness of selfish behavior by these agents, while we also provide some insights into the possibility of detecting self-interested agents and minimizing their effect.\n978-81-904262-7-5 (RPS) \u00a9 2007 IFAAMAS\nTo demonstrate the benefits achieved by self-interested agents, we have used a simulation environment, which models the transportation network in a central part of a large real city.\nThe simulation environment is further described in Section 3.\nOur simulations provide insights into the benefits of cheating by self-interested agents.\nOur findings can motivate future research in this field in order to minimize the effect of selfish agents.\nThe rest of this paper is organized as follows.\nIn Section 2 we review related work in the field of self-interested agents and V2V communications.\nWe continue and formally describe
our environment and simulation settings in Section 3.\nSections 4 and 5 describe the different behaviors of the self-interested agents and our findings.\nFinally, we conclude the paper with open questions and future research directions.\n2.\nRELATED WORK\nIn their seminal paper, Lamport et al. [8] describe the Byzantine Generals problem, in which processors need to handle malfunctioning components that give conflicting information to different parts of the system.\nThey also present a model in which not all agents are connected, and thus an agent cannot send a message to all the other agents.\nDolev et al. [5] have built on this problem and analyzed the number of faulty agents that can be tolerated in order to eventually reach the right conclusion about true data.\nSimilar work is presented by Minsky et al. [11], who discuss techniques for constructing gossip protocols that are resilient to up to t malicious host failures.\nAs opposed to the above works, our work focuses on vehicular networks, in which the agents are constantly roaming the network and exchanging data.\nAlso, the domain of transportation networks introduces dynamic data, as the load of the roads is subject to change.\nIn addition, the system in transportation networks has a feedback mechanism, since the load on the roads depends on the reports and the movement of the agents themselves.\nMalkhi et al.
[10] present a gossip algorithm for propagating information in a network of processors, in the presence of malicious parties.\nTheir algorithm prevents the spreading of spurious gossip and diffuses genuine data.\nThis is done in time logarithmic in the number of processes and linear in the number of corrupt parties.\nNevertheless, their work assumes that the network is static and also that the agents are static (they discuss a network of processors).\nThis is not true for transportation networks.\nFor example, in our model, agents might gossip about the heavy traffic load of a specific road, which is currently jammed, yet this information might be false several minutes later, leaving the agents to speculate whether the spreading agents are indeed malicious or not.\nIn addition, as the agents are constantly moving, each agent cannot choose with whom he interacts and exchanges data.\nIn the context of analyzing the data and deciding whether the data is true or not, researchers have focused on distributed reputation systems or decision mechanisms to decide whether or not to share data.\nYu and Singh [18] build a social network of agents' reputations.\nEvery agent keeps a list of its neighbors, which can be changed over time, and computes the trustworthiness of other agents by updating the current values of testimonies obtained from reliable referral chains.\nAfter a bad experience with another agent, every agent decreases the rating of the 'bad' agent and propagates this bad experience throughout the network so that other agents can update their ratings accordingly.\nThis approach might be implemented in our domain to allow gossip agents to identify self-interested agents and thus minimize their effect.\nHowever, the implementation of such a mechanism is an expensive addition to the infrastructure of autonomous agents in transportation networks.\nThis is mainly due to the dynamic nature of the list of neighbors in transportation networks.\nThus, not only does it
require maintaining the neighbors' list, since the neighbors change frequently, but it is also harder to build a good reputation system.\nLeckie et al. [9] focus on the issue of when to share information between the agents in the network.\nTheir domain involves monitoring distributed sensors.\nEach agent monitors a subset of the sensors and evaluates a hypothesis based on the local measurements of its sensors.\nIf the agent believes that a hypothesis is sufficiently likely, he exchanges this information with the other agents.\nIn their domain, the goal of all the agents is to reach a global consensus about the likelihood of the hypothesis.\nIn our domain, however, as the agents constantly move, they have many samples, which they exchange with each other.\nAlso, the data itself might vary (e.g., a road might be reported as jammed, but a few minutes later it could be free), thus making it harder to decide whether to trust the agent who sent the data.\nMoreover, the agent might lie only about a subset of its samples, thus making it even harder to detect his cheating.\nSome work has been done in the context of gossip networks or transportation networks regarding the spreading of data and its dissemination.\nDatta et al.
[4] focus on information dissemination in mobile ad-hoc networks (MANET).\nThey propose an autonomous gossiping algorithm for an infrastructure-less mobile ad-hoc networking environment.\nTheir autonomous gossiping algorithm uses a greedy mechanism to spread data items in the network.\nThe data items are spread to immediate neighbors that are interested in the information, and avoid ones that are not interested.\nThe decision as to which node is interested in the information is made by the data item itself, using heuristics.\nHowever, their work concentrates on the movement of the data itself, and not on the agents who propagate the data.\nThis is different from our scenario, in which each agent maintains the data it has gathered, while the agent itself roams the road and is responsible (and has the capabilities) for spreading the data to other agents in the network.\nDas et al. [3] propose a cooperative strategy for content delivery in vehicular networks.\nIn their domain, peers download a file from a mesh and exchange pieces of the file among themselves.\nWe, on the other hand, are interested in vehicular networks in which there is no rule forcing the agents to cooperate among themselves.\nShibata et al. [16] propose a method for cars to cooperatively and autonomously collect traffic jam statistics to estimate arrival time to destinations for each car.\nThe communication is based on IEEE 802.11, without using a fixed infrastructure on the ground.\nWhile we use the same domain, we focus on a different problem.\nShibata et al. [16] mainly focus on efficiently broadcasting the data between agents (e.g., avoiding duplicates and communication overhead), whereas we focus on the case where agents are not cooperative in nature, and on how selfish agents affect other agents and the network load.\nWang et al.
[17] also assert, in the context of wireless networks, that individual agents are likely to do what is most beneficial for their owners, and will act selfishly.\nThey design a protocol for communication in networks in which all agents are selfish.\nTheir protocol motivates every agent to maximize its profit only when it behaves truthfully (a mechanism of incentive compatibility).\nHowever, the domain of wireless networks is quite different from the domain of transportation networks.\nIn the wireless network, the wireless terminal is required to contribute its local resources to transmit data.\nThus, Wang et al. [17] use a payment mechanism, which attaches costs to terminals when transmitting data, and thus enables them to maximize their utility when transmitting data, instead of acting selfishly.\nUnlike this, in the context of transportation networks, constructing such a mechanism is not quite a straightforward task, as self-interested agents and regular gossip agents might incur the same cost when transmitting data.\nThe difference between the two types of agents exists only in the credibility of the data they exchange.\nIn the next section, we will describe our transportation network model and gossiping between the agents.\nWe will also describe the different agents in our system.\n3.\nMODEL AND SIMULATIONS\nWe first describe the formal transportation network model, and then we describe the simulation design.\n3.1 Formal Model\nFollowing Shavitt and Shay [15] and Parshani [13], the transportation network is represented by a directed graph G(V, E), where V is the set of vertices representing junctions, and E is the set of edges representing roads.\nAn edge e \u2208 E is associated with a weight w > 0, which specifies the time it takes to traverse the road associated with that edge.\nThe roads' weights vary in time according to the network (traffic) load.\nEach car, which is associated with an autonomous agent, is given a pair of origin and destination
points (vertices).\nA journey is defined as the (not necessarily simple) path taken by an agent between the origin vertex and the destination vertex.\nWe assume that there is always a path between a source and a destination.\nA journey length is defined as the sum of all weights of the edges constituting this path.\nEvery agent has to travel between its origin and destination points and aims to minimize its journey length.\nInitially, agents are ignorant about the state of the roads.\nRegular agents are only capable of gathering information about the roads as they traverse them.\nHowever, we assume that some agents have means of inter-vehicle communication (e.g., IEEE 802.11) with a given communication range, which enables them to communicate with other agents with the same device.\nThose agents are referred to as gossip agents.\nSince the communication range is limited, the exchange of information using gossiping is done in one of two ways: (a) between gossip agents passing one another, or (b) between gossip agents located at the same junction.\nWe assume that each agent stores the most recent information it has received or gathered about the edges in the network.\nA subset of the gossip agents are those agents who are self-interested and manipulate the devices for their own benefit.\nWe will refer to these agents as self-interested agents.\nA detailed description of their behavior is given in Sections 4 and 5.\n3.2 Simulation Design\nBuilding on [13], the network in our simulations replicates a central part of a large city, and consists of 50 junctions and 150 roads, which are approximately the number of main streets in the city.\nEach simulation consists of 6 iterations.\nThe basic time unit of the iteration is a step, which is equivalent to about 30 seconds.\nEach iteration simulates six hours of movements.\nThe average number of cars passing through the network during the iteration is about 70,000 and the average number of cars in the network at a specific time
unit is about 3,500 cars.\nIn each iteration the same agents are used with the same origin and destination points, while the data collected in earlier iterations is preserved in later iterations (referred to as the history of the agent).\nThis allows us to roughly simulate a daily routine in the transportation network (e.g., a working week).\nEach of the experiments that we describe below is run with 5 different traffic scenarios.\nThe traffic scenarios differ from one another in the initial load of the roads and the designated routes of the agents (cars) in the network.\nFor each such scenario 5 simulations are run, creating a total of 25 simulations for each experiment.\nIt has been shown by Parshani et al. [13, 14] that the information propagation in the network is very efficient when the percentage of gossiping agents is 10% or more.\nYet, due to congestion caused by too many cars rushing to what is reported as the less congested part of the network, 20-30% of gossiping agents leads to the most efficient routing results in their experiments.\nThus, in our simulation, we focus only on simulations in which the percentage of gossip agents is 20%.\nThe simulations were done with different percentages of self-interested agents.\nTo gain statistical significance we ran each simulation with changes in the set of the gossip agents and the set of the self-interested agents.\nTo place the results on a common scale, they were normalized.\nThe normalized values were calculated by comparing each agent's result to his result when the same scenario was run with no self-interested agents.\nThis was done for all of the iterations.\nUsing the normalized values enabled us to see how much worse (or better) each agent would perform compared to the basic setting.\nFor example, if the average journey length of a certain agent in iteration 1 with no self-interested agents was 50, and the length was 60 in the same scenario and iteration in which self-interested agents
were involved, then the normalized value for that agent would be 60\/50 = 1.2.\nMore details regarding the simulations are described in Sections 4 and 5.\n4.\nSPREADING LIES, MAXIMIZING UTILITY\nIn the first set of experiments we investigated the benefits achieved by the self-interested agents, whose aim was to minimize their own journey length.\nThe self-interested agents adopted a cheating approach, in which they sent false data to their peers.\nIn this section we first describe the simulations with the self-interested agents.\nThen, we model the scenario as a game with two types of agents, and prove that the equilibrium result can only be achieved when there is no efficient exchange of gossiping information in the network.\n4.1 Modeling the Self-Interested Agents' Behavior\nWhile the gossip agents gather data and send it to other agents, the self-interested agents' behavior is modeled as follows:\n1.\nCalculate the shortest path from origin to destination.\n2.\nCommunicate the following data to other agents: (a) If the road is not in the agent's route, send the true data about it (e.g., data about roads it has received from other agents); (b) For all roads in the agent's route which the agent has not yet traversed, send a random high weight.\nBasically, the self-interested agent acts the same as the gossip agent.\nIt collects data regarding the weight of the roads (either by traversing the road or by getting the data from other agents) and sends the data it has collected to other agents.\nHowever, the self-interested agent acts differently when the road is in its route.\nSince the agent's goal is to reach its destination as fast as possible, the agent will falsely report that all the roads in its route are heavily congested.\nThis is in order to free the path for itself, by making other agents recalculate their paths, this time without including roads on the
self-interested agent's route.\nTo this end, for all the roads in its route which the agent has not yet passed, the agent generates a random weight that is above the average weight of the roads in the network.\nIt then associates these new weights with the roads in its route and sends them to the other agents.\nWhile an agent could also divert cars from its route by falsely reporting congested roads parallel to its route as free, this behavior is not very likely, since other agents attempting to use those roads would discover the mistake within a short time and spread the true congestion state of the road.\nOn the other hand, if an agent manages to persuade other agents not to use a road, it is harder for them to detect that the road is not actually congested.\nIn addition, to avoid being influenced by its own lies and by other lies spreading in the network, all self-interested agents ignore data received about roads with heavy traffic (data about roads that are not heavily congested is not ignored).1\nIn the next subsection we describe the simulation results involving the self-interested agents.\n4.2 Simulation Results\nTo test the benefits of cheating by the self-interested agents we ran several experiments.\nIn the first set of experiments, we created a scenario in which a small group of self-interested agents spread lies on the same route, and tested its effect on the journey length of all the agents in the network.\n1 In other simulations we have run, in which there were several real traffic jams in the network, we indeed saw that even when roads were jammed, the self-interested agents were less affected if they ignored all reported heavy traffic, since in doing so they also discarded all lies circulating in the network.\nTable 1: Normalized journey length values, self-interested agents with the same route\nThus, several cars, which had the same origin and destination points, were designated as self-interested agents.\nIn this simulation, we selected only
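The reporting rule of Section 4.1 can be sketched as follows (a minimal sketch; the function names, data layout, and the uniform draw for the inflated weight are our assumptions — the text only requires a random weight above the network average):

```python
import random

def reports(known_weights, remaining_route, avg_weight, max_weight):
    """Messages sent by a self-interested agent: truthful data for
    roads off its route, inflated random weights (at or above the
    network-average load) for roads it has not yet traversed."""
    msg = {}
    for road, weight in known_weights.items():
        if road in remaining_route:
            # Lie: report the road as congested to push other cars away.
            msg[road] = random.uniform(avg_weight, max_weight)
        else:
            # Gossip truthfully about roads the agent does not need.
            msg[road] = weight
    return msg

def accept(reported_weight, avg_weight):
    """Self-interested agents discard heavy-traffic reports so that
    their own lies (and others') do not affect their route choice."""
    return reported_weight <= avg_weight
```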
6 agents to be part of the group of self-interested agents, as we wanted to investigate the effect achieved by only a small number of agents.\nIn each simulation in this experiment, 6 different agents were randomly chosen to be part of the group of self-interested agents, as described above.\nIn addition, one road on the route of these agents was randomly selected to be partially blocked, letting only one car go through that road at each time step.\nAbout 8,000 agents were randomly selected as regular gossip agents, and the other 32,000 agents were designated as regular agents.\nWe analyzed the average journey length of the self-interested agents as opposed to the average journey length of other regular gossip agents traveling along the same route.\nTable 1 summarizes the normalized results for the self-interested agents, the gossip agents (those having the same origin and destination points as the self-interested agents, denoted Gossip-SR, and all other gossip agents, denoted Gossip-Others) and the regular agents, as a function of the iteration number.\nWe can see from the results that the first time the self-interested agents traveled the route, spreading the false data about the roads did not help them (using the paired t-test we show that those agents had significantly lower journey lengths in the scenario in which they did not spread any lies, with P < 0.01).\nThis is mainly due to the fact that the lies do not bypass the self-interested agent and reach other cars that are ahead of the self-interested car on the same route.\nThus, spreading the lies in the first iteration does not help the self-interested agent free the route it is about to travel.\nOnly when the self-interested agents repeated their journey in the next iteration (iteration 2) did it help them significantly (P = 0.04).\nThe reason for this is that other gossip agents received this data and used it to recalculate their shortest path, thus avoiding
entrance to the roads for which the self-interested agents had spread false information about congestion.\nIt is also interesting to note the large value attained by the self-interested agents in the first iteration.\nThis is mainly due to several self-interested agents who entered the jammed road.\nThis situation occurred because the self-interested agents ignored all heavy traffic data, and thus ignored the fact that the road was jammed.\nAs they started spreading lies about this road, more cars shifted from this route, making the road free in future iterations.\nHowever, we also recall that the self-interested agents ignore all information about heavily congested roads.\nThus, when the network becomes congested, more self-interested cars are affected, since they might enter jammed roads which they would otherwise not have entered.\nThis can be seen, for example, in iterations 4-6, in which the normalized value of the self-interested agents increased above 1.00.\nUsing the paired t-test to compare these values with the values achieved by these agents when no lies are used, we see that there is no significant difference between the two scenarios.\nTable 2: Normalized journey length values, spreading lies for a beneficiary agent\nAs opposed to the gossip agents, the self-interested agents have little effect on the regular agents.\nWhile the gossip agents on the same route traveled as much as 193% longer when self-interested agents were introduced, the average journey length of the regular agents increased by only about 15%.\nThis effect is even lower than the effect on the other gossip agents in the network.\nSince we noticed that cheating by the self-interested agents does not benefit them in the first iteration, we devised another set of experiments.\nIn the second set of experiments, the self-interested agents have the
objective to help another agent, who is supposed to enter the network some time after the self-interested agent entered.\nWe refer to the latter agent as the beneficiary agent.\nJust like a self-interested agent, the beneficiary agent also ignores all data regarding heavy traffic.\nIn real life this could correspond, for example, to a husband who would like to help his wife find a faster route to her destination.\nTable 2 summarizes the normalized values for the different agents.\nAs in the first set of experiments, 5 simulations were run for each scenario, with a total of 25 simulations.\nIn each of these simulations one agent was randomly selected as a self-interested agent, and then another agent, with the same origin as the self-interested agent, was randomly selected as the beneficiary agent.\nThe other 8,000 and 32,000 agents were designated as regular gossip agents and regular agents, respectively.\nWe can see that as the iterations advance, the normalized value for the beneficiary agent decreases.\nIn this scenario, just like the previous one, in the first iterations the beneficiary agent not only fails to avoid the jammed roads, since it ignores all heavy traffic data, it also does not benefit from the lies spread by the self-interested agent.\nThis is due to the fact that the lies are not yet incorporated by other gossip agents.\nThus, if we compare the average journey length in the first iteration when lies are spread and when there are no lies, the average is significantly lower when there are no lies (p < 0.03).\nOn the other hand, if we compare the average journey length over all of the iterations, there is no significant difference between the two settings.\nStill, in most of the iterations, the average journey length of the beneficiary agent is longer than in the case when no lies are spread.\nWe can also see the impact on the other agents in the system.\nWhile the gossip agents that are not on the route of the beneficiary agent are virtually unaffected by the self-interested agent, those on the route and the regular agents are affected and have higher normalized values.\nThat is, even with just one self-interested car, both the gossip agents that follow the route targeted by the lies of the self-interested agent, and the regular agents, increase their journey length by more than 14%.\nIn our third set of experiments we examined a setting with an increasing number of self-interested agents, which did not necessarily have the same origin and destination points.\nTo model this we randomly selected self-interested agents, whose objective was to minimize their average journey length, assuming the cars repeat their journeys (that is, more than one iteration is made).\nAs opposed to the first set of experiments, in this set the self-interested agents were selected randomly, and we did not enforce the constraint that they all have the same origin and destination points.\nAs in the previous sets of experiments we ran 5 different simulations per scenario.\nIn each simulation, runs were made with different numbers of self-interested agents: 0 (no self-interested agents), 1, 2, 4, 8, and 16.\nEach agent adopted the behavior modeled in Section 4.1.\nFigure 1 shows the normalized value achieved by the self-interested agents as a function of their number.\nThe figure shows these values for iterations 2-6.\nThe first iteration is intentionally not shown, as we assume repeated journeys, and we have seen in the previous sets of experiments (and explained) why the self-interested agents do not gain much from their behavior in the first iteration.\nFigure 1: Self-interested agents' normalized values as a function of the number of self-interested agents.\nUsing these simulations we examined the threshold number of randomly selected self-interested agents up to which they can still benefit from their selfish
behavior.\nWe can see that with up to 8 self-interested agents, the average normalized value is below 1.\nThat is, they benefit from their malicious behavior.\nIn the case of one self-interested agent there is a significant difference between the average journey length when the agent spreads lies and when no lies are spread (p < 0.001), while when there are 2, 4, 8 and 16 self-interested agents there is no significant difference.\nYet, as the number of self-interested agents increases, the normalized value also increases.\nIn such cases, the normalized value is larger than 1, and the self-interested agents' journey length becomes significantly higher than their journey length in cases where there are no self-interested agents in the system.\nIn the next subsection we analyze the scenario as a game and show that in equilibrium the exchange of gossip information between the agents becomes inefficient.\n4.3 When Gossiping is Inefficient\nWe proceeded to model our scenario as a game, in order to find the equilibrium.\nThere are two possible types of agents: (a) regular gossip agents, and (b) self-interested agents.\nEach of these agents is a representative of its group, and thus all agents in the same group have similar behavior.\nWe note that the advantage of using gossiping in transportation networks is to allow the agents to detect anomalies in the network (e.g., traffic jams) and to quickly adapt to them by recalculating their routes [14].\nWe also assume that the objective of the self-interested agents is to minimize their own journey length, thus they spread lies on their routes, as described in Section 4.1.\nWe further assume that sophisticated methods for identifying the self-interested agents or managing reputation are not used.\nThis is mainly due to the complexity of incorporating and maintaining such mechanisms, as well as due to the dynamics of the network, in which
interactions between different agents are frequent, agents may leave the network, and data about the roads might change as time progresses (e.g., a road might be reported by a regular gossip agent as free at a given time, yet it may currently be jammed due to heavy traffic).\nLet Tavg be the average time it takes to traverse an edge in the transportation network (that is, the average load of an edge).\nLet Tmax be the maximum time it takes to traverse an edge.\nWe will investigate the game in which the self-interested and the regular gossip agents can choose the following actions.\nThe self-interested agents can choose how much to lie, that is, they can choose how long a traversal time (not necessarily the true duration) to report for certain roads.\nSince the objective of the self-interested agents is to spread messages as though some roads are jammed, the traversal time they report is obviously larger than the average time.\nWe denote the time the self-interested agents spread as Ts, such that Tavg < Ts < Tmax.\nThe simulation results described above showed that the agents are less affected if they discard the heavy traffic values.\nThus, the regular gossip cars, attempting to mitigate the effect of the liars, can choose a strategy of ignoring abnormal congestion values above a certain threshold, Tg.\nObviously, Tavg < Tg < Tmax.\nIn order to prevent the gossip agents from detecting the lies and simply discarding those values, the self-interested agents send lies in a given range, [Ts, Tmax], with an inverse geometric distribution, that is, the higher the T value, the higher its frequency.\nNow we construct the utility functions for each type of agent, defined by the values of Ts and Tg.\nIf the self-interested agents spread traversal times higher than or equal to the regular gossip cars' threshold, they will not benefit from those lies.\nThus, the utility value of the self-interested agents in this case is 0.\nOn the
other hand, if the self-interested agents spread a traversal time which is lower than the threshold, they will gain a positive utility value.\nFrom the regular gossip agents' point of view, if they accept messages from the self-interested agents, then they incorporate the lies in their calculations, and thus lose utility.\nOn the other hand, if they discard the false values the self-interested agents send, that is, they do not incorporate the lies, they gain utility.\nFormally, we use us to denote the utility of the self-interested agents and ug to denote the utility of the regular gossip agents.\nWe also denote the strategy profile in the game as {Ts, Tg}.\nThe utility functions are defined as:\nus(Ts, Tg) = 0 if Ts >= Tg, and us(Ts, Tg) = Ts - Tavg + 1 if Ts < Tg;\nug(Ts, Tg) = Tg - Tavg if Ts >= Tg, and ug(Ts, Tg) = Ts - Tg if Ts < Tg.\nWe are interested in finding the Nash equilibrium.\nWe recall from [12] that a Nash equilibrium is a strategy profile in which no player has anything to gain by deviating from his strategy, given that the other agent follows his strategy profile.\nFormally, let (S, u) denote the game, where S is the set of strategy profiles and u is the set of utility functions.\nWhen each agent i \u2208 {regular gossip, self-interested} chooses a strategy Ti resulting in a strategy profile T = (Ts, Tg), agent i obtains a utility of ui(T).\nA strategy profile T' \u2208 S is a Nash equilibrium if no unilateral deviation in strategy is profitable, that is, if for all i and every alternative strategy Ti, ui(T') >= ui(Ti, T'-i).\nThat is, (Ts, Tg) is a Nash equilibrium if the self-interested agents have no other value Ts' such that us(Ts', Tg) > us(Ts, Tg), and similarly for the gossip agents.\nWe now have the following theorem.\nTHEOREM 4.1.\n(Tavg, Tavg) is the only Nash equilibrium.\nProof.\nFirst we show that (Tavg, Tavg) is a Nash equilibrium.\nSuppose the gossip agents deviate to another value Tg' > Tavg.\nThen ug(Tavg, Tg') = Tavg - Tg' < 0, whereas ug(Tavg, Tavg) = 0.\nThus, the regular gossip agents have no incentive to deviate from this
strategy.\nThe self-interested agents also have no incentive to deviate from this strategy: suppose they choose another value Ts' > Tavg.\nThen us(Ts', Tavg) = 0, while us(Tavg, Tavg) = 0, so the deviation is not profitable.\nWe now show that the above solution is unique, i.e., that any other tuple (Ts, Tg), such that Tavg < Tg < Tmax and Tavg < Ts < Tmax, is not a Nash equilibrium.\nWe have three cases.\nIn the first, Tavg < Tg < Ts < Tmax.\nThus, us(Ts, Tg) = 0 and ug(Ts, Tg) = Tg - Tavg.\nIn this case, the regular gossip agents have an incentive to deviate and choose another strategy Tg + 1, since by doing so they increase their own utility: ug(Ts, Tg + 1) = Tg + 1 - Tavg.\nIn the second case we have Tavg < Ts < Tg < Tmax.\nThus, ug(Ts, Tg) = Ts - Tg < 0.\nHere too the regular gossip agents have an incentive to deviate and choose another strategy Tg - 1, for which their utility value is higher: ug(Ts, Tg - 1) = Ts - Tg + 1.\nIn the last case we have Tavg < Ts = Tg < Tmax.\nThus, us(Ts, Tg) = 0.\nIn this case, the self-interested agents have an incentive to deviate and choose another strategy Tg - 1, for which their utility value is higher: us(Tg - 1, Tg) = Tg - 1 - Tavg + 1 = Tg - Tavg > 0.\nTable 3: Normalized journey length values for the first iteration\nTable 4: Normalized journey length values for all iterations\nThe above theorem proves that the equilibrium point is reached only when the self-interested agents report traversal times for certain edges equal to the average time, while the regular gossip agents discard all data regarding roads associated with an average time or higher.\nThus, at this equilibrium point the exchange of gossip information between agents is inefficient, as the gossip agents are unable to detect any anomalies in the network.\nIn the next section we describe another scenario for the self-interested agents, in which they are not concerned with their own utility, but rather interested in
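The equilibrium claim can be checked by brute force on a small discretized strategy space (a sketch under our assumptions: integer strategies on [Tavg, Tmax] with illustrative values Tavg = 5 and Tmax = 12, and the utilities taken from the case analysis in the proof):

```python
from itertools import product

# Illustrative integer strategy grid (our choice of values).
T_AVG, T_MAX = 5, 12

def u_s(ts, tg):
    """Self-interested agents' utility: a lie pays off only if it
    stays below the gossip agents' discard threshold."""
    return 0 if ts >= tg else ts - T_AVG + 1

def u_g(ts, tg):
    """Gossip agents' utility: raising the threshold gains while lies
    are still discarded, and loses once lies slip under it."""
    return tg - T_AVG if ts >= tg else ts - tg

strategies = range(T_AVG, T_MAX + 1)
equilibria = [
    (ts, tg)
    for ts, tg in product(strategies, strategies)
    if all(u_s(d, tg) <= u_s(ts, tg) for d in strategies)
    and all(u_g(ts, d) <= u_g(ts, tg) for d in strategies)
]
print(equilibria)  # [(5, 5)], i.e., (Tavg, Tavg) only
```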
maximizing the average journey length of other gossip agents.\n5.\nSPREADING LIES, CAUSING CHAOS\nAnother possible behavior that can be adopted by self-interested agents is characterized by the goal of causing disorder in the network.\nThis can be achieved, for example, by maximizing the average journey length of all agents, even at the cost of maximizing their own journey length.\nTo understand the vulnerability of the gossip-based transportation support system, we ran 5 different simulations for each scenario.\nIn each simulation different agents were randomly chosen (using a uniform distribution) to act as gossip agents, and among them self-interested agents were chosen.\nEach self-interested agent behaved in the same manner as described in Section 4.1.\nEvery simulation consisted of 11 runs, each run comprising a different number of self-interested agents: 0 (no self-interested agents), 1, 2, 4, 8, 16, 32, 50, 64, 80 and 100.\nIn each run the set of self-interested agents was increased incrementally.\nFor example, the run with 50 self-interested agents consisted of all the self-interested agents that were used in the run with 32 self-interested agents, with an additional 18 self-interested agents.\nTables 3 and 4 summarize the normalized journey length for the self-interested agents, the regular gossip agents and the regular (non-gossip) agents.\nTable 3 summarizes the data for the first iteration and Table 4 summarizes the data for the average of all iterations.\nFigure 2 demonstrates the changes in the normalized values for the regular gossip agents and the regular agents, as a function of the iteration number.\nSimilar to the results in our first set of experiments, described in Section 4.2, we can see that randomly selected self-interested agents who follow different randomly selected routes do not benefit from their malicious behavior (that is, their average journey length does not decrease).\nHowever, when only one self-interested agent is
involved, it does benefit from the malicious behavior, even in the first iteration.\nThe results also indicate that the regular gossip agents are more sensitive to malicious behavior than regular agents: the average journey length for the gossip agents increases significantly (e.g., with 32 self-interested agents the average journey length for the gossip agents was 146% higher than in the setting with no self-interested agents at all, as opposed to an increase of only 25% for the regular agents).\nIn contrast, these results also indicate that the self-interested agents do not succeed in placing a significant load on the network by their malicious behavior.\nFigure 2: Gossip and regular agents' normalized values, as a function of the iteration.\nSince the goal of the self-interested agents in this case is to cause disorder in the network rather than use the lies for their own benefit, the question arises why the self-interested agents should send lies only about their own routes.\nIndeed, we hypothesized that if they all sent lies about the same major roads, the damage they might inflict on the entire network would be larger than if each of them sent lies about its own route.\nTo examine this hypothesis, we designed another set of experiments.\nIn this set of experiments, all the self-interested agents spread lies about the same 13 main roads in the network.\nHowever, the results show a much smaller impact on the other gossip and regular agents in the network.\nTable 5: Normalized journey length values for all iterations.\nNetwork with congestion.\nThe average normalized value for the gossip agents in these simulations was only about 1.07, as opposed to 1.7 in the original scenario.\nWhen analyzing the results we saw that although the false data was spread, it did not cause other gossip cars to change their routes.\nThe main reason was that the lies were spread about roads that were not on the route of the self-interested agents.\nThus, it took the data longer to reach agents on the main roads, and when the agents reached the relevant roads this data was \"too old\" to be incorporated in the other agents' calculations.\nWe also examined the impact of sending lies in order to cause chaos when parts of the network are already congested.\nTo this end, we simulated a network in which 13 main roads are jammed.\nThe behavior of the self-interested agents is as described in Section 4.1, and the self-interested agents spread lies about their own routes.\nThe simulation results, detailed in Table 5, show that there is a greater incentive for the self-interested agents to cheat when the network is already congested, as their cheating causes more damage to the other agents in the network.\nFor example, whereas the average journey length of the regular agents increased by only about 15% in the original scenario, in which the network was not congested, in this scenario the average journey length of the agents increased by about 60%.\n6.\nCONCLUSIONS\nIn this paper we investigated the benefits achieved by self-interested agents in vehicular networks.\nUsing simulations we examined two behaviors that might be adopted by self-interested agents: (a) trying to minimize their journey length, and (b) trying to cause chaos in the network.\nOur simulations indicate that with both behaviors the self-interested agents have only limited success in achieving their goal, even if no counter-measures are taken.\nThis is in contrast to the greater impact inflicted by self-interested agents in other domains (e.g., e-commerce).\nSome reasons for this are the special characteristics of vehicular networks and their dynamic nature.\nWhile the self-interested agents spread lies, they cannot choose the agents with whom they interact.\nAlso, by the time their lies reach other agents, they might have become irrelevant, as more recent data has reached the same agents.\nMotivated by the simulation results, future research in this field will
focus on modeling different behaviors of the self-interested agents, which might cause more damage to the network.\nAnother research direction would be to find ways of minimizing the effect of selfish agents by using distributed reputation or other measures.","keyphrases":["self-interest agent","self-interest agent","vehicular network","intellig agent","social network","journei length","chao","selfinterest agent","agent-base deploi applic","artifici social system"],"prmu":["P","P","P","M","R","U","U","M","M","M"]} {"id":"J-33","title":"Bid Expressiveness and Clearing Algorithms in Multiattribute Double Auctions","abstract":"We investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades. We develop a formal semantic framework for characterizing expressible offers, and show conditions under which the allocation problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades. We analyze the bilateral matching problem while taking into consideration relevant results from multiattribute utility theory. Network flow models we develop for computing global allocations facilitate classification of the problem space by computational complexity, and provide guidance for developing solution algorithms. Experimental trials help distinguish tractable problem classes for proposed solution techniques.","lvl-1":"Bid Expressiveness and Clearing Algorithms in Multiattribute Double Auctions Yagil Engel, Michael P. Wellman, and Kevin M.
Lochner, University of Michigan, Computer Science & Engineering, 2260 Hayward St, Ann Arbor, MI 48109-2121, USA {yagil,wellman,klochner}@umich.edu\nABSTRACT\nWe investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades.\nWe develop a formal semantic framework for characterizing expressible offers, and show conditions under which the allocation problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades.\nWe analyze the bilateral matching problem while taking into consideration relevant results from multiattribute utility theory.\nNetwork flow models we develop for computing global allocations facilitate classification of the problem space by computational complexity, and provide guidance for developing solution algorithms.\nExperimental trials help distinguish tractable problem classes for proposed solution techniques.\nCategories and Subject Descriptors: F.2 [Theory of Computation]: Analysis of Algorithms and Problem Complexity; J.4 [Computer Applications]: Social and Behavioral Sciences-Economics\nGeneral Terms: Algorithms, Economics\n1.\nBACKGROUND\nA multiattribute auction is a market-based mechanism where goods are described by vectors of features, or attributes [3, 5, 8, 19].\nSuch mechanisms provide traders with the ability to negotiate over a multidimensional space of potential deals, delaying commitment to specific configurations until the most promising candidates are identified.\nFor example, in a multiattribute auction for computers, the good may be defined by attributes such as processor speed, memory, and hard disk capacity.\nAgents have varying preferences (or costs) associated with the possible configurations.\nFor example, a buyer may be willing to purchase a computer with a 2 GHz processor, 500 MB of memory,
and a 50 GB hard disk for a price no greater than $500, or the same computer with 1 GB of memory for a price no greater than $600.\nExisting research in multiattribute auctions has focused primarily on one-sided mechanisms, which automate the process whereby a single agent negotiates with multiple potential trading partners [8, 7, 19, 5, 23, 22].\nModels of procurement typically assume the buyer has a value function, v, ranging over the possible configurations, X, and that each seller i can similarly be associated with a cost function ci over this domain.\nThe role of the auction is to elicit these functions (possibly approximate or partial versions), and identify the surplus-maximizing deal.\nIn this case, such an outcome would be arg max_{i,x} v(x) \u2212 ci(x).\nThis problem can be translated into the more familiar auction for a single good without attributes by computing a score for each attribute vector based on the seller valuation function, and having buyers bid scores.\nAnalogs of the classic first- and second-price auctions correspond to first- and second-score auctions [8, 7].\nIn the absence of a published buyer scoring function, agents on both sides may provide partial specifications of the deals they are willing to engage in.\nResearch on such auctions has, for example, produced iterative mechanisms for eliciting cost functions incrementally [19].\nOther efforts focus on the optimization problem facing the bid taker, for example considering side constraints on the combination of trades comprising an overall deal [4].\nSide constraints have also been analyzed in the context of combinatorial auctions [6, 20].\nOur emphasis is on two-sided multiattribute auctions, where multiple buyers and sellers submit bids, and the objective is to construct a set of deals maximizing overall surplus.\nPrevious research on such auctions includes works by Fink et al.
[12] and Gong [14], both of which consider a matching problem for continuous double auctions (CDAs), where deals are struck whenever a pair of compatible bids is identified.\nIn a call market, in contrast, bids accumulate until designated times (e.g., on a periodic or scheduled basis) at which the auction clears by determining a comprehensive match over the entire set of bids.\nBecause the optimization is performed over an aggregated scope, call markets often enjoy liquidity and efficiency advantages over CDAs [10].1\nClearing a multiattribute CDA is much like clearing a one-sided multiattribute auction.\nBecause nothing happens between bids, the problem is to match a given new bid (say, an offer to buy) with the existing bids on the other (sell) side.\nMultiattribute call markets are potentially much more complex.\nConstructing an optimal overall matching may require consideration of many different combinations of trades, among the various potential trading-partner pairings.\n1 In the interim between clears, call markets may also disseminate price quotes providing summary information about the state of the auction [24].\nSuch price quotes are often computed based on hypothetical clears, and so the clearing algorithm may be invoked more frequently than actual market clearing operations.\nThe problem can be complicated by restrictions on overall assignments, as expressed in side constraints [16].\nThe goal of the present work is to develop a general framework for multiattribute call markets, to enable investigation of design issues and possibilities.\nIn particular, we use the framework to explore tradeoffs between expressive power of agent bids and computational properties of auction clearing.\nWe conduct our exploration independent of any consideration of strategic issues bearing on mechanism design.\nAs with analogous studies of combinatorial auctions [18], we intend that tradeoffs quantified in this work can be combined with incentive factors within a
comprehensive overall approach to multiattribute auction design.\nWe provide the formal semantics of multiattribute offers in our framework in the next section.\nWe abstract, where appropriate, from the specific language used to express offers, characterizing expressiveness semantically in terms of what deals may be offered.\nThis enables us to identify some general conditions under which the problem of multilateral matching can be decomposed into bilateral matching problems.\nWe then develop a family of network flow problems that capture corresponding classes of multiattribute call market optimizations.\nExperimental trials provide preliminary confirmation that the network formulations provide useful structure for implementing clearing algorithms.\n2.\nMULTIATTRIBUTE OFFERS\n2.1 Basic Definitions\nThe distinguishing feature of a multiattribute auction is that the goods are defined by vectors of attributes, x = (x1, ... , xm), xj \u2208 Xj.\nA configuration is a particular attribute vector, x \u2208 X = X1 \u00d7 ... \u00d7 Xm.\nThe outcome of the auction is a set of bilateral trades.\nTrade t takes the form t = (x, q, b, s, \u03c0), signifying that agent b buys q > 0 units of configuration x from seller s, for payment \u03c0 > 0.\nFor convenience, we use the notation xt to denote the configuration associated with trade t (and similarly for the other elements of t).\nFor a set of trades T, we denote by Ti the subset of T involving agent i (i.e., b = i or s = i).\nLet T denote the set of all possible trades.\nA bid expresses an agent's willingness to participate in trades.\nWe specify the semantics of a bid in terms of offer sets.\nLet OT_i \u2286 Ti denote agent i's trade offer set.\nIntuitively, this represents the trades in which i is willing to participate.\nHowever, since the outcome of the auction is a set of trades, several of which may involve agent i, we must in general consider willingness to engage in trade combinations.\nAccordingly, we introduce the combination offer
set of agent i, O^C_i ⊆ 2^{𝒯_i}.

2.2 Specifying Offer Sets

A fully expressive bid language would allow specification of arbitrary combination offer sets. We instead consider a more limited class which, while restrictive, still captures most forms of multiattribute bidding proposed in the literature. Our bids directly specify part of the agent's trade offer set, and include further directives controlling how this can be extended to the full trade and combination offer sets. For example, one way to specify a trade (buy) offer set would be to describe a set of configurations and quantities, along with the maximal payment one would exchange for each (x, q) specified. This description could be by enumeration, or any available means of defining such a mapping.

An explicit set of trades in the offer set generally entails inclusion of many more implicit trades. We assume payment monotonicity, which means that agents always prefer more money. That is, for π > π′ > 0,

(x, q, i, s, π) ∈ O^T_i ⇒ (x, q, i, s, π′) ∈ O^T_i,
(x, q, b, i, π′) ∈ O^T_i ⇒ (x, q, b, i, π) ∈ O^T_i.

We also assume free disposal, which dictates that for all i, q > q′ > 0,

(x, q′, i, s, π) ∈ O^T_i ⇒ (x, q, i, s, π) ∈ O^T_i,
(x, q, b, i, π) ∈ O^T_i ⇒ (x, q′, b, i, π) ∈ O^T_i.

Note that the conditions for agents in the role of buyers and sellers are analogous. Henceforth, for expository simplicity, we present all definitions with respect to buyers only, leaving the definitions for sellers as understood. Allowing agents' bids to comprise offers from both buyer and seller perspectives is also straightforward.

An assertion that offers are divisible entails further implicit members in the trade offer set.

DEFINITION 1 (DIVISIBLE OFFER). Agent i's offer is divisible down to q̲ iff for all q′ with q̲ < q′ < q,
(x, q, i, s, π) ∈ O^T_i ⇒ (x, q′, i, s, (q′/q)π) ∈ O^T_i.

We employ the shorthand divisible to mean divisible down to 0. The definition above specifies arbitrary divisibility. It would likewise be possible to define divisibility with respect to integers, or to any given finite granularity. Note that when offers are divisible, it suffices to specify one offer corresponding to the maximal quantity one is willing to trade for any given configuration, trading partner, and per-unit payment (called the price). At the extreme of indivisibility are all-or-none offers.

DEFINITION 2 (AON OFFER). Agent i's offer is all-or-none (AON) iff

(x, q, i, s, π) ∈ O^T_i ∧ (x, q′, i, s, π′) ∈ O^T_i ⇒ [q = q′ ∨ π = π′].

In many cases, the agent will be indifferent with respect to different trading partners. In that event, it may omit the partner element from trades directly specified in its offer set, and simply assert that its offer is anonymous.

DEFINITION 3 (ANONYMITY). Agent i's offer is anonymous iff ∀s, s′, b, b′,

(x, q, i, s, π) ∈ O^T_i ⇔ (x, q, i, s′, π) ∈ O^T_i ∧ (x, q, b, i, π) ∈ O^T_i ⇔ (x, q, b′, i, π) ∈ O^T_i.

Because omitting trading partner qualifications simplifies the exposition, we generally assume in the following that all offers are anonymous unless explicitly specified otherwise. Extending to the non-anonymous case is conceptually straightforward. We employ the wild-card symbol ∗ in place of an agent identifier to indicate that any agent is acceptable.

To specify a trade offer set, a bidder directly specifies a set of willing trades, along with any regularity conditions (e.g., divisibility, anonymity) that implicitly extend the set. The full trade offer set is then defined by the closure of this direct set with respect to payment monotonicity, free disposal, and any applicable divisibility assumptions. We next consider the
specification of combination offer sets. Without loss of generality, we restrict each trade set T ∈ O^C_i to include at most one trade for any combination of configuration and trading partner (multiple such trades are equivalent to one net trade aggregating the quantities and payments). The key question is to what extent the agent is willing to aggregate deals across configurations or trading partners. One possibility is disallowing any aggregation.

DEFINITION 4 (NO AGGREGATION). The no-aggregation combinations are given by O^NA_i = {∅} ∪ {{t} | t ∈ O^T_i}. Agent i's offer exhibits non-aggregation iff O^C_i = O^NA_i. We require in general that O^C_i ⊇ O^NA_i.

A more flexible policy is to allow aggregation across trading partners, keeping configuration constant.

DEFINITION 5 (PARTNER AGGREGATION). Suppose a particular trade is offered in the same context (set of additional trades, T) with two different sellers, s and s′. That is,

{(x, q, i, s, π)} ∪ T ∈ O^C_i ∧ {(x, q, i, s′, π)} ∪ T ∈ O^C_i.

Agent i's offer allows seller aggregation iff in all such cases,

{(x, q′, i, s, π′), (x, q − q′, i, s′, π − π′)} ∪ T ∈ O^C_i.

In other words, we may create new trade offer combinations by splitting the common trade (quantity and payment, not necessarily proportionately) between the two sellers.

In some cases, it might be reasonable to form combinations by aggregating different configurations.

DEFINITION 6 (CONFIGURATION AGGREGATION). Suppose agent i offers, in the same context, the same quantity of two (not necessarily different) configurations, x and x′. That is,

{(x, q, i, ∗, π)} ∪ T ∈ O^C_i ∧ {(x′, q, i, ∗, π′)} ∪ T ∈ O^C_i.

Agent i's offer allows configuration aggregation iff in all such cases (and analogously when it is a seller),

{(x, q′, i, ∗, (q′/q)π), (x′, q − q′, i, ∗, ((q − q′)/q)π′)}
∪ T ∈ O^C_i.

Note that combination offer sets can accommodate offerings of configuration bundles. However, classes of bundles formed by partner or configuration aggregation are highly regular, covering only a specific type of bundle formed by splitting a desired quantity across configurations. This is quite restrictive compared to the general combinatorial case.

2.3 Willingness to Pay

An agent's trade offer set implicitly defines the agent's willingness to pay for any given configuration and quantity. We assume anonymity to avoid conditioning our definitions on trading partner.

DEFINITION 7 (WILLINGNESS TO PAY). Agent i's willingness to pay for quantity q of configuration x is given by û^B_i(x, q) = max π s.t. (x, q, i, ∗, π) ∈ O^T_i.

We use the symbol û to recognize that willingness to pay can be viewed as a proxy for the agent's utility function, measured in monetary units. The superscript B distinguishes the buyer's willingness-to-pay function from a seller's willingness to accept, û^S_i(x, q), defined as the minimum payment seller i will accept for q units of configuration x. We omit the superscript where the distinction is inessential or clear from context.

DEFINITION 8 (TRADE QUANTITY BOUNDS). Agent i's minimum trade quantity for configuration x is given by q̲_i(x) = min q s.t. ∃π. (x, q, i, ∗, π) ∈ O^T_i. The agent's maximum trade quantity for x is q̄_i(x) = max q s.t. ∃π. (x, q, i, ∗, π) ∈ O^T_i ∧ ¬∃q′ < q.
(x, q′, i, ∗, π) ∈ O^T_i.

When the agent has no offers involving x, we take q̲_i(x) = q̄_i(x) = 0. It is useful to define a special case where all configurations are offered in the same quantity range.

DEFINITION 9 (CONFIGURATION PARITY). Agent i's offers exhibit configuration parity iff q̲_i(x) > 0 ∧ q̲_i(x′) > 0 ⇒ q̲_i(x) = q̲_i(x′) ∧ q̄_i(x) = q̄_i(x′).

Under configuration parity we drop the arguments from trade quantity bounds, yielding the constants q̄ and q̲ which apply to all offers.

DEFINITION 10 (LINEAR PRICING). Agent i's offers exhibit linear pricing iff for all q̲_i(x) ≤ q ≤ q̄_i(x), û_i(x, q) = (q/q̄_i(x)) û_i(x, q̄_i(x)).

Note that linear pricing assumes divisibility down to q̲_i(x). Given linear pricing, we can define the unit willingness to pay, û_i(x) = û_i(x, q̄_i(x))/q̄_i(x), and take û_i(x, q) = q û_i(x) for all q̲_i(x) ≤ q ≤ q̄_i(x).

In general, an agent's willingness to pay may depend on a context of other trades the agent is engaging in.

DEFINITION 11 (WILLINGNESS TO PAY IN CONTEXT). Agent i's willingness to pay for quantity q of configuration x in the context of other trades T is given by û^B_i(x, q; T) = max π s.t. {(x, q, i, s, π)} ∪ T_i ∈ O^C_i.

LEMMA 1. If O^C_i is either non-aggregating, or exhibits linear pricing, then û^B_i(x, q; T) = û^B_i(x, q).

3. MULTIATTRIBUTE ALLOCATION

DEFINITION 12 (TRADE SURPLUS). The surplus of trade t = (x, q, b, s, π) is given by σ(t) = û^B_b(x, q) − û^S_s(x, q).

Note that the trade surplus does not depend on the payment, which is simply a transfer from buyer to seller.

DEFINITION 13 (TRADE UNIT SURPLUS). The unit surplus of trade t = (x, q, b, s, π) is given by σ¹(t) = σ(t)/q.
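Taken together, Definitions 10, 12, and 13 suggest a direct computational reading. The following is a minimal illustrative sketch (the identifiers are our own, not the paper's notation): it computes unit willingness to pay under linear pricing and, for a single buyer-seller pair with divisible offers, selects the configuration with maximal unit surplus, trading up to the smaller of the two quantity bounds.

```python
# Minimal sketch of unit willingness to pay (Definition 10) and trade
# unit surplus (Definition 13). u_buy / u_sell map configurations to the
# buyer's unit willingness to pay / the seller's unit willingness to
# accept; q_max_* are the traders' maximum quantities. All identifiers
# are illustrative, not from the paper.

def unit_wtp(wtp_at_max_quantity, q_max):
    """Unit willingness to pay under linear pricing:
    u_i(x) = u_i(x, q_max(x)) / q_max(x)."""
    return wtp_at_max_quantity / q_max

def best_bilateral_trade(u_buy, u_sell, q_max_buy, q_max_sell):
    """For one buyer-seller pair, pick the configuration with maximal
    unit surplus u_b(x) - u_s(x); with divisible offers the pair can
    trade up to min(q_max_buy, q_max_sell) units.
    Returns (configuration, quantity, unit_surplus), or None when no
    common configuration yields positive surplus."""
    best = None
    for x in set(u_buy) & set(u_sell):
        unit_surplus = u_buy[x] - u_sell[x]
        if unit_surplus > 0 and (best is None or unit_surplus > best[2]):
            best = (x, min(q_max_buy, q_max_sell), unit_surplus)
    return best
```

For instance, with u_buy = {'fast': 10, 'slow': 4}, u_sell = {'fast': 7, 'slow': 3}, and quantity bounds 5 and 3, the best match trades 3 units of the 'fast' configuration at unit surplus 3.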
Under linear pricing, we can equivalently write σ¹(t) = û^B_b(x) − û^S_s(x).

DEFINITION 14 (SURPLUS OF A TRADE IN CONTEXT). The surplus of trade t = (x, q, b, s, π) in the context of other trades T, σ(t; T), is given by û^B_b(x, q; T) − û^S_s(x, q; T).

DEFINITION 15 (GMAP). The Global Multiattribute Allocation Problem (GMAP) is to find the set of acceptable trades maximizing total surplus,

max_{T ∈ 2^𝒯} Σ_{t∈T} σ(t; T \ {t}) s.t. ∀i. T_i ∈ O^C_i.

DEFINITION 16 (MMP). The Multiattribute Matching Problem (MMP) is to find a best trade for a given pair of traders,

MMP(b, s) = arg max_{t ∈ O^T_b ∩ O^T_s} σ(t).

If O^T_b ∩ O^T_s is empty, we say that MMP has no solution.

Proofs of all the following results are provided in an extended version of this paper available from the authors.

THEOREM 2. Suppose all agents' offers exhibit no aggregation (Definition 4). Then the solution to GMAP consists of a set of trades, each of which is a solution to MMP for its specified pair of traders.

THEOREM 3. Suppose that each agent's offer set satisfies one of the following (not necessarily the same) sets of conditions.

1. No aggregation and configuration parity (Definitions 4 and 9).

2. Divisibility, linear pricing, and configuration parity (Definitions 1, 10, and 9), with combination offer set defined as the minimal set consistent with configuration aggregation (Definition 6).²

Then the solution to GMAP consists of a set of trades, each of which employs a configuration that solves MMP for its specified pair of traders.

Let MMPd(b, s) denote a modified version of MMP, where O^T_b and O^T_s are extended to assume divisibility (i.e., the offer sets are taken to be their closures under Definition 1). Then we can extend Theorem 3 to allow aggregating agents to maintain AON or min-quantity offers as follows.

THEOREM 4. Suppose offer sets as in Theorem 3, except that agents i satisfying
configuration aggregation need be divisible only down to q̲_i, rather than down to 0. Then the solution to GMAP consists of a set of trades, each of which employs the same configuration as a solution to MMPd for its specified pair of traders.

THEOREM 5. Suppose agents b and s exhibit configuration parity, divisibility, and linear pricing, and there exists configuration x such that û_b(x) − û_s(x) > 0. Then t ∈ MMPd(b, s) iff

x_t = arg max_x {û_b(x) − û_s(x)}, q_t = min(q̄_b, q̄_s). (1)

The preceding results signify that under certain conditions, we can divide the global optimization problem into two parts: first find a bilateral trade that maximizes unit surplus for each pair of traders (or total surplus in the non-aggregation case), and then use the results to find a globally optimal set of trades. In the following two sections we investigate each of these subproblems.

4. UTILITY REPRESENTATION AND MMP

We turn next to the problem of finding a best deal between pairs of traders. The complexity of MMP depends pivotally on the representation by bids of offer sets, an issue we have postponed to this point. Note that issues of utility representation and MMP apply to a broad class of multiattribute mechanisms, beyond the multiattribute call markets we emphasize. For example, the complexity results contained in this section apply equally to the bidding problem faced by sellers in reverse auctions, given a published buyer scoring function.

The simplest representation of an offer set is a direct enumeration of configurations and associated quantities and payments. This approach treats the configurations as atomic entities, making no use of attribute structure. (Footnote 2: That is, for such an agent i, O^C_i is the closure under configuration aggregation of O^NA_i.) A common and inexpensive enhancement is to enable a trader to express sets of configurations, by specifying subsets of the domains of component
attributes. Associating a single quantity and payment with a set of configurations expresses indifference among them; hence we refer to such a set as an indifference range.³ Indifference ranges include the case of attributes with a natural ordering, in which a bid specifies a minimum or maximum acceptable attribute level. The use of indifference ranges can be convenient for MMP. The compatibility of two indifference ranges is simply found by testing set intersection for each attribute, as demonstrated by the decision-tree algorithm of Fink et al. [12].

Alternatively, bidders may specify willingness-to-pay functions û in terms of compact functional forms. Enumeration-based representations, even when enhanced with indifference ranges, are ultimately limited by the exponential size of attribute space. Functional forms may avoid this explosion, but only if û reflects structure among the attributes. Moreover, even given a compact specification of û, we gain computational benefits only if we can perform the matching without expanding the û values of an exponential number of configuration points.

4.1 Additive Forms

One particularly useful multiattribute representation is known as the additive scoring function. Though this form is widely used in practice and in the academic literature, it is important to stress the assumptions behind it. The theory of multiattribute representation is best developed in the context where û is interpreted as a utility function representing an underlying preference order [17]. We present the premises of additive utility theory in this section, and discuss some generalizations in the next.

DEFINITION 17. A set of attributes Y ⊂ X is preferentially independent (PI) of its complement Z = X \ Y if the conditional preference order over Y given a fixed level Z⁰ of Z is the same regardless of the choice of Z⁰. In other words, the preference order over the projection of X on the attributes in Y is the
same for any instantiation of the attributes in Z.

DEFINITION 18. X = {x_1, ..., x_m} is mutually preferentially independent (MPI) if any subset of X is preferentially independent of its complement.

THEOREM 6 ([9]). A preference order over a set of attributes X has an additive utility function representation

u(x_1, ..., x_m) = Σ_{i=1}^m u_i(x_i)

iff X is mutually preferentially independent.

A utility function over outcomes including money is quasi-linear if the function can be represented as a function over non-monetary attributes plus payments, π. Interpreting û as a utility function over non-monetary attributes is tantamount to assuming quasi-linearity. Even when quasi-linearity is assumed, however, MPI over non-monetary attributes is not sufficient for the quasi-linear utility function to be additive. For this, we also need that each of the pairs (π, X_i) for any attribute X_i would be PI of the rest of the attributes. (Footnote 3: These should not be mistaken for indifference curves, which express dependency between the attributes. Indifference curves can be expressed by the more elaborate utility representations discussed below.) This (by MAUT) in turn implies that the set of attributes including money is MPI and the utility function can be represented as u(x_1, ...
, x_m, π) = Σ_{i=1}^m u_i(x_i) + π.

Given that form, a willingness-to-pay function reflecting u can be represented additively, as û(x) = Σ_{i=1}^m u_i(x_i).

In many cases the additivity assumption provides practically crucial simplification of offer set elicitation. In addition to compactness, additivity dramatically simplifies MMP. If both sides provide additive û representations, the globally optimal match reduces to finding the optimal match separately for each attribute.

A common scenario in procurement has the buyer define an additive scoring function, while suppliers submit enumerated offer points or indifference ranges. This model is still very amenable to MMP: for each element in a supplier's enumerated set, we optimize each attribute by finding the point in the supplier's allowable range that is most preferred by the buyer. A special type of scoring (more particularly, cost) function was defined by Bichler and Kalagnanam [4] and called a configurable offer. This idea is geared towards procurement auctions: assuming suppliers are usually comfortable with expressing their preferences in terms of cost that is quasi-linear in every attribute, they can specify a price for a base offer, and an additional cost for every change in a specific attribute level. This model is essentially a pricing-out approach [17]. For this case, MMP can still be optimized on a per-attribute basis. A similar idea has been applied to one-sided iterative mechanisms [19], in which sellers refine prices on a per-attribute basis at each iteration.

4.2 Multiattribute Utility Theory

Under MPI, the tradeoffs between the attributes in each subset cannot be affected by the value of other attributes. For example, when buying a PC, a weaker CPU may increase the importance of the RAM compared to, say, the type of keyboard. Such relationships cannot be expressed under an additive model. Multiattribute utility theory (MAUT) develops various compact representations of utility
functions that are based on weaker structural assumptions [17, 2]. There are several challenges in adapting these techniques to multiattribute bidding. First, as noted above, the theory is developed for utility functions, which may behave differently from willingness-to-pay functions. Second, computational efficiency of matching has not been an explicit goal of most work in the area. Third, adapting such representations to iterative mechanisms may be more challenging.

One representation that employs somewhat weaker assumptions than additivity, yet retains the summation structure, is the generalized additive (GA) decomposition:

u(x) = Σ_{j=1}^J f_j(x^j), x^j ∈ X^j, (2)

where the X^j are potentially overlapping sets of attributes, together exhausting the space X. A key point from our perspective is that the complexity of the matching is similar to the complexity of optimizing a single function, since the sum function is in the form (2) as well. Recent work by Gonzales and Perny [15] provides an elicitation process for GA decomposable preferences under certainty, as well as an optimization algorithm for the GA decomposed function. The complexity of exact optimization is exponential in the induced width of the graph. However, to become operational for multiattribute bidding this decomposition must be detectable and verifiable by statements over preferences with respect to price outcomes. We are exploring this topic in ongoing work [11].

5. SOLVING GMAP UNDER ALLOCATION CONSTRAINTS

Theorems 2, 3, and 4 establish conditions under which GMAP solutions must comprise elements from constituent MMP solutions. In Sections 5.1 and 5.2, we show how to compute these GMAP solutions, given the MMP solutions, under these conditions. In these settings, traders that aggregate partners also aggregate configurations; hence we refer to them simply as aggregating or non-aggregating. Section 5.3 suggests a means to relax the linear pricing restriction employed in these
constructions. Section 5.4 provides strategies for allowing traders to aggregate partners and restrict configuration aggregation at the same time.

5.1 Notation and Graphical Representation

Our clearing algorithms are based on network flow formulations of the underlying optimization problem [1]. The network model is based on a bipartite graph, in which nodes on the left side represent buyers, and nodes on the right represent sellers. We denote the sets of buyers and sellers by B and S, respectively. We define two graph families, one for the case of non-aggregating traders (called single-unit), and the other for the case of aggregating traders (called multi-unit).⁴ For both types, a single directed arc is placed from a buyer i ∈ B to a seller j ∈ S if and only if MMP(i, j) is nonempty. We denote by T(i) the set of potential trading partners of trader i (i.e., the nodes connected to buyer or seller i in the bipartite graph).

In the single-unit case, we define the weight of an arc (i, j) as w_ij = σ(MMP(i, j)). Note that free disposal lets a buy offer receive a larger quantity than desired (and similarly for sell offers). For the multi-unit case, the weights are w_ij = σ¹(MMP(i, j)), and we associate the quantity q̄_i with the node for trader i. We also use the notation q_ij in the mathematical formulations to denote partial fulfillment of q_t for t = MMP(i, j).

5.2 Handling Indivisibility and Aggregation Constraints

Under the restrictions of Theorems 2, 3, or 4, and when the solution to MMP is given, GMAP exhibits strong similarity to the problem of clearing double auctions with assignment constraints [16]. A match in our bipartite representation corresponds to a potential trade in which assignment constraints are satisfied. Network flow formulations have been shown to model this problem under the assumption of indivisibility and aggregation for all traders. The novelty in this part of our work is the use of generalized network flow
formulations for more complex cases where aggregation and divisibility may be controlled by traders.

Initially we examine the simple case of no aggregation (Theorem 2). Observe that the optimal allocation is simply the solution to the well-known weighted assignment problem [1] on the single-unit bipartite graph described above. The set of matches that maximizes the total weight of arcs corresponds to the set of trades that maximizes total surplus. Note that any form of (in)divisibility can also be accommodated in this model via the constituent MMP subproblems. (Footnote 4: In the next section, we introduce a hybrid form of graph accommodating mixes of the two trader categories.)

The next formulation solves the case in which all traders fall under case 2 of Theorem 3; that is, all traders are aggregating and divisible, and exhibit linear pricing. This case can be represented using the following linear program, corresponding to our multi-unit graph:

max Σ_{i∈B, j∈S} w_ij q_ij
s.t. Σ_{i∈T(j)} q_ij ≤ q̄_j, j ∈ S
Σ_{j∈T(i)} q_ij ≤ q̄_i, i ∈ B
q_ij ≥ 0, j ∈ S, i ∈ B

Recall that the q_ij variables in the solution represent the number of units that buyer i procures from seller j. This formulation is known as the network transportation problem with inequality constraints, for which efficient algorithms are available [1]. It is a well-known property of the transportation problem (and flow problems on pure networks in general) that given integer input values, the optimal solution is guaranteed to be integer as well. Figure 1 demonstrates the transformation of a set of bids to a transportation problem instance.

Figure 1: Multi-unit matching with two boolean attributes. (a) Bids, with offers to buy in the left column and offers to sell at right. q@p indicates an offer to trade q units at price p per unit. Configurations are described in terms of constraints on attribute values. (b) Corresponding multi-unit assignment
model. W represents arc weights (unit surplus), s represents source (exogenous) flow, and t represents sink quantity.

The problem becomes significantly harder when aggregation is given as an option to bidders, requiring various enhancements to the basic multi-unit bipartite graph described above. In general, we consider traders that are either aggregating or not, with either divisible or AON offers. Initially we examine a special case, which at the same time demonstrates the hardness of the problem but still carries computational advantages. We designate one side (e.g., buyers) as restrictive (AON and non-aggregating), and the other side (sellers) as unrestrictive (divisible and aggregating). This problem can be represented using the following integer programming formulation:

max Σ_{i∈B, j∈S} w_ij q_ij
s.t. Σ_{i∈T(j)} q̄_i q_ij ≤ q̄_j, j ∈ S
Σ_{j∈T(i)} q_ij ≤ 1, i ∈ B
q_ij ∈ {0, 1}, j ∈ S, i ∈ B (3)

This formulation is a restriction of the generalized assignment problem (GAP) [13]. Although GAP is known to be NP-hard, it can be solved relatively efficiently by exact or approximate algorithms. GAP is more general than the formulation above as it allows buy-side quantities (q̄_i above) to be different for each potential seller. That this formulation is NP-hard as well (even the case of a single seller corresponds to the knapsack problem) illustrates the drastic increase in complexity when traders with different constraints are admitted to the same problem instance.

Other than the special case above, we found no advantage in limiting AON constraints when traders may specify aggregation constraints. Therefore, the next generalization allows any combination of the two boolean constraints; that is, any trader chooses among four bid types:

NI: Bid AON and not aggregating.
AD: Bid allows aggregation and divisibility.
AI: Bid AON, allows aggregation (quantity can be aggregated across configurations, as long as
it sums to the whole amount).
ND: No aggregation, divisibility (one trade, but smaller quantities are acceptable).

To formulate an integer programming representation for the problem, we introduce the following variables. Boolean (0/1) variables r_i and r_j indicate whether buyer i and seller j participate in the solution (used for AON traders). Another indicator variable, y_ij, applied to non-aggregating buyer i and seller j, is one iff i trades with j. For aggregating traders, y_ij is not constrained.

max Σ_{i∈B, j∈S} w_ij q_ij (4a)
s.t. Σ_{j∈T(i)} q_ij = q̄_i r_i, i ∈ AI_b (4b)
Σ_{j∈T(i)} q_ij ≤ q̄_i r_i, i ∈ AD_b (4c)
Σ_{i∈T(j)} q_ij = q̄_j r_j, j ∈ AI_s (4d)
Σ_{i∈T(j)} q_ij ≤ q̄_j r_j, j ∈ AD_s (4e)
q_ij ≤ q̄_i y_ij, i ∈ ND_b, j ∈ T(i) (4f)
q_ij ≤ q̄_j y_ij, j ∈ NI_s, i ∈ T(j) (4g)
Σ_{j∈T(i)} y_ij ≤ r_i, i ∈ NI_b ∪ ND_b (4h)
Σ_{i∈T(j)} y_ij ≤ r_j, j ∈ NI_s ∪ ND_s (4i)
int q_ij (4j)
y_ij, r_j, r_i ∈ {0, 1} (4k)

Figure 2: Generalized network flow model. B1 is a buyer in AD, B2 ∈ NI, B3 ∈ AI, B4 ∈ ND. V1 is a seller in ND, V2 ∈ AI, V4 ∈ AD. The g values represent arc gains.

Problem (4) has additional structure as a generalized min-cost flow problem with integral flow.⁵ A generalized flow network is a network in which each arc may have a gain factor, in addition to the pure network parameters (which are flow limits and costs). Flow in an arc is then multiplied by its gain factor, so that the flow that enters the end node of an arc equals the flow that entered from its start node, multiplied by the gain factor of the arc. The network model can in turn be translated into an IP formulation that captures such structure. The generalized min-cost flow problem is well-studied and has a multitude of efficient algorithms [1]. The faster algorithms are polynomial in the number of arcs and the logarithm of the maximal gain; that is,
performance is not strongly polynomial but is polynomial in the size of the input. The main benefit of this graphical formulation to our matching problem is that it provides a very efficient linear relaxation. Integer programming algorithms such as branch-and-bound use solutions to the linear relaxation instance to bound the optimal integer solution. Since network flow algorithms are much faster than arbitrary linear programs (generalized network flow simplex algorithms have been shown to run in practice only 2 or 3 times slower than pure network min-cost flow [1]), we expect a branch-and-bound solver for the matching problem to show improved performance when taking advantage of network flow modeling.

The network flow formulation is depicted in Figure 2. Non-restrictive traders are treated as in Figure 1. For a non-aggregating buyer, a single unit from the source will saturate up to one of the y_ij for all j, and be multiplied by q̄_i. If i ∈ ND, the end node of y_ij will function as a sink that may drain up to q̄_i of the entering flow. For i ∈ NI we use an indicator (0/1) arc r_i, on which the flow is multiplied by q̄_i. Trader i trades the full quantity iff r_i = 1. At the seller side, the end node of a q_ij arc functions as a source for sellers j ∈ ND, in order to let the flow through y_ij arcs be 0 or q̄_j. The flow is then multiplied by 1/q̄_j, so 0/1 flows enter an end node which can drain either 1 or 0 units. For sellers j ∈ NI, arcs r_j ensure AON similarly to the arcs r_i for buyers.

Having established this framework, we are ready to accommodate more flexible versions of side constraints. (Footnote 5: Constraint (4j) could be omitted (yielding computational savings) if non-integer quantities are allowed. Here and henceforth we assume the harder problem, where divisibility is with respect to integers.) The first generalization is to replace the boolean AON constraint with divisibility down to q̲, the minimal quantity. In our
network flow instance we simply need to turn the node of the constrained trader i (e.g., the node B3 in Figure 2) into a sink that can drain up to q̄_i − q̲_i units of flow. The integer program (4) can also be easily changed to accommodate this extension. Using gains, we can also apply batch size constraints. If a trader specifies a batch size β, we change the gain on the r arcs to β, and set the available flow of its origin to the maximal number of batches, q̄_i/β.

5.3 Nonlinear Pricing

A key assumption in handling aggregation up to this point is linear pricing, which enables us to limit attention to a single unit price. Divisibility without linear pricing allows expression of concave willingness-to-pay functions, corresponding to convex preference relations. Bidders may often wish to express non-convex offer sets, for example, due to fixed costs or switching costs in production settings [21]. We consider nonlinear pricing in the form of enumerated payment schedules; that is, defining values û(x, q) for a select set of quantities q. For the indivisible case, these points are distinguished in the offer set by satisfying the following:

∃π. (x, q, i, ∗, π) ∈ O^T_i ∧ ¬∃q′ < q. (x, q′, i, ∗, π) ∈ O^T_i.

(cf.
Definition 8, which defines the maximum quantity, q̄, as the largest of these.) For the divisible case, the distinguished quantities are those where the unit price changes, which can be formalized similarly.

To handle nonlinear pricing, we augment the network to include flow possibilities corresponding to each of the enumerated quantities, plus additional structure to enforce exclusivity among them. In other words, the network treats the offer for a given quantity as in Section 5.2, and embeds this in an XOR relation to ensure that each trader picks only one of these quantities. Since for each such quantity choice we can apply Theorem 3 or 4, the solution we get is in fact the solution to GMAP. The network representation of the XOR relation (which can be embedded into the network of Figure 2) is depicted in Figure 3.

For a trader i with K XOR quantity points, we define dummy variables z^k_i, k = 1, ..., K. Since we consider trades between every pair of quantity points, we also have q^k_ij, k = 1, ..., K. For buyer i ∈ AI with XOR points at quantities q̄^k_i, we replace (4b) with the following constraints:

Σ_{j∈T(i)} q^k_ij = q̄^k_i z^k_i, k = 1, ..., K
Σ_{k=1}^K z^k_i = r_i
z^k_i ∈ {0, 1}, k = 1, ...
, K (5) 5.4 Homogeneity Constraints The model (4) handles constraints over the aggregation of quantities from different trading partners.\nWhen aggregation is allowed, the formulation permits trades involving arbitrary combinations of configurations.\nA homogeneity constraint [4] restricts such combinations, by requiring that configurations aggregated in an overall deal must agree on some or all attributes.\n116 Figure 3: Extending the network flow model to express an XOR over quantities.\nB2 has 3 XOR points for 6, 3, or 5 units.\nIn the presence of homogeneity constraints, we can no longer apply the convenient separation of GMAP into MMP plus global bipartite optimization, as the solution to GMAP may include trades not part of any MMP solution.\nFor example, let buyer b specify an offer for maximum quantity 10 of various acceptable configurations, with a homogeneity constraint over the attribute color.\nThis means b is willing to aggregate deals over different trading partners and configurations, as long as all are the same color.\nIf seller s can provide 5 blue units or 5 green units, and seller s can provide only 5 green units, we may prefer that b and s trade on green units, even if the local surplus of a blue trade is greater.\nLet {x1, ... 
, xH} be attributes that some trader constrains to be homogeneous.\nTo preserve the network flow framework, we need to consider, for each trader, every point in the product domain of these homogeneous attributes.\nThus, for every assignment \u02c6x to the homogeneous attributes, we compute MMP(b, s) under the constraint that configurations are consistent with \u02c6x.\nWe apply the same approach as in Section 5.3: solve the global optimization, such that the alternative \u02c6x assignments for each trader are combined under XOR semantics, thus enforcing homogeneity constraints.\nThe size of this network is exponential in the number of homogeneous attributes, since we need a node for each point in the product domain of all the homogeneous attributes of each trader.6 Hence this solution method will only be tractable in applications were the traders can be limited to a small number of homogeneous attributes.\nIt is important to note that the graph needs to include a node only for each point that potentially matches a point of the other side.\nIt is therefore possible to make the problem tractable by limiting one of the sides to a less expressive bidding language, and by that limit the set of potential matches.\nFor example, if sellers submit bounded sets of XOR points, we only need to consider the points in the combined set offered by the sellers, and the reduction to network flow is polynomial regardless of the number of homogeneous attributes.\nIf such simplifications do not apply, it may be preferable to solve the global problem directly as a single optimization problem.\nWe provide the formulation for the special case of divisibility (with respect to integers) and configuration parity.\nLet i index buyers, j sellers, and H homogeneous attributes.\nVariable xh ij \u2208 Xh represents the value of attribute Xh in the trade between buyer i and seller j. Integer variable qij represents the quantity of the trade (zero for no trade) between i and j. 
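To make the product-domain approach concrete, the following minimal sketch (hypothetical toy data and function names, not the paper's implementation) enumerates each assignment $\hat{x}$ of a single homogeneous attribute, color, and solves the resulting divisible, linear-pricing subproblem for one buyer with a greedy fill:

```python
# A minimal sketch of the product-domain approach for homogeneity constraints,
# using hypothetical toy data that mirrors the blue/green example above.
# Buyer b wants up to 10 units, all of which must share one color.
# Each seller offers, per color: (available units, per-unit trade surplus with b).
sellers = {
    "s1": {"blue": (5, 7), "green": (5, 4)},  # blue yields higher local surplus
    "s2": {"green": (5, 4)},                  # s2 can supply only green
}
buyer_max_qty = 10
colors = {c for offer in sellers.values() for c in offer}

def best_allocation():
    """For each assignment x_hat of the homogeneous attribute (here: color),
    solve the divisible, linear-pricing subproblem, then keep the assignment
    with the highest total surplus."""
    best_surplus, best_color, best_trades = 0, None, {}
    for color in colors:
        remaining, total, trades = buyer_max_qty, 0, {}
        # With linear pricing and a single buyer, filling from the highest
        # per-unit surplus downward is optimal for the fixed color.
        offers = sorted(
            ((name, *offer[color]) for name, offer in sellers.items()
             if color in offer),
            key=lambda t: -t[2])
        for name, qty, unit_surplus in offers:
            q = min(qty, remaining)
            if q > 0:
                trades[name] = q
                total += q * unit_surplus
                remaining -= q
        if total > best_surplus:
            best_surplus, best_color, best_trades = total, color, trades
    return best_surplus, best_color, best_trades

surplus, color, trades = best_allocation()
# Green wins (5 + 5 = 10 units, surplus 40) even though the blue trade with s1
# has the higher local surplus (35), matching the example in the text.
```

Even in this tiny instance, the globally best assignment (green) excludes the locally best pairwise trade (blue with s1), which is exactly why GMAP cannot be separated into MMP plus bipartite optimization under homogeneity constraints.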
Footnote 6: If traders differ on which attributes they express such constraints over, we can limit consideration to the relevant alternatives. The complexity will still be exponential, but in the maximum number of homogeneous attributes for any pair of traders.

$$\max \sum_{i \in B,\, j \in S} \left[ \hat{u}^B_i(x_{ij}, q_{ij}) - \hat{u}^S_j(x_{ij}, q_{ij}) \right]$$
subject to
$$\sum_{j \in S} q_{ij} \le \bar{q}_i, \qquad i \in B$$
$$\sum_{i \in B} q_{ij} \le \bar{q}_j, \qquad j \in S$$
$$x^h_{1j} = x^h_{2j} = \cdots = x^h_{|B|j}, \qquad j \in S,\ h \in \{1, \ldots, H\}$$
$$x^h_{i1} = x^h_{i2} = \cdots = x^h_{i|S|}, \qquad i \in B,\ h \in \{1, \ldots, H\} \tag{6}$$

Table 1 summarizes the mapping we presented from allocation constraints to the complexity of solving GMAP. Configuration parity is assumed for all cases but the first.

6. EXPERIMENTAL RESULTS

We approach the experimental aspect of this work with two objectives. First, we seek a general idea of the sizes and types of clearing problems that can be solved under given time constraints. Second, we compare the performance of a straightforward integer program, as in (4), with an integer program based on the network formulations developed here. Since we used CPLEX, a commercial optimization tool, the second objective could be achieved only to the extent that CPLEX can take advantage of network structure present in a model.

We found that in addition to the problem size (in terms of the number of traders), the number of aggregating traders plays a crucial role in determining complexity. When most of the traders aggregate, problems of larger sizes can be solved quickly. For example, our IP model solved instances with 600 buyers and 500 sellers, 90% of whom aggregate, in less than two minutes. When the aggregating ratio was reduced to 80% for the same data, solution time was just under five minutes. These results motivated us to develop a new network model. Rather than treat non-aggregating traders as a special case, the new model takes advantage of the single-unit nature of non-aggregating trades (treating the aggregating traders as the special case). This new model outperformed our other models on most problem instances, the exceptions being those in which aggregating traders constitute a vast majority (at least 80%). The new model (Figure 4) has a single node for each non-aggregating trader, with a single-unit arc designating a match to another non-aggregating trader. An aggregating trader has a node for each potential match, connected (via y arcs) to a mutual source node. Unlike the previous model, we allow fractional flow in this case, representing the traded fraction of the buyer's total quantity.7

We tested all three models on random data in the form of bipartite graphs encoding MMP solutions. In our experiments, each trader has a maximum quantity uniformly distributed over [30, 70], and a minimum quantity uniformly distributed between zero and the maximum quantity. Each buyer/seller pair is selected as matching with probability 0.75, with matches assigned a surplus uniformly distributed over [10, 70]. Whereas the size of the problem is defined by the number of traders on each side, the problem complexity depends on the product |B| × |S|. The tests depicted in Figures 5-7 are for the worst case |B| = |S|, with each data point averaged over six samples. In the figures, the direct IP (4) is designated SW, our first network model (Figure 2) NW, and our revised network model (Figure 4) NW2.

Footnote 7: Traded quantity remains integer.

Aggregation    | Hom. attr.  | Divisibility     | Linear pricing      | Technique              | Complexity
---------------|-------------|------------------|---------------------|------------------------|-------------------
No aggregation | N/A         | Any              | Not required        | Assignment problem     | Polynomial
All aggregate  | None        | Down to 0        | Required            | Transportation problem | Polynomial
One side       | None        | Aggr. side div.  | Aggr. side          | GAP                    | NP-hard
Optional       | None        | Down to q, batch | Required            | Generalized netwk flow | NP-hard
Optional       | Bounded     | Down to q, batch | Bounded size schdl. | Generalized netwk flow | NP-hard
Optional       | Not bounded | Down to q, batch | Not required        | Nonlinear opt.         | Depends on $\hat{u}(x, q)$
Table 1: Mapping from combinations of allocation constraints to the solution methods of GMAP. "One side" means that one side aggregates and is divisible, and the other side is restrictive. "Batch" means that traders may submit batch sizes.

Figure 4: Generalized network flow model. B1 is a buyer in AD, B2 ∈ AI, B3 ∈ NI, B4 ∈ ND. V1 is a seller in AD, V2 ∈ AI, V4 ∈ ND. The g values represent arc gains, and the W values represent weights.

Figure 5: Average performance of models when 30% of traders aggregate.
Figure 6: Average performance of models when 50% of traders aggregate.
Figure 7: Average performance of models when 70% of traders aggregate.
Figure 8: Performance of models when varying the percentage of aggregating traders.

Figure 8 shows how the various models are affected by a change in the percentage of aggregating traders, holding problem size fixed.8 Due to the integrality constraints, we could not test available algorithms specialized for network-flow problems on our test problems. Thus, we cannot fully evaluate the potential gain attributable to network structure. However, the model we built based on the insight from the network structure clearly provided a significant speedup, even without using a special-purpose algorithm. Model NW2 provided speedups of a factor of 4-10 over the model SW. This was consistent throughout the problem sizes, including the smaller sizes for which the speedup is not visually apparent on the chart.

Footnote 8: All tests were performed on Intel 3.4 GHz processors with 2048 KB cache. Tests that did not complete by the one-hour time limit were recorded as 4000 seconds.

7. CONCLUSIONS

The implementation and deployment of market exchanges requires the development of bidding languages, information feedback policies, and clearing algorithms that are suitable for the target domain, while paying heed to the incentive properties of the resulting mechanisms. For multiattribute exchanges, the space of feasible such mechanisms is constrained by computational limitations imposed by the clearing process. The extent to which the space of feasible mechanisms can be quantified a priori will facilitate the search for such exchanges in the full mechanism design problem.

In this work, we investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades. We developed a formal semantic framework for characterizing expressible offers, and introduced some basic classes of restrictions. Our key technical results identify sets of conditions under which the overall matching problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades. Based on these results, we developed network flow models for the overall clearing problem, which facilitate classification of problem versions by computational complexity, and provide guidance for developing solution algorithms and relaxing bidding constraints.

8. ACKNOWLEDGMENTS

This work was supported in part by NSF grant IIS-0205435, and the STIET program under NSF IGERT grant 0114368. We are grateful for comments from an anonymous reviewer. Some of the underlying ideas were developed while the first two authors worked at TradingDynamics Inc. and Ariba Inc. in 1999-2001 (cf. US Patent 6,952,682). We thank Yoav Shoham, Kumar Ramaiyer, and Gopal Sundaram for fruitful discussions about multiattribute auctions in that time frame.

9. REFERENCES

[1] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows. Prentice-Hall, 1993.
[2] F. Bacchus and A. Grove. Graphical models for preference and utility. In Eleventh Conference on Uncertainty in Artificial Intelligence, pages 3-10, Montreal, 1995.
[3] M. Bichler. The Future of e-Markets: Multi-Dimensional Market Mechanisms. Cambridge U.
Press, New York, NY, USA, 2001.
[4] M. Bichler and J. Kalagnanam. Configurable offers and winner determination in multi-attribute auctions. European Journal of Operational Research, 160:380-394, 2005.
[5] M. Bichler, M. Kaukal, and A. Segev. Multi-attribute auctions for electronic procurement. In Proceedings of the 1st IBM IAC Workshop on Internet Based Negotiation Technologies, 1999.
[6] C. Boutilier, T. Sandholm, and R. Shields. Eliciting bid taker non-price preferences in (combinatorial) auctions. In Nineteenth National Conference on Artificial Intelligence, pages 204-211, San Jose, 2004.
[7] F. Branco. The design of multidimensional auctions. RAND Journal of Economics, 28(1):63-81, 1997.
[8] Y.-K. Che. Design competition through multidimensional auctions. RAND Journal of Economics, 24(4):668-680, 1993.
[9] G. Debreu. Topological methods in cardinal utility theory. In K. Arrow, S. Karlin, and P. Suppes, editors, Mathematical Methods in the Social Sciences. Stanford University Press, 1959.
[10] N. Economides and R. A. Schwartz. Electronic call market trading. Journal of Portfolio Management, 21(3), 1995.
[11] Y. Engel and M. P. Wellman. Multiattribute utility representation for willingness-to-pay functions. Tech. report, Univ. of Michigan, 2006.
[12] E. Fink, J. Johnson, and J. Hu. Exchange market for complex goods: Theory and experiments. Netnomics, 6(1):21-42, 2004.
[13] M. L. Fisher, R. Jaikumar, and L. N. Van Wassenhove. A multiplier adjustment method for the generalized assignment problem. Management Science, 32(9):1095-1103, 1986.
[14] J. Gong. Exchanges for complex commodities: Search for optimal matches. Master's thesis, University of South Florida, 2002.
[15] C. Gonzales and P. Perny. GAI networks for decision making under certainty. In IJCAI-05 Workshop on Preferences, Edinburgh, 2005.
[16] J. R. Kalagnanam, A. J. Davenport, and H. S. Lee. Computational aspects of clearing continuous call double auctions with assignment constraints and indivisible demand. Electronic Commerce Research, 1(3):221-238, 2001.
[17] R. L. Keeney and H. Raiffa. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Wiley, 1976.
[18] N. Nisan. Bidding and allocation in combinatorial auctions. In Second ACM Conference on Electronic Commerce, pages 1-12, Minneapolis, MN, 2000.
[19] D. C. Parkes and J. Kalagnanam. Models for iterative multiattribute procurement auctions. Management Science, 51:435-451, 2005.
[20] T. Sandholm and S. Suri. Side constraints and non-price attributes in markets. In IJCAI-01 Workshop on Distributed Constraint Reasoning, Seattle, 2001.
[21] L. J. Schvartzman and M. P. Wellman. Market-based allocation with indivisible bids. In AAMAS-05 Workshop on Agent-Mediated Electronic Commerce, Utrecht, 2005.
[22] J. Shachat and J. T. Swarthout. Procurement auctions for differentiated goods. Technical Report 0310004, Economics Working Paper Archive at WUSTL, Oct. 2003.
[23] A. V. Sunderam and D. C. Parkes. Preference elicitation in proxied multiattribute auctions. In Fourth ACM Conference on Electronic Commerce, pages 214-215, San Diego, 2003.
[24] P. R. Wurman, M. P. Wellman, and W. E.
Walsh.\nA parametrization of the auction design space.\nGames and Economic Behavior, 35:304-338, 2001.\n119","lvl-3":"Bid Expressiveness and Clearing Algorithms in Multiattribute Double Auctions\nABSTRACT\nWe investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades.\nWe develop a formal semantic framework for characterizing expressible offers, and show conditions under which the allocation problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades.\nWe analyze the bilateral matching problem while taking into consideration relevant results from multiattribute utility theory.\nNetwork flow models we develop for computing global allocations facilitate classification of the problem space by computational complexity, and provide guidance for developing solution algorithms.\nExperimental trials help distinguish tractable problem classes for proposed solution techniques.\n1.\nBACKGROUND\nA multiattribute auction is a market-based mechanism where goods are described by vectors of features, or attributes [3, 5, 8, 19].\nSuch mechanisms provide traders with the ability to negotiate over a multidimensional space of potential deals, delaying commitment to specific configurations until the most promising candidates are identified.\nFor example, in a multiattribute auction for computers, the good may be defined by attributes such as processor speed, memory, and hard disk capacity.\nAgents have varying preferences (or costs) associated with the possible configurations.\nFor example, a buyer may be willing to purchase a computer with a 2 GHz processor, 500 MB of memory, and a 50 GB hard disk for a price no greater than $500, or the same computer with 1GB of memory for a price no greater than $600.\nExisting research in multiattribute 
auctions has focused primarily on one-sided mechanisms, which automate the process whereby a single agent negotiates with multiple potential trading partners [8, 7, 19, 5, 23, 22].\nModels of procurement typically assume the buyer has a value function, v, ranging over the possible configurations, X, and that each seller i can similarly be associated with a cost function ci over this domain.\nThe role of the auction is to elicit these functions (possibly approximate or partial versions), and identify the surplus-maximizing deal.\nIn this case, such an outcome would be arg maxi, x v (x) \u2212 ci (x).\nThis problem can be translated into the more familiar auction for a single good without attributes by computing a score for each attribute vector based on the seller valuation function, and have buyers bid scores.\nAnalogs of the classic first - and second-price auctions correspond to firstand second-score auctions [8, 7].\nIn the absence of a published buyer scoring function, agents on both sides may provide partial specifications of the deals they are willing to engage.\nResearch on such auctions has, for example, produced iterative mechanisms for eliciting cost functions incrementally [19].\nOther efforts focus on the optimization problem facing the bid taker, for example considering side constraints on the combination of trades comprising an overall deal [4].\nSide constraints have also been analyzed in the context of combinatorial auctions [6, 20].\nOur emphasis is on two-sided multiattribute auctions, where multiple buyers and sellers submit bids, and the objective is to construct a set of deals maximizing overall surplus.\nPrevious research on such auctions includes works by Fink et al. 
[12] and Gong [14], both of which consider a matching problem for continuous double auctions (CDAs), where deals are struck whenever a pair of compatible bids is identified.\nIn a call market, in contrast, bids accumulate until designated times (e.g., on a periodic or scheduled basis) at which the auction clears by determining a comprehensive match over the entire set of bids.\nBecause the optimization is performed over an aggregated scope, call markets often enjoy liquidity and efficiency advantages over CDAs [10].1 Clearing a multiattribute CDA is much like clearing a one-sided multiattribute auction.\nBecause nothing happens between bids, the problem is to match a given new bid (say, an offer to buy) with the existing bids on the other (sell) side.\nMultiattribute call markets are potentially much more complex.\nConstructing an optimal overall matching may require consideration of many different combina1In the interim between clears, call markets may also disseminate price quotes providing summary information about the state of the auction [24].\nSuch price quotes are often computed based on hypothetical clears, and so the clearing algorithm may be invoked more frequently than actual market clearing operations.\ntions of trades, among the various potential trading-partner pairings.\nThe problem can be complicated by restrictions on overall assignments, as expressed in side constraints [16].\nThe goal of the present work is to develop a general framework for multiattribute call markets, to enable investigation of design issues and possibilities.\nIn particular, we use the framework to explore tradeoffs between expressive power of agent bids and computational properties of auction clearing.\nWe conduct our exploration independent of any consideration of strategic issues bearing on mechanism design.\nAs with analogous studies of combinatorial auctions [18], we intend that tradeoffs quantified in this work can be combined with incentive factors within a 
comprehensive overall approach to multiattribute auction design.\nWe provide the formal semantics of multiattribute offers in our framework in the next section.\nWe abstract, where appropriate, from the specific language used to express offers, characterizing expressiveness semantically in terms of what deals may be offered.\nThis enables us to identify some general conditions under which the problem of multilateral matching can be decomposed into bilateral matching problems.\nWe then develop a family of network flow problems that capture corresponding classes of multiattribute call market optimizations.\nExperimental trials provide preliminary confirmation that the network formulations provide useful structure for implementing clearing algorithms.\n2.\nMULTIATTRIBUTE OFFERS 2.1 Basic Definitions\n2.2 Specifying Offer Sets\n2.3 Willingness to Pay\n4.\nUTILITY REPRESENTATION AND MMP\n4.1 Additive Forms\n4.2 Multiattribute Utility Theory\n5.\nSOLVING GMAP UNDER ALLOCATION CONSTRAINTS\n5.1 Notation and Graphical Representation\n5.2 Handling Indivisibility and Aggregation Constraints\n5.3 Nonlinear Pricing\n5.4 Homogeneity Constraints\n6.\nEXPERIMENTAL RESULTS\n7.\nCONCLUSIONS\nThe implementation and deployment of market exchanges requires the development of bidding languages, information feedback policies, and clearing algorithms that are suitable for the target domain, while paying heed to the incentive properties of the resulting mechanisms.\nFor multiattribute exchanges, the space of feasible such mechanisms is constrained by computational limitations imposed by the clearing process.\nThe extent to which the space of feasible mechanisms may be quantified a priori will facilitate the search for such exchanges in the full mechanism design problem.\nIn this work, we investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of 
determining an optimal set of trades.\nWe developed a formal semantic framework for characterizing expressible offers, and introduced some basic classes of restrictions.\nOur key technical results identify sets of conditions under which the overall matching problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades.\nBased on these results, we developed network flow models for the overall clearing problem, which facilitate classification of problem versions by computational complexity, and provide guidance for developing solution algorithms and relaxing bidding constraints.","lvl-4":"Bid Expressiveness and Clearing Algorithms in Multiattribute Double Auctions\nABSTRACT\nWe investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades.\nWe develop a formal semantic framework for characterizing expressible offers, and show conditions under which the allocation problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades.\nWe analyze the bilateral matching problem while taking into consideration relevant results from multiattribute utility theory.\nNetwork flow models we develop for computing global allocations facilitate classification of the problem space by computational complexity, and provide guidance for developing solution algorithms.\nExperimental trials help distinguish tractable problem classes for proposed solution techniques.\n1.\nBACKGROUND\nA multiattribute auction is a market-based mechanism where goods are described by vectors of features, or attributes [3, 5, 8, 19].\nSuch mechanisms provide traders with the ability to negotiate over a multidimensional space of potential deals, delaying commitment to specific configurations until the most 
promising candidates are identified.\nFor example, in a multiattribute auction for computers, the good may be defined by attributes such as processor speed, memory, and hard disk capacity.\nAgents have varying preferences (or costs) associated with the possible configurations.\nExisting research in multiattribute auctions has focused primarily on one-sided mechanisms, which automate the process whereby a single agent negotiates with multiple potential trading partners [8, 7, 19, 5, 23, 22].\nThe role of the auction is to elicit these functions (possibly approximate or partial versions), and identify the surplus-maximizing deal.\nThis problem can be translated into the more familiar auction for a single good without attributes by computing a score for each attribute vector based on the seller valuation function, and have buyers bid scores.\nAnalogs of the classic first - and second-price auctions correspond to firstand second-score auctions [8, 7].\nIn the absence of a published buyer scoring function, agents on both sides may provide partial specifications of the deals they are willing to engage.\nResearch on such auctions has, for example, produced iterative mechanisms for eliciting cost functions incrementally [19].\nOther efforts focus on the optimization problem facing the bid taker, for example considering side constraints on the combination of trades comprising an overall deal [4].\nSide constraints have also been analyzed in the context of combinatorial auctions [6, 20].\nOur emphasis is on two-sided multiattribute auctions, where multiple buyers and sellers submit bids, and the objective is to construct a set of deals maximizing overall surplus.\nBecause the optimization is performed over an aggregated scope, call markets often enjoy liquidity and efficiency advantages over CDAs [10].1 Clearing a multiattribute CDA is much like clearing a one-sided multiattribute auction.\nBecause nothing happens between bids, the problem is to match a given new bid (say, 
an offer to buy) with the existing bids on the other (sell) side.\nMultiattribute call markets are potentially much more complex.\nConstructing an optimal overall matching may require consideration of many different combina1In the interim between clears, call markets may also disseminate price quotes providing summary information about the state of the auction [24].\nSuch price quotes are often computed based on hypothetical clears, and so the clearing algorithm may be invoked more frequently than actual market clearing operations.\nThe problem can be complicated by restrictions on overall assignments, as expressed in side constraints [16].\nThe goal of the present work is to develop a general framework for multiattribute call markets, to enable investigation of design issues and possibilities.\nIn particular, we use the framework to explore tradeoffs between expressive power of agent bids and computational properties of auction clearing.\nWe conduct our exploration independent of any consideration of strategic issues bearing on mechanism design.\nAs with analogous studies of combinatorial auctions [18], we intend that tradeoffs quantified in this work can be combined with incentive factors within a comprehensive overall approach to multiattribute auction design.\nWe provide the formal semantics of multiattribute offers in our framework in the next section.\nWe abstract, where appropriate, from the specific language used to express offers, characterizing expressiveness semantically in terms of what deals may be offered.\nThis enables us to identify some general conditions under which the problem of multilateral matching can be decomposed into bilateral matching problems.\nWe then develop a family of network flow problems that capture corresponding classes of multiattribute call market optimizations.\nExperimental trials provide preliminary confirmation that the network formulations provide useful structure for implementing clearing algorithms.\n7.\nCONCLUSIONS\nFor 
multiattribute exchanges, the space of feasible such mechanisms is constrained by computational limitations imposed by the clearing process.\nThe extent to which the space of feasible mechanisms may be quantified a priori will facilitate the search for such exchanges in the full mechanism design problem.\nIn this work, we investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades.\nWe developed a formal semantic framework for characterizing expressible offers, and introduced some basic classes of restrictions.\nOur key technical results identify sets of conditions under which the overall matching problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades.\nBased on these results, we developed network flow models for the overall clearing problem, which facilitate classification of problem versions by computational complexity, and provide guidance for developing solution algorithms and relaxing bidding constraints.","lvl-2":"Bid Expressiveness and Clearing Algorithms in Multiattribute Double Auctions\nABSTRACT\nWe investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades.\nWe develop a formal semantic framework for characterizing expressible offers, and show conditions under which the allocation problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades.\nWe analyze the bilateral matching problem while taking into consideration relevant results from multiattribute utility theory.\nNetwork flow models we develop for computing global allocations facilitate classification of the 
problem space by computational complexity, and provide guidance for developing solution algorithms.\nExperimental trials help distinguish tractable problem classes for proposed solution techniques.\n1.\nBACKGROUND\nA multiattribute auction is a market-based mechanism where goods are described by vectors of features, or attributes [3, 5, 8, 19].\nSuch mechanisms provide traders with the ability to negotiate over a multidimensional space of potential deals, delaying commitment to specific configurations until the most promising candidates are identified.\nFor example, in a multiattribute auction for computers, the good may be defined by attributes such as processor speed, memory, and hard disk capacity.\nAgents have varying preferences (or costs) associated with the possible configurations.\nFor example, a buyer may be willing to purchase a computer with a 2 GHz processor, 500 MB of memory, and a 50 GB hard disk for a price no greater than $500, or the same computer with 1GB of memory for a price no greater than $600.\nExisting research in multiattribute auctions has focused primarily on one-sided mechanisms, which automate the process whereby a single agent negotiates with multiple potential trading partners [8, 7, 19, 5, 23, 22].\nModels of procurement typically assume the buyer has a value function, v, ranging over the possible configurations, X, and that each seller i can similarly be associated with a cost function ci over this domain.\nThe role of the auction is to elicit these functions (possibly approximate or partial versions), and identify the surplus-maximizing deal.\nIn this case, such an outcome would be arg maxi, x v (x) \u2212 ci (x).\nThis problem can be translated into the more familiar auction for a single good without attributes by computing a score for each attribute vector based on the seller valuation function, and have buyers bid scores.\nAnalogs of the classic first - and second-price auctions correspond to firstand second-score auctions [8, 
7].\nIn the absence of a published buyer scoring function, agents on both sides may provide partial specifications of the deals they are willing to engage in.\nResearch on such auctions has, for example, produced iterative mechanisms for eliciting cost functions incrementally [19].\nOther efforts focus on the optimization problem facing the bid taker, for example considering side constraints on the combination of trades comprising an overall deal [4].\nSide constraints have also been analyzed in the context of combinatorial auctions [6, 20].\nOur emphasis is on two-sided multiattribute auctions, where multiple buyers and sellers submit bids, and the objective is to construct a set of deals maximizing overall surplus.\nPrevious research on such auctions includes works by Fink et al. [12] and Gong [14], both of which consider a matching problem for continuous double auctions (CDAs), where deals are struck whenever a pair of compatible bids is identified.\nIn a call market, in contrast, bids accumulate until designated times (e.g., on a periodic or scheduled basis) at which the auction clears by determining a comprehensive match over the entire set of bids.\nBecause the optimization is performed over an aggregated scope, call markets often enjoy liquidity and efficiency advantages over CDAs [10].1 Clearing a multiattribute CDA is much like clearing a one-sided multiattribute auction.\nBecause nothing happens between bids, the problem is to match a given new bid (say, an offer to buy) with the existing bids on the other (sell) side.\nMultiattribute call markets are potentially much more complex.\nConstructing an optimal overall matching may require consideration of many different combinations of trades, among the various potential trading-partner pairings.\n1In the interim between clears, call markets may also disseminate price quotes providing summary information about the state of the auction [24].\nSuch price quotes are often computed based on hypothetical clears, and so the clearing algorithm may be invoked more frequently than actual market clearing operations.\nThe problem can be complicated by restrictions on overall assignments, as expressed in side constraints [16].\nThe goal of the present work is to develop a general framework for multiattribute call markets, to enable investigation of design issues and possibilities.\nIn particular, we use the framework to explore tradeoffs between expressive power of agent bids and computational properties of auction clearing.\nWe conduct our exploration independent of any consideration of strategic issues bearing on mechanism design.\nAs with analogous studies of combinatorial auctions [18], we intend that tradeoffs quantified in this work can be combined with incentive factors within a comprehensive overall approach to multiattribute auction design.\nWe provide the formal semantics of multiattribute offers in our framework in the next section.\nWe abstract, where appropriate, from the specific language used to express offers, characterizing expressiveness semantically in terms of what deals may be offered.\nThis enables us to identify some general conditions under which the problem of multilateral matching can be decomposed into bilateral matching problems.\nWe then develop a family of network flow problems that capture corresponding classes of multiattribute call market optimizations.\nExperimental trials provide preliminary confirmation that the network formulations provide useful structure for implementing clearing algorithms.\n2.\nMULTIATTRIBUTE OFFERS\n2.1 Basic Definitions\nThe distinguishing feature of a multiattribute auction is that the goods are defined by vectors of attributes, x = (x1,..., xm), xj \u2208 Xj.\nA configuration is a particular attribute vector, x \u2208 X = X1 \u00d7 \u00b7\u00b7\u00b7 \u00d7 Xm.\nThe outcome of the auction is a set of bilateral trades.\nTrade t takes the form t = (x, q, b, s, \u03c0), signifying that agent b buys q > 0 units of configuration x from seller s, for
payment \u03c0 > 0.\nFor convenience, we use the notation xt to denote the configuration associated with trade t (and similarly for other elements of t).\nFor a set of trades T, we denote by Ti that subset of T involving agent i (i.e., b = i or s = i).\nLet T denote the set of all possible trades.\nA bid expresses an agent's willingness to participate in trades.\nWe specify the semantics of a bid in terms of offer sets.\nLet OTi \u2286 Ti denote agent i's trade offer set.\nIntuitively, this represents the trades in which i is willing to participate.\nHowever, since the outcome of the auction is a set of trades, several of which may involve agent i, we must in general consider willingness to engage in trade combinations.\nAccordingly, we introduce the combination offer set of agent i, OCi \u2286 2Ti.\n2.2 Specifying Offer Sets\nA fully expressive bid language would allow specification of arbitrary combination offer sets.\nWe instead consider a more limited class which, while restrictive, still captures most forms of multiattribute bidding proposed in the literature.\nOur bids directly specify part of the agent's trade offer set, and include further directives controlling how this can be extended to the full trade and combination offer sets.\nFor example, one way to specify a trade (buy) offer set would be to describe a set of configurations and quantities, along with the maximal payment one would exchange for each (x, q) specified.\nThis description could be by enumeration, or any available means of defining such a mapping.\nAn explicit set of trades in the offer set generally entails inclusion of many more implicit trades.\nWe assume payment monotonicity, which dictates that a buyer willing to trade at a given payment is willing to trade at any lower payment, and a seller at any higher payment: for all i, \u03c0 > \u03c0' > 0, (x, q, i, s, \u03c0) \u2208 OTi \u21d2 (x, q, i, s, \u03c0') \u2208 OTi, (x, q, b, i, \u03c0') \u2208 OTi \u21d2 (x, q, b, i, \u03c0) \u2208 OTi.\nWe also assume free disposal, which dictates that for all i, q > q' > 0, (x, q', i, s, \u03c0) \u2208 OTi \u21d2 (x, q, i, s, \u03c0) \u2208 OTi, (x, q, b, i, \u03c0) \u2208 OTi \u21d2 (x, q', b, i, \u03c0) \u2208 OTi.\nNote that the conditions for agents in the role of buyers and sellers are analogous.\nHenceforth,
for expository simplicity, we present all definitions with respect to buyers only, leaving the definition for sellers as understood.\nAllowing agents' bids to comprise offers from both buyer and seller perspectives is also straightforward.\nAn assertion that offers are divisible entails further implicit members in the trade offer set.\nDEFINITION 1 (DIVISIBLE OFFER).\nAgent i's offer is divisible down to q iff for all q' with q \u2264 q' < q, (x, q, i, s, \u03c0) \u2208 OTi \u21d2 (x, q', i, s, (q'\/q) \u03c0) \u2208 OTi.\nWe employ the shorthand divisible to mean divisible down to 0.\nThe definition above specifies arbitrary divisibility.\nIt would likewise be possible to define divisibility with respect to integers, or to any given finite granularity.\nNote that when offers are divisible, it suffices to specify one offer corresponding to the maximal quantity one is willing to trade for any given configuration, trading partner, and per-unit payment (called the price).\nAt the extreme of indivisibility are all-or-none offers.\nIn many cases, the agent will be indifferent with respect to different trading partners.\nIn that event, it may omit the partner element from trades directly specified in its offer set, and simply assert that its offer is anonymous.\nBecause omitting trading partner qualifications simplifies the exposition, we generally assume in the following that all offers are anonymous unless explicitly specified otherwise.\nExtending to the non-anonymous case is conceptually straightforward.\nWe employ the wild-card symbol \u2217 in place of an agent identifier to indicate that any agent is acceptable.\nTo specify a trade offer set, a bidder directly specifies a set of willing trades, along with any regularity conditions (e.g., divisibility, anonymity) that implicitly extend the set.\nThe full trade offer set is then defined by the closure of this direct set with respect to payment monotonicity, free disposal, and any applicable divisibility assumptions.\nWe next consider
the specification of combination offer sets.\nWithout loss of generality, we restrict each trade set T \u2208 OCi to include at most one trade for any combination of configuration and trading partner (multiple such trades are equivalent to one net trade aggregating the quantities and payments).\nThe key question is to what extent the agent is willing to aggregate deals across configurations or trading partners.\nOne possibility is disallowing any aggregation.\nDEFINITION 4 (NO AGGREGATION).\nThe no-aggregation combinations are given by ONAi = {\u2205} \u222a {{t} | t \u2208 OTi}.\nAgent i's offer exhibits non-aggregation iff OCi = ONAi.\nA more flexible policy is to allow aggregation across trading partners, keeping configuration constant.\nIn other words, we may create new trade offer combinations by splitting the common trade (quantity and payment, not necessarily proportionately) between the two sellers.\nIn some cases, it might be reasonable to form combinations by aggregating different configurations.\nAgent i's offer allows configuration aggregation iff in all such cases (and analogously when it is a seller), {(x, q', i, \u2217, (q'\/q) \u03c0), (x', q \u2212 q', i, \u2217, ((q \u2212 q')\/q) \u03c0')} \u222a T \u2208 OCi.\nNote that combination offer sets can accommodate offerings of configuration bundles.\nHowever, classes of bundles formed by partner or configuration aggregation are highly regular, covering only a specific type of bundle formed by splitting a desired quantity across configurations.\nThis is quite restrictive compared to the general combinatorial case.\n2.3 Willingness to Pay\nAn agent's trade offer set implicitly defines the agent's willingness to pay for any given configuration and quantity.\nWe assume anonymity to avoid conditioning our definitions on trading partner.\nWe use the symbol \u02c6u to recognize that willingness to pay can be viewed as a proxy for the agent's utility function, measured in monetary units.\nThe superscript B
distinguishes the buyer's willingness-to-pay function from a seller's willingness to accept, \u02c6uSi (x, q), defined as the minimum payment seller i will accept for q units of configuration x.\nWe omit the superscript where the distinction is inessential or clear from context.\nDEFINITION 8 (TRADE QUANTITY BOUNDS).\nAgent i's minimum trade quantity for configuration x is given by qi (x) = min {q | (x, q, i, \u2217, \u03c0) \u2208 OTi for some \u03c0}; the maximum trade quantity \u00af qi (x) is defined analogously with max.\nWhen the agent has no offers involving x, we take qi (x) = \u00af qi (x) = 0.\nIt is useful to define a special case where all configurations are offered in the same quantity range.\nDEFINITION 9 (CONFIGURATION PARITY).\nAgent i's offers exhibit configuration parity iff qi (x) > 0 \u2227 qi (x') > 0 \u21d2 qi (x) = qi (x') \u2227 \u00af qi (x) = \u00af qi (x').\nUnder configuration parity we drop the arguments from trade quantity bounds, yielding the constants \u00af q and q which apply to all offers.\nNote that linear pricing assumes divisibility down to qi (x).\nGiven linear pricing, we can define the unit willingness to pay, \u02c6ui (x) = \u02c6ui (x, \u00af qi (x)) \/ \u00af qi (x), and take \u02c6ui (x, q) = q\u02c6ui (x) for all qi (x) \u2264 q \u2264 \u00af qi (x).\nIn general, an agent's willingness to pay may depend on a context of other trades the agent is engaging in.\nDEFINITION 11 (WILLINGNESS TO PAY IN CONTEXT).\nAgent i's willingness to pay for quantity q of configuration x in the context of other trades T is given by the most i would pay to add q units of x to the trades it already holds in T.\nNote that the trade surplus does not depend on the payment, which is simply a transfer from buyer to seller.\nProofs of all the following results are provided in an extended version of this paper available from the authors.\nTHEOREM 3.\nSuppose each trader's offer satisfies one of the following conditions: 1.\nNo aggregation and configuration parity (Definitions 4 and 9).\n2.\nDivisibility, linear pricing, and configuration parity (Definitions 1, 10, and 9), with combination offer set defined as the minimal set consistent with configuration aggregation (Definition 6).2\nThen the solution to GMAP consists of a set of trades, each of which employs a configuration that solves MMP for its specified pair of traders.\nLet MMPd (b, s)
denote a modified version of MMP, where OTb and OTs are extended to assume divisibility (i.e., the offer sets are taken to be their closures under Definition 1).\nThen we can extend Theorem 3 to allow aggregating agents to maintain AON or min-quantity offers as follows.\nThe preceding results signify that under certain conditions, we can divide the global optimization problem into two parts: first find a bilateral trade that maximizes unit surplus for each pair of traders (or total surplus in the non-aggregation case), and then use the results to find a globally optimal set of trades.\nIn the following two sections we investigate each of these subproblems.\n4.\nUTILITY REPRESENTATION AND MMP\nWe turn next to consider the problem of finding a best deal between pairs of traders.\nThe complexity of MMP depends pivotally on the representation by bids of offer sets, an issue we have postponed to this point.\nNote that issues of utility representation and MMP apply to a broad class of multiattribute mechanisms, beyond the multiattribute call markets we emphasize.\nFor example, the complexity results contained in this section apply equally to the bidding problem faced by sellers in reverse auctions, given a published buyer scoring function.\nThe simplest representation of an offer set is a direct enumeration of configurations and associated quantities and payments.\nThis approach treats the configurations as atomic entities, making no use of attribute structure.\n2That is, the closure under configuration aggregation of ONAi.\nA common and inexpensive enhancement is to enable a trader to express sets of configurations, by specifying subsets of the domains of component attributes.\nAssociating a single quantity and payment with a set of configurations expresses indifference among them; hence we refer to such a set as an indifference range.3 Indifference ranges include the case of attributes with a natural ordering, in which a bid specifies a minimum or maximum acceptable attribute level.\nThe use of indifference
ranges can be convenient for MMP.\nThe compatibility of two indifference ranges is simply found by testing set intersection for each attribute, as demonstrated by the decision-tree algorithm of Fink et al. [12].\nAlternatively, bidders may specify willingness-to-pay functions \u02c6u in terms of compact functional forms.\nEnumeration-based representations, even when enhanced with indifference ranges, are ultimately limited by the exponential size of attribute space.\nFunctional forms may avoid this explosion, but only if \u02c6u reflects structure among the attributes.\nMoreover, even given a compact specification of \u02c6u, we gain computational benefits only if we can perform the matching without expanding the \u02c6u values of an exponential number of configuration points.\n4.1 Additive Forms\nOne particularly useful multiattribute representation is known as the additive scoring function.\nThough this form is widely used in practice and in the academic literature, it is important to stress the assumptions behind it.\nThe theory of multiattribute representation is best developed in the context where \u02c6u is interpreted as a utility function representing an underlying preference order [17].\nWe present the premises of additive utility theory in this section, and discuss some generalizations in the next.\nIn other words, the preference order over the projection of X on the attributes in Y is the same for any instantiation of the attributes in Z.\nA utility function over outcomes including money is quasi-linear if the function can be represented as a function over non-monetary attributes plus payments, \u03c0.\nInterpreting \u02c6u as a utility function over non-monetary attributes is tantamount to assuming quasi-linearity.\nEven when quasi-linearity is assumed, however, MPI over non-monetary attributes is not sufficient for the quasi-linear utility function to be additive.\nFor this, we also need each of the pairs (\u03c0, Xi), for any attribute Xi, to be
PI of the rest of the attributes.\n3These should not be mistaken for indifference curves, which express dependency between the attributes.\nIndifference curves can be expressed by the more elaborate utility representations discussed below.\nThis (by MAUT) in turn implies that the set of attributes including money is MPI and the utility function can be represented as u (x, \u03c0) = \u03a3mj = 1 fj (xj) \u2212 \u03c0.\nGiven that form, a willingness-to-pay function reflecting u can be represented additively, as \u02c6u (x) = \u03a3mj = 1 fj (xj).\nIn many cases the additivity assumption provides a practically crucial simplification of offer set elicitation.\nIn addition to compactness, additivity dramatically simplifies MMP.\nIf both sides provide additive \u02c6u representations, the globally optimal match reduces to finding the optimal match separately for each attribute.\nA common scenario in procurement has the buyer define an additive scoring function, while suppliers submit enumerated offer points or indifference ranges.\nThis model is still very amenable to MMP: for each element in a supplier's enumerated set, we optimize each attribute by finding the point in the supplier's allowable range that is most preferred by the buyer.\nA special type of scoring (more particularly, cost) function was defined by Bichler and Kalagnanam [4] and called a configurable offer.\nThis idea is geared towards procurement auctions: assuming suppliers are usually comfortable with expressing their preferences in terms of cost that is quasi-linear in every attribute, they can specify a price for a base offer, and additional cost for every change in a specific attribute level.\nThis model is essentially a "pricing out" approach [17].\nFor this case, MMP can still be optimized on a per-attribute basis.\nA similar idea has been applied to one-sided iterative mechanisms [19], in which sellers refine prices on a per-attribute basis at each iteration.\n4.2 Multiattribute Utility Theory\nUnder MPI, the tradeoffs between the attributes in each subset cannot be affected by the
value of other attributes.\nFor example, when buying a PC, a weaker CPU may increase the importance of the RAM compared to, say, the type of keyboard.\nSuch relationships cannot be expressed under an additive model.\nMultiattribute utility theory (MAUT) develops various compact representations of utility functions that are based on weaker structural assumptions [17, 2].\nThere are several challenges in adapting these techniques to multiattribute bidding.\nFirst, as noted above, the theory is developed for utility functions, which may behave differently from willingness-to-pay functions.\nSecond, computational efficiency of matching has not been an explicit goal of most work in the area.\nThird, adapting such representations to iterative mechanisms may be more challenging.\nOne representation that employs somewhat weaker assumptions than additivity, yet retains the summation structure, is the generalized additive (GA) decomposition: \u02c6u (x) = \u03a3gj = 1 fj (xj), (2) where the Xj are potentially overlapping sets of attributes, together exhausting the space X, and xj denotes the projection of x onto Xj.\nA key point from our perspective is that the complexity of the matching is similar to the complexity of optimizing a single function, since the sum function is in the form (2) as well.\nRecent work by Gonzales and Perny [15] provides an elicitation process for GA decomposable preferences under certainty, as well as an optimization algorithm for the GA decomposed function.\nThe complexity of exact optimization is exponential in the induced width of the graph.\nHowever, to become operational for multiattribute bidding this decomposition must be detectable and verifiable by statements over preferences with respect to price outcomes.\nWe are exploring this topic in ongoing work [11].\n5.\nSOLVING GMAP UNDER ALLOCATION CONSTRAINTS\nTheorems 2, 3, and 4 establish conditions under which GMAP solutions must comprise elements from constituent MMP solutions.\nIn Sections 5.1 and 5.2, we show how to compute these GMAP solutions, given the MMP solutions,
under these conditions.\nIn these settings, traders that aggregate partners also aggregate configurations; hence we refer to them simply as "aggregating" or "non-aggregating".\nSection 5.3 suggests a means to relax the linear pricing restriction employed in these constructions.\nSection 5.4 provides strategies for allowing traders to aggregate partners and restrict configuration aggregation at the same time.\n5.1 Notation and Graphical Representation\nOur clearing algorithms are based on network flow formulations of the underlying optimization problem [1].\nThe network model is based on a bipartite graph, in which nodes on the left side represent buyers, and nodes on the right represent sellers.\nWe denote the sets of buyers and sellers by B and S, respectively.\nWe define two graph families, one for the case of non-aggregating traders (called single-unit), and the other for the case of aggregating traders (called multi-unit).4 For both types, a single directed arc is placed from a buyer i \u2208 B to a seller j \u2208 S if and only if MMP (i, j) is nonempty.\nWe denote by T (i) the set of potential trading partners of trader i (i.e., the nodes connected to buyer or seller i in the bipartite graph).\nIn the single-unit case, we define the weight of an arc (i, j) as wij = \u03c3 (MMP (i, j)).\nNote that free disposal lets a buy offer receive a larger quantity than desired (and similarly for sell offers).\nFor the multi-unit case, the weights are wij = \u03c31 (MMP (i, j)), and we associate the quantity \u00af qi with the node for trader i.\nWe also use the notation qij in the mathematical formulations to denote partial fulfillment of qt for t = MMP (i, j).\n5.2 Handling Indivisibility and Aggregation Constraints\nUnder the restrictions of Theorems 2, 3, or 4, and when the solution to MMP is given, GMAP exhibits strong similarity to the problem of clearing double auctions with assignment constraints [16].\nA match in our bipartite representation corresponds to a
potential trade in which assignment constraints are satisfied.\nNetwork flow formulations have been shown to model this problem under the assumption of indivisibility and aggregation for all traders.\nThe novelty in this part of our work is the use of generalized network flow formulations for more complex cases where aggregation and divisibility may be controlled by traders.\nInitially we examine the simple case of no aggregation (Theorem 2).\nObserve that the optimal allocation is simply the solution to the well-known weighted assignment problem [1] on the single-unit bipartite graph described above.\nThe set of matches that maximizes the total weight of arcs corresponds to the set of trades that maximizes total surplus.\nNote that any form of (in)divisibility can also be accommodated in this model via the constituent MMP subproblems.\n4In the next section, we introduce a hybrid form of graph accommodating mixes of the two trader categories.\nThe next formulation solves the case in which all traders fall under case 2 of Theorem 3--that is, all traders are aggregating and divisible, and exhibit linear pricing.\nThis case can be represented using the following linear program, corresponding to our multi-unit graph: max \u03a3i\u2208B \u03a3j\u2208T (i) wij qij subject to \u03a3j\u2208T (i) qij \u2264 \u00af qi for i \u2208 B, \u03a3i\u2208T (j) qij \u2264 \u00af qj for j \u2208 S, and qij \u2265 0.\nRecall that the qij variables in the solution represent the number of units that buyer i procures from seller j.\nThis formulation is known as the network transportation problem with inequality constraints, for which efficient algorithms are available [1].\nIt is a well-known property of the transportation problem (and flow problems on pure networks in general) that given integer input values, the optimal solution is guaranteed to be integer as well.\nFigure 1 demonstrates the transformation of a set of bids to a transportation problem instance.\nFigure 1: Multi-unit matching with two boolean attributes.\n(a) Bids, with offers to buy in the left column and offers to sell at right.\nq@p indicates an offer to trade q units at price p per
unit.\nConfigurations are described in terms of constraints on attribute values.\n(b) Corresponding multi-unit assignment model.\nW represents arc weights (unit surplus), s represents source (exogenous) flow, and t represents sink quantity.\nThe problem becomes significantly harder when aggregation is given as an option to bidders, requiring various enhancements to the basic multi-unit bipartite graph described above.\nIn general, we consider traders that are either aggregating or not, with either divisible or AON offers.\nInitially we examine a special case, which at the same time demonstrates the hardness of the problem but still carries computational advantages.\nWe designate one side (e.g., buyers) as restrictive (AON and non-aggregating), and the other side (sellers) as unrestrictive (divisible and aggregating).\nThis problem can be represented using the following integer programming formulation: max \u03a3i\u2208B \u03a3j\u2208T (i) wij \u00af qi yij subject to \u03a3j\u2208T (i) yij \u2264 1 for i \u2208 B, \u03a3i\u2208T (j) \u00af qi yij \u2264 \u00af qj for j \u2208 S, and yij \u2208 {0, 1}.\nThis formulation is a restriction of the generalized assignment problem (GAP) [13].\nAlthough GAP is known to be NP-hard, it can be solved relatively efficiently by exact or approximate algorithms.\nGAP is more general than the formulation above as it allows buy-side quantities (\u00af qi above) to be different for each potential seller.\nThat this formulation is NP-hard as well (even the case of a single seller corresponds to the knapsack problem) illustrates the drastic increase in complexity when traders with different constraints are admitted to the same problem instance.\nOther than the special case above, we found no advantage in limiting AON constraints when traders may specify aggregation constraints.\nTherefore, the next generalization allows any combination of the two boolean constraints; that is, any trader chooses among four bid types: NI Bid AON and not aggregating.\nAD Bid allows aggregation and divisibility.\nAI Bid AON, allows aggregation (quantity can be aggregated across configurations, as long as it sums to the whole amount).\nND No aggregation,
divisibility (one trade, but smaller quantities are acceptable).\nTo formulate an integer programming representation for the problem, we introduce the following variables.\nBoolean (0\/1) variables ri and r'j indicate whether buyer i and seller j participate in the solution (used for AON traders).\nAnother indicator variable, yij, applied to non-aggregating buyer i and seller j, is one iff i trades with j.\nFor aggregating traders, yij is not constrained.\nFigure 2: Generalized network flow model.\nB1 is a buyer in AD, B2 \u2208 NI, B3 \u2208 AI, B4 \u2208 ND.\nV1 is a seller in ND, V2 \u2208 AI, V4 \u2208 AD.\nThe g values represent arc gains.\nProblem (4) has additional structure as a generalized min-cost flow problem with integral flow.5 A generalized flow network is a network in which each arc may have a gain factor, in addition to the pure network parameters (which are flow limits and costs).\nFlow in an arc is then multiplied by its gain factor, so that the flow that enters the end node of an arc equals the flow that entered from its start node, multiplied by the gain factor of the arc.\nThe network model can in turn be translated into an IP formulation that captures such structure.\nThe generalized min-cost flow problem is well-studied and has a multitude of efficient algorithms [1].\nThe faster algorithms are polynomial in the number of arcs and the logarithm of the maximal gain; that is, performance is not strongly polynomial but is polynomial in the size of the input.\nThe main benefit of this graphical formulation to our matching problem is that it provides a very efficient linear relaxation.\nInteger programming algorithms such as branch-and-bound use solutions to the linear relaxation instance to bound the optimal integer solution.\nSince network flow algorithms are much faster than arbitrary linear programs (generalized network flow simplex algorithms have been shown to run in practice only 2 or 3 times slower than pure network min-cost flow [1]), we expect a
branch-and-bound solver for the matching problem to show improved performance when taking advantage of network flow modeling.\nThe network flow formulation is depicted in Figure 2.\nNon-restrictive traders are treated as in Figure 1.\nFor a non-aggregating buyer, a single unit from the source will saturate up to one of the yij arcs, over all j, and be multiplied by \u00af qi.\nIf i \u2208 ND, the end node of yij will function as a sink that may drain up to \u00af qi of the entering flow.\nFor i \u2208 NI we use an indicator (0\/1) arc ri, on which the flow is multiplied by \u00af qi.\nTrader i trades the full quantity iff ri = 1.\nAt the seller side, the end node of a qij arc functions as a source for sellers j \u2208 ND, in order to let the flow through y'ij arcs be 0 or \u00af qj.\nThe flow is then multiplied by 1\/\u00af qj, so 0\/1 flows enter an end node which can drain either 1 or 0 units.\nFor sellers j \u2208 NI, arcs r'j ensure AON similarly to arcs ri for buyers.\nHaving established this framework, we are ready to accommodate more flexible versions of side constraints.\n5Constraint (4j) could be omitted (yielding computational savings) if non-integer quantities are allowed.\nHere and henceforth we assume the harder problem, where divisibility is with respect to integers.\nThe first generalization is to replace the boolean AON constraint with divisibility down to q, the minimal quantity.\nIn our network flow instance we simply need to turn the node of the constrained trader i (e.g., the node B3 in Figure 2) into a sink that can drain up to \u00af qi \u2212 qi units of flow.\nThe integer program (4) can also be easily changed to accommodate this extension.\nUsing gains, we can also apply batch size constraints.\nIf a trader specifies a batch size \u03b2, we change the gain on the r arcs to \u03b2, and set the available flow of its origin to the maximal number of batches \u00af qi \/ \u03b2.
which enables us to limit attention to a single unit price.\nDivisibility without linear pricing allows expression of concave willingness-to-pay functions, corresponding to convex preference relations.\nBidders may often wish to express non-convex offer sets, for example, due to fixed costs or switching costs in production settings [21].\nWe consider nonlinear pricing in the form of enumerated payment schedules--that is, defining values \u02c6u (x, q) for a select set of quantities q.\nFor the indivisible case, these points are distinguished in the offer set by satisfying the following:\n(cf. Definition 8, which defines the maximum quantity, \u00af q, as the largest of these.)\nFor the divisible case, the distinguished quantities are those where the unit price changes, which can be formalized similarly.\nTo handle nonlinear pricing, we augment the network to include flow possibilities corresponding to each of the enumerated quantities, plus additional structure to enforce exclusivity among them.\nIn other words, the network treats the offer for a given quantity as in Section 5.2, and embeds this in an XOR relation to ensure that each trader picks only one of these quantities.\nSince for each such quantity choice we can apply Theorem 3 or 4, the solution we get is in fact the solution to GMAP.\nThe network representation of the XOR relation (which can be embedded into the network of Figure 2) is depicted in Figure 3.\nFor a trader i with K XOR quantity points, we define dummy variables, zki, k = 1,..., K.\nSince we consider trades between every pair of quantity points, we also have qkij, k = 1,..., K.
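To make the XOR-over-quantities semantics concrete, the following sketch (with made-up numbers, not taken from the paper) enumerates a buyer's XOR quantity points against two aggregating, divisible sellers with linear pricing. A greedy cheapest-first fill stands in for the network-flow computation; it is adequate here because the sellers are divisible with linear prices.

```python
# Hypothetical data: the buyer's nonlinear pricing is an enumerated payment
# schedule (willingness to pay at select quantities only), and XOR semantics
# allow exactly one quantity point to be chosen.
buyer_xor_points = {3: 330, 5: 500, 6: 540}    # quantity -> total payment

# Aggregating, divisible sellers with linear pricing: name -> (capacity, unit price).
sellers = {"s1": (4, 90), "s2": (4, 100)}

def best_fill(q):
    """Cheapest cost of sourcing q units from the sellers (greedy by unit price)."""
    cost, left = 0, q
    for cap, price in sorted(sellers.values(), key=lambda cp: cp[1]):
        take = min(cap, left)
        cost += take * price
        left -= take
    return None if left > 0 else cost

# XOR over quantity points: evaluate each enumerated quantity, keep the best.
best = max(
    ((q, pay - best_fill(q)) for q, pay in buyer_xor_points.items()
     if best_fill(q) is not None),
    key=lambda qs: qs[1],
)
print(best)  # -> (3, 60): buy 3 units for a surplus of 60
```

Exactly one quantity point is selected, mirroring the exclusivity that the dummy z variables enforce in the network of Figure 3.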
For buyer i \u2208 AI with XOR points at quantities \u00af qki, we replace (4b) with the following constraints:\n5.4 Homogeneity Constraints\nThe model (4) handles constraints over the aggregation of quantities from different trading partners.\nWhen aggregation is allowed, the formulation permits trades involving arbitrary combinations of configurations.\nA homogeneity constraint [4] restricts such combinations, by requiring that configurations aggregated in an overall deal must agree on some or all attributes.\nFigure 3: Extending the network flow model to express an XOR over quantities.\nB2 has 3 XOR points for 6, 3, or 5 units.\nIn the presence of homogeneity constraints, we can no longer apply the convenient separation of GMAP into MMP plus global bipartite optimization, as the solution to GMAP may include trades not part of any MMP solution.\nFor example, let buyer b specify an offer for maximum quantity 10 of various acceptable configurations, with a homogeneity constraint over the attribute "color".\nThis means b is willing to aggregate deals over different trading partners and configurations, as long as all are the same color.\nIf seller s can provide 5 blue units or 5 green units, and seller s' can provide only 5 green units, we may prefer that b and s trade green units, even if the local surplus of a blue trade is greater.\nLet {X1,..., XH} be attributes that some trader constrains to be homogeneous.\nTo preserve the network flow framework, we need to consider, for each trader, every point in the product domain of these homogeneous attributes.\nThus, for every assignment \u02c6x to the homogeneous attributes, we compute MMP (b, s) under the constraint that configurations are consistent with \u02c6x.\nWe apply the same approach as in Section 5.3: solve the global optimization, such that the alternative \u02c6x assignments for each trader are combined under XOR semantics, thus enforcing homogeneity constraints.\nThe size of this network is exponential in
the number of homogeneous attributes, since we need a node for each point in the product domain of all the homogeneous attributes of each trader.6\nHence this solution method will only be tractable in applications where the traders can be limited to a small number of homogeneous attributes.\nIt is important to note that the graph needs to include a node only for each point that potentially matches a point of the other side.\nIt is therefore possible to make the problem tractable by limiting one of the sides to a less expressive bidding language, thereby limiting the set of potential matches.\nFor example, if sellers submit bounded sets of XOR points, we only need to consider the points in the combined set offered by the sellers, and the reduction to network flow is polynomial regardless of the number of homogeneous attributes.\nIf such simplifications do not apply, it may be preferable to solve the global problem directly as a single optimization problem.\nWe provide the formulation for the special case of divisibility (with respect to integers) and configuration parity.\nLet i index buyers, j sellers, and h homogeneous attributes.\nVariable x_hij ∈ X_h represents the value of attribute X_h in the trade between buyer i and seller j. Integer variable q_ij represents the quantity of the trade (zero for no trade) between i and j.
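To make the homogeneity discussion concrete, here is a toy brute-force version of the direct global optimization, following the blue/green example above; the seller names and surplus numbers are hypothetical, and quantities are treated as freely divisible across sellers:

```python
# Brute force over the homogeneous attribute "color": the buyer
# aggregates up to max_qty units but all units must share one color.
# Surplus numbers below are illustrative, not from the paper.

offers = {  # seller -> {color: (available units, unit surplus with buyer b)}
    's1': {'blue': (5, 8), 'green': (5, 6)},
    's2': {'green': (5, 6)},
}

def best_homogeneous_deal(offers, max_qty=10):
    """XOR over assignments to the homogeneous attribute: try each color,
    greedily aggregate matching sellers, and keep the best total surplus."""
    best = (0, None)
    colors = {c for o in offers.values() for c in o}
    for color in colors:
        qty = surplus = 0
        for o in offers.values():
            if color in o and qty < max_qty:
                units, unit_surplus = o[color]
                take = min(units, max_qty - qty)
                qty += take
                surplus += take * unit_surplus
        if surplus > best[0]:
            best = (surplus, color)
    return best

print(best_homogeneous_deal(offers))  # → (60, 'green')
```

Green wins (10 × 6 = 60) over blue (5 × 8 = 40) even though the blue trade has the higher local surplus, which is exactly why the per-pair MMP separation breaks down under homogeneity constraints.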
6 If traders differ on which attributes they express such constraints, we can limit consideration to the relevant alternatives.\nThe complexity will still be exponential, but in the maximum number of homogeneous attributes for any pair of traders.\nmax Σ_ij [ûB_i(x_ij, q_ij) − ûS_j(x_ij, q_ij)]\nTable 1 summarizes the mapping we presented from allocation constraints to the complexity of solving GMAP.\nConfiguration parity is assumed for all cases but the first.\n6.\nEXPERIMENTAL RESULTS\nWe approach the experimental aspect of this work with two objectives.\nFirst, we seek a general idea of the sizes and types of clearing problems that can be solved under given time constraints.\nWe also look to compare the performance of a straightforward integer program as in (4) with an integer program that is based on the network formulations developed here.\nSince we used CPLEX, a commercial optimization tool, the second objective could be achieved to the extent that CPLEX can take advantage of network structure present in a model.\nWe found that in addition to the problem size (in terms of number of traders), the number of aggregating traders plays a crucial role in determining complexity.\nWhen most of the traders are aggregating, problems of larger sizes can be solved quickly.\nFor example, our IP model solved instances with 600 buyers and 500 sellers, where 90% of them are aggregating, in less than two minutes.\nWhen the aggregating ratio was reduced to 80% for the same data, solution time was just under five minutes.\nThese results motivated us to develop a new network model.\nRather than treat non-aggregating traders as a special case, the new model takes advantage of the single-unit nature of non-aggregating trades (treating the aggregating traders as a special case).\nThis new model outperformed our other models on most problem instances, exceptions being those where aggregating traders constitute a vast majority (at least 80%).\nThis new model (Figure 4) has a 
single node for each non-aggregating trader, with a single-unit arc designating a match to another non-aggregating trader.\nAn aggregating trader has a node for each potential match, connected (via y arcs) to a mutual source node.\nUnlike the previous model, we allow fractional flow for this case, representing the traded fraction of the buyer's total quantity.7\nWe tested all three models on random data in the form of bipartite graphs encoding MMP solutions.\nIn our experiments, each trader has a maximum quantity uniformly distributed over [30, 70], and minimum quantity uniformly distributed from zero to maximal quantity.\nEach buyer\/seller pair is selected as matching with probability 0.75, with matches assigned a surplus uniformly distributed over [10, 70].\nWhereas the size of the problem is defined by the number of traders on each side, the problem complexity depends on the product B × S.\nThe tests depicted in Figures 5--7 are for the worst case B = S, with each data point averaged over six samples.\nIn the figures, the direct IP (4) is designated \"SW\", our first network model (Figure 2) \"NW\", and our revised network model (Figure 4) \"NW 2\".\nTable 1: Mapping from combinations of allocation constraints to the solution methods of GMAP.\nOne Side means that one side aggregates and is divisible, and the other side is restrictive.\nBatch means that traders may submit batch sizes.\nFigure 4: Generalized network flow model.\nB1 is a buyer in AD, B2 ∈ AI, B3 ∈ NI, B4 ∈ ND.\nV1 is a seller in AD, V2 ∈ AI, V4 ∈ ND.\nThe g values represent arc gains, and W values represent weights.\nFigure 5: Average performance of models when 30% of traders aggregate.\nFigure 6: Average performance of models when 50% of traders aggregate.\nFigure 7: Average performance of models when 70% of traders aggregate.\nFigure 8: Performance of models when varying percentage of aggregating traders.\nFigure 8 shows how the various models are affected by a change in 
the percentage of aggregating traders, holding problem size fixed.8\nDue to the integrality constraints, we could not test available algorithms specialized for network-flow problems on our test problems.\nThus, we cannot fully evaluate the potential gain attributable to network structure.\nHowever, the model we built based on the insight from the network structure clearly provided a significant speedup, even without using a special-purpose algorithm.\nModel NW 2 provided speedups of a factor of 4--10 over the model SW.\nThis was consistent throughout the problem sizes, including the smaller sizes for which the speedup is not visually apparent on the chart.\n7.\nCONCLUSIONS\nThe implementation and deployment of market exchanges requires the development of bidding languages, information feedback policies, and clearing algorithms that are suitable for the target domain, while paying heed to the incentive properties of the resulting mechanisms.\nFor multiattribute exchanges, the space of such feasible mechanisms is constrained by computational limitations imposed by the clearing process.\nThe extent to which the space of feasible mechanisms can be quantified a priori will facilitate the search for such exchanges in the full mechanism design problem.\nIn this work, we investigate the space of two-sided multiattribute auctions, focusing on the relationship between constraints on the offers traders can express through bids, and the resulting computational problem of determining an optimal set of trades.\nWe developed a formal semantic framework for characterizing expressible offers, and introduced some basic classes of restrictions.\nOur key technical results identify sets of conditions under which the overall matching problem can be separated into first identifying optimal pairwise trades and subsequently optimizing combinations of those trades.\nBased on these results, we developed network flow models for the overall clearing problem, which facilitate classification of 
problem versions by computational complexity, and provide guidance for developing solution algorithms and relaxing bidding constraints.","keyphrases":["bid","auction","multiattribut auction","constraint","semant framework","multiattribut util theori","global alloc","prefer","on-side mechan","seller valuat function","partial specif","combinatori auction","continu doubl auction"],"prmu":["P","P","P","P","P","P","P","U","U","U","U","M","M"]} {"id":"I-76","title":"Negotiation by Abduction and Relaxation","abstract":"This paper studies a logical framework for automated negotiation between two agents. We suppose an agent who has a knowledge base represented by a logic program. Then, we introduce methods of constructing counter-proposals in response to proposals made by an agent. To this end, we combine the techniques of extended abduction in artificial intelligence and relaxation in cooperative query answering for databases. These techniques are respectively used for producing conditional proposals and neighborhood proposals in the process of negotiation. We provide a negotiation protocol based on the exchange of these proposals and develop procedures for computing new proposals.","lvl-1":"Negotiation by Abduction and Relaxation Chiaki Sakama Dept. 
Computer and Communication Sciences Wakayama University Sakaedani, Wakayama 640 8510, Japan sakama@sys.wakayama-u.ac.jp Katsumi Inoue National Institute of Informatics 2-1-2 Hitotsubashi, Chiyoda-ku Tokyo 101 8430, Japan ki@nii.ac.jp ABSTRACT This paper studies a logical framework for automated negotiation between two agents.\nWe suppose an agent who has a knowledge base represented by a logic program.\nThen, we introduce methods of constructing counter-proposals in response to proposals made by an agent.\nTo this end, we combine the techniques of extended abduction in artificial intelligence and relaxation in cooperative query answering for databases.\nThese techniques are respectively used for producing conditional proposals and neighborhood proposals in the process of negotiation.\nWe provide a negotiation protocol based on the exchange of these proposals and develop procedures for computing new proposals.\nCategories and Subject Descriptors F.4.1 [Mathematical Logic]: Logic and constraint programming; I.2.11 [Distributed Artificial Intelligence]: Multiagent systems General Terms Theory 1.\nINTRODUCTION Automated negotiation has received increasing attention in multi-agent systems, and a number of frameworks have been proposed in different contexts ([1, 2, 3, 5, 10, 11, 13, 14], for instance).\nNegotiation usually proceeds in a series of rounds and each agent makes a proposal at every round.\nAn agent that receives a proposal responds in two ways.\nOne is a critique, which is a remark as to whether or not (parts of) the proposal is accepted.\nThe other is a counter-proposal, which is an alternative proposal made in response to a previous proposal [13].\nTo see these proposals in one-to-one negotiation, suppose the following negotiation dialogue between a buyer agent B and a seller agent S. 
(Bi (or Si) represents an utterance of B (or S) in the i-th round.)\nB1: I want to buy a personal computer of the brand b1, with the specification of CPU:1GHz, Memory:512MB, HDD: 80GB, and a DVD-RW driver.\nI want to get it at the price under 1200 USD.\nS1: We can provide a PC with the requested specification if you pay for it by cash.\nIn this case, however, service points are not added for this special discount.\nB2: I cannot pay it by cash.\nS2: In a normal price, the requested PC costs 1300 USD.\nB3: I cannot accept the price.\nMy budget is under 1200 USD.\nS3: We can provide another computer with the requested specification, except that it is made by the brand b2.\nThe price is exactly 1200 USD.\nB4: I do not want a PC of the brand b2.\nInstead, I can downgrade a driver from DVD-RW to CD-RW in my initial proposal.\nS4: Ok, I accept your offer.\nIn this dialogue, in response to the opening proposal B1, the counter-proposal S1 is returned.\nIn the rest of the dialogue, B2, B3, S4 are critiques, while S2, S3, B4 are counterproposals.\nCritiques are produced by evaluating a proposal in a knowledge base of an agent.\nIn contrast, making counter-proposals involves generating an alternative proposal which is more favorable to the responding agent than the original one.\nIt is known that there are two ways of producing counterproposals: extending the initial proposal or amending part of the initial proposal.\nAccording to [13], the first type appears in the dialogue: A: I propose that you provide me with service X. B: I propose that I provide you with service X if you provide me with service Z.\nThe second type is in the dialogue: A: I propose that I provide you with service Y if you provide me with service X. 
B: I propose that I provide you with service X if you provide me with service Z.\nA negotiation proceeds by iterating such give-and-take dialogues until it reaches an agreement\/disagreement.\nIn those dialogues, agents generate (counter-)proposals by reasoning on their own goals or objectives.\nThe objective of the agent A in the above dialogues is to obtain service X.\nThe agent B proposes conditions to provide the service.\nIn the process of negotiation, however, it may happen that agents are obliged to weaken or change their initial goals to reach a negotiated compromise.\n978-81-904262-7-5 (RPS) © 2007 IFAAMAS\nIn the dialogue of a buyer agent and a seller agent presented above, a buyer agent changes its initial goal by downgrading a driver from DVD-RW to CD-RW.\nSuch behavior is usually represented as specific meta-knowledge of an agent or specified as negotiation protocols in particular problems.\nCurrently, there is no computational logic for automated negotiation which has general inference rules for producing (counter-)proposals.\nThe purpose of this paper is to mechanize a process of building (counter-)proposals in one-to-one negotiation dialogues.\nWe suppose an agent who has a knowledge base represented by a logic program.\nWe then introduce methods for generating three different types of proposals.\nFirst, we use the technique of extended abduction in artificial intelligence [8, 15] to construct a conditional proposal as an extension of the original one.\nSecond, we use the technique of relaxation in cooperative query answering for databases [4, 6] to construct a neighborhood proposal as an amendment of the original one.\nThird, combining extended abduction and relaxation, conditional neighborhood proposals are constructed as amended extensions of the original proposal.\nWe develop a negotiation protocol between two agents based on the exchange of these counter-proposals and critiques.\nWe also provide procedures for computing proposals in logic 
programming.\nThis paper is organized as follows.\nSection 2 introduces a logical framework used in this paper.\nSection 3 presents methods for constructing proposals, and provides a negotiation protocol.\nSection 4 provides methods for computing proposals in logic programming.\nSection 5 discusses related works, and Section 6 concludes the paper.\n2.\nPRELIMINARIES Logic programs considered in this paper are extended disjunctive programs (EDP) [7].\nAn EDP (or simply a program) is a set of rules of the form: L1 ; \u00b7 \u00b7 \u00b7 ; Ll \u2190 Ll+1 , ... , Lm, not Lm+1 , ... , not Ln (n \u2265 m \u2265 l \u2265 0) where each Li is a positive\/negative literal, i.e., A or \u00acA for an atom A, and not is negation as failure (NAF).\nnot L is called an NAF-literal.\nThe symbol ; represents disjunction.\nThe left-hand side of the rule is the head, and the right-hand side is the body.\nFor each rule r of the above form, head(r), body+ (r) and body\u2212 (r) denote the sets of literals {L1, ... , Ll}, {Ll+1, ... , Lm}, and {Lm+1, ... , Ln}, respectively.\nAlso, not body\u2212 (r) denotes the set of NAF-literals {not Lm+1, ... , not Ln}.\nA disjunction of literals and a conjunction of (NAF-)literals in a rule are identified with its corresponding sets of literals.\nA rule r is often written as head(r) \u2190 body+ (r), not body\u2212 (r) or head(r) \u2190 body(r) where body(r) = body+ (r)\u222anot body\u2212 (r).\nA rule r is disjunctive if head(r) contains more than one literal.\nA rule r is an integrity constraint if head(r) = \u2205; and r is a fact if body(r) = \u2205.\nA program is NAF-free if no rule contains NAF-literals.\nTwo rules\/literals are identified with respect to variable renaming.\nA substitution is a mapping from variables to terms \u03b8 = {x1\/t1, ... , xn\/tn}, where x1, ... 
, xn are distinct variables and each ti is a term distinct from xi.\nGiven a conjunction G of (NAF-)literals, Gθ denotes the conjunction obtained by applying θ to G.\nA program, rule, or literal is ground if it contains no variable.\nA program P with variables is shorthand for its ground instantiation Ground(P), the set of ground rules obtained from P by substituting variables in P by elements of its Herbrand universe in every possible way.\nThe semantics of an EDP is defined by the answer set semantics [7].\nLet Lit be the set of all ground literals in the language of a program.\nSuppose a program P and a set of literals S (⊆ Lit).\nThen, the reduct P^S is the program which contains the ground rule head(r) ← body+(r) iff there is a rule r in Ground(P) such that body−(r) ∩ S = ∅.\nGiven an NAF-free EDP P, Cn(P) denotes the smallest set of ground literals which is (i) closed under P, i.e., for every ground rule r in Ground(P), body(r) ⊆ Cn(P) implies head(r) ∩ Cn(P) ≠ ∅; and (ii) logically closed, i.e., it is either consistent or equal to Lit.\nGiven an EDP P and a set S of literals, S is an answer set of P if S = Cn(P^S).\nA program has none, one, or multiple answer sets in general.\nAn answer set is consistent if it is not Lit.\nA program P is consistent if it has a consistent answer set; otherwise, P is inconsistent.\nAbductive logic programming [9] introduces a mechanism of hypothetical reasoning to logic programming.\nAn abductive framework used in this paper is the extended abduction introduced by Inoue and Sakama [8, 15].\nAn abductive program is a pair ⟨P, H⟩ where P is an EDP and H is a set of literals called abducibles.\nWhen a literal L ∈ H contains variables, any instance of L is also an abducible.\nAn abductive program ⟨P, H⟩ is consistent if P is consistent.\nThroughout the paper, abductive programs are assumed to be consistent unless stated otherwise.\nLet G = L1, ... , Lm, not Lm+1, ... 
, not Ln be a conjunction, where all variables in G are existentially quantified at the front and range-restricted, i.e., every variable in Lm+1, ... , Ln appears in L1, ... , Lm.\nA set S of ground literals satisfies the conjunction G if { L1θ, ... , Lmθ } ⊆ S and { Lm+1θ, ... , Lnθ } ∩ S = ∅ for some ground instance Gθ with a substitution θ.\nLet ⟨P, H⟩ be an abductive program and G a conjunction as above.\nA pair (E, F) is an explanation of an observation G in ⟨P, H⟩ if:1\n1.\n(P \ F) ∪ E has an answer set which satisfies G, 2.\n(P \ F) ∪ E is consistent, 3.\nE and F are sets of ground literals such that E ⊆ H \ P and F ⊆ H ∩ P.\nWhen (P \ F) ∪ E has an answer set S satisfying the above three conditions, S is called a belief set of an abductive program ⟨P, H⟩ satisfying G (with respect to (E, F)).\nNote that if P has a consistent answer set S satisfying G, S is also a belief set of ⟨P, H⟩ satisfying G with respect to (E, F) = (∅, ∅).\nExtended abduction introduces\/removes hypotheses to\/from a program to explain an observation.\nNote that normal abduction (as in [9]) considers only introducing hypotheses to explain an observation.\nAn explanation (E, F) of an observation G is called minimal if for any explanation (E′, F′) of G, E′ ⊆ E and F′ ⊆ F imply E′ = E and F′ = F.
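Since this definition is purely set-theoretic for ground programs, minimal explanations can be computed by brute force on small instances. The sketch below is ours, not the paper's procedure: it handles only ground normal programs (no disjunction, no integrity constraints, so consistency is trivial), uses guess-and-check for answer sets, and abbreviates atoms (e.g., bw_t for broken-wing(tweety)) to mirror the tweety example that follows:

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def least_model(pos_rules):
    # Least model of a ground, negation-free program (fixpoint iteration).
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in pos_rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def answer_sets(program):
    # Guess-and-check: S is an answer set iff S equals the least model
    # of the reduct P^S (each rule survives iff its NAF part misses S).
    atoms = set()
    for head, pos, neg in program:
        atoms |= {head} | pos | neg
    result = []
    for guess in powerset(atoms):
        s = set(guess)
        reduct = [(h, p) for h, p, n in program if not (n & s)]
        if least_model(reduct) == s:
            result.append(s)
    return result

def r(head, pos=(), neg=()):   # rule constructor: head <- pos, not neg
    return (head, frozenset(pos), frozenset(neg))

# Ground tweety program in the style of Example 2.1 (our atom names).
P = [r('flies_t', ['bird_t'], ['ab_t']), r('flies_o', ['bird_o'], ['ab_o']),
     r('ab_t', ['bw_t']), r('ab_o', ['bw_o']),
     r('bird_t'), r('bird_o'), r('bw_t')]
H = [r('bw_t'), r('bw_o')]   # abducible facts
G = {'flies_t'}              # observation

def minimal_explanations(P, H, G):
    removable = [f for f in H if f in P]       # F ranges over H ∩ P
    addable = [f for f in H if f not in P]     # E ranges over H \ P
    found = []
    for E in powerset(addable):
        for F in powerset(removable):
            prog = [q for q in P if q not in F] + list(E)
            if any(G <= s for s in answer_sets(prog)):
                found.append((set(E), set(F)))
    return [(E, F) for E, F in found
            if not any(E2 <= E and F2 <= F and (E2, F2) != (E, F)
                       for E2, F2 in found)]

# The unique minimal explanation removes broken-wing(tweety):
# (E, F) = (∅, {bw_t}).
print(minimal_explanations(P, H, G))
```

Removing the fact bw_t makes not ab_t hold in the reduct, so flies_t is derived, matching the minimal explanation (∅, {broken-wing(tweety)}) of Example 2.1.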
Example 2.1.\nConsider the abductive program ⟨P, H⟩: P : flies(x) ← bird(x), not ab(x) , ab(x) ← broken-wing(x) , bird(tweety) ← , bird(opus) ← , broken-wing(tweety) ← .\nH : broken-wing(x) .\nThe observation G = flies(tweety) has the minimal explanation (E, F) = (∅, {broken-wing(tweety)}).\n1 This defines credulous explanations [15].\nSkeptical explanations are used in [8].\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n3.\nNEGOTIATION\n3.1 Conditional Proposals by Abduction\nWe suppose an agent who has a knowledge base represented by an abductive program ⟨P, H⟩.\nA program P consists of two types of knowledge, belief B and desire D, where B represents objective knowledge of an agent, while D represents subjective knowledge in general.\nWe define P = B ∪ D, but do not distinguish B and D if such distinction is not important in the context.\nIn contrast, abducibles H are used for representing permissible conditions to make a compromise in the process of negotiation.\nDefinition 3.1.\nA proposal G is a conjunction of literals and NAF-literals: L1, ... , Lm, not Lm+1, ... , not Ln where every variable in G is existentially quantified at the front and range-restricted.\nIn particular, G is called a critique if G = accept or G = reject where accept and reject are the reserved propositions.\nA counter-proposal is a proposal made in response to a proposal.\nDefinition 3.2.\nA proposal G is accepted in an abductive program ⟨P, H⟩ if P has an answer set satisfying G.\nWhen a proposal is not accepted, abduction is used for seeking conditions to make it acceptable.\nDefinition 3.3.\nLet ⟨P, H⟩ be an abductive program and G a proposal.\nIf (E, F) is a minimal explanation of Gθ for some substitution θ in ⟨P, H⟩, the conjunction G′ : Gθ, E, not F is called a conditional proposal (for G), where E, not F represents the conjunction: A1, ... , Ak, not Ak+1, ... , not Al for E = {A1, ... 
, Ak} and F = { Ak+1, ... , Al }.\nProposition 3.1.\nLet ⟨P, H⟩ be an abductive program and G a proposal.\nIf G′ is a conditional proposal, there is a belief set S of ⟨P, H⟩ satisfying G′.\nProof.\nWhen G′ = Gθ, E, not F, (P \ F) ∪ E has a consistent answer set S satisfying Gθ and E ∩ F = ∅.\nIn this case, S satisfies Gθ, E, not F.\nA conditional proposal G′ provides a minimal requirement for accepting the proposal G.\nIf Gθ has multiple minimal explanations, several conditional proposals exist accordingly.\nWhen (E, F) ≠ (∅, ∅), a conditional proposal is used as a new proposal made in response to the proposal G. Example 3.1.\nAn agent seeks a position of a research assistant at the computer department of a university with the condition that the salary is at least 50,000 USD per year.\nThe agent makes his\/her request as the proposal:2 G = assist(compt dept), salary(x), x ≥ 50,000.\nThe university has the abductive program ⟨P, H⟩: P : salary(40,000) ← assist(compt dept), not has PhD, salary(60,000) ← assist(compt dept), has PhD, salary(50,000) ← assist(math dept), salary(55,000) ← system admin(compt dept), 2 For notational convenience, we often include mathematical (in)equations in proposals\/programs.\nThey are written as literals, for instance, x ≥ y by geq(x, y) with a suitable definition of the predicate geq.\nemployee(x) ← assist(x), employee(x) ← system admin(x), assist(compt dept) ; assist(math dept) ; system admin(compt dept) ←, H : has PhD, where available positions are represented by disjunction.\nAccording to P, the base salary of a research assistant at the computer department is 40,000 USD, but if he\/she has PhD, it is 60,000 USD.\nIn this case, (E, F) = ({has PhD}, ∅) becomes the minimal explanation of Gθ = assist(compt dept), salary(60,000) with θ = { x\/60,000 }.\nThen, the conditional proposal made by the university becomes assist(compt dept), 
salary(60,000), has PhD.\n3.2 Neighborhood Proposals by Relaxation\nWhen a proposal is unacceptable, an agent tries to construct a new counter-proposal by weakening constraints in the initial proposal.\nWe use techniques of relaxation for this purpose.\nRelaxation is used as a technique of cooperative query answering in databases [4, 6].\nWhen an original query fails in a database, relaxation expands the scope of the query by relaxing the constraints in the query.\nThis allows the database to return neighborhood answers which are related to the original query.\nWe use the technique for producing proposals in the process of negotiation.\nDefinition 3.4.\nLet ⟨P, H⟩ be an abductive program and G a proposal.\nThen, G is relaxed to G′ in the following three ways:\nAnti-instantiation: Construct G′ such that G′θ = G for some substitution θ.\nDropping conditions: Construct G′ such that G′ ⊂ G.\nGoal replacement: If G is a conjunction G1, G2, where G1 and G2 are conjunctions, and there is a rule L ← G′1 in P such that G′1θ = G1 for some substitution θ, then build G′ as Lθ, G2.\nHere, Lθ is called a replaced literal.\nIn each case, every variable in G′ is existentially quantified at the front and range-restricted.\nAnti-instantiation replaces constants (or terms) with fresh variables.\nDropping conditions eliminates some conditions in a proposal.\nGoal replacement replaces the condition G1 in G with a literal Lθ in the presence of a rule L ← G′1 in P under the condition G′1θ = G1.\nAll these operations generalize proposals in different ways.\nEach G′ obtained by these operations is called a relaxation of G.\nIt is worth noting that these operations are also used in the context of inductive generalization [12].\nThe relaxed proposal can produce new offers which are neighbors of the original proposal.\nDefinition 3.5.\nLet ⟨P, H⟩ be an abductive program and G a proposal.\n1.\nLet G′ be a proposal obtained by anti-instantiation.\nIf P has an answer set S which satisfies G′θ for some substitution θ and G′θ ≠ G, G′θ is called a neighborhood proposal by anti-instantiation.\n2.\nLet G′ be a proposal obtained by dropping conditions.\nIf P has an answer set S which satisfies G′θ for some substitution θ, G′θ is called a neighborhood proposal by dropping conditions.\n3.\nLet G′ be a proposal obtained by goal replacement.\nFor a replaced literal L ∈ G′ and a rule H ← B in P such that L = Hσ and (G′ \ {L}) ∪ Bσ ≠ G for some substitution σ, put G″ = (G′ \ {L}) ∪ Bσ.\nIf P has an answer set S which satisfies G″θ for some substitution θ, G″θ is called a neighborhood proposal by goal replacement.\nExample 3.2.\n(cont. Example 3.1) Given the proposal G = assist(compt dept), salary(x), x ≥ 50,000,\n• G′1 = assist(w), salary(x), x ≥ 50,000 is produced by substituting compt dept with a variable w.\nAs G′1θ1 = assist(math dept), salary(50,000) with θ1 = { w\/math dept } is satisfied by an answer set of P, G′1θ1 becomes a neighborhood proposal by anti-instantiation.\n• G′2 = assist(compt dept), salary(x) is produced by dropping the salary condition x ≥ 50,000.\nAs G′2θ2 = assist(compt dept), salary(40,000) with θ2 = { x\/40,000 } is satisfied by an answer set of P, G′2θ2 becomes a neighborhood proposal by dropping conditions.\n• G′3 = employee(compt dept), salary(x), x ≥ 50,000 is produced by replacing assist(compt dept) with employee(compt dept) using the rule employee(x) ← assist(x) in P. 
By G′3 and the rule employee(x) ← system admin(x) in P, G″3 = sys admin(compt dept), salary(x), x ≥ 50,000 is produced.\nAs G″3θ3 = sys admin(compt dept), salary(55,000) with θ3 = { x\/55,000 } is satisfied by an answer set of P, G″3θ3 becomes a neighborhood proposal by goal replacement.\nFinally, extended abduction and relaxation are combined to produce conditional neighborhood proposals.\nDefinition 3.6.\nLet ⟨P, H⟩ be an abductive program and G a proposal.\n1.\nLet G′ be a proposal obtained by either anti-instantiation or dropping conditions.\nIf (E, F) is a minimal explanation of G′θ (≠ G) for some substitution θ, the conjunction G′θ, E, not F is called a conditional neighborhood proposal by anti-instantiation\/dropping conditions.\n2.\nLet G′ be a proposal obtained by goal replacement.\nSuppose G″ as in Definition 3.5(3).\nIf (E, F) is a minimal explanation of G″θ for some substitution θ, the conjunction G″θ, E, not F is called a conditional neighborhood proposal by goal replacement.\nA conditional neighborhood proposal reduces to a neighborhood proposal when (E, F) = (∅, ∅).\n3.3 Negotiation Protocol\nA negotiation protocol defines how to exchange proposals in the process of negotiation.\nThis section presents a negotiation protocol in our framework.\nWe suppose one-to-one negotiation between two agents who have a common ontology and the same language for successful communication.\nDefinition 3.7.\nA proposal L1, ..., Lm, not Lm+1, ..., not Ln violates an integrity constraint ← body+(r), not body−(r) if for any substitution θ, there is a substitution σ such that body+(r)σ ⊆ { L1θ, ... , Lmθ }, body−(r)σ ∩ { L1θ, ... , Lmθ } = ∅, and body−(r)σ ⊆ { Lm+1θ, ... 
, Lnθ }.\nIntegrity constraints are conditions which an agent should satisfy, so they are used to explain why an agent does not accept a proposal.\nA negotiation proceeds in a series of rounds.\nEach i-th round (i ≥ 1) consists of a proposal G^i_1 made by one agent Ag1 and another proposal G^i_2 made by the other agent Ag2.\nDefinition 3.8.\nLet ⟨P1, H1⟩ be an abductive program of an agent Ag1 and G^i_2 a proposal made by Ag2 at the i-th round.\nA critique set of Ag1 (at the i-th round) is a set CS^i_1(P1, G^j_2) = CS^{i-1}_1(P1, G^{j-1}_2) ∪ { r | r is an integrity constraint in P1 and G^j_2 violates r } where j = i − 1 or i, and CS^0_1(P1, G^0_2) = CS^1_1(P1, G^0_2) = ∅.\nA critique set of an agent Ag1 accumulates integrity constraints which are violated by proposals made by another agent Ag2.\nCS^i_2(P2, G^j_1) is defined in the same manner.\nDefinition 3.9.\nLet ⟨Pk, Hk⟩ be an abductive program of an agent Agk and G^j a proposal, which is not a critique, made by any agent at the j(≤ i)-th round.\nA negotiation set of Agk (at the i-th round) is a triple NS^i_k = (S^i_c, S^i_n, S^i_cn), where S^i_c is the set of conditional proposals, S^i_n is the set of neighborhood proposals, and S^i_cn is the set of conditional neighborhood proposals, produced by G^j and ⟨Pk, Hk⟩.\nA negotiation set represents the space of possible proposals made by an agent.\nS^i_x (x ∈ {c, n, cn}) accumulates proposals produced by G^j (1 ≤ j ≤ i) according to Definitions 3.3, 3.5, and 3.6.\nNote that an agent can construct counter-proposals by modifying its own previous proposals or another agent's proposals.\nAn agent Agk accumulates proposals that are made by Agk but are rejected by another agent, in the failed proposal set FP^i_k (at the i-th round), where FP^0_k = ∅.\nSuppose two agents Ag1 and Ag2 who have abductive programs ⟨P1, H1⟩ and ⟨P2, H2⟩, respectively.\nGiven a proposal G^1_1 which is satisfied by an answer set of P1, a negotiation starts.\nIn 
response to the proposal G^i_1 made by Ag1 at the i-th round, Ag2 behaves as follows.\n1.\nIf G^i_1 = accept, an agreement is reached and negotiation ends in success.\n2.\nElse if G^i_1 = reject, put FP^i_2 = FP^{i-1}_2 ∪ {G^{i-1}_2} where {G^0_2} = ∅.\nProceed to the step 4(b).\n3.\nElse if P2 has an answer set satisfying G^i_1, Ag2 returns G^i_2 = accept to Ag1.\nNegotiation ends in success.\n4.\nOtherwise, Ag2 behaves as follows.\nPut FP^i_2 = FP^{i-1}_2.\n(a) If G^i_1 violates an integrity constraint in P2, return the critique G^i_2 = reject to Ag1, together with the critique set CS^i_2(P2, G^i_1).\n(b) Otherwise, construct NS^i_2 as follows.\n(i) Produce S^i_c. Let μ(S^i_c) = { p | p ∈ S^i_c \ FP^i_2 and p satisfies the constraints in CS^i_1(P1, G^{i-1}_2) }.\nIf μ(S^i_c) ≠ ∅, select one from μ(S^i_c) and propose it as G^i_2 to Ag1; otherwise, go to (ii).\n(ii) Produce S^i_n.\nIf μ(S^i_n) ≠ ∅, select one from μ(S^i_n) and propose it as G^i_2 to Ag1; otherwise, go to (iii).\n(iii) Produce S^i_cn.\nIf μ(S^i_cn) ≠ ∅, select one from μ(S^i_cn) and propose it as G^i_2 to Ag1; otherwise, negotiation ends in failure.\nThis means that Ag2 can make no counter-proposal or every counter-proposal made by Ag2 is rejected by Ag1.\nIn the step 4(a), Ag2 rejects the proposal G^i_1 and returns the reason of rejection as a critique set.\nThis helps Ag1 in preparing a next counter-proposal.\nIn the step 4(b), Ag2 constructs a new proposal.\nIn its construction, Ag2 should take care of the critique set CS^i_1(P1, G^{i-1}_2), which represents integrity constraints, if any, accumulated in previous rounds, that Ag1 must satisfy.\nAlso, FP^i_2 is used for removing proposals which have been rejected.\nConstruction of S^i_x (x ∈ {c, n, cn}) in NS^i_2 is incrementally done by adding new counter-proposals produced by G^i_1 or G^{i-1}_2 to 
S^{i-1}_x. For instance, S^i_n in NS^i_2 is computed as

S^i_n = S^{i-1}_n ∪ { p | p is a neighborhood proposal made from G^i_1 } ∪ { p | p is a neighborhood proposal made from G^{i-1}_2 },

where S^0_n = ∅. That is, S^i_n is constructed from S^{i-1}_n by adding new proposals which are obtained by modifying the proposal G^i_1 made by Ag1 at the i-th round or the proposal G^{i-1}_2 made by Ag2 at the (i−1)-th round. S^i_c and S^i_cn are obtained in the same way.

In the above protocol, an agent produces S^i_c first, then S^i_n, and finally S^i_cn. This strategy seeks conditions under which the given proposal can be satisfied before turning to neighborhood proposals, which change the original one. Another strategy, which prefers neighborhood proposals to conditional ones, is also conceivable. Conditional neighborhood proposals are considered last, since they differ from the original proposal to the greatest extent. The above protocol produces the candidate proposals in S^i_x for each x ∈ {c, n, cn} at once. We can consider a variant of the protocol in which the proposals in S^i_x are constructed one by one (see Example 3.3). The protocol is applied repeatedly to each of the two negotiating agents until a negotiation ends in success or failure.

Formally, the above negotiation protocol has the following properties.

Theorem 3.2. Let Ag1 and Ag2 be two agents having abductive programs ⟨P1, H1⟩ and ⟨P2, H2⟩, respectively.

1. If ⟨P1, H1⟩ and ⟨P2, H2⟩ are function-free (i.e., both Pi and Hi contain no function symbol), any negotiation terminates.

2. If a negotiation terminates with agreement on a proposal G, both ⟨P1, H1⟩ and ⟨P2, H2⟩ have belief sets satisfying G.
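One round of the responder's side of this protocol can be sketched in Python. This is a minimal illustration only: answer-set reasoning, constraint checking, and proposal construction are abstracted into caller-supplied functions, and every name below is a hypothetical stub rather than something defined in the paper.

```python
# Sketch of one responder step (steps 1-4 of the protocol above).
# Proposals are frozensets of literal strings; "accept"/"reject" are
# control messages.  All reasoning machinery is stubbed via callables.

def mu(candidates, failed, critiques, satisfies):
    # mu(S^i_x): drop already-rejected proposals and keep those that
    # satisfy the other agent's accumulated integrity constraints
    return [p for p in candidates
            if p not in failed and all(satisfies(p, ic) for ic in critiques)]

def respond(agent, incoming, prev_own):
    """React, as Ag2, to the proposal `incoming` sent by Ag1."""
    if incoming == "accept":                              # step 1
        return "success"
    if incoming == "reject":                              # step 2
        if prev_own is not None:
            agent["failed"].add(prev_own)                 # FP grows
    elif agent["has_answer_set_satisfying"](incoming):    # step 3
        return "accept"
    elif agent["violated_constraints"](incoming):         # step 4(a)
        return ("reject", agent["violated_constraints"](incoming))
    # step 4(b): conditional, then neighborhood, then conditional
    # neighborhood proposals, in that order
    for produce in (agent["S_c"], agent["S_n"], agent["S_cn"]):
        viable = mu(produce(incoming), agent["failed"],
                    agent["critiques_received"], agent["satisfies"])
        if viable:
            return viable[0]                              # new G^i_2
    return "failure"
```

Under this reading, part 1 of Theorem 3.2 corresponds to the observation that `failed` grows monotonically while the candidate sets are finite in the function-free case.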
Proof. 1. When an abductive program is function-free, abducibles and negotiation sets are both finite. Moreover, once a proposal is rejected, it is not proposed again, by the definition of the function μ. Thus, a negotiation terminates in finitely many steps.

2. When a proposal G is made by Ag1, ⟨P1, H1⟩ has a belief set satisfying G. If the agent Ag2 accepts the proposal G, it is satisfied by an answer set of P2, which is also a belief set of ⟨P2, H2⟩.

Example 3.3. Consider the buying-selling situation of the introduction. The seller agent has the abductive program ⟨Ps, Hs⟩ in which Ps consists of belief Bs and desire Ds:

Bs: pc(b1, 1G, 512M, 80G) ; pc(b2, 1G, 512M, 80G) ←, (1)
    dvd-rw ; cd-rw ←, (2)
Ds: normal_price(1300) ← pc(b1, 1G, 512M, 80G), dvd-rw, (3)
    normal_price(1200) ← pc(b1, 1G, 512M, 80G), cd-rw, (4)
    normal_price(1200) ← pc(b2, 1G, 512M, 80G), dvd-rw, (5)
    price(x) ← normal_price(x), add_point, (6)
    price(x ∗ 0.9) ← normal_price(x), pay_cash, not add_point, (7)
    add_point ←, (8)
Hs: add_point, pay_cash.

Here, (1) and (2) represent the selection of products. The atom pc(b1, 1G, 512M, 80G) represents that the seller agent has a PC of the brand b1 such that its CPU is 1GHz, its memory is 512MB, and its HDD is 80GB. Prices of products are represented as desires of the seller. The rules (3)-(5) give normal prices of products. A normal price is a selling price on the condition that service points are added (6). On the other hand, a discount price applies if the payment is by cash and no service point is added (7). The fact (8) represents the addition of service points. This service is withdrawn in case of a discount price, so add_point is specified as an abducible.

The buyer agent has the abductive program ⟨Pb, Hb⟩ in which Pb consists of belief Bb and desire Db:

Bb: drive ← dvd-rw, (9)
    drive ← cd-rw, (10)
    price(x) ←, (11)
Db: pc(b1, 1G, 512M, 80G) ←, (12)
    dvd-rw ←, (13)
    cd-rw ← not dvd-rw, (14)
    ← pay_cash, (15)
    ← price(x), x > 1200, (16)
Hb: dvd-rw.

Rules (12)-(16) are the buyer's desires. Among them, (15) and (16) impose constraints for buying a PC. A DVD-RW is specified as an abducible which is subject to concession.

(1st round) First, the following proposal is made by the buyer agent:

G^1_b: pc(b1, 1G, 512M, 80G), dvd-rw, price(x), x ≤ 1200.

As Ps has no answer set which satisfies G^1_b, the seller agent cannot accept the proposal. The seller takes the action of making a counter-proposal and performs abduction. As a result, the seller finds the minimal explanation (E, F) = ({ pay_cash }, { add_point }), which explains G^1_b θ1 with θ1 = { x/1170 }. The seller constructs the conditional proposal

G^1_s: pc(b1, 1G, 512M, 80G), dvd-rw, price(1170), pay_cash, not add_point

and offers it to the buyer.

(2nd round) The buyer does not accept G^1_s because he/she cannot pay by cash (15). The buyer then returns the critique G^2_b = reject to the seller, together with the critique set CS^2_b(Pb, G^1_s) = {(15)}. In response to this, the seller tries to make another proposal which satisfies the constraint in this critique set. As G^1_s is stored in FP^2_s and no other conditional proposal satisfying the buyer's requirement exists, the seller produces neighborhood proposals. He/she relaxes G^1_b by dropping x ≤ 1200 from the condition, and produces

pc(b1, 1G, 512M, 80G), dvd-rw, price(x).

As Ps has an answer set which satisfies

G^2_s: pc(b1, 1G, 512M, 80G), dvd-rw, price(1300),

the seller offers G^2_s as a new counter-proposal.

(3rd round) The buyer does not accept G^2_s because he/she cannot pay more than 1200 USD (16). The buyer again returns the critique G^3_b = reject to the seller, together with the critique set CS^3_b(Pb, G^2_s) = CS^2_b(Pb, G^1_s) ∪ {(16)}. The seller then considers another proposal by replacing b1 with
a variable w; G^1_b now becomes

pc(w, 1G, 512M, 80G), dvd-rw, price(x), x ≤ 1200.

As Ps has an answer set which satisfies

G^3_s: pc(b2, 1G, 512M, 80G), dvd-rw, price(1200),

the seller offers G^3_s as a new counter-proposal.

(4th round) The buyer does not accept G^3_s because a PC of the brand b2 is outside his/her interest, and Pb has no answer set satisfying G^3_s. The buyer then makes a concession by changing his/her original goal. The buyer relaxes G^1_b by goal replacement using the rule (9) in Pb, and produces

pc(b1, 1G, 512M, 80G), drive, price(x), x ≤ 1200.

Using (10), the following proposal is produced:

pc(b1, 1G, 512M, 80G), cd-rw, price(x), x ≤ 1200.

As Pb \ { dvd-rw } has a consistent answer set satisfying the above proposal, the buyer proposes the conditional neighborhood proposal

G^4_b: pc(b1, 1G, 512M, 80G), cd-rw, not dvd-rw, price(x), x ≤ 1200

to the seller agent. Since Ps also has an answer set satisfying G^4_b, the seller accepts it and sends the message G^4_s = accept to the buyer. Thus, the negotiation ends in success.

4. COMPUTATION

In this section, we provide methods of computing proposals in terms of answer sets of programs. We first introduce some definitions from [15].

Definition 4.1. Given an abductive program ⟨P, H⟩, the set UR of update rules is defined as:

UR = { L ← not L̄, L̄ ← not L | L ∈ H }
   ∪ { +L ← L | L ∈ H \ P }
   ∪ { −L ← not L | L ∈ H ∩ P },

where L̄, +L, and −L are new atoms uniquely associated with every L ∈ H. The atoms +L and −L are called update atoms.

By the definition, the atom L̄ becomes true iff L is not true. The pair of rules L ← not L̄ and L̄ ← not L specifies the situation in which an abducible L is true or not. When p(x) ∈ H and p(a) ∈ P but p(t) ∉ P for t ≠ a, the rule +L ← L precisely becomes +p(t) ← p(t) for any t ≠ a. In this case, the rule is written for short as +p(x) ← p(x), x ≠
a. Generally, the rule becomes +p(x) ← p(x), x ≠ t1, ..., x ≠ tn for n such instances. The rule +L ← L derives the atom +L if an abducible L which is not in P is to be made true. In contrast, the rule −L ← not L derives the atom −L if an abducible L which is in P is not to be made true. Thus, update atoms represent the change of truth values of abducibles in a program: +L means the introduction of L, while −L means the deletion of L. When an abducible L contains variables, the associated update atom +L or −L is supposed to have exactly the same variables. In this case, an update atom is semantically identified with its ground instances. The set of all update atoms associated with the abducibles in H is denoted by UH, and UH = UH+ ∪ UH−, where UH+ (resp. UH−) is the set of update atoms of the form +L (resp. −L).

Definition 4.2. Given an abductive program ⟨P, H⟩, its update program UP is defined as the program UP = (P \ H) ∪ UR. An answer set S of UP is called U-minimal if there is no answer set T of UP such that T ∩ UH ⊂ S ∩ UH.

By the definition, U-minimal answer sets exist whenever UP has answer sets. Update programs are used for computing (minimal) explanations of an observation. Given an observation G as a conjunction of literals and NAF-literals possibly containing variables, we introduce a new ground literal O together with the rule O ← G. In this case, O has an explanation (E, F) iff G has the same explanation. With this replacement, an observation can be assumed to be a ground literal without loss of generality. In what follows, E+ = { +L | L ∈ E } and F− = { −L | L ∈ F } for E ⊆ H and F ⊆ H.
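For ground abducibles, generating UR is purely mechanical. The sketch below illustrates this under a deliberately naive encoding of our own choosing (rules as strings, with `L_bar` standing for the fresh atom L̄); it is an illustration, not an implementation from the paper.

```python
# Sketch: update rules UR of Definition 4.1 for ground abducibles.
# program_facts: the abducible facts appearing in P; abducibles: H.

def update_rules(program_facts, abducibles):
    UR = []
    for L in sorted(abducibles):
        bar = L + "_bar"                    # the fresh atom L-bar
        UR.append(f"{L} <- not {bar}")      # L and L_bar flip truth values
        UR.append(f"{bar} <- not {L}")
        if L in program_facts:
            UR.append(f"-{L} <- not {L}")   # -L: L deleted from P
        else:
            UR.append(f"+{L} <- {L}")       # +L: L introduced into P
    return UR
```

On the instances of Example 4.1 below, `update_rules({"broken-wing(t)"}, {"broken-wing(t)", "broken-wing(o)"})` produces the switching pair for both instances, plus `+broken-wing(o) <- broken-wing(o)` and `-broken-wing(t) <- not broken-wing(t)`.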
Proposition 4.1. ([15]) Let ⟨P, H⟩ be an abductive program, UP its update program, and G a ground literal representing an observation. Then, a pair (E, F) is an explanation of G iff UP ∪ { ← not G } has a consistent answer set S such that E+ = S ∩ UH+ and F− = S ∩ UH−. In particular, (E, F) is a minimal explanation iff S is a U-minimal answer set.

Example 4.1. To explain the observation G = flies(t) in the program P of Example 2.1, first construct the update program UP of P:³

UP: flies(x) ← bird(x), not ab(x),
    ab(x) ← broken-wing(x),
    bird(t) ←,
    bird(o) ←,
    broken-wing(x) ← not broken-winḡ(x),
    broken-winḡ(x) ← not broken-wing(x),
    +broken-wing(x) ← broken-wing(x), x ≠ t,
    −broken-wing(t) ← not broken-wing(t).

³ t represents tweety and o represents opus.

Next, consider the program UP ∪ { ← not flies(t) }. It has the single U-minimal answer set

S = { bird(t), bird(o), flies(t), flies(o), broken-winḡ(t), broken-winḡ(o), −broken-wing(t) }.

The unique minimal explanation (E, F) = (∅, {broken-wing(t)}) of G is expressed by the update atom −broken-wing(t) in S ∩ UH−.

Proposition 4.2. Let ⟨P, H⟩ be an abductive program and G a ground literal representing an observation. If P ∪ { ← not G } has a consistent answer set S, then G has the minimal explanation (E, F) = (∅, ∅) and S satisfies G.

Now we provide methods for computing (counter-)proposals. First, conditional proposals are computed as follows.

input: an abductive program ⟨P, H⟩, a proposal G;
output: a set Sc of proposals.

If G is a ground literal, compute its minimal explanation (E, F) in ⟨P, H⟩ using the update program. Put the conjunction G, E, not F in Sc. Else if G is a conjunction possibly containing variables, consider the abductive program ⟨P ∪ { O ← G }, H⟩ with a ground literal O.
Compute a minimal explanation of O in ⟨P ∪ { O ← G }, H⟩ using its update program. If O has a minimal explanation (E, F) with a substitution θ for the variables in G, put the conjunction Gθ, E, not F in Sc.

Next, neighborhood proposals are computed as follows.

input: an abductive program ⟨P, H⟩, a proposal G;
output: a set Sn of proposals.

% neighborhood proposals by anti-instantiation:
Construct G′ by anti-instantiation. For a ground literal O, if P ∪ { O ← G′ } ∪ { ← not O } has a consistent answer set satisfying G′θ with a substitution θ and G′θ ≠ G, put G′θ in Sn.

% neighborhood proposals by dropping conditions:
Construct G′ by dropping conditions. If G′ is a ground literal and the program P ∪ { ← not G′ } has a consistent answer set, put G′ in Sn. Else if G′ is a conjunction possibly containing variables, do the following. For a ground literal O, if P ∪ { O ← G′ } ∪ { ← not O } has a consistent answer set satisfying G′θ with a substitution θ, put G′θ in Sn.

% neighborhood proposals by goal replacement:
Construct G′ by goal replacement. If G is a ground literal and there is a rule H ← B in P such that G = Hσ and Bσ ≠ G for some substitution σ, put G′ = Bσ. If P ∪ { ← not G′ } has a consistent answer set satisfying G′θ with a substitution θ, put G′θ in Sn. Else if G is a conjunction possibly containing variables, do the following. For a replaced literal L ∈ G, if there is a rule H ← B in P such that L = Hσ and (G \ {L}) ∪ Bσ ≠ G for some substitution σ, put G′ = (G \ {L}) ∪ Bσ. For a ground literal O, if P ∪ { O ← G′ } ∪ { ← not O } has a consistent answer set satisfying G′θ with a substitution θ, put G′θ in Sn.

Theorem 4.3. The set Sc (resp. Sn) computed above coincides with the set of conditional proposals (resp. neighborhood
proposals).

Proof. The result for Sc follows from Definition 3.3 and Proposition 4.1. The result for Sn follows from Definition 3.5 and Proposition 4.2.

Conditional neighborhood proposals are computed by combining the above two procedures. These proposals are computed at each round. Note that the procedure for computing Sn contains some nondeterministic choices. For instance, there are in general several candidate literals to relax in a proposal. Likewise, there may be several rules in a program that can be used for goal replacement. In practice, an agent can prespecify the literals in a proposal that are subject to relaxation, or the rules in a program that may be used for goal replacement.

5. RELATED WORK

As there is a large body of literature on automated negotiation, this section focuses on comparison with negotiation frameworks based on logic and argumentation.

Sadri et al. [14] use abductive logic programming as a representation language for negotiating agents. Agents negotiate using common dialogue primitives, called dialogue moves. Each agent has an abductive logic program in which a sequence of dialogues is specified by a program, a dialogue protocol is specified as constraints, and dialogue moves are specified as abducibles. The behavior of agents is regulated by an observe-think-act cycle. Once a dialogue move is uttered by an agent, another agent that observed the utterance thinks and acts using a proof procedure. Their approach and ours both employ abductive logic programming as a platform for agent reasoning, but its use is quite different. First, they use abducibles to specify dialogue primitives of the form tell(utterer, receiver, subject, identifier, time), while we use abducibles to specify arbitrary permissible hypotheses for constructing conditional proposals. Second, their programs pre-specify a plan to carry out in order to achieve a goal, together with available/missing resources, in the context of resource-exchanging problems. This is in contrast
with our method, in which possible counter-proposals are newly constructed in response to a proposal made by an agent. Third, they specify a negotiation policy inside a program (as integrity constraints), while we give a protocol independent of individual agents. They provide an operational model that completely specifies the behavior of agents in terms of an agent cycle. We do not provide such a complete specification of agent behavior; our primary interest is to mechanize the construction of proposals.

Bracciali and Torroni [2] formulate abductive agents that have knowledge in abductive logic programs. To explain an observation, two agents communicate by exchanging integrity constraints. In the process of communication, an agent can revise its own integrity constraints according to the information provided by the other agent. A set IC′ of integrity constraints relaxes a set IC (or IC tightens IC′) if any observation that can be proved with respect to IC can also be proved with respect to IC′. For instance, IC′: ← a, b, c relaxes IC: ← a, b. Thus, they use relaxation for weakening the constraints in an abductive logic program. In contrast, we use relaxation for weakening proposals, and three different relaxation methods, anti-instantiation, dropping conditions, and goal replacement, are considered. Their goal is to explain an observation by revising the integrity constraints of an agent through communication, while we use integrity constraints in communication to explain critiques and to help other agents in making counter-proposals.

Meyer et al.
[11] introduce a logical framework for negotiating agents. They introduce two different modes of negotiation: concession and adaptation. They provide rational postulates to characterize negotiated outcomes between two agents, and describe methods for constructing outcomes. They provide logical conditions for negotiated outcomes to satisfy, but they describe neither a process of negotiation nor negotiation protocols. Moreover, they represent agents by classical propositional theories, which differs from our abductive logic programming framework.

Foo et al. [5] model one-to-one negotiation as a one-time encounter between two extended logic programs. An agent offers an answer set of its program, and the mutual deal is regarded as a trade on the agents' answer sets. Starting from the initial agreement set S ∩ T, for an answer set S of one agent and an answer set T of the other, each agent extends this set to reflect its own demand while keeping consistency with the demand of the other agent. Their algorithm returns new programs having answer sets which are consistent with each other and which preserve the agreement set. The work is extended to repeated encounters in [3]. In their framework, two agents exchange answer sets to produce a common belief set, which differs from our framework of exchanging proposals.

There are a number of proposals for negotiation based on argumentation. An advantage of argumentation-based negotiation is that a proposal is constructed together with arguments supporting it [1]. The existence of arguments is useful for convincing other agents of the reasons why an agent offers (counter-)proposals or returns critiques.

Parsons et al.
[13] develop a logic of argumentation-based negotiation among BDI agents. In one-to-one negotiation, an agent A generates a proposal together with its arguments and passes it to another agent B. The proposal is evaluated by B, which attempts to build arguments against it. If it conflicts with B's interest, B informs A of its objection by sending back its attacking argument. In response, A tries to find an alternative way of achieving its original objective, or a way of persuading B to drop its objection. If either type of argument can be found, A submits it to B. If B finds no reason to reject the new proposal, it is accepted and the negotiation ends in success. Otherwise, the process is iterated. In this negotiation process, the agent A never changes its original objective, so the negotiation ends in failure if A fails to find an alternative way of achieving that objective. In our framework, when a proposal is rejected by another agent, an agent can weaken or change its objective by abduction and relaxation. Our framework does not have a mechanism of argumentation, but reasons for rejection can be communicated through critique sets.

Kakas and Moraitis [10] propose a negotiation protocol which integrates abduction within an argumentation framework. A proposal contains an offer corresponding to the negotiation object, together with supporting information representing the conditions under which this offer is made. Supporting information is computed by abduction and is used for constructing conditional arguments during the process of negotiation. In their negotiation protocol, when an agent cannot satisfy its own goal, the agent considers the other agent's goal and searches for conditions under which that goal is acceptable. Our approach differs from theirs in the following points. First, they use abduction to seek conditions to support arguments, while we use abduction to seek conditions for proposals to
accept. Second, in their negotiation protocol, counter-proposals are chosen among candidates based on meta-level preference knowledge of an agent, which represents the policy under which an agent uses its object-level decision rules according to situations. In our framework, counter-proposals are newly constructed using abduction and relaxation, and the method of construction is independent of any particular negotiation protocol. As in [2, 10, 14], abduction or abductive logic programming used in negotiation is mostly based on normal abduction. In contrast, our approach is based on extended abduction, which can not only introduce hypotheses into a program but also remove them from it. This is another important difference.

Relaxation and neighborhood query answering were devised to make databases cooperative with their users [4, 6]. In this sense, those techniques share a spirit with cooperative problem solving in multi-agent systems. As far as the authors know, however, there is no prior study which applies those techniques to agent negotiation.

6. CONCLUSION

In this paper we proposed a logical framework for negotiating agents. To construct proposals in the process of negotiation, we combined the techniques of extended abduction and relaxation. It was shown that these two operations serve as general inference rules for producing proposals. We developed a negotiation protocol between two agents based on the exchange of proposals and critiques, and provided procedures for computing proposals in abductive logic programming. This enables us to realize automated negotiation on top of existing answer set solvers. The present framework does not have a mechanism for selecting an optimal (counter-)proposal among different alternatives. To compare and evaluate proposals, an agent must have preference knowledge over candidate proposals. Further elaboration to maximize the utility of agents is left for future study.

7. REFERENCES

[1] L. Amgoud, S. Parsons, and N.
Maudet. Arguments, dialogue, and negotiation. In: Proc. ECAI-00, pp. 338-342, IOS Press, 2000.

[2] A. Bracciali and P. Torroni. A new framework for knowledge revision of abductive agents through their interaction. In: Proc. CLIMA-IV, Computational Logic in Multi-Agent Systems, LNAI 3259, pp. 159-177, 2004.

[3] W. Chen, M. Zhang, and N. Foo. Repeated negotiation of logic programs. In: Proc. 7th Workshop on Nonmonotonic Reasoning, Action and Change, 2006.

[4] W. W. Chu, Q. Chen, and R.-C. Lee. Cooperative query answering via type abstraction hierarchy. In: Cooperating Knowledge Based Systems, S. M. Deen ed., pp. 271-290, Springer, 1990.

[5] N. Foo, T. Meyer, Y. Zhang, and D. Zhang. Negotiating logic programs. In: Proc. 6th Workshop on Nonmonotonic Reasoning, Action and Change, 2005.

[6] T. Gaasterland, P. Godfrey, and J. Minker. Relaxation as a platform for cooperative answering. Journal of Intelligent Information Systems 1(3/4):293-321, 1992.

[7] M. Gelfond and V. Lifschitz. Classical negation in logic programs and disjunctive databases. New Generation Computing 9:365-385, 1991.

[8] K. Inoue and C. Sakama. Abductive framework for nonmonotonic theory change. In: Proc. IJCAI-95, pp. 204-210, Morgan Kaufmann, 1995.

[9] A. C. Kakas, R. A. Kowalski, and F. Toni. The role of abduction in logic programming. In: Handbook of Logic in AI and Logic Programming, D. M. Gabbay et al. (eds), vol. 5, pp. 235-324, Oxford University Press, 1998.

[10] A. C. Kakas and P. Moraitis. Adaptive agent negotiation via argumentation. In: Proc. AAMAS-06, pp. 384-391, ACM Press, 2006.

[11] T. Meyer, N. Foo, R. Kwok, and D. Zhang. Logical foundation of negotiation: outcome, concession and adaptation. In: Proc. AAAI-04, pp. 293-298, MIT Press, 2004.

[12] R. S. Michalski. A theory and methodology of inductive learning. In: Machine Learning: An Artificial Intelligence Approach, R. S. Michalski et al. (eds), pp. 83-134, Morgan Kaufmann, 1983.

[13] S. Parsons, C.
Sierra, and N. Jennings. Agents that reason and negotiate by arguing. Journal of Logic and Computation, 8(3):261-292, 1998.

[14] F. Sadri, F. Toni, and P. Torroni. An abductive logic programming architecture for negotiating agents. In: Proc. 8th European Conf. on Logics in AI, LNAI 2424, pp. 419-431, Springer, 2002.

[15] C. Sakama and K. Inoue. An abductive framework for computing knowledge base updates. Theory and Practice of Logic Programming 3(6):671-715, 2003.

Negotiation by Abduction and Relaxation

ABSTRACT

This paper studies a logical framework for automated negotiation between two agents. We suppose an agent who has a knowledge base represented by a logic program. Then, we introduce methods of constructing counter-proposals in response to proposals made by an agent. To this end, we combine the techniques of extended abduction in artificial intelligence and relaxation in cooperative query answering for databases. These techniques are respectively used for producing conditional proposals and neighborhood proposals in the process of negotiation. We provide a negotiation protocol based on the exchange of these proposals and develop procedures for computing new proposals.

1. INTRODUCTION

Automated negotiation has received increasing attention in multi-agent systems, and a number of frameworks have been proposed in different contexts ([1, 2, 3, 5, 10, 11, 13, 14], for instance). Negotiation usually proceeds in a series of rounds, and each agent makes a proposal at every round. An agent that has received a proposal responds in two ways. One is a critique, which is a remark as to whether or not (parts of) the proposal is accepted. The other is a counter-proposal, which is an alternative proposal made in response to a previous proposal [13]. To see these proposals in one-to-one negotiation, suppose the following negotiation dialogue between a
buyer agent B and a seller agent S. (Bi (or Si) represents an utterance of B (or S) in the i-th round.)

B1: I want to buy a personal computer of the brand b1, with the specification of CPU: 1GHz, Memory: 512MB, HDD: 80GB, and a DVD-RW driver. I want to get it at a price under 1200 USD.

S1: We can provide a PC with the requested specification if you pay for it by cash. In this case, however, service points are not added for this special discount.

B2: I cannot pay by cash.

S2: At the normal price, the requested PC costs 1300 USD.

B3: I cannot accept the price. My budget is under 1200 USD.

S3: We can provide another computer with the requested specification, except that it is made by the brand b2. The price is exactly 1200 USD.

B4: I do not want a PC of the brand b2. Instead, I can downgrade the driver from DVD-RW to CD-RW in my initial proposal.

S4: Ok, I accept your offer.

In this dialogue, in response to the opening proposal B1, the counter-proposal S1 is returned. In the rest of the dialogue, B2, B3, and S4 are critiques, while S2, S3, and B4 are counter-proposals. Critiques are produced by evaluating a proposal in the knowledge base of an agent. In contrast, making a counter-proposal involves generating an alternative proposal which is more favorable to the responding agent than the original one. It is known that there are two ways of producing counter-proposals: extending the initial proposal or amending part of the initial proposal. According to [13], the first type appears in the dialogue: A: "I propose that you provide me with service X". B: "I propose that I provide you with service X if you provide me with service Z". The second type appears in the dialogue: A: "I propose that I provide you with service Y if you provide me with service X". B: "I propose that I provide you with service X if you provide me with service Z". A negotiation proceeds by iterating such "give-and-take" dialogues until it reaches an agreement or disagreement. In those
dialogues, agents generate (counter-)proposals by reasoning on their own goals or objectives. The objective of the agent A in the above dialogues is to obtain service X. The agent B proposes conditions under which it will provide the service. In the process of negotiation, however, it may happen that agents are obliged to weaken or change their initial goals to reach a negotiated compromise. In the dialogue of a buyer agent and a seller agent presented above, the buyer agent changes its initial goal by downgrading the driver from DVD-RW to CD-RW. Such behavior is usually represented as specific meta-knowledge of an agent or specified in negotiation protocols for particular problems. Currently, there is no computational logic for automated negotiation which has general inference rules for producing (counter-)proposals.

The purpose of this paper is to mechanize the process of building (counter-)proposals in one-to-one negotiation dialogues. We suppose an agent who has a knowledge base represented by a logic program. We then introduce methods for generating three different types of proposals. First, we use the technique of extended abduction in artificial intelligence [8, 15] to construct a conditional proposal as an extension of the original one. Second, we use the technique of relaxation in cooperative query answering for databases [4, 6] to construct a neighborhood proposal as an amendment of the original one. Third, combining extended abduction and relaxation, conditional neighborhood proposals are constructed as amended extensions of the original proposal. We develop a negotiation protocol between two agents based on the exchange of these counter-proposals and critiques. We also provide procedures for computing proposals in logic programming.

This paper is organized as follows. Section 2 introduces the logical framework used in this paper. Section 3 presents methods for constructing proposals and provides a negotiation protocol. Section 4 provides methods for
computing proposals in logic programming. Section 5 discusses related work, and Section 6 concludes the paper.

2. PRELIMINARIES

Logic programs considered in this paper are extended disjunctive programs (EDP) [7]. An EDP (or simply a program) is a set of rules of the form:

L1 ; ... ; Ll ← Ll+1, ..., Lm, not Lm+1, ..., not Ln (n ≥ m ≥ l ≥ 0)

where each Li is a positive/negative literal, i.e., A or ¬A for an atom A, and not is negation as failure (NAF). not L is called an NAF-literal. The symbol ";" represents disjunction. The left-hand side of the rule is the head, and the right-hand side is the body. For each rule r of the above form, head(r), body+(r), and body−(r) denote the sets of literals {L1, ..., Ll}, {Ll+1, ..., Lm}, and {Lm+1, ..., Ln}, respectively. Also, not body−(r) denotes the set of NAF-literals {not Lm+1, ..., not Ln}. A disjunction of literals and a conjunction of (NAF-)literals in a rule are identified with their corresponding sets of literals. A rule r is often written as head(r) ← body+(r), not body−(r) or head(r) ← body(r), where body(r) = body+(r) ∪ not body−(r). A rule r is disjunctive if head(r) contains more than one literal. A rule r is an integrity constraint if head(r) = ∅; and r is a fact if body(r) = ∅. A program is NAF-free if no rule contains NAF-literals. Two rules/literals are identified with respect to variable renaming.

A substitution is a mapping from variables to terms θ = {x1/t1, ..., xn/tn}, where x1, ..., xn are distinct variables and each ti is a term distinct from xi. Given a conjunction G of (NAF-)literals, Gθ denotes the conjunction obtained by applying θ to G. A program, rule, or literal is ground if it contains no variable. A program P with variables is a shorthand for its ground instantiation Ground(P), the set of ground rules obtained from P by substituting variables in P by elements of its Herbrand universe in every possible way.

The semantics of an EDP is defined by the answer set
semantics [7].\nLet Lit be the set of all ground literals in the language of a program.\nSuppose a program P and a set of literals S (\u2286 Lit).\nThen, the reduct P^S is the program which contains the ground rule head (r) \u2190 body + (r) iff there is a rule r in Ground (P) such that body\u2212(r) \u2229 S = \u2205.\nGiven an NAF-free EDP P, Cn (P) denotes the smallest set of ground literals which is (i) closed under P, i.e., for every ground rule r in Ground (P), body (r) \u2286 Cn (P) implies head (r) \u2229 Cn (P) \u2260 \u2205; and (ii) logically closed, i.e., it is either consistent or equal to Lit.\nGiven an EDP P and a set S of literals, S is an answer set of P if S = Cn (P^S).\nA program has none, one, or multiple answer sets in general.\nAn answer set is consistent if it is not Lit.\nA program P is consistent if it has a consistent answer set; otherwise, P is inconsistent.\nAbductive logic programming [9] introduces a mechanism of hypothetical reasoning to logic programming.\nAn abductive framework used in this paper is the extended abduction introduced by Inoue and Sakama [8, 15].\nAn abductive program is a pair \u27e8P, H\u27e9 where P is an EDP and H is a set of literals called abducibles.\nWhen a literal L \u2208 H contains variables, any instance of L is also an abducible.\nAn abductive program \u27e8P, H\u27e9 is consistent if P is consistent.\nThroughout the paper, abductive programs are assumed to be consistent unless stated otherwise.\nLet G = L1,..., Lm, not Lm +1,..., not Ln be a conjunction, where all variables in G are existentially quantified at the front and range-restricted, i.e., every variable in Lm +1,..., Ln appears in L1,..., Lm.\nA set S of ground literals satisfies the conjunction G if {L1\u03b8,..., Lm\u03b8} \u2286 S and {Lm +1\u03b8,..., Ln\u03b8} \u2229 S = \u2205 for some ground instance G\u03b8 with a substitution \u03b8.\nLet \u27e8P, H\u27e9 be an abductive program and G a conjunction as above.\nA pair (E, F) is an explanation of an observation G if:\n1.\n(P \\ F) \u222a E has an answer set which 
satisfies G, 2.\n(P \\ F) \u222a E is consistent, 3.\nE and F are sets of ground literals such that E \u2286 H \\ P and F \u2286 H \u2229 P.\nWhen (P \\ F) \u222a E has an answer set S satisfying the above three conditions, S is called a belief set of an abductive program \u27e8P, H\u27e9 satisfying G (with respect to (E, F)).\nNote that if P has a consistent answer set S satisfying G, S is also a belief set of \u27e8P, H\u27e9 satisfying G with respect to (E, F) = (\u2205, \u2205).\nExtended abduction introduces\/removes hypotheses to\/from a program to explain an observation.\nNote that \"normal\" abduction (as in [9]) considers only introducing hypotheses to explain an observation.\nAn explanation (E, F) of an observation G is called minimal if for any explanation (E', F') of G, E' \u2286 E and F' \u2286 F imply E' = E and F' = F.\n3.\nNEGOTIATION\n3.1 Conditional Proposals by Abduction\n3.2 Neighborhood Proposals by Relaxation\n1024 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n3.3 Negotiation Protocol\n4.\nCOMPUTATION\n5.\nRELATED WORK\nAs there is a large body of literature on automated negotiation, this section focuses on comparison with negotiation frameworks based on logic and argumentation.\nSadri et al. 
[14] use abductive logic programming as a representation language of negotiating agents.\nAgents negotiate using common dialogue primitives, called dialogue moves.\nEach agent has an abductive logic program in which a sequence of dialogues is specified by a program, a dialogue protocol is specified as constraints, and dialogue moves are specified as abducibles.\nThe behavior of agents is regulated by an observe-think-act cycle.\nOnce a dialogue move is uttered by an agent, another agent that observed the utterance thinks and acts using a proof procedure.\nTheir approach and ours both employ abductive logic programming as a platform of agent reasoning, but they use it quite differently.\nFirst, they use abducibles to specify dialogue primitives of the form tell (utterer, receiver, subject, identifier, time), while we use abducibles to specify arbitrary permissible hypotheses to construct conditional proposals.\nSecond, a program pre-specifies a plan to carry out in order to achieve a goal, together with available\/missing resources, in the context of resource-exchanging problems.\nThis is in contrast with our method, in which possible counter-proposals are newly constructed in response to a proposal made by an agent.\nThird, they specify a negotiation policy inside a program (as integrity constraints), while we give a protocol independent of individual agents.\nThey provide an operational model that completely specifies the behavior of agents in terms of an agent cycle.\nWe do not provide such a complete specification of the behavior of agents.\nOur primary interest is to mechanize the construction of proposals.\nBracciali and Torroni [2] formulate abductive agents that have knowledge in abductive logic programs.\nTo explain an observation, two agents communicate by exchanging integrity constraints.\nIn the process of communication, an agent can revise its own integrity constraints according to the information provided by the other agent.\nA set IC' of integrity 
constraints relaxes a set IC (or IC tightens IC') if any observation that can be proved with respect to IC can also be proved with respect to IC'.\nFor instance, IC': \u2190 a, b, c relaxes IC: \u2190 a, b. Thus, they use relaxation for weakening the constraints in an abductive logic program.\nIn contrast, we use relaxation for weakening proposals, and consider three different relaxation methods: anti-instantiation, dropping conditions, and goal replacement.\nTheir goal is to explain an observation by revising integrity constraints of an agent through communication, while we use integrity constraints for communication to explain critiques and help other agents in making counter-proposals.\nMeyer et al. [11] introduce a logical framework for negotiating agents.\nThey introduce two different modes of negotiation: concession and adaptation.\nThey provide rational postulates to characterize negotiated outcomes between two agents, and describe methods for constructing outcomes.\nThey provide logical conditions for negotiated outcomes to satisfy, but they describe neither a process of negotiation nor negotiation protocols.\nMoreover, they represent agents by classical propositional theories, which is different from our abductive logic programming framework.\nFoo et al. 
[5] model one-to-one negotiation as a one-time encounter between two extended logic programs.\nAn agent offers an answer set of its program, and their mutual deal is regarded as a trade on their answer sets.\nStarting from the initial agreement set S \u2229 T for an answer set S of one agent and an answer set T of the other agent, each agent extends this set to reflect its own demand while keeping consistency with the demand of the other agent.\nTheir algorithm returns new programs having answer sets which are consistent with each other and keep the agreement set.\nThe work is extended to repeated encounters in [3].\nIn their framework, two agents exchange answer sets to produce a common belief set, which is different from our framework of exchanging proposals.\nThere are a number of proposals for negotiation based on argumentation.\nAn advantage of argumentation-based negotiation is that it constructs a proposal with arguments supporting the proposal [1].\nThe existence of arguments is useful to convince other agents of reasons why an agent offers (counter-)proposals or returns critiques.\nParsons et al. 
[13] develop a logic of argumentation-based negotiation among BDI agents.\nIn one-to-one negotiation, an agent A generates a proposal together with its arguments, and passes it to another agent B.\nThe proposal is evaluated by B, which attempts to build arguments against it.\nIf it conflicts with B's interest, B informs A of its objection by sending back its attacking argument.\nIn response to this, A tries to find an alternative way of achieving its original objective, or a way of persuading B to drop its objection.\nIf either type of argument can be found, A will submit it to B.\nIf B finds no reason to reject the new proposal, it will be accepted and the negotiation ends in success.\nOtherwise, the process is iterated.\nIn this negotiation process, the agent A never changes its original objective, so that the negotiation ends in failure if A fails to find an alternative way of achieving the original objective.\nIn our framework, when a proposal is rejected by another agent, an agent can weaken or change its objective by abduction and relaxation.\nOur framework does not have a mechanism of argumentation, but reasons for critiques can be communicated by returning critique sets.\nKakas and Moraitis [10] propose a negotiation protocol which integrates abduction within an argumentation framework.\nA proposal contains an offer corresponding to the negotiation object, together with supporting information representing conditions under which this offer is made.\nSupporting information is computed by abduction and is used for constructing conditional arguments during the process of negotiation.\nIn their negotiation protocol, when an agent cannot satisfy its own goal, the agent considers the other agent's goal and searches for conditions under which the goal is acceptable.\nOur present approach differs from theirs in the following points.\nFirst, they use abduction to seek conditions to support arguments, while we use abduction to seek conditions for proposals to 
accept.\nSecond, in their negotiation protocol, counter-proposals are chosen among candidates based on preference knowledge of an agent at the meta-level, which represents the policy under which an agent uses its object-level decision rules according to situations.\nIn our framework, counter-proposals are newly constructed using abduction and relaxation.\nThe method of construction is independent of particular negotiation protocols.\nAs in [2, 10, 14], abduction or abductive logic programming used in negotiation is mostly based on normal abduction.\nIn contrast, our approach is based on extended abduction, which can not only introduce hypotheses but also remove them from a program.\nThis is another important difference.\nRelaxation and neighborhood query answering are devised to make databases cooperative with their users [4, 6].\nIn this sense, those techniques have a spirit similar to cooperative problem solving in multi-agent systems.\nAs far as the authors know, however, there is no study which applies those techniques to agent negotiation.\n6.\nCONCLUSION\nIn this paper we proposed a logical framework for negotiating agents.\nTo construct proposals in the process of negotiation, we combined the techniques of extended abduction and relaxation.\nIt was shown that these two operations serve as general inference rules in producing proposals.\nWe developed a negotiation protocol between two agents based on the exchange of proposals and critiques, and provided procedures for computing proposals in abductive logic programming.\nThis enables us to realize automated negotiation on top of existing answer set solvers.\nThe present framework does not have a mechanism for selecting an optimal (counter-)proposal among different alternatives.\nTo compare and evaluate proposals, an agent must have preference knowledge of candidate proposals.\nFurther elaboration to maximize the utility of agents is left for future study.","lvl-4":"Negotiation by Abduction and Relaxation\nABSTRACT\nThis 
paper studies a logical framework for automated negotiation between two agents.\nWe suppose an agent who has a knowledge base represented by a logic program.\nThen, we introduce methods of constructing counter-proposals in response to proposals made by an agent.\nTo this end, we combine the techniques of extended abduction in artificial intelligence and relaxation in cooperative query answering for databases.\nThese techniques are respectively used for producing conditional proposals and neighborhood proposals in the process of negotiation.\nWe provide a negotiation protocol based on the exchange of these proposals and develop procedures for computing new proposals.\n1.\nINTRODUCTION\nNegotiation usually proceeds in a series of rounds and each agent makes a proposal at every round.\nAn agent that receives a proposal responds in two ways.\nOne is a critique, which is a remark as to whether or not (parts of) the proposal is accepted.\nThe other is a counter-proposal, which is an alternative proposal made in response to a previous proposal [13].\nTo see these proposals in one-to-one negotiation, suppose the following negotiation dialogue between a buyer agent B and a seller agent S. 
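These two kinds of responses can be sketched in code. A minimal, hypothetical illustration (the function, the encoding of a proposal as a set of conditions, and all names below are ours, not the paper's): a receiving agent checks a proposal against the deals its knowledge base accepts and either accepts it, rejects it, or returns a counter-proposal obtained by dropping conditions, one of the relaxation methods considered later in the paper.

```python
from itertools import combinations

# Hypothetical sketch: a proposal is a frozenset of ground conditions, and
# an agent's knowledge base is modeled as the set of condition-sets it can
# accept. The agent answers with a critique ("accept"/"reject") or with a
# counter-proposal produced by dropping conditions from the proposal.

def respond(kb, proposal):
    if proposal in kb:                      # critique: accept as-is
        return ("accept", proposal)
    # relaxation by dropping conditions: try larger subsets first
    for size in range(len(proposal) - 1, 0, -1):
        for subset in combinations(sorted(proposal), size):
            relaxed = frozenset(subset)
            if relaxed in kb:               # counter-proposal
                return ("counter", relaxed)
    return ("reject", None)                 # critique: reject outright

seller_kb = {frozenset({"brand_b1", "cd_rw"}), frozenset({"brand_b1"})}
print(respond(seller_kb, frozenset({"brand_b1", "dvd_rw"})))
```

This is only a toy: in the paper, proposals are conjunctions evaluated against a logic program, and counter-proposals are built by abduction and relaxation rather than by subset search.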
(Bi (or Si) represents an utterance of B (or S) in the i-th round.)\nB1: I want to get it at the price under 1200 USD.\nS1: We can provide a PC with the requested specification if you pay for it by cash.\nS2: At the normal price, the requested PC costs 1300 USD.\nB3: I cannot accept the price.\nMy budget is under 1200 USD.\nS3: We can provide another computer with the requested specification, except that it is made by the brand b2.\nThe price is exactly 1200 USD.\nB4: I do not want a PC of the brand b2.\nInstead, I can downgrade a driver from DVD-RW to CD-RW in my initial proposal.\nS4: Ok, I accept your offer.\nIn this dialogue, in response to the opening proposal B1, the counter-proposal S1 is returned.\nCritiques are produced by evaluating a proposal in a knowledge base of an agent.\nIn contrast, making counter-proposals involves generating an alternative proposal which is more favorable to the responding agent than the original one.\nIt is known that there are two ways of producing counter-proposals: extending the initial proposal or amending part of the initial proposal.\nAccording to [13], the first type appears in the dialogue: A: \"I propose that you provide me with service X\".\nB: \"I propose that I provide you with service X if you provide me with service Z\".\nThe second type is in the dialogue: A: \"I propose that I provide you with service Y if you provide me with service X\".\nB: \"I propose that I provide you with service X if you provide me with service Z\".\nA negotiation proceeds by iterating such \"give-and-take\" dialogues until it reaches an agreement\/disagreement.\nIn those dialogues, agents generate (counter-)proposals by reasoning on their own goals or objectives.\nThe objective of the agent A in the above dialogues is to obtain service X.\nThe agent B proposes conditions to provide the service.\nIn the process of negotiation, however, it may happen that agents are obliged to weaken or change their initial goals to reach a negotiated 
compromise.\nIn the dialogue of a buyer agent and a seller agent presented above, the buyer agent changes its initial goal by downgrading a driver from DVD-RW to CD-RW.\nSuch behavior is usually represented as specific meta-knowledge of an agent or specified as negotiation protocols in particular problems.\nCurrently, there is no computational logic for automated negotiation which has general inference rules for producing (counter-)proposals.\nThe purpose of this paper is to mechanize a process of building (counter-)proposals in one-to-one negotiation dialogues.\nWe suppose an agent who has a knowledge base represented by a logic program.\nWe then introduce methods for generating three different types of proposals.\nFirst, we use the technique of extended abduction in artificial intelligence [8, 15] to construct a conditional proposal as an extension of the original one.\nSecond, we use the technique of relaxation in cooperative query answering for databases [4, 6] to construct a neighborhood proposal as an amendment of the original one.\nThird, combining extended abduction and relaxation, conditional neighborhood proposals are constructed as amended extensions of the original proposal.\nWe develop a negotiation protocol between two agents based on the exchange of these counter-proposals and critiques.\nWe also provide procedures for computing proposals in logic programming.\nThis paper is organized as follows.\nSection 2 introduces a logical framework used in this paper.\nSection 3 presents methods for constructing proposals, and provides a negotiation protocol.\nSection 4 provides methods for computing proposals in logic programming.\nSection 5 discusses related works, and Section 6 concludes the paper.\n2.\nPRELIMINARIES\nLogic programs considered in this paper are extended disjunctive programs (EDP) [7].\nAn EDP (or simply a program) is a set of rules of the form:\nL1;...; Ll \u2190 Ll +1,..., Lm, not Lm +1,..., not Ln (n \u2265 m \u2265 l \u2265 0).\nnot L is called an NAF-literal.\nThe symbol \";\" represents disjunction.\nThe left-hand side 
of the rule is the head, and the right-hand side is the body.\nA disjunction of literals and a conjunction of (NAF-) literals in a rule are identified with its corresponding sets of literals.\nA rule r is disjunctive if head (r) contains more than one literal.\nA rule r is an integrity constraint if head (r) = \u2205; and r is a fact if body (r) = \u2205.\nA program is NAF-free if no rule contains NAF-literals.\nTwo rules\/literals are identified with respect to variable renaming.\nGiven a conjunction G of (NAF-) literals, G\u03b8 denotes the conjunction obtained by applying a substitution \u03b8 to G.\nA program, rule, or literal is ground if it contains no variable.\nThe semantics of an EDP is defined by the answer set semantics [7].\nLet Lit be the set of all ground literals in the language of a program.\nSuppose a program P and a set of literals S (\u2286 Lit).\nThen, the reduct P^S is the program which contains the ground rule head (r) \u2190 body + (r) iff there is a rule r in Ground (P) such that body\u2212(r) \u2229 S = \u2205.\nGiven an EDP P and a set S of literals, S is an answer set of P if S = Cn (P^S).\nA program has none, one, or multiple answer sets in general.\nAn answer set is consistent if it is not Lit.\nA program P is consistent if it has a consistent answer set; otherwise, P is inconsistent.\nAbductive logic programming [9] introduces a mechanism of hypothetical reasoning to logic programming.\nAn abductive framework used in this paper is the extended abduction introduced by Inoue and Sakama [8, 15].\nAn abductive program is a pair \u27e8P, H\u27e9 where P is an EDP and H is a set of literals called abducibles.\nWhen a literal L \u2208 H contains variables, any instance of L is also an abducible.\nAn abductive program \u27e8P, H\u27e9 is consistent if P is consistent.\nThroughout the paper, abductive programs are assumed to be consistent unless stated otherwise.\nLet \u27e8P, H\u27e9 be an abductive program and G a conjunction as above.\nA pair (E, F) is an explanation of an observation G if:\n1.\n(P \\ F) 
\u222a E has an answer set which satisfies G, 2.\n(P \\ F) \u222a E is consistent, 3.\nE and F are sets of ground literals such that E \u2286 H \\ P and F \u2286 H \u2229 P.\nWhen (P \\ F) \u222a E has an answer set S satisfying the above three conditions, S is called a belief set of an abductive program \u27e8P, H\u27e9 satisfying G (with respect to (E, F)).\nExtended abduction introduces\/removes hypotheses to\/from a program to explain an observation.\nNote that \"normal\" abduction (as in [9]) considers only introducing hypotheses to explain an observation.\n5.\nRELATED WORK\nAs there is a large body of literature on automated negotiation, this section focuses on comparison with negotiation frameworks based on logic and argumentation.\nSadri et al. [14] use abductive logic programming as a representation language of negotiating agents.\nAgents negotiate using common dialogue primitives, called dialogue moves.\nEach agent has an abductive logic program in which a sequence of dialogues is specified by a program, a dialogue protocol is specified as constraints, and dialogue moves are specified as abducibles.\nThe behavior of agents is regulated by an observe-think-act cycle.\nOnce a dialogue move is uttered by an agent, another agent that observed the utterance thinks and acts using a proof procedure.\nTheir approach and ours both employ abductive logic programming as a platform of agent reasoning, but they use it quite differently.\nThis is in contrast with our method, in which possible counter-proposals are newly constructed in response to a proposal made by an agent.\nThird, they specify a negotiation policy inside a program (as integrity constraints), while we give a protocol independent of individual agents.\nThey provide an operational model that completely specifies the behavior of agents in terms of an agent cycle.\nWe do not provide such a complete specification of the behavior of agents.\nOur primary interest is to mechanize the construction of proposals.\nBracciali and 
Torroni [2] formulate abductive agents that have knowledge in abductive logic programs.\nTo explain an observation, two agents communicate by exchanging integrity constraints.\nIn the process of communication, an agent can revise its own integrity constraints according to the information provided by the other agent.\nFor instance, IC': \u2190 a, b, c relaxes IC: \u2190 a, b. Thus, they use relaxation for weakening the constraints in an abductive logic program.\nIn contrast, we use relaxation for weakening proposals, and consider three different relaxation methods: anti-instantiation, dropping conditions, and goal replacement.\nTheir goal is to explain an observation by revising integrity constraints of an agent through communication, while we use integrity constraints for communication to explain critiques and help other agents in making counter-proposals.\nMeyer et al. [11] introduce a logical framework for negotiating agents.\nThey introduce two different modes of negotiation: concession and adaptation.\nThey provide rational postulates to characterize negotiated outcomes between two agents, and describe methods for constructing outcomes.\nThey provide logical conditions for negotiated outcomes to satisfy, but they describe neither a process of negotiation nor negotiation protocols.\nMoreover, they represent agents by classical propositional theories, which is different from our abductive logic programming framework.\nFoo et al. 
[5] model one-to-one negotiation as a one-time encounter between two extended logic programs.\nAn agent offers an answer set of its program, and their mutual deal is regarded as a trade on their answer sets.\nTheir algorithm returns new programs having answer sets which are consistent with each other and keep the agreement set.\nThe work is extended to repeated encounters in [3].\nIn their framework, two agents exchange answer sets to produce a common belief set, which is different from our framework of exchanging proposals.\nThere are a number of proposals for negotiation based on argumentation.\nAn advantage of argumentation-based negotiation is that it constructs a proposal with arguments supporting the proposal [1].\nThe existence of arguments is useful to convince other agents of reasons why an agent offers (counter-)proposals or returns critiques.\nParsons et al. [13] develop a logic of argumentation-based negotiation among BDI agents.\nIn one-to-one negotiation, an agent A generates a proposal together with its arguments, and passes it to another agent B.\nThe proposal is evaluated by B, which attempts to build arguments against it.\nIf B finds no reason to reject the new proposal, it will be accepted and the negotiation ends in success.\nOtherwise, the process is iterated.\nIn this negotiation process, the agent A never changes its original objective, so that the negotiation ends in failure if A fails to find an alternative way of achieving the original objective.\nIn our framework, when a proposal is rejected by another agent, an agent can weaken or change its objective by abduction and relaxation.\nOur framework does not have a mechanism of argumentation, but reasons for critiques can be communicated by returning critique sets.\nKakas and Moraitis [10] propose a negotiation protocol which integrates abduction within an argumentation framework.\nA proposal contains an offer corresponding to the negotiation object, together 
with supporting information representing conditions under which this offer is made.\nSupporting information is computed by abduction and is used for constructing conditional arguments during the process of negotiation.\nIn their negotiation protocol, when an agent cannot satisfy its own goal, the agent considers the other agent's goal and searches for conditions under which the goal is acceptable.\nOur present approach differs from theirs in the following points.\nFirst, they use abduction to seek conditions to support arguments, while we use abduction to seek conditions for proposals to accept.\nSecond, in their negotiation protocol, counter-proposals are chosen among candidates based on preference knowledge of an agent at the meta-level, which represents the policy under which an agent uses its object-level decision rules according to situations.\nIn our framework, counter-proposals are newly constructed using abduction and relaxation.\nThe method of construction is independent of particular negotiation protocols.\nAs in [2, 10, 14], abduction or abductive logic programming used in negotiation is mostly based on normal abduction.\nIn contrast, our approach is based on extended abduction, which can not only introduce hypotheses but also remove them from a program.\nThis is another important difference.\nRelaxation and neighborhood query answering are devised to make databases cooperative with their users [4, 6].\nAs far as the authors know, however, there is no study which applies those techniques to agent negotiation.\n6.\nCONCLUSION\nIn this paper we proposed a logical framework for negotiating agents.\nTo construct proposals in the process of negotiation, we combined the techniques of extended abduction and relaxation.\nIt was shown that these two operations serve as general inference rules in producing proposals.\nWe developed a negotiation protocol between two agents based on the exchange of proposals and critiques, and provided procedures for computing proposals in abductive 
logic programming.\nThis enables us to realize automated negotiation on top of existing answer set solvers.\nThe present framework does not have a mechanism for selecting an optimal (counter-)proposal among different alternatives.\nTo compare and evaluate proposals, an agent must have preference knowledge of candidate proposals.\nFurther elaboration to maximize the utility of agents is left for future study.","lvl-2":"Negotiation by Abduction and Relaxation\nABSTRACT\nThis paper studies a logical framework for automated negotiation between two agents.\nWe suppose an agent who has a knowledge base represented by a logic program.\nThen, we introduce methods of constructing counter-proposals in response to proposals made by an agent.\nTo this end, we combine the techniques of extended abduction in artificial intelligence and relaxation in cooperative query answering for databases.\nThese techniques are respectively used for producing conditional proposals and neighborhood proposals in the process of negotiation.\nWe provide a negotiation protocol based on the exchange of these proposals and develop procedures for computing new proposals.\n1.\nINTRODUCTION\nAutomated negotiation has received increasing attention in multi-agent systems, and a number of frameworks have been proposed in different contexts ([1, 2, 3, 5, 10, 11, 13, 14], for instance).\nNegotiation usually proceeds in a series of rounds and each agent makes a proposal at every round.\nAn agent that receives a proposal responds in two ways.\nOne is a critique, which is a remark as to whether or not (parts of) the proposal is accepted.\nThe other is a counter-proposal, which is an alternative proposal made in response to a previous proposal [13].\nTo see these proposals in one-to-one negotiation, suppose the following negotiation dialogue between a buyer agent B and a seller agent S. 
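The dialogue that follows has a simple alternating round structure. As a minimal sketch (the data layout and function name are ours), each utterance can be tagged as a proposal, a critique, or a counter-proposal, following the paper's own classification of the rounds: S1 answers B1 with a counter-proposal, B2, B3, S4 are critiques, and S2, S3, B4 are counter-proposals.

```python
# Round structure of the buyer/seller dialogue, with the paper's labels.
DIALOGUE = [
    ("B1", "proposal"),         ("S1", "counter-proposal"),
    ("B2", "critique"),         ("S2", "counter-proposal"),
    ("B3", "critique"),         ("S3", "counter-proposal"),
    ("B4", "counter-proposal"), ("S4", "critique"),
]

def utterances_by_kind(dialogue, kind):
    # Collect utterance names of a given kind, in dialogue order.
    return [u for u, k in dialogue if k == kind]

print(utterances_by_kind(DIALOGUE, "critique"))          # ['B2', 'B3', 'S4']
print(utterances_by_kind(DIALOGUE, "counter-proposal"))  # ['S1', 'S2', 'S3', 'B4']
```

The sketch only records the protocol-level shape of the exchange; the content of each utterance is given in the dialogue below.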
(Bi (or Si) represents an utterance of B (or S) in the i-th round.)\nB1: I want to buy a personal computer of the brand b1, with the specification of CPU:1 GHz, Memory:512 MB, HDD: 80GB, and a DVD-RW driver.\nI want to get it at the price under 1200 USD.\nS1: We can provide a PC with the requested specification if you pay for it by cash.\nIn this case, however, service points are not added for this special discount.\nB2: I cannot pay it by cash.\nS2: At the normal price, the requested PC costs 1300 USD.\nB3: I cannot accept the price.\nMy budget is under 1200 USD.\nS3: We can provide another computer with the requested specification, except that it is made by the brand b2.\nThe price is exactly 1200 USD.\nB4: I do not want a PC of the brand b2.\nInstead, I can downgrade a driver from DVD-RW to CD-RW in my initial proposal.\nS4: Ok, I accept your offer.\nIn this dialogue, in response to the opening proposal B1, the counter-proposal S1 is returned.\nIn the rest of the dialogue, B2, B3, S4 are critiques, while S2, S3, B4 are counter-proposals.\nCritiques are produced by evaluating a proposal in a knowledge base of an agent.\nIn contrast, making counter-proposals involves generating an alternative proposal which is more favorable to the responding agent than the original one.\nIt is known that there are two ways of producing counter-proposals: extending the initial proposal or amending part of the initial proposal.\nAccording to [13], the first type appears in the dialogue: A: \"I propose that you provide me with service X\".\nB: \"I propose that I provide you with service X if you provide me with service Z\".\nThe second type is in the dialogue: A: \"I propose that I provide you with service Y if you provide me with service X\".\nB: \"I propose that I provide you with service X if you provide me with service Z\".\nA negotiation proceeds by iterating such \"give-and-take\" dialogues until it reaches an agreement\/disagreement.\nIn those dialogues, agents generate (counter 
-) proposals by reasoning on their own goals or objectives.\nThe objective of the agent A in the above dialogues is to obtain service X.\nThe agent B proposes conditions to provide the service.\nIn the process of negotiation, however, it may happen that agents are obliged to weaken or change their initial goals to reach a negotiated compromise.\nIn the dialogue of a buyer agent and a seller agent presented above, the buyer agent changes its initial goal by downgrading a driver from DVD-RW to CD-RW.\nSuch behavior is usually represented as specific meta-knowledge of an agent or specified as negotiation protocols in particular problems.\nCurrently, there is no computational logic for automated negotiation which has general inference rules for producing (counter-)proposals.\nThe purpose of this paper is to mechanize a process of building (counter-)proposals in one-to-one negotiation dialogues.\nWe suppose an agent who has a knowledge base represented by a logic program.\nWe then introduce methods for generating three different types of proposals.\nFirst, we use the technique of extended abduction in artificial intelligence [8, 15] to construct a conditional proposal as an extension of the original one.\nSecond, we use the technique of relaxation in cooperative query answering for databases [4, 6] to construct a neighborhood proposal as an amendment of the original one.\nThird, combining extended abduction and relaxation, conditional neighborhood proposals are constructed as amended extensions of the original proposal.\nWe develop a negotiation protocol between two agents based on the exchange of these counter-proposals and critiques.\nWe also provide procedures for computing proposals in logic programming.\nThis paper is organized as follows.\nSection 2 introduces a logical framework used in this paper.\nSection 3 presents methods for constructing proposals, and provides a negotiation protocol.\nSection 4 provides methods for computing proposals in logic 
programming.\nSection 5 discusses related works, and Section 6 concludes the paper.\n2.\nPRELIMINARIES\nLogic programs considered in this paper are extended disjunctive programs (EDP) [7].\nAn EDP (or simply a program) is a set of rules of the form:\nL1; ...; Ll \u2190 Ll +1,..., Lm, not Lm +1,..., not Ln (n \u2265 m \u2265 l \u2265 0) where each Li is a positive\/negative literal, i.e., A or \u00ac A for an atom A, and not is negation as failure (NAF).\nnot L is called an NAF-literal.\nThe symbol \";\" represents disjunction.\nThe left-hand side of the rule is the head, and the right-hand side is the body.\nFor each rule r of the above form, head (r), body + (r) and body--(r) denote the sets of literals {L1,..., Ll}, {Ll +1,..., Lm}, and {Lm +1,..., Ln}, respectively.\nAlso, not body--(r) denotes the set of NAF-literals {not Lm +1,..., not Ln}.\nA disjunction of literals and a conjunction of (NAF -) literals in a rule are identified with its corresponding sets of literals.\nA rule r is often written as head (r) \u2190 body + (r), not body--(r) or head (r) \u2190 body (r) where body (r) = body + (r) \u222a not body--(r).\nA rule r is disjunctive if head (r) contains more than one literal.\nA rule r is an integrity constraint if head (r) = \u2205; and r is a fact if body (r) = \u2205.\nA program is NAF-free if no rule contains NAF-literals.\nTwo rules\/literals are identified with respect to variable renaming.\nA substitution is a mapping from variables to terms \u03b8 = {x1\/t1,..., xn\/tn}, where x1,..., xn are distinct variables and each ti is a term distinct from xi.\nGiven a conjunction G of (NAF -) literals, G\u03b8 denotes the conjunction obtained by applying \u03b8 to G.\nA program, rule, or literal is ground if it contains no variable.\nA program P with variables is a shorthand of its ground instantiation Ground (P), the set of ground rules obtained from P by substituting variables in P by elements of its Herbrand universe in every possible way.\nThe semantics of an EDP is defined by the answer set semantics [7].\nLet Lit be
the set of all ground literals in the language of a program.\nSuppose a program P and a set of literals S (\u2286 Lit).\nThen, the reduct P^S is the program which contains the ground rule head (r) \u2190 body + (r) iff there is a rule r in Ground (P) such that body--(r) \u2229 S = \u2205.\nGiven an NAF-free EDP P, Cn (P) denotes the smallest set of ground literals which is (i) closed under P, i.e., for every ground rule r in Ground (P), body (r) \u2286 Cn (P) implies head (r) \u2229 Cn (P) \u2260 \u2205; and (ii) logically closed, i.e., it is either consistent or equal to Lit.\nGiven an EDP P and a set S of literals, S is an answer set of P if S = Cn (P^S).\nA program has none, one, or multiple answer sets in general.\nAn answer set is consistent if it is not Lit.\nA program P is consistent if it has a consistent answer set; otherwise, P is inconsistent.\nAbductive logic programming [9] introduces a mechanism of hypothetical reasoning to logic programming.\nAn abductive framework used in this paper is the extended abduction introduced by Inoue and Sakama [8, 15].\nAn abductive program is a pair (P, H) where P is an EDP and H is a set of literals called abducibles.\nWhen a literal L \u2208 H contains variables, any instance of L is also an abducible.\nAn abductive program (P, H) is consistent if P is consistent.\nThroughout the paper, abductive programs are assumed to be consistent unless stated otherwise.\nLet G = L1,..., Lm, not Lm +1,..., not Ln be a conjunction, where all variables in G are existentially quantified at the front and range-restricted, i.e., every variable in Lm +1,..., Ln appears in L1,..., Lm.\nA set S of ground literals satisfies the conjunction G if {L1\u03b8,..., Lm\u03b8} \u2286 S and {Lm +1\u03b8,..., Ln\u03b8} \u2229 S = \u2205 for some ground instance G\u03b8 with a substitution \u03b8.\nLet (P, H) be an abductive program and G a conjunction as above.\nA pair (E, F) is an explanation of an observation G if:\n1.\n(P \\ F) \u222a E has an answer set which satisfies G, 2.\n(P \\ F)
\u222a E is consistent, 3.\nE and F are sets of ground literals such that E \u2286 H \\ P and F \u2286 H \u2229 P.\nWhen (P \\ F) \u222a E has an answer set S satisfying the above three conditions, S is called a belief set of an abductive program (P, H) satisfying G (with respect to (E, F)).\nNote that if P has a consistent answer set S satisfying G, S is also a belief set of (P, H) satisfying G with respect to (E, F) = (\u2205, \u2205).\nExtended abduction introduces\/removes hypotheses to\/from a program to explain an observation.\nNote that \"normal\" abduction (as in [9]) considers only introducing hypotheses to explain an observation.\nAn explanation (E, F) of an observation G is called minimal if for any explanation (E', F') of G, E' \u2286 E and F' \u2286 F imply E' = E and F' = F.\n3.\nNEGOTIATION\n3.1 Conditional Proposals by Abduction\nWe suppose an agent who has a knowledge base represented by an abductive program (P, H).\nA program P consists of two types of knowledge, belief B and desire D, where B represents objective knowledge of an agent, while D represents subjective knowledge in general.\nWe define P = B \u222a D, but do not distinguish B and D if such distinction is not important in the context.\nIn contrast, abducibles H are used for representing permissible conditions to make a compromise in the process of negotiation.\nDEFINITION 3.1.\nA proposal G is a conjunction of literals and NAF-literals: L1,..., Lm, not Lm +1,..., not Ln where every variable in G is existentially quantified at the front and range-restricted.\nIn particular, G is called a critique if G = accept or G = reject where accept and reject are the reserved propositions.\nA counter-proposal is a proposal made in response to a proposal.\nDEFINITION 3.2.\nA proposal G is accepted in an abductive program (P, H) if P has an answer set satisfying G.\nWhen a proposal is not accepted, abduction is used for seeking conditions to make it acceptable.\nDEFINITION 3.3.\nLet (P, H) be an abductive program and G a proposal.\nIf (E, F) is a minimal explanation of G\u03b8 for some substitution \u03b8 in (P, H),
the conjunction G': G\u03b8, E, not F is called a conditional proposal.\nPROPOSITION 3.1.\nIf G' is a conditional proposal in (P, H), then (P, H) has a belief set satisfying G'.\nPROOF.\nWhen G' = G\u03b8, E, not F, (P \\ F) \u222a E has a consistent answer set S satisfying G\u03b8 and E \u2229 F = \u2205.\nIn this case, S satisfies G\u03b8, E, not F.\nA conditional proposal G' provides a minimal requirement for accepting the proposal G.\nIf G\u03b8 has multiple minimal explanations, several conditional proposals exist accordingly.\nWhen (E, F) \u2260 (\u2205, \u2205), a conditional proposal is used as a new proposal made in response to the proposal G. EXAMPLE 3.1.\nAn agent seeks a position of a research assistant at the computer department of a university with the condition that the salary is at least 50,000 USD per year.\nThe agent makes his\/her request as the proposal:2\n2For notational convenience, we often include mathematical (in) equations in proposals\/programs.\nThey are written by literals, for instance, x> y by geq (x, y) with a suitable definition of the predicate geq.\nwhere available positions are represented by disjunction.\nAccording to P, the base salary of a research assistant at the computer department is 40,000 USD, but if he\/she has PhD, it is 60,000 USD.\nIn this case, (E, F) = ({has PhD}, \u2205) becomes the minimal explanation of G\u03b8 = assist (compt dept), salary (60, 000) with \u03b8 = {x\/60, 000}.\nThen, the conditional proposal made by the university becomes assist (compt dept), salary (60, 000), has PhD.\n3.2 Neighborhood Proposals by Relaxation\nWhen a proposal is unacceptable, an agent tries to construct a new counter-proposal by weakening constraints in the initial proposal.\nWe use techniques of relaxation for this purpose.\nRelaxation is used as a technique of cooperative query answering in databases [4, 6].\nWhen an original query fails in a database, relaxation expands the scope of the query by relaxing the constraints in the query.\nThis allows the database to return \"neighborhood\" answers which are related to the original query.\nWe use the technique for producing proposals in the process of
negotiation.\nDEFINITION 3.4.\nLet (P, H) be an abductive program and G a proposal.\nThen, G is relaxed to G' in the following three ways: Anti-instantiation: Construct G' such that G' \u03b8 = G for some substitution \u03b8.\nDropping conditions: Construct G' such that G' \u2282 G. Goal replacement: If G is a conjunction \"G1, G2\", where G1 and G2 are conjunctions, and there is a rule L \u2190 G'1 in P such that G'1\u03b8 = G1 for some substitution \u03b8, then build G' as L\u03b8, G2.\nHere, L\u03b8 is called a replaced literal.\nIn each case, every variable in G' is existentially quantified at the front and range-restricted.\nAnti-instantiation replaces constants (or terms) with fresh variables.\nDropping conditions eliminates some conditions in a proposal.\nGoal replacement replaces the condition G1 in G with a literal L\u03b8 in the presence of a rule L \u2190 G'1 in P under the condition G'1\u03b8 = G1.\nAll these operations generalize proposals in different ways.\nEach G' obtained by these operations is called a relaxation of G.\nIt is worth noting that these operations are also used in the context of inductive generalization [12].\nThe relaxed proposal can produce new offers which are neighbor to the original proposal.\nDEFINITION 3.5.\nLet (P, H) be an abductive program and G a proposal.\n1.\nLet G' be a proposal obtained by anti-instantiation.\nIf P has an answer set S which satisfies G' \u03b8 for some substitution \u03b8 and G' \u03b8 \u2260 G, G' \u03b8 is called a neighborhood proposal by anti-instantiation.\n2.\nLet G' be a proposal obtained by dropping conditions.\nIf P has an answer set S which satisfies G' \u03b8 for some substitution \u03b8, G' \u03b8 is called a neighborhood proposal by dropping conditions.\n1024 The Sixth Intl.
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n3.\nLet G' be a proposal obtained by goal replacement.\nFor a replaced literal L \u2208 G' and a rule H \u2190 B in P such that L = H\u03c3 and (G' \\ {L}) \u222a B\u03c3 \u2260 G for some substitution \u03c3, put G'' = (G' \\ {L}) \u222a B\u03c3.\nIf P has an answer set S which satisfies G'' \u03b8 for some substitution \u03b8, G'' \u03b8 is called a neighborhood proposal by goal replacement.\nEXAMPLE 3.2.\n(cont.\nExample 3.1) Given the proposal G = assist (compt dept), salary (x), x> 50, 000,\n\u2022 G'1 = assist (w), salary (x), x> 50, 000 is produced by substituting compt dept with a variable w. As G'1\u03b81 with \u03b81 = {w\/math dept} is satisfied by an answer set of P, G'1\u03b81 becomes a neighborhood proposal by anti-instantiation.\n\u2022 G'2 = assist (compt dept), salary (x) is produced by dropping the salary condition x> 50, 000.\nAs G'2\u03b82 with \u03b82 = {x\/40, 000} is satisfied by an answer set of P, G'2\u03b82 becomes a neighborhood proposal by dropping conditions.\n\u2022 G'3 = employee (compt dept), salary (x), x> 50, 000 is produced by replacing assist (compt dept) with employee (compt dept) using the rule employee (x) \u2190 assist (x) in P.
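The three relaxation operations of Definition 3.4 can be sketched in code. The following is a minimal illustrative sketch (not the authors' implementation): a proposal is represented as a tuple of literal tuples, identifiers beginning with an uppercase letter play the role of variables, and all concrete names follow Example 3.2 but are hypothetical.

```python
# Minimal sketch of the three relaxation operations of Definition 3.4.
# A proposal G is a tuple of literals; a literal is a (predicate, *args) tuple.
# Names (assist, geq, compt_dept, ...) mirror Example 3.2 for illustration only.

def anti_instantiate(goal, const, var):
    """Anti-instantiation: replace a constant with a fresh variable,
    so that G'theta = G for theta = {var/const}."""
    return tuple(tuple(var if a == const else a for a in lit) for lit in goal)

def drop_condition(goal, lit):
    """Dropping conditions: remove one condition, so G' is a proper subset of G."""
    return tuple(l for l in goal if l != lit)

def goal_replacement(goal, rule_head, rule_body, subst):
    """Goal replacement: if the rule body (instantiated by subst) occurs in G,
    replace it with the instantiated head, as with employee(x) <- assist(x)."""
    body = tuple(tuple(subst.get(a, a) for a in lit) for lit in rule_body)
    if not all(b in goal for b in body):
        return None  # the rule does not apply to this proposal
    head = tuple(subst.get(a, a) for a in rule_head)
    return (head,) + tuple(l for l in goal if l not in body)

# G = assist(compt_dept), salary(X), X > 50000 (the inequality written as geq)
G = (("assist", "compt_dept"), ("salary", "X"), ("geq", "X", 50000))

G1 = anti_instantiate(G, "compt_dept", "W")   # assist(W), salary(X), geq(X, 50000)
G2 = drop_condition(G, ("geq", "X", 50000))   # drop the salary constraint
G3 = goal_replacement(G, ("employee", "x"), [("assist", "x")],
                      {"x": "compt_dept"})    # employee(compt_dept), ...
```

Checking whether some answer set of the program satisfies an instance of each relaxed proposal (which makes it a neighborhood proposal) is left to an answer set solver.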
By G'3 and the rule employee (x) \u2190 system admin (x) in P, G''3 = system admin (compt dept), salary (x), x> 50, 000 is produced.\nAs G''3\u03b83 with \u03b83 = {x\/55, 000} is satisfied by an answer set of P, G''3\u03b83 becomes a neighborhood proposal by goal replacement.\nFinally, extended abduction and relaxation are combined to produce conditional neighborhood proposals.\nDEFINITION 3.6.\nLet (P, H) be an abductive program and G a proposal.\n1.\nLet G' be a proposal obtained by either anti-instantiation or dropping conditions.\nIf (E, F) is a minimal explanation of G' \u03b8 (\u2260 G) for some substitution \u03b8, the conjunction G' \u03b8, E, not F is called a conditional neighborhood proposal by anti-instantiation\/dropping conditions.\n2.\nLet G' be a proposal obtained by goal replacement.\nSuppose G'' as in Definition 3.5 (3).\nIf (E, F) is a minimal explanation of G'' \u03b8 for some substitution \u03b8, the conjunction G'' \u03b8, E, not F is called a conditional neighborhood proposal by goal replacement.\nA conditional neighborhood proposal reduces to a neighborhood proposal when (E, F) = (\u2205, \u2205).\n3.3 Negotiation Protocol\nA negotiation protocol defines how to exchange proposals in the process of negotiation.\nThis section presents a negotiation protocol in our framework.\nWe suppose one-to-one negotiation between two agents who have a common ontology and the same language for successful communication.\nIntegrity constraints are conditions which an agent should satisfy, so that they are used to explain why an agent does not accept a proposal.\nA negotiation proceeds in a series of rounds.\nEach i-th round (i \u2265 1) consists of a proposal Gi1 made by one agent Ag1 and another proposal Gi2 made by the other agent Ag2.\nDEFINITION 3.8.\nLet (P1, H1) be an abductive program of an agent Ag1 and Gi2 a proposal made by Ag2 at the i-th round.\nA critique set of Ag1 (at the i-th round) is a set CSi1 (P1, Gj2) = {r | r is an integrity constraint in P1 and Gj2 violates r} where j = i \u2212 1 or i, and CS01 (P1, G02) = CS11 (P1, G02) =
\u2205.\nA critique set of an agent Ag1 accumulates integrity constraints which are violated by proposals made by another agent Ag2.\nCSi2 (P2, Gj1) is defined in the same manner.\nDEFINITION 3.9.\nLet (Pk, Hk) be an abductive program of an agent Agk and Gj a proposal, which is not a critique, made by any agent at the j (<i) - th round.\nA negotiation set of Agk (at the i-th round) is a triple NSik = (Sic, Sin, Sicn), where Sic is the set of conditional proposals, Sin is the set of neighborhood proposals, and Sicn is the set of conditional neighborhood proposals, produced by Gj and (Pk, Hk).\nA negotiation set represents the space of possible proposals made by an agent.\nSix (x \u2208 {c, n, cn}) accumulates proposals produced by Gj (1 \u2264 j \u2264 i) according to Definitions 3.3, 3.5, and 3.6.\nNote that an agent can construct counter-proposals by modifying its own previous proposals or another agent's proposals.\nAn agent Agk accumulates proposals that are made by Agk but are rejected by another agent, in the failed proposal set FPki (at the i-th round), where FPk0 = \u2205.\nSuppose two agents Ag1 and Ag2 who have abductive programs (P1, H1) and (P2, H2), respectively.\nGiven a proposal G11 which is satisfied by an answer set of P1, a negotiation starts.\nIn response to the proposal Gi1 made by Ag1 at the i-th round, Ag2 behaves as follows.\n1.\nIf Gi1 = accept, an agreement is reached and negotiation ends in success.\n2.\nElse if Gi1 = reject, put FP2i = FP2i \u2212 1 \u222a {Gi \u2212 1 2} where {G02} = \u2205.\nProceed to the step 4 (b).\n3.\nElse if P2 has an answer set satisfying Gi1, Ag2 returns Gi2 = accept to Ag1.\nNegotiation ends in success.\n4.\nOtherwise, Ag2 behaves as follows.\nPut FP2i = FP2i \u2212 1.\n(a) If Gi1 violates an integrity constraint in P2, return the critique Gi2 = reject to Ag1, together with the critique set CSi2 (P2, Gi1).\n(b) Otherwise, construct NSi2 as follows.\n(i) Produce Sic.\nLet \u03bc (Sic) = {p | p \u2208 Sic \\ FP2i and\np satisfies the constraints in
CSi1 (P1, Gi \u2212 1 2)}.\nIf \u03bc (Sic) \u2260 \u2205, select one from \u03bc (Sic) and propose it as Gi2 to Ag1; otherwise, go to (ii).\n(ii) Produce Sin.\nIf \u03bc (Sin) \u2260 \u2205, select one from \u03bc (Sin) and propose it as Gi2 to Ag1; otherwise, go to (iii).\n(iii) Produce Sicn.\nIf \u03bc (Sicn) \u2260 \u2205, select one from \u03bc (Sicn) and propose it as Gi2 to Ag1; otherwise, negotiation ends in failure.\nThis means that Ag2 can make no counter-proposal or every counterproposal made by Ag2 is rejected by Ag1.\nIn the step 4 (a), Ag2 rejects the proposal Gi1 and returns the reason of rejection as a critique set.\nThis helps Ag1\nin preparing a next counter-proposal.\nIn the step 4 (b), Ag2 constructs a new proposal.\nIn its construction, Ag2 should take care of the critique set CSi1 (P1, Gi-1 2), which represents integrity constraints, if any, accumulated in previous rounds, that Ag1 must satisfy.\nAlso, FP2i is used for removing proposals which have been rejected.\nConstruction of Six (x \u2208 {c, n, cn}) in NSi2 is incrementally done by adding new counter-proposals produced by Gi1 or Gi-12 to Si-1 x.\nFor instance, Sin in NSi2 is computed as Sin = Si-1 n \u222a {p | p is a neighborhood proposal produced by Gi1 or Gi-1 2}, where S0n = \u2205.\nThat is, Sin is constructed from Si-1 n by adding new proposals which are obtained by modifying the proposal Gi1 made by Ag1 at the i-th round or modifying the proposal Gi-1 2 made by Ag2 at the (i \u2212 1) - th round.\nSic and Sicn are obtained as well.\nIn the above protocol, an agent produces Sic at first, secondly Sin, and finally Sicn.\nThis strategy seeks conditions which satisfy the given proposal, prior to neighborhood proposals which change the original one.\nAnother strategy, which prefers neighborhood proposals to conditional ones, is also considered.\nConditional neighborhood proposals are to be considered in the last place, since they differ from the original one to the maximal extent.\nThe above protocol
produces the candidate proposals in Six for each x \u2208 {c, n, cn} at once.\nWe can consider a variant of the protocol in which each proposal in Six is constructed one by one (see Example 3.3).\nThe above protocol is repeatedly applied to each one of the two negotiating agents until a negotiation ends in success\/failure.\nFormally, the above negotiation protocol has the following properties.\nTHEOREM 3.2.\nLet Ag1 and Ag2 be two agents having abductive programs (P1, H1) and (P2, H2), respectively.\n1.\nIf (P1, H1) and (P2, H2) are function-free (i.e., both Pi and Hi contain no function symbol), any negotiation will terminate.\n2.\nIf a negotiation terminates with agreement on a proposal G, both (P1, H1) and (P2, H2) have belief sets satisfying G.\nPROOF.\n1.\nWhen an abductive program is function-free, abducibles and negotiation sets are both finite.\nMoreover, if a proposal is once rejected, it is not proposed again by the function \u03bc.\nThus, negotiation will terminate in finitely many steps.\n2.\nWhen a proposal G is made by Ag1, (P1, H1) has a belief set satisfying G.\nIf the agent Ag2 accepts the proposal G, it is satisfied by an answer set of P2 which is also a belief set of (P2, H2).\nHere, (1) and (2) represent selection of products.\nThe atom pc (b1, 1G, 512M, 80G) represents that the seller agent has a PC of the brand b1 such that CPU is 1GHz, memory is 512MB, and HDD is 80GB.\nPrices of products are represented as desire of the seller.\nThe rules (3)--(5) are normal prices of products.\nA normal price is a selling price on the condition that service points are added (6).\nOn the other hand, a discount price is applied if the paying method is cash and no service point is added (7).\nThe fact (8) represents the addition of service points.\nThis service would be withdrawn in case of discount prices, so add point is specified as an abducible.\nA buyer agent has the abductive program (Pb, Hb) in which Pb consists of belief Bb and desire Db:\nand (16) impose
constraints for buying a PC.\nA DVD-RW is specified as an abducible which is subject to concession.\n(1st round) First, the following proposal is given by the buyer agent:\nAs Ps has no answer set which satisfies G1b, the seller agent cannot accept the proposal.\nThe seller takes an action of making a counter-proposal and performs abduction.\nAs a result, the seller finds the minimal explanation (E, F) = ({pay cash}, {add point}) which explains G1b\u03b81 with \u03b81 = {x\/1170}.\nThe seller constructs the conditional proposal: G1s: pc (b1, 1G, 512M, 80G), dvd-rw, price (1170), pay cash, not add point and offers it to the buyer.\n(2nd round) The buyer does not accept G1s because he\/she cannot pay it by cash (15).\nThe buyer then returns the critique G2b = reject to the seller, together with the critique set CS2b (Pb, G1s) = {(15)}.\nIn response to this, the seller tries to make another proposal which satisfies the constraint in this critique set.\nAs G1s is stored in FPs2 and no other conditional proposal satisfying the buyer's requirement exists, the seller produces neighborhood proposals.\nHe\/she relaxes G1b by dropping x < 1200 in the condition, and produces pc (b1, 1G, 512M, 80G), dvd-rw, price (x).\nAs Ps has an answer set which satisfies G2s: pc (b1, 1G, 512M, 80G), dvd-rw, price (1300),\nthe seller offers G2s as a new counter-proposal.\n(3rd round) The buyer does not accept G2s because he\/she cannot pay more than 1200 USD (16).\nThe buyer again returns the critique G3b = reject to the seller, together with the critique set CS3b (Pb, G2s) = CS2b (Pb, G1s) \u222a {(16)}.\nThe seller then considers another proposal by replacing b1 with a variable w, G1b now becomes\nthe seller offers G3s as a new counter-proposal.\n(4th round) The buyer does not accept G3s because a PC of the brand b2 is out of his\/her interest and Pb has no answer set satisfying G3s.\nThen, the buyer makes a concession by changing his\/her original goal.\nThe buyer relaxes G1b by goal replacement using the rule (9) in Pb, and produces pc (b1, 1G, 512M, 80G), drive, price (x), x < 1200.\nUsing (10), the following proposal is produced: pc (b1, 1G, 512M, 80G), cd-rw, price (x), x < 1200.\nAs Pb \\ {dvd-rw} has a consistent answer set satisfying the above proposal, the buyer proposes the conditional neighborhood proposal\nto the seller agent.\nSince Ps also has an answer set satisfying G4b, the seller accepts it and sends the message G4s = accept to the buyer.\nThus, the negotiation ends in success.\n4.\nCOMPUTATION\nIn this section, we provide methods of computing proposals in terms of answer sets of programs.\nWe first introduce some definitions from [15].\nDEFINITION 4.1.\nGiven an abductive program (P, H), the set UR of update rules is defined as: UR = {L \u2190 not \u00afL, \u00afL \u2190 not L | L \u2208 H} \u222a {+ L \u2190 L | L \u2208 H \\ P} \u222a {\u2212 L \u2190 not L | L \u2208 H \u2229 P} where \u00afL, + L, and \u2212 L are new atoms uniquely associated with every L \u2208 H.\nThe atoms + L and \u2212 L are called update atoms.\nBy the definition, the atom \u00afL becomes true iff L is not true.\nThe pair of rules L \u2190 not \u00afL and \u00afL \u2190 not L specify the situation that an abducible L is true or not.\nWhen p (x) \u2208 H and p (a) \u2208 P but p (t) \u2209 P for t \u2260 a, the rule + L \u2190 L precisely becomes + p (t) \u2190 p (t) for any t \u2260 a.\nIn this case, the rule is shortly
written as + p (x) \u2190 p (x), x \u2260 a. Generally, the rule becomes + p (x) \u2190 p (x), x \u2260 t1,..., x \u2260 tn for n such instances.\nThe rule + L \u2190 L derives the atom + L if an abducible L which is not in P is to be true.\nIn contrast, the rule \u2212 L \u2190 not L derives the atom \u2212 L if an abducible L which is in P is not to be true.\nThus, update atoms represent the change of truth values of abducibles in a program.\nThat is, + L means the introduction of L, while \u2212 L means the deletion of L.\nWhen an abducible L contains variables, the associated update atom + L or \u2212 L is supposed to have exactly the same variables.\nIn this case, an update atom is semantically identified with its ground instances.\nThe set of all update atoms associated with the abducibles in H is denoted by UH, and UH = UH + \u222a UH \u2212 where UH + (resp.\nUH \u2212) is the set of update atoms of the form + L (resp.\n\u2212 L).\nDEFINITION 4.2.\nGiven an abductive program (P, H), its update program UP is defined as the program UP = (P \\ H) \u222a UR.\nAn answer set S of UP is called U-minimal if there is no answer set T of UP such that T \u2229 UH \u2282 S \u2229 UH.\nBy the definition, U-minimal answer sets exist whenever UP has answer sets.\nUpdate programs are used for computing (minimal) explanations of an observation.\nGiven an observation G as a conjunction of literals and NAF-literals possibly containing variables, we introduce a new ground literal O together with the rule O \u2190 G.\nIn this case, O has an explanation (E, F) iff G has the same explanation.\nWith this replacement, an observation is assumed to be a ground literal without loss of generality.\nIn what follows, E + = {+ L | L \u2208 E} and F \u2212 = {\u2212 L | L \u2208 F} for E \u2286 H and F \u2286 H.\nNext, consider the program UP \u222a {\u2190 not flies (t)}.\nIt has the single U-minimal answer set: S = {bird (t), bird (o), flies (t), flies (o), broken-wing (t), broken-wing (o), \u2212 broken-wing (t)}.\nThe unique minimal explanation (E, F) = (\u2205, {broken-wing
(t)}) of G is expressed by the update atom \u2212 broken-wing (t) in S \u2229 UH \u2212.\nConditional proposals are computed as follows.\ninput: an abductive program (P, H), a proposal G; output: a set Sc of proposals.\nConsider (P \u222a {O \u2190 G}, H) with a ground literal O. Compute a minimal explanation of O in (P \u222a {O \u2190 G}, H) using its update program.\nIf O has a minimal explanation (E, F) with a substitution \u03b8 for variables in G, put \"G\u03b8, E, not F\" in Sc.\nNext, neighborhood proposals are computed as follows.\ninput: an abductive program (P, H), a proposal G; output: a set Sn of proposals.\n% neighborhood proposals by anti-instantiation; Construct G' by anti-instantiation.\nFor a ground literal O, if P \u222a {O \u2190 G'} \u222a {\u2190 not O} has a consistent answer set satisfying G' \u03b8 with a substitution \u03b8 and G' \u03b8 \u2260 G, put G' \u03b8 in Sn.\n% neighborhood proposals by dropping conditions; Construct G' by dropping conditions.\nIf G' is a ground literal and the program P \u222a {\u2190 not G'} has a consistent answer set, put G' in Sn.\nElse if G' is a conjunction possibly containing variables, do the following.\nFor a ground literal O, if P \u222a {O \u2190 G'} \u222a {\u2190 not O} has a consistent answer set satisfying G' \u03b8 with a substitution \u03b8, put G' \u03b8 in Sn.\n% neighborhood proposals by goal replacement; Construct G' by goal replacement.\nIf G' is a ground literal and there is a rule H \u2190 B in P such that G' = H\u03c3 and B\u03c3 \u2260 G for some substitution \u03c3, put G'' = B\u03c3.\nIf P \u222a {\u2190 not G'} has a consistent answer set satisfying G'' \u03b8 with a substitution \u03b8, put G'' \u03b8 in Sn.\nElse if G' is a conjunction possibly containing variables, do the following.\nFor a replaced literal L \u2208 G', if there is a rule H \u2190 B in P such that L = H\u03c3 and (G' \\ {L}) \u222a B\u03c3 \u2260 G for some substitution \u03c3, put G'' = (G' \\ {L}) \u222a B\u03c3.\nFor a ground literal O, if P \u222a {O \u2190 G''} \u222a {\u2190 not O} has a consistent answer set satisfying G'' \u03b8 with a substitution \u03b8, put G'' \u03b8 in Sn.\nTHEOREM 4.3.\nThe set Sc
(resp.\nSn) computed above coincides with the set of conditional proposals (resp.\nneighborhood proposals).\nPROOF.\nThe result for Sc follows from Definition 3.3 and Proposition 4.1.\nThe result for Sn follows from Definition 3.5 and Proposition 4.2.\nConditional neighborhood proposals are computed by combining the above two procedures.\nThose proposals are computed at each round.\nNote that the procedure for computing Sn contains some nondeterministic choices.\nFor instance, there are generally several candidate literals to relax in a proposal.\nAlso, there might be several rules in a program for the usage of goal replacement.\nIn practice, an agent can prespecify literals in a proposal for possible relaxation or rules in a program for the usage of goal replacement.\n5.\nRELATED WORK\nAs there is a large body of literature on automated negotiation, this section focuses on comparison with negotiation frameworks based on logic and argumentation.\nSadri et al. [14] use abductive logic programming as a representation language of negotiating agents.\nAgents negotiate using common dialogue primitives, called dialogue moves.\nEach agent has an abductive logic program in which a sequence of dialogues are specified by a program, a dialogue protocol is specified as constraints, and dialogue moves are specified as abducibles.\nThe behavior of agents is regulated by an observe-think-act cycle.\nOnce a dialogue move is uttered by an agent, another agent that observed the utterance thinks and acts using a proof procedure.\nTheir approach and ours both employ abductive logic programming as a platform of agent reasoning, but the use of it is quite different.\nFirst, they use abducibles to specify dialogue primitives of the form tell (utterer, receiver, subject, identifier, time), while we use abducibles to specify arbitrary permissible hypotheses to construct conditional proposals.\nSecond, a program pre-specifies a plan to carry out in order to achieve a goal, together with
available\/missing resources in the context of resource-exchanging problems.\nThis is in contrast with our method in which possible counter-proposals are newly constructed in response to a proposal made by an agent.\nThird, they specify a negotiation policy inside a program (as integrity constraints), while we give a protocol independent of individual agents.\nThey provide an operational model that completely specifies the behavior of agents in terms of agent cycle.\nWe do not provide such a complete specification of the behavior of agents.\nOur primary interest is to mechanize construction of proposals.\nBracciali and Torroni [2] formulate abductive agents that have knowledge in abductive logic programs.\nTo explain an observation, two agents communicate by exchanging integrity constraints.\nIn the process of communication, an agent can revise its own integrity constraints according to the information provided by the other agent.\nA set IC' of integrity constraints relaxes a set IC (or IC tightens IC') if any observation that can be proved with respect to IC can also be proved with respect to IC'.\nFor instance, IC': \u2190 a, b, c relaxes IC: \u2190 a, b. Thus, they use relaxation for weakening the constraints in an abductive logic program.\nIn contrast, we use relaxation for weakening proposals and three different relaxation methods, anti-instantiation, dropping conditions, and goal replacement, are considered.\nTheir goal is to explain an observation by revising integrity constraints of an agent through communication, while we use integrity constraints for communication to explain critiques and help other agents in making counter-proposals.\nMeyer et al.
[11] introduce a logical framework for negotiating agents.\nThey introduce two different modes of negotiation: concession and adaptation.\nThey provide rational postulates to characterize negotiated outcomes between two agents, and describe methods for constructing outcomes.\nThey provide logical conditions for negotiated outcomes to satisfy, but they do not describe a process of negotiation nor negotiation protocols.\nMoreover, they represent agents by classical propositional theories, which is different from our abductive logic programming framework.\nFoo et al. [5] model one-to-one negotiation as a one-time encounter between two extended logic programs.\nAn agent offers an answer set of its program, and their mutual deal is regarded as a trade on their answer sets.\nStarting from the initial agreement set S \u2229 T for an answer set S of an agent and an answer set T of another agent, each agent extends this set to reflect its own demand while keeping consistency with the demand of the other agent.\nTheir algorithm returns new programs having answer sets which are consistent with each other and keep the agreement set.\nThe work is extended to repeated encounters in [3].\nIn their framework, two agents exchange answer sets to produce a common belief set, which is different from our framework of exchanging proposals.\nThere are a number of proposals for negotiation based on argumentation.\nAn advantage of argumentation-based negotiation is that it constructs a proposal with arguments supporting the proposal [1].\nThe existence of arguments is useful to convince other agents of reasons why an agent offers (counter -) proposals or returns critiques.\nParsons et al.
[13] develop a logic of argumentation-based negotiation among BDI agents.\nIn one-to-one negotiation, an agent A generates a proposal together with its arguments, and passes it to another agent B.\nThe proposal is evaluated by B which attempts to build arguments against it.\nIf it conflicts with B's interest, B informs A of its objection by sending back its attacking argument.\nIn response to this, A tries to find an alternative way of achieving its original objective, or a way of persuading B to drop its objection.\nIf either type of argument can be found, A will submit it to B.\nIf B finds no reason to reject the new proposal, it will be accepted and the negotiation ends in success.\nOtherwise, the process is iterated.\nIn this negotiation processes, the agent A never changes its original objective, so that negotiation ends in failure if A fails to find an alternative way of achieving the original objective.\nIn our framework, when a proposal is rejected by another agent, an agent can weaken or change its objective by abduction and relaxation.\nOur framework does not have a mechanism of argumentation, but reasons for critiques can be informed by responding critique sets.\nKakas and Moraitis [10] propose a negotiation protocol which integrates abduction within an argumentation framework.\nA proposal contains an offer corresponding to the negotiation object, together with supporting information representing conditions under which this offer is made.\nSupporting information is computed by abduction and is used for constructing conditional arguments during the process of negotiation.\nIn their negotiation protocol, when an agent cannot satisfy its own goal, the agent considers the other agent's goal and searches for conditions under which the goal is acceptable.\nOur present approach differs from theirs in the following points.\nFirst, they use abduction to seek conditions to support arguments, while we use abduction to seek conditions for proposals to 
accept.\nSecond, in their negotiation protocol, counter-proposals are chosen among candidates based on preference knowledge of an agent at meta-level, which represents policy under which an agent uses its object-level decision rules according to situations.\nIn our framework, counter-proposals are newly constructed using abduction and relaxation.\nThe method of construction is independent of particular negotiation protocols.\nAs [2, 10, 14], abduction or abductive logic programming used in negotiation is mostly based on normal abduction.\nIn contrast, our approach is based on extended abduction which cannot only introduce hypotheses but remove them from a program.\nThis is another important difference.\nRelaxation and neighborhood query answering are devised to make databases cooperative with their users [4, 6].\nIn this sense, those techniques have the spirit similar to cooperative problem solving in multi-agent systems.\nAs far as the authors know, however, there is no study which applies those technique to agent negotiation.\n6.\nCONCLUSION\nIn this paper we proposed a logical framework for negotiating agents.\nTo construct proposals in the process of negotiation, we combined the techniques of extended abduction and relaxation.\nIt was shown that these two operations are used for general inference rules in producing proposals.\nWe developed a negotiation protocol between two agents based on exchange of proposals and critiques, and provided procedures for computing proposals in abductive logic programming.\nThis enables us to realize automated negotiation on top of the existing answer set solvers.\nThe present framework does not have a mechanism of selecting an optimal (counter -) proposal among different alternatives.\nTo compare and evaluate proposals, an agent must have preference knowledge of candidate proposals.\nFurther elaboration to maximize the utility of agents is left for future study.","keyphrases":["negoti","relax","autom negoti","logic 
program","extend abduct","condit propos","multi-agent system","on-to-on negoti","altern propos","specif meta-knowledg","abduct framework","abduct program","drop condit","anti-instanti","induct gener","minim explan","integr constraint"],"prmu":["P","P","P","P","P","P","U","M","M","U","R","R","M","U","U","U","U"]} {"id":"I-62","title":"A Q-decomposition and Bounded RTDP Approach to Resource Allocation","abstract":"This paper contributes to solve effectively stochastic resource allocation problems known to be NP-Complete. To address this complex resource management problem, a Q-decomposition approach is proposed when the resources which are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least another agent. The Q-decomposition allows to coordinate these reward separated agents and thus permits to reduce the set of states and actions to consider. On the other hand, when the resources are available to all agents, no Q-decomposition is possible and we use heuristic search. In particular, the bounded Real-time Dynamic Programming (bounded rtdp) is used. Bounded rtdp concentrates the planning on significant states only and prunes the action space. 
The pruning is accomplished by proposing tight upper and lower bounds on the value function.","lvl-1":"A Q-decomposition and Bounded RTDP Approach to Resource Allocation Pierrick Plamondon and Brahim Chaib-draa Computer Science & Software Engineering Dept Laval University Qu\u00e9bec, Canada {plamon, chaib}@damas.\nift.ulaval.ca Abder Rezak Benaskeur Decision Support Systems Section Defence R&D Canada - Valcartier Qu\u00e9bec, Canada abderrezak.benaskeur@drdc-rddc.gc.ca ABSTRACT This paper contributes to solve effectively stochastic resource allocation problems known to be NP-Complete.\nTo address this complex resource management problem, a Qdecomposition approach is proposed when the resources which are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least another agent.\nThe Q-decomposition allows to coordinate these reward separated agents and thus permits to reduce the set of states and actions to consider.\nOn the other hand, when the resources are available to all agents, no Qdecomposition is possible and we use heuristic search.\nIn particular, the bounded Real-time Dynamic Programming (bounded rtdp) is used.\nBounded rtdp concentrates the planning on significant states only and prunes the action space.\nThe pruning is accomplished by proposing tight upper and lower bounds on the value function.\nCategories and Subject Descriptors I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence.\nGeneral Terms Algorithms, Performance, Experimentation.\n1.\nINTRODUCTION This paper aims to contribute to solve complex stochastic resource allocation problems.\nIn general, resource allocation problems are known to be NP-Complete [12].\nIn such problems, a scheduling process suggests the action (i.e. 
resources to allocate) to undertake to accomplish certain tasks, according to the perfectly observable state of the environment.\nWhen executing an action to realize a set of tasks, the stochastic nature of the problem induces probabilities on the next visited state.\nIn general, the number of states is the combination of all possible specific states of each task and available resources.\nIn this case, the number of possible actions in a state is the combination of each individual possible resource assignment to the tasks.\nThe very high number of states and actions in this type of problem makes it very complex.\nThere can be many types of resource allocation problems.\nFirstly, if the resources are already shared among the agents, and the actions made by an agent does not influence the state of another agent, the globally optimal policy can be computed by planning separately for each agent.\nA second type of resource allocation problem is where the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least another agent.\nTo solve this problem efficiently, we adapt Qdecomposition proposed by Russell and Zimdars [9].\nIn our Q-decomposition approach, a planning agent manages each task and all agents have to share the limited resources.\nThe planning process starts with the initial state s0.\nIn s0, each agent computes their respective Q-value.\nThen, the planning agents are coordinated through an arbitrator to find the highest global Q-value by adding the respective possible Q-values of each agents.\nWhen implemented with heuristic search, since the number of states and actions to consider when computing the optimal policy is exponentially reduced compared to other known approaches, Q-decomposition allows to formulate the first optimal decomposed heuristic search algorithm in a stochastic environments.\nOn the other hand, when the resources are available to all agents, no Q-decomposition is 
possible.\nA common way of addressing this large stochastic problem is by using Markov Decision Processes (mdps), and in particular real-time search, for which many algorithms have been developed recently.\nFor instance, Real-Time Dynamic Programming (rtdp) [1], lrtdp [4], hdp [3], and lao* [5] are all state-of-the-art heuristic search approaches in a stochastic environment.\nBecause of its anytime quality, an interesting approach is rtdp, introduced by Barto et al. [1], which updates states in trajectories from an initial state s0 to a goal state sg.\nrtdp is used in this paper to efficiently solve a constrained resource allocation problem.\nrtdp is much more effective if the action space can be pruned of sub-optimal actions.\n1212 978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nTo do this, McMahan et al. [6], Smith and Simmons [11], and Singh and Cohn [10] proposed solving a stochastic problem using an rtdp-type heuristic search with upper and lower bounds on the value of states.\nMcMahan et al. [6] and Smith and Simmons [11] suggested, in particular, an efficient trajectory of state updates to further speed up convergence when given upper and lower bounds.\nThis efficient trajectory of state updates can be combined with the approach proposed here, since this paper focuses on the definition of tight bounds and efficient state updates for a constrained resource allocation problem.\nOn the other hand, the approach by Singh and Cohn is suited to our case, and is extended in this paper using, in particular, the concept of marginal revenue [7] to elaborate tight bounds.\nThis paper proposes new algorithms to define upper and lower bounds in the context of an rtdp heuristic search approach.\nOur marginal revenue bounds are compared theoretically and empirically to the bounds proposed by Singh and Cohn.\nAlso, even if the algorithm used to obtain the optimal policy is rtdp, our bounds can be used with any other algorithm that solves an mdp.\nThe only condition on the use of our bounds is to 
be in the context of stochastic constrained resource allocation.\nThe problem is now modelled.\n2.\nPROBLEM FORMULATION A simple resource allocation problem is one where there are the following two tasks to realize: ta1 = {wash the dishes}, and ta2 = {clean the floor}.\nThese two tasks are either in the realized state, or not realized state.\nTo realize the tasks, two type of resources are assumed: res1 = {brush}, and res2 = {detergent}.\nA computer has to compute the optimal allocation of these resources to cleaner robots to realize their tasks.\nIn this problem, a state represents a conjunction of the particular state of each task, and the available resources.\nThe resources may be constrained by the amount that may be used simultaneously (local constraint), and in total (global constraint).\nFurthermore, the higher is the number of resources allocated to realize a task, the higher is the expectation of realizing the task.\nFor this reason, when the specific states of the tasks change, or when the number of available resources changes, the value of this state may change.\nWhen executing an action a in state s, the specific states of the tasks change stochastically, and the remaining resource are determined with the resource available in s, subtracted from the resources used by action a, if the resource is consumable.\nIndeed, our model may consider consumable and non-consumable resource types.\nA consumable resource type is one where the amount of available resource is decreased when it is used.\nOn the other hand, a nonconsumable resource type is one where the amount of available resource is unchanged when it is used.\nFor example, a brush is a non-consumable resource, while the detergent is a consumable resource.\n2.1 Resource Allocation as a MDPs In our problem, the transition function and the reward function are both known.\nA Markov Decision Process (mdp) framework is used to model our stochastic resource allocation problem.\nmdps have been widely adopted by 
researchers today to model a stochastic process.\nThis is due to the fact that mdps provide a well-studied and simple, yet very expressive model of the world.\nAn mdp in the context of a resource allocation problem with limited resources is defined as a tuple Res, T a, S, A, P, W, R, , where: \u2022 Res = res1, ..., res|Res| is a finite set of resource types available for a planning process.\nEach resource type may have a local resource constraint Lres on the number that may be used in a single step, and a global resource constraint Gres on the number that may be used in total.\nThe global constraint only applies for consumable resource types (Resc) and the local constraints always apply to consumable and nonconsumable resource types.\n\u2022 T a is a finite set of tasks with ta \u2208 T a to be accomplished.\n\u2022 S is a finite set of states with s \u2208 S.\nA state s is a tuple T a, res1, ..., res|Resc| , which is the characteristic of each unaccomplished task ta \u2208 T a in the environment, and the available consumable resources.\nsta is the specific state of task ta.\nAlso, S contains a non empty set sg \u2286 S of goal states.\nA goal state is a sink state where an agent stays forever.\n\u2022 A is a finite set of actions (or assignments).\nThe actions a \u2208 A(s) applicable in a state are the combination of all resource assignments that may be executed, according to the state s.\nIn particular, a is simply an allocation of resources to the current tasks, and ata is the resource allocation to task ta.\nThe possible actions are limited by Lres and Gres.\n\u2022 Transition probabilities Pa(s |s) for s \u2208 S and a \u2208 A(s).\n\u2022 W = [wta] is the relative weight (criticality) of each task.\n\u2022 State rewards R = [rs] : ta\u2208T a rsta \u2190 sta \u00d7 wta.\nThe relative reward of the state of a task rsta is the product of a real number sta by the weight factor wta.\nFor our problem, a reward of 1 \u00d7 wta is given when the state of a task 
(sta) is in an achieved state, and 0 in all other cases.\n• A discount (preference) factor γ, which is a real number between 0 and 1.\nA solution of an mdp is a policy π mapping states s into actions a ∈ A(s).\nIn particular, πta(s) is the action (i.e. resources to allocate) that should be executed on task ta, considering the global state s.\nIn this case, an optimal policy is one that maximizes the expected total reward for accomplishing all tasks.\nThe optimal value of a state, V(s), is given by: V(s) = R(s) + max_{a∈A(s)} γ Σ_{s'∈S} Pa(s'|s) V(s')   (1) where the remaining consumable resources in state s' are Resc \\ res(a), where res(a) are the consumable resources used by action a. Indeed, since an action a is a resource assignment, Resc \\ res(a) is the new set of available resources after the execution of action a. Furthermore, one may compute the Q-values Q(a, s) of each state-action pair using the following equation: Q(a, s) = R(s) + γ Σ_{s'∈S} Pa(s'|s) max_{a'∈A(s')} Q(a', s')   (2) where the optimal value of a state is V(s) = max_{a∈A(s)} Q(a, s).\nThe policy is subject to the local resource constraints res(π(s)) ≤ Lres ∀ s ∈ S and ∀ res ∈ Res.\nThe global constraint is defined according to all system trajectories tra ∈ TRA.\nA system trajectory tra is a possible sequence of state-action pairs, until a goal state is reached under the optimal policy π.\nFor example, state s is entered, which may transit to s' or to s'', according to action a.\nThe two possible system trajectories are (s, a), (s') and (s, a), (s'').\nThe global resource constraint is res(tra) ≤ Gres ∀ tra ∈ TRA and ∀ res ∈ Resc, where res(tra) is a function which returns the resources used by trajectory tra.\nSince the available consumable resources are represented in the state space, this 
condition is verified by itself.\nIn other words, the model is Markovian as the history has not to be considered in the state space.\nFurthermore, the time is not considered in the model description, but it may also include a time horizon by using a finite horizon mdp.\nSince resource allocation in a stochastic environment is NP-Complete, heuristics should be employed.\nQ-decomposition which decomposes a planning problem to many agents to reduce the computational complexity associated to the state and\/or action spaces is now introduced.\n2.2 Q-decomposition for Resource Allocation There can be many types of resource allocation problems.\nFirstly, if the resources are already shared among the agents, and the actions made by an agent does not influence the state of another agent, the globally optimal policy can be computed by planning separately for each agent.\nA second type of resource allocation problem is where the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least another agent.\nFor instance, a group of agents which manages the oil consummated by a country falls in this group.\nThese agents desire to maximize their specific reward by consuming the right amount of oil.\nHowever, all the agents are penalized when an agent consumes oil because of the pollution it generates.\nAnother example of this type comes from our problem of interest, explained in Section 3, which is a naval platform which must counter incoming missiles (i.e. tasks) by using its resources (i.e. 
weapons, movements).\nIn some scenarios, it may happen that the missiles can be classified into two types: those requiring a set of resources Res1 and those requiring a set of resources Res2.\nThis can happen depending on the type of missiles, their range, and so on.\nIn this case, two agents can plan for both sets of tasks to determine the policy.\nHowever, there are interactions between the resources of Res1 and Res2, so that certain combinations of resources cannot be assigned.\nIn particular, if an agent i allocates resources Resi to the first set of tasks T ai, and agent i' allocates resources Resi' to the second set of tasks T ai', the resulting policy may include actions which cannot be executed together.\nTo resolve these conflicts, we use Q-decomposition, proposed by Russell and Zimdars [9] in the context of reinforcement learning.\nThe primary assumption underlying Q-decomposition is that the overall reward function R can be additively decomposed into separate rewards Ri for each distinct agent i ∈ Ag, where |Ag| is the number of agents.\nThat is, R = Σ_{i∈Ag} Ri.\nIt requires each agent to compute a value, from its perspective, for every action.\nTo coordinate with each other, each agent i reports its action values Qi(ai, si) for each state si ∈ Si to an arbitrator at each learning iteration.\nThe arbitrator then chooses an action maximizing the sum of the agent Q-values for each global state s ∈ S.\nThe next time state s is updated, an agent i considers the value as its respective contribution, or Q-value, to the global maximal Q-value.\nThat is, Qi(ai, si) is the value of a state such that it maximizes max_{a∈A(s)} Σ_{i∈Ag} Qi(ai, si).\nThe fact that the agents use a determined Q-value as the value of a state is an extension of the Sarsa on-policy algorithm [8] to Q-decomposition.\nRussell and Zimdars called this approach local Sarsa.\nIn this way, an ideal compromise can be found for the agents to reach a global optimum.\nIndeed, rather than 
allowing each agent to choose the successor action, each agent i uses the action ai executed by the arbitrator in the successor state si: Qi(ai, si) = Ri(si) + \u03b3 si\u2208Si Pai (si|si)Qi(ai, si) (3) where the remaining consumable resources in state si are Resci \\ resi(ai) for a resource allocation problem.\nRussell and Zimdars [9] demonstrated that local Sarsa converges to the optimum.\nAlso, in some cases, this form of agent decomposition allows the local Q-functions to be expressed by a much reduced state and action space.\nFor our resource allocation problem described briefly in this section, Q-decomposition can be applied to generate an optimal solution.\nIndeed, an optimal Bellman backup can be applied in a state as in Algorithm 1.\nIn Line 5 of the Qdec-backup function, each agent managing a task computes its respective Q-value.\nHere, Qi (ai, s ) determines the optimal Q-value of agent i in state s .\nAn agent i uses as the value of a possible state transition s the Q-value for this agent which determines the maximal global Q-value for state s as in the original Q-decomposition approach.\nIn brief, for each visited states s \u2208 S, each agent computes its respective Q-values with respect to the global state s.\nSo the state space is the joint state space of all agents.\nSome of the gain in complexity to use Q-decomposition resides in the si\u2208Si Pai (si|s) part of the equation.\nAn agent considers as a possible state transition only the possible states of the set of tasks it manages.\nSince the number of states is exponential with the number of tasks, using Q-decomposition should reduce the planning time significantly.\nFurthermore, the action space of the agents takes into account only their available resources which is much less complex than a standard action space, which is the combination of all possible resource allocation in a state for all agents.\nThen, the arbitrator functionalities are in Lines 8 to 20.\nThe global Q-value is the sum of 
the Q-values produced by each agent managing each task, as shown in Line 11, considering the global action a.\nIn this case, when an action of an agent i cannot be executed simultaneously with an action of another agent i', the global action is simply discarded from the action space A(s).\nLine 14 simply allocates the current value with respect to the highest global Q-value, as in a standard Bellman backup.\nThen, the optimal policy and Q-value of each agent are updated in Lines 16 and 17 to the sub-actions ai and specific Q-values Qi(ai, s) of each agent for action a.\nAlgorithm 1 The Q-decomposition Bellman Backup.\n1: Function Qdec-backup(s)\n2: V(s) ← 0\n3: for all i ∈ Ag do\n4: for all ai ∈ Ai(s) do\n5: Qi(ai, s) ← Ri(s) + γ Σ_{s'i∈Si} Pai(s'i|s) Q'i(a'i, s') {where Q'i(a'i, s') = hi(s') when s' is not yet visited, and s' has Resci \\ resi(ai) remaining consumable resources for each agent i}\n6: end for\n7: end for\n8: for all a ∈ A(s) do\n9: Q(a, s) ← 0\n10: for all i ∈ Ag do\n11: Q(a, s) ← Q(a, s) + Qi(ai, s)\n12: end for\n13: if Q(a, s) > V(s) then\n14: V(s) ← Q(a, s)\n15: for all i ∈ Ag do\n16: πi(s) ← ai\n17: Q'i(ai, s) ← Qi(ai, s)\n18: end for\n19: end if\n20: end for\nA standard Bellman backup has a complexity of O(|A| × |SAg|), where |SAg| is the number of joint states for all agents excluding the resources, and |A| is the number of joint actions.\nOn the other hand, the Q-decomposition Bellman backup has a complexity of O((|Ag| × |Ai| × |Si|) + (|A| × |Ag|)), where |Si| is the number of states for an agent i, excluding the resources, and |Ai| is the number of actions for an agent i.\nSince |SAg| is combinatorial with the number of tasks, |Si| ≪ |S|.\nAlso, |A| is combinatorial with the number of resource types.\nIf the resources are already shared among the agents, the number of resource types for 
each agent will usually be lower than the set of all available resource types for all agents.\nIn these circumstances, |Ai| |A|.\nIn a standard Bellman backup, |A| is multiplied by |SAg|, which is much more complex than multiplying |A| by |Ag| with the Q-decomposition Bellman backup.\nThus, the Q-decomposition Bellman backup is much less complex than a standard Bellman backup.\nFurthermore, the communication cost between the agents and the arbitrator is null since this approach does not consider a geographically separated problem.\nHowever, when the resources are available to all agents, no Q-decomposition is possible.\nIn this case, Bounded RealTime Dynamic Programming (bounded-rtdp) permits to focuss the search on relevant states, and to prune the action space A by using lower and higher bound on the value of states.\nbounded-rtdp is now introduced.\n2.3 Bounded-RTDP Bonet and Geffner [4] proposed lrtdp as an improvement to rtdp [1].\nlrtdp is a simple dynamic programming algorithm that involves a sequence of trial runs, each starting in the initial state s0 and ending in a goal or a solved state.\nEach lrtdp trial is the result of simulating the policy \u03c0 while updating the values V (s) using a Bellman backup (Equation 1) over the states s that are visited.\nh(s) is a heuristic which define an initial value for state s.\nThis heuristic has to be admissible - The value given by the heuristic has to overestimate (or underestimate) the optimal value V (s) when the objective function is maximized (or minimized).\nFor example, an admissible heuristic for a stochastic shortest path problem is the solution of a deterministic shortest path problem.\nIndeed, since the problem is stochastic, the optimal value is lower than for the deterministic version.\nIt has been proven that lrtdp, given an admissible initial heuristic on the value of states cannot be trapped in loops, and eventually yields optimal values [4].\nThe convergence is accomplished by means of a labeling 
procedure called checkSolved(s, ).\nThis procedure tries to label as solved each traversed state in the current trajectory.\nWhen the initial state is labelled as solved, the algorithm has converged.\nIn this section, a bounded version of rtdp (boundedrtdp) is presented in Algorithm 2 to prune the action space of sub-optimal actions.\nThis pruning enables to speed up the convergence of lrtdp.\nbounded-rtdp is similar to rtdp except there are two distinct initial heuristics for unvisited states s \u2208 S; hL(s) and hU (s).\nAlso, the checkSolved(s, ) procedure can be omitted because the bounds can provide the labeling of a state as solved.\nOn the one hand, hL(s) defines a lower bound on the value of s such that the optimal value of s is higher than hL(s).\nFor its part, hU (s) defines an upper bound on the value of s such that the optimal value of s is lower than hU (s).\nThe values of the bounds are computed in Lines 3 and 4 of the bounded-backup function.\nComputing these two Q-values is made simultaneously as the state transitions are the same for both Q-values.\nOnly the values of the state transitions change.\nThus, having to compute two Q-values instead of one does not augment the complexity of the approach.\nIn fact, Smith and Simmons [11] state that the additional time to compute a Bellman backup for two bounds, instead of one, is no more than 10%, which is also what we obtained.\nIn particular, L(s) is the lower bound of state s, while U(s) is the upper bound of state s. Similarly, QL(a, s) is the Q-value of the lower bound of action a in state s, while QU (a, s) is the Q-value of the upper bound of action a in state s. Using these two bounds allow significantly reducing the action space A. 
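The bound-based pruning and convergence test described above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the two-state transition table P, the rewards R, and the initial bound values hL/hU are invented for the example, and only the backup step (compute QL and QU for every action, prune actions whose upper-bound Q-value cannot beat the state's lower bound, label the state solved when the gap falls below ε) follows the text:

```python
# Minimal sketch of a bounded Bellman backup with action pruning.
# All names and numbers (P, R, GAMMA, ...) are illustrative, not from the paper.
GAMMA, EPSILON = 0.95, 0.01

# Tiny 2-state example: (state, action) -> list of (next_state, probability).
P = {
    ("s0", "a1"): [("g", 0.9), ("s0", 0.1)],
    ("s0", "a2"): [("g", 0.3), ("s0", 0.7)],
}
R = {"s0": 0.0, "g": 1.0}
L = {"s0": 0.0, "g": 1.0}   # admissible lower bounds hL (never overestimate)
U = {"s0": 1.0, "g": 1.0}   # admissible upper bounds hU (never underestimate)

def bounded_backup(s, actions):
    """Update L(s) and U(s); return surviving actions and a solved flag."""
    q_low, q_up = {}, {}
    for a in actions:
        q_low[a] = R[s] + GAMMA * sum(p * L[s2] for s2, p in P[(s, a)])
        q_up[a]  = R[s] + GAMMA * sum(p * U[s2] for s2, p in P[(s, a)])
    L[s] = max(q_low.values())
    U[s] = max(q_up.values())
    # Prune a when even its optimistic value cannot beat the lower bound.
    kept = [a for a in actions if q_up[a] > L[s]]
    solved = abs(U[s] - L[s]) < EPSILON
    return kept, solved
```

Repeating the backup tightens both bounds: after two backups of s0 in this toy example, action a2 is pruned because its upper-bound Q-value drops below L(s0), and s0 is labeled solved once |U(s0) − L(s0)| < ε.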
Indeed, in Lines 5 and 6 of the bounded-backup function, if QU(a, s) ≤ L(s) then action a may be pruned from the action space of s.\nIn Line 13 of this function, a state can be labeled as solved if the difference between the lower and upper bounds is lower than ε.\nWhen the execution goes back to the bounded-rtdp function, the next state in Line 10 has a fixed number of consumable resources available Resc, determined in Line 9.\nIn brief, pickNextState(res) selects a non-solved state s reachable under the current policy which has the highest Bellman error (|U(s) − L(s)|).\nFinally, in Lines 12 to 15, a backup is made in a backward fashion on all visited states of a trajectory, when this trajectory has been completed.\nThis strategy has been proven efficient [11] [6].\nAs discussed by Singh and Cohn [10], this type of algorithm has a number of desirable anytime characteristics: if an action has to be picked in state s before the algorithm has converged (while multiple competitive actions remain), the action with the highest lower bound is picked.\nSince the upper bound for state s is known, it may be estimated how far the lower bound is from the optimal.\nAlgorithm 2 The bounded-rtdp algorithm.\nAdapted from [4] and [10].\n1: Function bounded-rtdp(S)\n2: returns a value function V\n3: repeat\n4: s ← s0\n5: visited ← null\n6: repeat\n7: visited.push(s)\n8: bounded-backup(s)\n9: Resc ← Resc \\ {π(s)}\n10: s ← s.pickNextState(Resc)\n11: until s is a goal\n12: while visited ≠ null do\n13: s ← visited.pop()\n14: bounded-backup(s)\n15: end while\n16: until s0 is solved or |A(s)| = 1 ∀ s ∈ S reachable from s0\n17: return V\nAlgorithm 3 The bounded Bellman backup.\n1: Function bounded-backup(s)\n2: for all a ∈ A(s) do\n3: QU(a, s) ← R(s) + γ Σ_{s'∈S} Pa(s'|s) U(s')\n4: QL(a, s) ← R(s) + γ Σ_{s'∈S} Pa(s'|s) L(s') {where L(s') ← hL(s') and U(s') ← hU(s') when s' is not yet visited and s' has Resc \\ res(a) remaining consumable resources}\n5: if QU(a, s) ≤ L(s) then\n6: A(s) ← A(s) \\ res(a)\n7: end if\n8: end for\n9: L(s) ← max_{a∈A(s)} QL(a, s)\n10: U(s) ← max_{a∈A(s)} QU(a, s)\n11: π(s) ← arg max_{a∈A(s)} QL(a, s)\n12: if |U(s) − L(s)| < ε then\n13: s ← solved\n14: end if\nIf the difference between the lower and upper bound is too high, one can choose to use another greedy algorithm of one's choice, which outputs a fast and near-optimal solution.\nFurthermore, if a new task dynamically arrives in the environment, it can be accommodated by redefining the lower and upper bounds which exist at the time of its arrival.\nSingh and Cohn [10] proved that an algorithm that uses admissible lower and upper bounds to prune the action space is assured of converging to an optimal solution.\nThe next sections describe two separate methods to define hL(s) and hU(s).\nFirst of all, the method of Singh and Cohn [10] is briefly described.\nThen, our own method proposes tighter bounds, thus allowing a more effective pruning of the action space.\n2.4 Singh and Cohn's Bounds Singh and Cohn [10] defined lower and upper bounds to prune the action space.\nTheir approach is quite straightforward.\nFirst of all, a value function is computed for all tasks to realize, using a standard rtdp approach.\nThen, using these task-value functions, a lower bound hL and an upper bound hU can be defined.\nIn particular, hL(s) = max_{ta∈T a} Vta(sta), and hU(s) = Σ_{ta∈T a} Vta(sta).\nFor readability, the upper bound by Singh and Cohn is named SinghU, and the lower bound is named SinghL.\nThe admissibility of these bounds has been proven by Singh and Cohn, such that the upper bound always overestimates the optimal value of each state, while the lower bound always underestimates the optimal value of each state.\nTo determine the optimal policy π, Singh and Cohn 
implemented an algorithm very similar to bounded-rtdp, which uses the bounds to initialize L(s) and U(s).\nThe only difference between bounded-rtdp, and the rtdp version of Singh and Cohn is in the stopping criteria.\nSingh and Cohn proposed that the algorithm terminates when only one competitive action remains for each state, or when the range of all competitive actions for any state are bounded by an indifference parameter .\nbounded-rtdp labels states for which |U(s) \u2212 L(s)| < as solved and the convergence is reached when s0 is solved or when only one competitive action remains for each state.\nThis stopping criteria is more effective since it is similar to the one used by Smith and Simmons [11] and McMahan et al. brtdp [6].\nIn this paper, the bounds defined by Singh and Cohn and implemented using bounded-rtdp define the Singh-rtdp approach.\nThe next sections propose to tighten the bounds of Singh-rtdp to permit a more effective pruning of the action space.\n2.5 Reducing the Upper Bound SinghU includes actions which may not be possible to execute because of resource constraints, which overestimates the upper bound.\nTo consider only possible actions, our upper bound, named maxU is introduced: hU (s) = max a\u2208A(s) ta\u2208T a Qta(ata, sta) (4) where Qta(ata, sta) is the Q-value of task ta for state sta, and action ata computed using a standard lrtdp approach.\nTheorem 2.1.\nThe upper bound defined by Equation 4 is admissible.\nProof: The local resource constraints are satisfied because the upper bound is computed using all global possible actions a. 
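The relationship between SinghL, SinghU, and the tighter maxU bound of Equation 4 can be illustrated in Python. This is a toy sketch under invented numbers: the per-task value table V and Q-table Q are hypothetical stand-ins for the precomputed task-value functions, and the two joint actions encode a hypothetical local constraint under which a resource cannot serve both tasks at once:

```python
# Sketch of SinghL, SinghU, and maxU from per-task value functions.
# V[ta] stands in for V_ta(s_ta); Q[ta][a_ta] for Q_ta(a_ta, s_ta).
# All numbers are invented for illustration.
V = {"ta1": 0.8, "ta2": 0.7}          # per-task optimal values
Q = {                                  # per-task Q-values per resource
    "ta1": {"res1": 0.8, "res2": 0.5},
    "ta2": {"res1": 0.7, "res2": 0.4},
}
# Feasible joint actions: each resource may be assigned to only one task.
joint_actions = [{"ta1": "res1", "ta2": "res2"},
                 {"ta1": "res2", "ta2": "res1"}]

singh_l = max(V.values())              # SinghL: value of the best single task
singh_u = sum(V.values())              # SinghU: tasks valued independently
max_u = max(sum(Q[ta][a[ta]] for ta in a)  # maxU: only feasible joint actions
            for a in joint_actions)
```

Because both tasks here prefer res1 but only one can have it, SinghU (which lets every task take its favorite resource independently) overshoots, while maxU, restricted to executable joint actions, is strictly tighter: 1.2 versus 1.5 in this example.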
However, hU(s) still overestimates V(s) because the global resource constraint is not enforced.\nIndeed, each task may use all consumable resources for its own purpose.\nDoing this produces a higher value for each task than the one obtained when planning for all tasks globally with the shared limited resources.\nComputing the maxU bound in a state has a complexity of O(|A| × |T a|), and O(|T a|) for SinghU.\nA standard Bellman backup has a complexity of O(|A| × |S|).\nSince |A| × |T a| ≪ |A| × |S|, the computation time to determine the upper bound of a state, which is done one time for each visited state, is much less than the computation time required to compute a standard Bellman backup for a state, which is usually done many times for each visited state.\nThus, the computation time of the upper bound is negligible.\n2.6 Increasing the Lower Bound The idea to increase SinghL is to allocate the resources a priori among the tasks.\nWhen each task has its own set of resources, each task may be solved independently.\nThe lower bound of state s is hL(s) = Σ_{ta∈T a} Lowta(sta), where Lowta(sta) is a value function for each task ta ∈ T a, such that the resources have been allocated a priori.\nThe a priori allocation of the resources is made using marginal revenue, which is a widely used concept in microeconomics [7], and has recently been used for coordination of a Decentralized mdp [2].\nIn brief, marginal revenue is the extra revenue that an additional unit of product will bring to a firm.\nThus, for a stochastic resource allocation problem, the marginal revenue of a resource is the additional expected value it involves.\nThe marginal revenue of a resource res for a task ta in a state sta is defined as follows: mrta(sta) = max_{ata∈A(sta)} Qta(ata, sta) − max_{ata∈A(sta)} Qta(ata|res ∉ ata, sta)   (5) The concept of marginal revenue of a 
resource is used in Algorithm 4 to allocate the resources a priori among the tasks, which enables us to define the lower bound value of a state. In Line 4 of the algorithm, a value function is computed for all tasks in the environment using a standard lrtdp [4] approach. These value functions, which are also used for the upper bound, are computed considering that each task may use all available resources. Line 5 initializes the value_ta variable, which is the estimated value of each task ta ∈ Ta. At the beginning of the algorithm, no resources are allocated to a specific task, thus value_ta is initialized to 0 for all ta ∈ Ta. Then, in Line 9, a resource type res (consumable or non-consumable) is selected to be allocated. Here, a domain expert may separate all available resources into several types or parts to be allocated. The resources are allocated in order of their specialization: the more efficient a resource is on a small group of tasks, the earlier it is allocated. Allocating the resources in this order improves the quality of the resulting lower bound. Line 12 computes the marginal revenue of a consumable resource res for each task ta ∈ Ta.
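As an illustration of Equation 5, the per-resource marginal revenue can be read off a task's Q-values. The following is a minimal sketch, assuming a hypothetical Q-table in which each action is represented as the set of resources it allocates; the names `marginal_revenue` and `q_values`, the brush/detergent toy actions, and the fallback value of 0 when every action needs the resource are illustrative assumptions, not taken from the paper's implementation:

```python
def marginal_revenue(q_values, state, actions, res):
    """Extra expected value that resource `res` brings in `state` (Eq. 5).

    q_values: dict mapping (action, state) -> Q-value of the task
    actions:  actions available in `state`; each action is a frozenset of
              the resource names it allocates
    res:      the resource whose marginal revenue is measured
    """
    # Best value achievable when every resource may be used.
    best = max(q_values[(a, state)] for a in actions)
    # Best value achievable when no action may use `res`; if every action
    # needs `res`, fall back to 0 (an assumption, not specified in the paper).
    best_without = max((q_values[(a, state)] for a in actions if res not in a),
                       default=0.0)
    return best - best_without

# Toy example with two actions in state "s".
acts = [frozenset({"brush", "detergent"}), frozenset({"brush"})]
q = {(acts[0], "s"): 10.0, (acts[1], "s"): 6.0}
print(marginal_revenue(q, "s", acts, "detergent"))  # 10.0 - 6.0 -> 4.0
```

In Algorithm 4 below, this quantity is then scaled by a residual-value term before the task with the highest adjusted revenue receives the resource.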
For a non-consumable resource, since the resource is not considered in the state space, all other states reachable from s_ta consider that the resource res is still usable. The approach here is to sum, over all states in a trajectory given by the policy of task ta, the difference between the real value of a state and the maximal Q-value of that state if resource res cannot be used. This heuristic has proved to obtain good results, but others may be tried, for example Monte Carlo simulation. In Line 21, the marginal revenue is updated as a function of the resources already allocated to each task. R(s_g_ta) is the reward for realizing task ta. Thus, $(V_{ta}(s_{ta}) - value_{ta}) / R(s_{g_{ta}})$ is the residual expected value that remains to be achieved, given the current allocation to task ta, normalized by the reward of realizing the task. The marginal revenue is multiplied by this term to indicate that the higher the residual value of a task, the higher its marginal revenue. Then, the task ta with the highest marginal revenue, adjusted with residual value, is selected in Line 23. In Line 24, the resource type res is allocated to the group of resources Res_ta of task ta.

Algorithm 4 The marginal revenue lower bound algorithm.
1: Function revenue-bound(S)
2: returns a lower bound Low_Ta
3: for all ta ∈ Ta do
4:   V_ta ← lrtdp(S_ta)
5:   value_ta ← 0
6: end for
7: s ← s0
8: repeat
9:   res ← Select a resource type res ∈ Res
10:  for all ta ∈ Ta do
11:    if res is consumable then
12:      mr_ta(s_ta) ← V_ta(s_ta) − V_ta(s_ta(Res \ res))
13:    else
14:      mr_ta(s_ta) ← 0
15:      repeat
16:        mr_ta(s_ta) ← mr_ta(s_ta) + V_ta(s_ta) − max_{a_ta ∈ A(s_ta) | res ∉ a_ta} Q_ta(a_ta, s_ta)
17:        s_ta ← s_ta.pickNextState(Resc)
18:      until s_ta is a goal
19:      s ← s0
20:    end if
21:    mrrv_ta(s_ta) ← mr_ta(s_ta) × (V_ta(s_ta) − value_ta) / R(s_g_ta)
22:  end for
23:  ta ← the task ta ∈ Ta which maximizes mrrv_ta(s_ta)
24:  Res_ta ← Res_ta ∪ {res}
25:  temp ← ∅
26:  if res is consumable then
27:    temp ← res
28:  end if
29:  value_ta ← value_ta + (V_ta(s_ta) − value_ta) × max_{a_ta ∈ A(s_ta, res)} Q_ta(a_ta, s_ta(temp)) / V_ta(s_ta)
30: until all resource types res ∈ Res are assigned
31: for all ta ∈ Ta do
32:   Low_ta ← lrtdp(S_ta, Res_ta)
33: end for
34: return Low_Ta

Afterwards, Line 29 recomputes value_ta. The first part of the equation to compute value_ta represents the expected residual value for task ta. This term is multiplied by $\max_{a_{ta} \in A(s_{ta})} Q_{ta}(a_{ta}, s_{ta}(res)) / V_{ta}(s_{ta})$, which is the efficiency ratio of resource type res. In other words, value_ta is assigned value_ta + (the residual value × the value ratio of resource type res). For a consumable resource, the Q-value considers only resource res in the state space, while for a non-consumable resource, no resources are available. All resource types are allocated in this manner until Res is empty. All consumable and non-consumable resource types are allocated to each task. When all resources are allocated, the lower bound components Low_ta of each task are computed in Line 32. When the global solution is computed, the lower bound is as follows:

$$h_L(s) = \max\Big(Singh_L,\; \max_{a \in A(s)} \sum_{ta \in Ta} Low_{ta}(s_{ta})\Big) \quad (6)$$

We use the maximum of the SinghL bound and the sum of the lower bound components Low_ta, thus marginal-revenue ≥ SinghL. In particular, the SinghL bound may be higher when few tasks remain, as the components Low_ta are computed considering s0; for example, if in a subsequent state only one task remains, the bound of SinghL will be higher than any of the Low_ta components. The main difference in complexity between SinghL and revenue-bound is in Line 32, where a value for each task has to be computed with the shared resources. However, since the resources are shared, the state and action spaces are greatly
reduced for each task, greatly reducing the computation compared to the value functions computed in Line 4, which is done for both SinghL and revenue-bound.

Theorem 2.2. The lower bound of Equation 6 is admissible.

Proof: Low_ta(s_ta) is computed with the resources being shared. Summing the Low_ta(s_ta) value functions for each ta ∈ Ta does not violate the local and global resource constraints. Indeed, as the resources are shared, the tasks cannot overuse them. Thus, h_L(s) corresponds to a realizable policy, and is an admissible lower bound.

3. DISCUSSION AND EXPERIMENTS

The domain of the experiments is a naval platform which must counter incoming missiles (i.e., tasks) by using its resources (i.e., weapons, movements). For the experiments, 100 random resource allocation problems were generated for each approach and each possible number of tasks. In our problem, |S_ta| = 4, thus each task can be in four distinct states. There are two types of states: states where actions modify the transition probabilities, and goal states. The state transitions are all stochastic because when a missile is in a given state, it may transition to many possible states. In particular, each resource type has a probability between 45% and 65% of countering a missile, depending on the state of the task. When a missile is not countered, it transitions to another state, which may or may not be preferred to the current state; the most preferred state for a task is the one where the missile is countered. The effectiveness of each resource is modified randomly by ±15% at the start of a scenario. There are also local and global resource constraints on the amount that may be used. For the local constraints, at most 1 resource of each type can be allocated to execute tasks in a specific state. This constraint is also present on a real naval platform because of sensor and launcher constraints and engagement policies. Furthermore, for consumable resources, the total amount of available
consumable resource is between 1 and 2 for each type. The global constraint is generated randomly at the start of a scenario for each consumable resource type. The number of resource types has been fixed at 5: 3 consumable resource types and 2 non-consumable resource types. For this problem a standard lrtdp approach has been implemented. A simple heuristic has been used where the value of an unvisited state is assigned the value of a goal state in which all tasks are achieved. This way, the value of each unvisited state is assured to overestimate its real value, since the value of achieving a task ta is the highest the planner may get for ta. Since this heuristic is quite straightforward, the advantages of using better heuristics are more evident. Nevertheless, even if the lrtdp approach uses a simple heuristic, a huge part of the state space is still not visited when computing the optimal policy. The approaches described in this paper are compared in Figures 1 and 2. Let us summarize these approaches here:

• Qdec-lrtdp: The backups are computed using the Qdec-backup function (Algorithm 1), but in an lrtdp context. In particular, the updates made in the checkSolved function are also made using the Qdec-backup function.
• lrtdp-up: The maxU upper bound is used for lrtdp.
• Singh-rtdp: The SinghL and SinghU bounds are used for bounded-rtdp.
• mr-rtdp: The revenue-bound and maxU bounds are used for bounded-rtdp.

To implement Qdec-lrtdp, we divided the set of tasks in two equal parts. The set of tasks Ta_i, managed by agent Ag_i, can be accomplished with the set of resources Res_i, while the second set of tasks Ta_i', managed by agent Ag_i', can be accomplished with the set of resources Res_i'. Res_i had one consumable resource type and one non-consumable resource type, while Res_i' had two consumable resource types and one non-consumable resource type. When the number of tasks is odd, one more task was assigned to Ta_i
. There are constraints between the resource groups Res_i and Res_i' such that some assignments are not possible. These constraints are managed by the arbitrator as described in Section 2.2. Q-decomposition permits diminishing the planning time significantly in our problem settings, and seems a very efficient approach when a group of agents have to allocate resources which are only available to themselves, but the actions made by an agent may influence the reward obtained by at least another agent. To compute the revenue-bound lower bound, all available resources have to be separated into several types or parts to be allocated. For our problem, we allocated each resource of each type in order of its specialization, as stated when describing the revenue-bound function. In terms of experiments, notice that the lrtdp and lrtdp-up approaches for resource allocation, which do not prune the action space, are much more complex. For instance, it took an average of 1512 seconds to plan with the lrtdp-up approach for six tasks (see Figure 2). The Singh-rtdp approach diminished the planning time by using lower and upper bounds to prune the action space. mr-rtdp further reduces the planning time by providing very tight initial bounds. In particular, Singh-rtdp needed 231 seconds on average to solve problems with six tasks, and mr-rtdp required 76 seconds. Indeed, the time reduction is quite significant compared to lrtdp-up, which demonstrates the efficiency of using bounds to prune the action space. Furthermore, we implemented mr-rtdp with the SinghU bound, and this was slightly less efficient than with the maxU bound. We also implemented mr-rtdp with the SinghL bound, and this was slightly more efficient than Singh-rtdp. From these results, we conclude that the difference in efficiency between mr-rtdp and Singh-rtdp is more attributable to the marginal-revenue lower bound than to the maxU upper bound. Indeed, when the number of tasks to execute is high, the
lower bound by Singh-rtdp takes the value of a single task. On the other hand, the lower bound of mr-rtdp takes into account the value of all tasks by using a heuristic to distribute the resources. Indeed, an optimal allocation is one where the resources are distributed in the best way to all tasks, and our lower bound heuristically does that.

Figure 1: Efficiency of Q-decomposition LRTDP and LRTDP (time in seconds, log scale, vs. number of tasks).
Figure 2: Efficiency of MR-RTDP compared to SINGH-RTDP (time in seconds, log scale, vs. number of tasks).

4. CONCLUSION

The experiments have shown that Q-decomposition seems a very efficient approach when a group of agents have to allocate resources which are only available to themselves, but the actions made by an agent may influence the reward obtained by at least another agent. On the other hand, when the available resources are shared, no Q-decomposition is possible and we proposed tight bounds for heuristic search. In this case, the planning time of bounded-rtdp, which prunes the action space, is significantly lower than for lrtdp. Furthermore, the marginal revenue bound proposed in this paper compares favorably to the Singh and Cohn [10] approach. bounded-rtdp with our proposed bounds may apply to a wide range of stochastic environments. The only condition for the use of our bounds is that each task possesses consumable and/or non-consumable limited resources. An interesting research avenue would be to experiment with our bounds in other heuristic search algorithms. For instance, frtdp [11] and brtdp [6] are both efficient heuristic search algorithms. In particular, both approaches proposed efficient state trajectory updates when given upper and lower bounds. Our tight bounds would
enable both frtdp and brtdp to reduce the number of backups to perform before convergence. Finally, the bounded-rtdp function prunes the action space when QU(a, s) ≤ L(s), as Singh and Cohn [10] suggested. frtdp and brtdp could also prune the action space in these circumstances to further reduce their planning time.

5. REFERENCES

[1] A. Barto, S. Bradtke, and S. Singh. Learning to act using real-time dynamic programming. Artificial Intelligence, 72(1):81-138, 1995.
[2] A. Beynier and A. I. Mouaddib. An iterative algorithm for solving constrained decentralized Markov decision processes. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI-06), 2006.
[3] B. Bonet and H. Geffner. Faster heuristic search algorithms for planning with uncertainty and full feedback. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-03), August 2003.
[4] B. Bonet and H. Geffner. Labeled RTDP: Improving the convergence of real-time dynamic programming. In Proceedings of the Thirteenth International Conference on Automated Planning & Scheduling (ICAPS-03), pages 12-21, Trento, Italy, 2003.
[5] E. A. Hansen and S. Zilberstein. LAO*: A heuristic search algorithm that finds solutions with loops. Artificial Intelligence, 129(1-2):35-62, 2001.
[6] H. B. McMahan, M. Likhachev, and G. J. Gordon. Bounded real-time dynamic programming: RTDP with monotone upper bounds and performance guarantees. In ICML '05: Proceedings of the Twenty-Second International Conference on Machine Learning, pages 569-576, New York, NY, USA, 2005. ACM Press.
[7] R. S. Pindyck and D. L. Rubinfeld. Microeconomics. Prentice Hall, 2000.
[8] G. A. Rummery and M. Niranjan. On-line Q-learning using connectionist systems. Technical Report CUED/F-INFENG/TR 166, Cambridge University Engineering Department, 1994.
[9] S. J. Russell and A.
Zimdars. Q-decomposition for reinforcement learning agents. In ICML, pages 656-663, 2003.
[10] S. Singh and D. Cohn. How to dynamically merge Markov decision processes. In Advances in Neural Information Processing Systems, volume 10, pages 1057-1063, Cambridge, MA, USA, 1998. MIT Press.
[11] T. Smith and R. Simmons. Focused real-time dynamic programming for MDPs: Squeezing more out of a heuristic. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI), Boston, USA, 2006.
[12] W. Zhang. Modeling and solving a resource allocation problem with soft constraint techniques. Technical Report WUCS-2002-13, Washington University, Saint Louis, Missouri, 2002.

A Q-decomposition and Bounded RTDP Approach to Resource Allocation

ABSTRACT

This paper contributes to solving effectively stochastic resource allocation problems, which are known to be NP-Complete. To address this complex resource management problem, a Q-decomposition approach is proposed for the case where the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least another agent. The Q-decomposition allows us to coordinate these reward-separated agents and thus permits reducing the set of states and actions to consider. On the other hand, when the resources are available to all agents, no Q-decomposition is possible and we use heuristic search. In
particular, bounded Real-time Dynamic Programming (bounded RTDP) is used. Bounded RTDP concentrates the planning on significant states only and prunes the action space. The pruning is accomplished by proposing tight upper and lower bounds on the value function.

1. INTRODUCTION

This paper aims to contribute to solving complex stochastic resource allocation problems. In general, resource allocation problems are known to be NP-Complete [12]. In such problems, a scheduling process suggests the action (i.e., resources to allocate) to undertake to accomplish certain tasks, according to the perfectly observable state of the environment. When executing an action to realize a set of tasks, the stochastic nature of the problem induces probabilities on the next visited state. In general, the number of states is the combination of all possible specific states of each task and the available resources. In this case, the number of possible actions in a state is the combination of each individually possible resource assignment to the tasks. The very high number of states and actions in this type of problem makes it very complex. There can be many types of resource allocation problems. Firstly, if the resources are already shared among the agents, and the actions made by an agent do not influence the state of another agent, the globally optimal policy can be computed by planning separately for each agent. A second type of resource allocation problem is one where the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least another agent. To solve this problem efficiently, we adapt the Q-decomposition proposed by Russell and Zimdars [9]. In our Q-decomposition approach, a planning agent manages each task and all agents have to share the limited resources. The planning process starts with the initial state s0. In s0, each agent computes its respective Q-value. Then, the
planning agents are coordinated through an arbitrator to find the highest global Q-value by adding the respective possible Q-values of each agent. When implemented with heuristic search, since the number of states and actions to consider when computing the optimal policy is exponentially reduced compared to other known approaches, Q-decomposition allows us to formulate the first optimal decomposed heuristic search algorithm in a stochastic environment. On the other hand, when the resources are available to all agents, no Q-decomposition is possible. A common way of addressing this large stochastic problem is by using Markov Decision Processes (MDPs), and in particular real-time search, for which many algorithms have been developed recently. For instance, Real-Time Dynamic Programming (RTDP) [1], LRTDP [4], HDP [3], and LAO* [5] are all state-of-the-art heuristic search approaches for stochastic environments. Because of its anytime quality, an interesting approach is RTDP, introduced by Barto et al. [1], which updates states in trajectories from an initial state s0 to a goal state sg. RTDP is used in this paper to solve a constrained resource allocation problem efficiently. RTDP is much more effective if the action space can be pruned of sub-optimal actions. To do this, McMahan et al. [6], Smith and Simmons [11], and Singh and Cohn [10] proposed solving a stochastic problem using an RTDP-type heuristic search with upper and lower bounds on the values of states. McMahan et al.
[6] and Smith and Simmons [11] suggested, in particular, an efficient trajectory of state updates to further speed up the convergence, when given upper and lower bounds. This efficient trajectory of state updates can be combined with the approach proposed here, since this paper focuses on the definition of tight bounds and efficient state updates for a constrained resource allocation problem. On the other hand, the approach by Singh and Cohn is suitable to our case, and is extended in this paper using, in particular, the concept of marginal revenue [7] to elaborate tight bounds. This paper proposes new algorithms to define upper and lower bounds in the context of an RTDP heuristic search approach. Our marginal revenue bounds are compared theoretically and empirically to the bounds proposed by Singh and Cohn. Also, even if the algorithm used here to obtain the optimal policy is RTDP, our bounds can be used with any other algorithm for solving an MDP. The only condition on the use of our bounds is to be in the context of stochastic constrained resource allocation. The problem is now modelled.

2. PROBLEM FORMULATION
A simple resource allocation problem is one where there are the following two tasks to realize: ta1 = {wash the dishes} and ta2 = {clean the floor}. These two tasks are either in the realized state or the not-realized state. To realize the tasks, two types of resources are assumed: res1 = {brush} and res2 = {detergent}. A computer has to compute the optimal allocation of these resources to cleaner robots to realize their tasks. In this problem, a state represents a conjunction of the particular state of each task and the available resources. The resources may be constrained by the amount that may be used simultaneously (local constraint) and in total (global constraint). Furthermore, the higher the number of resources allocated to realize a task, the higher the expectation of realizing the task. For this reason, when the specific states of the
tasks change, or when the number of available resources changes, the value of this state may change. When executing an action a in state s, the specific states of the tasks change stochastically, and the remaining resources are determined from the resources available in s, minus the resources used by action a, if the resource is consumable. Indeed, our model may consider consumable and non-consumable resource types. A consumable resource type is one where the amount of available resource is decreased when it is used. On the other hand, a non-consumable resource type is one where the amount of available resource is unchanged when it is used. For example, a brush is a non-consumable resource, while the detergent is a consumable resource.

2.1 Resource Allocation as an MDP
In our problem, the transition function and the reward function are both known. A Markov Decision Process (MDP) framework is used to model our stochastic resource allocation problem. MDPs have been widely adopted by researchers today to model stochastic processes. This is due to the fact that MDPs provide a well-studied and simple, yet very expressive, model of the world. An MDP in the context of a resource allocation problem with limited resources is defined as a tuple (Res, Ta, S, A, P, W, R), where:
• Res = (res_1, ..., res_{|Res|}) is a finite set of resource types available for a planning process. Each resource type may have a local resource constraint L_res on the number that may be used in a single step, and a global resource constraint G_res on the number that may be used in total. The global constraint only applies to consumable resource types (Res_c), while the local constraints apply to both consumable and non-consumable resource types.
• Ta is a finite set of tasks, with ta ∈ Ta, to be accomplished.
• S is a finite set of states, with s ∈ S. A state s is a tuple (Ta, (res_1, ..., res_{|Res_c|})), which gives the characteristics of each unaccomplished task ta
∈ Ta in the environment, and the available consumable resources. s_ta is the specific state of task ta. Also, S contains a non-empty set of goal states s_g ⊆ S. A goal state is a sink state where an agent stays forever.
• A is a finite set of actions (or assignments). The actions a ∈ A(s) applicable in a state are the combinations of all resource assignments that may be executed, according to the state s. In particular, a is simply an allocation of resources to the current tasks, and a_ta is the resource allocation to task ta. The possible actions are limited by L_res and G_res.
• Transition probabilities P_a(s'|s) for s ∈ S and a ∈ A(s).
• W = [w_ta] is the relative weight (criticality) of each task.
• State rewards R = [r_s], where r_s = Σ_{ta ∈ Ta} r_{s_ta} and r_{s_ta} ← R_{s_ta} × w_ta. The relative reward r_{s_ta} of the state of a task is the product of a real number R_{s_ta} and the weight factor w_ta. For our problem, a reward of 1 × w_ta is given when the state of a task (s_ta) is in an achieved state, and 0 in all other cases.
• A discount (preference) factor γ, which is a real number between 0 and 1.
A solution of an MDP is a policy π mapping states s into actions a ∈ A(s). In particular, π_ta(s) is the action (i.e., the resources to allocate) that should be executed on task ta, considering the global state s. In this case, an optimal policy is one that maximizes the expected total reward for accomplishing all tasks. The optimal value of a state, V(s), is given by:

V(s) = max_{a ∈ A(s)} [ R(s) + γ Σ_{s' ∈ S} P_a(s'|s) V(s') ]    (1)

where the remaining consumable resources in state s' are Res_c \ res(a), where res(a) are the consumable resources used by action a. Indeed, since an action a is a resource assignment, Res_c \ res(a) is the new set of available resources after the execution of action a.
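The backup behind this optimal-value definition can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the state encoding, the transition function `P`, and the reward function `R` below are illustrative stand-ins, and in the resource-allocation MDP the successor states returned by `P` would already encode the reduced resource set Res_c \ res(a).

```python
def bellman_backup(s, actions, P, R, V, gamma=0.95):
    """One Bellman backup: V(s) = max_{a in A(s)} [R(s) + gamma * E[V(s')]].

    actions(s) -> iterable of applicable actions,
    P(s, a)    -> dict mapping successor state -> probability,
    R(s)       -> immediate state reward,
    V          -> dict of current value estimates (missing states count as 0).
    Returns the backed-up value and the maximizing action.
    """
    best_q, best_a = float("-inf"), None
    for a in actions(s):
        q = R(s) + gamma * sum(p * V.get(s2, 0.0) for s2, p in P(s, a).items())
        if q > best_q:
            best_q, best_a = q, a
    return best_q, best_a
```

Repeating this backup over all states until values stabilize is value iteration; RTDP, used later in the paper, instead applies it only along simulated trajectories from s0.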
Furthermore, one may compute the Q-value Q(a, s) of each state-action pair using:

Q(a, s) = R(s) + γ Σ_{s' ∈ S} P_a(s'|s) V(s')    (2)

where the optimal value of a state is V*(s) = max_{a ∈ A(s)} Q(a, s). The policy is subject to the local resource constraints res(π(s)) ≤ L_res, ∀ s ∈ S and ∀ res ∈ Res. The global constraint is defined according to all system trajectories tra ∈ TRA. A system trajectory tra is a possible sequence of state-action pairs, until a goal state is reached under the optimal policy π. For example, state s is entered, which may transit to s' or to s'', according to action a. The two possible system trajectories are ⟨(s, a), (s')⟩ and ⟨(s, a), (s'')⟩. The global resource constraint is res(tra) ≤ G_res, ∀ tra ∈ TRA and ∀ res ∈ Res_c, where res(tra) is a function which returns the resources used by trajectory tra. Since the available consumable resources are represented in the state space, this condition is verified by itself. In other words, the model is Markovian, as the history does not have to be considered in the state space. Furthermore, time is not considered in the model description, but a time horizon may also be included by using a finite-horizon MDP. Since resource allocation in a stochastic environment is NP-Complete, heuristics should be employed. Q-decomposition, which decomposes a planning problem among many agents to reduce the computational complexity associated with the state and/or action spaces, is now introduced.

2.2 Q-decomposition for Resource Allocation
There can be many types of resource allocation problems. Firstly, if the resources are already shared among the agents, and the actions made by an agent do not influence the state of another agent, the globally optimal policy can be computed by planning separately for each agent. A second type of resource allocation problem is one where the resources are already shared among the agents, but the actions made by an agent may influence the reward obtained by at least one other agent. For instance, a
group of agents which manages the oil consumed by a country falls into this group. These agents desire to maximize their specific reward by consuming the right amount of oil. However, all the agents are penalized when an agent consumes oil because of the pollution it generates. Another example of this type comes from our problem of interest, explained in Section 3, which is a naval platform that must counter incoming missiles (i.e., tasks) by using its resources (i.e., weapons, movements). In some scenarios, it may happen that the missiles can be classified into two types: those requiring a set of resources Res_1 and those requiring a set of resources Res_2. This can happen depending on the type of missiles, their range, and so on. In this case, two agents can plan for both sets of tasks to determine the policy. However, there are interactions between the resources of Res_1 and Res_2, so that certain combinations of resources cannot be assigned. In particular, if an agent i allocates resources Res_i to the first set of tasks Ta_i, and agent i' allocates resources Res_i' to the second set of tasks Ta_i', the resulting policy may include actions which cannot be executed together. To resolve these conflicts, we use the Q-decomposition proposed by Russell and Zimdars [9] in the context of reinforcement learning. The primary assumption underlying Q-decomposition is that the overall reward function R can be additively decomposed into separate rewards R_i for each distinct agent i ∈ Ag, where |Ag| is the number of agents. That is, R = Σ_{i ∈ Ag} R_i. It requires each agent to compute a value, from its perspective, for every action. To coordinate with each other, each agent i reports its action values Q_i(a_i, s_i) for each state s_i ∈ S_i to an arbitrator at each learning iteration. The arbitrator then chooses an action maximizing the sum of the agent Q-values for each global state s ∈ S. The next time state s is updated, an agent i considers the value as its respective contribution,
or Q-value, to the global maximal Q-value. That is, Q_i(a_i, s_i) is the value for agent i of a state such that the joint action maximizes max_{a ∈ A(s)} Σ_{i ∈ Ag} Q_i(a_i, s_i). The fact that the agents use a determined Q-value as the value of a state is an extension of the Sarsa on-policy algorithm [8] to Q-decomposition. Russell and Zimdars called this approach local Sarsa. In this way, an ideal compromise can be found for the agents to reach a global optimum. Indeed, rather than allowing each agent to choose the successor action, each agent i uses the action a'_i executed by the arbitrator in the successor state s'_i:

Q_i(a_i, s_i) = R_i(s_i) + γ Σ_{s'_i ∈ S_i} P_{a_i}(s'_i|s_i) Q_i(a'_i, s'_i)    (3)

where the remaining consumable resources in state s'_i are Res_{c_i} \ res_i(a_i) for a resource allocation problem. Russell and Zimdars [9] demonstrated that local Sarsa converges to the optimum. Also, in some cases, this form of agent decomposition allows the local Q-functions to be expressed by a much reduced state and action space. For our resource allocation problem described briefly in this section, Q-decomposition can be applied to generate an optimal solution. Indeed, an optimal Bellman backup can be applied in a state as in Algorithm 1. In Line 5 of the QDEC-BACKUP function, each agent managing a task computes its respective Q-value. Here, Q*_i(a'_i, s') determines the optimal Q-value of agent i in state s'. An agent i uses as the value of a possible state transition s' the Q-value for this agent which determines the maximal global Q-value for state s', as in the original Q-decomposition approach. In brief, for each visited state s ∈ S, each agent computes its respective Q-values with respect to the global state s. So the state space is the joint state space of all agents. Some of the gain in complexity from using Q-decomposition resides in the Σ_{s'_i ∈ S_i} P_{a_i}(s'_i|s) part of the equation. An agent considers as possible state transitions s'_i ∈ S_i only the possible states of the set of tasks it manages. Since the number of states is
exponential in the number of tasks, using Q-decomposition should reduce the planning time significantly. Furthermore, the action space of each agent takes into account only its available resources, which is much less complex than the standard action space, which is the combination of all possible resource allocations in a state for all agents. Then, the arbitrator functionalities are in Lines 8 to 20. The global Q-value is the sum of the Q-values produced by each agent managing each task, as shown in Line 11, considering the global action a. In this case, when an action of an agent i cannot be executed simultaneously with an action of another agent i', the global action is simply discarded from the action space A(s). Line 14 simply updates the current value with respect to the highest global Q-value, as in a standard Bellman backup. Then, the optimal policy and Q-value of each agent are updated in Lines 16 and 17 to the sub-actions a_i and specific Q-values Q_i(a_i, s) of each agent for action a.

1214 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
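The arbitrator step just described can be sketched as follows. This is a minimal illustration, assuming each agent has already reported its local Q-values for the current state; the `feasible` predicate is a hypothetical stand-in for the inter-agent resource-conflict check that discards joint actions which cannot be executed together.

```python
from itertools import product

def qdec_arbitrate(agent_qs, feasible=lambda joint: True):
    """Arbitrator step of a Q-decomposition backup.

    agent_qs: list with one dict per agent, mapping local action -> Q_i(a_i, s).
    feasible: predicate rejecting joint actions with conflicting assignments.
    Returns (V(s), best joint action, per-agent Q contributions).
    """
    best_v, best_joint = float("-inf"), None
    for joint in product(*(q.keys() for q in agent_qs)):
        if not feasible(joint):
            continue  # conflicting resource assignments are dropped from A(s)
        v = sum(q[a_i] for q, a_i in zip(agent_qs, joint))
        if v > best_v:
            best_v, best_joint = v, joint
    # each agent records its own contribution to the maximal global Q-value
    contribs = [q[a_i] for q, a_i in zip(agent_qs, best_joint)]
    return best_v, best_joint, contribs
```

Note that the arbitrator only scans joint actions and sums precomputed numbers; the expensive expectation over successor states is done once per agent over its own, much smaller, state space.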
Algorithm 1 The Q-decomposition Bellman backup.
1: Function QDEC-BACKUP(s)
2: V(s) ← 0
3: for all i ∈ Ag do
4:   for all a_i ∈ A_i(s) do
5:     Q_i(a_i, s) ← R_i(s) + γ Σ_{s'_i ∈ S_i} P_{a_i}(s'_i|s) Q*_i(a'_i, s')
       {where Q*_i(a'_i, s') = h_i(s') when s' has not yet been visited, and s' has Res_{c_i} \ res_i(a_i) remaining consumable resources for each agent i}
6:   end for
7: end for
8: for all a ∈ A(s) do
9:   Q(a, s) ← 0
10:  for all i ∈ Ag do
11:    Q(a, s) ← Q(a, s) + Q_i(a_i, s)
12:  end for
13:  if Q(a, s) > V(s) then
14:    V(s) ← Q(a, s)
15:    for all i ∈ Ag do
16:      π_i(s) ← a_i
17:      Q*_i(a_i, s) ← Q_i(a_i, s)
18:    end for
19:  end if
20: end for

A standard Bellman backup has a complexity of O(|A| × |S_Ag|), where |S_Ag| is the number of joint states for all agents, excluding the resources, and |A| is the number of joint actions. On the other hand, the Q-decomposition Bellman backup has a complexity of O((|Ag| × |A_i| × |S_i|) + (|A| × |Ag|)), where |S_i| is the number of states for an agent i, excluding the resources, and |A_i| is the number of actions for an agent i. Since |S_Ag| is combinatorial in the number of tasks, |S_i| ≪ |S|. Also, |A| is combinatorial in the number of resource types. If the resources are already shared among the agents, the number of resource types for each agent will usually be lower than the set of all available resource types for all agents. In these circumstances, |A_i| ≪ |A|. In a standard Bellman backup, |A| is multiplied by |S_Ag|, which is much more complex than multiplying |A| by |Ag| with the Q-decomposition Bellman backup. Thus, the Q-decomposition Bellman backup is much less complex than a standard Bellman backup. Furthermore, the communication cost between the agents and the arbitrator is null, since this approach does not consider a geographically separated problem. However, when the resources are available to all agents, no Q-decomposition is possible. In this case, Bounded Real-Time Dynamic Programming (BOUNDED-RTDP)
makes it possible to focus the search on relevant states, and to prune the action space A by using lower and upper bounds on the value of states. BOUNDED-RTDP is now introduced.

2.3 Bounded-RTDP
Bonet and Geffner [4] proposed LRTDP as an improvement to RTDP [1]. LRTDP is a simple dynamic programming algorithm that involves a sequence of trial runs, each starting in the initial state s0 and ending in a goal or a solved state. Each LRTDP trial is the result of simulating the policy π while updating the values V(s) using a Bellman backup (Equation 1) over the states s that are visited. h(s) is a heuristic which defines an initial value for state s. This heuristic has to be admissible: the value given by the heuristic has to overestimate (or underestimate) the optimal value V*(s) when the objective function is maximized (or minimized). For example, an admissible heuristic for a stochastic shortest-path problem is the solution of the corresponding deterministic shortest-path problem. Indeed, since the problem is stochastic, the optimal value is lower than for the deterministic version. It has been proven that LRTDP, given an admissible initial heuristic on the value of states, cannot be trapped in loops, and eventually yields optimal values [4]. The convergence is accomplished by means of a labeling procedure called CHECKSOLVED(s, ε). This procedure tries to label as solved each traversed state in the current trajectory. When the initial state is labelled as solved, the algorithm has converged. In this section, a bounded version of RTDP (BOUNDED-RTDP) is presented in Algorithm 2 to prune the action space of sub-optimal actions. This pruning enables speeding up the convergence of LRTDP. BOUNDED-RTDP is similar to RTDP, except that there are two distinct initial heuristics for unvisited states s ∈ S: h_L(s) and h_U(s). Also, the CHECKSOLVED(s, ε) procedure can be omitted because the bounds can provide the labeling of a state as solved. On the one hand, h_L(s) defines a lower bound
on the value of s, such that the optimal value of s is higher than h_L(s). For its part, h_U(s) defines an upper bound on the value of s, such that the optimal value of s is lower than h_U(s). The values of the bounds are computed in Lines 3 and 4 of the BOUNDED-BACKUP function. These two Q-values are computed simultaneously, as the state transitions are the same for both; only the values of the state transitions change. Thus, having to compute two Q-values instead of one does not augment the complexity of the approach. In fact, Smith and Simmons [11] state that the additional time to compute a Bellman backup for two bounds, instead of one, is no more than 10%, which is also what we obtained. In particular, L(s) is the lower bound of state s, while U(s) is the upper bound of state s. Similarly, Q_L(a, s) is the Q-value of the lower bound of action a in state s, while Q_U(a, s) is the Q-value of the upper bound of action a in state s. Using these two bounds allows a significant reduction of the action space A.
Indeed, in Lines 5 and 6 of the BOUNDED-BACKUP function, if Q_U(a, s) ≤ L(s), then action a may be pruned from the action space of s. In Line 13 of this function, a state can be labeled as solved if the difference between the lower and upper bounds is lower than ε. When the execution goes back to the BOUNDED-RTDP function, the next state in Line 10 has a fixed number of consumable resources available Res_c, determined in Line 9. In brief, PICKNEXTSTATE(Res_c) selects an unsolved state s reachable under the current policy which has the highest Bellman error (|U(s) − L(s)|). Finally, in Lines 12 to 15, a backup is made in a backward fashion on all visited states of a trajectory, once this trajectory has been completed. This strategy has been proven efficient [11] [6]. As discussed by Singh and Cohn [10], this type of algorithm has a number of desirable anytime characteristics: if an action has to be picked in state s before the algorithm has converged (while multiple competitive actions remain), the action with the highest lower bound is picked. Since the upper bound for state s is known, it may be estimated how far the lower bound is from the optimal. If the difference between the lower and upper bound is too high, one can choose to use another greedy algorithm of one's choice, which outputs a fast and near-optimal solution. Furthermore, if a new task dynamically arrives in the environment, it can be accommodated by redefining the lower and upper bounds which exist at the time of its arrival. Singh and Cohn [10] proved that an algorithm that uses admissible lower and upper bounds to prune the action space is assured of converging to an optimal solution.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1215

Algorithm 2 The BOUNDED-RTDP algorithm. Adapted from [4] and [10].
1: Function BOUNDED-RTDP(S)
2: returns a value function V
3: repeat
4:   s ← s0
5:   visited ← null
6:   repeat
7:     visited.push(s)
8:     BOUNDED-BACKUP(s)
9:     Res_c ← Res_c \ res(π(s))
10:    s ← s.PICKNEXTSTATE(Res_c)
11:  until s is a goal
12:  while visited ≠ null do
13:    s ← visited.pop()
14:    BOUNDED-BACKUP(s)
15:  end while
16: until s0 is solved
17: return V

1: Function BOUNDED-BACKUP(s)
2: for all a ∈ A(s) do
3:   Q_L(a, s) ← R(s) + γ Σ_{s' ∈ S} P_a(s'|s) L(s')
4:   Q_U(a, s) ← R(s) + γ Σ_{s' ∈ S} P_a(s'|s) U(s')
     {where L(s') = h_L(s') and U(s') = h_U(s') when s' has not yet been visited}
5:   if Q_U(a, s) ≤ L(s) then
6:     A(s) ← A(s) \ {a}
7:   end if
8: end for
9: L(s) ← max_{a ∈ A(s)} Q_L(a, s)
10: U(s) ← max_{a ∈ A(s)} Q_U(a, s)
11: π(s) ← arg max_{a ∈ A(s)} Q_L(a, s)
12: if |U(s) − L(s)| < ε then
13:   s ← solved
14: end if

The next sections describe two separate methods to define h_L(s) and h_U(s). First of all, the method of Singh and Cohn [10] is briefly described. Then, our own method proposes tighter bounds, thus allowing a more effective pruning of the action space.

2.4 Singh and Cohn's Bounds
Singh and Cohn [10] defined lower and upper bounds to prune the action space. Their approach is quite straightforward. First of all, a value function is computed for all tasks to realize, using a standard RTDP approach. Then, using these task-value functions, a lower bound h_L and an upper bound h_U can be defined. In particular, h_L(s) = max_{ta ∈ Ta} V_ta(s_ta) and h_U(s) = Σ_{ta ∈ Ta} V_ta(s_ta). For readability, the upper bound by Singh and Cohn is named SINGHU, and the lower
bound is named SINGHL. The admissibility of these bounds has been proven by Singh and Cohn, such that the upper bound always overestimates the optimal value of each state, while the lower bound always underestimates the optimal value of each state. To determine the optimal policy π, Singh and Cohn implemented an algorithm very similar to BOUNDED-RTDP, which uses the bounds to initialize L(s) and U(s). The only difference between BOUNDED-RTDP and the RTDP version of Singh and Cohn is in the stopping criterion. Singh and Cohn proposed that the algorithm terminates when only one competitive action remains for each state, or when the range of all competitive actions for any state is bounded by an indifference parameter ε. BOUNDED-RTDP labels states for which |U(s) − L(s)| < ε as solved, and the convergence is reached when s0 is solved or when only one competitive action remains for each state. This stopping criterion is more effective, since it is similar to the one used by Smith and Simmons [11] and by McMahan et al. [6] for BRTDP. In this paper, the bounds defined by Singh and Cohn, implemented using BOUNDED-RTDP, define the SINGH-RTDP approach. The next sections propose to tighten the bounds of SINGH-RTDP to permit a more effective pruning of the action space.

2.5 Reducing the Upper Bound
SINGHU includes actions which may not be possible to execute because of resource constraints, which overestimates the upper bound. To consider only possible actions, our upper bound, named MAXU, is introduced:

h_U(s) = max_{a ∈ A(s)} Σ_{ta ∈ Ta} Q_ta(a_ta, s_ta)    (4)

where Q_ta(a_ta, s_ta) is the Q-value of task ta for state s_ta and action a_ta, computed using a standard LRTDP approach.

THEOREM 2.1. The upper bound defined by Equation 4 is admissible.
Proof: The local resource constraints are satisfied because the upper bound is computed using all globally possible actions a.
However, h_U(s) still overestimates V*(s) because the global resource constraint is not enforced. Indeed, each task may use all consumable resources for its own purpose. Doing this produces a higher value for each task than the one obtained when planning for all tasks globally with the shared limited resources. ■

Computing the MAXU bound in a state has a complexity of O(|A| × |Ta|), and O(|Ta|) for SINGHU. A standard Bellman backup has a complexity of O(|A| × |S|). Since |A| × |Ta| ≪ |A| × |S|, the computation time to determine the upper bound of a state, which is done once for each visited state, is much less than the computation time required to compute a standard Bellman backup for a state, which is usually done many times for each visited state. Thus, the computation time of the upper bound is negligible.

2.6 Increasing the Lower Bound
The idea for increasing SINGHL is to allocate the resources a priori among the tasks. When each task has its own set of resources, each task may be solved independently. The lower bound of state s is h_L(s) = Σ_{ta ∈ Ta} Low_ta(s_ta), where Low_ta(s_ta) is a value function for each task ta ∈ Ta, computed with the resources having been allocated a priori. The a priori allocation of the resources is made using marginal revenue, which is a widely used concept in microeconomics [7], and has recently been used for the coordination of a Decentralized MDP [2]. In brief, marginal revenue is the extra revenue that an additional unit of product will bring to a firm. Thus, for a stochastic resource allocation problem, the marginal revenue of a resource is the additional expected value it brings. The marginal revenue of a resource res for a task ta in a state s_ta is defined as follows:

mr_ta(s_ta) = V_ta(s_ta) − V_ta(s_ta(Res \ res))    (5)

The concept of marginal revenue of a resource is used in Algorithm 4 to allocate the resources
a priori among the tasks, which enables defining the lower bound value of a state. In Line 4 of the algorithm, a value function is computed for all tasks in the environment using a standard LRTDP [4] approach. These value functions, which are also used for the upper bound, are computed considering that each task may use all available resources. Line 5 initializes the value_ta variable. This variable is the estimated value of each task ta ∈ Ta. In the beginning of the algorithm, no resources are allocated to a specific task, thus the value_ta variable is initialized to 0 for all ta ∈ Ta. Then, in Line 9, a resource type res (consumable or non-consumable) is selected to be allocated. Here, a domain expert may separate all available resources into many types or parts to be allocated. The resources are allocated in the order of their specialization: in other words, the more a resource is efficient on a small group of tasks, the earlier it is allocated. Allocating the resources in this order improves the quality of the resulting lower bound. Line 12 computes the marginal revenue of a consumable resource res for each task ta ∈ Ta. For a non-consumable resource, since the resource is not considered in the state space, all other reachable states from s_ta consider that the resource res is still usable. The approach here is to sum the difference between the real value of a state and the maximal Q-value of this state if resource res cannot be used, for all states in a trajectory given by the policy of task ta. This heuristic proved to obtain good results, but other ones may be tried, for example Monte-Carlo simulation. In Line 21, the marginal revenue is updated as a function of the resources already allocated to each task. R(s_{g_ta}) is the reward for realizing task ta. Thus, (V_ta(s_ta) − value_ta)/R(s_{g_ta}) is the residual expected value that remains to be achieved, knowing the current allocation to task ta, normalized by the reward of realizing the
tasks. The marginal revenue is multiplied by this term to indicate that the higher the residual value of a task, the higher its marginal revenue is going to be. Then, a task ta is selected in Line 23 with the highest marginal revenue, adjusted with the residual value. In Line 24, the resource type res is allocated to the group of resources Res_ta of task ta. Afterwards, Line 29 recomputes value_ta. The first part of the equation to compute value_ta represents the expected residual value for task ta; the second part is the ratio of the efficiency of resource type res. In other words, value_ta is assigned value_ta + (the residual value × the value ratio of resource type res). For a consumable resource, the Q-value considers only resource res in the state space, while for a non-consumable resource, no resources are available. All resource types are allocated in this manner until Res is empty. All consumable and non-consumable resource types are allocated to each task. When all resources are allocated, the lower bound components Low_ta of each task are computed in Line 32.

Algorithm 4 The marginal revenue lower bound algorithm.
1: Function REVENUE-BOUND(S)
2: returns a lower bound Low_Ta
3: for all ta ∈ Ta do
4:   V_ta ← LRTDP(S_ta)
5:   value_ta ← 0
6: end for
7: s ← s0
8: repeat
9:   res ← Select a resource type res ∈ Res
10:  for all ta ∈ Ta do
11:    if res is consumable then
12:      mr_ta(s_ta) ← V_ta(s_ta) − V_ta(s_ta(Res \ res))
30: until all resource types res ∈ Res are assigned
31: for all ta ∈ Ta do
32:   Low_ta ← LRTDP(S_ta, Res_ta)
33: end for
34: return Low_Ta

When the global solution is computed, the lower bound is as follows:

h_L(s) = max( max_{ta ∈ Ta} V_ta(s_ta), Σ_{ta ∈ Ta} Low_ta(s_ta) )    (6)

We use the maximum of the SINGHL bound and the sum of the lower bound components Low_ta; thus, MARGINAL-REVENUE ≥ SINGHL. In particular, the SINGHL bound may
be higher when a small number of tasks remain. As the components Low_ta are computed considering s0, if, for example, only one task remains in a subsequent state, the SINGHL bound will be higher than any of the Low_ta components. The main difference in complexity between SINGHL and REVENUE-BOUND is in Line 32, where a value for each task has to be computed with the shared resources. However, since the resources are shared, the state space and action space are greatly reduced for each task, which greatly reduces the computation compared to the value functions computed in Line 4, which is done for both SINGHL and REVENUE-BOUND.

THEOREM 2.2. The lower bound of Equation 6 is admissible.
Proof: Low_ta(s_ta) is computed with the resources being shared. Summing the Low_ta(s_ta) value functions for each ta ∈ Ta does not violate the local and global resource constraints. Indeed, as the resources are shared, the tasks cannot overuse them. Thus, h_L(s) is a realizable policy, and an admissible lower bound. ■

3. DISCUSSION AND EXPERIMENTS
The domain of the experiments is a naval platform which must counter incoming missiles (i.e., tasks) by using its resources (i.e.,
weapons, movements). For the experiments, 100 random resource allocation problems were generated for each approach and each possible number of tasks. In our problem, |S_ta| = 4, thus each task can be in four distinct states. There are two types of states: firstly, states where actions modify the transition probabilities; and then, goal states. The state transitions are all stochastic because, when a missile is in a given state, it may always transit to many possible states. In particular, each resource type has a probability of countering a missile between 45% and 65%, depending on the state of the task. When a missile is not countered, it transits to another state, which may or may not be preferred to the current state, where the most preferred state for a task is the one where it is countered. The effectiveness of each resource is modified randomly by ±15% at the start of a scenario. There are also local and global resource constraints on the amounts that may be used. For the local constraints, at most 1 resource of each type can be allocated to execute tasks in a specific state. This constraint is also present on a real naval platform because of sensor and launcher constraints and engagement policies. Furthermore, for consumable resources, the total amount of available consumable resource is between 1 and 2 for each type. The global constraint is generated randomly at the start of a scenario for each consumable resource type. The number of resource types has been fixed at 5, of which 3 are consumable resource types and 2 are non-consumable resource types. For this problem, a standard LRTDP approach has been implemented. A simple heuristic has been used where the value of an unvisited state is assigned the value of a goal state in which all tasks are achieved. This way, the value of each unvisited state is assured to overestimate its real value, since the value of achieving a task ta is the highest the planner may get for ta. Since this heuristic is
pretty straightforward, the advantages of using better heuristics are more evident.\nNevertheless, even if the LRTDP approach uses a simple heuristic, a huge part of the state space is still not visited when computing the optimal policy.\nThe approaches described in this paper are compared in Figures 1 and 2.\nLet us summarize these approaches here:\n\u2022 QDEC-LRTDP: The backups are computed using the QDEC-BACKUP function (Algorithm 1), but in an LRTDP context.\nIn particular, the updates made in the CHECKSOLVED function are also made using the QDEC-BACKUP function.\n\u2022 LRTDP-UP: The upper bound of MAXU is used for LRTDP.\n\u2022 SINGH-RTDP: The SINGHL and SINGHU bounds are used for BOUNDED-RTDP.\n\u2022 MR-RTDP: The REVENUE-BOUND and MAXU bounds are used for BOUNDED-RTDP.\nTo implement QDEC-LRTDP, we divided the set of tasks into two equal parts.\nThe set of tasks $Ta_i$, managed by agent $Ag_i$, can be accomplished with the set of resources $Res_i$, while the second set of tasks $Ta_{i'}$, managed by agent $Ag_{i'}$, can be accomplished with the set of resources $Res_{i'}$.\n$Res_i$ had one consumable resource type and one non-consumable resource type, while $Res_{i'}$ had two consumable resource types and one non-consumable resource type.\nWhen the number of tasks is odd, one more task was assigned to $Ta_{i'}$.\nThere are constraints between the resource groups $Res_i$ and $Res_{i'}$ such that some assignments are not possible.\nThese constraints are managed by the arbitrator as described in Section 2.2.\nQ-decomposition reduces the planning time significantly in our problem settings, and seems a very efficient approach when a group of agents have to allocate resources which are only available to themselves, but the actions made by an agent may influence the reward obtained by at least one other agent.\nTo compute the lower bound of REVENUE-BOUND, all available resources have to be separated into the types or parts to be allocated.\nFor our problem, we allocated each resource of each type in the 
order of its specialization, as described for the REVENUE-BOUND function.\nIn terms of experiments, notice that the LRTDP and LRTDP-UP approaches to resource allocation, which do not prune the action space, are much more complex.\nFor instance, it took an average of 1512 seconds to plan with the LRTDP-UP approach for six tasks (see Figure 1).\nThe SINGH-RTDP approach diminished the planning time by using lower and upper bounds to prune the action space.\nMR-RTDP further reduces the planning time by providing very tight initial bounds.\nIn particular, SINGH-RTDP needed 231 seconds on average to solve problems with six tasks, while MR-RTDP required 76 seconds.\nIndeed, the time reduction is quite significant compared to LRTDP-UP, which demonstrates the efficiency of using bounds to prune the action space.\nFurthermore, we implemented MR-RTDP with the SINGHU bound, and this was slightly less efficient than with the MAXU bound.\nWe also implemented MR-RTDP with the SINGHL bound, and this was slightly more efficient than SINGH-RTDP.\nFrom these results, we conclude that the difference in efficiency between MR-RTDP and SINGH-RTDP is attributable more to the MARGINAL-REVENUE lower bound than to the MAXU upper bound.\nIndeed, when the number of tasks to execute is high, the lower bound of SINGH-RTDP takes the value of a single task.\nOn the other hand, the lower bound of MR-RTDP takes into account the value of all tasks by using a heuristic to distribute the resources.\nIndeed, an optimal allocation is one where the resources are distributed in the best way over all tasks, and our lower bound heuristically does that.\n1218 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nFigure 1: Efficiency of Q-decomposition LRTDP and LRTDP.\n4.\nCONCLUSION\nThe experiments have shown that Q-decomposition seems a very efficient approach when a group of agents have to allocate resources which are only available to themselves, but the actions made by an agent may influence the reward obtained by at least one other agent.\nOn the other hand, when the available resources are shared, no Q-decomposition is possible, and we proposed tight bounds for heuristic search.\nIn this case, the planning time of BOUNDED-RTDP, which prunes the action space, is significantly lower than that of LRTDP.\nFurthermore, the marginal revenue bound proposed in this paper compares favorably to the Singh and Cohn [10] approach.\nBOUNDED-RTDP with our proposed bounds may apply to a wide range of stochastic environments.\nThe only condition for the use of our bounds is that each task possesses consumable and\/or non-consumable limited resources.\nAn interesting research avenue would be to experiment with our bounds in other heuristic search algorithms.\nFor instance, FRTDP [11] and BRTDP [6] are both efficient heuristic search algorithms.\nIn particular, both of these approaches propose efficient state-trajectory updates when given upper and lower bounds.\nOur tight bounds would enable both FRTDP and BRTDP to reduce the number of backups to perform before convergence.\nFinally, the BOUNDED-RTDP function prunes the action space when $Q^U(a, s) \le L(s)$, as Singh and Cohn [10] suggested.\nFRTDP and BRTDP could also prune the action space in these circumstances to further reduce their planning time.","keyphrases":["q-decomposit","resourc alloc","resourc manag","reward separ agent","heurist search","real-time 
dynam program","complex stochast resourc alloc problem","plan agent","markov decis process","stochast environ","margin revenu bound","margin revenu"],"prmu":["P","P","P","P","P","M","R","R","U","M","M","U"]} {"id":"I-63","title":"Combinatorial Resource Scheduling for Multiagent MDPs","abstract":"Optimal resource scheduling in multiagent systems is a computationally challenging task, particularly when the values of resources are not additive. We consider the combinatorial problem of scheduling the usage of multiple resources among agents that operate in stochastic environments, modeled as Markov decision processes (MDPs). In recent years, efficient resource-allocation algorithms have been developed for agents with resource values induced by MDPs. However, this prior work has focused on static resource-allocation problems where resources are distributed once and then utilized in infinite-horizon MDPs. We extend those existing models to the problem of combinatorial resource scheduling, where agents persist only for finite periods between their (predefined) arrival and departure times, requiring resources only for those time periods. We provide a computationally efficient procedure for computing globally optimal resource assignments to agents over time. We illustrate and empirically analyze the method in the context of a stochastic job-scheduling domain.","lvl-1":"Combinatorial Resource Scheduling for Multiagent MDPs Dmitri A. Dolgov, Michael R. James, and Michael E. 
Samples AI and Robotics Group Technical Research, Toyota Technical Center USA {ddolgov, michael.r.james, michael.samples}@gmail.com ABSTRACT Optimal resource scheduling in multiagent systems is a computationally challenging task, particularly when the values of resources are not additive.\nWe consider the combinatorial problem of scheduling the usage of multiple resources among agents that operate in stochastic environments, modeled as Markov decision processes (MDPs).\nIn recent years, efficient resource-allocation algorithms have been developed for agents with resource values induced by MDPs.\nHowever, this prior work has focused on static resource-allocation problems where resources are distributed once and then utilized in infinite-horizon MDPs.\nWe extend those existing models to the problem of combinatorial resource scheduling, where agents persist only for finite periods between their (predefined) arrival and departure times, requiring resources only for those time periods.\nWe provide a computationally efficient procedure for computing globally optimal resource assignments to agents over time.\nWe illustrate and empirically analyze the method in the context of a stochastic job-scheduling domain.\nCategories and Subject Descriptors I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent systems General Terms Algorithms, Performance, Design 1.\nINTRODUCTION The tasks of optimal resource allocation and scheduling are ubiquitous in multiagent systems, but solving such optimization problems can be computationally difficult, due to a number of factors.\nIn particular, when the value of a set of resources to an agent is not additive (as is often the case with resources that are substitutes or complements), the utility function might have to be defined on an exponentially large space of resource bundles, which very quickly becomes computationally 
intractable.\nFurther, even when each agent has a utility function that is nonzero only on a small subset of the possible resource bundles, obtaining an optimal allocation is still computationally prohibitive, as the problem becomes NP-complete [14].\nSuch computational issues have recently spawned several threads of work in using compact models of agents' preferences.\nOne idea is to use any structure present in utility functions to represent them compactly, via, for example, logical formulas [15, 10, 4, 3].\nAn alternative is to directly model the mechanisms that define the agents' utility functions and perform resource allocation directly with these models [9].\nA way of accomplishing this is to model the processes by which an agent might utilize the resources and define the utility function as the payoff of these processes.\nIn particular, if an agent uses resources to act in a stochastic environment, its utility function can be naturally modeled with a Markov decision process, whose action set is parameterized by the available resources.\nThis representation can then be used to construct very efficient resource-allocation algorithms that lead to an exponential speedup over a straightforward optimization problem with flat representations of combinatorial preferences [6, 7, 8].\nHowever, this existing work on resource allocation with preferences induced by resource-parameterized MDPs makes an assumption that the resources are only allocated once and are then utilized by the agents independently within their infinite-horizon MDPs.\nThis assumption that no reallocation of resources is possible can be limiting in domains where agents arrive and depart dynamically.\nIn this paper, we extend the work on resource allocation under MDP-induced preferences to discrete-time scheduling problems, where agents are present in the system for finite time intervals and can only use resources within these intervals.\nIn particular, agents arrive and depart at arbitrary 
(predefined) times and within these intervals use resources to execute tasks in finite-horizon MDPs.\nWe address the problem of globally optimal resource scheduling, where the objective is to find an allocation of resources to the agents across time that maximizes the sum of the expected rewards that they obtain.\nIn this context, our main contribution is a mixed-integer-programming formulation of the scheduling problem that chooses globally optimal resource assignments, starting times, and execution horizons for all agents (within their arrival-departure intervals).\n1220 978-81-904262-7-5 (RPS) c 2007 IFAAMAS\nWe analyze and empirically compare two flavors of the scheduling problem: one, where agents have static resource assignments within their finite-horizon MDPs, and another, where resources can be dynamically reallocated between agents at every time step.\nIn the rest of the paper, we first lay down the necessary groundwork in Section 2 and then introduce our model and formal problem statement in Section 3.\nIn Section 4.2, we describe our main result, the optimization program for globally optimal resource scheduling.\nFollowing the discussion of our experimental results on a job-scheduling problem in Section 5, we conclude in Section 6 with a discussion of possible extensions and generalizations of our method.\n2.\nBACKGROUND Similarly to the model used in previous work on resource allocation with MDP-induced preferences [6, 7], we define the value of a set of resources to an agent as the value of the best MDP policy that is realizable, given those resources.\nHowever, since the focus of our work is on scheduling problems, and a large part of the optimization problem is to decide how resources are allocated in time among agents with finite arrival and departure times, we model the agents' planning problems as finite-horizon MDPs, in contrast to previous work that used infinite-horizon discounted MDPs.\nIn the rest of this section, we first introduce some 
necessary background on finite-horizon MDPs and present a linear-programming formulation that serves as the basis for our solution algorithm developed in Section 4.\nWe also outline the standard methods for combinatorial resource scheduling with flat resource values, which serve as a comparison benchmark for the new model developed here.\n2.1 Markov Decision Processes A stationary, finite-domain, discrete-time MDP (see, for example, [13] for a thorough and detailed development) can be described as $\langle S, A, p, r \rangle$, where: S is a finite set of system states; A is a finite set of actions that are available to the agent; p is a stationary stochastic transition function, where p(\u03c3|s, a) is the probability of transitioning to state \u03c3 upon executing action a in state s; r is a stationary reward function, where r(s, a) specifies the reward obtained upon executing action a in state s. Given such an MDP, a decision problem under a finite horizon T is to choose an optimal action at every time step to maximize the expected value of the total reward accrued during the agent's (finite) lifetime.\nThe agent's optimal policy is then a function of the current state s and the time until the horizon.\nAn optimal policy for such a problem is to act greedily with respect to the optimal value function, defined recursively by the following system of finite-time Bellman equations [2]: $v(s, t) = \max_a \big[ r(s, a) + \sum_{\sigma} p(\sigma|s, a)\, v(\sigma, t + 1) \big], \forall s \in S, t \in [1, T - 1]$; $v(s, T) = 0, \forall s \in S$; where v(s, t) is the optimal value of being in state s at time $t \in [1, T]$.\nThis optimal value function can be easily computed using dynamic programming, leading to the following optimal policy \u03c0, where \u03c0(s, a, t) is the probability of executing action a in state s at time t: $\pi(s, a, t) = 1$ if $a = \operatorname{argmax}_{a'} \big[ r(s, a') + \sum_{\sigma} p(\sigma|s, a')\, v(\sigma, t + 1) \big]$, and $\pi(s, a, t) = 0$ otherwise.\nThe above is the most common way of computing the optimal value function 
(and therefore an optimal policy) for a finite-horizon MDP.\nHowever, we can also formulate the problem as the following linear program (similarly to the dual LP for infinite-horizon discounted MDPs [13, 6, 7]): $\max \sum_s \sum_a r(s, a) \sum_t x(s, a, t)$ subject to: $\sum_a x(\sigma, a, t + 1) = \sum_{s, a} p(\sigma|s, a)\, x(s, a, t), \forall \sigma, t \in [1, T - 1]$; $\sum_a x(s, a, 1) = \alpha(s), \forall s \in S$; (1) where \u03b1(s) is the initial distribution over the state space, and x is the (non-stationary) occupation measure ($x(s, a, t) \in [0, 1]$ is the total expected number of times action a is executed in state s at time t).\nAn optimal (non-stationary) policy is obtained from the occupation measure as follows: $\pi(s, a, t) = x(s, a, t) \/ \sum_{a'} x(s, a', t), \forall s \in S, t \in [1, T]$.\n(2) Note that the standard unconstrained finite-horizon MDP, as described above, always has a uniformly-optimal solution (optimal for any initial distribution \u03b1(s)).\nTherefore, an optimal policy can be obtained by using an arbitrary constant \u03b1(s) > 0 (in particular, \u03b1(s) = 1 will result in x(s, a, t) = \u03c0(s, a, t)).\nHowever, for MDPs with resource constraints (as defined below in Section 3), uniformly-optimal policies do not in general exist.\nIn such cases, \u03b1 becomes a part of the problem input, and a resulting policy is only optimal for that particular \u03b1.\nThis result is well known for infinite-horizon MDPs with various types of constraints [1, 6], and it also holds for our finite-horizon model, which can be easily established via a line of reasoning completely analogous to the arguments in [6].\n2.2 Combinatorial Resource Scheduling A straightforward approach to resource scheduling for a set of agents M, whose values for the resources are induced by stochastic planning problems (in our case, finite-horizon MDPs), would be to have each agent enumerate all possible resource assignments over time and, for each one, compute its value by solving the 
corresponding MDP.\nThen, each agent would provide valuations for each possible resource bundle over time to a centralized coordinator, who would compute the optimal resource assignments across time based on these valuations.\nWhen resources can be allocated at different times to different agents, each agent must submit valuations for every combination of possible time horizons.\nLet each agent $m \in M$ execute its MDP within the arrival-departure time interval $\tau \in [\tau^a_m, \tau^d_m]$.\nHence, agent m will execute an MDP with time horizon no greater than $T_m = \tau^d_m - \tau^a_m + 1$.\nLet $\hat{\tau}$ be the global time horizon for the problem, before which all of the agents' MDPs must finish.\nWe assume $\tau^d_m < \hat{\tau}, \forall m \in M$.\nFor the scheduling problem where agents have static resource requirements within their finite-horizon MDPs, the agents provide a valuation for each resource bundle for each possible time horizon (from $[1, T_m]$) that they may use.\nLet \u03a9 be the set of resources to be allocated among the agents.\nAn agent will get at most one resource bundle for one of the time horizons.\nLet the variable $\psi \in \Psi_m$ enumerate all possible pairs of resource bundles and time horizons for agent m, so there are $2^{|\Omega|} \times T_m$ values for \u03c8 (the space of bundles is exponential in the number of resource types |\u03a9|).\nAgent m must provide a value $v^\psi_m$ for each \u03c8, and the coordinator will allocate at most one \u03c8 (resource bundle, time horizon) pair to each agent.\nThis allocation is expressed as an indicator variable $z^\psi_m \in \{0, 1\}$ that shows whether \u03c8 is assigned to agent m. 
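As a quick sanity check on the size of this valuation space, the enumeration of \u03c8 can be sketched in a few lines of Python; the resource names and the instance sizes (|\u03a9| = 3 resource types, a horizon of 4) are hypothetical, chosen only for illustration:

```python
from itertools import chain, combinations

def bundles(resources):
    """All subsets of the resource set: the 2^|Omega| possible bundles."""
    return list(chain.from_iterable(combinations(resources, k)
                                    for k in range(len(resources) + 1)))

# Hypothetical tiny instance: |Omega| = 3 resource types, horizon T_m = 4.
omega = ("w1", "w2", "w3")
T_m = 4

# Static case: psi ranges over (bundle, horizon) pairs -> 2^|Omega| * T_m values.
psi_static = [(b, t) for b in bundles(omega) for t in range(1, T_m + 1)]

# Dynamic case: one bundle per time step, for each horizon t -> sum over t of (2^|Omega|)^t.
n_dynamic = sum((2 ** len(omega)) ** t for t in range(1, T_m + 1))

print(len(psi_static), n_dynamic)  # 32 vs. 4680 already for this tiny instance
```

Even at this tiny size the dynamic enumeration dwarfs the static one, which is exactly the blow-up that motivates the compact MDP-based formulation developed in Section 4.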
For time \u03c4 and resource \u03c9, the function $n_m(\psi, \tau, \omega) \in \{0, 1\}$ indicates whether the bundle in \u03c8 uses resource \u03c9 at time \u03c4 (we make the assumption that agents have binary resource requirements).\nThis allocation problem is NP-complete, even when considering only a single time step, and its difficulty increases significantly with multiple time steps because of the increasing number of values of \u03c8.\nThe problem of finding an optimal allocation that satisfies the global constraint that the amount of each resource \u03c9 allocated to all agents does not exceed the available amount $\hat{\phi}(\omega)$ can be expressed as the following integer program: $\max \sum_{m \in M} \sum_{\psi \in \Psi_m} z^\psi_m v^\psi_m$ subject to: $\sum_{\psi \in \Psi_m} z^\psi_m \le 1, \forall m \in M$; $\sum_{m \in M} \sum_{\psi \in \Psi_m} z^\psi_m n_m(\psi, \tau, \omega) \le \hat{\phi}(\omega), \forall \tau \in [1, \hat{\tau}], \forall \omega \in \Omega$; (3) The first constraint in (3) says that no agent can receive more than one bundle, and the second constraint ensures that the total assignment of resource \u03c9 does not, at any time, exceed the resource bound.\nFor the scheduling problem where the agents are able to dynamically reallocate resources, each agent must specify a value for every combination of bundles and time steps within its time horizon.\nLet the variable $\psi \in \Psi_m$ in this case enumerate all possible resource bundles for which at most one bundle may be assigned to agent m at each time step.\nTherefore, in this case there are $\sum_{t \in [1, T_m]} (2^{|\Omega|})^t \sim 2^{|\Omega| T_m}$ possibilities of resource bundles assigned to different time slots, for the $T_m$ different time horizons.\nThe same set of equations (3) can be used to solve this dynamic scheduling problem, but the integer program is different because of the difference in how \u03c8 is defined.\nIn this case, the number of \u03c8 values is exponential in each 
agent's planning horizon $T_m$, resulting in a much larger program.\nThis straightforward approach to solving both of these scheduling problems requires an enumeration and solution of either $2^{|\Omega|} T_m$ (static allocation) or $\sum_{t \in [1, T_m]} 2^{|\Omega| t}$ (dynamic reallocation) MDPs for each agent, which very quickly becomes intractable with the growth of the number of resources |\u03a9| or the time horizon $T_m$.\n3.\nMODEL AND PROBLEM STATEMENT We now formally introduce our model of the resource-scheduling problem.\nThe problem input consists of the following components: \u2022 $M, \Omega, \hat{\phi}, \tau^a_m, \tau^d_m, \hat{\tau}$ are as defined above in Section 2.2.\n\u2022 $\{\Theta_m\} = \{\langle S, A, p_m, r_m, \alpha_m \rangle\}$ are the MDPs of all agents $m \in M$. Without loss of generality, we assume that the state and action spaces of all agents are the same, but each agent has its own transition function $p_m$, reward function $r_m$, and initial conditions $\alpha_m$.\n\u2022 $\phi_m : A \times \Omega \to \{0, 1\}$ is the mapping of actions to resources for agent m. 
$\phi_m(a, \omega)$ indicates whether action a of agent m needs resource \u03c9.\nAn agent m that receives a set of resources that does not include resource \u03c9 cannot execute in its MDP policy any action a for which $\phi_m(a, \omega) = 1$.\nWe assume all resource requirements are binary; as discussed below in Section 6, this assumption is not limiting.\nGiven the above input, the optimization problem we consider is to find the globally optimal (i.e., maximizing the sum of expected rewards) mapping of resources to agents for all time steps: $\Delta : \tau \times M \times \Omega \to \{0, 1\}$.\nA solution is feasible if the corresponding assignment of resources to the agents does not violate the global resource constraint: $\sum_m \Delta_m(\tau, \omega) \le \hat{\phi}(\omega), \forall \omega \in \Omega, \tau \in [1, \hat{\tau}]$.\n(4) We consider two flavors of the resource-scheduling problem.\nThe first formulation restricts resource assignments to the space where the allocation of resources to each agent is static during the agent's lifetime.\nThe second formulation allows reassignment of resources between agents at every time step within their lifetimes.\nFigure 1 depicts a resource-scheduling problem with three agents $M = \{m_1, m_2, m_3\}$, three resources $\Omega = \{\omega_1, \omega_2, \omega_3\}$, and a global problem horizon of $\hat{\tau} = 11$.\nThe agents' arrival and departure times are shown as gray boxes and are {1, 6}, {3, 7}, and {2, 11}, respectively.\nA solution to this problem is shown via horizontal bars within each agent's box, where the bars correspond to the allocation of the three resource types.\nFigure 1a shows a solution to a static scheduling problem.\nAccording to the shown solution, agent $m_1$ begins the execution of its MDP at time \u03c4 = 1 and has a lock on all three resources until it finishes execution at time \u03c4 = 3.\nNote that agent $m_1$ relinquishes its hold on the resources before its announced departure time of $\tau^d_{m_1} = 6$, ostensibly because 
other agents can utilize the resources more effectively.\nThus, at time \u03c4 = 4, resources $\omega_1$ and $\omega_3$ are allocated to agent $m_2$, who then uses them to execute its MDP (using only actions supported by resources $\omega_1$ and $\omega_3$) until time \u03c4 = 7.\nAgent $m_3$ holds resource $\omega_3$ during the interval $\tau \in [4, 10]$.\nFigure 1b shows a possible solution to the dynamic version of the same problem.\nThere, resources can be reallocated between agents at every time step.\nFor example, agent $m_1$ gives up its use of resource $\omega_2$ at time \u03c4 = 2, although it continues the execution of its MDP until time \u03c4 = 6.\nNotice that an agent is not allowed to stop and restart its MDP, so agent $m_1$ is only able to continue executing in the interval $\tau \in [3, 4]$ if it has actions that do not require any resources ($\phi_m(a, \omega) = 0$).\nClearly, the model and problem statement described above make a number of assumptions about the problem and the desired solution properties.\nWe discuss some of those assumptions and their implications in Section 6.\n(a) (b) Figure 1: Illustration of a solution to a resource-scheduling problem with three agents and three resources: a) static resource assignment (resource assignments are constant within agents' lifetimes); b) dynamic assignment (resource assignments are allowed to change at every time step).\n4.\nRESOURCE SCHEDULING Our resource-scheduling algorithm proceeds in two stages.\nFirst, we perform a preprocessing step that augments the agent MDPs; this process is described in Section 4.1.\nSecond, using these augmented MDPs we construct a global optimization problem, which is described in Section 4.2.\n4.1 Augmenting Agents' MDPs In the model described in the previous section, we assume that if an agent does not possess the necessary resources to perform actions in its MDP, its execution is halted and the 
agent leaves the system.\nIn other words, the MDPs cannot be paused and resumed.\nFor example, in the problem shown in Figure 1a, agent $m_1$ releases all resources after time \u03c4 = 3, at which point the execution of its MDP is halted.\nSimilarly, agents $m_2$ and $m_3$ only execute their MDPs in the intervals $\tau \in [4, 6]$ and $\tau \in [4, 10]$, respectively.\nTherefore, an important part of the global decision-making problem is to decide the window of time during which each of the agents is active (i.e., executing its MDP).\nTo accomplish this, we augment each agent's MDP with two new states (start and finish states $s^b$ and $s^f$, respectively) and a new start\/stop action $a^*$, as illustrated in Figure 2.\nThe idea is that an agent stays in the start state $s^b$ until it is ready to execute its MDP, at which point it performs the start\/stop action $a^*$ and transitions into the state space of the original MDP with the transition probability that corresponds to the original initial distribution \u03b1(s).\nFor example, in Figure 1a, for agent $m_2$ this would happen at time \u03c4 = 4.\nOnce the agent gets to the end of its activity window (time \u03c4 = 6 for agent $m_2$ in Figure 1a), it performs the start\/stop action, which takes it into the sink finish state $s^f$ at time \u03c4 = 7.\nMore precisely, given an MDP $\langle S, A, p_m, r_m, \alpha_m \rangle$, we define an augmented MDP $\langle S', A', p'_m, r'_m, \alpha'_m \rangle$ as follows: $S' = S \cup \{s^b, s^f\}$; $A' = A \cup \{a^*\}$; $p'(s|s^b, a^*) = \alpha(s), \forall s \in S$; $p'(s^b|s^b, a) = 1.0, \forall a \in A$; $p'(s^f|s, a^*) = 1.0, \forall s \in S$; $p'(\sigma|s, a) = p(\sigma|s, a), \forall s, \sigma \in S, a \in A$; $r'(s^b, a) = r'(s^f, a) = 0, \forall a \in A'$; $r'(s, a) = r(s, a), \forall s \in S, a \in A$; $\alpha'(s^b) = 1$; $\alpha'(s) = 0, \forall s \in S$; where all non-specified transition probabilities are assumed to be zero.\nFurther, in order to account for the new starting state, we begin the MDP one time-step 
earlier, setting \u03c4a m \u2190 \u03c4a m \u2212 1.\nThis will not affect the resource allocation due to the resource constraints only being enforced for the original MDP states, as will be discussed in the next section.\nFor example, the augmented MDPs shown in Figure 2b (which starts in state sb at time \u03c4 = 2) would be constructed from an MDP with original arrival time \u03c4 = 3.\nFigure 2b also shows a sample trajectory through the state space: the agent starts in state sb , transitions into the state space S of the original MDP, and finally exists into the sink state sf .\nNote that if we wanted to model a problem where agents could pause their MDPs at arbitrary time steps (which might be useful for domains where dynamic reallocation is possible), we could easily accomplish this by including an extra action that transitions from each state to itself with zero reward.\n4.2 MILP for Resource Scheduling Given a set of augmented MDPs, as defined above, the goal of this section is to formulate a global optimization program that solves the resource-scheduling problem.\nIn this section and below, all MDPs are assumed to be the augmented MDPs as defined in Section 4.1.\nOur approach is similar to the idea used in [6]: we begin with the linear-program formulation of agents'' MDPs (1) and augment it with constraints that ensure that the corresponding resource allocation across agents and time is valid.\nThe resulting optimization problem then simultaneously solves the agents'' MDPs and resource-scheduling problems.\nIn the rest of this section, we incrementally develop a mixed integer program (MILP) that achieves this.\nIn the absence of resource constraints, the agents'' finitehorizon MDPs are completely independent, and the globally optimal solution can be trivially obtained via the following LP, which is simply an aggregation of single-agent finitehorizon LPs: max X m X s X a rm(s, a) X t xm(s, a, t) subject to: X a xm(\u03c3, a, t + 1) = X s,a pm(\u03c3|s, 
a)xm(s, a, t), \u2200m \u2208 M, \u03c3 \u2208 S, t \u2208 [1, Tm \u2212 1]; X a xm(s, a, 1) = \u03b1m(s), \u2200m \u2208 M, s \u2208 S; (12) where xm(s, a, t) is the occupation measure of agent m, and The Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1223 (a) (b) Figure 2: Illustration of augmenting an MDP to allow for variable starting and stopping times: a) (left) the original two-state MDP with a single action; (right) the augmented MDP with new states sb and sf and the new action a\u2217 (note that the origianl transitions are not changed in the augmentation process); b) the augmented MDP displayed as a trajectory through time (grey lines indicate all transitions, while black lines indicate a given trajectory.\nObjective Function (sum of expected rewards over all agents) max X m X s X a rm(s, a) X t xm(s, a, t) (5) Meaning Implication Linear Constraints Tie x to \u03b8.\nAgent is only active when occupation measure is nonzero in original MDP states.\n\u03b8m(\u03c4) = 0 =\u21d2 xm(s, a, \u03c4 \u2212\u03c4a m+1) = 0 \u2200s \/\u2208 {sb , sf }, a \u2208 A X s\/\u2208{sb,sf } X a xm(s, a, t) \u2264 \u03b8m(\u03c4a m + t \u2212 1) \u2200m \u2208 M, \u2200t \u2208 [1, Tm] (6) Agent can only be active in \u03c4 \u2208 (\u03c4a m, \u03c4d m) \u03b8m(\u03c4) = 0 \u2200m \u2208 M, \u03c4 \/\u2208 (\u03c4a m, \u03c4d m) (7) Cannot use resources when not active \u03b8m(\u03c4) = 0 =\u21d2 \u0394m(\u03c4, \u03c9) = 0 \u2200\u03c4 \u2208 [0, b\u03c4], \u03c9 \u2208 \u03a9 \u0394m(\u03c4, \u03c9) \u2264 \u03b8m(\u03c4) \u2200m \u2208 M, \u03c4 \u2208 [0, b\u03c4], \u03c9 \u2208 \u03a9 (8) Tie x to \u0394 (nonzero x forces corresponding \u0394 to be nonzero.)\n\u0394m(\u03c4, \u03c9) = 0, \u03d5m(a, \u03c9) = 1 =\u21d2 xm(s, a, \u03c4 \u2212 \u03c4a m + 1) = 0 \u2200s \/\u2208 {sb , sf } 1\/|A| X a \u03d5m(a, \u03c9) X s\/\u2208{sb,sf } xm(s, a, t) \u2264 \u0394m(t + \u03c4a m \u2212 1, \u03c9) \u2200m \u2208 M, \u03c9 \u2208 \u03a9, t 
\u2208 [1, Tm] (9) Resource bounds X m \u0394m(\u03c4, \u03c9) \u2264 b\u03d5(\u03c9) \u2200\u03c9 \u2208 \u03a9, \u03c4 \u2208 [0, b\u03c4] (10) Agent cannot change resources while active.\nOnly enabled for scheduling with static assignments.\n\u03b8m(\u03c4) = 1 and \u03b8m(\u03c4 + 1) = 1 =\u21d2 \u0394m(\u03c4, \u03c9) = \u0394m(\u03c4 + 1, \u03c9) \u0394m(\u03c4, \u03c9) \u2212 Z(1 \u2212 \u03b8m(\u03c4 + 1)) \u2264 \u0394m(\u03c4 + 1, \u03c9) + Z(1 \u2212 \u03b8m(\u03c4)) \u0394m(\u03c4, \u03c9) + Z(1 \u2212 \u03b8m(\u03c4 + 1)) \u2265 \u0394m(\u03c4 + 1, \u03c9) \u2212 Z(1 \u2212 \u03b8m(\u03c4)) \u2200m \u2208 M, \u03c9 \u2208 \u03a9, \u03c4 \u2208 [0, b\u03c4] (11) Table 1: MILP for globally optimal resource scheduling.\nTm = \u03c4d m \u2212 \u03c4a m + 1 is the time horizon for the agent``s MDP.\nUsing this LP as a basis, we augment it with constraints that ensure that the resource usage implied by the agents'' occupation measures {xm} does not violate the global resource requirements b\u03d5 at any time step \u03c4 \u2208 [0, b\u03c4].\nTo formulate these resource constraints, we use the following binary variables: \u2022 \u0394m(\u03c4, \u03c9) = {0, 1}, \u2200m \u2208 M, \u03c4 \u2208 [0, b\u03c4], \u03c9 \u2208 \u03a9, which serve as indicator variables that define whether agent m possesses resource \u03c9 at time \u03c4.\nThese are analogous to the static indicator variables used in the one-shot static resource-allocation problem in [6].\n\u2022 \u03b8m = {0, 1}, \u2200m \u2208 M, \u03c4 \u2208 [0, b\u03c4] are indicator variables that specify whether agent m is active (i.e., executing its MDP) at time \u03c4.\nThe meaning of resource-usage variables \u0394 is illustrated in Figure 1: \u0394m(\u03c4, \u03c9) = 1 only if resource \u03c9 is allocated to agent m at time \u03c4.\nThe meaning of the activity indicators \u03b8 is illustrated in Figure 2b: when agent m is in either the start state sb or the finish state sf , the corresponding \u03b8m = 0, 
but once the agent becomes active and enters one of the other states, we set $\theta_m = 1$. This meaning of $\theta$ can be enforced with a linear constraint that synchronizes the values of the agents' occupation measures $x_m$ and the activity indicators $\theta$, as shown in (6) in Table 1.

1224 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

Another constraint we have to add, because the activity indicators $\theta$ are defined on the global timeline $\tau$, is one that enforces that the agent is inactive outside of its arrival-departure window. This is accomplished by constraint (7) in Table 1. Furthermore, agents should not be using resources while they are inactive. This constraint can also be enforced via a linear inequality on $\theta$ and $\Delta$, as shown in (8). Constraint (6) sets the value of $\theta$ to match the policy defined by the occupation measure $x_m$. In a similar fashion, we have to make sure that the resource-usage variables $\Delta$ are also synchronized with the occupation measure $x_m$. This is done via constraint (9) in Table 1, which is nearly identical to the analogous constraint from [6]. After implementing the above constraint, which enforces the meaning of $\Delta$, we add a constraint that ensures that the agents' resource usage never exceeds the amounts of available resources. This condition is also trivially expressed as a linear inequality (10) in Table 1. Finally, for the problem formulation where resource assignments are static during the lifetime of an agent, we add a constraint that ensures that the resource-usage variables $\Delta$ do not change their value while the agent is active ($\theta = 1$). This is accomplished via the linear constraint (11), where $Z \ge 2$ is a constant that is used to turn off the constraints when $\theta_m(\tau) = 0$ or $\theta_m(\tau+1) = 0$. This constraint is not used for the dynamic problem formulation, where resources can be reallocated between agents at every time
step.

To summarize, Table 1 together with the conservation-of-flow constraints from (12) defines the MILP that simultaneously computes an optimal resource assignment for all agents across time, as well as optimal finite-horizon MDP policies that are valid under that resource assignment.

As a rough measure of the complexity of this MILP, let us consider the number of optimization variables and constraints. Let $T_M = \sum_m T_m = \sum_m (\tau^d_m - \tau^a_m + 1)$ be the sum of the lengths of the arrival-departure windows across all agents. Then, the number of optimization variables is $T_M + \hat\tau|M||\Omega| + \hat\tau|M|$, of which $T_M$ are continuous ($x_m$), and $\hat\tau|M||\Omega| + \hat\tau|M|$ are binary ($\Delta$ and $\theta$). However, notice that all but $T_M|M|$ of the $\theta$ are set to zero by constraint (7), which also immediately forces all but $T_M|M||\Omega|$ of the $\Delta$ to be zero via the constraints (8). The number of constraints (not including the degenerate constraints in (7)) in the MILP is $T_M + T_M|\Omega| + \hat\tau|\Omega| + \hat\tau|M||\Omega|$.

Despite the fact that the complexity of the MILP is, in the worst case, exponential in the number of binary variables (strictly speaking, solving MILPs to optimality is NP-complete in the number of integer variables), the complexity of this MILP is significantly (exponentially) lower than that of the MILP with flat utility functions, described in Section 2.2. This result echoes the efficiency gains reported in [6] for single-shot resource-allocation problems, but is much more pronounced because of the explosion of the flat utility representation due to the temporal aspect of the problem (recall the prohibitive complexity of the combinatorial optimization in Section 2.2). We empirically analyze the performance of this method in Section 5.

5. EXPERIMENTAL RESULTS

Although the complexity of solving MILPs is in the worst case exponential in the number of integer variables, there are many efficient methods for solving
MILPs that allow our algorithm to scale well for parameters common to resource allocation and scheduling problems. In particular, this section introduces a problem domain, the repair-shop problem, used to empirically evaluate our algorithm's scalability in terms of the number of agents $|M|$, the number of shared resources $|\Omega|$, and the varied lengths of global time $\hat\tau$ during which agents may enter and exit the system.

The repair-shop problem is a simple parameterized MDP adopting the metaphor of a vehicular repair shop. Agents in the repair shop are mechanics with a number of independent tasks that yield reward only when completed. In our MDP model of this system, actions taken to advance through the state space are only allowed if the agent holds certain resources that are publicly available to the shop. These resources are in finite supply, and optimal policies for the shop determine when each agent may hold the limited resources to take actions and earn individual rewards. Each task to be completed is associated with a single action, although the agent is required to repeat the action numerous times before completing the task and earning a reward. This model was parameterized in terms of the number of agents in the system, the number of different types of resources that could be linked to necessary actions, a global time during which agents are allowed to arrive and depart, and a maximum length for the number of time steps an agent may remain in the system.

All datapoints in our experiments were obtained with 20 evaluations using CPLEX to solve the MILPs on a Pentium 4 computer with 2GB of RAM. Trials were conducted on both the static and the dynamic version of the resource-scheduling problem, as defined earlier. Figure 3 shows the runtime and policy value for independent modifications to the parameter set. The top row shows how the solution time for the MILP scales as we increase the number of agents $|M|$, the global time horizon $\hat\tau$, and
the number of resources $|\Omega|$. Increasing the number of agents leads to exponential complexity scaling, which is to be expected for an NP-complete problem. However, increasing the global time limit $\hat\tau$ or the total number of resource types $|\Omega|$, while holding the number of agents constant, does not lead to decreased performance. This occurs because the problems get easier as they become under-constrained, which is also a common phenomenon for NP-complete problems. We also observe that the solution to the dynamic version of the problem can often be computed much faster than the static version.

[Figure 3 (plots omitted): Evaluation of our MILP for variable numbers of agents (column 1), lengths of global-time window (column 2), and numbers of resource types (column 3). Top row shows CPU time, and bottom row shows the joint reward of agents' MDP policies. Error bars show the 1st and 3rd quartiles (25% and 75%).]

The bottom row of Figure 3 shows the joint policy value of the policies that correspond to the computed optimal resource-allocation schedules. We can observe that the dynamic version yields higher reward (as expected, since the reward for the dynamic version is always no less than the reward of the static version). We should point out that these graphs should not be viewed as a measure of performance of two different algorithms (both algorithms produce optimal solutions, but to different problems), but rather as observations about how the quality of optimal solutions changes as more flexibility is allowed in the reallocation of resources.

Figure 4 shows runtime and policy value for trials in which common input variables are scaled together.

[Figure 4 (plots omitted): Evaluation of our MILP using correlated input variables. The left column tracks the performance and CPU time as the number of agents and global-time window increase together ($\hat\tau = 10|M|$). The middle and right columns track the performance and CPU time as the number of resources and the number of agents increase together, as $|\Omega| = 2|M|$ and $|\Omega| = 5|M|$, respectively. Error bars show the 1st and 3rd quartiles (25% and 75%).]

This allows us to explore domains where the total number of agents scales proportionally to the total number of resource types or the global time
horizon, while keeping constant the average agent density (per unit of global time) or the average number of resources per agent (which commonly occurs in real-life applications). Overall, we believe that these experimental results indicate that our MILP formulation can be used to effectively solve resource-scheduling problems of nontrivial size.

6. DISCUSSION AND CONCLUSIONS

Throughout the paper, we have made a number of assumptions in our model and solution algorithm; we discuss their implications below.

• Continual execution. We assume that once an agent stops executing its MDP (transitions into state $s^f$), it exits the system and cannot return. It is easy to relax this assumption for domains where agents' MDPs can be paused and restarted. All that is required is to include an additional pause action which transitions from a given state back to itself and has zero reward.

• Indifference to start time. We used a reward model where agents' rewards depend only on the time horizon of their MDPs and not on the global start time. This is a consequence of our MDP-augmentation procedure from Section 4.1. It is easy to extend the model so that agents incur an explicit penalty for idling, by assigning a negative reward to the start state $s^b$.

• Binary resource requirements. For simplicity, we have assumed that resource costs are binary, $\varphi_m(a, \omega) \in \{0, 1\}$, but our results generalize in a straightforward manner to non-binary resource mappings, analogously to the procedure used in [5].

• Cooperative agents. The optimization procedure discussed in this paper was developed in the context of cooperative agents, but it can also be used to design a mechanism for scheduling resources among self-interested agents. This optimization procedure can be embedded in a Vickrey-Clarke-Groves auction, completely analogously to the way it was done in [7]. In fact, all the results of [7] about the properties of the auction and
information privacy directly carry over to the scheduling domain discussed in this paper, requiring only slight modifications to deal with finite-horizon MDPs.

• Known, deterministic arrival and departure times. Finally, we have assumed that agents' arrival and departure times ($\tau^a_m$ and $\tau^d_m$) are deterministic and known a priori. This assumption is fundamental to our solution method. While there are many domains where this assumption is valid, in many cases agents arrive and depart dynamically, and their arrival and departure times can only be predicted probabilistically, leading to online resource-allocation problems. In particular, in the case of self-interested agents, this becomes an interesting version of an online-mechanism-design problem [11, 12].

In summary, we have presented an MILP formulation for the combinatorial resource-scheduling problem where agents' values for possible resource assignments are defined by finite-horizon MDPs. This result extends previous work ([6, 7]) on static one-shot resource allocation under MDP-induced preferences to resource-scheduling problems with a temporal aspect. As such, this work takes a step in the direction of designing an online mechanism for agents with combinatorial resource preferences induced by stochastic planning problems. Relaxing the assumption about deterministic arrival and departure times of the agents is a focus of our future work.

We would like to thank the anonymous reviewers for their insightful comments and suggestions.

7. REFERENCES

[1] E. Altman and A. Shwartz. Adaptive control of constrained Markov chains: Criteria and policies. Annals of Operations Research, special issue on Markov Decision Processes, 28:101-134, 1991.
[2] R. Bellman. Dynamic Programming. Princeton University Press, 1957.
[3] C. Boutilier. Solving concisely expressed combinatorial auction problems. In Proc. of AAAI-02, pages 359-366, 2002.
[4] C. Boutilier and H. H.
Hoos. Bidding languages for combinatorial auctions. In Proc. of IJCAI-01, pages 1211-1217, 2001.
[5] D. Dolgov. Integrated Resource Allocation and Planning in Stochastic Multiagent Environments. PhD thesis, Computer Science Department, University of Michigan, February 2006.
[6] D. A. Dolgov and E. H. Durfee. Optimal resource allocation and policy formulation in loosely-coupled Markov decision processes. In Proc. of ICAPS-04, pages 315-324, June 2004.
[7] D. A. Dolgov and E. H. Durfee. Computationally efficient combinatorial auctions for resource allocation in weakly-coupled MDPs. In Proc. of AAMAS-05, New York, NY, USA, 2005. ACM Press.
[8] D. A. Dolgov and E. H. Durfee. Resource allocation among agents with preferences induced by factored MDPs. In Proc. of AAMAS-06, 2006.
[9] K. Larson and T. Sandholm. Mechanism design and deliberative agents. In Proc. of AAMAS-05, pages 650-656, New York, NY, USA, 2005. ACM Press.
[10] N. Nisan. Bidding and allocation in combinatorial auctions. In Electronic Commerce, 2000.
[11] D. C. Parkes and S. Singh. An MDP-based approach to online mechanism design. In Proc. of the Seventeenth Annual Conference on Neural Information Processing Systems (NIPS-03), 2003.
[12] D. C. Parkes, S. Singh, and D. Yanovsky. Approximately efficient online mechanism design. In Proc. of the Eighteenth Annual Conference on Neural Information Processing Systems (NIPS-04), 2004.
[13] M. L. Puterman. Markov Decision Processes. John Wiley & Sons, New York, 1994.
[14] M. H. Rothkopf, A. Pekec, and R. M. Harstad. Computationally manageable combinational auctions. Management Science, 44(8):1131-1147, 1998.
[15] T.
Sandholm. An algorithm for optimal winner determination in combinatorial auctions. In Proc. of IJCAI-99, pages 542-547, San Francisco, CA, USA, 1999. Morgan Kaufmann Publishers Inc.

Combinatorial Resource Scheduling for Multiagent MDPs

ABSTRACT

Optimal resource scheduling in multiagent systems is a computationally challenging task, particularly when the values of resources are not additive. We consider the combinatorial problem of scheduling the usage of multiple resources among agents that operate in stochastic environments, modeled as Markov decision processes (MDPs). In recent years, efficient resource-allocation algorithms have been developed for agents with resource values induced by MDPs. However, this prior work has focused on static resource-allocation problems, where resources are distributed once and then utilized in infinite-horizon MDPs. We extend those existing models to the problem of combinatorial resource scheduling, where agents persist only for finite periods between their (predefined) arrival and departure times, requiring resources only for those time periods. We provide a computationally efficient procedure for computing globally optimal resource assignments to agents over time. We illustrate and empirically analyze the method in the context of a stochastic job-scheduling domain.

1. INTRODUCTION

The tasks of optimal resource allocation and scheduling are ubiquitous in multiagent systems, but solving such optimization problems can be computationally difficult, due to a number of factors. In particular, when the value of a set of resources to an agent is not additive (as is often the case with resources that are substitutes or complements), the utility function might have to be defined on an exponentially large space of resource bundles, which very quickly becomes computationally intractable. Further, even when each agent has a
utility function that is nonzero only on a small subset of the possible resource bundles, obtaining an optimal allocation is still computationally prohibitive, as the problem becomes NP-complete [14]. Such computational issues have recently spawned several threads of work on using compact models of agents' preferences. One idea is to use any structure present in utility functions to represent them compactly, via, for example, logical formulas [15, 10, 4, 3]. An alternative is to directly model the mechanisms that define the agents' utility functions and perform resource allocation directly with these models [9]. A way of accomplishing this is to model the processes by which an agent might utilize the resources and define the utility function as the payoff of these processes. In particular, if an agent uses resources to act in a stochastic environment, its utility function can be naturally modeled with a Markov decision process whose action set is parameterized by the available resources. This representation can then be used to construct very efficient resource-allocation algorithms that lead to an exponential speedup over a straightforward optimization problem with flat representations of combinatorial preferences [6, 7, 8]. However, this existing work on resource allocation with preferences induced by resource-parameterized MDPs makes the assumption that the resources are allocated only once and are then utilized by the agents independently within their infinite-horizon MDPs. This assumption that no reallocation of resources is possible can be limiting in domains where agents arrive and depart dynamically. In this paper, we extend the work on resource allocation under MDP-induced preferences to discrete-time scheduling problems, where agents are present in the system for finite time intervals and can only use resources within these intervals. In particular, agents arrive and depart at arbitrary (predefined) times and within these intervals use resources to
execute tasks in finite-horizon MDPs. We address the problem of globally optimal resource scheduling, where the objective is to find an allocation of resources to the agents across time that maximizes the sum of the expected rewards that they obtain. In this context, our main contribution is a mixed-integer-programming formulation of the scheduling problem that chooses globally optimal resource assignments, starting times, and execution horizons for all agents (within their arrival-departure intervals). We analyze and empirically compare two flavors of the scheduling problem: one where agents have static resource assignments within their finite-horizon MDPs, and another where resources can be dynamically reallocated between agents at every time step. In the rest of the paper, we first lay down the necessary groundwork in Section 2 and then introduce our model and formal problem statement in Section 3. In Section 4.2, we describe our main result, the optimization program for globally optimal resource scheduling. Following the discussion of our experimental results on a job-scheduling problem in Section 5, we conclude in Section 6 with a discussion of possible extensions and generalizations of our method.

2. BACKGROUND

Similarly to the model used in previous work on resource allocation with MDP-induced preferences [6, 7], we define the value of a set of resources to an agent as the value of the best MDP policy that is realizable given those resources. However, since the focus of our work is on scheduling problems, and a large part of the optimization problem is to decide how resources are allocated in time among agents with finite arrival and departure times, we model the agents' planning problems as finite-horizon MDPs, in contrast to previous work that used infinite-horizon discounted MDPs. In the rest of this section, we first introduce some necessary background on finite-horizon MDPs and present a linear-programming formulation that serves as the
basis for our solution algorithm developed in Section 4. We also outline the standard methods for combinatorial resource scheduling with flat resource values, which serve as a comparison benchmark for the new model developed here.

2.1 Markov Decision Processes

A stationary, finite-domain, discrete-time MDP (see, for example, [13] for a thorough and detailed development) can be described as $(S, A, p, r)$, where: $S$ is a finite set of system states; $A$ is a finite set of actions that are available to the agent; $p$ is a stationary stochastic transition function, where $p(\sigma|s, a)$ is the probability of transitioning to state $\sigma$ upon executing action $a$ in state $s$; $r$ is a stationary reward function, where $r(s, a)$ specifies the reward obtained upon executing action $a$ in state $s$. Given such an MDP, a decision problem under a finite horizon $T$ is to choose an optimal action at every time step to maximize the expected value of the total reward accrued during the agent's (finite) lifetime. The agent's optimal policy is then a function of the current state $s$ and the time until the horizon. An optimal policy for such a problem is to act greedily with respect to the optimal value function, defined recursively by the following system of finite-time Bellman equations [2] (reconstructed here in the standard form, with $t$ denoting the number of decision steps remaining and $v(s, 0) = 0$):

$v(s, t) = \max_a \big[ r(s, a) + \sum_{\sigma} p(\sigma|s, a)\, v(\sigma, t-1) \big], \quad \forall s \in S,\ t \in [1, T].$

This optimal value function can be easily computed using dynamic programming, leading to the following optimal policy $\pi$, where $\pi(s, a, t)$ is the probability of executing action $a$ in state $s$ with $t$ steps remaining: $\pi(s, a, t) = 1$ for an action $a$ attaining the maximum above, and $\pi(s, a, t) = 0$ otherwise. The above is the most common way of computing the optimal value function (and therefore an optimal policy) for a finite-horizon MDP. However, we can also formulate the problem as a linear program over the occupation measure $x(s, a, t)$ (similarly to the dual LP for infinite-horizon discounted MDPs [13, 6, 7]; reconstructed here in its standard form for an initial state distribution $\alpha$):

$\max_{x \ge 0} \sum_{t, s, a} r(s, a)\, x(s, a, t)$ subject to $\sum_a x(\sigma, a, t+1) = \sum_{s, a} p(\sigma|s, a)\, x(s, a, t)\ \forall \sigma, t$, and $\sum_a x(s, a, 1) = \alpha(s)\ \forall s.$

Note that the standard unconstrained finite-horizon MDP, as described above, always has a uniformly-optimal solution (optimal for any initial distribution $\alpha(s)$). Therefore, an optimal policy can be obtained by using an arbitrary
constant $\alpha(s) > 0$ (in particular, $\alpha(s) = 1$ will result in $x(s, a, t) = \pi(s, a, t)$). However, for MDPs with resource constraints (as defined below in Section 3), uniformly-optimal policies do not in general exist. In such cases, $\alpha$ becomes a part of the problem input, and a resulting policy is only optimal for that particular $\alpha$. This result is well known for infinite-horizon MDPs with various types of constraints [1, 6], and it also holds for our finite-horizon model, which can be easily established via a line of reasoning completely analogous to the arguments in [6].

2.2 Combinatorial Resource Scheduling

A straightforward approach to resource scheduling for a set of agents $M$, whose values for the resources are induced by stochastic planning problems (in our case, finite-horizon MDPs), would be to have each agent enumerate all possible resource assignments over time and, for each one, compute its value by solving the corresponding MDP. Then, each agent would provide valuations for each possible resource bundle over time to a centralized coordinator, who would compute the optimal resource assignments across time based on these valuations. When resources can be allocated at different times to different agents, each agent must submit valuations for every combination of possible time horizons. Let each agent $m \in M$ execute its MDP within the arrival-departure time interval $\tau \in [\tau^a_m, \tau^d_m]$. Hence, agent $m$ will execute an MDP with time horizon no greater than $T_m = \tau^d_m - \tau^a_m + 1$. Let $\hat\tau$ be the global time horizon for the problem, before which all of the agents' MDPs must finish. We assume $\tau^d_m < \hat\tau$, $\forall m \in M$.
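The finite-horizon recursion of Section 2.1 can be made concrete with a minimal Python sketch. The two-state MDP at the bottom is a made-up toy example (not from the paper), and the code assumes the boundary condition $v(s, 0) = 0$:

```python
# Finite-horizon value iteration: v(s, t) is the best expected total reward
# achievable from state s with t decision steps remaining (v(s, 0) = 0).
# The toy MDP below is hypothetical, for illustration only.

def value_iteration(states, actions, p, r, T):
    """p[(s, a)] -> {next_state: prob}; r[(s, a)] -> immediate reward."""
    v = {s: 0.0 for s in states}          # v(., 0) = 0
    policy = {}                           # policy[(s, t)] -> greedy action
    for t in range(1, T + 1):
        v_new = {}
        for s in states:
            # Q-value of each action with t steps remaining.
            q = {a: r[(s, a)] + sum(pr * v[s2] for s2, pr in p[(s, a)].items())
                 for a in actions}
            best = max(q, key=q.get)
            policy[(s, t)] = best
            v_new[s] = q[best]
        v = v_new
    return v, policy

# Toy two-state example: 'work' pays (more in s1) and moves toward s1;
# 'rest' pays nothing and moves back to s0.
states = ["s0", "s1"]
actions = ["work", "rest"]
p = {("s0", "work"): {"s1": 1.0}, ("s0", "rest"): {"s0": 1.0},
     ("s1", "work"): {"s1": 1.0}, ("s1", "rest"): {"s0": 1.0}}
r = {("s0", "work"): 1.0, ("s0", "rest"): 0.0,
     ("s1", "work"): 2.0, ("s1", "rest"): 0.0}
v, policy = value_iteration(states, actions, p, r, T=3)
# v == {"s0": 5.0, "s1": 6.0}; the greedy policy always chooses "work".
```

The greedy deterministic policy extracted here corresponds to the $\pi(s, a, t)$ described in the text; resource constraints (Section 3) are what force the move from this simple recursion to the LP/MILP formulations.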
For the scheduling problem where agents have static resource requirements within their finite-horizon MDPs, the agents provide a valuation for each resource bundle for each possible time horizon (from $[1, T_m]$) that they may use. Let $\Omega$ be the set of resources to be allocated among the agents. An agent will get at most one resource bundle for one of the time horizons. Let the variable $\psi \in \Psi_m$ enumerate all possible pairs of resource bundles and time horizons for agent $m$, so there are $2^{|\Omega|} \times T_m$ values of $\psi$ (the space of bundles is exponential in the number of resource types $|\Omega|$). Agent $m$ must provide a value $v^m_\psi$ for each $\psi$, and the coordinator will allocate at most one $\psi$ (resource bundle, time horizon) pair to each agent. This allocation is expressed as an indicator variable $z^m_\psi \in \{0, 1\}$ that shows whether $\psi$ is assigned to agent $m$. For time $\tau$ and resource $\omega$, the function $n_m(\psi, \tau, \omega) \in \{0, 1\}$ indicates whether the bundle in $\psi$ uses resource $\omega$ at time $\tau$ (we make the assumption that agents have binary resource requirements). This allocation problem is NP-complete, even when considering only a single time step, and its difficulty increases significantly with multiple time steps because of the increasing number of values of $\psi$. The problem of finding an optimal allocation that satisfies the global constraint that the amount of each resource $\omega$ allocated to all agents does not exceed the available amount $\hat\varphi(\omega)$ can be expressed as the following integer program (reconstructed here from the constraints described below):

$\max_z \sum_{m \in M} \sum_{\psi \in \Psi_m} v^m_\psi z^m_\psi$
subject to $\sum_{\psi \in \Psi_m} z^m_\psi \le 1\ \ \forall m \in M$; $\ \sum_m \sum_{\psi \in \Psi_m} n_m(\psi, \tau, \omega)\, z^m_\psi \le \hat\varphi(\omega)\ \ \forall \omega \in \Omega,\ \tau \in [0, \hat\tau]$; $\ z^m_\psi \in \{0, 1\}.$ (3)

The first constraint in equation (3) says that no agent can receive more than one bundle, and the second constraint ensures that the total assignment of resource $\omega$ does not, at any time, exceed the resource bound. For the scheduling problem where the agents are able to dynamically reallocate
resources, each agent must specify a value for every combination of bundles and time steps within its time horizon. Let the variable $\psi \in \Psi_m$ in this case enumerate all possible sequences of resource bundles, where at most one bundle may be assigned to agent $m$ at each time step. Therefore, in this case there are $\sum_{t \in [1, T_m]} (2^{|\Omega|})^t \sim 2^{|\Omega| T_m}$ possibilities of resource bundles assigned to different time slots, over the $T_m$ different time horizons. The same set of equations (3) can be used to solve this dynamic scheduling problem, but the integer program is different because of the difference in how $\psi$ is defined. In this case, the number of $\psi$ values is exponential in each agent's planning horizon $T_m$, resulting in a much larger program. This straightforward approach to solving both of these scheduling problems requires the enumeration and solution of either $2^{|\Omega|} T_m$ (static allocation) or $\sum_{t \in [1, T_m]} 2^{|\Omega| t}$ (dynamic reallocation) MDPs for each agent, which very quickly becomes intractable with the growth of the number of resources $|\Omega|$ or the time horizon $T_m$.

3. MODEL AND PROBLEM STATEMENT

4. RESOURCE SCHEDULING

4.1 Augmenting Agents' MDPs

4.2 MILP for Resource Scheduling

5. EXPERIMENTAL RESULTS
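The bid-space counts from Section 2.2 are easy to check numerically. The sketch below uses hypothetical helper names (`static_bid_count`, `dynamic_bid_count`) introduced here for illustration, computing $2^{|\Omega|} T_m$ pairs for the static case and $\sum_{t \in [1, T_m]} (2^{|\Omega|})^t$ bundle sequences for the dynamic case:

```python
# Size of the flat bid space of Section 2.2.
# Static: one bundle (of 2**|Omega| possible) held for one of T_m horizons.
# Dynamic: one bundle per time step, summed over horizons t = 1..T_m.

def static_bid_count(n_resources, T_m):
    return (2 ** n_resources) * T_m

def dynamic_bid_count(n_resources, T_m):
    return sum((2 ** n_resources) ** t for t in range(1, T_m + 1))

# Even a tiny instance blows up: 5 resource types, horizon 4.
print(static_bid_count(5, 4))    # 128
print(dynamic_bid_count(5, 4))   # 32 + 1024 + 32768 + 1048576 = 1082400
```

This gap between the static and dynamic counts is what makes the flat formulation intractable and motivates the compact MILP of the paper.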
.\nJoint Conf.\nFor the scheduling problem where agents have static resource requirements within their finite-horizon MDPs, the agents provide a valuation for each resource bundle for each possible time horizon (from [1, Tm]) that they may use.\nLet \u03a9 be the set of resources to be allocated among the agents.\nAn agent will get at most one resource bundle for one of the time horizons.\nLet the variable \u03c8 \u2208 \u03a8m enumerate all possible pairs of resource bundles and time horizons for agent m, so there are 2 | \u03a9 | \u00d7 Tm values for \u03c8 (the space of bundles is exponential in the number of resource types | \u03a9 |).\nThe agent m must provide a value v\u03c8m for each \u03c8, and the coordinator will allocate at most one \u03c8 (resource, time horizon) pair to each agent.\nThis allocation is expressed as an indicator variable z\u03c8m \u2208 {0, 1} that shows whether \u03c8 is assigned to agent m. For time \u03c4 and resource \u03c9, the function nm (\u03c8, \u03c4, \u03c9) \u2208 {0, 1} indicates whether the bundle in \u03c8 uses resource \u03c9 at time \u03c4 (we make the assumption that agents have binary resource requirements).\nThis allocation problem is NP-complete, even when considering only a single time step, and its difficulty increases significantly with multiple time steps because of the increasing number of values of \u03c8.\nFor the scheduling problem where the agents are able to dynamically reallocate resources, each agent must specify a value for every combination of bundles and time steps within its time horizon.\nLet the variable \u03c8 \u2208 \u03a8m in this case enumerate all possible resource bundles for which at most one bundle may be assigned to agent m at each time step.\nTherefore, in this case there are Pt \u2208 [1, T -] (2 | \u03a9 |) t \u223c 2 | \u03a9 | Tpossibilities of resource bundles assigned to different time slots, for the Tm different time horizons.\nThe same set of equations (3) can be used to solve this 
dynamic scheduling problem, but the integer program is different because of the difference in how \u03c8 is defined.\nIn this case, the number of \u03c8 values is exponential in each agent's planning horizon Tm, resulting in a much larger program.\nThis straightforward approach to solving both of these of either 2 | \u03a9 | Tm (static allocation) or P scheduling problems requires an enumeration and solution t \u2208 [1, T -] 2 | \u03a9 | t (dynamic reallocation) MDPs for each agent, which very quickly becomes intractable with the growth of the number of resources | \u03a9 | or the time horizon Tm.","lvl-2":"Combinatorial Resource Scheduling for Multiagent MDPs\nABSTRACT\nOptimal resource scheduling in multiagent systems is a computationally challenging task, particularly when the values of resources are not additive.\nWe consider the combinatorial problem of scheduling the usage of multiple resources among agents that operate in stochastic environments, modeled as Markov decision processes (MDPs).\nIn recent years, efficient resource-allocation algorithms have been developed for agents with resource values induced by MDPs.\nHowever, this prior work has focused on static resource-allocation problems where resources are distributed once and then utilized in infinite-horizon MDPs.\nWe extend those existing models to the problem of combinatorial resource scheduling, where agents persist only for finite periods between their (predefined) arrival and departure times, requiring resources only for those time periods.\nWe provide a computationally efficient procedure for computing globally optimal resource assignments to agents over time.\nWe illustrate and empirically analyze the method in the context of a stochastic jobscheduling domain.\n1.\nINTRODUCTION\nThe tasks of optimal resource allocation and scheduling are ubiquitous in multiagent systems, but solving such optimization problems can be computationally difficult, due to a number of factors.\nIn particular, when 
the value of a set of resources to an agent is not additive (as is often the case with resources that are substitutes or complements), the utility function might have to be defined on an exponentially large space of resource bundles, which very quickly becomes computationally intractable. Further, even when each agent has a utility function that is nonzero only on a small subset of the possible resource bundles, obtaining an optimal allocation is still computationally prohibitive, as the problem becomes NP-complete [14].

Such computational issues have recently spawned several threads of work in using compact models of agents' preferences. One idea is to use any structure present in utility functions to represent them compactly, via, for example, logical formulas [15, 10, 4, 3]. An alternative is to directly model the mechanisms that define the agents' utility functions and perform resource allocation directly with these models [9]. A way of accomplishing this is to model the processes by which an agent might utilize the resources and define the utility function as the payoff of these processes. In particular, if an agent uses resources to act in a stochastic environment, its utility function can be naturally modeled with a Markov decision process, whose action set is parameterized by the available resources. This representation can then be used to construct very efficient resource-allocation algorithms that lead to an exponential speedup over a straightforward optimization problem with flat representations of combinatorial preferences [6, 7, 8]. However, this existing work on resource allocation with preferences induced by resource-parameterized MDPs makes the assumption that the resources are only allocated once and are then utilized by the agents independently within their infinite-horizon MDPs. This assumption that no reallocation of resources is possible can be limiting in domains where agents arrive and depart dynamically.

In this paper, we extend the work on resource allocation under MDP-induced preferences to discrete-time scheduling problems, where agents are present in the system for finite time intervals and can only use resources within these intervals. In particular, agents arrive and depart at arbitrary (predefined) times and within these intervals use resources to execute tasks in finite-horizon MDPs. We address the problem of globally optimal resource scheduling, where the objective is to find an allocation of resources to the agents across time that maximizes the sum of the expected rewards that they obtain. In this context, our main contribution is a mixed-integer programming formulation of the scheduling problem that chooses globally optimal resource assignments, starting times, and execution horizons for all agents (within their arrival-departure intervals). We analyze and empirically compare two flavors of the scheduling problem: one, where agents have static resource assignments within their finite-horizon MDPs, and another, where resources can be dynamically reallocated between agents at every time step.

In the rest of the paper, we first lay down the necessary groundwork in Section 2 and then introduce our model and formal problem statement in Section 3. In Section 4.2, we describe our main result, the optimization program for globally optimal resource scheduling. Following the discussion of our experimental results on a job-scheduling problem in Section 5, we conclude in Section 6 with a discussion of possible extensions and generalizations of our method.

2. BACKGROUND

Similarly to the model used in previous work on resource allocation with MDP-induced preferences [6, 7], we define the value of a set of resources to an agent as the value of the best MDP policy that is realizable, given those resources. However, since the focus of our work is on scheduling problems, and a large part of the optimization problem is to decide how resources are allocated in time among agents with finite arrival and departure times, we model the agents' planning problems as finite-horizon MDPs, in contrast to previous work that used infinite-horizon discounted MDPs. In the rest of this section, we first introduce some necessary background on finite-horizon MDPs and present a linear-programming formulation that serves as the basis for our solution algorithm developed in Section 4. We also outline the standard methods for combinatorial resource scheduling with flat resource values, which serve as a comparison benchmark for the new model developed here.

2.1 Markov Decision Processes

A stationary, finite-domain, discrete-time MDP (see, for example, [13] for a thorough and detailed development) can be described as (S, A, p, r), where: S is a finite set of system states; A is a finite set of actions that are available to the agent; p is a stationary stochastic transition function, where p(σ | s, a) is the probability of transitioning to state σ upon executing action a in state s; and r is a stationary reward function, where r(s, a) specifies the reward obtained upon executing action a in state s.
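Backward induction over a finite horizon, as used throughout this section, is easy to sketch in code. The tiny two-state MDP below is a hypothetical toy instance (none of its numbers come from the paper); t counts the steps remaining until the horizon.

```python
# Illustrative only: a tiny finite-horizon MDP solved by backward induction.
# States, actions, probabilities, and rewards are invented for this sketch.
S = [0, 1]                      # states
A = ['work', 'rest']            # actions
T = 3                           # finite horizon

# p[(s, a)] -> distribution over next states
p = {(0, 'work'): {0: 0.2, 1: 0.8}, (0, 'rest'): {0: 1.0},
     (1, 'work'): {1: 1.0},         (1, 'rest'): {0: 0.5, 1: 0.5}}
# r[(s, a)] -> immediate reward
r = {(0, 'work'): 0.0, (0, 'rest'): 0.1, (1, 'work'): 1.0, (1, 'rest'): 0.0}

# v[t][s] = optimal expected total reward with t steps to go; v[0] is zero
v = {0: {s: 0.0 for s in S}}
policy = {}
for t in range(1, T + 1):
    v[t] = {}
    for s in S:
        # Q-value of each action: immediate reward plus expected future value
        q = {a: r[(s, a)] + sum(pr * v[t - 1][s2]
                                for s2, pr in p[(s, a)].items()) for a in A}
        best = max(q, key=q.get)
        v[t][s] = q[best]
        policy[(s, t)] = best   # greedy with respect to the optimal values

print(v[T])
```

The same recursion underlies the Bellman equations discussed next; the linear-programming view replaces this explicit sweep with flow constraints on an occupation measure.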
Given such an MDP, a decision problem under a finite horizon T is to choose an optimal action at every time step to maximize the expected value of the total reward accrued during the agent's (finite) lifetime. The agent's optimal policy is then a function of the current state s and the time until the horizon. An optimal policy for such a problem is to act greedily with respect to the optimal value function, defined recursively by the following system of finite-time Bellman equations [2]:

    v(s, t) = max_a [ r(s, a) + Σ_σ p(σ | s, a) v(σ, t − 1) ],    v(s, 0) = 0,

where t is the number of time steps remaining until the horizon. This optimal value function can be easily computed using dynamic programming, leading to the following optimal policy π, where π(s, a, t) is the probability of executing action a in state s with t steps remaining: π puts probability 1 on an action that attains the maximum in the Bellman equation (ties broken arbitrarily).

The above is the most common way of computing the optimal value function (and therefore an optimal policy) for a finite-horizon MDP. However, we can also formulate the problem as the following linear program (similarly to the dual LP for infinite-horizon discounted MDPs [13, 6, 7]):

    max_x  Σ_{t ∈ [1, T]} Σ_{s, a} r(s, a) x(s, a, t)
    s.t.   Σ_a x(σ, a, t + 1) = Σ_{s, a} p(σ | s, a) x(s, a, t),  ∀ σ, t ∈ [1, T − 1];
           Σ_a x(s, a, 1) = α(s),  ∀ s;                                            (1)
           x(s, a, t) ≥ 0,  ∀ s, a, t.

Note that the standard unconstrained finite-horizon MDP, as described above, always has a uniformly-optimal solution (optimal for any initial distribution α(s)). Therefore, an optimal policy can be obtained by using an arbitrary constant α(s) > 0 (in particular, α(s) = 1 will result in x(s, a, t) = π(s, a, t)). However, for MDPs with resource constraints (as defined below in Section 3), uniformly-optimal policies do not in general exist. In such cases, α becomes a part of the problem input, and a resulting policy is only optimal for that particular α. This result is well known for infinite-horizon MDPs with various types of constraints [1, 6], and it also holds for our finite-horizon model, which can be easily established via a line of reasoning completely analogous to the arguments in [6].

2.2 Combinatorial Resource Scheduling

A straightforward approach to resource scheduling for a set of agents M, whose values for the resources are induced by stochastic planning problems (in
our case, finite-horizon MDPs) would be to have each agent enumerate all possible resource assignments over time and, for each one, compute its value by solving the corresponding MDP. Then, each agent would provide valuations for each possible resource bundle over time to a centralized coordinator, who would compute the optimal resource assignments across time based on these valuations. When resources can be allocated at different times to different agents, each agent must submit valuations for every combination of possible time horizons.

Let each agent m ∈ M execute its MDP within the arrival-departure time interval τ ∈ [τᵃm, τᵈm]. Hence, agent m will execute an MDP with time horizon no greater than Tm = τᵈm − τᵃm + 1. Let τ̂ be the global time horizon for the problem, before which all of the agents' MDPs must finish. We assume τᵈm < τ̂, ∀ m ∈ M.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1221

For the scheduling problem where agents have static resource requirements within their finite-horizon MDPs, the agents provide a valuation for each resource bundle for each possible time horizon (from [1, Tm]) that they may use. Let Ω be the set of resources to be allocated among the agents. An agent will get at most one resource bundle for one of the time horizons. Let the variable ψ ∈ Ψm enumerate all possible pairs of resource bundles and time horizons for agent m, so there are 2^|Ω| × Tm values of ψ (the space of bundles is exponential in the number of resource types |Ω|). The agent m must provide a value vψm for each ψ, and the coordinator will allocate at most one ψ (resource bundle, time horizon) pair to each agent. This allocation is expressed as an indicator variable zψm ∈ {0, 1} that shows whether ψ is assigned to agent m.
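The size of this flat valuation space can be computed directly, both for the static (bundle, horizon) pairs and for the dynamic bundle-sequence case discussed next. A small sketch, with arbitrary parameter values:

```python
# Illustrative count of the flat-valuation enumeration described in the text.
# The parameter values below are arbitrary, not taken from the paper.

def static_pairs(n_resources, horizon):
    # static case: one valuation per (resource bundle, time horizon) pair,
    # i.e., 2^|Omega| bundles times Tm horizons
    return (2 ** n_resources) * horizon

def dynamic_pairs(n_resources, horizon):
    # dynamic case: one valuation per sequence of bundles (one bundle per
    # time step), summed over all horizons t = 1..Tm
    return sum((2 ** n_resources) ** t for t in range(1, horizon + 1))

for n in (2, 4, 6):
    print(n, static_pairs(n, 5), dynamic_pairs(n, 5))
```

Even for a handful of resources, the dynamic count dwarfs the static one, which is the blow-up that motivates the compact MDP-based formulation developed in Section 4.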
For time τ and resource ω, the function nm(ψ, τ, ω) ∈ {0, 1} indicates whether the bundle in ψ uses resource ω at time τ (we make the assumption that agents have binary resource requirements). This allocation problem is NP-complete, even when considering only a single time step, and its difficulty increases significantly with multiple time steps because of the increasing number of values of ψ. The problem of finding an optimal allocation that satisfies the global constraint that the amount of each resource ω allocated to all agents does not exceed the available amount φ̂(ω) can be expressed as the following integer program:

    max_z  Σ_{m ∈ M} Σ_{ψ ∈ Ψm} vψm zψm
    s.t.   Σ_{ψ ∈ Ψm} zψm ≤ 1,  ∀ m ∈ M;                                           (3)
           Σ_{m ∈ M} Σ_{ψ ∈ Ψm} nm(ψ, τ, ω) zψm ≤ φ̂(ω),  ∀ τ, ω.

The first constraint in equation (3) says that no agent can receive more than one bundle, and the second constraint ensures that the total assignment of resource ω does not, at any time, exceed the resource bound.

For the scheduling problem where the agents are able to dynamically reallocate resources, each agent must specify a value for every combination of bundles and time steps within its time horizon. Let the variable ψ ∈ Ψm in this case enumerate all possible sequences of resource bundles, where at most one bundle may be assigned to agent m at each time step. Therefore, in this case there are Σ_{t ∈ [1, Tm]} (2^|Ω|)^t ∼ 2^(|Ω| Tm) possible assignments of resource bundles to time slots, over the Tm different time horizons. The same set of equations (3) can be used to solve this dynamic scheduling problem, but the integer program is different because of the difference in how ψ is defined. In this case, the number of ψ values is exponential in each agent's planning horizon Tm, resulting in a much larger program.

This straightforward approach to solving both of these scheduling problems requires the enumeration and solution of either 2^|Ω| Tm (static allocation) or Σ_{t ∈ [1, Tm]} 2^(|Ω| t) (dynamic reallocation) MDPs for each agent, which very quickly becomes intractable as the number of resources |Ω| or the time horizon Tm grows.

3. MODEL AND PROBLEM STATEMENT

We now formally introduce our model of the resource-scheduling problem. The problem input consists of the following components:

• M, Ω, φ̂, τᵃm, τᵈm, τ̂ are as defined above in Section 2.2.

• {Θm} = {(S, A, pm, rm, αm)} are the MDPs of all agents m ∈ M. Without loss of generality, we assume that the state and action spaces of all agents are the same, but each agent has its own transition function pm, reward function rm, and initial conditions αm.

• φm : A × Ω → {0, 1} is the mapping of actions to resources for agent m; φm(a, ω) indicates whether action a of agent m needs resource ω. An agent m that receives a set of resources that does not include resource ω cannot execute in its MDP policy any action a for which φm(a, ω) ≠ 0. We assume all resource requirements are binary; as discussed below in Section 6, this assumption is not limiting.

Given the above input, the optimization problem we consider is to find the globally optimal (i.e., maximizing the sum of expected rewards) mapping of resources to agents for all time steps, Δ : [0, τ̂] × M × Ω → {0, 1}. A solution is feasible if the corresponding assignment of resources to the agents does not violate the global resource constraint:

    Σ_{m ∈ M} Δ(τ, m, ω) ≤ φ̂(ω),  ∀ τ ∈ [0, τ̂], ω ∈ Ω.

We consider two flavors of the resource-scheduling problem. The first formulation restricts resource assignments to the space where the allocation of resources to each agent is static during the agent's lifetime. The second formulation allows reassignment of resources between agents at every time step within their lifetimes.

Figure 1 depicts a resource-scheduling problem with three agents M = {m1, m2, m3}, three resources Ω = {ω1, ω2, ω3}, and a global
problem horizon of τ̂ = 11. The agents' arrival and departure times are shown as gray boxes and are {1, 6}, {3, 7}, and {2, 11}, respectively. A solution to this problem is shown via horizontal bars within each agent's box, where the bars correspond to the allocation of the three resource types. Figure 1a shows a solution to the static scheduling problem. According to the shown solution, agent m1 begins the execution of its MDP at time τ = 1 and has a lock on all three resources until it finishes execution at time τ = 3. Note that agent m1 relinquishes its hold on the resources before its announced departure time of τᵈm1 = 6, ostensibly because other agents can utilize the resources more effectively. Thus, at time τ = 4, resources ω1 and ω3 are allocated to agent m2, who then uses them to execute its MDP (using only actions supported by resources ω1 and ω3) until time τ = 7. Agent m3 holds resource ω3 during the interval τ ∈ [4, 10].

Figure 1b shows a possible solution to the dynamic version of the same problem. There, resources can be reallocated between agents at every time step. For example, agent m1 gives up its use of resource ω2 at time τ = 2, although it continues the execution of its MDP until time τ = 6. Notice that an agent is not allowed to stop and restart its MDP, so agent m1 is only able to continue executing in the interval τ ∈ [3, 4] if it has actions that do not require any resources (φm(a, ω) = 0).

Clearly, the model and problem statement described above make a number of assumptions about the problem and the desired solution properties. We discuss some of those assumptions and their implications in Section 6.
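The global feasibility condition on an allocation Δ can be checked mechanically. The sketch below encodes a schedule in the spirit of Figure 1a; all names, intervals, and capacities are hypothetical, chosen only to illustrate the constraint.

```python
# Hypothetical data loosely modeled on Figure 1a (agent names, resource
# names, intervals, and capacities are invented, not the paper's data).
agents = ['m1', 'm2', 'm3']
resources = ['w1', 'w2', 'w3']
tau_hat = 11                              # global problem horizon
cap = {w: 1 for w in resources}           # phi_hat(w): one unit of each resource

# delta[(tau, m, w)] = 1 iff agent m holds resource w at time tau
delta = {}
for tau in range(1, 4):                   # m1 holds all three resources, tau in [1, 3]
    for w in resources:
        delta[(tau, 'm1', w)] = 1
for tau in range(4, 8):                   # m2 holds w1 and w3, tau in [4, 7]
    delta[(tau, 'm2', 'w1')] = 1
    delta[(tau, 'm2', 'w3')] = 1
for tau in range(4, 11):                  # m3 holds w2, tau in [4, 10]
    delta[(tau, 'm3', 'w2')] = 1

def feasible(delta):
    """Global constraint: sum over m of delta(tau, m, w) <= cap(w) for all tau, w."""
    return all(
        sum(delta.get((tau, m, w), 0) for m in agents) <= cap[w]
        for tau in range(tau_hat + 1) for w in resources)

def static_for(delta, m, start, end):
    """Static flavor: agent m's resource set never changes within [start, end]."""
    held = lambda tau: {w for w in resources if delta.get((tau, m, w), 0)}
    return all(held(tau) == held(start) for tau in range(start, end + 1))

print(feasible(delta), static_for(delta, 'm2', 4, 7))
```

The dynamic flavor of the problem simply drops the `static_for` requirement while keeping `feasible` intact.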
Figure 1: Illustration of a solution to a resource-scheduling problem with three agents and three resources: a) static resource assignments (resource assignments are constant within agents' lifetimes); b) dynamic assignments (resource assignments are allowed to change at every time step).

4. RESOURCE SCHEDULING

Our resource-scheduling algorithm proceeds in two stages. First, we perform a preprocessing step that augments the agents' MDPs; this process is described in Section 4.1. Second, using these augmented MDPs, we construct a global optimization problem, which is described in Section 4.2.

4.1 Augmenting Agents' MDPs

In the model described in the previous section, we assume that if an agent does not possess the necessary resources to perform actions in its MDP, its execution is halted and the agent leaves the system. In other words, the MDPs cannot be "paused" and "resumed". For example, in the problem shown in Figure 1a, agent m1 releases all resources after time τ = 3, at which point the execution of its MDP is halted. Similarly, agents m2 and m3 only execute their MDPs in the intervals τ ∈ [4, 6] and τ ∈ [4, 10], respectively. Therefore, an important part of the global decision-making problem is to decide the window of time during which each of the agents is "active" (i.e., executing its MDP). To accomplish this, we augment each agent's MDP with two new states (a "start" state sb and a "finish" state sf) and a new "start/stop" action a∗, as illustrated in Figure 2. The idea is that an agent stays in the start state sb until it is ready to execute its MDP, at which point it performs the start/stop action a∗ and transitions into the state space of the original MDP with the transition probability that corresponds to the original initial distribution α(s). For example, in Figure 1a, for agent m2 this would happen at time τ = 4. Once
the agent gets to the end of its activity window (time τ = 6 for agent m2 in Figure 1a), it performs the start/stop action, which takes it into the sink finish state sf at time τ = 7. More precisely, given an MDP (S, A, pm, rm, αm), we define an augmented MDP (S′, A′, p′m, r′m, α′m) as follows:

    S′ = S ∪ {sb, sf};  A′ = A ∪ {a∗};
    p′m(σ | s, a) = pm(σ | s, a),  ∀ s, σ ∈ S, a ∈ A;
    p′m(σ | sb, a∗) = αm(σ),  ∀ σ ∈ S;
    p′m(sb | sb, a) = 1,  ∀ a ∈ A;
    p′m(sf | s, a∗) = 1,  ∀ s ∈ S;
    p′m(sf | sf, a) = 1,  ∀ a ∈ A′;
    r′m(s, a) = rm(s, a),  ∀ s ∈ S, a ∈ A, and r′m = 0 otherwise;
    α′m(sb) = 1,

where all non-specified transition probabilities are assumed to be zero. Further, in order to account for the new starting state, we begin the MDP one time step earlier, setting τᵃm ← τᵃm − 1. This will not affect the resource allocation, because the resource constraints are only enforced for the original MDP states, as will be discussed in the next section. For example, the augmented MDP shown in Figure 2b (which starts in state sb at time τ = 2) would be constructed from an MDP with original arrival time τ = 3. Figure 2b also shows a sample trajectory through the state space: the agent starts in state sb, transitions into the state space S of the original MDP, and finally exits into the sink state sf. Note that if we wanted to model a problem where agents could pause their MDPs at arbitrary time steps (which might be useful for domains where dynamic reallocation is possible), we could easily accomplish this by including an extra action that transitions from each state to itself with zero reward.

4.2 MILP for Resource Scheduling

Given a set of augmented MDPs, as defined above, the goal of this section is to formulate a global optimization program that solves the resource-scheduling problem. In this section and below, all MDPs are assumed to be the augmented MDPs defined in Section 4.1. Our approach is similar to the idea used in [6]: we begin with the linear-program formulation of the agents' MDPs (1) and augment it with constraints that ensure that the corresponding resource allocation across agents and time is valid. The resulting optimization problem then simultaneously solves the agents' MDPs and
resource-scheduling problems. In the rest of this section, we incrementally develop a mixed-integer linear program (MILP) that achieves this.

In the absence of resource constraints, the agents' finite-horizon MDPs are completely independent, and the globally optimal solution can be trivially obtained via the following LP, which is simply an aggregation of the single-agent finite-horizon LPs (1):

    max_x  Σ_{m ∈ M} Σ_{t ∈ [1, Tm]} Σ_{s, a} rm(s, a) xm(s, a, t)
    s.t.   the per-agent flow-conservation constraints of (1),

where xm(s, a, t) is the occupation measure of agent m, and Tm = τᵈm − τᵃm + 1 is the time horizon for the agent's MDP.

Figure 2: Illustration of augmenting an MDP to allow for variable starting and stopping times: a) (left) the original two-state MDP with a single action; (right) the augmented MDP with the new states sb and sf and the new action a∗ (note that the original transitions are not changed in the augmentation process); b) the augmented MDP displayed as a trajectory through time (grey lines indicate all transitions, while black lines indicate a given trajectory).

Table 1: MILP for globally optimal resource scheduling.

Using this LP as a basis, we augment it with constraints that ensure that the resource usage implied by the agents' occupation measures {xm} does not violate the global resource requirements φ̂ at any time step τ ∈ [0, τ̂]. To formulate these resource constraints, we use the following binary variables:

• Δm(τ, ω) ∈ {0, 1}, ∀ m ∈ M, τ ∈ [0, τ̂], ω ∈ Ω, which serve as indicator variables that define whether agent m possesses resource ω at time τ. These are analogous to the static indicator variables used in the one-shot static resource-allocation problem in [6].

• θm(τ) ∈ {0, 1}, ∀ m ∈ M, τ ∈ [0, τ̂], which are indicator variables that specify whether agent m is "active" (i.e., executing its MDP) at time τ.

The meaning of the resource-usage variables Δ is illustrated in Figure 1: Δm(τ, ω) = 1 only if resource ω is allocated to agent m at time τ. The meaning of the "activity indicators" θ is illustrated in Figure 2b: when agent m is in either the start state sb or the finish state sf, the corresponding θm(τ) = 0, but once the agent becomes active and enters one of the other states, we set θm(τ) = 1. This meaning of θ can be enforced with a linear constraint that synchronizes the values of the agents' occupation measures xm and the activity indicators θ, as shown in (6) in Table 1.

Another constraint we have to add (because the activity indicators θ are defined on the global timeline τ) is to enforce the fact that the agent is inactive outside of its arrival-departure window. This is accomplished by constraint (7) in Table 1. Furthermore, agents should not be using resources while they are inactive. This constraint can also be enforced via a linear inequality on θ and Δ, as shown in (8). Constraint (6) sets the value of θ to match the policy defined by the occupation measure xm. In a similar fashion, we have to make sure that the resource-usage variables Δ are also synchronized with the occupation measure xm. This is done via constraint (9) in Table 1, which is nearly identical to the analogous constraint from [6]. After implementing the above constraint, which enforces the meaning of Δ, we add a constraint that ensures that the agents' resource usage never exceeds the amounts of available resources. This condition is also trivially expressed as a linear inequality (10) in Table 1. Finally, for the problem formulation where resource assignments are static during the lifetime of an agent, we add a constraint that ensures that the resource-usage variables Δ do not change their values while the agent is active (θ = 1). This is accomplished via the linear constraint (11), where Z > 2 is a constant that is used to turn off the constraints when θm(τ) = 0 or θm(τ + 1) = 0. This constraint is not used for the dynamic problem formulation, where resources can be reallocated between agents at every time step.

To summarize, Table 1 together with the conservation-of-flow constraints from (12) defines the MILP that simultaneously computes an optimal resource assignment for all agents across time as well as optimal finite-horizon MDP policies that are valid under that resource assignment.

As a rough measure of the complexity of this MILP, let us consider the number of optimization variables and constraints. Let T_M = Σm Tm = Σm (τᵈm − τᵃm + 1) be the sum of the lengths of the arrival-departure windows across all agents. Then, of the optimization variables, the T_M |S′||A′| occupation-measure variables xm are continuous, and the τ̂|M||Ω| + τ̂|M| variables Δ and θ are binary. However, notice that all but T_M of the θ are set to zero by constraint (7), which also immediately forces all but T_M |Ω| of the Δ to be zero via the constraints (8). A similar accounting applies to the number of constraints in the MILP (not including the degenerate constraints in (7)).

Despite the fact that the complexity of solving the MILP is, in the worst case, exponential in the number of binary variables, the complexity of this MILP is significantly (exponentially) lower than that of the MILP with flat utility functions, described in Section 2.2. This result echoes the efficiency gains reported in [6] for single-shot resource-allocation problems, but is much more pronounced here, because of the explosion of the flat utility representation due to the temporal aspect of the problem (recall the prohibitive complexity of the combinatorial optimization in Section 2.2). We empirically analyze the performance of this method in Section 5.

5. EXPERIMENTAL RESULTS

Although the complexity of solving MILPs is in the worst case exponential in the number of integer variables, there are many efficient methods for solving MILPs that allow our
algorithm to scale well for parameters common to resource allocation and scheduling problems.\nIn particular, this section introduces a problem domain--the repair-shop problem--used to empirically evaluate our algorithm's scalability in terms of the number of agents |M|, the number of shared resources |\u03a9|, and the varied lengths of global time \u03c4\u0302 during which agents may enter and exit the system.\nThe repair-shop problem is a simple parameterized MDP adopting the metaphor of a vehicular repair shop.\nAgents in the repair shop are mechanics with a number of independent tasks that yield reward only when completed.\nIn our MDP model of this system, actions taken to advance through the state space are only allowed if the agent holds certain resources that are publicly available to the shop.\nThese resources are in finite supply, and optimal policies for the shop will determine when each agent may hold the limited resources to take actions and earn individual rewards.\nEach task to be completed is associated with a single action, although the agent is required to repeat the action numerous times before completing the task and earning a reward.\nThis model was parameterized in terms of the number of agents in the system, the number of different types of resources that could be linked to necessary actions, a global time during which agents are allowed to arrive and depart, and a maximum length for the number of time steps an agent may remain in the system.\nAll datapoints in our experiments were obtained with 20 evaluations using CPLEX to solve the MILPs on a Pentium 4 computer with 2GB of RAM.\nTrials were conducted on both the static and the dynamic version of the resource-scheduling problem, as defined earlier.\nFigure 3 shows the runtime and policy value for independent modifications to the parameter set.\nThe top row shows how the solution time for the MILP scales as we increase the number of agents |M|, the global time horizon \u03c4\u0302, and the number of resources
|\u03a9|.\nIncreasing the number of agents leads to exponential complexity scaling, which is to be expected for an NP-complete problem.\nHowever, increasing the global time limit \u03c4\u0302 or the total number of resource types |\u03a9|--while holding the number of agents constant--does not lead to decreased performance.\nThis occurs because the problems get easier as they become under-constrained, which is also a common phenomenon for NP-complete problems.\nWe also observe that the solution to the dynamic version of the problem can often be computed much faster than the static version.\nThe bottom row of Figure 3 shows the joint policy value of the policies that correspond to the computed optimal resource-allocation schedules.\nWe can observe that the dynamic version yields higher reward (as expected, since the reward for the dynamic version is always no less than the reward of the static version).\nWe should point out that these graphs should not be viewed as a measure of performance of two different algorithms (both algorithms produce optimal solutions but to different problems), but rather as observations about how the quality of optimal solutions changes as more flexibility is allowed in the reallocation of resources.\nFigure 4 shows runtime and policy value for trials in which common input variables are scaled together.\nThis allows
us to explore domains where the total number of agents scales proportionally to the total number of resource types or the global time horizon, while keeping constant the average agent density (per unit of global time) or the average number of resources per agent (which commonly occurs in real-life applications).\nFigure 3: Evaluation of our MILP for variable numbers of agents (column 1), lengths of global-time window (column 2), and numbers of resource types (column 3).\nTop row shows CPU time, and bottom row shows the joint reward of agents' MDP policies.\nError bars show the 1st and 3rd quartiles (25% and 75%).\nOverall, we believe that these experimental results indicate that our MILP formulation can be used to effectively solve resource-scheduling problems of nontrivial size.","keyphrases":["combinatori resourc schedul","resourc","schedul","optim resourc schedul","multiag system","markov decis process","resourc alloc","optim problem","util function","optim alloc","discret-time schedul problem","resourc-schedul algorithm","resourc-schedul","task and resourc alloc in agent system","multiag plan"],"prmu":["P","P","P","P","P","P","M","R","M","M","M","M","U","M","M"]} {"id":"I-77","title":"The LOGIC Negotiation Model","abstract":"Successful negotiators prepare by determining their position along five dimensions: Legitimacy, Options, Goals, Independence, and Commitment (LOGIC). We introduce a negotiation model based on these dimensions and on two primitive concepts: intimacy (degree of closeness) and balance (degree of fairness). The intimacy is a pair of matrices that evaluate both an agent's contribution to the relationship and its opponent's contribution each from an information view and from a utilitarian view across the five LOGIC dimensions. The balance is the difference between these matrices.
A relationship strategy maintains a target intimacy for each relationship that an agent would like the relationship to move towards in future. The negotiation strategy maintains a set of Options that are in-line with the current intimacy level, and then tactics wrap the Options in argumentation with the aim of attaining a successful deal and manipulating the successive negotiation balances towards the target intimacy.","lvl-1":"The LOGIC Negotiation Model Carles Sierra Institut d'Investigacio en Intel.ligencia Artificial Spanish Scientific Research Council, UAB 08193 Bellaterra, Catalonia, Spain sierra@iiia.csic.es John Debenham Faculty of Information Technology University of Technology, Sydney NSW, Australia debenham@it.uts.edu.au ABSTRACT Successful negotiators prepare by determining their position along five dimensions: Legitimacy, Options, Goals, Independence, and Commitment (LOGIC).\nWe introduce a negotiation model based on these dimensions and on two primitive concepts: intimacy (degree of closeness) and balance (degree of fairness).\nThe intimacy is a pair of matrices that evaluate both an agent's contribution to the relationship and its opponent's contribution each from an information view and from a utilitarian view across the five LOGIC dimensions.\nThe balance is the difference between these matrices.\nA relationship strategy maintains a target intimacy for each relationship that an agent would like the relationship to move towards in future.\nThe negotiation strategy maintains a set of Options that are in-line with the current intimacy level, and then tactics wrap the Options in argumentation with the aim of attaining a successful deal and manipulating the successive negotiation balances towards the target intimacy.\nCategories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent systems General Terms Theory 1.\nINTRODUCTION In this paper we propose a new negotiation model to deal with long term
relationships that are founded on successive negotiation encounters.\nThe model is grounded on results from business and psychological studies [1, 16, 9], and acknowledges that negotiation is an information exchange process as well as a utility exchange process [15, 14].\nWe believe that if agents are to succeed in real application domains they have to reconcile both views: informational and game-theoretical.\nOur aim is to model trading scenarios where agents represent their human principals, and thus we want their behaviour to be comprehensible by humans and to respect usual human negotiation procedures, whilst being consistent with, and somehow extending, game-theoretical and information-theoretical results.\nIn this sense, agents are not just utility maximisers, but aim at building long-lasting relationships with progressing levels of intimacy that determine what balance in information and resource sharing is acceptable to them.\nThese two concepts, intimacy and balance, are key in the model, and enable us to understand competitive and co-operative game theory as two particular theories of agent relationships (i.e.
at different intimacy levels).\nThese two theories are too specific and distinct to describe how a (business) relationship might grow because interactions have some aspects of these two extremes on a continuum in which, for example, agents reveal increasing amounts of private information as their intimacy grows.\nWe do not follow the 'Co-Opetition' approach [4], where co-operation and competition depend on the issue under negotiation; instead we believe that the willingness to co-operate\/compete affects all aspects of the negotiation process.\nNegotiation strategies can naturally be seen as procedures that select tactics used to attain a successful deal and to reach a target intimacy level.\nIt is common in human settings to use tactics that compensate for imbalances in one dimension of a negotiation with imbalances in another dimension.\nIn this sense, humans aim at a general sense of fairness in an interaction.\nIn Section 2 we outline the aspects of human negotiation modelling that we cover in this work.\nThen, in Section 3 we introduce the negotiation language.\nSection 4 explains in outline the architecture and the concepts of intimacy and balance, and how they influence the negotiation.\nSection 5 contains a description of the different metrics used in the agent model including intimacy.\nFinally, Section 6 outlines how strategies and tactics use the LOGIC framework, intimacy and balance.\n2.\nHUMAN NEGOTIATION Before a negotiation starts human negotiators prepare the dialogic exchanges that can be made along the five LOGIC dimensions [7]: \u2022 Legitimacy.\nWhat information is relevant to the negotiation process?\nWhat are the persuasive arguments about the fairness of the options?\n1030 978-81-904262-7-5 (RPS) c 2007 IFAAMAS \u2022 Options.\nWhat are the possible agreements we can accept?\n\u2022 Goals.\nWhat are the underlying things we need or care about?\nWhat are our goals?\n\u2022 Independence.\nWhat will we do if the negotiation fails?\nWhat
alternatives have we got?\n\u2022 Commitment.\nWhat outstanding commitments do we have?\nNegotiation dialogues, in this context, exchange dialogical moves, i.e. messages, with the intention of getting information about the opponent or giving away information about us along these five dimensions: request for information, propose options, inform about interests, issue promises, appeal to standards ... A key part of any negotiation process is to build a model of our opponent(s) along these dimensions.\nAll utterances agents make during a negotiation give away information about their current LOGIC model, that is, about their legitimacy, options, goals, independence, and commitments.\nAlso, several utterances can have a utilitarian interpretation in the sense that an agent can associate a preferential gain to them.\nFor instance, an offer may inform our negotiation opponent about our willingness to sign a contract in the terms expressed in the offer, and at the same time the opponent can compute its associated expected utilitarian gain.\nThese two views, information-based and utility-based, are central in the model proposed in this paper.\n2.1 Intimacy and Balance in relationships There is evidence from psychological studies that humans seek a balance in their negotiation relationships.\nThe classical view [1] is that people perceive resource allocations as being distributively fair (i.e. well balanced) if they are proportional to inputs or contributions (i.e. equitable).\nHowever, more recent studies [16, 17] show that humans follow a richer set of norms of distributive justice depending on their intimacy level: equity, equality, and need.\nEquity being the allocation proportional to the effort (e.g. the profit of a company goes to the stock holders proportional to their investment), equality being the allocation in equal amounts (e.g.
two friends eat the same amount of a cake cooked by one of them), and need being the allocation proportional to the need for the resource (e.g. in case of food scarcity, a mother gives all food to her baby).\nFor instance, if we are in a purely economic setting (low intimacy) we might request equity for the Options dimension but could accept equality in the Goals dimension.\nThe perception of a relation being in balance (i.e. fair) depends strongly on the nature of the social relationships between individuals (i.e. the intimacy level).\nIn purely economical relationships (e.g., business), equity is perceived as more fair; in relations where joint action or fostering of social relationships are the goal (e.g. friends), equality is perceived as more fair; and in situations where personal development or personal welfare are the goal (e.g. family), allocations are usually based on need.\nWe believe that the perception of balance in dialogues (in negotiation or otherwise) is grounded on social relationships, and that every dimension of an interaction between humans can be correlated to the social closeness, or intimacy, between the parties involved.\nAccording to the previous studies, the more intimacy across the five LOGIC dimensions the more the need norm is used, and the less intimacy the more the equity norm is used.\nThis might be part of our social evolution.\nThere is ample evidence that when human societies evolved from a hunter-gatherer structure1 to a shelter-based one2 the probability of survival increased when food was scarce.\nIn this context, we can clearly see that, for instance, families exchange not only goods but also information and knowledge based on need, and that few families would consider their relationships as being unbalanced, and thus unfair, when there is a strong asymmetry in the exchanges (a mother explaining everything to her children, or buying toys, does not expect reciprocity).\nIn the case of partners there is some evidence [3] that the
allocations of goods and burdens (i.e. positive and negative utilities) are perceived as fair, or in balance, based on equity for burdens and equality for goods.\nSee Table 1 for some examples of desired balances along the LOGIC dimensions.\nThe perceived balance in a negotiation dialogue allows negotiators to infer information about their opponent, about its LOGIC stance, and to compare their relationships with all negotiators.\nFor instance, if we perceive that every time we request information it is provided, and that no significant questions are returned, or no complaints about not receiving information are given, then that probably means that our opponent perceives our social relationship to be very close.\nAlternatively, we can detect what issues are causing a burden to our opponent by observing an imbalance in the information or utilitarian senses on that issue.\n3.\nCOMMUNICATION MODEL 3.1 Ontology In order to define a language to structure agent dialogues we need an ontology that includes a (minimum) repertoire of elements: a set of concepts (e.g. quantity, quality, material) organised in an is-a hierarchy (e.g. platypus is a mammal, Australian-dollar is a currency), and a set of relations over these concepts (e.g.
price(beer,AUD)).3 We model ontologies following an algebraic approach [8] as: An ontology is a tuple O = (C, R, \u2264, \u03c3) where: 1.\nC is a finite set of concept symbols (including basic data types); 2.\nR is a finite set of relation symbols; 3.\n\u2264 is a reflexive, transitive and anti-symmetric relation on C (a partial order); 4.\n\u03c3 : R \u2192 C+ is the function assigning to each relation symbol its arity; where \u2264 is the traditional is-a hierarchy.\n1 In its purest form, individuals in these societies collect food and consume it when and where it is found.\nThis is a pure equity sharing of the resources, the gain is proportional to the effort.\n2 In these societies there are family units, around a shelter, that represent the basic food sharing structure.\nUsually, food is accumulated at the shelter for future use.\nThen the food intake depends more on the need of the members.\n3 Usually, a set of axioms defined over the concepts and relations is also required.\nWe will omit this here.\nTable 1: Some desired balances (sense of fairness) examples depending on the relationship.\nElement | A new trading partner | my butcher | my boss | my partner | my children\nLegitimacy | equity | equity | equity | equality | need\nOptions | equity | equity | equity | mixed(a) | need\nGoals | equity | need | equity | need | need\nIndependence | equity | equity | equality | need | need\nCommitment | equity | equity | equity | mixed | need\n(a) equity on burden, equality on good\nTo simplify computations in the computing of probability distributions we assume that there is a number of disjoint is-a trees covering different ontological spaces (e.g. a tree for types of fabric, a tree for shapes of clothing, and so on).\nR contains relations between the concepts in the hierarchy; this is needed to define 'objects' (e.g.
deals) that are defined as a tuple of issues.\nThe semantic distance between concepts within an ontology depends on how far away they are in the structure defined by the \u2264 relation.\nSemantic distance plays a fundamental role in strategies for information-based agency.\nHow signed contracts, Commit(\u00b7), about objects in a particular semantic region, and their execution, Done(\u00b7), affect our decision making process about signing future contracts in nearby semantic regions is crucial to modelling the common sense that human beings apply in managing trading relationships.\nA measure [10] bases the semantic similarity between two concepts on the path length induced by \u2264 (more distance in the \u2264 graph means less semantic similarity), and the depth of the subsumer concept (common ancestor) in the shortest path between the two concepts (the deeper in the hierarchy, the closer the meaning of the concepts).\nSemantic similarity is then defined as: Sim(c, c') = e^(\u2212\u03ba1 l) \u00b7 (e^(\u03ba2 h) \u2212 e^(\u2212\u03ba2 h)) \/ (e^(\u03ba2 h) + e^(\u2212\u03ba2 h)) where l is the length (i.e.
number of hops) of the shortest path between the concepts, h is the depth of the deepest concept subsuming both concepts, and \u03ba1 and \u03ba2 are parameters scaling the contributions of the shortest path length and the depth respectively.\n3.2 Language The shape of the language that \u03b1 uses to represent the information received and the content of its dialogues depends on two fundamental notions.\nFirst, when agents interact within an overarching institution they explicitly or implicitly accept the norms that will constrain their behaviour, and accept the established sanctions and penalties whenever norms are violated.\nSecond, the dialogues in which \u03b1 engages are built around two fundamental actions: (i) passing information, and (ii) exchanging proposals and contracts.\nA contract \u03b4 = (a, b) between agents \u03b1 and \u03b2 is a pair where a and b represent the actions that agents \u03b1 and \u03b2 are responsible for respectively.\nContracts signed by agents and information passed by agents are similar to norms in the sense that they oblige agents to behave in a particular way, so as to satisfy the conditions of the contract, or to make the world consistent with the information passed.\nContracts and information can thus be thought of as normative statements that restrict an agent's behaviour.\nNorms, contracts, and information have an obvious temporal dimension.\nThus, an agent has to abide by a norm while it is inside an institution, a contract has a validity period, and a piece of information is true only during an interval in time.\nThe set of norms affecting the behaviour of an agent defines the context that the agent has to take into account.\n\u03b1's communication language has two fundamental primitives: Commit(\u03b1, \u03b2, \u03d5) to represent, in \u03d5, the world that \u03b1 aims at bringing about and that \u03b2 has the right to verify, complain about or claim compensation for any deviations from, and Done(\u03bc) to represent
the event that a certain action \u03bc4 has taken place.\nIn this way, norms, contracts, and information chunks will be represented as instances of Commit(\u00b7) where \u03b1 and \u03b2 can be individual agents or institutions.\nC is: \u03bc ::= illoc(\u03b1, \u03b2, \u03d5, t) | \u03bc; \u03bc | Let context In \u03bc End\n\u03d5 ::= term | Done(\u03bc) | Commit(\u03b1, \u03b2, \u03d5) | \u03d5 \u2227 \u03d5 | \u03d5 \u2228 \u03d5 | \u00ac\u03d5 | \u2200v.\u03d5v | \u2203v.\u03d5v\ncontext ::= \u03d5 | id = \u03d5 | prolog clause | context; context\nwhere \u03d5v is a formula with free variable v, illoc is any appropriate set of illocutionary particles, ';' means sequencing, and context represents either previous agreements, previous illocutions, the ontological working context, that is a projection of the ontological trees that represent the focus of the conversation, or code that aligns the ontological differences between the speakers needed to interpret an action a. Representing an ontology as a set of predicates in Prolog is simple.\nThe set term contains instances of the ontology concepts and relations.5 For example, we can represent the following offer: If you spend a total of more than \u20ac100 in my shop during October then I will give you a 10% discount on all goods in November, as: Offer(\u03b1, \u03b2, spent(\u03b2, \u03b1, October, X) \u2227 X \u2265 \u20ac100 \u2192 \u2200y. Done(Inform(\u03be, \u03b1, pay(\u03b2, \u03b1, y), November)) \u2192 Commit(\u03b1, \u03b2, discount(y, 10%)))\n\u03be is an institution agent that reports the payment.\n4 Without loss of generality we will assume that all actions are dialogical.\n5 We assume the convention that C(c) means that c is an instance of concept C and r(c1, ... , cn) implicitly determines that ci is an instance of the concept in the i-th position of the relation r.
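The similarity measure Sim from Section 3.1 is straightforward to compute once the shortest-path length l and the subsumer depth h have been extracted from the \u2264 hierarchy; its second factor is exactly tanh(\u03ba2 h). A minimal sketch follows, with function name, default parameter values, and example inputs of our own choosing (purely illustrative, not from the paper):

```python
import math

def sim(l: int, h: int, kappa1: float = 0.5, kappa2: float = 0.5) -> float:
    """Semantic similarity of two concepts.

    l: length (number of hops) of the shortest path between the concepts.
    h: depth of the deepest concept subsuming both concepts.
    Computes e^(-kappa1*l) * (e^(kappa2*h) - e^(-kappa2*h)) / (e^(kappa2*h) + e^(-kappa2*h)),
    where the second factor simplifies to tanh(kappa2*h).
    """
    return math.exp(-kappa1 * l) * math.tanh(kappa2 * h)
```

Similarity decays exponentially with path length and saturates with subsumer depth, so sibling concepts deep in the ontology (small l, large h) score near 1, while concepts whose only common ancestor is near the root score near 0.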
Figure 1: The LOGIC agent architecture\n4.\nAGENT ARCHITECTURE A multiagent system {\u03b1, \u03b21, ... , \u03b2n, \u03be, \u03b81, ... , \u03b8t} contains an agent \u03b1 that interacts with other argumentation agents, \u03b2i, information providing agents, \u03b8j, and an institutional agent, \u03be, that represents the institution where we assume the interactions happen [2].\nThe institutional agent reports promptly and honestly on what actually occurs after an agent signs a contract, or makes some other form of commitment.\nIn Section 4.1 this enables us to measure the difference between an utterance and a subsequent observation.\nThe communication language C introduced in Section 3.2 enables us both to structure the dialogues and to structure the processing of the information gathered by agents.\nAgents have a probabilistic first-order internal language L used to represent a world model, Mt.\nA generic information-based architecture is described in detail in [15].\nThe LOGIC agent architecture is shown in Figure 1.\nAgent \u03b1 acts in response to a need that is expressed in terms of the ontology.\nA need may be exogenous such as a need to trade profitably and may be triggered by another agent offering to trade, or endogenous such as \u03b1 deciding that it owns more wine than it requires.\nNeeds trigger \u03b1's goal\/plan proactive reasoning, while other messages are dealt with by \u03b1's reactive reasoning.6 Each plan prepares for the negotiation by assembling the contents of a 'LOGIC briefcase' that the agent 'carries' into the negotiation7.\nThe relationship strategy determines which agent to negotiate with for a given need; it uses risk management analysis to preserve a strategic set of trading relationships for each mission-critical need - this is not detailed here.\nFor each trading relationship this strategy generates a relationship target that is
expressed in the LOGIC framework as a desired level of intimacy to be achieved in the long term.\nEach negotiation consists of a dialogue, \u03a8t, between two agents, with agent \u03b1 contributing utterance \u03bc and the partner \u03b2 contributing \u03bc', using the language described in Section 3.2.\n6 Each of \u03b1's plans and reactions contain constructors for an initial world model Mt.\nMt is then maintained from percepts received using update functions that transform percepts into constraints on Mt - for details, see [14, 15].\n7 Empirical evidence shows that in human negotiation, better outcomes are achieved by skewing the opening Options in favour of the proposer.\nWe are unaware of any empirical investigation of this hypothesis for autonomous agents in real trading scenarios.\nEach dialogue, \u03a8t, is evaluated using the LOGIC framework in terms of the value of \u03a8t to both \u03b1 and \u03b2 - see Section 5.2.\nThe negotiation strategy then determines the current set of Options {\u03b4i}, and then the tactics, guided by the negotiation target, decide which, if any, of these Options to put forward and wrap them in argumentation dialogue - see Section 6.\nWe now describe two of the distributions in Mt that support offer exchange.\nPt(acc(\u03b1, \u03b2, \u03c7, \u03b4)) estimates the probability that \u03b1 should accept proposal \u03b4 in satisfaction of her need \u03c7, where \u03b4 = (a, b) is a pair of commitments, a for \u03b1 and b for \u03b2.\n\u03b1 will accept \u03b4 if: Pt(acc(\u03b1, \u03b2, \u03c7, \u03b4)) > c, for level of certainty c.\nThis estimate is compounded from subjective and objective views of acceptability.\nThe subjective estimate takes account of: the extent to which the enactment of \u03b4 will satisfy \u03b1's need \u03c7, how much \u03b4 is 'worth' to \u03b1, and the extent to which \u03b1 believes that she will be in a position to execute her commitment a [14, 15].\nS\u03b1(\u03b2, a) is a random variable denoting
\u03b1's estimate of \u03b2's subjective valuation of a over some finite, numerical evaluation space.\nThe objective estimate captures whether \u03b4 is acceptable on the open market, and variable U\u03b1(b) denotes \u03b1's open-market valuation of the enactment of commitment b, again taken over some finite numerical valuation space.\nWe also consider needs; the variable T\u03b1(\u03b2, a) denotes \u03b1's estimate of the strength of \u03b2's motivating need for the enactment of commitment a over a valuation space.\nThen for \u03b4 = (a, b): Pt(acc(\u03b1, \u03b2, \u03c7, \u03b4)) = Pt((T\u03b1(\u03b2, a) \/ T\u03b1(\u03b1, b))^h \u00d7 (S\u03b1(\u03b1, b) \/ S\u03b1(\u03b2, a))^g \u00d7 U\u03b1(b) \/ U\u03b1(a) \u2265 s) (1)\nwhere g \u2208 [0, 1] is \u03b1's greed, h \u2208 [0, 1] is \u03b1's degree of altruism, and s \u2248 1 is derived from the stance8 described in Section 6.\nThe parameters g and h are independent.\nWe can imagine a relationship that begins with g = 1 and h = 0.\nThen as the agents share increasing amounts of their information about their open market valuations g gradually reduces to 0, and then as they share increasing amounts of information about their needs h increases to 1.\nThe basis for the acceptance criterion has thus developed from equity to equality, and then to need.\nPt(acc(\u03b2, \u03b1, \u03b4)) estimates the probability that \u03b2 would accept \u03b4, by observing \u03b2's responses.\nFor example, if \u03b2 sends the message Offer(\u03b41) then \u03b1 derives the constraint: {Pt(acc(\u03b2, \u03b1, \u03b41)) = 1} on the distribution Pt(acc(\u03b2, \u03b1, \u03b4)), and if this is a counter offer to a former offer of \u03b1's, \u03b40, then: {Pt(acc(\u03b2, \u03b1, \u03b40)) = 0}.\nIn the not-atypical special case of multi-issue bargaining where the agents' preferences over the individual issues only are known and are complementary to each other's, maximum entropy reasoning can be applied to estimate the
probability that any multi-issue \u03b4 will be acceptable to \u03b2 by enumerating the possible worlds that represent \u03b2's limit of acceptability [6].\n4.1 Updating the World Model Mt \u03b1's world model consists of probability distributions that represent its uncertainty in the world state.\n8 If \u03b1 chooses to inflate her opening Options then this is achieved in Section 6 by increasing the value of s.\nIf s \u226b 1 then a deal may not be possible.\nThis illustrates the well-known inefficiency of bilateral bargaining established analytically by Myerson and Satterthwaite in 1983.\n\u03b1 is interested in the degree to which an utterance accurately describes what will subsequently be observed.\nAll observations about the world are received as utterances from an all-truthful institution agent \u03be.\nFor example, if \u03b2 communicates the goal I am hungry and the subsequent negotiation terminates with \u03b2 purchasing a book from \u03b1 (by \u03be advising \u03b1 that a certain amount of money has been credited to \u03b1's account) then \u03b1 may conclude that the goal that \u03b2 chose to satisfy was something other than hunger.\nSo, \u03b1's world model contains probability distributions that represent its uncertain expectations of what will be observed on the basis of utterances received.\nWe represent the relationship between utterance, \u03d5, and subsequent observation, \u03d5', by Pt(\u03d5'|\u03d5) \u2208 Mt, where \u03d5 and \u03d5' may be ontological categories in the interest of computational feasibility.\nFor example, if \u03d5 is I will deliver a bucket of fish to you tomorrow then the distribution P(\u03d5'|\u03d5) need not be over all possible things that \u03b2 might do, but could be over ontological categories that summarise \u03b2's possible actions.\nIn the absence of in-coming utterances, the conditional probabilities, Pt(\u03d5'|\u03d5),
should tend to ignorance as represented by a decay limit distribution D(\u03d5 |\u03d5).\n\u03b1 may have background knowledge concerning D(\u03d5 |\u03d5) as t \u2192 \u221e, otherwise \u03b1 may assume that it has maximum entropy whilst being consistent with the data.\nIn general, given a distribution, Pt (Xi), and a decay limit distribution D(Xi), Pt (Xi) decays by: Pt+1 (Xi) = \u0394i(D(Xi), Pt (Xi)) (2) where \u0394i is the decay function for the Xi satisfying the property that limt\u2192\u221e Pt (Xi) = D(Xi).\nFor example, \u0394i could be linear: Pt+1 (Xi) = (1 \u2212 \u03bdi) \u00d7 D(Xi) + \u03bdi \u00d7 Pt (Xi), where \u03bdi < 1 is the decay rate for the i``th distribution.\nEither the decay function or the decay limit distribution could also be a function of time: \u0394t i and Dt (Xi).\nSuppose that \u03b1 receives an utterance \u03bc = illoc(\u03b1, \u03b2, \u03d5, t) from agent \u03b2 at time t. Suppose that \u03b1 attaches an epistemic belief Rt (\u03b1, \u03b2, \u03bc) to \u03bc - this probability takes account of \u03b1``s level of personal caution.\nWe model the update of Pt (\u03d5 |\u03d5) in two cases, one for observations given \u03d5, second for observations given \u03c6 in the semantic neighbourhood of \u03d5.\n4.2 Update of Pt (\u03d5 |\u03d5) given \u03d5 First, if \u03d5k is observed then \u03b1 may set Pt+1 (\u03d5k|\u03d5) to some value d where {\u03d51, \u03d52, ... , \u03d5m} is the set of all possible observations.\nWe estimate the complete posterior distribution Pt+1 (\u03d5 |\u03d5) by applying the principle of minimum relative entropy9 as follows.\nLet p(\u03bc) be the distribution: 9 Given a probability distribution q, the minimum relative entropy distribution p = (p1, ... , pI ) subject to a set of J linear constraints g = {gj(p) = aj \u00b7 p \u2212 cj = 0}, j = 1, ... 
, J (that must include the constraint Σ_i p_i − 1 = 0) is: p = arg min_r Σ_j r_j log(r_j / q_j). This may be calculated by introducing Lagrange multipliers λ: L(p, λ) = Σ_j p_j log(p_j / q_j) + λ · g. Minimising L, {∂L/∂λ_j = g_j(p) = 0}, j = 1, …, J is the set of given constraints g, and a solution to ∂L/∂p_i = 0, i = 1, …, I leads eventually to p. Entropy-based inference is a form of Bayesian inference that is convenient when the data is sparse [5] and encapsulates common-sense reasoning [12].]

p(μ) = arg min_x Σ_j x_j log(x_j / P^t(ϕ′|ϕ)_j), the distribution that satisfies the constraint p(μ)_k = d. Then let q(μ) be the distribution:

q(μ) = R^t(α, β, μ) × p(μ) + (1 − R^t(α, β, μ)) × P^t(ϕ′|ϕ)

and then let:

r(μ) = q(μ) if q(μ) is more interesting than P^t(ϕ′|ϕ), and P^t(ϕ′|ϕ) otherwise.

A general measure of whether q(μ) is more interesting than P^t(ϕ′|ϕ) is: K(q(μ) ‖ D(ϕ′|ϕ)) > K(P^t(ϕ′|ϕ) ‖ D(ϕ′|ϕ)), where K(x ‖ y) = Σ_j x_j ln(x_j / y_j) is the Kullback-Leibler distance between two probability distributions x and y [11]. Finally, incorporating Eqn. 2 we obtain the method for updating a distribution P^t(ϕ′|ϕ) on receipt of a message μ:

P^{t+1}(ϕ′|ϕ) = Δ_i(D(ϕ′|ϕ), r(μ))   (3)

This procedure deals with integrity decay, and with two probabilities: first, the probability z in the utterance μ, and second the belief R^t(α, β, μ) that α attached to μ.

4.3 Update of P^t(φ′|φ) given ϕ

The sim method: Given as above μ = illoc(α, β, ϕ, t) and the observation ϕ_k, we define the vector t by

t_i = P^t(φ_i|φ) + (1 − |Sim(ϕ_k, ϕ) − Sim(φ_i, φ)|) · Sim(ϕ_k, φ)

with {φ_1, φ_2, …
, φ_p} the set of all possible observations in the context of φ, and i = 1, …, p. t is not a probability distribution. The multiplying factor Sim(ϕ_k, φ) limits the variation of probability to those formulae whose ontological context is not too far away from the observation. The posterior P^{t+1}(φ′|φ) is obtained with Equation 3, with r(μ) defined to be the normalisation of t.

The valuation method: For a given φ_k,

w^exp(φ_k) = Σ_{j=1}^m P^t(φ_j|φ_k) · w(φ_j)

is α's expectation of the value of what will be observed given that β has stated that φ_k will be observed, for some measure w. Now suppose that, as before, α observes ϕ_k after agent β has stated ϕ. α revises the prior estimate of the expected valuation w^exp(φ_k) in the light of the observation ϕ_k to:

(w^rev(φ_k) | (ϕ_k|ϕ)) = g(w^exp(φ_k), Sim(φ_k, ϕ), w(φ_k), w(ϕ), w_i(ϕ_k))

for some function g, the idea being, for example, that if the execution, ϕ_k, of the commitment, ϕ, to supply cheese was devalued then α's expectation of the value of a commitment, φ, to supply wine should decrease. We estimate the posterior by applying the principle of minimum relative entropy as for Equation 3, where the distribution p(μ) = p(φ′|φ) satisfies the constraint:

Σ_{j=1}^p p(ϕ′,ϕ)_j · w_i(φ_j) = g(w^exp(φ_k), Sim(φ_k, ϕ), w(φ_k), w(ϕ), w_i(ϕ_k))

5. SUMMARY MEASURES

A dialogue, Ψ^t, between agents α and β is a sequence of inter-related utterances in context. A relationship, Ψ*^t, is a sequence of dialogues. We first measure the confidence that an agent has for another by observing, for each utterance, the difference between what is said (the utterance) and what subsequently occurs (the
observation). Second we evaluate each dialogue as it progresses in terms of the LOGIC framework; this evaluation employs the confidence measures. Finally we define the intimacy of a relationship as an aggregation of the value of its component dialogues.

5.1 Confidence

Confidence measures generalise what are commonly called trust, reliability and reputation measures into a single computational framework that spans the LOGIC categories. In Section 5.2 confidence measures are applied to valuing fulfilment of promises in the Legitimacy category (we formerly called this "honour" [14]), to the execution of commitments (we formerly called this "trust" [13]), and to valuing dialogues in the Goals category (we formerly called this "reliability" [14]).

Ideal observations. Consider a distribution of observations that represents α's ideal in the sense that it is the best that α could reasonably expect to observe. This distribution will be a function of α's context with β, denoted by e, and is P^t_I(ϕ′|ϕ, e). Here we measure the relative entropy between this ideal distribution, P^t_I(ϕ′|ϕ, e), and the distribution of expected observations, P^t(ϕ′|ϕ). That is:

C(α, β, ϕ) = 1 − Σ_{ϕ′} P^t_I(ϕ′|ϕ, e) log [P^t_I(ϕ′|ϕ, e) / P^t(ϕ′|ϕ)]   (4)

where the "1" is an arbitrarily chosen constant being the maximum value that this measure may have. This equation measures confidence for a single statement ϕ. It makes sense to aggregate these values over a class of statements, say over those ϕ that are in the ontological context o, that is ϕ ≤ o:

C(α, β, o) = 1 − [Σ_{ϕ:ϕ≤o} P^t_β(ϕ) [1 − C(α, β, ϕ)]] / [Σ_{ϕ:ϕ≤o} P^t_β(ϕ)]

where P^t_β(ϕ) is a probability distribution over the space of statements, being the probability that the next statement β will make to α is ϕ.
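As a minimal numeric sketch of Equation 4 and the aggregation above, the following assumes finite discrete distributions and natural logarithms (the base of the log is not fixed by the paper); the function names are illustrative, not part of the model.

```python
import math

def confidence(ideal, expected):
    """Eqn-4-style confidence: 1 minus the relative entropy between the
    ideal distribution P_I(phi'|phi, e) and the expected one P(phi'|phi)."""
    return 1.0 - sum(pi * math.log(pi / pe)
                     for pi, pe in zip(ideal, expected) if pi > 0)

def context_confidence(weights, confidences):
    """Aggregate C(alpha, beta, phi) over statements phi <= o, weighted by
    the probability P_beta(phi) that beta's next statement is phi."""
    total = sum(weights)
    return 1.0 - sum(w * (1.0 - c)
                     for w, c in zip(weights, confidences)) / total
```

Confidence is maximal (1) when the expected observations coincide with the ideal distribution, and falls as the two diverge.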
Similarly, for an overall estimate of β's confidence in α:

C(α, β) = 1 − Σ_ϕ P^t_β(ϕ) [1 − C(α, β, ϕ)]

Preferred observations. The previous measure requires that an ideal distribution, P^t_I(ϕ′|ϕ, e), be specified for each ϕ. Here we instead measure the extent to which the observation ϕ′ is preferable to the original statement ϕ. Given a predicate Prefer(c_1, c_2, e), meaning that α prefers c_1 to c_2 in environment e, if ϕ ≤ o then:

C(α, β, ϕ) = Σ_{ϕ′} P^t(Prefer(ϕ′, ϕ, o)) P^t(ϕ′|ϕ)

and:

C(α, β, o) = [Σ_{ϕ:ϕ≤o} P^t_β(ϕ) C(α, β, ϕ)] / [Σ_{ϕ:ϕ≤o} P^t_β(ϕ)]

Certainty in observation. Here we measure the consistency in expected acceptable observations, or the lack of expected uncertainty in those possible observations that are better than the original statement. If ϕ ≤ o, let:

Φ+(ϕ, o, κ) = {ϕ′ | P^t(Prefer(ϕ′, ϕ, o)) > κ}

for some constant κ, and:

C(α, β, ϕ) = 1 + (1/B*) · Σ_{ϕ′∈Φ+(ϕ,o,κ)} P^t_+(ϕ′|ϕ) log P^t_+(ϕ′|ϕ)

where P^t_+(ϕ′|ϕ) is the normalisation of P^t(ϕ′|ϕ) for ϕ′ ∈ Φ+(ϕ, o, κ), and B* = 1 if |Φ+(ϕ, o, κ)| = 1, and B* = log |Φ+(ϕ, o, κ)| otherwise. As above, we aggregate this measure for observations in a particular context o, and measure confidence as before.

Computational Note. The various measures given above involve extensive calculations. For example, Eqn. 4 contains Σ_{ϕ′}, which sums over all possible observations ϕ′. We obtain a more computationally friendly measure by appealing to the structure of the ontology described in Section 3.2, and the right-hand side of Eqn. 4 may be approximated to:

1 − Σ_{ϕ′:Sim(ϕ′,ϕ)≥η} P^t_{η,I}(ϕ′|ϕ, e) log [P^t_{η,I}(ϕ′
|ϕ, e) / P^t_η(ϕ′|ϕ)]

where P^t_{η,I}(ϕ′|ϕ, e) is the normalisation of P^t_I(ϕ′|ϕ, e) for Sim(ϕ′, ϕ) ≥ η, and similarly for P^t_η(ϕ′|ϕ). The extent of this calculation is controlled by the parameter η. An even tighter restriction may be obtained with: Sim(ϕ′, ϕ) ≥ η and ϕ′ ≤ ψ for some ψ.

5.2 Valuing negotiation dialogues

Suppose that a negotiation commences at time s, and by time t a string of utterances, Φ^t = ⟨μ_1, …, μ_n⟩, has been exchanged between agent α and agent β. This negotiation dialogue is evaluated by α in the context of α's world model at time s, M^s, and the environment e that includes utterances that may have been received from other agents in the system, including the information sources {θ_i}. Let Ψ^t = (Φ^t, M^s, e); then α estimates the value of this dialogue to itself in the context of M^s and e as a 2 × 5 array V_α(Ψ^t) where:

V_x(Ψ^t) = [ I^L_x(Ψ^t)  I^O_x(Ψ^t)  I^G_x(Ψ^t)  I^I_x(Ψ^t)  I^C_x(Ψ^t) ;
             U^L_x(Ψ^t)  U^O_x(Ψ^t)  U^G_x(Ψ^t)  U^I_x(Ψ^t)  U^C_x(Ψ^t) ]

where the I(·) and U(·) functions are information-based and utility-based measures respectively, as we now describe. α estimates the value of this dialogue to β as V_β(Ψ^t) by assuming that β's reasoning apparatus mirrors its own.

In general terms, the information-based valuations measure the reduction in uncertainty, or information gain, that the dialogue gives to each agent; they are expressed in terms of decrease in entropy and can always be calculated. The utility-based valuations measure utility gain, and are expressed in terms of some suitable utility evaluation function U(·) that can be difficult to define.
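The information-based entries of V_x reduce to a small entropy computation. The sketch below assumes each measure tracks one discrete distribution at the start (time s) and end (time t) of the dialogue; the names are illustrative rather than taken from the paper.

```python
import math

def entropy(p):
    """Shannon entropy H of a discrete probability distribution."""
    return -sum(pj * math.log(pj) for pj in p if pj > 0)

def information_gain(dists_s, dists_t):
    """Average decrease in entropy between times s and t, one distribution
    per item of interest (offers on the table, needs, commitments), which
    is the shared shape of the I(.) entries in Table 2."""
    drops = [entropy(ps) - entropy(pt) for ps, pt in zip(dists_s, dists_t)]
    return sum(drops) / len(drops)
```

A dialogue that sharpens α's expectations (e.g. moves an acceptance distribution from uniform towards a point mass) yields a positive information gain.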
This is one reason why the utilitarian approach has no natural extension to the management of argumentation, which is achieved here by our information-based approach. For example, if α receives the utterance "Today is Tuesday" then this may be translated into a constraint on a single distribution, and the resulting decrease in entropy is the information gain. Attaching a utilitarian measure to this utterance may not be so simple.

We use the term "2 × 5 array" loosely to describe V_α, in that the elements of the array are lists of measures that will be determined by the agent's requirements. Table 2 shows a sample measure for each of the ten categories; in it the dialogue commences at time s and terminates at time t. In that Table, U(·) is a suitable utility evaluation function, needs(β, χ) means agent β needs the need χ, cho(β, χ, γ) means agent β satisfies need χ by choosing to negotiate with agent γ, N is the set of needs chosen from the ontology at some suitable level of abstraction, T^t is the set of offers on the table at time t, com(β, γ, b) means agent β has an outstanding commitment with agent γ to execute the commitment b, where b is defined in the ontology at some suitable level of abstraction, B is the number of such commitments, and there are n + 1 agents in the system.

5.3 Intimacy and Balance

The balance in a negotiation dialogue, Ψ^t, is defined as B_αβ(Ψ^t) = V_α(Ψ^t) ⊖ V_β(Ψ^t), for an element-by-element difference operator, ⊖, that respects the structure of V(Ψ^t). The intimacy between agents α and β, I*^t_αβ, is the pattern of the two 2 × 5 arrays V*^t_α and V*^t_β that are computed by an update function as each negotiation round terminates, I*^t_αβ = (V*^t_α, V*^t_β). If Ψ^t terminates at time t:

V*^{t+1}_x = ν × V_x(Ψ^t) + (1
− ν) × V*^t_x   (5)

where ν is the learning rate, and x = α, β. Additionally, V*^t_x continually decays by: V*^{t+1}_x = τ × V*^t_x + (1 − τ) × D_x, where x = α, β; τ is the decay rate, and D_x is a 2 × 5 array being the decay limit distribution for the value to agent x of the intimacy of the relationship in the absence of any interaction. D_x is the reputation of agent x. The relationship balance between agents α and β is: B*^t_αβ = V*^t_α ⊖ V*^t_β. In particular, the intimacy determines values for the parameters g and h in Equation 1. As a simple example, if both I^O_α(Ψ*^t) and I^O_β(Ψ*^t) increase then g decreases, and as the remaining eight information-based LOGIC components increase, h increases.

The notion of balance may be applied to pairs of utterances by treating them as degenerate dialogues. In simple multi-issue bargaining the equitable information revelation strategy generalises the tit-for-tat strategy in single-issue bargaining, and extends to a tit-for-tat argumentation strategy by applying the same principle across the LOGIC framework.

6. STRATEGIES AND TACTICS

Each negotiation has to achieve two goals. First, it may be intended to achieve some contractual outcome. Second, it will aim to contribute to the growth, or decline, of the relationship intimacy. We now describe in greater detail the contents of the "Negotiation" box in Figure 1.

The negotiation literature consistently advises that an agent's behaviour should not be predictable even in close, intimate relationships. The required variation of behaviour is normally described as varying the negotiation stance, which informally varies from "friendly guy" to "tough guy". The stance is shown in Figure 1; it injects bounded random noise into the process, where the bound tightens as intimacy increases. The stance, S^t_αβ, is
a 2 × 5 matrix of randomly chosen multipliers, each ≈ 1, that perturbs α's actions. The value in the (x, y) position in the matrix, where x = I, U and y = L, O, G, I, C, is chosen at random from [1/l(I*^t_αβ, x, y), l(I*^t_αβ, x, y)], where l(I*^t_αβ, x, y) is the bound and I*^t_αβ is the intimacy.

The negotiation strategy is concerned with maintaining a working set of Options. If the set of options is empty then α will quit the negotiation. α perturbs the acceptance machinery (see Section 4) by deriving s from the S^t_αβ matrix, such as the value at the (I, O) position. In line with the comment in Footnote 7, in the early stages of the negotiation α may decide to inflate her opening Options; this is achieved by increasing the value of s in Equation 1. The following strategy uses the machinery described in Section 4. Fix h, g, s and c, set the Options to the empty set, and let D^t_s = {δ | P^t(acc(α, β, χ, δ)) > c}. Then:

• repeat the following as many times as desired: add δ = arg max_x {P^t(acc(β, α, x)) | x ∈ D^t_s} to Options, and remove {y ∈ D^t_s | Sim(y, δ) < k}, for some k, from D^t_s.

By using P^t(acc(β, α, δ)) this strategy reacts to β's history of Propose and Reject utterances.

Negotiation tactics are concerned with selecting some Options and wrapping them in argumentation. Prior interactions with agent β will have produced an intimacy pattern expressed in the form of (V*^t_α, V*^t_β). Suppose that the relationship target is (T*^t_α, T*^t_β). Following from Equation 5, α will want to achieve a negotiation target, N_β(Ψ^t), such that ν · N_β(Ψ^t) + (1 − ν) · V*^t_β is "a bit on the T*^t_β side of" V*^t_β:

N_β(Ψ^t) = ((ν − κ)/ν) V
*^t_β ⊕ (κ/ν) T*^t_β   (6)

for small κ ∈ [0, ν] that represents α's desired rate of development for her relationship with β. N_β(Ψ^t) is a 2 × 5 matrix containing variations in the LOGIC dimensions that α would like to reveal to β during Ψ^t (e.g. "I'll pass a bit more information on options than usual", "I'll be stronger in concessions on options", etc.). It is reasonable to expect β to progress towards her target at the same rate, and N_α(Ψ^t) is calculated by replacing β by α in Equation 6. N_α(Ψ^t) is what α hopes to receive from β during Ψ^t. This gives a negotiation balance target of N_α(Ψ^t) ⊖ N_β(Ψ^t) that can be used as the foundation for reactive tactics, by striving to maintain this balance across the LOGIC dimensions. A cautious tactic could use the balance to bound the response μ′ to each utterance μ from β by the constraint: V_α(μ′) ⊖ V_β(μ) ≈ S^t_αβ ⊗ (N_α(Ψ^t) ⊖ N_β(Ψ^t)), where ⊗ is element-by-element matrix multiplication and S^t_αβ is the stance. A less neurotic tactic could attempt to achieve the target negotiation balance over the anticipated complete dialogue. If a balance bound requires negative information revelation in one LOGIC category then α will contribute nothing to it, and will leave this to the natural decay to the reputation D as described above.

7. DISCUSSION

In this paper we have introduced a novel approach to negotiation that uses information-based and game-theoretical measures grounded on business and psychological studies. It introduces the concepts of intimacy and balance as key elements in understanding what a negotiation strategy and tactic are. Negotiation is understood as a dialogue that affects five basic dimensions: Legitimacy, Options, Goals, Independence, and Commitment.
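As a summary of the Section 6 machinery, the Option-selection loop can be sketched as follows, reading its removal rule literally (candidates with Sim(y, δ) < k are dropped after each pick); the deal encoding and helper names are assumptions made for illustration only.

```python
def select_options(candidates, p_acc_beta, sim, k, max_options):
    """Greedy Option selection: repeatedly add the deal that beta is most
    likely to accept, then prune the remaining candidates.

    candidates  -- deals delta with P(acc(alpha, beta, chi, delta)) > c
    p_acc_beta  -- delta -> estimate of P(acc(beta, alpha, delta))
    sim         -- similarity function Sim(y, delta)
    k           -- similarity threshold used by the removal rule
    """
    pool = list(candidates)
    options = []
    while pool and len(options) < max_options:
        best = max(pool, key=p_acc_beta)
        options.append(best)
        # remove the chosen deal and every y with Sim(y, best) < k
        pool = [y for y in pool if y != best and sim(y, best) >= k]
    return options
```

Because selection uses P^t(acc(β, α, δ)), the resulting Options track β's history of Propose and Reject utterances, as noted above.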
Each dialogical move produces a change in a 2 × 5 matrix that evaluates the dialogue along five information-based measures and five utility-based measures. The current balance and intimacy levels, and the desired, or target, levels, are used by the tactics to determine what to say next. We are currently exploring the use of this model as an extension of a currently widespread eProcurement software commercialised by iSOCO, a spin-off company of the laboratory of one of the authors.

Table 2: Sample measures for each category in V_α(Ψ^t). (Similarly for V_β(Ψ^t).)

I^L_α(Ψ^t) = Σ_{ϕ∈Ψ^t} [C^t(α, β, ϕ) − C^s(α, β, ϕ)]
U^L_α(Ψ^t) = Σ_{ϕ∈Ψ^t} Σ_{ϕ′} P^t_β(ϕ′|ϕ) × U_α(ϕ′)
I^O_α(Ψ^t) = [Σ_{δ∈T^t} H^s(acc(β, α, δ)) − Σ_{δ∈T^t} H^t(acc(β, α, δ))] / |T^t|
U^O_α(Ψ^t) = Σ_{δ∈T^t} P^t(acc(β, α, δ)) × Σ_{δ′} P^t(δ′|δ) U_α(δ′)
I^G_α(Ψ^t) = Σ_{χ∈N} [H^s(needs(β, χ)) − H^t(needs(β, χ))] / |N|
U^G_α(Ψ^t) = Σ_{χ∈N} P^t(needs(β, χ)) × E^t(U_α(needs(β, χ)))
I^I_α(Ψ^t) = Σ_{i=1}^o Σ_{χ∈N} [H^s(cho(β, χ, β_i)) − H^t(cho(β, χ, β_i))] / (n × |N|)
U^I_α(Ψ^t) = Σ_{i=1}^o Σ_{χ∈N} [U^t(cho(β, χ, β_i)) − U^s(cho(β, χ, β_i))]
I^C_α(Ψ^t) = Σ_{i=1}^o Σ_{δ∈B} [H^s(com(β, β_i, b)) − H^t(com(β, β_i, b))] / (n × |B|)
U^C_α(Ψ^t) = Σ_{i=1}^o Σ_{δ∈B} [U^t(com(β, β_i, b)) − U^s(com(β, β_i, b))]

Acknowledgements

Carles Sierra is partially supported by the OpenKnowledge European STREP project and by the Spanish IEA Project.

8. REFERENCES

[1] Adams,
J. S. Inequity in social exchange. In Advances in Experimental Social Psychology, L. Berkowitz, Ed., vol. 2. Academic Press, New York, 1965.
[2] Arcos, J. L., Esteva, M., Noriega, P., Rodríguez, J. A., and Sierra, C. Environment engineering for multiagent systems. Journal on Engineering Applications of Artificial Intelligence 18 (2005).
[3] Bazerman, M. H., Loewenstein, G. F., and White, S. B. Reversal of preference in allocation decisions: judging an alternative versus choosing among alternatives. Administrative Science Quarterly 37 (1992), 220-240.
[4] Brandenburger, A., and Nalebuff, B. Co-Opetition: A Revolution Mindset That Combines Competition and Cooperation. Doubleday, New York, 1996.
[5] Cheeseman, P., and Stutz, J. On the relationship between Bayesian and maximum entropy inference. In Bayesian Inference and Maximum Entropy Methods in Science and Engineering. American Institute of Physics, Melville, NY, USA, 2004, pp. 445-461.
[6] Debenham, J. Bargaining with information. In Proceedings Third International Conference on Autonomous Agents and Multi Agent Systems AAMAS-2004 (July 2004), N. Jennings, C. Sierra, L. Sonenberg, and M. Tambe, Eds., ACM Press, New York, pp. 664-671.
[7] Fischer, R., Ury, W., and Patton, B. Getting to Yes: Negotiating Agreements Without Giving In. Penguin Books, 1995.
[8] Kalfoglou, Y., and Schorlemmer, M. IF-Map: An ontology-mapping method based on information-flow theory. In Journal on Data Semantics I, S. Spaccapietra, S. March, and K. Aberer, Eds., vol. 2800 of Lecture Notes in Computer Science. Springer-Verlag, Heidelberg, Germany, 2003, pp. 98-127.
[9] Lewicki, R. J., Saunders, D. M., and Minton, J. W. Essentials of Negotiation. McGraw-Hill, 2001.
[10] Li, Y., Bandar, Z.
A., and McLean, D. An approach for measuring semantic similarity between words using multiple information sources. IEEE Transactions on Knowledge and Data Engineering 15, 4 (July/August 2003), 871-882.
[11] MacKay, D. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[12] Paris, J. Common sense and maximum entropy. Synthese 117, 1 (1999), 75-93.
[13] Sierra, C., and Debenham, J. An information-based model for trust. In Proceedings Fourth International Conference on Autonomous Agents and Multi Agent Systems AAMAS-2005 (Utrecht, The Netherlands, July 2005), F. Dignum, V. Dignum, S. Koenig, S. Kraus, M. Singh, and M. Wooldridge, Eds., ACM Press, New York, pp. 497-504.
[14] Sierra, C., and Debenham, J. Trust and honour in information-based agency. In Proceedings Fifth International Conference on Autonomous Agents and Multi Agent Systems AAMAS-2006 (Hakodate, Japan, May 2006), P. Stone and G. Weiss, Eds., ACM Press, New York, pp. 1225-1232.
[15] Sierra, C., and Debenham, J. Information-based agency. In Proceedings of Twentieth International Joint Conference on Artificial Intelligence IJCAI-07 (Hyderabad, India, January 2007), pp. 1513-1518.
[16] Sondak, H., Neale, M. A., and Pinkley, R. The negotiated allocations of benefits and burdens: The impact of outcome valence, contribution, and relationship. Organizational Behaviour and Human Decision Processes 3 (December 1995), 249-260.
[17] Valley, K. L., Neale, M. A., and Mannix, E. A. Friends, lovers, colleagues, strangers: The effects of relationships on the process and outcome of negotiations. In Research in Negotiation in Organizations, R. Bies, R. Lewicki, and B. Sheppard, Eds., vol. 5. JAI Press, 1995, pp.
65-94.
not just utility maximisers, but aim at building long lasting relationships with progressing levels of intimacy that determine what balance in information and resource sharing is acceptable to them.\nThese two concepts, intimacy and balance are key in the model, and enable us to understand competitive and co-operative game theory as two particular theories of agent relationships (i.e. at different intimacy levels).\nThese two theories are too specific and distinct to describe how a (business) relationship might grow because interactions have some aspects of these two extremes on a continuum in which, for example, agents reveal increasing amounts of private information as their intimacy grows.\nWe don't follow the' Co-Opetition' aproach [4] where co-operation and competition depend on the issue under negotiation, but instead we belief that the willingness to co-operate\/compete affect all aspects in the negotiation process.\nNegotiation strategies can naturally be seen as procedures that select tactics used to attain a successful deal and to reach a target intimacy level.\nIt is common in human settings to use tactics that compensate for unbalances in one dimension of a negotiation with unbalances in another dimension.\nIn this sense, humans aim at a general sense of fairness in an interaction.\nIn Section 2 we outline the aspects of human negotiation modelling that we cover in this work.\nThen, in Section 3 we introduce the negotiation language.\nSection 4 explains in outline the architecture and the concepts of intimacy and balance, and how they influence the negotiation.\nSection 5 contains a description of the different metrics used in the agent model including intimacy.\nFinally, Section 6 outlines how strategies and tactics use the LOGIC framework, intimacy and balance.\n2.\nHUMAN NEGOTIATION\n2.1 Intimacy and Balance in relationships\n3.\nCOMMUNICATION MODEL\n3.1 Ontology\nThe Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1031\nElement A new trading partner my butcher my boss my partner my children\n3.2 Language\n1032 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n4.\nAGENT ARCHITECTURE\n4.1 Updating the World Model Mt\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1033\n4.3 Update of Pt (\u03c6 ~ I\u03c6) given \u03d5\n4.2 Update of Pt (\u03d5 ~ I\u03d5) given \u03d5\n5.\nSUMMARY MEASURES\n1034 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.1 Confidence\n5.2 Valuing negotiation dialogues\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1035\n5.3 Intimacy and Balance\n6.\nSTRATEGIES AND TACTICS\nEach negotiation has to achieve two goals.\nFirst it may be intended to achieve some contractual outcome.\nSecond it will aim to contribute to the growth, or decline, of the relationship intimacy.\nWe now describe in greater detail the contents of the \"Negotiation\" box in Figure 1.\nThe negotiation literature consistently advises that an agent's behaviour should not be predictable even in close, intimate relationships.\nThe required variation of behaviour is normally described as varying the negotiation stance that informally varies from \"friendly guy\" to \"tough guy\".\nThe stance is shown in Figure 1, it injects bounded random noise into the process, where the bound tightens as intimacy increases.\nThe stance, St\u03b1\u03b2, is a 2 \u00d7 5 matrix of randomly chosen multipliers, each \u2248 1, that perturbs \u03b1's actions.\nThe value in the (x, y) position in the matrix, where x = I, U and y = L, O, G, I, C, is chosen at random from [1 \u03b1\u03b2, x, y) is the l (I \u2217 t \u03b1\u03b2, x, y), l (I * t \u03b1\u03b2, x, y)] where l (I * t bound, and I * t \u03b1\u03b2 is the intimacy.\nThe negotiation strategy is concerned with maintaining a 
working set of Options.\nIf the set of options is empty then \u03b1 will quit the negotiation.\n\u03b1 perturbs the acceptance machinery (see Section 4) by deriving s from the St\u03b1\u03b2 matrix such as the value at the (I, O) position.\nIn line with the comment in Footnote 7, in the early stages of the negotiation \u03b1 may decide to inflate her opening Options.\nThis is achieved by increasing the value of s in Equation 1.\nThe following strategy uses the machinery described in Section 4.\nFix h, g, s and c, set the Options to the empty set, let Dts = {\u03b4 | Pt (acc (\u03b1, \u03b2, \u03c7, \u03b4)> c}, then:\n\u2022 repeat the following as many times as desired: add\n\u03b4 = arg maxx {Pt (acc (\u03b2, \u03b1, x)) | x \u2208 Dts} to Options, remove {y \u2208 Dts | Sim (y, \u03b4) <k} for some k from Dts By using Pt (acc (\u03b2, \u03b1, \u03b4)) this strategy reacts to \u03b2's history of Propose and Reject utterances.\nNegotiation tactics are concerned with selecting some Options and wrapping them in argumentation.\nPrior interactions with agent \u03b2 will have produced an intimacy pattern expressed in the form of (V * t \u03b1, V * t).\nSuppose that the rela\n\u03b2).\nFollowing from Equation 5, \u03b1 will want to achieve a negotiation target, N\u03b2 (\u03a8t) such that: \u03bd \u00b7 N\u03b2 (\u03a8t) + (1 \u2212 \u03bd) \u00b7 V\u03b2 * t is \"a bit on the T\u03b2 * t side of\" V * t\nfor small \u03ba \u2208 [0, \u03bd] that represents \u03b1's desired rate of development for her relationship with \u03b2.\nN\u03b2 (\u03a8t) is a 2 \u00d7 5 matrix containing variations in the LOGIC dimensions that \u03b1 would like to reveal to \u03b2 during \u03a8t (e.g. 
I'll pass a bit more information on options than usual, I'll be stronger in concessions on options, etc.).\nIt is reasonable to expect \u03b2 to progress towards her target at the same rate and N\u03b1 (\u03a8t) is calculated by replacing \u03b2 by \u03b1 in Equation 6.\nN\u03b1 (\u03a8t) is what \u03b1 hopes to receive from \u03b2 during \u03a8t.\nThis gives a negotiation balance target of: N\u03b1 (\u03a8t) ~ N\u03b2 (\u03a8t) that can be used as the foundation for reactive tactics by striving to maintain this balance across the LOGIC dimensions.\nA cautious tactic could use the balance to bound the response \u03bc to each utterance \u03bc' from \u03b2 by the constraint: V\u03b1 (\u03bc') ~ V\u03b2 (\u03bc) \u2248 St\u03b1\u03b2 \u2297 (N\u03b1 (\u03a8t) ~ N\u03b2 (\u03a8t)), where \u2297 is element-by-element matrix multiplication, and St\u03b1\u03b2 is the stance.\nA less neurotic tactic could attempt to achieve the target negotiation balance over the anticipated complete dialogue.\nIf a balance bound requires negative information revelation in one LOGIC category then \u03b1 will contribute nothing to it, and will leave this to the natural decay to the reputation D as described above.","lvl-4":"The LOGIC Negotiation Model\nABSTRACT\nSuccessful negotiators prepare by determining their position along five dimensions: Legitimacy, Options, Goals, Independence, and Commitment, (LOGIC).\nWe introduce a negotiation model based on these dimensions and on two primitive concepts: intimacy (degree of closeness) and balance (degree of fairness).\nThe intimacy is a pair of matrices that evaluate both an agent's contribution to the relationship and its opponent's contribution each from an information view and from a utilitarian view across the five LOGIC dimensions.\nThe balance is the difference between these matrices.\nA relationship strategy maintains a target intimacy for each relationship that an agent would like the relationship to move towards in future.\nThe 
negotiation strategy maintains a set of Options that are in line with the current intimacy level, and then tactics wrap the Options in argumentation with the aim of attaining a successful deal and manipulating the successive negotiation balances towards the target intimacy.

1. INTRODUCTION

In this paper we propose a new negotiation model to deal with long-term relationships that are founded on successive negotiation encounters. The model is grounded on results from business and psychological studies [1, 16, 9], and acknowledges that negotiation is an information exchange process as well as a utility exchange process [15, 14]. We believe that if agents are to succeed in real application domains they have to reconcile both views: informational and game-theoretical. Our aim is to model trading scenarios where agents represent their human principals, and thus we want their behaviour to be comprehensible by humans and to respect usual human negotiation procedures, whilst being consistent with, and somehow extending, game-theoretical and information-theoretical results. In this sense, agents are not just utility maximisers, but aim at building long-lasting relationships with progressing levels of intimacy that determine what balance in information and resource sharing is acceptable to them. These two concepts, intimacy and balance, are key in the model, and enable us to understand competitive and co-operative game theory as two particular theories of agent relationships (i.e.
at different intimacy levels). These two theories are too specific and distinct to describe how a (business) relationship might grow, because interactions have some aspects of these two extremes on a continuum in which, for example, agents reveal increasing amounts of private information as their intimacy grows. We do not follow the `Co-Opetition' approach [4], where co-operation and competition depend on the issue under negotiation; instead, we believe that the willingness to co-operate/compete affects all aspects of the negotiation process. Negotiation strategies can naturally be seen as procedures that select tactics used to attain a successful deal and to reach a target intimacy level. It is common in human settings to use tactics that compensate for imbalances in one dimension of a negotiation with imbalances in another dimension. In this sense, humans aim at a general sense of fairness in an interaction.

In Section 2 we outline the aspects of human negotiation modelling that we cover in this work. Then, in Section 3 we introduce the negotiation language. Section 4 explains in outline the architecture and the concepts of intimacy and balance, and how they influence the negotiation. Section 5 contains a description of the different metrics used in the agent model, including intimacy. Finally, Section 6 outlines how strategies and tactics use the LOGIC framework, intimacy and balance.

6. STRATEGIES AND TACTICS

Each negotiation has to achieve two goals. First, it may be intended to achieve some contractual outcome. Second, it will aim to contribute to the growth, or decline, of the relationship intimacy. We now describe in greater detail the contents of the "Negotiation" box in Figure 1. The negotiation literature consistently advises that an agent's behaviour should not be predictable even in close, intimate relationships. The required variation of behaviour is normally described as varying the negotiation stance, which informally varies from "friendly guy" to "tough guy". The stance is shown in Figure 1; it injects bounded random noise into the process, where the bound tightens as intimacy increases. The stance, Stαβ, is a 2 × 5 matrix of randomly chosen multipliers, each ≈ 1, that perturbs α's actions. The negotiation strategy is concerned with maintaining a working set of Options.

2. HUMAN NEGOTIATION

Before a negotiation starts, human negotiators prepare the dialogic exchanges that can be made along the five LOGIC dimensions [7]:

• Legitimacy. What information is relevant to the negotiation
process? What are the persuasive arguments about the fairness of the options?

1030 978-81-904262-7-5 (RPS) © 2007 IFAAMAS

• Options. What are the possible agreements we can accept?
• Goals. What are the underlying things we need or care about? What are our goals?
• Independence. What will we do if the negotiation fails? What alternatives have we got?
• Commitment. What outstanding commitments do we have?

Negotiation dialogues, in this context, exchange dialogical moves, i.e. messages, with the intention of getting information about the opponent or giving away information about us along these five dimensions: request for information, propose options, inform about interests, issue promises, appeal to standards, and so on. A key part of any negotiation process is to build a model of our opponent(s) along these dimensions. All utterances agents make during a negotiation give away information about their current LOGIC model, that is, about their legitimacy, options, goals, independence, and commitments. Also, several utterances can have a utilitarian interpretation in the sense that an agent can associate a preferential gain to them. For instance, an offer may inform our negotiation opponent about our willingness to sign a contract in the terms expressed in the offer, and at the same time the opponent can compute its associated expected utilitarian gain. These two views, information-based and utility-based, are central in the model proposed in this paper.

2.1 Intimacy and Balance in Relationships

There is evidence from psychological studies that humans seek a balance in their negotiation relationships. The classical view [1] is that people perceive resource allocations as being distributively fair (i.e. well balanced) if they are proportional to inputs or contributions (i.e.
equitable). However, more recent studies [16, 17] show that humans follow a richer set of norms of distributive justice depending on their intimacy level: equity, equality, and need. Equity is allocation proportional to effort (e.g. the profit of a company goes to the stockholders in proportion to their investment), equality is allocation in equal amounts (e.g. two friends eat the same amount of a cake cooked by one of them), and need is allocation proportional to the need for the resource (e.g. in case of food scarcity, a mother gives all food to her baby). For instance, if we are in a purely economic setting (low intimacy) we might request equity for the Options dimension but could accept equality in the Goals dimension. The perception of a relation being in balance (i.e. fair) depends strongly on the nature of the social relationship between individuals (i.e. the intimacy level). In purely economical relationships (e.g. business), equity is perceived as more fair; in relations where joint action or the fostering of social relationships is the goal (e.g. friends), equality is perceived as more fair; and in situations where personal development or personal welfare is the goal (e.g.
family), allocations are usually based on need. We believe that the perception of balance in dialogues (in negotiation or otherwise) is grounded on social relationships, and that every dimension of an interaction between humans can be correlated to the social closeness, or intimacy, between the parties involved. According to the previous studies, the more intimacy across the five LOGIC dimensions, the more the need norm is used, and the less intimacy, the more the equity norm is used. This might be part of our social evolution. There is ample evidence that when human societies evolved from a hunter-gatherer structure¹ to a shelter-based one,² the probability of survival increased when food was scarce. In this context we can clearly see that, for instance, families exchange not only goods but also information and knowledge based on need, and that few families would consider their relationships as being unbalanced, and thus unfair, when there is a strong asymmetry in the exchanges (a mother explaining everything to her children, or buying toys, does not expect reciprocity). In the case of partners there is some evidence [3] that the allocations of goods and burdens (i.e.
positive and negative utilities) are perceived as fair, or in balance, based on equity for burdens and equality for goods. See Table 1 for some examples of desired balances along the LOGIC dimensions. The perceived balance in a negotiation dialogue allows negotiators to infer information about their opponent, about its LOGIC stance, and to compare their relationships with all negotiators. For instance, if we perceive that every time we request information it is provided, and that no significant questions are returned, or no complaints about not receiving information are given, then that probably means that our opponent perceives our social relationship to be very close. Alternatively, we can detect what issues are causing a burden to our opponent by observing an imbalance in the information or utilitarian senses on that issue.

3. COMMUNICATION MODEL

3.1 Ontology

In order to define a language to structure agent dialogues we need an ontology that includes a (minimum) repertoire of elements: a set of concepts (e.g. quantity, quality, material) organised in an is-a hierarchy (e.g. platypus is a mammal, Australian-dollar is a currency), and a set of relations over these concepts (e.g.
price (beer, AUD)).³ We model ontologies following an algebraic approach [8] as: An ontology is a tuple O = (C, R, ≤, σ) where:

1. C is a finite set of concept symbols (including basic data types);
2. R is a finite set of relation symbols;
3. ≤ is a reflexive, transitive and anti-symmetric relation on C (a partial order);
4. σ: R → C⁺ is the function assigning to each relation symbol its arity;

where ≤ is the traditional is-a hierarchy.

¹ In its purest form, individuals in these societies collect food and consume it when and where it is found. This is a pure equity sharing of the resources: the gain is proportional to the effort.
² In these societies there are family units, around a shelter, that represent the basic food-sharing structure. Usually, food is accumulated at the shelter for future use. Then the food intake depends more on the need of the members.
³ Usually, a set of axioms defined over the concepts and relations is also required. We will omit this here.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1031

Table 1: Some desired balances (sense of fairness) examples depending on the relationship.

Element      | A new trading partner | my butcher | my boss  | my partner | my children
Legitimacy   | equity                | equity     | equity   | equality   | need
Options      | equity                | equity     | equity   | mixed*     | need
Goals        | equity                | need       | equity   | need       | need
Independence | equity                | equity     | equality | need       | need
Commitment   | equity                | equity     | equity   | mixed      | need

* equity on burden, equality on good

To simplify computations in the computing of probability distributions we assume that there is a number of disjoint is-a trees covering different ontological spaces (e.g. a tree for types of fabric, a tree for shapes of clothing, and so on). R contains relations between the concepts in the hierarchy; this is needed to define `objects' (e.g.
deals) that are defined as a tuple of issues. The semantic distance between concepts within an ontology depends on how far away they are in the structure defined by the ≤ relation. Semantic distance plays a fundamental role in strategies for information-based agency. How signed contracts, Commit(·), about objects in a particular semantic region, and their execution, Done(·), affect our decision-making process about signing future contracts in nearby semantic regions is crucial to modelling the common sense that human beings apply in managing trading relationships. A measure [10] bases the semantic similarity between two concepts on the path length induced by ≤ (more distance in the ≤ graph means less semantic similarity), and the depth of the subsumer concept (common ancestor) in the shortest path between the two concepts (the deeper in the hierarchy, the closer the meaning of the concepts). Semantic similarity is then defined as:

Sim(c, c′) = e^(−κ1·l) · (e^(κ2·h) − e^(−κ2·h)) / (e^(κ2·h) + e^(−κ2·h))

where l is the length (i.e. number of hops) of the shortest path between the concepts, h is the depth of the deepest concept subsuming both concepts, and κ1 and κ2 are parameters scaling the contributions of the shortest path length and the depth respectively.

3.2 Language

The shape of the language that α uses to represent the information received and the content of its dialogues depends on two fundamental notions. First, when agents interact within an overarching institution they explicitly or implicitly accept the norms that will constrain their behaviour, and accept the established sanctions and penalties whenever norms are violated. Second, the dialogues in which α engages are built around two fundamental actions: (i) passing information, and (ii) exchanging proposals and contracts. A contract δ = (a, b) between agents α and β is a pair where a and b represent the actions that agents α and β are responsible for, respectively. Contracts signed by agents and information passed by agents are similar to norms in the sense that they oblige agents to behave in a particular way, so as to satisfy the conditions of the contract, or to make the world consistent with the information passed. Contracts and information can thus be thought of as normative statements that restrict an agent's behaviour. Norms, contracts, and information have an obvious temporal dimension. Thus, an agent has to abide by a norm while it is inside an institution, a contract has a validity period, and a piece of information is true only during an interval in time. The set of norms affecting the behaviour of an agent defines the context that the agent has to take into account. α's communication language has two fundamental primitives: Commit(α, β, ϕ) to represent, in ϕ, the world that α aims at bringing about and that β has the right to verify, complain about or claim compensation for any deviations from, and Done(μ) to represent
the event that a certain action μ⁴ has taken place. In this way, norms, contracts, and information chunks will be represented as instances of Commit(·) where α and β can be individual agents or institutions. C is:

where ϕv is a formula with free variable v, illoc is any appropriate set of illocutionary particles, `;' means sequencing, and context represents either previous agreements, previous illocutions, the ontological working context (that is, a projection of the ontological trees that represent the focus of the conversation), or code that aligns the ontological differences between the speakers needed to interpret an action a. Representing an ontology as a set of predicates in Prolog is simple. The set term contains instances of the ontology concepts and relations.⁵ For example, we can represent the following offer: "If you spend a total of more than $100 in my shop during October then I will give you a 10% discount on all goods in November", as:

ξ is an institution agent that reports the payment.

⁴ Without loss of generality we will assume that all actions are dialogical.
⁵ We assume the convention that C(c) means that c is an instance of concept C, and r(c1, ..., cn) implicitly determines that ci is an instance of the concept in the i-th position of the relation r.

Figure 1: The LOGIC agent architecture

4. AGENT ARCHITECTURE

A multiagent system {α, β1, ..., βn, ξ, B1, ..., Bt} contains an agent α that interacts with other argumentation agents, βi, information-providing agents, Bj, and an institutional agent, ξ, that represents the institution where we assume the interactions happen [2]. The institutional agent reports promptly and honestly on what actually occurs after an agent signs a contract, or makes some other form of commitment. In Section 4.1 this enables us to measure the difference between an utterance and a subsequent observation. The communication language C introduced in Section 3.2 enables us both to structure the dialogues and to structure the processing of the information gathered by agents. Agents have a probabilistic first-order internal language L used to represent a world model, Mt. A generic information-based architecture is described in detail in [15]. The LOGIC agent architecture is shown in Figure 1. Agent α acts in response to a need that is expressed in terms of the ontology. A need may be exogenous, such as a need to trade profitably, possibly triggered by another agent offering to trade, or endogenous, such as α deciding that it owns more wine than it requires. Needs trigger α's goal/plan proactive reasoning, while other messages are dealt with by α's reactive reasoning.⁶ Each plan prepares for the negotiation by assembling the contents of a `LOGIC briefcase' that the agent `carries' into the negotiation.⁷ The relationship strategy determines which agent to negotiate with for a given need; it uses risk management analysis to preserve a strategic set of trading relationships for each mission-critical need (this is not detailed here). For each trading relationship this strategy generates a relationship target that is expressed in the LOGIC framework as a desired level of
intimacy to be achieved in the long term. Each negotiation consists of a dialogue, Ψt, between two agents, with agent α contributing utterance μ and the partner β contributing μ′, using the language described in Section 3.2. Each dialogue, Ψt, is evaluated using the LOGIC framework in terms of the value of Ψt to both α and β (see Section 5.2). The negotiation strategy then determines the current set of Options {δi}, and then the tactics, guided by the negotiation target, decide which, if any, of these Options to put forward and wrap them in argumentation dialogue (see Section 6).

⁶ Each of α's plans and reactions contains constructors for an initial world model Mt. Mt is then maintained from percepts received using update functions that transform percepts into constraints on Mt; for details, see [14, 15].
⁷ Empirical evidence shows that in human negotiation, better outcomes are achieved by skewing the opening Options in favour of the proposer. We are unaware of any empirical investigation of this hypothesis for autonomous agents in real trading scenarios.

We now describe two of the distributions in Mt that support offer exchange. Pt(acc(α, β, χ, δ)) estimates the probability that α should accept proposal δ in satisfaction of her need χ, where δ = (a, b) is a pair of commitments, a for α and b for β. α will accept δ if Pt(acc(α, β, χ, δ)) > c, for level of certainty c. This estimate is compounded from subjective and objective views of acceptability. The subjective estimate takes account of: the extent to which the enactment of δ will satisfy α's need χ, how much δ is `worth' to α, and the extent to which α believes that she will be in a position to execute her commitment a [14, 15]. Sα(β, a) is a random variable denoting α's estimate of β's subjective valuation of a over some finite, numerical evaluation space. The objective estimate captures whether δ is acceptable on the open market, and the variable Uα(b) denotes α's open-market valuation of the enactment of commitment b, again taken over some finite numerical valuation space. We also consider needs: the variable Tα(β, a) denotes α's estimate of the strength of β's motivating need for the enactment of commitment a over a valuation space. Then for δ = (a, b): Pt(acc(α, β, χ, δ)) =

where g ∈ [0, 1] is α's greed, h ∈ [0, 1] is α's degree of altruism, and s ≈ 1 is derived from the stance⁸ described in Section 6. The parameters g and h are independent. We can imagine a relationship that begins with g = 1 and h = 0. Then, as the agents share increasing amounts of their information about their open-market valuations, g gradually reduces to 0, and then, as they share increasing amounts of information about their needs, h increases to 1. The basis for the acceptance criterion has thus developed from equity to equality, and then to need. Pt(acc(β, α, δ)) estimates the probability that β would accept δ, by observing β's responses. For example, if β sends the message Offer(δ1) then α derives the constraint {Pt(acc(β, α, δ1)) = 1} on the distribution Pt(β, α, δ), and if this is a counter-offer to a former offer of α's, δ0, then {Pt(acc(β, α, δ0)) = 0}. In the not-atypical special case of multi-issue bargaining where the agents' preferences over the individual issues only are known and are complementary to each other's, maximum entropy reasoning can be applied to estimate the probability that any multi-issue δ will be acceptable to β by enumerating the possible worlds that represent β's "limit of acceptability" [6].

4.1 Updating the World Model Mt

α's world model consists of probability distributions that represent its uncertainty in the world state. α is interested

⁸ If α chooses to inflate her opening Options then this is achieved in Section 6 by increasing
the value of s. If s ≫ 1 then a deal may not be possible. This illustrates the well-known inefficiency of bilateral bargaining established analytically by Myerson and Satterthwaite in 1983.

in the degree to which an utterance accurately describes what will subsequently be observed. All observations about the world are received as utterances from an all-truthful institution agent ξ. For example, if β communicates the goal "I am hungry" and the subsequent negotiation terminates with β purchasing a book from α (by ξ advising α that a certain amount of money has been credited to α's account), then α may conclude that the goal that β chose to satisfy was something other than hunger. So, α's world model contains probability distributions that represent its uncertain expectations of what will be observed on the basis of utterances received. We represent the relationship between utterance, ϕ, and subsequent observation, ϕ′, by Pt(ϕ′|ϕ) ∈ Mt, where ϕ′ and ϕ may be ontological categories in the interest of computational feasibility. For example, if ϕ is "I will deliver a bucket of fish to you tomorrow" then the distribution P(ϕ′|ϕ) need not be over all possible things that β might do, but could be over ontological categories that summarise β's possible actions. In the absence of incoming utterances, the conditional probabilities, Pt(ϕ′|ϕ), should tend to ignorance as represented by a decay limit distribution D(ϕ′|ϕ). α may have background knowledge concerning D(ϕ′|ϕ) as t → ∞; otherwise α may assume that it has maximum entropy whilst being consistent with the data. In general, given a distribution Pt(Xi) and a decay limit distribution D(Xi), Pt(Xi) decays by:

Pt+1(Xi) = Δi(D(Xi), Pt(Xi))    (2)

where Δi is the decay function for the
Xi satisfying the property that lim t→∞ Pt(Xi) = D(Xi). For example, Δi could be linear: Pt+1(Xi) = (1 − νi) × D(Xi) + νi × Pt(Xi), where νi < 1 is the decay rate for the i-th distribution. Either the decay function or the decay limit distribution could also be a function of time: Δti and Dt(Xi).

Suppose that α receives an utterance μ = illoc(α, β, ϕ, t) from agent β at time t, and that α attaches an epistemic belief Rt(α, β, μ) to μ; this probability takes account of α's level of personal caution. We model the update of Pt(ϕ′|ϕ) in two cases: one for observations given ϕ, and one for observations given φ in the semantic neighbourhood of ϕ. In what follows, K(x ∥ y) = Σj xj log (xj / yj) is the Kullback-Leibler distance between two probability distributions x and y [11]. Finally, incorporating Eqn. 2 we obtain the method for updating a distribution Pt(ϕ′|ϕ) on receipt of a message μ:

This procedure deals with integrity decay, and with two probabilities: first, the probability z in the utterance μ, and second the belief Rt(α, β, μ) that α attached to μ.

4.3 Update of Pt(φ′|φ) given ϕ

The sim method: Given, as above, μ = illoc(α, β, ϕ, t) and the observation ϕk, we define the vector t by

with {φ1, φ2, ..., φp} the set of all possible observations in the context of φ and i = 1, ..., p.
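The linear decay just described is easy to make concrete. A minimal sketch, assuming an invented three-outcome distribution and a uniform (maximum-entropy) decay limit; the names and numbers are illustrative only, not from the paper:

```python
# Sketch of integrity decay: each step applies the linear decay function
# P^{t+1}(X_i) = (1 - nu) * D(X_i) + nu * P^t(X_i), so in the absence of
# incoming utterances the estimate tends to the decay-limit distribution D.

def decay_step(p, d, nu=0.8):
    """One linear decay step toward the decay-limit distribution d; nu < 1."""
    return [(1.0 - nu) * d_i + nu * p_i for p_i, d_i in zip(p, d)]

p = [0.9, 0.05, 0.05]        # current estimate P^t(X)
d = [1 / 3, 1 / 3, 1 / 3]    # maximum-entropy decay limit D(X)
for _ in range(100):
    p = decay_step(p, d)
# after enough steps p is numerically indistinguishable from d,
# matching the property lim_{t -> inf} P^t(X) = D(X)
```

Choosing νi closer to 1 slows the drift towards ignorance, mirroring the role of the decay rate in the text.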
t is not a probability distribution. The multiplying factor Sim(ϕ′, φ) limits the variation of probability to those formulae whose ontological context is not too far away from the observation. The posterior Pt+1(φ′|φ) is obtained with Equation 3, with r(μ) defined to be the normalisation of t.

The valuation method: For a given φk,

wexp(φk) = Σ_{j=1}^{m} Pt(φj|φk) · w(φj)

is α's expectation of the value of what will be observed given that β has stated that φk will be observed, for some measure w. Now suppose that, as before, α observes ϕk after agent β has stated ϕ. α revises the prior estimate of the expected valuation wexp(φk) in the light of the observation ϕk to:

4.2 Update of Pt(ϕ′|ϕ) given ϕ

First, if ϕk is observed then α may set Pt+1(ϕk|ϕ) to some value d, where {ϕ1, ϕ2, ..., ϕm} is the set of all possible observations. We estimate the complete posterior distribution Pt+1(ϕ′|ϕ) by applying the principle of minimum relative entropy⁹ as follows. Let p(μ) be the distribution:

⁹ Given a probability distribution q, the minimum relative entropy distribution p = (p1, ..., pI) subject to a set of J linear constraints g = {gj(p) = aj · p − cj = 0}, j = 1, ..., J (which must include the constraint Σi pi − 1 = 0), is p = arg min_r Σj rj log (rj / qj) subject to g. This may be calculated by introducing Lagrange multipliers λ: L(p, λ) = Σj pj log (pj / qj) + λ · g. Minimising L, {∂L/∂λj = gj(p) = 0}, j = 1, ..., J, is the set of given constraints g, and a solution to ∂L/∂pi = 0, i = 1, ..., I, leads eventually to p.
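For the single constraint used in this update (pinning Pt+1(ϕk|ϕ) to a value d, plus normalisation), the minimum-relative-entropy posterior has a simple closed form: the remaining probability mass stays proportional to the prior. A minimal sketch under that assumption; the function names and example numbers are my own, not from the paper:

```python
import math

def kl(p, q):
    """Relative entropy sum_i p_i * log(p_i / q_i) between two distributions."""
    return sum(p_i * math.log(p_i / q_i) for p_i, q_i in zip(p, q) if p_i > 0)

def mre_posterior(q, k, d):
    """Minimum-relative-entropy posterior subject to p[k] = d and
    normalisation: the Lagrangian solution leaves the other outcomes
    proportional to the prior q, rescaled to total mass 1 - d."""
    scale = (1.0 - d) / (1.0 - q[k])
    return [d if i == k else q_i * scale for i, q_i in enumerate(q)]

prior = [0.5, 0.3, 0.2]                    # prior over possible observations
post = mre_posterior(prior, k=0, d=0.7)    # approximately [0.7, 0.18, 0.12]
```

Any other feasible distribution with its k-th entry fixed at d lies strictly further from the prior in relative entropy, which is exactly the minimisation stated in footnote 9.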
Entropy-based inference is a form of Bayesian inference that is convenient when the data is sparse [5] and encapsulates common-sense reasoning [12]. for some function ~g -- the idea being, for example, that if the execution, ϕk, of the commitment, ϕ, to supply cheese was devalued then α's expectation of the value of a commitment, φ, to supply wine should decrease. We estimate the posterior by applying the principle of minimum relative entropy as for Equation 3, where the distribution ~p(μ) = ~p(~φ|φ) satisfies the constraint:

5. SUMMARY MEASURES
A dialogue, Ψt, between agents α and β is a sequence of inter-related utterances in context. A relationship, Ψ*t, is a sequence of dialogues. We first measure the confidence that an agent has for another by observing, for each utterance, the difference between what is said (the utterance) and what subsequently occurs (the observation). Second, we evaluate each dialogue as it progresses in terms of the LOGIC framework -- this evaluation employs the confidence measures. Finally, we define the intimacy of a relationship as an aggregation of the value of its component dialogues.

1034 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

5.1 Confidence
Confidence measures generalise what are commonly called trust, reliability and reputation measures into a single computational framework that spans the LOGIC categories. In Section 5.2 confidence measures are applied to valuing fulfilment of promises in the Legitimacy category -- we formerly called this "honour" [14]; to the execution of commitments -- we formerly called this "trust" [13]; and to valuing dialogues in the Goals category -- we formerly called this "reliability" [14].

Ideal observations. Consider a distribution of observations that represent α's "ideal" in the sense that it is the best that α could reasonably expect to observe. This
distribution will be a function of α's context with β, denoted by e, and is PtI(~ϕ|ϕ, e). Here we measure the relative entropy between this ideal distribution, PtI(~ϕ|ϕ, e), and the distribution of expected observations, Pt(~ϕ|ϕ). That is:
where the "1" is an arbitrarily chosen constant being the maximum value that this measure may have. This equation measures confidence for a single statement ϕ. It makes sense to aggregate these values over a class of statements, say over those ϕ that are in the ontological context o, that is ϕ ≤ o:
where Ptβ(ϕ) is the probability that the next statement that β makes to α is ϕ. Similarly, for an overall estimate of β's confidence in α:

Preferred observations. The previous measure requires that an ideal distribution, PtI(~ϕ|ϕ, e), has to be specified for each ϕ. Here we measure the extent to which the observation ~ϕ is preferable to the original statement ϕ. Given a predicate Prefer(c1, c2, e), meaning that α prefers c1 to c2 in environment e, then if ϕ ≤ o:

Certainty in observation. Here we measure the consistency in expected acceptable observations, or "the lack of expected uncertainty in those possible observations that are better than the original statement". If ϕ ≤ o, let: Φ+(ϕ, o, κ) = {~ϕ | Pt(Prefer(~ϕ, ϕ, o)) > κ} for some constant κ, and:
As above, we aggregate this measure for observations in a particular context o, and measure confidence as before.

Computational Note. The various measures given above involve extensive calculations. For example, Eqn. 4 contains a sum Σ~ϕ over all possible observations ~ϕ. We obtain a more computationally friendly measure by appealing to the structure of the ontology described in Section 3.2, and the right-hand side of Eqn. 4 may be
approximated to:
where Ptη,I(~ϕ|ϕ, e) is the normalisation of PtI(~ϕ|ϕ, e) for Sim(~ϕ, ϕ) > η, and similarly for Ptη(~ϕ|ϕ). The extent of this calculation is controlled by the parameter η. An even tighter restriction may be obtained with: Sim(~ϕ, ϕ) > η and ~ϕ ≤ ψ for some ψ.

5.2 Valuing negotiation dialogues
Suppose that a negotiation commences at time s, and by time t a string of utterances, Φt = (μ1, ..., μn), has been exchanged between agent α and agent β. This negotiation dialogue is evaluated by α in the context of α's world model at time s, Ms, and the environment e that includes utterances that may have been received from other agents in the system, including the information sources {θi}. Let Ψt = (Φt, Ms, e); then α estimates the value of this dialogue to itself in the context of Ms and e as a 2 × 5 array Vα(Ψt), where the I(·) and U(·) functions are information-based and utility-based measures respectively, as we now describe. α estimates the value of this dialogue to β as Vβ(Ψt) by assuming that β's reasoning apparatus mirrors its own. In general terms, the information-based valuations measure the reduction in uncertainty, or information gain, that the dialogue gives to each agent; they are expressed in terms of decrease in entropy, which can always be calculated. The utility-based valuations measure utility gain and are expressed in terms of "some suitable" utility evaluation function U(·) that can be difficult to define. This is one reason why the utilitarian approach has no natural extension to the management of argumentation that is achieved here by our information-based approach. For example, if α receives the utterance "Today is Tuesday" then this may be translated into a constraint on a single distribution, and the resulting decrease in
entropy is the information gain. Attaching a utilitarian measure to this utterance may not be so simple. We use the term "2 × 5 array" loosely to describe Vα, in that the elements of the array are lists of measures that will be determined by the agent's requirements. Table 2 shows a sample measure for each of the ten categories; in it, the dialogue commences at time s and terminates at time t. In that Table, U(·) is a suitable utility evaluation function; needs(β, χ) means "agent β needs the need χ"; cho(β, χ, γ) means "agent β satisfies need χ by choosing to negotiate with agent γ"; N is the set of needs chosen from the ontology at some suitable level of abstraction; Tt is the set of offers on the table at time t; com(β, γ, b) means "agent β has an outstanding commitment with agent γ to execute the commitment b", where b is defined in the ontology at some suitable level of abstraction; B is the number of such commitments; and there are n + 1 agents in the system.

5.3 Intimacy and Balance
The balance in a negotiation dialogue, Ψt, is defined as: Bαβ(Ψt) = Vα(Ψt) ~ Vβ(Ψt), for an element-by-element difference operator ~ that respects the structure of V(Ψt). The intimacy between agents α and β, I*t αβ, is the pattern of the two 2 × 5 arrays V*t α and V*t β that are computed by an update function as each negotiation round terminates, where ν is the learning rate, and x = α, β. Additionally, V*t x continually decays by: V*(t+1) x = τ × V*t x + (1 − τ) × Dx, where x = α, β; τ is the decay rate, and Dx is a 2 × 5 array being the decay limit distribution for the value to agent x of the intimacy of the relationship in the absence of
any interaction. Dx is the reputation of agent x. The relationship balance between agents α and β is: B*t αβ = V*t α ~ V*t β. In particular, the intimacy determines values for the parameters g and h in Equation 1. As a simple example, if both IOα(Ψ*t) and IOβ(Ψ*t) increase then g decreases, and as the remaining eight information-based LOGIC components increase, h increases. The notion of balance may be applied to pairs of utterances by treating them as degenerate dialogues. In simple multi-issue bargaining, the equitable information revelation strategy generalises the tit-for-tat strategy in single-issue bargaining, and extends to a tit-for-tat argumentation strategy by applying the same principle across the LOGIC framework.

6. STRATEGIES AND TACTICS
Each negotiation has to achieve two goals. First, it may be intended to achieve some contractual outcome. Second, it will aim to contribute to the growth, or decline, of the relationship intimacy. We now describe in greater detail the contents of the "Negotiation" box in Figure 1. The negotiation literature consistently advises that an agent's behaviour should not be predictable even in close, intimate relationships. The required variation of behaviour is normally described as varying the negotiation stance, which informally varies from "friendly guy" to "tough guy". The stance is shown in Figure 1; it injects bounded random noise into the process, where the bound tightens as intimacy increases. The stance, St αβ, is a 2 × 5 matrix of randomly chosen multipliers, each ≈ 1, that perturbs α's actions. The value in the (x, y) position in the matrix, where x = I, U and y = L, O, G, I, C, is chosen at random from [1/l(I*t αβ, x, y), l(I*t αβ, x, y)], where l(I*t αβ, x, y) is the bound and I*t αβ is the intimacy. The negotiation strategy is concerned with maintaining a working set of Options. If the set of options is empty then α will quit the negotiation. α perturbs the acceptance machinery (see Section 4) by deriving s from the St αβ matrix, such as the value at the (I, O) position. In line with the comment in Footnote 7, in the early stages of the negotiation α may decide to inflate her opening Options. This is achieved by increasing the value of s in Equation 1. The following strategy uses the machinery described in Section 4. Fix h, g, s and c, set the Options to the empty set, and let Dts = {δ | Pt(acc(α, β, χ, δ)) > c}; then:

• repeat the following as many times as desired: add δ = arg max x {Pt(acc(β, α, x)) | x ∈ Dts} to Options, and remove {y ∈ Dts | Sim(y, δ) < k}, for some k, from Dts.

By using Pt(acc(β, α, δ)) this strategy reacts to β's history of Propose and Reject utterances. Negotiation tactics are concerned with selecting some Options and wrapping them in argumentation. Prior interactions with agent β will have produced an intimacy pattern expressed in the form of (V*t α, V*t β). Suppose that the relationship target is (T*t α, T*t β). Following from Equation 5, α will want to achieve a negotiation target, Nβ(Ψt), such that: ν · Nβ(Ψt) + (1 − ν) · V*t β is "a bit on the T*t β side of" V*t β, for small κ ∈ [0, ν] that represents α's desired rate of development for her relationship with β. Nβ(Ψt) is a 2 × 5 matrix containing variations in the LOGIC dimensions that α would like to reveal to β during Ψt (e.g.
I'll pass a bit more information on options than usual, I'll be stronger in concessions on options, etc.).\nIt is reasonable to expect \u03b2 to progress towards her target at the same rate and N\u03b1 (\u03a8t) is calculated by replacing \u03b2 by \u03b1 in Equation 6.\nN\u03b1 (\u03a8t) is what \u03b1 hopes to receive from \u03b2 during \u03a8t.\nThis gives a negotiation balance target of: N\u03b1 (\u03a8t) ~ N\u03b2 (\u03a8t) that can be used as the foundation for reactive tactics by striving to maintain this balance across the LOGIC dimensions.\nA cautious tactic could use the balance to bound the response \u03bc to each utterance \u03bc' from \u03b2 by the constraint: V\u03b1 (\u03bc') ~ V\u03b2 (\u03bc) \u2248 St\u03b1\u03b2 \u2297 (N\u03b1 (\u03a8t) ~ N\u03b2 (\u03a8t)), where \u2297 is element-by-element matrix multiplication, and St\u03b1\u03b2 is the stance.\nA less neurotic tactic could attempt to achieve the target negotiation balance over the anticipated complete dialogue.\nIf a balance bound requires negative information revelation in one LOGIC category then \u03b1 will contribute nothing to it, and will leave this to the natural decay to the reputation D as described above.","keyphrases":["negoti","negoti strategi","success negoti encount","long term relationship","utter","utilitarian interpret","ontolog","set predic","multiag system","logic agent architectur","accept view","accept criterion","compon dialogu","confid measur"],"prmu":["P","P","M","M","U","M","U","M","U","M","M","U","U","U"]} {"id":"H-37","title":"Relaxed Online SVMs for Spam Filtering","abstract":"Spam is a key problem in electronic communication, including large-scale email systems and the growing number of blogs. Content-based filtering is one reliable method of combating this threat in its various forms, but some academic researchers and industrial practitioners disagree on how best to filter spam. 
The former have advocated the use of Support Vector Machines (SVMs) for content-based filtering, as this machine learning methodology gives state-of-the-art performance for text classification. However, similar performance gains have yet to be demonstrated for online spam filtering. Additionally, practitioners cite the high cost of SVMs as reason to prefer faster (if less statistically robust) Bayesian methods. In this paper, we offer a resolution to this controversy. First, we show that online SVMs indeed give state-of-the-art classification performance on online spam filtering on large benchmark data sets. Second, we show that nearly equivalent performance may be achieved by a Relaxed Online SVM (ROSVM) at greatly reduced computational cost. Our results are experimentally verified on email spam, blog spam, and splog detection tasks.","lvl-1":"Relaxed Online SVMs for Spam Filtering D. Sculley Tufts University Department of Computer Science 161 College Ave., Medford, MA USA dsculleycs.tufts.edu Gabriel M. 
Wachman Tufts University Department of Computer Science 161 College Ave., Medford, MA USA gwachm01cs.tufts.edu ABSTRACT Spam is a key problem in electronic communication, including large-scale email systems and the growing number of blogs.\nContent-based filtering is one reliable method of combating this threat in its various forms, but some academic researchers and industrial practitioners disagree on how best to filter spam.\nThe former have advocated the use of Support Vector Machines (SVMs) for content-based filtering, as this machine learning methodology gives state-of-the-art performance for text classification.\nHowever, similar performance gains have yet to be demonstrated for online spam filtering.\nAdditionally, practitioners cite the high cost of SVMs as reason to prefer faster (if less statistically robust) Bayesian methods.\nIn this paper, we offer a resolution to this controversy.\nFirst, we show that online SVMs indeed give state-of-the-art classification performance on online spam filtering on large benchmark data sets.\nSecond, we show that nearly equivalent performance may be achieved by a Relaxed Online SVM (ROSVM) at greatly reduced computational cost.\nOur results are experimentally verified on email spam, blog spam, and splog detection tasks.\nCategories and Subject Descriptors H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - spam General Terms Measurement, Experimentation, Algorithms 1.\nINTRODUCTION Electronic communication is increasingly plagued by unwanted or harmful content known as spam.\nThe most well known form of spam is email spam, which remains a major problem for large email systems.\nOther forms of spam are also becoming problematic, including blog spam, in which spammers post unwanted comments in blogs [21], and splogs, which are fake blogs constructed to enable link spam with the hope of boosting the measured importance of a given webpage in the eyes of automated search engines [17].\nThere are a 
variety of methods for identifying these many forms of spam, including compiling blacklists of known spammers, and conducting link analysis. The approach of content analysis has shown particular promise and generality for combating spam. In content analysis, the actual message text (often including hyper-text and meta-text, such as HTML and headers) is analyzed using machine learning techniques for text classification to determine if the given content is spam. Content analysis has been widely applied in detecting email spam [11], and has also been used for identifying blog spam [21] and splogs [17]. In this paper, we do not explore the related problem of link spam, which is currently best combated by link analysis [13].

1.1 An Anti-Spam Controversy
The anti-spam community has been divided on the choice of the best machine learning method for content-based spam detection. Academic researchers have tended to favor the use of Support Vector Machines (SVMs), a statistically robust machine learning method [7] which yields state-of-the-art performance on general text classification [14]. However, SVMs typically require training time that is quadratic in the number of training examples, and are impractical for large-scale email systems. Practitioners requiring content-based spam filtering have typically chosen to use the faster (if less statistically robust) machine learning method of Naive Bayes text classification [11, 12, 20]. This Bayesian method requires only linear training time, and is easily implemented in an online setting with incremental updates. This allows a deployed system to easily adapt to a changing environment over time. Other fast methods for spam filtering include compression models [1] and logistic regression [10]. It has not yet been empirically demonstrated that SVMs give improved performance over these methods in an online spam detection setting [4].

1.2 Contributions
In this paper, we address the anti-spam controversy and offer a
potential resolution. We first demonstrate that online SVMs do indeed provide state-of-the-art spam detection through empirical tests on several large benchmark data sets of email spam. We then analyze the effect of the tradeoff parameter in the SVM objective function, which shows that the expensive SVM methodology may, in fact, be overkill for spam detection. We reduce the computational cost of SVM learning by relaxing this requirement on the maximum margin in online settings, and create a Relaxed Online SVM, ROSVM, appropriate for high performance content-based spam filtering in large-scale settings.

2. SPAM AND ONLINE SVMS
The controversy between academics and practitioners in spam filtering centers on the use of SVMs. The former advocate their use, but have yet to demonstrate strong performance with SVMs on online spam filtering. Indeed, the results of [4] show that, when used with default parameters, SVMs actually perform worse than other methods. In this section, we review the basic workings of SVMs and describe a simple Online SVM algorithm. We then show that Online SVMs indeed achieve state-of-the-art performance on filtering email spam, blog comment spam, and splogs, so long as the tradeoff parameter C is set to a high value. However, the cost of Online SVMs turns out to be prohibitive for large-scale applications. These findings motivate our proposal of Relaxed Online SVMs in the following section.

2.1 Background: SVMs
SVMs are a robust machine learning methodology which has been shown to yield state-of-the-art performance on text classification [14], by finding a hyperplane that separates two classes of data in data space while maximizing the margin between them. We use the following notation to describe SVMs, which draws from [23]. A data set X contains n labeled example vectors {(x1, y1) ...
(xn, yn)}, where each xi is a vector containing features describing example i, and each yi is the class label for that example. In spam detection, the classes spam and ham (i.e., not spam) are assigned the numerical class labels +1 and −1, respectively. The linear SVMs we employ in this paper use a hypothesis vector w and bias term b to classify a new example x by generating a predicted class label f(x):

    f(x) = sign(⟨w, x⟩ + b)

SVMs find the hypothesis w, which defines the separating hyperplane, by minimizing the following objective function over all n training examples:

    τ(w, ξ) = (1/2)‖w‖² + C Σ i=1..n ξi

under the constraints that for all i ∈ {1, ..., n}: yi(⟨w, xi⟩ + b) ≥ 1 − ξi and ξi ≥ 0. In this objective function, each slack variable ξi shows the amount of error that the classifier makes on a given example xi. Minimizing the sum of the slack variables corresponds to minimizing the loss function on the training data, while minimizing the term (1/2)‖w‖² corresponds to maximizing the margin between the two classes [23]. These two optimization goals are often in conflict; the tradeoff parameter C determines how much importance to give each of these tasks. Linear SVMs exploit data sparsity to classify a new instance in O(s) time, where s is the number of non-zero features. This is the same classification time as other linear classifiers, and as Naive Bayesian classification. Training SVMs, however, typically takes O(n²) time, for n training examples. A variant for linear SVMs was recently proposed which trains in O(ns) time [15], but because this method has a high constant, we do not explore it here.

    Given: data set X = (x1, y1), ..., (xn, yn), C, m:
    Initialize w := 0, b := 0, seenData := { }
    For Each xi ∈ X do:
        Classify xi using f(xi) = sign(⟨w, xi⟩ + b)
        If yi f(xi) < 1:
            Find w′, b′ using SMO on seenData, using w, b as seed hypothesis
        Add xi to seenData
    done

Figure 1: Pseudo code for Online SVM.

2.2 Online SVMs
In many traditional machine learning applications, SVMs are applied in batch mode. That is, an SVM is trained on an entire set of training data, and is then tested on a separate set of testing data. Spam filtering is typically tested and deployed in an online setting, which proceeds incrementally. Here, the learner classifies a new example, is told if its prediction is correct, updates its hypothesis accordingly, and then awaits a new example. Online learning allows a deployed system to adapt itself in a changing environment. Re-training an SVM from scratch on the entire set of previously seen data for each new example is cost prohibitive. However, using an old hypothesis as the starting point for re-training reduces this cost considerably. One method of incremental and decremental SVM learning was proposed in [2]. Because we are only concerned with incremental learning, we apply a simpler algorithm for converting a batch SVM learner into an online SVM (see Figure 1 for pseudocode), which is similar to the approach of [16]. Each time the Online SVM encounters an example that was poorly classified, it retrains using the old hypothesis as a starting point. Note that due to the Karush-Kuhn-Tucker (KKT) conditions, it is not necessary to re-train on well-classified examples that are outside the margins [23]. We used Platt's SMO algorithm [22] as a core SVM solver, because it is an iterative
method that is well suited to converge quickly from a good initial hypothesis. Because previous work (and our own initial testing) indicates that binary feature values give the best results for spam filtering [20, 9], we optimized our implementation of the Online SMO to exploit fast inner-products with binary vectors.1

2.3 Feature Mapping Spam Content
Extracting machine learning features from text may be done in a variety of ways, especially when that text may include hyper-content and meta-content such as HTML and header information. However, previous research has shown that simple methods from text classification, such as bag of words vectors and overlapping character-level n-grams, can achieve strong results [9]. Formally, a bag of words vector is a vector x with a unique dimension for each possible word, defined as a contiguous substring of non-whitespace characters. An n-gram vector is a vector x with a unique dimension for each possible substring of n total characters. Note that n-grams may include whitespace, and are overlapping. We use binary feature scoring, which has been shown to be most effective for a variety of spam detection methods [20, 9]. We normalize the vectors with the Euclidean norm. Furthermore, with email data, we reduce the impact of long messages (for example, with attachments) by considering only the first 3,000 characters of each string. For blog comments and splogs, we consider the whole text, including any meta-data such as HTML tags, as given. No other feature selection or domain knowledge was used.

1 Our source code is freely available at www.cs.tufts.edu/~dsculley/onlineSMO.

Figure 2: Tuning the Tradeoff Parameter C. Tests were conducted with Online SMO, using binary feature vectors, on the spamassassin data set of 6034 examples. Graph plots C (from 0.1 to 1000) versus Area under the ROC curve, for words and 2-, 3-, and 4-gram features.

2.4 Tuning the Tradeoff Parameter, C
The SVM tradeoff parameter C must be tuned to balance the (potentially conflicting) goals of maximizing the margin and minimizing the training error. Early work on SVM-based spam detection [9] showed that high values of C give best performance with binary features. Later work has not always followed this lead: a (low) default setting of C was used on splog detection [17], and also on email spam [4]. Following standard machine learning practice, we tuned C on separate tuning data not used for later testing. We used the publicly available spamassassin email spam data set, and created an online learning task by randomly interleaving all 6034 labeled messages to create a single ordered set. For tuning, we performed a coarse parameter search for C using powers of ten from .0001 to 10000. We used the Online SVM described above, and tested both binary bag of words vectors and n-gram vectors with n = {2, 3, 4}. We used the first 3000 characters of each message, which included header information, body of the email, and possibly
attachments. Following the recommendation of [6], we use Area under the ROC curve as our evaluation measure. The results (see Figure 2) agree with [9]: there is a plateau of high performance achieved with all values of C ≥ 10, and performance degrades sharply with C < 1. For the remainder of our experiments with SVMs in this paper, we set C = 100. We will return to the observation that very high values of C do not degrade performance as support for the intuition that relaxed SVMs should perform well on spam.

Table 1: Results for Email Spam filtering with Online SVM on benchmark data sets. Score reported is (1-ROCA)%, where 0 is optimal.

                  trec05p-1            trec06p
    OnSVM: words  0.015 (.011-.022)    0.034 (.025-.046)
    3-grams       0.011 (.009-.015)    0.025 (.017-.035)
    4-grams       0.008 (.007-.011)    0.023 (.017-.032)
    SpamProbe     0.059 (.049-.071)    0.092 (.078-.110)
    BogoFilter    0.048 (.038-.062)    0.077 (.056-.105)
    TREC Winners  0.019 (.015-.023)    0.054 (.034-.085)
    53-Ensemble   0.007 (.005-.008)    0.020 (.007-.050)

Table 2: Results for Blog Comment Spam Detection using SVMs and Leave One Out Cross Validation. We report the same performance measures as in the prior work for meaningful comparison.

                        accuracy  precision  recall
    SVM C = 100: words  0.931     0.946      0.954
    3-grams             0.951     0.963      0.965
    4-grams             0.949     0.967      0.956
    Prior best method   0.83      0.874      0.874

2.5 Email Spam and Online SVMs
With C tuned on a separate tuning set, we then tested the performance of Online SVMs in spam detection. We used two large benchmark data sets of email spam as our test corpora. These data sets are the 2005 TREC public data set trec05p-1 of 92,189 messages, and the 2006 TREC public data set trec06p, containing 37,822 messages in English. (We do not report our strong results on the trec06c corpus of Chinese messages, as there have been questions raised over the validity of this test set.) We used the canonical ordering provided with each of these data sets for fair comparison. Results for these experiments, with bag of
words vectors and n-gram vectors, appear in Table 1. To compare our results with previous scores on these data sets, we use the same (1-ROCA)% measure described in [6], which is one minus the area under the ROC curve, expressed as a percent. This measure shows the percent chance of error made by a classifier asserting that one message is more likely to be spam than another. These results show that Online SVMs do give state-of-the-art performance on email spam. The only known system that out-performs the Online SVMs on the trec05p-1 data set is a recent ensemble classifier which combines the results of 53 unique spam filters [19]. To our knowledge, the Online SVM has out-performed every other single filter on these data sets, including those using Bayesian methods [5, 3], compression models [5, 3], logistic regression [10], and perceptron variants [3], the TREC competition winners [5, 3], and the open source email spam filters BogoFilter v1.1.5 and SpamProbe v1.4d.

2.6 Blog Comment Spam and SVMs
Blog comment spam is similar to email spam in many regards, and content-based methods have been proposed for detecting these spam comments [21]. However, large benchmark data sets of labeled blog comment spam do not yet exist. Thus, we run experiments on the only publicly available data set we know of, which was used in content-based blog comment spam detection experiments by [21]. Because of the small size of the data set, and because prior researchers did not conduct their experiments in an online setting, we test the performance of linear SVMs using leave-one-out cross validation, with SVM-Light, a standard open-source SVM implementation [14]. We use the parameter setting C = 100, with the same feature space mappings as above. We report accuracy, precision, and recall to compare these to the results given on the same data set by [21]. These results (see Table 2) show that SVMs give superior performance on this data set to the prior methodology.

Table 3: Results for Splog vs. Blog Detection using SVMs and Leave One Out Cross Validation. We report the same evaluation measures as in the prior work for meaningful comparison.

                        precision  recall  F1
    SVM C = 100: words  0.921      0.870   0.895
    3-grams             0.904      0.866   0.885
    4-grams             0.928      0.876   0.901
    Prior SVM: words    0.887      0.864   0.875
    4-grams             0.867      0.844   0.855
    words+urls          0.893      0.869   0.881

2.7 Splogs and SVMs
As with blog comment spam, there is not yet a large, publicly available benchmark corpus of labeled splog detection test data. However, the authors of [17] kindly provided us with the labeled data set of 1,389 blogs and splogs that they used to test content-based splog detection using SVMs. The only difference between our methodology and that of [17] is that they used default parameters for C, which SVM-Light sets to 1/avg‖x‖². (For normalized vectors, this default value sets C = 1.) They also tested several domain-informed feature mappings, such as giving special features to url tags. For our experiments, we used the same feature mappings as above, and tested the effect of setting C = 100. As with the methodology of [17], we performed leave-one-out cross validation for an apples-to-apples comparison on this data. The results (see Table 3) show that a high value of C produces higher performance for the same feature space mappings, and even enables the simple 4-gram
mapping to out-perform the previous best mapping, which incorporated domain knowledge by using words and urls.

2.8 Computational Cost
The results presented in this section demonstrate that linear SVMs give state of the art performance on content-based spam filtering. However, this performance comes at a price. Although the blog comment spam and splog data sets are too small for the quadratic training time of SVMs to appear problematic, the email data sets are large enough to illustrate the problems of quadratic training cost. Table 4 shows computation time versus data set size for each of the online learning tasks (on the same system). The training cost of SVMs is prohibitive for large-scale content-based spam detection, or for a large blog host. In the following section, we reduce this cost by relaxing the expensive requirements of SVMs.

Table 4: Execution time for Online SVMs with email spam detection, in CPU seconds. These times do not include the time spent mapping strings to feature vectors. The number of examples in each data set is given in the last row as corpus size.

features     trec06p   trec05p-1
words        12196s    66478s
3-grams      44605s    128924s
4-grams      87519s    242160s
corpus size  32822     92189

[Figure 3: Visualizing the effect of C. Hyperplane A maximizes the margin while accepting a small amount of training error; this corresponds to setting C to a low value. Hyperplane B accepts a smaller margin in order to reduce training error; this corresponds to setting C to a high value. Content-based spam filtering appears to do best with high values of C.]

3. RELAXED ONLINE SVMS (ROSVM)
One of the main benefits of SVMs is that they find a decision hyperplane that maximizes the margin between classes in the data space. Maximizing the margin is expensive, typically requiring quadratic training time in the number of training examples. However, as we saw in the previous section, the task of content-based spam detection is best achieved by SVMs with a high value of C.
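As a concrete illustration of this tradeoff, the sketch below trains a linear soft-margin SVM at a low and a high value of C on toy two-dimensional data containing one deliberately mislabeled point. This is a minimal sketch using scikit-learn's SVC rather than SVM-Light (an assumption made for brevity; it is not the authors' code). Since the geometric margin width is 2/||w||, raising C should yield an equal or narrower margin:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Two well-separated clusters, plus one deliberately mislabeled point near the boundary.
pos = rng.randn(20, 2) + [2.0, 2.0]
neg = rng.randn(20, 2) - [2.0, 2.0]
X = np.vstack([pos, neg, [[0.5, 0.5]]])
y = np.array([1] * 20 + [-1] * 20 + [-1])

results = {}
for C in (0.01, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    margin = 2.0 / np.linalg.norm(clf.coef_[0])   # geometric margin width
    train_err = float(np.mean(clf.predict(X) != y))
    results[C] = (margin, train_err)
    print(f"C={C}: margin width={margin:.3f}, training error rate={train_err:.3f}")
```

The low-C model behaves like Hyperplane A in Figure 3, tolerating the noisy point to keep a wide margin; the high-C model behaves like Hyperplane B, shrinking the margin in favor of lower training loss.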
Setting C to a high value for this domain implies that minimizing training loss is more important than maximizing the margin (see Figure 3). Thus, while SVMs do create high performance spam filters, applying them in practice is overkill. The full margin maximization feature that they provide is unnecessary, and relaxing this requirement can reduce computational cost. We propose three ways to relax Online SVMs:

• Reduce the size of the optimization problem by only optimizing over the last p examples.
• Reduce the number of training updates by only training on actual errors.
• Reduce the number of iterations in the iterative SVM solver by allowing an approximate solution to the optimization problem.

Given: dataset X = (x1, y1), ..., (xn, yn), and parameters C, m, p
Initialize w := 0, b := 0, seenData := { }
For each xi ∈ X do:
    Classify xi using f(xi) = sign(<w, xi> + b)
    If yi f(xi) < m:
        Find w', b' with SMO on seenData, using (w, b) as the seed hypothesis
        Set (w, b) := (w', b')
    If size(seenData) > p:
        Remove the oldest example from seenData
    Add xi to seenData
done

Figure 4: Pseudo-code for Relaxed Online SVM.

As we describe in the remainder of this section, all of these methods trade statistical robustness for reduced computational cost. Experimental results reported in the following section show that they equal or approach the performance of full Online SVMs on content-based spam detection.

3.1 Reducing Problem Size
In the full Online SVMs, we re-optimize over the full set of seen data on every update, which becomes expensive as the number of seen data points grows. We can bound this expense by only considering the p most recent examples for optimization (see Figure 4 for pseudo-code). Note that this is not equivalent to training a new SVM classifier from scratch on the p most recent examples, because each successive optimization problem is seeded with the previous hypothesis w [8]. This hypothesis may contain values for features that do not occur
anywhere in the p most recent examples, and these will not be changed. This allows the hypothesis to remember rare (but informative) features that were learned further than p examples in the past. Formally, the optimization problem is now defined most clearly in the dual form [23]. In this case, the original soft-margin SVM is computed by maximizing at example n:

W(α) = Σ_{i=1}^{n} α_i − (1/2) Σ_{i,j=1}^{n} α_i α_j y_i y_j <x_i, x_j>,

subject to the previous constraints [23]:

∀i ∈ {1, ..., n}: 0 ≤ α_i ≤ C   and   Σ_{i=1}^{n} α_i y_i = 0.

To this, we add the additional lookback buffer constraint

∀j ∈ {1, ..., (n − p)}: α_j = c_j,

where c_j is a constant, fixed as the last value found for α_j while j > (n − p). Thus, the margin found by an optimization is not guaranteed to be one that maximizes the margin for the global data set of examples {x_1, ..., x_n}, but rather one that satisfies a relaxed requirement that the margin be maximized over the examples {x_(n−p+1), ..., x_n}, subject to the fixed constraints on the hyperplane that were found in previous optimizations over examples {x_1, ..., x_(n−p)}. (For completeness, when p ≥ n, the set of fixed constraints is empty.) This set of constraints reduces the number of free variables in the optimization problem, reducing computational cost.

3.2 Reducing Number of Updates
As noted before, the KKT conditions show that a well classified example will not change the hypothesis; thus it is not necessary to re-train when we encounter such an example. Under the KKT conditions, an example x_i is considered well-classified when y_i f(x_i) > 1. If we re-train on every example that is not well-classified, our hyperplane will be guaranteed to be optimal at every step. The number of re-training updates can be reduced by relaxing the definition of well classified. An example x_i is now considered well classified when y_i f(x_i) > M, for some 0 ≤ M ≤ 1. Here, each update still produces an optimal hyperplane. The learner may encounter an example that lies within the margin, but farther from the separating hyperplane than M. Such an example means the hypothesis is no longer globally optimal for the data set, but it is considered good enough for continued use without immediate retraining. This update procedure is similar to that used by variants of the Perceptron algorithm [18]. In the extreme case, we can set M = 0, which creates a mistake-driven Online SVM. In the experimental section, we show that this version of Online SVMs, which updates only on actual errors, does not significantly degrade performance on content-based spam detection, but does significantly reduce cost.

3.3 Reducing Iterations
As an iterative solver, SMO makes repeated passes over the data set to optimize the objective function. SMO has one main loop, which can alternate between passing over the entire data set, or the smaller active set of current support vectors [22]. Successive iterations of this loop bring the hyperplane closer to an optimal value. However, it is possible that these iterations provide less benefit than their expense
justifies. That is, a close first approximation may be good enough. We introduce a parameter T to control the maximum number of iterations we allow. As we will see in the experimental section, this parameter can be set as low as 1 with little impact on the quality of results, providing computational savings.

4. EXPERIMENTS
In Section 2, we argued that the strong performance on content-based spam detection with SVMs with a high value of C shows that the maximum margin criterion is overkill, incurring unnecessary computational cost. In Section 3, we proposed ROSVM to address this issue; its relaxations trade away guarantees on the maximum margin hyperplane in return for reduced computational cost. In this section, we test these methods on the same benchmark data sets to see if state of the art performance may be achieved by these less costly methods. We find that ROSVM is capable of achieving these high levels of performance with greatly reduced cost. Our main tests on content-based spam detection are performed on large benchmark sets of email data. We then apply these methods on the smaller data sets of blog comment spam and blogs, with similar performance.

4.1 ROSVM Tests
In Section 3, we proposed three approaches for reducing the computational cost of Online SMO: reducing the problem size, reducing the number of optimization iterations, and reducing the number of training updates. Each of these approaches relaxes the maximum margin criterion on the global set of previously seen data. Here we test the effect that each of these methods has on both effectiveness and efficiency. In each of these tests, we use the large benchmark email data sets, trec05p-1 and trec06p.

[Figure 5: Reduced Size Tests. Top: (1-ROCA)% versus buffer size p; bottom: CPU seconds versus buffer size, for trec05p-1 and trec06p.]

4.1.1 Testing Reduced Size
For our
first ROSVM test, we experiment on the effect of reducing the size of the optimization problem by only considering the p most recent examples, as described in the previous section. For this test, we use the same 4-gram mappings as for the reference experiments in Section 2, with the same value C = 100. We test a range of values p in a coarse grid search. Figure 5 reports the effect of the buffer size p in relationship to the (1-ROCA)% performance measure (top), and the number of CPU seconds required (bottom). The results show that values of p < 100 do result in degraded performance, although they evaluate very quickly. However, p values from 500 to 10,000 perform almost as well as the original Online SMO (represented here as p = 100,000), at dramatically reduced computational cost. These results are important for making state of the art performance on large-scale content-based spam detection practical with online SVMs. Ordinarily, the training time would grow quadratically with the number of seen examples. However, fixing a value of p ensures that the training time is independent of the size of the data set. Furthermore, a lookback buffer allows the filter to adjust to concept drift.

[Figure 6: Reduced Iterations Tests. Top: (1-ROCA)% versus maximum iterations; bottom: CPU seconds versus maximum iterations, for trec05p-1 and trec06p.]

4.1.2 Testing Reduced Iterations
In the second ROSVM test, we experiment with reducing the number of iterations. Our initial tests showed that the maximum number of iterations used by Online SMO was rarely much larger than 10 on content-based spam detection; thus we tested values of T = {1, 2, 5, ∞}. Other parameters were identical to the original Online SVM tests. The results on this test were surprisingly stable (see Figure 6). Reducing the maximum number of SMO iterations per update had essentially no impact on classification performance, but did result in a moderate increase in
speed. This suggests that any additional iterations are spent attempting to find improvements to a hyperplane that is already very close to optimal. These results show that for content-based spam detection, we can reduce computational cost by allowing only a single SMO iteration (that is, T = 1) with effectively equivalent performance.

4.1.3 Testing Reduced Updates
For our third ROSVM experiment, we evaluate the impact of adjusting the parameter M to reduce the total number of updates. As noted before, when M = 1, the hyperplane is globally optimal at every step. Reducing M allows a slightly inconsistent hyperplane to persist until it encounters an example for which it is too inconsistent. We tested values of M from 0 to 1, at increments of 0.1. (Note that we used p = 10,000 to decrease the cost of evaluating these tests.) The results for these tests appear in Figure 7, and show that there is a slight degradation in performance with reduced values of M, and that this degradation in performance is accompanied by an increase in efficiency. Values of M > 0.7 give performance effectively equivalent to M = 1, and still reduce cost.

[Figure 7: Reduced Updates Tests. Top: (1-ROCA)% versus M; bottom: CPU seconds versus M, for trec05p-1 and trec06p.]

4.2 Online SVMs and ROSVM
We now compare ROSVM against Online SVMs on the email spam, blog comment spam, and splog detection tasks. These experiments show comparable performance on these tasks, at radically different costs. In the previous section, the effect of the different relaxation methods was tested separately. Here, we tested these methods together to create a full implementation of ROSVM. We chose the values p = 10,000, T = 1, M = 0.8 for the email spam detection tasks. Note that these parameter values were selected as those allowing ROSVM to achieve comparable performance results with Online SVMs, in order to test
total difference in computational cost. The splog and blog data sets were much smaller, so we set p = 100 for these tasks to allow meaningful comparisons between the reduced size and full size optimization problems. Because these values were not hand-tuned, both generalization performance and runtime results are meaningful in these experiments.

4.2.1 Experimental Setup
We compared Online SVMs and ROSVM on email spam, blog comment spam, and splog detection. For the email spam, we used the two large benchmark corpora, trec05p-1 and trec06p, in the standard online ordering. We randomly ordered both the blog comment spam corpus and the splog corpus to create online learning tasks. Note that this is a different setting than the leave-one-out cross validation task presented on these corpora in Section 2, so the results are not directly comparable. However, this experimental design does allow meaningful comparison between our two online methods on these content-based spam detection tasks.

Table 5: Email Spam Benchmark Data. These results compare Online SVM and ROSVM on email spam detection, using the binary 4-gram feature space. Score reported is (1-ROCA)%, where 0 is optimal.

          trec05p-1             trec06p
          (1-ROC)%   CPUs       (1-ROC)%   CPUs
OnSVM     0.0084     242,160    0.0232     87,519
ROSVM     0.0090     24,720     0.0240     18,541

Table 6: Blog Comment Spam. These results compare Online SVM and ROSVM on blog comment spam detection using the binary 4-gram feature space.

          Acc.    Prec.   Recall  F1      CPUs
OnSVM     0.926   0.930   0.962   0.946   139
ROSVM     0.923   0.925   0.965   0.945   11

We ran each method on each task, and report the results in Tables 5, 6, and 7. Note that the CPU time reported for each method was generated on the same computing system. This time reflects only the time needed to complete online learning on tokenized data. We do not report the time taken to tokenize the data into binary 4-grams, as this is the same additive constant for all methods on each task. In all cases, ROSVM was
significantly less expensive computationally.

4.3 Discussion
The comparison results shown in Tables 5, 6, and 7 are striking in two ways. First, they show that the performance of Online SVMs can be matched and even exceeded by relaxed margin methods. Second, they show a dramatic disparity in computational cost. ROSVM is an order of magnitude more efficient than the normal Online SVM, and gives comparable results. Furthermore, the fixed lookback buffer ensures that the cost of each update does not depend on the size of the data set already seen, unlike Online SVMs. Note the blog and splog data sets are relatively small, and results on these data sets must be considered preliminary. Overall, these results show that there is no need to pay the high cost of SVMs to achieve this level of performance on content-based detection of spam. ROSVMs offer a far cheaper alternative with little or no performance loss.

5. CONCLUSIONS
In the past, academic researchers and industrial practitioners have disagreed on the best method for online content-based detection of spam on the web. We have presented one resolution to this debate. Online SVMs do, indeed, produce state-of-the-art performance on this task with proper adjustment of the tradeoff parameter C, but with cost that grows quadratically with the size of the data set.

Table 7: Splog Data Set. These results compare Online SVM and ROSVM on splog detection using the binary 4-gram feature space.

          Acc.    Prec.   Recall  F1      CPUs
OnSVM     0.880   0.910   0.842   0.874   29,353
ROSVM     0.878   0.902   0.849   0.875   1,251

The high values of C required for best performance with SVMs show that the margin maximization of Online SVMs is overkill for this task. Thus, we have proposed a less expensive alternative, ROSVM, that relaxes this maximum margin requirement, and produces nearly equivalent results. These methods are efficient enough for large-scale filtering of content-based spam in its many forms. It is natural to ask why the task of
content-based spam detection gets strong performance from ROSVM. After all, not all data allows the relaxation of SVM requirements. We conjecture that email spam, blog comment spam, and splogs all share the characteristic that a subset of features are particularly indicative of content being either spam or not spam. These indicative features may be sparsely represented in the data set, because of spam methods such as word obfuscation, in which common spam words are intentionally misspelled in an attempt to reduce the effectiveness of word-based spam detection. Maximizing the margin may cause these sparsely represented features to be ignored, creating an overall reduction in performance. It appears that spam data is highly separable, allowing ROSVM to be successful with high values of C and little effort given to maximizing the margin. Future work will determine how applicable relaxed SVMs are to the general problem of text classification. Finally, we note that the success of relaxed SVM methods for content-based spam detection is a result that depends on the nature of spam data, which is potentially subject to change. Although it is currently true that ham and spam are linearly separable given an appropriate feature space, this assumption may be subject to attack. While our current methods appear robust against primitive attacks along these lines, such as the good word attack [24], we must explore the feasibility of more sophisticated attacks.

6. REFERENCES
[1] A. Bratko and B. Filipic. Spam filtering using compression models. Technical Report IJS-DP-9227, Department of Intelligent Systems, Jozef Stefan Institute, Ljubljana, Slovenia, 2005.
[2] G. Cauwenberghs and T. Poggio. Incremental and decremental support vector machine learning. In NIPS, pages 409-415, 2000.
[3] G. V. Cormack. TREC 2006 spam track overview. In The Fifteenth Text REtrieval Conference (TREC 2006) Proceedings, 2006.
[4] G. V. Cormack and A. Bratko. Batch and on-line spam filter comparison. In Proceedings of the Third Conference on Email and Anti-Spam (CEAS), 2006.
[5] G. V. Cormack and T. R. Lynam. TREC 2005 spam track overview. In The Fourteenth Text REtrieval Conference (TREC 2005) Proceedings, 2005.
[6] G. V. Cormack and T. R. Lynam. On-line supervised spam filter evaluation. Technical report, David R. Cheriton School of Computer Science, University of Waterloo, Canada, February 2006.
[7] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[8] D. DeCoste and K. Wagstaff. Alpha seeding for support vector machines. In KDD '00: Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 345-349, 2000.
[9] H. Drucker, V. Vapnik, and D. Wu. Support vector machines for spam categorization. IEEE Transactions on Neural Networks, 10(5):1048-1054, 1999.
[10] J. Goodman and W. Yin. Online discriminative spam filter training. In Proceedings of the Third Conference on Email and Anti-Spam (CEAS), 2006.
[11] P. Graham. A plan for spam. 2002.
[12] P. Graham. Better Bayesian filtering. 2003.
[13] Z. Gyongyi and H. Garcia-Molina. Spam: It's not just for inboxes anymore. Computer, 38(10):28-34, 2005.
[14] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In ECML '98: Proceedings of the 10th European Conference on Machine Learning, pages 137-142, 1998.
[15] T. Joachims. Training linear SVMs in linear time. In KDD '06: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 217-226, 2006.
[16] J. Kivinen, A. Smola, and R. Williamson. Online learning with kernels. In Advances in Neural Information Processing Systems 14, pages 785-793. MIT Press, 2002.
[17] P. Kolari, T. Finin, and A. Joshi. SVMs for the blogosphere: Blog identification and splog detection. AAAI Spring Symposium on Computational Approaches to Analyzing Weblogs, 2006.
[18] W. Krauth and M. Mézard. Learning algorithms with optimal stability in neural networks. Journal of Physics A, 20(11):745-752, 1987.
[19] T. R. Lynam and G. V. Cormack. On-line spam filter fusion. In SIGIR '06: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 123-130, 2006.
[20] V. Metsis, I. Androutsopoulos, and G. Paliouras. Spam filtering with naive Bayes - which naive Bayes? Third Conference on Email and Anti-Spam (CEAS), 2006.
[21] G. Mishne, D. Carmel, and R. Lempel. Blocking blog spam with language model disagreement. Proceedings of the 1st International Workshop on Adversarial Information Retrieval on the Web (AIRWeb), May 2005.
[22] J. Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. In B. Scholkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press, 1998.
[23] B. Scholkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[24] G. L. Wittel and S. F.
Wu. On attacking statistical spam filters. CEAS: First Conference on Email and Anti-Spam, 2004.
practitioners cite the high cost of SVMs as reason to prefer faster (if less statistically robust) Bayesian methods. In this paper, we offer a resolution to this controversy. First, we show that online SVMs indeed give state-of-the-art classification performance on online spam filtering on large benchmark data sets. Second, we show that nearly equivalent performance may be achieved by a Relaxed Online SVM (ROSVM) at greatly reduced computational cost. Our results are experimentally verified on email spam, blog spam, and splog detection tasks.

1. INTRODUCTION

Electronic communication is increasingly plagued by unwanted or harmful content known as spam. The most well known form of spam is email spam, which remains a major problem for large email systems. Other forms of spam are also becoming problematic, including blog spam, in which spammers post unwanted comments in blogs [21], and splogs, which are fake blogs constructed to enable link spam with the hope of boosting the measured importance of a given webpage in the eyes of automated search engines [17]. There are a variety of methods for identifying these many forms of spam, including compiling blacklists of known spammers, and conducting link analysis. The approach of content analysis has shown particular promise and generality for combating spam. In content analysis, the actual message text (often including hyper-text and meta-text, such as HTML and headers) is analyzed using machine learning techniques for text classification to determine if the given content is spam. Content analysis has been widely applied in detecting email spam [11], and has also been used for identifying blog spam [21] and splogs [17]. In this paper, we do not explore the related problem of link spam, which is currently best combated by link analysis [13].

1.1 An Anti-Spam Controversy

The anti-spam community has been divided on the choice of the best machine learning method for content-based spam detection. Academic researchers have tended to favor the use of Support Vector Machines (SVMs), a statistically robust machine learning method [7] which yields state-of-the-art performance on general text classification [14]. However, SVMs typically require training time that is quadratic in the number of training examples, and are impractical for large-scale email systems. Practitioners requiring content-based spam filtering have typically chosen to use the faster (if less statistically robust) machine learning method of Naive Bayes text classification [11, 12, 20]. This Bayesian method requires only linear training time, and is easily implemented in an online setting with incremental updates. This allows a deployed system to easily adapt to a changing environment over time. Other fast methods for spam filtering include compression models [1] and logistic regression [10]. It has not yet been empirically demonstrated that SVMs give improved performance over these methods in an online spam detection setting [4].

1.2 Contributions

In this paper, we address the anti-spam controversy and offer a potential resolution. We first demonstrate that online SVMs do indeed provide state-of-the-art spam detection through empirical tests on several large benchmark data sets of email spam. We then analyze the effect of the tradeoff parameter in the SVM objective function, which shows that the expensive SVM methodology may, in fact, be overkill for spam detection. We reduce the computational cost of SVM learning by relaxing this requirement on the maximum margin in online settings, and create a Relaxed Online SVM, ROSVM, appropriate for high performance content-based spam filtering in large-scale settings.

2. SPAM AND ONLINE SVMS

The controversy between academics and practitioners in spam filtering centers on the use of SVMs. The former advocate their use, but have yet to demonstrate strong performance with SVMs on online spam filtering. Indeed, the results of [4] show that, when used
with default parameters, SVMs actually perform worse than other methods. In this section, we review the basic workings of SVMs and describe a simple Online SVM algorithm. We then show that Online SVMs indeed achieve state-of-the-art performance on filtering email spam, blog comment spam, and splogs, so long as the tradeoff parameter C is set to a high value. However, the cost of Online SVMs turns out to be prohibitive for large-scale applications. These findings motivate our proposal of Relaxed Online SVMs in the following section.

2.1 Background: SVMs

SVMs are a robust machine learning methodology which has been shown to yield state-of-the-art performance on text classification [14], by finding a hyperplane that separates two classes of data in data space while maximizing the margin between them. We use the following notation to describe SVMs, which draws from [23]. A data set X contains n labeled example vectors {(x1, y1), ..., (xn, yn)}, where each xi is a vector containing features describing example i, and each yi is the class label for that example. In spam detection, the classes spam and ham (i.e., not spam) are assigned the numerical class labels +1 and −1, respectively. The linear SVMs we employ in this paper use a hypothesis vector w and bias term b to classify a new example x, by generating a predicted class label f(x):

    f(x) = sign(⟨w, x⟩ + b)

SVMs find the hypothesis w, which defines the separating hyperplane, by minimizing the following objective function over all n training examples:

    minimize over w, b:  (1/2)||w||² + C Σi ξi
    subject to:  yi(⟨w, xi⟩ + b) ≥ 1 − ξi  and  ξi ≥ 0,  for all i = 1, ..., n

In this objective function, each slack variable ξi shows the amount of error that the classifier makes on a given example xi. Minimizing the sum of the slack variables corresponds to minimizing the loss function on the training data, while minimizing the term (1/2)||w||² corresponds to maximizing the margin between the two classes [23]. These two optimization goals are often in conflict; the tradeoff parameter C determines how much importance to give each of these tasks. Linear SVMs exploit data sparsity to classify a new instance in O(s) time, where s is the number of non-zero features. This is the same classification time as other linear classifiers, and as Naive Bayesian classification. Training SVMs, however, typically takes O(n²) time, for n training examples. A variant for linear SVMs was recently proposed which trains in O(ns) time [15], but because this method has a high constant, we do not explore it here.

2.2 Online SVMs

In many traditional machine learning applications, SVMs are applied in batch mode. That is, an SVM is trained on an entire set of training data, and is then tested on a separate set of testing data. Spam filtering is typically tested and deployed in an online setting, which proceeds incrementally. Here, the learner classifies a new example, is told if its prediction is correct, updates its hypothesis accordingly, and then awaits a new example. Online learning allows a deployed system to adapt itself in a changing environment. Re-training an SVM from scratch on the entire set of previously seen data for each new example is cost prohibitive. However, using an old hypothesis as the starting point for re-training reduces this cost considerably. One method of incremental and decremental SVM learning was proposed in [2]. Because we are only concerned with incremental learning, we apply a simpler algorithm for converting a batch SVM learner into an online SVM (see Figure 1 for pseudo-code), which is similar to the approach of [16]. Each time the Online SVM encounters an example that was poorly classified, it retrains using the old hypothesis as a starting point. Note that due to the Karush-Kuhn-Tucker (KKT) conditions, it is not necessary to re-train on well-classified examples that are outside the margins [23].

Figure 1: Pseudo-code for Online SVM.

We used Platt's SMO algorithm [22] as a core SVM solver, because it is an iterative method that is well suited to converge quickly from a
good initial hypothesis. Because previous work (and our own initial testing) indicates that binary feature values give the best results for spam filtering [20, 9], we optimized our implementation of the Online SMO to exploit fast inner-products with binary vectors.

2.3 Feature Mapping Spam Content

Extracting machine learning features from text may be done in a variety of ways, especially when that text may include hyper-content and meta-content such as HTML and header information. However, previous research has shown that simple methods from text classification, such as bag of words vectors, and overlapping character-level n-grams, can achieve strong results [9]. Formally, a bag of words vector is a vector x with a unique dimension for each possible word, defined as a contiguous substring of non-whitespace characters. An n-gram vector is a vector x with a unique dimension for each possible substring of n total characters. Note that n-grams may include whitespace, and are overlapping. We use binary feature scoring, which has been shown to be most effective for a variety of spam detection methods [20, 9]. We normalize the vectors with the Euclidean norm. Furthermore, with email data, we reduce the impact of long messages (for example, with attachments) by considering only the first 3,000 characters of each string. For blog comments and splogs, we consider the whole text, including any meta-data such as HTML tags, as given. No other feature selection or domain knowledge was used.

2.4 Tuning the Tradeoff Parameter, C

The SVM tradeoff parameter C must be tuned to balance the (potentially conflicting) goals of maximizing the margin and minimizing the training error. Early work on SVM based spam detection [9] showed that high values of C give best performance with binary features. Later work has not always followed this lead: a (low) default setting of C was used on splog detection [17], and also on email spam [4]. Following standard machine learning practice, we tuned C on separate tuning data not used for later testing. We used the publicly available spamassassin email spam data set, and created an online learning task by randomly interleaving all 6034 labeled messages to create a single ordered set. For tuning, we performed a coarse parameter search for C using powers of ten from 0.0001 to 10000. We used the Online SVM described above, and tested both binary bag of words vectors and n-gram vectors with n = {2, 3, 4}. We used the first 3000 characters of each message, which included header information, body of the email, and possibly attachments. Following the recommendation of [6], we use Area under the ROC curve as our evaluation measure. The results (see Figure 2) agree with [9]: there is a plateau of high performance achieved with all values of C ≥ 10, and performance degrades sharply with C < 1. For the remainder of our experiments with SVMs in this paper, we set C = 100. We will return to the observation that very high values of C do not degrade performance as support for the intuition that relaxed SVMs should perform well on spam.

Figure 2: Tuning the Tradeoff Parameter C. Tests were conducted with Online SMO, using binary feature vectors, on the spamassassin data set of 6034 examples. Graph plots C versus Area under the ROC curve.

Table 1: Results for Email Spam filtering with Online SVMs.

Table 2: Results for Blog Comment Spam Detection using SVMs and Leave One Out Cross Validation. We report the same performance measures as in the prior work for meaningful comparison.

2.5 Email Spam and Online SVMs

With C tuned on a separate tuning set, we then tested the performance of Online SVMs in spam detection. We used two large benchmark data sets of email spam as our test corpora. These data sets are the 2005 TREC public data set trec05p-1 of 92,189 messages, and the 2006 TREC public data set, trec06p, containing 37,822 messages in English. (We do not report our strong results on the trec06c
corpus of Chinese messages as there have been questions raised over the validity of this test set.) We used the canonical ordering provided with each of these data sets for fair comparison. Results for these experiments, with bag of words vectors and n-gram vectors, appear in Table 1. To compare our results with previous scores on these data sets, we use the same (1-ROCA)% measure described in [6], which is one minus the area under the ROC curve, expressed as a percent. This measure shows the percent chance of error made by a classifier asserting that one message is more likely to be spam than another. These results show that Online SVMs do give state of the art performance on email spam. The only known system that out-performs the Online SVMs on the trec05p-1 data set is a recent ensemble classifier which combines the results of 53 unique spam filters [19]. To our knowledge, the Online SVM has out-performed every other single filter on these data sets, including those using Bayesian methods [5, 3], compression models [5, 3], logistic regression [10], and perceptron variants [3], the TREC competition winners [5, 3], and open source email spam filters BogoFilter v1.1.5 and SpamProbe v1.4d.

2.6 Blog Comment Spam and SVMs

Blog comment spam is similar to email spam in many regards, and content-based methods have been proposed for detecting these spam comments [21]. However, large benchmark data sets of labeled blog comment spam do not yet exist. Thus, we run experiments on the only publicly available data set we know of, which was used in content-based blog comment spam detection experiments by [21]. Because of the small size of the data set, and because prior researchers did not conduct their experiments in an on-line setting, we test the performance of linear SVMs using leave-one-out cross validation, with SVM-Light, a standard open-source SVM implementation [14]. We use the parameter setting C = 100, with the same feature space mappings as above. We report accuracy, precision, and recall to compare these to the results given on the same data set by [21]. These results (see Table 2) show that SVMs give superior performance on this data set to the prior methodology.

2.7 Splogs and SVMs

As with blog comment spam, there is not yet a large, publicly available benchmark corpus of labeled splog detection test data. However, the authors of [17] kindly provided us with the labeled data set of 1,389 blogs and splogs that they used to test content-based splog detection using SVMs. The only difference between our methodology and that of [17] is that they used default parameters for C, which SVM-Light sets by default to 1/avg ||x||². (For normalized vectors, this default value sets C = 1.) They also tested several domain-informed feature mappings, such as giving special features to url tags. For our experiments, we used the same feature mappings as above, and tested the effect of setting C = 100. As with the methodology of [17], we performed leave one out cross validation for apples-to-apples comparison on this data. The results (see Table 3) show that a high value of C produces higher performance for the same feature space mappings, and even enables the simple 4-gram mapping to out-perform the previous best mapping, which incorporated domain knowledge by using words and urls.

Table 3: Results for Splog vs. Blog Detection using SVMs and Leave One Out Cross Validation. We report the same evaluation measures as in the prior work for meaningful comparison.

2.8 Computational Cost

The results presented in this section demonstrate that linear SVMs give state of the art performance on content-based spam filtering. However, this performance comes at a price. Although the blog comment spam and splog data sets are too small for the quadratic training time of SVMs to appear problematic, the email data sets are large enough to illustrate the problems of quadratic training cost. Table 4 shows computation time versus data set size for each of the online learning tasks (on the same system). The training cost of SVMs is prohibitive for large-scale content-based spam detection, or a large blog host. In the following section, we reduce this cost by relaxing the expensive requirements of SVMs.

Table 4: Execution time for Online SVMs with email spam detection, in CPU seconds. These times do not include the time spent mapping strings to feature vectors. The number of examples in each data set is given in the last row as corpus size.

Figure 3: Visualizing the effect of C. Hyperplane A maximizes the margin while accepting a small amount of training error. This corresponds to setting C to a low value. Hyperplane B accepts a smaller margin in order to reduce training error. This corresponds to setting C to a high value. Content-based spam filtering appears to do best with high values of C.

3. RELAXED ONLINE SVMS (ROSVM)

One of the main benefits of SVMs is that they find a decision hyperplane that maximizes the margin between classes in the data space. Maximizing the margin is expensive, typically requiring quadratic training time in the number of training examples. However, as we saw in the previous section, the task of content-based spam detection is best achieved by SVMs with a high value of C. Setting C to a high value for this domain implies that minimizing training loss is more important than maximizing the margin (see Figure 3). Thus, while SVMs do create high performance spam filters, applying them in practice is overkill. The full margin maximization feature that they provide is unnecessary, and relaxing this requirement can reduce computational cost. We propose three ways to relax Online SVMs:

- Reduce the size of the optimization problem by only optimizing over the last P examples.
- Reduce the number of training updates by only training on actual errors.
- Reduce the number of iterations in the iterative SVM solver by allowing an approximate solution to the optimization problem.

As we describe in the remainder of this subsection, all of these methods trade statistical robustness for reduced computational cost. Experimental results reported in the following section show that they equal or approach the performance of full Online SVMs on content-based spam detection.

Figure 4: Pseudo-code for Relaxed Online SVM.

3.1 Reducing Problem Size

In the full Online SVMs, we re-optimize over the full set of seen data on every update, which becomes expensive as the number of seen data points
grows. We can bound this expense by only considering the p most recent examples for optimization (see Figure 4 for pseudo-code). Note that this is not equivalent to training a new SVM classifier from scratch on the p most recent examples, because each successive optimization problem is seeded with the previous hypothesis w [8]. This hypothesis may contain values for features that do not occur anywhere in the p most recent examples, and these will not be changed. This allows the hypothesis to remember rare (but informative) features that were learned further than p examples in the past. Formally, the optimization problem is now defined most clearly in the dual form [23]. In this case, the original soft-margin SVM is computed by maximizing at example n:

    maximize over α:  Σi αi − (1/2) Σi Σj yi yj αi αj ⟨xi, xj⟩
    subject to:  0 ≤ αi ≤ C and Σi yi αi = 0,  with αj = cj held fixed for all j ≤ (n − p)

where cj is a constant, fixed as the last value found for αj while j > (n − p). Thus, the margin found by an optimization is not guaranteed to be one that maximizes the margin for the global data set of examples {x1, ..., xn}, but rather one that satisfies a relaxed requirement that the margin be maximized over the examples {x(n − p + 1), ..., xn}, subject to the fixed constraints on the hyperplane that were found in previous optimizations over examples {x1, ..., x(n − p)}. (For completeness, when p > n, define (n − p) = 1.) This set of constraints reduces the number of free variables in the optimization problem, reducing computational cost.

3.2 Reducing Number of Updates

As noted before, the KKT conditions show that a well-classified example will not change the hypothesis; thus it is not necessary to re-train when we encounter such an example. Under the KKT conditions, an example xi is considered well-classified when yi f(xi) > 1. If we re-train on every example that is not well-classified, our hyperplane will be guaranteed to be optimal at every step. The number of re-training updates can be reduced by relaxing the definition of well-classified. An example xi is now considered well-classified when yi f(xi) > M, for some 0 < M < 1. Here, each update still produces an optimal hyperplane. The learner may encounter an example that lies within the margins, but farther from the margins than M. Such an example means the hypothesis is no longer globally optimal for the data set, but it is considered good enough for continued use without immediate retraining. This update procedure is similar to that used by variants of the Perceptron algorithm [18]. In the extreme case, we can set M = 0, which creates a mistake driven Online SVM. In the experimental section, we show that this version of Online SVMs, which updates only on actual errors, does not significantly degrade performance on content-based spam detection, but does significantly reduce cost.

3.3 Reducing Iterations

As an iterative solver, SMO makes repeated passes over the data set to optimize the objective function. SMO has one main loop, which can alternate between passing over the entire data set, or the smaller active set of current support vectors [22]. Successive iterations of this loop bring the hyperplane closer to an optimal value. However, it is possible that these iterations provide less benefit than their expense justifies. That is, a close first approximation may be good enough. We introduce a parameter T to control the maximum number of iterations we allow. As we will see in the experimental section, this parameter can be set as low as 1 with little impact on the quality of results, providing computational savings.

4. EXPERIMENTS

In Section 2, we argued that the strong performance on content-based spam detection with SVMs with a high value of C shows that the maximum margin criterion is overkill, incurring unnecessary computational cost. In Section 3, we proposed ROSVM to address this issue; these methods trade away guarantees on the maximum margin hyperplane in return for reduced computational cost. In this section, we test these methods on the same
benchmark data sets to see if state of the art performance may be achieved by these less costly methods. We find that ROSVM is capable of achieving these high levels of performance with greatly reduced cost. Our main tests on content-based spam detection are performed on large benchmark sets of email data. We then apply these methods on the smaller data sets of blog comment spam and blogs, with similar performance.

4.1 ROSVM Tests

In Section 3, we proposed three approaches for reducing the computational cost of Online SMO: reducing the problem size, reducing the number of optimization iterations, and reducing the number of training updates. Each of these approaches relaxes the maximum margin criterion on the global set of previously seen data. Here we test the effect that each of these methods has on both effectiveness and efficiency. In each of these tests, we use the large benchmark email data sets, trec05p-1 and trec06p.

4.1.1 Testing Reduced Size

For our first ROSVM test, we experiment on the effect of reducing the size of the optimization problem by only considering the p most recent examples, as described in the previous section. For this test, we use the same 4-gram mappings as for the reference experiments in Section 2, with the same value C = 100. We test a range of values p in a coarse grid search. Figure 5 reports the effect of the buffer size p in relationship to the (1-ROCA)% performance measure (top), and the number of CPU seconds required (bottom).

Figure 5: Reduced Size Tests.

The results show that values of p < 100 do result in degraded performance, although they evaluate very quickly. However, p values from 500 to 10,000 perform almost as well as the original Online SMO (represented here as p = 100,000), at dramatically reduced computational cost. These results are important for making state of the art performance on large-scale content-based spam detection practical with online SVMs. Ordinarily, the training time would grow quadratically with the number of seen examples. However, fixing a value of p ensures that the training time is independent of the size of the data set. Furthermore, a lookback buffer allows the filter to adjust to concept drift.

4.1.2 Testing Reduced Iterations

In the second ROSVM test, we experiment with reducing the number of iterations. Our initial tests showed that the maximum number of iterations used by Online SMO was rarely much larger than 10 on content-based spam detection; thus we tested values of T = {1, 2, 5, ∞}. Other parameters were identical to the original Online SVM tests. The results on this test were surprisingly stable (see Figure 6).

Figure 6: Reduced Iterations Tests.

Reducing the maximum number of SMO iterations per update had essentially no impact on classification performance, but did result in a moderate increase in speed. This suggests that any additional iterations are spent attempting to find improvements to a hyperplane that is already very close to optimal. These results show that for content-based spam detection, we can reduce computational cost by allowing only a single SMO iteration (that is, T = 1) with effectively equivalent performance.

4.1.3 Testing Reduced Updates

For our third ROSVM experiment, we evaluate the impact of adjusting the parameter M to reduce the total number of updates. As noted before, when M = 1, the hyperplane is globally optimal at every step. Reducing M allows a slightly inconsistent hyperplane to persist until it encounters an example for which it is too inconsistent. We tested values of M from 0 to 1, at increments of 0.1. (Note that we used p = 10000 to decrease the cost of evaluating these tests.) The results for these tests appear in Figure 7, and show that there is a slight degradation in performance with reduced values of M, and that this degradation in performance is accompanied by an increase in efficiency.

Figure 7: Reduced Updates Tests.

Values of M > 0.7 give
effectively equivalent performance as M = 1, and still reduce cost.\n4.2 Online SVMs and ROSVM\nWe now compare ROSVM against Online SVMs on the email spam, blog comment spam, and splog detection tasks.\nThese experiments show comparable performance on these tasks, at radically different costs.\nIn the previous section, the effect of the different relaxation methods was tested separately.\nHere, we tested these methods together to create a full implementation of ROSVM.\nWe chose the values P = 10000, T = 1, M = 0.8 for the email spam detection tasks.\nNote that these parameter values were selected as those allowing ROSVM to achieve comparable performance results with Online SVMs, in order to test total difference in computational cost.\nThe splog and blog data sets were much smaller, so we set P = 100 for these tasks to allow meaningful comparisons between the reduced size and full size optimization problems.\nBecause these values were not hand-tuned, both generalization performance and runtime results are meaningful in these experiments.\n4.2.1 Experimental Setup\nWe compared Online SVMs and ROSVM on email spam, blog comment spam, and splog detection.\nFor the email spam, we used the two large benchmark corpora, trec05p-1 and trec06p, in the standard online ordering.\nWe randomly ordered both the blog comment spam corpus and the splog corpus to create online learning tasks.\nNote that this is a different setting than the leave-one-out cross validation task presented on these corpora in Section 2--the results are not directly comparable.\nHowever, this experimental design\nTable 5: Email Spam Benchmark Data.\nThese results compare Online SVM and ROSVM on email spam detection, using binary 4-gram feature space.\nScore reported is (1-ROCA)%, where 0 is optimal.\nTable 6: Blog Comment Spam.\nThese results comparing Online SVM and ROSVM on blog comment spam detection using binary 4-gram feature space.\ndoes allow meaningful comparison between our two online methods on 
these content-based spam detection tasks.\nWe ran each method on each task, and report the results in Tables 5, 6, and 7.\nNote that the CPU time reported for each method was generated on the same computing system.\nThis time reflects only the time needed to complete online learning on tokenized data.\nWe do not report the time taken to tokenize the data into binary 4-grams, as this is the same additive constant for all methods on each task.\nIn all cases, ROSVM was significantly less expensive computationally.\n4.3 Discussion\nThe comparison results shown in Tables 5, 6, and 7 are striking in two ways.\nFirst, they show that the performance of Online SVMs can be matched and even exceeded by relaxed-margin methods.\nSecond, they show a dramatic disparity in computational cost.\nROSVM is an order of magnitude more efficient than the normal Online SVM, and gives comparable results.\nFurthermore, the fixed lookback buffer ensures that the cost of each update does not depend on the size of the data set already seen, unlike Online SVMs.\nNote that the blog and splog data sets are relatively small, and results on these data sets must be considered preliminary.\nOverall, these results show that there is no need to pay the high cost of SVMs to achieve this level of performance on content-based detection of spam.\nROSVMs offer a far cheaper alternative with little or no performance loss.\n5.\nCONCLUSIONS\nIn the past, academic researchers and industrial practitioners have disagreed on the best method for online content-based detection of spam on the web.\nWe have presented one resolution to this debate.\nTable 7: Splog Data Set.\nThese results compare Online SVM and ROSVM on splog detection using the binary 4-gram feature space.\nOnline SVMs do, indeed, produce state-of-the-art performance on this task with proper adjustment of the tradeoff parameter C, but with cost that grows quadratically with the size of the data set.\nThe high values of C required for best performance with SVMs 
show that the margin maximization of Online SVMs is overkill for this task.\nThus, we have proposed a less expensive alternative, ROSVM, that relaxes this maximum margin requirement, and produces nearly equivalent results.\nThese methods are efficient enough for large-scale filtering of contentbased spam in its many forms.\nIt is natural to ask why the task of content-based spam detection gets strong performance from ROSVM.\nAfter all, not all data allows the relaxation of SVM requirements.\nWe conjecture that email spam, blog comment spam, and splogs all share the characteristic that a subset of features are particularly indicative of content being either spam or not spam.\nThese indicative features may be sparsely represented in the data set, because of spam methods such as word obfuscation, in which common spam words are intentionally misspelled in an attempt to reduce the effectiveness of word-based spam detection.\nMaximizing the margin may cause these sparsely represented features to be ignored, creating an overall reduction in performance.\nIt appears that spam data is highly separable, allowing ROSVM to be successful with high values of C and little effort given to maximizing the margin.\nFuture work will determine how applicable relaxed SVMs are to the general problem of text classification.\nFinally, we note that the success of relaxed SVM methods for content-based spam detection is a result that depends on the nature of spam data, which is potentially subject to change.\nAlthough it is currently true that ham and spam are linearly separable given an appropriate feature space, this assumption may be subject to attack.\nWhile our current methods appear robust against primitive attacks along these lines, such as the good word attack [24], we must explore the feasibility of more sophisticated attacks.","keyphrases":["spam filter","spam filter","blog","support vector machin","bayesian method","splog","content-base filter","link analysi","machin learn 
techniqu","link spam","content-base spam detect","increment updat","logist regress","hyperplan","featur map"],"prmu":["P","P","P","P","P","P","M","U","M","M","M","U","U","U","U"]} {"id":"J-36","title":"Playing Games in Many Possible Worlds","abstract":"In traditional game theory, players are typically endowed with exogenously given knowledge of the structure of the game-either full omniscient knowledge or partial but fixed information. In real life, however, people are often unaware of the utility of taking a particular action until they perform research into its consequences. In this paper, we model this phenomenon. We imagine a player engaged in a questionand-answer session, asking questions both about his or her own preferences and about the state of reality; thus we call this setting Socratic game theory. In a Socratic game, players begin with an a priori probability distribution over many possible worlds, with a different utility function for each world. Players can make queries, at some cost, to learn partial information about which of the possible worlds is the actual world, before choosing an action. We consider two query models: (1) an unobservable-query model, in which players learn only the response to their own queries, and (2) an observable-query model, in which players also learn which queries their opponents made. The results in this paper consider cases in which the underlying worlds of a two-player Socratic game are either constant-sum games or strategically zero-sum games, a class that generalizes constant-sum games to include all games in which the sum of payoffs depends linearly on the interaction between the players. When the underlying worlds are constant sum, we give polynomial-time algorithms to find Nash equilibria in both the observable- and unobservable-query models. 
When the worlds are strategically zero sum, we give efficient algorithms to find Nash equilibria in unobservable-query Socratic games and correlated equilibria in observable-query Socratic games.","lvl-1":"Playing Games in Many Possible Worlds Matt Lepinski\u2217 , David Liben-Nowell\u2020 , Seth Gilbert\u2217 , and April Rasala Lehman\u2021 (\u2217 ) Computer Science and Artificial Intelligence Laboratory, MIT; Cambridge, MA 02139 (\u2020 ) Department of Computer Science, Carleton College; Northfield, MN 55057 (\u2021 ) Google, Inc.; Mountain View, CA 94043 lepinski,sethg@theory.lcs.mit.edu, dlibenno@carleton.edu, alehman@google.com ABSTRACT In traditional game theory, players are typically endowed with exogenously given knowledge of the structure of the game-either full omniscient knowledge or partial but fixed information.\nIn real life, however, people are often unaware of the utility of taking a particular action until they perform research into its consequences.\nIn this paper, we model this phenomenon.\nWe imagine a player engaged in a questionand-answer session, asking questions both about his or her own preferences and about the state of reality; thus we call this setting Socratic game theory.\nIn a Socratic game, players begin with an a priori probability distribution over many possible worlds, with a different utility function for each world.\nPlayers can make queries, at some cost, to learn partial information about which of the possible worlds is the actual world, before choosing an action.\nWe consider two query models: (1) an unobservable-query model, in which players learn only the response to their own queries, and (2) an observable-query model, in which players also learn which queries their opponents made.\nThe results in this paper consider cases in which the underlying worlds of a two-player Socratic game are either constant-sum games or strategically zero-sum games, a class that generalizes constant-sum games to include all games in which the 
sum of payoffs depends linearly on the interaction between the players.\nWhen the underlying worlds are constant sum, we give polynomial-time algorithms to find Nash equilibria in both the observable- and unobservable-query models.\nWhen the worlds are strategically zero sum, we give efficient algorithms to find Nash equilibria in unobservable-query Socratic games and correlated equilibria in observable-query Socratic games.\nCategories and Subject Descriptors F.2 [Theory of Computation]: Analysis of algorithms and problem complexity; J.4 [Social and Behavioral Sciences]: Economics General Terms Algorithms, Economics, Theory 1.\nINTRODUCTION Late October 1960.\nA smoky room.\nDemocratic Party strategists huddle around a map.\nHow should the Kennedy campaign allocate its remaining advertising budget?\nShould it focus on, say, California or New York?\nThe Nixon campaign faces the same dilemma.\nOf course, neither campaign knows the effectiveness of its advertising in each state.\nPerhaps Californians are susceptible to Nixon's advertising, but are unresponsive to Kennedy's.\nIn light of this uncertainty, the Kennedy campaign may conduct a survey, at some cost, to estimate the effectiveness of its advertising.\nMoreover, the larger-and more expensive-the survey, the more accurate it will be.\nIs the cost of a survey worth the information that it provides?\nHow should one balance the cost of acquiring more information against the risk of playing a game with higher uncertainty?\nIn this paper, we model situations of this type as Socratic games.\nAs in traditional game theory, the players in a Socratic game choose actions to maximize their payoffs, but we model players with incomplete information who can make costly queries to reduce their uncertainty about the state of the world before they choose their actions.\nThis approach contrasts with traditional game theory, in which players are usually modeled as having fixed, exogenously given information about the structure 
of the game and its payoffs.\n(In traditional games of incomplete and imperfect information, there is information that the players do not have; in Socratic games, unlike in these games, the players have a chance to acquire the missing information, at some cost.)\nA number of related models have been explored by economists and computer scientists motivated by similar situations, often with a focus on mechanism design and auctions; a sampling of this research includes the work of Larson and Sandholm [41, 42, 43, 44], Parkes [59], Fong [22], Compte and Jehiel [12], Rezende [63], Persico and Matthews [48, 60], Cr\u00e9mer and Khalil [15], Rasmusen [62], and Bergemann and V\u00e4lim\u00e4ki [4, 5].\nThe model of Bergemann and V\u00e4lim\u00e4ki is similar in many regards to the one that we explore here; see Section 7 for some discussion.\nA Socratic game proceeds as follows.\nA real world is chosen randomly from a set of possible worlds according to a common prior distribution.\nEach player then selects an arbitrary query from a set of available costly queries and receives a corresponding piece of information about the real world.\nFinally, each player selects an action and receives a payoff-a function of the players' selected actions and the identity of the real world-less the cost of the query that he or she made.\nCompared to traditional game theory, the distinguishing feature of our model is the introduction of explicit costs to the players for learning arbitrary partial information about which of the many possible worlds is the real world.\nOur research was initially inspired by recent results in psychology on decision making, but it soon became clear that Socratic game theory is also a general tool for understanding the exploitation-versus-exploration tradeoff, well studied in machine learning, in a strategic multiplayer environment.\nThis tension between the risk arising from uncertainty and the cost of acquiring information is ubiquitous in economics, 
political science, and beyond.\nOur results.\nWe consider Socratic games under two models: an unobservable-query model, where players learn only the response to their own queries, and an observable-query model, where players also learn which queries their opponents made.\nWe give efficient algorithms to find Nash equilibria-i.e., tuples of strategies from which no player has unilateral incentive to deviate-in broad classes of two-player Socratic games in both models.\nOur first result is an efficient algorithm to find Nash equilibria in unobservable-query Socratic games with constant-sum worlds, in which the sum of the players' payoffs is independent of their actions.\nOur techniques also yield Nash equilibria in unobservable-query Socratic games with strategically zero-sum worlds.\nStrategically zero-sum games generalize constant-sum games by allowing the sum of the players' payoffs to depend on individual players' choices of strategy, but not on any interaction of their choices.\nOur second result is an efficient algorithm to find Nash equilibria in observable-query Socratic games with constant-sum worlds.\nFinally, we give an efficient algorithm to find correlated equilibria-a weaker but increasingly well-studied solution concept for games [2, 3, 32, 56, 57]-in observable-query Socratic games with strategically zero-sum worlds.\nLike all games, Socratic games can be viewed as a special case of extensive-form games, which represent games by trees in which internal nodes represent choices made by chance or by the players, and the leaves represent outcomes that correspond to a vector of payoffs to the players.\nAlgorithmically, the generality of extensive-form games makes them difficult to solve efficiently, and the special cases that are known to be efficiently solvable do not include even simple Socratic games.\nEvery (complete-information) classical game is a trivial Socratic game (with a single possible world and a single trivial query), and efficiently finding 
Nash equilibria in classical games has been shown to be hard [10, 11, 13, 16, 17, 27, 54, 55].\nTherefore we would not expect to find a straightforward polynomial-time algorithm to compute Nash equilibria in general Socratic games.\nHowever, it is well known that Nash equilibria can be found efficiently via an LP for two-player constant-sum games [49, 71] (and strategically zero-sum games [51]).\nA Socratic game is itself a classical game, so one might hope that these results can be applied to Socratic games with constant-sum (or strategically zero-sum) worlds.\nWe face two major obstacles in extending these classical results to Socratic games.\nFirst, a Socratic game with constant-sum worlds is not itself a constant-sum classical game-rather, the resulting classical game is only strategically zero sum.\nWorse yet, a Socratic game with strategically zero-sum worlds is not itself classically strategically zero sum-indeed, there are no known efficient algorithmic techniques to compute Nash equilibria in the resulting class of classical games.\n(Exponential-time algorithms like Lemke\/Howson, of course, can be used [45].)\nThus even when it is easy to find Nash equilibria in each of the worlds of a Socratic game, we require new techniques to solve the Socratic game itself.\nSecond, even when the Socratic game itself is strategically zero sum, the number of possible strategies available to each player is exponential in the natural representation of the game.\nAs a result, the standard linear programs for computing equilibria have an exponential number of variables and an exponential number of constraints.\nFor unobservable-query Socratic games with strategically zero-sum worlds, we address these obstacles by formulating a new LP that uses only polynomially many variables (though still an exponential number of constraints) and then use ellipsoid-based techniques to solve it.\nFor observablequery Socratic games, we handle the exponentiality by decomposing the game into 
stages, solving the stages separately, and showing how to reassemble the solutions efficiently.\nTo solve the stages, it is necessary to find Nash equilibria in Bayesian strategically zero-sum games, and we give an explicit polynomial-time algorithm to do so.\n2.\nGAMES AND SOCRATIC GAMES In this section, we review background on game theory and formally introduce Socratic games.\nWe present these models in the context of two-player games, but the multiplayer case is a natural extension.\nThroughout the paper, boldface variables will be used to denote a pair of variables (e.g., a = ai, aii ).\nLet Pr[x \u2190 \u03c0] denote the probability that a particular value x is drawn from the distribution \u03c0, and let Ex\u223c\u03c0[g(x)] denote the expectation of g(x) when x is drawn from \u03c0.\n2.1 Background on Game Theory Consider two players, Player I and Player II, each of whom is attempting to maximize his or her utility (or payoff).\nA (two-player) game is a pair A, u , where, for i \u2208 {i,ii}, \u2022 Ai is the set of pure strategies for Player i, and A = Ai, Aii ; and \u2022 ui : A \u2192 R is the utility function for Player i, and u = ui, uii .\nWe require that A and u be common knowledge.\nIf each Player i chooses strategy ai \u2208 Ai, then the payoffs to Players I and II are ui(a) and uii(a), respectively.\nA game is constant sum if, for all a \u2208 A, we have that ui(a) + uii(a) = c for some fixed c independent of a. 
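These definitions can be made concrete with a small sketch. The game, values, and function names below are a hypothetical illustration (Matching Pennies), not an example from the paper; it checks the constant-sum condition and verifies a pair of mutual best responses, anticipating the mixed-strategy notation the text formalizes next.

```python
import itertools

# A tiny two-player game <A, u>: strategies H/T, Player I wins on a match.
A_I = ["H", "T"]
A_II = ["H", "T"]
u_I = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}
u_II = {a: -u_I[a] for a in u_I}  # u_I(a) + u_II(a) = 0 for all a

# Check the constant-sum condition: u_I(a) + u_II(a) = c for every a in A.
sums = {u_I[a] + u_II[a] for a in itertools.product(A_I, A_II)}
assert sums == {0}  # here c = 0, i.e. a zero-sum game

# Expected utility of a mixed-strategy profile (alpha_I, alpha_II), using the
# product form alpha(a) = alpha_I(a_I) * alpha_II(a_II).
def expected_u(u, alpha_I, alpha_II):
    return sum(alpha_I[aI] * alpha_II[aII] * u[(aI, aII)]
               for aI in A_I for aII in A_II)

def pure(strategy, strategies):
    return {s: 1.0 if s == strategy else 0.0 for s in strategies}

# Verify that the uniform profile is a Nash equilibrium: no player can gain
# by deviating (checking pure deviations suffices).
alpha_I = {"H": 0.5, "T": 0.5}
alpha_II = {"H": 0.5, "T": 0.5}
base_I = expected_u(u_I, alpha_I, alpha_II)
base_II = expected_u(u_II, alpha_I, alpha_II)
assert all(expected_u(u_I, pure(aI, A_I), alpha_II) <= base_I + 1e-9
           for aI in A_I)
assert all(expected_u(u_II, alpha_I, pure(aII, A_II)) <= base_II + 1e-9
           for aII in A_II)
print("constant sum with c = 0; uniform profile is a Nash equilibrium")
```

Since the game is zero sum, the uniform profile's value to each player is 0, and every pure deviation also yields 0, so the mutual-best-response condition holds.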
Player i can also play a mixed strategy \u03b1i, a probability measure over the set Ai.\nPayoff functions are generalized as ui(\u03b1) = ui(\u03b1i, \u03b1ii) := E_{a\u223c\u03b1}[ui(a)] = \u2211_{a\u2208A} \u03b1(a)\u00b7ui(a), where the quantity \u03b1(a) = \u03b1i(ai) \u00b7 \u03b1ii(aii) denotes the joint probability of the independent events that each Player i chooses action ai from the distribution \u03b1i.\nThis generalization to mixed strategies is known as von Neumann\/Morgenstern utility [70], in which players are indifferent between a guaranteed payoff x and an expected payoff of x.\nA Nash equilibrium is a pair \u03b1 of mixed strategies such that neither player has an incentive to change his or her strategy unilaterally.\nFormally, the strategy pair \u03b1 is a Nash equilibrium if and only if both ui(\u03b1i, \u03b1ii) = max_{\u03b1i\u2032} ui(\u03b1i\u2032, \u03b1ii) and uii(\u03b1i, \u03b1ii) = max_{\u03b1ii\u2032} uii(\u03b1i, \u03b1ii\u2032), where the maxima range over mixed strategies; that is, the strategies \u03b1i and \u03b1ii are mutual best responses.\nA correlated equilibrium is a distribution \u03c8 over A that obeys the following: if a \u2208 A is drawn randomly according to \u03c8 and Player i learns ai, then no Player i has incentive to deviate unilaterally from playing ai.\n(A Nash equilibrium is a correlated equilibrium in which \u03c8(a) = \u03b1i(ai) \u00b7 \u03b1ii(aii) is a product distribution.)\nFormally, in a correlated equilibrium, for every a \u2208 A we must have that ai is a best response to a randomly chosen \u00e2ii \u2208 Aii drawn according to \u03c8(ai, \u00e2ii), and the analogous condition must hold for Player II.\n2.2 Socratic Games In this section, we formally define Socratic games.\nA Socratic game is a 7-tuple \u27e8A, W, u, S, Q, p, \u03b4\u27e9, where, for i \u2208 {i,ii}: \u2022 Ai is, as before, the set of pure strategies for Player i. 
\u2022 W is a set of possible worlds, one of which is the real world wreal.\n\u2022 ui = {u_i^w : A \u2192 R | w \u2208 W} is a set of payoff functions for Player i, one for each possible world.\n\u2022 S is a set of signals.\n\u2022 Qi is a set of available queries for Player i.\nWhen Player i makes query qi : W \u2192 S, he or she receives the signal qi(wreal).\nWhen Player i receives signal qi(wreal) in response to query qi, he or she can infer that wreal \u2208 {w : qi(w) = qi(wreal)}, i.e., the set of possible worlds from which query qi cannot distinguish wreal.\n\u2022 p : W \u2192 [0, 1] is a probability distribution over the possible worlds.\n\u2022 \u03b4i : Qi \u2192 R\u22650 gives the query cost for each available query for Player i.\nInitially, the world wreal is chosen according to the probability distribution p, but the identity of wreal remains unknown to the players.\nThat is, it is as if the players are playing the game \u27e8A, u^{wreal}\u27e9 but do not know wreal.\nThe players make queries q \u2208 Q, and Player i receives the signal qi(wreal).\nWe consider both observable queries and unobservable queries.\nWhen queries are observable, each player learns which query was made by the other player, and the results of his or her own query-that is, each Player i learns qi, qii, and qi(wreal).\nFor unobservable queries, Player i learns only qi and qi(wreal).\nAfter learning the results of the queries, the players select strategies a \u2208 A and receive as payoffs u_i^{wreal}(a) \u2212 \u03b4i(qi).\nIn the Socratic game, a pure strategy for Player i consists of a query qi \u2208 Qi and a response function mapping any result of the query qi to a strategy ai \u2208 Ai to play.\nA player's state of knowledge after a query is a point in R := Q \u00d7 S or Ri := Qi \u00d7 S for observable or unobservable queries, respectively.\nThus Player i's response function maps R or Ri to Ai.\nNote that the number of pure strategies is exponential, as there are exponentially many response functions.\nA mixed strategy involves both randomly choosing a query qi \u2208 Qi and randomly choosing an action ai \u2208 Ai in response to the results of the query.\nFormally, we will consider a mixed-strategy-function profile f = \u27e8f^query, f^resp\u27e9 to have two parts: \u2022 a function f_i^query : Qi \u2192 [0, 1], where f_i^query(qi) is the probability that Player i makes query qi.\n\u2022 a function f_i^resp that maps R or Ri to a probability distribution over actions.\nPlayer i chooses an action ai \u2208 Ai according to the probability distribution f_i^resp(q, qi(w)) for observable queries, and according to f_i^resp(qi, qi(w)) for unobservable queries.\n(With unobservable queries, for example, the probability that Player I plays action ai conditioned on making query qi in world w is given by Pr[ai \u2190 f_i^resp(qi, qi(w))].)\nMixed strategies are typically defined as probability distributions over the pure strategies, but here we represent a mixed strategy by a pair \u27e8f^query, f^resp\u27e9, which is commonly referred to as a behavioral strategy in the game-theory literature.\nAs in any game with perfect recall, one can easily map a mixture of pure strategies to a behavioral strategy f = \u27e8f^query, f^resp\u27e9 that induces the same probability of making a particular query qi or playing a particular action after making a query qi in a particular world.\nThus it suffices to consider only this representation of mixed strategies.\nFor a strategy-function profile f for observable queries, the (expected) payoff to Player i is given by \u2211_{q\u2208Q, w\u2208W, a\u2208A} [ f_i^query(qi) \u00b7 f_ii^query(qii) \u00b7 p(w) \u00b7 Pr[ai \u2190 f_i^resp(q, qi(w))] \u00b7 Pr[aii \u2190 f_ii^resp(q, qii(w))] \u00b7 (u_i^w(a) \u2212 \u03b4i(qi)) ].\nThe payoffs for unobservable queries are analogous, with f_j^resp(qj, qj(w)) in place of f_j^resp(q, qj(w)).\n3.\nSTRATEGICALLY ZERO-SUM GAMES We can view a Socratic game G with constant-sum worlds as an exponentially large 
classical game, with pure strategies \u201cmake query qi and respond according to fi.\u201d However, this classical game is not constant sum.\nThe sum of the players' payoffs varies depending upon their strategies, because different queries incur different costs.\nHowever, this game still has significant structure: the sum of payoffs varies only because of varying query costs.\nThus the sum of payoffs does depend on the players' choice of strategies, but not on the interaction of their choices-i.e., for fixed functions gi and gii, we have ui(q, f) + uii(q, f) = gi(qi, fi) + gii(qii, fii) for all strategies \u27e8q, f\u27e9.\nSuch games are called strategically zero sum and were introduced by Moulin and Vial [51], who describe a notion of strategic equivalence and define strategically zero-sum games as those strategically equivalent to zero-sum games.\nIt is interesting to note that two Socratic games with the same queries and strategically equivalent worlds are not necessarily strategically equivalent.\nA game \u27e8A, u\u27e9 is strategically zero sum if there exist labels \u2113(i, ai) for every Player i and every pure strategy ai \u2208 Ai such that, for all mixed-strategy profiles \u03b1, the sum of the utilities satisfies ui(\u03b1) + uii(\u03b1) = \u2211_{ai\u2208Ai} \u03b1i(ai)\u00b7\u2113(i, ai) + \u2211_{aii\u2208Aii} \u03b1ii(aii)\u00b7\u2113(ii, aii).\nNote that any constant-sum game is strategically zero sum as well.\nIt is not immediately obvious that one can efficiently decide if a given game is strategically zero sum.\nFor completeness, we give a characterization of classical strategically zero-sum games in terms of the rank of a simple matrix derived from the game's payoffs, allowing us to efficiently decide if a given game is strategically zero sum and, if it is, to compute the labels \u2113(i, ai).\nTheorem 3.1.\nConsider a game G = \u27e8A, u\u27e9 with Ai = {a_i^1, ..., a_i^{ni}}.\nLet M^G be the ni-by-nii matrix whose (i, j)th entry M^G_{(i,j)} satisfies log2 M^G_{(i,j)} = ui(a_i^i, a_ii^j) + uii(a_i^i, a_ii^j).\nThen the following are equivalent: (i) G is strategically zero sum; (ii) there exist labels \u2113(i, ai) for every player i \u2208 {i,ii} and every pure strategy ai \u2208 Ai such that, for all pure strategies a \u2208 A, we have ui(a) + uii(a) = \u2113(i, ai) + \u2113(ii, aii); and (iii) rank(M^G) = 1.\nProof Sketch.\n(i \u21d2 ii) is immediate; every pure strategy is a trivially mixed strategy.\nFor (ii \u21d2 iii), let ci be the column vector whose jth component is 2^{\u2113(i, a_i^j)}; then ci \u00b7 cii^T = M^G.\nFor (iii \u21d2 i), if rank(M^G) = 1, then M^G = u \u00b7 v^T.\nWe can prove that G is strategically zero sum by choosing labels \u2113(i, a_i^j) := log2 u_j and \u2113(ii, a_ii^j) := log2 v_j.\n4.\nSOCRATIC GAMES WITH UNOBSERVABLE QUERIES We begin with Socratic games with unobservable queries, where a player's choice of query is not revealed to her opponent.\nWe give an efficient algorithm to solve unobservable-query Socratic games with strategically zero-sum worlds.\nOur algorithm is based upon the LP shown in Figure 1, whose feasible points are Nash equilibria for the game.\nThe LP has polynomially many variables but exponentially many constraints.\nWe give an efficient separation oracle for the LP, implying that the ellipsoid method [28, 38] yields an efficient algorithm.\nThis approach extends the techniques of Koller and Megiddo [39] (see also [40]) to solve constant-sum games represented in extensive form.\n(Recall that their result does not directly apply in our case; even a Socratic game with constant-sum worlds is not a constant-sum classical game.)\nLemma 4.1.\nLet G = \u27e8A, W, u, S, Q, p, \u03b4\u27e9 be an arbitrary unobservable-query Socratic game with strategically zero-sum worlds.\nAny feasible point for the LP in Figure 1 can be efficiently mapped to a Nash equilibrium for G, and any Nash equilibrium for G can be mapped to a feasible point 
for the program.\nProof Sketch.\nWe begin with a description of the correspondence between feasible points for the LP and Nash equilibria for G. First, suppose that strategy profile f = \u27e8f^query, f^resp\u27e9 forms a Nash equilibrium for G.\nThen the following setting for the LP variables is feasible: y_{qi}^i = f_i^query(qi); x_{ai,qi,w}^i = Pr[ai \u2190 f_i^resp(qi, qi(w))] \u00b7 y_{qi}^i; \u03c1i = \u2211_{w\u2208W, q\u2208Q, a\u2208A} p(w) \u00b7 x_{ai,qi,w}^i \u00b7 x_{aii,qii,w}^ii \u00b7 [u_i^w(a) \u2212 \u03b4i(qi)].\n(We omit the straightforward calculations that verify feasibility.)\nNext, suppose \u27e8x_{ai,qi,w}^i, y_{qi}^i, \u03c1i\u27e9 is feasible for the LP.\nLet f be the strategy-function profile defined as f_i^query : qi \u2192 y_{qi}^i and f_i^resp(qi, qi(w)) : ai \u2192 x_{ai,qi,w}^i \/ y_{qi}^i.\nVerifying that this strategy profile is a Nash equilibrium requires checking that f_i^resp(qi, qi(w)) is a well-defined function (from constraint VI), that f_i^query and f_i^resp(qi, qi(w)) are probability distributions (from constraints III and IV), and that each player is playing a best response to his or her opponent's strategy (from constraints I and II).\nFinally, from constraints I and II, the expected payoff to Player i is at most \u03c1i.\nBecause the right-hand side of constraint VII is equal to the expected sum of the payoffs from f and is at most \u03c1i + \u03c1ii, the payoffs are correct and imply the lemma.\nWe now give an efficient separation oracle for the LP in Figure 1, thus allowing the ellipsoid method to solve the LP in polynomial time.\nRecall that a separation oracle is a function that, given a setting for the variables in the LP, either returns feasible or returns a particular constraint of the LP that is violated by that setting of the variables.\nAn efficient, correct separation oracle allows us to solve the LP efficiently via the ellipsoid method.\nLemma 4.2.\nThere exists a separation oracle for the LP in Figure 1 that is correct and runs in polynomial time.\nProof.\nHere is a description of 
the separation oracle SP.\nOn input \u27e8x_{ai,qi,w}^i, y_{qi}^i, \u03c1i\u27e9: 1.\nCheck each of the constraints (III), (IV), (V), (VI), and (VII).\nIf any one of these constraints is violated, then return it.\n2.\nDefine the strategy profile f as follows: f_i^query : qi \u2192 y_{qi}^i and f_i^resp(qi, qi(w)) : ai \u2192 x_{ai,qi,w}^i \/ y_{qi}^i.\nFor each query qi, we will compute a pure best-response function \u02c6f_i^{qi} for Player I to strategy fii after making query qi.\nMore specifically, given fii and the result qi(wreal) of the query qi, it is straightforward to compute the probability that, conditioned on the fact that the result of query qi is qi(w), the world is w and Player II will play action aii \u2208 Aii.\nTherefore, for each query qi and response qi(w), Player I can compute the expected utility of each pure response ai to the induced mixed strategy over Aii for Player II.\nPlayer I can then select the ai maximizing this expected payoff.\nLet \u02c6fi be the response function such that \u02c6fi(qi, qi(w)) = \u02c6f_i^{qi}(qi(w)) for every qi \u2208 Qi.\nSimilarly, compute \u02c6fii.\nFigure 1: An LP to find Nash equilibria in unobservable-query Socratic games with strategically zero-sum worlds.\nThe input is a Socratic game \u27e8A, W, u, S, Q, p, \u03b4\u27e9 so that world w is strategically zero sum with labels \u2113(i, ai, w).\nPlayer i makes query qi \u2208 Qi with probability y_{qi}^i and, when the actual world is w \u2208 W, makes query qi and plays action ai with probability x_{ai,qi,w}^i.\nThe expected payoff to Player i is given by \u03c1i.\nPlayer i does not prefer \u201cmake query qi, then play according to the function fi\u201d: \u2200qi \u2208 Qi, fi : Ri \u2192 Ai : \u03c1i \u2265 \u2211_{w\u2208W, aii\u2208Aii, qii\u2208Qii, ai=fi(qi,qi(w))} ( p(w) \u00b7 x_{aii,qii,w}^ii \u00b7 [u_i^w(a) \u2212 \u03b4i(qi)] ) (I); \u2200qii \u2208 Qii, fii : Rii \u2192 Aii : \u03c1ii \u2265 \u2211_{w\u2208W, ai\u2208Ai, qi\u2208Qi, aii=fii(qii,qii(w))} ( p(w) \u00b7 x_{ai,qi,w}^i \u00b7 [u_ii^w(a) \u2212 \u03b4ii(qii)] ) (II).\nEvery player's choices form a probability distribution in every world: \u2200i \u2208 {i,ii}, w \u2208 W : 1 = \u2211_{ai\u2208Ai, qi\u2208Qi} x_{ai,qi,w}^i (III); \u2200i \u2208 {i,ii}, w \u2208 W : 0 \u2264 x_{ai,qi,w}^i (IV).\nQueries are independent of the world, and actions depend only on query output: \u2200i \u2208 {i,ii}, qi \u2208 Qi, w \u2208 W, w\u2032 \u2208 W such that qi(w) = qi(w\u2032) : y_{qi}^i = \u2211_{ai\u2208Ai} x_{ai,qi,w}^i (V); x_{ai,qi,w}^i = x_{ai,qi,w\u2032}^i (VI).\nThe payoffs are consistent with the labels \u2113(i, ai, w): \u03c1i + \u03c1ii = \u2211_{i\u2208{i,ii}} \u2211_{w\u2208W, qi\u2208Qi, ai\u2208Ai} ( p(w) \u00b7 x_{ai,qi,w}^i \u00b7 [\u2113(i, ai, w) \u2212 \u03b4i(qi)] ) (VII).\n3.\nLet \u02c6\u03c1_i^{qi} be the expected payoff to Player I using the strategy \u201cmake query qi and play response function \u02c6fi\u201d if Player II plays according to fii.\nLet \u02c6\u03c1i = max_{qi\u2208Qi} \u02c6\u03c1_i^{qi} and let \u02c6qi = arg max_{qi\u2208Qi} \u02c6\u03c1_i^{qi}.\nSimilarly, define \u02c6\u03c1_ii^{qii}, \u02c6\u03c1ii, and \u02c6qii.\n4.\nFor the \u02c6fi and \u02c6qi defined in Step 3, return constraint (I-\u02c6qi-\u02c6fi) or (II-\u02c6qii-\u02c6fii) if either is violated.\nIf both are satisfied, then return feasible.\nWe first note that the separation oracle runs in polynomial time and then prove its correctness.\nSteps 1 and 4 are clearly polynomial.\nFor Step 2, we have described how to compute the relevant response functions by examining every action of Player I, every world, every query, and every action of Player II.\nThere are only polynomially many queries, worlds, query results, and pure actions, so the running time of Steps 2 and 3 is thus polynomial.\nWe now sketch the proof that the separation oracle works correctly.\nThe main challenge is to show that if any constraint (I-qi-fi) is violated then (I-\u02c6qi-\u02c6fi) is violated in Step 4.\nFirst, we observe that, by construction, the function \u02c6fi computed in Step 3 must be a best response to Player II playing fii, no matter what query 
Player I makes. Therefore the strategy "make query $\hat q_I$, then play response function $\hat f_I$" must be a best response to Player II playing $f_{II}$, by definition of $\hat q_I$. The right-hand side of each constraint (I-$q_I$-$f_I$) is equal to the expected payoff that Player I receives when playing the pure strategy "make query $q_I$ and then play response function $f_I$" against Player II's strategy of $f_{II}$. Therefore, because the pure strategy "make query $\hat q_I$ and then play response function $\hat f_I$" is a best response to Player II playing $f_{II}$, the right-hand side of constraint (I-$\hat q_I$-$\hat f_I$) is at least as large as the right-hand side of any constraint (I-$q_I$-$f_I$). Therefore, if any constraint (I-$q_I$-$f_I$) is violated, constraint (I-$\hat q_I$-$\hat f_I$) is also violated. An analogous argument holds for Player II.

These lemmas and the well-known fact that Nash equilibria always exist [52] imply the following theorem:

Theorem 4.3. Nash equilibria can be found in polynomial time for any two-player unobservable-query Socratic game with strategically zero-sum worlds.

5. SOCRATIC GAMES WITH OBSERVABLE QUERIES

In this section, we give efficient algorithms to find (1) a Nash equilibrium for observable-query Socratic games with constant-sum worlds and (2) a correlated equilibrium in the broader class of Socratic games with strategically zero-sum worlds. Recall that a Socratic game $G = \langle A, W, u, S, Q, p, \delta \rangle$ with observable queries proceeds in two stages:

Stage 1: The players simultaneously choose queries $q \in Q$.
Player $i$ receives as output $q_I$, $q_{II}$, and $q_i(w_{real})$.

Stage 2: The players simultaneously choose strategies $a \in A$. The payoff to Player $i$ is $u^{w_{real}}_i(a) - \delta_i(q_i)$.

Using backward induction, we first solve Stage 2 and then proceed to the Stage-1 game. For a query profile $q \in Q$, we would like to analyze the Stage-2 game $\hat G^q$ resulting from the players making queries $q$ in Stage 1. Technically, however, $\hat G^q$ is not actually a game, because at the beginning of Stage 2 the players have different information about the world: Player I knows $q_I(w_{real})$, and Player II knows $q_{II}(w_{real})$. Fortunately, the situation in which players have asymmetric private knowledge has been well studied in the game-theory literature. A Bayesian game is a quadruple $\langle A, T, r, u \rangle$, where:

• $A_i$ is the set of pure strategies for Player $i$.
• $T_i$ is the set of types for Player $i$.
• $r$ is a probability distribution over $T$; $r(t)$ denotes the probability that each Player $i$ has type $t_i$.
• $u_i : A \times T \to \mathbb{R}$ is the payoff function for Player $i$. If the players have types $t$ and play pure strategies $a$, then $u_i(a, t)$ denotes the payoff for Player $i$.

Initially, a type profile $t$ is drawn randomly from $T$ according to the distribution $r$. Player $i$ learns his type $t_i$, but does not learn any other player's type. Player $i$ then plays a mixed strategy $\alpha_i$, that is, a probability distribution over $A_i$, and receives payoff $u_i(\alpha, t)$. A strategy function is a function $h_i$ mapping each type $t_i \in T_i$ to a mixed strategy $h_i(t_i)$ over $A_i$; Player $i$ plays the mixed strategy $h_i(t_i)$ when her type is $t_i$. A strategy-function profile $h$ is a Bayesian Nash equilibrium if and only if no Player $i$ has unilateral incentive to deviate from $h_i$ if the other players play according to $h$.
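The Bayesian Nash equilibrium condition above can be checked mechanically for a small game. The following is a minimal sketch (the game, payoffs, and all names are our own illustrative choices, not taken from the paper): it enumerates every pure strategy function a player could deviate to and verifies that none improves that player's expected payoff under the prior.

```python
import itertools

# Toy two-player Bayesian game (illustrative numbers only): a common-payoff
# coordination game in which Player I's type decides which coordination
# point pays more, and Player II has a single type.
types = {0: ["L", "H"], 1: ["x"]}
actions = {0: ["a", "b"], 1: ["c", "d"]}
prior = {("L", "x"): 0.5, ("H", "x"): 0.5}  # r(t), a distribution over type profiles

def u(i, a, t):
    # Both players share the payoff; off-diagonal action profiles pay 0.
    table = {"L": {("a", "c"): 2, ("b", "d"): 1},
             "H": {("a", "c"): 1, ("b", "d"): 2}}
    return table[t[0]].get(a, 0)

def expected_payoff(i, h):
    """E_{t ~ r}[u_i(h_I(t_I), h_II(t_II), t)] for pure strategy functions h."""
    return sum(pr * u(i, (h[0][t[0]], h[1][t[1]]), t) for t, pr in prior.items())

def is_bayesian_nash(h):
    # h is a Bayesian Nash equilibrium iff no player gains by switching to
    # any other pure strategy function while the opponent's is held fixed.
    for i in (0, 1):
        current = expected_payoff(i, h)
        for dev in itertools.product(actions[i], repeat=len(types[i])):
            h_dev = dict(h)
            h_dev[i] = dict(zip(types[i], dev))
            if expected_payoff(i, h_dev) > current + 1e-9:
                return False
    return True

print(is_bayesian_nash({0: {"L": "a", "H": "a"}, 1: {"x": "c"}}))  # True
print(is_bayesian_nash({0: {"L": "a", "H": "b"}, 1: {"x": "c"}}))  # False
```

Because Player II cannot condition on Player I's type, the uninformed coordination on ("a", "c") survives deviation checks, while the type-dependent profile does not. This brute-force check is exponential in the number of types and is meant only to make the definition concrete; it is not the polynomial-time machinery developed in the text.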
For a two-player Bayesian game, if $\alpha = h(t)$, then the profile $h$ is a Bayesian Nash equilibrium exactly when the following condition and its analogue for Player II hold: $E_{t \sim r}[u_I(\alpha, t)] = \max_{h'_I} E_{t \sim r}[u_I(\langle h'_I(t_I), \alpha_{II} \rangle, t)]$. These conditions hold if and only if, for all $t_I \in T_I$ occurring with positive probability, Player I's expected utility conditioned on his type being $t_I$ is maximized by $h_I(t_I)$.

A Bayesian game is constant sum if for all $a \in A$ and all $t \in T$, we have $u_I(a, t) + u_{II}(a, t) = c_t$, for some constant $c_t$ independent of $a$. A Bayesian game is strategically zero sum if the classical game $\langle A, u(\cdot, t) \rangle$ is strategically zero sum for every $t \in T$. Whether a Bayesian game is strategically zero sum can be determined as in Theorem 3.1. (For further discussion of Bayesian games, see [25, 31].)

We now formally define the Stage-2 game as a Bayesian game. Given a Socratic game $G = \langle A, W, u, S, Q, p, \delta \rangle$ and a query profile $q \in Q$, we define the Stage-2 Bayesian game $G_{stage2}(q) := \langle A, T^q, p^{stage2(q)}, u^{stage2(q)} \rangle$, where:

• $A_i$, the set of pure strategies for Player $i$, is the same as in the original Socratic game;
• $T^q_i = \{q_i(w) : w \in W\}$, the set of types for Player $i$, is the set of signals that can result from query $q_i$;
• $p^{stage2(q)}(t) = \Pr[q(w) = t \mid w \leftarrow p]$; and
• $u^{stage2(q)}_i(a, t) = \sum_{w \in W} \Pr[w \leftarrow p \mid q(w) = t] \cdot u^w_i(a)$.

We now define the Stage-1 game in terms of the payoffs for the Stage-2 games. Fix any algorithm alg that finds a Bayesian Nash equilibrium $h^{q,alg} := alg(G_{stage2}(q))$ for each Stage-2 game. Define $value^{alg}_i(G_{stage2}(q))$ to be the expected payoff received by Player $i$ in the Bayesian game $G_{stage2}(q)$ if each player plays according to $h^{q,alg}$, that is, $value^{alg}_i(G_{stage2}(q)) := \sum_{w \in W} p(w) \cdot u^{stage2(q)}_i(h^{q,alg}(q(w)), q(w))$. Define the game $G^{alg}_{stage1} := \langle A^{stage1}, u^{stage1(alg)} \rangle$, where:

• $A^{stage1} := Q$, the set of available queries
in the Socratic game; and
• $u^{stage1(alg)}_i(q) := value^{alg}_i(G_{stage2}(q)) - \delta_i(q_i)$.

That is, players choose queries $q$ and receive payoffs corresponding to $value^{alg}(G_{stage2}(q))$, less query costs.

Lemma 5.1. Consider an observable-query Socratic game $G = \langle A, W, u, S, Q, p, \delta \rangle$. Let $G_{stage2}(q)$ be the Stage-2 games for all $q \in Q$, let alg be an algorithm finding a Bayesian Nash equilibrium in each $G_{stage2}(q)$, and let $G^{alg}_{stage1}$ be the Stage-1 game. Let $\alpha$ be a Nash equilibrium for $G^{alg}_{stage1}$, and let $h^{q,alg} := alg(G_{stage2}(q))$ be a Bayesian Nash equilibrium for each $G_{stage2}(q)$. Then the following strategy profile is a Nash equilibrium for $G$:

• In Stage 1, Player $i$ makes query $q_i$ with probability $\alpha_i(q_i)$. (That is, set $f^{query}(q) := \alpha(q)$.)
• In Stage 2, if $q$ is the query profile from Stage 1 and $q_i(w_{real})$ denotes the response to Player $i$'s query, then Player $i$ chooses action $a_i$ with probability $h^{q,alg}_i(q_i(w_{real}))$. (In other words, set $f^{resp}_i(q, q_i(w)) := h^{q,alg}_i(q_i(w))$.)

We now find equilibria in the stage games for Socratic games with constant-sum or strategically zero-sum worlds. We first show that the stage games are well structured in this setting:

Lemma 5.2. Consider an observable-query Socratic game $G = \langle A, W, u, S, Q, p, \delta \rangle$ with constant-sum worlds. Then the Stage-1 game $G^{alg}_{stage1}$ is strategically zero sum for every algorithm alg, and every Stage-2 game $G_{stage2}(q)$ is Bayesian constant sum. If the worlds of $G$ are strategically zero sum, then every $G_{stage2}(q)$ is Bayesian strategically zero sum.

We now show that we can efficiently compute equilibria for these well-structured stage games.

Theorem 5.3. There exists a polynomial-time algorithm BNE finding Bayesian Nash equilibria in strategically zero-sum Bayesian (and thus classical strategically zero-sum or Bayesian constant-sum) two-player games.

Proof Sketch. Let $G = \langle A, T, r, u \rangle$ be a strategically zero-sum Bayesian game. Define an unobservable-query
Socratic game $G^*$ with one possible world for each $t \in T$, one available zero-cost query $q_i$ for each Player $i$ so that $q_i$ reveals $t_i$, and all else as in $G$. Bayesian Nash equilibria in $G$ correspond directly to Nash equilibria in $G^*$, and the worlds of $G^*$ are strategically zero sum. Thus by Theorem 4.3 we can compute Nash equilibria for $G^*$, and thus we can compute Bayesian Nash equilibria for $G$. (LPs for zero-sum two-player Bayesian games have been previously developed and studied [61].)

Theorem 5.4. We can compute a Nash equilibrium for an arbitrary two-player observable-query Socratic game $G = \langle A, W, u, S, Q, p, \delta \rangle$ with constant-sum worlds in polynomial time.

Proof. Because each world of $G$ is constant sum, Lemma 5.2 implies that the induced Stage-2 games $G_{stage2}(q)$ are all Bayesian constant sum. Thus we can use algorithm BNE to compute a Bayesian Nash equilibrium $h^{q,BNE} := BNE(G_{stage2}(q))$ for each $q \in Q$, by Theorem 5.3. Furthermore, again by Lemma 5.2, the induced Stage-1 game $G^{BNE}_{stage1}$ is classical strategically zero sum. Therefore we can again use algorithm BNE to compute a Nash equilibrium $\alpha := BNE(G^{BNE}_{stage1})$, again by Theorem 5.3. Therefore, by Lemma 5.1, we can assemble $\alpha$ and the $h^{q,BNE}$'s into a Nash equilibrium for the Socratic game $G$.
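The backward induction behind Lemma 5.1 and Theorem 5.4 can be made concrete on a tiny instance. The sketch below (all matrices, names, and the query cost are our own toy choices; this is not the paper's LP/ellipsoid algorithm) uses two equally likely zero-sum worlds and one informative query that identifies the world. Each Stage-2 game here leaves at least one player with only two strategies, so its value can be computed by the standard "upper/lower envelope of lines" method instead of a general LP.

```python
import itertools

# Two equally likely zero-sum worlds, payoffs to the row player (toy numbers).
A = {"w1": [[3, -1], [-2, 1]], "w2": [[-1, 3], [1, -2]]}
p = {"w1": 0.5, "w2": 0.5}
COST = 0.4  # delta for the world-identifying query

def envelope_value(lines, maximize):
    # Each line (a, b) is f(x) = x*a + (1-x)*b on [0, 1].  The optimum of a
    # max-min (or min-max) over a single mixing weight x lies at an endpoint
    # or at a pairwise intersection of lines.
    xs = {0.0, 1.0}
    for (a1, b1), (a2, b2) in itertools.combinations(lines, 2):
        d = (a1 - b1) - (a2 - b2)
        if abs(d) > 1e-12:
            x = (b2 - b1) / d
            if 0.0 <= x <= 1.0:
                xs.add(x)
    inner = min if maximize else max
    outer = max if maximize else min
    return outer(inner(x * a + (1 - x) * b for a, b in lines) for x in xs)

def value_cols2(M):  # game value when the (minimizing) column player mixes 2 columns
    return envelope_value([tuple(row) for row in M], maximize=False)

def value_rows2(M):  # game value when the (maximizing) row player mixes 2 rows
    return envelope_value(list(zip(*M)), maximize=True)

ws = list(A)
pairs = list(itertools.product(range(2), repeat=2))  # one action per world

v = {}
# Neither player queries: they play the prior-averaged matrix.
v["nn"] = value_cols2([[sum(p[w] * A[w][i][j] for w in ws) for j in range(2)]
                       for i in range(2)])
# Both query: each world's game is played with full information.
v["qq"] = sum(p[w] * value_cols2(A[w]) for w in ws)
# Only the row player queries: her Stage-2 pure strategies are action pairs.
v["qn"] = value_cols2([[sum(p[w] * A[w][ij[k]][c] for k, w in enumerate(ws))
                        for c in range(2)] for ij in pairs])
# Only the column player queries: symmetric, with four column strategies.
v["nq"] = value_rows2([[sum(p[w] * A[w][r][ij[k]] for k, w in enumerate(ws))
                        for ij in pairs] for r in range(2)])

# Stage-1 payoff to the row player: Stage-2 value minus her own query cost.
stage1_row = {q: v[q] - (COST if q[0] == "q" else 0.0) for q in v}
print({q: round(x, 3) for q, x in stage1_row.items()})
```

In this particular instance the query is worthless to the row player against an uninformed opponent (v["qn"] equals v["nn"]) but very valuable to the column player (v["nq"] is far below v["nn"]), illustrating how the Stage-1 payoff table, values minus query costs, encodes the strategic worth of information.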
We would like to extend our results on observable-query Socratic games to Socratic games with strategically zero-sum worlds. While we can still find Nash equilibria in the Stage-2 games, the resulting Stage-1 game is not in general strategically zero sum. Thus, finding Nash equilibria in observable-query Socratic games with strategically zero-sum worlds seems to require substantially new techniques. However, our techniques for decomposing observable-query Socratic games do allow us to find correlated equilibria in this case.

Lemma 5.5. Consider an observable-query Socratic game $G = \langle A, W, u, S, Q, p, \delta \rangle$. Let alg be an arbitrary algorithm that finds a Bayesian Nash equilibrium in each of the derived Stage-2 games $G_{stage2}(q)$, and let $G^{alg}_{stage1}$ be the derived Stage-1 game. Let $\phi$ be a correlated equilibrium for $G^{alg}_{stage1}$, and let $h^{q,alg} := alg(G_{stage2}(q))$ be a Bayesian Nash equilibrium for each $G_{stage2}(q)$. Then the following distribution over pure strategies is a correlated equilibrium for $G$:

$\psi(q, f) := \phi(q) \prod_{i \in \{I,II\}} \prod_{s \in S} \Pr[f_i(q, s) \leftarrow h^{q,alg}_i(s)]$.

Thus to find a correlated equilibrium in an observable-query Socratic game with strategically zero-sum worlds, we need only algorithm BNE from Theorem 5.3 along with an efficient algorithm for finding a correlated equilibrium in a general game. Such an algorithm exists (the definition of correlated equilibria can be directly translated into an LP [3]), and therefore we have the following theorem:

Theorem 5.6. We can provide both efficient oracle access and efficient sampling access to a correlated equilibrium for any observable-query two-player Socratic game with strategically zero-sum worlds.

Because the support of the correlated equilibrium may be exponentially large, providing oracle and sampling access is the natural way to represent the correlated equilibrium. By Lemma 5.5, we can also compute correlated equilibria in any observable-query Socratic game for
which Nash equilibria are computable in the induced $G_{stage2}(q)$ games (e.g., when $G_{stage2}(q)$ is of constant size).

Another potentially interesting model of queries in Socratic games is what one might call public queries, in which both the choice and the outcome of a player's query are observable by all players in the game. (This model might be most appropriate in the presence of corporate espionage or media leaks, or in a setting in which the queries, and thus their results, occur in plain view.) The techniques that we have developed in this section also yield exactly the same results as for observable queries. The proof is actually simpler: with public queries, the players' payoffs are common knowledge when Stage 2 begins, and thus Stage 2 really is a complete-information game. (There may still be uncertainty about the real world, but all players use the observed signals to infer exactly the same set of possible worlds in which $w_{real}$ may lie; thus they are playing a complete-information game against each other.) Thus we obtain the same results as in Theorems 5.4 and 5.6 more simply, by solving Stage 2 using a (non-Bayesian) Nash-equilibrium finder and solving Stage 1 as before.

Our results for observable queries are weaker than for unobservable ones: in Socratic games with worlds that are strategically zero sum but not constant sum, we find only a correlated equilibrium in the observable case, whereas we find a Nash equilibrium in the unobservable case. We might hope to extend our unobservable-query techniques to observable queries, but there is no obvious way to do so. The fundamental obstacle is that the LP's payoff constraint becomes nonlinear if there is any dependence on the probability that the other player made a particular query. This dependence arises with observable queries, suggesting that observable Socratic games with strategically zero-sum worlds may be harder to solve.

6. RELATED WORK

Our work was initially motivated by research in the social
sciences indicating that real people seem (irrationally) paralyzed when they are presented with additional options. In this section, we briefly review some of these social-science experiments and then discuss technical approaches related to Socratic game theory.

Prima facie, a rational agent's happiness given an added option can only increase. However, recent research has found that more choices tend to decrease happiness: for example, students choosing among extra-credit options are more likely to do extra credit if given a small subset of the choices and, moreover, produce higher-quality work [35]. (See also [19].) The psychology literature explores a number of explanations: people may miscalculate their opportunity cost by comparing their choice to a component-wise maximum of all other options instead of the single best alternative [65], a new option may draw undue attention to aspects of the other options [67], and so on. The present work explores an economic explanation of this phenomenon: information is not free. When there are more options, a decision-maker must spend more time to achieve a satisfactory outcome. See, e.g., the work of Skyrms [68] for a philosophical perspective on the role of deliberation in strategic situations. Finally, we note the connection between Socratic games and modal logic [34], a formalism for the logic of possibility and necessity.

The observation that human players typically do not play rational strategies has inspired some attempts to model partially rational players. The typical approach to this so-called bounded rationality [36, 64, 66] is to postulate bounds on computational power in computing the consequences of a strategy. The work on bounded rationality [23, 24, 53, 58] differs from the models that we consider here: instead of putting hard limitations on the computational power of the agents, we restrict their a priori knowledge of the state of the world, requiring them to spend time (and therefore
money/utility) to learn about it.

Partially observable stochastic games (POSGs) are a general framework used in AI to model situations of multi-agent planning in an evolving, unknown environment, but the generality of POSGs seems to make them very difficult to solve [6]. Recent work has developed algorithms for restricted classes of POSGs, most notably classes of cooperative POSGs, e.g., [20, 30], which are very different from the competitive strategically zero-sum games we address in this paper.

The fundamental question in Socratic games is deciding on the comparative value of making a more costly but more informative query, or concluding the data-gathering phase and picking the best option given current information. This tradeoff has been explored in a variety of other contexts; a sampling of these contexts includes aggregating results from delay-prone information sources [8], doing approximate reasoning in intelligent systems [72], and deciding when to take the current best guess of disease diagnosis from a belief-propagation network and when to let it continue inference [33], among many others. This issue can also be viewed as another perspective on the general question of exploration versus exploitation that arises often in AI: when is it better to actively seek additional information instead of exploiting the knowledge one already has? (See, e.g., [69].) Most of this work differs significantly from our own in that it considers single-agent planning as opposed to the game-theoretic setting. A notable exception is the work of Larson and Sandholm [41, 42, 43, 44] on mechanism design for interacting agents whose computation is costly and limited. They present a model in which players must solve a computationally intractable valuation problem, using costly computation to learn some hidden parameters, along with results for auctions and bargaining games in this model.

7. FUTURE DIRECTIONS

Efficiently finding Nash equilibria in Socratic games with
non-strategically zero-sum worlds is probably difficult, because the existence of such an algorithm for classical games has been shown to be unlikely [10, 11, 13, 16, 17, 27, 54, 55]. There has, however, been some algorithmic success in finding Nash equilibria in restricted classical settings (e.g., [21, 46, 47, 57]); we might hope to extend our results to analogous Socratic games.

An efficient algorithm to find correlated equilibria in general Socratic games seems more attainable. Suppose the players receive recommended queries and responses. The difficulty is that when a player considers a deviation from his recommended query, he already knows his recommended response in each of the Stage-2 games. In a correlated equilibrium, a player's expected payoff generally depends on his recommended strategy, and thus a player may deviate in Stage 1 so as to land in a Stage-2 game where he has been given a better-than-average recommended response. (Socratic games are succinct games of superpolynomial type, so Papadimitriou's results [56] do not imply correlated equilibria for them.)

Socratic games can be extended to allow players to make adaptive queries, choosing subsequent queries based on previous results. Our techniques carry over to $O(1)$ rounds of unobservable queries, but it would be interesting to compute equilibria in Socratic games with adaptive observable queries or with $\omega(1)$ rounds of unobservable queries. Special cases of adaptive Socratic games are closely related to single-agent problems like minimum latency [1, 7, 26], determining strategies for using priced information [9, 29, 37], and an online version of minimum test cover [18, 50]. Although there are important technical distinctions between adaptive Socratic games and these problems, approximation techniques from this literature may apply to Socratic games. The question of approximation raises interesting questions even in non-adaptive Socratic games. An $\varepsilon$-approximate Nash equilibrium is a strategy profile $\alpha$ such that no player can increase her payoff by an additive $\varepsilon$ by deviating from $\alpha$. Finding approximate Nash equilibria in both adaptive and non-adaptive Socratic games is an interesting direction to pursue.

Another natural extension is the model in which query results are stochastic. In this paper, we model a query as deterministically partitioning the possible worlds into subsets that the query cannot distinguish. However, one could instead model a query as probabilistically mapping the set of possible worlds into the set of signals. With this modification, our unobservable-query model becomes equivalent to the model of Bergemann and Välimäki [4, 5], in which the result of a query is a posterior distribution over the worlds. Our techniques allow us to compute equilibria in such a stochastic-query model provided that each query is represented as a table that, for each world/signal pair, lists the probability that the query outputs that signal in that world. It is also interesting to consider settings in which the game's queries are specified by a compact representation of the relevant probability distributions. (For example, one might consider a setting in which the algorithm has only a sampling oracle for the posterior distributions envisioned by Bergemann and Välimäki.) Efficiently finding equilibria in such settings remains an open problem.

Another interesting setting for Socratic games is when the set $Q$ of available queries is given by $Q = \mathcal{P}(\Gamma)$, i.e., each player chooses to make a set $q \in \mathcal{P}(\Gamma)$ of queries from a specified groundset $\Gamma$ of queries. Here we take the query cost to be a linear function, so that $\delta(q) = \sum_{\gamma \in q} \delta(\{\gamma\})$. Natural groundsets include comparison queries (if my opponent is playing strategy $a_{II}$, would I prefer to play $a_I$ or $\hat a_I$?), strategy queries (what is my vector of payoffs if I play strategy $a_I$?), and world-identity queries (is
the world $w \in W$ the real world?). When one can infer a polynomial bound on the number of queries made by a rational player, our results yield efficient solutions. (For example, we can efficiently solve games in which every groundset element $\gamma \in \Gamma$ has $\delta(\{\gamma\}) = \Omega(\overline{M} - \underline{M})$, where $\overline{M}$ and $\underline{M}$ denote the maximum and minimum payoffs to any player in any world.) Conversely, it is NP-hard to compute a Nash equilibrium for such a game when every $\delta(\{\gamma\}) \le 1/|W|^2$, even when the worlds are constant sum and Player II has only a single available strategy. Thus even computing a best response for Player I is hard. (This proof proceeds by reduction from set cover; intuitively, for sufficiently low query costs, Player I must fully identify the actual world through his queries. Selecting a minimum-sized set of these queries is hard.) Computing Player I's best response can be viewed as maximizing a submodular function, and thus a best response can be $(1 - 1/e) \approx 0.63$ approximated greedily [14]. An interesting open question is whether this approximate best-response calculation can be leveraged to find an approximate Nash equilibrium.

8. ACKNOWLEDGEMENTS

Part of this work was done while all authors were at MIT CSAIL. We thank Erik Demaine, Natalia Hernandez Gardiol, Claire Monteleoni, Jason Rennie, Madhu Sudan, and Katherine White for helpful comments and discussions.

9. REFERENCES

[1] Aaron Archer and David P. Williamson. Faster approximation algorithms for the minimum latency problem. In Proceedings of the Symposium on Discrete Algorithms, pages 88-96, 2003.
[2] R. J. Aumann. Subjectivity and correlation in randomized strategies. J. Mathematical Economics, 1:67-96, 1974.
[3] Robert J.
Aumann. Correlated equilibrium as an expression of Bayesian rationality. Econometrica, 55(1):1-18, January 1987.
[4] Dick Bergemann and Juuso Välimäki. Information acquisition and efficient mechanism design. Econometrica, 70(3):1007-1033, May 2002.
[5] Dick Bergemann and Juuso Välimäki. Information in mechanism design. Technical Report 1532, Cowles Foundation for Research in Economics, 2005.
[6] Daniel S. Bernstein, Shlomo Zilberstein, and Neil Immerman. The complexity of decentralized control of Markov Decision Processes. Mathematics of Operations Research, pages 819-840, 2002.
[7] Avrim Blum, Prasad Chalasani, Don Coppersmith, Bill Pulleyblank, Prabhakar Raghavan, and Madhu Sudan. The minimum latency problem. In Proceedings of the Symposium on the Theory of Computing, pages 163-171, 1994.
[8] Andrei Z. Broder and Michael Mitzenmacher. Optimal plans for aggregation. In Proceedings of the Principles of Distributed Computing, pages 144-152, 2002.
[9] Moses Charikar, Ronald Fagin, Venkatesan Guruswami, Jon Kleinberg, Prabhakar Raghavan, and Amit Sahai. Query strategies for priced information. J. Computer and System Sciences, 64(4):785-819, June 2002.
[10] Xi Chen and Xiaotie Deng. 3-NASH is PPAD-complete. In Electronic Colloquium on Computational Complexity, 2005.
[11] Xi Chen and Xiaotie Deng. Settling the complexity of 2-player Nash-equilibrium. In Electronic Colloquium on Computational Complexity, 2005.
[12] Olivier Compte and Philippe Jehiel. Auctions and information acquisition: Sealed-bid or dynamic formats? Technical report, Centre d'Enseignement et de Recherche en Analyse Socio-économique, 2002.
[13] Vincent Conitzer and Tuomas Sandholm. Complexity results about Nash equilibria. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 765-771, 2003.
[14] Gerard Cornuejols, Marshall L. Fisher, and George L.
Nemhauser. Location of bank accounts to optimize float: An analytic study of exact and approximate algorithms. Management Science, 23(8), April 1977.
[15] Jacques Crémer and Fahad Khalil. Gathering information before signing a contract. American Economic Review, 82:566-578, 1992.
[16] Constantinos Daskalakis, Paul W. Goldberg, and Christos H. Papadimitriou. The complexity of computing a Nash equilibrium. In Electronic Colloquium on Computational Complexity, 2005.
[17] Konstantinos Daskalakis and Christos H. Papadimitriou. Three-player games are hard. In Electronic Colloquium on Computational Complexity, 2005.
[18] K. M. J. De Bontridder, B. V. Halldórsson, M. M. Halldórsson, C. A. J. Hurkens, J. K. Lenstra, R. Ravi, and L. Stougie. Approximation algorithms for the test cover problem. Mathematical Programming, 98(1-3):477-491, September 2003.
[19] Ap Dijksterhuis, Maarten W. Bos, Loran F. Nordgren, and Rick B. van Baaren. On making the right choice: The deliberation-without-attention effect. Science, 311:1005-1007, 17 February 2006.
[20] Rosemary Emery-Montemerlo, Geoff Gordon, Jeff Schneider, and Sebastian Thrun. Approximate solutions for partially observable stochastic games with common payoffs. In Autonomous Agents and Multi-Agent Systems, 2004.
[21] Alex Fabrikant, Christos Papadimitriou, and Kunal Talwar. The complexity of pure Nash equilibria. In Proceedings of the Symposium on the Theory of Computing, 2004.
[22] Kyna Fong. Multi-stage Information Acquisition in Auction Design. Senior thesis, Harvard College, 2003.
[23] Lance Fortnow and Duke Whang. Optimality and domination in repeated games with bounded players. In Proceedings of the Symposium on the Theory of Computing, pages 741-749, 1994.
[24] Yoav Freund, Michael Kearns, Yishay Mansour, Dana Ron, Ronitt Rubinfeld, and Robert E.
Schapire. Efficient algorithms for learning to play repeated games against computationally bounded adversaries. In Proceedings of the Foundations of Computer Science, pages 332-341, 1995.
[25] Drew Fudenberg and Jean Tirole. Game Theory. MIT, 1991.
[26] Michel X. Goemans and Jon Kleinberg. An improved approximation ratio for the minimum latency problem. Mathematical Programming, 82:111-124, 1998.
[27] Paul W. Goldberg and Christos H. Papadimitriou. Reducibility among equilibrium problems. In Electronic Colloquium on Computational Complexity, 2005.
[28] M. Grötschel, L. Lovász, and A. Schrijver. The ellipsoid method and its consequences in combinatorial optimization. Combinatorica, 1:70-89, 1981.
[29] Anupam Gupta and Amit Kumar. Sorting and selection with structured costs. In Proceedings of the Foundations of Computer Science, pages 416-425, 2001.
[30] Eric A. Hansen, Daniel S. Bernstein, and Shlomo Zilberstein. Dynamic programming for partially observable stochastic games. In National Conference on Artificial Intelligence (AAAI), 2004.
[31] John C. Harsanyi. Games with incomplete information played by Bayesian players. Management Science, 14(3,5,7), 1967-1968.
[32] Sergiu Hart and David Schmeidler. Existence of correlated equilibria. Mathematics of Operations Research, 14(1):18-25, 1989.
[33] Eric Horvitz and Geoffrey Rutledge. Time-dependent utility and action under uncertainty. In Uncertainty in Artificial Intelligence, pages 151-158, 1991.
[34] G. E. Hughes and M. J. Cresswell. A New Introduction to Modal Logic. Routledge, 1996.
[35] Sheena S. Iyengar and Mark R. Lepper. When choice is demotivating: Can one desire too much of a good thing? J.
Personality and Social Psychology, 79(6):995-1006, 2000.
[36] Ehud Kalai. Bounded rationality and strategic complexity in repeated games. Game Theory and Applications, pages 131-157, 1990.
[37] Sampath Kannan and Sanjeev Khanna. Selection with monotone comparison costs. In Proceedings of the Symposium on Discrete Algorithms, pages 10-17, 2003.
[38] L. G. Khachiyan. A polynomial algorithm in linear programming. Doklady Akademii Nauk SSSR, 244, 1979.
[39] Daphne Koller and Nimrod Megiddo. The complexity of two-person zero-sum games in extensive form. Games and Economic Behavior, 4:528-552, 1992.
[40] Daphne Koller, Nimrod Megiddo, and Bernhard von Stengel. Efficient computation of equilibria for extensive two-person games. Games and Economic Behavior, 14:247-259, 1996.
[41] Kate Larson. Mechanism Design for Computationally Limited Agents. PhD thesis, CMU, 2004.
[42] Kate Larson and Tuomas Sandholm. Bargaining with limited computation: Deliberation equilibrium. Artificial Intelligence, 132(2):183-217, 2001.
[43] Kate Larson and Tuomas Sandholm. Costly valuation computation in auctions. In Proceedings of the Theoretical Aspects of Rationality and Knowledge, July 2001.
[44] Kate Larson and Tuomas Sandholm. Strategic deliberation and truthful revelation: An impossibility result. In Proceedings of the ACM Conference on Electronic Commerce, May 2004.
[45] C. E. Lemke and J. T. Howson, Jr. Equilibrium points of bimatrix games. J. Society for Industrial and Applied Mathematics, 12, 1964.
[46] Richard J. Lipton, Evangelos Markakis, and Aranyak Mehta. Playing large games using simple strategies. In Proceedings of the ACM Conference on Electronic Commerce, pages 36-41, 2003.
[47] Michael L. Littman, Michael Kearns, and Satinder Singh. An efficient exact algorithm for singly connected graphical games. In Proceedings of Neural Information Processing Systems, 2001.
[48] Steven A.
Matthews and Nicola Persico. Information acquisition and the excess refund puzzle. Technical Report 05-015, Department of Economics, University of Pennsylvania, March 2005.
[49] Richard D. McKelvey and Andrew McLennan. Computation of equilibria in finite games. In H. Amman, D. A. Kendrick, and J. Rust, editors, Handbook of Computational Economics, volume 1, pages 87-142. Elsevier, 1996.
[50] B. M. E. Moret and H. D. Shapiro. On minimizing a set of tests. SIAM J. Scientific Statistical Computing, 6:983-1003, 1985.
[51] H. Moulin and J.-P. Vial. Strategically zero-sum games: The class of games whose completely mixed equilibria cannot be improved upon. International J. Game Theory, 7(3/4), 1978.
[52] John F. Nash, Jr. Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36:48-49, 1950.
[53] Abraham Neyman. Finitely repeated games with finite automata. Mathematics of Operations Research, 23(3):513-552, August 1998.
[54] Christos Papadimitriou. On the complexity of the parity argument and other inefficient proofs of existence. J. Computer and System Sciences, 48:498-532, 1994.
[55] Christos Papadimitriou. Algorithms, games, and the internet. In Proceedings of the Symposium on the Theory of Computing, pages 749-753, 2001.
[56] Christos H. Papadimitriou. Computing correlated equilibria in multi-player games. In Proceedings of the Symposium on the Theory of Computing, 2005.
[57] Christos H. Papadimitriou and Tim Roughgarden. Computing equilibria in multiplayer games. In Proceedings of the Symposium on Discrete Algorithms, 2005.
[58] Christos H. Papadimitriou and Mihalis Yannakakis. On bounded rationality and computational complexity. In Proceedings of the Symposium on the Theory of Computing, pages 726-733, 1994.
[59] David C.
Parkes. Auction design with costly preference elicitation. Annals of Mathematics and Artificial Intelligence, 44:269-302, 2005.
[60] Nicola Persico. Information acquisition in auctions. Econometrica, 68(1):135-148, 2000.
[61] Jean-Pierre Ponssard and Sylvain Sorin. The LP formulation of finite zero-sum games with incomplete information. International J. Game Theory, 9(2):99-105, 1980.
[62] Eric Rasmusen. Strategic implications of uncertainty over one's own private value in auctions. Technical report, Indiana University, 2005.
[63] Leonardo Rezende. Mid-auction information acquisition. Technical report, University of Illinois, 2005.
[64] Ariel Rubinstein. Modeling Bounded Rationality. MIT, 1998.
[65] Barry Schwartz. The Paradox of Choice: Why More is Less. Ecco, 2004.
[66] Herbert Simon. Models of Bounded Rationality. MIT, 1982.
[67] I. Simonson and A. Tversky. Choice in context: Tradeoff contrast and extremeness aversion. J. Marketing Research, 29:281-295, 1992.
[68] Brian Skyrms. Dynamic models of deliberation and the theory of games. In Proceedings of the Theoretical Aspects of Rationality and Knowledge, pages 185-200, 1990.
[69] Richard Sutton and Andrew Barto. Reinforcement Learning: An Introduction. MIT, 1998.
[70] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton, 1957.
[71] Bernhard von Stengel. Computing equilibria for two-person games. In R. J. Aumann and S. Hart, editors, Handbook of Game Theory with Economic Applications, volume 3, pages 1723-1759. Elsevier, 2002.
[72] S. Zilberstein and S. Russell. Approximate reasoning using anytime algorithms. In S.
Natarajan, editor, Imprecise and Approximate Computation.\nKluwer, 1995.\n159","lvl-3":"Playing Games in Many Possible Worlds\nABSTRACT\nIn traditional game theory, players are typically endowed with exogenously given knowledge of the structure of the game--either full omniscient knowledge or partial but fixed information.\nIn real life, however, people are often unaware of the utility of taking a particular action until they perform research into its consequences.\nIn this paper, we model this phenomenon.\nWe imagine a player engaged in a questionand-answer session, asking questions both about his or her own preferences and about the state of reality; thus we call this setting \"Socratic\" game theory.\nIn a Socratic game, players begin with an a priori probability distribution over many possible worlds, with a different utility function for each world.\nPlayers can make queries, at some cost, to learn partial information about which of the possible worlds is the actual world, before choosing an action.\nWe consider two query models: (1) an unobservable-query model, in which players learn only the response to their own queries, and (2) an observable-query model, in which players also learn which queries their opponents made.\nThe results in this paper consider cases in which the underlying worlds of a two-player Socratic game are either constant-sum games or strategically zero-sum games, a class that generalizes constant-sum games to include all games in which the sum of payoffs depends linearly on the interaction between the players.\nWhen the underlying worlds are constant sum, we give polynomial-time algorithms to find Nash equilibria in both the observable - and unobservable-query models.\nWhen the worlds are strategically zero sum, we give efficient algorithms to find Nash equilibria in unobservablequery Socratic games and correlated equilibria in observablequery Socratic games.\n1.\nINTRODUCTION\nLate October 1960.\nA smoky room.\nDemocratic Party 
strategists huddle around a map. How should the Kennedy campaign allocate its remaining advertising budget? Should it focus on, say, California or New York? The Nixon campaign faces the same dilemma. Of course, neither campaign knows the effectiveness of its advertising in each state. Perhaps Californians are susceptible to Nixon's advertising, but are unresponsive to Kennedy's. In light of this uncertainty, the Kennedy campaign may conduct a survey, at some cost, to estimate the effectiveness of its advertising. Moreover, the larger (and more expensive) the survey, the more accurate it will be. Is the cost of a survey worth the information that it provides? How should one balance the cost of acquiring more information against the risk of playing a game with higher uncertainty?

In this paper, we model situations of this type as Socratic games. As in traditional game theory, the players in a Socratic game choose actions to maximize their payoffs, but we model players with incomplete information who can make costly queries to reduce their uncertainty about the state of the world before they choose their actions. This approach contrasts with traditional game theory, in which players are usually modeled as having fixed, exogenously given information about the structure of the game and its payoffs. (In traditional games of incomplete and imperfect information, there is information that the players do not have; in Socratic games, unlike in these games, the players have a chance to acquire the missing information, at some cost.) A number of related models have been explored by economists and computer scientists motivated by similar situations, often with a focus on mechanism design and auctions; a sampling of this research includes the work of Larson and Sandholm [41, 42, 43, 44], Parkes [59], Fong [22], Compte and Jehiel [12], Rezende [63], Persico and Matthews [48, 60], Crémer and Khalil [15], Rasmusen [62], and Bergemann and Välimäki [4, 5]. The model of Bergemann and Välimäki is similar in many regards to the one that we explore here; see Section 7 for some discussion.

A Socratic game proceeds as follows. A real world is chosen randomly from a set of possible worlds according to a common prior distribution. Each player then selects an arbitrary query from a set of available costly queries and receives a corresponding piece of information about the real world. Finally each player selects an action and receives a payoff, a function of the players' selected actions and the identity of the real world, less the cost of the query that he or she made. Compared to traditional game theory, the distinguishing feature of our model is the introduction of explicit costs to the players for learning arbitrary partial information about which of the many possible worlds is the real world.

Our research was initially inspired by recent results in psychology on decision making, but it soon became clear that Socratic game theory is also a general tool for understanding the "exploitation versus exploration" tradeoff, well studied in machine learning, in a strategic multiplayer environment. This tension between the risk arising from uncertainty and the cost of acquiring information is ubiquitous in economics, political science, and beyond.

Our results. We consider Socratic games under two models: an unobservable-query model where players learn only the response to their own queries and an observable-query model where players also learn which queries their opponents made. We give efficient algorithms to find Nash equilibria, i.e., tuples of strategies from which no player has unilateral incentive to deviate, in broad classes of two-player Socratic games in both models. Our first result is an efficient algorithm to find Nash equilibria in unobservable-query Socratic games with constant-sum worlds, in which the sum of the players' payoffs is independent of their actions. Our
techniques also yield Nash equilibria in unobservable-query Socratic games with strategically zero-sum worlds. Strategically zero-sum games generalize constant-sum games by allowing the sum of the players' payoffs to depend on individual players' choices of strategy, but not on any interaction of their choices. Our second result is an efficient algorithm to find Nash equilibria in observable-query Socratic games with constant-sum worlds. Finally, we give an efficient algorithm to find correlated equilibria, a weaker but increasingly well-studied solution concept for games [2, 3, 32, 56, 57], in observable-query Socratic games with strategically zero-sum worlds.

Like all games, Socratic games can be viewed as a special case of extensive-form games, which represent games by trees in which internal nodes represent choices made by chance or by the players, and the leaves represent outcomes that correspond to a vector of payoffs to the players. Algorithmically, the generality of extensive-form games makes them difficult to solve efficiently, and the special cases that are known to be efficiently solvable do not include even simple Socratic games. Every (complete-information) classical game is a trivial Socratic game (with a single possible world and a single trivial query), and efficiently finding Nash equilibria in classical games has been shown to be hard [10, 11, 13, 16, 17, 27, 54, 55]. Therefore we would not expect to find a straightforward polynomial-time algorithm to compute Nash equilibria in general Socratic games. However, it is well known that Nash equilibria can be found efficiently via an LP for two-player constant-sum games [49, 71] (and strategically zero-sum games [51]). A Socratic game is itself a classical game, so one might hope that these results can be applied to Socratic games with constant-sum (or strategically zero-sum) worlds.

We face two major obstacles in extending these classical results to Socratic games. First, a Socratic game with constant-sum worlds is not itself a constant-sum classical game; rather, the resulting classical game is only strategically zero sum. Worse yet, a Socratic game with strategically zero-sum worlds is not itself classically strategically zero sum; indeed, there are no known efficient algorithmic techniques to compute Nash equilibria in the resulting class of classical games. (Exponential-time algorithms like Lemke/Howson, of course, can be used [45].) Thus even when it is easy to find Nash equilibria in each of the worlds of a Socratic game, we require new techniques to solve the Socratic game itself. Second, even when the Socratic game itself is strategically zero sum, the number of possible strategies available to each player is exponential in the natural representation of the game. As a result, the standard linear programs for computing equilibria have an exponential number of variables and an exponential number of constraints. For unobservable-query Socratic games with strategically zero-sum worlds, we address these obstacles by formulating a new LP that uses only polynomially many variables (though still an exponential number of constraints) and then use ellipsoid-based techniques to solve it. For observable-query Socratic games, we handle the exponentiality by decomposing the game into stages, solving the stages separately, and showing how to reassemble the solutions efficiently. To solve the stages, it is necessary to find Nash equilibria in Bayesian strategically zero-sum games, and we give an explicit polynomial-time algorithm to do so.

2. GAMES AND SOCRATIC GAMES
2.1 Background on Game Theory
2.2 Socratic Games
3. STRATEGICALLY ZERO-SUM GAMES
4. SOCRATIC GAMES WITH UNOBSERVABLE QUERIES
5. SOCRATIC GAMES WITH OBSERVABLE QUERIES
6. RELATED WORK

Our work was initially motivated by research in the social sciences indicating that real people seem (irrationally) paralyzed when they are presented with
additional options. In this section, we briefly review some of these social-science experiments and then discuss technical approaches related to Socratic game theory.

Prima facie, a rational agent's happiness given an added option can only increase. However, recent research has found that more choices tend to decrease happiness: for example, students choosing among extra-credit options are more likely to do extra credit if given a small subset of the choices and, moreover, produce higher-quality work [35]. (See also [19].) The psychology literature explores a number of explanations: people may miscalculate their opportunity cost by comparing their choice to a "component-wise maximum" of all other options instead of the single best alternative [65], a new option may draw undue attention to aspects of the other options [67], and so on. The present work explores an economic explanation of this phenomenon: information is not free. When there are more options, a decision-maker must spend more time to achieve a satisfactory outcome. See, e.g., the work of Skyrms [68] for a philosophical perspective on the role of deliberation in strategic situations. Finally, we note the connection between Socratic games and modal logic [34], a formalism for the logic of possibility and necessity.

The observation that human players typically do not play "rational" strategies has inspired some attempts to model "partially" rational players. The typical model of this so-called bounded rationality [36, 64, 66] is to postulate bounds on computational power in computing the consequences of a strategy. The work on bounded rationality [23, 24, 53, 58] differs from the models that we consider here in that instead of putting hard limitations on the computational power of the agents, we instead restrict their a priori knowledge of the state of the world, requiring them to spend time (and therefore money/utility) to learn about it.

Partially observable stochastic games (POSGs) are a general framework used in AI to model situations of multi-agent planning in an evolving, unknown environment, but the generality of POSGs seems to make them very difficult [6]. Recent work has been done in developing algorithms for restricted classes of POSGs, most notably classes of cooperative POSGs, e.g., [20, 30], which are very different from the competitive strategically zero-sum games we address in this paper.

The fundamental question in Socratic games is deciding on the comparative value of making a more costly but more informative query, or concluding the data-gathering phase and picking the best option, given current information. This tradeoff has been explored in a variety of other contexts; a sampling of these contexts includes aggregating results from delay-prone information sources [8], doing approximate reasoning in intelligent systems [72], and deciding when to take the current best guess of disease diagnosis from a belief-propagation network and when to let it continue inference [33], among many others. This issue can also be viewed as another perspective on the general question of exploration versus exploitation that arises often in AI: when is it better to actively seek additional information instead of exploiting the knowledge one already has? (See, e.g., [69].) Most of this work differs significantly from our own in that it considers single-agent planning as opposed to the game-theoretic setting. A notable exception is the work of Larson and Sandholm [41, 42, 43, 44] on mechanism design for interacting agents whose computation is costly and limited. They present a model in which players must solve a computationally intractable valuation problem, using costly computation to learn some hidden parameters, and results for auctions and bargaining games in this model.

7. FUTURE DIRECTIONS

Efficiently finding Nash equilibria in Socratic games with non-strategically zero-sum worlds is
probably difficult because the existence of such an algorithm for classical games has been shown to be unlikely [10, 11, 13, 16, 17, 27, 54, 55]. There has, however, been some algorithmic success in finding Nash equilibria in restricted classical settings (e.g., [21, 46, 47, 57]); we might hope to extend our results to analogous Socratic games.

An efficient algorithm to find correlated equilibria in general Socratic games seems more attainable. Suppose the players receive recommended queries and responses. The difficulty is that when a player considers a deviation from his recommended query, he already knows his recommended response in each of the Stage-2 games. In a correlated equilibrium, a player's expected payoff generally depends on his recommended strategy, and thus a player may deviate in Stage 1 so as to land in a Stage-2 game where he has been given a "better than average" recommended response. (Socratic games are "succinct games of superpolynomial type," so Papadimitriou's results [56] do not imply correlated equilibria for them.)

Socratic games can be extended to allow players to make adaptive queries, choosing subsequent queries based on previous results. Our techniques carry over to O(1) rounds of unobservable queries, but it would be interesting to compute equilibria in Socratic games with adaptive observable queries or with ω(1) rounds of unobservable queries. Special cases of adaptive Socratic games are closely related to single-agent problems like minimum latency [1, 7, 26], determining strategies for using priced information [9, 29, 37], and an online version of minimum test cover [18, 50]. Although there are important technical distinctions between adaptive Socratic games and these problems, approximation techniques from this literature may apply to Socratic games.

The question of approximation raises interesting questions even in non-adaptive Socratic games. An ε-approximate Nash equilibrium is a strategy profile α such that no player can increase her payoff by an additive ε by deviating from α. Finding approximate Nash equilibria in both adaptive and non-adaptive Socratic games is an interesting direction to pursue.

Another natural extension is the model where query results are stochastic. In this paper, we model a query as deterministically partitioning the possible worlds into subsets that the query cannot distinguish. However, one could instead model a query as probabilistically mapping the set of possible worlds into the set of signals. With this modification, our unobservable-query model becomes equivalent to the model of Bergemann and Välimäki [4, 5], in which the result of a query is a posterior distribution over the worlds. Our techniques allow us to compute equilibria in such a "stochastic-query" model provided that each query is represented as a table that, for each world/signal pair, lists the probability that the query outputs that signal in that world. It is also interesting to consider settings in which the game's queries are specified by a compact representation of the relevant probability distributions. (For example, one might consider a setting in which the algorithm has only a sampling oracle for the posterior distributions envisioned by Bergemann and Välimäki.) Efficiently finding equilibria in such settings remains an open problem.

Another interesting setting for Socratic games is when the set Q of available queries is given by Q = P(Γ), i.e., each player chooses to make a set q ∈ P(Γ) of queries from a specified ground set Γ of queries. Here we take the query cost to be a linear function, so that δ(q) = Σγ∈q δ({γ}). Natural ground sets include comparison queries ("if my opponent is playing strategy a_II, would I prefer to play a_I or â_I?"), strategy queries ("what is my vector of payoffs if I play strategy a_I?"), and world-identity queries ("is the world
w E W the real world?\")\n.\nWhen one can infer a polynomial bound on the number of queries made by a rational player, then our results yield efficient solutions.\n(For example, we can efficiently solve games in which every groundset element - y E \u0393 has \u03b4 ({- y}) = \u03a9 (M \u2212 M), where M and M denote the maximum and minimum payoffs to any player in any world.)\nConversely, it is NP-hard to compute a Nash equilibrium for such a game when every \u03b4 ({- y}) <1 \/ | W | 2, even when the worlds are constant sum and Player II has only a single available strategy.\nThus even computing a best response for Player I is hard.\n(This proof proceeds by reduction from set cover; intuitively, for sufficiently low query costs, Player I must fully identify the actual world through his queries.\nSelecting a minimum-sized set of these queries is hard.)\nComputing Player I's best response can be viewed as maximizing a submodular function, and thus a best response can be (1 \u2212 1\/e) \u2248 0.63 approximated greedily [14].\nAn interesting open question is whether this approximate best-response calculation can be leveraged to find an approximate Nash equilibrium.","lvl-4":"Playing Games in Many Possible Worlds\nABSTRACT\nIn traditional game theory, players are typically endowed with exogenously given knowledge of the structure of the game--either full omniscient knowledge or partial but fixed information.\nIn real life, however, people are often unaware of the utility of taking a particular action until they perform research into its consequences.\nIn this paper, we model this phenomenon.\nWe imagine a player engaged in a questionand-answer session, asking questions both about his or her own preferences and about the state of reality; thus we call this setting \"Socratic\" game theory.\nIn a Socratic game, players begin with an a priori probability distribution over many possible worlds, with a different utility function for each world.\nPlayers can make queries, 
at some cost, to learn partial information about which of the possible worlds is the actual world, before choosing an action.\nWe consider two query models: (1) an unobservable-query model, in which players learn only the response to their own queries, and (2) an observable-query model, in which players also learn which queries their opponents made.\nThe results in this paper consider cases in which the underlying worlds of a two-player Socratic game are either constant-sum games or strategically zero-sum games, a class that generalizes constant-sum games to include all games in which the sum of payoffs depends linearly on the interaction between the players.\nWhen the underlying worlds are constant sum, we give polynomial-time algorithms to find Nash equilibria in both the observable - and unobservable-query models.\nWhen the worlds are strategically zero sum, we give efficient algorithms to find Nash equilibria in unobservablequery Socratic games and correlated equilibria in observablequery Socratic games.\n1.\nINTRODUCTION\nLate October 1960.\nA smoky room.\nHow should the Kennedy campaign allocate its remaining advertising budget?\nThe Nixon campaign faces the same dilemma.\nOf course, neither campaign knows the effectiveness of its advertising in each state.\nPerhaps Californians are susceptible to Nixon's advertising, but are unresponsive to Kennedy's.\nIn light of this uncertainty, the Kennedy campaign may conduct a survey, at some cost, to estimate the effectiveness of its advertising.\nIs the cost of a survey worth the information that it provides?\nHow should one balance the cost of acquiring more information against the risk of playing a game with higher uncertainty?\nIn this paper, we model situations of this type as Socratic games.\nAs in traditional game theory, the players in a Socratic game choose actions to maximize their payoffs, but we model players with incomplete information who can make costly queries to reduce their uncertainty about the 
state of the world before they choose their actions.\nThis approach contrasts with traditional game theory, in which players are usually modeled as having fixed, exogenously given information about the structure of the game and its payoffs.\n(In traditional games of incomplete and imperfect information, there is information that the players do not have; in Socratic games, unlike in these games, the players have a chance to acquire the missing information, at some cost.)\nThe model of Bergemann and V \u00a8 alim \u00a8 aki is similar in many regards to the one that we explore here; see Section 7 for some discussion.\nA Socratic game proceeds as follows.\nA real world is cho\nsen randomly from a set of possible worlds according to a common prior distribution.\nEach player then selects an arbitrary query from a set of available costly queries and receives a corresponding piece of information about the real world.\nFinally each player selects an action and receives a payoff--a function of the players' selected actions and the identity of the real world--less the cost of the query that he or she made.\nCompared to traditional game theory, the distinguishing feature of our model is the introduction of explicit costs to the players for learning arbitrary partial information about which of the many possible worlds is the real world.\nThis tension between the risk arising from uncertainty and the cost of acquiring information is ubiquitous in economics, political science, and beyond.\nOur results.\nWe consider Socratic games under two models: an unobservable-query model where players learn only the response to their own queries and an observable-query model where players also learn which queries their opponents made.\nWe give efficient algorithms to find Nash equilibria--i.e., tuples of strategies from which no player has unilateral incentive to deviate--in broad classes of two-player Socratic games in both models.\nOur first result is an efficient algorithm to find Nash 
equilibria in unobservable-query Socratic games with constant-sum worlds, in which the sum of the players' payoffs is independent of their actions.\nOur techniques also yield Nash equilibria in unobservable-query Socratic games with strategically zero-sum worlds.\nStrategically zero-sum games generalize constant-sum games by allowing the sum of the players' payoffs to depend on individual players' choices of strategy, but not on any interaction of their choices.\nOur second result is an efficient algorithm to find Nash equilibria in observable-query Socratic games with constant-sum worlds.\nFinally, we give an efficient algorithm to find correlated equilibria--a weaker but increasingly well-studied solution concept for games [2, 3, 32, 56, 57]--in observable-query Socratic games with strategically zero-sum worlds.\nLike all games, Socratic games can be viewed as a special case of extensive-form games, which represent games by trees in which internal nodes represent choices made by chance or by the players, and the leaves represent outcomes that correspond to a vector of payoffs to the players.\nAlgorithmically, the generality of extensive-form games makes them difficult to solve efficiently, and the special cases that are known to be efficiently solvable do not include even simple Socratic games.\nEvery (complete-information) classical game is a trivial Socratic game (with a single possible world and a single trivial query), and efficiently finding Nash equilibria in classical games has been shown to be hard [10, 11, 13, 16, 17, 27, 54, 55].\nTherefore we would not expect to find a straightforward polynomial-time algorithm to compute Nash equilibria in general Socratic games.\nHowever, it is well known that Nash equilibria can be found efficiently via an LP for two-player constant-sum games [49, 71] (and strategically zero-sum games [51]).\nA Socratic game is itself a classical game, so one might hope that these results can be applied to Socratic games with 
constant-sum (or strategically zero-sum) worlds.\nWe face two major obstacles in extending these classical results to Socratic games.\nFirst, a Socratic game with constant-sum worlds is not itself a constant-sum classical game--rather, the resulting classical game is only strategically zero sum.\nWorse yet, a Socratic game with strategically zero-sum worlds is not itself classically strategically zero sum--indeed, there are no known efficient algorithmic techniques to compute Nash equilibria in the resulting class of classical games.\n(Exponential-time algorithms like Lemke\/Howson, of course, can be used [45].)\nThus even when it is easy to find Nash equilibria in each of the worlds of a Socratic game, we require new techniques to solve the Socratic game itself.\nSecond, even when the Socratic game itself is strategically zero sum, the number of possible strategies available to each player is exponential in the natural representation of the game.\nAs a result, the standard linear programs for computing equilibria have an exponential number of variables and an exponential number of constraints.\nFor unobservable-query Socratic games with strategically zero-sum worlds, we address these obstacles by formulating a new LP that uses only polynomially many variables (though still an exponential number of constraints) and then use ellipsoid-based techniques to solve it.\nFor observablequery Socratic games, we handle the exponentiality by decomposing the game into stages, solving the stages separately, and showing how to reassemble the solutions efficiently.\nTo solve the stages, it is necessary to find Nash equilibria in Bayesian strategically zero-sum games, and we give an explicit polynomial-time algorithm to do so.\n6.\nRELATED WORK\nOur work was initially motivated by research in the social sciences indicating that real people seem (irrationally) paralyzed when they are presented with additional options.\nIn this section, we briefly review some of these social-science 
experiments and then discuss technical approaches related to Socratic game theory.\nPrima facie, a rational agent's happiness given an added option can only increase.\n(See also [19].)\nThe present work explores an economic explanation of this phenomenon: information is not free.\nSee, e.g., the work of Skyrms [68] for a philosophical perspective on the role of deliberation in strategic situations.\nFinally, we note the connection between Socratic games and modal logic [34], a formalism for the logic of possibility and necessity.\nThe observation that human players typically do not play \"rational\" strategies has inspired some attempts to model \"partially\" rational players.\nPartially observable stochastic games (POSGs) are a general framework used in AI to model situations of multi-agent planning in an evolving, unknown environment, but the generality of POSGs seems to make them very difficult [6].\nRecent work has been done in developing algorithms for restricted classes of POSGs, most notably classes of cooperative POSGs--e.g., [20, 30]--which are very different from the competitive strategically zero-sum games we address in this paper.\nThe fundamental question in Socratic games is deciding on the comparative value of making a more costly but more informative query, or concluding the data-gathering phase and picking the best option, given current information.\n(See, e.g., [69].)\nMost of this work differs significantly from our own in that it considers single-agent planning as opposed to the game-theoretic setting.\nThey present a model in which players must solve a computationally intractable valuation problem, using costly computation to learn some hidden parameters, and results for auctions and bargaining games in this model.\n7.\nFUTURE DIRECTIONS\nEfficiently finding Nash equilibria in Socratic games with non-strategically zero-sum worlds is probably difficult because the existence of such an algorithm for classical games has been shown to be unlikely 
[10, 11, 13, 16, 17, 27, 54, 55]. There has, however, been some algorithmic success in finding Nash equilibria in restricted classical settings (e.g., [21, 46, 47, 57]); we might hope to extend these results to analogous Socratic games.

An efficient algorithm to find correlated equilibria in general Socratic games seems more attainable. Suppose the players receive recommended queries and responses. The difficulty is that when a player considers a deviation from his recommended query, he already knows his recommended response in each of the Stage-2 games. In a correlated equilibrium, a player's expected payoff generally depends on his recommended strategy, and thus a player may deviate in Stage 1 so as to land in a Stage-2 game where he has been given a "better than average" recommended response. (Socratic games are "succinct games of superpolynomial type," so Papadimitriou's results [56] do not imply correlated equilibria for them.)

Socratic games can be extended to allow players to make adaptive queries, choosing subsequent queries based on previous results. Our techniques carry over to O(1) rounds of unobservable queries, but it would be interesting to compute equilibria in Socratic games with adaptive observable queries or with ω(1) rounds of unobservable queries. Special cases of adaptive Socratic games are closely related to single-agent problems like minimum latency [1, 7, 26], determining strategies for using priced information [9, 29, 37], and an online version of minimum test cover [18, 50]. Although there are important technical distinctions between adaptive Socratic games and these problems, approximation techniques from this literature may apply to Socratic games.

The question of approximation raises interesting questions even in non-adaptive Socratic games. An ε-approximate Nash equilibrium is a strategy profile α such that no player can increase her payoff by an additive ε by deviating from α. Finding approximate Nash equilibria in both adaptive and non-adaptive Socratic games is an interesting direction to pursue.

Another natural extension is the model where query results are stochastic. In this paper, we model a query as deterministically partitioning the possible worlds into subsets that the query cannot distinguish. However, one could instead model a query as probabilistically mapping the set of possible worlds into the set of signals. With this modification, our unobservable-query model becomes equivalent to the model of Bergemann and Välimäki [4, 5], in which the result of a query is a posterior distribution over the worlds. Our techniques allow us to compute equilibria in such a "stochastic-query" model provided that each query is represented as a table that, for each world/signal pair, lists the probability that the query outputs that signal in that world. It is also interesting to consider settings in which the game's queries are specified by a compact representation of the relevant probability distributions. Efficiently finding equilibria in such settings remains an open problem.

Another interesting setting for Socratic games is when the set Q of available queries is given by Q = P(Γ), i.e., each player chooses to make a set q ∈ P(Γ) of queries from a specified groundset Γ of queries. Here we take the query cost to be a linear function, so that δ(q) = Σ_{γ∈q} δ({γ}). Natural groundsets include comparison queries ("if my opponent is playing strategy a_II, would I prefer to play a_I or â_I?"), strategy queries ("what is my vector of payoffs if I play strategy a_I?"), and world-identity queries ("is the world w ∈ W the real world?"). When one can infer a polynomial bound on the number of queries made by a rational player, our results yield efficient solutions. (For example, we can efficiently solve games in which every groundset element γ ∈ Γ has δ({γ}) = Ω(M_max − M_min), where M_max and M_min denote the maximum and minimum payoffs to any player in any world.) Conversely, it is NP-hard to compute a Nash equilibrium for such a game when every δ({γ}) < 1/|W|², even when the worlds are constant sum and Player II has only a single available strategy. Thus even computing a best response for Player I is hard. (This proof proceeds by reduction from set cover; intuitively, for sufficiently low query costs, Player I must fully identify the actual world through his queries. Selecting a minimum-sized set of these queries is hard.) An interesting open question is whether an approximate best-response calculation can be leveraged to find an approximate Nash equilibrium.

Playing Games in Many Possible Worlds

ABSTRACT

In traditional game theory, players are typically endowed with exogenously given knowledge of the structure of the game: either full omniscient knowledge or partial but fixed information. In real life, however, people are often unaware of the utility of taking a particular action until they perform research into its consequences. In this paper, we model this phenomenon. We imagine a player engaged in a question-and-answer session, asking questions both about his or her own preferences and about the state of reality; thus we call this setting "Socratic" game theory. In a Socratic game, players begin with an a priori probability distribution over many possible worlds, with a different utility function for each world. Players can make queries, at some cost, to learn partial information about which of the possible worlds is the actual world, before choosing an action. We consider two query models: (1) an unobservable-query model, in which players learn only the response to their own queries, and (2) an observable-query model, in which players also learn which queries their opponents made. The results in this paper consider cases in which the underlying worlds of a two-player Socratic game are
either constant-sum games or strategically zero-sum games, a class that generalizes constant-sum games to include all games in which the sum of payoffs depends linearly on the interaction between the players. When the underlying worlds are constant sum, we give polynomial-time algorithms to find Nash equilibria in both the observable- and unobservable-query models. When the worlds are strategically zero sum, we give efficient algorithms to find Nash equilibria in unobservable-query Socratic games and correlated equilibria in observable-query Socratic games.

1. INTRODUCTION

Late October 1960. A smoky room. Democratic Party strategists huddle around a map. How should the Kennedy campaign allocate its remaining advertising budget? Should it focus on, say, California or New York? The Nixon campaign faces the same dilemma. Of course, neither campaign knows the effectiveness of its advertising in each state. Perhaps Californians are susceptible to Nixon's advertising, but are unresponsive to Kennedy's. In light of this uncertainty, the Kennedy campaign may conduct a survey, at some cost, to estimate the effectiveness of its advertising. Moreover, the larger (and more expensive) the survey, the more accurate it will be. Is the cost of a survey worth the information that it provides? How should one balance the cost of acquiring more information against the risk of playing a game with higher uncertainty?

In this paper, we model situations of this type as Socratic games. As in traditional game theory, the players in a Socratic game choose actions to maximize their payoffs, but we model players with incomplete information who can make costly queries to reduce their uncertainty about the state of the world before they choose their actions. This approach contrasts with traditional game theory, in which players are usually modeled as having fixed, exogenously given information about the structure of the game and its payoffs. (In traditional games of incomplete and imperfect information, there is information that the players do not have; in Socratic games, unlike in these games, the players have a chance to acquire the missing information, at some cost.) A number of related models have been explored by economists and computer scientists motivated by similar situations, often with a focus on mechanism design and auctions; a sampling of this research includes the work of Larson and Sandholm [41, 42, 43, 44], Parkes [59], Fong [22], Compte and Jehiel [12], Rezende [63], Persico and Matthews [48, 60], Crémer and Khalil [15], Rasmusen [62], and Bergemann and Välimäki [4, 5]. The model of Bergemann and Välimäki is similar in many regards to the one that we explore here; see Section 7 for some discussion.

A Socratic game proceeds as follows. A real world is chosen randomly from a set of possible worlds according to a common prior distribution. Each player then selects an arbitrary query from a set of available costly queries and receives a corresponding piece of information about the real world. Finally each player selects an action and receives a payoff (a function of the players' selected actions and the identity of the real world) less the cost of the query that he or she made. Compared to traditional game theory, the distinguishing feature of our model is the introduction of explicit costs to the players for learning arbitrary partial information about which of the many possible worlds is the real world.

Our research was initially inspired by recent results in psychology on decision making, but it soon became clear that Socratic game theory is also a general tool for understanding the "exploitation versus exploration" tradeoff, well studied in machine learning, in a strategic multiplayer environment. This tension between the risk arising from uncertainty and the cost of acquiring information is ubiquitous in economics, political science, and beyond.

Our results. We
consider Socratic games under two models: an unobservable-query model, in which players learn only the responses to their own queries, and an observable-query model, in which players also learn which queries their opponents made. We give efficient algorithms to find Nash equilibria, i.e., tuples of strategies from which no player has a unilateral incentive to deviate, in broad classes of two-player Socratic games in both models. Our first result is an efficient algorithm to find Nash equilibria in unobservable-query Socratic games with constant-sum worlds, in which the sum of the players' payoffs is independent of their actions. Our techniques also yield Nash equilibria in unobservable-query Socratic games with strategically zero-sum worlds. Strategically zero-sum games generalize constant-sum games by allowing the sum of the players' payoffs to depend on individual players' choices of strategy, but not on any interaction of their choices. Our second result is an efficient algorithm to find Nash equilibria in observable-query Socratic games with constant-sum worlds. Finally, we give an efficient algorithm to find correlated equilibria, a weaker but increasingly well-studied solution concept for games [2, 3, 32, 56, 57], in observable-query Socratic games with strategically zero-sum worlds.

Like all games, Socratic games can be viewed as a special case of extensive-form games, which represent games by trees in which internal nodes represent choices made by chance or by the players, and leaves represent outcomes corresponding to a vector of payoffs to the players. Algorithmically, the generality of extensive-form games makes them difficult to solve efficiently, and the special cases that are known to be efficiently solvable do not include even simple Socratic games. Every (complete-information) classical game is a trivial Socratic game (with a single possible world and a single trivial query), and efficiently finding Nash equilibria in classical games has been shown to be hard [10, 11, 13, 16, 17, 27, 54, 55]. Therefore we would not expect to find a straightforward polynomial-time algorithm to compute Nash equilibria in general Socratic games. However, it is well known that Nash equilibria can be found efficiently via a linear program (LP) for two-player constant-sum games [49, 71] (and strategically zero-sum games [51]). A Socratic game is itself a classical game, so one might hope that these results can be applied to Socratic games with constant-sum (or strategically zero-sum) worlds.

We face two major obstacles in extending these classical results to Socratic games. First, a Socratic game with constant-sum worlds is not itself a constant-sum classical game; rather, the resulting classical game is only strategically zero sum. Worse yet, a Socratic game with strategically zero-sum worlds is not itself classically strategically zero sum; indeed, there are no known efficient algorithmic techniques to compute Nash equilibria in the resulting class of classical games. (Exponential-time algorithms such as Lemke-Howson can, of course, be used [45].) Thus even when it is easy to find Nash equilibria in each of the worlds of a Socratic game, we require new techniques to solve the Socratic game itself. Second, even when the Socratic game itself is strategically zero sum, the number of pure strategies available to each player is exponential in the natural representation of the game. As a result, the standard linear programs for computing equilibria have an exponential number of variables and an exponential number of constraints.

For unobservable-query Socratic games with strategically zero-sum worlds, we address these obstacles by formulating a new LP that uses only polynomially many variables (though still exponentially many constraints) and then using ellipsoid-based techniques to solve it. For observable-query Socratic games, we handle the exponentiality by decomposing the game into stages, solving the stages separately, and
showing how to reassemble the solutions efficiently. To solve the stages, it is necessary to find Nash equilibria in Bayesian strategically zero-sum games, and we give an explicit polynomial-time algorithm to do so.

2. GAMES AND SOCRATIC GAMES

In this section, we review background on game theory and formally introduce Socratic games. We present these models in the context of two-player games, but the multiplayer case is a natural extension. Throughout the paper, boldface variables denote a pair of variables (e.g., a = (a_I, a_II)). Let Pr[x ← π] denote the probability that a particular value x is drawn from the distribution π, and let E_{x∼π}[g(x)] denote the expectation of g(x) when x is drawn from π.

2.1 Background on Game Theory

Consider two players, Player I and Player II, each of whom is attempting to maximize his or her utility (or payoff). A (two-player) game is a pair ⟨A, u⟩, where, for i ∈ {I, II}:

• A_i is the set of pure strategies for Player i, and A = (A_I, A_II); and
• u_i : A → R is the utility function for Player i, and u = (u_I, u_II).

We require that A and u be common knowledge. If each Player i chooses strategy a_i ∈ A_i, then the payoffs to Players I and II are u_I(a) and u_II(a), respectively. A game is constant sum if, for all a ∈ A, we have u_I(a) + u_II(a) = c for some fixed c independent of a.
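The classical minimax LP for constant-sum (equivalently, zero-sum) games, which the paper invokes via [49, 71], is easy to make concrete. The sketch below is our own illustration, not part of the paper's algorithms; the function name `zero_sum_equilibrium` and the payoff-shifting trick are our choices. It computes a maximin mixed strategy for Player I and the game value with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_equilibrium(U):
    """Maximin mixed strategy for Player I in a zero-sum game with
    payoff matrix U (rows index Player I's pure strategies)."""
    U = np.asarray(U, dtype=float)
    shift = U.min()            # shift payoffs positive so the value is > 0
    A = U - shift + 1.0
    m, n = A.shape
    # Variables: alpha_I (m entries) and the game value v.
    # Maximize v subject to sum_i alpha_i * A[i, j] >= v for every column j.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # linprog minimizes
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - alpha^T A[:, j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                             # probabilities sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    alpha = res.x[:m]
    value = res.x[-1] + shift - 1.0               # undo the shift
    return alpha, value

# Matching pennies: value 0, uniform strategy.
alpha, v = zero_sum_equilibrium([[1, -1], [-1, 1]])
```

The same LP, run on the column player's transposed and negated matrix, yields Player II's strategy; strategically zero-sum games reduce to this after subtracting the labels ℓ(i, a_i) defined in Section 3.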
Player i can also play a mixed strategy α_i ∈ 𝒜_i, where 𝒜_i denotes the space of probability measures over the set A_i. Then α_I(a_I) · α_II(a_II) denotes the joint probability of the independent events that each Player i chooses action a_i from the distribution α_i. This generalization to mixed strategies is known as von Neumann/Morgenstern utility [70], in which players are indifferent between a guaranteed payoff x and an expected payoff of x.

A Nash equilibrium is a pair α of mixed strategies from which neither player has an incentive to deviate unilaterally. Formally, the strategy pair α is a Nash equilibrium if and only if both u_I(α_I, α_II) = max_{α'_I ∈ 𝒜_I} u_I(α'_I, α_II) and u_II(α_I, α_II) = max_{α'_II ∈ 𝒜_II} u_II(α_I, α'_II); that is, the strategies α_I and α_II are mutual best responses.

A correlated equilibrium is a distribution ψ over A that obeys the following: if a ∈ A is drawn randomly according to ψ and Player i learns a_i, then no Player i has an incentive to deviate unilaterally from playing a_i. (A Nash equilibrium is a correlated equilibrium in which ψ(a) = α_I(a_I) · α_II(a_II) is a product distribution.) Formally, in a correlated equilibrium, for every a ∈ A it must be that a_I is a best response to a randomly chosen â_II ∈ A_II drawn according to ψ(a_I, â_II), and the analogous condition must hold for Player II.

2.2 Socratic Games

In this section, we formally define Socratic games. A Socratic game is a 7-tuple ⟨A, W, ū, S, Q, p, δ⟩, where, for i ∈ {I, II}:

• A_i is, as before, the set of pure strategies for Player i.
• W is a set of possible worlds, one of which is the real world w_real.
• ū_i = {u^w_i : A → R | w ∈ W} is a set of payoff functions for Player i, one for each possible world.
• S is a set of signals.
• Q_i is a set of available queries for Player i. When Player i makes query q_i : W → S, he or she receives the signal q_i(w_real). When Player i receives signal q_i(w_real) in response to query q_i, he or she can infer that w_real ∈ {w : q_i(w) = q_i(w_real)}, i.e., the set of possible worlds from which query q_i cannot distinguish w_real.
• p : W → [0, 1] is a probability distribution over the possible worlds.
• δ_i : Q_i → R≥0 gives the query cost for each available query for Player i.

Initially, the world w_real is chosen according to the probability distribution p, but its identity remains unknown to the players. That is, it is as if the players are playing the game ⟨A, u^{w_real}⟩ but do not know w_real. The players make queries q ∈ Q, and Player i receives the signal q_i(w_real). We consider both observable queries and unobservable queries. When queries are observable, each player learns which query was made by the other player and the result of his or her own query; that is, each Player i learns q_I, q_II, and q_i(w_real). For unobservable queries, Player i learns only q_i and q_i(w_real). After learning the results of the queries, the players select strategies a ∈ A and receive payoffs u^{w_real}_i(a) − δ_i(q_i).

In the Socratic game, a pure strategy for Player i consists of a query q_i ∈ Q_i and a response function mapping any result of the query q_i to a strategy a_i ∈ A_i to play. A player's state of knowledge after a query is a point in R := Q × S or R_i := Q_i × S for observable or unobservable queries, respectively. Thus Player i's response function maps R or R_i to A_i. Note that the number of pure strategies is exponential, as there are exponentially many response functions.

A mixed strategy involves both randomly choosing a query q_i ∈ Q_i and randomly choosing an action a_i ∈ A_i in response to the results of the query. Formally, we consider a mixed-strategy-function profile f = ⟨f^query, f^resp⟩ to have two parts:

• a function f^query_i : Q_i → [0, 1], where f^query_i(q_i) is the probability that Player i makes query q_i; and
• a function f^resp_i that maps R or R_i to a probability distribution over actions.

Player i chooses an action a_i ∈ A_i according to the probability distribution f^resp_i(q, q_i(w)) for observable queries, and according to f^resp_i(q_i, q_i(w)) for unobservable queries. (With unobservable queries, for example, the probability that Player I plays action a_I conditioned on making query q_I in world w is given by Pr[a_I ← f^resp_I(q_I, q_I(w))].) Mixed strategies are typically defined as probability distributions over the pure strategies, but here we represent a mixed strategy by a pair ⟨f^query, f^resp⟩, commonly referred to as a "behavioral" strategy in the game-theory literature. As in any game with perfect recall, one can easily map a mixture of pure strategies to a behavioral strategy f = ⟨f^query, f^resp⟩ that induces the same probability of making a particular query q_i or playing a particular action after making query q_i in a particular world. Thus it suffices to consider only this representation of mixed strategies. For a strategy-function profile f for observable queries, the (expected) payoff to Player i is given by

Σ_{w∈W} Σ_{q∈Q} Σ_{a∈A} p(w) · f^query_I(q_I) · f^query_II(q_II) · Pr[a_I ← f^resp_I(q, q_I(w))] · Pr[a_II ← f^resp_II(q, q_II(w))] · (u^w_i(a) − δ_i(q_i)).

The payoffs for unobservable queries are analogous, with f^resp_j(q_j, q_j(w)) in place of f^resp_j(q, q_j(w)).

3. STRATEGICALLY ZERO-SUM GAMES

We can view a Socratic game G with constant-sum worlds as an exponentially large classical game, with pure strategies "make query q_i and respond according to f_i." However, this classical game is not constant sum. The sum of the players' payoffs varies depending upon their strategies, because different queries incur different
costs. However, this game still has significant structure: the sum of payoffs varies only because of varying query costs. Thus the sum of payoffs does depend on the players' choices of strategies, but not on the interaction of their choices; i.e., for fixed functions g_I and g_II, we have u_I(q, f) + u_II(q, f) = g_I(q_I, f_I) + g_II(q_II, f_II) for all strategies ⟨q, f⟩. Such games are called strategically zero sum and were introduced by Moulin and Vial [51], who describe a notion of strategic equivalence and define strategically zero-sum games as those strategically equivalent to zero-sum games. It is interesting to note that two Socratic games with the same queries and strategically equivalent worlds are not necessarily strategically equivalent.

A game ⟨A, u⟩ is strategically zero sum if there exist labels ℓ(i, a_i) for every Player i and every pure strategy a_i ∈ A_i such that, for all mixed-strategy profiles α, the sum of the utilities satisfies

u_I(α) + u_II(α) = Σ_{a_I∈A_I} α_I(a_I) · ℓ(I, a_I) + Σ_{a_II∈A_II} α_II(a_II) · ℓ(II, a_II).

Note that any constant-sum game is strategically zero sum as well. It is not immediately obvious that one can efficiently decide whether a given game is strategically zero sum. For completeness, we give a characterization of classical strategically zero-sum games in terms of the rank of a simple matrix derived from the game's payoffs, allowing us to efficiently decide whether a given game is strategically zero sum and, if it is, to compute the labels ℓ(i, a_i).

THEOREM 3.1. Consider a game G = ⟨A, u⟩ with A_i = {a_i^1, ..., a_i^{n_i}}. Let M_G be the n_I-by-n_II matrix whose ⟨i, j⟩th entry M_G(i, j) satisfies log_2 M_G(i, j) = u_I(a_I^i, a_II^j) + u_II(a_I^i, a_II^j). Then the following are equivalent: (i) G is strategically zero sum; (ii) there exist labels ℓ(i, a_i) for every Player i ∈ {I, II} and every pure strategy a_i ∈ A_i such that, for all pure strategies a ∈ A, we have u_I(a) + u_II(a) = ℓ(I, a_I) + ℓ(II, a_II); and (iii) rank(M_G) = 1.

PROOF SKETCH. (i ⇒ ii) is immediate; every pure strategy is a trivially mixed strategy. For (ii ⇒ iii), let c̄_i be the n_i-element column vector with jth component 2^{ℓ(i, a_i^j)}; then c̄_I · c̄_II^T = M_G. For (iii ⇒ i), if rank(M_G) = 1, then M_G = u · v^T for some vectors u and v. We can prove that G is strategically zero sum by choosing labels ℓ(I, a_I^j) := log_2 u_j and ℓ(II, a_II^j) := log_2 v_j.

4. SOCRATIC GAMES WITH UNOBSERVABLE QUERIES

We begin with Socratic games with unobservable queries, where a player's choice of query is not revealed to her opponent. We give an efficient algorithm to solve unobservable-query Socratic games with strategically zero-sum worlds. Our algorithm is based upon the LP shown in Figure 1, whose feasible points are Nash equilibria for the game. The LP has polynomially many variables but exponentially many constraints. We give an efficient separation oracle for the LP, implying that the ellipsoid method [28, 38] yields an efficient algorithm. This approach extends the techniques of Koller and Megiddo [39] (see also [40]) to solve constant-sum games represented in extensive form. (Recall that their result does not directly apply in our case; even a Socratic game with constant-sum worlds is not a constant-sum classical game.)

LEMMA 4.1. Feasible points for the LP in Figure 1 correspond to Nash equilibria of G, with ρ_i the expected payoff to Player i.

PROOF SKETCH. We begin with a description of the correspondence between feasible points for the LP and Nash equilibria for G.
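The correspondence argument repeatedly needs, for a candidate point, a best response for each query against the opponent's induced behavior: condition on the observed signal, form the posterior over worlds and opponent actions, and pick the payoff-maximizing action. The sketch below is our own illustration of that computation with simplified dictionary inputs; the name `best_response_function` and the input encoding are hypothetical:

```python
from collections import defaultdict

def best_response_function(worlds, prior, u_I, q_I, f_II_action):
    """For each signal of query q_I, compute Player I's best pure response
    to Player II's world-dependent mixed action.
      worlds: list of world labels
      prior[w]: prior probability of world w
      u_I[w][aI][aII]: Player I's payoff matrix in world w
      q_I[w]: the signal produced by query q_I in world w
      f_II_action[w][aII]: probability Player II plays aII in world w
    Returns {signal: best action index for Player I}."""
    # Group the worlds by the signal Player I would observe.
    by_signal = defaultdict(list)
    for w in worlds:
        by_signal[q_I[w]].append(w)
    response = {}
    for sig, ws in by_signal.items():
        total = sum(prior[w] for w in ws)
        n_actions = len(u_I[ws[0]])
        best_a, best_val = None, float("-inf")
        for aI in range(n_actions):
            # Expected payoff of aI against II's induced mixed strategy,
            # under the posterior over worlds given this signal.
            val = sum(prior[w] / total *
                      sum(f_II_action[w][aII] * u_I[w][aI][aII]
                          for aII in range(len(u_I[w][aI])))
                      for w in ws)
            if val > best_val:
                best_a, best_val = aI, val
        response[sig] = best_a
    return response

# Two equally likely worlds; the query distinguishes them, and Player I's
# preferred action flips between them while Player II always plays action 0.
worlds = ["w0", "w1"]
resp = best_response_function(
    worlds,
    {"w0": 0.5, "w1": 0.5},
    {"w0": [[1, 1], [0, 0]], "w1": [[0, 0], [1, 1]]},
    {"w0": 0, "w1": 1},
    {"w0": [1.0, 0.0], "w1": [1.0, 0.0]})
```

Since there are only polynomially many queries, signals, worlds, and actions, this computation runs in polynomial time, which is exactly what the separation oracle below exploits.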
First, suppose that the strategy profile f = ⟨f^query, f^resp⟩ forms a Nash equilibrium for G. Then the following setting for the LP variables is feasible: y^i_{q_i} := f^query_i(q_i); x^i_{a_i,q_i,w} := f^query_i(q_i) · Pr[a_i ← f^resp_i(q_i, q_i(w))]; and ρ_i := the expected payoff to Player i under f. (We omit the straightforward calculations that verify feasibility.)

Next, suppose ⟨x^i_{a_i,q_i,w}, y^i_{q_i}, ρ_i⟩ is feasible for the LP. Let f be the strategy-function profile defined by f^query_i : q_i ↦ y^i_{q_i} and f^resp_i(q_i, q_i(w)) : a_i ↦ x^i_{a_i,q_i,w} / y^i_{q_i}. Verifying that this strategy profile is a Nash equilibrium requires checking that f^resp_i(q_i, q_i(w)) is a well-defined function (from constraint VI), that f^query_i and f^resp_i(q_i, q_i(w)) are probability distributions (from constraints III and IV), and that each player is playing a best response to his or her opponent's strategy (from constraints I and II). Finally, from constraints I and II, the expected payoff to Player i is at most ρ_i. Because the right-hand side of constraint VII is equal to the expected sum of the payoffs from f and is at most ρ_I + ρ_II, the payoffs are correct and imply the lemma.

We now give an efficient separation oracle for the LP in Figure 1, thus allowing the ellipsoid method to solve the LP in polynomial time. Recall that a separation oracle is a function that, given a setting for the variables in the LP, either returns "feasible" or returns a particular constraint of the LP that is violated by that setting of the variables. An efficient, correct separation oracle allows us to solve the LP efficiently via the ellipsoid method.

[Figure 1: An LP to find Nash equilibria in unobservable-query Socratic games with strategically zero-sum worlds. The input is a Socratic game ⟨A, W, ū, S, Q, p, δ⟩ such that each world w is strategically zero sum with labels ℓ(i, a_i, w). Player i makes query q_i ∈ Q_i with probability y^i_{q_i} and, when the actual world is w ∈ W, makes query q_i and plays action a_i with probability x^i_{a_i,q_i,w}. The expected payoff to Player i is given by ρ_i. Constraints (I-q_I-f_I) and (II-q_II-f_II) state that "Player i does not prefer 'make query q_i, then play according to the function f_i.'"]

LEMMA 4.2. There exists a separation oracle for the LP in Figure 1 that is correct and runs in polynomial time.

PROOF. Here is a description of the separation oracle SP. On input ⟨x^i_{a_i,q_i,w}, y^i_{q_i}, ρ_i⟩:

1. Check each of the constraints (III), (IV), (V), (VI), and (VII). If any one of these constraints is violated, then return it.

2. Define the strategy profile f by f^query_i : q_i ↦ y^i_{q_i} and f^resp_i(q_i, q_i(w)) : a_i ↦ x^i_{a_i,q_i,w} / y^i_{q_i}, and compute a best-response function f̂_I for Player I against f_II (and, symmetrically, f̂_II against f_I). More specifically, given f_II and the result q_I(w_real) of the query q_I, it is straightforward to compute the probability that, conditioned on the result of query q_I being q_I(w), the world is w and Player II will play action a_II ∈ A_II. Therefore, for each query q_I and response q_I(w), Player I can compute the expected utility of each pure response a_I to the induced mixed strategy over A_II for Player II. Player I can then select the action â_I maximizing this expected payoff; let f̂_I be the response function such that f̂_I(q_I, q_I(w)) = â_I.

3. Let ρ̂^{q_I}_I be the expected payoff to Player I using the strategy "make query q_I and play response function f̂_I" if Player II plays according to f_II. Let ρ̂_I = max_{q_I∈Q_I} ρ̂^{q_I}_I and let q̂_I = arg max_{q_I∈Q_I} ρ̂^{q_I}_I. Similarly, define ρ̂^{q_II}_II, ρ̂_II, and q̂_II.

4. For the strategies "make query q̂_i, then play response function f̂_i," with f̂_i and q̂_i defined in Step 3, return constraint (I-q̂_I-f̂_I) or (II-q̂_II-f̂_II) if either is violated. If both are satisfied, then return "feasible."

We first note that the separation oracle runs in polynomial time and then prove its correctness. Steps 1 and 4 are clearly polynomial. For Step 2, we have described how to compute the relevant response functions by examining every action of Player I, every world, every query, and every action of Player II. There are only polynomially many queries, worlds, query results, and pure actions, so the running time of Steps 2 and 3 is thus polynomial.

We now sketch the proof that the separation oracle works correctly. The main challenge is to show that if any constraint (I-q_I-f_I) is violated, then (I-q̂_I-f̂_I) is violated in Step 4. First, we observe that, by construction, the function f̂_I computed in Step 3 must be a best response to Player II playing f_II, no matter what query Player I makes. Therefore the strategy "make query q̂_I and then play response function f̂_I" must be a best response to Player II playing f_II, by the definition of q̂_I. The right-hand side of each constraint (I-q_I-f_I) is equal to the expected payoff that Player I receives when playing the pure strategy "make query q_I and then play response function f_I" against Player II's strategy f_II. Therefore, because the pure strategy "make query q̂_I and then play response function f̂_I" is a best response to Player II playing f_II, the right-hand side of constraint (I-q̂_I-f̂_I) is at least as large as the right-hand side of any constraint (I-q_I-f_I). Therefore, if any constraint (I-q_I-f_I) is violated, constraint (I-q̂_I-f̂_I) is also violated. An analogous argument holds for Player II.

These lemmas and the well-known fact that Nash equilibria always exist [52] imply the following theorem:

THEOREM 4.3. We can compute a Nash equilibrium for any two-player unobservable-query Socratic game with strategically zero-sum worlds in polynomial time.

5. SOCRATIC GAMES WITH OBSERVABLE QUERIES

In this section, we give efficient algorithms to find (1) a Nash equilibrium in observable-query Socratic games with constant-sum worlds and (2) a correlated equilibrium in the broader class of Socratic games with strategically zero-sum worlds. Recall that a Socratic game G = ⟨A, W, ū, S, Q, p, δ⟩ with observable queries proceeds in two stages:

Stage 1: The players simultaneously choose queries q ∈ Q.
Player i receives as output q_I, q_II, and q_i(w_real).

Stage 2: The players simultaneously choose strategies a ∈ A. The payoff to Player i is u^{w_real}_i(a) − δ_i(q_i).

Using backward induction, we first solve Stage 2 and then proceed to the Stage-1 game. For a query profile q ∈ Q, we would like to analyze the Stage-2 "game" Ĝ_q resulting from the players making queries q in Stage 1. Technically, however, Ĝ_q is not actually a game, because at the beginning of Stage 2 the players have different information about the world: Player I knows q_I(w_real), and Player II knows q_II(w_real). Fortunately, the situation in which players have asymmetric private knowledge has been well studied in the game-theory literature. A Bayesian game is a quadruple ⟨A, T, r, u⟩, where:

• A_i is the set of pure strategies for Player i.
• T_i is the set of types for Player i.
• r is a probability distribution over T; r(t) denotes the probability that each Player i has type t_i.
• u_i : A × T → R is the payoff function for Player i. If the players have types t and play pure strategies a, then u_i(a, t) denotes the payoff for Player i.

Initially, a type t is drawn randomly from T according to the distribution r. Player i learns his type t_i, but does not learn any other player's type. Player i then plays a mixed strategy α_i ∈ 𝒜_i (that is, a probability distribution over A_i) and receives payoff u_i(a, t). A strategy function is a function h_i : T_i → 𝒜_i; Player i plays the mixed strategy h_i(t_i) ∈ 𝒜_i when her type is t_i. A strategy-function profile h is a Bayesian Nash equilibrium if and only if no Player i has a unilateral incentive to deviate from h_i if the other players play according to h.
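A finite-type Bayesian game can always be flattened into an ordinary matrix game whose pure strategies are the strategy functions h_i : T_i → A_i, which is one way to make the stage-game machinery concrete. The sketch below is our own illustration (zero-sum for simplicity, with Player II's payoff taken as the negation of Player I's; the name `bayesian_to_matrix` is hypothetical). Note the exponential blowup in |T_i| that the paper's polynomial-time algorithm BNE is designed to avoid:

```python
import itertools
import numpy as np

def bayesian_to_matrix(types_I, types_II, n_act_I, n_act_II, r, u_I):
    """Expand a two-player zero-sum Bayesian game <A, T, r, u> into a
    matrix game over strategy functions h_i: T_i -> A_i.
      r[(tI, tII)]: prior probability of the type profile
      u_I[(tI, tII)]: Player I's payoff matrix for that type profile
    Returns the expected-payoff matrix for Player I."""
    funcs_I = list(itertools.product(range(n_act_I), repeat=len(types_I)))
    funcs_II = list(itertools.product(range(n_act_II), repeat=len(types_II)))
    M = np.zeros((len(funcs_I), len(funcs_II)))
    for i, hI in enumerate(funcs_I):
        for j, hII in enumerate(funcs_II):
            # Expected payoff when Player i plays hI[index of t_i] etc.
            M[i, j] = sum(
                r[(tI, tII)] * u_I[(tI, tII)][hI[a]][hII[b]]
                for a, tI in enumerate(types_I)
                for b, tII in enumerate(types_II))
    return M

# With one type per player, the expansion recovers the original game
# (matching pennies here):
pennies = {("t", "t"): [[1, -1], [-1, 1]]}
M = bayesian_to_matrix(("t",), ("t",), 2, 2, {("t", "t"): 1.0}, pennies)
```

The resulting matrix can be fed to any zero-sum LP solver, and a Nash equilibrium of the expanded game is exactly a Bayesian Nash equilibrium of the original; the expansion itself, however, has Π_i |A_i|^{|T_i|} pure strategies.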
For a two-player Bayesian game, if a = h(t), then the profile h is a Bayesian Nash equilibrium exactly when the following condition and its analogue for Player II hold:

E_{t←r}[u_I(h(t), t)] = max_{h'_I} E_{t←r}[u_I(⟨h'_I(t_I), h_II(t_II)⟩, t)].

These conditions hold if and only if, for all t_i ∈ T_i occurring with positive probability, Player i's expected utility conditioned on his type being t_i is maximized by h_i(t_i). A Bayesian game is constant sum if, for all a ∈ A and all t ∈ T, we have u_I(a, t) + u_II(a, t) = c_t for some constant c_t independent of a. A Bayesian game is strategically zero sum if the classical game ⟨A, u(·, t)⟩ is strategically zero sum for every t ∈ T. Whether a Bayesian game is strategically zero sum can be determined as in Theorem 3.1. (For further discussion of Bayesian games, see [25, 31].)

We now formally define the Stage-2 "game" as a Bayesian game. Given a Socratic game G = ⟨A, W, ū, S, Q, p, δ⟩ and a query profile q ∈ Q, we define the Stage-2 Bayesian game G_stage2(q) := ⟨A, T^q, p_stage2(q), u_stage2(q)⟩, where:

• A_i, the set of pure strategies for Player i, is the same as in the original Socratic game;
• T^q_i = {q_i(w) : w ∈ W}, the set of types for Player i, is the set of signals that can result from query q_i;
• p_stage2(q)(t) = Pr[q(w) = t | w ← p]; and
• u_stage2(q)_i(a, t) = E_{w←p}[u^w_i(a) | q(w) = t].

We now define the Stage-1 game in terms of the payoffs for the Stage-2 games. Fix any algorithm alg that finds a Bayesian Nash equilibrium h^{q,alg} := alg(G_stage2(q)) for each Stage-2 game. Define value^alg_i(G_stage2(q)) to be the expected payoff received by Player i in the Bayesian game G_stage2(q) if each player plays according to h^{q,alg}. The Stage-1 game G^alg_stage1 is then the classical game in which:

• A^stage1 := Q, the set of available queries in the Socratic game; and
• u^stage1_i(q) := value^alg_i(G_stage2(q)) − δ_i(q_i).

I.e., players choose queries q and receive payoffs corresponding to value^alg(G_stage2(q)), less query costs.

LEMMA 5.1. Consider an observable-query Socratic game G = ⟨A, W, ū, S, Q, p, δ⟩. Let G_stage2(q) be the Stage-2 games for all q ∈ Q, let alg be an algorithm finding a Bayesian Nash equilibrium in each G_stage2(q), and let G^alg_stage1 be the Stage-1 game. Let α be a Nash equilibrium for G^alg_stage1, and let h^{q,alg} := alg(G_stage2(q)) be a Bayesian Nash equilibrium for each G_stage2(q). Then the following strategy profile is a Nash equilibrium for G:

• In Stage 1, Player i makes query q_i with probability α_i(q_i). (That is, set f^query(q) := α(q).)
• In Stage 2, if q is the query profile from Stage 1 and q_i(w_real) denotes the response to Player i's query, then Player i chooses action a_i with probability Pr[a_i ← h^{q,alg}_i(q_i(w_real))].

We now find equilibria in the stage games for Socratic games with constant-sum or strategically zero-sum worlds. We first show that the stage games are well structured in this setting:

LEMMA 5.2. If the worlds of G are constant sum, then G^alg_stage1 is strategically zero sum for every algorithm alg, and every Stage-2 game G_stage2(q) is Bayesian constant sum. If the worlds of G are strategically zero sum, then every G_stage2(q) is Bayesian strategically zero sum.

We now show that we can efficiently compute equilibria for these well-structured stage games.

THEOREM 5.3. There exists a polynomial-time algorithm BNE finding Bayesian Nash equilibria in strategically zero-sum Bayesian (and thus classical strategically zero-sum or Bayesian constant-sum) two-player games.

PROOF SKETCH. Let G = ⟨A, T, r, u⟩ be a strategically zero-sum Bayesian game. Define an unobservable-query Socratic game G* with one possible world for each t ∈ T, one available zero-cost query q_i for each Player i such that q_i reveals t_i, and all else as in G. Bayesian Nash equilibria in G correspond directly to Nash equilibria in G*, and the worlds of G* are strategically zero sum. Thus by Theorem 4.3 we can compute Nash equilibria for G*, and thus we can compute Bayesian Nash equilibria for G. (LPs for zero-sum two-player Bayesian games have been previously developed and studied [61].)

THEOREM 5.4. We can compute a Nash equilibrium for an arbitrary two-player observable-query Socratic game G = ⟨A, W, ū, S, Q, p, δ⟩ with constant-sum worlds in polynomial time.

PROOF. Because each world of G is constant sum, Lemma 5.2 implies that the induced Stage-2 games G_stage2(q) are all Bayesian constant sum. Thus we can use algorithm BNE to compute a Bayesian Nash equilibrium h^{q,BNE} := BNE(G_stage2(q)) for each q ∈ Q, by Theorem 5.3. Furthermore, again by Lemma 5.2, the induced Stage-1 game G^BNE_stage1 is classical strategically zero sum. Therefore we can again use algorithm BNE to compute a Nash equilibrium α := BNE(G^BNE_stage1), again by Theorem 5.3. Therefore, by Lemma 5.1, we can assemble α and the h^{q,BNE}'s into a Nash equilibrium for the Socratic game G.

We would like to extend our results on observable-query Socratic games to Socratic games with strategically zero-sum worlds. While we can still find Nash equilibria in the Stage-2 games, the resulting Stage-1 game is not in general strategically zero sum. Thus, finding Nash equilibria in observable-query Socratic games with strategically zero-sum worlds seems to require substantially new techniques. However, our techniques for decomposing observable-query Socratic games do allow us to find correlated equilibria in this case.

LEMMA 5.5. Consider an observable-query Socratic game G = ⟨A, W, ū, S, Q, p, δ⟩. Let alg be an arbitrary algorithm that finds a Bayesian Nash equilibrium in each of the derived Stage-2 games G_stage2(q), and let G^alg_stage1 be the derived Stage-1 game. Let ψ be a correlated equilibrium for G^alg_stage1, and let h^{q,alg} := alg(G_stage2(q)) be a Bayesian Nash equilibrium for each G_stage2(q). Then the following distribution over pure strategies is a correlated equilibrium for G: draw a query profile q according to ψ, and have each Player i make query q_i and then play according to h^{q,alg}_i in Stage 2.

Thus, to find a correlated equilibrium in an observable-query Socratic game with strategically zero-sum worlds, we need only
algorithm BNE from Theorem 5.3 along with an efficient algorithm for finding a correlated equilibrium in a general game.\nSuch an algorithm exists (the definition of correlated equilibria can be directly translated into an LP [3]), and therefore we have the following theorem: THEOREM 5.6.\nWe can provide both efficient oracle access and efficient sampling access to a correlated equilibrium for any observable-query two-player Socratic game with strategically zero-sum worlds.\nBecause the support of the correlated equilibrium may be exponentially large, providing oracle and sampling access is the natural way to represent the correlated equilibrium.\nBy Lemma 5.5, we can also compute correlated equilibria in any observable-query Socratic game for which Nash equilibria are computable in the induced G_stage2(q) games (e.g., when G_stage2(q) is of constant size).\nAnother potentially interesting model of queries in Socratic games is what one might call public queries, in which both the choice and the outcome of a player's query are observable by all players in the game.\n(This model might be most appropriate in the presence of corporate espionage or media leaks, or in a setting in which the queries--and thus their results--are made in plain view.)\nThe techniques that we have developed in this section also yield exactly the same results as for observable queries.\nThe proof is actually simpler: with public queries, the players' payoffs are common knowledge when Stage 2 begins, and thus Stage 2 really is a complete-information game.\n(There may still be uncertainty about the real world, but all players use the observed signals to infer exactly the same set of possible worlds in which wreal may lie; thus they are playing a complete-information game against each other.)\nThus we have the same results as in Theorems 5.4 and 5.6 more simply, by solving Stage 2 using a (non-Bayesian) Nash-equilibrium finder and solving Stage 1 as before.\nOur results for observable queries are
weaker than for unobservable: in Socratic games with worlds that are strategically zero sum but not constant sum, we find only a correlated equilibrium in the observable case, whereas we find a Nash equilibrium in the unobservable case.\nWe might hope to extend our unobservable-query techniques to observable queries, but there is no obvious way to do so.\nThe fundamental obstacle is that the LP's payoff constraint becomes nonlinear if there is any dependence on the probability that the other player made a particular query.\nThis dependence arises with observable queries, suggesting that observable Socratic games with strategically zero-sum worlds may be harder to solve.\n6.\nRELATED WORK\nOur work was initially motivated by research in the social sciences indicating that real people seem (irrationally) paralyzed when they are presented with additional options.\nIn this section, we briefly review some of these social-science experiments and then discuss technical approaches related to Socratic game theory.\nPrima facie, a rational agent's happiness given an added option can only increase.\nHowever, recent research has found that more choices tend to decrease happiness: for example, students choosing among extra-credit options are more likely to do extra credit if given a small subset of the choices and, moreover, produce higher-quality work [35].\n(See also [19].)\nThe psychology literature explores a number of explanations: people may miscalculate their opportunity cost by comparing their choice to a \"component-wise maximum\" of all other options instead of the single best alternative [65], a new option may draw undue attention to aspects of the other options [67], and so on.\nThe present work explores an economic explanation of this phenomenon: information is not free.\nWhen there are more options, a decision-maker must spend more time to achieve a satisfactory outcome.\nSee, e.g., the work of Skyrms [68] for a philosophical perspective on the role of 
deliberation in strategic situations.\nFinally, we note the connection between Socratic games and modal logic [34], a formalism for the logic of possibility and necessity.\nThe observation that human players typically do not play \"rational\" strategies has inspired some attempts to model \"partially\" rational players.\nThe typical model of this so-called bounded rationality [36, 64, 66] is to postulate bounds on computational power in computing the consequences of a strategy.\nOur work differs from the models of bounded rationality [23, 24, 53, 58] in that instead of putting hard limitations on the computational power of the agents, we restrict their a priori knowledge of the state of the world, requiring them to spend time (and therefore money\/utility) to learn about it.\nPartially observable stochastic games (POSGs) are a general framework used in AI to model situations of multi-agent planning in an evolving, unknown environment, but the generality of POSGs seems to make them very difficult to solve [6].\nRecent work has been done in developing algorithms for restricted classes of POSGs, most notably classes of cooperative POSGs--e.g., [20, 30]--which are very different from the competitive strategically zero-sum games we address in this paper.\nThe fundamental question in Socratic games is deciding on the comparative value of making a more costly but more informative query, or concluding the data-gathering phase and picking the best option, given current information.\nThis tradeoff has been explored in a variety of other contexts; a sampling of these contexts includes aggregating results from delay-prone information sources [8], doing approximate reasoning in intelligent systems [72], deciding when to take the current best guess of disease diagnosis from a belief-propagation network and when to let it continue inference [33], among many others.\nThis issue can also be viewed as another
perspective on the general question of exploration versus exploitation that arises often in AI: when is it better to actively seek additional information instead of exploiting the knowledge one already has?\n(See, e.g., [69].)\nMost of this work differs significantly from our own in that it considers single-agent planning as opposed to the game-theoretic setting.\nA notable exception is the work of Larson and Sandholm [41, 42, 43, 44] on mechanism design for interacting agents whose computation is costly and limited.\nThey present a model in which players must solve a computationally intractable valuation problem, using costly computation to learn some hidden parameters, and results for auctions and bargaining games in this model.\n7.\nFUTURE DIRECTIONS\nEfficiently finding Nash equilibria in Socratic games with non-strategically zero-sum worlds is probably difficult because the existence of such an algorithm for classical games has been shown to be unlikely [10, 11, 13, 16, 17, 27, 54, 55].\nThere has, however, been some algorithmic success in finding Nash equilibria in restricted classical settings (e.g., [21, 46, 47, 57]); we might hope to extend our results to analogous Socratic games.\nAn efficient algorithm to find correlated equilibria in general Socratic games seems more attainable.\nSuppose the players receive recommended queries and responses.\nThe difficulty is that when a player considers a deviation from his recommended query, he already knows his recommended response in each of the Stage-2 games.\nIn a correlated equilibrium, a player's expected payoff generally depends on his recommended strategy, and thus a player may deviate in Stage 1 so as to land in a Stage-2 game where he has been given a \"better than average\" recommended response.\n(Socratic games are \"succinct games of superpolynomial type,\" so Papadimitriou's results [56] do not imply correlated equilibria for them.)\nSocratic games can be extended to allow players to make adaptive 
queries, choosing subsequent queries based on previous results.\nOur techniques carry over to O(1) rounds of unobservable queries, but it would be interesting to compute equilibria in Socratic games with adaptive observable queries or with \u03c9(1) rounds of unobservable queries.\nSpecial cases of adaptive Socratic games are closely related to single-agent problems like minimum latency [1, 7, 26], determining strategies for using priced information [9, 29, 37], and an online version of minimum test cover [18, 50].\nAlthough there are important technical distinctions between adaptive Socratic games and these problems, approximation techniques from this literature may apply to Socratic games.\nThe question of approximation raises interesting questions even in non-adaptive Socratic games.\nAn \u03b5-approximate Nash equilibrium is a strategy profile \u03b1 such that no player can increase her payoff by an additive \u03b5 by deviating from \u03b1.\nFinding approximate Nash equilibria in both adaptive and non-adaptive Socratic games is an interesting direction to pursue.\nAnother natural extension is the model where query results are stochastic.\nIn this paper, we model a query as deterministically partitioning the possible worlds into subsets that the query cannot distinguish.\nHowever, one could instead model a query as probabilistically mapping the set of possible worlds into the set of signals.\nWith this modification, our unobservable-query model becomes equivalent to the model of Bergemann and V\u00e4lim\u00e4ki [4, 5], in which the result of a query is a posterior distribution over the worlds.\nOur techniques allow us to compute equilibria in such a \"stochastic-query\" model provided that each query is represented as a table that, for each world\/signal pair, lists the probability that the query outputs that signal in that world.\nIt is also interesting to consider settings in which the game's queries are specified by a compact representation of the relevant
probability distributions.\n(For example, one might consider a setting in which the algorithm has only a sampling oracle for the posterior distributions envisioned by Bergemann and V\u00e4lim\u00e4ki.)\nEfficiently finding equilibria in such settings remains an open problem.\nAnother interesting setting for Socratic games is when the set Q of available queries is given by Q = P(\u0393)--i.e., each player chooses to make a set q \u2208 P(\u0393) of queries from a specified groundset \u0393 of queries.\nHere we take the query cost to be a linear function, so that \u03b4(q) = \u03a3_{\u03b3 \u2208 q} \u03b4({\u03b3}).\nNatural groundsets include comparison queries (\"if my opponent is playing strategy aII, would I prefer to play aI or \u00e2I?\"), strategy queries (\"what is my vector of payoffs if I play strategy aI?\"), and world-identity queries (\"is the world w \u2208 W the real world?\").\nWhen one can infer a polynomial bound on the number of queries made by a rational player, then our results yield efficient solutions.\n(For example, we can efficiently solve games in which every groundset element \u03b3 \u2208 \u0393 has \u03b4({\u03b3}) = \u03a9(M_max \u2212 M_min), where M_max and M_min denote the maximum and minimum payoffs to any player in any world.)\nConversely, it is NP-hard to compute a Nash equilibrium for such a game when every \u03b4({\u03b3}) < 1\/|W|\u00b2, even when the worlds are constant sum and Player II has only a single available strategy.\nThus even computing a best response for Player I is hard.\n(This proof proceeds by reduction from set cover; intuitively, for sufficiently low query costs, Player I must fully identify the actual world through his queries.\nSelecting a minimum-sized set of these queries is hard.)\nComputing Player I's best response can be viewed as maximizing a submodular function, and thus a best response can be (1 \u2212 1\/e) \u2248 0.63-approximated greedily [14].\nAn interesting open question is whether this approximate best-response calculation
can be leveraged to find an approximate Nash equilibrium.","keyphrases":["game theori","socrat game","priori probabl distribut","constant-sum game","algorithm","game-either full omnisci knowledg","questionand-answer session","nash equilibrium","unobserv-queri model","miss inform","auction","arbitrari partial inform","strateg multiplay environ","observ-queri model","inform acquisit","correl equilibrium"],"prmu":["P","P","P","P","P","M","M","M","M","M","U","M","M","M","M","M"]} {"id":"I-73","title":"Exchanging Reputation Values among Heterogeneous Agent Reputation Models: An Experience on ART Testbed","abstract":"In open MAS it is often a problem to achieve agents' interoperability. The heterogeneity of its components turns the establishment of interaction or cooperation among them into a non trivial task, since agents may use different internal models and the decision about trust other agents is a crucial condition to the formation of agents' cooperation. In this paper we propose the use of an ontology to deal with this issue. We experiment this idea by enhancing the ART reputation model with semantic data obtained from this ontology. This data is used during interaction among heterogeneous agents when exchanging reputation values and may be used for agents that use different reputation models.","lvl-1":"Exchanging Reputation Values among Heterogeneous Agent Reputation Models: An Experience on ART Testbed Anarosa A. F. Brand\u00e3o1 , Laurent Vercouter2 , Sara Casare1 and Jaime Sichman1 1 Laborat\u00f3rio de T\u00e9cnicas Inteligentes - EP\/USP Av.\nProf. 
Luciano Gualberto, 158, trav.\n3, 05508-970, S\u00e3o Paulo - Brazil +55\u00a011\u00a03091\u00a05397 anarosabrandao@gmail.com, {sara.casare,jaime.sichman}@poli.usp.br 2 Ecole Nationale Sup\u00e9rieure des Mines de Saint-Etienne 158, cours Fauriel, 42023 Saint-Etienne Cedex 2, France Laurent.Vercouter@emse.fr ABSTRACT In open MAS it is often a problem to achieve agents' interoperability.\nThe heterogeneity of its components turns the establishment of interaction or cooperation among them into a non-trivial task, since agents may use different internal models and the decision about trusting other agents is a crucial condition for the formation of agents' cooperation.\nIn this paper we propose the use of an ontology to deal with this issue.\nWe experiment with this idea by enhancing the ART reputation model with semantic data obtained from this ontology.\nThis data is used during interaction among heterogeneous agents when exchanging reputation values and may be used by agents that use different reputation models.\nCategories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent systems General Terms Design, Experimentation, Standardization.\n1.\nINTRODUCTION Open multiagent systems (MAS) are composed of autonomous distributed agents that may enter and leave the agent society at their will, because open systems have no centralized control over the development of their parts [1].\nSince agents are considered as
agents' cooperation.\nThe trust decision processes use the concept of reputation as the basis of a decision.\nReputation is a subject that has been studied in several works [4][5][8][9] with different approaches, but also with different semantics attached to the reputation concept.\nCasare and Sichman [2][3] proposed a Functional Ontology of Reputation (FORe) and some directions about how it could be used to allow the interoperability among different agent reputation models.\nThis paper describes how the FORe can be applied to allow interoperability among agents that have different reputation models.\nAn outline of this approach is sketched in the context of a testbed for the experimentation and comparison of trust models, the ART testbed [6].\n2.\nTHE FUNCTIONAL ONTOLOGY OF REPUTATION (FORe) In the last years several computational models of reputation have been proposed [7][10][13][14].\nAs an example of research produced in the MAS field we refer to three of them: a cognitive reputation model [5], a typology of reputation [7] and the reputation model used in the ReGret system [9][10].\nEach model includes its own specific concepts that may not exist in other models, or exist with a different name.\nFor instance, Image and Reputation are two central concepts in the cognitive reputation model.\nThese concepts do not exist in the typology of reputation or in the ReGret model.\nIn the typology of reputation, we can find some similar concepts such as direct reputation and indirect reputation but there are some slight semantic differences.\nIn the same way, the ReGret model includes four kinds of reputation (direct, witness, neighborhood and system) that overlap with the concepts of other models but that are not exactly the same.\nThe Functional Ontology of Reputation (FORe) was defined as a common semantic basis that subsumes the concepts of the main reputation models.\nThe FORe includes, as its kernel, the following concepts: reputation nature, roles involved in 
reputation formation and propagation, information sources for reputation, evaluation of reputation, and reputation maintenance.\nThe ontology concept ReputationNature is composed of concepts such as IndividualReputation, GroupReputation and ProductReputation.\nReputation formation and propagation involves several roles, played by the entities or agents that participate in those processes.\nThe ontology defines the concepts ReputationProcess and ReputationRole.\nMoreover, reputation can be classified according to the origin of beliefs and opinions that can derive from several sources.\nThe ontology defines the concept ReputationType which can be PrimaryReputation or SecondaryReputation.\nPrimaryReputation is composed of concepts ObservedReputation and DirectReputation and the concept SecondaryReputation is composed of concepts such as PropagatedReputation and CollectiveReputation.\nMore details about the FORe can be found on [2][3].\n3.\nMAPPING THE AGENT REPUTATION MODELS TO THE FORe Visser et al [12] suggest three different ways to support semantic integration of different sources of information: a centralized approach, where each source of information is related to one common domain ontology; a decentralized approach, where every source of information is related to its own ontology; and a hybrid approach, where every source of information has its own ontology and the vocabulary of these ontologies are related to a common ontology.\nThis latter organizes the common global vocabulary in order to support the source ontologies comparison.\nCasare and Sichman [3] used the hybrid approach to show that the FORe serves as a common ontology for several reputation models.\nTherefore, considering the ontologies which describe the agent reputation models we can define a mapping between these ontologies and the FORe whenever the ontologies use a common vocabulary.\nAlso, the information concerning the mappings between the agent reputation models and the FORe can be directly 
inferred by simply classifying the ontology resulting from the integration of a given reputation model ontology and the FORe in an ontology tool with a reasoning engine.\nFor instance, a mapping between the Cognitive Reputation Model ontology and the FORe relates the concepts Image and Reputation to PrimaryReputation and SecondaryReputation from FORe, respectively.\nAlso, a mapping between the Typology of Reputation and the FORe relates the concepts Direct Reputation and Indirect Reputation to PrimaryReputation and SecondaryReputation from FORe, respectively.\nNevertheless, the concepts Direct Trust and Witness Reputation from the ReGret System Reputation Model are mapped to PrimaryReputation and PropagatedReputation from FORe.\nSince PropagatedReputation is a sub-concept of SecondaryReputation, it can be inferred that Witness Reputation is also mapped to SecondaryReputation.\n4.\nEXPERIMENTAL SCENARIOS USING THE ART TESTBED To exemplify the use of the mappings from the last section, we define a scenario where several agents are implemented using different agent reputation models.\nThis scenario includes the agents' interaction during the simulation of the game defined by ART [6] in order to describe the ways interoperability is possible between different trust models using the FORe.\n4.1 The ART testbed The ART testbed provides a simulation engine on which several agents, using different trust models, may run.\nThe simulation consists of a game where the agents have to decide whether or not to trust other agents.\nThe game's domain is art appraisal, in which agents are required to evaluate the value of paintings based on information exchanged with other agents during agents' interaction.\nThe information can be an opinion transaction, when an agent asks other agents to help it in its evaluation of a painting; or a reputation transaction, when the information required is about the reputation of another agent (a target) for a given era.\nMore details about the ART testbed can be
found in [6].\nThe ART common reputation model was enhanced with semantic data obtained from FORe.\nA general agent architecture for interoperability was defined [11] to allow agents to reason about the information received from reputation interactions.\nThis architecture contains two main modules: the Reputation Mapping Module (RMM), which is responsible for mapping concepts between an agent reputation model and FORe; and the Reputation Reasoning Module (RRM), which is responsible for dealing with information about reputation according to the agent reputation model.\n4.2 Reputation transaction scenarios While adding the FORe to the ART common reputation model, we have extended it to allow richer interactions that involve reputation transactions.\nIn this section we describe scenarios concerning reputation transactions in the context of the ART testbed; the first is valid for any kind of reputation transaction and the second is specific to the ART domain.\n4.2.1 General scenario Suppose that agents A, B and C are implemented according to the aforementioned general agent architecture with the enhanced ART common reputation model, using different reputation models.\nAgent A uses the Typology of Reputation model, agent B uses the Cognitive Reputation Model and agent C uses the ReGret System model.\nConsider the interaction about reputation where agents A and B receive from agent C information about the reputation of agent Y.\nA big picture of this interaction is shown in Figure 1.\n[Figure 1 schematic: agent C holds (Y, value=0.8, witness reputation) in its ReGret ontology and sends (Y, value=0.8, PropagatedReputation) to agents A and B; agent A records it as (Y, value=0.8, propagated reputation) in its Typology ontology, and agent B records it as (Y, value=0.8, reputation) in its Cognitive Model ontology.]\nFigure 1.\nInteraction about reputation The information witness reputation from agent C is treated by its RMM and is sent as PropagatedReputation to both agents.\nThe corresponding information in agent A's reputation model is propagated reputation and in agent B's reputation model is reputation.\nThe way agents A and B make use of the information depends on their internal reputation model and their RRM implementation.\n1048 The Sixth Intl.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 4.2.2 ART scenario Considering the same agents A and B and the art appraisal domain of ART, another interesting scenario describes the following situation: agent A asks agent B for information about agents it knows that have skill on some specific painting era.\nIn this case agent A wants information concerning the direct reputation agent B has about agents that have skill on a specific era, such as cubism.\nFollowing the same steps as the previous scenario, agent A's message is prepared in its RRM using information from its internal model.\nA big picture of this interaction is shown in Figure 2.\n[Figure 2 schematic: agent A's query (agent = ?, value = ?, skill = cubism, reputation = directreputation) from its Typology ontology is sent as (agent = ?, value = ?, skill = cubism, reputation = PrimaryReputation), which agent B receives as (agent = ?, value = ?, skill = cubism, reputation = image) in its Cognitive Model ontology.]\nFigure 2.\nInteraction about specific types of reputation values Agent B's response to agent A is processed in its RRM and is composed of tuples (agent, value, cubism, image), where the pairs (agent, value) comprise all agents, with their associated reputation values, whose expertise about cubism agent B knows through its own opinion.\nThis response is forwarded to the RMM in order to be translated into the enriched common model and sent to agent A.\nAfter receiving the information sent by agent B, agent A processes it in its RMM and translates it into its own reputation model to be analyzed by its RRM.\n5.\nCONCLUSION In this paper we present a proposal for reducing the incompatibility between reputation models by using a general agent architecture for reputation interaction which relies on a functional ontology of reputation (FORe), used as a globally shared reputation model.\nA reputation mapping module allows agents to translate information from their internal reputation model into the shared model and vice versa.\nThe ART testbed has been enriched to use the ontology during agent transactions.\nSome scenarios were described to illustrate our proposal and they seem to be a promising way to improve the process of building reputation using only existing technologies.\n6.\nACKNOWLEDGMENTS Anarosa A. F. Brand\u00e3o is supported by CNPq\/Brazil grant 310087\/2006-6 and Jaime Sichman is partially supported by CNPq\/Brazil grants 304605\/2004-2, 482019\/2004-2 and 506881\/2004-1.\nLaurent Vercouter was partially supported by FAPESP grant 2005\/02902-5.\n7.\nREFERENCES [1] Agha, G. A. Abstracting Interaction Patterns: A Programming Paradigm for Open Distributed Systems, In (Eds) E. Najm and J.-B.\nStefani, Formal Methods for Open Object-based Distributed Systems IFIP Transactions, 1997, Chapman Hall.\n[2] Casare, S. and Sichman, J.S.
Towards a Functional Ontology of Reputation, In Proc of the 4th Intl Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS``05), Utrecht, The Netherlands, 2005, v.2, pp. 505-511.\n[3] Casare, S. and Sichman, J.S. Using a Functional Ontology of Reputation to Interoperate Different Agent Reputation Models, Journal of the Brazilian Computer Society, (2005), 11(2), pp. 79-94.\n[4] Castelfranchi, C. and Falcone, R. Principles of trust in MAS: cognitive anatomy, social importance and quantification.\nIn Proceedings of ICMAS``98, Paris, 1998, pp. 72-79.\n[5] Conte, R. and Paolucci, M. Reputation in Artificial Societies: Social Beliefs for Social Order, Kluwer Publ., 2002.\n[6] Fullam, K.; Klos, T.; Muller, G.; Sabater, J.; Topol, Z.; Barber, S.;Rosenchein, J.; Vercouter, L. and Voss, M.\nA specification of the agent reputation and trust (art) testbed: experimentation and competition for trust in agent societies.\nIn Proc.\nof the 4th Intl..\nJoint Conf on Autonomous Agents and Multiagent Systems (AAMAS``05), ACM, 2005, 512-158.\n[7] Mui, L.; Halberstadt, A.; Mohtashemi, M. Notions of Reputation in Multi-Agents Systems: A Review.\nIn: Proc of 1st Intl..\nJoint Conf.\non Autonomous Agents and Multi-agent Systems (AAMAS 2002), Bologna, Italy, 2002, 1, 280-287.\n[8] Muller, G. and Vercouter, L. Decentralized monitoring of agent communication with a reputation model.\nIn Trusting Agents for Trusting Electronic Societies, LNCS 3577, 2005, pp. 144-161.\n[9] Sabater, J. and Sierra, C. ReGret: Reputation in gregarious societies.\nIn M\u00fcller, J. et al (Eds) Proc.\nof the 5th Intl..\nConf.\non Autonomous Agents, Canada, 2001, ACM, 194-195.\n[10] Sabater, J. and Sierra, C. Review on Computational Trust and Reputation Models.\nIn: Artificial Intelligence Review, Kluwer Acad.\nPubl., (2005), v. 24, n. 1, pp. 33 - 60.\n[11] Vercouter,L, Casare, S., Sichman, J. 
and Brand\u00e3o, A.\nAn experience on reputation models interoperability based on a functional ontology In Proc.\nof the 20th IJCAI, Hyderabad, India, 2007, pp.617-622.\n[12] Visser, U.; Stuckenschmidt, H.; Wache, H. and Vogele, T. Enabling technologies for inter-operability.\nIn: In U. Visser and H. Pundt, Eds, Workshop on the 14th Intl Symp.\nof Computer Science for Environmental Protection, Bonn, Germany, 2000, pp. 35-46.\n[13] Yu, B. and Singh, M.P..\nAn Evidential Model of Distributed Reputation Management.\nIn: Proc.\nof the 1st Intl Joint Conf.\non Autonomous Agents and Multi-agent Systems (AAMAS 2002), Bologna, Italy, 2002, part 1, pp. 294 - 301.\n[14] Zacharia, G. and Maes, P. Trust Management Through Reputation Mechanisms.\nIn: Applied Artificial Intelligence, 14(9), 2000, pp. 881-907.\nThe Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1049","lvl-3":"Exchanging Reputation Values among Heterogeneous Agent Reputation Models: An Experience on ART Testbed\nABSTRACT\nIn open MAS it is often a problem to achieve agents' interoperability.\nThe heterogeneity of its components turns the establishment of interaction or cooperation among them into a non trivial task, since agents may use different internal models and the decision about trust other agents is a crucial condition to the formation of agents' cooperation.\nIn this paper we propose the use of an ontology to deal with this issue.\nWe experiment this idea by enhancing the ART reputation model with semantic data obtained from this ontology.\nThis data is used during interaction among heterogeneous agents when exchanging reputation values and may be used for agents that use different reputation models.\n1.\nINTRODUCTION\nOpen multiagent systems (MAS) are composed of autonomous distributed agents that may enter and leave the agent society at their will because open systems have no centralized control over the development of its parts [1].\nSince agents are considered as 
autonomous entities, we cannot assume that there is a way to control their internal behavior.\nThese features are interesting to obtain flexible and adaptive systems but they also create new risks about the reliability and the robustness of the system.\nSolutions to this problem have been proposed by the way of trust models where agents are endowed with a model of other agents that allows them to decide if they can or cannot trust another agent.\nSuch trust decision is very important because it is an essential condition to the formation of agents' cooperation.\nThe trust decision processes use the concept of reputation as the basis of a decision.\nReputation is a subject that has been studied in several works [4] [5] [8] [9] with different approaches, but also with different semantics attached to the reputation concept.\nCasare and Sichman [2] [3] proposed a Functional Ontology of Reputation (FORe) and some directions about how it could be used to allow the interoperability among different agent reputation models.\nThis paper describes how the FORe can be applied to allow interoperability among agents that have different reputation models.\nAn outline of this approach is sketched in the context of a testbed for the experimentation and comparison of trust models, the ART testbed [6].\n2.\nTHE FUNCTIONAL ONTOLOGY OF REPUTATION (FORe)\n3.\nMAPPING THE AGENT REPUTATION MODELS TO THE FORe\n4.\nEXPERIMENTAL SCENARIOS USING THE ART TESTBED\n4.1 The ART testbed\n4.2 Reputation transaction scenarios\n4.2.1 General
scenario\n4.2.2 ART scenario\n5.\nCONCLUSION\nIn this paper we present a proposal for reducing the incompatibility between reputation models by using a general agent architecture for reputation interaction which relies on a functional ontology of reputation (FORe), used as a globally shared reputation model.\nA reputation mapping module allows agents to translate information from their internal reputation model into the shared model and vice versa.\nThe ART testbed has been enriched to use the ontology during agent transactions.\nSome scenarios were described to illustrate our proposal, and they seem to be a promising way to improve the process of building reputation using existing technologies.","lvl-4":"Exchanging Reputation Values among Heterogeneous Agent Reputation Models: An Experience on ART Testbed\nABSTRACT\nIn open MAS it is often a problem to achieve agents' interoperability.\nThe heterogeneity of their components turns the establishment of interaction or cooperation among them into a non-trivial task, since agents may use different internal models, and the decision about trusting other agents is a crucial condition for the formation of agents' cooperation.\nIn this paper we propose the use of an ontology to deal with this issue.\nWe experiment with this idea by enhancing the ART reputation model with semantic data obtained from this ontology.\nThis data is used during interaction among heterogeneous agents when exchanging reputation values and may be used by agents that use different reputation models.\n1.\nINTRODUCTION\nOpen multiagent systems (MAS) are composed of autonomous distributed agents that may enter and leave the agent society at will, because open systems have no centralized control over the development of their parts [1].\nSince agents are considered as autonomous entities, we cannot assume that there is a way to control their internal
behavior.\nSolutions to this problem have been proposed by way of trust models, where agents are endowed with a model of other agents that allows them to decide whether or not they can trust another agent.\nSuch a trust decision is an essential condition for the formation of agents' cooperation.\nTrust decision processes use the concept of reputation as the basis of a decision.\nReputation is a subject that has been studied in several works [4] [5] [8] [9] with different approaches, but also with different semantics attached to the reputation concept.\nCasare and Sichman [2] [3] proposed a Functional Ontology of Reputation (FORe) and some directions about how it could be used to allow interoperability among different agent reputation models.\nThis paper describes how the FORe can be applied to allow interoperability among agents that have different reputation models.\nAn outline of this approach is sketched in the context of a testbed for the experimentation and comparison of trust models, the ART testbed [6].\n5.\nCONCLUSION\nIn this paper we present a proposal for reducing the incompatibility between reputation models by using a general agent architecture for reputation interaction which relies on a functional ontology of reputation (FORe), used as a globally shared reputation model.\nA reputation mapping module allows agents to translate information from their internal reputation model into the shared model and vice versa.\nThe ART testbed has been enriched to use the ontology during agent transactions.\nSome scenarios were described to illustrate our proposal, and they seem to be a promising way to improve the process of building reputation using existing technologies.","lvl-2":"Exchanging Reputation Values among Heterogeneous Agent Reputation Models: An Experience on ART Testbed\nABSTRACT\nIn open MAS it is often a problem to achieve agents' interoperability.\nThe heterogeneity of their components turns the establishment of interaction or cooperation among them into a non-trivial task,
since agents may use different internal models, and the decision about trusting other agents is a crucial condition for the formation of agents' cooperation.\nIn this paper we propose the use of an ontology to deal with this issue.\nWe experiment with this idea by enhancing the ART reputation model with semantic data obtained from this ontology.\nThis data is used during interaction among heterogeneous agents when exchanging reputation values and may be used by agents that use different reputation models.\n1.\nINTRODUCTION\nOpen multiagent systems (MAS) are composed of autonomous distributed agents that may enter and leave the agent society at will, because open systems have no centralized control over the development of their parts [1].\nSince agents are considered as autonomous entities, we cannot assume that there is a way to control their internal behavior.\nThese features are interesting for obtaining flexible and adaptive systems, but they also create new risks regarding the reliability and robustness of the system.\nSolutions to this problem have been proposed by way of trust models, where agents are endowed with a model of other agents that allows them to decide whether or not they can trust another agent.\nSuch a trust decision is very important because it is an essential condition for the formation of agents' cooperation.\nTrust decision processes use the concept of reputation as the basis of a decision.\nReputation is a subject that has been studied in several works [4] [5] [8] [9] with different approaches, but also with different semantics attached to the
reputation concept.\nCasare and Sichman [2] [3] proposed a Functional Ontology of Reputation (FORe) and some directions about how it could be used to allow interoperability among different agent reputation models.\nThis paper describes how the FORe can be applied to allow interoperability among agents that have different reputation models.\nAn outline of this approach is sketched in the context of a testbed for the experimentation and comparison of trust models, the ART testbed [6].\n2.\nTHE FUNCTIONAL ONTOLOGY OF REPUTATION (FORe)\nIn recent years several computational models of reputation have been proposed [7] [10] [13] [14].\nAs examples of research produced in the MAS field, we refer to three of them: a cognitive reputation model [5], a typology of reputation [7] and the reputation model used in the ReGret system [9] [10].\nEach model includes its own specific concepts that may not exist in other models, or that exist with a different name.\nFor instance, Image and Reputation are two central concepts in the cognitive reputation model.\nThese concepts do not exist in the typology of reputation or in the ReGret model.\nIn the typology of reputation, we can find some similar concepts, such as direct reputation and indirect reputation, but there are some slight semantic differences.\nIn the same way, the ReGret model includes four kinds of reputation (direct, witness, neighborhood and system) that overlap with the concepts of other models but that are not exactly the same.\nThe Functional Ontology of Reputation (FORe) was defined as a common semantic basis that subsumes the concepts of the main reputation models.\nThe FORe includes, as its kernel, the following concepts: reputation nature, roles involved in reputation formation and propagation, information sources for reputation, evaluation of reputation, and reputation maintenance.\nThe ontology concept ReputationNature is composed of concepts such as IndividualReputation, GroupReputation and
ProductReputation.\nReputation formation and propagation involve several roles, played by the entities or agents that participate in those processes.\nThe ontology defines the concepts ReputationProcess and ReputationRole.\nMoreover, reputation can be classified according to the origin of the beliefs and opinions, which can derive from several sources.\nThe ontology defines the concept ReputationType, which can be PrimaryReputation or SecondaryReputation.\nPrimaryReputation is composed of the concepts ObservedReputation and DirectReputation, and the concept SecondaryReputation is composed of concepts such as PropagatedReputation and CollectiveReputation.\nMore details about the FORe can be found in [2] [3].\n3.\nMAPPING THE AGENT REPUTATION MODELS TO THE FORe\nVisser et al. [12] suggest three different ways to support semantic integration of different sources of information: a centralized approach, where each source of information is related to one common domain ontology; a decentralized approach, where every source of information is related to its own ontology; and a hybrid approach, where every source of information has its own ontology and the vocabulary of these ontologies is related to a common ontology.\nThe latter organizes the common global vocabulary in order to support comparison of the source ontologies.\nCasare and Sichman [3] used the hybrid approach to show that the FORe serves as a common ontology for several reputation models.\nTherefore, considering the ontologies which describe the agent reputation models, we can define a mapping between these ontologies and the FORe whenever the ontologies use a common vocabulary.\nAlso, the information concerning the mappings between the agent reputation models and the FORe can be directly inferred by simply classifying the ontology resulting from the integration of a given reputation model ontology and the FORe in an ontology tool with a reasoning engine.\nFor instance, a mapping
between the Cognitive Reputation Model ontology and the FORe relates the concepts Image and Reputation to PrimaryReputation and SecondaryReputation from FORe, respectively.\nAlso, a mapping between the Typology of Reputation and the FORe relates the concepts Direct Reputation and Indirect Reputation to PrimaryReputation and SecondaryReputation from FORe, respectively.\nSimilarly, the concepts Direct Trust and Witness Reputation from the ReGret System Reputation Model are mapped to PrimaryReputation and PropagatedReputation from FORe.\nSince PropagatedReputation is a sub-concept of SecondaryReputation, it can be inferred that Witness Reputation is also mapped to SecondaryReputation.\n4.\nEXPERIMENTAL SCENARIOS USING THE ART TESTBED\nTo exemplify the use of the mappings from the last section, we define a scenario where several agents are implemented using different agent reputation models.\nThis scenario includes the agents' interaction during the simulation of the game defined by ART [6] in order to describe the ways interoperability is possible between different trust models using the FORe.\n4.1 The ART testbed\nThe ART testbed provides a simulation engine on which several agents, using different trust models, may run.\nThe simulation consists of a game where the agents have to decide whether or not to trust other agents.\nThe game's domain is art appraisal, in which agents are required to evaluate the value of paintings based on information exchanged with other agents during the agents' interaction.\nThe information can be an opinion transaction, when an agent asks other agents to help it in its evaluation of a painting; or a reputation transaction, when the information required is about the reputation of another agent (a target) for a given era.\nMore details about the ART testbed can be found in [6].\nThe ART common reputation model was enhanced with semantic data obtained from FORe.\nA general agent architecture for interoperability was defined [11] to allow agents to reason
about the information received from reputation interactions.\nThis architecture contains two main modules: the Reputation Mapping Module (RMM), which is responsible for mapping concepts between an agent reputation model and FORe; and the Reputation Reasoning Module (RRM), which is responsible for dealing with information about reputation according to the agent reputation model.\n4.2 Reputation transaction scenarios\nWhile adding the FORe to the ART common reputation model, we extended it to allow richer interactions that involve reputation transactions.\nIn this section we describe scenarios concerning reputation transactions in the context of the ART testbed; the first is valid for any kind of reputation transaction and the second is specific to the ART domain.\n4.2.1 General scenario\nSuppose that agents A, B and C are implemented according to the aforementioned general agent architecture with the enhanced ART common reputation model, using different reputation models.\nAgent A uses the Typology of Reputation model, agent B uses the Cognitive Reputation Model and agent C uses the ReGret System model.\nConsider the interaction about reputation where agents A and B receive from agent C information about the reputation of agent Y.\nA big picture of this interaction is shown in Figure 1.\nFigure 1.\nInteraction about reputation\nThe witness reputation information from agent C is treated by its RMM and is sent as PropagatedReputation to both agents.\nThe corresponding information in agent A's reputation model is propagated reputation and in agent B's reputation model is reputation.\nThe way agents A and B make use of the information depends on their internal reputation model and their RRM implementation.\n4.2.2 ART scenario\nConsidering the same agents A and B and the art appraisal domain of ART, another interesting scenario describes the following situation: agent A asks agent B for information about agents it knows that have skill in some specific painting era.\nIn this case agent A wants information concerning the direct reputation agent B has about agents that have skill in a specific era, such as cubism.\nFollowing the same steps as the previous scenario, agent A's message is prepared in its RRM using information from its internal model.\nA big picture of this interaction is shown in Figure 2.\nFigure 2.\nInteraction about specific types of reputation values\nAgent B's response to agent A is processed in its RRM and is composed of tuples (agent, value, cubism, image), where each pair (agent, value) consists of an agent and the associated reputation value, covering all agents whose expertise about cubism agent B knows from its own opinion.\nThis response is forwarded to the RMM in order to be translated into the enriched common model and sent to agent A.\nAfter receiving the information sent by agent B, agent A processes it in its RMM and translates it into its own reputation model to be analyzed by its RRM.\n5.\nCONCLUSION\nIn this paper we present a proposal for reducing the incompatibility between reputation models by using a general agent architecture for reputation interaction which relies on a functional ontology of reputation (FORe), used as a globally shared reputation model.\nA reputation mapping module allows agents to translate information from their internal reputation model into the shared model and vice versa.\nThe ART testbed has been enriched to use the ontology during agent transactions.\nSome scenarios were described to illustrate our proposal, and they seem to be a promising way to improve the process of building reputation using existing technologies.","keyphrases":["reput
valu","reput","heterogen agent","reput model","art testb","art testb","interoper","trust","ontolog","multiag system","autonom distribut agent","reput format","agent architectur","function ontolog of reput"],"prmu":["P","P","P","P","P","P","P","P","P","U","M","R","M","M"]} {"id":"I-66","title":"Letting loose a SPIDER on a network of POMDPs: Generating quality guaranteed policies","abstract":"Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the significant complexity of solving distributed POMDPs, particularly as we scale up the numbers of agents, one popular approach has focused on approximate solutions. Though this approach is efficient, the algorithms within this approach do not provide any guarantees on solution quality. A second less popular approach focuses on global optimality, but typical results are available only for two agents, and also at considerable computational cost. This paper overcomes the limitations of both these approaches by providing SPIDER, a novel combination of three key features for policy generation in distributed POMDPs: (i) it exploits agent interaction structure given a network of agents (i.e. allowing easier scale-up to larger number of agents); (ii) it uses a combination of heuristics to speedup policy search; and (iii) it allows quality guaranteed approximations, allowing a systematic tradeoff of solution quality for time. Experimental results show orders of magnitude improvement in performance when compared with previous global optimal algorithms.","lvl-1":"Letting loose a SPIDER on a network of POMDPs: Generating quality guaranteed policies Pradeep Varakantham, Janusz Marecki, Yuichi Yabu\u2217, Milind Tambe, Makoto Yokoo\u2217 University of Southern California, Los Angeles, CA 90089, {varakant, marecki, tambe}@usc.edu \u2217 Dept.
of Intelligent Systems, Kyushu University, Fukuoka, 812-8581 Japan, yokoo@is.kyushu-u.ac.jp ABSTRACT Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains.\nGiven the significant complexity of solving distributed POMDPs, particularly as we scale up the numbers of agents, one popular approach has focused on approximate solutions.\nThough this approach is efficient, the algorithms within this approach do not provide any guarantees on solution quality.\nA second less popular approach focuses on global optimality, but typical results are available only for two agents, and also at considerable computational cost.\nThis paper overcomes the limitations of both these approaches by providing SPIDER, a novel combination of three key features for policy generation in distributed POMDPs: (i) it exploits agent interaction structure given a network of agents (i.e. allowing easier scale-up to larger number of agents); (ii) it uses a combination of heuristics to speedup policy search; and (iii) it allows quality guaranteed approximations, allowing a systematic tradeoff of solution quality for time.\nExperimental results show orders of magnitude improvement in performance when compared with previous global optimal algorithms.\nCategories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial IntelligenceMulti-agent Systems General Terms Algorithms, Theory 1.\nINTRODUCTION Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling sequential decision making in teams operating under uncertainty [9, 4, 1, 2, 13].\nThe uncertainty arises on account of nondeterminism in the outcomes of actions and because the world state may only be partially (or incorrectly) observable.\nUnfortunately, as shown by Bernstein et al. 
[3], the problem of finding the optimal joint policy for general distributed POMDPs is NEXP-Complete.\nResearchers have attempted two different types of approaches towards solving these models.\nThe first category consists of highly efficient approximate techniques, that may not reach globally optimal solutions [2, 9, 11].\nThe key problem with these techniques has been their inability to provide any guarantees on the quality of the solution.\nIn contrast, the second less popular category of approaches has focused on a global optimal result [13, 5, 10].\nThough these approaches obtain optimal solutions, they typically consider only two agents.\nFurthermore, they fail to exploit structure in the interactions of the agents and hence are severely hampered with respect to scalability when considering more than two agents.\nTo address these problems with the existing approaches, we propose approximate techniques that provide guarantees on the quality of the solution while focussing on a network of more than two agents.\nWe first propose the basic SPIDER (Search for Policies In Distributed EnviRonments) algorithm.\nThere are two key novel features in SPIDER: (i) it is a branch and bound heuristic search technique that uses a MDP-based heuristic function to search for an optimal joint policy; (ii) it exploits network structure of agents by organizing agents into a Depth First Search (DFS) pseudo tree and takes advantage of the independence in the different branches of the DFS tree.\nWe then provide three enhancements to improve the efficiency of the basic SPIDER algorithm while providing guarantees on the quality of the solution.\nThe first enhancement uses abstractions for speedup, but does not sacrifice solution quality.\nIn particular, it initially performs branch and bound search on abstract policies and then extends to complete policies.\nThe second enhancement obtains speedups by sacrificing solution quality, but within an input parameter that provides the tolerable 
expected value difference from the optimal solution.\nThe third enhancement is again based on bounding the search for efficiency, though with a tolerance parameter that is provided as a percentage of the optimal.\nWe experimented with the sensor network domain presented in Nair et al. [10], a domain representative of an important class of problems with networks of agents working in uncertain environments.\nIn our experiments, we illustrate that SPIDER dominates an existing global optimal approach called GOA [10], the only known global optimal algorithm with demonstrated experimental results for more than two agents.\nFurthermore, we demonstrate that abstraction improves the performance of SPIDER significantly (while providing optimal solutions).\nWe finally demonstrate a key feature of SPIDER: by utilizing the approximation enhancements it enables principled tradeoffs in run-time versus solution quality.\n822 978-81-904262-7-5 (RPS) \u00a9 2007 IFAAMAS 2.\nDOMAIN: DISTRIBUTED SENSOR NETS Distributed sensor networks are a large, important class of domains that motivate our work.\nThis paper focuses on a set of target tracking problems that arise in certain types of sensor networks [6] first introduced in [10].\nFigure 1 shows a specific problem instance within this type consisting of three sensors.\nHere, each sensor node can scan in one of four directions: North, South, East or West (see Figure 1).\nTo track a target and obtain the associated reward, two sensors with overlapping scanning areas must coordinate by scanning the same area simultaneously.\nIn Figure 1, to track a target in Loc11, sensor1 needs to scan 'East' and sensor2 needs to scan 'West' simultaneously.\nThus, sensors have to act in a coordinated fashion.\nWe assume that there are two independent targets and that each target's movement is uncertain and unaffected by the sensor agents.\nBased on the area it is scanning, each sensor receives observations that can have false positives and false negatives.\nThe
sensors' observations and transitions are independent of each other's actions, e.g., the observations that sensor1 receives are independent of sensor2's actions.\nEach agent incurs a cost for scanning whether the target is present or not, but no cost if it turns off.\nGiven the sensors' observational uncertainty, the targets' uncertain transitions and the distributed nature of the sensor nodes, these sensor nets provide a useful domain for applying distributed POMDP models.\nFigure 1: A 3-chain sensor configuration 3.\nBACKGROUND 3.1 Model: Network Distributed POMDP The ND-POMDP model was introduced in [10], motivated by domains such as the sensor networks introduced in Section 2.\nIt is defined as the tuple \u27e8S, A, P, \u03a9, O, R, b\u27e9, where S = \u00d71\u2264i\u2264n Si \u00d7 Su is the set of world states.\nSi refers to the set of local states of agent i and Su is the set of unaffectable states.\nUnaffectable state refers to that part of the world state that cannot be affected by the agents' actions, e.g. environmental factors like target locations that no agent can control.\nA = \u00d71\u2264i\u2264n Ai is the set of joint actions, where Ai is the set of actions for agent i. ND-POMDP assumes transition independence, where the transition function is defined as P(s, a, s') = Pu(su, s'u) \u00b7 \u220f1\u2264i\u2264n Pi(si, su, ai, s'i), where a = a1, ..., an is the joint action performed in state s = s1, ..., sn, su and s' = s'1, ..., s'n, s'u is the resulting state.\n\u03a9 = \u00d71\u2264i\u2264n \u03a9i is the set of joint observations, where \u03a9i is the set of observations for agent i. Observational independence is assumed in ND-POMDPs, i.e., the joint observation function is defined as O(s', a, \u03c9) = \u220f1\u2264i\u2264n Oi(s'i, s'u, ai, \u03c9i), where s' = s'1, ..., s'n, s'u is the world state that results from the agents performing a = a1, ..., an in the previous state, and \u03c9 = \u03c91, ...
, \u03c9n \u2208 \u03a9 is the observation received in state s'.\nThis implies that each agent's observation depends only on the unaffectable state, its local action and its resulting local state.\nThe reward function, R, is defined as R(s, a) = \u2211l Rl(sl1, ..., slr, su, al1, ..., alr), where each l could refer to any sub-group of agents and r = |l|.\nBased on the reward function, an interaction hypergraph is constructed.\nA hyper-link, l, exists between a subset of agents for all Rl that comprise R.\nThe interaction hypergraph is defined as G = (Ag, E), where the agents, Ag, are the vertices and E = {l | l \u2286 Ag \u2227 Rl is a component of R} are the edges.\nThe initial belief state (distribution over the initial state), b, is defined as b(s) = bu(su) \u00b7 \u220f1\u2264i\u2264n bi(si), where bu and bi refer to the distributions over the initial unaffectable state and agent i's initial belief state, respectively.\nThe goal in ND-POMDP is to compute the joint policy \u03c0 = \u03c01, ..., \u03c0n that maximizes the team's expected reward over a finite horizon T starting from the belief state b.\nAn ND-POMDP is similar to an n-ary Distributed Constraint Optimization Problem (DCOP) [8, 12], where the variable at each node represents the policy selected by an individual agent, \u03c0i, with the domain of the variable being the set of all local policies, \u03a0i.\nThe reward component Rl where |l| = 1 can be thought of as a local constraint, while the reward component Rl where |l| > 1 corresponds to a non-local constraint in the constraint graph.\n3.2 Algorithm: Global Optimal Algorithm (GOA) In previous work, GOA has been defined as a global optimal algorithm for ND-POMDPs [10].\nWe will use GOA in our experimental comparisons, since GOA is a state-of-the-art global optimal algorithm, and in fact the only one with experimental results available for networks of more than two agents.\nGOA borrows from a global optimal DCOP algorithm called DPOP [12].\nGOA's message passing follows
that of DPOP.\nThe first phase is UTIL propagation, where the utility messages, in this case values of policies, are passed up from the leaves to the root.\nThe value for a policy at an agent is defined as the sum of the best response values from its children and the joint policy reward associated with the parent policy.\nThus, given a policy for a parent node, GOA requires an agent to iterate through all its policies, finding the best response policy and returning the value to the parent; at the parent node, to find the best policy, an agent requires its children to return their best responses to each of its policies.\nThis UTIL propagation process is repeated at each level in the tree, until the root exhausts all its policies.\nIn the second phase, VALUE propagation, the optimal policies are passed down from the root to the leaves.\nGOA takes advantage of the local interactions in the interaction graph by pruning out unnecessary joint policy evaluations (associated with nodes not connected directly in the tree).\nSince the interaction graph captures all the reward interactions among agents, and as this algorithm iterates through all the relevant joint policy evaluations, this algorithm yields a globally optimal solution.\n4.\nSPIDER As mentioned in Section 3.1, an ND-POMDP can be treated as a DCOP, where the goal is to compute a joint policy that maximizes the overall joint reward.\nThe brute-force technique for computing an optimal policy would be to examine the expected values for all possible joint policies.\nThe key idea in SPIDER is to avoid computing expected values for the entire space of joint policies, by utilizing upper bounds on the expected values of policies and the interaction structure of the agents.\nAkin to some of the algorithms for DCOP [8, 12], SPIDER has a pre-processing step that constructs a DFS tree corresponding to the given interaction structure.\nNote that these DFS trees are pseudo trees [12] that allow links between
ancestors and children.\nWe employ the Maximum Constrained Node (MCN) heuristic used in the DCOP algorithm ADOPT [8]; however, other heuristics (such as the MLSP heuristic from [7]) can also be employed.\nThe MCN heuristic tries to place agents with more constraints at the top of the tree.\nThis tree governs how the search for the optimal joint policy proceeds in SPIDER.\nThe algorithms presented in this paper are easily extendable to hyper-trees; however, for expository purposes, we assume binary trees.\nSPIDER is an algorithm for centralized planning and distributed execution in distributed POMDPs.\nIn this paper, we employ the following notation to denote policies and expected values: Ancestors(i) \u21d2 agents from i to the root (not including i).\nTree(i) \u21d2 agents in the sub-tree (not including i) for which i is the root.\n\u03c0root+ \u21d2 joint policy of all agents.\n\u03c0i+ \u21d2 joint policy of all agents in Tree(i) \u222a i.\n\u03c0i\u2212 \u21d2 joint policy of agents that are in Ancestors(i).\n\u03c0i \u21d2 policy of the ith agent.\n\u02c6v[\u03c0i, \u03c0i\u2212 ] \u21d2 upper bound on the expected value for \u03c0i+ given \u03c0i and policies of ancestor agents i.e.
\u03c0i\u2212 .\n\u02c6vj[\u03c0i, \u03c0i\u2212 ] \u21d2 upper bound on the expected value for \u03c0i+ from the jth child.\nv[\u03c0i, \u03c0i\u2212 ] \u21d2 expected value for \u03c0i given policies of ancestor agents, \u03c0i\u2212 .\nv[\u03c0i+ , \u03c0i\u2212 ] \u21d2 expected value for \u03c0i+ given policies of ancestor agents, \u03c0i\u2212 .\nvj[\u03c0i+ , \u03c0i\u2212 ] \u21d2 expected value for \u03c0i+ from the jth child.\nFigure 2: Execution of SPIDER, an example 4.1 Outline of SPIDER SPIDER is based on the idea of branch and bound search, where the nodes in the search tree represent partial\/complete joint policies.\nFigure 2 shows an example search tree for the SPIDER algorithm, using an example of the three-agent chain.\nBefore SPIDER begins its search, we create a DFS tree (i.e. pseudo tree) from the three-agent chain, with the middle agent as the root of this tree.\nSPIDER exploits the structure of this DFS tree while engaging in its search.\nNote that in our example figure, each agent is assigned a policy with T=2.\nThus, each rounded rectangle (search tree node) indicates a partial\/complete joint policy, a rectangle indicates an agent, and the ovals internal to an agent show its policy.\nThe heuristic or actual expected value for a joint policy is indicated in the top right corner of the rounded rectangle.\nIf the number is italicized and underlined, it implies that the actual expected value of the joint policy is provided.\nSPIDER begins with no policy assigned to any of the agents (shown at level 1 of the search tree).\nLevel 2 of the search tree indicates that the joint policies are sorted based on upper bounds computed for the root agent's policies.\nLevel 3 shows one SPIDER search node with a complete joint policy (a policy assigned to each of the agents).\nThe expected value for this joint policy is used to prune out the nodes in level 2 (the ones with upper bounds < 234).\nWhen creating policies for each non-leaf agent i, SPIDER potentially
performs two steps: 1.\nObtaining upper bounds and sorting: In this step, agent i computes upper bounds on the expected values, \u02c6v[\u03c0i, \u03c0i\u2212 ] of the joint policies \u03c0i+ corresponding to each of its policy \u03c0i and fixed ancestor policies.\nAn MDP based heuristic is used to compute these upper bounds on the expected values.\nDetailed description about this MDP heuristic is provided in Section 4.2.\nAll policies of agent i, \u03a0i are then sorted based on these upper bounds (also referred to as heuristic values henceforth) in descending order.\nExploration of these policies (in step 2 below) are performed in this descending order.\nAs indicated in the level 2 of the search tree (of Figure 2), all the joint policies are sorted based on the heuristic values, indicated in the top right corner of each joint policy.\nThe intuition behind sorting and then exploring policies in descending order of upper bounds, is that the policies with higher upper bounds could yield joint policies with higher expected values.\n2.\nExploration and Pruning: Exploration implies computing the best response joint policy \u03c0i+,\u2217 corresponding to fixed ancestor policies of agent i, \u03c0i\u2212 .\nThis is performed by iterating through all policies of agent i i.e. 
\u03a0i and summing two quantities for each policy: (i) the best response for all of i``s children (obtained by performing steps 1 and 2 at each of the child nodes); (ii) the expected value obtained by i for fixed policies of ancestors.\nThus, exploration of a policy \u03c0i yields actual expected value of a joint policy, \u03c0i+ represented as v[\u03c0i+ , \u03c0i\u2212 ].\nThe policy with the highest expected value is the best response policy.\nPruning refers to avoiding exploring all policies (or computing expected values) at agent i by using the current best expected value, vmax [\u03c0i+ , \u03c0i\u2212 ].\nHenceforth, this vmax [\u03c0i+ , \u03c0i\u2212 ] will be referred to as threshold.\nA policy, \u03c0i need not be explored if the upper bound for that policy, \u02c6v[\u03c0i, \u03c0i\u2212 ] is less than the threshold.\nThis is because the expected value for the best joint policy attainable for that policy will be less than the threshold.\nOn the other hand, when considering a leaf agent, SPIDER computes the best response policy (and consequently its expected value) corresponding to fixed policies of its ancestors, \u03c0i\u2212 .\nThis is accomplished by computing expected values for each of the policies (corresponding to fixed policies of ancestors) and selecting the highest expected value policy.\nIn Figure 2, SPIDER assigns best response policies to leaf agents at level 3.\nThe policy for the left leaf agent is to perform action East at each time step in the policy, while the policy for the right leaf agent is to perform Off at each time step.\nThese best response policies from the leaf agents yield an actual expected value of 234 for the complete joint policy.\nAlgorithm 1 provides the pseudo code for SPIDER.\nThis algorithm outputs the best joint policy, \u03c0i+,\u2217 (with an expected value greater than threshold) for the agents in Tree(i).\nLines 3-8 compute the best response policy of a leaf agent i, while lines 9-23 computes the best response 
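The branch-and-bound recursion just outlined can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: `reward` (an agent's own value against its parent's choice) and `bound` (any admissible per-child upper bound) are hypothetical stand-ins for JOINT-REWARD and UPPER-BOUND, and policies are plain labels.

```python
def spider(agent, parent_choice, threshold, reward, bound, children, policies):
    """Best joint policy for agent's subtree beating `threshold`, else (None, threshold).

    Sketch of Algorithm 1: sort own policies by upper bound, explore in
    descending order, prune when the bound falls below the incumbent, and
    pass each child a threshold that credits the other children with their
    (optimistic) estimates.
    """
    best, best_v = None, threshold
    kids = children.get(agent, [])  # leaf agents have no entry

    def ub(pi):  # own reward plus admissible bounds for each child subtree
        return reward(agent, pi, parent_choice) + sum(bound(c, pi) for c in kids)

    for pi in sorted(policies[agent], key=ub, reverse=True):  # upper-bound sort
        if ub(pi) < best_v:          # prune: cannot beat current threshold
            continue
        own = reward(agent, pi, parent_choice)
        est = {c: bound(c, pi) for c in kids}   # refined to actuals as we go
        joint, ok = {agent: pi}, True
        for c in kids:
            # jThres: what this child must exceed, given the others' estimates
            j_thres = best_v - own - sum(e for k, e in est.items() if k != c)
            sub, sub_v = spider(c, pi, j_thres, reward, bound, children, policies)
            if sub is None:          # child cannot beat its share of the threshold
                ok = False
                break
            joint.update(sub)
            est[c] = sub_v           # replace heuristic estimate with actual value
        total = own + sum(est.values())
        if ok and total > best_v:    # new incumbent raises the threshold
            best, best_v = joint, total
    return best, best_v
```

On a toy two-child tree with table-driven rewards, the recursion returns the exhaustive-search optimum while pruning the dominated root policy, mirroring how the level-3 value 234 prunes the level-2 nodes in Figure 2.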
This best response computation for a non-leaf agent i includes: (a) sorting of policies (in descending order) based on heuristic values on line 11; (b) computing best response policies at each of the children for fixed policies of agent i in lines 16-20; and (c) maintaining the best expected value and joint policy in lines 21-23.

Algorithm 1 SPIDER(i, π_i−, threshold)
1: π_i+,∗ ← null
2: Π_i ← GET-ALL-POLICIES(horizon, A_i, Ω_i)
3: if IS-LEAF(i) then
4:   for all π_i ∈ Π_i do
5:     v[π_i, π_i−] ← JOINT-REWARD(π_i, π_i−)
6:     if v[π_i, π_i−] > threshold then
7:       π_i+,∗ ← π_i
8:       threshold ← v[π_i, π_i−]
9: else
10:  children ← CHILDREN(i)
11:  Π̂_i ← UPPER-BOUND-SORT(i, Π_i, π_i−)
12:  for all π_i ∈ Π̂_i do
13:    π̃_i+ ← π_i
14:    if v̂[π_i, π_i−] < threshold then
15:      Go to line 12
16:    for all j ∈ children do
17:      jThres ← threshold − v[π_i, π_i−] − Σ_{k∈children, k≠j} v̂_k[π_i, π_i−]
18:      π_j+,∗ ← SPIDER(j, ⟨π_i, π_i−⟩, jThres)
19:      π̃_i+ ← ⟨π̃_i+, π_j+,∗⟩
20:      v̂_j[π_i, π_i−] ← v[π_j+,∗, ⟨π_i, π_i−⟩]
21:    if v[π̃_i+, π_i−] > threshold then
22:      threshold ← v[π̃_i+, π_i−]
23:      π_i+,∗ ← π̃_i+
24: return π_i+,∗

Algorithm 2 UPPER-BOUND-SORT(i, Π_i, π_i−)
1: children ← CHILDREN(i)
2: Π̂_i ← null /* Stores the sorted list */
3: for all π_i ∈ Π_i do
4:   v̂[π_i, π_i−] ← JOINT-REWARD(π_i, π_i−)
5:   for all j ∈ children do
6:     v̂_j[π_i, π_i−] ← UPPER-BOUND(i, j, ⟨π_i, π_i−⟩)
7:     v̂[π_i, π_i−] +← v̂_j[π_i, π_i−]
8:   Π̂_i ← INSERT-INTO-SORTED(π_i, Π̂_i)
9: return Π̂_i

Algorithm 2 provides the pseudo code for sorting policies based on the upper bounds on the expected values of joint policies. The expected value for an agent i consists of two parts: the value obtained from ancestors and the value obtained from its children. Line 4 computes the expected value obtained from ancestors of the agent (using the JOINT-REWARD function), while lines 5-7 compute the heuristic value from the children. The sum of these two parts yields an upper bound on the expected value for agent i, and line 8 of the algorithm sorts the policies based on these upper bounds.

4.2 MDP based heuristic function

The heuristic function quickly provides an upper bound on the expected value obtainable from the agents in Tree(i). The sub-tree of agents is a distributed POMDP in itself, and the idea here is to construct a centralized MDP corresponding to the (sub-tree) distributed POMDP and obtain the expected value of the optimal policy for this centralized MDP. To reiterate this in terms of the agents in the DFS tree interaction structure: we assume full observability for the agents in Tree(i), and for fixed policies of the agents in {Ancestors(i) ∪ i} we compute the joint value v̂[π_i+, π_i−].

We use the following notation for presenting the equations for computing upper bounds/heuristic values (for agents i and k): let E_i− denote the set of links between agents in {Ancestors(i) ∪ i} and Tree(i), and E_i+ denote the set of links between agents in Tree(i). Also, if l ∈ E_i−, then l1 is the agent in {Ancestors(i) ∪ i} and l2 is the agent in Tree(i) that l connects together. We first compact the standard notation:

o^t_k = O_k(s^{t+1}_k, s^{t+1}_u, π_k(ω^t_k), ω^{t+1}_k)    (1)
p^t_k = P_k(s^t_k, s^t_u, π_k(ω^t_k), s^{t+1}_k) · o^t_k
p^t_u = P(s^t_u, s^{t+1}_u)
s^t_l = ⟨s^t_{l1}, s^t_{l2}, s^t_u⟩ ;  ω^t_l = ⟨ω^t_{l1}, ω^t_{l2}⟩
r^t_l = R_l(s^t_l, π_{l1}(ω^t_{l1}), π_{l2}(ω^t_{l2}))
v^t_l = V^t_{π_l}(s^t_l, s^t_u, ω^t_{l1}, ω^t_{l2})

Depending on the location of agent k in the agent tree, we have the following cases:

IF k ∈ {Ancestors(i) ∪ i}: p̂^t_k = p^t_k    (2)
IF k ∈ Tree(i): p̂^t_k = P_k(s^t_k, s^t_u, π_k(ω^t_k), s^{t+1}_k)
IF l ∈ E_i−: r̂^t_l = max_{a_{l2}} R_l(s^t_l, π_{l1}(ω^t_{l1}), a_{l2})
IF l ∈ E_i+: r̂^t_l = max_{a_{l1},a_{l2}} R_l(s^t_l, a_{l1}, a_{l2})

The value function for an agent i executing the joint policy π_i+ at time η−1 is provided by the equation:

V^{η−1}_{π_i+}(s^{η−1}, ω^{η−1}) = Σ_{l∈E_i−} v^{η−1}_l + Σ_{l∈E_i+} v^{η−1}_l    (3)
where v^{η−1}_l = r^{η−1}_l + Σ_{ω^η_l, s^η} p^{η−1}_{l1} p^{η−1}_{l2} p^{η−1}_u v^η_l

Algorithm 3 UPPER-BOUND(i, j, π_j−)
1: val ← 0
2: for all l ∈ E_j− ∪ E_j+ do
3:   if l ∈ E_j− then π_{l1} ← φ
4:   for all s^0_l do
5:     val +← startBel[s^0_l] · UPPER-BOUND-TIME(i, s^0_l, j, π_{l1}, ⟨⟩)
6: return val

Algorithm 4 UPPER-BOUND-TIME(i, s^t_l, j, π_{l1}, ω^t_{l1})
1: maxVal ← −∞
2: for all a_{l1}, a_{l2} do
3:   if l ∈ E_i− and l ∈ E_j− then a_{l1} ← π_{l1}(ω^t_{l1})
4:   val ← GET-REWARD(s^t_l, a_{l1}, a_{l2})
5:   if t < π_i.horizon − 1 then
6:     for all s^{t+1}_l, ω^{t+1}_{l1} do
7:       futVal ← p^t_u · p̂^t_{l1} · p̂^t_{l2}
8:       futVal ∗← UPPER-BOUND-TIME(i, s^{t+1}_l, j, π_{l1}, ⟨ω^t_{l1}, ω^{t+1}_{l1}⟩)
9:       val +← futVal
10:  if val > maxVal then maxVal ← val
11: return maxVal

The upper bound on the expected value for a link is computed by modifying Equation 3 to reflect the full observability assumption. This involves removing the observation probability term for the agents in Tree(i) and maximizing the future value v̂^η_l over the actions of those agents (in Tree(i)).
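The core of the full-observability relaxation can be illustrated on a single small decision problem. The sketch below, with made-up 2-state/2-action numbers, shows the essential inequality: backward induction that maximizes over actions at every fully observed state (`mdp_upper_bound`) always over-estimates the value of any fixed policy (`policy_value`). It simplifies away the ancestor/link structure of the paper's heuristic; only the relaxation idea is kept.

```python
def mdp_upper_bound(P, R, horizon):
    """Centralized-MDP backup: V_t(s) = max_a [ R[s][a] + sum_s' P[s][a][s'] V_{t+1}(s') ]."""
    n = len(R)
    V = [0.0] * n
    for _ in range(horizon):
        V = [max(R[s][a] + sum(P[s][a][s2] * V[s2] for s2 in range(n))
                 for a in range(len(R[s])))
             for s in range(n)]
    return V

def policy_value(P, R, act, horizon):
    """Value of a fixed state-based policy `act` (state -> action), same backup without the max."""
    n = len(R)
    V = [0.0] * n
    for _ in range(horizon):
        V = [R[s][act(s)] + sum(P[s][act(s)][s2] * V[s2] for s2 in range(n))
             for s in range(n)]
    return V
```

Because each backup of `mdp_upper_bound` dominates the corresponding backup of `policy_value` state by state, the bound holds for every horizon, which is the admissibility property proved as Proposition 1.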
Thus, the equation for the computation of the upper bound on a link l is as follows:

IF l ∈ E_i−: v̂^{η−1}_l = r̂^{η−1}_l + max_{a_{l2}} Σ_{ω^η_{l1}, s^η_l} p̂^{η−1}_{l1} p̂^{η−1}_{l2} p^{η−1}_u v̂^η_l
IF l ∈ E_i+: v̂^{η−1}_l = r̂^{η−1}_l + max_{a_{l1},a_{l2}} Σ_{s^η_l} p̂^{η−1}_{l1} p̂^{η−1}_{l2} p^{η−1}_u v̂^η_l

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 825

Algorithm 3 and Algorithm 4 provide the algorithm for computing the upper bound for child j of agent i, using the equations described above. While Algorithm 4 computes the upper bound on a link given the starting state, Algorithm 3 sums the upper bound values computed over each of the links in E_j− ∪ E_j+.

4.3 Abstraction

Algorithm 5 SPIDER-ABS(i, π_i−, threshold)
1: π_i+,∗ ← null
2: Π_i ← GET-POLICIES(⟨⟩, 1)
3: if IS-LEAF(i) then
4:   for all π_i ∈ Π_i do
5:     absHeuristic ← GET-ABS-HEURISTIC(π_i, π_i−)
6:     absHeuristic ∗← (timeHorizon − π_i.horizon)
7:     if π_i.horizon = timeHorizon and π_i.absNodes = 0 then
8:       v[π_i, π_i−] ← JOINT-REWARD(π_i, π_i−)
9:       if v[π_i, π_i−] > threshold then
10:        π_i+,∗ ← π_i; threshold ← v[π_i, π_i−]
11:    else if v[π_i, π_i−] + absHeuristic > threshold then
12:      Π̂_i ← EXTEND-POLICY(π_i, π_i.absNodes + 1)
13:      Π_i +← INSERT-SORTED-POLICIES(Π̂_i)
14:    REMOVE(π_i)
15: else
16:   children ← CHILDREN(i)
17:   Π_i ← UPPER-BOUND-SORT(i, Π_i, π_i−)
18:   for all π_i ∈ Π_i do
19:     π̃_i+ ← π_i
20:     absHeuristic ← GET-ABS-HEURISTIC(π_i, π_i−)
21:     absHeuristic ∗← (timeHorizon − π_i.horizon)
22:     if π_i.horizon = timeHorizon and π_i.absNodes = 0 then
23:       if v̂[π_i, π_i−] < threshold and π_i.absNodes = 0 then
24:         Go to line 18
25:       for all j ∈ children do
26:         jThres ← threshold − v[π_i, π_i−] − Σ_{k∈children, k≠j} v̂_k[π_i, π_i−]
27:         π_j+,∗ ← SPIDER(j, ⟨π_i, π_i−⟩, jThres)
28:         π̃_i+ ← ⟨π̃_i+, π_j+,∗⟩; v̂_j[π_i, π_i−] ← v[π_j+,∗, ⟨π_i, π_i−⟩]
29:       if v[π̃_i+, π_i−] > threshold then
30:         threshold ← v[π̃_i+, π_i−]; π_i+,∗ ← π̃_i+
31:     else if v̂[π_i+, π_i−] + absHeuristic > threshold then
32:       Π̂_i ← EXTEND-POLICY(π_i, π_i.absNodes + 1)
33:       Π_i +← INSERT-SORTED-POLICIES(Π̂_i)
34:     REMOVE(π_i)
35: return π_i+,∗

In SPIDER, the exploration/pruning phase can only begin after the heuristic (or upper bound) computation and sorting for the policies has ended. We provide an approach to possibly circumvent the exploration of a group of policies based on the heuristic computation for one abstract policy, thus leading to an improvement in runtime performance (without loss in solution quality). The important steps in this technique are defining the abstract policy and how heuristic values are computed for the abstract policies. In this paper, we propose two types of abstraction:

1. Horizon Based Abstraction (HBA): Here, the abstract policy is defined as a shorter horizon policy. It represents a group of longer horizon policies that have the same actions as the abstract policy for times less than or equal to the horizon of the abstract policy. In Figure 3(a), a T=1 abstract policy that performs the East action represents a group of T=2 policies that perform East in the first time step.
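The grouping behind HBA can be made concrete with a few lines of Python. In this sketch a policy is a map from observation histories (tuples) to actions, and `extend_horizon` enumerates the one-step-longer policies that a shorter abstract policy stands for; the helper name and the policy encoding are illustrative, not the paper's EXTEND-POLICY implementation, and the loop over `abstract_policy` assumes the one-level case shown in Figure 3(a).

```python
from itertools import product

def extend_horizon(abstract_policy, observations, actions):
    """All one-step-longer policies that agree with `abstract_policy` on its horizon.

    Each extended policy keeps the abstract policy's action choices and adds
    one action for every new observation history at the next level.
    """
    # New leaf histories: each existing history extended by one observation.
    histories = [h + (o,) for h in abstract_policy for o in observations]
    extended = []
    for choice in product(actions, repeat=len(histories)):  # every action assignment
        pol = dict(abstract_policy)                 # inherit the fixed prefix
        pol.update(dict(zip(histories, choice)))    # assign the new level
        extended.append(pol)
    return extended
```

With two observations and two actions, one T=1 abstract policy represents |A|^|Ω| = 4 complete T=2 policies, which is exactly the group whose exploration a single pruned abstract policy avoids.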
For HBA, there are two parts to heuristic computation: (a) computing the upper bound for the horizon of the abstract policy; this is the same as the heuristic computation defined by the GET-HEURISTIC() algorithm for SPIDER, however with a shorter time horizon (the horizon of the abstract policy); and (b) computing the maximum possible reward that can be accumulated in one time step (using GET-ABS-HEURISTIC()) and multiplying it by the number of time steps remaining to the time horizon. This maximum possible reward (for one time step) is obtained by iterating through all the actions of all the agents in Tree(i) and computing the maximum joint reward for any joint action. The sum of (a) and (b) is the heuristic value for an HBA abstract policy.

2. Node Based Abstraction (NBA): Here an abstract policy is obtained by not associating actions with certain nodes of the policy tree. Unlike HBA, this implies multiple levels of abstraction. This is illustrated in Figure 3(b), where there are T=2 policies that do not have an action for observation 'TP'. These incomplete T=2 policies are abstractions of complete T=2 policies. Increased levels of abstraction lead to faster computation of a complete joint policy, π_root+, and also to shorter heuristic computation and exploration/pruning phases. For NBA, the heuristic computation is similar to that of a normal policy, except in cases where there is no action associated with a policy node. In such cases, the immediate reward is taken as R_max (the maximum reward for any action).

We combine both abstraction techniques mentioned above into one technique, SPIDER-ABS. Algorithm 5 provides the algorithm for this abstraction technique. For computing the optimal joint policy with SPIDER-ABS, a non-leaf agent i initially examines all abstract T=1 policies (line 2) and sorts them based on abstract policy heuristic computations (line 17). The abstraction horizon is gradually increased, and these abstract policies are then explored in descending order of heuristic values; ones that have heuristic values less than the threshold are pruned (lines 23-24). Exploration in SPIDER-ABS has the same definition as in SPIDER if the policy being explored has a horizon of policy computation equal to the actual time horizon and if all the nodes of the policy have an action associated with them (lines 25-30). However, if those conditions are not met, then the policy is substituted by the group of policies that it represents (using the EXTEND-POLICY() function) (lines 31-32). The EXTEND-POLICY() function is also responsible for initializing the horizon and absNodes of a policy. absNodes represents the number of nodes at the last level in the policy tree that do not have an action assigned to them. If π_i.absNodes = |Ω_i|^{π_i.horizon−1} (i.e., the total number of policy nodes possible at π_i.horizon), then π_i.absNodes is set to zero and π_i.horizon is increased by 1. Otherwise, π_i.absNodes is increased by 1. Thus, this function combines both HBA and NBA by using the policy variables horizon and absNodes. Before substituting the abstract policy with a group of policies, those policies are sorted based on heuristic values (line 33). A similar type of abstraction-based best response computation is adopted at leaf agents (lines 3-14).

4.4 Value ApproXimation (VAX)

In this section, we present an approximate enhancement to SPIDER called VAX. The input to this technique is an approximation parameter ε, which determines the difference from the optimal solution quality. This approximation parameter is used at each agent for pruning out joint policies. The pruning mechanism in SPIDER and SPIDER-ABS dictates that a joint policy be pruned only if the threshold is exactly greater than the heuristic value.

Figure 3: Example of abstraction for (a) HBA (Horizon Based Abstraction) and (b) NBA (Node Based Abstraction)

However, the idea in this technique is to prune out a joint policy if the following condition is satisfied: threshold + ε > v̂[π_i, π_i−]. Apart from the pruning condition, VAX is the same as SPIDER/SPIDER-ABS. In the example of Figure 2, if the heuristic value for the second joint policy (or second search tree node) in level 2 were 238 instead of 232, then that policy could not be pruned using SPIDER or SPIDER-ABS. However, in VAX with an approximation parameter of 5, the joint policy in consideration would also be pruned. This is because the threshold (234) at that juncture plus the approximation parameter (5), i.e., 239, would have been greater than the heuristic value for that joint policy (238). It can be noted from this example that this kind of pruning can lead to fewer explorations and hence to an improvement in the overall run-time performance. However, this can entail a sacrifice in the quality of the solution, because this technique can prune out a candidate optimal solution. A bound on the error introduced by this approximate algorithm as a function of ε is provided by Proposition 3.

4.5 Percentage ApproXimation (PAX)

In this section, we present the second approximation enhancement over SPIDER, called PAX. The input to this technique is a parameter δ that represents the minimum percentage of the optimal solution quality that is desired. The output of this technique is a policy with an expected value that is at least δ% of the optimal solution quality. A policy is pruned if the following condition is satisfied: threshold > (δ/100) · v̂[π_i, π_i−]. As in VAX, the only difference between PAX and SPIDER/SPIDER-ABS is this pruning condition. Again in Figure 2, if the heuristic value for the second search tree node in level 2 were 238 instead of 232, then PAX with an input parameter of 98% would be able to prune that search tree node (since (98/100) · 238 = 233.24 < 234).
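The three pruning tests differ only in how the heuristic value is compared against the threshold, so they can be set side by side directly; the snippet below is a sketch using the running example's numbers (threshold 234, heuristic value 238), with `epsilon` and `delta` as in Sections 4.4-4.5.

```python
def prune_spider(threshold, upper_bound):
    """Exact test: prune only when the bound is strictly below the threshold."""
    return upper_bound < threshold

def prune_vax(threshold, upper_bound, epsilon):
    """Relaxed by an additive slack; may discard candidates within epsilon per agent."""
    return upper_bound < threshold + epsilon

def prune_pax(threshold, upper_bound, delta):
    """Relaxed multiplicatively; keeps at least delta% of the optimal quality."""
    return (delta / 100.0) * upper_bound < threshold
```

With threshold 234 and bound 238, SPIDER keeps the node, VAX with ε = 5 prunes it (238 < 239), and PAX with δ = 98 prunes it as well (233.24 < 234), matching the worked example above.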
This type of pruning leads to fewer explorations and hence an improvement in run-time performance, while potentially leading to a loss in the quality of the solution. Proposition 4 provides the bound on quality loss.

4.6 Theoretical Results

PROPOSITION 1. The heuristic provided using the centralized MDP heuristic is admissible.

Proof. For the value provided by the heuristic to be admissible, it should be an over-estimate of the expected value for a joint policy. Thus, we need to show that for l ∈ E_i+ ∪ E_i−: v̂^t_l ≥ v^t_l (refer to the notation in Section 4.2). We use mathematical induction on t to prove this.

Base case: t = T − 1. Irrespective of whether l ∈ E_i− or l ∈ E_i+, r̂^t_l is computed by maximizing over all actions of the agents in Tree(i), while r^t_l is computed for fixed policies of the same agents. Hence r̂^t_l ≥ r^t_l, and also v̂^t_l ≥ v^t_l.

Assumption: The proposition holds for t = η, where 1 ≤ η < T − 1.

We now have to prove that the proposition holds for t = η − 1. We show the proof for l ∈ E_i−; similar reasoning can be adopted for l ∈ E_i+. The heuristic value function for l ∈ E_i− is provided by the following equation:

v̂^{η−1}_l = r̂^{η−1}_l + max_{a_{l2}} Σ_{ω^η_{l1}, s^η_l} p̂^{η−1}_{l1} p̂^{η−1}_{l2} p^{η−1}_u v̂^η_l

Rewriting the RHS using Eqn 2 (in Section 4.2):

= r̂^{η−1}_l + max_{a_{l2}} Σ_{ω^η_{l1}, s^η_l} p^{η−1}_u p^{η−1}_{l1} p̂^{η−1}_{l2} v̂^η_l
= r̂^{η−1}_l + Σ_{ω^η_{l1}, s^η_l} p^{η−1}_u p^{η−1}_{l1} max_{a_{l2}} p̂^{η−1}_{l2} v̂^η_l

Since max_{a_{l2}} p̂^{η−1}_{l2} v̂^η_l ≥ Σ_{ω_{l2}} o^{η−1}_{l2} p̂^{η−1}_{l2} v̂^η_l and p^{η−1}_{l2} = o^{η−1}_{l2} p̂^{η−1}_{l2}:

≥ r̂^{η−1}_l + Σ_{ω^η_{l1}, s^η_l} p^{η−1}_u p^{η−1}_{l1} Σ_{ω_{l2}} p^{η−1}_{l2} v̂^η_l

Since v̂^η_l ≥ v^η_l (from the assumption):

≥ r̂^{η−1}_l + Σ_{ω^η_{l1}, s^η_l} p^{η−1}_u p^{η−1}_{l1} Σ_{ω_{l2}} p^{η−1}_{l2} v^η_l

Since r̂^{η−1}_l ≥ r^{η−1}_l (by definition):

≥ r^{η−1}_l + Σ_{(ω^η_l, s^η_l)} p^{η−1}_u p^{η−1}_{l1} p^{η−1}_{l2} v^η_l = v^{η−1}_l

Thus proved.

PROPOSITION 2. SPIDER provides an optimal solution.

Proof. SPIDER examines all possible joint policies given the interaction structure of the agents, the only exception being when a joint policy is pruned based on the heuristic value. Thus, as long as a candidate optimal policy is not pruned, SPIDER will return an optimal policy. As proved in Proposition 1, the heuristic value for a joint policy is always an upper bound on its expected value. Hence when a joint policy is pruned, it cannot be an optimal solution.

PROPOSITION 3. The error bound on the solution quality for VAX (implemented over SPIDER-ABS) with an approximation parameter of ε is ρ·ε, where ρ is the number of leaf nodes in the DFS tree.

Proof. We prove this proposition using mathematical induction on the depth of the DFS tree.

Base case: depth = 1 (i.e., one node). The best response is computed by iterating through all policies Π_k. A policy π_k is pruned if v̂[π_k, π_k−] < threshold + ε. Thus the best response policy computed by VAX would be at most ε away from the optimal best response. Hence the proposition holds for the base case.

Assumption: The proposition holds for d, where 1 ≤ depth ≤ d.

We now have to prove that the proposition holds for d + 1. Without loss of generality, let us assume that the root node of this tree has k children. Each of these children is of depth ≤ d, and hence from the assumption, the error introduced in the k-th child is ρ_k·ε, where ρ_k is the number of leaf nodes in the k-th child of the root. Therefore ρ = Σ_k ρ_k, where ρ is the number of leaf nodes in the tree. In SPIDER-ABS, the threshold at the root agent is thresh_spider = Σ_k v[π_k+, π_k−]. However, with VAX the threshold at the root agent will be (in the worst case) thresh_vax = Σ_k v[π_k+, π_k−] − Σ_k ρ_k·ε. Hence, with VAX a joint policy is pruned at the root agent if v̂[π_root, π_root−] < thresh_vax + ε, i.e., if v̂[π_root, π_root−] < thresh_spider − ((Σ_k ρ_k) − 1)·ε. Any candidate pruned this way therefore has value at most (Σ_k ρ_k)·ε = ρ·ε below the optimal threshold, so the overall error is bounded by ρ·ε. Hence proved.

PROPOSITION 4. For PAX (implemented over SPIDER-ABS) with an input parameter of δ, the solution quality is at least (δ/100)·v[π_root+,∗], where v[π_root+,∗] denotes the optimal solution quality.

Proof. We prove this proposition using mathematical induction on the depth of the DFS tree.

Base case: depth = 1 (i.e.
one node). The best response is computed by iterating through all policies Π_k. A policy π_k is pruned if (δ/100)·v̂[π_k, π_k−] < threshold. Thus the best response policy computed by PAX would be at least δ/100 times the optimal best response. Hence the proposition holds for the base case.

Assumption: The proposition holds for d, where 1 ≤ depth ≤ d.

We now have to prove that the proposition holds for d + 1. Without loss of generality, let us assume that the root node of this tree has k children. Each of these children is of depth ≤ d, and hence from the assumption, the solution quality in the k-th child is at least (δ/100)·v[π_k+,∗, π_k−] for PAX. With SPIDER-ABS, a joint policy is pruned at the root agent if v̂[π_root, π_root−] < Σ_k v[π_k+,∗, π_k−]. However, with PAX, a joint policy is pruned if (δ/100)·v̂[π_root, π_root−] < Σ_k (δ/100)·v[π_k+,∗, π_k−] ⇒ v̂[π_root, π_root−] < Σ_k v[π_k+,∗, π_k−]. Since the pruning condition at the root agent in PAX is the same as the one in SPIDER-ABS, no error is introduced at the root agent and all the error is introduced in the children. Thus, the overall solution quality is at least δ/100 of the optimal solution. Hence proved.

5. EXPERIMENTAL RESULTS

All our experiments were conducted on the sensor network domain from Section 2. The five network configurations employed are shown in Figure 4. The algorithms that we experimented with are GOA, SPIDER, SPIDER-ABS, PAX and VAX. We compare against GOA because it is the only globally optimal algorithm that considers more than two agents. We performed two sets of experiments: (i) first, we compared the run-time performance of the above algorithms, and (ii) second, we experimented with PAX and VAX to study the tradeoff between run-time and solution quality. Experiments were
terminated after 10000 seconds.¹

Figure 5(a) provides run-time comparisons between the optimal algorithms GOA, SPIDER and SPIDER-ABS and the approximate algorithms PAX (δ of 80) and VAX (ε of 30). The x-axis denotes the sensor network configuration used, while the y-axis indicates the runtime (on a log scale). The time horizon of policy computation was 3. For each configuration (3-chain, 4-chain, 4-star and 5-star), there are five bars indicating the time taken by GOA, SPIDER, SPIDER-ABS, PAX and VAX. GOA did not terminate within the time limit for the 4-star and 5-star configurations. SPIDER-ABS dominated SPIDER and GOA for all the configurations. For instance, in the 3-chain configuration, SPIDER-ABS provides a 230-fold speedup over GOA and a 2-fold speedup over SPIDER, and for the 4-chain configuration it provides a 58-fold speedup over GOA and a 2-fold speedup over SPIDER. The two approximation approaches, VAX and PAX, provided further improvement in performance over SPIDER-ABS. For instance, in the 5-star configuration VAX provides a 15-fold speedup and PAX provides an 8-fold speedup over SPIDER-ABS.

¹ Machine specs for all experiments: Intel Xeon 3.6 GHz processor, 2GB RAM.

Figure 5(b) provides a comparison of the solution quality obtained using the different algorithms for the problems tested in Figure 5(a). The x-axis denotes the sensor network configuration, while the y-axis indicates the solution quality. Since GOA, SPIDER, and SPIDER-ABS are all globally optimal algorithms, the solution quality is the same for all of them. For the 5-star configuration, the globally optimal algorithms did not terminate within the limit of 10000 seconds, so the bar for optimal quality indicates an upper bound on the optimal solution quality. With both approximations, we obtained a solution quality that was close to the optimal solution quality. In the 3-chain and 4-star configurations, it is remarkable that both PAX and VAX obtained almost the same actual quality as the globally optimal algorithms, despite the approximation parameters ε and δ. For the other configurations as well, the loss in quality was less than 20% of the optimal solution quality.

Figure 5(c) provides the time to solution with PAX (for varying δ). The x-axis denotes the approximation parameter δ (percentage of optimal) used, while the y-axis denotes the time taken to compute the solution (on a log scale). The time horizon for all the configurations was 4. As δ was decreased from 70 to 30, the time to solution decreased drastically. For instance, in the 3-chain case there was a total speedup of 170-fold when δ was changed from 70 to 30. Interestingly, even with a low δ of 30%, the actual solution quality remained equal to the one obtained at 70%.

Figure 5(d) provides the time to solution for all the configurations with VAX (for varying ε). The x-axis denotes the approximation parameter ε used, while the y-axis denotes the time taken to compute the solution (on a log scale). The time horizon for all the configurations was 4. As ε was increased, the time to solution decreased drastically. For instance, in the 4-star case there was a total speedup of 73-fold when ε was changed from 60 to 140. Again, the actual solution quality did not change with varying ε.

Figure 4: Sensor network configurations

Figure 5: Comparison of GOA, SPIDER, SPIDER-ABS and VAX for T = 3 on (a) Runtime and (b) Solution quality; (c) Time to solution for PAX with varying percentage to optimal for T=4; (d) Time to solution for VAX with varying epsilon for T=4

6. SUMMARY AND RELATED WORK

This paper presents four algorithms, SPIDER, SPIDER-ABS, PAX and VAX, that provide a novel combination of features for policy search in distributed POMDPs: (i) exploiting agent interaction structure given a network of agents (i.e.
easier scale-up to larger number of agents); (ii) using branch and bound search with an MDP based heuristic function; (iii) utilizing abstraction to improve runtime performance without sacrificing solution quality; (iv) providing a priori percentage bounds on quality of solutions using PAX; and (v) providing expected value bounds on the quality of solutions using VAX.\nThese features allow for systematic tradeoff of solution quality for run-time in networks of agents operating under uncertainty.\nExperimental results show orders of magnitude improvement in performance over previous global optimal algorithms.\nResearchers have typically employed two types of techniques for solving distributed POMDPs.\nThe first set of techniques compute global optimal solutions.\nHansen et al. [5] present an algorithm based on dynamic programming and iterated elimination of dominant policies, that provides optimal solutions for distributed POMDPs.\nSzer et al. [13] provide an optimal heuristic search method for solving Decentralized POMDPs.\nThis algorithm is based on the combination of a classical heuristic search algorithm, A\u2217 and decentralized control theory.\nThe key differences between SPIDER and MAA* are: (a) Enhancements to SPIDER (VAX and PAX) provide for quality guaranteed approximations, while MAA* is a global optimal algorithm and hence involves significant computational complexity; (b) Due to MAA*``s inability to exploit interaction structure, it was illustrated only with two agents.\nHowever, SPIDER has been illustrated for networks of agents; and (c) SPIDER explores the joint policy one agent at a time, while MAA* expands it one time step at a time (simultaneously for all the agents).\nThe second set of techniques seek approximate policies.\nEmeryMontemerlo et al. 
[4] approximate POSGs as a series of one-step Bayesian games, using heuristics to approximate future value and trading off limited lookahead for computational efficiency, resulting in locally optimal policies (with respect to the selected heuristic). The JESP algorithm of Nair et al. [9] uses dynamic programming to reach a locally optimal solution for finite-horizon decentralized POMDPs. Peshkin et al. [11] and Bernstein et al. [2] are examples of policy search techniques that search for locally optimal policies. Though all the above techniques improve the efficiency of policy computation considerably, they are unable to provide error bounds on the quality of the solution. This aspect of quality bounds differentiates SPIDER from all the above techniques.

Acknowledgements. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA), through the Department of the Interior, NBC, Acquisition Services Division, under Contract No. NBCHD030010. The views and conclusions contained in this document are those of the authors, and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.

7. REFERENCES
[1] R. Becker, S. Zilberstein, V. Lesser, and C. V. Goldman. Solving transition independent decentralized Markov decision processes. JAIR, 22:423-455, 2004.
[2] D. S. Bernstein, E. A. Hansen, and S. Zilberstein. Bounded policy iteration for decentralized POMDPs. In IJCAI, 2005.
[3] D. S. Bernstein, S. Zilberstein, and N. Immerman. The complexity of decentralized control of MDPs. In UAI, 2000.
[4] R. Emery-Montemerlo, G. Gordon, J. Schneider, and S. Thrun. Approximate solutions for partially observable stochastic games with common payoffs. In AAMAS, 2004.
[5] E. Hansen, D. Bernstein, and S. Zilberstein. Dynamic programming for partially observable stochastic games. In AAAI, 2004.
[6] V. Lesser, C. Ortiz, and M.
Tambe. Distributed sensor nets: A multiagent perspective. Kluwer, 2003.
[7] R. Maheswaran, M. Tambe, E. Bowring, J. Pearce, and P. Varakantham. Taking DCOP to the real world: Efficient complete solutions for distributed event scheduling. In AAMAS, 2004.
[8] P. J. Modi, W. Shen, M. Tambe, and M. Yokoo. An asynchronous complete method for distributed constraint optimization. In AAMAS, 2003.
[9] R. Nair, D. Pynadath, M. Yokoo, M. Tambe, and S. Marsella. Taming decentralized POMDPs: Towards efficient policy computation for multiagent settings. In IJCAI, 2003.
[10] R. Nair, P. Varakantham, M. Tambe, and M. Yokoo. Networked distributed POMDPs: A synthesis of distributed constraint optimization and POMDPs. In AAAI, 2005.
[11] L. Peshkin, N. Meuleau, K.-E. Kim, and L. Kaelbling. Learning to cooperate via policy search. In UAI, 2000.
[12] A. Petcu and B. Faltings. A scalable method for multiagent constraint optimization. In IJCAI, 2005.
[13] D. Szer, F. Charpillet, and S. Zilberstein. MAA*: A heuristic search algorithm for solving decentralized POMDPs. In IJCAI, 2005.","lvl-3":"Letting loose a SPIDER on a network of POMDPs: Generating quality guaranteed policies

ABSTRACT

Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the significant complexity of solving distributed POMDPs, particularly as we scale up the number of agents, one popular approach has focused on approximate solutions. Though this approach is efficient, its algorithms do not provide any guarantees on solution quality. A second, less popular approach focuses on global optimality, but typical results are available only for two agents, and at considerable computational cost. This paper overcomes the limitations of both these approaches by providing
SPIDER, a novel combination of three key features for policy generation in distributed POMDPs: (i) it exploits agent interaction structure given a network of agents (i.e., allowing easier scale-up to a larger number of agents); (ii) it uses a combination of heuristics to speed up policy search; and (iii) it allows quality-guaranteed approximations, allowing a systematic tradeoff of solution quality for time. Experimental results show orders-of-magnitude improvements in performance when compared with previous global optimal algorithms.

1. INTRODUCTION

Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling sequential decision making in teams operating under uncertainty [9, 4, 1, 2, 13]. The uncertainty arises on account of nondeterminism in the outcomes of actions and because the world state may be only partially (or incorrectly) observable. Unfortunately, as shown by Bernstein et al. [3], the problem of finding the optimal joint policy for general distributed POMDPs is NEXP-Complete. Researchers have attempted two different types of approaches to solving these models. The first category consists of highly efficient approximate techniques that may not reach globally optimal solutions [2, 9, 11]. The key problem with these techniques has been their inability to provide any guarantees on the quality of the solution. In contrast, the second, less popular category of approaches has focused on global optimality [13, 5, 10]. Though these approaches obtain optimal solutions, they typically consider only two agents. Furthermore, they fail to exploit structure in the interactions of the agents and hence are severely hampered with respect to scalability when considering more than two
agents. To address these problems with the existing approaches, we propose approximate techniques that provide guarantees on the quality of the solution while focusing on a network of more than two agents. We first propose the basic SPIDER (Search for Policies In Distributed EnviRonments) algorithm. There are two key novel features in SPIDER: (i) it is a branch-and-bound heuristic search technique that uses an MDP-based heuristic function to search for an optimal joint policy; (ii) it exploits the network structure of agents by organizing agents into a Depth First Search (DFS) pseudo tree and taking advantage of the independence in the different branches of the DFS tree. We then provide three enhancements to improve the efficiency of the basic SPIDER algorithm while providing guarantees on the quality of the solution. The first enhancement uses abstraction for speedup, without sacrificing solution quality: it initially performs branch-and-bound search on abstract policies and then extends them to complete policies. The second enhancement obtains speedups by sacrificing solution quality, but only within an input parameter that specifies the tolerable expected-value difference from the optimal solution. The third enhancement also bounds the search for efficiency, but with a tolerance parameter provided as a percentage of the optimal. We experimented with the sensor network domain presented in Nair et al.
[10], a domain representative of an important class of problems with networks of agents working in uncertain environments. In our experiments, we illustrate that SPIDER dominates an existing global optimal approach called GOA [10], the only known global optimal algorithm with demonstrated experimental results for more than two agents. Furthermore, we demonstrate that abstraction improves the performance of SPIDER significantly (while providing optimal solutions). We finally demonstrate a key feature of SPIDER: by utilizing the approximation enhancements, it enables principled tradeoffs of run-time versus solution quality.

2. DOMAIN: DISTRIBUTED SENSOR NETS

3. BACKGROUND

3.1 Model: Network Distributed POMDP

The ND-POMDP model was introduced in [10], motivated by domains such as the sensor networks introduced in Section 2. It is defined as the tuple (S, A, P, Ω, O, R, b), where S = ×_{1≤i≤n} S_i × S_u is the set of world states. S_i refers to the set of local states of agent i and S_u is the set of unaffectable states. An unaffectable state refers to the part of the world state that cannot be affected by the agents' actions, e.g., environmental factors like target locations that no agent can control. A = ×_{1≤i≤n} A_i is the set of joint actions, where A_i is the set of actions for agent i. ND-POMDP assumes transition independence, where the transition function is defined as P(s, a, s') = P_u(s_u, s'_u) · Π_{1≤i≤n} P_i(s_i, s_u, a_i, s'_i), where a = ⟨a_1, ..., a_n⟩ is the joint action performed in state s = ⟨s_1, ..., s_n, s_u⟩ and s' = ⟨s'_1, ..., s'_n, s'_u⟩ is the resulting state. Ω = ×_{1≤i≤n} Ω_i is the set of joint observations, where Ω_i is the set of observations for agent i.
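The transition-independence assumption above factors the joint transition function into an unaffectable-state term and one local term per agent. A minimal numerical sketch of that factorization, for two agents with binary local states (all probability tables below are hypothetical; only the factored form follows the model):

```python
from itertools import product

# P_u(s_u' | s_u): unaffectable-state dynamics, beyond the agents' control.
# Keys are (s_u, s_u_next); values are hypothetical probabilities.
P_u = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.4, (1, 1): 0.6}

def P_local(s_i_next, s_i, s_u, a_i):
    """P_i(s_i' | s_i, s_u, a_i): a toy local transition for agent i.
    Action a_i = 1 flips the local state with probability 0.8."""
    stay = 0.8 if a_i == 0 else 0.2
    return stay if s_i_next == s_i else 1.0 - stay

def P_joint(s, a, s_next):
    """Transition independence:
    P(s, a, s') = P_u(s_u, s_u') * prod_i P_i(s_i, s_u, a_i, s_i')."""
    (s1, s2, s_u), (a1, a2) = s, a
    (s1n, s2n, s_un) = s_next
    return (P_u[(s_u, s_un)]
            * P_local(s1n, s1, s_u, a1)
            * P_local(s2n, s2, s_u, a2))

# The factored form is still a proper distribution over successor states:
total = sum(P_joint((0, 0, 0), (0, 1), (s1n, s2n, s_un))
            for s1n, s2n, s_un in product([0, 1], repeat=3))
```

Because each factor normalizes independently, `total` sums to 1; this independence is exactly what lets SPIDER evaluate agents one branch of the DFS tree at a time.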
Observational independence is assumed in ND-POMDPs, i.e., the joint observation function is defined as O(s, a, ω) = Π_{1≤i≤n} O_i(s_i, s_u, a_i, ω_i), where s = ⟨s_1, ..., s_n, s_u⟩ is the world state that results from the agents performing a = ⟨a_1, ..., a_n⟩ in the previous state, and ω = ⟨ω_1, ..., ω_n⟩ ∈ Ω is the observation received in state s. This implies that each agent's observation depends only on the unaffectable state, its local action and its resulting local state. The reward function R is defined as R(s, a) = Σ_l R_l(s_{l1}, ..., s_{lr}, s_u, ⟨a_{l1}, ..., a_{lr}⟩), where each l can refer to any sub-group of agents and r = |l|. Based on the reward function, an interaction hypergraph is constructed: a hyper-link l exists between a subset of agents for each R_l that comprises R. The interaction hypergraph is defined as G = (Ag, E), where the agents Ag are the vertices and E = {l | l ⊆ Ag and R_l is a component of R} are the edges. The initial belief state (the distribution over the initial state), b, is defined as b(s) = b_u(s_u) · Π_{1≤i≤n} b_i(s_i), where b_u and b_i refer to the distributions over the initial unaffectable state and agent i's initial local state, respectively. The goal in an ND-POMDP is to compute the joint policy π = ⟨π_1, ..., π_n⟩ that maximizes the team's expected reward over a finite horizon T, starting from the belief state b. An ND-POMDP is similar to an n-ary Distributed Constraint Optimization Problem (DCOP) [8, 12], where the variable at each node represents the policy selected by an individual agent, π_i, with the domain of the variable being the set of all local policies, Π_i. A reward component R_l with |l| = 1 can be thought of as a local constraint, while a reward component R_l with |l| > 1 corresponds to a non-local constraint in the constraint graph.

3.2 Algorithm: Global Optimal Algorithm (GOA)

In previous work, GOA has been defined as a global optimal algorithm for ND-POMDPs [10]. We will use GOA in our
experimental comparisons, since GOA is a state-of-the-art global optimal algorithm, and in fact the only one with experimental results available for networks of more than two agents. GOA borrows from a global optimal DCOP algorithm called DPOP [12], and its message passing follows that of DPOP. The first phase is UTIL propagation, where utility messages, in this case the values of policies, are passed up from the leaves to the root. The value of a policy at an agent is defined as the sum of the best-response values from its children and the joint policy reward associated with the parent's policy. Thus, given a policy for a parent node, GOA requires an agent to iterate through all its policies, finding the best-response policy and returning its value to the parent; at the parent node, to find the best policy, an agent requires its children to return their best responses to each of its policies. This UTIL propagation process is repeated at each level in the tree, until the root has exhausted all its policies. In the second phase, VALUE propagation, the optimal policies are passed down from the root to the leaves. GOA takes advantage of the local interactions in the interaction graph by pruning out unnecessary joint policy evaluations (those associated with nodes not connected directly in the tree). Since the interaction graph captures all the reward interactions among agents, and since the algorithm iterates through all the relevant joint policy evaluations, it yields a globally optimal solution.

4. SPIDER

4.1 Outline of SPIDER

4.2 MDP based heuristic function

4.3 Abstraction

4.4 Value ApproXimation (VAX)
4.5 Percentage ApproXimation (PAX)

4.6 Theoretical Results

5. EXPERIMENTAL RESULTS

6. SUMMARY AND RELATED WORK

This paper presents four algorithms, SPIDER, SPIDER-ABS, PAX and VAX, that provide a novel combination of features for policy search in distributed POMDPs: (i) exploiting agent interaction structure given a network of agents (i.e., easier scale-up to a larger number of agents); (ii) using branch-and-bound search with an MDP-based heuristic function; (iii) utilizing abstraction to improve runtime performance without sacrificing solution quality; (iv) providing a priori percentage bounds on the quality of solutions using PAX; and (v) providing expected-value bounds on the quality of solutions using VAX. These features allow for a systematic tradeoff of solution quality for run-time in networks of agents operating under uncertainty. Experimental results show orders-of-magnitude improvements in performance over previous global optimal algorithms.

Researchers have typically employed two types of techniques for solving distributed POMDPs. The first set of techniques computes global optimal solutions. Hansen et al. [5] present an algorithm based on dynamic programming and iterated elimination of dominated policies that provides optimal solutions for distributed POMDPs. Szer et al.
[13] provide an optimal heuristic search method for solving decentralized POMDPs. This algorithm is based on the combination of a classical heuristic search algorithm, A*, and decentralized control theory. The key differences between SPIDER and MAA* are: (a) the enhancements to SPIDER (VAX and PAX) provide quality-guaranteed approximations, while MAA* is a global optimal algorithm and hence involves significant computational complexity; (b) due to MAA*'s inability to exploit interaction structure, it was illustrated only with two agents, whereas SPIDER has been illustrated for networks of agents; and (c) SPIDER explores the joint policy one agent at a time, while MAA* expands it one time step at a time (simultaneously for all the agents). The second set of techniques seeks approximate policies. Emery-Montemerlo et al. [4] approximate POSGs as a series of one-step Bayesian games, using heuristics to approximate future value and trading off limited lookahead for computational efficiency, resulting in locally optimal policies (with respect to the selected heuristic). The JESP algorithm of Nair et al. [9] uses dynamic programming to reach a locally optimal solution for finite-horizon decentralized POMDPs. Peshkin et al. [11] and Bernstein et al.
[2] are examples of policy search techniques that search for locally optimal policies. Though all the above techniques improve the efficiency of policy computation considerably, they are unable to provide error bounds on the quality of the solution. This aspect of quality bounds differentiates SPIDER from all the above techniques.

Acknowledgements. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA), through the Department of the Interior, NBC, Acquisition Services Division, under Contract No. NBCHD030010. The views and conclusions contained in this document are those of the authors, and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. Government.","lvl-4":"Letting loose a SPIDER on a network of POMDPs: Generating quality guaranteed policies

ABSTRACT

Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the significant complexity of solving distributed POMDPs, particularly as we scale up the number of agents, one popular approach has focused on approximate solutions. Though this approach is efficient, its algorithms do not provide any guarantees on solution quality. A second, less popular approach focuses on global optimality, but typical results are available only for two agents, and at considerable computational cost. This paper overcomes the limitations of both these approaches by providing SPIDER, a novel combination of three key features for policy generation in distributed POMDPs: (i) it exploits agent interaction structure given a network of agents (i.e.
allowing easier scale-up to a larger number of agents); (ii) it uses a combination of heuristics to speed up policy search; and (iii) it allows quality-guaranteed approximations, allowing a systematic tradeoff of solution quality for time. Experimental results show orders-of-magnitude improvements in performance when compared with previous global optimal algorithms.

1. INTRODUCTION

Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling sequential decision making in teams operating under uncertainty [9, 4, 1, 2, 13]. The uncertainty arises on account of nondeterminism in the outcomes of actions and because the world state may be only partially (or incorrectly) observable. Unfortunately, as shown by Bernstein et al. [3], the problem of finding the optimal joint policy for general distributed POMDPs is NEXP-Complete. Researchers have attempted two different types of approaches to solving these models. The first category consists of highly efficient approximate techniques that may not reach globally optimal solutions [2, 9, 11]. The key problem with these techniques has been their inability to provide any guarantees on the quality of the solution. In contrast, the second, less popular category of approaches has focused on global optimality [13, 5, 10]. Though these approaches obtain optimal solutions, they typically consider only two agents. Furthermore, they fail to exploit structure in the interactions of the agents and hence are severely hampered with respect to scalability when considering more than two agents. To address these problems with the existing approaches, we propose approximate techniques that provide guarantees on the quality of the solution while focusing on a network of more than two agents. We first propose the basic SPIDER (Search for Policies In Distributed EnviRonments) algorithm. We then provide three enhancements to improve the efficiency of the
basic SPIDER algorithm while providing guarantees on the quality of the solution. The first enhancement uses abstraction for speedup, without sacrificing solution quality: it initially performs branch-and-bound search on abstract policies and then extends them to complete policies. The second enhancement obtains speedups by sacrificing solution quality, but only within an input parameter that specifies the tolerable expected-value difference from the optimal solution. The third enhancement also bounds the search for efficiency, but with a tolerance parameter provided as a percentage of the optimal. We experimented with the sensor network domain presented in Nair et al. [10], a domain representative of an important class of problems with networks of agents working in uncertain environments. In our experiments, we illustrate that SPIDER dominates an existing global optimal approach called GOA [10], the only known global optimal algorithm with demonstrated experimental results for more than two agents. Furthermore, we demonstrate that abstraction improves the performance of SPIDER significantly (while providing optimal solutions). We finally demonstrate a key feature of SPIDER: by utilizing the approximation enhancements, it enables principled tradeoffs of run-time versus solution quality.

3. BACKGROUND

3.1 Model: Network Distributed POMDP

The ND-POMDP model was introduced in [10], motivated by domains such as the sensor networks introduced in Section 2. S_i refers to the set of local states of agent i and S_u is the set of unaffectable states. An unaffectable state refers to the part of the world state that cannot be affected by the agents' actions, e.g.
environmental factors like target locations that no agent can control. This implies that each agent's observation depends only on the unaffectable state, its local action and its resulting local state. The reward function R is defined as R(s, a) = Σ_l R_l(s_{l1}, ..., s_{lr}, s_u, ⟨a_{l1}, ..., a_{lr}⟩), where each l can refer to any sub-group of agents and r = |l|. Based on the reward function, an interaction hypergraph is constructed: a hyper-link l exists between a subset of agents for each R_l that comprises R. The interaction hypergraph is defined as G = (Ag, E), where the agents Ag are the vertices and E = {l | l ⊆ Ag and R_l is a component of R} are the edges. The goal in an ND-POMDP is to compute the joint policy π = ⟨π_1, ..., π_n⟩ that maximizes the team's expected reward over a finite horizon T, starting from the belief state b. An ND-POMDP is similar to an n-ary Distributed Constraint Optimization Problem (DCOP) [8, 12], where the variable at each node represents the policy selected by an individual agent, π_i, with the domain of the variable being the set of all local policies, Π_i.

3.2 Algorithm: Global Optimal Algorithm (GOA)

In previous work, GOA has been defined as a global optimal algorithm for ND-POMDPs [10]. We will use GOA in our experimental comparisons, since GOA is a state-of-the-art global optimal algorithm, and in fact the only one with experimental results available for networks of more than two agents. GOA borrows from a global optimal DCOP algorithm called DPOP [12], and its message passing follows that of DPOP. The first phase is UTIL propagation, where utility messages, in this case the values of policies, are passed up from the leaves to the root. The value of a policy at an agent is defined as the sum of the best-response values from its children and the joint policy reward associated with the parent's policy. This UTIL propagation process is repeated at each level in the tree, until the root has exhausted all its policies. In the second
phase, VALUE propagation, the optimal policies are passed down from the root to the leaves. GOA takes advantage of the local interactions in the interaction graph by pruning out unnecessary joint policy evaluations (those associated with nodes not connected directly in the tree). Since the interaction graph captures all the reward interactions among agents, and since the algorithm iterates through all the relevant joint policy evaluations, it yields a globally optimal solution.

6. SUMMARY AND RELATED WORK

These features allow for a systematic tradeoff of solution quality for run-time in networks of agents operating under uncertainty. Experimental results show orders-of-magnitude improvements in performance over previous global optimal algorithms. Researchers have typically employed two types of techniques for solving distributed POMDPs. The first set of techniques computes global optimal solutions. Hansen et al. [5] present an algorithm based on dynamic programming and iterated elimination of dominated policies that provides optimal solutions for distributed POMDPs. Szer et al. [13] provide an optimal heuristic search method for solving decentralized POMDPs. This algorithm is based on the combination of a classical heuristic search algorithm, A*, and decentralized control theory. The key differences between SPIDER and MAA* are: (a) the enhancements to SPIDER (VAX and PAX) provide quality-guaranteed approximations, while MAA* is a global optimal algorithm and hence involves significant computational complexity; (b) due to MAA*'s inability to exploit interaction structure, it was illustrated only with two agents, whereas SPIDER has been illustrated for networks of agents; and (c) SPIDER explores the joint policy one agent at a time, while MAA* expands it one time step at a time (simultaneously for all the agents). The second set of techniques seeks approximate policies. Nair et al.
[9]'s JESP algorithm uses dynamic programming to reach a locally optimal solution for finite-horizon decentralized POMDPs. Peshkin et al. [11] and Bernstein et al. [2] are examples of policy search techniques that search for locally optimal policies. Though all the above techniques improve the efficiency of policy computation considerably, they are unable to provide error bounds on the quality of the solution. This aspect of quality bounds differentiates SPIDER from all the above techniques.

Acknowledgements. NBCHD030010.","lvl-2":"Letting loose a SPIDER on a network of POMDPs: Generating quality guaranteed policies

ABSTRACT

Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are a popular approach for modeling multi-agent systems acting in uncertain domains. Given the significant complexity of solving distributed POMDPs, particularly as we scale up the number of agents, one popular approach has focused on approximate solutions. Though this approach is efficient, its algorithms do not provide any guarantees on solution quality. A second, less popular approach focuses on global optimality, but typical results are available only for two agents, and at considerable computational cost. This paper overcomes the limitations of both these approaches by providing SPIDER, a novel combination of three key features for policy generation in distributed POMDPs: (i) it exploits agent interaction structure given a network of agents (i.e.
allowing easier scale-up to a larger number of agents); (ii) it uses a combination of heuristics to speed up policy search; and (iii) it allows quality-guaranteed approximations, allowing a systematic tradeoff of solution quality for time. Experimental results show orders-of-magnitude improvements in performance when compared with previous global optimal algorithms.

1. INTRODUCTION

Distributed Partially Observable Markov Decision Problems (Distributed POMDPs) are emerging as a popular approach for modeling sequential decision making in teams operating under uncertainty [9, 4, 1, 2, 13]. The uncertainty arises on account of nondeterminism in the outcomes of actions and because the world state may be only partially (or incorrectly) observable. Unfortunately, as shown by Bernstein et al. [3], the problem of finding the optimal joint policy for general distributed POMDPs is NEXP-Complete. Researchers have attempted two different types of approaches to solving these models. The first category consists of highly efficient approximate techniques that may not reach globally optimal solutions [2, 9, 11]. The key problem with these techniques has been their inability to provide any guarantees on the quality of the solution. In contrast, the second, less popular category of approaches has focused on global optimality [13, 5, 10]. Though these approaches obtain optimal solutions, they typically consider only two agents. Furthermore, they fail to exploit structure in the interactions of the agents and hence are severely hampered with respect to scalability when considering more than two agents. To address these problems with the existing approaches, we propose approximate techniques that provide guarantees on the quality of the solution while focusing on a network of more than two agents. We first propose the basic SPIDER (Search for Policies In Distributed EnviRonments) algorithm. There are two key novel features in SPIDER: (i) it is a branch-and-bound
heuristic search technique that uses an MDP-based heuristic function to search for an optimal joint policy; (ii) it exploits the network structure of agents by organizing agents into a Depth First Search (DFS) pseudo tree and takes advantage of the independence in the different branches of the DFS tree. We then provide three enhancements to improve the efficiency of the basic SPIDER algorithm while providing guarantees on the quality of the solution. The first enhancement uses abstraction for speedup, without sacrificing solution quality: it initially performs branch-and-bound search on abstract policies and then extends them to complete policies. The second enhancement obtains speedups by sacrificing solution quality, but only within an input parameter that specifies the tolerable expected-value difference from the optimal solution. The third enhancement also bounds the search for efficiency, but with a tolerance parameter provided as a percentage of the optimal. We experimented with the sensor network domain presented in Nair et al.
[10], a domain representative of an important class of problems with networks of agents working in uncertain environments. In our experiments, we illustrate that SPIDER dominates an existing global optimal approach called GOA [10], the only known global optimal algorithm with demonstrated experimental results for more than two agents. Furthermore, we demonstrate that abstraction improves the performance of SPIDER significantly (while providing optimal solutions). We finally demonstrate a key feature of SPIDER: by utilizing the approximation enhancements, it enables principled tradeoffs of run-time versus solution quality.

2. DOMAIN: DISTRIBUTED SENSOR NETS

Distributed sensor networks are a large, important class of domains that motivate our work. This paper focuses on a set of target tracking problems that arise in certain types of sensor networks [6], first introduced in [10]. Figure 1 shows a specific problem instance within this type, consisting of three sensors. Here, each sensor node can scan in one of four directions: North, South, East or West (see Figure 1). To track a target and obtain the associated reward, two sensors with overlapping scanning areas must coordinate by scanning the same area simultaneously. In Figure 1, to track a target in Loc11, sensor1 needs to scan 'East' and sensor2 needs to scan 'West' simultaneously. Thus, sensors have to act in a coordinated fashion. We assume that there are two independent targets and that each target's movement is uncertain and unaffected by the sensor agents. Based on the area it is scanning, each sensor receives observations that can have false positives and false negatives. The sensors' observations and transitions are independent of each other's actions; e.g., the observations that sensor1 receives are independent of sensor2's actions. Each agent incurs a cost for scanning whether the target is present or not, but no cost if it turns off. Given the sensors' observational uncertainty, the targets'
uncertain transitions and the distributed nature of the sensor nodes, these sensor nets provide a useful domain for applying distributed POMDP models.

Figure 1: A 3-chain sensor configuration

3. BACKGROUND

3.1 Model: Network Distributed POMDP

The ND-POMDP model was introduced in [10], motivated by domains such as the sensor networks introduced in Section 2. It is defined as the tuple (S, A, P, Ω, O, R, b), where S = ×_{1≤i≤n} S_i × S_u is the set of world states. S_i refers to the set of local states of agent i and S_u is the set of unaffectable states. An unaffectable state refers to the part of the world state that cannot be affected by the agents' actions, e.g., environmental factors like target locations that no agent can control. A = ×_{1≤i≤n} A_i is the set of joint actions, where A_i is the set of actions for agent i. ND-POMDP assumes transition independence, where the transition function is defined as P(s, a, s') = P_u(s_u, s'_u) · Π_{1≤i≤n} P_i(s_i, s_u, a_i, s'_i), where a = ⟨a_1, ..., a_n⟩ is the joint action performed in state s = ⟨s_1, ..., s_n, s_u⟩ and s' = ⟨s'_1, ..., s'_n, s'_u⟩ is the resulting state. Ω = ×_{1≤i≤n} Ω_i is the set of joint observations, where Ω_i is the set of observations for agent i.
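The coordination structure of the sensor domain described above (two sensors must simultaneously scan a shared location to earn the tracking reward, and every scan incurs a cost) can be sketched as a toy reward component R_l over one hyper-link. The sensor and location names follow Figure 1, but the numeric values (tracking reward 50, scan cost 1) are hypothetical:

```python
SCAN_COST = 1.0       # hypothetical cost per scanning sensor, target or not
TRACK_REWARD = 50.0   # hypothetical reward for a coordinated track

# Which shared location each (sensor, action) pair covers; 'off' covers none.
COVERS = {
    ("sensor1", "East"): "Loc11", ("sensor2", "West"): "Loc11",
    ("sensor2", "East"): "Loc21", ("sensor3", "West"): "Loc21",
}

def reward(target_loc, actions):
    """One R_l component: scan costs, plus a bonus only when two sensors
    cover the target's location simultaneously."""
    r = -SCAN_COST * sum(1 for a in actions.values() if a != "off")
    trackers = [s for (s, a) in actions.items()
                if COVERS.get((s, a)) == target_loc]
    if len(trackers) >= 2:
        r += TRACK_REWARD
    return r
```

For a target in Loc11, the coordinated joint action {sensor1: East, sensor2: West} earns the tracking reward minus two scan costs, while an uncoordinated pair of scans pays the costs and earns nothing; it is precisely this kind of non-local reward component (with |l| > 1) that forms a hyper-link in the interaction hypergraph defined below.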
Observational independence is assumed in ND-POMDPs, i.e., the joint observation function is defined as O(s, a, ω) = Π1≤i≤n Oi(si, su, ai, ωi), where s = (s1, ..., sn, su) is the world state that results from the agents performing a = (a1, ..., an) in the previous state, and ω = (ω1, ..., ωn) ∈ Ω is the observation received in state s.
This implies that each agent's observation depends only on the unaffectable state, its local action and its resulting local state.
The reward function, R, is defined as R(s, a) = Σl Rl(sl1, ..., slr, su, (al1, ..., alr)), where each l could refer to any sub-group of agents and r = |l|.
Based on the reward function, an interaction hypergraph is constructed.
A hyper-link, l, exists between a subset of agents for each Rl that comprises R.
The interaction hypergraph is defined as G = (Ag, E), where the agents, Ag, are the vertices and E = {l | l ⊆ Ag and Rl is a component of R} are the edges.
The initial belief state (distribution over the initial state), b, is defined as b(s) = bu(su) · Π1≤i≤n bi(si), where bu and bi refer to the distribution over the initial unaffectable state and agent i's initial belief state, respectively.
The goal in ND-POMDP is to compute the joint policy π = (π1, ..., πn) that maximizes the team's expected reward over a finite horizon T starting from the belief state b.
An ND-POMDP is similar to an n-ary Distributed Constraint Optimization Problem (DCOP) [8, 12], where the variable at each node represents the policy selected by an individual agent, πi, with the domain of the variable being the set of all local policies, Πi.
The reward component Rl where |l| = 1 can be thought of as a local constraint, while a reward component Rl where |l| > 1 corresponds to a non-local constraint in the constraint graph.
3.2 Algorithm: Global Optimal Algorithm (GOA)
In previous work, GOA has been defined as a global optimal algorithm for ND-POMDPs [10].
We will use GOA in our
experimental comparisons, since GOA is a state-of-the-art global optimal algorithm, and in fact the only one with experimental results available for networks of more than two agents.
GOA borrows from a global optimal DCOP algorithm called DPOP [12], and its message passing follows that of DPOP.
The first phase is UTIL propagation, where utility messages, in this case values of policies, are passed up from the leaves to the root.
The value for a policy at an agent is defined as the sum of the best response values from its children and the joint policy reward associated with the parent policy.
Thus, given a policy for a parent node, GOA requires an agent to iterate through all its policies, finding the best response policy and returning its value to the parent; at the parent node, to find the best policy, an agent requires its children to return their best responses to each of its policies.
This UTIL propagation process is repeated at each level in the tree, until the root exhausts all its policies.
In the second phase, VALUE propagation, the optimal policies are passed down from the root to the leaves.
GOA takes advantage of the local interactions in the interaction graph by pruning out unnecessary joint policy evaluations (those associated with nodes not connected directly in the tree).
Since the interaction graph captures all the reward interactions among agents, and as the algorithm iterates through all the relevant joint policy evaluations, it yields a globally optimal solution.
4. SPIDER
As mentioned in Section 3.1, an ND-POMDP can be treated as a DCOP, where the goal is to compute a joint policy that maximizes the overall joint reward.
The brute-force technique for computing an optimal policy would be to examine the expected values of all possible joint policies.
The key idea in SPIDER is to avoid computing expected values for the entire space of joint policies by utilizing upper bounds on the expected values of
policies and the interaction structure of the agents.
Akin to some of the algorithms for DCOP [8, 12], SPIDER has a pre-processing step that constructs a DFS tree corresponding to the given interaction structure.
Note that these DFS trees are pseudo trees [12] that allow links between ancestors and children.
We employ the Maximum Constrained Node (MCN) heuristic used in the DCOP algorithm ADOPT [8]; however, other heuristics (such as the MLSP heuristic from [7]) can also be employed.
The MCN heuristic tries to place agents with a greater number of constraints at the top of the tree.
This tree governs how the search for the optimal joint policy proceeds in SPIDER.
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
The algorithms presented in this paper are easily extendable to hyper-trees; however, for expository purposes, we assume binary trees.
SPIDER is an algorithm for centralized planning and distributed execution in distributed POMDPs.
In this paper, we employ the following notation to denote policies and expected values:
Ancestors(i): agents from i to the root (not including i).
Tree(i): agents in the sub-tree (not including i) for which i is the root.
πroot+: joint policy of all agents.
πi+: joint policy of all agents in Tree(i) ∪ i.
πi−: joint policy of the agents that are in Ancestors(i).
πi: policy of the ith agent.
v̂[πi, πi−]: upper bound on the expected value for πi+ given πi and the policies of the ancestor agents, i.e., πi−.
v̂j[πi, πi−]: upper bound on the expected value for πi+ from the jth child.
v[πi, πi−]: expected value for πi given the policies of the ancestor agents, πi−.
v[πi+, πi−]: expected value for πi+ given the policies of the ancestor agents, πi−.
vj[πi+, πi−]: expected value for πi+ from the jth child.
Figure 2: Execution of SPIDER, an example
4.1 Outline of SPIDER
SPIDER is based on the idea of branch and bound search, where the nodes in the search tree represent partial/complete joint policies.
Figure 2 shows an example search tree for the SPIDER algorithm, using an example of the three agent chain.
Before SPIDER begins its search, we create a DFS tree (i.e., pseudo tree) from the three agent chain, with the middle agent as the root of this tree.
SPIDER exploits the structure of this DFS tree while engaging in its search.
Note that in our example figure, each agent is assigned a policy with T = 2.
Thus, each rounded rectangle (search tree node) indicates a partial/complete joint policy, a rectangle indicates an agent, and the ovals internal to an agent show its policy.
The heuristic or actual expected value for a joint policy is indicated in the top right corner of the rounded rectangle.
If the number is italicized and underlined, the actual expected value of the joint policy is provided.
SPIDER begins with no policy assigned to any of the agents (shown in level 1 of the search tree).
Level 2 of the search tree indicates that the joint policies are sorted based on upper bounds computed for the root agent's policies.
Level 3 shows one SPIDER search node with a complete joint policy (a policy assigned to each of the agents).
The expected value for this joint policy is used to prune out the nodes in level 2 (the ones with upper bounds < 234).
When creating policies for each non-leaf agent i, SPIDER potentially performs two steps:
1. Obtaining upper bounds and sorting: In this step, agent i
computes upper bounds on the expected values, v̂[πi, πi−], of the joint policies πi+ corresponding to each of its policies πi and fixed ancestor policies.
An MDP based heuristic is used to compute these upper bounds on the expected values; a detailed description of this MDP heuristic is provided in Section 4.2.
All policies of agent i, Πi, are then sorted based on these upper bounds (also referred to as heuristic values henceforth) in descending order.
Exploration of these policies (in step 2 below) is performed in this descending order.
As indicated in level 2 of the search tree (of Figure 2), all the joint policies are sorted based on the heuristic values, indicated in the top right corner of each joint policy.
The intuition behind sorting and then exploring policies in descending order of upper bounds is that policies with higher upper bounds could yield joint policies with higher expected values.
2. Exploration and pruning: Exploration implies computing the best response joint policy πi+,∗ corresponding to the fixed ancestor policies of agent i, πi−.
This is performed by iterating through all policies of agent i, i.e., Πi, and summing two quantities for each policy: (i) the best response for all of i's children (obtained by performing steps 1 and 2 at each of the child nodes); (ii) the expected value obtained by i for the fixed policies of its ancestors.
Thus, exploration of a policy πi yields the actual expected value of a joint policy πi+, represented as v[πi+, πi−].
The policy with the highest expected value is the best response policy.
Pruning refers to avoiding exploring all policies (or computing expected values) at agent i by using the current best expected value, vmax[πi+, πi−].
Henceforth, this vmax[πi+, πi−] will be referred to as the threshold.
A policy πi need not be explored if the upper bound for that policy, v̂[πi, πi−], is less than the threshold.
This is because the expected value of the best joint policy attainable for that policy will be less than the threshold.
On the other hand, when considering a leaf agent, SPIDER computes the best response policy (and consequently its expected value) corresponding to the fixed policies of its ancestors, πi−.
This is accomplished by computing expected values for each of the policies (corresponding to fixed policies of ancestors) and selecting the highest expected value policy.
In Figure 2, SPIDER assigns best response policies to leaf agents at level 3.
The policy for the left leaf agent is to perform action "East" at each time step in the policy, while the policy for the right leaf agent is to perform "Off" at each time step.
These best response policies from the leaf agents yield an actual expected value of 234 for the complete joint policy.
Algorithm 1 provides the pseudo code for SPIDER.
This algorithm outputs the best joint policy, πi+,∗ (with an expected value greater than threshold), for the agents in Tree(i).
Lines 3-8 compute the best response policy of a leaf agent i, while lines 9-23 compute the best response joint policy for agents in Tree(i). This
best response computation for a non-leaf agent i includes: (a) sorting of policies (in descending order) based on heuristic values on line 11; (b) computing best response policies at each of the children for fixed policies of agent i in lines 16-20; and (c) maintaining the best expected value and joint policy found so far in lines 21-23.

Algorithm 1 SPIDER (i, πi−, threshold)
1: πi+,∗ ← null
2: Πi ← GET-ALL-POLICIES (horizon, Ai, Ωi)
3: if IS-LEAF (i) then
4:   for all πi ∈ Πi do
5:     v[πi, πi−] ← JOINT-REWARD (πi, πi−)
6:     if v[πi, πi−] > threshold then
7:       πi+,∗ ← πi
8:       threshold ← v[πi, πi−]
9: else
10:  children ← CHILDREN (i)
11:  Π̂i ← UPPER-BOUND-SORT (i, Πi, πi−)
12:  for all πi ∈ Π̂i do
13:    π̃i+ ← πi
14:    if v̂[πi, πi−] < threshold then
15:      Go to line 12
16:    for all j ∈ children do
17:      jThres ← threshold − v[πi, πi−] − Σk∈children,k≠j v̂k[πi, πi−]
18:      πj+,∗ ← SPIDER (j, πi ∪ πi−, jThres)
19:      π̃i+ ← π̃i+ ∪ πj+,∗
20:      v̂j[πi, πi−] ← v[πj+,∗, πi ∪ πi−]
21:    if v[π̃i+, πi−] > threshold then
22:      threshold ← v[π̃i+, πi−]
23:      πi+,∗ ← π̃i+
24: return πi+,∗

Algorithm 2 UPPER-BOUND-SORT (i, Πi, πi−)
1: children ← CHILDREN (i)
2: Π̂i ← null /* stores the sorted list */
3: for all πi ∈ Πi do
4:   v̂[πi, πi−] ← JOINT-REWARD (πi, πi−)
5:   for all j ∈ children do
6:     v̂j[πi, πi−] ← UPPER-BOUND (i, j, πi ∪ πi−)
7:     v̂[πi, πi−] +← v̂j[πi, πi−]
8:   Π̂i ← INSERT-INTO-SORTED (πi, Π̂i)
9: return Π̂i

Algorithm 2 provides the pseudo code for sorting policies based on the upper bounds on the expected values of joint policies.
The expected value for an agent i consists of two parts: the value obtained from its ancestors and the value obtained from its children.
Line 4 computes the expected value obtained from the ancestors of the agent (using the JOINT-REWARD function), while lines 5-7 compute the heuristic value from the children.
The sum of these two parts yields an upper bound on the expected value for agent i, and line 8 of the algorithm sorts the policies based on these upper bounds.
4.2 MDP based heuristic function
The heuristic function quickly provides an upper bound on the expected value obtainable from the agents in Tree(i).
The sub-tree of agents is a distributed POMDP in itself, and the idea here is to construct a centralized MDP corresponding to the (sub-tree) distributed POMDP and obtain the expected value of the optimal policy for this centralized MDP.
To restate this in terms of the agents in the DFS tree interaction structure: we assume full observability for the agents in Tree(i) and, for fixed policies of the agents in {Ancestors(i) ∪ i}, we compute the joint value v̂[πi+, πi−].
We use the following notation for presenting the equations for computing upper bounds/heuristic values (for agents i and k): let Ei− denote the set of links between agents in {Ancestors(i) ∪ i} and Tree(i), and Ei+ denote the set of links between agents in Tree(i).
Also, if l ∈ Ei−, then l1 is the agent in {Ancestors(i) ∪ i} and l2 is the agent in Tree(i).
Depending on the location of agent k in the agent tree we have the following cases:
The value function for an agent i executing the joint policy πi+ at time η − 1 is provided by the equation:

Algorithm 3 UPPER-BOUND (i, j, πi ∪ πi−)
1: val ← 0
2: for all l ∈ Ej− ∪ Ej+ do
3:   if l ∈ Ej− then πl1 ← φ
4:   for all s0l do
5:     val +←
startBel[s0l] · UPPER-BOUND-TIME (i, s0l, j, πl1, ⟨⟩)
6: return val

Algorithm 4 UPPER-BOUND-TIME (i, stl, j, πl1, ω̃tl1)
1: maxVal ← −∞
2: for all al1, al2 do
3:   if l ∈ Ei− and l ∈ Ej− then al1 ← πl1(ω̃tl1)
4:   val ← GET-REWARD (stl, al1, al2)
5:   if t < πi.horizon − 1 then
6:     for all st+1l, ωt+1l1 do
7:       futVal ← ptu · p̂tl1 · p̂tl2
8:       futVal ∗← UPPER-BOUND-TIME (i, st+1l, j, πl1, ⟨ω̃tl1, ωt+1l1⟩)
9:       val +← futVal
10:  if val > maxVal then maxVal ← val
11: return maxVal

The upper bound on the expected value for a link is computed by modifying Equation 3 to reflect the full observability assumption.
This involves removing the observational probability term for agents in Tree(i) and maximizing the future value v̂ηl over the actions of those agents (in Tree(i)).
Thus, the equation for the computation of the upper bound on a link l is as follows:
Algorithm 3 and Algorithm 4 provide the procedure for computing the upper bound for child j of agent i, using the equations described above.
While Algorithm 4 computes the upper bound on a link given the starting state, Algorithm 3 sums the upper bound values computed over each of the links in Ei− ∪ Ei+.
4.3 Abstraction

Algorithm 5 SPIDER-ABS (i, πi−, threshold)
1: πi+,∗ ← null
2: Πi ← GET-POLICIES (⟨⟩, 1)
3: if IS-LEAF (i) then
4:   for all πi ∈ Πi do
5:     absHeuristic ← GET-ABS-HEURISTIC (πi, πi−)
6:     absHeuristic ∗← (timeHorizon − πi.horizon)
7:     if πi.horizon = timeHorizon and πi.absNodes = 0 then
8:       v[πi, πi−] ← JOINT-REWARD (πi, πi−)
9:       if v[πi, πi−] > threshold then
10:        πi+,∗ ← πi; threshold ← v[πi, πi−]
11:    else if v[πi, πi−] + absHeuristic > threshold then
12:      Π̂i ← EXTEND-POLICY (πi, πi.absNodes + 1)
13:      Πi +← INSERT-SORTED-POLICIES (Π̂i)
14:    REMOVE (πi)
15: else
16:  children ← CHILDREN (i)
17:  Πi ← UPPER-BOUND-SORT (i, Πi, πi−)
18:  for all πi ∈ Πi do
19:    π̃i+ ← πi
20:    absHeuristic ← GET-ABS-HEURISTIC (πi, πi−)
21:    absHeuristic ∗← (timeHorizon − πi.horizon)
22:    if πi.horizon = timeHorizon and πi.absNodes = 0 then
23:      if v̂[πi, πi−] < threshold and πi.absNodes = 0 then
24:        Go to line 19
25:      for all j ∈ children do
26:        jThres ← threshold − v[πi, πi−] − Σk∈children,k≠j v̂k[πi, πi−]
27:        πj+,∗ ← SPIDER (j, πi ∪ πi−, jThres)
28:        π̃i+ ← π̃i+ ∪ πj+,∗; v̂j[πi, πi−] ← v[πj+,∗, πi ∪ πi−]
29:      if v[π̃i+, πi−] > threshold then
30:        threshold ← v[π̃i+, πi−]; πi+,∗ ← π̃i+
31:    else if v̂[πi, πi−] + absHeuristic > threshold then
32:      Π̂i ← EXTEND-POLICY (πi, πi.absNodes + 1)
33:      Πi +← INSERT-SORTED-POLICIES (Π̂i)
34:    REMOVE (πi)
35: return πi+,∗

In SPIDER, the exploration/pruning phase can only begin after the heuristic (or upper bound) computation and sorting for the policies has ended.
We provide an approach to possibly circumvent the exploration of a group of policies based on the heuristic computation for one abstract policy, thus leading to an improvement in run-time performance (without loss in solution quality).
The important steps in this technique are defining the abstract policy and how heuristic values are computed for the abstract policies.
In this paper, we propose two
types of abstraction:
1. Horizon Based Abstraction (HBA): Here, the abstract policy is defined as a shorter horizon policy.
It represents a group of longer horizon policies that have the same actions as the abstract policy for times less than or equal to the horizon of the abstract policy.
In Figure 3 (a), a T = 1 abstract policy that performs the "East" action represents a group of T = 2 policies that perform "East" in the first time step.
For HBA, there are two parts to the heuristic computation: (a) computing the upper bound for the horizon of the abstract policy.
This is the same as the heuristic computation defined by the GET-HEURISTIC () algorithm for SPIDER, however with a shorter time horizon (the horizon of the abstract policy).
(b) Computing the maximum possible reward that can be accumulated in one time step (using GET-ABS-HEURISTIC ()) and multiplying it by the number of time steps remaining to the time horizon.
This maximum possible reward (for one time step) is obtained by iterating through all the actions of all the agents in Tree(i) and computing the maximum joint reward for any joint action.
The sum of (a) and (b) is the heuristic value for an HBA abstract policy.
2. Node Based Abstraction (NBA): Here an abstract policy is obtained by not associating actions with certain nodes of the policy tree.
Unlike in HBA, this implies multiple levels of abstraction.
This is illustrated in Figure 3 (b), where there are T = 2 policies that do not have an action for observation 'TP'.
These incomplete T = 2 policies are abstractions of complete T = 2 policies.
Increased levels of abstraction lead to faster computation of a complete joint policy, πroot+, and also to shorter heuristic computation, exploration and pruning phases.
For NBA, the heuristic computation is similar to that of a normal policy, except in cases where there is no action associated with a policy node.
In such cases, the immediate reward is taken as Rmax (the maximum reward for any action).
Figure 3: Example of abstraction for (a) HBA (Horizon Based Abstraction) and (b) NBA (Node Based Abstraction)
We combine both the abstraction techniques mentioned above into one technique, SPIDER-ABS.
Algorithm 5 provides the algorithm for this abstraction technique.
For computing the optimal joint policy with SPIDER-ABS, a non-leaf agent i initially examines all abstract T = 1 policies (line 2) and sorts them based on abstract policy heuristic computations (line 17).
The abstraction horizon is gradually increased, and these abstract policies are then explored in descending order of heuristic values; ones that have heuristic values less than the threshold are pruned (lines 23-24).
Exploration in SPIDER-ABS has the same definition as in SPIDER if the policy being explored has a horizon of policy computation equal to the actual time horizon and if all the nodes of the policy have an action associated with them (lines 25-30).
However, if those conditions are not met, then the policy is substituted by the group of policies that it represents (using the EXTEND-POLICY () function) (lines 31-32).
The EXTEND-POLICY () function is also responsible for initializing the horizon and absNodes of a policy.
absNodes represents the number of nodes at the last level in the policy tree that do not have an action assigned to them.
If πi.absNodes = |Ωi|^(πi.horizon − 1) (i.e., the total number of policy nodes possible at πi.horizon), then πi.absNodes is set to zero and πi.horizon is increased by 1.
Otherwise, πi.absNodes is increased by 1.
Thus, this function combines both HBA and NBA by using the policy variables horizon and absNodes.
Before substituting the abstract policy with a group of policies, those policies are sorted based on heuristic values (line 33).
A similar type of abstraction based best response computation is adopted at leaf agents (lines 3-14).
4.4 Value ApproXimation (VAX)
In this section, we present an approximate enhancement to SPIDER called VAX.
The input to this technique is an approximation parameter ε, which determines the difference from the optimal solution quality.
This approximation parameter is used at each agent for pruning out joint policies.
The pruning mechanism in SPIDER and SPIDER-ABS dictates that a joint policy be pruned only if the threshold is strictly greater than the heuristic value.
The idea in this technique, however, is to prune out a joint policy if the following condition is satisfied: threshold + ε > v̂[πi, πi−].
Apart from the pruning condition, VAX is the same as SPIDER/SPIDER-ABS.
In the example of Figure 2, if the heuristic value for the second joint policy (or second search tree node) in level 2 were 238 instead of 232, then that policy could not be pruned using SPIDER or SPIDER-ABS.
However, in VAX with an approximation parameter of 5, the joint policy in consideration would also be pruned.
This is because the threshold (234) at that juncture plus the approximation parameter (5), i.e.
239, would have been greater than the heuristic value for that joint policy (238).
It can be noted from this example that such pruning can lead to fewer explorations and hence to an improvement in the overall run-time performance.
However, this can entail a sacrifice in the quality of the solution, because this technique can prune out a candidate optimal solution.
A bound on the error introduced by this approximate algorithm as a function of ε is provided by Proposition 3.
4.5 Percentage ApproXimation (PAX)
In this section, we present the second approximation enhancement over SPIDER, called PAX.
The input to this technique is a parameter δ that represents the minimum percentage of the optimal solution quality that is desired.
The output of this technique is a policy with an expected value that is at least δ% of the optimal solution quality.
A policy is pruned if the following condition is satisfied: threshold > (δ/100) · v̂[πi, πi−].
As in VAX, the only difference between PAX and SPIDER/SPIDER-ABS is this pruning condition.
Again in Figure 2, if the heuristic value for the second search tree node in level 2 were 238 instead of 232, then PAX with an input parameter of 98% would be able to prune that search tree node (since (98/100) · 238 = 233.24 < 234).
This type of pruning leads to fewer explorations and hence an improvement in run-time performance, while potentially leading to a loss in the quality of the solution.
Proposition 4 provides the bound on this quality loss.
4.6 Theoretical Results
PROPOSITION 1. The heuristic provided using the centralized MDP is admissible.
Proof. For the value provided by the heuristic to be admissible, it should be an overestimate of the expected value for a joint policy.
Thus, we need to show that for l ∈ Ei+ ∪ Ei−: v̂tl ≥ vtl (refer to the notation in Section 4.2).
We use mathematical induction on t to prove this.
Base case: t = T − 1.
Irrespective of whether l ∈ Ei− or l ∈ Ei+, r̂tl is computed by maximizing over all actions of the agents in Tree(i), while rtl is computed for fixed policies of the same agents.
Hence r̂tl ≥ rtl, and also v̂tl ≥ vtl.
Assumption: The proposition holds for t = η, where 1 ≤ η ≤ T − 1.
We now have to prove that the proposition holds for t = η − 1.
We show the proof for l ∈ Ei−; similar reasoning can be adopted to prove it for l ∈ Ei+.
The heuristic value function for l ∈ Ei− is provided by the following equation:
PROPOSITION 2. SPIDER provides an optimal solution.
Proof. SPIDER examines all possible joint policies given the interaction structure of the agents.
The only exception is when a joint policy is pruned based on the heuristic value.
Thus, as long as a candidate optimal policy is not pruned, SPIDER will return an optimal policy.
As proved in Proposition 1, the heuristic value for a joint policy is always an upper bound on its expected value.
Hence, when a joint policy is pruned based on that upper bound, it cannot be an optimal solution.
PROPOSITION 3. The error bound on the solution quality for VAX (implemented over SPIDER-ABS) with an approximation parameter of ε is ρε, where ρ is the number of leaf nodes in the DFS tree.
Proof. We prove this proposition using mathematical induction on the depth of the DFS tree.
Base case: depth = 1 (i.e.
one node).
The best response is computed by iterating through all policies, Πk.
A policy πk is pruned if v̂[πk, πk−] < threshold + ε.
Thus, the best response policy computed by VAX is at most ε away from the optimal best response.
Hence, the proposition holds for the base case.
Assumption: The proposition holds for all trees of depth ≤ d, where d ≥ 1.
We now have to prove that the proposition holds for depth d + 1.
Without loss of generality, let us assume that the root node of this tree has k children.
Each of these children is of depth ≤ d, and hence from the assumption, the error introduced in the kth child is ρk·ε, where ρk is the number of leaf nodes in the kth child of the root.
Therefore, ρ = Σk ρk, where ρ is the number of leaf nodes in the tree.
In SPIDER-ABS, the threshold at the root agent is threshspider = Σk v[πk+, πk−].
However, with VAX, the threshold at the root agent will be (in the worst case) threshvax = Σk v[πk+, πk−] − Σk ρk·ε.
Hence, with VAX a joint policy is pruned at the root agent if v̂[πroot, πroot−] < threshvax + ε, i.e., if v̂[πroot, πroot−] < threshspider − ((Σk ρk) − 1)·ε.
Thus, accounting for the ε pruning slack at the root together with the error inherited from the children, the total error is at most (Σk ρk)·ε = ρε.
Hence proved. ∎
PROPOSITION 4. For PAX (implemented over SPIDER-ABS) with an input parameter of δ, the solution quality is at least (δ/100)·v[πroot+,∗], where v[πroot+,∗] denotes the optimal solution quality.
Proof. We prove this proposition using mathematical induction on the depth of the DFS tree.
Base case: depth = 1 (i.e.
one node).
The best response is computed by iterating through all policies, Πk.
A policy πk is pruned if (δ/100)·v̂[πk, πk−] < threshold.
Thus, the best response policy computed by PAX is at least δ/100 times the optimal best response.
Hence, the proposition holds for the base case.
Assumption: The proposition holds for all trees of depth ≤ d, where d ≥ 1.
We now have to prove that the proposition holds for depth d + 1.
Without loss of generality, let us assume that the root node of this tree has k children.
Each of these children is of depth ≤ d, and hence from the assumption, the solution quality in the kth child is at least (δ/100)·v[πk+,∗, πk−] for PAX.
With SPIDER-ABS, a joint policy is pruned at the root agent if v̂[πroot, πroot−] < Σk v[πk+,∗, πk−].
However, with PAX, a joint policy is pruned if (δ/100)·v̂[πroot, πroot−] < (δ/100)·Σk v[πk+,∗, πk−], i.e., if v̂[πroot, πroot−] < Σk v[πk+,∗, πk−].
Since the pruning condition at the root agent in PAX is thus the same as the one in SPIDER-ABS, there is no error introduced at the root agent and all the error is introduced in the children.
Thus, the overall solution quality is at least δ/100 of the optimal solution.
Hence proved. ∎
5. EXPERIMENTAL RESULTS
All our experiments were conducted on the sensor network domain from Section 2.
The five network configurations employed are shown in Figure 4.
The algorithms that we experimented with are GOA, SPIDER, SPIDER-ABS, PAX and VAX.
We compare against GOA because it is the only global optimal algorithm that considers more than two agents.
We performed two sets of experiments: (i) first, we compared the run-time performance of the above algorithms, and (ii) second, we experimented with PAX and VAX to study the tradeoff between run-time and solution quality.
Experiments were terminated after 10000 seconds.
Figure 5 (a) provides run-time comparisons between the optimal algorithms GOA, SPIDER and SPIDER-ABS and the approximate algorithms PAX (δ of 30) and VAX (ε of 80).
The X-axis denotes the sensor network configuration used, while the Y-axis indicates the run-time (on a log scale).
The time horizon of policy computation was 3.
For each configuration (3-chain, 4-chain, 4-star and 5-star), there are five bars indicating the time taken by GOA, SPIDER, SPIDER-ABS, PAX and VAX.
GOA did not terminate within the time limit for the 4-star and 5-star configurations.
SPIDER-ABS dominated SPIDER and GOA for all the configurations.
For instance, in the 3-chain configuration, SPIDER-ABS provides a 230-fold speedup over GOA and a 2-fold speedup over SPIDER, and for the 4-chain configuration it provides a 58-fold speedup over GOA and a 2-fold speedup over SPIDER.
The two approximation approaches, VAX and PAX, provided further improvement in performance over SPIDER-ABS.
For instance, in the 5-star configuration, VAX provides a 15-fold speedup and PAX provides an 8-fold speedup over SPIDER-ABS.
Figure 5 (b) provides a comparison of the solution quality obtained using the different algorithms for the problems tested in Figure 5 (a).
The X-axis denotes the sensor network configuration, while the Y-axis indicates the solution quality.
Since GOA, SPIDER, and SPIDER-ABS are all global optimal algorithms, the solution quality is the same for all those algorithms.
For the 5-star configuration, the global optimal algorithms did not terminate within the limit of 10000 seconds, so the bar for optimal quality indicates an upper bound on the optimal solution quality.
With both approximations, we obtained a solution quality close to the optimal solution quality.
In the 3-chain and 4-star configurations, it is remarkable that both PAX and VAX obtained almost the same actual quality as the global optimal algorithms, despite the approximation parameters δ and ε.
For the other configurations as well, the loss in quality was less than 20% of the optimal solution quality.
Figure 5 (c) provides the time to solution with PAX (for varying values of δ).
The X-axis
denotes the approximation parameter δ (percentage to optimal) used, while the y-axis denotes the time taken to compute the solution (on a log scale). The time horizon for all the configurations was 4. As δ was decreased from 70 to 30, the time to solution decreased drastically. For instance, in the 3-chain case there was a total speedup of 170-fold when δ was changed from 70 to 30. Interestingly, even with a low δ of 30%, the actual solution quality remained equal to the one obtained at 70%.

Figure 5(d) provides the time to solution for all the configurations with VAX (for varying ε). The x-axis denotes the approximation parameter ε used, while the y-axis denotes the time taken to compute the solution (on a log scale). The time horizon for all the configurations was 4. As ε was increased, the time to solution decreased drastically. For instance, in the 4-star case there was a total speedup of 73-fold when ε was changed from 60 to 140. Again, the actual solution quality did not change with varying ε.

Figure 4: Sensor network configurations

Figure 5: Comparison of GOA, SPIDER, SPIDER-ABS, PAX and VAX for T = 3 on (a) run-time and (b) solution quality; (c) time to solution for PAX with varying percentage to optimal for T = 4; (d) time to solution for VAX with varying ε for T = 4

6. SUMMARY AND RELATED WORK

This paper presents four algorithms, SPIDER, SPIDER-ABS, PAX and VAX, that provide a novel combination of features for policy search in distributed POMDPs: (i) exploiting agent interaction structure given a network of agents (i.e.
easier scale-up to larger numbers of agents); (ii) using branch-and-bound search with an MDP-based heuristic function; (iii) utilizing abstraction to improve run-time performance without sacrificing solution quality; (iv) providing a priori percentage bounds on the quality of solutions using PAX; and (v) providing expected-value bounds on the quality of solutions using VAX. These features allow for a systematic tradeoff of solution quality for run-time in networks of agents operating under uncertainty. Experimental results show orders-of-magnitude improvements in performance over previous globally optimal algorithms.

Researchers have typically employed two types of techniques for solving distributed POMDPs. The first set of techniques computes globally optimal solutions. Hansen et al. [5] present an algorithm based on dynamic programming and iterated elimination of dominated policies that provides optimal solutions for distributed POMDPs. Szer et al. [13] provide an optimal heuristic search method, MAA*, for solving decentralized POMDPs, based on the combination of a classical heuristic search algorithm, A*, and decentralized control theory. The key differences between SPIDER and MAA* are: (a) the enhancements to SPIDER (VAX and PAX) provide quality-guaranteed approximations, while MAA* is a globally optimal algorithm and hence involves significant computational complexity; (b) owing to MAA*'s inability to exploit interaction structure, it was illustrated only with two agents, whereas SPIDER has been illustrated for networks of agents; and (c) SPIDER explores the joint policy one agent at a time, while MAA* expands it one time step at a time (simultaneously for all the agents). The second set of techniques seeks approximate policies. Emery-Montemerlo et al.
[4] approximate POSGs as a series of one-step Bayesian games using heuristics to approximate future value, trading off limited lookahead for computational efficiency, resulting in locally optimal policies (with respect to the selected heuristic).\nNair et al. [9]'s JESP algorithm uses dynamic programming to reach a local optimum solution for finite horizon decentralized POMDPs.\nPeshkin et al. [11] and Bernstein et al. [2] are examples of policy search techniques that search for locally optimal policies.\nThough all the above techniques improve the efficiency of policy computation considerably, they are unable to provide error bounds on the quality of the solution.\nThis aspect of quality bounds differentiates SPIDER from all the above techniques.\nAcknowledgements.\nThis material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA), through the Department of the Interior, NBC, Acquisition Services Division under Contract No.\nNBCHD030010.\nThe views and conclusions contained in this document are those of the authors, and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the U.S. 
Government.

Learning Consumer Preferences Using Semantic Similarity ∗

Reyhan Aydoğan reyhan.aydogan@gmail.com, Pınar Yolum pinar.yolum@boun.edu.tr
Department of Computer Engineering, Boğaziçi University, Bebek, 34342, Istanbul, Turkey

ABSTRACT

In online, dynamic environments, the services requested by consumers may not be readily served by the providers. This requires the service consumers and providers to negotiate their service needs and offers. Multiagent negotiation approaches typically assume that the parties agree on service content and focus on finding a consensus on service price. In contrast, this work develops an approach through which the parties can negotiate the content of a service. This calls for a negotiation approach in which the parties can understand the semantics of their requests and offers and learn each other's preferences incrementally over time. Accordingly, we propose an architecture in which both consumers and producers use a shared ontology to negotiate a service. Through repetitive interactions, the provider learns consumers' needs accurately and can make better targeted offers. To enable fast and accurate learning of preferences, we develop an extension to Version Space and compare it with existing learning techniques. We further develop a metric for measuring semantic similarity between services and compare the performance of our approach using different similarity metrics.

Categories and Subject Descriptors
I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems

General Terms
Algorithms, Experimentation

1. INTRODUCTION

Current approaches to e-commerce treat service price as the primary construct for negotiation by assuming that the service content is fixed [9]. However, negotiation on price presupposes that other properties of the
service have already been agreed upon. Nevertheless, many times the service provider may not be offering the exact requested service, due to lack of resources, constraints in its business policy, and so on [3]. When this is the case, the producer and the consumer need to negotiate the content of the requested service [15]. However, most existing negotiation approaches assume that all features of a service are equally important and concentrate on the price [5, 2]. In reality, not all features may be relevant, and the relevance of a feature may vary from consumer to consumer. For instance, completion time of a service may be important for one consumer whereas the quality of the service may be more important for a second consumer. Without doubt, considering the preferences of the consumer has a positive impact on the negotiation process. For this purpose, evaluating the service components with different weights can be useful. Some studies take these weights as given a priori and use fixed weights [4]. On the other hand, the producer usually does not know the consumer's preferences before the negotiation. Hence, it is more appropriate for the producer to learn these preferences for each consumer.

Preference Learning: As an alternative, we propose an architecture in which the service providers learn the relevant features of a service for a particular customer over time. We represent service requests as a vector of service features. We use an ontology in order to capture the relations between services and to construct the features for a given service. By using a common ontology, we enable the consumers and producers to share a common vocabulary for negotiation. The particular service we have used is a wine-selling service. The wine seller learns the wine preferences of the customer in order to sell better targeted wines. The producer models the requests of the consumer and its counter offers to learn which features are more important for the
consumer. Since no information is present before the interactions start, the learning algorithm has to be incremental so that it can be trained at run time and can revise itself with each new interaction.

Service Generation: Even after the producer learns the important features for a consumer, it needs a method to generate offers that are the most relevant for the consumer among its set of possible services. In other words, the question is how the producer uses the information learned from the dialogues to make the best offer to the consumer. For instance, assume that the producer has learned that the consumer wants to buy a red wine but the producer can only offer rose or white wine. What should the producer's offer contain: white wine or rose wine? If the producer has some domain knowledge about semantic similarity (e.g., knows that red and rose wines are taste-wise more similar than white wine), then it can generate better offers. However, in addition to domain knowledge, this derivation requires appropriate metrics to measure similarity between available services and learned preferences.

978-81-904262-7-5 (RPS) © 2007 IFAAMAS

The rest of this paper is organized as follows: Section 2 explains our proposed architecture. Section 3 explains the learning algorithms that were studied to learn consumer preferences. Section 4 studies the different service offering mechanisms. Section 5 contains the similarity metrics used in the experiments. The details of the developed system are analyzed in Section 6. Section 7 provides our experimental setup, test cases, and results. Finally, Section 8 discusses and compares our work with other related work.

2. ARCHITECTURE

Our main components are consumer and producer agents, which communicate with each other to perform content-oriented negotiation. Figure 1 depicts our architecture. The consumer agent represents the customer and hence has access to the preferences of the customer. The consumer
agent generates requests in accordance with these preferences and negotiates with the producer based on them. Similarly, the producer agent has access to the producer's inventory and knows which wines are available or not. A shared ontology provides the necessary vocabulary and hence enables a common language for agents. This ontology describes the content of the service. Further, since an ontology can represent concepts, their properties and their relationships semantically, the agents can reason about the details of the service that is being negotiated. Since a service can be anything, such as selling a car, reserving a hotel room, and so on, the architecture is independent of the ontology used. However, to make our discussion concrete, we use the well-known Wine ontology [19], with some modifications, to illustrate our ideas and to test our system. The Wine ontology describes different types of wine and includes features such as color, body, winery of the wine and so on. With this ontology, the service that is being negotiated between the consumer and the producer is that of selling wine.

The data repository in Figure 1 is used solely by the producer agent and holds the inventory information of the producer. The data repository includes information on the products the producer owns, the number of the products and the ratings of those products. Ratings indicate the popularity of the products among customers. These are used to decide which product will be offered when there exists more than one product having the same similarity to the request of the consumer agent.

The negotiation takes place in a turn-taking fashion, where the consumer agent starts the negotiation with a particular service request. The request is composed of significant features of the service. In the wine example, these features include color, winery and so on. This is the particular wine that the customer is interested in purchasing. If the producer has the requested wine in its
inventory, the producer offers the wine and the negotiation ends. Otherwise, the producer offers an alternative wine from the inventory. When the consumer receives a counter offer from the producer, it will evaluate it. If it is acceptable, then the negotiation will end. Otherwise, the customer will generate a new request or stick to the previous request. This process continues until some service is accepted by the consumer agent or all possible offers have been put forward to the consumer by the producer.

Figure 1: Proposed Negotiation Architecture

One of the crucial challenges of content-oriented negotiation is the automatic generation of counter offers by the service producer. When the producer constructs its offer, it should consider three important things: the current request, the consumer's preferences and the producer's available services. Both the consumer's current request and the producer's own available services are accessible by the producer. However, the consumer's preferences in most cases will not be available. Hence, the producer will have to understand the needs of the consumer from their interactions and generate a counter offer that is likely to be accepted by the consumer. This challenge can be studied in three stages:

• Preference Learning: How can the producers learn about each customer's preferences based on requests and counter offers? (Section 3)
• Service Offering: How can the producers revise their offers based on the consumer's preferences that they have learned so far? (Section 4)
• Similarity Estimation: How can the producer agent estimate similarity between the request and available services? (Section 5)

3. PREFERENCE LEARNING

The requests of the consumer and the counter offers of the producer are represented as vectors, where each element in the vector corresponds to the value of a feature. The requests of the consumers represent individual wine products whereas their preferences are constraints over
service features. For example, a consumer may have a preference for red wine. This means that the consumer is willing to accept any wine offered by the producers as long as the color is red. Accordingly, the consumer generates a request where the color feature is set to red and the other features are set to arbitrary values, e.g. (Medium, Strong, Red). At the beginning of the negotiation, the producer agent does not know the consumer's preferences but will need to learn them using information obtained from the dialogues between the producer and the consumer. The preferences denote the relative importance of the features of the services demanded by the consumer agents. For instance, the color of the wine may be important, so the consumer insists on buying a wine whose color is red and rejects all

1302 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

Table 1: How DCEA works

Type | Sample                | The most general set                                       | The most specific set
+    | (Full,Strong,White)   | {(?, ?, ?)}                                                | {(Full,Strong,White)}
-    | (Full,Delicate,Rose)  | {{(?-Full), ?, ?}, {?, (?-Delicate), ?}, {?, ?, (?-Rose)}} | {(Full,Strong,White)}
+    | (Medium,Moderate,Red) | {{(?-Full), ?, ?}, {?, (?-Delicate), ?}, {?, ?, (?-Rose)}} | {(Full,Strong,White), (Medium,Moderate,Red)}

the offers involving wines whose color is white or rose. On the contrary, the winery may not be as important as the color for this customer, so the consumer may have a tendency to accept wines from any winery as long as the color is red.

To tackle this problem, we propose to use incremental learning algorithms [6]. This is necessary since no training data is available before the interactions start. We particularly investigate two approaches. The first is inductive learning. This technique is applied to learn the preferences as concepts. We elaborate on the Candidate Elimination Algorithm (CEA) for Version Space [10]. CEA is known to perform poorly if the information to be learned is disjunctive. Interestingly, most of the time consumer preferences are disjunctive. Say we are considering an agent that is buying wine. The consumer may prefer red wine or rose wine but not white wine. To use CEA with such preferences, a solid modification is necessary. The second approach is decision trees. Decision trees can learn from examples easily and classify new instances as positive or negative. A well-known incremental decision tree is ID5R [18]. However, ID5R is known to suffer from high computational complexity. For this reason, we instead use the ID3 algorithm [13] and iteratively rebuild decision trees to simulate incremental learning.

3.1 CEA

CEA [10] is one of the inductive learning algorithms that learns concepts from observed examples. The algorithm maintains two sets to model the concept to be learned. The first set is the most general set G.
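The two boundary sets that CEA maintains, and their updates (positive examples generalize S, negative examples specialize G), can be sketched as a minimal Python illustration. The '?'-wildcard encoding, the three wine features (body, flavor, color), and all identifiers below are our own illustrative assumptions, not code from the paper:

```python
def covers(hyp, example):
    """A hypothesis covers an example if every attribute matches or is '?'."""
    return all(h == '?' or h == e for h, e in zip(hyp, example))

def generalize(hyp, example):
    """Minimally generalize a specific hypothesis so it covers the example."""
    return tuple(h if h == e else '?' for h, e in zip(hyp, example))

class CEA:
    """Candidate elimination over fixed-length feature vectors."""

    def __init__(self, n_features):
        self.G = [tuple('?' for _ in range(n_features))]  # most general set
        self.S = []                                       # most specific set
        self._positives = []                              # positives seen so far

    def positive(self, example):
        example = tuple(example)
        self._positives.append(example)
        # generalize S to cover the example; prune G members that reject it
        self.S = [generalize(s, example) for s in self.S] if self.S else [example]
        self.G = [g for g in self.G if covers(g, example)]

    def negative(self, example, domains):
        example = tuple(example)
        new_G = []
        for g in self.G:
            if not covers(g, example):
                new_G.append(g)
                continue
            # replace each '?' by every concrete value that rejects the example
            for i, values in enumerate(domains):
                if g[i] == '?':
                    for v in values:
                        if v != example[i]:
                            new_G.append(g[:i] + (v,) + g[i + 1:])
        # keep only specializations consistent with all positives seen so far
        self.G = [g for g in new_G if all(covers(g, p) for p in self._positives)]

# Illustrative wine domains: body, flavor, color.
domains = [('Light', 'Medium', 'Full'),
           ('Delicate', 'Moderate', 'Strong'),
           ('Red', 'White', 'Rose')]
cea = CEA(3)
cea.positive(('Full', 'Strong', 'White'))            # a consumer request
cea.negative(('Full', 'Delicate', 'Rose'), domains)  # a rejected counter offer
```

After these two samples, S seeds to the first request, while the all-wildcard hypothesis in G splits into minimal specializations that reject the negative sample yet still cover the positives.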
G contains hypotheses about all the possible values that the concept may take. As the name suggests, it is a generalization and contains all possible values except those that have been identified as not representing the concept. The second set is the most specific set S. S contains only hypotheses that are known to identify the concept being learned. At the beginning of the algorithm, G is initialized to cover all possible concepts while S is initialized to be empty. During the interactions, each request of the consumer can be considered a positive example, and each counter offer generated by the producer and rejected by the consumer agent can be thought of as a negative example. At each interaction between the producer and the consumer, both G and S are modified. The negative samples enforce the specialization of some hypotheses so that G does not cover any hypothesis accepting the negative samples as positive. When a positive sample comes, the most specific set S is generalized in order to cover the new training instance. As a result, the most general hypotheses and the most specific hypotheses cover all positive training samples but do not cover any negative ones. Incrementally, G specializes and S generalizes until G and S are equal to each other. When these sets are equal, the algorithm has converged to the target concept.

3.2 Disjunctive CEA

Unfortunately, CEA is primarily targeted at conjunctive concepts. However, we need to learn disjunctive concepts in the negotiation of a service, since a consumer may have several alternative wishes. There are several studies on learning disjunctive concepts via Version Space. Some of these approaches use multiple version spaces. For instance, Hong et al.
maintain several version spaces by split and merge operations [7]. To be able to learn disjunctive concepts, they create new version spaces by examining the consistency between G and S. We deal with CEA's lack of support for disjunctive concepts by extending our hypothesis language to include disjunctive hypotheses in addition to conjunctives and negation. Each attribute of a hypothesis has two parts: an inclusive list, which holds the valid values for that attribute, and an exclusive list, which holds the values that the attribute cannot take.

EXAMPLE 1. Assume that the most specific set is {(Light, Delicate, Red)} and a positive example (Light, Delicate, White) comes. The original CEA would generalize this as (Light, Delicate, ?), meaning the color can take any value. However, in fact we only know that the color can be red or white. In DCEA, we generalize it as {(Light, Delicate, [White, Red])}. Only when all possible values exist in the list are they replaced by ?. In other words, we let the algorithm generalize more slowly than before.

We modify the CEA algorithm to deal with this change. The modified algorithm, DCEA, is given as Algorithm 1. Note that, compared to previous disjunctive versions, our approach uses only a single version space rather than multiple version spaces. The initialization phase is the same as in the original algorithm (lines 1, 2). If a positive sample comes, we add the sample to the specific set as before (line 5). However, we do not eliminate the hypotheses in G that do not cover this sample, since G now contains a disjunction of many hypotheses, some of which will conflict with each other. Removing a specific hypothesis from G would result in a loss of information, since other hypotheses are not guaranteed to cover it. After some time, some hypotheses in S can be merged to construct one hypothesis (lines 6, 7). When a negative sample comes, we do not change S
as before. We only modify the most general hypotheses so as not to cover this negative sample (lines 11-15). Unlike the original CEA, we specialize G minimally. The algorithm removes each hypothesis covering the negative sample (line 13). Then, from the removed hypothesis, we generate as many new hypotheses as there are attributes: for each attribute in the negative sample, we add that attribute's value to the exclusive list of the removed hypothesis. Thus, all possible hypotheses that do not cover the negative sample are generated (line 14). Note that the exclusive list contains the values that the attribute cannot take. For example, consider the color attribute. If a hypothesis includes red in its exclusive list and ? in its inclusive list, this means that the color may take any value except red.

Algorithm 1 Disjunctive Candidate Elimination Algorithm
1: G ← the set of maximally general hypotheses in H
2: S ← the set of maximally specific hypotheses in H
3: For each training example, d
4: if d is a positive example then
5:   Add d to S
6:   if s in S can be combined with d to make one element then
7:     Combine s and d into sd {sd is the rule that covers s and d}
8:   end if
9: end if
10: if d is a negative example then
11:   For each hypothesis g in G that covers d
12:     * Assume: g = (x1, x2, ..., xn) and d = (d1, d2, ..., dn)
13:     - Remove g from G
14:     - Add hypotheses g1, g2, ..., gn, where g1 = (x1-d1, x2, ..., xn), g2 = (x1, x2-d2, ..., xn), ..., and gn = (x1, x2, ..., xn-dn)
15:     - Remove from G any hypothesis that is less general than another hypothesis in G
16: end if

EXAMPLE 2. Table 1 illustrates the first three interactions and the workings of DCEA. The most general set and the most specific set show the contents of G and S after each sample comes in. After the first positive sample, S is generalized to also cover the instance. The second sample is
negative. Thus, we replace (?, ?, ?) by three disjunctive hypotheses, each minimally specialized. In this process, one attribute value of the negative sample at a time is applied to the hypothesis in the general set. The third sample is positive and generalizes S even more. Note that in Table 1 we do not eliminate {(?-Full), ?, ?} from the general set on receiving a positive sample such as (Full, Strong, White). This stems from the possibility of using this rule in the generation of other hypotheses. For instance, if the example continues with a negative sample (Full, Strong, Red), we can specialize the previous rule to {(?-Full), ?, (?-Red)}. With Algorithm 1, we do not miss any information.

3.3 ID3

ID3 [13] is an algorithm that constructs decision trees in a top-down fashion from observed examples represented as vectors of attribute-value pairs. Applying this algorithm to our system with the intention of learning the consumer's preferences is appropriate, since this algorithm also supports learning disjunctive concepts in addition to conjunctive concepts. The ID3 algorithm is used in the learning process for the classification of offers. There are two classes: positive and negative. Positive means that the service description will possibly be accepted by the consumer agent, whereas negative implies that it will potentially be rejected by the consumer. The consumer's requests are considered positive training examples and all rejected counter-offers are considered negative ones. The decision tree has two types of nodes: leaf nodes, in which the class labels of the instances are held, and non-leaf nodes, in which test attributes are held. The test attribute in a non-leaf node is one of the attributes making up the service description. For instance, body, flavor, color and so on are potential test attributes for the wine service. When we want to find whether a given service description is acceptable,
we start searching from the root node, examining the values of the test attributes until reaching a leaf node. The problem with this algorithm is that it is not incremental, which means all the training examples should exist before learning. To overcome this problem, the system keeps the consumer's requests throughout the negotiation interaction as positive examples and all counter-offers rejected by the consumer as negative examples. After each incoming request, the decision tree is rebuilt. Without doubt, reconstruction has a drawback, namely the additional processing load. However, in practice we have found ID3 to be fast and the reconstruction cost to be negligible.

4. SERVICE OFFERING

After learning the consumer's preferences, the producer needs to make a counter offer that is compatible with those preferences.

4.1 Service Offering via CEA and DCEA

To generate the best offer, the producer agent uses its service ontology and the CEA algorithm. The service offering mechanism is the same for both the original CEA and DCEA, but, as explained before, their methods for updating G and S are different. When the producer receives a request from the consumer, the producer's learner is trained with this request as a positive sample. The learning components, the most specific set S and the most general set G, are actively used in offering a service. The most general set G is used by the producer in order to avoid offering services that will be rejected by the consumer agent. In other words, it filters the undesired services out of the service set, since G contains hypotheses that are consistent with the requests of the consumer. The most specific set S is used in order to find the best offer, i.e., the one most similar to the consumer's preferences. Since the most specific set S holds the previous requests and the current request, estimating similarity between this set and every service in the service list is a convenient way to find the best
offer from the service list. When the consumer starts the interaction with the producer agent, the producer agent loads all related services into the service list object. This list constitutes the provider's inventory of services. Upon receiving a request, if the producer can offer an exactly matching service, then it does so. For example, for a wine this corresponds to selling a wine that matches the specified features of the consumer's request identically. When the producer cannot offer the service as requested, it tries to find the service that is most similar to the services that have been requested by the consumer during the negotiation. To do this, the producer has to compute the similarity between the services it can offer and the services that have been requested (in S). We compute the similarities in various ways, as will be explained in Section 5. After the similarity of the available services with the current S is calculated, there may be more than one service with the maximum similarity. The producer agent can break the tie in a number of ways. Here, we have associated a rating value with each service, and the producer prefers the higher-rated service.

4.2 Service Offering via ID3

If the producer learns the consumer's preferences with ID3, a similar mechanism is applied, with two differences. First, since ID3 does not maintain G, the unaccepted services that are classified as negative are removed from the service list. Second, the similarities of possible services are measured not with respect to S but with respect to all previously made requests.

4.3 Alternative Service Offering Mechanisms

In addition to these three service offering mechanisms (Service Offering with CEA, Service Offering with DCEA, and Service Offering with ID3), we include two other mechanisms.

• Random Service Offering (RO): The producer generates a counter offer
randomly from the available service list, without considering the consumer's preferences.

• Service Offering considering only the current request (SCR): The producer selects a counter offer according to the similarity to the consumer's current request but does not consider previous requests.

5. SIMILARITY ESTIMATION

Similarity can be estimated with a similarity metric that takes two entries and returns how similar they are. There are several similarity metrics used in case-based reasoning systems, such as the weighted sum of Euclidean distances, Hamming distance and so on [12]. The similarity metric affects the performance of the system when deciding which service is the closest to the consumer's request. We first analyze some existing metrics and then propose a new semantic similarity metric named RP Similarity.

5.1 Tversky's Similarity Metric

Tversky's similarity metric compares two vectors in terms of the number of exactly matching features [17]. In Equation (1), common represents the number of matched attributes whereas different represents the number of differing attributes. Our current assumption is that α and β are equal to each other.

SMpq = α(common) / (α(common) + β(different))    (1)

Here, when two features are compared, we assign zero for dissimilarity and one for similarity, omitting the semantic closeness among the feature values. Tversky's similarity metric is designed to compare two feature vectors. In our system, whereas the services that can be offered by the producer are each a feature vector, the most specific set S is not a feature vector: S consists of hypotheses over feature vectors. Therefore, we estimate the similarity of each hypothesis inside the most specific set S and then take the average of the similarities.

EXAMPLE 3. Assume that S contains the following two hypotheses: {{Light, Moderate, (Red, White)}, {Full, Strong, Rose}}. Take service s as (Light, Strong, Rose). Then the
similarity of the first one is equal to 1\/3 and the second one is equal to 2\/3 in accordance with Equation (1).\nNormally, we would take the average and obtain (1\/3 + 2\/3)\/2, which equals 1\/2.\nHowever, the first hypothesis reflects the effect of two requests whereas the second hypothesis reflects only one request.\nAs a result, we expect the effect of the first hypothesis to be greater than that of the second.\nTherefore, we calculate the average similarity by considering the number of samples that the hypotheses cover.\nLet ch denote the number of samples that hypothesis h covers and SM(h,service) denote the similarity of hypothesis h with the given service.\nWe compute the similarity of each hypothesis with the given service and weight it with the number of samples the hypothesis covers.\nWe find the similarity by dividing the weighted sum of the similarities of all hypotheses in S with the service by the total number of samples covered in S.\nAVG-SM(service,S) = \u2211h\u2208S (ch \u2217 SM(h,service)) \/ \u2211h\u2208S ch (2)\nFigure 2: Sample taxonomy for similarity estimation\nEXAMPLE 4.\nFor the above example, the similarity of (Light, Strong, Rose) with the specific set is (2 \u2217 1\/3 + 2\/3)\/3, which equals 4\/9.\nThe number of samples that a hypothesis covers can be estimated by multiplying the cardinalities of its attributes.\nFor example, for the hypothesis {Light, Moderate, (Red, White)}, the cardinality of the third attribute is two and the others are equal to one.\nWhen we multiply them, we obtain two (2 \u2217 1 \u2217 1 = 2).\n5.2 Lin's Similarity Metric A taxonomy can be used when estimating semantic similarity between two concepts.\nEstimating semantic similarity in an IS-A taxonomy can be done by calculating the distance between the nodes related to the compared concepts.\nThe links among the nodes can be considered as distances.\nThen, the length of the path between the nodes indicates how similar the concepts are.\nAn alternative 
estimation, proposed by Lin [8], uses information content rather than edge counting.\nEquation (3) [8] shows Lin's similarity, where c1 and c2 are the compared concepts and c0 is the most specific concept that subsumes both of them.\nHere, P(C) represents the probability that an arbitrarily selected object belongs to concept C.\nSimilarity(c1, c2) = 2 \u00d7 log P(c0) \/ (log P(c1) + log P(c2)) (3)\n5.3 Wu & Palmer's Similarity Metric Different from Lin, Wu and Palmer use the distance between the nodes in an IS-A taxonomy [20].\nThe semantic similarity is given by Equation (4) [20].\nHere, the similarity between c1 and c2 is estimated, and c0 is the most specific concept subsuming these classes.\nN1 is the number of edges between c1 and c0.\nN2 is the number of edges between c2 and c0.\nN0 is the number of IS-A links of c0 from the root of the taxonomy.\nSimWu&Palmer(c1, c2) = 2 \u00d7 N0 \/ (N1 + N2 + 2 \u00d7 N0) (4)\n5.4 RP Semantic Metric We propose to estimate the relative distance in a taxonomy between two concepts using the following intuitions.\nWe use Figure 2 to illustrate these intuitions.\n\u2022 Parent versus grandparent: The parent of a node is more similar to the node than its grandparents are.\nGeneralization of a concept reasonably moves further away from that concept: the more general concepts are, the less similar they are.\nFor example, AnyWineColor is the parent of ReddishColor and ReddishColor is the parent of Red.\nThen, we expect the similarity between ReddishColor and Red to be higher than the similarity between AnyWineColor and Red.\n\u2022 Parent versus sibling: A node has higher similarity to its parent than to its sibling.\nFor instance, Red and Rose are children of ReddishColor.\nIn this case, we expect the similarity between Red and ReddishColor to be higher than that of Red and 
Rose.\n\u2022 Sibling versus grandparent: A node is more similar to its sibling than to its grandparent.\nTo illustrate, AnyWineColor is the grandparent of Red, and Red and Rose are siblings.\nTherefore, we anticipate that Red and Rose are more similar than AnyWineColor and Red.\nSince a taxonomy is represented as a tree, the tree can be traversed from the first compared concept to the second.\nAt the starting node, related to the first concept, the similarity value is initialized to one.\nThis value is diminished by a constant factor at each node visited along the path to the node containing the second concept.\nThe shorter the path between the concepts, the higher the similarity between the nodes.\nAlgorithm 2 Estimate-RP-Similarity(c1,c2)\nRequire: The constants should satisfy m > n > m^2 where m, n \u2208 [0, 1]\n1: Similarity \u2190 1\n2: if c1 is equal to c2 then\n3: Return Similarity\n4: end if\n5: commonParent \u2190 findCommonParent(c1, c2) {commonParent is the most specific concept that covers both c1 and c2}\n6: N1 \u2190 findDistance(commonParent, c1)\n7: N2 \u2190 findDistance(commonParent, c2) {N1 & N2 are the numbers of links between each concept and the common parent}\n8: if (commonParent == c1) or (commonParent == c2) then\n9: Similarity \u2190 Similarity \u2217 m^(N1+N2)\n10: else\n11: Similarity \u2190 Similarity \u2217 n \u2217 m^(N1+N2\u22122)\n12: end if\n13: Return Similarity\nThe relative distance between nodes c1 and c2 is estimated in the following way.\nStarting from c1, the tree is traversed to reach c2.\nAt each hop, the similarity decreases since the concepts are getting farther away from each other.\nHowever, based on our intuitions, not all hops decrease the similarity equally.\nLet m represent the factor for hopping from a child to a parent and n represent the factor for hopping from a sibling to another sibling.\nSince hopping from a node to its grandparent counts as two parent hops, the discount factor of moving from a node to 
its grandparent is m^2.\nAccording to the above intuitions, our constants should satisfy m > n > m^2, where the values of m and n are between zero and one.\nAlgorithm 2 shows the distance calculation.\nAccording to the algorithm, the similarity is first initialized to one (line 1).\nIf the concepts are equal to each other, the similarity is one (lines 2-4).\nOtherwise, we compute the common parent of the two nodes and the distance of each concept to the common parent, without considering siblings (lines 5-7).\nIf one of the concepts is equal to the common parent, then there is no sibling relation between the concepts.\nFor each level, we multiply the similarity by m and do not consider the sibling factor in the similarity estimation.\nAs a result, we decrease the similarity at each level at the rate of m (line 9).\nOtherwise, there has to be a sibling relation.\nThis means that we have to consider the effect of n when measuring similarity.\nRecall that we have counted N1+N2 edges between the concepts.\nSince there is a sibling relation, two of these edges constitute the sibling relation.\nHence, when calculating the effect of the parent relation, we use N1+N2\u22122 edges (line 11).\nSome similarity estimations related to the taxonomy in Figure 2 are given in Table 2.\nIn this example, m is taken as 2\/3 and n is taken as 4\/7.\nTable 2: Sample similarity estimation over the sample taxonomy\nSimilarity(ReddishColor, Rose) = 1 \u2217 (2\/3) = 0.6666667\nSimilarity(Red, Rose) = 1 \u2217 (4\/7) = 0.5714286\nSimilarity(AnyWineColor, Rose) = 1 \u2217 (2\/3)^2 = 0.4444444\nSimilarity(White, Rose) = 1 \u2217 (2\/3) \u2217 (4\/7) = 0.3809524\nFor all semantic similarity metrics in our architecture, the taxonomy for features is held in the shared ontology.\nTo evaluate the similarity of a feature vector, we first estimate the similarity of the features one by one and take the average of these similarities.\nThen the result is equal to 
the average semantic similarity of the entire feature vector.\n6.\nDEVELOPED SYSTEM We have implemented our architecture in Java.\nTo ease testing of the system, the consumer agent has a user interface that allows us to enter various requests.\nThe producer agent is fully automated, and the learning and service offering operations work as explained before.\nIn this section, we explain the implementation details of the developed system.\nWe use OWL [11] as our ontology language and JENA as our ontology reasoner.\nThe shared ontology is a modified version of the Wine Ontology [19].\nIt includes the description of wine as a concept and different types of wine.\nAll participants of the negotiation use this ontology to understand each other.\nAccording to the ontology, seven properties make up the wine concept.\nThe consumer agent and the producer agent obtain the possible values for these properties by querying the ontology.\nThus, all possible values for the components of the wine concept, such as color, body, sugar and so on, can be reached by both agents.\nAlso, a variety of wine types are described in this ontology, such as Burgundy, Chardonnay, CheninBlanc and so on.\nIntuitively, any wine type described in the ontology also represents a wine concept.\nThis allows us to consider instances of Chardonnay wine as instances of the Wine class.\nIn addition to the wine description, the hierarchical information of some features can be inferred from the ontology.\nFor instance, we can represent the information that Europe Continent covers Western Country.\nWestern Country covers French Region, which covers territories such as Loire, Bordeaux and so on.\nThis hierarchical information is used in the estimation of semantic similarity.\nHere, some reasoning can be performed: if a concept X covers Y and Y covers Z, then concept X covers Z. 
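The covers reasoning just described and the RP similarity of Algorithm 2 both reduce to walking parent links in the taxonomy. The following minimal Java sketch illustrates this; it is not the paper's JENA-backed implementation, and the hard-coded parent map and the class name TaxonomyReasoner are illustrative assumptions (White's position under AnyWineColor is inferred from Table 2). With m = 2/3 and n = 4/7 it reproduces the Table 2 values.

```java
import java.util.*;

// Minimal sketch: the shared ontology's IS-A/covers links, hard-coded here
// as a child -> parent map (a stand-in for querying the OWL ontology via JENA).
public class TaxonomyReasoner {
    static final Map<String, String> parent = new HashMap<>();
    static {
        // region hierarchy described in the text
        parent.put("WesternCountry", "EuropeContinent");
        parent.put("FrenchRegion", "WesternCountry");
        parent.put("Bordeaux", "FrenchRegion");
        parent.put("Loire", "FrenchRegion");
        // color taxonomy of Figure 2 (White's placement assumed from Table 2)
        parent.put("ReddishColor", "AnyWineColor");
        parent.put("White", "AnyWineColor");
        parent.put("Red", "ReddishColor");
        parent.put("Rose", "ReddishColor");
    }

    // Transitive covers: X covers Z if X is an ancestor of Z in the taxonomy.
    static boolean covers(String x, String z) {
        for (String node = parent.get(z); node != null; node = parent.get(node))
            if (node.equals(x)) return true;
        return false;
    }

    // Chain of concepts from c up to the taxonomy root, most specific first.
    static List<String> pathToRoot(String c) {
        List<String> path = new ArrayList<>();
        for (String node = c; node != null; node = parent.get(node)) path.add(node);
        return path;
    }

    // Algorithm 2: m is the parent-hop factor, n the sibling factor, m > n > m^2.
    static double rpSimilarity(String c1, String c2, double m, double n) {
        if (c1.equals(c2)) return 1.0;
        List<String> p1 = pathToRoot(c1), p2 = pathToRoot(c2);
        for (int i = 0; i < p1.size(); i++) {   // most specific common parent
            int j = p2.indexOf(p1.get(i));      // i, j are the edge counts N1, N2
            if (j >= 0) {
                if (i == 0 || j == 0)           // one concept subsumes the other (line 9)
                    return Math.pow(m, i + j);
                return n * Math.pow(m, i + j - 2); // sibling relation spends two edges (line 11)
            }
        }
        return 0.0; // concepts in disconnected taxonomies
    }

    public static void main(String[] args) {
        System.out.println(covers("EuropeContinent", "Bordeaux")); // true, by transitivity
        double m = 2.0 / 3.0, n = 4.0 / 7.0;                       // constants used in Table 2
        System.out.println(rpSimilarity("Red", "Rose", m, n));     // 4/7, i.e. about 0.5714
        System.out.println(rpSimilarity("White", "Rose", m, n));   // (4/7)*(2/3), about 0.3810
    }
}
```

In the deployed system these links would come from the OWL ontology through JENA queries rather than a static map; the traversal logic is unchanged.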
For example, Europe Continent covers Bordeaux.\nFor some features such as body, flavor and sugar, there is no hierarchical information, but their values are semantically leveled.\nWhen that is the case, we assign reasonable similarity values to these features.\nFor example, the body can be light, medium, or strong.\nIn this case, we assume that light is 0.66 similar to medium but only 0.33 similar to strong.\nThe WineStock ontology is the producer's inventory and describes a product class, WineProduct.\nThis class is necessary for the producer to record the wines that it sells.\nThe ontology contains the individuals of this class.\nThe individuals represent the available services that the producer owns.\nWe have prepared two separate WineStock ontologies for testing.\nIn the first ontology, there are 19 available wine products, and in the second ontology, there are 50 products.\n7.\nPERFORMANCE EVALUATION We evaluate the performance of the proposed systems with respect to the learning technique used, DCEA and ID3, by comparing them with CEA, RO (random offering), and SCR (offering based on the current request only).\nWe apply a variety of scenarios to this dataset in order to observe the performance differences.\nEach test scenario contains a list of preferences for the user and the number of matches from the product list.\nTable 3 shows these preferences and the availability of those products in the inventory for the first five scenarios.\nNote that these preferences are internal to the consumer, and the producer tries to learn them during negotiation.\nTable 3: Availability of wines in different test scenarios\nID Preference of consumer Availability (out of 19)\n1 Dry wine 15\n2 Red and dry wine 8\n3 Red, dry and moderate wine 4\n4 Red and strong wine 2\n5 Red or rose, and strong 3\n7.1 Comparison of Learning Algorithms In the comparison of learning algorithms, we use the five scenarios in Table 3.\nHere, first 
we use Tversky's similarity measure.\nWith these test cases, we are interested in finding the number of iterations required for the producer to generate an acceptable offer for the consumer.\nSince the performance also depends on the initial request, we repeat our experiments with different initial requests.\nConsequently, for each case, we run the algorithms five times with several variations of the initial requests.\nIn each experiment, we count the number of iterations that were needed to reach an agreement.\nWe take the average of these numbers in order to evaluate the systems fairly.\nAs is customary, we test each algorithm with the same initial requests.\nTable 4 compares the approaches using the different learning algorithms.\nWhen a large part of the inventory is compatible with the customer's preferences, as in the first test case, the performance of all techniques is nearly the same (e.g., Scenario 1).\nAs the number of compatible services drops, RO performs poorly, as expected.\nThe second worst method is SCR, since it only considers the customer's most recent request and does not learn from previous requests.\nCEA gives the best results when it can generate an answer, but it cannot handle cases containing disjunctive preferences, such as the one in Scenario 5.\nID3 and DCEA achieve the best results.\nTheir performance is comparable, and they can handle all cases, including Scenario 5.\nTable 4: Comparison of learning algorithms in terms of average number of interactions\nRun DCEA SCR RO CEA ID3\nScenario 1: 1.2 1.4 1.2 1.2 1.2\nScenario 2: 1.4 1.4 2.6 1.4 1.4\nScenario 3: 1.4 1.8 4.4 1.4 1.4\nScenario 4: 2.2 2.8 9.6 1.8 2\nScenario 5: 2 2.6 7.6 1.75 + No offer 1.8\nAvg. of all cases: 1.64 2 5.08 1.51 + No offer 1.56\n7.2 Comparison of Similarity Metrics To compare the similarity metrics that were explained in Section 5, we fix the learning algorithm to DCEA.\nIn addition to the scenarios shown in Table 3, we add the following five new scenarios considering the 
hierarchical information.\n\u2022 The customer wants to buy a wine whose winery is located in California and whose grape is a type of white grape.\nMoreover, the winery of the wine should not be expensive.\nThere are only four products meeting these conditions.\n\u2022 The customer wants to buy a wine whose color is red or rose and whose grape type is red grape.\nIn addition, the location of the wine should be in Europe.\nThe sweetness degree should be dry or off-dry.\nThe flavor should be delicate or moderate, and the body should be medium or light.\nFurthermore, the winery of the wine should be an expensive winery.\nThere are two products meeting all these requirements.\n\u2022 The customer wants to buy a moderate rose wine, which is located around the French Region.\nThe category of the winery should be Moderate Winery.\nThere is only one product meeting these requirements.\n\u2022 The customer wants to buy an expensive red wine, which is located around the California Region, or a cheap white wine, which is located around the Texas Region.\nThere are five available products.\n\u2022 The customer wants to buy a delicate white wine whose producer is in the category of Expensive Winery.\nThere are two available products.\nThe first seven scenarios are tested with the first dataset, which contains a total of 19 services, and the last three scenarios are tested with the second dataset, which contains 50 services.\nTable 5 gives the performance evaluation in terms of the number of interactions needed to reach a consensus.\nTversky's metric gives the worst results since it does not consider semantic similarity.\nLin's performance is better than Tversky's but worse than the others.\nWu & Palmer's metric and the RP similarity measure give nearly the same performance, better than the others.\nWhen the results are examined, considering semantic closeness increases the performance.\n8.\nDISCUSSION We review the recent literature in comparison to our work.\nTamma et al. 
[16] propose a new approach based on ontology for negotiation.\nAccording to their approach, the negotiation protocols used in e-commerce can be modeled as ontologies.\nThus, the agents can carry out a negotiation protocol by using this shared ontology, without the protocol details being hard-coded.\nTable 5: Comparison of similarity metrics in terms of number of interactions\nRun Tversky Lin Wu Palmer RP\nScenario 1: 1.2 1.2 1 1\nScenario 2: 1.4 1.4 1.6 1.6\nScenario 3: 1.4 1.8 2 2\nScenario 4: 2.2 1 1.2 1.2\nScenario 5: 2 1.6 1.6 1.6\nScenario 6: 5 3.8 2.4 2.6\nScenario 7: 3.2 1.2 1 1\nScenario 8: 5.6 2 2 2.2\nScenario 9: 2.6 2.2 2.2 2.6\nScenario 10: 4.4 2 2 1.8\nAverage of all cases: 2.9 1.82 1.7 1.76\nWhile Tamma et al. model the negotiation protocol using ontologies, we have instead modeled the service to be negotiated.\nFurther, we have built a system with which negotiation preferences can be learned.\nSadri et al. 
study negotiation in the context of resource allocation [14].\nAgents have limited resources and need to acquire missing resources from other agents.\nA mechanism based on dialogue sequences among agents is proposed as a solution.\nThe mechanism relies on an observe-think-action agent cycle.\nThese dialogues include offering resources, exchanging resources, and offering alternative resources.\nEach agent in the system plans its actions to reach a goal state.\nContrary to our approach, Sadri et al.'s study is not concerned with the agents learning each other's preferences.\nBrzostowski and Kowalczyk propose an approach to select an appropriate negotiation partner by investigating previous multi-attribute negotiations [1].\nTo achieve this, they use case-based reasoning.\nTheir approach is probabilistic since the behavior of the partners can change at each iteration.\nIn our approach, we are interested in negotiating the content of the service.\nAfter the consumer and producer agree on the service, price-oriented negotiation mechanisms can be used to agree on the price.\nFatima et al. study the factors that affect negotiation, such as preferences, deadline, and price, since an agent who develops a strategy against its opponent should consider all of them [5].\nIn their approach, the goal of the seller agent is to sell the service for the highest possible price, whereas the goal of the buyer agent is to buy the good at the lowest possible price.\nThe time interval affects these agents differently.\nCompared to Fatima et al., our focus is different.\nWhile they study the effect of time on negotiation, our focus is on learning preferences for a successful negotiation.\nFaratin et al. 
propose a multi-issue negotiation mechanism, where the service variables for the negotiation, such as price, quality of service, and so on, are traded off against each other (e.g., a higher price for earlier delivery) [4].\nThey generate a heuristic model for trade-offs, including fuzzy similarity estimation and a hill-climbing exploration for possibly acceptable offers.\nAlthough we address a similar problem, we learn the preferences of the customer with the help of inductive learning and generate counter-offers in accordance with these learned preferences.\nFaratin et al. use only the last offer made by the consumer in calculating the similarity for choosing a counter offer.\nUnlike them, we also take into account the previous requests of the consumer.\nIn their experiments, Faratin et al. assume that the weights for the service variables are fixed a priori.\nOn the contrary, we learn these preferences over time.\nIn our future work, we plan to integrate ontology reasoning into the learning algorithm so that hierarchical information can be learned from the subsumption hierarchy of relations.\nFurther, by using relationships among features, the producer can discover new knowledge from existing knowledge.\nThese are interesting directions that we will pursue in our future work.\n9.\nREFERENCES [1] J. Brzostowski and R. Kowalczyk.\nOn possibilistic case-based reasoning for selecting partners for multi-attribute agent negotiation.\nIn Proceedings of the 4th Intl. Joint Conference on Autonomous Agents and MultiAgent Systems (AAMAS), pages 273-278, 2005.\n[2] L. Busch and I. Horstman.\nA comment on issue-by-issue negotiations.\nGames and Economic Behavior, 19:144-148, 1997.\n[3] J. K. Debenham.\nManaging e-market negotiation in context with a multiagent system.\nIn Proceedings of the 21st International Conference on Knowledge Based Systems and Applied Artificial Intelligence, ES'2002, 2002.\n[4] P. Faratin, C. Sierra, and N. R. 
Jennings.\nUsing similarity criteria to make issue trade-offs in automated negotiations.\nArtificial Intelligence, 142:205-237, 2002.\n[5] S. Fatima, M. Wooldridge, and N. Jennings.\nOptimal agents for multi-issue negotiation.\nIn Proceedings of the 2nd Intl. Joint Conference on Autonomous Agents and MultiAgent Systems (AAMAS), pages 129-136, 2003.\n[6] C. Giraud-Carrier.\nA note on the utility of incremental learning.\nAI Communications, 13(4):215-223, 2000.\n[7] T.-P. Hong and S.-S. Tseng.\nSplitting and merging version spaces to learn disjunctive concepts.\nIEEE Transactions on Knowledge and Data Engineering, 11(5):813-815, 1999.\n[8] D. Lin.\nAn information-theoretic definition of similarity.\nIn Proc. 15th International Conf. on Machine Learning, pages 296-304.\nMorgan Kaufmann, San Francisco, CA, 1998.\n[9] P. Maes, R. H. Guttman, and A. G. Moukas.\nAgents that buy and sell.\nCommunications of the ACM, 42(3):81-91, 1999.\n[10] T. M. Mitchell.\nMachine Learning.\nMcGraw Hill, NY, 1997.\n[11] OWL.\nOWL: Web ontology language guide, 2003.\nhttp:\/\/www.w3.org\/TR\/2003\/CR-owl-guide-20030818\/.\n[12] S. K. Pal and S. C. K. Shiu.\nFoundations of Soft Case-Based Reasoning.\nJohn Wiley & Sons, New Jersey, 2004.\n[13] J. R. Quinlan.\nInduction of decision trees.\nMachine Learning, 1(1):81-106, 1986.\n[14] F. Sadri, F. Toni, and P. Torroni.\nDialogues for negotiation: Agent varieties and dialogue sequences.\nIn ATAL 2001, Revised Papers, volume 2333 of LNAI, pages 405-421.\nSpringer-Verlag, 2002.\n[15] M. P. Singh.\nValue-oriented electronic commerce.\nIEEE Internet Computing, 3(3):6-7, 1999.\n[16] V. Tamma, S. Phelps, I. Dickinson, and M. Wooldridge.\nOntologies for supporting negotiation in e-commerce.\nEngineering Applications of Artificial Intelligence, 18:223-236, 2005.\n[17] A. Tversky.\nFeatures of similarity.\nPsychological Review, 84(4):327-352, 1977.\n[18] P. E. 
Utgoff.\nIncremental induction of decision trees.\nMachine Learning, 4:161-186, 1989.\n[19] Wine, 2003.\nhttp:\/\/www.w3.org\/TR\/2003\/CR-owl-guide20030818\/wine.rdf.\n[20] Z. Wu and M. Palmer.\nVerb semantics and lexical selection.\nIn 32nd.\nAnnual Meeting of the Association for Computational Linguistics, pages 133 -138, 1994.\n1308 The Sixth Intl..\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)","lvl-3":"Learning Consumer Preferences Using Semantic Similarity \u2217 Reyhan Aydo\u02d8gan P\u0131nar Yolum\nABSTRACT\nIn online, dynamic environments, the services requested by consumers may not be readily served by the providers.\nThis requires the service consumers and providers to negotiate their service needs and offers.\nMultiagent negotiation approaches typically assume that the parties agree on service content and focus on finding a consensus on service price.\nIn contrast, this work develops an approach through which the parties can negotiate the content of a service.\nThis calls for a negotiation approach in which the parties can understand the semantics of their requests and offers and learn each other's preferences incrementally over time.\nAccordingly, we propose an architecture in which both consumers and producers use a shared ontology to negotiate a service.\nThrough repetitive interactions, the provider learns consumers' needs accurately and can make better targeted offers.\nTo enable fast and accurate learning of preferences, we develop an extension to Version Space and compare it with existing learning techniques.\nWe further develop a metric for measuring semantic similarity between services and compare the performance of our approach using different similarity metrics.\n1.\nINTRODUCTION\nCurrent approaches to e-commerce treat service price as the primary construct for negotiation by assuming that the service content is fixed [9].\nHowever, negotiation on price presupposes that other properties of the service have already 
been agreed upon.\nNevertheless, many times the service provider may not be offering the exact requested service due to lack of resources, constraints in its business policy, and so on [3].\nWhen this is the case, the producer and the consumer need to negotiate the content of the requested service [15].\nHowever, most existing negotiation approaches assume that all features of a service are equally important and concentrate on the price [5, 2].\nHowever, in reality not all features may be relevant and the relevance of a feature may vary from consumer to consumer.\nFor instance, completion time of a service may be important for one consumer whereas the quality of the service may be more important for a second consumer.\nWithout doubt, considering the preferences of the consumer has a positive impact on the negotiation process.\nFor this purpose, evaluation of the service components with different weights can be useful.\nSome studies take these weights as a priori and uses the fixed weights [4].\nOn the other hand, mostly the producer does not know the consumer's preferences before the negotiation.\nHence, it is more appropriate for the producer to learn these preferences for each consumer.\nPreference Learning: As an alternative, we propose an architecture in which the service providers learn the relevant features of a service for a particular customer over time.\nWe represent service requests as a vector of service features.\nWe use an ontology in order to capture the relations between services and to construct the features for a given service.\nBy using a common ontology, we enable the consumers and producers to share a common vocabulary for negotiation.\nThe particular service we have used is a wine selling service.\nThe wine seller learns the wine preferences of the customer to sell better targeted wines.\nThe producer models the requests of the consumer and its counter offers to learn which features are more important for the consumer.\nSince no information is 
present before the interactions start, the learning algorithm has to be incremental so that it can be trained at run time and can revise itself with each new interaction.\nService Generation: Even after the producer learns the important features for a consumer, it needs a method to generate offers that are the most relevant for the consumer among its set of possible services.\nIn other words, the question is how the producer uses the information that was learned from the dialogues to make the best offer to the consumer.\nFor instance, assume that the producer has learned that the consumer wants to buy a red wine but the producer can only offer rose or white wine.\nWhat should the producer's offer\ncontain; white wine or rose wine?\nIf the producer has some domain knowledge about semantic similarity (e.g., knows that the red and rose wines are taste-wise more similar than white wine), then it can generate better offers.\nHowever, in addition to domain knowledge, this derivation requires appropriate metrics to measure similarity between available services and learned preferences.\nThe rest of this paper is organized as follows: Section 2 explains our proposed architecture.\nSection 3 explains the learning algorithms that were studied to learn consumer preferences.\nSection 4 studies the different service offering mechanisms.\nSection 5 contains the similarity metrics used in the experiments.\nThe details of the developed system is analyzed in Section 6.\nSection 7 provides our experimental setup, test cases, and results.\nFinally, Section 8 discusses and compares our work with other related work.\n2.\nARCHITECTURE\n3.\nPREFERENCE LEARNING\n1302 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n3.1 CEA\n3.2 Disjunctive CEA\nThe Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1303\n3.3 ID3\n4.\nSERVICE OFFERING\n4.1 Service Offering via CEA and DCEA\n4.2 Service Offering via ID3\n4.3 Alternative Service Offering Mechanisms\n1304 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.\nSIMILARITY ESTIMATION\n5.1 Tversky's Similarity Metric\n5.2 Lin's Similarity Metric\n5.3 Wu & Palmer's Similarity Metric\n5.4 RP Semantic Metric\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1305\n6.\nDEVELOPED SYSTEM\n1306 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n7.\nPERFORMANCE EVALUATION\n7.1 Comparison of Learning Algorithms\n7.2 Comparison of Similarity Metrics\n8.\nDISCUSSION\nWe review the recent literature in comparison to our work.\nTama et al. [16] propose a new approach based on ontology for negotiation.\nAccording to their approach, the negotiation protocols used in e-commerce can be modeled as ontologies.\nThus, the agents can perform negotiation protocol by using this shared ontology without the need of being hard coded of negotiation protocol details.\nWhile\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1307\nTable 5: Comparison of similarity metrics in terms of number of interactions\nTama et al. model the negotiation protocol using ontologies, we have instead modeled the service to be negotiated.\nFurther, we have built a system with which negotiation preferences can be learned.\nSadri et al. 
study negotiation in the context of resource allocation [14].\nAgents have limited resources and need to require missing resources from other agents.\nA mechanism which is based on dialogue sequences among agents is proposed as a solution.\nThe mechanism relies on observe-think-action agent cycle.\nThese dialogues include offering resources, resource exchanges and offering alternative resource.\nEach agent in the system plans its actions to reach a goal state.\nContrary to our approach, Sadri et al.'s study is not concerned with learning preferences of each other.\nBrzostowski and Kowalczyk propose an approach to select an appropriate negotiation partner by investigating previous multi-attribute negotiations [1].\nFor achieving this, they use case-based reasoning.\nTheir approach is probabilistic since the behavior of the partners can change at each iteration.\nIn our approach, we are interested in negotiation the content of the service.\nAfter the consumer and producer agree on the service, price-oriented negotiation mechanisms can be used to agree on the price.\nFatima et al. study the factors that affect the negotiation such as preferences, deadline, price and so on, since the agent who develops a strategy against its opponent should consider all of them [5].\nIn their approach, the goal of the seller agent is to sell the service for the highest possible price whereas the goal of the buyer agent is to buy the good with the lowest possible price.\nTime interval affects these agents differently.\nCompared to Fatima et al. our focus is different.\nWhile they study the effect of time on negotiation, our focus is on learning preferences for a successful negotiation.\nFaratin et al. 
propose a multi-issue negotiation mechanism, where the service variables for the negotiation, such as price, quality of the service, and so on, are traded off against each other (i.e., a higher price for earlier delivery) [4]. They build a heuristic model for trade-offs, combining fuzzy similarity estimation with a hill-climbing exploration of possibly acceptable offers. Although we address a similar problem, we learn the preferences of the customer with the help of inductive learning and generate counter-offers in accordance with these learned preferences. Faratin et al. use only the last offer made by the consumer in calculating the similarity for choosing a counter-offer. Unlike them, we also take into account the previous requests of the consumer. In their experiments, Faratin et al. assume that the weights for service variables are fixed a priori. On the contrary, we learn these preferences over time. In our future work, we plan to integrate ontology reasoning into the learning algorithm so that hierarchical information can be learned from the subsumption hierarchy of relations. Further, by using relationships among features, the producer can discover new knowledge from existing knowledge. These are interesting directions that we will pursue in our future work.

Learning Consumer Preferences Using Semantic Similarity
Reyhan Aydoğan, Pınar Yolum

ABSTRACT
In online, dynamic environments, the services requested by consumers may not be readily served by the providers. This requires the service consumers and providers to negotiate their service needs and offers. Multiagent negotiation approaches typically assume that the parties agree on service content and focus on finding a consensus on service price. In contrast, this work develops an approach through which the parties can negotiate the content of a service. This calls for a negotiation approach in which the parties can understand the semantics of their requests and offers and learn each other's preferences incrementally over time. Accordingly, we propose an architecture in which both consumers and producers use a shared ontology to negotiate a service. Through repetitive interactions, the provider learns consumers' needs accurately and can make better targeted offers. To enable fast and accurate learning of preferences, we develop an extension to Version Space and compare it with existing learning techniques. We further develop a metric for measuring
semantic similarity between services and compare the performance of our approach using different similarity metrics.

1. INTRODUCTION
Current approaches to e-commerce treat service price as the primary construct for negotiation, assuming that the service content is fixed [9]. However, negotiation on price presupposes that the other properties of the service have already been agreed upon. Many times the service provider may not be offering the exact requested service, due to lack of resources, constraints in its business policy, and so on [3]. When this is the case, the producer and the consumer need to negotiate the content of the requested service [15]. Yet most existing negotiation approaches assume that all features of a service are equally important and concentrate on the price [5, 2]. In reality, not all features may be relevant, and the relevance of a feature may vary from consumer to consumer. For instance, the completion time of a service may be important for one consumer, whereas the quality of the service may be more important for another. Without doubt, considering the preferences of the consumer has a positive impact on the negotiation process. For this purpose, evaluating the service components with different weights can be useful. Some studies take these weights as given a priori and use fixed weights [4]. In most cases, however, the producer does not know the consumer's preferences before the negotiation. Hence, it is more appropriate for the producer to learn these preferences for each consumer.

Preference Learning: We propose an architecture in which the service providers learn the relevant features of a service for a particular customer over time. We represent service requests as vectors of service features. We use an ontology in order to capture the relations between services and to construct the features for a given service. By using a common ontology, we enable the consumers and producers to share a common vocabulary for negotiation. The particular service we have used is a wine-selling service: the wine seller learns the wine preferences of the customer in order to sell better targeted wines. The producer models the requests of the consumer and its counter-offers to learn which features are more important for the consumer. Since no information is present before the interactions start, the learning algorithm has to be incremental, so that it can be trained at run time and can revise itself with each new interaction.

Service Generation: Even after the producer learns the important features for a consumer, it needs a method to generate offers that are the most relevant for the consumer among its set of possible services. In other words, the question is how the producer uses the information learned from the dialogues to make the best offer to the consumer. For instance, assume that the producer has learned that the consumer wants to buy a red wine, but the producer can only offer rose or white wine. What should the producer's offer contain: white wine or rose wine? If the producer has some domain knowledge about semantic similarity (e.g., it knows that red and rose wines are taste-wise more similar than white wine), then it can generate better offers. However, in addition to domain knowledge, this derivation requires appropriate metrics to measure the similarity between available services and learned preferences.

The rest of this paper is organized as follows: Section 2 explains our proposed architecture. Section 3 explains the learning algorithms that were studied to learn consumer preferences. Section 4 studies the different service-offering mechanisms. Section 5 contains the similarity metrics used in the experiments. The details of the developed system are analyzed in Section 6. Section 7 provides our experimental setup, test cases, and results. Finally, Section 8 discusses and compares our work with other related
work.

2. ARCHITECTURE
Our main components are consumer and producer agents, which communicate with each other to perform content-oriented negotiation. Figure 1 depicts our architecture. The consumer agent represents the customer and hence has access to the customer's preferences. The consumer agent generates requests in accordance with these preferences and negotiates with the producer based on them. Similarly, the producer agent has access to the producer's inventory and knows which wines are available. A shared ontology provides the necessary vocabulary and hence enables a common language for the agents. This ontology describes the content of the service. Further, since an ontology can represent concepts, their properties, and their relationships semantically, the agents can reason about the details of the service being negotiated. Since a service can be anything, such as selling a car or reserving a hotel room, the architecture is independent of the ontology used. However, to make our discussion concrete, we use the well-known Wine ontology [19], with some modifications, to illustrate our ideas and to test our system. The Wine ontology describes different types of wine and includes features such as the color, body, and winery of the wine. With this ontology, the service being negotiated between the consumer and the producer is that of selling wine.

Figure 1: Proposed Negotiation Architecture

The data repository in Figure 1 is used solely by the producer agent and holds the producer's inventory information: the products the producer owns, the number of each product, and the ratings of those products. Ratings indicate the popularity of the products among customers; they are used to decide which product will be offered when more than one product is equally similar to the request of the consumer agent.

The negotiation takes place in a turn-taking fashion, where the consumer agent starts the negotiation with a particular service request. The request is composed of significant features of the service; in the wine example, these features include color, winery, and so on. This is the particular wine that the customer is interested in purchasing. If the producer has the requested wine in its inventory, the producer offers the wine and the negotiation ends. Otherwise, the producer offers an alternative wine from the inventory. When the consumer receives a counter-offer from the producer, it evaluates it. If it is acceptable, the negotiation ends; otherwise, the customer generates a new request or sticks to the previous one. This process continues until some service is accepted by the consumer agent or all possible offers have been put forward to the consumer by the producer.

One of the crucial challenges of content-oriented negotiation is the automatic generation of counter-offers by the service producer. When the producer constructs its offer, it should consider three important things: the current request, the consumer's preferences, and the producer's available services. Both the consumer's current request and the producer's own available services are accessible by the producer. However, the consumer's preferences will in most cases not be available. Hence, the producer has to understand the needs of the consumer from their interactions and generate a counter-offer that is likely to be accepted by the consumer. This challenge can be studied in three stages:
• Preference Learning: How can the producers learn about each customer's preferences based on requests and counter-offers? (Section 3)
• Service Offering: How can the producers revise their offers based on the consumer's preferences that they have learned so far? (Section 4)
• Similarity Estimation: How can the producer agent estimate the similarity between the request and the available services? (Section 5)

3. PREFERENCE LEARNING
The requests of the consumer and the counter-offers of the producer are represented as vectors, where each element corresponds to the value of a feature. The requests of the consumers represent individual wine products, whereas their preferences are constraints over service features. For example, a consumer may have a preference for red wine, meaning that the consumer is willing to accept any wine offered by the producers as long as its color is red. Accordingly, the consumer generates a request where the color feature is set to red and the other features are set to arbitrary values, e.g., (Medium, Strong, Red).

At the beginning of the negotiation, the producer agent does not know the consumer's preferences and needs to learn them using information obtained from the dialogues between the producer and the consumer. The preferences denote the relative importance of the features of the services demanded by the consumer agents. For instance, the color of the wine may be important, so the consumer insists on buying a wine whose color is red and rejects all the offers involving a wine whose color is white or rose. On the contrary, the winery may not be as important as the color for this customer, so the consumer may have a tendency to accept wines from any winery as long as the color is red. To tackle this problem, we propose to use incremental learning algorithms [6]. This is necessary since no training data is available before the interactions start. We particularly investigate two approaches. The first is inductive learning, applied to learn the preferences as concepts; we elaborate on the Candidate Elimination Algorithm (CEA) for Version Space [10]. CEA is known to perform poorly if the information to be learned is disjunctive. Interestingly, consumer preferences are often disjunctive: an agent buying wine may prefer red wine or rose wine, but not white wine. To use CEA with such preferences, a solid modification is necessary. The second approach is decision trees, which can learn from examples easily and classify new instances as positive or negative. A well-known incremental decision-tree algorithm is ID5R [18]. However, ID5R is known to suffer from high computational complexity. For this reason, we instead use the ID3 algorithm [13] and iteratively rebuild decision trees to simulate incremental learning.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07), 1302

Table 1: How DCEA works

3.1 CEA
CEA [10] is an inductive learning algorithm that learns concepts from observed examples. The algorithm maintains two sets to model the concept to be learned. The first is the most general set G, which contains hypotheses about all the possible values that the concept may take; as the name suggests, it is a generalization and contains all possible values except those that have been identified as not representing the concept. The second set is the most specific set S.
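As a rough illustration of this boundary-set bookkeeping (our own sketch over the three wine features, using the plain conjunctive CEA with a "?" wildcard, not the paper's DCEA):

```python
# Conjunctive version-space sketch over (body, flavor, color).
# Illustrative only: '?' matches any value, as in CEA's hypothesis language.

WILDCARD = "?"

def covers(hyp, example):
    # A hypothesis covers an example if every attribute matches or is '?'.
    return all(h == WILDCARD or h == e for h, e in zip(hyp, example))

def generalize(hyp, positive):
    # Minimal generalization of a specific hypothesis to cover a positive example.
    return tuple(h if h == p else WILDCARD for h, p in zip(hyp, positive))

G = [(WILDCARD, WILDCARD, WILDCARD)]   # most general boundary
S = ("Light", "Delicate", "Red")       # seeded by the first consumer request

# A second request (positive example) forces S to generalize.
S = generalize(S, ("Light", "Delicate", "White"))
print(S)  # -> ('Light', 'Delicate', '?'): plain CEA over-generalizes the color
assert covers(G[0], S)
```

Note how the color attribute jumps straight to "?": exactly this over-generalization is what motivates the inclusive/exclusive value lists of DCEA in Section 3.2.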
S contains only hypotheses that are known to identify the concept being learned. At the beginning of the algorithm, G is initialized to cover all possible concepts, while S is initialized to be empty. During the interactions, each request of the consumer can be considered a positive example, and each counter-offer generated by the producer and rejected by the consumer can be considered a negative example. At each interaction between the producer and the consumer, both G and S are modified. The negative samples enforce the specialization of some hypotheses so that G does not cover any hypothesis accepting the negative samples as positive. When a positive sample comes, the most specific set S is generalized in order to cover the new training instance. As a result, the most general and the most specific hypotheses cover all positive training samples but no negative ones. Incrementally, G specializes and S generalizes until G and S are equal to each other. When these sets are equal, the algorithm converges, having reached the target concept.

3.2 Disjunctive CEA
Unfortunately, CEA is primarily targeted at conjunctive concepts. However, we need to learn disjunctive concepts in the negotiation of a service, since a consumer may have several alternative wishes. There are several studies on learning disjunctive concepts via Version Space, some of which maintain multiple version spaces. For instance, Hong et al. maintain several version spaces through split and merge operations [7]: to learn disjunctive concepts, they create new version spaces by examining the consistency between G and S. We instead deal with CEA's lack of support for disjunctive concepts by extending our hypothesis language to include disjunctive hypotheses in addition to conjunctives and negation. Each attribute of a hypothesis has two parts: an inclusive list, which holds the valid values for that attribute, and an exclusive list, which holds the values that the attribute cannot take.

EXAMPLE 1. Assume that the most specific set is {(Light, Delicate, Red)} and a positive example (Light, Delicate, White) comes. The original CEA would generalize this to (Light, Delicate, ?), meaning that the color can take any value. However, in fact we only know that the color can be red or white. In DCEA, we generalize it to {(Light, Delicate, [White, Red])}. Only when all possible values exist in the list are they replaced by "?". In other words, we let the algorithm generalize more slowly than before.

We modify the CEA algorithm to deal with this change. The modified algorithm, DCEA, is given as Algorithm 1. Note that, compared to previous studies of disjunctive versions, our approach uses only a single version space rather than multiple version spaces. The initialization phase is the same as in the original algorithm (lines 1, 2). If a positive sample comes, we add the sample to the specific set as before (line 4). However, we do not eliminate the hypotheses in G that fail to cover this sample, since G now contains a disjunction of many hypotheses, some of which conflict with each other; removing a specific hypothesis from G would lose information, since the other hypotheses are not guaranteed to cover it. Over time, some hypotheses in S can be merged into a single hypothesis (lines 6, 7). When a negative sample comes, we do not change S as before; we only modify the most general hypotheses so as not to cover this negative sample (lines 11-15). Unlike the original CEA, we specialize G minimally. The algorithm removes each hypothesis covering the negative sample (line 13). Then, using the removed hypothesis, we generate as many new hypotheses as there are attributes: for each attribute of the negative sample, we add that attribute's value to the exclusive list of the removed hypothesis. Thus, all possible hypotheses that do not cover the negative sample are generated (line 14). Note that the exclusive list contains the values that the attribute cannot take. For example, consider the color attribute: if a hypothesis includes red in its exclusive list and "?" in its inclusive list, then the color may take any value except red.

Algorithm 1 Disjunctive Candidate Elimination Algorithm
1: G ← the set of maximally general hypotheses in H
2: S ← the set of maximally specific hypotheses in H
3: For each training example, d:
4: if d is a positive example then
5:   Add d to S
6:   if s in S can be combined with d to make one element then
7:     Combine s and d into sd  {sd is the rule that covers s and d}
8:   end if
9: end if
10: if d is a negative example then
11:   For each hypothesis g in G that covers d:
12:     {Assume g = (x1, x2, ..., xn) and d = (d1, d2, ..., dn)}
13:     Remove g from G
14:     Add hypotheses g1, g2, ..., gn, where g1 = (x1 − d1, x2, ..., xn), g2 = (x1, x2 − d2, ..., xn), ..., gn = (x1, x2, ..., xn − dn)
15:     Remove from G any hypothesis that is less general than another hypothesis in G
16: end if

EXAMPLE 2. Table 1 illustrates the first three interactions and the workings of DCEA. The most general set and the most specific set show the contents of G and S after each sample comes in. After the first positive sample, S is generalized to cover the instance. The second sample is negative. Thus, we replace (?, ?, ?) with three disjunctive hypotheses, each minimally specialized: at each step, one attribute value of the negative sample is applied to the hypothesis in the general set. The third sample is positive and generalizes S even more. Note that in Table 1 we do not eliminate {(? − Full), ?, ?} from the general set upon receiving a positive sample such as (Full, Strong, White). This stems from the possibility of using this rule in the generation of other hypotheses. For instance, if the example continues with a negative sample (Full, Strong, Red), we can specialize the previous rule to {(? − Full), ?, (? − Red)}. By Algorithm 1, we do not miss any information.

3.3 ID3
ID3 [13] is an algorithm that constructs decision trees top-down from observed examples represented as vectors of attribute-value pairs. Applying this algorithm to our system to learn the consumer's preferences is appropriate, since the algorithm supports learning disjunctive as well as conjunctive concepts. The ID3 algorithm is used in the learning process to classify offers into two classes: positive, meaning the service description will possibly be accepted by the consumer agent, and negative, meaning it will potentially be rejected. The consumer's requests are treated as positive training examples and all rejected counter-offers as negative ones. The decision tree has two types of nodes: leaf nodes, which hold the class labels of the instances, and non-leaf nodes, which hold test attributes. The test attribute in a non-leaf node is one of the attributes making up the service description; for instance, body, flavor, and color are potential test attributes for the wine service. When we want to find whether a given service description is acceptable, we
start searching from the root node, examining the values of the test attributes until reaching a leaf node. The problem with this algorithm is that it is not incremental, meaning that all training examples must exist before learning. To overcome this, the system keeps the consumer's requests throughout the negotiation as positive examples and all counter-offers rejected by the consumer as negative examples. After each new request, the decision tree is rebuilt. This reconstruction certainly brings additional processing load; in practice, however, we have found ID3 to be fast and the reconstruction cost negligible.

4. SERVICE OFFERING
After learning the consumer's preferences, the producer needs to make a counter-offer that is compatible with them.

4.1 Service Offering via CEA and DCEA
To generate the best offer, the producer agent uses its service ontology and the CEA algorithm. The service-offering mechanism is the same for the original CEA and for DCEA, but, as explained before, their methods for updating G and S differ. When the producer receives a request from the consumer, the producer's learner is trained with this request as a positive sample. The learned sets S and G are actively used in offering a service. The most general set G is used by the producer to avoid offering services that would be rejected by the consumer agent: it filters the undesired services from the service set, since G contains hypotheses that are consistent with the requests of the consumer. The most specific set S is used to find the best offer, i.e., the one most similar to the consumer's preferences. Since S holds the previous requests and the current request, estimating the similarity between this set and every service in the service list is a convenient way to find the best offer from the list.

When the consumer starts the interaction with the producer agent, the producer agent loads all related services into the service-list object; this list constitutes the provider's inventory of services. Upon receiving a request, if the producer can offer an exactly matching service, it does so. For a wine, this corresponds to selling a wine that matches the specified features of the consumer's request identically. When the producer cannot offer the service as requested, it tries to find the service most similar to the services the consumer has requested during the negotiation. To do this, the producer computes the similarity between the services it can offer and the services that have been requested (in S). We compute these similarities in various ways, as explained in Section 5. After the similarity of the available services to the current S is calculated, there may be more than one service with the maximum similarity. The producer agent can break the tie in a number of ways; here, we associate a rating value with each service, and the producer prefers the higher-rated service.

4.2 Service Offering via ID3
If the producer learns the consumer's preferences with ID3, a similar mechanism is applied, with two differences. First, since ID3 does not maintain G, the unaccepted services, classified as negative, are removed from the service list. Second, the similarities of possible services are measured not with respect to S, but with respect to all previously made requests.

4.3 Alternative Service Offering Mechanisms
In addition to these three service-offering mechanisms (Service Offering with CEA, Service Offering with DCEA, and Service Offering with ID3), we include two other mechanisms.
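The selection step of Section 4.1 can be sketched as follows. This is our own minimal illustration with hypothetical data, not the authors' implementation; similarity here is simply the fraction of matching attributes (Tversky similarity with equal weights), and the stored rating breaks similarity ties:

```python
# Sketch of the producer's offer selection: score each inventory service by
# its average similarity to the remembered requests; ties go to the higher
# rating. Services are (body, flavor, color) tuples; data is hypothetical.

def match_fraction(a, b):
    # Fraction of attributes on which the two feature vectors agree.
    common = sum(x == y for x, y in zip(a, b))
    return common / len(a)

def best_offer(inventory, requests):
    # inventory: list of (service_vector, rating); requests: past requests.
    def score(item):
        service, rating = item
        avg = sum(match_fraction(service, r) for r in requests) / len(requests)
        return (avg, rating)                 # rating only breaks similarity ties
    return max(inventory, key=score)[0]

requests = [("Light", "Delicate", "Red")]
inventory = [(("Light", "Strong", "Rose"), 5),
             (("Full", "Strong", "White"), 4),
             (("Light", "Delicate", "White"), 3)]
print(best_offer(inventory, requests))  # -> ('Light', 'Delicate', 'White')
```

Despite its lower rating, the last service wins because it matches two of the three requested attributes; the rating would only matter between equally similar services.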
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n\u2022 Random Service Offering (RO): The producer generates a counter offer randomly from the available service list, without considering the consumer's preferences.\n\u2022 Service Offering considering only the current request (SCR):\nThe producer selects a counter offer according to the similarity of the consumer's current request but does not consider previous requests.\n5.\nSIMILARITY ESTIMATION\nSimilarity can be estimated with a similarity metric that takes two entries and returns how similar they are.\nThere are several similarity metrics used in case-based reasoning systems, such as the weighted sum of Euclidean distance, Hamming distance, and so on [12].\nThe similarity metric affects the performance of the system when deciding which service is closest to the consumer's request.\nWe first analyze some existing metrics and then propose a new semantic similarity metric named RP Similarity.\n5.1 Tversky's Similarity Metric\nTversky's similarity metric compares two vectors in terms of the number of exactly matching features [17].\nIn Equation (1), common represents the number of matched attributes, whereas different represents the number of differing attributes.\nOur current assumption is that \u03b1 and \u03b2 are equal to each other.\nHere, when two features are compared, we assign zero for dissimilarity and one for similarity, omitting the semantic closeness among the feature values.\nTversky's similarity metric is designed to compare two feature vectors.\nIn our system, whereas each service that can be offered by the producer is a feature vector, the most specific set S is not a feature vector.\nS consists of hypotheses of feature vectors.\nTherefore, we estimate the similarity of each hypothesis inside the most specific set S and then take the average of the similarities.\nEXAMPLE 3.\nAssume that S contains the following two hypotheses: {{Light, Moderate, (Red, White)}, {Full, Strong, Rose}}.\nTake service s as (Light, Strong, Rose).\nThen the similarity to the first hypothesis is equal to 1\/3 and to the second is equal to 2\/3, in accordance with Equation (1).\nNormally, we would take the average and obtain (1\/3 + 2\/3) \/ 2, i.e., 1\/2.\nHowever, the first hypothesis reflects the effect of two requests whereas the second hypothesis reflects only one request.\nAs a result, we expect the effect of the first hypothesis to be greater than that of the second.\nTherefore, we calculate the average similarity by considering the number of samples that the hypotheses cover.\nLet c_h denote the number of samples that hypothesis h covers and SM(h, service) denote the similarity of hypothesis h with the given service.\nWe compute the similarity of each hypothesis with the given service and weight it by the number of samples the hypothesis covers.\nWe find the overall similarity by dividing the weighted sum of the similarities of all hypotheses in S with the service by the total number of samples covered in S.\nFigure 2: Sample taxonomy for similarity estimation\nEXAMPLE 4.\nFor the above example, the similarity of (Light, Strong, Rose) with the specific set is (2 * 1\/3 + 2\/3) \/ 3, i.e., 4\/9.\nThe number of samples that a hypothesis covers can be estimated by multiplying the cardinalities of its attributes.\nFor example, for the hypothesis {Light, Moderate, (Red, White)}, the cardinality of the first attribute is two and those of the others are equal to one.\nWhen we multiply them, we obtain two (2 * 1 * 1 = 2).\n5.2 Lin's Similarity Metric\nA taxonomy can be used when estimating the semantic similarity between two concepts.\nEstimating semantic similarity in an IS-A taxonomy can be done by calculating the distance between the nodes corresponding to the compared concepts.\nThe links among the nodes can be considered as distances.\nThen, the length of the path between the nodes indicates how similar the concepts are.\nAn alternative
approach, proposed by Lin [8], uses information content rather than edge counting in the estimation of semantic similarity.\nEquation (3) [8] gives Lin's similarity, sim(c1, c2) = 2 log P(c0) \/ (log P(c1) + log P(c2)), where c1 and c2 are the compared concepts and c0 is the most specific concept that subsumes both of them.\nHere, P(C) represents the probability that an arbitrarily selected object belongs to concept C.\n5.3 Wu & Palmer's Similarity Metric\nDifferent from Lin, Wu and Palmer use the distance between the nodes in an IS-A taxonomy [20].\nThe semantic similarity is given by Equation (4) [20], sim(c1, c2) = 2N0 \/ (N1 + N2 + 2N0).\nHere, the similarity between c1 and c2 is estimated, and c0 is the most specific concept subsuming these classes.\nN1 is the number of edges between c1 and c0.\nN2 is the number of edges between c2 and c0.\nN0 is the number of IS-A links of c0 from the root of the taxonomy.\n5.4 RP Semantic Metric\nWe propose to estimate the relative distance between two concepts in a taxonomy using the following intuitions.\nWe use Figure 2 to illustrate these intuitions.\n\u2022 Parent versus grandparent: The parent of a node is more similar to the node than its grandparent is.\nGeneralization of a concept reasonably moves further away from that concept; the more general concepts are, the less similar they are.\nFor example, AnyWineColor is the parent of ReddishColor and ReddishColor is the parent of Red.\nThen, we expect the similarity between ReddishColor and Red to be higher than the similarity between AnyWineColor and Red.\n\u2022 Parent versus sibling: A node has higher similarity to its parent than to its sibling.\nFor instance, Red and Rose are children of ReddishColor.\nIn this case, we expect the similarity between Red and ReddishColor to be higher than that between Red and Rose.\n\u2022 Sibling versus grandparent: A node is more similar to its sibling than to its grandparent.\nTo illustrate, AnyWineColor is the grandparent of Red, and Red and Rose are siblings.\nTherefore, we anticipate that Red and Rose are more similar than AnyWineColor and Red.\nAs a taxonomy is represented as a tree, that tree can be traversed from the first concept being compared to the second.\nAt the node of the first concept, the similarity value starts at one.\nThis value is diminished by a constant factor at each node visited along the path to the node of the second concept.\nThe shorter the path between the concepts, the higher the similarity between the nodes.\n1: Similarity \u2190 1\n2: if c1 is equal to c2 then\n3: Return Similarity\n4: end if\n5: commonParent \u2190 findCommonParent(c1, c2) {commonParent is the most specific concept that covers both c1 and c2}\n6: N1 \u2190 findDistance(commonParent, c1)\n7: N2 \u2190 findDistance(commonParent, c2) {N1 & N2 are the numbers of links between each concept and commonParent}\n8: if (commonParent == c1) or (commonParent == c2) then\n9: Similarity \u2190 Similarity * m^(N1 + N2)\n10: else\n11: Similarity \u2190 Similarity * n * m^(N1 + N2 \u2212 2)\n12: end if\n13: Return Similarity\nRelative distance between nodes c1 and
c2 is estimated in the following way.\nStarting from c1, the tree is traversed to reach c2.\nAt each hop, the similarity decreases since the concepts are getting farther away from each other.\nHowever, based on our intuitions, not all hops decrease the similarity equally.\nLet m represent the factor for hopping from a child to a parent and n represent the factor for hopping from a sibling to another sibling.\nSince hopping from a node to its grandparent counts as two parent hops, the discount factor of moving from a node to its grandparent is m^2.\nAccording to the above intuitions, our constants should satisfy m > n > m^2, where the values of m and n are between zero and one.\nAlgorithm 2 shows the distance calculation.\nAccording to the algorithm, first the similarity is initialized with the value of one (line 1).\nIf the concepts are equal to each other, then the similarity is one (lines 2-4).\nOtherwise, we compute the common parent of the two nodes and the distance of each concept to the common parent, without considering siblings (lines 5-7).\nIf one of the concepts is equal to the common parent, then there is no sibling relation between the concepts.\nFor each level, we multiply the similarity by m and do not consider the sibling factor in the similarity estimation.\nAs a result, we decrease the similarity at each level at the rate of m (line 9).\nOtherwise, there has to be a sibling relation.\nThis means that we have to consider the effect of n when measuring similarity.\nRecall that we have counted N1 + N2 edges between the concepts.\nSince there is a sibling relation, two of these edges constitute the sibling relation.\nHence, when calculating the effect of the parent relation, we use N1 + N2 \u2212 2 edges (line 11).\nSome similarity estimations related to the taxonomy in Figure 2 are given in Table 2.\nIn this example, m is taken as 2\/3 and n is taken as 4\/7.\nTable 2: Sample similarity estimation over sample taxonomy\nFor all semantic
similarity metrics in our architecture, the taxonomy for features is held in the shared ontology.\nIn order to evaluate the similarity of a feature vector, we first estimate the similarity feature by feature and then take the average of these similarities.\nThe result is then the average semantic similarity of the entire feature vector.\n6.\nDEVELOPED SYSTEM\nWe have implemented our architecture in Java.\nTo ease testing of the system, the consumer agent has a user interface that allows us to enter various requests.\nThe producer agent is fully automated, and the learning and service offering operations work as explained before.\nIn this section, we explain the implementation details of the developed system.\nWe use OWL [11] as our ontology language and JENA as our ontology reasoner.\nThe shared ontology is a modified version of the Wine Ontology [19].\nIt includes the description of wine as a concept and different types of wine.\nAll participants of the negotiation use this ontology for understanding each other.\nAccording to the ontology, seven properties make up the wine concept.\nThe consumer agent and the producer agent obtain the possible values for these properties by querying the ontology.\nThus, all possible values for the components of the wine concept, such as color, body, sugar, and so on, can be reached by both agents.\nA variety of wine types are also described in this ontology, such as Burgundy, Chardonnay, CheninBlanc, and so on.\nIntuitively, any wine type described in the ontology also represents a wine concept.\nThis allows us to consider instances of the Chardonnay class as instances of the Wine class.\nIn addition to the wine description, the hierarchical information of some features can be inferred from the ontology.\nFor instance, we can represent the information that Europe Continent covers Western Country.\nWestern Country covers French Region, which covers territories such as Loire, Bordeaux, and so on.\nThis hierarchical information is used
in the estimation of semantic similarity.\nHere, reasoning such as the following can be made: if a concept X covers Y and Y covers Z, then X covers Z. For example, Europe Continent covers Bordeaux.\nFor some features such as body, flavor, and sugar, there is no hierarchical information, but their values are semantically leveled.\nWhen that is the case, we assign reasonable similarity values for these features.\nFor example, the body can be light, medium, or strong.\nIn this case, we assume that light is 0.66 similar to medium but only 0.33 to strong.\nThe WineStock ontology is the producer's inventory and describes a product class, WineProduct.\nThis class is necessary for the producer to record the wines that it sells.\nThe ontology contains the individuals of this class.\nThe individuals represent the available services that the producer owns.\nWe have prepared two separate WineStock ontologies for testing.\nIn the first ontology, there are 19 available wine products, and in the second ontology, there are 50 products.\n7.\nPERFORMANCE EVALUATION\nWe evaluate the performance of the proposed systems with respect to the learning technique they use, DCEA or ID3, by comparing them with CEA, RO (random offering), and SCR (offering based on the current request only).\nWe apply a variety of scenarios to this dataset in order to see the performance differences.\nEach test scenario contains a list of preferences for the user and the number of matching products in the product list.\nTable 3 shows these preferences and the availability of those products in the inventory for the first five scenarios.\nNote that these preferences are internal to the consumer, and the producer tries to learn them during negotiation.\nTable 3: Availability of wines in different test scenarios\n7.1 Comparison of Learning Algorithms\nIn comparing the learning algorithms, we use the five scenarios in Table 3.\nHere, we first use
Tversky's similarity measure.\nWith these test cases, we are interested in finding the number of iterations that are required for the producer to generate an acceptable offer for the consumer.\nSince the performance also depends on the initial request, we repeat our experiments with different initial requests.\nConsequently, for each case, we run the algorithms five times with several variations of the initial requests.\nIn each experiment, we count the number of iterations that were needed to reach an agreement.\nWe take the average of these numbers in order to evaluate these systems fairly.\nAs is customary, we test each algorithm with the same initial requests.\nTable 4 compares the approaches using the different learning algorithms.\nWhen a large part of the inventory is compatible with the customer's preferences, as in the first test case, the performance of all techniques is nearly the same (e.g., Scenario 1).\nAs the number of compatible services drops, RO performs poorly, as expected.\nThe second worst method is SCR, since it only considers the customer's most recent request and does not learn from previous requests.\nCEA gives the best results when it can generate an answer, but it cannot handle the cases containing disjunctive preferences, such as the one in Scenario 5.\nID3 and DCEA achieve the best results.\nTheir performance is comparable, and they can handle all cases, including Scenario 5.\nTable 4: Comparison of learning algorithms in terms of average number of interactions\n7.2 Comparison of Similarity Metrics\nTo compare the similarity metrics that were explained in Section 5, we fix the learning algorithm to DCEA.\nIn addition to the scenarios shown in Table 3, we add the following five new scenarios, which take the hierarchical information into account.\n\u2022 The customer wants to buy wine whose winery is located in California and whose grape is a type of white grape.\nMoreover, the winery of the wine should not be expensive.\nThere are only four products meeting these
conditions.\n\u2022 The customer wants to buy wine whose color is red or rose and whose grape type is a red grape.\nIn addition, the location of the wine should be in Europe.\nThe desired sweetness degree is dry or off-dry.\nThe flavor should be delicate or moderate, while the body should be medium or light.\nFurthermore, the winery of the wine should be an expensive winery.\nThere are two products meeting all these requirements.\n\u2022 The customer wants to buy a moderate rose wine located around the French Region.\nThe category of the winery should be Moderate Winery.\nThere is only one product meeting these requirements.\n\u2022 The customer wants to buy expensive red wine located around the California Region, or cheap white wine located around the Texas Region.\nThere are five available products.\n\u2022 The customer wants to buy delicate white wine whose producer is in the category of Expensive Winery.\nThere are two available products.\nThe first seven scenarios are tested with the first dataset, which contains a total of 19 services, and the last three scenarios are tested with the second dataset, which contains 50 services.\nTable 5 gives the performance evaluation in terms of the number of interactions needed to reach a consensus.\nTversky's metric gives the worst results since it does not consider semantic similarity.\nLin's metric performs better than Tversky's but worse than the others.\nWu & Palmer's metric and the RP similarity measure give nearly the same performance and outperform the others.\nExamining the results shows that considering semantic closeness increases the performance.\n8.\nDISCUSSION\nWe review the recent literature in comparison to our work.\nTama et al.
[16] propose a new ontology-based approach for negotiation.\nAccording to their approach, the negotiation protocols used in e-commerce can be modeled as ontologies.\nThus, the agents can carry out a negotiation protocol by using this shared ontology, without the protocol details being hard-coded.\nWhile\nTable 5: Comparison of similarity metrics in terms of number of interactions\nTama et al. model the negotiation protocol using ontologies, we have instead modeled the service to be negotiated.\nFurther, we have built a system with which negotiation preferences can be learned.\nSadri et al. study negotiation in the context of resource allocation [14].\nAgents have limited resources and need to request missing resources from other agents.\nA mechanism based on dialogue sequences among agents is proposed as a solution.\nThe mechanism relies on an observe-think-action agent cycle.\nThese dialogues include offering resources, resource exchanges, and offering alternative resources.\nEach agent in the system plans its actions to reach a goal state.\nContrary to our approach, Sadri et al.'s study is not concerned with the agents learning each other's preferences.\nBrzostowski and Kowalczyk propose an approach for selecting an appropriate negotiation partner by investigating previous multi-attribute negotiations [1].\nTo achieve this, they use case-based reasoning.\nTheir approach is probabilistic, since the behavior of the partners can change at each iteration.\nIn our approach, we are interested in negotiating the content of the service.\nAfter the consumer and producer agree on the service, price-oriented negotiation mechanisms can be used to agree on the price.\nFatima et al.
study the factors that affect negotiation, such as preferences, deadlines, price, and so on, since an agent developing a strategy against its opponent should consider all of them [5].\nIn their approach, the goal of the seller agent is to sell the service for the highest possible price, whereas the goal of the buyer agent is to buy the good at the lowest possible price.\nThe time interval affects these agents differently.\nCompared to Fatima et al., our focus is different.\nWhile they study the effect of time on negotiation, our focus is on learning preferences for a successful negotiation.\nFaratin et al. propose a multi-issue negotiation mechanism, where the service variables for the negotiation, such as price, quality of service, and so on, are traded off against each other (i.e., higher price for earlier delivery) [4].\nThey generate a heuristic model for trade-offs, including fuzzy similarity estimation and a hill-climbing exploration for possibly acceptable offers.\nAlthough we address a similar problem, we learn the preferences of the customer with the help of inductive learning and generate counter-offers in accordance with these learned preferences.\nFaratin et al. use only the last offer made by the consumer in calculating the similarity when choosing a counter offer.\nUnlike them, we also take into account the previous requests of the consumer.\nIn their experiments, Faratin et al.
assume that the weights for service variables are fixed a priori.\nOn the contrary, we learn these preferences over time.\nIn our future work, we plan to integrate ontology reasoning into the learning algorithm so that hierarchical information can be learned from subsumption hierarchy of relations.\nFurther, by using relationships among features, the producer can discover new knowledge from the existing knowledge.\nThese are interesting directions that we will pursue in our future work.","keyphrases":["consum prefer","semant similar","servic","negoti","price","ontolog","similar metric","consum agent","data repositori","prefer learn","candid elimin algorithm","decis tree","increment decis tree","disjunct cea","multipl version space","disjunct hypothesi","id3","learn set","rp similar","induct learn"],"prmu":["P","P","P","P","P","P","P","M","U","R","U","U","M","U","M","U","U","M","M","M"]} {"id":"J-37","title":"Finding Equilibria in Large Sequential Games of Imperfect Information","abstract":"Finding an equilibrium of an extensive form game of imperfect information is a fundamental problem in computational game theory, but current techniques do not scale to large games. To address this, we introduce the ordered game isomorphism and the related ordered game isomorphic abstraction transformation. For a multi-player sequential game of imperfect information with observable actions and an ordered signal space, we prove that any Nash equilibrium in an abstracted smaller game, obtained by one or more applications of the transformation, can be easily converted into a Nash equilibrium in the original game. We present an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively. Its complexity is \u02dcO(n2 ), where n is the number of nodes in a structure we call the signal tree. It is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. 
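As a rough, self-contained illustration of what exhaustively merging equivalent subtrees in a signal-tree-like structure looks like, the following Python sketch collapses structurally identical sibling subtrees under a node, keeping a multiplicity count per merged subtree. This is deliberately not GameShrink itself: the ordered game isomorphism matches payoffs and probabilities, not just tree shape, and all names below are hypothetical.

```python
from collections import Counter

def canonical(node):
    # Canonical (hashable) form of a tree given as (label, children_list).
    label, children = node
    return (label, tuple(sorted(canonical(c) for c in children)))

def shrink(node):
    # Group structurally identical sibling subtrees, keeping a count per
    # merged subtree (standing in for the probability weight a real
    # lossless merge would accumulate).
    label, children = node
    return label, Counter(canonical(c) for c in children)

# Toy tree: two identical 'a' leaves and one 'b' subtree under the root.
tree = ("root", [("a", []), ("a", []), ("b", [("a", [])])])
label, merged = shrink(tree)
# the two identical ('a', []) leaves collapse into one entry with count 2
```

The count attached to each merged subtree plays the role of the aggregated chance probability that an abstraction must preserve so that an equilibrium of the shrunken game can be lifted back to the original one.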
Using GameShrink, we find an equilibrium to a poker game with 3.1 billion nodes-over four orders of magnitude more than in the largest poker game solved previously. We discuss several electronic commerce applications for GameShrink. To address even larger games, we introduce approximation methods that do not preserve equilibrium, but nevertheless yield (ex post) provably close-to-optimal strategies.","lvl-1":"Finding Equilibria in Large Sequential Games of Imperfect Information\u2217 Andrew Gilpin Computer Science Department Carnegie Mellon University Pittsburgh, PA, USA gilpin@cs.cmu.edu Tuomas Sandholm Computer Science Department Carnegie Mellon University Pittsburgh, PA, USA sandholm@cs.cmu.edu ABSTRACT Finding an equilibrium of an extensive form game of imperfect information is a fundamental problem in computational game theory, but current techniques do not scale to large games.\nTo address this, we introduce the ordered game isomorphism and the related ordered game isomorphic abstraction transformation.\nFor a multi-player sequential game of imperfect information with observable actions and an ordered signal space, we prove that any Nash equilibrium in an abstracted smaller game, obtained by one or more applications of the transformation, can be easily converted into a Nash equilibrium in the original game.\nWe present an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively.\nIts complexity is \u02dcO(n2 ), where n is the number of nodes in a structure we call the signal tree.\nIt is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree.\nUsing GameShrink, we find an equilibrium to a poker game with 3.1 billion nodes-over four orders of magnitude more than in the largest poker game solved previously.\nWe discuss several electronic commerce applications for GameShrink.\nTo address even larger games, we introduce 
approximation methods that do not preserve equilibrium, but nevertheless yield (ex post) provably close-to-optimal strategies.\nCategories and Subject Descriptors: I.2 [Artificial Intelligence], F. [Theory of Computation], J.4 [Social and Behavioral Sciences]: Economics.\nGeneral Terms: Algorithms, Economics, Theory.\n1.\nINTRODUCTION In environments with more than one agent, an agent``s outcome is generally affected by the actions of the other agent(s).\nConsequently, the optimal action of one agent can depend on the others.\nGame theory provides a normative framework for analyzing such strategic situations.\nIn particular, it provides solution concepts that define what rational behavior is in such settings.\nThe most famous and important solution concept is that of Nash equilibrium [36].\nIt is a strategy profile (one strategy for each agent) in which no agent has incentive to deviate to a different strategy.\nHowever, for the concept to be operational, we need algorithmic techniques for finding an equilibrium.\nGames can be classified as either games of perfect information or imperfect information.\nChess and Go are examples of the former, and, until recently, most game playing work has been on games of this type.\nTo compute an optimal strategy in a perfect information game, an agent traverses the game tree and evaluates individual nodes.\nIf the agent is able to traverse the entire game tree, she simply computes an optimal strategy from the bottom-up, using the principle of backward induction.1 In computer science terms, this is done using minimax search (often in conjunction with \u03b1\u03b2-pruning to reduce the search tree size and thus enhance speed).\nMinimax search runs in linear time in the size of the game tree.2 The differentiating feature of games of imperfect information, such as poker, is that they are not fully observable: when it is an agent``s turn to move, she does not have access to all of the information about the world.\nIn such games, the 
decision of what to do at a point in time cannot generally be optimally made without considering decisions at all other points in time (including ones on other paths of play) because those other decisions affect the probabilities of being at different states at the current point in time.\nThus the algorithms for perfect information games do not solve games of imperfect information.\nFor sequential games with imperfect information, one could try to find an equilibrium using the normal (matrix) form, where every contingency plan of the agent is a pure strategy for the agent.3 Unfortunately (even if equivalent strategies are replaced by a single strategy [27]) this representation is generally exponential in the size of the game tree [52].\nBy observing that one needs to consider only sequences of moves rather than pure strategies [41, 46, 22, 52], one arrives at a more compact representation, the sequence form, which is linear in the size of the game tree.4 For 2-player games, there is a polynomial-sized (in the size of the game tree) linear programming formulation (linear complementarity in the non-zero-sum case) based on the sequence form such that strategies for players 1 and 2 correspond to primal and dual variables.\nThus, the equilibria of reasonable-sized 2-player games can be computed using this method [52, 24, 25].5 However, this approach still yields enormous (unsolvable) optimization problems for many real-world games, such as poker.\n1 This actually yields a solution that satisfies not only the Nash equilibrium solution concept, but a stronger solution concept called subgame perfect Nash equilibrium [45].\n2 This type of algorithm still does not scale to huge trees (such as in chess or Go), but effective game-playing agents can be developed even then by evaluating intermediate nodes using a heuristic evaluation and then treating those nodes as leaves.\n3 An \u03b5-equilibrium in a normal form game with any constant number of agents can be constructed in quasipolynomial time [31], but finding an exact equilibrium is PPAD-complete even in a 2-player game [8].\nThe most prevalent algorithm for finding an equilibrium in a 2-agent game is Lemke-Howson [30], but it takes exponentially many steps in the worst case [44].\nFor a survey of equilibrium computation in 2-player games, see [53].\nRecently, equilibrium-finding algorithms that enumerate supports (i.e., sets of pure strategies that are played with positive probability) have been shown efficient on many games [40], and efficient mixed integer programming algorithms that search in the space of supports have been developed [43].\nFor more than two players, many algorithms have been proposed, but they currently only scale to very small games [19, 34, 40].\n4 There were also early techniques that capitalized in different ways on the fact that in many games the vast majority of pure strategies are not played in equilibrium [54, 23].\n5 Recently this approach was extended to handle computing sequential equilibria [26] as well [35].\n1.1 Our approach In this paper, we take a different approach to tackling the difficult problem of equilibrium computation.\nInstead of developing an equilibrium-finding method per se, we instead develop a methodology for automatically abstracting games in such a way that any equilibrium in the smaller (abstracted) game corresponds directly to an equilibrium in the original game.\nThus, by computing an equilibrium in the smaller game (using any available equilibrium-finding algorithm), we are able to construct an equilibrium in the original game.\nThe motivation is that an equilibrium for the smaller game can be computed drastically faster than for the original game.\nTo this end, we introduce games with ordered signals (Section 2), a broad class of games that has enough structure which we can exploit for abstraction purposes.\nInstead of operating directly on the game tree (something we found to be technically challenging), we instead introduce the use of information filters (Section 2.1), which coarsen the information each player receives.\nThey are used in our analysis and abstraction algorithm.\nBy operating only in the space of filters, we are able to keep the strategic structure of the game intact, while abstracting out details of the game in a way that is lossless from the perspective of equilibrium finding.\nWe introduce the ordered game isomorphism to describe strategically symmetric situations and the ordered game isomorphic abstraction transformation to take advantage of such symmetries (Section 3).\nAs our main equilibrium result we have the following:\nTheorem 2 Let \u0393 be a game with ordered signals, and let F be an information filter for \u0393.\nLet F' be an information filter constructed from F by one application of the ordered game isomorphic abstraction transformation, and let \u03c3' be a Nash equilibrium strategy profile of the induced game \u0393F' (i.e., the game \u0393 using the filter F').\nIf \u03c3 is constructed by using the corresponding strategies of \u03c3', then \u03c3 is a Nash equilibrium of \u0393F.\nThe proof of the theorem uses an equivalent characterization of Nash equilibria: \u03c3 is a Nash equilibrium if and only if there exist beliefs \u03bc (players' beliefs about unknown information) at all points of the game reachable by \u03c3 such that \u03c3 is sequentially rational (i.e., a best response) given \u03bc, where \u03bc is updated using Bayes' rule.\nWe can then use the fact that \u03c3' is a Nash equilibrium to show that \u03c3 is a Nash equilibrium considering only local properties of the game.\nWe also give an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively (Section 4).\nIts complexity is \u02dcO(n^2), where n is the number of nodes in a structure we call the signal tree.\nIt is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size
of the game tree.\nWe present several algorithmic and data structure related speed improvements (Section 4.1), and we demonstrate how a simple modification to our algorithm yields an approximation algorithm (Section 5).\n1.2 Electronic commerce applications Sequential games of imperfect information are ubiquitous, for example in negotiation and in auctions.\nOften aspects of a player``s knowledge are not pertinent for deciding what action the player should take at a given point in the game.\nOn the trivial end, some aspects of a player``s knowledge are never pertinent (e.g., whether it is raining or not has no bearing on the bidding strategy in an art auction), and such aspects can be completely left out of the model specification.\nHowever, some aspects can be pertinent in certain states of the game while they are not pertinent in other states, and thus cannot be left out of the model completely.\nFurthermore, it may be highly non-obvious which aspects are pertinent in which states of the game.\nOur algorithm automatically discovers which aspects are irrelevant in different states, and eliminates those aspects of the game, resulting in a more compact, equivalent game representation.\nOne broad application area that has this property is sequential negotiation (potentially over multiple issues).\nAnother broad application area is sequential auctions (potentially over multiple goods).\nFor example, in those states of a 1-object auction where bidder A can infer that his valuation is greater than that of bidder B, bidder A can ignore all his other information about B``s signals, although that information would be relevant for inferring B``s exact valuation.\nFurthermore, in some states of the auction, a bidder might not care which exact other bidders have which valuations, but cares about which valuations are held by the other bidders in aggregate (ignoring their identities).\nMany open-cry sequential auction and negotiation mechanisms fall within the game model 
studied in this paper (specified in detail later), as do certain other games in electronic commerce, such as sequences of take-it-or-leave-it offers [42]. Our techniques are in no way specific to an application.
The main experiment that we present in this paper is on a recreational game. We chose a particular poker game as the benchmark problem because it yields an extremely complicated and enormous game tree, it is a game of imperfect information, it is fully specified as a game (and the data is available), and it has been posted as a challenge problem by others [47] (to our knowledge no such challenge problem instances have been proposed for electronic commerce applications that require solving sequential games).
1.3 Rhode Island Hold'em poker
Poker is an enormously popular card game played around the world. The 2005 World Series of Poker had over $103 million in total prize money, including $56 million for the main event. Increasingly, poker players compete in online casinos, and television stations regularly broadcast poker tournaments. Poker has been identified as an important research area in AI due to the uncertainty stemming from opponents' cards, opponents' future actions, and chance moves, among other reasons [5]. Almost since the field's founding, game theory has been used to analyze different aspects of poker [28; 37; 3; 51, pp.
186-219]. However, this work was limited to tiny games that could be solved by hand. More recently, AI researchers have been applying the computational power of modern hardware to computing game theory-based strategies for larger games. Koller and Pfeffer determined solutions to poker games with up to 140,000 nodes using the sequence form and linear programming [25]. Large-scale approximations have been developed [4], but those methods do not provide any guarantees about the performance of the computed strategies. Furthermore, the approximations were designed manually by a human expert. Our approach yields an automated abstraction mechanism along with theoretical guarantees on the strategies' performance.
Rhode Island Hold'em was invented as a testbed for computational game playing [47]. It was designed so that it was similar in style to Texas Hold'em, yet not so large that devising reasonably intelligent strategies would be impossible. (The rules of Rhode Island Hold'em, as well as a discussion of how Rhode Island Hold'em can be modeled as a game with ordered signals, that is, how it fits in our model, are available in an extended version of this paper [13].)
We applied the techniques developed in this paper to find an exact (minimax) solution to Rhode Island Hold'em, which has a game tree exceeding 3.1 billion nodes. Applying the sequence form to Rhode Island Hold'em directly without abstraction yields a linear program with 91,224,226 rows and the same number of columns. This is much too large for (current) linear programming algorithms to handle. We used our GameShrink algorithm to reduce this with lossless abstraction, and it yielded a linear program with 1,237,238 rows and columns, with 50,428,638 non-zero coefficients. We then applied iterated elimination of dominated strategies, which further reduced this to 1,190,443 rows and 1,181,084 columns. (Applying iterated elimination of dominated strategies without GameShrink yielded 89,471,986 rows
and 89,121,538 columns, which still would have been prohibitively large to solve.) GameShrink required less than one second to perform the shrinking (i.e., to compute all of the ordered game isomorphic abstraction transformations). Using a 1.65GHz IBM eServer p5 570 with 64 gigabytes of RAM (the linear program solver actually needed 25 gigabytes), we solved it in 7 days and 17 hours using the interior-point barrier method of CPLEX version 9.1.2. We recently demonstrated our optimal Rhode Island Hold'em poker player at the AAAI-05 conference [14], and it is available for play on-line at http://www.cs.cmu.edu/~gilpin/gsi.html. While others have worked on computer programs for playing Rhode Island Hold'em [47], no optimal strategy has been found before. This is the largest poker game solved to date by over four orders of magnitude.
2. GAMES WITH ORDERED SIGNALS
We work with a slightly restricted class of games, as compared to the full generality of the extensive form. This class, which we call games with ordered signals, is highly structured, but still general enough to capture a wide range of strategic situations. A game with ordered signals consists of a finite number of rounds. Within a round, the players play a game on a directed tree (the tree can be different in different rounds). The only uncertainty players face stems from private signals the other players have received and from the unknown future signals. In other words, players observe each others' actions, but potentially not nature's actions. In each round, there can be public signals (announced to all players) and private signals (confidentially communicated to individual players). For simplicity, we assume, as is the case in most recreational games, that within each round the number of private signals received is the same across players (this could quite likely be relaxed). We also assume that the legal actions that a player has are independent of the signals received. For
example, in poker, the legal betting actions are independent of the cards received. Finally, the strongest assumption is that there is a partial ordering over sets of signals, and the payoffs are increasing (not necessarily strictly) in these signals. For example, in poker, this partial ordering corresponds exactly to the ranking of card hands.
Definition 1. A game with ordered signals is a tuple Γ = ⟨I, G, L, Θ, κ, γ, p, ⪰, ω, u⟩ where:
1. I = {1, ..., n} is a finite set of players.
2. G = ⟨G^1, ..., G^r⟩, G^j = (V^j, E^j), is a finite collection of finite directed trees with nodes V^j and edges E^j. Let Z^j denote the leaf nodes of G^j and let N^j(v) denote the outgoing neighbors of v ∈ V^j. G^j is the stage game for round j.
3. L = ⟨L^1, ..., L^r⟩, L^j : V^j \ Z^j → I indicates which player acts (chooses an outgoing edge) at each internal node in round j.
4. Θ is a finite set of signals.
5. κ = ⟨κ^1, ..., κ^r⟩ and γ = ⟨γ^1, ..., γ^r⟩ are vectors of nonnegative integers, where κ^j and γ^j denote the number of public and private signals (per player), respectively, revealed in round j. Each signal θ ∈ Θ may only be revealed once, and in each round every player receives the same number of private signals, so we require $\sum_{j=1}^{r} (\kappa^j + n\gamma^j) \le |\Theta|$. The public information revealed in round j is α^j ∈ Θ^{κ^j} and the public information revealed in all rounds up through round j is α̃^j = (α^1, ..., α^j). The private information revealed to player i ∈ I in round j is β^j_i ∈ Θ^{γ^j} and the private information revealed to player i ∈ I in all rounds up through round j is β̃^j_i = (β^1_i, ..., β^j_i). We also write β̃^j = (β̃^j_1, ..., β̃^j_n) to represent all private information up through round j, and (β̂^j_i, β̃^j_{−i}) = (β̃^j_1, ..., β̃^j_{i−1}, β̂^j_i, β̃^j_{i+1}, ..., β̃^j_n) is β̃^j with β̃^j_i replaced with β̂^j_i. The total information revealed up through round j, (α̃^j, β̃^j), is said to be legal if no signals are repeated.
6. p is a probability distribution over Θ, with p(θ) > 0 for all θ ∈ Θ. Signals are drawn from Θ according to p without replacement, so if X is the set of signals already revealed, then
$p(x \mid X) = \begin{cases} \frac{p(x)}{\sum_{y \notin X} p(y)} & \text{if } x \notin X \\ 0 & \text{if } x \in X. \end{cases}$
7. ⪰ is a partial ordering of subsets of Θ and is defined for at least those pairs required by u.
8. ω : $\bigcup_{j=1}^{r} Z^j$ → {over, continue} is a mapping of terminal nodes within a stage game to one of two values: over, in which case the game ends, or continue, in which case the game continues to the next round. Clearly, we require ω(z) = over for all z ∈ Z^r. Note that ω is independent of the signals. Let ω^j_over = {z ∈ Z^j | ω(z) = over} and ω^j_cont = {z ∈ Z^j | ω(z) = continue}.
9. u = (u^1, ..., u^r), $u^j : \prod_{k=1}^{j-1} \omega^k_{cont} \times \omega^j_{over} \times \prod_{k=1}^{j} \Theta^{\kappa^k} \times \prod_{i=1}^{n} \prod_{k=1}^{j} \Theta^{\gamma^k} \to \mathbb{R}^n$ is a utility function such that for every j, 1 ≤ j ≤ r, for every i ∈ I, and for every z̃ ∈ $\prod_{k=1}^{j-1} \omega^k_{cont} \times \omega^j_{over}$, at least one of the following two conditions holds:
(a) Utility is signal independent: u^j_i(z̃, ϑ) = u^j_i(z̃, ϑ′) for all legal ϑ, ϑ′ ∈ $\prod_{k=1}^{j} \Theta^{\kappa^k} \times \prod_{i=1}^{n} \prod_{k=1}^{j} \Theta^{\gamma^k}$.
(b) ⪰ is defined for all legal signals (α̃^j, β̃^j_i), (α̃^j, β̂^j_i) through round j and a player's utility is increasing in her private signals, everything else equal: (α̃^j, β̃^j_i) ⪰ (α̃^j, β̂^j_i) ⟹ u_i(z̃, α̃^j, (β̃^j_i, β̃^j_{−i})) ≥ u_i(z̃, α̃^j, (β̂^j_i, β̃^j_{−i})).
We will use the term game with ordered signals and the term ordered game interchangeably.
2.1 Information filters
In this subsection, we define an information filter for ordered games. Instead of completely revealing a signal (either public or private) to a player, the signal first passes through this filter, which outputs a coarsened signal to the player. By varying the filter applied to a game, we are able to obtain a wide variety of games while keeping the underlying action space of the game intact. We will use this when designing our abstraction techniques. Formally, an information filter is as follows.
Definition 2. Let Γ = ⟨I, G, L, Θ, κ, γ, p, ⪰, ω, u⟩ be an ordered game. Let $S^j \subseteq \prod_{k=1}^{j} \Theta^{\kappa^k} \times \prod_{k=1}^{j} \Theta^{\gamma^k}$ be the set of legal signals (i.e., no repeated signals) for one player through round j. An information filter for Γ is a collection F = ⟨F^1, ..., F^r⟩ where each F^j is a function F^j : S^j → 2^{S^j} such that each of the following conditions holds:
1. (Truthfulness) (α̃^j, β̃^j_i) ∈ F^j(α̃^j, β̃^j_i) for all legal (α̃^j, β̃^j_i).
2. (Independence) The range of F^j is a partition of S^j.
3. (Information preservation) If two values of a signal are distinguishable in round k, then they are distinguishable for each round j > k. Let $m^j = \sum_{l=1}^{j} (\kappa^l + \gamma^l)$. We require that for all legal (θ_1, ..., θ_{m^k}, ..., θ_{m^j}) ⊆ Θ and (θ′_1, ..., θ′_{m^k}, ..., θ′_{m^j}) ⊆ Θ:
(θ′_1, ..., θ′_{m^k}) ∉ F^k(θ_1, ..., θ_{m^k}) ⟹ (θ′_1, ..., θ′_{m^k}, ..., θ′_{m^j}) ∉ F^j(θ_1, ..., θ_{m^k}, ..., θ_{m^j}).
A game with ordered signals Γ and an information filter F for Γ defines a new game Γ_F. We refer to such games as filtered ordered games. We are left with the original game if we use the identity filter F^j(α̃^j, β̃^j_i) = {(α̃^j, β̃^j_i)}. We have the following simple (but important) result:
Proposition 1. A filtered ordered game is an extensive form game satisfying perfect recall.
A simple proof proceeds by constructing an extensive form game directly from the ordered game, and showing that it satisfies perfect recall. In determining the payoffs in a game with filtered signals, we take the average over all real signals in the filtered class, weighted by the probability of each real signal occurring.
2.2 Strategies and Nash equilibrium
We are now ready to define behavior strategies in the context of filtered ordered games.
Definition 3. A behavior strategy for player i in round j of Γ = ⟨I, G, L, Θ, κ, γ, p, ⪰, ω, u⟩ with information filter F is a probability distribution over possible actions, and is defined for each player i, each round j, and each v ∈ V^j \ Z^j for L^j(v) = i:
$\sigma^j_{i,v} : \prod_{k=1}^{j-1} \omega^k_{cont} \times \mathrm{Range}(F^j) \to \Delta\left(\{w \in V^j \mid (v, w) \in E^j\}\right).$
(Δ(X) is the set of probability distributions over a finite set X.) A behavior strategy for player i in round j is σ^j_i = (σ^j_{i,v_1}, ..., σ^j_{i,v_m}) for each v_k ∈ V^j \ Z^j where L^j(v_k) = i. A behavior strategy for player i in Γ is σ_i = (σ^1_i, ..., σ^r_i). A strategy profile is σ = (σ_1, ..., σ_n). A strategy profile with σ_i replaced by σ′_i is (σ′_i, σ_{−i}) = (σ_1, ..., σ_{i−1}, σ′_i, σ_{i+1}, ..., σ_n).
By an abuse of notation, we will say player i receives an expected payoff of u_i(σ) when all players are playing the strategy profile σ. Strategy σ_i is said to be player i's best response to σ_{−i} if for all other strategies σ′_i for player i we have u_i(σ_i, σ_{−i}) ≥ u_i(σ′_i, σ_{−i}). σ is a Nash equilibrium if, for every player i, σ_i is a best response to σ_{−i}. A Nash equilibrium always exists in finite extensive form games [36], and one exists in behavior strategies for games with perfect recall [29]. Using these observations, we have the following corollary to Proposition 1:
Corollary 1. For any filtered ordered game, a Nash equilibrium exists in behavior strategies.
3. EQUILIBRIUM-PRESERVING ABSTRACTIONS
In this section, we present our main technique for reducing the size of games. We begin by defining a filtered signal tree which represents all of the chance moves in the game. The bold edges (i.e.
the first two levels of the tree) in the game trees in Figure 1 correspond to the filtered signal trees in each game.
Definition 4. Associated with every ordered game Γ = ⟨I, G, L, Θ, κ, γ, p, ⪰, ω, u⟩ and information filter F is a filtered signal tree, a directed tree in which each node corresponds to some revealed (filtered) signals and edges correspond to revealing specific (filtered) signals. The nodes in the filtered signal tree represent the set of all possible revealed filtered signals (public and private) at some point in time. The filtered public signals revealed in round j correspond to the nodes in the κ^j levels beginning at level $\sum_{k=1}^{j-1} (\kappa^k + n\gamma^k)$ and the private signals revealed in round j correspond to the nodes in the nγ^j levels beginning at level $\sum_{k=1}^{j} \kappa^k + \sum_{k=1}^{j-1} n\gamma^k$. We denote children of a node x as N(x). In addition, we associate weights with the edges corresponding to the probability of the particular edge being chosen given that its parent was reached.
In many games, there are certain situations in the game that can be thought of as being strategically equivalent to other situations in the game. By melding these situations together, it is possible to arrive at a strategically equivalent smaller game. The next two definitions formalize this notion via the introduction of the ordered game isomorphic relation and the ordered game isomorphic abstraction transformation.
Definition 5. Two subtrees beginning at internal nodes x and y of a filtered signal tree are ordered game isomorphic if x and y have the same parent and there is a bijection f : N(x) → N(y), such that for w ∈ N(x) and v ∈ N(y), v = f(w) implies the weights on the edges (x, w) and (y, v) are the same and the subtrees beginning at w and v are ordered game isomorphic. Two leaves (corresponding to filtered signals ϑ and ϑ′ up through round r) are ordered game isomorphic if for all z̃ ∈ $\prod_{j=1}^{r-1} \omega^j_{cont} \times \omega^r_{over}$, u^r(z̃, ϑ) = u^r(z̃, ϑ′).
Definition 6. Let Γ = ⟨I, G, L, Θ, κ, γ, p, ⪰, ω, u⟩ be an ordered game and let F be an information filter for Γ. Let ϑ and ϑ′ be two nodes where the subtrees in the induced filtered signal tree corresponding to the nodes ϑ and ϑ′ are ordered game isomorphic, and ϑ and ϑ′ are at either level $\sum_{k=1}^{j-1} (\kappa^k + n\gamma^k)$ or $\sum_{k=1}^{j} \kappa^k + \sum_{k=1}^{j-1} n\gamma^k$ for some round j. The ordered game isomorphic abstraction transformation is given by creating a new information filter F′:
$F'^j(\tilde\alpha^j, \tilde\beta^j_i) = \begin{cases} F^j(\tilde\alpha^j, \tilde\beta^j_i) & \text{if } (\tilde\alpha^j, \tilde\beta^j_i) \notin \vartheta \cup \vartheta' \\ \vartheta \cup \vartheta' & \text{if } (\tilde\alpha^j, \tilde\beta^j_i) \in \vartheta \cup \vartheta'. \end{cases}$
Figure 1 shows the ordered game isomorphic abstraction transformation applied twice to a tiny poker game. Theorem 2, our main equilibrium result, shows how the ordered game isomorphic abstraction transformation can be used to compute equilibria faster.
Theorem 2. Let Γ = ⟨I, G, L, Θ, κ, γ, p, ⪰, ω, u⟩ be an ordered game and F be an information filter for Γ. Let F′ be an information filter constructed from F by one application of the ordered game isomorphic abstraction transformation. Let σ′ be a Nash equilibrium of the induced game Γ_F′. If we take σ^j_{i,v}(z̃, F^j(α̃^j, β̃^j_i)) = σ′^j_{i,v}(z̃, F′^j(α̃^j, β̃^j_i)), then σ is a Nash equilibrium of Γ_F.
Proof. For an extensive form game, a belief system μ assigns a probability to every decision node x such that $\sum_{x \in h} \mu(x) = 1$ for every information set h. A strategy profile σ is sequentially rational at h given belief system μ if u_i(σ_i, σ_{−i} | h, μ) ≥ u_i(τ_i, σ_{−i} | h, μ) for all other strategies τ_i, where i is the player who controls h. A basic result [33, Proposition 9.C.1] characterizing Nash equilibria dictates that σ is a Nash equilibrium if and only if there is a belief system μ such that for every information set h with Pr(h | σ) > 0, the following two conditions hold: (C1) σ is sequentially rational at h given μ; and (C2) μ(x) = Pr(x | σ) / Pr(h | σ) for all x ∈ h. Since σ′ is a Nash equilibrium of Γ_F′, there exists such a belief system μ′ for Γ_F′. Using μ′, we will construct a belief system μ for Γ_F and show that conditions C1 and C2 hold, thus supporting σ as a Nash equilibrium.
Fix some player i ∈ I. Each of i's information sets in some round j corresponds to filtered signals F^j(α̃*^j, β̃*^j_i), history in the first j − 1 rounds (z_1, ..., z_{j−1}) ∈ $\prod_{k=1}^{j-1} \omega^k_{cont}$, and history so far in round j, v ∈ V^j \ Z^j. Let z̃ = (z_1, ..., z_{j−1}, v) represent all of the player actions leading to this information set. Thus, we can uniquely specify this information set using the information (F^j(α̃*^j, β̃*^j_i), z̃). Each node in an information set corresponds to the possible private signals the other players have received. Denote by β̂ some legal (F^j(α̃^j, β̃^j_1), ..., F^j(α̃^j, β̃^j_{i−1}), F^j(α̃^j, β̃^j_{i+1}), ..., F^j(α̃^j, β̃^j_n)). In other words, there exists (α̃^j, β̃^j_1, ..., β̃^j_n) such that (α̃^j, β̃^j_i) ∈ F^j(α̃*^j, β̃*^j_i), (α̃^j, β̃^j_k) ∈ F^j(α̃^j, β̃^j_k) for k ≠ i, and no signals are repeated. Using such a set of signals (α̃^j, β̃^j_1, ..., β̃^j_n), let β̂′ denote (F′^j(α̃^j, β̃^j_1), ..., F′^j(α̃^j, β̃^j_{i−1}), F′^j(α̃^j, β̃^j_{i+1}), ..., F′^j(α̃^j, β̃^j_n)). (We will abuse notation and write F′^j_{−i}(β̂) = β̂′.) We can now compute μ directly from μ′:
$\mu(\hat\beta \mid F^j(\tilde\alpha^j, \tilde\beta^j_i), \tilde z) = \begin{cases} \mu'(\hat\beta' \mid F'^j(\tilde\alpha^j, \tilde\beta^j_i), \tilde z) & \text{if } F^j(\tilde\alpha^j, \tilde\beta^j_i) = F'^j(\tilde\alpha^j, \tilde\beta^j_i) \text{ or } \hat\beta \ne \hat\beta' \\ p^*\, \mu'(\hat\beta' \mid F'^j(\tilde\alpha^j, \tilde\beta^j_i), \tilde z) & \text{if } F^j(\tilde\alpha^j, \tilde\beta^j_i) \ne F'^j(\tilde\alpha^j, \tilde\beta^j_i) \text{ and } \hat\beta = \hat\beta' \end{cases}$
where $p^* = \frac{\Pr(\hat\beta \mid F^j(\tilde\alpha^j, \tilde\beta^j_i))}{\Pr(\hat\beta' \mid F'^j(\tilde\alpha^j, \tilde\beta^j_i))}$.
[Figure 1: GameShrink applied to a tiny two-person four-card (two Jacks and two Kings) poker game. Next to each game tree is the range of the information filter F. Dotted lines denote information sets, which are labeled by the controlling player. Open circles are chance nodes with the indicated transition probabilities. The root node is the chance node for player 1's card, and the next level is for player 2's card. The payment from player 2 to player 1 is given below each leaf. In this example, the algorithm reduces the game tree from 53 nodes to 19 nodes.]
The following three claims show that μ as calculated above supports σ as a Nash equilibrium.
Claim 1. μ is a valid belief system for Γ_F.
Claim 2. For all information sets h with Pr(h | σ) > 0, μ(x) = Pr(x | σ) / Pr(h | σ) for all x ∈ h.
Claim 3. For all information sets h with Pr(h | σ) > 0, σ is sequentially rational at h given μ.
The proofs of Claims 1-3 are in an extended version of this paper [13]. By Claims 1 and 2, we know that condition C2 holds. By Claim 3, we know that condition C1 holds. Thus, σ is a Nash equilibrium.
3.1 Nontriviality of generalizing beyond this model
Our model does not capture general sequential games of imperfect information because it is restricted in two ways (as discussed above): 1) there is a special structure connecting the player actions and the chance actions (for one, the players are assumed to observe each others' actions, but nature's actions might not be publicly observable), and 2) there is a common ordering of signals. In this subsection we show that removing either of these conditions can make our technique invalid.
First, we demonstrate a failure when removing the first assumption. Consider the game in Figure 2. Nodes a and b are in the same information set, have the same parent (chance) node, have isomorphic subtrees with the same payoffs, and nodes c and d also have similar
structural properties. By merging the subtrees beginning at a and b, we get the game on the right in Figure 2. In this game, player 1's only Nash equilibrium strategy is to play left. But in the original game, player 1 knows that node c will never be reached, and so should play right in that information set.
[Figure 2: Example illustrating difficulty in developing a theory of equilibrium-preserving abstractions for general extensive form games.] (We thank Albert Xin Jiang for providing this example.)
Removing the second assumption (that the utility functions are based on a common ordering of signals) can also cause failure. Consider a simple three-card game with a deck containing two Jacks (J1 and J2) and a King (K), where player 1's utility function is based on the ordering K ≻ J1 ∼ J2 but player 2's utility function is based on the ordering J2 ≻ K ≻ J1. It is easy to check that in the abstracted game (where player 1 treats J1 and J2 as being equivalent) the Nash equilibrium does not correspond to a Nash equilibrium in the original game. (We thank an anonymous person for this example.)
4. GAMESHRINK: AN EFFICIENT ALGORITHM FOR COMPUTING ORDERED GAME ISOMORPHIC ABSTRACTION TRANSFORMATIONS
This section presents an algorithm, GameShrink, for conducting the abstractions. It only needs to analyze the signal tree discussed above, rather than the entire game tree. We first present a subroutine that GameShrink uses. It is a dynamic program for computing the ordered game isomorphic relation. Again, it operates on the signal tree.
Algorithm 1. OrderedGameIsomorphic?(Γ, ϑ, ϑ′)
1. If ϑ and ϑ′ have different parents, then return false.
2. If ϑ and ϑ′ are both leaves of the signal tree:
(a) If u^r(ϑ | z̃) = u^r(ϑ′ | z̃) for all z̃ ∈ $\prod_{j=1}^{r-1} \omega^j_{cont} \times \omega^r_{over}$, then return true.
(b) Otherwise, return false.
3. Create a bipartite graph G_{ϑ,ϑ′} = (V_1, V_2, E) with V_1 = N(ϑ) and V_2 = N(ϑ′).
4. For each v_1 ∈ V_1 and v_2 ∈ V_2: if OrderedGameIsomorphic?(Γ, v_1, v_2), create edge (v_1, v_2).
5. Return true if G_{ϑ,ϑ′} has a perfect matching; otherwise, return false.
By evaluating this dynamic program from bottom to top, Algorithm 1 determines, in time polynomial in the size of the signal tree, whether or not any pair of equal depth nodes x and y are ordered game isomorphic. We can further speed up this computation by only examining nodes with the same parent, since we know (from step 1) that no nodes with different parents are ordered game isomorphic. The test in step 2(a) can be computed in O(1) time by consulting the ⪰ relation from the specification of the game. Each call to OrderedGameIsomorphic? performs at most one perfect matching computation on a bipartite graph with O(|Θ|) nodes and O(|Θ|²) edges (recall that Θ is the set of signals). Using the Ford-Fulkerson algorithm [12] for finding a maximal matching, this takes O(|Θ|³) time. Let S be the maximum number of signals possibly revealed in the game (e.g., in Rhode Island Hold'em, S = 4 because each of the two players has one card in the hand plus there are two cards on the table). The number of nodes, n, in the signal tree is O(|Θ|^S). The dynamic program visits each node in the signal tree, with each visit requiring O(|Θ|²) calls to the OrderedGameIsomorphic? routine. So, it takes O(|Θ|^S |Θ|³ |Θ|²) = O(|Θ|^{S+5}) time to compute the entire ordered game isomorphic relation.
While this is exponential in the number of revealed signals, we now show that it is polynomial in the size of the signal tree, and thus polynomial in the size of the game tree, because the signal tree is smaller than the game tree. The number of nodes in the signal tree is
$n = 1 + \sum_{i=1}^{S} \prod_{j=1}^{i} (|\Theta| - j + 1)$
(each term in the summation corresponds to the number of nodes at a specific depth of the tree). The number of leaves is
$\prod_{j=1}^{S} (|\Theta| - j + 1) = \binom{|\Theta|}{S} S!$
which is a lower bound on the number of nodes. For large |Θ| we can use the relation $\binom{n}{k} \sim \frac{n^k}{k!}$ to get
$\binom{|\Theta|}{S} S! \sim \left(\frac{|\Theta|^S}{S!}\right) S! = |\Theta|^S$
and thus the number of leaves in the signal tree is Ω(|Θ|^S). Thus, O(|Θ|^{S+5}) = O(n |Θ|⁵), which proves that we can indeed compute the ordered game isomorphic relation in time polynomial in the number of nodes, n, of the signal tree.
The algorithm often runs in sublinear time (and space) in the size of the game tree because the signal tree is significantly smaller than the game tree in most nontrivial games. (Note that the input to the algorithm is not an explicit game tree, but a specification of the rules, so the algorithm does not need to read in the game tree.) See Figure 1. In general, if an ordered game has r rounds, and each round's stage game has at least b nonterminal leaves, then the size of the signal tree is at most 1/b^r of the size of the game tree. For example, in Rhode Island Hold'em, the game tree has 3.1 billion nodes while the signal tree only has 6,632,705.
Given the OrderedGameIsomorphic? routine for determining ordered game isomorphisms in an ordered game, we are ready to present the main algorithm, GameShrink.
Algorithm 2. GameShrink(Γ)
1. Initialize F to be the identity filter for Γ.
2. For j from 1 to r: For each pair of sibling nodes ϑ, ϑ′ at either level $\sum_{k=1}^{j-1} (\kappa^k + n\gamma^k)$ or $\sum_{k=1}^{j} \kappa^k + \sum_{k=1}^{j-1} n\gamma^k$ in the filtered (according to F) signal tree: If OrderedGameIsomorphic?(Γ, ϑ, ϑ′), then F^j(ϑ) ← F^j(ϑ′) ← F^j(ϑ) ∪ F^j(ϑ′).
3. Output F.
Given as input an ordered game Γ, GameShrink applies the shrinking ideas presented above as aggressively as possible. Once it finishes, there are no contractible nodes (since it compares every pair of nodes at each level of the signal tree), and it outputs the corresponding information filter F. The correctness of GameShrink follows by a repeated application of Theorem 2. Thus, we have the following result:
Theorem 3. GameShrink finds all ordered game isomorphisms and applies the associated ordered game isomorphic abstraction transformations. Furthermore, for any Nash equilibrium, σ′, of the abstracted game, the strategy profile constructed for the original game from σ′ is a Nash equilibrium.
The dominating factor in the run time of GameShrink is in the rth iteration of the main for-loop. There are at most $\binom{|\Theta|}{S} S!$ nodes at this level, where we again take S to be the maximum number of signals possibly revealed in the game. Thus, the inner for-loop executes $O\left(\left(\binom{|\Theta|}{S} S!\right)^2\right)$ times. As discussed in the next subsection, we use a union-find data structure to represent the information filter F.
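A minimal sketch of that bookkeeping follows (a standard disjoint-set forest with path compression and union by rank; the class and the signal encoding are illustrative, not the paper's code):

```python
class UnionFind:
    """Disjoint-set forest with path compression and union by rank:
    M operations on N elements take O(alpha(M, N)) amortized time each."""
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def find(self, x):
        # Lazily create singletons: initially every filtered signal is
        # its own class (the identity information filter).
        self.parent.setdefault(x, x)
        self.rank.setdefault(x, 0)
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx  # attach the shallower tree under the deeper
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

# Each contraction found by GameShrink merges two signal classes; on
# termination the disjoint sets are exactly the filtered signals.
F = UnionFind()
F.union(('J1',), ('J2',))  # e.g., the two jacks become indistinguishable
assert F.find(('J1',)) == F.find(('J2',))
```

Representing the filter this way means a contraction is a single near-constant-time union, and reading off the abstracted game at the end is just an enumeration of the disjoint sets.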
Each iteration of the inner for-loop possibly performs a union operation on the data structure; performing M operations on a union-find data structure containing N elements takes O(\u03b1(M, N)) amortized time per operation, where \u03b1(M, N) is the inverse Ackermann``s function [1, 49] (which grows extremely slowly).\nThus, the total time for GameShrink is O \u201e`|\u0398| S \u00b4 S!\n2 \u03b1 \u201e`|\u0398| S \u00b4 S!\n2 , |\u0398|S ```` .\nBy the inequality `n k \u00b4 \u2264 nk k!\n, this is O ` (|\u0398|S )2 \u03b1 ` (|\u0398|S )2 , |\u0398|S \u00b4\u00b4 .\nAgain, although this is exponential in S, it is \u02dcO(n2 ), where n is the number of nodes in the signal tree.\nFurthermore, GameShrink tends to actually run in sublinear time and space in the size of the game tree because the signal tree is significantly smaller than the game tree in most nontrivial games, as discussed above.\n4.1 Efficiency enhancements We designed several speed enhancement techniques for GameShrink, and all of them are incorporated into our implementation.\nOne technique is the use of the union-find data structure for storing the information filter F.\nThis data structure uses time almost linear in the number of operations [49].\nInitially each node in the signalling tree is its own set (this corresponds to the identity information filter); when two nodes are contracted they are joined into a new set.\nUpon termination, the filtered signals for the abstracted game correspond exactly to the disjoint sets in the data structure.\nThis is an efficient method of recording contractions within the game tree, and the memory requirements are only linear in the size of the signal tree.\nDetermining whether two nodes are ordered game isomorphic requires us to determine if a bipartite graph has a perfect matching.\nWe can eliminate some of these computations by using easy-to-check necessary conditions for the ordered game isomorphic relation to hold.\nOne such condition is to check that the 
nodes have the same number of chances of being ranked (according to the signal ordering) higher than, lower than, and the same as the opponents.\nWe can precompute these frequencies for every game tree node.\nThis substantially speeds up GameShrink, and we can leverage this database across multiple runs of the algorithm (for example, when trying different abstraction levels; see next section).\nThe indices for this database depend on the private and public signals, but not the order in which they were revealed, and thus two nodes may have the same corresponding database entry.\nThis makes the database significantly more compact.\n(For example in Texas Hold'em, the database is reduced by a factor C(50, 3) · C(47, 1) · C(46, 1) / C(50, 5) = 20.)\nWe store the histograms in a 2-dimensional database.\nThe first dimension is indexed by the private signals, the second by the public signals.\nThe problem of computing the index in (either) one of the dimensions is exactly the problem of computing a bijection between all subsets of size r from a set of size n and integers in {0, ..., C(n, r) − 1}.\nWe efficiently compute this using the subsets' colexicographical ordering [6].\nLet {c1, ..., cr}, ci ∈ {0, ..., n − 1}, denote the r signals and assume that ci < ci+1.\nWe compute a unique index for this set of signals as follows: index(c1, ..., cr) = Σ_{i=1}^{r} C(ci, i).\n5.\nAPPROXIMATION METHODS Some games are too large to compute an exact equilibrium, even after using the presented abstraction technique.\nThis section discusses general techniques for computing approximately optimal strategy profiles.\nFor a two-player game, we can always evaluate the worst-case performance of a strategy, thus providing some objective evaluation of the strength of the strategy.\nTo illustrate this, suppose we know player 2's planned strategy for some game.\nWe can then fix the probabilities of player 2's actions in the game tree as if they were chance moves.\nThen player 1 is faced with a single-agent decision problem, which can be solved bottom-up, maximizing expected payoff at every node.\nThus, we can objectively determine the expected worst-case performance of player 2's strategy.\nThis will be most useful when we want to evaluate how well a given strategy performs when we know that it is not an equilibrium strategy.\n(A variation of this technique may also be applied in n-person games where only one player's strategies are held fixed.)\nThis technique provides ex post guarantees about the worst-case performance of a strategy, and can be used independently of the method that is used to compute the strategies.\n5.1 State-space approximations By slightly modifying GameShrink, we can obtain an algorithm that yields even smaller game trees, at the expense of losing the equilibrium guarantees of Theorem 2.\nInstead of requiring the payoffs at terminal nodes to match exactly, we can instead compute a penalty that increases as the difference in utility between two nodes increases.\nThere are many ways in which the penalty function could be defined and implemented.\nOne possibility is to create edge weights in the bipartite graphs used in Algorithm 1, and then instead of requiring perfect matchings in the unweighted graph we would instead require perfect matchings with low cost (i.e., only consider two nodes to be
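The colexicographic subset index defined above, index(c1, ..., cr) = Σ_{i=1}^{r} C(ci, i), can be sketched as follows (illustrative only; Python's `math.comb` supplies the binomial coefficient):

```python
from math import comb

def colex_index(signals):
    """Colexicographic rank of an r-subset {c1 < ... < cr} of {0, ..., n-1}:
    index(c1, ..., cr) = sum_{i=1}^{r} C(c_i, i).
    Maps the subset to a unique integer in [0, C(n, r) - 1]."""
    cs = sorted(signals)
    return sum(comb(c, i) for i, c in enumerate(cs, start=1))

# All 2-subsets of {0, 1, 2, 3} enumerate 0 .. C(4, 2) - 1 without gaps:
ranks = sorted(colex_index(s) for s in
               [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)])
print(ranks)  # [0, 1, 2, 3, 4, 5]
```

Because the ranks of all C(n, r) subsets fill {0, ..., C(n, r) − 1} without gaps, the index can be used directly as an array offset into the histogram database.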
ordered game isomorphic if the corresponding bipartite graph has a perfect matching with cost below some threshold).\nThus, with this threshold as a parameter, we have a knob to turn that in one extreme (threshold = 0) yields an optimal abstraction and in the other extreme (threshold = ∞) yields a highly abstracted game (this would in effect restrict players to ignoring all signals, but still observing actions).\nThis knob also begets an anytime algorithm.\nOne can solve increasingly less abstracted versions of the game, and evaluate the quality of the solution at every iteration using the ex post method discussed above.\n5.2 Algorithmic approximations In the case of two-player zero-sum games, the equilibrium computation can be modeled as a linear program (LP), which can in turn be solved using the simplex method.\nThis approach has inherent features which we can leverage into desirable properties in the context of solving games.\nIn the LP, primal solutions correspond to strategies of player 2, and dual solutions correspond to strategies of player 1.\nThere are two versions of the simplex method: the primal simplex and the dual simplex.\nThe primal simplex maintains primal feasibility and proceeds by finding better and better primal solutions until the dual solution vector is feasible, at which point optimality has been reached.\nAnalogously, the dual simplex maintains dual feasibility and proceeds by finding increasingly better dual solutions until the primal solution vector is feasible.\n(The dual simplex method can be thought of as running the primal simplex method on the dual problem.)\nThus, the primal and dual simplex methods serve as anytime algorithms (for a given abstraction) for players 2 and 1, respectively.\nAt any point in time, they can output the best strategies found so far.\nAlso, for any feasible solution to the LP, we can get bounds on the quality of the strategies by examining the primal and dual solutions.\n(When using the primal
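A toy sketch of the thresholded matching test from Section 5.1. The payoff vectors and threshold values are invented for illustration, and the brute-force matching below stands in for a proper minimum-cost bipartite matching algorithm:

```python
from itertools import permutations

def min_cost_perfect_matching(cost):
    """Brute-force minimum-cost perfect matching in a complete bipartite
    graph given as a square cost matrix. Fine for the tiny graphs in this
    sketch; a real implementation would use e.g. the Hungarian algorithm."""
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def approx_isomorphic(payoffs_a, payoffs_b, threshold):
    """Treat two nodes as approximately ordered game isomorphic if their
    terminal payoffs can be paired up with total utility difference at
    most `threshold` (threshold = 0 recovers the exact, lossless test)."""
    cost = [[abs(x - y) for y in payoffs_b] for x in payoffs_a]
    return min_cost_perfect_matching(cost) <= threshold

print(approx_isomorphic([1.0, 2.0, 3.0], [1.0, 2.0, 3.0], 0.0))   # True
print(approx_isomorphic([1.0, 2.0, 3.0], [1.1, 2.1, 3.1], 0.05))  # False
print(approx_isomorphic([1.0, 2.0, 3.0], [1.1, 2.1, 3.1], 0.5))   # True
```

Raising the threshold merges more node pairs, giving the coarseness knob described in the text.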
simplex method, dual solutions may be read off of the LP tableau.)\nEvery feasible solution of the dual yields an upper bound on the optimal value of the primal, and vice versa [9, p. 57].\nThus, without requiring further computation, we get lower bounds on the expected utility of each agent's strategy against that agent's worst-case opponent.\nOne problem with the simplex method is that it is not a primal-dual algorithm, that is, it does not maintain both primal and dual feasibility throughout its execution.\n(In fact, it only obtains primal and dual feasibility at the very end of execution.)\nIn contrast, there are interior-point methods for linear programming that maintain primal and dual feasibility throughout the execution.\nFor example, many interior-point path-following algorithms have this property [55, Ch. 5].\nWe observe that running such a linear programming method yields a method for finding ε-equilibria (i.e., strategy profiles in which no agent can increase her expected utility by more than ε by deviating).\nA threshold on ε can also be used as a termination criterion for using the method as an anytime algorithm.\nFurthermore, interior-point methods in this class have polynomial-time worst-case run time, as opposed to the simplex algorithm, which takes exponentially many steps in the worst case.\n6.\nRELATED RESEARCH Functions that transform extensive form games have been introduced [50, 11].\nIn contrast to our work, those approaches were not for making the game smaller and easier to solve.\nThe main result is that a game can be derived from another by a sequence of those transformations if and only if the games have the same pure reduced normal form.\nThe pure reduced normal form is the extensive form game represented as a game in normal form where duplicates of pure strategies (i.e., ones with identical payoffs) are removed and players essentially select equivalence classes of strategies [27].\nAn extension to that work shows a similar result, but for
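The kind of bound discussed above can be illustrated on a tiny zero-sum matrix game. The sketch below works with explicit mixed strategies rather than the sequence-form LP, and the game is invented for illustration:

```python
def value_bounds(A, x, y):
    """For a zero-sum matrix game with row player's payoff matrix A,
    a row mixed strategy x guarantees at least min_j sum_i x[i]*A[i][j],
    and a column mixed strategy y holds the row player to at most
    max_i sum_j A[i][j]*y[j]. Any feasible pair (x, y) therefore brackets
    the game value, and the gap bounds each strategy's suboptimality."""
    lower = min(sum(x[i] * A[i][j] for i in range(len(A)))
                for j in range(len(A[0])))
    upper = max(sum(A[i][j] * y[j] for j in range(len(A[0])))
                for i in range(len(A)))
    return lower, upper

# Matching pennies: the uniform strategies are optimal, so the gap is 0.
A = [[1, -1], [-1, 1]]
lo, hi = value_bounds(A, [0.5, 0.5], [0.5, 0.5])
print(lo, hi)  # 0.0 0.0

# A pure strategy for the row player leaves a gap of 1.
lo2, hi2 = value_bounds(A, [1.0, 0.0], [0.5, 0.5])
print(hi2 - lo2)  # 1.0
```

The gap upper − lower plays the role of ε here: each player's strategy is within that amount of its worst-case-optimal guarantee.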
slightly different transformations and mixed reduced normal form games [21].\nModern treatments of this prior work on game transformations exist [38, Ch. 6], [10].\nThe recent notion of weak isomorphism in extensive form games [7] is related to our notion of restricted game isomorphism.\nThe motivation of that work was to justify solution concepts by arguing that they are invariant with respect to isomorphic transformations.\nIndeed, the author shows, among other things, that many solution concepts, including Nash, perfect, subgame perfect, and sequential equilibrium, are invariant with respect to weak isomorphisms.\nHowever, that definition requires that the games being tested for weak isomorphism be of the same size.\nOur focus is totally different: we find strategically equivalent smaller games.\nAlso, their paper does not provide algorithms.\nAbstraction techniques have been used in artificial intelligence research before.\nIn contrast to our work, most (but not all) research involving abstraction has been for single-agent problems (e.g.
[20, 32]).\nFurthermore, the use of abstraction typically leads to sub-optimal solutions, unlike the techniques presented in this paper, which yield optimal solutions.\nA notable exception is the use of abstraction to compute optimal strategies for the game of Sprouts [2].\nHowever, a significant difference to our work is that Sprouts is a game of perfect information.\nOne of the first pieces of research to use abstraction in multi-agent settings was the development of partition search, which is the algorithm behind GIB, the world's first expert-level computer bridge player [17, 18].\nIn contrast to other game tree search algorithms which store a particular game position at each node of the search tree, partition search stores groups of positions that are similar.\n(Typically, the similarity of two game positions is computed by ignoring the less important components of each game position and then checking whether the abstracted positions are similar, in some domain-specific, expert-defined sense, to each other.)\nPartition search can lead to substantial speed improvements over α-β-search.\nHowever, it is not game theory-based (it does not consider information sets in the game tree), and thus does not solve for the equilibrium of a game of imperfect information, such as poker.8 Another difference is that the abstraction is defined by an expert human while our abstractions are determined automatically.\nThere has been some research on the use of abstraction for imperfect information games.\nMost notably, Billings et al. [4] describe a manually constructed abstraction for Texas Hold'em poker, and include promising results against expert players.\nHowever, this approach has significant drawbacks.\nFirst, it is highly specialized for Texas Hold'em.\nSecond, a large amount of expert knowledge and effort was used in constructing the abstraction.\nThird, the abstraction does not preserve equilibrium: even if applied to a smaller game, it might not yield a
game-theoretic equilibrium.\nPromising ideas for abstraction in the context of general extensive form games have been described in an extended abstract [39], but to our knowledge, have not been fully developed.\n7.\nCONCLUSIONS AND DISCUSSION We introduced the ordered game isomorphic abstraction transformation and gave an algorithm, GameShrink, for abstracting the game using the isomorphism exhaustively.\nWe proved that in games with ordered signals, any Nash equilibrium in the smaller abstracted game maps directly to a Nash equilibrium in the original game.\nThe complexity of GameShrink is Õ(n^2), where n is the number of nodes in the signal tree.\nIt is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree.\n8 Bridge is also a game of imperfect information, and partition search does not find the equilibrium for that game either.\nInstead, partition search is used in conjunction with statistical sampling to simulate the uncertainty in bridge.\nThere are also other bridge programs that use search techniques for perfect information games in conjunction with statistical sampling and expert-defined abstraction [48].\nSuch (non-game-theoretic) techniques are unlikely to be competitive in poker because of the greater importance of information hiding and bluffing.\nUsing GameShrink, we found a minimax equilibrium to Rhode Island Hold'em, a poker game with 3.1 billion nodes in the game tree, over four orders of magnitude more than in the largest poker game solved previously.\nTo further improve scalability, we introduced an approximation variant of GameShrink, which can be used as an anytime algorithm by varying a parameter that controls the coarseness of abstraction.\nWe also discussed how (in a two-player zero-sum game) linear programming can be used in an anytime manner to generate approximately optimal strategies of increasing quality.\nThe method
also yields bounds on the suboptimality of the resulting strategies.\nWe are currently working on using these techniques for full-scale 2-player limit Texas Hold'em poker, a highly popular card game whose game tree has about 10^18 nodes.\nThat game tree size has required us to use the approximation version of GameShrink (as well as round-based abstraction) [16, 15].\n8.\nREFERENCES [1] W. Ackermann.\nZum Hilbertschen Aufbau der reellen Zahlen.\nMath. Annalen, 99:118-133, 1928.\n[2] D. Applegate, G. Jacobson, and D. Sleator.\nComputer analysis of sprouts.\nTechnical Report CMU-CS-91-144, 1991.\n[3] R. Bellman and D. Blackwell.\nSome two-person games involving bluffing.\nPNAS, 35:600-605, 1949.\n[4] D. Billings, N. Burch, A. Davidson, R. Holte, J. Schaeffer, T. Schauenberg, and D. Szafron.\nApproximating game-theoretic optimal strategies for full-scale poker.\nIn IJCAI, 2003.\n[5] D. Billings, A. Davidson, J. Schaeffer, and D. Szafron.\nThe challenge of poker.\nArtificial Intelligence, 134:201-240, 2002.\n[6] B. Bollobás.\nCombinatorics.\nCambridge University Press, 1986.\n[7] A. Casajus.\nWeak isomorphism of extensive games.\nMathematical Social Sciences, 46:267-290, 2003.\n[8] X. Chen and X. Deng.\nSettling the complexity of 2-player Nash equilibrium.\nECCC, Report No. 150, 2005.\n[9] V. Chvátal.\nLinear Programming.\nW. H. Freeman & Co., 1983.\n[10] B. P. de Bruin.\nGame transformations and game equivalence.\nTechnical note x-1999-01, University of Amsterdam, Institute for Logic, Language, and Computation, 1999.\n[11] S. Elmes and P. J. Reny.\nOn the strategic equivalence of extensive form games.\nJ. of Economic Theory, 62:1-23, 1994.\n[12] L. R. Ford, Jr. and D. R. Fulkerson.\nFlows in Networks.\nPrinceton University Press, 1962.\n[13] A. Gilpin and T. Sandholm.\nFinding equilibria in large sequential games of imperfect information.\nTechnical Report CMU-CS-05-158, Carnegie Mellon University, 2005.\n[14] A. Gilpin and T.
Sandholm.\nOptimal Rhode Island Hold'em poker.\nIn AAAI, pages 1684-1685, Pittsburgh, PA, USA, 2005.\n[15] A. Gilpin and T. Sandholm.\nA competitive Texas Hold'em poker player via automated abstraction and real-time equilibrium computation.\nMimeo, 2006.\n[16] A. Gilpin and T. Sandholm.\nA Texas Hold'em poker player based on automated abstraction and real-time equilibrium computation.\nIn AAMAS, Hakodate, Japan, 2006.\n[17] M. L. Ginsberg.\nPartition search.\nIn AAAI, pages 228-233, Portland, OR, 1996.\n[18] M. L. Ginsberg.\nGIB: Steps toward an expert-level bridge-playing program.\nIn IJCAI, Stockholm, Sweden, 1999.\n[19] S. Govindan and R. Wilson.\nA global Newton method to compute Nash equilibria.\nJ. of Econ. Theory, 110:65-86, 2003.\n[20] C. A. Knoblock.\nAutomatically generating abstractions for planning.\nArtificial Intelligence, 68(2):243-302, 1994.\n[21] E. Kohlberg and J.-F. Mertens.\nOn the strategic stability of equilibria.\nEconometrica, 54:1003-1037, 1986.\n[22] D. Koller and N. Megiddo.\nThe complexity of two-person zero-sum games in extensive form.\nGames and Economic Behavior, 4(4):528-552, Oct. 1992.\n[23] D. Koller and N. Megiddo.\nFinding mixed strategies with small supports in extensive form games.\nInternational Journal of Game Theory, 25:73-92, 1996.\n[24] D. Koller, N. Megiddo, and B. von Stengel.\nEfficient computation of equilibria for extensive two-person games.\nGames and Economic Behavior, 14(2):247-259, 1996.\n[25] D. Koller and A. Pfeffer.\nRepresentations and solutions for game-theoretic problems.\nArtificial Intelligence, 94(1):167-215, July 1997.\n[26] D. M. Kreps and R. Wilson.\nSequential equilibria.\nEconometrica, 50(4):863-894, 1982.\n[27] H. W. Kuhn.\nExtensive games.\nPNAS, 36:570-576, 1950.\n[28] H. W. Kuhn.\nA simplified two-person poker.\nIn Contributions to the Theory of Games, volume 1 of Annals of Mathematics Studies, 24, pages 97-103.\nPrinceton University Press, 1950.\n[29] H. W.
Kuhn.\nExtensive games and the problem of information.\nIn Contributions to the Theory of Games, volume 2 of Annals of Mathematics Studies, 28, pages 193-216.\nPrinceton University Press, 1953.\n[30] C. Lemke and J. Howson.\nEquilibrium points of bimatrix games.\nJournal of the Society for Industrial and Applied Mathematics, 12:413-423, 1964.\n[31] R. Lipton, E. Markakis, and A. Mehta.\nPlaying large games using simple strategies.\nIn ACM-EC, pages 36-41, 2003.\n[32] C.-L. Liu and M. Wellman.\nOn state-space abstraction for anytime evaluation of Bayesian networks.\nSIGART Bulletin, 7(2):50-57, 1996.\n[33] A. Mas-Colell, M. Whinston, and J. R. Green.\nMicroeconomic Theory.\nOxford University Press, 1995.\n[34] R. D. McKelvey and A. McLennan.\nComputation of equilibria in finite games.\nIn Handbook of Computational Economics, volume 1, pages 87-142.\nElsevier, 1996.\n[35] P. B. Miltersen and T. B. Sørensen.\nComputing sequential equilibria for two-player games.\nIn SODA, pages 107-116, 2006.\n[36] J. Nash.\nEquilibrium points in n-person games.\nProc. of the National Academy of Sciences, 36:48-49, 1950.\n[37] J. F. Nash and L. S. Shapley.\nA simple three-person poker game.\nIn Contributions to the Theory of Games, volume 1, pages 105-116.\nPrinceton University Press, 1950.\n[38] A. Perea.\nRationality in extensive form games.\nKluwer Academic Publishers, 2001.\n[39] A. Pfeffer, D. Koller, and K. Takusagawa.\nState-space approximations for extensive form games, July 2000.\nTalk given at the First International Congress of the Game Theory Society, Bilbao, Spain.\n[40] R. Porter, E. Nudelman, and Y. Shoham.\nSimple search methods for finding a Nash equilibrium.\nIn AAAI, pages 664-669, San Jose, CA, USA, 2004.\n[41] I. Romanovskii.\nReduction of a game with complete memory to a matrix game.\nSoviet Mathematics, 3:678-681, 1962.\n[42] T. Sandholm and A.
Gilpin.\nSequences of take-it-or-leave-it offers: Near-optimal auctions without full valuation revelation.\nIn AAMAS, Hakodate, Japan, 2006.\n[43] T. Sandholm, A. Gilpin, and V. Conitzer.\nMixed-integer programming methods for finding Nash equilibria.\nIn AAAI, pages 495-501, Pittsburgh, PA, USA, 2005.\n[44] R. Savani and B. von Stengel.\nExponentially many steps for finding a Nash equilibrium in a bimatrix game.\nIn FOCS, pages 258-267, 2004.\n[45] R. Selten.\nSpieltheoretische Behandlung eines Oligopolmodells mit Nachfrageträgheit.\nZeitschrift für die gesamte Staatswissenschaft, 121:301-324, 1965.\n[46] R. Selten.\nEvolutionary stability in extensive two-person games - correction and further development.\nMathematical Social Sciences, 16:223-266, 1988.\n[47] J. Shi and M. Littman.\nAbstraction methods for game theoretic poker.\nIn Computers and Games, pages 333-345.\nSpringer-Verlag, 2001.\n[48] S. J. J. Smith, D. S. Nau, and T. Throop.\nComputer bridge: A big win for AI planning.\nAI Magazine, 19(2):93-105, 1998.\n[49] R. E. Tarjan.\nEfficiency of a good but not linear set union algorithm.\nJournal of the ACM, 22(2):215-225, 1975.\n[50] F. Thompson.\nEquivalence of games in extensive form.\nRAND Memo RM-759, The RAND Corporation, Jan. 1952.\n[51] J. von Neumann and O. Morgenstern.\nTheory of games and economic behavior.\nPrinceton University Press, 1947.\n[52] B. von Stengel.\nEfficient computation of behavior strategies.\nGames and Economic Behavior, 14(2):220-246, 1996.\n[53] B. von Stengel.\nComputing equilibria for two-person games.\nIn Handbook of Game Theory, volume 3.\nNorth Holland, Amsterdam, 2002.\n[54] R. Wilson.\nComputing equilibria of two-person games from the extensive form.\nManagement Science, 18(7):448-460, 1972.\n[55] S. J.
Wright.\nPrimal-Dual Interior-Point Methods.\nSIAM, 1997.","lvl-3":"Finding Equilibria in Large Sequential Games of Imperfect Information *\nABSTRACT\nFinding an equilibrium of an extensive form game of imperfect information is a fundamental problem in computational game theory, but current techniques do not scale to large games.\nTo address this, we introduce the ordered game isomorphism and the related ordered game isomorphic abstraction transformation.\nFor a multi-player sequential game of imperfect information with observable actions and an ordered signal space, we prove that any Nash equilibrium in an abstracted smaller game, obtained by one or more applications of the transformation, can be easily converted into a Nash equilibrium in the original game.\nWe present an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively.\nIts complexity is Õ(n^2), where n is the number of nodes in a structure we call the signal tree.\nIt is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree.\nUsing GameShrink, we find an equilibrium to a poker game with 3.1 billion nodes--over four orders of magnitude more than in the largest poker game solved previously.\nWe discuss several electronic commerce applications for GameShrink.\nTo address even larger games, we introduce approximation methods that do not preserve equilibrium, but nevertheless yield (ex post) provably close-to-optimal strategies.\n1.\nINTRODUCTION\nIn environments with more than one agent, an agent's outcome is generally affected by the actions of the other agent(s).\n* This material is based upon work supported by the National Science Foundation under ITR grants IIS-0121678 and IIS-0427858, and a Sloan Fellowship.\nConsequently, the optimal action of one agent can depend on the others.\nGame theory provides a normative framework for analyzing such strategic
situations.\nIn particular, it provides solution concepts that define what rational behavior is in such settings.\nThe most famous and important solution concept is that of Nash equilibrium [36].\nIt is a strategy profile (one strategy for each agent) in which no agent has incentive to deviate to a different strategy.\nHowever, for the concept to be operational, we need algorithmic techniques for finding an equilibrium.\nGames can be classified as either games of perfect information or imperfect information.\nChess and Go are examples of the former, and, until recently, most game playing work has been on games of this type.\nTo compute an optimal strategy in a perfect information game, an agent traverses the game tree and evaluates individual nodes.\nIf the agent is able to traverse the entire game tree, she simply computes an optimal strategy from the bottom-up, using the principle of backward induction.1 In computer science terms, this is done using minimax search (often in conjunction with αβ-pruning to reduce the search tree size and thus enhance speed).\nMinimax search runs in linear time in the size of the game tree.2 The differentiating feature of games of imperfect information, such as poker, is that they are not fully observable: when it is an agent's turn to move, she does not have access to all of the information about the world.\nIn such games, the decision of what to do at a point in time cannot generally be optimally made without considering decisions at all other points in time (including ones on other paths of play) because those other decisions affect the probabilities of being at different states at the current point in time.\nThus the algorithms for perfect information games do not solve games of imperfect information.\nFor sequential games with imperfect information, one could try to find an equilibrium using the normal (matrix) form, where every contingency plan of the agent is a pure strategy for the agent.3 Unfortunately (even if
equivalent strategies\nare replaced by a single strategy [27]) this representation is generally exponential in the size of the game tree [52].\nBy observing that one needs to consider only sequences of moves rather than pure strategies [41, 46, 22, 52], one arrives at a more compact representation, the sequence form, which is linear in the size of the game tree .4 For 2-player games, there is a polynomial-sized (in the size of the game tree) linear programming formulation (linear complementarity in the non-zero-sum case) based on the sequence form such that strategies for players 1 and 2 correspond to primal and dual variables.\nThus, the equilibria of reasonable-sized 2-player games can be computed using this method [52, 24, 25].5 However, this approach still yields enormous (unsolvable) optimization problems for many real-world games, such as poker.\n1.1 Our approach\nIn this paper, we take a different approach to tackling the difficult problem of equilibrium computation.\nInstead of developing an equilibrium-finding method per se, we instead develop a methodology for automatically abstracting games in such a way that any equilibrium in the smaller (abstracted) game corresponds directly to an equilibrium in the original game.\nThus, by computing an equilibrium in the smaller game (using any available equilibrium-finding algorithm), we are able to construct an equilibrium in the original game.\nThe motivation is that an equilibrium for the smaller game can be computed drastically faster than for the original game.\nTo this end, we introduce games with ordered signals (Section 2), a broad class of games that has enough structure which we can exploit for abstraction purposes.\nInstead of operating directly on the game tree (something we found to be technically challenging), we instead introduce the use of information filters (Section 2.1), which coarsen the information each player receives.\nThey are used in our analysis and abstraction algorithm.\nBy operating only 
in the space of filters, we are able to keep the strategic structure of the game intact, while abstracting out details of the game in a way that is lossless from the perspective of equilibrium finding.\nWe introduce the ordered game isomorphism to describe strategically symmetric situations and the ordered game isomorphic abstraction transformation to take advantage of such symmetries (Section 3).\n3An ε-equilibrium in a game with a constant number of agents can be constructed in quasipolynomial time [31], but finding an exact equilibrium is PPAD-complete even in a 2-player game [8].\nThe most prevalent algorithm for finding an equilibrium in a 2-agent game is Lemke-Howson [30], but it takes exponentially many steps in the worst case [44].\nFor a survey of equilibrium computation in 2-player games, see [53].\nRecently, equilibrium-finding algorithms that enumerate supports (i.e., sets of pure strategies that are played with positive probability) have been shown efficient on many games [40], and efficient mixed integer programming algorithms that search in the space of supports have been developed [43].\nFor more than two players, many algorithms have been proposed, but they currently only scale to very small games [19, 34, 40].\n4There were also early techniques that capitalized in different ways on the fact that in many games the vast majority of pure strategies are not played in equilibrium [54, 23].\n5Recently this approach was extended to handle computing sequential equilibria [26] as well [35].\nAs our main equilibrium result we have the following:\nTheorem 2 Let Γ be a game with ordered signals, and let F be an information filter for Γ.\nLet F' be an information filter constructed from F by one application of the ordered game isomorphic abstraction transformation, and let σ' be a Nash equilibrium strategy profile of the induced game ΓF' (i.e., the game Γ using the filter F').\nIf σ is constructed by using the corresponding strategies of σ', then σ is a
Nash equilibrium of ΓF.\nThe proof of the theorem uses an equivalent characterization of Nash equilibria: σ is a Nash equilibrium if and only if there exist beliefs μ (players' beliefs about unknown information) at all points of the game reachable by σ such that σ is sequentially rational (i.e., a best response) given μ, where μ is updated using Bayes' rule.\nWe can then use the fact that σ' is a Nash equilibrium to show that σ is a Nash equilibrium considering only local properties of the game.\nWe also give an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively (Section 4).\nIts complexity is Õ(n^2), where n is the number of nodes in a structure we call the signal tree.\nIt is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree.\nWe present several algorithmic and data structure related speed improvements (Section 4.1), and we demonstrate how a simple modification to our algorithm yields an approximation algorithm (Section 5).\n1.2 Electronic commerce applications\nSequential games of imperfect information are ubiquitous, for example in negotiation and in auctions.\nOften aspects of a player's knowledge are not pertinent for deciding what action the player should take at a given point in the game.\nOn the trivial end, some aspects of a player's knowledge are never pertinent (e.g., whether it is raining or not has no bearing on the bidding strategy in an art auction), and such aspects can be completely left out of the model specification.\nHowever, some aspects can be pertinent in certain states of the game while they are not pertinent in other states, and thus cannot be left out of the model completely.\nFurthermore, it may be highly non-obvious which aspects are pertinent in which states of the game.\nOur algorithm automatically discovers which aspects are irrelevant in
different states, and eliminates those aspects of the game, resulting in a more compact, equivalent game representation.\nOne broad application area that has this property is sequential negotiation (potentially over multiple issues).\nAnother broad application area is sequential auctions (potentially over multiple goods).\nFor example, in those states of a 1-object auction where bidder A can infer that his valuation is greater than that of bidder B, bidder A can ignore all his other information about B's signals, although that information would be relevant for inferring B's exact valuation.\nFurthermore, in some states of the auction, a bidder might not care which exact other bidders have which valuations, but cares about which valuations are held by the other bidders in aggregate (ignoring their identities).\nMany open-cry sequential auction and negotiation mechanisms fall within the game model studied in this paper (specified in detail later), as do certain other games in electronic commerce, such as sequences of take-it-or-leave-it offers [42].\nOur techniques are in no way specific to an application.\nThe main experiment that we present in this paper is on\na recreational game.\nWe chose a particular poker game as the benchmark problem because it yields an extremely complicated and enormous game tree, it is a game of imperfect information, it is fully specified as a game (and the data is available), and it has been posted as a challenge problem by others [47] (to our knowledge no such challenge problem instances have been proposed for electronic commerce applications that require solving sequential games).\n1.3 Rhode Island Hold'em poker\nPoker is an enormously popular card game played around the world.\nThe 2005 World Series of Poker had over $103 million in total prize money, including $56 million for the main event.\nIncreasingly, poker players compete in online casinos, and television stations regularly broadcast poker tournaments.\nPoker has been
identified as an important research area in AI due to the uncertainty stemming from opponents' cards, opponents' future actions, and chance moves, among other reasons [5].

Almost since the field's founding, game theory has been used to analyze different aspects of poker [28; 37; 3; 51, pp. 186–219]. However, that work was limited to tiny games that could be solved by hand. More recently, AI researchers have been applying the computational power of modern hardware to computing game-theory-based strategies for larger games. Koller and Pfeffer determined solutions to poker games with up to 140,000 nodes using the sequence form and linear programming [25]. Large-scale approximations have been developed [4], but those methods do not provide any guarantees about the performance of the computed strategies. Furthermore, the approximations were designed manually by a human expert. Our approach yields an automated abstraction mechanism along with theoretical guarantees on the strategies' performance.

Rhode Island Hold 'em was invented as a testbed for computational game playing [47]. It was designed to be similar in style to Texas Hold 'em, yet not so large that devising reasonably intelligent strategies would be impossible. (The rules of Rhode Island Hold 'em, as well as a discussion of how it can be modeled as a game with ordered signals, i.e., how it fits our model, are available in an extended version of this paper [13].) We applied the techniques developed in this paper to find an exact (minimax) solution to Rhode Island Hold 'em, which has a game tree exceeding 3.1 billion nodes. Applying the sequence form to Rhode Island Hold 'em directly, without abstraction, yields a linear program with 91,224,226 rows and the same number of columns. This is much too large for current linear programming algorithms to handle. We used our GameShrink algorithm to reduce this with lossless abstraction, and it yielded a linear program with
1,237,238 rows and columns, with 50,428,638 non-zero coefficients. We then applied iterated elimination of dominated strategies, which further reduced this to 1,190,443 rows and 1,181,084 columns. (Applying iterated elimination of dominated strategies without GameShrink yielded 89,471,986 rows and 89,121,538 columns, which still would have been prohibitively large to solve.) GameShrink required less than one second to perform the shrinking (i.e., to compute all of the ordered game isomorphic abstraction transformations). Using a 1.65 GHz IBM eServer p5 570 with 64 gigabytes of RAM (the linear program solver actually needed 25 gigabytes), we solved it in 7 days and 17 hours using the interior-point barrier method of CPLEX version 9.1.2. We recently demonstrated our optimal Rhode Island Hold 'em poker player at the AAAI-05 conference [14], and it is available for play on-line at http://www.cs.cmu.edu/~gilpin/gsi.html. While others have worked on computer programs for playing Rhode Island Hold 'em [47], no optimal strategy has been found before. This is the largest poker game solved to date, by over four orders of magnitude.

2. GAMES WITH ORDERED SIGNALS
2.1 Information filters
2.2 Strategies and Nash equilibrium

3. EQUILIBRIUM-PRESERVING ABSTRACTIONS

4. GAMESHRINK: AN EFFICIENT ALGORITHM FOR COMPUTING ORDERED GAME ISOMORPHIC ABSTRACTION TRANSFORMATIONS
4.1 Efficiency enhancements

5. APPROXIMATION METHODS
5.1 State-space approximations
5.2 Algorithmic approximations

6. RELATED RESEARCH

Functions that transform extensive form games have been introduced [50, 11]. In contrast to our work, those approaches were not aimed at making the game smaller and easier to solve. The main result is that a game can be derived from another by a sequence of those transformations if and only if the games have the same pure reduced normal form. The pure reduced normal form is the extensive form game represented as a game in normal form where duplicates of pure
strategies (i.e., ones with identical payoffs) are removed and players essentially select equivalence classes of strategies [27]. An extension to that work shows a similar result, but for slightly different transformations and for mixed reduced normal form games [21]. Modern treatments of this prior work on game transformations exist [38, Ch. 6], [10].

The recent notion of weak isomorphism in extensive form games [7] is related to our notion of restricted game isomorphism. The motivation of that work was to justify solution concepts by arguing that they are invariant with respect to isomorphic transformations. Indeed, the author shows, among other things, that many solution concepts, including Nash, perfect, subgame perfect, and sequential equilibrium, are invariant with respect to weak isomorphisms. However, that definition requires that the games to be tested for weak isomorphism be of the same size. Our focus is entirely different: we find strategically equivalent smaller games. Also, their paper does not provide algorithms.

Abstraction techniques have been used in artificial intelligence research before. In contrast to our work, most (but not all) research involving abstraction has been for single-agent problems (e.g.
[20, 32]). Furthermore, the use of abstraction typically leads to sub-optimal solutions, unlike the techniques presented in this paper, which yield optimal solutions. A notable exception is the use of abstraction to compute optimal strategies for the game of Sprouts [2]. However, a significant difference from our work is that Sprouts is a game of perfect information.

One of the first pieces of research to use abstraction in multi-agent settings was the development of partition search, the algorithm behind GIB, the world's first expert-level computer bridge player [17, 18]. In contrast to other game tree search algorithms, which store a particular game position at each node of the search tree, partition search stores groups of positions that are similar. (Typically, the similarity of two game positions is computed by ignoring the less important components of each position and then checking whether the abstracted positions are similar, in some domain-specific, expert-defined sense, to each other.) Partition search can lead to substantial speed improvements over α-β-search. However, it is not game-theory-based (it does not consider information sets in the game tree), and thus does not solve for the equilibrium of a game of imperfect information, such as poker.8 Another difference is that the abstraction is defined by a human expert, while our abstractions are determined automatically.

There has been some research on the use of abstraction for imperfect information games. Most notably, Billings et al. [4] describe a manually constructed abstraction for Texas Hold 'em poker, and report promising results against expert players. However, this approach has significant drawbacks. First, it is highly specialized for Texas Hold 'em. Second, a large amount of expert knowledge and effort was used in constructing the abstraction. Third, the abstraction does not preserve equilibrium: even if applied to a smaller game, it might not yield a
game-theoretic equilibrium. Promising ideas for abstraction in the context of general extensive form games have been described in an extended abstract [39], but to our knowledge have not been fully developed.

7. CONCLUSIONS AND DISCUSSION

We introduced the ordered game isomorphic abstraction transformation and gave an algorithm, GameShrink, for abstracting the game by applying the isomorphism exhaustively. We proved that in games with ordered signals, any Nash equilibrium in the smaller abstracted game maps directly to a Nash equilibrium in the original game. The complexity of GameShrink is Õ(n²), where n is the number of nodes in the signal tree. The signal tree is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree.

8 Bridge is also a game of imperfect information, and partition search does not find the equilibrium for that game either. Instead, partition search is used in conjunction with statistical sampling to simulate the uncertainty in bridge. There are also other bridge programs that use search techniques for perfect information games in conjunction with statistical sampling and expert-defined abstraction [48]. Such (non-game-theoretic) techniques are unlikely to be competitive in poker because of the greater importance of information hiding and bluffing.

Using GameShrink, we found a minimax equilibrium to Rhode Island Hold 'em, a poker game with 3.1 billion nodes in the game tree, over four orders of magnitude more than in the largest poker game solved previously. To further improve scalability, we introduced an approximation variant of GameShrink, which can be used as an anytime algorithm by varying a parameter that controls the coarseness of abstraction. We also discussed how, in a two-player zero-sum game, linear programming can be used in an anytime manner to generate approximately optimal strategies of increasing quality. The method also
yields bounds on the suboptimality of the resulting strategies. We are currently working on applying these techniques to full-scale 2-player limit Texas Hold 'em poker, a highly popular card game whose game tree has about 10^18 nodes. That game tree size has required us to use the approximation version of GameShrink (as well as round-based abstraction) [16, 15].
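To convey the flavor of the lossless abstraction that GameShrink automates, consider a familiar special case of strategic symmetry in card games: hands that differ only by a relabeling of suits are interchangeable. The brute-force sketch below is our own illustration, not the GameShrink algorithm; the function name and representation are ours. It canonicalizes two-card hands under all suit permutations and counts the resulting equivalence classes.

```python
from itertools import combinations, permutations

RANKS, SUITS = range(13), range(4)
DECK = [(r, s) for r in RANKS for s in SUITS]

def canonical(hand):
    # Canonical representative of a hand under suit relabeling:
    # try all 4! permutations of the suits and keep the
    # lexicographically smallest sorted hand. Two hands with the
    # same representative are strategically identical before any
    # further cards are revealed.
    best = None
    for perm in permutations(SUITS):
        relabeled = tuple(sorted((r, perm[s]) for r, s in hand))
        if best is None or relabeled < best:
            best = relabeled
    return best

classes = {canonical(h) for h in combinations(DECK, 2)}
print(len(classes))  # 169: the 1,326 raw two-card hands collapse to 169 classes
```

This kind of merging is exactly "lossless" in the sense used above: no strategically relevant distinction is discarded. The point of GameShrink is to discover such merges automatically and efficiently, in Õ(n²) time in the signal tree, with a proof that equilibria of the abstracted game map back to equilibria of the original game.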
framework for analyzing such strategic situations.\nThe most famous and important solution concept is that of Nash equilibrium [36].\nIt is a strategy profile (one strategy for each agent) in which no agent has incentive to deviate to a different strategy.\nHowever, for the concept to be operational, we need algorithmic techniques for finding an equilibrium.\nGames can be classified as either games of perfect information or imperfect information.\nChess and Go are examples of the former, and, until recently, most game playing work has been on games of this type.\nTo compute an optimal strategy in a perfect information game, an agent traverses the game tree and evaluates individual nodes.\nMinimax search runs in linear time in the size of the game tree .2 The differentiating feature of games of imperfect information, such as poker, is that they are not fully observable: when it is an agent's turn to move, she does not have access to all of the information about the world.\nThus the algorithms for perfect information games do not solve games of imperfect information.\nFor sequential games with imperfect information, one could try to find an equilibrium using the normal (matrix) form, where every contingency plan of the agent is a pure strategy for the agent .3 Unfortunately (even if equivalent strategies\nare replaced by a single strategy [27]) this representation is generally exponential in the size of the game tree [52].\nThus, the equilibria of reasonable-sized 2-player games can be computed using this method [52, 24, 25].5 However, this approach still yields enormous (unsolvable) optimization problems for many real-world games, such as poker.\n1.1 Our approach\nIn this paper, we take a different approach to tackling the difficult problem of equilibrium computation.\nInstead of developing an equilibrium-finding method per se, we instead develop a methodology for automatically abstracting games in such a way that any equilibrium in the smaller (abstracted) game 
corresponds directly to an equilibrium in the original game.\nThus, by computing an equilibrium in the smaller game (using any available equilibrium-finding algorithm), we are able to construct an equilibrium in the original game.\nThe motivation is that an equilibrium for the smaller game can be computed drastically faster than for the original game.\nTo this end, we introduce games with ordered signals (Section 2), a broad class of games that has enough structure which we can exploit for abstraction purposes.\nInstead of operating directly on the game tree (something we found to be technically challenging), we instead introduce the use of information filters (Section 2.1), which coarsen the information each player receives.\nThey are used in our analysis and abstraction algorithm.\nBy operating only in the space of filters, we are able to keep the strategic structure of the game intact, while abstracting out details of the game in a way that is lossless from the perspective of equilibrium finding.\nWe introduce the ordered game isomorphism to describe strategically symmetric situations and the ordered game isomorphic abstraction transformation to take advantange of such symmetries (Section 3).\nAs our main equilibrium result we have the following: constant number of agents can be constructed in quasipolynomial time [31], but finding an exact equilibrium is PPAD-complete even in a 2-player game [8].\nThe most prevalent algorithm for finding an equilibrium in a 2-agent game is Lemke-Howson [30], but it takes exponentially many steps in the worst case [44].\nFor a survey of equilibrium computation in 2-player games, see [53].\nFor more than two players, many algorithms have been proposed, but they currently only scale to very small games [19, 34, 40].\n4There were also early techniques that capitalized in different ways on the fact that in many games the vast majority of pure strategies are not played in equilibrium [54, 23].\n5Recently this approach was extended to 
handle computing sequential equilibria [26] as well [35].\nTheorem 2 Let \u0393 be a game with ordered signals, and let F be an information filter for \u0393.\nLet F' be an information filter constructed from F by one application of the ordered game isomorphic abstraction transformation, and let \u03c3' be a Nash equilibrium strategy profile of the induced game \u0393F (i.e., the game \u0393 using the filter F').\nIf \u03c3 is constructed by using the corresponding strategies of \u03c3', then \u03c3 is a Nash equilibrium of \u0393F.\nWe can then use the fact that \u03c3' is a Nash equilibrium to show that \u03c3 is a Nash equilibrium considering only local properties of the game.\nWe also give an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively (Section 4).\nIts complexity is \u02dcO (n2), where n is the number of nodes in a structure we call the signal tree.\nIt is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree.\n1.2 Electronic commerce applications\nSequential games of imperfect information are ubiquitous, for example in negotiation and in auctions.\nOften aspects of a player's knowledge are not pertinent for deciding what action the player should take at a given point in the game.\nHowever, some aspects can be pertinent in certain states of the game while they are not pertinent in other states, and thus cannot be left out of the model completely.\nFurthermore, it may be highly non-obvious which aspects are pertinent in which states of the game.\nOur algorithm automatically discovers which aspects are irrelevant in different states, and eliminates those aspects of the game, resulting in a more compact, equivalent game representation.\nOne broad application area that has this property is sequential negotiation (potentially over multiple issues).\nAnother broad application area is sequential auctions (potentially 
over multiple goods).\nMany open-cry sequential auction and negotiation mechanisms fall within the game model studied in this paper (specified in detail later), as do certain other games in electronic commerce, such as sequences of take-it-or-leave-it offers [42].\nOur techniques are in no way specific to an application.\nThe main experiment that we present in this paper is on\na recreational game.\n1.3 Rhode Island Hold 'em poker\nPoker is an enormously popular card game played around the world.\nIncreasingly, poker players compete in online casinos, and television stations regularly broadcast poker tournaments.\nAlmost since the field's founding, game theory has been used to analyze different aspects of poker [28; 37; 3; 51, pp. 186--219].\nHowever, this work was limited to tiny games that could be solved by hand.\nMore recently, AI researchers have been applying the computational power of modern hardware to computing game theory-based strategies for larger games.\nKoller and Pfeffer determined solutions to poker games with up to 140,000 nodes using the sequence form and linear programming [25].\nLarge-scale approximations have been developed [4], but those methods do not provide any guarantees about the performance of the computed strategies.\nFurthermore, the approximations were designed manually by a human expert.\nOur approach yields an automated abstraction mechanism along with theoretical guarantees on the strategies' performance.\nRhode Island Hold 'em was invented as a testbed for computational game playing [47].\nIt was designed so that it was similar in style to Texas Hold 'em, yet not so large that devising reasonably intelligent strategies would be impossible.\n(The rules of Rhode Island Hold 'em, as well as a discussion of how Rhode Island Hold 'em can be modeled as a game with ordered signals, that is, it fits in our model, is available in an extended version of this paper [13].)\nWe applied the techniques developed in this paper to find an exact 
(minimax) solution to Rhode Island Hold 'em, which has a game tree exceeding 3.1 billion nodes.\nApplying the sequence form to Rhode Island Hold 'em directly without abstraction yields a linear program with 91,224,226 rows, and the same number of columns.\nThis is much too large for (current) linear programming algorithms to handle.\nWe used our GameShrink algorithm to reduce this with lossless abstraction, and it yielded a linear program with 1,237,238 rows and columns--with 50,428,638 non-zero coefficients.\nWe then applied iterated elimination of dominated strategies, which further reduced this to 1,190,443 rows and 1,181,084 columns.\n(Applying iterated elimination of dominated strategies without GameShrink yielded 89,471,986 rows and 89,121,538 columns, which still would have been prohibitively large to solve.)\nGameShrink required less than one second to perform the shrinking (i.e., to compute all of the ordered game isomorphic abstraction transformations).\nWe recently demonstrated our optimal Rhode Island Hold 'em poker player at the AAAI-05 conference [14], and it is available for play on-line at http:\/\/www.cs.cmu.edu\/ ~ gilpin\/gsi.\nhtml.\nWhile others have worked on computer programs for playing Rhode Island Hold 'em [47], no optimal strategy has been found before.\nThis is the largest poker game solved to date by over four orders of magnitude.\n6.\nRELATED RESEARCH\nFunctions that transform extensive form games have been introduced [50, 11].\nIn contrast to our work, those approaches were not for making the game smaller and easier to solve.\nThe main result is that a game can be derived from another by a sequence of those transformations if and only if the games have the same pure reduced normal form.\nThe pure reduced normal form is the extensive form game represented as a game in normal form where duplicates of pure strategies (i.e., ones with identical payoffs) are removed and players essentially select equivalence classes of strategies [27].\nAn 
extension to that work shows a similar result, but for slightly different transformations and mixed reduced normal form games [21].\nModern treatments of this prior work on game transformations exist [38, Ch.\nThe recent notion of weak isomorphism in extensive form games [7] is related to our notion of restricted game isomorphism.\nThe motivation of that work was to justify solution concepts by arguing that they are invariant with respect to isomorphic transformations.\nHowever, that definition requires that the games to be tested for weak isomorphism are of the same size.\nOur focus is totally different: we find strategically equivalent smaller games.\nAlso, their paper does not provide algorithms.\nAbstraction techniques have been used in artificial intelligence research before.\nIn contrast to our work, most (but not all) research involving abstraction has been for singleagent problems (e.g. [20, 32]).\nFurthermore, the use of abstraction typically leads to sub-optimal solutions, unlike the techniques presented in this paper, which yield optimal solutions.\nA notable exception is the use of abstraction to compute optimal strategies for the game of Sprouts [2].\nHowever, a significant difference to our work is that Sprouts is a game of perfect information.\nIn contrast to other game tree search algorithms which store a particular game position at each node of the search tree, partition search stores groups of positions that are similar.\nPartition search can lead to substantial speed improvements over \u03b1-\u03b2-search.\nHowever, it is not game theory-based (it does not consider information sets in the game tree), and thus does not solve for the equilibrium of a game of imperfect information, such as poker .8 Another difference is that the abstraction is defined by an expert human while our abstractions are determined automatically.\nThere has been some research on the use of abstraction for imperfect information games.\nMost notably, Billings et al [4] 
describe a manually constructed abstraction for Texas Hold 'em poker, and include promising results against expert players.\nHowever, this approach has significant drawbacks.\nFirst, it is highly specialized for Texas Hold 'em.\nSecond, a large amount of expert knowledge and effort was used in constructing the abstraction.\nThird, the abstraction does not preserve equilibrium: even if applied to a smaller game, it might not yield a game-theoretic equilibrium.\nPromising ideas for abstraction in the context of general extensive form games have been described in an extended abstract [39], but to our knowledge, have not been fully developed.\n7.\nCONCLUSIONS AND DISCUSSION\nWe introduced the ordered game isomorphic abstraction transformation and gave an algorithm, GameShrink, for abstracting the game using the isomorphism exhaustively.\nWe proved that in games with ordered signals, any Nash equilibrium in the smaller abstracted game maps directly to a Nash equilibrium in the original game.\nThe complexity of GameShrink is \u02dcO (n2), where n is the number of nodes in the signal tree.\nIt is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in 8Bridge is also a game of imperfect information, and partition search does not find the equilibrium for that game either.\nInstead, partition search is used in conjunction with statistical sampling to simulate the uncertainty in bridge.\nThere are also other bridge programs that use search techniques for perfect information games in conjunction with statistical sampling and expert-defined abstraction [48].\nSuch (non-game-theoretic) techniques are unlikely to be competitive in poker because of the greater importance of information hiding and bluffing.\nthe size of the game tree.\nUsing GameShrink, we found a minimax equilibrium to Rhode Island Hold 'em, a poker game with 3.1 billion nodes in the game tree--over four orders of magnitude more 
than in the largest poker game solved previously.\nTo further improve scalability, we introduced an approximation variant of GameShrink, which can be used as an anytime algorithm by varying a parameter that controls the coarseness of abstraction.\nWe also discussed how (in a two-player zero-sum game), linear programming can be used in an anytime manner to generate approximately optimal strategies of increasing quality.\nThe method also yields bounds on the suboptimality of the resulting strategies.\nWe are currently working on using these techniques for full-scale 2-player limit Texas Hold 'em poker, a highly popular card game whose game tree has about 1018 nodes.\nThat game tree size has required us to use the approximation version of GameShrink (as well as round-based abstraction) [16, 15].","lvl-2":"Finding Equilibria in Large Sequential Games of Imperfect Information *\nABSTRACT\nFinding an equilibrium of an extensive form game of imperfect information is a fundamental problem in computational game theory, but current techniques do not scale to large games.\nTo address this, we introduce the ordered game isomorphism and the related ordered game isomorphic abstraction transformation.\nFor a multi-player sequential game of imperfect information with observable actions and an ordered signal space, we prove that any Nash equilibrium in an abstracted smaller game, obtained by one or more applications of the transformation, can be easily converted into a Nash equilibrium in the original game.\nWe present an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively.\nIts complexity is \u02dcO (n2), where n is the number of nodes in a structure we call the signal tree.\nIt is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree.\nUsing GameShrink, we find an equilibrium to a poker game with 3.1 billion nodes--over four orders of 
magnitude more than in the largest poker game solved previously.\nWe discuss several electronic commerce applications for GameShrink.\nTo address even larger games, we introduce approximation methods that do not preserve equilibrium, but nevertheless yield (ex post) provably close-to-optimal strategies.\n1.\nINTRODUCTION\nIn environments with more than one agent, an agent's outcome is generally affected by the actions of the other * This material is based upon work supported by the National Science Foundation under ITR grants IIS-0121678 and IIS-0427858, and a Sloan Fellowship.\nagent (s).\nConsequently, the optimal action of one agent can depend on the others.\nGame theory provides a normative framework for analyzing such strategic situations.\nIn particular, it provides solution concepts that define what rational behavior is in such settings.\nThe most famous and important solution concept is that of Nash equilibrium [36].\nIt is a strategy profile (one strategy for each agent) in which no agent has incentive to deviate to a different strategy.\nHowever, for the concept to be operational, we need algorithmic techniques for finding an equilibrium.\nGames can be classified as either games of perfect information or imperfect information.\nChess and Go are examples of the former, and, until recently, most game playing work has been on games of this type.\nTo compute an optimal strategy in a perfect information game, an agent traverses the game tree and evaluates individual nodes.\nIf the agent is able to traverse the entire game tree, she simply computes an optimal strategy from the bottom-up, using the principle of backward induction .1 In computer science terms, this is done using minimax search (often in conjunction with \u03b1\u03b2-pruning to reduce the search tree size and thus enhance speed).\nMinimax search runs in linear time in the size of the game tree .2 The differentiating feature of games of imperfect information, such as poker, is that they are not 
fully observable: when it is an agent's turn to move, she does not have access to all of the information about the world.\nIn such games, the decision of what to do at a point in time cannot generally be optimally made without considering decisions at all other points in time (including ones on other paths of play) because those other decisions affect the probabilities of being at different states at the current point in time.\nThus the algorithms for perfect information games do not solve games of imperfect information.\nFor sequential games with imperfect information, one could try to find an equilibrium using the normal (matrix) form, where every contingency plan of the agent is a pure strategy for the agent .3 Unfortunately (even if equivalent strategies\nare replaced by a single strategy [27]) this representation is generally exponential in the size of the game tree [52].\nBy observing that one needs to consider only sequences of moves rather than pure strategies [41, 46, 22, 52], one arrives at a more compact representation, the sequence form, which is linear in the size of the game tree .4 For 2-player games, there is a polynomial-sized (in the size of the game tree) linear programming formulation (linear complementarity in the non-zero-sum case) based on the sequence form such that strategies for players 1 and 2 correspond to primal and dual variables.\nThus, the equilibria of reasonable-sized 2-player games can be computed using this method [52, 24, 25].5 However, this approach still yields enormous (unsolvable) optimization problems for many real-world games, such as poker.\n1.1 Our approach\nIn this paper, we take a different approach to tackling the difficult problem of equilibrium computation.\nInstead of developing an equilibrium-finding method per se, we instead develop a methodology for automatically abstracting games in such a way that any equilibrium in the smaller (abstracted) game corresponds directly to an equilibrium in the original 
game.\nThus, by computing an equilibrium in the smaller game (using any available equilibrium-finding algorithm), we are able to construct an equilibrium in the original game.\nThe motivation is that an equilibrium for the smaller game can be computed drastically faster than for the original game.\nTo this end, we introduce games with ordered signals (Section 2), a broad class of games that has enough structure which we can exploit for abstraction purposes.\nInstead of operating directly on the game tree (something we found to be technically challenging), we instead introduce the use of information filters (Section 2.1), which coarsen the information each player receives.\nThey are used in our analysis and abstraction algorithm.\nBy operating only in the space of filters, we are able to keep the strategic structure of the game intact, while abstracting out details of the game in a way that is lossless from the perspective of equilibrium finding.\nWe introduce the ordered game isomorphism to describe strategically symmetric situations and the ordered game isomorphic abstraction transformation to take advantange of such symmetries (Section 3).\nAs our main equilibrium result we have the following: constant number of agents can be constructed in quasipolynomial time [31], but finding an exact equilibrium is PPAD-complete even in a 2-player game [8].\nThe most prevalent algorithm for finding an equilibrium in a 2-agent game is Lemke-Howson [30], but it takes exponentially many steps in the worst case [44].\nFor a survey of equilibrium computation in 2-player games, see [53].\nRecently, equilibriumfinding algorithms that enumerate supports (i.e., sets of pure strategies that are played with positive probability) have been shown efficient on many games [40], and efficient mixed integer programming algorithms that search in the space of supports have been developed [43].\nFor more than two players, many algorithms have been proposed, but they currently only scale to very 
small games [19, 34, 40].

4There were also early techniques that capitalized in different ways on the fact that in many games the vast majority of pure strategies are not played in equilibrium [54, 23].

5Recently this approach was extended to handle computing sequential equilibria [26] as well [35].

Theorem 2 Let Γ be a game with ordered signals, and let F be an information filter for Γ. Let F' be an information filter constructed from F by one application of the ordered game isomorphic abstraction transformation, and let σ' be a Nash equilibrium strategy profile of the induced game Γ_F' (i.e., the game Γ using the filter F'). If σ is constructed by using the corresponding strategies of σ', then σ is a Nash equilibrium of Γ_F.

The proof of the theorem uses an equivalent characterization of Nash equilibria: σ is a Nash equilibrium if and only if there exist beliefs μ (players' beliefs about unknown information) at all points of the game reachable by σ such that σ is sequentially rational (i.e., a best response) given μ, where μ is updated using Bayes' rule. We can then use the fact that σ' is a Nash equilibrium to show that σ is a Nash equilibrium considering only local properties of the game.

We also give an algorithm, GameShrink, for abstracting the game using our isomorphism exhaustively (Section 4). Its complexity is Õ(n²), where n is the number of nodes in a structure we call the signal tree. It is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree. We present several algorithmic and data structure related speed improvements (Section 4.1), and we demonstrate how a simple modification to our algorithm yields an approximation algorithm (Section 5).

1.2 Electronic commerce applications

Sequential games of imperfect information are ubiquitous, for
example in negotiation and in auctions.\nOften aspects of a player's knowledge are not pertinent for deciding what action the player should take at a given point in the game.\nOn the trivial end, some aspects of a player's knowledge are never pertinent (e.g., whether it is raining or not has no bearing on the bidding strategy in an art auction), and such aspects can be completely left out of the model specification.\nHowever, some aspects can be pertinent in certain states of the game while they are not pertinent in other states, and thus cannot be left out of the model completely.\nFurthermore, it may be highly non-obvious which aspects are pertinent in which states of the game.\nOur algorithm automatically discovers which aspects are irrelevant in different states, and eliminates those aspects of the game, resulting in a more compact, equivalent game representation.\nOne broad application area that has this property is sequential negotiation (potentially over multiple issues).\nAnother broad application area is sequential auctions (potentially over multiple goods).\nFor example, in those states of a 1-object auction where bidder A can infer that his valuation is greater than that of bidder B, bidder A can ignore all his other information about B's signals, although that information would be relevant for inferring B's exact valuation.\nFurthermore, in some states of the auction, a bidder might not care which exact other bidders have which valuations, but cares about which valuations are held by the other bidders in aggregate (ignoring their identities).\nMany open-cry sequential auction and negotiation mechanisms fall within the game model studied in this paper (specified in detail later), as do certain other games in electronic commerce, such as sequences of take-it-or-leave-it offers [42].\nOur techniques are in no way specific to an application.\nThe main experiment that we present in this paper is on\na recreational game.\nWe chose a particular poker game as 
the benchmark problem because it yields an extremely complicated and enormous game tree, it is a game of imperfect information, it is fully specified as a game (and the data is available), and it has been posted as a challenge problem by others [47] (to our knowledge no such challenge problem instances have been proposed for electronic commerce applications that require solving sequential games).

1.3 Rhode Island Hold 'em poker

Poker is an enormously popular card game played around the world. The 2005 World Series of Poker had over $103 million in total prize money, including $56 million for the main event. Increasingly, poker players compete in online casinos, and television stations regularly broadcast poker tournaments. Poker has been identified as an important research area in AI due to the uncertainty stemming from opponents' cards, opponents' future actions, and chance moves, among other reasons [5]. Almost since the field's founding, game theory has been used to analyze different aspects of poker [28; 37; 3; 51, pp.
186--219]. However, this work was limited to tiny games that could be solved by hand. More recently, AI researchers have been applying the computational power of modern hardware to computing game theory-based strategies for larger games. Koller and Pfeffer determined solutions to poker games with up to 140,000 nodes using the sequence form and linear programming [25]. Large-scale approximations have been developed [4], but those methods do not provide any guarantees about the performance of the computed strategies. Furthermore, the approximations were designed manually by a human expert. Our approach yields an automated abstraction mechanism along with theoretical guarantees on the strategies' performance.

Rhode Island Hold 'em was invented as a testbed for computational game playing [47]. It was designed so that it was similar in style to Texas Hold 'em, yet not so large that devising reasonably intelligent strategies would be impossible. (The rules of Rhode Island Hold 'em, as well as a discussion of how Rhode Island Hold 'em can be modeled as a game with ordered signals, that is, that it fits in our model, are available in an extended version of this paper [13].) We applied the techniques developed in this paper to find an exact (minimax) solution to Rhode Island Hold 'em, which has a game tree exceeding 3.1 billion nodes. Applying the sequence form to Rhode Island Hold 'em directly without abstraction yields a linear program with 91,224,226 rows and the same number of columns. This is much too large for (current) linear programming algorithms to handle. We used our GameShrink algorithm to reduce this with lossless abstraction, and it yielded a linear program with 1,237,238 rows and columns, with 50,428,638 non-zero coefficients. We then applied iterated elimination of dominated strategies, which further reduced this to 1,190,443 rows and 1,181,084 columns. (Applying iterated elimination of dominated strategies without GameShrink yielded 89,471,986
rows and 89,121,538 columns, which still would have been prohibitively large to solve.) GameShrink required less than one second to perform the shrinking (i.e., to compute all of the ordered game isomorphic abstraction transformations). Using a 1.65 GHz IBM eServer p5 570 with 64 gigabytes of RAM (the linear program solver actually needed 25 gigabytes), we solved it in 7 days and 17 hours using the interior-point barrier method of CPLEX version 9.1.2. We recently demonstrated our optimal Rhode Island Hold 'em poker player at the AAAI-05 conference [14], and it is available for play on-line at http://www.cs.cmu.edu/~gilpin/gsi.html. While others have worked on computer programs for playing Rhode Island Hold 'em [47], no optimal strategy has been found before. This is the largest poker game solved to date by over four orders of magnitude.

2. GAMES WITH ORDERED SIGNALS

We work with a slightly restricted class of games, as compared to the full generality of the extensive form. This class, which we call games with ordered signals, is highly structured, but still general enough to capture a wide range of strategic situations. A game with ordered signals consists of a finite number of rounds. Within a round, the players play a game on a directed tree (the tree can be different in different rounds). The only uncertainty players face stems from private signals the other players have received and from the unknown future signals. In other words, players observe each others' actions, but potentially not nature's actions. In each round, there can be public signals (announced to all players) and private signals (confidentially communicated to individual players). For simplicity, we assume, as is the case in most recreational games, that within each round, the number of private signals received is the same across players (this could quite likely be relaxed). We also assume that the legal actions that a player has are independent of the signals
received. For example, in poker, the legal betting actions are independent of the cards received. Finally, the strongest assumption is that there is a partial ordering over sets of signals, and the payoffs are increasing (not necessarily strictly) in these signals. For example, in poker, this partial ordering corresponds exactly to the ranking of card hands.

1. I = {1, ..., n} is a finite set of players.

2. G = (G^1, ..., G^r), G^j = (V^j, E^j), is a finite collection of finite directed trees with nodes V^j and edges E^j. Let Z^j denote the leaf nodes of G^j and let N^j(v) denote the outgoing neighbors of v ∈ V^j. G^j is the stage game for round j.

3. L = (L^1, ..., L^r), L^j: V^j \ Z^j → I indicates which player acts (chooses an outgoing edge) at each internal node in round j.

4. Θ is a finite set of signals.

5. κ = (κ^1, ..., κ^r) and γ = (γ^1, ..., γ^r) are vectors of nonnegative integers, where κ^j and γ^j denote the number of public and private signals (per player), respectively, revealed in round j.
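To make the components of the tuple concrete, the following is a minimal Python sketch of the static data in the definition: players, per-round stage trees, the acting-player map, and the signal counts. All class and method names here are our own illustration, not the paper's implementation; the budget check encodes the requirement (stated next) that each signal is revealed at most once.

```python
from dataclasses import dataclass

@dataclass
class StageTree:
    """One round's stage game G^j: a finite directed tree given by its edges."""
    edges: dict  # node -> list of child nodes

    def leaves(self):
        # Z^j: nodes with no outgoing edges.
        nodes = set(self.edges) | {c for cs in self.edges.values() for c in cs}
        return {v for v in nodes if not self.edges.get(v)}

@dataclass
class OrderedGameSpec:
    """Static data of a game with ordered signals (illustrative only)."""
    players: list      # I = {1, ..., n}
    stage_trees: list  # G = (G^1, ..., G^r)
    acting: list       # L^j: internal node of G^j -> acting player
    signals: set       # Theta
    kappa: list        # public signals revealed per round
    gamma: list        # private signals per player per round

    def check_signal_budget(self):
        # Each signal may be revealed at most once across all rounds:
        # sum_j (kappa^j + n * gamma^j) <= |Theta|
        n = len(self.players)
        used = sum(k + n * g for k, g in zip(self.kappa, self.gamma))
        return used <= len(self.signals)
```

For instance, a one-round, two-player game dealing one private card each from a four-card deck uses 2 of the 4 available signals, so the budget constraint holds.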
Each signal θ ∈ Θ may only be revealed once, and in each round every player receives the same number of private signals, so we require ∑_{j=1}^{r} (κ^j + nγ^j) ≤ |Θ|. The public information revealed in round j is α^j ∈ Θ^{κ^j} and the public information revealed in all rounds up through round j is α̃^j = (α^1, ..., α^j). The private information revealed to player i ∈ I in round j is β_i^j ∈ Θ^{γ^j} and the private information revealed to player i ∈ I in all rounds up through round j is β̃_i^j = (β_i^1, ..., β_i^j).

ω maps terminal nodes within a stage game to one of two values: over, in which case the game ends, or continue, in which case the game continues to the next round. Clearly, we require ω(z) = over for all z ∈ Z^r. Note that ω is independent of the signals. Let ω^j_over = {z ∈ Z^j | ω(z) = over} and ω^j_cont = {z ∈ Z^j | ω(z) = continue}. The utility function u is defined over terminal sequences in ∏_{k=1}^{j-1} ω^k_cont × ω^j_over; a player's utility depends on the signals revealed through round j and a player's utility is increasing in her private signals, everything else equal.

We will use the term game with ordered signals and the term ordered game interchangeably.

2.1 Information filters

In this subsection, we define an information filter for ordered games. Instead of completely revealing a signal (either public or private) to a player, the signal first passes through this filter, which outputs a coarsened signal to the player. By varying the filter applied to a game, we are able to obtain a wide variety of games while keeping the underlying action space of the game intact. We will use this when designing our abstraction techniques. Formally, an information filter is as follows. Let S^j denote the set of legal signals (i.e., no repeated signals) for one player through round j. An information filter for Γ is a collection F = ⟨F^1, ..., F^r⟩ where each F^j is a function F^j: S^j → 2^{S^j} such that each of the following conditions hold:

1. (Truthfulness) (α̃^j, β̃_i^j) ∈ F^j
(α̃^j, β̃_i^j) for all legal (α̃^j, β̃_i^j).

2. (Independence) The range of F^j is a partition of S^j.

3. (Information preservation) If two values of a signal are distinguishable in round k, then they are distinguishable for each round j > k. Let m^j = ∑_{l=1}^{j} (κ^l + γ^l). We require that for all legal (θ_1, ..., θ_{m^k}, ..., θ_{m^j}) ⊆ Θ and (θ'_1, ..., θ'_{m^k}, ..., θ'_{m^j}) ⊆ Θ:

(θ'_1, ..., θ'_{m^k}) ∉ F^k(θ_1, ..., θ_{m^k}) ⟹ (θ'_1, ..., θ'_{m^k}, ..., θ'_{m^j}) ∉ F^j(θ_1, ..., θ_{m^k}, ..., θ_{m^j}).

A game with ordered signals Γ and an information filter F for Γ defines a new game Γ_F. We refer to such games as filtered ordered games. We are left with the original game if we use the identity filter F^j(α̃^j, β̃_i^j) = {(α̃^j, β̃_i^j)}. We have the following simple (but important) result:

PROPOSITION 1. A filtered ordered game is an extensive form game satisfying perfect recall.

A simple proof proceeds by constructing an extensive form game directly from the ordered game, and showing that it satisfies perfect recall. In determining the payoffs in a game with filtered signals, we take the average over all real signals in the filtered class, weighted by the probability of each real signal occurring.

2.2 Strategies and Nash equilibrium

We are now ready to define behavior strategies in the context of filtered ordered games.

DEFINITION 3. A behavior strategy for player i in round j of Γ = ⟨I, G, L, Θ, κ, γ, p, ≻, ω, u⟩ with information filter F is a probability distribution over possible actions, and is defined for each player i, each round j, and each v ∈ V^j \ Z^j for L^j(v) = i:

(Δ(X) is the set of probability distributions over a finite set X.)
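The filter conditions of truthfulness and independence can be checked mechanically when a filter is given explicitly. The following Python sketch illustrates this for a round-j filter represented as a dictionary from each legal signal vector to its filtered class; all function names are ours, for illustration only, not part of any implementation described in the paper.

```python
# A round-j information filter maps each legal signal vector to the set of
# vectors the player cannot distinguish from it. We represent F^j as a dict:
# signal vector -> filtered class (a set of signal vectors).

def is_truthful(filter_j):
    # Truthfulness: each signal vector lies in its own filtered class.
    return all(s in filter_j[s] for s in filter_j)

def is_partition(filter_j):
    # Independence: the range of F^j partitions the legal signal vectors,
    # i.e., the classes are pairwise disjoint and cover every legal vector.
    classes = {frozenset(c) for c in filter_j.values()}
    covered = set().union(*classes) if classes else set()
    disjoint = sum(len(c) for c in classes) == len(covered)
    return disjoint and covered == set(filter_j)

def identity_filter(legal_signals):
    # Identity filter: every signal vector is its own class, which leaves
    # the original (unabstracted) game.
    return {s: {s} for s in legal_signals}
```

A coarsened filter that pools, say, two jacks into one class still satisfies both conditions, which is exactly the kind of filter the abstraction transformation produces.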
A behavior strategy for player i in round j is σ_i^j = (σ_{i,v_1}^j, ..., σ_{i,v_m}^j) for each v_k ∈ V^j \ Z^j where L^j(v_k) = i. A behavior strategy for player i in Γ is σ_i = (σ_i^1, ..., σ_i^r). A strategy profile is σ = (σ_1, ..., σ_n). A strategy profile with σ_i replaced by σ'_i is (σ'_i, σ_{-i}) = (σ_1, ..., σ_{i-1}, σ'_i, σ_{i+1}, ..., σ_n). By an abuse of notation, we will say player i receives an expected payoff of u_i(σ) when all players are playing the strategy profile σ. Strategy σ_i is said to be player i's best response to σ_{-i} if for all other strategies σ'_i for player i we have u_i(σ_i, σ_{-i}) ≥ u_i(σ'_i, σ_{-i}). σ is a Nash equilibrium if, for every player i, σ_i is a best response to σ_{-i}. A Nash equilibrium always exists in finite extensive form games [36], and one exists in behavior strategies for games with perfect recall [29]. Using these observations, we have the following corollary to Proposition 1:

COROLLARY 1. For any ordered game Γ and any information filter F for Γ, a Nash equilibrium exists in behavior strategies in the filtered ordered game Γ_F.

3. EQUILIBRIUM-PRESERVING ABSTRACTIONS

In this section, we present our main technique for reducing the size of games. We begin by defining a filtered signal tree which represents all of the chance moves in the game. The bold edges (i.e.
the first two levels of the tree) in the game trees in Figure 1 correspond to the filtered signal trees in each game.

DEFINITION 4. Associated with every ordered game Γ = (I, G, L, Θ, κ, γ, p, ≻, ω, u) and information filter F is a filtered signal tree, a directed tree in which each node corresponds to some revealed (filtered) signals and edges correspond to revealing specific (filtered) signals. The nodes in the filtered signal tree represent the set of all possible revealed filtered signals (public and private) at some point in time. The filtered public signals revealed in round j correspond to the nodes in the κ^j levels beginning at level ∑_{k=1}^{j-1} (κ^k + nγ^k), and the filtered private signals revealed in round j correspond to the nodes in the nγ^j levels beginning at level ∑_{k=1}^{j} κ^k + ∑_{k=1}^{j-1} nγ^k. We denote children of a node x as N(x). In addition, we associate weights with the edges corresponding to the probability of the particular edge being chosen given that its parent was reached.

In many games, there are certain situations in the game that can be thought of as being strategically equivalent to other situations in the game. By melding these situations together, it is possible to arrive at a strategically equivalent smaller game. The next two definitions formalize this notion via the introduction of the ordered game isomorphic relation and the ordered game isomorphic abstraction transformation. Suppose ϑ and ϑ' are ordered game isomorphic, and ϑ and ϑ' are at either level ∑_{k=1}^{j-1} (κ^k + nγ^k) or ∑_{k=1}^{j} κ^k + ∑_{k=1}^{j-1} nγ^k for some round j. The ordered game isomorphic abstraction transformation is given by creating a new information filter F'. Figure 1 shows the ordered game isomorphic abstraction transformation applied twice to a tiny poker game. Theorem 2, our main equilibrium result, shows how the ordered game isomorphic abstraction transformation can be used to compute equilibria faster.

THEOREM 2. Let Γ = (I, G, L, Θ, κ, γ, p, ≻, ω, u) be an ordered game and F be
an information filter for Γ. Let F' be an information filter constructed from F by one application of the ordered game isomorphic abstraction transformation, and let σ' be a Nash equilibrium of the induced game Γ_F'. If σ is constructed by using the corresponding strategies of σ', then σ is a Nash equilibrium of Γ_F.

PROOF. For an extensive form game, a belief system μ assigns a probability to every decision node x such that ∑_{x ∈ h} μ(x) = 1 for every information set h. A strategy profile σ is sequentially rational at h given belief system μ if u_i(σ | h, μ) ≥ u_i(τ_i, σ_{-i} | h, μ) for all other strategies τ_i, where i is the player who controls h. A basic result [33, Proposition 9.C.1] characterizing Nash equilibria dictates that σ is a Nash equilibrium if and only if there is a belief system μ such that for every information set h with Pr(h | σ) > 0, the following two conditions hold: (C1) σ is sequentially rational at h given μ; and (C2) μ(x) = Pr(x | σ) / Pr(h | σ) for all x ∈ h. Since σ' is a Nash equilibrium of Γ_F', there exists such a belief system μ' for Γ_F'. Using μ', we will construct a belief system μ for Γ and show that conditions C1 and C2 hold, thus supporting σ as a Nash equilibrium.

Fix some player i ∈ I.
Each of i's information sets in some round j corresponds to filtered signals F^j(α̃*^j, β̃*_i^j), history in the first j-1 rounds (z_1, ..., z_{j-1}) ∈ ∏_{k=1}^{j-1} ω^k_cont, and history so far in round j, v ∈ V^j \ Z^j. Let z̃ = (z_1, ..., z_{j-1}, v) represent all of the player actions leading to this information set. Thus, we can uniquely specify this information set using the information (F^j(α̃*^j, β̃*_i^j), z̃). Each node in an information set corresponds to the possible private signals the other players have received. Denote by β̃ some legal (F^j(α̃^j, β̃_1^j), ..., F^j(α̃^j, β̃_{i-1}^j), F^j(α̃^j, β̃_{i+1}^j), ..., F^j(α̃^j, β̃_n^j)). In other words, there exists (α̃^j, β̃_1^j, ..., β̃_n^j) such that the other players' filtered signals are exactly β̃. (We abuse notation and write F^j(β̂) = β̂'.) We can now compute μ using μ'.

Figure 1: GameShrink applied to a tiny two-person four-card (two Jacks and two Kings) poker game. Next to each game tree is the range of the information filter F.
Dotted lines denote information sets, which are labeled by the controlling player. Open circles are chance nodes with the indicated transition probabilities. The root node is the chance node for player 1's card, and the next level is for player 2's card. The payment from player 2 to player 1 is given below each leaf. In this example, the algorithm reduces the game tree from 53 nodes to 19 nodes.

Pr(β̂ | F^j(α̃^j, β̃_i^j)), where p* = Pr(β̂' | F^j(α̃^j, β̃_i^j)). The following three claims show that μ as calculated above supports σ as a Nash equilibrium. The proofs of Claims 1-3 are in an extended version of this paper [13]. By Claims 1 and 2, we know that condition C2 holds. By Claim 3, we know that condition C1 holds. Thus, σ is a Nash equilibrium.

3.1 Nontriviality of generalizing beyond this model

Our model does not capture general sequential games of imperfect information because it is restricted in two ways (as discussed above): 1) there is a special structure connecting the player actions and the chance actions (for one, the players are assumed to observe each others' actions, but nature's actions might not be publicly observable), and 2) there is a common ordering of signals. In this subsection we show that removing either of these conditions can make our technique invalid.

First, we demonstrate a failure when removing the first assumption. Consider the game in Figure 2.6 Nodes a and b are in the same information set, have the same parent (chance) node, have isomorphic subtrees with the same payoffs, and nodes c and d also have similar structural properties. By merging the subtrees beginning at a and b, we get the game on the right in Figure 2. In this game, player 1's only Nash equilibrium strategy is to play left. But in the original game, player 1 knows that node c will never be reached, and so should play right in that information set.

Figure 2: Example illustrating difficulty in developing a theory of equilibrium-preserving abstractions for general extensive form games.

Removing the second assumption (that the utility functions are based on a common ordering of signals) can also cause failure. Consider a simple three-card game with a deck containing two Jacks (J1 and J2) and a King (K), where player 1's utility function is based on the ordering K ≻ J1 ∼ J2 but player 2's utility function is based on the ordering J2 ≻ K ≻ J1. It is easy to check that in the abstracted game (where Player 1 treats J1 and J2 as being "equivalent") the Nash equilibrium does not correspond to a Nash equilibrium in the original game.7

4. GAMESHRINK: AN EFFICIENT ALGORITHM FOR COMPUTING ORDERED GAME ISOMORPHIC ABSTRACTION TRANSFORMATIONS

This section presents an algorithm, GameShrink, for conducting the abstractions. It only needs to analyze the signal tree discussed above, rather than the entire game tree. We first present a subroutine that GameShrink uses. It is a dynamic program for computing the ordered game isomorphic relation. Again, it operates on the signal tree.

ALGORITHM 1. OrderedGameIsomorphic?(Γ, ϑ, ϑ')

1. If ϑ and ϑ' have different parents, then return false.

2. If ϑ and ϑ' are both leaves of the signal tree:
(a) If u^r(ϑ | z̃) = u^r(ϑ' | z̃) for all z̃ ∈ ω^r_over, then return true.
(b) Otherwise, return false.

3. Create a bipartite graph G_{ϑ,ϑ'} = (V_1, V_2, E) with V_1 = N(ϑ) and V_2 = N(ϑ').

4. For each v_1 ∈ V_1 and v_2 ∈ V_2: If OrderedGameIsomorphic?(Γ, v_1, v_2), create edge (v_1, v_2).

5. Return true if G_{ϑ,ϑ'} has a perfect matching; otherwise, return false.

By evaluating this dynamic program from bottom to top, Algorithm 1 determines, in time polynomial in the size of the signal tree, whether or not any pair of equal depth nodes x and y are ordered game isomorphic. We can further speed up this computation by only examining nodes with the same parent, since
we know (from step 1) that no nodes with different parents are ordered game isomorphic. The test in step 2(a) can be computed in O(1) time by consulting the ≻ relation from the specification of the game. Each call to OrderedGameIsomorphic? performs at most one perfect matching computation on a bipartite graph with O(|Θ|) nodes and O(|Θ|²) edges (recall that Θ is the set of signals). Using the Ford-Fulkerson algorithm [12] for finding a maximal matching, this takes O(|Θ|³) time. Let S be the maximum number of signals possibly revealed in the game (e.g., in Rhode Island Hold 'em, S = 4 because each of the two players has one card in the hand plus there are two cards on the table). The number of nodes, n, in the signal tree is O(|Θ|^S). The dynamic program visits each node in the signal tree, with each visit requiring O(|Θ|²) calls to the OrderedGameIsomorphic? routine. So, it takes O(|Θ|^S |Θ|³ |Θ|²) = O(|Θ|^{S+5}) time to compute the entire ordered game isomorphic relation. While this is exponential in the number of revealed signals, we now show that it is polynomial in the size of the signal tree, and thus polynomial in the size of the game tree, because the signal tree is smaller than the game tree.

7We thank an anonymous person for this example.

The number of leaves in the signal tree is Ω(|Θ|^S). Thus, O(|Θ|^{S+5}) = O(n|Θ|⁵), which proves that we can indeed compute the ordered game isomorphic relation in time polynomial in the number of nodes, n, of the signal tree. The algorithm often runs in sublinear time (and space) in the size of the game tree because the signal tree is significantly smaller than the game tree in most nontrivial games. (Note that the input to the algorithm is not an explicit game tree, but a specification of the rules, so the algorithm does not need to read in the game tree.) See Figure 1. In general, if an ordered game has r rounds, and each round's stage game
has at least b nonterminal leaves, then the size of the signal tree is at most 1/b^{r-1} times the size of the game tree. For example, in Rhode Island Hold 'em, the game tree has 3.1 billion nodes while the signal tree only has 6,632,705.

Given the OrderedGameIsomorphic? routine for determining ordered game isomorphisms in an ordered game, we are ready to present the main algorithm, GameShrink.

ALGORITHM 2. GameShrink(Γ)

1. Initialize F to be the identity filter for Γ.

2. For j from 1 to r: for each pair of nodes ϑ, ϑ' at the same level of round j in the signal tree filtered according to F: if OrderedGameIsomorphic?(Γ, ϑ, ϑ'), merge the filtered classes of ϑ and ϑ' in F (i.e., apply the ordered game isomorphic abstraction transformation).

3. Output F.

Given as input an ordered game Γ, GameShrink applies the shrinking ideas presented above as aggressively as possible. Once it finishes, there are no contractible nodes (since it compares every pair of nodes at each level of the signal tree), and it outputs the corresponding information filter F. The correctness of GameShrink follows by a repeated application of Theorem 2. Thus, we have the following result:

THEOREM 3. GameShrink finds all ordered game isomorphisms and applies the associated ordered game isomorphic abstraction transformations. Furthermore, for any Nash equilibrium, σ', of the abstracted game, the strategy profile constructed for the original game from σ' is a Nash equilibrium.

The dominating factor in the run time of GameShrink is in the rth iteration of the main for-loop. There are at most (|Θ| choose S) S! nodes at this level, where we again take S to be the maximum number of signals possibly revealed in the game. Thus, the inner for-loop executes O(((|Θ| choose S) S!)²) times. As discussed in the next subsection, we use a union-find data structure to represent the information filter F.
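The union-find representation of F can be sketched as follows: every signal-tree node starts as its own singleton class (the identity filter), each contraction of two ordered game isomorphic nodes is a union, and the disjoint sets at termination are the filtered signal classes. This is a standard disjoint-set structure with path compression and union by size; the class and method names are ours, for illustration, not taken from the paper's implementation.

```python
class UnionFind:
    """Disjoint-set sketch for storing the information filter F."""

    def __init__(self, elements):
        # Identity filter: every signal-tree node is its own class.
        self.parent = {e: e for e in elements}
        self.size = {e: 1 for e in elements}

    def find(self, e):
        # Find the class representative, compressing the path on the way.
        root = e
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[e] != root:
            self.parent[e], e = root, self.parent[e]
        return root

    def union(self, a, b):
        # Contract two ordered game isomorphic nodes: merge their classes.
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

    def classes(self):
        # The disjoint sets are the filtered signal classes of the abstraction.
        out = {}
        for e in self.parent:
            out.setdefault(self.find(e), set()).add(e)
        return set(map(frozenset, out.values()))
```

With path compression and union by size, a sequence of M operations on N elements runs in O(α(M, N)) amortized time per operation, matching the bound cited in the text.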
Each iteration of the inner for-loop possibly performs a union operation on the data structure; performing M operations on a union-find data structure containing N elements takes O(α(M, N)) amortized time per operation, where α(M, N) is the inverse Ackermann function [1, 49] (which grows extremely slowly). Thus, although the total time for GameShrink is exponential in S, it is Õ(n²), where n is the number of nodes in the signal tree. Furthermore, GameShrink tends to actually run in sublinear time and space in the size of the game tree because the signal tree is significantly smaller than the game tree in most nontrivial games, as discussed above.

4.1 Efficiency enhancements

We designed several speed enhancement techniques for GameShrink, and all of them are incorporated into our implementation. One technique is the use of the union-find data structure for storing the information filter F. This data structure uses time almost linear in the number of operations [49]. Initially each node in the signal tree is its own set (this corresponds to the identity information filter); when two nodes are contracted they are joined into a new set. Upon termination, the filtered signals for the abstracted game correspond exactly to the disjoint sets in the data structure. This is an efficient method of recording contractions within the game tree, and the memory requirements are only linear in the size of the signal tree.

Determining whether two nodes are ordered game isomorphic requires us to determine if a bipartite graph has a perfect matching. We can eliminate some of these computations by using easy-to-check necessary conditions for the ordered game isomorphic relation to hold. One such condition is to check that the nodes have the same number of chances of being ranked (according to ≻) higher than, lower than, and the same as the opponents. We can precompute these frequencies for every game tree node. This substantially speeds
up GameShrink, and we can leverage this database across multiple runs of the algorithm (for example, when trying different abstraction levels; see next section). The indices for this database depend on the private and public signals, but not the order in which they were revealed, and thus two nodes may have the same corresponding database entry. This makes the database significantly more compact. (For example, in Texas Hold 'em, the database is reduced by a factor (50 choose 3)(47 choose 1)(46 choose 1) / (50 choose 5) = 20.) We store the histograms in a 2-dimensional database. The first dimension is indexed by the private signals, the second by the public signals. The problem of computing the index in (either) one of the dimensions is exactly the problem of computing a bijection between all subsets of size r from a set of size n and integers in {0, ..., (n choose r) - 1}. We efficiently compute this using the subsets' colexicographical ordering [6]. Let {c_1, ..., c_r}, c_i ∈ {0, ..., n - 1}, denote the r signals and assume that c_i < c_{i+1}. We compute a unique index for this set of signals as follows: index(c_1, ..., c_r) = ∑_{i=1}^{r} (c_i choose i).

5. APPROXIMATION METHODS

Some games are too large to compute an exact equilibrium, even after using the presented abstraction technique. This section discusses general techniques for computing approximately optimal strategy profiles. For a two-player game, we can always evaluate the worst-case performance of a strategy, thus providing some objective evaluation of the strength of the strategy. To illustrate this, suppose we know player 2's planned strategy for some game. We can then fix the probabilities of player 2's actions in the game tree as if they were chance moves. Then player 1 is faced with a single-agent decision problem, which can be solved bottom-up, maximizing expected payoff at every node. Thus, we can objectively determine the expected worst-case performance of player 2's
strategy. This will be most useful when we want to evaluate how well a given strategy performs when we know that it is not an equilibrium strategy. (A variation of this technique may also be applied in n-person games where only one player's strategies are held fixed.) This technique provides ex post guarantees about the worst-case performance of a strategy, and can be used independently of the method that is used to compute the strategies.

5.1 State-space approximations

By slightly modifying GameShrink, we can obtain an algorithm that yields even smaller game trees, at the expense of losing the equilibrium guarantees of Theorem 2. Instead of requiring the payoffs at terminal nodes to match exactly, we can instead compute a penalty that increases as the difference in utility between two nodes increases. There are many ways in which the penalty function could be defined and implemented. One possibility is to create edge weights in the bipartite graphs used in Algorithm 1, and then instead of requiring perfect matchings in the unweighted graph we would instead require perfect matchings with low cost (i.e., only consider two nodes to be ordered game isomorphic if the corresponding bipartite graph has a perfect matching with cost below some threshold). Thus, with this threshold as a parameter, we have a knob to turn that in one extreme (threshold = 0) yields an optimal abstraction and in the other extreme (threshold = ∞) yields a highly abstracted game (this would in effect restrict players to ignoring all signals, but still observing actions). This knob also begets an anytime algorithm. One can solve increasingly less abstracted versions of the game, and evaluate the quality of the solution at every iteration using the ex post method discussed above.

5.2 Algorithmic approximations

In the case of two-player zero-sum games, the equilibrium computation can be modeled as a linear program (LP), which can in turn be solved using the simplex method. This
approach has inherent features which we can leverage into desirable properties in the context of solving games.\nIn the LP, primal solutions correspond to strategies of player 2, and dual solutions correspond to strategies of player 1.\nThere are two versions of the simplex method: the primal simplex and the dual simplex.\nThe primal simplex maintains primal feasibility and proceeds by finding better and better primal solutions until the dual solution vector is feasible,\nat which point optimality has been reached.\nAnalogously, the dual simplex maintains dual feasibility and proceeds by finding increasingly better dual solutions until the primal solution vector is feasible.\n(The dual simplex method can be thought of as running the primal simplex method on the dual problem.)\nThus, the primal and dual simplex methods serve as anytime algorithms (for a given abstraction) for players 2 and 1, respectively.\nAt any point in time, they can output the best strategies found so far.\nAlso, for any feasible solution to the LP, we can get bounds on the quality of the strategies by examining the primal and dual solutions.\n(When using the primal simplex method, dual solutions may be read off of the LP tableau.)\nEvery feasible solution of the dual yields an upper bound on the optimal value of the primal, and vice versa [9, p. 
57].\nThus, without requiring further computation, we get lower bounds on the expected utility of each agent's strategy against that agent's worst-case opponent.\nOne problem with the simplex method is that it is not a primal-dual algorithm, that is, it does not maintain both primal and dual feasibility throughout its execution.\n(In fact, it only obtains primal and dual feasibility at the very end of execution.)\nIn contrast, there are interior-point methods for linear programming that maintain primal and dual feasibility throughout the execution.\nFor example, many interior-point path-following algorithms have this property [55, Ch.\n5].\nWe observe that running such a linear programming method yields a method for finding ε-equilibria (i.e., strategy profiles in which no agent can increase her expected utility by more than ε by deviating).\nA threshold on ε can also be used as a termination criterion for using the method as an anytime algorithm.\nFurthermore, interior-point methods in this class have polynomial-time worst-case run time, as opposed to the simplex algorithm, which takes exponentially many steps in the worst case.\n6.\nRELATED RESEARCH\nFunctions that transform extensive form games have been introduced [50, 11].\nIn contrast to our work, those approaches were not for making the game smaller and easier to solve.\nThe main result is that a game can be derived from another by a sequence of those transformations if and only if the games have the same pure reduced normal form.\nThe pure reduced normal form is the extensive form game represented as a game in normal form where duplicates of pure strategies (i.e., ones with identical payoffs) are removed and players essentially select equivalence classes of strategies [27].\nAn extension to that work shows a similar result, but for slightly different transformations and mixed reduced normal form games [21].\nModern treatments of this prior work on game transformations exist [38, Ch.\n6], [10].\nThe recent
notion of weak isomorphism in extensive form games [7] is related to our notion of restricted game isomorphism.\nThe motivation of that work was to justify solution concepts by arguing that they are invariant with respect to isomorphic transformations.\nIndeed, the author shows, among other things, that many solution concepts, including Nash, perfect, subgame perfect, and sequential equilibrium, are invariant with respect to weak isomorphisms.\nHowever, that definition requires that the games to be tested for weak isomorphism are of the same size.\nOur focus is totally different: we find strategically equivalent smaller games.\nAlso, their paper does not provide algorithms.\nAbstraction techniques have been used in artificial intelligence research before.\nIn contrast to our work, most (but not all) research involving abstraction has been for single-agent problems (e.g., [20, 32]).\nFurthermore, the use of abstraction typically leads to sub-optimal solutions, unlike the techniques presented in this paper, which yield optimal solutions.\nA notable exception is the use of abstraction to compute optimal strategies for the game of Sprouts [2].\nHowever, a significant difference to our work is that Sprouts is a game of perfect information.\nOne of the first pieces of research to use abstraction in multi-agent settings was the development of partition search, which is the algorithm behind GIB, the world's first expert-level computer bridge player [17, 18].\nIn contrast to other game tree search algorithms which store a particular game position at each node of the search tree, partition search stores groups of positions that are similar.\n(Typically, the similarity of two game positions is computed by ignoring the less important components of each game position and then checking whether the abstracted positions are similar--in some domain-specific expert-defined sense--to each other.)\nPartition search can lead to substantial speed improvements over
α-β-search.\nHowever, it is not game theory-based (it does not consider information sets in the game tree), and thus does not solve for the equilibrium of a game of imperfect information, such as poker.8\nAnother difference is that the abstraction is defined by an expert human while our abstractions are determined automatically.\nThere has been some research on the use of abstraction for imperfect information games.\nMost notably, Billings et al. [4] describe a manually constructed abstraction for Texas Hold 'em poker, and include promising results against expert players.\nHowever, this approach has significant drawbacks.\nFirst, it is highly specialized for Texas Hold 'em.\nSecond, a large amount of expert knowledge and effort was used in constructing the abstraction.\nThird, the abstraction does not preserve equilibrium: even if applied to a smaller game, it might not yield a game-theoretic equilibrium.\nPromising ideas for abstraction in the context of general extensive form games have been described in an extended abstract [39], but to our knowledge, have not been fully developed.\n7.\nCONCLUSIONS AND DISCUSSION\nWe introduced the ordered game isomorphic abstraction transformation and gave an algorithm, GameShrink, for abstracting the game using the isomorphism exhaustively.\nWe proved that in games with ordered signals, any Nash equilibrium in the smaller abstracted game maps directly to a Nash equilibrium in the original game.\nThe complexity of GameShrink is $\tilde{O}(n^2)$, where n is the number of nodes in the signal tree.\nIt is no larger than the game tree, and on nontrivial games it is drastically smaller, so GameShrink has time and space complexity sublinear in the size of the game tree.\n[Footnote 8: Bridge is also a game of imperfect information, and partition search does not find the equilibrium for that game either.\nInstead, partition search is used in conjunction with statistical sampling to simulate the uncertainty in bridge.\nThere are also other bridge programs that use search techniques for perfect information games in conjunction with statistical sampling and expert-defined abstraction [48].\nSuch (non-game-theoretic) techniques are unlikely to be competitive in poker because of the greater importance of information hiding and bluffing.]\nUsing GameShrink, we found a minimax equilibrium to Rhode Island Hold 'em, a poker game with 3.1 billion nodes in the game tree--over four orders of magnitude more than in the largest poker game solved previously.\nTo further improve scalability, we introduced an approximation variant of GameShrink, which can be used as an anytime algorithm by varying a parameter that controls the coarseness of abstraction.\nWe also discussed how (in a two-player zero-sum game) linear programming can be used in an anytime manner to generate approximately optimal strategies of increasing quality.\nThe method also yields bounds on the suboptimality of the resulting strategies.\nWe are currently working on using these techniques for full-scale 2-player limit Texas Hold 'em poker, a highly popular card game whose game tree has about $10^{18}$ nodes.\nThat game tree size has required us to use the approximation version of GameShrink (as well as round-based abstraction) [16, 15].\n\nEfficiency and Nash Equilibria in a Scrip System for P2P Networks\nEric J. Friedman, School of Operations Research and Industrial Engineering, Cornell University, ejf27@cornell.edu\nJoseph Y. Halpern, Computer Science Dept., Cornell University, halpern@cs.cornell.edu\nIan Kash, Computer Science Dept., Cornell University, kash@cs.cornell.edu\nABSTRACT\nA model of providing service in a P2P network is analyzed.\nIt is shown that by adding a scrip system, a mechanism that admits a reasonable Nash equilibrium that reduces free riding can be obtained.\nThe effect of varying the total amount of money (scrip) in the system on efficiency (i.e., social welfare) is analyzed, and it is shown that by maintaining the appropriate ratio between the total amount of money and the number of agents, efficiency is maximized.\nThe work has implications for many online systems, not only P2P networks but also a wide variety of online forums for which scrip systems are popular, but formal analyses have been lacking.\nCategories and Subject Descriptors C.2.4 [Computer-Communication Networks]: Distributed Systems; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent systems; J.4 [Social and Behavioral Sciences]: Economics; K.4.4 [Computers and Society]: Electronic Commerce\nGeneral Terms Economics, Theory\n1.\nINTRODUCTION\nA common feature of many online distributed systems is that
individuals provide services for each other.\nPeer-to-peer (P2P) networks (such as Kazaa [25] or BitTorrent [3]) have proved popular as mechanisms for file sharing, and applications such as distributed computation and file storage are on the horizon; systems such as Seti@home [24] provide computational assistance; systems such as Slashdot [21] provide content, evaluations, and advice forums in which people answer each other's questions.\nHaving individuals provide each other with service typically increases the social welfare: the individual utilizing the resources of the system derives a greater benefit from it than the cost to the individual providing it.\nHowever, the cost of providing service can still be nontrivial.\nFor example, users of Kazaa and BitTorrent may be charged for bandwidth usage; in addition, in some file-sharing systems, there is the possibility of being sued, which can be viewed as part of the cost.\nThus, in many systems there is a strong incentive to become a free rider and benefit from the system without contributing to it.\nThis is not merely a theoretical problem; studies of the Gnutella [22] network have shown that almost 70 percent of users share no files and nearly 50 percent of responses are from the top 1 percent of sharing hosts [1].\nHaving relatively few users provide most of the service creates a point of centralization; the disappearance of a small percentage of users can greatly impair the functionality of the system.\nMoreover, current trends seem to be leading to the elimination of the altruistic users on which these systems rely.\nThese heavy users are some of the most expensive customers ISPs have.\nThus, as the amount of traffic has grown, ISPs have begun to seek ways to reduce this traffic.\nSome universities have started charging students for excessive bandwidth usage; others revoke network access for it [5].\nA number of companies have also formed whose service is to detect excessive bandwidth usage [19].\nThese trends
make developing a system that encourages a more equal distribution of the work critical for the continued viability of P2P networks and other distributed online systems.\nA significant amount of research has gone into designing reputation systems to give preferential treatment to users who are sharing files.\nSome of the P2P networks currently in use have implemented versions of these techniques.\nHowever, these approaches tend to fall into one of two categories: either they are barter-like or reputational.\nBy barter-like, we mean that each agent bases its decisions only on information it has derived from its own interactions.\nPerhaps the best-known example of a barter-like system is BitTorrent, where clients downloading a file try to find other clients with parts they are missing so that they can trade, thus creating a roughly equal amount of work.\nSince the barter is restricted to users currently interested in a single file, this works well for popular files, but tends to have problems maintaining availability of less popular ones.\nAn example of a barter-like system built on top of a more traditional file-sharing system is the credit system used by eMule [8].\nEach user tracks his history of interactions with other users and gives priority to those he has downloaded from in the past.\nHowever, in a large system, the probability that a pair of randomly-chosen users will have interacted before is quite small, so this interaction history will not be terribly helpful.\nAnagnostakis and Greenwald [2] present a more sophisticated version of this approach, but it still seems to suffer from similar problems.\nA number of attempts have been made at providing general reputation systems (e.g.
[12, 13, 17, 27]).\nThe basic idea is to aggregate each user's experience into a global number for each individual that intuitively represents the system's view of that individual's reputation.\nHowever, these attempts tend to suffer from practical problems because they implicitly view users as either good or bad, assume that the good users will act according to the specified protocol, and that there are relatively few bad users.\nUnfortunately, if there are easy ways to game the system, once this information becomes widely available, rational users are likely to make use of it.\nWe cannot count on only a few users being bad (in the sense of not following the prescribed protocol).\nFor example, Kazaa uses a measure of the ratio of the number of uploads to the number of downloads to identify good and bad users.\nHowever, to avoid penalizing new users, they gave new users an average rating.\nUsers discovered that they could use this relatively good rating to free ride for a while and, once it started to get bad, they could delete their stored information and effectively come back as a new user, thus circumventing the system (see [2] for a discussion and [11] for a formal analysis of this whitewashing).\nThus Kazaa's reputation system is ineffective.\nThis is a simple case of a more general vulnerability of such systems to sybil attacks [6], where a single user maintains multiple identities and uses them in a coordinated fashion to get better service than he otherwise would.\nRecent work has shown that most common reputation systems are vulnerable (in the worst case) to such attacks [4]; however, the degree of this vulnerability is still unclear.\nThe analyses of the practical vulnerabilities and the existence of such systems that are immune to such attacks remain an area of active research (e.g., [4, 28, 14]).\nSimple economic systems based on a scrip or money seem to avoid many of these problems, are easy to implement and are quite popular (see, e.g., [13, 15,
26]).\nHowever, they have a different set of problems.\nPerhaps the most common involve determining the amount of money in the system.\nRoughly speaking, if there is too little money in the system relative to the number of agents, then relatively few users can afford to make requests.\nOn the other hand, if there is too much money, then users will not feel the need to respond to a request; they have enough money already.\nA related problem involves handling newcomers.\nIf newcomers are each given a positive amount of money, then the system is open to sybil attacks.\nPerhaps not surprisingly, scrip systems end up having to deal with standard economic woes such as inflation, bubbles, and crashes [26].\nIn this paper, we provide a formal model in which to analyze scrip systems.\nWe describe a simple scrip system and show that, under reasonable assumptions, for each fixed amount of money there is a nontrivial Nash equilibrium involving threshold strategies, where an agent accepts a request if he has less than $k for some threshold k.1\nAn interesting aspect of our analysis is that, in equilibrium, the distribution of users with each amount of money is the distribution that maximizes entropy (subject to the money supply constraint).\nThis allows us to compute the money supply that maximizes efficiency (social welfare), given the number of agents.\nIt also leads to a solution for the problem of dealing with newcomers: we simply assume that new users come in with no money, and adjust the price of service (which is equivalent to adjusting the money supply) to maintain the ratio that maximizes efficiency.\nWhile assuming that new users come in with no money will not work in all settings, we believe the approach will be widely applicable.\nIn systems where the goal is to do work, new users can acquire money by performing work.\nIt should also work in a Kazaa-like system where a user can come in with some resources (e.g., a private collection of MP3s).\nThe rest of the paper is
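devoted to making these claims precise.

The threshold strategies just mentioned are simple enough to state in a few lines; this is our own minimal encoding, not code from the paper (the formal definition of $S_k$ appears later):

```python
def volunteers(wealth, k):
    """Threshold strategy S_k: volunteer iff current wealth is below k."""
    return wealth < k

# A wealth trajectory under S_3: earn $1 when chosen to volunteer,
# spend $1 when one of our own requests is served.
wealth = 0
history = []
for event in ["earn", "earn", "earn", "earn", "spend", "earn"]:
    if event == "earn" and volunteers(wealth, 3):
        wealth += 1
    elif event == "spend" and wealth > 0:
        wealth -= 1
    history.append(wealth)
```

Wealth never climbs above the threshold k = 3: once an agent is flush, it stops volunteering until it spends again. In more detail, the paper is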
organized as follows.\nIn Section 2, we present our formal model and observe that it can be used to understand the effect of altruists.\nIn Section 3, we examine what happens in the game under nonstrategic play, if all agents use the same threshold strategy.\nWe show that, in this case, the system quickly converges to a situation where the distribution of money is characterized by maximum entropy.\nUsing this analysis, we show in Section 4 that, under minimal assumptions, there is a nontrivial Nash equilibrium in the game where all agents use some threshold strategy.\nMoreover, we show in Section 5 that the analysis leads to an understanding of how to choose the amount of money in the system (or, equivalently, the cost to fulfill a request) so as to maximize efficiency, and also shows how to handle new users.\nIn Section 6, we discuss the extent to which our approach can handle sybils and collusion.\nWe conclude in Section 7.\n2.\nTHE MODEL\nTo begin, we formalize providing service in a P2P network as a non-cooperative game.\nUnlike much of the modeling in this area, our model will model the asymmetric interactions in a file sharing system in which the matching of players (those requesting a file with those who have that particular file) is a key part of the system.\nThis is in contrast with much previous work which uses random matching in a prisoner's dilemma.\nSuch models were studied in the economics literature [18, 7] and first applied to online reputations in [11]; an application to P2P is found in [9].\nThis random-matching model fails to capture some salient aspects of a number of important settings.\nWhen a request is made, there are typically many people in the network who can potentially satisfy it (especially in a large P2P network), but not all can.\nFor example, some people may not have the time or resources to satisfy the request.\nThe random-matching process ignores the fact that some people may not be able to satisfy the request.\nPresumably, if the
person matched with the requester could not satisfy the match, he would have to defect.\nMoreover, it does not capture the fact that the decision as to whether to volunteer to satisfy the request should be made before the matching process, not after.\nThat is, the matching process does not capture the fact that if someone is unwilling to satisfy the request, there will doubtless be others who can satisfy it.\n[Footnote 1: Although we refer to our unit of scrip as the dollar, these are not real dollars nor do we view them as convertible to dollars.]\nFinally, the actions and payoffs in the prisoner's dilemma game do not obviously correspond to actual choices that can be made.\nFor example, it is not clear what defection on the part of the requester means.\nIn our model we try to deal with all these issues.\nSuppose that there are n agents.\nAt each round, an agent is picked uniformly at random to make a request.\nEach other agent is able to satisfy this request with probability β > 0 at all times, independent of previous behavior.\nThe term β is intended to capture the probability that an agent is busy, or does not have the resources to fulfill the request.\nAssuming that β is time-independent does not capture the intuition that being unable to fulfill a request at time t may well be correlated with being unable to fulfill it at time t+1.\nWe believe that, in large systems, we should be able to drop the independence assumption, but we leave this for future work.\nIn any case, those agents that are able to satisfy the request must choose whether or not to volunteer to satisfy it.\nIf at least one agent volunteers, the requester gets a benefit of 1 util (the job is done) and one of the volunteers is chosen at random to fulfill the request.\nThe agent that fulfills the request pays a cost of α < 1.\nAs is standard in the literature, we assume that agents discount future payoffs by a factor of δ per time unit.\nThis captures the intuition that a util now
is worth more than a util tomorrow, and allows us to compute the total utility derived by an agent in an infinite game.\nLastly, we assume that with more players requests come more often.\nThus we assume that the time between rounds is 1/n.\nThis captures the fact that the systems we want to model are really processing many requests in parallel, so we would expect the number of concurrent requests to be proportional to the number of users.2\nLet G(n, δ, α, β) denote this game with n agents, a discount factor of δ, a cost to satisfy requests of α, and a probability of being able to satisfy requests of β.\nWhen the latter two parameters are not relevant, we sometimes write G(n, δ).\nWe use the following notation throughout the paper:\n• $p^t$ denotes the agent chosen in round t.\n• $B_i^t \in \{0, 1\}$ denotes whether agent i can satisfy the request in round t. $B_i^t = 1$ with probability β > 0 and $B_i^t$ is independent of $B_i^{t'}$ for all $t' \ne t$.\n• $V_i^t \in \{0, 1\}$ denotes agent i's decision about whether to volunteer in round t; 1 indicates volunteering.\n$V_i^t$ is determined by agent i's strategy.\n• $v^t \in \{j \mid V_j^t B_j^t = 1\}$ denotes the agent chosen to satisfy the request.\nThis agent is chosen uniformly at random from those who are willing ($V_j^t = 1$) and able ($B_j^t = 1$) to satisfy the request.\n• $u_i^t$ denotes agent i's utility in round t.\nA standard agent is one whose utility is determined as discussed in the introduction; namely, the agent gets a utility of 1 for a fulfilled request and utility −α for fulfilling a request.\n[Footnote 2: For large n, our model converges to one in which players make requests in real time, and the time between a player's requests are exponentially distributed with mean 1.\nIn addition, the time between requests served by a single player is also exponentially distributed.]\nThus, if i is a standard agent, then\n$u_i^t = \begin{cases} 1 & \text{if } i = p^t \text{ and } \sum_{j \ne i} V_j^t B_j^t > 0 \\ -\alpha & \text{if } i = v^t \\ 0 & \text{otherwise.} \end{cases}$\n• $U_i = \sum_{t=0}^{\infty} \delta^{t/n} u_i^t$ denotes the total utility for agent i.\nIt is the discounted total of agent i's utility in each round.\nNote that the effective discount factor is $\delta^{1/n}$ since an increase in n leads to a shortening of the time between rounds.\nNow that we have a model of making and satisfying requests, we use it to analyze free riding.\nTake an altruist to be someone who always fulfills requests.\nAgent i might rationally behave altruistically if agent i's utility function has the following form, for some α > 0:\n$u_i^t = \begin{cases} 1 & \text{if } i = p^t \text{ and } \sum_{j \ne i} V_j^t B_j^t > 0 \\ \alpha & \text{if } i = v^t \\ 0 & \text{otherwise.} \end{cases}$\nThus, rather than suffering a loss of utility when satisfying a request, an agent derives positive utility from satisfying it.\nSuch a utility function is a reasonable representation of the pleasure that some people get from the sense that they provide the music that everyone is playing.\nFor such altruistic agents, playing the strategy that sets $V_i^t = 1$ for all t is dominant.\nWhile having a nonstandard utility function might be one reason that a rational agent might use this strategy, there are certainly others.\nFor example a naive user of file-sharing software with a good connection might well follow this strategy.\nAll that matters for the following discussion is that there are some agents that use this strategy, for whatever reason.\nAs we have observed, such users seem to exist in some large systems.\nSuppose that our system has a altruists.\nIntuitively, if a is moderately large, they will manage to satisfy most of the requests in the system even if other agents do no work.\nThus, there is little incentive for any other agent to volunteer, because he is already getting full advantage of participating in the system.\nBased on this intuition, it is a relatively straightforward calculation to determine a value of a that depends only on α, β, and δ, but not the number n of players in the system, such
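that no standard agent volunteers.

To make the round structure above concrete, one round of the game can be simulated directly; this is our own minimal encoding of the model, not code from the paper:

```python
import random

def play_round(n, alpha, beta, strategies, rng=random):
    """Simulate one round of G(n, ., alpha, beta).

    strategies[i]() returns 1 if agent i would volunteer this round.
    Returns the per-agent utilities u_i^t for this round.
    """
    u = [0.0] * n
    requester = rng.randrange(n)                       # p^t, chosen uniformly
    able = [i for i in range(n)
            if i != requester and rng.random() < beta]  # agents with B_i^t = 1
    candidates = [i for i in able if strategies[i]()]   # willing and able
    if candidates:
        u[requester] += 1.0                  # benefit of a satisfied request
        volunteer = rng.choice(candidates)   # v^t, uniform among candidates
        u[volunteer] -= alpha                # cost of fulfilling it
    return u
```

With altruists (strategies that always return 1) and β not too small, almost every round is satisfied, which is the intuition behind the result that follows. The precise claim, proved next, is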
that the dominant strategy for all standard agents i is to never volunteer to satisfy any requests (i.e., $V_i^t = 0$ for all t).\nProposition 2.1.\nThere exists an a that depends only on α, β, and δ such that, in G(n, δ, α, β) with at least a altruists, not volunteering in every round is a dominant strategy for all standard agents.\nProof.\nConsider the strategy for a standard player j in the presence of a altruists.\nEven with no money, player j will get a request satisfied with probability $1 - (1-\beta)^a$ just through the actions of these altruists.\nThus, even if j is chosen to make a request in every round, the most additional expected utility he can hope to gain by having money is $\sum_{k=1}^{\infty} (1-\beta)^a \delta^k \le (1-\beta)^a/(1-\delta)$.\nIf $(1-\beta)^a/(1-\delta) < \alpha$ or, equivalently, if $a > \log_{1-\beta}(\alpha(1-\delta))$, never volunteering is a dominant strategy.\nConsider the following reasonable values for our parameters: β = .01 (so that each player can satisfy 1% of the requests), α = .1 (a low but non-negligible cost), δ = .9999/day (which corresponds to a yearly discount factor of approximately 0.95), and an average of 1 request per day per player.\nThen we only need a > 1145.\nWhile this is a large number, it is small relative to the size of a large P2P network.\nCurrent systems all have a pool of users behaving like our altruists.\nThis means that attempts to add a reputation system on top of an existing P2P system to influence users to cooperate will have no effect on rational users.\nTo have a fair distribution of work, these systems must be fundamentally redesigned to eliminate the pool of altruistic users.\nIn some sense, this is not a problem at all.\nIn a system with altruists, the altruists are presumably happy, as are the standard agents, who get almost all their requests satisfied without having to do any work.\nIndeed, current
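systems bear this out.

The bound in the proof is easy to check numerically for the parameter values just given (a sketch; the function name is ours):

```python
import math

def altruist_threshold(alpha, beta, delta):
    """Number of altruists beyond which never volunteering is dominant:
    a > log_{1-beta}(alpha * (1 - delta)), as in Proposition 2.1."""
    return math.log(alpha * (1 - delta)) / math.log(1 - beta)

# beta = .01, alpha = .1, delta = .9999, as in the text
a_min = altruist_threshold(0.1, 0.01, 0.9999)
```

This evaluates to roughly 1145.5, matching the text's requirement that a > 1145. In practice, existing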
P2P networks work quite well in terms of distributing content to people.\nHowever, as we said in the introduction, there is some reason to believe these altruists may not be around forever.\nThus, it is worth looking at what can be done to make these systems work in their absence.\nFor the rest of this paper we assume that all agents are standard, and try to maximize expected utility.\nWe are interested in equilibria based on a scrip system.\nEach time an agent has a request satisfied he must pay the person who satisfied it some amount.\nFor now, we assume that the payment is fixed; for simplicity, we take the amount to be $1.\nWe denote by M the total amount of money in the system.\nWe assume that M > 0 (otherwise no one will ever be able to get paid).\nIn principle, agents are free to adopt a very wide variety of strategies.\nThey can make decisions based on the names of other agents or use a strategy that is heavily history dependent, and mix these strategies freely.\nTo aid our analysis, we would like to be able to restrict our attention to a simpler class of strategies.\nThe class of strategies we are interested in is easy to motivate.\nThe intuitive reason for wanting to earn money is to cater for the possibility that an agent will run out before he has a chance to earn more.\nOn the other hand, a rational agent with plenty of money would not want to work, because by the time he has managed to spend all his money, the util will have less value than the present cost of working.\nThe natural balance between these two is a threshold strategy.\nLet $S_k$ be the strategy where an agent volunteers whenever he has less than k dollars and not otherwise.\nNote that $S_0$ is the strategy where the agent never volunteers.\nWhile everyone playing $S_0$ is a Nash equilibrium (nobody can do better by volunteering if no one else is willing to), it is an uninteresting one.\nAs we will show in Section 4, it is sufficient to restrict our attention to this class of strategies.\nWe use $K_i^t$ to denote the amount of money agent i has at time t. Clearly $K_i^{t+1} = K_i^t$ unless agent i has a request satisfied, in which case $K_i^{t+1} = K_i^t - 1$, or agent i fulfills a request, in which case $K_i^{t+1} = K_i^t + 1$.\nFormally,\n$K_i^{t+1} = \begin{cases} K_i^t - 1 & \text{if } i = p^t,\ \sum_{j \ne i} V_j^t B_j^t > 0, \text{ and } K_i^t > 0 \\ K_i^t + 1 & \text{if } i = v^t \text{ and } K_{p^t}^t > 0 \\ K_i^t & \text{otherwise.} \end{cases}$\nThe threshold strategy $S_k$ is the strategy such that\n$V_i^t = \begin{cases} 1 & \text{if } K_{p^t}^t > 0 \text{ and } K_i^t < k \\ 0 & \text{otherwise.} \end{cases}$\n3.\nTHE GAME UNDER NONSTRATEGIC PLAY\nBefore we consider strategic play, we examine what happens in the system if everyone just plays the same strategy $S_k$.\nOur overall goal is to show that there is some distribution over money (i.e., the fraction of people with each amount of money) such that the system converges to this distribution in a sense to be made precise shortly.\nSuppose that everyone plays $S_k$.\nFor simplicity, assume that everyone has at most k dollars.\nWe can make this assumption with essentially no loss of generality, since if someone has more than k dollars, he will just spend money until he has at most k dollars.\nAfter this point he will never acquire more than k. Thus, eventually the system will be in such a state.\nIf M ≥ kn, no agent will ever be willing to work.\nThus, for the purposes of this section we assume that M < kn.\nFrom the perspective of a single agent, in (stochastic) equilibrium, the agent is undergoing a random walk.\nHowever, the parameters of this random walk depend on the random walks of the other agents and it is quite complicated to solve directly.\nThus we consider an alternative analysis based on the evolution of the system as a whole.\nIf everyone has at most k dollars, then the amount of money that an agent has is an element of {0, ..., k}.\nIf there are n agents, then the state of the game can be described by identifying how much money each agent has, so we can represent it by an element of $S_{k,n} = \{0, \ldots
, k}{1,...,n} .\nSince the total amount of money is constant, not all of these states can arise in the game.\nFor example the state where each player has $0 is impossible to reach in any game with money in the system.\nLet mS(s) = P i\u2208{1...n} s(i) denote the total mount of money in the game at state s, where s(i) is the number of dollars that agent i has in state s.\nWe want to consider only those states where the total money in the system is M, namely Sk,n,M = {s \u2208 Sk,n | mS(s) = M}.\nUnder the assumption that all agents use strategy Sk, the evolution of the system can be treated as a Markov chain Mk,n,M over the state space Sk,n,M .\nIt is possible to move from one state to another in a single round if by choosing a particular agent to make a request and a particular agent to satisfy it, the amounts of money possesed by each agent become those in the second state.\nTherefore the probability of a transition from a state s to t is 0 unless there exist two agents i and j such that s(i ) = t(i ) for all i \/\u2208 {i, j}, t(i) = s(i) + 1, and t(j) = s(j) \u2212 1.\nIn this case the probability of transitioning from s to t is the probability of j being chosen to spend a dollar and has someone willing and able to satisfy his request ((1\/n)(1 \u2212 (1 \u2212 \u03b2)|{i |s(i )=k}|\u2212Ij ) multiplied by the probability of i being chosen to satisfy his request (1\/(|({i | s(i ) = k}| \u2212 Ij )).\nIj is 0 if j has k dollars and 1 otherwise (it is just a correction for the fact that j cannot satisfy his own request.)\nLet \u2206k denote the set of probability distributions on {0, ... 
, k}. We can think of an element of $\Delta^k$ as describing the fraction of people with each amount of money. This is a useful way of looking at the system, since we typically don't care who has each amount of money, but just the fraction of people that have each amount. As before, not all elements of $\Delta^k$ are possible, given our constraint that the total amount of money is M. Rather than thinking in terms of the total amount of money in the system, it will prove more useful to think in terms of the average amount of money each player has. Of course, the total amount of money in a system with n agents is M iff the average amount that each player has is $m = M/n$. Let $\Delta^k_m$ denote all distributions $d \in \Delta^k$ such that $E(d) = m$ (i.e., $\sum_{j=0}^{k} j\,d(j) = m$). Given a state $s \in S_{k,n,M}$, let $d_s \in \Delta^k_m$ denote the distribution of money in s. Our goal is to show that, if n is large, then there is a distribution $d^* \in \Delta^k_m$ such that, with high probability, the Markov chain $\mathcal{M}_{k,n,M}$ will almost always be in a state s such that $d_s$ is close to $d^*$. Thus, agents can base their decisions about what strategy to use on the assumption that they will be in such a state.

We can in fact completely characterize the distribution $d^*$. Given a distribution $d \in \Delta^k$, let
$$H(d) = -\sum_{\{j : d(j) \ne 0\}} d(j) \log(d(j))$$
denote the entropy of d. If $\Delta$ is a closed convex set of distributions, then it is well known that there is a unique distribution in $\Delta$ at which the entropy function takes its maximum value in $\Delta$. Since $\Delta^k_m$ is easily seen to be a closed convex set of distributions, it follows that there is a unique distribution in $\Delta^k_m$, which we denote $d^*_{k,m}$, whose entropy is greater than that of all other distributions in $\Delta^k_m$. We now show that, for n sufficiently large, the Markov chain $\mathcal{M}_{k,n,M}$ is almost surely in a state s such that $d_s$ is close to $d^*_{k,M/n}$. The statement is correct under a
number of senses of "close"; for definiteness, we consider the Euclidean distance. Given $\varepsilon > 0$, let $S_{k,n,m,\varepsilon}$ denote the set of states s in $S_{k,n,mn}$ such that $\sum_{j=0}^{k} |d_s(j) - d^*_{k,m}(j)|^2 < \varepsilon$. Given a Markov chain $\mathcal{M}$ over a state space $S$ and $S' \subseteq S$, let $X_{t,s,S'}$ be the random variable that denotes that $\mathcal{M}$ is in a state of $S'$ at time t, when started in state s.

Theorem 3.1. For all $\varepsilon > 0$, all k, and all m, there exists $n^*$ such that for all $n > n^*$ and all states $s \in S_{k,n,mn}$, there exists a time $t^*$ (which may depend on k, n, m, and $\varepsilon$) such that for $t > t^*$, we have $\Pr(X_{t,s,S_{k,n,m,\varepsilon}}) > 1 - \varepsilon$.

Proof. (Sketch) Suppose that at some time t, $\Pr(X_{t,s,s'})$ is uniform over all $s'$. Then the probability of being in a set of states is just the size of the set divided by the total number of states. A standard technique from statistical mechanics is to show that there is a concentration phenomenon around the maximum-entropy distribution [16]. More precisely, using a straightforward combinatorial argument, it can be shown that the fraction of states not in $S_{k,n,m,\varepsilon}$ is bounded by $p(n)/e^{cn}$, where p is a polynomial. This fraction clearly goes to 0 as n gets large. Thus, for sufficiently large n, $\Pr(X_{t,s,S_{k,n,m,\varepsilon}}) > 1 - \varepsilon$ if $\Pr(X_{t,s,s'})$ is uniform. It is relatively straightforward to show that our Markov chain has a limit distribution $\pi$ over $S_{k,n,mn}$ such that for all $s, s' \in S_{k,n,mn}$, $\lim_{t\to\infty} \Pr(X_{t,s,s'}) = \pi_{s'}$. Let $P_{ij}$ denote the probability of transitioning from state i to state j. It is easily verified by an explicit computation of the transition probabilities that $P_{ij} = P_{ji}$ for all states i and j. It immediately follows from this symmetry that $\pi_s = \pi_{s'}$, so $\pi$ is uniform. After a sufficient amount of time, the distribution will be close enough to $\pi$ that the probabilities are again bounded by a constant, which is sufficient to complete the theorem.

Figure 1: Distance from maximum-entropy distribution with 1000 agents.

Figure 2: Maximum distance from maximum-entropy distribution over $10^6$ timesteps.

Figure 3: Average time to get within .001 of the maximum-entropy distribution.

We performed a number of experiments that show that the maximum-entropy behavior described in Theorem 3.1 arises quickly for quite practical values of n and t. The first experiment showed that, even if n = 1000, we reach the maximum-entropy distribution quickly. We averaged 10 runs of the Markov chain for k = 5, where there is enough money for each agent to have $2, starting from a very extreme distribution (every agent has either $0 or $5), and considered the average time needed to come within various distances of the maximum-entropy distribution. As Figure 1 shows, after 2,000 steps, on average, the Euclidean distance from the average distribution of money to the maximum-entropy distribution is .008; after 3,000 steps, the distance is down to .001. Note that this is really only 3 real time units, since with 1000 players we have 1000 transactions per time unit. We then considered how close the distribution stays to the maximum-entropy distribution once it has reached it. To simplify things, we started the system in a state whose distribution was very close to the maximum-entropy distribution and ran it for $10^6$ steps, for various values of n. As Figure 2 shows, the system does not move far from the maximum-entropy distribution once it is there. For example, if n = 5000, the system is never more than distance .001 from the maximum-entropy distribution; if n = 25,000, it is never more than .0002 from the maximum-entropy distribution. Finally, we considered more carefully how quickly the system converges to the maximum-entropy
distribution for various values of n. There are approximately $k^n$ possible states, so the convergence time could in principle be quite large. However, we suspect that the Markov chain that arises here is rapidly mixing, which means that it will converge significantly faster (see [20] for more details about rapid mixing). We believe that the actual time needed is O(n). This behavior is illustrated in Figure 3, which shows that for our example chain (again averaged over 10 runs), after 3n steps, the Euclidean distance between the actual distribution of money in the system and the maximum-entropy distribution is less than .001.

4. THE GAME UNDER STRATEGIC PLAY

We have seen that the system is well behaved if the agents all follow a threshold strategy; we now want to show that there is a nontrivial Nash equilibrium where they do so (that is, a Nash equilibrium where all the agents use $S_k$ for some k > 0). This is not true in general. If $\delta$ is small, then agents have no incentive to work. Intuitively, if future utility is sufficiently discounted, then all that matters is the present, and there is no point in volunteering to work. With small $\delta$, $S_0$ is the only equilibrium. However, we show that for $\delta$ sufficiently large, there is another equilibrium in threshold strategies. We do this by first showing that, if every other agent is playing a threshold strategy, then there is a best response that is also a threshold strategy (although not necessarily the same one). We then show that there must be some (mixed) threshold strategy for which this best response is the same strategy. It follows that this tuple of threshold strategies is a Nash equilibrium. As a first step, we show that, for all k, if everyone other than agent i is playing $S_k$, then there is a threshold strategy $S_{k'}$ that is a best response for agent i.
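The threshold structure of the best response can be made concrete in a stylized single-agent model. The sketch below is illustrative only: the dynamics (the agent is chosen to make a request with probability pr, gaining utility u and paying $1 if it can; if it volunteers, it is chosen to serve with probability pe, earning $1 at effort cost c) and all parameter values are our own simplifying assumptions, not the model's exact transition probabilities. Value iteration on this MDP produces a policy that volunteers exactly below some wealth level:

```python
def best_response_policy(W=20, pr=0.1, pe=0.1, u=1.0, c=0.05,
                         delta=0.95, iters=500):
    """Value iteration for a stylized single-agent MDP (illustrative).

    State = current wealth w in {0, ..., W}.  Each round the agent is
    chosen to make a request with prob. pr (utility u, pays $1 if w > 0);
    if it volunteers (action a = 1) it is chosen to serve with prob. pe,
    earning $1 (capped at W) at effort cost c.  Discount factor delta.
    """
    def q(V, w, a):
        # request opportunity: satisfied (and paid for) only if the agent can pay
        req = pr * ((u + delta * V[w - 1]) if w > 0 else delta * V[w])
        # volunteering: earn a dollar at effort cost c
        work = a * pe * (-c + delta * V[min(w + 1, W)])
        # nothing happens this round
        idle = (1 - pr - a * pe) * delta * V[w]
        return req + work + idle

    V = [0.0] * (W + 1)
    for _ in range(iters):
        V = [max(q(V, w, 0), q(V, w, 1)) for w in range(W + 1)]
    # volunteer exactly where the discounted marginal value of a dollar exceeds c
    return [1 if q(V, w, 1) > q(V, w, 0) else 0 for w in range(W + 1)]
```

Under these assumed parameters the resulting policy is 1 up to some wealth level and 0 from there on, i.e., a threshold strategy of the form $S_k$; raising delta pushes the switch point up, consistent with the intuition that more patient agents save more.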
To prove this, we need to assume that the system is close to the steady-state distribution (i.e., the maximum-entropy distribution). However, as long as $\delta$ is sufficiently close to 1, we can ignore what happens during the period that the system is not in steady state (footnote 3). We have thus far considered threshold strategies of the form $S_k$, where k is a natural number; this is a discrete set of strategies. For a later proof, it will be helpful to have a continuous set of strategies. If $\gamma = k + \gamma'$, where k is a natural number and $0 \le \gamma' < 1$, let $S_\gamma$ be the strategy that performs $S_k$ with probability $1 - \gamma'$ and $S_{k+1}$ with probability $\gamma'$. (Note that we are not considering arbitrary mixed threshold strategies here, but rather just mixing between adjacent strategies, for the sole purpose of making our strategies continuous in a natural way.) Theorem 3.1 applies to strategies $S_\gamma$ (the same proof goes through without change), where $\gamma$ is an arbitrary nonnegative real number.

Theorem 4.1. Fix a strategy $S_\gamma$ and an agent i. There exists $\delta^* < 1$ and $n^*$ such that if $\delta > \delta^*$, $n > n^*$, and every agent other than i is playing $S_\gamma$ in game $G(n, \delta)$, then there is an integer $k'$ such that the best response for agent i is $S_{k'}$. Either $k'$ is unique (that is, there is a unique best response that is also a threshold strategy), or there exists an integer $k''$ such that $S_{\gamma'}$ is a best response for agent i for all $\gamma'$ in the interval $[k'', k'' + 1]$ (and these are the only best responses among threshold strategies).

Proof. (Sketch) If $\delta$ is sufficiently large, we can ignore what happens before the system converges to the maximum-entropy distribution. If n is sufficiently large, then the strategy played by one agent will not affect the distribution of money significantly. Thus, the probability of i moving from one state (dollar amount) to another depends only on i's strategy (since we can take
the probability that i will be chosen to make a request and the probability that i will be chosen to satisfy a request to be constant). Thus, from i's point of view, the system is a Markov decision process (MDP), and i needs to compute the optimal policy (strategy) for this MDP. It follows from standard results [23, Theorem 6.11.6] that there is an optimal policy that is a threshold policy. The argument that the best response is either unique or forms an interval of best responses follows from a more careful analysis of the value function for the MDP.

We remark that there may be best responses that are not threshold strategies. All that Theorem 4.1 shows is that, among best responses, there is at least one that is a threshold strategy. Since we know that there is a best response that is a threshold strategy, we can look for a Nash equilibrium in the space of threshold strategies.

Theorem 4.2. For all M, there exists $\delta^* < 1$ and $n^*$ such that if $\delta > \delta^*$ and $n > n^*$, there exists a Nash equilibrium in the game $G(n, \delta)$ where all agents play $S_\gamma$ for some $\gamma > 0$.

Proof. It follows easily from the proof of Theorem 4.1 that if $br(\delta, \gamma)$ is the minimal best-response threshold strategy when all the other agents are playing $S_\gamma$ and the discount factor is $\delta$, then, for fixed $\delta$, $br(\delta, \cdot)$ is a step function. (Footnote 3: Formally, we need to define the strategies when the system is far from equilibrium. However, these far-from-(stochastic-)equilibrium strategies will not affect the equilibrium behavior when n is large, since deviations from stochastic equilibrium are extremely rare.) It also follows from the theorem that if there are two best responses, then a mixture of them is also a best response. Therefore, if we join the steps by a vertical line, we get a best-response curve. It is easy to see that everywhere this best-response curve crosses the diagonal y = x defines a Nash
equilibrium where all agents are using the same threshold strategy. As we have already observed, one such equilibrium occurs at 0. If there are only $M in the system, we can restrict to threshold strategies $S_k$ with $k \le M + 1$. Since no one can have more than $M, all strategies $S_k$ for $k > M$ are equivalent to $S_M$; these are just the strategies where the agent always volunteers in response to a request made by someone who can pay. Clearly $br(\delta, S_M) \le M$ for all $\delta$, so the best-response function is at or below the equilibrium at M. If $k \le M/n$, every player will have at least k dollars, and so will be unwilling to work; the best response is just 0. Consider $k^*$, the smallest k such that $k > M/n$. It is not hard to show that for $k^*$ there exists a $\delta^*$ such that for all $\delta \ge \delta^*$, $br(\delta, k^*) \ge k^*$. It follows by continuity that if $\delta \ge \delta^*$, there must be some $\gamma$ such that $br(\delta, \gamma) = \gamma$. This is the desired Nash equilibrium.

This argument also shows us that we cannot in general expect fixed points to be unique. If $br(\delta, k^*) = k^*$ and $br(\delta, k^* + 1) > k^* + 1$, then our argument shows there must be a second fixed point. In general there may be multiple fixed points even when $br(\delta, k^*) > k^*$, as illustrated in Figure 4 with n = 1000 and M = 3000.

Figure 4: The best-response function for n = 1000 and M = 3000.

Theorem 4.2 allows us to restrict our design to agents using threshold strategies, with the confidence that there will be a nontrivial equilibrium. However, it does not rule out the possibility that there may be other equilibria that do not involve threshold strategies. It is even possible (although it seems unlikely) that some of these equilibria might be better.

5. SOCIAL WELFARE AND SCALABILITY

Our theorems show that for each value
of M and n, for sufficiently large $\delta$, there is a nontrivial Nash equilibrium where all the agents use some threshold strategy $S_{\gamma(M,n)}$. From the point of view of the system designer, not all equilibria are equally good: we want an equilibrium where as few agents as possible have $0 when they get a chance to make a request (so that they can pay for the request), and relatively few agents have more than the threshold amount of money (so that there are always plenty of agents to fulfill the request). There is a tension between these objectives. It is not hard to show that as the fraction of agents with $0 increases in the maximum-entropy distribution, the fraction of agents with the maximum amount of money decreases. Thus, our goal is to understand what the optimal amount of money should be in the system, given the number of agents. That is, we want to know the amount of money M that maximizes efficiency, i.e., the total expected utility if all the agents use $S_{\gamma(M,n)}$. We first observe that the most efficient equilibrium depends only on the ratio of M to n, not on the actual values of M and n.

Theorem 5.1. There exists $n^*$ such that for all games $G(n_1, \delta)$ and $G(n_2, \delta)$ where $n_1, n_2 > n^*$, if $M_1/n_1 = M_2/n_2$, then $S_{\gamma(M_1,n_1)} = S_{\gamma(M_2,n_2)}$.

Proof. Fix $M/n = r$. Theorem 3.1 shows that the maximum-entropy distribution depends only on k and the ratio M/n, not on M and n separately. Thus, given r, for each choice of k, there is a unique maximum-entropy distribution $d_{k,r}$. The best response $br(\delta, k)$ depends only on the distribution $d_{k,r}$, not on M or n.
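The maximum-entropy distribution $d_{k,r}$ can also be computed explicitly: by Lagrange duality it has the exponential-family form $d(j) \propto x^j$ on $\{0, \dots, k\}$, with $x > 0$ chosen so that the mean is r. The following sketch finds x by bisection; the algorithm and numeric tolerances are our own illustration, not part of the paper's argument.

```python
def max_entropy_dist(k, m, tol=1e-12):
    """Maximum-entropy distribution on {0, ..., k} with mean m (0 < m < k).

    The maximizer has the form d(j) proportional to x**j for some x > 0;
    since the mean is strictly increasing in x, we can find x by bisection.
    """
    def mean(x):
        w = [x ** j for j in range(k + 1)]
        return sum(j * wj for j, wj in enumerate(w)) / sum(w)

    lo, hi = 1e-9, 1e9  # mean(lo) ~ 0, mean(hi) ~ k
    while hi - lo > tol * max(1.0, lo):
        mid = (lo + hi) / 2
        if mean(mid) < m:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    w = [x ** j for j in range(k + 1)]
    z = sum(w)
    return [wj / z for wj in w]
```

For m = k/2 the solution is x = 1, i.e., the uniform distribution; for m below k/2 the distribution tilts toward agents holding little money, consistent with the tension described above.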
Thus, the Nash equilibrium depends only on the ratio r. That is, for all choices of M and n such that n is sufficiently large (so that Theorem 3.1 applies) and M/n = r, the equilibrium strategies are the same.

(If there are multiple equilibria, we take $S_{\gamma(M,n)}$ to be the Nash equilibrium that has the highest efficiency for fixed M and n.)

In light of Theorem 5.1, the system designer should ensure that there is enough money M in the system so that the ratio M/n is optimal. We are currently exploring exactly what the optimal ratio is. As our very preliminary results for $\beta = 1$ show in Figure 5, the ratio appears to be monotone increasing in $\delta$, which matches the intuition that we should provide more patient agents with the opportunity to save more money. Additionally, it appears to be relatively smooth, which suggests that it may have a nice analytic solution.

Figure 5: Optimal average amount of money, to the nearest .25, for $\beta = 1$.

We remark that, in practice, it may be easier for the designer to vary the price of fulfilling a request rather than
injecting money into the system. This produces the same effect. For example, changing the cost of fulfilling a request from $1 to $2 is equivalent to halving the amount of money that each agent has. Similarly, halving the cost of fulfilling a request is equivalent to doubling the amount of money that everyone has. With a fixed amount of money M, there is an optimal product nc of the number of agents and the cost c of fulfilling a request.

Theorem 5.1 also tells us how to deal with a dynamic pool of agents. Our system can handle newcomers relatively easily: simply allow them to join with no money. This gives existing agents no incentive to leave and rejoin as newcomers. We then change the price of fulfilling a request so that the optimal ratio is maintained. This method has the nice feature that it can be implemented in a distributed fashion: if all nodes in the system have a good estimate of n, then they can all adjust prices automatically. (Alternatively, the number of agents in the system can be posted in a public place.) Approaches that rely on adjusting the amount of money may require expensive system-wide computations (see [26] for an example), and must be carefully tuned to avoid creating incentives for agents to manipulate the system by which this is done.

Note that, in principle, the realization that the cost of fulfilling a request can change can affect an agent's strategy. For example, if an agent expects the cost to increase, then he may want to defer volunteering to fulfill a request. However, if the number of agents in the system is always increasing, then the cost always decreases, so there is never any advantage in waiting. There may be an advantage in delaying a request, but delaying a request is far more costly than delaying the provision of service, since we assume that the need for a service is often subject to real waiting costs, while the need to supply the service is merely to augment a money supply. (Related issues are
discussed in [10].)

We ultimately hope to modify the mechanism so that the price of a job can be set endogenously within the system (as in real-world economies), with agents bidding for jobs rather than there being a fixed cost set externally. However, we have not yet explored the changes required to implement this. Thus, for now, we assume that the cost is set as a function of the number of agents in the system (and that there is no possibility for agents to satisfy a request for less than the official cost, or for requesters to offer to pay more than it).

6. SYBILS AND COLLUSION

In a naive sense, our system is essentially sybil-proof: for an agent to get d dollars, his sybils together still have to perform d units of work. Moreover, since newcomers enter the system with $0, there is no benefit to creating new agents simply to take advantage of an initial endowment. Nevertheless, there are some less direct ways that an agent could take advantage of sybils. First, by having more identities he will have a greater probability of being chosen to make a request. It is easy to see that this will lead to the agent having higher total utility. However, this is just an artifact of our model. To make our system simple to analyze, we have assumed that request opportunities come uniformly at random. In practice, requests are made to satisfy a desire; our model implicitly assumes that all agents are equally likely to have a desire at any particular time. Having sybils should not increase the need to have a request satisfied. Indeed, it would be reasonable to assume that sybils do not make requests at all. Second, having sybils makes it more likely that one of the sybils will be chosen to fulfill a request. This can allow a user to increase his utility by setting a lower threshold; that is, by using a strategy $S_{k'}$ where $k'$ is smaller than the k used by the Nash equilibrium strategy. Intuitively, the need for money is not as critical if money is easier to
obtain. Unlike the first concern, this seems like a real issue. It seems reasonable to believe that when people choose among a number of nodes to satisfy a request, they do so at random, at least to some extent. Even if they look at advertised node features to help make this decision, sybils would allow a user to advertise a wide range of features. Third, an agent can drive down the cost of fulfilling a request by introducing many sybils. Similarly, he could increase the cost (and thus the value of his money) by making a number of sybils leave the system. Conceivably, he could alternate between these techniques to magnify the effects of the work he does. We have not yet calculated the exact effect of this behavior (it interacts with the other two effects of having sybils that we have already noted). Given the number of sybils that would be needed to cause a real change in the perceived size of a large P2P network, the practicality of this attack depends heavily on how much sybils cost an attacker and what resources he has available.

The second point raised regarding sybils also applies to collusion, if we allow money to be loaned. If k agents collude, they can agree that, if one runs out of money, another in the group will loan him money. By pooling their money in this way, the k agents can again do better by setting a lower threshold. Note that the loan mechanism doesn't need to be built into the system; the agents can simply use a fake transaction to transfer the money. These appear to be the main avenues for collusive attacks, but we are still exploring this issue.

7. CONCLUSION

We have given a formal analysis of a scrip system and have shown that there exists a Nash equilibrium where all agents use a threshold strategy. Moreover, we can compute the efficiency of the equilibrium strategy and optimize the price (or money supply) to maximize efficiency. Thus, our analysis provides a formal mechanism for solving some important problems in
implementing scrip systems. It tells us that with a fixed population of rational users, such systems are very unlikely to become unstable. Thus, if this stability is common belief among the agents, we would not expect inflation, bubbles, or crashes caused by agent speculation. However, we cannot rule out the possibility that agents may have other beliefs that will cause them to speculate. Our analysis also tells us how to scale the system to handle an influx of new users without introducing these problems: scale the money supply to keep the average amount of money constant (or, equivalently, adjust prices to achieve the same goal).

There are a number of theoretical issues that are still open, including a characterization of the multiplicity of equilibria: are there usually two? In addition, we expect that one should be able to compute analytic estimates for the best-response function and optimal pricing, which would allow us to understand the relationship between pricing and various parameters in the model.

It would also be of great interest to extend our analysis to handle more realistic settings. We mention a few possible extensions here:

• We have assumed that the world is homogeneous in a number of ways, including request frequency, utility, and ability to satisfy requests. It would be interesting to examine how relaxing any of these assumptions would alter our results.

• We have assumed that there is no cost to an agent to be a member of the system. Suppose instead that we imposed a small cost simply for being present in the system, to reflect the costs of routing messages and overlay maintenance. This modification could have a significant impact on sybil attacks.

• We have described a scrip system that works when there are no altruists, and have shown that no system can work once there are sufficiently many altruists. What happens between these extremes?

• One type of irrational behavior encountered with scrip
systems is hoarding. There are some similarities between hoarding and altruistic behavior: while an altruist provides service for everyone, a hoarder will volunteer for all jobs (in order to get more money) and rarely request service (so as not to spend money). It would be interesting to investigate the extent to which our system is robust against hoarders. Clearly with too many hoarders, there may not be enough money remaining among the non-hoarders to guarantee that, typically, a non-hoarder would have enough money to satisfy a request.

• Finally, in P2P filesharing systems, there are overlapping communities of various sizes that are significantly more likely to be able to satisfy each other's requests. It would be interesting to investigate the effect of such communities on the equilibrium of our system.

There are also a number of implementation issues that would have to be resolved in a real system. For example, we need to worry about the possibility of agents counterfeiting money or lying about whether service was actually provided. Karma [26] provides techniques for dealing with both of these issues and a number of others, but some of Karma's implementation decisions point to problems for our model. For example, it is prohibitively expensive to ensure that bank account balances can never go negative, a fact that our model does not capture. Another example is that Karma has nodes serve as bookkeepers for other nodes' account balances. Like maintaining a presence in the network, this imposes a cost on the node; but unlike that responsibility, it can easily be shirked. Karma suggests several ways to incentivize nodes to perform these duties. We have not investigated whether these mechanisms can be incorporated without disturbing our equilibrium.

8. ACKNOWLEDGEMENTS

We would like to thank Emin Gun Sirer, Shane Henderson, Jon Kleinberg, and three anonymous referees for helpful suggestions. EF, IK, and JH are supported in part by NSF under grant
ITR-0325453. JH is also supported in part by NSF under grants CTC-0208535 and IIS-0534064, by ONR under grant N00014-01-10-511, by the DoD Multidisciplinary University Research Initiative (MURI) program administered by the ONR under grants N00014-01-1-0795 and N00014-04-1-0725, and by AFOSR under grant F49620-02-1-0101.

9. REFERENCES

[1] E. Adar and B. A. Huberman. Free riding on Gnutella. First Monday, 5(10), 2000.
[2] K. G. Anagnostakis and M. Greenwald. Exchange-based incentive mechanisms for peer-to-peer file sharing. In International Conference on Distributed Computing Systems (ICDCS), pages 524-533, 2004.
[3] BitTorrent Inc. BitTorrent web site. http://www.bittorent.com.
[4] A. Cheng and E. Friedman. Sybilproof reputation mechanisms. In Workshop on Economics of Peer-to-Peer Systems (P2PECON), pages 128-132, 2005.
[5] Cornell Information Technologies. Cornell's commodity internet usage statistics. http://www.cit.cornell.edu/computer/students/bandwidth/charts.html.
[6] J. R. Douceur. The sybil attack. In International Workshop on Peer-to-Peer Systems (IPTPS), pages 251-260, 2002.
[7] G. Ellison. Cooperation in the prisoner's dilemma with anonymous random matching. Review of Economic Studies, 61:567-588, 1994.
[8] eMule Project. eMule web site. http://www.emule-project.net/.
[9] M. Feldman, K. Lai, I. Stoica, and J. Chuang. Robust incentive techniques for peer-to-peer networks. In ACM Conference on Electronic Commerce (EC), pages 102-111, 2004.
[10] E. J. Friedman and D. C. Parkes. Pricing WiFi at Starbucks: issues in online mechanism design. In EC '03: Proceedings of the 4th ACM Conference on Electronic Commerce, pages 240-241. ACM Press, 2003.
[11] E. J. Friedman and P. Resnick. The social cost of cheap pseudonyms. Journal of Economics and Management Strategy, 10(2):173-199, 2001.
[12] R. Guha, R. Kumar, P. Raghavan, and A.
Tomkins. Propagation of trust and distrust. In Conference on the World Wide Web (WWW), pages 403-412, 2004.
[13] M. Gupta, P. Judge, and M. H. Ammar. A reputation system for peer-to-peer networks. In Network and Operating System Support for Digital Audio and Video (NOSSDAV), pages 144-152, 2003.
[14] Z. Gyongyi, P. Berkhin, H. Garcia-Molina, and J. Pedersen. Link spam detection based on mass estimation. Technical report, Stanford University, 2005.
[15] J. Ioannidis, S. Ioannidis, A. D. Keromytis, and V. Prevelakis. Fileteller: Paying and getting paid for file storage. In Financial Cryptography, pages 282-299, 2002.
[16] E. T. Jaynes. Where do we stand on maximum entropy? In R. D. Levine and M. Tribus, editors, The Maximum Entropy Formalism, pages 15-118. MIT Press, Cambridge, Mass., 1978.
[17] S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina. The EigenTrust algorithm for reputation management in P2P networks. In Conference on the World Wide Web (WWW), pages 640-651, 2003.
[18] M. Kandori. Social norms and community enforcement. Review of Economic Studies, 59:63-80, 1992.
[19] LogiSense Corporation. LogiSense web site. http://www.logisense.com/tm p2p.html.
[20] L. Lovasz and P. Winkler. Mixing of random walks and other diffusions on a graph. In Walker, editor, Surveys in Combinatorics, 1993, London Mathematical Society Lecture Note Series 187. Cambridge University Press, 1995.
[21] Open Source Technology Group. Slashdot FAQ: comments and moderation. http://slashdot.org/faq/com-mod.shtml#cm700.
[22] OSMB LLC. Gnutella web site. http://www.gnutella.com/.
[23] M. L. Puterman. Markov Decision Processes. Wiley, 1994.
[24] SETI@home. SETI@home web page. http://setiathome.ssl.berkeley.edu/.
[25] Sharman Networks Ltd. Kazaa web site. http://www.kazaa.com/.
[26] V. Vishnumurthy, S. Chandrakumar, and E.
Sirer. Karma: A secure economic framework for peer-to-peer resource sharing. In Workshop on Economics of Peer-to-Peer Systems (P2PECON), 2003.
[27] L. Xiong and L. Liu. Building trust in decentralized peer-to-peer electronic communities. In International Conference on Electronic Commerce Research (ICECR), 2002.
[28] H. Zhang, A. Goel, R. Govindan, K. Mason, and B. V. Roy. Making eigenvector-based reputation systems robust to collusion. In Workshop on Algorithms and Models for the Web-Graph (WAW), pages 92-104, 2004.

Efficiency and Nash Equilibria in a Scrip System for P2P Networks

ABSTRACT

A model of providing service in a P2P network is analyzed. It is shown that by adding a scrip system, a mechanism that admits a reasonable Nash equilibrium that reduces free riding can be obtained. The effect of varying the total amount of money (scrip) in the system on efficiency (i.e., social welfare) is analyzed, and it is shown that by maintaining the appropriate ratio between the total amount of money and the number of agents, efficiency is maximized. The work has implications for many online systems, not only P2P networks but also a wide variety of online forums for which scrip systems are popular, but formal analyses have been lacking.

1. INTRODUCTION

A common feature of many online distributed systems is that individuals provide services for each other. Peer-to-peer (P2P) networks (such as Kazaa [25] or BitTorrent [3]) have proved popular as mechanisms for file sharing, and applications such as distributed computation and file storage are on the horizon; systems such as Seti@home [24] provide computational assistance; systems such as Slashdot [21] provide content, evaluations, and advice forums in which people answer each other's questions. Having individuals provide each other with service typically increases the social welfare: the individual utilizing the resources of the system derives a greater benefit from it than the cost to the
individual providing it. However, the cost of providing service can still be nontrivial. For example, users of Kazaa and BitTorrent may be charged for bandwidth usage; in addition, in some filesharing systems, there is the possibility of being sued, which can be viewed as part of the cost. Thus, in many systems there is a strong incentive to become a free rider and benefit from the system without contributing to it. This is not merely a theoretical problem; studies of the Gnutella [22] network have shown that almost 70 percent of users share no files and nearly 50 percent of responses are from the top 1 percent of sharing hosts [1]. Having relatively few users provide most of the service creates a point of centralization; the disappearance of a small percentage of users can greatly impair the functionality of the system. Moreover, current trends seem to be leading to the elimination of the "altruistic" users on which these systems rely. These heavy users are some of the most expensive customers ISPs have. Thus, as the amount of traffic has grown, ISPs have begun to seek ways to reduce this traffic. Some universities have started charging students for excessive bandwidth usage; others revoke network access for it [5]. A number of companies have also formed whose service is to detect excessive bandwidth usage [19]. These trends make developing a system that encourages a more equal distribution of the work critical for the continued viability of P2P networks and other distributed online systems.

A significant amount of research has gone into designing reputation systems to give preferential treatment to users who are sharing files. Some of the P2P networks currently in use have implemented versions of these techniques. However, these approaches tend to fall into one of two categories: either they are "barter-like" or reputational. By barter-like, we mean that each agent bases its decisions only on information it has derived from its own
interactions. Perhaps the best-known example of a barter-like system is BitTorrent, where clients downloading a file try to find other clients with parts they are missing so that they can trade, thus creating a roughly equal amount of work. Since the barter is restricted to users currently interested in a single file, this works well for popular files, but tends to have problems maintaining availability of less popular ones. An example of a barter-like system built on top of a more traditional file-sharing system is the credit system used by eMule [8]. Each user tracks his history of interactions with other users and gives priority to those he has downloaded from in the past. However, in a large system, the probability that a pair of randomly-chosen users will have interacted before is quite small, so this interaction history will not be terribly helpful. Anagnostakis and Greenwald [2] present a more sophisticated version of this approach, but it still seems to suffer from similar problems. A number of attempts have been made at providing general reputation systems (e.g.
[12, 13, 17, 27]). The basic idea is to aggregate each user's experience into a global number for each individual that intuitively represents the system's view of that individual's reputation. However, these attempts tend to suffer from practical problems because they implicitly view users as either "good" or "bad", assume that the "good" users will act according to the specified protocol, and assume that there are relatively few "bad" users. Unfortunately, if there are easy ways to game the system, once this information becomes widely available, rational users are likely to make use of it. We cannot count on only a few users being "bad" (in the sense of not following the prescribed protocol). For example, Kazaa uses a measure of the ratio of the number of uploads to the number of downloads to identify good and bad users. However, to avoid penalizing new users, they gave new users an average rating. Users discovered that they could use this relatively good rating to free ride for a while and, once it started to get bad, they could delete their stored information and effectively come back as a "new" user, thus circumventing the system (see [2] for a discussion and [11] for a formal analysis of this "whitewashing"). Thus Kazaa's reputation system is ineffective. This is a simple case of a more general vulnerability of such systems to sybil attacks [6], where a single user maintains multiple identities and uses them in a coordinated fashion to get better service than he otherwise would. Recent work has shown that most common reputation systems are vulnerable (in the worst case) to such attacks [4]; however, the degree of this vulnerability is still unclear. Analyses of the practical vulnerabilities, and the existence of systems that are immune to such attacks, remain an area of active research (e.g., [4, 28, 14]).

Simple economic systems based on scrip or money seem to avoid many of these problems, are easy to implement, and are quite
popular (see, e.g., [13, 15, 26]). However, they have a different set of problems. Perhaps the most common involves determining the amount of money in the system. Roughly speaking, if there is too little money in the system relative to the number of agents, then relatively few users can afford to make requests. On the other hand, if there is too much money, then users will not feel the need to respond to a request; they have enough money already. A related problem involves handling newcomers. If newcomers are each given a positive amount of money, then the system is open to sybil attacks. Perhaps not surprisingly, scrip systems end up having to deal with standard economic woes such as inflation, bubbles, and crashes [26].

In this paper, we provide a formal model in which to analyze scrip systems. We describe a simple scrip system and show that, under reasonable assumptions, for each fixed amount of money there is a nontrivial Nash equilibrium involving threshold strategies, where an agent accepts a request if he has less than $k for some threshold k.
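The threshold dynamics just described can be illustrated with a toy simulation. This is only a sketch under simplifying assumptions (one request per round, a unit price, a single shared threshold, and uniform random matching), not the paper's actual model; the function name and parameter values are illustrative.

```python
import random
from collections import Counter

def simulate(n_agents=100, total_money=200, threshold=5,
             rounds=50_000, seed=0):
    """Round-based sketch of threshold-strategy scrip dynamics.

    Each round one agent requests service; every other agent holding
    fewer than `threshold` dollars volunteers to provide it; a random
    volunteer is paid $1 by the requester.
    """
    rng = random.Random(seed)
    money = [total_money // n_agents] * n_agents  # equal initial holdings
    satisfied = 0
    for _ in range(rounds):
        requester = rng.randrange(n_agents)
        if money[requester] == 0:          # cannot afford to pay for service
            continue
        volunteers = [a for a in range(n_agents)
                      if a != requester and money[a] < threshold]
        if not volunteers:                 # everyone has reached the threshold
            continue
        server = rng.choice(volunteers)
        money[requester] -= 1              # requester pays $1 ...
        money[server] += 1                 # ... to the chosen volunteer
        satisfied += 1
    return satisfied / rounds, Counter(money)

rate, dist = simulate()
print(f"fraction of requests served: {rate:.2f}")
print("money distribution:", sorted(dist.items()))
```

By construction no agent ever holds more than the threshold, and varying `total_money` exhibits the trade-off noted above: with very little money most requesters cannot pay, while with too much money every agent starts at or above its threshold and no one volunteers.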
An interesting aspect of our analysis is that, in equilibrium, the distribution of users with each amount of money is the distribution that maximizes entropy (subject to the money supply constraint). This allows us to compute the money supply that maximizes efficiency (social welfare), given the number of agents. It also leads to a solution for the problem of dealing with newcomers: we simply assume that new users come in with no money, and adjust the price of service (which is equivalent to adjusting the money supply) to maintain the ratio that maximizes efficiency. While assuming that new users come in with no money will not work in all settings, we believe the approach will be widely applicable. In systems where the goal is to do work, new users can acquire money by performing work. It should also work in Kazaa-like systems where a user can come in with some resources (e.g., a private collection of MP3s).

The rest of the paper is organized as follows. In Section 2, we present our formal model and observe that it can be used to understand the effect of altruists. In Section 3, we examine what happens in the game under nonstrategic play, if all agents use the same threshold strategy. We show that, in this case, the system quickly converges to a situation where the distribution of money is characterized by maximum entropy. Using this analysis, we show in Section 4 that, under minimal assumptions, there is a nontrivial Nash equilibrium in the game where all agents use some threshold strategy. Moreover, we show in Section 5 that the analysis leads to an understanding of how to choose the amount of money in the system (or, equivalently, the cost to fulfill a request) so as to maximize efficiency, and also shows how to handle new users. In Section 6, we discuss the extent to which our approach can handle sybils and collusion. We conclude in Section 7.

2. THE MODEL

3. THE GAME UNDER NONSTRATEGIC PLAY

4. THE GAME UNDER STRATEGIC PLAY

5. SOCIAL WELFARE
AND SCALABILITY

6. SYBILS AND COLLUSION

7. CONCLUSION

We have given a formal analysis of a scrip system and have shown the existence of a Nash equilibrium where all agents use a threshold strategy. Moreover, we can compute the efficiency of the equilibrium strategy and optimize the price (or money supply) to maximize efficiency. Thus, our analysis provides a formal mechanism for solving some important problems in implementing scrip systems. It tells us that with a fixed population of rational users, such systems are very unlikely to become unstable. Thus, if this stability is common belief among the agents, we would not expect inflation, bubbles, or crashes because of agent speculation. However, we cannot rule out the possibility that agents may have other beliefs that will cause them to speculate. Our analysis also tells us how to scale the system to handle an influx of new users without introducing these problems: scale the money supply to keep the average amount of money constant (or, equivalently, adjust prices to achieve the same goal).

There are a number of theoretical issues that are still open, including a characterization of the multiplicity of equilibria: are there usually 2? In addition, we expect that one should be able to compute analytic estimates for the best response function and optimal pricing, which would allow us to understand the relationship between pricing and various parameters in the model. It would also be of great interest to extend our analysis to handle more realistic settings. We mention a few possible extensions here:

• We have assumed that the world is homogeneous in a number of ways, including request frequency, utility, and ability to satisfy requests. It would be interesting to examine how relaxing any of these assumptions would alter our results.

• We have assumed that there is no cost to an agent to be a member of the system. Suppose instead that we imposed a small cost simply for being present in the system to
reflect the costs of routing messages and overlay maintenance. This modification could have a significant impact on sybil attacks.

• We have described a scrip system that works when there are no altruists and have shown that no system can work once there are sufficiently many altruists. What happens between these extremes?

• One type of "irrational" behavior encountered with scrip systems is hoarding. There are some similarities between hoarding and altruistic behavior. While an altruist provides service for everyone, a hoarder will volunteer for all jobs (in order to get more money) and rarely request service (so as not to spend money). It would be interesting to investigate the extent to which our system is robust against hoarders. Clearly with too many hoarders, there may not be enough money remaining among the non-hoarders to guarantee that, typically, a non-hoarder would have enough money to satisfy a request.

• Finally, in P2P filesharing systems, there are overlapping communities of various sizes that are significantly more likely to be able to satisfy each other's requests. It would be interesting to investigate the effect of such communities on the equilibrium of our system.

There are also a number of implementation issues that would have to be resolved in a real system. For example, we need to worry about the possibility of agents counterfeiting money or lying about whether service was actually provided. Karma [26] provides techniques for dealing with both of these issues and a number of others, but some of Karma's implementation decisions point to problems for our model. For example, it is prohibitively expensive to ensure that bank account balances can never go negative, a fact that our model does not capture. Another example is that Karma has nodes serve as bookkeepers for other nodes' account balances. Like maintaining a presence in the network, this imposes a cost on the node, but unlike that responsibility, it can be easily
shirked.\nKarma suggests several ways to incentivize nodes to perform these duties.\nWe have not investigated whether these mechanisms be incorporated without disturbing our equilibrium.","lvl-4":"Efficiency and Nash Equilibria in a Scrip System for P2P Networks\nABSTRACT\nA model of providing service in a P2P network is analyzed.\nIt is shown that by adding a scrip system, a mechanism that admits a reasonable Nash equilibrium that reduces free riding can be obtained.\nThe effect of varying the total amount of money (scrip) in the system on efficiency (i.e., social welfare) is analyzed, and it is shown that by maintaining the appropriate ratio between the total amount of money and the number of agents, efficiency is maximized.\nThe work has implications for many online systems, not only P2P networks but also a wide variety of online forums for which scrip systems are popular, but formal analyses have been lacking.\n1.\nINTRODUCTION\nA common feature of many online distributed systems is that individuals provide services for each other.\nanswer each other's questions.\nHaving individuals provide each other with service typically increases the social welfare: the individual utilizing the resources of the system derives a greater benefit from it than the cost to the individual providing it.\nHowever, the cost of providing service can still be nontrivial.\nFor example, users of Kazaa and BitTorrent may be charged for bandwidth usage; in addition, in some filesharing systems, there is the possibility of being sued, which can be viewed as part of the cost.\nThus, in many systems there is a strong incentive to become a free rider and benefit from the system without contributing to it.\nHaving relatively few users provide most of the service creates a point of centralization; the disappearance of a small percentage of users can greatly impair the functionality of the system.\nMoreover, current trends seem to be leading to the elimination of the \"altruistic\" users on 
which these systems rely.\nThese heavy users are some of the most expensive customers ISPs have.\nA number of companies have also formed whose service is to detect excessive bandwidth usage [19].\nThese trends make developing a system that encourages a more equal distribution of the work critical for the continued viability of P2P networks and other distributed online systems.\nA significant amount of research has gone into designing reputation systems to give preferential treatment to users who are sharing files.\nSome of the P2P networks currently in use have implemented versions of these techniques.\nHowever, these approaches tend to fall into one of two categories: either they are \"barter-like\" or reputational.\nBy barter-like, we mean that each agent bases its decisions only on information it has derived from its own interactions.\nPerhaps the best-known example of a barter-like system is BitTorrent, where clients downloading a file try to find other clients with parts they are missing so that they can trade, thus creating a roughly equal amount of work.\nSince the barter is restricted to users currently interested in a single file, this works well for popular files, but tends to have problems maintaining availability of less popular ones.\nAn example of a barter-like system built on top of a more traditional file-sharing system is the credit system used by eMule\n[8].\nEach user tracks his history of interactions with other users and gives priority to those he has downloaded from in the past.\nHowever, in a large system, the probability that a pair of randomly-chosen users will have interacted before is quite small, so this interaction history will not be terribly helpful.\nA number of attempts have been made at providing general reputation systems (e.g. 
[12, 13, 17, 27]).\nThe basic idea is to aggregate each user's experience into a global number for each individual that intuitively represents the system's view of that individual's reputation.\nUnfortunately, if there are easy ways to game the system, once this information becomes widely available, rational users are likely to make use of it.\nWe cannot count on only a few users being \"bad\" (in the sense of not following the prescribed protocol).\nFor example, Kazaa uses a measure of the ratio of the number of uploads to the number of downloads to identify good and bad users.\nHowever, to avoid penalizing new users, they gave new users an average rating.\nThus Kazaa's reputation system is ineffective.\nThis is a simple case of a more general vulnerability of such systems to sybil attacks [6], where a single user maintains multiple identities and uses them in a coordinated fashion to get better service than he otherwise would.\nRecent work has shown that most common reputation systems are vulnerable (in the worst case) to such attacks [4]; however, the degree of this vulnerability is still unclear.\nThe analyses of the practical vulnerabilities and the existence of such systems that are immune to such attacks remains an area of active research (e.g., [4, 28, 14]).\nSimple economic systems based on a scrip or money seem to avoid many of these problems, are easy to implement and are quite popular (see, e.g., [13, 15, 26]).\nHowever, they have a different set of problems.\nPerhaps the most common involve determining the amount of money in the system.\nRoughly speaking, if there is too little money in the system relative to the number of agents, then relatively few users can afford to make request.\nOn the other hand, if there is too much money, then users will not feel the need to respond to a request; they have enough money already.\nA related problem involves handling newcomers.\nIf newcomers are each given a positive amount of money, then the system is open to 
sybil attacks.\nPerhaps not surprisingly, scrip systems end up having to deal with standard economic woes such as inflation, bubbles, and crashes [26].\nIn this paper, we provide a formal model in which to analyze scrip systems.\nThis allows us to compute the money supply that maximizes efficiency (social welfare), given the number of agents.\nIt also leads to a solution for the problem of dealing with newcomers: we simply assume that new users come in with no money, and adjust the price of service (which is equivalent to adjusting the money supply) to maintain the ratio that maximizes efficiency.\nWhile assuming that new users come in with no money will not work in all settings, we believe the approach will be widely applicable.\nIn systems where the goal is to do work, new users can acquire money by performing work.\nIt should also work in Kazaalike system where a user can come in with some resources (e.g., a private collection of MP3s).\nIn Section 2, we present our formal model and observe that it can be used to understand the effect of altruists.\nIn Section 3, we examine what happens in the game under nonstrategic play, if all agents use the same threshold strategy.\nWe show that, in this case, the system quickly converges to a situation where the distribution of money is characterized by maximum entropy.\nUsing this analysis, we show in Section 4 that, under minimal assumptions, there is a nontrivial Nash equilibrium in the game where all agents use some threshold strategy.\nMoreover, we show in Section 5 that the analysis leads to an understanding of how to choose the amount of money in the system (or, equivalently, the cost to fulfill a request) so as to maximize efficiency, and also shows how to handle new users.\nIn Section 6, we discuss the extent to which our approach can handle sybils and collusion.\nWe conclude in Section 7.\n7.\nCONCLUSION\nWe have given a formal analysis of a scrip system and have shown that the existence of a Nash equilibrium 
where all agents use a threshold strategy.\nMoreover, we can compute efficiency of equilibrium strategy and optimize the price (or money supply) to maximize efficiency.\nThus, our analysis provides a formal mechanisms for solving some important problems in implementing scrip systems.\nIt tells us that with a fixed population of rational users, such systems are very unlikely to become unstable.\nThus if this stability is common belief among the agents we would not expect inflation, bubbles, or crashes because of agent speculation.\nHowever, we cannot rule out the possibility that that agents may have other beliefs that will cause them to speculate.\nOur analysis also tells us how to scale the system to handle an influx of new users without introducing these problems: scale the money supply to keep the average amount of money constant (or equivalently adjust prices to achieve the same goal).\nThere are a number of theoretical issues that are still open, including a characterization of the multiplicity of equilibria--are there usually 2?\nIt would also be of great interest to extend our analysis to handle more realistic settings.\nIt would be interesting to examine how relaxing any of these assumptions would alter our results.\n9 We have assumed that there is no cost to an agent to be a member of the system.\nSuppose instead that we imposed a small cost simply for being present in the system to reflect the costs of routing messages and overlay maintainance.\nThis modification could have a significant impact on sybil attacks.\n9 We have described a scrip system that works when there are no altruists and have shown that no system can work once there there are sufficiently many altruists.\nWhat happens between these extremes?\n9 One type of \"irrational\" behavior encountered with scrip systems is hoarding.\nWhile an altruist provide service for everyone, a hoarder will volunteer for all jobs (in order to get more money) and rarely request service (so as not to spend 
money).\nIt would be interesting to investigate the extent to which our system is robust against hoarders.\nClearly with too many hoarders, there may not be enough money remaining among the non-hoarders to guarantee that, typically, a non-hoarder would have enough money to satisfy a request.\n9 Finally, in P2P filesharing systems, there are overlapping communities of various sizes that are significantly more likely to be able to satisfy each other's requests.\nIt would be interesting to investigate the effect of such communities on the equilibrium of our system.\nThere are also a number of implementation issues that would have to be resolved in a real system.\nFor example, we need to worry about the possibility of agents counterfeiting money or lying about whether service was actually provided.\nKarma [26] provdes techniques for dealing with both of these issues and a number of others, but some of Karma's implementation decisions point to problems for our model.\nAnother example is that Karma has nodes serve as bookkeepers for other nodes account balances.\nKarma suggests several ways to incentivize nodes to perform these duties.\nWe have not investigated whether these mechanisms be incorporated without disturbing our equilibrium.","lvl-2":"Efficiency and Nash Equilibria in a Scrip System for P2P Networks\nABSTRACT\nA model of providing service in a P2P network is analyzed.\nIt is shown that by adding a scrip system, a mechanism that admits a reasonable Nash equilibrium that reduces free riding can be obtained.\nThe effect of varying the total amount of money (scrip) in the system on efficiency (i.e., social welfare) is analyzed, and it is shown that by maintaining the appropriate ratio between the total amount of money and the number of agents, efficiency is maximized.\nThe work has implications for many online systems, not only P2P networks but also a wide variety of online forums for which scrip systems are popular, but formal analyses have been 
lacking.\n1.\nINTRODUCTION\nA common feature of many online distributed systems is that individuals provide services for each other.\nPeer-topeer (P2P) networks (such as Kazaa [25] or BitTorrent [3]) have proved popular as mechanisms for file sharing, and applications such as distributed computation and file storage are on the horizon; systems such as Seti@home [24] provide computational assistance; systems such as Slashdot [21] provide content, evaluations, and advice forums in which people\nanswer each other's questions.\nHaving individuals provide each other with service typically increases the social welfare: the individual utilizing the resources of the system derives a greater benefit from it than the cost to the individual providing it.\nHowever, the cost of providing service can still be nontrivial.\nFor example, users of Kazaa and BitTorrent may be charged for bandwidth usage; in addition, in some filesharing systems, there is the possibility of being sued, which can be viewed as part of the cost.\nThus, in many systems there is a strong incentive to become a free rider and benefit from the system without contributing to it.\nThis is not merely a theoretical problem; studies of the Gnutella [22] network have shown that almost 70 percent of users share no files and nearly 50 percent of responses are from the top 1 percent of sharing hosts [1].\nHaving relatively few users provide most of the service creates a point of centralization; the disappearance of a small percentage of users can greatly impair the functionality of the system.\nMoreover, current trends seem to be leading to the elimination of the \"altruistic\" users on which these systems rely.\nThese heavy users are some of the most expensive customers ISPs have.\nThus, as the amount of traffic has grown, ISPs have begun to seek ways to reduce this traffic.\nSome universities have started charging students for excessive bandwidth usage; others revoke network access for it [5].\nA number of companies 
have also formed whose service is to detect excessive bandwidth usage [19].\nThese trends make developing a system that encourages a more equal distribution of the work critical for the continued viability of P2P networks and other distributed online systems.\nA significant amount of research has gone into designing reputation systems to give preferential treatment to users who are sharing files.\nSome of the P2P networks currently in use have implemented versions of these techniques.\nHowever, these approaches tend to fall into one of two categories: either they are \"barter-like\" or reputational.\nBy barter-like, we mean that each agent bases its decisions only on information it has derived from its own interactions.\nPerhaps the best-known example of a barter-like system is BitTorrent, where clients downloading a file try to find other clients with parts they are missing so that they can trade, thus creating a roughly equal amount of work.\nSince the barter is restricted to users currently interested in a single file, this works well for popular files, but tends to have problems maintaining availability of less popular ones.\nAn example of a barter-like system built on top of a more traditional file-sharing system is the credit system used by eMule\n[8].\nEach user tracks his history of interactions with other users and gives priority to those he has downloaded from in the past.\nHowever, in a large system, the probability that a pair of randomly-chosen users will have interacted before is quite small, so this interaction history will not be terribly helpful.\nAnagnostakis and Greenwald [2] present a more sophisticated version of this approach, but it still seems to suffer from similar problems.\nA number of attempts have been made at providing general reputation systems (e.g. 
[12, 13, 17, 27]).\nThe basic idea is to aggregate each user's experience into a global number for each individual that intuitively represents the system's view of that individual's reputation.\nHowever, these attempts tend to suffer from practical problems because they implicitly view users as either \"good\" or \"bad\", assume that the \"good\" users will act according to the specified protocol, and that there are relatively few \"bad\" users.\nUnfortunately, if there are easy ways to game the system, once this information becomes widely available, rational users are likely to make use of it.\nWe cannot count on only a few users being \"bad\" (in the sense of not following the prescribed protocol).\nFor example, Kazaa uses a measure of the ratio of the number of uploads to the number of downloads to identify good and bad users.\nHowever, to avoid penalizing new users, they gave new users an average rating.\nUsers discovered that they could use this relatively good rating to free ride for a while and, once it started to get bad, they could delete their stored information and effectively come back as a \"new\" user, thus circumventing the system (see [2] for a discussion and [11] for a formal analysis of this \"whitewashing\").\nThus Kazaa's reputation system is ineffective.\nThis is a simple case of a more general vulnerability of such systems to sybil attacks [6], where a single user maintains multiple identities and uses them in a coordinated fashion to get better service than he otherwise would.\nRecent work has shown that most common reputation systems are vulnerable (in the worst case) to such attacks [4]; however, the degree of this vulnerability is still unclear.\nThe analyses of the practical vulnerabilities and the existence of such systems that are immune to such attacks remains an area of active research (e.g., [4, 28, 14]).\nSimple economic systems based on a scrip or money seem to avoid many of these problems, are easy to implement and are quite 
popular (see, e.g., [13, 15, 26]).\nHowever, they have a different set of problems.\nPerhaps the most common involve determining the amount of money in the system.\nRoughly speaking, if there is too little money in the system relative to the number of agents, then relatively few users can afford to make request.\nOn the other hand, if there is too much money, then users will not feel the need to respond to a request; they have enough money already.\nA related problem involves handling newcomers.\nIf newcomers are each given a positive amount of money, then the system is open to sybil attacks.\nPerhaps not surprisingly, scrip systems end up having to deal with standard economic woes such as inflation, bubbles, and crashes [26].\nIn this paper, we provide a formal model in which to analyze scrip systems.\nWe describe a simple scrip system and show that, under reasonable assumptions, for each fixed amount of money there is a nontrivial Nash equilibrium involving threshold strategies, where an agent accepts a request if he has less than $k for some threshold k.' 
An interesting aspect of our analysis is that, in equilibrium, the distribution of users with each amount of money is the distribution that maximizes entropy (subject to the money supply constraint). This allows us to compute the money supply that maximizes efficiency (social welfare), given the number of agents. It also leads to a solution for the problem of dealing with newcomers: we simply assume that new users come in with no money, and adjust the price of service (which is equivalent to adjusting the money supply) to maintain the ratio that maximizes efficiency. While assuming that new users come in with no money will not work in all settings, we believe the approach will be widely applicable. In systems where the goal is to do work, new users can acquire money by performing work. It should also work in Kazaa-like systems where a user can come in with some resources (e.g., a private collection of MP3s). The rest of the paper is organized as follows. In Section 2, we present our formal model and observe that it can be used to understand the effect of altruists. In Section 3, we examine what happens in the game under nonstrategic play, when all agents use the same threshold strategy. We show that, in this case, the system quickly converges to a situation where the distribution of money is characterized by maximum entropy. Using this analysis, we show in Section 4 that, under minimal assumptions, there is a nontrivial Nash equilibrium in the game where all agents use some threshold strategy. Moreover, we show in Section 5 that the analysis leads to an understanding of how to choose the amount of money in the system (or, equivalently, the cost to fulfill a request) so as to maximize efficiency, and also shows how to handle new users. In Section 6, we discuss the extent to which our approach can handle sybils and collusion. We conclude in Section 7.

2. THE MODEL

To begin, we formalize providing service in a P2P network as a non-cooperative game. Unlike
much of the modeling in this area, our model captures the asymmetric interactions in a file-sharing system, in which the matching of players (those requesting a file with those who have that particular file) is a key part of the system. This is in contrast with much previous work, which uses random matching in a prisoner's dilemma. Such models were studied in the economics literature [18, 7] and first applied to online reputations in [11]; an application to P2P is found in [9]. This random-matching model fails to capture some salient aspects of a number of important settings. When a request is made, there are typically many people in the network who can potentially satisfy it (especially in a large P2P network), but not all can. For example, some people may not have the time or resources to satisfy the request. The random-matching process ignores the fact that some people may not be able to satisfy the request. Presumably, if the person matched with the requester could not satisfy the match, he would have to defect. Moreover, it does not capture the fact that the decision as to whether to "volunteer" to satisfy the request should be made before the matching process, not after. That is, the matching process does not capture the fact that if someone is unwilling to satisfy the request, there will doubtless be others who can satisfy it. Finally, the actions and payoffs in the prisoner's dilemma game do not obviously correspond to actual choices that can be made. For example, it is not clear what defection on the part of the requester means. In our model we try to deal with all these issues. (Although we refer to our unit of scrip as the dollar, these are not real dollars, nor do we view them as convertible to dollars.) Suppose that there are n agents. At each round, an agent is picked uniformly at random to make a request. Each other agent is able to satisfy this request with probability β > 0 at all times, independent of previous behavior. The term β
is intended to capture the probability that an agent is busy, or does not have the resources to fulfill the request. Assuming that β is time-independent does not capture the intuition that being unable to fulfill a request at time t may well be correlated with being unable to fulfill it at time t + 1. We believe that, in large systems, we should be able to drop the independence assumption, but we leave this for future work. In any case, those agents that are able to satisfy the request must choose whether or not to volunteer to satisfy it. If at least one agent volunteers, the requester gets a benefit of 1 util (the job is done) and one of the volunteers is chosen at random to fulfill the request. The agent that fulfills the request pays a cost of α < 1. As is standard in the literature, we assume that agents discount future payoffs by a factor of δ per time unit. This captures the intuition that a util now is worth more than a util tomorrow, and allows us to compute the total utility derived by an agent in an infinite game. Lastly, we assume that with more players, requests come more often. Thus we assume that the time between rounds is 1/n. This captures the fact that the systems we want to model are really processing many requests in parallel, so we would expect the number of concurrent requests to be proportional to the number of users. Let G(n, δ, α, β) denote this game with n agents, a discount factor of δ, a cost to satisfy requests of α, and a probability of being able to satisfy requests of β. When the latter two parameters are not relevant, we sometimes write G(n, δ). We use the following notation throughout the paper:

• p^t denotes the agent chosen in round t.

• B_i^t ∈ {0, 1} denotes whether agent i can satisfy the request in round t. B_i^t = 1 with probability β > 0, and B_i^t is independent of B_i^{t'} for all t' ≠ t.
• V_i^t ∈ {0, 1} denotes agent i's decision about whether to volunteer in round t; 1 indicates volunteering. V_i^t is determined by agent i's strategy.

• v^t ∈ {j | V_j^t B_j^t = 1} denotes the agent chosen to satisfy the request. This agent is chosen uniformly at random from those who are willing (V_j^t = 1) and able (B_j^t = 1) to satisfy the request.

• u_i^t denotes agent i's utility in round t. A standard agent is one whose utility is determined as discussed in the introduction; namely, the agent gets a utility of 1 for a fulfilled request and a utility of −α for fulfilling a request. Thus, if i is a standard agent, then u_i^t = 1 if p^t = i and the request is satisfied, u_i^t = −α if v^t = i, and u_i^t = 0 otherwise. (For large n, our model converges to one in which players make requests in real time, and the times between a single player's requests are exponentially distributed with mean 1; in addition, the times between requests served by a single player are also exponentially distributed.)

• U_i = Σ_{t=0}^∞ δ^{t/n} u_i^t denotes the total utility for agent i. It is the discounted total of agent i's utility in each round. Note that the effective discount factor is δ^{1/n}, since an increase in n leads to a shortening of the time between rounds.

Now that we have a model of making and satisfying requests, we use it to analyze free riding. Take an altruist to be someone who always fulfills requests. Agent i might rationally behave altruistically if agent i's utility function has the following form, for some α' > 0: u_i^t = 1 if p^t = i and the request is satisfied, u_i^t = α' if v^t = i, and u_i^t = 0 otherwise. Thus, rather than suffering a loss of utility when satisfying a request, an agent derives positive utility from satisfying it. Such a utility function is a reasonable representation of the pleasure that some people get from the sense that they provide the music that everyone is playing. For such altruistic agents, playing the strategy that sets V_i^t = 1 for all t is dominant. While having a nonstandard utility function might be one reason that a rational agent might use this strategy, there are certainly others. For example, a naive
user of file-sharing software with a good connection might well follow this strategy. All that matters for the following discussion is that there are some agents that use this strategy, for whatever reason. As we have observed, such users seem to exist in some large systems. Suppose that our system has a altruists. Intuitively, if a is moderately large, they will manage to satisfy most of the requests in the system even if other agents do no work. Thus, there is little incentive for any other agent to volunteer, because he is already getting full advantage of participating in the system. Based on this intuition, it is a relatively straightforward calculation to determine a value of a that depends only on α, β, and δ, but not the number n of players in the system, such that the dominant strategy for all standard agents i is to never volunteer to satisfy any requests (i.e., V_i^t = 0 for all t).

PROOF. Consider the strategy of a standard player j in the presence of a altruists. Even with no money, player j will get a request satisfied with probability 1 − (1 − β)^a just through the actions of these altruists. Thus, even if j is chosen to make a request in every round, the most additional expected utility he can hope to gain by having money is Σ_{k=1}^∞ (1 − β)^a δ^k ≤ (1 − β)^a/(1 − δ). If (1 − β)^a/(1 − δ) < α or, equivalently, if a > log_{1−β}(α(1 − δ)), never volunteering is a dominant strategy.

Consider the following reasonable values for our parameters: β = .01 (so that each player can satisfy 1% of the requests), α = .1 (a low but non-negligible cost), δ = .9999/day (which corresponds to a yearly discount factor of approximately 0.95), and an average of 1 request per day per player. Then we only need a > 1145. While this is a large number, it is small relative to the size of a large P2P network. Current systems all have a pool of users behaving like our altruists. This
means that attempts to add a reputation system on top of an existing P2P system to influence users to cooperate will have no effect on rational users. To have a fair distribution of work, these systems must be fundamentally redesigned to eliminate the pool of altruistic users. In some sense, this is not a problem at all. In a system with altruists, the altruists are presumably happy, as are the standard agents, who get almost all their requests satisfied without having to do any work. Indeed, current P2P networks work quite well in terms of distributing content to people. However, as we said in the introduction, there is some reason to believe these altruists may not be around forever. Thus, it is worth looking at what can be done to make these systems work in their absence. For the rest of this paper we assume that all agents are standard, and try to maximize expected utility. We are interested in equilibria based on a scrip system. Each time an agent has a request satisfied, he must pay the person who satisfied it some amount. For now, we assume that the payment is fixed; for simplicity, we take the amount to be $1. We denote by M the total amount of money in the system. We assume that M > 0 (otherwise no one will ever be able to get paid). In principle, agents are free to adopt a very wide variety of strategies. They can make decisions based on the names of other agents or use a strategy that is heavily history-dependent, and mix these strategies freely. To aid our analysis, we would like to be able to restrict our attention to a simpler class of strategies. The class of strategies we are interested in is easy to motivate. The intuitive reason for wanting to earn money is to cater for the possibility that an agent will run out before he has a chance to earn more. On the other hand, a rational agent with plenty of money would not want to work, because by the time he has managed to spend all his money, the util will have less value than the present
cost of working. The natural balance between these two is a threshold strategy. Let S_k be the strategy where an agent volunteers whenever he has less than k dollars and not otherwise. Note that S_0 is the strategy where the agent never volunteers. While everyone playing S_0 is a Nash equilibrium (nobody can do better by volunteering if no one else is willing to), it is an uninteresting one. As we will show in Section 4, it is sufficient to restrict our attention to this class of strategies. We use K_i^t to denote the amount of money agent i has at time t. Clearly K_i^{t+1} = K_i^t unless agent i has a request satisfied, in which case K_i^{t+1} = K_i^t − 1, or agent i fulfills a request, in which case K_i^{t+1} = K_i^t + 1.

3. THE GAME UNDER NONSTRATEGIC PLAY

Before we consider strategic play, we examine what happens in the system if everyone just plays the same strategy S_k. Our overall goal is to show that there is some distribution over money (i.e., the fraction of people with each amount of money) such that the system "converges" to this distribution in a sense to be made precise shortly. Suppose that everyone plays S_k. For simplicity, assume that everyone has at most k dollars. We can make this assumption with essentially no loss of generality, since if someone has more than k dollars, he will just spend money until he has at most k dollars. After this point he will never acquire more than k.
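These nonstrategic dynamics are easy to simulate directly. The sketch below is our own illustration (parameter values are arbitrary): it plays the game with every agent using S_k, starting from equal holdings, and returns the empirical distribution of money, which by the analysis in this section should concentrate near the maximum-entropy distribution for large n.

```python
import random

def simulate(n=100, k=5, m=2, beta=0.8, rounds=200000, seed=1):
    """Simulate the scrip system when every agent plays S_k
    (volunteer iff current holdings are below k dollars)."""
    rng = random.Random(seed)
    money = [m] * n          # money supply is M = m * n, conserved throughout
    for _ in range(rounds):
        requester = rng.randrange(n)
        if money[requester] == 0:
            continue         # a requester with $0 cannot pay, so no trade
        # Volunteers are willing (holdings < k) and able (probability beta).
        able = [i for i in range(n)
                if i != requester and money[i] < k and rng.random() < beta]
        if able:
            server = rng.choice(able)
            money[requester] -= 1
            money[server] += 1
    # Fraction of agents holding each amount j in {0, ..., k}.
    dist = [sum(1 for x in money if x == j) / n for j in range(k + 1)]
    return money, dist
```

Since a server is only ever drawn from agents with holdings below k, no agent's holdings ever exceed k, matching the assumption above.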
Thus, eventually the system will be in such a state. If M ≥ kn, no agent will ever be willing to work. Thus, for the purposes of this section we assume that M < kn. From the perspective of a single agent, in (stochastic) equilibrium, the agent is undergoing a random walk. However, the parameters of this random walk depend on the random walks of the other agents, and it is quite complicated to solve directly. Thus we consider an alternative analysis based on the evolution of the system as a whole. If everyone has at most k dollars, then the amount of money that an agent has is an element of {0, ..., k}. If there are n agents, then the state of the game can be described by identifying how much money each agent has, so we can represent it by an element of S_{k,n} = {0, ..., k}^{{1, ..., n}}. Since the total amount of money is constant, not all of these states can arise in the game. For example, the state where each player has $0 is impossible to reach in any game with money in the system. Let m_S(s) = Σ_{i ∈ {1,...,n}} s(i) denote the total amount of money in the game at state s, where s(i) is the number of dollars that agent i has in state s. We want to consider only those states where the total money in the system is M, namely S_{k,n,M} = {s ∈ S_{k,n} | m_S(s) = M}. Under the assumption that all agents use strategy S_k, the evolution of the system can be treated as a Markov chain M_{k,n,M} over the state space S_{k,n,M}. It is possible to move from one state to another in a single round if, by choosing a particular agent to make a request and a particular agent to satisfy it, the amounts of money possessed by each agent become those in the second state. Therefore the probability of a transition from a state s to t is 0 unless there exist two agents i and j such that s(i') = t(i') for all i' ∉ {i, j}, t(i) = s(i) + 1, and t(j) = s(j) − 1. In this case the probability of transitioning from s to t is the probability of j being chosen to spend a dollar and there being someone willing and
able to satisfy his request, (1/n)(1 − (1 − β)^{|{i' | s(i') ≠ k}| − I_j}), multiplied by the probability of i being the one chosen to satisfy the request, 1/(|{i' | s(i') ≠ k}| − I_j). Here I_j is 0 if j has k dollars and 1 otherwise (it is just a correction for the fact that j cannot satisfy his own request). Let Δ_k denote the set of probability distributions on {0, ..., k}. We can think of an element of Δ_k as describing the fraction of people with each amount of money. This is a useful way of looking at the system, since we typically don't care who has each amount of money, but just the fraction of people that have each amount. As before, not all elements of Δ_k are possible, given our constraint that the total amount of money is M. Rather than thinking in terms of the total amount of money in the system, it will prove more useful to think in terms of the average amount of money each player has. Of course, the total amount of money in a system with n agents is M iff the average amount that each player has is m = M/n.

Figure 1: Average number of steps needed to approach the maximum-entropy distribution.

Let Δ_k^m denote all distributions d ∈ Δ_k such that E(d) = m (i.e., Σ_{j=0}^k d(j)·j = m). Given a state s, let d_s denote the distribution of money in s. Let H(d) = −Σ_{j=0}^k d(j) log d(j) denote the entropy of d. If Δ is a closed convex set of distributions, then it is well known that there is a unique distribution in Δ at which the entropy function takes its maximum value in Δ. Since Δ_k^m is easily seen to be a closed convex set of distributions, it follows that there is a unique distribution in Δ_k^m, which we denote d*_{k,m}, whose entropy is greater than that of all other distributions in Δ_k^m. We now show that, for n sufficiently large, the Markov chain M_{k,n,M} is almost surely in a state s such that d_s is close to d*_{k,M/n}. The statement is correct under a number of senses of "close". For definiteness, we consider the Euclidean distance. Given ε > 0, let S_{k,n,m,ε} denote
the set of states s in S_{k,n,mn} such that Σ_{j=0}^k |d_s(j) − d*_{k,m}(j)|^2 < ε. Given a Markov chain M over a state space S and S' ⊆ S, let X_{t,s,S'} be the random variable that denotes that M is in a state of S' at time t, when started in state s.

PROOF. (Sketch) Suppose that at some time t, Pr(X_{t,s,s'}) is uniform over all s'. Then the probability of being in a set of states is just the size of the set divided by the total number of states. A standard technique from statistical mechanics is to show that there is a concentration phenomenon around the maximum-entropy distribution [16]. More precisely, using a straightforward combinatorial argument, it can be shown that the fraction of states not in S_{k,n,m,ε} is bounded by p(n)/e^{cn}, where p is a polynomial and c is a constant. This fraction clearly goes to 0 as n gets large. Thus, for sufficiently large n, Pr(X_{t,s,S_{k,n,m,ε}}) > 1 − ε if Pr(X_{t,s,s'}) is uniform. It is relatively straightforward to show that our Markov chain has a limit distribution π over S_{k,n,mn} such that for all s, s' ∈ S_{k,n,mn}, lim_{t→∞} Pr(X_{t,s,s'}) = π_{s'}. Let P_{ij} denote the probability of transitioning from state i to state j. It is easily verified by an explicit computation of the transition probabilities that P_{ij} = P_{ji} for all states i and j. It immediately follows from this symmetry that π_s = π_{s'}, so π is uniform. After a sufficient amount of time, the distribution will be close enough to π that the probabilities are again bounded by a constant, which is sufficient to complete the proof.

Figure 2: Maximum distance from the maximum-entropy distribution over 10^6 timesteps.

Figure 3: Average time to get within .001 of the maximum-entropy distribution.

We performed a number of experiments that show that the maximum-entropy behavior described in Theorem 3.1 arises quickly for quite practical values of n and t. The first experiment showed that, even if n = 1000, we reach the maximum-entropy
distribution quickly. We averaged 10 runs of the Markov chain for k = 5, where there is enough money for each agent to have $2, starting from a very extreme distribution (every agent has either $0 or $5), and considered the average time needed to come within various distances of the maximum-entropy distribution. As Figure 1 shows, after 2,000 steps, on average, the Euclidean distance from the average distribution of money to the maximum-entropy distribution is .008; after 3,000 steps, the distance is down to .001. Note that this is really only 3 real time units, since with 1000 players we have 1000 transactions per time unit. We then considered how close the distribution stays to the maximum-entropy distribution once it has reached it. To simplify things, we started the system in a state whose distribution was very close to the maximum-entropy distribution and ran it for 10^6 steps, for various values of n. As Figure 2 shows, the system does not move far from the maximum-entropy distribution once it is there. For example, if n = 5000, the system is never more than distance .001 from the maximum-entropy distribution; if n = 25,000, it is never more than .0002 from the maximum-entropy distribution. Finally, we considered more carefully how quickly the system converges to the maximum-entropy distribution for various values of n. There are approximately k^n possible states, so the convergence time could in principle be quite large. However, we suspect that the Markov chain that arises here is rapidly mixing, which means that it will converge significantly faster (see [20] for more details about rapid mixing). We believe that the actual time needed is O(n). This behavior is illustrated in Figure 3, which shows that for our example chain (again averaged over 10 runs), after 3n steps, the Euclidean distance between the actual distribution of money in the system and the maximum-entropy distribution is less than .001.

4. THE GAME UNDER STRATEGIC PLAY

We
have seen that the system is well behaved if the agents all follow a threshold strategy; we now want to show that there is a nontrivial Nash equilibrium where they do so (that is, a Nash equilibrium where all the agents use S_k for some k > 0). This is not true in general. If δ is small, then agents have no incentive to work. Intuitively, if future utility is sufficiently discounted, then all that matters is the present, and there is no point in volunteering to work. With small δ, S_0 is the only equilibrium. However, we show that for δ sufficiently large, there is another equilibrium in threshold strategies. We do this by first showing that, if every other agent is playing a threshold strategy, then there is a best response that is also a threshold strategy (although not necessarily the same one). We then show that there must be some (mixed) threshold strategy for which this best response is the same strategy. It follows that this tuple of threshold strategies is a Nash equilibrium. As a first step, we show that, for all k, if everyone other than agent i is playing S_k, then there is a threshold strategy S_{k'} that is a best response for agent i. To prove this, we need to assume that the system is close to the steady-state distribution (i.e., the maximum-entropy distribution). However, as long as δ is sufficiently close to 1, we can ignore what happens during the period that the system is not in steady state. We have thus far considered threshold strategies of the form S_k, where k is a natural number; this is a discrete set of strategies. For a later proof, it will be helpful to have a continuous set of strategies. If γ = k + γ', where k is a natural number and 0 ≤ γ' < 1, let S_γ be the strategy that performs S_k with probability 1 − γ' and S_{k+1} with probability γ'.
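The claim that there is a best response in threshold form can be explored numerically. The sketch below is our own simplification (all parameter values are hypothetical): the rest of the population is summarized by fixed per-round probabilities of the agent being the requester or being a candidate volunteer, holdings are capped, and value iteration over the agent's money states recovers an optimal policy, which comes out as a threshold.

```python
def best_response_threshold(delta=0.95, alpha=0.1, p_req=0.05,
                            p_cand=0.05, cap=20, iters=2000):
    """Value iteration on a single agent's money-state MDP.

    Simplified view (made-up parameters): each round the agent is the
    requester with probability p_req (gaining 1 util and paying $1 when
    he has money), is a candidate volunteer with probability p_cand
    (choosing whether to pay cost alpha to earn $1), and is idle
    otherwise; holdings are capped at `cap` dollars.
    """
    V = [0.0] * (cap + 1)
    for _ in range(iters):
        W = [0.0] * (cap + 1)
        for s in range(cap + 1):
            req = 1.0 + delta * V[s - 1] if s > 0 else delta * V[s]
            work = -alpha + delta * V[min(s + 1, cap)]   # volunteer
            idle = delta * V[s]                          # decline
            W[s] = (p_req * req + p_cand * max(work, idle)
                    + (1 - p_req - p_cand) * idle)
        V = W
    # Volunteer exactly when working beats idling; the switch point is
    # the best-response threshold k'.
    policy = [-alpha + delta * V[min(s + 1, cap)] > delta * V[s]
              for s in range(cap + 1)]
    return sum(policy), policy
```

In runs like this the recovered policy volunteers at low holdings and declines above some k', the threshold form that the text establishes via standard MDP results.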
(Note that we are not considering arbitrary mixed threshold strategies here, but rather just mixing between adjacent strategies for the sole purpose of making our strategies continuous in a natural way.) Theorem 3.1 applies to strategies S_γ (the same proof goes through without change), where γ is an arbitrary nonnegative real number.

PROOF. (Sketch) If δ is sufficiently large, we can ignore what happens before the system converges to the maximum-entropy distribution. (Formally, we need to define the strategies when the system is far from equilibrium. However, these far-from-equilibrium strategies will not affect the equilibrium behavior when n is large, since deviations from stochastic equilibrium are extremely rare.) If n is sufficiently large, then the strategy played by one agent will not affect the distribution of money significantly. Thus, the probability of i moving from one state (dollar amount) to another depends only on i's strategy (since we can take the probability that i will be chosen to make a request and the probability that i will be chosen to satisfy a request to be constant). Thus, from i's point of view, the system is a Markov decision process (MDP), and i needs to compute the optimal policy (strategy) for this MDP. It follows from standard results [23, Theorem 6.11.6] that there is an optimal policy that is a threshold policy. The argument that the best response is either unique or that there is an interval of best responses follows from a more careful analysis of the value function for the MDP.

We remark that there may be best responses that are not threshold strategies. All that Theorem 4.1 shows is that, among best responses, there is at least one that is a threshold strategy. Since we know that there is a best response that is a threshold strategy, we can look for a Nash equilibrium in the space of threshold strategies.

PROOF. It follows easily from the proof of Theorem 4.1 that if br(δ, γ) is the minimal best-response threshold strategy when all the other agents are playing S_γ and the discount factor is δ, then, for fixed δ, br(δ, ·) is a step function. It also follows from the theorem that if there are two best responses, then a mixture of them is also a best response. Therefore, if we join the "steps" by vertical lines, we get a best-response curve. It is easy to see that everywhere this best-response curve crosses the diagonal y = x defines a Nash equilibrium where all agents are using the same threshold strategy. As we have already observed, one such equilibrium occurs at 0. If there are only $M in the system, we can restrict to threshold strategies S_k with k < M + 1. Since no one can have more than $M, all strategies S_k for k > M are equivalent to S_M; these are just the strategies where the agent always volunteers in response to a request made by someone who can pay. Clearly br(δ, M) ≤ M for all δ, so the best-response function is at or below the diagonal at M. If k < M/n, every player will have at least k dollars, and so will be unwilling to work; the best response is then just 0. Consider k*, the smallest k such that k > M/n. It is not hard to show that for k* there exists a δ* such that for all δ > δ*, br(δ, k*) > k*. It follows by continuity that if δ > δ*, there must be some γ such that br(δ, γ) = γ. This is the desired Nash equilibrium. This argument also shows us that we cannot in general expect fixed points to be unique. If br(δ, k*) = k* and br(δ, k* + 1) > k* + 1, then our argument shows there must be a second fixed point. In general there may be multiple fixed points even when br(δ, k*) > k*, as illustrated in Figure 4 with n = 1000 and M = 3000.

Figure 4: The best-response function for n = 1000 and M = 3000.

Theorem 4.2 allows us to restrict our design to agents using threshold strategies with the confidence that there
will be a nontrivial equilibrium. However, it does not rule out the possibility that there may be other equilibria that do not involve threshold strategies. It is even possible (although it seems unlikely) that some of these equilibria might be better.

5. SOCIAL WELFARE AND SCALABILITY

Our theorems show that for each value of M and n, for sufficiently large δ, there is a nontrivial Nash equilibrium where all the agents use some threshold strategy S_{γ(M,n)}. From the point of view of the system designer, not all equilibria are equally good; we want an equilibrium where as few agents as possible have $0 when they get a chance to make a request (so that they can pay for the request) and relatively few agents have more than the threshold amount of money (so that there are always plenty of agents to fulfill the request). There is a tension between these objectives. It is not hard to show that as the fraction of agents with $0 increases in the maximum-entropy distribution, the fraction of agents with the maximum amount of money decreases. Thus, our goal is to understand what the optimal amount of money should be in the system, given the number of agents. That is, we want to know the amount of money M that maximizes efficiency, i.e., the total expected utility if all the agents use S_{γ(M,n)}. We first observe that the most efficient equilibrium depends only on the ratio of M to n, not on the actual values of M and n.

PROOF. Fix M/n = r. Theorem 3.1 shows that the maximum-entropy distribution depends only on k and the ratio M/n, not on M and n separately. Thus, given r, for each choice of k, there is a unique maximum-entropy distribution d_{k,r}. The best response br(δ, k) depends only on the distribution d_{k,r}, not on M or n.
Thus, the Nash equilibrium depends only on the ratio r. That is, for all choices of M and n such that n is sufficiently large (so that Theorem 3.1 applies) and M/n = r, the equilibrium strategies are the same. (If there are multiple equilibria, we take S_{γ(M,n)} to be the Nash equilibrium that has the highest efficiency for fixed M and n.) In light of Theorem 5.1, the system designer should ensure that there is enough money M in the system so that the ratio M/n is optimal. We are currently exploring exactly what the optimal ratio is. As our very preliminary results for β = 1 show in Figure 5, the ratio appears to be monotone increasing in δ, which matches the intuition that we should provide more patient agents with the opportunity to save more money. Additionally, it appears to be relatively smooth, which suggests that it may have a nice analytic solution.

Figure 5: Optimal average amount of money, to the nearest .25, for β = 1.

We remark that, in practice, it may be easier for the designer to vary the price of fulfilling a request rather than injecting money into the system. This produces the same effect. For example, changing the cost of fulfilling a request from $1 to $2 is equivalent to halving the amount of money that each agent has. Similarly, halving the cost of fulfilling a request is equivalent to doubling the amount of money that everyone has. With a fixed amount of money M, there is an optimal product nc of the number of agents and the cost c of fulfilling a request. Theorem 5.1 also tells us how to deal with a dynamic pool of agents. Our system can handle newcomers relatively easily: simply allow them to join with no money. This gives existing agents no incentive to leave and rejoin as newcomers. We then change the price of fulfilling a request so that the optimal ratio is maintained. This method has the nice feature that it can be implemented in a distributed fashion; if all nodes in the system have a good estimate of n
then they can all adjust prices automatically. (Alternatively, the number of agents in the system can be posted in a public place.) Approaches that rely on adjusting the amount of money may require expensive system-wide computations (see [26] for an example), and must be carefully tuned to avoid creating incentives for agents to manipulate the system by which this is done. Note that, in principle, the realization that the cost of fulfilling a request can change can affect an agent's strategy. For example, if an agent expects the cost to increase, then he may want to defer volunteering to fulfill a request. However, if the number of agents in the system is always increasing, then the cost always decreases, so there is never any advantage in waiting. There may be an advantage in delaying a request, but delaying a request is far more costly than delaying service, since we assume the need for a service is often subject to real waiting costs, while the need to supply the service is merely to augment a money supply. (Related issues are discussed in [10].) We ultimately hope to modify the mechanism so that the price of a job can be set endogenously within the system (as in real-world economies), with agents bidding for jobs rather than there being a fixed cost set externally. However, we have not yet explored the changes required to implement this change. Thus, for now, we assume that the cost is set as a function of the number of agents in the system (and that there is no possibility for agents to satisfy a request for less than the "official" cost or for requesters to offer to pay more than it).

6. SYBILS AND COLLUSION

In a naive sense, our system is essentially sybil-proof. No matter how many identities an agent creates, to get d dollars his sybils together still have to perform d units of work. Moreover, since newcomers enter the system with $0, there is no benefit to creating new agents simply to take advantage of an initial endowment. Nevertheless, there are some less direct ways
that an agent could take advantage of sybils.\nFirst, by having more identities he will have a greater probability of getting chosen to make a request.\nIt is easy to see that this will lead to the agent having higher total utility.\nHowever, this is just an artifact of our model.\nTo make our system simple to analyze, we have assumed that request opportunities came uniformly at random.\nIn practice, requests are made to satisfy a desire.\nOur model implicitly assumed that all agents are equally likely to have a desire at any particular time.\nHaving sybils should not increase the need to have a request satisfied.\nIndeed, it would be reasonable to assume that sybils do not make requests at all.\nSecond, having sybils makes it more likely that one of the sybils will be chosen to fulfill a request.\nThis can allow a user to increase his utility by setting a lower threshold; that is, to use a strategy Sk' where k' is smaller than the k used by the Nash equilibrium strategy.\nIntuitively, the need for money is not as critical if money is easier to obtain.\nUnlike the first concern, this seems like a real issue.\nIt seems reasonable to believe that when people make a decision between a number of nodes to satisfy a request they do so at random, at least to some extent.\nEven if they look for advertised node features to help make this decision, sybils would allow a user to advertise a wide range of features.\nThird, an agent can drive down the cost of fulfilling a request by introducing many sybils.\nSimilarly, he could increase the cost (and thus the value of his money) by making a number of sybils leave the system.\nConceivably he could alternate between these techniques to magnify the effects of work he does.\nWe have not yet calculated the exact effect of this change (it interacts with the other two effects of having sybils that we have already noted).\nGiven the number of sybils that would be needed to cause a real change in the perceived size of a large P2P network, 
the practicality of this attack depends heavily on how much sybils cost an attacker and what resources he has available.\nThe second point raised regarding sybils also applies to collusion if we allow money to be \"loaned\".\nIf k agents collude, they can agree that, if one runs out of money, another in the group will loan him money.\nBy pooling their money in this way, the k agents can again do better by setting a lower threshold.\nNote that the \"loan\" mechanism doesn't need to be built into the system; the agents can simply use a \"fake\" transaction to transfer the money.\nThese appear to be the main avenues for collusive attacks, but we are still exploring this issue.\n7.\nCONCLUSION\nWe have given a formal analysis of a scrip system and have shown the existence of a Nash equilibrium where all agents use a threshold strategy.\nMoreover, we can compute the efficiency of the equilibrium strategy and optimize the price (or money supply) to maximize efficiency.\nThus, our analysis provides formal mechanisms for solving some important problems in implementing scrip systems.\nIt tells us that with a fixed population of rational users, such systems are very unlikely to become unstable.\nThus if this stability is common belief among the agents we would not expect inflation, bubbles, or crashes because of agent speculation.\nHowever, we cannot rule out the possibility that agents may have other beliefs that will cause them to speculate.\nOur analysis also tells us how to scale the system to handle an influx of new users without introducing these problems: scale the money supply to keep the average amount of money constant (or equivalently adjust prices to achieve the same goal).\nThere are a number of theoretical issues that are still open, including a characterization of the multiplicity of equilibria--are there usually 2?\nIn addition, we expect that one should be able to compute analytic estimates for the best response function and optimal pricing which would 
allow us to understand the relationship between pricing and various parameters in the model.\nIt would also be of great interest to extend our analysis to handle more realistic settings.\nWe mention a few possible extensions here:\n• We have assumed that the world is homogeneous in a number of ways, including request frequency, utility, and ability to satisfy requests.\nIt would be interesting to examine how relaxing any of these assumptions would alter our results.\n• We have assumed that there is no cost to an agent to be a member of the system.\nSuppose instead that we imposed a small cost simply for being present in the system to reflect the costs of routing messages and overlay maintenance.\nThis modification could have a significant impact on sybil attacks.\n• We have described a scrip system that works when there are no altruists and have shown that no system can work once there are sufficiently many altruists.\nWhat happens between these extremes?\n• One type of \"irrational\" behavior encountered with scrip systems is hoarding.\nThere are some similarities between hoarding and altruistic behavior.\nWhile an altruist provides service for everyone, a hoarder will volunteer for all jobs (in order to get more money) and rarely request service (so as not to spend money).\nIt would be interesting to investigate the extent to which our system is robust against hoarders.\nClearly with too many hoarders, there may not be enough money remaining among the non-hoarders to guarantee that, typically, a non-hoarder would have enough money to satisfy a request.\n• Finally, in P2P filesharing systems, there are overlapping communities of various sizes that are significantly more likely to be able to satisfy each other's requests.\nIt would be interesting to investigate the effect of such communities on the equilibrium of our system.\nThere are also a number of implementation issues that would have to be resolved in a real system.\nFor example, we need to worry about 
the possibility of agents counterfeiting money or lying about whether service was actually provided.\nKarma [26] provides techniques for dealing with both of these issues and a number of others, but some of Karma's implementation decisions point to problems for our model.\nFor example, it is prohibitively expensive to ensure that bank account balances can never go negative, a fact that our model does not capture.\nAnother example is that Karma has nodes serve as bookkeepers for other nodes' account balances.\nLike maintaining a presence in the network, this imposes a cost on the node, but unlike that responsibility, it can be easily shirked.\nKarma suggests several ways to incentivize nodes to perform these duties.\nWe have not investigated whether these mechanisms can be incorporated without disturbing our equilibrium.","keyphrases":["scrip system","p2p network","nash equilibrium","social welfar","agent","onlin system","gnutellum network","reput system","bittorr","emul","game","maximum entropi","threshold strategi","game theori"],"prmu":["P","P","P","P","P","P","M","M","U","U","U","U","U","U"]} {"id":"I-58","title":"An Efficient Heuristic Approach for Security Against Multiple Adversaries","abstract":"In adversarial multiagent domains, security, commonly defined as the ability to deal with intentional threats from other agents, is a critical issue. This paper focuses on domains where these threats come from unknown adversaries. These domains can be modeled as Bayesian games; much work has been done on finding equilibria for such games. However, it is often the case in multiagent security domains that one agent can commit to a mixed strategy which its adversaries observe before choosing their own strategies. In this case, the agent can maximize reward by finding an optimal strategy, without requiring equilibrium. Previous work has shown this problem of optimal strategy selection to be NP-hard. 
Therefore, we present a heuristic called ASAP, with three key advantages to address the problem. First, ASAP searches for the highest-reward strategy, rather than a Bayes-Nash equilibrium, allowing it to find feasible strategies that exploit the natural first-mover advantage of the game. Second, it provides strategies which are simple to understand, represent, and implement. Third, it operates directly on the compact, Bayesian game representation, without requiring conversion to normal form. We provide an efficient Mixed Integer Linear Program (MILP) implementation for ASAP, along with experimental results illustrating significant speedups and higher rewards over other approaches.","lvl-1":"An Efficient Heuristic Approach for Security Against Multiple Adversaries Praveen Paruchuri, Jonathan P. Pearce, Milind Tambe, Fernando Ordonez University of Southern California Los Angeles, CA 90089 {paruchur, jppearce, tambe, fordon}@usc.edu Sarit Kraus Bar-Ilan University Ramat-Gan 52900, Israel sarit@cs.biu.ac.il ABSTRACT In adversarial multiagent domains, security, commonly defined as the ability to deal with intentional threats from other agents, is a critical issue.\nThis paper focuses on domains where these threats come from unknown adversaries.\nThese domains can be modeled as Bayesian games; much work has been done on finding equilibria for such games.\nHowever, it is often the case in multiagent security domains that one agent can commit to a mixed strategy which its adversaries observe before choosing their own strategies.\nIn this case, the agent can maximize reward by finding an optimal strategy, without requiring equilibrium.\nPrevious work has shown this problem of optimal strategy selection to be NP-hard.\nTherefore, we present a heuristic called ASAP, with three key advantages to address the problem.\nFirst, ASAP searches for the highest-reward strategy, rather than a Bayes-Nash equilibrium, allowing it to find feasible strategies that exploit the natural 
first-mover advantage of the game.\nSecond, it provides strategies which are simple to understand, represent, and implement.\nThird, it operates directly on the compact, Bayesian game representation, without requiring conversion to normal form.\nWe provide an efficient Mixed Integer Linear Program (MILP) implementation for ASAP, along with experimental results illustrating significant speedups and higher rewards over other approaches.\nCategories and Subject Descriptors\nI.2.11 [Computing Methodologies]: Artificial Intelligence: Distributed Artificial Intelligence - Intelligent Agents\nGeneral Terms\nSecurity, Design, Theory\n1.\nINTRODUCTION\nIn many multiagent domains, agents must act in order to provide security against attacks by adversaries.\nA common issue that agents face in such security domains is uncertainty about the adversaries they may be facing.\nFor example, a security robot may need to make a choice about which areas to patrol, and how often [16].\nHowever, it will not know in advance exactly where a robber will choose to strike.\nA team of unmanned aerial vehicles (UAVs) [1] monitoring a region undergoing a humanitarian crisis may also need to choose a patrolling policy.\nThey must make this decision without knowing in advance whether terrorists or other adversaries may be waiting to disrupt the mission at a given location.\nIt may indeed be possible to model the motivations of types of adversaries the agent or agent team is likely to face in order to target these adversaries more closely.\nHowever, in both cases, the security robot or UAV team will not know exactly which kinds of adversaries may be active on any given day.\nA common approach for choosing a policy for agents in such scenarios is to model the scenarios as Bayesian games.\nA Bayesian game is a game in which agents may belong to one or more types; the type of an agent determines its possible actions and payoffs.\nThe distribution of adversary types that an agent will face may be known or 
inferred from historical data.\nUsually, these games are analyzed according to the solution concept of a Bayes-Nash equilibrium, an extension of the Nash equilibrium for Bayesian games.\nHowever, in many settings, a Nash or Bayes-Nash equilibrium is not an appropriate solution concept, since it assumes that the agents' strategies are chosen simultaneously [5].\nIn some settings, one player can (or must) commit to a strategy before the other players choose their strategies.\nThese scenarios are known as Stackelberg games [6].\nIn a Stackelberg game, a leader commits to a strategy first, and then a follower (or group of followers) selfishly optimize their own rewards, considering the action chosen by the leader.\nFor example, the security agent (leader) must first commit to a strategy for patrolling various areas.\nThis strategy could be a mixed strategy in order to be unpredictable to the robbers (followers).\nThe robbers, after observing the pattern of patrols over time, can then choose their strategy (which location to rob).\nOften, the leader in a Stackelberg game can attain a higher reward than if the strategies were chosen simultaneously.\nTo see the advantage of being the leader in a Stackelberg game, consider a simple game with the payoff table as shown in Table 1.\nThe leader is the row player and the follower is the column player.\nHere, the leader's payoff is listed first.\n      1     2     3\n1 | 5,5 | 0,0 | 3,10\n2 | 0,0 | 2,2 | 5,0\nTable 1: Payoff table for example normal form game.\nThe only Nash equilibrium for this game is when the leader plays 2 and the follower plays 2, which gives the leader a payoff of 2.\n978-81-904262-7-5 (RPS) © 2007 IFAAMAS\nHowever, if the leader commits to a uniform mixed strategy of playing 1 and 2 with equal (0.5) probability, the follower's best response is to play 3 to get an expected payoff of 5 (10 and 0 with equal probability).\nThe leader's payoff would then be 4 (3 and 5 with equal probability).\nIn this case, the leader now has an 
incentive to deviate and choose a pure strategy of 2 (to get a payoff of 5).\nHowever, this would cause the follower to deviate to strategy 2 as well, resulting in the Nash equilibrium.\nThus, by committing to a strategy that is observed by the follower, and by avoiding the temptation to deviate, the leader manages to obtain a reward higher than that of the best Nash equilibrium.\nThe problem of choosing an optimal strategy for the leader to commit to in a Stackelberg game is analyzed in [5] and found to be NP-hard in the case of a Bayesian game with multiple types of followers.\nThus, efficient heuristic techniques for choosing high-reward strategies in these games are an important open issue.\nMethods for finding optimal leader strategies for non-Bayesian games [5] can be applied to this problem by converting the Bayesian game into a normal-form game by the Harsanyi transformation [8].\nIf, on the other hand, we wish to compute the highest-reward Nash equilibrium, new methods using mixed-integer linear programs (MILPs) [17] may be used, since the highest-reward Bayes-Nash equilibrium is equivalent to the corresponding Nash equilibrium in the transformed game.\nHowever, by transforming the game, the compact structure of the Bayesian game is lost.\nIn addition, since the Nash equilibrium assumes a simultaneous choice of strategies, the advantages of being the leader are not considered.\nThis paper introduces an efficient heuristic method for approximating the optimal leader strategy for security domains, known as ASAP (Agent Security via Approximate Policies).\nThis method has three key advantages.\nFirst, it directly searches for an optimal strategy, rather than a Nash (or Bayes-Nash) equilibrium, thus allowing it to find high-reward non-equilibrium strategies like the one in the above example.\nSecond, it generates policies with a support which can be expressed as a uniform distribution over a multiset of fixed size as proposed in [12].\nThis allows for policies 
that are simple to understand and represent [12], as well as a tunable parameter (the size of the multiset) that controls the simplicity of the policy.\nThird, the method allows for a Bayes-Nash game to be expressed compactly without conversion to a normal-form game, allowing for large speedups over existing Nash methods such as [17] and [11].\nThe rest of the paper is organized as follows.\nIn Section 2 we fully describe the patrolling domain and its properties.\nSection 3 introduces the Bayesian game, the Harsanyi transformation, and existing methods for finding an optimal leader's strategy in a Stackelberg game.\nThen, in Section 4 the ASAP algorithm is presented for normal-form games, and in Section 5 we show how it can be adapted to the structure of Bayesian games with uncertain adversaries.\nExperimental results showing higher reward and faster policy computation over existing Nash methods are shown in Section 6, and we conclude with a discussion of related work in Section 7.\n2.\nTHE PATROLLING DOMAIN\nIn most security patrolling domains, the security agents (like UAVs [1] or security robots [16]) cannot feasibly patrol all areas all the time.\nInstead, they must choose a policy by which they patrol various routes at different times, taking into account factors such as the likelihood of crime in different areas, possible targets for crime, and the security agents' own resources (number of security agents, amount of available time, fuel, etc.).\nIt is usually beneficial for this policy to be nondeterministic so that robbers cannot safely rob certain locations, knowing that they will be safe from the security agents [14].\nTo demonstrate the utility of our algorithm, we use a simplified version of such a domain, expressed as a game.\nThe most basic version of our game consists of two players: the security agent (the leader) and the robber (the follower) in a world consisting of m houses, 1 ... 
m.\nThe security agent's set of pure strategies consists of possible routes of d houses to patrol (in an order).\nThe security agent can choose a mixed strategy so that the robber will be unsure of exactly where the security agent may patrol, but the robber will know the mixed strategy the security agent has chosen.\nFor example, the robber can observe over time how often the security agent patrols each area.\nWith this knowledge, the robber must choose a single house to rob.\nWe assume that the robber generally takes a long time to rob a house.\nIf the house chosen by the robber is not on the security agent's route, then the robber successfully robs it.\nOtherwise, if it is on the security agent's route, then the earlier the house is on the route, the easier it is for the security agent to catch the robber before he finishes robbing it.\nWe model the payoffs for this game with the following variables:\n• vl,x: value of the goods in house l to the security agent.\n• vl,q: value of the goods in house l to the robber.\n• cx: reward to the security agent of catching the robber.\n• cq: cost to the robber of getting caught.\n• pl: probability that the security agent can catch the robber at the lth house in the patrol (pl > pl' ⟺ l < l').\nThe security agent's set of possible pure strategies (patrol routes) is denoted by X and includes all d-tuples i = ⟨w1, w2, ..., wd⟩ with w1, ..., wd ∈ {1, ..., m} where no two elements are equal (the agent is not allowed to return to the same house).\nThe robber's set of possible pure strategies (houses to rob) is denoted by Q and includes all integers j = 1 ... m.\nThe payoffs (security agent, robber) for pure strategies i, j are:\n• −vl,x, vl,q, for j = l ∉ i.\n• pl cx + (1 − pl)(−vl,x), −pl cq + (1 − pl)(vl,q), for j = l ∈ i. 
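These payoff rules are easy to express directly in code. The sketch below is ours, not the authors': the function name and argument layout are invented for illustration, and we read pl as indexed by the position of the house within the chosen patrol route, which is consistent with the worked example in Section 3.1 (route {1,2} against robber type a robbing house 1 yields 0.5 for the agent and -1 for the robber).

```python
# Illustrative sketch (not the authors' code) of the Section 2 payoff rules.
# Assumption: p[t] is the probability of catching the robber at the t-th
# position (0-indexed) of the patrol route.

def payoffs(route, j, v_x, v_q, c_x, c_q, p):
    """Return (agent_payoff, robber_payoff) for patrol `route` and robbed house `j`.

    route: tuple of distinct houses, in patrol order
    v_x[l], v_q[l]: value of house l's goods to the agent / the robber
    c_x, c_q: agent's reward / robber's cost for a capture
    p[t]: capture probability at position t of the patrol
    """
    if j not in route:
        # Robbery succeeds unchallenged: agent loses the goods' value.
        return (-v_x[j], v_q[j])
    t = route.index(j)                 # position of house j on the route
    pl = p[t]
    agent = pl * c_x + (1 - pl) * (-v_x[j])
    robber = -pl * c_q + (1 - pl) * v_q[j]
    return (agent, robber)

# Values used for robber type a in the paper's example:
v = {1: 0.75, 2: 0.25}
agent, robber = payoffs((1, 2), 1, v, v, 0.5, 1.0, [1.0, 0.5])
print(agent, robber)   # route {1,2}, rob house 1: agent 0.5, robber -1.0
```

With p1 = 1 the robber at house 1 on route {1,2} is caught for certain, reproducing the worked entry in the text.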
With this structure it is possible to model many different types of robbers who have differing motivations; for example, one robber may have a lower cost of getting caught than another, or may value the goods in the various houses differently.\nIf the distribution of different robber types is known or inferred from historical data, then the game can be modeled as a Bayesian game [6].\n3.\nBAYESIAN GAMES\nA Bayesian game contains a set of N agents, and each agent n must be one of a given set of types θn.\nFor our patrolling domain, we have two agents, the security agent and the robber.\nθ1 is the set of security agent types and θ2 is the set of robber types.\nSince there is only one type of security agent, θ1 contains only one element.\nDuring the game, the robber knows its type but the security agent does not know the robber's type.\nFor each agent (the security agent or the robber) n, there is a set of strategies σn and a utility function un : θ1 × θ2 × σ1 × σ2 → ℝ.\nA Bayesian game can be transformed into a normal-form game using the Harsanyi transformation [8].\nOnce this is done, new, linear-program (LP)-based methods for finding high-reward strategies for normal-form games [5] can be used to find a strategy in the transformed game; this strategy can then be used for the Bayesian game.\nWhile methods exist for finding Bayes-Nash equilibria directly, without the Harsanyi transformation [10], they find only a single equilibrium in the general case, which may not be of high reward.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\nRecent work [17] has led to efficient mixed-integer linear program techniques to find the best Nash equilibrium for a given agent.\nHowever, these techniques do require a normal-form game, and so to compare the policies given by ASAP against the optimal policy, as well as against the highest-reward Nash equilibrium, we must apply 
these techniques to the Harsanyi-transformed matrix.\nThe next two subsections elaborate on how this is done.\n3.1 Harsanyi Transformation\nThe first step in solving Bayesian games is to apply the Harsanyi transformation [8] that converts the Bayesian game into a normal form game.\nGiven that the Harsanyi transformation is a standard concept in game theory, we explain it briefly through a simple example in our patrolling domain without introducing the mathematical formulations.\nLet us assume there are two robber types a and b in the Bayesian game.\nRobber a will be active with probability α, and robber b will be active with probability 1 − α.\nThe rules described in Section 2 allow us to construct simple payoff tables.\nAssume that there are two houses in the world (1 and 2) and hence there are two patrol routes (pure strategies) for the agent: {1,2} and {2,1}.\nThe robber can rob either house 1 or house 2 and hence he has two strategies (denoted as 1l, 2l for robber type l).\nSince there are two types assumed (denoted as a and b), we construct two payoff tables (shown in Table 2) corresponding to the security agent playing a separate game with each of the two robber types with probabilities α and 1 − α.\nFirst, consider robber type a. Borrowing the notation from the domain section, we assign the following values to the variables: v1,x = v1,q = 3/4, v2,x = v2,q = 1/4, cx = 1/2, cq = 1, p1 = 1, p2 = 1/2.\nUsing these values we construct a base payoff table as the payoff for the game against robber type a. 
For example, if the security agent chooses route {1,2} when robber a is active, and robber a chooses house 1, the robber receives a reward of -1 (for being caught) and the agent receives a reward of 0.5 for catching the robber.\nThe payoffs for the game against robber type b are constructed using different values.\nSecurity agent:     {1,2}          {2,1}\nRobber a: 1a |  -1, .5       | -.375, .125\n          2a |  -.125, -.125 | -1, .5\nRobber b: 1b |  -.9, .6      | -.275, .225\n          2b |  -.025, -.025 | -.9, .6\nTable 2: Payoff tables: Security Agent vs Robbers a and b\nUsing the Harsanyi technique involves introducing a chance node that determines the robber's type, thus transforming the security agent's incomplete information regarding the robber into imperfect information [3].\nThe Bayesian equilibrium of the game is then precisely the Nash equilibrium of the imperfect information game.\nThe transformed, normal-form game is shown in Table 3.\nIn the transformed game, the security agent is the column player, and the set of all robber types together is the row player.\nSuppose that robber type a robs house 1 and robber type b robs house 2, while the security agent chooses patrol {1,2}.\nThen, the security agent and the robber receive an expected payoff corresponding to their payoffs from the agent encountering robber a at house 1 with probability α and robber b at house 2 with probability 1 − α.\n3.2 Finding an Optimal Strategy\nAlthough a Nash equilibrium is the standard solution concept for games in which agents choose strategies simultaneously, in our security domain, the security agent (the leader) can gain an advantage by committing to a mixed strategy in advance.\nSince the followers (the robbers) will know the leader's strategy, the optimal response for the followers will be a pure strategy.\nGiven the common assumption, taken in [5], that when followers are indifferent, they will choose the strategy that benefits the leader, there must exist a guaranteed optimal strategy for the leader 
[5].\nFrom the Bayesian game in Table 2, we constructed the Harsanyi transformed bimatrix in Table 3.\nThe strategies for each player (security agent or robber) in the transformed game correspond to all combinations of possible strategies taken by each of that player's types.\nTherefore, we denote X = σ1^θ1 = σ1 and Q = σ2^θ2 as the index sets of the security agent and robbers' pure strategies respectively, with R and C as the corresponding payoff matrices.\nRij is the reward of the security agent and Cij is the reward of the robbers when the security agent takes pure strategy i and the robbers take pure strategy j.\nA mixed strategy for the security agent is a probability distribution over its set of pure strategies and will be represented by a vector x = (px1, px2, ..., px|X|), where pxi ≥ 0 and Σ pxi = 1.\nHere, pxi is the probability that the security agent will choose its ith pure strategy.\nThe optimal mixed strategy for the security agent can be found in time polynomial in the number of rows in the normal form game using the following linear program formulation from [5].\nFor every possible pure strategy j by the follower (the set of all robber types):\nmax Σi∈X pxi Rij\ns.t. 
∀j' ∈ Q, Σi∈σ1 pxi Cij ≥ Σi∈σ1 pxi Cij'\nΣi∈X pxi = 1\n∀i ∈ X, pxi ≥ 0 (1)\nThen, for all feasible follower strategies j, choose the one that maximizes Σi∈X pxi Rij, the reward for the security agent (leader).\nThe pxi variables give the optimal strategy for the security agent.\nNote that while this method is polynomial in the number of rows in the transformed, normal-form game, the number of rows increases exponentially with the number of robber types.\nUsing this method for a Bayesian game thus requires running |σ2|^|θ2| separate linear programs.\nThis is no surprise, since finding the leader's optimal strategy in a Bayesian Stackelberg game is NP-hard [5].\n4.\nHEURISTIC APPROACHES\nGiven that finding the optimal strategy for the leader is NP-hard, we provide a heuristic approach.\nIn this heuristic we limit the possible mixed strategies of the leader to select actions with probabilities that are integer multiples of 1/k for a predetermined integer k. 
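For intuition, the multiple-LP selection step above can be mimicked on the small game of Table 1. The sketch below is our own stand-in, not the paper's implementation: since the leader there has only two pure strategies, each follower-specific program reduces to a one-dimensional search over p (the probability of playing row 1), which we approximate with a grid rather than an LP solver; the function and variable names are invented for illustration.

```python
# Grid-search stand-in (our sketch, not an LP solver) for the multiple-LP
# method of [5], applied to the Table 1 game.
R = [[5, 0, 3], [0, 2, 5]]   # leader (row player) payoffs
C = [[5, 0, 10], [0, 2, 0]]  # follower (column player) payoffs

def solve_by_follower_strategy(R, C, steps=100, eps=1e-9):
    """For each follower pure strategy j, maximize the leader's reward over
    leader mixed strategies for which j is a best response; return the best."""
    best = (float('-inf'), None, None)   # (leader value, follower strategy, p)
    for j in range(3):
        for s in range(steps + 1):
            p = s / steps
            x = [p, 1 - p]
            follower = [sum(x[i] * C[i][jj] for i in range(2)) for jj in range(3)]
            # feasible only where j is a best response for the follower
            if follower[j] < max(follower) - eps:
                continue
            leader = sum(x[i] * R[i][j] for i in range(2))
            if leader > best[0]:
                best = (leader, j, p)
    return best

value, j, p = solve_by_follower_strategy(R, C)
print(value, j + 1, p)
```

On this game the search selects follower strategy 3 with a leader value of about 4.66 (near p = 1/6), exceeding both the Nash payoff of 2 and the payoff of 4 from the uniform commitment discussed in the introduction.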
Previous work [14] has shown that strategies with high entropy are beneficial for security applications when opponents' utilities are completely unknown.\nIn our domain, if utilities are not considered, this method will result in uniform-distribution strategies.\nOne advantage of such strategies is that they are compact to represent (as fractions) and simple to understand; therefore they can be efficiently implemented by real organizations.\nWe aim to maintain the advantage provided by simple strategies for our security application problem, incorporating the effect of the robbers' rewards on the security agent's rewards.\nThus, the ASAP heuristic will produce strategies which are k-uniform.\nA mixed strategy is denoted k-uniform if it is a uniform distribution on a multiset S of pure strategies with |S| = k.\nA multiset is a set whose elements may be repeated multiple times; thus, for example, the mixed strategy corresponding to the multiset {1, 1, 2} would take strategy 1 with probability 2/3 and strategy 2 with probability 1/3.\n             {1,2}                                  {2,1}\n{1a, 1b}: −1α − .9(1−α), .5α + .6(1−α) | −.375α − .275(1−α), .125α + .225(1−α)\n{1a, 2b}: −1α − .025(1−α), .5α − .025(1−α) | −.375α − .9(1−α), .125α + .6(1−α)\n{2a, 1b}: −.125α − .9(1−α), −.125α + .6(1−α) | −1α − .275(1−α), .5α + .225(1−α)\n{2a, 2b}: −.125α − .025(1−α), −.125α − .025(1−α) | −1α − .9(1−α), .5α + .6(1−α)\nTable 3: Harsanyi Transformed Payoff Table\nASAP allows the size of the multiset to be chosen in order to balance the complexity of the strategy reached with the goal 
that the identified strategy will yield a high reward.\nAnother advantage of the ASAP heuristic is that it operates directly on the compact Bayesian representation, without requiring the Harsanyi transformation.\nThis is because the different follower (robber) types are independent of each other.\nHence, evaluating the leader strategy against a Harsanyi-transformed game matrix is equivalent to evaluating against each of the game matrices for the individual follower types.\nThis independence property is exploited in ASAP to yield a decomposition scheme.\nNote that the LP method introduced by [5] to compute optimal Stackelberg policies is unlikely to be decomposable into a small number of games as it was shown to be NP-hard for Bayes-Nash problems.\nFinally, note that ASAP requires the solution of only one optimization problem, rather than solving a series of problems as in the LP method of [5].\nFor a single follower type, the algorithm works the following way.\nGiven a particular k, for each possible mixed strategy x for the leader that corresponds to a multiset of size k, evaluate the leader's payoff from x when the follower plays a reward-maximizing pure strategy.\nWe then take the mixed strategy with the highest payoff.\nWe need only to consider the reward-maximizing pure strategies of the followers (robbers), since for a given fixed strategy x of the security agent, each robber type faces a problem with fixed linear rewards.\nIf a mixed strategy is optimal for the robber, then so are all the pure strategies in the support of that mixed strategy.\nNote also that because we limit the leader's strategies to take on discrete values, the assumption from Section 3.2 that the followers will break ties in the leader's favor is not significant, since ties will be unlikely to arise.\nThis is because, in domains where rewards are drawn from any random distribution, the probability of a follower having more than one pure optimal response to a given leader strategy 
approaches zero, and the leader will have only a finite number of possible mixed strategies.\nOur approach to characterize the optimal strategy for the security agent makes use of properties of linear programming.\nWe briefly outline these results here for completeness; for detailed discussion and proofs see one of many references on the topic, such as [2].\nEvery linear programming problem, such as: max c^T x | Ax = b, x ≥ 0, has an associated dual linear program, in this case: min b^T y | A^T y ≥ c.\nThese primal/dual pairs of problems satisfy weak duality: for any x and y primal and dual feasible solutions respectively, c^T x ≤ b^T y. Thus a pair of feasible solutions is optimal if c^T x = b^T y, and the problems are said to satisfy strong duality.\nIn fact, if a linear program is feasible and has a bounded optimal solution, then the dual is also feasible and there is a pair x*, y* that satisfies c^T x* = b^T y*.\nThese optimal solutions are characterized with the following optimality conditions (as defined in [2]):\n• primal feasibility: Ax = b, x ≥ 0\n• dual feasibility: A^T y ≥ c\n• complementary slackness: xi(A^T y − c)i = 0 for all i. 
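As a concrete numerical instance of these three conditions (our own toy example, not from the paper), take the one-variable program max 3x subject to x = 2, x ≥ 0; its dual is min 2y subject to y ≥ 3, and the pair x* = 2, y* = 3 can be checked mechanically:

```python
# Toy primal/dual pair (our illustrative example): max c^T x s.t. Ax = b, x >= 0
# with A = [[1]], b = [2], c = [3]; the dual is min b^T y s.t. A^T y >= c.
A, b, c = [[1.0]], [2.0], [3.0]
x_star, y_star = [2.0], [3.0]

# primal feasibility: Ax = b and x >= 0
assert [sum(A[r][i] * x_star[i] for i in range(1)) for r in range(1)] == b
assert all(xi >= 0 for xi in x_star)

# dual feasibility: A^T y >= c
assert all(sum(A[r][i] * y_star[r] for r in range(1)) >= c[i] for i in range(1))

# complementary slackness: x_i (A^T y - c)_i = 0 for all i
assert all(x_star[i] * (sum(A[r][i] * y_star[r] for r in range(1)) - c[i]) == 0
           for i in range(1))

# strong duality follows: c^T x* = 6 = b^T y*
assert sum(c[i] * x_star[i] for i in range(1)) == sum(b[r] * y_star[r] for r in range(1))
print("all optimality conditions hold")
```

Each assertion mirrors one bullet above, and the final check illustrates the strong-duality identity used in the next paragraph.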
Note that this last condition implies that c^T x = x^T A^T y = b^T y, which proves optimality for primal-dual feasible solutions x and y.
In the following subsections, we first define the problem in its most intuitive form as a mixed-integer quadratic program (MIQP), and then show how this problem can be converted into a mixed-integer linear program (MILP).
4.1 Mixed-Integer Quadratic Program
We begin with the case of a single type of follower. Let the leader be the row player and the follower the column player. We denote by x the vector of strategies of the leader and by q the vector of strategies of the follower. We also denote by X and Q the index sets of the leader's and follower's pure strategies, respectively. The payoff matrices R and C are defined so that R_ij is the reward of the leader and C_ij is the reward of the follower when the leader takes pure strategy i and the follower takes pure strategy j. Let k be the size of the multiset.
We first fix the policy of the leader to some k-uniform policy x. The value x_i is the number of times pure strategy i is used in the k-uniform policy, so that pure strategy i is selected with probability x_i/k. We formulate the optimization problem the follower solves to find its optimal response to x as the following linear program:

max_q Σ_{j∈Q} Σ_{i∈X} (1/k) C_ij x_i q_j
s.t. Σ_{j∈Q} q_j = 1
     q ≥ 0.    (2)

The objective function maximizes the follower's expected reward given x, while the constraints make feasible any mixed strategy q for the follower. The dual to this linear programming problem is the following:

min a
s.t. a ≥ Σ_{i∈X} (1/k) C_ij x_i, for all j ∈ Q.
(3)

From strong duality and complementary slackness we obtain that the follower's maximum reward value a is the value of every pure strategy with q_j > 0, that is, of every pure strategy in the support of the optimal mixed strategy. Therefore each of these pure strategies is optimal. Optimal solutions to the follower's problem are characterized by the linear programming optimality conditions: the primal feasibility constraints in (2), the dual feasibility constraints in (3), and complementary slackness

q_j (a − Σ_{i∈X} (1/k) C_ij x_i) = 0, for all j ∈ Q.

These conditions must be included in the problem solved by the leader in order to consider only best responses by the follower to the k-uniform policy x.
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
The leader seeks the k-uniform solution x that maximizes its own payoff, given that the follower uses an optimal response q(x). Therefore the leader solves the following integer problem:

max_x Σ_{i∈X} Σ_{j∈Q} (1/k) R_ij q(x)_j x_i
s.t. Σ_{i∈X} x_i = k
     x_i ∈ {0, 1, ..., k}.    (4)

Problem (4) maximizes the leader's reward with the follower's best response (q_j for a fixed leader policy x, hence denoted q(x)_j) by selecting a uniform policy from a multiset of constant size k. We complete this problem by including the characterization of q(x) through the linear programming optimality conditions. To simplify writing the complementary slackness conditions, we constrain q(x) to optimal pure strategies only, by considering just the integer solutions of q(x). The leader's problem becomes:

max_{x,q} Σ_{i∈X} Σ_{j∈Q} (1/k) R_ij x_i q_j
s.t. Σ_{i∈X} x_i = k
     Σ_{j∈Q} q_j = 1
     0 ≤ a − Σ_{i∈X} (1/k) C_ij x_i ≤ (1 − q_j) M, for all j ∈ Q
     x_i ∈ {0, 1, ..., k}
     q_j ∈ {0, 1}.    (5)

Here, the constant M is some large number. The first and fourth constraints enforce a k-uniform policy for the leader, and the second and fifth constraints enforce a feasible pure strategy for the follower. The third constraint enforces dual feasibility of the follower's problem (leftmost inequality) and the complementary slackness constraint for an optimal pure strategy q for the follower (rightmost inequality). In fact, since only one pure strategy can be selected by the follower, say q_h = 1, this last constraint enforces that a = Σ_{i∈X} (1/k) C_ih x_i, imposing no additional constraint on all other pure strategies, which have q_j = 0.
We conclude this subsection by noting that Problem (5) is an integer program with a non-convex quadratic objective in general, since the matrix R need not be positive semi-definite. Efficient solution of non-linear, non-convex integer problems remains a challenging research question. In the next section we show a reformulation of this problem as a mixed-integer linear program, for which a number of efficient commercial solvers exist.
4.2 Mixed-Integer Linear Program
We can linearize the quadratic program of Problem (5) through the change of variables z_ij = x_i q_j, obtaining the following problem:

max_{q,z} Σ_{i∈X} Σ_{j∈Q} (1/k) R_ij z_ij
s.t. Σ_{i∈X} Σ_{j∈Q} z_ij = k
     Σ_{j∈Q} z_ij ≤ k, for all i ∈ X
     k q_j ≤ Σ_{i∈X} z_ij ≤ k, for all j ∈ Q
     Σ_{j∈Q} q_j = 1
     0 ≤ a − Σ_{i∈X} (1/k) C_ij (Σ_{h∈Q} z_ih) ≤ (1 − q_j) M, for all j ∈ Q
     z_ij ∈ {0, 1, ..., k}
     q_j ∈ {0, 1}    (6)

PROPOSITION 1. Problems (5) and (6) are equivalent.
Proof: Consider x, q a feasible solution of (5). We will show that q, together with z_ij = x_i q_j, is a feasible solution of (6) with the same objective function value. The equality of the objective functions, and constraints 4, 6, and 7 of (6), are satisfied by construction. The fact that Σ_{j∈Q} z_ij = x_i, since Σ_{j∈Q} q_j = 1, explains constraints 1, 2, and 5 of (6). Constraint 3 of (6) is satisfied because Σ_{i∈X} z_ij = k q_j.
Let us now consider q, z feasible for (6). We will show that q and x_i = Σ_{j∈Q} z_ij are feasible for (5) with the same objective value. In fact, all constraints of (5) are readily satisfied by construction. To see that the objectives match, notice that if q_h = 1 then the third constraint in (6) implies that Σ_{i∈X} z_ih = k, which means that z_ij = 0 for all i ∈ X and all j ≠ h.
Therefore x_i q_j = (Σ_{l∈Q} z_il) q_j = z_ih q_j = z_ij, where the last equality holds because both sides are 0 when j ≠ h. This shows that the transformation preserves the objective function value, completing the proof.
Given this transformation to a mixed-integer linear program (MILP), we now show how to apply our decomposition technique to the MILP to obtain significant speedups for Bayesian games with multiple follower types.
5. DECOMPOSITION FOR MULTIPLE ADVERSARIES
The MILP developed in the previous section handles only one follower. Since our security scenario contains multiple follower (robber) types, we change the follower's response function from a pure strategy into a weighted combination over pure follower strategies, where the weights are the probabilities of occurrence of each follower type.
5.1 Decomposed MIQP
To admit multiple adversaries in our framework, we modify the notation of the previous section to reason about multiple follower types. We denote by x the vector of strategies of the leader and by q^l the vector of strategies of follower l, with L denoting the index set of follower types. We also denote by X and Q the index sets of the leader's and follower l's pure strategies, respectively. We also index the payoff matrices on each follower l, considering the matrices R^l and C^l. Using this modified notation, we characterize the optimal solution of follower l's problem, given the leader's k-uniform policy x, with the following optimality conditions:

Σ_{j∈Q} q^l_j = 1
a^l − Σ_{i∈X} (1/k) C^l_ij x_i ≥ 0, for all j ∈ Q
q^l_j (a^l − Σ_{i∈X} (1/k) C^l_ij x_i) = 0, for all j ∈ Q
q^l_j ≥ 0, for all j ∈ Q

Again, considering only optimal pure strategies for follower l's problem, we can linearize the complementarity constraint above. We incorporate these constraints into the leader's problem of selecting the optimal k-uniform policy. Therefore, given a priori probabilities p^l, with l ∈ L, of facing each follower, the leader solves the following problem:
max_{x,q} Σ_{i∈X} Σ_{l∈L} Σ_{j∈Q} (p^l/k) R^l_ij x_i q^l_j
s.t. Σ_{i∈X} x_i = k
     Σ_{j∈Q} q^l_j = 1, for all l ∈ L
     0 ≤ a^l − Σ_{i∈X} (1/k) C^l_ij x_i ≤ (1 − q^l_j) M, for all l ∈ L, j ∈ Q
     x_i ∈ {0, 1, ..., k}
     q^l_j ∈ {0, 1}.    (7)

Problem (7) for a Bayesian game with multiple follower types is indeed equivalent to Problem (5) on the payoff matrices obtained from the Harsanyi transformation of the game. In fact, every pure strategy j in Problem (5) corresponds to a sequence of pure strategies j_l, one for each follower l ∈ L. This means that q_j = 1 if and only if q^l_{j_l} = 1 for all l ∈ L. In addition, given the a priori probabilities p^l of facing player l, the reward in the Harsanyi transformation payoff table is R_ij = Σ_{l∈L} p^l R^l_{i j_l}. The same relation holds between C and the C^l. These relations between a pure strategy in the equivalent normal-form game and the pure strategies in the individual games with each follower are key in showing that these problems are equivalent.
5.2 Decomposed MILP
We can linearize the quadratic programming Problem (7) through the change of variables z^l_ij = x_i q^l_j, obtaining the following problem:

max_{q,z} Σ_{i∈X} Σ_{l∈L} Σ_{j∈Q} (p^l/k) R^l_ij z^l_ij
s.t. Σ_{i∈X} Σ_{j∈Q} z^l_ij = k, for all l ∈ L
     Σ_{j∈Q} z^l_ij ≤ k, for all i ∈ X, l ∈ L
     k q^l_j ≤ Σ_{i∈X} z^l_ij ≤ k, for all l ∈ L, j ∈ Q
     Σ_{j∈Q} q^l_j = 1, for all l ∈ L
     0 ≤ a^l − Σ_{i∈X} (1/k) C^l_ij (Σ_{h∈Q} z^l_ih) ≤ (1 − q^l_j) M, for all l ∈ L, j ∈ Q
     Σ_{j∈Q} z^l_ij = Σ_{j∈Q} z^1_ij, for all i ∈ X, l ∈ L
     z^l_ij ∈ {0, 1, ..., k}
     q^l_j ∈ {0, 1}    (8)

PROPOSITION 2. Problems (7) and (8) are equivalent.
Proof: Consider x, q^l, a^l, with l ∈ L, a feasible solution of (7). We will show that q^l, a^l, together with z^l_ij = x_i q^l_j, is a feasible solution of (8) with the same objective function value. The equality of the objective functions, and constraints 4, 7, and 8 of (8), are satisfied by construction. The fact that Σ_{j∈Q} z^l_ij = x_i, since Σ_{j∈Q} q^l_j = 1, explains constraints 1, 2, 5, and 6 of (8). Constraint 3 of (8) is satisfied because Σ_{i∈X} z^l_ij = k q^l_j.
Let us now consider q^l, z^l, a^l feasible for (8). We will show that q^l, a^l, and x_i = Σ_{j∈Q} z^1_ij are feasible for (7) with the same objective value. In fact, all constraints of (7) are readily satisfied by construction. To see that the objectives match, notice that for each l exactly one q^l_j must equal 1 and the rest equal 0. Say q^l_{j_l} = 1; then the third constraint in (8) implies that Σ_{i∈X} z^l_{i j_l} = k, which means that z^l_ij = 0 for all i ∈ X and all j ≠ j_l. In particular this implies that

x_i = Σ_{j∈Q} z^1_ij = z^1_{i j_1} = z^l_{i j_l},

the last equality following from constraint 6 of (8). Therefore x_i q^l_j = z^l_{i j_l} q^l_j = z^l_ij, where the last equality holds because both sides are 0 when j ≠ j_l. Effectively, constraint 6 ensures that all the adversaries compute their best responses against the same fixed policy of the agent. This shows that the transformation preserves the objective function value, completing the proof.
We can therefore solve this equivalent mixed-integer linear program with efficient integer programming packages that can handle problems with thousands of integer variables. We implemented the decomposed MILP, and the results are shown in the following
section.
6. EXPERIMENTAL RESULTS
The patrolling domain and the payoffs for the associated game are detailed in Sections 2 and 3. We performed experiments for this game in worlds of three and four houses, with patrols consisting of two houses. The description given in Section 2 is used to generate a base case for both the security agent and robber payoff functions. The payoff tables for additional robber types are constructed and added to the game by adding a random distribution of varying size to the payoffs in the base case. All games are normalized so that, for each robber type, the minimum and maximum payoffs to the security agent and robber are 0 and 1, respectively.
Using the data generated, we performed the experiments using four methods for generating the security agent's strategy:
• uniform randomization
• ASAP
• the multiple linear programs method from [5] (to find the true optimal strategy)
• the highest-reward Bayes-Nash equilibrium, found using the MIP-Nash algorithm [17]
The last three methods were applied using CPLEX 8.1. Because the last two methods are designed for normal-form games rather than Bayesian games, the games were first converted using the Harsanyi transformation [8]. The uniform randomization method simply chooses a uniform random policy over all possible patrol routes; we use this method as a simple baseline to measure the performance of our heuristics. We anticipated that the uniform policy would perform reasonably well, since maximum-entropy policies have been shown to be effective in multiagent security domains [14]. The highest-reward Bayes-Nash equilibria were used in order to demonstrate the higher reward gained by looking for an optimal policy rather than an equilibrium in Stackelberg games such as our security domain.
Based on our experiments, we present three sets of graphs to demonstrate (1) the runtime of ASAP compared to other common methods for finding a strategy, (2) the reward guaranteed
by ASAP compared to other methods, and (3) the effect of varying the parameter k, the size of the multiset, on the performance of ASAP. In the first two sets of graphs, ASAP is run using a multiset of 80 elements; in the third set this number is varied.
The first set of graphs, shown in Figure 1, gives the runtimes for the three-house (left column) and four-house (right column) domains. Each of the three rows of graphs corresponds to a different randomly generated scenario. The x-axis shows the number of robber types the security agent faces and the y-axis shows the runtime in seconds. All experiments that had not concluded in 30 minutes (1800 seconds) were cut off. The runtime for the uniform policy is always negligible irrespective of the number of adversaries and hence is not shown.
Figure 1: Runtimes for various algorithms on problems of 3 and 4 houses.
The ASAP algorithm clearly outperforms the optimal multiple-LP method as well as the MIP-Nash algorithm for finding the highest-reward Bayes-Nash equilibrium with respect to runtime. For a domain of three houses, the optimal method cannot reach a solution for more than seven robber types, and for four houses it cannot solve for more than six types within the cutoff time in any of the three scenarios. MIP-Nash solves for even fewer robber types within the cutoff time. On the other hand, ASAP runs much faster, and is able to solve for at least 20 adversaries in the three-house scenarios and for at least 12 adversaries in the four-house scenarios within the cutoff time. The runtime of ASAP does not increase strictly with the number of robber types for each scenario, but in general the addition of more types increases the required runtime.
The second set of graphs, Figure 2, shows the reward to the patrol agent given by each method for three scenarios in the three-house (left column) and four-house
(right column) domains. This reward is reported as the utility received by the security agent in the patrolling game, not as a percentage of the optimal reward, since it was not possible to obtain the optimal reward as the number of robber types increased. The uniform policy consistently provides the lowest reward in both domains, while the optimal method of course produces the optimal reward. The ASAP method remains consistently close to the optimal, even as the number of robber types increases. The highest-reward Bayes-Nash equilibria, provided by the MIP-Nash method, produced rewards higher than the uniform method but lower than ASAP. This difference clearly illustrates the gains in the patrolling domain from committing to a strategy as the leader in a Stackelberg game, rather than playing a standard Bayes-Nash strategy.
Figure 2: Reward for various algorithms on problems of 3 and 4 houses.
The third set of graphs, shown in Figure 3, gives the effect of the multiset size on runtime in seconds (left column) and reward (right column), again expressed as the reward received by the security agent in the patrolling game rather than as a percentage of the optimal reward. Results here are for the three-house domain. The trend is that as the multiset size is increased, the runtime and reward level both increase. Not surprisingly, the reward increases monotonically with the multiset size, but what is interesting is that there is relatively little benefit to using a large multiset in this domain. In all cases, the reward given by a multiset of 10 elements was within at least 96% of the reward given by an 80-element multiset. The runtime does not always increase strictly with the multiset size; indeed, in one example (scenario 2 with 20 robber types), using a multiset of 10 elements took 1228 seconds, while using 80 elements took only 617 seconds. In general, runtime should increase, since a larger multiset means a larger domain for the variables in the MILP, and
thus a larger search space. However, an increase in the number of variables can sometimes allow a policy to be constructed more quickly, due to greater flexibility in the problem.
7. SUMMARY AND RELATED WORK
This paper focuses on security for agents patrolling in hostile environments. In these environments, intentional threats are caused by adversaries about whom the security patrolling agents have incomplete information. Specifically, we deal with situations where the adversaries' actions and payoffs are known but the exact adversary type is unknown to the security agent. Agents acting in the real world quite frequently have such incomplete information about other agents. Bayesian games have been a popular choice for modeling such incomplete-information games [3]. The Gala toolkit is one method for defining such games [9] without requiring the game to be represented in normal form via the Harsanyi transformation [8]; Gala's guarantees are focused on fully competitive games.
Figure 3: Reward for ASAP using multisets of 10, 30, and 80 elements.
Much work has been done on finding optimal Bayes-Nash equilibria for subclasses of Bayesian games, on finding single Bayes-Nash equilibria for general Bayesian games [10], and on approximate Bayes-Nash equilibria [18]. Less attention has been paid to finding the optimal strategy to commit to in a Bayesian game (the Stackelberg scenario [15]). However, the complexity of this problem was shown to be NP-hard in the general case by [5], which also provides algorithms for this problem in the non-Bayesian case. Therefore, we present a heuristic called ASAP, with three key advantages for addressing this problem. First, ASAP searches for the highest-reward strategy, rather than a Bayes-Nash equilibrium, allowing it to find feasible strategies that exploit the natural first-mover advantage of the game. Second, it provides strategies which are
simple to understand, represent, and implement. Third, it operates directly on the compact Bayesian game representation, without requiring conversion to normal form. We provide an efficient Mixed Integer Linear Program (MILP) implementation for ASAP, along with experimental results illustrating significant speedups and higher rewards over other approaches.
Our k-uniform strategies are similar to the k-uniform strategies of [12]. While that work provides epsilon error bounds based on k-uniform strategies, its solution concept is still that of a Nash equilibrium, and it does not provide efficient algorithms for obtaining such k-uniform strategies. This contrasts with ASAP, where our emphasis is on a highly efficient heuristic approach that is not focused on equilibrium solutions. Finally, the patrolling problem that motivated our work has recently received growing attention from the multiagent community due to its wide range of applications [4, 13]. However, most of this work focuses either on limiting the energy consumption involved in patrolling [7] or on optimizing criteria such as the length of the path traveled [4, 13], without reasoning about any explicit model of an adversary [14].
Acknowledgments: This research is supported by the United States Department of Homeland Security through the Center for Risk and Economic Analysis of Terrorism Events (CREATE). It is also supported by the Defense Advanced Research Projects Agency (DARPA), through the Department of the Interior, NBC, Acquisition Services Division, under Contract No. NBCHD030010. Sarit Kraus is also affiliated with UMIACS.
8. REFERENCES
[1] R. W. Beard and T. McLain. Multiple UAV cooperative search under collision avoidance and limited range communication constraints. In IEEE CDC, 2003.
[2] D. Bertsimas and J. Tsitsiklis. Introduction to Linear Optimization. Athena Scientific, 1997.
[3] J. Brynielsson and S.
Arnborg. Bayesian games for threat prediction and situation analysis. In FUSION, 2004.
[4] Y. Chevaleyre. Theoretical analysis of the multi-agent patrolling problem. In AAMAS, 2004.
[5] V. Conitzer and T. Sandholm. Computing the optimal strategy to commit to. In ACM Conference on Electronic Commerce, 2006.
[6] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991.
[7] C. Gui and P. Mohapatra. Virtual patrol: A new power conservation design for surveillance using sensor networks. In IPSN, 2005.
[8] J. C. Harsanyi and R. Selten. A generalized Nash solution for two-person bargaining games with incomplete information. Management Science, 18(5):80-106, 1972.
[9] D. Koller and A. Pfeffer. Generating and solving imperfect information games. In IJCAI, pages 1185-1193, 1995.
[10] D. Koller and A. Pfeffer. Representations and solutions for game-theoretic problems. Artificial Intelligence, 94(1):167-215, 1997.
[11] C. Lemke and J. Howson. Equilibrium points of bimatrix games. Journal of the Society for Industrial and Applied Mathematics, 12:413-423, 1964.
[12] R. J. Lipton, E. Markakis, and A. Mehta. Playing large games using simple strategies. In ACM Conference on Electronic Commerce, 2003.
[13] A. Machado, G. Ramalho, J. D. Zucker, and A. Drougoul. Multi-agent patrolling: an empirical analysis of alternative architectures. In MABS, 2002.
[14] P. Paruchuri, M. Tambe, F. Ordonez, and S. Kraus. Security in multiagent systems by policy randomization. In AAMAS, 2006.
[15] T. Roughgarden. Stackelberg scheduling strategies. In ACM Symposium on Theory of Computing, 2001.
[16] S. Ruan, C. Meirina, F. Yu, K. R. Pattipati, and R. L. Popp. Patrolling in a stochastic environment. In 10th Intl. Command and Control Research Symp., 2005.
[17] T. Sandholm, A. Gilpin, and V. Conitzer. Mixed-integer programming methods for finding Nash equilibria. In AAAI, 2005.
[18] S. Singh, V. Soni, and M.
Wellman. Computing approximate Bayes-Nash equilibria with tree-games of incomplete information. In ACM Conference on Electronic Commerce, 2004.
An Efficient Heuristic Approach for Security Against Multiple Adversaries
ABSTRACT
In adversarial multiagent domains, security, commonly defined as the ability to deal with intentional threats from other agents, is a critical issue. This paper focuses on domains where these threats come from unknown adversaries. These domains can be modeled as Bayesian games; much work has been done on finding equilibria for such games. However, it is often the case in multiagent security domains that one agent can commit to a mixed strategy which its adversaries observe before choosing their own strategies. In this case, the agent can maximize reward by finding an optimal strategy, without requiring equilibrium. Previous work has shown this problem of optimal strategy selection to be NP-hard. Therefore, we present a heuristic called ASAP, with three key advantages for addressing the problem. First, ASAP searches for the highest-reward strategy, rather than a Bayes-Nash equilibrium, allowing it to find feasible strategies that exploit the natural first-mover advantage of the game. Second, it provides strategies which are simple to understand, represent, and implement. Third, it operates directly on the compact, Bayesian game representation, without requiring conversion to normal form. We provide an efficient Mixed Integer Linear Program (MILP) implementation for ASAP, along with experimental results illustrating significant speedups and higher rewards over other approaches.
1. INTRODUCTION
In many multiagent domains, agents must act in order to provide security against attacks by adversaries. A common issue that agents face in such security domains is uncertainty about the adversaries they may be facing. For example, a
security robot may need to make a choice about which areas to patrol, and how often [16].\nHowever, it will not know in advance exactly where a robber will choose to strike.\nA team of unmanned aerial vehicles (UAVs) [1] monitoring a region undergoing a humanitarian crisis may also need to choose a patrolling policy.\nThey must make this decision without knowing in advance whether terrorists or other adversaries may be waiting to disrupt the mission at a given location.\nIt may indeed be possible to model the motivations of types of adversaries the agent or agent team is likely to face in order to target these adversaries more closely.\nHowever, in both cases, the security robot or UAV team will not know exactly which kinds of adversaries may be active on any given day.\nA common approach for choosing a policy for agents in such scenarios is to model the scenarios as Bayesian games.\nA Bayesian game is a game in which agents may belong to one or more types; the type of an agent determines its possible actions and payoffs.\nThe distribution of adversary types that an agent will face may be known or inferred from historical data.\nUsually, these games are analyzed according to the solution concept of a Bayes-Nash equilibrium, an extension of the Nash equilibrium for Bayesian games.\nHowever, in many settings, a Nash or Bayes-Nash equilibrium is not an appropriate solution concept, since it assumes that the agents' strategies are chosen simultaneously [5].\nIn some settings, one player can (or must) commit to a strategy before the other players choose their strategies.\nThese scenarios are known as Stackelberg games [6].\nIn a Stackelberg game, a leader commits to a strategy first, and then a follower (or group of followers) selfishly optimize their own rewards, considering the action chosen by the leader.\nFor example, the security agent (leader) must first commit to a strategy for patrolling various areas.\nThis strategy could be a mixed strategy in order to be 
unpredictable to the robbers (followers). The robbers, after observing the pattern of patrols over time, can then choose their strategy (which location to rob). Often, the leader in a Stackelberg game can attain a higher reward than if the strategies were chosen simultaneously.
To see the advantage of being the leader in a Stackelberg game, consider a simple game with the payoff table shown in Table 1. The leader is the row player and the follower is the column player. Here, the leader's payoff is listed first.
Table 1: Payoff table for example normal form game.
The only Nash equilibrium for this game is when the leader plays 2 and the follower plays 2, which gives the leader a payoff of 2. However, if the leader commits to a uniform mixed strategy of playing 1 and 2 with equal (0.5) probability, the follower's best response is to play 3, for an expected payoff of 5 (10 and 0 with equal probability). The leader's payoff would then be 4 (3 and 5 with equal probability). In this case, the leader now has an incentive to deviate and choose a pure strategy of 2 (to get a payoff of 5). However, this would cause the follower to deviate to strategy 2 as well, resulting in the Nash equilibrium. Thus, by committing to a strategy that is observed by the follower, and by avoiding the temptation to deviate, the leader manages to obtain a reward higher than that of the best Nash equilibrium.
The problem of choosing an optimal strategy for the leader to commit to in a Stackelberg game is analyzed in [5] and found to be NP-hard in the case of a Bayesian game with multiple types of followers. Thus, efficient heuristic techniques for choosing high-reward strategies in these games are an important open issue. Methods for finding optimal leader strategies for non-Bayesian games [5] can be applied to this problem by converting the Bayesian game into a normal-form game via the Harsanyi transformation [8]. If, on the other hand, we wish to compute the highest-reward Nash
equilibrium, new methods using mixed-integer linear programs (MILPs) [17] may be used, since the highest-reward Bayes-Nash equilibrium is equivalent to the corresponding Nash equilibrium in the transformed game.\nHowever, by transforming the game, the compact structure of the Bayesian game is lost.\nIn addition, since the Nash equilibrium assumes a simultaneous choice of strategies, the advantages of being the leader are not considered.\nThis paper introduces an efficient heuristic method for approximating the optimal leader strategy for security domains, known as ASAP (Agent Security via Approximate Policies).\nThis method has three key advantages.\nFirst, it directly searches for an optimal strategy, rather than a Nash (or Bayes-Nash) equilibrium, thus allowing it to find high-reward non-equilibrium strategies like the one in the above example.\nSecond, it generates policies with a support which can be expressed as a uniform distribution over a multiset of fixed size as proposed in [12].\nThis allows for policies that are simple to understand and represent [12], as well as a tunable parameter (the size of the multiset) that controls the simplicity of the policy.\nThird, the method allows for a Bayes-Nash game to be expressed compactly without conversion to a normal-form game, allowing for large speedups over existing Nash methods such as [17] and [11].\nThe rest of the paper is organized as follows.\nIn Section 2 we fully describe the patrolling domain and its properties.\nSection 3 introduces the Bayesian game, the Harsanyi transformation, and existing methods for finding an optimal leader's strategy in a Stackelberg game.\nThen, in Section 4 the ASAP algorithm is presented for normal-form games, and in Section 5 we show how it can be adapted to the structure of Bayesian games with uncertain adversaries.\nExperimental results showing higher reward and faster policy computation over existing Nash methods are shown in Section 6, and we conclude with a discussion 
of related work in Section 7.\n2.\nTHE PATROLLING DOMAIN\n3.\nBAYESIAN GAMES\n312 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n3.1 Harsanyi Transformation\n3.2 Finding an Optimal Strategy\n4.\nHEURISTIC APPROACHES\n4.1 Mixed-Integer Quadratic Program\n4.2 Mixed-Integer Linear Program\n5.\nDECOMPOSITION FOR MULTIPLE ADVERSARIES\n5.1 Decomposed MIQP\n5.2 Decomposed MILP\n6.\nEXPERIMENTAL RESULTS\n[Figure: reward and runtime results; 4-house domain]\nResults here are for the three-house domain.\nThe trend is that as the multiset size is increased, the runtime and reward level both increase.\nNot surprisingly, the reward increases monotonically as the multiset size increases, but what is interesting is that there is relatively little benefit to using a large multiset in this domain.\nIn all cases, the reward given by a multiset of 10 elements was at least 96% of the reward given by an 80-element multiset.\nThe runtime does not always increase strictly with the multiset size; indeed, in one example (scenario 2 with 20 robber types), using a multiset of 10 elements took 1228 seconds, while using 80 elements took only 617 seconds.\nIn general, runtime should increase, since a larger multiset means a larger domain for the variables in the MILP, and thus a larger search space.\nHowever, an increase in the number of variables can sometimes allow for a policy to be constructed more quickly due to more flexibility in the problem.\nAn Efficient Heuristic Approach for Security Against Multiple Adversaries\nABSTRACT\nIn adversarial multiagent domains, security, commonly defined as the ability to deal with intentional threats from other agents, is a critical issue.\nThis paper focuses on domains where these threats come from unknown adversaries.\nThese domains can be modeled as Bayesian games; much work has been done on finding equilibria for such games.\nHowever, it is often the case in multiagent security domains that one agent can commit to a mixed strategy which its adversaries observe before choosing their own strategies.\nIn this case, the agent can maximize reward by finding an optimal strategy, without requiring equilibrium.\nPrevious work has shown this problem of optimal strategy selection to be
NP-hard.\nTherefore, we present a heuristic called ASAP, with three key advantages to address the problem.\nFirst, ASAP searches for the highest-reward strategy, rather than a Bayes-Nash equilibrium, allowing it to find feasible strategies that exploit the natural first-mover advantage of the game.\nSecond, it provides strategies which are simple to understand, represent, and implement.\nThird, it operates directly on the compact, Bayesian game representation, without requiring conversion to normal form.\nWe provide an efficient Mixed Integer Linear Program (MILP) implementation for ASAP, along with experimental results illustrating significant speedups and higher rewards over other approaches.\n1.\nINTRODUCTION\nIn many multiagent domains, agents must act in order to provide security against attacks by adversaries.\nA common issue that agents face in such security domains is uncertainty about the adversaries they may be facing.\nFor example, a security robot may need to make a choice about which areas to patrol, and how often [16].\nHowever, it will not know in advance exactly where a robber will choose to strike.\nA team of unmanned aerial vehicles (UAVs) [1] monitoring a region undergoing a humanitarian crisis may also need to choose a patrolling policy.\nThey must make this decision without knowing in advance whether terrorists or other adversaries may be waiting to disrupt the mission at a given location.\nIt may indeed be possible to model the motivations of types of adversaries the agent or agent team is likely to face in order to target these adversaries more closely.\nHowever, in both cases, the security robot or UAV team will not know exactly which kinds of adversaries may be active on any given day.\nA common approach for choosing a policy for agents in such scenarios is to model the scenarios as Bayesian games.\nA Bayesian game is a game in which agents may belong to one or more types; the type of an agent determines its possible actions and
payoffs.\nThe distribution of adversary types that an agent will face may be known or inferred from historical data.\nUsually, these games are analyzed according to the solution concept of a Bayes-Nash equilibrium, an extension of the Nash equilibrium for Bayesian games.\nHowever, in many settings, a Nash or Bayes-Nash equilibrium is not an appropriate solution concept, since it assumes that the agents' strategies are chosen simultaneously [5].\nIn some settings, one player can (or must) commit to a strategy before the other players choose their strategies.\nThese scenarios are known as Stackelberg games [6].\nIn a Stackelberg game, a leader commits to a strategy first, and then a follower (or group of followers) selfishly optimize their own rewards, considering the action chosen by the leader.\nFor example, the security agent (leader) must first commit to a strategy for patrolling various areas.\nThis strategy could be a mixed strategy in order to be unpredictable to the robbers (followers).\nThe robbers, after observing the pattern of patrols over time, can then choose their strategy (which location to rob).\nOften, the leader in a Stackelberg game can attain a higher reward than if the strategies were chosen simultaneously.\nTo see the advantage of being the leader in a Stackelberg game, consider a simple game with the payoff table as shown in Table 1.\nThe leader is the row player and the follower is the column player.\nHere, the leader's payoff is listed first.\nTable 1: Payoff table for example normal form game.\nThe only Nash equilibrium for this game is when the leader plays 2 and the follower plays 2 which gives the leader a payoff of 2.\nHowever, if the leader commits to a uniform mixed strategy of playing 1 and 2 with equal (0.5) probability, the follower's best response is to play 3 to get an expected payoff of 5 (10 and 0 with equal probability).\nThe leader's payoff would then be 4 (3 and 5 with equal probability).\nIn this case, the leader now has an 
incentive to deviate and choose a pure strategy of 2 (to get a payoff of 5).\nHowever, this would cause the follower to deviate to strategy 2 as well, resulting in the Nash equilibrium.\nThus, by committing to a strategy that is observed by the follower, and by avoiding the temptation to deviate, the leader manages to obtain a reward higher than that of the best Nash equilibrium.\nThe problem of choosing an optimal strategy for the leader to commit to in a Stackelberg game is analyzed in [5] and found to be NP-hard in the case of a Bayesian game with multiple types of followers.\nThus, efficient heuristic techniques for choosing high-reward strategies in these games are an important open issue.\nMethods for finding optimal leader strategies for non-Bayesian games [5] can be applied to this problem by converting the Bayesian game into a normal-form game by the Harsanyi transformation [8].\nIf, on the other hand, we wish to compute the highest-reward Nash equilibrium, new methods using mixed-integer linear programs (MILPs) [17] may be used, since the highest-reward Bayes-Nash equilibrium is equivalent to the corresponding Nash equilibrium in the transformed game.\nHowever, by transforming the game, the compact structure of the Bayesian game is lost.\nIn addition, since the Nash equilibrium assumes a simultaneous choice of strategies, the advantages of being the leader are not considered.\nThis paper introduces an efficient heuristic method for approximating the optimal leader strategy for security domains, known as ASAP (Agent Security via Approximate Policies).\nThis method has three key advantages.\nFirst, it directly searches for an optimal strategy, rather than a Nash (or Bayes-Nash) equilibrium, thus allowing it to find high-reward non-equilibrium strategies like the one in the above example.\nSecond, it generates policies whose support can be expressed as a uniform distribution over a multiset of fixed size, as proposed in [12].\nThis allows for policies
that are simple to understand and represent [12], as well as a tunable parameter (the size of the multiset) that controls the simplicity of the policy.\nThird, the method allows for a Bayes-Nash game to be expressed compactly without conversion to a normal-form game, allowing for large speedups over existing Nash methods such as [17] and [11].\nThe rest of the paper is organized as follows.\nIn Section 2 we fully describe the patrolling domain and its properties.\nSection 3 introduces the Bayesian game, the Harsanyi transformation, and existing methods for finding an optimal leader's strategy in a Stackelberg game.\nThen, in Section 4 the ASAP algorithm is presented for normal-form games, and in Section 5 we show how it can be adapted to the structure of Bayesian games with uncertain adversaries.\nExperimental results showing higher reward and faster policy computation over existing Nash methods are shown in Section 6, and we conclude with a discussion of related work in Section 7.\n2.\nTHE PATROLLING DOMAIN\nIn most security patrolling domains, the security agents (like UAVs [1] or security robots [16]) cannot feasibly patrol all areas all the time.\nInstead, they must choose a policy by which they patrol various routes at different times, taking into account factors such as the likelihood of crime in different areas, possible targets for crime, and the security agents' own resources (number of security agents, amount of available time, fuel, etc.).\nIt is usually beneficial for this policy to be nondeterministic so that robbers cannot safely rob certain locations, knowing that they will be safe from the security agents [14].\nTo demonstrate the utility of our algorithm, we use a simplified version of such a domain, expressed as a game.\nThe most basic version of our game consists of two players: the security agent (the leader) and the robber (the follower) in a world consisting of m houses, 1...m.\nThe security agent's set of pure strategies consists of possible 
routes of d houses to patrol (in an order).\nThe security agent can choose a mixed strategy so that the robber will be unsure of exactly where the security agent may patrol, but the robber will know the mixed strategy the security agent has chosen.\nFor example, the robber can observe over time how often the security agent patrols each area.\nWith this knowledge, the robber must choose a single house to rob.\nWe assume that the robber generally takes a long time to rob a house.\nIf the house chosen by the robber is not on the security agent's route, then the robber successfully robs it.\nOtherwise, if it is on the security agent's route, then the earlier the house is on the route, the easier it is for the security agent to catch the robber before he finishes robbing it.\nWe model the payoffs for this game with the following variables:\n• v_{l,x}: value of the goods in house l to the security agent.\n• v_{l,q}: value of the goods in house l to the robber.\n• c_x: reward to the security agent of catching the robber.\n• c_q: cost to the robber of getting caught.\n• p_l: probability that the security agent can catch the robber at the lth house in the patrol (p_l < p_{l'} ⟺ l' < l).\nThe security agent's set of possible pure strategies (patrol routes) is denoted by X and includes all d-tuples i = ⟨w1, w2, ..., wd⟩ with w1, ..., wd ∈ {1, ..., m} where no two elements are equal (the agent is not allowed to return to the same house).\nThe robber's set of possible pure strategies (houses to rob) is denoted by Q and includes all integers j = 1 ... m.\nThe payoffs (security agent, robber) for pure strategies i, j are:\n• −v_{l,x}, v_{l,q}, for j = l ∉ i.\n• p_l c_x + (1 − p_l)(−v_{l,x}), −p_l c_q + (1 − p_l) v_{l,q}, for j = l ∈ i.\nWith this structure it is possible to model many different types of robbers who have differing motivations; for example, one robber may have a lower cost of getting caught than another, or may value the goods in the various houses differently.\nIf the distribution of different robber types is known or inferred from historical data, then the game can be modeled as a Bayesian game [6].\n3.\nBAYESIAN GAMES\nA Bayesian game contains a set of N agents, and each agent n must be one of a given set of types θ_n.\nFor our patrolling domain, we have two agents, the security agent and the robber.\nθ_1 is the set of security agent types and θ_2 is the set of robber types.\nSince there is only one type of security agent, θ_1 contains only one element.\nDuring the game, the robber knows its type but the security agent does not know the robber's type.\nFor each agent (the security agent or the robber) n, there is a set of strategies σ_n and a utility function u_n: θ_1 × θ_2 × σ_1 × σ_2 → ℝ.\nA Bayesian game can be transformed into a normal-form game using the Harsanyi transformation [8].\nOnce this is done, new, linear-program (LP)-based methods for finding high-reward strategies for normal-form games [5] can be used to find a strategy in the transformed game; this strategy can then be used for the Bayesian game.\nWhile methods exist for finding Bayes-Nash equilibria directly, without the Harsanyi transformation [10], they find only a single equilibrium in the general case, which may not be of high reward.\nRecent work [17] has led to efficient mixed-integer linear program techniques to find the best Nash equilibrium for a given agent.\nHowever, these techniques do require a normal-form game, and so to compare the policies given by ASAP against the optimal policy, as well as against the highest-reward Nash equilibrium, we must apply these techniques to the Harsanyi-transformed matrix.\nThe next two subsections elaborate on how this is done.\n3.1 Harsanyi Transformation\nThe first step in solving Bayesian games is to apply the Harsanyi transformation [8] that converts the Bayesian game into a normal form game.\nGiven that the Harsanyi transformation is a standard concept in game theory, we explain it briefly through a simple example in our patrolling domain without introducing the mathematical formulations.\nLet us assume there are two robber types a and b in the Bayesian game.\nRobber a will be active with probability α, and robber b will be active with probability 1 − α.\nThe rules described in Section 2 allow us to construct simple payoff tables.\nAssume that there are two houses in the world (1 and 2) and hence there are two patrol routes (pure strategies) for the agent: {1,2} and {2,1}.\nThe robber can rob either house 1 or house 2 and hence he has two strategies (denoted as 1_l, 2_l for robber type l).\nSince there are two types assumed (denoted as a and b), we construct two payoff tables (shown in Table 2) corresponding to the security agent playing a separate game with each of the two robber types with probabilities α and 1 − α.\nFirst, consider robber type a.
Borrowing the notation from the domain section, we assign the following values to the variables: v_{1,x} = v_{1,q} = 3/4, v_{2,x} = v_{2,q} = 1/4, c_x = 1/2, c_q = 1, p_1 = 1, p_2 = 1/2.\nUsing these values we construct a base payoff table as the payoff for the game against robber type a. For example, if the security agent chooses route {1,2} when robber a is active, and robber a chooses house 1, the robber receives a reward of −1 (for being caught) and the agent receives a reward of 0.5 for catching the robber.\nThe payoffs for the game against robber type b are constructed using different values.\nTable 2: Payoff tables: Security Agent vs Robbers a and b\nUsing the Harsanyi technique involves introducing a chance node that determines the robber's type, thus transforming the security agent's incomplete information regarding the robber into imperfect information [3].\nThe Bayesian equilibrium of the game is then precisely the Nash equilibrium of the imperfect information game.\nThe transformed, normal-form game is shown in Table 3.\nIn the transformed game, the security agent is the column player, and the set of all robber types together is the row player.\nSuppose that robber type a robs house 1 and robber type b robs house 2, while the security agent chooses patrol {1,2}.\nThen, the security agent and the robber receive an expected payoff corresponding to their payoffs from the agent encountering robber a at house 1 with probability α and robber b at house 2 with probability 1 − α.\n3.2 Finding an Optimal Strategy\nAlthough a Nash equilibrium is the standard solution concept for games in which agents choose strategies simultaneously, in our security domain, the security agent (the leader) can gain an advantage by committing to a mixed strategy in advance.\nSince the followers (the robbers) will know the leader's strategy, the optimal response for the followers will be a pure strategy.\nGiven the common assumption, taken in [5], that in the case where followers are
indifferent, they will choose the strategy that benefits the leader, there must exist a guaranteed optimal strategy for the leader [5].\nFrom the Bayesian game in Table 2, we constructed the Harsanyi transformed bimatrix in Table 3.\nThe strategies for each player (security agent or robber) in the transformed game correspond to all combinations of possible strategies taken by each of that player's types.\nTherefore, we denote by X = σ_1^{θ_1} = σ_1 and Q = σ_2^{θ_2} the index sets of the security agent and robbers' pure strategies respectively, with R and C as the corresponding payoff matrices.\nRij is the reward of the security agent and Cij is the reward of the robbers when the security agent takes pure strategy i and the robbers take pure strategy j.\nA mixed strategy for the security agent is a probability distribution over its set of pure strategies and will be represented by a vector x = (p_{x_1}, p_{x_2}, ..., p_{x_{|X|}}), where p_{x_i} ≥ 0 and Σ_i p_{x_i} = 1.\nHere, p_{x_i} is the probability that the security agent will choose its ith pure strategy.\nThe optimal mixed strategy for the security agent can be found in time polynomial in the number of rows in the normal form game using the following linear program formulation from [5].\nFor every possible pure strategy j by the follower (the set of all robber types), solve:\nmax Σ_{i∈X} p_{x_i} Rij\ns.t. Σ_{i∈X} p_{x_i} Cij ≥ Σ_{i∈X} p_{x_i} Cij', for all j' ∈ Q\nΣ_{i∈X} p_{x_i} = 1; p_{x_i} ≥ 0, for all i ∈ X.\nThen, for all feasible follower strategies j, choose the one that maximizes Σ_{i∈X} p_{x_i} Rij, the reward for the security agent (leader).\nThe p_{x_i} variables give the optimal strategy for the security agent.\nNote that while this method is polynomial in the number of rows in the transformed, normal-form game, the number of rows increases exponentially with the number of robber types.\nUsing this method for a Bayesian game thus requires running |σ_2|^{|θ_2|} separate linear programs.\nThis is no surprise, since finding the leader's optimal strategy in a Bayesian Stackelberg game is NP-hard [5].\n4.\nHEURISTIC APPROACHES\nGiven that finding the optimal strategy
for the leader is NP-hard, we provide a heuristic approach.\nIn this heuristic we limit the possible mixed strategies of the leader to select actions with probabilities that are integer multiples of 1/k for a predetermined integer k. Previous work [14] has shown that strategies with high entropy are beneficial for security applications when opponents' utilities are completely unknown.\nIn our domain, if utilities are not considered, this method will result in uniform-distribution strategies.\nOne advantage of such strategies is that they are compact to represent (as fractions) and simple to understand; therefore they can be efficiently implemented by real organizations.\nWe aim to maintain the advantage provided by simple strategies for our security application problem, incorporating the effect of the robbers' rewards on the security agent's rewards.\nThus, the ASAP heuristic will produce strategies which are k-uniform.\nA mixed strategy is denoted k-uniform if it is a uniform distribution on a multiset S of pure strategies with |S| = k.\nA multiset is a set whose elements may be repeated multiple times; thus, for example, the mixed strategy corresponding to the multiset {1, 1, 2} would take strategy 1 with probability 2/3 and strategy 2 with probability 1/3.\nTable 3: Harsanyi Transformed Payoff Table\nASAP allows the size of the multiset to be chosen in order to balance the complexity of the strategy reached with the goal that the identified strategy will yield a high reward.\nAnother advantage of the ASAP heuristic is that it operates directly on the compact Bayesian representation, without requiring the Harsanyi transformation.\nThis is because the different follower (robber) types are independent of each other.\nHence, evaluating the leader strategy against a Harsanyi-transformed game matrix is equivalent to evaluating against each of the game matrices for the individual follower types.\nThis independence property is exploited in ASAP to yield a
decomposition scheme.\nNote that the LP method introduced by [5] to compute optimal Stackelberg policies is unlikely to be decomposable into a small number of games, as it was shown to be NP-hard for Bayes-Nash problems.\nFinally, note that ASAP requires the solution of only one optimization problem, rather than solving a series of problems as in the LP method of [5].\nFor a single follower type, the algorithm works the following way.\nGiven a particular k, for each possible mixed strategy x for the leader that corresponds to a multiset of size k, evaluate the leader's payoff from x when the follower plays a reward-maximizing pure strategy.\nWe then take the mixed strategy with the highest payoff.\nWe need only to consider the reward-maximizing pure strategies of the followers (robbers), since for a given fixed strategy x of the security agent, each robber type faces a problem with fixed linear rewards.\nIf a mixed strategy is optimal for the robber, then so are all the pure strategies in the support of that mixed strategy.\nNote also that because we limit the leader's strategies to take on discrete values, the assumption from Section 3.2 that the followers will break ties in the leader's favor is not significant, since ties will be unlikely to arise.\nThis is because, in domains where rewards are drawn from any random distribution, the probability of a follower having more than one pure optimal response to a given leader strategy approaches zero, and the leader will have only a finite number of possible mixed strategies.\nOur approach to characterize the optimal strategy for the security agent makes use of properties of linear programming.\nWe briefly outline these results here for completeness; for detailed discussion and proofs see one of many references on the topic, such as [2].\nEvery linear programming problem, such as: max {c^T x | Ax = b, x ≥ 0}, has an associated dual linear program, in this case: min {b^T y | A^T y ≥ c}.\nThese primal/dual pairs of problems
satisfy weak duality: For any x and y primal and dual feasible solutions respectively, c^T x ≤ b^T y. Thus a pair of feasible solutions is optimal if c^T x = b^T y, and the problems are said to satisfy strong duality.\nIn fact, if a linear program is feasible and has a bounded optimal solution, then the dual is also feasible and there is a pair x', y' that satisfies c^T x' = b^T y'.\nThese optimal solutions are characterized with the following optimality conditions (as defined in [2]):\n• primal feasibility: Ax = b, x ≥ 0\n• dual feasibility: A^T y ≥ c\n• complementary slackness: x_i (A^T y − c)_i = 0 for all i.\nNote that this last condition implies that\nc^T x = x^T A^T y = b^T y,\nwhich proves optimality for primal and dual feasible solutions x and y.\nIn the following subsections, we first define the problem in its most intuitive form as a mixed-integer quadratic program (MIQP), and then show how this problem can be converted into a mixed-integer linear program (MILP).\n4.1 Mixed-Integer Quadratic Program\nWe begin with the case of a single type of follower.\nLet the leader be the row player and the follower the column player.\nWe denote by x the vector of strategies of the leader and q the vector of strategies of the follower.\nWe also denote X and Q the index sets of the leader and follower's pure strategies, respectively.\nThe payoff matrices R and C correspond to: Rij is the reward of the leader and Cij is the reward of the follower when the leader takes pure strategy i and the follower takes pure strategy j.
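The single-follower procedure described above can be sketched as a brute-force search over k-uniform policies (a minimal illustration, not the MIQP/MILP formulation; the payoff matrices R and C below are hypothetical values, chosen only to be consistent with the leader-follower example in the introduction, whose unstated entries are assumptions):

```python
from itertools import combinations_with_replacement

def asap_single_follower(R, C, k):
    """Brute-force sketch of the single-follower case: enumerate every
    k-uniform leader policy (a multiset of k pure strategies), let the
    follower best-respond, and keep the policy with the best leader payoff.
    Ties in the follower's response are broken in the leader's favor,
    as assumed in Section 3.2."""
    n_rows, n_cols = len(R), len(R[0])
    best = None
    for multiset in combinations_with_replacement(range(n_rows), k):
        # Mixed strategy: pure strategy i is played x_i / k of the time.
        x = [multiset.count(i) / k for i in range(n_rows)]
        # Follower's expected reward for each pure response j.
        follower = [sum(x[i] * C[i][j] for i in range(n_rows))
                    for j in range(n_cols)]
        top = max(follower)
        # Among the follower's optimal responses, take the best one for the leader.
        leader = max(sum(x[i] * R[i][j] for i in range(n_rows))
                     for j in range(n_cols) if follower[j] == top)
        if best is None or leader > best[0]:
            best = (leader, multiset)
    return best

# Hypothetical payoffs consistent with the introduction's example
# (entries not stated in the text are assumptions):
R = [[5, 0, 3],   # leader's rewards
     [0, 2, 5]]
C = [[5, 0, 10],  # follower's rewards
     [0, 2, 0]]

value, policy = asap_single_follower(R, C, k=2)
print(value, policy)  # → 4.0 (0, 1)
```

With k = 2, the uniform multiset over both leader strategies is recovered as the best k-uniform policy: the follower best-responds with its third strategy and the leader obtains an expected reward of 4, matching the introduction's discussion.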
Let k be the size of the multiset.\nWe first fix the policy of the leader to some k-uniform policy x.\nThe value x_i is the number of times pure strategy i is used in the k-uniform policy, which is selected with probability x_i/k.\nWe formulate the optimization problem the follower solves to find its optimal response to x as the following linear program:\nmax_q Σ_{j∈Q} Σ_{i∈X} Cij (x_i/k) q_j\ns.t. Σ_{j∈Q} q_j = 1; q_j ≥ 0, for all j ∈ Q.   (2)\nThe objective function maximizes the follower's expected reward given x, while the constraints make feasible any mixed strategy q for the follower.\nThe dual to this linear programming problem is the following:\nmin_a a\ns.t. a ≥ Σ_{i∈X} Cij (x_i/k), for all j ∈ Q.   (3)\nFrom strong duality and complementary slackness we obtain that the follower's maximum reward value, a, is the value of every pure strategy with q_j > 0, that is, every pure strategy in the support of the optimal mixed strategy.\nTherefore each of these pure strategies is optimal.\nOptimal solutions to the follower's problem are characterized by linear programming optimality conditions: primal feasibility constraints in (2), dual feasibility constraints in (3), and complementary slackness:\nq_j (a − Σ_{i∈X} Cij (x_i/k)) = 0, for all j ∈ Q.\nThese conditions must be included in the problem solved by the leader in order to consider only best responses by the follower to the k-uniform policy x.\nThe leader seeks the k-uniform solution x that maximizes its own payoff, given that the follower uses an optimal response q(x).\nTherefore the leader solves the following integer problem:\nmax_x Σ_{i∈X} Σ_{j∈Q} Rij q(x)_j (x_i/k)\ns.t. Σ_{i∈X} x_i = k; x_i ∈ {0, 1, ..., k}, for all i ∈ X.   (4)\nProblem (4) maximizes the leader's reward with the follower's best response (q_j for fixed leader's policy x and hence denoted q(x)_j) by selecting a uniform policy from a multiset of constant size k.\nWe complete this problem by including the characterization of q(x) through linear programming optimality conditions.\nTo simplify writing the complementary slackness conditions, we will constrain q(x) to be only optimal pure strategies by just considering integer solutions of q(x).\nThe leader's problem becomes:\nmax_{x,q,a} Σ_{i∈X} Σ_{j∈Q} (x_i/k) Rij q_j\ns.t. Σ_{i∈X} x_i = k\nΣ_{j∈Q} q_j = 1\n0 ≤ a − Σ_{i∈X} Cij (x_i/k) ≤ (1 − q_j) M, for all j ∈ Q\nx_i ∈ {0, 1, ..., k}, for all i ∈ X\nq_j ∈ {0, 1}, for all j ∈ Q.   (5)\nHere, the constant M is some large number.\nThe first and fourth constraints enforce a k-uniform policy for the leader, and the second and fifth constraints enforce a feasible pure strategy for the follower.\nThe third constraint enforces dual feasibility of the follower's problem (leftmost inequality) and the complementary slackness constraint for an optimal pure strategy q for the follower (rightmost inequality).\nIn fact, since only one pure strategy can be selected by the follower, say q_h = 1, this last constraint enforces that a = Σ_{i∈X} Cih (x_i/k), imposing no additional constraint for all other pure strategies, which have q_j = 0.\nWe conclude this subsection noting that Problem (5) is an integer program with a non-convex quadratic objective in general, as the matrix R need not be positive semi-definite.\nEfficient solution methods for non-linear, non-convex integer problems remain a challenging research question.\nIn the next section we show a reformulation of this problem as a linear integer programming problem, for which a number of efficient commercial solvers exist.\n4.2 Mixed-Integer Linear Program\nWe can linearize the quadratic program of Problem (5) through the change of variables z_{ij} = x_i q_j, obtaining the following problem:\nmax_{q,z,a} Σ_{i∈X} Σ_{j∈Q} (1/k) Rij z_{ij}\ns.t. Σ_{i∈X} Σ_{j∈Q} z_{ij} = k\nΣ_{j∈Q} z_{ij} ≤ k, for all i ∈ X\nk q_j ≤ Σ_{i∈X} z_{ij} ≤ k, for all j ∈ Q\nΣ_{j∈Q} q_j = 1\n0 ≤ a − Σ_{i∈X} Cij (Σ_{h∈Q} z_{ih}/k) ≤ (1 − q_j) M, for all j ∈ Q\nz_{ij} ∈ {0, 1, ..., k}, for all i ∈ X, j ∈ Q\nq_j ∈ {0, 1}, for all j ∈ Q.   (6)\nProposition 1.\nProblems (5) and (6) are equivalent.\nProof: Consider x, q a
feasible solution of (5).\nWe will show that q, zij = xiqj is a feasible solution of (6) of the same objective function value.\nThe equivalence of the objective functions, and constraints 4, 6 and 7 of (6), are satisfied by construction.\nThe fact that \u03a3j\u2208Q zij = xi, since \u03a3j\u2208Q qj = 1, explains constraints 1, 2, and 5 of (6).\nConstraint 3 of (6) is satisfied because \u03a3i\u2208X zij = kqj.\nLet us now consider q, z feasible for (6).\nWe will show that q and xi = \u03a3j\u2208Q zij are feasible for (5) with the same objective value.\nIn fact, all constraints of (5) are readily satisfied by construction.\nTo see that the objectives match, notice that if qh = 1 then the third constraint in (6) implies that \u03a3i\u2208X zih = k, which means that zij = 0 for all i \u2208 X and all j \u2260 h.\nTherefore xiqj = (\u03a3u\u2208Q ziu) qj = zih qj = zij; this last equality holds because both sides are 0 when j \u2260 h.\nThis shows that the transformation preserves the objective function value, completing the proof.\nGiven this transformation to a mixed-integer linear program (MILP), we now show how we can apply our decomposition technique to the MILP to obtain significant speedups for Bayesian games with multiple follower types.\n5.\nDECOMPOSITION FOR MULTIPLE ADVERSARIES\nThe MILP developed in the previous section handles only one follower.\nSince our security scenario contains multiple follower (robber) types, we change the response function for the follower from a pure strategy into a weighted combination over various pure follower strategies, where the weights are the probabilities of occurrence of each of the follower types.\n5.1 Decomposed MIQP\nTo admit multiple adversaries in our framework, we modify the notation defined in the previous section to reason about multiple follower types.\nWe denote by x the vector of strategies of the leader and by ql the vector of strategies of follower l, with L denoting the index set of follower types.\nWe also denote by X and Q the index sets of the leader's and follower l's pure strategies, respectively.\nWe also index the payoff matrices on each follower l, considering the matrices Rl and Cl.\nUsing this modified notation, we
characterize the optimal solution of follower l's problem, given the leader's k-uniform policy x, with the following optimality conditions:\n\u03a3j\u2208Q qlj = 1 and qlj >= 0 (primal feasibility); al >= \u03a3i\u2208X Clij (xi\/k) for all j \u2208 Q (dual feasibility); and qlj (al \u2212 \u03a3i\u2208X Clij (xi\/k)) = 0 (complementary slackness).\nAgain, considering only optimal pure strategies for follower l's problem, we can linearize the complementarity constraint above.\nWe incorporate these constraints into the leader's problem that selects the optimal k-uniform policy.\nTherefore, given a priori probabilities pl, with l \u2208 L, of facing each follower type, the leader solves the following problem:\nmax_{x,q,a} \u03a3i\u2208X \u03a3l\u2208L \u03a3j\u2208Q pl Rlij (xi\/k) qlj subject to \u03a3i\u2208X xi = k; \u03a3j\u2208Q qlj = 1; 0 <= al \u2212 \u03a3i\u2208X Clij (xi\/k) <= (1 \u2212 qlj) M; xi \u2208 {0, 1, ..., k}; qlj \u2208 {0, 1}. (7)\nProblem (7) for a Bayesian game with multiple follower types is indeed equivalent to Problem (5) on the payoff matrices obtained from the Harsanyi transformation of the game.\nIn fact, every pure strategy j in Problem (5) corresponds to a sequence of pure strategies jl, one for each follower l \u2208 L.\nThis means that qj = 1 if and only if ql_{jl} = 1 for all l \u2208 L.\nIn addition, given the a priori probabilities pl of facing player l, the reward in the Harsanyi transformation payoff table is Rij = \u03a3l\u2208L pl Rl_{i jl}.\nThe same relation holds between C and the Cl.\nThese relations between a pure strategy in the equivalent normal-form game and the pure strategies in the individual games with each follower are key to showing these problems are equivalent.\n5.2 Decomposed MILP\nWe can linearize the quadratic programming Problem (7) through the change of variables zlij = xiqlj, obtaining the following problem:\nmax_{q,z,a} \u03a3i\u2208X \u03a3l\u2208L \u03a3j\u2208Q (pl\/k) Rlij zlij subject to (1) \u03a3i\u2208X \u03a3j\u2208Q zlij = k; (2) \u03a3j\u2208Q zlij <= k; (3) kqlj <= \u03a3i\u2208X zlij <= k; (4) \u03a3j\u2208Q qlj = 1; (5) 0 <= al \u2212 \u03a3i\u2208X Clij (1\/k)(\u03a3h\u2208Q zlih) <= (1 \u2212 qlj) M; (6) \u03a3j\u2208Q zlij = \u03a3j\u2208Q z1ij; (7) zlij \u2208 [0, k]; (8) qlj \u2208 {0, 1}. (8)\nProblems (7) and (8) are equivalent.\nProof: Consider x, ql, al with l \u2208 L a feasible solution of (7).\nWe will show that ql, al, zlij = xiqlj is a feasible solution of (8) of the same objective function value.\nThe equivalence of the objective functions, and constraints 4, 7 and 8 of (8), are satisfied by construction.\nThe fact that \u03a3j\u2208Q zlij = xi, since \u03a3j\u2208Q qlj = 1, explains constraints 1, 2, 5 and 6 of (8).\nConstraint 3 of (8) is satisfied because \u03a3i\u2208X zlij = kqlj.\nLet us now consider ql, zl, al feasible for (8).\nWe will show that ql, al and xi = \u03a3j\u2208Q z1ij are feasible for (7) with the same objective value.\nIn fact, all constraints of (7)
are readily satisfied by construction.\nTo see that the objectives match, notice that for each l exactly one qlj must equal 1 and the rest equal 0.\nSay ql_{jl} = 1; then the third constraint in (8) implies that \u03a3i\u2208X zl_{i jl} = k, which means that zlij = 0 for all i \u2208 X and all j \u2260 jl.\nIn particular this implies that xi = \u03a3j\u2208Q z1ij = z1_{i j1} = zl_{i jl}, the last equality following from constraint 6 of (8).\nTherefore xiqlj = zl_{i jl} qlj = zlij; this last equality holds because both sides are 0 when j \u2260 jl.\nEffectively, constraint 6 ensures that all the adversaries are calculating their best responses against a particular fixed policy of the agent.\nThis shows that the transformation preserves the objective function value, completing the proof.\nWe can therefore solve this equivalent linear integer program with efficient integer programming packages which can handle problems with thousands of integer variables.\nWe implemented the decomposed MILP, and the results are shown in the following section.\n6.\nEXPERIMENTAL RESULTS\nThe patrolling domain and the payoffs for the associated game are detailed in Sections 2 and 3.\nWe performed experiments for this game in worlds of three and four houses, with patrols consisting of two houses.\nThe description given in Section 2 is used to generate a base case for both the security agent and robber payoff functions.\nThe payoff tables for additional robber types are constructed and added to the game by adding a random distribution of varying size to the payoffs in the base case.\nAll games are normalized so that, for each robber type, the minimum and maximum payoffs to the security agent and robber are 0 and 1, respectively.\nUsing the data generated, we performed the experiments using four methods for generating the security agent's strategy:\n\u2022 uniform randomization\n\u2022 ASAP\n\u2022 the multiple linear programs method from [5] (to find the true optimal strategy)\n\u2022 the highest-reward Bayes-Nash equilibrium, found using the MIP-Nash algorithm [17]\nThe last three methods
were applied using CPLEX 8.1.\nBecause the last two methods are designed for normal-form games rather than Bayesian games, the games were first converted using the Harsanyi transformation [8].\nThe uniform randomization method simply chooses a uniform random policy over all possible patrol routes.\nWe use this method as a simple baseline to measure the performance of our heuristics.\nWe anticipated that the uniform policy would perform reasonably well, since maximum-entropy policies have been shown to be effective in multiagent security domains [14].\nThe highest-reward Bayes-Nash equilibria were used in order to demonstrate the higher reward gained by looking for an optimal policy rather than an equilibrium in Stackelberg games such as our security domain.\nBased on our experiments we present three sets of graphs to demonstrate (1) the runtime of ASAP compared to other common methods for finding a strategy, (2) the reward guaranteed by ASAP compared to other methods, and (3) the effect of varying the parameter k, the size of the multiset, on the performance of ASAP.\nIn the first two sets of graphs, ASAP is run using a multiset of 80 elements; in the third set this number is varied.\nThe first set of graphs, shown in Figure 1, shows the runtime graphs for the three-house (left column) and four-house (right column) domains.\nEach of the three rows of graphs corresponds to a different randomly generated scenario.\nThe x-axis shows the number of robber types the security agent faces, and the y-axis shows the runtime in seconds.\nAll experiments that did not conclude in 30 minutes (1800 seconds) were cut off.\nThe runtime for the uniform policy is always negligible, irrespective of the number of adversaries, and hence is not shown.\nThe ASAP algorithm clearly outperforms the optimal multiple-LP method as well as the MIP-Nash algorithm for finding the highest-reward Bayes-Nash equilibrium with respect to runtime.\nFigure 1: Runtimes for various algorithms on problems of 3 and 4 houses.\nFor a domain of three houses, the optimal method cannot reach a solution for more than seven robber types, and for four houses it cannot solve for more than six types within the cutoff time in any of the three scenarios.\nMIP-Nash solves for even fewer robber types within the cutoff time.\nOn the other hand, ASAP runs much faster, and is able to solve for at least 20 adversaries in the three-house scenarios and for at least 12 adversaries in the four-house scenarios within the cutoff time.\nThe runtime of ASAP does not increase strictly with the number of robber types for each scenario, but in general, the addition of more types increases the runtime required.\nThe second set of graphs, Figure 2, shows the reward to the patrol agent given by each method for three scenarios in the three-house (left column) and four-house (right column) domains.\nThis reward is the utility received by the security agent in the patrolling game, and is not expressed as a percentage of the optimal reward, since it was not possible to obtain the optimal reward as the number of robber types increased.\nThe uniform policy consistently provides the lowest reward in both domains, while the optimal method of course produces the optimal reward.\nThe ASAP method remains consistently close to the optimal, even as the number of robber types increases.\nThe highest-reward Bayes-Nash equilibria, provided by the MIP-Nash method, produced rewards higher than the uniform method, but lower than ASAP.\nThis difference clearly illustrates the gains in the patrolling domain from committing to a strategy as the leader in a Stackelberg game, rather than playing a standard Bayes-Nash strategy.\nThe third set of graphs, shown in Figure 3, shows the effect of the multiset size on runtime in seconds (left column) and reward (right column), again expressed as the reward received by the
security agent in the patrolling game, and not as a percentage of the optimal reward.\nFigure 2: Reward for various algorithms on problems of 3 and 4 houses.\nResults here are for the three-house domain.\nThe trend is that as the multiset size increases, both the runtime and the reward level increase.\nNot surprisingly, the reward increases monotonically with the multiset size, but what is interesting is that there is relatively little benefit to using a large multiset in this domain.\nIn all cases, the reward given by a multiset of 10 elements was at least 96% of the reward given by an 80-element multiset.\nThe runtime does not always increase strictly with the multiset size; indeed, in one example (scenario 2 with 20 robber types), using a multiset of 10 elements took 1228 seconds, while using 80 elements took only 617 seconds.\nIn general, runtime should increase, since a larger multiset means a larger domain for the variables in the MILP, and thus a larger search space.\nHowever, an increase in the number of variables can sometimes allow for a policy to be constructed more quickly due to more flexibility in the problem.","keyphrases":["heurist approach","adversari multiag domain","bayesian game","np-hard","agent secur via approxim polici","agent system secur","bay-nash equilibrium","bayesian and stackelberg game","patrol domain","decomposit for multipl adversari","mix-integ linear program","game theori"],"prmu":["P","P","P","P","M","M","M","M","M","M","M","M"]} {"id":"I-64","title":"Organizational Self-Design in Semi-dynamic Environments","abstract":"Organizations are an important basis for coordination in multiagent systems. However, there is no best way to organize and all ways of organizing are not equally effective. Attempting to optimize an organizational structure depends strongly on environmental features including problem characteristics, available resources, and agent capabilities.
If the environment is dynamic, the environmental conditions or the problem task structure may change over time. This precludes the use of static, design-time generated organizational structures in such systems. On the other hand, for many real environments, the problems are not totally unique either: certain characteristics and conditions change slowly, if at all, and these can have an important effect in creating stable organizational structures. Organizational Self-Design (OSD) has been proposed as an approach for constructing suitable organizational structures at runtime. We extend the existing OSD approach to include worth-oriented domains, model resources other than processor resources, and build robustness into the organization. We then evaluate our approach against the contract-net approach and show that our OSD agents perform better, are more efficient, and are more flexible to changes in the environment.","lvl-1":"Organizational Self-Design in Semi-dynamic Environments Sachin Kamboj \u2217 and Keith S.
Decker Department of Computer and Information Sciences University of Delaware Newark, DE 19716 {kamboj, decker}@cis.udel.edu ABSTRACT Organizations are an important basis for coordination in multiagent systems.\nHowever, there is no best way to organize and all ways of organizing are not equally effective.\nAttempting to optimize an organizational structure depends strongly on environmental features including problem characteristics, available resources, and agent capabilities.\nIf the environment is dynamic, the environmental conditions or the problem task structure may change over time.\nThis precludes the use of static, design-time generated organizational structures in such systems.\nOn the other hand, for many real environments, the problems are not totally unique either: certain characteristics and conditions change slowly, if at all, and these can have an important effect in creating stable organizational structures.\nOrganizational Self-Design (OSD) has been proposed as an approach for constructing suitable organizational structures at runtime.\nWe extend the existing OSD approach to include worth-oriented domains, model resources other than processor resources, and build robustness into the organization.\nWe then evaluate our approach against the contract-net approach and show that our OSD agents perform better, are more efficient, and are more flexible to changes in the environment.\nCategories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent systems General Terms Algorithms, Design, Performance, Experimentation 1.\nINTRODUCTION In this paper, we are primarily interested in the organizational design of a multiagent system - the roles enacted by the agents, the coordination between the roles, and the number and assignment of roles and resources to the individual agents.\n\u2217 Primary author is a student.\nThe organizational design is complicated by the fact that there is no best way to organize and all ways of
organizing are not equally effective [2].\nInstead, the optimal organizational structure depends both on the problem at hand and the environmental conditions under which the problem needs to be solved.\nThe environmental conditions may not be known a priori, or may change over time, which would preclude the use of a static organizational structure.\nOn the other hand, all problem instances and environmental conditions are not always unique, which would render inefficient the use of a new, bespoke organizational structure for every problem instance.\nOrganizational Self-Design (OSD) [4, 10] has been proposed as an approach to designing organizations at run-time in which the agents are responsible for generating their own organizational structures.\nWe believe that OSD is especially suited to the above scenario, in which the environment is semi-dynamic, as the agents can adapt to changes in the task structures and environmental conditions, while still being able to generate relatively stable organizational structures that exploit the common characteristics across problem instances.\nIn our approach (as in [10]), we define two operators for OSD, agent spawning and composition: when an agent becomes overloaded, it spawns off a new agent to handle part of its task load\/responsibility; when an agent lies idle for an extended period of time, it may decide to compose with another agent.\nWe use T\u00c6MS as the underlying representation for our problem solving requests.\nT\u00c6MS [11] (Task Analysis, Environment Modeling and Simulation) is a computational framework for representing and reasoning about complex task environments in which tasks (problems) are represented using extended hierarchical task structures [3].\nThe root node of the task structure represents the high-level goal that the agent is trying to achieve.\nThe sub-nodes of a node represent the subtasks and methods that make up the high-level task.\nThe leaf nodes are at the lowest level of abstraction and
represent executable methods - the primitive actions that the agents can perform.\nThe executable methods themselves may have multiple outcomes, with different probabilities and different characteristics such as quality, cost and duration.\nT\u00c6MS also allows various mechanisms for specifying subtask variations and alternatives, i.e. each node in T\u00c6MS is labeled with a characteristic accumulation function that describes how many or which subgoals or sets of subgoals need to be achieved in order to achieve a particular higher-level goal.\nT\u00c6MS has been used to model many different problem-solving environments, including distributed sensor networks, information gathering, hospital scheduling, EMS, and military planning [5, 6, 3, 15].\nThe main contributions of this paper are as follows: 1.\nWe extend existing OSD approaches to use T\u00c6MS as the underlying problem representation, which allows us to model and use OSD for worth-oriented domains.\nThis in turn allows us to reason about (1) alternative task and role assignments that make different quality\/cost tradeoffs and generate different organizational structures and (2) uncertainties in the execution of tasks.\n2.\nWe model the use of resources other than processor resources.\n3.\nWe incorporate robustness into the organizational structures.\n2.\nRELATED WORK The concept of OSD is not new and has been around since the work of Corkill and Lesser on the DVMT system [4], even though the concept was not fully developed by them.\nMore recently, Dignum et al. [8] have described OSD in the context of the reorganization of agent societies and attempt to classify the various kinds of reorganization possible according to the reason for reorganization, the type of reorganization, and who is responsible for the reorganization decision.\nAccording to their scheme, the type of reorganization done by our agents falls into the category of structural changes and the reorganization decision can be described
as shared command.\nOur research primarily builds on the work done by Gasser and Ishida [10], in which they use OSD in the context of a production system in order to perform adaptive work allocation and load balancing.\nIn their approach, they define two organizational primitives, composition and decomposition, which are similar to our organizational primitives of agent spawning and composition.\nThe main difference between their work and our work is that we use T\u00c6MS as the underlying representation for our problems, which allows, firstly, the representation of a larger, more general class of problems and, secondly, quantitative reasoning over task structures.\nThe latter also allows us to incorporate different design-to-criteria schedulers [16].\nHorling and Lesser [9] present a different, top-down approach to OSD that also uses T\u00c6MS as the underlying representation.\nHowever, their approach assumes a fixed number of agents with designated (and fixed) roles.\nOSD is used in their work to change the interaction patterns between the agents, and results in the agents using different subtasks or different resources to achieve their goals.\nWe also extend the work done by Sycara et al. [13] on Agent Cloning, which is another approach to resource allocation and load balancing.\nIn this approach, the authors present agent cloning as a possible response to agent overload - if an agent detects that it is overloaded and that there are spare (unused) resources in the system, the agent clones itself and gives its clone some part of its task load.\nHence, agent cloning can be thought of as akin to agent spawning in our approach.\nHowever, the two approaches are different in that there is no specialization of the agents in the former: the cloned agents are perfect replicas of the original agents and fulfill the same roles and responsibilities as the original agents.\nIn our approach, on the other hand, the spawned agents are specialized on a subpart of the
spawning agent's task structure, which is no longer the responsibility of the spawning agent.\nHence, our approach also deals with explicit organization formation and the coordination of the agents' tasks, which are not handled by their approach.\nOther approaches to OSD include the work of So and Durfee [14], who describe a top-down model of OSD in the context of Cooperative Distributed Problem Solving (CDPS), and Barber and Martin [1], who describe an adaptive decision-making framework in which agents are able to reorganize decision-making groups by dynamically changing (1) who makes the decisions for a particular goal and (2) who must carry out these decisions.\nThe latter work is primarily concerned with coordination decisions and can be used to complement our OSD work, which primarily deals with task and resource allocation.\n3.\nTASK AND RESOURCE MODEL To ground our discussion of OSD, we now formally describe our task and resource model.\nIn our model, the primary input to the multi-agent system (MAS) is an ordered set of problem-solving requests or task instances, < P1, P2, P3, ..., Pn >, where each problem-solving request, Pi, can be represented using the tuple < ti, ai, di >.\nIn this scheme, ti is the underlying T\u00c6MS task structure, ai \u2208 N+ is the arrival time and di \u2208 N+ is the deadline of the ith task instance1.\nThe MAS has no prior knowledge about the task ti before the arrival time, ai.\nIn order for the MAS to accrue quality, the task ti must be completed before the deadline, di.\nFurthermore, every underlying task structure, ti, can be represented using the tuple < T, \u03c4, M, Q, E, R, \u03c1, C >, where: \u2022 T is the set of tasks.\nThe tasks are non-leaf nodes in a T\u00c6MS task structure and are used to denote goals that the agents must achieve.\nTasks have a characteristic accumulation function (see below) and are themselves composed of other subtasks and\/or methods that need to be achieved in order to achieve the goal
represented by that task.\nFormally, each task Tj can be represented using the pair (qj, sj), where qj \u2208 Q and sj \u2282 (T \u222a M).\nFor our convenience, we define two functions SUBTASKS(Task) : T \u2192 P(T \u222a M) and SUPERTASKS(T\u00c6MS node) : T \u222a M \u2192 P(T), that return the subtasks and supertasks of a T\u00c6MS node respectively2.\n\u2022 \u03c4 \u2208 T is the root of the task structure, i.e. the highest-level goal that the organization is trying to achieve.\nThe quality accrued on a problem is equal to the quality of task \u03c4.\n\u2022 M is the set of executable methods, i.e., M = {m1, m2, ..., mn}, where each method, mk, is represented using the outcome distribution {(o1, p1), (o2, p2), ..., (om, pm)}.\nIn the pair (ol, pl), ol is an outcome and pl is the probability that executing mk will result in the outcome ol.\nFurthermore, each outcome, ol, is represented using the triple (ql, cl, dl), where ql is the quality distribution, cl is the cost distribution and dl is the duration distribution of outcome ol.\nEach discrete distribution is itself a set of pairs, {(n1, p1), (n2, p2), ..., (nn, pn)}, where pi \u2208 \u211d+ is the probability that the outcome will have a quality\/cost\/duration of ni \u2208 N, depending on the type of distribution, and \u03a3i=1..n pi = 1.\n\u2022 Q is the set of quality\/characteristic accumulation functions (CAFs).\nThe CAFs determine how a task group accrues quality given the quality accrued by its subtasks\/methods.\nFor our research, we use four CAFs: MIN, MAX, SUM and EXACTLY ONE.\nSee [5] for formal definitions.\n\u2022 E is the set of (non-local) effects.\nAgain, see [5] for formal definitions.\n\u2022 R is the set of resources.\n\u2022 \u03c1 is a mapping from an executable method and resource to the quantity of that resource needed (by an agent) to schedule\/execute that method.\nThat is, \u03c1(method, resource) : M \u00d7 R \u2192 N.
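The discrete-distribution model just described (each outcome carries quality, cost and duration distributions, each a set of (value, probability) pairs summing to 1) can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the paper; all names and the example numbers are invented:

```python
# Sketch of the TAEMS method/outcome model described above (hypothetical names).
# A discrete distribution is a list of (value, probability) pairs with
# probabilities summing to 1, as in the definition of M above.

def expected_value(dist):
    """E[X] of a discrete distribution given as (value, probability) pairs."""
    return sum(n * p for n, p in dist)

# A method is a set of outcomes; each outcome has an occurrence probability
# and a (quality, cost, duration) triple of distributions.
method = [
    # (outcome_probability, (quality_dist, cost_dist, duration_dist))
    (0.8, ([(10, 1.0)], [(0, 1.0)], [(4, 0.5), (6, 0.5)])),
    (0.2, ([(0, 1.0)],  [(0, 1.0)], [(2, 1.0)])),
]

# Expected duration of the method: weight each outcome's expected duration by
# the outcome's probability (the same computation GETEXPECTEDDURATION performs
# later in the paper).
expected_duration = sum(p_out * expected_value(d) for p_out, (q, c, d) in method)
print(expected_duration)  # 0.8 * 5.0 + 0.2 * 2.0 = 4.4
```

The same `expected_value` helper applies unchanged to the quality and cost distributions, which is why the paper can treat all three characteristics uniformly.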
1 N is the set of natural numbers including zero and N+ is the set of positive natural numbers excluding zero.\n2 P is the power set of a set, i.e., the set of all subsets of that set.\nThe Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1229\n\u2022 C is a mapping from a resource to the cost of that resource, that is, C(resource) : R \u2192 N+.\nWe also make the following set of assumptions in our research: 1.\nThe agents in the MAS are drawn from the infinite set A = {a1, a2, a3, ...}.\nThat is, we do not assume a fixed set of agents - instead agents are created (spawned) and destroyed (combined) as needed.\n2.\nAll problem solving requests have the same underlying task structure, i.e. \u2203t \u2200i : ti = t, where t is the task structure of the problem that the MAS is trying to solve.\nWe believe that this assumption holds for many of the practical problems that we have in mind, because T\u00c6MS task structures are basically high-level plans for achieving some goal in which the steps required for achieving the goal - as well as the possible contingency situations - have been pre-computed offline and represented in the task structure.\nBecause it represents many contingencies, alternatives, uncertain characteristics and runtime flexible choices, the same underlying task structure can play out very differently across specific instances.\n3.\nAll resources are exclusive, i.e., only one agent may use a resource at any given time.\nFurthermore, we assume that each agent has to own the set of resources that it needs, even though the resource ownership can change during the evolution of the organization.\n4.\nAll resources are non-consumable.\n4.\nORGANIZATIONAL SELF DESIGN 4.1 Agent Roles and Relationships The organizational structure is primarily composed of roles and the relationships between the roles.\nOne or more agents may enact a particular role, and one or more roles must be enacted by every agent.\nThe roles may be thought of as the parts
played by the agents enacting the roles in the solution to the problem, and reflect the long-term commitments made by the agents in question to a certain course of action (that includes task responsibility, authority, and mechanisms for coordination).\nThe relationships between the roles are the coordination relationships that exist between the subparts of a problem.\nIn our approach, the organizational design is directly contingent on the task structure and the environmental conditions under which the problems need to be solved.\nWe define a role as a T\u00c6MS subtree rooted at a particular node.\nHence, the set (T \u222a M) encompasses the space of all possible roles.\nNote that, by definition, a role may consist of one or more other (sub-) roles, as a particular T\u00c6MS node may itself be made up of one or more subtrees.\nHence, we will use the terms role, task node and task interchangeably.\nWe also differentiate between local and managed (non-local) roles.\nLocal roles are roles that are the sole responsibility of a single agent, that is, the agent concerned is responsible for solving all the subproblems of the tree rooted at that node.\nFor such roles, the agent concerned can do one or more subtasks, solely at its discretion and without consultation with any other agent.\nManaged roles, on the other hand, must be coordinated between two or more agents, as such roles will have two or more descendent local roles that are the responsibility of two or more separate agents.\nAny of the existing coordination mechanisms (such as GPGP [11]) can be used to achieve this coordination.\nFormally, if the function TYPE(Agent, T\u00c6MS Node) : A \u00d7 (T \u222a M) \u2192 {Local, Managed, Unassigned} returns the type of the responsibility of the agent towards the specified role, then\nTYPE(a, r) = Local \u21d0\u21d2 \u2200ri \u2208 SUBTASKS(r) : TYPE(a, ri) = Local\nTYPE(a, r) = Managed \u21d0\u21d2 [\u2203a1 \u2203r1 (r1 \u2208 SUBTASKS(r)) \u2227 (TYPE(a1, r1) = Managed)] \u2228
[\u2203a2 \u2203a3 \u2203r2 \u2203r3 (a2 \u2260 a3) \u2227 (r2 \u2260 r3) \u2227 (r2 \u2208 SUBTASKS(r)) \u2227 (r3 \u2208 SUBTASKS(r)) \u2227 (TYPE(a2, r2) = Local) \u2227 (TYPE(a3, r3) = Local)]\n4.2 Organization Formation and Adaptation\nTo form or adapt their organizational structure, the agents use two organizational primitives: agent spawning and composition.\nThese two primitives result in a change in the assignment of roles to the agents.\nAgent spawning is the generation of a new agent to handle a subset of the roles of the spawning agent.\nAgent composition, on the other hand, is orthogonal to agent spawning and involves the merging of two or more agents together - the combined agent is responsible for enacting all the roles of the agents being merged.\nIn order to participate in the formation and adaptation of an organization, the agents need to explicitly represent and reason about the role assignments.\nHence, as a part of its organizational knowledge, each agent keeps a list of the local roles that it is enacting and the non-local roles that it is managing.\nNote that each agent only has limited organizational knowledge and is individually responsible for spawning off or combining with another agent, as needed, based on its estimate of its performance so far.\nTo see how the organizational primitives work, we first describe four rules that can be thought of as the organizational invariants which will always hold before and after any organizational change: 1.\nFor a local role, all the descendent nodes of that role will be local.\nTYPE(a, r) = Local =\u21d2 \u2200ri \u2208 SUBTASKS(r) : TYPE(a, ri) = Local 2.\nSimilarly, for a managed (non-local) role, all the ascendent nodes of that role will be managed.\nTYPE(a, r) = Managed =\u21d2 \u2200ri \u2208 SUPERTASKS(r) \u2203ai (ai \u2208 A) \u2227 (TYPE(ai, ri) = Managed) 3.\nIf two local roles that are enacted by two different agents share a common ancestor, that ancestor will be a managed role.\n(TYPE(a1, r1) = Local) \u2227
(TYPE(a2, r2) = Local) \u2227 (a1 \u2260 a2) \u2227 (r1 \u2260 r2) =\u21d2 \u2200ri \u2208 (SUPERTASKS(r1) \u2229 SUPERTASKS(r2)) \u2203ai (ai \u2208 A) \u2227 (TYPE(ai, ri) = Managed) 4.\nIf all the direct descendants of a role are local and the sole responsibility of a single agent, that role will be a local role.\n\u2203a \u2203r \u2200ri \u2208 SUBTASKS(r) (a \u2208 A) \u2227 (r \u2208 (T \u222a M)) \u2227 (TYPE(a, ri) = Local) =\u21d2 (TYPE(a, r) = Local)\nWhen a new agent is spawned, the agent doing the spawning will assign one or more of its local roles to the newly spawned agent (Algorithm 1).\nTo preserve invariant rules 2 and 3, the spawning agent will change the type of all the ascendent roles of the nodes assigned to the newly spawned agent from local to managed.\nNote that the spawning agent is only changing its local organizational knowledge and not the global organizational knowledge.\nAt the same time, the spawning agent is taking on the task of managing the previously local roles.\nSimilarly, the newly spawned agent will only know of its just-assigned local roles.\nWhen an agent (the composing agent) decides to compose with another agent (the composed agent), the organizational knowledge of the composing agent is merged with the organizational knowledge of the composed agent.\nTo do this, the composed agent takes on the roles of all the local and managed tasks of the composing agent.\nCare is taken to preserve organizational invariant rules 1 and 4.\nAlgorithm 1 SpawnAgent(SpawningAgent) : A \u2192 A\n1: LocalRoles \u2190 {r \u2286 (T \u222a M) | TYPE(SpawningAgent, r) = Local}\n2: NewAgent \u2190 CREATENEWAGENT()\n3: NewAgentRoles \u2190 FINDROLESFORSPAWNEDAGENT(LocalRoles)\n4: for role in NewAgentRoles do\n5: TYPE(NewAgent, role) \u2190 Local\n6: TYPE(SpawningAgent, role) \u2190 Unassigned\n7: PRESERVEORGANIZATIONALINVARIANTS()\n8: return NewAgent\nAlgorithm 2 FINDROLESFORSPAWNEDAGENT
Algorithm 2 FINDROLESFORSPAWNEDAGENT(SpawningAgentRoles) : (T ∪ M) → (T ∪ M)
 1: R ← SpawningAgentRoles
 2: selectedRoles ← nil
 3: for roleSet in [P(R) − {∅, R}] do
 4:   if COST(R, roleSet) < COST(R, selectedRoles) then
 5:     selectedRoles ← roleSet
 6: return selectedRoles

Algorithm 3 GETRESOURCECOST(Roles) : (T ∪ M) → R+
 1: M ← (Roles ∩ M)
 2: cost ← 0
 3: for resource in R do
 4:   maxResourceUsage ← 0
 5:   for method in M do
 6:     if ρ(method, resource) > maxResourceUsage then
 7:       maxResourceUsage ← ρ(method, resource)
 8:   cost ← cost + [C(resource) × maxResourceUsage]
 9: return cost

Algorithm 4 GETEXPECTEDDURATION(Roles) : (T ∪ M) → N+
 1: M ← (Roles ∩ M)
 2: exptDuration ← 0
 3: for [outcome = <(q, c, d), outcomeProb>] in M do
 4:   exptOutcomeDuration ← 0
 5:   for (n, p) in d do
 6:     exptOutcomeDuration ← exptOutcomeDuration + (n × p)
 7:   exptDuration ← exptDuration + [exptOutcomeDuration × outcomeProb]
 8: return exptDuration

4.2.1 Role allocation during spawning

One of the key questions that the agent doing the spawning needs to answer is: which of its local roles should it assign to the newly spawned agent, and which should it keep for itself? The onus of answering this question falls on the FINDROLESFORSPAWNEDAGENT() function, shown in Algorithm 2 above. This function takes the set of local roles that are the responsibility of the spawning agent and returns a subset of those roles for allocation to the newly spawned agent. This subset is selected based on the results of a cost function, as is evident from line 4 of the algorithm. Since the use of different cost functions will result in different organizational structures, and since we have no a priori reason to believe that one cost function will outperform the others, we evaluated the performance of three different cost functions based on the following three different heuristics:

Allocating top-most roles first: This heuristic always breaks up at
the top-most nodes first. That is, if the nodes of a task structure were numbered, starting from the root, in a breadth-first fashion, then this heuristic would select the local role of the spawning agent that had the lowest number and break up that node (by allocating one of its subtasks to the newly spawned agent). We selected this heuristic because (a) it is the simplest to implement, (b) it is the fastest to run (the role allocation can be done in constant time, without the need for a search through the task structure) and (c) it makes sense from a human-organizational perspective, as this heuristic corresponds to dividing an organization along functional lines.

Minimizing total resources: This heuristic attempts to minimize the total cost of the resources needed by the agents in the organization to execute their roles. If R is the set of local roles of the spawning agent and R′ is the subset of roles being evaluated for allocation to the newly spawned agent, the cost function for this heuristic is given by:

COST(R, R′) ← GETRESOURCECOST(R − R′) + GETRESOURCECOST(R′)

Balancing execution time: This heuristic attempts to allocate roles in a way that tries to ensure that each agent has an equal amount of work to do. For each potential role allocation, this heuristic works by calculating the absolute value of the difference between the expected duration of its own roles after spawning and the expected duration of the roles of the newly spawned agent. If this difference is close to zero, then both agents have roughly the same amount of work to do. Formally, if R is the set of local roles of the spawning agent and R′ is the subset of roles being evaluated for allocation to the newly spawned agent, then the cost function for this heuristic is given by:

COST(R, R′) ← |GETEXPECTEDDURATION(R − R′) − GETEXPECTEDDURATION(R′)|

To evaluate these heuristics, we ran a series of experiments that tested the performance of the resultant organization on randomly generated task structures.
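The subset search of Algorithm 2, together with the minimizing-resources and balancing-execution-time cost functions above, can be sketched in Python. This is an illustrative reconstruction, not the authors' code; the role names, durations, resource usages and prices in the example are invented.

```python
# Sketch of Algorithm 2's exhaustive search over the proper, non-empty
# subsets of the spawner's local roles, with the MR and BET cost heuristics
# of Section 4.2.1. All concrete inputs are illustrative.
from itertools import chain, combinations

def proper_subsets(roles):
    """P(roles) minus the empty set and roles itself."""
    roles = sorted(roles)
    return chain.from_iterable(combinations(roles, k) for k in range(1, len(roles)))

def find_roles_for_spawned_agent(local_roles, cost):
    selected, best = None, float("inf")
    for role_set in proper_subsets(local_roles):
        c = cost(set(local_roles), set(role_set))
        if c < best:
            selected, best = set(role_set), c
    return selected

def resource_cost(roles, rho, prices):
    """Algorithm 3: each resource is billed at its peak usage over the roles."""
    return sum(price * max((rho.get((r, res), 0) for r in roles), default=0)
               for res, price in prices.items())

def mr_cost(R, Rp, rho, prices):       # minimizing total resources
    return resource_cost(R - Rp, rho, prices) + resource_cost(Rp, rho, prices)

def bet_cost(R, Rp, duration):         # balancing execution time
    return abs(sum(duration[r] for r in R - Rp) - sum(duration[r] for r in Rp))
```

The search is exponential in the number of local roles, which is tolerable here because an agent only splits its own (small) role set.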
The results are given in Section 6.

4.3 Reasons for Organizational Change

As organizational change is expensive (requiring clock cycles, allocation/deallocation of resources, etc.), we want a stable organizational structure that is suited to the task and environmental conditions at hand. Hence, we wish to change the organizational structure only if the task structure and/or environmental conditions change. Also, to allow temporary changes in the environmental conditions to be overlooked, we want the probability of an organizational change to be inversely proportional to the time since the last organizational change. If this time is relatively short, the agents are still adjusting to the changes in the environment; hence the probability of an agent initiating an organizational change should be high. Similarly, if the time since the last organizational change is relatively large, we wish to have a low probability of organizational change. To allow this variation in the probability of organizational change, we use simulated annealing to determine the probability of keeping an existing organizational structure. This probability is calculated using the annealing formula:

p = e^(−ΔE/(kT))

where ΔE is the amount of overload/underload, T is the time since the last organizational change and k is a constant. The mechanism for computing ΔE is different for agent spawning than for agent composition and is described below. From this formula, if T is large, then p, the probability of keeping the existing organizational structure, is large. Note that the value of p is capped at a certain threshold in order to prevent the organization from being too sluggish in its reaction to environmental change. To compute if agent spawning is necessary, we use the annealing equation with

ΔE = 1/(α · Slack)

where α is a constant and Slack is the
difference between the total time available for completion of the outstanding tasks and the sum of the expected time required for completion of each task on the task queue. Also, if the amount of Slack is negative, immediate agent spawning will occur without use of the annealing equation. To calculate if agent composition is necessary, we again use the simulated annealing equation. However, in this case, ΔE = β · IdleTime, where β is a constant and IdleTime is the amount of time for which the agent was idle. If the agent has been sitting idle for a long period of time, ΔE is large, which implies that p, the probability of keeping the existing organizational structure, is low.

5. ORGANIZATION AND ROBUSTNESS

There are two approaches commonly used to achieve robustness in multiagent systems:
1. the Survivalist Approach [12], which involves replicating domain agents in order to allow the replicas to take over should the original agents fail; and
2. the Citizen Approach [7], which involves the use of special monitoring agents (called Sentinel Agents) in order to detect agent failure and dynamically start up new agents in lieu of the failed ones.

The advantage of the survivalist approach is that recovery is relatively fast, since the replicas pre-exist in the organization and can take over as soon as a failure is detected. The advantages of the citizen approach are that it requires fewer resources and little modification to the existing organizational structure and coordination protocol, and that it is simpler to implement. Both of these approaches can be applied to achieve robustness in our OSD agents, and it is not clear which approach would be better; a thorough empirical evaluation of both approaches would be required. In this paper, we present the citizen approach, as it has been shown by [7] to have better performance than the survivalist approach in the Contract Net protocol, and leave the presentation and evaluation of the survivalist approach to a future paper.
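The annealing-based reorganization tests of Section 4.3 can be sketched as follows. The constants α, β, k and the probability cap are invented placeholders (the paper treats them as tunable constants and does not report values), as are the function names.

```python
# Sketch of the Section 4.3 annealing tests. ALPHA, BETA, K and P_CAP are
# invented placeholder values; the paper treats them as tunable constants.
import math
import random

K, ALPHA, BETA, P_CAP = 1.0, 0.1, 0.05, 0.95

def p_keep(delta_e, t_since_change, k=K, cap=P_CAP):
    """p = e^(-dE/(kT)), capped so the organization never becomes too sluggish."""
    return min(math.exp(-delta_e / (k * t_since_change)), cap)

def should_spawn(slack, t_since_change, rng=random.random):
    if slack <= 0:                       # negative slack: spawn immediately
        return True
    delta_e = 1.0 / (ALPHA * slack)      # overload measure for spawning
    return rng() > p_keep(delta_e, t_since_change)

def should_compose(idle_time, t_since_change, rng=random.random):
    delta_e = BETA * idle_time           # underload measure for composition
    return rng() > p_keep(delta_e, t_since_change)
```

With these placeholder constants, a long-idle agent is very likely to compose, while an agent with ample slack and a recent reorganization almost always keeps the current structure.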
To implement the citizen approach, we designed special monitoring agents that periodically poll the domain agents by sending them "are you alive" messages that the agents must respond to. If an agent fails, it will not respond to such messages; the monitoring agents can then create a new agent and delegate the responsibilities of the dead agent to the new agent. This delegation of responsibilities is non-trivial, as the monitoring agents do not have access to the internal state of the domain agents, which is itself composed of two components: the organizational knowledge and the task information. The former consists of the information about the local and managerial roles of the agent, while the latter is composed of the methods that are being scheduled and executed and the tasks that have been delegated to other agents. This state information can only be deduced by monitoring and recording the messages being sent and received by the domain agents. For example, in order to deduce the organizational knowledge, the monitoring agents need to keep track of the spawn and compose messages sent by the agents in order to trigger the spawning and composition operations respectively. The deduction process is particularly complicated in the case of the task information, as the monitoring agents do not have access to the private schedules of the domain agents. The details are beyond the scope of this paper.

6. EVALUATION

To evaluate our approach, we ran a series of experiments that simulated the operation of both the OSD agents and the Contract Net agents on various task structures with varied arrival rates and deadlines. At the start of each experiment, a random TÆMS task structure was generated with a specified depth and branching factor. During the course of the experiment, a series of task instances (problems) arrive at the organization and must be completed by the agents before their specified deadlines.
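Returning briefly to the citizen approach of Section 5, the poll-and-replace loop can be sketched as follows. The DomainAgent and MonitoringAgent classes are illustrative stand-ins; as discussed above, a real recovery would also rebuild the replacement's organizational and task state from the recorded message traffic.

```python
# Minimal sketch of the citizen-approach monitor from Section 5: poll each
# domain agent with an "are you alive" message and replace non-responders.
# The classes are illustrative, and real recovery would also rebuild the
# replacement's state from the recorded message traffic.

class DomainAgent:
    def __init__(self, name, alive=True):
        self.name, self.alive = name, alive
    def ping(self):
        return self.alive            # a failed agent never answers

class MonitoringAgent:
    def __init__(self, agents):
        self.agents = list(agents)
    def poll(self):
        """One monitoring round; returns the names of replaced agents."""
        replaced = []
        for i, agent in enumerate(self.agents):
            if not agent.ping():
                self.agents[i] = DomainAgent(agent.name + "-replacement")
                replaced.append(agent.name)
        return replaced
```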
To directly compare the OSD approach with the Contract Net approach, each experiment was repeated several times: using OSD agents on the first run and a different number of Contract Net agents on each subsequent run. We were careful to use the same task structure, task arrival times, task deadlines and random numbers for each of these trials. We divided the experiments into two groups: experiments in which the environment was static (fixed task arrival rates and deadlines) and experiments in which the environment was dynamic (varying arrival rates and/or deadlines). The two graphs in Figure 1 show the average performance of the OSD organization against the Contract Net organizations with 8, 10, 12 and 14 agents. The results shown are the averages of running 40 experiments. 20 of those experiments had a static environment with a fixed task arrival time of 15 cycles and a deadline window of 20 cycles. The remaining 20 experiments had a varying task arrival rate: the task arrival rate was changed from 15 cycles to 30 cycles and back to 15 cycles after every 20 tasks. In all the experiments, the task structures were randomly generated with a maximum depth of 4 and a maximum branching factor of 3. The runtime of all the experiments was 2500 cycles. We tested several hypotheses relating to the comparative performance of our OSD approach using the Wilcoxon Matched-Pair Signed-Rank tests. Matched-Pair signifies that we are comparing the performance of each system on precisely the same randomized task set within each separate experiment. The tested hypotheses are:

The OSD organization requires fewer agents to complete an equal or larger number of tasks when compared to the Contract Net organization: To test this hypothesis, we tested the stronger null hypothesis that states that the contract net agents complete more tasks. This null hypothesis is rejected for all contract net organizations with fewer than 14 agents (static: p < 0.0003; dynamic: p < 0.03). For
large contract net organizations, the number of tasks completed is statistically equivalent to the number completed by the OSD agents; however, the number of agents used by the OSD organization is smaller: 9.59 agents (in the static case) and 7.38 agents (in the dynamic case) versus 14 contract net agents.³ Thus the original hypothesis, that OSD requires fewer agents to complete an equal or larger number of tasks, is upheld.

³ These values should not be construed as an indication of the scalability of our approach. We have tested our approach on organizations with more than 300 agents, which is significantly greater than the number of agents needed for the kind of applications that we have in mind (i.e. web service choreography, efficient dynamic use of grid computing, distributed information gathering, etc.).

Figure 1: Graph comparing the average performance of the OSD organization with the Contract Net organizations (with 8, 10, 12 and 14 agents). The error bars show the standard deviations.

The OSD organizations achieve an equal or greater average quality than the Contract Net organizations: The null hypothesis is that the Contract Net agents achieve a greater average quality. We can reject the null hypothesis for contract net organizations with fewer than 12 agents (static: p < 0.01; dynamic: p < 0.05). For larger contract net organizations, the average quality is statistically equivalent to that achieved by OSD.

The OSD agents have a lower average response time as compared to the Contract Net agents: The null hypothesis that OSD has the same or higher response time is rejected for all contract net organizations (static: p < 0.0002; dynamic: p < 0.0004).

The OSD agents send fewer messages than the Contract Net agents: The null hypothesis that OSD sends the same or more messages is rejected for all contract net organizations (p < 0.0003 in all cases except 8 contract net agents in a static environment, where p < 0.02).
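For reference, the matched-pair comparisons above can be reproduced with a small, generic Wilcoxon signed-rank implementation using the normal approximation (reasonable for the 20+ paired experiments per condition). This is a textbook sketch, not the authors' analysis code; for small samples or heavy ties, a library routine such as scipy.stats.wilcoxon is preferable.

```python
# Generic matched-pair Wilcoxon signed-rank test with a normal approximation
# for the two-sided p-value; a textbook sketch, not the authors' analysis code.
import math

def wilcoxon_signed_rank(xs, ys):
    """Two-sided p-value for paired samples; zero differences are dropped."""
    diffs = [x - y for x, y in zip(xs, ys) if x != y]
    n = len(diffs)
    if n == 0:
        return 1.0
    # rank |d| from 1..n, averaging ranks across ties
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1   # average of 1-based ranks
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    return math.erfc(abs(w_plus - mean) / sd / math.sqrt(2))
```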
Hence, as demonstrated by the above tests, our agents perform better than the contract net agents: they complete a larger number of tasks, achieve a greater quality, and also have a lower response time and communication overhead. These results make intuitive sense given our goals for the OSD approach. We expected the OSD organizations to have a faster average response time and to send fewer messages because the agents in the OSD organization are not wasting time and messages sending bid requests and replying to bids. The quality gained on the tasks is directly dependent on the number of tasks completed; hence, the more tasks completed, the greater the average quality.

Table 1: The number of times that each heuristic performed the best or statistically equivalent to the best for each of the performance criteria. Heuristic key: BET is Balancing Execution Time, TF is Topmost First, MR is Minimizing Resources and Rand is a random allocation strategy, in which every TÆMS node has a uniform probability of being selected for allocation.

Criteria/Heuristic         BET   TF    MR    Rand
Number of Agents           572   567   100   139
No-Org-Changes             641   51    5     177
Total-Messages-Sent        586   499   13    11
Resource-Cost              346   418   337   66
Tasks-Completed            427   560   154   166
Average-Quality            367   492   298   339
Average-Response-Time      356   321   370   283
Average-Runtime            543   323   74    116
Average-Turnaround-Time    560   314   74    126

The results of testing the first hypothesis were slightly more surprising. It appears that due to the inherent inefficiency of the contract net protocol in bidding for each and every task instance, a greater number of agents are needed to complete an equal number of tasks. Next, we evaluated the performance of the three heuristics for allocating tasks. Some preliminary experiments (that are not reported here due to space constraints) demonstrated the lack of a
clear winner amongst the three heuristics for most of the performance criteria that we evaluated. We suspected this to be the case because different heuristics are better for different task structures and environmental conditions, and since each experiment starts with a different random task structure, we couldn't find one allocation strategy that always dominated the others for all the performance criteria. To determine which heuristic performs the best, given a set of task structures, environmental conditions and performance criteria, we performed a series of experiments that were controlled using the following five variables:
• The depth of the task structure was varied from 3 to 5.
• The branching factor was varied from 3 to 5.
• The probability of any given task node having a MIN CAF was varied from 0.0 to 1.0 in increments of 0.2. The probability of any node having a SUM CAF was in turn modified to ensure that the probabilities add up to 1.⁴
• The arrival rate: from 10 to 40 cycles in increments of 10.
• The deadline slack: from 5 to 15 in increments of 5.

Each experiment was repeated 20 times, with a new task structure being generated each time; these 20 experiments formed an experimental set. Hence, all the experiments in an experimental set had the same values for the exogenous variables that were used to control the experiment. Note that a static environment was used in each of these experiments, as we wanted to see the effect of the arrival rate and deadline slack on each of the three heuristics. Also, the results of any experiment in which the OSD organization consisted of a single agent were culled from the results. Similarly, experiments in which the generated task structures were unsatisfiable (given the deadline constraints) were removed from the final results. If any experimental set had more than 15 experiments thus removed, the whole set was ignored when performing the evaluation. The final evaluation was done on 673 experimental sets.

⁴ Since our preliminary analysis led us to believe that the number of MAX and EXACTLY ONE CAFs in a task structure has a minimal effect on the performance of the allocation strategies being evaluated, we set the probabilities of the MAX and EXACTLY ONE CAFs to 0 in order to reduce the combinatorial explosion of the full factorial experimental design.

We tested the potential of these three heuristics on the following performance criteria:
1. The average number of agents used.
2. The total number of organizational changes.
3. The total messages sent by all the agents.
4. The total resource cost of the organization.
5. The number of tasks completed.
6. The average quality accrued. The average quality is defined as the total quality accrued during the experimental run divided by the sum of the number of tasks completed and the number of tasks failed.
7. The average response time of the organization. The response time of a task is defined as the difference between the time at which any agent in the organization starts working on the task (the start time) and the time at which the task was generated (the generation time). Hence, the response time is equivalent to the wait time. For tasks that are never attempted/started, the response time is set at the final runtime minus the generation time.
8. The average runtime of the tasks attempted by the organization. This time is defined as the difference between the time at which the task completed or failed and the start time. For tasks that were never started, this time is set to zero.
9. The turnaround time, defined as the sum of the response time and runtime of a task.

Except for the number of tasks completed and the average quality accrued, lower values for the various performance criteria indicate better performance.
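The timing criteria above (7-9) can be made concrete with a short sketch; the dictionary fields are illustrative, not from the paper, and final_runtime is the length of the experimental run (2500 cycles in these experiments).

```python
# Response, runtime and turnaround time as defined for criteria 7-9.
# Task fields ("generated", "start", "finished") are illustrative names.

def response_time(task, final_runtime):
    if task.get("start") is None:        # never attempted: charge until run end
        return final_runtime - task["generated"]
    return task["start"] - task["generated"]

def runtime(task):
    if task.get("start") is None:        # never started: zero runtime
        return 0
    return task["finished"] - task["start"]

def turnaround_time(task, final_runtime):
    return response_time(task, final_runtime) + runtime(task)
```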
Again, we ran the Wilcoxon Matched-Pair Signed-Rank tests on the experiments in each of the experimental sets. The null hypothesis in each case was that there is no difference between the pair of heuristics for the performance criterion under consideration. We were interested in the cases in which we could reject the null hypothesis with 95% confidence (p < 0.05). We noted the number of times that a heuristic performed the best or was in a group that performed statistically better than the rest. These counts are given in Tables 1 and 2. The number of experimental sets in which each heuristic performed the best or statistically equivalent to the best is shown in Table 1. The breakup of these numbers into (1) the number of times that each heuristic performed better than all the other heuristics and (2) the number of times each heuristic was statistically equivalent to another group of heuristics, all of which performed the best, is shown in Table 2. Both of these tables allow us to glean important information about the performance of the three heuristics. Particularly interesting were the following results:

Figure 2: Graph demonstrating the robustness of the citizen approach. The baseline shows the number of tasks completed in the absence of any failure.

• Whereas Balancing Execution Time (BET) used the lowest number of agents in the largest number of experimental sets (572), in most of these cases (337 experimental sets) it was statistically equivalent to Topmost First (TF). When these two heuristics didn't perform equally, there was an almost even split between the number of experimental sets in which one outperformed the other. We believe this was the case because BET always bifurcates the agents into two agents that have a more or less equal task load. This often results in organizations that have an even number of agents, none of which are small⁵ enough to combine into a larger agent. With TF, on the other hand, a large agent can successively spawn off smaller agents until it and the spawned
agents are small enough to complete their tasks before the deadlines; this often results in organizations with an odd number of agents that is less than those used by BET.

• As expected, BET achieved the lowest number of organizational changes in the largest number of experimental sets. In fact, it was over ten times as good as its second-best competitor (TF). This shows that if the agents are conscientious in their initial task allocation, there is less need for organizational change later on, especially in static environments.

• A particularly interesting, yet easily explainable, result was that of the average response time. We found that the Minimizing Resources (MR) heuristic performed the best when it came to minimizing the average response time! This can be explained by the fact that the MR heuristic is extremely greedy and prefers to spawn off small agents that have a tiny resource footprint (so as to minimize the total increase in the resource cost to the organization at the time of spawning). Whereas most of these small agents might compose with other agents over time, the presence of a single small agent is sufficient to reduce the response time. Surprisingly, the MR heuristic is not the most effective heuristic when it comes to minimizing the resource cost of the organization; it only outperforms a random task/resource allocation. We believe this is in part due to the greedy nature of this heuristic and in part because all spawning and composition operations use only local information. We believe that using some non-local information about the resource allocation might help in making better decisions, something that we plan to look at in the future.

Finally, we evaluated the performance of the citizen approach to robustness as applied to our OSD mechanism (Figure 2). As expected, as the probability of failure increases, the number of agents failing during a run also increases. This results in a slight
decrease in the number of tasks completed, which can be explained by the fact that whenever an agent fails, it loses whatever work it was doing at the time. The newly created agent that fills in for the failed one must redo the work, thus wasting precious time which might not be available close to a deadline.

⁵ For this discussion, small agents are agents that have a low expected duration for their local roles (as calculated by Algorithm 4).

Table 2: Table showing the number of times that each individual heuristic performed the best and the number of times that a certain group of statistically equivalent heuristics performed the best. Only the more interesting heuristic groupings are shown. All shows the number of experimental sets in which there was no statistical difference between the three heuristics and a random allocation strategy.

Criteria/Heuristic         BET   TF    MR    Rand  BET+TF  BET+Rand  MR+Rand  TF+MR  BET+TF+MR  All
Number of Agents           94    88    3     7     337     2         0        0      12         85
No-Org-Changes             480   0     0     29    16      113       0        0      0          5
Total-Messages-Sent        170   85    0     2     399     1         0        0      7          5
Resource-Cost              26    100   170   42    167     0         7        6      128        15
Tasks-Completed            77    197   4     28    184     1         3        9      36         99
Average-Quality            38    147   26    104   76      0         11       11     34         208
Average-Response-Time      104   74    162   43    31      20        16       8      7          169
Average-Runtime            322   110   0     12    121     13        1        1      1          69
Average-Turnaround-Time    318   94    1     11    125     26        1        0      7          64

As part of our future research, we first wish to evaluate the survivalist approach to robustness. The survivalist approach might actually be better than the citizen approach for higher probabilities of agent failure, as the replicated agents may be processing the task structures in parallel and can take over the moment the original agents fail, thus saving time around tight deadlines. Also, we strongly believe that the optimal organizational structure may vary depending on the probability of failure and the desired level of
robustness. For example, one way of achieving a higher level of robustness in the survivalist approach, given a large number of agent failures, would be to relax the task deadlines. However, such a relaxation would result in the system using fewer agents in order to conserve resources, which in turn would have a detrimental effect on the robustness. Therefore, towards this end, we have begun exploring the robustness properties of task structures and the ways in which the organizational design can be modified to take such properties into account.

7. CONCLUSION

In this paper, we have presented a run-time approach to organization in which the agents use Organizational Self-Design to come up with a suitable organizational structure. We have also compared the performance of the organizations generated by agents following our approach against the bespoke organization formation that takes place in the Contract Net protocol, and have demonstrated that our approach is better than the Contract Net approach, as evidenced by the larger number of tasks completed, greater quality achieved and lower response time. Finally, we tested the performance of three different resource allocation heuristics on various performance metrics and also evaluated the robustness of our approach.

8. REFERENCES
[1] K. S. Barber and C. E. Martin. Dynamic reorganization of decision-making groups. In AGENTS '01, pages 513-520, New York, NY, USA, 2001.
[2] K. M. Carley and L. Gasser. Computational organization theory. In G. Weiss, editor, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, pages 299-330. MIT Press, 1999.
[3] W. Chen and K. S. Decker. The analysis of coordination in an information system application - emergency medical services. In Lecture Notes in Computer Science (LNCS), number 3508, pages 36-51. Springer-Verlag, May 2005.
[4] D. Corkill and V.
Lesser. The use of meta-level control for coordination in a distributed problem solving network. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pages 748-756, August 1983.
[5] K. S. Decker. Environment centered analysis and design of coordination mechanisms. Ph.D. Thesis, Dept. of Computer Science, University of Massachusetts, Amherst, May 1995.
[6] K. S. Decker and J. Li. Coordinating mutually exclusive resources using GPGP. Autonomous Agents and Multi-Agent Systems, 3(2):133-157, 2000.
[7] C. Dellarocas and M. Klein. An experimental evaluation of domain-independent fault handling services in open multi-agent systems. In Proceedings of the International Conference on Multi-Agent Systems (ICMAS-2000), July 2000.
[8] V. Dignum, F. Dignum, and L. Sonenberg. Towards dynamic reorganization of agent societies. In Proceedings of CEAS: Workshop on Coordination in Emergent Agent Societies at ECAI, pages 22-27, Valencia, Spain, September 2004.
[9] B. Horling, B. Benyo, and V. Lesser. Using self-diagnosis to adapt organizational structures. In AGENTS '01, pages 529-536, New York, NY, USA, 2001. ACM Press.
[10] T. Ishida, L. Gasser, and M. Yokoo. Organization self-design of distributed production systems. IEEE Transactions on Knowledge and Data Engineering, 4(2):123-134, 1992.
[11] V. R. Lesser et al. Evolution of the GPGP/TÆMS domain-independent coordination framework. Autonomous Agents and Multi-Agent Systems, 9(1-2):87-143, 2004.
[12] O. Marin, P. Sens, J. Briot, and Z. Guessoum. Towards adaptive fault tolerance for distributed multi-agent systems. In Proceedings of ERSADS 2001, May 2001.
[13] O. Shehory, K. Sycara, et al. Agent cloning: an approach to agent mobility and resource allocation. IEEE Communications Magazine, 36(7):58-67, 1998.
[14] Y. So and E.
Durfee. An organizational self-design model for organizational change. In AAAI-93 Workshop on AI and Theories of Groups and Organizations, pages 8-15, Washington, D.C., July 1993.
[15] T. Wagner. Coordination decision support assistants (coordinators). Technical Report 04-29, BAA, 2004.
[16] T. Wagner and V. Lesser. Design-to-criteria scheduling: Real-time agent control. In Proc. of AAAI 2000 Spring Symposium on Real-Time Autonomous Systems, pages 89-96.","lvl-3":"Organizational Self-Design in Semi-dynamic Environments

ABSTRACT

In this paper we propose a run-time approach to organization that is contingent on the task structure of the problem being solved and the environmental conditions under which it is being solved. We use TÆMS as the underlying representation for our problems and describe a framework that uses Organizational Self-Design (OSD) to allocate tasks and resources to the agents and coordinate their activities.

1. INTRODUCTION

In this paper, we are primarily interested in the organizational design of a multiagent system: the roles enacted by the agents, the coordination between the roles, and the number and assignment of roles and resources to the individual agents. The organizational design is complicated by the fact that there is no best way to organize and all ways of organizing are not equally effective [1]. Instead, the optimal organizational structure depends both on the problem at hand and the environmental conditions under which the problem needs to be solved. The environmental conditions may not be known a priori or may change over time, which would preclude the use of a static organizational structure. On the other hand, all problem instances and environmental conditions are not always unique, which would rule out the use of a new, bespoke organizational structure for every problem instance. In our approach we use Organizational Self-Design
(OSD) to dynamically alter the organizational structure of the agents. We define two operators for OSD: agent spawning and composition. When an agent becomes overloaded, it spawns off a new agent to handle part of its task load/responsibility; when an agent lies idle for an extended period of time, it may decide to compose with another (underloaded) agent. Our work builds on the work by [2]. The primary difference between their work and our work is that we use TÆMS [3] as the underlying representation for our problems. TÆMS is a computational framework that uses annotated hierarchical task networks (HTNs) to allow quantitative reasoning over the task structures. TÆMS allows us to explicitly reason about alternative ways of doing a task, arbitrary ways of combining subtasks, uncertainties, quality/cost tradeoffs, and non-local effects, and is hence more general than the approach used by [2].

2. ORGANIZATIONAL SELF DESIGN
2.1 Agent Roles and Relationships
2.2 Organization Formation and Adaptation
2.3 Reasons for Organizational Change

3. EVALUATION

To evaluate our approach, we ran a series of experiments that simulated the operation of both the OSD agents and the Contract Net agents on various task structures with varied arrival rates and deadlines. At the start of each experiment, a random TÆMS task structure was generated with a specified depth and branching factor. During the course of the experiment, a series of task instances arrive at the organization and must be completed by the agents before their specified deadlines. To directly compare the OSD approach with the Contract
environment was static (fixed task arrival rates and deadlines) and experiments in which the environment was dynamic (varying arrival rates and\/or deadlines).\nThe two graphs in Figure 1 show the average performance of the OSD organization against the Contract Net organizations with 8, 10, 12 and 14 agents.\nThe results shown are the averages of running 40 experiments.\n20 of those experiments had a static environment with a fixed task arrival time of 15 cycles and a deadline window of 20 cycles.\nThe remaining 20 experiments had a varying task arrival rate: the task arrival rate was changed from 15 cycles to 30 cycles and back to 15 cycles after every 20 tasks.\nIn all the experiments, the task structures were randomly generated with a maximum depth of 4 and a maximum branching factor of 3.\nThe runtime of all the experiments was 2500 cycles.\nWe tested several hypotheses relating to the comparative performance of our OSD approach using the Wilcoxon Matched-Pair Signed-Rank tests.\nMatched-Pair signifies that we are comparing the performance of each system on precisely the same randomized task set within each separate experiment.\nThe tested hypotheses are: The OSD organization requires fewer agents to complete an equal or larger number of tasks when compared to the Contract Net organization: To test this hypothesis, we tested the stronger null hypothesis that states that the contract net agents complete more tasks.\nThis null hypothesis is rejected for all contract net organizations with fewer than 14 agents (static: p <0.0003; dynamic: p <0.03).\nFor large contract net organizations, the number of tasks completed is statistically equivalent to the number completed by the OSD agents; however, the number of agents used by the OSD organization is smaller: 9.59 agents (in the static case) and 7.38 agents (in the dynamic case) versus 14 contract net agents.\nThus the original hypothesis, that OSD requires fewer agents to complete an equal or larger number of tasks, is 
upheld.\nThe OSD organizations achieve an equal or greater average quality than the Contract Net organizations: The null hypothesis is that the Contract Net agents achieve a greater average quality.\nWe can reject the null hypothesis for contract net organizations with fewer than 12 agents (static: p <0.01; dynamic: p <0.05).\nFor larger contract net organizations, the average quality is statistically equivalent to that achieved by OSD.\nThe OSD agents have a lower average response time as compared to the Contract Net agents: The null hypothesis that OSD has the same or higher response time is rejected for all contract net organizations (static: p <0.0002; dynamic: p <0.0004).\nThe OSD agents send fewer messages than the Contract Net agents: The null hypothesis that OSD sends the same or more messages is rejected for all contract net organizations (p <0.0003 in all cases except 8 contract net agents in a static environment, where p <0.02).\nHence, as demonstrated by the above tests, our agents perform better than the contract net agents as they complete a larger number of tasks, achieve a greater quality, and also have a lower response time and communication overhead.\nFigure 1: Graph comparing the average performance of the OSD organization with the Contract Net organizations (with 8, 10, 12 and 14 agents).\nThe error bars show the standard deviations.\nThese results make intuitive sense given our goals for the OSD approach.\nWe expected the OSD organizations to have a faster average response time and to send fewer messages because the agents in the OSD organization are not wasting time and messages sending bid requests and replying to bids.\nThe quality gained on the tasks is directly dependent on the number of tasks completed; hence, the more tasks completed, the greater the average quality.\nThe results of testing the first hypothesis were slightly more surprising.\nIt appears that due to the inherent inefficiency of the contract net protocol in bidding for 
each and every task instance, a greater number of agents are needed to complete an equal number of tasks.","lvl-4":"Organizational Self-Design in Semi-dynamic Environments\nABSTRACT\nIn this paper we propose a run-time approach to organization that is contingent on the task structure of the problem being solved and the environmental conditions under which it is being solved.\nWe use T\u00c6MS as the underlying representation for our problems and describe a framework that uses Organizational Self-Design (OSD) to allocate tasks and resources to the agents and coordinate their activities.\n1.\nINTRODUCTION\nInstead, the optimal organizational structure depends both on the problem at hand and the environmental conditions under which the problem needs to be solved.\nThe environmental conditions may not be known a priori or may change over time, which would preclude the use of a static organizational structure.\nOn the other hand, all problem instances and environmental conditions are not always unique, which would rule out the use of a new, bespoke organizational structure for every problem instance.\nIn our approach we use Organizational Self-Design (OSD) to dynamically alter the organizational structure of the agents.\nT\u00c6MS is a computational framework that uses annotated hierarchical task networks (HTNs) to allow quantitative reasoning over the task structures.\n3.\nEVALUATION\nTo evaluate our approach, we ran a series of experiments that simulated the operation of both the OSD agents and the Contract Net agents on various task structures with varied arrival rates and deadlines.\nAt the start of each experiment, a random T\u00c6MS task structure was generated with a specified depth and branching factor.\nDuring the course of the experiment, a series of task instances arrive at the organization and must be completed by the agents before their specified deadlines.\nTo directly compare the OSD approach with the Contract Net approach, each experiment was repeated several 
times--using OSD agents on the first run and a different number of Contract Net agents on each subsequent run.\nWe were careful to use the same task structure, task arrival times, task deadlines and random numbers for each of these trials.\nWe divided the experiments into two groups: experiments in which the environment was static (fixed task arrival rates and deadlines) and experiments in which the environment was dynamic (varying arrival rates and\/or deadlines).\nThe two graphs in Figure 1 show the average performance of the OSD organization against the Contract Net organizations with 8, 10, 12 and 14 agents.\nThe results shown are the averages of running 40 experiments.\n20 of those experiments had a static environment with a fixed task arrival time of 15 cycles and a deadline window of 20 cycles.\nThe remaining 20 experiments had a varying task arrival rate: the task arrival rate was changed from 15 cycles to 30 cycles and back to 15 cycles after every 20 tasks.\nIn all the experiments, the task structures were randomly generated with a maximum depth of 4 and a maximum branching factor of 3.\nThe runtime of all the experiments was 2500 cycles.\nWe tested several hypotheses relating to the comparative performance of our OSD approach using the Wilcoxon Matched-Pair Signed-Rank tests.\nMatched-Pair signifies that we are comparing the performance of each system on precisely the same randomized task set within each separate experiment.\nThe tested hypotheses are: The OSD organization requires fewer agents to complete an equal or larger number of tasks when compared to the Contract Net organization: To test this hypothesis, we tested the stronger null hypothesis that states that the contract net agents complete more tasks.\nThis null hypothesis is rejected for all contract net organizations with fewer than 14 agents (static: p <0.0003; dynamic: p <0.03).\nFor large contract net organizations, the number of tasks completed is statistically equivalent to the number 
completed by the OSD agents; however, the number of agents used by the OSD organization is smaller: 9.59 agents (in the static case) and 7.38 agents (in the dynamic case) versus 14 contract net agents.\nThus the original hypothesis, that OSD requires fewer agents to complete an equal or larger number of tasks, is upheld.\nThe OSD organizations achieve an equal or greater average quality than the Contract Net organizations: The null hypothesis is that the Contract Net agents achieve a greater average quality.\nWe can reject the null hypothesis for contract net organizations with fewer than 12 agents (static: p <0.01; dynamic: p <0.05).\nFor larger contract net organizations, the average quality is statistically equivalent to that achieved by OSD.\nThe OSD agents have a lower average response time as compared to the Contract Net agents: The null hypothesis that OSD has the same or higher response time is rejected for all contract net organizations (static: p <0.0002; dynamic: p <0.0004).\nThe OSD agents also have a lower communication overhead.\nFigure 1: Graph comparing the average performance of the OSD organization with the Contract Net organizations (with 8, 10, 12 and 14 agents).\nThe error bars show the standard deviations.\nThese results make intuitive sense given our goals for the OSD approach.\nWe expected the OSD organizations to have a faster average response time and to send fewer messages because the agents in the OSD organization are not wasting time and messages sending bid requests and replying to bids.\nThe quality gained on the tasks is directly dependent on the number of tasks completed; hence, the more tasks completed, the greater the average quality.\nThe results of testing the first hypothesis were slightly more surprising.\nIt appears that due to the inherent inefficiency of the contract net protocol in bidding for each and every task instance, a greater number of agents are needed to complete an equal number of 
tasks.","lvl-2":"Organizational Self-Design in Semi-dynamic Environments\nABSTRACT\nIn this paper we propose a run-time approach to organization that is contingent on the task structure of the problem being solved and the environmental conditions under which it is being solved.\nWe use T\u00c6MS as the underlying representation for our problems and describe a framework that uses Organizational Self-Design (OSD) to allocate tasks and resources to the agents and coordinate their activities.\n1.\nINTRODUCTION\nIn this paper, we are primarily interested in the organizational design of a multiagent system--the roles enacted by the agents, the coordination between the roles and the number and assignment of roles and resources to the individual agents.\nThe organizational design is complicated by the fact that there is no best way to organize and all ways of organizing are not equally effective [1].\nInstead, the optimal organizational structure depends both on the problem at hand and the environmental conditions under which the problem needs to be solved.\nThe environmental conditions may not be known a priori or may change over time, which would preclude the use of a static organizational structure.\nOn the other hand, all problem instances and environmental conditions are not always unique, which would rule out the use of a new, bespoke organizational structure for every problem instance.\nIn our approach we use Organizational Self-Design (OSD) to dynamically alter the organizational structure of the agents.\nWe define two operators for OSD--agent spawning and composition--when an agent becomes overloaded, it spawns off a new agent to handle part of its task load\/responsibility; when an agent lies idle for an extended period of time, it may decide to compose with another (underloaded) agent.\nOur work builds on the work by [2].\nThe primary difference between their work and our work is that we use T\u00c6MS [3] as the underlying representation for our problems.\nT\u00c6MS is a 
computational framework that uses annotated hierarchical task networks (HTNs) to allow quantitative reasoning over the task structures.\nT\u00c6MS allows us to explicitly reason about alternative ways of doing a task, arbitrary ways of combining subtasks, uncertainties, quality\/cost tradeoffs, and non-local effects and is hence more general than the approach used by [2].\n2.\nORGANIZATIONAL SELF-DESIGN\n2.1 Agent Roles and Relationships\nAs explained in Section 1, the organizational structure is primarily composed of roles and the relationships between the roles.\nOne or more agents may enact a particular role and one or more roles must be enacted by every agent.\nThe roles may be thought of as the parts played by the agents enacting the roles in the solution to the problem and reflect the long-term commitments made by the agents in question to a certain course of action (that includes task responsibility, authority, and mechanisms for coordination).\nThe relationships between the roles are the coordination relationships that exist between the subparts of a problem.\nIn our approach, the organizational design is directly contingent on the task structure and the environmental conditions under which the problems need to be solved.\nWe define a role as a T\u00c6MS subtree rooted at a particular node.\nNote that, by definition, a role may consist of one or more other (sub-)roles, as a particular T\u00c6MS node may itself be made up of one or more subtrees.\nHence, we will use the terms role, task node and task interchangeably.\nWe also differentiate between local and managed (non-local) roles.\nLocal roles are roles that are the sole responsibility of a single agent; that is, the agent concerned is responsible for solving all the subproblems of the tree rooted at that node.\nFor such roles, the agent concerned can do one or more subtasks, solely at its discretion and without consultation with any other agent.\nManaged roles, on the other hand, must be coordinated between two or more 
agents, as such roles will have two or more descendent local roles that are the responsibility of two or more separate agents.\nWe achieve this coordination by assigning one of the agents as the manager responsible for enacting the non-local role.\nThis manager is responsible for making the coordination decisions, often in consultation with the agents enacting the descendent sub-roles of that particular non-local role.\n2.2 Organization Formation and Adaptation\nTo form or adapt their organizational structure, the agents use two organizational primitives: agent spawning and composition.\nThese two primitives result in a change in the assignment of roles to the agents.\nAgent spawning is the generation of a new agent to handle a subset of the roles of the spawning agent.\nAgent composition, on the other hand, is orthogonal to agent spawning and involves the merging of two or more agents together--the combined agent is responsible for enacting all the roles of the agents being merged.\nHence, OSD can be thought of as a search in the space of all the role assignments for a suitable role assignment that minimizes or maximizes a performance measure.\nIn order to participate in the formation and adaptation of an organization, the agents need to explicitly represent and reason about the role assignments.\nHence, as a part of its organizational knowledge, each agent keeps a list of the local roles that it is enacting and the non-local roles that it is managing.\nNote that each agent only has limited organizational knowledge and is individually responsible for spawning off or combining with another agent, as needed, based on its estimate of its performance so far.\nTo see how the organizational primitives work, we first describe four rules that can be thought of as the organizational invariants which will always hold before and after any organizational change:\n1.\nFor a local role, all the descendent nodes of that role will be local.\n2.\nSimilarly, for a managed (non-local) 
role, all the ascendent nodes of that role will be managed.\n3.\nIf two local roles that are enacted by two different agents share a common ancestor, that ancestor will be a managed role.\n4.\nIf all the direct descendants of a role are local and the sole responsibility of a single agent, that role will be a local role.\nWhen a new agent is spawned, the agent doing the spawning will assign one or more of its local roles to the newly spawned agent.\nTo preserve invariant rules 2 and 3, the spawning agent will change the type of all the ascendent roles of the nodes assigned to the newly spawned agent from local to managed.\nNote that the spawning agent is only changing its local organizational knowledge and not the global organizational knowledge.\nAt the same time, the spawning agent is taking on the task of managing the previously local roles.\nSimilarly, the newly spawned agent will only know of its just assigned local roles.\nWhen an agent (the composing agent) decides to compose with another agent (the composed agent), the organizational knowledge of the composing agent is merged with the organizational knowledge of the composed agent.\nTo do this, the composed agent takes on the roles of all the local and managed tasks of the composing agent.\nCare is taken to preserve the organizational invariant rules 1 and 4.\n2.3 Reasons for Organizational Change\nAs organizational change is expensive (requiring clock cycles, allocation\/deallocation of resources, etc.) 
we want a stable organizational structure that is suited to the task and environmental conditions at hand.\nHence, we wish to change the organizational structure only if the task structure and\/or environmental conditions change.\nAlso, to allow temporary changes to the environmental conditions to be overlooked, we want the probability of an organizational change to be inversely proportional to the time since the last organizational change.\nIf this time is relatively short, the agents are still adjusting to the changes in the environment - hence the probability of an agent initiating an organizational change should be high.\nSimilarly, if the time since the last organizational change is relatively large, we wish to have a low probability of organizational change.\nTo allow this variation in probability of organizational change, we use simulated annealing to determine the probability of keeping an existing organizational structure.\nThis probability is calculated using the annealing formula: p = e^(-\u0394E\/(kT)), where \u0394E is the \"amount\" of overload\/underload, T is the time since the last organizational change and k is a constant.\nThe mechanism of computing \u0394E is different for agent spawning than for agent composition and is described below.\nFrom this formula, if T is large, p, the probability of keeping the existing organizational structure, is large.\nAgent spawning only occurs when the agent doing the spawning is too overloaded and cannot complete all the tasks in its task queue by the given deadlines of the tasks.\nTo compute if spawning is necessary, we use the annealing equation with \u0394E = 1\/(\u03b1 \u2217 Slack), where \u03b1 is a constant and Slack is the difference between the total time available for completion of the outstanding tasks and the sum of the expected time required for completion of each task on the task queue.\nAgent composition, on the other hand, is exactly orthogonal to agent spawning as agent composition only occurs 
when the agents are underloaded.\nIn such a situation, some of the agents will be sitting idle waiting for tasks to arrive.\nThese idle agents will either be utilizing resources while waiting, or more likely, will have resources allocated to them that could be used elsewhere in the system.\nIn either case, it makes sense to combine some of the idle agents with other agents, freeing precious resources.\nTo calculate if agent composition is necessary, we again use the simulated annealing equation.\nHowever, in this case, \u0394E = \u03b2 \u2217 Idle Time, where \u03b2 is a constant and Idle Time is the amount of time for which the agent was idle.\nIf the agent has been sitting idle for a long period of time, \u0394E is large, which implies that p, the probability of keeping the existing organizational structure, is low.\n3.\nEVALUATION\nTo evaluate our approach, we ran a series of experiments that simulated the operation of both the OSD agents and the Contract Net agents on various task structures with varied arrival rates and deadlines.\nAt the start of each experiment, a random T\u00c6MS task structure was generated with a specified depth and branching factor.\nDuring the course of the experiment, a series of task instances arrive at the organization and must be completed by the agents before their specified deadlines.\nTo directly compare the OSD approach with the Contract Net approach, each experiment was repeated several times--using OSD agents on the first run and a different number of Contract Net agents on each subsequent run.\nWe were careful to use the same task structure, task arrival times, task deadlines and random numbers for each of these trials.\nWe divided the experiments into two groups: experiments in which the environment was static (fixed task arrival rates and deadlines) and experiments in which the environment was dynamic (varying arrival rates and\/or deadlines).\nThe two graphs in Figure 1 show the average performance of the OSD organization 
against the Contract Net organizations with 8, 10, 12 and 14 agents.\nThe results shown are the averages of running 40 experiments.\n20 of those experiments had a static environment with a fixed task arrival time of 15 cycles and a deadline window of 20 cycles.\nThe remaining 20 experiments had a varying task arrival rate: the task arrival rate was changed from 15 cycles to 30 cycles and back to 15 cycles after every 20 tasks.\nIn all the experiments, the task structures were randomly generated with a maximum depth of 4 and a maximum branching factor of 3.\nThe runtime of all the experiments was 2500 cycles.\nWe tested several hypotheses relating to the comparative performance of our OSD approach using the Wilcoxon Matched-Pair Signed-Rank tests.\nMatched-Pair signifies that we are comparing the performance of each system on precisely the same randomized task set within each separate experiment.\nThe tested hypotheses are: The OSD organization requires fewer agents to complete an equal or larger number of tasks when compared to the Contract Net organization: To test this hypothesis, we tested the stronger null hypothesis that states that the contract net agents complete more tasks.\nThis null hypothesis is rejected for all contract net organizations with fewer than 14 agents (static: p <0.0003; dynamic: p <0.03).\nFor large contract net organizations, the number of tasks completed is statistically equivalent to the number completed by the OSD agents; however, the number of agents used by the OSD organization is smaller: 9.59 agents (in the static case) and 7.38 agents (in the dynamic case) versus 14 contract net agents.\nThus the original hypothesis, that OSD requires fewer agents to complete an equal or larger number of tasks, is upheld.\nThe OSD organizations achieve an equal or greater average quality than the Contract Net organizations: The null hypothesis is that the Contract Net agents achieve a greater average quality.\nWe can reject the null hypothesis for 
contract net organizations with fewer than 12 agents (static: p <0.01; dynamic: p <0.05).\nFor larger contract net organizations, the average quality is statistically equivalent to that achieved by OSD.\nThe OSD agents have a lower average response time as compared to the Contract Net agents: The null hypothesis that OSD has the same or higher response time is rejected for all contract net organizations (static: p <0.0002; dynamic: p <0.0004).\nThe OSD agents send fewer messages than the Contract Net agents: The null hypothesis that OSD sends the same or more messages is rejected for all contract net organizations (p <0.0003 in all cases except 8 contract net agents in a static environment, where p <0.02).\nHence, as demonstrated by the above tests, our agents perform better than the contract net agents as they complete a larger number of tasks, achieve a greater quality, and also have a lower response time and communication overhead.\nFigure 1: Graph comparing the average performance of the OSD organization with the Contract Net organizations (with 8, 10, 12 and 14 agents).\nThe error bars show the standard deviations.\nThese results make intuitive sense given our goals for the OSD approach.\nWe expected the OSD organizations to have a faster average response time and to send fewer messages because the agents in the OSD organization are not wasting time and messages sending bid requests and replying to bids.\nThe quality gained on the tasks is directly dependent on the number of tasks completed; hence, the more tasks completed, the greater the average quality.\nThe results of testing the first hypothesis were slightly more surprising.\nIt appears that due to the inherent inefficiency of the contract net protocol in bidding for each and every task instance, a greater number of agents are needed to complete an equal number of tasks.","keyphrases":["organiz self-design","organ","coordin","multiag system","organiz structur","robust","agent spawn","composit","task 
analysi","environ model","simul","extend hierarch task structur","organiz-self design","task and resourc alloc"],"prmu":["P","P","P","P","P","P","M","U","M","R","U","M","M","M"]} {"id":"I-70","title":"A Multi-Agent System for Building Dynamic Ontologies","abstract":"Building ontologies from text is still a time-consuming task, which justifies the growth of Ontology Learning. Our system named Dynamo is designed for this domain but follows an original approach based on an adaptive multi-agent architecture. In this paper we present a distributed hierarchical clustering algorithm, the core of our approach. It is evaluated and compared to a more conventional centralized algorithm. We also present how it has been improved using a multi-criteria approach. With those results in mind, we discuss the limits of our system and outline, as perspectives, the modifications required to reach a complete ontology building solution.","lvl-1":"A Multi-Agent System for Building Dynamic Ontologies K\u00e9vin Ottens \u2217 IRIT, Universit\u00e9 Paul Sabatier 118 Route de Narbonne F-31062 TOULOUSE ottens@irit.fr Marie-Pierre Gleizes IRIT, Universit\u00e9 Paul Sabatier 118 Route de Narbonne F-31062 TOULOUSE gleizes@irit.fr Pierre Glize IRIT, Universit\u00e9 Paul Sabatier 118 Route de Narbonne F-31062 TOULOUSE glize@irit.fr ABSTRACT Building ontologies from text is still a time-consuming task, which justifies the growth of Ontology Learning.\nOur system named Dynamo is designed for this domain but follows an original approach based on an adaptive multi-agent architecture.\nIn this paper we present a distributed hierarchical clustering algorithm, the core of our approach.\nIt is evaluated and compared to a more conventional centralized algorithm.\nWe also present how it has been improved using a multi-criteria approach.\nWith those results in mind, we discuss the limits of our system and outline, as perspectives, the modifications required to reach a complete ontology building solution.\nCategories and 
Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-Multiagent Systems General Terms Algorithms, Experimentation 1.\nINTRODUCTION Nowadays, it is well established that ontologies are needed for the semantic web, knowledge management, B2B... For knowledge management, ontologies are used to annotate documents and to enhance information retrieval.\nBut building an ontology manually is a slow, tedious, costly, complex and time-consuming process.\nCurrently, a real challenge lies in building them automatically or semi-automatically and keeping them up to date.\nIt would mean creating dynamic ontologies [10], and it justifies the emergence of ontology learning techniques [14] [13].\nOur research focuses on Dynamo (an acronym of DYNAMic Ontologies), a tool based on an adaptive multi-agent system to construct and maintain an ontology from a domain-specific set of texts.\nOur aim is not to build an exhaustive, general hierarchical ontology but a domain-specific one.\nWe propose a semi-automated tool since an external resource is required: the ``ontologist''.\nAn ontologist is a kind of cognitive engineer, or analyst, who uses information from texts and expert interviews to design ontologies.\nIn the multi-agent field, ontologies generally enable agents to understand each other [12].\nThey're sometimes used to ease the ontology building process, in particular for collaborative contexts [3], but they rarely represent the ontology itself [16].\nMost works interested in the construction of ontologies [7] propose the refinement of ontologies.\nThis process consists of using an existing ontology and building a new one from it.\nThis approach is different from our approach because Dynamo starts from scratch.\nResearchers working on the construction of ontologies from texts claim that the work to be automated requires external resources such as a dictionary [14], or web access [5].\nIn our work, we propose an interaction between the 
ontologist and the system; our external resource lies both in the texts and the ontologist.\nThis paper first presents, in section 2, the big picture of the Dynamo system, in particular the motives that led to its creation and its general architecture.\nThen, in section 3, we discuss the distributed clustering algorithm used in Dynamo and compare it to a more classic centralized approach.\nSection 4 is dedicated to some enhancements of the agents' behavior that were designed by taking into account criteria ignored by clustering.\nFinally, in section 5, we discuss the limitations of our approach and explain how they will be addressed in further work.\n2.\nDYNAMO OVERVIEW 2.1 Ontology as a Multi-Agent System Dynamo aims at reducing the need for manual actions in processing the text analysis results and at suggesting a concept network kick-off in order to build ontologies more efficiently.\nThe chosen approach is completely original to our knowledge and uses an adaptive multi-agent system.\nThis choice comes from the qualities offered by multi-agent systems: they can ease the interactive design of a system [8] (in our case, a conceptual network), they allow its incremental building by progressively taking into account new data (coming from text analysis and user interaction), and last but not least they can be easily distributed across a computer network.\nDynamo takes a syntactical and terminological analysis of texts as input.\nIt uses several criteria based on statistics computed from the linguistic contexts of terms to create and position the concepts.\nAs output, Dynamo provides to the analyst a hierarchical organization of concepts (the multi-agent system itself) that can be validated, refined or modified, until he\/she obtains a satisfying state of the semantic network.\nAn ontology can be seen as a stable map constituted of conceptual entities, represented here by agents, linked by labelled relations.\nThus, our 
approach considers an ontology as a type of equilibrium between its concept-agents where their forces are defined by their potential relationships.\nThe ontology modification is a perturbation of the previous equilibrium by the appearance or disappearance of agents or relationships.\nIn this way, a dynamic ontology is a self-organizing process occurring when new texts are included into the corpus, or when the ontologist interacts with it.\nTo support the needed flexibility of such a system we use a selforganizing multi-agent system based on a cooperative approach [9].\nWe followed the ADELFE method [4] proposed to drive the design of this kind of multi-agent system.\nIt justifies how we designed some of the rules used by our agents in order to maximize the cooperation degree within Dynamo``s multi-agent system.\n2.2 Proposed Architecture In this section, we present our system architecture.\nIt addresses the needs of Knowledge Engineering in the context of dynamic ontology management and maintenance when the ontology is linked to a document collection.\nThe Dynamo system consists of three parts (cf. figure 1): \u2022 a term network, obtained thanks to a term extraction tool used to preprocess the textual corpus, \u2022 a multi-agent system which uses the term network to make a hierarchical clustering in order to obtain a taxonomy of concepts, \u2022 an interface allowing the ontologist to visualize and control the clustering process.\n??\nOntologist Interface System Concept Agent Term Term network Terms Extraction Tool Figure 1: System architecture The term extractor we use is Syntex, a software that has efficiently been used for ontology building tasks [11].\nWe mainly selected it because of its robustness and the great amount of information extracted.\nIn particular, it creates a ``Head-Expansion'' network which has already proven to be interesting for a clustering system [1].\nIn such a network, each term is linked to its head term1 and 1 i.e. 
its expansion term (i.e., the maximum sub-phrase located at the tail of the term), and also to all the terms for which it is a head or an expansion term. For example, ``knowledge engineering from text'' has ``knowledge engineering'' as head term and ``text'' as expansion term. Moreover, ``knowledge engineering'' is itself composed of ``knowledge'' as head term and ``engineering'' as expansion term.

With Dynamo, the term network obtained as the output of the extractor is stored in a database. For each term pair, we assume that a similarity value can be computed in order to make a clustering possible [6] [1]. Because of the nature of the data, we focus only on similarity computation between objects described by binary variables, which means that each item is described by the presence or absence of a set of characteristics [15]. In the case of terms, we generally deal with their usage contexts; with Syntex, those contexts are identified by terms and characterized by some syntactic relations.

The Dynamo multi-agent system implements the distributed clustering algorithm described in detail in section 3 and the rules described in section 4. It is designed to be both the system producing the resulting structure and the structure itself. This means that each agent represents a class in the taxonomy. The system output is then the organization obtained from the interactions between agents, while taking into account the feedback coming from the ontologist when he/she modifies the taxonomy according to his/her needs or expertise.

3. DISTRIBUTED CLUSTERING

This section presents the distributed clustering algorithm used in Dynamo. For the sake of understanding, and because of its evaluation in section 3.1, we recall the basic centralized algorithm used for hierarchical ascending clustering in a non-metric space, when a symmetrical similarity measure is available [15] (which is the case for the measures used in our system).

Algorithm 1: Centralized hierarchical ascending clustering algorithm
Data: List L of items to organize as a hierarchy
Result: Root R of the hierarchy
while length(L) > 1 do
    max ← 0; A ← nil; B ← nil;
    for i ← 1 to length(L) do
        I ← L[i];
        for j ← i + 1 to length(L) do
            J ← L[j];
            sim ← similarity(I, J);
            if sim > max then
                max ← sim; A ← I; B ← J;
            end
        end
    end
    remove(A, L); remove(B, L);
    append((A, B), L);
end
R ← L[1];

In algorithm 1, at each clustering step, the pair of most similar elements is determined. Those two elements are grouped into a cluster, and the resulting class is appended to the list of remaining elements. The algorithm stops when the list has only one element left.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1287

The hierarchy resulting from algorithm 1 is always a binary tree because of the way grouping is done. Moreover, grouping the most similar elements is equivalent to moving them away from the least similar ones. Our distributed algorithm is designed by relying on those two facts. It is executed concurrently in each agent of the system. Note that, in the remainder of this paper, we used for both algorithms an Anderberg similarity (with α = 0.75) and an average-link clustering strategy [15]. Those choices have an impact on the resulting tree, but they affect neither the global execution of the algorithm nor its complexity.

We now present the distributed algorithm used in our system. It is bootstrapped in the following way:

• a TOP agent having no parent is created; it will be the root of the resulting taxonomy,
• an agent is created for each term to be positioned in the taxonomy; they all have TOP as parent.

Once this basic structure is set, the algorithm runs until it reaches equilibrium and then provides the resulting taxonomy.

Figure 2: Distributed classification: Step 1 (each child Ai of parent P votes for its most dissimilar brother)
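Algorithm 1 can be rendered as a short Python sketch. All names here are ours, and the Jaccard measure on feature sets is only a stand-in for the Anderberg similarity the paper uses; `avg_link` lifts any leaf-level similarity to clusters by average link, as in the paper's clustering strategy:

```python
def avg_link(leaf_sim):
    """Lift a leaf-level similarity to clusters (nested tuples) by average link."""
    def leaves(x):
        return [x] if not isinstance(x, tuple) else leaves(x[0]) + leaves(x[1])
    def sim(a, b):
        la, lb = leaves(a), leaves(b)
        return sum(leaf_sim(u, v) for u in la for v in lb) / (len(la) * len(lb))
    return sim

def centralized_clustering(items, similarity):
    """Algorithm 1: repeatedly merge the most similar pair until one root is left."""
    L = list(items)
    while len(L) > 1:
        best, pair = -1.0, None
        for i in range(len(L)):
            for j in range(i + 1, len(L)):
                s = similarity(L[i], L[j])
                if s > best:
                    best, pair = s, (L[i], L[j])
        A, B = pair
        L.remove(A)
        L.remove(B)
        L.append((A, B))  # the new class is a node whose children are A and B
    return L[0]           # root of the resulting binary tree
```

For instance, clustering four terms described by context-feature sets merges the two overlapping pairs first, which yields the binary tree stated above.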
The first step of the process (figure 2) is triggered when an agent (here Ak) has more than one brother (since we want to obtain a binary tree). It then sends a message to its parent P indicating its most dissimilar brother (here A1). P receives the same kind of message from each of its children. In the following, this kind of message will be called a ``vote''.

Figure 3: Distributed clustering: Step 2 (P creates a new agent P')

Next, when P has received messages from all its children, it starts the second step (figure 3). Thanks to the received messages indicating the preferences of its children, P can determine three sub-groups among its children:

• the child which got the most ``votes'' from its brothers, that is, the child being the most dissimilar from the greatest number of its brothers; in case of a draw, one of the winners is chosen randomly (here A1),
• the children that allowed the ``election'' of the first group, that is, the agents which chose their brother of the first group as being the most dissimilar one (here Ak to An),
• the remaining children (here A2 to Ak−1).

Then P creates a new agent P' (having P as parent) and asks the agents of the second group (here agents Ak to An) to make it their new parent.

Figure 4: Distributed clustering: Step 3 (Ak to An adopt P' as their new parent)

Finally, step 3 (figure 4) is trivial. The agents asked by P (here Ak to An) take its message into account and choose P' as their new parent. The hierarchy has thus gained a new intermediate level.

Note that this algorithm generally converges, since the number of brothers of an agent drops. When an agent has only one remaining brother, its activity stops (although it keeps processing messages coming from its children). However, in a few cases we can reach a ``circular conflict'' in the voting procedure, when for example A votes against B, B against C, and C against A.
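The parent's side of steps 2 and 3 can be sketched as follows. This is our own illustrative rendering, not the paper's code; note that `Counter.most_common` resolves draws deterministically, whereas the paper picks a winner randomly:

```python
from collections import Counter

def split_children(votes):
    """votes maps each child to the brother it designated as most dissimilar.
    Returns (elected, movers, stayers):
    - elected: the child that received the most votes (stays under P),
    - movers: the children that voted for it (re-parented under the new P'),
    - stayers: the remaining children (stay under P)."""
    tally = Counter(votes.values())
    elected, _ = tally.most_common(1)[0]  # on a draw, the paper chooses randomly
    movers = [c for c, v in votes.items() if v == elected and c != elected]
    stayers = [c for c in votes if c != elected and c not in movers]
    return elected, movers, stayers
```

With five children where A3, A4 and A5 all vote against A1, the function elects A1, moves A3-A5 under the new parent P', and leaves A2 in place, matching the three sub-groups of step 2.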
With the current system, no decision can be taken in such a conflict; the procedure should be improved to address this, probably by using a ranked voting method.

3.1 Quantitative Evaluation

We now evaluate the properties of our distributed algorithm, beginning with a quantitative evaluation based on its complexity, compared to that of algorithm 1 from the previous section. The theoretical complexity is calculated for the worst case, considering the similarity computation as the elementary operation. For the distributed algorithm, the worst case means that, at each run, only a two-item group can be created. Under those conditions, for a given dataset of n items, we can determine the number of similarity computations.

For algorithm 1, let l = length(L). The innermost ``for'' loop is run l − i times, and its body contains the only similarity computation, so its cost is l − i. The second ``for'' loop is run l times, for i ranging from 1 to l, so its cost is Σ_{i=1..l} (l − i), which simplifies to l × (l − 1) / 2. Finally, at each run of the ``while'' loop, l decreases from n to 1, which gives us t1(n), the number of similarity computations for algorithm 1:

t1(n) = Σ_{l=1..n} l × (l − 1) / 2    (1)

For the distributed algorithm, at a given step, each of the l agents evaluates its similarity with its l − 1 brothers, so each step has a cost of l × (l − 1). Then, groups are created and another vote occurs with l decreased by one (since we assume the worst case, only groups of size 2 or l − 1 are built). Since l equals n on the first run, we obtain tdist(n), the number of similarity computations for the distributed algorithm:

tdist(n) = Σ_{l=1..n} l × (l − 1)    (2)

Both algorithms therefore have an O(n^3) complexity, but in the worst case the distributed algorithm does twice the number of elementary operations done by the centralized algorithm.
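Equations (1) and (2) are easy to check numerically; the following sums (our own helper names) confirm the factor of two between the two worst cases:

```python
def t1(n):
    """Worst-case similarity computations of algorithm 1 (equation (1))."""
    return sum(l * (l - 1) // 2 for l in range(1, n + 1))  # l*(l-1) is even, so // is exact

def t_dist(n):
    """Worst-case similarity computations of the distributed algorithm (equation (2))."""
    return sum(l * (l - 1) for l in range(1, n + 1))
```

For every n, `t_dist(n) == 2 * t1(n)`, and both sums are cubic in n, consistent with the O(n^3) bound stated above.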
This gap comes from the local decision making in each agent: because of it, the similarity computations are done twice for each agent pair. We could conceive that an agent sends its computation result to its peer, but that would simply move the problem by generating more communication in the system.

Figure 5: Experimental results (number of similarity computations against the number of input terms: 1. distributed algorithm, on average, with min and max; 2. logarithmic polynomial fit; 3. centralized algorithm)

In a second step, the average complexity of the algorithm was determined experimentally. The multi-agent system was executed with randomly generated input data sets ranging from ten to one hundred terms. The value reported is the average number of comparisons over one hundred runs without any user interaction. This results in the plots of figure 5. The algorithm is thus more efficient on average than the centralized algorithm, and its average complexity is below the worst case. This can be explained by the low probability that a data set forces the system to create only minimal groups (two items) or maximal ones (n − 1 elements) at each reasoning step. Curve number 2 represents the logarithmic polynomial minimizing the error with respect to curve number 1. The highest-degree term of this polynomial is in n^2 log(n); our distributed algorithm therefore has an O(n^2 log(n)) complexity on average.

Finally, note the reduced spread of the average performances between the maximum and the minimum: in the worst case, for 100 terms, the variation is 1,960.75 for an average of 40,550.10 (around 5%), which shows the good stability of the system.

3.2 Qualitative Evaluation

Although the quantitative results are interesting, the real advantage of this approach comes from more qualitative characteristics that we present in this section. All are advantages
obtained thanks to the use of an adaptive multi-agent system.

The main advantage of using a multi-agent system for a clustering task is that it introduces dynamics into the system. The ontologist can make modifications, and the hierarchy adapts to each request. This is particularly interesting in a knowledge engineering context. Indeed, the hierarchy created by the system is meant to be modified by the ontologist, since it is the result of a statistical computation. While examining the texts to study the usage contexts of terms [2], the ontologist is able to interpret the real content and to revise the system's proposal. It is extremely difficult to achieve this with a centralized ``black-box'' approach: in most cases, one has to find which reasoning step generated the error and manually modify the resulting class. Unfortunately, all the reasoning steps that occurred after the creation of the modified class are then lost and must be recalculated while taking the modification into account. That is why a system like ASIUM [6] tries to soften the problem through system-user collaboration, showing the ontologist the created classes after each reasoning step. But the ontologist can make a mistake and become aware of it too late.

Figure 6: Concept agent tree after autonomous stabilization of the system

In order to illustrate our claims, we present an example through a few screenshots of the working prototype, tested on a medical corpus. By using test data and letting the system work by itself, we obtain the hierarchy of figure 6 after stabilization. It is clear that the concept described by the term ``lésion'' (lesion) is misplaced: the similarity computations place it closer to ``femme'' (woman) and ``chirurgien'' (surgeon) than to ``infection'', ``gastro-entérite'' (gastro-enteritis) and ``hépatite'' (hepatitis). This wrong position for ``lésion'' is explained by the fact
that, without ontologist input, the reasoning is done on statistical criteria only.

Figure 7: Concept agent tree after ontologist modification

The ontologist then moves the concept into the right branch by assigning ``ConceptAgent:8'' as its new parent (the name ``ConceptAgent:X'' is automatically given to a concept agent that is not described by a term). The system reacts by itself and refines the clustering hierarchy, creating ``ConceptAgent:11'' to keep the tree binary. The new stable state is the one of figure 7.

This system-user coupling is necessary to build an ontology, but it requires no particular adjustment to the distributed algorithm principle, since each agent does autonomous local processing and communicates with its neighborhood by messages. Moreover, this algorithm can de facto be distributed over a computer network: communication between agents is done by sending messages, and each agent keeps its decision autonomy. Making the system run over a network would therefore not require adjusting the algorithm; only the communication layer and the agent creation process would have to be reworked, since in our current implementation those are not networked.

4. MULTI-CRITERIA HIERARCHY

In the previous sections, we assumed that a similarity value can be computed for any term pair. But as soon as one uses real data, this property no longer holds: some terms do not have any similarity value with any extracted term. Moreover, for leaf nodes it is sometimes interesting to use other means to position them in the hierarchy. For this low-level structuring, ontologists generally base their choices on simple heuristics. Building on this observation, we designed a new set of rules, not based on similarity, to support low-level structuring.

4.1 Adding Head Coverage Rules

In this case, agents can act with a very local point of view,
simply by looking at the parent/child relation. Each agent can try to determine whether its parent is adequate. This can be estimated because each concept agent is described by a set of terms, thanks to the ``Head-Expansion'' term network. In the following, TX denotes the set of terms describing concept agent X, and head(TX) the set of all terms that are the head of at least one element of TX. With those two notations, we can define the parent adequacy function a(P, C) between a parent P and a child C:

a(P, C) = |TP ∩ head(TC)| / |TP ∪ head(TC)|    (3)

The best parent for C is then the agent P maximizing a(P, C). An agent unsatisfied with its parent can try to find a better one by evaluating the adequacy of candidates. We designed a complementary algorithm to drive this search: when an agent C is unsatisfied with its parent P, it evaluates a(Bi, C) for all its brothers (noted Bi); the brother maximizing a(Bi, C) is then chosen as the new parent.

Figure 8: Concept agent tree after autonomous stabilization of the system without the head coverage rule

We now illustrate the behavior of this rule with an example. Figure 8 shows the state of the system after stabilization on test data. We can notice that ``hépatite virale'' (viral hepatitis) is still linked to the taxonomy root. This is caused by the fact that there is no similarity value between the term ``viral hepatitis'' and any term of the other concept agents.

Figure 9: Concept agent tree after activation of the head coverage rule

After activating the head coverage rule and letting the system stabilize again, we obtain figure 9. We can see that ``viral hepatitis'' slipped down the branch leading to ``hepatitis'' and chose it as its new parent. This is a sensible default choice, since ``viral hepatitis'' is a more specific term than ``hepatitis''. This rule tends to push agents described by a set of terms to become leaves of the concept tree. It addresses our concern to improve the low-level structuring of our taxonomy.
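Equation (3) and the brother-search heuristic can be sketched as follows. The function and variable names are ours; `head` stands for the Head-Expansion network's head-term mapping:

```python
def adequacy(parent_terms, child_terms, head):
    """Parent adequacy a(P, C) = |T_P ∩ head(T_C)| / |T_P ∪ head(T_C)| (equation (3))."""
    head_tc = {head[t] for t in child_terms if t in head}
    union = parent_terms | head_tc
    return len(parent_terms & head_tc) / len(union) if union else 0.0

def best_parent(child_terms, candidates, head):
    """Among candidate agents {name: T_P}, return the one maximizing a(P, C)."""
    return max(candidates, key=lambda p: adequacy(candidates[p], child_terms, head))
```

With `head = {"viral hepatitis": "hepatitis"}`, an agent described by {"viral hepatitis"} scores 1.0 against a parent described by {"hepatitis"} and 0.0 against the others, consistent with the move observed in figure 9.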
But obviously our agents lack a way to backtrack when modifications of the taxonomy leave them located in the wrong branch. That is one of the points where our system still has to be improved, by adding another set of rules.

4.2 On Using Several Criteria

In the previous sections and examples, we only used one algorithm at a time. The distributed clustering algorithm tends to introduce new layers in the taxonomy, while the head coverage algorithm tends to push some of the agents toward the leaves of the taxonomy. This obviously raises the questions of how to deal with multiple criteria in our taxonomy building, and how agents determine their priorities at a given time.

The solution we chose comes from the search for minimal non-cooperation within the system, in accordance with the ADELFE method. Each agent computes three non-cooperation degrees and chooses its current priority depending on which degree is the highest. For a given agent A having a parent P, a set of brothers Bi, and having received a set of messages Mk with priorities pk, the three non-cooperation degrees are:

• μH(A) = 1 − a(P, A), the ``head coverage'' non-cooperation degree, determined by the head coverage of the parent,
• μB(A) = max(1 − similarity(A, Bi)), the ``brotherhood'' non-cooperation degree, determined by the worst brother of A regarding similarities,
• μM(A) = max(pk), the ``message'' non-cooperation degree, determined by the most urgent message received.

The non-cooperation degree μ(A) of agent A is then:

μ(A) = max(μH(A), μB(A), μM(A))    (4)

Three cases then determine which kind of action A will choose:

• if μ(A) = μH(A), then A uses the head coverage algorithm detailed in the previous subsection,
• if μ(A) = μB(A), then A uses the distributed clustering algorithm (see section 3),
• if μ(A) = μM(A), then A processes Mk immediately in order to help its sender.
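The priority choice of section 4.2 can be sketched as follows. This is a sketch under our own naming: `adequacy` stands for a(P, A) of equation (3) and `similarity` for the Anderberg measure, both passed in as plain functions; ties are resolved in the paper's listing order:

```python
def choose_action(agent, parent, brothers, messages, adequacy, similarity):
    """Return the activity with the highest non-cooperation degree (equation (4)):
    'head_coverage' (mu_H), 'clustering' (mu_B) or 'process_message' (mu_M)."""
    mu_h = 1.0 - adequacy(parent, agent)                              # head coverage degree
    mu_b = max((1.0 - similarity(agent, b) for b in brothers), default=0.0)  # brotherhood
    mu_m = max((priority for _, priority in messages), default=0.0)   # most urgent message
    mu = max(mu_h, mu_b, mu_m)
    if mu == mu_h:
        return "head_coverage"
    if mu == mu_b:
        return "clustering"
    return "process_message"
```

For example, an agent with a perfectly adequate parent but a very dissimilar brother picks clustering, while a pending high-priority message overrides both.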
Those three cases summarize the current activities of our agents: they have to find the best parent for themselves (μ(A) = μH(A)), improve the structuring through clustering (μ(A) = μB(A)), and process the messages of other agents (μ(A) = μM(A)) in order to help them fulfill their own goals.

4.3 Experimental Complexity Revisited

We evaluated the experimental complexity of the whole multi-agent system with all the rules activated. In this case, the metric used is the number of messages exchanged in the system. Once again, the system was executed with input data sets ranging from ten to one hundred terms. The value reported is the average number of messages sent in the system as a whole over one hundred runs without user interaction. This results in the plots of figure 10. Curve number 1 represents the average of the values obtained. Curve number 2 represents the average of the values obtained when only the distributed clustering algorithm is activated, not the full rule set. Curve number 3 represents the polynomial minimizing the error with respect to curve number 1. The highest-degree term of this polynomial is in n^3, so our multi-agent system has an O(n^3) complexity on average.

Figure 10: Experimental results (number of messages against the number of input terms: 1. Dynamo with all rules, on average, with min and max; 2. distributed clustering only, on average; 3. cubic polynomial fit)

Moreover, note the very small spread of the average performances between the maximum and the minimum: in the worst case, for 100 terms, the variation is 126.73 for an average of 20,737.03 (around 0.6%), which shows the excellent stability of the system. Finally, the extra head coverage rules are a real improvement over the distributed algorithm alone.
They introduce more constraints, and the stability point is reached with fewer interactions and less decision making by the agents. This means that fewer messages are exchanged in the system, while a tree of higher quality is obtained for the ontologist.

5. DISCUSSION & PERSPECTIVES

5.1 Current Limitation of our Approach

The most important limitation of our current algorithm is that the result depends on the order in which the data is added. When the system works by itself on a fixed data set given during initialization, the final result is equivalent to what we could obtain with a centralized algorithm. On the contrary, adding a new item after a first stabilization has an impact on the final result.

Figure 11: Concept agent tree after autonomous stabilization of the system

To illustrate this, we present another example of the working system. By using test data and letting the system work by itself, we obtain the hierarchy of figure 11 after stabilization.

Figure 12: Concept agent tree after taking ``hepatitis'' into account

Then, the ontologist interacts with the system and adds a new concept described by the term ``hepatitis'', linked to the root. The system reacts and stabilizes; we then obtain figure 12 as a result. ``hepatitis'' is located in the right branch, but we have not obtained the same organization as in figure 6 of the previous example. We need to improve our distributed algorithm to allow a concept to move along a branch. We are currently working on the required rules, but the comparison with the centralized algorithm will then become very difficult, in particular because the new rules will take into account criteria ignored by the centralized algorithm.

5.2 Pruning for Ontologies Building

In section 3, we presented the distributed clustering algorithm used in the Dynamo system. Since this work was first based on that algorithm, it introduced a clear bias toward binary trees as a result. But we have to keep in mind that we are trying to obtain taxonomies which
are more refined and concise. Although the head coverage rule is an improvement, because it is based on how ontologists generally work, it only addresses low-level structuring, not the intermediate levels of the tree. Looking at figure 7, it is clear that some pruning could be done in the taxonomy. In particular, since ``lésion'' moved, ``ConceptAgent:9'' is not needed anymore and could be removed. Moreover, the branch starting with ``ConceptAgent:8'' clearly respects the binary tree constraint, but it would be more useful to the user in a more compact and meaningful form; in this case, ``ConceptAgent:10'' and ``ConceptAgent:11'' could probably be merged.

Currently, our system has the necessary rules to create intermediate levels in the taxonomy, or to have concepts shifting toward the leaves. As we pointed out, this is not enough, so new rules are needed to allow removing nodes from the tree, or moving them toward the root. Most of the work needed to develop those rules consists in finding the relevant statistical information that will support the ontologist.

6. CONCLUSION

Although it has been presented as a promising solution for ensuring model quality and terminological richness, ontology building from textual corpus analysis is difficult and costly. It requires supervision by an analyst and must take the aim of the ontology into account. Using natural language processing tools eases the localization of knowledge in texts through language uses. That said, those tools produce a huge amount of lexical or grammatical data, which is not trivial to examine when defining conceptual elements. Our contribution lies in this step of the modeling process from texts, before any attempt to normalize or formalize the result.

We proposed an approach based on an adaptive multi-agent system to provide the ontologist with a first taxonomic structure of concepts. Our system makes use of a terminological network resulting from an analysis made by Syntex. The current state of
our software allows us to produce simple structures, to propose them to the ontologist, and to make them evolve depending on the modifications he or she makes. The performance of the system is interesting, and some aspects are even comparable to their centralized counterpart. Its strengths are mostly qualitative, since it allows more subtle user interactions and a progressive adaptation to new linguistic-based information.

From the point of view of ontology building, this work is a first step showing the relevance of our approach. It must continue, both to ensure better robustness during classification and to obtain semantically richer structures than simple trees. Among these improvements, we are mostly focusing on pruning, to obtain better taxonomies. We are currently working on the criterion that triggers the actions complementary to the structure changes applied by our clustering algorithm; in other words, this algorithm introduces intermediate levels, and we need to be able to remove them if necessary, in order to reach a dynamic equilibrium.

Also, from the multi-agent engineering point of view, the use of multi-agent systems in a dynamic ontology context has shown its relevance. Dynamic ontologies can be seen as complex problem solving, in which case self-organization through cooperation has proven an efficient solution. More generally, it is likely to be interesting for other design-related tasks, even if we focus only on knowledge engineering in this paper. Of course, our system still requires more evaluation and validation work to accurately determine the advantages and flaws of this approach. We are planning to work on such benchmarking in the near future.

7. REFERENCES

[1] H.
Assadi. Construction of a regional ontology from text and its use within a documentary system. Proceedings of the International Conference on Formal Ontology and Information Systems (FOIS'98), pages 236-249, 1998.
[2] N. Aussenac-Gilles and D. Sörgel. Text analysis for ontology and terminology engineering. Journal of Applied Ontology, 2005.
[3] J. Bao and V. Honavar. Collaborative ontology building with wiki@nt. Proceedings of the Workshop on Evaluation of Ontology-Based Tools (EON2004), 2004.
[4] C. Bernon, V. Camps, M.-P. Gleizes, and G. Picard. Agent-Oriented Methodologies, chapter 7: Engineering Self-Adaptive Multi-Agent Systems: the ADELFE Methodology, pages 172-202. Idea Group Publishing, 2005.
[5] C. Brewster, F. Ciravegna, and Y. Wilks. Background and foreground knowledge in dynamic ontology construction. Semantic Web Workshop, SIGIR'03, August 2003.
[6] D. Faure and C. Nedellec. A corpus-based conceptual clustering method for verb frames and ontology acquisition. LREC workshop on adapting lexical and corpus resources to sublanguages and applications, 1998.
[7] F. Gandon. Ontology Engineering: a Survey and a Return on Experience. INRIA, 2002.
[8] J.-P. Georgé, G. Picard, M.-P. Gleizes, and P. Glize. Living design for open computational systems. 12th IEEE International Workshops on Enabling Technologies, Infrastructure for Collaborative Enterprises, pages 389-394, June 2003.
[9] M.-P. Gleizes, V. Camps, and P. Glize. A theory of emergent computation based on cooperative self-organization for adaptive artificial systems. Fourth European Congress of Systems Science, September 1999.
[10] J. Heflin and J. Hendler. Dynamic ontologies on the web. American Association for Artificial Intelligence Conference, 2000.
[11] S. Le Moigno, J. Charlet, D.
Bourigault, and M.-C. Jaulent. Terminology extraction from text to build an ontology in surgical intensive care. Proceedings of the AMIA 2002 annual symposium, 2002.
[12] K. Lister, L. Sterling, and K. Taveter. Reconciling ontological differences by assistant agents. AAMAS'06, May 2006.
[13] A. Maedche. Ontology Learning for the Semantic Web. Kluwer Academic Publishers, 2002.
[14] A. Maedche and S. Staab. Mining ontologies from text. EKAW 2000, pages 189-202, 2000.
[15] C. D. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge, Massachusetts, 1999.
[16] H. V. D. Parunak, R. Rohwer, T. C. Belding, and S. Brueckner. Dynamic decentralized any-time hierarchical clustering. 29th Annual International ACM SIGIR Conference on Research & Development on Information Retrieval, August 2006.
and time consuming process.\nCurrently, a real challenge lies in building them automatically or semi-automatically and keeping them up to date.\nIt would mean creating dynamic ontologies [10] and it justifies the emergence of ontology learning techniques [14] [13].\nOur research focuses on Dynamo (an acronym of DYNAMic Ontologies), a tool based on an adaptive multi-agent system to construct and maintain an ontology from a domain specific set of texts.\n* PhD student\n2.\nDYNAMO OVERVIEW\n2.1 Ontology as a Multi-Agent System\n978-81-904262-7-5 (RPS) c ~ 2007 IFAAMAS\n2.2 Proposed Architecture\n3.\nDISTRIBUTED CLUSTERING\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1287\n3.1 Quantitative Evaluation\n1288 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n3.2 Qualitative Evaluation\n4.\nMULTI-CRITERIA HIERARCHY\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 1289\n4.1 Adding Head Coverage Rules\n4.2 On Using Several Criteria\n4.3 Experimental Complexity Revisited\n1290 The Sixth Intl. 
.\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.\nDISCUSSION & PERSPECTIVES\n5.1 Current Limitation of our Approach\n5.2 Pruning for Ontologies Building\n6.\nCONCLUSION\nAfter being presented as a promising solution, ensuring model quality and their terminological richness, ontology building from textual corpus analysis is difficult and costly.\nIt requires analyst supervising and taking in account the ontology aim.\nUsing natural languages processing tools ease the knowledge localization in texts through language uses.\nThat said, those tools produce a huge amount of lexical or grammatical data which is not trivial to examine in order to define conceptual elements.\nOur contribution lies in this step of the modeling process from texts, before any attempts to normalize or formalize the result.\nWe proposed an approach based on an adaptive multi-agent system to provide the ontologist with a first taxonomic structure of concepts.\nOur system makes use of a terminological network resulting from an analysis made by Syntex.\nThe current state of our software allows to produce simple structures, to propose them to the ontologist and to make them evolve depending on the modifications he made.\nPerformances of the system are interesting and some aspects are even comparable to their centralized counterpart.\nIts strengths are mostly qualitative since it allows more subtle user interactions and a progressive adaptation to new linguistic based information.\nFrom the point of view of ontology building, this work is a first step showing the relevance of our approach.\nIt must continue, both to ensure a better robustness during classification, and to obtain richer structures semantic wise than simple trees.\nFrom this improvements we are mostly focusing on the pruning to obtain better taxonomies.\nWe're currently working on the criterion to trigger the complementary actions of the structure changes applied by our clustering algorithm.\nIn other words 
this algorithm introduces intermediate levels, and we need to be able to remove them if necessary, in order to reach a dynamic equilibrium.\nAlso, from the multi-agent engineering point of view, the use of such systems in a dynamic ontology context has shown its relevance.\nDynamic ontologies can be seen as complex problem solving, and in such a case self-organization through cooperation has been an efficient solution.\nMore generally, it is likely to be interesting for other design-related tasks, even if we focus only on knowledge engineering in this paper.\nOf course, our system still requires more evaluation and validation work to accurately determine the advantages and flaws of this approach.\nWe are planning to work on such benchmarking in the near future.","lvl-4":"A Multi-Agent System for Building Dynamic Ontologies\nABSTRACT\nOntology building from text is still a time-consuming task, which justifies the growth of Ontology Learning.\nOur system, named Dynamo, is designed for this domain but follows an original approach based on an adaptive multi-agent architecture.\nIn this paper we present a distributed hierarchical clustering algorithm, the core of our approach.\nIt is evaluated and compared to a more conventional centralized algorithm.\nWe also present how it has been improved using a multi-criteria approach.\nWith those results in mind, we discuss the limits of our system and add as perspectives the modifications required to reach a complete ontology building solution.\n1.\nINTRODUCTION\nNowadays, it is well established that ontologies are needed for the semantic web, knowledge management, B2B...For knowledge management, ontologies are used to annotate documents and to enhance information retrieval.\nBut building an ontology manually is a slow, tedious, costly, complex and time-consuming process.\nIt would mean creating dynamic ontologies [10] and it justifies the
emergence of ontology learning techniques [14] [13].\nOur research focuses on Dynamo (an acronym of DYNAMic Ontologies), a tool based on an adaptive multi-agent system to construct and maintain an ontology from a domain-specific set of texts.\n6.\nCONCLUSION\nAlthough presented as a promising solution for ensuring model quality and terminological richness, ontology building from textual corpus analysis is difficult and costly.\nIt requires supervision by an analyst and must take the aim of the ontology into account.\nNatural language processing tools ease the localization of knowledge in texts through language use.\nOur contribution lies in this step of the modeling process from texts, before any attempt to normalize or formalize the result.\nWe proposed an approach based on an adaptive multi-agent system to provide the ontologist with a first taxonomic structure of concepts.\nOur system makes use of a terminological network resulting from an analysis made by Syntex.\nThe performance of the system is interesting, and some aspects are even comparable to its centralized counterpart.\nFrom the point of view of ontology building, this work is a first step showing the relevance of our approach.\nIt must continue, both to ensure better robustness during classification and to obtain structures that are semantically richer than simple trees.\nAmong these improvements we are mostly focusing on pruning to obtain better taxonomies.\nWe are currently working on the criterion that triggers the complementary actions of the structure changes applied by our clustering algorithm.\nIn other words, this algorithm introduces intermediate levels, and we need to be able to remove them if necessary, in order to reach a dynamic equilibrium.\nAlso, from the multi-agent engineering point of view, the use of such systems in a dynamic ontology context has shown its relevance.\nDynamic ontologies can be seen as complex problem solving, and in such a case self-organization through cooperation has been an efficient solution.\nMore generally, it is likely to be interesting for other design-related tasks, even if we focus only on knowledge engineering in this paper.\nOf course, our system still requires more evaluation and validation work to accurately determine the advantages and flaws of this approach.","lvl-2":"A Multi-Agent System for Building Dynamic Ontologies\nABSTRACT\nOntology building from text is still a time-consuming task, which justifies the growth of Ontology Learning.\nOur system, named Dynamo, is designed for this domain but follows an original approach based on an adaptive multi-agent architecture.\nIn this paper we present a distributed hierarchical clustering algorithm, the core of our approach.\nIt is evaluated and compared to a more conventional centralized algorithm.\nWe also present how it has been improved using a multi-criteria approach.\nWith those results in mind, we discuss the limits of our system and add as perspectives the modifications required to reach a complete ontology building solution.\n1.\nINTRODUCTION\nNowadays, it is well established that ontologies are needed for the semantic web, knowledge management, B2B...For knowledge management, ontologies are used to annotate documents and to enhance information retrieval.\nBut building an ontology manually is a slow, tedious, costly, complex and time-consuming process.\nCurrently, a real challenge lies in building them automatically or semi-automatically and keeping them up to date.\nIt would mean creating dynamic ontologies [10] and it justifies
the emergence of ontology learning techniques [14] [13].\nOur research focuses on Dynamo (an acronym of DYNAMic Ontologies), a tool based on an adaptive multi-agent system to construct and maintain an ontology from a domain-specific set of texts.\n* PhD student\nOur aim is not to build an exhaustive, general hierarchical ontology but a domain-specific one.\nWe propose a semi-automated tool, since an external resource is required: the \"ontologist\".\nAn ontologist is a kind of cognitive engineer, or analyst, who uses information from texts and expert interviews to design ontologies.\nIn the multi-agent field, ontologies generally enable agents to understand each other [12].\nMulti-agent systems are sometimes used to ease the ontology building process, in particular in collaborative contexts [3], but they rarely represent the ontology itself [16].\nMost works interested in the construction of ontologies [7] propose the refinement of ontologies, a process that consists in starting from an existing ontology and building a new one from it.\nThis approach differs from ours because Dynamo starts from scratch.\nResearchers working on the construction of ontologies from texts claim that the work to be automated requires external resources such as a dictionary [14] or web access [5].\nIn our work, we propose an interaction between the ontologist and the system; our external resource lies both in the texts and in the ontologist.\nThis paper first presents, in section 2, the big picture of the Dynamo system, in particular the motives that led to its creation and its general architecture.\nThen, in section 3, we discuss the distributed clustering algorithm used in Dynamo and compare it to a more classic centralized approach.\nSection 4 is dedicated to enhancements of the agents' behavior designed by taking into account criteria ignored by clustering.\nFinally, in section 5, we discuss the limitations of our approach and explain how they will be addressed in further
work.\n2.\nDYNAMO OVERVIEW\n2.1 Ontology as a Multi-Agent System\nDynamo aims at reducing the need for manual actions in processing text analysis results and at suggesting a concept network kick-off in order to build ontologies more efficiently.\nThe chosen approach is, to our knowledge, completely original and uses an adaptive multi-agent system.\nThis choice comes from the qualities offered by multi-agent systems: they can ease the interactive design of a system [8] (in our case, a conceptual network), they allow its incremental building by progressively taking into account new data (coming from text analysis and user interaction), and last but not least they can be easily distributed across a computer network.\nDynamo takes a syntactical and terminological analysis of texts as input.\nIt uses several criteria based on statistics computed from the linguistic contexts of terms to create and position the concepts.\nAs output, Dynamo provides the analyst with a hierarchical organization of concepts (the multi-agent system itself) that can be validated, refined or modified, until he\/she obtains a satisfying state of the semantic network.\nAn ontology can be seen as a stable map constituted of conceptual entities, represented here by agents, linked by labelled relations.\nThus, our approach considers an ontology as a type of equilibrium between its concept-agents, where the forces are defined by their potential relationships.\nAn ontology modification is a perturbation of the previous equilibrium by the appearance or disappearance of agents or relationships.\nIn this way, a dynamic ontology is a self-organizing process occurring when new texts are included in the corpus, or when the ontologist interacts with it.\nTo support the needed flexibility of such a system we use a self-organizing multi-agent system based on a cooperative approach [9].\nWe followed the ADELFE method [4], proposed to drive the design of this kind of
multi-agent system.\nIt justifies how we designed some of the rules used by our agents in order to maximize the degree of cooperation within Dynamo's multi-agent system.\n2.2 Proposed Architecture\nIn this section, we present our system architecture.\nIt addresses the needs of Knowledge Engineering in the context of dynamic ontology management and maintenance when the ontology is linked to a document collection.\nThe Dynamo system consists of three parts (cf. figure 1):\n\u2022 a term network, obtained thanks to a term extraction tool used to preprocess the textual corpus, \u2022 a multi-agent system which uses the term network to perform a hierarchical clustering in order to obtain a taxonomy of concepts, \u2022 an interface allowing the ontologist to visualize and control the clustering process.\nThe term extractor we use is Syntex, software that has been used efficiently for ontology building tasks [11].\nWe selected it mainly because of its robustness and the great amount of information extracted.\nIn particular, it creates a \"Head-Expansion\" network, which has already proven to be interesting for a clustering system [1].\nIn such a network, each term is linked to its head term (i.e. the maximum sub-phrase located as head of the term) and its expansion term (i.e. the maximum sub-phrase located as tail of the term), and also to all the terms for which it is a head or an expansion term.\nFor example, \"knowledge engineering from text\" has \"knowledge engineering\" as head term and \"text\" as expansion term.\nMoreover, \"knowledge engineering\" is composed of \"knowledge\" as head term and \"engineering\" as expansion term.\nWith Dynamo, the term network obtained as the output of the extractor is stored in a database.\nFor each term pair, we assume that it is possible to compute a similarity value in order to make a clustering [6] [1].\nBecause of the nature of the data, we focus only on similarity computation between objects described by binary variables, meaning that each item is described by the presence or absence of a characteristic set [15].\nIn the case of terms we are generally dealing with their usage contexts.\nWith Syntex, those contexts are identified by terms and characterized by some syntactic relations.\nThe Dynamo multi-agent system implements the distributed clustering algorithm described in detail in section 3 and the rules described in section 4.\nIt is designed to be both the system producing the resulting structure and the structure itself.\nThis means that each agent represents a class in the taxonomy.\nThe system output is then the organization obtained from the interaction between agents, while taking into account feedback coming from the ontologist when he\/she modifies the taxonomy given his\/her needs or expertise.\n3.\nDISTRIBUTED CLUSTERING\nThis section presents the distributed clustering algorithm used in Dynamo.\nFor the sake of understanding, and because of its evaluation in section 3.1, we recall the basic centralized algorithm used for hierarchical ascending clustering in a non-metric space, when a symmetrical similarity measure is available [15] (which is the case for the measures used in our system).\nAlgorithm 1: Centralized hierarchical ascending clustering algorithm\nData: List L of items to organize as a hierarchy\nResult: Root R of the hierarchy\nIn algorithm 1, for each clustering step, the pair of the most similar elements is determined.\nThose two elements are grouped in a cluster, and the resulting class is appended to the list of remaining elements.\nThis algorithm stops when the list has only one element left.\nFigure 1: System architecture\nThe hierarchy resulting from algorithm 1 is always a binary tree because of the way grouping is done.\nMoreover, grouping the most similar elements is equivalent to moving them away from the least similar ones.\nOur distributed algorithm is designed relying on those two facts.\nIt is executed concurrently in each of the agents of the system.\nNote that, in the remainder of this paper, we used for both algorithms an Anderberg similarity (with \u03b1 = 0.75) and an average link clustering strategy [15].\nThose choices have an impact on the resulting tree, but they impact neither the global execution of the algorithm nor its complexity.\nWe now present the distributed algorithm used in our system.\nIt is bootstrapped in the following way:\n\u2022 a TOP agent having no parent is created; it will be the root of the resulting taxonomy, \u2022 an agent is created for each term to be positioned in the taxonomy; they all have TOP as parent.\nOnce this basic structure is set, the algorithm runs until it reaches equilibrium and then provides the resulting taxonomy.\nFigure 2: Distributed classification: Step 1\nThe first step of the process (figure 2) is triggered when an agent (here Ak) has more than one brother (since we want to obtain a binary tree).\nIt then sends a message to its parent P indicating its most dissimilar brother (here A1).\nP receives the same kind of message from each of its children.\nIn the following, this kind of
message will be called a \"vote\".\nFigure 3: Distributed clustering: Step 2\nNext, when P has received messages from all its children, it starts the second step (figure 3).\nThanks to the received messages indicating the preferences of its children, P can determine three sub-groups among its children:\n\u2022 the child which got the most \"votes\" from its brothers, that is the child being the most dissimilar from the greatest number of its brothers.\nIn case of a draw, one of the winners is chosen randomly (here A1), \u2022 the children that allowed the \"election\" of the first group, that is the agents which chose their brother of the first group as being the most dissimilar one (here Ak to An), \u2022 the remaining children (here A2 to Ak \u2212 1).\nThen P creates a new agent P' (having P as parent) and asks the agents from the second group (here agents Ak to An) to make it their new parent.\nFigure 4: Distributed clustering: Step 3\nFinally, step 3 (figure 4) is trivial.\nThe children rejected by P (here agents Ak to An) take its message into account and choose P' as their new parent.\nThe hierarchy has thus gained a new intermediate level.\nNote that this algorithm generally converges, since the number of brothers of an agent drops.\nWhen an agent has only one remaining brother, its activity stops (although it keeps processing messages coming from its children).\nHowever, in a few cases we can reach a \"circular conflict\" in the voting procedure, when for example A votes against B, B against C and C against A.
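The three steps above can be sketched in code. The following is a hypothetical, simplified Python rendering (the `one_step` helper, the agent names and the dissimilarity table are illustrative, and the message exchanges are replaced by direct function calls); it sketches the grouping logic only, not the actual Dynamo implementation:

```python
import random
from collections import Counter

def one_step(children, dissimilarity):
    """One grouping step as seen by a parent P.

    children: list of child agent ids under P.
    dissimilarity: dict mapping frozenset({a, b}) -> value in [0, 1].
    Returns (movers, stay): movers adopt a fresh intermediate parent P',
    the others keep P as parent.
    """
    # Step 1: each child "votes" for its most dissimilar brother.
    votes = {c: max((b for b in children if b != c),
                    key=lambda b: dissimilarity[frozenset((c, b))])
             for c in children}
    # Step 2: P tallies the votes; the most-voted (most dissimilar)
    # child is singled out, ties broken randomly.
    tally = Counter(votes.values())
    top = max(tally.values())
    rejected = random.choice([c for c, n in tally.items() if n == top])
    # The children whose vote "elected" that child are asked to make a
    # newly created agent P' their parent.
    movers = [c for c, v in votes.items() if v == rejected]
    # Step 3: the remaining children stay under P.
    stay = [c for c in children if c not in movers]
    return movers, stay
```

With four illustrative agents where `A` is far from all its brothers, `B`, `C` and `D` all vote against `A` and are regrouped under the new intermediate parent, which is exactly how the hierarchy gains a level.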
With the current system no decision can be taken.\nThe current procedure should be improved to address this, probably using a ranked voting method.\n3.1 Quantitative Evaluation\nWe now evaluate the properties of our distributed algorithm.\nWe begin with a quantitative evaluation, based on its complexity, comparing it with algorithm 1 from the previous section.\nIts theoretical complexity is calculated for the worst case, by considering the similarity computation operation as elementary.\nFor the distributed algorithm, the worst case means that on each run, only a two-item group can be created.\nUnder those conditions, for a given dataset of n items, we can determine the number of similarity computations.\nFor algorithm 1, we note l = length (L); the innermost \"for\" loop is run l \u2212 i times, and its body contains the only similarity computation, so its cost is l \u2212 i.\nThe second \"for\" loop is run l times for i ranging from 1 to l.\nIts cost is therefore the sum over i from 1 to l of (l \u2212 i), which simplifies to l \u00d7 (l \u2212 1)\/2.\nFinally, for each run of the \"while\" loop, l is decreased from n to 1, which gives us t1 (n), the number of similarity computations for algorithm 1:\nt1 (n) = sum for l from 1 to n of l \u00d7 (l \u2212 1)\/2 = (n3 \u2212 n)\/6.\nFor the distributed algorithm, at a given step, each one of the l agents evaluates the similarity with its l \u2212 1 brothers.\nSo each step has a cost of l \u00d7 (l \u2212 1).\nThen, groups are created and another vote occurs with l decreased by one (since we assume the worst case, only groups of size 2 or l \u2212 1 are built).\nSince l is equal to n on the first run, we obtain tdist (n), the number of similarity computations for the distributed algorithm:\ntdist (n) = sum for l from 2 to n of l \u00d7 (l \u2212 1) = (n3 \u2212 n)\/3 = 2 \u00d7 t1 (n).\nBoth algorithms then have an O (n3) complexity.\nBut in the worst case, the distributed algorithm does twice the number of elementary operations done by the centralized algorithm.\nThis gap comes from the local decision making in each agent.\nBecause of this, the similarity computations are done twice for each agent pair.\nWe could conceive that an agent sends its computation result to its peer, but it would simply move the problem by generating more communication in the system.\nFigure 5: Experimental results\nIn a second step, the average complexity of the algorithm has been determined by experiments.\nThe multi-agent system has been executed with randomly generated input data sets ranging from ten to one hundred terms.\nThe given value is the average number of comparisons made over one hundred runs without any user interaction.\nIt results in the plots of figure 5.\nThe algorithm is then more efficient on average than the centralized algorithm, and its average complexity is below the worst case.\nThis can be explained by the low probability that a data set forces the system to create only minimal groups (two items) or maximal ones (n \u2212 1 elements) at each step of reasoning.\nCurve number 2 represents the logarithmic polynomial minimizing the error with curve number 1.\nThe highest degree term of this polynomial is in n2log (n), so our distributed algorithm has an O (n2log (n)) complexity on average.\nFinally, let us note the reduced variation of the average performances between the maximum and the minimum.\nIn the worst case, for 100 terms, the variation is 1,960.75 for an average of 40,550.10 (around 5%), which shows the good stability of the system.\n3.2 Qualitative Evaluation\nAlthough the quantitative results are interesting, the real advantage of this approach comes from more qualitative characteristics that we will present in this section.\nAll are advantages obtained thanks to the use of an adaptive multi-agent system.\nThe main advantage of using a multi-agent system for a clustering task is to introduce
dynamics into such a system.\nThe ontologist can make modifications and the hierarchy adapts depending on the request.\nThis is particularly interesting in a knowledge engineering context.\nIndeed, the hierarchy created by the system is meant to be modified by the ontologist, since it is the result of a statistical computation.\nDuring the necessary examination of the texts to study the usage contexts of terms [2], the ontologist will be able to interpret the real content and to revise the system's proposal.\nIt is extremely difficult to achieve this with a centralized \"black-box\" approach.\nIn most cases, one has to find which reasoning step generated the error and manually modify the resulting class.\nUnfortunately, in this case, all the reasoning steps that occurred after the creation of the modified class are lost and must be recalculated by taking the modification into account.\nThat is why a system like ASIUM [6] tries to soften the problem with system-user collaboration, by showing the ontologist the created classes after each step of reasoning.\nBut the ontologist can make a mistake, and become aware of it too late.\nFigure 6: Concept agent tree after autonomous stabilization of the system\nIn order to illustrate our claims, we present an example through a few screenshots from the working prototype tested on a medicine-related corpus.\nBy using test data and letting the system work by itself, we obtain the hierarchy of figure 6 after stabilization.\nIt is clear that the concept described by the term \"l\u00e9sion\" (lesion) is misplaced.\nIt happens that the similarity computations place it closer to \"femme\" (woman) and \"chirurgien\" (surgeon) than to \"infection\", \"gastro-ent\u00e9rite\" (gastro-enteritis) and \"h\u00e9patite\" (hepatitis).\nThis wrong position for \"lesion\" is explained by the fact that, without ontologist input, the reasoning is done only on statistical criteria.\nFigure 7: Concept agent tree after ontologist modification\nThen, the
ontologist replaces the concept in the right branch by assigning \"ConceptAgent:8\" as its new parent.\nThe name \"ConceptAgent: X\" is automatically given to a concept agent that is not described by a term.\nThe system reacts by itself and refines the clustering hierarchy to obtain a binary tree by creating \"ConceptAgent:11\".\nThe new stable state is the one of figure 7.\nThis system-user coupling is necessary to build an ontology, but no particular adjustment to the distributed algorithm principle is needed, since each agent does autonomous local processing and communicates with its neighborhood by messages.\nMoreover, this algorithm can de facto be distributed on a computer network.\nThe communication between agents is then done by sending messages, and each one keeps its decision autonomy.\nA modification of the system to make it run networked would therefore not require adjusting the algorithm.\nIt would only require reworking the communication layer and the agent creation process, since in our current implementation those are not networked.\n4.\nMULTI-CRITERIA HIERARCHY\nIn the previous sections, we assumed that similarity can be computed for any term pair.\nBut as soon as one uses real data this property does not hold anymore.\nSome terms do not have any similarity value with any extracted term.\nMoreover, for leaf nodes it is sometimes interesting to use other means to position them in the hierarchy.\nFor this low-level structuring, ontologists generally base their choices on simple heuristics.\nUsing this observation, we built a new set of rules, not based on similarity, to support low-level structuring.\n4.1 Adding Head Coverage Rules\nIn this case, agents can act with a very local point of view, simply by looking at the parent\/child relation.\nEach agent can try to determine whether its parent is adequate.\nIt is possible to guess this because each concept agent is described by a set of terms, and thanks to the \"Head-Expansion\" term network.\nIn the following, TX will be the set of terms describing concept agent X, and head (TX) the set of all the terms that are head of at least one element of TX.\nThanks to those two notations we can describe the parent adequacy function a (P, C) between a parent P and a child C:\nThen, the best parent for C is the agent P that maximizes a (P, C).\nAn agent unsatisfied by its parent can then try to find a better one by evaluating adequacy with candidates.\nWe designed a complementary algorithm to drive this search: when an agent C is unsatisfied by its parent P, it evaluates a (Bi, C) with all its brothers (noted Bi); the one maximizing a (Bi, C) is then chosen as the new parent.\nFigure 8: Concept agent tree after autonomous stabilization of the system without head coverage rule\nWe now illustrate this rule's behavior with an example.\nFigure 8 shows the state of the system after stabilization on test data.\nWe can notice that \"h\u00e9patite viral\" (viral hepatitis) is still linked to the taxonomy root.\nThis is caused by the fact that there is no similarity value between the \"viral hepatitis\" term and any of the terms of the other concept agents.\nFigure 9: Concept agent tree after activation of the head coverage rule\nAfter activating the head coverage rule and letting the system stabilize again, we obtain figure 9.\nWe can see that \"viral hepatitis\" slipped through the branch leading to \"hepatitis\"
and chose it as its new parent.\nIt is a sensible default choice, since \"viral hepatitis\" is a more specific term than \"hepatitis\".\nThis rule tends to push agents described by a set of terms to become leaves of the concept tree.\nIt addresses our concern to improve the low-level structuring of our taxonomy.\nBut obviously our agents lack a way to backtrack in case of modifications in the taxonomy which would leave them located in the wrong branch.\nThat is one of the points where our system still has to be improved by adding another set of rules.\n4.2 On Using Several Criteria\nIn the previous sections and examples, we only used one algorithm at a time.\nThe distributed clustering algorithm tends to introduce new layers in the taxonomy, while the head coverage algorithm tends to push some of the agents toward the leaves of the taxonomy.\nThis obviously raises the question of how to deal with multiple criteria in our taxonomy building, and how agents determine their priorities at a given time.\nThe solution we chose came from the search for minimizing non-cooperation within the system, in accordance with the ADELFE method.\nEach agent computes three non-cooperation degrees and chooses its current priority depending on which degree is the highest.\nFor a given agent A having a parent P, a set of brothers Bi, and having received a set of messages Mk with priority pk, the three non-cooperation degrees are:\n\u2022 \u03bcH (A) = 1 \u2212 a (P, A), the \"head coverage\" non-cooperation degree, determined by the head coverage of the parent, \u2022 \u03bcB (A) = max (1 \u2212 similarity (A, Bi)), the \"brotherhood\" non-cooperation degree, determined by the worst brother of A regarding similarities, \u2022 \u03bcM (A) = max (pk), the \"message\" non-cooperation degree, determined by the most urgent message received.\nThen, the non-cooperation degree \u03bc (A) of agent A is:\n\u03bc (A) = max (\u03bcH (A), \u03bcB (A), \u03bcM (A)).\nThen, we have three cases determining which kind of action A will choose:\n\u2022 if
\u03bc (A) = \u03bcH (A) then A will use the head coverage algorithm we detailed in the previous subsection, \u2022 if \u03bc (A) = \u03bcB (A) then A will use the distributed clustering algorithm (see section 3), \u2022 if \u03bc (A) = \u03bcM (A) then A will process Mk immediately in order to help its sender.\nThose three cases summarize the current activities of our agents: they have to find the best parent for themselves (\u03bc (A) = \u03bcH (A)), improve the structuring through clustering (\u03bc (A) = \u03bcB (A)) and process other agents' messages (\u03bc (A) = \u03bcM (A)) in order to help them fulfill their own goals.\n4.3 Experimental Complexity Revisited\nWe evaluated the experimental complexity of the whole multi-agent system when all the rules are activated.\nIn this case, the metric used is the number of messages exchanged in the system.\nOnce again the system has been executed with input data sets ranging from ten to one hundred terms.\nThe given value is the average number of messages sent in the system as a whole over one hundred runs without user interaction.\nIt results in the plots of figure 10.\nFigure 10: Experimental results\nCurve number 1 represents the average of the values obtained.\nCurve number 2 represents the average of the values obtained when only the distributed clustering algorithm is activated, not the full rule set.\nCurve number 3 represents the polynomial minimizing the error with curve number 1.\nThe highest degree term of this polynomial is in n3, so our multi-agent system has an O (n3) complexity on average.\nMoreover, let us note the very small variation of the average performances between the maximum and the minimum.\nIn the worst case, for 100 terms, the variation is 126.73 for an average of 20,737.03 (around 0.6%), which shows the excellent stability of the system.\nFinally, the extra head coverage rules are a real improvement on the distributed algorithm alone.\nThey introduce more constraints, and the stability point is reached with fewer interactions and less decision making by the agents.\nIt means that fewer messages are exchanged in the system while obtaining a tree of higher quality for the ontologist.\n5.\nDISCUSSION & PERSPECTIVES\n5.1 Current Limitation of our Approach\nThe most important limitation of our current algorithm is that the result depends on the order in which the data is added.\nWhen the system works by itself on a fixed data set given during initialization, the final result is equivalent to what we could obtain with a centralized algorithm.\nOn the contrary, adding a new item after a first stabilization has an impact on the final result.\nFigure 11: Concept agent tree after autonomous stabilization of the system\nTo illustrate our claims, we present another example of the working system.\nBy using test data and letting the system work by itself, we obtain the hierarchy of figure 11 after stabilization.\nFigure 12: Concept agent tree after taking into account \"hepatitis\"\nThen, the ontologist interacts with the system and adds a new concept described by the term \"hepatitis\", linked to the root.\nThe system reacts and stabilizes; we then obtain figure 12 as a result.\n\"hepatitis\" is located in the right branch, but we have not obtained the same organization as in figure 6 of the previous example.\nWe need to improve our distributed algorithm to allow a concept to move along a branch.\nWe are currently working on the required rules, but the
comparison with the centralized algorithm will become very difficult, in particular since these rules will take into account criteria ignored by the centralized algorithm.\n5.2 Pruning for Ontologies Building\nIn section 3, we presented the distributed clustering algorithm used in the Dynamo system.\nSince this work was first based on this algorithm, it introduced a clear bias toward binary trees as a result.\nBut we have to keep in mind that we are trying to obtain taxonomies which are more refined and concise.\nAlthough the head coverage rule is an improvement, because it is based on how ontologists generally work, it only addresses low-level structuring, not the intermediate levels of the tree.\nBy looking at figure 7, it is clear that some pruning could be done in the taxonomy.\nIn particular, since \"l\u00e9sion\" moved, \"ConceptAgent:9\" could be removed; it is not needed anymore.\nMoreover, the branch starting with \"ConceptAgent:8\" clearly respects the constraint of being a binary tree, but it would be more useful to the user in a more compact and meaningful form.\nIn this case \"ConceptAgent:10\" and \"ConceptAgent:11\" could probably be merged.\nCurrently, our system has the necessary rules to create intermediate levels in the taxonomy, or to have concepts shift toward the leaves.\nAs we pointed out, this is not enough, so new rules are needed to allow removing nodes from the tree, or moving them toward the root.\nMost of the work needed to develop those rules consists in finding the relevant statistical information that will support the ontologist.\n6.\nCONCLUSION\nAlthough presented as a promising solution for ensuring model quality and terminological richness, ontology building from textual corpus analysis is difficult and costly.\nIt requires supervision by an analyst and must take the aim of the ontology into account.\nNatural language processing tools ease the localization of knowledge in texts through language use.\nThat said, those tools produce a huge amount of
lexical or grammatical data which is not trivial to examine in order to define conceptual elements.\nOur contribution lies in this step of the modeling process from texts, before any attempt to normalize or formalize the result.\nWe proposed an approach based on an adaptive multi-agent system to provide the ontologist with a first taxonomic structure of concepts.\nOur system makes use of a terminological network resulting from an analysis made by Syntex.\nThe current state of our software allows it to produce simple structures, to propose them to the ontologist and to make them evolve depending on the modifications he makes.\nThe performance of the system is interesting, and some aspects are even comparable to their centralized counterpart.\nIts strengths are mostly qualitative, since it allows more subtle user interactions and a progressive adaptation to new linguistically based information.\nFrom the point of view of ontology building, this work is a first step showing the relevance of our approach.\nIt must continue, both to ensure a better robustness during classification, and to obtain semantically richer structures than simple trees.\nAmong these improvements we are mostly focusing on pruning to obtain better taxonomies.\nWe are currently working on the criteria that trigger the actions complementary to the structure changes applied by our clustering algorithm.\nIn other words, this algorithm introduces intermediate levels, and we need to be able to remove them if necessary, in order to reach a dynamic equilibrium.\nAlso, from the multi-agent engineering point of view, the use of multi-agent systems in a dynamic ontology context has shown its relevance.\nSuch dynamic ontologies can be seen as a complex problem-solving task, and in this case self-organization through cooperation has been an efficient solution.\nMore generally, it is likely to be interesting for other design-related tasks, even if we are focusing only on knowledge engineering in this paper.\nOf course, our system still requires more evaluation and validation work to accurately determine the advantages and flaws of this approach.\nWe are planning to work on such benchmarking in the near future.","keyphrases":["ontolog","dynamo","cooper","emerg behavior","multi-agent field","quantit evalu","black-box","parent adequaci function","hepat","terminolog rich","model qualiti","dynam equilibrium"],"prmu":["P","P","U","U","U","M","U","U","U","U","U","M"]} {"id":"I-71","title":"A Formal Model for Situated Semantic Alignment","abstract":"Ontology matching is currently a key technology to achieve the semantic alignment of ontological entities used by knowledge-based applications, and therefore to enable their interoperability in distributed environments such as multiagent systems. Most ontology matching mechanisms, however, assume matching prior to integration and rely on semantics that has been coded a priori in concept hierarchies or external sources. In this paper, we present a formal model for a semantic alignment procedure that incrementally aligns differing conceptualisations of two or more agents relative to their respective perception of the environment or domain they are acting in. It hence makes the situation in which the alignment occurs explicit in the model. 
We resort to Channel Theory to carry out the formalisation.","lvl-1":"A Formal Model for Situated Semantic Alignment Manuel Atencia Marco Schorlemmer IIIA, Artificial Intelligence Research Institute CSIC, Spanish National Research Council Bellaterra (Barcelona), Catalonia, Spain {manu, marco}@iiia.csic.es ABSTRACT Ontology matching is currently a key technology to achieve the semantic alignment of ontological entities used by knowledge-based applications, and therefore to enable their interoperability in distributed environments such as multiagent systems.\nMost ontology matching mechanisms, however, assume matching prior to integration and rely on semantics that has been coded a priori in concept hierarchies or external sources.\nIn this paper, we present a formal model for a semantic alignment procedure that incrementally aligns differing conceptualisations of two or more agents relative to their respective perception of the environment or domain they are acting in.\nIt hence makes the situation in which the alignment occurs explicit in the model.\nWe resort to Channel Theory to carry out the formalisation.\nCategories and Subject Descriptors I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence-coherence and coordination, multiagent systems; D.2.12 [Software Engineering]: Interoperability-data mapping; I.2.4 [Artificial Intelligence]: Knowledge Representation Formalisms and Methods-semantic networks, relation systems.\nGeneral Terms Theory 1.\nINTRODUCTION An ontology is commonly defined as a specification of the conceptualisation of a particular domain.\nIt fixes the vocabulary used by knowledge engineers to denote concepts and their relations, and it constrains the interpretation of this vocabulary to the meaning originally intended by knowledge engineers.\nAs such, ontologies have been widely adopted as a key technology that may favour knowledge sharing in distributed environments, such as multi-agent systems, federated databases, or the 
Semantic Web.\nBut the proliferation of many diverse ontologies caused by different conceptualisations of even the same domain - and their subsequent specification using varying terminology - has highlighted the need for ontology matching techniques that are capable of computing semantic relationships between entities of separately engineered ontologies [5, 11].\nUntil recently, most ontology matching mechanisms have taken a classical functional approach to the semantic heterogeneity problem, in which ontology matching is seen as a process taking two or more ontologies as input and producing a semantic alignment of ontological entities as output [3].\nFurthermore, matching has often been carried out at design time, before integrating knowledge-based systems or making them interoperate.\nThis might have been successful for clearly delimited and stable domains and for closed distributed systems, but it is untenable and even undesirable for the kind of applications that are currently deployed in open systems.\nMulti-agent communication, peer-to-peer information sharing, and web-service composition are all of a decentralised, dynamic, and open-ended nature, and they require ontology matching to be locally performed at run-time.\nIn addition, in many situations peer ontologies are not even open for inspection (e.g., when they are based on commercially confidential information).\nCertainly, there exist efforts to efficiently match ontological entities at run-time, taking only those ontology fragments that are necessary for the task at hand [10, 13, 9, 8].\nNevertheless, the techniques used by these systems to establish the semantic relationships between ontological entities - even though applied at run-time - still exploit a priori defined concept taxonomies as they are represented in the graph-based structures of the ontologies to be matched, use previously existing external sources such as thesauri (e.g., WordNet) and upper-level ontologies (e.g., CyC or 
SUMO), or resort to additional background knowledge repositories or shared instances.\nWe claim that semantic alignment of ontological terminology is ultimately relative to the particular situation in which the alignment is carried out, and that this situation should be made explicit and brought into the alignment mechanism.\nEven two agents with identical conceptualisation capabilities, and using exactly the same vocabulary to specify their respective conceptualisations, may fail to interoperate in a concrete situation because of their differing perception of the domain.\nImagine a situation in which two agents are facing each other in front of a checker board.\nAgent A1 may conceptualise a figure on the board as situated on the left margin of the board, while agent A2 may conceptualise the same figure as situated on the right.\nAlthough the conceptualisation of 'left' and 'right' is done in exactly the same manner by both agents, and even if both use the terms left and right in their communication, they will still need to align their respective vocabularies if they want to successfully communicate to each other actions that change the position of figures on the checker board.\nTheir semantic alignment, however, will only be valid in the scope of their interaction within this particular situation or environment.\nThe same agents situated differently may produce a different alignment.\nThis scenario is reminiscent of those in which a group of distributed agents adapt to form an ontology and a shared lexicon in an emergent, bottom-up manner, with only local interactions and no central control authority [12].\nThis sort of self-organised emergence of shared meaning is ultimately grounded in the physical interaction of agents with the environment.\nIn this paper, however, we address the case in which agents are already endowed with a top-down engineered ontology (it can even be the same one), which they do not adapt 
or refine, but for which they want to find the semantic relationships with separate ontologies of other agents on the grounds of their communication within a specific situation.\nIn particular, we provide a formal model that formalises situated semantic alignment as a sequence of information-channel refinements in the sense of Barwise and Seligman's theory of information flow [1].\nThis theory is particularly useful for our endeavour because it models the flow of information occurring in distributed systems due to the particular situations (or tokens) that carry information.\nAnalogously, the semantic alignment that will allow information to flow will ultimately be carried by the particular situation agents are acting in.\nWe shall therefore consider a scenario with two or more agents situated in an environment.\nEach agent will have its own viewpoint of the environment so that, if the environment is in a concrete state, both agents may have different perceptions of this state.\nBecause of these differences there may be a mismatch in the meaning of the syntactic entities by which agents describe their perceptions (and which constitute the agents' respective ontologies).\nWe state that these syntactic entities can be related according to the intrinsic semantics provided by the existing relationship between the agents' viewpoints of the environment.\nThe existence of this relationship is precisely justified by the fact that the agents are situated and observe the same environment.\nIn Section 2 we describe our formal model for Situated Semantic Alignment (SSA).\nFirst, in Section 2.1 we associate a channel to the scenario under consideration and show how the distributed logic generated by this channel provides the logical relationships between the agents' viewpoints of the environment.\nSecond, in Section 2.2 we present a method by which agents obtain approximations of this distributed logic.\nThese approximations gradually become more reliable as the method is 
applied.\nIn Section 3 we report on an application of our method.\nConclusions and further work are discussed in Section 4.\nFinally, an appendix summarizes the terms and theorems of Channel Theory used throughout the paper.\nWe do not assume any knowledge of Channel Theory; we restate basic definitions and theorems in the appendix, but any detailed exposition of the theory is outside the scope of this paper.\n2.\nA FORMAL MODEL FOR SSA 2.1 The Logic of SSA Consider a scenario with two agents A1 and A2 situated in an environment E (the generalization to any countable set of agents is straightforward).\nWe associate a countable set S of states to E and, at any given instant, we suppose E to be in one of these states.\nWe further assume that each agent is able to observe the environment and has its own perception of it.\nThis ability is faithfully captured by a surjective function seei : S \u2192 Pi, where i \u2208 {1, 2}, and typically see1 and see2 are different.\nAccording to Channel Theory, information is only viable where there is a systematic way of classifying some range of things as being this way or that, in other words, where there is a classification (see appendix A).\nSo in order to be within the framework of Channel Theory, we must associate classifications to the components of our system.\nFor each i \u2208 {1, 2}, we consider a classification Ai that models Ai's viewpoint of E. 
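The environment and perception functions just introduced can be made concrete with a small sketch (Python; all names here are hypothetical illustrations, not part of the paper). It reuses the checker-board intuition from the introduction: two agents facing each other perceive the same board column, but mirrored, so the surjective functions see1 and see2 differ on the same state.

```python
# Minimal sketch of an environment E with states S and two surjective
# perception functions see1, see2 (hypothetical example, not from the paper).

# States: the column (0..3) a figure occupies on a 4-column checker board.
S = [0, 1, 2, 3]

def see1(e):
    # A1 reads columns left to right.
    return "left" if e < 2 else "right"

def see2(e):
    # A2 faces A1, so the same column appears mirrored.
    return "left" if (3 - e) < 2 else "right"

# P_i = see_i[S]; both functions are surjective onto their perception sets.
P1 = {see1(e) for e in S}
P2 = {see2(e) for e in S}

# The same state is perceived differently, which is what makes the
# eventual alignment depend on the situation:
for e in S:
    print(e, see1(e), see2(e))
```

State 0, for instance, is perceived as left by A1 but as right by A2, even though both agents conceptualise left and right identically.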
First, tok(Ai) is composed of Ai's perceptions of E states, that is, tok(Ai) = Pi.\nSecond, typ(Ai) contains the syntactic entities by which Ai describes its perceptions, the ones constituting the ontology of Ai.\nFinally, |=Ai synthesizes how Ai relates its perceptions with these syntactic entities.\nNow, with the aim of associating environment E with a classification E we choose the power classification of S as E, which is the classification whose set of types is equal to 2^S, whose tokens are the elements of S, and for which a token e is of type \u03b5 if e \u2208 \u03b5.\nThe reason for taking the power classification is that there are no syntactic entities that may play the role of types for E since, in general, there is no global conceptualisation of the environment.\nHowever, the set of types of the power classification includes all possible token configurations potentially described by types.\nThus tok(E) = S, typ(E) = 2^S and e |=E \u03b5 if and only if e \u2208 \u03b5.\nThe notion of channel (see appendix A) is fundamental in Barwise and Seligman's theory.\nThe information flow among the components of a distributed system is modelled in terms of a channel, and the relationships among these components are expressed via infomorphisms (see appendix A), which provide a way of moving information between them.\nThe information flow of the scenario under consideration is accurately described by channel E = {fi : Ai \u2192 E}i\u2208{1,2} defined as follows: \u2022 \u02c6fi(\u03b1) = {e \u2208 tok(E) | seei(e) |=Ai \u03b1} for each \u03b1 \u2208 typ(Ai) \u2022 \u02c7fi(e) = seei(e) for each e \u2208 tok(E) where i \u2208 {1, 2}.\nThe definition of \u02c7fi seems natural, while \u02c6fi is defined in such a way that the fundamental property of the infomorphisms is fulfilled: \u02c7fi(e) |=Ai \u03b1 iff seei(e) |=Ai \u03b1 (by definition of \u02c7fi) iff e \u2208 \u02c6fi(\u03b1) (by definition of \u02c6fi) iff e |=E \u02c6fi(\u03b1) (by definition of |=E) Consequently, E is the core of channel E and a state e \u2208 tok(E) connects agents' perceptions \u02c7f1(e) and \u02c7f2(e) (see Figure 1).\nFigure 1: Channel E\nE explains the information flow of our scenario by virtue of agents A1 and A2 being situated and perceiving the same environment E.\nWe want to obtain meaningful relations among agents' syntactic entities, that is, agents' types.\nWe state that meaningfulness must be in accord with E.\nThe sum operation (see appendix A) gives us a way of putting the two agents' classifications of channel E together into a single classification, namely A1 + A2, and also the two infomorphisms together into a single infomorphism, f1 + f2 : A1 + A2 \u2192 E. A1 + A2 assembles agents' classifications in a very coarse way.\ntok(A1 + A2) is the cartesian product of tok(A1) and tok(A2), that is, tok(A1 + A2) = {\u27e8p1, p2\u27e9 | pi \u2208 Pi}, so a token of A1 + A2 is a pair of agents' perceptions with no restrictions.\ntyp(A1 + A2) is the disjoint union of typ(A1) and typ(A2), and \u27e8p1, p2\u27e9 is of type \u27e8i, \u03b1\u27e9 if pi is of type \u03b1.\nWe attach importance to taking the disjoint union because A1 and A2 could use identical types with the purpose of describing their respective perceptions of E. 
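As a companion sketch (Python; hypothetical names, a toy model rather than the authors' implementation), a classification can be stored as tokens, types and a satisfaction relation, and the two infomorphisms of channel E can then be checked against the fundamental property \u02c7fi(e) |=Ai \u03b1 iff e |=E \u02c6fi(\u03b1) by brute force over a small, finite state set.

```python
# A classification: tokens, types, and a satisfaction relation |= (sketch).
class Classification:
    def __init__(self, tokens, types, sat):
        # sat(token, type) -> bool plays the role of |=
        self.tokens, self.types, self.sat = set(tokens), set(types), sat

S = [0, 1, 2, 3]                      # environment states (finite for checking)
def see1(e): return "left" if e < 2 else "right"
def see2(e): return "left" if (3 - e) < 2 else "right"

# Agent classifications A_i: tokens are the perceptions P_i, types are terms.
A1 = Classification({"left", "right"}, {"left", "right"}, lambda p, t: p == t)
A2 = Classification({"left", "right"}, {"left", "right"}, lambda p, t: p == t)

# Core E: the power classification of S (types are subsets of S; e |= eps iff e in eps).
def sat_E(e, eps): return e in eps

# Infomorphism f_i: the type map sends alpha to {e | see_i(e) |= alpha},
# a type of the power classification; the token map sends a state to
# the agent's perception of it.
def f_up(see, A, alpha): return frozenset(e for e in S if A.sat(see(e), alpha))
def f_down(see, e): return see(e)

# Fundamental property: f_down(e) |=_Ai alpha  iff  e |=_E f_up(alpha).
ok = all(
    A.sat(f_down(see, e), a) == sat_E(e, f_up(see, A, a))
    for see, A in [(see1, A1), (see2, A2)]
    for e in S
    for a in A.types
)
print(ok)  # True by construction
```

Here the type map realises \u02c6fi exactly as defined above, and the brute-force check mirrors the three-step equivalence in the text.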
Classification A1 + A2 seems to be the natural place in which to search for relations among agents' types.\nNow, Channel Theory provides a way to make all these relations explicit in a logical fashion by means of theories and local logics (see appendix A).\nThe theory generated by the sum classification, Th(A1 + A2), and hence the logic it generates, Log(A1 + A2), involve all those constraints among agents' types valid according to A1 + A2.\nNotice however that these constraints are obvious.\nAs we stated above, meaningfulness must be in accord with channel E. Classifications A1 + A2 and E are connected via the sum infomorphism, f = f1 + f2, where: \u2022 \u02c6f(\u27e8i, \u03b1\u27e9) = \u02c6fi(\u03b1) = {e \u2208 tok(E) | seei(e) |=Ai \u03b1} for each \u27e8i, \u03b1\u27e9 \u2208 typ(A1 + A2) \u2022 \u02c7f(e) = \u27e8\u02c7f1(e), \u02c7f2(e)\u27e9 = \u27e8see1(e), see2(e)\u27e9 for each e \u2208 tok(E) Meaningful constraints among agents' types are in accord with channel E because they are computed by making use of f, as we expound below.\nAs important as the notion of channel is the concept of distributed logic (see appendix A).\nGiven a channel C and a logic L on its core, DLogC(L) represents the reasoning about relations among the components of C justified by L.\nIf L = Log(C), the distributed logic, which we denote by Log(C), captures in a logical fashion the information flow inherent in the channel.\nIn our case, Log(E) explains the relationship between the agents' viewpoints of the environment in a logical fashion.\nOn the one hand, constraints of Th(Log(E)) are defined by: \u0393 \u22a2Log(E) \u0394 if \u02c6f[\u0393] \u22a2Log(E) \u02c6f[\u0394] (1) where \u0393, \u0394 \u2286 typ(A1 + A2).\nOn the other hand, the set of normal tokens, NLog(E), is equal to the range of the function \u02c7f: NLog(E) = \u02c7f[tok(E)] = {\u27e8see1(e), see2(e)\u27e9 | e \u2208 tok(E)} Therefore, a normal token is a pair of agents' perceptions that are restricted by coming from the same environment state (unlike A1 + A2 tokens).\nAll constraints of 
Th(Log(E)) are satisfied by all normal tokens (since it is a logic).\nIn this particular case, this condition is also sufficient (the proof is straightforward); as an alternative to (1) we have: \u0393 \u22a2Log(E) \u0394 iff for all e \u2208 tok(E), if (\u2200\u27e8i, \u03b3\u27e9 \u2208 \u0393)[seei(e) |=Ai \u03b3] then (\u2203\u27e8j, \u03b4\u27e9 \u2208 \u0394)[seej(e) |=Aj \u03b4] (2) where \u0393, \u0394 \u2286 typ(A1 + A2).\nLog(E) is the logic of SSA.\nTh(Log(E)) comprises the most meaningful constraints among agents' types in accord with channel E.\nIn other words, the logic of SSA contains and also justifies the most meaningful relations among those syntactic entities that agents use in order to describe their own environment perceptions.\nLog(E) is complete since Log(E) is complete, but it is not necessarily sound because, although Log(E) is sound, \u02c7f is not surjective in general (see appendix B).\nIf Log(E) is also sound then Log(E) = Log(A1 + A2) (see appendix B).\nThat means there is no significant relation between agents' points of view of the environment according to E.\nIt is precisely the fact that Log(E) is unsound that allows a significant relation between the agents' viewpoints.\nThis relation is expressed at the type level in terms of constraints by Th(Log(E)) and at the token level by NLog(E).\n2.2 Approaching the logic of SSA through communication We have dubbed Log(E) the logic of SSA.\nTh(Log(E)) comprehends the most meaningful constraints among agents' types according to E.\nThe problem is that neither agent can make use of this theory because they do not know E completely.\nIn this section, we present a method by which agents obtain approximations to Th(Log(E)).\nWe also prove that these approximations gradually become more reliable as the method is applied.\nAgents can obtain approximations to Th(Log(E)) through communication.\nA1 and A2 communicate by exchanging information about their perceptions of environment states.\nThis information is expressed in terms of 
their own classification relations.\nSpecifically, if E is in a concrete state e, we assume that agents can convey to each other which types are satisfied by their respective perceptions of e and which are not.\nThis exchange generates a channel C = {fi : Ai \u2192 C}i\u2208{1,2}, and Th(Log(C)) contains the constraints among agents' types justified by the fact that agents have observed e. Now, if E turns to another state e' and agents proceed as before, another channel C' = {f'i : Ai \u2192 C'}i\u2208{1,2} gives account of the new situation, also considering the previous information.\nTh(Log(C')) comprises the constraints among agents' types justified by the fact that agents have observed e and e'.\nThe significant point is that C' is a refinement of C (see appendix A).\nTheorem 2.1 below ensures that the refined channel involves more reliable information.\nThe communication supposedly ends when agents have observed all the environment states.\nAgain this situation can be modeled by a channel, call it C\u2217 = {f\u2217i : Ai \u2192 C\u2217}i\u2208{1,2}.\nTheorem 2.2 states that Th(Log(C\u2217)) = Th(Log(E)).\nTheorem 2.1 and Theorem 2.2 assure that, by applying the method, agents can obtain gradually more reliable approximations to Th(Log(E)).\nTheorem 2.1.\nLet C = {fi : Ai \u2192 C}i\u2208{1,2} and C' = {f'i : Ai \u2192 C'}i\u2208{1,2} be two channels.\nIf C' is a refinement of C then: 1.\nTh(Log(C')) \u2286 Th(Log(C)) 2.\nNLog(C') \u2287 NLog(C) Proof.\nSince C' is a refinement of C, there exists a refinement infomorphism r from C' to C; so fi = r \u25e6 f'i.\nLet A =def A1 + A2, f =def f1 + f2 and f' =def f'1 + f'2.\n1.\nLet \u0393 and \u0394 be subsets of typ(A) and assume that \u0393 \u22a2Log(C') \u0394, which means \u02c6f'[\u0393] \u22a2C' \u02c6f'[\u0394].\nWe have to prove \u0393 \u22a2Log(C) \u0394, or equivalently, \u02c6f[\u0393] \u22a2C \u02c6f[\u0394].\nWe proceed by reductio ad 
absurdum.\nSuppose c \u2208 tok(C) does not satisfy the sequent \u27e8\u02c6f[\u0393], \u02c6f[\u0394]\u27e9.\nThen c |=C \u02c6f(\u03b3) for all \u03b3 \u2208 \u0393 and c |=C \u02c6f(\u03b4) for no \u03b4 \u2208 \u0394.\nLet us choose an arbitrary \u03b3 \u2208 \u0393.\nWe have that \u03b3 = \u27e8i, \u03b1\u27e9 for some \u03b1 \u2208 typ(Ai) and i \u2208 {1, 2}.\nThus \u02c6f(\u03b3) = \u02c6f(\u27e8i, \u03b1\u27e9) = \u02c6fi(\u03b1) = \u02c6r \u25e6 \u02c6f'i(\u03b1) = \u02c6r(\u02c6f'i(\u03b1)).\nTherefore: c |=C \u02c6f(\u03b3) iff c |=C \u02c6r(\u02c6f'i(\u03b1)) iff \u02c7r(c) |=C' \u02c6f'i(\u03b1) iff \u02c7r(c) |=C' \u02c6f'(\u27e8i, \u03b1\u27e9) iff \u02c7r(c) |=C' \u02c6f'(\u03b3) Consequently, \u02c7r(c) |=C' \u02c6f'(\u03b3) for all \u03b3 \u2208 \u0393.\nSince \u02c6f'[\u0393] \u22a2C' \u02c6f'[\u0394], there exists \u03b4\u2217 \u2208 \u0394 such that \u02c7r(c) |=C' \u02c6f'(\u03b4\u2217).\nA sequence of equivalences similar to the above one justifies c |=C \u02c6f(\u03b4\u2217), contradicting that c is a counterexample to \u27e8\u02c6f[\u0393], \u02c6f[\u0394]\u27e9.\nHence \u0393 \u22a2Log(C) \u0394, as we wanted to prove.\n2.\nLet \u27e8a1, a2\u27e9 \u2208 tok(A) and assume \u27e8a1, a2\u27e9 \u2208 NLog(C).\nTherefore, there exists a token c in C such that \u27e8a1, a2\u27e9 = \u02c7f(c).\nThen we have ai = \u02c7fi(c) = \u02c7f'i \u25e6 \u02c7r(c) = \u02c7f'i(\u02c7r(c)), for i \u2208 {1, 2}.\nHence \u27e8a1, a2\u27e9 = \u02c7f'(\u02c7r(c)) and \u27e8a1, a2\u27e9 \u2208 NLog(C').\nConsequently, NLog(C') \u2287 NLog(C), which concludes the proof.\nRemark 2.1.\nTheorem 2.1 asserts that the more refined channel gives more reliable information.\nEven though its theory has fewer constraints, it has more normal tokens to which they apply.\nIn the remainder of the section, we explicitly describe the process of communication and we conclude with the proof of Theorem 2.2.\nLet us assume that typ(Ai) is finite for i \u2208 {1, 2} and S is countably infinite, though the finite case can be treated in a similar form.\nWe also choose an infinite countable set of symbols {cn | n 
\u2208 N}1 .\nWe omit infomorphism superscripts when no confusion arises.\nTypes are usually denoted by Greek letters and tokens by Latin letters, so if f is an infomorphism, f(\u03b1) \u2261 \u02c6f(\u03b1) and f(a) \u2261 \u02c7f(a).\nAgent communication starts from the observation of E. Let us suppose that E is in state e1 \u2208 S = tok(E).\nA1's perception of e1 is f1(e1) and A2's perception of e1 is f2(e1).\nWe take for granted that A1 can communicate to A2 those types that are and are not satisfied by f1(e1) according to its classification A1.\nSo can A2.\nSince both typ(A1) and typ(A2) are finite, this process eventually finishes.\nAfter this communication a channel C1 = {f1i : Ai \u2192 C1}i\u2208{1,2} arises (see Figure 2).\nFigure 2: The first communication stage\nOn the one hand, C1 is defined by: \u2022 tok(C1) = {c1} \u2022 typ(C1) = typ(A1 + A2) \u2022 c1 |=C1 \u27e8i, \u03b1\u27e9 if fi(e1) |=Ai \u03b1 (for every \u27e8i, \u03b1\u27e9 \u2208 typ(A1 + A2)) On the other hand, f1i, with i \u2208 {1, 2}, is defined by: \u2022 f1i(\u03b1) = \u27e8i, \u03b1\u27e9 (for every \u03b1 \u2208 typ(Ai)) \u2022 f1i(c1) = fi(e1) Log(C1) represents the reasoning about the first stage of communication.\nIt is easy to prove that Th(Log(C1)) = Th(C1).\nThe significant point is that both agents know C1 as the result of the communication.\nHence they can separately compute the theory Th(C1) = \u27e8typ(C1), \u22a2C1\u27e9, which contains the constraints among agents' types justified by the fact that agents have observed e1.\nNow, let us assume that E turns to a new state e2.\nAgents can proceed as before, exchanging this time information about their perceptions of e2.\nAnother channel C2 = {f2i : Ai \u2192 C2}i\u2208{1,2} comes up.\nWe define C2 so as to take also into account the information provided by the previous stage of communication.\nOn the one hand, C2 is defined by: \u2022 tok(C2) = {c1, c2} 1 We write these symbols with superindices because we limit 
the use of subindices to what concerns agents.\nNote that this set is chosen with the same cardinality as S.\n\u2022 typ(C2) = typ(A1 + A2) \u2022 ck |=C2 \u27e8i, \u03b1\u27e9 if fi(ek) |=Ai \u03b1 (for every k \u2208 {1, 2} and \u27e8i, \u03b1\u27e9 \u2208 typ(A1 + A2)) On the other hand, f2i, with i \u2208 {1, 2}, is defined by: \u2022 f2i(\u03b1) = \u27e8i, \u03b1\u27e9 (for every \u03b1 \u2208 typ(Ai)) \u2022 f2i(ck) = fi(ek) (for every k \u2208 {1, 2}) Log(C2) represents the reasoning about the former and the latter communication stages.\nTh(Log(C2)) is equal to Th(C2) = \u27e8typ(C2), \u22a2C2\u27e9, so it contains the constraints among agents' types justified by the fact that agents have observed e1 and e2.\nA1 and A2 know C2, so they can use these constraints.\nThe key point is that channel C2 is a refinement of C1.\nIt is easy to check that f1, defined as the identity function on types and the inclusion function on tokens, is a refinement infomorphism (see the bottom of Figure 3).\nBy Theorem 2.1, C2 constraints are more reliable than C1 constraints.\nIn the general situation, once the states e1, e2, ... , en\u22121 (n \u2265 2) have been observed and a new state en appears, channel Cn = {fni : Ai \u2192 Cn}i\u2208{1,2} informs about the agents' communication up to that moment.\nThe definition of Cn is similar to the previous ones and analogous remarks can be made (see the top of Figure 3).\nThe theory Th(Log(Cn)) = Th(Cn) = \u27e8typ(Cn), \u22a2Cn\u27e9 contains the constraints among agents' types justified by the fact that agents have observed e1, e2, ... 
, en.\nFigure 3: Agents communication\nRemember we have assumed that S is countably infinite.\nIt is therefore impractical to let communication finish when all environment states have been observed by A1 and A2.\nAt that point, the family of channels {Cn}n\u2208N would inform of all the communication stages.\nIt is therefore up to the agents to decide when to stop communicating, should a good enough approximation have been reached for the purposes of their respective tasks.\nBut the study of possible termination criteria is outside the scope of this paper and left for future work.\nFrom a theoretical point of view, however, we can consider the channel C\u2217 = {f\u2217i : Ai \u2192 C\u2217}i\u2208{1,2} which informs of the end of the communication after observing all environment states.\nOn the one hand, C\u2217 is defined by: \u2022 tok(C\u2217) = {cn | n \u2208 N} \u2022 typ(C\u2217) = typ(A1 + A2) \u2022 cn |=C\u2217 \u27e8i, \u03b1\u27e9 if fi(en) |=Ai \u03b1 (for n \u2208 N and \u27e8i, \u03b1\u27e9 \u2208 typ(A1 + A2)) On the other hand, f\u2217i, with i \u2208 {1, 2}, is defined by: \u2022 f\u2217i(\u03b1) = \u27e8i, \u03b1\u27e9 (for \u03b1 \u2208 typ(Ai)) \u2022 f\u2217i(cn) = fi(en) (for n \u2208 N) The theorem below constitutes the cornerstone of the model presented in this paper.\nIt ensures, together with Theorem 2.1, that at each communication stage agents obtain a theory that approximates more closely the theory generated by the logic of SSA.\nTheorem 2.2.\nThe following statements hold: 1.\nFor all n \u2208 N, C\u2217 is a refinement of Cn.\n2.\nTh(Log(E)) = Th(C\u2217) = Th(Log(C\u2217)).\nProof.\n1.\nIt is easy to prove that, for each n \u2208 N, gn defined as the identity function on types and the inclusion 
function on tokens is a refinement infomorphism from C\u2217 to Cn.\n2.\nThe second equality is straightforward; the first one follows directly from: cn |=C\u2217 \u27e8i, \u03b1\u27e9 iff \u02c7fi(en) |=Ai \u03b1 (by definition of |=C\u2217) iff en |=E \u02c6fi(\u03b1) (because fi is an infomorphism) iff en |=E \u02c6f(\u27e8i, \u03b1\u27e9) (by definition of \u02c6f)\n3.\nAN EXAMPLE In the previous section we have described in great detail our formal model for SSA.\nHowever, we have not tackled the practical aspect of the model yet.\nIn this section, we give a brushstroke of the pragmatic view of our approach.\nWe study a very simple example and explain how agents can use those approximations of the logic of SSA they can obtain through communication.\nLet us reflect on a system consisting of robots located in a two-dimensional grid looking for packages with the aim of moving them to a certain destination (Figure 4).\nRobots can carry only one package at a time and they cannot move through a package.\nFigure 4: The scenario\nRobots have a partial view of the domain, and there exist two kinds of robots according to the visual field they have.\nSome robots are capable of observing the eight adjoining squares, but others just observe the three squares they have in front (see Figure 5).\nWe call them URDL (shortened form of Up-Right-Down-Left) and LCR (abbreviation for Left-Center-Right) robots, respectively.\nDescribing the environment states as well as the robots' perception functions is rather tedious and even unnecessary.\nWe assume the reader has all those descriptions in mind.\nAll robots in the system must be able to solve package distribution problems cooperatively by communicating their intentions to each other.\nIn order to communicate, agents send 
messages using some ontology.\nIn our scenario, there coexist two ontologies, the URDL and LCR ontologies.\nBoth of them are very simple and are just confined to describing what robots observe.\nFigure 5: Robots' field of vision\nWhen a robot carrying a package finds another package obstructing its way, it can either go around it or, if there is another robot in its visual field, ask it for assistance.\nLet us suppose two URDL robots are in a situation like the one depicted in Figure 6.\nRobot1 (the one carrying a package) decides to ask Robot2 for assistance and sends a request.\nThis request is written below as a KQML message and it should be interpreted intuitively as: Robot2, pick up the package located in my Up square, knowing that you are located in my Up-Right square.\n(request :sender Robot1 :receiver Robot2 :language Packages distribution-language :ontology URDL-ontology :content (pick up U(Package) because UR(Robot2)))\nFigure 6: Robot assistance\nRobot2 understands the content of the request and it can use a rule represented by the following constraint: \u27e81, UR(Robot2)\u27e9, \u27e82, UL(Robot1)\u27e9, \u27e81, U(Package)\u27e9 \u22a2 \u27e82, U(Package)\u27e9 The above constraint should be interpreted intuitively as: if Robot2 is situated in Robot1's Up-Right square, Robot1 is situated in Robot2's Up-Left square and a package is located in Robot1's Up square, then a package is located in Robot2's Up square.\nNow, problems arise when an LCR robot and a URDL robot try to interoperate.\nSee Figure 7.\nRobot1 sends a request of the form: (request :sender Robot1 :receiver Robot2 :language Packages distribution-language :ontology LCR-ontology :content (pick up R(Robot2) because C(Package)))\nRobot2 does not understand the content of the request, but they decide to begin a process of alignment, corresponding with a channel C1.\nOnce finished, Robot2 searches in Th(C1) for constraints similar to the expected one, that is, those of the form: \u27e81, R(Robot2)\u27e9, \u27e82, UL(Robot1)\u27e9, \u27e81, C(Package)\u27e9 \u22a2C1 \u27e82, 
λ(Package)⟩, where λ ∈ {U, R, D, L, UR, DR, DL, UL}. From these, only the following constraints are plausible according to C1:

⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C1 ⟨2, U(Package)⟩
⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C1 ⟨2, L(Package)⟩
⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C1 ⟨2, DR(Package)⟩

Figure 7: Ontology mismatch

If subsequently both robots, adopting the same roles, take part in a situation like the one depicted in Figure 8, a new process of alignment, corresponding with a channel C2, takes place. C2 also considers the previous information and hence refines C1. The only constraint from the above ones that remains plausible according to C2 is:

⟨1, R(Robot2)⟩, ⟨2, UL(Robot1)⟩, ⟨1, C(Package)⟩ ⊢C2 ⟨2, U(Package)⟩

Notice that this constraint is an element of the theory of the distributed logic. Agents communicate in order to cooperate successfully, and success is guaranteed using constraints of the distributed logic.

Figure 8: Refinement

4. CONCLUSIONS AND FURTHER WORK

In this paper we have presented a formal model of semantic alignment as a sequence of information-channel refinements that are relative to the particular states of the environment in which two agents communicate and align their respective conceptualisations of these states. Before us, Kent [6] and Kalfoglou and Schorlemmer [4, 10] have applied Channel Theory to formalise semantic alignment, also using Barwise and Seligman's insight to focus on tokens as the enablers of information flow. Their approach to semantic alignment, however, like most ontology matching mechanisms developed to date (regardless of whether they follow a functional, design-time-based approach or an interaction-based, run-time-based approach), still defines semantic alignment in terms of a priori design decisions such as the concept taxonomy of the ontologies or the external sources brought into the
alignment process. Instead, the model we have presented in this paper makes explicit the particular states of the environment in which agents are situated and are attempting to gradually align their ontological entities.

In the future, our effort will focus on the practical side of the situated semantic alignment problem. We plan to further refine the model presented here (e.g., to include pragmatic issues such as termination criteria for the alignment process) and to devise concrete ontology-negotiation protocols based on this model that agents may be able to enact. The formal model presented in this paper will constitute a solid basis for future practical results.

Acknowledgements

This work is supported under the UPIC project, sponsored by Spain's Ministry of Education and Science under grant number TIN2004-07461-C02-02, and also under the OpenKnowledge Specific Targeted Research Project (STREP), sponsored by the European Commission under contract number FP6-027253. Marco Schorlemmer is supported by a Ramón y Cajal Research Fellowship from Spain's Ministry of Education and Science, partially funded by the European Social Fund.

5. REFERENCES

[1] J. Barwise and J. Seligman. Information Flow: The Logic of Distributed Systems. Cambridge University Press, 1997.
[2] C. Ghidini and F. Giunchiglia. Local models semantics, or contextual reasoning = locality + compatibility. Artificial Intelligence, 127(2):221-259, 2001.
[3] F. Giunchiglia and P. Shvaiko. Semantic matching. The Knowledge Engineering Review, 18(3):265-280, 2004.
[4] Y. Kalfoglou and M. Schorlemmer. IF-Map: An ontology-mapping method based on information-flow theory. In Journal on Data Semantics I, LNCS 2800, 2003.
[5] Y. Kalfoglou and M. Schorlemmer. Ontology mapping: The state of the art. The Knowledge Engineering Review, 18(1):1-31, 2003.
[6] R. E.
Kent. Semantic integration in the Information Flow Framework. In Semantic Interoperability and Integration, Dagstuhl Seminar Proceedings 04391, 2005.
[7] D. Lenat. CyC: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11), 1995.
[8] V. López, M. Sabou, and E. Motta. PowerMap: Mapping the real Semantic Web on the fly. In Proceedings of ISWC'06, 2006.
[9] F. McNeill. Dynamic Ontology Refinement. PhD thesis, School of Informatics, The University of Edinburgh, 2006.
[10] M. Schorlemmer and Y. Kalfoglou. Progressive ontology alignment for meaning coordination: An information-theoretic foundation. In 4th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, 2005.
[11] P. Shvaiko and J. Euzenat. A survey of schema-based matching approaches. In Journal on Data Semantics IV, LNCS 3730, 2005.
[12] L. Steels. The origins of ontologies and communication conventions in multi-agent systems. Journal of Autonomous Agents and Multi-Agent Systems, 1(2):169-194, 1998.
[13] J. van Diggelen et al. ANEMONE: An effective minimal ontology negotiation environment. In 5th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, 2006.

APPENDIX A.
CHANNEL THEORY TERMS

Classification: a tuple A = ⟨tok(A), typ(A), |=A⟩ where tok(A) is a set of tokens, typ(A) is a set of types and |=A is a binary relation between tok(A) and typ(A). If a |=A α then a is said to be of type α.

Infomorphism: f : A → B from classification A to classification B is a contravariant pair of functions f = ⟨ˆf, ˇf⟩, where ˆf : typ(A) → typ(B) and ˇf : tok(B) → tok(A), satisfying the following fundamental property: ˇf(b) |=A α iff b |=B ˆf(α), for each token b ∈ tok(B) and each type α ∈ typ(A).

Channel: consists of two infomorphisms C = {fi : Ai → C}i∈{1,2} with a common codomain C, called the core of C. Tokens of C are called connections, and a connection c is said to connect the tokens ˇf1(c) and ˇf2(c) (see footnote 2).

Sum: given classifications A and B, the sum of A and B, denoted A + B, is the classification with tok(A + B) = tok(A) × tok(B) = {⟨a, b⟩ | a ∈ tok(A) and b ∈ tok(B)}, typ(A + B) = typ(A) ⊎ typ(B) = {⟨i, γ⟩ | i = 1 and γ ∈ typ(A), or i = 2 and γ ∈ typ(B)}, and relation |=A+B defined by:

⟨a, b⟩ |=A+B ⟨1, α⟩ if a |=A α
⟨a, b⟩ |=A+B ⟨2, β⟩ if b |=B β

Given infomorphisms f : A → C and g : B → C, the sum f + g : A + B → C is defined on types by (f + g)ˆ(⟨1, α⟩) = ˆf(α) and (f + g)ˆ(⟨2, β⟩) = ˆg(β), and on tokens by (f + g)ˇ(c) = ⟨ˇf(c), ˇg(c)⟩.

Theory: given a set Σ, a sequent of Σ is a pair ⟨Γ, Δ⟩ of subsets of Σ. A binary relation ⊢ between subsets of Σ is called a consequence relation on Σ. A theory is a pair T = ⟨Σ, ⊢⟩ where ⊢ is a consequence relation on Σ. A sequent ⟨Γ, Δ⟩ of Σ for which Γ ⊢ Δ is called a constraint of the theory T.
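To make the preceding definitions concrete, here is a small Python sketch (our illustration, not part of the paper; the toy classifications used below are hypothetical) that encodes a classification as a finite token/type relation, checks the fundamental property of an infomorphism, and constructs the sum A + B:

```python
# Illustrative sketch of the Channel Theory notions above (hypothetical toy data).

class Classification:
    """A classification <tok(A), typ(A), |=A> as a finite token/type relation."""
    def __init__(self, tokens, types, rel):
        self.tokens = set(tokens)
        self.types = set(types)
        self.rel = set(rel)              # pairs (token, type) with token |= type

    def classifies(self, token, typ):
        return (token, typ) in self.rel

def is_infomorphism(f_up, f_down, A, B):
    """Check f = <f_up, f_down> : A -> B, with f_up : typ(A) -> typ(B) and
    f_down : tok(B) -> tok(A), against the fundamental property:
    f_down(b) |=A alpha  iff  b |=B f_up(alpha)."""
    return all(A.classifies(f_down(b), a) == B.classifies(b, f_up(a))
               for b in B.tokens for a in A.types)

def sum_classification(A, B):
    """The sum A + B: tokens are pairs of tokens, types are tagged with 1 or 2."""
    tokens = {(a, b) for a in A.tokens for b in B.tokens}
    types = {(1, g) for g in A.types} | {(2, g) for g in B.types}
    rel = ({((a, b), (1, g)) for (a, b) in tokens for g in A.types if A.classifies(a, g)}
         | {((a, b), (2, g)) for (a, b) in tokens for g in B.types if B.classifies(b, g)})
    return Classification(tokens, types, rel)
```

On finite classifications, `is_infomorphism` simply enumerates every token/type pair; it mirrors the definition rather than aiming at efficiency.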
T is regular if it satisfies, for all α ∈ Σ and all Γ, Γ′, Δ, Δ′, Π ⊆ Σ:

1. Identity: α ⊢ α
2. Weakening: if Γ ⊢ Δ, then Γ, Γ′ ⊢ Δ, Δ′
3. Global Cut: if Γ, Π0 ⊢ Δ, Π1 for each partition ⟨Π0, Π1⟩ of Π (i.e., Π0 ∪ Π1 = Π and Π0 ∩ Π1 = ∅), then Γ ⊢ Δ (see footnote 3)

Footnote 2: In fact, this is the definition of a binary channel. A channel can be defined with an arbitrary index set.

Theory generated by a classification: let A be a classification. A token a ∈ tok(A) satisfies a sequent ⟨Γ, Δ⟩ of typ(A) provided that if a is of every type in Γ then it is of some type in Δ. The theory generated by A, denoted Th(A), is the theory ⟨typ(A), ⊢A⟩ where Γ ⊢A Δ if every token in A satisfies ⟨Γ, Δ⟩.

Local logic: a tuple L = ⟨tok(L), typ(L), |=L, ⊢L, NL⟩ where:

1. ⟨tok(L), typ(L), |=L⟩ is a classification, denoted Cla(L),
2. ⟨typ(L), ⊢L⟩ is a regular theory, denoted Th(L),
3. NL is a subset of tok(L), called the normal tokens of L, which satisfy all constraints of Th(L).

A local logic L is sound if every token in Cla(L) is normal, that is, NL = tok(L). L is complete if every sequent of typ(L) satisfied by every normal token is a constraint of Th(L).

Local logic generated by a classification: given a classification A, the local logic generated by A, written Log(A), is the local logic on A (i.e., Cla(Log(A)) = A) with Th(Log(A)) = Th(A) and such that all its tokens are normal, i.e., NLog(A) = tok(A).

Inverse image: given an infomorphism f : A → B and a local logic L on B, the inverse image of L under f, denoted f−1[L], is the local logic on A such that Γ ⊢f−1[L] Δ if ˆf[Γ] ⊢L ˆf[Δ], and Nf−1[L] = ˇf[NL] = {a ∈ tok(A) | a = ˇf(b) for some b ∈ NL}.

Distributed logic: let C = {fi : Ai → C}i∈{1,2} be a channel and L
a local logic on its core C; the distributed logic of C generated by L, written DLogC(L), is the inverse image of L under the sum f1 + f2.

Refinement: let C = {fi : Ai → C}i∈{1,2} and C′ = {f′i : Ai → C′}i∈{1,2} be two channels with the same component classifications A1 and A2. A refinement infomorphism from C′ to C is an infomorphism r : C′ → C such that for each i ∈ {1, 2}, fi = r ◦ f′i (i.e., ˆfi = ˆr ◦ ˆf′i and ˇfi = ˇf′i ◦ ˇr). Channel C′ is a refinement of C if there exists a refinement infomorphism r from C′ to C.

B. CHANNEL THEORY THEOREMS

Theorem B.1. The logic generated by a classification is sound and complete. Furthermore, given a classification A and a logic L on A, L is sound and complete if and only if L = Log(A).

Theorem B.2. Let L be a logic on a classification B and f : A → B an infomorphism.
1. If L is complete then f−1[L] is complete.
2. If L is sound and ˇf is surjective then f−1[L] is sound.

Footnote 3: All theories considered in this paper are regular.

A Formal Model for Situated Semantic Alignment

ABSTRACT

Ontology matching is currently a key technology to achieve the semantic alignment of ontological entities used by knowledge-based applications, and therefore to enable their interoperability in distributed environments such as multiagent systems. Most ontology matching mechanisms, however, assume matching prior to integration and rely on semantics that has been coded a priori in concept hierarchies or external sources. In this paper, we present a formal model for a semantic alignment procedure that incrementally aligns differing conceptualisations of two or more agents relative to their respective perception of the environment or domain they are acting in. It hence makes the situation in which the alignment occurs explicit in the model. We resort to
Channel Theory to carry out the formalisation.

1. INTRODUCTION

An ontology is commonly defined as a specification of the conceptualisation of a particular domain. It fixes the vocabulary used by knowledge engineers to denote concepts and their relations, and it constrains the interpretation of this vocabulary to the meaning originally intended by knowledge engineers. As such, ontologies have been widely adopted as a key technology that may favour knowledge sharing in distributed environments, such as multi-agent systems, federated databases, or the Semantic Web. But the proliferation of many diverse ontologies, caused by different conceptualisations of even the same domain and their subsequent specification using varying terminology, has highlighted the need for ontology matching techniques that are capable of computing semantic relationships between entities of separately engineered ontologies [5, 11].

Most ontology matching mechanisms developed so far have taken a classical functional approach to the semantic heterogeneity problem, in which ontology matching is seen as a process taking two or more ontologies as input and producing a semantic alignment of ontological entities as output [3]. Furthermore, matching has often been carried out at design-time, before integrating knowledge-based systems or making them interoperate. This might have been successful for clearly delimited and stable domains and for closed distributed systems, but it is untenable and even undesirable for the kind of applications that are currently deployed in open systems. Multi-agent communication, peer-to-peer information sharing, and web-service composition are all of a decentralised, dynamic, and open-ended nature, and they require ontology matching to be performed locally during run-time. In addition, in many situations peer ontologies are not even open for inspection (e.g., when they are based on commercially confidential information). Certainly, there exist
efforts to efficiently match ontological entities at run-time, taking only those ontology fragments that are necessary for the task at hand [10, 13, 9, 8]. Nevertheless, the techniques used by these systems to establish the semantic relationships between ontological entities, even though applied at run-time, still exploit a priori defined concept taxonomies as they are represented in the graph-based structures of the ontologies to be matched, use previously existing external sources such as thesauri (e.g., WordNet) and upper-level ontologies (e.g., CyC or SUMO), or resort to additional background knowledge repositories or shared instances.

We claim that semantic alignment of ontological terminology is ultimately relative to the particular situation in which the alignment is carried out, and that this situation should be made explicit and brought into the alignment mechanism. Even two agents with identical conceptualisation capabilities, and using exactly the same vocabulary to specify their respective conceptualisations, may fail to interoperate in a concrete situation because of their differing perception of the domain. Imagine a situation in which two agents are facing each other in front of a checker board. Agent A1 may conceptualise a figure on the board as situated on the left margin of the board, while agent A2 may conceptualise the same figure as situated on the right. Although the conceptualisation of 'left' and 'right' is done in exactly the same manner by both agents, and even if both use the terms left and right in their communication, they will still need to align their respective vocabularies if they want to successfully communicate to each other actions that change the position of figures on the checker board. Their semantic alignment, however, will only be valid in the scope of their interaction within this particular situation or environment. The same agents situated differently may produce a different alignment. This scenario is
reminiscent of those in which a group of distributed agents adapt to form an ontology and a shared lexicon in an emergent, bottom-up manner, with only local interactions and no central control authority [12]. This sort of self-organised emergence of shared meaning is ultimately grounded in the physical interaction of agents with the environment. In this paper, however, we address the case in which agents are already endowed with a top-down engineered ontology (it can even be the same one), which they do not adapt or refine, but for which they want to find the semantic relationships with separate ontologies of other agents on the grounds of their communication within a specific situation. In particular, we provide a formal model that formalises situated semantic alignment as a sequence of information-channel refinements in the sense of Barwise and Seligman's theory of information flow [1]. This theory is particularly useful for our endeavour because it models the flow of information occurring in distributed systems due to the particular situations, or tokens, that carry information. Analogously, the semantic alignment that will allow information to flow will ultimately be carried by the particular situation the agents are acting in.

We shall therefore consider a scenario with two or more agents situated in an environment. Each agent will have its own viewpoint of the environment, so that, if the environment is in a concrete state, both agents may have different perceptions of this state. Because of these differences there may be a mismatch in the meaning of the syntactic entities by which agents describe their perceptions (and which constitute the agents' respective ontologies). We state that these syntactic entities can be related according to the intrinsic semantics provided by the existing relationship between the agents' viewpoints of the environment. The existence of this relationship is precisely justified by the fact that the agents are situated
and observe the same environment.

In Section 2 we describe our formal model for Situated Semantic Alignment (SSA). First, in Section 2.1 we associate a channel to the scenario under consideration and show how the distributed logic generated by this channel provides the logical relationships between the agents' viewpoints of the environment. Second, in Section 2.2 we present a method by which agents obtain approximations of this distributed logic. These approximations gradually become more reliable as the method is applied. In Section 3 we report on an application of our method. Conclusions and further work are analyzed in Section 4. Finally, an appendix summarizes the terms and theorems of Channel Theory used along the paper. We do not assume any knowledge of Channel Theory; we restate basic definitions and theorems in the appendix, but any detailed exposition of the theory is outside the scope of this paper.

2. A FORMAL MODEL FOR SSA

2.1 The Logic of SSA

Consider a scenario with two agents A1 and A2 situated in an environment E (the generalization to any numerable set of agents is straightforward). We
associate a numerable set S of states to E and, at any given instant, we suppose E to be in one of these states. We further assume that each agent is able to observe the environment and has its own perception of it. This ability is faithfully captured by a surjective function see_i : S → P_i, where i ∈ {1, 2}, and typically see_1 and see_2 are different. According to Channel Theory, information is only viable where there is a systematic way of classifying some range of things as being this way or that; in other words, where there is a classification (see appendix A). So in order to remain within the framework of Channel Theory, we must associate classifications to the components of our system. For each i ∈ {1, 2}, we consider a classification A_i that models A_i's viewpoint of E. First, tok(A_i) is composed of A_i's perceptions of E states, that is, tok(A_i) = P_i. Second, typ(A_i) contains the syntactic entities by which A_i describes its perceptions, the ones constituting the ontology of A_i. Finally, ⊨_{A_i} synthesizes how A_i relates its perceptions with these syntactic entities. Now, with the aim of associating environment E with a classification E, we choose the power classification of S as E, which is the classification whose set of types is equal to 2^S, whose tokens are the elements of S, and for which a token e is of type ε if e ∈ ε. The reason for taking the power classification is that there are no syntactic entities that may play the role of types for E since, in general, there is no global conceptualisation of the environment. However, the set of types of the power classification includes all possible token configurations potentially described by types. Thus tok(E) = S, typ(E) = 2^S and e ⊨_E ε if and only if e ∈ ε. The notion of channel (see appendix A) is fundamental in Barwise and Seligman's theory. The information flow among the components of a distributed system is modelled in terms of a channel, and the relationships among these
components are expressed via infomorphisms (see appendix A), which provide a way of moving information between them. The information flow of the scenario under consideration is accurately described by the channel E = {f_i : A_i → E}_{i ∈ {1,2}} defined as follows:

• f̂_i(α) = {e ∈ tok(E) | see_i(e) ⊨_{A_i} α} for each α ∈ typ(A_i)
• f̌_i(e) = see_i(e) for each e ∈ tok(E)

where i ∈ {1, 2}. The definition of f̌_i seems natural, while f̂_i is defined in such a way that the fundamental property of infomorphisms is fulfilled: f̌_i(e) ⊨_{A_i} α if and only if e ⊨_E f̂_i(α).

Figure 1: Channel E

E explains the information flow of our scenario by virtue of agents A1 and A2 being situated in and perceiving the same environment E. We want to obtain meaningful relations among agents' syntactic entities, that is, agents' types. We state that meaningfulness must be in accord with E. The sum operation (see appendix A) gives us a way of putting the two agents' classifications of channel E together into a single classification, namely A1 + A2, and also the two infomorphisms together into a single infomorphism, f1 + f2 : A1 + A2 → E. A1 + A2 assembles the agents' classifications in a very coarse way. tok(A1 + A2) is the cartesian product of tok(A1) and tok(A2), that is, tok(A1 + A2) = {(p1, p2) | p_i ∈ P_i}, so a token of A1 + A2 is a pair of agents' perceptions with no restrictions. typ(A1 + A2) is the disjoint union of typ(A1) and typ(A2), and (p1, p2) is of type (i, α) if p_i is of type α. It is important to take the disjoint union because A1 and A2 could use identical types with the purpose of describing their respective perceptions of E.
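To make the construction concrete, here is a small sketch of our own (a toy illustration, not the paper's implementation) with a finite state set and hypothetical perception functions; it builds the two infomorphism maps of channel E and checks the fundamental property:

```python
# Toy sketch of the channel E: S is tok(E), see_i are hypothetical perception
# functions, and each agent classification A_i is given by its |= relation.
S = {"s1", "s2", "s3"}  # tok(E): the environment states

see1 = {"s1": "p_a", "s2": "p_a", "s3": "p_b"}  # P1 = {p_a, p_b}
see2 = {"s1": "q_x", "s2": "q_y", "s3": "q_y"}  # P2 = {q_x, q_y}

A1_sat = {("p_a", "left")}    # p_a |=_{A1} left
A2_sat = {("q_y", "right")}   # q_y |=_{A2} right

def f_up(see, sat, alpha):
    """f^_i(alpha) = { e in tok(E) | see_i(e) |=_{A_i} alpha } (upward, on types)."""
    return {e for e in S if (see[e], alpha) in sat}

def f_down(see, e):
    """fv_i(e) = see_i(e) (downward, on tokens)."""
    return see[e]

# Fundamental property: fv_i(e) |=_{A_i} alpha  iff  e is of type f^_i(alpha) in E,
# i.e. e is an element of the set f^_i(alpha).
for see, sat, types in ((see1, A1_sat, {"left"}), (see2, A2_sat, {"right"})):
    for alpha in types:
        for e in S:
            assert ((f_down(see, e), alpha) in sat) == (e in f_up(see, sat, alpha))

print(sorted(f_up(see1, A1_sat, "left")))  # -> ['s1', 's2']
```

The property holds by construction here: f̂_i(α) is defined exactly as the set of states whose perception is of type α, which is what makes the pair of maps an infomorphism.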
Classification A1 + A2 seems to be the natural place in which to search for relations among agents' types. Now, Channel Theory provides a way to make all these relations explicit in a logical fashion by means of theories and local logics (see appendix A). The theory generated by the sum classification, Th(A1 + A2), and hence its generated logic, Log(A1 + A2), involve all those constraints among agents' types valid according to A1 + A2. Notice however that these constraints are obvious. As we stated above, meaningfulness must be in accord with channel E. Classifications A1 + A2 and E are connected via the sum infomorphism, f = f1 + f2, where:

• f̂((i, α)) = f̂_i(α) = {e ∈ tok(E) | see_i(e) ⊨_{A_i} α} for each (i, α) ∈ typ(A1 + A2)
• f̌(e) = (f̌_1(e), f̌_2(e)) = (see_1(e), see_2(e)) for each e ∈ tok(E)

Meaningful constraints among agents' types are in accord with channel E because they are computed making use of f, as we expound below. As important as the notion of channel is the concept of distributed logic (see appendix A). Given a channel C and a local logic 𝔏 on its core, DLog_C(𝔏) represents the reasoning about relations among the components of C justified by 𝔏. When 𝔏 = Log(C), we write Log(C) for the resulting distributed logic of the channel; it captures in a logical fashion the information flow inherent in the channel. In our case, Log(E) explains the relationship between the agents' viewpoints of the environment in a logical fashion. On the one hand, the constraints of Th(Log(E)) are defined by:

Γ ⊢_{Log(E)} Δ if f̂[Γ] ⊢_E f̂[Δ]   (1)

where Γ, Δ ⊆ typ(A1 + A2). On the other hand, the set of normal tokens, N_{Log(E)}, is equal to the range of the function f̌:

N_{Log(E)} = f̌[tok(E)] = {(see_1(e), see_2(e)) | e ∈ tok(E)}

Therefore, a normal token is a pair of agents' perceptions that are restricted by coming from the same environment state (unlike A1 + A2 tokens). All constraints of Th(Log(E)) are satisfied by all normal tokens (because Log(E) is a logic). In this particular case, this
condition is also sufficient (the proof is straightforward); as an alternative to (1) we have:

Γ ⊢_{Log(E)} Δ if every token of N_{Log(E)} satisfies the sequent ⟨Γ, Δ⟩   (2)

where Γ, Δ ⊆ typ(A1 + A2). Log(E) is the logic of SSA. Th(Log(E)) comprises the most meaningful constraints among agents' types in accord with channel E. In other words, the logic of SSA contains, and also justifies, the most meaningful relations among those syntactic entities that agents use in order to describe their own environment perceptions. The distributed logic Log(E) is complete, since the logic generated by the core of the channel is complete, but it is not necessarily sound because, although the logic generated by the core is sound, f̌ is not surjective in general (see appendix B). If the distributed logic is also sound, then it equals Log(A1 + A2) (see appendix B). That would mean there is no significant relation between the agents' points of view of the environment according to E. It is precisely the fact that the distributed logic is unsound that allows a significant relation between the agents' viewpoints. This relation is expressed at the type level in terms of constraints, by Th(Log(E)), and at the token level, by N_{Log(E)}.

2.2 Approaching the logic of SSA through communication

We have dubbed Log(E) the logic of SSA. Th(Log(E)) comprises the most meaningful constraints among agents' types according to E. The problem is that neither agent can make use of this theory because they do not know E completely. In this section, we present a method by which agents obtain approximations to Th(Log(E)). We also prove that these approximations gradually become more reliable as the method is applied. Agents can obtain approximations to Th(Log(E)) through communication. A1 and A2 communicate by exchanging information about their perceptions of environment states. This information is expressed in terms of their own classification relations. Specifically, if E is in a concrete state e, we assume that agents can convey to each other which types are satisfied by their respective perceptions of e and which are not. This exchange generates a channel C = {f_i :
The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)
A_i → C}_{i ∈ {1,2}}, and Th(Log(C)) contains the constraints among agents' types justified by the fact that agents have observed e. Now, if E turns to another state e′ and agents proceed as before, another channel C′ = {f′_i : A_i → C′}_{i ∈ {1,2}} gives account of the new situation, considering also the previous information. Th(Log(C′)) comprises the constraints among agents' types justified by the fact that agents have observed e and e′. The significant point is that C′ is a refinement of C (see appendix A). Theorem 2.1 below ensures that the refined channel involves more reliable information. The communication supposedly ends when agents have observed all the environment states. Again this situation can be modeled by a channel, call it C* = {f*_i : A_i → C*}_{i ∈ {1,2}}. Theorem 2.2 states that Th(Log(C*)) = Th(Log(E)). Theorems 2.1 and 2.2 assure that, by applying the method, agents can obtain gradually more reliable approximations to Th(Log(E)).

THEOREM 2.1. Let C = {f_i : A_i → C}_{i ∈ {1,2}} and C′ = {f′_i : A_i → C′}_{i ∈ {1,2}} be two channels. If C′ is a refinement of C then:
1. Th(Log(C′)) ⊆ Th(Log(C))
2. N_{Log(C′)} ⊇ N_{Log(C)}

PROOF. Since C′ is a refinement of C, there exists a refinement infomorphism r from C′ to C; so f_i = r ∘ f′_i. Let A =def A1 + A2, f =def f1 + f2 and f′ =def f′_1 + f′_2.
1. Let Γ and Δ be subsets of typ(A) and assume that Γ ⊢_{Log(C′)} Δ, which means f̂′[Γ] ⊢_{C′} f̂′[Δ]. We have to show that f̂[Γ] ⊢_C f̂[Δ]. We proceed by reductio ad absurdum. Suppose c ∈ tok(C) does not satisfy the sequent ⟨f̂[Γ], f̂[Δ]⟩. Then c ⊨_C f̂(γ) for all γ ∈ Γ and c ⊭_C f̂(δ) for all δ ∈ Δ. Let us choose an
arbitrary γ ∈ Γ. We have that c ⊨_C f̂(γ), and f̂(γ) = r̂(f̂′(γ)) because f_i = r ∘ f′_i; hence, by the fundamental property of the refinement infomorphism r, ř(c) ⊨_{C′} f̂′(γ). Consequently, ř(c) ⊨_{C′} f̂′(γ) for all γ ∈ Γ. Since f̂′[Γ] ⊢_{C′} f̂′[Δ], there exists δ* ∈ Δ such that ř(c) ⊨_{C′} f̂′(δ*). A sequence of equivalences similar to the one above justifies c ⊨_C f̂(δ*), contradicting that c is a counterexample to ⟨f̂[Γ], f̂[Δ]⟩. Hence Γ ⊢_{Log(C)} Δ, as we wanted to prove.
2. Let ⟨a1, a2⟩ ∈ tok(A) and assume ⟨a1, a2⟩ ∈ N_{Log(C)}. Therefore, there exists a token c in C such that ⟨a1, a2⟩ = f̌(c). Then we have a_i = f̌_i(c) = f̌′_i ∘ ř(c) = f̌′_i(ř(c)), for i ∈ {1, 2}. Hence ⟨a1, a2⟩ = f̌′(ř(c)) and ⟨a1, a2⟩ ∈ N_{Log(C′)}. Consequently, N_{Log(C′)} ⊇ N_{Log(C)}, which concludes the proof.

REMARK 2.1. Theorem 2.1 asserts that the more refined channel gives more reliable information. Even though its theory has fewer constraints, it has more normal tokens to which they apply.

In the remainder of the section, we explicitly describe the process of communication and we conclude with the proof of Theorem 2.2. Let us assume that typ(A_i) is finite for i ∈ {1, 2} and that S is infinite numerable, though the finite case can be treated in a similar form. We also choose an infinite numerable set of symbols {c^n | n ∈ ℕ}.¹ We omit infomorphism superscripts when no confusion arises; types are usually denoted by Greek letters and tokens by Latin letters. Agent communication starts with the observation of E.
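Before walking through the communication stages in detail, the incremental computation can be sketched as follows (a toy sketch of our own, with made-up perception data): each observed state contributes one connection token recording which types the agents' perceptions satisfy, the theory of a stage is computed by brute force, and the monotonicity that Theorem 2.1 guarantees is checked directly.

```python
from itertools import chain, combinations

# typ(A1 + A2) as (agent, type) pairs; tokens of C^n are the records of the
# first n observations (hypothetical data, one frozenset per observed state).
types = [(1, "up"), (1, "down"), (2, "left"), (2, "right")]
tok_C1 = [frozenset({(1, "up"), (2, "left")})]             # after observing e1
tok_C2 = tok_C1 + [frozenset({(1, "down"), (2, "left")})]  # after e1 and e2

def subsets(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def theory(tokens):
    """Th of a classification: all sequents <Gamma, Delta> satisfied by every
    token; a token t satisfies <Gamma, Delta> iff Gamma ⊆ t implies Delta ∩ t ≠ ∅."""
    return {(frozenset(G), frozenset(D))
            for G in subsets(types) for D in subsets(types)
            if all(not set(G) <= t or set(D) & t for t in tokens)}

th1, th2 = theory(tok_C1), theory(tok_C2)
# The refined channel C^2 has strictly fewer constraints than C^1 here:
assert th2 < th1
```

With one observation every type the perception happened to satisfy looks like a law; the second observation discards the spurious sequents, which is exactly the sense in which the approximation becomes more reliable.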
Let us suppose that E is in state e1 ∈ S = tok(E). A1's perception of e1 is f̌_1(e1) and A2's perception of e1 is f̌_2(e1). We take for granted that A1 can communicate to A2 those types that are and are not satisfied by f̌_1(e1) according to its classification A1, and A2 can do the same. Since both typ(A1) and typ(A2) are finite, this process eventually finishes. After this communication a channel C¹ = {f¹_i : A_i → C¹}_{i ∈ {1,2}} arises (see Figure 2).

Figure 2: The first communication stage

On the one hand, C¹ is defined by:

• tok(C¹) = {c¹}
• typ(C¹) = typ(A1 + A2)
• c¹ ⊨_{C¹} ⟨i, α⟩ if f̌_i(e1) ⊨_{A_i} α (for every ⟨i, α⟩ ∈ typ(A1 + A2))

On the other hand, f¹_i, with i ∈ {1, 2}, is defined on types by f̂¹_i(α) = ⟨i, α⟩ and on tokens by f̌¹_i(c¹) = f̌_i(e1). Log(C¹) represents the reasoning about the first stage of communication. It is easy to prove that Th(Log(C¹)) = Th(C¹). The significant point is that both agents know C¹ as the result of the communication. Hence they can separately compute the theory Th(C¹) = ⟨typ(C¹), ⊢_{C¹}⟩, which contains the constraints among agents' types justified by the fact that agents have observed e1. Now, let us assume that E turns to a new state e2. Agents can proceed as before, exchanging this time information about their perceptions of e2. Another channel C² = {f²_i : A_i → C²}_{i ∈ {1,2}} comes up. We define C² so as to take into account also the information provided by the previous stage of communication. On the one hand, C² is defined by:

• tok(C²) = {c¹, c²}
• typ(C²) = typ(A1 + A2)
• c^k ⊨_{C²} ⟨i, α⟩ if f̌_i(e_k) ⊨_{A_i} α (for k ∈ {1, 2} and every ⟨i, α⟩ ∈ typ(A1 + A2))

¹We write these symbols with superindices because we reserve subindices for agents. Note that this set is chosen with the same cardinality as S.

Log(C²) represents the reasoning about the former and the latter communication stages. Th(Log(C²)) is equal to Th(C²) = ⟨typ(C²), ⊢_{C²}⟩, so it contains the constraints among agents' types justified by the fact that agents have observed e1 and e2. A1 and A2 know C², so they can use these
constraints. The key point is that channel C² is a refinement of C¹. It is easy to check that r¹, defined as the identity function on types and the inclusion function on tokens, is a refinement infomorphism (see the bottom of Figure 3). By Theorem 2.1, C² constraints are more reliable than C¹ constraints. In the general situation, once the states e1, e2, ..., e_{n−1} (n ≥ 2) have been observed and a new state e_n appears, the channel C^n = {f^n_i : A_i → C^n}_{i ∈ {1,2}} informs about the agents' communication up to that moment. The definition of C^n is similar to the previous ones, and analogous remarks can be made (see the top of Figure 3). The theory Th(Log(C^n)) = Th(C^n) = ⟨typ(C^n), ⊢_{C^n}⟩ contains the constraints among agents' types justified by the fact that agents have observed e1, e2, ..., e_n.

Figure 3: Agents communication

Remember we have assumed that S is infinite numerable. It is therefore impractical to let communication finish only when all the environment states have been observed by A1 and A2; at that point, the family of channels {C^n}_{n ∈ ℕ} would inform of all the communication stages. It is therefore up to the agents to decide when to stop communicating, should a good enough approximation have been reached for the purposes of their respective tasks. But the study of possible termination criteria is outside the scope of this paper and left for future work. From a theoretical point of view, however, we can consider the channel C* = {f*_i : A_i → C*}_{i ∈ {1,2}}, which informs of the end of the communication after observing all environment states. On the one hand, C* is defined by:

• tok(C*) = {c^n | n ∈ ℕ}
• typ(C*) = typ(A1 + A2)
• c^n ⊨_{C*} ⟨i, α⟩ if f̌_i(e_n) ⊨_{A_i} α (for n ∈ ℕ and ⟨i, α⟩ ∈ typ(A1 + A2))

On the other hand, f*_i, with i ∈ {1, 2}, is defined by:

• f̂*_i(α) = ⟨i, α⟩ (for α ∈ typ(A_i))
• f̌*_i(c^n) = f̌_i(e_n) (for n ∈ ℕ)

The theorem below constitutes the cornerstone of the model exposed in this paper. It ensures, together with Theorem 2.1, that at each communication stage agents obtain a theory that approximates more closely the theory generated by the logic of SSA.

THEOREM 2.2. The following statements hold:
1. For all n ∈ ℕ, C* is a refinement of C^n.
2. Th(Log(E)) = Th(C*) = Th(Log(C*)).

3. AN EXAMPLE

In the previous section we described our formal model for SSA in great detail. However, we have not yet tackled the practical aspects of the model. In this section, we sketch the pragmatic view of our approach. We study a very simple example and explain how agents can use the approximations of the logic of SSA that they obtain through communication. Consider a system consisting of robots located in a two-dimensional grid looking for packages, with the aim of moving them to a certain destination (Figure 4). Robots can carry only one package at a time and they cannot move through a package.

Figure 4: The scenario

Robots have a partial view of the domain, and there exist two kinds of robots according to the visual field they have. Some robots are capable of observing the eight adjoining squares, while others observe just the three squares they have in front (see Figure 5). We call them URDL (short for Up-Right-Down-Left) and LCR (short for Left-Center-Right) robots respectively. Describing the environment states as well as the robots' perception functions is rather tedious and even unnecessary; we assume the reader has all those descriptions in mind. All robots in the system must be able to solve package distribution problems cooperatively by communicating their intentions to each other. In order to communicate, agents send messages using some
ontology. In our scenario, two ontologies coexist: the URDL and LCR ontologies. Both of them are very simple and are confined to describing what robots observe.

Figure 5: Robots' fields of vision

When a robot carrying a package finds another package obstructing its way, it can either go around it or, if there is another robot in its visual field, ask that robot for assistance. Let us suppose two URDL robots are in a situation like the one depicted in Figure 6. Robot1 (the one carrying a package) decides to ask Robot2 for assistance and sends a request. This request is written as a KQML message and should be interpreted intuitively as: Robot2, pick up the package located in my "Up" square, knowing that you are located in my "Up-Right" square.

Figure 6: Robot assistance

Robot2 understands the content of the request and it can use a rule represented by a constraint that should be interpreted intuitively as: if Robot2 is situated in Robot1's "Up-Right" square, Robot1 is situated in Robot2's "Up-Left" square and a package is located in Robot1's "Up" square, then a package is located in Robot2's "Up" square. Now, problems arise when an LCR robot and a URDL robot try to interoperate (see Figure 7). Robot1 sends a request of the same form. Robot2 does not understand the content of the request, but the robots decide to begin a process of alignment, corresponding to a channel C¹. Once finished, Robot2 searches in Th(C¹) for constraints similar to the expected one, that is, constraints of the same form with λ ranging over {U, R, D, L, UR, DR, DL, UL}. From these, only a few constraints are plausible according to C¹.

Figure 7: Ontology mismatch

If subsequently both robots, adopting the same roles, take part in a situation like the one depicted in Figure 8, a new process of alignment, corresponding to a channel C², takes place. C² also considers the previous information and hence refines C¹. The only constraint from the
above ones that remains plausible according to C² is precisely the expected one. Notice that this constraint is an element of the theory of the distributed logic. Agents communicate in order to cooperate successfully, and success is guaranteed by using constraints of the distributed logic.

Figure 8: Refinement

4. CONCLUSIONS AND FURTHER WORK

In this paper we have presented a formal model of semantic alignment as a sequence of information-channel refinements that are relative to the particular states of the environment in which two agents communicate and align their respective conceptualisations of these states. Before us, Kent [6] and Kalfoglou and Schorlemmer [4, 10] have applied Channel Theory to formalise semantic alignment, also using Barwise and Seligman's insight to focus on tokens as the enablers of information flow. Their approach to semantic alignment, however, like most ontology matching mechanisms developed to date (regardless of whether they follow a functional, design-time-based approach or an interaction-based, runtime-based approach), still defines semantic alignment in terms of a priori design decisions, such as the concept taxonomy of the ontologies or the external sources brought into the alignment process. Instead, the model we have presented in this paper makes explicit the particular states of the environment in which agents are situated while attempting to gradually align their ontological entities. In the future, our effort will focus on the practical side of the situated semantic alignment problem. We plan to further refine the model presented here (e.g., to include pragmatic issues such as termination criteria for the alignment process) and to devise concrete ontology negotiation protocols based on this model that agents may be able to enact. The formal model presented in this paper will constitute a solid basis for future practical results.

Acknowledgements

This work is supported under the UPIC project, sponsored by Spain's Ministry of Education and Science under
grant number TIN2004-07461-C02-02, and also under the OpenKnowledge Specific Targeted Research Project (STREP), sponsored by the European Commission under contract number FP6-027253. Marco Schorlemmer is supported by a Ramón y Cajal Research Fellowship from Spain's Ministry of Education and Science, partially funded by the European Social Fund.

5. REFERENCES

APPENDIX A. CHANNEL THEORY TERMS

Classification: a tuple A = ⟨tok(A), typ(A), ⊨_A⟩ where tok(A) is a set of tokens, typ(A) is a set of types and ⊨_A is a binary relation between tok(A) and typ(A). If a ⊨_A α then a is said to be of type α.

Infomorphism: f : A → B from classification A to classification B is a contravariant pair of functions f = ⟨f̂, f̌⟩, where f̂ : typ(A) → typ(B) and f̌ : tok(B) → tok(A), satisfying the following fundamental property: f̌(b) ⊨_A α iff b ⊨_B f̂(α), for each token b ∈ tok(B) and each type α ∈ typ(A).

Channel: consists of two infomorphisms C = {f_i : A_i → C}_{i ∈ {1,2}} with a common codomain C, called the core of C.
C tokens are called connections, and a connection c is said to connect the tokens f̌_1(c) and f̌_2(c).²

Sum: given classifications A and B, the sum of A and B, denoted by A + B, is the classification with tok(A + B) = tok(A) × tok(B) = {⟨a, b⟩ | a ∈ tok(A) and b ∈ tok(B)}, typ(A + B) = typ(A) ⊎ typ(B) = {⟨i, γ⟩ | i = 1 and γ ∈ typ(A), or i = 2 and γ ∈ typ(B)}, and the relation ⊨_{A+B} defined by:

⟨a, b⟩ ⊨_{A+B} ⟨1, α⟩ if a ⊨_A α
⟨a, b⟩ ⊨_{A+B} ⟨2, β⟩ if b ⊨_B β

Given infomorphisms f : A → C and g : B → C, the sum f + g : A + B → C is defined on types by (f̂ + g)(⟨1, α⟩) = f̂(α) and (f̂ + g)(⟨2, β⟩) = ĝ(β), and on tokens by (f̌ + g)(c) = ⟨f̌(c), ǧ(c)⟩.

Theory: given a set Σ, a sequent of Σ is a pair ⟨Γ, Δ⟩ of subsets of Σ. A binary relation ⊢ between subsets of Σ is called a consequence relation on Σ. A theory is a pair T = ⟨Σ, ⊢⟩ where ⊢ is a consequence relation on Σ. A sequent ⟨Γ, Δ⟩ of Σ for which Γ ⊢ Δ is called a constraint of the theory T.
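The definitions above (classification, infomorphism, sum) translate directly into code. The following is a small sketch of our own with toy data: it checks the fundamental property of an infomorphism and builds a sum classification.

```python
from itertools import product

# A classification is represented as a triple (tokens, types, sat), where sat
# is the relation |= given as a set of (token, type) pairs.

def is_infomorphism(A, B, f_up, f_down):
    """f = <f_up, f_down> : A -> B is an infomorphism iff
    f_down(b) |=_A alpha  <=>  b |=_B f_up(alpha), for all b in tok(B), alpha in typ(A)."""
    (tokA, typA, satA), (tokB, typB, satB) = A, B
    return all(((f_down[b], a) in satA) == ((b, f_up[a]) in satB)
               for b in tokB for a in typA)

def sum_classification(A, B):
    """A + B: tokens are the cartesian product, types the tagged disjoint union,
    and <a,b> |= <1,alpha> iff a |=_A alpha; <a,b> |= <2,beta> iff b |=_B beta."""
    (tokA, typA, satA), (tokB, typB, satB) = A, B
    tok = set(product(tokA, tokB))
    typ = {(1, t) for t in typA} | {(2, t) for t in typB}
    sat = {((a, b), (1, t)) for (a, b) in tok for t in typA if (a, t) in satA} \
        | {((a, b), (2, t)) for (a, b) in tok for t in typB if (b, t) in satB}
    return tok, typ, sat

A = ({"x1", "x2"}, {"t"}, {("x1", "t")})
B = ({"y1", "y2"}, {"u"}, {("y1", "u")})

# {t -> u} on types with {y1 -> x1, y2 -> x2} on tokens is an infomorphism
# for these toy classifications; sending y2 to x1 instead breaks the property.
assert is_infomorphism(A, B, {"t": "u"}, {"y1": "x1", "y2": "x2"})
assert not is_infomorphism(A, B, {"t": "u"}, {"y1": "x1", "y2": "x1"})

tok, typ, sat = sum_classification(A, B)
assert (("x1", "y1"), (1, "t")) in sat and (("x2", "y1"), (1, "t")) not in sat
```

Note how the sum keeps the two type sets apart by tagging them with 1 and 2, which is exactly why identical type names in the two ontologies cause no clash.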
T is regular if it satisfies, for all α ∈ Σ and all Γ, Γ′, Δ, Δ′, Σ′ ⊆ Σ:

1. Identity: α ⊢ α
2. Weakening: if Γ ⊢ Δ, then Γ, Γ′ ⊢ Δ, Δ′
3. Global Cut: if Γ, Σ0 ⊢ Δ, Σ1 for each partition ⟨Σ0, Σ1⟩ of Σ′ (i.e., Σ0 ∪ Σ1 = Σ′ and Σ0 ∩ Σ1 = ∅), then Γ ⊢ Δ

²In fact, this is the definition of a binary channel. A channel can be defined with an arbitrary index set.

Theory generated by a classification: let A be a classification. A token a ∈ tok(A) satisfies a sequent ⟨Γ, Δ⟩ of typ(A) provided that, if a is of every type in Γ, then it is of some type in Δ. The theory generated by A, denoted by Th(A), is the theory ⟨typ(A), ⊢_A⟩ where Γ ⊢_A Δ if every token in A satisfies ⟨Γ, Δ⟩.

Local logic: a tuple 𝔏 = ⟨tok(𝔏), typ(𝔏), ⊨_𝔏, ⊢_𝔏, N_𝔏⟩ where:

1. ⟨tok(𝔏), typ(𝔏), ⊨_𝔏⟩ is a classification, denoted by Cla(𝔏),
2. ⟨typ(𝔏), ⊢_𝔏⟩ is a regular theory, denoted by Th(𝔏),
3. N_𝔏 is a subset of tok(𝔏), called the normal tokens of 𝔏, which satisfy all the constraints of Th(𝔏).

A local logic 𝔏 is sound if every token in Cla(𝔏) is normal, that is, N_𝔏 = tok(𝔏). 𝔏 is complete if every sequent of typ(𝔏) satisfied by every normal token is a constraint of Th(𝔏).

Local logic generated by a classification: given a classification A, the local logic generated by A, written Log(A), is the local logic on A (i.e., Cla(Log(A)) = A), with Th(Log(A)) = Th(A) and such that all its tokens are normal, i.e., N_{Log(A)} = tok(A).

Inverse image: given an infomorphism f : A → B and a local logic 𝔏 on B, the inverse image of 𝔏 under f, denoted f⁻¹[𝔏], is the local logic on A such that Γ ⊢_{f⁻¹[𝔏]} Δ if f̂[Γ] ⊢_𝔏 f̂[Δ], and N_{f⁻¹[𝔏]} = f̌[N_𝔏] = {a ∈ tok(A) | a = f̌(b) for some b ∈ N_𝔏}.

Distributed logic: let C = {f_i : A_i → C}
_{i ∈ {1,2}} be a channel and 𝔏 a local logic on its core C; the distributed logic of C generated by 𝔏, written DLog_C(𝔏), is the inverse image of 𝔏 under the sum f1 + f2.

Refinement: let C = {f_i : A_i → C}_{i ∈ {1,2}} and C′ = {f′_i : A_i → C′}_{i ∈ {1,2}} be two channels with the same component classifications A1 and A2. A refinement infomorphism from C′ to C is an infomorphism r : C′ → C such that for each i ∈ {1, 2}, f_i = r ∘ f′_i (i.e., f̂_i = r̂ ∘ f̂′_i and f̌_i = f̌′_i ∘ ř). Channel C′ is a refinement of C if there exists a refinement infomorphism r from C′ to C.

B. CHANNEL THEORY THEOREMS","keyphrases":["semant align","ontolog","multi-agent system","feder databas","semant web","knowledg-base system","disjoint union","sum infomorph","constraint","inform-channel refin","distribut logic","channel refin"],"prmu":["P","P","M","U","M","M","U","U","U","U","M","M"]} {"id":"I-65","title":"Graphical Models for Online Solutions to Interactive POMDPs","abstract":"We develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation. These graphical models called interactive dynamic influence diagrams (I-DIDs) seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents.
Using several examples, we show how I-DIDs may be applied and demonstrate their usefulness.","lvl-1":"Graphical Models for Online Solutions to Interactive POMDPs Prashant Doshi Dept. of Computer Science University of Georgia Athens, GA 30602, USA pdoshi@cs.uga.edu Yifeng Zeng Dept. of Computer Science Aalborg University DK-9220 Aalborg, Denmark yfzeng@cs.aau.edu Qiongyu Chen Dept. of Computer Science National Univ. of Singapore 117543, Singapore chenqy@comp.nus.edu.sg ABSTRACT We develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation.\nThese graphical models called interactive dynamic influence diagrams (I-DIDs) seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables.\nI-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs.\nI-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents.\nUsing several examples, we show how I-DIDs may be applied and demonstrate their usefulness.\nCategories and Subject Descriptors I.2.11 [Distributed Artificial Intelligence]: Multiagent Systems General Terms Theory 1.\nINTRODUCTION Interactive partially observable Markov decision processes (IPOMDPs) [9] provide a framework for sequential decision-making in partially observable multiagent environments.\nThey generalize POMDPs [13] to multiagent settings by including the other agents'' computable models in the state space along with the states of the physical environment.\nThe models encompass all information influencing the agents'' behaviors, including their preferences, capabilities, and beliefs, and are thus 
analogous to types in Bayesian games [11]. I-POMDPs adopt a subjective approach to understanding strategic behavior, rooted in a decision-theoretic framework that takes a decision-maker's perspective in the interaction. In [15], Polich and Gmytrasiewicz introduced interactive dynamic influence diagrams (I-DIDs) as the computational representations of I-POMDPs. I-DIDs generalize DIDs [12], which may be viewed as computational counterparts of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs contribute to a growing line of work [19] that includes multi-agent influence diagrams (MAIDs) [14] and, more recently, networks of influence diagrams (NIDs) [8]. These formalisms seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. MAIDs provide an alternative to normal and extensive game forms, using a graphical formalism to represent games of imperfect information with a decision node for each agent's actions and chance nodes capturing the agent's private information. MAIDs analyze the game objectively, efficiently computing the Nash equilibrium profile by exploiting the independence structure. NIDs extend MAIDs to include agents' uncertainty over the game being played and over models of the other agents. Each model is a MAID, and the network of MAIDs is collapsed, bottom up, into a single MAID for computing the equilibrium of the game, keeping in mind the different models of each agent. Graphical formalisms such as MAIDs and NIDs open up a promising area of research that aims to represent multiagent interactions more transparently. However, MAIDs provide an analysis of the game from an external viewpoint, and the applicability of both is limited to static single-play games. Matters are more complex when we consider interactions that are extended over time, where predictions about others'
future actions must be made using models that change as the agents act and observe. I-DIDs address this gap by allowing the representation of other agents' models as the values of a special model node. Both the other agents' models and the original agent's beliefs over these models are updated over time using special-purpose implementations. In this paper, we improve on the previous preliminary representation of the I-DID shown in [15] by using the insight that the static I-ID is a type of NID. Thus, we may utilize NID-specific language constructs such as multiplexers to represent the model node, and subsequently the I-ID, more transparently. Furthermore, we clarify the semantics of the special-purpose policy link introduced in the representation of the I-DID by [15], and show that it could be replaced by traditional dependency links. In the previous representation of the I-DID, the update of the agent's belief over the models of others, as the agents act and receive observations, was denoted using a special link called the model update link that connected the model nodes over time. We explicate the semantics of this link by showing how it can be implemented using the traditional dependency links between the chance nodes that constitute the model nodes. The net result is a representation of the I-DID that is significantly more transparent, semantically clear, and capable of being implemented using the standard algorithms for solving DIDs. We show how I-DIDs may be used to model an agent's uncertainty over others' models, which may themselves be I-DIDs. The solution to the I-DID is a policy that prescribes what the agent should do over time, given its beliefs over the physical state and others' models. Analogous to DIDs, I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents.

2. BACKGROUND: FINITELY NESTED I-POMDPS

Interactive POMDPs generalize POMDPs to multiagent
settings by including other agents' models as part of the state space [9]. Since other agents may also reason about others, the interactive state space is strategically nested; it contains beliefs about other agents' models and their beliefs about others. For simplicity of presentation we consider an agent, i, that is interacting with one other agent, j. A finitely nested I-POMDP of agent i with a strategy level l is defined as the tuple:

I-POMDP_{i,l} = ⟨IS_{i,l}, A, T_i, Ω_i, O_i, R_i⟩

where:
• IS_{i,l} denotes a set of interactive states defined as IS_{i,l} = S × M_{j,l−1}, where M_{j,l−1} = {Θ_{j,l−1} ∪ SM_j} for l ≥ 1, and IS_{i,0} = S, where S is the set of states of the physical environment. Θ_{j,l−1} is the set of computable intentional models of agent j: θ_{j,l−1} = ⟨b_{j,l−1}, θ̂_j⟩, where the frame θ̂_j = ⟨A, Ω_j, T_j, O_j, R_j, OC_j⟩. Here, j is Bayes rational and OC_j is j's optimality criterion. SM_j is the set of subintentional models of j.
Simple examples of subintentional models include a no-information model [10] and a fictitious play model [6], both of which are history independent. We give a recursive bottom-up construction of the interactive state space below:

IS_{i,0} = S,                       Θ_{j,0} = {⟨b_{j,0}, θ̂_j⟩ : b_{j,0} ∈ Δ(IS_{j,0})}
IS_{i,1} = S × {Θ_{j,0} ∪ SM_j},    Θ_{j,1} = {⟨b_{j,1}, θ̂_j⟩ : b_{j,1} ∈ Δ(IS_{j,1})}
⋮
IS_{i,l} = S × {Θ_{j,l−1} ∪ SM_j},  Θ_{j,l} = {⟨b_{j,l}, θ̂_j⟩ : b_{j,l} ∈ Δ(IS_{j,l})}

Similar formulations of nested spaces have appeared in [1, 3].
• A = A_i × A_j is the set of joint actions of all agents in the environment;
• T_i : S × A × S → [0, 1] describes the effect of the joint actions on the physical states of the environment;
• Ω_i is the set of observations of agent i;
• O_i : S × A × Ω_i → [0, 1] gives the likelihood of the observations given the physical state and joint action;
• R_i : IS_i × A → ℝ describes agent i's preferences over its interactive states. Usually only the physical states will matter.

Agent i's policy is the mapping Ω*_i → Δ(A_i), where Ω*_i is the set of all observation histories of agent i. Since belief over the interactive states forms a sufficient statistic [9], the policy can also be represented as a mapping from the set of all beliefs of agent i to a distribution over its actions, Δ(IS_i) → Δ(A_i).

2.1 Belief Update

Analogous to POMDPs, an agent within the I-POMDP framework updates its belief as it acts and observes. However, there are two differences that complicate the belief update in multiagent settings when compared to single-agent ones. First, since the state of the physical environment depends on the actions of both agents, i's prediction of how the physical state changes has to be made based on its prediction of j's actions. Second, changes in j's
models have to be included in i's belief update. Specifically, if j is intentional, then an update of j's beliefs due to its action and observation has to be included. In other words, i has to update its belief based on its prediction of what j would observe and how j would update its belief. If j's model is subintentional, then j's probable observations are appended to the observation history contained in the model. Formally, we have:

Pr(is^t | a^{t−1}_i, b^{t−1}_{i,l}) = β Σ_{is^{t−1} : m^{t−1}_j = θ̂^t_j} b^{t−1}_{i,l}(is^{t−1}) × Σ_{a^{t−1}_j} Pr(a^{t−1}_j | θ^{t−1}_{j,l−1}) O_i(s^t, a^{t−1}_i, a^{t−1}_j, o^t_i) × T_i(s^{t−1}, a^{t−1}_i, a^{t−1}_j, s^t) × Σ_{o^t_j} O_j(s^t, a^{t−1}_i, a^{t−1}_j, o^t_j) × τ(SE_{θ̂^t_j}(b^{t−1}_{j,l−1}, a^{t−1}_j, o^t_j) − b^t_{j,l−1})   (1)

where β is the normalizing constant, τ(·) is 1 if its argument is 0 and otherwise it is 0, Pr(a^{t−1}_j | θ^{t−1}_{j,l−1}) is the probability that a^{t−1}_j is Bayes rational for the agent described by model θ^{t−1}_{j,l−1}, and SE(·) is an abbreviation for the belief update. For a version of the belief update when j's model is subintentional, see [9]. If agent j is also modeled as an I-POMDP, then i's belief update invokes j's belief update (via the term SE_{θ̂^t_j}(b^{t−1}_{j,l−1}, a^{t−1}_j, o^t_j)), which in turn could invoke i's belief update, and so on. This recursion in belief nesting bottoms out at the 0th level. At this level, the belief update of the agent reduces to a POMDP belief update.¹ For illustrations of the belief update, additional details on I-POMDPs, and how they compare with other multiagent frameworks, see [9].

2.2 Value Iteration

Each belief state in a finitely nested I-POMDP has an associated value reflecting the maximum payoff the agent can expect in this belief state:

U^n(⟨b_{i,l}, θ̂_i⟩) = max_{a_i ∈ A_i} { Σ_{is ∈ IS_{i,l}} ER_i(is, a_i) b_{i,l}(is) + γ Σ_{o_i ∈ Ω_i} Pr(o_i | a_i, b_{i,l}) U^{n−1}(⟨SE_{θ̂_i}(b_{i,l}, a_i, o_i), θ̂_i⟩) }   (2)

where ER_i(is, a_i) = Σ_{a_j} R_i(is, a_i, a_j) Pr(a_j | m_{j,l−1}) (since is = (s, m_{j,l−1})). Eq. 2 is a basis for value iteration in I-POMDPs. Agent i's optimal action, a*_i, for the case of finite horizon with discounting, is an element of the set of optimal actions for the belief state, OPT(θ_i), defined as:

OPT(⟨b_{i,l}, θ̂_i⟩) = argmax_{a_i ∈ A_i} { Σ_{is ∈ IS_{i,l}} ER_i(is, a_i) b_{i,l}(is) + γ Σ_{o_i ∈ Ω_i} Pr(o_i | a_i, b_{i,l}) U^n(⟨SE_{θ̂_i}(b_{i,l}, a_i, o_i), θ̂_i⟩) }   (3)

3. INTERACTIVE INFLUENCE DIAGRAMS

A naive extension of influence diagrams (IDs) to settings populated by multiple agents is possible by treating other agents as automatons, represented using chance nodes. However, this approach assumes that the agents' actions are controlled using a probability distribution that does not change over time. Interactive influence diagrams (I-IDs) adopt a more sophisticated approach by generalizing IDs to make them applicable to settings shared with other agents who may act and observe, and update their beliefs.

3.1 Syntax

In addition to the usual chance, decision, and utility nodes, I-IDs include a new type of node called the model node. We show a general level l I-ID in Fig.
1(a), where the model node (M_{j,l−1}) is denoted using a hexagon. We note that the probability distribution over the chance node, S, and the model node together represent agent i's belief over its interactive states.¹

¹ The 0th level model is a POMDP: the other agent's actions are treated as exogenous events and folded into the T, O, and R functions.

The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

Figure 1: (a) A generic level l I-ID for agent i situated with one other agent j. The hexagon is the model node (M_{j,l−1}) whose structure we show in (b). Members of the model node are I-IDs themselves (m¹_{j,l−1}, m²_{j,l−1}; diagrams not shown here for simplicity) whose decision nodes are mapped to the corresponding chance nodes (A¹_j, A²_j). Depending on the value of the node Mod[M_j], the distribution of each of the chance nodes is assigned to the node A_j. (c) The transformed I-ID with the model node replaced by the chance nodes and the relationships between them.

In addition to the model node, I-IDs differ from IDs by having a dashed link (called the policy link in [15]) between the model node and a chance node, A_j, that represents the distribution over the other agent's actions given its model. In the absence of other agents, the model node and the chance node, A_j, vanish and I-IDs collapse into traditional IDs. The model node contains the alternative computational models ascribed by i to the other agent from the set Θ_{j,l−1} ∪ SM_j, where Θ_{j,l−1} and SM_j were defined previously in Section 2. Thus, a model in the model node may itself be an I-ID or ID, and the recursion terminates when a model is an ID or subintentional. Because the model node contains the alternative models of the other agent as its values, its representation is not trivial. In particular, some of the models within the node are I-IDs that when solved generate the agent's optimal policy in their decision
nodes. Each decision node is mapped to the corresponding chance node, say A¹_j, in the following way: if OPT is the set of optimal actions obtained by solving the I-ID (or ID), then

Pr(a_j ∈ A¹_j) = 1/|OPT| if a_j ∈ OPT, and 0 otherwise.

Borrowing insights from previous work [8], we observe that the model node and the dashed policy link that connects it to the chance node, A_j, could be represented as shown in Fig. 1(b). The decision node of each level l − 1 I-ID is transformed into a chance node, as we mentioned previously, so that the actions with the largest value in the decision node are assigned uniform probabilities in the chance node while the rest are assigned zero probability. The different chance nodes (A¹_j, A²_j), one for each model, and additionally the chance node labeled Mod[M_j], form the parents of the chance node, A_j. Thus, there are as many action nodes (A¹_j, A²_j) in M_{j,l−1} as the number of models in the support of agent i's beliefs. The conditional probability table of the chance node, A_j, is a multiplexer that assumes the distribution of each of the action nodes (A¹_j, A²_j) depending on the value of Mod[M_j]. The values of Mod[M_j] denote the different models of j. In other words, when Mod[M_j] has the value m¹_{j,l−1}, the chance node A_j assumes the distribution of the node A¹_j, and A_j assumes the distribution of A²_j when Mod[M_j] has the value m²_{j,l−1}. The distribution over the node Mod[M_j] is agent i's belief over the models of j given a physical state. For more agents, we will have as many model nodes as there are agents. Notice that Fig. 1(b) clarifies the semantics of the policy link, and shows how it can be represented using the traditional dependency links. In Fig.
1(c), we show the transformed I-ID when the model node is replaced by the chance nodes and relationships between them. In contrast to the representation in [15], there are no special-purpose policy links; rather, the I-ID is composed of only those types of nodes that are found in traditional IDs and dependency relationships between the nodes. This allows I-IDs to be represented and implemented using conventional application tools that target IDs. Note that we may view the level l I-ID as a NID. Specifically, each of the level l − 1 models within the model node is a block in the NID (see Fig. 2). If the level l = 1, each block is a traditional ID; otherwise, if l > 1, each block within the NID may itself be a NID. Note that within the I-IDs (or IDs) at each level, there is only a single decision node. Thus, our NID does not contain any MAIDs.

Figure 2: A level l I-ID represented as a NID. The probabilities assigned to the blocks of the NID are i's beliefs over j's models conditioned on a physical state.

3.2 Solution

The solution of an I-ID proceeds in a bottom-up manner, and is implemented recursively. We start by solving the level 0 models, which, if intentional, are traditional IDs. Their solutions provide probability distributions over the other agents' actions, which are entered in the corresponding chance nodes found in the model node of the level 1 I-ID. The mapping from the level 0 models' decision nodes to the chance nodes is carried out so that actions with the largest value in the decision node are assigned uniform probabilities in the chance node while the rest are assigned zero probability. Given the distributions over the actions within the different chance nodes (one for each model of the other agent), the level 1 I-ID is transformed as shown in Fig.
1(c). During the transformation, the conditional probability table (CPT) of the node, A_j, is populated such that the node assumes the distribution of each of the chance nodes depending on the value of the node Mod[M_j]. As we mentioned previously, the values of the node Mod[M_j] denote the different models of the other agent, and its distribution is agent i's belief over the models of j conditioned on the physical state. The transformed level 1 I-ID is a traditional ID that may be solved using the standard expected utility maximization method [18]. This procedure is carried out up to the level l I-ID whose solution gives the non-empty set of optimal actions that the agent should perform given its belief. Notice that, analogous to IDs, I-IDs are suitable for online decision-making when the agent's current belief is known.

Figure 3: (a) A generic two time-slice level l I-DID for agent i in a setting with one other agent j. Notice the dotted model update link that denotes the update of the models of j and the distribution over the models over time. (b) The semantics of the model update link.

4. INTERACTIVE DYNAMIC INFLUENCE DIAGRAMS

Interactive dynamic influence diagrams (I-DIDs) extend I-IDs (and NIDs) to allow sequential decision-making over several time steps. Just as DIDs are structured graphical representations of POMDPs, I-DIDs are the graphical online analogs for finitely nested I-POMDPs. I-DIDs may be used to optimize over a finite look-ahead given initial beliefs while interacting with other, possibly similar, agents.

4.1 Syntax

We depict a general two time-slice I-DID in Fig. 3(a). In addition to the model nodes and the dashed policy link, what differentiates an I-DID from a DID is the model update link shown as a dotted arrow in Fig.
3(a). We explained the semantics of the model node and the policy link in the previous section; we describe the model updates next. The update of the model node over time involves two steps. First, given the models at time t, we identify the updated set of models that reside in the model node at time t + 1. Recall from Section 2 that an agent's intentional model includes its belief. Because the agents act and receive observations, their models are updated to reflect their changed beliefs. Since the set of optimal actions for a model could include all the actions, and the agent may receive any one of |Ω_j| possible observations, the updated set at time step t + 1 will have at most |M^t_{j,l−1}||A_j||Ω_j| models. Here, |M^t_{j,l−1}| is the number of models at time step t, and |A_j| and |Ω_j| are the largest spaces of actions and observations, respectively, among all the models. Second, we compute the new distribution over the updated models given the original distribution and the probability of the agent performing the action and receiving the observation that led to the updated model. These steps are a part of agent i's belief update formalized using Eq. 1. In Fig.
3(b), we show how the dotted model update link is implemented in the I-DID. If each of the two level l − 1 models ascribed to j at time step t results in one action, and j could make one of two possible observations, then the model node at time step t + 1 contains four updated models (m^{t+1,1}_{j,l−1}, m^{t+1,2}_{j,l−1}, m^{t+1,3}_{j,l−1}, and m^{t+1,4}_{j,l−1}). These models differ in their initial beliefs, each of which is the result of j updating its beliefs due to its action and a possible observation. The decision nodes in each of the I-DIDs or DIDs that represent the lower level models are mapped to the corresponding chance nodes, as mentioned previously.

Figure 4: Transformed I-DID with the model nodes and model update link replaced with the chance nodes and the relationships (in bold).

Next, we describe how the distribution over the updated set of models (the distribution over the chance node Mod[M^{t+1}_j] in M^{t+1}_{j,l−1}) is computed. The probability that j's updated model is, say, m^{t+1,1}_{j,l−1} depends on the probability of j performing the action and receiving the observation that led to this model, and the prior distribution over the models at time step t. Because the chance node A^t_j assumes the distribution of each of the action nodes based on the value of Mod[M^t_j], the probability of the action is given by this chance node. In order to obtain the probability of j's possible observation, we introduce the chance node O_j, which, depending on the value of Mod[M^t_j], assumes the distribution of the observation node in the lower level model denoted by Mod[M^t_j]. Because the probability of j's observations depends on the physical state and the joint actions of both agents, the node O_j is linked with S^{t+1}, A^t_j, and A^t_i.
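The mapping from a solved model's optimal actions to a chance-node distribution, and the multiplexer behavior of the chance nodes A_j and O_j, can be sketched as follows. This is a minimal illustration; the helper names `uniform_over_opt` and `multiplexer` are ours, not part of any I-DID tool.

```python
def uniform_over_opt(actions, opt):
    """Map a solved model's decision node to a chance node: actions in the
    optimal set OPT get uniform probability 1/|OPT|, the rest get zero."""
    p = 1.0 / len(opt)
    return {a: (p if a in opt else 0.0) for a in actions}

def multiplexer(mod_value, dists):
    """CPT of A_j (or O_j): assume the distribution of the action
    (observation) node selected by the value of Mod[M_j]."""
    return dists[mod_value]

# Two candidate models of j with different optimal action sets
actions = ["OL", "OR", "L"]
A1_j = uniform_over_opt(actions, opt={"L"})         # model m1 prescribes listening
A2_j = uniform_over_opt(actions, opt={"OL", "OR"})  # model m2 prescribes opening a door

# A_j takes on A1_j's distribution when Mod[M_j] has the value m1
dist = multiplexer("m1", {"m1": A1_j, "m2": A2_j})
```

When Mod[M_j] switches to m2, the same multiplexer returns A2_j's distribution, which assigns 0.5 to each optimal door-opening action.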
Analogous to A^t_j, the conditional probability table of O_j is also a multiplexer modulated by Mod[M^t_j]. (Note that O_j represents j's observation at time t + 1.) Finally, the distribution over the prior models at time t is obtained from the chance node Mod[M^t_j] in M^t_{j,l−1}. Consequently, the chance nodes Mod[M^t_j], A^t_j, and O_j form the parents of Mod[M^{t+1}_j] in M^{t+1}_{j,l−1}. Notice that the model update link may be replaced by the dependency links between the chance nodes that constitute the model nodes in the two time slices. In Fig. 4 we show the two time-slice I-DID with the model nodes replaced by the chance nodes and the relationships between them. Chance nodes and dependency links that are not in bold are standard, usually found in DIDs. Expansion of the I-DID over more time steps requires the repetition of the two steps of updating the set of models that form the values of the model node and adding the relationships between the chance nodes, as many times as there are model update links. We note that the possible set of models of the other agent j grows exponentially with the number of time steps. For example, after T steps, there may be at most |M^{t=1}_{j,l−1}|(|A_j||Ω_j|)^{T−1} candidate models residing in the model node.

4.2 Solution

Analogous to I-IDs, the solution to a level l I-DID for agent i expanded over T time steps may be carried out recursively. For the purpose of illustration, let l = 1 and T = 2. The solution method uses the standard look-ahead technique, projecting the agent's action and observation sequences forward from the current belief state [17], and finding the possible beliefs that i could have in the next time step. Because agent i has a belief over j's models as well, the look-ahead includes finding out the possible models that j could have in the future. Consequently, each of j's subintentional or level 0
models (represented using a standard DID) in the first time step must be solved to obtain its optimal set of actions. These actions are combined with the set of possible observations that j could make in that model, resulting in an updated set of candidate models (that include the updated beliefs) that could describe the behavior of j. Beliefs over this updated set of candidate models are calculated using the standard inference methods, using the dependency relationships between the model nodes as shown in Fig. 3(b). We note the recursive nature of this solution: in solving agent i's level 1 I-DID, j's level 0 DIDs must be solved. If the nesting of models is deeper, all models at all levels starting from 0 are solved in a bottom-up manner. We briefly outline the recursive algorithm for solving agent i's level l I-DID expanded over T time steps with one other agent j in Fig. 5.

Figure 5: Algorithm for solving a level l ≥ 0 I-DID.

  Input: level l ≥ 1 I-ID or level 0 ID, T

  Expansion Phase
  1.  For t from 1 to T − 1 do
  2.    If l ≥ 1 then Populate M^{t+1}_{j,l−1}
  3.      For each m^t_j in Range(M^t_{j,l−1}) do
  4.        Recursively call algorithm with the l − 1 I-ID (or ID) that represents m^t_j and the horizon, T − t + 1
  5.        Map the decision node of the solved I-ID (or ID), OPT(m^t_j), to a chance node A_j
  6.        For each a_j in OPT(m^t_j) do
  7.          For each o_j in O_j (part of m^t_j) do
  8.            Update j's belief, b^{t+1}_j ← SE(b^t_j, a_j, o_j)
  9.            m^{t+1}_j ← New I-ID (or ID) with b^{t+1}_j as the initial belief
  10.           Range(M^{t+1}_{j,l−1}) ∪← {m^{t+1}_j}
  11.   Add the model node, M^{t+1}_{j,l−1}, and the dependency links between M^t_{j,l−1} and M^{t+1}_{j,l−1} (shown in Fig. 3(b))
  12.   Add the chance, decision, and utility nodes for the t + 1 time slice and the dependency links between them
  13.   Establish the CPTs for each chance node and utility node

  Look-Ahead Phase
  14.  Apply the standard look-ahead and backup method to solve the expanded I-DID

We adopt a two-phase approach: given an I-ID of level l (described previously in Section 3) with all lower level models also represented as I-IDs or IDs (if level 0), the first step is to expand the level l I-ID over T time steps, adding the dependency links and the conditional probability tables for each node. We particularly focus on establishing and populating the model nodes (lines 3-11). Note that Range(·) returns the values (lower level models) of the random variable given as input (model node). In the second phase, we use a standard look-ahead technique, projecting the action and observation sequences over T time steps in the future, and backing up the utility values of the reachable beliefs. Similar to I-IDs, I-DIDs reduce to DIDs in the absence of other agents. As we mentioned previously, the 0th level models are the traditional DIDs. Their solutions provide probability distributions over actions of the agent modeled at that level to I-DIDs at level 1. Given probability distributions over the other agent's actions, the level 1 I-DIDs can themselves be solved as DIDs, and provide probability distributions to yet higher level models. Assume that the number of models considered at each level is bound by a number, M.
Solving an I-DID of level l is then equivalent to solving O(M^l) DIDs.

5. EXAMPLE APPLICATIONS

To illustrate the usefulness of I-DIDs, we apply them to three problem domains. We describe, in particular, the formulation of the I-DID and the optimal prescriptions obtained on solving it.

5.1 Followership-Leadership in the Multiagent Tiger Problem

We begin our illustrations of using I-IDs and I-DIDs with a slightly modified version of the multiagent tiger problem discussed in [9]. The problem has two agents, each of which can open the right door (OR), open the left door (OL), or listen (L). In addition to hearing growls (from the left (GL) or from the right (GR)) when they listen, the agents also hear creaks (from the left (CL), from the right (CR), or no creaks (S)), which noisily indicate the other agent's opening one of the doors. When any door is opened, the tiger persists in its original location with a probability of 95%. Agent i hears growls with a reliability of 65% and creaks with a reliability of 95%. Agent j, on the other hand, hears growls with a reliability of 95%. Thus, the setting is such that agent i hears agent j opening doors more reliably than the tiger's growls. This suggests that i could use j's actions as an indication of the location of the tiger, as we discuss below. Each agent's preferences are as in the single-agent game discussed in [13]. The transition, observation, and reward functions are shown in [16]. A good indicator of the usefulness of normative methods for decision-making like I-DIDs is the emergence of realistic social behaviors in their prescriptions. In settings of the persistent multiagent tiger problem that reflect real-world situations, we demonstrate followership between the agents and, as shown in [15], deception among agents who believe that they are in a follower-leader type of relationship. In particular, we analyze the situational and epistemological conditions sufficient for their emergence. The
followership behavior, for example, results from the agent knowing its own weaknesses, assessing the strengths, preferences, and possible behaviors of the other, and realizing that it is best for it to follow the other's actions in order to maximize its payoffs. Let us consider a particular setting of the tiger problem in which agent i believes that j's preferences are aligned with its own (both of them just want to get the gold) and that j's hearing is more reliable than its own. As an example, suppose that j, on listening, can discern the tiger's location 95% of the time, compared to i's 65% accuracy. Additionally, agent i does not have any initial information about the tiger's location. In other words, i's single-level nested belief, b_{i,1}, assigns 0.5 to each of the two locations of the tiger. In addition, i considers two models of j, which differ in j's flat level 0 initial beliefs. This is represented in the level 1 I-ID shown in Fig. 6(a).

Figure 6: (a) Level 1 I-ID of agent i, (b) two level 0 IDs of agent j whose decision nodes are mapped to the chance nodes, A¹_j, A²_j, in (a).

According to one model, j assigns a probability of 0.9 that the tiger is behind the left door, while the other model assigns 0.1 to that location (see Fig. 6(b)). Agent i is undecided on these two models of j. If we vary i's hearing ability, and solve the corresponding level 1 I-ID expanded over three time steps, we obtain the normative behavioral policies shown in Fig. 7 that exhibit followership behavior. If i's probability of correctly hearing the growls is 0.65, then as shown in the policy in Fig.
7(a), i begins to conditionally follow j's actions: i opens the same door that j opened previously iff i's own assessment of the tiger's location confirms j's pick. If i completely loses the ability to correctly interpret the growls, it blindly follows j and opens the same door that j opened previously (Fig. 7(b)).

Figure 7: Emergence of (a) conditional followership, and (b) blind followership in the tiger problem. Behaviors of interest are in bold. * is a wildcard, and denotes any one of the observations.

We observed that a single level of belief nesting (beliefs about the other's models) was sufficient for followership to emerge in the tiger problem. However, the epistemological requirements for the emergence of leadership are more complex. For an agent, say j, to emerge as a leader, followership must first emerge in the other agent i. As we mentioned previously, if i is certain that its preferences are identical to those of j, and believes that j has a better sense of hearing, i will follow j's actions over time. Agent j emerges as a leader if it believes that i will follow it, which implies that j's belief must be nested two levels deep to enable it to recognize its leadership role. Realizing that i will follow presents j with an opportunity to influence i's actions for the benefit of the collective good or its self-interest alone. For example, in the tiger problem, let us consider a setting in which if both i and j open the correct door, then each gets a payoff of 20, which is double the original. If j alone selects the correct door, it gets the payoff of 10. On the other hand, if both agents pick the wrong door, their penalties are cut in half. In this setting, it is in both j's best interest as well as the collective betterment for j to use its expertise in selecting the correct door, and thus be a good leader. However, consider a slightly different problem in which j gains from i's loss and is penalized if i gains. Specifically, let
i's payoff be subtracted from j's, indicating that j is antagonistic toward i: if j picks the correct door and i the wrong one, then i's loss of 100 becomes j's gain. Agent j believes that i incorrectly thinks that j's preferences are those that promote the collective good and that it starts off by believing with 99% confidence where the tiger is. Because i believes that its preferences are similar to those of j, and that j starts by believing almost surely that one of the two is the correct location (two level 0 models of j), i will start by following j's actions. We show i's normative policy on solving its singly-nested I-DID over three time steps in Fig. 8(a). The policy demonstrates that i will blindly follow j's actions. Since the tiger persists in its original location with a probability of 0.95, i will select the same door again. If j begins the game with a 99% probability that the tiger is on the right, solving j's I-DID nested two levels deep results in the policy shown in Fig. 8(b). Even though j is almost certain that OL is the correct action, it will start by selecting OR, followed by OL. Agent j's intention is to deceive i who, it believes, will follow j's actions, so as to gain $110 in the second time step, which is more than what j would gain if it were to be honest.

Figure 8: Emergence of deception between agents in the tiger problem. Behaviors of interest are in bold. * denotes as before. (a) Agent i's policy demonstrating that it will blindly follow j's actions. (b) Even though j is almost certain that the tiger is on the right, it will start by selecting OR, followed by OL, in order to deceive i.
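The $110 figure can be reproduced with a quick check. We assume here, for illustration, the standard single-agent tiger payoffs of +10 for opening the correct door and −100 for the wrong one; the antagonistic payoff is j's own payoff minus i's, as defined above.

```python
# Standard single-agent tiger payoffs (assumed for illustration):
CORRECT, WRONG = 10, -100

def antagonistic_payoff(own, other):
    """j gains from i's loss: j's payoff is its own minus i's."""
    return own - other

# j deceives i into opening the wrong door while j opens the correct one:
gain = antagonistic_payoff(CORRECT, WRONG)
print(gain)  # 110, matching the $110 j gains in the second time step
```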
5.2 Altruism and Reciprocity in the Public Good Problem

The public good (PG) problem [7] consists of a group of M agents, each of whom must either contribute some resource to a public pot or keep it for themselves. Since resources contributed to the public pot are shared among all the agents, they are less valuable to the agent when in the public pot. However, if all agents choose to contribute their resources, then the payoff to each agent is more than if no one contributes. Since an agent gets its share of the public pot irrespective of whether it has contributed or not, the dominating action is for each agent to not contribute, and instead free ride on others' contributions. However, behaviors of human players in empirical simulations of the PG problem differ from the normative predictions. The experiments reveal that many players initially contribute a large amount to the public pot, and continue to contribute when the PG problem is played repeatedly, though in decreasing amounts [4]. Many of these experiments [5] report that a small core group of players persistently contributes to the public pot even when all others are defecting. These experiments also reveal that players who persistently contribute have altruistic or reciprocal preferences matching expected cooperation of others. For simplicity, we assume that the game is played between M = 2 agents, i and j.
Let each agent be initially endowed with X_T amount of resources. While the classical PG game formulation permits each agent to contribute any quantity of resources (≤ X_T) to the public pot, we simplify the action space by allowing two possible actions. Each agent may choose to either contribute (C) a fixed amount of the resources, or not contribute. The latter action is denoted as defect (D). We assume that the actions are not observable to others. The value of resources in the public pot is discounted by c_i for each agent i, where c_i is the marginal private return. We assume that c_i < 1 so that the agent does not benefit enough that it contributes to the public pot for private gain. Simultaneously, c_i·M > 1, making collective contribution Pareto optimal. In order to encourage contributions, the contributing agents punish free riders, but incur a small cost for administering the punishment. Let P be the punishment meted out to the defecting agent and c_p the non-zero cost of punishing for the contributing agent. For simplicity, we assume that the cost of punishing is the same for both agents. The one-shot PG game with punishment is shown in Table 1.

Table 1: The one-shot PG game with punishment.

  i\j        C                                      D
  C     2c_iX_T, 2c_jX_T                c_iX_T − c_p, X_T + c_jX_T − P
  D     X_T + c_iX_T − P, c_jX_T − c_p  X_T, X_T

Let c_i = c_j and c_p > 0. If P > X_T − c_iX_T, then defection is no longer a dominating action. If P < X_T − c_iX_T, then defection is the dominating action for both. If P = X_T − c_iX_T, then the game is not dominance-solvable.

Figure 9: (a) Level 1 I-ID of agent i, (b) level 0 IDs of agent j with decision nodes mapped to the chance nodes, A¹_j and A²_j, in (a).

We formulate a sequential version of the PG problem with punishment from the perspective of agent i.
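A short script can reproduce the payoffs of Table 1 and check the dominance thresholds above; the parameter values chosen below are illustrative only.

```python
def pg_payoffs(XT, ci, cj, P, cp):
    """One-shot PG game with punishment (Table 1); entries are (i, j) payoffs."""
    return {
        ("C", "C"): (2 * ci * XT, 2 * cj * XT),
        ("C", "D"): (ci * XT - cp, XT + cj * XT - P),
        ("D", "C"): (XT + ci * XT - P, cj * XT - cp),
        ("D", "D"): (XT, XT),
    }

def defection_dominates(g):
    """True if D yields i a strictly better payoff against both of j's actions."""
    return (g[("D", "C")][0] > g[("C", "C")][0]) and \
           (g[("D", "D")][0] > g[("C", "D")][0])

# Illustrative values with ci = cj < 1 and ci * M > 1 (M = 2 agents):
XT, ci, cj, cp = 10.0, 0.75, 0.75, 0.5

# Threshold XT - ci*XT = 2.5. With P = 3 > 2.5, defection no longer dominates:
assert not defection_dominates(pg_payoffs(XT, ci, cj, P=3.0, cp=cp))

# With P = 2 < 2.5, defection is the dominating action:
assert defection_dominates(pg_payoffs(XT, ci, cj, P=2.0, cp=cp))
```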
Though in the repeated PG game the quantity in the public pot is revealed to all the agents after each round of actions, we assume in our formulation that it is hidden from the agents. Each agent may contribute a fixed amount, xc, or defect. An agent, on performing an action, receives an observation of plenty (PY) or meager (MR) symbolizing the state of the public pot. Notice that the observations are also indirectly indicative of agent j's actions, because the state of the public pot is influenced by them. The amount of resources in agent i's private pot is perfectly observable to i. The payoffs are analogous to Table 1.

Borrowing from the empirical investigations of the PG problem [5], we construct level 0 IDs for j that model altruistic and non-altruistic types (Fig. 9(b)). Specifically, our altruistic agent has a high marginal private return (cj is close to 1) and does not punish others who defect. Let xc = 1 and let the level 0 agent be punished half the times it defects. With one action remaining, both types of agents choose to contribute to avoid being punished. With two actions to go, the altruistic type chooses to contribute, while the other defects. This is because cj for the altruistic type is close to 1; thus the expected punishment, 0.5P > (1 − cj), which the altruistic type avoids. Because cj for the non-altruistic type is smaller, it prefers not to contribute. With three steps to go, the altruistic agent contributes to avoid punishment (0.5P > 2(1 − cj)), and the non-altruistic type defects. For more than three steps, the altruistic agent continues to contribute to the public pot depending on how close its marginal private return is to 1, while the non-altruistic type prescribes defection.

We analyzed the decisions of an altruistic agent i modeled using a level 1 I-DID expanded over 3 time steps. i ascribes the two level 0 models, mentioned previously, to j (see Fig. 9). If i believes with a probability 1 that j is altruistic, i chooses to contribute for each of the three steps. This behavior persists when i is unaware of whether j is altruistic (Fig. 10(a)), and when i assigns a high probability to j being the non-altruistic type. However, when i believes with a probability 1 that j is non-altruistic and will thus surely defect, i chooses to defect to avoid being punished and because its marginal private return is less than 1. These results demonstrate that the behavior of our altruistic type resembles that found experimentally. The non-altruistic level 1 agent chooses to defect regardless of how likely it believes the other agent to be altruistic.

We also analyzed the behavior of a reciprocal agent type that matches expected cooperation or defection. The reciprocal type's marginal private return is similar to that of the non-altruistic type; however, it obtains a greater payoff when its action matches that of the other agent. We consider the case when the reciprocal agent i is unsure of whether j is altruistic and believes that the public pot is likely to be half full. For this prior belief, i chooses to defect. On receiving an observation of plenty, i decides to contribute, while an observation of meager makes it defect (Fig.
10(b)). This is because an observation of plenty signals that the pot is likely to be more than half full, which results from j's action to contribute. Thus, among the two models ascribed to j, its type is likely to be altruistic, making it likely that j will contribute again in the next time step. Agent i therefore chooses to contribute to reciprocate j's action. An analogous reasoning leads i to defect when it observes a meager pot. With one action to go, i, believing that j contributes, will choose to contribute too to avoid punishment, regardless of its observations.

Figure 10: (a) An altruistic level 1 agent always contributes. (b) A reciprocal agent i starts off by defecting, followed by choosing to contribute or defect based on its observation of plenty (indicating that j is likely altruistic) or meager (j is non-altruistic).

5.3 Strategies in Two-Player Poker

Poker is a popular zero-sum card game that has received much attention in the AI research community as a testbed [2]. Poker is played among M ≥ 2 players, each of whom receives a hand of cards from a deck. While several flavors of Poker with varying complexity exist, we consider a simple version in which each player has three plys, during each of which the player may either exchange a card (E), keep the existing hand (K), fold (F) and withdraw from the game, or call (C), requiring all players to show their hands. To keep matters simple, let M = 2, and let each player receive a hand consisting of a single card drawn from the same suit. Thus, during a showdown, the player who has the numerically larger card (2 is the lowest, ace is the highest) wins the pot. During an exchange of cards, the discarded card is placed either in the L pile, indicating to the other agent that it was a low numbered card less than 8, or in the H pile, indicating that the card had a rank greater than or equal to 8. Notice
that, for example, if a low numbered card is discarded, the probability of receiving a low card in exchange is now reduced. We show the level 1 I-ID for the simplified two-player Poker in Fig. 11. We considered two models (personality types) of agent j. The conservative type believes that it is likely that its opponent has a high numbered card in its hand. On the other hand, the aggressive agent j believes with a high probability that its opponent has a lower numbered card. Thus, the two types differ in their beliefs over their opponent's hand. In both of these level 0 models, the opponent is assumed to perform its actions following a fixed, uniform distribution.

With three actions to go, regardless of its hand (unless it is an ace), the aggressive agent chooses to exchange its card, with the intent of improving on its current hand. This is because it believes the other to have a low card, which improves its chances of getting a high card during the exchange. The conservative agent chooses to keep its card no matter its hand, because its chances of getting a high card are slim, as it believes that its opponent has one.

Figure 11: (a) Level 1 I-ID of agent i. The observation reveals information about j's hand of the previous time step; (b) level 0 IDs of agent j whose decision nodes are mapped to the chance nodes, A1j, A2j, in (a).

The policy of a level 1 agent i who believes that each card except its own has an equal likelihood of being in j's hand (neutral personality type), and that j could be either an aggressive or conservative type, is shown in Fig.
12. i's own hand contains the card numbered 8. The agent starts by keeping its card. On seeing that j did not exchange a card (N), i believes with probability 1 that j is conservative and hence will keep its cards. i responds by either keeping its card or exchanging it, because j is equally likely to have a lower or higher card. If i observes that j discarded its card into the L or H pile, i believes that j is aggressive. On observing L, i realizes that j had a low card and is likely to have a high card after its exchange. Because the probability of receiving a low card is now high, i chooses to keep its card. On observing H, believing that the probability of receiving a high numbered card is high, i chooses to exchange its card. In the final step, i chooses to call regardless of its observation history, because its belief that j has a higher card is not sufficiently high to conclude that it is better to fold and relinquish the payoff. This is partly due to the fact that an observation of, say, L resets agent i's previous time step beliefs over j's hand to the low numbered cards only.

Figure 12: A level 1 agent i's three step policy in the Poker problem. i starts by believing that j is equally likely to be aggressive or conservative and could have any card in its hand with equal probability.

6. DISCUSSION

We showed how DIDs may be extended to I-DIDs that enable online sequential decision-making in uncertain multiagent settings. Our graphical representation of I-DIDs improves significantly on previous work by being more transparent, semantically clear, and capable of being solved using standard algorithms that target DIDs. I-DIDs extend NIDs to allow sequential decision-making over multiple time steps in the presence of other interacting agents. I-DIDs may be seen as concise graphical representations for I-POMDPs, providing a way to exploit problem structure and carry out online decision-making as the agent acts and observes given its prior
beliefs. We are currently investigating ways to solve I-DIDs approximately with provable bounds on the solution quality.

Acknowledgment: We thank Piotr Gmytrasiewicz for some useful discussions related to this work. The first author would like to acknowledge the support of a UGARF grant.

7. REFERENCES

[1] R. J. Aumann. Interactive epistemology I: Knowledge. International Journal of Game Theory, 28:263-300, 1999.
[2] D. Billings, A. Davidson, J. Schaeffer, and D. Szafron. The challenge of poker. Artificial Intelligence Journal, 2001.
[3] A. Brandenburger and E. Dekel. Hierarchies of beliefs and common knowledge. Journal of Economic Theory, 59:189-198, 1993.
[4] C. Camerer. Behavioral Game Theory: Experiments in Strategic Interaction. Princeton University Press, 2003.
[5] E. Fehr and S. Gachter. Cooperation and punishment in public goods experiments. American Economic Review, 90(4):980-994, 2000.
[6] D. Fudenberg and D. K. Levine. The Theory of Learning in Games. MIT Press, 1998.
[7] D. Fudenberg and J. Tirole. Game Theory. MIT Press, 1991.
[8] Y. Gal and A. Pfeffer. A language for modeling agents' decision-making processes in games. In AAMAS, 2003.
[9] P. Gmytrasiewicz and P. Doshi. A framework for sequential planning in multiagent settings. JAIR, 24:49-79, 2005.
[10] P. Gmytrasiewicz and E. Durfee. Rational coordination in multi-agent environments. JAAMAS, 3(4):319-350, 2000.
[11] J. C. Harsanyi. Games with incomplete information played by Bayesian players. Management Science, 14(3):159-182, 1967.
[12] R. A. Howard and J. E. Matheson. Influence diagrams. In R. A. Howard and J. E. Matheson, editors, The Principles and Applications of Decision Analysis. Strategic Decisions Group, Menlo Park, CA, 1984.
[13] L. Kaelbling, M. Littman, and A. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence Journal, 2, 1998.
[14] D. Koller and B.
Milch. Multi-agent influence diagrams for representing and solving games. In IJCAI, pages 1027-1034, 2001.
[15] K. Polich and P. Gmytrasiewicz. Interactive dynamic influence diagrams. In GTDT Workshop, AAMAS, 2006.
[16] B. Rathnasabapathy, P. Doshi, and P. J. Gmytrasiewicz. Exact solutions to interactive POMDPs using behavioral equivalence. In AAMAS, 2006.
[17] S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach (Second Edition). Prentice Hall, 2003.
[18] R. D. Shachter. Evaluating influence diagrams. Operations Research, 34(6):871-882, 1986.
[19] D. Suryadi and P. Gmytrasiewicz. Learning models of other agents using influence diagrams. In UM, 1999.

Graphical Models for Online Solutions to Interactive POMDPs

ABSTRACT

We develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation. These graphical models, called interactive dynamic influence diagrams (I-DIDs), seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents. Using several examples, we show how I-DIDs may be applied and demonstrate their usefulness.

1. INTRODUCTION

Interactive partially observable Markov decision processes (I-POMDPs) [9] provide a framework for sequential decision-making in partially observable multiagent environments. They
generalize POMDPs [13] to multiagent settings by including the other agents' computable models in the state space along with the states of the physical environment. The models encompass all information influencing the agents' behaviors, including their preferences, capabilities, and beliefs, and are thus analogous to types in Bayesian games [11]. I-POMDPs adopt a subjective approach to understanding strategic behavior, rooted in a decision-theoretic framework that takes a decision-maker's perspective on the interaction.

In [15], Polich and Gmytrasiewicz introduced interactive dynamic influence diagrams (I-DIDs) as the computational representations of I-POMDPs. I-DIDs generalize DIDs [12], which may be viewed as computational counterparts of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs contribute to a growing line of work [19] that includes multi-agent influence diagrams (MAIDs) [14] and, more recently, networks of influence diagrams (NIDs) [8]. These formalisms seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. MAIDs provide an alternative to normal and extensive game forms, using a graphical formalism to represent games of imperfect information with a decision node for each agent's actions and chance nodes capturing the agent's private information. MAIDs objectively analyze the game, efficiently computing the Nash equilibrium profile by exploiting the independence structure. NIDs extend MAIDs to include agents' uncertainty over the game being played and over models of the other agents. Each model is a MAID, and the network of MAIDs is collapsed, bottom up, into a single MAID for computing the equilibrium of the game, keeping in mind the different models of each agent.

Graphical formalisms such as MAIDs and NIDs open up a promising area of research that aims to represent
multiagent interactions more transparently. However, MAIDs provide an analysis of the game from an external viewpoint, and the applicability of both is limited to static single play games. Matters are more complex when we consider interactions that are extended over time, where predictions about others' future actions must be made using models that change as the agents act and observe. I-DIDs address this gap by allowing the representation of other agents' models as the values of a special model node. Both the other agents' models and the original agent's beliefs over these models are updated over time using special-purpose implementations.

In this paper, we improve on the previous preliminary representation of the I-DID shown in [15] by using the insight that the static I-ID is a type of NID. Thus, we may utilize NID-specific language constructs such as multiplexers to represent the model node, and subsequently the I-ID, more transparently. Furthermore, we clarify the semantics of the special purpose "policy link" introduced in the representation of the I-DID by [15], and show that it could be replaced by traditional dependency links. In the previous representation of the I-DID, the update of the agent's belief over the models of others as the agents act and receive observations was denoted using a special link called the "model update link" that connected the model nodes over time. We explicate the semantics of this link by showing how it can be implemented using the traditional dependency links between the chance nodes that constitute the model nodes. The net result is a representation of the I-DID that is significantly more transparent and semantically clear.

978-81-904262-7-5 (RPS) © 2007 IFAAMAS

2. BACKGROUND: FINITELY NESTED I-POMDPS

Interactive POMDPs generalize POMDPs to multiagent settings by including other agents' models as part of the state space [9]. Since other agents may also reason about others, the interactive state space is strategically nested; it contains beliefs about
other agents' models and their beliefs about others. For simplicity of presentation, we consider an agent, i, that is interacting with one other agent, j. A finitely nested I-POMDP of agent i with a strategy level l is defined as the tuple:

I-POMDPi,l = ⟨ISi,l, A, Ti, Ωi, Oi, Ri⟩

where:
• ISi,l denotes a set of interactive states defined as ISi,l = S × Mj,l−1, where Mj,l−1 = {Θj,l−1 ∪ SMj}, for l ≥ 1, and ISi,0 = S, where S is the set of states of the physical environment. Θj,l−1 is the set of computable intentional models of agent j: θj,l−1 = ⟨bj,l−1, θ̂j⟩, where the frame θ̂j = ⟨A, Ωj, Tj, Oj, Rj, OCj⟩. Here, j is Bayes rational and OCj is j's optimality criterion. SMj is the set of subintentional models of j. Simple examples of subintentional models include a no-information model [10] and a fictitious play model [6], both of which are history independent. We give a recursive bottom-up construction of the interactive state space below. Similar formulations of nested spaces have appeared in [1, 3].
• A = Ai × Aj is the set of joint actions of all agents in the environment;
• Ti : S × A × S → [0, 1] describes the effect of the joint actions on the physical states of the environment;
• Ωi is the set of observations of agent i;
• Oi : S × A × Ωi → [0, 1] gives the likelihood of the observations given the physical state and joint action;
• Ri : ISi × A → R describes agent i's preferences over its interactive states. Usually only the physical states will matter.

Agent i's policy is the mapping Ω∗i → Δ(Ai), where Ω∗i is the set of all observation histories of agent i. Since belief over the interactive states forms a sufficient statistic [9], the policy can also be represented as a mapping from the set of all beliefs of agent i to a distribution over its actions, Δ(ISi) →
Δ(Ai).

2.1 Belief Update

Analogous to POMDPs, an agent within the I-POMDP framework updates its belief as it acts and observes. However, there are two differences that complicate the belief update in multiagent settings when compared to single agent ones. First, since the state of the physical environment depends on the actions of both agents, i's prediction of how the physical state changes has to be made based on its prediction of j's actions. Second, changes in j's models have to be included in i's belief update. Specifically, if j is intentional, then an update of j's beliefs due to its action and observation has to be included. In other words, i has to update its belief based on its prediction of what j would observe and how j would update its belief. If j's model is subintentional, then j's probable observations are appended to the observation history contained in the model. Formally, we have:

Pr(is^t | a_i^{t−1}, o_i^t, b_{i,l}^{t−1}) = β Σ_{is^{t−1}} b_{i,l}^{t−1}(is^{t−1}) Pr(a_j^{t−1} | θ_{j,l−1}^{t−1}) T(s^{t−1}, a^{t−1}, s^t) O_i(s^t, a^{t−1}, o_i^t) × Σ_{o_j^t} O_j(s^t, a^{t−1}, o_j^t) δ_D(SE_{θ_j^t}(b_{j,l−1}^{t−1}, a_j^{t−1}, o_j^t) − b_{j,l−1}^t)    (1)

where β is a normalizing constant, δ_D is the Dirac delta function, and SE_{θ_j^t}(b_{j,l−1}^{t−1}, a_j^{t−1}, o_j^t) is an abbreviation for the belief update. For a version of the belief update when j's model is subintentional, see [9]. If agent j is also modeled as an I-POMDP, then i's belief update invokes j's belief update (via the term SE_{θ_j^t}(b_{j,l−1}^{t−1}, a_j^{t−1}, o_j^t)), which in turn could invoke i's belief update, and so on. This recursion in belief nesting bottoms out at the 0th level. At this level, the belief update of the agent reduces to a POMDP belief update. For illustrations of the belief update, additional details on I-POMDPs, and how they compare with other multiagent frameworks, see [9].

2.2 Value Iteration

Each belief state in a finitely nested I-POMDP has an associated value reflecting the maximum payoff the agent can expect in this belief state:

U(⟨b_{i,l}, θ̂_i⟩) = max_{a_i∈A_i} { Σ_{is} ER_i(is, a_i) b_{i,l}(is) + γ Σ_{o_i∈Ω_i} Pr(o_i | a_i, b_{i,l}) U(⟨SE_{θ_i}(b_{i,l}, a_i, o_i), θ̂_i⟩) }    (2)

where ER_i(is, a_i) = Σ_{a_j} R_i(is, a_i, a_j) Pr(a_j | m_{j,l−1}) (since is = (s, m_{j,l−1})). Eq. 2 is a basis for value iteration in I-POMDPs. Agent i's optimal action, a∗_i, for the case of finite horizon with discounting, is an element of the set
of optimal actions for the belief state, OPT(θi), defined as:

OPT(θ_i) = argmax_{a_i∈A_i} { Σ_{is} ER_i(is, a_i) b_{i,l}(is) + γ Σ_{o_i∈Ω_i} Pr(o_i | a_i, b_{i,l}) U(⟨SE_{θ_i}(b_{i,l}, a_i, o_i), θ̂_i⟩) }

3. INTERACTIVE INFLUENCE DIAGRAMS
3.1 Syntax
3.2 Solution

4. INTERACTIVE DYNAMIC INFLUENCE DIAGRAMS
4.1 Syntax
4.2 Solution

5. EXAMPLE APPLICATIONS
5.1 Followership-Leadership in the Multiagent Tiger Problem
5.2 Altruism and Reciprocity in the Public Good Problem
5.3 Strategies in Two-Player Poker

6. DISCUSSION

We showed how DIDs may be extended to I-DIDs that enable online sequential decision-making in uncertain multiagent settings. Our graphical representation of I-DIDs improves significantly on previous work by being more transparent, semantically clear, and capable of being solved using standard algorithms that target DIDs. I-DIDs extend NIDs to allow sequential decision-making over multiple time steps in the presence of other interacting agents. I-DIDs may be seen as concise graphical representations for I-POMDPs, providing a way to exploit problem structure and carry out online decision-making as the agent acts and observes given its prior beliefs. We are currently investigating ways to solve I-DIDs approximately with provable bounds on the solution quality.

Acknowledgment: We thank Piotr Gmytrasiewicz for some useful discussions related to this work. The first author would like to acknowledge the
support of a UGARF grant.","lvl-4":"Graphical Models for Online Solutions to Interactive POMDPs\nABSTRACT\nWe develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation.\nThese graphical models called interactive dynamic influence diagrams (I-DIDs) seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables.\nI-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs.\nI-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents.\nUsing several examples, we show how I-DIDs may be applied and demonstrate their usefulness.\n1.\nINTRODUCTION\nInteractive partially observable Markov decision processes (IPOMDPs) [9] provide a framework for sequential decision-making in partially observable multiagent environments.\nThey generalize POMDPs [13] to multiagent settings by including the other agents' computable models in the state space along with the states of the physical environment.\nThe models encompass all information influencing the agents' behaviors, including their preferences, capabilities, and beliefs, and are thus analogous to types in Bayesian games [11].\nIn [15], Polich and Gmytrasiewicz introduced interactive dynamic influence diagrams (I-DIDs) as the computational representations of I-POMDPs.\nI-DIDs generalize DIDs [12], which may be viewed as computational counterparts of POMDPs, to multiagents settings in the same way that I-POMDPs generalize POMDPs.\nI-DIDs contribute to a growing line of work [19] that includes multi-agent influence diagrams (MAIDs) [14], and more recently, networks of influence 
diagrams (NIDs) [8].\nThese formalisms seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables.\nMAIDs provide an alternative to normal and extensive game forms using a graphical formalism to represent games of imperfect information with a decision node for each agent's actions and chance nodes capturing the agent's private information.\nMAIDs objectively analyze the game, efficiently computing the Nash equilibrium profile by exploiting the independence structure.\nNIDs extend MAIDs to include agents' uncertainty over the game being played and over models of the other agents.\nEach model is a MAID and the network of MAIDs is collapsed, bottom up, into a single MAID for computing the equilibrium of the game keeping in mind the different models of each agent.\nGraphical formalisms such as MAIDs and NIDs open up a promising area of research that aims to represent multiagent interactions more transparently.\nMatters are more complex when we consider interactions that are extended over time, where predictions about others' future actions must be made using models that change as the agents act and observe.\nI-DIDs address this gap by allowing the representation of other agents' models as the values of a special model node.\nBoth, other agents' models and the original agent's beliefs over these models are updated over time using special-purpose implementations.\nIn this paper, we improve on the previous preliminary representation of the I-DID shown in [15] by using the insight that the static I-ID is a type of NID.\nThus, we may utilize NID-specific language constructs such as multiplexers to represent the model node, and subsequently the I-ID, more transparently.\nFurthermore, we clarify the semantics of the special purpose \"policy link\" introduced in the representation of I-DID by [15], and show that it could be replaced by 
traditional dependency links.\nIn the previous representation of the I-DID, the update of the agent's belief over the models of others as the agents act and receive observations was denoted using a special link called the \"model update link\" that connected the model nodes over time.\nWe explicate the semantics of this link by showing how it can be implemented using the traditional dependency links between the chance nodes that constitute the model nodes.\nThe net result is a representation of I-DID that is significantly more\n2.\nBACKGROUND: FINITELY NESTED IPOMDPS\nInteractive POMDPs generalize POMDPs to multiagent settings by including other agents' models as part of the state space [9].\nSince other agents may also reason about others, the interactive state space is strategically nested; it contains beliefs about other agents' models and their beliefs about others.\nFor simplicity of presentation we consider an agent, i, that is interacting with one other agent, j.\nA finitely nested I-POMDP of agent i with a strategy level l is defined as the tuple:\n19j, l \u2212 1 is the set of computable intentional models of agent j: \u03b8j, l \u2212 1 = ~ bj, l \u2212 1, \u02c6\u03b8j ~ where the frame, \u02c6\u03b8j = ~ A, \u03a9j, Tj, Oj, Rj, OCj ~.\nSMj is the set of subintentional models of j. 
Simple examples of subintentional models include a no-information model [10] and a fictitious play model [6], both of which are history independent.\nWe give a recursive bottom-up construction of the interactive state space below.\nSimilar formulations of nested spaces have appeared in [1, 3].\nUsually only the physical states will matter.\nAgent i's policy is the mapping, \u03a9 \u2217 i \u2192 \u0394 (Ai), where \u03a9 \u2217 i is the set of all observation histories of agent i.\nSince belief over the interactive states forms a sufficient statistic [9], the policy can also be represented as a mapping from the set of all beliefs of agent i to a distribution over its actions, \u0394 (ISi) \u2192 \u0394 (Ai).\n2.1 Belief Update\nAnalogous to POMDPs, an agent within the I-POMDP framework updates its belief as it acts and observes.\nHowever, there are two differences that complicate the belief update in multiagent settings when compared to single agent ones.\nFirst, since the state of the physical environment depends on the actions of both agents, i's prediction of how the physical state changes has to be made based on its prediction of j's actions.\nSecond, changes in j's models have to be included in i's belief update.\nSpecifically, if j is intentional then an update of j's beliefs due to its action and observation has to be included.\nIn other words, i has to update its belief based on its prediction of what j would observe and how j would update its belief.\nIf j's model is subintentional, then j's probable observations are appended to the observation history contained in the model.\nis an abbreviation for the belief update.\nFor a version of the belief update when j's model is subintentional, see [9].\nIf agent j is also modeled as an I-POMDP, then i's belief update invokes j's belief update (via the term SE\u03b8tj (bt \u2212 1\nj, otj)), which in turn could invoke i's belief update and so on.\nThis recursion in belief nesting bottoms out at the 0th level.\nAt 
this level, the belief update of the agent reduces to a POMDP belief update.\n1 For illustrations of the belief update, additional details on I-POMDPs, and how they compare with other multiagent frameworks, see [9].\n2.2 Value Iteration\nEach belief state in a finitely nested I-POMDP has an associated value reflecting the maximum payoff the agent can expect in this belief state:\nEq.\n2 is a basis for value iteration in I-POMDPs.\nAgent i's optimal action, a \u2217 i, for the case of finite horizon with discounting, is an element of the set of optimal actions for the belief state, OPT (\u03b8i), defined as:\n6.\nDISCUSSION\nWe showed how DIDs may be extended to I-DIDs that enable online sequential decision-making in uncertain multiagent settings.\nOur graphical representation of I-DIDs improves on the previous Figure 12: A level 1 agent i's three step policy in the Poker problem.\nwork significantly by being more transparent, semantically clear, and capable of being solved using standard algorithms that target DIDs.\nI-DIDs extend NIDs to allow sequential decision-making over multiple time steps in the presence of other interacting agents.\nI-DIDs may be seen as concise graphical representations for IPOMDPs providing a way to exploit problem structure and carry out online decision-making as the agent acts and observes given its prior beliefs.\nAcknowledgment: We thank Piotr Gmytrasiewicz for some useful discussions related to this work.","lvl-2":"Graphical Models for Online Solutions to Interactive POMDPs\nABSTRACT\nWe develop a new graphical representation for interactive partially observable Markov decision processes (I-POMDPs) that is significantly more transparent and semantically clear than the previous representation.\nThese graphical models called interactive dynamic influence diagrams (I-DIDs) seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the 
dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents. Using several examples, we show how I-DIDs may be applied and demonstrate their usefulness.

1. INTRODUCTION

Interactive partially observable Markov decision processes (I-POMDPs) [9] provide a framework for sequential decision-making in partially observable multiagent environments. They generalize POMDPs [13] to multiagent settings by including the other agents' computable models in the state space along with the states of the physical environment. The models encompass all information influencing the agents' behaviors, including their preferences, capabilities, and beliefs, and are thus analogous to types in Bayesian games [11]. I-POMDPs adopt a subjective approach to understanding strategic behavior, rooted in a decision-theoretic framework that takes a decision-maker's perspective in the interaction. In [15], Polich and Gmytrasiewicz introduced interactive dynamic influence diagrams (I-DIDs) as the computational representations of I-POMDPs. I-DIDs generalize DIDs [12], which may be viewed as computational counterparts of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs contribute to a growing line of work [19] that includes multi-agent influence diagrams (MAIDs) [14], and more recently, networks of influence diagrams (NIDs) [8]. These formalisms seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. MAIDs provide an alternative to normal and extensive game forms using a graphical formalism to represent games of imperfect information with a
decision node for each agent's actions and chance nodes capturing the agent's private information. MAIDs objectively analyze the game, efficiently computing the Nash equilibrium profile by exploiting the independence structure. NIDs extend MAIDs to include agents' uncertainty over the game being played and over models of the other agents. Each model is a MAID, and the network of MAIDs is collapsed, bottom up, into a single MAID for computing the equilibrium of the game keeping in mind the different models of each agent. Graphical formalisms such as MAIDs and NIDs open up a promising area of research that aims to represent multiagent interactions more transparently. However, MAIDs provide an analysis of the game from an external viewpoint, and the applicability of both is limited to static single-play games. Matters are more complex when we consider interactions that are extended over time, where predictions about others' future actions must be made using models that change as the agents act and observe. I-DIDs address this gap by allowing the representation of other agents' models as the values of a special model node. Both the other agents' models and the original agent's beliefs over these models are updated over time using special-purpose implementations. In this paper, we improve on the previous preliminary representation of the I-DID shown in [15] by using the insight that the static I-ID is a type of NID. Thus, we may utilize NID-specific language constructs such as multiplexers to represent the model node, and subsequently the I-ID, more transparently. Furthermore, we clarify the semantics of the special-purpose "policy link" introduced in the representation of the I-DID by [15], and show that it could be replaced by traditional dependency links. In the previous representation of the I-DID, the update of the agent's belief over the models of others as the agents act and receive observations was denoted using a special link called the "model update
link" that connected the model nodes over time. We explicate the semantics of this link by showing how it can be implemented using the traditional dependency links between the chance nodes that constitute the model nodes. The net result is a representation of I-DID that is significantly more transparent, semantically clear, and capable of being implemented using the standard algorithms for solving DIDs. We show how I-DIDs may be used to model an agent's uncertainty over others' models, which may themselves be I-DIDs. Solution to the I-DID is a policy that prescribes what the agent should do over time, given its beliefs over the physical state and others' models. Analogous to DIDs, I-DIDs may be used to compute the policy of an agent online as the agent acts and observes in a setting that is populated by other interacting agents.

978-81-904262-7-5 (RPS) © 2007 IFAAMAS

2. BACKGROUND: FINITELY NESTED I-POMDPS

Interactive POMDPs generalize POMDPs to multiagent settings by including other agents' models as part of the state space [9]. Since other agents may also reason about others, the interactive state space is strategically nested; it contains beliefs about other agents' models and their beliefs about others. For simplicity of presentation we consider an agent, i, that is interacting with one other agent, j. A finitely nested I-POMDP of agent i with a strategy level l is defined as the tuple: I-POMDPi,l = ⟨ISi,l, A, Ti, Ωi, Oi, Ri⟩, where:
• ISi,l denotes a set of interactive states defined as ISi,l = S × Mj,l−1, where Mj,l−1 = {Θj,l−1 ∪ SMj}, for l ≥ 1, and ISi,0 = S, where S is the set of states of the physical environment. Θj,l−1 is the set of computable intentional models of agent j: θj,l−1 = ⟨bj,l−1, θ̂j⟩, where the frame θ̂j = ⟨A, Ωj, Tj, Oj, Rj, OCj⟩. Here, j is Bayes rational and OCj is j's optimality criterion. SMj is the set of subintentional models of j.
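The recursive, bottom-up construction of the interactive state space (ISi,0 = S, ISi,l = S × Mj,l−1) can be sketched in code. The following Python fragment is our own illustration, not the paper's implementation; the class name, the string stand-in for the frame θ̂j, and the example beliefs are all assumptions:

```python
from dataclasses import dataclass
from itertools import product
from typing import Tuple

# Illustrative sketch (not from the paper) of the finitely nested interactive
# state space: IS_{i,0} = S, and IS_{i,l} = S x M_{j,l-1}. An intentional
# model of j pairs a belief b_{j,l-1} with a frame
# theta_hat_j = <A, Omega_j, T_j, O_j, R_j, OC_j>; the frame is a plain
# string stand-in here.

@dataclass(frozen=True)
class IntentionalModel:
    belief: Tuple[Tuple[str, float], ...]  # b_{j,l-1} over IS_{j,l-1}
    frame: str                             # stand-in for theta_hat_j

def interactive_states(S, models_of_j, level):
    """IS_{i,l}: physical states alone at level 0, else S x M_{j,l-1}."""
    if level == 0:
        return list(S)
    return list(product(S, models_of_j))

# Two models of j that differ only in their initial beliefs (the numbers
# are illustrative, loosely echoing the tiger example later in the paper).
S = ["tiger-left", "tiger-right"]
m1 = IntentionalModel(belief=(("tiger-left", 0.9), ("tiger-right", 0.1)), frame="theta_hat_j")
m2 = IntentionalModel(belief=(("tiger-left", 0.1), ("tiger-right", 0.9)), frame="theta_hat_j")

print(len(interactive_states(S, [m1, m2], 0)))  # 2: physical states only
print(len(interactive_states(S, [m1, m2], 1)))  # 4: 2 states x 2 models
```

Freezing the dataclass makes models hashable, which is convenient when a level l state space is built on top of sets of level l − 1 models.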
Simple examples of subintentional models include a no-information model [10] and a fictitious play model [6], both of which are history independent. We give a recursive bottom-up construction of the interactive state space below. Similar formulations of nested spaces have appeared in [1, 3].
• A = Ai × Aj is the set of joint actions of all agents in the environment;
• Ti: S × A × S → [0, 1] describes the effect of the joint actions on the physical states of the environment;
• Ωi is the set of observations of agent i;
• Oi: S × A × Ωi → [0, 1] gives the likelihood of the observations given the physical state and joint action;
• Ri: ISi × A → R describes agent i's preferences over its interactive states. Usually only the physical states will matter.

Agent i's policy is the mapping, Ω∗i → Δ(Ai), where Ω∗i is the set of all observation histories of agent i. Since belief over the interactive states forms a sufficient statistic [9], the policy can also be represented as a mapping from the set of all beliefs of agent i to a distribution over its actions, Δ(ISi) → Δ(Ai).

2.1 Belief Update

Analogous to POMDPs, an agent within the I-POMDP framework updates its belief as it acts and observes. However, there are two differences that complicate the belief update in multiagent settings when compared to single agent ones. First, since the state of the physical environment depends on the actions of both agents, i's prediction of how the physical state changes has to be made based on its prediction of j's actions. Second, changes in j's models have to be included in i's belief update. Specifically, if j is intentional then an update of j's beliefs due to its action and observation has to be included. In other words, i has to update its belief based on its prediction of what j would observe and how j would update its belief. If j's model
is subintentional, then j's probable observations are appended to the observation history contained in the model. Formally, we have:

bti,l(ist) = Pr(ist | at−1i, oti, bt−1i,l)   (1)

where SEθi(bt−1i,l, at−1i, oti) is an abbreviation for the belief update. For a version of the belief update when j's model is subintentional, see [9]. If agent j is also modeled as an I-POMDP, then i's belief update invokes j's belief update (via the term SEθtj(bt−1j, otj)), which in turn could invoke i's belief update and so on. This recursion in belief nesting bottoms out at the 0th level. At this level, the belief update of the agent reduces to a POMDP belief update. For illustrations of the belief update, additional details on I-POMDPs, and how they compare with other multiagent frameworks, see [9].

2.2 Value Iteration

Each belief state in a finitely nested I-POMDP has an associated value reflecting the maximum payoff the agent can expect in this belief state:

U(θi) = max_{ai ∈ Ai} { Σ_is ERi(is, ai) bi,l(is) + γ Σ_{oi ∈ Ωi} Pr(oi | ai, bi,l) U(⟨SEθi(bi,l, ai, oi), θ̂i⟩) }   (2)

where ERi(is, ai) = Σ_{aj} Ri(is, ai, aj) Pr(aj | mj,l−1) (since is = (s, mj,l−1)). Eq. 2 is a basis for value iteration in I-POMDPs. Agent i's optimal action, a∗i, for the case of finite horizon with discounting, is an element of the set of optimal actions for the belief state, OPT(θi), defined as:

OPT(θi) = argmax_{ai ∈ Ai} { Σ_is ERi(is, ai) bi,l(is) + γ Σ_{oi ∈ Ωi} Pr(oi | ai, bi,l) U(⟨SEθi(bi,l, ai, oi), θ̂i⟩) }   (3)

3. INTERACTIVE INFLUENCE DIAGRAMS

A naive extension of influence diagrams (IDs) to settings populated by multiple agents is possible by treating other agents as automatons, represented using chance nodes. However, this approach assumes that the agents' actions are controlled using a probability distribution that does not change over time. Interactive influence diagrams (I-IDs) adopt a more sophisticated approach by generalizing IDs to make them applicable to settings shared with other agents who may act and observe, and update their beliefs.

3.1 Syntax

In addition to the usual chance, decision, and utility nodes, I-IDs include a new type of node called the model node. We show a general level l I-ID in Fig.
1 (a), where the model node (Mj,l−1) is denoted using a hexagon. We note that the probability distribution over the chance node, S, and the model node together represents agent i's belief over its interactive states. In addition to the model node, I-IDs differ from IDs by having a dashed link (called the "policy link" in [15]) between the model node and a chance node, Aj, that represents the distribution over the other agent's actions given its model. In the absence of other agents, the model node and the chance node, Aj, vanish and I-IDs collapse into traditional IDs. The model node contains the alternative computational models ascribed by i to the other agent from the set, Θj,l−1 ∪ SMj, where Θj,l−1 and SMj were defined previously in Section 2. Thus, a model in the model node may itself be an I-ID or ID, and the recursion terminates when a model is an ID or subintentional. Because the model node contains the alternative models of the other agent as its values, its representation is not trivial. In particular, some of the models within the node are I-IDs that when solved generate the agent's optimal policy in their decision nodes.

Footnote 1: The 0th level model is a POMDP: the other agent's actions are treated as exogenous events and folded into the T, O, and R functions.

Figure 1: (a) A generic level l I-ID for agent i situated with one other agent j. The hexagon is the model node (Mj,l−1) whose structure we show in (b). Members of the model node are I-IDs themselves (m1j,l−1, m2j,l−1; diagrams not shown here for simplicity) whose decision nodes are mapped to the corresponding chance nodes (A1j, A2j). Depending on the value of the node, Mod[Mj], the distribution of each of the chance nodes is assigned to the node Aj. (c) The transformed I-ID with the model node replaced by the chance nodes and the relationships between them.

Each decision node is mapped to the corresponding chance node, say A1j, in the following way: if OPT is the
set of optimal actions obtained by solving the I-ID (or ID), then Pr(aj ∈ A1j) = 1/|OPT| if aj ∈ OPT, and 0 otherwise. Borrowing insights from previous work [8], we observe that the model node and the dashed "policy link" that connects it to the chance node, Aj, could be represented as shown in Fig. 1 (b). The decision node of each level l − 1 I-ID is transformed into a chance node, as we mentioned previously, so that the actions with the largest value in the decision node are assigned uniform probabilities in the chance node while the rest are assigned zero probability. The different chance nodes (A1j, A2j), one for each model, and additionally, the chance node labeled Mod[Mj] form the parents of the chance node, Aj. Thus, there are as many action nodes (A1j, A2j) in Mj,l−1 as the number of models in the support of agent i's beliefs. The conditional probability table of the chance node, Aj, is a multiplexer that assumes the distribution of each of the action nodes (A1j, A2j) depending on the value of Mod[Mj]. The values of Mod[Mj] denote the different models of j. In other words, when Mod[Mj] has the value m1j,l−1, the chance node Aj assumes the distribution of the node A1j, and Aj assumes the distribution of A2j when Mod[Mj] has the value m2j,l−1. The distribution over the node, Mod[Mj], is the agent i's belief over the models of j given a physical state. For more agents, we will have as many model nodes as there are agents. Notice that Fig. 1 (b) clarifies the semantics of the "policy link", and shows how it can be represented using the traditional dependency links. In Fig.
1 (c), we show the transformed I-ID when the model node is replaced by the chance nodes and relationships between them. In contrast to the representation in [15], there are no special-purpose "policy links"; rather, the I-ID is composed of only those types of nodes that are found in traditional IDs and dependency relationships between the nodes. This allows I-IDs to be represented and implemented using conventional application tools that target IDs. Note that we may view the level l I-ID as a NID. Specifically, each of the level l − 1 models within the model node are blocks in the NID (see Fig. 2). If the level l = 1, each block is a traditional ID; otherwise, if l > 1, each block within the NID may itself be a NID. Note that within the I-IDs (or IDs) at each level, there is only a single decision node. Thus, our NID does not contain any MAIDs.

Figure 2: A level l I-ID represented as a NID. The probabilities assigned to the blocks of the NID are i's beliefs over j's models conditioned on a physical state.

3.2 Solution

The solution of an I-ID proceeds in a bottom-up manner, and is implemented recursively. We start by solving the level 0 models, which, if intentional, are traditional IDs. Their solutions provide probability distributions over the other agents' actions, which are entered in the corresponding chance nodes found in the model node of the level 1 I-ID. The mapping from the level 0 models' decision nodes to the chance nodes is carried out so that actions with the largest value in the decision node are assigned uniform probabilities in the chance node while the rest are assigned zero probability. Given the distributions over the actions within the different chance nodes (one for each model of the other agent), the level 1 I-ID is transformed as shown in Fig.
1 (c). During the transformation, the conditional probability table (CPT) of the node, Aj, is populated such that the node assumes the distribution of each of the chance nodes depending on the value of the node, Mod[Mj]. As we mentioned previously, the values of the node Mod[Mj] denote the different models of the other agent, and its distribution is the agent i's belief over the models of j conditioned on the physical state. The transformed level 1 I-ID is a traditional ID that may be solved using the standard expected utility maximization method [18]. This procedure is carried out up to the level l I-ID whose solution gives the non-empty set of optimal actions that the agent should perform given its belief. Notice that analogous to IDs, I-IDs are suitable for online decision-making when the agent's current belief is known.

816 The Sixth Intl. Joint Conf. on Autonomous Agents and Multi-Agent Systems (AAMAS 07)

Figure 3: (a) A generic two time-slice level l I-DID for agent i in a setting with one other agent j. Notice the dotted model update link that denotes the update of the models of j and the distribution over the models over time. (b) The semantics of the model update link.

4. INTERACTIVE DYNAMIC INFLUENCE DIAGRAMS

Interactive dynamic influence diagrams (I-DIDs) extend I-IDs (and NIDs) to allow sequential decision-making over several time steps. Just as DIDs are structured graphical representations of POMDPs, I-DIDs are the graphical online analogs for finitely nested I-POMDPs. I-DIDs may be used to optimize over a finite look-ahead given initial beliefs while interacting with other, possibly similar, agents.

4.1 Syntax

We depict a general two time-slice I-DID in Fig. 3 (a). In addition to the model nodes and the dashed policy link, what differentiates an I-DID from a DID is the model update link shown as a dotted arrow in Fig.
3 (a). We explained the semantics of the model node and the policy link in the previous section; we describe the model updates next. The update of the model node over time involves two steps: First, given the models at time t, we identify the updated set of models that reside in the model node at time t + 1. Recall from Section 2 that an agent's intentional model includes its belief. Because the agents act and receive observations, their models are updated to reflect their changed beliefs. Since the set of optimal actions for a model could include all the actions, and the agent may receive any one of |Ωj| possible observations, the updated set at time step t + 1 will have at most |Mtj,l−1| |Aj| |Ωj| models. Here, |Mtj,l−1| is the number of models at time step t, and |Aj| and |Ωj| are the largest spaces of actions and observations, respectively, among all the models. Second, we compute the new distribution over the updated models given the original distribution and the probability of the agent performing the action and receiving the observation that led to the updated model. These steps are a part of agent i's belief update formalized using Eq. 1. In Fig.
3 (b), we show how the dotted model update link is implemented in the I-DID. If each of the two level l − 1 models ascribed to j at time step t results in one action, and j could make one of two possible observations, then the model node at time step t + 1 contains at most four updated models (mt+1,1j,l−1, ..., mt+1,4j,l−1). These models differ in their initial beliefs, each of which is the result of j updating its beliefs due to its action and a possible observation. The decision nodes in each of the I-DIDs or DIDs that represent the lower level models are mapped to the corresponding chance nodes, as mentioned previously.

Figure 4: Transformed I-DID with the model nodes and model update link replaced with the chance nodes and the relationships (in bold).

Next, we describe how the distribution over the updated set of models (the distribution over the chance node Mod[Mt+1j] in Mt+1j,l−1) is computed. The probability that j's updated model is, say mt+1,1j,l−1, depends on the probability of j performing the action and receiving the observation that led to this model, and the prior distribution over the models at time step t.
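The reweighting just described, the prior over models times the probability of the action times the observation likelihood, can be sketched as follows. This is a hypothetical simplification of the model-node update, not the paper's implementation: the physical state is marginalized out of the observation probability for brevity, and every identifier below is invented for illustration:

```python
# Hypothetical sketch of the model-update step: each model of j at time t,
# combined with one of its optimal actions and one possible observation,
# yields an updated candidate model at t+1, weighted by the prior over
# models, Pr(a_j | m_j) (uniform over OPT(m_j)), and Pr(o_j | a_j).

def expand_model_node(models, prior, optimal_actions, obs_prob, observations):
    """Return a normalized distribution over updated models, keyed by
    (model, action, observation) triples."""
    updated = {}
    for m in models:
        acts = optimal_actions[m]
        for a in acts:
            for o in observations:
                w = prior[m] * (1.0 / len(acts)) * obs_prob[(a, o)]
                if w > 0.0:
                    updated[(m, a, o)] = updated.get((m, a, o), 0.0) + w
    total = sum(updated.values())
    return {k: v / total for k, v in updated.items()}

# Toy numbers: two models, one optimal action each, two observations.
models = ["m1", "m2"]
prior = {"m1": 0.5, "m2": 0.5}
optimal_actions = {"m1": ["OL"], "m2": ["OR"]}
observations = ["GL", "GR"]
obs_prob = {("OL", "GL"): 0.5, ("OL", "GR"): 0.5,
            ("OR", "GL"): 0.5, ("OR", "GR"): 0.5}

posterior = expand_model_node(models, prior, optimal_actions, obs_prob, observations)
print(len(posterior))  # 4 updated models: one per (model, action, observation)
print(sum(posterior.values()))
```

With one optimal action per model and two observations, the four candidates match the |Mtj,l−1| |Aj| |Ωj| bound discussed above when |Aj| is read as the number of optimal actions per model.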
Because the chance node Atj assumes the distribution of each of the action nodes based on the value of Mod[Mtj], the probability of the action is given by this chance node. In order to obtain the probability of j's possible observation, we introduce the chance node Oj, which, depending on the value of Mod[Mtj], assumes the distribution of the observation node in the lower level model denoted by Mod[Mtj]. Because the probability of j's observations depends on the physical state and the joint actions of both agents, the node Oj is linked with St+1, Atj, and Ati. Analogous to Atj, the conditional probability table of Oj is also a multiplexer modulated by Mod[Mtj]. Finally, the distribution over the prior models at time t is obtained from the chance node, Mod[Mtj] in Mtj,l−1. Consequently, the chance nodes, Mod[Mtj], Atj, and Oj, form the parents of Mod[Mt+1j] in Mt+1j,l−1. Notice that the model update link may be replaced by the dependency links between the chance nodes that constitute the model nodes in the two time slices. In Fig. 4 we show the two time-slice I-DID with the model nodes replaced by the chance nodes and the relationships between them. Chance nodes and dependency links that are not in bold are standard, usually found in DIDs. Expansion of the I-DID over more time steps requires the repetition of the two steps of updating the set of models that form the values of the model node and adding the relationships between the chance nodes, as many times as there are model update links. We note that the possible set of models of the other agent j grows exponentially with the number of time steps. For example, after T steps, there may be at most |Mt=1j,l−1| (|Aj| |Ωj|)^(T−1) candidate models residing in the model node.

4.2 Solution

Analogous to I-IDs, the solution to a level l I-DID for agent i expanded over T time steps may be carried out recursively. For the purpose of illustration, let l = 1 and T = 2. The solution method uses the standard look-ahead technique, projecting the agent's action and observation sequences forward from the current belief state [17], and finding the possible beliefs that i could have in the next time step. Because agent i has a belief over j's models as well, the look-ahead includes finding out the possible models that j could have in the future. Consequently, each of j's subintentional or level 0 models (represented using a standard DID) in the first time step must be solved to obtain its optimal set of actions. These actions are combined with the set of possible observations that j could make in that model, resulting in an updated set of candidate models (that include the updated beliefs) that could describe the behavior of j. Beliefs over this updated set of candidate models are calculated using the standard inference methods using the dependency relationships between the model nodes as shown in Fig.
3 (b). We note the recursive nature of this solution: in solving agent i's level 1 I-DID, j's level 0 DIDs must be solved. If the nesting of models is deeper, all models at all levels starting from 0 are solved in a bottom-up manner. We briefly outline the recursive algorithm for solving agent i's level l I-DID expanded over T time steps with one other agent j in Fig. 5.

3. For each mtj in Range(Mtj,l−1) do
4. Recursively call algorithm with the l − 1 I-ID (or ID) that represents mtj and the horizon, T − t + 1
5. Map the decision node of the solved I-ID (or ID), OPT(mtj), to a chance node Aj
6. For each aj in OPT(mtj) do
7. For each oj in Oj (part of mtj) do

Figure 5: Algorithm for solving a level l > 0 I-DID.

We adopt a two-phase approach: Given an I-ID of level l (described previously in Section 3) with all lower level models also represented as I-IDs or IDs (if level 0), the first step is to expand the level l I-ID over T time steps, adding the dependency links and the conditional probability tables for each node. We particularly focus on establishing and populating the model nodes (lines 3-11). Note that Range(·) returns the values (lower level models) of the random variable given as input (model node). In the second phase, we use a standard look-ahead technique, projecting the action and observation sequences over T time steps in the future, and backing up the utility values of the reachable beliefs. Similar to I-IDs, the I-DIDs reduce to DIDs in the absence of other agents. As we mentioned previously, the 0th level models are the traditional DIDs. Their solutions provide probability distributions over actions of the agent modeled at that level to I-DIDs at level 1. Given probability distributions over other agents' actions, the level 1 I-DIDs can themselves be solved as DIDs, and provide probability distributions to yet higher level models. Assume that the number of models considered at each level is bounded by a number, M.
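With the number of models per level bounded by M, the cost of this bottom-up schedule can be sketched by counting solver invocations; the function below is our own illustrative counter, not the paper's algorithm, and `count_did_solves` is a made-up name:

```python
# Sketch of the recursive bottom-up solution schedule: a level-l I-DID
# triggers the solution of the lower-level models of the other agent, each
# of which recursively triggers its own lower-level solves, down to level 0
# (a traditional DID). We only count solves here.

def count_did_solves(level, models_per_level):
    """Number of (I-)DID solves triggered by one level-`level` I-DID when
    every model node holds `models_per_level` models of the other agent."""
    if level == 0:
        return 1  # a level 0 model is a traditional DID
    return models_per_level * count_did_solves(level - 1, models_per_level)

M = 3
for l in range(4):
    print(l, count_did_solves(l, M))  # grows as M**l: 1, 3, 9, 27
```

The geometric growth in solver calls is exactly the O(M^l) behavior: each added level of nesting multiplies the work by the number of models ascribed to the other agent.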
Solving an I-DID of level l is then equivalent to solving O(M^l) DIDs.

5. EXAMPLE APPLICATIONS

To illustrate the usefulness of I-DIDs, we apply them to three problem domains. We describe, in particular, the formulation of the I-DID and the optimal prescriptions obtained on solving it.

5.1 Followership-Leadership in the Multiagent Tiger Problem

We begin our illustrations of using I-IDs and I-DIDs with a slightly modified version of the multiagent tiger problem discussed in [9]. The problem has two agents, each of which can open the right door (OR), the left door (OL) or listen (L). In addition to hearing growls (from the left (GL) or from the right (GR)) when they listen, the agents also hear creaks (from the left (CL), from the right (CR), or no creaks (S)), which noisily indicate the other agent's opening one of the doors. When any door is opened, the tiger persists in its original location with a probability of 95%. Agent i hears growls with a reliability of 65% and creaks with a reliability of 95%. Agent j, on the other hand, hears growls with a reliability of 95%. Thus, the setting is such that agent i hears agent j opening doors more reliably than the tiger's growls. This suggests that i could use j's actions as an indication of the location of the tiger, as we discuss below. Each agent's preferences are as in the single agent game discussed in [13]. The transition, observation, and reward functions are shown in [16]. A good indicator of the usefulness of normative methods for decision-making like I-DIDs is the emergence of realistic social behaviors in their prescriptions. In settings of the persistent multiagent tiger problem that reflect real world situations, we demonstrate followership between the agents and, as shown in [15], deception among agents who believe that they are in a follower-leader type of relationship. In particular, we analyze the situational and epistemological conditions sufficient for their emergence. The followership
behavior, for example, results from the agent knowing its own weaknesses, assessing the strengths, preferences, and possible behaviors of the other, and realizing that it is best for it to follow the other's actions in order to maximize its payoffs. Let us consider a particular setting of the tiger problem in which agent i believes that j's preferences are aligned with its own - both of them just want to get the gold - and j's hearing is more reliable in comparison to its own. As an example, suppose that j, on listening, can discern the tiger's location 95% of the time compared to i's 65% accuracy. Additionally, agent i does not have any initial information about the tiger's location. In other words, i's single-level nested belief, bi,1, assigns 0.5 to each of the two locations of the tiger. In addition, i considers two models of j, which differ in j's flat level 0 initial beliefs. This is represented in the level 1 I-ID shown in Fig. 6 (a). According to one model, j assigns a probability of 0.9 that the tiger is behind the left door, while the other model assigns 0.1 to that location (see Fig. 6 (b)).

Figure 6: (a) Level 1 I-ID of agent i, (b) two level 0 IDs of agent j whose decision nodes are mapped to the chance nodes, A1j, A2j, in (a).

Agent i is undecided on these two models of j. If we vary i's hearing ability and solve the corresponding level 1 I-ID expanded over three time steps, we obtain the normative behavioral policies shown in Fig. 7 that exhibit followership behavior. If i's probability of correctly hearing the growls is 0.65, then as shown in the policy in Fig.
7 (a), i begins to conditionally follow j's actions: i opens the same door that j opened previously iff i's own assessment of the tiger's location confirms j's pick. If i completely loses the ability to correctly interpret the growls, it blindly follows j and opens the same door that j opened previously (Fig. 7 (b)).

Figure 7: Emergence of (a) conditional followership, and (b) blind followership in the tiger problem. Behaviors of interest are in bold. * is a wildcard, and denotes any one of the observations.

We observed that a single level of belief nesting - beliefs about the other's models - was sufficient for followership to emerge in the tiger problem. However, the epistemological requirements for the emergence of leadership are more complex. For an agent, say j, to emerge as a leader, followership must first emerge in the other agent i. As we mentioned previously, if i is certain that its preferences are identical to those of j, and believes that j has a better sense of hearing, i will follow j's actions over time. Agent j emerges as a leader if it believes that i will follow it, which implies that j's belief must be nested two levels deep to enable it to recognize its leadership role. Realizing that i will follow presents j with an opportunity to influence i's actions for the benefit of the collective good or its self-interest alone. For example, in the tiger problem, let us consider a setting in which if both i and j open the correct door, then each gets a payoff of 20 that is double the original. If j alone selects the correct door, it gets the payoff of 10. On the other hand, if both agents pick the wrong door, their penalties are cut in half. In this setting, it is in both j's best interest and the collective betterment for j to use its expertise in selecting the correct door, and thus be a good leader. However, consider a slightly different problem in which j gains from i's loss and is penalized if i gains. Specifically, let i's
payoff be subtracted from j's, indicating that j is antagonistic toward i - if j picks the correct door and i the wrong one, then i's loss of 100 becomes j's gain. Agent j believes that i incorrectly thinks that j's preferences are those that promote the collective good and that it starts off by believing with 99% confidence where the tiger is. Because i believes that its preferences are similar to those of j, and that j starts by believing almost surely that one of the two is the correct location (two level 0 models of j), i will start by following j's actions. We show i's normative policy on solving its singly-nested I-DID over three time steps in Fig. 8 (a). The policy demonstrates that i will blindly follow j's actions. Since the tiger persists in its original location with a probability of 0.95, i will select the same door again. If j begins the game with a 99% probability that the tiger is on the right, solving j's I-DID nested two levels deep results in the policy shown in Fig.
8 (b). Even though j is almost certain that OL is the correct action, it will start by selecting OR, followed by OL. Agent j's intention is to deceive i who, it believes, will follow j's actions, so as to gain $110 in the second time step, which is more than what j would gain if it were to be honest.

Figure 8: Emergence of deception between agents in the tiger problem. Behaviors of interest are in bold. * denotes as before. (a) Agent i's policy demonstrating that it will blindly follow j's actions. (b) Even though j is almost certain that the tiger is on the right, it will start by selecting OR, followed by OL, in order to deceive i.

5.2 Altruism and Reciprocity in the Public Good Problem

The public good (PG) problem [7] consists of a group of M agents, each of whom must either contribute some resource to a public pot or keep it for themselves. Since resources contributed to the public pot are shared among all the agents, they are less valuable to the agent when in the public pot. However, if all agents choose to contribute their resources, then the payoff to each agent is more than if no one contributes. Since an agent gets its share of the public pot irrespective of whether it has contributed or not, the dominating action is for each agent to not contribute, and instead "free ride" on others' contributions. However, behaviors of human players in empirical simulations of the PG problem differ from the normative predictions. The experiments reveal that many players initially contribute a large amount to the public pot, and continue to contribute when the PG problem is played repeatedly, though in decreasing amounts [4]. Many of these experiments [5] report that a small core group of players persistently contributes to the public pot even when all others are defecting. These experiments also reveal that players who persistently contribute have altruistic or reciprocal preferences matching expected cooperation of others. For simplicity, we assume
that the game is played between M = 2 agents, i and j. Let each agent be initially endowed with XT amount of resources.\nWhile the classical PG game formulation permits each agent to contribute any quantity of resources (\u2264 XT) to the public pot, we simplify the action space by allowing two possible actions.\nEach agent may choose to either contribute (C) a fixed amount of the resources, or not contribute.\nThe latter action is de\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 819\nnoted as defect (D).\nWe assume that the actions are not observable to others.\nThe value of resources in the public pot is discounted by ci for each agent i, where ci is the marginal private return.\nWe assume that ci <1 so that the agent does not benefit enough that it contributes to the public pot for private gain.\nSimultaneously, ciM> 1, making collective contribution pareto optimal.\nTable 1: The one-shot PG game with punishment.\nIn order to encourage contributions, the contributing agents punish free riders but incur a small cost for administering the punishment.\nLet P be the punishment meted out to the defecting agent and cp the non-zero cost of punishing for the contributing agent.\nFor simplicity, we assume that the cost of punishing is same for both the agents.\nThe one-shot PG game with punishment is shown in Table.\n1.\nLet ci = cj, cp> 0, and if P> XT--ciXT, then defection is no longer a dominating action.\nIf P <XT--ciXT, then defection is the dominating action for both.\nIf P = XT--ciXT, then the game is not dominance-solvable.\nFigure 9: (a) Level 1 I-ID of agent i, (b) level 0 IDs of agent j with decision nodes mapped to the chance nodes, A1j and A2j, in (a).\nWe formulate a sequential version of the PG problem with punishment from the perspective of agent i. 
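The dominance conditions above can be checked numerically. The following sketch is illustrative: the payoff entries are reconstructed from the stated conditions (defection dominates iff P < XT − ciXT), not copied from Table 1, so the exact table values are an assumption.

```python
# One-shot PG game with punishment, 2 players, actions C (contribute) / D (defect).
# Payoff entries are an assumption reconstructed from the dominance conditions
# in the text; XT = endowment, c = marginal private return, P = punishment,
# cp = cost of administering the punishment.

def pg_payoffs(XT, c, P, cp):
    """Return {(a_i, a_j): (u_i, u_j)} for the one-shot game."""
    return {
        ('C', 'C'): (2 * c * XT, 2 * c * XT),        # both share the doubled pot
        ('C', 'D'): (c * XT - cp, XT + c * XT - P),  # i punishes free-riding j
        ('D', 'C'): (XT + c * XT - P, c * XT - cp),
        ('D', 'D'): (XT, XT),
    }

def defection_dominates(XT, c, P, cp):
    """Defection dominates for i iff it beats contributing against both of j's actions."""
    u = pg_payoffs(XT, c, P, cp)
    return (u[('D', 'C')][0] > u[('C', 'C')][0] and
            u[('D', 'D')][0] > u[('C', 'D')][0])

# With XT = 1 and c = 0.6, the threshold XT - c*XT is 0.4:
print(defection_dominates(XT=1.0, c=0.6, P=0.3, cp=0.05))  # True  (P < 0.4)
print(defection_dominates(XT=1.0, c=0.6, P=0.5, cp=0.05))  # False (P > 0.4)
```

Note that even when P > XT − ciXT, defection remains the better response to a defecting opponent (XT > ciXT − cp); only the dominance is broken, matching the text.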
Though in the repeated PG game the quantity in the public pot is revealed to all the agents after each round of actions, we assume in our formulation that it is hidden from the agents. Each agent may contribute a fixed amount, xc, or defect. An agent, on performing an action, receives an observation of plenty (PY) or meager (MR) symbolizing the state of the public pot. Notice that the observations are also indirectly indicative of agent j's actions, because the state of the public pot is influenced by them. The amount of resources in agent i's private pot is perfectly observable to i. The payoffs are analogous to Table 1.

Borrowing from the empirical investigations of the PG problem [5], we construct level 0 IDs for j that model altruistic and non-altruistic types (Fig. 9 (b)). Specifically, our altruistic agent has a high marginal private return (cj is close to 1) and does not punish others who defect. Let xc = 1, and let the level 0 agent be punished half the times it defects. With one action remaining, both types of agents choose to contribute to avoid being punished. With two actions to go, the altruistic type chooses to contribute, while the other defects. This is because cj for the altruistic type is close to 1, so the expected punishment exceeds the gain from defecting, 0.5P > (1 − cj), which the altruistic type avoids. Because cj for the non-altruistic type is lower, it prefers not to contribute. With three steps to go, the altruistic agent contributes to avoid punishment (0.5P > 2(1 − cj)), and the non-altruistic type defects. For more than three steps, the altruistic agent continues to contribute to the public pot, depending on how close its marginal private return is to 1, while the non-altruistic type prescribes defection.

We analyzed the decisions of an altruistic agent i modeled using a level 1 I-DID expanded over 3 time steps. i ascribes the two level 0 models, mentioned previously, to j (see Fig. 9). If i believes with a probability 1 that j is altruistic, i chooses to contribute for each of the three steps. This behavior persists when i is unaware of whether j is altruistic (Fig. 10 (a)), and when i assigns a high probability to j being the non-altruistic type. However, when i believes with a probability 1 that j is non-altruistic and will thus surely defect, i chooses to defect to avoid being punished and because its marginal private return is less than 1. These results demonstrate that the behavior of our altruistic type resembles that found experimentally. The non-altruistic level 1 agent chooses to defect regardless of how likely it believes the other agent to be altruistic.

We also analyzed the behavior of a reciprocal agent type that matches expected cooperation or defection. The reciprocal type's marginal private return is similar to that of the non-altruistic type; however, it obtains a greater payoff when its action matches that of the other agent. We consider the case when the reciprocal agent i is unsure of whether j is altruistic and believes that the public pot is likely to be half full. For this prior belief, i chooses to defect. On receiving an observation of plenty, i decides to contribute, while an observation of meager makes it defect (Fig.
10 (b)). This is because an observation of plenty signals that the pot is likely to be more than half full, which results from j's action to contribute. Thus, among the two models ascribed to j, its type is likely to be altruistic, making it likely that j will contribute again in the next time step. Agent i therefore chooses to contribute to reciprocate j's action. An analogous reasoning leads i to defect when it observes a meager pot. With one action to go, i, believing that j contributes, will choose to contribute too, to avoid punishment regardless of its observations.

Figure 10: (a) An altruistic level 1 agent always contributes. (b) A reciprocal agent i starts off by defecting, followed by choosing to contribute or defect based on its observation of plenty (indicating that j is likely altruistic) or meager (j is non-altruistic).

5.3 Strategies in Two-Player Poker

Poker is a popular zero-sum card game that has received much attention in the AI research community as a testbed [2]. Poker is played among M ≥ 2 players, each of whom receives a hand of cards from a deck. While several flavors of Poker with varying complexity exist, we consider a simple version in which each player has three plys, during each of which the player may either exchange a card (E), keep the existing hand (K), fold (F) and withdraw from the game, or call (C), requiring all players to show their hands. To keep matters simple, let M = 2, and let each player receive a hand consisting of a single card drawn from the same suit. Thus, during a showdown, the player who has the numerically larger card (2 is the lowest, ace is the highest) wins the pot. During an exchange of cards, the discarded card is placed either in the L pile, indicating to the other agent that it was a low numbered card (less than 8), or in the H pile, indicating that the card had a rank greater than or equal to 8. Notice that, for example, if a lower numbered card is discarded, the probability of receiving a low card in exchange is reduced.

We show the level 1 I-ID for the simplified two-player Poker in Fig. 11. We considered two models (personality types) of agent j. The conservative type believes that it is likely that its opponent has a high numbered card in its hand. On the other hand, the aggressive agent j believes with a high probability that its opponent has a lower numbered card. Thus, the two types differ in their beliefs over their opponent's hand. In both of these level 0 models, the opponent is assumed to perform its actions following a fixed, uniform distribution. With three actions to go, regardless of its hand (unless it is an ace), the aggressive agent chooses to exchange its card, with the intent of improving on its current hand. This is because it believes the other to have a low card, which improves its chances of getting a high card during the exchange. The conservative agent chooses to keep its card, no matter its hand, because its chances of getting a high card are slim, as it believes that its opponent has one.

Figure 11: (a) Level 1 I-ID of agent i. The observation reveals information about j's hand of the previous time step. (b) Level 0 IDs of agent j whose decision nodes are mapped to the chance nodes, A1j, A2j, in (a).

The policy of a level 1 agent i who believes that each card except its own has an equal likelihood of being in j's hand (neutral personality type) and that j could be either an aggressive or a conservative type is shown in Fig.
12. i's own hand contains the card numbered 8. The agent starts by keeping its card. On seeing that j did not exchange a card (N), i believes with probability 1 that j is conservative and hence will keep its cards. i responds by either keeping its card or exchanging it, because j is equally likely to have a lower or a higher card. If i observes that j discarded its card into the L or H pile, i believes that j is aggressive. On observing L, i realizes that j had a low card, and is likely to have a high card after its exchange. Because the probability of receiving a low card is high now, i chooses to keep its card. On observing H, believing that the probability of receiving a high numbered card is high, i chooses to exchange its card. In the final step, i chooses to call regardless of its observation history, because its belief that j has a higher card is not sufficiently high to conclude that it is better to fold and relinquish the payoff. This is partly due to the fact that an observation of, say, L resets agent i's previous time step beliefs over j's hand to the low numbered cards only.

Figure 12: A level 1 agent i's three step policy in the Poker problem. i starts by believing that j is equally likely to be aggressive or conservative and could have any card in its hand with equal probability.

6. DISCUSSION

We showed how DIDs may be extended to I-DIDs that enable online sequential decision-making in uncertain multiagent settings. Our graphical representation of I-DIDs improves significantly on previous work by being more transparent, semantically clear, and capable of being solved using standard algorithms that target DIDs. I-DIDs extend NIDs to allow sequential decision-making over multiple time steps in the presence of other interacting agents. I-DIDs may be seen as concise graphical representations for I-POMDPs, providing a way to exploit problem structure and carry out online decision-making as the agent acts and observes given its prior beliefs. We are currently investigating ways to solve I-DIDs approximately with provable bounds on the solution quality.

Acknowledgment: We thank Piotr Gmytrasiewicz for some useful discussions related to this work. The first author would like to acknowledge the support of a UGARF grant.

An Agent-Based Approach for Privacy-Preserving Recommender Systems

Abstract: Recommender Systems are used in various domains to generate personalized information based on personal user data. The ability to preserve the privacy of all participants is an essential requirement of the underlying Information Filtering architectures, because the deployed Recommender Systems have to be accepted by privacy-aware users as well as information and service providers. Existing approaches neglect to address privacy in this multilateral way. We have developed an approach for privacy-preserving Recommender Systems based on Multi-Agent System technology which enables applications to generate recommendations via various filtering techniques while preserving the privacy of all participants.
We describe the main modules of our solution as well as an application we have implemented based on this approach.

Richard Cissée, DAI-Labor, TU Berlin (richard.cissee@dai-labor.de) and Sahin Albayrak, DAI-Labor, TU Berlin (sahin.albayrak@dai-labor.de)

Categories and Subject Descriptors: H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval - Information Filtering; I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence - Multiagent Systems

General Terms: Management, Security, Human Factors, Standardization

1. INTRODUCTION

Information Filtering (IF) systems aim at countering information overload by extracting information that is relevant for a given user out of a large body of information available via an information provider. In contrast to Information Retrieval (IR) systems, where relevant information is extracted based on search queries, IF architectures generate personalized information based on user profiles containing, for each given user, personal data, preferences, and rated items. The provided body of information is usually structured and collected in provider profiles. Filtering techniques operate on these profiles in order to generate recommendations of items that are probably relevant for a given user, or in order to determine users with similar interests, or both. Depending on the respective goal, the resulting systems constitute Recommender Systems [5], Matchmaker Systems [10], or a combination thereof.

The aspect of privacy is an essential issue in all IF systems: generating personalized information obviously requires the use of personal data. According to surveys indicating major privacy concerns of users in the context of Recommender Systems and e-commerce in general [23], users can be expected to be less reluctant to provide personal information if they trust the system to be privacy-preserving with regard to personal data. Similar considerations also apply to the information provider, who may want to control the dissemination of the provided information, and to the provider of the filtering techniques, who may not want the details of the utilized filtering algorithms to become common knowledge. A privacy-preserving IF system should therefore balance these requirements and protect the privacy of all parties involved in a multilateral way, while addressing general requirements regarding performance, security and quality of the recommendations as well. As described in the following section, there are several approaches with similar goals, but none of them provides a generic approach in which the privacy of all parties is preserved.

We have developed an agent-based approach for privacy-preserving IF which has been utilized for realizing a combined Recommender/Matchmaker System as part of an application supporting users in planning entertainment-related activities. In this paper, we focus on the Recommender System functionality. Our approach is based on Multi-Agent System (MAS) technology because fundamental
features of agents such as autonomy, adaptability and the ability to communicate are essential requirements of our approach. In other words, the realized approach does not merely constitute a solution for privacy-preserving IF within a MAS context, but rather utilizes a MAS architecture in order to realize a solution for privacy-preserving IF which could not easily be realized otherwise.

The paper is structured as follows: Section 2 describes related work. Section 3 describes the general ideas of our approach. In Section 4, we describe essential details of the modules of our approach and their implementation. In Section 5, we evaluate the approach, mainly via the realized application. Section 6 concludes the paper with an outlook and outlines further work.

978-81-904262-7-5 (RPS) © 2007 IFAAMAS 319

2. RELATED WORK

There is a large amount of work in related areas, such as Private Information Retrieval [7], Privacy-Preserving Data Mining [2], and other privacy-preserving protocols [4, 16], most of which is based on Secure Multi-Party Computation [27]. We have ruled out Secure Multi-Party Computation approaches mainly because of their complexity, and because the algorithm that is computed securely is not considered to be private in these approaches. Various enforcement mechanisms applicable in the context of privacy-preserving Information Filtering have been suggested, such as enterprise privacy policies [17] or hippocratic databases [1], both of which annotate user data with additional meta-information specifying how the data is to be handled on the provider side. These approaches ultimately assume that the provider actually intends to protect the privacy of the user data, and offer support for this task, but they are not intended to prevent the provider from acting in a malicious manner. Trusted computing, as specified by the Trusted Computing Group, aims at realizing trusted systems by increasing the security of open systems to a level comparable with the level of security that is achievable in closed systems. It is based on a combination of tamper-proof hardware and various software components. Some example applications, including peer-to-peer networks, distributed firewalls, and distributed computing in general, are listed in [13].

There are some approaches for privacy-preserving Recommender Systems based on distributed collaborative filtering, in which recommendations are generated via a public model aggregating the distributed user profiles without containing explicit information about the user profiles themselves. This is achieved via Secure Multi-Party Computation [6], or via random perturbation of the user data [20]. In [19], various approaches are integrated within a single architecture. In [10], an agent-based approach is described in which user agents representing similar users are discovered via a transitive traversal of user agents. Privacy is preserved through pseudonymous interaction between the agents and through adding obfuscating data to personal information. More recent related approaches are described in [18]. In [3], an agent-based architecture for privacy-preserving demographic filtering is described which may be generalized in order to support other kinds of filtering techniques. While in some aspects similar to our approach, this architecture addresses at least two aspects inadequately, namely the protection of the filter against manipulation attempts, and the prevention of collusions between the filter and the provider.

3. PRIVACY-PRESERVING INFORMATION FILTERING

We identify three main abstract entities participating in an information filtering process within a distributed system: a user entity, a provider entity, and a filter entity. Whereas in some applications the provider and filter entities explicitly trust each other, because they are deployed by the same party, our solution is applicable more generically because it does not require any trust between the
main abstract entities. In this paper, we focus on aspects related to the information filtering process itself, and omit all aspects related to information collection and processing, i.e. the stages in which profiles are generated and maintained, mainly because these stages are less critical with regard to privacy, as they involve fewer different entities.

3.1 Requirements

Our solution aims at meeting the following requirements with regard to privacy:

• User Privacy: No linkable information about user profiles should be acquired permanently by any other entity or external party, including other user entities. Single user profile items, however, may be acquired permanently if they are unlinkable, i.e. if they cannot be attributed to a specific user or linked to other user profile items. Temporary acquisition of private information is permitted as well. Sets of recommendations may be acquired permanently by the provider, but they should not be linkable to a specific user. These concessions simplify the resulting protocol and allow the provider to obtain recommendations and single unlinkable user profile items, and thus to determine frequently requested information and optimize the offered information accordingly.

• Provider Privacy: No information about provider profiles, with the exception of the recommendations, should be acquired permanently by other entities or external parties. Again, temporary acquisition of private information is permitted. Additionally, the propagation of provider information is entirely under the control of the provider. Thus, the provider is enabled to prevent misuse such as the automatic large-scale extraction of information.

• Filter Privacy: Details of the algorithms applied by the filtering techniques should not be acquired permanently by any other entity or external party. General information about the algorithm may be provided by the filter entity in order to help other entities reach a decision on whether to apply the respective filtering technique.

In addition, general requirements regarding the quality of the recommendations as well as security aspects, performance and broadness of the resulting system have to be addressed. While minor trade-offs may be acceptable, the resulting system should reach a level similar to regular Recommender Systems with regard to these requirements.

3.2 Outline of the Solution

The basic idea for realizing a protocol fulfilling these privacy-related requirements in Recommender Systems is implied by allowing the temporary acquisition of private information (see [8] for the original approach): user and provider entity both propagate the respective profile data to the filter entity. The filter entity provides the recommendations, and subsequently deletes all private information, thus fulfilling the requirement regarding permanent acquisition of private information. The entities whose private information is propagated have to be certain that the respective information is actually acquired temporarily only. Trust in this regard may be established in two main ways:

• Trusted Software: The respective entity itself is trusted to remove the respective information as specified.

• Trusted Environment: The respective entity operates in an environment that is trusted to control the communication and life cycle of the entity to an extent that the removal of the respective information may be achieved regardless of the attempted actions of the entity itself. Additionally, the environment itself is trusted not to act in a malicious manner (e.g.
it is trusted not to acquire and propagate the respective information itself). In both cases, trust may be established in various ways: reputation-based mechanisms, additional trusted third parties certifying entities or environments, or trusted computing mechanisms may be used. Our approach is based on a trusted environment realized via trusted computing mechanisms, because we see this solution as the most generic and realistic approach. This decision is discussed briefly in Section 5.

We are now able to specify the abstract information filtering protocol as shown in Figure 1: the filter entity deploys a Temporary Filter Entity (TFE) operating in a trusted environment. The user entity deploys an additional relay entity operating in the same environment. Through mechanisms provided by this environment, the relay entity is able to control the communication of the TFE, and the provider entity is able to control the communication of both the relay entity and the TFE. Thus, it is possible to ensure that the controlled entities are only able to propagate recommendations, but no other private information.

In the first stage (steps 1.1 to 1.3 of Figure 1), the relay entity establishes control of the TFE, and thus prevents it from propagating user profile information. User profile data is propagated from the user entity to the TFE via the relay entity, without participation of the provider entity. In the second stage (steps 2.1 to 2.3 of Figure 1), the provider entity establishes control of both relay and TFE, and thus prevents them from propagating provider profile information. Provider profile data is propagated from the provider entity to the TFE via the relay entity. In the third stage (steps 3.1 to 3.5 of Figure 1), the TFE returns the recommendations via the relay entity, and the controlled entities are terminated. Taken together, these steps ensure that all private information is acquired only temporarily by the other main entities. The problems of determining acceptable queries on the provider profile and ensuring unlinkability of the recommendations are discussed in the following section.

Our approach requires each entity in the distributed architecture to have the following five main abilities: the ability to perform certain well-defined tasks (such as carrying out a filtering process) with a high degree of autonomy, i.e. largely independently of other entities (e.g. because the respective entity is not able to communicate in an unrestricted manner); the ability to be deployable dynamically in a well-defined environment; the ability to communicate with other entities; the ability to achieve protection against external manipulation attempts; and the ability to control and restrict the communication of other entities.

Figure 1: The abstract privacy-preserving information filtering protocol. All communication across the environments indicated by dashed lines is prevented, with the exception of communication with the controlling entity.

MAS architectures are an ideal solution for realizing a distributed system characterized by these features, because they provide agents constituting entities that are actually characterized by autonomy, mobility and the ability to communicate [26], as well as agent platforms as environments providing means to realize the security of agents. In this context, the issue of malicious hosts, i.e.
hosts attacking agents, has to be addressed explicitly. Furthermore, existing MAS architectures generally do not allow agents to control the communication of other agents. It is possible, however, to expand a MAS architecture and to provide designated agents with this ability. For these reasons, our solution is based on a FIPA [11]-compliant MAS architecture. The entities introduced above are mapped directly to agents, and the trusted environment in which they exist is realized in the form of agent platforms. In addition to the MAS architecture itself, which is assumed as given, our solution consists of the following five main modules:

• The Controller Module described in Section 4.1 provides functionality for controlling the communication capabilities of agents.

• The Transparent Persistence Module facilitates the use of different data storage mechanisms, and provides a uniform interface for accessing persistent information, which may be utilized for monitoring critical interactions involving potentially private information, e.g. as part of queries. Its description is outside the scope of this paper.

• The Recommender Module, details of which are described in Section 4.2, provides Recommender System functionality.

• The Matchmaker Module provides Matchmaker System functionality. It additionally utilizes social aspects of MAS technology. Its description is outside the scope of this paper.

• Finally, a separate module described in Section 4.4 provides Exemplary Filtering Techniques in order to show that the various restrictions imposed on filtering techniques by our approach may actually be fulfilled.

The trusted environment introduced above encompasses the MAS architecture itself and the Controller Module, which have to be trusted to act in a non-malicious manner in order to rule out the possibility of malicious hosts.

4. MAIN MODULES AND IMPLEMENTATION

In this section, we describe the main modules of our approach, and outline the implementation. While we have chosen a specific architecture for the implementation, the specification of the modules is applicable to any FIPA-compliant MAS architecture. A module basically encompasses ontologies, functionality provided by agents via agent services, and internal functionality. Throughout this paper, {m}KX denotes a message m encrypted via a non-specified symmetric encryption scheme with a secret key KX used for encryption and decryption, which is initially known only to participant X. A key KXY is a key shared by participants X and Y. A cryptographic hash function is used at various points of the protocol, i.e.
4.1 Controller Module
As noted above, the ability to control the communication of agents is generally not a feature of existing MAS architectures,² but at the same time a central requirement of our approach for privacy-preserving Information Filtering. The required functionality cannot be realized based on regular agent services or components, because an agent on a platform is usually not allowed to interfere with the actions of other agents in any way. Therefore, we add additional infrastructure providing the required functionality to the MAS architecture itself, resulting in an agent environment with extended functionality and responsibilities.
Controlling the communication capabilities of an agent is realized by restricting, via rules, in a manner similar to a firewall, but with the consent of the respective agent, its incoming and outgoing communication to specific platforms or agents on external platforms, as well as other possible communication channels such as the file system. Consent is required because otherwise the overall security would be compromised, as attackers could arbitrarily block various communication channels. Our approach does not require controlling the communication between agents on the same platform, and therefore this aspect is not addressed. Consequently, all rules addressing communication capabilities have to be enforced across entire platforms, because otherwise a controlled agent could just use a non-controlled agent on the same platform as a relay for communicating with agents residing on external platforms.
Various agent services provide functionality for adding and revoking control of platforms, including functionality required in complex scenarios where controlled agents in turn control further platforms. The implementation of the actual control mechanism depends on the actual MAS architecture. In our implementation, we have utilized methods provided via the Java Security Manager as part of the Java security model. Thus, the supervisor agent is enabled to define custom security policies, thereby granting or denying other agents access to resources required for communication with other agents as well as communication in general, such as files or sockets for TCP/IP-based communication.

¹ In the implementation, we have used the Advanced Encryption Standard (AES) as the symmetric encryption scheme and SHA-1 as the cryptographic hash function.
² A recent survey on agent environments [24] concludes that aspects related to agent environments are often neglected, and does not indicate any existing work in this particular area.

4.2 Recommender Module
The Recommender Module is mainly responsible for carrying out information filtering processes, according to the protocol described in Table 1. The participating entities are realized as agents, and the interactions as agent services. We assume that mechanisms for secure agent communication are available within the respective MAS architecture. Two issues have to be addressed in this module: the relevant parts of the provider profile have to be retrieved without compromising the user's privacy, and the recommendations have to be propagated in a privacy-preserving way.
Our solution is based on a threat model in which no main abstract entity may safely assume any other abstract entity to act in an honest manner: each entity has to assume that other entities may attempt to obtain private information, either while following the specified protocol or even by deviating from the protocol. According to [15], we classify the former case as honest-but-curious behavior (as an example, the TFE may propagate recommendations as specified, but may additionally attempt to propagate private information), and the latter case as malicious behavior (as an example, the filter may attempt to propagate private information instead of the recommendations).
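The firewall-like, platform-wide rule check described in Section 4.1 can be sketched as follows. This is a hypothetical Python illustration only; the actual implementation enforces such policies via the Java Security Manager, and all class and platform names here are invented for the example.

```python
class PlatformController:
    """Firewall-like communication rules, enforced for an entire
    platform: a controlled agent must not be able to use a
    platform-local peer as a relay, so rules apply platform-wide.
    Hypothetical sketch of the rule check only."""

    def __init__(self):
        # allowed external peers: (platform, agent) pairs;
        # agent "*" permits the whole external platform
        self.allowed_peers = set()

    def allow(self, platform: str, agent: str = "*") -> None:
        self.allowed_peers.add((platform, agent))

    def may_communicate(self, platform: str, agent: str) -> bool:
        return ((platform, agent) in self.allowed_peers
                or (platform, "*") in self.allowed_peers)

ctrl = PlatformController()
ctrl.allow("provider-platform")        # whole external platform allowed
ctrl.allow("user-platform", "relay")   # one specific external agent
assert ctrl.may_communicate("provider-platform", "any-agent")
assert ctrl.may_communicate("user-platform", "relay")
assert not ctrl.may_communicate("user-platform", "other-agent")
```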
4.2.1 Retrieving the Provider Profile
As outlined above, the relay agent relays data between the TFE agent and the provider agent. These agents are not allowed to communicate directly, because the TFE agent cannot be assumed to act in an honest manner. Unlike the user profile, which is usually rather small, the provider profile is often too voluminous to be propagated as a whole efficiently. A typical example is a user profile containing ratings of about 100 movies, while the provider profile contains some 10,000 movies. Retrieving only the relevant part of the provider profile, however, is problematic because it has to be done without leaking sensitive information about the user profile. Therefore, the relay agent has to analyze all queries on the provider profile, and reject potentially critical queries, such as queries containing a set of user profile items. Because the propagation of single unlinkable user profile items is assumed to be uncritical, we extend the information filtering protocol as follows: the relevant parts of the provider profile are retrieved based on single anonymous interactions between the relay and the provider. If the MAS architecture used for the implementation does not provide an infrastructure for anonymous agent communication, this feature has to be provided explicitly: the most straightforward way is to use additional relay agents deployed via the main relay agent and used once for a single anonymous interaction. Obviously, unlinkability is only achieved if multiple instances of the protocol are executed simultaneously between the provider and different users.

Table 1: The basic information filtering protocol with participants U = user agent, P = provider agent, F = TFE agent, R = relay agent, based on the abstract protocol shown in Figure 1. UP denotes the user profile with UP = {up_1, ..., up_n}, PP denotes the provider profile, and REC denotes the set of recommendations with REC = {rec_1, ..., rec_m}.

Phase.Step   Sender → Receiver   Message or Action
1.1          R → F               establish control
1.2          U → R               UP
1.3          R → F               UP
2.1          P → R, F            establish control
2.2          P → R               PP
2.3          R → F               PP
3.1          F → R               REC
3.2          R → P               REC
3.3          P → U               REC
3.4          R → F               terminate F
3.5          P → R               terminate R

Because agents on controlled platforms are unable to communicate anonymously with the respective controlling agent, control has to be established after the anonymous interactions have been completed. To prevent the uncontrolled relay agents from propagating provider profile data, the respective data is encrypted and the key is provided only after control has been established. Therefore, the second phase of the protocol described in Table 1 is replaced as described in Table 2. Additionally, the relay agent may allow other interactions as long as no user profile items are used within the queries. In this case, the relay agent has to ensure that the provider does not obtain any information exceeding the information deducible via the recommendations themselves. The cluster-based filtering technique described in Section 4.3 is an example for a filtering technique operating in this manner.

4.2.2 Recommendation Propagation
The propagation of the recommendations is even more problematic, mainly because more participants are involved: recommendations have to be propagated from the TFE agent via the relay and provider agent to the user agent. No participant should be able to alter the recommendations or use them for the propagation of private information. Therefore, every participant in this chain has to obtain and verify the recommendations in unencrypted form prior to the next agent in the chain, i.e. the relay agent has to verify the recommendations before the provider obtains them, and so on.
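The encrypt-now, reveal-the-key-later mechanism of the updated second phase can be sketched as follows. This is a hedged illustration: the Provider class, the query format, and the XOR-keystream stand-in for AES are all invented for this example; only the order of the steps follows the protocol.

```python
import hashlib

def xor_ks(key: bytes, m: bytes) -> bytes:
    # toy XOR-keystream stand-in for the paper's AES (NOT secure)
    s, c = b"", 0
    while len(s) < len(m):
        s += hashlib.sha1(key + c.to_bytes(8, "big")).digest()
        c += 1
    return bytes(a ^ b for a, b in zip(m, s))

class Provider:
    def __init__(self, profile, key):
        self.profile, self.key = profile, key
        self.controls_relay = False

    def query(self, q: bytes) -> bytes:
        # Step 2.3: answer an anonymous query, but encrypted under K_P,
        # so the not-yet-controlled relay cannot leak the profile data
        hits = [item for item in self.profile if q in item]
        return xor_ks(self.key, repr(hits).encode())

    def establish_control(self) -> None:
        self.controls_relay = True          # Step 2.4

    def reveal_key(self) -> bytes:
        assert self.controls_relay          # Step 2.5: key only after control
        return self.key

prov = Provider([b"movie:alpha", b"movie:beta"],
                hashlib.sha1(b"K_P").digest())
cipher = prov.query(b"alpha")        # relay holds this, cannot read it yet
prov.establish_control()
K_P = prov.reveal_key()
assert b"alpha" in xor_ks(K_P, cipher)   # Step 2.6: relay decrypts for F
```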
Therefore, the final phase of the protocol described in Table 1 is replaced as described in Table 3. It basically consists of two parts (Step 3.1 to 3.4, and Step 3.5 to Step 3.8), each of which provides a solution for a problem related to the prisoners' problem [22], in which two participants (the prisoners) intend to exchange a message via a third, untrusted participant (the warden) who may read the message but must not be able to alter it in an undetectable manner. There are various solutions for protocols addressing the prisoners' problem. The more obvious of these, however, such as protocols based on the use of digital signatures, introduce additional threats, e.g. via the possibility of additional subliminal channels [22]. In order to minimize the risk of possible threats, we have decided to use a protocol that only requires a symmetric encryption scheme.

Table 2: The updated second stage of the information filtering protocol with definitions as above. PP_q is the part of the provider profile PP returned as the result of the query q.

Phase.Step   Sender → Receiver   Message or Action
repeat 2.1 to 2.3 ∀ up ∈ UP:
2.1          F → R               q(up) (a query based on up)
2.2          R → P (anon.)       q(up) (R remains anonymous)
2.3          P → R (anon.)       {PP_q(up)}K_P
2.4          P → R, F            establish control
2.5          P → R               K_P
2.6          R → F               PP_q(UP)

Table 3: The updated final stage of the information filtering protocol with definitions as above.

Phase.Step   Sender → Receiver   Message or Action
3.1          F → R               REC, {H(REC)}K_PF
3.2          R → P               h(K_R), {{H(REC)}K_PF}K_R
3.3          P → R               K_PF
3.4          R → P               K_R
repeat 3.5 ∀ rec ∈ REC:
3.5          R → P               {rec}K_UR,rec
repeat 3.6 ∀ rec ∈ REC:
3.6          P → U               h(K_P,rec), {{rec}K_UR,rec}K_P,rec
repeat 3.7 to 3.8 ∀ rec ∈ REC:
3.7          U → P               K_UR,rec
3.8          P → U               K_P,rec
3.9          U → F               terminate F
3.10         P → U               terminate U

The first part of the final phase is carried out as follows: in order to prevent the relay from altering recommendations, they are propagated by the filter together with an encrypted hash in Step 3.1. Thus, the relay is able to verify the recommendations before they are propagated further. The relay, however, may suspect the data propagated as the encrypted hash to contain private information instead of the actual hash value. Therefore, the encrypted hash is encrypted again and propagated together with a hash on the respective key in Step 3.2. In Step 3.3, the key K_PF is revealed to the relay, allowing the relay to validate the encrypted hash. In Step 3.4, the key K_R is revealed to the provider, allowing the provider to decrypt the data received in Step 3.2 and thus to obtain H(REC). Propagating the hash of the key K_R prevents the relay from altering the recommendations to REC′ after Step 3.3, which would be undetectable otherwise because the relay could choose a key K′_R so that {{H(REC′)}K_PF}K′_R = {{H(REC)}K_PF}K_R. The encryption scheme used for encrypting the hash has to be secure against known-plaintext attacks, because otherwise the relay may be able to obtain K_PF after Step 3.1 and subsequently alter the recommendations in an undetectable way. Additionally, the encryption scheme must not be commutative, for similar reasons. The remaining protocol steps are interactions between relay, provider and user agent. The interactions of Step 3.5 to Step 3.8 ensure, via the same mechanisms used in Step 3.1 to 3.4, that the provider is able to analyze the recommendations before the user obtains them, but at the same time prevent the provider from altering the recommendations.
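The message flow of Steps 3.1 to 3.4 can be traced in a short sketch. Caveat: the XOR keystream used here as a stand-in for AES is neither known-plaintext resistant nor non-commutative, so it deliberately violates the requirements just stated; the sketch only illustrates who sends what and how the provider detects tampering. All keys and values are invented, and a single hash h(REC) stands in for the hash set H(REC).

```python
import hashlib

def sha1(x: bytes) -> bytes:
    return hashlib.sha1(x).digest()

def enc(key: bytes, m: bytes) -> bytes:
    # toy XOR-keystream stand-in for AES; enc is its own inverse
    s, c = b"", 0
    while len(s) < len(m):
        s += hashlib.sha1(key + c.to_bytes(8, "big")).digest()
        c += 1
    return bytes(a ^ b for a, b in zip(m, s))

REC = b"rec1|rec2"
K_PF, K_R = sha1(b"shared P-F key"), sha1(b"relay key")

# Step 3.1: F -> R: REC together with the encrypted hash {h(REC)}K_PF
msg_31 = (REC, enc(K_PF, sha1(REC)))
# Step 3.2: R -> P: h(K_R) plus the doubly encrypted hash
msg_32 = (sha1(K_R), enc(K_R, msg_31[1]))
# Step 3.3: P -> R: K_PF, so R can validate the encrypted hash
assert enc(K_PF, sha1(REC)) == msg_31[1]
# Step 3.4: R -> P: K_R; P checks the key commitment from Step 3.2 ...
assert sha1(K_R) == msg_32[0]
# ... then peels off both layers and recovers h(REC)
assert enc(K_R, msg_32[1]) == msg_31[1]      # undo outer layer
assert enc(K_PF, msg_31[1]) == sha1(REC)     # undo inner layer

# P can now detect any alteration of REC by the relay:
assert enc(K_PF, msg_31[1]) != sha1(b"rec1|FORGED")
```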
Additionally, the recommendations are not processed at once, but rather one at a time, to prevent the provider from withholding all recommendations. Upon completion of the protocol, both user and provider have obtained a set of recommendations. If the user wants these recommendations to be unlinkable to himself, the user agent has to carry out the entire protocol anonymously. Again, the most straightforward way to achieve this is to use additional relay agents deployed via the user agent which are used once for a single information filtering process.

4.3 Exemplary Filtering Techniques
The filtering technique applied by the TFE agent cannot be chosen freely: all collaboration-based approaches, such as collaborative filtering techniques based on the profiles of a set of users, are not applicable because the provider profile does not contain user profile data (unless this data has been collected externally). Instead, these approaches are realized via the Matchmaker Module, which is outside the scope of this paper. Learning-based approaches are not applicable because the TFE agent cannot propagate any acquired data to the filter, which effectively means that the filter is incapable of learning. Filtering techniques that are actually applicable are feature-based approaches, such as content-based filtering (in which profile items are compared via their attributes) and knowledge-based filtering (in which domain-specific knowledge is applied in order to match user and provider profile items). An overview of different classes and hybrid combinations of filtering techniques is given in [5].
We have implemented two generic content-based filtering approaches that are applicable within our approach. A direct content-based filtering technique based on the class of item-based top-N recommendation algorithms [9] is used in cases where the user profile contains items that are also contained in the provider profile. In a preprocessing stage, i.e. prior to the actual information filtering processes, a model is generated containing the k most similar items for each provider profile item. While computationally rather complex, this approach is feasible because it has to be done only once, and it is carried out in a privacy-preserving way via interactions between the provider agent and a TFE agent. The resulting model is stored by the provider agent and can be seen as an additional part of the provider profile. In the actual information filtering process, the k most similar items are retrieved for each single user profile item via queries on the model (as described in Section 4.2.1, this is possible in a privacy-preserving way via anonymous communication). Recommendations are generated by selecting the n most frequent items from the result sets that are not already contained within the user profile.
As an alternative approach applicable when the user profile contains information in addition to provider profile items, we provide a cluster-based approach in which provider profile items are clustered in a preprocessing stage via an agglomerative hierarchical clustering approach. Each cluster is represented by a centroid item, and the cluster elements are either sub-clusters or, on the lowest level, the items themselves. In the information filtering stage, the relevant items are retrieved by descending through the cluster hierarchy in the following manner: the cluster items of the highest level are retrieved independent of the user profile. By comparing these items with the user profile data, the most relevant sub-clusters are determined and retrieved in a subsequent iteration. This process is repeated until the lowest level is reached, which contains the items themselves as recommendations. Throughout the process, user profile items are never propagated to the provider as such. The information deducible about the user profile does not exceed the information deducible via the recommendations themselves (because essentially only a chain of cluster centroids leading to the recommendations is retrieved), and therefore it is not regarded as privacy-critical.
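The item-based top-N procedure just described (query the model once per user profile item, then keep the n most frequent hits not already in the profile) can be sketched with a toy, hand-built similarity model. The model contents and item names are invented; in the actual system the model is precomputed in a privacy-preserving way and queried anonymously.

```python
from collections import Counter

# Hypothetical precomputed model: the k most similar items per
# provider profile item (here k = 2, built by hand for illustration)
MODEL = {
    "m1": ["m2", "m3"],
    "m2": ["m1", "m4"],
    "m5": ["m4", "m6"],
}

def recommend(user_profile: set, model: dict, n: int) -> list:
    # one model query per user profile item (cf. Section 4.2.1),
    # then the n most frequent hits not already in the profile
    counts = Counter()
    for item in user_profile:
        counts.update(model.get(item, []))
    ranked = [i for i, _ in counts.most_common() if i not in user_profile]
    return ranked[:n]

# m4 appears in two result sets, so it ranks first
assert recommend({"m1", "m2", "m5"}, MODEL, 2)[0] == "m4"
```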
4.4 Implementation
We have implemented the approach for privacy-preserving IF based on JIAC IV [12], a FIPA-compliant MAS architecture. JIAC IV integrates fundamental aspects of autonomous agents regarding pro-activeness, intelligence, communication capabilities and mobility by providing a scalable component-based architecture. Additionally, JIAC IV offers components realizing management and security functionality, and provides a methodology for Agent-Oriented Software Engineering. JIAC IV stands out among MAS architectures as the only security-certified architecture, since it has been certified by the German Federal Office for Information Security according to the EAL3 of the Common Criteria for Information Technology Security standard [14]. JIAC IV offers several security features in the areas of access control for agent services, secure communication between agents, and low-level security based on Java security policies [21], and thus provides all security-related functionality required for our approach. We have extended the JIAC IV architecture by adding the mechanisms for communication control described in Section 4.1. Regarding the issue of malicious hosts, we currently assume all providers of agent platforms to be trusted. We are additionally developing a solution that is actually based on a trusted computing infrastructure.

5. EVALUATION
For the evaluation of our approach, we have examined whether and to which extent the requirements (mainly regarding privacy, performance, and quality) are actually met. Privacy aspects are directly addressed by the modules and protocols described above and therefore not evaluated further here. Performance is a critical issue, mainly because of the overhead caused by creating additional agents and agent platforms for controlling communication, and by the
additional interactions within the Recommender Module. Overall, a single information filtering process takes about ten times longer than a non-privacy-preserving information filtering process leading to the same results, which is a considerable overhead but still acceptable under certain conditions, as described in the following section.

5.1 The Smart Event Assistant
As a proof of concept, and in order to evaluate performance and quality under real-life conditions, we have applied our approach within the Smart Event Assistant, a MAS-based Recommender System which integrates various personalized services for entertainment planning in different German cities, such as a restaurant finder and a movie finder [25]. Additional services, such as a calendar, a routing service and news services, complement the information services. An intelligent day planner integrates all functionality by providing personalized recommendations for the various information services, based on the user's preferences and taking into account the location of the user as well as the potential venues. All services are accessible via mobile devices as well.³ Figure 2 shows a screenshot of the intelligent day planner's result dialog.

Figure 2: The Smart Event Assistant, a privacy-preserving Recommender System supporting users in planning entertainment-related activities.

The Smart Event Assistant is entirely realized as a MAS system providing, among other functionality, various filter agents and different service provider agents, which together with the user agents utilize the functionality provided by our approach. Recommendations are generated in two ways: a push service delivers new recommendations to the user in regular intervals (e.g.
once per day) via email or SMS. Because the user is not online during these interactions, they are less critical with regard to performance, and the protracted duration of the information filtering process is acceptable in this case. Recommendations generated for the intelligent day planner, however, have to be delivered with very little latency, because the process is triggered by the user, who expects to receive results promptly. In this scenario, the overall performance is substantially improved by setting up the relay agent and the TFE agent offline, i.e. prior to the user's request, and by an efficient retrieval of the relevant part of the provider profile: because the user is only interested in items, such as movies, available within a certain time period and related to specific locations, such as screenings at cinemas in a specific city, the relevant part of the provider profile is usually small enough to be propagated entirely. Because these additional parameters are not seen as privacy-critical (as they are not based on the user profile, but rather constitute a short-term information need), the relevant part of the provider profile may be propagated as a whole, with no need for complex interactions. Taken together, these improvements result in a filtering process that takes about three times as long as the respective non-privacy-preserving filtering process, which we regard as an acceptable trade-off for the increased level of privacy.

³ The Smart Event Assistant may be accessed online via http://www.smarteventassistant.de.

Table 4: Complexity of typical privacy-preserving (PP) vs. non-privacy-preserving (NPP) filtering processes in the realized application. In the non-privacy-preserving version, an agent retrieves the profiles directly and propagates the result to a provider agent.

scenario                                push              day planning
version                                 NPP      PP       NPP      PP
profile size (retrieved/total amount of items)
  user                                  25/25             25/25
  provider                              125/10,000        500/10,000
elapsed time in filtering process (in seconds)
  setup                                 n/a      2.2      n/a      offline
  database access                       0.2      0.5      0.4      0.4
  profile propagation                   n/a      0.8      n/a      0.3
  filtering algorithm                   0.2      0.2      0.2      0.2
  result propagation                    0.1      1.1      0.1      1.1
  complete time                         0.5      4.8      0.7      2.0
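As a quick sanity check on Table 4, the per-component timings sum to the reported complete times, and the resulting overhead factors match the roughly tenfold (general case) and threefold (optimized day-planning case) slowdowns quoted in the text:

```python
# Timing components from Table 4, in seconds; the PP day-planning
# setup is done offline and therefore excluded from the elapsed time
push_pp  = [2.2, 0.5, 0.8, 0.2, 1.1]  # setup, db, propagation, filter, result
day_pp   = [0.4, 0.3, 0.2, 1.1]       # no online setup component
push_npp = [0.2, 0.2, 0.1]
day_npp  = [0.4, 0.2, 0.1]

assert round(sum(push_pp), 1) == 4.8   # complete time, PP push
assert round(sum(day_pp), 1) == 2.0    # complete time, PP day planning
assert round(sum(push_npp), 1) == 0.5  # complete time, NPP push
assert round(sum(day_npp), 1) == 0.7   # complete time, NPP day planning

# overhead: ~10x in the push scenario, ~3x with the optimizations
assert round(sum(push_pp) / sum(push_npp), 1) == 9.6
assert round(sum(day_pp) / sum(day_npp), 1) == 2.9
```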
Table 4 shows the results of the performance evaluation in more detail. In these scenarios, a direct content-based filtering technique similar to the one described in Section 4.3 is applied. Because equivalent filtering techniques have been applied successfully in regular Recommender Systems [9], there are no negative consequences with regard to the quality of the recommendations.

5.2 Alternative Approaches
As described in Section 3.2, our solution is based on trusted computing. There are more straightforward ways to realize privacy-preserving IF, e.g. by utilizing a centralized architecture in which the privacy-preserving provider-side functionality is realized as trusted software based on trusted computing. However, we consider these approaches to be unsuitable because they are far less generic: whenever some part of the respective software is patched, upgraded or replaced, the entire system has to be analyzed again in order to determine its trustworthiness, a process that is problematic in itself due to its complexity. In our solution, only a comparatively small part of the overall system is based on trusted computing. Because agent platforms can be utilized for a large variety of tasks, and because we see trusted computing as the most promising approach to realize secure and trusted agent environments, it seems reasonable to assume that these respective mechanisms will be generally available in the future, independent of specific solutions such as the one described here.

6. CONCLUSION & FURTHER WORK
We have developed an agent-based approach for privacy-preserving Recommender Systems. By utilizing fundamental features of agents such
as autonomy, adaptability and the ability to communicate, by extending the capabilities of agent platform managers regarding control of agent communication, by providing a privacy-preserving protocol for information filtering processes, and by utilizing suitable filtering techniques, we have been able to realize an approach which actually preserves privacy in Information Filtering architectures in a multilateral way. As a proof of concept, we have used the approach within an application supporting users in planning entertainment-related activities.
We envision various areas of future work. To achieve complete user privacy, the protocol should be extended in order to keep the recommendations themselves private as well. Generally, the feedback we have obtained from users of the Smart Event Assistant indicates that most users are indifferent to privacy in the context of entertainment-related personal information. Therefore, we intend to utilize the approach to realize a Recommender System in a more privacy-sensitive domain, such as health or finance, which would enable us to better evaluate user acceptance.

7. ACKNOWLEDGMENTS
We would like to thank our colleagues Andreas Rieger and Nicolas Braun, who co-developed the Smart Event Assistant. The Smart Event Assistant is based on a project funded by the German Federal Ministry of Education and Research under Grant No. 01AK037, and a project funded by the German Federal Ministry of Economics and Labour under Grant No. 01MD506.

8. REFERENCES
[1] R. Agrawal, J. Kiernan, R. Srikant, and Y. Xu. Hippocratic databases. In 28th Int'l Conf. on Very Large Databases (VLDB), Hong Kong, 2002.
[2] R. Agrawal and R. Srikant. Privacy-preserving data mining. In Proc. of the ACM SIGMOD Conference on Management of Data, pages 439-450. ACM Press, May 2000.
[3] E. Aïmeur, G. Brassard, J. M. Fernandez, and F. S.
Mani Onana. Privacy-preserving demographic filtering. In SAC '06: Proceedings of the 2006 ACM Symposium on Applied Computing, pages 872-878, New York, NY, USA, 2006. ACM Press.
[4] M. Bawa, R. Bayardo, Jr., and R. Agrawal. Privacy-preserving indexing of documents on the network. In Proc. of the 2003 VLDB, 2003.
[5] R. Burke. Hybrid recommender systems: Survey and experiments. User Modeling and User-Adapted Interaction, 12(4):331-370, 2002.
[6] J. Canny. Collaborative filtering with privacy. In IEEE Symposium on Security and Privacy, pages 45-57, 2002.
[7] B. Chor, O. Goldreich, E. Kushilevitz, and M. Sudan. Private information retrieval. In IEEE Symposium on Foundations of Computer Science, pages 41-50, 1995.
[8] R. Cissée. An architecture for agent-based privacy-preserving information filtering. In Proceedings of the 6th International Workshop on Trust, Privacy, Deception and Fraud in Agent Systems, 2003.
[9] M. Deshpande and G. Karypis. Item-based top-N recommendation algorithms. ACM Trans. Inf. Syst., 22(1):143-177, 2004.
[10] L. Foner. Political Artifacts and Personal Privacy: The Yenta Multi-Agent Distributed Matchmaking System. PhD thesis, MIT, 1999.
[11] Foundation for Intelligent Physical Agents. FIPA Abstract Architecture Specification, Version L, 2002.
[12] S. Fricke, K. Bsufka, J. Keiser, T. Schmidt, R. Sesseler, and S. Albayrak. Agent-based telematic services and telecom applications. Communications of the ACM, 44(4), April 2001.
[13] T. Garfinkel, M. Rosenblum, and D. Boneh. Flexible OS support and applications for trusted computing. In Proceedings of HotOS-IX, May 2003.
[14] T. Geissler and O. Kroll-Peters. Applying security standards to multi agent systems. In AAMAS Workshop: Safety & Security in Multiagent Systems, 2004.
[15] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game. In Proc. of STOC '87, pages 218-229, New York, NY, USA, 1987. ACM Press.
[16] S. Jha, L. Kruger, and P. McDaniel. Privacy preserving clustering. In ESORICS 2005, volume 3679 of LNCS. Springer, 2005.
[17] G. Karjoth, M. Schunter, and M. Waidner. The platform for enterprise privacy practices: Privacy-enabled management of customer data. In PET 2002, volume 2482 of LNCS. Springer, 2003.
[18] H. Link, J. Saia, T. Lane, and R. A. LaViolette. The impact of social networks on multi-agent recommender systems. In Proc. of the Workshop on Cooperative Multi-Agent Learning (ECML/PKDD '05), 2005.
[19] B. N. Miller, J. A. Konstan, and J. Riedl. PocketLens: Toward a personal recommender system. ACM Trans. Inf. Syst., 22(3):437-476, 2004.
[20] H. Polat and W. Du. SVD-based collaborative filtering with privacy. In Proc. of SAC '05, pages 791-795, New York, NY, USA, 2005. ACM Press.
[21] T. Schmidt. Advanced Security Infrastructure for Multi-Agent-Applications in the Telematic Area. PhD thesis, Technische Universität Berlin, 2002.
[22] G. J. Simmons. The prisoners' problem and the subliminal channel. In D. Chaum, editor, Proc. of Crypto '83, pages 51-67. Plenum Press, 1984.
[23] M. Teltzrow and A. Kobsa. Impacts of user privacy preferences on personalized systems: A comparative study. In Designing Personalized User Experiences in eCommerce, pages 315-332. 2004.
[24] D. Weyns, H. Parunak, F. Michel, T. Holvoet, and J. Ferber. Environments for multiagent systems: State-of-the-art and research challenges. In Environments for Multiagent Systems, volume 3477 of LNCS. Springer, 2005.
[25] J. Wohltorf, R. Cissée, and A. Rieger. Berlintainment: An agent-based context-aware entertainment planning system. IEEE Communications Magazine, 43(6), 2005.
[26] M. Wooldridge and N. R. Jennings. Intelligent agents: Theory and practice. Knowledge Engineering Review, 10(2):115-152, 1995.
[27] A.
Yao. Protocols for secure computation. In Proc. of IEEE FOCS '82, pages 160-164, 1982.
context of privacy-preserving Information Filtering, such as enterprise privacy policies [17] or hippocratic databases [1], both of which annotate user data with additional meta-information specifying how the data is to be handled on the provider side.\nThese approaches ultimately assume that the provider actually intends to protect the privacy of the user data, and offer support for this task, but they are not intended to prevent the provider from acting in a malicious manner.\nTrusted computing, as specified by the Trusted Computing Group, aims at realizing trusted systems by increasing the security of open systems to a level comparable with the level of security that is achievable in closed systems.\nIt is based on a combination of tamper-proof hardware and various software components.\nSome example applications, including peer-to-peer networks, distributed firewalls, and distributed computing in general, are listed in [13].\nThere are some approaches for privacy-preserving Recommender Systems based on distributed collaborative filtering, in which recommendations are generated via a public model aggregating the distributed user profiles without containing explicit information about user profiles themselves.\nThis is achieved via Secure Multi-Party Computation [6], or via random perturbation of the user data [20].\nIn [19], various approaches are integrated within a single architecture.\nIn [10], an agent-based approach is described in which user agents representing similar users are discovered via a transitive traversal of user agents.\nPrivacy is preserved through pseudonymous interaction between the agents and through adding obfuscating data to personal information.\nMore recent related approaches are described in [18].\nIn [3], an agent-based architecture for privacy-preserving demographic filtering is described which may be generalized in order to support other kinds of filtering techniques.\nWhile in some aspects similar to our approach, this architecture 
addresses at least two aspects inadequately, namely the protection of the filter against manipulation attempts, and the prevention of collusions between the filter and the provider.\n3.\nPRIVACY-PRESERVING INFORMATION FILTERING\n3.1 Requirements\n3.2 Outline of the Solution\n320 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n4.\nMAIN MODULES AND IMPLEMENTATION\n4.1 Controller Module\n4.2 Recommender Module\n4.2.1 Retrieving the Provider Profile\n322 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n4.2.2 Recommendation Propagation\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 323\n4.3 Exemplary Filtering Techniques\n4.4 Implementation\n5.\nEVALUATION\n5.1 The Smart Event Assistant\n324 The Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07)\n5.2 Alternative Approaches\nThe Sixth Intl. .\nJoint Conf.\non Autonomous Agents and Multi-Agent Systems (AAMAS 07) 325\n6.\nCONCLUSION & FURTHER WORK\nWe have developed an agent-based approach for privacypreserving Recommender Systems.\nBy utilizing fundamental features of agents such as autonomy, adaptability and the ability to communicate, by extending the capabilities of agent platform managers regarding control of agent communication, by providing a privacy-preserving protocol for information filtering processes, and by utilizing suitable filtering techniques we have been able to realize an approach which actually preserves privacy in Information Filtering architectures in a multilateral way.\nAs a proof of concept, we have used the approach within an application supporting users in planning entertainment-related activities.\nWe envision various areas of future work: To achieve complete user privacy, the protocol should be extended in order to keep the recommendations themselves private as well.\nGenerally, the feedback we have obtained from users of the Smart Event 
Assistant indicates that most users are indifferent to privacy in the context of entertainment-related personal information. Therefore, we intend to utilize the approach to realize a Recommender System in a more privacy-sensitive domain, such as health or finance, which would enable us to better evaluate user acceptance.

An Agent-Based Approach for Privacy-Preserving Recommender Systems

ABSTRACT

Recommender Systems are used in various domains to generate personalized information based on personal user data. The ability to preserve the privacy of all participants is an essential requirement of the underlying Information Filtering architectures, because the deployed Recommender Systems have to be accepted by privacy-aware users as well as information and service providers. Existing approaches neglect to address privacy in this multilateral way. We have developed an approach for privacy-preserving Recommender Systems based on Multi-Agent System technology which enables applications to generate recommendations via various filtering techniques while preserving the privacy of all participants. We describe the main modules of our solution as well as an application we have implemented based on this approach.

1. INTRODUCTION

Information Filtering (IF) systems aim at countering information overload by extracting information that is relevant for a given user out of a large body of information available via an information provider. In contrast to Information Retrieval (IR) systems, where relevant information is extracted based on search queries, IF architectures generate personalized information based on user profiles containing, for each given user, personal data, preferences, and rated items. The provided body of information is usually structured and collected in provider profiles. Filtering techniques operate on these profiles in
order to generate recommendations of items that are probably relevant for a given user, or in order to determine users with similar interests, or both. Depending on the respective goal, the resulting systems constitute Recommender Systems [5], Matchmaker Systems [10], or a combination thereof.

The aspect of privacy is an essential issue in all IF systems: generating personalized information obviously requires the use of personal data. According to surveys indicating major privacy concerns of users in the context of Recommender Systems and e-commerce in general [23], users can be expected to be less reluctant to provide personal information if they trust the system to be privacy-preserving with regard to personal data. Similar considerations also apply to the information provider, who may want to control the dissemination of the provided information, and to the provider of the filtering techniques, who may not want the details of the utilized filtering algorithms to become common knowledge. A privacy-preserving IF system should therefore balance these requirements and protect the privacy of all parties involved in a multilateral way, while addressing general requirements regarding performance, security and quality of the recommendations as well.

As described in the following section, there are several approaches with similar goals, but none of these provides a generic approach in which the privacy of all parties is preserved. We have developed an agent-based approach for privacy-preserving IF which has been utilized for realizing a combined Recommender/Matchmaker System as part of an application supporting users in planning entertainment-related activities. In this paper, we focus on the Recommender System functionality. Our approach is based on Multi-Agent System (MAS) technology because fundamental features of agents such as autonomy, adaptability and the ability to communicate are essential requirements of our approach. In other words, the realized
approach does not merely constitute a solution for privacy-preserving IF within a MAS context, but rather utilizes a MAS architecture in order to realize a solution for privacy-preserving IF which could not be realized easily otherwise.

The paper is structured as follows: Section 2 describes related work. Section 3 describes the general ideas of our approach. In Section 4, we describe essential details of the modules of our approach and their implementation. In Section 5, we evaluate the approach, mainly via the realized application. Section 6 concludes the paper with an outlook and outlines further work.

978-81-904262-7-5 (RPS) © 2007 IFAAMAS

2. RELATED WORK

There is a large amount of work in related areas, such as Private Information Retrieval [7], Privacy-Preserving Data Mining [2], and other privacy-preserving protocols [4, 16], most of which is based on Secure Multi-Party Computation [27]. We have ruled out Secure Multi-Party Computation approaches mainly because of their complexity, and because the algorithm that is computed securely is not considered to be private in these approaches. Various enforcement mechanisms have been suggested that are applicable in the context of privacy-preserving Information Filtering, such as enterprise privacy policies [17] or Hippocratic databases [1], both of which annotate user data with additional meta-information specifying how the data is to be handled on the provider side. These approaches ultimately assume that the provider actually intends to protect the privacy of the user data, and offer support for this task, but they are not intended to prevent the provider from acting in a malicious manner.

Trusted computing, as specified by the Trusted Computing Group, aims at realizing trusted systems by increasing the security of open systems to a level comparable with the level of security that is achievable in closed systems. It is based on a combination of tamper-proof hardware and various software
components.\nSome example applications, including peer-to-peer networks, distributed firewalls, and distributed computing in general, are listed in [13].\nThere are some approaches for privacy-preserving Recommender Systems based on distributed collaborative filtering, in which recommendations are generated via a public model aggregating the distributed user profiles without containing explicit information about user profiles themselves.\nThis is achieved via Secure Multi-Party Computation [6], or via random perturbation of the user data [20].\nIn [19], various approaches are integrated within a single architecture.\nIn [10], an agent-based approach is described in which user agents representing similar users are discovered via a transitive traversal of user agents.\nPrivacy is preserved through pseudonymous interaction between the agents and through adding obfuscating data to personal information.\nMore recent related approaches are described in [18].\nIn [3], an agent-based architecture for privacy-preserving demographic filtering is described which may be generalized in order to support other kinds of filtering techniques.\nWhile in some aspects similar to our approach, this architecture addresses at least two aspects inadequately, namely the protection of the filter against manipulation attempts, and the prevention of collusions between the filter and the provider.\n3.\nPRIVACY-PRESERVING INFORMATION FILTERING\nWe identify three main abstract entities participating in an information filtering process within a distributed system: A user entity, a provider entity, and a filter entity.\nWhereas in some applications the provider and filter entities explicitly trust each other, because they are deployed by the same party, our solution is applicable more generically because it does not require any trust between the main abstract entities.\nIn this paper, we focus on aspects related to the information filtering process itself, and omit all aspects related to 
information collection and processing, i.e. the stages in which profiles are generated and maintained, mainly because these stages are less critical with regard to privacy, as they involve fewer different entities.

3.1 Requirements

Our solution aims at meeting the following requirements with regard to privacy:

• User Privacy: No linkable information about user profiles should be acquired permanently by any other entity or external party, including other user entities. Single user profile items, however, may be acquired permanently if they are unlinkable, i.e. if they cannot be attributed to a specific user or linked to other user profile items. Temporary acquisition of private information is permitted as well. Sets of recommendations may be acquired permanently by the provider, but they should not be linkable to a specific user. These concessions simplify the resulting protocol and allow the provider to obtain recommendations and single unlinkable user profile items, and thus to determine frequently requested information and optimize the offered information accordingly.

• Provider Privacy: No information about provider profiles, with the exception of the recommendations, should be acquired permanently by other entities or external parties. Again, temporary acquisition of private information is permitted. Additionally, the propagation of provider information is entirely under the control of the provider. Thus, the provider is enabled to prevent misuse such as the automatic large-scale extraction of information.

• Filter Privacy: Details of the algorithms applied by the filtering techniques should not be acquired permanently by any other entity or external party. General information about the algorithm may be provided by the filter entity in order to help other entities to reach a decision on whether to apply the respective filtering technique.

In addition, general requirements regarding the quality of the recommendations as well as
security aspects, performance and broadness of the resulting system have to be addressed as well. While minor trade-offs may be acceptable, the resulting system should reach a level similar to regular Recommender Systems with regard to these requirements.

3.2 Outline of the Solution

The basic idea for realizing a protocol fulfilling these privacy-related requirements in Recommender Systems is implied by allowing the temporary acquisition of private information (see [8] for the original approach): User and provider entity both propagate the respective profile data to the filter entity. The filter entity provides the recommendations, and subsequently deletes all private information, thus fulfilling the requirement regarding permanent acquisition of private information. The entities whose private information is propagated have to be certain that the respective information is actually acquired temporarily only. Trust in this regard may be established in two main ways:

• Trusted Software: The respective entity itself is trusted to remove the respective information as specified.

• Trusted Environment: The respective entity operates in an environment that is trusted to control the communication and life cycle of the entity to an extent that the removal of the respective information may be achieved regardless of the attempted actions of the entity itself. Additionally, the environment itself is trusted not to act in a malicious manner (e.g.
it is trusted not to acquire and propagate the respective information itself).\nIn both cases, trust may be established in various ways.\nReputation-based mechanisms, additional trusted third parties certifying entities or environments, or trusted computing mechanisms may be used.\nOur approach is based on a trusted environment realized via trusted computing mechanisms, because we see this solution as the most generic and realistic approach.\nThis decision is discussed briefly in Section 5.\nWe are now able to specify the abstract information filtering protocol as shown in Figure 1: The filter entity deploys a Temporary Filter Entity (TFE) operating in a trusted environment.\nThe user entity deploys an additional relay entity operating in the same environment.\nThrough mechanisms provided by this environment, the relay entity is able to control the communication of the TFE, and the provider entity is able to control the communication of both relay entity and the TFE.\nThus, it is possible to ensure that the controlled entities are only able to propagate recommendations, but no other private information.\nIn the first stage (steps 1.1 to 1.3 of Figure 1), the relay entity establishes control of the TFE, and thus prevents it from propagating user profile information.\nUser profile data is propagated without participation of the provider entity from the user entity to the TFE via the relay entity.\nIn the second stage (steps 2.1 to 2.3 of Figure 1), the provider entity establishes control of both relay and TFE, and thus prevents them from propagating provider profile information.\nProvider profile data is propagated from the provider entity to the TFE via the relay entity.\nIn the third stage (steps 3.1 to 3.5 of Figure 1), the TFE returns the recommendations via the relay entity, and the controlled entities are terminated.\nTaken together, these steps ensure that all private information is acquired temporarily only by the other main entities.\nThe problems of 
determining acceptable queries on the provider profile and ensuring unlinkability of the recommendations are discussed in the following section.

Our approach requires each entity in the distributed architecture to have the following five main abilities:

• the ability to perform certain well-defined tasks (such as carrying out a filtering process) with a high degree of autonomy, i.e. largely independent of other entities (e.g. because the respective entity is not able to communicate in an unrestricted manner);

• the ability to be deployable dynamically in a well-defined environment;

• the ability to communicate with other entities;

• the ability to achieve protection against external manipulation attempts;

• the ability to control and restrict the communication of other entities.

Figure 1: The abstract privacy-preserving information filtering protocol. All communication across the environments indicated by dashed lines is prevented with the exception of communication with the controlling entity.

MAS architectures are an ideal solution for realizing a distributed system characterized by these features, because they provide agents constituting entities that are actually characterized by autonomy, mobility and the ability to communicate [26], as well as agent platforms as environments providing means to realize the security of agents. In this context, the issue of malicious hosts, i.e.
hosts attacking agents, has to be addressed explicitly. Furthermore, existing MAS architectures generally do not allow agents to control the communication of other agents. It is possible, however, to expand a MAS architecture and to provide designated agents with this ability. For these reasons, our solution is based on a FIPA [11]-compliant MAS architecture. The entities introduced above are mapped directly to agents, and the trusted environment in which they exist is realized in the form of agent platforms. In addition to the MAS architecture itself, which is assumed as given, our solution consists of the following five main modules:

• The Controller Module described in Section 4.1 provides functionality for controlling the communication capabilities of agents.

• The Transparent Persistence Module facilitates the use of different data storage mechanisms, and provides a uniform interface for accessing persistent information, which may be utilized for monitoring critical interactions involving potentially private information, e.g. as part of queries. Its description is outside the scope of this paper.

• The Recommender Module, details of which are described in Section 4.2, provides Recommender System functionality.

• The Matchmaker Module provides Matchmaker System functionality. It additionally utilizes social aspects of MAS technology. Its description is outside the scope of this paper.

• Finally, a separate module described in Section 4.3 provides Exemplary Filtering Techniques in order to show that various restrictions imposed on filtering techniques by our approach may actually be fulfilled.

The trusted environment introduced above encompasses the MAS architecture itself and the Controller Module, which have to be trusted to act in a non-malicious manner in order to rule out the possibility of malicious hosts.

4. MAIN MODULES AND IMPLEMENTATION

In this section, we describe the main modules of our approach, and outline the implementation. While we have chosen a specific architecture for the implementation, the specification of the modules is applicable to any FIPA-compliant MAS architecture. A module basically encompasses ontologies, functionality provided by agents via agent services, and internal functionality.

Throughout this paper, {m}_K_X denotes a message m encrypted via a non-specified symmetric encryption scheme with a secret key K_X used for encryption and decryption which is initially known only to participant X. A key K_XY is a key shared by participants X and Y. A cryptographic hash function is used at various points of the protocol, i.e.
a function returning a hash value h(x) for given data x that is both preimage-resistant and collision-resistant. (In the implementation, we have used the Advanced Encryption Standard (AES) as the symmetric encryption scheme and SHA-1 as the cryptographic hash function.) We denote a set of hash values for a data set X = {x_1, ..., x_n} as H(X) = {h(x_1), ..., h(x_n)}, whereas h(X) denotes a single hash value of a data set.

4.1 Controller Module

As noted above, the ability to control the communication of agents is generally not a feature of existing MAS architectures (a recent survey on agent environments [24] concludes that aspects related to agent environments are often neglected, and does not indicate any existing work in this particular area), but it is at the same time a central requirement of our approach for privacy-preserving Information Filtering. The required functionality cannot be realized based on regular agent services or components, because an agent on a platform is usually not allowed to interfere with the actions of other agents in any way. Therefore, we add additional infrastructure providing the required functionality to the MAS architecture itself, resulting in an agent environment with extended functionality and responsibilities.

Controlling the communication capabilities of an agent is realized by restricting, via rules and in a manner similar to a firewall, but with the consent of the respective agent, its incoming and outgoing communication to specific platforms or agents on external platforms as well as other possible communication channels, such as the file system. Consent is required because otherwise the overall security would be compromised, as attackers could arbitrarily block various communication channels. Our approach does not require controlling the communication between agents on the same platform, and therefore this aspect is not addressed. Consequently, all rules addressing communication capabilities have to be enforced across entire platforms, because otherwise a controlled agent could just use a non-controlled agent on the same platform as a relay for communicating with agents residing on external platforms. Various agent services provide functionality for adding and revoking control of platforms, including functionality required in complex scenarios where controlled agents in turn control further platforms.

The implementation of the actual control mechanism depends on the actual MAS architecture. In our implementation, we have utilized methods provided via the Java Security Manager as part of the Java security model. Thus, the supervisor agent is enabled to define custom security policies, thereby granting or denying other agents access to resources required for communication with other agents as well as communication in general, such as files or sockets for TCP/IP-based communication.

4.2 Recommender Module

The Recommender Module is mainly responsible for carrying out information filtering processes, according to the protocol described in Table 1. The participating entities are realized as agents, and the interactions as agent services. We assume that mechanisms for secure agent communication are available within the respective MAS architecture. Two issues have to be addressed in this module: the relevant parts of the provider profile have to be retrieved without compromising the user's privacy, and the recommendations have to be propagated in a privacy-preserving way. Our solution is based on a threat model in which no main abstract entity may safely assume any other abstract entity to act in an honest manner: each entity has to assume that other entities may attempt to obtain private information, either while following the specified protocol or even by deviating from the protocol. According to [15], we classify the former case as honest-but-curious behavior (as an example, the TFE may propagate recommendations as specified, but may additionally attempt to
propagate private information), and the latter case as malicious behavior (as an example, the filter may attempt to propagate private information instead of the recommendations).

4.2.1 Retrieving the Provider Profile

As outlined above, the relay agent relays data between the TFE agent and the provider agent. These agents are not allowed to communicate directly, because the TFE agent cannot be assumed to act in an honest manner. Unlike the user profile, which is usually rather small, the provider profile is often too voluminous to be propagated as a whole efficiently. A typical example is a user profile containing ratings of about 100 movies, while the provider profile contains some 10,000 movies. Retrieving only the relevant part of the provider profile, however, is problematic because it has to be done without leaking sensitive information about the user profile. Therefore, the relay agent has to analyze all queries on the provider profile, and reject potentially critical queries, such as queries containing a set of user profile items.

Because the propagation of single unlinkable user profile items is assumed to be uncritical, we extend the information filtering protocol as follows: the relevant parts of the provider profile are retrieved based on single anonymous interactions between the relay and the provider. If the MAS architecture used for the implementation does not provide an infrastructure for anonymous agent communication, this feature has to be provided explicitly: the most straightforward way is to use additional relay agents deployed via the main relay agent and used once for a single anonymous interaction.

Table 1: The basic information filtering protocol with participants U = user agent, P = provider agent, F = TFE agent, R = relay agent, based on the abstract protocol shown in Figure 1. UP denotes the user profile with UP = {up_1, ..., up_n}, PP denotes the provider profile, and REC denotes the set of recommendations with REC = {rec_1, ..., rec_m}.

Obviously, unlinkability is only achieved if multiple instances of the protocol are executed simultaneously between the provider and different users. Because agents on controlled platforms are unable to communicate anonymously with the respective controlling agent, control has to be established after the anonymous interactions have been completed. To prevent the uncontrolled relay agents from propagating provider profile data, the respective data is encrypted and the key is provided only after control has been established. Therefore, the second phase of the protocol described in Table 1 is replaced as described in Table 2. Additionally, the relay agent may allow other interactions as long as no user profile items are used within the queries. In this case, the relay agent has to ensure that the provider does not obtain any information exceeding the information deducible via the recommendations themselves. The cluster-based filtering technique described in Section 4.3 is an example for a filtering technique operating in this manner.

4.2.2 Recommendation Propagation

The propagation of the recommendations is even more problematic, mainly because more participants are involved: recommendations have to be propagated from the TFE agent via the relay and provider agent to the user agent. No participant should be able to alter the recommendations or use them for the propagation of private information. Therefore, every participant in this chain has to obtain and verify the
recommendations in unencrypted form prior to the next agent in the chain, i.e. the relay agent has to verify the recommendations before the provider obtains them, and so on. Therefore, the final phase of the protocol described in Table 1 is replaced as described in Table 3. It basically consists of two parts (Steps 3.1 to 3.4, and Steps 3.5 to 3.8), each of which provides a solution for a problem related to the prisoners' problem [22], in which two participants (the prisoners) intend to exchange a message via a third, untrusted participant (the warden) who may read the message but must not be able to alter it in an undetectable manner.

Table 2: The updated second stage of the information filtering protocol with definitions as above. PP_q is the part of the provider profile PP returned as the result of the query q.

Table 3: The updated final stage of the information filtering protocol with definitions as above.

There are various solutions for protocols addressing the prisoners' problem. The more obvious of these, however, such as protocols based on the use of digital signatures, introduce additional threats e.g.
via the possibility of additional subliminal channels [22]. In order to minimize the risk of possible threats, we have decided to use a protocol that only requires a symmetric encryption scheme.

The first part of the final phase is carried out as follows: In order to prevent the relay from altering recommendations, they are propagated by the filter together with an encrypted hash in Step 3.1. Thus, the relay is able to verify the recommendations before they are propagated further. The relay, however, may suspect the data propagated as the encrypted hash to contain private information instead of the actual hash value. Therefore, the encrypted hash is encrypted again and propagated together with a hash of the respective key in Step 3.2. In Step 3.3, the key K_PF is revealed to the relay, allowing the relay to validate the encrypted hash. In Step 3.4, the key K_R is revealed to the provider, allowing the provider to decrypt the data received in Step 3.2 and thus to obtain H(REC). Propagating the hash of the key K_R prevents the relay from altering the recommendations to REC' after Step 3.3, which would otherwise be undetectable because the relay could choose a key K_R' such that {{H(REC)}_K_PF}_K_R = {{H(REC')}_K_PF}_K_R'. The encryption scheme used for encrypting the hash has to be secure against known-plaintext attacks, because otherwise the relay may be able to obtain K_PF after Step 3.1 and subsequently alter the recommendations in an undetectable way. Additionally, the encryption scheme must not be commutative, for similar reasons.

The remaining protocol steps are interactions between relay, provider and user agent. The interactions of Steps 3.5 to 3.8 ensure, via the same mechanisms used in Steps 3.1 to 3.4, that the provider is able to analyze the recommendations before the user obtains them, but at the same time prevent the provider from altering the recommendations. Additionally, the recommendations are not processed at once, but rather one at a time, to prevent the provider from withholding all recommendations. Upon completion of the protocol, both user and provider have obtained a set of recommendations. If the user wants these recommendations to be unlinkable to himself, the user agent has to carry out the entire protocol anonymously. Again, the most straightforward way to achieve this is to use additional relay agents deployed via the user agent which are used once for a single information filtering process.

4.3 Exemplary Filtering Techniques

The filtering technique applied by the TFE agent cannot be chosen freely: all collaboration-based approaches, such as collaborative filtering techniques based on the profiles of a set of users, are not applicable because the provider profile does not contain user profile data (unless this data has been collected externally). Instead, these approaches are realized via the Matchmaker Module, which is outside the scope of this paper. Learning-based approaches are not applicable because the TFE agent cannot propagate any acquired data to the filter, which effectively means that the filter is incapable of learning. Filtering techniques that are actually applicable are feature-based approaches, such as content-based filtering (in which profile items are compared via their attributes) and knowledge-based filtering (in which domain-specific knowledge is applied in order to match
user and provider profile items).\nAn overview of different classes and hybrid combinations of filtering techniques is given in [5].\nWe have implemented two generic content-based filtering approaches that are applicable within our approach: A direct content-based filtering technique based on the class of item-based top-N recommendation algorithms [9] is used in cases where the user profile contains items that are also contained in the provider profile.\nIn a preprocessing stage, i.e. prior to the actual information filtering processes, a model is generated containing the k most similar items for each provider profile item.\nWhile computationally rather complex, this approach is feasible because it has to be done only once, and it is carried out in a privacy-preserving way via interactions between the provider agent and a TFE agent.\nThe resulting model is stored by the provider agent and can be seen as an additional part of the provider profile.\nIn the actual information filtering process, the k most similar items are retrieved for each single user profile item via queries on the model (as described in Section 4.2.1, this is possible in a privacy-preserving way via anonymous communication).\nRecommendations are generated by selecting the n most frequent items from the result sets that are not already contained within the user profile.\nAs an alternative approach applicable when the user profile contains information in addition to provider profile items, we provide a cluster-based approach in which provider profile items are clustered in a preprocessing stage via agglomerative hierarchical clustering.\nEach cluster is represented by a centroid item, and the cluster elements are either sub-clusters or, on the lowest level, the items themselves.\nIn the information filtering stage, the relevant items are retrieved by descending through the cluster hierarchy in the following manner: The cluster items of the highest level are retrieved independently of the 
user profile.\nBy comparing these items with the user profile data, the most relevant sub-clusters are determined and retrieved in a subsequent iteration.\nThis process is repeated until the lowest level is reached, which contains the items themselves as recommendations.\nThroughout the process, user profile items are never propagated to the provider as such.\nThe information deducible about the user profile does not exceed the information deducible via the recommendations themselves (because essentially only a chain of cluster centroids leading to the recommendations is retrieved), and therefore it is not regarded as privacy-critical.\n4.4 Implementation\nWe have implemented the approach for privacy-preserving IF based on JIAC IV [12], a FIPA-compliant MAS architecture.\nJIAC IV integrates fundamental aspects of autonomous agents regarding pro-activeness, intelligence, communication capabilities and mobility by providing a scalable component-based architecture.\nAdditionally, JIAC IV offers components realizing management and security functionality, and provides a methodology for Agent-Oriented Software Engineering.\nJIAC IV stands out among MAS architectures as the only security-certified architecture, since it has been certified by the German Federal Office for Information Security according to the EAL3 of the Common Criteria for Information Technology Security standard [14].\nJIAC IV offers several security features in the areas of access control for agent services, secure communication between agents, and low-level security based on Java security policies [21], and thus provides all security-related functionality required for our approach.\nWe have extended the JIAC IV architecture by adding the mechanisms for communication control described in Section 4.1.\nRegarding the issue of malicious hosts, we currently assume all providers of agent platforms to be trusted.\nWe are additionally developing a solution that is actually based on a trusted computing 
infrastructure.\n5.\nEVALUATION\nFor the evaluation of our approach, we have examined whether and to what extent the requirements (mainly regarding privacy, performance, and quality) are actually met.\nPrivacy aspects are directly addressed by the modules and protocols described above and therefore not evaluated further here.\nPerformance is a critical issue, mainly because of the overhead caused by creating additional agents and agent platforms for controlling communication, and by the additional interactions within the Recommender Module.\nOverall, a single information filtering process takes about ten times longer than a non-privacy-preserving information filtering process leading to the same results, which is a considerable overhead but still acceptable under certain conditions, as described in the following section.\n5.1 The Smart Event Assistant\nAs a proof of concept, and in order to evaluate performance and quality under real-life conditions, we have applied our approach within the Smart Event Assistant, a MAS-based Recommender System which integrates various personalized services for entertainment planning in different German cities, such as a restaurant finder and a movie finder [25].\nAdditional services, such as a calendar, a routing service and news services, complement the information services.\nAn intelligent day planner integrates all functionality by providing personalized recommendations for the various information services, based on the user's preferences and taking into account the location of the user as well as the potential venues.\nAll services are accessible via mobile devices as well.\nFigure 2: The Smart Event Assistant, a privacy-preserving Recommender System supporting users in planning entertainment-related activities.\nFigure 2 shows a screenshot of the intelligent day planner's result dialog.\nThe Smart Event Assistant is entirely realized 
as a MAS system providing, among other functionality, various filter agents and different service provider agents, which together with the user agents utilize the functionality provided by our approach.\nRecommendations are generated in two ways: A push service delivers new recommendations to the user at regular intervals (e.g. once per day) via email or SMS.\nBecause the user is not online during these interactions, they are less critical with regard to performance, and the protracted duration of the information filtering process is acceptable in this case.\nRecommendations generated for the intelligent day planner, however, have to be delivered with very little latency because the process is triggered by the user, who expects to receive results promptly.\nIn this scenario, the overall performance is substantially improved by setting up the relay agent and the TFE agent offline, i.e. prior to the user's request, and by an efficient retrieval of the relevant part of the provider profile: Because the user is only interested in items, such as movies, available within a certain time period and related to specific locations, such as screenings at cinemas in a specific city, the relevant part of the provider profile is usually small enough to be propagated entirely.\nBecause these additional parameters are not seen as privacy-critical (as they are not based on the user profile, but rather constitute a short-term information need), the relevant part of the provider profile may be propagated as a whole, with no need for complex interactions.\nTaken together, these improvements result in a filtering process that takes about three times as long as the respective non-privacy-preserving filtering process, which we regard as an acceptable trade-off for the increased level of privacy.\nTable 4 shows the results of the performance evaluation in more detail.\nIn these scenarios, a direct content-based filtering technique similar to the one described in Section 4.3 is 
applied.\nBecause equivalent filtering techniques have been applied successfully in regular Recommender Systems [9], there are no negative consequences with regard to the quality of the recommendations.\n5.2 Alternative Approaches\nAs described in Section 3.2, our solution is based on trusted computing.\nThere are more straightforward ways to realize privacy-preserving IF, e.g. by utilizing a centralized architecture in which the privacy-preserving provider-side functionality is realized as trusted software based on trusted computing.\nHowever, we consider these approaches to be unsuitable because they are far less generic: Whenever some part of the respective software is patched, upgraded or replaced, the entire system has to be analyzed again in order to determine its trustworthiness, a process that is problematic in itself due to its complexity.\nIn our solution, only a comparatively small part of the overall system is based on trusted computing.\nBecause agent platforms can be utilized for a large variety of tasks, and because we see trusted computing as the most promising approach to realize secure and trusted agent environments, it seems reasonable to assume that these respective mechanisms will be generally available in the future, independent of specific solutions such as the one described here.
6.\nCONCLUSION & FURTHER WORK\nWe have developed an agent-based approach for privacy-preserving Recommender Systems.\nBy utilizing fundamental features of agents such as autonomy, adaptability and the ability to communicate, by extending the capabilities of agent platform managers regarding control of agent communication, by providing a privacy-preserving protocol for information filtering processes, and by utilizing suitable filtering techniques, we have been able to realize an approach which actually preserves privacy in Information Filtering architectures in a multilateral way.\nAs a proof of concept, we have used the approach within an application supporting users in planning entertainment-related activities.\nWe envision various areas of future work: To achieve complete user privacy, the protocol should be extended in order to keep the recommendations themselves private as well.\nGenerally, the feedback we have obtained from users of the Smart Event Assistant indicates that most users are indifferent to privacy in the context of entertainment-related personal information.\nTherefore, we intend to utilize the approach to realize a Recommender System in a more privacy-sensitive domain, such as health or finance, which would enable us to better evaluate user acceptance.","keyphrases":["recommend system","privaci","inform filter","privaci-preserv recommend system","multi-agent system technolog","inform search","retriev-inform filter","distribut artifici intellig-multiag system","secur multi-parti comput","trust softwar","java secur model","learn-base approach","featur-base approach","multi-agent system","trust"],"prmu":["P","P","P","M","M","M","M","M","U","U","U","M","M","M","U"]} {"id":"J-34","title":"(In)Stability Properties of Limit Order Dynamics","abstract":"We study the stability properties of the dynamics of the standard continuous limit-order mechanism that is used in modern equity 
markets. We ask whether such mechanisms are susceptible to butterfly effects -- the infliction of large changes on common measures of market activity by only small perturbations of the order sequence. We show that the answer depends strongly on whether the market consists of absolute traders (who determine their prices independent of the current order book state) or relative traders (who determine their prices relative to the current bid and ask). We prove that while the absolute trader model enjoys provably strong stability properties, the relative trader model is vulnerable to great instability. Our theoretical results are supported by large-scale experiments using limit order data from INET, a large electronic exchange for NASDAQ stocks.","lvl-1":"(In)Stability Properties of Limit Order Dynamics Eyal Even-Dar ∗ Sham M. Kakade † Michael Kearns ‡ Yishay Mansour § ABSTRACT We study the stability properties of the dynamics of the standard continuous limit-order mechanism that is used in modern equity markets.\nWe ask whether such mechanisms are susceptible to butterfly effects - the infliction of large changes on common measures of market activity by only small perturbations of the order sequence.\nWe show that the answer depends strongly on whether the market consists of absolute traders (who determine their prices independent of the current order book state) or relative traders (who determine their prices relative to the current bid and ask).\nWe prove that while the absolute trader model enjoys provably strong stability properties, the relative trader model is vulnerable to great instability.\nOur theoretical results are supported by large-scale experiments using limit order data from INET, a large electronic exchange for NASDAQ stocks.\nCategories and Subject Descriptors J.4 [Social and Behavioral Sciences]: Economics General Terms Economics, Theory 1.\nINTRODUCTION In recent years there has been an explosive increase in the automation of 
modern equity markets.\nThis increase has taken place both in the exchanges, which are increasingly computerized and offer sophisticated interfaces for order placement and management, and in the trading activity itself, which is ever more frequently undertaken by software.\nThe so-called Electronic Communication Networks (or ECNs) that dominate trading in NASDAQ stocks are a common example of the automation of the exchanges.\nOn the trading side, computer programs are now entrusted not only with the careful execution of large block trades for clients (sometimes referred to on Wall Street as program trading), but with the autonomous selection of stocks, direction (long or short) and volumes to trade for profit (commonly referred to as statistical arbitrage).\nThe vast majority of equity trading is done via the standard limit order market mechanism.\nIn this mechanism, continuous trading takes place via the arrival of limit orders specifying whether the party wishes to buy or sell, the volume desired, and the price offered.\nArriving limit orders that are entirely or partially executable with the best offers on the other side are executed immediately, with any volume not immediately executable being placed in a queue (or book) ordered by price on the appropriate side (buy or sell).\n(A detailed description of the limit order mechanism is given in Section 3.)\nWhile traders have always been able to view the prices at the top of the buy and sell books (known as the bid and ask), a relatively recent development in certain exchanges is the real-time revelation of the entire order book - the complete distribution of orders, prices and volumes on both sides of the exchange.\nWith this revelation has come the opportunity - and increasingly, the need - for modeling and exploiting limit order data and dynamics.\nIt is fair to say that market microstructure, as this area is generally known, is a topic commanding great interest both in the real markets and in the academic finance 
literature.\nThe opportunities and needs span the range from the optimized execution of large trades to the creation of stand-alone proprietary strategies that attempt to profit from high-frequency microstructure signals.\nIn this paper we investigate a previously unexplored but fundamental aspect of limit order microstructure: the stability properties of the dynamics.\nSpecifically, we are interested in the following natural question: To what extent are simple models of limit order markets either susceptible or immune to butterfly effects - that is, the infliction of large changes in important activity statistics (such as the number of shares traded or the average price per share) by only minor perturbations of the order sequence?\nTo examine this question, we consider two stylized but natural models of the limit order arrival process.\nIn the absolute price model, buyers and sellers arrive with limit order prices that are determined independently of the current state of the market (as represented by the order books), though they may depend on all manner of exogenous information or shocks, such as time, news events, announcements from the company whose shares are being traded, private signals or state of the individual traders, etc.\nThis process models traditional fundamentals-based trading, in which market participants each have some inherent but possibly varying valuation for the good that in turn determines their limit price.\nIn contrast, in the relative price model, traders express their limit order prices relative to the best price offered in their respective book (buy or sell).\nThus, a buyer would encode their limit order price as an offset Δ (which may be positive, negative, or zero) from the current bid pb, which is then translated to the limit price pb + Δ.\nAgain, in addition to now depending on the state of the order books, prices may also depend on all manner of exogenous information.\nThe relative price model can be viewed as modeling 
traders who, in addition to perhaps incorporating fundamental external information on the stock, may also position their orders strategically relative to the other orders on their side of the book.\nA common example of such strategic behavior is known as penny-jumping on Wall Street, in which a trader who has an interest in buying shares quickly, but still at a discount to placing a market order, will deliberately position their order just above the current bid.\nMore generally, the entire area of modern execution optimization [9, 10, 8] has come to rely heavily on the careful positioning of limit orders relative to the current order book state.\nNote that such positioning may depend on more complex features of the order books than just the current bid and ask, but the relative model is a natural and simplified starting point.\nWe remark that an alternate view of the two models is that all traders behave in a relative manner, but with absolute traders able to act only on a considerably slower time scale than the faster relative traders.\nHow do these two models differ?\nClearly, given any fixed sequence of arriving limit order prices, we can choose to express these prices either as their original (absolute) values, or we can run the order book dynamical process and transform each order into a relative difference with the top of its book, and obtain identical results.\nThe differences arise when we consider the stability question introduced above.\nIntuitively, in the absolute model a small perturbation in the arriving limit price sequence should have limited (but still some) effects on the subsequent evolution of the order books, since prices are determined independently.\nFor the relative model this intuition is less clear.\nIt seems possible that a small perturbation could (for example) slightly modify the current bid, which in turn could slightly modify the price of the next arriving order, which could then slightly modify the price of the subsequent order, and 
so on, leading to an amplifying sequence of events.\nOur main results demonstrate that these two models do indeed have dramatically different stability properties.\nWe first show that for any fixed sequence of prices in the absolute model, the modification of a single order has a bounded and extremely limited impact on the subsequent evolution of the books.\nIn particular, we define a natural notion of distance between order books and show that small modifications can result in only constant distance to the original books for all subsequent time steps.\nWe then show that this implies that for almost any standard statistic of market activity - the executed volume, the average execution price, and many others - the statistic can be influenced only infinitesimally by small perturbations.\nIn contrast, we show that the relative model enjoys no such stability properties.\nAfter giving specific (worst-case) relative price sequences in which small perturbations generate large changes in basic statistics (for example, altering the number of shares traded by a factor of two), we proceed to demonstrate that the difference in stability properties of the two models is more than merely theoretical.\nUsing extensive INET (a major ECN for NASDAQ stocks) limit order data and order book reconstruction code, we investigate the empirical stability properties when the data is interpreted as containing either absolute prices, relative prices, or mixtures of the two.\nThe theoretical predictions of stability and instability are strongly borne out by the subsequent experiments.\nIn addition to stability being of fundamental interest in any important dynamical system, we believe that the results described here provide food for thought on the topics of market impact and the backtesting of quantitative trading strategies (the attempt to determine hypothetical past performance using historical data).\nThey suggest that one's confidence that trading quietly and in small volumes will 
have minimal market impact is linked to an implicit belief in an absolute price model.\nOur results and the fact that in the real markets there is a large and increasing amount of relative behavior such as penny-jumping would seem to cast doubts on such beliefs.\nSimilarly, in a purely or largely relative-price world, backtesting even low-frequency, low-volume strategies could result in historical estimates of performance that are not only unrelated to future performance (the usual concern), but are not even accurate measures of a hypothetical past.\nThe outline of the paper follows.\nIn Section 2 we briefly review the large literature on market microstructure.\nIn Section 3 we describe the limit order mechanism and our formal models.\nSection 4 presents our most important theoretical result, the 1-Modification Theorem for the absolute price model.\nThis theorem is applied in Section 5 to derive a number of strong stability properties in the absolute model.\nSection 6 presents specific examples establishing the worst-case instability of the relative model.\nSection 7 contains the simulation studies that largely confirm our theoretical findings on INET market data.\n2.\nRELATED WORK As was mentioned in the Introduction, market microstructure is an important and timely topic both in academic finance and on Wall Street, and consequently has a large and varied recent literature.\nHere we have space only to summarize the main themes of this literature and to provide pointers to further readings.\nTo our knowledge the stability properties of detailed limit order microstructure dynamics have not been previously considered.\n(However, see Farmer and Joshi [6] for an example and survey of other price dynamic stability studies.)\nOn the more theoretical side, there is a rich line of work examining what might be considered the game-theoretic properties of limit order markets.\nThese works model traders and market-makers (who provide liquidity by offering both buy and sell 
quotes, and profit on the difference) by utility functions incorporating tolerance for risks of price movement, large positions and other factors, and examine the resulting equilibrium prices and behaviors.\nCommon findings predict negative price impacts for large trades, and price effects for large inventory holdings by market-makers.\nAn excellent and comprehensive survey of results in this area can be found in [2].\nThere is a similarly large body of empirical work on microstructure.\nMajor themes include the measurement of price impacts, statistical properties of limit order books, and attempts to establish the informational value of order books [4].\nA good overview of the empirical work can be found in [7].\nOf particular note for our interests is [3], which empirically studies the distribution of arriving limit order prices in several prominent markets.\nThis work takes a view of arriving prices analogous to our relative model, and establishes a power-law form for the resulting distributions.\nThere is also a small but growing number of works examining market microstructure topics from a computer science perspective, including some focused on the use of microstructure in algorithms for optimized trade execution.\nKakade et al. 
[9] introduced limit order dynamics in competitive analysis for one-way and volume-weighted average price (VWAP) trading.\nSome recent papers have applied reinforcement learning methods to trade execution using order book properties as state variables [1, 5, 10].\n3.\nMICROSTRUCTURE PRELIMINARIES The following expository background material is adapted from [9].\nThe market mechanism we examine in this paper is driven by the simple and standard concept of a limit order.\nSuppose we wish to purchase 1000 shares of Microsoft (MSFT) stock.\nIn a limit order, we specify not only the desired volume (1000 shares), but also the desired price.\nSuppose that MSFT is currently trading at roughly $24.07 a share (see Figure 1, which shows an actual snapshot of an MSFT order book on INET), but we are only willing to buy the 1000 shares at $24.04 a share or lower.\nWe can choose to submit a limit order with this specification, and our order will be placed in a queue called the buy order book, which is ordered by price, with the highest offered unexecuted buy price at the top (often referred to as the bid).\nIf there are multiple limit orders at the same price, they are ordered by time of arrival (with older orders higher in the book).\nIn the example provided by Figure 1, our order would be placed immediately after the extant order for 5,503 shares at $24.04; though we offer the same price, this order has arrived before ours.\nSimilarly, a sell order book for sell limit orders is maintained, this time with the lowest sell price offered (often referred to as the ask) at its top.\nThus, the order books are sorted from the most competitive limit orders at the top (high buy prices and low sell prices) down to less competitive limit orders.\nThe bid and ask prices together are sometimes referred to as the inside market, and the difference between them as the spread.\nBy definition, the order books always consist exclusively of unexecuted orders - they are queues of orders hopefully 
waiting for the price to move in their direction.\nFigure 1: Sample INET order books for MSFT.\nHow then do orders get (partially) executed?\nIf a buy (sell, respectively) limit order comes in above the ask (below the bid, respectively) price, then the order is matched with orders on the opposing books until either the incoming order's volume is filled, or no further matching is possible, in which case the remaining incoming volume is placed in the books.\nFor instance, suppose in the example of Figure 1 a buy order for 2000 shares arrived with a limit price of $24.08.\nThis order would be partially filled by the two 500-share sell orders at $24.069 in the sell books, the 500-share sell order at $24.07, and the 200-share sell order at $24.08, for a total of 1700 shares executed.\nThe remaining 300 shares of the incoming buy order would become the new bid of the buy book at $24.08.\nIt is important to note that the prices of executions are the prices specified in the limit orders already in the books, not the prices of the incoming order that is immediately executed.\nThus in this example, the 1700 executed shares would be at different prices.\nNote that this also means that in a pure limit order exchange such as INET, market orders can be simulated by limit orders with extreme price values.\nIn exchanges such as INET, any order can be withdrawn or canceled by the party that placed it at any time prior to execution.\nEvery limit order arrives atomically and instantaneously - there is a strict temporal sequence in which orders arrive, and two orders can never arrive simultaneously.\nThis gives rise to the definition of the last price of the exchange, which is simply the last price at which the exchange executed an order.\nIt is this quantity that is usually meant when people casually refer to the (ticker) price of a stock.\n3.1 Formal Definitions We now provide a formal model for the limit order process described above.\nIn this model, limit orders arrive in a 
temporal sequence, with each order specifying its limit price and an indication of its type (buy or sell).\nLike the actual exchanges, we also allow cancellation of a standing (unexecuted) order in the books at any time prior to its execution.\nWithout loss of generality we limit attention to a model in which every order is for a single share; large order volumes can be represented by 1-share sequences.\nDefinition 3.1.\nLet Σ = σ1, ..., σn be a sequence of limit orders, where each σi has the form (ni, ti, vi).\nHere ni is an order identifier, ti is the order type (buy, sell, or cancel), and vi is the limit order value.\nIn the case that ti is a cancel, ni matches a previously placed order and vi is ignored.\nWe have deliberately called vi in the definition above the limit order value rather than price, since our two models will differ in their interpretation of vi (as being absolute or relative).\nIn the absolute model, we do indeed interpret vi as simply being the price of the limit order.\nIn the relative model, if the current order book configuration is (A, B) (where A is the sell and B the buy book), the price of the order is ask(A) + vi if ti is sell, and bid(B) + vi if ti is buy, where by ask(X) and bid(X) we denote the price of the order at the top of the book X. (Note vi can be negative.)\nOur main interest in this paper is the effects that the modification of a small number of limit orders can have on the resulting dynamics.\nFor simplicity we consider only modifications to the limit order values, but our results generalize to any modification.\nDefinition 3.2.\nA k-modification of Σ is a sequence Σ′ such that for exactly k indices i1, ..., ik we have v′ij ≠ vij, t′ij = tij, and n′ij = nij.\nFor every ℓ ≠ ij, j ∈ {1, ... 
, k}, σ′ℓ = σℓ.\nWe now define the various quantities whose stability properties we examine in the absolute and relative models.\nAll of these are standard quantities of common interest in financial markets.\n• volume(Σ): Number of shares executed (traded) in the sequence Σ.\n• average(Σ): Average execution price.\n• close(Σ): Price of the last (closing) execution.\n• lastbid(Σ): Bid at the end of the sequence.\n• lastask(Σ): Ask at the end of the sequence.\n4.\nTHE 1-MODIFICATION THEOREM In this section we provide our most important technical result.\nIt shows that in the absolute model, the effects that the modification of a single order has on the resulting evolution of the order books are extremely limited.\nWe then apply this result to derive strong stability results for all of the aforementioned quantities in the absolute model.\nThroughout this section, we consider an arbitrary order sequence Σ in the absolute model, and any 1-modification Σ′ of Σ.\nAt any point (index) i in the two sequences we shall use (A1, B1) to denote the sell and buy books (respectively) in Σ, and (A2, B2) to denote the sell and buy books in Σ′; for notational convenience we omit explicitly superscripting by the current index i.\nWe will shortly establish that at all times i, (A1, B1) and (A2, B2) are very close.\nAlthough the order books are sorted by price, we will use (for example) A1 ∪ {a2} = A2 to indicate that A2 contains an order at some price a2 that is not present in A1, but that otherwise A1 and A2 are identical; thus deleting the order at a2 in A2 would render the books the same.\nSimilarly, B1 ∪ {b2} = B2 ∪ {b1} means B1 contains an order at price b1 not present in B2, B2 contains an order at price b2 not present in B1, and that otherwise B1 and B2 are identical.\nUsing this notation, we now define a set of stable system states, where each state is composed from the order books 
of the original and the modified sequences. Shortly we show that if we change only one order's value (price), we remain in this set for any sequence of limit orders.

Definition 4.1. Let ab be the set of all states (A1, B1) and (A2, B2) such that A1 = A2 and B1 = B2. Let āb be the set of states such that A1 ∪ {a2} = A2 ∪ {a1}, where a1 ≠ a2, and B1 = B2. Let ab̄ be the set of states such that B1 ∪ {b2} = B2 ∪ {b1}, where b1 ≠ b2, and A1 = A2. Let āb̄ be the set of states in which A1 = A2 ∪ {a1} and B1 = B2 ∪ {b1}, or in which A2 = A1 ∪ {a2} and B2 = B1 ∪ {b2}. Finally, we define S = ab ∪ āb ∪ ab̄ ∪ āb̄ as the set of stable states.

Theorem 4.1. (1-Modification Theorem) Consider any sequence of orders Σ and any 1-modification Σ′ of Σ. Then the order books (A1, B1) and (A2, B2) determined by Σ and Σ′ lie in the set S of stable states at all times.

Figure 2: Diagram representing the set S of stable states and the possible transitions within it after the change.

The idea of the proof of this theorem is contained in Figure 2, which shows a state transition diagram labeled by the categories of stable states. This diagram describes all transitions that can take place after the arrival of the order on which Σ and Σ′ differ. The following establishes that immediately after the arrival of this differing order, the state lies in S.

Lemma 4.2. If at any time the current books (A1, B1) and (A2, B2) are in the set ab (and thus identical), then modifying the price of the next order keeps the state in S.
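The book dynamics that these results reason about can be made concrete in a short simulation. The sketch below is our own illustration in Python, not the paper's code; it assumes the paper's one-share model in which a crossing buy order executes at the standing ask and a crossing sell executes at the standing bid (all names such as `Book`, `run` and `side_diff` are hypothetical). A randomized check then confirms, on small absolute-model sequences, that after changing a single order's price the two final books differ by at most one order per side, as membership in the stable set S implies.

```python
import heapq
import random
from collections import Counter

class Book:
    """Minimal one-share, absolute-model limit order book (illustrative only)."""
    def __init__(self):
        self.buys = []    # max-heap of buy prices (stored negated)
        self.sells = []   # min-heap of sell prices
        self.executions = []  # execution prices, in order of occurrence

    def submit(self, side, price):
        if side == 'buy':
            if self.sells and self.sells[0] <= price:
                # crossing buy order executes at the standing ask
                self.executions.append(heapq.heappop(self.sells))
            else:
                heapq.heappush(self.buys, -price)
        else:
            if self.buys and -self.buys[0] >= price:
                # crossing sell order executes at the standing bid
                self.executions.append(-heapq.heappop(self.buys))
            else:
                heapq.heappush(self.sells, price)

def run(seq):
    book = Book()
    for side, price in seq:
        book.submit(side, price)
    return book

def side_diff(x, y):
    """Size of the one-directional multiset difference between two price lists."""
    return sum((Counter(x) - Counter(y)).values())

# Randomized check: a 1-modification leaves the final books differing by
# at most one order on each side (states ab, with or without bars).
random.seed(0)
for _ in range(200):
    seq = [(random.choice(['buy', 'sell']), random.randint(1, 20))
           for _ in range(50)]
    mod = list(seq)
    i = random.randrange(len(mod))
    mod[i] = (mod[i][0], random.randint(1, 20))  # change one order's price
    b1, b2 = run(seq), run(mod)
    assert max(side_diff(b1.buys, b2.buys), side_diff(b2.buys, b1.buys)) <= 1
    assert max(side_diff(b1.sells, b2.sells), side_diff(b2.sells, b1.sells)) <= 1
```

Such a check is of course no substitute for the proof that follows; it only confirms the theorem empirically on random instances of the model.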
Proof. Suppose the arriving order is a sell order and we change it from a1 to a2; assume without loss of generality that a1 > a2. If neither order is executed immediately, then we move to state āb; if both of them are executed, then we stay in state ab; and if only a2 is executed, then we move to state āb̄. The analysis of an arriving buy order is similar.

Following the arrival of their only differing order, Σ and Σ′ are identical. We now give a sequence of lemmas showing that following the initial difference covered by Lemma 4.2, the state remains in S forever on the remaining (identical) sequence. We first show that from state āb we remain in S regardless of the next order. The intuition of this lemma is demonstrated in Figure 3.

Figure 3: The state diagram when starting at state āb. This diagram provides the intuition of Lemma 4.3.

Lemma 4.3. If the current state is in the set āb, then for any order the state will remain in S.
Proof. We first provide the analysis for the case of an arriving sell order. Note that in āb the buy books are identical (B1 = B2). Thus either the arriving sell order is executed with the same buy order in both buy books, or it is executed in neither. In the first case, the buy books remain identical (the bid is executed in both) and the sell books remain unchanged. In the second case, the buy books remain unchanged and identical, and the sell books have the new sell order added to both of them (and thus still differ by one order).

Next we provide an analysis of the more subtle case where the arriving item is a buy order. For this case we need to take care of several different scenarios. The first is when the top of both sell books (the ask) is identical. Then regardless of whether the new buy order is executed or not, the state remains in āb (the analysis is similar to that of an arriving sell order). We are left to deal with the case where ask(A1) and ask(A2) are different. Here we discuss two subcases: (a) ask(A1) = a1 and ask(A2) = a2, and (b) ask(A1) = a1 and ask(A2) = a′. Here a1 and a2 are as in the definition of āb in Definition 4.1, and a′ is some other price. For subcase (a), by our assumption a1 < a2, so either (1) both asks get executed, the sell books become identical, and we move to state ab; (2) neither ask is executed and we remain in state āb; or (3) only ask(A1) = a1 is executed, in which case we move to state āb̄ with A2 = A1 ∪ {a2} and B2 = B1 ∪ {b2}, where b2 is the arriving buy order's price. For subcase (b), either (1) the buy order is executed in neither sell book and we remain in state āb; or (2) the buy order is executed in both sell books and we stay in state āb with A1 ∪ {a2} = A2 ∪ {a′}; or (3) only ask(A1) = a1 is executed and we move to state āb̄.

Lemma 4.4. If the current state is in the set ab̄, then for any order the state will remain in S.
Lemma 4.5. If the current configuration is in the set āb̄, then for any order the state will remain in S.

The proofs of these two lemmas are omitted, but are similar in spirit to that of Lemma 4.3. The next and final lemma deals with cancellations.

Lemma 4.6. If the current order book state lies in S, then following the arrival of a cancellation it remains in S.

Proof. When a cancellation order arrives, one of the following possibilities holds: (1) the order is still in both sets of books, (2) it is in neither of them, or (3) it is in only one of them. For the first two cases it is easy to see that the cancellation's effect is identical on both sets of books, and thus the state remains unchanged. For the case where the order appears in only one set of books, without loss of generality we assume that the cancellation cancels a buy order at b1. Rather than removing b1 from the book, we can change it to have price 0, meaning this buy order will never be executed and is effectively canceled. Now regardless of the state that we were in, b1 is still in only one buy book (but with a different price), and thus we remain in the same state in S.

The proof of Theorem 4.1 follows from the above lemmas.

5. ABSOLUTE MODEL STABILITY

In this section we apply the 1-Modification Theorem to show strong stability properties for the absolute model. We begin with an examination of the executed volume.

Lemma 5.1. Let Σ be any sequence and Σ′ be any 1-modification of Σ. Then the sets of executed orders (ID numbers) generated by the two sequences differ by at most 2.

Proof. By Theorem 4.1 we know that at each stage the books differ by at most two orders. Since the union of the IDs of the executed orders and the order books is always identical for both sequences, this implies that the executed orders can differ by at most two.

Corollary 5.2. Let Σ be any sequence and Σ′ be any k-modification of Σ. Then the set of the executed
orders (ID numbers) generated by the two sequences differs by at most 2k.

An order sequence Σ′ is a k-extension of Σ if Σ can be obtained by deleting any k orders in Σ′.

Lemma 5.3. Let Σ be any sequence and let Σ′ be any k-extension of Σ. Then the sets of executed orders generated by Σ and Σ′ differ by at most 2k.

This lemma is the key to obtaining our main absolute model volume result below. We use edit(Σ, Σ′) to denote the standard edit distance between the sequences Σ and Σ′: the minimal number of substitutions, insertions and deletions of orders needed to change Σ to Σ′.

Theorem 5.4. Let Σ and Σ′ be any absolute model order sequences. If edit(Σ, Σ′) ≤ k, then the sets of executed orders generated by Σ and Σ′ differ by at most 4k. In particular, |volume(Σ) − volume(Σ′)| ≤ 4k.

Proof. We first define the sequence Σ̃, which is the intersection of Σ and Σ′. Since Σ and Σ′ are at most k apart, by k insertions we can change Σ̃ to either Σ or Σ′, and by Lemma 5.3 its set of executed orders is at most 2k from each. Thus the sets of executed orders in Σ and Σ′ are at most 4k apart.

5.1 Spread Bounds

Theorem 5.4 establishes strong stability for executed volume in the absolute model. We now turn to the quantities that involve execution prices as opposed to volume alone, namely average(Σ), close(Σ), lastbid(Σ) and lastask(Σ). For these results, unlike executed volume, a condition must hold on Σ in order for stability to occur. This condition is expressed in terms of a natural measure of the spread of the market, or the gap between the buyers and sellers. We motivate this condition by first showing that without it, by changing one order, we can change average(Σ) by any positive value x.
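This sensitivity is easy to reproduce in a small simulation. The sketch below is a hypothetical illustration in Python (names such as `run` are our own), assuming the paper's one-share model in which a crossing order trades at the standing order's price. It builds an alternating sequence of sells at p and buys at p + x; raising only the first sell order above p + x moves every execution from the ask p to the bid p + x, shifting the average by exactly x.

```python
import heapq

def run(seq):
    """One-share book: a crossing order trades at the standing order's price."""
    buys, sells, executions = [], [], []  # buys stored negated (max-heap)
    for side, price in seq:
        if side == 'buy':
            if sells and sells[0] <= price:
                executions.append(heapq.heappop(sells))   # execute at the ask
            else:
                heapq.heappush(buys, -price)
        else:
            if buys and -buys[0] >= price:
                executions.append(-heapq.heappop(buys))   # execute at the bid
            else:
                heapq.heappush(sells, price)
    return executions

p, x, n = 10, 5, 100
# Original: sells at p alternating with buys at p + x, first order a sell.
seq = [('sell', p)] + [o for _ in range(n)
                       for o in [('buy', p + x), ('sell', p)]]
# 1-modification: price the first sell above p + x so it never executes.
mod = [('sell', p + 1 + x)] + seq[1:]

orig, new = run(seq), run(mod)
assert len(orig) == len(new) == n
assert sum(orig) / n == p        # every original execution at the ask, p
assert sum(new) / n == p + x     # every modified execution at the bid, p + x
```

Since x here is arbitrary, the gap |average(Σ) − average(Σ′)| can be made as large as desired without a spread condition, which is exactly what the lemma below formalizes.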
Lemma 5.5. There exists Σ such that for any x ≥ 0, there is a 1-modification Σ′ of Σ such that average(Σ′) = average(Σ) + x.

Proof. Let Σ be a sequence of alternating sell and buy orders in which each seller offers p and each buyer offers p + x, and the first order is a sell. Then all executions take place at the ask, which is always p, and thus average(Σ) = p. Now suppose we modify only the first sell order to be at price p + 1 + x. This initial sell order will never be executed, and now all executions take place at the bid, which is always p + x.

Similar instability results can be shown to hold for the other price-based quantities. This motivates the introduction of a quantity we call the second spread of the order books, defined as the difference between the prices of the second order in the sell book and the second order in the buy book (as opposed to the bid-ask difference, which is commonly called the spread). We note that in a liquid stock, such as those we examine experimentally in Section 7, the second spread will typically be quite small and in fact almost always equal to the spread. In this subsection we consider changes in the sequence only after an initialization period, and sequences such that the second spread is always defined after the time we make a change. We define s2(Σ) to be the maximum second spread in the sequence Σ following the change.

Theorem 5.6. Let Σ be a sequence and let Σ′ be any 1-modification of Σ. Then
1. |lastbid(Σ) − lastbid(Σ′)| ≤ s2(Σ)
2. |lastask(Σ) − lastask(Σ′)| ≤ s2(Σ)
where s2(Σ) is the maximum second spread in Σ following the 1-modification.

Proof. We provide the proof for the last bid; the proof for the last ask is similar. The proof relies on Theorem 4.1 and considers states in the stable set S.
For states ab and āb, the bid is identical. Let bid(X), sb(X), and ask(X) be the bid, the second-highest buy order, and the ask of a sequence X. Recall that in state ab̄ the sell books are identical, and the two buy books are identical except for one order. Thus bid(Σ) + s2(Σ) ≥ sb(Σ) + s2(Σ) ≥ ask(Σ) = ask(Σ′) ≥ bid(Σ′). It remains to bound bid(Σ). Here we use the fact that the bid of the modified sequence is at least the second-highest buy order in the original sequence, since the books differ in only one order. Since bid(Σ′) ≥ sb(Σ) ≥ ask(Σ) − s2(Σ) ≥ bid(Σ) − s2(Σ), we have that |bid(Σ) − bid(Σ′)| ≤ s2(Σ), as desired.

In state āb̄, for one of the sequences the books contain an additional buy order and an additional sell order. First suppose that the books containing the additional orders are those of the original sequence Σ. If the bid is not the additional order we are done; otherwise we have bid(Σ) ≤ ask(Σ) ≤ sb(Σ) + s2(Σ) ≤ bid(Σ′) + s2(Σ), where sb(Σ) ≤ bid(Σ′) since the original buy book has only one additional order. Now assume that the books with the additional orders are those of the modified sequence Σ′. We have bid(Σ) + s2(Σ) ≥ ask(Σ) ≥ ask(Σ′) ≥ bid(Σ′), where we used the fact that ask(Σ) ≥ ask(Σ′) since the modified sequence has an additional sell order. Similarly, bid(Σ) ≤ bid(Σ′) since the modified buy book contains an additional order.

We note that the proof of Theorem 5.6 actually establishes that the bid and ask of the original and modified sequences are within s2(Σ) at all times. Next we provide a technical lemma which relates the (first) spread of the modified
sequence to the second spread of the original sequence.

Lemma 5.7. Let Σ be a sequence and let Σ′ be any 1-modification of Σ. Then the spread of Σ′ is bounded by s2(Σ).

Proof. By the 1-Modification Theorem, we know that the books of the modified sequence and the original sequence can differ by at most one order in each book (buy and sell). Therefore, the second-highest buy order in the original sequence is always at most the bid in the modified sequence, and the second-lowest sell order in the original sequence is always at least the ask of the modified sequence.

We are now ready to state a stability result for the average execution price in the absolute model. It establishes that in highly liquid markets, where the executed volume is large and the spread small, the average price is highly stable.

Theorem 5.8. Let Σ be a sequence and let Σ′ be any 1-modification of Σ. Then

|average(Σ) − average(Σ′)| ≤ 2(pmax + s2(Σ))/volume(Σ) + s2(Σ)

where pmax is the highest execution price in Σ.

Proof. The proof will show that every execution in Σ, besides the execution of the modified order and the last execution, has a matching execution in Σ′ with a price different by at most s2(Σ), and will use the fact that pmax + s2(Σ) is a bound on the price in Σ′. Referring to the proof of the 1-Modification Theorem, suppose we are in state āb̄, where we have in one sequence (which can be either Σ or Σ′) an additional buy order b and an additional sell order a.
Without loss of generality we assume that the sequence with the additional orders is Σ. If the next execution does not involve a or b, then clearly we have the same execution in both Σ and Σ′. Suppose that it involves a; there are two possibilities. Either a is the modified order, in which case we change the average price difference by (pmax + s2(Σ))/volume(Σ), and this can happen only once; or a was executed earlier in Σ′, and both executions involve an order whose limit price is a. By Lemma 5.7 the spread of both sequences is bounded by s2(Σ), which implies that the price of the execution in Σ′ was at most a + s2(Σ), while the execution in Σ is at price a, and thus the prices differ by at most s2(Σ).

In states āb and ab̄, as long as we have concurrent executions in the two sequences, we know that the prices can differ by at most s2(Σ). If we have an execution in only one sequence, we either match it in state āb̄, or charge it (pmax + s2(Σ))/volume(Σ) if we end at state āb̄. If we end in state ab, āb or ab̄, then every execution in states āb or ab̄ was matched to an execution in state āb̄. If we end up in state āb̄, we have the one execution that is not matched, and thus we charge it (pmax + s2(Σ))/volume(Σ).

We next give a stability result for the closing price. We first provide a technical lemma regarding the prices of consecutive executions.

Lemma 5.9. Let Σ be any sequence. Then the prices of two consecutive executions in Σ differ by at most s2(Σ).

Proof. Suppose the first execution takes place at time t; its price is bounded below by the current bid and above by the current ask. After this execution the bid is at least the second-highest buy order at time t (if the former bid was executed and no higher buy orders arrived, and higher otherwise). Similarly, the ask is
at most the second-lowest sell order at time t. Therefore, the next execution price is at least the second bid at time t and at most the second ask at time t, which is at most s2(Σ) away from the bid/ask at time t.

Lemma 5.10. Let Σ be any sequence and let Σ′ be a 1-modification of Σ. If volume(Σ) ≥ 2, then |close(Σ) − close(Σ′)| ≤ s2(Σ).

Proof. We first deal with the case where the last execution occurs in both sequences simultaneously. By Theorem 5.6, the ask and the bid of Σ and Σ′ are at most s2(Σ) apart at every time t. Since the price of the last execution is their ask (bid) at time t, we are done. Next we deal with the case where the last execution among the two sequences occurs only in Σ. In this case, either the previous execution happened simultaneously in both sequences at some time t, and thus all three executions are within the second spread of Σ at time t (the first execution in Σ by definition, the execution in Σ′ by arguments identical to the former case, and the third by Lemma 5.9); or the previous execution happened only in Σ′ at time t, in which case the two executions are within the spread of Σ at time t (the execution of Σ by the same arguments as before, and the execution in Σ′ must be inside its spread at time t). If the last execution happens only in Σ′, we know by Lemma 5.9 that it is at most s2(Σ′) away from the previous execution of Σ′. Together with the fact that an execution occurring in only one sequence implies that the order is in the spread of the other sequence (as long as the sequences are a 1-modification apart), the proof is completed.

5.2 Spread Bounds for k-Modifications

As in the case of executed volume, we would like to extend the absolute model stability results for price-based quantities to the case where multiple orders are modified. Here our results
are weaker and depend on the k-spread, the distance between the kth-highest buy order and the kth-lowest sell order, instead of the second spread. (Looking ahead to Section 7, we note that in actual market data for liquid stocks, this quantity is often very small as well.) We use sk(Σ) to denote the k-spread. As before, we assume that the k-spread is always defined after an initialization period. We first state the following generalization of Lemma 5.7.

Lemma 5.11. Let Σ be a sequence and let Σ′ be any 1-modification of Σ. For ℓ ≥ 1, if s_{ℓ+1}(Σ) is always defined after the change, then s_ℓ(Σ′) ≤ s_{ℓ+1}(Σ).

The proof is similar to the proof of Lemma 5.7 and is omitted. A simple application of this lemma is the following: let Σ′ be any sequence which is an ℓ-modification of Σ. Then we have s2(Σ′) ≤ s_{ℓ+2}(Σ). Using the above lemma and a simple induction, we can obtain the following theorem.

Theorem 5.12. Let Σ be a sequence and let Σ′ be any k-modification of Σ. Then
1. |lastbid(Σ) − lastbid(Σ′)| ≤ ∑_{ℓ=1}^{k} s_{ℓ+1}(Σ) ≤ k·s_{k+1}(Σ)
2. |lastask(Σ) − lastask(Σ′)| ≤ ∑_{ℓ=1}^{k} s_{ℓ+1}(Σ) ≤ k·s_{k+1}(Σ)
3. |close(Σ) − close(Σ′)| ≤ ∑_{ℓ=1}^{k} s_{ℓ+1}(Σ) ≤ k·s_{k+1}(Σ)
4. |average(Σ) − average(Σ′)| ≤ ∑_{ℓ=1}^{k} [2(pmax + s_{ℓ+1}(Σ))/volume(Σ) + s_{ℓ+1}(Σ)]
where s_ℓ(Σ) is the maximum ℓ-spread in Σ following the first modification.

We note that while these bounds depend on deeper measures of spread for more modifications, we are working in a 1-share order model. Thus in an actual market, where single orders contain hundreds or thousands of shares, the k-spread even for large k might be quite small and close to the standard 1-spread in liquid stocks.

6. RELATIVE MODEL INSTABILITY

In the relative model the underlying assumption is that traders try to exploit their knowledge of
the books to strategically place their orders. Thus if a trader wants her buy order to be executed quickly, she may position it above the current bid and be first in the queue; if the trader is patient and believes that the price trend is going to be downward, she will place orders deeper in the buy book; and so on. While in the previous sections we showed stability results for the absolute model, here we provide simple examples which show instability in the relative model for the executed volume, last bid, last ask, average execution price and last execution price. In Section 7 we provide many simulations on actual market data that demonstrate that this instability is inherent to the relative model, and not due to artificial constructions. In the relative model we assume that for every sequence the ask and bid are always defined, so the books have a non-empty initial configuration.

We begin by showing that in the relative model, even a single modification can double the number of shares executed.

Theorem 6.1. There is a sequence Σ and a 1-modification Σ′ of Σ such that volume(Σ′) ≥ 2·volume(Σ).

Proof. For concreteness we assume that at the beginning the ask is 10 and the bid is 8. The sequence Σ is composed of n buy orders with Δ = 0, followed by n sell orders with Δ = 0, and finally an alternating sequence of buy orders with Δ = +1 and sell orders with Δ = −1 of length 2n. Since the books before the alternating sequence contain n + 1 sell orders at 10 and n + 1 buy orders at 8, each pair of buy and sell orders in the alternating part is matched and executed, but none of the initial 2n orders is executed, and thus volume(Σ) = n. Now we change the first buy order to have Δ = +1. After the first 2n orders there are still no executions; however, the books are different. Now there are n + 1 sell orders at 10, n buy orders at 9, and one buy order at 8. Now each order
in the alternating sequence is executed with one of the former orders, and we have volume(Σ′) = 2n.

The next theorem shows that the spread-based stability results of Section 5.1 also fail to hold in the relative model. Before providing the proof, we give its intuition. At the beginning, the sell book contains orders at only two prices, which are far apart, with two orders at each price. Several buy orders then arrive; in the original sequence they are not executed, while in the modified sequence they are executed and leave the sell book with only the orders at the high price. Then many sell orders followed by many buy orders arrive, such that in the original sequence they are executed only at the low price, while in the modified sequence they are executed at the high price.

Theorem 6.2. For any positive numbers s and x, there is a sequence Σ with s2(Σ) = s and a 1-modification Σ′ of Σ such that
• |close(Σ) − close(Σ′)| ≥ x
• |average(Σ) − average(Σ′)| ≥ x
• |lastbid(Σ) − lastbid(Σ′)| ≥ x
• |lastask(Σ) − lastask(Σ′)| ≥ x

Proof. Without loss of generality, let us consider sequences in which all prices are integer-valued, in which case the smallest possible value for the second spread is 1; we provide the proof for the case s2(Σ) = 2, but the s2(Σ) = 1 case is similar. We consider a sequence Σ such that after an initialization period there have been no executions, the buy book has 2 orders at price 10, and the sell book has 2 orders at price 12 and 2 orders at price 12 + y, where y is a positive integer that will be determined by the analysis. The original sequence Σ continues with a buy order with Δ = 0, followed by two buy orders with Δ = +1, then 2y sell orders with Δ = 0, and then 2y buy orders with Δ = +1. We first note that s2(Σ) = 2, there are 2y executions, all at price 12, the last bid is
11 and the last ask is 12. Next we analyze the modified sequence, in which we change the first buy order from Δ = 0 to Δ = +1. Now the next two buy orders with Δ = +1 are executed, and afterwards the bid is 11 and the ask is 12 + y. The 2y sell orders then accumulate at 12 + y, and after the next y buy orders the bid is at 12 + y − 1. Therefore, at the end we have lastbid(Σ′) = 12 + y − 1, lastask(Σ′) = 12 + y, close(Σ′) = 12 + y, and

average(Σ′) = (y/(y + 2))·(12 + y) + (2/(y + 2))·12.

Setting y = x + 2, we obtain the theorem for every property. We note that while this proof was based on the fact that there are two consecutive orders in the books which are far apart (by y), we can provide a slightly more complicated example in which all orders are close (at most 2 apart), yet one change still results in large differences.

7. SIMULATION STUDIES

The results presented so far paint a striking contrast between the absolute and relative price models: while the absolute model enjoys provably strong stability over any fixed event sequence, there exist at least specific sequences demonstrating great instability in the relative model. The worst-case nature of these results raises the question of the extent to which such differences could actually occur in real markets. In this section we provide indirect evidence on this question by presenting simulation results exploiting a rich source of real-market historical limit order sequence data. By interpreting arriving limit order prices as either absolute values, or by transforming them into differences with the current bid and ask (relative model), we can perform small modifications on the sequences and examine how different various outcomes (volume traded, average price, etc.)
would be from what actually occurred in the market. These simulations provide an empirical counterpart to the theory we have developed. We emphasize that all such simulations interpret the actual historical data as falling into either the absolute or relative model, and are meaningful only within the confines of such an interpretation. Nevertheless, we feel they provide valuable empirical insight into the potential (in)stability properties of modern equity limit order markets, and demonstrate that one's belief or hope in stability largely relies on an absolute model interpretation. We also investigate the empirical behavior of mixtures of absolute and relative prices.

7.1 Data

The historical data used in our simulations is commercially available limit order data from INET, the previously mentioned electronic exchange for NASDAQ stocks. Broadly speaking, this data consists of practically every single event on INET regarding the trading of an individual stock: every arriving limit order (price, volume, and sequence ID number), every execution, and every cancellation of a standing order, all timestamped in milliseconds. The data is sufficient to recreate the precise INET order book in a given stock on a given day and time. We will report stability properties for three stocks: Amazon, Nvidia, and Qualcomm (identified in the sequel by their tickers, AMZN, NVDA and QCOM). These three provide some range of liquidities (with QCOM having the greatest and NVDA the least liquidity on INET) and other trading properties. We note that the qualitative results of our simulations were similar for several other stocks we examined.

7.2 Methodology

For our simulations we employed order-book reconstruction code operating on the underlying raw data. The basic format of each experiment was the following:

1. Run the order book reconstruction code on the original INET data and compute the quantity of interest (volume traded, average price, etc.).
2. Make a small modification to a single order, and recompute the resulting value of the quantity of interest.

In the absolute model case, Step 2 is as simple as modifying the order in the original data and re-running the order book reconstruction. For the relative model, we must first pre-process the raw data and convert its prices to relative values, then make the modification and re-run the order book reconstruction on the relative values. The type of modification we examined was extremely small compared to the volume of orders placed in these stocks: namely, the deletion of a single randomly chosen order from the sequence. Although a deletion is not a 1-modification, its edit distance is 1 and we can apply Theorem 5.4. For each trading day examined, this single deleted order was selected among those arriving between 10 AM and 3 PM, and the quantities of interest were measured and compared at 3 PM. These times were chosen to include the busiest part of the trading day but avoid the half hour around the opening and closing of the official NASDAQ market (9:30 AM and 3:30 PM respectively), which are known to have different dynamics than the central portion of the day.

We run the absolute and relative model simulations on both the raw INET data and on a cleaned version of this data. In the cleaned version we remove all limit orders that were canceled in the actual market prior to their execution (along with the cancellations themselves). The reason is that such cancellations may often be the first step in the repositioning of orders, that is, cancellations of an order that are followed by the submission of a replacement order at a different price. Not removing canceled orders allows the possibility of modified simulations in which the "same" order¹ is executed twice, which may magnify instability effects. Again, it is clear that neither the raw nor the cleaned data can perfectly reflect what would have happened under the deleted orders in the actual
market. However, the results from both the raw data and the cleaned data are qualitatively similar. The results mainly differ, as expected, in the executed volume, where the instability results for the relative model are much more dramatic in the raw data.

¹ Here "same" is in quotes since the two orders will actually have different sequence ID numbers, which is what makes such repositioning activity impossible to reliably detect in the data.

7.3 Results

We begin with summary statistics capturing our overall stability findings. Each row of the tables below contains a ticker (e.g. AMZN) followed by either -R (for the uncleaned or raw data) or -C (for the data with canceled orders removed). For each of the approximately 250 trading days in 2003, 1000 trials were run in which a randomly selected order was deleted from the INET event sequence. For each quantity of interest (volume executed, average price, closing price and last bid), we show for both the absolute and relative model the average percentage change in the quantity induced by the deletion.

The results confirm rather strikingly the qualitative conclusions of the theory we have developed. In virtually every case (stock, raw or cleaned data, and quantity) the percentage change induced by a single deletion in the relative model is many orders of magnitude greater than in the absolute model, and shows that indeed "butterfly effects" may occur in a relative model market. As just one specific representative example, notice that for QCOM on the cleaned data, the relative model effect of just a single deletion on the closing price is in excess of a full percentage point. This is a variety of market impact entirely separate from the more traditional and expected kind generated by trading a large volume of shares.

                    volume executed        average price
Stock    Date       Rel       Abs          Rel       Abs
AMZN-R   2003       15.1%     0.04%        0.3%      0.0002%
AMZN-C   2003       0.69%     0.087%       0.36%     0.0007%
NVDA-R   2003       9.09%     0.05%        0.17%     0.0003%
NVDA-C   2003       0.73%     0.09%        0.35%     0.001%
QCOM-R   2003       16.94%    0.035%       0.21%     0.0002%
QCOM-C   2003       0.58%     0.06%        0.35%     0.0005%

                    closing price          last bid
Stock    Date       Rel       Abs          Rel       Abs
AMZN-R   2003       0.78%     0.0001%      0.78%     0.0007%
AMZN-C   2003       1.10%     0.077%       1.11%     0.001%
NVDA-R   2003       1.17%     0.002%       1.18%     0.08%
NVDA-C   2003       0.45%     0.0003%      0.45%     0.0006%
QCOM-R   2003       0.58%     0.0001%      0.58%     0.0004%
QCOM-C   2003       1.05%     0.0006%      1.05%     0.06%

In Figure 4 we examine how the change to one of the quantities, the average execution price, grows with the introduction of greater perturbations of the event sequence in the two models. Rather than deleting only a single order between 10 AM and 3 PM, in these experiments a growing number of randomly chosen deletions was performed, and the percentage change to the average price measured. As suggested by the theory we have developed, for the absolute model the change to the average price grows linearly with the number of deletions and remains very small (note the vastly different scales of the y-axis in the panels for the absolute and relative models in the figure). For the relative model, it is interesting to note that while small numbers of changes have large effects (often causing average execution price changes well in excess of 0.1 percent), the effect of large numbers of changes levels off quite rapidly and consistently.

We conclude with an examination of experiments with a mixture model. Even if one accepts a world in which traders behave in either an absolute or relative manner, one would be likely to claim that the market contains a mixture of both. We thus ran simulations in which each arriving order in the INET event streams was treated as an absolute price with probability α, and as a relative price with probability 1−α. Representative results for the average execution price in this mixture model are shown in Figure 5 for AMZN and NVDA. Perhaps as expected, we see a monotonic decrease in the percentage change (instability) as the fraction of absolute traders increases, with most of the
reduction already being realized by the introduction of just a small population of absolute traders. Thus even in a largely relative-price world, a small minority of absolute traders can have a greatly stabilizing effect. Similar behavior is found for closing price and last bid.

Figure 4: Percentage change to the average execution price (y-axis) as a function of the number of deletions to the sequence (x-axis). The left panel is for the absolute model, the right panel for the relative model, and each curve corresponds to a single day of QCOM trading in June 2004. Curves represent averages over 1000 trials.

Figure 5: Percentage change to the average execution price (y-axis) vs. probability of treating arriving INET orders as absolute prices (x-axis). Each curve corresponds to a single day of trading during a month of 2004 (left panel: AMZN-R, February 2004; right panel: NVDA-R, June 2004). Curves represent averages over 1000 trials.

For the executed volume in the mixture model, however, the findings are more curious. In Figure 6, we show how the percentage change to the executed volume varies with the absolute trader fraction α, for NVDA data that is both raw and cleaned of cancellations. We first see that for this quantity, unlike the others, the difference induced by the cleaned and uncleaned data is indeed dramatic, as already suggested by the summary statistics table above. But most intriguing is the fact that the stability is not monotonically increasing with α for either the cleaned or uncleaned data: the market with maximum instability is not a pure relative price market, but occurs at some nonzero value of α. It was in fact not obvious to us that sequences with this property could even be artificially constructed, much less that they would occur as actual market data. We have yet to find a satisfying explanation for this phenomenon and leave it to future research.

Figure 6: Percentage change to the executed volume (y-axis) vs. probability of treating arriving INET orders as absolute prices (x-axis). The left panel is for NVDA using the raw data that includes cancellations, while the right panel is on the cleaned data. Each curve corresponds to a single day of trading during June 2004. Curves represent averages over 1000 trials.

8. ACKNOWLEDGMENTS

We are grateful to Yuriy Nevmyvaka of Lehman Brothers in New York for the use of his INET order book reconstruction code, and for valuable comments on the work presented here. Yishay Mansour was supported in part by the IST Programme of the European Community under the PASCAL Network of Excellence, IST-2002-506778, by a grant from the Israel Science Foundation, and by an IBM faculty award.

9. REFERENCES

[1] D. Bertsimas and A. Lo. Optimal control of execution costs. Journal of Financial Markets, 1:1-50, 1998.
[2] B. Biais, L. Glosten, and C. Spatt. Market microstructure: a survey of microfoundations, empirical results and policy implications. Journal of Financial Markets, 8:217-264, 2005.
[3] J.-P. Bouchaud, M. Mezard, and M. Potters. Statistical properties of stock order books: empirical results and models. Quantitative Finance, 2:251-256, 2002.
[4] C. Cao, O. Hansch, and X. Wang. The informational content of an open limit order book, 2004. AFA 2005 Philadelphia Meetings, EFA Maastricht Meetings Paper No. 4311.
[5] R. Coggins, A. Blazejewski, and M. Aitken. Optimal trade execution of equities in a limit order market. In International Conference on Computational Intelligence for Financial Engineering, pages 371-378, March 2003.
[6] D. Farmer and S. Joshi. The price dynamics of common trading strategies. Journal of Economic Behavior and Organization, 29:149-171, 2002.
[7] J. Hasbrouck. Empirical market microstructure: Economic and statistical perspectives on the dynamics of trade in securities markets, 2004. Course notes, Stern School of Business, New York University.
[8] R. Kissell and M. Glantz. Optimal Trading Strategies. Amacom, 2003.
[9] S. Kakade, M. Kearns, Y. Mansour, and L. Ortiz. Competitive algorithms for VWAP and limit order trading. In Proceedings of the ACM Conference on Electronic Commerce, pages 189-198, 2004.
[10] Y. Nevmyvaka, Y. Feng, and M. Kearns. Reinforcement learning for optimized trade execution, 2006. Preprint.

(In)Stability Properties of Limit Order Dynamics

ABSTRACT

We study the stability properties of the dynamics of the standard continuous limit-order mechanism that is used in modern equity markets. We ask whether such mechanisms are susceptible to "butterfly effects": the infliction of large changes on common measures of market activity by only small perturbations of the order sequence. We show that the answer depends strongly on whether the market consists of "absolute" traders (who determine their prices independent of the current order book state) or "relative" traders (who determine their prices relative to the current bid and ask). We prove that while the absolute trader model enjoys provably strong stability properties, the relative trader model is vulnerable to great instability. Our theoretical results are supported by large-scale experiments using limit order data from INET, a large electronic exchange for NASDAQ stocks.

1. INTRODUCTION

In recent years there has been an explosive increase in the automation of modern equity markets. This increase has taken place both in the exchanges, which are increasingly computerized and offer sophisticated interfaces for order placement and management, and in the trading activity itself, which is ever more frequently undertaken by software. The so-called Electronic Communication Networks (or ECNs) that dominate trading in NASDAQ stocks are a common example of the automation of the exchanges. On the trading side, computer programs now are entrusted not only with the careful execution of large block trades for clients (sometimes referred to on Wall Street as program trading), but with the autonomous selection of stocks, direction
(long or short) and volumes to trade for profit (commonly referred to as statistical arbitrage).

The vast majority of equity trading is done via the standard limit order market mechanism. In this mechanism, continuous trading takes place via the arrival of limit orders specifying whether the party wishes to buy or sell, the volume desired, and the price offered. Arriving limit orders that are entirely or partially executable with the best offers on the other side are executed immediately, with any volume not immediately executable being placed in a queue (or book) ordered by price on the appropriate side (buy or sell). (A detailed description of the limit order mechanism is given in Section 3.) While traders have always been able to view the prices at the top of the buy and sell books (known as the bid and ask), a relatively recent development in certain exchanges is the real-time revelation of the entire order book: the complete distribution of orders, prices and volumes on both sides of the exchange. With this revelation has come the opportunity, and increasingly the need, for modeling and exploiting limit order data and dynamics. It is fair to say that market microstructure, as this area is generally known, is a topic commanding great interest both in the real markets and in the academic finance literature. The opportunities and needs span the range from the optimized execution of large trades to the creation of stand-alone "proprietary" strategies that attempt to profit from high-frequency microstructure signals.

In this paper we investigate a previously unexplored but fundamental aspect of limit order microstructure: the stability properties of the dynamics. Specifically, we are interested in the following natural question: To what extent are simple models of limit order markets either susceptible or immune to "butterfly effects", that is, the infliction of large changes in important activity statistics (such as the number of shares traded or the average price per share) by only minor perturbations of the order sequence?

To examine this question, we consider two stylized but natural models of the limit order arrival process. In the absolute price model, buyers and sellers arrive with limit order prices that are determined independently of the current state of the market (as represented by the order books), though they may depend on all manner of exogenous information or shocks, such as time, news events, announcements from the company whose shares are being traded, private signals or state of the individual traders, etc. This process models traditional "fundamentals"-based trading, in which market participants each have some inherent but possibly varying valuation for the good that in turn determines their limit price. In contrast, in the relative price model, traders express their limit order prices relative to the best price offered in their respective book (buy or sell). Thus, a buyer would encode their limit order price as an offset Δ (which may be positive, negative, or zero) from the current bid Pb, which is then translated to the limit price Pb + Δ.
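The relative encoding just described, and the feedback it can create, can be sketched in a few lines. This is a toy illustration, not the paper's simulator: the integer-cent prices, the function names, and the assumption that every arriving buy order improves the bid are all illustrative simplifications.

```python
# Toy sketch of absolute vs. relative limit-price formation.
# Prices are in integer cents; we track only the current bid and assume
# each arriving (relative) buy order becomes the new bid.

def absolute_limit_price(valuation_cents: int) -> int:
    """Absolute model: the price is the trader's own valuation,
    independent of the order book state."""
    return valuation_cents

def relative_limit_price(current_bid_cents: int, offset_cents: int) -> int:
    """Relative model: the price is an offset (positive, negative,
    or zero) from the current bid."""
    return current_bid_cents + offset_cents

def run_relative(initial_bid_cents: int, offsets: list[int]) -> list[int]:
    """Play a sequence of relative buy orders; under the simplifying
    assumption that each order becomes the new bid, a change to one
    order shifts all later prices."""
    bid, prices = initial_bid_cents, []
    for off in offsets:
        bid = relative_limit_price(bid, off)
        prices.append(bid)
    return prices

# Four penny-jumping-style buyers starting from a $25.10 bid:
print(run_relative(2510, [1, 1, 2, 1]))  # [2511, 2512, 2514, 2515]
# Perturb only the first offset: every subsequent price moves too.
print(run_relative(2510, [2, 1, 2, 1]))  # [2512, 2513, 2515, 2516]
# An absolute price, by contrast, ignores earlier orders entirely.
print(absolute_limit_price(2500))        # 2500
```

Under this toy dynamic a one-tick perturbation propagates to every later relative price, which is precisely the kind of amplification the stability question below asks about; in the absolute model the same perturbation leaves every other price untouched.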
Again, in addition to now depending on the state of the order books, prices may also depend on all manner of exogenous information. The relative price model can be viewed as modeling traders who, in addition to perhaps incorporating fundamental external information on the stock, may also position their orders strategically relative to the other orders on their side of the book. A common example of such strategic behavior is known as "penny-jumping" on Wall Street, in which a trader who has an interest in buying shares quickly, but still at a discount to placing a market order, will deliberately position their order just above the current bid. More generally, the entire area of modern execution optimization [9, 10, 8] has come to rely heavily on the careful positioning of limit orders relative to the current order book state. Note that such positioning may depend on more complex features of the order books than just the current bid and ask, but the relative model is a natural and simplified starting point. We remark that an alternate view of the two models is that all traders behave in a relative manner, but with "absolute" traders able to act only on a considerably slower time scale than the faster "relative" traders.

How do these two models differ? Clearly, given any fixed sequence of arriving limit order prices, we can choose to express these prices either as their original (absolute) values, or we can run the order book dynamical process and transform each order into a relative difference with the top of its book, and obtain identical results. The differences arise when we consider the stability question introduced above. Intuitively, in the absolute model a small perturbation in the arriving limit price sequence should have limited (but still some) effects on the subsequent evolution of the order books, since prices are determined independently. For the relative model this intuition is less clear. It seems possible that a small perturbation
could (for example) slightly modify the current bid, which in turn could slightly modify the price of the next arriving order, which could then slightly modify the price of the subsequent order, and so on, leading to an amplifying sequence of events.

Our main results demonstrate that these two models do indeed have dramatically different stability properties. We first show that for any fixed sequence of prices in the absolute model, the modification of a single order has a bounded and extremely limited impact on the subsequent evolution of the books. In particular, we define a natural notion of distance between order books and show that small modifications can result in only constant distance to the original books for all subsequent time steps. We then show that this implies that for almost any standard statistic of market activity (the executed volume, the average execution price, and many others) the statistic can be influenced only infinitesimally by small perturbations. In contrast, we show that the relative model enjoys no such stability properties. After giving specific (worst-case) relative price sequences in which small perturbations generate large changes in basic statistics (for example, altering the number of shares traded by a factor of two), we proceed to demonstrate that the difference in stability properties of the two models is more than merely theoretical. Using extensive INET (a major ECN for NASDAQ stocks) limit order data and order book reconstruction code, we investigate the empirical stability properties when the data is interpreted as containing either absolute prices, relative prices, or mixtures of the two. The theoretical predictions of stability and instability are strongly borne out by the subsequent experiments.

In addition to stability being of fundamental interest in any important dynamical system, we believe that the results described here provide food for thought on the topics of market impact and the "backtesting" of quantitative trading strategies (the attempt to determine hypothetical past performance using historical data). They suggest that one's confidence that trading "quietly" and in small volumes will have minimal market impact is linked to an implicit belief in an absolute price model. Our results, and the fact that in the real markets there is a large and increasing amount of relative behavior such as penny-jumping, would seem to cast doubt on such beliefs. Similarly, in a purely or largely relative-price world, backtesting even low-frequency, low-volume strategies could result in historical estimates of performance that are not only unrelated to future performance (the usual concern), but are not even accurate measures of a hypothetical past.

The outline of the paper follows. In Section 2 we briefly review the large literature on market microstructure. In Section 3 we describe the limit order mechanism and our formal models. Section 4 presents our most important theoretical result, the 1-Modification Theorem for the absolute price model. This theorem is applied in Section 5 to derive a number of strong stability properties in the absolute model. Section 6 presents specific examples establishing the worst-case instability of the relative model. Section 7 contains the simulation studies that largely confirm our theoretical findings on INET market data.

2. RELATED WORK

As was mentioned in the Introduction, market microstructure is an important and timely topic both in academic finance and on Wall Street, and consequently has a large and varied recent literature. Here we have space only to summarize the main themes of this literature and to provide pointers to further readings. To our knowledge the stability properties of detailed limit order microstructure dynamics have not been previously considered. (However, see Farmer and Joshi [6] for an example and survey of other price dynamic stability studies.) On the more theoretical side, there is a rich line of work examining what might be considered the game-theoretic properties of limit order markets. These works model traders and market-makers (who provide liquidity by offering both buy and sell quotes, and profit on the difference) by utility functions incorporating tolerance for risks of price movement, large positions and other factors, and examine the resulting equilibrium prices and behaviors. Common findings predict negative price impacts for large trades, and price effects for large inventory holdings by market-makers. An excellent and comprehensive survey of results in this area can be found in [2].

There is a similarly large body of empirical work on microstructure. Major themes include the measurement of price impacts, statistical properties of limit order books, and attempts to establish the informational value of order books [4]. A good overview of the empirical work can be found in [7]. Of particular note for our interests is [3], which empirically studies the distribution of arriving limit order prices in several prominent markets. This work takes a view of arriving prices analogous to our relative model, and establishes a power-law form for the resulting distributions. There is also a small but growing number of works examining market microstructure topics from a computer science perspective, including some focused on the use of microstructure in algorithms for optimized trade execution. Kakade et al.
[9] introduced limit order dynamics in competitive analysis for one-way and volume-weighted average price (VWAP) trading.\nSome recent papers have applied reinforcement learning methods to trade execution using order book properties as state variables [1, 5, 10].\n3.\nMICROSTRUCTURE PRELIMINARIES\n3.1 Formal Definitions\n4.\nTHE 1-MODIFICATION THEOREM\n5.\nABSOLUTE MODEL STABILITY\n5.1 Spread Bounds\n5.2 Spread Bounds for k-Modifications\n6.\nRELATIVE MODEL INSTABILITY\n7.\nSIMULATION STUDIES\n7.1 Data\n7.2 Methodology\n7.3 Results\nWe begin with summary statistics capturing our overall stability findings.\nEach row of the tables below contains a ticker (e.g. AMZN) followed by either - R (for the uncleaned or raw data) or - C (for the data with canceled orders removed).\nFor each of the approximately 250 trading days in 2003, 1000 trials were run in which a randomly selected order was deleted from the INET event sequence.\nFor each quantity of interest (volume executed, average price, closing price and last bid), we show for the both the absolute and 1Here \"same\" is in quotes since the two orders will actually have different sequence ID numbers, which is what makes such repositioning activity impossible to reliably detect in the data.\nrelative model the average percentage change in the quantity induced by the deletion.\nThe results confirm rather strikingly the qualitative conclusions of the theory we have developed.\nIn virtually every case (stock, raw or cleaned data, and quantity) the percentage change induced by a single deletion in the relative model is many orders of magnitude greater than in the absolute model, and shows that indeed \"butterfly effects\" may occur in a relative model market.\nAs just one specific representative example, notice that for QCOM on the cleaned data, the relative model effect of just a single deletion on the closing price is in excess of a full percentage point.\nThis is a variety of market impact entirely separate from the 
more traditional and expected kind generated by trading a large volume of shares.\nIn Figure 4 we examine how the change to one the quantities, the average execution price, grows with the introduction of greater perturbations of the event sequence in the two models.\nRather than deleting only a single order between 10 AM and 3 PM, in these experiments a growing number of randomly chosen deletions was performed, and the percentage change to the average price measured.\nAs suggested by the theory we have developed, for the absolute model the change to the average price grows linearly with the number of deletions and remains very small (note the vastly different scales of the y-axis in the panels for the absolute and relative models in the figure).\nFor the relative model, it is interesting to note that while small numbers of changes have large effects (often causing average execution price changes well in excess of 0.1 percent), the effects of large numbers of changes levels off quite rapidly and consistently.\nWe conclude with an examination of experiments with a mixture model.\nEven if one accepts a world in which traders behave in either an absolute or relative manner, one would be likely to claim that the market contains a mixture of both.\nWe thus ran simulations in which each arriving order in the INET event streams was treated as an absolute price with probability a, and as a relative price with probability 1 \u2212 a. 
Representative results for the average execution price in this mixture model are shown in Figure 5 for AMZN and NVDA.\nPerhaps as expected, we see a monotonic decrease in the percentage change (instability) as the fraction of absolute traders increases, with most of the reduction already being realized by the introduction of just a small population of absolute traders.\nThus even in a largely relative-price world, a\nFigure 4: Percentage change to the average execu\ntion price (y-axis) as a function of the number of deletions to the sequence (x-axis).\nThe left panel is for the absolute model, the right panel for the relative model, and each curve corresponds to a single day of QCOM trading in June 2004.\nCurves represent averages over 1000 trials.\nsmall minority of absolute traders can have a greatly stabilizing effect.\nSimilar behavior is found for closing price and last bid.\nFigure 5: Percentage change to the average execu\ntion price (y-axis) vs. probability of treating arriving INET orders as absolute prices (x-axis).\nEach curve corresponds to a single day of trading during a month of 2004.\nCurves represent averages over 1000 trials.\nFor the executed volume in the mixture model, however, the findings are more curious.\nIn Figure 6, we show how the percentage change to the executed volume varies with the absolute trader fraction a, for NVDA data that is both raw and cleaned of cancellations.\nWe first see that for this quantity, unlike the others, the difference induced by the cleaned and uncleaned data is indeed dramatic, as already suggested by the summary statistics table above.\nBut most intriguing is the fact that the stability is not monotonically increasing with a for either the cleaned or uncleaned data--the market with maximum instability is not a pure relative price market, but occurs at some nonzero value for a.\nIt was in fact not obvious to us that sequences with this property could even be artificially constructed, much less that they would 
occur as actual market data.\nWe have yet to find a satisfying explanation for this phenomenon and leave it to future research.","lvl-4":"(In) Stability Properties of Limit Order Dynamics\nABSTRACT\nWe study the stability properties of the dynamics of the standard continuous limit-order mechanism that is used in modern equity markets.\nWe ask whether such mechanisms are susceptible to \"butterfly effects\"--the infliction of large changes on common measures of market activity by only small perturbations of the order sequence.\nWe show that the answer depends strongly on whether the market consists of \"absolute\" traders (who determine their prices independent of the current order book state) or \"relative\" traders (who determine their prices relative to the current bid and ask).\nWe prove that while the absolute trader model enjoys provably strong stability properties, the relative trader model is vulnerable to great instability.\nOur theoretical results are supported by large-scale experiments using limit order data from INET, a large electronic exchange for NASDAQ stocks.\n1.\nINTRODUCTION\nIn recent years there has been an explosive increase in the automation of modern equity markets.\nThe vast majority of equity trading is done via the standard limit order market mechanism.\nIn this mechanism, continuous trading takes place via the arrival of limit orders specifying whether the party wishes to buy or sell, the volume desired, and the price offered.\nArriving limit orders that are entirely or partially executable with the best offers on the other side are executed immediately, with any volume not immediately executable being placed in an queue (or book) ordered by price on the appropriate side (buy or sell).\n(A detailed description of the limit order mechanism is given in Section 3.)\nWith this revelation has come the opportunity--and increasingly, the need--for modeling and exploiting limit order data and dynamics.\nIn this paper we investigate a previously 
unexplored but fundamental aspect of limit order microstructure: the stability properties of the dynamics.\nSpecifically, we are interested in the following natural question: To what extent are simple models of limit order markets either susceptible or immune to \"butterfly effects\"--that is, the infliction of large changes in important activity statistics (such as the\nnumber of shares traded or the average price per share) by only minor perturbations of the order sequence?\nTo examine this question, we consider two stylized but natural models of the limit order arrival process.\nThis process models traditional \"fundamentals\" - based trading, in which market participants each have some inherent but possibly varying valuation for the good that in turn determines their limit price.\nIn contrast, in the relative price model, traders express their limit order prices relative to the best price offered in their respective book (buy or sell).\nThe relative price model can be viewed as modeling traders who, in addition to perhaps incorporating fundamental external information on the stock, may also position their orders strategically relative to the other orders on their side of the book.\nMore generally, the entire area of modern execution optimization [9, 10, 8] has come to rely heavily on the careful positioning of limit orders relative to the current order book state.\nNote that such positioning may depend on more complex features of the order books than just the current bid and ask, but the relative model is a natural and simplified starting point.\nWe remark that an alternate view of the two models is that all traders behave in a relative manner, but with \"absolute\" traders able to act only on a considerably slower time scale than the faster \"relative\" traders.\nHow do these two models differ?\nClearly, given any fixed sequence of arriving limit order prices, we can choose to express these prices either as their original (absolute) values, or we can run the 
order book dynamical process and transform each order into a relative difference with the top of its book, and obtain identical results.

7.3 Results

We begin with summary statistics capturing our overall stability findings. Each row of the tables below contains a ticker (e.g.
AMZN) followed by either -R (for the uncleaned or raw data) or -C (for the data with canceled orders removed). For each of the approximately 250 trading days in 2003, 1000 trials were run in which a randomly selected order was deleted from the INET event sequence. For each quantity of interest (volume executed, average price, closing price and last bid), we show for both the absolute and relative models the average percentage change in the quantity induced by the deletion. (Footnote 1: Here "same" is in quotes since the two orders will actually have different sequence ID numbers, which is what makes such repositioning activity impossible to reliably detect in the data.)

The results confirm rather strikingly the qualitative conclusions of the theory we have developed. In virtually every case (stock, raw or cleaned data, and quantity) the percentage change induced by a single deletion in the relative model is many orders of magnitude greater than in the absolute model, and shows that indeed "butterfly effects" may occur in a relative model market. As just one specific representative example, notice that for QCOM on the cleaned data, the relative model effect of just a single deletion on the closing price is in excess of a full percentage point. This is a variety of market impact entirely separate from the more traditional and expected kind generated by trading a large volume of shares.

In Figure 4 we examine how the change to one of the quantities, the average execution price, grows with the introduction of greater perturbations of the event sequence in the two models. Rather than deleting only a single order between 10 AM and 3 PM, in these experiments a growing number of randomly chosen deletions was performed, and the percentage change to the average price measured. As suggested by the theory we have developed, for the absolute model the change to the average price grows linearly with the number of deletions and remains very small (note the vastly different
scales of the y-axis in the panels for the absolute and relative models in the figure). For the relative model, it is interesting to note that while small numbers of changes have large effects (often causing average execution price changes well in excess of 0.1 percent), the effects of large numbers of changes level off quite rapidly and consistently.

Figure 4: Percentage change to the average execution price (y-axis) as a function of the number of deletions to the sequence (x-axis). The left panel is for the absolute model, the right panel for the relative model, and each curve corresponds to a single day of QCOM trading in June 2004. Curves represent averages over 1000 trials.

We conclude with an examination of experiments with a mixture model. Even if one accepts a world in which traders behave in either an absolute or relative manner, one would be likely to claim that the market contains a mixture of both. We thus ran simulations in which each arriving order in the INET event streams was treated as an absolute price with probability a, and as a relative price with probability 1 − a. Representative results for the average execution price in this mixture model are shown in Figure 5 for AMZN and NVDA. Thus even in a largely relative-price world, a small minority of absolute traders can have a greatly stabilizing effect. Similar behavior is found for closing price and last bid.

Figure 5: Percentage change to the average execution price (y-axis) vs. probability of treating arriving INET orders as absolute prices (x-axis). Each curve corresponds to a single day of trading during a month of 2004. Curves represent averages over 1000 trials.

For the executed volume in the mixture model, however, the findings are more curious. In Figure 6, we show how the percentage change to the executed volume varies with the absolute trader fraction a, for NVDA data that is both raw and cleaned of cancellations. But most intriguing is the fact that the stability is not monotonically increasing with a for either the cleaned or uncleaned data: the market with maximum instability is not a pure relative price market, but occurs at some nonzero value of a. It was in fact not obvious to us that sequences with this property could even be artificially constructed, much less that they would occur as actual market data.

(In) Stability Properties of Limit Order Dynamics

ABSTRACT

We study the stability properties of the dynamics of the standard continuous limit-order mechanism that is used in modern equity markets. We ask whether such mechanisms are susceptible to "butterfly effects": the infliction of large changes on common measures of market activity by only small perturbations of the order sequence. We show that the answer depends strongly on whether the market consists of "absolute" traders (who determine their prices independent of the current order book state) or "relative" traders (who determine their prices relative to the current bid and ask). We prove that while the absolute trader model enjoys provably strong stability properties, the relative trader model is vulnerable to great instability. Our theoretical results are supported by large-scale experiments using limit order data from INET, a large electronic exchange for NASDAQ stocks.

1. INTRODUCTION

In recent years there has been an explosive increase in the automation of modern equity markets. This increase has taken place both in the
exchanges, which are increasingly computerized and offer sophisticated interfaces for order placement and management, and in the trading activity itself, which is ever more frequently undertaken by software. The so-called Electronic Communication Networks (or ECNs) that dominate trading in NASDAQ stocks are a common example of the automation of the exchanges. On the trading side, computer programs now are entrusted not only with the careful execution of large block trades for clients (sometimes referred to on Wall Street as program trading), but with the autonomous selection of stocks, direction (long or short) and volumes to trade for profit (commonly referred to as statistical arbitrage).

The vast majority of equity trading is done via the standard limit order market mechanism. In this mechanism, continuous trading takes place via the arrival of limit orders specifying whether the party wishes to buy or sell, the volume desired, and the price offered. Arriving limit orders that are entirely or partially executable with the best offers on the other side are executed immediately, with any volume not immediately executable being placed in a queue (or book) ordered by price on the appropriate side (buy or sell). (A detailed description of the limit order mechanism is given in Section 3.)

While traders have always been able to view the prices at the top of the buy and sell books (known as the bid and ask), a relatively recent development in certain exchanges is the real-time revelation of the entire order book: the complete distribution of orders, prices and volumes on both sides of the exchange. With this revelation has come the opportunity, and increasingly the need, for modeling and exploiting limit order data and dynamics. It is fair to say that market microstructure, as this area is generally known, is a topic commanding great interest both in the real markets and in the academic finance literature. The opportunities and needs span the range from the
optimized execution of large trades to the creation of stand-alone "proprietary" strategies that attempt to profit from high-frequency microstructure signals.

In this paper we investigate a previously unexplored but fundamental aspect of limit order microstructure: the stability properties of the dynamics. Specifically, we are interested in the following natural question: To what extent are simple models of limit order markets either susceptible or immune to "butterfly effects", that is, the infliction of large changes in important activity statistics (such as the number of shares traded or the average price per share) by only minor perturbations of the order sequence?

To examine this question, we consider two stylized but natural models of the limit order arrival process. In the absolute price model, buyers and sellers arrive with limit order prices that are determined independently of the current state of the market (as represented by the order books), though they may depend on all manner of exogenous information or shocks, such as time, news events, announcements from the company whose shares are being traded, private signals or state of the individual traders, etc. This process models traditional "fundamentals"-based trading, in which market participants each have some inherent but possibly varying valuation for the good that in turn determines their limit price.

In contrast, in the relative price model, traders express their limit order prices relative to the best price offered in their respective book (buy or sell). Thus, a buyer would encode their limit order price as an offset Δ (which may be positive, negative, or zero) from the current bid Pb, which is then translated to the limit price Pb + Δ.
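The distinction between the two arrival models can be made concrete with a small sketch (a hypothetical illustration under our own simplifications, not the paper's reconstruction code): a submitted value is read either as an outright price (absolute) or as an offset from the top of the order's own side of the book (relative), and an execution always occurs at the standing order's price. Prices here are integers in cents, and every order is for one share.

```python
import heapq

class Book:
    """Minimal one-share-per-order limit order book (illustrative only)."""
    def __init__(self):
        self.buys = []        # max-heap of buy prices, stored negated
        self.sells = []       # min-heap of sell prices
        self.executions = []  # prices at which shares actually traded

    def bid(self):
        return -self.buys[0] if self.buys else None

    def ask(self):
        return self.sells[0] if self.sells else None

    def submit(self, side, value, model="absolute"):
        # Absolute model: the value IS the limit price.
        # Relative model: the value is an offset from the current
        # bid (for buys) or ask (for sells), and may be negative.
        if model == "absolute":
            price = value
        else:
            top = self.bid() if side == "buy" else self.ask()
            price = (top if top is not None else 0) + value
        if side == "buy":
            if self.sells and self.sells[0] <= price:
                # Execute at the standing sell order's price.
                self.executions.append(heapq.heappop(self.sells))
            else:
                heapq.heappush(self.buys, -price)
        else:
            if self.buys and -self.buys[0] >= price:
                # Execute at the standing buy order's price.
                self.executions.append(-heapq.heappop(self.buys))
            else:
                heapq.heappush(self.sells, price)

book = Book()
book.submit("buy", 2404)             # absolute: becomes the bid ($24.04)
book.submit("sell", 2407)            # absolute: becomes the ask ($24.07)
book.submit("buy", 1, "relative")    # bid + 1 cent: "penny-jumping" to 2405
book.submit("sell", -3, "relative")  # ask - 3 = 2404: crosses the bid
print(book.executions, book.bid(), book.ask())  # prints [2405] 2404 2407
```

Note that the crossing relative sell executes at 2405, the standing bid's price, not at its own price of 2404, mirroring the execution rule described in Section 3; the same value sequence interpreted under the other model would produce a different book.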
Again, in addition to now depending on the state of the order books, prices may also depend on all manner of exogenous information. The relative price model can be viewed as modeling traders who, in addition to perhaps incorporating fundamental external information on the stock, may also position their orders strategically relative to the other orders on their side of the book. A common example of such strategic behavior is known as "penny-jumping" on Wall Street, in which a trader who has an interest in buying shares quickly, but still at a discount to placing a market order, will deliberately position their order just above the current bid. More generally, the entire area of modern execution optimization [9, 10, 8] has come to rely heavily on the careful positioning of limit orders relative to the current order book state. Note that such positioning may depend on more complex features of the order books than just the current bid and ask, but the relative model is a natural and simplified starting point. We remark that an alternate view of the two models is that all traders behave in a relative manner, but with "absolute" traders able to act only on a considerably slower time scale than the faster "relative" traders.

How do these two models differ? Clearly, given any fixed sequence of arriving limit order prices, we can choose to express these prices either as their original (absolute) values, or we can run the order book dynamical process and transform each order into a relative difference with the top of its book, and obtain identical results. The differences arise when we consider the stability question introduced above. Intuitively, in the absolute model a small perturbation in the arriving limit price sequence should have limited (but still some) effects on the subsequent evolution of the order books, since prices are determined independently. For the relative model this intuition is less clear. It seems possible that a small perturbation
could (for example) slightly modify the current bid, which in turn could slightly modify the price of the next arriving order, which could then slightly modify the price of the subsequent order, and so on, leading to an amplifying sequence of events.

Our main results demonstrate that these two models do indeed have dramatically different stability properties. We first show that for any fixed sequence of prices in the absolute model, the modification of a single order has a bounded and extremely limited impact on the subsequent evolution of the books. In particular, we define a natural notion of distance between order books and show that small modifications can result in only constant distance to the original books for all subsequent time steps. We then show that this implies that for almost any standard statistic of market activity (the executed volume, the average execution price, and many others), the statistic can be influenced only infinitesimally by small perturbations.

In contrast, we show that the relative model enjoys no such stability properties. After giving specific (worst-case) relative price sequences in which small perturbations generate large changes in basic statistics (for example, altering the number of shares traded by a factor of two), we proceed to demonstrate that the difference in stability properties of the two models is more than merely theoretical. Using extensive INET (a major ECN for NASDAQ stocks) limit order data and order book reconstruction code, we investigate the empirical stability properties when the data is interpreted as containing either absolute prices, relative prices, or mixtures of the two. The theoretical predictions of stability and instability are strongly borne out by the subsequent experiments.

In addition to stability being of fundamental interest in any important dynamical system, we believe that the results described here provide food for thought on the topics of market impact and the "backtesting"
of quantitative trading strategies (the attempt to determine hypothetical past performance using historical data). They suggest that one's confidence that trading "quietly" and in small volumes will have minimal market impact is linked to an implicit belief in an absolute price model. Our results, and the fact that in the real markets there is a large and increasing amount of relative behavior such as penny-jumping, would seem to cast doubt on such beliefs. Similarly, in a purely or largely relative-price world, backtesting even low-frequency, low-volume strategies could result in historical estimates of performance that are not only unrelated to future performance (the usual concern), but are not even accurate measures of a hypothetical past.

The outline of the paper follows. In Section 2 we briefly review the large literature on market microstructure. In Section 3 we describe the limit order mechanism and our formal models. Section 4 presents our most important theoretical result, the 1-Modification Theorem for the absolute price model. This theorem is applied in Section 5 to derive a number of strong stability properties in the absolute model. Section 6 presents specific examples establishing the worst-case instability of the relative model. Section 7 contains the simulation studies that largely confirm our theoretical findings on INET market data.

2. RELATED WORK

As was mentioned in the Introduction, market microstructure is an important and timely topic both in academic finance and on Wall Street, and consequently has a large and varied recent literature. Here we have space only to summarize the main themes of this literature and to provide pointers to further readings. To our knowledge the stability properties of detailed limit order microstructure dynamics have not been previously considered. (However, see Farmer and Joshi [6] for an example and survey of other price dynamic stability studies.)

On the more theoretical side, there is a rich
line of work examining what might be considered the game-theoretic properties of limit order markets.\nThese works model traders and market-makers (who provide liquidity by offering both buy and sell quotes, and profit on the difference) by utility functions incorporating tolerance for risks of price movement, large positions and other factors, and examine the resulting equilibrium prices and behaviors.\nCommon findings predict negative price impacts for large trades, and price effects for large inventory holdings by market-makers.\nAn excellent and comprehensive survey of results in this area can be found in [2].\nThere is a similarly large body of empirical work on microstructure.\nMajor themes include the measurement of price impacts, statistical properties of limit order books, and attempts to establish the informational value of order books [4].\nA good overview of the empirical work can be found in [7].\nOf particular note for our interests is [3], which empirically studies the distribution of arriving limit order prices in several prominent markets.\nThis work takes a view of arriving prices analogous to our relative model, and establishes a power-law form for the resulting distributions.\nThere is also a small but growing number of works examining market microstructure topics from a computer science perspective, including some focused on the use of microstructure in algorithms for optimized trade execution.\nKakade et al. 
[9] introduced limit order dynamics in competitive analysis for one-way and volume-weighted average price (VWAP) trading.\nSome recent papers have applied reinforcement learning methods to trade execution using order book properties as state variables [1, 5, 10].\n3.\nMICROSTRUCTURE PRELIMINARIES\nThe following expository background material is adapted from [9].\nThe market mechanism we examine in this paper is driven by the simple and standard concept of a limit order.\nSuppose we wish to purchase 1000 shares of Microsoft (MSFT) stock.\nIn a limit order, we specify not only the desired volume (1000 shares), but also the desired price.\nSuppose that MSFT is currently trading at roughly $24.07 a share (see Figure 1, which shows an actual snapshot of an MSFT order book on INET), but we are only willing to buy the 1000 shares at $24.04 a share or lower.\nWe can choose to submit a limit order with this specification, and our order will be placed in a queue called the buy order book, which is ordered by price, with the highest offered unexecuted buy price at the top (often referred to as the bid).\nIf there are multiple limit orders at the same price, they are ordered by time of arrival (with older orders higher in the book).\nIn the example provided by Figure 1, our order would be placed immediately after the extant order for 5,503 shares at $24.04; though we offer the same price, this order has arrived before ours.\nSimilarly, a sell order book for sell limit orders is maintained, this time with the lowest sell price offered (often referred to as the ask) at its top.\nThus, the order books are sorted from the most competitive limit orders at the top (high buy prices and low sell prices) down to less competitive limit orders.\nThe bid and ask prices together are sometimes referred to as the inside market, and the difference between them as the spread.\nBy definition, the order books always consist exclusively of unexecuted orders--they are queues of orders hopefully 
waiting for the price to move in their direction.

Figure 1: Sample INET order books for MSFT.

How then do orders get (partially) executed? If a buy (sell, respectively) limit order comes in above the ask (below the bid, respectively) price, then the order is matched with orders on the opposing books until either the incoming order's volume is filled, or no further matching is possible, in which case the remaining incoming volume is placed in the books. For instance, suppose in the example of Figure 1 a buy order for 2000 shares arrived with a limit price of $24.08. This order would be partially filled by the two 500-share sell orders at $24.069 in the sell books, the 500-share sell order at $24.07, and the 200-share sell order at $24.08, for a total of 1700 shares executed. The remaining 300 shares of the incoming buy order would become the new bid of the buy book at $24.08. It is important to note that the prices of executions are the prices specified in the limit orders already in the books, not the prices of the incoming order that is immediately executed. Thus in this example, the 1700 executed shares would be at different prices. Note that this also means that in a pure limit order exchange such as INET, market orders can be "simulated" by limit orders with extreme price values. In exchanges such as INET, any order can be withdrawn or canceled by the party that placed it any time prior to execution.

Every limit order arrives atomically and instantaneously: there is a strict temporal sequence in which orders arrive, and two orders can never arrive simultaneously. This gives rise to the definition of the last price of the exchange, which is simply the last price at which the exchange executed an order. It is this quantity that is usually meant when people casually refer to the (ticker) price of a stock.

3.1 Formal Definitions

We now provide a formal model for the limit order process described above. In this model, limit orders arrive in a
temporal sequence, with each order specifying its limit price and an indication of its type (buy or sell). Like the actual exchanges, we also allow cancellation of a standing (unexecuted) order in the books any time prior to its execution. Without loss of generality we limit attention to a model in which every order is for a single share; large order volumes can be represented by 1-share sequences.

DEFINITION 3.1. Let E = (Q1, ..., Qn) be a sequence of limit orders, where each Qi has the form (ni, ti, vi). Here ni is an order identifier, ti is the order type (buy, sell, or cancel), and vi is the limit order value. In the case that ti is a cancel, ni matches a previously placed order and vi is ignored.

We have deliberately called vi in the definition above the limit order value rather than price, since our two models will differ in their interpretation of vi (as being absolute or relative). In the absolute model, we do indeed interpret vi as simply being the price of the limit order. In the relative model, if the current order book configuration is (A, B) (where A is the sell and B the buy book), the price of the order is ask(A) + vi if ti is sell, and bid(B) + vi if ti is buy, where by ask(X) and bid(X) we denote the price of the order at the top of the book X.
(Note vi can be negative.)

Our main interest in this paper is the effects that the modification of a small number of limit orders can have on the resulting dynamics. For simplicity we consider only modifications to the limit order values, but our results generalize to any modification.

DEFINITION 3.2. A k-modification of E is a sequence E' such that for exactly k indices i1, ..., ik we have vij ≠ v'ij, tij = t'ij, and nij = n'ij. For every index ℓ ≠ ij, j ∈ {1, ..., k}, we have vℓ = v'ℓ.

We now define the various quantities whose stability properties we examine in the absolute and relative models. All of these are standard quantities of common interest in financial markets.

• volume(E): Number of shares executed (traded) in the sequence E.
• average(E): Average execution price.
• close(E): Price of the last (closing) execution.
• lastbid(E): Bid at the end of the sequence.
• lastask(E): Ask at the end of the sequence.

4. THE 1-MODIFICATION THEOREM

In this section we provide our most important technical result. It shows that in the absolute model, the effects that the modification of a single order has on the resulting evolution of the order books are extremely limited. We then apply this result to derive strong stability results for all of the aforementioned quantities in the absolute model.

Throughout this section, we consider an arbitrary order sequence E in the absolute model, and any 1-modification E' of E. At any point (index) i in the two sequences we shall use (A1, B1) to denote the sell and buy books (respectively) in E, and (A2, B2) to denote the sell and buy books in E'; for notational convenience we omit explicitly superscripting by the current index i. We will shortly establish that at all times i, (A1, B1) and (A2, B2) are very "close". Although the order books are sorted by price, we will use (for example) A1 ∪ {a2} = A2 to indicate that A2 contains an order at some price a2 that is not present in A1, but that otherwise A1 and A2
are identical; thus deleting the order at a2 in A2 would render the books the same. Similarly, B1 ∪ {b2} = B2 ∪ {b1} means B1 contains an order at price b1 not present in B2, B2 contains an order at price b2 not present in B1, and that otherwise B1 and B2 are identical.

Using this notation, we now define a set of stable system states, where each state is composed from the order books of the original and the modified sequences. Shortly we show that if we change only one order's value (price), we remain in this set for any sequence of limit orders.

DEFINITION 4.1. Let ab be the set of all states (A1, B1) and (A2, B2) such that A1 = A2 and B1 = B2. Let āb be the set of states such that A1 ∪ {a2} = A2 ∪ {a1}, where a1 ≠ a2, and B1 = B2. Let ab̄ be the set of states such that B1 ∪ {b2} = B2 ∪ {b1}, where b1 ≠ b2, and A1 = A2. Let āb̄ be the set of states in which A1 = A2 ∪ {a1} and B1 = B2 ∪ {b1}, or in which A2 = A1 ∪ {a2} and B2 = B1 ∪ {b2}. Finally, we define S = ab ∪ āb ∪ ab̄ ∪ āb̄ as the set of stable states.

THEOREM 4.1. (1-Modification Theorem) Consider any sequence of orders E and any 1-modification E' of E. Then the order books (A1, B1) and (A2, B2) determined by E and E' lie in the set S of stable states at all times.

Figure 2: Diagram representing the set S of stable states and the possible transitions among them after the change.

The idea of the proof of this theorem is contained in Figure 2, which shows a state transition diagram labeled by the categories of stable states. This diagram describes all transitions that can take place after the arrival of the order on which E and E' differ. The following establishes that immediately after the arrival of this differing order, the state lies in S.

LEMMA 4.2. If at any time the current books (A1, B1) and (A2, B2) are in the set ab (and thus identical), then modifying the price of the next order keeps the state in S.
PROOF. Suppose the arriving order is a sell order and we change it from a1 to a2; assume without loss of generality that a1 > a2. If neither order is executed immediately, then we move to state āb; if both of them are executed then we stay in state ab; and if only a2 is executed then we move to state āb̄. The analysis of an arriving buy order is similar.

Following the arrival of their only differing order, E and E' are identical. We now give a sequence of lemmas showing that following the initial difference covered by Lemma 4.2, the state remains in S forever on the remaining (identical) sequence. We first show that from state āb we remain in S regardless of the next order. The intuition of this lemma is demonstrated in Figure 3.

Figure 3: The state diagram when starting at state āb. This diagram provides the intuition of Lemma 4.3.

LEMMA 4.3. If the current state is in the set āb, then for any order the state will remain in S.
PROOF. We first provide the analysis for the case of an arriving sell order. Note that in āb the buy books are identical (B1 = B2). Thus either the arriving sell order is executed with the same buy order in both buy books, or it is not executed in both buy books. In the first case, the buy books remain identical (the bid is executed in both) and the sell books remain unchanged. In the second case, the buy books remain unchanged and identical, and the sell books have the new sell order added to both of them (and thus still differ by one order).

Next we provide an analysis of the more subtle case where the arriving item is a buy order. For this case we need to take care of several different scenarios. The first is when the top of both sell books (the ask) is identical. Then regardless of whether the new buy order is executed or not, the state remains in āb (the analysis is similar to an arriving sell order). We are left to deal with the case where ask(A1) and ask(A2) are different. Here we discuss two subcases: (a) ask(A1) = a1 and ask(A2) = a2, and (b) ask(A1) = a1 and ask(A2) = a'. Here a1 and a2 are as in the definition of āb in Definition 4.1, and a' is some other price.

For subcase (a), by our assumption a1 < a2, so either (1) both asks get executed, the sell books become identical, and we move to state ab; (2) neither ask is executed and we remain in state āb; or (3) only ask(A1) = a1 is executed, in which case we move to state āb̄ with A2 = A1 ∪ {a2} and B2 = B1 ∪ {b2}, where b2 is the arriving buy order price. For subcase (b), either (1) the buy order is executed in neither sell book and we remain in state āb; or (2) the buy order is executed in both sell books and we stay in state āb with A1 ∪ {a'} = A2 ∪ {a2}; or (3) only ask(A1) = a1 is executed and we move to state āb̄.

LEMMA 4.4. If the current state is in the set ab̄, then for any order the state will remain in S.
LEMMA 4.5. If the current configuration is in the set āb̄, then for any order the state will remain in S.

The proofs of these two lemmas are omitted, but are similar in spirit to that of Lemma 4.3. The next and final lemma deals with cancellations.

LEMMA 4.6. If the current order book state lies in S, then following the arrival of a cancellation it remains in S.

PROOF. When a cancellation order arrives, one of the following possibilities holds: (1) the order is still in both sets of books, (2) it is in neither of them, or (3) it is only in one of them. For the first two cases it is easy to see that the cancellation's effect is identical on both sets of books, and thus the state remains unchanged. For the case when the order appears only in one set of books, without loss of generality we assume that the cancellation cancels a buy order at b1. Rather than removing b1 from the book we can change it to have price 0, meaning this buy order will never be executed and is effectively canceled. Now regardless of the state we were in, b1 is still only in one buy book (but with a different price), and thus we remain in the same state in S.

The proof of Theorem 4.1 follows from the above lemmas.

5. ABSOLUTE MODEL STABILITY

In this section we apply the 1-Modification Theorem to show strong stability properties for the absolute model. We begin with an examination of the executed volume.

LEMMA 5.1. Let E be any sequence and E' be any 1-modification of E. Then the set of the executed orders (ID numbers) generated by the two sequences differs by at most 2.

PROOF. By Theorem 4.1 we know that at each stage the books differ by at most two orders. Now since the union of the IDs of the executed orders and the orders in the books is always identical for both sequences, this implies that the executed orders can differ by at most two.

COROLLARY 5.2. Let E be any sequence and E' be any k-modification of E. Then the set of the executed orders (ID numbers)
generated by the two sequences differs by at most 2k.

An order sequence E' is a k-extension of E if E can be obtained by deleting any k orders in E'.

LEMMA 5.3. Let E be any sequence and let E' be any k-extension of E. Then the sets of executed orders generated by E and E' differ by at most 2k.

This lemma is the key to obtaining our main absolute model volume result below. We use edit(E, E') to denote the standard edit distance between the sequences E and E': the minimal number of substitutions, insertions, and deletions of orders needed to change E into E'.

THEOREM 5.4. Let E and E' be any absolute model order sequences. Then if edit(E, E') ≤ k, the sets of executed orders generated by E and E' differ by at most 4k. In particular, |volume(E) − volume(E')| ≤ 4k.

PROOF. We first define the sequence Ẽ, which is the intersection of E and E'. Since E and E' are at most k apart, k insertions suffice to change Ẽ into either E or E', and by Lemma 5.3 its set of executed orders is at most 2k from each. Thus the sets of executed orders of E and E' are at most 4k apart.

5.1 Spread Bounds

Theorem 5.4 establishes strong stability for executed volume in the absolute model. We now turn to the quantities that involve execution prices as opposed to volume alone, namely average(E), close(E), lastbid(E) and lastask(E). For these results, unlike executed volume, a condition must hold on E in order for stability to occur. This condition is expressed in terms of a natural measure of the spread of the market, or the gap between the buyers and sellers. We motivate this condition by first showing that without it, changing just one order can change average(E) by any positive amount x.

LEMMA 5.5. There exists E such that for any x > 0, there is a 1-modification E' of E such that average(E') = average(E) + x.
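The construction behind Lemma 5.5 (spelled out in the proof that follows) can be replayed with a toy absolute-model matcher. The sketch below is our own illustration, not the paper's code; it adopts the usual convention that an incoming order trades at the standing order's price.

```python
import heapq

def run_absolute(orders):
    """Replay a list of (side, price) limit orders in the absolute model.

    An incoming order executes against the best standing opposite order
    at that standing order's price; otherwise it joins its book.
    Returns the list of execution prices."""
    buys, sells, prices = [], [], []           # max-heap (negated), min-heap
    for side, price in orders:
        if side == 'buy':
            if sells and price >= sells[0]:
                prices.append(heapq.heappop(sells))    # trade at the ask
            else:
                heapq.heappush(buys, -price)
        else:
            if buys and price <= -buys[0]:
                prices.append(-heapq.heappop(buys))    # trade at the bid
            else:
                heapq.heappush(sells, price)
    return prices

p, x, n = 10, 5, 4
# E: alternating sell@p / buy@(p+x); E': only the first sell is modified.
E  = [('sell', p) if i % 2 == 0 else ('buy', p + x) for i in range(2 * n)]
Ep = [('sell', p + 1 + x)] + E[1:]             # a 1-modification of E
avg = lambda v: sum(v) / len(v)
# avg(run_absolute(E)) = p, while avg(run_absolute(Ep)) = p + x
```

In E every buy crosses the standing sell at p, so all trades print at p; in the 1-modification the first sell sits untouched at p + 1 + x and every later sell crosses a standing buy at p + x, shifting the average by exactly x.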
PROOF. Let E be a sequence of alternating sell and buy orders in which each seller offers p and each buyer offers p + x, and the first order is a sell. Then all executions take place at the ask, which is always p, and thus average(E) = p. Now suppose we modify only the first sell order to be at price p + 1 + x. This initial sell order will never be executed, and now all executions take place at the bid, which is always p + x.

Similar instability results can be shown to hold for the other price-based quantities. This motivates the introduction of a quantity we call the second spread of the order books, defined as the difference between the prices of the second order in the sell book and the second order in the buy book (as opposed to the bid-ask difference, which is commonly called the spread). We note that in a liquid stock, such as those we examine experimentally in Section 7, the second spread will typically be quite small and in fact almost always equal to the spread. In this subsection we consider changes in the sequence only after an initialization period, and sequences such that the second spread is always defined after the time we make a change. We define s2(E) to be the maximum second spread in the sequence E following the change.

THEOREM 5.6. Let E be a sequence and let E' be any 1-modification of E. Then

1. |lastbid(E) − lastbid(E')| ≤ s2(E)
2. |lastask(E) − lastask(E')| ≤ s2(E)

where s2(E) is the maximum over the second spread in E following the 1-modification.

PROOF. We provide the proof for the last bid; the proof for the last ask is similar. The proof relies on Theorem 4.1 and considers states in the stable set S.
For states ab and āb, we have that the bid is identical. Let bid(X), sb(X), and ask(X) be the bid, the second-highest buy order, and the ask under sequence X. Now recall that in state ab̄ the sell books are identical, and the two buy books are identical except for one order. Thus

bid(E') ≤ ask(E') = ask(E).

Now it remains to bound bid(E). Here we use the fact that the bid of the modified sequence is at least the second-highest buy order in the original sequence, due to the fact that the books differ in only one order. Since

sb(E) ≤ bid(E') and sb(E) ≤ bid(E) ≤ ask(E),

both directions of the difference are bounded: |bid(E) − bid(E')| ≤ ask(E) − sb(E) ≤ s2(E).

In state āb̄ we have that for one sequence the books contain an additional buy order and an additional sell order. First suppose that the books containing the additional orders are those of the original sequence E. Now if the bid is not the additional order we are done; otherwise we have the following:

bid(E) − bid(E') ≤ bid(E) − sb(E) ≤ ask(E) − sb(E) ≤ s2(E),

where sb(E) ≤ bid(E') since the original buy book has only one additional order. Now assume that the books with the additional orders are those of the modified sequence E'. We have

bid(E') − bid(E) ≤ ask(E) − bid(E) ≤ s2(E),

where we used the fact that ask(E) ≥ ask(E') ≥ bid(E') since the modified sequence has an additional sell order. Similarly we have that bid(E) ≤ bid(E') since the modified buy book contains an additional order, so the bids can differ in this direction only.

We note that the proof of Theorem 5.6 actually establishes that the bid and ask of the original and modified sequences are within s2(E) at all times. Next we provide a technical lemma which relates the (first) spread of the modified sequence to the second spread of the original sequence.

LEMMA 5.7. Let E be a sequence and let E' be any 1-modification of E. Then the spread of E' is bounded by s2(E).

PROOF. By the 1-Modification Theorem, we know that the books of the modified sequence and the original sequence can differ by at most one order in each book (buy and sell). Therefore, the second-highest buy order in the original sequence is always at most the bid in the modified sequence, and the second-lowest sell order in the original
sequence is always at least the ask of the modified sequence.

We are now ready to state a stability result for the average execution price in the absolute model. It establishes that in highly liquid markets, where the executed volume is large and the spread small, the average price is highly stable.

THEOREM 5.8. Let E be a sequence and let E' be any 1-modification of E. Then

|average(E) − average(E')| ≤ 2(pmax + s2(E))/volume(E) + s2(E),

where pmax is the highest execution price in E.

PROOF. The proof will show that every execution in E, besides the execution of the modified order and the last execution, has a matching execution in E' with a price different by at most s2(E), and will use the fact that pmax + s2(E) is a bound on the price in E'. Referring to the proof of the 1-Modification Theorem, suppose we are in state āb̄, where we have in one sequence (which can be either E or E') an additional buy order b and an additional sell order a. Without loss of generality we assume that the sequence with the additional orders is E. If the next execution does not involve a or b then clearly we have the same execution in both E and E'. Suppose that it involves a; there are two possibilities. Either a is the modified order, in which case we change the average price difference by (pmax + s2(E))/volume(E), and this can happen only once; or a was executed before in E' and both executions involve an order whose limit price is a.
By Lemma 5.7 the spread of both sequences is bounded by s2(E), which implies that the price of the execution in E' was at most a + s2(E), while the execution in E is at price a, and thus the prices differ by at most s2(E). In states āb and ab̄, as long as we have concurrent executions in the two sequences, we know that the prices can differ by at most s2(E). If we have an execution in only one sequence, we either match it in state āb̄, or charge it (pmax + s2(E))/volume(E) if we end at state āb̄. If we end in state ab, āb or ab̄, then every execution in states āb or ab̄ was matched to an execution in state āb̄. If we end up in state āb̄, we have the one execution that is not matched and thus we charge it (pmax + s2(E))/volume(E).

We next give a stability result for the closing price. We first provide a technical lemma regarding the prices of consecutive executions.

LEMMA 5.9. Let E be any sequence. Then the prices of two consecutive executions in E differ by at most s2(E).

PROOF. Suppose the first execution is taken at time t; its price is bounded below by the current bid and above by the current ask. After this execution the bid is at least the second-highest buy order at time t (if the former bid was executed and no higher buy orders arrived, and higher otherwise). Similarly, the ask is at most the second-lowest sell order at time t. Therefore, the next execution price is at least the second bid at time t and at most the second ask at time t, which is at most s2(E) away from the bid/ask at time t.
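The second spread (and more generally the k-spread used in Section 5.2) can be read directly off book snapshots. The helper below is our own illustrative sketch, with books given simply as lists of limit prices:

```python
def k_spread(buy_book, sell_book, k):
    """k-spread: the k-th lowest sell price minus the k-th highest buy
    price. k = 1 gives the ordinary bid-ask spread, k = 2 the second
    spread s2 used throughout this section."""
    buys = sorted(buy_book, reverse=True)    # best (highest) buy first
    sells = sorted(sell_book)                # best (lowest) sell first
    return sells[k - 1] - buys[k - 1]
```

For example, with buy book [10, 9, 8] and sell book [12, 12, 14], the spread is 2 and the second spread is 3.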
LEMMA 5.10. Let E be any sequence and let E' be a 1-modification of E. If volume(E) > 2, then |close(E) − close(E')| ≤ s2(E).

PROOF. We first deal with the case where the last execution occurs in both sequences simultaneously. By Theorem 5.6, the ask and the bid of E and E' are at most s2(E) apart at every time t. Since the price of the last execution is at their asks (bids) at time t, we are done. Next we deal with the case where the last execution among the two sequences occurs only in E. In this case either the previous execution happened simultaneously in both sequences at some time t, and thus all three executions are within the second spread of E at time t (the first execution in E by definition, the execution in E' by the same arguments as in the former case, and the third by Lemma 5.9); or the previous execution happened only in E' at time t, in which case the two executions are within the spread of E at time t (the execution of E' by the same arguments as before, and the execution in E must be inside its spread at time t). If the last execution happens only in E', we know that the next execution of E will be at most s2(E) away from its previous execution by Lemma 5.9. Together with the fact that an execution occurring in only one sequence implies that the order lies within the spread of the second sequence (as long as the sequences are 1-modifications of each other), the proof is completed.

5.2 Spread Bounds for k-Modifications

As in the case of executed volume, we would like to extend the absolute model stability results for price-based quantities to the case where multiple orders are modified. Here our results are weaker and depend on the k-spread, the distance between the kth highest buy order and the kth lowest sell order, instead of the second spread. (Looking ahead to Section 7, we note that in actual market data for liquid stocks, this quantity is often very small as well.) We use sk(E) to denote the k-spread. As before, we assume that the k-spread is
always defined after an initialization period. We first state the following generalization of Lemma 5.7.

LEMMA 5.11. Let E be a sequence and let E' be any 1-modification of E. For ℓ ≥ 1, if sℓ+1(E) is always defined after the change, then sℓ(E') ≤ sℓ+1(E).

The proof is similar to the proof of Lemma 5.7 and omitted. A simple application of this lemma is the following: let Eℓ be any sequence which is an ℓ-modification of E. Then we have s2(Eℓ) ≤ sℓ+2(E). Now using the above lemma and simple induction we can obtain the following theorem.

THEOREM 5.12. Let E be a sequence and let E' be any k-modification of E. Then

1. |lastbid(E) − lastbid(E')| ≤ Σℓ=1..k sℓ+1(E) ≤ k·sk+1(E)
2. |lastask(E) − lastask(E')| ≤ Σℓ=1..k sℓ+1(E) ≤ k·sk+1(E)
3. |close(E) − close(E')| ≤ Σℓ=1..k sℓ+1(E) ≤ k·sk+1(E)
4. |average(E) − average(E')| ≤ 2k(pmax + sk+1(E))/volume(E) + k·sk+1(E)

where sℓ(E) is the maximum of the ℓ-spread in E following the first modification.

We note that while these bounds depend on deeper measures of spread for more modifications, we are working in a 1-share order model. Thus in an actual market, where single orders contain hundreds or thousands of shares, the k-spread even for large k might be quite small and close to the standard 1-spread in liquid stocks.

6. RELATIVE MODEL INSTABILITY

In the relative model the underlying assumption is that traders try to exploit their knowledge of the books to strategically place their orders. Thus if a trader wants her buy order to be executed quickly, she may position it above the current bid to be first in the queue; if the trader is patient and believes that the price trend is downward, she will place orders deeper in the buy book, and so on. While in the previous sections we showed stability results for the absolute model, here we provide simple examples which show instability in the relative model for the executed volume, last bid, last ask,
average execution price and the last execution price. In Section 7 we provide many simulations on actual market data that demonstrate that this instability is inherent to the relative model, and not due to artificial constructions. In the relative model we assume that for every sequence the ask and bid are always defined, so the books have a non-empty initial configuration. We begin by showing that in the relative model, even a single modification can double the number of shares executed.

THEOREM 6.1. There is a sequence E and a 1-modification E' of E such that volume(E') ≥ 2·volume(E).

PROOF. For concreteness we assume that at the beginning the ask is 10 and the bid is 8. The sequence E is composed of n buy orders with Δ = 0, followed by n sell orders with Δ = 0, and finally an alternating sequence of buy orders with Δ = +1 and sell orders with Δ = −1 of length 2n. Since the books before the alternating sequence contain n + 1 sell orders at 10 and n + 1 buy orders at 8, each pair of buy and sell orders in the alternating part is matched and executed, but none of the initial 2n orders is executed, and thus volume(E) = n. Now we change the first buy order to have Δ = +1. After the first 2n orders there are still no executions; however, the books are different. Now there are n + 1 sell orders at 10, n buy orders at 9, and one buy order at 8. Now each order in the alternating sequence is executed against one of the former orders, and we have volume(E') = 2n.

The next theorem shows that the spread-based stability results of Section 5.1 do not carry over to the relative model. Before providing the proof, we give its intuition. At the beginning the sell book contains orders at only two prices, far apart, with two orders at each price; then several buy orders arrive. In the original sequence they are not executed, while in the modified sequence they are executed and leave the sell book with only the orders at the high price. Now many
sell orders followed by many buy orders will arrive, such that in the original sequence they are executed only at the low price, while in the modified sequence they are executed at the high price.

THEOREM 6.2. For any positive numbers s and x, there is a sequence E with s2(E) = s and a 1-modification E' of E such that

• |close(E) − close(E')| ≥ x
• |average(E) − average(E')| ≥ x
• |lastbid(E) − lastbid(E')| ≥ x
• |lastask(E) − lastask(E')| ≥ x

PROOF. Without loss of generality we consider sequences in which all prices are integer-valued, in which case the smallest possible value for the second spread is 1; we provide the proof for the case s2(E) = 2, but the s2(E) = 1 case is similar. We consider a sequence E such that after an initialization period there have been no executions, the buy book has 2 orders at price 10, and the sell book has 2 orders at price 12 and 2 orders at price 12 + y, where y is a positive integer that will be determined by the analysis. The original sequence E is a buy order with Δ = 0, followed by two buy orders with Δ = +1, then 2y sell orders with Δ = 0, and then 2y buy orders with Δ = +1. We first note that s2(E) = 2, there are 2y executions, all at price 12, the last bid is 11 and the last ask is 12. Next we analyze the modified sequence, in which the first buy order is changed from Δ = 0 to Δ = +1. Now the next two buy orders with Δ = +1 are executed, and afterwards we have that the bid is 11 and the ask is 12 + y. Now the 2y sell orders are accumulated at 12 + y, and after the next y buy orders the bid is at 12 + y − 1. Therefore, at the end we have that lastbid(E') = 12 + y − 1, lastask(E') = close(E') = 12 + y, and average(E') = 12 + y²/(y + 2). Setting y = x + 2, we obtain the claim for every quantity. We note that while this proof was based on the fact that there are two consecutive orders in the books which are far apart (y), we can provide a slightly more complicated example in which all orders are
close (at most 2 apart), yet still one change results in large differences.

7. SIMULATION STUDIES

The results presented so far paint a striking contrast between the absolute and relative price models: while the absolute model enjoys provably strong stability over any fixed event sequence, there exist at least specific sequences demonstrating great instability in the relative model. The worst-case nature of these results raises the question of the extent to which such differences could actually occur in real markets. In this section we provide indirect evidence on this question by presenting simulation results exploiting a rich source of real-market historical limit order sequence data. By interpreting arriving limit order prices either as absolute values, or by transforming them into differences with the current bid and ask (relative model), we can perform small modifications on the sequences and examine how different various outcomes (volume traded, average price, etc.) would be from what actually occurred in the market. These simulations provide an empirical counterpart to the theory we have developed. We emphasize that all such simulations interpret the actual historical data as falling into either the absolute or relative model, and are meaningful only within the confines of such an interpretation. Nevertheless, we feel they provide valuable empirical insight into the potential (in)stability properties of modern equity limit order markets, and demonstrate that one's belief or hope in stability largely relies on an absolute model interpretation. We also investigate the empirical behavior of mixtures of absolute and relative prices.

7.1 Data

The historical data used in our simulations is commercially available limit order data from INET, the previously mentioned electronic exchange for NASDAQ stocks. Broadly speaking, this data consists of practically every single event on INET regarding the trading of an individual stock: every arriving limit order
(price, volume, and sequence ID number), every execution, and every cancellation of a standing order, all timestamped in milliseconds. This data is sufficient to recreate the precise INET order book in a given stock on a given day and time. We will report stability properties for three stocks: Amazon, Nvidia, and Qualcomm (identified in the sequel by their tickers, AMZN, NVDA and QCOM). These three provide some range of liquidities (with QCOM having the greatest and NVDA the least liquidity on INET) and other trading properties. We note that the qualitative results of our simulations were similar for several other stocks we examined.

7.2 Methodology

For our simulations we employed order-book reconstruction code operating on the underlying raw data. The basic format of each experiment was the following:

1. Run the order book reconstruction code on the original INET data and compute the quantity of interest (volume traded, average price, etc.).

2. Make a small modification to a single order, and recompute the resulting value of the quantity of interest.

In the absolute model case, Step 2 is as simple as modifying the order in the original data and re-running the order book reconstruction. For the relative model, we must first pre-process the raw data and convert its prices to relative values, then make the modification and re-run the order book reconstruction on the relative values. The type of modification we examined was extremely small compared to the volume of orders placed in these stocks: namely, the deletion of a single randomly chosen order from the sequence. Although a deletion is not a 1-modification, its edit distance from the original sequence is 1 and we can apply Theorem 5.4. For each trading day examined, this single deleted order was selected among those arriving between 10 AM and 3 PM, and the quantities of interest were measured and compared at 3 PM. These times were chosen to include the busiest part of the trading day but avoid the half hour around the opening and
closing of the official NASDAQ market (9:30 AM and 3:30 PM respectively), which are known to have different dynamics than the central portion of the day. We ran the absolute and relative model simulations on both the raw INET data and on a "cleaned" version of this data. In the "cleaned" version we remove all limit orders that were canceled in the actual market prior to their execution (along with the cancellations themselves). The reason is that such cancellations may often be the first step in the "repositioning" of orders, that is, a cancellation followed by the submission of a replacement order at a different price. Not removing canceled orders allows the possibility of modified simulations in which the "same" order1 is executed twice, which may magnify instability effects. Again, it is clear that neither the raw nor the "cleaned" data can perfectly reflect "what would have happened" under the deleted orders in the actual market. However, the results from the raw data and the cleaned data are qualitatively similar. They mainly differ, as expected, in the executed volume, where the instability results for the relative model are much more dramatic in the raw data.

7.3 Results

We begin with summary statistics capturing our overall stability findings. Each row of the tables below contains a ticker (e.g.
AMZN) followed by either -R (for the uncleaned or raw data) or -C (for the data with canceled orders removed). For each of the approximately 250 trading days in 2003, 1000 trials were run in which a randomly selected order was deleted from the INET event sequence. For each quantity of interest (volume executed, average price, closing price and last bid), we show for both the absolute and relative models the average percentage change in the quantity induced by the deletion.

1 Here "same" is in quotes since the two orders will actually have different sequence ID numbers, which is what makes such repositioning activity impossible to reliably detect in the data.

The results confirm rather strikingly the qualitative conclusions of the theory we have developed. In virtually every case (stock, raw or cleaned data, and quantity) the percentage change induced by a single deletion in the relative model is many orders of magnitude greater than in the absolute model, and shows that indeed "butterfly effects" may occur in a relative model market. As just one specific representative example, notice that for QCOM on the cleaned data, the relative model effect of just a single deletion on the closing price is in excess of a full percentage point. This is a variety of market impact entirely separate from the more traditional and expected kind generated by trading a large volume of shares. In Figure 4 we examine how the change to one of the quantities, the average execution price, grows with the introduction of greater perturbations of the event sequence in the two models. Rather than deleting only a single order between 10 AM and 3 PM, in these experiments a growing number of randomly chosen deletions was performed, and the percentage change to the average price measured. As suggested by the theory we have developed, for the absolute model the change to the average price grows linearly with the number of deletions and remains very small (note the vastly different
scales of the y-axis in the panels for the absolute and relative models in the figure). For the relative model, it is interesting to note that while small numbers of changes have large effects (often causing average execution price changes well in excess of 0.1 percent), the effect of large numbers of changes levels off quite rapidly and consistently.

Figure 4: Percentage change to the average execution price (y-axis) as a function of the number of deletions to the sequence (x-axis). The left panel is for the absolute model, the right panel for the relative model, and each curve corresponds to a single day of QCOM trading in June 2004. Curves represent averages over 1000 trials.

We conclude with an examination of experiments with a mixture model. Even if one accepts a world in which traders behave in either an absolute or relative manner, one would be likely to claim that the market contains a mixture of both. We thus ran simulations in which each arriving order in the INET event streams was treated as an absolute price with probability a, and as a relative price with probability 1 − a. Representative results for the average execution price in this mixture model are shown in Figure 5 for AMZN and NVDA. Perhaps as expected, we see a monotonic decrease in the percentage change (instability) as the fraction of absolute traders increases, with most of the reduction already being realized by the introduction of just a small population of absolute traders. Thus even in a largely relative-price world, a small minority of absolute traders can have a greatly stabilizing effect. Similar behavior is found for the closing price and last bid.

Figure 5: Percentage change to the average execution price (y-axis) vs. probability of treating arriving INET orders as absolute prices (x-axis). Each curve corresponds to a single day of trading during a month of 2004. Curves represent averages over 1000 trials.

For the executed volume in the mixture model, however, the findings are more curious. In Figure 6, we show how the percentage change to the executed volume varies with the absolute trader fraction a, for NVDA data both raw and cleaned of cancellations. We first see that for this quantity, unlike the others, the difference induced by the cleaned and uncleaned data is indeed dramatic, as already suggested by the summary statistics table above. But most intriguing is the fact that stability is not monotonically increasing with a for either the cleaned or uncleaned data: the market with maximum instability is not a pure relative-price market, but occurs at some nonzero value of a. It was in fact not obvious to us that sequences with this property could even be artificially constructed, much less that they would occur as actual market data. We have yet to find a satisfying explanation for this phenomenon and leave it to future research.
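The relative-model fragility documented above is already visible in the tiny construction of Theorem 6.1, which can be replayed with a toy matcher. The sketch below is our own illustration, not the paper's simulation code; following the proof's narrative, a buy order is priced at the current bid + Δ, a sell order at the current ask + Δ, and a crossing order trades at the standing order's price.

```python
import heapq

def run_relative(orders, init_buys, init_sells):
    """Replay ('buy'/'sell', delta) orders in the relative model.

    The relative model assumes the bid and ask are always defined, so
    the books start from a non-empty initial configuration. Returns the
    number of executions (the executed volume, in 1-share orders)."""
    buys = [-b for b in init_buys]          # max-heap via negation
    heapq.heapify(buys)
    sells = list(init_sells)                # min-heap
    heapq.heapify(sells)
    volume = 0
    for side, delta in orders:
        if side == 'buy':
            price = -buys[0] + delta        # current bid + delta
            if price >= sells[0]:           # crosses the ask: execute
                heapq.heappop(sells)
                volume += 1
            else:
                heapq.heappush(buys, -price)
        else:
            price = sells[0] + delta        # current ask + delta
            if price <= -buys[0]:           # crosses the bid: execute
                heapq.heappop(buys)
                volume += 1
            else:
                heapq.heappush(sells, price)
    return volume

# Theorem 6.1's sequence: initial book has bid 8 and ask 10; E' changes
# only the first buy order's offset from 0 to +1, doubling the volume.
n = 5
E  = [('buy', 0)] * n + [('sell', 0)] * n + [('buy', +1), ('sell', -1)] * n
Ep = [('buy', +1)] + E[1:]                  # a 1-modification of E
# run_relative(E, [8], [10]) = n, run_relative(Ep, [8], [10]) = 2n
```

The single changed offset shifts the bid from 8 to 9 before the alternating phase, so every one of the 2n alternating orders crosses the book instead of every second one, which is exactly the butterfly effect the simulations measure on real INET data.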